Games and the Brain

This week I attended the first-ever meeting of the Entertainment Software and Cognitive Neurotherapeutics Society at the UCSF Mission Bay Conference Center.

There were over 200 people registered. Cognitive neuroscientists made up the majority of the audience, but there were a few token representatives from industry (video game and health) and the government.

I learned from the conference that plasticity is a key concept in cognitive neuroscience: it is only by virtue of the fact that the brain can be rewired that learning can take place. Perceptual learning is the process through which a subject’s reactions to a given stimulus transform over the course of repeated exposure to it.

I’ve attended several conferences on video game technology and health care: Cybertherapy, Medicine Meets Virtual Reality, and Games for Health. At these conferences, the presentations tend to spend a lot of time highlighting the specific technologies being used. How the technology actually works to affect health remains largely inside the black box, couched in terms that allow the audience to imagine what might actually be going on in the mind.

What was striking about ESCoNS was that the technology was almost invisible: most of the presentations attended very closely to their theories of cognitive action, to the point where, as a layperson, it was often a challenge to tell what the researchers were demonstrating through their data.

Many of the projects, it seemed, used the term “entertainment software” very loosely: the stimuli being used to test, say, the roles of attention and reward in perceptual learning (Takeo Watanabe and Yuka Sasaki, Boston University) were often standardized computer-generated cognitive tests, such as selecting which short video of moving pixels demonstrated coherent motion. John Jonides (University of Michigan) likewise used standardized cognitive tests in his study of improving fluid intelligence through cognitive training, such as the n-back working-memory task.
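For readers who, like me, hadn’t encountered it before: the n-back task presents a stream of items and asks, at each step, whether the current item matches the one shown n steps earlier. Here is a minimal sketch of that logic in Python; the letters, trial count, and console interface are my own illustrative choices, not Jonides’ actual protocol:

```python
import random

def run_n_back(n=2, trials=20, alphabet="ABCDEFGH"):
    """Minimal sketch of an n-back working-memory task.

    On each trial the subject sees a letter and must say whether it
    matches the letter shown n trials earlier. Real studies use timed
    presentation and adaptive difficulty; this only shows the logic.
    """
    stimuli = [random.choice(alphabet) for _ in range(trials)]
    correct = 0
    for i, letter in enumerate(stimuli):
        is_match = i >= n and stimuli[i - n] == letter
        answer = input(f"Trial {i + 1}: {letter}  match? (y/n) ")
        if (answer.strip().lower() == "y") == is_match:
            correct += 1
    print(f"Accuracy: {correct / trials:.0%}")

if __name__ == "__main__":
    run_n_back()
```

Dry as this is as “entertainment,” the appeal for researchers is obvious: every variable in the task is explicit and controllable.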

But one of the hallmarks of entertainment software is precisely that it is interactive, and yet the interactive element in these research studies was merely choosing between this or that stimulus. The procedural element of software was not being tested, but rather utilized to capture data. I would be more inclined to call this “research software” than “entertainment software.”

Talking to some researchers from the Brain Plasticity Institute during a break, I learned that utilizing entertainment software in scientific work is a major challenge. If your aim is to demonstrate the effects of a particular kind of cognitive training (that is, repeatedly performing a perceptual task over a given time period) on the brain/mind, then, for the sake of producing good science, it is imperative to control for as many variables as possible and to use tools that are accepted by the field.

Creating and promoting such tools is the goal of the new NIH Toolbox, presented by Molly Wagster. The Toolbox includes many pieces of peer-validated software that NIH would like researchers to use in their studies in order to produce comparable, reliable data. The Toolbox is divided into Cognition, Emotion, Motor, and Sensation assessment tools. Each of these areas is further divided into subdomains, each with a battery of tests. For example, the cognition domain has tests for executive function, episodic memory, language, processing speed, attention, and working memory.

Again, while each of these assessments is delivered as software, I think one would be hard-pressed to call it “entertainment software.”

Only one paper explicitly dealt with entertainment software. University of Rochester professor Daphne Bavelier’s work on action gaming suggests that people who play action video games, as opposed to other games, have better vision (they can differentiate images at low contrast), are better able to multi-task, switch tasks, and learn language, and are better able to attend carefully to their world and to learn to perform tasks faster in general.

Unfortunately, however, a recent review of Bavelier’s work published in the journal Nature suggests that there are serious flaws in these and similar studies:

Most of the studies compare the cognitive performances of expert gamers with those of non-gamers, and suffer from well-known pitfalls of experimental design. The studies are not blinded: participants know that they have been recruited because they have gaming expertise, which can influence their performance, because they are motivated to do well and prove themselves. And the researchers know which participants are in which group, so they can have preconceptions that might inadvertently affect participants’ performance.

A more rigorous methodology is used in training studies, such as those conducted by Green and Bavelier, in which non-gamers are randomly assigned to one of two groups. One group is trained on an action video game, and the other on a different type of game, such as the slower-paced block-rearrangement task Tetris. Their performance on a cognitive task is measured before and after game training.

But these studies, too, have shortcomings. The researchers usually assume that the placebo effects, wherein subjects improve because they expect to improve, will be comparable between the two groups. In fact, each group of participants might predict that their particular training will lead to improved performance in different types of tasks, causing a differential in the placebo effect.

The studies’ results could also be confounded if one of the games more closely resembles the cognitive task being measured than does the other — a factor that is rarely taken into account by researchers.

Therefore, there seem to be serious difficulties in conducting research using existing games, given that games are complex objects: they are both stimuli and cultural entities carrying particular meanings, meanings that researchers may project onto players and that players themselves may be sensitive to.

Some of the researchers I spoke with suggested that a growing area of research will involve making and modifying games such that minute differences between them become the variable for measurement. This sounds like an interesting challenge for game development: how to subtly incorporate cognitive tests into game play, as sketched below. Certainly, none of the NIH Toolbox tests would be ready to import as-is, but they could perhaps be creatively adapted.
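To make the idea concrete, here is a minimal sketch, with entirely hypothetical names and parameters, of what such a study design might look like in code: two builds of the same game that differ in exactly one variable, with participants randomly assigned to one build or the other:

```python
import random
from dataclasses import dataclass

@dataclass
class GameConfig:
    """One build of a hypothetical study game; everything is held
    constant except the single variable under measurement."""
    scroll_speed: float   # pixels per frame
    distractors: int      # number of on-screen distractor sprites
    reward_sound: bool    # whether correct actions play a chime

# Two variants differing only in the manipulated variable (here,
# attentional load via distractor count); all names and values are
# illustrative, not drawn from any study presented at the conference.
CONTROL = GameConfig(scroll_speed=2.0, distractors=2, reward_sound=True)
TREATMENT = GameConfig(scroll_speed=2.0, distractors=8, reward_sound=True)

def assign_condition(participant_id: str) -> GameConfig:
    """Randomly assign a participant to one variant, logging the
    assignment so pre/post cognitive scores can be compared by group."""
    condition = random.choice([CONTROL, TREATMENT])
    print(f"{participant_id} -> {condition}")
    return condition
```

The point is that the game itself becomes the experimental apparatus, rather than a black box whose effects are measured only from the outside.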

This would help to fulfill what was clearly a dream of the conference: creating engaging mechanisms through which people will actually want to perform the repetitive tasks that cognitive neuroscientists strongly believe can remake the mind for the better. While games like Brain Age claim to promote cognition, they do so without a very strong scientific footing, at least from the perspective of neuroscience.
