New Cognitive Training Study Takes on the Critics
By Scott Barry Kaufman, republished from Scientific American
Brain training: yay or nay?
It’s not so simple.
Traversing the swamp of studies on cognitive training is bound to give even the boldest explorer a migraine.
But don’t despair, I’m here to help guide you along!
As we all know, people differ quite a bit from one another in how much information they can maintain, manipulate, and transform in their heads at one time. Crucially, these differences relate to important outcomes, such as abstract reasoning, academic performance, reading comprehension, and the acquisition of new skills.
The most consistent and least controversial finding in the literature is that working memory training programs produce reliable short-term improvements in both verbal and visuospatial working memory skills. On average, the effect sizes range from moderate to large, although the long-term sustainability of these effects is much more ambiguous. These effects are called near transfer effects, because they don’t transfer very far beyond the trained domain of cognitive functioning.
Far more controversial (and far more interesting) are far transfer effects. One particular class of far transfer effects that cognitive psychologists are especially interested in is those that show increases in fluid intelligence: the deliberate but flexible control of attention to solve novel "on the spot" problems that cannot be performed by relying exclusively on previously learned habits, schemas, and scripts.
Here is where we enter the swamp.
Some studies have reported absolutely no effect of working memory training on fluid intelligence, whereas others have found an effect. The results are mixed and inconclusive. Various critics have rightfully listed a number of methodological flaws and alternative explanations that could explain the far transfer effects.
Enter Susanne Jaeggi and her colleagues, who, in a brand-new study, address the critics head on (and then some). Through careful consideration of their study design, they attempted to resolve the primary methodological concerns of previous research. First, they randomly assigned adults to either (a) engage in working memory training or (b) answer trivia questions presented in a multiple-choice format. This latter condition served as their active control group. The lack of such a group has been a major criticism in the past: without an active control group, it's possible that far transfer effects are due to placebo effects or even a Hawthorne effect.
Their working memory intervention consisted of 4 weeks of training on an "adaptive n-back task." This task requires fast updating of information in working memory, and the program adapts to the performance of the participant. On each trial, participants have to remember the information presented the time before (1-back), the time before last (2-back), the time before that (3-back), etc. They administered two versions: an auditory version involving spoken letters, and an auditory + visuospatial version, in which both spoken letters and spatial locations had to be processed simultaneously.
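The core logic of an n-back task can be sketched in a few lines: on each trial, the correct response is "match" whenever the current stimulus equals the one presented n trials earlier, and a simple staircase rule raises or lowers n based on block accuracy. This is a minimal illustrative sketch; the accuracy thresholds and letter stimuli below are my own assumptions, not the actual parameters used in Jaeggi and colleagues' training program.

```python
def correct_responses(stimuli, n):
    """For an n-back block, the correct response on trial i (i >= n) is
    'match' (True) when stimuli[i] equals stimuli[i - n], else 'no match'."""
    return [stimuli[i] == stimuli[i - n] for i in range(n, len(stimuli))]

def adapt_level(n, accuracy, raise_at=0.9, lower_at=0.7):
    """Illustrative adaptive staircase: raise n after a high-accuracy block,
    lower it (never below 1) after a poor one, otherwise keep it unchanged.
    The 0.9 / 0.7 thresholds are assumptions for the sketch."""
    if accuracy >= raise_at:
        return n + 1
    if accuracy < lower_at:
        return max(1, n - 1)
    return n

# Example: a 2-back block over spoken-letter stimuli.
block = ["C", "A", "C", "H", "C", "H", "Q", "H"]
print(correct_responses(block, 2))
# -> [True, False, True, True, False, True]
# (trial 3: C matches the C two back; trial 4: H does not match A; etc.)
```

A participant who answered every trial of that block correctly (accuracy 1.0) would move up to 3-back on the next block; one scoring below 0.7 would drop back to 1-back. This is what makes the task "adaptive": difficulty tracks performance.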
Crucially, the researchers also administered multiple measures of cognitive ability. This included measures of visuospatial reasoning (the ability to consciously detect complex visual patterns and rotate images in the mind) and verbal reasoning (the ability to comprehend sentences, make verbal inferences, and solve verbal analogies). This matters because any single test is an imperfect indicator: if you want to measure a construct such as visuospatial or verbal reasoning, you need to administer multiple indicators of that ability.
The researchers also considered the role of individual differences in the effectiveness of working memory training. People differ substantially from each other in their motivation to engage in cognitive training, as well as their need for cognition (enjoyment of engaging with cognitive complexity). People also differ in the extent to which they have a growth mindset (i.e., believe that intelligence can be modified by experience). The researchers measured all of these personal characteristics.
Finally, the researchers assessed the long-term effectiveness of their training, by including a follow-up measurement three months after completion of training.
What did they find?
Even after addressing the major criticisms of their past work, Jaeggi and colleagues still found far transfer. In particular, they found far transfer to visuospatial reasoning when people engaged in working memory training. In contrast, no effects were found when people were trained on trivia knowledge (the active control group). These effects were found despite using a working memory task that did not involve visuospatial stimuli, suggesting that the working memory training effect on visuospatial reasoning was independent of content.
They propose that a crucial mental mechanism that might have accounted for their effects is the discrimination between relevant and irrelevant stimuli. Their n-back working memory task requires ignoring distracting stimuli and quickly and efficiently focusing on the most relevant stimuli to accomplish the task. They argue that their measures of visuospatial reasoning also required that skillset. This is important, because other recent research that failed to find far transfer effects administered "complex working memory span tasks" that do not place the same demands on attention. Jaeggi and colleagues suggest that part of the reason for the inconsistency across studies might have to do with the particular working memory task that is used during training.
While the researchers found only small far transfer effects on verbal reasoning, they note that the reliabilities of their verbal tasks were significantly lower than the reliabilities of their visuospatial reasoning tasks. They also acknowledge other explanations for the visuospatial/verbal reasoning discrepancy, such as the possibility that people have less practice with spatial tasks than verbal ones, and therefore more room for improvement. Of course, it's also possible that working memory training simply transfers more to visuospatial reasoning than to verbal reasoning (other labs have also found that to be the case).
In terms of the long-term effectiveness of their training, they found no significant effect at a three-month follow-up. The researchers offer sensible caution in interpreting this finding.
What about individual differences? Here, things get even more interesting.
First, they found that people who have a growth mindset about intelligence (believe that intelligence is malleable) showed greater improvement on the visuospatial reasoning tests than those who have a fixed mindset about intelligence (believe intelligence can't change). This effect was found, however, only in the active control group. In other words, those who believed intelligence can change showed a greater placebo effect than those who think intelligence is fixed. This finding highlights the importance of using an active control group of people who hold a wide range of beliefs about intelligence. Without such a group, some of the far transfer findings can be accounted for by people's beliefs about the malleability of intelligence. Nevertheless, this finding doesn't negate the far transfer effects found in the working memory training condition, which held even after taking into account a person's implicit theories about intelligence.
Second, the researchers found that intrinsic motivation mattered. Those who completed the study reported relatively stable engagement levels throughout the four weeks of training. In contrast, those who did not complete the study reported gradually declining engagement levels over the course of the intervention. As the researchers note, this raises the intriguing question: who actually signs up for these darn cognitive training studies, and who sticks with it for the entire four-week period?
Their data provides some hints. On the one hand, those who signed up for the study reported that they have more cognitive deficits in their lives than those who completed the pretest but dropped out of training. However, those with the highest pretest scores and the highest need for cognition scores ended up being the ones who actually completed the training!
Therefore, it seems as though the kind of person who is most likely to want to engage with cognitive training and stick with it for the entire regime is someone with a combination of (a) already high working memory, (b) high need for cognition, and (c) self-perceived cognitive deficits. The troubling implication here is that those who are most likely to complete these cognitive training studies are not the ones who need it the most.
Which brings us to perhaps the biggest challenge for cognitive training researchers: to get the cognitive training in the hands of those who need it the most, and keep them engaged throughout the entire intervention. Because let’s face it, these working memory tasks are boring. And for those with low working memory and little desire to engage in heavy cognitive manipulation of a random stream of letters and symbols, these interventions can be downright frustrating.
To be fair to Jaeggi, in her prior research she has tried to make the working memory tasks more fun for children by turning them into video games. But there's still a long way to go on the whole cognitive-training-is-super-fun front. Also, if you really want to increase cognitive ability, I'm not convinced that working memory training is the best way forward. It seems to me that working memory training is best suited to improving working memory. But to truly increase fluid intelligence over the long haul, I'm a bigger fan of addressing those skills directly, through long-term engagement in logical and critical thinking. Indeed, recent research by Silvia Bunge and colleagues has found that engagement in reasoning training not only improves subsequent reasoning performance, but also strengthens connectivity between key areas of the brain (in the frontal and parietal lobes) associated with complex cognition.
By this point, I hope you can see some of the complexities involved in this kind of research, and why it’s not simply a matter of “brain games are bogus.” In my view, the field really needs to evolve beyond the search for broad conclusions to look at more nuanced effects, including a consideration of different intervention programs, and multiple environmental and personal factors.
While these findings by Jaeggi and colleagues probably raise more questions than they answer, that is how science works. They are to be commended for systematically attempting to address the critics and to get the science right. Thankfully researchers like them exist, because this research is immensely important. We owe it to those who really could benefit from cognitive training (such as children with specific learning disabilities and children growing up in stressful, intellectually impoverished conditions) to get the science right, and get them the help they truly need to flourish.
© 2013 Scott Barry Kaufman, All Rights Reserved
About the Author: Scott Barry Kaufman is a cognitive psychologist interested in the development of intelligence and creativity. In his latest book, Ungifted: Intelligence Redefined, he presents a new theory of human intelligence that he hopes will help all people realize their dreams. Follow him on Twitter: @sbkaufman.