A variety of research methods concepts are illustrated in this NPR story, Brain Games Fail a Big Scientific Test. Let's go through the journalist's story, connecting the material to course concepts.
The article starts with the conclusion: Brain games do not make you smarter.
That's the conclusion of an exhaustive evaluation of the scientific literature on brain training games and programs. It was published Monday in the journal Psychological Science in the Public Interest.
"It would be really nice if you could play some games and have it radically change your cognitive abilities," Simons says. "But the studies don't show that on objectively measured real-world outcomes."
a) When the journalist writes, "exhaustive evaluation of the scientific literature," what kind of journal article is he probably describing: an empirical journal article or a review journal article (Ch. 2)?
Next, here's the backstory of the journal article in question:
In October 2014, more than 70 scientists published an open letter objecting to marketing claims made by brain training companies.
Pretty soon, another group, with more than 100 scientists, published a rebuttal saying brain training has a solid scientific base. ...
In an effort to clarify the issue, Simons and six other scientists reviewed more than 130 studies of brain games and other forms of cognitive training. The evaluation included studies of products from industry giant Lumosity....
After combining the results of all the studies (and, notably, after separating the well-conducted studies from the poorly conducted ones), the researchers concluded that brain games do not make you smarter. Here's a statement of the results:
There were some good studies, Simons says. And they showed that brain games do help people get better at a specific task. "You can practice, for example, scanning baggage at an airport and looking for a knife," he says. "And you get really, really good at spotting that knife."
But there was less evidence that people got better at related tasks, like spotting other suspicious items, Simons says. And there was no strong evidence that practicing a narrow skill led to overall improvements in memory or thinking.
That's disappointing, Simons says, because "what you want to do is be better able to function at work or at school."
b) This example shows the theory-data cycle in action (Ch. 1). What were the two competing theories this literature review was testing? How do the data support one theory and lead us to reject the other one?
Next, as the researchers collected the studies, they found that some of them were flawed in design:
The scientists found that "many of the studies did not really adhere to what we think of as the best practices," Simons says. Some of the studies included only a few participants. Others lacked adequate control groups or failed to account for the placebo effect, which causes people to improve on a test simply because they are trying harder or are more confident.
c) How might you design a really strong study to test the effects of a brain training game? What controls would be necessary?
d) What is the downside to including only a few participants in a study? (Ch. 10)
a) This was almost certainly a review journal article.
b) One theory said that brain-training games improve your general intelligence; the other theory said that brain-training games do not improve general intelligence. The review of all the studies combined is consistent with the second theory.
c) Your study designs will vary.
d) The main downside of conducting a study with only a few participants is that small samples are more likely to produce unusual or extreme results: a few outliers in the data can have an outsized influence on the overall pattern. Studies based on larger samples are usually more stable, and we can be more confident that their results will replicate.
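The small-sample point in answer (d) can be demonstrated with a quick simulation. The sketch below (an illustration I am adding, not part of the original article or review) draws many hypothetical "studies" from the same population, where the true effect is zero, and compares how widely the observed results scatter for small versus large sample sizes:

```python
import random
import statistics

random.seed(42)  # fixed seed so the demonstration is reproducible

def spread_of_study_results(sample_size, n_studies=1000):
    """Simulate many studies sampling from one population (standard
    normal, true mean = 0) and return how widely the studies'
    observed means scatter around that true value."""
    observed_means = [
        statistics.mean(random.gauss(0, 1) for _ in range(sample_size))
        for _ in range(n_studies)
    ]
    return statistics.stdev(observed_means)

small = spread_of_study_results(sample_size=10)   # e.g., 10-person studies
large = spread_of_study_results(sample_size=250)  # e.g., 250-person studies

print(f"Scatter of results across 10-person studies:  {small:.3f}")
print(f"Scatter of results across 250-person studies: {large:.3f}")
# The 10-person studies scatter far more widely around the true value,
# so any single small study is more likely to land on an extreme,
# misleading result -- exactly the concern raised about the flawed
# brain-training studies.
```

Even though every simulated study samples the same population with no real effect, the small-sample studies occasionally produce results that look impressively large, which is why the reviewers weighed sample size when separating the stronger studies from the weaker ones.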