Posit Science: Does It Help?

Tim Lundeen pointed me to the website of Posit Science, which sells ($10/month) access to a bunch of exercises that supposedly improve various brain functions, such as memory, attention, and navigation. I first encountered Posit Science at a booth at a convention for psychologists about five years ago. They had reprints available. I looked at a study published in the Proceedings of the National Academy of Sciences. I was surprised at how weak the evidence was that their exercises helped.

Maybe the evidence has improved. Under the heading “world class science” the Posit Science website emphasizes a few of the 20-odd published studies. First on their list of “peer-reviewed research” is “the IMPACT study”, which has its own web page.

With 524 participants, the IMPACT study is the largest clinical trial ever to examine whether a specially designed, widely available cognitive training program significantly improves cognitive abilities in adults. Led by distinguished scientists from Mayo Clinic and the University of Southern California, the IMPACT study proves that people can make statistically significant gains in memory and processing speed if they do the right kind of scientifically designed cognitive exercises.

The study compared a few hundred people who got the Posit Science exercises with a few hundred people who got an “active control” treatment that is poorly described. It is called “computer-based learning”. I couldn’t care less that people who spend an enormous amount of time doing laboratory brain tests (1 hour/day, 5 days/week, 8-10 weeks) thereby do better on other laboratory brain tests. I wanted to know if the laboratory training produced improvement in everyday life. This is what most people want to know, I’m sure. The study designers seem to agree. The procedure description says “to be of real value to users, improvement on a training program must generalize to improvement on real-world activities”.

On the all-important question of real-world improvement, the results page said very little. I looked for the published paper. I couldn’t find it on the website. Odd. I found it on Scribd.

Effect of the training on real-world activities was measured like this:

The CSRQ-25 consists of 25 statements about cognition and mood in everyday life over the past 2 weeks, answered using a 5-point Likert scale.

Mood? Why was that included? In any case, the training group started with an average score of 2.23 on the CSRQ-25. After training, they improved by 0.07 (significantly more than the control group). Not only is that a tiny improvement (percentage-wise), it is also unclear what it means. The measurement scale is not well described. Was the range of possible answers 1 to 5? Or 0 to 4? What does 2 mean? What does 3 mean? It is clear, however, that on a scale where the greatest possible improvement was either 1.23 (assuming 1 was the best possible score) or 2.23 (assuming 0 was the best possible score), the actual improvement was 0.07. Not much for 50-odd hours of practice.

Although the website seems proud of the large sample size (“largest clinical trial ever”), it is now clear why the sample was so large: with a smaller sample the tiny real-world improvement would have been undetectable. Because the website treats this as the best evidence, I assume the other evidence is even less impressive. The questions about mood are irrelevant to the website’s claims, which are all about cognition. Why weren’t the mood questions removed from the analysis? It is entirely possible that, had the mood questions been removed, the training would have produced no improvement.
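To see just how small 0.07 is relative to the room for improvement, here is the arithmetic from the paragraph above as a short Python sketch. The baseline of 2.23 and the gain of 0.07 come from the study; the two possible scale floors (1 or 0) are the two assumptions discussed above, since the paper does not say which is correct:

```python
# Illustrative arithmetic only; baseline and gain are from the IMPACT paper,
# the two candidate scale floors are assumptions (the paper is unclear).
baseline = 2.23   # training group's average CSRQ-25 score before training
gain = 0.07       # reported average improvement after training

for best_score in (1.0, 0.0):          # best possible score under each assumption
    max_gain = baseline - best_score   # greatest possible improvement
    pct = 100 * gain / max_gain
    print(f"floor {best_score}: {gain} of {max_gain:.2f} possible = {pct:.1f}%")
```

Either way, the gain is only a few percent of the improvement that was possible.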

The first author of the IMPACT study is Glenn Smith, who works at the Mayo Clinic. I emailed him to ask (a) why the assessment of real-world effects included questions about mood and (b) what happens if the mood questions are removed. I predict he won’t answer. A friend predicts he will.

More questions for Posit Science

5 Responses to “Posit Science: Does It Help?”

  1. Alex Chernavsky Says:

    See also this article from the New York Times (Oct. 31):

    The Brain Trainers

    I’ve only had time to read the first few paragraphs, but Posit Science is mentioned in the article (along with some other companies that are similar).

    Seth: The NY Times article suggests that parents with learning-disabled children will pay enormous prices for training (e.g., $10,000/year) and that the training does help. That’s a considerably different target population than the one aimed at by the Posit Science website: old people.

  2. L Says:

    @ Professor Roberts: Maybe you can start a website to sell your math test to professors who need to measure brain function.

  3. L Says:

    @ Professor Roberts: Can you post Dr Smith’s email address? I can’t find it and I just spent one hour looking for it.

    Seth: smitg@mayo.edu

  4. Henry Mahncke Says:

    Dear Dr. Roberts,

    I’m the CEO of Posit Science, and an author on the IMPACT papers – I saw your blog post and thought I could shed some light on your questions. Generally, this information is covered in our publications, but was too detailed to have on our web pages. They are great questions!

    1) Active Control: We picked an active control that was comparable to what doctors currently recommend – which is to stay cognitively active. People in this group learned from educational DVDs (like Cosmos, History of Art, and so forth) and took quizzes each day on what they learned. This was a good control for our study because we could match the time spent on cognitive training, and people enrolled in this arm believed that this activity could help their cognitive function – so we could maintain the study blind for participants.

    2) Self-Report Measure: The CSRQ has 25 questions, with questions like “I have felt I have a good memory”, rated 1 (almost always) to 5 (hardly ever) over the past two weeks. In clinical trials like this, the best way to consider the magnitude of the effect is an effect size measure – we use the standard Cohen’s d measure. On this measure, the effect on standardized neuropsychological tests is about 0.25, which is equivalent to ~10 years of cognitive function. The effect size on the self-report measure was about the same, indicating that the improvement in objective measures of cognition matched the improvement in subjective measures – people really noticed the changes in themselves. There are a few mood questions on the CSRQ, because mood and cognition are related; however, repeating the analysis with the mood questions removed doesn’t affect the results – the effect is the same just considering the core cognition questions.

    We are proud of the clinical trials that we and our scientific collaborators have run – we think that this kind of evidence is crucial to the field of brain training, and we’re proud to have been involved in all of the largest and best executed studies in the field.

    Thanks for your interest in our work!

    Best regards,
    Henry Mahncke
    CEO, Posit Science

    Seth’s response to Dr. Mahncke: Thanks for your answers. I understand what you say about the control group. About the size of improvement on the all-important real-world measures, however, I disagree that “the standard Cohen’s d measure” is a good way to measure effect size. For an academic audience, maybe, but I am sure that the typical visitor to your website has no idea what that is. To convince me that your treatment reduced cognitive age by ten years, I would want to see three things: (a) All the questions of the CSRQ-25 so I could see what is being measured. (b) How the CSRQ-25 score (with the mood questions omitted) varies with age. You seem to say that the average score gets worse by 0.07 in 10 years. I am unsure that this is the case. I saw no evidence supporting that claim. (c) The effect of the IMPACT treatment on the score-versus-age function. It would be especially good to see the score versus age function separately for the two groups in the IMPACT study. A graph showing both functions (one for each group) would make it much clearer if your claim of 10 years improvement is reasonable.
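    For readers unfamiliar with the term in the exchange above: Cohen’s d is the difference between two group means divided by their pooled standard deviation. A minimal sketch of the calculation, using made-up scores rather than anything from the IMPACT data:

    ```python
    import statistics

    def cohens_d(group1, group2):
        """Cohen's d: difference of group means divided by the pooled standard deviation."""
        n1, n2 = len(group1), len(group2)
        var1, var2 = statistics.variance(group1), statistics.variance(group2)
        pooled_sd = (((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)) ** 0.5
        return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

    # Made-up scores, purely to show the calculation (not IMPACT data):
    training = [2, 4, 6]
    control = [1, 3, 5]
    print(cohens_d(training, control))  # prints 0.5
    ```

    A d of 0.25 means the groups differ by a quarter of a standard deviation – which is why, as Seth notes, a raw score difference like 0.07 on the questionnaire itself can be easier for a lay reader to judge.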

  5. ChristianKl Says:

    When it comes to brain training, I would like to see how Posit Science compares to Anki.
    Anki is also mentally challenging, and I would expect similar effects on the measures that the IMPACT study used.

    On the other hand Anki has the additional advantage of helping you learn knowledge.

    Seth: I agree, that would be a good comparison.