The Contribution of John Ioannidis

From an excellent Atlantic article about John Ioannidis, who has published several papers saying that medical research is far less reliable than you might think:

A different oak tree at the site provides visitors with a chance to try their own hands at extracting a prophecy. “I [bring] all the researchers who visit me here, and almost every single one of them asks the tree the same question,” Ioannidis tells me . . . “‘Will my research grant be approved?’”

A good point. I’d say his main contribution, based on this article, is pointing out the low rate of repeatability of major medical findings. Until someone actually calculated that rate, it was hard to know what it was unless you had inside experience. The rate turned out to be lower than a naive person might think. It was not lower than an insider might think, which explains the lack of disagreement:

David Gorski . . . noted in his prominent medical blog that when he presented Ioannidis’s paper on [lack of repeatability of] highly cited research at a professional meeting, “not a single one of my surgical colleagues was the least bit surprised or disturbed by its findings.”

I also like the way Ioannidis has emphasized the funding pressure that researchers face, as in that story about the oak tree. Obviously that pressure translates into pressure to get positive results, which translates into overstatement.

I also think his critique of medical research has room for improvement:

1. Black/white thinking. He talks in terms of right and wrong. (“We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine.”) This is misleading. There is signal in all that medical research he criticizes; it’s just not as strong a signal as the researchers claimed. In other words the research he says is “wrong” has value. He’s doing the same thing as all those meta-analyses that ignore all research that isn’t of “high quality”.

2. Nihilism (which is a type of black/white thinking). For example,

How should we choose among these dueling, high-profile nutritional findings? Ioannidis suggests a simple approach: ignore them all.

I’ve paid a lot of attention to health-related research and benefited greatly.  Many of the treatments I’ve studied through self-experimentation were based on health-related research. An example is omega-3. There is plenty of research suggesting its value and this encouraged me to try it. Likewise, there is plenty of evidence supporting the value of fermented foods. That evidence and many other studies (e.g., of hormesis) paint a large consistent picture.

3. Bias isn’t the only problem, but, in this article, he talks as if it is. Bias is a relatively minor problem: you can allow for it. Other problems you can’t allow for. One is the Veblenian tendency to show off. Thus big labs are better than small ones, regardless of which would make more progress. Big studies better than small, expensive equipment better than cheap, etc. And, above all, useless is better than useful. The other is a fundamental misunderstanding about what causes disease and how to fix it. A large fraction of health research money goes to researchers who think that studying this or that biochemical pathway or genetic mechanism will make a difference — for a disease that has an environmental cause. They are simply looking in the wrong place. I think the reason is at least partly Veblenian: To study genes is more “scientific” (= high-tech = expensive) than studying environments.

Thanks to Gary Wolf.

6 Responses to “The Contribution of John Ioannidis”

  1. Alex Chernavsky Says:

    I haven’t read the article about Ioannidis, but your blog post reminds me of this excellent paper on the placebo effect:

    Roberts, Alan H., Kewman, Donald G., Mercier, Lisa, & Hovell, Mel (1993). “The power of nonspecific effects in healing: Implications for psychosocial and biological treatments.” Clinical Psychology Review, 13(5), 375–391.

    Abstract:

    We evaluate the hypothesis that the power of nonspecific effects may account for as much as two thirds of successful treatment outcomes when both the healer and the patient believe in the efficacy of a treatment. Five medical and surgical treatments, once considered to be efficacious by their proponents but no longer considered effective based upon later controlled trials, were selected according to strict inclusion criteria. A search of the English literature was conducted for all studies published for each treatment area. The results of these studies were categorized, where possible, into excellent, good, and poor outcomes. For these five treatments combined, 40% excellent, 30% good, and 30% poor results were reported by proponents. We conclude that, under conditions of heightened expectations, the power of nonspecific effects far exceeds that commonly reported in the literature. The implications of these results in evaluating the relative efficacy of biological and psychosocial treatments are discussed.

  2. G Says:

    You can allow for bias!? Allowing for bias is non-bias. I think it is a huge problem.

    You might have your own (right or wrong) sense that you can surf the nutritional data and get your best result but I think that ‘ignore it all’ might be good advice for most people. The stuff you say people should eat (butter, pork fat…) has been shunned for years by people who paid attention to studies. They would have been better served by tradition and instinct than by attempting to grapple with masses of data and conflicting opinion even if they had the time and educational means.

  3. Seth Roberts Says:

    “They would have been better served by tradition and instinct.” I agree, if the right tradition is chosen. There are plenty of harmful traditions. The Chinese have a tradition of eating rice. Bad idea. They eat little animal fat (can’t afford it). Bad idea. And instinct, too, is a good guide to something but it isn’t always clear what. Why do we like sweet things? So we will eat more candy? No.

    Perhaps bias is a “huge” problem. I meant that it strikes me as a smaller problem than the two others I named.

  4. G Says:

    Seth, instincts can be clouded by habit. There’s a signal that tells you ‘enough chocolate, time to eat some lettuce’, but people binge on chocolate for dysfunctional reasons and dietary advice has very, very limited power to override these compulsions.

    I know someone who eats an incredible amount of sugar and is slim and healthy-looking. She is also a vegetarian. For all we know, she’s suited to that.

  5. Margaret Wilde Says:

    I think there’s a common reluctance to find environmental causes for degenerative conditions because drug companies fund so much research. You can’t ‘cure’ environmental causes by selling a pharmaceutical drug for the environment to take. Better to blame the victim/patient for the ill-health and then treat them with profitable drugs or other profitable procedures.

  6. fredafal Says:

    Another important contribution to bias not mentioned is the fact that once there is an accepted hypothesis, people look for confirmation, rather than refutation, which is required for a truly critical scientific method. One huge institutional impediment to a healthy balance of refutation is the peer review process employed by almost all journals. If there is not even any place to publish, and in turn debate, negative results and possible alternate hypotheses, how can our existing hypotheses be anything but biased?

    One of the last places unbiased debate was fostered was Medical Hypotheses, which did not practice standard peer review. However, Elsevier, the huge publishing house that publishes Medical Hypotheses, sacked the editor-in-chief and installed peer review because of the publication of a controversial scientific hypothesis about AIDS. Rather than let a scientific debate of the merits, or lack thereof, of the hypothesis proceed, this important forum for unbiased scientific debate was stifled for good.