
John Ioannidis has dedicated his life to quantifying how science is broken

Medical research is in bad shape. Fraud, bias, sloppiness, and inefficiency are everywhere, and we now have studies that quantify the size of the problem.

We know that about $200 billion — or the equivalent of 85 percent of global spending on research — is routinely wasted on poorly designed and redundant studies. We know that as much as 30 percent of the most influential original medical research papers later turn out to be wrong or exaggerated. We also know that a lot of medical evidence is contradictory and unreliable, such as those studies that purport to show that just about every food we eat either causes or prevents cancer.


What all this means, says Stanford University professor Dr. John Ioannidis, is that most published research findings are false.

If medical research is hopelessly flawed, Ioannidis is the superhero poised to save it. For the past two decades, the physician-academic has used meta-research — or research on research — to document the ways science veers away from the truth by way of bias, error, and outright fraud. (He was involved in most of the research cited above.)

He says he lives by the motto, "Advancing excellence in science." If he had a villain, it would be the broken information architecture of medicine.

He even has a mythical origin story. He was raised in Greece, the home of Pythagoras and Euclid, by physician-researchers who instilled in him a love of mathematics. By seven, he quantified his affection for family members with a "love numbers" system. ("My mother was getting 1,024.42," he said. "My grandmother, 173.73.") By 19, he won the Greek Mathematical Society's national award. He graduated at the top of his University of Athens School of Medicine class at 25, and it wasn't long before The Atlantic called him "one of the most influential scientists alive."


Ioannidis, who now co-directs the Meta-Research Innovation Center at Stanford, has received much of this acclaim for applying his mathematical powers to measure how and where science goes wrong. In his seminal paper, "Why Most Published Research Findings Are False," he developed a mathematical model to show how flawed the research process is: researchers run badly designed and biased experiments, too often chasing sensational and unlikely theories instead of more plausible ones, ultimately distorting the evidence base — and what we think we know to be true in fields like health care and medicine.
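At its core, that model is an application of Bayes' rule: the chance that a "statistically significant" finding is actually true — its positive predictive value (PPV) — depends on the pre-study odds that the tested hypothesis is true, the study's power, the significance threshold, and bias. A minimal sketch of the formula from the 2005 paper (the parameter names and the example numbers are mine):

```python
def ppv(R, power=0.8, alpha=0.05, bias=0.0):
    """Positive predictive value of a claimed research finding,
    per the model in "Why Most Published Research Findings Are False":

        PPV = ((1 - beta) * R + u * beta * R) /
              (R + alpha - beta * R + u - u * alpha + u * beta * R)

    R: pre-study odds that a tested relationship is true
    power = 1 - beta: probability of detecting a true effect
    alpha: significance threshold (false-positive rate)
    bias (u): fraction of would-be-null analyses that bias
              nonetheless turns into "positive" findings
    """
    beta = 1.0 - power
    num = (1 - beta) * R + bias * beta * R
    den = R + alpha - beta * R + bias - bias * alpha + bias * beta * R
    return num / den

# A well-powered field where 1 in 10 tested hypotheses is true:
print(round(ppv(R=0.1), 3))             # 0.615 -> most positives hold up
# An exploratory field (1 in 100 true) with moderate bias:
print(round(ppv(R=0.01, bias=0.2), 3))  # 0.034 -> most findings are false
```

With no bias and unrealistically high prior odds, significant results are usually real; drop the prior odds to the level of speculative, hypothesis-generating research and add even modest bias, and the model says most published positives are false — the paper's title claim.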

When that paper was first published, in 2005, it stirred controversy. Scientists didn't want to accept the dismal state of affairs Ioannidis was describing. Now, ten years later, Vox caught up with Ioannidis to talk about what has happened since: the problems in research today, how to guard against bad science, and the implications of his work for the "evidence-based medicine" movement, which is the push for doctors to apply the best available science to medical practice instead of just going by what they learned in medical school or the opinions of authority figures. He also shared his foray into writing experimental literature, and how it sustains his creativity as a researcher.

The legacy of "Why Most Published Research Findings Are False"

Julia Belluz: Ten years ago, you published the paper, "Why Most Published Research Findings Are False." It caused a controversy then, and has since become the single most-cited and downloaded research paper in the history of the journal PLoS Medicine, where it was published. Why do you think the paper took on such a life of its own?

John Ioannidis: The title might have contributed to its popularity, I am not sure. However, I think that paper gained in popularity relatively slowly over time. It wasn’t a major hit when it first appeared. Some people noticed it and thought it was very interesting. But in a way it gained momentum over time as more colleagues were realizing there’s potentially more to that. It was a paper I enjoyed a lot working on. When I was writing it, I was really excited about it – hopefully I am not just affected by serious recall bias here. I had been thinking about that paper for quite a long time and some of the ideas that feed into it had occupied me for a decade. When I wrote the first complete version, putting these thoughts together, I was on a little island in Greece called Sikinos. I remember writing, and feeling that things were falling into place somehow.

Julia Belluz: The paper was a theoretical model. How does it now match with the empirical evidence we have on how science is broken?

John Ioannidis: There are now tons of empirical studies on this. One field that probably attracted a lot of attention is preclinical research on drug targets, for example, research done in academic labs on cell cultures, trying to propose a mechanism of action for drugs that can be developed. There are papers showing that, if you look at a large number of these studies, only about 10 to 25 percent of them could be reproduced by other investigators. Animal research has also attracted a lot of attention and has had a number of empirical evaluations, many of them showing that almost everything that gets published is claimed to be "significant". Nevertheless, there are big problems in the designs of these studies, and there’s very little reproducibility of results. Most of these studies don’t pan out when you try to move forward to human experimentation.

Even for randomized controlled trials [considered the gold standard of evidence in medicine and beyond] we have empirical evidence about their modest replication. We have data suggesting only about half of the trials registered [on public databases so people know they were done] are published in journals. Among those published, only about half of the outcomes the researchers set out to study are actually reported. Then half — or more — of the results that are published are interpreted inappropriately, with spin favoring preconceptions of sponsors’ agendas. If you multiply these levels of loss or distortion, even for randomized trials, it’s only a modest fraction of the evidence that is going to be credible.
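The multiplication Ioannidis describes is simple arithmetic. A toy calculation using the approximate halving rates he cites in the interview (these are his round numbers, not precise estimates):

```python
# Rough attrition model for randomized-trial evidence, per the
# interview's approximate figures: each stage halves what survives.
published          = 0.5  # registered trials that appear in journals
outcomes_reported  = 0.5  # pre-specified outcomes actually reported
interpreted_fairly = 0.5  # published results presented without spin

credible_fraction = published * outcomes_reported * interpreted_fairly
print(f"{credible_fraction:.1%} of registered trial evidence survives")
# -> 12.5% of registered trial evidence survives
```

Three independent halvings already leave only an eighth of the evidence intact, which is what he means by "only a modest fraction" being credible even for the gold-standard study design.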


How to fix the problems in scientific research

Julia Belluz: How do you guard against bad science?

John Ioannidis: We need scientists to very specifically be able to filter [bad] studies. We need better peer review at multiple levels. Currently we have peer review done by a couple of people who get the paper and maybe they spend a couple of hours on it. Usually they cannot analyze the data because the data are not available – well, even if they were, they would not have time to do that. We need to find ways to improve the peer review process and think about new ways of peer review.

Recently there’s increasing emphasis on trying to have post-publication review. Once a paper is published, you can comment on it, raise questions or concerns. But most of these efforts don’t have an incentive structure in place that would help them take off. There’s also no incentive for scientists or other stakeholders to make a very thorough and critical review of a study, to try to reproduce it, or to probe systematically and spend real effort on re-analysis. We need to find ways people would be rewarded for this type of reproducibility or bias checks.

Julia Belluz: Doesn’t this require basically restructuring the whole system of science?

John Ioannidis: These are open questions, I don’t have the answers. Currently we have a couple of time points where studies get reviewed. Some studies get reviewed at a funding level, and the review may not be very scientific. Many focus on the promises of significance here, and scientists have to overpromise. There’s review at the stage of the manuscript, which seems to be pretty suboptimal. So if you think about where should we intervene, maybe it should be in designing and choosing study questions and designs, and the ways that these research questions should be addressed, maybe even guiding research — promoting team science, large collaborative studies rather than single investigators with independent studies — all the way to the post-publication peer review.

Julia Belluz: If you were made science czar, what would you fix first?

John Ioannidis: I wouldn’t have a punitive approach to research. Research is really wonderful. It’s the best thing that has happened to human beings. We need research. We need science. We need better methods of doing things. A lot of the time we know what these methods are but we don’t implement them.


Maybe what we need to change is the incentive and reward system, in a way that would reward the best methods and practices. Currently we reward the wrong things: people who submit grant proposals and publish papers that make extravagant claims. That's not what science is about. If we align our incentives and rewards in a way that gives credibility to good methods and science, maybe this is the way to make progress.

Julia Belluz: Who is supposed to be the final arbiter in science — to stop scientists from going in the wrong direction?

John Ioannidis: It’s not an issue of finding a dictator. We need empirical data. We need research on research. Such empirical data has started accruing. We have a large number of scientists who want to perform research on research, and they are generating very important insights on how research is applied or misapplied. Then we need more meta-research on interventions, how to change things. If something is not working very well, it doesn’t mean that if we adopt something different that will certainly make things better. These are questions that our new center at Stanford, METRICS, is trying to address.

Julia Belluz: In light of all these issues with science, how would you reform how scientists are educated?

John Ioannidis: I think that one major gap is exactly education. Most scientists in biomedicine and other fields are mostly studying subject matter topics; they learn about subject matter rather than methods. I think that several institutions are slowly recognizing the need to shift back to methods and how to make a scientist better equipped in study design, understanding biases, in realizing the machinery of research rather than the technical machinery.

Julia Belluz: Has anything gotten better in the last ten years in terms of improving the quality of science?

John Ioannidis: There’s been a shift to more solutions-oriented approaches to these problems. I can’t say one field has done better than all the others, but some fields have adopted practices that can make a difference. Genomics, for example, now routinely replicates its discoveries. In medicine, randomized trials have improved their registration patterns over time so that they don’t go missing. Psychology and behavioral science researchers started asking about the need for replication in the last several years. So we’ve started seeing replication, which was almost unheard of in the past. Empirical economics started moving toward adoption of randomized controlled trials, much like the social sciences. Ten years ago, there was very little in terms of randomized controlled trials in these fields.

Julia Belluz: Are you optimistic or pessimistic about the direction science is going in?

John Ioannidis: I am optimistic. I think that science is making progress. There’s no doubt about that. It’s just an issue of how much and how quickly.


The source of Ioannidis’s creativity

Julia Belluz: You’re known for being extremely creative in your research. One of my favorite studies of yours involved randomly choosing 50 ingredients from recipes in the Boston Cooking-School Cookbook and then looking at whether the ingredients were associated with an increased or decreased risk of cancer. What’s your process like?

John Ioannidis: It’s chaotic. I try to be systematic in whatever I do, but I think that it’s very difficult to describe a single process. I am very happy to learn from colleagues, to hear what they have to say, to brainstorm with them. The work I have done has benefited tremendously from interacting with lots of scientists. I think that if I’m ignorant in biomedicine, then I’m even more ignorant in other fields, so I need to learn from others. The major challenge, and even bigger opportunity, is to get scientists working in different fields to communicate and share their experiences. Some fields are far ahead of others in some aspects. An important step forward is to take these advances and transplant them efficiently into other fields.

Julia Belluz: What do you read in your spare time?

John Ioannidis: I’m literally buried under hundreds and thousands of books at home. I love having books around me. In my reading, I am also pretty chaotic. Right now, I’m reading Beautiful Evidence by Edward Tufte, The Forgotten Man: A New History of the Great Depression, Tuscan Art in the Middle Ages, and Memoirs of the Crusades.

One of my main biases is that I also write literature myself. I write in Greek. My writing can probably best be described as experimental. It can use mixed techniques of contemporary literature, beyond traditional poetry or poetic prose, including short stories, travelogue, stream of consciousness, scientific data, Google searches, biography, tabulations, essay, text reconstruction, computerized cloud construction, and more – along with references from history, music, visual arts, and past literature. My latest book was published a few months ago in Athens. The previous one had been a finalist for the best book award of the year in Greece. I’m really very excited about literature. I’m working on an English version of my latest book at the moment.

Julia Belluz: What themes do you address in your fiction?

John Ioannidis: The title of the latest book is Variations on the Art of the Fugue and a Desperate Ricercar. "The Art of the Fugue" was the last work of J.S. Bach and the one before last was the "Musical Offering", a work that ends in a ricercar, a composition that is complementary to a fugue. Ricercar also has the same root as ricerca which means research. The heroes of the book are "researchers" at multiple levels, ranging from people who search their memory to researchers of the natural world and discoverers who cannot satisfy their appetites for discovery and its validation. They are also fugitives, people in exile or self-exile in a crumbling world, where homes, cities and civilizations are disrupted and abandoned. So, you can think of desperate "research" by desperate but determined "researchers".

Julia Belluz: How long have you been writing fiction? When do you find time to do it?

John Ioannidis: Since I was eight, but hopefully my writing has improved since then! I was always very much interested in literature. It is a balancing act. Obviously I do get lots of stimulation from my scientific work, and some of that unavoidably spills over into the literature; some of the themes may overlap between the two. It’s something that is quite different and complementary to science.


I write anytime. It’s interesting that it can alternate [between the science and literature]. I could be on a plane, working on a scientific paper, then switch gears and start writing some literary text, and then go back to another scientific paper.

Julia Belluz: How does your literature feed your science, and vice versa?

John Ioannidis: Even the title of the book has research embedded into it. Some of the ideas in my literature pertain to science and its reliability but seen from a different perspective: the search for evidence and the realization of its limitations, our inability to predict the future and our even greater inability to predict the past which constantly gets re-interpreted and re-constructed. Literature allows me to express myself in ways that would not be possible to do with scientific papers. Scientific papers have a very rigorous way of defining questions, introduction, methods, results, discussion. There’s little leeway to deviate from that. In literature, one can take different paths and develop structures that go beyond that.

Julia Belluz: How did literature inspire you as you wrote "Why Most Published Research Findings Are False"?

John Ioannidis: This paper is not to be seen in isolation. Some of my literature writing pertains to the feelings, perspectives, and even frustrations I had at about the same time. If someone wanted to see what was going on in my mind, the experimental literature that I was writing during that time is probably more informative than any of the scientific papers, which seem more fixed and objective. I was seeing lots of these biases in my everyday academic life. I could see those in real life rather than just in abstract terms, where you just see what is published and polished. You see how the scientific community is working, how it’s performing its job for good or bad, and that leads you to some questioning. Also, the rather atypical structure of [that] paper may have benefited from my quest for new types of structure in literature.
