Why does one investment outperform another? Economists and investment firms have been studying this for centuries. But it turns out many of their most recent findings may just be wrong.
In a new NBER paper, Duke University Finance Professor Campbell R. Harvey, University of Oklahoma Assistant Professor of Finance Heqing Zhu, and Texas A&M Assistant Professor of Marketing Yan Liu conclude that a majority of papers in financial economics are wrong.
What they studied
Harvey says he was inspired by a 2005 study by John Ioannidis that shook the medical community when it proclaimed that more than half of all medical study findings are wrong. He wanted to know whether the same was true in finance.
He and his co-authors studied 315 papers that examine different factors that might predict returns on stocks. Those papers propose all sorts of potentially predictive variables, like leverage and price-to-earnings ratios.
He uses genetic testing as an analogy. Scientists hunting for the gene that causes or is related to a particular disease might test lots of genes. For any one gene-disease test, the odds that a statistical relationship between the two is pure coincidence are low. But as you test more and more hypotheses, the odds of finding a "statistically significant" relationship that has no causal basis get higher and higher.
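That multiple-comparisons problem is easy to reproduce. Below is a minimal Python sketch of the gene-testing analogy (the gene count, sample size, and cutoff are made-up illustration values, not anything from the paper): every "gene" here is pure noise, yet testing enough of them at the conventional 5 percent level reliably produces dozens of spurious hits.

```python
import numpy as np

rng = np.random.default_rng(0)

n_genes = 1000     # hypothetical number of genes tested
n_subjects = 200   # hypothetical sample size
# Pure noise: none of these "genes" actually relates to the "disease"
genes = rng.normal(size=(n_subjects, n_genes))
disease = rng.normal(size=n_subjects)

false_positives = 0
for i in range(n_genes):
    r = np.corrcoef(genes[:, i], disease)[0, 1]
    # t-statistic for a correlation with n - 2 degrees of freedom
    t = r * np.sqrt((n_subjects - 2) / (1 - r**2))
    if abs(t) > 1.96:  # roughly the 5 percent two-sided cutoff
        false_positives += 1

print(f"{false_positives} of {n_genes} null genes look 'significant'")
# Expect roughly 0.05 * 1000 = 50 purely coincidental "findings"
```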
What they found
Harvey and his co-authors found that study authors have not been using rigorous enough standards in determining statistical significance. As a result, they write, "most claimed research findings in financial economics are likely false."
The reason is that in trying to figure out what exactly is correlated with high returns, academics and finance gurus often compare many different variables. The statistical tests are usually judged at the 5 percent significance level, Harvey says. That means that when a variable is declared statistically significant, there was a 5 percent chance of seeing a result that large (or larger) even if no real effect exists. Those are low odds if you're running just one test, but if you use powerful computers to run hundreds of tests, you're sure to find some "significant" relationships that are just random noise.
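The arithmetic behind that is straightforward. If each test has a 5 percent false-positive rate and the tests are independent, the chance that at least one of k tests comes up "significant" by pure chance is 1 - 0.95^k, which climbs toward certainty fast. A quick sketch:

```python
# With independent tests at the 5 percent level, the chance that at
# least one comes up "significant" by pure luck is 1 - 0.95**k.
for k in (1, 10, 50, 100, 200):
    print(f"{k:>4} tests -> {1 - 0.95**k:.1%} chance of a false positive")
#    1 tests ->  5.0%
#   10 tests -> 40.1%
#  100 tests -> 99.4%
#  200 tests -> essentially 100%
```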
To show this another way, Harvey generated 200 random return series with a random number generator. The one that outperformed the rest is highlighted in dark red below.
[Chart: cumulative returns of 200 randomly generated series, with the top performer highlighted in dark red. Source: Campbell Harvey]
That looks like a pretty great return, until you consider that it's random data — the equivalent of "a monkey throwing darts at the Wall Street Journal stock listings," Harvey says.
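Readers can run the same kind of exercise themselves. Here's a minimal sketch (the 200-series count matches Harvey's demo; the monthly horizon and volatility are assumptions chosen purely for illustration): compound 200 streams of pure noise, and the best one will look like a star fund manager.

```python
import numpy as np

rng = np.random.default_rng(42)

n_strategies = 200   # matches Harvey's 200 random variables
n_months = 120       # assumed horizon: 10 years of monthly "returns"

# Pure noise: monthly returns with zero true edge
returns = rng.normal(loc=0.0, scale=0.04, size=(n_strategies, n_months))

# Compound each noise stream into a cumulative return path
cumulative = np.cumprod(1 + returns, axis=1)

best = np.argmax(cumulative[:, -1])
print(f"Best 'strategy': #{best}, "
      f"final value of $1: ${cumulative[best, -1]:.2f}")
# The winner looks impressive, but it's the dart-throwing monkey.
```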
So if you're studying lots and lots of variables, you have a greater and greater chance of a false result, he says.
What it means
It means, first of all, that Harvey's colleagues in the finance field may have to change the way they do their studies. They will also have to keep raising the bar for statistical significance over time, as researchers test more and more factors.
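To give a sense of what raising that bar could look like, here is one textbook adjustment, a Bonferroni correction, shown purely as an illustration and not as the paper's own prescription: divide the significance cutoff by the number of tests, so the hurdle tightens as the field mines more factors.

```python
# Bonferroni correction: with k tests, each individual test must
# clear a cutoff of alpha / k to keep the overall chance of any
# false positive near alpha.
alpha = 0.05
for k in (1, 50, 315):  # 315 ~ the number of papers studied
    print(f"{k:>4} tests -> per-test cutoff {alpha / k:.5f}")
```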
But the implications reach well beyond academia. For one thing, the finding confirms what many investors may have already suspected.
"The broader insight is that some investment managers will appear to outperform — purely by luck," he says in an email to Vox.
And that means some investment managers may have to change their strategies. Harvey uses one variable he's seen as an example.
"A very prominent company in this space, one of their variables is the cube of the market capitalization of a firm," Harvey says. "That, to me, doesn't have a lot of economic foundation."