Researching to make research more reliable

In some research fields, about half of the published findings are false. This may have serious implications: there is a risk that people will lose confidence in science. Anna Dreber Almenberg is a Wallenberg Scholar who is putting research itself under the microscope.

Anna Dreber Almenberg

Professor of Economics

Wallenberg Scholar

Institution:
Stockholm School of Economics

Research field:
Metascience. Interdisciplinary studies in behavioral economics. Studies on the reliability of research findings using prediction markets and other methods.

Interest in metascience has gathered pace in recent years. Metascience is the study of the strengths and weaknesses of research itself. It turns out that quality is often substandard: in fields such as experimental economics and experimental psychology, up to two-thirds of the studied results do not replicate. Dreber Almenberg, who is a professor of economics at the Stockholm School of Economics, elaborates:

“Not all results are false, but a high proportion of them are poorly supported, and many findings cannot be replicated.”

The phenomenon is known as the “replication crisis”: other researchers struggle to reproduce the original findings when they repeat a study, even one published in a leading journal. It is often a question of “false positives”, where researchers think they have found an effect when in fact there is none.

“Most researchers want to find something, and because of this they can end up deceiving themselves. There is also publication bias, with journals preferring to publish positive results over null results. And we have noticed a lack of statistical training. I would think that research fraud as such is rare, however.”

Doubting her own results

Anna clearly believes that anyone can make mistakes, and is keen not to dramatize the issue. The aim is not to point fingers, but to arrive at insights that can benefit the entire research community. She makes no secret of the fact that she has conducted studies whose findings she no longer believes in – which was one of the reasons she realized the gravity of the problem.

“In my presentations I usually mention my oft-cited study on financial risk-taking. It took a long time and cost a lot of money, but its findings were probably just noise.”

The study, published in 2009, aimed to identify an underlying genetic cause of the financial risks that people take. The paper examined two dopamine genes. Variation in one of them was statistically significantly related to financial risk-taking among one hundred men. The paper attracted a great deal of attention.

“But I later realized that we basically have no statistical power with one hundred individuals. The study is still up on my website, but I state quite plainly that I no longer believe in the findings.”
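To illustrate what “no statistical power” means in practice, here is a minimal sketch in Python using the statsmodels library. The two-group design and the assumed small effect size (Cohen’s d = 0.2) are illustrative assumptions, not figures from the original study.

```python
# Illustrative power calculation: how likely is a two-group study with
# 50 participants per group (100 in total) to detect a small true effect?
# The effect size (Cohen's d = 0.2) is assumed for illustration only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power with 50 participants per group at the conventional alpha = 0.05
power_100 = analysis.power(effect_size=0.2, nobs1=50, alpha=0.05)

# Sample size per group needed to reach the commonly recommended 80% power
needed_per_group = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)

print(f"Power with 50 per group: {power_100:.2f}")                 # roughly 0.17
print(f"Needed per group for 80% power: {needed_per_group:.0f}")   # roughly 394
```

Under these assumed numbers, a study of one hundred people detects a true small effect less than one time in five, which is why a significant result in such a sample is weak evidence.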

Power posing does not work

Small-scale experimental studies are fairly common, and researchers are sometimes tempted to trumpet their findings, even though they are based on limited material. One well-known example is a study on “power posing”, published in 2010. With 42 participants, the study reported that holding a “high-power” pose for a couple of minutes can increase testosterone levels and risk-taking.

“When we tried to replicate the results with a sample group that was five times larger, we detected basically no such effects, and now there is scarcely a single academic psychologist who believes in power posing – not even the first author of the original paper.”

But this has not stopped the original findings from living a life of their own in popular lectures on YouTube, viewed millions of times, and as a component of leadership courses.

“It’s hard to reach a wider public with a message correcting false positives.”

Achieving more credible research

Over the past few years, Anna’s research focus has shifted to reviewing studies of this kind and formulating proposed solutions, such as increasing statistical power by using larger samples.

“Another move would be to change the threshold for statistically significant results from p<0.05 to p<0.005.”
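A rough calculation shows why the threshold matters. The sketch below asks what share of “statistically significant” findings are false positives; the assumed inputs (one true hypothesis for every ten false ones tested, and 80 percent statistical power) are illustrative, not taken from Anna’s research.

```python
# Illustrative calculation: among results that reach significance, what
# fraction are false positives? The prior (1 true hypothesis per 10 false
# ones tested) and the power (0.80) are assumptions for illustration only.
def false_positive_share(alpha, power=0.80, prior_true=1/11):
    true_positives = prior_true * power          # true effects correctly detected
    false_positives = (1 - prior_true) * alpha   # null effects wrongly "detected"
    return false_positives / (false_positives + true_positives)

print(f"alpha = 0.05 : {false_positive_share(0.05):.0%} of significant results are false")
print(f"alpha = 0.005: {false_positive_share(0.005):.0%} of significant results are false")
# Roughly 38% versus 6% under these assumed numbers.
```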

Anna also studies how good researchers are at predicting which results will replicate, using surveys and “prediction markets” in which people with different kinds of prior knowledge can bet money on whether results replicate or not. The goal is to see whether there is some wisdom of crowds in predicting research results.

“Our results suggest that researchers are not perfect in their predictions, but that they are better than chance at predicting which results replicate, and that prediction market prices can be used to analyze the probability that the tested hypothesis is true at different stages of the testing process.”
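One way to read such a market price is as a prior probability that a hypothesis is true. The Python sketch below applies Bayes’ rule with that prior and with assumed values for power and the significance threshold; it shows the general idea, not the specific model used in these studies.

```python
# Sketch: treat a prediction market price as the prior probability that a
# hypothesis is true, and update it with Bayes' rule at each testing stage.
# Alpha (0.05) and power (0.80) are assumed values for illustration.
def update(prior, significant, alpha=0.05, power=0.80):
    """Posterior probability that the hypothesis is true after one study."""
    if significant:
        likelihood_true, likelihood_false = power, alpha
    else:
        likelihood_true, likelihood_false = 1 - power, 1 - alpha
    numerator = prior * likelihood_true
    return numerator / (numerator + (1 - prior) * likelihood_false)

# Illustrative numbers only: a market price of 0.50 before the original study.
p = update(0.50, significant=True)     # after the original, significant result
print(f"After original study: {p:.2f}")       # about 0.94
p = update(p, significant=False)       # after a failed replication attempt
print(f"After failed replication: {p:.2f}")   # about 0.77
```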

“Being chosen as a Wallenberg Scholar is an incredibly important acknowledgement. It gives me the freedom and the courage to break new ground, and take risks in my research that would not otherwise have been possible. And that freedom enables me to change track when unexpected new avenues of inquiry appear. This is something I think happens frequently, and often results in the most interesting projects.”

At a time when the concept of truth is widely debated, Anna considers her research particularly meaningful.

“There is a risk that the public will begin to distrust research, and that there will be a debate about ‘fake science’. Our results suggest there is nothing wrong with the scientific method. A certain proportion of false positives is inevitable, but something can be done about the high proportion we have seen, and it is possible to speed up the process by which false positives are eliminated from the scientific literature. It is essential that individuals and organizations can base their decisions on reliable research.”

Text Nils Johan Tjärnlund
Translation Maxwell Arding
Photo Magnus Bergström