Stockholm University

Reproducibility – a cornerstone of research

Being able to repeat research results is important in all research – it makes the findings more credible. But how common is it that results can actually be repeated? This is something that Gustav Nilsonne, researcher in neuroscience and meta-science, investigates in his research.

Gustav Nilsonne, researcher in neuroscience and meta-science. Photo: Annika Hallman


Gustav Nilsonne’s research in neuroscience concerns, among other things, sleep and diurnal rhythms. In parallel, he has in recent years devoted himself to what is called meta-science, that is, research about science itself, with a particular focus on reproducibility and openness in science. He is a researcher both at the Department of Psychology, Stockholm University, and at Karolinska Institutet.
“Reproducibility means that when you repeat an experiment or investigate something again, the result should be the same. Reproducibility is a cornerstone of the scientific process. This view goes back a long time. Galileo Galilei, for example, held that a theory must be possible to test empirically before one can believe it. It is not enough that some old authority has written that something should move in one way or another; it must be possible to test it and reproduce the result, according to him.”


Why is this so important?

“One reason is that we otherwise risk making important decisions on the wrong grounds. For example, there may be patients who receive inferior treatments, based on research that is not reproducible. But even if reproducibility is important, it is still not very common to actually test it. Most experiments are not subjected to reproduction attempts.”


Why?

“That is a good question. I think it may be because, among other things, it is considered more prestigious to discover something new than to do studies that confirm something that is already known.”
Gustav Nilsonne himself took part as a researcher in a 2015 study that tried to reproduce 100 experiments in psychology, with the explicit goal of repeating the experiments as accurately as possible. Some 270 researchers from different countries worked on the study.
“We came to the conclusion that only a third of the experiments could be reproduced. There was some debate about it. I’m not sure if a third is a lower number than you might expect, but the study did support the idea that a majority of the results are not reproducible.”


How many repetitions are needed for a result to be considered reliable?

“It depends; there is no absolute rule to follow. But one can at least say that the more surprising a finding is, the more it needs to be tested.”


Can it be difficult to repeat experiments?

“Yes, there may be unknown factors that affect a certain result, having to do with the time and environment of an experiment, for example. But in many cases research results are described as universal, and in the study with the 100 experiments in psychology we pointed out that the original studies had left too much room for chance. I suspect that in many of the studies we tried to repeat, several different things had been measured, and then whatever happened to look like a large effect was published. I also think that there are many studies that did not show any interesting effect at all and that have never been published. All in all, such behaviours give the impression that there are many more confirmed effects than is actually the case.”

This is what is known as the reproducibility crisis: confidence in research risks falling if it turns out that fewer research results can be repeated than previously thought.
“It is a kind of crisis of reliability, you could say, a debate that has emerged in recent years. The notable cases of research fraud that have been uncovered in various places have probably also influenced the discussion”, says Gustav Nilsonne.
One problem, he believes, is the one he mentioned earlier: that novelty is what is rewarded when research results are published, both by scientific journals and by research funders.
“In such a situation, it is easy for researchers to start taking shortcuts and to see only what they choose to see, which distorts what is published.”


How can this be avoided?

Gustav Nilsonne points to two things above all that need to change: the merit system for researchers, and researchers themselves becoming more open about the research process.
“As a researcher, you can publish your hypotheses and your analysis plan in advance. This is called preregistration. The reader can then see what the researcher intended from the start, and when findings and data are published openly, it is possible to compare.”
He believes that the merit system for researchers needs to be completely reformed.
“As the system works today, researchers are judged to a large extent according to where their research is published, for example based on the journals’ so-called impact factor, or according to how many publications a certain department has produced. It would be better if the assessment were based on the content of the research.”
One way forward that facilitates both review and access to research data is open science, something that Gustav Nilsonne both researches and is involved in, including through the European Open Science Cloud (EOSC).


In what way can open science interact with reproducibility?

“It does not necessarily follow that research results become more reproducible just because science is open, but the two are intimately connected. Researchers who use open science and make their research data available to others make it easier for other researchers to review, analyze and repeat the experiments. That interplay makes the research more credible.”
It was already at the beginning of his doctoral studies in 2005 that Gustav Nilsonne became interested in meta-science, that is, research on research itself. What was interesting about it?
“One answer is that I have always been interested in knowing what makes research reliable. Another answer is that I have had doubts about my own research and whether it is reproducible. And then I think it is very important to be able to trust research results.”
A major project he is currently involved in is a so-called multi-analyst study, “Multi 100”, in which researchers from around the world will use the same data in their analyses.
“There are 100 different studies in social science research, such as psychology, public health science and economics, which form the basis. Independent researchers will then work with the same data and research questions as the original studies and see whether they reach the same results.”
The purpose is to investigate how much freedom the researchers have to analyze data in the direction they want, says Gustav Nilsonne.
“We want to know how big the analytical space is when researchers approach the same data from different directions and in different ways.”
The results are expected to be ready in 2022.


More information

The study “Estimating the reproducibility of psychological science” was published in the journal Science in 2015.