Last month, European researchers launched a program to identify errors in the scientific literature. With initial funding of 250,000 Swiss francs (roughly 285,000 USD), team leaders Malte Elson and Ruben C. Arslan are recruiting experts to hunt for errors in published research, beginning with psychology papers.
Here is the program in their own words:
ERROR is a comprehensive program to systematically detect and report errors in scientific publications, modeled after bug bounty programs in the technology industry. Investigators are paid for discovering errors in the scientific literature: The more severe the error, the larger the payout. In ERROR, we leverage, survey, document, and increase accessibility to error detection tools. Our goal is to foster a culture that is open to the possibility of error in science to embrace a new discourse norm of constructive criticism.
(Elson, 2024)
Their program follows a growing awareness of what researchers in the early 2010s called “the replication crisis”: the inability to reproduce a large share of published scientific findings. For example, C. Glenn Begley, the former head of cancer research at the biotechnology company Amgen, tried to reproduce 53 landmark cancer publications (papers expected to lead to groundbreaking discoveries). Of those 53, his team could confirm only 6 (Hawkes, 2012). While 53 is not a large sample, a Nature survey of 1,576 researchers found that more than 70% had tried and failed to reproduce published experiments (Baker, 2016).
ERROR founders Malte Elson and Ruben C. Arslan point to a poor incentive structure: “error detection as a scientific activity is relatively unappealing as there is little to gain and much to lose for both the researchers whose work is being scrutinized (making cooperation unlikely)” (Elson, 2024).
Nature’s survey points to the same incentive problem: journals are less likely to publish verifications of older work or papers that simply report negative findings (Baker, 2016). Replication also gets deferred because it requires more time and money (ibid.).
Not to mention that even in science, biases can creep in: the siren call of new discoveries can lead researchers to favor publishing novel results over confirming existing ones. In a telling example, Begley, the Amgen researcher mentioned above, approached a scientist and explained that he had tried, and failed, 50 times to reproduce the results of that scientist’s experiments. The scientist answered that “they had done it six times and got this result once but put it in the paper because it made the best story” (Hawkes, 2012).
Bearing these issues in mind, the ERROR program hopes to incentivize error detection and change publication culture so that negative results come to be seen as useful data (Elson, 2024). To foster a constructive environment, authors must agree to have their work reviewed, and ideally they benefit from the verification as well (Lee, 2024).
Since at least 2005, researchers have called for efforts to address the replication crisis (Pashler & Harris, 2012; Ioannidis, 2005). While only time will tell whether the ERROR program makes a difference, it offers an interesting answer to that call.
REFERENCES
Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature 533, 452–454. https://www.nature.com/articles/533452a.
Elson, M. (2024). ERROR: A Bug Bounty Program for Science. https://error.reviews/
Hawkes, N. (2012). Most laboratory cancer studies cannot be replicated, study shows. BMJ, 344, e2555. https://doi.org/10.1136/bmj.e2555
Lee, S. (2024). Wanted: Scientific Errors. Cash Reward. The Chronicle of Higher Education. https://www.chronicle.com/article/wanted-scientific-errors-cash-reward
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
Pashler, H., & Harris, C. R. (2012). Is the Replicability Crisis Overblown? Three Arguments Examined. Perspectives on Psychological Science, 7(6). https://journals.sagepub.com/doi/10.1177/1745691612463401
Photo by Markus Winkler on Unsplash