By Matt Grainger & Ed Ivimey-Cook | September 8, 2021
Perverse incentives in research careers allow poor research practices to prevail. This is not necessarily a problem for researchers themselves, whose careers can benefit from questionable research practices. It is the end users of science (such as government agencies, policy makers, and wider society) who are negatively impacted by poor science, and we argue here that systemic change must come from them, not from within the research community alone.
The Society for Open, Reliable and Transparent Ecology and Evolutionary Biology’s (SORTEE) first conference took place from the 12th to the 14th of July 2021. Over 700 researchers from 63 countries attended the online event (see the plenaries and short talks here).
In several of the plenaries (notably those by Richard McElreath, Hannah Fraser, and Michael Eisen), as well as in the short talks and interactive sessions, it was repeatedly acknowledged that “the system” of incentives in which researchers are embedded (e.g. “publish or perish”, the H-index, the premium on “novelty”) leads to the “evolution” of questionable research practices. Questionable research practices (see for example: article link) are practices that are not generally fraudulent (some were even once considered the “correct” way to conduct research) but that lead to inflated effect sizes (e.g. incorrectly large differences between a control and an experimental condition) and a higher probability of spurious “statistically significant” results. These practices have contributed to a phenomenon called the “replicability crisis” (also called the “replication” or “reproducibility” crisis; see Replication crisis - Wikipedia), in which scientific findings prove difficult to reproduce, calling into question both the generalisability of those findings and the methodological rigour of the original studies.

Richard McElreath used the story of “cannibal chickens” as an allegory for the perverse outcomes of the incentive system. When selecting the most productive chickens, one risks also selecting those that destroy and eat the eggs of others. Likewise, when high paper counts are rewarded (with permanent employment, promotions, etc.), one may also select for researchers who engage in questionable research practices.
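As a minimal, hypothetical illustration (ours, not drawn from any of the conference talks), the Python sketch below simulates one such practice, “optional stopping”: peeking at the data and collecting more observations until p < 0.05. Even when there is no true effect at all, the rate of spurious “significant” results climbs well above the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def optional_stopping_trial(start_n=10, step=10, max_n=100, alpha=0.05):
    """One simulated experiment with NO true effect: the analyst 'peeks'
    after every batch of new observations and stops as soon as p < alpha."""
    control = list(rng.normal(0.0, 1.0, start_n))
    treatment = list(rng.normal(0.0, 1.0, start_n))  # same distribution: true effect is zero
    while len(control) <= max_n:
        _, p = stats.ttest_ind(control, treatment)
        if p < alpha:
            return True  # stop early and report the spurious "significant" result
        control.extend(rng.normal(0.0, 1.0, step))
        treatment.extend(rng.normal(0.0, 1.0, step))
    return False

n_sims = 2000
false_positives = sum(optional_stopping_trial() for _ in range(n_sims))
print(f"False-positive rate with optional stopping: {false_positives / n_sims:.1%}")
# Typically around 15-20%, far above the nominal 5% of a single fixed-n test.
```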
Throughout several sessions of the SORTEE conference there was an emphasis on what individual researchers can do to avoid these questionable research practices. This emphasis is welcome: the more researchers who adopt open and transparent research practices (the antithesis of questionable research practices), the better. As Christoph Netz pointed out in his short presentation, we know what the solutions to these problems are, yet we have not, as a community, implemented them. This is because of the potential downside of engaging in open research: if you are “honest” (transparent) about your research’s limitations, it will be published less, and as a result you may be passed over for promotion and be less successful in obtaining grant income (the lifeblood of an academic).
These disadvantages mean that researchers are incentivised to “play the game” rather than to commit to better research practices. Researchers are more likely to succeed in their careers if they do not commit to open science (although things are changing, slowly). It seems, therefore, that the research community cannot be the sole agent of change in the system. Society, however, is directly affected by the replication crisis. Poor science and uncertain outcomes make evidence-based decision making difficult because the reliability of the evidence is unknown, and decisions based on poor scientific evidence risk wasting precious resources and eroding trust in science. Decision makers can increase their confidence in research outcomes by relying not on single studies but on evidence synthesis (such as systematic reviews and meta-analyses); however, poor research practices have biased the published literature (“publication bias”), often hiding negative results and smaller effect sizes from the record. In addition, as research is often publicly funded, poor research practices represent a huge waste of public money.
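To see why publication bias undermines evidence synthesis, consider another hypothetical sketch of ours (assuming a simple two-group design with a small true effect): when only “statistically significant” studies reach the literature, the average published effect can be several times larger than the true one.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

true_effect = 0.2   # a small true standardised effect
n_per_group = 30    # a typically underpowered study
n_studies = 5000

all_estimates, published = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    estimate = treatment.mean() - control.mean()
    _, p = stats.ttest_ind(treatment, control)
    all_estimates.append(estimate)
    if p < 0.05:  # the significance filter: only "significant" studies get published
        published.append(estimate)

print(f"True effect:                 {true_effect:.2f}")
print(f"Mean estimate, all studies:  {np.mean(all_estimates):.2f}")
print(f"Mean estimate, published:    {np.mean(published):.2f}")
# At this sample size the published-only mean is typically 2-3x the true effect,
# so a meta-analysis of the published record alone would be badly misled.
```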
Society can and should be the agent of the much-needed shake-up of the research system. When research funders (which allocate public money through grants to researchers and research institutes) dictate changes, researchers adapt very quickly. For example, data sharing and open access publishing have become common requirements of funding decisions. Data sharing is now commonplace, although a lack of enforcement and standardisation of the requirement means that even when data are shared, they are not always reusable. The solutions to the replicability crisis are known to researchers, but they need to be mandated by research funders. Fundamental change to the system is vital to ensure that research can answer societally important questions in an open and honest way, delivering much better value for public money.
Funding bodies can greatly improve the quality of the science researchers produce by linking grant funding to the adoption of open and transparent practices. For example:
- Ensure that data sharing policies are enforced. Data need to be Findable, Accessible, Interoperable, and Reusable (FAIR Principles - GO FAIR).
- Ensure that analysis code and methodological details are also shared. Researchers need to be able to replicate the analyses carried out (in many cases, shared code and data have revealed analysis errors, allowing the scientific record to be corrected).
- Formalise grant applications as pre-registrations. Grant applications can already be considered a type of pre-registration (the concepts, hypotheses, and methods are peer-reviewed), but this could be formalised so that grant funding, or continued funding opportunities, is linked to researchers adhering to their pre-registered hypotheses and outcome measures (or providing detailed reasons for any deviations).
- Move away from assessing grants on “novelty”. The pursuit of “novelty” may lead researchers to inflate claims about the strength of their evidence. Funders should instead assess grants on the robustness of the methodology and how well the proposed work builds on previous knowledge.
- Change how we assess grants. Do not include the number of papers, citations or other researcher-level metrics in assessments of grant applications.
Author information:
Matt Grainger - Norwegian Institute for Nature Research, Postbox 5685 Torgarden, 7485 Trondheim, Norway
Ed Ivimey-Cook - School of Biological Sciences, University of East Anglia, Norwich Research Park, Norwich, NR4 7TJ, United Kingdom
[The opinions expressed in this blog post are those of the authors and are not necessarily endorsed by SORTEE.]