The National Centre of Expertise on Science and Society (NEWS) has become an official partner of the Netherlands Reproducibility Network (NLRN). Together NEWS and NLRN will work to make the reproducibility of research a more prominent part of how science is communicated to society.
Replication of research is a cornerstone of reliable science, yet in practice it often receives too little attention. Science communication tends to highlight new discoveries rather than the verification of existing findings. NEWS and NLRN want to help change that. NLRN connects universities, university medical centers, universities of applied sciences, and other organizations committed to improving research reproducibility. By sharing expertise and best practices, partners accelerate the transition towards more transparent and trustworthy research.
Replication and reproducibility: how do they work?
What exactly is the difference between replication and reproducibility? And what is the problem? Replication refers to repeating (part of) a study. This is often carried out by a different team of researchers than those who conducted the original study, sometimes many years later. The goal is to determine whether a previously observed effect can be found again.
Reproducibility refers to repeating only the analysis. The original data (such as historical sources or experimental measurements) are analysed again by others to examine whether they draw the same conclusions from the same dataset.
Although both processes of repetition are crucial, replicating a study or reproducing an analysis is by no means straightforward. Researchers do not always share sufficient details about their study design, and scientific research often involves highly specialized knowledge and skills. In addition, data are not always shared openly or stored properly, which makes reproducibility difficult.
Moreover, there are few incentives in the scientific system to replicate research. Many funders prioritize innovative, groundbreaking and above all new research proposals. Likewise, scientific journals rarely make room for verification studies or corrections.
A more realistic view of science
‘We need to move toward a more realistic view of science,’ says Sicco de Knecht, Director of NEWS. ‘That means showing society how knowledge is built step by step, and how it is continuously tested and refined. Science is not just a series of spectacular breakthroughs; it’s a careful process of replication, verification, and consensus-building. People outside academia should also understand how scientists scrutinize and improve each other’s work.’
Important part of science and research: reproducing results in computational linguistics at Leiden University
Involving society
Michiel de Boer, Chair of NLRN, emphasizes the importance of science communication in this process. ‘Effective science communication can itself strengthen research reproducibility and help make science more accessible to the public,’ he says. ‘Think of citizen science projects, where studies conducted by professional scientists are replicated and verified together with citizen scientists. This creates a more robust and shared understanding of the phenomena being studied.’
But collaboration doesn’t have to be complex, De Boer adds. ‘It’s already extremely valuable when researchers have their assumptions questioned by people outside academia. Real-world experiences can yield powerful insights into how data are analysed or how study design might influence results.’
Next steps: building practical tools
In the coming months, NEWS and NLRN will develop ways to engage both researchers and science communicators in their shared mission. ‘We want to offer practical guidance to help make replication and reproducibility part of everyday conversations about science,’ says De Knecht. ‘For instance, through expert sessions and collaborative initiatives.’ ‘These are challenging topics,’ adds De Boer, ‘but that makes it all the more important to make them accessible and familiar.’
Post about the workshop “Positionality Statements: A Tool to Open up Your Research”, held 26 September at VU Amsterdam. Organized by Dr. Tamarinde Haven, Dr. Bogdana Huma and Daniela Gawehns.
What comes to your mind when you think of openness in research? If you now take out a pen and start writing freely, which associations would make it onto paper? With a group of 11 colleagues, we met, sketched and discovered how different notions of openness, academic values and personal backgrounds influence how we do research.
Positionality statements come out of standpoint theory and are based on the idea that we not only experience the world in different ways, but that those experiences also shape our research, including the study design and the analysis of our data. With this workshop, we wanted to explore whether positionality statements can be used to bring a wider notion of openness to the open science and reproducibility discussion.
During the introduction round, participants shared their career paths and motivations to do the research they are currently doing or supporting. We heard stories of more or less twisted paths from the full breadth of academic backgrounds, fields and methodologies.
After a short introduction of standpoint theory and a warm-up exercise to get everyone used to writing freely and with pen on paper, participants were asked to write about what openness means to them.
Given the setting of the workshop within the open science week, many responses were along the lines of transparency and rigor, trust and accountability. Some mentioned openness as being something courageous, inviting criticism, fluidity and uncertainty. We also discovered that openness has a lot to do with vulnerability, a topic that came back later when discussing how detailed positionality statements should be.
The remaining writing prompts invited workshop attendees to write about their own values, beliefs and backgrounds, and to reflect on how these influence their research. The resulting sketches are a first step towards a draft positionality statement.
Participants received this booklet with writing prompts and further reading materials.
A topic that came back in several questions was the level of detail a positionality statement should have. In some fields of research, mentioning gender or seniority might invite extra harsh criticism, while in other research contexts the gender of the researcher might be an important attribute to mention. The workshop leaders settled on a very practical approach: the author decides what is relevant for their statement. If mentioning one's academic background is enough to make clear through which lens a problem was approached, that may suffice.
This workshop introduced participants to the tool of positionality statements and was by no means enough to lead to more than a few first thoughts. We invited participants to think about opening up research processes beyond sharing research outputs openly. This will not directly increase the computational reproducibility of someone’s research outputs. A positionality statement will, however, help consumers of research contextualize outputs and grasp how they could be reused. Accepting that our research is influenced by external and internal factors opens up the process and makes for more vulnerable, accountable and perhaps more humble research.
Trainers and teachers are more than welcome to reuse the workshop materials; the workbook and slide deck can be found on Zenodo: https://zenodo.org/records/17174961
More resources:
Jamieson, M. K., Govaart, G. H., & Pownall, M. (2023). Reflexivity in quantitative research: A rationale and beginner’s guide. Social and Personality Psychology Compass, 17(4), e12735. https://doi.org/10.1111/spc3.12735
Field, S., & Pownall, M. (2025). Subjectivity is a Feature, not a Flaw: A Call to Unsilence the Human Element in Science. OSF. https://doi.org/10.31219/osf.io/ga5fb_v1
This blog post is the final one in our series of posts about the “CODECHECKing goes NL” project, which has been running since May 2024. We have been working hard to promote reproducibility and Open Science in the Netherlands through a series of workshops, and we are excited to share our experiences and future plans with you.
Find the full series of posts on our project page.
Why is this important?
A paper’s code rarely gets checked – everyone in academia knows about peer reviewing articles, but few people engage in reproducibility checks of the materials behind the paper. Reproducibility checks are less about vetting research (e.g., catching fraud, finding errors) and more about ensuring the reusability of research. It is an extension of the thought that if we want to stand on the shoulders of giants, those giants had better be standing on solid ground. And solid ground for computational workflows means good documentation that is understandable outside the inner circle of the authors of a research article.
Reproducibility Checks at the Center for Digital Humanities at Leiden University on 14th February 2025.
A reproducibility check addresses the question of whether one can reproduce the results reported in a paper (i.e., the statistics, tables, figures, maps, or graphs) from the provided data, code, or other materials. The CHECK-NL project focuses on the computational reproducibility of published research and tries to answer the question: “Can someone else recreate the output on their hardware using the materials and documentation provided by the authors?” We call this type of reproduction a CODECHECK.
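To make this concrete, here is a minimal, hypothetical sketch of the kind of self-contained script a codechecker hopes to find next to a paper. The file names, column names and analysis (`data/measurements.csv`, a `group`/`value` summary, `outputs/figure_2.png`) are illustrative assumptions and not taken from any actual CODECHECK; the point is only that one documented command regenerates the reported output.

```python
# reproduce_figure2.py - hypothetical example of a codecheck-friendly analysis script.
# Assumed repository layout (illustrative only):
#   data/measurements.csv   - the data shared by the authors
#   outputs/figure_2.png    - the figure reported in the paper, regenerated by this script
# Run with:  python reproduce_figure2.py
from pathlib import Path

import pandas as pd
import matplotlib.pyplot as plt

DATA = Path("data/measurements.csv")   # input shared alongside the paper (assumed name)
OUT = Path("outputs/figure_2.png")     # output the codechecker compares with the paper


def main():
    df = pd.read_csv(DATA)                         # columns 'group' and 'value' are assumptions
    summary = df.groupby("group")["value"].mean()  # the "reported statistics" to compare
    print(summary)

    ax = summary.plot(kind="bar")                  # the "reported figure" to compare
    ax.set_ylabel("mean value")
    OUT.parent.mkdir(exist_ok=True)
    ax.figure.savefig(OUT, dpi=300)


if __name__ == "__main__":
    main()
```

In practice, the computing environment also needs to be pinned down (for example with a `requirements.txt` or a container recipe) so that such a script actually runs on someone else’s hardware.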
Who did what?
A group of enthusiasts for Open Science and Reproducible Research from the University of Twente, TU Delft, and UMCG Groningen applied for funding from NWO via its Open Science funding scheme to organize four in-person events at their respective institutions and beyond. Through these events, they intended to jump-start a Dutch reproducibility-checking community. The project proposal also included work on the CODECHECK website and registry to better present codechecks and the codecheckers behind them.
Along the way, the group of enthusiasts grew and instead of the planned four events, there were a total of six in-person events: one more as a conference workshop (AGILE in Dresden) and another one (TU Eindhoven) organized by attendees of the first event (exactly what this project was aiming for!). At the events, we also connected with representatives of a data repository, diamond open access publishers, and digital competence centers who are considering their own version of computational reproducibility checks.
The four events in Delft, Enschede, Rotterdam and Leiden brought in a total of 40 researchers, many of whom opened up their own work to be assessed by others, and who together codechecked 15 papers. The additional events in Eindhoven and Dresden introduced an international crowd to the CODECHECK principles. Each event had a different topic, focusing on different parts of the research landscape, which resulted in different challenges and learning opportunities. While the groups in Delft and Enschede mainly faced problems with computing environments, documentation, and high computational loads (too big for laptops or the workshop time), the group in Rotterdam raised the issue that reproducibility checks can be pretty dry at their core and may be almost trivial if only heavily summarized data can be shared. At the final event in Leiden, we brought linguists and digital humanists together. One of the questions raised was: how do we start a reproducibility crisis in the humanities? (Because maybe we need one to raise awareness about this important topic in the field?)
What are the results? What did we learn?
One clear lesson was how different the crowds from different disciplines are – although the advertisement for the events and their setup and schedule were quite similar, they played out quite differently. Another important lesson is that you need a group of enthusiastic participants to drive such events – fortunately, we always had those!
There were people with a wide range of coding skills at the events. The wrap-up sessions always gave us the impression that all of them took something home and learned something. Working with someone else’s code and reproducing another researcher’s workflow requires craftsmanship and a hands-on, can-do attitude that is rarely taught in typical university classes. The workshops, with experienced mentors participating, could provide such a setting.
The four main in-person events required attendees to invest an entire workday into this topic. In retrospect, this might have prevented interested people from joining. For raising awareness, shorter, more targeted events might be a suitable alternative.
Getting the certificates was a nice by-product but was certainly not the only outcome. Authors whose project didn’t pass the reproducibility check were given feedback so that they could still make their work reproducible. Participants got the chance to learn from other people’s workflows and software stacks.
Another surprise was how difficult it still is to convince colleagues to submit their work to a reproducibility check. The social layer of this otherwise rather technical question is the biggest challenge for the project team and for people working with reproducibility checks. The technological challenges are less exciting than the positive experiences and potential benefits; see, e.g., this blog post about an author’s experience of being “codechecked”.
From our discussions we distilled the notion that the best time to get a reproducibility check is at the preprint stage or during peer review – when people are still motivated to fix issues before publication. A certificate is also a positive signal towards peer reviewers (at least, that is what we hope). If already-published work gets checked, authors need to be very motivated to improve documentation or fix bugs, certainly if those are hidden in some deeper level of the code.
Concrete outcomes:
Successful community building in four different disciplines, with more than 60 participants overall, including many early career researchers, and positive feedback from participants
A published workshop recipe that helps others organize similar workshops through step-by-step best-practice documentation.
An updated and improved CODECHECK Registry, which is easier to integrate into other infrastructures and now features pages for checks, for different venues such as communities or journals, and for individual codecheckers; see https://codecheck.org.uk/register/. These extensions help make checks, their metadata and their findings more accessible, and showcase contributions to open reproducible research.
The CODECHECK, or reproducibility check, community in the Netherlands is growing. We met with the wider community to evaluate the project and make new plans. 4TU Research Data is planning to work on codechecks as part of its services as a data repository and is working closely with the four technical universities.
The community in the Netherlands will continue to meet and work on topics like reproducibility checks as a service or as part of teaching curricula, and academic culture around code checking. Internationally, we have reached out to colleagues in Bristol and Cologne.
How can we integrate open science practices into our curricula? In this webinar two lecturers told us how they are including preregistrations in their students’ curricula.
“When you preregister your research, you’re simply specifying your research plan in advance of your study and submitting it to a registry.” (Quote from the website of the Open Science Framework)
Ewout Meijer started the webinar and told us how he convinced thesis coordinators to include preregistration in their courses. As an Open Science Ambassador for his faculty, Ewout was backed by the dean to find out where in the various courses open science practices could be integrated. Preregistrations are just one example of those practices.
He shared several tips on how he convinced colleagues to integrate open science practices:
Make it easy for your colleagues – minimize the extra workload by sharing templates and offering introduction lecture materials
Make them want it – top down mandates to include open science practices don’t work well, but if your colleagues are convinced that this is the right thing to do, they will follow
Don’t be overambitious – first the science, then open science. Finding the balance between what students need to know in terms of scientific content and what they need to know about the scientific system is difficult and will differ between courses and student populations.
Searching for “project proposals” as a required component for students to pass a course is a good way to find courses where preregistration can be taught. You just need to replace the proposal with a preregistration.
Including new (open science) content means kicking out some other content – see what can be replaced and what can be tossed
At Maastricht University, students are asked to use the AsPredicted template and submit it as a PDF (i.e., they don’t upload it on aspredicted.com). Ewout mentioned that not all internship projects are suited for this format, so students might have to adjust it or come up with a project just to fill in the template and pass this grading component.
Students get exposed to the idea of preregistration, and the same goes for workgroup tutors. Tutors come from a wide range of research groups and are learning about preregistrations themselves while helping students with their thesis work.
Elen Le Foll asked her seminar students to preregister their term paper analyses. Adding this component required some extra time investment to make sure students understood what was expected of them, and for extra feedback rounds. The preregistration adds at least one round of feedback to the term paper and requires students to plan ahead and submit their preregistration on time, so that enough time is left to incorporate the feedback into their final data analysis. On the positive side, students can learn from the feedback and include it in their work. For normal term papers, students get feedback at the end but no longer need to use it and cannot improve their work anymore.
As Elen’s course is an advanced course for master students, some of her students want to turn their term paper into a research paper. For them, the preregistration is an excellent way to get a timestamp for their analysis.
In the discussion with Andrea and other attendees, we discussed how the AsPredicted format can be used by students and whether a full registered report might be even more suitable. We briefly touched upon the difficulty of grading preregistrations and how much detail we should ask of students. Another point of discussion was how we can sell preregistration to students who are not interested in becoming researchers. This led to a discussion on how to balance the need for academic training with content and application outside of academia.
Thanks to our presenters:
Elen Le Foll is a post-doctoral researcher and lecturer in linguistics at the Department of Romance Studies at the University of Cologne. She likes to integrate Open Science principles and practices in her seminars and recently asked her students to pre-register a study as part of a term paper assignment.
Ewout Meijer works at Maastricht University and coordinates the thesis module for the research master in psychology. He introduced preregistrations for thesis projects.
This blogpost is inspired by the Open Qualitative Research Symposium at VU on 28th March 2025. We thank the organizers and speakers for this interesting and energizing meeting.
The blogpost is a summary of and reflection on some key issues that were raised during the event, how those relate to reproducibility in a broader context and how the NLRN works on solving some of those issues.
Authors: Tamarinde Haven and Daniela Gawehns; Photo credits: QOS organising team
Open and Qualitative – an epistemic clash?
How to open and share data collected with qualitative research methods was a central point of discussion and featured prominently in the workshop program. Tools for anonymizing data, and for sharing data such as videos and pictures, were presented and discussed. A recent review on enablers of reproducibility in qualitative research reflects this emphasis, finding that more than half of the included studies addressed Open Data as a topic. One of the authors explained that these papers are not all about how to share data; many of them instead attempt to explain how the requirement to open data does not align with the needs of qualitative research:
While familiar privacy concerns also apply in quantitative research, qualitative researchers face an additional, epistemic challenge: most require access to un-anonymized, rich data as a basis for interpretation. And this type of data is difficult to share. The epistemic requirements clash with the broader push for transparency.
The discussion around Open Data requirements for qualitative research highlights a mismatch between the epistemics of a majority voice in the Open Science discussion and the needs of a smaller group of researchers. At the NLRN, we aim to include as many voices as possible in the reproducibility debate, and to take a range of epistemic challenges into account.
Agata Bochynska gave a presentation on how they support qualitative researchers at the library of the University of Oslo.
Qualitative stepping stones
The second main topic addressed how qualitative research methods can help quantitative scientists work more reproducibly. Reflexivity, a central aspect of qualitative research, can serve as a stepping stone toward process reproducibility and transparency in quantitative research:
“Reflexivity is the process of engaging in self-reflection about who we are as researchers, how our subjectivities and biases guide and inform the research process, and how our worldview is shaped by the research we do and vice versa” (quote from Reflexivity in quantitative research: A rationale and beginner’s guide – Jamieson, 2023). In their paper, Jamieson and colleagues make the case for reflexivity as a basic first step towards reproducibility: a reflection on the research process makes opening that process up much easier.
The NLRN brings organisations together to share best practices and learn from each other. We would love to amplify and share cases where methods from qualitative research helped quantitative researchers approach the reproducibility of their own non-qualitative work in new ways and make it more transparent.
Parallel Communities of Practice
Several speakers voiced their concern about duplication of efforts as parallel communities around open qualitative methods form. The recent call for a European community of qualitative researchers is an answer to this fear and will hopefully create more synergies, avoiding duplicated efforts that slow down the process of change. 250 people have already expressed their interest in such a community. We will cross-post updates on this initiative on LinkedIn and via our newsletter.
Bogdana Huma and Sam Heijnen from VU were the core organizing team of the symposium and guided a co-creation session at the end of the day.
The NLRN will organize a symposium in collaboration with the Dutch-Belgian Context Network for Qualitative Methodology to continue the discussion on transparency at the national level and foster learning from other, non-qualitative methodologies.
Open to the citizen – with methodological consequences
A topic that rarely gets touched upon in the reproducibility discussion is participatory or community-driven research. The goal of this approach is to make research more relevant by including citizens not only as participants or data collectors, but also as researchers who help guide the research process.
With reproducibility being a way to share research processes, questions around methods and research pipelines arise: Are those methods flexible enough to accommodate evolving consent forms, fluid management plans and research designs that are sourced from the impacted communities themselves? Is there a way to pre-register those changing plans and how would we go about it? How can we be transparent about changes and present them as an integral part of the research design, rather than as flaws in planning or execution?
The goal of our organization, the Platform for Young Meta-Scientists (PYMS), is to bring together Early Career Researchers (ECRs) working on meta-science, while providing a place to network with peers and discuss research. We consider this an important initiative as ECRs have been at the forefront of many reform initiatives, and because despite the increasing number of meta-science ECRs, many still work disconnected from other researchers with shared interests. Bringing ECRs together is a crucial step in enabling large scale research and strengthening reform initiatives that are key to improving science.
The PYMS Meeting
To further these goals at the most recent PYMS meeting, which took place at the UMCG on December 5th 2024 as a pre-symposium to the NLRN symposium on December 6th, we invited an audience of ECRs from a variety of backgrounds. We are glad to have achieved a broad representation of ECRs, including from abroad. Presentations spanned a variety of meta-science topics such as spin, equivalence testing, and reproducibility. Not only work from the social sciences, but also work in the computational sciences, sports and exercise science, and nanobioscience was presented and discussed.
Map of PYMS attendees and their areas of expertise/interest. Map backdrop was constructed by Tamarinde Haven.
Furthermore, during the meeting we created many opportunities for discussion and networking, including invitations for collaborations, throughout the day.
We were also fortunate enough to have Tracey Weissgerber give a keynote talk on having a career in meta-science followed by a series of provocative statements on the same topic. Both led to a practically informative discussion for the attendees on having a career in meta-science.
The Future of PYMS
Since the end of last year, the board for PYMS has been renewed. Four new ECRs have joined the board to carry the torch. The new members are Sajedeh Rasti from Eindhoven University, Raphael Merz from Ruhr University Bochum, and Anouk Bouma and myself from Tilburg University. Like the previous board, we plan to continue creating a community for meta-science ECRs with informal networking opportunities. However, we also plan to expand the reach of PYMS in new ways.
As a first step, PYMS will go international during our next meeting. In this way we expand our networking beyond the Netherlands to prompt broader collaboration in our field. This is especially important here, since projects requiring large investments of time and effort from multiple parties are of crucial importance for investigating and improving scientific practice effectively. The meeting will be held this summer; there will be plenty of opportunities to discuss future projects, and financial compensation for international presenters will encourage a broad representation of young meta-scientists.
Interested ECRs can join our mailing list and get a link through there to our Discord server to connect and communicate with fellow ECRs on meta-science, as well as keep up to date about future events, including the next PYMS meeting this summer.
In this workshop, participants experienced reproducibility in action through an experiment using LEGO®. In the first part of the session, each group created a structure using LEGO® bricks and prepared accompanying documentation and instructions on how to create it. Afterwards, another group attempted to recreate the structure based on the provided documentation. The exercise sparked lively discussions about the nuances of metadata and different documentation styles, the challenges of interpreting and applying them, and the critical role they play in ensuring that research can be reliably reproduced. One of the interesting conclusions was that keeping reproducibility in mind not only influences the approach to documentation and metadata but also shapes the design of the structure (i.e., the research project) to ensure its reproducibility.
Feedback from the participants was overwhelmingly positive. Many appreciated the innovative approach, noting that the hands-on activity helped solidify their understanding of the importance of proper documentation and of the practical application of metadata. The workshop also fostered a sense of community and collaboration, as participants shared their experiences and insights.
Spot the difference
Can you spot the difference (or lack thereof) between some of the original creations from our participants and the reconstructed versions from another team?
We are grateful to the Netherlands Reproducibility Network for providing us with the opportunity to share our knowledge and engage with such a passionate group of individuals. We look forward to continuing our efforts to promote research reproducibility and hope to bring more creative and interactive workshops to future events!
1 Donaldson, M. and Mahon, M. (2019) LEGO® Metadata for Reproducibility game pack. Documentation. University of Glasgow. (doi: 10.36399/gla.pubs.196477).
Under the theme of “The Future of Reproducibility”, about 100 scholars, policymakers, funders, researchers, science enthusiasts and facilitators met at the UMCG for a day of interaction, discussion and new connections.
Casper Albers, the head of the Open Science Program in Groningen, welcomed us and kicked off the plenary morning session, underscoring the importance of retaining the focus on rigorous, open science, even in a future where funding may be scarcer. The keynote speakers presented challenges and developments in research reproducibility from two different domains: biomedical research and history. Tracey Weissgerber shared how open research has evolved in the biomedical sciences and called for a look beyond reproducibility to foster responsible and robust research results. Pim Huijnen and Pieter Huistra presented their work on reproducibility in history. They shared their insights from an interpretative field and highlighted how the domain of history needs to find its own definition of what reproducible scholarship means – and how this affects the choice of tools to achieve this reproducibility.
The seemingly endless corridors of the medical center brought people together to discuss the presentations while racking up their individual step counts. The lunch area was filled with the chatter of mingling attendees as new connections formed around poster topics, including a chatbot for training materials and a crowdfunding platform for replication studies.
In the afternoon, participants followed workshops on reproducibility-related topics. The participants of the LEGO® workshop experienced hands-on what it means to work with limited metadata. Attendees could also get a glimpse of the current status of interventions to promote reproducibility and think about how they would promote reproducibility in their own groups. The discussions on research integrity and applied research carried over to the coffee break, where attendees mingled and had a chance to meet the poster presenters.
The lively closing session featuring early career professionals sparked discussions around preventing dreaded futures of science communication, including how to avoid weaponization of skepticism to stir distrust in science.
What happens if you ask representatives from 11 different research performing and supporting institutions to think about how reproducibility-ready their own institution is? On 4th October, the contact persons of our node members met in Utrecht to learn from and get to know each other during NLRN’s first network meeting.
We used the framework of the Knowledge Exchange (KE) report on reproducibility at research performing organizations to systematically think through enablers and hindrances of reproducible research.
In small groups, we first categorized how reproducibility-ready our own institutions are. The KE report suggests three levels of readiness: 1) there are some pockets of excellence; 2) efforts are partially coordinated; and 3) there is organizational-level commitment with coordinated processes. We concluded that these levels depend on which disciplines, departments and research methodologies you are considering. We often found differences between management levels and researchers: university-wide management may set policies to foster reproducible research, but this might not trickle down to an individual researcher’s work. This might even lead to window dressing or “open washing”, where institutions present themselves as committed to open research practices while the culture within the institution hasn’t actually changed.
The second discussion exercise was about enablers of reproducible working such as training, mentorship or recognition. In small groups, we tried to identify which enablers are already in place and on which level, and if they indeed function as enablers. The KE report comes with an assessment worksheet for institutions that some of the participants tested.
Visualization of enablers at a sample institution. Each enabler is shown with its current state and its target state.
During the final discussion, we tried to figure out how the tools of the report can be used to further reproducibility in the node institutions.
The general consensus was that the tools as presented wouldn’t work as a general instrument for all areas of scholarship and research. They would work largely for quantitative methodologies and would need tweaking for interpretative, qualitative, arts-based and action-based research methodologies.
The idea was raised to find an institution that would use the framework to assess their current state and work towards becoming more reproducibility ready. This process could be followed and presented as a concrete example on how the framework works.
We didn’t have enough time to talk in detail about each of the enablers. One question raised was whether there is a hierarchy of enablers and whether an institution should aim for the highest score on all of them or just some.
We spent a lot of time discussing training for researchers as an enabler. There were ideas of introducing reproducibility related courses to the mandatory courses at graduate schools. Others remarked that there are already a lot of training modules offered but they don’t seem to reach everyone.
The budget cuts for higher education were also discussed during this meeting, with the conclusion that we need to work in an even more collaborative and coordinated way to make the most of the means that are already available. NLRN could play an important role in this.
In conclusion, the network event was a great opportunity to get to know each other, with a lot of engagement from all participants in the discussions on the current state and future directions for increasing reproducible working. In our next meeting, we will focus on concrete steps towards that future.
In June, I attended the World Conference on Research Integrity in Athens. I am still inspired by the many fruitful and fun encounters with colleagues from different places in the world and by thought-provoking presentations. One of these talks forms the basis of this blogpost: the keynote of Daniele Fanelli, entitled “Cautionary tales from metascience”, in a session on the effects of research integrity on innovation and policy making. Among his main messages were (1) that replication rates are not as bad as we make them out to be, (2) that reproducibility is related to the complexity of the research at hand, (3) that changing policies as a reaction to the reproducibility crisis might do more harm than good, and (4) that it is not one size fits all. Below, I will discuss these issues and try to conclude what this means for the work within NLRN.
Let’s start with his premise that replication rates from the literature are actually not that bad. He mentioned rates of 60-90%, taking the higher values from the ranges reported in the literature. I think a fairer representation would be a median rate between 50 and 60%. Whether that means we are in a crisis is a different question. Crises are usually associated with specific periods in time, and it’s probably reasonable to assume that replication rates would not have been much different 20 or 30 years ago, had there been replication studies at that time. Fanelli went on to mention the large variance in replication rates across studies and apparently also across (sub)disciplines, and there he has a good point.
Fanelli presented results from his own work, performed with data from the Brazilian reproducibility initiative, showing that complexity might indeed be related to replication. So there is at least some empirical evidence for his statement. It also seems logical that simpler, more straightforward research, or research in a mature field where there is a high degree of consensus on methods and procedures, would be easier to reproduce, and its results easier to replicate.
Fanelli went on to argue that policies focusing on incentive structures are not effective in combating questionable research practices (QRPs). He showed that bias and questionable research practices are, overall, only strongly related to the country of the first author. Additionally, within countries in which QRPs are prevalent, incentive structures and publication pressure seem to be important drivers, but in other countries these things do not seem to be related to QRPs. This would, according to Fanelli, imply that policies focused on these things would not be effective in many countries. Here I think Fanelli jumps to conclusions a bit too quickly. All of his evidence comes from meta-research, which is by nature observational and at an aggregated level. This means that there might be confounds underlying the relations he showed. Moreover, we would need intervention studies to explore whether intervening on these aspects changes outcomes. Such studies are scarce. In the field of reproducibility, there is some evidence that rigor-enhancing practices in both original studies and replication studies can lead to high replication rates and effect sizes that are virtually unchanged in the replications.1 These practices included confirmatory tests, large sample sizes, preregistration and methodological transparency. However, this multi-lab study was done in social psychology and it is uncertain how results would look in other fields or (sub)disciplines.
All in all, there is not much evidence yet that policy interventions improve the reproducibility and replication of studies, and it is probably not one size fits all. Fanelli concludes that policy should be light and adaptive, and that makes sense. We will have to strike a balance between incorporating some generic principles and leaving enough room for discipline-, field- and country- or region-specific differences. How do we know what works for whom? By developing interventions and policies together with academic and non-academic staff, piloting and evaluating these, and, when deemed viable, implementing them on a broader scale and evaluating and adapting where necessary. These efforts need continuous monitoring. The reproducibility networks are ideally suited to support these efforts through their network of research performing institutions, communities of researchers and educators, and other relevant stakeholders.
Within the Dutch Reproducibility Network we acknowledge the specificity of reproducibility and replication across disciplines and fields, which is why one of our focus areas for the coming years is non-quantitative research. We are eager to work on these and other pressing issues with our partners, striving for evidence-informed implementation of interventions and policies on reproducibility.
1 Protzko, J., Krosnick, J., Nelson, L. et al. High replicability of newly discovered social-behavioural findings is achievable. Nat Hum Behav 8, 311–319 (2024). https://doi.org/10.1038/s41562-023-01749-9