The many dimensions of reproducible research: A call to find your fellow advocates.

Blog post by steering group member Tamarinde Haven.

Various definitions of reproducibility and its sister concepts (replicability, replication) are in circulation [Royal Netherlands Academy of Arts and Sciences, 2018; Goodman et al., 2016]. While there are relevant (disciplinary) differences between them, they generally share a focus on the technical aspects of reproducible research.

 

With the technical focus increasingly taking centre stage [e.g., Piccolo & Frampton, 2016], it is tempting to assume that technical solutions are the panacea. Call it techno-optimism in the reproducibility debate. The thinking goes something like this: provided that institutions facilitate the relevant infrastructure and tools, researchers employed at those institutions will carry out reproducible research [Knowledge Exchange, 2024].

To be clear, making reproducible practices possible is a necessary step. But it is only one of many [Nosek, 2019]. Now that you have enabled more reproducible practices, how are you going to ensure they are picked up by researchers?

Now that you have enabled more reproducible practices, how are you going to ensure they are picked up by researchers?

Back in 2021, I defended my thesis on fostering a responsible research climate. One of the key barriers to responsible science that researchers flagged was a lack of support. Our participants were bogged down by inefficient support systems in which they were connected to support staff who were generalists and could not efficiently help them [Haven et al., 2020].

Many Dutch research-performing organisations nowadays employ data stewards, code experts, and product owners of the various types of software that have recently been developed to promote reproducibility. These experts maintain software packages and select the relevant infrastructure to support a reproducible data flow. They advise on which tools are suitable for a given project and ensure these are up to date. They implement new standards through workshops and training. And they do so much more.

During the launch of the Netherlands Reproducibility Network last October, we heard of various disconnects at institutions. We learned about meticulously trained data stewards sitting around, waiting for researchers to find them. Those who found them returned, but that’s only partially good news. I am not aware of their exact reward mechanisms, but many organisations follow the flawed principle that if no one requests a solution, the underlying problem must not be worth bothering about, which is false for many different reasons*. Part of this disconnect may simply be a matter of time, and culture change is a process that typically moves much more slowly than its driving forces would like.

In my personal experience, these professionals have been highly knowledgeable. I found reproducibility advocates who helped me draft a coherent data management plan for my funding proposals, advised on relevant metadata standards, wrote the piece of Python code needed to connect my data with existing databases, and finally connected me with yet other specialists who maintain the newly created data archiving infrastructure in the Netherlands.
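To give a flavour of the kind of glue code such support staff contribute, here is a minimal, hypothetical sketch of linking a local dataset to records from an external database via a shared identifier. The column names, example records and merge logic are illustrative assumptions, not the actual script I received:

```python
# Hypothetical sketch: link a local dataset to external reference records by a shared DOI.
# The column names and example records are made up for illustration.
import pandas as pd

# Local measurements, e.g. exported from the lab's own spreadsheet.
local = pd.DataFrame({
    "doi": ["10.1000/abc", "10.1000/def"],
    "measurement": [4.2, 3.7],
})

# Records retrieved from an external database (mocked in-line here;
# in practice this might come from an API call or a downloaded export).
external = pd.DataFrame({
    "doi": ["10.1000/abc", "10.1000/def", "10.1000/xyz"],
    "publication_year": [2019, 2021, 2022],
})

# Join on the persistent identifier so every local record gains its metadata.
linked = local.merge(external, on="doi", how="left")
print(linked)
```

The point is not the few lines of code themselves, but that someone who knows the relevant identifiers and standards can save a researcher days of manual matching.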

As a network, it is in our DNA to exchange information and, crucially, professional contacts. That is why we, as the Netherlands Reproducibility Network, want to focus on promoting connections between researchers and research support staff. What kinds of strategies are currently being used to connect these groups? How can we learn from successful efforts, or from institutions where these parties seamlessly find one another?

And no, we don’t plan on falling into the same trap that the participants in my thesis research talked about. General, one-size-fits-all solutions likely won’t cut it. That is why we hope to facilitate ongoing pilots, and launch new ones, to investigate these connections. But as so often with these efforts, the best possible world is one in which such pilots are not necessary. So, to all my research colleagues: please find your fellow reproducibility advocates at your institution. Acknowledge their help in your research products. And make sure to share their valuable expertise with your lab members; we should honour the crucial human dimension of reproducible research.
 

PS. Do you know of any ongoing pilots? Get in touch!

* Just because no one has reported any social safety issues to the confidential advisor, does that mean there are none, and that we do not need confidential advisors?

References: 

Goodman, S. N., Fanelli, D., & Ioannidis, J. P. A. (2016). What does research reproducibility mean? Science Translational Medicine, 8(341), 341ps12.

Royal Netherlands Academy of Arts and Sciences (KNAW). (2018). Replication studies – Improving reproducibility in the empirical sciences. Amsterdam: KNAW.

Haven, T., Pasman, H. R., Widdershoven, G., Bouter, L., & Tijdink, J. (2020). Researchers’ perceptions of a responsible research climate: A multi focus group study. Science and Engineering Ethics, 26(6), 3017–3036. https://doi.org/10.1007/s11948-020-00256-8

Piccolo, S. R., & Frampton, M. B. (2016). Tools and techniques for computational reproducibility. GigaScience, 5(1), 30. https://doi.org/10.1186/s13742-016-0135-4 

 

Replication in philosophy, or replicating data-free studies  

Blog post by Hans Van Eyghen, member of the NLRN steering group

The replication crisis, which arose primarily in the biomedical and psychological sciences, was both a blessing for replications and somewhat of a curse. Its lasting impact lies in the recognition of the need for replicability. Replicability is now generally seen as a way to make the affected disciplines better and more robust, allowing for quality control and independent confirmation of findings. The minor curse inflicted by the replication crisis is that replicability is sometimes regarded as a specific solution to a specific problem: disciplines without a replication crisis would not stand in need of increased replicability, and any push for replicability may inflict more problems than it solves. Such a sentiment is especially at work in the humanities. The humanities did not go through a replication crisis. This has left some in the humanities with the idea that replication is a fix for others, and that pushing for increased replicability in the humanities would mean importing a problem and a methodology that are not theirs.

There is no reason why studies in the humanities would not benefit from increased quality control or corroboration

While there are profound differences between the humanities and other disciplines, the key reasons for increased replicability remain the same. Quality control refers to checking whether studies are well conducted; it is key to weeding out mistakes (intentional or unintentional) or other reasons why a study is not up to standard. Corroboration refers to finding the same or similar conclusions when redoing the same (or a similar) study. There is no reason why studies in the humanities would not benefit from increased quality control or corroboration. An argument in favor of increased replicability, which allows for more quality control and corroboration, is therefore quickly found.

Nonetheless, some suggest reasons why increased replicability might not be feasible for the humanities. While the details vary, the reasons center on the idea that the humanities are just too different. More than other disciplines, the humanities involve interpretation. The objects of the humanities are also not just quantifiable data, but qualitative, meaningful objects or subjects. Finally, a considerable number of studies in the humanities do not involve data, analysis thereof, or anything of the sort. In those cases, it is not at all clear what replication would involve.

I will focus here on the final argument, i.e., that some humanities studies do not have data and therefore have no need for replicability. Replicability usually involves being clear about the data used, how they were analyzed and how conclusions were drawn. A clear example of such ‘data-less’ work is a priori reasoning in philosophy. A considerable number of studies in philosophy consist of reflection on arguments or questions like ‘Is knowledge justified true belief?’ or ‘Is morality objectively true?’. Attempts at answers do not rely on surveys (unless one is engaging in experimental philosophy) or empirically collected facts. Instead, philosophers tend to rely on a priori methods, such as reflective equilibrium, conceptual analysis or others.

Photo by Alex Block on Unsplash

Does a lack of data exclude replicability or replication of such studies? It does not. Data-less studies can benefit from increased transparency and detail as well. As most beginning PhD students in philosophy know, it is often highly opaque how conclusions are drawn in a priori philosophy. Philosophers do tend to clearly define terms and meticulously write down arguments for why a conclusion is valid. More often than not, however, philosophers are not upfront about how they analyze their concepts. Some kind of method, like conceptual analysis, is usually at work, but the exact method used is often not made explicit. Philosophers also tend not to be transparent about how they arrived at their conclusions, how they came up with examples or why they started thinking about the topic in the first place. The answers to these questions may be quite trivial and uninformative, like ‘I was thinking hard for a long time’. However, in many cases, philosophers rely heavily on input during paper discussions, presentations at conferences and peer review. Often this input goes unacknowledged, or acknowledgment is limited to a single note in the final paper.

What would a replicable data-less study look like? Like other replicable studies, it would need a section on methodology in which the researcher lays out the methods she used and why these methods are appropriate. Such a section would allow (younger) researchers to reconstruct how the study came about and to do the same study again. A replicable paper would also include information on how examples were found and how conclusions were reached, as well as on how the research topic was altered over the course of the research and why this was deemed necessary.

Increased replicability of data-less studies would help make the discipline more open to newcomers and other disciplines.

Increased replicability of data-less studies would help make the discipline more open to newcomers and other disciplines. This can help avoid gatekeeping and make research more accessible. More importantly, it would help make studies better by allowing for quality control and corroboration, which remains the central goal of replication studies.

Perspectives on Reproducibility – looking back at the NLRN launch symposium

On 27 October 2023, we welcomed more than 100 researchers, policy makers and research facilitators to our launch event. The aim of the day was to exchange perspectives on reproducibility and to work towards prioritizing actions for the NLRN for the coming year(s).

During the morning, we heard how the UK Reproducibility Network propelled changes in the UK research landscape, and we discussed how we can learn from each other across disciplines to improve research transparency in our own fields. In the afternoon, participants took part in workshops on the topics of education, infrastructure, community building and research practices. The results of those workshops were discussed in the closing panel discussion of the day, where participants and panel members suggested topics and actions for the NLRN to focus on.

Marcus Munafò giving the keynote lecture on collaborative approaches to improving research culture in practice

The symposium brought together a diverse set of researchers and stakeholders, diverse in terms of roles in the research process but also in terms of disciplines. It was important to take stock of the current reproducibility landscape in the Netherlands. We noted that most people were very familiar with the current state of their own field, but that an overview of the entire landscape was lacking. The NLRN can act as a connector, enabling communities to learn from the challenges and advances in seemingly distant fields.

The interactions during the plenary sessions and workshops showed how research domains differ in their challenges and in the current status of reproducibility. The workshop hosts were asked to work towards three focus areas or agenda points for the NLRN to work on. Concrete ideas included creating training materials for researchers on how to use existing digital infrastructure or on how to make executable figures (see the sketch below). The community building workshop suggested that the NLRN should coordinate national codecheck events. During the infrastructure workshop, participants saw a need for determining at what level research infrastructures should be organized (local, national, international), and for discussing how research outputs and processes differ between research areas, which in turn influences the required reproducibility infrastructure. Participants from the education workshop suggested lobbying for teaching reproducible research practices from the bachelor level onwards and showcasing existing efforts in teaching team science.
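For readers unfamiliar with the term, an “executable figure” roughly means that every figure in a paper is produced by a script that anyone can rerun on the underlying data, rather than being a static image edited by hand. A minimal, hypothetical sketch in Python, with made-up data and file names purely for illustration:

```python
# Hypothetical sketch of an "executable figure": the published figure is
# regenerated from the underlying data by rerunning this script.
# Data values and file names are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for loading the raw data behind the figure
# (in a real project this might be pd.read_csv("data/measurements.csv")).
rng = np.random.default_rng(seed=42)          # fixed seed for reproducibility
x = np.linspace(0, 10, 50)
y = 2.0 * x + rng.normal(scale=1.5, size=x.size)

fig, ax = plt.subplots(figsize=(5, 3))
ax.scatter(x, y, label="observations")
ax.plot(x, 2.0 * x, color="black", label="expected trend")
ax.set_xlabel("dose")
ax.set_ylabel("response")
ax.legend()
fig.tight_layout()

# Saving to a fixed output path means the figure can always be rebuilt.
fig.savefig("figure1.png", dpi=300)
```

Training materials along these lines would show researchers how to keep the data, the script and the resulting figure together so that others can reproduce the exact image.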

The steering group is now tasked with assessing which suggestions fit best with the overall goals of the network and how to prioritize them. We will select a few agenda points first while also extending and growing the network.

Stay tuned! We will share our progress on this blog, in our newsletter and on our social media (LinkedIn and X). 

You can find the presentation slides on Zenodo and rewatch the keynote lecture on our website.

Welcome to the NLRN Blog!

Hi there, welcome to our blog! We are currently setting up this blog and planning our first posts.

Within the next few weeks, you can expect a post about our launch event last month and about our first network partners. Sign up to our newsletter for general news and follow our social media (on LinkedIn and X).

Group Picture of the Steering Group and all present Advisory Board members during the Launch of the NLRN on 27 October 2023