Making reproducibility work for qualitative research methods – and not the other way around.

This blogpost is inspired by the Open Qualitative Research Symposium at VU on 28th March 2025. We thank the organizers and speakers for this interesting and energizing meeting.

This blogpost summarizes and reflects on some key issues raised during the event, how they relate to reproducibility in a broader context, and how the NLRN works on solving some of them.

Authors: Tamarinde Haven and Daniela Gawehns; Photo credits: QOS organising team

Open and Qualitative – an epistemic clash?

How to open and share data collected with qualitative research methods was a central point of discussion and featured prominently in the workshop program. Tools for anonymizing data, and for sharing data such as videos and pictures, were presented and discussed. A recent review on enablers of reproducibility in qualitative research reflects this emphasis, finding that more than half of the included studies addressed Open Data as a topic. One of the authors explained that these papers are not all about how to share data; many of them instead attempt to explain why the requirement to open data does not align with the needs of qualitative research:

While familiar privacy concerns also apply in quantitative research, qualitative researchers face an additional, epistemic challenge: most need access to un-anonymized, rich data as a basis for their interpretations, and this type of data is difficult to share. These epistemic requirements clash with the broader push for transparency.

The discussion around Open Data requirements for qualitative research highlights a mismatch between the epistemics of a majority voice in the Open Science discussion and the needs of a smaller group of researchers. At the NLRN, we aim to include as many voices as possible in the reproducibility debate, and to take a range of epistemic challenges into account. 

Agata Bochynska gave a presentation on how they support qualitative researchers at the library of the University of Oslo.

Qualitative stepping stones

The second main topic addressed how qualitative research methods can help quantitative scientists work more reproducibly. Reflexivity, a central aspect of qualitative research, can serve as a stepping stone toward process reproducibility and transparency in quantitative research:

“Reflexivity is the process of engaging in self-reflection about who we are as researchers, how our subjectivities and biases guide and inform the research process, and how our worldview is shaped by the research we do and vice versa” (quote from Reflexivity in quantitative research: A rationale and beginner’s guide, Jamieson et al., 2023). In their paper, Pownall and colleagues make the case for reflexivity as a basic first step towards reproducibility: reflecting on the research process makes opening that process up much easier.

The NLRN brings organisations together to share best practices and learn from each other. We would love to amplify and share cases where methods from qualitative research helped quantitative researchers approach the reproducibility of their own non-qualitative work in new ways and make it more transparent.

Parallel Communities of Practice

Several speakers voiced their concern about duplication of efforts as parallel communities around open qualitative methods form. The recent call for a European community of qualitative researchers is an answer to this fear and will hopefully create synergies that avoid duplicated effort and keep the process of change from slowing down. 250 people have already expressed their interest in such a community. We will cross-post updates on this initiative on LinkedIn and via our newsletter.

Bogdana Huma and Sam Heijnen from VU were the core organizing team of the symposium and guided a co-creation session at the end of the day.

The NLRN will organize a symposium in collaboration with the Dutch-Belgian Context Network for Qualitative Methodology to continue the discussion on transparency at the national level and foster learning from other, non-qualitative methodologies.

Open to the citizen – with methodological consequences

A topic that is rarely touched upon in the reproducibility discussion is participatory or community-driven research. The goal of this approach is to make research more relevant by including citizens not only as participants or data collectors, but also as researchers who help guide the research process.

Since reproducibility is a way of sharing research processes, questions about methods and research pipelines arise: Are those methods flexible enough to accommodate evolving consent forms, fluid management plans, and research designs that are sourced from the impacted communities themselves? Is there a way to pre-register such changing plans, and how would we go about it? How can we be transparent about changes and present them as an integral part of the research design, rather than as flaws in planning or execution?

Links: 

Materials of the Symposium: Events | Community Of Practice (for slide decks) and Events | Community Of Practice (for recordings)

Zotero library used for the review on the reproducibility of qualitative research mentioned above: A context-consent meta-framework for designing open (qualitative) data studies | Reproducibility of qualitative research – an integrative review | Zotero

Review Paper: MetaArXiv Preprints | Reproducibility and replicability of qualitative research: an integrative review of concepts, barriers and enablers

The many dimensions of reproducible research: A call to find your fellow advocates.

Blogpost by steering group member Tamarinde Haven.

Various definitions of reproducibility and its sister concepts (replicability, replication) are floating around [Royal Netherlands Academy of Arts and Sciences, 2018; Goodman et al., 2016]. While there are relevant (disciplinary) differences between them, they generally share a focus on the technical parts of reproducible research.

 

With the technical focus increasingly taking centre stage [e.g., Piccolo & Frampton, 2016], one might assume that technical solutions are the panacea; call it techno-optimism in the reproducibility debate. The thinking goes something like this: provided that institutions facilitate the relevant infrastructure and tools, researchers employed at those institutions will carry out reproducible research [Knowledge Exchange, 2024].

To be clear, making reproducible practices possible is a necessary step. But it is one of many [Nosek, 2019]. Now that you have enabled more reproducible practices, how are you going to ensure they are picked up by researchers?

Now that you have enabled more reproducible practices, how are you going to ensure they are picked up by researchers?

Back in 2021, I defended my thesis on fostering a responsible research climate. One of the key barriers to responsible science that researchers flagged was a lack of support. Our participants were bogged down by inefficient support systems that connected them to support staff who were generalists and could not efficiently help them [Haven et al., 2020].

Many Dutch research-performing organisations nowadays employ data stewards, code experts, and product owners of the various types of software recently developed to promote reproducibility. These experts maintain software packages and select the infrastructure needed to support a reproducible data flow. They advise on which tools are suitable for a given project and ensure these are up to date. They implement new standards through workshops and training. And they do so much more.

During the launch of the Netherlands Reproducibility Network last October, we heard of various disconnects at institutions. We learned about meticulously trained data stewards sitting around, waiting for researchers to find them. Those who found them returned, but that is only partially good news. I am not aware of their exact reward mechanisms, but many organisations follow the flawed principle that if no one requests a solution, there is no problem to be bothered by, which is false for many different reasons*. Part of this disconnect may simply be a matter of time: culture change is a process that typically moves much more slowly than its driving forces would like.

In my personal experience, these professionals have been highly knowledgeable. I found reproducibility advocates who helped me draft a coherent data management plan for my funding proposals, advised on relevant metadata standards, wrote a piece of Python code to connect my data with existing databases, and finally connected me with yet other specialists who maintain the newly created data archiving infrastructure in The Netherlands.

As a network, it is in our DNA to exchange information, but also, crucially, professional contacts. That is why we, as the Netherlands Reproducibility Network, want to focus on promoting connections between researchers and research support staff. What strategies are currently being used to connect these groups? How can we learn from successful efforts, or from institutions where these parties seamlessly find one another?

And no, we don’t plan on falling into the same trap that the participants in my thesis research talked about. General, one-size-fits-all solutions likely won’t cut it. That is why we hope to facilitate ongoing pilots, and launch new ones, that investigate such connections. But as so often with these efforts, the best possible world is one in which such pilots are not necessary. So, to all my research colleagues: please find your fellow reproducibility advocates at your institution. Acknowledge their help in your research products. And make sure to share their valuable expertise with your lab members; we should honour the crucial human dimension of reproducible research.
 

PS. Do you know of any ongoing pilots? Get in touch!

* By that logic: just because no one reported any social safety issues to the confidential advisor, there are none, and we do not need confidential advisors?

References: 

Goodman, S. N., Fanelli, D., & Ioannidis, J. P. A. (2016). What does research reproducibility mean? Science Translational Medicine, 8(341), 341ps12.

Haven, T., Pasman, H. R., Widdershoven, G., Bouter, L., & Tijdink, J. (2020). Researchers’ perceptions of a responsible research climate: A multi focus group study. Science and Engineering Ethics, 26(6), 3017–3036. https://doi.org/10.1007/s11948-020-00256-8

Piccolo, S. R., & Frampton, M. B. (2016). Tools and techniques for computational reproducibility. GigaScience, 5(1), 30. https://doi.org/10.1186/s13742-016-0135-4

Royal Netherlands Academy of Arts and Sciences (KNAW). (2018). Replication studies – Improving reproducibility in the empirical sciences.

Replication in philosophy, or replicating data-free studies  

Blog post by Hans Van Eyghen, Member of the NLRN steering group

The replication crisis, which arose primarily in the biomedical and psychological sciences, was both a blessing for replications and somewhat of a curse. Its lasting impact lies in the recognition of the need for replicability. Replicability is now generally seen as a way to make the affected disciplines better and more robust, allowing for quality control and independent confirmation of findings. The minor curse inflicted by the replication crisis is that replicability is sometimes regarded as a specific solution to a specific problem: on this view, disciplines without a replication crisis do not stand in need of increased replicability, and any push for it may create more problems than it solves. Such a sentiment is especially at work in the humanities, which did not go through a replication crisis. This has left some in the humanities with the idea that replication is a fix for others, and that pushing for increased replicability in the humanities would mean importing a problem, and a methodology, that are not theirs.

There is no reason why studies in the humanities would not benefit from increased quality control or corroboration

While there are profound differences between the humanities and other disciplines, the key reasons for increased replicability remain the same. Quality control refers to checking whether studies are well conducted; it is key to weeding out mistakes (intentional or not) and other reasons why a study may not be up to standard. Corroboration refers to finding the same or similar conclusions when redoing the same (or a similar) study. There is no reason why studies in the humanities would not benefit from increased quality control or corroboration. An argument in favor of increased replicability, which allows for both, is therefore quickly found.

Nonetheless, some suggest reasons why increased replicability might not be feasible for the humanities. While the details vary, the reasons center on the idea that the humanities are just too different. More than other disciplines, the humanities involve interpretation. Their objects are not just quantifiable data, but qualitative, meaningful objects or subjects. Finally, a considerable number of studies in the humanities do not involve data, analysis thereof, or anything like it at all. In those cases, it is not at all clear what replication would involve.

I will focus here on the final argument, i.e. that some humanities studies do not have data and therefore have no need for replicability. Replicability usually involves being clear about the data used, how it was analyzed, and how conclusions were drawn. A clear example of such ‘data-less’ studies is a priori reasoning in philosophy. A considerable number of studies in philosophy consist of reflection on arguments or questions like ‘Is knowledge justified true belief?’ or ‘Is morality objectively true?’. Attempts at answers do not rely on surveys (unless one is engaging in experimental philosophy) or empirically collected facts. Instead, philosophers tend to rely on a priori methods, such as reflective equilibrium or conceptual analysis.

Photo by Alex Block on Unsplash

Does lack of data exclude replicability or replication of such studies? It does not. Data-less studies can benefit from increased transparency and detail as well. As most beginning PhD students in philosophy know, it is often highly opaque how conclusions are drawn in a priori philosophy. Philosophers do tend to define terms clearly and meticulously write down arguments for why a conclusion holds. More often than not, however, philosophers are not upfront about how they analyze their concepts. Some method like conceptual analysis is usually at work, but the exact method used is often not made explicit. Philosophers also tend not to be transparent about how they arrived at their conclusions, how they came up with examples, or why they started thinking about the topic in the first place. The answers to these questions may be quite trivial and uninformative, like ‘I was thinking hard for a long time’. In many cases, however, philosophers rely heavily on input from paper discussions, conference presentations and peer review. Often this input goes unacknowledged, or acknowledgment is limited to a single note in the final paper.

What would a replicable data-less study look like? Like other replicable studies, it would need a methodology section where the researcher lays out the methods she used and why these methods are appropriate. Such a section allows (younger) researchers to reconstruct how the study came about and to do the same study again. A replicable paper would also include information on how examples were found and how conclusions were reached, as well as on how the research topic changed over the course of the research and why this was deemed necessary.

Increased replicability of data-less studies would help make the discipline more open to newcomers and other disciplines.

Increased replicability of data-less studies would help make the discipline more open to newcomers and other disciplines. This can help avoid gatekeeping and make research more accessible. More importantly, it would make studies better by allowing for quality control and corroboration, which remain the central goals of replication studies.