Looking back at the National (Reproducible) Research Software Day

On 25 November 2025, more than 200 people attended the National Research Software Day in Delft. There were parallel sessions on training materials, demo sessions, and unconference tables about community building. Many of the contributions were related to reproducible research software. This is a short list of topics and themes that came up during the day.

Ole Mussmann and Sander van Rijn (both from the eScience Center) presenting the SMP Tool, a web application to create Software Management Plans. Photo credit: Robert Kroonen

Unconference session on CODECHECK and reproducibility checks

During the unconference in the afternoon, Joao Guimaraes (TU Delft) and Aleksandra Wilczynska (4TU.ResearchData) presented their work on reproducibility checks. Submitters to the data repository can now request a reproducibility check by an engineer or software steward at their institution. If the reproducibility check (a deliberately low-bar CODECHECK verifying that the main graphs or tables can be reproduced with the shared materials) is successful, the entry receives a badge.

Participants at the unconference session attempted to re-run a reproducibility check within the short allocated time and thought about implementation strategies to make such checks a normal addition to research pipelines.

Find out more about reproducibility checks and CODECHECK at 4TU.ResearchData here:

CODECHECK in Practice: How TU Delft and 4TU.ResearchData Are Making Reproducibility Happen

Training materials to make compendia reproducible

In his talk “Reproducible research through reusable code in 1 day”, Eduard Klapwijk from SURF presented his work on training materials to help researchers make their research compendia reproducible.

He mentioned that one big step for many researchers was to make their code openly available on a platform like GitHub, and that some workshop participants decided to close their repositories again after the workshop.

The training materials can be found here: Lesson material for Reproducible research through reusable code workshop

And this is the accompanying material website: Reproducible research through reusable code

Streamlining lab technology

During the unconference session, Ana Caballo from Radboud University pitched her project ConDAQtor to help physicists streamline their lab technology. The problem many experimental physicists face is that the commonly used lab software LabVIEW can do quite a bit of automation, but it is tedious and few people invest the time to master it. As a consequence, many experiments are not fully automated. Ana suggested bringing people together in a hackathon to create templates and tutorials.
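What could such a template look like? As a purely hypothetical illustration (plain Python with a placeholder device class, not LabVIEW or PyMoDAQ's actual API), a reusable measurement-sweep template might be as simple as this:

```python
# Hypothetical sketch of a measurement-automation template; the Instrument
# class is a stand-in for a real device driver (e.g., via pyvisa or a vendor SDK).
import csv
import random
import time

class Instrument:
    """Placeholder for a real instrument driver."""
    def set_voltage(self, volts: float) -> None:
        print(f"Setting voltage to {volts:.2f} V")

    def read_current(self) -> float:
        # A real driver would query the hardware here; we return a dummy reading
        return random.gauss(1e-3, 1e-5)

def sweep(instrument, voltages, outfile, settle_s=0.1):
    """Step through a control parameter, record readings, and save them to CSV."""
    with open(outfile, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["voltage_V", "current_A"])
        for v in voltages:
            instrument.set_voltage(v)
            time.sleep(settle_s)  # let the setup settle before measuring
            writer.writerow([v, instrument.read_current()])

sweep(Instrument(), [0.0, 0.5, 1.0, 1.5], "sweep_results.csv")
```

The point of a shared template is that only the device-specific driver needs to be swapped out per lab, while the sweep-and-record logic stays the same.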

Photo by Testalize.me on Unsplash

Interestingly, in the same week, Code for Thought published an episode about Sébastien Weber's work on PyMoDAQ, a Python package and project to help researchers build plugins for their experimental setups.

[EN] PyMoDAQ: No more reinventing the wheel – with Sébastien Weber

Making research software FAIR

Roadmap for FAIR software skills: The team behind the Research Software Roadmap presented their work from the tDCC-NES retreat. They developed a taxonomy to link researcher skills to programming and code management skills. The roadmap can be used to design lessons and training plans and to tailor trainings to individual needs.

Website: Research Software Roadmap | Open Science Roadmaps

Roadmap team: Charlotte Summers (Utrecht University), Nami Sunami (TU Eindhoven), Margot Teunisse (Leiden University), Helena Wedig (Erasmus University), Bjørn Bartholdy (TU Delft), Nick Hodgskin (Utrecht University), Yilin Huang (TU Delft), Neha Moopen (Utrecht University), Anastasiya Paltarzhytskaya (Radboud University)

SMP Tool: Create a Software Management Plan That Fits Your Project

During a parallel session, colleagues from the eScience Center presented an open-source, web-based SMP Tool. The tool guides developers through a questionnaire to create a tailored software management plan and recommends best practices, resources, and examples. The tool works for small scripts just as well as for larger pieces of software.

You can find out more on this website (https://smp.research.software) and on their repository (https://ss-nes.github.io/#toolkit-smp-decision-tree). The project is part of the sustainable research software toolkit of the tDCC NES: SS-NES landing page

SMP Tool Team: Ole Mussmann (eScience Center), Carlos Martinez Ortiz (eScience Center), Sander van Rijn (eScience Center), Thijs van Lankveld (eScience Center), Giulio Rosani (University of Groningen)

Code Auditor: Code Auditor is a command-line tool that helps assess research software projects against community best practices and provides actionable recommendations for improvement. The auditor is developed by Serkan Girgin.
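To give a flavor of what such an assessment involves (this is a toy sketch, not Code Auditor's actual logic or interface), a best-practice audit can be as simple as checking a project directory for community-recommended files and printing advice for whatever is missing:

```python
# Toy illustration of auditing a project against community best practices;
# the checks and messages below are examples, not Code Auditor's real rule set.
from pathlib import Path

CHECKS = {
    "README.md": "Add a README describing what the software does and how to use it.",
    "LICENSE": "Add a license so others know how they may reuse the code.",
    "CITATION.cff": "Add a CITATION.cff file so the software can be cited.",
    ".gitignore": "Add a .gitignore to keep generated files out of the repository.",
}

def audit(project_dir: str) -> None:
    """Print which recommended files are present or missing, with advice."""
    root = Path(project_dir)
    for filename, recommendation in CHECKS.items():
        status = "OK" if (root / filename).exists() else "MISSING"
        print(f"[{status:7}] {filename} - {recommendation}")

audit(".")  # audit the current directory
```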

Find the materials on Zenodo: code-auditor

Positionality Statements and DMPs – match made in heaven or one more tick box?

This blogpost is a summary of discussions during the NLRN webinar on Open Qualitative Research and during the OSC NL Barcamp. Anyone interested in the topic is welcome to join this discussion. Please find a link to a newsletter below the text.

Background: What are positionality statements and reflexivity?

Reflexivity practices constitute one set of resources, traditionally used by qualitative researchers, to examine researchers’ beliefs and worldviews and how these may influence the research processes. Qualitative research is typically characterized by methodological approaches that embrace flexibility and subjectivity. It encompasses its own processes and procedures to ensure rigor, one of which is writing clear and coherent positionality statements*.

Positionality statements are a tool for reflecting on how researchers’ personal characteristics, experiences, and opinions could influence the research process. They can become part of methods sections or stand on their own. Positionality statements can also be written for a research group, reflecting on how the individuals’ backgrounds align with, enhance or hinder each other.

*quote from the session description at NOSF 2025

Background: What are Data Management Plans (DMPs)?

From Leiden University Libraries: Data management plans ask researchers to gather all information about the data in their project. They are asked to provide information on the type of data, the method of collection, and the format and documentation of the data. A DMP also includes sections on the facilities that are used, on legal or ethical reasons (not) to share data, and on the way data are shared and preserved in the long term.

In practice, researchers set up such plans at the beginning of any research project, as many research performing organizations mandate them. Often, DMPs are set up once and not updated throughout the project’s life cycle.

During our webinar on Open Qualitative Research in October 2025, the panelists discussed, among other things, positionality statements and how those help bring biases, beliefs and assumptions to light and thereby open up the study process. One participant (Anna Volkova) remarked that data management plans would be a good starting point for any researcher to think about their positionality. Given that DMPs are usually written when setting up a research project, the timing would be right for a first round of reflections. DMPs are also mandatory for any research conducted in the Netherlands and might hence be a way to reach colleagues who are still somewhat removed from the open science sphere, or who are not familiar with reflective practices.

During the Open Science Barcamp in Groningen, we had the chance to take this idea up with fellow barcamp participants and discuss the how and why of this idea in depth:

Firstly, we established that positionality statements are a common tool in a lot of qualitative research. Researchers in the humanities, for example historians, also reflect on their own assumptions, even if they might not write a separate positionality statement. We also confirmed that DMPs are mandatory documentation at all research institutions in the Netherlands. Ideally, a DMP is used as a living document that gets updated as the research project evolves. In reality, many DMPs are written once at the start of a project and then forgotten or not updated as the project continues.

DMPs are already nudging researchers into reflection. There are, for example, ethics questions, and researchers are asked to come up with mitigation procedures for responsible data handling. While those reflections are centered around the project itself and not around the researcher as a person, they might open doors for deeper reflection. Another example is the set of questions around data collection tools and software. Again, researchers are given prompts to think about the ethics and openness of the platforms and tools they will employ. This might open doors for reflections about the positionality of the person performing the research.

We thought about what use mandatory reflection statements would have. Some argued that the code of conduct already mandates many actions to uphold research integrity, and adding reflection would be just one more. A mandate would be useful to get at least some sort of statement out of everyone. Others argued that this might encourage throw-away three-sentence statements, which stand in contrast to the deep reflections positionality statements are based on. On this point, we agreed that copy-pasting a well thought-out statement is acceptable if studies are very similar to each other.

Along this line of thought, we briefly raised the question of how much such a mandate would infringe on individual researchers’ academic freedom. Would they perceive a mandate as too controlling? Or can we mandate reflection because it is (and as it becomes) part of good research practices?

We had many data stewards in the room and discussed whether such a reflection moment could be part of their work.

Most research performing institutions employ data stewards who help with setting up DMPs. Many of the stewards present meet in 1-on-1 sessions with each researcher who submits a data management plan. Some of them were open to encouraging researchers to write positionality statements in those consultation sessions.

What do we do with this?

Do you have ideas on how to take this forward? Share them via email, social media or in a comment under this post. Here are a few ideas:

> take these thoughts back to researchers already working with positionality statements to understand if our discussions so far are cutting too many corners from the original meaning of a positionality statement

> take several standard DMP templates and find the questions where researchers are already nudged into reflections (about the project but also about themselves)

> work on materials/ collect existing materials that data stewards could refer researchers to when starting to work on their positionality statement

> come up with a pilot intervention to integrate reflections on positionality in DMP consultations

> Your ideas!

Please feel free to reach out via email or sign up here to be informed about news around this topic:

https://laposta.nl/f/ssfipa8cm1m3

Helpful links and references:

Jamieson, M. K., Govaart, G. H., & Pownall, M. (2023). Reflexivity in quantitative research: A rationale and beginner’s guide. Social and Personality Psychology Compass, 17(4), e12735. https://doi.org/10.1111/spc3.12735

Workbook Positionality Statements: https://zenodo.org/records/17174961

Blog post: Open by Reflection – Candidness for more transparent science


NEWS partners up with NLRN to advance reproducibility through science communication 

The National Centre of Expertise on Science and Society (NEWS) has become an official partner of the Netherlands Reproducibility Network (NLRN). Together NEWS and NLRN will work to make the reproducibility of research a more prominent part of how science is communicated to society. 

Replication of research is a cornerstone of reliable science, yet in practice it often receives too little attention. Science communication tends to highlight new discoveries rather than the verification of existing findings. NEWS and NLRN want to help change that. NLRN connects universities, university medical centers, universities of applied sciences, and other organizations committed to improving research reproducibility. By sharing expertise and best practices, partners accelerate the transition towards more transparent and trustworthy research. 

Replication and reproduction: how do they work?

What exactly is the difference between replication and reproducibility? And what is the problem? Replication refers to repeating (part of) a study. This is often carried out by a different team of researchers than those who conducted the original study, sometimes many years later. The goal is to determine whether a previously observed effect can be found again. 

Reproducibility refers to repeating only the analysis. The original data (such as historical sources or experimental measurements) are analysed again by others to examine whether they draw the same conclusions from the same dataset. 

Although both processes of repetition are crucial, replicating a study or reproducing an analysis is by no means straightforward. Researchers do not always share sufficient details about their study design, and scientific research often involves highly specialized knowledge and skills. In addition, data are not always shared openly or stored properly, which makes reproducibility difficult. 

Moreover, there are few incentives in the scientific system to replicate research. Many funders prioritize innovative, groundbreaking and above all new research proposals. Likewise, scientific journals rarely make room for verification studies or corrections. 

A more realistic view of science  

‘We need to move toward a more realistic view of science,’ says Sicco de Knecht, Director of NEWS. ‘That means showing society how knowledge is built step by step, and how it is continuously tested and refined. Science is not just a series of spectacular breakthroughs; it’s a careful process of replication, verification, and consensus-building. People outside academia should also understand how scientists scrutinize and improve each other’s work.’ 

Important part of science and research: reproducing results in computational linguistics at Leiden University

Involving society 

Michiel de Boer, Chair of NLRN, emphasizes the importance of science communication in this process. ‘Effective science communication can itself strengthen research reproducibility and help make science more accessible to the public,’ he says. ‘Think of citizen science projects, where studies conducted by professional scientists are replicated and verified together with citizen scientists. This creates a more robust and shared understanding of the phenomena being studied.’ 

But collaboration doesn’t have to be complex, De Boer adds. ‘It’s already extremely valuable when researchers have their assumptions questioned by people outside academia. Real-world experiences can yield powerful insights into how data are analysed or how study design might influence results.’ 

Next steps: building practical tools 

In the coming months, NEWS and NLRN will develop ways to engage both researchers and science communicators in their shared mission. ‘We want to offer practical guidance to help make replication and reproducibility part of everyday conversations about science,’ says De Knecht. ‘For instance, through expert sessions and collaborative initiatives.’ ‘These are challenging topics,’ adds De Boer, ‘but that makes it all the more important to make them accessible and familiar.’ 

Open by Reflection – Candidness for more transparent science

Post about the workshop “Positionality Statements: A Tool to Open up Your Research”, held 26 September at VU Amsterdam. Organized by Dr. Tamarinde Haven, Dr. Bogdana Huma and Daniela Gawehns.

What comes to your mind when you think of openness in research? If you now take out a pen and start writing freely, which associations would make it onto paper? With a group of 11 colleagues, we met, sketched and discovered how different notions of openness, academic values and personal backgrounds influence how we do research. 

Positionality statements come out of standpoint theory and are based on the idea that we not only experience the world in different ways, but that those experiences also shape our research, including the study design and the analysis of our data. With this workshop, we wanted to explore whether positionality statements can be used to bring a wider notion of openness to the open science and reproducibility discussion.

During the introduction round, participants shared their career paths and motivations to do the research they are currently doing or supporting. We heard stories of more or less winding paths from the full breadth of academic backgrounds, fields and methodologies.

After a short introduction of standpoint theory and a warm-up exercise to get everyone used to writing freely and with pen on paper, participants were asked to write about what openness means to them. 

Given the setting of the workshop within the open science week, many responses were along the lines of transparency and rigor, trust and accountability. Some mentioned openness as being something courageous, inviting criticism, fluidity and uncertainty. We also discovered that openness has a lot to do with vulnerability, a topic that came back later when discussing how detailed positionality statements should be. 

The remaining writing prompts invited workshop attendees to write about their own values, beliefs and backgrounds, as well as to reflect on how those values, backgrounds and beliefs influence their research. The resulting sketches are a first step towards a draft positionality statement.

Participants received this booklet with writing prompts and further reading materials.

A topic that came back in several questions was the level of detail a positionality statement should have. In some fields of research the mention of gender or seniority might invite extra harsh criticism, while in other research contexts the gender of the researcher might be an important attribute to mention. The workshop leaders settled on a very practical approach: the author decides what is relevant for their statement. If mentioning academic backgrounds is enough to make clear through which lens a certain problem was approached, this might suffice.

This workshop introduced participants to the tool of positionality statements and was by no means enough to lead to more than a few first thoughts. We invited participants to think about opening research processes beyond sharing research outputs openly. This will not directly increase the computational reproducibility of someone’s research outputs. A positionality statement will, however, help consumers of research contextualize outputs and grasp how they could be reused. Accepting that our research is influenced by external and internal factors opens up the process and makes for more vulnerable, accountable and maybe humble research.

Trainers and teachers are more than welcome to reuse the workshop materials and can find the workbook and slide deck on Zenodo: https://zenodo.org/records/17174961

More resources:

  1. Jamieson, M. K., Govaart, G. H., & Pownall, M. (2023). Reflexivity in quantitative research: A rationale and beginner’s guide. Social and Personality Psychology Compass, 17(4), e12735. https://doi.org/10.1111/spc3.12735 
  2. similar workshop at SIPS 2021: OSF | SIPS Positionality Workshop.pdf
  3. Field, S., & Pownall, M. (2025). Subjectivity is a Feature, not a Flaw: A Call to Unsilence the Human Element in Science. OSF. https://doi.org/10.31219/osf.io/ga5fb_v1

Advancing reproducibility and Open Science one workshop at a time – community-building in the Netherlands

This is a crosspost from the CODECHECK website.

This blog post is the final one in our series of posts about the “CODECHECKing goes NL” project, which has been running since May 2024. We have been working hard to promote reproducibility and Open Science in the Netherlands through a series of workshops, and we are excited to share our experiences and future plans with you.

Find the full series of posts on our project page.

Why is this important?

A paper’s code rarely gets checked – everyone in academia knows about peer reviewing articles, but few people engage in reproducibility checks of the materials behind the paper. Reproducibility checks are less about vetting research (e.g., catching fraud, finding errors) and more about ensuring the reusability of research. It is an extension of the thought that if we want to stand on the shoulders of giants, those giants better be standing on solid ground. And solid ground for computational workflows means good documentation that is understandable outside of the inner circle of the authors of a research article.

Reproducibility Checks at the Center for Digital Humanities at Leiden University on 14 February 2025.

A reproducibility check is about the question of whether one can reproduce the results reported in the paper (i.e., the statistics, tables, figures, maps, or graphs) from the provided data, code, or other materials. The CHECK-NL project focuses on the computational reproducibility of published research and tries to answer the question: “Can someone else recreate the output on their hardware using the materials and documentation provided by the authors?”. We call this type of reproduction a CODECHECK.
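At its core, such a check boils down to two steps: re-run the authors’ pipeline, then compare the regenerated output with the published one. Here is a minimal sketch, assuming the authors shipped an analysis script and a results table (the file names are hypothetical placeholders):

```python
# Minimal sketch of a computational reproducibility check.
# "analysis.py" and the CSV file names are hypothetical placeholders.
import subprocess
import pandas as pd

# Step 1: re-run the shared analysis pipeline on our own machine
subprocess.run(["python", "analysis.py"], check=True)

# Step 2: compare the regenerated table with the published one,
# tolerating tiny numerical differences between platforms
reproduced = pd.read_csv("results.csv")
published = pd.read_csv("published_results.csv")
pd.testing.assert_frame_equal(reproduced, published, check_exact=False, atol=1e-8)
print("Main results reproduced - check passed.")
```

In practice, a CODECHECK also results in a written certificate (mentioned below), but this re-run-and-compare loop is the heart of it.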

Who did what?

A bunch of enthusiasts for Open Science and Reproducible Research from the University of Twente, TU Delft, and the UMCG in Groningen applied for funding from NWO via its Open Science funding scheme to organize four in-person events at their respective institutions and beyond. Through these events, they intended to jump-start a Dutch reproducibility checking community. The project proposal also included work on the CODECHECK website and registry to better present codechecks and the codecheckers behind them.

Along the way, the group of enthusiasts grew and instead of the planned four events, there were a total of six in-person events: one more as a conference workshop (AGILE in Dresden) and another one (TU Eindhoven) organized by attendees of the first event (exactly what this project was aiming for!). At the events, we also connected with representatives of a data repository, diamond open access publishers, and digital competence centers who are considering their own version of computational reproducibility checks.

The four events in Delft, Enschede, Rotterdam and Leiden brought in a total of 40 researchers, many of whom opened up their own work to be assessed by others, and who together codechecked 15 papers. The additional events in Eindhoven and Dresden introduced an international crowd to the CODECHECK principles. Each event had a different topic, focusing on different parts of the research landscape, which resulted in different challenges and learning opportunities at each event. While the groups in Delft and Enschede mainly faced problems with computing environments, documentation, and high computational loads (too big for laptops or the workshop time), the group in Rotterdam raised the issue that reproducibility checks can be pretty dry at their core and may be almost trivial if only heavily summarized data can be shared. At the final event in Leiden, we brought linguists and digital humanists together. One of the questions raised was: how do we start a reproducibility crisis in the humanities? (Because maybe we need one to raise awareness about this important topic in the field?)

What are the results? What did we learn?

One clear lesson learned was how different the crowds from different disciplines are – although the advertisement for the events and their setup and schedule were quite similar, they played out quite differently. Another important lesson is that you need a group of enthusiastic participants to drive such events – fortunately, we always had those!

There were people with a wide range of coding skills at the events. The wrap-up sessions always gave us the impression that all of them took something home and learned something. Working with someone else’s code and reproducing another researcher’s workflow requires craftsmanship and a hands-on, can-do attitude that is rarely taught in typical university classes. The workshops and the experienced mentors participating in them, however, could provide such a setting.

The four main in-person events required attendees to invest an entire workday into this topic. In retrospect, this might have prevented interested people from joining. For raising awareness, shorter, more targeted events might be a suitable alternative.

Getting the certificates was a nice by-product but was certainly not the only outcome. Authors whose project didn’t pass the reproducibility check were given feedback so that they could still make their work reproducible. Participants got the chance to learn from other people’s workflows and software stacks.

Another surprise was how difficult it still is to convince colleagues to submit their work to a reproducibility check. The social layer of this otherwise rather technical question is the biggest challenge for the project team and for people working with reproducibility checks. The technological challenges are less exciting than the positive experiences and potential benefits; see, e.g., this blog post about an author’s experience of being “codechecked”.

From our discussions we distilled the notion that the best time to get a reproducibility check is at the preprint stage or during peer review – then people are still motivated to fix issues before publication. Also, a certificate is a positive signal towards peer reviewers (at least that’s what we hope). If published work gets checked, authors need to be very motivated to improve documentation or fix bugs, certainly if those are hidden at some deeper level of the code.

Concrete outcomes:

What are the next steps?

The CODECHECK or reproducibility check community in the Netherlands is growing. We met with the wider community to evaluate the project and make new plans. 4TU.ResearchData is planning to work on codechecks as part of its service as a data repository and is working closely with the four technical universities.

The community in the Netherlands will continue to meet and work on topics like reproducibility checks as a service or as part of teaching curricula, and academic culture around code checking. Internationally, we have reached out to colleagues in Bristol and Cologne.

Preregistration For Student Assignments

How can we integrate open science practices into our curricula? In this webinar, two lecturers told us how they are including preregistrations in their students’ curricula.

“When you preregister your research, you’re simply specifying your research plan in advance of your study and submitting it to a registry.” (Quote from the website of the Open Science Framework)

Ewout Meijer started the webinar and told us how he convinced thesis coordinators to include preregistration in their courses. As an Open Science Ambassador for his faculty, Ewout was backed by the dean to find out where in the various courses open science practices could be integrated. Preregistrations are just one example of those practices.

He shared several tips on how he convinced colleagues to integrate open science practices:

  • Make it easy for your colleagues – minimize the extra workload by sharing templates and offering introduction lecture materials
  • Make them want it – top-down mandates to include open science practices don’t work well, but if your colleagues are convinced that this is the right thing to do, they will follow
  • Don’t be overambitious – first the science, then open science. Finding the balance between what students need to know in terms of scientific content and what they need to know about the scientific system is difficult and will differ between courses and student populations.
  • Searching for “project proposals” as a required component for students to pass a course is a good way to find courses where preregistration can be taught. You just need to replace the proposal with a registration. 
  • Including new (open science) content means kicking out some other content – see what can be replaced and what can be tossed

At Maastricht University, students are asked to use the AsPredicted template and submit it as a PDF (i.e., they don’t upload it on aspredicted.com). Ewout mentioned that not all internship projects are suited for this format, so students might have to adjust it or come up with a project just to fill in the template and pass this grading component.

Students get exposed to the idea of preregistration, and the same goes for workgroup tutors. Tutors come from a wide range of research groups and are themselves learning about preregistrations while helping students with their thesis work.

Elen Le Foll asked her seminar students to preregister their term paper analyses. Adding this component required some extra time investment to make sure students understood what was expected of them, and for extra feedback rounds. The preregistration adds at least one round of feedback to the term paper and requires students to plan ahead and submit their preregistration on time, leaving enough time to incorporate the feedback into their final data analysis. On the positive side, students can learn from the feedback and include it in their work. For regular term papers, students only get feedback at the end, when they do not need to use it and cannot improve their work anymore.

As Elen’s course is an advanced course for master students, some of her students want to turn their term paper into a research paper. For them, the preregistration is an excellent way to get a timestamp for their analysis. 

In the discussion with Andrea and other attendees, we discussed how the AsPredicted format can be used by students and whether a full registered report might be even more suitable. We briefly touched upon the difficulty of grading preregistrations and how much detail we should ask of students. Another point of discussion was how we can sell preregistration to students who are not interested in becoming researchers. This led to a discussion on how to balance the need for academic training with content and applications outside of academia.

Thanks to our presenters:

Elen Le Foll is a post-doctoral researcher and lecturer in linguistics at the Department of Romance Studies at the University of Cologne. She likes to integrate Open Science principles and practices in her seminars and recently asked her students to pre-register a study as part of a term paper assignment.

Ewout Meijer works at Maastricht University and coordinates the thesis module for the research master in psychology. He introduced preregistrations for thesis projects. 

Useful Links:

AsPredicted: aspredicted.org

OSF Preregistration Templates: www.cos.io/initiatives/prereg

Making reproducibility work for qualitative research methods – and not the other way around.

This blogpost is inspired by the Open Qualitative Research Symposium at VU on 28 March 2025. We thank the organizers and speakers for this interesting and energizing meeting.

The blogpost is a summary of and reflection on some key issues that were raised during the event, how those relate to reproducibility in a broader context, and how the NLRN is working to solve some of those issues.

Authors: Tamarinde Haven and Daniela Gawehns; Photo credits: QOS organising team

Open and Qualitative – an epistemic clash?

How to open and share data collected with qualitative research methods was a central point of discussion and featured prominently in the workshop program. Tools for anonymizing data, and for sharing data such as videos and pictures, were presented and discussed. A recent review on enablers of reproducibility in qualitative research reflects this emphasis, finding that more than half of the included studies addressed Open Data as a topic. One of the authors explained: These papers are not all about how to share data, but many of them attempt to explain how the requirement to open data does not align with the needs of qualitative research:

While familiar privacy concerns also apply in quantitative research, qualitative researchers face an additional, epistemic challenge: most require access to un-anonymized, rich data as a basis for interpretation. And this type of data is difficult to share. The epistemic requirements clash with the broader push for transparency.

The discussion around Open Data requirements for qualitative research highlights a mismatch between the epistemics of a majority voice in the Open Science discussion and the needs of a smaller group of researchers. At the NLRN, we aim to include as many voices as possible in the reproducibility debate, and to take a range of epistemic challenges into account. 

Agata Bochynska gave a presentation on how they support qualitative researchers at the library of the University of Oslo.

Qualitative stepping stones

The second main topic addressed how qualitative research methods can help quantitative scientists work more reproducibly. Reflexivity, a central aspect of qualitative research, can serve as a stepping stone toward process reproducibility and transparency in quantitative research:

“Reflexivity is the process of engaging in self-reflection about who we are as researchers, how our subjectivities and biases guide and inform the research process, and how our worldview is shaped by the research we do and vice versa” (quote from Reflexivity in quantitative research: A rationale and beginner’s guide – Jamieson – 2023). In their paper, Pownall and colleagues make the case for reflexivity as a basic first step towards reproducibility: a reflection on the research process makes opening that process up much easier.

The NLRN brings organisations together to share best practices and learn from each other. We would love to amplify and share cases where methods from qualitative research helped quantitative researchers approach the reproducibility of their own non-qualitative work in new ways and make it more transparent.

Parallel Communities of Practice

Several speakers voiced their concern about duplication of efforts as parallel communities around open qualitative methods form. The recent call for a European community of qualitative researchers is an answer to this fear and will hopefully create more synergies, avoiding duplicated efforts that slow the process of change down. 250 people have already expressed their interest in such a community. We will cross-post updates on this initiative on LinkedIn and via our newsletter.

Bogdana Huma and Sam Heijnen from VU were the core organizing team of the symposium and guided a co-creation session at the end of the day.

The NLRN will organize a symposium in collaboration with the Dutch-Belgian Context Network for Qualitative Methodology to continue the discussion on transparency at the national level and foster learning from other, non-qualitative methodologies.

Open to the citizen – with methodological consequences

A topic that rarely gets touched upon in the reproducibility discussion is participatory or community-driven research. The goal of this approach is to make research more relevant by including citizens not only as participants or data collectors, but also as researchers who guide the research process.

With reproducibility being a way to share research processes, questions around methods and research pipelines arise: Are those methods flexible enough to accommodate evolving consent forms, fluid management plans and research designs that are sourced from the impacted communities themselves? Is there a way to pre-register those changing plans and how would we go about it? How can we be transparent about changes and present them as an integral part of the research design, rather than as flaws in planning or execution?  

Links: 

Materials of the Symposium: Events | Community Of Practice (for slide decks) and Events | Community Of Practice (for recordings)

Library used for the mentioned review on reproducibility of qualitative research: A context-consent meta-framework for designing open (qualitative) data studies | Reproducibility of qualitative research – an integrative review | Zotero 

Review Paper: MetaArXiv Preprints | Reproducibility and replicability of qualitative research: an integrative review of concepts, barriers and enablers

Platform for Young Meta-Scientists (PYMS): Discussing the Future of Meta-Science  

Blogpost by Cas Goos

The goal of our organization, the Platform for Young Meta-Scientists (PYMS), is to bring together Early Career Researchers (ECRs) working on meta-science, while providing a place to network with peers and discuss research. We consider this an important initiative because ECRs have been at the forefront of many reform initiatives, and because, despite the increasing number of meta-science ECRs, many still work disconnected from other researchers with shared interests. Bringing ECRs together is a crucial step in enabling large-scale research and strengthening reform initiatives that are key to improving science.

The PYMS Meeting 

To further these goals at the most recent PYMS meeting, which took place at the UMCG on December 5th 2024 as a pre-symposium to the NLRN symposium on December 6th, we invited an audience of ECRs from a variety of backgrounds. We are glad to have achieved a broad representation of ECRs, including from abroad. Presentations spanned a variety of meta-science topics such as spin, equivalence testing, and reproducibility. Work was presented and discussed not only from the social sciences, but also from the computational sciences, sports and exercise science, and nanobioscience.

Map of PYMS attendees and their areas of expertise/interest. Map backdrop was constructed by Tamarinde Haven. 

Furthermore, we created many opportunities for discussion and networking throughout the day, including invitations for collaborations.

We were also fortunate enough to have Tracey Weissgerber give a keynote talk on having a career in meta-science, followed by a series of provocative statements on the same topic. Both led to a practical and informative discussion for the attendees on building a career in meta-science.

The Future of PYMS 

Since the end of last year, the board for PYMS has been renewed. Four new ECRs have joined the board to carry the torch. The new members are Sajedeh Rasti from Eindhoven University, Raphael Merz from Ruhr University Bochum, and Anouk Bouma and myself from Tilburg University. Like the previous board, we plan to continue creating a community for meta-science ECRs with informal networking opportunities. However, we also plan to expand the reach of PYMS in new ways. 

As a first step, PYMS will go international during our next meeting. In this way we expand our network beyond the Netherlands to prompt broader collaboration in our field. This is especially important here, since projects requiring large investments of time and effort from multiple parties are of crucial importance for investigating and improving scientific practice effectively. This meeting will be held this summer; there will be plenty of opportunities to discuss future projects, and financial compensation for international presenters will encourage a broad representation of young meta-scientists.

Interested ECRs can join our mailing list, through which they will receive a link to our Discord server to connect and communicate with fellow ECRs on meta-science, as well as keep up to date about future events, including the next PYMS meeting this summer.

A LEGO® Metadata Challenge Workshop 

Blog post by DCC Groningen and DCC UMCG

In this workshop, participants experienced reproducibility in action through an experiment using LEGO®. In the first part of the session, each group created a structure using LEGO® bricks and prepared accompanying documentation and instructions of how to create it. Afterwards, another group attempted to recreate the structure based on the provided documentation.  
The exercise sparked lively discussions about the nuances of metadata and different documentation styles, the challenges of interpreting and applying them, and the critical role they play in ensuring that research can be reliably reproduced. One of the interesting conclusions was that keeping reproducibility in mind not only influences the approach to documentation and metadata but also shapes the design of the structure (i.e., the research project) to ensure its reproducibility.

Feedback from the participants was overwhelmingly positive. Many appreciated the innovative approach, noting that the hands-on activity helped solidify their understanding of the importance of proper documentation and metadata’s practical application. The workshops also fostered a sense of community and collaboration, as participants shared their experiences and insights. 

Spot the difference 

Can you spot the difference (or lack thereof) between some of the original creations from our participants and the reconstructed versions from another team?  

Reproducible materials  

In line with the reproducibility theme, this workshop used materials from the University of Glasgow¹, and participants were encouraged to bring this workshop to their own networks.

We are grateful to the Netherlands Reproducibility Network for providing us with the opportunity to share our knowledge and engage with such a passionate group of individuals. We look forward to continuing our efforts to promote research reproducibility and hope to bring more creative and interactive workshops to future events! 

———————————————————- 

On December 6th, 2024, the Digital Competence Center (DCC) from the University of Groningen and the DCC from the University Medical Center Groningen had the pleasure of attending the Netherlands Reproducibility Network Symposium. This event brought together researchers, data managers, and enthusiasts dedicated to improving the reproducibility of scientific research from all over the country. As part of the symposium, the two DCCs collaboratively presented two engaging sessions of a workshop titled “A LEGO® Metadata Challenge.” 

1 Donaldson, M. and Mahon, M. (2019) LEGO® Metadata for Reproducibility game pack. Documentation. University of Glasgow. (doi: 10.36399/gla.pubs.196477).

The Future of Reproducibility…  

… lies at the end of long corridors 

Under the theme of “The Future of Reproducibility”, about 100 scholars, policymakers, funders, researchers, science enthusiasts and facilitators met at the UMCG for a day of interaction, discussion and new connections.  

Casper Albers, the head of the Open Science Program in Groningen, welcomed us and kicked off the plenary morning session, underscoring the importance of retaining the focus on rigorous, open science, even in a future where funding may be scarcer. The keynote speakers presented challenges and developments in research reproducibility from two different domains: biomedical research and history. Tracey Weissgerber shared how open research has evolved in the biomedical sciences and called for a look beyond reproducibility to foster responsible and robust research results. Pim Huijnen and Pieter Huistra presented their work on reproducibility in history. They shared their insights from an interpretative field and highlighted how the domain of history needs to find its own definition of what reproducible scholarship means – and how this affects the choice of tools to achieve this reproducibility.

The seemingly endless corridors of the medical center brought people together to discuss the presentations while reaching their individual step counts. The lunch area was filled with the chatter of mingling attendees as new connections formed around poster topics, including a chatbot for training materials and a crowdfunding platform for replication studies.

In the afternoon, participants followed workshops on reproducibility-related topics. The participants of the LEGO® workshop experienced hands-on what it means to work with limited metadata. Attendees could also get a glimpse of the current status of interventions to promote reproducibility and think about how they would promote reproducibility in their own groups. The discussions on research integrity and applied research carried over to the coffee break, where attendees mingled and had a chance to meet the poster presenters.

The lively closing session featuring early career professionals sparked discussions around preventing dreaded futures of science communication, including how to avoid the weaponization of skepticism to stir distrust in science.

You can find the materials (posters and slide decks) here on Zenodo and the recording of the keynotes here.