Abstracts 2024

This website is still under construction; we are adding full abstracts as we receive them from our workshop hosts. Click on the teaser to read the full abstract!

Keynote Tracey Weissgerber

Title: Reproducibility: The biomedical perspective

Abstract: This talk will explore the current state of efforts to improve reproducibility in biomedical research, and the importance of open science for reproducibility…

This talk will explore the current state of efforts to improve reproducibility in biomedical research, and the importance of open science for reproducibility. The open science movement started with open access, before shifting to open data and open code, and is now extending to other outputs (e.g., software, educational resources) and earlier phases in the research process (e.g., methods and protocols). We will discuss whether moving backwards through the research process is the best way to move forward, and explore the intersections between reproducibility and reusability.

Bio: Tracey Weissgerber is a meta-researcher and physiologist working to improve data visualization, reporting of open and reusable methods and protocols, and other factors that affect transparency and reproducibility in biomedical research. She is the organizer of the Automated Screening Working Group/ScreenIT and Excelcsior Chair at the University of Coimbra.

Keynote Pim Huijnen and Pieter Huistra

Title: Walking through a circle of reproducibility in historiography

Abstract: The notions of reproducibility and replicability are conspicuously absent from the field of history, mirroring the widely shared belief among…

The notions of reproducibility and replicability are conspicuously absent from the field of history, mirroring the widely shared belief among historians that the past should be studied in its uniqueness and singularity. These historians view historiography as a collective effort to achieve as broad an understanding of the past as possible, but not one characterized by agreement and corroboration. The cliché goes that historiography is “an argument without end,” valuing not similarity, but the range of different interpretations. In this difference lies the richness of the field.

The paradox, however, is that historical scholarship does contain elements of reproducibility. Historians, despite the above, broadly agree on many interpretations of the past. They also share norms, practices, and procedures for academic historical research, which mostly pertain to accountability and transparency. The hallmark of academically sound historical scholarship is the footnote, which implies a norm of traceability—and, to some extent, reproducibility.

Given this norm, dismissing reproducibility on fundamental grounds feels almost too easy. Why not see what happens when historical research is invited to retrace its steps? This is a pertinent and urgent question, given that the increased digital availability of primary and secondary sources will make reproductions of historical scholarship easier to undertake, whether one believes in it or not. This has been the starting point of our project on reproducing historical scholarship. In our talk, we will share our findings and the implications these findings, in our view, have for the place of reproducibility in history—and also for the meaning of reproducibility itself.

As a teaser, reproducing history requires a sound understanding of what reproductions are for. In a field that lacks the notion of a null hypothesis, reproductions will not help in validating claims. What they can do is increase transparency. Making reproductions an integral element of historical scholarship can encourage researchers to be more open about their methodological and interpretative choices. However, reproducibility is not a panacea. It has clear limits. History writing that is as reproducible as possible would be unreadable to the extent that it loses sight of what makes historiography academically and socially valuable. In short: not all important scholarship is reproducible, and not all reproducible scholarship is important. The question we would like to raise in our talk is whether this is true only for the field of history or for others as well.

Bios: Pim Huijnen works as an assistant professor in Digital Cultural History at the Department of History and Art History of Utrecht University. His research focuses on the use of digital text analysis to investigate the history of science and the history of ideas. He is, in particular, interested in conceptual history and the circulation of knowledge between the scientific, political and commercial domains and popular culture.

Pieter Huistra is Associate Professor of Theory of History at the Department of History and Art History of Utrecht University. Previously, he was a Postdoctoral Fellow at KU Leuven and an Aspirant of the Research Foundation – Flanders (FWO), also at KU Leuven.

His research covers the history of science in its broadest sense, ranging from the humanities to the biomedical and natural sciences.

WORKSHOPS (13:00–15:00)

WS I: The strengths and challenges of blinded analysis (2-hour workshop)

Workshop Leader: Alexandra Sarafoglou

Abstract: Bring your own laptop for this hands-on workshop comparing analysis blinding and pre-registration with a few examples…

While the social sciences have adopted preregistration as a preferred method to prevent bias, astrophysics and cosmology have embraced analysis blinding to ensure unbiased judgment since the early 2000s. In this workshop, I will explore the strengths and challenges of analysis blinding, a technique where data is temporarily altered before analysis. I will present empirical findings comparing analysis blinding to preregistration, highlight the types of projects where this approach is particularly valuable, and discuss the challenges involved in implementing it. As a practical exercise, participants will have the opportunity to explore and apply analysis blinding on an empirical dataset. Note: This workshop is designed specifically for researchers, students, and university staff who work with research data in their daily activities. Prior knowledge of analysis blinding is not required. However, participants should be comfortable loading a .csv file on their computer and performing basic data preprocessing (e.g., adding random numbers to a column, shuffling rows within groups). Participants are required to bring their own laptop and have the program they typically use for data analysis (e.g., R, JASP, SPSS, Excel) installed.
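
To make the data-preprocessing prerequisites concrete, here is a minimal sketch of analysis blinding in Python/pandas (the workshop itself supports whatever tool you normally use, e.g. R, JASP, SPSS, or Excel). The file name and the column names ("outcome", "condition") are hypothetical, and the specific blinding transformations are only examples of the kind described in the abstract, not a prescribed procedure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2024)  # fixed seed so the blinding step is documented and repeatable

# Hypothetical dataset with an outcome column and a group/condition column
df = pd.read_csv("study_data.csv")

blinded = df.copy()

# Example blinding step 1: add random noise to the outcome so that effect sizes
# and significance tests on the blinded data are uninterpretable
noise = rng.normal(loc=0, scale=blinded["outcome"].std(), size=len(blinded))
blinded["outcome"] = blinded["outcome"] + noise

# Example blinding step 2: shuffle outcome values within each condition so that
# group differences are scrambled while the overall data structure is preserved
blinded["outcome"] = blinded.groupby("condition")["outcome"].transform(
    lambda s: s.sample(frac=1, random_state=2024).to_numpy()
)

# Analysts write and finalize their full analysis script on `blinded`;
# only once the analysis plan is fixed is that script re-run on the original `df`.
blinded.to_csv("study_data_blinded.csv", index=False)
```

The point of transformations like these is that the blinded data keep a realistic structure (same columns, same group sizes), so the full analysis pipeline can be developed and debugged without the analyst being able to see, and be influenced by, the actual result.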

WS II: RI and Reproducibility: how the two can intertwine better

Workshop leaders: Rita Santos and Mariëtte van den Hoven (NRIN)

Abstract: For some, the overlap between research integrity and reproducibility is obvious, while for others it is not set in stone. In this workshop, we use definitions of RI and reproducibility and the Dutch Code of Conduct to stimulate a debate. For example, is respecting the CoC a necessary condition for responsible research, while reproducibility is merely ‘nice to have’? Or are both RI and reproducibility at risk in times of budget cuts in higher education? Is reproducibility only relevant in some disciplines, and are others excused? Is research integrity only about fabrication, falsification and plagiarism? After a joint exploration, we will challenge participants to formulate concrete action points for their own organization (at the level of department, faculty or institution) on how reproducibility and research integrity can be better intertwined, so that reproducibility becomes part of the responsible conduct of research. What is needed? And how can these needs be met?

WS III: The Blueprint to Reproducibility: A LEGO® Metadata Challenge

Workshop Leaders: Lauren Seex, Saikat Chatterjee (DCC Groningen), Stepanka Zverinova (DCC-UMCG)

Abstract: What makes research reproducible? Clear instructions, standards, or well-planned metadata? In this dynamic and playful session…

What makes research reproducible? Clear instructions, standards, or well-planned metadata? In this dynamic and playful session, participants will experience first-hand what makes documentation effective — and what doesn’t. The LEGO® Metadata for Reproducibility Game offers an engaging approach to reproducibility, as teams build a LEGO® model, document their process, and then exchange instructions to test if others can replicate their design. This interactive exercise highlights key concepts like metadata planning, standards, and effective documentation. Ideal for researchers, educators, and support staff, the workshop builds skills for creating reproducible, shareable research methods. In line with the reproducibility theme, this session uses materials from the University of Glasgow and participants are encouraged to bring this workshop to their own networks.

WS IV: Tools and Promotion Plans

Workshop Leaders: Sarah McCann, Inge Stegeman, Joeri Tijdink

Abstract: How can you promote reproducibility in your own project? There are tools available to you, but how do you integrate them into pipelines at your lab or group? At the end of this workshop, you’ll walk home with a concrete plan for how to promote reproducibility.

WS V: Reproducibility in Applied Research

Workshop Leaders: Gerben ter Riet (HvA), Daan Ornée (HU)

Abstract: Applied research findings are often and quickly translated into policies – what do we do if they don’t replicate after such policies are established? Balancing rigorous research with real-world needs creates ethical dilemmas, especially under tight budgets and timelines…

Once upon a time in the West, a colleague of ours developed a diagnostic (predictive) model to help physicians assess the likelihood that a specific agent was causing the occurrence (in single patients) of an everyday disease. The model was published in a prestigious journal, and we were all proud and delighted with the accomplishment. However, shortly afterward, we submitted plans to conduct a validation study using data from new patients. In the meantime, our colleague had accepted an invitation to join the guideline committee for this disease. His enthusiasm, coupled with that of the committee members, led to the diagnostic model being incorporated into the clinical practice guideline some six months later. Two years on, the results of the validation study painted a completely different picture than our initial findings. When we approached the well-known journal to correct the record, they declined. At that time, replication studies were seen as uninteresting and low-priority. Other journals also rejected our paper. Finally, after three years, the validation study—which amounted to an implicit retraction of the original findings—was published in a lesser-known journal, where it still gathers dust today. 

This example (and many others like it) encapsulates the replication crisis as it relates to the replication of published results. The replication crisis poses significant challenges in the applied sciences, where promising published findings often fail to replicate, leading to wasted time, resources, and unmet expectations (Prinz, Schlange & Asadullah, 2011). In applied settings, results are frequently translated into policies or interventions, where replication issues persist. Besides pressures to publish novel, positive findings, researchers in the applied sciences face unique challenges: collaboration with policymakers and practitioners can introduce pressure to simplify findings, partners may have limited awareness of scientific rigor (e.g., randomization, control groups), and researchers often have less control over study environments. Balancing rigorous research with real-world needs creates ethical dilemmas, especially under tight budgets and timelines.

In this workshop, you will actively discuss the broader issues illustrated above: the focus on novelty, the lack of replication, journal policies, the premature implementation of findings, communication with non-scientific stakeholders, and perhaps other issues you have detected. Together, we’ll brainstorm as many solutions as possible, especially in the context of research conducted for governments and commercial entities. When such ‘sponsors’ favour our results, they may be inclined to implement them immediately. Could this lead to “grimpact” (the harmful impact of premature findings)? How can you minimize this risk? How can all parties involved reach a joint understanding of the basic requirements for producing solid findings? Join our workshop to share your thoughts and insights with us. We will collect your ideas to develop a conversation guide for discussing these issues with practitioners and other partners outside academia.

Moderators 

Daan Ornee MSc, Open Science officer at Utrecht University of Applied Sciences 

Dr. Gerben ter Riet, Senior Methodology advisor at Amsterdam University of Applied Sciences 

Poster Abstracts:

Empowering Translational Health Data Science Capabilities in Population Health Management: A Case of Building a Data Competence Center

In this paper we present the outcomes of a survey conducted among the research community of a Population Health Management (PHM) department. The goal is to investigate how (translational) data science applications can be supported in a complex ecosystem of data sources and regulations governing the secondary use of healthcare data. The envisioned solution is the creation of a data competence center: a multidisciplinary unit mixing research and professional support staff that provides data science technology, training, and resources to (early-career) researchers, addressing current challenges that considerably impact data quality and reproducibility in PHM research.

Keywords: Population Health Management · Data Infrastructure · Data Competence Center · Translational Data Science · Reproducibility

Implementing institutional reproducibility checks to promote good computational practices 

The goal: To establish a workflow in which researchers can submit their data and code to a dedicated person or team in their faculty who will perform a reproducibility check for data and code before a result gets published 

The why: Sharing data is slowly becoming the norm in social and behavioural science research, but sharing computer code to reproduce data processing and analysis is less common. Training in “good enough” software practices, and encouragement to subsequently share software code publicly, are much needed to improve computational practices and reproducibility in the social and behavioural sciences. In this project, I will create training and guidelines and develop a workflow to provide an institutional reproducibility check for data and code submitted alongside research publications. A short, standardized report will be produced that includes an evaluation of the level of reproducibility, alongside a list of actionable points for improvement. This form of personalized feedback has the potential to: 

  • create demand for more rigorous reproducibility practices and for the training needed to implement those practices
  • improve actual analysis code (as opposed to training sessions that are not always generalizable to actual work of the researcher)
  • emphasize the importance of sharing code and encourage researchers to share their analysis code
  • normalize mistakes and corrections in a safe environment

Registered Replication Report: Johns, Schmader, & Martens (2005)

Authors: Andrea H. Stoevenbelt, Paulette C. Flore, Jelte M. Wicherts, Matus Adamkovic, Gabriel Banik, Rachel Bergers, Robert J. Calin-Jageman, Luis A. Gomez, Joachim Hüffmeier, Megan N. Imundo, Scott D. Martin, Steven C. Pan, Lorraine A. T. Phillips, Jakob Pietschnig, Tilli Ripp, Jan Phillip Roër, Ivan Ropovik, Joyce E. Schleu, Ann-Katrin Torka, Bruno Verschuere, Martin Voracek & Luca Weiss

Abstract: We present a registered replication report on stereotype threat. Stereotype threat refers to the fear of being judged based on negative stereotypes about the performance of a certain group one identifies with. Numerous published studies have found that stereotype threat might lower mathematics test performance among women. However, several concerns have been identified with the stereotype-threat literature. Many studies used suboptimal designs and statistical analyses; the literature might be subject to publication bias; and the cross-cultural generalizability of the effect remains unknown. This registered replication report describes the results of nine direct replications (total N = 1711) of a representative stereotype-threat study by Johns et al. (2005), who found that threatened women (but not men) underperform when they are confronted with a mathematics test that is presented to measure gender differences, and that this effect can be alleviated by altering test instructions. We found that the stereotype-threat effect among women was virtually null, and considerably smaller than the original study (N = 1277, d = 0.05, and N = 45, d = -0.82, respectively). The current results fail to replicate the stereotype-threat effect from the original study by Johns et al., hence casting doubt on the generalizability of the effect of stereotype threat on women’s mathematics performance.

Endosomal escape of nanoparticles: will it replicate?

Maha M. Said, Mustafa Gharib, Raphael Lévy

Université Sorbonne Paris Nord, Université Paris Cité, INSERM, LVTS, F-75018 Paris, France

The use of nanoprobes for intracellular sensing and bio-imaging has gained a lot of attention over the past 15 years. pH, miRNA, mRNA, free radicals and a longer list of intracellular targets are claimed to have been detected by nanoparticle probes. These probes enter the cell through endocytosis, followed by a series of events that are necessary for intracellular sensing: endosomal escape, detection of the target, and measurement of a physical output. In many articles claiming intracellular sensing, endosomal escape of nanoparticles is taken for granted, with few or no experiments carried out to demonstrate it; yet in our experience, and according to other scientific articles, most nanoparticles remain within endosomes. To resolve this paradox, we have set up a reproducibility initiative that focuses on the issue of nanoparticle endosomal escape and aims to replicate influential articles reporting intracellular sensing of analyte(s) in the cytosol.

Here, we will present the key steps of our reproducibility project and the initiatives that we are taking to involve scientists in this effort. We will also highlight how we have paved the way for scientists in the field to adopt an open science approach. We hope that this work will enable us to clarify contested issues and will allow the community to build on more solid ground.

Paul Meehl Graduate School

Contributors: Daniel Lakens, Sajedeh Rasti

Over the last decade, meta-science has gained recognition as a discipline in its own right. Despite this growing recognition and the increased attention toward the field, most researchers interested in meta-science work in departments with a different research focus. This makes it challenging for PhD students who work on meta-science projects to receive adequate PhD-level training on topics related to their PhD. Furthermore, as meta-scientists are distributed across departments and universities, it can be difficult for early-career meta-scientists to build a network of peers and connect with colleagues with similar interests.

The Paul Meehl Graduate School, started in January 2024 at Eindhoven University of Technology, organizes free PhD education for graduate students at European universities who are interested in meta-science. Through a range of workshops on topics such as data simulation, meta-analysis and bias detection, epistemic inclusion, reproducible workflows, and theory building, the school seeks to provide foundational and advanced knowledge in meta-scientific areas. Since the Paul Meehl Graduate School operates through volunteer lectures provided by meta-scientists, the poster includes a call for workshop ideas. The Paul Meehl Graduate School is also interested in collaborating with other graduate schools and is open to ideas for hosting workshops at other institutions.

JUST-OS: An AI-based chatbot for navigating Open Science resources

Despite (or due to) the overwhelming amount of information on Open Science, it is practiced far less than it is preached. When researchers search for guidance on Open Science, they typically rely on information-dense websites. These often long and unorganized lists of Open Science initiatives and resources can discourage researchers in their search for information. Alternatively, researchers might consult AI platforms, which are accessible but draw on all kinds of (non-scientific) resources and lack a formal quality check. What is lacking is a comprehensive, accessible way to navigate the available and relevant resources.

To bridge this gap, we propose JUST-OS (Judicious User-friendly Support Tool for Open Science), an AI-based chatbot for navigating Open Science resources. Through a simple chat interface, JUST-OS assists various stakeholders in navigating initiatives, knowledge, and resources by answering users’ questions interactively and leading them to related practices or topics. Moreover, a labelling structure allows the chatbot to match resources to the discipline, profession, and kind of knowledge (e.g., glossary or practical guide) relevant to the specific user. JUST-OS is an initiative of the University of Groningen and the Framework for Open and Reproducible Research Training (FORRT), funded by the Dutch Research Council (NWO).

JUST-OS uses retrieval-augmented generation, an AI-based natural language processing method, operating on an Open Science database hosted on FORRT. This allows the chatbot to hold an interactive conversation with users while drawing its input from a comprehensive, curated and continuously maintained database of Open Science resources.
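
For readers unfamiliar with retrieval-augmented generation, the sketch below illustrates only the retrieval half of such a pipeline, using a simple TF-IDF index over a few invented resource descriptions. The example entries, function name, and similarity method are illustrative assumptions; the actual JUST-OS system will use its own retrieval setup over the curated FORRT database.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for a curated Open Science resource database (entries are invented)
resources = [
    "Glossary of Open Science terms for early-career researchers",
    "Practical guide to preregistration in psychology",
    "Tutorial on sharing analysis code and data alongside publications",
    "Overview of FAIR data principles for the life sciences",
]

vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(resources)  # build the retrieval index once

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k resources most similar to the user's question."""
    scores = cosine_similarity(vectorizer.transform([question]), index)[0]
    ranked = scores.argsort()[::-1][:k]
    return [resources[i] for i in ranked]

# In a full RAG pipeline, the retrieved snippets are passed to a language
# model as context for generating the chatbot's answer.
print(retrieve("How do I share my code when I publish a paper?"))
```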

A Crowdfunding Mechanism for Replication Studies

Philipp D. Koellinger (Vrije Universiteit Amsterdam, DeSci Foundation, DeSci Labs), Sina Iman (DeSci Labs), Leonie Raijmakers (DeSci Labs)

Independent replication studies are a cornerstone of the scientific method. Yet, they often lack funding and incentives for scientists to undertake them. Additionally, no clear mechanism exists for stakeholders to signal which studies should be prioritized for replication. To address this, we propose a public crowdfunding mechanism designed to support the replication of over 250 million research articles indexed by CrossRef and OpenAlex.

Through the DeSci Publish platform, users can contribute funds to specific research findings via an intuitive interface that supports all currencies and various payment methods. Contributors retain the flexibility to withdraw their funds anytime until a $5,000 threshold is reached for a given study. Once this threshold is met, a call for proposals for a replication study is issued. Independent experts will review the proposals and select the most cost-effective and viable analysis plan. Based on their selection, the funding goal and deadline for the replication study are announced on the platform.

If the funding goal is reached within the deadline, the selected proposal receives 80% of the funds upfront, with the remaining 20% disbursed after the results are published and evaluated on the platform. If the goal is not reached, the collected funds are converted into a prize. This prize can be claimed by anyone who completes a high-quality replication of the study, as validated by independent experts.
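
To make the flow concrete, here is an illustrative sketch of the disbursement rules described above. The $5,000 proposal-call threshold and the 80/20 split come from the abstract; the function name, data structure, and status labels are hypothetical and not part of the DeSci Publish platform.

```python
PROPOSAL_CALL_THRESHOLD = 5_000  # USD pledged before a call for replication proposals is issued
UPFRONT_SHARE = 0.80             # share paid out once the funding goal is met
HOLDBACK_SHARE = 0.20            # share paid after the results are published and evaluated

def disbursement(total_pledged: float, funding_goal: float, deadline_passed: bool) -> dict:
    """Illustrative split of pledged funds under the rules sketched in the abstract."""
    if total_pledged < PROPOSAL_CALL_THRESHOLD:
        # Below the threshold, contributors can still withdraw their pledges
        return {"status": "collecting"}
    if total_pledged >= funding_goal:
        return {
            "status": "funded",
            "upfront": total_pledged * UPFRONT_SHARE,
            "on_evaluation": total_pledged * HOLDBACK_SHARE,
        }
    if deadline_passed:
        # Goal missed by the deadline: funds become a prize for a validated independent replication
        return {"status": "prize", "prize_pool": total_pledged}
    return {"status": "awaiting_goal"}

print(disbursement(total_pledged=12_000, funding_goal=10_000, deadline_passed=False))
```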

A public website will provide transparency by displaying which studies have received replication donations, the total amount contributed, and the number of contributors. This mechanism not only highlights critical areas for replication but also mobilizes new investments in replication studies. By introducing market-driven incentives, it addresses gaps in the current system, fostering a culture of rigor and transparency. Its decentralized, scalable design encourages broad participation, paving the way for more reliable and reproducible scientific outcomes.

eScience Center Fellowship

The eScience Fellowship Programme is aimed at members of the academic research community working in the Netherlands who are passionate about acting as ambassadors for the use (and reuse) of research software. Research software can be any piece of code, script, package, tool, library, or programme written for the purpose of being used in research. The Fellowship Programme supports those who want to promote or improve the awareness and use of open and sustainable research software within their institute or academic community. In 2024-2025, we granted 20 applicants an eScience Center Fellowship.

Each Fellow is expected to carry out activities to this end within the duration of their Fellowship (12 months). Any activities related to improving awareness or use of research software are considered. Applicants are also welcome to use the Fellowship to boost existing initiatives or activities that fit the purpose of the programme. Examples of relevant initiatives and activities include:  

  • Creating a tutorial around a research software package developed in-house.  
  • Creating a series of videos on good research software practices.  
  • Inviting speakers on the topic of sustainable research software for an interactive seminar.  
  • Running a hackathon where researchers can improve the reusability of their research software and exchange ideas.   
  • Setting up a discipline-specific community to exchange expertise about digital tools within the applicant’s field.   
  • Increasing the visibility and awareness of research software in the applicant’s field. 
  • Increasing the visibility, appreciation and wellbeing of researchers who write code in any way.   

The eScience Center and its Fellowship Programme support Open Science principles. We encourage applications to include activities that focus on open source software and aim to make their outcomes openly available to relevant communities. 

Computational Reproducibility at Amsterdam UMC

Diana I. Bocancea*, Samuel Langton*, Mar Barrantes-Cepas, Emma Coomans, Niels Reijner, Ismael L. Calandri

*shared first

All authors from Amsterdam UMC

The crisis of computational reproducibility in medical science is well documented. There is no single source of this crisis, but a number of the underlying issues originate directly from software use and development, such as programming literacy, version control, and documentation. At Amsterdam UMC, we are undertaking a number of initiatives to raise awareness about these issues and provide ways to improve computational reproducibility across the university. Activities include a reproducibility hackathon event (ReproHack), hands-on coding support, the development of IT infrastructure, and dedicated training sessions. This poster summarizes our initiatives for the purposes of gaining feedback, sharing ideas and facilitating a discussion among fellow attendees.

How can funders promote reproducibility in their funding work? The Reproducibility Promotion Plan for Funders

Contributors (in random order): Barbara Leitner (AmsterdamUMC), Joeri Tijdink (AmsterdamUMC), Friederike Elisabeth Kohrs (Berlin Institute of Health at Charité), Alexandra Bannach-Brown (Berlin Institute of Health at Charité)

Abstract: Funders and funding institutions play a vital role in shaping research culture and fostering research integrity. They can promote best practices in research methodology and strengthen ethical considerations by establishing guidelines and policies, thereby incentivizing integrity and transparency in research. Therefore, funders are crucial for maintaining a robust research environment that contributes to knowledge advancement and societal progress in an open and transparent manner. However, current definitions of transparency and reproducibility are vague, the evaluation and monitoring of reproducible practices are often scarce in current funding criteria, and there is insufficient emphasis on reproducibility practices in funding calls and incentive structures.

Within TIER2 – enhancing Trust, Integrity and Efficiency in Research through next-level Reproducibility, an EU Horizon Europe project, members of AmsterdamUMC and Berlin Institute of Health at Charité have co-created a Reproducibility Promotion Plan (RPP), together with a group of funders, to support funders in promoting transparent and reproducible research practices. The RPP provides guiding recommendations for funders on three key themes where reproducibility and reproducible practices can be promoted in the research they fund. These themes are policy and definitions, evaluation and monitoring, and incentives. The RPP is adaptable and can be tailored to suit every funder’s and funding institution’s needs, with general and in-depth explanations of the recommendations and guiding best-practice examples.