To produce reliable knowledge, researchers must be able to repeat each other’s (and their own) studies and analyses. Broadly speaking, reproducibility is the extent to which research projects yield the same results and conclusions when they are repeated.
At a finer-grained level, we can distinguish between computational reproducibility and replicability: computational reproducibility is achieved when repeating a study using the exact same methods and data yields the exact same results as the original analysis; replicability is achieved when repeating a study using the same methods but different data yields qualitatively similar results and the same conclusions as the original.
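In computational work, exact reproducibility hinges on removing hidden sources of variation such as unseeded random number generators. As a minimal sketch (the analysis and data are hypothetical, not from any particular study), fixing the seed makes a bootstrap analysis return bit-identical results on every run with the same data:

```python
import random
import statistics

def bootstrap_mean_ci(data, n_boot=1000, seed=42):
    """Bootstrap 95% confidence interval for the mean.

    The fixed seed makes the analysis computationally reproducible:
    the same code, data, and seed always yield the same interval.
    """
    rng = random.Random(seed)  # seeded generator: identical draws each run
    means = sorted(
        statistics.mean(rng.choices(data, k=len(data)))
        for _ in range(n_boot)
    )
    return means[int(0.025 * n_boot)], means[int(0.975 * n_boot)]

data = [2.1, 3.4, 2.8, 3.9, 2.5, 3.1]
# Same methods + same data -> exact same results (computational reproducibility).
assert bootstrap_mean_ci(data) == bootstrap_mean_ci(data)
```

Replicability, by contrast, would mean running the same seeded analysis on a freshly collected dataset and obtaining a qualitatively similar interval.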
From an outside perspective, reproducibility may seem like a trivial achievement. But standard scientific practice often does not guarantee reproducible results. Threats to reproducibility have been documented in many scholarly disciplines: inaccessible and poorly documented study materials, methods, analysis code, and data; reliance on workflows with little protection against human (and machine) error; and a lack of attempts to verify the computational reproducibility and replicability of published research.
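One lightweight way to support such verification attempts is to publish a checksum of the analysis output alongside the results, so that anyone re-running the code on the same data can confirm the rerun byte-for-byte. A minimal sketch (the result values and function name are illustrative assumptions, not a standard tool):

```python
import hashlib
import json

def result_fingerprint(results: dict) -> str:
    """Return a SHA-256 fingerprint of analysis results.

    Serializing with sorted keys gives a canonical byte string, so two
    runs producing the same numbers yield the same hash regardless of
    the order in which results were recorded.
    """
    canonical = json.dumps(results, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

original = {"mean": 2.97, "n": 6}
rerun = {"n": 6, "mean": 2.97}  # same numbers, different key order
# Identical fingerprints confirm the rerun reproduced the original output.
assert result_fingerprint(original) == result_fingerprint(rerun)
```

A mismatch in fingerprints would flag a reproducibility failure without requiring manual comparison of every reported number.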