Abstract:
|
How do we go about investigating the reproducibility of findings, and how do we decide on the criteria for a successful replication? These are not simple questions, and our decisions can have large effects on the inferences we can make about the robustness of particular studies, theories, or fields as a whole. This session will discuss the statistical issues involved in running highly informative reproducibility projects, and the implications of different methodologies and criteria for the conclusions we are able to draw from such projects. The talk will cover three types of meta-science projects run by the Center for Open Science: the large-scale Reproducibility Projects in Psychology and Biology, the Many Labs projects, and the Many Analysts project. These will be discussed in terms of their varied methodologies, their empirical findings, and the differing statistical inferences about reproducibility we can draw from each type of project.
|