Statistical blunders and misdeeds rocked the news in the past year—from the discrediting of popular food scientist Brian Wansink to the retraction of a landmark study on the Mediterranean diet in the New England Journal of Medicine. Besides attracting media attention, these cases revealed problems in the way we currently detect statistical errors in the published literature. The process is haphazard, time-consuming, and rife with conflict. Many scientists believe that it’s time to standardize and normalize error detection and correction. But the idea is controversial; some have even dubbed the data detectives who find and expose errors “methodological terrorists.”
This panel will explore the logistics and ethics of error detection in published research. Panelists will share war stories and lessons learned from their work on recent high-profile cases; describe tools they’ve developed for automatic error detection, such as GRIM, SPRITE, and statcheck; and share their perspectives on error detection as an emerging field of statistics.
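For readers unfamiliar with these tools, the core idea behind GRIM (Granularity-Related Inconsistency of Means) is simple: when data are integers (e.g., Likert responses), only certain means are arithmetically possible for a given sample size, so a reported mean can be checked for consistency. Below is a minimal, illustrative sketch of that check in Python; the function name and rounding tolerance are our own choices, not taken from any published implementation.

```python
import math

def grim_consistent(reported_mean: str, n: int) -> bool:
    """Sketch of a GRIM-style check: could a mean of n integer
    values round to the reported mean at its stated precision?

    The reported mean is passed as a string so we can read off
    how many decimal places were reported.
    """
    decimals = len(reported_mean.split(".")[1]) if "." in reported_mean else 0
    mean = float(reported_mean)
    # The sum of n integers nearest to n * mean is either the
    # floor or the ceiling of that product; test both candidates.
    for total in (math.floor(mean * n), math.ceil(mean * n)):
        if abs(round(total / n, decimals) - mean) < 1e-12:
            return True
    return False
```

For example, a reported mean of 3.50 from 10 integer responses is achievable (sum = 35), while a reported mean of 5.19 from 28 integer responses is not: the nearest possible sums, 145 and 146, yield means that round to 5.18 and 5.21, so the sketch above would flag it.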