Abstract:
|
Differential misclassification can cause falsely significant study findings. The "fragility index" is the minimum number of treated participants in a clinical trial whose binary outcome, if misclassified, would reverse the statistical significance of the odds ratio (OR). Herein we introduce fragility to observational epidemiologists, define fragility to differential exposure misclassification, and demonstrate that studies with tiny p-values can nonetheless be fragile to differential misclassification. We define "relative fragility," which expresses the minimum number of participants as a fraction. We derive a root-finding algorithm that computes fragility exactly for 2x2xK tables, even with large cell counts. When the 2x2 table cells are not reported in a paper, we provide an approximation to relative exposure (or outcome) fragility that requires only the 95% CI endpoint nearest the null (denoted "r") and the odds of exposure (or outcome) among cases (or the exposed). Fragility provides a simple sensitivity analysis for the potential impact of differential misclassification that may be useful to include in epidemiologic bias analyses.
|