Abstract:

Observational studies and meta-analyses may be compromised by unmeasured confounding. We describe simple metrics characterizing the sensitivity of single studies and meta-analyses to such confounding. For a single study, the "E-value" is defined as the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need to have with both the treatment and the outcome to fully explain away a specific treatment–outcome association, conditional on the measured covariates. A large E-value implies that considerable unmeasured confounding would be needed to explain away an effect estimate; a small E-value implies that little would be needed. For meta-analyses, we quantify the extent to which unmeasured confounding of a specified magnitude could reduce the proportion of scientifically meaningful true effects to below a chosen threshold. Conversely, we develop methods to estimate the minimum strength of confounding required to produce such a reduction. All methods can be implemented with the R package EValue.
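The single-study E-value has a simple closed form: for an observed risk ratio RR ≥ 1, E = RR + sqrt(RR × (RR − 1)), with protective estimates (RR < 1) inverted first. As an illustration alongside the R package EValue, a minimal sketch in Python (the function name `e_value` is ours, not part of any package):

```python
import math

def e_value(rr: float) -> float:
    """E-value for an observed risk ratio: the minimum strength of
    association, on the risk ratio scale, that an unmeasured confounder
    would need with both treatment and outcome to fully explain away
    the observed association."""
    if rr < 1:
        rr = 1 / rr  # invert protective estimates before applying the formula
    return rr + math.sqrt(rr * (rr - 1))

# An observed risk ratio of 2 gives an E-value of 2 + sqrt(2) ≈ 3.41:
print(round(e_value(2.0), 2))  # → 3.41
```

The same formula applied to the confidence interval limit closer to the null gives the E-value for the interval; a null estimate (RR = 1) yields an E-value of 1, meaning no confounding is needed to explain it away.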
