Observational studies and meta-analyses may be compromised by unmeasured confounding. We describe simple metrics characterizing the sensitivity of single studies and meta-analyses to such confounding. For a single study, the “E-value” is defined as the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need to have with both the treatment and the outcome, conditional on the measured covariates, to fully explain away a specific treatment–outcome association. A large E-value implies that considerable unmeasured confounding would be needed to explain away an effect estimate; a small E-value implies that little would suffice. For meta-analyses, we quantify the proportion of true effects of scientifically meaningful size that would remain under unmeasured confounding of specified magnitude. Conversely, we estimate the minimum strength of confounding that would be required to reduce this proportion below a chosen threshold. All methods can be implemented with the R package EValue.
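As an illustration of the single-study metric, the E-value for a point estimate on the risk ratio scale has a simple closed form, E = RR + sqrt(RR × (RR − 1)) for RR ≥ 1 (with RR first inverted when RR < 1). The sketch below implements this formula in Python rather than via the R package EValue mentioned above; the function name `e_value` is our own.

```python
import math

def e_value(rr: float) -> float:
    """E-value for a point estimate on the risk ratio scale.

    The E-value is the minimum strength of association (risk ratio
    scale) that an unmeasured confounder would need with both the
    treatment and the outcome to fully explain away the estimate.
    """
    if rr <= 0:
        raise ValueError("risk ratio must be positive")
    # A protective estimate (RR < 1) is handled by taking the
    # reciprocal, so the formula is applied on the RR >= 1 scale.
    if rr < 1:
        rr = 1 / rr
    return rr + math.sqrt(rr * (rr - 1))

# Example: a risk ratio of 3.9 yields an E-value of about 7.26,
# i.e. a confounder associated with both treatment and outcome by
# risk ratios of 7.26 each could explain away the estimate.
print(round(e_value(3.9), 2))
```

Note that a null estimate (RR = 1) gives an E-value of 1, meaning no unmeasured confounding is needed to explain it away, consistent with the interpretation that small E-values indicate fragile estimates.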