We propose sensitivity analyses for meta-analyses in which “statistically significant” positive results are more likely to be published than negative or “nonsignificant” results by an unknown ratio. Using inverse probability weighting, and accommodating non-normal true effects, small meta-analyses, and clustering, our methods enable statements such as: “For publication bias to shift the observed point estimate to the null, positive results would need to be at least 30-fold more likely to be published than negative or ‘nonsignificant’ results.” Comparable statements can be made about shifting the estimate to a chosen non-null value or shifting the confidence interval. To aid interpretation, we empirically benchmark plausible values of the publication ratio across disciplines. We show that a worst-case meta-analytic point estimate under maximal publication bias can be obtained simply by conducting a standard meta-analysis of only the negative and “nonsignificant” studies; this sometimes indicates that no amount of publication bias could “explain away” the results. We illustrate the proposed methods using real meta-analyses and provide an R package, PublicationBias.
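The worst-case bound described above can be sketched numerically: pool, by standard inverse-variance weighting, only the studies whose results are negative or “nonsignificant.” The following is a minimal fixed-effect illustration in Python, not the PublicationBias implementation (which additionally handles non-normal true effects, small samples, and clustering); the function name, the normal approximation for p-values, and the two-sided significance threshold are assumptions for the sketch.

```python
import numpy as np
from scipy import stats

def worst_case_estimate(yi, vi, alpha=0.05):
    """Worst-case pooled estimate under maximal publication bias (sketch):
    a fixed-effect meta-analysis restricted to the 'nonsignificant or
    negative' studies, i.e., those with two-sided p >= alpha under a
    normal approximation, or with a negative point estimate.

    yi : array of study point estimates
    vi : array of within-study variances
    """
    yi = np.asarray(yi, dtype=float)
    vi = np.asarray(vi, dtype=float)
    z = yi / np.sqrt(vi)                     # Wald z-statistics
    p = 2 * stats.norm.sf(np.abs(z))         # two-sided p-values
    keep = (p >= alpha) | (yi < 0)           # drop significant positives
    w = 1.0 / vi[keep]                       # inverse-variance weights
    return float(np.sum(w * yi[keep]) / np.sum(w))

# Example: one significant positive study is excluded; the two
# remaining studies are pooled by inverse-variance weighting.
est = worst_case_estimate(yi=[0.50, 0.05, -0.10],
                          vi=[0.01, 0.04, 0.04])
```

If this worst-case estimate remains above the null (or above a chosen threshold), then, as the abstract notes, no amount of publication bias of the assumed form could explain away the meta-analytic result.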