Abstract:
|
The validity of conclusions from meta-analysis is potentially threatened by publication bias. Most existing procedures for correcting publication bias assume normality of the study-specific effects that account for between-study heterogeneity. However, this assumption may not be valid, and the performance of these bias-correction procedures can be highly sensitive to departures from normality. Further, few measures exist for quantifying the magnitude of publication bias based on selection models. In this talk, we address both of these issues. First, we explore the use of heavy-tailed distributions for the study-specific effects within a Bayesian hierarchical framework. The deviance information criterion (DIC) is used to select the appropriate distribution for the final analysis. Second, we develop a new measure of the magnitude of publication bias based on the Hellinger distance. Our measure is easy to interpret and takes advantage of the estimation uncertainty naturally afforded by the posterior distribution. We illustrate our method through simulations and meta-analyses of real datasets. This is joint work with Yong Chen, Lifeng Lin, and Mary Boland.
|
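
For reference, the two quantities named in the abstract have standard textbook forms, sketched below in LaTeX. This is a sketch of the generic definitions only; the abstract does not specify which pair of distributions the proposed Hellinger-based measure compares, so that choice is left open here.

% Deviance information criterion (Spiegelhalter et al., 2002):
\mathrm{DIC} = \bar{D} + p_D, \qquad p_D = \bar{D} - D(\bar{\theta}),

where $\bar{D}$ is the posterior mean of the deviance and $D(\bar{\theta})$ is the deviance evaluated at the posterior mean of the parameters; smaller values indicate better fit after penalizing effective model complexity.

% Hellinger distance between two densities f and g:
H(f, g) = \sqrt{\frac{1}{2} \int \left( \sqrt{f(x)} - \sqrt{g(x)} \right)^{2} \, dx}, \qquad 0 \le H(f, g) \le 1,

with $H = 0$ when the densities coincide and $H = 1$ when their supports are disjoint, which makes a measure built on it directly interpretable on a 0-to-1 scale.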