
Abstract Details

Activity Number: 470 - Bayes Theory and Foundations
Type: Contributed
Date/Time: Wednesday, August 2, 2017 : 8:30 AM to 10:20 AM
Sponsor: Section on Bayesian Statistical Science
Abstract #322829
Title: Convergence Properties of Gibbs Samplers for Bayesian Probit Regression with Proper Priors
Author(s): Saptarshi Chakraborty* and Kshitij Khare
Companies: University of Florida and University of Florida
Keywords: Bayesian probit model; binary regression; sandwich algorithms; Data Augmentation; geometric ergodicity; trace class
Abstract:

The Bayesian probit regression model (Albert and Chib (1993)) is popular and widely used for binary regression. While the improper flat prior for the regression coefficients is useful in the absence of a priori information, a proper normal prior is desirable when prior information is available or in high-dimensional settings where the number of coefficients p exceeds the sample size n. For both choices of prior, the resulting posterior density is intractable, and a Data Augmentation (DA) Markov chain is used to draw approximate samples from the posterior distribution. Establishing geometric ergodicity of this DA Markov chain provides theoretical guarantees for constructing standard errors for MCMC-based estimates. In this paper, we show that in the case of proper normal priors, the DA Markov chain is geometrically ergodic for all choices of the design matrix X, n, and p (unlike in the improper flat prior case, where conditions are required for posterior propriety). We also derive sufficient conditions under which the DA Markov chain is trace class, which allows us to conclude that the corresponding Haar PX-DA algorithm is strictly better than the DA algorithm in an appropriate sense.
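For readers unfamiliar with the DA Markov chain referenced above, the following is a minimal sketch (not the authors' code) of the Albert and Chib (1993) Data Augmentation Gibbs sampler for Bayesian probit regression with a proper normal prior beta ~ N(beta0, Sigma0). The function name da_probit and the hyperparameters beta0 and Sigma0 are illustrative choices, not from the paper.

import numpy as np
from scipy.stats import truncnorm

def da_probit(y, X, beta0, Sigma0, n_iter=5000):
    """One DA (Gibbs) chain for probit regression with prior beta ~ N(beta0, Sigma0)."""
    n, p = X.shape
    Sigma0_inv = np.linalg.inv(Sigma0)
    # With a proper normal prior, the conditional covariance of beta given the
    # latent z is fixed across iterations: (X'X + Sigma0^{-1})^{-1}.
    Sigma_post = np.linalg.inv(X.T @ X + Sigma0_inv)
    L = np.linalg.cholesky(Sigma_post)
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # Step 1: draw latent z_i | beta, y_i from N(x_i' beta, 1) truncated to
        # (0, inf) if y_i = 1 and to (-inf, 0) if y_i = 0.
        mu = X @ beta
        lower = np.where(y == 1, 0.0, -np.inf)
        upper = np.where(y == 1, np.inf, 0.0)
        z = truncnorm.rvs(lower - mu, upper - mu, loc=mu, scale=1.0)
        # Step 2: draw beta | z from N(mu_post, Sigma_post), where
        # mu_post = Sigma_post (X'z + Sigma0^{-1} beta0).
        mu_post = Sigma_post @ (X.T @ z + Sigma0_inv @ beta0)
        beta = mu_post + L @ np.random.standard_normal(p)
        draws[t] = beta
    return draws

The geometric ergodicity result described in the abstract concerns the chain {beta_t} produced by alternating these two conditional draws; the Haar PX-DA (sandwich) variant inserts an additional inexpensive rescaling move between the two steps.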


Authors who are presenting talks have a * after their name.

