Online Program

Friday, February 15, 3:45 PM - 5:15 PM
Jackson
Inference with Big Data

A Practical Assessment of the Sensitivities of Bayesian Model Selection (303763)


*Christopher T. Franck, Virginia Tech 
Robert B. Gramacy, Virginia Tech 

Keywords: Bayesian model selection, p-values, hypothesis testing, prior specification

Concern over the misuse of p-values has become pervasive in the modern scientific landscape. Many attribute partial blame for science’s replication crisis to practices surrounding p-values. In the Big Data era, enormous sample sizes frequently yield tiny p-values even when observed effects are of little practical importance. Hence Bayesian model selection, a natural alternative to p-values, might deserve additional shelf space in the statistical marketplace of ideas. Despite its virtues, Bayesian model selection is no panacea, since it can be highly sensitive to the prior distribution on parameters. Seemingly innocuous prior choices can induce poor model selection performance, and increasing the sample size frequently exacerbates rather than solves the problem. We cover two examples of varying complexity, using an interactive R Shiny app and computer surrogate modeling for visualization. Strategies to implement and assess the sensitivity of Bayesian model selection are provided. We hope to impart an appreciation of the strengths and weaknesses of both classical and Bayesian hypothesis testing to assist statistical practice in industry, government, and academia.
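The sensitivity the abstract describes can be illustrated with a standard textbook setup (this is a hypothetical sketch, not the authors' app or examples): testing a point null H0: mu = 0 against H1: mu ~ N(0, tau^2) for i.i.d. N(mu, 1) data. The sample mean is marginally N(0, 1/n) under H0 and N(0, tau^2 + 1/n) under H1, giving a closed-form Bayes factor. At a sample mean sitting exactly at a "significant" z-score, the Bayes factor can favor the null, and it changes by an order of magnitude as the prior variance tau^2 varies, even though the data and p-value are fixed.

```python
from math import sqrt, exp, erfc

def bf01(xbar, n, tau2):
    """Bayes factor for H0: mu = 0 over H1: mu ~ N(0, tau2),
    with x_1, ..., x_n i.i.d. N(mu, 1)."""
    v0 = 1.0 / n          # variance of xbar under H0
    v1 = tau2 + 1.0 / n   # marginal variance of xbar under H1
    # Ratio of the two normal densities evaluated at xbar:
    return sqrt(v1 / v0) * exp(-0.5 * xbar ** 2 * (1.0 / v0 - 1.0 / v1))

def p_value(xbar, n):
    """Two-sided p-value for the z-test of H0: mu = 0."""
    return erfc(abs(xbar) * sqrt(n) / sqrt(2.0))

# A sample mean pinned at z = 2.576 (p roughly 0.01) for n = 10,000:
n = 10_000
xbar = 2.576 / sqrt(n)
print(f"p-value: {p_value(xbar, n):.4f}")
for tau2 in (0.1, 1.0, 10.0, 100.0):
    # BF01 grows roughly like sqrt(tau2 * n) here, so a vaguer prior
    # on mu pushes the Bayes factor further toward the null.
    print(f"tau^2 = {tau2:6.1f}  BF01 = {bf01(xbar, n, tau2):8.2f}")
```

Running this shows the tension: the p-value stays near 0.01 while BF01 exceeds 1 (evidence for the null) for moderate tau^2 and grows without bound as tau^2 increases, a version of the Jeffreys-Lindley phenomenon and of the prior sensitivity the talk addresses.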