
Abstract Details

Activity Number: 697
Type: Contributed
Date/Time: Thursday, August 4, 2016 : 10:30 AM to 12:20 PM
Sponsor: Section on Statistical Learning and Data Science
Abstract #319766
Title: Model Selection Confidence Sets by Likelihood Ratio Testing
Author(s): Chao Zheng* and Davide Ferrari and Yuhong Yang
Companies: University of Melbourne and University of Minnesota
Keywords: model selection confidence set ; likelihood ratio test ; maximum likelihood ; detectability ; inclusion importance ; adaptive sampling
Abstract:

The traditional activity of model selection aims at discovering a single model superior to all others. In the presence of pronounced noise, however, multiple models are often found to explain the same data equally well. To resolve this model selection ambiguity, we introduce model selection confidence sets (MSCSs) in the context of maximum likelihood estimation. An MSCS is defined as a list of models statistically equivalent to the true model at a user-specified level of confidence, thus extending the familiar notion of confidence intervals to the model selection framework. We propose to construct MSCSs using the likelihood ratio test; our approach guarantees correct coverage probability of the true model when both the sample size and the model dimension increase. We derive conditions under which the MSCS contains all the relevant information about the true model structure. In addition, we propose natural statistics to measure the importance of parameters in a principled way that accounts for the overall model uncertainty. When the space of feasible models is large, the MSCS is implemented by an adaptive stochastic search algorithm which samples MSCS models with high probability.
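
To illustrate the general idea of screening candidate models into a confidence set with likelihood ratio tests, the following minimal Python sketch may help. It is not the authors' MSCS procedure: the exhaustive candidate enumeration, the pairwise chi-square calibration against the best-fitting candidate, the degrees-of-freedom rule, and the 0.05 level are all simplifying assumptions for a toy Gaussian linear model.

# Illustrative sketch only: retain candidate linear models whose likelihood
# ratio statistic against the best-fitting candidate is below a chi-square
# cutoff. Calibration details here are assumptions, not the paper's method.
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.5, 0.0, 0.0, 0.0])      # true model uses predictors 0 and 1
y = X @ beta + rng.normal(size=n)

def gaussian_loglik(subset):
    # Profile Gaussian log-likelihood of a linear model using the given predictors.
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    sigma2 = resid @ resid / n
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

candidates = [s for k in range(1, p + 1) for s in itertools.combinations(range(p), k)]
loglik = {s: gaussian_loglik(s) for s in candidates}
best = max(loglik, key=loglik.get)               # best-fitting candidate model

alpha = 0.05
confidence_set = []
for s in candidates:
    lr_stat = 2 * (loglik[best] - loglik[s])     # likelihood ratio statistic
    df = max(len(best) - len(s), 1)              # crude degrees of freedom (assumption)
    if lr_stat <= stats.chi2.ppf(1 - alpha, df):
        confidence_set.append(s)

print("Candidate models retained in the illustrative confidence set:", confidence_set)

In this toy example the retained set typically includes the true model {0, 1} together with larger models that fit the data equally well, which is the ambiguity an MSCS is designed to quantify; the paper's adaptive stochastic search replaces the exhaustive enumeration when the model space is too large to scan.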


Authors who are presenting talks have a * after their name.

