
Abstract Details

Activity Number: 439
Type: Contributed
Date/Time: Tuesday, August 2, 2016, 2:00 PM to 3:50 PM
Sponsor: Section on Bayesian Statistical Science
Abstract #320287
Title: Convergence and Mixed Effects: Using Bayesian Models to Better Understand Linguistic Data
Author(s): Joseph Roy* and Christopher Eager and Kailen Shantz and Amelia Kimball
Companies: University of Illinois at Urbana-Champaign
Keywords: multilevel models; highly unbalanced data; linguistics
Abstract:

Across different subfields of linguistics, mixed-effects models have emerged as the gold standard of statistical analysis (Baayen et al., 2008; Johnson, 2009; Barr et al., 2013; Gries, 2015). The major unifying argument for these models is that they provide a more conservative and accurate assessment of statistical significance when there are repeated measures on subjects and/or items. One problematic feature of these models is their failure to converge, and handling that failure has led to ad hoc fixes (e.g., Gries, 2015; Bates et al., 2015) that fall outside standard statistical practice. We present the methodological benefits of a fully specified Bayesian model over a mixed-effects model for four linguistic datasets. Failure to converge need not indicate that the random slopes or random intercepts are zero: for two of the datasets there is evidence of a non-zero random intercept. In each dataset, the Bayesian model provides a means to account for the multilevel variance in the data while overcoming the failure of the out-of-the-box mixed-effects model to converge.
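As a rough illustration of the contrast the abstract draws (not the authors' code or data), the sketch below fits a fully specified Bayesian hierarchical model with crossed by-subject and by-item random intercepts to simulated reaction-time data, written in Python with PyMC; all variable names, priors, and the simulated design are illustrative assumptions. In R, the analogous comparison would be between an lme4 fit and a brms/Stan model, which is closer to the tooling the cited references use.

import numpy as np
import pymc as pm

rng = np.random.default_rng(0)

# Simulated unbalanced repeated-measures data: reaction times for
# 20 subjects crossed with 15 items, one binary condition.
n_subj, n_item, n_obs = 20, 15, 600
subject = rng.integers(0, n_subj, n_obs)
item = rng.integers(0, n_item, n_obs)
condition = rng.integers(0, 2, n_obs)
rt = (600 + 30 * condition
      + rng.normal(0, 40, n_subj)[subject]   # by-subject intercepts
      + rng.normal(0, 25, n_item)[item]      # by-item intercepts
      + rng.normal(0, 80, n_obs))            # residual noise

with pm.Model() as model:
    # Weakly informative priors regularize the variance components,
    # which is one way a fully specified Bayesian model avoids the
    # boundary (singular-fit) problems behind many convergence warnings.
    intercept = pm.Normal("intercept", mu=600, sigma=100)
    beta = pm.Normal("beta_condition", mu=0, sigma=50)
    sigma_subj = pm.HalfNormal("sigma_subject", sigma=50)
    sigma_item = pm.HalfNormal("sigma_item", sigma=50)
    sigma = pm.HalfNormal("sigma_resid", sigma=100)

    # Crossed random intercepts, non-centered parameterization.
    z_subj = pm.Normal("z_subject", mu=0, sigma=1, shape=n_subj)
    z_item = pm.Normal("z_item", mu=0, sigma=1, shape=n_item)

    mu = (intercept + beta * condition
          + sigma_subj * z_subj[subject]
          + sigma_item * z_item[item])
    pm.Normal("rt", mu=mu, sigma=sigma, observed=rt)

    idata = pm.sample(1000, tune=1000)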


Authors who are presenting talks have a * after their name.

