Abstract:
|
Across different sub-fields of linguistics, mixed-effects models have emerged as the gold standard of statistical analysis (Baayen et al., 2008; Johnson, 2009; Barr et al., 2013; Gries, 2015). The major unifying argument for these models is that they provide a more conservative and accurate assessment of statistical significance when there are repeated measures on subjects and/or items. One problematic feature of these models is their failure to converge. Handling that failure has resulted in ad hoc statistical practices (e.g. Gries, 2015; Bates et al., 2015) that fall outside of standard statistical practice. We present the methodological benefits of a fully specified Bayesian model compared to a mixed-effects model for four linguistic datasets. Failure to converge is not necessarily due to non-zero random slopes or random intercepts; for two of the datasets there is evidence of a non-zero random intercept. In each dataset, the Bayesian model provides a means to account for the multilevel variance in the data while overcoming the failure of the out-of-the-box mixed-effects model to converge.
|