Mixture models are an unsupervised machine learning approach that can aid mental health research by allowing for personalized patient profiles and better integration of biological and behavioral findings (Frankfurt et al., 2016). However, mental health data are expensive and time-consuming to collect, resulting in much smaller sample sizes than those used in the research domains where mixture models were developed (e.g., computer vision, education). As a result, contemporary applications of mixture models are limited by their inability to adapt to the smaller samples common to mental health research.
To address this issue, we conducted a simulation study to evaluate whether Bayesian approaches can increase power to detect additional latent classes in mixture modeling. Results suggest that priors with modest degrees of informativeness can substantially reduce the sample size needed to detect additional mixture classes, with minimal risk to the validity of results. We will present a systematic approach to prior specification, as well as the circumstances under which this approach works well or poorly in mental health research.
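The general idea can be illustrated with a minimal sketch, not the study's actual simulation design or software. The sketch below uses scikit-learn's `BayesianGaussianMixture`, where the `weight_concentration_prior` parameter controls how informative the prior over class weights is; the sample size, the two-class data-generating process, the candidate prior values, and the 0.05 weight threshold for counting a class as "detected" are all illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Simulate a small, two-class sample of the size common in mental health
# research (n = 120 is an illustrative choice, not a value from the study).
rng = np.random.default_rng(0)
n = 120
x = np.concatenate([
    rng.normal(-1.5, 1.0, size=(n // 2, 1)),  # latent class 1
    rng.normal(1.5, 1.0, size=(n // 2, 1)),   # latent class 2
])

def n_detected_classes(concentration):
    """Fit a Bayesian mixture with a given weight-concentration prior and
    count the components that retain non-trivial weight (> 0.05)."""
    model = BayesianGaussianMixture(
        n_components=5,                           # deliberately overspecified
        weight_concentration_prior=concentration, # prior informativeness
        max_iter=500,
        random_state=0,
    ).fit(x)
    return int(np.sum(model.weights_ > 0.05))

# Compare a strongly shrinking prior against a weaker one.
for c in (1e-3, 1.0):
    print(f"concentration prior = {c}: {n_detected_classes(c)} classes detected")
```

Smaller concentration values shrink redundant component weights toward zero, so varying this prior at a fixed sample size mimics the power comparison described above.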