Abstract Details

Activity Number: 538 - SPEED: Predictive Analytics with Social/Behavioral Science Applications: Spatial Modeling, Education Assessment, Population Behavior, and the Use of Multiple Data Sources
Type: Contributed
Date/Time: Wednesday, August 1, 2018: 10:30 AM to 11:15 AM
Sponsor: Social Statistics Section
Abstract #332796
Title: A Monte Carlo Simulation of the Effects of Ignoring Measurement Non-Invariance on the Standard Error for Mean Difference Testing
Author(s): Scott Colwell* and Theodore J Noseworthy
Companies: University of Guelph and York University
Keywords: Measurement; Measurement Invariance; Bias in Standard Error; Group Comparisons; Latent Variables; Monte Carlo Simulation

The ability to reliably measure latent constructs of interest is fundamental to drawing insightful conclusions in the social, behavioral, and health sciences. When comparing different populations or sub-populations on a specific measure, it is important to first establish that the measure performs equally in both populations. When a measure exhibits non-invariance, making substantive cross-group comparisons becomes problematic, as the measure may not be performing equally across respondents. A significant body of literature supports the need for invariance testing; however, to the best of our knowledge, the implications of ignoring measurement non-invariance in group mean difference testing have yet to be explored. In this research we simulated a six-item multiple group confirmatory factor analysis using R version 3.4.3 (R Core Team 2017) to allow for the mean comparison of two groups, the focal group and the comparison group. Overall, our results indicate that ignoring measurement non-invariance can lead to an underestimation or overestimation of the standard error in group mean difference testing.
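The core idea can be illustrated with a much-simplified Monte Carlo sketch. The authors' study used a multiple group confirmatory factor analysis in R; the Python sketch below is only an analogue using composite (mean) scores rather than latent means, and all parameter values (six items, loadings of 0.7 vs. a hypothetical non-invariant pattern, n = 200 per group, 2,000 replications) are illustrative assumptions, not the study's design. It shows how the empirical standard error of a group mean difference shifts when one group's factor loadings differ from the other's:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, loadings, factor_mean=0.0):
    # One-factor model: item_j = loading_j * eta + unique error.
    # eta ~ N(factor_mean, 1), errors ~ N(0, 1), all independent.
    eta = rng.normal(factor_mean, 1.0, size=n)
    errors = rng.normal(0.0, 1.0, size=(n, len(loadings)))
    return eta[:, None] * np.asarray(loadings) + errors

def empirical_se(reps, n, loadings_a, loadings_b):
    # Empirical SE of the difference in composite (item-mean) score
    # means across two independent groups, over many replications.
    diffs = []
    for _ in range(reps):
        comp_a = simulate_group(n, loadings_a).mean(axis=1)
        comp_b = simulate_group(n, loadings_b).mean(axis=1)
        diffs.append(comp_a.mean() - comp_b.mean())
    return float(np.std(diffs))

# Illustrative (hypothetical) loading patterns for six items:
invariant = [0.7] * 6                      # both groups identical
noninvariant = [0.7, 0.7, 0.7, 0.4, 0.4, 0.4]  # half the loadings weaker

se_invariant = empirical_se(2000, 200, invariant, invariant)
se_noninvariant = empirical_se(2000, 200, invariant, noninvariant)

print(f"SE under invariance:     {se_invariant:.4f}")
print(f"SE under non-invariance: {se_noninvariant:.4f}")
```

Under this setup the weaker loadings reduce the composite's variance in the non-invariant group, so the true sampling variability of the mean difference changes; an analysis that assumes invariance would work with the wrong standard error. The direction and size of the bias depend on the loading pattern, which is consistent with the abstract's finding that the SE can be either under- or overestimated.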

Authors who are presenting talks have a * after their name.