Abstract:
|
The change in performance of student subgroups over time is a key target of inference in large-scale educational survey assessments such as the National Assessment of Educational Progress (NAEP). The operational procedure for placing a new assessment cycle or year onto the reporting scale involves estimating item response theory (IRT) parameters concurrently with those of the immediately preceding cycle, followed by a linear transformation onto the reporting metric. Trend comparisons spanning many years of data may therefore accumulate error over a long chain of IRT linkages, particularly as new items are introduced into the item pool while some blocks are discontinued. In this study, the cumulative linking error is explored through simulation studies that approximate a series of ten assessment cycles under multiple conditions, including the trend pattern, the sparsity of the item pool, and the consistency of the generating item parameters.
|