Assessing measurement invariance in the presence of testlets
Measurement invariance has long been a central concern in confirmatory factor analysis. Establishing measurement invariance across groups is necessary before an instrument can be validly used to compare group means or summative scores. Over the years, many studies have examined tests of measurement invariance in factor models; however, none has assessed measurement invariance when so-called testlets must be modeled in the factor-analytic model. Testlets introduce nuisance covariation that can interfere with detecting measurement invariance. Several models have been developed to accommodate such added covariation, including the correlated error model, the CT-C(M-1) model, and the random intercept factor model. Can these models help detect measurement invariance in the presence of testlets, and which testlet model is most useful for recovering the true level of measurement invariance? Simulations are used to determine when it is possible to compensate for this testlet-based covariation and which method works best across various measurement invariance tests and scenarios. In general, the results show that in some scenarios none of the models correctly identifies the level of measurement invariance; otherwise, the correlated error model is the least prone to Type I error.
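The nuisance covariation that testlets add can be illustrated with a small simulation. The sketch below (not from the dissertation; all parameter values are assumed for illustration) generates item responses from a one-factor model in which the first three items also load on a shared testlet factor, and shows that residual covariation inflates the correlation between items within the same testlet relative to items in different testlets:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000                # respondents
loading = 0.7             # common-factor loading (assumed value)
testlet_loading = 0.5     # testlet-factor loading (assumed value)

theta = rng.standard_normal(n)    # common factor shared by all items
testlet = rng.standard_normal(n)  # testlet factor shared by items 0-2 only

items = np.empty((n, 6))
for j in range(6):
    e = rng.standard_normal(n)    # unique error for item j
    extra = testlet_loading * testlet if j < 3 else 0.0
    items[:, j] = loading * theta + extra + e

r = np.corrcoef(items, rowvar=False)
within = r[0, 1]    # two items in the same testlet
between = r[0, 4]   # items from different testlets
print(f"within-testlet r = {within:.3f}, between-testlet r = {between:.3f}")
```

Under these assumed loadings, the within-testlet correlation exceeds the between-testlet correlation even though every item shares the same common factor; a fitted factor model that ignores the testlet structure must absorb this excess covariation somewhere, which is why testlet models such as the correlated error model are needed before measurement invariance can be tested cleanly.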
Alvarado, Luis A., "Assessing measurement invariance in the presence of testlets" (2011). ETD Collection for University of Texas, El Paso. AAI1498264.