A Hybrid Latent-Class Item Response Model for Detecting Measurement Non-Invariance in Ordinal Scales
Abstract: Measurement non-invariance arises when the psychometric properties of a scale differ across subgroups, undermining the validity of group comparisons. At the item level, such non-invariance manifests as differential item functioning (DIF), which occurs when the conditional distribution of an item response differs across groups after controlling for the latent trait. This paper introduces a statistical framework for detecting DIF in ordinal scales without requiring known group labels or anchor items. We propose a hybrid latent-class item response model for ordinal data based on a proportional-odds formulation, assigning individuals probabilistically to latent classes. DIF is captured through class-specific shifts in item intercepts and slopes, accommodating both uniform and non-uniform DIF. The DIF effects are identified via an $L_1$-penalised marginal likelihood under a sparsity assumption, and model estimation is carried out with a tailored EM algorithm. Simulation studies demonstrate strong recovery of item parameters and of both uniform and non-uniform DIF. An empirical application to a personality test reveals latent subgroups with distinct response patterns and identifies items that may bias group comparisons. The proposed framework provides a flexible approach to assessing measurement invariance in ordinal scales when comparison groups are unobserved or poorly defined.
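To make the model structure concrete, the following is a minimal sketch of the proportional-odds response probabilities with class-specific intercept and slope shifts, as described in the abstract. All parameter values and names here are hypothetical illustrations, not the paper's estimates: a nonzero intercept shift produces uniform DIF, while a nonzero slope shift produces non-uniform DIF.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def category_probs(theta, slope, thresholds):
    """Proportional-odds category probabilities for one ordinal item.

    P(Y >= k | theta) = sigmoid(slope * theta - thresholds[k-1]);
    category probabilities follow by differencing the cumulative curve.
    """
    cum = sigmoid(slope * theta - np.asarray(thresholds))  # P(Y >= 1..K-1)
    cum = np.concatenate(([1.0], cum, [0.0]))              # pad: P(Y>=0)=1, P(Y>=K)=0
    return cum[:-1] - cum[1:]                              # P(Y = k), k = 0..K-1

# Hypothetical reference-class parameters for a 4-category item.
theta = 0.5
base_slope = 1.2
base_thresholds = np.array([-1.0, 0.0, 1.0])

# Class-specific shifts (both zero would mean the item is invariant):
# an intercept shift alone = uniform DIF; a slope shift = non-uniform DIF.
dif_intercept, dif_slope = 0.5, -0.3

p_ref = category_probs(theta, base_slope, base_thresholds)
p_dif = category_probs(theta, base_slope + dif_slope,
                       base_thresholds + dif_intercept)

print("reference class:", np.round(p_ref, 3))
print("DIF class:      ", np.round(p_dif, 3))
```

In the full model these shifts are estimated jointly with probabilistic class memberships, and the $L_1$ penalty shrinks most shifts to exactly zero so that only genuinely DIF-affected items retain nonzero class-specific parameters.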