Search Results (3)

Search Parameters:
Keywords = nonignorable item responses

16 pages, 504 KiB  
Article
Regularized Mislevy-Wu Model for Handling Nonignorable Missing Item Responses
by Alexander Robitzsch
Information 2023, 14(7), 368; https://doi.org/10.3390/info14070368 - 28 Jun 2023
Cited by 3 | Viewed by 1803
Abstract
Missing item responses are frequently found in educational large-scale assessment studies. In this article, the Mislevy-Wu item response model is applied for handling nonignorable missing item responses. This model allows the missingness of an item to depend on the item itself and on a further latent variable. However, with low to moderate amounts of missing item responses, the model parameters for the missingness mechanism are difficult to estimate. Hence, regularized estimation using a fused ridge penalty is applied to the Mislevy-Wu model to stabilize estimation. The fused ridge penalty function is defined separately for multiple-choice and constructed-response items because previous research indicated that the missingness mechanisms differ strongly between the two item types. A simulation study showed that regularized estimation improves the stability of item parameter estimation. The method is also illustrated using international data from the Progress in International Reading Literacy Study (PIRLS) 2011.
(This article belongs to the Topic Soft Computing)
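As a rough sketch of what a fused ridge penalty on the item-specific nonignorability parameters could look like (the symbol delta_i and the exact grouping are notational assumptions for illustration, not taken from the article), the penalized likelihood might add a term such as

\[
P_\lambda(\boldsymbol{\delta}) \;=\; \lambda \sum_{g \in \{\mathrm{MC},\,\mathrm{CR}\}} \; \sum_{i<j;\; i,j \in g} (\delta_i - \delta_j)^2 ,
\]

which shrinks the delta parameters of items of the same type (multiple-choice or constructed response) toward each other and thereby stabilizes their estimation when only few item responses are missing.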

17 pages, 582 KiB  
Article
Nonignorable Consequences of (Partially) Ignoring Missing Item Responses: Students Omit (Constructed Response) Items Due to a Lack of Knowledge
by Alexander Robitzsch
Knowledge 2023, 3(2), 215-231; https://doi.org/10.3390/knowledge3020015 - 30 Apr 2023
Cited by 3 | Viewed by 2059
Abstract
In the recent literature, alternative models for handling missing item responses in large-scale assessments have been proposed. Based on simulations and on arguments from psychometric test theory, this literature argues that missing item responses should never be scored as incorrect in scaling models but should instead be treated as ignorable or handled with a model. The present article shows that these arguments have limited validity and illustrates the consequences in a country comparison using the PIRLS 2011 study. It is argued that students omit (constructed-response) items because they do not know the correct answer. Treating missing item responses in any way other than scoring them as incorrect leads to significant changes in country rankings, which has nonignorable consequences for the validity of the results. Additionally, two alternative item response models are proposed that rest on different assumptions about missing item responses. In the first, pseudo-likelihood approach, missing item responses of a particular student are replaced by a score that ranges between zero and a model-implied probability computed from the non-missing items. In the second approach, the probability of a missing item response is predicted by a latent response propensity variable and by the item response itself. The models were applied to the PIRLS 2011 study, demonstrating that country comparisons change under different modeling assumptions for missing item responses.
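A minimal sketch of the first, pseudo-likelihood idea, with notation chosen here for illustration rather than taken from the article: the missing response of person p on item i could be replaced by a pseudo-score

\[
\tilde{x}_{pi} \;=\; \omega \, \hat{P}(X_{pi} = 1 \mid \text{non-missing responses of person } p), \qquad 0 \le \omega \le 1 ,
\]

where the probability is computed from the scaling model using the student's observed items only. Setting omega = 0 reproduces scoring missing responses as incorrect, while omega = 1 assigns the full model-implied probability; intermediate values interpolate between the two treatments.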

35 pages, 707 KiB  
Article
On the Treatment of Missing Item Responses in Educational Large-Scale Assessment Data: An Illustrative Simulation Study and a Case Study Using PISA 2018 Mathematics Data
by Alexander Robitzsch
Eur. J. Investig. Health Psychol. Educ. 2021, 11(4), 1653-1687; https://doi.org/10.3390/ejihpe11040117 - 14 Dec 2021
Cited by 15 | Viewed by 4559
Abstract
Missing item responses are prevalent in educational large-scale assessment studies such as the Programme for International Student Assessment (PISA). The current operational practice scores missing item responses as wrong, but several psychometricians have advocated a model-based treatment based on the latent ignorability assumption. In this approach, item responses and response indicators are jointly modeled conditional on a latent ability and a latent response propensity variable. Alternatively, imputation-based approaches can be used. The latent ignorability assumption is weakened in the Mislevy-Wu model, which characterizes a nonignorable missingness mechanism and allows the missingness of an item to depend on the item itself. The scoring of missing item responses as wrong and the latent ignorable model are both submodels of the Mislevy-Wu model. In an illustrative simulation study, it is shown that the Mislevy-Wu model provides unbiased model parameters. Moreover, the simulation replicates the finding from various simulation studies in the literature that scoring missing item responses as wrong yields biased estimates if latent ignorability holds in the data-generating model. However, if missing item responses can arise only from incorrect item responses, applying an item response model that relies on latent ignorability results in biased estimates. The Mislevy-Wu model guarantees unbiased parameter estimates if this more general model holds in the data-generating model. In addition, this article uses the PISA 2018 mathematics dataset as a case study to investigate the consequences of different missing data treatments on country means and country standard deviations. The obtained country means and standard deviations can differ substantially across the scaling models. In contrast to previous statements in the literature, scoring missing item responses as incorrect provided a better model fit than a latent ignorable model for most countries. Furthermore, the dependence of the missingness of an item on the item itself, after conditioning on the latent response propensity, was much more pronounced for constructed-response items than for multiple-choice items. As a consequence, scaling models that presuppose latent ignorability should be rejected from two perspectives. First, the Mislevy-Wu model is preferred over the latent ignorable model for reasons of model fit. Second, in the discussion section, we argue that model fit should play only a minor role in choosing psychometric models in large-scale assessment studies because validity aspects are most relevant. Missing data treatments that countries (and, hence, their students) can simply manipulate result in unfair country comparisons.
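For orientation, one common parametrization of the Mislevy-Wu missingness model can be sketched as follows (symbols and sign conventions vary across papers and are assumptions here). Writing R_pi = 1 for a missing response of person p on item i, xi_p for the latent response propensity, and Psi for the logistic function,

\[
P(R_{pi} = 1 \mid \xi_p, X_{pi} = x) \;=\; \Psi(\beta_i + \xi_p + x\,\delta_i).
\]

The item parameter delta_i captures how strongly missingness depends on the (possibly unobserved) item response itself: delta_i = 0 yields the latent ignorable model, while delta_i tending to minus infinity implies that correct responses are never omitted, so a missing response can only stem from an incorrect one, which corresponds to scoring missing responses as wrong.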
