Estimation of Person Ability under Rapid and Effortful Responding
Abstract
1. Introduction
2. Item Response Models: 2PL and 3PL
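As a point of reference, the standard 2PL and 3PL item response functions can be written directly in R (the language of the RBIRT function below). This is a minimal sketch using the common parameterization, omitting the D = 1.7 scaling constant that some parameterizations include:

```r
# 2PL: probability of a correct response given theta, with item
# discrimination a and difficulty b.
p_2pl <- function(theta, a, b) {
  1 / (1 + exp(-a * (theta - b)))
}

# 3PL: adds a lower asymptote c (pseudo-guessing parameter).
p_3pl <- function(theta, a, b, c) {
  c + (1 - c) * p_2pl(theta, a, b)
}

p_2pl(theta = 0, a = 1.2, b = 0.5)           # ~0.354
p_3pl(theta = 0, a = 1.2, b = 0.5, c = 0.2)  # ~0.483
```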
3. Ability-Based Guessing Models
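A minimal sketch of the general structure of ability-based guessing follows, assuming a logistic solution process combined with a guessing process whose success probability itself increases with theta through a slope lambda; the parameter names and exact functional form are illustrative, not the precise notation of San Martín et al. (2006):

```r
# Ability-based guessing sketch: the examinee either solves the item
# (2PL component) or, failing that, guesses, with guessing success
# increasing in theta through the slope lambda.
p_abg <- function(theta, a, b, lambda, d = 0) {
  p_solve <- 1 / (1 + exp(-a * (theta - b)))       # solution process
  p_guess <- 1 / (1 + exp(-(lambda * theta + d)))  # informed guessing
  p_solve + (1 - p_solve) * p_guess
}

p_abg(theta = 0, a = 1.2, b = 0.5, lambda = 0.2)   # ~0.677
```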
4. The Effort-Moderated Model (Wise and DeMars 2006)
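Under the effort-moderated model, the response function switches between the 2PL and chance, depending on whether the response was classified as effortful. A minimal sketch, where k denotes the number of response options:

```r
# Effort-moderated response probability: the 2PL governs effortful
# (solution) behavior; rapid guesses succeed at the chance rate 1/k.
p_em <- function(theta, a, b, effortful, k) {
  p2 <- 1 / (1 + exp(-a * (theta - b)))
  ifelse(effortful == 1, p2, 1 / k)
}

p_em(theta = 0, a = 1.2, b = 0.5, effortful = 1, k = 4)  # 2PL: ~0.354
p_em(theta = 0, a = 1.2, b = 0.5, effortful = 0, k = 4)  # chance: 0.25
```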
5. Extensions of the Effort-Moderated Model
6. Estimation of Person Theta Using MLE and BME
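A minimal sketch of both estimators for a 2PL response vector, assuming a standard normal prior for the Bayes modal (BME) estimate; R's `optimize()` handles the one-dimensional maximization:

```r
# Log-likelihood of theta for a 2PL response vector.
loglik_2pl <- function(theta, resp, a, b) {
  p <- 1 / (1 + exp(-a * (theta - b)))
  sum(resp * log(p) + (1 - resp) * log(1 - p))
}

# MLE: maximize the log-likelihood over a bounded interval.
theta_mle <- function(resp, a, b) {
  optimize(loglik_2pl, interval = c(-4, 4), maximum = TRUE,
           resp = resp, a = a, b = b)$maximum
}

# BME: add a standard normal log-prior and maximize the log-posterior.
theta_bme <- function(resp, a, b) {
  optimize(function(t) loglik_2pl(t, resp, a, b) + dnorm(t, log = TRUE),
           interval = c(-4, 4), maximum = TRUE)$maximum
}

a <- c(1.1, 0.9, 1.4); b <- c(-0.5, 0.0, 0.8)
resp <- c(1, 1, 0)
theta_mle(resp, a, b)  # maximum likelihood estimate of theta
theta_bme(resp, a, b)  # shrunk toward 0 by the prior
```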
7. Importance and Goals of the Present Study
8. RBIRT Function in R
- Model: Estimation method; 1 = maximum likelihood (MLE), 2 = Bayes modal (BME);
- Nopt: Number of response options (e.g., the alternatives of a multiple-choice item or the points of a Likert-type scale);
- Respv: The person's response vector for a given measure;
- ip: Item parameter matrix, with the first two columns holding the a and b estimates and the third column the fixed slope that defines informed guessing by combining theta with random guessing, per the San Martín et al. (2006) ability-based guessing model;
- rbe: A vector flagging each of the person's responses as rapid (0) or effortful (1);
- est: Guessing model; 1 = the Rios and Soland random-guessing model, 2 = the San Martín et al. ability-based guessing model (a hypothetical call illustrating these arguments is sketched below).
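To make the interface concrete, here is a hypothetical invocation of RBIRT. The argument names follow the list above, but the exact signature and return value are assumptions, since the function itself is defined in the article:

```r
# Hypothetical RBIRT call (assumes the function has been sourced).
# ip: columns 1-2 hold the a and b estimates; column 3 holds the fixed
# slope for informed guessing under the San Martin et al. (2006) model.
ip   <- cbind(a = c(1.1, 0.9, 1.4), b = c(-0.5, 0.0, 0.8),
              slope = c(0.2, 0.2, 0.2))
resp <- c(1, 0, 1)   # person's responses
rbe  <- c(1, 1, 0)   # 1 = effortful, 0 = rapid responding

# BME estimation (Model = 2) with ability-based guessing (est = 2)
# on four-option items (Nopt = 4).
RBIRT(Model = 2, Nopt = 4, Respv = resp, ip = ip, rbe = rbe, est = 2)
```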
8.1. An Application of the RBIRT Function Using Simulated Data
Item | I1 | I2 | I3 | I4 | I5 | I6 | I7 |
---|---|---|---|---|---|---|---|
Person 9 | 1 | 1 | 0 | 1 | 1 | 1 | 0 |
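For instance, Person 9's response vector from the table above could be passed to RBIRT as follows; the item parameters and effort vector here are illustrative placeholders, not the simulated values used in the article:

```r
# Person 9's responses to items I1-I7 (from the table above).
resp9 <- c(1, 1, 0, 1, 1, 1, 0)

# Illustrative item parameters: unit discriminations, difficulties
# spread from -1.5 to 1.5, and a common guessing slope of 0.2.
ip9  <- cbind(a = rep(1, 7), b = seq(-1.5, 1.5, by = 0.5),
              slope = rep(0.2, 7))
rbe9 <- c(1, 1, 1, 0, 1, 1, 1)  # item I4 flagged as a rapid response

# MLE estimation (Model = 1) with the random-guessing correction (est = 1).
RBIRT(Model = 1, Nopt = 4, Respv = resp9, ip = ip9, rbe = rbe9, est = 1)
```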
8.2. An Application of the RBIRT Function in the Measurement of Cognitive Ability
9. Limitations and Future Directions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
TITLE: 2PL (IRT) model           ! title of the model run (label, not required)
DATA: FILE = data.dat;           ! name of the data file in fixed ASCII format
VARIABLE: NAMES = u1-u6;         ! names of the items, u1-u6
  CATEGORICAL = u1-u6;           ! items are categorical
ANALYSIS: ESTIMATOR = MLR;       ! maximum likelihood with robust standard errors
MODEL: f BY u1-u6*;              ! one-factor model with six freely estimated loadings
  f@1;                           ! factor variance fixed to unity for identification
  [u1$1-u6$1];                   ! first thresholds estimating item difficulty in logits
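For readers working in R rather than Mplus, a roughly equivalent unidimensional 2PL calibration can be sketched with the mirt package; this is an assumption of the example, not the tooling used in the article:

```r
# Roughly equivalent 2PL calibration in R with the mirt package,
# assuming data.dat holds the six binary items u1-u6.
library(mirt)

dat <- read.table("data.dat", col.names = paste0("u", 1:6))
fit <- mirt(dat, model = 1, itemtype = "2PL")  # one-factor 2PL
coef(fit, IRTpars = TRUE, simplify = TRUE)     # a and b in IRT metric
```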
Appendix B
1. For example, for a 20-item test, the lower and upper theta values when a response vector contains only 0s or only 1s would equal ±3.68879.
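One plausible way to reproduce this bound (an assumption about the exact adjustment used, not the article's stated derivation) is to treat a perfect 20-item score as 20 successes against half an "effective" failure:

```r
# Log-odds of a perfect score after a 0.5 correction to the zero cell:
# log(20 / 0.5) = log(40) = 3.688879, i.e., about +/-3.6889 for a
# 20-item test, consistent with the footnote's bound.
log(20 / 0.5)
```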
References
- Baird, Benjamin, Jonathan Smallwood, Michael D. Mrazek, Julia W. Y. Kam, Michael S. Franklin, and Jonathan W. Schooler. 2012. Inspired by distraction: Mind wandering facilitates creative incubation. Psychological Science 23: 1117–22. [Google Scholar] [CrossRef]
- Baker, Frank B., and Seock-Ho Kim. 2017. The Basics of Item Response Theory Using R. New York: Springer. [Google Scholar]
- Birnbaum, Allan. 1968. Some latent trait models and their use in inferring an examinee’s ability. In Statistical Theories of Mental Test Scores. Edited by F. M. Lord and M. R. Novick. London: Addison-Wesley, pp. 397–479. [Google Scholar]
- Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Berlin: Springer. [Google Scholar]
- Bishop, Yvonne M., Stephen E. Fienberg, and Paul W. Holland. 1975. Discrete Multivariate Analysis. Cambridge: MIT Press, Reprinted by Springer, 2007. MR0381130. [Google Scholar]
- Dimitrov, Dimiter M. 2017. Examining differential item functioning: IRT-based detection in the framework of confirmatory factor analysis. Measurement and Evaluation in Counseling and Development 50: 183–200. [Google Scholar] [CrossRef]
- Embretson, Susan E., and Steven P. Reise. 2000. Item Response Theory for Psychologists. Mahwah: Lawrence Erlbaum. [Google Scholar]
- Erwin, T. Dary, and Steven L. Wise. 2002. A scholar-practitioner model for assessment. In Building a Scholarship of Assessment. Edited by Trudy W. Banta. San Francisco: Jossey-Bass, pp. 67–81. [Google Scholar]
- Foley, Brett P. 2016. Getting lucky: How guessing threatens the validity of performance classifications. Practical Assessment, Research, and Evaluation 21: 3. [Google Scholar]
- Fox, Jean-Paul, and Sukaesi Marianti. 2016. Joint modeling of ability and differential speed using responses and response times. Multivariate Behavioral Research 51: 540–53. [Google Scholar] [CrossRef]
- Gelman, Andrew, John B. Carlin, Hal S. Stern, and Donald B. Rubin. 2003. Bayesian Data Analysis. London: Chapman & Hall. [Google Scholar]
- Haladyna, Thomas M. 2004. Developing and Validating Multiple-Choice Items. New York: Psychology Press. [Google Scholar]
- Han, Kyung T. 2012. Fixing the c parameter in the three-parameter logistic model. Practical Assessment, Research & Evaluation 17: 1–24. [Google Scholar]
- Jeon, Minjeong, and Paul De Boeck. 2019. An analysis of an item-response strategy based on knowledge retrieval. Behavior Research Methods 51: 697–719. [Google Scholar] [CrossRef]
- Kyllonen, Patrick C., William C. Tirre, and Raymond E. Christal. 1991. Knowledge and processing speed as determinants of associative learning. Journal of Experimental Psychology: General 120: 89–108. [Google Scholar] [CrossRef]
- Lord, Frederic M. 1983. Maximum likelihood estimation of item response parameters when some responses are omitted. Psychometrika 48: 477–82. [Google Scholar] [CrossRef]
- Lu, Jing, Chun Wang, Jiwei Zhang, and Jian Tao. 2020. A mixture model for responses and response times with a higher-order ability structure to detect rapid guessing behaviour. British Journal of Mathematical & Statistical Psychology 73: 261–88. [Google Scholar]
- MacCallum, Robert C., Michael W. Browne, and Hazuki M. Sugawara. 1996. Power analysis and determination of sample size for covariance structure modeling. Psychological Methods 1: 130–49. [Google Scholar] [CrossRef]
- Man, Kaiwen, Jeffrey R. Harring, Hong Jiao, and Peida Zhan. 2019. Joint modeling of compensatory multidimensional item responses and response times. Applied Psychological Measurement 43: 639–54. [Google Scholar] [CrossRef] [PubMed]
- Otter, Thomas, Greg M. Allenby, and Trish Van Zandt. 2008. An integrated model of discrete choice and response time. Journal of Marketing Research 45: 593–607. [Google Scholar] [CrossRef]
- Owen, Roger J. 1975. A Bayesian sequential procedure for quantal response in the context of adaptive mental testing. Journal of the American Statistical Association 70: 351–56. [Google Scholar] [CrossRef]
- Rios, Joseph A., and James Soland. 2021. Parameter estimation accuracy of the Effort-Moderated Item Response Theory Model under multiple assumption violations. Educational and Psychological Measurement 81: 569–94. [Google Scholar] [CrossRef]
- Rios, Joseph A., and James Soland. 2022. An investigation of item, examinee, and country correlates of rapid guessing in PISA. International Journal of Testing 22: 154–84. [Google Scholar] [CrossRef]
- Rose, Norman. 2010. Maximum Likelihood and Bayes Modal Ability Estimation in Two-Parametric IRT Models: Derivations and Implementation. Jena: Schriften zur Bildungsf. [Google Scholar]
- San Martín, Ernesto, Guido del Pino, and Paul De Boeck. 2006. IRT models for ability-based guessing. Applied Psychological Measurement 30: 183–203. [Google Scholar] [CrossRef]
- Semmes, Robert, Mark L. Davison, and Catherine Close. 2011. Modeling individual differences in numerical reasoning speed as a random effect of response time limits. Applied Psychological Measurement 35: 433–46. [Google Scholar] [CrossRef]
- Soland, James, Steven L. Wise, and Lingyun Gao. 2019. Identifying disengaged survey responses: New evidence using response time metadata. Applied Measurement in Education 32: 151–65. [Google Scholar] [CrossRef]
- Szpunar, Karl K., Samuel T. Moulton, and Daniel L. Schacter. 2013. Mind wandering and education: From the classroom to online learning. Frontiers in Psychology 4: 495. [Google Scholar] [CrossRef]
- Ubulom, William J., Clifford M. Amini, and Victoria Island. 2012. Determining the effect of guessing on test scores: An empirical analysis. Scottish Journal of Arts, Social Sciences and Scientific Studies 4: 32–38. [Google Scholar]
- van der Linden, Wim J. 2006. A lognormal model for response times on test items. Journal of Educational and Behavioral Statistics 31: 181–204. [Google Scholar] [CrossRef]
- van der Linden, Wim J. 2007. A hierarchical framework for modeling speed and accuracy on test items. Psychometrika 72: 287–308. [Google Scholar] [CrossRef]
- White, Halbert. 1982. Maximum likelihood estimation of misspecified models. Econometrica 50: 1–25. [Google Scholar] [CrossRef]
- Wise, Steven L. 2006. An investigation of the differential effort received by items on a low-stakes computer-based test. Applied Measurement in Education 19: 95–114. [Google Scholar] [CrossRef]
- Wise, Steven L. 2009. Strategies for managing the problem of unmotivated examinees in low-stakes testing programs. The Journal of General Education 58: 153–66. [Google Scholar] [CrossRef]
- Wise, Steven L. 2015a. Effort analysis: Individual score validation of achievement test data. Applied Measurement in Education 28: 237–52. [Google Scholar] [CrossRef]
- Wise, Steven L. 2015b. Response time as an indicator of test taker speed: Assumptions meet reality. Measurement 13: 186–88. [Google Scholar] [CrossRef]
- Wise, Steven L. 2019. An information-based approach to identifying rapid-guessing thresholds. Applied Measurement in Education 32: 325–36. [Google Scholar] [CrossRef]
- Wise, Steven L. 2020a. The impact of test-taking disengagement on item content representation. Applied Measurement in Education 33: 83–94. [Google Scholar] [CrossRef]
- Wise, Steven L. 2020b. Six insights regarding test-taking disengagement. Educational Research and Evaluation 26: 328–38. [Google Scholar] [CrossRef]
- Wise, Steven L., and Christine E. DeMars. 2006. An application of item response time: The effort-moderated IRT model. Journal of Educational Measurement 43: 19–38. [Google Scholar] [CrossRef]
- Wise, Steven L., and Xiaojing Kong. 2005. Response time effort: A new measure of examinee motivation in computer-based tests. Applied Measurement in Education 18: 163–83. [Google Scholar] [CrossRef]
- Wise, Steven L., James Soland, and Yuanchao Bo. 2020. The (non)impact of differential test taker engagement on aggregated scores. International Journal of Testing 20: 57–77. [Google Scholar] [CrossRef]
- Wise, Steven L., Megan R. Kuhfeld, and John Cronin. 2022. Assessment in the time of COVID-19: Understanding patterns of student disengagement during remote Low-Stakes testing. Educational Assessment 27: 136–51. [Google Scholar] [CrossRef]
- Zhu, Zhemin, Chun Wang, and Jian Tao. 2019. A two-parameter logistic extension model: An efficient variant of the three-parameter logistic model. Applied Psychological Measurement 43: 449–63. [Google Scholar] [CrossRef]
Model | LogLik | Pearson χ²/df | LRT/df | AIC | BIC | SABIC |
---|---|---|---|---|---|---|
2PL | −28,709.021 | 66.976/51 | 66.169/51 | 57,442.043 | 57,526.581 | 57,488.448 |
2PL EM | −27,070.960 | 64.389/51 | 62.742/51 | 54,165.920 | 54,249.860 | 54,211.727 |
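As a quick arithmetic check, the 2PL row's AIC follows from its log-likelihood under the assumption of 12 free parameters (consistent with an a and a b for each of six 2PL items, as in Appendix A):

```r
# AIC = -2 * logLik + 2 * k; with k = 12 this reproduces the reported
# 2PL value of 57,442.043 up to rounding.
loglik_2pl_fit <- -28709.021
k <- 12
-2 * loglik_2pl_fit + 2 * k  # 57442.04
```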
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).