Article

Same Test, Better Scores: Boosting the Reliability of Short Online Intelligence Recruitment Tests with Nested Logit Item Response Theory Models

by Martin Storme, Nils Myszkowski, Simon Baron and David Bernard

1 IESEG School of Management, 59800 Lille, France
2 LEM-CNRS 9221, 59800 Lille, France
3 Department of Psychology, Pace University, New York, NY 10038, USA
4 Assess First, 75000 Paris, France
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Received: 29 April 2019 / Revised: 4 July 2019 / Accepted: 5 July 2019 / Published: 10 July 2019
(This article belongs to the Special Issue Analysis of an Intelligence Dataset)
Assessing job applicants’ general mental ability online poses psychometric challenges, because tests must be brief yet accurate. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in matrix-type reasoning tests. In the present research, we extended this result to a different context (online intelligence testing for recruitment) and a larger sample (N = 2949 job applicants). We found that the NLMs outperformed the Nominal Response Model (Bock, 1972) and provided significant reliability gains over their binary logistic counterparts. In line with previous research, the reliability gain was concentrated at low ability levels. Implications and practical recommendations are discussed.
Keywords: E-assessment; general mental ability; nested logit models; item response theory; ability-based guessing

MDPI and ACS Style

Storme, M.; Myszkowski, N.; Baron, S.; Bernard, D. Same Test, Better Scores: Boosting the Reliability of Short Online Intelligence Recruitment Tests with Nested Logit Item Response Theory Models. J. Intell. 2019, 7, 17. https://doi.org/10.3390/jintelligence7030017

AMA Style

Storme M, Myszkowski N, Baron S, Bernard D. Same Test, Better Scores: Boosting the Reliability of Short Online Intelligence Recruitment Tests with Nested Logit Item Response Theory Models. Journal of Intelligence. 2019; 7(3):17. https://doi.org/10.3390/jintelligence7030017

Chicago/Turabian Style

Storme, Martin, Nils Myszkowski, Simon Baron, and David Bernard. 2019. "Same Test, Better Scores: Boosting the Reliability of Short Online Intelligence Recruitment Tests with Nested Logit Item Response Theory Models" Journal of Intelligence 7, no. 3: 17. https://doi.org/10.3390/jintelligence7030017

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
