Article

Same Test, Better Scores: Boosting the Reliability of Short Online Intelligence Recruitment Tests with Nested Logit Item Response Theory Models

1 IESEG School of Management, 59800 Lille, France
2 LEM-CNRS 9221, 59800 Lille, France
3 Department of Psychology, Pace University, New York, NY 10038, USA
4 Assess First, 75000 Paris, France
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 29 April 2019 / Revised: 4 July 2019 / Accepted: 5 July 2019 / Published: 10 July 2019
(This article belongs to the Special Issue Analysis of an Intelligence Dataset)

Abstract

Assessing job applicants’ general mental ability online poses psychometric challenges due to the necessity of having brief but accurate tests. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in reasoning matrix-type tests. In the present research, we extended this result to a different context (online intelligence testing for recruitment) and a larger sample (N = 2949 job applicants). We found that the NLMs outperformed the Nominal Response Model (Bock, 1972) and provided significant reliability gains compared with their binary logistic counterparts. In line with previous research, the gain in reliability was especially apparent at low ability levels. Implications and practical recommendations are discussed.

1. Introduction

With the development of the Internet, the assessment of job applicants is increasingly performed online, which facilitates large scale testing while reducing costs [1,2]. This recent trend has led to the creation of a new research field in psychometrics, referred to as e-assessment [2]. Considering the relevance of General Mental Ability (GMA) in predicting job performance [3], many e-assessment platforms have included tasks that aim at capturing it—such as logical series or logical reasoning matrices—in their online test batteries.
The assessment phase in e-recruiting poses very specific psychometric challenges. On the one hand, the assessment should ideally lead to a short-list of the best applicants [1,2]. The accuracy of the assessment is therefore a key issue in e-recruiting, just like it is in recruiting in general. On the other hand, the assessment phase cannot require applicants to take part in assessment processes that are too time-consuming and too cognitively demanding. It is indeed not acceptable to extensively test people who have a relatively low chance of getting an interview. Perceived unfairness of the recruitment process has been shown to have a negative impact on the image of the recruiting company, which can lead to negative word of mouth and/or intentions not to complete the recruitment process [4,5]. The challenge that is inherent to e-assessment in a recruitment context is essentially the challenge of short psychometric measures, which is to extract as much information as possible from short instruments.
Extracting reliable information from short tests remains a real challenge from a measurement perspective [6]. Fortunately, psychometricians have allies in this challenging endeavour, such as Item Response Theory (IRT) modeling, which often allows them to extract more information from short psychometric tools than Classical Test Theory (CTT). Originally suggested for multiple-choice items by Bock [7], one way for researchers to take advantage of the IRT framework in logical series or matrices tests consists of extracting information from which incorrect responses were selected. This approach is based on the premise that when a test taker selects a wrong response option out of a set of wrong response options, the choice of that particular option can carry information about the ability of the test taker. Further, recent developments [8] applied to progressive matrices have suggested recovering additional information from distractor responses through Nested Logit Models (NLM) [9], and have indicated that, in logical reasoning tests, such models may be more appropriate than Bock’s [7] Nominal Response Model, but also than traditional binary IRT models [8]. In this research, recovering information from the choice of distractors provided significant gains in reliability in comparison with not recovering such information and using traditional binary logistic models.
Currently, no study has investigated whether applying this approach in the field of recruitment would lead to gains in reliability. Yet, taking an online GMA test as part of a recruitment process differs in several ways from taking a GMA test for an experiment in the lab. There is reasonable evidence to suspect that such differences could affect the way distractors are processed by test takers, which could possibly jeopardize the very idea of recovering psychometric information from distractors. In the present article, our main aim is to extend and conceptually replicate previous research conducted with students in laboratory conditions [8] to online personnel pre-selection contexts, by testing whether the modeling strategies previously suggested are able, even in this context, to provide tangible gains in reliability. The effort of conducting conceptual replications in the field is crucial in psychology to rule out the possibility that a laboratory finding is too weak to be relevant in contexts that are less tightly controlled [10].

1.1. Binary Item Response Theory Models

Item Response Theory (IRT) has traditionally helped psychometricians improve the reliability of the ability estimates obtained with short intelligence measures [8,11]. IRT provides a framework that has indeed been shown to improve the reliability of measurement compared to the Classical Test Theory (CTT) approach [12]. While CTT assumes that all items are linked to the latent trait in a similar fashion, IRT assumes that each item is linked to the latent trait in a unique manner [13]. The aim of IRT is to model the probability of a response to an item as a function of the latent trait or ability of the test taker, traditionally with a non-linear function of the latent trait that is unique to each item. In the case of binary responses, the non-linear function is frequently the logistic function. Because of the flexibility of its parametrization in comparison with CTT, IRT allows for accounting for a variety of testing phenomena and for extracting information that is relevant in the context of GMA assessment [8].
GMA tests, such as progressive matrices or logical series, usually contain one correct answer option and several incorrect answer options—which are often referred to as distractors. Although the response dataset is thus polytomous, it is typical to recode the dataset by collapsing the distractor responses together, which yields a dichotomous success/failure variable format. The binary IRT approach generally consists in modeling these dichotomous responses using a logistic function of the latent ability and a set of item parameters representing various item characteristics (difficulty, discrimination, etc.).
The simplest IRT models, which include only one parameter and are referred to as One-Parameter Logistic (1PL) models, characterize items by their level of difficulty only. The difficulty parameter corresponds to the level of the latent trait at which the slope of the function linking the ability to the probability of selecting the correct response option reaches its maximum, that is, the ability level where the discrimination of the item is highest. The model is often extended with another parameter, discrimination, leading to Two-Parameter Logistic (2PL) models. Such models take into account not only the difficulty of an item, but also its ability to discriminate between ability levels. The discrimination parameter corresponds to the strength of the relationship between the ability and the probability of selecting the correct response option. Three-Parameter Logistic (3PL) models add to the previous models a variable lower asymptote in the relation between the ability and the probability of selecting the correct response option. In the context of IRT, the lower asymptote corresponds to the probability of selecting the correct answer to a given item by guessing. Therefore, 3PL models allow items to be characterized by the extent to which they are susceptible to correct guessing. A fourth parameter is included in 4-Parameter Logistic (4PL) models, which corresponds to a variable upper asymptote in the relation between the ability and the probability of selecting the correct response option. In the context of IRT, the upper asymptote corresponds to the probability of responding incorrectly to an item in spite of having a level of ability that should normally lead to providing the right answer. This parameter allows the modelling of inattention or slipping. Although 4PL models are used less frequently than 1PL, 2PL, and 3PL models, they have been shown to correct adequately for careless mistakes and to improve measurement efficiency [14,15].
Although binary IRT is able to model phenomena that appear in matrix-type reasoning tasks, even models that include guessing fail to account for the possibility that choosing one distractor over another could be related to the respondent’s ability, a phenomenon often described as ability-based guessing [16]. Indeed, the lower asymptote parameter of the 3PL and 4PL models accounts for the probability of correctly guessing, but which distractor is chosen when an examinee uses a guessing strategy is not considered: all distractor responses are still collapsed together as incorrect. Yet, if one considers that the guessing process is related to the ability, then the outcome of this process (the distractor chosen) can contain information about the ability that binary models fail to recover.

1.2. Recovering Distractor Information

In matrix-type or logical series type tests, distractors are usually designed in a way that they are only partially in line with the set of rules that structures the logical series. For example, if three rules are structuring the progression of a logical series, the correct response option will respect all three of them, but frequently a distractor could respect only two, while another may respect one or even none of them. In this example, a distractor that respects two out of three rules could be considered as a better response option than a distractor that would only respect one out of three rules, although both are ultimately incorrect response options. As a consequence, the wrong response options that are selected by test takers are usually not equivalent in (in)correctness, and thus could carry information about their ability [17].

1.2.1. The Nominal Response Model

A traditional approach to recovering information from distractors is to fit the nominal data with Bock’s [7] Nominal Response Model (NRM). This model is essentially a multinomial adaptation of the 2PL model, where the probability $P_{iv}(\theta_j)$ that an examinee $j$ chooses a category $v$ (which could be the correct response or a distractor) among the $m_i$ possible responses for item $i$ is modeled as a function of the examinee’s ability $\theta_j$, an intercept item-category parameter $\zeta_{iv}$ and a slope item-category parameter $\lambda_{iv}$, as well as the item-category parameters of all other categories, as follows:
$$P_{iv}(\theta_j) = \frac{e^{\zeta_{iv} + \lambda_{iv}\theta_j}}{\sum_{k=1}^{m_i} e^{\zeta_{ik} + \lambda_{ik}\theta_j}} \tag{1}$$
A way to interpret this model is to consider each category as having a propensity $e^{\zeta_{iv} + \lambda_{iv}\theta_j}$, the probability of selecting a category being its propensity divided by the sum of all category propensities. When applied to multiple-choice items, a consequence is that the Nominal Response Model’s formulation is mathematically consistent with a response process in which all response categories compete with one another and in which, depending on the examinee’s ability, one category dominates the others in propensity, resulting in the examinee (more probably) responding in favor of that category [9]. However, as we later discuss, this representation of the response process may not be in line with all multiple-choice tests, especially logical reasoning matrices or logical series.
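To make this propensity formulation concrete, the following short R sketch (purely illustrative; the intercepts and slopes are made-up values, not estimates from the GF20) computes NRM category probabilities for a hypothetical six-option item at two ability levels:

```r
# Nominal Response Model category probabilities (Equation (1)).
# zeta = item-category intercepts, lambda = item-category slopes
# (hypothetical values for a six-option item; option 4 plays the role
# of the correct response).
zeta   <- c(-0.3, 0.5, 1.2, 2.0, 0.1, -1.0)
lambda <- c(-0.8, -0.2, 0.6, 1.5, 0.0, -1.1)

nrm_probs <- function(theta, zeta, lambda) {
  propensity <- exp(zeta + lambda * theta)   # e^(zeta + lambda * theta)
  propensity / sum(propensity)               # softmax over all categories
}

round(nrm_probs(-2, zeta, lambda), 3)  # low ability: distractors dominate
round(nrm_probs( 2, zeta, lambda), 3)  # high ability: option 4 dominates
```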

1.2.2. Nested Logit Models

In certain multiple-choice tests, in order to respond, the examinee is supposed to consider a stimulus (for example, in matrix-type tests, the incomplete matrix), from which a rule should be extracted and used to find the completing element. In such cases, it can be questioned whether examinees put the different response options in competition right away, a process that would ideally be modeled by the NRM. Instead, it could be that they first focus on understanding the stimulus (the matrix, or the beginning of the series) to find the correct response, regardless of what the response options are. From that process, two situations may arise: either they have understood the rule correctly and found the correct response, in which case the distractors are not really considered as viable options and the correct response is selected; or they have not, in which case the response options are put in competition in a guessing strategy.
Such a sequential process was described by Suh and Bolt [9] as a motivation to develop a new class of item-response models for multiple-choice questions where this double process could be considered: Nested Logit Models (NLM). NLMs have been designed to model situations in which the response choice possesses a nested structure, that is when the final choice of a response option is made through a sequential process.
NLMs combine into a single model the two sub-models that best describe each step of this sequential process, thereby approximating the response probabilities that result from it. NLMs have two levels that separate the response options into two nests. At a higher level (level 1), the model distinguishes the choice of the correct response option versus the choice of any incorrect response option, which can be achieved with a binary logistic IRT model (e.g., the 2PL, 3PL or 4PL model). At a lower level (level 2), the model expresses the probability of selecting one particular distractor (as opposed to another one) as the product of the probability of selecting any distractor (which is the complement of the probability modeled at level 1) and a probability modeled using the propensities of each distractor, which is similar to a Nominal Response Model of the distractors.
To summarize, using the 4-Parameter NLM (4PNL) as an example, at level 1, the probability $P(x_{ij} = u \mid \theta_j)$ that the $j$th person selects the correct response (category $u$) to the $i$th item depends on their ability $\theta_j$ and item parameters $\alpha_i$ (discrimination/slope), $\beta_i$ (difficulty/intercept), $\gamma_i$ (lower asymptote) and $\delta_i$ (upper asymptote), as follows:
$$P(x_{ij} = u \mid \theta_j) = \gamma_i + \frac{\delta_i - \gamma_i}{1 + e^{-(\beta_i + \alpha_i\theta_j)}} \tag{2}$$
Similar to binary logistic models, the 3-Parameter Nested Logit (3PNL) model is a constrained 4PNL where $\delta_i$ is fixed, generally (and throughout this paper) to 1, such that:
$$P(x_{ij} = u \mid \theta_j) = \gamma_i + \frac{1 - \gamma_i}{1 + e^{-(\beta_i + \alpha_i\theta_j)}} \tag{3}$$
Further, the 2-Parameter Nested Logit (2PNL) model is a constrained 3PNL where $\gamma_i$ is fixed, generally (and throughout this paper) to 0, such that:
$$P(x_{ij} = u \mid \theta_j) = \frac{1}{1 + e^{-(\beta_i + \alpha_i\theta_j)}} \tag{4}$$
At level 2, which models the distractor responses, the probability $P(x_{ij} = v \mid \theta_j)$ that the examinee selects the distractor category $v$ among the $m_i$ possible distractor responses is modeled as the product of the probability of responding incorrectly, $1 - P(x_{ij} = u \mid \theta_j)$, and the probability that the examinee selects the distractor conditional upon an incorrect response. The latter is in fact similar to a Nominal Response Model, where distractor responses have propensities that are a function of the ability $\theta_j$, an intercept $\zeta_{iv}$ and a slope $\lambda_{iv}$. The resulting distractor model for the probability that person $j$ selects distractor $v$ for item $i$ is thus:
$$P(x_{ij} = v \mid \theta_j) = \left[1 - P(x_{ij} = u \mid \theta_j)\right] \frac{e^{\zeta_{iv} + \lambda_{iv}\theta_j}}{\sum_{k=1}^{m_i} e^{\zeta_{ik} + \lambda_{ik}\theta_j}} \tag{5}$$
Combining the level 1 4PL model in Equation (2) with the distractor model results in the 4PNL model:
$$P(x_{ij} = v \mid \theta_j) = \left[1 - \gamma_i - \frac{\delta_i - \gamma_i}{1 + e^{-(\beta_i + \alpha_i\theta_j)}}\right] \frac{e^{\zeta_{iv} + \lambda_{iv}\theta_j}}{\sum_{k=1}^{m_i} e^{\zeta_{ik} + \lambda_{ik}\theta_j}} \tag{6}$$
Combining the level 1 3PL model in Equation (3) with the distractor model results in the 3PNL model:
$$P(x_{ij} = v \mid \theta_j) = \left[1 - \gamma_i - \frac{1 - \gamma_i}{1 + e^{-(\beta_i + \alpha_i\theta_j)}}\right] \frac{e^{\zeta_{iv} + \lambda_{iv}\theta_j}}{\sum_{k=1}^{m_i} e^{\zeta_{ik} + \lambda_{ik}\theta_j}} \tag{7}$$
Combining the level 1 2PL model in Equation (4) with the distractor model results in the 2PNL model:
$$P(x_{ij} = v \mid \theta_j) = \left[1 - \frac{1}{1 + e^{-(\beta_i + \alpha_i\theta_j)}}\right] \frac{e^{\zeta_{iv} + \lambda_{iv}\theta_j}}{\sum_{k=1}^{m_i} e^{\zeta_{ik} + \lambda_{ik}\theta_j}} \tag{8}$$
An important distinction to note between the models of this class and the Nominal Response Model is that, in the NLM, the probability of a correct response is not directly affected by the propensities towards the different distractors, but the probability to select the distractors is conditional upon the probability of a correct (or rather, incorrect) response. In contrast, in the Nominal Response Model, the propensities towards all response categories—correct response and distractors alike—all affect one another.
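To make the nesting explicit, here is a minimal R sketch (illustrative only; all parameter values are invented, and this is not the estimation code used in the study) that computes 3PNL category probabilities by combining Equations (3) and (7):

```r
# 3PNL category probabilities for one item with four distractors.
# Level 1 (3PL, Equation (3)): hypothetical discrimination, intercept, lower asymptote.
alpha <- 1.8; beta <- 0.4; gamma <- 0.15
# Level 2 (nominal model of the distractors, Equation (7)): hypothetical
# distractor intercepts (zeta) and slopes (lambda).
zeta   <- c(0.6, 1.4, -0.2, -0.9)
lambda <- c(0.9, 0.2, -0.5, -1.2)

pnl3_probs <- function(theta) {
  p_correct    <- gamma + (1 - gamma) / (1 + exp(-(beta + alpha * theta)))  # Equation (3)
  propensity   <- exp(zeta + lambda * theta)
  p_distractor <- (1 - p_correct) * propensity / sum(propensity)            # Equation (7)
  c(correct = p_correct, distractor = p_distractor)
}

round(pnl3_probs(-2), 3)  # low ability: distractor choice carries information
round(pnl3_probs( 2), 3)  # high ability: the correct response dominates
```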
To illustrate NLMs, we present in Figure 1 the item-category characteristic curves for an item of the test studied in this paper.

1.3. The Aim of This Study

Although Nominal Response Models were originally considered as a way to recover information from multiple-choice tests, recent research suggests that, in the case of matrix- or series-type GMA tests, NLMs may better fit the nominal-level data than the NRM and provide significant reliability gains in comparison with binary logistic models. In particular, Myszkowski and Storme [8] have shown that, on the last series of the Standard Progressive Matrices [18], (1) NLMs provided a better fit than Nominal Response Models to the nominal-level data, and (2) NLMs allowed significant reliability gains when estimating ability.
Yet, however promising, this result has only been observed with one GMA test, in a convenience sample of undergraduates, and in a low-stakes situation. This study aims at bridging this gap by replicating this result with another short GMA test, with higher stakes, and in a context that would particularly benefit from these reliability gains: online recruitment.
The conditions under which job applicants take GMA tests are indeed very different from the conditions in which research participants take similar tests in the lab as part of a typical research study. For example, in a recruitment context, the stakes are higher than when taking the test in a lab. Previous research on the effect of pressure on cognitive processes when taking intelligence tests has shown that, under pressure, working memory is busy processing intrusive thoughts, which can in turn have a negative impact on performance [19,20]. It is possible that this phenomenon also affects the way distractors are processed and leads to a different processing of response options. When under pressure, test takers who fail to identify the rule that structures the progression of the series might experience high levels of stress and fail to efficiently compare distractors to identify the best of the incorrect response options. As a consequence, in the context of the online assessment of job applicants, the choice of distractors might carry little information about the ability of test takers. If this is the case, NLMs should not lead in this context to gains in empirical reliability compared with binary models.
Furthermore, the fact that job applicants usually take online tests in their own time leads to less standardized testing contexts. Compared with the relatively controlled and quiet conditions of a lab, there might be more attentional perturbations in the environment, which might induce a shallower processing of the wrong response options. Consequently, it is possible that in the context of e-assessment, the choice of distractors is not so much reflective of the ability of the test taker, which could hinder the potential gains from NLMs.
The aim of the present study is to test whether the findings of Myszkowski and Storme [8]—obtained in a low psychological pressure and controlled context—can be replicated and generalized to an assessment situation characterized by more psychological pressure and less standardization, as well as a different test.

2. Method

2.1. Participants and Procedure

The sample consisted of 2949 French job applicants (2084 men, 865 women, $M_{age}$ = 36.88, $SD_{age}$ = 8.66) who responded online to a logical series test that aims at measuring GMA. The examinees responded using an e-assessment application presented in their web browser. As is common in e-assessment, standardization regarding when and where the test was taken can be expected to be relatively low, as job applicants were free to take the test at the time and place that was most convenient for them. Of the participants, 40.96% had a master's degree or higher, 23.64% had a bachelor's degree, and the remaining applicants had less than a bachelor's degree.

2.2. Instrument

The test under investigation (the GF20) comprises 20 incomplete logical series, each presented with six response options to complete the missing part, including one correct answer that can be deduced from the application of logical rules. Each logical series consists of a 4 by 1 matrix with colored cells moving progressively on a grid according to simple geometric rules, such as translations and rotations. The 20 items comprised in the final test were designed and pre-tested to discriminate different levels of ability. An item example is provided in Figure 1. Except for the instructions asking participants to complete the series, the test only included non-verbal and non-numerical content. No time limit was imposed on applicants taking the test. It took them on average 21.30 min to complete the 20 items (SD = 9.78). Items were presented one by one. Participants were instructed to provide a response to each item before they could move on to the next item, and were not able to go back.
The CTT-based reliability estimates of the GF20, computed using the R package “semTools” [21] from a unidimensional model fit with the package “lavaan” [22], were satisfactory: Cronbach’s α was 0.831, Raykov’s congeneric reliability ω was 0.834, and McDonald’s $\omega_h$ reliability was 0.822.
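For readers who wish to reproduce this kind of CTT reliability analysis, a minimal sketch of the workflow could look as follows; the data frame and item names are hypothetical, and the exact estimator settings used in the study may differ:

```r
library(lavaan)
library(semTools)

# "gf20_binary" is a hypothetical data frame of scored (0/1) responses,
# with columns Item1 ... Item20.
model <- paste("g =~", paste(paste0("Item", 1:20), collapse = " + "))
fit <- cfa(model, data = gf20_binary)

# Alpha- and omega-type reliability estimates from the unidimensional model
# (newer semTools versions also provide compRelSEM() for the same purpose).
reliability(fit)
```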

2.3. Binary IRT Modeling

2.3.1. Model Estimation

The 1-Parameter Logistic (1PL), 2-Parameter Logistic (2PL), 3-Parameter Logistic with free lower asymptote (3PL), and 4-Parameter Logistic (4PL) models were all estimated using an Expectation-Maximization (EM) algorithm with the R package “mirt” [23]. All models successfully converged. Nevertheless, the information matrix of the 4PL model could not be inverted to compute the parameter standard errors (decreasing the convergence tolerance and changing the estimation method did not solve this issue), which may be a sign that the estimates were unstable. Item characteristic curve plots, which, for binary models, present the expected probability of a correct response as a function of the latent ability $\theta_j$, were plotted using the R package “jrt” [24]. To keep the paper concise, only models with appropriate fit were plotted.
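As an illustration of this step, a minimal mirt sketch (the data frame name is hypothetical, and this is not necessarily the exact code used by the authors) might look like this:

```r
library(mirt)

# "gf20_binary" is a hypothetical data frame of dichotomously scored responses.
fit_1pl <- mirt(gf20_binary, 1, itemtype = "Rasch")  # or a 2PL with equal slopes for a 1PL
fit_2pl <- mirt(gf20_binary, 1, itemtype = "2PL")
fit_3pl <- mirt(gf20_binary, 1, itemtype = "3PL")    # free lower asymptote
fit_4pl <- mirt(gf20_binary, 1, itemtype = "4PL")    # free lower and upper asymptotes

# Item characteristic curves (mirt's own plotting; the article used the "jrt" package).
plot(fit_2pl, type = "trace")
```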

2.3.2. Model Fit

The fit of the models was then compared using Likelihood Ratio Tests (LRT) and the models’ corrected Akaike Information Criterion (AICc). For the former, p values below 0.05 were used to indicate a significantly improved fit of the more complex (least constrained) model over the less complex (most constrained) model. For the latter, a smaller AICc indicates a better (more parsimonious) model fit.
In addition, absolute model fit indices were obtained using limited-information goodness-of-fit statistics [25] as implemented in “mirt.” As usual (although more frequently seen in Structural Equation Modeling), and similar to the original study of Myszkowski and Storme [8], we used as absolute model fit indices the Comparative Fit Index (CFI) and Tucker-Lewis Index (TLI), with thresholds of 0.95 (values above indicating satisfactory fit), along with the Standardized Root Mean Square Residual (SRMR), with a threshold of 0.08, and the Root Mean Square Error of Approximation (RMSEA), with a threshold of 0.06 (values below indicating satisfactory fit).
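Continuing the hypothetical sketch above, the likelihood ratio tests and limited-information fit statistics can be obtained as follows (output columns may vary across mirt versions):

```r
# Likelihood ratio tests between nested binary models (also reports information criteria).
anova(fit_1pl, fit_2pl)
anova(fit_2pl, fit_3pl)
anova(fit_3pl, fit_4pl)

# Limited-information goodness of fit (M2-family statistic with RMSEA, SRMSR, CFI, TLI).
M2(fit_3pl)
```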

2.3.3. Reliability

Since the aim of this paper is to extend and replicate the finding that NLMs provide an increase in measurement accuracy in logical GMA tests, as found with Raven’s progressive matrices [8], quantifying measurement accuracy is key. Measurement accuracy is represented by several statistics in IRT, notably information, the standard error of measurement, and reliability, which are mathematical transformations of one another. Because reliability is a familiar metric for most researchers in both CTT and IRT, is conveniently bounded by 0 and 1, and is the metric chosen in the article that this study attempts to replicate, it was chosen in this study. However, it should be noted that the conclusions reached about reliability here also extend to information and standard errors.
Similar to the original study, reliability functions were plotted for the 2PL, 3PL and 4PL models, overlaid with their Nested Logit counterparts. In addition, and also similar to the original study, marginal estimates of empirical reliability were computed [26]. The estimate of empirical reliability reported corresponds to the reliability of the $\theta_j$ scores, averaged across all cases $j$.
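In mirt, both quantities can be obtained directly; a minimal sketch, reusing the hypothetical model objects above (plot types and output may vary across mirt versions):

```r
# Marginal empirical reliability of the EAP ability estimates.
fscores(fit_3pl, method = "EAP", returnER = TRUE)

# Reliability plotted as a function of theta.
plot(fit_3pl, type = "rxx")
```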

2.4. Nominal and Nested Logit IRT Models

2.4.1. Model Estimation

The Nominal Response (NR), 2-Parameter Nested Logit (2PNL), 3-Parameter Nested Logit (3PNL), and 4-Parameter Nested Logit (4PNL) models for nominal data were estimated with the package “mirt” [23] using an EM algorithm. All models converged successfully. However, as with the binary models, the information matrix of the 4PNL model could not be inverted to compute the parameter standard errors, which may be a sign that the estimates were unstable. As for the binary models, item category curve plots, which present the expected probability of selecting a category as a function of the latent ability $\theta_j$, were computed using “jrt” [24]. Again, to keep the paper concise, only models with appropriate fit were reported.
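A minimal sketch of this estimation step (object names are hypothetical; "key" would contain the correct option for each of the 20 items):

```r
library(mirt)

# "gf20_nominal" is a hypothetical data frame of raw responses coded 1-6;
# "key" is the (hypothetical) vector of correct options, one per item.
fit_nrm  <- mirt(gf20_nominal, 1, itemtype = "nominal")
fit_2pnl <- mirt(gf20_nominal, 1, itemtype = "2PLNRM", key = key)
fit_3pnl <- mirt(gf20_nominal, 1, itemtype = "3PLNRM", key = key)
fit_4pnl <- mirt(gf20_nominal, 1, itemtype = "4PLNRM", key = key)
```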

2.4.2. Model Fit

Similar to the binary models, Likelihood Ratio Tests were used to compare the different nominal models. However, only the 2PNL, 3PNL and 4PNL models are nested with one another, and thus only they allow the use of Likelihood Ratio Tests when comparing them. The AICcs of all models were computed, and the AICc was used to compare the Nominal Response model with the other models.
Polytomous models are much more heavily parametrized than binary models, which, in some cases, prevents the computation of limited-information goodness-of-fit statistics, as in Myszkowski and Storme [8], thereby limiting the assessment of model fit. However, in this case, because of the larger sample size than in Myszkowski and Storme [8], we were able to compute them, and used the same indices and thresholds discussed earlier for the binary models.

2.4.3. Reliability

Similar to the binary models, we also computed the reliability functions of the NLMs, which were plotted as an overlay of the reliability functions of their respective binary counterparts (i.e., 2PL and 2PNL, 3PL and 3PNL, 4PL and 4PNL), thereby facilitating visual comparisons. We also computed the empirical reliability of each model averaged across cases as an estimate of marginal reliability.
As one of the aims of this study is to examine potential gains in reliability from using NLMs as opposed to their binary counterparts, we computed the reliability gain $\Delta r_{xx}$ between models by computing their difference. Similar to the original study and other previous studies [8,15], we used bootstrapping to obtain a Wald z test (based on the bootstrapped standard error) and 95% confidence intervals for the reliability gains.
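A rough sketch of such a bootstrap comparison (computationally heavy, since each replicate refits both models; all object names are hypothetical and the authors' actual procedure may differ in its details):

```r
set.seed(1234)
n_boot <- 1000

gains <- replicate(n_boot, {
  idx  <- sample(nrow(gf20_nominal), replace = TRUE)                 # resample applicants
  bin  <- mirt(gf20_binary[idx, ],  1, itemtype = "3PL",    verbose = FALSE)
  nest <- mirt(gf20_nominal[idx, ], 1, itemtype = "3PLNRM", key = key, verbose = FALSE)
  fscores(nest, returnER = TRUE) - fscores(bin, returnER = TRUE)     # reliability gain
})

mean(gains)                       # average gain in empirical reliability
quantile(gains, c(0.025, 0.975))  # bootstrapped 95% confidence interval
mean(gains) / sd(gains)           # Wald-type z statistic based on the bootstrap SE
```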

3. Results

3.1. Binary IRT Models

The model fit indices of all binary models are reported in Table 1. The 2PL, 3PL and 4PL models all had satisfactory fit, with the 4PL model providing the best fit. The 4PL model fitted significantly better than the 3PL model (Δχ² = 167.405, Δdf = 20, p < 0.001), which fitted significantly better than the 2PL model (Δχ² = 519.018, Δdf = 20, p < 0.001), which fitted significantly better than the 1PL model (Δχ² = 1100.652, Δdf = 19, p < 0.001).
As they were the best fitting models and provided very similar absolute fit indices, we present the item characteristic curves of the 2PL, 3PL and 4PL models in Figure 2, Figure 3 and Figure 4, respectively. Their similarity, and the relatively high upper asymptotes of the 4PL model (which are fixed to 1 in the 3PL), are in line with the fact that the 3PL and 4PL models provided similar fit.
The parameter estimates of the 1PL, 2PL, 3PL, and 4PL models (along with standard errors for all but the 4PL) are presented in Table 2.
The marginal estimates of empirical reliability for all the binary models were satisfactory and close to the CTT-based estimates earlier reported, as they were 0.833 for the 1PL model, 0.849 for the 2PL model, 0.868 for the 3PL model and 0.873 for the 4PL model.

3.2. Nominal Models

The model fit indices of all nominal models are reported in Table 3. Although the Nominal Response model provided a borderline acceptable fit, it was, as hypothesized, outperformed by all the NLMs, which all presented satisfactory fit. The 4PNL model fitted significantly better than the 3PNL model (Δχ² = 82.624, Δdf = 20, p < 0.001), which fitted significantly better than the 2PNL model (Δχ² = 541.102, Δdf = 20, p < 0.001).
The item category curve plots of the 2PNL, 3PNL and the 4PNL are respectively presented in Figure 5, Figure 6 and Figure 7. Their model estimates as well as standard errors are presented respectively in Table 4, Table 5 and Table 6.
The marginal estimates of empirical reliability for all the nominal models were satisfactory, as they were 0.857 for the Nominal Response model, 0.867 for the 2PNL model, 0.887 for the 3PNL model and 0.888 for the 4PNL model.
As hypothesized, preferring NLMs instead of binary logistic models resulted in significant reliability gains. The average reliability gains amounted to 0.018 (bootstrapped 95% CI = [0.017, 0.021], bootstrapped z = 17.765, p < 0.001) for the 2PL vs. 2PNL models, 0.019 (bootstrapped 95% CI = [0.018, 0.023], bootstrapped z = 15.265, p < 0.001) for the 3PL vs. 3PNL models, and 0.015 (bootstrapped 95% CI = [0.011, 0.020], bootstrapped z = 6.669, p < 0.001) for the 4PL vs. 4PNL models.
The reliability functions of the 2PL, 3PL and 4PL are reported with their Nested Logit counterparts in Figure 8, Figure 9 and Figure 10, respectively. As noted by a reviewer, between a binary model and its nested counterpart, $\theta_j$ is not perfectly invariant, and thus the reliability functions may cross, as in Figure 10. This was also previously observed in comparisons between binary and nominal response models [7].
As expected, they show that using NLM provided increments in reliability especially in the lower range of abilities.

4. Discussion

The aim of the present research was to extend the previous findings of Myszkowski and Storme [8] to a different testing modality (online assessment), a different and higher-stakes context (personnel selection), a larger sample, and a different logical reasoning test.
We found that the 4-parameter models, both binary and nested logit, were likely unstable (as their information matrices could not be inverted), although they seemed to outperform their 1PL, 2PL, and 3PL counterparts. Because the 2-parameter and 3-parameter models did not present this issue while still presenting excellent fit, the results suggest that choosing them may be a more parsimonious but still well-fitting approach for this test. In fact, since the 2PL and 2PNL fit almost as well as the 3PL and 3PNL, respectively, they may be the most appropriate modeling strategy for this test.
We also found that, as hypothesized, the Nested Logit Models (NLM) both outperformed the Nominal Response Model [7] and provided significant reliability gains compared with their binary counterparts. In addition, the absolute fit of the NLMs, which was not computable in Myszkowski and Storme [8] due to the smaller sample size, could be computed here and was found satisfactory, especially for the models including a guessing parameter (3PNL and 4PNL).
These findings overall suggest that NLMs [9] are a better modeling alternative than both binary logistic models and the Nominal Response Model [7] for logical reasoning multiple-choice tests, such as incomplete matrix or series tests, in online personnel selection settings.

4.1. Theoretical and Practical Implications

From a theoretical viewpoint, the present study can be seen as a conceptual replication and extension of Myszkowski and Storme [8]’s study on Raven’s progressive matrices. Replicating findings is an important endeavor in scientific research. This is especially true in the field of psychology, which is regularly criticized for its lack of consideration for replicating empirical findings [27]. Recently, Hüffmeier et al. [10] have designed a theoretical framework to conceptualize the replication process in psychology and have proposed a typology of replication studies. Rather than considering replication as a process separate from the initial research process, they conceptualize replication as the very research process by which fundamental findings are generalized to situations that are increasingly close to real life conditions.
When a result has been shown at a fundamental level, it may be interesting to replicate it to see if it is not due to chance. In this case, exact or close replications will be used [10]. To be able to further generalize the findings of a fundamental study, it is important to be able to perform conceptual replications in the laboratory or in the field. In conceptual replications, comparability to the original study is limited to the aspects that are considered theoretically relevant [28,29]. Among the conceptual replications are field studies. The aim of such studies is to investigate whether laboratory findings also hold under field conditions, and to rule out the possibility that a laboratory finding is a laboratory artifact or too weak to be relevant in contexts that are less tightly controlled [10]. In the framework described by Hüffmeier et al. [10], our study can be defined as a conceptual replication in the field of the study conducted by Myszkowski and Storme [8]. Our findings suggest that the characteristics of the e-assessment context do not fundamentally affect the way distractors are selected by test takers. Previous basic research on recovering distractor information is therefore relevant in an e-assessment context.
From a practical viewpoint, our findings suggest that one way to improve the accuracy of e-assessment in the context of recruitment is to recover distractor information. Web applications that use tests with distractors should try to implement NLMs to obtain more reliable estimates of the general mental ability of job applicants. To this day, there are few software implementations of NLM. A recommendation to designers of IRT platforms would be to add NLMs to their offering. For e-assessment platforms, a relatively inexpensive alternative to commercial IRT software could be to use the “mirt” [23] R library on the server side to estimate the ability of test takers using the built-in NLM functionality. One of the challenges of this option is that R can be relatively demanding in terms of computing resources and time, although $\theta_j$ estimation in “mirt” is relatively fast once the parameters of the model are stored in memory. More optimizations that will facilitate the implementation of NLMs in e-assessment might come in the future.
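As a rough sketch of the kind of server-side scoring this implies (all object and file names are hypothetical; the model would be calibrated once offline, saved, and reloaded when scoring each applicant):

```r
library(mirt)

# Calibration (run once, offline), then persist the fitted model object.
fit_3pnl <- mirt(gf20_nominal, 1, itemtype = "3PLNRM", key = key)
saveRDS(fit_3pnl, "gf20_3pnl.rds")

# Scoring time: reload the calibrated model and score one applicant's
# nominal responses (values 1-6 for each of the 20 items).
calibrated    <- readRDS("gf20_3pnl.rds")
new_responses <- matrix(c(4, 2, 1, 6, 3, 5, 4, 2, 6, 1,
                          3, 4, 2, 5, 1, 6, 4, 3, 2, 5), nrow = 1)
colnames(new_responses) <- colnames(gf20_nominal)
fscores(calibrated, response.pattern = new_responses, method = "EAP")
```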
In line with the findings of Myszkowski and Storme [8], the observed gain in reliability was especially visible at relatively low levels of ability. This is not surprising as NLM recover information from wrong response options. Recruiters are usually interested in applicants with high levels of intelligence, but this is not always the case. For example, it is possible that due to high competition on the job market, a recruiter is unable to attract the best applicants, and has to select among applicants with relatively lower levels of ability. In such situations, the use of NLM could be highly valuable as it allows forming a more accurate impression of applicants on the low end of the trait, and selecting the best.
As a reviewer pointed out, the standard errors of the item parameter estimates of the Nested Logit Models were overall smaller than those of their binary counterparts; this of course only concerns parameters that are common between models (difficulty, discrimination and, for the 3PL and 3PNL, guessing). This result may seem counterintuitive because, in general, for a given dataset, item parameter standard errors tend to increase as model complexity increases, and the Nested Logit Models are substantially more parametrized than the binary models. However, it should be noted that the Nested Logit Models are not only more complex, but they also use, to some extent, a different dataset, in that they use more information from the base dataset. Indeed, they use the complete information at the nominal level, while binary models use only the information at the binary level. Although we have shown that, as in Myszkowski and Storme [8], Nested Logit Models resulted in gains in reliability (and thus lower standard errors) for the person estimates, the present results also suggest that the difficulty, discrimination and guessing parameters of the Nested Logit Models are estimated with more accuracy (because they use more information) than the respective item parameters of their binary counterparts. This result calls for replication in other datasets, contexts and types of tests.
Throughout the paper, we have mostly emphasized the benefits of using NLMs to improve the accuracy of ability estimates. However, NLMs have other potentially interesting applications beyond improving scoring. For example, Suh and Bolt have described a method relying on NLMs to evaluate how distractors might contribute to Differential Item Functioning (DIF) [30]. It is indeed possible that distractors function differently across groups, leading to Differential Distractor Functioning (DDF). DDF can in turn lead to DIF, which is a major problem when using the same test on different groups. Multigroup NLMs could help test designers improve the diagnosis of the causes of DIF, and thus improve their tests. Bolt et al. [31] have suggested another interesting application of NLMs, which is to use them to determine whether the ability reflected in the choice of distractors is the same as the ability underlying the choice of the correct response. Here again, the use of NLMs could help test designers select the items that best reflect the underlying ability.

4.2. Limitations and Future Research

Our study has several limitations, which should stimulate and guide further research on the topic. A first limitation is related to the sample that was used in the study. The sample comes from a single e-assessment platform, and it is therefore difficult to know whether the findings would generalize to other platforms. It is possible, for example, that characteristics of the design of Web applications affect the way distractors are processed by test takers. Previous research has shown that the experience of users greatly affects the cognitive processes they mobilize when using a Web application [32]. Applied to our question, it is possible that a poor Web design reduces the motivation of test takers to process distractors when they fail to identify the rule governing the logical progression of the series. Further research is needed to test the generalizability of the findings to other platforms, but also to other types of GMA tasks.
Another limitation is related to our sample size. NLMs have more parameters than the models to which they were compared in the current study. Although our sample is larger than the one used in the original study that we conceptually replicated [8], it is still unclear whether our sample size is large enough to obtain reliable parameter estimates. Further research using Monte Carlo simulations is needed on the influence of sample size on parameter estimation in NLMs, and to provide clear guidelines regarding the necessary sample size.
In addition, it should be noted that the fact that NLMs provided a better fit, as in Myszkowski and Storme [8]’s study, does not necessarily imply that the cognitive processes engaged in responding to similar tests follow only the two-step sequence that the NLMs are based on: attempting to solve the task by looking at the stimulus only and then, if the correct answer is not found, examining the distractors. Indeed, it remains very possible that the actual response process is less clear-cut and closer to a back-and-forth between a stimulus-based strategy and a response-option-comparison strategy. Further, it has been noted that NLMs may be further improved by including the possibility that the guessing strategy (level 2) results in the choice of the correct response. In other words, choosing the correct response could then be the result of either strategy. Future research might consider this interesting possibility when such models are available in traditional IRT software.
Another limitation of this study is that the breadth of nominal models tested was limited by their availability in “mirt.” Although this package provides a large number of popular models, we were not able to fit some alternative models, notably Thissen and Steinberg [33]’s Multiple Choice Model (MCM), which essentially adds to the Nominal Response model a latent category for examinees who do not know, and thus guess, the correct response. Although the Nominal Response model was outperformed here by the Nested Logit models, it may be that alternative models like the MCM are better alternatives.
Another important limitation of our study is that we did not test whether the improvement in reliability translates into an improvement in predictive validity. This is because our study did not include a measure of job performance. The ability of an assessment tool to predict future job performance is crucial in the context of recruitment. Improvements in measurement reliability can lead to improvements in predictive validity, as reliability is a prerequisite for validity [34].
Whether recovering distractor information actually improves predictive validity in the context of e-assessment remains to be investigated. The answer to this question could represent an important contribution to the literature. It has indeed been shown that in situations in which test takers are under pressure, for example when stakes are high, the predictive validity of GMA tests tends to decrease [35]. Duckworth et al. [35] argued that GMA tests predict various indicators of success in life because, when they are used in low-stakes contexts, they essentially measure the motivation of test takers. According to Duckworth et al. [35], it is because GMA tests taken in the lab measure motivation that they are found to be positively associated with a broad range of indicators of life success. Although there is empirical evidence supporting Duckworth et al. [35]’s argument, one can wonder whether using a more precise strategy to score GMA tests could ultimately reveal a relation between GMA and various indicators of achievement. Testing the predictive validity of GMA tests scored with NLMs could therefore have important implications regarding knowledge of the true relationship between GMA and achievement in general.

Author Contributions

Conceptualization, M.S. and N.M.; methodology, M.S., S.B. and D.B.; investigation, M.S.; data curation, S.B. and D.B.; Writing—Original Draft preparation, N.M. and M.S.; Writing—Review and Editing, N.M. and M.S.; visualization, M.S.; supervision, M.S.; project administration, M.S., N.M., S.B. and D.B.

Funding

This research received no external funding.

Conflicts of Interest

Authors 3 and 4 hold positions in the company that owns the psychometric test (GF20) used in this study.

References

  1. Bartram, D. Internet recruitment and selection: Kissing frogs to find princes. Int. J. Sel. Assess. 2000, 8, 261–274. [Google Scholar] [CrossRef]
  2. Laumer, S.; von Stetten, A.; Eckhardt, A. E-assessment. Bus. Inf. Syst. Eng. 2009, 1, 263–265. [Google Scholar] [CrossRef]
  3. Schmidt, F.L.; Hunter, J. General mental ability in the world of work: Occupational attainment and job performance. J. Personal. Soc. Psychol. 2004, 86, 162. [Google Scholar] [CrossRef] [PubMed]
  4. Ryan, A.M.; Ployhart, R.E. Applicants’ perceptions of selection procedures and decisions: A critical review and agenda for the future. J. Manag. 2000, 26, 565–606. [Google Scholar] [CrossRef]
  5. Gilliland, S.W.; Steiner, D.D. Causes and consequences of applicant perceptions of unfairness. In Justice in the Workplace; Cropanzano, R., Ed.; Erlbaum: Hillsdale, NJ, USA, 2001; pp. 175–195. [Google Scholar]
  6. Tavakol, M.; Dennick, R. Making sense of Cronbach’s alpha. Int. J. Med. Educ. 2011, 2, 53. [Google Scholar] [CrossRef] [PubMed]
  7. Bock, R.D. Estimating item parameters and latent ability when responses are scored in two or more nominal categories. Psychometrika 1972, 37, 29–51. [Google Scholar] [CrossRef]
  8. Myszkowski, N.; Storme, M. A snapshot of g? Binary and polytomous item-response theory investigations of the last series of the Standard Progressive Matrices (SPM-LS). Intelligence 2018, 68, 109–116. [Google Scholar] [CrossRef]
  9. Suh, Y.; Bolt, D.M. Nested logit models for multiple-choice item response data. Psychometrika 2010, 75, 454–473. [Google Scholar] [CrossRef]
  10. Hüffmeier, J.; Mazei, J.; Schultze, T. Reconceptualizing replication as a sequence of different studies: A replication typology. J. Exp. Soc. Psychol. 2016, 66, 81–92. [Google Scholar] [CrossRef]
  11. Edelen, M.O.; Reeve, B.B. Applying item response theory (IRT) modeling to questionnaire development, evaluation, and refinement. Qual. Life Res. 2007, 16, 5. [Google Scholar] [CrossRef]
  12. Kim, S.; Feldt, L.S. The estimation of the IRT reliability coefficient and its lower and upper bounds, with comparisons to CTT reliability statistics. Asia Pac. Educ. Rev. 2010, 11, 179–188. [Google Scholar] [CrossRef]
  13. Hambleton, R.K.; Van der Linden, W.J. Advances in item response theory and applications: An introduction. Appl. Psychol. Meas. 1982, 6, 373–378. [Google Scholar] [CrossRef]
  14. Yen, Y.C.; Ho, R.G.; Laio, W.W.; Chen, L.J.; Kuo, C.C. An empirical evaluation of the slip correction in the four parameter logistic models with computerized adaptive testing. Appl. Psychol. Meas. 2012, 36, 75–87. [Google Scholar] [CrossRef]
  15. Myszkowski, N.; Storme, M. Measuring “good taste” with the visual aesthetic sensitivity test-revised (VAST-R). Personal. Individ. Differ. 2017, 117, 91–100. [Google Scholar] [CrossRef]
  16. Martín, E.S.; del Pino, G.; Boeck, P.D. IRT Models for Ability-Based Guessing. Appl. Psychol. Meas. 2006, 30, 183–203. [Google Scholar] [CrossRef] [Green Version]
  17. Matzen, L.B.V.; Van der Molen, M.W.; Dudink, A.C. Error analysis of Raven test performance. Personal. Individ. Differ. 1994, 16, 433–445. [Google Scholar] [CrossRef]
  18. Raven, J.C. Standardization of progressive matrices, 1938. Br. J. Med. Psychol. 1941, 19, 137–150. [Google Scholar] [CrossRef]
  19. Beilock, S.L.; Carr, T.H. When high-powered people fail: Working memory and “choking under pressure” in math. Psychol. Sci. 2005, 16, 101–105. [Google Scholar] [CrossRef]
  20. Gimmig, D.; Huguet, P.; Caverni, J.P.; Cury, F. Choking under pressure and working memory capacity: When performance pressure reduces fluid intelligence. Psychon. Bull. Rev. 2006, 13, 1005–1010. [Google Scholar] [CrossRef] [Green Version]
  21. Jorgensen, T.D.; Pornprasertmanit, S.; Miller, P.; Schoemann, A.; Rosseel, Y.; Quick, C.; Garnier-Villarreal, M.; Selig, J.; Boulton, A.; Preacher, K.; et al. semTools: Useful Tools for Structural Equation Modeling. Available online: https://cran.r-project.org/web/packages/semTools/semTools.pdf (accessed on 10 July 2019).
  22. Rosseel, Y. Lavaan: An R Package for Structural Equation Modeling. J. Stat. Softw. 2012, 48. [Google Scholar] [CrossRef]
  23. Chalmers, R.P. mirt: A multidimensional item response theory package for the R environment. J. Stat. Softw. 2012, 48, 1–29. [Google Scholar] [CrossRef]
  24. Myszkowski, N.; Storme, M. Judge Response Theory? A Call to Upgrade Our Psychometrical Account of Creativity Judgments. Psychol. Aesthet. Creat. Arts 2019, 13, 167–175. [Google Scholar] [CrossRef]
  25. Hansen, M.; Cai, L.; Monroe, S.; Li, Z. Limited-Information Goodness-of-Fit Testing of Diagnostic Classification Item Response Theory Models. CRESST Report 840. Natl. Center Res. Eval. Stand. Stud. Test. (CRESST) 2014, 1, 1–47. [Google Scholar]
  26. Raju, N.S.; Price, L.R.; Oshima, T.; Nering, M.L. Standardized conditional SEM: A case for conditional reliability. Appl. Psychol. Meas. 2007, 31, 169–180. [Google Scholar] [CrossRef]
  27. Fabrigar, L.R.; Wegener, D.T. Conceptualizing and evaluating the replication of research results. J. Exp. Soc. Psychol. 2016, 66, 68–80. [Google Scholar] [CrossRef]
  28. Schmidt, S. Shall we really do it again? The powerful concept of replication is neglected in the social sciences. Rev. Gen. Psychol. 2009, 13, 90–100. [Google Scholar] [CrossRef]
  29. Stroebe, W.; Strack, F. The alleged crisis and the illusion of exact replication. Perspect. Psychol. Sci. 2014, 9, 59–71. [Google Scholar] [CrossRef]
  30. Suh, Y.; Bolt, D.M. A nested logit approach for investigating distractors as causes of differential item functioning. J. Educ. Meas. 2011, 48, 188–205. [Google Scholar] [CrossRef]
  31. Bolt, D.M.; Wollack, J.A.; Suh, Y. Application of a multidimensional nested logit model to multiple-choice test items. Psychometrika 2012, 77, 339–357. [Google Scholar] [CrossRef]
  32. Abbey, B. Instructional and Cognitive Impacts of Web-Based Education; IGI Global: Dauphin County, PA, USA, 1999. [Google Scholar]
  33. Thissen, D.; Steinberg, L. A response model for multiple choice items. Psychometrika 1984, 49, 501–519. [Google Scholar] [CrossRef]
  34. Davidshofer, K.; Murphy, C.O. Psychological Testing: Principles and Applications; Pearson/Prentice Hall: Upper Saddle River, NJ, USA, 2005. [Google Scholar]
  35. Duckworth, A.L.; Quinn, P.D.; Lynam, D.R.; Loeber, R.; Stouthamer-Loeber, M. Role of test motivation in intelligence testing. Proc. Natl. Acad. Sci. USA 2011, 108, 7716–7720. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. An example item of the GF20 (top) and the associated category characteristic curves as estimated by the 3-Parameter Nested Logit model (bottom). The correct response (4) is increasingly probable as $\theta_j$ increases. However, response category 3 (the only distractor in which the blue and the yellow squares are, correctly, not adjacent) would more probably be selected by individuals with low abilities ($\theta_j \approx -2.7$), while category 1 would more probably be selected by individuals with even lower abilities ($\theta_j < -3$), thus showing that the choice of distractor may be informative of $\theta_j$.
Figure 2. Item characteristic curve plots of the 2-Parameter Logistic Model.
Figure 3. Item characteristic curve plots of the 3-Parameter Logistic Model.
Figure 4. Item characteristic curve plots of the 4-Parameter Logistic Model.
Figure 5. Item category curve plots of the 2-Parameter Nested Logit (2PNL) model.
Figure 6. Item category curve plots of the 3-Parameter Nested Logit (3PNL) model.
Figure 7. Item category curve plots of the 4-Parameter Nested Logit (4PNL) model.
Figure 8. Comparison of the reliability functions of the 2-Parameter Logistic (2PL) and Nested Logit (2PNL) models.
Figure 9. Comparison of the reliability functions of the 3-Parameter Logistic (3PL) and Nested Logit (3PNL) models.
Figure 10. Comparison of the reliability functions of the 4-Parameter Logistic (4PL) and Nested Logit (4PNL) models.
Table 1. Model fit of the binary models.
Model | χ² | df | p | CFI | TLI | RMSEA | AICc
1-Parameter Logistic | 2462.597 | 189 | <0.001 | 0.913 | 0.913 | 0.064 | 58,244.74
2-Parameter Logistic | 1069.812 | 170 | <0.001 | 0.966 | 0.962 | 0.042 | 57,182.90
3-Parameter Logistic | 251.3807 | 150 | <0.001 | 0.996 | 0.995 | 0.015 | 56,705.29
4-Parameter Logistic | 196.2342 | 130 | <0.001 | 0.997 | 0.996 | 0.013 | 56,579.87
Table 2. Item parameters of binary logistic models.
Item | 1PL β | 2PL α | 2PL β | 3PL α | 3PL β | 3PL logit(γ) | 4PL α | 4PL β | 4PL γ | 4PL δ
Item 1 | 2.783 (0.072) | 1.527 (0.105) | 2.930 (0.113) | 1.417 (0.104) | 2.855 (0.157) | −4.089 (6.671) | 1.821 | 3.391 | 0.002 | 0.988
Item 2 | 2.569 (0.068) | 1.391 (0.094) | 2.605 (0.096) | 1.326 (0.087) | 2.575 (0.104) | −5.002 (6.298) | 1.999 | 2.898 | 0.266 | 0.979
Item 3 | 1.513 (0.055) | 1.767 (0.093) | 1.740 (0.078) | 1.735 (0.155) | 1.633 (0.136) | −3.170 (2.191) | 2.558 | 1.988 | 0.161 | 0.969
Item 4 | 1.454 (0.055) | 0.640 (0.054) | 1.198 (0.048) | 0.619 (0.051) | 1.185 (0.057) | −5.598 (6.083) | 1.697 | 2.979 | 0.002 | 0.844
Item 5 | 1.291 (0.053) | 2.543 (0.132) | 1.878 (0.099) | 2.462 (0.179) | 1.719 (0.102) | −3.465 (1.223) | 3.222 | 2.097 | 0.080 | 0.982
Item 6 | 1.475 (0.055) | 1.442 (0.078) | 1.535 (0.067) | 1.362 (0.083) | 1.477 (0.089) | −4.939 (6.479) | 2.028 | 1.880 | 0.125 | 0.952
Item 7 | 1.588 (0.056) | 2.015 (0.106) | 1.966 (0.089) | 1.891 (0.097) | 1.884 (0.084) | −6.457 (6.179) | 2.667 | 2.501 | 0.045 | 0.969
Item 8 | 1.404 (0.054) | 1.412 (0.077) | 1.448 (0.064) | 1.389 (0.141) | 1.365 (0.178) | −3.355 (3.531) | 2.415 | 1.622 | 0.240 | 0.951
Item 9 | 1.542 (0.055) | 2.575 (0.138) | 2.245 (0.111) | 2.593 (0.216) | 2.061 (0.118) | −2.619 (0.751) | 3.540 | 2.450 | 0.148 | 0.987
Item 10 | −0.372 (0.050) | 1.085 (0.059) | −0.335 (0.046) | 1.438 (0.149) | −0.852 (0.172) | −2.002 (0.285) | 1.683 | −0.669 | 0.131 | 0.898
Item 11 | −1.137 (0.052) | 0.878 (0.055) | −0.991 (0.048) | 2.603 (0.327) | −3.188 (0.400) | −1.597 (0.093) | 3.013 | −3.462 | 0.176 | 0.934
Item 12 | 0.762 (0.051) | 2.078 (0.101) | 0.991 (0.070) | 2.440 (0.177) | 0.697 (0.099) | −2.171 (0.276) | 3.235 | 0.976 | 0.133 | 0.969
Item 13 | −0.313 (0.049) | 1.612 (0.079) | −0.316 (0.054) | 1.577 (0.076) | −0.368 (0.055) | −7.978 (6.064) | 1.988 | 0.021 | 0.001 | 0.876
Item 14 | −0.662 (0.050) | 2.121 (0.105) | −0.802 (0.067) | 2.191 (0.146) | −0.992 (0.106) | −4.072 (0.616) | 5.037 | −1.078 | 0.049 | 0.831
Item 15 | −1.926 (0.059) | 1.113 (0.068) | −1.807 (0.066) | 4.520 (0.608) | −5.921 (0.762) | −2.352 (0.090) | 6.060 | −7.593 | 0.090 | 0.934
Item 16 | −1.186 (0.053) | 0.981 (0.058) | −1.064 (0.051) | 5.056 (0.754) | −5.569 (0.841) | −1.622 (0.071) | 11.675 | −11.750 | 0.169 | 0.923
Item 17 | −1.399 (0.054) | 1.153 (0.065) | −1.321 (0.057) | 2.815 (0.311) | −3.228 (0.353) | −2.099 (0.111) | 16.917 | −14.883 | 0.132 | 0.787
Item 18 | −1.192 (0.053) | 1.736 (0.089) | −1.330 (0.068) | 2.104 (0.156) | −1.729 (0.143) | −3.465 (0.343) | 2.079 | −1.636 | 0.028 | 0.983
Item 19 | −1.603 (0.056) | 0.678 (0.054) | −1.335 (0.050) | 3.085 (0.520) | −4.858 (0.759) | −1.673 (0.077) | 2.931 | −4.739 | 0.158 | 0.998
Item 20 | −1.808 (0.058) | 0.532 (0.053) | −1.463 (0.050) | 2.248 (0.372) | −4.156 (0.580) | −1.817 (0.089) | 2.369 | −4.404 | 0.143 | 0.990
Note: standard errors are in parentheses; they could not be computed for the 4PL model.
Table 3. Model fit of the nominal and nested logit models.
Model | χ² | df | p | CFI | TLI | RMSEA | AICc
Nominal Response | 178.0345 | 90 | <0.001 | 0.972 | 0.941 | 0.018 | 134,347.1
2-Parameter Nested Logit | 177.3853 | 90 | <0.001 | 0.978 | 0.958 | 0.018 | 133,725.8
3-Parameter Nested Logit | 126.1003 | 70 | <0.001 | 0.986 | 0.965 | 0.016 | 133,231.1
4-Parameter Nested Logit | 104.8853 | 50 | <0.001 | 0.986 | 0.952 | 0.019 | 133,195.5
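Unlike the binary models above, the nominal and nested logit models compared in Table 3 are estimated on the raw option choices and require the answer key. The sketch below again assumes the R package mirt and the hypothetical `resp` and `key` objects introduced in the sketch under Figure 1; it shows how the four competing models could be fitted and compared.

```r
library(mirt)

# 'resp' (raw option choices) and 'key' (correct options) are hypothetical placeholders.
fit_nrm  <- mirt(resp, model = 1, itemtype = "nominal")            # Nominal Response Model
fit_2pnl <- mirt(resp, model = 1, itemtype = "2PLNRM", key = key)  # 2-Parameter Nested Logit
fit_3pnl <- mirt(resp, model = 1, itemtype = "3PLNRM", key = key)  # 3-Parameter Nested Logit
fit_4pnl <- mirt(resp, model = 1, itemtype = "4PLNRM", key = key)  # 4-Parameter Nested Logit

# Limited-information fit for polytomous models (the C2 statistic is one option)
M2(fit_3pnl, type = "C2")

# Information-criterion comparison across candidate models
anova(fit_2pnl, fit_3pnl)
```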
Table 4. Item parameters of the 2PNL model.
Item | α_i | β_i | λ_i,1 | λ_i,2 | λ_i,3 | λ_i,4 | δ_i,1 | δ_i,2 | δ_i,3 | δ_i,4
Item 1 | 1.549 (0.103) | 2.954 (0.113) | 0.426 (0.535) | 1.067 (0.341) | 0.744 (0.383) | 1.327 (0.382) | −0.312 (0.805) | 2.878 (0.523) | 1.248 (0.579) | 1.771 (0.554)
Item 2 | 1.397 (0.091) | 2.614 (0.096) | −1.189 (0.357) | −0.297 (0.194) | −0.123 (0.229) | −0.442 (0.231) | −3.219 (0.548) | −1.154 (0.235) | −1.509 (0.266) | −1.669 (0.292)
Item 3 | 1.747 (0.089) | 1.736 (0.077) | −0.915 (0.183) | −0.392 (0.193) | −0.914 (0.195) | −0.559 (0.163) | −1.147 (0.205) | −1.094 (0.197) | −1.369 (0.222) | −0.593 (0.167)
Item 4 | 0.646 (0.053) | 1.200 (0.048) | 0.828 (0.650) | 2.623 (0.646) | 2.413 (0.671) | 0.132 (0.821) | 2.746 (1.150) | 6.883 (1.122) | 4.011 (1.136) | 0.168 (1.476)
Item 5 | 2.417 (0.119) | 1.819 (0.093) | 0.648 (0.215) | 0.180 (0.255) | 0.386 (0.221) | −0.077 (0.279) | 1.893 (0.260) | 0.351 (0.313) | 1.343 (0.270) | −0.259 (0.355)
Item 6 | 1.412 (0.075) | 1.524 (0.066) | −1.426 (0.205) | −1.127 (0.244) | −1.334 (0.194) | −1.266 (0.231) | −2.516 (0.245) | −2.816 (0.283) | −2.308 (0.226) | −2.755 (0.274)
Item 7 | 1.945 (0.098) | 1.933 (0.085) | −0.373 (0.171) | −0.506 (0.179) | −1.447 (0.218) | −1.262 (0.237) | −0.445 (0.171) | −0.617 (0.184) | −1.647 (0.267) | −1.845 (0.291)
Item 8 | 1.425 (0.075) | 1.457 (0.064) | −0.868 (0.163) | −0.485 (0.194) | −0.010 (0.153) | −1.611 (0.288) | 0.098 (0.161) | −0.573 (0.189) | 0.507 (0.136) | −2.450 (0.384)
Item 9 | 2.435 (0.123) | 2.170 (0.103) | 0.354 (0.244) | 0.517 (0.270) | 0.485 (0.307) | 0.233 (0.230) | 1.225 (0.303) | 0.780 (0.325) | 0.208 (0.365) | 1.577 (0.291)
Item 10 | 1.090 (0.058) | −0.336 (0.046) | 0.327 (0.131) | 0.472 (0.144) | 0.248 (0.155) | 1.177 (0.123) | 0.701 (0.137) | 0.359 (0.145) | −0.068 (0.161) | 2.060 (0.122)
Item 11 | 0.875 (0.055) | −0.991 (0.048) | 0.318 (0.109) | −0.397 (0.107) | 0.567 (0.107) | −0.116 (0.103) | 0.613 (0.088) | 0.501 (0.093) | 0.834 (0.085) | 0.817 (0.086)
Item 12 | 2.087 (0.098) | 0.992 (0.069) | −0.374 (0.195) | −1.895 (0.264) | −0.589 (0.206) | 0.182 (0.202) | 1.101 (0.168) | −1.492 (0.306) | 0.542 (0.185) | 1.008 (0.167)
Item 13 | 1.626 (0.078) | −0.321 (0.055) | −0.718 (0.216) | 0.558 (0.211) | −0.139 (0.201) | 0.922 (0.196) | 0.555 (0.226) | 1.467 (0.197) | 1.580 (0.197) | 2.877 (0.185)
Item 14 | 2.096 (0.102) | −0.804 (0.067) | −0.577 (0.114) | −0.696 (0.158) | −0.539 (0.107) | −0.221 (0.092) | −0.613 (0.097) | −1.659 (0.147) | −0.334 (0.089) | 0.435 (0.070)
Item 15 | 1.104 (0.067) | −1.803 (0.065) | −0.520 (0.087) | −0.781 (0.078) | −0.564 (0.096) | −0.680 (0.089) | −0.343 (0.067) | 0.130 (0.061) | −0.730 (0.075) | −0.432 (0.070)
Item 16 | 0.965 (0.057) | −1.060 (0.050) | −0.187 (0.092) | 0.467 (0.113) | 0.802 (0.112) | −0.199 (0.125) | 1.074 (0.076) | 0.209 (0.086) | 0.407 (0.084) | −0.445 (0.106)
Item 17 | 1.118 (0.064) | −1.309 (0.056) | 0.310 (0.196) | 0.512 (0.193) | 1.364 (0.217) | 0.149 (0.212) | 2.761 (0.189) | 3.379 (0.187) | 1.632 (0.201) | 1.423 (0.204)
Item 18 | 1.781 (0.090) | −1.351 (0.069) | 0.400 (0.156) | −0.291 (0.175) | 0.321 (0.152) | 0.097 (0.144) | 1.451 (0.131) | 0.316 (0.159) | 1.619 (0.129) | 2.397 (0.124)
Item 19 | 0.675 (0.053) | −1.335 (0.050) | −0.936 (0.110) | −0.235 (0.074) | −0.812 (0.104) | −0.294 (0.073) | −1.112 (0.103) | 0.342 (0.061) | −0.935 (0.094) | 0.431 (0.060)
Item 20 | 0.533 (0.053) | −1.463 (0.050) | −0.720 (0.110) | 0.390 (0.079) | 0.318 (0.074) | −0.680 (0.095) | −1.051 (0.103) | 0.208 (0.064) | 0.541 (0.060) | −0.578 (0.087)
Note: α_i and β_i are the correct-response parameters; λ_i,k and δ_i,k are the distractor (nominal) parameters. Standard errors are given in parentheses.
Table 5. Item parameters of the 3PNL model.
Item | α_i | β_i | logit(γ_i) | λ_i,1 | λ_i,2 | λ_i,3 | λ_i,4 | δ_i,1 | δ_i,2 | δ_i,3 | δ_i,4
Item 1 | 1.443 (0.125) | 2.843 (0.231) | −3.065 (4.501) | 0.396 (0.474) | 0.927 (0.304) | 0.668 (0.342) | 1.132 (0.343) | −0.323 (0.761) | 2.768 (0.500) | 1.196 (0.552) | 1.623 (0.532)
Item 2 | 1.319 (0.086) | 2.564 (0.111) | −4.277 (4.167) | −1.109 (0.323) | −0.271 (0.174) | −0.132 (0.207) | −0.452 (0.209) | −3.220 (0.540) | −1.141 (0.226) | −1.522 (0.259) | −1.709 (0.288)
Item 3 | 1.719 (0.149) | 1.608 (0.131) | −2.921 (1.645) | −0.824 (0.164) | −0.424 (0.176) | −0.883 (0.177) | −0.496 (0.146) | −1.086 (0.194) | −1.130 (0.193) | −1.378 (0.216) | −0.549 (0.158)
Item 4 | 0.625 (0.051) | 1.181 (0.065) | −4.813 (4.023) | 0.684 (0.588) | 2.387 (0.581) | 2.171 (0.606) | −0.076 (0.769) | 2.602 (1.136) | 6.763 (1.100) | 3.888 (1.115) | −0.230 (1.535)
Item 5 | 2.340 (0.168) | 1.664 (0.098) | −3.412 (1.215) | 0.536 (0.191) | 0.127 (0.227) | 0.300 (0.197) | −0.207 (0.252) | 1.795 (0.242) | 0.298 (0.293) | 1.263 (0.252) | −0.421 (0.343)
Item 6 | 1.363 (0.125) | 1.418 (0.158) | −3.134 (2.550) | −1.265 (0.183) | −0.982 (0.217) | −1.262 (0.176) | −1.159 (0.208) | −2.399 (0.230) | −2.706 (0.264) | −2.295 (0.220) | −2.695 (0.263)
Item 7 | 1.826 (0.090) | 1.854 (0.081) | −6.090 (4.208) | −0.338 (0.155) | −0.431 (0.160) | −1.327 (0.197) | −1.119 (0.213) | −0.425 (0.165) | −0.565 (0.174) | −1.599 (0.259) | −1.751 (0.278)
Item 8 | 1.378 (0.103) | 1.394 (0.123) | −4.012 (4.313) | −0.752 (0.147) | −0.427 (0.176) | −0.002 (0.141) | −1.495 (0.263) | 0.168 (0.154) | −0.543 (0.183) | 0.512 (0.133) | −2.442 (0.381)
Item 9 | 2.648 (0.200) | 1.969 (0.116) | −2.061 (0.370) | 0.408 (0.225) | 0.533 (0.249) | 0.422 (0.285) | 0.297 (0.212) | 1.305 (0.298) | 0.828 (0.320) | 0.176 (0.364) | 1.664 (0.286)
Item 10 | 1.461 (0.152) | −0.870 (0.172) | −1.983 (0.274) | 0.317 (0.119) | 0.456 (0.132) | 0.250 (0.141) | 1.137 (0.113) | 0.701 (0.133) | 0.356 (0.141) | −0.061 (0.156) | 2.034 (0.118)
Item 11 | 2.527 (0.315) | −3.084 (0.385) | −1.619 (0.096) | 0.279 (0.106) | −0.374 (0.102) | 0.581 (0.106) | −0.113 (0.099) | 0.605 (0.087) | 0.515 (0.092) | 0.824 (0.085) | 0.819 (0.085)
Item 12 | 2.308 (0.159) | 0.758 (0.093) | −2.504 (0.359) | −0.344 (0.180) | −1.748 (0.243) | −0.542 (0.191) | 0.148 (0.188) | 1.120 (0.162) | −1.412 (0.296) | 0.573 (0.178) | 0.990 (0.161)
Item 13 | 1.593 (0.075) | −0.374 (0.055) | −7.481 (3.982) | −0.630 (0.194) | 0.525 (0.191) | −0.116 (0.181) | 0.852 (0.177) | 0.615 (0.216) | 1.446 (0.189) | 1.596 (0.189) | 2.837 (0.177)
Item 14 | 2.249 (0.142) | −1.038 (0.102) | −3.833 (0.434) | −0.510 (0.104) | −0.646 (0.144) | −0.473 (0.098) | −0.183 (0.085) | −0.576 (0.094) | −1.635 (0.143) | −0.297 (0.086) | 0.452 (0.069)
Item 15 | 4.703 (0.663) | −6.146 (0.831) | −2.335 (0.089) | −0.596 (0.088) | −0.800 (0.079) | −0.590 (0.097) | −0.707 (0.089) | −0.344 (0.067) | 0.144 (0.061) | −0.721 (0.075) | −0.422 (0.070)
Item 16 | 4.626 (0.608) | −5.091 (0.675) | −1.638 (0.072) | −0.152 (0.088) | 0.446 (0.112) | 0.824 (0.115) | −0.214 (0.118) | 1.089 (0.075) | 0.205 (0.086) | 0.404 (0.084) | −0.452 (0.105)
Item 17 | 2.613 (0.277) | −3.013 (0.313) | −2.142 (0.117) | 0.328 (0.182) | 0.520 (0.180) | 1.452 (0.211) | 0.162 (0.198) | 2.774 (0.188) | 3.387 (0.186) | 1.618 (0.201) | 1.434 (0.202)
Item 18 | 2.210 (0.159) | −1.798 (0.143) | −3.415 (0.300) | 0.377 (0.144) | −0.242 (0.160) | 0.321 (0.141) | 0.122 (0.133) | 1.444 (0.129) | 0.342 (0.156) | 1.618 (0.127) | 2.407 (0.122)
Item 19 | 3.167 (0.523) | −4.950 (0.760) | −1.672 (0.076) | −0.901 (0.104) | −0.241 (0.076) | −0.773 (0.100) | −0.300 (0.074) | −1.090 (0.101) | 0.344 (0.061) | −0.911 (0.092) | 0.434 (0.060)
Item 20 | 2.233 (0.357) | −4.111 (0.552) | −1.827 (0.089) | −0.659 (0.102) | 0.381 (0.079) | 0.315 (0.073) | −0.628 (0.089) | −1.018 (0.100) | 0.205 (0.064) | 0.537 (0.060) | −0.551 (0.084)
Note: α_i, β_i, and logit(γ_i) are the correct-response parameters; λ_i,k and δ_i,k are the distractor (nominal) parameters. Standard errors are given in parentheses.
Table 6. Item parameters of the 4PNL model.
Item | α_i | β_i | logit(γ_i) | logit(δ_i) | λ_i,1 | λ_i,2 | λ_i,3 | λ_i,4 | δ_i,1 | δ_i,2 | δ_i,3 | δ_i,4
Item 1 | 2.447 | 3.234 | 0.395 | 0.983 | 0.413 | 0.955 | 0.677 | 1.174 | −0.314 | 2.778 | 1.189 | 1.640
Item 2 | 1.733 | 2.875 | 0.149 | 0.983 | −1.077 | −0.283 | −0.142 | −0.448 | −3.143 | −1.150 | −1.530 | −1.697
Item 3 | 2.364 | 1.832 | 0.168 | 0.975 | −0.801 | −0.428 | −0.904 | −0.482 | −1.052 | −1.132 | −1.392 | −0.534
Item 4 | 1.517 | 2.702 | 0.016 | 0.852 | 0.690 | 2.391 | 2.175 | 0.049 | 2.619 | 6.790 | 3.913 | 0.015
Item 5 | 2.474 | 1.897 | 0.001 | 0.990 | 0.500 | 0.121 | 0.258 | −0.309 | 1.749 | 0.288 | 1.212 | −0.542
Item 6 | 2.063 | 1.698 | 0.192 | 0.956 | −1.277 | −1.048 | −1.307 | −1.155 | −2.392 | −2.766 | −2.329 | −2.673
Item 7 | 2.323 | 2.377 | 0.002 | 0.972 | −0.336 | −0.421 | −1.341 | −1.087 | −0.424 | −0.556 | −1.613 | −1.706
Item 8 | 2.260 | 1.575 | 0.225 | 0.955 | −0.765 | −0.426 | 0.005 | −1.551 | 0.166 | −0.539 | 0.515 | −2.474
Item 9 | 3.508 | 2.443 | 0.159 | 0.985 | 0.447 | 0.560 | 0.460 | 0.325 | 1.343 | 0.851 | 0.212 | 1.691
Item 10 | 1.357 | −0.757 | 0.104 | 0.999 | 0.332 | 0.473 | 0.278 | 1.152 | 0.711 | 0.368 | −0.041 | 2.048
Item 11 | 2.444 | −3.023 | 0.165 | 1.000 | 0.286 | −0.378 | 0.583 | −0.110 | 0.608 | 0.512 | 0.829 | 0.820
Item 12 | 2.766 | 0.948 | 0.098 | 0.976 | −0.327 | −1.817 | −0.519 | 0.160 | 1.131 | −1.481 | 0.590 | 0.997
Item 13 | 1.576 | −0.356 | 0.000 | 1.000 | −0.640 | 0.555 | −0.096 | 0.890 | 0.607 | 1.466 | 1.611 | 2.861
Item 14 | 2.176 | −0.980 | 0.019 | 1.000 | −0.510 | −0.661 | −0.469 | −0.177 | −0.579 | −1.646 | −0.297 | 0.453
Item 15 | 4.743 | −6.281 | 0.088 | 1.000 | −0.585 | −0.799 | −0.588 | −0.704 | −0.348 | 0.136 | −0.727 | −0.428
Item 16 | 4.613 | −5.115 | 0.162 | 1.000 | −0.155 | 0.454 | 0.830 | −0.225 | 1.087 | 0.210 | 0.413 | −0.458
Item 17 | 2.496 | −2.914 | 0.104 | 1.000 | 0.357 | 0.555 | 1.501 | 0.196 | 2.792 | 3.409 | 1.641 | 1.454
Item 18 | 2.146 | −1.745 | 0.031 | 1.000 | 0.359 | −0.264 | 0.303 | 0.102 | 1.437 | 0.332 | 1.611 | 2.398
Item 19 | 3.048 | −4.844 | 0.157 | 0.998 | −0.912 | −0.245 | −0.784 | −0.300 | −1.096 | 0.343 | −0.917 | 0.433
Item 20 | 2.217 | −4.113 | 0.139 | 0.991 | −0.666 | 0.393 | 0.319 | −0.633 | −1.023 | 0.206 | 0.540 | −0.555
Note: The first four columns are the correct-response parameters; λ_i,k and δ_i,k are the distractor (nominal) parameters. Only point estimates are shown.
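Finally, item parameter tables like Tables 4–6, as well as applicant ability estimates, can be pulled from a fitted model object. A short sketch, again assuming the R package mirt and the hypothetical `fit_3pnl` object from the earlier sketches:

```r
library(mirt)

# 'fit_3pnl' is assumed to be a fitted 3PNL model, as sketched above.

# Item parameter estimates (one row per item), roughly matching the layout of Table 5
coef(fit_3pnl, simplify = TRUE)$items

# EAP ability estimates and their standard errors, e.g., for ranking applicants
theta_hat <- fscores(fit_3pnl, method = "EAP", full.scores.SE = TRUE)
head(theta_hat)
```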
