Editorial

Analysis of an Intelligence Dataset

Nils Myszkowski
Department of Psychology, Pace University, New York, NY 10038, USA
Submission received: 28 October 2020 / Accepted: 4 November 2020 / Published: 19 November 2020
(This article belongs to the Special Issue Analysis of an Intelligence Dataset)
It is perhaps a popular belief—at least among non-psychometricians—that there is a unique or standard way to investigate the psychometric qualities of tests. If anything, the present Special Issue demonstrates that this is not the case. On the contrary, this Special Issue on the “analysis of an intelligence dataset” is, in my opinion, a window into the current vitality of the field of psychometrics.
Much like an invitation to retell a story in various styles or from various points of view, this Special Issue was open to contributions that offered extensions or reanalyses of a single—and somewhat simple—recently published dataset. The dataset came from Myszkowski and Storme (2018) and contained the responses of 499 adults to a non-verbal logical reasoning multiple-choice test, the SPM–LS, which consists of the last series of Raven’s Standard Progressive Matrices (Raven 1941). The SPM–LS is discussed further in the original paper (as well as in the investigations presented in this Special Issue), and most researchers in the field are likely familiar with the Standard Progressive Matrices; the SPM–LS is simply a proposal to use the last series of the test as a standalone instrument. A minimal description would characterize it as a theoretically unidimensional measure—in the sense that one ability is tentatively measured—comprising 12 pass–fail non-verbal items of (tentatively) increasing difficulty. Here, I refer to the pass–fail responses as the binary responses, and to the full responses (including which distractor was selected) as the polytomous responses. The original paper used a number of analyses, including exploratory factor analysis with parallel analysis, confirmatory factor analysis in a structural equation modeling framework, binary logistic item response theory models (1-, 2-, 3-, and 4-parameter models), and polytomous (unordered) item response theory models, namely the nominal response model (Bock 1972) and nested logit models (Suh and Bolt 2010). However extensive that original analysis may have seemed, the contributions of this Special Issue present several extensions to it.
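For readers less familiar with these models, the following minimal R sketch illustrates the kind of binary and distractor-based item response models referred to above, using the mirt package (Chalmers 2012). It is not the original analysis script: the object names spm_binary (pass–fail responses), spm_poly (full multiple-choice responses), and key (vector of correct options) are hypothetical placeholders.

library(mirt)

# Binary (pass-fail) item response models of increasing complexity
fit_1pl <- mirt(spm_binary, model = 1, itemtype = "Rasch")
fit_2pl <- mirt(spm_binary, model = 1, itemtype = "2PL")
fit_3pl <- mirt(spm_binary, model = 1, itemtype = "3PL")
fit_4pl <- mirt(spm_binary, model = 1, itemtype = "4PL")
anova(fit_2pl, fit_3pl)  # compare nested models (likelihood ratio, AIC/BIC)

# Polytomous (unordered) models using the full multiple-choice responses
fit_nrm <- mirt(spm_poly, model = 1, itemtype = "nominal")            # nominal response model
fit_nlm <- mirt(spm_poly, model = 1, itemtype = "2PLNRM", key = key)  # 2PL nested logit model

coef(fit_2pl, IRTpars = TRUE, simplify = TRUE)  # discrimination and difficulty parameters
head(fscores(fit_nlm))                          # ability estimates drawing on distractor information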
I will now briefly introduce the contributions to the Special Issue, in chronological order of publication. Garcia-Garzon et al. (2019) propose an extensive reanalysis of the dimensionality of the SPM–LS, using a large variety of techniques, including bifactor models and exploratory graph analysis. Storme et al. (2019) then show that the reliability-boosting strategy proposed in the original paper—which consisted of using nested logit models (Suh and Bolt 2010) to recover information from distractors—is useful in other contexts, taking as an example a logical reasoning test used in personnel selection. Bürkner (2020) presents how to use his R Bayesian multilevel modeling package brms (Bürkner 2017) to estimate various binary item response theory models, and compares the results with the frequentist approach taken in the original paper with the item response theory package mirt (Chalmers 2012). Forthmann et al. (2020) propose a new procedure to detect (or select) items with discriminating distractors (i.e., items for which distractor responses could be used to extract additional information). Partchev (2020) discusses issues related to using distractor information to extract information on ability in multiple-choice tests, particularly in the context of cognitive assessment, and presents how to use the R package dexter (Maris et al. 2020) to study the binary responses and distractors of the SPM–LS. I then present an analysis of the SPM–LS (especially of its monotonicity) using (mostly) the framework of Mokken scale analysis (Mokken 1971). Finally, Robitzsch (2020) proposes new procedures for latent class analysis applied to the polytomous responses, combined with regularization to obtain models of parsimonious complexity.
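As an illustration of the last framework mentioned above, the short R sketch below shows how scalability and monotonicity checks of the kind used in my own contribution could be run with the mokken package. This is a sketch under the same assumption as before (a hypothetical spm_binary object holding the pass–fail responses), not the analysis script of the paper itself.

library(mokken)

coefH(spm_binary)                       # Loevinger's H scalability coefficients (per item and overall)
mono <- check.monotonicity(spm_binary)  # manifest monotonicity checks
summary(mono)                           # active checks and significant violations per item
plot(mono)                              # rest-score group plots for visual inspection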
It is interesting to note that, in spite of the relative straightforwardness of the task and the relative simplicity of the dataset—which, in the end, contains answers to a few pass–fail items in a (theoretically) unidimensional instrument—the contributions to this Special Issue offer many original and fresh perspectives on analyzing intelligence test data. Admittedly, much like the story retold 99 times in Queneau’s Exercices de style, the dataset reanalyzed in this Special Issue is, in and of itself, of moderate interest. Nevertheless, the variety, breadth, and complementarity of the procedures used, proposed, and described here clearly demonstrate the creative nature of the field, echoing Thissen’s (2001) proposition to see artistic value in psychometric engineering. I would like to thank Paul De Boeck for proposing the topic of this Special Issue and for inviting me to act as guest editor, as well as the authors and reviewers of the articles published in this issue for their excellent contributions. I hope that the readers of the Journal of Intelligence will find as much interest in them as I do.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Bock, R. Darrell. 1972. Estimating item parameters and latent ability when responses are scored in two or more nominal categories. Psychometrika 37: 29–51.
  2. Bürkner, Paul-Christian. 2017. brms: An R Package for Bayesian Multilevel Models Using Stan. Journal of Statistical Software 80: 1–28.
  3. Bürkner, Paul-Christian. 2020. Analysing Standard Progressive Matrices (SPM-LS) with Bayesian Item Response Models. Journal of Intelligence 8: 5.
  4. Chalmers, R. Philip. 2012. mirt: A Multidimensional Item Response Theory Package for the R Environment. Journal of Statistical Software 48: 1–29.
  5. Forthmann, Boris, Birgit Schütze, Natalie Förster, Karin Hebbecker, Janis Flessner, Martin T. Peters, and Elmar Souvignier. 2020. How Much g Is in the Distractor? Re-Thinking Item-Analysis of Multiple-Choice Items. Journal of Intelligence 8: 11.
  6. Garcia-Garzon, Eduardo, Francisco J. Abad, and Luis E. Garrido. 2019. Searching for G: A New Evaluation of SPM-LS Dimensionality. Journal of Intelligence 7: 14.
  7. Maris, Gunter, Timo Bechger, Jesse Koops, and Ivailo Partchev. 2020. dexter: Data Management and Analysis of Tests. Available online: https://rdrr.io/cran/dexter/ (accessed on 6 November 2020).
  8. Mokken, Robert J. 1971. A Theory and Procedure of Scale Analysis. The Hague and Berlin: Mouton/De Gruyter.
  9. Myszkowski, Nils, and Martin Storme. 2018. A snapshot of g? Binary and polytomous item-response theory investigations of the last series of the Standard Progressive Matrices (SPM-LS). Intelligence 68: 109–16.
  10. Partchev, Ivailo. 2020. Diagnosing a 12-Item Dataset of Raven Matrices: With Dexter. Journal of Intelligence 8: 21.
  11. Raven, John C. 1941. Standardization of Progressive Matrices, 1938. British Journal of Medical Psychology 19: 137–50.
  12. Robitzsch, Alexander. 2020. Regularized Latent Class Analysis for Polytomous Item Responses: An Application to SPM-LS Data. Journal of Intelligence 8: 30.
  13. Storme, Martin, Nils Myszkowski, Simon Baron, and David Bernard. 2019. Same Test, Better Scores: Boosting the Reliability of Short Online Intelligence Recruitment Tests with Nested Logit Item Response Theory Models. Journal of Intelligence 7: 17.
  14. Suh, Youngsuk, and Daniel M. Bolt. 2010. Nested Logit Models for Multiple-Choice Item Response Data. Psychometrika 75: 454–73.
  15. Thissen, David. 2001. Psychometric engineering as art. Psychometrika 66: 473–85.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
