Analysis of an Intelligence Dataset

A special issue of Journal of Intelligence (ISSN 2079-3200).

Deadline for manuscript submissions: closed (31 May 2020)

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editor


Dr. Nils Myszkowski
Guest Editor
Department of Psychology, Pace University, 1 Pace Plaza, New York, NY 10038, USA
Interests: aesthetic abilities; intelligence; creativity; personality

Special Issue Information

Dear Colleagues,

The Journal of Intelligence plans a special issue on the analysis of data from the last series of Raven's Standard Progressive Matrices (SPM-LS).

  • The data are available at: https://data.mendeley.com/datasets/h3yhs5gy3w/1
  • The answer key is 7, 6, 8, 2, 1, 5, 1, 6, 3, 2, 4, 5 (a short scoring sketch in R follows this list)
  • For an example of how the data can be analyzed, see:
    Myszkowski, N., & Storme, M. (2018). A snapshot of g. Binary and polytomous item-response theory investigations of the last series of the Standard Progressive Matrices (SPM-LS). Intelligence, 68, 109-116.
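
As a starting point, here is a minimal R sketch for loading and scoring the responses against the key above. It assumes the Mendeley file has been exported to a CSV named spm_ls.csv whose first 12 columns hold the chosen option (1-8) for items 1-12; the file name and column layout are assumptions, not part of the original dataset description.

```r
# Score the 12 SPM-LS items against the published answer key.
key <- c(7, 6, 8, 2, 1, 5, 1, 6, 3, 2, 4, 5)

responses  <- read.csv("spm_ls.csv")[, 1:12]   # hypothetical file name / column layout
scored     <- 1L * sweep(as.matrix(responses), 2, key, FUN = "==")  # 1 = correct, 0 = incorrect
sum_scores <- rowSums(scored)                  # classical sum score per respondent
```

The objects `responses` (raw option choices), `scored` (dichotomous item scores) and `key` are reused in the sketches accompanying the papers below.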

We welcome all manuscripts that contribute to an understanding of the data and/or to the measurement of intelligence, whether or not they are related to the approaches used by Myszkowski and Storme (2018). Any type of analysis qualifies, provided it is of sufficiently high quality. The deadline for submission is 30 November 2019. Submitted manuscripts will be subjected to the regular review process.

Dr. Nils Myszkowski
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Intelligence is an international peer-reviewed open access journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (8 papers)


Editorial


3 pages, 215 KiB  
Editorial
Analysis of an Intelligence Dataset
by Nils Myszkowski
J. Intell. 2020, 8(4), 39; https://doi.org/10.3390/jintelligence8040039 - 19 Nov 2020
Cited by 2
Abstract
It is perhaps popular belief—at least among non-psychometricians—that there is a unique or standard way to investigate the psychometric qualities of tests [...]

Research


24 pages, 489 KiB  
Article
Regularized Latent Class Analysis for Polytomous Item Responses: An Application to SPM-LS Data
by Alexander Robitzsch
J. Intell. 2020, 8(3), 30; https://doi.org/10.3390/jintelligence8030030 - 14 Aug 2020
Cited by 7
Abstract
The last series of Raven's standard progressive matrices (SPM-LS) test was studied with respect to its psychometric properties in a series of recent papers. In this paper, the SPM-LS dataset is analyzed with regularized latent class models (RLCMs). For dichotomous item response data, an alternative estimation approach based on fused regularization for RLCMs is proposed. For polytomous item responses, different alternative fused regularization penalties are presented. The usefulness of the proposed methods is demonstrated in a simulated data illustration and for the SPM-LS dataset. For the SPM-LS dataset, it turned out that the regularized latent class model resulted in five partially ordered latent classes. In total, three out of five latent classes are ordered for all items. For the remaining two classes, violations for two and three items were found, respectively, which can be interpreted as a kind of latent differential item functioning.
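
For readers who want a concrete entry point, the sketch below fits a plain (unregularized) latent class model to the polytomous option choices with the poLCA package. This is only a conventional baseline, not the fused-regularization RLCM estimator developed in the paper, and it assumes the `responses` data frame from the scoring sketch above, with options coded 1-8.

```r
# Plain (unregularized) latent class baseline with poLCA -- not the fused-regularization
# RLCM estimator of the paper, just a conventional point of comparison.
library(poLCA)

# poLCA expects manifest variables coded as integers starting at 1; the raw
# SPM-LS option choices (1-8) already satisfy this.
f    <- as.formula(paste0("cbind(", paste(names(responses), collapse = ", "), ") ~ 1"))
lca5 <- poLCA(f, data = responses, nclass = 5, maxiter = 5000, nrep = 10)
lca5$bic   # compare across different values of nclass to choose the number of classes
```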

15 pages, 650 KiB  
Article
A Mokken Scale Analysis of the Last Series of the Standard Progressive Matrices (SPM-LS)
by Nils Myszkowski
J. Intell. 2020, 8(2), 22; https://doi.org/10.3390/jintelligence8020022 - 6 May 2020
Cited by 9
Abstract
Raven's Standard Progressive Matrices (Raven 1941) is a widely used 60-item long measure of general mental ability. It was recently suggested that, for situations where taking this test is too time consuming, a shorter version, comprised of only the last series of the Standard Progressive Matrices (Myszkowski and Storme 2018), could be used, while preserving satisfactory psychometric properties (Garcia-Garzon et al. 2019; Myszkowski and Storme 2018). In this study, I argue, however, that some psychometric properties have been left aside by previous investigations. As part of this special issue on the reinvestigation of Myszkowski and Storme's dataset, I propose to use the non-parametric Item Response Theory framework of Mokken Scale Analysis (Mokken 1971, 1997) and its current developments (Sijtsma and van der Ark 2017) to shed new light on the SPM-LS. Extending previous findings, this investigation indicated that the SPM-LS had satisfactory scalability (H = 0.469), local independence and reliability (MS = 0.841, LCRC = 0.874). Further, all item response functions were monotonically increasing, and there was overall evidence for invariant item ordering (HT = 0.475), supporting the Double Monotonicity Model (Mokken 1997). Item 1, however, appeared problematic in most analyses. I discuss the implications of these results, notably regarding whether to discard item 1, whether the SPM-LS sum scores can confidently be used to order persons, and whether the invariant item ordering of the SPM-LS allows the use of a stopping rule to further shorten test administration.
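
The Mokken analyses reported above can be reproduced in outline with the mokken package; the sketch below assumes the dichotomous `scored` matrix from the scoring sketch at the top of this page and only lists the main checks, not the paper's full analysis.

```r
# Non-parametric IRT checks with the mokken package.
library(mokken)

coefH(scored)                           # scalability coefficients (item Hi and overall H)
check.monotonicity(scored)              # monotonicity of the item response functions
check.iio(scored)                       # invariant item ordering (reports HT)
check.reliability(scored, LCRC = TRUE)  # MS, alpha and LCRC reliability estimates
```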

18 pages, 487 KiB  
Article
Diagnosing a 12-Item Dataset of Raven Matrices: With Dexter
by Ivailo Partchev
J. Intell. 2020, 8(2), 21; https://doi.org/10.3390/jintelligence8020021 - 6 May 2020
Cited by 3
Abstract
We analyze a 12-item version of Raven's Standard Progressive Matrices test, traditionally scored with the sum score. We discuss some important differences between assessment in practice and psychometric modelling. We demonstrate some advanced diagnostic tools in the freely available R package, dexter. We find that the first item in the test functions badly—at a guess, because the subjects were not given exercise items before the live test.
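
A sketch of the dexter workflow is given below. It assumes the raw option choices (1-8) sit in a 12-column data frame `responses` as in the scoring sketch above; the item labels, database file name, and booklet id are made up for illustration, and the calls shown are only the first diagnostic steps, not the paper's full analysis.

```r
# Enter the data into a dexter project and request basic diagnostics.
library(dexter)

key   <- c(7, 6, 8, 2, 1, 5, 1, 6, 3, 2, 4, 5)
items <- sprintf("item%02d", 1:12)             # hypothetical item labels

# Scoring rules: every response option gets a score, 1 for the keyed option, 0 otherwise.
rules <- data.frame(
  item_id    = rep(items, each = 8),
  response   = rep(1:8, times = 12),
  item_score = as.integer(rep(1:8, times = 12) == rep(key, each = 8))
)

db <- start_new_project(rules, "spm_ls.db")    # hypothetical database file name
names(responses) <- items
add_booklet(db, responses, booklet_id = "spm_ls")

tia_tables(db)                 # classical item statistics
distractor_plot(db, "item01")  # inspect how the first item (mis)behaves
```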

36 pages, 8660 KiB  
Article
How Much g Is in the Distractor? Re-Thinking Item-Analysis of Multiple-Choice Items
by Boris Forthmann, Natalie Förster, Birgit Schütze, Karin Hebbecker, Janis Flessner, Martin T. Peters and Elmar Souvignier
J. Intell. 2020, 8(1), 11; https://doi.org/10.3390/jintelligence8010011 - 9 Mar 2020
Cited by 8
Abstract
Distractors might display discriminatory power with respect to the construct of interest (e.g., intelligence), which was shown in recent applications of nested logit models to the short-form of Raven's progressive matrices and other reasoning tests. In this vein, a simulation study was carried out to examine two effect size measures (i.e., a variant of Cohen's ω and the canonical correlation R_CC) for their potential to detect distractors with ability-related discriminatory power. The simulation design was adapted to item selection scenarios relying on rather small sample sizes (e.g., N = 100 or N = 200). Both suggested effect size measures (Cohen's ω only when based on two ability groups) yielded acceptable to conservative Type I error rates, whereas the canonical correlation outperformed Cohen's ω in terms of empirical power. The simulation results further suggest that an effect size threshold of 0.30 is more appropriate as compared to more lenient (0.10) or stricter thresholds (0.50). The suggested item-analysis procedure is illustrated with an analysis of twelve Raven's progressive matrices items in a sample of N = 499 participants. Finally, strategies for item selection for cognitive ability tests with the goal of scaling by means of nested logit models are discussed.
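
The sketch below illustrates, for a single item, the kind of effect sizes discussed above: a two-ability-group Cohen's w for distractor choice and the canonical correlation between dummy-coded distractor choices and a rest score. It is a rough base-R illustration under the assumptions of the scoring sketch at the top of this page, not the authors' exact implementation.

```r
# Screen one item's distractors for ability-related information (rough illustration).
item  <- 1
wrong <- scored[, item] == 0                   # respondents who missed the item
rest  <- rowSums(scored[, -item])              # rest score as an ability proxy

# Variant of Cohen's w from a two-ability-group by distractor-choice table
group    <- ifelse(rest[wrong] >= median(rest[wrong]), "high", "low")
tab      <- table(group, distractor = responses[wrong, item])
cohens_w <- sqrt(chisq.test(tab)$statistic / sum(tab))

# Canonical correlation between dummy-coded distractor choices and the rest score
dummies <- model.matrix(~ factor(responses[wrong, item]))[, -1, drop = FALSE]
r_cc    <- cancor(dummies, rest[wrong])$cor[1]

c(cohens_w = unname(cohens_w), R_cc = r_cc)
```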

18 pages, 1166 KiB  
Article
Analysing Standard Progressive Matrices (SPM-LS) with Bayesian Item Response Models
by Paul-Christian Bürkner
J. Intell. 2020, 8(1), 5; https://doi.org/10.3390/jintelligence8010005 - 4 Feb 2020
Cited by 16
Abstract
Raven's Standard Progressive Matrices (SPM) test and related matrix-based tests are widely applied measures of cognitive ability. Using Bayesian Item Response Theory (IRT) models, I reanalyzed data of an SPM short form proposed by Myszkowski and Storme (2018) and, at the same time, illustrate the application of these models. Results indicate that a three-parameter logistic (3PL) model is sufficient to describe participants' dichotomous responses (correct vs. incorrect), while persons' ability parameters are quite robust across IRT models of varying complexity. These conclusions are in line with the original results of Myszkowski and Storme (2018). Using Bayesian as opposed to frequentist IRT models offered advantages in the estimation of more complex (i.e., 3–4PL) IRT models and provided more sensible and robust uncertainty estimates.
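
For orientation, the sketch below fits only the simplest of the models discussed above, a Bayesian 1PL, with brms; the 2PL-4PL models in the paper require brms' non-linear formula syntax and are not shown. It assumes the `scored` matrix from the scoring sketch at the top of this page.

```r
# Bayesian 1PL (Rasch-type) model with brms on the dichotomous scores in long format.
library(brms)

long <- data.frame(
  person   = rep(seq_len(nrow(scored)), times = 12),
  item     = rep(sprintf("item%02d", 1:12), each = nrow(scored)),
  response = as.vector(scored)
)

fit_1pl <- brm(
  response ~ 1 + (1 | item) + (1 | person),   # item easiness and person ability as random effects
  data   = long,
  family = bernoulli(link = "logit"),
  cores  = 4
)
summary(fit_1pl)
```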

22 pages, 963 KiB  
Article
Same Test, Better Scores: Boosting the Reliability of Short Online Intelligence Recruitment Tests with Nested Logit Item Response Theory Models
by Martin Storme, Nils Myszkowski, Simon Baron and David Bernard
J. Intell. 2019, 7(3), 17; https://doi.org/10.3390/jintelligence7030017 - 10 Jul 2019
Cited by 9
Abstract
Assessing job applicants' general mental ability online poses psychometric challenges due to the necessity of having brief but accurate tests. Recent research (Myszkowski & Storme, 2018) suggests that recovering distractor information through Nested Logit Models (NLM; Suh & Bolt, 2010) increases the reliability of ability estimates in reasoning matrix-type tests. In the present research, we extended this result to a different context (online intelligence testing for recruitment) and in a larger sample (N = 2949 job applicants). We found that the NLMs outperformed the Nominal Response Model (Bock, 1972) and provided significant reliability gains compared with their binary logistic counterparts. In line with previous research, the gain in reliability was especially obtained at low ability levels. Implications and practical recommendations are discussed.
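
One accessible way to fit such nested logit models is the mirt package, as sketched below with the SPM-LS objects from the scoring sketch at the top of this page; this is not the authors' code, and the choice of the 3PL-based nested logit is only an example.

```r
# Nested logit model (Suh & Bolt, 2010) versus its binary counterpart, fitted with mirt.
library(mirt)

fit_nlm <- mirt(responses, model = 1, itemtype = "3PLNRM", key = key)  # uses distractor information
fit_bin <- mirt(data.frame(scored), model = 1, itemtype = "3PL")       # binary 3PL counterpart

marginal_rxx(fit_nlm)   # marginal reliability with distractor information
marginal_rxx(fit_bin)   # marginal reliability of the binary model
```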

18 pages, 3345 KiB  
Article
Searching for G: A New Evaluation of SPM-LS Dimensionality
by Eduardo Garcia-Garzon, Francisco J. Abad and Luis E. Garrido
J. Intell. 2019, 7(3), 14; https://doi.org/10.3390/jintelligence7030014 - 28 Jun 2019
Cited by 8
Abstract
There has been increased interest in assessing the quality and usefulness of short versions of the Raven's Progressive Matrices. A recent proposal, composed of the last twelve matrices of the Standard Progressive Matrices (SPM-LS), has been depicted as a valid measure of g. Nonetheless, the results provided in the initial validation questioned the assumption of essential unidimensionality for SPM-LS scores. We tested this hypothesis through two different statistical techniques. Firstly, we applied exploratory graph analysis to assess SPM-LS dimensionality. Secondly, exploratory bi-factor modelling was employed to understand the extent to which potential specific factors represent significant sources of variance after a general factor has been considered. Results evidenced that, if modelled appropriately, SPM-LS scores are essentially unidimensional and constitute a reliable measure of g. However, an additional specific factor was systematically identified for the last six items of the test. The implications of such findings for future work on the SPM-LS are discussed.
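
The two techniques named above are available in standard R packages; the sketch below shows a minimal version with EGAnet for exploratory graph analysis and psych::omega for an exploratory bi-factor (Schmid-Leiman) solution, assuming the `scored` matrix from the scoring sketch at the top of this page. The authors' exact estimation settings will differ.

```r
# Exploratory graph analysis and an exploratory bi-factor solution.
library(EGAnet)
library(psych)

ega <- EGA(scored)                               # estimates the number of dimensions from a network model
bi  <- omega(scored, nfactors = 2, poly = TRUE)  # general factor plus group factors (tetrachoric based)
bi$omega_h                                       # omega hierarchical for the general factor
```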
