Intelligence Testing and Assessment

A special issue of Journal of Intelligence (ISSN 2079-3200). This special issue belongs to the section "Contributions to the Measurement of Intelligence".

Deadline for manuscript submissions: 31 May 2025

Special Issue Editors

Dr. Marco Koch
Guest Editor
Individual Differences and Psychodiagnostics, Saarland University, Campus A1.3, 66123 Saarbrücken, Germany
Interests: intelligence; fluid reasoning; figural matrices; computer-based testing; test development

Prof. Dr. Frank M. Spinath
Guest Editor
Individual Differences and Psychodiagnostics, Saarland University, Campus A1.3, 66123 Saarbrücken, Germany
Interests: behavior genetic studies on intelligence and personality; determinants of school success; stability and change in intelligence and personality characteristics; multimodal assessment of behavior; validity of psychodiagnostics in basic and applied fields

Prof. Dr. Florian Schmitz
Guest Editor
Department of Psychology, University of Duisburg-Essen, Universitätsstraße 2, 45141 Essen, Germany
Interests: intelligence and cognitive abilities; working memory; mental speed; assessment; psychometrics; performance modeling

Special Issue Information

Dear Colleagues,

Intelligence research has a long-standing history, and intelligence is one of the most important predictors of many life outcomes across a variety of settings (e.g., health behavior, academic achievement, and occupational performance). Ability assessment in general, and the assessment of general mental ability in particular, dates back over 2000 years to the selection of applicants for clerical positions in ancient China. Since the 19th century, researchers have developed increasingly sophisticated tests of human intelligence. This development has been driven by a multi-faceted approach, ranging from general mental ability tests and educational tests to specialized forms such as inductive reasoning tests. This field of research has also introduced and refined the publicly known concept of the IQ score. Many of these early developments are still used today, as they have proven reliable, valid, and useful for research and application alike.

Today, computers are omnipresent in our daily lives. However, their use in intelligence testing is still largely limited to administering virtual tests that were originally created as paper-and-pencil tests, and scores are conventionally computed as the sum of correct responses. This takes advantage of neither the diverse possibilities of computerized test administration nor the rich information available in logged process data. Only recently have researchers explored innovative testing and scoring techniques that may complement established ability tests in the future:

  • Administering potentially more complex or ecologically valid test contents, e.g., videos, game-based assessment, interactive and adaptive testing, and simulation paradigms;
  • Using logged process data, such as decision sequences, partial solutions, and response time data, or even movement and behavioral assessment;
  • Exploring new ways of using these data, including innovative performance modeling (e.g., AI algorithms).

This Special Issue aims to offer a platform for researchers who work on such innovative developments and who seek to share their experiences, whether promising or disappointing. We encourage the submission of empirical papers testing the potential and limitations of innovative tests, as well as their comparability with traditional instruments. We also encourage the submission of theoretical and review articles summarizing the development and state of the art of novel assessment formats and performance modeling. Potential topics include, but are not limited to, the following:

  • The development and validation of new computer-based test formats for the assessment of intelligence (e.g., VR applications, gamified testing);
  • The reanalysis of (logged process) data from traditional intelligence tests using novel modeling techniques;
  • Comparisons of modern and traditional instruments regarding psychometric properties, including measurement equivalence and validity;
  • The prediction of intelligence without explicit intelligence testing (e.g., prediction from non-cognitive data using machine learning models).

Dr. Marco Koch
Prof. Dr. Frank M. Spinath
Prof. Dr. Florian Schmitz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Intelligence is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligence
  • mental ability
  • intelligence test
  • computer-based testing
  • process data
  • response times

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)


Research

25 pages, 2408 KiB  
Article
Enhancing Spatial Ability Assessment: Integrating Problem-Solving Strategies in Object Assembly Tasks Using Multimodal Joint-Hierarchical Cognitive Diagnosis Modeling
by Jujia Li, Kaiwen Man and Joni M. Lakin
J. Intell. 2025, 13(3), 30; https://doi.org/10.3390/jintelligence13030030 - 5 Mar 2025
Abstract
We proposed a novel approach to investigate how problem-solving strategies, identified using response time and eye-tracking data, can impact individuals’ performance on the Object Assembly (OA) task. To conduct an integrated assessment of spatial reasoning ability and problem-solving strategy, we applied the Multimodal Joint-Hierarchical Cognitive Diagnosis Model (MJ-DINA) to analyze the performance of young students (aged 6 to 14) on 17 OA items. The MJ-DINA model consists of three sub-models: a Deterministic Inputs, Noisy “and” Gate (DINA) model for estimating spatial ability, a lognormal RT model for response time, and a Bayesian Negative Binomial (BNF) model for fixation counts. In the DINA model, we estimated five spatial cognitive attributes aligned with problem-solving processes: encoding, falsification, mental rotation, mental displacement, and intractability recognition. Our model fits the data adequately, with Gelman–Rubin convergence statistics near 1.00 and posterior predictive p-values between 0.05 and 0.95 for the DINA, Log RT, and BNF sub-models, indicating reliable parameter estimation. Our findings indicate that individuals with faster processing speeds and fewer fixation counts, which we label Reflective-Scanner, outperformed the other three identified problem-solving strategy groups. Specifically, sufficient eye movement was a key factor contributing to better performance on spatial reasoning tasks. Additionally, the most effective method for improving individuals’ spatial task performance was training them to master the falsification attribute. This research offers valuable implications for developing tailored teaching methods to improve individuals’ spatial ability, depending on various problem-solving strategies. Full article
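For readers unfamiliar with the DINA component mentioned in the abstract, a generic DINA item response function can be sketched as follows. This is an illustrative sketch of the standard model, not the authors' implementation; the attribute vector, Q-matrix row, and parameter values are hypothetical:

```python
def dina_prob(alpha, q, guess, slip):
    """Generic DINA item response probability (illustrative sketch).

    alpha: binary mastery vector over the attributes (e.g., the five
           spatial attributes described in the abstract above).
    q:     the item's Q-matrix row (1 where the item requires an attribute).
    guess: probability of a correct response without all required attributes.
    slip:  probability of an incorrect response despite mastering them.
    """
    # eta = 1 iff the examinee masters every attribute the item requires
    eta = int(all(a == 1 for a, req in zip(alpha, q) if req == 1))
    return (1 - slip) ** eta * guess ** (1 - eta)

# Hypothetical item requiring attributes 1 and 3; the examinee masters
# both required attributes, so P(correct) = 1 - slip.
print(dina_prob([1, 0, 1, 0, 0], [1, 0, 1, 0, 0], guess=0.2, slip=0.1))  # 0.9
```

In the full MJ-DINA model, this accuracy sub-model is estimated jointly with the lognormal response time and fixation count sub-models, linking speed and eye movement behavior to the same latent person parameters.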
(This article belongs to the Special Issue Intelligence Testing and Assessment)