Special Issue "Cognitive Models in Intelligence Research"

A special issue of Journal of Intelligence (ISSN 2079-3200).

Deadline for manuscript submissions: closed (31 May 2018).

Special Issue Editor

Prof. Dr. Anna-Lena Schubert
Guest Editor
Institute of Psychology, University of Mainz, D-55122 Mainz, Germany
Interests: fluid intelligence; processing speed; working memory; attentional control; biological basis of intelligence; cognitive models; psychometrics

Special Issue Information

Dear Colleagues,

Identifying the cognitive processes underlying individual differences in cognitive abilities has important implications both for elementary cognitive and neuroscientific research and for numerous applications, such as targeted training programs. Recent advances in mathematical models that describe cognitive processes such as working memory and speeded decision making provide new opportunities to identify specific process parameters related to intelligence. Moreover, accessible software solutions allow mathematical models such as the diffusion model to be applied to a wide range of research questions on individual differences. These developments raise a number of exciting questions:

  • How are parameters of cognitive models related to mental abilities?
  • Is there a hierarchical structure of model parameters reflecting both general and domain- or content-specific abilities?
  • Can cognitive models provide a link between neural data and intelligence test scores in terms of mediating cognitive processes, as suggested by proponents of model-based cognitive neuroscience?
  • How can the psychometric properties of model parameters be evaluated and improved?
  • Are there best practices and practical recommendations for the application of cognitive models in individual differences research?

This Special Issue invites contributions addressing conceptual or empirical questions regarding cognitive models in intelligence research.

Dr. Anna-Lena Schubert
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Intelligence is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cognitive models
  • mathematical models
  • diffusion model
  • item response theory
  • factor structure
  • mental speed
  • working memory
  • model-based cognitive neuroscience

Published Papers (3 papers)


Research

Article
Scoring Alternatives for Mental Speed Tests: Measurement Issues and Validity for Working Memory Capacity and the Attentional Blink Effect
J. Intell. 2018, 6(4), 47; https://doi.org/10.3390/jintelligence6040047 - 15 Oct 2018
Cited by 6 | Viewed by 5009
Abstract
Research suggests that the relation of mental speed with working memory capacity (WMC) depends on the complexity and scoring methods of speed tasks and on the type of task used to assess capacity limits in working memory. In the present study, we included conventional binding/updating measures of WMC as well as rapid serial visual presentation paradigms. The latter allowed for a computation of the attentional blink (AB) effect, which has been argued to measure capacity limitations at the encoding stage of working memory. Mental speed was assessed with a set of tasks and scored by diverse methods, including response time (RT) based scores as well as ex-Gaussian and diffusion model parameterization. Relations of latent factors were investigated using structural equation modeling techniques. RT-based scores of mental speed yielded substantial correlations with WMC but only weak relations with the AB effect, while WMC and the AB magnitude were independent. The strength of the speed-WMC relation was shown to depend on task type. Additionally, the increase in predictive validity across RT quantiles changed across task types, suggesting that the worst performance rule (WPR) depends on task characteristics. In contrast, relations of speed with the AB effect did not change across RT quantiles. Relations of the model parameters were consistently found for the ex-Gaussian tau parameter and the diffusion model drift rate. However, depending on task type, other parameters showed plausible relations as well. The finding that characteristics of mental speed tasks determined the overall strength of relations with WMC, the occurrence of a WPR effect, and the specific pattern of relations of model parameters implies that mental speed tasks are not interchangeable measurement tools. In spite of reflecting a general factor of mental speed, different speed tasks possess different requirements, supporting the notion of mental speed as a hierarchical construct.
(This article belongs to the Special Issue Cognitive Models in Intelligence Research)
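The ex-Gaussian decomposition mentioned in this abstract summarizes an RT distribution as a Gaussian component (mu, sigma) plus an exponential tail (tau). As a minimal sketch of the idea (not the authors' actual analysis pipeline), an ex-Gaussian can be fit by maximum likelihood with SciPy's `exponnorm` distribution, which parameterizes the shape as K = tau/sigma:

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(42)

# Simulate 5000 reaction times (in seconds) from an ex-Gaussian:
# Gaussian component (mu = 0.40, sigma = 0.05) plus an exponential
# tail (tau = 0.20), the component tied to the right skew of RTs.
mu, sigma, tau = 0.40, 0.05, 0.20
rts = rng.normal(mu, sigma, 5000) + rng.exponential(tau, 5000)

# SciPy's exponnorm uses shape K = tau/sigma, loc = mu, scale = sigma.
K, loc, scale = exponnorm.fit(rts)
mu_hat, sigma_hat, tau_hat = loc, scale, K * scale

print(mu_hat, sigma_hat, tau_hat)
```

Per-participant tau estimates obtained this way are the kind of scores that can then enter a structural equation model alongside WMC measures.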

Article
Cognitive Models in Intelligence Research: Advantages and Recommendations for Their Application
J. Intell. 2018, 6(3), 34; https://doi.org/10.3390/jintelligence6030034 - 17 Jul 2018
Cited by 17 | Viewed by 7559
Abstract
Mathematical models of cognition measure individual differences in cognitive processes, such as processing speed, working memory capacity, and executive functions, that may underlie general intelligence. As such, cognitive models allow identifying associations between specific cognitive processes and intelligence, and tracking the effect of experimental interventions aimed at the enhancement of intelligence on mediating process parameters. Moreover, cognitive models provide an explicit theoretical formalization of theories regarding specific cognitive processes that may help in overcoming ambiguities in the interpretation of fuzzy verbal theories. In this paper, we give an overview of the advantages of cognitive modeling in intelligence research and present models in the domains of processing speed, working memory, and selective attention that may be of particular interest for intelligence research. Moreover, we provide guidelines for the application of cognitive models in intelligence research, including data collection, the evaluation of model fit, and statistical analyses.
(This article belongs to the Special Issue Cognitive Models in Intelligence Research)
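As a rough sketch of how diffusion-model parameters can be obtained in practice (the paper discusses full model fitting; shown here is only the simpler EZ-diffusion approximation of Wagenmakers and colleagues), drift rate, boundary separation, and non-decision time follow in closed form from accuracy, RT variance, and mean RT:

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """EZ-diffusion: map proportion correct (pc, strictly between 0 and 1
    and different from 0.5), variance of correct RTs (vrt), and mean
    correct RT (mrt) to drift rate v, boundary separation a, and
    non-decision time ter, with scaling constant s."""
    L = math.log(pc / (1 - pc))                      # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x**0.25   # drift rate
    a = s**2 * L / v                                 # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - math.exp(y)) / (1 + math.exp(y))
    ter = mrt - mdt                                  # non-decision time
    return v, a, ter

# Worked example from the EZ-diffusion paper's illustration.
v, a, ter = ez_diffusion(pc=0.802, vrt=0.112, mrt=0.723)
print(round(v, 3), round(a, 3), round(ter, 3))  # 0.1 0.14 0.3
```

Full diffusion-model fitting additionally estimates across-trial variability parameters and handles error RTs explicitly, which is why the paper's guidelines on data collection and model-fit evaluation matter in practice.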

Article
Evaluating an Automated Number Series Item Generator Using Linear Logistic Test Models
J. Intell. 2018, 6(2), 20; https://doi.org/10.3390/jintelligence6020020 - 2 Apr 2018
Cited by 6 | Viewed by 6451
Abstract
This study investigates the item properties of a newly developed Automatic Number Series Item Generator (ANSIG). The foundation of the ANSIG is based on five hypothesised cognitive operators. Thirteen item models were developed using the numGen R package, and eleven were evaluated in this study. The 16-item International Cognitive Ability Resource (ICAR) short-form ability test was used to evaluate construct validity. The Rasch model and two Linear Logistic Test Models (LLTMs) were employed to estimate and predict the item parameters. Results indicate that a single factor determines performance on tests composed of items generated by the ANSIG. Under the LLTM approach, all the cognitive operators were significant predictors of item difficulty. Moderate to high correlations were evident between the number series items and the ICAR test scores, with a high correlation found for the ICAR Letter-Numeric-Series type items, suggesting adequate nomothetic span. Extended cognitive research is, nevertheless, essential for the automatic generation of an item pool with predictable psychometric properties.
(This article belongs to the Special Issue Cognitive Models in Intelligence Research)
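The LLTM constrains Rasch item difficulties to be a weighted sum of item features, here the hypothesised cognitive operators. A toy sketch of that decomposition (with an invented 13 x 5 operator matrix and ordinary least squares on noise-free difficulties, rather than the conditional maximum-likelihood estimation a real LLTM uses):

```python
import numpy as np

# Invented Q-matrix: 13 items x 5 hypothesised cognitive operators
# (1 = the item requires that operator). Illustrative only; this is
# not the ANSIG design.
Q = np.vstack([
    np.eye(5),
    [[1, 1, 0, 0, 0],
     [0, 1, 1, 0, 0],
     [0, 0, 1, 1, 0],
     [0, 0, 0, 1, 1],
     [1, 0, 1, 0, 1],
     [1, 1, 1, 0, 0],
     [0, 1, 0, 1, 1],
     [1, 0, 0, 1, 0]],
])

# Operator weights (eta): each operator's contribution to difficulty.
eta_true = np.array([0.8, -0.3, 1.2, 0.5, -0.6])

# LLTM assumption: an item's difficulty is a linear function of the
# operators it requires (noise omitted here so recovery is exact).
beta = Q @ eta_true

# Recover the operator weights from the item difficulties.
eta_hat, *_ = np.linalg.lstsq(Q, beta, rcond=None)
print(np.round(eta_hat, 2))  # recovers eta_true
```

If the recovered weights reproduce the Rasch difficulties well, new items can be generated with predictable difficulty by choosing which operators they require, which is the rationale behind automatic item generation.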
