
Table of Contents

J. Intell., Volume 8, Issue 1 (March 2020) – 13 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Open Access Commentary
How Mighty Are the Mitochondria in Causing Individual Differences in Intelligence?—Some Questions for David Geary
J. Intell. 2020, 8(1), 13; https://doi.org/10.3390/jintelligence8010013 - 17 Mar 2020
Cited by 1 | Viewed by 947
Abstract
David Geary (2019) has written a summary of his fascinating Psychological Review article on the purported role of the mitochondria in the development of intelligence (Geary 2018) [...]
Open Access Article
Using the 16PF to Test the Differentiation of Personality by Intelligence Hypothesis
J. Intell. 2020, 8(1), 12; https://doi.org/10.3390/jintelligence8010012 - 10 Mar 2020
Viewed by 939
Abstract
The differentiation of personality by intelligence hypothesis suggests that there will be greater individual differences in personality traits among more intelligent individuals, whereas less intelligent individuals will be more similar to one another in their personality traits. The hypothesis was tested with a large sample of managerial job candidates who completed an omnibus personality measure with 16 scales and five intelligence measures (used to generate an intelligence g-factor). Based on the g-factor composite, the sample was median-split to conduct factor analyses within each half. A five-factor model was tested for both the lower- and higher-intelligence halves and was found to have configural invariance but not metric or scalar invariance. In general, the results provide little support for the differentiation hypothesis, as there was no clear and consistent pattern of lower inter-scale correlations for the more intelligent individuals.
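The median-split logic described in the abstract can be sketched as follows. This is an illustrative toy with simulated data, not the study's analysis; the sample size and the independent random scales are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: a g-factor composite and 16 personality
# scale scores for 1000 candidates.
n = 1000
g = rng.normal(size=n)
personality = rng.normal(size=(n, 16))

# Median split on the g composite, as described in the abstract.
high = personality[g >= np.median(g)]
low = personality[g < np.median(g)]

def mean_abs_intercorr(x):
    """Mean absolute off-diagonal correlation among the 16 scales."""
    r = np.corrcoef(x, rowvar=False)
    off_diag = r[~np.eye(r.shape[0], dtype=bool)]
    return np.abs(off_diag).mean()

# The differentiation hypothesis predicts weaker inter-scale
# correlations (more differentiated traits) in the high-g half.
print(mean_abs_intercorr(low), mean_abs_intercorr(high))
```

In the actual study the comparison was made via multi-group factor analysis and invariance testing rather than this simple summary statistic.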

Open Access Article
How Much g Is in the Distractor? Re-Thinking Item-Analysis of Multiple-Choice Items
J. Intell. 2020, 8(1), 11; https://doi.org/10.3390/jintelligence8010011 - 09 Mar 2020
Viewed by 916
Abstract
Distractors might display discriminatory power with respect to the construct of interest (e.g., intelligence), as shown in recent applications of nested logit models to the short form of Raven's Progressive Matrices and other reasoning tests. In this vein, a simulation study was carried out to examine two effect size measures (a variant of Cohen's ω and the canonical correlation RCC) for their potential to detect distractors with ability-related discriminatory power. The simulation design was adapted to item-selection scenarios relying on rather small sample sizes (e.g., N = 100 or N = 200). Both suggested effect size measures (Cohen's ω only when based on two ability groups) yielded acceptable to conservative type-I-error rates, whereas the canonical correlation outperformed Cohen's ω in terms of empirical power. The simulation results further suggest that an effect size threshold of 0.30 is more appropriate than more lenient (0.10) or stricter (0.50) thresholds. The suggested item-analysis procedure is illustrated with an analysis of twelve Raven's Progressive Matrices items in a sample of N = 499 participants. Finally, strategies for item selection for cognitive ability tests, with the goal of scaling by means of nested logit models, are discussed.
(This article belongs to the Special Issue Analysis of an Intelligence Dataset)
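As an illustration of the two-ability-group effect size mentioned in the abstract, Cohen's w can be computed from a group-by-distractor contingency table. The counts below are hypothetical, not data from the study, and this is a sketch of the general statistic rather than the paper's exact variant.

```python
import numpy as np

# Hypothetical counts: rows = low/high ability group (median split),
# columns = which of three distractors was chosen among wrong answers.
counts = np.array([[40, 25, 15],   # low-ability examinees
                   [15, 30, 35]],  # high-ability examinees
                  dtype=float)

n = counts.sum()
# Expected counts under independence of ability group and distractor choice.
expected = np.outer(counts.sum(axis=1), counts.sum(axis=0)) / n
chi2 = ((counts - expected) ** 2 / expected).sum()
omega = np.sqrt(chi2 / n)  # Cohen's w for the table
print(round(omega, 3))     # 0.352, above the suggested 0.30 threshold
```

A distractor pattern this skewed across ability groups would pass the 0.30 threshold the simulation results favor.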

Open Access Article
Tracking with (Un)Certainty
J. Intell. 2020, 8(1), 10; https://doi.org/10.3390/jintelligence8010010 - 03 Mar 2020
Viewed by 981
Abstract
One of the highest ambitions in educational technology is the move towards personalized learning. To this end, computerized adaptive learning (CAL) systems are developed. A popular method to track the development of student ability and item difficulty in CAL systems is the Elo Rating System (ERS). The ERS allows for dynamic model parameters by updating key parameters after every response. However, drawbacks of the ERS are that it does not provide standard errors and that it results in rating variance inflation. We identify three statistical issues responsible for both of these drawbacks. To solve these issues, we introduce a new tracking system based on urns, where every person and item is represented by an urn filled with a combination of green and red marbles. Urns are updated by an exchange of marbles after each response, such that the proportions of green marbles represent estimates of person ability or item difficulty. A main advantage of this approach is that the standard errors are known, so the method allows for statistical inference, such as testing for learning effects. We highlight features of the Urnings algorithm and compare it to the popular ERS in a simulation study and in an empirical data example from a large-scale CAL application.
(This article belongs to the Special Issue New Methods and Assessment Approaches in Intelligence Research)
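The ERS update that the abstract contrasts with Urnings can be sketched as below. This is a minimal illustration with an assumed Rasch-style expected score and an arbitrary step size K, not the paper's implementation.

```python
import math

def elo_update(theta, beta, correct, K=0.4):
    """One ERS step: move ability (theta) and difficulty (beta)
    by the difference between the observed and expected score."""
    p_correct = 1.0 / (1.0 + math.exp(-(theta - beta)))  # expected score
    theta_new = theta + K * (correct - p_correct)
    beta_new = beta - K * (correct - p_correct)
    return theta_new, beta_new

theta, beta = 0.0, 0.0
theta, beta = elo_update(theta, beta, correct=1)
print(theta, beta)  # 0.2 -0.2: ability rises, difficulty falls
```

Note that this update yields point ratings only, which is exactly the drawback the abstract raises: nothing here quantifies the uncertainty of theta or beta, whereas the urn proportions of the Urnings system come with known standard errors.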

Open Access Review
How Approaches to Animal Swarm Intelligence Can Improve the Study of Collective Intelligence in Human Teams
J. Intell. 2020, 8(1), 9; https://doi.org/10.3390/jintelligence8010009 - 02 Mar 2020
Viewed by 1321
Abstract
Researchers of team behavior have long been interested in the essential components of effective teamwork. Much existing research focuses on examining correlations between team member traits, team processes, and team outcomes, such as collective intelligence or team performance. However, these approaches are insufficient for providing insight into the dynamic, causal mechanisms through which the components of teamwork interact with one another and impact the emergence of team outcomes. Advances in the field of animal behavior have enabled a precise understanding of the behavioral mechanisms that enable groups to perform feats that surpass the capabilities of the individuals that comprise them. In this manuscript, we highlight how studies of animal swarm intelligence can inform research on collective intelligence in human teams. By improving the ability to obtain precise, time-varying measurements of team behaviors and outcomes and building upon approaches used in studies of swarm intelligence to analyze and model individual and group-level behaviors, researchers can gain insight into the mechanisms underlying the emergence of collective intelligence. Such understanding could inspire targeted interventions to improve team effectiveness and support the development of a comparative framework of group-level intelligence in animal and human groups.
(This article belongs to the Special Issue Collective Intelligence: Individual and Team Ability)
Open Access Editorial
The Many Faces of Intelligence: A Discussion of Geary’s Mitochondrial Functioning Theory on General Intelligence
J. Intell. 2020, 8(1), 8; https://doi.org/10.3390/jintelligence8010008 - 17 Feb 2020
Viewed by 1139
Abstract
David Geary’s article on intelligence (Geary 2018) and the summary of his theory in Journal of Intelligence offer a refreshing and inspirational view on intelligence [...]
Open Access Brief Report
The Effects of Using Partial or Uncorrected Correlation Matrices When Comparing Network and Latent Variable Models
J. Intell. 2020, 8(1), 7; https://doi.org/10.3390/jintelligence8010007 - 15 Feb 2020
Viewed by 1072
Abstract
Network models of the WAIS-IV based on regularized partial correlation matrices have been reported to outperform latent variable models based on uncorrected correlation matrices. The present study sought to compare network and latent variable models using both partial and uncorrected correlation matrices with both types of models. The results show that a network model provided better fit to matrices of partial correlations, but latent variable models provided better fit to matrices of full correlations. This is because the use of partial correlations removes most of the covariance common to WAIS-IV tests. Modeling should be based on uncorrected correlations, since these represent the majority of shared variance between WAIS-IV test scores.
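The distinction between full and partial correlations driving this result can be illustrated directly: the partial correlation matrix follows from the inverse (precision matrix) of the full correlation matrix, and partialling strips out shared variance. The toy matrix below is hypothetical, not WAIS-IV data.

```python
import numpy as np

def partial_correlations(R):
    """Partial correlation matrix implied by a full correlation matrix R
    (each entry controls for all remaining variables)."""
    prec = np.linalg.inv(R)
    d = np.sqrt(np.diag(prec))
    P = -prec / np.outer(d, d)
    np.fill_diagonal(P, 1.0)
    return P

# Three variables sharing a strong common factor: all full correlations 0.6.
R = np.array([[1.0, 0.6, 0.6],
              [0.6, 1.0, 0.6],
              [0.6, 0.6, 1.0]])
P = partial_correlations(R)
print(P.round(3))  # off-diagonal entries shrink from 0.6 to 0.375
```

The shrinkage from 0.6 to 0.375 is the common variance removed by partialling, which is the abstract's point about why models fit to the two kinds of matrices are not comparable.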
Open Access Article
Correlates of Self-Estimated Intelligence
J. Intell. 2020, 8(1), 6; https://doi.org/10.3390/jintelligence8010006 - 10 Feb 2020
Viewed by 1743
Abstract
This paper reports two studies examining correlates of self-estimated intelligence (SEI). In the first, 517 participants completed a measure of SEI as well as self-estimated emotional intelligence (SEEQ), physical attractiveness, health, and other ratings. Males rated their IQ higher (74.12 vs. 71.55) but their EQ lower (68.22 vs. 71.81) than females, and there were no sex differences in ratings of physical health in Study 1. Correlations showed, for all participants, that the higher they rated their IQ, the higher their ratings of EQ, attractiveness, and health. A regression of self-estimated intelligence onto three demographic, three self-rating, and three belief factors accounted for 30% of the variance: religious, educated males who did not believe in alternative medicine gave higher SEI scores. The second study partly replicated the first, with N = 475. Again, males rated their IQ higher (106.88 vs. 100.71) than females, but no difference was found for EQ (103.16 vs. 103.74). Males rated both their attractiveness (54.79 vs. 49.81) and health (61.24 vs. 55.49) higher than females. Objective, test-based cognitive ability and SEI were correlated at r = 0.30. Correlations showed, as in Study 1, positive relationships between all self-ratings. A regression showed the strongest correlates of SEI were IQ, sex, and positive self-ratings. Implications and limitations are noted.
Open Access Article
Analysing Standard Progressive Matrices (SPM-LS) with Bayesian Item Response Models
J. Intell. 2020, 8(1), 5; https://doi.org/10.3390/jintelligence8010005 - 04 Feb 2020
Cited by 1 | Viewed by 1335
Abstract
Raven’s Standard Progressive Matrices (SPM) test and related matrix-based tests are widely applied measures of cognitive ability. Using Bayesian Item Response Theory (IRT) models, I reanalyzed data of an SPM short form proposed by Myszkowski and Storme (2018) and, at the same time, illustrated the application of these models. Results indicate that a three-parameter logistic (3PL) model is sufficient to describe participants’ dichotomous responses (correct vs. incorrect), while persons’ ability parameters are quite robust across IRT models of varying complexity. These conclusions are in line with the original results of Myszkowski and Storme (2018). Using Bayesian, as opposed to frequentist, IRT models offered advantages in the estimation of more complex (i.e., 3–4PL) IRT models and provided more sensible and robust uncertainty estimates.
(This article belongs to the Special Issue Analysis of an Intelligence Dataset)
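The 3PL model referred to in the abstract has the standard item response function p(θ) = c + (1 − c) / (1 + exp(−a(θ − b))), with discrimination a, difficulty b, and guessing floor c. A minimal sketch, where the item parameters are illustrative values rather than estimates from the paper:

```python
import math

def p_correct_3pl(theta, a, b, c):
    """3PL item response function: probability of a correct response
    for ability theta, discrimination a, difficulty b, guessing c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Illustrative item: moderate discrimination, average difficulty,
# and a guessing floor of 1/8 (as for an item with eight options).
p = p_correct_3pl(theta=0.0, a=1.2, b=0.0, c=0.125)
print(round(p, 4))  # 0.5625: halfway between the guessing floor and 1
```

At theta = b the logistic part equals 0.5, so the probability sits midway between c and 1, which is why the guessing parameter shifts the whole curve upward rather than just its lower tail.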

Open Access Editorial
Acknowledgement to Reviewers of Journal of Intelligence in 2019
J. Intell. 2020, 8(1), 4; https://doi.org/10.3390/jintelligence8010004 - 17 Jan 2020
Viewed by 1232
Open Access Feature Paper Article
Ergodic Subspace Analysis
J. Intell. 2020, 8(1), 3; https://doi.org/10.3390/jintelligence8010003 - 06 Jan 2020
Viewed by 1439
Abstract
Properties of psychological variables at the mean or variance level can differ between persons and within persons across multiple time points. For example, cross-sectional findings between persons of different ages do not necessarily reflect the development of a single person over time. Recently, there has been an increased interest in the difference between covariance structures, expressed by covariance matrices, that evolve between persons and within a single person over multiple time points. If these structures are identical at the population level, the structure is called ergodic. However, recent data confirm that ergodicity does not generally hold, particularly not for cognitive variables. For example, the g factor that is dominant for cognitive abilities between persons seems to explain far less variance when concentrating on a single person’s data. However, other subdimensions of cognitive abilities seem to appear both between and within persons; that is, there seems to be a lower-dimensional subspace of cognitive abilities in which cognitive abilities are in fact ergodic. In this article, we present ergodic subspace analysis (ESA), a mathematical method to identify, for a given set of variables, which subspace is most important within persons, which is most important between persons, and which is ergodic. Similar to the common spatial patterns method, ESA first whitens a joint distribution from both the between and the within variance structure and then performs a principal component analysis (PCA) on the between distribution, which then automatically acts as an inverse PCA on the within distribution. The difference of the eigenvalues allows a separation of the rotated dimensions into the three subspaces corresponding to within, between, and ergodic substructures. We apply the method to simulated data and to data from the COGITO study to exemplify its usage.
(This article belongs to the Special Issue New Methods and Assessment Approaches in Intelligence Research)
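The whitening-then-PCA construction described above can be sketched as follows. This is an illustrative reading of the abstract, not the authors' implementation, and the toy covariance matrices are hypothetical.

```python
import numpy as np

def ergodic_subspace_sketch(S_between, S_within):
    """Sketch of the ESA idea: whiten with respect to the pooled
    covariance (assumed positive definite), then diagonalize the
    between-person covariance in the whitened space. Eigenvalues near
    1 or 0 mark dimensions dominated by between- or within-person
    variance; values near 0.5 mark candidate ergodic dimensions."""
    S_joint = S_between + S_within
    evals, evecs = np.linalg.eigh(S_joint)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T  # W S_joint W.T = I
    # In the whitened space the between and within covariances sum to I,
    # so one rotation sorts between-variance descending and
    # within-variance ascending (the "inverse PCA" property).
    lam, V = np.linalg.eigh(W @ S_between @ W.T)
    return lam[::-1], V[:, ::-1]

# Toy example: one between-dominated dimension, one ergodic dimension.
S_b = np.diag([0.9, 0.5])
S_w = np.diag([0.1, 0.5])
lam, V = ergodic_subspace_sketch(S_b, S_w)
print(lam.round(3))  # [0.9 0.5]: between-person variance shares
```

The second eigenvalue of 0.5 flags a dimension where between- and within-person variance are equal, i.e., a direction that behaves ergodically in this toy setup.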

Open Access Article
Why Intelligence Is Missing from American Education Policy and Practice, and What Can Be Done About It
J. Intell. 2020, 8(1), 2; https://doi.org/10.3390/jintelligence8010002 - 03 Jan 2020
Cited by 1 | Viewed by 1949
Abstract
To understand why education as a field has not incorporated intelligence, we must consider the field’s history and culture. Accordingly, in this cross-disciplinary collaboration between a political scientist who studies institutions and a psychologist who studies intelligence, we outline how the roots of contemporary American Educational Leadership as a field explain its avoidance of the concept of intelligence. Rooted in early 20th-century progressivism and scientific management, Educational Leadership theory envisions professionally run schools as “Taylorist” factories, with teaching and leadership largely standardized, prioritizing compliance over cognitive ability among educators. Further, the roots of modern education theory do not see the intelligence of students as largely malleable; hence, prioritizing intelligence is viewed as elitist. For more than a century, these assumptions have shaped recruitment into education as a profession. We conclude with ideas about how to bring intelligence into mainstream schooling within the existing K-12 education institutional context. We believe that better integration of intelligence and broader individual-differences research into education policy and practice would lead to more rapid advances in finding evidence-based solutions to help children.
(This article belongs to the Special Issue Intelligence and Education)
Open Access Feature Paper Article
Disentangling the Effects of Processing Speed on the Association between Age Differences and Fluid Intelligence
J. Intell. 2020, 8(1), 1; https://doi.org/10.3390/jintelligence8010001 - 25 Dec 2019
Cited by 1 | Viewed by 1882
Abstract
Several studies have demonstrated that individual differences in processing speed fully mediate the association between age and intelligence, whereas the association between processing speed and intelligence cannot be explained by age differences. Because measures of processing speed reflect a plethora of cognitive and motivational processes, it cannot be determined which specific processes give rise to this mediation effect. This makes it hard to decide whether these processes should be conceived of as a cause or an indicator of cognitive aging. In the present study, we addressed this question by using a neurocognitive psychometrics approach to decompose the association between age differences and fluid intelligence. Reanalyzing data from two previously published datasets containing 223 participants between 18 and 61 years, we investigated whether individual differences in diffusion model parameters and in ERP latencies associated with higher-order attentional processing explained the association between age differences and fluid intelligence. We demonstrate that individual differences in the speed of non-decisional processes such as encoding, response preparation, and response execution, and individual differences in latencies of ERP components associated with higher-order cognitive processes explained the negative association between age differences and fluid intelligence. Because both parameters jointly accounted for the association between age differences and fluid intelligence, age-related differences in both parameters may reflect age-related differences in anterior brain regions associated with response planning that are prone to be affected by age-related changes. Conversely, age differences did not account for the association between processing speed and fluid intelligence. Our results suggest that the relationship between age differences and fluid intelligence is multifactorially determined.
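The mediation logic in the first sentence (age affecting fluid intelligence via processing speed) can be illustrated with a toy indirect-effect computation. The simulated data and path coefficients below are hypothetical, not the study's estimates, and the study itself used structural equation models rather than this two-regression shortcut.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated illustration of the causal chain age -> speed -> gf.
n = 500
age = rng.normal(size=n)
speed = -0.6 * age + rng.normal(scale=0.8, size=n)   # older, slower
gf = 0.7 * speed + rng.normal(scale=0.7, size=n)     # slower, lower gf

# Path a: age -> speed (simple regression slope).
a = np.polyfit(age, speed, 1)[0]
# Path b: speed -> gf, controlling for age (multiple regression).
X = np.column_stack([np.ones(n), speed, age])
b = np.linalg.lstsq(X, gf, rcond=None)[0][1]

indirect = a * b  # indirect (mediated) effect of age on gf
print(indirect)   # negative: age lowers gf via slower processing
```

In a full mediation of this kind, the direct age coefficient in the second regression is near zero while the product a*b carries the whole age effect, which mirrors the pattern the abstract reports for non-decision times and ERP latencies.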
