
Table of Contents

J. Intell., Volume 8, Issue 4 (December 2020) – 7 articles

Open Access Review
We Can Boost IQ: Revisiting Kvashchev’s Experiment
J. Intell. 2020, 8(4), 41; https://doi.org/10.3390/jintelligence8040041 - 26 Nov 2020
Abstract
This paper examined the effects of training in creative problem-solving on intelligence. We revisited Stankov’s report on the outcomes of an experiment carried out by R. Kvashchev in the former Yugoslavia that reported an IQ increase of seven points, on average, across 28 tests of intelligence. We argue that previous analyses were based on a conservative analytic approach and failed to take into account the reductions in the IQ test variances at the end of the three years of training. When the standard deviations of the initial test and second retest were pooled in the calculation of the effect sizes, the experimental group’s performance was 10 IQ points higher, on average, than that of the control group. Further, with properly defined measures of fluid and crystallized intelligence, the experimental group showed an increase 15 IQ points greater than that of the control group. We concluded that prolonged intensive training in creative problem-solving can lead to substantial positive effects on intelligence during late adolescence (ages 18–19).
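The pooling step described in this abstract can be illustrated with a standardized mean difference (Cohen's d) computed over a pooled standard deviation and rescaled to IQ points (SD = 15). The numbers below are placeholders for illustration, not Kvashchev's data:

```python
from math import sqrt

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Standardized mean difference using a pooled standard deviation."""
    sd_pooled = sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
    return (mean_a - mean_b) / sd_pooled

# Placeholder numbers, NOT the study's data: an 8-point raw difference, with the
# second-retest SD (9) shrunken relative to the initial-test SD (15).
d = cohens_d(108.0, 100.0, sd_a=15.0, sd_b=9.0, n_a=100, n_b=100)
print(f"d = {d:.2f} -> {d * 15:.1f} points on an IQ scale (SD = 15)")
```

The point of the sketch: pooling a shrunken end-of-training SD with the larger initial SD lowers the denominator, so the same raw difference yields a larger standardized effect, which is the mechanism the abstract's reanalysis relies on.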
Open Access Article
Communicating Intelligence Research
J. Intell. 2020, 8(4), 40; https://doi.org/10.3390/jintelligence8040040 - 19 Nov 2020
Abstract
Despite intelligence research being among the most replicable bodies of empirical findings—a Rosetta stone across the social sciences—the communication of intelligence research to non-intelligence researchers and the public remains a challenge, especially given the public controversies that have run throughout the history of the field. Hunt argued that “we have a communication problem.” This article is a call for intelligence researchers to consider communication at multiple levels: communication with other intelligence researchers, communication with non-intelligence researchers, and communication with the public, defined here as policymakers, practitioners, students, and general readers. It discusses ongoing tensions between academic freedom and social responsibility, and provides suggestions for thinking about communication and the effective translation and implementation of intelligence research, drawing on frameworks from science and policy research communication. It concludes with recommendations for effective communication and stresses the importance of incentivizing more scholars to responsibly educate and engage multiple publics about the science of intelligence.
Open Access Editorial
Analysis of an Intelligence Dataset
J. Intell. 2020, 8(4), 39; https://doi.org/10.3390/jintelligence8040039 - 19 Nov 2020
Abstract
It is perhaps popular belief—at least among non-psychometricians—that there is a unique or standard way to investigate the psychometric qualities of tests [...]
(This article belongs to the Special Issue Analysis of an Intelligence Dataset)
Open Access Article
A Reappraisal of the Threshold Hypothesis of Creativity and Intelligence
J. Intell. 2020, 8(4), 38; https://doi.org/10.3390/jintelligence8040038 - 11 Nov 2020
Abstract
Intelligence has been declared a necessary but not sufficient condition for creativity, a claim that was subsequently (erroneously) translated into the so-called threshold hypothesis. This hypothesis predicts a change in the correlation between creativity and intelligence at around 1.33 standard deviations above the population mean. A closer inspection of previous inconclusive results suggests that their heterogeneity is mostly due to the use of suboptimal data-analytic procedures. Herein, we applied and compared three methods that allowed us to treat intelligence as a continuous variable. Specifically, we examined the threshold of the creativity-intelligence relation with (a) scatterplots and heteroscedasticity analysis, (b) segmented regression analysis, and (c) local structural equation models in two multivariate studies (N1 = 456; N2 = 438). We found no evidence for the threshold hypothesis of creativity across the different analytical procedures in either study. Given the problematic history of the threshold hypothesis and its unequivocal rejection with appropriate multivariate methods, we recommend abandoning the threshold altogether.
(This article belongs to the Special Issue Intelligence and Creativity)
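The breakpoint test can be sketched in miniature: fit separate slopes below and above the hypothesized threshold of 1.33 SD above the mean (IQ ≈ 120) and see whether they differ. The data below are simulated (no real study data), built deliberately with no threshold:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 456
iq = rng.normal(100, 15, n)
# simulate NO threshold: one linear creativity-intelligence relation throughout
creativity = 0.02 * (iq - 100) + rng.normal(scale=0.95, size=n)

threshold = 100 + 1.33 * 15   # ~120, where the hypothesis predicts a slope change

def slope(x, y):
    """Least-squares slope of y on x."""
    return np.polyfit(x, y, 1)[0]

below, above = iq < threshold, iq >= threshold
print("slope below threshold:", slope(iq[below], creativity[below]))
print("slope above threshold:", slope(iq[above], creativity[above]))
```

This is only the idea, not the article's method: a proper segmented regression constrains the two segments to meet at the breakpoint and formally tests the slope difference, and note that the above-threshold estimate is noisy because few observations fall in the upper tail.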
Open Access Article
The Influence of Situational Cues on Children’s Creativity in an Alternative Uses Task and the Moderating Effect of Selective Attention
J. Intell. 2020, 8(4), 37; https://doi.org/10.3390/jintelligence8040037 - 19 Oct 2020
Abstract
Taking a perception-action perspective, we investigated how the presence of different real objects in children’s immediate situation affected their creativity, and whether this effect was moderated by their selective attention. Seventy children aged 9 to 12 years participated. Verbal responses on a visual Alternative Uses Task with low-stimulus and high-stimulus conditions were coded for fluency, flexibility, and originality. Selective attention was measured with a visual search task. Results showed that fluency was not affected by stimulus condition and was unrelated to selective attention. Flexibility was positively associated with selective attention. Originality, net of fluency and flexibility, showed a main effect of stimulus condition in an unexpected direction: children gave more original responses in the low-stimulus condition than in the high-stimulus condition. A significant moderation effect revealed that children with better selective attention benefitted from a low-stimulus environment, whereas children with weaker selective attention performed better in a high-stimulus environment. The findings demonstrate differential effects of the immediate situation and selective attention, supporting the hypothesis that both shape creativity, albeit in unexpected ways.
(This article belongs to the Special Issue Intelligence and Creativity)
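A moderation effect of the kind reported here (condition × selective attention predicting originality) is commonly tested as an interaction term in a regression. A minimal sketch on simulated data; all numbers and variable names are hypothetical, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 70
condition = rng.integers(0, 2, n)        # 0 = low-stimulus, 1 = high-stimulus
attention = rng.normal(size=n)           # standardized selective-attention score
# build in a crossover: attention helps under low stimulus, hurts under high
originality = 0.3 * attention * (1 - 2 * condition) + rng.normal(scale=0.5, size=n)

# design matrix: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), condition, attention, condition * attention])
beta, *_ = np.linalg.lstsq(X, originality, rcond=None)
print("interaction coefficient:", beta[3])   # clearly non-zero -> moderation
```

A non-zero interaction coefficient is what "moderation" means operationally: the slope of attention on originality differs between the two stimulus conditions.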
Open Access Article
Effect Sizes, Power, and Biases in Intelligence Research: A Meta-Meta-Analysis
J. Intell. 2020, 8(4), 36; https://doi.org/10.3390/jintelligence8040036 - 02 Oct 2020
Abstract
In this meta-study, we analyzed 2442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson’s correlation of 0.26, and that the median sample size was 60. Across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). On average, across all meta-analyses (but not in every meta-analysis), we found evidence for small-study effects, potentially indicating publication bias and overestimated effects. We found no differences in small-study effects between study types, and no convincing evidence for the decline effect, US effect, or citation bias across meta-analyses. We concluded that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.
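Power figures like those quoted above can be approximated from first principles. A sketch (not the authors' code) of the power of a two-sided test of a Pearson correlation via the Fisher z transformation, evaluated at the reported median sample size of 60:

```python
from math import atanh, sqrt
from statistics import NormalDist

def power_pearson_r(r, n, alpha=0.05):
    """Approximate power to reject H0: rho = 0 (two-sided) when the true
    correlation is r and the sample size is n, via the Fisher z transform."""
    nd = NormalDist()
    z_effect = atanh(r) * sqrt(n - 3)
    z_crit = nd.inv_cdf(1 - alpha / 2)
    # mass of the sampling distribution falling in either rejection region
    return (1 - nd.cdf(z_crit - z_effect)) + nd.cdf(-z_crit - z_effect)

# Cohen's small / medium / large correlations at n = 60
for r in (0.10, 0.30, 0.50):
    print(f"r = {r:.2f}: power = {power_pearson_r(r, 60):.3f}")
```

At n = 60 a small effect (r = 0.10) yields power of roughly 0.12, in line with the 11.9% median reported; the medium and large figures in the abstract are medians across studies with varying n, so they need not match this single-n calculation exactly.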
Open Access Commentary
How to Compare Psychometric Factor and Network Models
J. Intell. 2020, 8(4), 35; https://doi.org/10.3390/jintelligence8040035 - 02 Oct 2020
Abstract
In memory of Dr. Dennis John McFarland, who passed away recently, our objective is to continue his efforts to compare psychometric networks and latent variable models statistically. We do so by providing a commentary on his latest work, which he encouraged us to write shortly before his death. We first discuss the statistical procedure McFarland used, which involved structural equation modeling (SEM) in standard SEM software. Next, we evaluate the penta-factor model of intelligence. We conclude that (1) standard SEM software is not suitable for comparing psychometric networks with latent variable models, and (2) the penta-factor model of intelligence is of limited value because it is not identified. We close with a reanalysis of the Wechsler Adult Intelligence Scale data McFarland discussed and illustrate how network and latent variable models can be compared using the recently developed R package psychonetrics. Of substantive theoretical interest, the results support a network interpretation of general intelligence. A novel empirical finding is that networks of intelligence replicate across standardization samples.
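One rough way to contrast the two model classes outside specialized SEM software is held-out likelihood. The sketch below is not the psychonetrics workflow and uses simulated rather than WAIS data; it compares a one-factor model against a regularized Gaussian graphical (network) model on the same test split:

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# toy data: six "subtest" scores driven by a single latent factor plus noise
g = rng.normal(size=(500, 1))
loadings = rng.uniform(0.5, 0.9, size=(1, 6))
X = g @ loadings + rng.normal(scale=0.6, size=(500, 6))

X_train, X_test = train_test_split(X, random_state=0)

factor = FactorAnalysis(n_components=1).fit(X_train)
network = GraphicalLassoCV().fit(X_train)

# average held-out log-likelihood: higher means better out-of-sample fit
print("one-factor model:", factor.score(X_test))
print("network model   :", network.score(X_test))
```

This only contrasts predictive fit; the identification and nested-model-comparison issues the commentary raises call for the likelihood-ratio machinery that dedicated packages such as psychonetrics implement.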