
J. Intell., Volume 3, Issue 1 (March 2015) – 3 articles, Pages 1–40

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
Article
More is not Always Better: The Relation between Item Response and Item Response Time in Raven’s Matrices
J. Intell. 2015, 3(1), 21-40; https://doi.org/10.3390/jintelligence3010021 - 12 Mar 2015
Cited by 32 | Viewed by 5739
Abstract
The role of response time in completing an item can have very different interpretations. Responding more slowly could be positively related to success as the item is answered more carefully. However, the association may be negative if working faster indicates higher ability. The objective of this study was to clarify the validity of each assumption for reasoning items considering the mode of processing. A total of 230 persons completed a computerized version of Raven’s Advanced Progressive Matrices test. Results revealed that response time overall had a negative effect. However, this effect was moderated by items and persons. For easy items and able persons the effect was strongly negative; for difficult items and less able persons it was less negative or even positive. The number of rules involved in a matrix problem proved to explain item difficulty significantly. Most importantly, a positive interaction effect between the number of rules and item response time indicated that the response time effect became less negative with an increasing number of rules. Moreover, exploratory analyses suggested that the error type influenced the response time effect.
Figure 1
Article
Are Fit Indices Biased in Favor of Bi-Factor Models in Cognitive Ability Research?: A Comparison of Fit in Correlated Factors, Higher-Order, and Bi-Factor Models via Monte Carlo Simulations
J. Intell. 2015, 3(1), 2-20; https://doi.org/10.3390/jintelligence3010002 - 03 Feb 2015
Cited by 100 | Viewed by 6897
Abstract
Bi-factor confirmatory factor models have been influential in research on cognitive abilities because they often better fit the data than correlated factors and higher-order models. They also instantiate a perspective that differs from that offered by other models. Motivated by previous work that hypothesized an inherent statistical bias of fit indices favoring the bi-factor model, we compared the fit of correlated factors, higher-order, and bi-factor models via Monte Carlo methods. When data were sampled from a true bi-factor structure, each of the approximate fit indices was more likely than not to identify the bi-factor solution as the best fitting. When samples were selected from a true multiple correlated factors structure, approximate fit indices were more likely overall to identify the correlated factors solution as the best fitting. In contrast, when samples were generated from a true higher-order structure, approximate fit indices tended to identify the bi-factor solution as best fitting. There was extensive overlap of fit values across the models regardless of true structure. Although one model may fit a given dataset best relative to the other models, each of the models tended to fit the data well in absolute terms. Given this variability, models must also be judged on substantive and conceptual grounds.
Figure 1
Editorial
Acknowledgement to Reviewers of the Journal of Intelligence in 2014
J. Intell. 2015, 3(1), 1; https://doi.org/10.3390/jintelligence3010001 - 09 Jan 2015
Cited by 1 | Viewed by 3938
Abstract
The editors of the Journal of Intelligence would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2014: [...]