Special Issue "The Great Debate: General Ability and Specific Abilities in the Prediction of Important Outcomes"

A special issue of Journal of Intelligence (ISSN 2079-3200).

Deadline for manuscript submissions: closed (12 February 2018)

Special Issue Editors

Guest Editor
Dr. Harrison J. Kell

Academic to Career Research Center, Research & Development, Educational Testing Service, Princeton, NJ 08541, USA
Interests: individual differences, intelligence, personality, I/O psychology, job performance
Guest Editor
Prof. Dr. Jonas Lang

Department of Personnel Management, Work, and Organizational Psychology, Ghent University, Ghent, Belgium
Interests: adaptability; cognitive abilities; personnel selection; the influence of motivation on performance

Special Issue Information

Dear Colleagues,

The structure of intelligence has been of interest to researchers and practitioners for over 100 years. Across this century of research, there has been a tension between emphasizing general and specific abilities. Beginning in the 1940s and extending to the present day, these tensions have largely been resolved by integrating specific and general abilities into hierarchical models. These hierarchical models may differ in their specifics, but they typically include a general factor that exhibits a pervasive association with item-level scores and narrower factors that are associated only with items representing specific content domains. Although researchers and practitioners diverge on how best to conceptualize the functional relations among the different levels of the hierarchy—and the best methods to study them—there is broad acceptance of hierarchical models as a viable means for studying the structure of differences in human intelligence.

Despite agreement about the hierarchical relations among general and specific abilities, there is substantial disagreement (Murphy, Cronin, and Tam, 2003; Reeve and Charles, 2008) about their respective usefulness for predicting important real-world outcomes (e.g., job performance, grades). This “great debate” about the relative practical usefulness of measures of specific and general abilities has recurred throughout the century of research on, and application of, cognitive ability measures (Kell & Lang, 2017; Lang, Kersting, Hülsheger, and Lang, 2010). The goal of this special issue of the Journal of Intelligence is to serve as a forum for continued research into and discussion of this important topic.

We are soliciting two types of contributions. The first is empirical: We will provide a covariance matrix and the raw data for three intelligence measures from a Thurstonian test battery and school grades in a sample of 219 German adolescents (derived from Lang and Lang, 2010). Contributors can analyze these data as they see fit, with the general purpose of answering three major questions:

  • Do the data include evidence for the usefulness of specific abilities?
  • How important are specific abilities relative to general abilities for predicting grades?
  • To what degree could (or should) researchers use different prediction models for each of the different outcome criteria?
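As an illustration only (the numbers below are hypothetical placeholders, not the dataset provided for the special issue), one way a contributor might approach the first two questions is to compare the variance in grades explained with and without additional specific-ability predictors, working directly from a covariance matrix:

```python
import numpy as np

def r_squared(cov, predictors, criterion):
    """R^2 of `criterion` regressed on `predictors`, computed from a
    covariance matrix via R^2 = s_xy' S_xx^{-1} s_xy / s_yy."""
    Sxx = cov[np.ix_(predictors, predictors)]
    sxy = cov[np.ix_(predictors, [criterion])]
    syy = cov[criterion, criterion]
    return float(sxy.T @ np.linalg.solve(Sxx, sxy) / syy)

# Hypothetical 4x4 covariance matrix: three ability tests (indices 0-2)
# and a school grade (index 3).
cov = np.array([
    [1.00, 0.50, 0.45, 0.40],
    [0.50, 1.00, 0.40, 0.35],
    [0.45, 0.40, 1.00, 0.30],
    [0.40, 0.35, 0.30, 1.00],
])

r2_one = r_squared(cov, [0], 3)        # a single test as predictor
r2_all = r_squared(cov, [0, 1, 2], 3)  # all three tests together
print(r2_all - r2one if False else r2_all - r2_one)  # incremental variance explained
```

Whether that increment reflects specific abilities or merely better measurement of the general factor is, of course, precisely what the debate is about.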

In asking contributors to analyze the same data according to their own theoretical and practical perspective(s), we hope to draw out theoretical and practical assumptions that might otherwise remain implicit—and in doing so stimulate discussion of more general questions relevant to this great debate, including:

  • To what degree are specific and general abilities useful for prediction?
  • To what degree are specific and general abilities useful for theoretically understanding cognitive abilities?
  • Does the relevance of specific abilities differ for different outcome criteria (e.g., job performance vs. academic achievements)?
  • What are the implications of the debate about the relative utility of specific and general abilities for designing multi-attribute intelligence batteries?
  • What are the implications of the relative utility of specific and general abilities for conceptualizing and measuring situational specificity – both in terms of the influence of a priori, hypothesized moderators (“weak” situational specificity) and unknown, atheoretical moderators (“strong” situational specificity)?

Realizing that researchers and practitioners differ in their orientations toward data and ideas, the second type of contribution we are soliciting is a qualitative one that treats the issue of general “versus” specific abilities for predicting real-world outcomes in an integrative or theoretical way. Although such contributions should address some of the bulleted topics listed above, the exact nature of the contribution can be determined by its author(s) and might include, for example, a critical evaluation of the relevant literature or an exploration of the implications of different models of cognitive abilities (e.g., bifactor vs. dynamical systems vs. higher-order) for emphasizing general or specific abilities in prediction.

Dr. Harrison J. Kell
Prof. Dr. Jonas Lang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Intelligence is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 350 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)

Open Access Editorial
The Great Debate: General Ability and Specific Abilities in the Prediction of Important Outcomes
Received: 15 May 2018 / Revised: 28 May 2018 / Accepted: 28 May 2018 / Published: 7 September 2018
Abstract
The relative value of specific versus general cognitive abilities for the prediction of practical outcomes has been debated since the inception of modern intelligence theorizing and testing. This editorial introduces a special issue dedicated to exploring this ongoing “great debate”. It provides an overview of the debate, explains the motivation for the special issue and two types of submissions solicited, and briefly illustrates how differing conceptualizations of cognitive abilities demand different analytic strategies for predicting criteria, and that these different strategies can yield conflicting findings about the real-world importance of general versus specific abilities. Full article

Open Access Article
Bifactor Models for Predicting Criteria by General and Specific Factors: Problems of Nonidentifiability and Alternative Solutions
Received: 21 March 2018 / Revised: 29 August 2018 / Accepted: 5 September 2018 / Published: 7 September 2018
Abstract
The bifactor model is a widely applied model to analyze general and specific abilities. Extensions of bifactor models additionally include criterion variables. In such extended bifactor models, the general and specific factors can be correlated with criterion variables. Moreover, the influence of general and specific factors on criterion variables can be scrutinized in latent multiple regression models that are built on bifactor measurement models. This study employs an extended bifactor model to predict mathematics and English grades by three facets of intelligence (number series, verbal analogies, and unfolding). We show that, if the observed variables do not differ in their loadings, extended bifactor models are not identified and not applicable. Moreover, we reveal that standard errors of regression weights in extended bifactor models can be very large and, thus, lead to invalid conclusions. A formal proof of the nonidentification is presented. Subsequently, we suggest alternative approaches for predicting criterion variables by general and specific factors. In particular, we illustrate how (1) composite ability factors can be defined in extended first-order factor models and (2) how bifactor(S-1) models can be applied. The differences between first-order factor models and bifactor(S-1) models for predicting criterion variables are discussed in detail and illustrated with the empirical example. Full article

Open Access Feature Paper Article
How Specific Abilities Might Throw ‘g’ a Curve: An Idea on How to Capitalize on the Predictive Validity of Specific Cognitive Abilities
Received: 12 March 2018 / Revised: 5 July 2018 / Accepted: 17 July 2018 / Published: 7 September 2018
Abstract
School grades are still used by universities and employers for selection purposes. Thus, identifying determinants of school grades is important. Broadly, two predictor categories can be differentiated from an individual difference perspective: cognitive abilities and personality traits. Over time, evidence accumulated supporting the notion of the g-factor as the best single predictor of school grades. Specific abilities were shown to add little incremental validity. The current paper aims at reviving research on which cognitive abilities predict performance. Based on ideas of criterion contamination and deficiency as well as Spearman’s ability differentiation hypothesis, two mechanisms are suggested which both would lead to curvilinear relations between specific abilities and grades. While the data set provided for this special issue does not allow testing these mechanisms directly, we tested the idea of curvilinear relations. In particular, polynomial regressions were used. Machine learning was applied to identify the best fitting models in each of the subjects math, German, and English. In particular, we fitted polynomial models with varying degrees and evaluated their accuracy with a leave-one-out validation approach. The results show that tests of specific abilities slightly outperform the g-factor when curvilinearity is assumed. Possible theoretical explanations are discussed. Full article
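The polynomial-regression-with-leave-one-out-validation approach this abstract describes can be sketched as follows (synthetic data and variable names are illustrative assumptions, not the authors' actual code or dataset):

```python
import numpy as np

def loo_mse(x, y, degree):
    """Leave-one-out mean squared error for a polynomial fit of `degree`."""
    errors = []
    for i in range(len(x)):
        mask = np.arange(len(x)) != i           # hold out observation i
        coefs = np.polyfit(x[mask], y[mask], degree)
        pred = np.polyval(coefs, x[i])          # predict the held-out case
        errors.append((y[i] - pred) ** 2)
    return float(np.mean(errors))

rng = np.random.default_rng(0)
ability = rng.normal(size=60)
# Synthetic grades with a mild curvilinear component plus noise.
grade = 0.5 * ability + 0.2 * ability**2 + rng.normal(scale=0.5, size=60)

# Evaluate polynomials of varying degree and keep the one with the
# lowest leave-one-out error, mirroring the paper's model-selection idea.
best = min(range(1, 4), key=lambda d: loo_mse(ability, grade, d))
print(best)
```

The same selection loop would be run separately for each school subject, with the winning degree indicating whether a curvilinear model outperforms a linear one.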

Open Access Article
Aligning Predictor-Criterion Bandwidths: Specific Abilities as Predictors of Specific Performance
Received: 13 March 2018 / Revised: 24 May 2018 / Accepted: 25 June 2018 / Published: 7 September 2018
Abstract
The purpose of the current study is to compare the extent to which general and specific abilities predict academic performances that are also varied in breadth (i.e., general performance and specific performance). The general and specific constructs were assumed to vary only in breadth, not order, and two data analytic approaches (i.e., structural equation modeling [SEM] and relative weights analysis) consistent with this theoretical assumption were compared. Conclusions regarding the relative importance of general and specific abilities differed based on data analytic approaches. The SEM approach identified general ability as the strongest and only significant predictor of general academic performance, with neither general nor specific abilities predicting any of the specific subject grade residuals. The relative weights analysis identified verbal reasoning as contributing more than general ability, or other specific abilities, to the explained variance in general academic performance. Verbal reasoning also contributed to most of the explained variance in each of the specific subject grades. These results do not provide support for the utility of predictor-criterion alignment, but they do provide evidence that both general and specific abilities can serve as useful predictors of performance. Full article

Open Access Feature Paper Commentary
Non-g Factors Predict Educational and Occupational Criteria: More than g
Received: 11 March 2018 / Revised: 28 May 2018 / Accepted: 8 June 2018 / Published: 7 September 2018
Abstract
In a prior issue of the Journal of Intelligence, I argued that the most important scientific issue in intelligence research was to identify specific abilities with validity beyond g (i.e., variance common to mental tests) (Coyle, T.R. Predictive validity of non-g residuals of tests: More than g. Journal of Intelligence 2014, 2, 21–25.). In this Special Issue, I review my research on specific abilities related to non-g factors. The non-g factors include specific math and verbal abilities based on standardized tests (SAT, ACT, PSAT, Armed Services Vocational Aptitude Battery). I focus on two non-g factors: (a) non-g residuals, obtained after removing g from tests, and (b) ability tilt, defined as within-subject differences between math and verbal scores, yielding math tilt (math > verbal) and verbal tilt (verbal > math). In general, math residuals and tilt positively predict STEM criteria (college majors, jobs, GPAs) and negatively predict humanities criteria, whereas verbal residuals and tilt show the opposite pattern. The paper concludes with suggestions for future research, with a focus on theories of non-g factors (e.g., investment theories, Spearman’s Law of Diminishing Returns, Cognitive Differentiation-Integration Effort Model) and a magnification model of non-g factors. Full article
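Ability tilt, as defined in this abstract, is simply a within-person difference score. A minimal sketch (the scores below are hypothetical, not the author's data):

```python
def ability_tilt(math_score, verbal_score):
    """Within-subject tilt: positive = math tilt, negative = verbal tilt."""
    return math_score - verbal_score

def tilt_label(math_score, verbal_score):
    """Classify a test taker's profile from the sign of the tilt score."""
    tilt = ability_tilt(math_score, verbal_score)
    if tilt > 0:
        return "math tilt"
    if tilt < 0:
        return "verbal tilt"
    return "no tilt"

# Hypothetical standardized scores for two test takers.
print(tilt_label(700, 620))  # math > verbal
print(tilt_label(580, 640))  # verbal > math
```

In the research summarized above, such tilt scores (and non-g residuals) are then correlated with STEM versus humanities criteria.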

Open Access Feature Paper Article
A Tempest in A Ladle: The Debate about the Roles of General and Specific Abilities in Predicting Important Outcomes
Received: 28 February 2018 / Revised: 28 March 2018 / Accepted: 12 April 2018 / Published: 19 April 2018
Abstract
The debate about the roles of general and specific abilities in predicting important outcomes is a tempest in a ladle because we cannot measure abilities without also measuring skills. Skills always develop through exposure, are specific rather than general, and are executed using different strategies by different people, thus tapping into varied specific abilities. Relative predictive validities of measurement formats depend on the purpose: the more general and long-term the purpose, the better the more general measure. The more specific and immediate the purpose, the better the closely related specific measure. Full article

Open Access Commentary
Commenting on the “Great Debate”: General Abilities, Specific Abilities, and the Tools of the Trade
Received: 23 January 2019 / Accepted: 11 February 2019 / Published: 18 February 2019
Abstract
We review papers in the special issue regarding the great debate on general and specific abilities. Papers in the special issue either provided an empirical examination of the debate using a uniform dataset or they provided a debate commentary. Themes that run through the papers and that are discussed further here are that: (1) the importance of general and specific ability predictors will largely depend on the outcome to be predicted, (2) the effectiveness of both general and specific predictors will largely depend on the quality and breadth of how the manifest indicators are measured, and (3) research on general and specific ability predictors is alive and well and more research is warranted. We conclude by providing a review of potentially fruitful areas of future research. Full article
J. Intell. EISSN 2079-3200, published by MDPI AG, Basel, Switzerland.