Special Issue "The Great Debate: General Ability and Specific Abilities in the Prediction of Important Outcomes"
A special issue of Journal of Intelligence (ISSN 2079-3200).
Deadline for manuscript submissions: closed (12 February 2018)
A printed edition of this Special Issue is available here.
Interests: individual differences; intelligence; personality; I/O psychology; job performance
The structure of intelligence has been of interest to researchers and practitioners for over 100 years. Across this century of research, there has been a tension between emphasizing general and specific abilities. Beginning in the 1940s and extending to the present day, these tensions have largely been resolved by integrating specific and general abilities into hierarchical models. These hierarchical models may differ in their specifics, but they typically include a general factor that exhibits a pervasive association with item-level scores and narrower factors that are associated only with items representing specific content domains. Although researchers and practitioners diverge in how best to conceptualize the functional relations among the different levels of the hierarchy—and the best methods to study them—there is broad acceptance of hierarchical models as a viable means for studying the structure of differences in human intelligence.
Despite agreement about the hierarchical relations among general and specific abilities, there is substantial disagreement (Murphy, Cronin, and Tam, 2003; Reeve and Charles, 2008) about their respective usefulness for predicting important real-world outcomes (e.g., job performance, grades). This “great debate” about the relative practical usefulness of measures of specific and general abilities has recurred throughout the century of research on, and application of, cognitive ability measures (Kell & Lang, 2017; Lang, Kersting, Hülsheger, and Lang, 2010). The goal of this special issue of the Journal of Intelligence is to serve as a forum for continued research into and discussion of this important topic.
We are soliciting two types of contributions. The first is empirical: We will provide a covariance matrix and the raw data for three intelligence measures from a Thurstonian test battery and school grades in a sample of 219 German adolescents (derived from Lang and Lang, 2010). Contributors can analyze these data as they see fit, with the general purpose of answering three major questions:
- Do the data include evidence for the usefulness of specific abilities?
- How important are specific abilities relative to general abilities for predicting grades?
- To what degree could (or should) researchers use different prediction models for each of the different outcome criteria?
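As one illustration of the kind of analysis contributors might run, the sketch below compares the variance in grades explained by a general factor alone against that explained by all three ability measures together, via hierarchical regression. This is only a minimal example under stated assumptions: the data here are simulated stand-ins (the actual Lang and Lang, 2010, dataset is distributed with the special issue), the general factor is approximated by the first principal component, and contributors are of course free to use entirely different models (e.g., bifactor or relative-importance analyses).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 219  # sample size matching the special-issue dataset

# Simulated stand-in scores: three ability measures sharing a general
# component g, plus a grade criterion influenced by g and one specific ability.
g = rng.normal(size=n)
abilities = np.column_stack([g + rng.normal(size=n) for _ in range(3)])
grades = 0.5 * g + 0.3 * abilities[:, 0] + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Step 1: general factor alone, approximated by the first principal
# component of the standardized ability measures.
z = (abilities - abilities.mean(0)) / abilities.std(0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
g_hat = z @ vt[0]
r2_general = r_squared(g_hat.reshape(-1, 1), grades)

# Step 2: all three ability measures entered together.
r2_full = r_squared(abilities, grades)

# The increment indicates what the specific measures add beyond g.
print(f"R^2 (general factor only): {r2_general:.3f}")
print(f"R^2 (all measures):        {r2_full:.3f}")
print(f"Incremental R^2:           {r2_full - r2_general:.3f}")
```

Because the principal-component score is itself a linear combination of the standardized measures, the full model can never explain less variance than the general factor alone; the substantive question for contributors is whether the increment is large enough, and stable enough, to matter in practice.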
In asking contributors to analyze the same data according to their own theoretical and practical perspective(s), we hope to draw out theoretical and practical assumptions that might otherwise remain implicit—and in doing so stimulate discussion of more general questions relevant to this great debate, including:
- To what degree are specific and general abilities useful for prediction?
- To what degree are specific and general abilities useful for theoretically understanding cognitive abilities?
- Does the relevance of specific abilities differ for different outcome criteria (e.g., job performance vs. academic achievements)?
- What are the implications of the debate about the relative utility of specific and general abilities for designing multi-attribute intelligence batteries?
- What are the implications of the relative utility of specific and general abilities for conceptualizing and measuring situational specificity – both in terms of the influence of a priori, hypothesized moderators (“weak” situational specificity) and unknown, atheoretical moderators (“strong” situational specificity)?
Realizing that researchers and practitioners differ in their orientations toward data and ideas, the second type of contribution we are soliciting is a qualitative one that treats the issue of general “versus” specific abilities for predicting real-world outcomes in an integrative or theoretical way. Although such contributions should address some of the bulleted topics listed above, the exact nature of the contribution can be determined by its author(s) and might include, for example, a critical evaluation of the relevant literature or an exploration of the implications of different models of cognitive abilities (e.g., bifactor vs. dynamical systems vs. higher-order) for emphasizing general or specific abilities in prediction.
Dr. Harrison J. Kell
Prof. Dr. Jonas Lang
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Intelligence is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.