3. The Biggest Problem: Defining the Field
Many of our readers will be familiar with Simons and Chabris’ [5] graphic illustration of the power of selective attention. They showed observers a film of players passing a basketball among themselves, and asked them to count the number of passes. While the observers were counting, a person in a gorilla suit walked between the players, gave a classic gorilla display of chest beating, and then walked off. Amazingly, a substantial fraction of the observers were so busy keeping their eye on the ball that they did not notice the gorilla!
The phenomenon of the unnoticed gorilla has an analog in the study of intelligence. Intelligence researchers are all very busy developing psychometric models, gazing at brain images, documenting the role of testing in personnel selection, and mulling over what the latest findings in molecular biology mean for the concept of intelligence. The gorilla, to which the essays of Conway, De Boeck, Gray, and Kaufman in the current issue refer, is the definition of intelligence itself [4]. Here is why we believe this is important.
Boring [6] famously identified intelligence as “what the intelligence tests test”. Philosophically, no modern intelligence researcher would accept such a narrow definition. As a practical matter, though, many intelligence researchers do tacitly accept Boring’s definition. The measurement of intelligence is almost universally restricted to behaviors that can be measured within the confines of a one- to three-hour testing session, conducted outside the context of the examinee’s everyday life. This is an extremely limiting definition, for it rules out the examination of a person’s ability to deal with complex problems from multiple perspectives, to plan extended courses of action, and to store and retrieve information over long periods of time. We offer some illustrations to show how limiting this definition is.
Henry Molaison, the famous HM, was perhaps the most extensively studied person in the history of psychology. His ability to store new information in declarative memory was virtually destroyed following an operation that removed his hippocampi. He spent the rest of his life in an assisted living situation, where the study of what he could and could not remember contributed greatly to the scientific knowledge of human memory. In spite of the fact that he had to live a completely supported existence, he had a score of 112 on the Wechsler Adult Intelligence Scale [7]. This led to statements in the literature asserting that although his memory had been destroyed, his intelligence was unimpaired. Such a conclusion may fit Boring’s definition of intelligence, but it defies a more general, and we think more productive, view that intelligence refers to individual differences in cognitive ability. Following his injury, HM’s intelligence, in any reasonable conceptual sense, had dropped tremendously. If his IQ score did not show this, that was because the IQ score was not synonymous with intelligence.
HM was an example of a person who had much less intelligence, in the conceptual sense, than his IQ score indicated. Things can go the other way, too. Kaufman [8] has discussed numerous cases in which people’s performance in society exceeds what would be predicted by their test scores. Neither HM’s case nor many of Kaufman’s examples can be considered cases of the inevitable prediction errors that result from the use of imperfect indicators. They are cases in which the tests failed to evaluate traits that are undeniably cognitive, that are important in human society, and on which there is a wide range of individual differences.
Another illustration of the limitations of Boring’s definition comes from cognitive psychology. Much of human society is based on the assumption that humans are rational beings; that they can analyze evidence, evaluate arguments, and consider the consequences of choices between possible actions. In fact, decades of behavioral studies of decision making have shown that while rational thought does occur, it is not as frequent as might be hoped. Evidently, behavior is greatly influenced by “intuitive” reasoning that is largely based on statistical associations and emotional connections. Kahneman [9] refers to this as a conflict between a rapid, statistical, and emotional first system of thought and a slower, more rational second system. Haidt [10] offers the engaging analogy of a rider and an elephant. The rider is Kahneman’s rational second system. The elephant, guided by past associations and emotions, is the first system. The rider calculates the course the two should follow, but while the rider is doing the calculations, the elephant may see a mouse, or perhaps a familiar trail, and go the other way.
A theory of intelligence ought to be a theory that identifies the properties of good and bad riders: those who comprehend a situation and those who let the elephant go its own way. In addition, tests of rationality that evaluate the ability to resist many of the mental biases that influence Kahneman’s first system (and Haidt’s elephant) can be constructed. However, the evaluation of such reasoning is virtually ignored by the community of researchers on intelligence [11]. This is a mistake.
We are not calling for a new theory of intelligence. We are calling for a new, clearer delineation of the data that such a theory should explain. The importance of such definitions has long been recognized. Plato, in the Phaedrus dialogue, asserted that good definitions “carve nature at its joints”. There are two excellent historical examples of how important good definitions are to science: the Babylonian realization that atmospheric and astronomical phenomena represent separate systems, rather than a single “sky”, and Harvey’s analysis of the heart-lung-circulatory system, separate from the digestive and neural systems. In both cases the definition of the system did not constitute a theory of how it worked; such theories followed later.
The analysis of the circulatory system offers a further lesson. The circulatory system is separate from the digestive and neural systems, but it interacts with them. In order to understand how a human behaves we must understand both the internal functioning of each system and the interactions between them. In the current case, we argue that a new definition of intelligence should include those traits that (a) are cognitive; (b) exhibit substantial individual differences, for those are the data to be explained; and (c) are defined independently of the practical constraint that a good test must fit into the administrative restrictions on a testing session. The intelligence system, so defined, will interact with the emotional system and the social system to produce behavior.
It is not surprising that defining the subject matter of intelligence research has been difficult, for in everyday discourse the word intelligence is used in various ways. The Oxford English Dictionary lists eight different definitions of the word, and several of those have sub-definitions. Individual differences are referred to in only one of the definitions. The others deal with intelligence as either a body of knowledge or as a demonstration of cognitive competence by an individual. The definition as a body of knowledge is irrelevant to our concerns here. The definition as a display of competence has been a source of confusion [12].
Psychometric analyses of intelligence deal with covariances between measurements, and therefore implicitly define intelligence in terms of individual differences in cognitive competence. We believe that this defines the field, but that the set of measurements studied is itself too restrictive, largely because these measurements have to fit into a testing session of three hours or less. The reason for the restriction is understandable when we consider how tests are used in applied settings, especially for personnel selection. However, the restriction on the testing paradigm is an administrative one rather than a conceptual one. Demanding that measurements satisfy this restriction results in an incomplete description of individual differences in intelligence. To continue the analogy to Harvey’s medical discoveries, focusing on test scores alone is analogous to defining cardiology as the science of explaining variations in readings of blood pressure.
The limited nature of this approach to intelligence has long been recognized. Indeed, the journalist Walter Lippmann’s [13,14] objections to intelligence testing, voiced almost one hundred years ago, are remarkably similar to some contemporary objections to the modern use of tests. However, contemporary attempts to expand the definition seem to us to be deficient, because they either do not define intelligence in terms of individual differences, or because they bundle together traits that, though important, are not by any stretch of the imagination cognitive traits. We review briefly two widely publicized redefinitions to explain why we reach these conclusions.
The blurring of the distinction between intelligence as personal behavior or as comparative behavior is particularly clear in the work of Robert Sternberg and, more recently, Scott Barry Kaufman. Kaufman says:
“Intelligence is the dynamic interplay of engagement and abilities in the pursuit of personal goals. Note that the focus of analysis is on the person. All that exists for that individual is a series of intelligent behaviors that unfold across his or her life. At no point is there a comparison between that person’s behavior and the behavior of others.” (emphasis in the original) [15].
Sternberg similarly states:
“(Successful) intelligence is (1) the ability to achieve one’s goals, given one’s sociocultural context, (2) by capitalizing on strengths and correcting or compensating for weaknesses, (3) in order to adapt to, shape or select environments (4) through a combination of analytic, creative, and practical abilities.” [16].
These definitions, and the associated writing from which the quotes have been taken, usefully stress an important point. Cognitive performance depends upon traits that virtually anyone would include in the definition of cognition, such as the ability to hold information in working memory, and upon traits that are less obviously cognitive but that are essential for cognitive accomplishment, such as persistence in working toward one’s selected goals.
Several writers have gone even further by stressing that societies provide a variety of artifacts that amplify human thinking. The extent to which a person is considered intelligent in a particular society depends substantially upon an ability to manipulate these artifacts [17,18,19,20]. The artifacts, in turn, change the value of different individual skills as determinants of performance. Possibly one of the best examples is memory. In Homeric times, the ability to memorize long epics was highly prized. Today we read poetry. In former days, and to some extent today in European universities, undergraduate grades were determined by oral examinations, thus stressing the importance of being able to recall the literature from memory. Today, term papers are prepared with the aid of search engines. In both cases, performance has been advanced by artifacts, and the psychological traits required to achieve high-level performance have changed.
The Sternberg and Kaufman approaches, augmented by the importance of technology as emphasized by Hunt, Martinez, and Reich, focus on intelligence as cognitive performance. Performance is achieved by the interaction of three different systems: a system of cognitive abilities, a system of motivational and personality traits, and a system of cognitive artifacts. In order to understand what a person will do in a situation one has to understand the states of all three systems and the interactions between them. However, in order to achieve this understanding it is also necessary to understand each of the systems separately. Consider another medical analogy. An athlete’s performance depends upon the interaction between the muscular, cardiac, and skeletal systems. From the point of view of a physician, though, cardiology and orthopedics are separate specialties. The scientific study of intelligence is analogous to cardiology or orthopedics. However, the role of an “intelligence therapist”, perhaps a clinical or school psychologist trying to help a client think and perform better, is analogous to that of a general practitioner who tries to understand the individual as a whole person, in a particular social context.
Research on intelligence is an attempt to understand individual differences in the cognitive system, not individual performance in particular social situations. This emphasis does not in any way downplay the importance of studying how individuals adjust to their environmental niches. In fact, in clinical and some educational settings a focus on the interaction between individual capabilities and environmental affordances is probably more useful than a focus on identifying where an individual stands in the psychometric space defined by variations in competence across a population. The opposite is true for personnel selection. As the OED points out, some of the common-language definitions of intelligence stress individual differences, while others (such as those adopted by Sternberg and Kaufman) address individual performance.
Neither definition of intelligence is superior to the other. The problem is that we are using the same word to address different scientific issues. Of course, one could solve this problem by simply redefining the word intelligence to conform to Sternberg and Kaufman’s use, and then finding some other word to describe the study of individual differences in cognitive power. However, word definitions are changed by usage, rather than by dictate, so we do not think that an elegant linguistic solution is likely. The term intelligence, as used in this journal and other similar ones, embodies individual differences in its definition. While we may regret the fact that someone else has chosen to use the same term for a different, equally important issue, there is very little that the readers of this journal can do about it. All we can do is be clear, in our own discussions, that we are talking about individual differences in cognition.
The requirement that intelligence should refer to cognitive traits is an important one. Of course, many non-cognitive traits are important in determining individual performance, including conscientiousness and emotional control. However, as we have argued, these traits constitute systems that are separate from the system that supports information processing and produces rational problem solving. One of the most widely publicized redefinitions of intelligence, Gardner’s [21] “theory of multiple intelligences”, violates this restriction. Gardner includes in his definition of “intelligence” such varied traits as reasoning ability, musical ability, and agility of movement. In reviewing his own work Gardner has said:
“If I had written a book about Seven Human Gifts or The Seven Faculties of the Human Mind my guess is that it would not have attracted much attention. It is sobering to think that labeling can have much influence in the scholarly world, but I have little doubt that my decision to write about ‘human intelligence’ was a fateful one. Instead of producing a theory (and a book) that simply catalogued things that people could excel in, I was proposing an expansion of the term intelligence so that it would encompass many capacities that had been considered outside its scope.” [22].
This quote hardly describes an attempt to honor Plato’s advice about carving nature. On the other hand, Gardner’s strategy points toward a laudable social aim. By using the term intelligence in the way that he did, Gardner called educators’ attention to the fact that there are many traits outside of the cognitive realm that are important in a person’s adjustment to society, and therefore these traits ought to be developed by the educational system. We applaud Gardner’s attempt to broaden the scope of education, but at the same time argue that the profligate combination of desirable traits into the term intelligence will not advance the scientific study of individual differences in cognition. (See [23] for an expansion of these points.)
In conclusion, a new definition of intelligence ought to expand the concept beyond the traits traditionally included on tests of cognition, but retain the restriction that the traits involved are cognitive ones. A particularly important extension will be to expand the definition to consider those cognitive traits that can only be evaluated by observations over time, such as the ability to conduct reflective thought or to consider a problem from multiple perspectives. Obtaining such measurements is virtually impossible within the confines of a single testing session covering at most three hours. However, this is an administrative rather than an intellectual constraint. New technologies have opened up new possibilities for testing paradigms. By using the world wide web (WWW), testing could take place over days, making it possible to observe people as they solve problems far more complicated than those posed on conventional intelligence tests. The “electronic footprints” that people leave as they go about their daily business, such as credit and health records, could be analyzed to reveal intellectual competence on either a comparative or absolute basis. To be sure, accessing such sources of data raises practical, ethical, and legal issues. But the data are there. Arranging for their use requires the solution of social, but not scientific, problems.
Having, we hope, revealed the intellectual gorilla in our midst, we next turn to issues that deal with topics in the social or biological sciences, and that take a view of intelligence as a fixed or malleable trait.
4. The Criterion Problem: A Social Science Issue Within a Static View of Intelligence
The first problem, and opportunity, that we will deal with is the criterion problem: the problem of defining those situations in which intelligence is partially causal to behavior. We join with Roberts [4] in believing that this problem is extremely important. It also has an interesting similarity to the definitional problem just discussed.
Historically, most studies of intelligence as a causal factor have identified some variable that is both important in itself and fairly easy to measure. Regression analyses of various sorts are then used to determine the relation between intelligence and the chosen criterion measure. This sort of research is epitomized by studies in which a cognitive test score is used to predict grade point average (GPA) or, in industry, supervisor’s ratings of job performance.
In such studies, the criterion variable is too often chosen because it is an easily measured variable that stands in for the performance characteristics that the investigator is really interested in. GPA is a good example. What investigators are interested in is academic performance. GPA, calculated across classes and topics, is at best an imperfect indicator. A GPA of 3.0 means one thing in physics or mathematics, and quite another in English or (dare we say it?) Psychology. Of course, there are some cases in which the easily identified measure is the criterion of interest, rather than a stand-in for a host of unmeasured variables. This is particularly likely to be the case for applied studies. For instance, the statistic “rate of graduation x years after matriculation” is important in itself, for it has economic consequences for both the institution and the individual. A similar case can be made for variables such as “length of time employed” and “grade reached after x years of employment” in industry. We expect such indicators to continue to be used in applied situations, but to be of less and less use in research on intelligence itself.
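The regression design described above can be sketched in a few lines. This is a hypothetical illustration with simulated data; the slope, noise level, and variable names are our own assumptions, not results from any real study:

```python
import numpy as np

# Simulated (hypothetical) data: an IQ-like predictor and a noisy
# GPA-like criterion. Neither is taken from a real data set.
rng = np.random.default_rng(0)
n = 500
test_score = rng.normal(100, 15, n)                      # predictor
gpa = 1.0 + 0.02 * test_score + rng.normal(0, 0.4, n)    # criterion

# Ordinary least squares: regress the criterion on the test score
X = np.column_stack([np.ones(n), test_score])            # add intercept column
beta, *_ = np.linalg.lstsq(X, gpa, rcond=None)
predicted = X @ beta

# The correlation between predicted and observed criterion scores
# is the validity coefficient of the test for this criterion
r = np.corrcoef(predicted, gpa)[0, 1]
print(f"validity coefficient r = {r:.2f}")
```

With these assumed parameters the validity coefficient comes out near the moderate values typical of test-criterion studies; the point is the design, not the particular number.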
Future studies could exploit electronic record keeping to specify how constellations of data on “intelligent behavior” can be related to similar constellations of data on academic and/or social competence. A good illustration of this is Deary et al.’s [24] study, in which general intelligence (g) at age 11 was used to predict educational achievement on a national examination given at age 16. This study made use of the British educational system’s practice of presenting country-wide common examinations, along with electronic recording of results. The g measure was a latent factor extracted from a wide-ranging intelligence test battery. The educational achievement measure was also a latent factor, extracted from tests covering 25 different subjects. The correlation between these two factors was 0.81. However, the design of the study made it possible to examine differences in predictive accuracy across academic subjects, which varied from r = 0.77 to r = 0.43. The academic field is a particularly easy one in which to conduct studies such as this because bundles of skills are reasonably closely associated with academic areas.
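The latent-factor logic behind such studies can be sketched as follows. This is a simulation under assumed factor loadings, not Deary et al.’s actual analysis (which used structural equation modeling); here a first principal component stands in for the latent g factor:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
g = rng.normal(size=n)                        # latent general ability

# Six hypothetical tests, each loading on g plus unique variance
loadings = np.array([0.8, 0.7, 0.75, 0.6, 0.65, 0.7])
tests = g[:, None] * loadings + rng.normal(size=(n, 6)) * np.sqrt(1 - loadings**2)

# Use the first principal component of the battery as a g estimate
corr = np.corrcoef(tests, rowvar=False)
vals, vecs = np.linalg.eigh(corr)             # eigenvalues in ascending order
g_hat = tests @ vecs[:, -1]                   # scores on the largest component

# An achievement composite driven by the same latent ability
achievement = 0.81 * g + np.sqrt(1 - 0.81**2) * rng.normal(size=n)
r = abs(np.corrcoef(g_hat, achievement)[0, 1])
print(f"correlation between extracted g and achievement: {r:.2f}")
```

Because the extracted component is an imperfect measure of the latent ability, the observed correlation falls somewhat below the assumed latent correlation of 0.81, which is why latent-variable methods matter in studies of this kind.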
The problem is more challenging, but the payoff may be greater, in studies of the workforce. The field of social commentary is full of predictions about what cognitive skills will be required in the 21st century. If these speculations are to be translated into scientific data, two things have to happen. First, the speculations of social commentators must be replaced by objective analyses of the cognitive skills actually required in the workplace. This might produce exciting new results, or it might reaffirm our present ideas of the skills required by the workforce. For instance, a factor analysis of the US Department of Labor’s ratings of the skills required for over 800 occupations produced a large first factor that looked very much like g [25]. We do not wish to argue here for the primacy of any particular measurements of the skills required by tomorrow’s workforce. Our point is that this is a question that can be answered by objective analysis of workplace requirements.
The second thing that has to happen is that ways have to be found to measure the extent to which the necessary skills are distributed in the workforce. Here the problems and potentials are essentially those already discussed in the section on the need for a new definition of intelligence. Studies similar to those suggested here have already been conducted within a military context, where centralized record keeping is far more detailed than it is in the civilian workforce [26]. As was pointed out in the discussion of the definition of intelligence, the advent of large-scale electronic recording (“Big Data”) makes it technically possible to record the everyday use of (or failure to use) intelligence. Social and ethical issues concerning both informed consent and confidentiality will have to be resolved before the research can be conducted. The issues are complex, but there is no reason to believe that they are unsolvable.
6. Improving and Maintaining Intelligence: A Social Science Issue Within a Dynamic View of Intelligence
One of the most important tasks for cognitive researchers in the early 21st century will be developing methods to improve and then maintain people’s cognitive skills, i.e., their intelligence. To explain why we think this is important, and why we believe that there are considerable reasons to be optimistic about prospects for progress, it is first necessary to clear up an issue that we believe has frequently been confused in the literature.
Over the years there have been a variety of announcements to the effect that people can, too, improve their intelligence. The announcement is usually made with the implied assertion that someone (no reference given) has said that intelligence is fixed. In evaluating both claims of improvement and the implied assertions it is necessary to distinguish between the two definitions of intelligence described earlier, intelligence as effective cognitive performance and intelligence as one’s relative standing on a set of cognitive traits.
Intelligence as performance is clearly malleable. The simplest demonstrations of this are age effects, which are acknowledged by the use of age-appropriate norms on many intelligence tests. Adults are, in general, better problem solvers than children. This is due both to the possession of added knowledge and a better grasp on the “mechanics of thought”, such as the ability to control attention. It is also true that cognitive abilities decrease as people enter old age. Indeed, as longevity increases the frequency of severe loss of intelligence associated with old age poses a major problem in public health.
Age effects alone are sufficient to show that intelligence, in the sense of cognitive performance, is malleable within an individual. The same thing is true for societies. Over the 20th century, there were substantial increments, on a population basis, in the average levels of the cognitive skills assessed by most intelligence tests. This effect has been documented by an analysis of norming data from intelligence tests (thirty years of work, summarized in Flynn [3]) and, within the gerontology literature, by over 50 years of research using cross-sectional experimental designs to study aging [34]. However, if we look at intelligence in the sense of relative standing, which is implied by the use of norm-referenced IQ scores, intelligence is surprisingly stable over long periods of time. Deary [35] cites test-retest reliabilities of .5 and higher over more than a fifty-year period, from age 11 until people are in their 70s and 80s. As Deary points out, such correlations show that there is substantial stability within a population cohort, but that there is also a large amount of intra-individual variability. The glass is half full and half empty.
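Deary’s statistical point, substantial cohort stability together with large individual change, can be illustrated with a toy simulation. The correlation of .5 and the IQ metric below are assumptions for illustration, not his data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
r_true = 0.5                                  # assumed test-retest correlation

# IQ-like scores at age 11, and retest scores constructed so that the
# population correlation is approximately r_true
iq_11 = rng.normal(100, 15, n)
iq_77 = 100 + r_true * (iq_11 - 100) + np.sqrt(1 - r_true**2) * rng.normal(0, 15, n)

r = np.corrcoef(iq_11, iq_77)[0, 1]
# Fraction of simulated individuals whose score shifts by 10+ points
moved = np.mean(np.abs(iq_77 - iq_11) > 10)
print(f"cohort stability r = {r:.2f}; large individual shifts: {moved:.0%}")
```

Even with a stability coefficient near .5, roughly half of the simulated individuals move by ten or more points between occasions: the glass half full, half empty.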
To summarize, when we talk about “improving intelligence” we are not talking about whether or not cognitive abilities are malleable, because we know that they are. The issue is what sorts of interventions increase intelligence over and above the gains we would expect from the normal processes of aging, the better provision of artifacts to support cognition over one’s lifetime, and the accumulation of experience. There is an informative analogy between procedures intended to improve physical prowess and those intended to produce cognitive prowess.
Both are sought after avidly. Some segments of contemporary societies seem to be on a near-frantic quest for flat abs and sharp minds. The questions we should ask about both of these quests are “what classes of interventions are considered?”, “how general are the results obtained?”, “for whom do those interventions work best?”, and “how long do these results last?” [36,37].
Three ways of improving intelligence have been considered: direct physical interventions such as the use of drugs or nutritional supplements, a variety of exercises intended to improve the general information processing functions that underlie intelligence, and education. Positive results have been obtained within each of these classes of intervention [23].
Although physical agents clearly have their effects through their action on the brain, we will consider their use in our “social” section because many, if not most, of the agents used have been selected by historical circumstances (e.g., the use of many pharmaceuticals and nutritional supplements in naturopathic medicine) or by a rather loose connection to physiological mechanisms.
Long-term nutritional deficiencies can clearly have adverse effects on cognition, especially in children. This is a major problem in the developing world, most acutely in those countries where normal food supplies have been disrupted by war or severe climatic conditions. Many of the effects on cognition appear to be associated with a sheer reduction in caloric intake, although protein deficits and the burden of parasitic and infectious diseases have also been cited as causes of intellectual deficits. Within the industrial nations, nutritional effects on cognitive development appear to be much more associated with the choice of diet than with the sheer availability of appropriate nutrients. A great deal of research has been conducted on the beneficial effects of nutritional supplements and selected pharmaceutical agents upon cognition. The substances investigated range literally from fish oil to Ritalin (see [38] for an example study and an initial set of references, as well as [39] and [40] for a discussion of the issues).
It would be impossible and inappropriate to try to review this vast literature here. We do point out that this is a difficult field to study, in no small part because of the conflicts between advocates who believe passionately that the benefits of many of the nutritional and pharmaceutical agents have already been “proven” and a more conservative view that treats with extreme skepticism any claim that an agent increases human intelligence (or any other trait) without evidence for a mechanism producing the effect. In addition, there are normal scientific issues concerning the reliability of findings, the identification of appropriate target populations, and the distinction between short-term (acute) and long-term (chronic) effects. It is not inconceivable that an agent could have beneficial short-term effects, e.g., on attention or memory consolidation, and deleterious long-term effects through interference with neurotransmitter reception mechanisms. There are also substantial social issues involving differential access to cognitive enhancers or destroyers (e.g., alcohol, cocaine) by different segments of the population. This topic is far beyond the scope of the present article.
Nevertheless, optimism for this line of research can be justified on two grounds: breakthroughs come when new sources of data open up, and when new ideas are introduced from outside the field. The expressed motivation for exploring virtually all of the nutritional and pharmaceutical cognitive enhancers is that these substances improve attention and memory, and that they do so through their direct effect upon neurotransmitters or by capitalizing on neural plasticity. Both of these topics are receiving intense research scrutiny in biomedical research outside of the intelligence field. For instance, considerable research, largely using animal models, has shown that the enhancement of memory consolidation is possible, and has identified some of the neural structures and biochemical pathways involved [41]. It follows that as more and more becomes known about neurotransmission and neural plasticity, researchers interested in the enhancement of cognitive function will be better able to design effective interventions.
Just as there is a close analogy between the use of performance-enhancing drugs to increase both physical and mental capacity, there is a close analogy between the use of exercise to increase athletic prowess and the use of mental drills to enhance cognitive performance. This research has been driven by two somewhat different motivations. One is based upon the considerable findings showing that the working memory-attention control complex is central to intelligence [
23]. The other is the
disuse hypothesis that attributes some of the decline in cognition in old age to the fact that many aged people live in environments where cognitive complexity is reduced [
42]. There has been a great deal of interest in this topic, even to the point that one commercial company sponsored advertisements for “brain improvement programs” in the television presentation of the 2013 American professional football “Superbowl” championship. Not surprisingly, and similar to research on the use of pharmacological agents to improve intelligence, there have been mixed results, and the claims of efficacy of cognitive training have been received with some skepticism (e.g., [
43,
44,
45,
46]). The controversy is driven by rightful skepticism concerning the exaggerated promises made by commercial companies, as described above, and, relatedly, by the facts that some of the observed effects are small, that they do not seem to be easily replicable, and that there may be a publication bias toward positive results.
In spite of these mixed results, there is reason for cautious optimism about the prospects of developing behavioral training methods to increase intelligence by training the working memory-attention complex. There are two reasons for this. First, virtually every human cognitive function is susceptible to training. Second, the information processing “targets” are well established. Even if some of the effects might be small, they can be highly relevant, for example for individuals who have deficits in the targeted domains which result in achievement gaps, such as children diagnosed with ADHD. In order to maximize the outcome, it may, however, be appropriate to utilize a wide array of training methods, rather than to rely on a few established paradigms, such as the n-back task or working memory spans (e.g., [
47,
48]). Just as decathlon athletes are without doubt fitter than golfers due to their broader training regimen, we could most likely maximize the efficacy of cognitive training by targeting multiple processes. Such a “kitchen-sink” approach might increase the likelihood that there are shared processes between the intervention and the outcome measures, one of the pre-requisites for generalizing effects. Therefore, the more methods and processes used, the greater the likelihood that general intelligence will be improved [
37,
49]. However, this illustrates a classic problem well known in the medical field. If the goal is to enhance cognition in the participants (in medicine, to cure the patients), the strategy of trying many things that might work has a good deal to recommend it. If the goal is to understand the mechanisms by which improvement is achieved, a more precise experimental design is appropriate. When the precise design is used, the researcher then faces the problem of motivating participants to maintain training regimens that are unvaried and, to be honest, often tiring and boring.
In sum, the malleability of cognition and the knowledge about target processes justify the “optimism” part of the conclusion. The following warnings justify the caution.
Young adults, and particularly young adults in populations selected for their cognitive skills (i.e., university students), are quite possibly the worst population to study if one is interested in the improvement of cognitive skills. The reason is simple. These people would not be where they are if they did not already have substantial cognitive skills. Training is far more likely to be effective with young children, adolescents who are in the age range where cortical connections have not completely developed, patient populations who have deficits in the targeted domains, or the elderly, who, despite age-related reductions in brain plasticity, have room to improve.
Further, any effects of training (as well as the effects of pharmacological agents) are bound to be transient. Consider the analogy to physical training. You can improve your physical strength by weight lifting, but if you stop lifting and do not compensate by using your new-found strength in strength-demanding activities, flabby muscles will return; not in days, but certainly in months. For the very reason that the brain is plastic, training on the key information processing functions required for intelligent behavior will only produce long-term changes if either the training itself is continued or the benefits of training are invested in further challenging mental activities [
50].
This raises an interesting definitional issue. As discussed above, there are a number of widely used drugs that are believed to improve attention, memory consolidation, or both. Ritalin is probably the prime example. While we are skeptical of some of the claims made, we do believe that these drugs work for some people, some of the time, and hence have appropriate clinical uses. These drugs are justified because of their acute rather than their chronic effects. That is, there is no claim that the drug, alone, has made a permanent change in the brain systems relevant to cognition. However, what about information that a person acquires while the drug is active? To illustrate, suppose that a student uses a cognitive enhancer to study a highly useful tool for problem solving, such as mathematics. Assume further that with the drug’s aid the student acquires better mathematical skills than he or she would have acquired otherwise. Subsequently the student will probably be able to perform better on many tests of cognitive functioning, such as the SAT.
This example highlights the importance of definitions. The assertion that a physical or behavioral intervention improves intelligence rests on the claim that the intervention produces capabilities that transfer to a broad range of cognitive activities. A further assumption is that the transfer lasts for a substantial period of time; months or years rather than a few days. There is also an implied assumption that the intervention itself is what produced the improved performance, i.e., that the intervention has a proximal effect on cognitive performance. In our hypothetical drug-mathematics-SAT example the intervention has a distal effect. The temporary drug state makes it possible to acquire cognitive skills that have a permanent effect on subsequent mental challenges, including but not limited to taking the SAT. Did the drug improve intelligence?
The mathematics learning example also brings us to the one intervention that has consistently been shown to be related to improvements in intelligence: education. Education is a continuous process. The knowledge, skills and styles of thought that are acquired at one level are used to acquire knowledge, skills, and styles of thought at the next level. This is true for virtually everybody through high school, and true for the college/university level science, technology, engineering and mathematics (STEM) disciplines that many observers have claimed to be central to economic progress. Therefore, anything that affects the rate of learning is exceptionally important.
There is some evidence that early childhood (pre-school) programs do exactly this. Children who arrive at primary school having learned how to pay attention, and having acquired elementary reading and mathematics skills, are ready to gain a great deal from first grade instruction. Children without these skills suffer both cognitively, because they need to develop the skills at the same time that they are called upon to practice them, and motivationally, because of the frustrations they have when called upon to learn using cognitive artifacts, as in reading or mathematics, before they have mastered the basic information processing skills needed to master the artifacts themselves. The Abecedarian, Perry and similar intensive, very expensive preschool programs targeted toward low SES children provide evidence that, when costs are amortized over a number of years, they justify their expense, largely through a reduction in the social support required by graduates of these programs as adults. The economist James Heckman [
51] has argued that greater returns on the dollar can be obtained from investing in preschool education than can be obtained from investing in any other part of the educational establishment. More focused studies have shown that attention can be trained in pre-school children, and that such training may have the effect of improving general reasoning scores [
52,
53,
54]. If so, the benefits of the expensive pre-school programs might be obtained at much more reasonable cost, providing the means to also improve the lot of children who are not from the very low SES groups targeted by the Abecedarian, Perry, and similar programs.
What about schooling beyond pre-school? Reviews have consistently shown that the level of education achieved is positively related to intelligence test scores, over and above selection effects as individuals leave the system [
23,
55,
56]. If we take the more general view that intelligence refers to the cognitive competence required in one’s society, rather than the competences required to take a test, we would hope that this was true. One of the major purposes of education is to increase students’ cognitive capacity, both by the acquisition of specific pieces of knowledge and by the acquisition of better reasoning capacities. This is especially the case in modern society, for formal education is the primary way in which children and young adults acquire knowledge about and skill in using our cognitive artifacts. On the other hand, in modern society the education system is so large, and tries to improve cognitive power in so many different ways, that it is difficult to say anything about it except that it works, and that it probably could be improved. Developing specific recommendations would require a much more extensive analysis than we can offer here. Identifying changes that might actually be adopted in the system would take us into the area of policy analysis, which is beyond our expertise.
Arguments over whether or not intelligence is malleable are obsolete. For the reasons cited above, there is every reason to be optimistic about future efforts to improve and/or maintain intelligence, in the general sense of individual intellectual competence. Improving intelligence, in the much narrower sense of increasing intelligence test scores, is a side issue.