You have been involved in elaborating a “biological meta-theory” for the social sciences—from the perspective of “life history evolution.” Could you start by telling us more about it?
Michael A. Woodley of Menie:
First I should explain life history theory. This is a very powerful model in evolutionary ecology for explaining the covariance of anatomical, physiological, and behavioral traits within and across species. Its core idea is that environments pose particular sets of fitness challenges to organisms, which favor the evolution of coordinated suites of adaptations; these coherent adaptive packages can be understood as strategies through which organisms overcome obstacles to fitness (i.e., reproductive success). Species that tend towards very high rates of reproduction (i.e., high yields of offspring) typically have short life expectancies and their offspring tend to be precocial—meaning that they take relatively little time to mature into their adult forms. Their behaviors are also adapted to environments with generally high and unpredictable levels of extrinsic morbidity and mortality—sources of morbidity and mortality are “extrinsic” if adaptive features of organisms have little influence on them, and they are “unpredictable” if they exhibit high spatial and temporal variability that organisms cannot anticipate. The package of adaptations—behavioral, reproductive, and so on—that typically emerges in these environmental circumstances is usually called “r strategy” (where r denotes a species’ reproductive potential). Rabbits exemplify this ecological strategy—they are ready to reproduce within six weeks after birth, and the mother spends only a few minutes per day with her offspring investing resources in their growth. Rabbits also have relatively short lifespans, and in the wild have very high odds of succumbing to predation. The opposite strategy is usually called “K strategy” (where K denotes the carrying capacity of an environment). When a species is optimized for existence at the carrying capacity of its environment, its members exhibit high longevity, prolonged gestation, and extended postnatal development. 
The high-density populations in which K strategists live experience relatively little, or at least predictable, extrinsic morbidity and mortality. K strategists are typically long lived, in part because they invest heavily in somatic development and maintenance. With respect to behavior, K strategists are usually highly pro-social, investing in the fitness of their genetic kin via communitarian effort. Elephants exemplify this strategy, since they have relatively low rates of fertility, but invest substantially in their (small numbers of) offspring via extended gestation and multiple years of postnatal parental investment. Moreover, they are markedly herd oriented, with individual elephants exhibiting highly protective behaviors toward their entire herd when threatened by predators.
So, why is life history theory useful to the social sciences? The answer is that life-history variation is not merely a source of inter-species differences but exists among populations and also individuals within species (as indicated earlier). When considering life history as a source of intra-specific variability, the terms fast life history (as in “live fast and die young”) and slow life history (suggestive of “live long and prosper”) are used in lieu of r and K respectively. Among individual humans, there are genetic correlations between seemingly discrete personality, social cognitive, and behavioral traits, consistent with the existence of a latent life history factor “tying” these sources of individual differences together. For example, positive genetic correlations exist between all five of the Big Five personality factors (openness, conscientiousness, extraversion, agreeableness, and emotional stability), such as to evidence a latent General Factor of Personality (GFP), high levels of which correspond to high levels of social effectiveness. The GFP is in turn positively genetically correlated with physical and mental health (which are related through a latent factor termed Covitality), and with behavioral and social cognitive traits that positively relate to investments in family, friends, communities, and (long-term) mates—these traits include future orientation, risk aversion, and self-control (collectively, these traits and their associated outcomes define the K-Factor). The latent factor superordinate to all three of these domains is called Super-K.
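The statistical logic behind a latent general factor can be sketched briefly: when all pairwise correlations among traits are positive (a "positive manifold"), the first principal component of the correlation matrix has uniformly positive loadings and can be read as a general factor. Below is a minimal illustration in Python, with invented correlation values standing in for real Big Five data:

```python
# Minimal sketch (illustrative, not real data): a positive manifold among
# the Big Five implies a general factor. We extract the first principal
# component of a hypothetical correlation matrix via power iteration.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def normalize(v):
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

# Invented positive correlations among openness, conscientiousness,
# extraversion, agreeableness, and emotional stability.
R = [
    [1.00, 0.20, 0.25, 0.15, 0.20],
    [0.20, 1.00, 0.15, 0.30, 0.25],
    [0.25, 0.15, 1.00, 0.20, 0.30],
    [0.15, 0.30, 0.20, 1.00, 0.25],
    [0.20, 0.25, 0.30, 0.25, 1.00],
]

v = normalize([1.0] * 5)         # arbitrary starting vector
for _ in range(200):             # power iteration converges to the
    v = normalize(matvec(R, v))  # dominant eigenvector of R

loadings = v  # all positive: every trait loads on one general factor
```

Because every entry of the matrix is positive, the dominant eigenvector is guaranteed to have loadings of one sign, which is the formal sense in which a positive manifold "evidences" a single superordinate factor.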
It is important to note that humans are highly K selected as a species (i.e., they are on average high on Super-K); nonetheless, they exhibit individual differences in Super-K, such that some people, for genetic and epigenetic reasons, have faster (i.e., lower-Super-K) life history strategies than others. Selection has likely favored heritable life-history variation among individuals as a sort of adaptive bet-hedging against spatial and temporal variability in the level and predictability of extrinsic morbidity and mortality. This observation suggests an important point, which is that moral evaluations applied to life history strategies are not easily justified. The appropriateness of a fast or slow life history strategy is a matter of environmental circumstances. As with other organisms, humans have finite stores of bioenergetic resources, and therefore cannot invest maximally in every phenotypic domain relevant to fitness. They have instead evolved to invest selectively in the development of particular phenotypes, such as to yield the life history strategies most suitable for managing the environmental challenges that characterize their phylogenies. To many people, at least, it will seem strange to apply moral judgments to the outcomes of these non-teleological evolutionary processes.
Life history theory therefore explains the evolutionary basis of the tendency of diverse traits to hang together via adaptively coordinated patterns of heritable developmental tradeoffs. The fact that rather distinct aspects of human phenotypes non-randomly associate has not escaped the awareness of sociologists, historians, economists, and (non-evolutionary) psychologists; but these scientists and scholars lack a satisfactory distal, or ultimate, causal explanation of the correlational patterns that they observe. Our objective in writing Life History Evolution: A Biological Meta-Theory for the Social Sciences (Palgrave Macmillan, London) was to demonstrate, through analysis of the works of a representative cross-section of major thinkers in sociology, history, economics, and psychology, that key publications in the annals of social science are centrally concerned with life history phenomena but fail to explicitly acknowledge this fact. By extension, those works fail to use the considerable resources of life history theory to integrate these phenomena into a coherent framework. In showing that this can be done, we demonstrate the centrality of life history strategy to human social organization. The book highlights a case of what E. O. Wilson termed consilience—that is, the convergence of different forms of inquiry into some scientific matter on the same substantive results. Evolutionary theory has been especially potent in realizing consilience because it tends to expose the ultimate unifying bases of phenomena that social and behavioral scientists only ever explain in disjointed, proximate ways.
One of your earliest claims, which is based on observing “adaptation towards specialization” and “the convergent recurrence of biological forms,” is that “natural Platonic laws may operate in ecosystems.” Could you return to this early intuition?
Michael A. Woodley of Menie:
That is from one of my earliest works in theoretical ecology. The model that I proposed there was in essence an attempt to explain the recurrence of biological forms in nature. Designs tend to get “reused” across diverse taxa that are only very distantly related, but nevertheless have to adapt to similar environmental challenges. This process is known in evolutionary biology as convergent evolution. A striking example is the set of morphological analogies between the Ichthyosauria, an order of marine reptiles that lived from the early Triassic to the late Cretaceous, some 250 to 90 million years ago, and the infraorder Cetacea, which contains dolphins and whales. The parallels are extensive, with species in both groupings exhibiting a similar range of morphological adaptations related to diet, reproduction, and body size.
To understand the prevalence of convergent evolution—which is not restricted to cases of species possessing analogous body plans, but encompasses entire ecological community assemblages, and even independently evolved genes that underlie the same function in their carriers across diverse taxa—I took inspiration from the work of Michael Denton. Specifically, I drew on his finding that physical-chemical constraints limit the diversity of protein structures—these constraints have the effect of producing recurrent natural archetypes that have been called “Platonic molds.” This means that natural laws have a stronger role in restricting the range of protein-structure diversity than one might ordinarily suppose (e.g., if one were to approach the matter with the view that this diversity is merely a contingent outcome of natural selection). Denton’s insights are reminiscent of pre-Darwinian models of biological transmutation, such as transcendentalism. Transcendentalists held that the range of body plans and homologous morphological structures among species stems from affinity through an ancestral archetype, which degenerated into the range of species seen in more modern times. The archetype itself was thought to have been specially created to replenish extinct species. Evolutionary change in this model is considered degenerative—living forms become corrupted (i.e., acquire morphological properties that cause them to deviate from their archetype) via the action of changing environments. This unidirectional evolution was thought to necessarily culminate in species extinction.
My thinking at the time was that ecological niches may be conceptualized as having archetypal properties, in that niches constitute centroids in an N-dimensional space, the axes of which correspond to environmental gradients with respect to which a given species is distributed in some unimodal fashion. The niche therefore constitutes a set of physical-chemical constraints on the evolutionary freedom of a given species. Ecologist G. Evelyn Hutchinson, the originator of the modern niche concept, averred that a niche is defined by the species that occupies it, and that there could be no such thing as a vacant niche (because the concept of niche is meaningless absent a species). Mathematically, however, this is not strictly true: Just as the niche is defined by the presence of a species, a vacant niche could be conceived as its negative—i.e., the specific absence of a species with respect to delimited dimensions of niche space. Such vacant niches—which are ultimately functions of the physical and chemical properties of ecosystems and exist, as an adaptive potential, prior to the evolution of a species—may therefore act on a species’ lineage in such a way that limits the range of biological forms that are possible within a given ecosystem, with natural laws acting to draw the evolutionary trajectories of species into Platonic molds. Diverse ecosystems may therefore contain a limited range of such archetypal vacant niches evoking recurrent biological forms and driving evolutionary convergence across distinct lineages.
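The niche-as-centroid idea admits a minimal formal sketch. Under the common modeling assumption of a Gaussian (unimodal) response along each environmental gradient, a niche is the point in N-dimensional gradient space at which a species' performance peaks. The axes and numbers below are invented purely for illustration:

```python
import math

# Minimal sketch under an assumed Gaussian response: a niche as a centroid
# in an N-dimensional space of environmental gradients, with the species
# distributed unimodally along each axis. All values are invented.

def niche_response(env, optimum, tolerance):
    """Relative performance at environmental point `env`, given the niche
    centroid `optimum` and per-axis niche breadth `tolerance`."""
    return math.exp(-0.5 * sum(
        ((e - o) / t) ** 2 for e, o, t in zip(env, optimum, tolerance)))

# Two hypothetical axes, e.g., temperature and salinity.
optimum = (18.0, 32.0)    # the niche centroid
tolerance = (4.0, 3.0)    # broader tolerance = wider niche

at_centroid = niche_response((18.0, 32.0), optimum, tolerance)   # peak = 1.0
off_centroid = niche_response((26.0, 38.0), optimum, tolerance)  # far lower
```

On this picture, a "vacant niche" corresponds to a region of high potential `niche_response` for some feasible phenotype that no extant species currently occupies.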
The idea that ecological specialization constitutes a form of lineage degeneration is based on the observation that species have a general tendency in the course of their “life cycles” to overspecialize and then go extinct (this somewhat reflects the concept of typolysis from the paleontologist Otto Schindewolf). This process may stem from the fact that ecological specialism enhances the vulnerability of species to environmental and biotic disturbances through overdependence on some unique and irreplaceable facet of an ecosystem. I give the example of orchids whose pollinator platforms are evolved so as to resemble a specific sex of a specific insect for the purpose of attracting the opposite sex of that species to be used as a vector for pollination. It is clear that should that particular insect species go extinct, so too would the orchid.
You are mostly known for the “Woodley effect,” the fall of average general intelligence in the West since the Victorian era. Could you explain the nature and basis of this effect?
Michael A. Woodley of Menie:
My framework for understanding secular trends in intelligence is termed the co-occurrence model. This model is based on two observations. The first is that the cognitive phenotype corresponding to IQ—a composite measure of an individual’s relative intelligence—is not homogeneous but consists of distinct general and specialized mental factors. The second is that the fertility advantage of those with lower levels of IQ over those with higher levels (which Benedict Morel and Sir Francis Galton first noted in the mid-19th century) is stronger the better an IQ test or subtest measures the underlying general factor of intelligence (g). This is probably because those with higher g, who have an advantage in dealing with evolutionarily novel problems, are more responsive to norms of and opportunities for fertility control than those with lower g, and are also more likely to have extended educational careers at the expense of fertility. Importantly, the g factor is the principal source of IQ heritability—more than 80% of the variation among individuals in levels of g is likely due to genetic variation among them. Given that the responsiveness of a trait to selection is proportional to its heritability (as the breeder’s equation specifies), we should expect that g, due to its uniquely high heritability, will decline the most rapidly of all factors of intelligence when under negative selection. Moreover, the high heritability of g, in addition to the apparently minimal variation in this parameter across environments, implies that the potential for environmental effects to change population levels of g is negligible. Consequently, we should expect to find phenotypic declines in g over time in populations with long-term selection against the trait, and indeed there is much evidence that this has been happening (a point to which I will return).
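The breeder's equation invoked above can be made concrete. It states that the per-generation response R of a trait to selection equals its narrow-sense heritability h² times the selection differential S. The heritabilities and differential below are illustrative assumptions, not empirical estimates:

```python
# Sketch of the breeder's equation, R = h^2 * S: the expected
# per-generation change in the population mean of a trait is its
# narrow-sense heritability times the selection differential.
# All numbers are illustrative assumptions, not estimates.

def response_to_selection(h2, S):
    """Breeder's equation: expected change in the population mean."""
    return h2 * S

# Under the same negative selection differential (in trait-SD units),
# a highly heritable factor like g declines faster per generation than
# a weakly heritable specialized ability.
decline_g = response_to_selection(h2=0.8, S=-0.05)
decline_narrow = response_to_selection(h2=0.3, S=-0.05)
```

This is the formal sense in which a uniquely high heritability makes g the factor most responsive to negative selection.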
The heritability of cognitive factors also helps make sense of the Flynn effect—the tendency for performance on IQ tests to increase over time, which is the opposite of the trend that Raymond B. Cattell and other psychologists historically predicted on the basis of observed negative correlations between IQ and fertility. Since the Flynn effect is largest on IQ measures that only weakly relate to g, and that have associated low heritabilities, there is plenty of room for environmental improvements—such as prolonged education and better nutrition, as well as more cognitively stimulating environments, etc.—to increase intelligence at the level of specialized and narrow cognitive abilities and skills, but not at the level of g.
So we have at this point the bases of the defining prediction of the co-occurrence model: In addition to the Flynn effect, there should be a co-occurring decline in the level of a population’s g due to the action of negative genetic selection. The lack of evidence for long-term declines in IQ measures despite a negative association between IQ and fertility was in fact termed Cattell’s paradox, as Cattell was the first to predict that IQ scores should be declining due to selection, only to find in work conducted in subsequent decades that these scores were in fact rising. Richard Lynn proposed a solution to Cattell’s paradox in the 1990s. He suggested that improved environmental quality was causing phenotypic IQ to rise, a process swamping the concurrent decline in genotypic IQ. Taken at face value, this attenuation model would predict that no measure of the cognitive phenotype should evidence decline over time—but declines on certain measures are precisely what my colleagues and I found.
First, we discovered that simple visual reaction times have apparently been slowing since the 1880s at the latest. Then we discovered that working memory performance has been falling since the 1930s at the latest, then color acuity since the 1980s, 3D rotation ability since the 1970s, and finally high-difficulty vocabulary usage in written text, which has been declining since 1850. These indicators of cognitive ability have certain properties that, as we predicted, make them uniquely sensitive to selection and other pressures favoring lower g. A number of them are ratio-scale measures, meaning that they have a true zero-point of measurement—akin to the Kelvin temperature scale—unlike interval scales, such as the scoring systems of conventional IQ tests, the values of which are relative rather than absolute (this would be akin to the Celsius and Fahrenheit temperature scales). This means that populations assessed with ratio-scale measures such as simple reaction time (visual or auditory), working memory, and color acuity at one point in time can be meaningfully compared with populations assessed with the same measures at another point in time, because both sets of measurements are scaled absolutely. Since the Flynn effect involves gains in specialized abilities over time, IQ tests in more modern populations come to measure those abilities to a greater extent, and they measure g to a proportionally lesser extent (because people are bringing specialized abilities to bear on problems that in the past they could have only solved with g). This failure of measurement invariance across time means that IQ tests today are measuring somewhat different psychometric parameters relative to the past.
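The ratio-versus-interval distinction drawn above can be shown in a few lines. On a ratio scale (true zero), ratios across eras are meaningful; on an interval scale they are not, because the apparent ratio changes under a shift of origin. The reaction-time means here are invented, not the historical estimates:

```python
# Ratio vs. interval scales, per the measurement argument above.
# Reaction time in milliseconds has a true zero, so ratios and absolute
# comparisons across eras are meaningful. Celsius does not: the apparent
# ratio changes under the origin shift to Kelvin. The reaction-time
# means below are invented for illustration.

mean_rt_then = 180.0  # ms, hypothetical earlier cohort
mean_rt_now = 250.0   # ms, hypothetical later cohort

slowing_ratio = mean_rt_now / mean_rt_then      # meaningful on a ratio scale

apparent_ratio = 20.0 / 10.0                    # "20 °C is twice 10 °C"?
true_ratio = (20.0 + 273.15) / (10.0 + 273.15)  # on Kelvin: only ~1.035
```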
The property of measurement invariance with respect to g is therefore what allows the diverse indicators mentioned above to collectively track declining g for (in some cases) well over a century—the declines in this set of indicators are what some in my field have called “Woodley effects.”
It needs to be observed that these “Woodley effects” have been corroborated with evidence indicating that the genomes of older individuals in Western populations are more enriched with genetic variants that predict IQ and educational attainment than those of younger individuals—and that in very large populations, the decline trend persists even when the confounding effects of survival bias are statistically controlled (this bias results from the fact that higher-IQ persons live longer on average than their lower-IQ counterparts). “Woodley effects” seem to have real-world consequences too, given that the rates of macro-innovation and intellectual eminence have been steadily declining since the middle of the 19th century based on convergent historiometric analyses of encyclopedias of the history of science and technology (there are signs of decline in non-scientific disciplines as well, but I am setting those aside here). As macro-innovations (which are highly and conspicuously novel breakthroughs in science and technology, e.g., the development of the theory of relativity) constitute significant social manifestations of high levels of g applied to very complex problems, declining phenotypic g consequent to relevant negative selection predictably depresses macro-innovation rates.
This is not to imply that the Flynn effect is unimportant. I have devoted a great deal of research to uncovering both the causes and consequences of the increasing cognitive specialization that populations undergoing the Flynn effect experience. The model that my colleagues and I have developed posits that reductions in extrinsic morbidity and mortality and environmental unpredictability following industrialization, and the broader modernization process, have genetically and epigenetically selected for slower life history strategies at the population level. As noted above, slower life history organisms invest more heavily (e.g., through greater allocations of time and calories) in somatic development, including brain and cognitive development, than faster life history ones. Relatively slow life history strategists are also adapted to exploit narrow stable niches—a viable strategy when extrinsic morbidity and mortality and environmental unpredictability are low—which is most effectively achieved with specialized suites of cognitive abilities and behavioral dispositions. Life history theory can therefore explain both the increase in the levels of specialized cognitive abilities that constitutes the authentic Flynn effect, and the tendency of the Flynn effect to occur in time with the skewing of cognitive profiles toward specialism as opposed to generalism.
Moreover, the life history model of the Flynn effect provides a theoretical framework in which to integrate many proposed causes of the effect (such as improved nutrition, increased exposure to schooling, reduced disease stress, etc.), since virtually all such causes either selectively favor slower life histories (because they diminish levels of extrinsic morbidity and mortality and environmental unpredictability) or are consequences of slower life histories (the latter would apply to models of the Flynn effect that invoke the decreasing size of families under modernization, allowing greater investment of resources into the relatively small number of children that parents have to raise). In light of the fact that life history slowing increases cognitive and behavioral specialization and seemingly enables the Flynn effect, it is unsurprising that my team and I have found that life history slowing and the Flynn effect predict macro-economic diversification and economic growth (measured in terms of GDP per capita), both synchronically and diachronically. Economists would of course expect these outcomes from specialization given Ricardo’s law of comparative advantage. My colleague Aurelio José Figueredo has led some of the major empirical tests of the association between economic growth and population-level life history strategy, and also made important early predictions about the role of slow life history strategies in promoting the development of personality and behavioral individuation within groups.
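Ricardo's point, invoked above, can be illustrated with a toy example (all numbers invented): even when one producer is absolutely better at both goods, joint output rises when each specializes where its opportunity cost is lowest.

```python
# Toy illustration of Ricardo's comparative advantage (numbers invented):
# producer A is absolutely better at both goods, yet joint output rises
# when each specializes by opportunity cost.

HOURS = 12.0  # labor budget for each producer

# Hours needed per unit: A {cloth: 1, wine: 2}; B {cloth: 6, wine: 3}.
# B's opportunity cost of wine (0.5 cloth) is below A's (2 cloth).

# Autarky: each splits its hours evenly between the two goods.
autarky_cloth = HOURS / 2 / 1 + HOURS / 2 / 6   # 6.0 + 1.0 = 7.0
autarky_wine = HOURS / 2 / 2 + HOURS / 2 / 3    # 3.0 + 2.0 = 5.0

# Specialization: B makes only wine; A tops wine up to the autarky level
# and spends its remaining hours on cloth.
b_wine = HOURS / 3                               # 4.0
a_wine_hours = (autarky_wine - b_wine) * 2       # A brews the last unit
a_cloth = (HOURS - a_wine_hours) / 1             # 10.0

# Same total wine (5.0), but cloth output rises from 7.0 to 10.0.
```

The analogy to cognitive specialization is the one drawn in the text: finer divisions of labor raise aggregate productivity without anyone becoming absolutely better at everything.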
To summarize, the co-occurrence model offers contrasting predictions about the macro-social and economic impacts of simultaneously falling g and increasing specialized abilities (Woodley and Flynn effects, respectively). While the former predicts declining creativity and macro-innovative capacity, we know from the work of the economic historian Robert Fogel that macro-innovation is not always essential to economic growth, because it is a distal cause—indeed, micro-innovations derived from macro-innovations are much more important to this process. Hence the (now fairly well-replicated) association between the Flynn effect and GDP growth, despite the “Woodley effect” and declining macro-innovation rates.
Considering my early research in the field of theoretical ecology discussed previously, there seems to be a “grand analogy” between this tendency toward ever greater specialization in a civilization and Schindewolf’s process of typolysis, which, again, predicts species degenerating into specialized types as they near the end of their cycle of existence. That we see this same degeneration and overspecialization dynamic in certain contemporary human populations might be indicative of the action of a social typolysis process, perhaps presaging the eventual reversal of the modernization process.
Two periods must be noted in Western history—if not the history of the world—for having produced individuals of great eminence. Namely, Renaissance Italy, which saw the emergence of Leonardo da Vinci, Michelangelo, Raphael, Marsilio Ficino, Machiavelli, and Monteverdi; and Ancient Greece—especially Classical Greece—which produced Thales of Miletus, Pythagoras, Heraclitus, Socrates, Plato, Aristotle, Thucydides, and Alexander the Great. From a biocultural point of view, how do you account for those unique cognitive and creative explosions?
Michael A. Woodley of Menie:
My starting point is the observation that civilizations rise and fall. Many ancient thinkers, such as the Roman historian Polybius, had cyclical views of history, which came to compete with progressive views most sharply around the time of the Enlightenment. My work has led me to believe that the cyclical theorists were (and are) right and the progressive ones were (and are) wrong.
Two major drivers of civilizational growth are genetic selection for higher g and also for slower life history traits. For a civilization to rise, it seems that selection pressures must simultaneously favor high g, thus permitting greater intelligent control of the environment (a process called inceptive niche construction), and slower life history. Slow population-level life history allows fine-grained divisions of labor to emerge (via enhanced cognitive and strategic behavioral differentiation). Such specialization limits intra-specific competition because it reduces conflict over access to niches, which in turn facilitates cooperation that boosts aggregate productivity. In the contemporary West (and some other parts of the world), selection favors the fitness of those with lower g. But people with slower life histories (counterintuitively) have higher average fertility than those with faster life histories. There is a simple explanation to resolve this seeming paradox: Slow life history predicts positive attitudes toward children—it seems to limit focus on the costs associated with child rearing and increase desired numbers of offspring. Those with fast life history likely produced larger numbers of children historically (prior to the Industrial Revolution), not out of any desire for children, but as a consequence of executing high-mating-effort strategies without access to reliable contraception. (Nonetheless, in certain parts of the pre-industrial world with high levels of intrinsic [controllable] morbidity and mortality, the offspring of fast life history strategists would have been at much greater risk of death in infancy and childhood than the offspring of slow life history strategists, which likely conferred a fitness advantage to the latter over the former, such that selection favored slow life history in such times and places.) 
The availability of cheap, reliable, and easy-to-use contraceptives, as well as safe abortions, probably explains the fertility disadvantage of fast life history strategists today.
Currently, then, only one of the two necessary factors for civilizational growth is present in Western populations. Western civilization thus can be expected to stagnate (this seems to be happening already) and then decline. Slowing life history may be providing the only meaningful support to civilization, as ever greater degrees of ecological specialization (reflected in the Flynn effect) are raising comparative advantage and generating (some) economic growth. But the decline in macro-innovation rates due to declining g is likely having a corrosive effect: Over time, the base of macro-innovations from which micro-innovations can be derived will shrink. The present regime of selection is therefore not sustainable as far as civilization is concerned.
Oscillation between regimes of group- and individual-level selection is the ultimate driver of this cycle of behavioral evolution. Group selection occurs when selection acts primarily at the level of populations, i.e., when fitness differentials are, for whatever reason, most pronounced among populations as opposed to among individuals within populations. By contrast, individual selection occurs when selection acts primarily at the level of individuals, i.e., when fitness differentials are, for whatever reason, most pronounced among individuals as opposed to among more complex biological phenomena (such as populations or species). With respect to humans at least, where the balance of selection lies between the individual and group level in any particular case seems mostly to be a function of both the average and the variability of temperature. It has been empirically well established that lower temperatures historically predicted increased levels of inter-group violence (as measured by indices of war fatalities), which seems to be the major source of group-selective pressure in humans. Indeed, the Great Crisis of the 17th Century was associated with extremely low temperatures in Europe and saw some extraordinarily costly inter-group conflicts (such as the Thirty Years War).
Cold, variable climates seem to engender group-level selective regimes because of their effects on agriculture. By forcing up the price of food (climate-related crop failures lead to food shortages), or inducing price instability, cold weather makes populations more inclined to expansionism, typically with an eye to colonizing lands in more temperate, southerly regions of the globe where resources are expected to be more abundant. Thus, the Little Ice Age that afflicted Early Modern Europe likely induced the subsequent Age of Empire from the 18th to the 19th centuries.
When different biocultural groups engage in inter-group conflict, there is a fitness premium on forms of altruism that have significant positive effects on group-level, or corporate, fitness (i.e., a biocultural group’s share of global human biomass). Such altruism can take a number of forms, e.g., heroic self-sacrifice in warfare. A more important idea, which evolutionary biologist William D. Hamilton first proposed, is that geniuses, i.e., those responsible for major intellectual contributions, might have an altruistic function through which they profoundly enhance the fitness of their biocultural groups; the key manifestation of this function is the development of novel ideas that provide entire groups fitness advantages over others. A good example of this process comes from the life of Dutch cartographer Gerardus Mercator. His innovations in map-making bestowed upon the Dutch, and eventually other European peoples, the capacity to successfully colonize distant lands in subsequent centuries.
Many aspects of the psychology and behavior of geniuses suggest that they are products of group-selection regimes. It has been noted that such individuals tend to be hyperfocused on their chosen problem(s), often to the exclusion of all else. Historically, they have fared very poorly reproductively, with high rates of celibacy and asexuality. They tend towards moderately high levels of psychoticism, a dimension that involves high levels of creative ideation and independence of mind (those high on this dimension tend to reject social conventions). Having high g is obviously a necessary factor for genius, but not a sufficient one. Psychologist Hans Eysenck noted that there are likely a dozen or so important factors that must combine in one individual to yield genius, and this combination is very rare. Additionally, although intellectual eminence tends to concentrate in extended families (as Galton noted), it rarely ever “breeds true.” The offspring of a genius are almost never as eminent as their parent, suggesting that these rare trait combinations infrequently survive chromosomal recombination.
From all of this, it seems that geniuses “pay” for themselves fitness-wise through the large fitness boosts that they confer on their biocultural groups. Only through this mechanism can geniuses increase the copying success of the genes underlying the factors that collectively make them. This is a classic example of negative frequency-dependent selection—having too many geniuses is likely bad for the group, as they have low reproductive success and may impose other costs on society; but having too few geniuses handicaps populations in inter-group competition and conflict. Individual-level sexual selection in the context of strong inter-group competition or conflict acts to keep their numbers relatively low, thus achieving a balance of group-level costs and benefits that favors corporate fitness.
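The negative frequency-dependent dynamic described here can be sketched with a toy haploid selection model (parameters invented): the rare type is fitter than the common type only while rare, so its frequency converges to a low interior equilibrium rather than to zero or to fixation.

```python
# Toy haploid model of negative frequency-dependent selection (parameters
# invented): the rare type's relative fitness falls below 1.0 once its
# frequency p exceeds the equilibrium p_star, so p converges to p_star.

def next_freq(p, s=0.1, p_star=0.02):
    """One generation of selection; the common type has fitness 1.0."""
    w_rare = 1.0 + s * (p_star - p)   # advantage only while rare
    mean_w = p * w_rare + (1.0 - p)
    return p * w_rare / mean_w

p = 0.20                  # start well above the equilibrium
for _ in range(20000):
    p = next_freq(p)
# p has now settled near p_star = 0.02
```

The equilibrium arises because the only fixed point with both types present is where the rare type's fitness equals the population mean, i.e., at p = p_star.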
Political scientist Charles Murray has noted that the frequency of geniuses over time in Western populations (evaluated just as macro-innovations are—via convergent historiometric ratings from relevant experts) follows a cyclical pattern. Specifically, he documented an increase in their per capita prevalence, starting in the Renaissance (a period characterized by significant scientific and cultural achievement in Europe, as observed in the question immediately above), peaking in the early to mid-19th century, and then declining precipitously thereafter.
The decline phase coincides with the period in which the correlation between cognitive ability and fertility became negative in the West, and with the period of global warming that followed the Maunder Minimum. Warmer climates reduced the levels of ecological stress acting on Western populations, causing pacification and a significant reduction in the strength of inter-group conflict (beginning around the time of the Pax Britannica, unsurprisingly). Macro-innovations in hygiene, medicine, and agriculture in the early 19th century (when the population-level g of Europeans, as a whole (but with variation, of course), was still very high) certainly further reduced the level of ecological stress that these populations experienced, with consequent massive reductions in infant and child mortality that demographers have copiously documented. Contraceptive use among elites probably further shifted the balance of selection in favor of those with lower levels of g, as did increasingly universal and prolonged exposure to education. As populations moved into a strongly individual-level selection regime, necessarily favoring the fitness of those better genetically equipped to succeed under conditions of inter-individual as opposed to inter-group competition, one expects that their psychological makeup began to evolve in ways that are unfavorable to geniuses and other altruistic types. Simply put, modernized conditions do not favor those traits that are useful to groups in conflict or intense competition with others—g and altruism, but also perhaps “transcendent” orientations broadly, which may associate with psychobehavioral alignment to individuals’ biocultural groups, an alignment that may redound to such groups’ fitness (Murray argued that geniuses are typically concerned with “transcendental goods”). 
Under such conditions, genius goes extinct as the levels of the various psychometric traits needed to sustain it decline (this could explain why, in 2013, the distinguished researcher of genius Dean Keith Simonton wrote an article in Nature entitled, “After Einstein: Scientific genius is extinct”).
Under group selection, competition for prominence is not only a function of individual-selection pressures: The individual who strives to scale the pecking order (or maintain his position) does not act only under the pressure of “selfish genes”; such individuals also act in response to pressures emanating from their larger biocultural group as it regulates and rewards its internal components—via what V.C. Wynne-Edwards calls “the competition for conventional prizes by conventional means.”
As a group-selectionist, how do you sum up the implications—for economic science—of this “holistic” envisioning of social hierarchy, especially when it comes to income inequalities, the division of labor, class struggle, or the transition from an agrarian feudal system to an organic-based wage system?
Michael A. Woodley of Menie:
I view economics as basically an extension of human sociobiology. This means that economic systems and preferences are shaped by the innate characteristics of biocultural groups—they are parts of the behavioral ecologies of those groups. I do not believe that economic principles are in any way “prior” to the biological factors that shape human affairs. Interestingly, some economists are now starting to move away from theory on the grounds that it is impeding progress in the field and has led to a variety of erroneous predictions. They are coming to embrace a big data approach instead. I suspect very few economists would support such a move had economics undergone a Darwinian revolution decades ago (when it certainly could have), and become broadly integrated with the behavioral sciences through sociobiology. The field would have developed a far better theoretical grounding and its progress by now would have been considerably greater (for instance, economists would probably have noticed that rational-choice models would be much more useful if they could adequately incorporate individual differences in traits such as g).
With respect to the role of historical patterns of group- and individual-level selection in shaping economic activity, this is indeed germane to understanding the evolution and sustainability of vertical stratification (usually called inequality). Under conditions of group selection, inequalities constitute a source of division of labor that can aggregately benefit a group. The regulation of inequality historically took the form of strong social and religious convictions deriving from a sense of belonging to a social order with sacred purpose (though of course this was realized imperfectly). Certain moral foundations likely co-evolved under group selection—specifically those of purity, authority, and loyalty, which, according to social psychologist Jonathan Haidt, constitute a source of social binding. Under high-binding conditions, such as those that seem to have predominated in the Middle Ages, individuals would engage in spontaneous displays of altruism when the interests of their biocultural group were threatened (there is much evidence that this happened among a number of European populations during the crusades). Medieval Europeans also largely self-organized in compliance with monarchical dictates, state infrastructure having been, at the time, far too underdeveloped to enforce behaviors coercively. They manifested what has been called “self-government at the king’s command,” a phenomenon discussed in a forthcoming book that two of my colleagues (Matt Sarraf and Colin Feltham) and I have been working on for a couple of years.
The term distributism is sometimes used to characterize economics under these conditions, where labor was organized and self-regulated through a system of guilds. While a merchant class existed, its in-group orientation constrained the scope of its operations, which led to the establishment of different forms of trading arrangements for in- and out-group members.
By contrast, the contemporary political-economic system often called “globalism” reflects strong individual-level selection, under which there is no spontaneous tendency to prefer binding-type moral behaviors, and moral foundations are better characterized as individualizing, involving, following Haidt’s work, preferences for harm reduction (a byproduct of ecological pacification) and for egalitarianism, or “fairness,” which is inimical to the existence of hierarchy. Under such conditions individuals sacralize efforts to ameliorate the perceived social ills stemming from inequality, which (ironically) leads them to compete with one another for status through ever more exaggerated displays of commitment to egalitarian pursuits. Sometimes, perhaps often, these displays are profitable for elites—it takes little imagination, for example, to understand how the mainstreaming of lifestyles formerly considered unacceptable creates many opportunities for businesses.
My colleague Matt Sarraf has developed the idea that globalization is not merely a political-economic phenomenon but partly depends on a project concerning mass psychology: An effort to eliminate groupish or binding moral orientations. Elites encourage people the world over (although not evenly) to abandon commitment to aspects of their nations and cultures that are distinctive of the interests of their particular biological groups, and to instead become morally oriented to the well-being of individuals without regard to group status. Once this moral psychology is sufficiently prevalent in a population, there is little resistance to, and often active support for, efforts to transmogrify the unique sociocultural products and lifeways of populations, to the end of serving “global humanity”. Out of this process have arisen supranational pseudo-empires like the European Union, the entire program of which is geared toward the erasure of sovereign nations through fiscal and policy homogenization, and, as already suggested, the elevation of certain progressive social goals agreeable to those who lack binding moral foundations. This situation is exactly what one would expect when selection acts predominantly at the individual as opposed to group level for an extended period of time, but when high levels of K, and the attendant prosociality, are also retained or even increased. Still, there are profound individual differences in the traits underlying approval of this latest phase of human social evolution, and the efforts of the managers of this globalized world order to control everything from consumption habits to etiquette and moral behavior, sometimes in borderline totalitarian fashion, are irking many people a great deal. This seems to be the catalyst of the organic growth of rightist political movements across the contemporary West.
It is well known that pecking-order genius Mao Zedong managed to secure—in fact, regain—his position at the top of the communist Chinese superorganism after having set the youth against his rivals within the Party.
How do you sum up the consequences of the Great Leap Forward and the Cultural Revolution on patterns of selection for cognitive ability in China—and generally speaking, those of a planned economy?
Michael A. Woodley of Menie:
In some of his recent work, the economic historian Gregory Clark, and I think at least one other researcher, found that the Communist Revolution in China had a rather modest effect on social mobility rates—in other words, the best efforts of the Maoists to eradicate the old elite and any vestiges of Chinese tradition failed to make much progress toward economic egalitarianism. Clark argues that this persistence is a consequence of the high heritability of social status. Variation in wealth and occupational status seems to be primarily a function of additive genetic variation, as Clark and his colleague Neil Cummins have very recently shown using data from a large English lineage. The genetic bases of social status make it difficult to alter via social engineering: Individuals who are predisposed to develop elite status simply tend to rise to the top, irrespective of the political or economic system dominant at any particular time (as long as some opportunities to attain higher social and economic standing than others exist).
The issue of whether or not the Chinese elite has persisted in terms of the levels of the sort of traits that benefit populations under group selection (such as high g) is another matter, however. Some have claimed (e.g., evolutionary psychologist Geoffrey Miller) that the role of the State in regulating family size (via the one-child policy) would penalize the fertility of those with low g. But the data from the period of the one-child policy do not support this claim. The correlations between IQ and education, on the one hand, and both family size and fertility, on the other, have been consistently negative in the period starting in 1950. If anything, these correlations became stronger following the imposition of the one-child policy. China therefore seems to exhibit the patterns of reproduction typical of populations that have undergone an industrial revolution and a resultant breakdown of group selection.
In the old Indo-European world, the summit of the social hierarchy was occupied not by the “bourgeois” but by the magus—from the clairvoyant to the astrologer—and the warlike aristocrat. As a scientist stemming from a noble family, what are your thoughts concerning the high esteem in which magic used to be held before the rise of the bourgeoisie and “modern science”?
Michael A. Woodley of Menie:
These mystics were from a basically pre-scientific era. It is important to note that many of their areas of expertise would count as protosciences, containing elements that went on to be foundational to modern sciences (alchemy gave way to chemistry, astrology and astronomy were united but later separated, etc.). In so far as the materially useful aspects of these protosciences could substantially enhance the corporate fitness of human groups (such as the development of gunpowder through alchemical research, or methods of navigation based on the positions of stars as a by-product of astrological research, etc.), it is unsurprising that historical elites held these areas of inquiry in high esteem, even if much of the output of these protosciences had little or no value (such as attempts to transmute base metals into gold, or to predict the future by casting horoscopes, etc.).
The degree to which a population holds its aristocracy in high esteem would also be a function of the degree to which that population is under group selection. Historically, aristocrats derived their nobility from their military prowess. Parental control over offspring marriage probably ensured relatively high conservation of the traits relevant to such prowess in their offspring (as in the case of morganatic marriage laws in Europe, which prevented the passage of estates and titles to the offspring of marriages between individuals of unequal rank). The erstwhile function of aristocracy seems to have been as a warrior (and to a lesser extent administrative) caste, the continued existence of which was dependent upon material benefits from its actions accruing to those beneath it in the social hierarchy (such as making available new land and other resources through territorial conquest, etc.). The fact that the old aristocracies in Europe and intellectually eminent individuals (many of whom, as Galton noted, had strong genealogical ties to landed and noble families) are not held in high regard contemporarily is simply evidence that modernized populations, particularly of the West, have for quite some time been under a regime of individual-level selection. This selective regime most benefits the fitness of cognitive/behavioral specialists occupying very narrow micro-niches within highly complexified techno-economies, as well as others who can take maximum advantage of the material abundance and security that such economies produce. These people do not necessarily need high g to succeed, only the ability to invest cognitive capital into the acquisition of very specialized skills (such as software coding).
They do not need to think in terms of traditional moral and ethical categories, either, such as loyalty, honor, or virtue, because realization of such normative or evaluative ideals seems to be adaptive only (or primarily) to the extent that it contributes to the fitness of individuals’ biocultural groups in conflict with other such groups. In modernized conditions, opportunism is more viable, insofar as there are no real costs to failing to sacrifice for the fitness of one’s group and instead acting selfishly—inter-group conflict posing significant existential threats has largely disappeared (certainly in the West), so sacrifice now lacks adaptive logic.
The new elite that has emerged since the 19th century in modernized, and (again) particularly Western societies, is not so much a cognitive one, but more of an entrepreneurial one, which has been able to exploit easy-to-fake signals of commitment to the pursuit of certain moral goals (such as greater “equality” and “freedom”); this pursuit has been elevated to the status of a secular religion. In doing this, elites transfer the social costs of these behaviors to those with the fewest resources to mitigate the damage (i.e., the underclass). You asked me how my coming from a “noble” family influences my perception of these trends. I do not have much of an answer to that question, but I will say that I find very little that is recognizably noble in the behaviors of contemporary elites, sometimes called “globalists.”
Thank you for your time. Is there something you would like to add?
Michael A. Woodley of Menie:
I should mention that some of the most exciting research that I am now doing concerns a novel theoretical model of inter-organismal genomic transactions and their sociocultural and demographic effects, which I have developed primarily with Matt Sarraf over the past couple of years. We call this the social epistasis amplification model, or SEAM, which is based on two key ideas. The first is that the genomes of organisms in an environment are interactive, such that the genome of one organism can influence the gene expression and hence phenotypic development and condition of another organism, a phenomenon known as social epistasis. By extension, these effects could flow from a group of organisms to a single organism and vice versa, or from one group of organisms to another; it should be stressed that these interactive effects most likely tend to be mutual. The second is that deleterious mutations have likely been accumulating in populations that are modernizing and modernized, because the intense early life morbidity and mortality of the pre-industrial past, which likely selected against such mutations, have been all but abolished as a result of industrialization and its sequelae.
Together, these ideas suggest that the increasing frequency of harmful mutations in certain human populations may have costs that extend beyond their effects on individual carriers of mutations. Instead, via social epistasis, the effects of deleterious mutations may be amplified insofar as they might pathologically alter patterns of gene expression in organisms that do not carry these mutations (non-carriers). This dynamic could be the common basis of a variety of undesirable trends in modernized populations at least, which have so far required a congeries of theories to account for—these undesirable trends including falling fertility rates, declining sperm counts, and worsening mental (and in some respects physical) health, as well as unusual patterns of behaviors that seem to have become far more prevalent in the most developed parts of the world than anywhere else. The dynamics that the SEAM describes may also be involved in the loss of binding moralities and associated cultural adaptations.
Although it does not seem to be ethically possible to conduct experimental tests of this model in humans, with our colleague Stéphane Baudouin and his team, we have gotten very strong experimental support for the SEAM from mice. Specifically, we found that mice with a mutation related to autistic-like behaviors epigenetically alter the patterns of RNA expression in mice lacking this mutation to which they are exposed. Crucially, this altered RNA expression is associated with the occurrence of autistic-like behaviors in the non-carrier mice. Earlier experiments from Dr. Baudouin and his team noted that exposure to mice with an autism-related mutation depressed the testosterone levels of non-mutant mice and interfered with their ability to form stable social hierarchies.
These experimental results align with the “mouse utopia” research of noted ethologist John B. Calhoun. Calhoun found that mouse colonies raised in “utopian” conditions—i.e., in which resources were abundant and predation was absent—although initially experiencing a stage of sustained population growth, eventually stopped reproducing and died out. Notably, toward the end of the colonies’ existence, the mice exhibited a number of aberrant behaviors, including the autistic-like ones apparent in our own mouse experiments. We therefore believe that the dynamic underlying Calhoun’s “mouse utopia” collapse was probably mutation accumulation stemming from relaxed environmental conditions and subsequent amplification of the deleterious effects of the accumulated mutations to the entire mouse population. There is certainly evidence that the same process is playing out in modernized human populations today.