Open Access Article

Moral Foundations in the 2015-16 U.S. Presidential Primary Debates: The Positive and Negative Moral Vocabulary of Partisan Elites

School of Politics and Global Studies, Arizona State University, Tempe, AZ 85287-3902, USA
Soc. Sci. 2019, 8(8), 233; https://doi.org/10.3390/socsci8080233
Received: 29 June 2019 / Revised: 29 July 2019 / Accepted: 2 August 2019 / Published: 6 August 2019
(This article belongs to the Section Contemporary Politics and Society)

Abstract

Moral foundations theory (MFT) suggests that individuals on the political left draw upon moral intuitions relating primarily to care and fairness, whereas conservatives are more motivated than liberals by authority, ingroup, and purity concerns. The theory of conservatism as motivated social cognition (CMSC) suggests that conservatives are more attuned than liberals to threat and to negative stimuli. Because evidence for both accounts rests on studies of mass publics, however, it remains unclear whether political elites of the left and right exhibit these inclinations. Thus, this analysis uses the 2015-16 United States presidential primary season as an occasion to explore partisan differences in candidates’ moral rhetoric. The analysis focuses on verbal responses to questions posed during party primary debates, a setting that is largely unscripted and thus potentially subject to intuitive influences. The Moral Foundations Dictionary is employed to analyze how frequently candidates used words representing various moral foundations, distinguishing between positive and negative references to each. Consistent with CMSC, the Republican candidates were more likely to use negative-valence moral terminology, describing violations of moral foundations. The direction of some partisan differences contradicts the expectations of MFT. Donald Trump, a novice candidate, was an exception to the typical Republican pattern, making markedly lower overall use of moral-foundations vocabulary.
Keywords: moral foundations theory; motivated social cognition; presidential debates; partisan differences; ideology; Donald Trump

1. Introduction

In an era of marked concern over ideological and partisan polarization in the United States and other Western democracies, an increasing volume of research in political psychology investigates whether the differences between the political left and right reflect deep-seated predispositions, emotion-laden impulses, or might even have roots in innate characteristics. Probing below the level of issue opinions, political scientists and psychologists have argued that relatively intuitive or near-automatic responses often emerge in individuals as they react to various political contexts or stimuli. Further, these scholars maintain, there are clear and systematic differences in the ways that conservatives and liberals tend to respond to certain situations, reflecting a fundamental left/right distinction in political cognition (Jost et al. 2003; Amodio et al. 2007; Graham et al. 2009; Carraro et al. 2011; Hibbing et al. 2014).
Thus far, the vast majority of research in this vein has examined left/right differences among mass publics, often using convenience samples of students or online questionnaire respondents. In applying this research to actual political systems, an important question is whether elite political actors—politicians themselves—exhibit left/right differences in intuitions and behaviors that are similar to those among the general public. Unfortunately, researchers are constrained in their ability to assess the political intuitions of elites, given the relatively scripted nature of politicians’ behavior and speech on the public stage, along with a lack of access to elites in their more private or unguarded moments (but for attempts, see Neiman et al. 2016; Lipsitz 2018; Jones et al. 2018).
In order to take steps toward examining the political cognition of elites, one approach is to focus on their use of language in relatively unscripted occasions of political speech. In this paper, such an approach is followed in exploring whether there may be systematic partisan differences in the moral impulses that animate high-level political candidates. The study is motivated theoretically by two prominent frameworks from the field of political psychology, each of which highlights systematic differences in the ways that left-leaning and right-leaning individuals think and make judgments.
First, moral foundations theory (MFT) directs attention to left/right differences in the arousal of certain moral intuitions, also called moral foundations.1 MFT studies of mass publics have found that the political views and judgments of left-leaning individuals (often called liberals or progressives) arise disproportionately from impulses regarding fairness and empathetic care. Right-leaning individuals (i.e., conservatives), meanwhile, are said to be considerably more motivated than are liberals by impulses relating to in-group loyalty, obedience to authorities seen as legitimate, and a concern with purity and avoidance of contamination (Haidt 2007; Haidt and Graham 2007; Graham et al. 2009).
Second, the study draws upon the theory of conservatism as motivated social cognition (CMSC), which focuses on the underlying motivations that distinguish right- from left-leaning individuals’ thinking about politics. According to this framework, environmental stimuli evoking fear, threat, and uncertainty arouse certain existential and epistemic motivations, such as a concern with preventing losses or a desire for certainty. These motives, in turn, probabilistically influence individuals’ attraction to politically conservative beliefs, namely resistance to change and endorsement of status-quo hierarchies (Jost et al. 2003). Specifically relevant for the current study, CMSC holds that there will be an association between an individual’s level of conservatism and their focus on elements in their environment that may be perceived as threatening, frightening, or unpleasant (Janoff-Bulman 2009; Jost et al. 2017).
While not necessarily “competing” theoretical accounts, MFT and CMSC place emphasis on different facets of political thinking (Leeuwen and Park 2009). MFT focuses on left/right differences in the strength of specific moral intuitions. CMSC, on the other hand, highlights a broad negative/positive dimension, in which conservatives pay greater attention than liberals to negative stimuli. Both accounts are considered in the current study, exploring the manner in which political elites employ positive and negative moral concepts. Automated content analysis is applied to the U.S. presidential primary debates held in late 2015 and early 2016, as Democratic and Republican candidates jockeyed for advantage in appealing to their same-party voters during the nominating contest. Primary debates are appropriate for analysis of intuitive thinking (or fast thinking, see Kahneman 2011) in that they represent a relatively unscripted occasion for political speech. They are also pressure-packed, suggesting that candidates operate under conditions of significant cognitive load. To examine verbal use of moral foundations, more than 800 segments of candidate speech are examined, each representing an extended response or statement during a debate. The data are drawn from five Democratic and five Republican debates that were closely matched in timing.
The study finds that some partisan differences are evident in the use of words associated with certain moral foundations, particularly after controlling for the issue domain under discussion. However, some of these differences are not in the direction anticipated by MFT. In addition, by distinguishing between words that describe positive and negative aspects of each moral foundation, the study finds clearer support for CMSC’s contention that conservatives tend to focus on negative elements. In the debates, the experienced Republican presidential candidates were more likely than their Democratic counterparts to use words describing the violation of moral concepts.
Not all of the Republican candidates were longtime politicians or establishment conservatives, however. A secondary research question in this paper asks whether the novice candidate Donald Trump, given his populist, anti-establishment posture, used a vocabulary with a distinctive moral palette. The results show differences between Trump and his fellow Republican candidates, with Trump most notable for his low overall use of words associated with moral foundations. This study, in combination with the relatively few others of its type (Motyl 2012; Clifford and Jerit 2013; Sagi and Dehghani 2014; Neiman et al. 2016; Lipsitz 2018), should help scholars explore both the potential and the pitfalls of using automated text analysis to provide indications of left/right differences in cognition among political elites.

2. Moral Foundations, Motivated Cognition, and Ideological Differences

The notion that partisan or ideological differences among individuals rest in part on psychological affinities or identity-based motivations is an idea with a long history (see, among many others, Converse 1964; Pratto et al. 1994; Green et al. 2004; Mason 2018). While a rationalist approach to political behavior suggests that people first decide on their issue positions or ideological worldview and then choose a party affiliation that comports with those views, the intuitionist perspective suggests that partisan affiliations reflect feeling as much as reasoning. Some scholars of biopolitics even argue that there are innate characteristics that tend to predispose individuals toward liberalism and conservatism (Hibbing et al. 2014). Other social and political psychologists, meanwhile, point toward personality traits and deep-seated core values as individual differences that may underlie ideological or partisan differences (e.g., Block and Block 2006; Carney et al. 2008; Gerber et al. 2010).
Drawing upon social psychology, cultural anthropology, and evolutionary theories, moral foundations theory (MFT) is one leading attempt to provide a systematic account of the deep-seated basis of ideological differences (Haidt 2007; Haidt 2012). MFT maintains that individuals worldwide draw upon five clusters of moral intuitions, which are akin to emotional responses. Its theorists suggest that each of these moral impulses is present in all societies, to varying degrees, and perhaps in all humans. Each moral foundation, MFT posits, became imprinted as a cognitive tendency through a process of evolutionary selection. Behaviors consistent with the moral foundations likely were adaptive for individuals who lived in the foraging tribes that were the relevant social groups for the vast majority of human history. Accordingly, moral foundations are seen as being nearly instinctual and automatic, rather than a product of rational reasoning (Haidt 2001; Haidt 2012).
MFT proposes that five core values represent humans’ foundational moral intuitions (Haidt and Joseph 2004; Haidt and Graham 2007; Graham et al. 2011):
  • Care/Harm: A concern with nurturance and empathy, and a strong aversion for (or outrage regarding) situations evoking harm or cruelty;
  • Fairness/Cheating: A concern with just treatment and proportionality of rewards under a system of social rules2;
  • Ingroup/Betrayal: Sometimes called Loyalty, a concern with primordial allegiance to one’s own family, clan, tribe, nation, or another relevant social group;
  • Authority/Subversion: A concern with adherence to traditional or legitimate authorities, whether they be sociopolitical hierarchies or doctrinal authorities (e.g., prescribed religious practices);
  • Purity/Degradation: Sometimes called Sanctity, a concern with avoiding contamination and elevating oneself above base, animalistic behaviors.
Although MFT posits that some adherence to these five foundations is near-universal in human societies, empirical MFT studies find that contemporary left/right distinctions in political ideology are associated with differential arousal of these five foundations (Haidt and Graham 2007; Graham et al. 2009; Graham et al. 2011). Liberals primarily are animated by impulses surrounding Care and Fairness, whereas conservatives tend to accord considerably more importance than liberals do to Ingroup, Authority, and Purity. These latter three, according to MFT, are binding moral foundations that are inclined toward maintaining social order, traditional mores, and well-defined social roles and structures. By contrast, Care and Fairness are individuating values that tend to prioritize the claims and nurturance needs of individuals (or social subgroups), sometimes against the claims of the more powerful (Haidt 2012).3
In this view, then, differential emphases on the five moral foundations underlie many ideological differences between the left and right. In various survey- and lab-based studies deploying MFT, study participants who self-identify as liberal or left-leaning, regardless of their world region of origin, tend to indicate that Care and Fairness are by far the most relevant of the five foundations in determining whether a situation or behavior is morally wrong. Conservatives, by contrast, typically demonstrate a five-channel morality. That is, they are motivated by all five foundations, but give significantly more weight to Ingroup, Authority, and Purity than liberals do (Haidt and Graham 2007; Graham et al. 2009). The left/right differences regarding moral intuitions have been most apparent on social and cultural issues, rather than economic ones.
Nearly all published empirical work on judgments and behaviors relevant to moral foundations has focused on the mass public. However, to move beyond lab experiments, convenience samples, and public-opinion surveys to the realm of the political system itself necessitates establishing whether elite political actors demonstrate left/right differences in moral intuitions that are similar to those found among lay respondents. This remains an open question. On one hand, politicians are political sophisticates, with more political and policy information available to them and more well-developed ideological worldviews than is typical among average citizens. This sophistication, as well as the strategic nature of much elite political action, might suggest that politicians are more likely to engage in effortful thinking about politics and policy, and are thus less likely to make gut-level political and policy judgments. On the other hand, politicians are no less human than other members of the public. They should not be immune to the habitual, emotion-laden cognition that appears to be common in political thinking (Marcus et al. 2000; Westen 2007; Hetherington and Weiler 2009). Indeed, it is possible that the political convictions that induce people to seek elective office emerge from particularly strong, deep-seated motivations that are not always apparent at the level of conscious awareness.
One unobtrusive means of examining the moral intuitions of politicians is to analyze their use of words. To assist in analyzing moral rhetoric, Graham et al. (2009) developed the Moral Foundations Dictionary, a systematically derived list of words and word stems that pertain to the evocation of each of the five foundations. They initially applied this dictionary to examining sermons spoken in liberal and conservative religious denominations (Graham et al. 2009). In subsequent work using the Moral Foundations Dictionary and focusing on political elites, Neiman et al. (2016) concluded that the rhetoric of politicians generally fails to follow the script anticipated by MFT. Lipsitz (2018), however, found, with some nuances, support for MFT in her analysis of the differences between Democratic and Republican campaign advertisements. Elsewhere, Fulgoni et al. (2016) found some relevant distinctions in moral word usage between liberal and conservative media outlets, while Motyl (2012) used the Moral Foundations Dictionary to describe the differences between Democratic and Republican party platforms across time.
When comparing political speech or text among liberals and conservatives, other theoretical perspectives emerging from political psychology might be operationalized, considering them alongside or in combination with MFT. One of the most influential frameworks regarding the fundamental left/right differences is the theory of conservatism as motivated social cognition (CMSC). CMSC views conservatives as highly attuned to situations or stimuli evoking threat (Jost et al. 2003; Jost et al. 2017). An oft-cited meta-analysis concludes that conservative beliefs are “adopted in part because they satisfy various psychological needs” and that “political conservatism [is] an ideological belief system that is significantly (but not completely) related to motivational concerns having to do with the psychological management of uncertainty and fear” (Jost et al. 2003, p. 369). Similarly, Janoff-Bulman (2009, p. 120) holds that left/right ideological differences “reflect the fundamental psychological distinction between approach and avoidance motivation. Conservatism is avoidance based; it is focused on preventing negative outcomes (e.g., societal losses) …”
Consistent with CMSC, experimental evidence indicates that “negative information [in the form of unpleasant words or pictures] exerts a stronger automatic attention-grabbing power in the case of political conservatives, as compared to liberals” (Carraro et al. 2011, p. 5). Fessler et al. (2017) found that conservatives were more likely than liberals to rate false statements about potential hazards as credible in comparison to false statements about potential benefits. Other research shows conservatives to be more sensitive to disgust than liberals, a difference that translates into more severe moral judgments among conservatives on a range of sociocultural issues (Inbar et al. 2009; Inbar et al. 2012; Eskine et al. 2011; Smith et al. 2011). This type of ideological difference, where aversive stimuli arouse conservatives more than liberals, has even been detected in individuals’ physiological processes. For example, skin conductance measures of electrodermal activity after subjects viewed unpleasant images led Dodd et al. (2012) to conclude that “the political left rolls with the good and the political right confronts the bad.”
One study relevant to both MFT and CMSC examined a corpus of political news coverage in partisan-leaning American media sources, and concluded that researchers studying moral foundations also should give attention to negativity bias among conservatives. Drawing upon the distinction between positive- and negative-valenced words in the Moral Foundations Dictionary, Fulgoni et al. (2016, p. 3735) found that “intriguingly, while liberals were concerned with both the vice and virtue aspects in their moral foundations, conservatives seemed to focus only on the vice aspect, denouncing the lack of loyalty and respect for authority.” Similarly, the empirical analysis below distinguishes between positively and negatively valenced moral-foundations rhetoric while examining liberal/conservative differences.

3. Candidates’ Primary-Debate Responses as a Data Source

Existing research using the Moral Foundations Dictionary focuses on political texts or speech that most likely were carefully crafted, for example, party platforms, political advertisements, or politicians' official Twitter feeds. This scripted text may be unsuited for assessing a speaker's intuitive or deep-seated psychological orientation toward particular moral foundations. Instead, the author makes a case for analyzing the candidates' speech in primary debates. Presidential primary debates are used as occasions to analyze politicians' use of moral vocabulary for three main reasons. First, televised debates are consequential in the United States: They are prominent, highly visible political spectacles, often seen as important in helping the media and voters make sense of candidates' views and character (Fridkin et al. 2007). This is particularly the case for debates in the lower-information context of the primary, as opposed to the general-election, season (McKinney and Warner 2013).
Second, primary debates are held separately for each party, with an anticipated audience consisting disproportionately of active and involved supporters of that party (the so-called party base). Thus, it is anticipated that primary debates represent an occasion when Democratic and Republican candidates are particularly distinguishable from one another and concerned with demonstrating their ideological bona fides. In general-election debates, by contrast, only one candidate represents each party, and each may strategically aim to court the median voter in the upcoming general election, potentially toning down their more ideological appeals and emotional rhetoric. Indeed, Lipsitz (2018) showed that television advertising by candidates in 2008 tended to be more moderate or restrained in its moral vocabulary in the general-election stage than during the primary phase. Graham et al. (2009), in the debut application of their Moral Foundations Dictionary, decided not to analyze the convention acceptance speeches of presidential candidates as they initially intended, finding those speeches to be “so full of policy proposals, and of moral appeals to the political center of the country, that extracting distinctive moral content was unfeasible …” (2009, p. 1038).
Third, in contrast to party platforms or political TV advertising, which by their nature tend to be highly polished and developed by groups of people, speech during the debates tends to be at least partially unscripted and of the candidates' own making. The unplanned nature of such speech is advantageous, since both MFT and CMSC hold that psychological predispositions manifest themselves as quick, gut-level responses to situations. Intuitive responses are particularly likely when individuals are operating under significant stress, or cognitive load (Kahneman 2011; Bargh and Chartrand 1999). By contrast, effortful (reflective) cognition, such as that involved in crafting campaign advertising, is much more likely to invoke rationalist considerations. Thus, the fact that primary debates are somewhat freewheeling, unpredictable, pressure-filled, and fast-paced events may be expected to accentuate authentic emotional or intuitive speech patterns, eliciting the candidates' "natural inclination" (Jordan et al. 2019, p. 3).
Granted, it is well known that candidates rehearse for debates, probably practicing canned responses to likely questions. In addition, if politicians can predict the emotional responses their word choices are likely to evoke among the public, they might try to use moral vocabulary strategically. That being said, the relatively loose and informal format of many primary debates—including unanticipated questions, moderator interruptions seeking clarification, inter-candidate crosstalk, attacks, and counter-claims, and in some cases audience involvement in questioning—means that even very experienced candidates cannot be entirely sure where the conversation will go. In such a setting, the candidates seem more likely to speak from their heart (or their gut). By contrast, in the case of paid advertising or social media campaigns, messages are likely to have been strategically formulated for anticipated audiences, often by paid consultants. Even Twitter messages, where there is textual evidence that members of Congress display significant left/right value differences in language use (Jones et al. 2018), often are produced by professional communications staff.
Primary debate responses therefore seem a reasonable venue for investigating partisan differences in the relatively unscripted use of words relating to the five moral foundations. The theoretical discussion and literature review in the prior section suggests two hypotheses:
Hypothesis 1.
Based on MFT’s description of differences in the deep-seated moral intuitions of liberals and conservatives, Democratic candidates are expected to make heavier use than Republicans of wording associated with the Care/Harm and Fairness/Cheating foundations. Republican candidates, on the other hand, are expected to emphasize Ingroup/Betrayal, Authority/Subversion, and Purity/Degradation concepts to a greater degree than Democrats.
Hypothesis 2.
Given CMSC’s evidence on the tendency of conservatives to focus on aversive stimuli and potentially harmful outcomes, Republican candidates are expected to emphasize the negative aspects of moral foundations in their rhetoric, focusing more than Democrats on violations of particular moral commitments.

4. Data

The focus in this paper is on the Democratic and Republican candidates during the 2015–2016 primary campaign, as primary debates across numerous U.S. states helped each party's electorate evaluate candidates and winnow the field in advance of the 2016 national election. The Democratic Party had clear frontrunners from the early going, Hillary Clinton and Bernie Sanders. The other Democrats suspended their campaigns by the end of 2015, with the exception of Martin O'Malley, who withdrew on 1 February 2016. By contrast, the Republican field, at one point including 17 announced candidates, was slower to shrink, ultimately being reduced in March 2016 to Ted Cruz, John Kasich, and Donald Trump.
Ten debates were selected—five for each party—with several goals in mind: To match relatively closely in time across the two parties, thereby controlling as best as possible for political context; to capture both early and later dynamics of the front-loaded nominating season; to feature all of the candidates who wound up being the most competitive within their party's field; and to include a variety of formats and media sponsors (see Table 1).4 With the exception of the first debates for each party, which were a month apart in September and October 2015, each of the remaining debates is separated by no more than four days from a companion debate in the other party.
The full-text transcript of each debate (gathered from Woolley and Peters 2018) was read and converted to a plain-text file, while correcting the small number of typographical errors and misspellings. These files were then separated into chunks, or segments of text. Each segment comprises a particular candidate’s response to a specific debate question or discussion topic. Any interruptions or extraneous text (e.g., follow-up questions from moderators, brief interjections from competing candidates, indications of audience applause, etc.) were deleted, such that the remaining words represent a candidate’s entire response on a particular topic, up to the point at which the debate moved on to a new issue or to a different candidate. In debates where candidates made opening and closing statements, these were included as individual segments, although such statements often referred to multiple topics or issues. The opening and closing statements, representing 10% of all segments, present a potentially interesting contrast to the remainder of the corpus, as candidates are probably more likely to script them in advance.
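As an illustration only, the segmentation step described above can be sketched in code. The transcript excerpt, the speaker set, and the `extract_segments` helper below are hypothetical: real debate transcripts follow broadly similar "SPEAKER:" turn-taking conventions, but formats vary by source, and the paper does not state that its preprocessing was automated.

```python
import re

# Hypothetical transcript excerpt. Real transcripts use similar
# "SPEAKER: ..." conventions, but the exact format varies by source.
transcript = """CLINTON: I believe we must care for families and children.
MODERATOR: A quick follow-up on that point.
SANDERS: The economy is rigged against working people.
(APPLAUSE)
TRUMP: We are going to win, believe me."""

CANDIDATES = {"CLINTON", "SANDERS", "TRUMP"}  # illustrative subset

def extract_segments(text):
    """Split a transcript into (speaker, response) pairs, keeping only
    candidate turns and dropping moderator questions and stage
    directions such as (APPLAUSE). A fuller implementation would also
    accumulate responses that span multiple lines."""
    segments = []
    for line in text.splitlines():
        match = re.match(r"^([A-Z]+):\s*(.*)$", line)
        if match and match.group(1) in CANDIDATES:
            segments.append((match.group(1), match.group(2)))
    return segments

segments = extract_segments(transcript)
```

Applied to the toy transcript, this yields three candidate segments with the moderator turn and the applause cue removed, mirroring the cleaning rules described in the text.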
The 832 text segments resulting from this approach constitute the units of analysis for the study. Segments had a median length of 176 words, with a range from 19 to 864 words. Each segment was coded for the candidate speaking, the date of the debate, the media organization moderating the debate, and, importantly for considerations of which moral concerns might be evoked, the issue domain or policy topic under discussion.
For the substantive issue coding, each segment was assigned exclusively to one topic that was its predominant focus. In addition to opening/closing statements, there are several categories: Security includes national security, terrorism, war and peace, military budget, intelligence, surveillance, foreign aid, sovereignty, and events in specific foreign countries, as well as domestic security topics relating to crime, policing, prisons, illegal drugs, and homeland security. Economy/Budget refers to subjects such as economic management and trends, monetary policy, economic regulation (including environmental regulation), discussions of specific industries, international trade, and all topics relating primarily to fiscal policy, taxes and budgeting (except military spending). Immigration includes topics related to legal or unauthorized immigration and refugees. Social welfare includes discussions of poverty, inequality, education, and healthcare, as well as specific social insurance or entitlement programs, including veteran benefits. Culture/Identity comprises issues concerning race relations, ethnicity, gender, and religion, as well as social regulations such as gun control, abortion, and same-sex marriage. Campaign Dynamics/General Politics includes campaign strategy, views of and events pertaining to competing candidates and their records, campaign funding, polls, media coverage, interest-group politics, civic engagement, discussions of Congress, the Supreme Court, then-President Obama, and other U.S. political actors. Self refers to segments focused on the candidate’s own experience, character, record, values, governing style, priorities, or temperament. Finally, a residual Other (miscellaneous) category accounts for the balance of the segments.5
Table 2 provides some summary statistics (with the social welfare and culture/identity topics combined for display purposes), showing how often each candidate addressed the various topics. Overall, just over half of the segments (54%) were spoken by Republicans. Given the more rapid winnowing of the Democratic field, the dataset captures considerably more segments for Clinton and Sanders, reflecting the lack of other competitors dividing the time with them, than for any individual Republican speaker. The major candidates within each party dealt with most of the substantive topics to a comparable degree. However, the novelty of the Trump candidacy is apparent in that he was asked about himself (or in some cases, turned the topic of conversation to himself) approximately three times as often as the other candidates spoke about themselves (15% compared to 5%, p < 0.001). In comparing the two parties as a whole, the only substantive categories addressed to a significantly different degree were social welfare and culture/identity issues, which received more attention in Democratic debates. The Republicans had significantly more opening and closing statements due to that party’s greater number of candidates.
The automated text-analysis program Recursive Inspection of Text Scanner (RIOT Scan) is designed to identify the frequency of words or word stems from pre-specified dictionaries in a particular segment of text (Boyd 2016). In this case, the Moral Foundations Dictionary developed by Graham et al. (2009) is used, and the program identifies the percentage of words reflecting the five categories of Care, Fairness, Authority, Ingroup, and Purity.
Importantly, it also decomposes each of the five categories into words carrying a positive valence—identified in the Moral Foundations Dictionary as virtue words—and those carrying a negative valence—vice words. Graham et al. (2009, p. 1039) conceived of vice words as those expressing a violation of the foundation in question. For example, for the Care/Harm foundation, word stems such as compassion* and secur* are among the virtue words (representing Care), whereas violen* and abandon* are among the vice (i.e., Harm) words. The disaggregation of positive and negative words within each foundation is useful for judging the potential negativity bias of the Republican candidates (Hypothesis 2). A tendency of Republicans to focus more on vice words would be broadly consistent with the attention of conservatives to threat, highlighted by the CMSC perspective (Jost et al. 2003).
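To make the dictionary mechanics concrete, the following is a minimal sketch of prefix (word-stem) matching with a virtue/vice split. Only the four Care/Harm stems quoted above are included; the actual Moral Foundations Dictionary contains several hundred entries across all five foundations, and RIOT Scan's internal implementation may differ from this sketch.

```python
import re
from collections import Counter

# Tiny excerpt of a Moral Foundations Dictionary-style lexicon,
# limited to the four Care/Harm stems mentioned in the text.
MFD_STEMS = {
    "compassion": ("care", "virtue"),
    "secur":      ("care", "virtue"),
    "violen":     ("care", "vice"),
    "abandon":    ("care", "vice"),
}

def score_segment(text):
    """Count (foundation, valence) stem matches in a text segment,
    returning the counts plus the segment's total word count."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for word in words:
        for stem, key in MFD_STEMS.items():
            if word.startswith(stem):
                counts[key] += 1
                break  # count each word at most once
    return counts, len(words)

counts, n_words = score_segment(
    "We must show compassion and security, not violence or abandonment."
)
pct_moral = 100 * sum(counts.values()) / n_words  # share of MFD words
```

On this ten-word sentence the sketch registers two Care virtue words and two Harm vice words, so 40% of the words are moral-foundations matches; real debate segments, as reported below, average closer to 2%.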
In addition to tabulating words representing each moral foundation, the software provides a variety of additional statistics for each segment, including its length in words and the percentage of its words dealing with general morality. This latter category refers to words indicative of normative judgments (e.g., good, immoral*), but which do not invoke a specific moral foundation.
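To make the dictionary-matching procedure concrete, the following is a minimal Python sketch of how stem-based counting of virtue and vice words could work. The two-entry `MINI_MFD` dictionary and the function names are illustrative stand-ins, not the actual RIOT Scan code or the full Moral Foundations Dictionary.

```python
import re

# Illustrative two-category stand-in for the Moral Foundations Dictionary
# (Graham et al. 2009); entries ending in "*" are stems matching any suffix.
MINI_MFD = {
    ("Care", "virtue"): ["compassion*", "secur*"],
    ("Care", "vice"): ["violen*", "abandon*"],
}

def stem_pattern(entries):
    """Compile one whole-word regex covering every word or stem listed."""
    parts = [re.escape(e[:-1]) + r"\w*" if e.endswith("*") else re.escape(e)
             for e in entries]
    return re.compile(r"\b(?:" + "|".join(parts) + r")\b", re.IGNORECASE)

PATTERNS = {key: stem_pattern(words) for key, words in MINI_MFD.items()}

def score_segment(text):
    """Percentage of the segment's words matching each (foundation, valence)."""
    n_words = len(re.findall(r"\b\w+\b", text))
    return {key: 100 * len(pat.findall(text)) / n_words
            for key, pat in PATTERNS.items()}

seg = "We must end the violence and bring security and compassion to all."
scores = score_segment(seg)  # Care-virtue: 2 of 12 words; Care-vice: 1 of 12
```

In this toy segment, *security* and *compassion* match Care-Virtue stems while *violence* matches a Care-Vice stem, so the segment scores roughly 16.7% virtue and 8.3% vice for the Care/Harm foundation.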

5. Exploring Differences in Moral Rhetoric

Several initial impressions are apparent from an exploration of descriptive statistics regarding the occurrence of words representing the five moral foundations (see Figure 1). First, the vast majority of words used in a typical spoken response do not fall within any of the five foundations. None of the foundations represents even 1 percent of the average segment. However, these low averages obscure the fact that while many segments have zero words representing any particular foundation, some candidate statements use certain foundations in a much more pronounced way. As Lipsitz (2018, pp. 64–65) noted in her study of political ads, the Moral Foundations Dictionary contains only a tiny fraction of the words in the English language, so it is unrealistic to expect high numbers of morality words to appear in any given chunk of speech. Considering all candidates, 2.2% of the words in the mean segment represented any of the five foundations (similar to the percentage found in Graham et al. 2009 and Lipsitz 2018). This average percentage of moral-foundations words is slightly but not significantly higher among Democrats than Republicans (2.27% vs. 2.10%, p = 0.11; all significance tests two-tailed).
Second, the five foundations appear with quite unequal frequency. The words relating to Ingroup, Care, and Authority are used much more than Fairness or Purity words. As in other studies of political text or speech (Kraft 2018; Lipsitz 2018; Clifford and Jerit 2013), Purity/Degradation words appear very infrequently, constituting something of a rare event. Since mentions of Purity words are minimal, this paper refrains from drawing any conclusions about that moral foundation, as explained in greater detail below. Theoretically speaking, this is unfortunate, since it was the Purity-related emotion of disgust and its opposite awe (elevation) that initially inspired much of Haidt’s research leading to MFT (Haidt 2012). Realistically, however (and despite Donald Trump’s seemingly frequent labeling of unwelcome things and behaviors as “disgusting”), there seems to be relatively little use of Purity words in contemporary political talk.6
Third, examining the relative height of the bars in Figure 1, the partisan differences in the usage of the five foundations do not comport very strongly with MFT’s expectations. Although, as Hypothesis 1 anticipates, Democratic candidates used a higher percentage of Care and Fairness terminology in their statements, and Republicans used more Authority words, these differences are slight and none is statistically significant, according to t-tests for differences in the means. The one set of partisan differences that is significant—for Ingroup—is in the unanticipated direction, with Democrats, on average, using more Ingroup words than Republicans (p = 0.04). Purity words, as noted, occur very infrequently in either party’s debates.
Fourth, however, additional partisan differences appear if positive-valence (virtue) words are distinguished from negative-valence (vice) words within each foundation. As the top part of each bar in Figure 1 shows, the Republicans used the vice dimension of four of the five foundations more frequently than Democrats. Moreover, in four of five cases Democrats used more virtue words (with a tie in the fifth, Authority). If the total usage of virtue words and of vice words is summed across all five moral foundations, the partisan differences are significant for the combined total for virtue (Democrats 1.69%, Republicans 1.40%, p = 0.001), while differences appear but do not reach standard levels of statistical significance for vice (Democrats 0.58%, Republicans 0.69%, p = 0.11).
With the five foundations disaggregated between virtue and vice, the partisan differences are statistically significant for four types of words: Care-Virtue (Democrats higher, p = 0.02); Ingroup-Virtue (Democrats higher, p = 0.01), Purity-Virtue (Democrats higher, p = 0.003); and Authority-Vice (Republicans higher, p = 0.04). This pattern provides some initial support for Hypothesis 2 (conservatives’ negativity bias), but appears unsupportive of Hypothesis 1 concerning differential party appeal to moral foundations. Of the four significant differences, only two (Care-Virtue and Authority-Vice) follow the expectation that the liberal party will emphasize the individuating moral foundations and the conservative party will embrace the binding foundations.
The significant Democratic advantage in the use of Ingroup-Virtue words, surprising given the expectations of MFT, merits closer examination. To investigate how candidates of each party used these loyalty-oriented words, the text of every segment containing at least two such words was gathered, yielding a similar number of segments for each party (106 Republican, 100 Democratic). Figure 2 shows the most frequently used Ingroup-Virtue words in these segments by party. The Republican candidates, consistent with a nationalistic orientation, were more inclined to mention the United States (united is an Ingroup word) as well as the word nation. Democrats, on the other hand, were more likely to talk about community or communities and to use the adverb together. In general, many Ingroup-Virtue words in the Moral Foundations Dictionary seem quite consistent with a social and political philosophy that stresses communitarian or mutualistic obligations (e.g., collective and solidarity, although those words were not much used in the debates). These sentiments are reflected in the proverb, “It takes a village to raise a child,” which formed the basis for the title of a book by then-First Lady Hillary Clinton (1996), and in her 2016 campaign slogan “Stronger Together.”
Graham et al. (2009) found a similarly unexpected difference with respect to Ingroup words in their analysis of the sermons of liberal and conservative churches. In that case, the liberal denomination (Unitarian Universalists) made more Ingroup references than the conservative denomination (Southern Baptists). By examining the sermons’ Ingroup words in context, the authors provided an explanation, concluding “liberals were much more likely than conservatives to use these words in order to reject the foundational concerns of ingroup loyalty and group solidarity” (Graham et al. 2009, p. 1039, emphasis added). The interpretation in the current study is different, however, namely, that the Democratic candidates spoke affirmatively about certain positive-valence Ingroup concepts (such as community) more than the Republicans did.
Whereas an examination of how candidates used Ingroup-Virtue words in context helps illuminate the Democrats’ heavier use, the same cannot be said for the use of Purity-Virtue words, in which Democrats also were higher, despite MFT’s expectations. Although the Purity-Virtue list includes words that bring to mind precepts of religious and social conservatism (holy, sacred*, virtuous), it also includes words with more secular applications that may resonate with left-leaning environmentalists, social-justice advocates, and reformers (e.g., innocent, pure*, integrity). The truth of the matter, however, is that Purity-Virtue words saw such scant use by either party that there is too little information to draw conclusions. Even less in evidence are Purity-Vice words. Given the rareness of these occurrences, the multivariate analysis below omits any separate examination of Purity words.7
If the segments in the dataset are sorted by issue domain, it is apparent that the topic of discussion has some relationship to the amount of moral foundations rhetoric (see Table 3, top panel). For instance, immigration emerges as a topic provoking a somewhat more moralistic vocabulary, not only in the case of the Ingroup foundation (for which one of the vice word stems is immigra*), but also for the Authority foundation and for general morality words. Social welfare topics tended to evoke Care rhetoric while making sparse use of Ingroup, Authority, and (surprisingly) Fairness words. Finally, compared to other substantive topics, segments on economy/budget topics tended to feature fewer Care, Ingroup, and Authority words.
Other than the culture/identity segments, the candidates’ opening and closing statements, likely the least spontaneous responses in a typical debate, were the topic with the heaviest inclusion of Fairness words (0.31%), nearly double the percentage found in segments on other topics (0.16%; p = 0.001). Given the opportunity afforded by opening and closing statements to choose one’s own topics, the candidates may have made special effort to address Fairness, a moral foundation that otherwise was evidently not very much invoked in these debates.8
Turning to the usage of moral rhetoric by individual candidates (bottom panel of Table 3), perhaps most distinctive are the low levels of moral-foundations language used by Donald Trump. Among the candidates with at least three dozen segments in the dataset (who might be considered major candidates), Trump showed the absolute lowest use of three of the five moral foundations: Care, Fairness, and Ingroup. Using t-tests to compare Trump to all 15 of the non-Trump candidates, Trump was significantly lower than the rest for Care and Ingroup, and nearly so for Fairness (p = 0.06). If all five sets of concepts are combined, Trump used moral-foundations language approximately one-third less often than the other candidates (Trump mean = 1.51%, mean of other candidates combined = 2.27%; p < 0.001). Further investigation indicates that Trump’s uniqueness primarily stemmed from his low use of virtue words (1.00% Trump versus 1.61% others, p < 0.001) rather than from differences in vice rhetoric (0.51% vs. 0.66%, n.s.).9 Media coverage during the campaign sometimes portrayed Trump as being prone to drawing pessimistic or even dystopian conclusions about the contemporary United States, notably in his acceptance speech at the Republican National Convention (Meyerson 2016). His tendency to use very few positive-valence words about the five moral foundations offers some tentative support for these anecdotal impressions.
Although Trump apparently eschewed rhetoric evoking the moral foundations, he was not averse to using words with more general moral connotations. Of the major candidates, Trump was the highest in percentage use of general morality terminology, words like bad, correct, or offensive that do not directly derive from any of the five foundations. These words constituted 0.53% of the average Trump segment, compared to an average of 0.35% in all other candidates’ segments combined (p = 0.01). This pattern may arise in part as a function of Trump’s tendency to use superlatives when speaking, but also from his predilection for casting judgment, whether in the form of bitter aspersions of his critics or high praise of his supporters. Other analyses of text corpora have similarly found Trump to be an outlier in his political speech. For example, examining a wide range of texts, Jordan and Pennebaker (2017) found that Trump displayed far lower use of analytic language than the norm among major-party presidential candidates in the most recent five election cycles. Oliver and Rahn (2016) argued that Trump’s style of speech stood out as more populist than other 2016 candidates in its brief sentences, simple vocabulary, anti-elitism, and appeals to common sense.

6. Multivariate Analysis of Partisan and Trump Differences

6.1. Methodological Considerations

Having described some general patterns regarding the occurrence of moral rhetoric in the debate segments, the paper next addresses the task of estimating a relatively simple multivariate model of differences in use of Moral Foundations words. With a limited number of candidates and observational data regarding a single election cycle, no claim is made that the resulting estimates demonstrate any sort of causal relationships, nor that they would hold across other times and places. However, by controlling for the topic under discussion, the model should help assess, if only for the 2015-16 campaign, the claim that the candidates of each party use distinctive moral vocabularies.
The discussion thus far of the descriptive data suggests that in order to examine the differences in the spoken use of moral-foundation words appropriately, it is important to keep four analytical considerations in view:
  • It is very likely that the use of particular moral foundations is, in part, a function of which issue domain is under discussion.
  • Of the candidates in this dataset, one appears distinctive in his low use of the moral foundations. More generally, Trump’s status as a political novice and his idiosyncratic outsider campaign suggest that when examining partisan differences, it is advisable to distinguish Trump from the other GOP candidates. Otherwise, grouping him with the other Republicans may tend to drag down the average use of moral language by that party’s candidates, most of whom are likely to be closer to that party’s establishment values than the populist Trump.
  • Positive-valence and negative-valence aspects of the moral vocabulary (i.e., virtue and vice words) may have distinct partisan dynamics.
  • When looking at the use of specific moral foundations, the most common outcome is that zero percent of the words in a particular segment draw from that foundation.
The first consideration implies the inclusion of appropriate controls for issue domain. Dummy variables are used for each of the substantive policy categories listed in the top panel of Table 3, while the Campaign/Self category is combined with the relatively infrequent Other topic as the reference category. The second consideration suggests controlling for whether the speaker is Trump before reaching any conclusions about partisan differences in moral-foundation usage. Specifically, the model includes a dummy variable to denote the Democratic candidates, with an additional dichotomous variable identifying Trump segments. Given the third point above, the occurrence of each foundation’s virtue words and vice words is estimated separately. For reasons described above, the multivariate analysis omits analysis of the Purity/Degradation category.
Regarding the fourth point, the manner in which the dependent variables (use of moral-foundations words) are distributed argues heavily against ordinary least squares regression (OLS). OLS would fail to take into consideration the dependent variable’s lower bound of zero and its profusion of zero values, thereby tending to bias standard errors downward (King 1989). Instead, negative binomial regression is used, which is appropriate for estimating an overdispersed dependent variable where zeroes occur frequently. However, negative binomial models are designed for event-count data, rather than continuous dependent variables such as the percentages of moral-foundations words produced by the RIOT Scan software. For this reason, the percentages are converted into raw counts of the number of words falling into each category (e.g., the number of Care-Virtue words appearing in a segment).
Count data is typically a function, in part, of the opportunity for the event in question to occur (e.g., the number of flu cases per school will tend to covary strongly with the number of students enrolled in the school). This mathematical regularity is typically modeled by including the logged value of the so-called exposure variable on the right-hand side of the equation, with its coefficient constrained to a value of 1. In the current analysis, the total word count of the text chunk is the relevant exposure variable. Simply put, the longer the segment, the more opportunity the speaker has to say one or more words referring to a particular moral foundation.10

6.2. Results

This section focuses first on the political-party variable, indicating the estimated difference attributable to the speaker being a Democrat, and then on the Trump indicator variable. Both are considered relative to the omitted category, the non-Trump Republican candidates. Since all of the independent variables are dichotomous, it is possible to compare the magnitude of the partisan and candidate effects on moral rhetoric to the differences attributable to the issue domain under discussion. Full regression results are in Appendix A Table A1. However, since the coefficients in negative binomial regressions can be difficult to interpret, Figure 3 and Figure 4 display the average marginal effects, that is, the change in the estimated number of moral-foundation words, for the Democrats and Trump, respectively, in comparison to the non-Trump Republicans.
By separating the positive- from negative-valence words associated with each of the five moral foundations, it is evident that there are indeed some partisan distinctions in the candidates’ vocabulary. As Figure 3 shows, the Democratic candidates tended to use somewhat fewer Care-Vice words (indicating less of a focus on Harm) than the non-Trump Republican candidates. In reference to Hypothesis 1, this is contrary to a straightforward reading of MFT, which predicts that those on the left will invoke the individuating moral foundations more often. In reference to Hypothesis 2, however, the tendency of conservatives to confront negative threats (Jost et al. 2003) and to turn their attention to aversive stimuli (Dodd et al. 2012) seems especially pertinent in the case of Care-Vice vocabulary, which features such words as war, kill, ruin*, and destroy.
The segments spoken by the Democratic candidates (79% of which were from Clinton and Sanders) tended to use more Fairness-Vice (i.e., Cheating) words than the non-Trump Republicans, that is, words such as unfair*, bias*, discriminat*, and prejud*. This focus of Democrats accords with MFT, which anticipates that individuals on the political left focus disproportionately on fairness concerns. Also consistent with the predictions of MFT (though significant only at 0.08) is the Democrats’ less intensive reference to the Authority-Vice foundation, corresponding to the concept of Subversion. The correspondingly greater focus of the non-Trump Republicans on violations of this binding moral foundation is in keeping with MFT’s expectation that conservatives are particularly wary of threats to the traditional social order.
However, as in the bivariate comparisons in Figure 1, the multivariate estimates show the Democrats using significantly more Ingroup-Virtue words than the Republicans. This pattern is clearly contrary to what MFT implies should be the case. Thus, for two of the four moral categories in which partisan differences are perceptible, the regression results contradict MFT’s expectations.
In addition to estimating word use for specific moral foundations, the same independent variables were employed to estimate the total number of virtue words and total number of vice words used per segment, summed across all five MFT categories, including Purity (see last two columns of Appendix A Table A1). Here, controlling once again for issue topic, the results are consistent with the expectations of Hypothesis 2, as Democrats used more virtue words (albeit significant only at p = 0.06). The non-Trump Republicans employed significantly more vice words (p = 0.01), providing further support for CMSC’s contention that conservatives are more likely than liberals to focus on threats and aversive stimuli when deploying moral rhetoric.
Next, the distinctiveness of candidate Trump is considered. Figure 4 shows that, when controlling for issue domain, Trump continues to stand out in four categories of the moral foundations. In comparison to his Republican competitors, Trump used significantly fewer Care-Vice, Fairness-Virtue, Ingroup-Virtue, and Ingroup-Vice words. Further, the Trump gap in these categories far exceeds the partisan difference in each case, and rivals or exceeds nearly all of the issue-domain coefficients. (See Appendix A Table A1, and note the differences between Figure 3 and Figure 4 in the vertical scale.) One exception to Trump’s general avoidance of moral-foundations vocabulary is that he was more likely than his GOP counterparts to use Fairness-Vice words. This greater focus on so-called Cheating concerns renders him more similar to the Democrats. It brings to mind Trump’s frequent claims during the 2016 campaign that the political system was rigged and that the treatment of the United States by other nations in trade agreements and military alliances was unfair.
Overall, however, Trump was primarily distinctive in his notably low use of both virtue and vice words across the moral foundations (see last two columns of Appendix A Table A1).11 Trump’s scant recourse to moral-foundations language may reflect an idiosyncratic sense of morality or, his critics might argue, an amorality. In September 2018, an anonymous op-ed article in the New York Times, ostensibly written by a high-level Trump Administration official, attracted considerable attention for asserting that “anyone who works with him knows he is not moored to any discernible first principles that guide his decision making” (Anonymous 2018).12

7. Concluding Discussion

7.1. Reconsidering Moral Foundations Theory in Conjunction with Motivated Social Cognition

Are there left/right differences in moral intuitions among politicians, as MFT reveals there to be among members of the mass public? Judging by how often candidates of the two major U.S. parties use morally tinged words in the heat of presidential primary debates, the answer appears to be a partial “yes.” However, not all of the partisan differences reflect the expectations of MFT. Instead, the results suggest that when comparing the moral vocabulary of left and right, researchers should give as much (if not more) attention to the question of negative versus positive valence as to the five moral foundations emphasized by MFT.
MFT suggests that the moral impulses motivating liberals primarily reflect the individuating Care and Fairness foundations. By contrast, the theory anticipates that the binding moral foundations (Ingroup, Authority, and Purity) mainly are the province of conservatives. The data analyzed here, however, suggest a more nuanced perspective is in order. First, the candidates’ usage of Purity terminology is minimal (consistent with the scant occurrence of Purity words in other studies using the Moral Foundations Dictionary). Thus, the multivariate analysis did not separately examine the use of Purity vocabulary. Fairness words, though more common than Purity language, also were surprisingly infrequent, as was the case in the political ads analyzed by Lipsitz (2018).
Second and more importantly, the results suggest important distinctions between positive- and negative-valence words representing the various moral impulses. Once controls are introduced for the topic under discussion, the Republican primary candidates (setting aside the idiosyncratic Trump) used more Authority-Vice words than Democrats, consistent with theoretical expectations. Further consistent with MFT, the Democrats were heavier users of Fairness-Vice concepts (i.e., a concern with cheating). Fairness-Vice words (e.g., unfair, injustice, discrimination, exclusion) fit quite well with liberal Democratic ideological orientations.
So far, so good for MFT. However, the non-Trump Republicans used more Care-Vice (i.e., Harm) words than Democrats did, while the Democrats exceeded the Republicans in use of Ingroup-Virtue words. These patterns contradict MFT’s expectation that the left focuses heavily on Care, and the right on Ingroup concepts. In short, reconsideration may be in order regarding the expected relationship of political ideology to the Care/Harm and Ingroup/Betrayal impulses, at least in the case of spoken communication by the 2016 U.S. presidential candidates. The Ingroup-Virtue terminology, in particular, seems quite consistent with solidarity-oriented or communitarian tenets of the political left.13 It is possible that this tendency of Democratic candidates to emphasize Ingroup-Virtue is a characteristic specific to political elites, who tend to have a considerably more well-developed and coherently structured political ideology than lay citizens do (Kinder and Kalmoe 2017). The Ingroup foundation merits further research regarding possible elite/mass differences, especially since MFT has drawn its evidence predominantly from the mass public.
With regard to Republican candidates’ greater emphasis than Democrats on the vocabulary of Harm, these findings suggest greater support for CMSC’s perspective (highlighting the tendency of conservatives to be attuned to threat and aversive stimuli) than for MFT (which anticipates liberals to focus disproportionately on violations of care and empathy). Consistent with the conservatives’ focus on the negative, when words from all five moral foundations were summed, there was a clear tendency for the experienced Republican candidates to use vice words more often than their Democratic counterparts. The current study hints that, at least among these political elites, the negativity bias of conservatives may override their tendency (if any) to focus on binding moral foundations.

7.2. The Trump Difference

Additionally, there is the matter of the ultimate Republican nominee, Donald Trump. Trump repeatedly presented himself during the nominating campaign as sui generis, a figure very different from the existing political establishment of either party. However, was he a different moral type of candidate? In the primary debates examined here, Trump was indeed distinctive: He deviated from his GOP competitors (and from the Democrats) in making the least use overall of moral-foundations vocabulary. The one exception, Trump’s high usage of words related to Fairness-Vice (i.e., cheating), rendered him more similar to the Democrats than to his fellow Republicans.
At the same time, however, Trump was the highest of any of the major candidates in his use of generalized morality (i.e., good/bad) words that lack moral-foundations content. Trump also was the most likely of the major candidates to make himself the subject of discussion, though some of that is likely a function of the type of debate questions asked of such a unique figure.14 These results provide suggestive quantitative support for the notion that Trump is a very different type of politician. By the yardstick of the admittedly limited measures used here, Trump spoke in significantly different ways than the more traditional conservative Republicans who ran against him. Perhaps surprisingly, he deemphasized the vice aspects as well as virtue aspects of various moral foundations. Nor was he similar to the Democratic candidates (a party he had identified with in the past). Future research could examine additional text corpora (e.g., Twitter posts) to see whether they, too, show Trump using distinctive moral vocabularies in comparison to other politicians (see also Jordan and Pennebaker 2017; Oliver and Rahn 2016).

7.3. Limitations of the Study and Considerations for Future Research

The current analysis is admittedly limited in scope. It draws upon 10 debates involving 16 candidates in a single representative democracy, arguably an unusual one (Taylor et al. 2014). Equally important to note, it covers only a six-month period during one recent election campaign involving two political parties. Compared to many other democracies, U.S. parties often are viewed as ideologically broad, thus possibly presenting a more difficult empirical test for finding significant left/right differences than in some systems. However, in recent decades American parties have grown significantly more polarized and their voters better sorted ideologically (Schier and Eberly 2016). Rather than being the last word on the political intuitions of elites, this study’s intention was to lay groundwork and suggest a general approach to measuring left/right differences in unscripted political speech that might be applied to study candidates in other years, and perhaps in other nations. Admittedly, there are few if any other democracies with the sort of wide-open nominating competition seen in American primary elections, although other venues exist in these countries for contesting intra-party leadership battles and for engaging in unscripted political speech. While the author makes no broad claims about the universality of the patterns found, this case study of a single country in a single election year offers a level of detail and context that may be less easily attainable in studies aggregating data from many years or countries.
Nevertheless, to validate the current results regarding MFT and CMSC and make broader claims for generalizability, future studies could widen the lens to other time periods and other types of states and election systems. Within the United States, an examination of primary debates during other presidential election years in which no incumbent candidate was running (e.g., 1988, 2000, 2008) would prove useful. (Election years in which presidents run for reelection generally lack primary debates for the president’s party, since the incumbent is the presumptive nominee.) In particular, including variation in which party held the White House would help to establish whether Republicans’ greater negativity in 2016 might have been time-bound, due to an inclination to criticize conditions under then-President Obama. More generally, the different sets of issues and different perceived threats on the political agenda in other election years would provide additional variation in the electoral context.
The current study can make no claims about how partisan elites’ word choice ultimately influences public evaluations of the parties or candidates. Instead, the proximate goal was to assess the possibility of using debate speech to render some basic assessments of how politicians think about moral concepts and categories, since nearly all MFT and CMSC research to date has focused on mass publics. Ultimately, it may be possible to compare elites’ unscripted talk about political issues to citizens’ conceptions, with the latter perhaps measured through speech at public meetings, in deliberative polls, or in professionally facilitated focus groups. However, much more challenging, and beyond the scope of the types of evidence investigated here, would be the possibility of finding ways to make micro/macro linkages between the moral self-presentation of candidates, the changing opinions of various categories of the mass public, and ultimately, voting behavior and election outcomes. If moral foundations matter politically to the public, then the manner in which political elites and the mass media frame their rhetoric morally may strongly shape the public’s judgments of candidates or of policy issues (Clifford and Jerit 2013; Kraft 2018). Indeed, Lipsitz’s (2018) findings suggest that political elites use moral appeals strategically in order to provoke voters’ emotions, at least in the context she examines, which is scripted campaign advertising.
A key finding of the current study involves the differences between the virtue and vice subcategories when considering how often candidates of each party draw upon particular moral foundations. Future research applying MFT, not only among political elites but also among the mass public, likewise should consider ways to overlay the negative/positive valence dimension on measures of adherence to the moral foundations (see also Kraft 2018, p. 1032; Leeuwen and Park 2009). In other words, wherever possible, researchers should separately consider people’s evaluations of behaviors that uphold, and behaviors that violate, each foundation.
Finally, a methodological consideration for future studies in this vein is whether to continue relying upon automated content analysis. Many social scientists seek to avoid the fallibility of human judgment by automating the measurement of political communication where possible. The current analysis suggests some of the positive potential, but also the pitfalls, of the Moral Foundations Dictionary in particular. Comparing a speaker’s words to a pre-prepared dictionary lends an aura of scientific objectivity, in comparison to interpretive analyses in which researchers engage with political speech post hoc and attempt to intuit the speaker’s meanings. However, there likely is no avoiding some subjective interpretation, especially to take into account the political context of the rhetoric in question. Machine learning techniques for analyzing text may ultimately overcome some of these limitations, but they present their own interpretive challenges (Jones et al. 2018, p. 430; Langer et al. 2019). In the present analysis, a first pass at the data, looking only at the frequency of words from each moral foundation, suggested one, somewhat murky, set of conclusions. Disaggregating the virtue and vice aspects of the foundations suggested different sorts of partisan and candidate differences. The Democratic candidates’ tendency to emphasize Ingroup-Virtue words, initially puzzling from the MFT perspective, made considerably more sense upon examining the actual list of such words in the Moral Foundations Dictionary (e.g., together, collective, commun*), especially in the context of a race in which Democratic candidates sought to appeal to the party’s progressive base.
In short, although the Moral Foundations Dictionary is potentially a very useful tool, it should be deployed with care. It ought to be viewed as a starting point, not as a definitive scorecard, in assessing the types of deep-seated intuitions that individuals—including high-level politicians—draw upon when making political judgments.
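As an illustration of how this kind of dictionary-based scoring works, the sketch below matches tokens against wildcard stems in the style of the Moral Foundations Dictionary. The category names follow the paper’s virtue/vice split, but the word lists are an abbreviated, illustrative stand-in, not the actual MFD entries:

```python
import re

# A hypothetical mini-dictionary in the MFD style: a stem ending in "*"
# matches any continuation (e.g., commun* matches "community", "communal").
# These word lists are illustrative only.
MINI_MFD = {
    "IngroupVirtue": ["together", "collective", "commun*", "nation*"],
    "IngroupVice": ["betray*", "treason", "foreign*", "immigra*"],
}

def compile_patterns(dictionary):
    """Turn each list of stems into one whole-word, case-insensitive regex."""
    compiled = {}
    for category, stems in dictionary.items():
        parts = []
        for stem in stems:
            if stem.endswith("*"):
                parts.append(re.escape(stem[:-1]) + r"\w*")
            else:
                parts.append(re.escape(stem))
        compiled[category] = re.compile(
            r"\b(?:" + "|".join(parts) + r")\b", re.IGNORECASE
        )
    return compiled

def count_category_words(text, compiled):
    """Count matches per category; also return total tokens for a % denominator."""
    tokens = re.findall(r"\b\w+\b", text)
    counts = {cat: len(pat.findall(text)) for cat, pat in compiled.items()}
    return counts, len(tokens)

patterns = compile_patterns(MINI_MFD)
segment = "Our community must stand together as a nation against betrayal."
counts, n_tokens = count_category_words(segment, patterns)
```

Dividing each category count by the token total gives the percentage-of-words measures used throughout the analysis.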

Funding

This research received no external funding.

Acknowledgments

The author thanks John Shown and Sydney Morse for excellent research assistance, Tehama Lopez Bunyasi and Bobbi Gentry for helpful comments on earlier versions of this paper, and the anonymous reviewers for suggestions and constructive criticism. Any errors are the author’s responsibility.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

Table A1. Negative binomial regressions estimating the number of moral-foundations words used.
Predictor | Care/Harm: Virtue | Care/Harm: Vice | Fairness/Cheating: Virtue | Fairness/Cheating: Vice | Ingroup/Betrayal: Virtue | Ingroup/Betrayal: Vice | Authority/Subversion: Virtue | Authority/Subversion: Vice | Total MF Words: Virtue | Total MF Words: Vice
Democratic candidate | 0.07 (0.16) | −0.34 * (0.16) | −0.27 (0.19) | 3.59 ** (1.37) | 0.23 * (0.11) | −0.18 (0.17) | 0.12 (0.12) | −0.98 # (0.55) | 0.13 # (0.07) | −0.31 * (0.12)
Trump | −0.10 (0.23) | −0.41 # (0.23) | −1.02 ** (0.37) | 3.22 ** (1.21) | −0.83 ** (0.20) | −1.09 ** (0.35) | −0.14 (0.19) | −0.03 (0.39) | −0.39 ** (0.11) | −0.43 * (0.18)
Topic: Security | 0.88 ** (0.18) | 0.84 ** (0.18) | −0.31 (0.24) | −0.83 (0.67) | 0.23 # (0.13) | 0.97 ** (0.24) | −0.06 (0.15) | 0.22 (0.46) | 0.17 * (0.08) | 0.78 ** (0.14)
Economy & Budget | 0.10 (0.23) | −0.48 # (0.27) | 0.31 (0.23) | −0.80 (0.78) | −0.30 # (0.17) | −0.64 # (0.35) | −0.40 * (0.17) | −0.89 (0.70) | −0.21 * (0.09) | −0.55 * (0.21)
Immigration | 0.63 * (0.24) | 0.00 (0.25) | −0.66 * (0.32) | 0.72 (0.67) | 0.28 (0.17) | 2.16 ** (0.25) | 0.57 ** (0.16) | 2.12 ** (0.41) | 0.33 ** (0.10) | 1.26 ** (0.15)
Social welfare | 1.72 ** (0.20) | −0.33 (0.25) | −0.41 (0.43) | −1.38 (1.05) | −0.06 (0.19) | −0.93 # (0.54) | −0.55 ** (0.20) | −1.53 (1.08) | 0.31 ** (0.10) | −0.43 # (0.22)
Culture | 0.78 ** (0.25) | 0.56 ** (0.21) | 0.62 * (0.25) | −0.80 (1.07) | 0.28 (0.19) | −0.35 (0.42) | 0.10 (0.17) | 0.62 (0.62) | 0.31 ** (0.11) | 0.38 * (0.19)
Opening & closing | 0.61 ** (0.23) | 0.19 (0.27) | 0.66 ** (0.24) | 0.82 (0.51) | 0.45 ** (0.14) | 0.62 * (0.30) | −0.01 (0.17) | −0.63 (0.66) | 0.33 ** (0.09) | 0.33 # (0.19)
Sponsor: CNN | −0.07 (0.16) | −0.24 (0.16) | −0.18 (0.19) | 0.08 (0.46) | −0.05 (0.11) | −0.11 (0.17) | 0.33 ** (0.12) | −0.41 (0.42) | 0.05 (0.06) | −0.21 # (0.12)
Fox Business | 0.20 (0.21) | 0.14 (0.20) | −0.62 * (0.27) | 0.87 (1.08) | 0.17 (0.16) | 0.15 (0.24) | 0.14 (0.17) | 0.11 (0.48) | 0.09 (0.09) | 0.05 (0.16)
Constant | −6.36 ** (0.22) | −5.71 ** (0.19) | −6.09 ** (0.26) | −11.56 ** (1.40) | −5.43 ** (0.13) | −6.68 ** (0.25) | −5.58 ** (0.14) | −7.55 ** (0.49) | −4.37 ** (0.08) | −5.20 ** (0.15)
Wald chi-sq. | 97.7 ** | 71.1 ** | 49.4 ** | 27.7 ** | 57.3 ** | 189.8 ** | 54.8 ** | 79.1 ** | 87.8 ** | 167.8 **
Notes: Cell entries are negative binomial regression coefficients, with robust standard errors in parentheses; the count of words from the relevant moral-foundations category is the dependent variable. The “Total MF Words” columns sum together the Virtue (Vice) counts for all five moral foundations, including Purity. The logged total number of words in the segment is included as an exposure variable with its coefficient constrained to 1. Variance inflation factors suggest multicollinearity is not an issue (mean VIF of 1.4). For all models, n = 832 segments. # p < 0.10; * p < 0.05; ** p < 0.01.
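The notes above describe the exposure term: the logged segment word count enters the linear predictor with its coefficient fixed at 1, so the model effectively estimates rates of moral-vocabulary use per word. A minimal pure-Python sketch of that mean structure (the coefficients below are illustrative, not estimates from the table):

```python
import math

def expected_count(coefs, predictors, exposure):
    """Mean of a count model with a log link and an exposure term:
    E[y] = exposure * exp(X'beta), i.e., log(exposure) enters the
    linear predictor with a coefficient constrained to 1."""
    linpred = sum(b * x for b, x in zip(coefs, predictors))
    return exposure * math.exp(linpred)

# Illustrative coefficients: an intercept and a Democratic-candidate dummy.
beta = [-5.0, 0.23]  # hypothetical values, not the paper's estimates
mu_short = expected_count(beta, [1, 1], exposure=100)  # 100-word segment
mu_long = expected_count(beta, [1, 1], exposure=200)   # 200-word segment
```

Under this specification, doubling a segment’s length doubles the expected count, and exponentiated coefficients can be read as rate ratios per word spoken.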
Table A2. Probit regressions estimating the presence of any moral-foundations words.
Predictor | Care/Harm: Virtue | Care/Harm: Vice | Fairness/Cheating: Virtue | Fairness/Cheating: Vice | Ingroup/Betrayal: Virtue | Ingroup/Betrayal: Vice | Authority/Subversion: Virtue | Authority/Subversion: Vice | Total MF Words: Virtue | Total MF Words: Vice
Democratic candidate | 0.07 (0.13) | −0.32 * (0.13) | −0.11 (0.14) | 1.17 ** (0.44) | 0.29 * (0.12) | −0.16 (0.14) | 0.17 (0.12) | −0.41 * (0.20) | 0.28 # (0.15) | −0.27 * (0.12)
Trump | −0.34 * (0.17) | −0.30 # (0.16) | −0.56 ** (0.20) | 1.11 * (0.43) | −0.56 ** (0.17) | −0.75 ** (0.22) | −0.23 (0.15) | −0.13 (0.21) | −0.46 ** (0.17) | −0.45 ** (0.16)
Topic: Security | 0.62 ** (0.14) | 0.59 ** (0.14) | −0.10 (0.16) | −0.40 (0.32) | 0.18 (0.13) | 0.75 ** (0.15) | −0.09 (0.13) | 0.17 (0.22) | 0.33 * (0.16) | 0.63 ** (0.13)
Economy & Budget | 0.10 (0.17) | −0.40 * (0.17) | 0.29 # (0.17) | −0.23 (0.38) | −0.29 # (0.16) | −0.22 (0.20) | −0.23 (0.15) | −0.20 (0.30) | 0.03 (0.18) | −0.45 ** (0.16)
Immigration | 0.38 * (0.18) | 0.02 (0.18) | −0.20 (0.20) | 0.29 (0.30) | 0.06 (0.17) | 1.51 ** (0.20) | 0.32 # (0.17) | 1.10 ** (0.22) | 0.20 (0.20) | 1.06 ** (0.19)
Social welfare | 0.79 ** (0.17) | −0.15 (0.19) | −0.42 # (0.21) | −0.65 (0.43) | −0.21 (0.17) | −0.50 # (0.26) | −0.40 * (0.17) | −0.50 (0.42) | 0.18 (0.20) | −0.30 # (0.18)
Culture | 0.35 # (0.21) | 0.72 ** (0.21) | 0.55 ** (0.21) | −0.32 (0.47) | 0.17 (0.21) | 0.03 (0.27) | 0.24 (0.20) | 0.32 (0.30) | 0.07 (0.24) | 0.41 * (0.21)
Opening & closing | 0.41 * (0.18) | 0.19 (0.18) | 0.38 * (0.18) | 0.31 (0.28) | 0.64 ** (0.17) | 0.51 ** (0.19) | 0.08 (0.16) | −0.22 (0.29) | 1.11 ** (0.28) | 0.35 * (0.17)
Sponsor: CNN | −0.02 (0.12) | −0.33 ** (0.12) | −0.04 (0.13) | −0.12 (0.21) | −0.05 (0.12) | −0.17 (0.14) | 0.24 * (0.11) | −0.16 (0.20) | 0.30 * (0.14) | −0.27 * (0.12)
Fox Business | 0.07 (0.17) | −0.03 (0.17) | −0.62 ** (0.20) | −0.15 (0.43) | −0.04 (0.17) | 0.01 (0.19) | 0.03 (0.16) | 0.19 (0.24) | −0.03 (0.21) | −0.10 (0.17)
Word count | 0.00 ** (0.00) | 0.00 ** (0.00) | 0.00 ** (0.00) | 0.00 ** (0.00) | 0.00 ** (0.00) | 0.00 ** (0.00) | 0.00 ** (0.00) | 0.00 (0.00) | 0.01 ** (0.00) | 0.00 ** (0.00)
Constant | −1.54 ** (0.19) | −0.95 ** (0.18) | −1.46 ** (0.19) | −3.23 ** (0.54) | −0.99 ** (0.18) | −1.52 ** (0.20) | −0.92 ** (0.17) | −1.54 ** (0.25) | −0.79 ** (0.22) | −0.66 ** (0.17)
Wald chi-sq. | 103.7 ** | 105.9 ** | 91.4 ** | 47.3 ** | 113.7 ** | 140.6 ** | 67.7 ** | 54.0 ** | 112.2 ** | 143.8 **
Notes: Cell entries are probit regression coefficients, with robust standard errors in parentheses. The dependent variable is set to 1 for segments with any words from the specified moral-foundations category, 0 otherwise. The “Total MF Words” columns reflect the use of any Virtue (Vice) words from any moral foundation, including Purity. For all models, n = 832. # p < 0.10; * p < 0.05; ** p < 0.01.
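The dependent variable in Table A2 is simply a presence/absence recoding of the counts analyzed in Table A1. A trivial sketch of that recoding (the per-segment counts below are hypothetical):

```python
def any_category_words(counts):
    """Recode per-segment word counts into the binary dependent variable
    used in Table A2: 1 if the segment contains any words from the
    moral-foundations category, 0 otherwise."""
    return [1 if c > 0 else 0 for c in counts]

ingroup_virtue_counts = [0, 2, 1, 0, 5]  # hypothetical per-segment counts
dv = any_category_words(ingroup_virtue_counts)
```

As note 10 explains, this dichotomization discards information about how heavily a segment draws on a category, which is why the negative binomial models are preferred in the main analysis.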

References

  1. Amodio, David M., John T. Jost, Sarah L. Master, and Cindy M. Yee. 2007. Neurocognitive correlates of liberalism and conservatism. Nature Neuroscience 10: 1246–47. [Google Scholar] [CrossRef] [PubMed]
  2. Anonymous. 2018. I am part of the resistance inside the Trump administration. New York Times, September 5, A23. [Google Scholar]
  3. Bargh, John, and Tanya Chartrand. 1999. The unbearable automaticity of being. American Psychologist 54: 462–79. [Google Scholar] [CrossRef]
  4. Block, Jack, and Jeanne Block. 2006. Nursery school personality and political orientation two decades later. Journal of Research in Personality 40: 734–49. [Google Scholar] [CrossRef]
  5. Boyd, Ryan. 2016. RIOT Scan: Recursive Inspection of Text Scanner (version 2.0.21). Software. Available online: http://riot.ryanb.cc (accessed on 31 March 2016).
  6. Carney, Dana R., John T. Jost, Samuel D. Gosling, and Jeff Potter. 2008. The secret lives of liberals and conservatives: Personality profiles, interaction styles, and the things they leave behind. Political Psychology 29: 807–40. [Google Scholar] [CrossRef]
  7. Carraro, Luciana, Luigi Castelli, and Claudia Macchiella. 2011. The automatic conservative: Ideology-based attentional asymmetries in the processing of valenced information. PLoS ONE 6: e26456. [Google Scholar] [CrossRef]
  8. Clifford, Scott, and Jennifer Jerit. 2013. How words do the work of politics: Moral foundations theory and the debate over stem cell research. Journal of Politics 75: 659–71. [Google Scholar] [CrossRef]
  9. Clinton, Hillary Rodham. 1996. It Takes a Village, and Other Lessons Children Teach Us. New York: Simon & Schuster. [Google Scholar]
  10. Converse, Phillip E. 1964. The nature of belief systems in mass publics. In Ideology and Discontent. Edited by David Apter. New York: Free Press, pp. 206–61. [Google Scholar]
  11. Dodd, Michael D., Amanda Balzer, Carly M. Jacobs, Michael W. Gruszczynski, Kevin B. Smith, and John R. Hibbing. 2012. The political left rolls with the good and the political right confronts the bad: Connecting physiology and cognition to preferences. Philosophical Transactions of the Royal Society B 367: 640–49. [Google Scholar] [CrossRef]
  12. Eskine, Kendall J., Natalie A. Kacinik, and Jesse J. Prinz. 2011. A bad taste in the mouth: Gustatory disgust influences moral judgment. Psychological Science 22: 295–99. [Google Scholar] [CrossRef] [PubMed]
  13. Fessler, Daniel, Anne Pisor, and Colin Holbrook. 2017. Political orientation predicts credulity regarding putative hazards. Psychological Science 28: 651–60. [Google Scholar] [CrossRef]
  14. Fridkin, Kim L., Patrick J. Kenney, Sarah Allen Gershon, Karen Shafer, and Gina Serignese Woodall. 2007. Capturing the power of a campaign event: The 2004 presidential debate in Tempe. Journal of Politics 69: 770–85. [Google Scholar] [CrossRef]
  15. Fulgoni, Dean, Jordan Carpenter, Lyle Ungar, and Daniel Preotiuc-Pietro. 2016. An empirical exploration of moral foundations theory in partisan news sources. Paper presented at the Language Resources and Evaluation Conference, Portorož, Slovenia, May 23–28; Available online: https://www.aclweb.org/anthology/L16-1591 (accessed on 18 March 2018).
  16. Gerber, Alan S., Gregory A. Huber, David Doherty, Conor M. Dowling, and Shang E. Ha. 2010. Personality and political attitudes: Relationships across issue domains and political contexts. American Political Science Review 104: 111–33. [Google Scholar] [CrossRef]
  17. Graham, Jesse, Jonathan Haidt, and Brian A. Nosek. 2009. Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology 96: 1029–46. [Google Scholar] [CrossRef] [PubMed]
  18. Graham, Jesse, Ravi Iyer, Brian A. Nosek, Jonathan Haidt, Sena Koleva, and Peter H. Ditto. 2011. Mapping the moral domain. Journal of Personality and Social Psychology 101: 366–85. [Google Scholar] [CrossRef] [PubMed]
  19. Green, Donald, Bradley Palmquist, and Eric Schickler. 2004. Partisan Hearts and Minds: Political Parties and the Social Identities of Voters. New Haven: Yale University Press. [Google Scholar]
  20. Haidt, Jonathan. 2001. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review 108: 814–34. [Google Scholar] [CrossRef] [PubMed]
  21. Haidt, Jonathan. 2007. The new synthesis in moral psychology. Science 316: 998–1002. [Google Scholar] [CrossRef] [PubMed]
  22. Haidt, Jonathan. 2012. The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Pantheon. [Google Scholar]
  23. Haidt, Jonathan, and Jesse Graham. 2007. When morality opposes justice: Conservatives have moral intuitions that liberals may not recognize. Social Justice Research 20: 98–116. [Google Scholar] [CrossRef]
  24. Haidt, Jonathan, and Craig Joseph. 2004. Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus 133: 55–66. [Google Scholar] [CrossRef]
  25. Hetherington, Marc J., and Jonathan D. Weiler. 2009. Authoritarianism and Polarization in American Politics. New York: Cambridge University Press. [Google Scholar]
  26. Hibbing, John R., Kevin B. Smith, and John R. Alford. 2014. Predisposed: Liberals, Conservatives, and the Biology of Political Differences. New York: Routledge. [Google Scholar]
  27. Inbar, Yoel, David A. Pizarro, and Paul Bloom. 2009. Conservatives are more easily disgusted than liberals. Cognition & Emotion 23: 714–25. [Google Scholar] [CrossRef]
  28. Inbar, Yoel, David Pizarro, Ravi Iyer, and Jonathan Haidt. 2012. Disgust sensitivity, political conservatism, and voting. Social Psychological and Personality Science 3: 537–44. [Google Scholar] [CrossRef]
  29. Janoff-Bulman, Ronnie. 2009. To provide or protect: Motivational bases of political liberalism and conservatism. Psychological Inquiry 20: 120–28. [Google Scholar] [CrossRef]
  30. Jones, Kevin L., Sharareh Noorbaloochi, John T. Jost, Richard Bonneau, Jonathan Nagler, and Joshua A. Tucker. 2018. Liberal and conservative values: What we can learn from congressional tweets. Political Psychology 39: 423–43. [Google Scholar] [CrossRef]
  31. Jordan, Kayla N., and James W. Pennebaker. 2017. The exception or the rule: Using words to assess analytic thinking, Donald Trump, and the American presidency. Translational Issues in Psychological Science 3: 312–16. [Google Scholar] [CrossRef]
  32. Jordan, Kayla N., Joanna Sterling, James W. Pennebaker, and Ryan L. Boyd. 2019. Examining long-term trends in politics and culture through language of political leaders and cultural institutions. Proceedings of the National Academy of Sciences 116: 3476–81. [Google Scholar] [CrossRef] [PubMed]
  33. Jost, John T., Jack Glaser, Arie W. Kruglanski, and Frank J. Sulloway. 2003. Political conservatism as motivated social cognition. Psychological Bulletin 129: 339–75. [Google Scholar] [CrossRef] [PubMed]
  34. Jost, John T., Chadly Stern, Nicholas O. Rule, and Joanna Sterling. 2017. The politics of fear: Is there an ideological asymmetry in existential motivation? Social Cognition 35: 324–53. [Google Scholar] [CrossRef]
  35. Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux. [Google Scholar]
  36. Kinder, Donald, and Nathan Kalmoe. 2017. Neither Liberal nor Conservative: Ideological Innocence in the American Public. Chicago: University of Chicago Press. [Google Scholar]
  37. King, Gary. 1989. Variance specification in event count models: From restrictive assumptions to a generalized estimator. American Journal of Political Science 33: 762–84. [Google Scholar] [CrossRef]
  38. Kraft, Patrick W. 2018. Measuring morality in political attitude expression. Journal of Politics 80: 1028–33. [Google Scholar] [CrossRef]
  39. Langer, Melanie, John T. Jost, Richard Bonneau, Megan MacDuffie Metzger, Sharareh Noorbaloochi, and Duncan Penfold-Brown. 2019. Digital dissent: An analysis of the motivational contents of tweets from an Occupy Wall Street demonstration. Motivation Science 5: 14–34. [Google Scholar] [CrossRef]
  40. Leeuwen, Florian Van, and Justin H. Park. 2009. Perceptions of social dangers, moral foundations, and political orientation. Personality and Individual Differences 47: 169–73. [Google Scholar] [CrossRef]
  41. Lipsitz, Keena. 2018. Playing with emotions: The effect of moral appeals in elite rhetoric. Political Behavior 40: 57–78. [Google Scholar] [CrossRef]
  42. Manning, Bayless. 1977. The Congress, the executive and intermestic affairs. Foreign Affairs 55: 306–24. [Google Scholar] [CrossRef]
  43. Marcus, George, W. Russell Neuman, and Michael MacKuen. 2000. Affective Intelligence and Political Judgment. Chicago: University of Chicago Press. [Google Scholar]
  44. Mason, Lilliana. 2018. Uncivil Agreement: How Politics Became Our Identity. Chicago: University of Chicago Press. [Google Scholar]
  45. McKinney, Mitchell S., and Benjamin R. Warner. 2013. Do presidential debates matter? Examining a decade of campaign debate effects. Argumentation and Advocacy 49: 238–58. [Google Scholar] [CrossRef]
  46. Meyerson, Harold. 2016. Trump’s dystopia. American Prospect. July 22. Available online: https://prospect.org/article/trumps-dystopia (accessed on 21 June 2019).
  47. Motyl, Matt. 2012. Party evolutions in moral intuitions: A text-analysis of US political party platforms from 1856–2008. Social Science Research Network eLibrary. Available online: https://ssrn.com/abstract=2158893 (accessed on 18 March 2018). [CrossRef]
  48. Neiman, Jayme L., Frank J. Gonzalez, Kevin Wilkinson, Kevin B. Smith, and John R. Hibbing. 2016. Speaking different languages or reading from the same script? Word usage of Democratic and Republican politicians. Political Communication 33: 212–40. [Google Scholar] [CrossRef]
  49. Oliver, J. Eric, and Wendy M. Rahn. 2016. Rise of the Trumpenvolk: Populism in the 2016 election. Annals, AAPSS 667: 189–206. [Google Scholar] [CrossRef]
  50. Petersen, Michael Bang, Daniel Sznycer, Leda Cosmides, and John Tooby. 2012. Who deserves help? Evolutionary psychology, social emotions, and public opinion about welfare. Political Psychology 33: 395–418. [Google Scholar] [CrossRef] [PubMed]
  51. Pratto, Felicia, Jim Sidanius, Lisa M. Stallworth, and Bertram F. Malle. 1994. Social dominance orientation: A personality variable predicting social and political attitudes. Journal of Personality and Social Psychology 67: 741–63. [Google Scholar] [CrossRef]
  52. Sagi, Eyal, and Morteza Dehghani. 2014. Moral rhetoric in Twitter: A case study of the U.S. federal shutdown of 2013. Proceedings of the Annual Meeting of the Cognitive Science Society 36: 1347–52. Available online: https://escholarship.org/uc/item/9sw937kk (accessed on 18 March 2018).
  53. Schier, Steven, and Todd Eberly. 2016. Polarized: The Rise of Ideology in American Politics. Lanham: Rowman & Littlefield. [Google Scholar]
  54. Smith, Kevin B., Douglas Oxley, Matthew V. Hibbing, John R. Alford, and John R. Hibbing. 2011. Disgust sensitivity and the neurophysiology of left-right political orientations. PLoS ONE 6: e25552. [Google Scholar] [CrossRef]
  55. Taylor, Steven, Matthew Shugart, Arend Lijphart, and Bernard Grofman. 2014. A Different Democracy: American Government in a 31-Country Perspective. New Haven: Yale University Press. [Google Scholar]
  56. Westen, Drew. 2007. The Political Brain: The Role of Emotions in Deciding the Fate of the Nation. New York: PublicAffairs. [Google Scholar]
  57. Woolley, John, and Gerhard Peters. 2018. Presidential debates, 1960–2016. In The American Presidency Project. Santa Barbara: University of California, Available online: http://www.presidency.ucsb.edu/debates.php (accessed on 28 April 2016).
1. MFT is most closely associated with the social psychologist Jonathan Haidt, although many other researchers in a variety of disciplines have drawn upon it. At this writing, Google Scholar tallies more than 2600 studies using the phrase “moral foundations theory.”
2. After the initial development of MFT, Haidt (2012, pp. 168–74) concluded that liberals and conservatives tend to conceive of Fairness somewhat differently, with liberals concerned more with distributional outcomes and conservatives oriented more toward proportionality (i.e., getting what one is due in relation to one’s contribution). However, experimental work on social welfare attitudes by Petersen et al. (2012) suggested that the emphasis on proportionality is common to both left- and right-leaning individuals in Denmark and the United States.
3. Haidt (2012) subsequently proposed Liberty/Oppression as a candidate for a sixth moral foundation. However, most published work continues to focus on the original five, as does the Moral Foundations Dictionary used in the current study.
4. I include only the main Republican debates, not the “undercard” events featuring low-tier candidates who were relegated to a separate, less visible forum.
5. The author developed a set of secondary codes for more specific subcategories (for example, differentiating environmental regulation from other economic topics, or health care from other social welfare policies), but cell sizes for particular candidates speaking on these subtopics typically were very small and there was little or no improvement in fit in the multivariate models. Immigration is arguably a narrower category than the others. However, it was coded as a stand-alone category for a few reasons: Immigration occupied a considerable place on the U.S. news agenda in 2015–2016; its framing in the national political debate seemed likely to evoke moral imagery in many cases; and its position as an “intermestic” policy (Manning 1977), at the juncture of domestic policy and international relations, makes it a poor fit for the other categories. In addition, there is one practical consideration: The word roots immig* and foreign* appear in the list of Ingroup-Vice words within the Moral Foundations Dictionary. Given that these word roots are used frequently in discussions of immigration policy, it is important to be able to control for instances in which immigration is the topic of the segment.
6. Perhaps because disgust has limited verbal referents in standard English, experimental studies sometimes use videos, photos, or smells rather than words to induce disgust, and often measure physiological manifestations of disgust (such as facial expressions) rather than relying on verbal self-reports (e.g., Smith et al. 2011; Eskine et al. 2011).
7. There were only 28 segments in which Democrats used any Purity-Virtue words, with a total of 35 words used in this category; the corresponding figures for Republicans were 15 segments and 20 total words. For both parties, the word stem clean* accounted for the majority of all Purity-Virtue words used, usually as a modifier for energy, power, or environment. Regarding Purity-Vice, there were a mere 9 Democratic segments (each having only one word in that category) and 12 Republican segments (including a total of 19 such words).
8. Further investigation shows that this difference in emphasis derives almost entirely from opening/closing statements’ high usage of Fairness-Virtue, rather than Fairness-Vice, words.
9. Ted Cruz, who among the major candidates perhaps most clearly embraced social and religious conservatism, used Care and Ingroup words at a significantly higher than average rate. Further analysis indicates that most of Cruz’s difference in language use derived from his high usage of Vice words, consistent with the notion of a negativity bias among conservatives.
10. Given the generally small number of words per segment from any particular moral-foundations category, an alternative approach would be to use a probit model to estimate the probability that any such words occur in a segment. The dichotomous measure, however, would fail to make use of some important information, since the percentage of words from specific moral-foundations categories that appear in segments ranges as high as 8% in this sample and varies considerably below that level. Negative binomial regression takes into account such differences among the nonzero count values. Nevertheless, probit estimates lead to near-identical conclusions about the variables of interest, as described below.
11. To focus on Trump’s differences from Democratic candidates, I ran an additional set of models that were identical to those reported in Appendix Table A1 except they included only segments spoken by Trump or by Democrats (n = 480). The Trump indicator variable thus estimates his differences from Democratic candidates, rather than from the non-Trump Republicans. In this alternative set of models, the significant (p < 0.05) differences between Trump and Democrats were as follows: Trump used fewer Fairness-Virtue words, fewer Ingroup-Virtue words, fewer Ingroup-Vice words, and fewer Virtue words overall.
12. If the negative binomial regressions reported in this section are replaced with a probit model, estimating the presence or absence of any words from a particular moral foundation (see note 10), results for the Democratic Party and Trump dummy variables remain the same with respect to their signs, and nearly the same with regard to significance levels. See Appendix Table A2.
13. Consistent with the results presented here, Oliver and Rahn (2016, pp. 193–94) showed that speeches by Democratic presidential candidates of 2016 invoked more “people-centrism” (e.g., mentions of “the American people”) than speeches by Republicans.
14. The author estimated a probit model (not shown) predicting segments where Self was the topic of discussion. Controlling for whether the speaker was a Democrat, whether CNN or Fox sponsored the debate, the total word count of the segment, and the number of months elapsed since October 2015 (all of which were statistically insignificant), the Trump dummy variable was positive and significant (coefficient of 0.74, p < 0.001).
Figure 1. Partisan differences in word use (in %) across the five moral foundations. Note: These are simple cross-tabulations without statistical controls. The vertical axis represents the average percentage of words in a segment representing a particular moral foundation (thus, 0.8 signifies 0.8%, not 80%). Dem. = Democratic; GOP = Republican.
Figure 2. The most frequently spoken Ingroup-Virtue words, by party. Note: The horizontal axis represents the total number of mentions of each word, across all segments in which at least two Ingroup-Virtue words appear (n = 106 Republican and 100 Democratic segments).
Figure 3. The estimated marginal effect of a Democratic speaker (relative to non-Trump Republican candidates) on the number of moral-foundations words used per segment. Dot represents point estimate and bar represents 95% confidence interval. The All 5 Virtue and All 5 Vice categories sum together the counts for all five moral foundations, including Purity.
Figure 4. The estimated marginal effect of Donald Trump as speaker (relative to non-Trump Republican candidates) on the number of moral-foundations words used per segment. Dot represents point estimate and bar represents 95% confidence interval. The All 5 Virtue and All 5 Vice categories sum together the counts for all five moral foundations, including Purity.
Table 1. Debates included in the analysis.
Republican
Date | Sponsor(s)
16 September 2015 | CNN/Salem Media
10 November 2015 | Fox Business/Wall Street Journal
14 January 2016 | Fox Business/Wall Street Journal
13 February 2016 | CBS
10 March 2016 | CNN/Washington Times
Democratic
Date | Sponsor(s)
13 October 2015 | CNN/Facebook
14 November 2015 | CBS/Twitter
17 January 2016 | NBC/YouTube
11 February 2016 | PBS
9 March 2016 | Univision/Washington Post
Table 2. The distribution of segments across candidates and subject matter.
Candidate | No. of Segments | Avg. Words/Segment | Security/Foreign Policy (%) | Economy/Budget (%) | Immigration (%) | Social/Cultural (%) | Campaign/General Politics (%) | Self (%)
Republican
J. Bush | 50 | 199 | 24 | 14 | 16 | 12 | 20 | 2
Cruz | 64 | 199 | 20 | 19 | 11 | 9 | 22 | 5
Kasich | 49 | 237 | 18 | 24 | 6 | 16 | 6 | 8
Rubio | 64 | 250 | 27 | 31 | 13 | 11 | 13 | 2
Trump | 99 | 166 | 24 | 11 | 12 | 10 | 20 | 15
Others 1 | 125 | 194 | 26 | 18 | 7 | 11 | 14 | 4
Democratic
Clinton | 142 | 185 | 22 | 15 | 11 | 25 | 13 | 6
O’Malley | 52 | 181 | 29 | 23 | 8 | 19 | 6 | 2
Sanders | 159 | 153 | 18 | 16 | 7 | 28 | 18 | 5
Others 1 | 28 | 156 | 43 | 11 | 4 | 14 | 4 | 11
1 Other candidates represented in dataset (each with 36 or fewer segments) include Republicans Ben Carson, Chris Christie, Carly Fiorina, Mike Huckabee, Rand Paul, and Scott Walker; and Democrats Lincoln Chafee and Jim Webb.
Table 3. Mean usage of morality words, per segment (as % of words).
Category | Care/Harm | Fairness/Cheating | Ingroup/Betrayal | Authority/Subversion | Purity/Degradation | General Morality
By topic:
Security | 0.97 | 0.13 | 0.84 | 0.55 | 0.03 | 0.35
Economy/Budget | 0.38 | 0.19 | 0.37 | 0.37 | 0.04 | 0.31
Immigration | 0.57 | 0.15 | 1.48 | 1.11 | 0.02 | 0.56
Social Welfare | 1.11 | 0.10 | 0.51 | 0.35 | 0.06 | 0.28
Culture/Identity | 0.80 | 0.32 | 0.70 | 0.59 | 0.11 | 0.38
Campaign/Self | 0.45 | 0.16 | 0.52 | 0.48 | 0.04 | 0.43
Opening/Closing | 0.68 | 0.31 | 0.96 | 0.50 | 0.07 | 0.31
By candidate:
Clinton | 0.80 | 0.19 | 0.71 | 0.42 | 0.04 | 0.33
Sanders | 0.60 | 0.18 | 0.84 | 0.50 | 0.08 | 0.32
Other Dem. | 0.79 | 0.18 | 0.88 | 0.68 | 0.10 | 0.28
Cruz | 0.96 | 0.18 | 0.98 | 0.74 | 0.02 | 0.46
Kasich | 0.70 | 0.26 | 0.62 | 0.35 | 0.07 | 0.29
Trump | 0.50 | 0.10 | 0.32 | 0.56 | 0.04 | 0.53
Other GOP | 0.64 | 0.17 | 0.74 | 0.55 | 0.05 | 0.38