Is It Virtuous to Love Truth and Hate Falsehood?

Abstract: There is a great deal of academic literature, much of it coming from the social sciences and from social epistemology, which presents itself as addressing a very general problem: the problem of excessive falsehood. Falsehood comes in two general forms: false statements and false beliefs. Of course, falsehood, in both these forms, has always been with us, but it is often supposed to be on the rise. I will argue that there is no new or growing problem of excessive falsehood (variously referred to as the problem of “misinformation” or “fake news”). Furthermore, we should reject the very idea that falsehood as such is a problem, and hence we should reject the idea of coming up with public policy responses to this so-called problem. I argue that the idea that falsehood is a problem is a natural consequence of the idea that it is virtuous to love truth and hate falsehood. I argue that, although there are several virtues related to truth (such as the intellectual virtue of curiosity and the moral virtue of honesty), a love of truth and hatred of falsehood are not themselves virtues.

“Beauty is truth, truth beauty,”—that is all
Ye know on earth, and all ye need to know.
(John Keats, “Ode on a Grecian Urn”)


Introduction
We are constantly being warned that we live in a "post-truth" age, in which the value, obtainability, and even the reality of truth are under attack. The term "post-truth" is just one of an array of neologisms that have emerged, including "fake news", "conspiracy theory", "filter bubble", and "echo chamber", which are used to express the view that truth is being lost in a sea of falsehood. Social scientists and philosophers have made suggestions about how to deal with this so-called problem, and conventional media outlets have been keen to place the blame for it on new media and new information technologies. In this paper, I will argue that this is all a mistake. Although we want as many of our beliefs to be as true as possible, we should reject the very idea that falsehood as such is a problem, and hence we should reject the idea of coming up with public policy responses to this so-called problem.
Those who see themselves as involved in a struggle against falsehood also see themselves as being involved in a struggle for truth. Indeed, it is common, both in academic philosophy and beyond, to treat these two struggles as one and the same. However, although these two struggles are certainly related, they are not, strictly speaking, identical. Indeed, as William James [2] pointed out, pursuing truth and avoiding falsehood are not only different; they can, and sometimes do, come into conflict. Nonetheless, in this paper I will assume, for the sake of argument, that getting rid of false beliefs involves a corresponding acquisition of true beliefs, and vice versa. Hence, in this paper, the alleged problem of excessive falsehood will be treated as equivalent to the alleged problem of insufficient truth. I do not think this simplification affects anything of substance in my argument.
I expect my denial that this alleged problem (strictly speaking two problems) is really a problem to be met with incredulous stares. Enthusiasts for truth in general and foes of falsehood in general (usually the same people) will accuse me of indulging in postmodern madness. Am I denying the reality, knowability, or desirability of truth? I am not. I deny neither that there is a distinction between truths and falsehoods, nor that we have capacities which allow us sometimes (indeed often) to identify truths, nor do I deny either the intrinsic or instrumental value of having true beliefs on a wide variety of topics. Let me explain.

On Our Inability to Distinguish Our Current Beliefs from Reality
Everyone knows, or at any rate everyone should know, that we all have false beliefs. I, for example, am aware enough of my track record of getting things wrong to know that I currently have false beliefs. But, of course, I cannot identify right now which of my beliefs are false, because as soon as I identify them, I cease to believe them. In other words, none of us can make the distinction between (1) what is the case (i.e., what is true) and (2) what we now believe to be the case (i.e., what we now believe to be true).
It will be objected that this cannot be right, since we can come to question one or more of our beliefs, and that seems to require asking ourselves whether what we now believe to be the case actually is the case, which requires us to make the distinction that I say no one can make. I respond that this is not strictly an accurate account of what happens when we "question our beliefs". When one is questioning one or more of one's beliefs, one is not questioning one's current beliefs; one is questioning what one has believed up until now. As soon as one seriously questions one or more of one's beliefs, one no longer believes them.
It is this fact (that no one can identify, at a given time, which of their beliefs is false) that is responsible for Moore's paradox [3] (p. 543): the absurdity of a statement or belief of the form "p, but I don't believe that p". From the subjective perspective, "p" and "I believe that p" are equivalent. Of course, others can often make the distinction between what I believe and what is true, just as I can often make this distinction about past (and perhaps future) temporal stages of myself. In this sense, each of our beliefs is objectively distinguishable from reality, but subjectively identical to what necessarily counts for us as reality at any given time.
It follows from this that when anyone thinks to ask the question "Why do people have certain false beliefs?", this question is inevitably, from that person's perspective, equivalent to the question "Why do people disagree with me about these matters?". In this light, consider Cass Sunstein's book On Rumors [4]; I have argued elsewhere [5] (pp. 101-102) that this book is not really about rumours at all. The clue to its actual content is to be found in its subtitle Why Falsehoods Spread, Why We Believe Them, What Can Be Done. Since, from the first-person perspective, falsehood is indistinguishable from what the person in question believes to be false at the time, the contents of the book are all too predictable. Sunstein's inquiry into why falsehoods spread is actually about why statements that Sunstein and like-minded people believe to be false spread. His inquiry into why we believe falsehoods is in fact about why other people believe things Sunstein believes to be false (i.e., why people disagree with him). Finally, the inquiry into what can be done about this supposed problem is about how we can stop people from disagreeing with Sunstein (and his presumed readers).
There is nothing wrong, of course, with an expert worrying about the spread of falsehoods within their domain of expertise, or worrying about people believing them. Sunstein, by contrast, is, or at any rate claims to be, investigating falsehoods in general, divorced from any subject in which he, or anyone else, could conceivably claim to have expertise. My point, of course, is not that Sunstein is particularly lacking in the kind of universal expertise that would be required for the project he claims that he is engaged in. My point is that no one has, or could have (see Note 1), this kind of expertise.
There are a number of replies that Sunstein or his defenders could make at this point, but before discussing them, I want to step back and take a look at the broader theoretical picture that more or less explicitly motivates contemporary truth-evangelism.

Veritism and Its Discontents
The missionary zeal for truth, which I have been describing, is reminiscent of the zeal for happiness found in the writings of the classical utilitarians of the 18th and 19th centuries. Just as the classical utilitarians sought to make the world a happier place, the truth evangelists seek to make the world a more truthful place. This analogy has not gone unnoticed by those who call themselves veritists. Alvin Goldman, who coined this term, describes the veritistic approach to epistemology in the following passage:

The structure here is perfectly analogous to the structures of consequentialist schemes in moral theory. One type of state, such as happiness or utility, is taken to have fundamental or intrinsic moral value, and other items, such as actions, rules, or institutions, are taken to have instrumental value insofar as they tend to produce (token) states with fundamental value [6] (p. 87).
Although Goldman talks about "consequentialism" in this passage, it would have been clearer if he had referred more specifically to utilitarianism, since other forms of consequentialism are not necessarily committed to the existence of a single intrinsic moral value. Truth, and more specifically true belief, is to veritistic epistemology what happiness is to classical utilitarian ethics, while false belief, in this analogy, corresponds to unhappiness.
Elsewhere, I have criticised veritism on grounds that are analogous to common criticisms of the utilitarianism on which it is modelled [5] (pp. 3-12). Those criticisms come in two forms: those which argue that there is more than one intrinsic value in the domain in question (ethics or epistemology) that we should be concerned to promote, and those which argue that there are constraints on how the value in question should be promoted.
Here, I will put aside those arguments and assume, for the sake of argument, that truth is the only thing that fundamentally matters when it comes to forming our beliefs. I will argue that veritism faces a fundamental objection which is not faced, or at least not faced to anything like the same degree, by utilitarianism.
Utilitarians can be divided into those who think that it is possible to be mistaken about whether one is happy, and those who think that to be impossible. The former group think of happiness as objective. There is a fact about whether, and to what extent, one is happy, and one may well be mistaken about it. The latter group think of happiness as subjective. One is happy, on this view, if, and only if, one believes oneself to be happy. Jeremy Bentham, who thought that, from the perspective of moral philosophy, "the game of push-pin is of equal value with the arts and sciences of music and poetry" [7] (Book III, Chapter 1), appears to belong to the latter class. John Stuart Mill, who thought that a pig or a fool might mistakenly think themselves happier than Socrates [8] (Chapter 2), belongs to the former class. It is precisely this feature of Mill's thought that many authors, I think rightly, find hard to reconcile with Mill's advocacy of individual liberty, and his insistence that coercing adults, for their own sake, is wrong. Since, if Mill is right, an adult fool could be wrong about whether he or she is happy, then it is hard to see how a consistent Millian utilitarian could object to coercing him or her into being happy.
The lesson I take from this is that, if you want to be a utilitarian and you do not want to be an authoritarian, you should be a subjectivist about happiness. Subjectivism about happiness is a view with a significant degree of antecedent plausibility anyway, quite independently of the merits, or otherwise, of utilitarian thought. Even if happiness is not analytically identical to subjective happiness, they do seem to be largely coextensive. With very few exceptions, if any, each person is the best judge of whether and to what extent he or she is happy.
So, happiness and subjective happiness are either identical, or close to identical. By contrast, subjective truth is not even close to being identical to truth. There are some circumstances in which believing that something is true can make it true, but these are of limited significance, and this is where the analogy between utilitarianism and veritism breaks down. The logic of veritism leads to authoritarianism and uniformity in a way that the logic of its intellectual forebear, utilitarianism, does not (or at any rate, need not). When utilitarians try to promote happiness, they need not be promoting states of mind in others identical to (or even similar to) the states of mind that they would want in themselves. By contrast, when veritists try to promote true belief, they are inevitably promoting states of mind (i.e., beliefs) like their own (i.e., the ones that seem to them at the time to be true).
Of course, there is nothing wrong with trying to persuade others to believe what you believe. However, trying to get people to agree with you about some matter you judge to be important and about which you are particularly knowledgeable is one thing; trying to get people to agree with you in general is quite another.
Not all forms of truth-evangelism are as blatant as Cass Sunstein's. I will now consider four somewhat more subtle forms of it in academic philosophy in recent decades.

Example 1: David Lewis on Academic Appointments
David Lewis [9] has written about an apparent peculiarity in the hiring practices of university departments, including philosophy departments, that are characterised by two features: (1) their agreed aim is the advancement of knowledge (understood simply as true belief) and (2) there are prolonged disputes over what is true. Lewis says that we would expect people in such departments to openly promote those who agree with them, and says that he finds it puzzling that an appointing department will typically behave as if the truth value of "candidates' doctrines are weightless, not a legitimate consideration at all" [9] (p. 190). Lewis suggests that there is a tacit treaty between academics with opposing views: according to the terms of this treaty, those with truth on their side should "ignore the advantage of being right" and not promote their own views in return for those who do not have this advantage agreeing to do the same. According to Lewis, we ignore the truth of a particular candidate's doctrines:

Because if we, in the service of truth, decided to stop ignoring it, we know that others, in the service of error, also would stop ignoring it. We have exchanged our forbearance for theirs. If you think that a bad bargain, think well who might come out on top if we gave it up. Are you so sure that knowledge would be a winner? [9] (p. 200).

This is an ingenious, but ultimately unconvincing, argument. Presumably, if there really were such an implicit agreement, the parties to it would be aware of it, and most academics are certainly not aware of having made any such deal.
Lewis's premise that the promotion of knowledge, understood simply as true belief, is the fundamental value for which universities (and philosophy departments in particular) exist, seems to be the flaw in his argument. If Lewis were right about this, it would imply that we should assess student essays on the basis of the truth of the positions argued for (after all, we certainly do not have the kind of implicit bargain Lewis envisages with them). Instead, of course, we try, not always successfully, to put aside our views about the truth of students' conclusions, and assess their work instead on the basis of how well they explain and justify (or rationally defend) their conclusions. Philosophy, after all, as its tradition and etymology suggest, is not about the love of truth or even knowledge; rather it is about the love of wisdom, and wisdom consists, to a great extent, of an awareness of the dangers of this conception of ourselves as promoters of truth and foes of falsehood.

Example 2: Alvin Goldman on Democracy
Alvin Goldman has applied his veritism to the issue of improving "our" democracy. He says that "we" need to address the question "What kinds of knowledge (or information) is it essential that voters should have?" [6] (p. 320) before "we" evaluate our social policies with a view to remedying epistemic shortcomings in the voting population. Only by ensuring that voters have the relevant true beliefs as inputs can we ensure that we get the correct outputs in the form of candidates and/or policies that are most likely to advance the common good. This is a fundamentally undemocratic idea. It assumes that "we" are in a better position than the voting population as a whole to know what "they" need to know; that if only we could get them to believe certain things that we have antecedently identified as both true and important, we would improve our democratic practices (making them more democratic?). This is the idea that authoritarians around the world have called by the euphemism "guided democracy", according to which voting can be safely left to the population as a whole, so long as the information on which they base their vote is controlled by a technocratic elite. It should be clear that guided democracy is no more a species of democracy than a decoy duck is a species of duck.

Example 3: The Epistemic Panic over "Fake News"
Although there is a widespread recent concern about something called "fake news", there is little agreement about what this neologism denotes, or why we should be concerned about it. When authors do give definitions, they frequently do so in strikingly different and palpably incompatible ways. Some definitions stipulate that fake news must be intentionally false (e.g., McIntyre 2018, p. 112) [10], some that it be false but not necessarily intentionally false (e.g., Levy 2017, p. 20) [11], while others do not require it to be false at all (e.g., Gelfert 2018 and Meyer 2019) [12,13].
Gelfert and Meyer claim that the label "fake news" refers not to individual items of information (the kind of thing that can be false) but to sources of information and, especially, websites. In their view, fake news sites are those in which "(typically) false or misleading claims" are presented as "news". Meyer [13] presents the results of a study he conducted in which he showed articles from "real news" and "fake news" to participants in a random order and recorded how credible they found the articles. The most striking thing about Meyer [13] is that, although he finds that many people are unable to distinguish "fake news" from "real news", he never seems to question his own ability to reliably make this distinction. Given that news items can be on (virtually) any subject whatsoever, this seems to presuppose remarkable powers on his part. It certainly assumes that he is better able to work out which websites contain typically false or misleading claims than are the participants in his study. If this does not involve quite the claim to universal expertise which we saw was implicit in Sunstein's study of falsehoods, it seems to come very close. There is an irony in Meyer's [13] conclusion that those prone to believe "fake news" are also likely to lack intellectual virtues such as "intellectual humility".

Now, a defender of Sunstein and Meyer could argue that they are not really presupposing that they have universal expertise, because they have outsourced the expertise to others who do have genuine expertise. But that does not really solve the problem; rather, it just turns it into a meta-problem. In order to identify the experts in a particular field one needs to have a degree of meta-expertise (i.e., expertise in identifying experts), and no one has universal meta-expertise any more than they have universal first-order expertise.

Example 4: Neil Levy's Bad Beliefs
Neil Levy has recently written a book about why people have "bad beliefs" and what can be done about that alleged problem. Levy does not explicitly present himself as being engaged in a mission to promote the truth and reduce falsehood in general, because he does not explicitly identify so-called bad beliefs with false beliefs. In fact, he leaves open the possibility that some bad beliefs could be true [14] (p. x).
Unfortunately, Levy does not offer a definition of "bad belief". In fact, he explicitly refuses to define the term [14] (p. xi). The implication of this refusal seems to be that he is using the terms "bad" and "belief" in the ordinary natural language ways that we can all understand without need of a definition. But if that were the case, it is hard to see how he could succeed in giving a general explanation of the phenomenon. The project of giving a general account of why people believe badly seems as hopeless as giving a general account of why people behave badly, without first giving an account of the nature of bad behaviour. Surely the ways in which we can go wrong when forming our beliefs, just like the ways we can go wrong in other parts of our lives, are far too varied to allow for a unified explanation.
Although Levy does not define "bad belief", he does offer a suggestion about how we might understand it when he says that exemplars of bad belief all involve issues which are "controversial but shouldn't be" [14] (p. x). Because Levy does not identify bad beliefs with false beliefs, he is not making quite the same mistake, for which I have criticised Sunstein and others, of trying to provide a general explanation of false belief, but he does seem to be making a related mistake. Who, after all, gets to decide what should be considered controversial and what should be considered settled? What kind of expertise would be required to identify beliefs (which could presumably be on any topic at all) that should not even be up for debate?

The Value of Truth
So far, I have argued that there are serious dangers with treating truth in the way that utilitarians have historically treated happiness. Does this mean that I am downplaying the value of truth? Am I saying that we should not nurture and love truth? In fact, I am, though I say so very reluctantly, out of fear of being misunderstood. There is a pronounced tendency in our culture, particularly prominent in romantic literature and philosophy, to reify truth, and indeed to personalise, even deify, it. This is not the place to settle metaphysical issues about whether there is such a thing as truth, over and above the various things that are true. What I would insist on, however, is that even if there is such a thing, it is not the proper object of love or even of desire. A person who wants the truth about whether it is safe to cross the road and a person who wants the truth about the origins of the universe do not, in any meaningful sense, want the same thing: the truth. Insofar as there is anything common to what these people want, it is to be found in the mundane fact that they both want to have certain very different truths added to their body of beliefs (in other words, they both want to know certain things). Neither person appears to be motivated by a desire for truth, as such. If they were, their desire could easily be fulfilled in ways that are obviously foolish, neurotic, or self-destructive.
Some people will object that without a love (or at least a desire) for truth, one cannot exhibit important virtues, such as the moral virtue of honesty or the intellectual virtue of curiosity. I disagree. An honest person and a curious person are not united by a shared love of a single thing: the truth. Rather, the former simply has a commitment to only (or almost only) speaking truths, while the latter has a commitment to discovering truths about a wide range of topics. It is highly misleading to say that they are committed to a single thing: the truth. Insofar as there is a common thread to their virtues, it is simply that they are both committed to accurate representations. In the honest person, the representations are assertions made to others. In the curious person, the representations are his or her own beliefs about the world.

Conclusions
Of course, it is noble and worthwhile to pursue truth (i.e., true beliefs) and shun falsehood (i.e., false beliefs) about a wide range of topics. It is natural and common to infer from this that the pursuit of truth and the shunning of falsehood are themselves noble and worthwhile endeavours, but this seems to be a mistake, and one with dangerous consequences. All censors claim that they are only censoring falsehoods, and most of them genuinely believe it. Once you take the promotion of truth and/or the reduction of falsehood, as opposed to the promotion of truth and/or the reduction of falsehood about this or that topic, as worthwhile goals, the desirability of reducing diversity of belief and modelling the entire world on yourself (i.e., what you now believe to be true) is unavoidable.
It is true that some veritists, such as David Lewis, have argued for diversity of belief on the ground that it leads to more true beliefs overall in the long term. As we saw, he argues that we rightly allow a certain amount of falsehood, in philosophy and other controversial subjects, in order to avoid a full-scale battle over truth that might result in the triumph of falsehood. This strategy is similar to that which J. S. Mill employed in his best-known arguments against censorship [15] (Chapter 2). According to Mill, we should be committed to free speech, and, to that extent, adopt a permissive attitude to falsehoods, because that is the best way to promote truth in the long term. I do not doubt that Mill and Lewis are right that even well-intentioned (and perhaps especially well-intentioned) censorship regimes, and other attempts to create homogeneity of opinion, often result in people believing more falsehoods in the long term. However, I also think that if this is your reason for being opposed to censorship and in favour of diversity of opinion, then you are supporting the right things for the wrong reason. Inevitably, your position will be too vulnerable to changing circumstances. Inevitably, you will come across a topic about which you feel so strongly that you will insist that opposing views should not even be up for discussion. At that point, you will have replaced things that we should all be committed to, freedom of speech and thought, with something, the truth, which, supposing it exists at all, is of no value in itself.

Conflicts of Interest: The author declares no conflict of interest.

Notes

1. I suppose it is logically possible for there to be a being who could be an expert in everything, but I think we can be confident that no human has, or ever will have, such expertise.