3.1. Necessary vs. Contingent Moral Facts
It is typically not going to be the case that a rational agent is the only thing that exists in a world. Any such world will have to be governed by some contingent physics and physical history, realizing and sustaining that rational agent. And there will likely be many rational agents in any given world (whether made by the same process that made the first or by the activity of that first agent); and, therefore, there will likely be ‘societies’ with which each rational agent must interact. These contingent material facts will then entail contingent moral facts.
For example, in our world, physics and history conspire to make us mortal and fragile, and we cannot as yet correct this defect of our world. We may one day, if we transition to live within digital simulations whose physics we can rewrite at will (
Carrier 2020,
2024 with
Blackford and Broderick 2014;
Wiley 2014). But for now, ‘what is moral’ (the moral facts for current human beings) is causally constrained by present physics. Hence ‘murder’ is ‘immoral’ only because of these contingent facts. If we one day lived in a world where killing someone always caused them to immediately rise from the dead invigorated and healed of all ailments, ‘murder’ would no longer be immoral, as it would then have no immoral consequence (but in fact a moral one). Likewise, if we were an asexual species of civilized jellyfish, there would be no fact of the matter whether any kind of ‘sex’ was immoral for us, as no such organs or behaviors would exist for us to concern ourselves with. This is simply an extension of the casuistry of all moral reasoning. For example, in ‘lifeboat’ scenarios, what is ‘moral’ may differ from other contexts because the physical circumstances, and thus the available options, differ. What is ‘moral’ to do when you are being attacked by an army is different from what is ‘moral’ to do in peacetime, because the circumstances change and thus constrain your options. This is why killing in ‘self-defense’ is widely regarded as moral, and not murder: because the circumstances are different, such as to entail that in those circumstances killing is, unlike usually, the least bad and thus most moral option. This is obvious on any straightforward consequentialism, but even deontologically we would always will there to be a universal law to “never kill unless necessary to save a life,” or whatever rule would preserve the best total outcome while still respecting persons as ends in themselves, and that we would feel better about ourselves embodying (realizing Kant’s “sense of self-worth”).
But underlying (and thus grounding) all these contingent moral facts will be some universal set of moral facts, which facts are here only encountering different physical circumstances dictating how they can be realized. The most universal moral fact of the matter underlying all these cases is something like ‘produce by your choice of behavior the least unnecessary harm’. That rule implicates many other logically entailed goals, e.g., ‘unnecessary harm’ includes harm to yourself (e.g., not just physical injuries to yourself or your welfare but also injuries to your conscience and sense of self-worth) and harm to others, both directly (e.g., do not steal, murder, etc.) and indirectly (e.g., do not allow events to bring needless harms that you could have prevented), and includes every side of a circumstance. For example, how you behave toward a disabled person might rationally reflect how you would want to be treated if you were in their same circumstances. What aid you gift to others might rationally reflect the harm that giving too much, or taking too great a risk, might bring to you and others. And so on. Moral facts ‘on the ground’ will always be complicated, because reality is complicated. But reality still dictates what, in practice, is the right thing to do (the behavior you ought to prefer). Hence murder is only wrong because of what it does; yet what it does is determined by local physics, which can change.
What all these moral facts entail, namely the most preferable behavior in any particular circumstance, is therefore going to be defined by all these changing circumstances; e.g., does killing someone harm them? Hence, killing someone to effect a successful surgery on them is not considered murder but saving their life (Murphy 2014). And these determining circumstances can include limitations of knowledge, means, situation, physics, biology, economics, even social systems that you have no option but to work with. But that does not undo the fact that there are still universal moral facts of the matter (like avoiding unnecessary harm). And those facts might even be diverse. The imperative to ‘avoid unnecessary harm’ might be a first-order universal; but there might also be further universal moral facts derivable from it.
For example, all moral imperatives, indeed even all other virtues, might be subsumable under three basic moral virtues of reasonableness, compassion, and honesty, and these might be moral virtues (and thus entail moral facts, and thus moral imperatives) in every possible world. For example, we might imagine circumstances (however bizarre) in which the morally best behavior is to be unreasonable; but then that would by that very fact be the most reasonable behavior in that circumstance. Similarly, reasonableness mediates the other virtues (whereby too much or too little compassion or honesty is, in turn, ‘unreasonable’). But it may even be that the cascade of rational steps from ‘we ought to be rational’ to ‘we ought to determine what is preferable’ to ‘we would always prefer to live in a world where we gain more joys from consort with others than difficulties’ would land a rational person, in every possible world, with a superlative value for compassion; and that or other cascades may lead to a superlative value for honesty. Those would simply always be the best ways to live, to achieve the goals that we would always rationally prefer when available, in every possible world.
But these are all just working examples. My point at present is not that I claim to have solved or worked out or proved what is or is not moral here, but only that there is a fact of the matter to be solved or worked out or proved. Moral facts exist. And they exist in all possible worlds. And God has nothing to do with grounding or justifying them. We can be wrong about what they are, or unaware of what they are, or have ideas about them of mixed quality, but we know what it would take to perfect ourselves away from these defects. We know what we are supposed to be looking for, and that it does exist to be found.
For example, we once mistakenly thought it was best to deny women the vote, until we learned we were factually wrong about everything we thought warranted that; while at the same time, we remain rightly confident that those same reasons factually remain for denying toddlers the vote. We may have overlooked all manner of things that, once taken back into account, would change what we rationally conclude is morally best. But that would not negate the fact that there is a fact of the matter as to what is morally best. The fact that we observe this progress—more knowledge producing improved moralities (
Zuckerman 2008;
Pinker 2011;
Shermer 2015)—supports the conclusion: there is something to find, it derives from the facts of the world, and it is grounded in the inalienable properties of rational agency.
God’s only two possible roles here are that, being supposedly all-wise, all-knowing, and all-powerful, he could have (and morally should have, by any rational calculation of what ‘is’ moral) taught us the best moralities (and even the reasons for them, their actual grounds) everywhere, and from the earliest times (but did not:
Avalos 2005,
2013,
2015;
Carrier 2016). And God could have (and morally should have, by any rational calculation of what ‘is’ moral) fixed our world’s physics to have given us a far more moral and just world to live in (but did not:
Carrier 2020,
2023b;
Sterba 2019). But even had God done either, neither would ‘ground’ morality, but only casuistically shape its implementation. Just as we will one day be able to do when we can design our own worlds to live in (e.g.,
Carrier 2020).
So it remains necessarily the case that, for any rational agent, there will always be true moral facts, in every possible universe. And therefore God is not needed to ground them. Nor could he ground them. Because God cannot change what would be rational for a rational person to do. So there is no way God could ‘unmake’ the conclusion that it is always more rational to do less harm within the circumstances an agent is in, or the conclusion that only the ignorant, the irrational, or the insane will fail to agree with that.
3.2. Emotional Objections to Moral Facts
It will also be of no avail that you ‘do not like’ what the true moral facts turn out to be. Since they would be objectively true (they will be the only actual imperatives a rational agent truly will prefer to every other, and thus truly ‘ought’ to follow), your opinions to the contrary will simply be false. This means it is also of no avail to complain that, this being the case, moral facts are fundamentally ‘egoist’ (as in, deriving from what a rational agent would want most). Because if egoist moral facts are the only true moral facts, then your desire for ‘other’ moral facts is moot. It would then simply be true that morality is egoist, and therefore all other moralities are false. This objection usually trades on irrational conflations anyway. There is a fundamental distinction between selfishness (caring not for others) and self-interested selflessness (caring for others because it serves your own physical and mental wellbeing). And harming your own conscience and sense of self-worth is a self-interest (just as Kant argued and science has since confirmed, per above).
Theists cannot disagree. For if they claim moral facts are grounded in God because of how our behaviors affect our divinely mediated fate (whether the appeal be to heaven, death, hell, abandonment by God, or whatever the projected outcomes are supposed to be), then that is simply egoism—self-interest is grounding moral facts. If instead they appeal to how immoral behavior injures our minds or souls in some sense that we are supposed to care about, then that is
again egoism—self-interest is grounding moral facts. And so on. Theists are stuck on the horns of a dilemma here. They can try to find some self-interested reason to care about their moral facts (thus simply replicating the egoism they had been complaining about:
Carrier 2012, pp. 3–4), or they can abandon any reason to care about their moral facts (thus eliminating them from any hope of being true:
Carrier 2012, pp. 1–6). We cannot rationally believe moral facts are true ‘even when’ we have no rational reason to prefer them, because that is self-contradictory. Any moral facts that we
do have a rational reason to prefer would then, by definition, supersede those, and thus eliminate the theist’s moral facts from the category of being true.
But the complaint is unwarranted anyway. The fear, I suppose, is that if moral facts are egoist, then the only true morality will be ruthlessly selfish (something like the morality of Ayn Rand, say). Of course, if that is the moral truth, then it simply is the moral truth. Disliking it will make no difference to whether it is true. And all objections of this kind fall to this same point: no matter how weird you find the results, they remain necessarily true. The syllogism leading our Demonstration is valid and sound. If it should then entail that the true moral facts are in some way weirder or more vexing than we thought, then we were simply wrong about what was morally true.
But this worry is not well founded. The fact that theists (just like atheists) can adduce a plethora of reasons why a ‘ruthlessly selfish’ morality (or
any supposedly weird morality) would be bad suggests that the true moral facts are not ‘ruthlessly selfish’ (or in whatever way weird), but in fact are pretty much what most of us suspect they are: a rational interest in acting to produce a better world for everyone, including oneself. Moral truth
cannot be unnecessarily harmful or unreasonable, because any behavior that was unnecessarily harmful or unreasonable would fail to supersede behaviors less harmful or more reasonable. The
latter would then be the moral truth. And as it happens, the virtues of compassion and honesty, for example (and not a mere pretense of them),
are better for oneself (in fact essential to mental and physical wellbeing), as has been established by the findings of game theory and human psychology (
Carrier 2018, summarizing
Axelrod 1997;
Bergman 2002;
Fedyk 2017;
Churchland 2011;
Narvaez and Lapsley 2009;
Garrigan et al. 2018).
That this will be the case in every possible world can be deduced from a cascade of values derived from pure reason (
Carrier 2021;
Smith 1994). Purely rational agents with access to all options (e.g., agents who could rewrite all their emotional responses at will) will always choose compassion and honesty over discompassion and dishonesty because the one produces more joys and fewer miseries than the other, both internally (enjoying a more pleasant and contented conscience) and externally (enjoying a more cooperative society). You could doubt this, though it would be odd for a theist to doubt that moral facts are rational. But there are reasons to allay those doubts.
In the first case, becoming the sort of person you hate will rationally entail hating yourself (because inconsistency is irrational), leading to increasing personal discontent (a deprivation of Kant’s “sense of self-worth”), whereas becoming the sort of person you like will entail liking yourself, leading to increasing personal contentment (hence
Bergman 2002; and
Garrigan et al. 2018). There is no other logically possible way to access these preferable outcomes, other than by self-deception; but self-deception entails false beliefs, and no true moral facts can follow from false beliefs. False premises cannot secure true conclusions. And our only concern here is with what is
true; not what can be falsely believed.
Which is what theists are already arguing when they appeal, for example, to the role of human conscience and the damage immorality inflicts upon your inner person. That is the same thing; and will be the same in all possible worlds. For example, a hypothetical rational agent capable of choosing what emotions to have will always find contentment preferable to dissatisfaction, and will always find more and easier paths to contentment by cultivating compassion and honesty. This is why psychopaths are so discontented (
Love and Holder 2014;
Blasco-Belled et al. 2023;
Doren 1996;
Reid et al. 1986;
Sinnott-Armstrong 2007, pp. 119–296, 363–66, 381–82): their mental illness hinders their access to the joys of compassion and other virtues (directly, in lost affect; and indirectly, through a poverty of fulfilling or even effective relationships:
Seto and Davis 2021;
Martens [2006] 2014), and drives them to irrational behavior against their own self-interest (
Vaurio et al. 2022;
Beaver et al. 2014). This is also why it is imperative to be rational: because irrationality is always inevitably counter to self-interest (
Carrier 2011, pp. 426–27, n. 36). Irrationality therefore can never be imperative in any possible world.
In the second case, the net outcome of different arrangements of social interaction upon oneself is logically the same in all possible worlds—because there are only so many ways a society can interact, and the effects are always the same. For example, strict tit-for-tat reasoning always leads to inevitable death-loops of feuding, diminishing access to the benefits of cooperation, whereas strict leniency leads to inevitable death-loops of exploitation, also diminishing access to the benefits of cooperation (
Axelrod 1997;
Fedyk 2017). Computer models show that the ideal behavior in all possible systems is a ‘modulated’ tit-for-tat: always start nice and only punish defectors, but add in a rationally determined amount of proactive forgiveness (rather than
always punishing defectors, sometimes we should trust and cooperate instead of retaliating, i.e.,
sometimes we should meet hostility with kindness) and a rationally determined amount of targeted spitefulness (to defeat would-be manipulators of forgiveness by switching back to a ‘never-cooperate’ strategy with repeated defectors, i.e.,
sometimes we should never cooperate).
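The strategic logic just described can be sketched as a small iterated prisoner’s dilemma simulation. This is a minimal illustration, not a reconstruction of the cited models: the payoff values, the 10% ‘noise’ rate (modeling mistakes), the 30% forgiveness rate, and all function names are assumptions made for the sketch, and it models only the proactive-forgiveness component of the ‘modulated’ strategy (not the targeted-spitefulness switch).

```python
import random

# Illustrative payoff matrix for one round of the prisoner's dilemma
# (these exact values are an assumption for this sketch):
# (my payoff, their payoff), indexed by (my move, their move).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I am exploited
    ("D", "C"): (5, 0),  # I exploit
    ("D", "D"): (1, 1),  # mutual defection (a 'feud')
}

def tit_for_tat(history):
    """Cooperate first; thereafter copy the opponent's last seen move."""
    return history[-1] if history else "C"

def generous_tit_for_tat(history, forgiveness=0.3):
    """Like tit-for-tat, but forgive a seen defection some of the time."""
    if history and history[-1] == "D" and random.random() > forgiveness:
        return "D"
    return "C"

def play(strategy_a, strategy_b, rounds=300, noise=0.1):
    """Iterated game in which each intended move is flipped with
    probability `noise`, modeling mistakes and misunderstandings."""
    hist_a, hist_b = [], []  # what each player has seen the other do
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        if random.random() < noise:
            a = "D" if a == "C" else "C"
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

def average_joint_score(strategy, trials=20):
    """Mean combined score when a strategy plays against itself."""
    return sum(sum(play(strategy, strategy)) for _ in range(trials)) / trials

random.seed(0)
print("strict TFT vs itself:  ", average_joint_score(tit_for_tat))
print("generous TFT vs itself:", average_joint_score(generous_tit_for_tat))
```

Under noise, strict tit-for-tat pairs tend to lock into echoing retaliation after a single mistaken defection, while the generous variant recovers cooperation; that is the sense in which a ‘modulated’ strategy statistically outperforms the strict one.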
From this can be deduced the conclusion that compassion and honesty modulated by reasonableness will always produce the statistically best outcomes for you no matter what social system you are dropped into (
Axelrod 1997,
2006). And habituating in yourself the sentiments of reasonableness, compassion, and honesty will maximize your compliance with this strategy, and thus
ensure the best available outcomes for you, in terms of the goods you can acquire from or lose to any social system you are in. This conclusion is unchanged by occasional converse outcomes, since the reason this strategy is more rational is that it statistically produces better results; that it occasionally fails does not matter to the point, because there is no alternative strategy that would
reduce that rate of failure. Hence, the fact that being reasonably nice ‘can’ get you killed or robbed, for example, is irrelevant to the fact that there is no other strategy with a
lower risk of getting unjustly killed or robbed (or suffering some other commensurate deprivations). Any ‘tweak’ to the strategy that would mitigate that risk ends up right where computer models find it: modulated tit-for-tat. Trying to avoid worse outcomes (too many exploiters, or too many feuds) always ends at the peak available strategy, which is the one here described. And that strategy will be most reliably enacted by the reasonably compassionate and honest.
Therefore, the complaint theists have here is moot. Their own morality is either egoist or not true. And the findings of logic and science are that what we already tend to agree is moral is the best-performing egoist behavior. Compassion, honesty, and reasonableness lead both to your being more satisfied with who you are and who you have become and (more often) to better material outcomes for you in your interactions with society. That you can escape this correlation through irrationality (such as by adopting false beliefs about yourself, convincing yourself that you actually like mean and dishonest people or the poor outcomes you are producing, or that you are not cruel or dishonest or unreasonable when in fact you are) does not remove the truth of it (what you would think of yourself when your beliefs about yourself are true, and thus what is actually true about yourself). So insofar as what we want to know is what the true moral facts are, all the ways that truth can be avoided do not matter. What the true moral facts are remains as it is.
3.3. Analytical Concerns
Might my account of moral facts still be compatible with divine command theory? The answer depends on what one means by “divine command theory.” As explained in the foregoing analysis, as a rational agent God can discover, and thus could report to us, what the moral facts are. But those facts would not thereby depend on God. So they do not gain their status as moral facts simply by being God’s commands. Remove God from the system, and all moral facts remain the same, just as detailed in the foregoing analysis. God is not needed to command them. They would be discoverable by any rational agent, like human beings. God would only have the advantage of superior faculties, and hence greater epistemic access to what the moral facts are. But he does not ontologically create them, except insofar as he can change the physics that entails them (and thus change what they are). Yet, as explained in the foregoing analysis, that, too, does not make God necessary for the ensuing moral facts to be true, because those new moral facts still follow from the physics, not from God. If accidents, or humans, or anything else made those same changes to physics, and not God (for example in computer-simulated worlds, as just discussed), then the moral facts would likewise change in exactly the same way, and remain just as true. It does not matter where any world’s physics comes from: it always entails certain moral facts regardless. Thus God is never necessary for moral facts. Even if God necessarily existed (as unlikely as that may be), his existence would still not be necessary for moral facts to exist. They would exist anyway. Because (as demonstrated) they derive from other things.
Another question is whether we could argue that there must be other ‘defining’ features of moral imperatives than what is stated in my Premise 1, such as that moral imperatives must concern the wellbeing of others. But this mistakes the definition of moral facts (how we distinguish facts as moral) for the reality of moral facts (what actually turns out to be a moral fact). Whether ‘concern for the wellbeing of others’ is moral is a question of fact, not of definition. Because if some imperative supersedes that one, then that one by definition cannot be true—and therefore cannot be a fact at all, moral or otherwise (as previously explained). What the moral facts turn out to be is an empirical question, not an analytical one. Moral facts cannot be arbitrarily defined into existence, and therefore ‘what is’ moral cannot be part of the definition of moral facts qua moral facts. Whether moral facts do meet such a criterion of ‘concern for others’ (or indeed any other) simply has to be discovered. I presented in the foregoing analysis some evidence that concern for others is indeed a component of moral facts in this (and probably any) world. But it is analytically possible that, say, only selfishness is moral (and only Ayn Rand’s moral system is true). That possibility cannot be excluded by tautology. It can only be excluded by evidence. The same is the case for whether there are behaviors pertaining only to the self (like smoking or suicide) that are moral or not.
One might also ask whether this entire analysis of moral facts presupposes that moral facts must be rational. It does not presuppose this but entails it. Rational beliefs and desires are by definition those beliefs and desires that are arrived at from true facts without fallacy. Moral facts must be (or be capable of being) justified true beliefs. And true facts cannot be reliably ascertained by fallacy. Only rationally obtained facts can be regarded as ‘true’ (whether those be moral facts or any other). Indeed there is no way to reliably ascertain which facts are true, even to serve as premises in
discovering moral facts other than to do so rationally—as in, to do so without fallacies of logic. This is why, insofar as one is able, “it is imperative to be rational” (as was stated, with citation to a demonstration, in
Section 3.2). After all, if moral imperatives, to be moral facts, must supersede all other imperatives, then irrationally derived claims to moral fact can never be true, as they would always be superseded by imperatives that
are true: the ones derived
without fallacy. Hence “uninformed or illogical” moralities will always be superseded by informed and logical moralities (as was argued in
Section 2.2).
Another question is whether this analysis conflates or ignores the distinction between moral values and moral imperatives. Are those different kinds of moral facts? This was addressed in
Section 3.1 (with supporting examples in
Section 3.2, including citations to corresponding science). In the cited scientific literature, moral values are moral desires, and moral virtues are habituated (persistent) moral desires (and sometimes those terms are interchangeable). But moral facts can only meaningfully be defined as what we ought to do (imperative propositions). Moral values or virtues that entailed no imperatives would fail to be describable as moral. For example, if ‘knowledge is good’ entails no behavior in respect to knowledge, then that statement becomes irrelevant to morality. Moral virtues (as dispositions of character embodying moral values) do increase the probability of adhering to the moral imperatives that they entail (e.g., as already cited:
Narvaez and Lapsley 2009;
Bergman 2002;
Churchland 2011;
Garrigan et al. 2018). But they are not
analytically required for that. It is analytically possible (even if empirically unlikely) for someone to ‘behave morally’ absent any guiding values or virtues, simply by always obeying true moral imperatives. So the role of moral virtues is instrumental and not grounding. What makes a value or virtue ‘moral’ is precisely what behaviors it rationally impels (see the discussion of psychopathy in
Section 3.2 and of goals as value cascades in
Section 2.2). Hence, as already discussed, it may even be a moral imperative that we “habituate” moral virtues (hence moral desires or ‘values’) in ourselves (like “the sentiments of reasonableness, compassion, and honesty,” per the conclusion of
Section 3.2). Thus virtue theory itself reduces to a system of hypothetical imperatives (
Foot 1972;
Carrier 2015b). Even when we speak of ‘moral persons’ we are speaking of persons with dispositions to behave morally. Behavior thus defines what is moral.
Another analytical concern is whether ‘imperatives’ can even be true or false in the grammatical sense (like the sentence ‘feed your child’), such that we should instead be talking about ‘obligations’ or ‘duties’ (which then entail imperatives). But as stated at the end of Section 2.2, the word ‘imperative’ here is not the grammatical but the philosophical term for ‘imperative propositions’, e.g., ‘hypothetical imperatives’ and ‘categorical imperatives’, which can be true or false; indeed the conditions of their being true or false are discussed in the literature on them here cited. As such, duties and obligations are just tautological restatements of imperatives, not distinct assertions. The sentences ‘you ought to feed your child’ and ‘you have a moral obligation to feed your child’ are simply different ways to state the same imperative proposition. So this leaves no distinction relevant to the present analysis.