Article

The Absurdity of Rational Choice: Time Travel, Foreknowledge, and the Aesthetic Dimension of Newcomb Problems

by Craig Bourne 1,* and Emily Caddick Bourne 2
1 Philosophy, School of Creative Arts, University of Hertfordshire, College Lane, Hatfield AL10 9AB, UK
2 Philosophy, School of Social Sciences, Faculty of Humanities, University of Manchester, Manchester M13 9PL, UK
* Author to whom correspondence should be addressed.
Philosophies 2024, 9(4), 99; https://doi.org/10.3390/philosophies9040099
Submission received: 30 April 2024 / Revised: 18 June 2024 / Accepted: 28 June 2024 / Published: 6 July 2024
(This article belongs to the Special Issue Time Travel 2nd Edition)

Abstract:
Nikk Effingham and Huw Price argue that in certain cases of Newcomb problems involving time travel and foreknowledge, being given information about the future makes it rational to choose as an evidential decision theorist would choose. Although the cases they consider have some intuitive pull, and so appear to aid in answering the question of what it is rational to do, we argue that their respective positions are not compelling. Newcomb problems are structured such that whichever way one chooses, one might be led by one’s preferred decision theory to miss out on some riches (riches which others obtain whilst employing their preferred decision theory). According to the novel aesthetic diagnosis we shall offer of the Newcomb dialectic, missing out in this way does not render one irrational but, rather, subject to being seen as absurd. This is a different kind of cost but not one that undermines one’s rationality.

1. Introduction: Time Travel and Rational Decision

There is a tradition, in thinking about rational choice, of taking Newcomb cases as test cases for the (in)correctness of the competing recommendations generated by alternative decision theories. We shall argue that Newcomb cases cannot play such a role.
Some of these putative test cases involve elements of time travel and foreknowledge. Although we might wonder whether such cases are too outlandish to tell us anything about rational choice in everyday situations, it has been proposed that they reveal features of the mechanics of different decision theories, which might then help us to evaluate those theories. We agree that these cases help us to uncover something interesting about rational choice. But their interest does not lie in testing what it is rational to choose. What they help to illustrate, we shall argue, is how rationality is subject to being made absurd.
Let us begin with an engaging case, presented by Nikk Effingham [1], of an agent who confronts a predicament arising in virtue of time travel:
WAR:
‘Imagine my future self appears, dying of radiation sickness. Before dying, he (me?) reveals that a nuclear war will kill us all. Time cannot be changed, so war looks likely. But if you use excessive means (e.g., torture/hypnosis) to make me believe a war has taken place (when none had!), poison me with polonium, and send me back in time, we’d have a consistent scenario but everyone (except myself) gets to live. Is it rational to use excessive means?’ [1] (p. 176)
Effingham’s answer is ‘yes’. The agent here appears to have a sort of opportunity we do not ordinarily have. They get to fiddle the evidence. The agent who chooses to use excessive means aims to bring it about that the testimony they receive from the time-travelling Nikk is not, as it initially seems to be, evidence of a nuclear war. A similar scenario takes place in Nacho Vigalondo’s 2007 film Timecrimes (Los cronocrímenes), in which the time-traveller Hector sees what appears clearly to be his wife falling from a roof but travels back in time and, by disguising another woman as his wife, arranging for her to be on the roof at the correct time, and so on, brings it about that he witnesses this other woman’s death.
Just what is shown by the special relationship that time travel allows to hold between agents’ decisions and what they have evidence of? In his arguments for causal over evidential decision theory, David Lewis criticises the evidentialist for endorsing ‘an irrational policy of managing the news so as to get good news about matters which you have no control over’ [2] (p. 5). Hector, and Nikk’s poisoner, have an opportunity to do something that looks a little like ‘managing the news’. You do what you can to make it the case that you have not in fact discovered that there will be nuclear war; Hector does what he can to make it the case that he did not in fact see his wife fall from the roof. But in these situations, the distinction between acting so as to affect the evidence we have and acting so as to influence the events that are the source of that evidence appears to collapse: the agents are bringing it about that the evidence they have is evidence of one thing rather than another. Do scenarios like these, then, favour acting as an evidential decision theorist would, and escape Lewis’s chastisement of ‘managing the news’? Effingham thinks they do. In holding that you should use excessive means, he maintains that the kinds of choices which are ordinarily favoured by causal decision theory end up being the wrong course of action in time-travel scenarios.
The puzzle can be generalised: it concerns the impact for decision theory of scenarios in which agents have out-of-the-ordinary access to information. Scenarios involving time travel and foreknowledge provide us with peculiar ways of obtaining information about the past and future outside the normal order in which information becomes available to us. The view that Effingham puts forward is that these ways of obtaining information have an impact on what choice it is rational to make in so-called Newcomb cases. A related argument has been made by Huw Price [3], who contends that we can uncover a deep problem for causal decision theory by reflecting on what it is rational to do in scenarios in which we are given supernatural foreknowledge.
Newcomb’s problem is a puzzle about rational choice in which causal decision theory (CDT) and evidential decision theory (EDT) famously make divergent recommendations. We begin by outlining the traditional Newcomb problem, which will allow us to frame more precisely the distinct but related aims of Effingham’s proposal, Price’s proposal, and ultimately, the contrary proposal that we will make.

1.1. The Traditional Newcomb Case

In the traditional Newcomb case, a person P is given two choices for what to do with two boxes. One of the boxes is transparent and can be seen to contain £1000. The other is opaque. P can choose to take only the opaque box (known as ‘one-boxing’) or to take both boxes (known as ‘two-boxing’). Whether the opaque box contains £0 or £1,000,000 is determined by a prediction, made by some independent prediction mechanism with a good success rate, of whether the person will take the one box or the two. If it is predicted that the person chooses to one-box, £1,000,000 is placed in the opaque box. If it is predicted that the person chooses to two-box, the opaque box is left empty. Given the good success rate of the prediction mechanism, then, we can accept that there tends to be £1,000,000 in the opaque box in cases where the chooser one-boxes, and there tends to be nothing in the opaque box in cases where the chooser two-boxes.
Newcomb’s problem asks whether it is rational to one-box or rational to two-box. And we can explicate a rationale for either choice. Informally, the rationale for one-boxing is that if P one-boxes then the prediction mechanism, being as reliable as it is, will probably predict that P one-boxes, in which case P will go home with £1,000,000; whereas if P two-boxes, then the prediction mechanism will probably predict that P two-boxes, in which case the opaque box will be empty and P will go home with £1000. So, P is better off as a one-boxer than a two-boxer. The rationale for two-boxing, however, is that regardless of whether the opaque box contains £1,000,000 or contains nothing, by two-boxing P will obtain that plus £1000. So, P is always going to be £1000 better off by two-boxing than they would have been by one-boxing. Debate over the correct course of action in Newcomb’s problem hinges on which of these competing ways of evaluating the situation is preferable.
Newcomb’s problem divides two varieties of decision theory based on whether we think that the strategy deployed by a rational agent ought to take into account the causal information that the prediction does not depend causally on the choice the agent makes. Causal decision theory (CDT) takes into account the information that the contents of the boxes are causally independent of the agent’s choice, whereas evidential decision theory (EDT) treats this causal information as irrelevant to rational choice. The dispute is over which of these two conceptions of rational action is the better one. Evidential decision theory takes rational decision to respond to the fact that making the choice to one-box is a good indicator that you will receive a million. Causal decision theory takes rational decision to respond to the fact that what you choose has no influence on what is in the box. The result is that CDT traditionally recommends two-boxing, whilst EDT traditionally recommends one-boxing.
Most commonly, Newcomb’s problem is set up under a description which says that the prediction happens before P makes the choice. But this is not essential (Lewis [4] (pp. 236–237); see also note 4). What is essential is that we accept that what ends up in the box (£1,000,000 or nothing) is causally independent of the chooser’s choice—without this, the problem will not distinguish the causal and evidential theorist based on the choices they make. (As we shall see in Section 3.1, they may still be distinguished in terms of the reasons for their choice, even without the assumption of causal independence, but set this complication aside for now.) It is also common to set up Newcomb’s problem under a description which says that the prediction mechanism is very good—perhaps 90% or more of its predictions are correct, for example. This is a useful way of dramatizing the conditions of the problem, but it is not essential. What is essential is just that the prediction mechanism be reliable enough that the expected utility of one-boxing is greater than the expected utility of two-boxing when causal information is not taken into account. With this in place, Newcomb cases prise apart two different conceptions of rational decision by setting up a scenario in which those two conceptions will lead to two opposing recommendations for which choice to make.
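To make that reliability condition concrete, here is a minimal worked version of the threshold, using the payoffs above (in pounds) and writing p for the prediction mechanism’s success rate; the algebra is our illustration rather than part of the standard statement of the problem:
\[
\begin{aligned}
V(\text{one-box}) &= p \times 1{,}000{,}000,\\
V(\text{two-box}) &= (1-p) \times 1{,}000{,}000 + 1{,}000,\\
V(\text{one-box}) > V(\text{two-box}) &\iff p > \tfrac{1{,}001{,}000}{2{,}000{,}000} \approx 0.5005,
\end{aligned}
\]
where V is expected utility calculated without taking causal information into account. Any success rate above roughly 50.05% therefore suffices; the 90% figure is purely for dramatic effect.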

1.2. The Debate over Time-Travel and Foreknowledge Cases

As we shall see, both Effingham and Price think that the causal decision theorist should come round to the evidentialist’s recommendation in cases of information from the future. However, there is an important difference in the scope of their claims. As Effingham presents the claim, there is something special about time-travel cases which means that the strategies of CDT and EDT generate the same recommendation in time-travel Newcomb scenarios, whereas they generate opposing recommendations in other Newcomb scenarios. So, for Effingham, a causal decision theorist should do what an evidentialist does in time-travel scenarios—they should do the equivalent of ‘one-boxing’ rather than ‘two-boxing’—but they may do so for causalist reasons. It is not that time-travel cases undermine CDT, but rather that in time-travel cases, the two conceptions of rationality lead to the same action. We discuss these arguments in Section 3 and Section 4. For Price, by contrast, cases of information from the future do undermine CDT, thus revealing something about what is the correct standard for rational decision in general. These cases point toward a problem in CDT’s conception of rationality and a way in which it should be modified (into the alternative conception of rational decision which Price coins as ‘EviCausalism’). We discuss those arguments in Section 4. As we shall see, though, this difference between Effingham’s and Price’s aims is underscored by a key parallel. Both proceed via a striking claim about causation: that causal dependence is sensitive to what we have reason to believe will happen.
Our view is that the impact on rational choice of these peculiar ways of obtaining information—via time travel and foreknowledge—is less significant than both Price and Effingham suggest. We will argue that it is cogent to maintain both the causalist conception of rationality, and traditional causalist recommendations to ‘two-box’, in time-travel and foreknowledge cases. Our objective is not, however, to argue that traditional CDT is the correct decision theory. It may be, but what we are attempting here is not a first-order intervention into decision theory. Instead, we intend to draw some second-order conclusions about the nature of the disagreements that Newcomb cases prompt. In particular, we shall propose that there is an aspect of how rationality and irrationality are to be experienced and assessed which Effingham’s and Price’s treatments do not sufficiently address. Choice, including rational choice, can be experienced as absurd or ironic. Evaluating something as absurd or ironic is, we shall say, to have an aesthetic response to it; such evaluations are akin to (and, indeed, form part of) regarding things as comic or tragic.1 Newcomb problems are, on the view we shall propose, in large part aesthetic problems, for a distinctive feature of Newcomb scenarios is that they subject both sides to making choices that are vulnerable to being evaluated as absurd or ironic. This is decidedly not the same thing as being evaluated as irrational. Price and Effingham’s proposals underrate the extent to which the intricate, peculiar features of time travel and foreknowledge inflect, specifically, these aesthetic features of rational decision. Both proposals are too quick to conflate heightened absurdity with irrationality.

2. The Newcomb Dialectic

We shall begin by setting up in some detail what we think is interesting about traditional Newcomb cases. Although this will mean that we do not talk about time travel again until Section 3, what we have to say here will be key to how we assess Newcomb cases involving information about the future.

2.1. Burdens of Proof and Embarrassing Questions

The aspect of Newcomb cases we shall be most interested in here is the nature of the dialectic they produce. This dialectic has a number of significant features:
First, each strategy is irrational from the other’s point of view. By the one-boxer’s conception of rationality, the two-boxer is irrational not to act in the way that tends to make the chooser £1,000,000 better off. By the two-boxer’s conception of rationality, the one-boxer is irrational to turn their back on a free £1000 that is there for the taking. Moreover, whilst it is clear why each side has reasons from their own point of view for their choice, it is not clear that anything could be said to make one side’s reasons compelling to the other side. When the two-boxer articulates their reasons to the one-boxer or vice versa, they simply articulate what is, from the other’s point of view, an irrational decision-making strategy. Lewis describes the debate between causal and evidential decision theories with respect to Newcomb problems as ‘hopelessly deadlocked’ [2] (p. 5). We think a helpful way to understand the deadlock is in terms of the incommensurability of strategies. Both sides have reasons by their own standards, but neither’s reasons are reasons by the internal standards of the other’s conception of rationality.
Although incommensurability prevents the one-boxer and the two-boxer from being compelled by the reasonableness of each other’s strategies, each might attempt to convert their opponent in a slightly different way: by embarrassing them into surrender. The evidentialist can raise a potentially embarrassing question for the causalist: the famous taunt ‘If you’re so clever, why ain’t you rich?’ (Like we one-boxers are.) But the causalist can raise a potentially embarrassing question for the evidentialist: ‘If you’re so clever, why ain’t you richer?’ (Look at that £1000 you didn’t take.)
Neither side’s attempt to embarrass has obvious priority. As we have narrated it, the causalist’s question looks like a retort to the evidentialist’s. In Robert Sugden’s statement of the ‘why ain’t you rich?’ question, on the other hand, the evidentialist’s question is framed as a retort to the causalist’s. In Sugden’s example, Irene, an evidentialist, has taken £1,000,000 by one-boxing, and Rachel, a causalist, has taken £1000 by two-boxing, and Irene is first challenged by Rachel, who asks her why she didn’t take the second box as well, for ‘surely Irene can see that she has just thrown away’ £1000 ([5] (p. 342)). But, he continues, ‘Irene has an obvious reply: “If you’re so smart why ain’t you rich?”’ [5] (p. 342).
As James M. Joyce observes, this retort is no answer:
‘Rachel asks Irene why she (Irene) didn’t take the extra thousand. Irene replies by asking Rachel why she (Rachel) isn’t rich. This answer is, as lawyers say, “nonresponsive”. Irene has replied to Rachel’s question with an ad hominem that changes the subject. “I know why I’m not rich, Irene”, Rachel should respond; “my question had to do with why didn’t you take the [£]1000.”’
[6] (p. 152)
The same can be said for the dialectic as we narrated it, where the evidentialist’s embarrassing question comes first and the causalist’s in reply. In terms of burden of proof, it matters little who speaks first; the supposed ‘burden’ can be shifted back and forth across questions that, if they serve as requests for justification, talk past each other. Of course, when trying to embarrass another person whilst simultaneously upholding one’s own dignity in the face of the threat of embarrassment, there may be more broadly rhetorical advantages to speaking first or to speaking second—although either position can usually be turned to the speaker’s advantage. Perhaps the respondent looks as if they are on the back foot; or perhaps, by equalising the embarrassment, they look as if they are deflating the significance of what their opponent has said. What is important for our purposes is that in Newcomb scenarios, we have two conceptions of what is rational, each of which has some claim to be viable in other scenarios, and each of which renders the other’s choice irrational, and because of this, each side can attempt to make the other’s choice look foolish. Embarrassing questions surrounding Newcomb strategies have less in common with genuine cases of argument and counter-argument (which trade in demands for justification) than they do with this exchange:
You’re an idiot!
You are, you mean!
Oh, are you talking about yourself?
What, like you were?
This is not to say that Newcomb exchanges are entirely devoid of argumentative significance. Each side holds ground because they have been given no reason to defect that does not assume the other’s point of view, and that in itself is argumentatively significant. But to approach them as battles to transfer the burden of proof would misconstrue their basis. Neither is it to say that Newcomb exchanges are not clever. They are. (Even the ‘you are’ exchange, though juvenile, is not artless.) But they are clever in that each party needs an understanding of their own standards and the other’s standards in order to find the aspects of difference which afford ways to make the other’s action look foolish; more specifically, as we shall elaborate on in a moment, look absurd. Another good comparison for Newcomb exchanges is the retort ‘you would say that’:
You’re irrational!
Well, you would say that, wouldn’t you!
Well, you would say that, wouldn’t you!
Although it is not impossible for such utterances to shift a genuine burden of proof, they can also function purely by shifting the spotlight from one of two incommensurable frameworks to the other. When acting within either framework can be made absurd, this shift of attention is a shift in whose absurdity we are witnessing, and that is a spotlight most of us do not particularly enjoy being under.

2.2. Newcomb, Irony and Absurdity

One-boxers and two-boxers do not limit their trade-off to rhetorical questions. Lewis [7] has an alternative retort to the evidentialist’s taunt of ‘why ain’t you rich?’ He is not rich because ‘riches were reserved for the irrational’ [7] (p. 377). Here, Lewis follows Gibbard and Harper [8] in maintaining that all that is shown by the distribution of gains in the Newcomb problem is that its set-up favours irrationality (in the sense that when we compare the resulting wealth of two-boxers with that of one-boxers, the two-boxers have come off the poorer). To be a rational person is to suffer missing out on the prizes a Newcomb game will give you for getting things wrong. The same response is available to the evidentialist when the causalist asks them: ‘If you’d have chosen like me, you’d have £1,001,000—so tell me, if you’re so clever, why ain’t you richer?’ The evidentialist can say, ‘Because being richer is reserved for the irrational’. From the evidentialist’s point of view, the Newcomb set-up is such that the only way you could get everything that might be available (£1,001,000) is by being irrational, and so, again, the rational person just has to live with missing out. If they wish, each side can also add something to ‘because that is reserved for the irrational’ which reasserts their own standards and shifts the spotlight to their opponent. The causalist can say, ‘I’m not rich because that is reserved for the irrational—and at least I didn’t choose like you, else I would have been even worse off, whereas you would have been even better off if you’d chosen like me.’ The evidentialist can say, ‘I’m not richer because that is reserved for the irrational—though I notice they seldom get it, even though they were warned in advance that two-boxers get empty boxes while one-boxers (like me over here) rake it in.’
In saying that riches were ‘reserved’ for the irrational, Lewis is not, of course, suggesting that either the predictor (which need not be an agent at all) or the wider world has an actual penchant for irrationality and has deliberately chosen to reward it. But the phrase captures the fact that the set-up of a Newcomb game—the choices available and the mechanisms in play—is such that we might as well be in a world that favoured irrationality, disfavoured rationality, and distributed rewards accordingly.
Compare this scenario: you come across a ladder propped across the path with plenty of space to walk under comfortably. Suppose that you are not superstitious. Suppose you reject superstition: from your point of view, it is a serious failing of rationality to take good and bad luck as things that are distributed according to what we do with ladders, mirrors, salt, and so on. You also have no reason that is cogent from your own point of view to avoid the ladder—for example, there is nobody up the ladder with a can of paint that might fall on your head, and so on—and there are some non-negligible benefits to going under the ladder (a little time will be saved, it would be an inconvenience to walk around the ladder, etc.). But you are with your superstitious friend, who warns you that you should not walk under the ladder, since this brings bad luck. Since from your point of view the friend’s recommendation is irrational and walking under the ladder is rational, that is what you do. But as soon as you have been under the ladder, you trip and fall over; your keys bounce down the drain, your nice hat rolls off under the wheel of a car and is squashed, and your palms are all grazed. ‘You see?’ says your friend as they help you up. Then a bird shits on you. ‘Well, at least that will bring you some good luck,’ says your friend.
Say what you like—keys are reserved for the irrational, hats are reserved for the irrational—you look ridiculous. Why? You haven’t done anything wrong. But you have been made an object of irony. Bad timing, bad paving, but not bad choice; your situation is ridiculous not because of what you did but because of what it ended up looking like. Although people are often embarrassed by falling over in public anyway, the particular quality of the embarrassment, frustration, or amusement in this case attaches to the irony of your having bad luck after choosing to walk under the ladder. This irony is enabled because another, conflicting model of rational choice exists (according to the superstitious proponent of this model, at least) which cautions against such choices, and which has been made very audible to you at this point. We will treat this as a case of absurdity, saying that cases like this reveal that in certain circumstances, rational action is made absurd. (Which does not mean it should not be chosen—it should, since that choice is the rational one.)
We can distinguish a few loci of absurdity:
(a)
You acted (from your point of view) with rational competence, yet the world looks as if your (from your point of view, rationally incompetent) opponent got things right. Moreover, it is your rational action that has brought it about that the world looks as if your opponent got things right.
(b)
Events that befall you are such that they would be explained if, even though your opponent gets things wrong and you get them right, the vindictive world had taken an opportunity to punish your competence.
(a) and (b) compare the actual world to two different non-actual worlds, one ‘magical world’ in which superstition is rational and an ‘antirational world’ in which superstition is irrational but is rewarded for this. These are incompatible, so in one sense, the world cannot look as if both of these things were true. But note that although the world therefore does not look as if it is simultaneously magical and anti-rational, the world can simultaneously look as if it is magical and look as if it is anti-rational.
(c)
On both (a) and (b), you might further look like somebody whose hubris was duly pricked. In option (a), perhaps your downfall is a fitting rebuke to your obtuseness in sticking to a perspective that was irrational. On option (b), perhaps it is a fitting rebuke to your smug intellectual satisfaction in your rationality.2
We propose that there are some illuminating parallels between this kind of scenario and a Newcomb scenario. For a Newcomb scenario is ideally constructed to make rational choice absurd. It is important, though, that we are not proposing a total isomorphism between the ladder case and Newcomb cases. The superstitious person presumably posits causal connections between walking under a ladder and having bad luck, whereas in the traditional Newcomb case, everyone agrees that the choice does not causally influence what is in the box. Likewise, in the ladder case, the two parties will disagree in their predictions (the superstitious person will predict an instance of bad luck, whereas you will abstain from such a prediction). In Newcomb cases, on the other hand, the one-boxer and two-boxer can agree in their predictions of what the one-boxer and the two-boxer are likely to receive.
Since causalist and evidentialist reasoning agree in what they predict for the one-boxer and the two-boxer, it would be a mistake to apply (a) by saying that the outcomes look more like the world works according to an evidentialist than a causalist framework, or vice versa. However, there is something in the vicinity of (a) that helps to characterise what is ridiculous about the two-boxer’s situation from the two-boxer’s point of view. The association of being rational with coming out of a situation better off is enough to construe the situation as one that looks like the one-boxer made the rational choice. Moreover, the two-boxer’s own rational decisions ironically contribute (assuming they do receive an empty box) to creating that impression of their own irrationality. Both the two-boxer and one-boxer are positioned to experience their outcomes as ridiculous in the sense of (a), once we specify what aspect of the situation to focus on. The one-boxer is susceptible to absurdity of type (a) when we focus on whether a person receives everything that is available to them, and the two-boxer is susceptible when we focus on whether a person receives very much. Assuming the one-boxer does receive £1,000,000, the absurdity of type (a) is deferred until we narrow the focus accordingly, but it is worth noting that they also risk immediate absurdity—if they are unfortunate enough to be one of the few who takes the one box and finds it empty, they really will look stupid. Look stupid—not be stupid. That is what is so stupid about it. They have made themselves look like somebody who got it wrong precisely by (from their own point of view) getting it right. Presumably (c), the sense of deflated hubris, will also be particularly strong in this kind of case. (Who could hold back a laugh at this bitter case of rationality turned against its agent?) As for (b), this parallels the sense of rewards being ‘reserved for the irrational’ in Newcomb cases. From either point of view, a Newcomb situation is set up so that at some point, the reason we miss out on something is because we choose (from our own point of view) rationally.
This brings us to an element of absurdity in Newcomb cases that goes beyond what we find in the case of the ladder. In Newcomb problems, nobody comes out unscathed. Each party is vulnerable to looking like someone who has lost out through poor decision and hubris when (from their point of view) they have not. The two-boxer does not think they have lost out on £1,000,000, but given that they are aware of the one-boxer’s perspective, they can appreciate themselves as looking like someone who was so belligerent they missed a decent opportunity for a million. Given the two-boxer’s perspective, the one-boxer can appreciate themselves as looking like someone who was so inexplicably obtuse as to turn up their nose at a guaranteed, no-strings-attached £1000. The non-superstitious person who walks under the ladder has not been proved wrong by their bad luck, and they should not rethink whether they acted rationally and reasonably. But to fail to see that they look like a person who thought they were clever and was brought down by fate would be a failure of aesthetic sensitivity. They got things right, but they look bad; that is why it is an embarrassing situation, or a funny situation, or whatever particular response the absurdity generates in a particular person in the particular circumstances. The significance of the Newcomb case is that it makes everyone look bad even if they’re doing everything right. It is tailor-made for ironizing rationality.

3. Causal Dependence in Time-Travelling Newcomb Cases

3.1. Effingham’s Proposal: CDT Recommends ‘One-Box’ Choices in Time-Travel Scenarios

Recall that in the WAR example, Effingham thinks that when we receive from the time-travelling Nikk a message that there will be a nuclear war, it is rational to give Nikk a delusional belief that nuclear war has taken place and send him back in time to deliver the message—thus creating a causal loop within which Nikk’s message has an explanation that does not require a war really to take place. Similarly, Effingham thinks it is rational to cut off your thumb in the following case:
THUMBS:
‘[Daniel] Nolan imagines that you time travel to the past to a rather unusual archaeological dig where whatever you unearth you get to keep. You are particularly keen on getting hold of a particular statue. Checking the aged remains of tomorrow’s (probably, but not certainly, veridical) newspaper brought back from the future, you discover that a person with one thumb will discover the statue. You value the statue more than your thumb; you see everyone else appears to have their thumbs; you have a pair of hedge clippers with you. Do you then cut off your thumb?’ [1] (p. 194)
Effingham asserts that these are Newcomb cases and proposes that those who one-box in the standard Newcomb case will use excessive means in WAR and cut off their thumb in THUMBS.
We may be struck by a disanalogy. The person who cuts off their thumb takes it, presumably, that in doing so they create a cause of the newspaper report saying what it does about thumbs, and the person who poisons Nikk takes it that they thereby affect how his testimony is caused. In the classic Newcomb case, however, the one-boxer does not take their choice to be causally responsible for the prediction that they one-box.
But notice that the reason this is important in the classic Newcomb case is that it is the prediction that causes the box to contain, or not, £1,000,000. For the choice to causally influence the prediction would therefore be for it to causally influence what is in the box. Thus, the disanalogy with the classic Newcomb case is mitigated by the fact that THUMBS is not described in such a way that taking the cutting off of one’s thumb to cause the newspaper report commits one to taking it to cause the finding of the statue (which is the analogue of getting the £1,000,000). In other words, there is space to cut off one’s thumb for genuinely evidentialist reasons. The evidentialist may indeed believe that by cutting off their thumb they create the facts that, (if and) when they find the statue, will cause the newspaper report to say that the finder has one thumb rather than two. What makes them an evidentialist is that they do not need any further causal beliefs in order for their desire to find the statue to rationalize (from their point of view) their cutting off their thumb.
We said earlier that deluding the time-traveller looks a little like ‘managing the news’, and the same goes for cutting off your thumb. So, we have three cases where the choice affects what the person does or does not have evidence of. One-boxing in the standard Newcomb case gives me evidence that my box has £1,000,000 in it. By deluding the time-travelling Nikk, I prevent his testimony from being evidence that there is a war. By cutting off my thumb (we will call this ‘one-thumbing’), I prevent the report from giving me evidence that the statue is not found by me. Now, ordinarily, from the point of view of causal decision theory, controlling whether you have evidence of O is an irrational response to O’s value or disvalue when it has no causal impact on whether O. But Effingham argues that in time-travel scenarios, controlling whether you have evidence of O can be causally responsible for whether O. So, those who two-box in the standard Newcomb case should join the one-boxer in using excessive means and cutting off their thumb, because the recommendations of causal and evidential decision theory align in these time-travel cases.
Now, clearly we have the potential for causal loops when our ‘one-boxer’ one-thumbs in THUMBS or uses excessive means in WAR. In THUMBS, the reporting of the discoverer having one thumb causes me to cut off my thumb, and if I do turn out to be the person who finds the statue, my having cut off my thumb causes the report to say the discoverer had one thumb. Nikk’s report of nuclear war causes me to use excessive means on Nikk, which causes Nikk to report a nuclear war. But Effingham’s claim is more novel than this. He argues that there are further interesting instances of causal dependence in play: the causal dependence of finding the statue on cutting off your thumb, and causal dependence of peace on poisoning the time traveller.
At this point, we might wonder whether Effingham has undermined his claim that these are Newcomb problems. Note the difference in what the one-thumber may say here compared to the ‘one-boxer’ as traditionally conceived. The one-thumber can claim that they cut off their thumb to acquire the statue. It would be misleading, though, for a one-boxer in a traditional Newcomb case to say that they one-boxed in order to receive a million (perhaps it would be more accurate to say that they one-boxed given that those who have a million in their box tend to be one-boxers). We could say that THUMBS and WAR no longer really count as Newcomb cases for that reason. Joyce places this kind of restriction on Newcomb cases: ‘It is part of the definition of a Newcomb problem that the decision maker must believe that what she does will not affect what [is] predicted’ [6] (p. 152). By extension, we might say that if it is appropriate for the agent in THUMBS (or WAR) to represent their choice as causing their discovery of the statue (or averting the war), then they do not really face a Newcomb problem.
Similarly, we might think, the traditional Newcomb problem collapses if we treat it as just a case of causation. That is, if we suppose the ‘prediction’ mechanism to be of some kind that secures the causal dependence of the contents of the box on the chooser’s choice—such as foresight, i.e., perceptual access to the future choice, or some kind of communication with future persons who witness that choice—then the chooser can cause the box to contain £1,000,000 by choosing to one-box.3 Then, the case seems no longer to divide CDT and EDT in any interesting way. In fact, though, this is not quite true. There remains the difference that the causal decision theorist thinks this information about the causal relations is important, whereas a genuine evidential theorist will not need information about whether the contents of the box causally depends on their choice in order to one-box. It is perhaps for this reason that Price—who, as we shall see in Section 4, wants to make a revision, similar to Effingham’s, to how we conceptualise even the standard Newcomb cases—labels Joyce’s restriction a purely ‘terminological matter’ that does not destabilise the important point about how causal decision theorists should reason [3] (p. 510).4 In fact, then, it is one-thumberCDT who may express themselves by saying they one-thumbed to acquire the statue; one-thumberEDT may better say that they one-thumbed given that the finder of the statue has one thumb.
So, let us grant Effingham his claim that there can be Newcomb cases in which causalists should behave as evidentialists do but for causalist reasons. His goal is then to motivate causalist ‘one-boxing’, in cases like THUMBS and WAR, through an argument for causal dependence, which proceeds via two further cases. The first is a version of a grandfather paradox, although Effingham (presumably focussing on the removal of the conditions of one’s own existence) labels it ‘Autoinfanticidal’:
AUTOINFANTICIDAL:
‘I plan to time travel and kill Pappy in 1930. You are certain that: I am in rude health; Pappy is my grandfather; Pappy will not rise from the dead three days hence; I will not change my mind freely etc. … ‘Unsavoury Bets Inc.’ are offering odds of ten million to one that I succeed’ [1] (p. 184)
Unlike the cases above, this generates a strong intuition about what is rational. As Effingham puts it, ‘Since you know that it’s a metaphysical impossibility that you come away with more money, you would be out of your gourd if you gambled—no-one can recommend gambling one’s life savings when one knows that they’re going to lose’ [1] (p. 184). On the other hand, Effingham points out, if you were given the opportunity to bet on my killing Pappy tomorrow, and had all the same information (rude health, grandfather, cannot resurrect, resolute will to kill, etc.), then you would be rational to place the bet, since you would expect me to succeed. In the time travel case, what is to be expected is not that I kill Pappy, but rather, that I face the kind of fluke accident Lewis [11] talks about in grandfather cases—the gun jams, I trip, I collapse despite being otherwise healthy, and so on. These are amongst the ‘commonplace reasons’ Lewis refers to when he says:
‘Since Tim [a time-travelling grandson who goes to 1921 with murder in mind] didn’t kill Grandfather in the “original” 1921, consistency demands that neither does he kill Grandfather in the “new” 1921. Why not? For some commonplace reason. Perhaps some noise distracts him at the last moment, perhaps he misses despite all his target practice, perhaps his nerve fails, perhaps he feels a pang of unaccustomed mercy.’5
[11] (p. 150)
Effingham’s point is that whatever it happens to be, we can be sure something will go wrong. He argues that this shows something about which counterfactual judgements are pertinent to rational choice. We can put his point like this:
‘If I were to try to kill Pappy, I would succeed’ is true if what we hold fixed in evaluating the counterfactual is: my constitution, Pappy’s constitution, the state of the gun, the fact that I have the ability to walk up to Pappy, raise the gun and pull the trigger, my desires and states of mind, my disposition to remain calm under pressure, and so on.
‘If I were to try to kill Pappy, I would fail’ is true if what we are holding fixed is that Pappy lives.
Since it is plainly irrational to bet on my successfully killing Pappy, the latter counterfactual is the one that governs rational choice. So, Effingham suggests, when assessing rationality in time travel cases, we should order worlds according to the principles of ordering that are required to make that counterfactual come out true.
Now, Effingham notes, worlds in which I fail to kill Pappy may be quite distant. For it may be that they ‘are far away in virtue of low-chance, remarkable events occurring at them’ [1] (p. 185). Some of the ‘commonplace reasons’ may be low-chance and remarkable in this way—a well-kept gun jamming, a fluke collapse at the crucial moment, and so on. In taking such things as features that make a world more distant, Effingham is drawing on Lewis’s suggestion that low-chance remarkable coincidences (‘quasi-miracles’) make a world more distant (Lewis [12] (pp. 59–63)). Whilst there is much to discuss concerning what exactly quasi-miracles are, and whether Lewis is right to propose that they detract from similarity to a (non-quasi-miraculous) base world, that is not essential here. Suppose they do, and suppose further that you have reason to think that the only way I fail to kill Pappy will be by such a quasi-miracle (perhaps all the ‘commonplace reasons’ that are not low-chance, remarkable coincidences are ruled out by being inconsistent with other information I know to be true of 1921 or later times). Effingham’s point is that it is still rational to bet on my failure rather than my success—we can be confident that I will fail in remarkable ways if the alternative is successfully doing the impossible. This means we should adopt an ordering of worlds according to which ‘metaphysically impossible worlds are always further away than metaphysically possible worlds’ (even if the metaphysically possible worlds are themselves very distant).6 [1] (p. 185)
Having arrived at this principle, Effingham is able to argue for unusual causal dependences of outcomes on choices, which he illustrates through a further case:
PLAGUE:
‘You are certain that, at the minimal cost of –υ20 you can trick Enemy into trying to use a time machine. You are certain that: your intel is perfect; the time machine is in mint condition; if Enemy is tricked, then they’ll definitely attempt to use it; etc. In light of all of this you are certain that the only thing that could prevent them activating the time machine is a particular indeterministic event which may (or may not) take place next week. If that event occurs, a virus will kill Enemy prior to activating the time machine. The objective chance of that event occurring is miniscule and you can do nothing to affect that chance. If Enemy lives, that’s worth υ0; Enemy dying is worth +υ1000; Enemy successfully using the time machine would be terrible, –υ10000, but you’re convinced of the argument of Chapter 12 that your credence of that happening should be effectively zero.’ [1] (p. 178)
As the quote makes clear, for this case, we need to buy into a certain view of time travel that Effingham develops elsewhere in the book ([1] (pp. 147–75)). In particular, we need to know that Effingham argues the following about backwards time travel:
-
It is physically possible and so has a non-zero chance.
-
For journeys of most kinds, it is metaphysically impossible. So, any world where such a journey takes place is a world where something metaphysically impossible happens, making it more distant than any metaphysically possible world where Enemy fails.
-
For journeys of a few very select kinds, backwards time travel is metaphysically possible, but such journeys are low-chance and remarkable, making their worlds more distant than non-quasi-miraculous worlds in which Enemy fails.
Whilst each of these points is worthy of discussion in its own right, that is not our focus. What matters to us is the upshot: a world where the virus kills Enemy is closer than worlds where the time travel journey succeeds. Thus, ‘If I were to trick Enemy into attempting time travel, then the virus-causing event would occur’ should be evaluated as true. Given a counterfactual theory of causal dependence (and the fact that it is also true that if I were not to trick Enemy into attempting time travel, the hugely improbable virus-causing event in question would not occur), we can conclude that the choice to trick Enemy makes a causal contribution to the occurrence of the virus. Utilizing the distinction between ‘biff causation’ and ‘counterfactual causation’, Effingham maintains that whilst there is no ‘biff’ causation between my trick and the virus—it is not a case of physical impact, like one snooker ball transferring momentum to another—what we find in time travel cases is counterfactual dependence of ‘biff’-unrelated events on the agent’s choices. And either this counterfactual dependence just is causal dependence, or we should treat it as a substitute for causal dependence in ‘exotic time travel cases’ [1] (p. 188). The moral: in time-travel scenarios, decisions can have causal significance even if analogous decisions in non-time-travel cases would have only evidential significance. Thus, causal and evidential decision theories align in their recommendations for time-travel Newcomb scenarios.
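To see how this bears on the choice in PLAGUE, here is a rough expected-utility reconstruction of our own, writing ε for the miniscule objective chance of the virus-causing event and using the values given in the quotation (the arithmetic is illustrative, not Effingham’s own presentation):
\[
\begin{aligned}
\text{Using the objective chance:}\quad & EU(\text{trick}) \approx -20 + \varepsilon(+1000) + (1-\varepsilon)(-10000) \approx -10{,}020;\\
\text{Setting credence in successful time travel to zero:}\quad & EU(\text{trick}) \approx -20 + 1000 = +980;\\
\text{Either way:}\quad & EU(\text{no trick}) \approx 0.
\end{aligned}
\]
On the first way of evaluating the options, tricking Enemy looks disastrous; on Effingham’s, it is clearly the better choice, since a world in which Enemy succeeds is treated as more remote than one in which the improbable virus-causing event occurs.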

3.2. Problems for Effingham’s Argument

At best, however, this argument shows that impossibility makes a difference to what the causal decision theorist should choose—not that time travel per se makes a difference. Effingham does not do much to defend explicitly the claim that his results for PLAGUE and AUTOINFANTICIDAL carry over to his other cases. We shall concentrate on THUMBS, but as far as we can see, the same arguments apply mutatis mutandis to WAR.
It is true that, if we hold fixed that the discoverer of the statue is one-thumbed, a world where I am the discoverer and am not one-thumbed (because I did not reach for the shears) is metaphysically impossible. Then, it does come out true that the closest worlds where I have two thumbs are worlds where I do not find the statue. This does not, however, give us enough for causal dependence. For that, we also need it to be true that worlds where I cut off my thumb and find the statue are closer than worlds where I cut off my thumb and do not find the statue. And for that, we need an argument that they are closer than worlds where some other person with one thumb finds the statue. Given that having one thumb gives me no physical or cognitive advantage relevant to finding the statue (if anything, the removal will take up time I could use on the search), this ordering of worlds is hard to justify. It is certainly not something the causal decision theorist should find immediately attractive, nor is there anything in Effingham’s impossibility argument to motivate it.
There is a way of making the impossibility argument do the requisite work. We could insist on holding the following fixed: a one-thumbed person finds the statue and everyone who is not me has two thumbs. Then, any world where I cut off my thumb, but someone other than me finds the statue, is impossible, because all the worlds are ones where a one-thumbed person finds the statue and where everyone but me is two-thumbed.
Notice, though, that this also weakens the pull of the idea that I place myself in contention by cutting off my thumb. Since ‘a one-thumbed person finds the statue and everyone but me has two thumbs’ entails that I find the statue, holding it fixed is tantamount to holding fixed that I find the statue and I have one thumb and everyone else has two. So now let us imagine that the newspaper report tells us that directly. It announces that I find the statue, mentioning also the facts about thumbs. Cutting off one’s thumb in response to this newspaper report seems less compelling. Why not think: great that I find the statue; shame that I lose my thumb (I wonder how that happens).
And this reveals what the options were in the first place, for it also seems reasonable to read that response back over to the original THUMBS case. When seeing the future newspaper report that the discoverer of the statue has one thumb, the wannabe discoverer can think, ‘I hope that’s me; though it’s a shame that if it is, I lose my thumb along the way’. This is the analogue of Macbeth’s ‘If chance will have me king, why, chance may crown me/Without my stir’ (Act 1, Scene 3). Effingham’s account of causal dependence may have the consequence that in these circumstances, the diligent search that leads me to find the statue also (counterfactually) causes the (biff-unrelated) incident in which I lose my thumb—since my being two-thumbed is incompatible with my finding the statue and the finder being one-thumbed. This is compatible with what we have said to challenge the different claim that a causalist should choose to cut off their thumb in order to bring about the discovery. Thus, even if we agree with Effingham’s account of causal dependence, it does not compel the two-boxer to be a one-thumber. Moreover, a rationale for two-thumbing remains available and articulable. Indeed, we can now see that far from bringing the recommendations of CDT and EDT into alignment, THUMBS exhibits aspects of the typical Newcomb dialectic which confirm the availability of a distinct causalist course of action.
We have seen that one hallmark of Newcomb cases is that regardless of which strategy is the rational one, we can construe being rational as something that leads to losing out on some profit that accrues to irrational action. Consider what happens when the one-thumber obtains the statue and the two-thumber does not. Let us assume that the one-thumber really has chosen through evidentialist strategy and the two-thumber has chosen through causalist strategy.7 The one-thumber says, ‘If you’re so clever, how come I’m the one in the newspaper? I don’t mind losing a thumb if I get a statue.’ The two-thumber says, ‘Right, but you needn’t have lost the thumb at all. You got the statue. Shame you didn’t take my approach; you could have had two thumbs to hold it with. And a good job for me that I did, or I would have been even worse off; no statue, and one thumb down as well.’ Thus, the one-thumber can draw attention to who obtains a statue, and the two-thumber can draw attention to who acquires everything that is available to them. Given that the scenario supports the usual Newcomb dialectic, we should say, firstly, that not cutting off your thumb remains as cogent an option as two-boxing in the Newcomb case, and so, secondly, that time travel in itself adds nothing special to Newcomb scenarios.8 In particular, time travel need not introduce special considerations against two-boxing.
But consider this variant on the original Newcomb case, in which there appears to be no viable two-boxing option.9 Suppose a set-up where the chooser’s choice to one-box or two-box is ‘predicted’ by a mechanism which receives, from the future, an accurate record of how many boxes the chooser actually takes. As usual, £1,000,000 will be placed in the opaque box if and only if the ‘prediction’ is that you one-box. So, let us suppose that, knowing how the ‘prediction’ mechanism works in this case, the chooser approaches the game certain that there will be £1,000,000 in the opaque box if and only if they take one box. Nevertheless, there is still a way in which someone who decides to two-box takes home a million: through flukes similar to those considered in the case of AUTOINFANTICIDAL. For instance, I choose to two-box, but an accident at the last minute renders me unable to extend both hands, leading me to grasp only one box, and the ‘predictor’ to record my eventual (accidental) one-boxing. Because of this, let us call the case FLUKOMB.
In FLUKOMB, it appears that the only way for the two-boxer to do at least as well as the one-boxer is for them to fail to act on their decision because of some accident outside their control. This might suggest that nobody should find two-boxing attractive. But we disagree. The causal decision theorist can still rationalize two-boxing. If there is not £1,000,000 in the opaque box, then at least I will have £1000 rather than nothing. If there is £1,000,000 in there, then that means I will receive a million. After all, it has been put in there because I take it. But then it would be irrational to not choose to take the £1000, too. I will not get it—I know already that if there is £1,000,000 in the opaque box, that is because I do not succeed in acquiring all I could. But at least I can say I tried. At least the reason I do not have everything will not be that I deliberately chose to forgo the £1000, like the others did. I will have given myself a shot at £1,001,000, something the one-boxer deliberately does not give themselves. I might be £1000 down by bad luck (or up £1,000,000 by a fluke), but at least I will not have thrown away £1000 through my own irrationality.10
Is this reasoning mandated by CDT? No, because the causalist could instead focus primarily on the hypothesis that successfully two-boxing causes the total available to be only £1000. This one-boxerCDT concentrates on a reason to avoid two-boxing which is missing in the standard Newcomb case (where causal independence is assumed), whereas the two-boxerCDT concentrates on how a reason for two-boxing can be maintained from the standard case. Insofar as either starting point is available, two courses of action are made intelligible by causalist standards.
Although we think there is still space for two-boxing in cases like FLUKOMB, our aim is not to deny that there could ever be a case where there really is no space for two-boxing. Placing tighter and tighter constraints on cases may indeed convert them into ones where nobody has a reason to two-box and where EDT and CDT straightforwardly agree that the one rational course of action is to one-box. In cases like that, they will not poke fun at the course of action their rival has chosen. But where there remains space for two conflicting courses of action, rendered rational by different sets of standards, it is a mistake to think that some further application of rationality can adjudicate on what it is correct to do. Although each side can continue to articulate something about the other’s choice, what they are providing in doing so is the resources to find the other’s choice absurd.

4. Does Foreknowledge Trump Objective Chances?

Effingham presents a number of other candidate time-travel Newcomb cases in his book, some of which raise interesting questions about the role of (beliefs about) chance in rational choice. Part of Effingham’s argument is that in time travel cases, the causalist should understand choosing the equivalent of one-boxing as raising the chance of the desirable outcome [1] (pp. 188–189). This argument, however, is closely related to the treatment of impossibility discussed in the last section; since we have already said what we have to say about that, we will focus instead on some different aspects of how such cases intersect with questions about chance, moving towards our response to Price’s argument for EviCausalism.
Let us begin with this case, which Effingham notes has received some discussion in the literature:
PAUPER:
‘Knight will fight in tomorrow’s battle. If Knight buys new armour, his objective chance of surviving will be 0.99, otherwise it’ll be 0.5. Buying new armour will make him destitute and he’ll have to live on as a pauper. Prizing his life more highly than his riches, Knight is about to buy the armour. But then Knight uses a crystal ball and comes to know for certain that he’ll survive the battle unscathed, though not whether he bought the armour or not.’ [1] (p. 193)
Effingham says that whilst ‘[t]raditional two-boxer reasoning will support purchasing the armour anyhow since it, e.g., improves his objective chance of survival’ [1] (p. 193), in this case, he thinks, the causalist Knight should come round to the other choice, and ‘save his money’ [1] (p. 193).
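To see the shape of the Knight's predicament, here is a minimal expected-utility sketch using purely illustrative numbers of our own (they are not Effingham's or Lewis's): suppose surviving with his riches is worth 100 to Knight, surviving as a pauper 70, and dying 0. Then, going by the objective chances:

EU(buy armour) = 0.99 × 70 + 0.01 × 0 = 69.3
EU(no armour) = 0.5 × 100 + 0.5 × 0 = 50

On these numbers, the chance-guided Knight buys the armour. Once the crystal ball makes survival certain, the comparison becomes 70 (buy) against 100 (save the money), which is where the pull of Effingham's recommendation comes from.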
Effingham attributes the case to Price [13], who in turn reports it from Lewis's 1982 correspondence with Wlodek Rabinowicz. There, Lewis writes:
‘… I believe in the logical possibility of time travel, precognition, etc., and I see no reason why suitable evidence might not convince a perfectly rational agent that these possibilities are realised, and in such a way as to bring him news from the future. … It seems to me completely unclear what conduct would be rational for an agent in such a case. Maybe the very distinction between rational and irrational conduct presupposes something that fails in the abnormal case. You know that spending all you have on armour would greatly increase your chances of surviving the coming battle, but leave you a pauper if you do survive; but you also know, by news from the future that you have excellent reason to trust, that you will survive. (The news doesn’t say whether you will have bought the armour.) Now: is it rational to buy the armour? I have no idea—there are excellent reasons both ways. And I think that even those who have the correct two-box’st intuitions about Newcomb’s problem may still find this new problem puzzling. That is, I don’t think the appeal of not buying armour is just a misguided revival of V-maximizing intuitions that we’ve elsewhere overcome.’
Lewis, in Price [13] (p. 19)
Lewis agrees with Effingham that it is not obvious that the two-boxer should buy the armour, but he does not agree that the two-boxer should not buy it. Interestingly, he also does not agree that not buying the armour is simply an analogue of one-boxing. To support this, consider that WAR and THUMBS share this structure: the best outcome of all would be to obtain two goods (no war + no complicity in torture; a statue + two thumbs). The representative 'one-boxer' obtains one good, and the representative 'two-boxer' obtains the other. In ordinary Newcomb cases, the two-boxer advocates their strategy because they think that the one-boxer's strategy acquiesces in getting just one good from the outset; this is irrational by the two-boxer's standards. PAUPER is different. The goods are: win and save money. The 'two-boxer' obtains the win without the saving. The 'one-boxer', though, obtains both goods. Given this, the two-boxer seems to lose the opportunity to ask the one-boxer 'why ain't you richer?'. However, this (rhetorical) question can be recast. Lewis says there are 'excellent reasons both ways', so there is an attraction to buying the armour: the person who does not buy it willingly lowers their objective chance of survival. Thus, those who buy the armour can say that there is something their opponent misses out on (even if their opponent wins the battle and saves their money). They miss out on a decent chance of doing well, even if, as it happens, they do well anyhow.11
A similar case of missing out on a chance (but obtaining the desired outcome anyway) is found in Huw Price’s [3] ‘Chewcomb’ cases. Price is also more explicit than Effingham about why he thinks Lewis cannot back out of the challenge of such cases in the way his letter suggests. ‘Chewcomb’ cases are not structurally isomorphic to PAUPER.12 But, like PAUPER, they involve information about the future (which, in Price’s discussion, comes from supernatural sources but could also come from testimony from more ‘conventional’ time travellers).

4.1. Price’s Argument for EviCausalism (and against Causalism) in Foreknowledge Cases

In Price’s CHEWCOMB scenario, God offers you a bet on the outcome of the toss of a fair coin. If you bet Heads, you will receive £100 if the coin lands Heads and £0 if it lands Tails. If you bet Tails, you will receive £50 if the coin lands Tails and £0 if it lands Heads. So, it is clearly rational to bet Heads.
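A quick check of the expected returns, going only by the objective chance of the fair coin, bears this out:

EV(bet Heads) = 0.5 × £100 = £50
EV(bet Tails) = 0.5 × £50 = £25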
But Satan can see the future, and comes along with some information God has not given you:
‘Satan informs you … “I bet you didn’t know this. On those actual future occasions where you yourself bet on the coin, it comes up Tails about 99% of the time. (On other occasions, it is about 50% Tails.)” What strategy is rational at this point? Should you assess your expected return in the light of the objective chances? Or should you avail yourself of Satan’s further information?’
[3] (p. 497)
Price thinks it is overwhelmingly plausible that you should bet Tails.
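The arithmetic behind this verdict is straightforward, if we read Satan's 99% figure as applying whenever you bet, whichever way you bet:

EV(bet Heads) ≈ 0.01 × £100 = £1
EV(bet Tails) ≈ 0.99 × £50 = £49.50

The question, of course, is whether these frequencies, rather than the objective chances, are what should feed into the calculation at all.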
Similarly to PAUPER, when someone bets Tails in CHEWCOMB and receives £50, we cannot say 'Why ain't you richer [in money]?' The fact that they would have received £100 if it had landed Heads and they had bet Heads is irrelevant, because it did not land Heads. What we could insist on, though, is that the person has missed out on the chance of doing better. We might ask 'Why ain't you richer [in chances of good outcomes]?' The retort has less panache, but it is there if we want it. It is related to the rhetoric Price attributes to the figure he calls the 'mad-dog objectivist':
‘“To hell with Satan”, says this mad-dog objectivist, thumping the table. “By betting Tails, you irrationally forgo an equal chance of a greater reward.”’
[3] (p. 522)
The problem, as Price argues, is that even Lewis rejects mad-dog objectivism. This is explicit in what Lewis says about the Principal Principle. The Principal Principle says, roughly, that it is rational to make your credences match what you take the objective chances to be. So, if you believe a coin is fair, your credence in its landing Heads should be 0.5 and your credence in its landing Tails should be 0.5. However, Lewis apparently allows that there are cases where subjective credence should follow evidence rather than chance, and that information from the future is just such a case. Hall, whose discussion is an ancestor of both Price's proposal and Effingham's, expresses it as follows:
‘Lewis himself notes [15] (p. 274) that there are possibilities (involving such things as time travellers, seers, and circular spacetimes) in which the past carries news from the future which, if known, breaks the connection between credence and chance. When the past does carry such news, I will say that it contains “crystal balls”’
[16] (p. 508)
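For orientation, the Principle can be put schematically as follows (this is a standard gloss rather than Lewis's exact formulation): where X is the proposition that the objective chance of A at time t is x, and E is any evidence admissible at t, rational credence satisfies Cr(A | X & E) = x. Crystal-ball news from the future is precisely the kind of evidence that fails to be admissible, which is why it can break the connection between credence and chance.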
Lewis grants that when crystal balls are involved, the information they deliver trumps the application of the Principal Principle and, therefore, trumps our knowledge of objective chances in guiding rational credence.13 What makes this problematic is that it is in tension with his commitment to CDT in Newcomb cases. Price’s case of BOXY CHEWCOMB brings it out starkly:
BOXY CHEWCOMB:
‘Suppose that God offers you the contents of an opaque box, to be collected tomorrow. He informs you that the box will then contain [£]0 if a fair coin to be tossed at midnight lands Heads, and [£]1,000,000 if it lands Tails. Next to it is a transparent box, containing [£]1000. God says, “You can have that money, too, if you like”. At this point Satan whispers in your ear, saying, “It is definitely a fair coin, but my crystal ball tells me that in 99 percent of future cases in which people choose to one-box in this game, the coin actually lands Tails; and ditto for two-boxing and Heads.”’ [3] (p. 505)
Again, Price thinks you should exploit Satan's tip, just as you should in CHEWCOMB. And if we do not want to be 'mad-dog objectivists', it seems we should say the same and should one-box. Yet if we want to be causalists, we should say the opposite and two-box. In allowing information about the future distribution to override objective chances, we are acting based on the fact that choosing to one-box gives us reason to be confident that the coin lands Tails. But that is just what the one-boxer has been doing in Newcomb problems from the start. That is, CDT gives us reason to say that since Satan's information is of merely evidential significance, and what we choose has no causal influence on how the coin lands, Satan's information should have no influence on how we bet. Yet allowing exceptions to the Principal Principle gives us reason to say that though the correlation is merely evidential, it should override the information we have about chances and should influence how we bet.
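The numbers make the clash stark. Going by Satan's reported frequencies, one-boxing promises roughly 0.99 × £1,000,000 = £990,000, while two-boxing promises roughly 0.01 × £1,000,000 + £1000 = £11,000. Going by the objective chance of the fair coin, with the choice treated as causally irrelevant to the toss, one-boxing promises £500,000 and two-boxing £501,000, so two-boxing comes out ahead by exactly the visible £1000.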
Price’s proposed way forward is to move to a subjectivist account of causation:
‘[C]ausal dependence should be regarded as an analyst-expert about the conditional credences required by an evidential decisionmaker. A little more formally, the proposal goes something like this:
(EC): B is causally dependent on A just in case an expert agent would take P(B|A) ≠ P(B), in a calculation of the V-utility of bringing it about that A (in circumstances in which the agent is not indifferent to whether B).
Since this suggestion takes it to be definitive of causal belief that its role is to guide a particular kind of evidential judgment, I shall call it the EviCausalist proposal.’
[3] (p. 509)
'V-utility' is what the evidential decision theorist aims to maximize in Newcomb cases. Whilst there is clearly scope for a great deal of interesting discussion about this conception of causal dependence, focussing on some broad points is all that is needed for our current purposes. First, what falls out of Price's proposal is that causal dependence is a function of what an (astute) evidential decision theorist would take to be relevant for the purposes of decision. For example, we obtain the result that in BOXY CHEWCOMB, '[b]y choosing to one-box rather than two-box, we greatly increase the chance of Tails' [3] (p. 509). Second, and as this point makes clear, this shares the spirit of Effingham's proposal—though Price's argument, unlike Effingham's, does not need to recruit ideas about impossibility. (Nor, relatedly, does it appeal to a counterfactual theory of causation.)14 Thus, Effingham thinks we need to move to his concept of causation only in certain cases, in view of the special ways in which information can reach us across time in these cases, whereas Price grounds it in a quite general link between causation and evidence. For Price, the special significance of information from the future is that those are the cases in which even the most committed causalist cannot viably sustain their opposition to 'managing the news'. Eventually they must give up the fight and succumb instead to EviCausalism.15
Or must they?

4.2. Causalism against All Odds?

Price does indicate a possible line of resistance for the traditional two-boxer. They could concede CHEWCOMB but try to keep two-boxing in BOXY CHEWCOMB by appealing to the difference noted earlier. In CHEWCOMB, they do not have the option of asking the one-boxer ‘Why ain’t you richer?’ (unless by this they mean ‘richer in chances of good outcomes’). But in BOXY CHEWCOMB, they can use the usual line, for there really is £1000 there that the one-boxer leaves behind.
Price’s reply is that now we have EviCausalism on the table, this move will not work. He argues that the question can only be effective if the evidentialist is prepared to accept the causalist’s assumption that had they (the evidentialist) taken the other box as well, they would have had £1,000,000 plus £1000. And this will not move the EviCausalist:
‘[U]nlike traditional Evidentialists, who accept the Causalist’s conception of the modal landscape, EviCausalists will simply deny that “they would have gotten [the million] whatever they did”. On the contrary, as they understand the counterfactuals … they would have received only [£]1000 had they two-boxed. It is the two-boxers who are irrational in this counterfactual sense, by the EviCausalists’ lights: had the two-boxers one-boxed instead, they would have had the million.’
[3] (p. 519)16
Price contends that the traditional two-boxer invokes the point that the evidentialist has missed out on something as 'the standard Causalist response to the Why Ain't You Rich? argument' [3] (p. 519). He thinks that one-boxing as an EviCausalist deprives the two-boxer of this comeback and therefore ushers the one-boxer's position through to victory.
This may be true if the effectiveness of the 'Why ain't you richer?' retort is only or primarily about the one-boxer accepting that by their own standards, two-boxing would have been a more lucrative move. But this characterisation misrepresents, we suggest, what the dialectic was in the first place. As we interpret the Newcomb dialectic (Section 2), what is key is that the salience of another's standards makes a choice that is rational by my own standards vulnerable to also being absurd. This does not require agreement with the other's standards. Consider the following comparison: appreciating the absurdity when you fall after walking under the ladder does not require sympathy with superstition; the fact that you reject superstition only enhances the irony of how things have worked out for you. It does not matter that the one-boxer does not think that they would have been richer if they had taken two boxes. What matters is that the two-boxer thinks it.
To reinforce this, notice that equally, the effectiveness of the evidentialist’s taunt ‘Why ain’t you rich?’ is not undermined by the fact that the traditional causalist denies that they would have been richer had they taken just one box. Price’s argument takes it as significant that when asked ‘Why ain’t you richer?’, the EviCausalist can say, ‘I couldn’t have been—or at least, I wouldn’t have been by acting like you. Had I taken both boxes, I would have brought it about that (or raised the chances that) the opaque box was empty.’ But this is of a piece with what the traditional causalist has been able to say all along. ‘I couldn’t have been—or at least, I wouldn’t have been by acting like you. My box was empty, so had I taken just it, I would have ended up with nothing.’ Price claims he has deprived the two-boxer of a response to ‘Why ain’t you rich?’; but the same considerations would show that no response was ever needed.17
It is a mistake to conceive of the Newcomb dialectic as if it is a case of A trying to make B accept that B is wrong by B’s standards and vice versa. The causalist knows that the evidentialist is right by evidentialist standards, and the evidentialist knows that the causalist is right by causalist standards. The dialectic is sustained by the fact that each is wrong by the other’s standards, and in neither case does this give the other party a reason to change their standards; there are two competing conceptions of rationality in play, both of which have the capacity to make the other look absurd. This does not change when we pit the traditional causalist against the EviCausalist rather than the traditional evidentialist. It is in virtue of the conception of rationality that party B rejects that party A is able to make party B look a certain way. The EviCausalist is still walking away from a box with £1000 in it, and this is all the causalist needs in order to make them look absurd; whether the EviCausalist believes that choice brings about a profit rather than a loss is not the point. You look absurd after walking under the ladder because of what your superstitious friend thinks it means. The evidentialist and the EviCausalist look absurd walking away from £1000 because of what the causalist thinks it means.
The upshot of this is that trying to fight through to a resolution of Newcomb is not a way for a conception of rationality to prove itself—although making the other side absurd is a beautiful, interesting, intriguing way for a conception of rationality to display itself. Newcomb is not a test case for decision theory. It is a lovely illustration of the aesthetic potential that results once we put more than one decision theory on the table.
Could we go further and refuse even to concede CHEWCOMB? Price is right that allowing exceptions to the Principal Principle whilst being a causalist in Newcomb cases amounts to a tension in how one feels about 'managing the news'. But it seems to us—even if not to Lewis—that it is in fact a viable option to be a mad-dog objectivist in cases such as CHEWCOMB and, more generally, in supernatural foreknowledge cases. Yes, you can be pretty sure in advance that you are not going to become rich by sticking with the Principal Principle. But that is fine—in the Newcomb game, you can be pretty sure in advance that you are not going to become rich by two-boxing. The same dialectical option is available to the mad-dog objectivist as to the two-boxer and to the Newcomb one-boxer: 'What, that thing I missed out on? Reserved for the irrational'.
To say that mad-dog objectivism is a viable position is not to say it is a particularly pleasant one to be in. Even the crassest commentator can focus attention such that you are made absurd. ‘But the Devil pretty much told you how to bet. He handed it to you on a platter.’ But how easy it is to make someone’s rational choice absurd is a different matter from whether their choice is rational. And what we have here is no different in kind from what we can say to the causalist in the Newcomb case: ‘But you were told about the prediction mechanism. How much more of a clue did you need?’ The analogous comment demonstrates that there is a conception of rationality that is not mad-dog objectivism’s, just as there is a conception of rationality that is not the causalist’s, but it does not show the conception it makes absurd to be incorrect.
Here, then, is one way to respond to Price’s tension: if you are a causalist, be a mad-dog objectivist, too. The incidents that best draw attention to your objectivism per se—to your refusal to give up the Principal Principle—are likely to be those in which your (rational, from your point of view) choices are most readily experienced as absurd. But it is in the nature of rational choice to sometimes be absurd, because there are incommensurable conceptions of it. So, accept the embarrassment and the indignity you might feel on those occasions, and take comfort in the fact that they will be few and far between (since the Devil rarely brings news from the future).
Also of comfort is the fact that the mad-dog objectivist can make their opponent's choice absurd, too. Some of the possibilities of absurdity are symmetrical across the one-boxer and the Tails-backer. For one thing, the one-boxer is always vulnerable to how they will look if they are unfortunate enough to actually get an empty box, and the Tails-backer is always vulnerable to how they will look if this turns out to be the time the coin lands Heads. And in those circumstances, the artful two-boxer and mad-dog objectivist are certainly at liberty to accentuate the absurdity, e.g., by saying 'Ha! Serves you right!' (just as they may by saying 'Ha! Now that is justice!' if they take away £1,001,000 in Newcomb, or £100 in CHEWCOMB). Moreover, in the case of a Tails-backer who receives £50, as soon as they claim that they did the right thing, the mad-dog objectivist can cast them as somebody who congratulates themselves on their own good luck. Nevertheless, some of the lines the two-boxer uses to cast the one-boxer's choice as absurd are less effective when used by the mad-dog objectivist betting Heads in CHEWCOMB. 'Why ain't you richer [in chances of good outcomes]?' just is less effective in evoking the absurd than 'Why ain't you richer [in money]?' Walking away from a chance simply is not as vivid a way of looking like a fool as walking away from £1000. Setting aside the important differences noted between PAUPER and CHEWCOMB, this also applies to the Knight who buys the armour, whose rival forgoes the 0.99 objective chance of survival.
In one sense, this does not matter, since the point we have been pushing is that it is a mistake to conflate the issue of how rational our choices are with the issue of how embarrassing our choices may be given that they can fall under the gaze of a disagreeing party. However, choice and agency are more than just decision in the sense of decision theory. We evaluate the situations in which we make decisions not just in terms of whether a decision is rational or irrational; in those situations that lend themselves to it, we also evaluate the aesthetic aspects of the situation, such as its absurdity or irony, and we respond to its various presentations in a variety of ways: we may relish it, we may laugh, we may squirm. Different individuals have different aesthetic profiles—that is, they differ in how they respond to absurdity when faced with it—and our sensitivities can also push us in different directions on different occasions. One option, then, is to embrace the fact that choice is sensitive to our aesthetic profile as well as to what is rational. Perhaps it is all right to respond partly to what ironies and embarrassments we find tolerable and to organise our choices based partly on our tolerance for the absurd. And sometimes we have to be more resilient in the face of absurdity to make some choices than to make others—even if the conception of rationality we endorse recommends both (or recommends neither).
This means there is a get-out clause—if we really want it—for any two-boxers who just could not bring themselves to be mad-dog objectivists. If a decision theorist is shown the limits of their tolerance to absurdity by some scenario, then to choose out of character in those circumstances may be a legitimate exercise of the human capacity for choice. Rather than a conversion, or a case of inconsistency, or evidence of the configuration of causal relations, it may amount to something else entirely—an indication that the absurdity of being rational is sometimes too much to bear.

Author Contributions

Each author contributed equally on all aspects of this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

Thanks to the audience at the University of Manchester philosophy work-in-progress seminar, to three anonymous referees for helpful comments, and to Alasdair Richmond.

Conflicts of Interest

The authors declare no conflict of interest.

Notes

1.
We give a more detailed account of the aesthetics of absurdity in our book (in preparation) on the aesthetic experience of the extraordinary.
2.
Or, perhaps, to the ungrounded optimism you have displayed in expecting the world not to work against those who make good choices for good reasons. It may be that a person looks heroic through looking like someone who persists in making good choices despite what seems to be a world which punishes reason. Certain comedy characters (Victor Meldrew in the BBC’s One Foot in the Grave, for example) might best be understood as heroes for continually finding themselves faced with such ordeals. (Thanks to Thomas Smith for some helpful discussions of this kind of heroism.)
3.
Newcomb himself discusses a backward-communicating device, the antitelephone, in [9]. His proposal, however, is that such a device is impossible, because it would make possible a contradiction. So, Newcomb had reservations about backwards causation similar in spirit to those of Mellor [10] (pp. 125–135).
4.
That the causal link would, in the Newcomb case where the predictor has foreknowledge, be a case of backwards causation is less important than it may initially appear. As Lewis [4] (pp. 236–237) notes, the classic Newcomb problem can equally be set up as one in which the mechanism predicts what you already have chosen rather than what you will choose later. The apparent connection of the traditional Newcomb problem to backwards causation in particular is a narrative accident. Having the choice come later than the prediction which causes the filling (or not) of the box is a very efficient way of ruling out various ways which come to mind in which there may be causal dependence of the contents of the box on the choice made. For someone who is willing to explore the possibility of backwards causation, of course, not all such ways are ruled out. But the possibility that a causal decision theorist will one-box because they believe they are dealing with a case of backwards causation does not dissolve the interest of Newcomb cases, which is sustained by the fact that somebody else—the evidentialist—does not need to believe they are dealing with a case of backwards causation in order to one-box. (It is also worth noting here that a separate question arises about when it is rational to change one’s beliefs about the causal structure of the world—for example, about what should constitute compelling evidence of backwards causation—which would, for a causal decision theorist, be pertinent also to the question of which choice it is rational to make. But that question is beyond the scope of this paper.)
5.
The uses of ‘original’ and ‘new’ that Lewis puts in scare-quotes allude to a way of thinking about visiting the past that he has discussed and is clear we should not take literally. The ‘original’ and ‘new’ 1921 are simply (one and the same) 1921.
6.
Effingham will say the same even if we amplify the remarkableness. An anonymous referee suggests the example where Unsavoury Bets Inc. simultaneously offers one million such bets (on one million such grandfather-killing expeditions). They suggest that taking it that ‘something will go wrong’ is less plausible here, since it would be a ‘statistical miracle’ for a million things to go wrong. It would indeed be a quasi-miracle, but Effingham can still maintain that a quasi-miraculous, metaphysically possible world is closer than a metaphysically impossible world. (Our own position on a case like this, in the spirit of the position of this paper, is that heightening the extraordinariness of a scenario may heighten its absurdity, but that this is independent of what it is rational to do. A fuller discussion of the relation between quasi-miracles and the absurd is given in our book (in preparation) on the aesthetic experience of the extraordinary.)
7.
That is, the one-thumber is not just a causalist who has misjudged the counterfactual dependences, and they are not trying to persuade the two-thumber that they ought by the standards of CDT to choose one-thumbing. They are trying to demonstrate the superiority of choosing as an evidentialist.
8.
Of course, one difference is that the one-thumber could, if they wished, raise the point that the report said the discoverer had one thumb, but this does not work well as a piece of spotlight shifting, since it plays too easily into the hands of the two-thumber, who will say, ‘Well, it wouldn’t have if you hadn’t gone and cut off your thumb for no good reason.’ And, moreover, it leaves the one-thumber’s circumstances vulnerable to being compared to a Flann O’Brien construction, inspiring the Keats and Chapman-style pun: ‘You’ve cut off your thumb despite your fate’. Invoking such constructions (such as those found in O’Brien’s The Various Lives of Keats and Chapman) as a comparison would cast the one-thumber as absurd.
9.
Thanks to an anonymous referee for the suggestion.
10.
A complication of cases like FLUKOMB is that once we imagine the chooser expecting flukes, we might be tempted to define a new matrix of options for the chooser with utilities based on the (dis)value to the chooser of whatever flukes they anticipate as plausible ways for them to accidentally take one box. But since this would both change the problem and take us into a discussion too far afield from our key points of disagreement with Effingham, we set it aside here.
11.
There are also other potential disanalogies to be explored between PAUPER and Newcomb cases, which Lewis may have had in mind. First, we are within our rights to question whether not buying armour is more analogous to traditional one-boxing or two-boxing. Initially—for example, in Nozick's [14] canonical framing of the Newcomb problem—the dispute was presented in terms of a clash between two principles: the principle to Maximise Expected Utility (MEU) (which appears to favour one-boxing) and the principle of Dominance (which favours two-boxing). The latter says that when option A gives you a better outcome than option B no matter what the state of the world, option A dominates B, and that the dominating option should always be chosen. Nowadays, the Newcomb debate is more often presented as a clash between two concepts of utility, with both sides adhering to MEU, but their conceptions and calculations of utility differing. The fact that the dispute initially invited characterisation in terms of a clash of MEU with Dominance, however, allows us to draw attention to a respect in which not buying the armour is unlike one-boxing. If we are to imagine that both hypothetical Knights end up winning their battle, then it is actually Effingham's so-called 'one-boxer' whose action is favoured by Dominance. Putting it another way, the financial saving that the 'one-boxer' chooses here is in fact more like the visible £1000 that the traditional two-boxer chooses.
We could undermine this by questioning whether it is appropriate to focus on comparing the scenarios in which the two hypothetical Knights make their different choices and win the battle. Overall, it seems (to us) that Effingham is encouraging us to imagine in this way—he says that ‘knowing that his safety is assured, [Knight] may as well make a name for himself and fight naked’ [1] (p. 193)—but focussing too heavily on this may obscure the fact that one of the puzzles raised by the case is whether the documented win is interpreted as an opportunity to save money on armour or as evidence that we do buy the armour (which is why it is effective to point out that the news report does not tell us whether we bought the armour). If we take the first option and conceptualise it as an opportunity, we run into the further disanalogy with Newcomb mentioned here. If we do not, the fact that there is this complication over whether to do so is itself a difference between this case and Newcomb cases. Either way, the analogy between decision in PAUPER and decision in Newcomb cases is not thoroughly secure.
12.
For example, they sidestep some of the complications of PAUPER mentioned in the previous note.
13.
At least, that is what we shall assume for the purposes of this paper, although Lewis’s comments on when it is ‘fair to ignore’ [15] (p. 274) complications involving foreknowledge, and his comments (above) to Rabinowicz, allow for a more nuanced view, in which continuing to employ the Principal Principle is not irrational since these are rather cases where the adjudication of what is rational must break down.
14.
Effingham is aware of the link between his account and Price's, and of this difference [1] (p. 198, note 2).
15.
Which also means withdrawing from their position on the original Newcomb case. EviCausalism gives us the result that we should bet Tails in CHEWCOMB, one-box in BOXY CHEWCOMB, and one-box in traditional Newcomb.
16.
Again, the counterfactuals here do not indicate a counterfactual theory of causation: they are the counterfactuals Price takes to be grounded by the (evi)causal facts.
17.
Have we not committed to the idea that the evidentialist accepts the causalist’s counterfactuals in holding that they may avail themselves of ‘being richer is reserved for the irrational’? Does this not express that the evidentialist agrees that had they irrationally taken both boxes, they would have been richer? No; that is not at all what the ‘riches are reserved for the irrational’ locution means. Consider: when Lewis, as a two-boxer holding his £1000 and responding to ‘Why ain’t you rich?’, says that ‘Riches are reserved for the irrational’, he is not admitting that had he irrationally one-boxed, he would have been rich. On the contrary, he thinks that had he irrationally one-boxed, he would have had nothing.

References

1. Effingham, N. Time Travel: Probability and Impossibility; Oxford University Press: Oxford, UK, 2020.
2. Lewis, D. Causal Decision Theory. Australas. J. Philos. 1981, 59, 5–30.
3. Price, H. Causation, Chance, and the Rational Significance of Supernatural Evidence. Philos. Rev. 2012, 121, 483–538.
4. Lewis, D. Prisoners’ Dilemma is a Newcomb Problem. Philos. Public Aff. 1979, 8, 235–240.
5. Hargreaves Heap, S.; Hollis, M.; Lyons, B.; Sugden, R.; Weale, A. The Theory of Choice: A Critical Guide; Basil Blackwell: Oxford, UK, 1992.
6. Joyce, J. Foundations of Causal Decision Theory; Cambridge University Press: Cambridge, UK, 1999.
7. Lewis, D. Why Ain’cha Rich? Noûs 1981, 15, 377–380.
8. Gibbard, A.; Harper, W. Counterfactuals and Two Kinds of Expected Utility. In Foundations and Applications of Decision Theory; Hooker, A., Leach, J.J., McClennan, E.F., Eds.; D. Reidel: Dordrecht, The Netherlands, 1978; pp. 125–162.
9. Benford, G.A.; Book, D.L.; Newcomb, W.A. The Tachyonic Antitelephone. Phys. Rev. D 1970, 2, 263–265.
10. Mellor, D.H. Real Time II; Routledge: London, UK, 1998.
11. Lewis, D. The Paradoxes of Time Travel. Am. Philos. Q. 1976, 13, 145–152.
12. Lewis, D. Postscripts to ‘Counterfactual Dependence and Time’s Arrow’. In Philosophical Papers; Lewis, D., Ed.; Oxford University Press: Oxford, UK, 1986; Volume II, pp. 52–66.
13. Price, H. The Lion, the ‘Which?’ and the Wardrobe—Reading Lewis as a Closet One-Boxer. Available online: https://philsci-archive.pitt.edu/4894/ (accessed on 28 March 2024).
14. Nozick, R. Newcomb’s Problem and Two Principles of Choice. In Essays in Honor of Carl G. Hempel; Rescher, N., Ed.; D. Reidel: Dordrecht, The Netherlands, 1969; pp. 114–146.
15. Lewis, D. A Subjectivist’s Guide to Objective Chance. In Studies in Inductive Logic and Probability; Jeffrey, R., Ed.; University of California Press: Berkeley, CA, USA, 1980; Volume II, pp. 263–293.
16. Hall, N. Correcting the Guide to Objective Chance. Mind 1994, 103, 505–517.