The Absurdity of Rational Choice: Time Travel, Foreknowledge, and the Aesthetic Dimension of Newcomb Problems
Abstract
1. Introduction: Time Travel and Rational Decision
- WAR:
- ‘Imagine my future self appears, dying of radiation sickness. Before dying, he (me?) reveals that a nuclear war will kill us all. Time cannot be changed, so war looks likely. But if you use excessive means (e.g., torture/hypnosis) to make me believe a war has taken place (when none had!), poison me with polonium, and send me back in time, we’d have a consistent scenario but everyone (except myself) gets to live. Is it rational to use excessive means?’ [1] (p. 176)
1.1. The Traditional Newcomb Case
1.2. The Debate over Time-Travel and Foreknowledge Cases
2. The Newcomb Dialectic
2.1. Burdens of Proof and Embarrassing Questions
As James M. Joyce observes, this retort is no answer: ‘Rachel asks Irene why she (Irene) didn’t take the extra thousand. Irene replies by asking Rachel why she (Rachel) isn’t rich. This answer is, as lawyers say, “nonresponsive”. Irene has replied to Rachel’s question with an ad hominem that changes the subject. “I know why I’m not rich, Irene”, Rachel should respond; “my question had to do with why didn’t you take the [£]1000.”’ [6] (p. 152)
- ‘You’re an idiot!’ / ‘You are, you mean!’ / ‘Oh, are you talking about yourself?’ / ‘What, like you were?’
- ‘You’re irrational!’ / ‘Well, you would say that, wouldn’t you!’
2.2. Newcomb, Irony and Absurdity
- (a) You acted (from your point of view) with rational competence, yet the world looks as if your (from your point of view, rationally incompetent) opponent got things right. Moreover, it is your rational action that has brought it about that the world looks as if your opponent got things right.
- (b) Events that befall you are such that they would be explained if, even though your opponent gets things wrong and you get them right, the vindictive world had taken an opportunity to punish your competence.
- (c) On both (a) and (b), you might further look like somebody whose hubris was duly pricked. On option (a), perhaps your downfall is a fitting rebuke to your obtuseness in sticking to a perspective that was irrational. On option (b), perhaps it is a fitting rebuke to your smug intellectual satisfaction in your rationality.2
3. Causal Dependence in Time-Travelling Newcomb Cases
3.1. Effingham’s Proposal: CDT Recommends ‘One-Box’ Choices in Time-Travel Scenarios
- THUMBS:
- ‘[Daniel] Nolan imagines that you time travel to the past to a rather unusual archaeological dig where whatever you unearth you get to keep. You are particularly keen on getting hold of a particular statue. Checking the aged remains of tomorrow’s (probably, but not certainly, veridical) newspaper brought back from the future, you discover that a person with one thumb will discover the statue. You value the statue more than your thumb; you see everyone else appears to have their thumbs; you have a pair of hedge clippers with you. Do you then cut off your thumb?’ [1] (p. 194)
- AUTOINFANTICIDAL:
- ‘I plan to time travel and kill Pappy in 1930. You are certain that: I am in rude health; Pappy is my grandfather; Pappy will not rise from the dead three days hence; I will not change my mind freely etc. … ‘Unsavoury Bets Inc.’ are offering odds of ten million to one that I succeed’ [1] (p. 184)
‘Since Tim [a time-travelling grandson who goes to 1921 with murder in mind] didn’t kill Grandfather in the “original” 1921, consistency demands that neither does he kill Grandfather in the “new” 1921. Why not? For some commonplace reason. Perhaps some noise distracts him at the last moment, perhaps he misses despite all his target practice, perhaps his nerve fails, perhaps he feels a pang of unaccustomed mercy.’5 [11] (p. 150)
‘If I were to try to kill Pappy, I would succeed’ is true if what we hold fixed in evaluating the counterfactual is: my constitution, Pappy’s constitution, the state of the gun, the fact that I have the ability to walk up to Pappy, raise the gun and pull the trigger, my desires and states of mind, my disposition to remain calm under pressure, and so on. ‘If I were to try to kill Pappy, I would fail’ is true if what we are holding fixed is that Pappy lives.
- PLAGUE:
- ‘You are certain that, at the minimal cost of –υ20 you can trick Enemy into trying to use a time machine. You are certain that: your intel is perfect; the time machine is in mint condition; if Enemy is tricked, then they’ll definitely attempt to use it; etc. In light of all of this you are certain that the only thing that could prevent them activating the time machine is a particular indeterministic event which may (or may not) take place next week. If that event occurs, a virus will kill Enemy prior to activating the time machine. The objective chance of that event occurring is miniscule and you can do nothing to affect that chance. If Enemy lives, that’s worth υ0; Enemy dying is worth +υ1000; Enemy successfully using the time machine would be terrible, –υ10000, but you’re convinced of the argument of Chapter 12 that your credence of that happening should be effectively zero.’ [1] (p. 178)
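To make the structure of PLAGUE vivid, here is a minimal expected-utility sketch of our own (an illustration, not Effingham’s calculation; the credences passed to the function are assumed placeholders, only the utilities are taken from the quoted passage):

```python
# Utilities quoted from Effingham's PLAGUE: tricking Enemy costs 20 units,
# Enemy living is worth 0, Enemy dying +1000, Enemy using the machine -10000.
COST, LIVES, DIES, USES = -20, 0, 1000, -10000

def eu_trick(p_virus, p_uses):
    """Expected utility of paying the cost to trick Enemy, given a credence
    p_virus that the indeterministic virus event occurs, and p_uses that
    Enemy successfully uses the time machine."""
    return (COST
            + p_virus * DIES                      # virus kills Enemy first
            + (1 - p_virus) * (p_uses * USES      # Enemy uses the machine
                               + (1 - p_uses) * LIVES))

# Guided by the objective chances alone (virus event minuscule), tricking
# Enemy looks like a pure loss:
print(eu_trick(p_virus=1e-6, p_uses=0.0))   # ≈ -20: just the cost

# But if credence in successful time travel should be effectively zero, then
# conditional on Enemy being tricked, the virus is the only consistent way
# events can unfold, which pushes p_virus toward 1 and makes tricking pay:
print(eu_trick(p_virus=1.0, p_uses=0.0))    # 980.0
```

The shift between the two calculations is the crux: nothing about the objective chance of the virus event changes, only the credence it is rational to assign conditional on tricking Enemy.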
- It is physically possible and so has a non-zero chance.
- For journeys of most kinds, it is metaphysically impossible. So, any world where such a journey takes place is a world where something metaphysically impossible happens, making it more distant than any metaphysically possible world where Enemy fails.
- For journeys of a few very select kinds, backwards time travel is metaphysically possible, but such journeys are low-chance and remarkable, making their worlds more distant than non-quasi-miraculous worlds in which Enemy fails.
3.2. Problems for Effingham’s Argument
4. Does Foreknowledge Trump Objective Chances?
- PAUPER:
- ‘Knight will fight in tomorrow’s battle. If Knight buys new armour, his objective chance of surviving will be 0.99, otherwise it’ll be 0.5. Buying new armour will make him destitute and he’ll have to live on as a pauper. Prizing his life more highly than his riches, Knight is about to buy the armour. But then Knight uses a crystal ball and comes to know for certain that he’ll survive the battle unscathed, though not whether he bought the armour or not.’ [1] (p. 193)
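The pull in both directions can be put numerically. The utilities below are placeholders of our own (the text gives only the survival chances and Knight’s preference ordering, so any values on which death is much worse than pauperdom, and pauperdom worse than riches, would do):

```python
# Assumed utilities (not in the text): death is much worse than poverty,
# and surviving rich beats surviving poor.
DEATH, PAUPER_LIFE, RICH_LIFE = -100, 10, 50

def eu(buy_armour, p_survive):
    """Expected utility of Knight's choice, given a survival probability."""
    alive = PAUPER_LIFE if buy_armour else RICH_LIFE
    return p_survive * alive + (1 - p_survive) * DEATH

# Guided by the objective chances (0.99 with armour, 0.5 without),
# buying the armour is clearly the better bet:
print(eu(True, 0.99), eu(False, 0.5))   # ≈ 8.9 vs -25.0

# Guided by the crystal ball (survival certain either way), buying
# the armour merely buys pauperdom:
print(eu(True, 1.0), eu(False, 1.0))    # 10.0 vs 50.0
```

On these numbers the two information sources reverse the ranking of the options, which is exactly the predicament Lewis describes below.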
‘… I believe in the logical possibility of time travel, precognition, etc., and I see no reason why suitable evidence might not convince a perfectly rational agent that these possibilities are realised, and in such a way as to bring him news from the future. … It seems to me completely unclear what conduct would be rational for an agent in such a case. Maybe the very distinction between rational and irrational conduct presupposes something that fails in the abnormal case. You know that spending all you have on armour would greatly increase your chances of surviving the coming battle, but leave you a pauper if you do survive; but you also know, by news from the future that you have excellent reason to trust, that you will survive. (The news doesn’t say whether you will have bought the armour.) Now: is it rational to buy the armour? I have no idea—there are excellent reasons both ways. And I think that even those who have the correct two-box’st intuitions about Newcomb’s problem may still find this new problem puzzling. That is, I don’t think the appeal of not buying armour is just a misguided revival of V-maximizing intuitions that we’ve elsewhere overcome.’ Lewis, in Price [13] (p. 19)
4.1. Price’s Argument for EviCausalism (and against Causalism) in Foreknowledge Cases
‘Satan informs you … “I bet you didn’t know this. On those actual future occasions where you yourself bet on the coin, it comes up Tails about 99% of the time. (On other occasions, it is about 50% Tails.)” What strategy is rational at this point? Should you assess your expected return in the light of the objective chances? Or should you avail yourself of Satan’s further information?’ [3] (p. 497)
‘“To hell with Satan”, says this mad-dog objectivist, thumping the table. “By betting Tails, you irrationally forgo an equal chance of a greater reward.”’ [3] (p. 522)
‘Lewis himself notes [15] (p. 274) that there are possibilities (involving such things as time travellers, seers, and circular spacetimes) in which the past carries news from the future which, if known, breaks the connection between credence and chance. When the past does carry such news, I will say that it contains “crystal balls”’ [16] (p. 508)
- BOXY CHEWCOMB:
- ‘Suppose that God offers you the contents of an opaque box, to be collected tomorrow. He informs you that the box will then contain [£]0 if a fair coin to be tossed at midnight lands Heads, and [£]1,000,000 if it lands Tails. Next to it is a transparent box, containing [£]1000. God says, “You can have that money, too, if you like”. At this point Satan whispers in your ear, saying, “It is definitely a fair coin, but my crystal ball tells me that in 99 percent of future cases in which people choose to one-box in this game, the coin actually lands Tails; and ditto for two-boxing and Heads.”’ [3] (p. 505)
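The arithmetic behind BOXY CHEWCOMB can be set out in a short sketch of our own (the payoffs are those in the quoted case; which probabilities to feed in is, of course, the very point at issue):

```python
# Payoffs from BOXY CHEWCOMB: £1,000,000 in the opaque box on Tails,
# £1000 visible in the transparent box.
M, T = 1_000_000, 1_000

def ev(two_box, p_tails):
    """Expected pounds, given the probability of Tails appropriate
    to the chosen option."""
    return p_tails * M + (T if two_box else 0)

# Valued by the objective chance of the fair coin (0.5 either way),
# two-boxing is better by exactly the visible £1000:
print(ev(False, 0.5), ev(True, 0.5))     # 500000.0 vs 501000.0

# Valued by Satan's conditional frequencies (99% Tails among one-boxers,
# 99% Heads among two-boxers), one-boxing wins overwhelmingly:
print(ev(False, 0.99), ev(True, 0.01))   # ≈ 990000 vs ≈ 11000
```

The £1000 margin in the first comparison is the Dominance reasoning of the traditional two-boxer; the second comparison is what availing yourself of Satan’s information amounts to.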
‘[C]ausal dependence should be regarded as an analyst-expert about the conditional credences required by an evidential decisionmaker. A little more formally, the proposal goes something like this:

(EC): B is causally dependent on A just in case an expert agent would take P(B|A) ≠ P(B), in a calculation of the V-utility of bringing it about that A (in circumstances in which the agent is not indifferent to whether B).

Since this suggestion takes it to be definitive of causal belief that its role is to guide a particular kind of evidential judgment, I shall call it the EviCausalist proposal.’ [3] (p. 509)
4.2. Causalism against All Odds?
‘[U]nlike traditional Evidentialists, who accept the Causalist’s conception of the modal landscape, EviCausalists will simply deny that “they would have gotten [the million] whatever they did”. On the contrary, as they understand the counterfactuals … they would have received only [£]1000 had they two-boxed. It is the two-boxers who are irrational in this counterfactual sense, by the EviCausalists’ lights: had the two-boxers one-boxed instead, they would have had the million.’
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
1. We give a more detailed account of the aesthetics of absurdity in our book (in preparation) on the aesthetic experience of the extraordinary.
2. Or, perhaps, to the ungrounded optimism you have displayed in expecting the world not to work against those who make good choices for good reasons. It may be that a person looks heroic through looking like someone who persists in making good choices despite what seems to be a world which punishes reason. Certain comedy characters (Victor Meldrew in the BBC’s One Foot in the Grave, for example) might best be understood as heroes for continually finding themselves faced with such ordeals. (Thanks to Thomas Smith for some helpful discussions of this kind of heroism.)
3. Newcomb himself discusses a backward-communicating device, the antitelephone, in [9]. His proposal, however, is that such a device is impossible, because it would make possible a contradiction. So, Newcomb had reservations about backwards causation similar in spirit to those of Mellor [10] (pp. 125–135).
4. That the causal link would, in the Newcomb case where the predictor has foreknowledge, be a case of backwards causation is less important than it may initially appear. As Lewis [4] (pp. 236–237) notes, the classic Newcomb problem can equally be set up as one in which the mechanism predicts what you already have chosen rather than what you will choose later. The apparent connection of the traditional Newcomb problem to backwards causation in particular is a narrative accident. Having the choice come later than the prediction which causes the filling (or not) of the box is a very efficient way of ruling out various ways which come to mind in which there may be causal dependence of the contents of the box on the choice made. For someone who is willing to explore the possibility of backwards causation, of course, not all such ways are ruled out. But the possibility that a causal decision theorist will one-box because they believe they are dealing with a case of backwards causation does not dissolve the interest of Newcomb cases, which is sustained by the fact that somebody else—the evidentialist—does not need to believe they are dealing with a case of backwards causation in order to one-box. (It is also worth noting here that a separate question arises about when it is rational to change one’s beliefs about the causal structure of the world—for example, about what should constitute compelling evidence of backwards causation—which would, for a causal decision theorist, be pertinent also to the question of which choice it is rational to make. But that question is beyond the scope of this paper.)
5. The uses of ‘original’ and ‘new’ that Lewis puts in scare-quotes allude to a way of thinking about visiting the past that he has discussed and is clear we should not take literally. The ‘original’ and ‘new’ 1921 are simply (one and the same) 1921.
6. Effingham will say the same even if we amplify the remarkableness. An anonymous referee suggests the example where Unsavoury Bets Inc. simultaneously offers one million such bets (on one million such grandfather-killing expeditions). They suggest that taking it that ‘something will go wrong’ is less plausible here, since it would be a ‘statistical miracle’ for a million things to go wrong. It would indeed be a quasi-miracle, but Effingham can still maintain that a quasi-miraculous, metaphysically possible world is closer than a metaphysically impossible world. (Our own position on a case like this, in the spirit of the position of this paper, is that heightening the extraordinariness of a scenario may heighten its absurdity, but that this is independent of what it is rational to do. A fuller discussion of the relation between quasi-miracles and the absurd is given in our book (in preparation) on the aesthetic experience of the extraordinary.)
7. That is, the one-thumber is not just a causalist who has misjudged the counterfactual dependences, and they are not trying to persuade the two-thumber that they ought by the standards of CDT to choose one-thumbing. They are trying to demonstrate the superiority of choosing as an evidentialist.
8. Of course, one difference is that the one-thumber could, if they wished, raise the point that the report said the discoverer had one thumb, but this does not work well as a piece of spotlight shifting, since it plays too easily into the hands of the two-thumber, who will say, ‘Well, it wouldn’t have if you hadn’t gone and cut off your thumb for no good reason.’ And, moreover, it leaves the one-thumber’s circumstances vulnerable to being compared to a Flann O’Brien construction, inspiring the Keats and Chapman-style pun: ‘You’ve cut off your thumb despite your fate’. Invoking such constructions (such as those found in O’Brien’s The Various Lives of Keats and Chapman) as a comparison would cast the one-thumber as absurd.
9. Thanks to an anonymous referee for the suggestion.
10. A complication of cases like FLUKOMB is that once we imagine the chooser expecting flukes, we might be tempted to define a new matrix of options for the chooser with utilities based on the (dis)value to the chooser of whatever flukes they anticipate as plausible ways for them to accidentally take one box. But since this would both change the problem and take us into a discussion too far afield from our key points of disagreement with Effingham, we set it aside here.
11. There are also other potential disanalogies to be explored between PAUPER and Newcomb cases, and which Lewis may have in mind. First, we are within our rights to question whether not buying armour is more analogous to traditional one-boxing or two-boxing. Initially—for example, in Nozick’s [14] canonical framing of the Newcomb problem—the dispute was presented in terms of a clash between two principles: the principle to Maximise Expected Utility (MEU) (which appears to favour one-boxing) and the principle of Dominance (which favours two-boxing). The latter says that when option A gives you a better outcome than option B no matter what the state of the world, option A dominates B, and that the dominating option should always be chosen. Nowadays, the Newcomb debate is more often presented as a clash between two concepts of utility, with both sides adhering to MEU, but their conceptions and calculations of utility differing. The fact that the dispute initially invited characterisation in terms of a clash of MEU with Dominance, however, allows us to draw attention to a respect in which not buying the armour is unlike one-boxing. If we are to imagine that both hypothetical Knights end up winning their battle, then it is actually Effingham’s so-called ‘one-boxer’ whose action is favoured by Dominance. Putting it another way, the financial saving that the ‘one-boxer’ chooses here is in fact more like the visible £1000 that the traditional two-boxer chooses. We could undermine this by questioning whether it is appropriate to focus on comparing the scenarios in which the two hypothetical Knights make their different choices and win the battle. Overall, it seems (to us) that Effingham is encouraging us to imagine in this way—he says that ‘knowing that his safety is assured, [Knight] may as well make a name for himself and fight naked’ [1] (p. 193)—but focussing too heavily on this may obscure the fact that one of the puzzles raised by the case is whether the documented win is interpreted as an opportunity to save money on armour or as evidence that we do buy the armour (which is why it is effective to point out that the news report does not tell us whether we bought the armour). If we take the first option and conceptualise it as an opportunity, we run into the further disanalogy with Newcomb mentioned here. If we do not, the fact that there is this complication over whether to do so is itself a difference between this case and Newcomb cases. Either way, the analogy between decision in PAUPER and decision in Newcomb cases is not thoroughly secure.
12. For example, they sidestep some of the complications of PAUPER mentioned in the previous note.
13. At least, that is what we shall assume for the purposes of this paper, although Lewis’s comments on when it is ‘fair to ignore’ [15] (p. 274) complications involving foreknowledge, and his comments (above) to Rabinowicz, allow for a more nuanced view, in which continuing to employ the Principal Principle is not irrational since these are rather cases where the adjudication of what is rational must break down.
14. Effingham is aware of the link between his account of Price’s and of this difference [1] (p. 198, note 2).
15. Which also means withdrawing from their position on the original Newcomb case. EviCausalism gives us the result that we should bet Tails in CHEWCOMB, one-box in BOXY CHEWCOMB, and one-box in traditional Newcomb.
16. Again, the counterfactuals here do not indicate a counterfactual theory of causation: they are the counterfactuals Price takes to be grounded by the (evi)causal facts.
17. Have we not committed to the idea that the evidentialist accepts the causalist’s counterfactuals in holding that they may avail themselves of ‘being richer is reserved for the irrational’? Does this not express that the evidentialist agrees that had they irrationally taken both boxes, they would have been richer? No; that is not at all what the ‘riches are reserved for the irrational’ locution means. Consider: when Lewis, as a two-boxer holding his £1000 and responding to ‘Why ain’t you rich?’, says that ‘Riches are reserved for the irrational’, he is not admitting that had he irrationally one-boxed, he would have been rich. On the contrary, he thinks that had he irrationally one-boxed, he would have had nothing.
References
- Effingham, N. Time Travel: Probability and Impossibility; Oxford University Press: Oxford, UK, 2020.
- Lewis, D. Causal Decision Theory. Australas. J. Philos. 1981, 59, 5–30.
- Price, H. Causation, Chance, and the Rational Significance of Supernatural Evidence. Philos. Rev. 2012, 121, 483–538.
- Lewis, D. Prisoners’ Dilemma is a Newcomb Problem. Philos. Public Aff. 1979, 8, 235–240.
- Hargreaves Heap, S.; Hollis, M.; Lyons, B.; Sugden, R.; Weale, A. The Theory of Choice: A Critical Guide; Basil Blackwell: Oxford, UK, 1992.
- Joyce, J. Foundations of Causal Decision Theory; Cambridge University Press: Cambridge, UK, 1999.
- Lewis, D. Why Ain’cha Rich? Noûs 1981, 15, 377–380.
- Gibbard, A.; Harper, W. Counterfactuals and Two Kinds of Expected Utility. In Foundations and Applications of Decision Theory; Hooker, A., Leach, J.J., McClennan, E.F., Eds.; D. Reidel: Dordrecht, The Netherlands, 1978; pp. 125–162.
- Benford, G.A.; Book, D.L.; Newcomb, W.A. The Tachyonic Antitelephone. Phys. Rev. D 1970, 2, 263–265.
- Mellor, D.H. Real Time II; Routledge: London, UK, 1998.
- Lewis, D. The Paradoxes of Time Travel. Am. Philos. Q. 1976, 13, 145–152.
- Lewis, D. Postscripts to ‘Counterfactual Dependence and Time’s Arrow’. In Philosophical Papers; Lewis, D., Ed.; Oxford University Press: Oxford, UK, 1986; Volume II, pp. 52–66.
- Price, H. The Lion, the ‘Which?’ and the Wardrobe—Reading Lewis as a Closet One-Boxer. Available online: https://philsci-archive.pitt.edu/4894/ (accessed on 28 March 2024).
- Nozick, R. Newcomb’s Problem and Two Principles of Choice. In Essays in Honor of Carl G. Hempel; Rescher, N., Ed.; D. Reidel: Dordrecht, The Netherlands, 1969; pp. 114–146.
- Lewis, D. A Subjectivist’s Guide to Objective Chance. In Studies in Inductive Logic and Probability; Jeffrey, R., Ed.; University of California Press: Berkeley, CA, USA, 1980; Volume II, pp. 263–293.
- Hall, N. Correcting the Guide to Objective Chance. Mind 1994, 103, 505–517.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Bourne, C.; Caddick Bourne, E. The Absurdity of Rational Choice: Time Travel, Foreknowledge, and the Aesthetic Dimension of Newcomb Problems. Philosophies 2024, 9, 99. https://doi.org/10.3390/philosophies9040099