
Games 2017, 8(3), 31; https://doi.org/10.3390/g8030031

Article
The Monty Hall Problem as a Bayesian Game
Department of Economics, University of Texas at Austin, Austin, TX 78712, USA
Received: 25 May 2017 / Accepted: 20 July 2017 / Published: 26 July 2017

Abstract:
This paper formulates the classic Monty Hall problem as a Bayesian game. Allowing Monty a small amount of freedom in his decisions facilitates a variety of solutions. The solution concept used is the Bayes Nash Equilibrium (BNE), and the set of BNE depends on Monty's motives and incentives. We endow Monty and the contestant with common prior probabilities (p) about the motives of Monty and show that, under certain conditions on p, the unique equilibrium is one in which the contestant is indifferent between switching and not switching. This agrees with the typical responses and explanations given by experimental subjects. In particular, we show that our formulation can explain the experimental results in Page (1998), namely that more people gradually choose to switch as the number of doors in the problem increases.
Keywords:
monty hall; equiprobability bias; games of incomplete information; bayes nash equilibrium
JEL Classification:
C60; C72; D81; D82

1. Introduction

The “Monty Hall” problem is a famous scenario in decision theory. It is a simple problem, yet people confronted with this dilemma almost overwhelmingly seem to make the incorrect choice. This paper provides justification for why a rational agent may actually be reasoning correctly when he or she makes this ostensibly erroneous choice.
This notorious problem arrived in the public eye in the form of a September 1990 column published by Marilyn vos Savant in Parade Magazine1 (vos Savant (1990) [5]). vos Savant’s solution drew a significant amount of ire, as people vehemently disagreed with her answer. The ensuing debate is recounted in several subsequent columns by vos Savant [6,7], as well as in a 1991 New York Times article by John Tierney [8]. The problem draws its name from the former host of the TV show “Let’s Make a Deal”, Monty Hall, and is formulated as follows:
There is a contestant (whom we shall call Amy (A)) on the show and its famous host Monty Hall (M). Amy and Monty are faced on stage by three doors. Monty hides a car behind one of the doors at random, and a goat behind the other two doors. Amy selects a door, and then, before revealing to Amy the outcome of her choice, Monty opens one of the other unopened doors to reveal a goat. Amy is then given the option of switching to the other unopened door, or of staying on her current choice. Should Amy switch?
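Under the standard reading of the problem, in which Monty must reveal a goat and must offer the switch, the answer is yes, and the 2/3 advantage from switching can be checked with a short simulation. The sketch below is illustrative only (the function names are ours); it collapses Monty's door-opening step into the observation that, once all other goat doors are open, switching wins exactly when the first pick was wrong.

```python
import random

def play(switch: bool, rng: random.Random) -> bool:
    """One round of the classic 3-door problem, assuming Monty must
    reveal a goat and must offer the switch (the standard reading)."""
    car = rng.randrange(3)
    pick = rng.randrange(3)
    # Monty opens a goat door among the two unpicked doors; the remaining
    # closed door holds the car exactly when Amy's first pick was wrong,
    # so switching wins exactly when pick != car.
    return (pick != car) if switch else (pick == car)

def win_rate(switch: bool, trials: int = 200_000, seed: int = 1) -> float:
    rng = random.Random(seed)
    return sum(play(switch, rng) for _ in range(trials)) / trials

rates = (win_rate(True), win_rate(False))  # close to 2/3 and 1/3, respectively
```

Under the game-theoretic formulation developed below, where Monty may decline to reveal, this clean 2/3 answer no longer holds in general.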
There are a significant number of papers in both the economics and psychology literatures that look at the typical person’s answers to this question. The overwhelming result presented in these bodies of work, including [9,10,11], is that subjects confronted with the problem solve it incorrectly and fail to recognize that Amy should switch. The typical justification put forward by the subjects is that the likelihood of success if they were to switch doors is the same as if they were to stay with their original choice of door. This mistake is often cited as an example of the equiprobability bias heuristic ([12,13,14]) as described in Lecoutre (1992) [15]. Other possible causes for this mistake mentioned in the literature are the “illusion of control” ([10,16,17]), the “endowment effect” ([9,10,11,14,18]), or a misapplication of Laplace’s “principle of insufficient reason” ([10]).
It is important to recognize, however, that in the scenario presented above, as in vos Savant’s columns, Selvin’s articles, and many other formulations of the problem, it is unclear whether Monty must reveal a goat and allow Amy to switch. Nevertheless, the subsequent analyses and solutions imply that this is an implicit condition of the problem.
A central feature of the model formulated in this paper is that the game show host, in addition to the contestant, is a strategic player. Indeed, this agrees with the New York Times article [8], in which the eponymous host Monty, himself, notes that he had a significant amount of freedom. To model this problem, we formulate it as a non-cooperative game, as developed by Nash, von Neumann and others. Several other papers model this scenario as a game: notably Fernandez and Piron (1999) [19], Mueser and Granberg (1999) [20], Bailey (2000) [21], and Gill (2011) [22]. The most similar of these works to this paper is [19]. There, Fernandez and Piron also note Monty’s freedom to choose, and in fact allow him to manipulate various behavioral parameters of Amy’s. In addition to giving Monty this wide array of strategy choices, the authors also introduce a third player, “the audience”. Monty’s objective as a host is to make the situation difficult for the audience to predict, and this incentive, combined with Monty’s menu of options, allows for a variety of equilibria.
In this paper, we are able to generate our results simply by giving Monty the option to allow or prevent Amy from switching choices. We note that Monty’s incentives and personality matter, and show that his disposition towards Amy’s success affects the equilibria of the game. We model this by considering a game of Incomplete Information2. Both Monty and Amy share a common prior about whether Monty is “sympathetic” or “antipathetic” towards Amy, but only Monty sees the realization of his “type” before the game is played. Using this, we are able to show that there may be an equilibrium where it is not optimal for Amy to always switch from her initial choice following a revelation by Monty. There is an equilibrium under which Amy’s payoffs under switching and staying are the same (as people are wont to claim). We also generalize the three door problem to one with n doors, and show that our formulation agrees with the experimental evidence in Page (1998) [24]. As the number of doors increases, the range of prior probabilities in support of an equilibrium where Amy always switches grows. Concurrently, as the number of doors increases, the range of prior probabilities under which there is an equilibrium where Amy does not always switch, shrinks. Moreover, there is a cutoff number of doors, beyond which the only equilibrium is the one where Amy always switches.
Furthermore, in Appendix B we show that the results established in this paper are not dependent on the specific assumptions made about Monty’s preferences. We relax our original conditions on Monty’s preferences and endow him with a continuum of types which model a much broader swath of possible motives. As before, the size of the set of prior probabilities in support of the “always switch” equilibrium increases as the number of doors increases. Furthermore, the size of the set of priors in support of an equilibrium where Amy does not always switch is decreasing in the number of doors.
There are several contributions made by this paper. Unlike other papers in the literature, this paper allows for uncertainty about Monty’s motives. In contrast to our formulation, the modal construction of the Monty Hall scenario is as a static zero-sum game (as in [21,22]), with no uncertainty whatsoever about Monty’s motives. While the paper by Mueser and Granberg (1999) [20] compares briefly the Nash Equilibria in games where Monty is respectively neutral and antipathetic, they assume that his preferences are common knowledge in each. In contrast to [20], we assume only that the contestant has a set of priors over Monty’s motives.
We also examine how the number of doors affects the equilibria in the problem. Experimental results in [24] show that the number of doors in the scenario affects play, and this paper is the first to show how these results can arise naturally as equilibrium play by rational agents. In addition, our formulation of the scenario in Appendix B captures a variety of Monty’s possible preferences beyond the simple sympathetic/antipathetic dichotomy, and we show that our results hold even in this general environment.
The structure of this paper is as follows. In Section 2, we formalize the problem as one where Monty may choose whether to reveal a goat, and examine the cases where Monty derives utility from Amy’s success or failure, respectively. In Section 3, we extend the model by allowing uncertainty about whether Monty is sympathetic or antipathetic, and generalize the analysis from three doors to n doors. Finally, in Appendix B we generalize the model further by allowing for a continuum of Montys, each with different preferences.

2. The Model

We consider the onstage scenario, where Monty (M) hides two goats and one car among three doors and Amy (A) must then select a door. Suppose that following the initial selection of a door by Amy, Monty has two choices:
  • Reveal a goat behind one of the unselected doors, which we shall call “reveal” (r).
  • Not reveal a goat (h).
Amy has different feasible strategies, depending on what Monty’s choice is: If Monty reveals a goat, then Amy may
  • Switch (s).
  • Not switch (or keep) (k).
If Monty does not reveal (choice h), then Amy has no option to switch choices and must keep her original selection. This is the same assumption as is made in [19]. Note that few papers in the literature address Monty’s protocol if revealing a goat and allowing Amy to switch is not mandatory. Moreover, as noted in [20], whether Monty must reveal a goat and allow Amy to switch is itself ambiguous in a significant number of papers (as well as in the column published by vos Savant and various popular retellings of the problem).
To simplify the analysis, we impose that Monty must hide the car completely at random. Consequently, we can model this situation as a game of Incomplete Information:
Amy’s initial pick can be modeled as a random draw or a “move of nature”. With probability 1 / 3 , her first pick is correct and with probability 2 / 3 , it is incorrect. Monty is able to observe the realization of this draw prior to making his decision and see whether or not Amy is correct. Thus, this game can be modeled as a sequential Bayesian game, where Amy and Monty have the common prior θ 1 = 1 / 3 and θ 2 = 2 / 3 . We can rephrase this by saying that Amy’s initial pick determines Monty’s type, which is given by the binary random variable θ ,
$$\theta \in \{\theta_1, \theta_2\}, \qquad \Pr(\theta = \theta_1) = \frac{1}{3}, \quad \Pr(\theta = \theta_2) = \frac{2}{3}$$
The Monty of type θ 1 is the Monty in the scenario where Amy has made the correct initial guess, and conversely, the Monty of type θ 2 is the one where Amy was incorrect initially. Monty knows his type, but Amy only knows the distribution over types. That is, Monty is able to view the “draw” realization, i.e., whether or not Amy is correct, before choosing his action. Amy cannot observe the outcome and has only her prior to go by. We may formally write the sets of strategies for Amy and type θ i of Monty as S A 3 and S M , respectively:
$$S_A = \{s, k\} \qquad S_M = \{r, h\}$$
Before going further, we need to specify the two players’ preferences in this strategic interaction. Naturally, Amy prefers to end up with the car than with the goat4. Assigning Amy to be player 1 and Monty, player 2, we write this formally as:
$$u_A(k,r) = u_A(k,h) = u_A(s,h) > u_A(s,r) \quad \text{for } \theta = \theta_1$$
$$u_A(s,r) > u_A(k,r) = u_A(k,h) = u_A(s,h) \quad \text{for } \theta = \theta_2$$
It is reasonable to suppose that Monty would slightly prefer to reveal a goat5, and thereby give Amy the option of switching, than to not reveal a goat, and prevent Amy from switching. However, we also endow Monty with preferences over Amy’s success or failure (he cares whether Amy ultimately ends up with a car or goat), and moreover, we stipulate that these preferences are in a sense stronger than Monty’s desire to reveal. Given Amy’s choice of keep, Monty would always prefer to allow her to switch (since the outcome of Amy’s choice would not change):
$$u_M(k,r) > u_M(k,h)$$
However, given Amy’s choice of switch, Monty would prefer to allow her to switch (choice r) only if Amy switching would result in Monty’s desired outcome.
We now divide the analysis into two cases:
Case 1:
This is what we will call the Sympathetic Case: Monty and Amy’s preferences are aligned in the sense that all else equal Monty would prefer that Amy be correct rather than incorrect.
Case 2:
We call this the Antipathetic Case: Here, all else equal, Monty would prefer that Amy be incorrect. Their preferences are not aligned.

2.1. Case 1, Sympathetic

We stipulate that all parameters of the game are common knowledge, including Monty’s preferences; i.e., it is common knowledge that Monty is a sympathetic player. As the sympathetic player, Monty would prefer that Amy be successful. Consequently, in the state of the world where Amy’s initial guess is correct ( θ 1 ), Monty would prefer that Amy not switch. Likewise, in the state of the world where Amy guesses wrong initially ( θ 2 ), Monty would rather Amy switch. We write explicitly
$$\text{For } \theta = \theta_1, \quad u_M(s,j) < u_M(k,j), \quad j \in \{r, h\}$$
$$\text{For } \theta = \theta_2, \quad u_M(k,j) < u_M(s,j), \quad j \in \{r, h\}$$
As above, Amy’s and Monty’s sets of strategies S A and S M are
$$S_A = \{s, k\} \qquad S_M = \{r, h\}$$
and in Figure 1 we present the game in extensive form.
The corresponding payoff matrix is:
[Payoff matrix image: Games 08 00031 i001]
Lemma 1.
The unique Bayes-Nash Equilibrium is in pure strategies6; it is
$$(s, (h, r))$$
Amy always switches, type θ 1 of Monty chooses not to reveal, and type θ 2 of Monty chooses to reveal. It is clear that following Monty’s choice, Amy knows exactly whether her initial choice was correct. Monty’s choice completely reveals to Amy the state of the world.

2.2. Case 2, Antipathetic

As in Case 1 (Section 2.1), Monty’s preferences are common knowledge; here, it is common knowledge that he is antipathetic towards Amy. In contrast to the sympathetic player of Section 2.1, an antipathetic Monty would prefer that Amy be unsuccessful. Accordingly, in the state of the world where Amy’s initial guess is correct ( θ 1 ), Monty would prefer that Amy switch. Correspondingly, in the state of the world where Amy guesses wrong initially ( θ 2 ), Monty would rather Amy not switch. We write explicitly
$$\text{For } \theta = \theta_1, \quad u_M(k,j) < u_M(s,j), \quad j \in \{r, h\}$$
$$\text{For } \theta = \theta_2, \quad u_M(s,j) < u_M(k,j), \quad j \in \{r, h\}$$
Defining Amy and Monty’s strategies as before, in Figure 2 we present the game in extensive form.
The corresponding payoff matrix is:
[Payoff matrix image: Games 08 00031 i002]
Interestingly, there are no pure strategy Bayes-Nash Equilibria.
Lemma 2.
The unique BNE is in mixed strategies7; it is:
$$\left( \Pr{}_A(s) = \tfrac{1}{2}, \; \left( r, \; \Pr{}_M(r) = \tfrac{1}{2} \right) \right)$$
Amy switches half the time, while type θ 1 of Monty chooses to reveal, and type θ 2 of Monty chooses to reveal half of the time. Using Bayes’ law, we write:
$$\Pr(\text{Correct} \mid \text{Reveal}) = \frac{\Pr(\text{Reveal} \mid \text{Correct}) \Pr(\text{Correct})}{\Pr(\text{Reveal})} = \frac{1 \cdot (1/3)}{1 \cdot (1/3) + (1/2) \cdot (2/3)} = \frac{1}{2}$$
$$\Pr(\text{Correct} \mid \text{Not Reveal}) = \frac{\Pr(\text{Not Reveal} \mid \text{Correct}) \Pr(\text{Correct})}{\Pr(\text{Not Reveal})} = \frac{0 \cdot (1/3)}{1/3} = 0$$
Thus, a completely rational Amy knows that if Monty chooses to reveal a goat, there is a 50 percent chance that her initial choice was correct. Simply by allowing Monty the option to not reveal, we have shown that if Monty and Amy’s preferences are opposed, the unique equilibrium is one where Amy does not always switch. Moreover, this equilibrium is one under which Amy’s beliefs are exactly those proposed so often by subjects in the many experiments: that her chances of success are equal under her decision to switch or not.
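The posterior computation above can be verified mechanically. A small sketch with exact rational arithmetic, using the equilibrium revelation probabilities (1 for type θ1, 1/2 for type θ2):

```python
from fractions import Fraction as F

p_correct = F(1, 3)                     # prior that Amy's first pick is right (θ1)
reveal = {True: F(1), False: F(1, 2)}   # equilibrium reveal probabilities by state

pr_reveal = reveal[True] * p_correct + reveal[False] * (1 - p_correct)
posterior_reveal = reveal[True] * p_correct / pr_reveal

pr_hide = (1 - reveal[True]) * p_correct + (1 - reveal[False]) * (1 - p_correct)
posterior_hide = (1 - reveal[True]) * p_correct / pr_hide

# posterior_reveal == 1/2 and posterior_hide == 0, matching the text
```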

3. Uncertainty about Monty’s Motives

We now take the framework from the previous section and extend it as follows. Suppose now that Amy is unsure about whether Monty is sympathetic or antipathetic. We model this by saying that Amy and Monty have a common prior p, defined as
p = Pr ( M = M S )
p is the probability that Monty is sympathetic (type M S ); and naturally, 1 − p is the probability that Monty is antipathetic (type M A ). Consequently, there are 4 states of the world: {θ1, θ2} × {M S, M A}. Thus, we may present the game matrices:
[Payoff matrix image: Games 08 00031 i003]
We write the following lemma:
Lemma 3.
Given uncertainty about Monty’s motives as formulated above, there is a pure strategy BNE,
$$(s, (h, r, r, h))$$
for p ≥ 1/3.
We see that if Amy is sufficiently optimistic about the nature of Monty, there is a pure strategy equilibrium where she switches.
There may also be mixed strategy BNE, and we write:
Lemma 4.
Given uncertainty about Monty’s motives as formulated above, there is a mixed strategy BNE,
$$\left( \Pr{}_A(s) = \tfrac{1}{2}, \; (\alpha, r, r, \beta) \right)$$
where type M S θ 1 plays Pr(r) = α and type M A θ 2 plays Pr(r) = β, where α, β satisfy
$$3p - p\alpha + 2\beta - 2p\beta - 1 = 0$$
Equation (11) has a solution8 in acceptable (i.e., α, β ∈ [0, 1]) α and β for all p ≤ 1/2.
How should we interpret these equilibria? We see that as long as Amy believes that there is at least a 50 percent chance that Monty is antipathetic, there is an equilibrium where she switches half of the time. Interestingly, as long as the common belief as to Monty’s type falls in that range, Amy’s mixing strategy is independent of the actual value of p: Amy mixes equally regardless of whether p is 0 or 1/2. On the other hand, Monty’s strategies are not independent of p. We may rearrange Equation (11) and take derivatives to see how Monty’s optimal choices of α and β vary with p. Rearranging (11), we obtain,
$$\beta(p, \alpha) = \frac{1 + p\alpha - 3p}{2 - 2p}$$
Then,
$$\frac{\partial \beta}{\partial p} = \frac{2\alpha - 4}{(2 - 2p)^2}$$
This is strictly less than 0 for all p ≤ 1/2 and for all permissible α. Similarly,
$$\alpha(p, \beta) = \frac{3p + 2\beta - 2p\beta - 1}{p}$$
Then,
$$\frac{\partial \alpha}{\partial p} = \frac{1 - 2\beta}{p^2}$$
This is greater than or equal to 0 for β ≤ 1/2 and for all p. However, we show later (in Appendix A.3) that β must be less than or equal to 1/2, and a brief examination of Equation (14) shows that β cannot equal 1/2. Therefore, Equation (15) is positive for all acceptable values of the parameters.
As the common belief that Monty is sympathetic increases, type M S θ 1 plays reveal more often. Conversely, as p increases, type M A θ 2 plays reveal less often. Both relationships are somewhat counter-intuitive: as p increases, the sympathetic Monty reveals more often in the state where Amy was correct, and the antipathetic Monty reveals less often in the state where Amy was incorrect.
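These comparative statics are straightforward to confirm numerically. The sketch below encodes the closed forms from Equations (12) and (14) and checks the derivative signs by finite differences at arbitrarily chosen illustrative parameter values:

```python
def beta_of(p: float, alpha: float) -> float:
    # Equation (12): β(p, α) = (1 + pα − 3p) / (2 − 2p)
    return (1 + p * alpha - 3 * p) / (2 - 2 * p)

def alpha_of(p: float, beta: float) -> float:
    # Equation (14): α(p, β) = (3p + 2β − 2pβ − 1) / p
    return (3 * p + 2 * beta - 2 * p * beta - 1) / p

p0, a0, b0, eps = 0.25, 0.6, 0.3, 1e-6   # illustrative values only
dbeta_dp = (beta_of(p0 + eps, a0) - beta_of(p0 - eps, a0)) / (2 * eps)
dalpha_dp = (alpha_of(p0 + eps, b0) - alpha_of(p0 - eps, b0)) / (2 * eps)

# ∂β/∂p < 0 and ∂α/∂p > 0 (for β < 1/2), as claimed in the text
assert dbeta_dp < 0 and dalpha_dp > 0
```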

Generalizing to n Doors

We now generalize the previous scenario to one with (n − 1) doors concealing goats and one door concealing a car, n ≥ 3. Monty has hidden a car behind one of n doors, and a goat behind each of the remaining (n − 1) doors. As before, Amy selects a door, but before the outcome of her choice is revealed, Monty may open (n − 2) of the (n − 1) remaining doors to reveal goats. If Monty reveals the goats, Amy has the option of switching to the other unopened door or remaining with her current choice of door. This formulation agrees with the scenario faced by experimental subjects in Page (1998)9 [24].
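As a benchmark, when Monty is required to open the (n − 2) goat doors, the value of always switching follows from exact enumeration; a minimal sketch (the function name is ours):

```python
from fractions import Fraction as F
from itertools import product

def switch_win_prob(n: int) -> F:
    """Exact win probability of 'always switch' with n doors when Monty
    must open n-2 goat doors among the unpicked ones (the mandatory-
    reveal benchmark, not the game analyzed in this section)."""
    wins, total = 0, 0
    for car, pick in product(range(n), repeat=2):
        # After Monty opens n-2 goat doors, the single remaining closed
        # door is the car whenever pick != car, so switching wins then.
        wins += (pick != car)
        total += 1
    return F(wins, total)

# The benchmark advantage of switching grows with n: (n-1)/n
assert switch_win_prob(3) == F(2, 3)
assert switch_win_prob(100) == F(99, 100)
```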
It is easy to obtain the following lemma:
Lemma 5.
The unique pure strategy BNE is
$$(s, (h, r, r, h))$$
for p ≥ 1/n.
As before, there may also be a mixed strategy BNE, presented in the lemma,
Lemma 6.
There is a set of mixed strategy BNE given by:
$$\left( \Pr{}_A(s) = \tfrac{1}{2}, \; (\alpha, r, r, \beta) \right)$$
where type M S θ 1 plays Pr(r) = α and type M A θ 2 plays Pr(r) = β. This equation defines a BNE for all acceptable α, β (i.e., α, β ∈ [0, 1]) satisfying
$$np - p\alpha + (n-1)\beta - (n-1)p\beta - 1 = 0$$
As a corollary,
Corollary 1.
Equation (16),
$$np - p\alpha + (n-1)\beta - (n-1)p\beta - 1 = 0$$
has a solution in acceptable α and β for all p that satisfy:
$$\frac{1}{n - \alpha} \geq p, \qquad \frac{1 - (n-1)\beta}{(n-1) - (n-1)\beta} \geq p \geq \frac{1 - (n-1)\beta}{n - (n-1)\beta}$$
Note that the second inequality in Corollary 1 requires that β ≤ 1/(n − 1). The proof of this corollary is simple and may be found in Appendix A.3.
Define set A n as
$$A_n = \left[ \frac{1}{n}, \, 1 \right]$$
That is, for a given situation with n doors, A n is the set of priors, p, under which there is a pure strategy BNE where Amy switches. We can then obtain the following theorem:
Theorem 1.
As the number of doors increases, the set of priors that support a BNE where Amy switches is monotonic in the following sense:
$$A_n \subset A_{n+1}$$
Moreover, in the limit, as the number of doors goes to infinity, we see that any interior probability supports a BNE where Amy switches.
Now, take the derivatives of the three constraints on the set of priors, p, given by Equation (17). They are:
$$-\frac{1}{(n - \alpha)^2}, \qquad \frac{1}{(\beta - 1)(n - 1)^2}, \qquad -\frac{1}{((\beta - 1)n - \beta)^2}$$
Evidently, each of these is negative for all acceptable values of the parameters and therefore, we see that the constraints are all strictly decreasing in n. We shall also find it useful to look at the limit of the two possible upper bounds for p:
$$\lim_{n \to \infty} \frac{1}{n - \alpha} = 0$$
$$\lim_{n \to \infty} \frac{1 - (n-1)\beta}{(n-1) - (n-1)\beta} = \frac{\beta}{\beta - 1}$$
The term in Equation (21) is strictly positive for all values of n, whereas the term in Equation (22) is less than 0 for all n greater than some cutoff value. Thus, there must be some cutoff value n̂, beyond which the first constraint, that 1/(n − α) ≥ p, does not bind. Indeed, it is simple to derive the following lemma:
Lemma 7.
The constraint 1/(n − α) ≥ p does not bind for all n > n̂, where n̂ is given by
$$\hat{n} = \frac{1}{2} \left( \alpha + 2 + \sqrt{\frac{\beta \alpha^2 - 4\alpha + 4}{\beta}} \right)$$
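A numerical sanity check of this cutoff: at n = n̂, the upper bound on p implied by β ≥ 0 coincides with the one implied by α ≤ 1, so for n > n̂ the former is slack. The sketch below treats the closed form for n̂ as given and uses arbitrary illustrative values of α and β:

```python
import math

def bound_beta_nonneg(n: float, alpha: float) -> float:
    return 1 / (n - alpha)                     # upper bound on p from β ≥ 0

def bound_alpha_le_one(n: float, beta: float) -> float:
    m = n - 1
    return (1 - m * beta) / (m * (1 - beta))   # upper bound on p from α ≤ 1

def n_hat(alpha: float, beta: float) -> float:
    # Closed form from Lemma 7
    return 0.5 * (alpha + 2 + math.sqrt((beta * alpha ** 2 - 4 * alpha + 4) / beta))

alpha, beta = 0.5, 0.3                         # illustrative values only
n = n_hat(alpha, beta)
# The two candidate upper bounds on p intersect exactly at n = n_hat:
assert abs(bound_beta_nonneg(n, alpha) - bound_alpha_le_one(n, beta)) < 1e-9
```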
Define the set B n as the interval of prior probabilities p that satisfy the conditions (A6) in Corollary 1 for a given number of doors n, n > n̂. Because n > n̂, the constraint
$$\frac{1 - (n-1)\beta}{(n-1) - (n-1)\beta} \geq p \geq \frac{1 - (n-1)\beta}{n - (n-1)\beta}$$
binds. Let b denote the left hand side of this expression and a the right hand side:
$$b = \frac{1 - (n-1)\beta}{(n-1) - (n-1)\beta} \qquad a = \frac{1 - (n-1)\beta}{n - (n-1)\beta}$$
Thus, B n is defined as
$$B_n = [a, b]$$
We can now state a natural corollary to Lemma 7:
Corollary 2.
There is a finite number N ∈ ℕ such that if n > N, then B n = ∅.
Proof. 
This corollary follows immediately from Lemma 7 and Equations (A7) and (A9) (monotonicity of the constraints, and behavior of left-hand side constraint in the limit). ☐
Since each set B n is a closed interval, it is natural to define the size of the set, ν n , as the Lebesgue measure of the interval. That is, if a, b are the two endpoints of the interval B n ,
$$\nu_n = |b - a|$$
We can obtain the following theorem:
Theorem 2.
ν n > ν n + 1 . That is, the size of the set of priors under which there is a mixed strategy BNE is strictly decreasing in n.
Proof may be found in Appendix A.4.
This theorem, in conjunction with Theorem 1, above, supports the experimental evidence from Page (1998) [24]. There, the author finds evidence that supports the hypotheses that people’s performance on the Monty Hall problem increases as the number of doors increases, and that, moreover, their improvement in performance is gradual in nature. Both findings follow from Theorems 1 and 2 in our paper. If we view the prior probability p as itself being drawn from some population distribution of prior probabilities, we see that as the number of doors increases, the likelihood that there is a BNE where Amy always switches is strictly increasing. The probability that the random draw of p falls in the required interval is strictly increasing for a fixed distribution, since the interval grows monotonically as n increases.
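Both monotonicity results can be illustrated numerically. A sketch with exact rationals, fixing β at an arbitrary value small enough that β ≤ 1/(n − 1) holds throughout the range examined:

```python
from fractions import Fraction as F

def A_lower(n: int) -> F:
    return F(1, n)                    # A_n = [1/n, 1], so a smaller lower endpoint
                                      # means a larger set of supporting priors

def B_size(n: int, beta: F) -> F:
    m = n - 1
    num = 1 - m * beta
    a = num / (n - m * beta)          # lower endpoint of B_n
    b = num / (m * (1 - beta))        # upper endpoint of B_n
    return b - a                      # ν_n, the measure of B_n

beta = F(1, 25)                       # illustrative; satisfies β ≤ 1/(n−1) for n ≤ 14
lowers = [A_lower(n) for n in range(3, 15)]
sizes = [B_size(n, beta) for n in range(3, 15)]

assert all(x > y for x, y in zip(lowers, lowers[1:]))   # A_n expanding (Theorem 1)
assert all(x > y for x, y in zip(sizes, sizes[1:]))     # ν_n shrinking (Theorem 2)
```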

4. Conclusions

In this paper we have developed and pursued several ideas. The first is that even a small amount of freedom on the part of Monty begets equilibria that differ from the canonical “always switch” solution. Moreover, given this freedom, Monty’s incentives and preferences matter, and affect optimal play.
One of the characteristics of a mixed strategy equilibrium in games in which agents have two pure strategies is that each player’s mixing strategy leaves the other player (or players) indifferent between their two pure strategies. Thus, in a sense, the equiprobability bias may not be as irrational as it seems. When the agent is confronted with a strategic adversary, and the decision problem becomes a game, any mixed strategy in equilibrium yields the agent the exact same expected reward. Indeed, the frequent remarks in the literature by subjects confronted with this problem coincide exactly with this equality of reward in expectation.
Of course, that is not to say that the participants in these experiments are reasoning correctly. In the variations posed there is almost always at least an implicit restriction requiring Monty to reveal a goat (and often an explicit one). In such phrasings of the problem, regardless of Monty’s motivation, the situation is merely a decision problem on Amy’s part, and the optimal solution is always to switch. However, many decision-making situations encountered by people in social settings, perhaps especially atypical ones (think of a sales situation), are strategic in nature, and mistakes such as the equiprobability heuristic may be indicative of this. That is, so many situations are strategic that decision makers treat every situation as if it were strategic.
Continuing with this line of reasoning, the results established in this paper could also explain findings in the literature examining learning in the Monty Hall dilemma. There have been numerous studies, including [10,11,12,13,16], that show that subjects going through repeated iterations of the Monty Hall problem increase their switching rate over time, as they learn to play the problem optimally. In the model presented in this paper, uncertainty about Monty’s incentives drives the participant Amy to not play the “always switch” strategy. Through repeated play, Monty’s preferences and type become more apparent, and as it becomes more and more clear to Amy that Monty is not malevolent, the strategy of always switching becomes the unique equilibrium. This also agrees with the “Eureka moment” described in Chen and Wang (2010) [26]. There, they find that in the 100-door version of the Monty Hall problem subjects do not play always switch for the first (n − 1) periods of play. Once the subjects have played (n − 1) repetitions of the game they reach an “epiphany” point, after which they play always switch. This is just the behavior that one would expect in a learning model where one player has an unknown type: experimentation until a cutoff belief is reached. One possible future avenue of research would be the further development of this idea by writing a formal model of this scenario, and analyzing the equilibria of that game.
The experimental results in Mueser and Granberg (1999) [20] provide support for the assumptions made above. In their experiment, they differentiate between phrasings of the problem where Monty must always open a door and ones where he need not. Furthermore, they describe explicitly to the subjects the motives of Monty, and categorize him as having either neutral, positive, or negative intent. They find that when Monty is not forced to open a door, his intent has a significant effect on the subjects’ decisions. Moreover, subjects behave as our model predicts they should, and are “...much less likely to switch when the host is attempting to prevent them from winning the prize than when he wishes to help them.” ([20]). Extending this idea, another avenue of research would be the examination of learning in a scenario similar to that in [20], one where the subjects are aware of Monty’s motives.
Finally, one possible alternative explanation for the results in Page (1998) [24] is that in the 100-door scenario it becomes more obvious to the subject that information was revealed, for instance, to a subject using the “Principle of Restricted Choice”10 [27]. However, this explanation does not satisfactorily explain why subjects are not able to transfer their knowledge from the 100-door problem to the 3-door problem, as detailed in [24]. The model constructed in this paper is able to explain that occurrence: in the three-door case, Monty’s motives become relatively more important. Having more doors makes an unknown piece of information (Monty’s disposition) less relevant, and thereby clarifies the situation for Amy. It should be possible to design an experiment that highlights to the subjects Monty’s motives and level of freedom, and in that context examine how their decision making is affected by increasing the number of doors.

Acknowledgments

This paper has greatly benefited from comments from Rosemary Hopcroft, Joseph Whitmeyer, and three referees.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A.

Appendix A.1. Lemma 1 Proof

Proof. 
First we look for pure strategy BNE. The possible candidates for pure strategy BNE are (s, (h, r)) and (k, (r, r)). To establish that these are equilibria we check that Amy does not deviate profitably. That is, we need to check that Eu A (s, (h, r)) ≥ Eu A (k, (h, r)) and Eu A (k, (r, r)) ≥ Eu A (s, (r, r)).
$$Eu_A(s, (h,r)) \geq Eu_A(k, (h,r)) \iff 1 \geq \frac{1}{3}$$
$$Eu_A(k, (r,r)) \geq Eu_A(s, (r,r)) \iff \frac{1}{3} \geq \frac{2}{3}$$
We see that ( s , ( h , r ) ) is an equilibrium, but ( k , ( r , r ) ) is not.
From Lemma 4, using p = 1 , it is clear that there is no mixed strategy BNE. ☐
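The expected-utility comparisons in this proof (and in the proof of Lemma 2 below) can be reproduced by enumerating the move of nature, normalizing Amy's payoff to her probability of winning the car. A minimal sketch (function names are ours):

```python
from fractions import Fraction as F

def amy_win_prob(amy_switches: bool, monty: tuple) -> F:
    """Amy's winning probability given her pure strategy and Monty's
    type-contingent strategy monty = (action if θ1, action if θ2),
    actions in {'r', 'h'}.  If Monty hides ('h'), Amy must keep."""
    win = F(0)
    for prob, correct in ((F(1, 3), True), (F(2, 3), False)):
        action = monty[0] if correct else monty[1]
        if action == 'h' or not amy_switches:
            win += prob * correct           # Amy keeps her original door
        else:
            win += prob * (not correct)     # Amy switches
    return win

# Sympathetic case: (s, (h, r)) is an equilibrium, (k, (r, r)) is not.
assert amy_win_prob(True, ('h', 'r')) == 1          # Eu_A(s, (h, r))
assert amy_win_prob(False, ('h', 'r')) == F(1, 3)   # deviation to k is worse
assert amy_win_prob(False, ('r', 'r')) == F(1, 3)   # Eu_A(k, (r, r))
assert amy_win_prob(True, ('r', 'r')) == F(2, 3)    # profitable deviation to s
```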

Appendix A.2. Lemma 2 Proof

Proof. 
First we look for pure strategy BNE. The possible candidates for pure strategy BNE are (s, (r, h)) and (k, (r, r)). To establish that these are equilibria we check that Amy does not deviate profitably. That is, we need to check that Eu A (s, (r, h)) ≥ Eu A (k, (r, h)) and Eu A (k, (r, r)) ≥ Eu A (s, (r, r)).
$$Eu_A(s, (r,h)) \geq Eu_A(k, (r,h)) \iff 0 \geq \frac{1}{3}$$
$$Eu_A(k, (r,r)) \geq Eu_A(s, (r,r)) \iff \frac{1}{3} \geq \frac{2}{3}$$
We see that neither ( s , ( r , h ) ) nor ( k , ( r , r ) ) are equilibria.
From Lemma 4, using p = 0 , it is clear that the unique mixed strategy BNE is
$$\left( \Pr{}_A(s) = \tfrac{1}{2}, \; \left( r, \; \Pr{}_M(r) = \tfrac{1}{2} \right) \right)$$
 ☐

Appendix A.3. Corollary 1 Proof

Proof. 
First, we rewrite Equation (16):
$$\alpha = \frac{np + (n-1)\beta - (n-1)p\beta - 1}{p} \qquad \beta = \frac{1 + p\alpha - np}{(n-1)(1-p)}$$
Since α and β are probabilities, we know that α ∈ [0, 1] and β ∈ [0, 1]. Thus, we may examine each bound in turn:
$$\beta \geq 0 \iff \frac{1 + p\alpha - np}{(n-1)(1-p)} \geq 0 \iff 1 + p\alpha - np \geq 0 \iff \frac{1}{n - \alpha} \geq p$$
$$\beta \leq 1 \iff \frac{1 + p\alpha - np}{(n-1)(1-p)} \leq 1 \iff 1 + p\alpha - np \leq (n-1)(1-p) \iff p(\alpha - 1) \leq n - 2$$
$$\alpha \geq 0 \iff \frac{np + (n-1)\beta - (n-1)p\beta - 1}{p} \geq 0 \iff np + (n-1)\beta - (n-1)p\beta - 1 \geq 0 \iff p \geq \frac{1 - (n-1)\beta}{n - (n-1)\beta}$$
$$\alpha \leq 1 \iff \frac{np + (n-1)\beta - (n-1)p\beta - 1}{p} \leq 1 \iff np + (n-1)\beta - (n-1)p\beta - 1 \leq p \iff p \leq \frac{1 - (n-1)\beta}{(n-1) - (n-1)\beta}$$
Evidently, (17),
$$\frac{1 - (n-1)\beta}{n - (n-1)\beta} \leq p \leq \frac{1 - (n-1)\beta}{(n-1) - (n-1)\beta}$$
holds if and only if β ≤ 1/(n − 1). ☐

Appendix A.4. Theorem 2 Proof

Proof. 
It is sufficient to show that
$$b'_n < a'_n$$
where b′ n and a′ n denote the derivatives of the endpoints of B n with respect to n. Suppose for the sake of contradiction that
$$b'_n > a'_n$$
which holds if and only if
$$\frac{1}{(\beta - 1)(n-1)^2} > -\frac{1}{((\beta - 1)n - \beta)^2} \iff \beta^2 (n-1)^2 - \beta (n-1)(n+1) + 2n - 1 < 0$$
Denote the left-hand side of this expression by φ = φ(β, n). Then ∂φ/∂β < 0, and so φ is strictly decreasing in β. Since by Corollary 1 β is bounded above by 1/(n − 1), φ ≥ φ(β = 1/(n − 1)) for all permissible β.
We evaluate φ at β = 1/(n − 1) and obtain
$$1 - (n+1) + 2n - 1 < 0 \iff n - 1 < 0$$
We have obtained a contradiction and have proved our result. ☐
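The final step, that φ evaluated at the upper bound β = 1/(n − 1) reduces to n − 1 > 0 and so contradicts φ < 0, can be checked with exact rationals:

```python
from fractions import Fraction as F

def phi(beta: F, n: int) -> F:
    # φ(β, n) = β²(n−1)² − β(n−1)(n+1) + 2n − 1, from the proof above
    return beta ** 2 * (n - 1) ** 2 - beta * (n - 1) * (n + 1) + 2 * n - 1

# At β = 1/(n−1):  1 − (n+1) + 2n − 1 = n − 1 > 0 for every n ≥ 3
for n in range(3, 20):
    assert phi(F(1, n - 1), n) == n - 1
```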

Appendix B. Generalization of the Model

The purpose of this appendix is to generalize the model formulated earlier in the paper, and to show that the results established earlier still hold in this more general setting.
We generalize the model given in Section 3 in the following manner. Endow Monty with a continuum of types, given by the random variable M. Then, the game’s payoffs can be formulated as:
[Payoff table omitted here; it appears as an image in the original.]
where, as before, θ 1 and θ 2 are realizations of the random variable θ ; and where the parameters, ϵ , η , and ξ , are random variables representing the possible variations in Monty’s preferences:
$$\epsilon \in \{0, 0.5\}, \qquad \eta \in [\epsilon, N_0], \qquad \xi \in [-N_1, N_2]$$
where $N_0$, $N_1$, and $N_2$ are positive integers. The parameter values have the following intuitive meanings: ϵ is the “Revelation Bonus”. It is the additional utility that Monty gets from allowing Amy the freedom to switch her choice. Note that ϵ is a discrete (binary) random variable, while the other random variables are continuous. This allows for there to be a positive measure of types of Monty indifferent between choosing r or h (given a choice of k by Amy). η (note $\eta \geq \epsilon$) is the “Revelation and Switch Bonus”. It is the additional utility that Monty gets from Amy switching. Naturally, it must be weakly greater than the revelation bonus, since Amy must be allowed to switch before she can switch. The last parameter, ξ, quantifies Monty’s attitude towards Amy: if ξ > 1, Monty is antipathetic; if ξ < 1, Monty is sympathetic.
Amy’s payoffs are the same as before, and the same across all states. We examine Monty’s best responses. There are critical values for the parameters:
$$\epsilon = 0, \qquad \eta + \xi = 1, \qquad 1 + \eta = \xi$$
For example, suppose $\epsilon = \xi = 0$. Then, given that Amy’s initial guess is correct ($\theta = \theta_1$), Monty’s best response to switch is h if $\eta \leq 1$, and his best response to switch is r if $\eta \geq 1$. Based on Monty’s best responses, these critical values lead to a partition of the parameter space into six regions, with corresponding probabilities that sum to 1. The probabilities $p_i$, $i \in \{1, \ldots, 6\}$, with $\sum_{i=1}^{6} p_i = 1$, and their corresponding regions are:
$$\begin{aligned}
p_1 &: \epsilon = 0, \; \eta + \xi < 1, \; 1 + \eta > \xi \\
p_2 &: \epsilon = 0, \; \eta + \xi > 1, \; 1 + \eta > \xi \\
p_3 &: \epsilon = 0, \; \eta + \xi > 1, \; 1 + \eta < \xi \\
p_4 &: \epsilon = 0.5, \; \eta + \xi < 1, \; 1 + \eta > \xi \\
p_5 &: \epsilon = 0.5, \; \eta + \xi > 1, \; 1 + \eta > \xi \\
p_6 &: \epsilon = 0.5, \; \eta + \xi > 1, \; 1 + \eta < \xi
\end{aligned}$$
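The six regions can be illustrated with a small sampler. The code below is an illustration, not part of the paper: it draws Monty types under the assumed supports N0 = N1 = N2 = 2, classifies each draw by the (η, ξ) conditions, and checks that, up to boundary events of measure zero, every type falls in exactly one region, so the region probabilities sum to 1:

```python
import random

random.seed(0)
N0, N1, N2 = 2, 2, 2          # assumed supports for eta and xi
trials = 100_000
counts = [0] * 6              # empirical masses of p_1, ..., p_6
for _ in range(trials):
    eps = random.choice([0.0, 0.5])
    eta = random.uniform(eps, N0)
    xi = random.uniform(-N1, N2)
    if eta + xi < 1 and 1 + eta > xi:
        region = 0            # p_1 / p_4 conditions
    elif 1 + eta > xi:
        region = 1            # p_2 / p_5 conditions
    else:
        region = 2            # p_3 / p_6 conditions
    counts[region + (3 if eps == 0.5 else 0)] += 1

assert sum(counts) == trials          # the regions partition the space
assert all(c > 0 for c in counts)     # each region has positive measure
print([round(c / trials, 3) for c in counts])
```

Note that, since η ≥ 0, the conditions η + ξ < 1 and 1 + η < ξ cannot hold simultaneously, which is why the three-way classification above is exhaustive.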
Figure A1 illustrates Monty’s Best Responses for each of his types, where the shaded region is Monty’s best response:
Figure A1. Best Responses of Monty Types.

Appendix B.1. Pure Strategy Equilibria

First, denote by P the distribution on Monty’s type M generated by the parameter distributions. We write the following lemmata:
Lemma A1.
$\left( s, (h,r;\ r,r;\ r,h;\ h,r;\ r,r;\ r,h) \right)$ is$^{11}$ a BNE for a distribution of priors $p_i$, $i \in \{1, \ldots, 6\}$, that satisfies
$$(p_1 + p_4) + \frac{n-2}{n}\,(p_2 + p_5 + 1) - (p_3 + p_6) \geq 0$$
Derivation of this lemma is easy: simply impose
$$E_P u\big(s, (h,r;\ r,r;\ r,h;\ h,r;\ r,r;\ r,h)\big) \geq E_P u\big(k, (h,r;\ r,r;\ r,h;\ h,r;\ r,r;\ r,h)\big)$$
Using inequality (A17), define ϕ as
$$\phi = (p_1 + p_4) + \frac{n-2}{n}\,(p_2 + p_5 + 1) - (p_3 + p_6)$$
We take the derivative of ϕ with respect to n and obtain
$$\frac{\partial \phi}{\partial n} = \frac{2}{n^2}\,(p_2 + p_5 + 1) > 0$$
That is, ϕ is strictly increasing in n. Moreover, we may also examine the behavior of ϕ in the limit:
$$\lim_{n \to \infty} \phi = (p_1 + p_2 + p_4 + p_5 + 1) - (p_3 + p_6) > 0$$
Define the set $X_n$ as the set of priors under which, for a given situation with n doors, there is a pure strategy BNE where Amy switches. Given this definition and Equations (A20) and (A21), we may write the following theorem:
Theorem A1.
As the number of doors increases, the set of priors that support a BNE where Amy switches is monotonic in the following sense:
$$X_n \subseteq X_{n+1}$$
Moreover, in the limit, as the number of doors goes to infinity, any probability supports a BNE where Amy switches.
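Theorem A1 can be illustrated numerically. Using the expression from Lemma A1, ϕ(n) = (p1 + p4) + ((n − 2)/n)(p2 + p5 + 1) − (p3 + p6), the sketch below (illustrative only, not part of the paper) confirms for random priors that ϕ is strictly increasing in n, so a prior that supports the switching BNE at n doors also supports it at n + 1:

```python
import random

def phi(p, n):
    """Expression from Lemma A1: (p1+p4) + ((n-2)/n)(p2+p5+1) - (p3+p6)."""
    return (p[0] + p[3]) + (n - 2) / n * (p[1] + p[4] + 1) - (p[2] + p[5])

random.seed(1)
for _ in range(1000):
    w = [random.random() for _ in range(6)]
    p = [x / sum(w) for x in w]           # a random prior over the 6 types
    vals = [phi(p, n) for n in range(3, 60)]
    assert all(a < b for a, b in zip(vals, vals[1:]))   # increasing in n
    # Hence phi(n) >= 0 implies phi(n + 1) >= 0: X_n is nested in X_{n+1}.
    assert all(vals[i + 1] >= 0 for i, v in enumerate(vals[:-1]) if v >= 0)
```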
This theorem shows that the result established in Theorem 1 still holds in this more general setting. Our extended setting also illustrates that for small values of n, there are pure strategy equilibria where Amy does not switch. We can derive the following lemma:
Lemma A2.
There is no pure strategy BNE where Amy plays keep for a distribution of priors $p_i$, $i \in \{1, \ldots, 6\}$, that satisfies
$$\frac{n-2}{n}\,(1 + p_4 + p_5 + p_6) > p_1 + p_2 + p_3$$
Proof. 
First, note that if $\left( k, (r,h;\ r,h;\ r,h;\ r,r;\ r,r;\ r,r) \right)$ is not an equilibrium, then there is no pure strategy BNE where Amy plays k.
Thus, to prove our lemma it is sufficient to establish the distribution of priors under which $\left( k, (r,h;\ r,h;\ r,h;\ r,r;\ r,r;\ r,r) \right)$ is not an equilibrium. To do this, simply impose
$$E_P u\big(k, (r,h;\ r,h;\ r,h;\ r,r;\ r,r;\ r,r)\big) < E_P u\big(s, (r,h;\ r,h;\ r,h;\ r,r;\ r,r;\ r,r)\big)$$
 ☐
Using inequality (A23), define φ as
$$\varphi = \frac{n-2}{n}\,(1 + p_4 + p_5 + p_6) - (p_1 + p_2 + p_3)$$
We take the derivative of φ with respect to n and obtain
$$\frac{\partial \varphi}{\partial n} = \frac{2}{n^2}\,(1 + p_4 + p_5 + p_6) > 0$$
That is, φ is strictly increasing in n. Moreover, we may also examine the behavior of φ in the limit:
$$\lim_{n \to \infty} \varphi = 1 + p_4 + p_5 + p_6 - (p_1 + p_2 + p_3) > 0$$
Thus, from the monotonicity of φ (Equation (A26)) and the behavior of φ in the limit (Equation (A27)), we may write the following theorem:
Theorem A2.
There is a finite number $N \in \mathbb{N}$ such that, if $n > N$, there is no pure strategy BNE where Amy plays keep.
This result is also supported by the experimental results in Page (1998) [24]: there is a threshold number of doors beyond which there is no pure strategy equilibrium where Amy plays keep.
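The threshold in Theorem A2 can be computed directly for any prior with $p_4 + p_5 + p_6 > 0$ (otherwise φ never turns positive and no finite threshold exists). The helper below is an illustrative sketch, not the paper's construction:

```python
def threshold(p):
    """Smallest n with ((n-2)/n)(1 + p4 + p5 + p6) > p1 + p2 + p3.

    Requires p4 + p5 + p6 > 0; otherwise the loop would not terminate.
    """
    n = 3
    while (n - 2) / n * (1 + p[3] + p[4] + p[5]) - (p[0] + p[1] + p[2]) <= 0:
        n += 1
    return n

# Example prior: beyond 6 doors, "always keep" cannot be part of a BNE.
print(threshold([0.25, 0.25, 0.25, 0.25, 0.0, 0.0]))  # 6
```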

Appendix B.2. Mixed Strategy Equilibria

It is clear that the types of Monty that play a mixed strategy as a best response to a mixed strategy of Amy will be members of the populations $p_4$ and $p_6$, as can be seen from Figure A1. Let $p_4$ and $p_6$ be nonzero, and suppose that there is a mixed strategy BNE where Amy plays s with probability γ. Then, there are measures of Monty, $\hat{p}_4$ and $\hat{p}_6$, who best respond to Amy’s mixed strategy with the mixed strategies $\Pr(r) = \alpha$ and $\Pr(r) = \beta$, respectively. Denote by $\tilde{p}_4$ and $\tilde{p}_6$ the respective submeasures of $p_4$ and $p_6$ that best respond to Amy’s mixed strategy γ with r. Similarly, denote by $\bar{p}_4$ and $\bar{p}_6$ the respective submeasures of $p_4$ and $p_6$ that best respond to Amy’s mixed strategy with h. Naturally,
$$\hat{p}_4 + \tilde{p}_4 + \bar{p}_4 = p_4$$
$$\hat{p}_6 + \tilde{p}_6 + \bar{p}_6 = p_6$$
Henceforth, suppose that $\hat{p}_4$ and $\hat{p}_6$ are nonzero.
Lemma A3.
There is a set of mixed strategy BNE given by:
$$\left( \Pr{}_A(s) = \gamma,\ (h,r;\ r,r;\ r,h;\ \alpha,r;\ r,r;\ r,\beta) \right)$$
where a measure $\hat{p}_4$ of Monty plays $\Pr(r) = \alpha$ and a measure $\hat{p}_6$ of Monty plays $\Pr(r) = \beta$. This equation defines a BNE for all acceptable α, β (i.e., $\alpha, \beta \in [0,1]$) satisfying
$$n(p_1 - p_3) + (n-2)(p_2 + p_5) + (1 - 2\alpha)\hat{p}_4 + \bar{p}_4 - \tilde{p}_4 - p_6 + (n-1)\big[p_4 + (2\beta - 1)\hat{p}_6 + \tilde{p}_6 - \bar{p}_6\big] + n - 2 = 0$$
Define Y as
$$Y = n(p_1 - p_3) + (n-2)(p_2 + p_5) + \bar{p}_4 - \tilde{p}_4 + (n-1)(\tilde{p}_6 - \bar{p}_6)$$
Then,
$$\alpha = \frac{Y + (n-2) + 2(n-1)\beta\hat{p}_6 + \hat{p}_4 - p_6 + (n-1)(p_4 - \hat{p}_6)}{2\hat{p}_4}, \qquad \beta = \frac{2\alpha\hat{p}_4 - Y - (n-2) + p_6 - \hat{p}_4 + (n-1)(\hat{p}_6 - p_4)}{2(n-1)\hat{p}_6}$$
Since α and β are probability distributions, they must fall in the interval [ 0 , 1 ] . We can use this to generate the following relations:
$$\begin{aligned}
\alpha \geq 0 &\iff Y + (n-2) + 2(n-1)\beta\hat{p}_6 + \hat{p}_4 - p_6 + (n-1)(p_4 - \hat{p}_6) \geq 0 \\
\alpha \leq 1 &\iff Y + (n-2) + 2(n-1)\beta\hat{p}_6 + \hat{p}_4 - p_6 + (n-1)(p_4 - \hat{p}_6) \leq 2\hat{p}_4 \\
\beta \geq 0 &\iff 2\alpha\hat{p}_4 - Y - (n-2) + p_6 - \hat{p}_4 + (n-1)(\hat{p}_6 - p_4) \geq 0 \\
\beta \leq 1 &\iff 2\alpha\hat{p}_4 - Y - (n-2) + p_6 - \hat{p}_4 + (n-1)(\hat{p}_6 - p_4) \leq 2(n-1)\hat{p}_6
\end{aligned}$$
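As a numerical consistency check (an illustrative sketch, not part of the paper; `ph`, `pt`, and `pb` name the hat, tilde, and bar submeasures of populations 4 and 6), one can confirm that the α solving Equation (A30) satisfies the indifference condition in Equation (A29) identically, for arbitrary positive parameter values:

```python
import random

def residual(n, ps, beta):
    """|LHS of (A29)| when alpha is taken from the displayed solution.

    ps = (p1, p2, p3, p5, ph4, pt4, pb4, ph6, pt6, pb6), where ph/pt/pb
    denote the hat, tilde, and bar submeasures of populations 4 and 6.
    """
    p1, p2, p3, p5, ph4, pt4, pb4, ph6, pt6, pb6 = ps
    p4, p6 = ph4 + pt4 + pb4, ph6 + pt6 + pb6
    Y = n * (p1 - p3) + (n - 2) * (p2 + p5) + pb4 - pt4 + (n - 1) * (pt6 - pb6)
    alpha = (Y + (n - 2) + 2 * (n - 1) * beta * ph6 + ph4 - p6
             + (n - 1) * (p4 - ph6)) / (2 * ph4)
    lhs = (n * (p1 - p3) + (n - 2) * (p2 + p5) + (1 - 2 * alpha) * ph4
           + pb4 - pt4 - p6
           + (n - 1) * (p4 + (2 * beta - 1) * ph6 + pt6 - pb6) + n - 2)
    return abs(lhs)

random.seed(2)
for _ in range(200):
    n = random.randint(3, 12)
    ps = [random.uniform(0.05, 1.0) for _ in range(10)]
    assert residual(n, ps, random.random()) < 1e-9
```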
Observe that there are 10 parameters$^{12}$ of interest: $(p_1, p_2, p_3, \hat{p}_4, \tilde{p}_4, \bar{p}_4, p_5, \hat{p}_6, \tilde{p}_6, \bar{p}_6)$. The inequalities in Equation (A31) are satisfied only when each probability lies within a specific interval; denote these intervals $B_n^i$, $i = 1, \ldots, 10$. For each $B_n^i$, the difference between its endpoints is the natural Lebesgue measure of the interval.
We can construct the subset, $\Psi_n$, of the 10-dimensional probability space as the product of the 10 $B_n^i$’s,
$$\Psi_n = \times_i B_n^i$$
and extend the product of the 10 respective Lebesgue measures on the intervals to the Lebesgue measure on the product space, and denote this measure of $\Psi_n$ by $\zeta_n$. We write the following theorem:
Theorem A3.
There is a finite threshold number $K \in \mathbb{N}$ such that, for $n > K$, $B_{n+1}^{i} \subset B_n^{i}$ for each i.
Naturally, for $n > K$, $\zeta_n$ is strictly decreasing in n.
Corollary A1.
As n goes to infinity, the measure $\zeta_n$ on the set $\Psi_n$ goes to 0.
Proof of Theorem A3 and Corollary A1 may be found in Appendix B.3.

Appendix B.3. Theorem A3 Proof

In order to satisfy the inequalities in Equation (A31), the measure $p_3$ must satisfy
$$\alpha \geq 0 \iff Y + (n-2) + 2(n-1)\beta\hat{p}_6 + \hat{p}_4 - p_6 + (n-1)(p_4 - \hat{p}_6) \geq 0 \iff p_3 \leq \frac{\hat{Y} + (n-2) + 2(n-1)\beta\hat{p}_6 + \hat{p}_4 - p_6 + (n-1)(p_4 - \hat{p}_6)}{n}$$
where
$$\hat{Y} = Y + n p_3$$
$$\alpha \leq 1 \iff Y + (n-2) + 2(n-1)\beta\hat{p}_6 + \hat{p}_4 - p_6 + (n-1)(p_4 - \hat{p}_6) \leq 2\hat{p}_4 \iff p_3 \geq \frac{\hat{Y} + (n-2) + 2(n-1)\beta\hat{p}_6 + \hat{p}_4 - p_6 + (n-1)(p_4 - \hat{p}_6) - 2\hat{p}_4}{n}$$
$$\beta \geq 0 \iff 2\alpha\hat{p}_4 - Y - (n-2) + p_6 - \hat{p}_4 + (n-1)(\hat{p}_6 - p_4) \geq 0 \iff p_3 \geq \frac{-2\alpha\hat{p}_4 + \hat{Y} + (n-2) - p_6 + \hat{p}_4 - (n-1)(\hat{p}_6 - p_4)}{n}$$
$$\beta \leq 1 \iff 2\alpha\hat{p}_4 - Y - (n-2) + p_6 - \hat{p}_4 + (n-1)(\hat{p}_6 - p_4) \leq 2(n-1)\hat{p}_6 \iff p_3 \leq \frac{-2\alpha\hat{p}_4 + \hat{Y} + (n-2) - p_6 + \hat{p}_4 - (n-1)(\hat{p}_6 - p_4) + 2(n-1)\hat{p}_6}{n}$$
We write the following lemma:
Lemma A4.
There is a finite threshold number $M \in \mathbb{N}$ such that, for $n > M$, constraints (A33) and (A35) bind.
Proof. 
We first prove the lemma for constraint (A33): subtract the right-hand side of inequality (A33) from the right-hand side of inequality (A37) and obtain
$$\iota = \frac{2(n-1)(1-\beta)\hat{p}_6 - 2\alpha\hat{p}_4}{n}$$
Take the derivative of ι with respect to n, yielding
$$\frac{\partial \iota}{\partial n} = \frac{2(1-\beta)\hat{p}_6 + 2\alpha\hat{p}_4}{n^2} \geq 0$$
Likewise, take the limit of ι as n goes to infinity
$$\lim_{n \to \infty} \iota = 2(1-\beta)\hat{p}_6 \geq 0$$
The proof for constraint (A35) follows analogously: subtract the right-hand side of inequality (A36) from the right-hand side of inequality (A35) and obtain
$$\omega = \frac{2(n-1)\beta\hat{p}_6 - 2(1-\alpha)\hat{p}_4}{n}$$
Take the derivative of ω with respect to n, yielding
$$\frac{\partial \omega}{\partial n} = \frac{2\beta\hat{p}_6 + 2(1-\alpha)\hat{p}_4}{n^2} \geq 0$$
Likewise, take the limit of ω as n goes to infinity
$$\lim_{n \to \infty} \omega = 2\beta\hat{p}_6 \geq 0$$
 ☐
Suppose $n > K$. Then, $B_n^3 = [a_3, b_3]$, where
$$a_3 = \frac{\hat{Y} + (n-2) + 2(n-1)\beta\hat{p}_6 + \hat{p}_4 - p_6 + (n-1)(p_4 - \hat{p}_6) - 2\hat{p}_4}{n}, \qquad b_3 = \frac{\hat{Y} + (n-2) + 2(n-1)\beta\hat{p}_6 + \hat{p}_4 - p_6 + (n-1)(p_4 - \hat{p}_6)}{n}$$
Define $\nu_n^3$ as
$$\nu_n^3 = |b_3 - a_3| = \frac{2\hat{p}_4}{n}$$
The derivative of $\nu_n^3$ with respect to n is
$$\frac{\partial \nu_n^3}{\partial n} = -\frac{2\hat{p}_4}{n^2} \leq 0$$
The interval of permissible $p_3$ is shrinking monotonically as n increases. Moreover, examination of $\nu_n^3$ in the limit illustrates that the interval of permissible $p_3$ shrinks to a set with Lebesgue measure 0 (a point) as n goes to infinity.
The analogous argument holds for $p_1$, $p_2$, $p_5$, $\tilde{p}_4$, $\bar{p}_4$, $\tilde{p}_6$, and $\bar{p}_6$; consequently, it remains to examine $\hat{p}_4$ and $\hat{p}_6$. First, we rewrite Equation (A30) as
$$\alpha = \frac{Y + (n-2) + n\hat{p}_4 - \big[(n-1)(1-2\beta) + 1\big]\hat{p}_6 - (\tilde{p}_6 + \bar{p}_6) + (n-1)(\tilde{p}_4 + \bar{p}_4)}{2\hat{p}_4}, \qquad \beta = \frac{(2\alpha - n)\hat{p}_4 + n\hat{p}_6 - Y - (n-2) + \tilde{p}_6 + \bar{p}_6 - (n-1)(\tilde{p}_4 + \bar{p}_4)}{2(n-1)\hat{p}_6}$$
As before, impose $\alpha \in [0,1]$ and $\beta \in [0,1]$. From the α term, after some calculations, we see that $\hat{p}_4$ must lie in the following interval:
$$\frac{K}{n} \leq \hat{p}_4 \leq \frac{K}{n-2}$$
where
$$K = \big[(n-1)(1-2\beta) + 1\big]\hat{p}_6 - Y - (n-2) + \tilde{p}_6 + \bar{p}_6 - (n-1)(\tilde{p}_4 + \bar{p}_4)$$
Similarly, from the β term, after some calculations, we obtain that $\hat{p}_4$ must lie in the following interval:
$$\frac{U - 2(n-1)\hat{p}_6}{n - 2\alpha} \leq \hat{p}_4 \leq \frac{U}{n - 2\alpha}$$
where
$$U = n\hat{p}_6 - Y - (n-2) + \tilde{p}_6 + \bar{p}_6 - (n-1)(\tilde{p}_4 + \bar{p}_4)$$
Evidently, constraint (A48) binds for all n greater than some finite natural number N, and thus we have obtained our result for $\hat{p}_4$.
Finally, we turn our attention to $\hat{p}_6$. From the α term, after some calculations, we conclude that $\hat{p}_6$ must lie in the following interval:
$$\frac{\hat{K} - 2\hat{p}_4}{(n-1)(1-2\beta) + 1} \leq \hat{p}_6 \leq \frac{\hat{K}}{(n-1)(1-2\beta) + 1}$$
where
$$\hat{K} = Y + (n-2) + n\hat{p}_4 - (\tilde{p}_6 + \bar{p}_6) + (n-1)(\tilde{p}_4 + \bar{p}_4)$$
Clearly, if $\beta \neq \frac{1}{2}$ in the limit, we have obtained our result for $\hat{p}_6$. We present the following lemma:
Lemma A5.
In the limit, as n goes to infinity, $\beta \neq \frac{1}{2}$.
Proof. 
Recall, from Equation (A47), we have
$$\beta = \frac{(2\alpha - n)\hat{p}_4 + n\hat{p}_6 - Y - (n-2) + \tilde{p}_6 + \bar{p}_6 - (n-1)(\tilde{p}_4 + \bar{p}_4)}{2(n-1)\hat{p}_6}$$
Then,
$$\lim_{n \to \infty} \beta = \frac{\hat{p}_6 - \hat{p}_4 - 1 - \tilde{p}_4 - \bar{p}_4 - p_1 + p_3 - p_2 - p_5 - \tilde{p}_6 + \bar{p}_6}{2\hat{p}_6} = \frac{1}{2} + \frac{-\hat{p}_4 - 1 - \tilde{p}_4 - \bar{p}_4 - p_1 + p_3 - p_2 - p_5 - \tilde{p}_6 + \bar{p}_6}{2\hat{p}_6}$$
Suppose, for the sake of contradiction, that $\lim_{n \to \infty} \beta = \frac{1}{2}$. Then,
$$-p_1 + p_3 - p_2 - p_5 - \tilde{p}_6 + \bar{p}_6 = 1 + \hat{p}_4 + \tilde{p}_4 + \bar{p}_4$$
which holds only if
$$p_4 = 0$$
This violates our initial assumption that $p_4 > 0$, and so we may conclude that $\lim_{n \to \infty} \beta \neq \frac{1}{2}$. ☐
We have proved Theorem A3.

References

  1. Selvin, S. A problem in probability. Am. Stat. 1975, 29, 67. [Google Scholar]
  2. Selvin, S. On the Monty Hall problem. Am. Stat. 1975, 29, 134. [Google Scholar]
  3. Nalebuff, B. Puzzles. J. Econ. Perspect. 1987, 1, 157–163. [Google Scholar] [CrossRef]
  4. Gardner, M. The 2nd Scientific American Book of Mathematical Puzzles and Diversions; Simon and Schuster: New York, NY, USA, 1961. [Google Scholar]
  5. Vos Savant, M. Ask Marilyn. Parade Magazine, 9 September 1990; 15. [Google Scholar]
  6. Vos Savant, M. Ask Marilyn. Parade Magazine, 2 December 1990; 25. [Google Scholar]
  7. Vos Savant, M. Marilyn vos Savant’s reply. Am. Stat. 1991, 45, 347–348. [Google Scholar]
  8. Tierney, J. Behind Monty Hall’s doors: Puzzle, debate and answer. The New York Times, 21 July 1991; 1. [Google Scholar]
  9. Franco-Watkins, A.M.; Derks, P.L.; Dougherty, M.R.P. Reasoning in the Monty Hall problem: Examining choice behaviour and probability judgements. Think. Reason. 2003, 9, 67–90. [Google Scholar] [CrossRef]
  10. Friedman, D. Monty Hall’s three doors: Construction and deconstruction of choice anomaly. Am. Econ. Rev. 1998, 88, 933–946. [Google Scholar]
  11. Petrocelli, J.V.; Harris, A.K. Learning inhibition in the Monty Hall Problem: The role of dysfunctional counterfactual prescriptions. Personal. Soc. Psychol. Bull. 2011, 37, 1297–1311. [Google Scholar] [CrossRef] [PubMed]
  12. Saenen, L.; Heyvaert, M.; van Dooren, W.; Onghena, P. Inhibitory control in a notorious brain teaser: The Monty Hall dilemma. Think. Reason. 2015, 21, 176–192. [Google Scholar] [CrossRef]
  13. Saenen, L.; van Dooren, W.; Onghena, P. A randomized Monty Hall experiment: The positive effect of conditional frequency feedback. ZDM Math. Educ. 2015, 47, 837. [Google Scholar] [CrossRef]
  14. Tubau, E.; Aguilar-Lleyda, D.; Johnson, E.D. Reasoning and choice in the Monty Hall Dilemma (MHD): Implications for improving Bayesian reasoning. Front. Psychol. 2015, 6, 353. [Google Scholar] [CrossRef] [PubMed][Green Version]
  15. Lecoutre, M.P. Cognitive models and problem spaces in ‘purely random’ situations. Educ. Stud. Math. 1992, 23, 557–568. [Google Scholar] [CrossRef]
  16. Granberg, D.; Dorr, N. Further exploration of two stage decision making in the Monty Hall dilemma. Am. J. Psychol. 1998, 111, 561–579. [Google Scholar] [CrossRef]
  17. Herbranson, W.T.; Schroeder, J. Are birds smarter than mathematicians? Pigeons (Columba livia) perform optimally on a version of the Monty Hall Dilemma. J. Comp. Psychol. 2010, 12, 1–13. [Google Scholar] [CrossRef] [PubMed]
  18. Granberg, D.; Brown, T.A. The Monty Hall dilemma. Personal. Soc. Psychol. Bull. 1995, 21, 711–723. [Google Scholar]
  19. Fernandez, L.; Piron, R. Should she switch? A game-theoretic analysis of the Monty Hall problem. Math. Mag. 1999, 72, 214–217. [Google Scholar] [CrossRef]
  20. Mueser, P.; Granberg, D. The Monty Hall Dilemma Revisited: Understanding the Interaction of Problem Definition and Decision Making; Mimeo, The University of Missouri: Columbia, MO, USA, 1999. [Google Scholar]
  21. Bailey, H. Monty Hall uses a mixed strategy. Math. Mag. 2000, 73, 135–141. [Google Scholar] [CrossRef]
  22. Gill, R.D. The Monty Hall problem is not a probability puzzle (It’s a challenge in mathematical modelling). Stat. Neerl. 2011, 65, 58–71. [Google Scholar] [CrossRef]
  23. Fudenberg, D.; Tirole, J. Game Theory; MIT Press: Cambridge, MA, USA, 1991. [Google Scholar]
  24. Page, S.E. Let’s Make a Deal. Econ. Lett. 1998, 61, 175–180. [Google Scholar] [CrossRef]
  25. Segal, L. Letters to the editor. The New York Times, 16 August 1991. Available online: http://www.nytimes.com/1991/08/11/opinion/l-suppose-you-had-100-doors-with-goats-behind-99-of-them-624691.html (accessed on 25 July 2017).
  26. Chen, W.; Wang, J.T. Epiphany Learning for Bayesian Updating: Overcoming the Generalized Monty Hall Problem; Mimeo, National Taiwan University: Taipei, Taiwan, 2010; Available online: http://homepage.ntu.edu.tw/~josephw/EpiphanyMonty_20101207.pdf (accessed on 2 June 2017).
  27. Miller, J.B.; Sanjurjo, A. A Bridge from Monty Hall to the (Anti-)Hot Hand: Restricted Choice, Selection Bias, and Empirical Practice; Working Paper; IGIER: Milan, Italy, 2015; Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2709837 (accessed on 5 June 2017).
1.
Note that this problem was published as early as 1975 by Selvin (1975) [1,2]; and was also mentioned in Nalebuff (1987) [3], among others. In fact, the Monty Hall dilemma is itself equivalent to the earlier published Three Prisoners problem that Martin Gardner initially presented in 1959 ([4]).
2.
See, for example, Fudenberg and Tirole (1991) [23].
3.
Since a strategy for a player must specify an action at every information set encountered by that player, Amy’s strategy should, strictly speaking, be $S_A = \{s, k\} \times \{k\}$. Our simplification, however, is clearer and does not affect the analysis.
4.
In [21], Bailey references a humorous letter to the New York Times editor [25] by the American author Lore Segal concerning this very assumption. Segal writes, “Your front-page article 21 July on the Monty Hall puzzle controversy neglects to mention one of the behind-the-door options: to prefer the goat to the auto. The goat is a delightful animal, although parking might be a problem”.
5.
Presumably, the tension engendered by Amy’s decision on whether or not to switch is attractive to the audience of the show, and Monty recognizes this.
6.
For derivation of this, see Appendix A.1.
7.
For derivation of this, see Appendix A.2.
8.
For derivation of this, see Appendix A.3.
9.
In [24], Monty always reveals the goats, though it is unclear whether the subjects knew that this was a mandatory action.
10.
A reformulation of Bayes’ Rule in odds form.
11.
Each pair separated by semicolons refers to a type p i ’s strategy. These can be obtained by reading off the top rows of Figure A1.
12.
$p_4$ and $p_6$ need not be listed, as they are fully determined by their respective submeasures, which we have listed. Of course, one of our listed measures is redundant; e.g., we could write $p_1$ in terms of the other measures: $p_1 = 1 - \sum_{i=2}^{6} p_i$.
Figure 1. Case 1 (Sympathetic).
Figure 2. Case 2 (Antipathetic).

© 2017 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).