Article

Successful Nash Equilibrium Agent for a Three-Player Imperfect-Information Game

Sam Ganzfried, Austin Nowak and Joannier Pinales
1 Ganzfried Research, Miami Beach, FL 33139, USA
2 School of Computing and Information Sciences, Florida International University, Miami, FL 33199, USA
* Author to whom correspondence should be addressed.
Games 2018, 9(2), 33; https://doi.org/10.3390/g9020033
Submission received: 13 May 2018 / Revised: 5 June 2018 / Accepted: 6 June 2018 / Published: 8 June 2018

Abstract

Creating strong agents for games with more than two players is a major open problem in AI. Common approaches are based on approximating game-theoretic solution concepts such as Nash equilibrium, which have strong theoretical guarantees in two-player zero-sum games, but no guarantees in non-zero-sum games or in games with more than two players. We describe an agent that is able to defeat a variety of realistic opponents using an exact Nash equilibrium strategy in a three-player imperfect-information game. This shows that, despite a lack of theoretical guarantees, agents based on Nash equilibrium strategies can be successful in multiplayer games after all.

1. Introduction

Nash equilibrium has emerged as the central solution concept in game theory, in large part due to the pioneering PhD thesis of John Nash proving that one always exists in finite games [1]. For two-player zero-sum games (i.e., competitive games where the winnings of one player equal the losses of the other player), the solution concept is particularly compelling, as it coincides with the concept of minimax/maximin strategies developed earlier by John von Neumann [2]. In that work, von Neumann proved that playing such a strategy guarantees a value of the game for the player in the worst case (in expectation) and that the value is the best worst-case guarantee out of all strategies. Essentially, this means that a player can guarantee winning (or at least tying) in the worst case if he/she follows such a strategy and alternates the roles of player 1 and 2. Therefore, for two-player zero-sum games, Nash equilibrium enjoys this “unbeatability” property. This has made it quite a compelling solution concept, and in fact agents based on approximating Nash equilibrium have been very successful and have even been able to defeat the strongest humans in the world in the popular large-scale game of two-player no-limit Texas hold ’em poker [3,4]. Nash equilibrium is additionally compelling for two-player zero-sum games because it can be computed in polynomial time [5].
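As a concrete illustration of the polynomial-time result, a maximin strategy of a two-player zero-sum matrix game can be obtained from a single linear program. The following minimal sketch is ours (not part of the original paper) and uses scipy on a matching-pennies matrix chosen purely as an example:

import numpy as np
from scipy.optimize import linprog

# Row player's payoff matrix; matching pennies as an arbitrary example.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
m, n = A.shape

# Variables: x_1..x_m (mixed strategy) and v (guaranteed value).
# Maximize v subject to (A^T x)_j >= v for every column j, sum(x) = 1, x >= 0.
c = np.zeros(m + 1)
c[-1] = -1.0                                  # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (A^T x)_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]     # x >= 0, v unrestricted

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("maximin strategy:", np.round(res.x[:m], 3), "value:", round(res.x[-1], 3))
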
For non-zero-sum games and games with more than two players, while Nash equilibrium is still guaranteed to exist due to Nash’s result, none of these additional properties hold, as highlighted by the classic Battle of the Sexes Game depicted in Figure 1. This game has three Nash equilibrium strategy profiles: one in which both players select Opera (i.e., (Opera, Opera)), one in which both players select Football (Football, Football), and a mixed equilibrium in which each player selects his or her preferred option with probability 3/5. Clearly, in this game the success of playing a Nash equilibrium depends heavily on the strategy chosen by the other player. For example, if the wife follows her strategy from the first Nash equilibrium and plays Opera, but the husband follows his strategy from the second Nash equilibrium and plays Football, the wife receives the worst possible payoff of zero despite following a Nash equilibrium. While this example is a two-player game, the same phenomenon can occur in games with more than two players (though, as described above, it cannot occur in two-player zero-sum games). Even three-player zero-sum games are not special, as any two-player general-sum game can be converted into a three-player zero-sum game by adding a “dummy” third player whose payoff equals the negative of the sum of the other two players’ payoffs. Furthermore, even if we wanted to compute a Nash equilibrium, the problem has been proven to be PPAD-complete, and it is widely conjectured that no efficient algorithm exists [6], though several heuristic approaches have been developed for strategic-form games (i.e., matrix games such as Battle of the Sexes) with varying degrees of success in different settings [7,8,9,10,11]. There have also been techniques developed that approximate a Nash equilibrium to a provably very small degree of approximation error in a three-player imperfect-information game [12,13].
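To make the multiplicity of equilibria concrete, the short sketch below checks the three equilibrium profiles and the miscoordination payoff described above. The payoff numbers (3 for a player's preferred coordinated outcome, 2 for the other coordinated outcome, 0 for miscoordination) are our assumption, chosen only to be consistent with the 3/5 mixing probability quoted above; the payoffs in Figure 1 may differ:

import itertools

# Assumed Battle of the Sexes payoffs: preferred outcome 3, other coordinated
# outcome 2, miscoordination 0 (consistent with the 3/5 mixing probability).
W = [[3, 0], [0, 2]]   # wife's payoff, indexed [wife action][husband action]
H = [[2, 0], [0, 3]]   # husband's payoff; action 0 = Opera, 1 = Football

def ev(M, p, q):
    # Expected payoff under mixed strategies p = P(wife Opera), q = P(husband Opera).
    return sum(M[i][j] * (p if i == 0 else 1 - p) * (q if j == 0 else 1 - q)
               for i, j in itertools.product(range(2), range(2)))

def is_equilibrium(p, q, eps=1e-9):
    # Neither player can gain by deviating to a pure strategy.
    wife_ok = ev(W, p, q) >= max(ev(W, 1, q), ev(W, 0, q)) - eps
    hus_ok  = ev(H, p, q) >= max(ev(H, p, 1), ev(H, p, 0)) - eps
    return wife_ok and hus_ok

for p, q in [(1, 1), (0, 0), (3/5, 1 - 3/5)]:   # (O,O), (F,F), mixed: 3/5 on preferred
    print((p, q), is_equilibrium(p, q))

# Miscoordination across equilibria: wife follows (O,O), husband follows (F,F).
print("wife payoff under miscoordination:", ev(W, 1, 0))   # 0
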
Thus, the problem of how to create strong agents for non-zero-sum and multiplayer games, and in particular the question of whether Nash equilibrium strategies are successful, remains an open problem—perhaps the most important one at the intersection of artificial intelligence and game theory. Of course, the most successful approach would not just simply follow a solution concept, but would also attempt to learn and exploit the weaknesses of the opponents [14,15] (note that this would be potentially very helpful for two-player zero-sum games as well, as Nash equilibrium may not fully exploit the mistakes of suboptimal opponents as much as successful exploitative agents even for that setting). However, successfully performing opponent exploitation is very difficult, particularly in very large games where the number of game iterations and observations of the opponents’ play is small compared to the number of game states. Furthermore, such approaches are susceptible to being deceived and counterexploited by sophisticated opponents. It is clear that pure exploitation approaches are insufficient to perform well against a mix of opponents of unknown skill level and that a strong strategy rooted in game-theoretic foundations is required.
The strongest existing agents for large multiplayer games have been based on approaches that attempt to approximate Nash equilibrium strategies [16,17]. In particular, they apply the counterfactual regret minimization algorithm [18], which has also been used for two-player zero-sum games and has resulted in super-human play for both limit Texas hold ’em [19] and no-limit Texas hold ’em [3,4]. These agents have performed well in the three-player limit Texas hold ’em division of the Annual Computer Poker Competition, which is held annually at the AI conferences AAAI or IJCAI [20]. Counterfactual regret minimization (CFR) is an iterative self-play algorithm that is proven to converge to a Nash equilibrium in the limit for two-player zero-sum games. It can be integrated with various forms of Monte Carlo sampling to improve performance both in theory and in practice. The algorithm can also be run on multiplayer and non-zero-sum games, though the strategies it computes are not guaranteed to form a Nash equilibrium. It was demonstrated that it does in fact converge to an ϵ-Nash equilibrium (a strategy profile in which no agent can gain more than ϵ by deviating) in the small game of three-player Kuhn poker, while it does not converge to equilibrium in Leduc hold ’em [16]. It was subsequently proven that it is guaranteed to converge to a strategy that is not dominated and does not put any weight on iteratively weakly-dominated actions [17]. While for some small games this guarantee can be very useful (e.g., in two-player Kuhn poker a high fraction of the actions are iteratively weakly dominated), in many large games (such as full Texas hold ’em) only a very small fraction of actions are dominated, and the guarantee is not useful. Other approaches, based on integrating the fictitious play algorithm with algorithms for finding optimal policies in Markov decision processes (such as policy iteration), have been demonstrated experimentally to converge to ϵ-equilibrium for very small ϵ in a no-limit Texas hold ’em poker tournament endgame [12,13]. It has been proven that if these algorithms converge, then the resulting strategy profile constitutes a Nash equilibrium (a guarantee CFR does not have); however, the algorithms are not proven to converge in general, despite the fact that they did converge for the game that was experimented on.
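The core update behind CFR is regret matching, which CFR applies at every information set using counterfactual values. The following minimal self-play sketch (an illustration only, not the code of any competition agent) runs plain regret matching on a small zero-sum matrix game; in the two-player zero-sum case the average strategies converge to a Nash equilibrium, which is the guarantee referenced above:

import random

# Rock-paper-scissors payoffs for the row player (zero-sum).
A = [[0, -1, 1],
     [1, 0, -1],
     [-1, 1, 0]]
n = len(A)

def strategy(regrets):
    # Regret matching: play actions in proportion to positive cumulative regret.
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / n] * n

reg1, reg2 = [0.0] * n, [0.0] * n
avg1, avg2 = [0.0] * n, [0.0] * n
for _ in range(100000):
    s1, s2 = strategy(reg1), strategy(reg2)
    a1 = random.choices(range(n), weights=s1)[0]
    a2 = random.choices(range(n), weights=s2)[0]
    for i in range(n):
        reg1[i] += A[i][a2] - A[a1][a2]      # regret for not having played i
        reg2[i] += -A[a1][i] + A[a1][a2]
        avg1[i] += s1[i]
        avg2[i] += s2[i]

# The *average* strategies converge to equilibrium in two-player zero-sum games
# (roughly 1/3 on each action here).
print([round(x / sum(avg1), 3) for x in avg1])
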
The empirical success of the three-player limit Texas hold ’em agents in the Annual Computer Poker Competition suggests that CFR-based approaches that attempt to approximate a Nash equilibrium are promising for multiplayer games. However, the takeaway is not very clear. First, the algorithms are not guaranteed to converge to equilibrium for this game, and there is no guarantee that the strategies used by the agents constitute a Nash equilibrium or are even remotely close to one. Furthermore, only a small number of opposing agents were submitted to the competition, and their skill level may be questionable, so it is not clear whether the CFR-based approaches actually produce high-quality strategies or whether they just produced strategies that happened to outperform mediocre opponents and would have done very poorly against strong ones. While these CFR-based approaches are clearly the best so far and seem promising, they do not conclusively address the question of whether Nash equilibrium strategies can be successful in practice in interesting multiplayer games against realistic opponents.
In this paper, we create an agent based on an exact Nash equilibrium strategy for the game of three-player Kuhn poker. While this game is relatively small, and in particular quite small compared to three-player limit Texas hold ’em, it is far from trivial to analyze, and it has been used as a challenge problem at the Annual Computer Poker Competition for the past several years [20]. A benefit of experimenting on a small problem is that exact Nash equilibrium strategies can be computed analytically [21]. That paper computed an infinite family of Nash equilibrium strategies, though it did not perform experiments to see how they fare in practice against realistic opponents. The poker competition also did not publish any details of the agents that participated, so it is unclear what approaches were used by the successful agents. We ran experiments with our equilibrium agent against 10 agents that were created recently as part of a class project. These agents were built using a wide range of approaches, including deep learning, opponent modeling, rule-based approaches, and game-theoretic approaches. We show that an approach based on using a natural Nash equilibrium strategy is able to outperform all of the agents from the class. This suggests that agents based on Nash equilibrium strategies can in fact be successful in multiplayer games, despite the fact that they do not have a worst-case theoretical guarantee. Of course, since we experimented on only one specific game, there is no guarantee that this conclusion applies to other games, and more extensive experiments are needed to determine whether it generalizes.

2. Three-Player Kuhn Poker

Three-player Kuhn poker is a simplified form of limit poker that has been used as a testbed game in the AAAI Annual Computer Poker Competition for several years [20]. There is a single round of betting. Each player first antes a single chip and is dealt a card from a four-card deck that contains one Jack (J), one Queen (Q), one King (K) and one Ace (A). The first player has the option to bet a fixed amount of one additional chip (by contrast, in no-limit games, players can bet arbitrary amounts of chips) or to check (remain in the hand, but not bet an additional chip). When facing a bet, a player can call (i.e., match the bet) or fold (forfeit the hand). No additional bets or raises beyond the additional bet are allowed (while they are allowed in other common poker variants such as Texas hold ’em, both for the limit and no-limit variants). If all players but one have folded, then the player who has not folded wins the pot, which consists of all chips in the middle. If more than one player has not folded by the end, there is a showdown, in which the players reveal their private card and the player with the highest card wins the entire pot (which consists of the initial antes plus all additional bets and calls). The ace is the highest card, followed by the king, queen and jack. As one example of a play of the game, suppose the players are dealt queen, king and ace respectively, and Player 1 checks, then Player 2 checks, then Player 3 bets, then Player 1 folds, then Player 2 calls; then Player 3 would win a pot of five, for a profit of three from the amount with which he/she started the hand.
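The pot accounting in the example above can be made explicit with a short sketch; the representation and helper names are ours, for illustration only:

# Settling a single hand of three-player Kuhn poker for the example in the
# text: players dealt Q, K, A; actions check, check, bet, fold, call.
RANK = {'J': 0, 'Q': 1, 'K': 2, 'A': 3}

def settle(cards, folded, contrib):
    """cards: the 3 private cards; folded: set of player indices who folded;
    contrib: chips each player put in (ante plus any bet or call)."""
    pot = sum(contrib)
    live = [p for p in range(3) if p not in folded]
    winner = max(live, key=lambda p: RANK[cards[p]])   # highest card at showdown
    return [(pot if p == winner else 0) - contrib[p] for p in range(3)]

# Example hand: everyone antes 1; Player 1 checks, Player 2 checks,
# Player 3 bets 1, Player 1 folds, Player 2 calls 1.
cards   = ['Q', 'K', 'A']
contrib = [1, 2, 2]                  # P1: ante only; P2: ante + call; P3: ante + bet
print(settle(cards, {0}, contrib))   # [-1, -2, 3]: Player 3 wins the 5-chip pot
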
Note that although three-player Kuhn poker is only a synthetic simplified form of poker and is not actually played competitively, it is still far from trivial to analyze and contains many of the interesting complexities of popular forms of poker such as Texas hold ’em. First, it is a game of imperfect information, as players are dealt a private card that the other agents do not have access to, which makes the game more complex than a perfect-information game with the same number of states. Despite its small size, it is not trivial to compute a Nash equilibrium analytically, though recently an infinite family of Nash equilibria has been computed [21]. The equilibrium strategies exhibit the phenomena of bluffing (i.e., sometimes betting with weak hands such as a jack or queen) and slow-playing, also known as trapping (i.e., sometimes checking with strong hands such as a king or ace in order to induce a bet from a weaker hand). To see why, suppose a Player X played a simple strategy that only bet with an ace or sometimes a king. Then the other agents would only call the bet if they had an ace, since otherwise they would know they are beat (since there is only one king in the deck, if they held a king, they would know that Player X held an ace). But if the other agents only call with an ace, it is unprofitable for Player X to bet with a king, since he/she will lose an additional chip whenever another player holds an ace and will not get a call from a worse hand; it would be better to check and then potentially call, hoping that the other player is bluffing (or to fold if he/she thinks the other player bluffs too infrequently). A better strategy may be to bet with an ace and to sometimes bet with a jack as a bluff, putting the other players in a challenging situation when holding a queen or king. However, Player X may also want to sometimes check with an ace so that he/she still has some strong hands after checking, which makes the other players more wary of betting into him/her after a check.
An infinite family of Nash equilibria for this game has been computed and can be seen in the tables of a recent article by Szafron et al. [21]. The family of equilibria is based on several parameter values which, once selected, determine the probabilities for the other portions of the strategies. One can see from the tables that randomization and including some probability of trapping and bluffing are essential in order to have a strong and unpredictable strategy. Thus, while this game may appear quite simple at first glance, the analysis is still very far from simple, and the game exhibits many of the complexities of far larger games that are played competitively by humans for large amounts of money.

3. Nash Equilibrium-Based Agent

One may wonder why it is worthwhile to create agents and experiment on three-player Kuhn poker, given that the game has been “solved,” as described in the preceding section. First, as described there, the game has infinitely many Nash equilibria (and furthermore, there may be others beyond those in the family computed in the prior work). Therefore, even if we wanted to create an agent that employed a Nash equilibrium “solution,” it would not be clear which one to pick, and the performance would depend heavily on the strategies selected by the other agents (who may not even be playing a Nash equilibrium at all). This is similar to the phenomenon described for the Battle of the Sexes Game in the Introduction: even though the wife may be aware of all the equilibria, if she attends the Opera as part of the (O, O) equilibrium while the husband attends the Football game as part of the (F, F) equilibrium, both players obtain a very low payoff despite each following an equilibrium. A second reason is that, as also described in the Introduction, Nash equilibrium has no theoretical benefits in three-player games, and it is possible that a non-equilibrium strategy (particularly one that integrates opponent modeling and exploitation) would perform better, even if we expect the opponents to be following a Nash equilibrium strategy, and particularly if we expect them to be playing predictably and/or making mistakes.
Therefore, despite the fact that exact Nash equilibrium strategies have been computed for this game, it is still very unclear what a good approach is for creating a strong agent against a pool of unknown opponents.
For our agent, we decided to use a Nash equilibrium strategy that has been singled out in prior work as being more robust than the others and that obtains the best worst-case payoff assuming that the other agents are following one of the strategies given by the computed infinite equilibrium family [21]. We depict this strategy in Table 1. The table assigns values for the 21 free parameters in the infinite family of Nash equilibrium strategies. To define these parameters, a_jk, b_jk and c_jk denote the action probabilities for players P1, P2 and P3, respectively, when holding card j and taking an aggressive action (Bet (B) or Call (C)) in situation k, where the betting situations are defined in Table 2. Prior work has actually singled out a range of strategies that receive the best worst-case payoff; above, we have described the lower bound of this space, and we also experiment using the strategy that falls at the upper bound (Table 3). We call the first Nash equilibrium agent NE1 and the second NE2.
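For concreteness, the following sketch stores the NE1 free-parameter values from Table 1 and samples an action from them. The dictionary layout and function are ours, and the remaining (non-free) action probabilities of the equilibrium family, which Table 1 does not list, would still need to be supplied for a complete agent:

import random

# NE1 free parameters from Table 1: probability of the aggressive action
# (Bet or Call) for player P, holding card j (1 = Jack ... 4 = Ace), in
# betting situation k of Table 2.  Parameters not listed here are determined
# by the equilibrium family and must be filled in separately; the dictionary
# layout is ours, for illustration only.
NE1 = {
    ('P1', 1, 1): 0,   ('P2', 1, 1): 0,   ('P3', 1, 1): 0,
    ('P1', 2, 1): 0,   ('P2', 2, 1): 0,   ('P3', 2, 1): 0.5,
    ('P1', 2, 2): 0,   ('P2', 2, 2): 0,   ('P3', 2, 2): 0,
    ('P1', 2, 3): 0,   ('P2', 2, 3): 0,   ('P3', 2, 3): 0,
    ('P1', 3, 1): 0,   ('P2', 3, 1): 0,   ('P3', 3, 1): 0,
    ('P1', 3, 2): 0,   ('P2', 3, 2): 0,   ('P3', 3, 2): 0,
    ('P1', 3, 3): 0.5, ('P2', 3, 3): 0.5, ('P3', 3, 3): 0.5,
    ('P1', 3, 4): 0,   ('P2', 3, 4): 0,   ('P3', 3, 4): 0,
    ('P1', 4, 1): 0,   ('P2', 4, 1): 0,   ('P3', 4, 1): 1,
}

def act(player, card, situation, facing_bet):
    """Sample an action from the free-parameter table (aggressive = Bet/Call)."""
    p_aggressive = NE1[(player, card, situation)]
    if random.random() < p_aggressive:
        return 'C' if facing_bet else 'B'
    return 'F' if facing_bet else 'K'
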

4. Experiments

We experimented against 10 of the 11 agents submitted recently for a class project (we ignored one agent that ran very slowly and performed poorly). These agents utilized a wide variety of approaches, ranging from neural networks to counterfactual regret minimization to opponent modeling to rule-based approaches. For each grouping of 3 agents, we ran matches consisting of 3000 hands for each of the 6 permutations of the agents (with the same cards being dealt for the respective positions of the agents in each of the duplicated matches). The number of hands per match (3000) is the same value used in the Annual Computer Poker Competition, and duplicating the matches with the same cards across the different agent permutations is a common approach that significantly reduces the variance. We ran 10 matches for each permutation of 3 agents. Table 4 shows the overall payoff (divided by 100,000) for each agent. The Nash agent received the highest payoff. The results are very similar for the two equilibrium strategies NE1 and NE2, with NE2 performing slightly better.
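The duplicate-match design reuses the same pre-dealt card sequences for all 6 seatings of each 3-agent grouping. The following sketch shows that bookkeeping; play_match and the agent interface are placeholders (our illustration, not the actual experiment code):

import itertools, random

def deal_sequences(num_hands, seed):
    # Pre-deal the cards so every seating permutation sees identical hands.
    rng = random.Random(seed)
    return [rng.sample(['J', 'Q', 'K', 'A'], 3) for _ in range(num_hands)]

def play_match(seated_agents, deals):
    """Placeholder: play len(deals) hands with the given seating and pre-dealt
    cards, returning net chips per seat.  (Real game logic omitted.)"""
    return [0, 0, 0]

def duplicate_matches(agents, num_hands=3000, num_matches=10):
    totals = {a: 0 for a in agents}
    for m in range(num_matches):
        deals = deal_sequences(num_hands, seed=m)        # same cards reused below
        for perm in itertools.permutations(agents):      # all 6 seatings
            results = play_match(list(perm), deals)
            for agent, chips in zip(perm, results):
                totals[agent] += chips
    return totals

print(duplicate_matches(['NE1', 'A1', 'A2']))
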

5. Conclusions

Creating strong agents for games with more than two players, and in particular determining whether Nash equilibrium strategies are successful, is an important open problem—perhaps the most important one at the intersection of game theory and AI. We demonstrated that an agent based on following an exact Nash equilibrium is able to outperform agents submitted for a recent class project that utilized a wide variety of approaches. This suggests that agents based on Nash equilibrium strategies can in fact be successful in multiplayer games, despite the fact that they do not have a worst-case theoretical guarantee.

Author Contributions

S.G. designed the experiments and wrote the paper. A.N. and J.P. implemented the two Nash equilibrium agents and ran the experiments.

Acknowledgments

We thank the School of Computing and Information Sciences at Florida International University. This project is not supported by any grants.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Nash, J. Non-cooperative games. Ph.D. Thesis, Princeton University, Princeton, NJ, USA, 1950.
2. Von Neumann, J. Zur Theorie der Gesellschaftsspiele. Math. Ann. 1928, 100, 295–320.
3. Brown, N.; Sandholm, T. Safe and Nested Subgame Solving for Imperfect-Information Games. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017.
4. Moravčík, M.; Schmid, M.; Burch, N.; Lisý, V.; Morrill, D.; Bard, N.; Davis, T.; Waugh, K.; Johanson, M.; Bowling, M. DeepStack: Expert-level artificial intelligence in heads-up no-limit poker. Science 2017, 356, 508–513.
5. Koller, D.; Megiddo, N. The Complexity of Two-Person Zero-Sum Games in Extensive Form. Games Econ. Behav. 1992, 4, 528–552.
6. Chen, X.; Deng, X. 3-Nash is PPAD-Complete; Report No. 134; Electronic Colloquium on Computational Complexity: Trier, Germany, 2005.
7. Berg, K.; Sandholm, T. Exclusion Method for Finding Nash Equilibrium in Multiplayer Games. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), San Francisco, CA, USA, 4–9 February 2017.
8. Porter, R.; Nudelman, E.; Shoham, Y. Simple Search Methods for Finding a Nash Equilibrium. Games Econ. Behav. 2008, 63, 642–662.
9. Govindan, S.; Wilson, R. A Global Newton Method to Compute Nash Equilibria. J. Econ. Theory 2003, 110, 65–86.
10. Sandholm, T.; Gilpin, A.; Conitzer, V. Mixed-Integer Programming Methods for Finding Nash Equilibria. In Proceedings of the National Conference on Artificial Intelligence (AAAI), Pittsburgh, PA, USA, 9–13 July 2005; pp. 495–501.
11. Lemke, C.; Howson, J. Equilibrium points of bimatrix games. J. Soc. Ind. Appl. Math. 1964, 12, 413–423.
12. Ganzfried, S.; Sandholm, T. Computing an approximate jam/fold equilibrium for 3-player no-limit Texas hold’em tournaments. In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Estoril, Portugal, 12–16 May 2008.
13. Ganzfried, S.; Sandholm, T. Computing equilibria in multiplayer stochastic games of imperfect information. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI), Pasadena, CA, USA, 11–17 July 2009.
14. Johanson, M.; Zinkevich, M.; Bowling, M. Computing Robust Counter-Strategies. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, 3–6 December 2007; pp. 1128–1135.
15. Ganzfried, S.; Sandholm, T. Game theory-based opponent modeling in large imperfect-information games. In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Taipei, Taiwan, 2–6 May 2011.
16. Abou Risk, N.; Szafron, D. Using Counterfactual Regret Minimization to Create Competitive Multiplayer Poker Agents. In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Toronto, ON, Canada, 10–14 May 2010.
17. Gibson, R. Regret Minimization in Games and the Development of Champion Multiplayer Computer Poker-Playing Agents. Ph.D. Thesis, University of Alberta, Edmonton, AB, Canada, 2014.
18. Zinkevich, M.; Bowling, M.; Johanson, M.; Piccione, C. Regret Minimization in Games with Incomplete Information. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, 3–6 December 2007.
19. Bowling, M.; Burch, N.; Johanson, M.; Tammelin, O. Heads-up Limit Hold’em Poker is Solved. Science 2015, 347, 145–149.
20. Annual Computer Poker Competition. Available online: http://www.computerpokercompetition.org/ (accessed on 13 May 2018).
21. Szafron, D.; Gibson, R.; Sturtevant, N. A Parameterized Family of Equilibrium Profiles for Three-Player Kuhn Poker. In Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), St. Paul, MN, USA, 6–10 May 2013.
Figure 1. Battle of the Sexes Game.
Table 1. Parameter values used for our first Nash equilibrium agent (NE1). The strategies fall at the “lower bound” of the space of “robust” Nash equilibrium strategies that receive the best worst-case payoff assuming the other agents follow one of the strategies given by the computed infinite equilibrium family. The values are for the free parameters in the infinite family of Nash equilibrium strategies. To define these parameters, a_jk, b_jk and c_jk denote the action probabilities for players P1, P2 and P3, respectively, when holding card j and taking an aggressive action (Bet (B) or Call (C)) in situation k, where the betting situations are defined in Table 2.
P1              P2              P3
a_11 = 0        b_11 = 0        c_11 = 0
a_21 = 0        b_21 = 0        c_21 = 1/2
a_22 = 0        b_22 = 0        c_22 = 0
a_23 = 0        b_23 = 0        c_23 = 0
a_31 = 0        b_31 = 0        c_31 = 0
a_32 = 0        b_32 = 0        c_32 = 0
a_33 = 1/2      b_33 = 1/2      c_33 = 1/2
a_34 = 0        b_34 = 0        c_34 = 0
a_41 = 0        b_41 = 0        c_41 = 1
Table 2. Betting situations in three-player Kuhn poker. For each player and situation, the sequence of capital letters denotes the history of the betting so far (a dash indicates an empty history, i.e., the player acts first with no prior actions). ‘K’ stands for the action of “check” (which passes the turn to the next agent and does not put additional money in the pot); ‘F’ stands for “fold” (give up on the hand and forfeit the pot); ‘B’ stands for “bet” (put additional money in the pot, which opponents are forced to match to remain in the hand); and ‘C’ stands for “call” (match an outstanding bet). For example, for row P1 Situation 2, the history of actions so far is that Player 1 has checked, Player 2 has checked and Player 3 has bet, and it is now Player 1’s turn.
Situation    P1      P2      P3
1            –       K       KK
2            KKB     B       KB
3            KBF     KKBF    BF
4            KBC     KKBC    BC
Table 3. Parameter values used for our second Nash equilibrium agent (NE2). The strategies fall at the “upper bound” of the space of “robust” Nash equilibrium strategies that receive the best worst-case payoff assuming the other agents follow one of the strategies given by the computed infinite equilibrium family. The values are for the free parameters in the infinite family of Nash equilibrium strategies. To define these parameters, a_jk, b_jk and c_jk denote the action probabilities for players P1, P2 and P3, respectively, when holding card j and taking an aggressive action (Bet (B) or Call (C)) in situation k, where the betting situations are defined in Table 2.
P1              P2              P3
a_11 = 0        b_11 = 1/4      c_11 = 0
a_21 = 0        b_21 = 1/4      c_21 = 1/2
a_22 = 0        b_22 = 0        c_22 = 0
a_23 = 0        b_23 = 0        c_23 = 0
a_31 = 0        b_31 = 0        c_31 = 0
a_32 = 0        b_32 = 1        c_32 = 0
a_33 = 1/2      b_33 = 7/8      c_33 = 0
a_34 = 0        b_34 = 0        c_34 = 1
a_41 = 0        b_41 = 1        c_41 = 1
Table 4. Experiments using Nash agents against class project agents. The values reported are the total overall payoff of the agent in dollars (assuming an ante of $1 and a bet size of $1), divided by 100,000.
Agent    Nash    A1      A2      A3      A4      A5      A6      A7      A8      A9      A10
NE1      2.81    2.24    1.17    2.54    −1.66   2.32    1.74    −1.34   −9.54   −3.47   1.42
NE2      2.81    2.25    1.18    2.54    −1.65   2.32    1.74    −1.34   −9.56   −3.48   1.42
