Article

Pure Bayesian Nash Equilibria for Bayesian Games with Multidimensional Vector Types and Linear Payoffs

Department of Computing, Faculty of Engineering, Imperial College London, London SW7 2AZ, UK
* Author to whom correspondence should be addressed.
Games 2025, 16(4), 37; https://doi.org/10.3390/g16040037
Submission received: 21 April 2025 / Revised: 10 June 2025 / Accepted: 4 July 2025 / Published: 14 July 2025

Abstract

In this work, we study n-agent Bayesian games with m-dimensional vector types and linear payoffs, also called linear multidimensional Bayesian games. This class of games is equivalent to n-agent, m-game uniform multigames. We distinguish between games that have a discrete type space and those with a continuous type space. More specifically, we are interested in the existence of pure Bayesian Nash equilibria for such games and in efficient algorithms to find them. For continuous priors, we suggest a methodology to perform Nash equilibrium searches in simple cases. For discrete priors, we present algorithms that can handle two-action and two-player games efficiently. We introduce the core concept of threshold strategy and, under some mild conditions, we show that these games have at least one pure Bayesian Nash equilibrium. We illustrate our results with several examples like the double-game prisoner’s dilemma (DGPD), the game of chicken, and the sustainable adoption decision problem (SADP).

1. Introduction

Game theory provides an abstract framework to model a broad range of decision-making scenarios in real-life situations. From social cooperation (Larson, 2021) to biological evolution (Friedman, 1998) and economics (Shapiro, 1989) we can find models of games that can help to predict, or at least explain, the decision-makers’ behavior. “Game theory is a bag of analytical tools designed to help us understand the phenomena that we observe when decision-makers interact” (Osborne & Rubinstein, 1994). A game is designed to model any situation involving decision-makers (players and/or agents) that are rational and reason strategically. An agent is rational when it has a well-defined objective, is able to identify a preferable situation, and attempts to maximize its payoff. Strategic reasoning is the use of knowledge in a manner that best anticipates the other agents’ actions and taking them into account in the decision process. The knowledge of an agent is the information that it has before making its decision. Information only known by one player is said to be private knowledge. When it is shared among a group of players, we say that this is common knowledge.
More formally, we assume that agents are indexed by a set $I$. We denote the set of actions of agents by $A$, the set of outcomes by $O$, and consider an outcome function $f : A \to O$ that gives the outcome of a set of actions. The preferences of any agent are specified by the maximization of a utility (or payoff) function $u : O \to \mathbb{R}$. The simplest form of a game is the Strategic Game given in Definition 1. An action profile $a = (a_i)_{i \in I}$ is an outcome of the game. We use the index $-i$ to designate “all players except $i$” and write $a = (a_i, a_{-i})$ for $a \in A$.
Definition 1.
A Strategic Game $G = \langle I, (A_i)_{i \in I}, (u_i)_{i \in I} \rangle$ comprises the following:
1. 
A finite set $I = \{1, 2, 3, \dots, n\}$ of $n$ agents.
2. 
For each agent $i \in I$, there exists a nonempty set $A_i$ of possible actions. If $A_i$ is finite for all agents, then we say that the game is finite.
3. 
A set $A = \prod_{i \in I} A_i$ of possible outcomes.
4. 
A set of utility functions $u_i : A \to \mathbb{R}$ specifying the preferences of agent $i$.
A Nash equilibrium (NE) “captures a steady state” (Osborne & Rubinstein, 1994) of a game, in which no agent has an incentive to deviate unilaterally from its action (Definition 2).
Definition 2.
A Nash equilibrium of the strategic game $G = \langle I, (A_i)_{i \in I}, (u_i)_{i \in I} \rangle$ is an action profile $a^* \in A$ such that
$$\forall i \in I,\ \forall a_i \in A_i:\quad u_i(a_i^*, a_{-i}^*) \ge u_i(a_i, a_{-i}^*)$$
For a set $X$, let $\mathcal{P}(X)$ denote its set of subsets.
Definition 3.
Given an agent $i$ and the actions $a_{-i} \in A_{-i}$, the best-response function $B_i : A_{-i} \to \mathcal{P}(A_i)$ is such that
$$B_i(a_{-i}) = \{a_i \in A_i \mid u_i(a_i, a_{-i}) \ge u_i(a_i', a_{-i})\ \ \forall a_i' \in A_i\}$$
This implies that $a^*$ is a Nash equilibrium if and only if $a^* \in B(a^*) := \prod_{i \in I} B_i(a_{-i}^*)$.
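As a concrete sketch of Definitions 1-3, the following snippet computes best responses and enumerates the pure Nash equilibria of a two-player strategic game by checking $a^*_i \in B_i(a^*_{-i})$ for every profile; the payoff matrices (a standard prisoner's dilemma) are illustrative and not taken from the paper.

```python
from itertools import product

# Payoff matrices for a 2-player, 2-action strategic game
# (a classic prisoner's dilemma; action 0 = C, action 1 = D).
# These numbers are illustrative, not taken from the paper.
U = {
    0: [[3, 0], [4, 1]],  # row player's utility u_0(a_0, a_1)
    1: [[3, 4], [0, 1]],  # column player's utility u_1(a_0, a_1)
}
ACTIONS = [0, 1]

def best_response(i, a_other):
    """B_i(a_{-i}): the actions maximizing u_i against a fixed opponent action."""
    if i == 0:
        payoffs = {a: U[0][a][a_other] for a in ACTIONS}
    else:
        payoffs = {a: U[1][a_other][a] for a in ACTIONS}
    best = max(payoffs.values())
    return {a for a, v in payoffs.items() if v == best}

def pure_nash_equilibria():
    """a* is a NE iff a*_i is in B_i(a*_{-i}) for every agent i."""
    return [(a0, a1) for a0, a1 in product(ACTIONS, ACTIONS)
            if a0 in best_response(0, a1) and a1 in best_response(1, a0)]

print(pure_nash_equilibria())  # the unique equilibrium is (D, D) = (1, 1)
```

With these payoffs, D strictly dominates C for both players, so the enumeration returns only the (D, D) profile, matching the discussion of the prisoner's dilemma below.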
The prisoner’s dilemma (PD) (Flood, 1958) is a widely used toy example that models a simple situation of self-interest driven decision and illustrates a case where the Nash equilibrium is not Pareto efficient (Osborne & Rubinstein, 1994). When extended to more than two agents, it is often associated with the tragedy of the commons (Carrozzo Magli et al., 2021). This is a situation in which “individuals, who have open access to a shared resource act independently according to their own self-interest and, contrary to the common good of all users, cause depletion of the resource through their uncoordinated action (Hardin, 1968).”
However, empirical evidence challenges this theoretical prediction. In (Poundstone, 1993), the authors observed that New Zealand’s unlocked newspaper boxes (where taking without paying is easy) function successfully. Rather than exploiting this vulnerability, most customers voluntarily pay, suggesting they understand that collective defection would collapse the system.
Consider, for example, the TV show “Friend or Foe?”, which aired between 2002 and 2005 in the US. Two players who previously failed to answer some questions have to play “The Trust Box”: if they both choose Friend, they share a specified amount of money. If they both choose Foe, the money is lost. If one plays Friend while the other plays Foe, the player choosing Foe wins all the money. Although (Friend, Friend) is not a NE, data collected from this kind of game showed that in such situations, “cooperation is surprisingly high” (van den Assem et al., 2011). Experience suggests that when a significant amount of money is at stake, players tend to cooperate. Indeed, humans are driven not only by material considerations but also by moral and sociocultural ones.

1.1. Bayesian Games

Concrete situations usually involve uncertainty at multiple levels, so Strategic Games with perfect information (such as the prisoner’s dilemma or hawk–dove (Székely et al., 2010)) may not be sufficient to describe the game. Usually, we denote this unknown information as a state of nature and model it with a state space $\Omega$ (Osborne & Rubinstein, 1994). Suppose each agent has some private information (or private knowledge): this information, called the type $\theta_i$ of agent $i$, is unknown to the other agents, who can only have a (subjective) probability distribution over the possible types of an agent. So, we assume that they have in mind a type space $\Theta$ and a probability measure over $\Theta$, and, following von Neumann and Morgenstern’s theory (von Neumann & Morgenstern, 1944), they play the $a^*$ that maximizes the expected value of $u(f(a^*, \theta))$ with $\theta \in \Theta$.
A game G is Bayesian (Harsanyi, 1967, 1968a, 1968b) when there is uncertainty for the agents (as in Definition 4). In this situation, an agent can no longer anticipate the output of the game with certainty because some information about the game is unknown. Variance in the outcome is generally associated with risk or “how much we could diverge from what we expect”. A human player who wants to maximize his/her payoff without taking too much risk could be modeled by optimizing a combination of the expected value and the associated variance, as seen in portfolio management models (Luenberger, 1998).
A key feature of Bayesian games is that strategies are functions of agents’ types, mapping private information to action choices. Indeed, the action of an agent depends on all the information it has (its own type) at the moment of the decision. We distinguish pure strategies $s_i : \Theta_i \to A_i$ from mixed strategies $\sigma_i : \Theta_i \to \Delta A_i$, where $\Delta X$ denotes the set of all probability distributions on a set $X$. In the latter case, the action played is no longer deterministic but is chosen randomly according to a probability distribution.
Definition 4.
A Bayesian game $G$ is a game in strategic form with incomplete information, which we denote by $G = \langle I, (A_i, \Theta_i, u_i, p_i(\cdot))_{i \in I} \rangle$, where
1. 
$I = \{1, \dots, n\}$ is the set of agents.
2. 
$A_i$ is agent $i$’s action set and $A = \prod_{i \in I} A_i$ is the set of action outcomes or action profiles.
3. 
$\Theta_i$ is agent $i$’s type space and $\Theta = \prod_{i \in I} \Theta_i$ is the set of type profiles.
4. 
$u_i : A \times \Theta \to \mathbb{R}$ is agent $i$’s utility for each $i \in I$.
5. 
$p_i : \Theta \to [0, 1]$ is a (subjective) joint probability on $\Theta$ for each $i \in I$.
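To make Definition 4 concrete, here is a minimal brute-force search for pure Bayesian Nash equilibria in a toy two-agent, two-type, two-action game; the type space, independent prior, and payoff function are invented for illustration only.

```python
from itertools import product

# A minimal 2-agent Bayesian game with discrete types, sketching Definition 4.
# Types, prior, and payoffs below are invented for illustration.
TYPES = {0: ["lo", "hi"], 1: ["lo", "hi"]}
ACTIONS = [0, 1]
P = {"lo": 0.5, "hi": 0.5}  # independent prior over each agent's type

def u(i, a, theta):
    """u_i(a, theta): toy payoff - agent i likes matching actions iff its type is 'hi'."""
    match = 1.0 if a[0] == a[1] else 0.0
    return match if theta[i] == "hi" else 1.0 - match

def expected_u(i, s_i, s_other, theta_i):
    """Expected utility of type theta_i of agent i when the other agent plays s_other."""
    total = 0.0
    for t_other in TYPES[1 - i]:
        theta = (theta_i, t_other) if i == 0 else (t_other, theta_i)
        a = (s_i[theta_i], s_other[t_other]) if i == 0 else (s_other[t_other], s_i[theta_i])
        total += P[t_other] * u(i, a, theta)
    return total

def is_pure_bne(s0, s1):
    """No type of either agent gains by deviating to another action."""
    for theta0 in TYPES[0]:
        base = expected_u(0, s0, s1, theta0)
        if any(expected_u(0, {**s0, theta0: a}, s1, theta0) > base + 1e-12
               for a in ACTIONS):
            return False
    for theta1 in TYPES[1]:
        base = expected_u(1, s1, s0, theta1)
        if any(expected_u(1, {**s1, theta1: a}, s0, theta1) > base + 1e-12
               for a in ACTIONS):
            return False
    return True

# enumerate all pure strategies s_i : Theta_i -> A_i and test every profile
strategies = [dict(zip(TYPES[0], combo)) for combo in product(ACTIONS, repeat=2)]
bne = [(s0, s1) for s0 in strategies for s1 in strategies if is_pure_bne(s0, s1)]
print(len(bne) >= 1)
```

This exhaustive check is exponential in the number of types, which is exactly the inefficiency that the threshold-strategy algorithms developed later are meant to avoid.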
For such games, we can derive the notion of Bayesian Nash equilibrium from the NE of strategic games by maximizing an expected utility $\bar{u}_i$ conditioned on agents’ types. We also distinguish pure and mixed Nash equilibria. Thanks to the Nash theorem (Theorem 1), we know that there always exists a mixed-strategy Nash equilibrium (denoted by mixed NE). However, this theorem does not give a practical method to find such a Nash equilibrium. In fact, finding a Nash equilibrium is often very complex and cannot be done in reasonable time. In 1994, Christos H. Papadimitriou introduced the complexity class PPAD (polynomial parity arguments on directed graphs) (Papadimitriou, 1994) in order to classify the problem of finding a Nash equilibrium. Later, he also showed that the problem of finding a Nash equilibrium for a finite game is PPAD-complete (Daskalakis et al., 2009).
Theorem 1
(Nash Theorem (Osborne & Rubinstein, 1994)). Every finite strategic game has a mixed-strategy Nash equilibrium.
Although finding a Nash equilibrium is very hard in general, we can find some classes of games that have nice properties and an efficiently computable Nash equilibrium (Rabinovich et al., 2013).

1.2. Linear Multidimensional Bayesian Games

For our study, we consider linear multidimensional Bayesian games (Definition 5). They were initially introduced by Krishna and Perry to model multiple object auctions (Krishna & Perry, 1998). This class of games was shown (Edalat et al., 2018) to be equivalent to another class of games called uniform multigames (Theorem 2). A multigame (Definition 6) is a game that comprises several local games played simultaneously by all agents. Each local game has its own payoff matrix and possible actions. For example, an agent simultaneously playing “Heads or Tails” with one person and “Rock, Paper, Scissors” with another can be modeled by a multigame. Even if both local games are played simultaneously, the agent is just playing two “separate” games and tries to maximize a global utility, which is a linear combination of the local utilities. More specifically, for multigames in which every agent has the same set of actions in all local games, we can consider uniform multigames: agents can only take the same action in all local games. When a multigame is uniform, an agent chooses only one strategy that is played identically in all local games. In other words, one decision has multiple consequences, and each agent optimizes its overall utility by choosing a single global strategy applied to all the local games. Let $\mathbb{R}_+$ denote the set of non-negative real numbers.
Definition 5 (Linear Multidimensional Bayesian Games).
A Bayesian game $G$ is $m$-dimensional if the type space of each agent is a bounded subset of $\mathbb{R}_+^m$. When the positive integer $m > 1$ is implicitly given, we say that $G$ is multidimensional. A multidimensional Bayesian game is linear if the utility of each agent depends only linearly on its own type components, i.e., there exists $L_i(s_i, s_{-i}) \in \mathbb{R}^m$ such that $u_i(s_i, s_{-i}, \theta_i, \theta_{-i}) = \sum_{j \in J} L_i(s_i, s_{-i})_j\, \theta_{ij}$.
Definition 6.
A multigame
$$G = \langle I, J, (w_i)_{i \in I}, (G_j)_{j \in J}, (\Theta_i)_{i \in I}, (A_{ij}, u_{ij})_{i \in I, j \in J}, p(\cdot) \rangle$$
is a game in strategic form with incomplete information with the following structure:
1. 
The set of agents $I = \{1, \dots, n\}$.
2. 
The set of $n$-agent basic games is given by $G_j$, where $j \in J = \{1, \dots, m\}$, with action space $A_{ij}$ and utility function $u_{ij} : A_{ij} \times A_{-ij} \to \mathbb{R}$ for each agent $i \in I$ in the game $G_j$.
3. 
Agent $i$’s strategy is $s_i = (s_{ij})_{j \in J} \in S_i = \prod_{j \in J} A_{ij}$, where $s_{ij}$ is agent $i$’s action in $G_j$.
4. 
Agent $i$’s type is $\theta_i = (\theta_{ij})_{j \in J} \in \Theta_i$ with $\theta_{ij} \ge 0$, $w_i > 0$, and $\sum_{j \in J} \theta_{ij} \le w_i$.
5. 
Agent $i$’s utility for the strategy profile $(s_i, s_{-i})$ and type profile $(\theta_i, \theta_{-i})$ depends linearly on its types:
$$u_i(s_i, s_{-i}, \theta_i, \theta_{-i}) = \sum_{j \in J} \theta_{ij}\, u_{ij}(s_{1j}, \dots, s_{nj})$$
6. 
The agents’ type profile $\theta = (\theta_1, \dots, \theta_n) \in \prod_{i \in I} \Theta_i$ is drawn from a given joint probability distribution $p(\theta)$. For any $\theta_i \in \Theta_i$, the function $p(\cdot \mid \theta_i)$ specifies a conditional probability distribution on $\Theta_{-i}$ representing what agent $i$ believes about the types of the other agents if its own type were $\theta_i$.
The type coefficients θ i j represent agent i’s weight or priority for game j, reflecting how much agent i values outcomes in that particular game relative to others.
Theorem 2
(Edalat et al., 2018). Suppose that $G$ is a linear $m$-dimensional Bayesian game with a bounded type space $\Theta_i \subseteq \mathbb{R}_+^m$ for each $i \in I$; then, $G$ is equivalent to a uniform multigame with $m$ basic games.
Note that according to Definition 6, the type can be any $m$-dimensional vector with non-negative coefficients. We say that a multigame is normalized when $\forall i \in I,\ w_i = 1$. Any multigame can be converted into a normalized multigame by adding a well-chosen local game (Edalat et al., 2018). Thus, we can assume without loss of generality that multigames are normalized; as such, all agents’ type coefficients add up to 1. As previously mentioned, this paper focuses on uniform multigames that have the following two basic features: (1) identical action sets in all local games for every agent $i \in I$, i.e., $\forall j \in J,\ A_{ij} = A_i$, and (2) each agent plays the same action in all basic games $G_j$, i.e., $S_i = \{(s, s, \dots, s) \mid s \in A_i\}$. Additionally, we assume that the agents’ types are independent:
$$\forall i \in I,\quad p_i(\theta_{-i} \mid \theta_i) = p_i(\theta_{-i})$$
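The type-weighted utility of Definition 6, restricted to a normalized uniform multigame, can be sketched directly; the two 2x2 local payoff matrices and the type vector below are invented for illustration.

```python
# Sketch of a normalized uniform multigame utility (Definition 6): each agent
# plays a single action in every local game, and its global payoff is the
# type-weighted sum of its local payoffs. The payoff matrices and the type
# vector below are invented for illustration.
LOCAL = {
    (0, 0): [[3, 0], [4, 1]],   # u_ij: agent 0, local game 0 (a PD-like game)
    (0, 1): [[2, 2], [0, 0]],   # agent 0, local game 1 (rewards cooperation)
    (1, 0): [[3, 4], [0, 1]],   # agent 1, local game 0
    (1, 1): [[2, 0], [2, 0]],   # agent 1, local game 1
}

def multigame_utility(i, actions, theta_i):
    """u_i(s, theta_i) = sum_j theta_ij * u_ij(s_1, s_2), with sum_j theta_ij = 1."""
    assert abs(sum(theta_i) - 1.0) < 1e-9, "normalized multigame: weights sum to 1"
    a0, a1 = actions
    return sum(theta_i[j] * LOCAL[(i, j)][a0][a1] for j in range(2))

# agent 0 weights the cooperative local game heavily (type weight 0.8)
print(multigame_utility(0, (0, 0), (0.2, 0.8)))
```

Note that the single action pair `(0, 0)` is applied in both local games at once, which is exactly the uniformity constraint $S_i = \{(s, \dots, s) \mid s \in A_i\}$.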
We aim to show, beyond what is provided in (Edalat et al., 2018), that the multigame framework can model a wide range of complex situations worth exploring, such as coordinated international environmental and social actions. First, consider a situation in which two countries could either keep an unsustainable traditional production system or shift to a more responsible one. The shift is only beneficial if both countries commit to it. However, each country can be tempted to keep the old traditional production system, which incurs no additional cost and is more efficient and profitable (at least, this is what the agents believe). This situation falls into the original prisoner’s dilemma framework, and, assuming that they act rationally, both countries end up keeping their traditional production.
Now, we extend this situation to $N$ companies in the same geographical area. We assume that each company that keeps an unsustainable production unit causes a pollution cost of 1 unit. Thus, the (shared) pollution cost $k$ is the number of companies that do not shift their production unit, $0 \le k \le N$. On the other hand, a company that shifts its production unit for a more sustainable one has to pay an additional (fixed) cost $c < N$. Thus, the cost is $c_i = k + c$ for a company $i$ that shifts its production and $c_i = k$ for a company that keeps it. If all $N$ agents shift, they all end up with a (small) cost $c$, and if no agent does so, they all end up with a (high) cost of $N$. Therefore, it is in the common interest that everyone shifts their production unit. However, as each individual company has no incentive to deviate from not shifting, the only Nash equilibrium of this game occurs when they all avoid the shift. This example can handle any number of companies but remains very limited, as we cannot include the particularities of the agents (types) or the uncertainty about what the other companies value the most (i.e., the Bayesian aspect).
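The cost structure of this $N$-company example can be checked numerically; the values of $N$ and $c$ below are illustrative (any $c$ with $1 < c < N$ gives the same strict incentives).

```python
# Numerical check of the N-company pollution example: cost c_i = k + c for a
# shifting company and c_i = k for a keeping one, where k is the number of
# companies that keep the unsustainable unit. N and c are illustrative.
N, c = 5, 2  # 1 < c < N: shifting collectively is better, individually worse

def cost(my_shift, others_shifting):
    """Cost of one company given how many of the other N-1 companies shift."""
    k = (N - 1 - others_shifting) + (0 if my_shift else 1)
    return k + (c if my_shift else 0)

# No matter how many others shift, keeping is strictly cheaper (c > 1 here),
# so 'all keep' is the unique Nash equilibrium...
assert all(cost(False, s) < cost(True, s) for s in range(N))
# ...even though 'all shift' (cost c each) beats 'all keep' (cost N each).
print(cost(True, N - 1), cost(False, 0))
```

The two printed numbers are the per-company cost under “all shift” ($c$) and “all keep” ($N$), making the tragedy-of-the-commons gap explicit.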
Later on, we study a more powerful approach through the multigames framework. We call it the Sustainable Adoption Decision Problem (SADP): n independent countries share m possible concerns like population well-being, air pollution, economic stability, education, or preserving biodiversity. Each country has its own (subjective) priorities characterized by an m-dimensional vector and has to choose between keeping their current lifestyle or radically shifting to a more sustainable one.
We denote by $G_{m,n,a}$ a uniform multigame with $n$ agents, $m$ local games, and $a$ possible actions, assuming that all agents have the same number of possible actions. When we have games with two actions ($a = 2$), we call these two actions C (cooperation) and D (defection). In this paper, we study $G_{m,n,2}$ multigames with continuous type space and show that, with a simple condition on local games, the existence of a pure Bayesian Nash equilibrium is guaranteed. At the same time, we define the notion of threshold strategy and discuss the possibility of extending it to any number of actions. Then, we carry out the same kind of analysis for discrete games. For both continuous and discrete type spaces, we define a particular kind of $G_{2,2,2}$ multigame called the double-game prisoner’s dilemma (DGPD). In the final part, we propose algorithms that can efficiently find NE in particular situations and formulate postulates by exploring some properties of multigames.

1.3. Motivations and Applications

Real-world decision-makers rarely optimize a single objective. Firms balance profit against environmental impact and reputation (Porter & Kramer, 2011), countries weigh economic growth against sustainability (Stern, 2007), and investors consider returns alongside risk and social responsibility (Friede et al., 2015). These multidimensional preferences, combined with incomplete information about others’ priorities, motivate our study of linear multidimensional Bayesian games.
Our focus on pure (Nash) equilibria reflects their practical relevance. When organizations like companies or governments decide whether to adopt sustainable practices or negotiate agreements, decisions are typically binary rather than probabilistic mixtures (Heal & Kunreuther, 2011). Pure strategies offer clearer interpretation and direct implementation. The threshold strategies we identify—where agents act based on whether their type exceeds a critical value—naturally capture many real-world decision rules and provide tractable equilibrium characterizations.
The computational algorithms we develop bridge theory and practice. While equilibrium existence is guaranteed theoretically, finding equilibria in multidimensional games remains computationally challenging (Rabinovich et al., 2013). Our algorithms exploit the linear payoff structure to achieve efficiency, enabling analysis of realistic scenarios.
Applications span multiple domains. In environmental economics, our Sustainable Adoption Decision Problem (SADP) captures how countries with heterogeneous priorities over economic and environmental objectives can reach stable agreements without central coordination (Nordhaus, 2015). In corporate strategy, firms competing while balancing profit, social responsibility, and reputation can achieve stable market configurations when each has private information about its own priorities. Financial markets exhibit similar dynamics as investors with diverse preferences over return, risk, and ESG criteria reach equilibrium allocations (Pástor et al., 2021).
This framework addresses pressing global challenges requiring multidimensional trade-offs under uncertainty—climate change mitigation, pandemic response, sustainable development. By proving that pure equilibria exist in broad game classes and providing efficient algorithms to find them, we offer both theoretical insights and practical tools for policy design. The threshold structure reveals how incremental changes in preferences can trigger discrete behavioral shifts (Granovetter, 1978), while our classification of games illuminates which structures guarantee stable outcomes. These contributions suggest natural extensions to games with more actions and correlated types, suggesting promising directions for future research.

1.4. Related Work

While  (Krishna & Perry, 1998) introduced linear multidimensional Bayesian games for multi-object auctions, the literature on multidimensional games extends significantly beyond this foundation. (Reny, 2011) studied monotone equilibria in multi-unit auctions with interdependent values, and (Athey, 2001) characterized monotone equilibria in games with single-dimensional types, providing techniques partially applicable to multidimensional settings. The complexity of multidimensional mechanism design has been explored by (Manelli & Vincent, 2007), who analyzed revenue maximization in multiple-good monopolies, while (Rabinovich et al., 2013) developed algorithms for Bayesian games with continuous types but without exploiting linear payoff structures.
Our work builds on (Edalat et al., 2018), who established the equivalence between linear multidimensional Bayesian games and uniform multigames and proved existence of ex-post Nash equilibria. We extend their framework by focusing on pure Bayesian Nash equilibria, where agents must form beliefs about others’ types, rather than ex-post equilibria. We introduce threshold strategies as a unifying framework for both continuous and discrete type spaces, proving all equilibria in two-action games must have this structure. Additionally, we develop the first polynomial-time algorithms for finding these equilibria.
Our threshold strategy concept relates to the broader monotone strategy literature (McAdams, 2003; Milgrom & Shannon, 1994), but the linear payoff structure yields much sharper characterizations. Recent work by (Einy & Haimanko, 2023) on potential Bayesian games (Monderer & Shapley, 1996) provides complementary results under different assumptions. (He & Sun, 2019) identified necessary and sufficient conditions for pure equilibria in general Bayesian games; our condition—having purely competitive or cooperative local games—provides an alternative sufficient condition for our specific class.
Beyond theoretical contributions, our algorithms achieve polynomial-time complexity for two-action cases by exploiting threshold structures, contrasting with the exponential worst-case complexity of general methods. Our empirical classification of games based on equilibrium existence properties (Dresher, 1970; Rinott & Scarsini, 2000; Stanford, 1995, 1997, 1999) reveals patterns specific to linear multidimensional games. Applications span environmental economics (Harstad, 2012; Martimort, 2010) and other domains where agents balance multiple objectives under incomplete information.

2. Uniform Multigames with Continuous Type Space

2.1. General Remarks

We use the standard Lebesgue measure on finite dimensional Euclidean spaces. When the type space is continuous, there are three possibilities: the probability distribution is either discrete, continuous, or a mixture of both. If the probability distribution is fully discrete, we fall into the discrete type space case that we study later on. We choose to exclude the mixture case so that the distribution over the type space has no atomic value.
We suppose that the probability distribution for the game is absolutely continuous with respect to the Lebesgue measure. Denote the Lebesgue measure on $\mathbb{R}^m$ by $\lambda_m$; a probability distribution $p$ on $\mathbb{R}^m$ is absolutely continuous with respect to $\lambda_m$ if, for any measurable set $E$, we have $\lambda_m(E) = 0 \Rightarrow p(E) = 0$. Let $p_i$ be the probability distribution for agent $i \in I$. We assume that we can use a probability density function: recall that, for any measurable set $E$, the probability density function $f_i$ satisfies
$$p_i(\theta_{-i} \in E) = \int_E f_i(\theta_{-i})\, d\theta_{-i}$$
In our case, $p_i(\theta_{-i} \in E)$ is the probability, according to agent $i$, that the other agents have a type in the set $E \subseteq \Theta_{-i}$, and $f_i$ is the associated density function.
The type space $\Theta$ is assumed to be a compact subset of $\mathbb{R}^m$ and, thanks to the fact that the multigame is normalized, we have $\dim(\Theta_i) = m - 1$.
A pure strategy for agent $i$ is denoted by $s_i$ and a mixed strategy by $\sigma_i$. We use $\sigma_i$ if we do not know a priori the nature of the strategy. We recall that for a pure strategy $s_i : \Theta_i \to A_i$, the agent plays an action for a given type. For a mixed strategy $\sigma_i : \Theta_i \to \Delta A_i$, the agent follows a probability distribution for a given type, where $\sigma_i(\theta_i, a_i)$ is the probability of agent $i$ playing $a_i$ when its type is $\theta_i$.

2.2. Threshold Strategy

First, consider agent $i$’s expected utility $\bar{U}_i(a_i, \theta_i, \sigma_{-i})$ given that it plays action $a_i$, has type $\theta_i$, and the other agents follow the strategy $\sigma_{-i}$:
$$\bar{U}_i(a_i, \theta_i, \sigma_{-i}) = \int_{\theta_{-i}} f_i(\theta_{-i})\, U_i(a_i, \theta_i, \sigma_{-i}(\theta_{-i}))\, d\theta_{-i}$$
$U_i(a_i, \theta_i, \sigma_{-i}(\theta_{-i}))$ is agent $i$’s utility playing $a_i$ with type $\theta_i$ given that the others follow $\sigma_{-i}(\theta_{-i})$. This utility can be expressed in terms of $U_i(a_i, \theta_i, a_{-i})$ for $a_{-i} \in A_{-i}$:
$$U_i(a_i, \theta_i, \sigma_{-i}(\theta_{-i})) = \sum_{a_{-i} \in A_{-i}} U_i(a_i, \theta_i, a_{-i})\, \sigma_{-i}(\theta_{-i}, a_{-i})$$
As $G$ is a multigame, $U_i(a_i, \theta_i, a_{-i})$ can be expressed as follows:
$$U_i(a_i, \theta_i, a_{-i}) = \sum_{j \in J} u_{ij}(a_i, a_{-i})\, \theta_{ij}$$
We define
$$\zeta_i^{a_{-i}}(\sigma_{-i}) := \int_{\theta_{-i}} f_i(\theta_{-i})\, \sigma_{-i}(\theta_{-i}, a_{-i})\, d\theta_{-i}$$
$$\bar{u}_{ij}(a_i, \sigma_{-i}) := \sum_{a_{-i}} u_{ij}(a_i, a_{-i})\, \zeta_i^{a_{-i}}(\sigma_{-i})$$
Here, $\zeta_i^{a_{-i}}(\sigma_{-i})$ is the probability that $a_{-i}$ is played by the others given that they follow strategy $\sigma_{-i}$. Thus, $\bar{u}_{ij}(a_i, \sigma_{-i})$ is the expected utility for agent $i$ in the local game $j$ if it plays action $a_i$ and the others follow the strategy $\sigma_{-i}$. Using these, we can write the following:
\begin{align}
\bar{U}_i(a_i, \theta_i, \sigma_{-i}) &= \int_{\theta_{-i}} f_i(\theta_{-i}) \sum_{a_{-i}} \sigma_{-i}(\theta_{-i}, a_{-i}) \sum_{j \in J} u_{ij}(a_i, a_{-i})\, \theta_{ij}\, d\theta_{-i} && (6) \\
&= \sum_{j \in J} \theta_{ij} \sum_{a_{-i}} u_{ij}(a_i, a_{-i}) \int_{\theta_{-i}} f_i(\theta_{-i})\, \sigma_{-i}(\theta_{-i}, a_{-i})\, d\theta_{-i} && (7) \\
&= \sum_{j \in J} \theta_{ij} \sum_{a_{-i}} u_{ij}(a_i, a_{-i})\, \zeta_i^{a_{-i}}(\sigma_{-i}) && (8) \\
&= \sum_{j \in J} \theta_{ij}\, \bar{u}_{ij}(a_i, \sigma_{-i}) && (9) \\
&= \theta_i \cdot (\bar{u}_{ij}(a_i, \sigma_{-i}))_{j \in J} && (10)
\end{align}
We thus have a more compact and explicit expression of the expected utility $\bar{U}_i(a_i, \theta_i, \sigma_{-i})$. Indeed, it can be computed as the scalar product of agent $i$’s type vector and the vector of expected utilities for the local games.
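As a sketch of this scalar-product form, the snippet below computes $\bar{U}_i$ from given opponent-action probabilities $\zeta$ and local payoffs; all numbers (the $\zeta$ values and the two local games) are invented for illustration.

```python
# Sketch of Equation (10): once the per-local-game expected utilities
# u_bar_ij(a_i, sigma_{-i}) are known, the overall expected utility is just a
# scalar product with the type vector. The probabilities zeta and the local
# payoffs below are invented for illustration.
ACTIONS = ["C", "D"]
zeta = {"C": 0.7, "D": 0.3}    # zeta_i^{a_{-i}}(sigma_{-i}): opponent action probs
u_local = {                    # u_ij(a_i, a_{-i}) for two local games j = 0, 1
    0: {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 4, ("D", "D"): 1},
    1: {("C", "C"): 2, ("C", "D"): 2, ("D", "C"): 0, ("D", "D"): 0},
}

def u_bar(j, a_i):
    """u_bar_ij: expected utility in local game j, averaging over zeta."""
    return sum(u_local[j][(a_i, a)] * zeta[a] for a in ACTIONS)

def expected_utility(a_i, theta_i):
    """Equation (10): U_bar_i(a_i, theta_i, sigma_{-i}) = theta_i . (u_bar_ij)_j."""
    return sum(t * u_bar(j, a_i) for j, t in enumerate(theta_i))

print(expected_utility("C", (0.5, 0.5)), expected_utility("D", (0.5, 0.5)))
```

Comparing the two printed values for C and D is exactly the sign test $\theta_i \cdot \delta_i$ developed next.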
So far, we have kept the game parameters $m$, $n$, and $a$ as general as possible; the previous expression holds for any choice of those parameters. To go further with the analysis, we consider $a = 2$. We will discuss the extension to any number of actions in Section 4.
We aim to evaluate whether agent $i$ with type $\theta_i$, facing opponents’ strategy $\sigma_{-i}$, prefers to play C or D. We compare $\bar{U}_i(C, \theta_i, \sigma_{-i})$ and $\bar{U}_i(D, \theta_i, \sigma_{-i})$:
\begin{align}
\bar{U}_i(C, \theta_i, \sigma_{-i}) - \bar{U}_i(D, \theta_i, \sigma_{-i}) &= \sum_{j \in J} \theta_{ij} \left( \bar{u}_{ij}(C, \sigma_{-i}) - \bar{u}_{ij}(D, \sigma_{-i}) \right) && (11) \\
&= \sum_{j \in J} \theta_{ij}\, \delta_{ij}(\sigma_{-i}) && (12) \\
&= \theta_i \cdot \delta_i(\sigma_{-i}) && (13)
\end{align}
where $\delta_{ij}(\sigma_{-i}) = \bar{u}_{ij}(C, \sigma_{-i}) - \bar{u}_{ij}(D, \sigma_{-i})$ and $\delta_i(\sigma_{-i}) = (\delta_{ij}(\sigma_{-i}))_{j \in J}$. This difference, expressed as a scalar product, indicates the best action for agent $i$: if it is strictly positive, then the best action is C; if it is strictly negative, then the best action is D; and if it is equal to zero, then any mixed combination of C and D is a best response.
Definition 7 (Threshold strategy).
The vector $\delta_i$ is called agent $i$’s threshold. A threshold strategy $\sigma_i$ with threshold $\delta_i$ for agent $i$ is a strategy such that
$$\sigma_i(\theta_i) = \begin{cases} C & \text{if } \theta_i \cdot \delta_i > 0 \\ D & \text{if } \theta_i \cdot \delta_i < 0 \\ \alpha_i(\theta_i)\, C + (1 - \alpha_i(\theta_i))\, D & \text{if } \theta_i \cdot \delta_i = 0, \end{cases}$$
where $\alpha_i(\theta_i) \in [0, 1]$. Such a strategy is also denoted by $(\sigma_i, \delta_i, \alpha_i)$.
The first and second cases ($\theta_i \cdot \delta_i \neq 0$) are called the pure components of the strategy, and the last case ($\theta_i \cdot \delta_i = 0$) is called the mixed component. Notice that, as a direct consequence of the above, the best response is always a threshold strategy, and thus any Bayesian Nash equilibrium $\sigma = (\sigma_1, \sigma_2, \dots, \sigma_n)$ is exclusively composed of threshold strategies. Also, a threshold strategy is said to be pure when $\forall \theta_i,\ \alpha_i(\theta_i) \in \{0, 1\}$.
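A pure threshold strategy from Definition 7 can be sketched directly; the threshold vector below is illustrative, and the boundary case is resolved to a fixed pure action (i.e., $\alpha_i \equiv 1$ or $\alpha_i \equiv 0$).

```python
# A pure threshold strategy (Definition 7): the action is decided by the sign
# of the scalar product theta_i . delta_i, with the tie case theta_i . delta_i = 0
# resolved to a fixed pure action. The threshold vector is illustrative.
def threshold_strategy(theta_i, delta_i, tie_action="C"):
    """Play C if theta.delta > 0, D if < 0, a fixed pure action on the boundary."""
    dot = sum(t * d for t, d in zip(theta_i, delta_i))
    if dot > 0:
        return "C"
    if dot < 0:
        return "D"
    return tie_action

delta = (1.0, -2.0)                            # illustrative threshold vector
print(threshold_strategy((0.8, 0.2), delta))   # 0.8 - 0.4 > 0, so C
print(threshold_strategy((0.2, 0.8), delta))   # 0.2 - 1.6 < 0, so D
```

Geometrically, the hyperplane $\theta_i \cdot \delta_i = 0$ splits the type space into a C-region and a D-region, which is what the measure-zero argument in the proof of Theorem 3 exploits.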

2.3. Existence of Pure Bayesian Nash Equilibrium

Theorem 3.
If a (normalized) uniform multigame $G_{m,n,2}$ with continuous type space and continuous prior has a mixed-strategy Nash equilibrium $(\sigma_1, \sigma_2, \dots, \sigma_n)$ with non-zero threshold vectors for all agents, then it has a pure Bayesian Nash equilibrium.
Proof. 
Consider a mixed Nash equilibrium $\sigma = (\sigma_1, \sigma_2, \dots, \sigma_n)$ that satisfies the condition given in the theorem. We show that, starting from the mixed strategy $\sigma$, we can derive a pure-strategy Nash equilibrium $s = (s_1, s_2, \dots, s_n)$.
First, we notice that the $\sigma_i$’s are threshold strategies $(\sigma_i, \delta_i^*, \alpha_i)$ with threshold $\delta_i^* = \delta_i(\sigma_{-i})$, as they are the best responses for each agent given the opponents’ strategies. We suppose that there is at least one agent $i$ such that $\sigma_i$ is a mixed strategy ($\alpha_i(\theta_i) \notin \{0, 1\}$ for some $\theta_i$), because otherwise $\sigma$ is already a pure-strategy Nash equilibrium and the proof is over. For each $i \in I$, derive the pure strategy $s_i$ from $\sigma_i$ by replacing the mixed component with a pure action, i.e., by setting $\alpha_i \in \{0, 1\}$. We now demonstrate that $s = (s_1, s_2, \dots, s_n)$ is a Bayesian Nash equilibrium for $G$.
By construction, the $s_i$’s are threshold strategies with the same thresholds as the $\sigma_i$’s. So, if we can show that $\delta_i(\sigma_{-i}) = \delta_i(s_{-i})$ for all agents $i$, then $s$ is a Bayesian Nash equilibrium. For this purpose, we prove that
$$\forall i \in I,\ \forall a_{-i} \in A_{-i}:\quad \zeta_i^{a_{-i}}(\sigma_{-i}) = \zeta_i^{a_{-i}}(s_{-i})$$
by computing the difference:
$$\zeta_i^{a_{-i}}(\sigma_{-i}) - \zeta_i^{a_{-i}}(s_{-i}) = \int_{\theta_{-i}} f_i(\theta_{-i}) \left[ \sigma_{-i}(\theta_{-i}, a_{-i}) - s_{-i}(\theta_{-i}, a_{-i}) \right] d\theta_{-i}.$$
Since $\delta_i(\sigma_{-i})$ is a non-zero vector, the set $E_i = \{\theta_i \mid \theta_i \cdot \delta_i(\sigma_{-i}) = 0\}$ is contained in a hyperplane of $\mathbb{R}^{m-1}$. Thus, for any agent $i$, the set $\Theta_1 \times \dots \times E_i \times \dots \times \Theta_n$ has zero measure. Also, note that $\sigma_{-i}(\theta_{-i}, a_{-i}) \neq s_{-i}(\theta_{-i}, a_{-i})$ only when $\theta_{-i} \in \bigcup_{k \neq i} \Theta_{-\{i,k\}} \times E_k$, where the latter set is a finite union of null sets. In other words, $\sigma_{-i}(\theta_{-i}, a_{-i})$ and $s_{-i}(\theta_{-i}, a_{-i})$ are equal almost everywhere. So, the difference expressed by the integral is zero. This shows that the constructed pure strategy $s$ is a Nash equilibrium solution.    □
Theorem 3 provides a pure Bayesian Nash equilibrium for uniform multigames with two actions, but it relies on a specific condition on mixed-strategy solutions. In practice, we do not want to enumerate all possible mixed solutions just to check whether any one of them has non-zero threshold vectors for all agents. Fortunately, we can find conditions that do not rely on the mixed solutions but can help us determine whether there exist pure solutions.
Definition 8 (Purely cooperative/competitive local game).
A local game $j \in J$ is said to be purely cooperative for agent $i \in I$ if
$$\forall a_{-i} \in A_{-i}:\quad u_{ij}(C, a_{-i}) > u_{ij}(D, a_{-i})$$
and purely competitive if
$$\forall a_{-i} \in A_{-i}:\quad u_{ij}(C, a_{-i}) < u_{ij}(D, a_{-i})$$
This notion is equivalent to the condition that C (respectively, D) is a strictly dominant strategy for agent $i$ in game $j$.
Proposition 1.
An agent with at least one purely competitive (or cooperative) local game will always play a threshold strategy with a non-zero threshold, regardless of the opponents’ strategy $\sigma_{-i}$.
Proof. 
Assume that game $k$ is purely cooperative for agent $i$ (the same reasoning applies to a purely competitive game). Let us compute $\delta_{ik}(\sigma_{-i})$:
$$\delta_{ik}(\sigma_{-i}) = \sum_{a_{-i}} \left( u_{ik}(C, a_{-i}) - u_{ik}(D, a_{-i}) \right) \zeta_i^{a_{-i}}(\sigma_{-i}).$$
Since the terms $\zeta_i^{a_{-i}}(\sigma_{-i})$ express probabilities, they are non-negative with $\sum_{a_{-i}} \zeta_i^{a_{-i}}(\sigma_{-i}) = 1$. Because the local game $k$ is purely cooperative for agent $i$, we have $u_{ik}(C, a_{-i}) - u_{ik}(D, a_{-i}) > 0$ for all $a_{-i} \in A_{-i}$. Thus, the sum is strictly positive and $\delta_{ik}(\sigma_{-i}) > 0$, which implies $\delta_i(\sigma_{-i}) \neq 0$.    □
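The key step of this proof can be checked numerically: for an illustrative purely cooperative local game, $\delta_{ik}$ stays strictly positive for every opponent action distribution $\zeta$ swept over a grid. The payoffs and the grid resolution are invented for illustration.

```python
# Numeric check of Proposition 1's key step: if local game k is purely
# cooperative (u_ik(C, a) > u_ik(D, a) for every opponent profile a), then
# delta_ik = sum_a (u_ik(C, a) - u_ik(D, a)) * zeta_a is positive for ANY
# probability vector zeta. Payoffs and the zeta grid are illustrative.
u_C = {"C": 2.0, "D": 2.0}   # u_ik(C, a_{-i}): cooperation strictly dominates
u_D = {"C": 0.5, "D": 1.0}   # u_ik(D, a_{-i})

def delta_ik(zeta):
    """delta_ik(sigma_{-i}) as a function of the opponent action distribution."""
    return sum((u_C[a] - u_D[a]) * zeta[a] for a in ("C", "D"))

# sweep a grid of opponent action distributions zeta (zeta_C + zeta_D = 1)
grid = [{"C": p / 10, "D": 1 - p / 10} for p in range(11)]
assert all(delta_ik(z) > 0 for z in grid)
print(min(delta_ik(z) for z in grid))
```

The minimum over the grid is bounded below by the smallest payoff gap $u_{ik}(C, a_{-i}) - u_{ik}(D, a_{-i})$, which is exactly why no opponent strategy can drive $\delta_{ik}$ to zero.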
In the light of Proposition 1 we easily deduce the following:
Theorem 4.
Any normalized uniform multigame G m , n , 2 with continuous type space and continuous prior and at least one purely competitive/cooperative local game for each agent has a pure Bayesian Nash equilibrium.
Proof. 
We note that, according to Proposition 1, all agents must play a threshold strategy with a non-zero threshold. So, any mixed Nash equilibrium must comprise threshold strategies with non-zero thresholds. Thus, the conditions of Theorem 3 hold, and a pure Bayesian Nash equilibrium exists.    □

2.4. The Double Game Prisoner's Dilemma

In this subsection, we define and study a more particular type of multigame from (Edalat et al., 2012) called the double game prisoner's dilemma (DGPD). We will also refer to such multigames in the section on discrete type spaces.
Definition 9.
The double game prisoner’s dilemma is a ( 2 , 2 , 2 ) -multigame (two agents, two local games, two actions) such that the first local game is a prisoner’s dilemma game (described by Table 1) and the second local game is a “social game” motivating cooperation (described by Table 2).
The coefficients follow the given conditions (illustrated in Figure 1):
s = z , t > r > y > p > s , r > ( t + s ) / 2 , y > ( r + p ) / 2
There are two games, so we choose to denote the vector types of the two agents by $(1-\theta_1, \theta_1)$ and $(1-\theta_2, \theta_2)$, where $\theta_i$ is called the pro-social coefficient of agent $i$.
As a direct consequence of Theorem 4, we can easily deduce the existence of a pure Bayesian Nash equilibrium.
Corollary 1.
For any DGPD with a continuous type space and a continuous prior, there exists a pure Bayesian Nash equilibrium (comprising threshold strategies).
Proof. 
This is a direct consequence of Theorem 4 as the social game is a purely cooperative game for both agents.    □
Replacing u i j ( a i , a i ) and θ i in the previous computations by DGPD parameters gives us the following:
$$\bar U_i(C, \theta_i, \sigma_{-i}) = (1-\theta_i)\left(\zeta_i^C(\sigma_{-i})\, r + \zeta_i^D(\sigma_{-i})\, s\right) + \theta_i\, y$$
$$\bar U_i(D, \theta_i, \sigma_{-i}) = (1-\theta_i)\left(\zeta_i^C(\sigma_{-i})\, t + \zeta_i^D(\sigma_{-i})\, p\right) + \theta_i\, s$$
Note that for a given $\sigma_{-i}$, the expected values $\bar U_i(C, \theta_i, \sigma_{-i})$ and $\bar U_i(D, \theta_i, \sigma_{-i})$ are linear functions in $\theta_i$, and they cross at some $\theta_i = \theta_i^* \in [0, 1]$ since we have
$$y \geq s, \qquad \zeta_i^C(\sigma_{-i})\, r + \zeta_i^D(\sigma_{-i})\, s \leq \zeta_i^C(\sigma_{-i})\, t + \zeta_i^D(\sigma_{-i})\, p$$
(the difference $\bar U_i(C, \cdot) - \bar U_i(D, \cdot)$ is non-positive at $\theta_i = 0$ and non-negative at $\theta_i = 1$).
As a result of this, we can formulate a more convenient definition for threshold strategy in the DGPD context.
Definition 10 (DGPD Threshold strategy).
A threshold strategy σ i with threshold θ i * for agent i is a strategy such that
$$\sigma_i(\theta_i) = \begin{cases} D & \theta_i < \theta_i^* \\ C & \theta_i > \theta_i^* \\ \alpha_i\, C + (1-\alpha_i)\, D & \theta_i = \theta_i^* \end{cases}$$
Again, by construction, the best response must be a threshold strategy as we have just rearranged the notation compared to the general definition. Note that when the pro-social coefficient is low (i.e., below the threshold), agent i defects, and when the pro-social coefficient is high (i.e., above the threshold), agent i cooperates. Figure 2 summarizes the concept of threshold strategy in the context of DGPD.
We define the threshold function $\theta_i^* : [0,1] \to \mathbb{R}$ by the following:
$$\theta_i^*(x) = \frac{x(t-r) + (1-x)(p-s)}{x(t-r) + (1-x)(p-s) + (y-s)}$$
Using a simple calculation, we conclude that, given the other agent strategy σ i , agent i’s best response is a threshold strategy with threshold θ i * = θ i * ( ζ i C ( σ i ) ) .
We now define λ and μ that are combinations of DGPD payoff parameters as in (Edalat et al., 2018):
$$\mu := \theta_i^*(0) = \frac{p-s}{p-s+y-s}, \qquad \lambda := \theta_i^*(1) = \frac{t-r}{t-r+y-s}$$
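As a quick numerical illustration, the threshold function and the two constants can be coded directly. The payoff values below are hypothetical (not taken from the paper's tables) but satisfy the DGPD constraints of Definition 9:

```python
def threshold(x, t, r, y, p, s):
    """DGPD best-response threshold as a function of x = zeta_i^C."""
    num = x * (t - r) + (1 - x) * (p - s)
    return num / (num + (y - s))

# Hypothetical payoffs satisfying t > r > y > p > s, r > (t+s)/2, y > (r+p)/2
t, r, y, p, s = 5.0, 3.0, 2.5, 1.0, 0.0

mu = threshold(0, t, r, y, p, s)   # (p-s)/(p-s+y-s) = 1/3.5
lam = threshold(1, t, r, y, p, s)  # (t-r)/(t-r+y-s) = 2/4.5

print(mu, lam)  # mu < lam here, so the threshold function is increasing
```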
Proposition 2 ( θ i * monotonicity).
The threshold function $\theta_i^* : [0,1] \to \mathbb{R}$ is monotonic: it is increasing if $\mu < \lambda$, decreasing if $\mu > \lambda$, and constant if $\mu = \lambda$.
Proof. 
The monotonicity is straightforward. Then, we just note that θ i * ( 0 ) = μ and θ i * ( 1 ) = λ .    □
Proposition 3.
For any DGPD with continuous type space and continuous prior, there exists a pure Bayesian Nash equilibrium. This equilibrium comprises threshold strategies with thresholds θ i * [ min ( λ , μ ) , max ( λ , μ ) ] .
Proof. 
The bound $\theta_i^* = \theta_i^*(\zeta_i^C(\sigma_{-i})) \in [\min(\lambda,\mu), \max(\lambda,\mu)]$ follows from the facts that $\zeta_i^C(\sigma_{-i}) \in [0,1]$ and $\theta_i^*(x)$ is monotonic on $[0,1]$ with endpoint values $\theta_i^*(0) = \mu$ and $\theta_i^*(1) = \lambda$.    □

2.5. Example for the SADP

Consider a simple situation with two companies, each having two different visions and two possible actions. Assume, for example, that the first vision is to protect the brand reputation and the second is to gain market share. Suppose that there is an opportunity to establish a new production facility in a controversial area. Seizing the opportunity will certainly help gain market share but will also negatively impact the reputation. Each company can either pass on this opportunity (C) or compete to set up a new site in the area (D). Table 3 and Table 4 summarize this situation:
Both companies have a continuous type space and prior. According to our last result, there exists a pure Bayesian Nash equilibrium comprising threshold strategies. By computing λ = 1 / 2 and μ = 1 / 3 , we also know that the thresholds θ i * are in [ 1 3 , 1 2 ] . If we add the assumption that priors are uniform, we can show (see Section 2.7 on Algorithmic results) that
$$\theta_1^* = \theta_2^* = \frac{5 - \sqrt{17}}{2} \approx 0.4384$$
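The symmetric threshold can be recovered numerically by iterating the uniform-prior best-response map $\theta \mapsto \theta_i^*(1-\theta)$ (see Section 2.7). The payoffs below are a hypothetical choice, not the ones of Table 3 and Table 4, calibrated so that $\lambda = 1/2$ and $\mu = 1/3$:

```python
# Hypothetical SADP-style payoffs giving lambda = 1/2 and mu = 1/3
t, r, y, p, s = 4.5, 2.5, 2.0, 1.0, 0.0

def threshold(x):
    num = x * (t - r) + (1 - x) * (p - s)
    return num / (num + (y - s))

assert abs(threshold(1) - 0.5) < 1e-12    # lambda
assert abs(threshold(0) - 1 / 3) < 1e-12  # mu

# Uniform prior: zeta_i^C = 1 - theta_{-i}^*, so a symmetric equilibrium
# is a fixed point of theta -> threshold(1 - theta).
theta = 0.0
for _ in range(200):
    theta = threshold(1 - theta)

print(theta)  # ~0.4384 = (5 - sqrt(17)) / 2
```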

2.6. Application to Other Two-Player Games

In this section, we present other examples of multigames for which we can apply our result on the existence of pure NE.

2.6.1. A Chicken Game Variation

In the game of chicken (also called the hawk–dove game) (Osborne & Rubinstein, 1994) there are two agents that can either pursue conflict (Conflict) or avoid it (Avoid). The best outcome for an agent is to play Conflict while the other plays Avoid. As opposed to the PD, if they both play Conflict, they both face the worst outcome. An example is given in Table 5.
Because it is mainly a toy example like the PD, this game can only model very specific situations. We suggest that the game can be understood as a combination of two drivers: ego and survival. Under the ego consideration, we want to play Conflict so as not to be considered the “chicken”; it is important to show that we dominate our opponent and have a stronger mind. Under the survival consideration, we mainly want to avoid the situation where both agents play Conflict. An example of both games is given in Table 6 and Table 7. The double game comprising those two games with continuous priors has a pure Nash equilibrium because the ego game has a strictly dominant strategy (Conflict).

2.6.2. A Battle of Sexes Variation

In the Battle of Sexes (also called Bach or Stravinsky), there are two agents that want to meet in an event but have opposed tastes (see Table 8). The main goal of both is to spend time together, but they also value going to the event they enjoy the most.
As presented in the previous example, we can try to decompose considerations into taste and social considerations. In the taste game (Table 9) both agents are motivated to follow their taste no matter what the other does. In the social game (Table 10), the agents want to be at the same event, no matter which event it is.
Like before, the double game comprising those two games with continuous priors has a pure Nash equilibrium because the taste game has a strictly dominant strategy (Bach for agent 1 and Stravinsky for agent 2).

2.6.3. An Assurance Game Variation

The assurance game (or Stag Hunt) is a two-player game involving a conflict between personal safety and social cooperation. An example of payoff matrix is given in Table 11. In essence, this game is very similar to the prisoner’s dilemma. Thus, we naturally combine it with a social game and obtain a variation of the DGPD.

2.7. Equilibrium Computation

2.7.1. Uniform Prior

Recall the definition of ζ i C ( σ i ) and θ i * ( ζ i C ) that characterize the best response of an agent given its opponent’s strategy (assuming that they both follow a threshold strategy):
$$\zeta_i^C(\sigma_{-i}) = p_{-i}\left(\theta_{-i} \in [\max(\lambda,\mu), 1]\right) + \int_{\min(\lambda,\mu)}^{\max(\lambda,\mu)} f_{-i}(\theta_{-i})\,\sigma_{-i}(\theta_{-i}, C)\,d\theta_{-i}$$
$$\theta_i^*(x) = \frac{x(t-r) + (1-x)(p-s)}{x(t-r) + (1-x)(p-s) + (y-s)}$$
In the case of a uniform prior, $p_{-i}(\theta_{-i} \in [\max(\lambda,\mu), 1]) = 1 - \max(\lambda,\mu)$. For simplicity, we use the notation $\alpha = \min(\lambda,\mu)$ and $\beta = \max(\lambda,\mu)$. Given that the opponent plays a threshold strategy with threshold $\theta_{-i}^*$, we have
$$\zeta_i^C(\sigma_{-i}) = 1 - \beta + \int_{\theta_{-i}^*}^{\beta} f_{-i}(\theta_{-i})\,d\theta_{-i} = 1 - \beta + (\beta - \theta_{-i}^*) = 1 - \theta_{-i}^*$$
Therefore, we can rewrite the threshold function $\theta_i^*$ as a function of $\theta_{-i}^*$:
$$\theta_i^*(\theta_{-i}^*) = \frac{(1-\theta_{-i}^*)(t-r+s-p) + (p-s)}{(1-\theta_{-i}^*)(t-r+s-p) + (p-s) + (y-s)} = \frac{d(1-\theta_{-i}^*) + e}{d(1-\theta_{-i}^*) + e + f},$$
where
$$d := t-r+s-p, \qquad e := p-s, \qquad f := y-s$$
First, assume that the solution is symmetric for both players, meaning that θ 1 * = θ 2 * :
$$\theta_i^* = \frac{d(1-\theta_i^*) + e}{d(1-\theta_i^*) + e + f}$$
which is reduced to
$$-d(\theta_i^*)^2 + (2d+e+f)\,\theta_i^* - (d+e) = 0$$
This is a quadratic equation. To evaluate the number of solutions we compute the discriminant:
$$\Delta = (2d+e+f)^2 - 4d(d+e) \quad (23)$$
$$\phantom{\Delta} = (e+f)^2 + 4df \quad (24)$$
If we search for non-symmetric solutions, the condition $\theta_i^* = \theta_{-i}^*$ no longer holds. By performing the same kind of computation, we end up with
$$-d(e+f)(\theta_i^*)^2 + \left((d+e+f)^2 - d^2\right)\theta_i^* - \left(e(d+e+f) + df\right) = 0 \quad (25)$$
This is also a quadratic equation, and we notice that it is the same quadratic as in the symmetric case but multiplied by the constant $(e+f) \neq 0$. Hence, both equations have exactly the same solutions.
Proposition 4.
Under the DGPD assumptions, the quadratic Equations (23) and (25) always have two solutions.
Proof. 
The discriminant of those quadratics can be written as $\Delta = (e-f)^2 + 4(d+e)f$. From the constraints on the DGPD parameters, $f > 0$ and $d+e = t-r > 0$; we thus have $\Delta > 0$ and the quadratics have the two solutions $x_- = \frac{(2d+e+f) + \sqrt{\Delta}}{2d}$ and $x_+ = \frac{(2d+e+f) - \sqrt{\Delta}}{2d}$.    □
Now, we need to check the validity of the solutions, i.e., $\alpha \leq x_{\mathrm{sol}} \leq \beta$.
Proposition 5.
Among the two solutions of the quadratics (23) and (25), x + is always valid, and x is always invalid.
Proof. 
By computing $x_- - \mu$ and $x_- - \lambda$ we can notice that, depending on the sign of $d$, we have either $x_- > \max(\mu, \lambda)$ or $x_- < \min(\mu, \lambda)$. In both cases, the solution is not valid. With the same kind of reasoning, we can show that, depending on the sign of $d$, we have either $\mu < x_+ < \lambda$ or $\lambda < x_+ < \mu$. Thus, $x_+$ is always valid.    □
We conclude that such games always have exactly one pure NE, which is symmetric (i.e., both players have the same threshold), namely $\theta_1^* = \theta_2^* = x_+$.
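Proposition 5 can be checked numerically. In the sketch below, $x_+$ denotes the root $((2d+e+f) - \sqrt{\Delta})/(2d)$ and $x_-$ the other one; the two payoff sets are hypothetical, one with $d > 0$ and one with $d < 0$:

```python
import math

def roots(t, r, y, p, s):
    """Return (x_plus, x_minus, min(lam, mu), max(lam, mu)) for a DGPD."""
    d, e, f = t - r + s - p, p - s, y - s
    b = 2 * d + e + f
    delta = b * b - 4 * d * (d + e)          # = (e+f)^2 + 4df > 0
    x_plus = (b - math.sqrt(delta)) / (2 * d)
    x_minus = (b + math.sqrt(delta)) / (2 * d)
    lam = (t - r) / (t - r + f)
    mu = e / (e + f)
    return x_plus, x_minus, min(lam, mu), max(lam, mu)

# Hypothetical payoff sets, one with d = 1 > 0 and one with d = -1 < 0
for payoffs in [(4.5, 2.5, 2.0, 1.0, 0.0),
                (4.5, 3.5, 3.0, 2.0, 0.0)]:
    x_plus, x_minus, lo, hi = roots(*payoffs)
    assert lo <= x_plus <= hi          # x+ is valid
    assert not (lo <= x_minus <= hi)   # x- is invalid
```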

2.7.2. General Solution

We denote by F i ( x ) the cumulative distribution function of agent i’s prior f i ( θ i ) . In the case of a uniform prior, we were able to explicitly find F i ( · ) and deduce an algebraic equation for the solutions. To find a solution for any prior, we have to solve the following set of equations:
$$\theta_1^* = \frac{A_2 - d\,F_2(\theta_2^*)}{B_2 - d\,F_2(\theta_2^*)}$$
$$\theta_2^* = \frac{A_1 - d\,F_1(\theta_1^*)}{B_1 - d\,F_1(\theta_1^*)}$$
with
$$\zeta_i^C(\sigma_{-i}) = F_{-i}(1) - F_{-i}(\theta_{-i}^*), \qquad A_i = d\,F_i(1) + e, \qquad B_i = A_i + f$$
Therefore, for a continuous type space, the NE search is equivalent to solving a nonlinear multivariate equation:
$$f(\theta_1^*, \theta_2^*) = \left(\theta_1^* - \frac{A_2 - d\,F_2(\theta_2^*)}{B_2 - d\,F_2(\theta_2^*)},\;\; \theta_2^* - \frac{A_1 - d\,F_1(\theta_1^*)}{B_1 - d\,F_1(\theta_1^*)}\right)$$
find ( θ 1 * , θ 2 * ) such that f ( θ 1 * , θ 2 * ) = 0
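A minimal sketch of this general NE search, using a plain best-response iteration rather than a dedicated root finder; the prior CDF $F(x) = x^2$ and the payoffs are hypothetical choices:

```python
# Hypothetical DGPD payoffs (lambda = 1/2, mu = 1/3) and prior CDF F(x) = x^2
t, r, y, p, s = 4.5, 2.5, 2.0, 1.0, 0.0
d, e, f = t - r + s - p, p - s, y - s

def F(x):
    """Hypothetical non-uniform prior CDF, identical for both agents here."""
    return max(0.0, min(1.0, x)) ** 2

def best_response(theta_other):
    zeta = F(1.0) - F(theta_other)   # probability that the opponent plays C
    num = d * zeta + e
    return num / (num + f)

th1 = th2 = 0.0
for _ in range(200):
    th1, th2 = best_response(th2), best_response(th1)

# Residual of the system, and the bound from Proposition 3
assert abs(th1 - best_response(th2)) < 1e-9
assert abs(th2 - best_response(th1)) < 1e-9
assert 1 / 3 <= th1 <= 1 / 2
```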

3. Uniform Multigames with Discrete Type Space

3.1. General Remarks

The type space is now assumed to be discrete. The term $\bar U_i(a_i, \theta_i, \sigma_{-i})$, for $i \in I$, is the same as in the continuous case except that the integral is replaced with a sum. Thus, we keep the same notations and consider
$$\zeta_i^{a_{-i}}(\sigma_{-i}) := \sum_{\theta_{-i} \in \Theta_{-i}} p_{-i}(\theta_{-i})\, \sigma_{-i}(\theta_{-i}, a_{-i})$$
$$\bar U_i(a_i, \theta_i, \sigma_{-i}) = \sum_{\theta_{-i} \in \Theta_{-i}} p_{-i}(\theta_{-i})\, U_i(a_i, \sigma_{-i}(\theta_{-i}), \theta_i) = \theta_i \cdot \left(\bar u_i^j(a_i, \sigma_{-i})\right)_{j \in J}$$
When there are only two actions, the notion of threshold strategy remains exactly the same; however, since the type space is discrete, two strategies with different thresholds can take the same values on all types $\theta_i \in \Theta_i$ and actions $a_i \in A_i$.
Definition 11 (Equivalent strategies and thresholds).
Two strategies $\sigma_i$ and $\sigma_i'$ for player $i \in I$ are said to be equivalent ($\sigma_i \sim \sigma_i'$) if
$$\forall (\theta_i, a_i) \in \Theta_i \times A_i, \quad \sigma_i(\theta_i, a_i) = \sigma_i'(\theta_i, a_i)$$
Given two threshold strategies $(\sigma_i, \delta_i)$ and $(\sigma_i', \delta_i')$, the thresholds $\delta_i$ and $\delta_i'$ are said to be equivalent ($\delta_i \sim \delta_i'$) if $\sigma_i \sim \sigma_i'$.
Notice that the binary relation ∼ restricted to threshold strategies is obviously an equivalence relation.
We also note that, in contrast to the relation between strategies, the relation between thresholds is not an equivalence relation. Indeed, if a threshold leads to a mixed threshold strategy, we cannot even write $\delta_i \sim \delta_i$ because the $\alpha_i$ of Definition 7 is not constrained. In other words, two strategies with the same threshold $\delta_i$ can be different (as long as the mixed case is reached by some $\theta_i \in \Theta_i$). Note that a threshold strategy $\sigma_i$ is mixed if and only if there exists $\theta_i \in \Theta_i$ such that $\theta_i \cdot \delta_i = 0$.
Given a threshold strategy $\sigma_i$, let $S_{eq}(\sigma_i) := \{\sigma_i' \mid \sigma_i' \sim \sigma_i\}$ be the set of threshold strategies equivalent to $\sigma_i$. Given a threshold $\delta_i$, let $T_{eq}(\delta_i) := \{\delta_i' \mid \delta_i' \sim \delta_i\}$ be the set of thresholds equivalent to $\delta_i$.
Proposition 6.
Suppose that a threshold $\delta_i$ for $i \in I$ leads to a pure threshold strategy (i.e., $\forall \theta_i \in \Theta_i$, $\theta_i \cdot \delta_i \neq 0$). Then, we have the following properties:
1. 
T e q ( δ i ) is a non-empty set.
2. 
For any (strictly) positive $\lambda \in \mathbb{R}_+^*$, $\lambda \delta_i \in T_{eq}(\delta_i)$.
3. 
T e q ( δ i ) is a convex set.
4. 
If Θ i is finite, T e q ( δ i ) contains vectors that are not collinear with δ i .
Proof. 
1. We have $\delta_i \in T_{eq}(\delta_i)$; thus, the latter set is non-empty.
2.
Since the strategy only depends on the sign of θ i · δ i , multiplying by a strictly positive λ has no impact on the resulting action.
3.
Take $\delta_i^1, \delta_i^2 \in T_{eq}(\delta_i)$, $\lambda \in [0,1]$ and $\delta_i^3 = \lambda \delta_i^1 + (1-\lambda)\delta_i^2$; then notice that $\delta_i^3 \cdot \theta_i = \lambda (\delta_i^1 \cdot \theta_i) + (1-\lambda)(\delta_i^2 \cdot \theta_i)$ has the same sign as $\delta_i \cdot \theta_i$, so that $\delta_i^3 \in T_{eq}(\delta_i)$.
4.
Assume that $\Theta_i$ is finite and consider $\epsilon = \frac{1}{m+1} \min_{\theta_i \in \Theta_i} |\theta_i \cdot \delta_i|$ and $\boldsymbol{\epsilon} = (\epsilon, \epsilon, \ldots, \epsilon)$. Let $\delta_i' := \delta_i + \boldsymbol{\epsilon}$:
$$\left|\theta_i \cdot \delta_i' - \theta_i \cdot \delta_i\right| = \left|\theta_i \cdot \boldsymbol{\epsilon}\right| \leq m\,\epsilon < \min_{\theta_i \in \Theta_i} |\theta_i \cdot \delta_i|$$
So, $\theta_i \cdot \delta_i'$ and $\theta_i \cdot \delta_i$ have the same sign for any $\theta_i \in \Theta_i$, which concludes the proof.
   □

3.2. The Double Game Prisoner’s Dilemma

We keep the same framework for the DGPD with a discrete type space. For each i I , we have
$$\bar U_i(a_i, \theta_i, \sigma_{-i}) = \zeta_i^C(\sigma_{-i})\, U_i(a_i, C, \theta_i) + \zeta_i^D(\sigma_{-i})\, U_i(a_i, D, \theta_i)$$
and
$$\bar U_i(C, \theta_i, \sigma_{-i}) = \zeta_i^C(\sigma_{-i}) \left[(1-\theta_i)\, r + \theta_i\, y\right] + \zeta_i^D(\sigma_{-i}) \left[(1-\theta_i)\, s + \theta_i\, y\right]$$
$$\bar U_i(D, \theta_i, \sigma_{-i}) = \zeta_i^C(\sigma_{-i}) \left[(1-\theta_i)\, t + \theta_i\, s\right] + \zeta_i^D(\sigma_{-i}) \left[(1-\theta_i)\, p + \theta_i\, s\right]$$
Proposition 7.
Consider agent $i$ and let $\theta_i^x < \theta_i^{x+1}$ be two consecutive types from $\Theta_i$. All threshold strategies $\sigma_i$ with a threshold $\theta_i^*$ such that $\theta_i^x < \theta_i^* < \theta_i^{x+1}$ are equivalent.
Proof. 
This follows from the fact that there exists no type θ i Θ i between θ i x and θ i x + 1 , so as long as we keep the threshold between those two values we end up with the same actions for agent i.    □
Note that for threshold strategies, the third case ($\theta_i = \theta_i^*$) can only be reached if $\theta_i^* \in \Theta_i$. So, if $\alpha_i \in \{0, 1\}$ or $\theta_i^* \notin \Theta_i$, then the strategy is pure.
The search for a pure Nash equilibrium in the discrete case requires a different method since we do not have a general result for two-action multigames that can be used as in the case of continuous type space. To go further, we assume that the type space is finite. Recall for player $i$ that $\zeta_i^{a_{-i}}(\sigma_{-i}) = \sum_{\theta_{-i} \in \Theta_{-i}} p_{-i}(\theta_{-i})\, \sigma_{-i}(\theta_{-i}, a_{-i})$ is the probability that the other agent's action is $a_{-i}$ given that it follows $\sigma_{-i}$.
Proposition 8 ( ζ i C monotonicity).
Consider agent $i$ and suppose that the other agent follows a threshold strategy $\sigma_{-i}$ with threshold $\theta_{-i}^*$. This threshold strategy induces a value $\zeta_i^C(\sigma_{-i})$ for agent $i$. If the other agent changes its strategy $\sigma_{-i}$ by increasing its threshold $\theta_{-i}^*$, then the induced value $\zeta_i^C(\sigma_{-i})$ decreases.
Proof. 
By increasing its threshold value, agent $-i$ decreases the probability of playing C and thus decreases the value
$$\zeta_i^C(\sigma_{-i}) = \sum_{\theta_{-i} \in \Theta_{-i}} p_{-i}(\theta_{-i})\, \sigma_{-i}(\theta_{-i}, C)$$
   □
For integers $K, L$ with $K \leq L$, let $[\![K, L]\!]$ denote the set of integers $\{K, K+1, \ldots, L\}$.
Lemma 1.
Suppose $f : [\![0, M]\!] \to [\![0, N]\!]$ and $g : [\![0, N]\!] \to [\![0, M]\!]$ are both increasing (or both decreasing). Then, there exists $c \in [\![0, N]\!]$ such that $f(g(c)) = c$.
Proof. 
We first notice that $h := f \circ g$ is increasing and $0 \leq h(x) \leq N$ for $0 \leq x \leq N$. By induction, the sequence $h^k(0)$ is non-decreasing and bounded by $N$, so it must stabilize: there exists a least non-negative integer $k$ such that $c := h^k(0) = h^{k+1}(0)$, and it follows that $h(c) = c$.    □
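The proof of Lemma 1 is constructive: iterating $h = f \circ g$ from 0 climbs to a fixed point. A minimal sketch, with two arbitrary increasing maps chosen for illustration:

```python
def lattice_fixed_point(f, g, N):
    """Return c in {0, ..., N} with f(g(c)) == c, for f, g both increasing
    (or both decreasing), as in Lemma 1. Iterates h = f o g from 0."""
    h = lambda x: f(g(x))
    c = 0
    while h(c) != c:   # h is increasing, so h^k(0) climbs and must stabilize
        c = h(c)
    return c

# Two arbitrary increasing maps between integer intervals
f = lambda x: min(x // 2 + 1, 6)   # {0..10} -> {0..6}
g = lambda x: min(2 * x, 10)       # {0..6}  -> {0..10}
c = lattice_fixed_point(f, g, 6)
assert f(g(c)) == c
```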
Theorem 5.
For any DGPD with finite type space there exists a pure Bayesian Nash equilibrium comprising threshold strategies such that for both agents, the threshold θ i * [ min ( λ , μ ) , max ( λ , μ ) ] .
Proof. 
We suppose that $\mu > \lambda$ (the reasoning for $\mu < \lambda$ is similar). Suppose that agent $-i$ plays a threshold strategy $(\sigma_{-i}, \theta_{-i}^*)$. Then, agent $i$'s best response is also a threshold strategy $(\sigma_i, \theta_i^*)$. By the monotonicity of $\theta_i^*$ and $\zeta_i^C$ (Propositions 2 and 8), if agent $-i$ increases its threshold, then the threshold of agent $i$'s best response also increases (when $\mu < \lambda$, agent $i$'s best response threshold decreases instead).
Now, consider the partition of [ 0 , 1 ] into intervals according to agents’ type spaces:
$$\Theta_i = \{\theta_i^1, \theta_i^2, \ldots, \theta_i^{n_i}\} \quad \text{with} \quad 0 \leq \theta_i^1 < \theta_i^2 < \cdots < \theta_i^{n_i} \leq 1$$
$$I_i^0 = [0, \theta_i^1],\; I_i^1 = [\theta_i^1, \theta_i^2],\; \ldots,\; I_i^{n_i} = [\theta_i^{n_i}, 1]$$
For agent $-i$'s threshold strategy $(\sigma_{-i}, \theta_{-i}^*)$ and agent $i$'s best response $(\sigma_i, \theta_i^*)$, there exist (1) $k_{-i} \in [\![0, n_{-i}]\!]$ such that $\theta_{-i}^* \in I_{-i}^{k_{-i}}$ and (2) $k_i \in [\![0, n_i]\!]$ such that $\theta_i^* \in I_i^{k_i}$. Note that if $\theta_i^* \in \Theta_i$, then it belongs to two adjacent intervals $I_i^{k_i}$ and $I_i^{k_i+1}$. In this case, we arbitrarily choose to take $I_i^{k_i}$.
We define the transition functions $t_{1\to2} : [\![0, n_1]\!] \to [\![0, n_2]\!]$ and $t_{2\to1} : [\![0, n_2]\!] \to [\![0, n_1]\!]$ such that $k_{-i} = t_{i\to-i}(k_i)$. In order to show that there exists a pure Nash equilibrium, we need to show that
$$\exists (k_1, k_2) \in [\![0, n_1]\!] \times [\![0, n_2]\!] \quad k_1 = t_{2\to1}(k_2), \quad k_2 = t_{1\to2}(k_1)$$
Equivalently, we search for $k_i$ such that $k_i = t_{-i\to i}(t_{i\to-i}(k_i))$. Lemma 1, applied with $t_{1\to2}$ and $t_{2\to1}$ as $f$ and $g$, yields such a $k_i$.    □

3.3. Example

Suppose that $G$ is a double game prisoner's dilemma with a uniform prior such that utilities for both agents are given according to Table 12 and Table 13 and type space $\Theta_i = \{t/60 : t \in [\![0, 60]\!]\}$ for $i = 1, 2$. We obtain $\mu = 1/5$ and $\lambda = 1/4$ (represented in Figure 3).
To find a pure Bayesian NE, one has to find a pair ( s 1 , s 2 ) of threshold strategies such that s 1 is the best response to s 2 and s 2 is the best response to s 1 . From the symmetry of the game, we can expect that s 1 = s 2 (i.e., they have the same threshold). We have
$$\theta_i^*(\zeta_i^C) = \frac{\zeta_i^C(20-16) + (1-\zeta_i^C)(6-3)}{\zeta_i^C(20-16) + (1-\zeta_i^C)(6-3) + (15-3)} = \frac{\zeta_i^C + 3}{\zeta_i^C + 15}$$
while ζ i C can take three different values:
$$\zeta_i^C = \begin{cases} 48/61 & \theta_{-i}^* \in \left]\tfrac{1}{5}, \tfrac{13}{60}\right[ \\[2pt] 47/61 & \theta_{-i}^* \in \left]\tfrac{13}{60}, \tfrac{14}{60}\right[ \\[2pt] 46/61 & \theta_{-i}^* \in \left]\tfrac{14}{60}, \tfrac{1}{4}\right[ \end{cases}$$
Possible values for $\theta_i^*$ are $\theta_i^*(48/61) \approx 0.239875$, $\theta_i^*(47/61) \approx 0.239085$ or $\theta_i^*(46/61) \approx 0.238293$. Those three values are in the interval $]\tfrac{14}{60}, \tfrac{1}{4}[$. So, the pair $(s_1, s_2)$ where
$$s_1(\theta_1) = s_2(\theta_2) = \begin{cases} D & \text{if } \theta_i < \tfrac{1}{4} \\ C & \text{if } \theta_i \geq \tfrac{1}{4} \end{cases}$$
is a pure (Bayesian) strategy Nash equilibrium for the game G.
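The computation above can be verified mechanically with exact rational arithmetic:

```python
from fractions import Fraction as Fr

# Payoffs and type space of the Section 3.3 example
t, r, y, p, s = 20, 16, 15, 6, 3
types = [Fr(k, 60) for k in range(61)]   # uniform prior over 61 types

def threshold(zeta):
    num = zeta * (t - r) + (1 - zeta) * (p - s)
    return num / (num + (y - s))         # simplifies to (zeta + 3) / (zeta + 15)

def zeta_of(th):
    """Probability that the opponent plays C: it cooperates iff its type > th."""
    return Fr(sum(1 for v in types if v > th), 61)

# Midpoints of the three candidate intervals for the opponent's threshold
for mid, expected in [(Fr(25, 120), Fr(48, 61)),   # ]12/60, 13/60[
                      (Fr(27, 120), Fr(47, 61)),   # ]13/60, 14/60[
                      (Fr(29, 120), Fr(46, 61))]:  # ]14/60, 15/60[
    z = zeta_of(mid)
    assert z == expected
    assert Fr(14, 60) < threshold(z) < Fr(1, 4)    # best response lands in ]14/60, 1/4[

assert threshold(Fr(46, 61)) == Fr(229, 961)       # the fixed-point value ~0.238293
```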

3.4. On a Simple Multigame Classification

In this section, we consider $G_{2,2,2}$ multigames with finite type space. Both actions are still denoted by C and D (even if they are not necessarily associated with cooperation and defection). The payoff matrix $U$ is not constrained, in contrast to the DGPD configuration. First, we define a simple classification of such games according to properties of $U$.
Definition 12.
We denote the type space configuration by ( Θ i , p i ) . Any payoff matrix U belongs to one of the following sets:
1. 
The full set: payoff matrices U such that for any type space configuration the game G = U , ( Θ i , p i ) has a pure Nash equilibrium.
2. 
The solutionless set: payoff matrices U such that for any type space configuration the game G = U , ( Θ i , p i ) has no pure Nash equilibrium.
3. 
The hybrid set: payoff matrices U such that the existence of a pure NE for G = U , ( Θ i , p i ) depends on the type space configuration.
Proposition 9.
The full set is not empty: it contains matrices U with the DGPD payoff constraints. The hybrid set is also not empty.
Proof. 
The first set is obviously not empty in view of Theorem 5. The second set is not empty as we can find $G$ and $G'$ sharing the same payoff matrix such that one has a pure NE and the other does not (see the following examples).    □
Consider the uniform double game G with utilities for agents i = 1 , 2 given by Table 14 with type space Θ 1 = { 0.2 , 0.5 , 0.6 } , Θ 2 = { 0.2 , 0.4 , 0.8 } and the prior p 1 , p 2 such that
$$p_1(\theta_1 = 0.2) = 0.3, \quad p_1(\theta_1 = 0.5) = 0.4, \quad p_1(\theta_1 = 0.6) = 0.3$$
$$p_2(\theta_2 = 0.2) = 0.3, \quad p_2(\theta_2 = 0.4) = 0.4, \quad p_2(\theta_2 = 0.8) = 0.3$$
This game has no pure Bayesian Nash equilibrium. Consider a slight variation $G'$ of the game $G$ such that utilities for agents $i = 1, 2$ are given by Table 14 with type space $\Theta_1 = \{0.2, 0.5, 0.7\}$, $\Theta_2 = \{0.2, 0.4, 0.8\}$ and the prior $p_1, p_2$ such that
$$p_1(\theta_1 = 0.2) = 0.3, \quad p_1(\theta_1 = 0.5) = 0.4, \quad p_1(\theta_1 = 0.7) = 0.3$$
$$p_2(\theta_2 = 0.2) = 0.3, \quad p_2(\theta_2 = 0.4) = 0.4, \quad p_2(\theta_2 = 0.8) = 0.3$$
Consider $\sigma_1$ as a C/D threshold strategy with $\theta_1^* \in\; ]0.5, 0.7[$ and $\sigma_2$ as a C/D threshold strategy with $\theta_2^* \in\; ]0.2, 0.4[$. The pair $(\sigma_1, \sigma_2)$ is a pure Bayesian Nash equilibrium for the game $G'$.
Note that we do not give any particular property for the solutionless set. We postulate that this set is empty (see Section 3.5 on algorithmic results). Recall the following formulae for the G 2 , 2 , 2 configuration:
$$\bar U_i(a_i, \theta_i, \sigma_{-i}) = (1-\theta_i)\, \bar u_i^1(a_i, \sigma_{-i}) + \theta_i\, \bar u_i^2(a_i, \sigma_{-i})$$
$$\bar u_i^j(a_i, \sigma_{-i}) = \zeta_i^C(\sigma_{-i})\, u_i^j(a_i, C) + \left(1 - \zeta_i^C(\sigma_{-i})\right) u_i^j(a_i, D)$$
Observe that with no assumption on the payoff matrix, there is no guarantee that $\bar U_i(C, \theta_i, \sigma_{-i})$ and $\bar U_i(D, \theta_i, \sigma_{-i})$ will cross for a given $\sigma_{-i}$. Moreover, the crossing point $\theta_i^*$ (if it exists) is not guaranteed to lie inside $[0,1]$, as illustrated by Figure 4 and Figure 5. The crossing point cuts $\mathbb{R}$ into two regions, one in which the best action is C and the other in which the best action is D. If the left region is C then the resulting strategy is a C/D strategy (Figure 4), and if the left region is D then the resulting strategy is a D/C strategy (Figure 5). We call this the strategy type. For any DGPD configuration, the best response is always a D/C strategy.
As with the DGPD, we define the threshold function θ i * ( σ i ) for agent i:
$$\theta_i^*(\sigma_{-i}) = \frac{\bar u_i^1(C, \sigma_{-i}) - \bar u_i^1(D, \sigma_{-i})}{\left(\bar u_i^1(C, \sigma_{-i}) - \bar u_i^1(D, \sigma_{-i})\right) - \left(\bar u_i^2(C, \sigma_{-i}) - \bar u_i^2(D, \sigma_{-i})\right)}$$
When $\bar u_i^1(C, \sigma_{-i}) - \bar u_i^1(D, \sigma_{-i}) = \bar u_i^2(C, \sigma_{-i}) - \bar u_i^2(D, \sigma_{-i})$, both utility functions have the same slope and the threshold function is not defined. As long as $\bar u_i^1(C, \sigma_{-i}) \neq \bar u_i^1(D, \sigma_{-i})$, the utility functions are not equal, so there is one best action (either C or D); in this case, the best response is still a threshold strategy, with threshold $\theta_i^* = \pm\infty$. Otherwise, both utility functions are equal (there is an infinite number of crossing points). Because of the latter case, we can no longer state that every best response comprises threshold strategies.
Definition 13 (D/C threshold strategy).
A threshold D/C strategy $\sigma_i$ with threshold $\theta_i^* \in \mathbb{R} \cup \{-\infty, +\infty\}$ for agent $i$ is a strategy such that
$$\sigma_i(\theta_i) = \begin{cases} D & \theta_i < \theta_i^* \\ C & \theta_i > \theta_i^* \\ \alpha_i\, C + (1-\alpha_i)\, D & \theta_i = \theta_i^* \end{cases}$$
Now, we can express the threshold as a function of $\zeta_i^C \in [0,1]$ instead of $\sigma_{-i}$:
$$\theta_i^*(\zeta_i^C) = \frac{\delta_i^{1D} + \zeta_i^C \left(\delta_i^{1C} - \delta_i^{1D}\right)}{\delta_i^{1D} - \delta_i^{2D} + \zeta_i^C \left(\delta_i^{1C} - \delta_i^{1D} + \delta_i^{2D} - \delta_i^{2C}\right)}$$
$$\text{where} \quad \delta_i^{j a_{-i}} := u_i^j(C, a_{-i}) - u_i^j(D, a_{-i})$$
The case of equal slopes is reached at the forbidden value $\tilde\zeta_i^C$, the zero of the denominator above:
$$\tilde\zeta_i^C := \frac{\delta_i^{2D} - \delta_i^{1D}}{\delta_i^{1C} - \delta_i^{1D} + \delta_i^{2D} - \delta_i^{2C}}$$
Therefore, the case of equal utility functions happens when
$$\tilde\zeta_i^C = \frac{\delta_i^{1D}}{\delta_i^{1D} - \delta_i^{1C}}.$$
We notice that the graph of $\theta_i^*(\zeta_i^C)$ is split into two regions by the forbidden value (Figure 6 and Figure 7). Each region is associated with a different strategy type. Thus, if $\tilde\zeta_i^C$ is not in $[0,1]$, then agent $i$ always plays the same strategy type, notwithstanding its opponent's strategy. In Figure 7, the forbidden value is outside $[0,1]$, so agent $i$ only plays a D/C strategy. In Figure 6, the forbidden value is in $[0,1]$, so agent $i$ may play both strategy types depending on the opponent's strategy $\sigma_{-i}$: if $\zeta_i^C(\sigma_{-i}) < \tilde\zeta_i^C$ then agent $i$ plays a C/D strategy, and if $\zeta_i^C(\sigma_{-i}) > \tilde\zeta_i^C$ then agent $i$ plays a D/C strategy. Suppose that $\tilde\zeta_i^C \notin [0,1]$ for $i = 1, 2$. Then both agents always stick to the same strategy type and there are three cases: (1) both play C/D, (2) both play D/C (as in the DGPD), and (3) one plays C/D while the other plays D/C.
In order to characterize the variations of $\theta_i^*$, define
$$\Delta_i := \delta_i^{1D}\, \delta_i^{2C} - \delta_i^{1C}\, \delta_i^{2D}$$
Proposition 10 ($\theta_i^*$ monotonicity).
The threshold function $\theta_i^*(\zeta_i^C)$ is monotonic and satisfies the following:
1. 
if Δ i > 0 then θ i * ( ζ i C ) is increasing,
2. 
if Δ i < 0 then θ i * ( ζ i C ) is decreasing,
3. 
if Δ i = 0 then θ i * ( ζ i C ) is constant.
Proof. 
Simply compute the derivative of the function $\theta_i^*(\zeta_i^C)$, which is a homography $f : x \mapsto \frac{ax+b}{cx+d}$, and obtain the condition on $\Delta_i$.    □
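These quantities are straightforward to compute from a payoff matrix. The sketch below evaluates the $\delta$ values, the forbidden value (taken as the zero of the denominator of $\theta_i^*(\zeta_i^C)$), and $\Delta_i$ for the DGPD of Section 3.3; as expected for a DGPD, the forbidden value falls outside $[0,1]$ and $\Delta_i > 0$ (increasing threshold, matching $\mu < \lambda$ there):

```python
# u[j][(a_i, a_-i)]: agent i's payoff in local game j (DGPD of Section 3.3)
t, r, y, p, s = 20, 16, 15, 6, 3
u = [
    {('C', 'C'): r, ('C', 'D'): s, ('D', 'C'): t, ('D', 'D'): p},  # prisoner's dilemma
    {('C', 'C'): y, ('C', 'D'): y, ('D', 'C'): s, ('D', 'D'): s},  # social game
]

def delta(j, a_other):
    """delta_i^{j a_-i} = u_i^j(C, a_-i) - u_i^j(D, a_-i)."""
    return u[j][('C', a_other)] - u[j][('D', a_other)]

d1C, d1D = delta(0, 'C'), delta(0, 'D')
d2C, d2D = delta(1, 'C'), delta(1, 'D')

forbidden = (d2D - d1D) / (d1C - d1D + d2D - d2C)   # zero of the denominator of theta*
Delta = d1D * d2C - d1C * d2D

print(forbidden, Delta)  # forbidden outside [0, 1]; Delta > 0 => increasing
```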
Proposition 11.
Let U be the payoff matrix of a two-player double game such that
1. 
$\tilde\zeta_i^C \notin [0,1]$ for $i = 1, 2$, where $\tilde\zeta_i^C$ is the forbidden value of $\theta_i^*(\zeta_i^C)$,
2. 
$\Delta_1$ and $\Delta_2$ have the same sign if both agents play the same strategy type, or opposite signs otherwise.
Then, for any type space configuration ( Θ i , p i ) the game G = U , ( Θ i , p i ) has a pure Bayesian Nash equilibrium.
Proof. 
The two conditions imply that (1) agents have a unique strategy type and (2) their best response threshold functions have the same monotonicity, i.e., increasing or decreasing. Thus, we can follow the same reasoning as we had for DGPD to prove the existence of a pure Bayesian NE.    □

3.5. Algorithmic Results

In this section, we develop efficient algorithms to find pure Bayesian Nash equilibria for finite type space $G_{2,2,2}$ multigames. The first part focuses on an optimized version for the DGPD while the second one focuses on a more general version. In the third part, the complexity of both algorithms is evaluated and compared.

3.5.1. Algorithm for DGPD

Recall that the agents’ best response sets only comprise threshold strategies. Thus, given a finite type space (with n i elements for agent i) the search space comprises n 1 × n 2 threshold strategy pairs. The pure Bayesian Nash equilibrium search consists of finding a fixed point (two strategies that are the mutual best responses of each other) among those combinations.
For clarity, we formulate a graphical method to represent the solution search that we call a strategy diagram that looks like Figure 8. There are two unit intervals [ 0 , 1 ] , one for each agent’s type space. An arrow from agent i’s interval I i to agent i ’s interval I i indicates that if agent i plays a threshold strategy with a threshold in I i , the best response of agent i is a threshold strategy with a threshold in I i . A solution is then simply represented by two compatible arrows as displayed in Figure 9.
We now introduce Algorithm 1 for NE search on finite DGPD. For every threshold strategy of agent 1, we compute the associated best response of agent 2 and then compute the best response of agent 1 given the latter. Whenever, for one of agent 1's threshold strategies, the best response computed for agent 1 is equivalent to the initial strategy, we obtain a pure NE. The procedure compute_cumul_proba() returns the cumulative probabilities given a probability distribution. finder() returns the index of the type space interval that contains a given threshold. search_space_boundaries() computes $\alpha = \min(\mu, \lambda)$ and $\beta = \max(\mu, \lambda)$ and then the associated indices in the type space. This helps us optimize the overall algorithm by reducing the search space to thresholds $\theta_i^* \in [\alpha, \beta]$. Finally, threshold_i() is the threshold function of agent $i$.
Algorithm 1 Exhaustive NE search
Require:  t , r , y , p , s , Θ 1 , Θ 2 , p 1 , p 2
cumul_proba_1 ← compute_cumul_proba(p_1)
cumul_proba_2 ← compute_cumul_proba(p_2)
start_1, end_1 ← search_space_boundaries(t, r, y, p, s, Θ_1)
 
for start_1 ≤ i < end_1 + 1 do
   ζ_2^C ← 1 − cumul_proba_1[i]
   θ_2^* ← threshold_2(ζ_2^C)
   k ← finder(Θ_2, θ_2^*)
   ζ_1^C ← 1 − cumul_proba_2[k]
   θ_1^* ← threshold_1(ζ_1^C)
   if Θ_1[i − 1] ≤ θ_1^* ≤ Θ_1[i] then
      return θ_1^*, θ_2^*
   end if
end for
return False
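A direct Python transcription of Algorithm 1 could look as follows; this is a sketch rather than the authors' implementation, and it scans all of $\Theta_1$ instead of using search_space_boundaries():

```python
import bisect
from itertools import accumulate

def ne_search(t, r, y, p, s, T1, T2, p1, p2):
    """Exhaustive pure NE search for a finite-type-space DGPD.

    T1, T2: sorted type spaces; p1, p2: matching probability vectors.
    Scans every threshold interval ]T1[i], T1[i+1][ of agent 1."""
    def threshold(zeta):
        num = zeta * (t - r) + (1 - zeta) * (p - s)
        return num / (num + (y - s))

    c1 = list(accumulate(p1))   # compute_cumul_proba
    c2 = list(accumulate(p2))

    for i in range(len(T1) - 1):
        zeta2 = 1 - c1[i]                     # P(theta_1 > T1[i]): agent 1 plays C
        th2 = threshold(zeta2)                # agent 2's best-response threshold
        k = bisect.bisect_left(T2, th2) - 1   # finder(): interval of T2 containing th2
        zeta1 = 1 - c2[k] if k >= 0 else 1.0  # (ties with grid points not handled)
        th1 = threshold(zeta1)                # agent 1's best response to that
        if T1[i] <= th1 <= T1[i + 1]:         # fell back into the starting interval
            return th1, th2
    return None

# Example of Section 3.3: uniform prior over {0, 1/60, ..., 1}
T = [k / 60 for k in range(61)]
P = [1 / 61] * 61
print(ne_search(20, 16, 15, 6, 3, T, T, P, P))  # thresholds ~0.2383 each
```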
Figure 10 and Figure 11 illustrate the outcome of the NE search for two situations that we encounter. Note that when $\lambda < \mu$ we reverse the second agent's axis, as the best-response threshold monotonicity is reversed compared to $\lambda > \mu$. Also note that there can be multiple solutions (Figure 11). If we only seek one solution, then we stop at the first one found to reduce the computational cost.

3.5.2. General Algorithm

We now describe a different algorithm that can handle a broader range of games but is more expensive. It can also handle DGPD games but is computationally less efficient as it explores a larger search space without exploiting the DGPD structure.
Apart from the fact that we have to explore both C/D and D/C strategies, the double game algorithm is very similar to the DGPD algorithm. For each type space interval of agent 1, we compute the best response of agent 2 and the best response of agent 1 given the latter. We check that we fall back to the initial strategy by ensuring that the thresholds are in the same interval and that the strategy types are the same. Of course, we cannot benefit from the reduced search space with λ and μ as they make no sense in the general context. The overall procedure is summarized in Figure 12.
Given this general algorithm we explore some properties of payoff matrices. We generated hundreds of random payoff matrices and thousands of type space configurations and ran Nash equilibrium searches. We experimentally classified the payoff matrices thanks to the NE search results. Interestingly, we notice that none of the generated matrices belongs to the solutionless set as we postulated earlier. Secondly, we find that there exist matrices in the full set that do not satisfy the conditions of Proposition 11: the set of conditions is sufficient, but not necessary, for belonging to the full set.
Next, using a modified version of the NE search that does not stop at the first solution (if it exists) we computed the average number of solutions for different type space configurations. Let input size denote the number of elements in the type space for both players. In this context, n = c a r d ( Θ 1 ) and m = c a r d ( Θ 2 ) .
Figure 13 shows the results for a sample of payoff matrices randomly chosen from the full set (thanks to our previous classification). The first trivial result is that the average never goes under 1 (otherwise the matrix would not belong to the full set). Secondly, it seems that the average number of solutions can either be constant, increase, or decrease with the input size. In many cases, we even observe that the average tends to stabilize around a value which seems to be an integer.
We repeated this experiment with payoff matrices randomly picked from the hybrid set; Figure 14 shows a sample of results for such matrices. As with the full set, the averages can increase, decrease, or stay constant, and they converge to different values. This time, however, those values are smaller than 1 and need not be integers.
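The averaging experiment itself is straightforward to reproduce; a sketch, assuming a hypothetical `count_solutions` routine (the exhaustive NE search) and a `sample_config` generator of random type-space configurations:

```python
def average_solution_count(count_solutions, sample_config, trials=1000):
    """Estimate the average number of pure NE of a fixed payoff matrix over
    random type-space configurations, as in Figures 13 and 14. Both callables
    are hypothetical stand-ins for the routines used in our experiments."""
    total = sum(count_solutions(sample_config()) for _ in range(trials))
    return total / trials

# With a stand-in counter, a matrix in the full set should average >= 1.
print(average_solution_count(len, lambda: [1, 2], trials=100))  # -> 2.0
```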

3.5.3. Complexity Comparison

In this section, we discuss the complexity of each algorithm and compare their performances.
First of all, the left part of Figure 15 displays the complexity of the DGPD NE search for different variations of the inputs. When n = m, both input sizes vary, while when n = 10 or m = 10 only one input size varies. Notice that when the first agent's input size is constant, the complexity is also constant. When n varies, the complexity is linear and almost independent of whether m varies or not. In fact, the main driver of the complexity is the main loop, which iterates over the elements of Θ 1. The procedure finder() has a low computational cost, as it reduces to a binary search. In practice, for imbalanced type-space sizes, one should always take the agent with the smaller type space as the first agent.
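The binary search inside finder() can be sketched with the standard library (an illustrative stand-in, not the repository's exact implementation):

```python
from bisect import bisect_left

def finder(theta_sorted, threshold):
    """Locate a threshold inside a sorted discrete type space in O(log m):
    return the index of the first type >= threshold. Types below that index
    fall on one side of the threshold strategy, types above on the other."""
    return bisect_left(theta_sorted, threshold)

print(finder([0.1, 0.25, 0.5, 0.8], 0.4))  # -> 2
```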
The right part of Figure 15 illustrates the improved performance of the DGPD-specific algorithm over the more general one. The latter also has linear complexity, but with a much steeper slope. One explanation is that, for each element of Θ 1, the general algorithm considers both C/D and D/C strategies as starting points; moreover, it does not reduce the search space with bounds like μ and λ.
For the general algorithm, we also compare the complexity for different variations of the input size, as displayed in Figure 16 (left). Again, the complexity depends mainly on the first agent's type-space size. In contrast to the DGPD algorithm, we notice that for m = 10 the complexity is slightly below that of the n = m case. The right side of Figure 16 shows that this behavior cannot be explained by a difference in the number of iterations of the main loop; it is more likely due to the cost of finder(), which grows as O(log(m)) when m increases. This effect might also affect the DGPD algorithm but was not significant in our experiments.

4. Conclusions

In this paper, we explored the existence of pure Bayesian Nash equilibria for a subset of uniform multigames, distinguishing between games with continuous and discrete type spaces.
For continuous type-space games with two actions, we showed in Theorem 4 the existence of a pure Bayesian Nash equilibrium when, for each agent, there are local games with a strictly dominant strategy. We illustrated its application through the DGPD and SADP examples, which model real situations with more precision than the toy examples usually presented. In Section 2.7 we formulated a methodology to solve two-action games with any kind of prior.
For the finite type-space DGPD, we showed in Theorem 5 the existence of a pure Bayesian Nash equilibrium. Building on this, we provided efficient algorithms to find pure Bayesian Nash equilibria and explored experimentally our classification of discrete double multigames (Proposition 11).
The threshold strategy is a core concept developed for both the continuous and the discrete type-space games with two actions. As we saw, a threshold strategy is fully characterized by its threshold and defines three regions in the type space: one associated with the pure action C, one with the pure action D, and one with a mix. By construction, the best response must be a threshold strategy. The threshold strategies presented for the DGPD are the most basic version, in which agent i plays D if θ i < θ i *, plays C if θ i > θ i *, and mixes both if θ i = θ i *.
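This basic threshold strategy translates directly into code (a minimal sketch; the mixing probabilities at the threshold are arbitrary and shown here as an even mix):

```python
def threshold_strategy(theta, theta_star):
    """Play D below the threshold, C above it, and mix exactly at it."""
    if theta < theta_star:
        return {"D": 1.0}
    if theta > theta_star:
        return {"C": 1.0}
    return {"C": 0.5, "D": 0.5}  # any mix is admissible at the threshold

print(threshold_strategy(0.2, 0.5))  # -> {'D': 1.0}
```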
As future work, we could extend this notion to more than two actions. In that case, we would consider argmax_{a_i ∈ A_i} θ_i · ( ū_i^j ( a_i , σ_{−i} ) )_{j ∈ J} to determine the action played by agent i with type θ i. When the highest value is reached by two or more actions, the response of agent i would be a mixed combination of those actions. This definition would preserve the fact that a Nash equilibrium solution (be it pure or mixed) must comprise threshold strategies.
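The proposed multi-action rule can be sketched as follows (a hypothetical helper; `u_bar[a]` stands for the vector of expected local payoffs ( ū_i^j ( a , σ_{−i} ) )_{j ∈ J} for action a):

```python
def best_actions(theta, u_bar):
    """Return all actions maximizing the linear score theta · u_bar[a];
    a tie means agent i would play a mixed combination of those actions."""
    def score(a):
        return sum(t * u for t, u in zip(theta, u_bar[a]))
    best = max(score(a) for a in u_bar)
    return [a for a in u_bar if score(a) == best]

# Two actions tie on this type vector, so the response would mix a1 and a2:
print(best_actions([0.5, 0.5], {"a1": [2, 0], "a2": [0, 2], "a3": [0, 0]}))  # -> ['a1', 'a2']
```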

Author Contributions

Conceptualization, A.E.; formal analysis, S.H.; investigation, S.H.; methodology, S.H.; software, S.H.; supervision, A.E.; validation, A.E. and S.H.; visualization, S.H.; writing—original draft preparation, S.H.; writing—review and editing, A.E. and S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The algorithms presented in this study are openly available on GitHub in the repository https://github.com/huot-s/pure_ne_multigames (accessed on 3 July 2025). No new data were created or analyzed in this study.

Acknowledgments

We would like to thank Samira Hossein Ghorban https://orcid.org/0000-0003-4147-3181 (accessed on 3 July 2025) for helping us design some of the examples displayed in this article, in particular, the one in Section 3.3.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DGPD   Double-Game Prisoner’s Dilemma
SADP   Sustainable Adoption Decision Problem
NE     Nash Equilibrium
PD     Prisoner’s Dilemma
PPAD   Polynomial Parity Arguments on Directed graphs
C      Cooperation
D      Defection

References

  1. Athey, S. (2001). Single crossing properties and the existence of pure strategy equilibria in games of incomplete information. Econometrica. Available online: http://www.jstor.org/stable/2692247 (accessed on 3 July 2025).
  2. Carrozzo Magli, A., Posta, P. D., & Manfredi, P. (2021). The tragedy of the commons as a prisoner’s dilemma: Its relevance for sustainability games. Sustainability, 13(15), 8125. [Google Scholar] [CrossRef]
  3. Daskalakis, C., Goldberg, P. W., & Papadimitriou, C. H. (2009). The complexity of computing a Nash equilibrium. SIAM Journal on Computing, 39(1), 195–259. [Google Scholar] [CrossRef]
  4. Dresher, M. (1970). Probability of a pure equilibrium point in n-person games. Journal of Combinatorial Theory, 8(1), 134–145. [Google Scholar] [CrossRef]
  5. Edalat, A., Ghorban, S. H., & Ghoroghi, A. (2018). Ex post Nash equilibrium in linear Bayesian games for decision making in multi-environments. Games, 9(4), 85. [Google Scholar] [CrossRef]
  6. Edalat, A., Ghoroghi, A., & Sakellariou, G. (2012). Multi-games and a double game extension of the prisoner’s dilemma. arXiv, arXiv:1205.4973. [Google Scholar]
  7. Einy, E., & Haimanko, O. (2023). Pure-strategy equilibrium in Bayesian potential games with absolutely continuous information. Games and Economic Behavior, 140, 341–347. [Google Scholar] [CrossRef]
  8. Flood, M. M. (1958). Some experimental games. Management Science, 5(1), 5–26. [Google Scholar] [CrossRef]
  9. Friede, G., Busch, T., & Bassen, A. (2015). ESG and financial performance: Aggregated evidence from more than 2000 empirical studies. Journal of Sustainable Finance and Investment, 5, 210–233. [Google Scholar] [CrossRef]
  10. Friedman, D. (1998). On economic applications of evolutionary game theory. Journal of Evolutionary Economics, 8(1), 15–43. [Google Scholar] [CrossRef]
  11. Granovetter, M. (1978). Threshold models of collective behavior. American Journal of Sociology, 83, 1420–1443. [Google Scholar] [CrossRef]
  12. Hardin, G. (1968). The tragedy of the commons. Science, 162(3859), 1243–1248. [Google Scholar] [CrossRef]
  13. Harsanyi, J. C. (1967). Games with incomplete information played by “Bayesian” players, I–III. Part I. The basic model. Management Science, 14(3), 159–182. [Google Scholar] [CrossRef]
  14. Harsanyi, J. C. (1968a). Games with incomplete information played by “Bayesian” players, I–III. Part II. Bayesian equilibrium points. Management Science, 14(5), 320–334. [Google Scholar] [CrossRef]
  15. Harsanyi, J. C. (1968b). Games with incomplete information played by “Bayesian” players, I–III. Part III. The basic probability distribution of the game. Management Science, 14(7), 486–502. [Google Scholar] [CrossRef]
  16. Harstad, B. (2012). Climate contracts: A game of emissions, investments, negotiations, and renegotiations. The Review of Economic Studies, 79, 1527–1557. [Google Scholar] [CrossRef]
  17. He, W., & Sun, Y. (2019). Pure-strategy equilibria in Bayesian games. Journal of Economic Theory, 180, 11–49. [Google Scholar] [CrossRef]
  18. Heal, G., & Kunreuther, H. (2011). Tipping climate negotiations. National Bureau of Economic Research. [Google Scholar] [CrossRef]
  19. Krishna, V., & Perry, M. (1998). Efficient mechanism design. Hebrew University of Jerusalem. [Google Scholar]
  20. Larson, J. M. (2021). Networks of conflict and cooperation. Annual Review of Political Science, 24(1), 89–107. [Google Scholar] [CrossRef]
  21. Luenberger, D. G. (1998). Investment science. Oxford University Press. [Google Scholar]
  22. Manelli, A. M., & Vincent, D. R. (2007). Multidimensional mechanism design: Revenue maximization and the multiple-good monopoly. Journal of Economic Theory, 137, 153–185. [Google Scholar] [CrossRef]
  23. Martimort, D. (2010). Multi-Contracting mechanism design. In Advances in economics and econometrics: Theory and applications, ninth world congress (Volume I). Cambridge University Press. [Google Scholar] [CrossRef]
  24. McAdams, D. (2003). Isotone equilibrium in games of incomplete information. Econometrica. Available online: http://www.jstor.org/stable/1555494 (accessed on 3 July 2025).
  25. Milgrom, P., & Shannon, C. (1994). Monotone comparative statics. Econometrica. Available online: http://www.jstor.org/stable/2951479 (accessed on 3 July 2025).
  26. Monderer, D., & Shapley, L. S. (1996). Potential games. Games and Economic Behavior, 14(1), 124–143. [Google Scholar] [CrossRef]
  27. Nordhaus, W. (2015). Climate clubs: Overcoming free-riding in international climate policy. American Economic Review, 105, 1339–1370. [Google Scholar] [CrossRef]
  28. Osborne, M. J., & Rubinstein, A. (1994). A course in game theory (Vol. 1). The MIT Press. ISBN 9780262650403. [Google Scholar]
  29. Papadimitriou, C. H. (1994). On the complexity of the parity argument and other inefficient proofs of existence. Journal of Computer and System Sciences, 48(3), 498–532. [Google Scholar] [CrossRef]
  30. Pástor, Ľ., Stambaugh, R. F., & Taylor, L. A. (2021). Sustainable investing in equilibrium. Journal of Financial Economics, 142, 550–571. [Google Scholar] [CrossRef]
  31. Porter, M. E., & Kramer, M. R. (2011). Creating shared value. Harvard Business Review, 89, 4–5. [Google Scholar]
  32. Poundstone, W. (1993). Prisoner’s dilemma: John von Neumann, game theory, and the puzzle of the bomb. Knopf Doubleday Publishing Group. ISBN 0-385-41580-X. [Google Scholar]
  33. Rabinovich, Z., Naroditskiy, V., Gerding, E. H., & Jennings, N. R. (2013). Computing pure Bayesian-Nash equilibria in games with finite actions and continuous types. Artificial Intelligence, 195, 106–139. [Google Scholar] [CrossRef]
  34. Reny, P. J. (2011). On the existence of monotone pure-strategy equilibria in Bayesian games. Econometrica. Available online: http://www.jstor.org/stable/41057464 (accessed on 3 July 2025).
  35. Rinott, Y., & Scarsini, M. (2000). On the number of pure strategy Nash equilibria in random games. Games and Economic Behavior, 33(2), 274–293. [Google Scholar] [CrossRef]
  36. Shapiro, C. (1989). The theory of business strategy. The Rand Journal of Economics, 20(1), 125–137. [Google Scholar] [CrossRef]
  37. Stanford, W. (1995). A note on the probability of k pure Nash equilibria in matrix games. Games and Economic Behavior, 9(2), 238–246. [Google Scholar] [CrossRef]
  38. Stanford, W. (1997). On the distribution of pure strategy equilibria in finite games with vector payoffs. Mathematical Social Sciences, 33(2), 115–127. [Google Scholar] [CrossRef]
  39. Stanford, W. (1999). On the number of pure strategy Nash equilibria in finite common payoffs games. Economics Letters, 62(1), 29–34. [Google Scholar] [CrossRef]
  40. Stern, N. (2007). The economics of climate change: The Stern review. Cambridge University Press. [Google Scholar]
  41. Székely, T., Moore, A. J., & Komdeur, J. (2010). Social behaviour: Genes, ecology and evolution. Cambridge University Press. [Google Scholar]
  42. van den Assem, M. J., van Dolder, D., & Thaler, R. H. (2011). Split or steal? Cooperative behavior when the stakes are large. Management Science, 58, 2–20. [Google Scholar] [CrossRef]
  43. von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. John Wiley and Sons. [Google Scholar]
Figure 1. Example of coefficients verifying the DGPD conditions.
Figure 2. DGPD strategy summarized ( λ < μ ).
Figure 3. Representation of both agents’ type spaces.
Figure 4. Crossing point (threshold) inside [ 0 , 1 ].
Figure 5. Crossing point outside [ 0 , 1 ].
Figure 6. Forbidden value inside [ 0 , 1 ].
Figure 7. Forbidden value outside [ 0 , 1 ].
Figure 8. If agent 1 plays a threshold strategy with θ 1 * ∈ [ 0.1 , 0.3 ], agent 2’s best response is a threshold strategy with θ 2 * ∈ [ 0.1 , 0.4 ].
Figure 9. A pure Bayesian NE represented by compatible arrows.
Figure 10. Strategy diagram when λ > μ.
Figure 11. Strategy diagram when λ < μ.
Figure 12. Search process for the discrete double game.
Figure 13. Average number of solutions with respect to the input size of both agents. Each curve corresponds to a payoff matrix. Among matrices belonging to the full set, we distinguish those satisfying the conditions of Proposition 11 (left) from those not satisfying them (right).
Figure 14. Average number of solutions with respect to the input size of both agents. Each curve corresponds to a payoff matrix from the hybrid set.
Figure 15. DGPD time complexity (left) and comparison with general algorithm (right).
Figure 16. Complexity comparison for different input sizes through computation time (left) and average number of iterations (right).
Table 1. Prisoner’s dilemma payoffs.
                        Agent 2
                     C          D
  Agent 1   C     (r, r)     (s, t)
            D     (t, s)     (p, p)

Table 2. Social game payoffs.
                        Agent 2
                     C          D
  Agent 1   C     (y, y)     (y, z)
            D     (z, y)     (z, z)

Table 3. Market share game.
                        Company B
                     C          D
  Company A  C    (5, 5)     (0, 9)
             D    (9, 0)     (2, 2)

Table 4. Reputation game.
                        Company B
                     C          D
  Company A  C    (4, 4)     (4, 0)
             D    (0, 4)     (0, 0)

Table 5. The chicken game payoff matrix.
              Avoid      Conflict
  Avoid      (2, 2)      (1, 3)
  Conflict   (3, 1)      (0, 0)

Table 6. Ego game.
                            Agent 2
                      Avoid      Conflict
  Agent 1  Avoid     (0, 0)      (1, 1)
           Conflict  (1, 1)      (0, 0)

Table 7. Survival game.
                            Agent 2
                      Avoid      Conflict
  Agent 1  Avoid     (0, 0)      (0, 0)
           Conflict  (0, 0)      (2, 2)

Table 8. The Bach or Stravinsky payoff matrix.
                 Bach       Stravinsky
  Bach         (10, 7)      (2, 2)
  Stravinsky   (0, 0)       (7, 10)

Table 9. Taste game.
                      Agent 2
                   B         S
  Agent 1   B   (1, 0)    (1, 1)
            S   (0, 0)    (0, 1)

Table 10. Social game.
                      Agent 2
                   B         S
  Agent 1   B   (2, 2)    (0, 0)
            S   (0, 0)    (2, 2)

Table 11. The Stag Hunt payoff matrix.
             Stag        Hunt
  Stag     (10, 10)     (1, 8)
  Hunt     (8, 1)       (5, 5)

Table 12. Utilities for the first game.
             C           D
  C      (16, 16)     (3, 20)
  D      (20, 3)      (6, 6)

Table 13. Utilities for the second game.
             C           D
  C      (15, 15)     (15, 3)
  D      (3, 15)      (3, 3)

Table 14. Utilities for both basic games.
             C          D
  C      (3, 4)      (4, 3)
  D      (5, 0)      (1, 1)

             C          D
  C      (2, 4)      (6, 5)
  D      (7, 2)      (5, 2)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Huot, S.; Edalat, A. Pure Bayesian Nash Equilibria for Bayesian Games with Multidimensional Vector Types and Linear Payoffs. Games 2025, 16, 37. https://doi.org/10.3390/g16040037