Article

Reinforcement Learning for Mean-Field Game

Mridul Agarwal, Vaneet Aggarwal, Arnob Ghosh and Nilay Tiwari
1 School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN 47907, USA
2 School of Industrial Engineering, Purdue University, West Lafayette, IN 47907, USA
3 Department of Electrical and Computer Engineering, Ohio State University, Columbus, OH 43210, USA
4 Department of Electrical Engineering, I.I.T. Kanpur, Kanpur 208016, UP, India
* Authors to whom correspondence should be addressed.
Algorithms 2022, 15(3), 73; https://doi.org/10.3390/a15030073
Submission received: 6 January 2022 / Revised: 16 February 2022 / Accepted: 19 February 2022 / Published: 22 February 2022
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)

Abstract

Stochastic games provide a framework for interactions among multiple agents and enable a myriad of applications. In these games, agents decide on actions simultaneously. After the actions are taken, the state of every agent updates to the next state, and each agent receives a reward. However, finding an equilibrium (if one exists) in this game is often difficult when the number of agents becomes large. This paper focuses on finding a mean-field equilibrium (MFE) in an action-coupled stochastic game setting in an episodic framework. It is assumed that an agent can approximate the impact of the other agents through the empirical distribution of their actions. All agents know the action distribution and employ lower-myopic best response dynamics to choose the optimal oblivious strategy. This paper proposes a posterior sampling-based approach for reinforcement learning in the mean-field game, where each agent samples a transition probability from the previous transitions. We show that the policy and action distributions converge to the optimal oblivious strategy and the limiting distribution, respectively, which constitute an MFE.

1. Introduction

1.1. Motivation

We live in a world where multiple agents interact repeatedly in a common environment. For example, multiple robots interact to achieve a specific goal. Multi-agent reinforcement learning (MARL) refers to the problem of learning and planning in a sequential decision-making system with unknown underlying system dynamics. The agents need to learn the system dynamics by trying different actions and observing the rewards received over time. Learning in MARL is fundamentally different from the traditional single-agent reinforcement learning (RL) problem since agents not only interact with the environment but also with each other. Thus, an agent, when trying to learn the underlying system dynamics, has to consider the actions taken by the other agents. Changes in the policy (or actions) of any agent affect the others and vice versa.
One natural learning approach is to extend existing RL algorithms to MARL by assuming that the other agents' actions are independent. However, studies show that a smart agent that learns the joint actions of the others performs better than an agent that does not [1,2]. For any agent, the actions of the other agents become part of the state, so the state space grows exponentially as the number of agents increases. When the agents are strategic, i.e., they only take actions that maximize their own utility (or value), Nash equilibrium is often employed as the equilibrium concept. The existing equilibrium-solving approaches work for some restricted games in which an adversarial equilibrium or a coordination equilibrium exists [3]. Moreover, these approaches can only handle a handful of agents because of the exponential increase in the state space. The computational complexity of finding a Nash equilibrium at every stage game prevents the application of these approaches in games where the number of agents is large [4].
In this paper, we consider MARL in an environment where a large number of agents co-exist. Similar to [5], we utilize a mean-field approach in which we assume that the Q-function of an agent is affected by the mean actions of the others. The mean-field game drastically reduces the complexity, since an agent only needs to consider the empirical distribution of the actions played by the other agents. Such mean-field games arise in several domains. For example, a mean-field game arises in cyber-security, where a large number of agents such as terminal nodes or servers make individual decisions about their security [6,7]. However, the ultimate security depends on the decisions made by the other agents as well. For example, consider a network of computers with a large number of agents, each managing one computer. If an agent invests heavily in building firewalls, its computer can still be breached if the other agents' computers are not secure. In the security game, each agent invests a certain amount to attain a security level. However, the investment level depends on the investments made by the other agents. If the number of agents is large, the game can be modeled as a mean-field game, since the average investment made per agent impacts the decision of an agent.
Another example of a mean-field game is the demand response price in the smart grid [8,9]. The utility company sets a price based on the average demand per household. Hence, if at a certain time the average demand is high, the utility company may increase the price. The agent now might want to reduce its own consumption to decrease its costs resulting from the increased price. Mean-field equilibrium is the equilibrium concept in the mean-field game.

1.2. Contribution

We seek to obtain a model-based RL algorithm to find the mean-field equilibrium in an episodic setup. To the best of our knowledge, this is the first work on an episodic RL algorithm for the mean-field equilibrium. We consider an oblivious strategy [10,11], where each agent takes an action based only on its own state. Thus, even though the transition probability and reward depend on the empirical distribution of the agents' actions and states, an agent only seeks a policy based on its own state. Hence, an agent does not need to track the policies or states of the other agents.
In our algorithm, we maintain a history of the samples, and at the start of each episode we sample an MDP that fits the history with high confidence. The policy is then computed for this MDP. The agents play the computed policy and collect samples over the steps of the episode. We show that such an algorithm converges to an equilibrium.

1.3. Related Literature

Unlike the standard literature on the mean-field equilibrium in stochastic games [10,12,13], we consider that the transition probabilities are unknown to the agents. Instead, each agent learns the underlying transition probability matrix using a readily implementable posterior sampling approach [14,15]. All agents employ best response dynamics to choose the best response strategy that maximizes the (discounted) payoff for the remaining episode length. We show the asymptotic convergence of the policy and the action distribution to the optimal oblivious strategy and the limiting action distribution, respectively. We estimate the value function using backward induction and show that the value function estimates converge to the optimal value function of the true distribution. We also use the compactness of the state and action spaces to show that the converged point constitutes a mean-field equilibrium (MFE).
Ref. [5] considers a variant of the mean-field game where the state is the same across the agents. Unlike [5], we consider a generalized version of the game where the state can be different for different agents. Further, we do not require an adversarial equilibrium or a coordination equilibrium to be present. We also do not need to track the actions and the realized Q-values of the other agents, as was the case in [5].
Recently, the authors of [16] studied a policy-gradient-based approach to achieve the mean-field equilibrium. The authors of [17,18] considered variants of Q-learning to achieve Nash equilibrium. Actor-critic-based algorithms have been analyzed in [19]. The authors of [20] consider a deep neural network-based algorithm for mean-field games. In contrast, this paper considers a posterior sampling-based approach. As noted in [21], model-based approaches generally converge faster than model-free approaches. Thus, understanding the theoretical properties of model-based approaches is an important problem. Further, all of these papers require large storage space since the policy depends on the actions of the other agents, whereas in our setting we provide a policy that is oblivious of the states and actions of the other agents.
The authors of [22,23] compute the mean-field equilibrium for a setting where the evolution of the state of an agent does not depend on the actions or states of the other agents. In contrast, we consider a generic setting where both the reward and the next state of an agent depend on the actions as well as the states of the other agents. Thus, although the mean-field game reduces to a potential game in the above papers, in our setting it does not. Hence, finding an equilibrium strategy is more challenging in our setting. The key contribution of this paper is a posterior sampling-based algorithm, used by each agent in a multi-agent setting, which is shown to converge to a mean-field equilibrium. The proposed algorithm does not assume knowledge of the transition probabilities and learns them using a posterior sampling approach. Further, our algorithm is computationally efficient, since the policy only depends on the current state of an agent, i.e., it is oblivious of the states and actions of the other agents.

2. Background

2.1. Multi-Player Stochastic Game

An $n$-player stochastic game is formalized by the system dynamics tuple $\mathcal{M} = \{\mathcal{S}, \mathcal{A}, P, r, \tau, \rho, \gamma\}$. The agents are indexed by the set $[n] = \{1, 2, \ldots, n\}$. The state of the $i$th agent at time $t$ is given by $s_{i,t} \in \mathcal{S}$, where $\mathcal{S}$ is the state space. $\mathcal{A}(s)$ is defined as the set of feasible actions any agent can take in state $s$, and the action space is $\mathcal{A} = \cup_{s \in \mathcal{S}} \mathcal{A}(s)$. We also assume that both $\mathcal{S}$ and $\mathcal{A}$ are finite sets. Since finite sets are also compact, this assumption allows us to use results from the previous work of [10]. Since we have $n$ agents in our system, the combined state space of the system becomes $\mathcal{S}^{\times n} = \mathcal{S} \times \mathcal{S} \times \cdots \times \mathcal{S}$, and the combined action space becomes $\mathcal{A}^{\times n} = \mathcal{A} \times \mathcal{A} \times \cdots \times \mathcal{A}$. Let $\mathbf{s}_t = \mathbf{s} \in \mathcal{S}^{\times n}$ be a vector of length $n$ whose $i$th element denotes the state of the $i$th agent at time $t$. Similarly, let $\mathbf{a}_t = \mathbf{a} \in \mathcal{A}^{\times n}$ be a vector of length $n$ whose $i$th element denotes the action taken by the $i$th agent at time $t$.
If the agents play a joint action $\mathbf{a}_t = \mathbf{a} \in \mathcal{A}^{\times n}$, the next state of the system $\mathbf{s}_{t+1} = \mathbf{s}' \in \mathcal{S}^{\times n}$ follows the probability distribution $P(\mathbf{s}_{t+1} = \mathbf{s}' \mid \mathbf{s}_t = \mathbf{s}, \mathbf{a}_t = \mathbf{a})$. Along with the state update, the $i$th agent also receives a reward $r_{i,t} = R_i(\mathbf{s}_t, \mathbf{a}_t, \mathbf{s}_{t+1}) \in [0, 1]$. We further assume that the reward function $R_i(\cdot)$ is independent of the agent $i$, and that each agent optimizes the same reward function; hence, we can drop the subscript $i$ in $R_i$. However, since each agent can be in a different state or play a different action, the individual rewards differ, and we keep the subscript $i$ in the instantaneous reward $r_{i,t}$ of the $i$th agent. The constant $\gamma \in [0, 1)$ is the discount factor, and $\rho$ is the initial state distribution such that $\mathbf{s}_0 \sim \rho$.
We consider an episodic framework where the time horizon, i.e., the length of an episode, is $\tau$. The state space $\mathcal{S}$, the action space $\mathcal{A}$, and $\tau$ are known and need not be learned by the agent. The game is played in episodes $k = 0, 1, 2, \ldots$. In each episode, the game is played in discrete steps $j \in [\tau] = \{0, 1, \ldots, \tau - 1\}$. Episode $k$ begins at time $t_k = k\tau$. At each time $t$, the state of agent $i$ is $s_{i,t}$; the agent selects an action $a_{i,t}$, observes a scalar reward $r_{i,t}$, and transitions to the state $s_{i,t+1}$. Let $H_{i,t} = (s_{i,1}, a_{i,1}, r_{i,1}, \ldots, s_{i,t-1}, a_{i,t-1}, r_{i,t-1}, s_{i,t})$ denote the history available to agent $i$ up to time $t$.
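For concreteness, the following is a minimal sketch (not from the paper; the container name `EpisodicGame` and its fields are assumptions) of the known part of the tuple $\mathcal{M}$, namely $\mathcal{S}$, $\mathcal{A}$, $\tau$, $\gamma$, and $\rho$; the transition kernel and reward are the unknown parts that each agent must learn online.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class EpisodicGame:
    """Known part of the tuple M; the kernel P and reward r are unknown and learned online."""
    n_states: int        # |S|, states indexed 0, ..., |S|-1
    n_actions: int       # |A|, actions indexed 0, ..., |A|-1
    tau: int             # episode length (time horizon)
    gamma: float         # discount factor in [0, 1)
    rho: np.ndarray      # initial state distribution over S

    def sample_initial_state(self, rng: np.random.Generator) -> int:
        """Draw s_0 ~ rho for one agent."""
        return int(rng.choice(self.n_states, p=self.rho))

# Example with assumed sizes.
game = EpisodicGame(n_states=5, n_actions=2, tau=10, gamma=0.9, rho=np.full(5, 0.2))
```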

2.2. Mean-Field Game

In a game with a large number of players, we might expect that the distribution of agents over the action space carries more meaning than the actions themselves. It is intuitive that a single agent has a negligible effect on the game as the number of agents increases. The effect of other agents on a single agent’s payoff is only via the action distribution of the population. This intuition is formalized in the mean-field game. We now formally define the mean-field game and equilibrium concepts.
First, we define some notation. Let $\alpha_{-i,t}(a) : \mathcal{A} \to [0, 1]$ be the fraction of the agents (excluding agent $i$) that take action $a \in \mathcal{A}$ at time $t$. Mathematically, we have
$$\alpha_{-i,t}(a) = \frac{1}{n-1} \sum_{m \in [n] \setminus \{i\}} \mathbb{1}(a_{m,t} = a),$$
where $\mathbb{1}(a_{m,t} = a)$ is the indicator that agent $m$ takes action $a$ at time $t$. Since we assume an episodic framework, $\alpha_{-i,t}(a)$ can be different at each time index in an episode and also across episodes. The episodic nature of the problem will be used later (in (9)) to define convergence to a value that depends only on the time index within the episode. Further, since each agent selects exactly one action from $\mathcal{A}$, we have
$$\alpha_{-i,t}(a) \geq 0 \quad \forall\, a \in \mathcal{A}, \qquad \text{and} \qquad \sum_{a \in \mathcal{A}} \alpha_{-i,t}(a) = 1.$$
Similar to the distribution of agents over actions, we define $f_{-i,t} : \mathcal{S} \to [0, 1]$ as the distribution of agents (excluding agent $i$) over the state space $\mathcal{S}$:
$$f_{-i,t}(s) = \frac{1}{n-1} \sum_{m \in [n] \setminus \{i\}} \mathbb{1}(s_{m,t} = s).$$
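As an illustration (not part of the paper; the function names are ours), these empirical distributions can be computed directly from the joint action and state vectors:

```python
import numpy as np

def empirical_action_dist(actions: np.ndarray, i: int, n_actions: int) -> np.ndarray:
    """alpha_{-i,t}: fraction of agents other than i taking each action."""
    others = np.delete(actions, i)                          # drop agent i's action
    return np.bincount(others, minlength=n_actions) / others.size

def empirical_state_dist(states: np.ndarray, i: int, n_states: int) -> np.ndarray:
    """f_{-i,t}: fraction of agents other than i in each state."""
    others = np.delete(states, i)
    return np.bincount(others, minlength=n_states) / others.size

# Example: 4 agents, 2 actions; from agent 0's view the others split 2/3 vs 1/3.
actions_t = np.array([0, 1, 0, 0])
print(empirical_action_dist(actions_t, i=0, n_actions=2))   # [0.667, 0.333]
```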
In a mean-field game, every agent $i \in [n]$ assumes that its next state $s_{i,t+1}$ is randomly distributed according to a transition probability distribution $P_i$ conditioned on the agent's current state $s_{i,t}$, the action taken $a_{i,t}$, and the other agents' distribution over actions $\alpha_{-i,t}$. Also, the reward is a function of the agent's current state, its action, the action distribution, and the next state:
$$s_{i,t+1} \sim P_i(\cdot \mid s_{i,t}, a_{i,t}, \alpha_{-i,t})$$
$$r_{i,t} = \phi(s_{i,t}, a_{i,t}, \alpha_{-i,t}, s_{i,t+1})$$
Thus, the agent does not need to concern itself with the actions of the other agents, as the population action distribution $\alpha_{-i,t}$ becomes a part of the environment. These updated environment dynamics can now be used in decision-making. Note that the population action distribution $\alpha_{-i,t}$ could also be explicitly taken into account when deciding on an action. Note also that the reward may depend on $f$, the state distribution of the other agents; the analysis would be similar.
Example 1.
We now provide an example drawn from a real-life application that matches our setting. Consider the scenario of malware spreading. The state $X_{i,t} = 0$ means that agent $i$ is vaccinated and cannot be infected at time $t$. Otherwise, $X_{i,t}$ can vary between 0 and 1 with $n$ quantization levels. The action space of an agent is $a_{i,t} \in \{0, 1\}$. If $a_{i,t} = 0$, the agent does not take any action. If $a_{i,t} = 1$, agent $i$ takes an action in order to protect itself. To simplify the model, we consider the following state evolution:
$$X_{i,t+1} = \begin{cases} \min\{X_{i,t} + \omega_{i,t}\,\alpha_{-i,t}(0),\ 1\} & \text{if } a_{i,t} = 0 \\ 0 & \text{if } a_{i,t} = 1 \end{cases}$$
Note that if all the other agents have taken action 1, i.e., they protect themselves, then an agent has a smaller chance of being infected. Thus, the state is smaller when the fraction of agents taking action 0 (i.e., $\alpha_{-i,t}(0)$) is smaller. Here, $\omega_{i,t}$ is a noise term taking values in $\{1/n, 2/n, \ldots, 1\}$.
The reward function for agent $i$ is defined as $r_{i,t} = X_{i,t}\,\alpha_{-i,t}(1) - \lambda\, a_{i,t}$. The reward increases when more agents have protected themselves, and $\lambda$ is the cost associated with taking the protective action. This example illustrates the necessity of computing the mean-field equilibrium.
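A minimal simulation sketch of this example (illustrative only; the number of agents, the cost $\lambda$, and the placeholder random policy are assumed values) shows how the mean action $\alpha_{-i,t}(0)$ enters each agent's dynamics and reward:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, lam = 50, 20, 0.3                 # number of agents, horizon, action cost (assumed values)
X = np.zeros(n)                         # infection levels, all agents start vaccinated

for t in range(T):
    a = rng.integers(0, 2, size=n)                      # placeholder policy: random actions
    frac0 = (np.sum(a == 0) - (a == 0)) / (n - 1)       # alpha_{-i,t}(0), one entry per agent i
    frac1 = 1.0 - frac0                                 # alpha_{-i,t}(1)
    omega = rng.integers(1, n + 1, size=n) / n          # noise in {1/n, ..., 1}
    reward = X * frac1 - lam * a                        # r_{i,t} = X_{i,t} alpha_{-i,t}(1) - lambda a_{i,t}
    X = np.where(a == 0, np.minimum(X + omega * frac0, 1.0), 0.0)   # state evolution above
```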
Note that, in general, the action of an agent should depend on the action distribution of the other agents. However, Proposition 1 of [10] shows that, under equilibrium, an oblivious strategy performs as well as a strategy that considers the other agents' actions. Thus, the strategy does not need to explicitly consider the value of $\alpha_{-i,t}$.
Definition 1.
An agent $i \in [n]$ is said to follow an oblivious deterministic strategy $\pi_i$ when the agent selects an action considering only the time index $j$ in an episode and its current state $s_{i,j}$:
$$\pi_i : \mathcal{S} \times [\tau] \to \mathcal{A}$$
$$a_{i,j} = \pi_i(s_{i,j}, j)$$
For the rest of the paper, we focus on oblivious deterministic strategies for all agents.
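Since $\mathcal{S}$ and $[\tau]$ are finite, an oblivious deterministic strategy can be stored as a simple lookup table indexed by state and time index; a minimal sketch (illustrative only, with assumed sizes):

```python
import numpy as np

n_states, tau = 5, 10
# pi[s, j] is the action taken in state s at time index j of an episode.
pi = np.zeros((n_states, tau), dtype=int)     # placeholder policy: always play action 0

def act(pi: np.ndarray, s: int, j: int) -> int:
    """a_{i,j} = pi(s_{i,j}, j): the action depends only on the agent's own state and time index."""
    return int(pi[s, j])
```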

2.3. Value Function, Q Function and Policy

We now define the value function of an agent $i \in [n]$ following an oblivious policy $\pi_i$, at the $l$th time step of an episode, as:
$$V_{i,\pi_i,l}(s \mid \alpha_{-i,l}) = \mathbb{E}_{P_i, \pi_i}\left[ \sum_{j=l}^{\tau-1} \gamma^{\,j-l}\, r_{i,j} \;\middle|\; s_{i,l} = s \right] = \mathbb{E}_{P_i, \pi_i}\left[ \sum_{j=l}^{\tau-1} \gamma^{\,j-l}\, \phi(s_{i,j}, a_{i,j}, \alpha_{-i,j}, s_{i,j+1}) \;\middle|\; s_{i,l} = s \right].$$
The expectation in Equation (8) is taken over the actions taken from time step $l$ and the states visited after time step $l$ in an episode. We will consider the rest of the definitions from the perspective of some agent $i \in [n]$, so the subscripts $i$ and $-i$ will be dropped for brevity.
We note that the action space and state space are finite, and hence the set of strategies available to the players is also finite. Each player adopts lower-myopic best response dynamics to choose its policy. A lower-myopic policy selects the action with the lowest index among the actions that maximize the value function. As time proceeds, the strategies and the action distribution converge to the asymptotic equilibrium [10].
Let $\alpha_j \in [0, 1]^{|\mathcal{A}|}$ be the limiting population action distribution for the $j$th time index of an episode. We note that, due to the episodic framework, the limiting action distribution depends on the index within an episode. Then, from the definition of a limit, for every $\epsilon > 0$ there exists a $K_\epsilon < \infty$ such that for all $k > K_\epsilon$, we have
$$\|\alpha_{k\tau + j} - \alpha_j\|_2 < \epsilon,$$
where $\|\cdot\|_2$ denotes the $\ell_2$ norm.
The value function defined in Equation (8) satisfies the Bellman-property for finite horizon MDPs, given by
$$V_{\pi,l}(s \mid \alpha_l) = \mathbb{E}_{P_i, \pi_i}\left[ \sum_{j=l}^{\tau-1} \gamma^{\,j-l}\, r_{i,j} \;\middle|\; s_{i,l} = s \right]$$
$$= \sum_{s' \in \mathcal{S}} P_i(s' \mid s, a, \alpha_l)\left( r_{i,l} + \mathbb{E}_{P_i, \pi_i}\left[ \sum_{j=l+1}^{\tau-1} \gamma^{\,j-l}\, r_{i,j} \;\middle|\; s_{i,l+1} = s' \right] \right)$$
$$= \bar{r}_l + \gamma \sum_{s' \in \mathcal{S}} P(s' \mid s, a, \alpha_l)\, V_{\pi,l+1}(s' \mid \alpha_{l+1}),$$
where $\bar{r}_l = \sum_{s' \in \mathcal{S}} P(s' \mid s, a, \alpha_l)\, r_l$ and $a = \pi(s, l)$.
Similarly, we also define the Q-function as:
$$Q_{\pi,l}(s, a \mid \alpha_l) = \bar{r}_l + \gamma \sum_{s' \in \mathcal{S}} P(s' \mid s, a, \alpha_l)\, V_{\pi,l+1}(s' \mid \alpha_{l+1}).$$
We further consider that the agents are strategic and hence care only about their individual rewards. The goal of each agent is to find an optimal oblivious policy $\pi^*$ such that
$$V_{\pi^*,l}(s \mid \alpha_l) \geq V_{\pi,l}(s \mid \alpha_l) \quad \forall\, s \in \mathcal{S},\ l \in [\tau].$$
Let $\alpha = [\alpha_0, \ldots, \alpha_{\tau-1}] \in [0, 1]^{\tau \times |\mathcal{A}|}$; then we can define the optimal oblivious strategy:
Definition 2.
The set $\mathcal{P}(\alpha)$ is the set of optimal oblivious strategies chosen from the Q-function generated by $\alpha$. In other words, for a given $\alpha$, a policy $\bar{\pi} \in \mathcal{P}(\alpha)$ if and only if
$$\bar{\pi}(s, l) \in \arg\max_a Q_{\bar{\pi},l}(s, a \mid \alpha_l) \quad \forall\, s \in \mathcal{S},\ l \in [\tau].$$
Here, the policy $\bar{\pi}(s, l)$ is used at the $l$th time index of an episode so that the Q-value $Q_{\bar{\pi},l}(s, a)$ is maximized for all states $s \in \mathcal{S}$. Note that $\bar{\pi}(s, l)$ does not depend on the distribution $\alpha$ explicitly; hence, it is an oblivious strategy where each agent takes its decision based only on its own observed state. Since the reward function is bounded and $\gamma < 1$, the set $\mathcal{P}(\alpha)$ is always non-empty. However, finding the optimal action is challenging for an oblivious strategy profile. We denote the initial population state distribution by $f_0$. We note that as $\alpha_t$ evolves, the population state distribution $f_t$ also evolves. After convergence, for a time index $j$ in any episode, the population state distribution converges to the limiting population state distribution $f_j$, i.e.,
$$\|f_{k\tau + j} - f_j\|_2 \to 0.$$

2.4. Stationary Mean-Field Equilibrium

Throughout this paper, we seek to compute a mean-field equilibrium and the corresponding action strategy. Thus, the action taken by an agent depends only on its state, independent of the episode. Further, such an action profile should converge to a stationary action distribution and state distribution. We now formally define a mean-field equilibrium.
Definition 3
([24]). We say that a Mean-Field Equilibrium (MFE) is achieved by an oblivious strategy $\bar{\pi}(\cdot)$ if the strategy of the players, the population action distribution, and the state distribution are such that:
• Each player $i$ optimizes its expected discounted payoff assuming that the population action distribution $\alpha$ is fixed; i.e., it satisfies (15).
• For the strategy $\bar{\pi}$ of any player $i$, the fixed population action distribution $\alpha$ satisfies
$$\alpha_j(a) = \frac{1}{n} \sum_{s} \mathbb{1}\big(\bar{\pi}(s, j) = a\big)\, p(s \mid s, a, \alpha_{j-1}) \qquad \forall\, j \in [\tau].$$
We denote this condition as $\alpha \in \hat{D}(\bar{\pi}, f)$.
• For $\alpha$ and $\bar{\pi}$, the state distribution $f$ satisfies
$$f_j(s') = \sum_{s} f(s)\, p(s' \mid s, \pi(s), \alpha_{j-1}) \qquad \forall\, j \in [\tau],$$
which we denote as $f \in D(\bar{\pi}, \alpha)$.
Specifically, if we fix $\alpha$ and each agent plays an action according to $\bar{\pi} \in \mathcal{P}(\alpha)$, then the action distribution should map back to $\alpha$, as it is invariant under the transition probability. Further, if we fix $\alpha$ and each agent plays $\bar{\pi}$, the state distribution should map back to $f$.
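For intuition, the following sketch (illustrative only; the function names and the callable transition kernel `p(s, a, alpha)` are assumptions) evaluates the two consistency maps numerically for a candidate policy at a single time index:

```python
import numpy as np

def induced_state_dist(f, pi_j, p, alpha_prev):
    """One step of the map D(pi, alpha): f_j(s') = sum_s f(s) p(s'|s, pi(s), alpha_prev)."""
    n_states = f.shape[0]
    f_next = np.zeros(n_states)
    for s in range(n_states):
        f_next += f[s] * p(s, pi_j[s], alpha_prev)   # p(.) returns a distribution over next states
    return f_next

def induced_action_dist(f_j, pi_j, n_actions):
    """The map D_hat(pi, f): alpha_j(a) aggregates f_j(s) over states s with pi(s, j) = a."""
    alpha = np.zeros(n_actions)
    for s, a in enumerate(pi_j):
        alpha[a] += f_j[s]
    return alpha

# A candidate (pi, f, alpha) is an MFE when pi is optimal for alpha and both maps
# reproduce f and alpha (up to a numerical tolerance) at every time index j.
```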
Since the players are learning an oblivious strategy, no agent observes the states or actions of the other agents. Moreover, an agent does not know the probability transition matrix or the reward function and tries to estimate them from past observations, as described in the next section.

3. Proposed Algorithm

In this section, we propose an algorithm, which will be shown to converge to the mean-field equilibrium (MFE) in the following section. For each agent $i$, the algorithm begins with a prior distribution $g$ over stochastic games with state space $\mathcal{S}$, action space $\mathcal{A}$, and time horizon $\tau$. The prior distribution $g$ for modeling the state transition probabilities is typically taken to be a Dirichlet distribution [14,15].
The game is played in episodes $k = 0, 1, 2, \ldots$, each of length $\tau$. In each episode, the game is played in discrete steps $j = 0, 1, \ldots, \tau - 1$. Episode $k$ begins at time $t_k = k\tau$. At each time $t = k\tau + j$, the state of the agent is $s_t$; it selects an action $a_t$, observes a scalar reward $r_t$, and transitions to the state $s_{t+1}$. Let $H_t = (s_1, a_1, r_1, \ldots, s_{t-1}, a_{t-1}, r_{t-1}, s_t)$ denote the history of the agent up to time $t$.
The proposed algorithm is described in Algorithm 1. At the beginning of each episode, an MDP $M_k$ is sampled from the posterior distribution conditioned on the history $H_{t_k}$ (Line 4). We note that sampling an MDP amounts to sampling the transition probabilities $P$ and the reward distribution, since the remaining parameters are known. The algorithm does not perform explicit exploration as an $\epsilon$-greedy algorithm does; instead, it samples a new MDP $M_k$ for episode $k$ in Line 4 and generates a new trajectory from the policy solved for the sampled MDP $M_k$ [14,15]. We assume that after some number of episodes, $\alpha_k$ has converged. The proposed algorithm converges since the induced transition probabilities and reward function converge once $\alpha_k$ converges.
Algorithm 1 Proposed Algorithm for Mean-Field Game with Best Response Learning Dynamics.
1: Input: prior distribution $g$, time horizon $\tau$, discount factor $\gamma$
2: Initialize $H_0 = \emptyset$.
3: for episodes $k = 0, 1, 2, \ldots$ do
4:  Sample $M_k \sim g(\cdot \mid H_{k\tau})$.
5:  Obtain the optimal $Q$ for $M_k$ from Algorithm 2.
6:  for time steps $j = 0, \ldots, \tau - 1$ do
7:   Play $a_j = \arg\max_a Q_j(s_j, a)$.
8:   Observe the reward $r_j$, the action taken $a_j$, and the next state $s_{j+1}$.
9:   Append the action taken $a_j$, the reward obtained $r_j$, and the state update $s_{j+1}$ to the history: $H_{k\tau + j + 1} = H_{k\tau + j} \cup \{a_j, r_j, s_{j+1}\}$.
10:  end for
11: end for
We use the backward induction algorithm [25], described in Algorithm 2, to obtain the Q-value function for the currently sampled MDP (Line 5 of Algorithm 1). Backward induction in Algorithm 2 starts from the end of the episode and calculates the potential maximum reward for each state and action (Line 5). The algorithm then moves backward through the episode (Line 8) and calculates the maximum possible cumulative reward for each state and action in Line 11. After all the time indices in an episode are covered, the algorithm returns the computed optimal Q-values. We obtain the policy $\pi_k$ from the computed Q-values, and the policy is not altered within an episode. Recall that, for a given $\alpha$, a policy $\pi \in \mathcal{P}(\alpha)$ if and only if $\pi_k(s, j) \in \arg\max_a Q_{\pi_k, j}(s, a \mid \alpha_j)$ for all $s \in \mathcal{S}$ and $j = 0, 1, \ldots, \tau - 1$. Let $\alpha_k$ be the population action distribution in episode $k$; then the algorithm aims to choose a policy $\pi_k \in \mathcal{P}(\alpha_k)$. In order to choose the policy $\pi_k$ from the set $\mathcal{P}(\alpha_k)$, we use lower-myopic learning dynamics, where in each episode we choose the strategy with the smallest action index in the set $\mathcal{P}(\alpha_k)$.
Algorithm 2 Backwards Induction Algorithm.
1: Input: $M = \{\mathcal{S}, \mathcal{A}, P, r, \tau, \gamma\}$    ▹ Sampled MDP from Algorithm 1
2: Initialize $Q_l(s, a) = 0 \quad \forall\, s \in \mathcal{S}, a \in \mathcal{A}, l \in [\tau]$.
3: for state $s \in \mathcal{S}$ do
4:  for action $a \in \mathcal{A}$ do
5:   Update the Q-value function for the last time step: $Q_{\tau-1}(s, a) = \sum_{s' \in \mathcal{S}} P(s' \mid s, a)\, r(s, a, s')$
6:  end for
7: end for
8: for time steps $l = \tau - 2, \ldots, 0$ do
9:  for state $s \in \mathcal{S}$ do
10:   for action $a \in \mathcal{A}$ do
11:    Update the Q-value function: $Q_l(s, a) = \sum_{s' \in \mathcal{S}} P(s' \mid s, a) \big[ r(s, a, s') + \gamma \max_{a'} Q_{l+1}(s', a') \big]$
12:   end for
13:  end for
14: end for
15: Return: $Q_l(s, a) \quad \forall\, l, s, a$
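A minimal Python sketch of this backward induction (illustrative only; it assumes the sampled MDP is given as dense arrays `P[s, a, s']` and `r[s, a, s']`):

```python
import numpy as np

def backward_induction(P: np.ndarray, r: np.ndarray, tau: int, gamma: float) -> np.ndarray:
    """Return Q[l, s, a] for a finite-horizon MDP with kernel P[s, a, s'] and reward r[s, a, s']."""
    n_states, n_actions, _ = P.shape
    Q = np.zeros((tau, n_states, n_actions))
    expected_r = np.einsum("ijk,ijk->ij", P, r)             # sum_{s'} P(s'|s,a) r(s,a,s')
    Q[tau - 1] = expected_r                                 # last time index: immediate reward only
    for l in range(tau - 2, -1, -1):                        # move backward through the episode
        V_next = Q[l + 1].max(axis=1)                       # V_{l+1}(s') = max_{a'} Q_{l+1}(s', a')
        Q[l] = expected_r + gamma * (P @ V_next)            # Bellman backup for step l
    return Q
```

The greedy policy at time index $l$ is then $\pi_k(s, l) \in \arg\max_a Q[l, s, a]$; taking the smallest maximizing index (as `np.argmax` does) matches the lower-myopic tie-breaking rule described above.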
We note that $M_k$ is used in the algorithm instead of the true model $M^*$, which is not known. In order to obtain an estimate, each agent samples a transition probability matrix according to the posterior distribution. Each agent then follows the strategy $\pi_k$ given by the Q-values over the episode. Based on the action taken by each agent, the value function and the Q-function are updated using the obtained rewards, which depend on the value of $\alpha_k$. The detailed steps are given in Algorithm 1. We note that, as the algorithm converges, the value of $\alpha$ converges, and thus all the transition probabilities and value functions depend on the limiting distribution.
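A minimal sketch of the per-agent loop of Algorithm 1 (illustrative only: it uses a Dirichlet posterior over next-state distributions, a simple empirical mean in place of a full reward posterior, the `backward_induction` sketch above, and an assumed environment interface `env_step(s, a) -> (reward, next_state)` that hides the other agents and hence the dependence on $\alpha_k$):

```python
import numpy as np

def run_psrl_agent(env_step, n_states, n_actions, tau, gamma, n_episodes, seed=0):
    """Posterior-sampling loop for one agent; env_step(s, a) -> (reward, next_state)."""
    rng = np.random.default_rng(seed)
    counts = np.ones((n_states, n_actions, n_states))   # Dirichlet(1,...,1) prior over next states
    r_sum = np.zeros((n_states, n_actions, n_states))   # accumulated rewards per (s, a, s')
    for k in range(n_episodes):
        # Line 4: sample an MDP M_k from the posterior given the history so far.
        P_k = np.array([[rng.dirichlet(counts[s, a]) for a in range(n_actions)]
                        for s in range(n_states)])
        r_k = r_sum / np.maximum(counts - 1.0, 1.0)      # empirical mean reward; zero where unvisited
        # Line 5: solve the sampled MDP by backward induction (Algorithm 2).
        Q = backward_induction(P_k, r_k, tau, gamma)
        # Lines 6-9: play greedily within the episode and record the transitions.
        s = 0                                            # assumed initial state (could be drawn from rho)
        for j in range(tau):
            a = int(np.argmax(Q[j, s]))                  # lower-myopic tie-breaking
            reward, s_next = env_step(s, a)
            counts[s, a, s_next] += 1.0
            r_sum[s, a, s_next] += reward
            s = s_next
```

In a multi-agent run, every agent executes this loop in parallel; the coupling through $\alpha_k$ enters only through the rewards and transitions returned by the environment.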

4. Convergence Result

In this section, we show that if the oblivious strategy is chosen according to the proposed algorithm, then the oblivious strategy $\pi$ and the limiting population action distribution $\alpha$ constitute a Mean-Field Equilibrium (MFE). More formally, we obtain the following:
Theorem 1.
The optimal oblivious strategy obtained from Algorithm 1 and the limiting action distribution constitute a mean-field equilibrium, and the value function obtained from the algorithm converges to the optimal value function of the true distribution.
The rest of this section proves this result. We first note that the lower-myopic best response strategy leads to convergence of the action strategy, following the results in [10] for finite action and state spaces. We note that multiple actions might maximize the state-action value function, which may lead to different actions being chosen at different iterations for the same state. To avoid oscillations between the best actions and hence keep the policy stable, we choose a lower-myopic strategy. This lower-myopic strategy avoids conflicts when the agents have a non-unique strategy that maximizes the value function. Further, any way of resolving the multiple optima, including an upper-myopic rule, would give the same result. Having shown that $\alpha$ converges, we now proceed to show that the converged point of the algorithm results in an MFE.
We first state the conditions needed for a policy $\pi$, a population state distribution $f$, and an action distribution $\alpha$ to constitute an MFE (Section 4.1). Then, we show that the conditions of Section 4.1 are met by any optimal oblivious strategy (Section 4.2). Thus, the key remaining property is that the proposed algorithm leads to an optimal oblivious strategy. To show this, we prove that the value function of the sampled model converges to that of the true model (Section 4.3). The result in Section 4.3 shows that the value function iterates eventually converge to the value function under the true underlying transition probabilities $M^*$; hence, the proposed algorithm converges to an optimal oblivious strategy, which constitutes a mean-field equilibrium, proving the theorem.

4.1. Conditions for a Strategy to Be a MFE

In this section, we describe the conditions for an oblivious strategy $\pi$ to be an MFE. Recall that, in Section 2.2, we defined the two maps $\mathcal{P}(\alpha)$ and $D(\pi, \alpha)$. For a given action-coupled stochastic game, the map $\mathcal{P}(\alpha)$ for a given population action distribution $\alpha$ gives the set of optimal oblivious strategies. Further, the map $D(\pi, \alpha)$ for a given population action distribution $\alpha$ and oblivious strategy $\pi$ gives the set of invariant population state distributions $f$.
We also define the map $\hat{D}(\pi, f)$, which gives the population action distribution $\alpha$ induced by the oblivious strategy $\pi$ and the population state distribution $f$. The following lemma gives the conditions under which the stochastic game constitutes a mean-field equilibrium. These conditions are provided in [11], and the reader is referred to [11] for further details and the proof of this result.
Lemma 1
(Definition 7 of [11]). An action-coupled stochastic game with strategy $\pi$, population state distribution $f$, and population action distribution $\alpha$ constitutes a mean-field equilibrium if $\pi \in \mathcal{P}(\alpha)$, $f \in D(\pi, \alpha)$, and $\alpha \in \hat{D}(\pi, f)$.

4.2. Conditions of Lemma 1 Are Met for Any Optimal Oblivious Strategy

In this section, we show that the conditions of Lemma 1 are met by any optimal oblivious strategy. In the mean-field equilibrium, each agent plays according to a strategy $\pi \in \mathcal{P}(\alpha)$. If the average population action distribution is $\alpha$ and each agent follows an oblivious strategy, then the evolution of the states must be such that the oblivious strategy applied to those states leads to an average action distribution of $\alpha$. Since we assume a large number of agents, including agent $i$'s own state does not change the population state distribution. So, we let the average population state distribution at time index $j$ be
$$f_j(s) = \frac{1}{n} \sum_{m \in [n]} \mathbb{1}(s_{m,j} = s),$$
where $s_{m,j}$ is the state of agent $m$ at time index $j$. Similarly, we also include agent $i$'s own action in the population action distribution $\alpha$. Then, we have
$$\mathbb{E}[f_j(s)] = \mathbb{E}\left[\frac{1}{n} \sum_{m \in [n]} \mathbb{1}(s_{m,j} = s)\right]$$
$$= \frac{1}{n} \sum_{m \in [n]} \mathbb{E}\left[\mathbb{1}(s_{m,j} = s)\right]$$
$$= \frac{1}{n} \sum_{m \in [n]} \mathbb{P}\{s_{m,j} = s\}$$
$$= \frac{1}{n} \sum_{m \in [n]} \sum_{s' \in \mathcal{S}} \mathbb{P}\{s_{m,j-1} = s'\}\, P\big(s \mid s', \pi(s'), \alpha_{j-1}\big)$$
$$= \sum_{s' \in \mathcal{S}} \frac{1}{n} \sum_{m \in [n]} \mathbb{P}\{s_{m,j-1} = s'\}\, P\big(s \mid s', \pi(s'), \alpha_{j-1}\big)$$
$$= \sum_{s' \in \mathcal{S}} \mathbb{E}\big[f_{j-1}(s')\big]\, P\big(s \mid s', \pi(s'), \alpha_{j-1}\big),$$
where $\mathbb{P}\{s_{m,j} = s\}$ is the probability of agent $m$ being in state $s$ at time index $j$. Here, Equation (24) follows from the transition probability matrix. Recursively replacing $f_{j-1}$ in Equation (26) using Equation (21) for all $j \in [\tau]$ gives the required result $f \in D(\pi, \alpha)$.
The above statement also implies that α must satisfy
$$\alpha_j(a) = \sum_{s \in \pi^{-1}(a, j)} f_j(s),$$
where $\pi^{-1}(a, j)$ denotes the set of states $s$ for which $a \in \pi(s, j)$. This is equivalent to saying that if all the agents follow the optimal oblivious strategy $\pi \in \mathcal{P}(\alpha)$, then the population state distribution $f$ and the population action distribution $\alpha$ satisfy $f \in D(\pi, \alpha)$ and $\alpha \in \hat{D}(\pi, f)$.

4.3. Sampling Does Not Lead to a Gap for Expected Value Function

In the last subsection, we proved that there exists an optimal oblivious strategy. We now show that the policy $\pi_k$ generated by Algorithm 2 for the system dynamics $M_k$ sampled in episode $k$ by Algorithm 1 converges to the optimal oblivious policy $\bar{\pi}$. To show the convergence of the policy $\pi_k$ to $\bar{\pi}$, we show that the value function of the oblivious policy $\pi_k$ converges to the optimal value function of the true system dynamics.
We will first describe the lemmas that are used to prove the required result. We start by stating the Azuma-Hoeffding Lemma for obtaining confidence intervals.
Lemma 2
(Azuma-Hoeffding Lemma [15]). If $Y_n$ is a zero-mean martingale with almost surely bounded increments, $|Y_i - Y_{i-1}| \leq C$, then for any $\delta \geq 0$, with probability at least $1 - \delta$, $Y_n \leq C\sqrt{2 n \log(1/\delta)}$.
We also utilize the following result of [15] on any $\sigma$-measurable function $g$.
Lemma 3.
If $f$ is the distribution of $M^*$, then for any $\sigma(H_{t_k})$-measurable function $g$,
$$\mathbb{E}[g(M^*) \mid H_{t_k}] = \mathbb{E}[g(M_k) \mid H_{t_k}].$$
At the start of every episode, each agent samples system dynamics from the posterior distribution given $H_{t_k}$. The following result bounds the difference between the optimal value function under the true dynamics $M^*$ with the (unknown) optimal policy $\pi^*$, and the value function achieved under the sampled dynamics $M_k$ with the policy $\pi_k$.
Lemma 4.
Let $V^{M_k}_{\pi_k, j}(s \mid \alpha_j)$ be the optimal value function for the oblivious policy $\pi_k(s, j) = \arg\max_a Q^{M_k}_{\pi_k, j}(s, a \mid \alpha_j)$ under the system dynamics $M_k$ sampled by Algorithm 1. Then $V^{M_k}_{\pi_k, j}(s \mid \alpha_j)$ converges to the optimal value function $V_{\pi^*, j}(s \mid \alpha_j)$ of the true system dynamics $M^*$, i.e., for all states $s \in \mathcal{S}$, as $k \to \infty$, with probability at least $1 - \delta$,
$$V^{M_k}_{\pi_k, j}(s \mid \alpha_j) - V_{\pi^*, j}(s \mid \alpha_j) \to 0.$$
Proof. 
We note that, since the optimal value function is $\sigma(H_{t_k})$-measurable, we can use Lemma 3 to bound the difference between the optimal value functions of the sampled model at episode $k$ and the true model, showing that for all states $s \in \mathcal{S}$,
$$\mathbb{E}\left[ V_{\pi^*, j}(s \mid \alpha_j) - V^{M_k}_{\pi_k, j}(s \mid \alpha_j) \right] = 0.$$
Note that the length of every episode is $\tau$ and the support of the reward is $[0, 1]$. Therefore, for all states $s \in \mathcal{S}$, we have $V_{\pi^*, 0}(s \mid \alpha_0) - V^{M_k}_{\pi_k, 0}(s \mid \alpha_0) \in [-\tau, \tau]$. Note that this condition corresponds to the bounded-increments assumption in the Azuma-Hoeffding Lemma (Lemma 2).
Since $V_{\pi^*, 0}(s \mid \alpha_0) - V^{M_k}_{\pi_k, 0}(s \mid \alpha_0) \in [-\tau, \tau]$ is a zero-mean martingale difference with respect to the filtration $\{H_{t_k} : k = 1, \ldots, m\}$ with bounded increments, it satisfies the assumptions of the Azuma-Hoeffding Lemma. Applying the Azuma-Hoeffding Lemma to the martingale, we have the following result:
$$\sum_{k=1}^{m} \left( V_{\pi^*, 0}(s \mid \alpha_0) - V^{M_k}_{\pi_k, 0}(s \mid \alpha_0) \right) \leq \tau \sqrt{2 m \log(1/\delta)}.$$
For a total time $T$ of the algorithm, we have $m = T/\tau$. Thus, we obtain
$$\sum_{k=1}^{m} \left( V_{\pi^*, 0}(s \mid \alpha_0) - V^{M_k}_{\pi_k, 0}(s \mid \alpha_0) \right) \leq \tau \sqrt{2 \frac{T}{\tau} \log(1/\delta)} = \sqrt{2 T \tau \log(1/\delta)}.$$
Thus, for all $\theta > 1/2$, as $T \to \infty$, we have
$$\frac{\sum_{k=1}^{T/\tau} \left( V_{\pi^*, 0}(s \mid \alpha_0) - V^{M_k}_{\pi_k, 0}(s \mid \alpha_0) \right)}{T^{\theta}} \to 0.$$
Substituting $\theta = 1$, the above expression says that $\tau$ times the average difference per episode converges to zero as the total time $T \to \infty$, which gives us the convergence of the optimal value functions of the two models. Thus, we have
$$V_{\pi^*, 0}(s \mid \alpha_0) - V^{M_k}_{\pi_k, 0}(s \mid \alpha_0) \to 0 \quad \text{as } k \to \infty.$$

5. Conclusions

We consider an action-coupled stochastic game consisting of a large number of agents, where the transition probabilities are unknown to the agents. We utilize the concept of mean-field equilibrium, where each agent's reward and transition probability are impacted only through the mean distribution of the actions of the other agents. When the number of agents grows large, the mean-field equilibrium becomes equivalent to the Nash equilibrium. We propose a posterior sampling-based approach where each agent draws a sample from an updated posterior distribution and selects an optimal oblivious strategy accordingly. We show that the proposed algorithm converges to the mean-field equilibrium without knowing the transition probabilities a priori.
This paper shows asymptotic convergence to the mean-field equilibrium; finding the convergence rate is an interesting direction for future work. Further, characterizing the convergence rate in the number of agents to the mean-field limit (akin to [26], but for the mean-field game rather than mean-field control) is another important direction.

Author Contributions

All the authors contributed significantly to the work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Science Foundation under grants CCF-1527486 and CNS-1618335.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tan, M. Multi-agent reinforcement learning: Independent versus cooperative agents. In Proceedings of the 10th International Conference on International Conference on Machine Learning, Amherst, MA, USA, 27–29 June 1993; Morgan Kaufmann Publishers Inc.: Burlington, MA, USA, 1993; pp. 330–337. [Google Scholar]
  2. Panait, L.; Luke, S. Cooperative multi-agent learning: The state of the art. Auton. Agents Multi-Agent Syst. 2005, 11, 387–434. [Google Scholar] [CrossRef]
  3. Littman, M.L. Friend-or-foe Q-learning in general-sum games. In Proceedings of the ICML, Williamstown, MA, USA, 28 June–1 July 2001; Volume 1, pp. 322–328. [Google Scholar]
  4. Hu, J.; Wellman, M.P. Nash Q-learning for general-sum stochastic games. J. Mach. Learn. Res. 2003, 4, 1039–1069. [Google Scholar]
  5. Yang, Y.; Luo, R.; Li, M.; Zhou, M.; Zhang, W.; Wang, J. Mean Field Multi-Agent Reinforcement Learning. arXiv 2018, arXiv:1802.05438. [Google Scholar]
  6. Miao, L.; Wang, L.; Li, S.; Xu, H.; Zhou, X. Optimal defense strategy based on the mean field game model for cyber security. Int. J. Distrib. Sens. Netw. 2019, 15, 1550147719831180. [Google Scholar] [CrossRef] [Green Version]
  7. Kolokoltsov, V.N.; Bensoussan, A. Mean-field-game model for botnet defense in cyber-security. Appl. Math. Optim. 2016, 74, 669–692. [Google Scholar] [CrossRef] [Green Version]
  8. Li, J.; Xia, B.; Geng, X.; Ming, H.; Shakkottai, S.; Subramanian, V.; Xie, L. Energy Coupon: A Mean Field Game Perspective on Demand Response in Smart Grids. In Proceedings of the 2015 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, Portland, OR, USA, 15–19 June 2015; pp. 455–456. [Google Scholar]
  9. Farzaneh, H.; Kebriaei, H.; Aminifar, F. Deterministic Mean Field Game for Energy Management in a Utility with Many Users. In Proceedings of the 2018 Smart Grid Conference (SGC), Sanandaj, Iran, 28–29 November 2018; pp. 1–6. [Google Scholar]
  10. Adlakha, S.; Johari, R. Mean field equilibrium in dynamic games with strategic complementarities. Oper. Res. 2013, 61, 971–989. [Google Scholar] [CrossRef] [Green Version]
  11. Adlakha, S.; Johari, R. Mean Field Equilibrium in Dynamic Games with Complementarities. arXiv 2010, arXiv:1011.5677. [Google Scholar]
  12. Weintraub, G.Y.; Benkard, C.L.; Van Roy, B. Markov perfect industry dynamics with many firms. Econometrica 2008, 76, 1375–1411. [Google Scholar]
  13. Adlakha, S.; Johari, R.; Weintraub, G.Y. Equilibria of dynamic games with many players: Existence, approximation, and market structure. J. Econ. Theory 2015, 156, 269–316. [Google Scholar] [CrossRef] [Green Version]
  14. Agrawal, S.; Jia, R. Optimistic posterior sampling for reinforcement learning: Worst-case regret bounds. arXiv 2017, arXiv:1705.07041. [Google Scholar]
  15. Osband, I.; Russo, D.; Van Roy, B. (More) efficient reinforcement learning via posterior sampling. arXiv 2013, arXiv:1306.0940. [Google Scholar]
  16. Subramanian, J.; Mahajan, A. Reinforcement learning in stationary mean-field games. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, Montreal, QC, Canada, 13–17 May 2019; pp. 251–259. [Google Scholar]
  17. Guo, X.; Hu, A.; Xu, R.; Zhang, J. Learning mean-field games. arXiv 2019, arXiv:1901.09585. [Google Scholar]
  18. Anahtarcı, B.; Karıksız, C.D.; Saldi, N. Fitted Q-Learning in Mean-field Games. arXiv 2019, arXiv:1912.13309. [Google Scholar]
  19. Fu, Z.; Yang, Z.; Chen, Y.; Wang, Z. Actor-critic provably finds Nash equilibria of linear-quadratic mean-field games. arXiv 2019, arXiv:1910.07498. [Google Scholar]
  20. Yang, J.; Ye, X.; Trivedi, R.; Xu, H.; Zha, H. Learning deep mean field games for modeling large population behavior. arXiv 2017, arXiv:1711.03156. [Google Scholar]
  21. Pong, V.; Gu, S.; Dalal, M.; Levine, S. Temporal difference models: Model-free deep rl for model-based control. arXiv 2018, arXiv:1802.09081. [Google Scholar]
  22. Mguni, D.; Jennings, J.; de Cote, E.M. Decentralised learning in systems with many, many strategic agents. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018. [Google Scholar]
  23. Elie, R.; Pérolat, J.; Laurière, M.; Geist, M.; Pietquin, O. Approximate fictitious play for mean field games. arXiv 2019, arXiv:1907.02633. [Google Scholar]
  24. Light, B.; Weintraub, G.Y. Mean field equilibrium: Uniqueness, existence, and comparative statics. In Columbia Business School Research Paper; Paper No. 19-3; Columbia Business School Publishing: New York, NY, USA, 2018. [Google Scholar]
  25. Puterman, M.L. Markov Decision Processes: Discrete Stochastic Dynamic Programming; John Wiley & Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
  26. Mondal, W.U.; Agarwal, M.; Aggarwal, V.; Ukkusuri, S.V. On the approximation of cooperative heterogeneous multi-agent reinforcement learning (marl) using mean field control (mfc). arXiv 2021, arXiv:2109.04024. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
