Partially Observable Mean Field Multi-Agent Reinforcement Learning Based on Graph–Attention

Summary Traditional multi-agent reinforcement learning algorithms are difficult to apply in large-scale multi-agent environments. The introduction of mean field theory has enhanced the scalability of multi-agent reinforcement learning in recent years. This paper considers partially observable multi-agent reinforcement learning (MARL), where each agent can only observe other agents within a fixed range. This partial observability affects the agent's ability to assess the quality of the actions of surrounding agents. This paper focuses on developing a method to capture more effective information from local observations in order to select more effective actions. Previous work in this field employs probability distributions or weighted mean fields to update the average actions of neighborhood agents, but it does not fully consider the feature information of surrounding neighbors and can lead to a local optimum. In this paper, we propose a novel multi-agent reinforcement learning algorithm, Partially Observable Mean Field Multi-Agent Reinforcement Learning based on Graph-Attention (GAMFQ), to remedy this flaw. GAMFQ uses a graph attention module and a mean field module to describe how an agent is influenced by the actions of other agents at each time step. The graph attention module consists of a graph attention encoder and a differentiable attention mechanism, and outputs a dynamic graph representing the effectiveness of neighborhood agents with respect to the central agent. The mean field module approximates the effect of a neighborhood agent on a central agent as the average effect of the effective neighborhood agents. We evaluate GAMFQ on three challenging tasks in the MAgent framework. Experiments show that GAMFQ outperforms baselines, including the state-of-the-art partially observable mean-field reinforcement learning algorithms.


INTRODUCTION
Reinforcement learning has been widely used in video games [26] and, more recently, in education [7]. Multi-agent reinforcement learning (MARL) [33] involves multiple autonomous agents that make independent decisions to accomplish specific competitive or cooperative tasks by maximizing a global reward; it has been applied in real-world scenarios such as autonomous mobility [21], drone swarm confrontation [1], and multi-UAV collaborative goods delivery [22]. For example, in drone swarm adversarial tasks, drones need to act based on autonomous decisions. Because some drones inevitably die in the confrontation environment [38], the surviving drones must continually evolve their strategies in real time while interacting with the environment in order to obtain the maximum overall reward. For agents to interact effectively, each agent in the multi-agent system must be able to perceive environmental information and fully acquire the information of surrounding agents.
However, the cost of global communication among many agents is high, and in many practical tasks each agent observes only part of the environmental information. Take autonomous driving as an example: each vehicle makes decisions within a limited observation space, which is a typical locally observable scene. Since each agent can rely only on limited observation information in such an environment, it needs to learn a decentralized strategy. There are two common decentralization strategies. One is Centralized Training and Decentralized Execution (CTDE), which requires agents to communicate with each other during training and to make decisions independently, based on their own observations, during testing, in order to adapt to large-scale multi-agent environments. Classic algorithms using the CTDE framework include MADDPG [15], QMIX [19], and MAVEN [16]. The other is decentralized training with decentralized execution, in which each agent observes only part of the information during both the training and testing phases; this is closer to real environments with limited communication. Large-scale multi-agent environments in particular are complex and non-stationary [10]; it is difficult for agents to observe the entire environment globally, which limits their ability to find the best actions. Furthermore, as the number of agents increases, jointly optimizing over all information in a multi-agent environment yields a huge joint state-action space, which also brings scalability challenges. This paper focuses on the second strategy.
Traditional multi-agent reinforcement learning algorithms are difficult to apply in large-scale multi-agent environments, especially when the number of agents is very large. Recent studies address the scalability issues of multi-agent reinforcement learning [31,30,12] by introducing mean-field theory, i.e., reducing the multi-agent problem to a simple two-agent problem. However, Yang et al. [31] assume that each agent can observe global information, which is unrealistic in some real tasks. It is therefore necessary to study large-scale multi-agent reinforcement learning algorithms for partially observable cases [3]. Researchers have intensively studied mean-field-based multi-agent reinforcement learning algorithms to improve performance in partially observable cases. One approach further decomposes the Q-function of the mean-field-based multi-agent reinforcement learning algorithm [34,6]. Another uses probability distributions or weighted mean fields to update the mean action of the neighborhood agents [5,37,23,28]. Hao [8] combined graph attention with the mean field to calculate the interaction strength between interacting agents, but considered only scenes in which the agents have fixed relative positions and can observe global information. In contrast, under partial observability we consider the dynamic change of agent positions as well as the death of agents, and construct a more flexible partially observable graph attention network based on the mean field.
However, for partially observable multi-agent mean-field reinforcement learning, the existing methods do not fully consider the feature information of the surrounding neighbors, which leads to falling into local optima. This paper focuses on identifying the neighborhood agents that may have a greater influence on the central agent within a limited observation space, in order to avoid the local-optimum issue. Since graph neural networks [29] can fully aggregate the relationship between the central agent and its surrounding neighbors, we propose a graph attention-based mechanism that calculates the importance of neighbor agents to estimate the average action more efficiently.
The main contributions of this paper are as follows: • We propose partially observable mean-field reinforcement learning based on graph attention (GAMFQ), which can learn decentralized agent policies without requiring global information about the environment. In partially observable large-scale settings, where the importance of neighbor agents is otherwise hard to judge, GAMFQ learns this importance explicitly.
• We theoretically demonstrate that the setting of the GAMFQ algorithm is close to a Nash equilibrium.
• Experiments on three challenging tasks in the MAgent framework show that GAMFQ outperforms two baseline algorithms as well as state-of-the-art partially observable mean-field reinforcement learning algorithms.

RELATED WORK
Most recent MARL algorithms addressing partial observability are model-free reinforcement learning algorithms based on the CTDE framework. The classic algorithm MADDPG [15] introduces critics that can observe global information during training to guide actor training, but uses only actors with local observation information to take actions at test time. QMIX [19] uses a hybrid network to combine the local value functions of individual agents and adds global state information during training and learning to improve performance. MAVEN [16] solves complex multi-agent tasks by introducing a latent space for hierarchical control, combining value-mixing and policy-based approaches. However, multi-agent reinforcement learning algorithms using the CTDE framework are difficult to scale to large-scale multi-agent environments, because the global information they rely on becomes hard to observe, preventing the agents from training better policies.
For large-scale multi-agent environments, Yang et al. [31] introduced mean-field theory, which approximates the interaction of many agents as the interaction between the central agent and the average effect of its neighboring agents. However, partially observable multi-agent mean-field reinforcement learning algorithms still have room for improvement. Some researchers further decompose the Q-function of the mean-field-based multi-agent reinforcement learning algorithm. Zhang et al. [34] trained agents through the CTDE paradigm, decomposing each agent's Q-function into a local Q-function and a mean-field Q-function, but this approach is not strictly partially observable. Gu et al. [6] propose a mean-field multi-agent reinforcement learning algorithm with local training and decentralized execution, decomposing the Q-function by grouping the observable neighbor states of each agent so that the Q-function can be updated locally. In addition, some researchers have focused on improving the mean action in mean-field reinforcement learning. Fang et al. [5] add the idea of the mean field to MADDPG and propose a multi-agent reinforcement learning algorithm based on a weighted mean field, so that MADDPG can adapt to large-scale multi-agent environments. Wang et al. [28] propose a weighted mean-field multi-agent reinforcement learning algorithm based on reward attribution decomposition, approximating the weighted mean field as a joint optimization of an implicit reward distribution between a central agent and its neighbors. Zhou et al. [37] use the average action of neighbor agents as a label and train a mean-field prediction network to replace the average action. Subramanian et al. [23] proposed two multi-agent mean-field reinforcement learning algorithms for partially observable settings, POMFQ(FOR) and POMFQ(PDO), which draw samples from Dirichlet or Gamma distributions to estimate the partially observable mean action. Although these methods achieve good results, they do not fully consider the feature information of the surrounding neighbors.
Graph Neural Networks (GNNs) can mine graph structure from data for learning. In multi-agent reinforcement learning, GNNs can be used to model interactions between agents, and recent work has applied graph attention mechanisms to MARL. Zhang et al. [32] integrate the importance of the information of surrounding agents with a multi-head attention mechanism, effectively aggregating the key information of the graph to represent the environment and improving the agents' cooperation strategy through multi-agent reinforcement learning. DCG [2] decomposes the joint value function of all agents into payoffs between pairs of agents according to a coordination graph, which flexibly balances performance and generalization. Li et al. [13] proposed a deep implicit coordination graph (DICG) structure that adapts to dynamic environments and learns implicit reasoning about joint actions or values through graph neural networks. Ruan et al. [20] proposed a graph-based coordination strategy that decomposes the joint team policy into a graph generator and a graph-based coordinated policy to realize coordinated behavior among agents. MAGIC [17] represents the interactions between agents during communication more accurately by modifying the standard graph attention network to be compatible with differentiable directed graphs.
In dynamic MARL systems where competition and confrontation coexist, directly applying graph neural networks is difficult: agents can die, and the graph structure constructed for a large-scale agent system has a large spatial dimension. Nevertheless, graph neural networks can better mine the relationships between features, and combining them with mean-field theory can further strengthen mean-field multi-agent reinforcement learning.
Our approach differs from the related work above in that it uses a graph attention mechanism to select the surrounding agents that are more important to the central agent in a partially observable environment. GAMFQ uses a graph attention module and a mean field module to describe how an agent is influenced by the actions of other agents at each time step: the graph attention module consists of a graph attention encoder and a differentiable attention mechanism and outputs a dynamic graph representing the effectiveness of the neighborhood agents with respect to the central agent, while the mean field module approximates the influence of a neighborhood agent on a central agent as the average influence of the effective neighborhood agents. Used together, these two modules efficiently estimate the average action of the surrounding agents under partial observability. GAMFQ does not require global information about the environment to learn decentralized agent policies.

MOTIVATION & PRELIMINARIES
In this section, we model a discrete-time non-cooperative multi-agent task as a stochastic game (SG). An SG can be defined as a tuple \(\langle S, A^1, \ldots, A^N, r^1, \ldots, r^N, p, \gamma \rangle\), where \(S\) represents the true state of the environment. Each agent \(j \in \{1, \ldots, N\}\) chooses an action \(a^j \in A^j\) at each time step. The reward function for agent \(j\) is \(r^j : S \times A^1 \times \cdots \times A^N \to \mathbb{R}\). The state transition dynamics are \(p : S \times A^1 \times \cdots \times A^N \to \Omega(S)\), and \(\gamma\) is a constant representing the discount factor. The standard solution concept for an SG is the Nash equilibrium, a stable state in which no agent will deviate from its best strategy given the strategies of the others. The disadvantage of the SG formulation is that it scales poorly when many agents coexist. Yang et al. [31] introduced mean field theory, which approximates the interaction among many agents as the interaction between a central agent and the average effect of its neighboring agents, and solves the scalability problem of the SG.
The Nash equilibrium of a general stochastic game can be defined as a strategy tuple \((\pi^1_*, \ldots, \pi^N_*)\): when all other agents follow their equilibrium strategies, no agent can deviate from its own equilibrium strategy and receive a strictly higher reward. When all agents follow the Nash equilibrium strategy, the Nash Q-function of agent \(j\) is \(Q^j_*(s, a)\). A partially observable stochastic game gives rise to a decentralized partially observable Markov decision process (Dec-POMDP); we review the Dec-POMDP in Section 3.1 and analyze the partially observable model from a theoretical perspective. Section 3.2 first introduces globally observable mean-field multi-agent reinforcement learning, then introduces the partially observable mean-field reinforcement learning algorithm (POMFQ) based on the POMDP framework, and analyzes in detail the limitation of the existing partially observable mean-field method POMFQ(FOR) [23]: the feature information of surrounding neighbors is not fully considered. In a partially observable setting, the neighborhood information \(o^j\) observable by each agent \(j\) can be exploited through a graph attention network to better mine the relationships between features. Introducing graph attention networks into partially observable mean-field multi-agent reinforcement learning can therefore further improve its performance; Section 3.3 briefly introduces graph attention networks.

Dec-POMDP
A Dec-POMDP is defined as a tuple \(\langle N, S, \{A^i\}, O, P, \{r^i\}, \gamma \rangle\), where \(N = \{1, \ldots, n\}\) represents the set of agents, \(S\) represents the global state, \(A^i\) represents the action space of the \(i\)-th agent, and \(O\) represents the observation space of the agents. Agent \(i\) receives an observation \(o^i \in O\) through the observation function \(O(s, i) : S \times N \to O\), and the transition function \(P : S \times A^1 \times \ldots \times A^n \times S \to [0, 1]\) gives the probability that the environment transitions from one state to another. At each time step \(t\), agent \(i\) chooses an action \(a^i_t \in A^i\) and receives a reward \(r^i_t : S \times A^i \to \mathbb{R}\) with respect to the current state and action. \(\gamma \in [0, 1]\) is the reward discount factor. Agent \(i\) has a stochastic policy \(\pi^i\) conditioned on its observation \(o^i\) (or on its action-observation history \(\tau^i\)); under the joint policy \(\pi\) of all agents, the value function of agent \(i\) is \(V^i_\pi(s) = \mathbb{E}_\pi\!\left[\sum_t \gamma^t r^i_t \mid s_0 = s\right]\), from which the Q-function \(Q^i_\pi(s, a) = r^i(s, a) + \gamma \, \mathbb{E}_{s'}\!\left[V^i_\pi(s')\right]\) can be formalized. Our work is based on this POMDP framework.
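For concreteness, the tuple above can be captured in code. This is an illustrative container only; all field names (`n_agents`, `obs_fn`, and so on) are our own, not from the paper:

```python
from dataclasses import dataclass
from typing import Callable, Sequence


# A minimal container for the Dec-POMDP tuple <N, S, {A^i}, O, P, {r^i}, gamma>.
@dataclass
class DecPOMDP:
    n_agents: int                  # |N|
    states: Sequence               # global state space S
    actions: Sequence[Sequence]    # per-agent action spaces A^i
    obs_fn: Callable               # O(s, i) -> o^i, the observation function
    transition: Callable           # P(s' | s, a^1..a^n), the transition function
    reward: Callable               # r^i(s, a^i), the per-agent reward
    gamma: float = 0.95            # discount factor in [0, 1]
```

A policy for agent `i` is then any map from its observations `obs_fn(s, i)` to a distribution over `actions[i]`, which is exactly the decentralized setting the paper studies.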

Partially Observable Mean Field Reinforcement Learning
The mean-field reinforcement learning algorithm [31] approximates interactions among multiple agents as two-agent interactions, where the second agent corresponds to the average effect of all other agents. Yang et al. [31] decompose the multi-agent Q-function into pairwise interacting local Q-functions:

\(Q^j(s, a) = \frac{1}{N^j} \sum_{k \in \mathcal{N}(j)} Q^j(s, a^j, a^k),\)    (1)

where \(\mathcal{N}(j)\) is the index set of the neighbors of agent \(j\) and \(a^j\) is the discrete action of agent \(j\), represented by one-hot encoding. The mean-field Q-function is cyclically updated according to Eqs. 2-5:

\(Q^j_{t+1}(s, a^j, \bar{a}^j) = (1 - \alpha) Q^j_t(s, a^j, \bar{a}^j) + \alpha \left[ r^j + \gamma v^j_t(s') \right],\)    (2)

\(v^j_t(s') = \sum_{a^j} \pi^j_t(a^j \mid s', \bar{a}^j) \, Q^j_t(s', a^j, \bar{a}^j),\)    (3)

\(\bar{a}^j = \frac{1}{N^j} \sum_{k \in \mathcal{N}(j)} a^k,\)    (4)

\(\pi^j_t(a^j \mid s, \bar{a}^j) = \frac{\exp\left(\beta \, Q^j_t(s, a^j, \bar{a}^j)\right)}{\sum_{a^{j'}} \exp\left(\beta \, Q^j_t(s, a^{j'}, \bar{a}^j)\right)},\)    (5)

where \(\bar{a}^j\) is the mean action of the neighborhood agents, \(r^j\) is the reward for agent \(j\) at time step \(t\), \(v^j\) is the value function of agent \(j\), and \(\beta\) is the Boltzmann parameter. Yang et al. [31] assume that each agent has global information, so that for the central agent the average action of the neighboring agents can be updated by Eq. 4. However, in a partially observable multi-agent environment, the way of calculating the average action in Eq. 4 is no longer applicable.
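The cyclic update of Eqs. 2-5 can be sketched in tabular form. This is a minimal illustration, assuming the mean action is discretized to an index for table lookup (a simplification of ours; the paper's setting uses function approximation):

```python
import numpy as np


def boltzmann_policy(q_row, beta=1.0):
    """pi(a) proportional to exp(beta * Q(s, a, abar)) -- Eq. 5."""
    z = np.exp(beta * (q_row - q_row.max()))  # shift for numerical stability
    return z / z.sum()


def mean_action(neighbor_actions, n_actions):
    """abar: average of the one-hot actions of the neighbors -- Eq. 4."""
    return np.eye(n_actions)[neighbor_actions].mean(axis=0)


def mfq_update(Q, s, a, m, r, s2, m2, alpha=0.1, gamma=0.95, beta=1.0):
    """One tabular mean-field Q update -- Eqs. 2-3. Q has shape
    (n_states, n_actions, n_mean_bins); m, m2 index the (binned) mean action."""
    pi = boltzmann_policy(Q[s2, :, m2], beta)
    v_next = pi @ Q[s2, :, m2]                         # Eq. 3
    Q[s, a, m] += alpha * (r + gamma * v_next - Q[s, a, m])  # Eq. 2
    return Q
```

The three functions mirror the cycle in the text: estimate the mean action, derive the Boltzmann policy from the current Q-values, and move Q toward the bootstrapped target.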
In the case of partial observability, Subramanian et al. [23] draw samples from a Dirichlet distribution to replace the average action of Eq. 4, and achieve better performance than the mean-field reinforcement learning algorithm:

\(\bar{a}^j \sim \mathrm{Cat}(x), \qquad x \sim \mathrm{Dir}(\alpha + c_1, \ldots, \alpha + c_D),\)

where \(D\) denotes the size of the action space, \(c_1, \ldots, c_D\) denote the number of occurrences of each action, \(\alpha\) is the Dirichlet parameter, and \(\mathrm{Cat}\) is the categorical distribution. However, the premise of the Dirichlet distribution is that the features of each agent are independent, so that clustering is based only on the action counts of the neighboring agents. In fact, in many multi-agent environments the features of the agents are correlated; the Dirichlet distribution does not capture this correlation and therefore cannot accurately describe the relationship between the central agent and its neighborhood agents, so the related information is biased. Figure 1 shows a battle between a red team and a green team, in which each agent can observe the information of friendly agents and the action space is {up, down, left, right}. The central agent enclosed by the red circle is affected by the surrounding friendly agents. Using the Dirichlet distribution to simulate the probability of the central agent moving in each direction, the probability of moving down is the highest, essentially because a large number of agents are moving down. However, the optimal action for the agent is the one that forms an encirclement with its teammates. The Dirichlet distribution thus yields a locally optimal action rather than the optimal one.
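The Dirichlet estimate of the mean action can be sketched as follows. The symmetric prior `alpha0` and the exact posterior form are our assumptions, written in the spirit of POMFQ(FOR):

```python
import numpy as np

rng = np.random.default_rng(0)


def dirichlet_mean_action(neighbor_actions, n_actions, alpha0=1.0):
    """Sample one estimate of the mean action from a Dirichlet posterior
    over the observed neighbor-action counts (POMFQ(FOR)-style sketch).
    alpha0 is an assumed symmetric prior parameter."""
    counts = np.bincount(np.asarray(neighbor_actions), minlength=n_actions)
    return rng.dirichlet(alpha0 + counts)
```

With mostly "down" neighbors, the sampled distribution concentrates on "down" regardless of which action is tactically best, which is exactly the local-optimum behavior described above.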
Zhang et al. [35] argue that the correlation between two agents is crucial for multi-agent reinforcement learning. They first calculate the correlation coefficient between each pair of agents and then block communication among weakly correlated agents, thereby reducing the dimensionality of the input space of the state-action value network. Inspired by Zhang et al. [35], in large-scale partially observable multi-agent environments it is even more necessary to weigh the importance of neighborhood agents. In this paper, we adopt a graph attention method to filter out the more important neighborhood agents and discard unimportant agent information, achieving a more accurate estimate of the average action of the neighborhood agents.

Graph Attention Network
Graph neural networks [29] can better mine the graph structure underlying data. A Graph Attention Network (GAT) [25] is composed of a stack of graph attention layers. Each graph attention layer transforms the feature vector \(h_i\) of node \(i\) through a weight matrix \(W\) and then uses softmax to normalize over the neighbor nodes of the central node:

\(\alpha_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top} [W h_i \,\|\, W h_j]\right)\right)}{\sum_{k \in \mathcal{N}_i} \exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top} [W h_i \,\|\, W h_k]\right)\right)},\)

where \(\alpha_{ij}\) is the attention coefficient indicating the importance of node \(j\) to node \(i\). Finally, the output features are obtained by weighting the input features \(h_j\), and the update rule for each node \(i\) is:

\(h'_i = \sigma\left( \sum_{j \in \mathcal{N}_i} \alpha_{ij} W h_j \right),\)

where \(h_i\) represents the feature of node \(i\), \(\mathcal{N}_i\) is the set of adjacent nodes of node \(i\), and \(\sigma(\cdot)\) is a nonlinear activation function.
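A single graph attention layer as described by these equations can be sketched in NumPy. This is a forward-pass-only illustration (the real layer is trained by backpropagation), using LeakyReLU for the scores and ELU as the output nonlinearity, as in GAT:

```python
import numpy as np


def gat_layer(H, W, a, adj):
    """One graph attention layer.
    H: (n, f) node features; W: (f, f') weight matrix;
    a: (2f',) attention vector; adj: (n, n) 0/1 neighbor mask.
    Returns (output features, attention coefficients)."""
    Wh = H @ W                                   # projected features, (n, f')
    f2 = Wh.shape[1]
    # e_ij = LeakyReLU(a^T [Wh_i || Wh_j]), computed via the split of a
    e = (Wh @ a[:f2])[:, None] + (Wh @ a[f2:])[None, :]
    e = np.where(e > 0, e, 0.2 * e)              # LeakyReLU, slope 0.2
    e = np.where(adj > 0, e, -1e9)               # keep only neighbor edges
    e = e - e.max(axis=1, keepdims=True)         # stable softmax
    alpha = np.exp(e)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    out = alpha @ Wh                             # weighted aggregation
    return np.where(out > 0, out, np.expm1(out)), alpha  # ELU activation
```

Masking with a large negative score before the softmax is the standard way to restrict attention to the neighbor set \(\mathcal{N}_i\).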

APPROACH
In this section, we propose a novel method, Partially Observable Mean Field Multi-Agent Reinforcement Learning based on Graph-Attention (GAMFQ), which can be applied to large-scale partially observable MARL tasks where the observation range of each agent is limited and each agent can only observe the feature information of the other agents within a fixed neighborhood. The overall architecture of the GAMFQ algorithm is depicted in Figure 2 and includes two important components, the Graph Attention Module and the Mean Field Module. (i) In the Graph Attention Module, the information observed locally by each agent is first concatenated. High-dimensional feature representations are then obtained through a latent-space mapping followed by a one-layer LSTM network that captures the time-series correlation of the target agent; the hidden state of the LSTM is used as the input of the graph attention module to initialize the constructed graph nodes. To enhance the aggregation of neighbor agents toward the target agent, a fully connected mapping network is followed by a GAT layer. The final representations of the agents are then obtained by an MLP layer that takes as input the representations of the target agent and of the other observable agents. Finally, we apply layer normalization and obtain the adjacency matrix \(\{G^i_t\}\) via Gumbel-Softmax. (ii) The Mean Field Module utilizes the adjacency matrix \(\{G^i_t\}\) from the Graph Attention Module to adopt the actions of the important neighbor agents, and the joint Q-function of each agent \(i\) is approximated by the mean-field Q-function \(Q^i(s, a) \approx Q^i_{\mathrm{POMF}}(o^i, a^i, \tilde{a}^i)\) over the important neighbor agents, where the Q-value is a partially observable mean-field (POMF) Q-value and \(\tilde{a}^i\) is the average action of the important neighborhood agents that are partially observable by agent \(i\). Each component is described in detail below.

Graph-Attention Module
To more accurately determine the influence of agent \(i\)'s neighbors \(\mathcal{N}^i\) on itself, we need to extract useful information from the local observations of agent \(i\). The local observation of each agent includes the embedding information of the neighboring agents. For each agent \(i\) and each time step \(t\), the local observation over the \(N^i\) observable neighbors is expressed as \(o^i_t = [e^i_1; \ldots; e^i_{N^i}]\), where \(e^i_k \in \mathbb{R}^{1 \times L}\) represents the feature of the \(k\)-th neighbor agent of agent \(i\) and \(o^i_t \in \mathbb{R}^{N^i \times L}\); \(o^i_t\) is concatenated from the embedding features of each neighbor. Our goal is to learn an adjacency matrix \(\{G^i_t\}\) that extracts the more important embedding information for agent \(i\) from the local observations at each time step \(t\). Since graph neural networks can better mine the information of neighbor nodes, we propose a graph attention structure suitable for large-scale multi-agent systems. This structure focuses on information from different agents by assigning weights to the observations according to the relative importance of the other agents in the local observation. The structure concatenates a graph attention encoder and a differentiable attention mechanism. For the local observation \(o^i_t\) of agent \(i\) at time step \(t\), an encoding is first produced by a fully connected layer (FC) and passed to an LSTM layer to generate the hidden state \(h^i_t\) and cell state \(c^i_t\) of agent \(i\), where \(h^i_t\) serves as the input of the graph attention module to initialize the constructed graph nodes:

\(h^i_t, c^i_t = \mathrm{LSTM}\left(f(o^i_t), h^i_{t-1}, c^i_{t-1}\right),\)

where \(f(\cdot)\) is a fully connected layer representing the observation encoder. \(h^i_t\) is then encoded as a message:

\(m^i_t = g(h^i_t),\)

where \(m^i_t\) is the aggregated information of the neighborhood agents observed by agent \(i\) at time step \(t\). The encoded information \(m^i_t\) is passed to the GAT encoder and a hard attention mechanism, where the hard attention mechanism consists of an MLP and a Gumbel-Softmax function; the output adjacency matrix \(\{G^i_t\}\) determines which agents in the neighborhood influence the current agent. The GAT encoder helps to efficiently encode the agent's local information, and we adopt the same attention mechanism as GAT [25]:

\(\alpha_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top} [W m^i_t \,\|\, W m^j_t]\right)\right)}{\sum_{k \in \mathcal{N}^i \cup \{i\}} \exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top} [W m^i_t \,\|\, W m^k_t]\right)\right)},\)

where \(\mathrm{LeakyReLU}(\cdot)\) is the activation function, \(\mathbf{a} \in \mathbb{R}^{2f'}\) is the weight vector, \(\mathcal{N}^i \cup \{i\}\) represents the central agent \(i\) together with its observable neighborhood agents, and \(W \in \mathbb{R}^{f \times f'}\) is the weight matrix. The node feature of agent \(i\) is then expressed as:

\(\hat{h}^i_t = \mathrm{ELU}\left( \sum_{j \in \mathcal{N}^i \cup \{i\}} \alpha_{ij} W m^j_t \right),\)

where \(\mathrm{ELU}(\cdot)\) is the exponential linear unit function. Concatenating the features of each pair of nodes, \([\hat{h}^i_t \,\|\, \hat{h}^j_t]\), gives a matrix of pairwise relevant features for agent \(i\). Passing this matrix through an MLP followed by a Gumbel-Softmax function yields the adjacency row \(G^i_t\) over the neighbors \(j\) of the central agent \(i\). The element \(G^i_j = 1\) in the adjacency matrix indicates that the action of agent \(j\) has an impact on agent \(i\); conversely, \(G^i_j = 0\) means that agent \(j\)'s actions have no effect on agent \(i\).
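The hard attention step, an MLP score per candidate edge discretized through Gumbel-Softmax, might look as follows. Only the forward pass is shown; in training, the soft sample would carry the straight-through gradient, and the two-way "keep"/"drop" scoring per edge is our illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)


def gumbel_softmax_hard(logits, tau=1.0):
    """Per-row hard sample: add Gumbel noise, soften with temperature tau,
    then take a hard argmax (forward pass of the straight-through trick)."""
    g = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=logits.shape)))
    y = (logits + g) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))
    y = y / y.sum(axis=-1, keepdims=True)        # soft (differentiable) sample
    return (y == y.max(axis=-1, keepdims=True)).astype(float)  # hard one-hot


def edge_mask(pair_logits):
    """pair_logits: (n_neighbors, 2) assumed 'keep'/'drop' scores from the
    MLP over concatenated node features; returns the 0/1 adjacency row G^i."""
    return gumbel_softmax_hard(pair_logits)[:, 0]
```

The temperature `tau` trades off how close the soft sample is to a true one-hot vector against gradient variance during training.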

Mean Field Module
This graph attention method selects the important agents from the neighbors \(\mathcal{N}^i\) of agent \(i\) and computes the average of the actions of the chosen neighbor agents:

\(\tilde{a}^i = \frac{1}{\sum_{j \in \mathcal{N}^i} G^i_j} \sum_{j \in \mathcal{N}^i} G^i_j \cdot a^j,\)    (16)

where \(\cdot\) is element-wise multiplication and \(G^i_j\) marks the important neighborhood agents of agent \(i\). The Q-value of each agent is then given by Eq. 17; note that the Q-value here is a partially observable Q-value:

\(Q^i_{t+1}(o^i, a^i, \tilde{a}^i) = (1 - \alpha) Q^i_t(o^i, a^i, \tilde{a}^i) + \alpha \left[ r^i + \gamma v^i_t(o'^i) \right],\)    (17)

where the value function \(v^i_t\) is expressed as

\(v^i_t(o'^i) = \sum_{a^i} \pi^i_t(a^i \mid o'^i, \tilde{a}^i) \, Q^i_t(o'^i, a^i, \tilde{a}^i).\)    (18)

According to the above graph attention mechanism, the more important neighborhood agents are obtained. The new average action \(\tilde{a}^i\) is calculated by Eq. 16, and the strategy \(\pi^i_t\) of agent \(i\) is then updated by:

\(\pi^i_t(a^i \mid o^i, \tilde{a}^i) = \frac{\exp\left(\beta \, Q^i_t(o^i, a^i, \tilde{a}^i)\right)}{\sum_{a^{i'}} \exp\left(\beta \, Q^i_t(o^i, a^{i'}, \tilde{a}^i)\right)}.\)    (19)
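Eq. 16 amounts to a masked average of one-hot neighbor actions. A minimal sketch (the uniform fallback for an all-zero mask is our assumption, not from the paper):

```python
import numpy as np


def masked_mean_action(actions_onehot, G_row):
    """Eq. 16 sketch: average the one-hot neighbor actions over the
    neighbors flagged important by the 0/1 adjacency row G^i."""
    k = G_row.sum()
    if k == 0:  # no neighbor kept: assumed fallback to a uniform mean action
        n = actions_onehot.shape[1]
        return np.full(n, 1.0 / n)
    return (G_row[:, None] * actions_onehot).sum(axis=0) / k
```

Because the kept actions are one-hot, the result is always a valid probability vector over the action space, exactly like the globally observable mean action of Eq. 4.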

Theoretical Proof
This subsection proves that the setting of GAMFQ is close to a Nash equilibrium. Subramanian et al. [23] showed that in partially observable cases the fixed observation radius (FOR) setting is close to a Nash equilibrium, where the mean action of each agent's neighborhood agents is approximated by a Dirichlet distribution. First, we state some assumptions; these are the same as in [23] and are used by all the theorems and analyses below.
Assumption 1. For any \(s\) and \(a\), \(\lim_{t \to \infty} n_t(s, a) = \infty\) holds with probability 1, where \(n_t(s, a)\) is the number of visits to \((s, a)\). This assumption guarantees, with probability 1, that old information is eventually discarded.
Assumption 2. Standard measurability conditions hold; together with Assumptions 3 and 4, these are the measurability, learning-rate, and bounded-noise conditions of Tsitsiklis [24], stated exactly as in [23].
Assumption 5. Each state-action pair can be visited indefinitely often, and the reward is bounded.
Assumption 6. In the limit \(t \to \infty\) of infinite exploration, the agent's policy is greedy. This assumption ensures that the agent is rational.

Assumption 7. In each stage of a stochastic game, a Nash equilibrium can be regarded as a global optimum or a saddle point.
Based on these assumptions, Subramanian et al. [23] give the following lemma.
Lemma 1 [23]. When the Q-function is updated using the partially observable update rule in Eq. 2, and Assumptions 3, 5, and 7 hold, then as \(t \to \infty\),

\(\left| Q^{\mathrm{POMF}}_t - Q^* \right| \le 2k\varepsilon,\)

where \(Q^*\) is the Nash Q-value, \(Q^{\mathrm{POMF}}_t\) is the partially observable mean-field Q-function, \(k\) is the bound of the \(F\) map, and \(\varepsilon\) bounds the error of the estimated mean action. The probability that the above bound holds is at least \((1 - \delta)^{B-1}\), where \(B = |A|\).
In our GAMFQ setting, for partially observable neighborhood agents, we select a limited number of important agents using graph attention and then update the POMF Q-function. The following theorem shows that the GAMFQ setting is close to a Nash equilibrium.
Theorem 1. The distance between the MFQ (globally observable) mean action \(\bar{a}\) and the GAMFQ (partially observable) mean action \(\tilde{a}\) satisfies

\(\left\| \bar{a} - \tilde{a} \right\| \le \sqrt{\frac{1}{2N^j} \log \frac{2}{\delta}},\)

and as \(t \to \infty\) this holds with probability at least \(1 - \delta\), where \(N^j\) is the number of observed neighbor agents, \(\tilde{a}\) is the partially observable mean action obtained by graph attention in Eq. 16, and \(\bar{a}\) is the globally observable mean action in Eq. 4.
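The concentration behind Theorem 1 can be checked numerically: the gap between the global mean action and the mean over a subset of \(N^j\) neighbors shrinks roughly as \(1/\sqrt{N^j}\). A small simulation, with random subset selection standing in for the graph-attention choice (an illustrative simplification, since graph attention selects neighbors by importance rather than at random):

```python
import numpy as np

rng = np.random.default_rng(1)


def mean_gap(n_total, n_obs, n_actions=4):
    """Max per-component gap between the global mean action abar and the
    mean atilde over a random subset of n_obs neighbors (one trial)."""
    onehot = np.eye(n_actions)[rng.integers(0, n_actions, size=n_total)]
    abar = onehot.mean(axis=0)                                  # global mean
    atilde = onehot[rng.choice(n_total, n_obs, replace=False)].mean(axis=0)
    return np.abs(abar - atilde).max()


def avg_gap(n_obs, trials=200, n_total=1000):
    """Average gap over many trials, for a given number of observed neighbors."""
    return float(np.mean([mean_gap(n_total, n_obs) for _ in range(trials)]))
```

Running `avg_gap` for increasing `n_obs` shows the estimate tightening, consistent with the Hoeffding-style bound stated above.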
If each agent were globally observable, the mean of the important agents selected by graph attention would be close to the true underlying globally observable \(\bar{a}\); since the GAMF Q-function is updated from a finite sample chosen by graph attention, its empirical mean is \(\tilde{a}\). Theorem 2. If the Q-function is Lipschitz continuous with respect to the mean action with constant \(L\), then the MF Q-function \(Q_{\mathrm{MF}}\) and the GAMF Q-function \(Q_{\mathrm{GAMF}}\) satisfy

\(\left| Q_{\mathrm{MF}}(s, a^j, \bar{a}) - Q_{\mathrm{GAMF}}(o^j, a^j, \tilde{a}) \right| \le L \left\| \bar{a} - \tilde{a} \right\|.\)

In the limit \(t \to \infty\), this holds with probability at least \((1 - \delta)^{B-1}\), where \(B = |A|\) and \(A\) is the action space of the agent.
In the proof of Theorem 2, first consider a Q-function that is Lipschitz continuous in \(\bar{a}\) and \(\tilde{a}\); by Theorem 1, the stated bound then follows. The mean action has \(B\) components, one per action in the action space \(A\). The bound of Theorem 1 holds with probability at least \(1 - \delta\) for each component, and when the first \(B - 1\) components are fixed, the last component of \(\bar{a}\) is determined by the constraint that the components sum to 1; hence the probability for Theorem 2 is at least \((1 - \delta)^{B-1}\). Since each agent's action is one-hot encoded, the components of the GAMFQ mean action \(\tilde{a}\) also sum to 1, and this constraint is not changed by applying graph attention. This completes the proof of Theorem 2. Theorem 3. A stochastic process of the form \(x_{t+1}(s) = (1 - \alpha_t(s)) x_t(s) + \alpha_t(s) F_t(s)\) remains bounded in the range \([Q^* - 2k\varepsilon, Q^* + 2k\varepsilon]\) in the limit \(t \to \infty\) if Assumptions 1, 2, 3, and 4 are satisfied, and is guaranteed not to diverge to infinity, where \(k\) is the bound of the \(F\) map in Assumption 4(4) and \(\varepsilon\) is the bound on the mean-action estimation error from Theorem 1. This theorem can be proved following Tsitsiklis [24], by extension. The result of Theorem 3 can then be used to derive Theorem 4.
Theorem 4. When the Q-function is updated using the partially observable update rule in Eq. 17, and Assumptions 3, 5, and 7 hold, then as \(t \to \infty\),

\(\left| Q^{\mathrm{GAMF}}_t - Q^* \right| \le 2k\varepsilon,\)

where \(Q^*\) is the Nash Q-value, \(Q^{\mathrm{GAMF}}_t\) is the partially observable mean-field Q-function, \(k\) is the bound of the \(F\) map, and \(\varepsilon\) is the bound on the mean-action estimation error from Theorem 1. The probability that the above bound holds is at least \((1 - \delta)^{B-1}\), where \(B = |A|\).
Theorem 4 shows that the GAMFQ update comes very close to the Nash equilibrium in the limit \(t \to \infty\), i.e., it reaches a plateau for stochastic policies; the strategy of Eq. 19 therefore approximately approaches this plateau. Theorem 4 is an application of Theorem 3 together with Assumptions 3, 5, and 7. However, in MARL, reaching a Nash equilibrium is not optimal; it is only a fixed-point guarantee. To achieve better performance, each selfish agent will still tend to pick a limited number of samples. To balance theory and performance when selecting agents from the neighborhood, an appropriate number of (more effective) agents should be used for better multi-agent system performance. This paper uses the graph attention structure to filter out the more important agents, which allows a better approximation of the Nash equilibrium.

Algorithm
The implementation of GAMFQ follows the earlier work on POMFQ [23]; the difference is that a graph attention structure is used to select the neighborhood agents that are more important to the central agent when updating the average action. Algorithm 1 gives the pseudocode of the GAMFQ algorithm: effective neighbor agents are obtained by continuously updating the adjacency matrix \(G^i_t\), which in turn updates the agent's strategy.
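One decision step of the loop in Algorithm 1 can be sketched as follows, with a stub `q_fn` standing in for the learned POMF Q-network (names and shapes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)


def gamfq_act(q_fn, obs, neighbor_actions, G_row, n_actions, beta=1.0):
    """One GAMFQ decision step (sketch):
    1) average the actions of the neighbors kept by G^i   (Eq. 16)
    2) sample an action from the Boltzmann policy          (Eq. 19)"""
    onehot = np.eye(n_actions)[neighbor_actions]
    k = max(G_row.sum(), 1.0)                     # avoid division by zero
    atilde = (G_row[:, None] * onehot).sum(axis=0) / k
    logits = beta * q_fn(obs, atilde)             # Q(o^i, ., atilde)
    p = np.exp(logits - logits.max())
    p = p / p.sum()
    return rng.choice(n_actions, p=p), atilde
```

In the full algorithm, this step alternates with the Graph Attention Module update that produces `G_row` and with the Q-update of Eq. 17 over a replay buffer.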

EXPERIMENTS
In this section, we describe three different tasks based on the MAgent framework and give the experimental setup and training details used to evaluate the performance of GAMFQ.

Environments and Tasks
Subramanian et al. [23] designed three different cooperative-competitive strategies in the MAgent framework [36] as experimental environments, and our experiments adopt the same environments. In all three tasks, the map size is set to 28*28 and the observation range of each agent is 6 units. The state space is the concatenation of the feature information of other agents within each agent's field of view, including location, health, and group information. The action space includes 13 move actions and 8 attack actions. In addition, each agent handles at most the 20 other agents that are closest to it. We evaluate on the following three tasks:
• Multibattle environment: Two groups of agents fight each other, each containing 25 agents. An agent gets -0.005 points for each move, -0.1 points for attacking an empty area, 200 points for killing an enemy agent, and 0.2 points for a successful attack. Each agent is 2*2 in size, has a maximum health of 10 units, and a speed of 2 units. After the battle, the team with the most surviving agents wins; if both teams have the same number of surviving agents, the team with the highest reward wins. The reward for each team is the sum of the rewards of the individual agents in the team.
• Battle-Gathering environment: Food is uniformly distributed in the environment, and each agent can observe the location of all the food. In addition to attacking the enemy for rewards, each agent can also eat food for rewards. Agents get 5 points for attacking enemy agents; the remaining reward settings are the same as in the Multibattle environment.
• Predator-Prey environment: There are 40 predators and 20 prey. Each predator is a square of size 2*2 with a maximum health of 10 units and a speed of 2 units; each prey is a 1*1 square with a maximum health of 2 units and a speed of 2.5 units. To win the game, the predators must kill more prey, and the prey must find a way to escape. Predators and prey have different reward functions: predators get -0.3 points for attacking an empty area, 1 point for successfully attacking prey, 100 points for killing prey, -1 point when attacked by prey, and 0.5 points for dying. Unlike in the Multibattle environment, for a fairer duel, if the two teams have the same number of surviving agents when the round ends, the game is judged a draw.
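The Multibattle reward rules above can be captured with simple bookkeeping. The event names below are hypothetical labels introduced for illustration; only the point values come from the environment description, and the team reward is the sum over its agents' individual rewards, as stated in the text.

```python
# Point values taken from the Multibattle description; event names are ours.
MULTIBATTLE_REWARDS = {
    "move": -0.005,        # each move
    "attack_empty": -0.1,  # attacking an empty area
    "kill": 200.0,         # killing an enemy agent
    "hit": 0.2,            # a successful attack
}

def team_reward(event_log):
    """Team reward = sum of the individual agents' event rewards."""
    return sum(MULTIBATTLE_REWARDS[e]
               for agent_events in event_log
               for e in agent_events)

# Two agents: one moves twice and lands a hit, the other moves and kills an enemy.
log = [["move", "move", "hit"], ["move", "kill"]]
print(team_reward(log))  # ≈ 200.185
```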

Evaluation
We consider four algorithms for the above three games: MFQ, MFAC [31], POMFQ(FOR) and GAMFQ, where MFQ and MFAC are baselines and POMFQ(FOR) [23] is the state-of-the-art algorithm.
The original baselines MFQ and MFAC were proposed by Yang et al. [31] under global observability; the idea is to approximate the influence of the neighborhood agents on the central agent by their average action, thereby updating the agents' actions. We fix the observation radius of each agent in the baselines MFQ and MFAC and apply them to a partially observable environment, where the neighbor agents are the agents within a fixed range. The POMFQ(FOR) algorithm introduces noise in the mean-action parameters to encourage exploration, uses Bayesian inference to update a Dirichlet distribution, and draws 100 samples from the Dirichlet distribution to estimate the partially observable mean-field action. The GAMFQ algorithm judges the effectiveness of the neighborhood agents within a fixed range through the graph attention mechanism, selects the more important neighborhood agents, and updates the average action by averaging the actions of these agents.
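The fixed-radius adaptation of the baselines can be sketched as follows: with no attention mechanism, a central agent's neighbors are simply all agents within a fixed observation radius. The use of a Chebyshev-style (L-infinity) grid distance is an assumption appropriate for a grid world, not a detail stated in the paper.

```python
import numpy as np

def neighbors_within_radius(positions, center_idx, radius):
    """Baseline-style partial observability: all agents within a fixed
    radius of the central agent count as neighbors (no attention)."""
    center = positions[center_idx]
    dists = np.abs(positions - center).max(axis=1)  # grid (L-inf) distance
    mask = dists <= radius
    mask[center_idx] = False  # exclude the central agent itself
    return np.flatnonzero(mask)

pos = np.array([[0, 0], [2, 1], [7, 7], [0, 3]])
print(neighbors_within_radius(pos, 0, radius=3))  # agents 1 and 3 are in range
```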

Hyperparameters
In the three tasks, each algorithm was trained for 2000 epochs in the training phase, generating two groups of models, A and B. In the test phase, 1000 rounds of confrontation were conducted: in the first 500 rounds, group A of the first algorithm played against group B of the second algorithm, and in the last 500 rounds the groups were swapped. The hyperparameters of MFQ, MFAC, POMFQ(FOR), and GAMFQ are essentially the same. Table 1 lists the hyperparameters used to train the four algorithms; the remaining parameters can be found in [23].
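The A/B cross-play protocol above can be sketched as a small harness. The `play_round` callback, the dictionary layout of the model groups, and the toy strength model in the usage example are all hypothetical; the only part taken from the text is the schedule of 500 rounds in each group pairing.

```python
import random

def cross_play(algo1, algo2, play_round, n_rounds=1000):
    """First half: algo1's group A vs algo2's group B.
    Second half: algo1's group B vs algo2's group A.
    `play_round` returns 1 if the first team wins, else 0."""
    wins = 0
    half = n_rounds // 2
    for i in range(n_rounds):
        if i < half:
            wins += play_round(algo1["A"], algo2["B"])
        else:
            wins += play_round(algo1["B"], algo2["A"])
    return wins / n_rounds  # algo1's overall win rate

# Toy stand-in where a scalar "strength" decides the win probability.
random.seed(0)
gamfq = {"A": 0.8, "B": 0.8}
mfq = {"A": 0.5, "B": 0.5}
rate = cross_play(gamfq, mfq,
                  lambda s1, s2: int(random.random() < s1 / (s1 + s2)))
print(rate)
```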

RESULTS AND DISCUSSION
In this section, we evaluate the performance of GAMFQ in three different environments: Multibattle, Battle-Gathering, and Predator-Prey. We benchmark against two baseline algorithms, MFQ and MFAC, and compare with the state-of-the-art POMFQ(FOR).

Reward
Figure 3 shows how the reward changes as the number of iterations increases during training. We plot the reward curves of the four algorithms in the different game environments over the first 1000 iterations. Since each algorithm is trained by self-play, its reward fluctuates strongly, so we fit the reward curves with the least squares method. In Figure 3, the solid black line represents the reward curve of the GAMFQ algorithm. From Figure 3 (a), (b), and (c), it can be seen that the reward of the GAMFQ algorithm increases rapidly, indicating that GAMFQ converges quickly in the early stage and that its convergence performance is better than that of the other three algorithms.
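The least-squares smoothing used for the reward curves can be reproduced with a standard polynomial fit. The synthetic noisy curve and the degree-3 polynomial below are illustrative assumptions; the paper only states that the curves are fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
iters = np.arange(1000)
# Synthetic stand-in for a noisy self-play training curve.
reward = 100 * (1 - np.exp(-iters / 200)) + rng.normal(scale=15, size=1000)

coeffs = np.polyfit(iters, reward, deg=3)  # least-squares polynomial fit
smooth = np.polyval(coeffs, iters)         # fitted curve, as plotted in Figure 3

print(smooth[0], smooth[-1])
```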

FIGURE 3
Training results of the three games. The reward curve for each algorithm is fitted by the least squares method.

Elo Calculation
We use the ELO score [11] to evaluate the performance of the two groups of agents; its advantage is that it takes the strength gap between the opponents into account. ELO ratings are commonly used in chess to evaluate one-on-one situations, and the approach can be extended to N-versus-N situations in the same way. For the algorithm proposed in this paper, we record the total rewards of the two teams of agents during each algorithm confrontation, denoted R1 and R2, respectively.
The expected scores of the two groups of agents are then E1 = 1/(1 + 10^((R2 − R1)/400)) and E2 = 1/(1 + 10^((R1 − R2)/400)), where E1 + E2 = 1. Comparing the actual and expected scores of the two groups gives the new ELO score of each team after the game ends: R1′ = R1 + K(S1 − E1), where S1 is the actual result (1 if the team wins, 0.5 for a tie, and 0 if the team loses), and K is a floating coefficient. To create a gap between agents, we set K to 32. For each match-up, we played 500 confrontations and calculated the average ELO value over all of them. As shown in Table 3, in the Battle-Gathering environment, the ELO score of the MFQ algorithm is the highest and the ELO score of the GAMFQ algorithm is average. This is because the Battle-Gathering environment contains food, and some algorithms tend to eat food to obtain rewards quickly rather than attack enemy agents; the final win/loss decision, however, is made by comparing the number of remaining agents of the two teams. As shown in Table 4, in the Predator-Prey environment, the GAMFQ algorithm has the highest ELO score of 860, which is significantly better than the other three algorithms. From the experimental results in the three environments, we can conclude that the ELO score of the GAMFQ algorithm is better than those of the other three algorithms, showing better performance.
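The standard Elo update described above is small enough to state directly in code. With equal ratings, the expected scores are both 0.5, so a win moves the winner up by K/2 = 16 points.

```python
def elo_update(r1, r2, s1, k=32):
    """Standard Elo update: expected scores sum to 1, and each team's
    rating moves by k times (actual result - expected score).
    s1 is 1 for a win, 0.5 for a tie, 0 for a loss (team 1's result)."""
    e1 = 1.0 / (1.0 + 10 ** ((r2 - r1) / 400.0))
    e2 = 1.0 - e1  # so e1 + e2 = 1
    new_r1 = r1 + k * (s1 - e1)
    new_r2 = r2 + k * ((1.0 - s1) - e2)
    return new_r1, new_r2

print(elo_update(1200, 1200, s1=1))  # → (1216.0, 1184.0)
```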

Results
Figure 4 shows the face-off results of the four algorithms on the three tasks. Figure 4(a) shows the face-off results of the Multibattle game. The different colored bars for each algorithm represent the results of that algorithm against the others. We do not conduct adversarial experiments between identical algorithms, because we consider the adversarial strength of identical algorithms to be equal. The vertical lines in the bar graph represent the standard deviation of wins for groups A and B over the 1000 face-offs. Figure 4(a) shows that GAMFQ beats the three other algorithms, in every case with a win rate above 0.7. Figure 4(b) shows the face-off results of the Battle-Gathering game, where, in addition to rewards for killing enemies, agents can also obtain rewards from food. It can be seen that MFQ loses to all the other algorithms, MFAC and POMFQ(FOR) perform moderately, and our GAMFQ is clearly ahead of the other algorithms.
Figure 4(c) shows the face-off results of the Predator-Prey game. The standard deviation in this game is significantly higher than in the previous two games, because both groups A and B are trying to beat each other in the environment. It can be seen that the GAMFQ algorithm is significantly better than the other three algorithms, reaching a win rate of 1.0. The experiments in these three multi-agent combat environments show that GAMFQ outperforms MFQ, MFAC, and POMFQ(FOR). To visualize the effectiveness of the GAMFQ algorithm, we visualize the confrontation between GAMFQ and POMFQ(FOR) in the Multibattle environment, as shown in Figure 5, where the red side is GAMFQ and the blue side is POMFQ(FOR). The confrontation process shows that, with the GAMFQ algorithm, when an agent decides to attack, the surrounding agents also decide to attack under its influence, forming a good cooperation mechanism. In contrast, with the POMFQ(FOR) algorithm, some blue-side agents choose to attack while others choose to escape, and no common fighting mechanism is formed. Similarly, in the Battle-Gathering environment of Figure 6, GAMFQ learns the encirclement mechanism well. In the Predator-Prey environment of Figure 7, when GAMFQ acts as the predator, it learns the technique of surrounding the POMFQ(FOR) prey; conversely, when POMFQ(FOR) acts as the predator, it fails to catch the GAMFQ prey. Figure 8 is an ablation study that investigates the performance of the GAMFQ algorithm for different observation radii in the Multibattle environment, where the solid lines represent the least-squares fits of the reward curves. It can be seen that when the number of training iterations is small, the performance of the algorithm improves as the observation distance increases; but as training progresses, the algorithm performs best at R = 4, so an appropriate observation distance achieves better performance. What matters more in this paper is the effect of the ratio of observable distance to the number of agents on the performance of the algorithm, so we did not experiment with more agents.

CONCLUSION
In this paper, we proposed a new multi-agent reinforcement learning algorithm, Partially Observable Mean Field Multi-Agent Reinforcement Learning based on Graph–Attention (GAMFQ), to address large-scale partially observable multi-agent environments. Although existing methods come close to the Nash equilibrium, they do not take the direct correlations between agents into account. Based on these correlations, GAMFQ uses a Graph-Attention module to describe how each agent is affected by the actions of other agents at each time step. Experimental results on three challenging tasks in the MAgents framework show that our proposed method outperforms the baselines in all these games, including the state-of-the-art partially observable mean-field reinforcement learning algorithms. In the future, we will further explore the correlations between agents to extend the method to more common cooperation scenarios.

FIGURE 1
FIGURE 1 A battle environment of the red and blue groups, where the central red agent computes its action from a Dirichlet distribution.

FIGURE 2
FIGURE 2 Schematic of GAMFQ. Each agent can observe the feature information of other agents within a fixed range, input it into the Graph-Attention Module, and output an adjacency matrix to represent the effectiveness of the neighborhood agents for the central agent.

FIGURE 4
FIGURE 4 Faceoff results of three games. The * in the legend indicates the enemy. For example, the first blue bar in the bar graph corresponding to the GAMFQ algorithm is the result of the confrontation between GAMFQ and MFQ; we do not conduct confrontation experiments between identical algorithms.

FIGURE 5
FIGURE 5 Visualization of the standoff between GAMFQ and POMFQ (FOR) in a Multibattle game.

FIGURE 6
FIGURE 6 Visualization of the standoff between GAMFQ and POMFQ (FOR) in a Battle-Gathering game.

FIGURE 7
FIGURE 7 Visualization of the standoff between GAMFQ and POMFQ (FOR) in a Predator-Prey game.

FIGURE 8
FIGURE 8 Ablation study.R represents the observation radius of the agent.

TABLE 1
Hyperparameters for training the four algorithms. We implement our method and the comparison methods on three different tasks. Note that we only used 50 agents in our experiments and did not test more agents, because the proportion of other agents that each agent can see is more important than their absolute number.

TABLE 2
The ELO Score of four algorithms in Multibattle environment.

TABLE 3
The ELO Score of four algorithms in Battle-Gathering environment.

Tables 2, 3, and 4 show the ELO scores of the four algorithms on the three tasks. It can be seen from Table 2 that in the Multibattle environment, the GAMFQ algorithm has the highest ELO score of 3579, which is significantly better than the other three algorithms.

TABLE 4
The ELO Score of four algorithms in Predator-Prey environment.