Forward and Backward Bellman Equations Improve the Efficiency of the EM Algorithm for DEC-POMDP

Decentralized partially observable Markov decision process (DEC-POMDP) models sequential decision-making problems by a team of agents. Since the planning of DEC-POMDP can be interpreted as the maximum likelihood estimation for a latent variable model, DEC-POMDP can be solved by the EM algorithm. However, in EM for DEC-POMDP, the forward-backward algorithm needs to be calculated up to the infinite horizon, which impairs the computational efficiency. In this paper, we propose the Bellman EM algorithm (BEM) and the modified Bellman EM algorithm (MBEM) by introducing the forward and backward Bellman equations into EM. BEM can be more efficient than EM because BEM calculates the forward and backward Bellman equations instead of the forward-backward algorithm up to the infinite horizon. However, BEM cannot always be more efficient than EM when the size of problems is large because BEM calculates an inverse matrix. We circumvent this shortcoming in MBEM by calculating the forward and backward Bellman equations without the inverse matrix. Our numerical experiments demonstrate that the convergence of MBEM is faster than that of EM.


Introduction
Markov decision process (MDP) models sequential decision making problems and has been used for planning and reinforcement learning [1][2][3][4]. MDP consists of an environment and an agent. The agent observes the state of the environment and controls it by taking actions. The planning of MDP is to find the optimal control policy maximizing the objective function, which is typically solved by the Bellman equation-based algorithms such as value iteration and policy iteration [1][2][3][4].
Decentralized partially observable MDP (DEC-POMDP) is an extension of MDP to a multiagent and partially observable setting, which models sequential decision making problems by a team of agents [5][6][7]. DEC-POMDP consists of an environment and multiple agents, and the agents cannot observe the state of the environment and the actions of the other agents completely. The agents infer the environmental state and the other agents' actions from their observation histories and control them by taking actions. The planning of DEC-POMDP is to find not only the optimal control policy but also the optimal inference policy for each agent, which maximize the objective function [5][6][7]. Applications of DEC-POMDP include planetary exploration by a team of rovers [8], target tracking by a team of sensors [9], and information transmission by a team of devices [10]. Since the agents cannot observe the environmental state and the other agents' actions completely, it is difficult to extend the Bellman equation-based algorithms for MDP to DEC-POMDP straightforwardly [11][12][13][14][15].
DEC-POMDP can be solved using control as inference [16,17]. Control as inference is a framework that interprets a control problem as an inference problem by introducing auxiliary variables [18][19][20][21][22]. Although control as inference has several variants, Toussaint and Storkey showed that the planning of MDP can be interpreted as the maximum likelihood estimation for a latent variable model [18]. Thus, the planning of MDP can be solved by the EM algorithm, which is the typical algorithm for the maximum likelihood estimation of latent variable models [23]. Since the EM algorithm is more general than the Bellman equation-based algorithms, it can be straightforwardly extended to POMDP [24,25] and DEC-POMDP [16,17]. The computational efficiency of the EM algorithm for DEC-POMDP is comparable to that of other algorithms for DEC-POMDP [16,17,26,27,28], and extensions to the average reward setting and to the reinforcement learning setting have been studied [29][30][31].
However, the EM algorithm for DEC-POMDP is not efficient enough to be applied to real-world problems, which often involve many agents or a large environment. Therefore, several studies have attempted to improve the computational efficiency of the EM algorithm for DEC-POMDP [26,27]. Because these studies achieve improvements by restricting the possible interactions between agents, their applicability is limited. Therefore, improvement in efficiency for more general DEC-POMDP problems is desirable.
In order to improve the computational efficiency of the EM algorithm for general DEC-POMDP problems, two problems need to be resolved. The first is the forward-backward algorithm up to the infinite horizon. The EM algorithm for DEC-POMDP uses the forward-backward algorithm, which has also been used in the EM algorithm for hidden Markov models [23]. However, in the EM algorithm for DEC-POMDP, the forward-backward algorithm needs to be calculated up to the infinite horizon, which impairs the computational efficiency [32,33]. The second is the absence of the Bellman equation. The EM algorithm for DEC-POMDP does not use the Bellman equation, which plays a central role in the planning and the reinforcement learning for MDP [1][2][3][4]. Therefore, the EM algorithm for DEC-POMDP cannot use the advanced techniques based on the Bellman equation, which make it possible to solve large problems [34][35][36].
In some previous studies, resolution of these problems was attempted by replacing the forward-backward algorithm up to the infinite horizon with the Bellman equation [32,33]. However, these studies did not fully improve the computational efficiency. For example, Song et al. replaced the forward-backward algorithm with the Bellman equation and showed by numerical experiments that their algorithm is more efficient than EM and other DEC-POMDP algorithms [32]. However, since a parameter dependency is overlooked in [32], their algorithm may not find the optimal policy in general situations (see Appendix D for more details). Moreover, Kumar et al. showed that the forward-backward algorithm can be replaced by linear programming with the Bellman equation as a constraint [33]. However, their algorithm may be less efficient than the EM algorithm when the size of problems is large. Therefore, previous studies have not yet fully improved the computational efficiency of the EM algorithm for DEC-POMDP.
In this paper, we propose more efficient algorithms for DEC-POMDP than EM algorithm by introducing the forward and backward Bellman equations into it. The backward Bellman equation corresponds to the traditional Bellman equation, which has been used in previous studies [32,33]. In contrast, the forward Bellman equation has not yet been used for the planning of DEC-POMDP explicitly. This equation is similar to that recently proposed in the offline reinforcement learning of MDP [37][38][39]. In the offline reinforcement learning of MDP, the forward Bellman equation is used to correct the difference between the data sampling policy and the policy to be evaluated. In the planning of DEC-POMDP, the forward Bellman equation plays the important role in inferring the environmental state.
We propose the Bellman EM algorithm (BEM) and the modified Bellman EM algorithm (MBEM) by replacing the forward-backward algorithm with the forward and backward Bellman equations. They differ in how they solve the forward and backward Bellman equations. BEM solves them by calculating an inverse matrix. BEM can be more efficient than EM because BEM does not calculate the forward-backward algorithm up to the infinite horizon. However, since BEM calculates the inverse matrix, it cannot always be more efficient than EM when the size of problems is large, which is the same problem as in [33]. In fact, BEM is essentially the same as the algorithm in [33]: in the linear programming problem of [33], the number of variables is equal to the number of constraints, which allows the problem to be solved from the constraints alone, without optimization. Therefore, the algorithm in [33] becomes equivalent to BEM and suffers from the same problem as BEM.
This problem is addressed by MBEM. MBEM solves the forward and backward Bellman equations by applying the forward and backward Bellman operators to arbitrary initial functions infinitely many times. Although MBEM thus needs to apply the forward and backward Bellman operators many times, which is the same problem as EM, MBEM can evaluate the approximation errors more tightly owing to the contractibility of these operators. It can also utilize the information of the previous iteration owing to the arbitrariness of the initial functions. These properties enable MBEM to be more efficient than EM. Moreover, MBEM resolves the drawback of BEM because MBEM does not calculate the inverse matrix. Therefore, MBEM can be more efficient than EM even when the size of problems is large.
Our numerical experiments demonstrate that the convergence of MBEM is faster than that of EM regardless of the size of problems.
The paper is organized as follows: In Section 2, DEC-POMDP is formulated. In Section 3, the EM algorithm for DEC-POMDP, which was proposed in [16], is briefly reviewed. In Section 4, the forward and backward Bellman equations are derived, and the Bellman EM algorithm (BEM) is proposed. In Section 5, the forward and backward Bellman operators are defined, and the modified Bellman EM algorithm (MBEM) is proposed. In Section 6, EM, BEM, and MBEM are summarized and compared. In Section 7, the performances of EM, BEM, and MBEM are compared through the numerical experiment. In Section 8, this paper is concluded, and future works are discussed.
x_t ∈ X is the state of the environment at time t. y_t^i ∈ Y^i, z_t^i ∈ Z^i, and a_t^i ∈ A^i are the observation, the memory, and the action available to agent i ∈ {1, ..., N}, respectively. X, Y^i, Z^i, and A^i are finite sets. y_t := (y_t^1, ..., y_t^N), z_t := (z_t^1, ..., z_t^N), and a_t := (a_t^1, ..., a_t^N) are the joint observation, the joint memory, and the joint action of the N agents, respectively.
The time evolution of the environmental state x_t is given by the initial state probability p(x_0) and the state transition probability p(x_{t+1}|x_t, a_t). Thus, the agents can control the environmental state x_{t+1} by taking appropriate actions a_t. Agent i cannot completely observe the environmental state x_t and the joint action a_{t-1}; it instead receives the observation y_t^i. The joint observation y_t obeys the observation probability p(y_t|x_t, a_{t-1}). Agent i updates its memory from z_{t-1}^i to z_t^i based on the observation y_t^i, so the memory z_t^i obeys the initial memory probability ν^i(z_0^i) and the memory transition probability λ^i(z_t^i|z_{t-1}^i, y_t^i). Agent i takes the action a_t^i based on the memory z_t^i by following the action probability π^i(a_t^i|z_t^i). The reward function r(x_t, a_t) defines the reward obtained at each step, depending on the environmental state x_t and the joint action a_t taken by the agents.
The objective function in the planning of DEC-POMDP is given by the expected return, i.e., the expected discounted cumulative reward:

J(θ) := E_{p(x_{0:∞}, y_{1:∞}, z_{0:∞}, a_{0:∞}; θ)} [ Σ_{t=0}^{∞} γ^t r(x_t, a_t) ].    (1)

θ := (π, λ, ν) is the policy, where π := (π^1, ..., π^N), λ := (λ^1, ..., λ^N), and ν := (ν^1, ..., ν^N). γ ∈ (0, 1) is the discount factor, which decreases the weight of future rewards; the closer γ is to 1, the closer the weight of a future reward is to that of the current reward. The planning of DEC-POMDP is to find the policy θ that maximizes the expected return:

θ* = argmax_θ J(θ).    (2)

In other words, the planning of DEC-POMDP is to find, for each agent, how to take actions and how to update the memory so as to maximize the expected return.

[Figure 1: The generative model of DEC-POMDP. The agents update their memories based on their observations and take actions based on their memories to control the environmental state. Figure 2: (a) the reward r_t ∈ R is generated at each time; (b) the optimal variable o ∈ {0, 1} is generated only at the time horizon T.]
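As a concrete illustration, the generative process and the expected return can be sketched in a few lines of Python. The toy dynamics, observation model, reward, and fixed reactive policy below are all hypothetical placeholders, not one of the benchmark problems; the script simply rolls out episodes and estimates J(θ) by a truncated Monte Carlo sum.

```python
import random

# A minimal two-agent DEC-POMDP simulator (hypothetical toy dynamics) and a
# Monte Carlo estimate of J(theta) = E[sum_t gamma^t r(x_t, a_t)].
random.seed(0)
GAMMA = 0.9

def sample(dist):
    """Sample an index from a discrete distribution given as a list of probs."""
    u, acc = random.random(), 0.0
    for i, p in enumerate(dist):
        acc += p
        if u < acc:
            return i
    return len(dist) - 1

# Environment (assumed for illustration): 2 states, joint action in {0,1}^2.
def p_next_state(x, a):           # state transition p(x'|x, a)
    stay = 0.8 if a[0] == a[1] else 0.4
    return [stay, 1 - stay] if x == 0 else [1 - stay, stay]

def p_obs(i, x):                  # per-agent observation p(y^i|x)
    return [0.9, 0.1] if x == 0 else [0.2, 0.8]

def reward(x, a):                 # r(x, a): coordinated action in state 1 pays
    return 1.0 if (x == 1 and a == (1, 1)) else 0.0

# A simple fixed policy theta = (pi, lambda, nu): the memory copies the last
# observation and the action copies the memory (purely illustrative).
def update_memory(z, y):
    return y

def act(z):
    return z

def episode_return(t_max=200):
    x = sample([0.5, 0.5])                       # x_0 ~ p(x_0)
    z = (0, 0)                                   # z_0 ~ nu
    ret, disc = 0.0, 1.0
    for _ in range(t_max):
        a = (act(z[0]), act(z[1]))
        ret += disc * reward(x, a)
        disc *= GAMMA
        x = sample(p_next_state(x, a))
        y = (sample(p_obs(0, x)), sample(p_obs(1, x)))
        z = (update_memory(z[0], y[0]), update_memory(z[1], y[1]))
    return ret

J = sum(episode_return() for _ in range(2000)) / 2000
print("Monte Carlo estimate of J(theta):", round(J, 3))
```

The estimate is necessarily between 0 and 1/(1 - γ) here, since the toy reward lies in [0, 1].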

EM Algorithm for DEC-POMDP
In this section, we explain the EM algorithm for DEC-POMDP, which was proposed in [16].

Control as Inference
In this subsection, we show that the planning of DEC-POMDP can be interpreted as the maximum likelihood estimation for a latent variable model (Figure 2b).
We introduce two auxiliary random variables: the time horizon T ∈ {0, 1, 2, ...} and the optimal variable o ∈ {0, 1}. These variables obey the following probabilities:

p(T) = (1 - γ)γ^T,    (3)
p(o = 1|x_T, a_T) = r̄(x_T, a_T) := (r(x_T, a_T) - r_min)/(r_max - r_min),    (4)

where r_max and r_min are the maximum and minimum values of the reward function r(x, a), respectively. Thus, r̄(x, a) ∈ [0, 1] is satisfied. By introducing these variables, DEC-POMDP changes from Figure 2a to Figure 2b. While Figure 2a considers the infinite time horizon, Figure 2b considers the finite time horizon T, which obeys Equation (3). Moreover, while the reward r_t := r(x_t, a_t) is generated at each time in Figure 2a, the optimal variable o is generated only at the time horizon T in Figure 2b.
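The two auxiliary variables are easy to simulate. The sketch below (with an assumed γ and an assumed reward range) samples the horizon T from the geometric distribution p(T) = (1 - γ)γ^T and maps a raw reward into r̄ ∈ [0, 1]:

```python
import random

# Sampling the auxiliary variables of control as inference:
# T ~ p(T) = (1-gamma)*gamma**T, and rbar = (r - r_min)/(r_max - r_min).
random.seed(1)
GAMMA = 0.95

def sample_horizon():
    """Sample T from the geometric distribution p(T) = (1-gamma)*gamma**T."""
    T = 0
    while random.random() < GAMMA:
        T += 1
    return T

def normalize_reward(r, r_min=-2.0, r_max=5.0):
    """Map r(x,a) into [0,1]; used as p(o=1 | x_T, a_T). Bounds are assumed."""
    return (r - r_min) / (r_max - r_min)

Ts = [sample_horizon() for _ in range(20000)]
mean_T = sum(Ts) / len(Ts)
print("empirical E[T]:", round(mean_T, 1), " theory:", GAMMA / (1 - GAMMA))
```

The empirical mean of T should match γ/(1 - γ), the mean of the geometric distribution in Equation (3).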
Note that o is the observable variable, and x_{0:T}, y_{1:T}, z_{0:T}, a_{0:T}, and T are the latent variables.
Therefore, the planning of DEC-POMDP is equivalent to the maximum likelihood estimation for the latent variable model:

θ* = argmax_θ p(o = 1; θ).    (5)

Intuitively, while the planning of DEC-POMDP finds the policy that maximizes the reward, the maximum likelihood estimation finds the policy that maximizes the probability of the optimal variable. Since the probability of the optimal variable is an increasing affine function of the expected return, the two problems are equivalent.
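The equivalence can be checked numerically on a small example. The sketch below uses a hypothetical 3-state chain (standing in for the joint (x, z) process under a fixed policy, with state-dependent rewards only, for simplicity) and verifies that p(o = 1; θ) = Σ_T p(T) E[r̄(x_T, a_T)] is an increasing affine function of the expected return J(θ):

```python
# Numeric check of the equivalence on a hypothetical 3-state chain:
# p(o=1) = (1-gamma) * sum_t gamma^t E[rbar_t] is an affine function of J.
GAMMA = 0.9
P = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]  # p(s'|s), assumed
p0 = [1.0, 0.0, 0.0]
r = [-1.0, 1.0, 3.0]                    # raw rewards r(s), assumed
r_min, r_max = -1.0, 3.0
rbar = [(x - r_min) / (r_max - r_min) for x in r]

def step(alpha):                        # one forward step of the marginal
    return [sum(alpha[s] * P[s][sp] for s in range(3)) for sp in range(3)]

alpha, J, lik = p0[:], 0.0, 0.0
for t in range(2000):                   # truncation error is ~gamma^2000
    Er = sum(alpha[s] * r[s] for s in range(3))
    Erbar = sum(alpha[s] * rbar[s] for s in range(3))
    J += GAMMA**t * Er                  # expected return
    lik += (1 - GAMMA) * GAMMA**t * Erbar   # p(T=t) * E[rbar at time T=t]
    alpha = step(alpha)

# p(o=1) = (1-gamma)/(r_max-r_min) * (J - r_min/(1-gamma))
check = (1 - GAMMA) / (r_max - r_min) * (J - r_min / (1 - GAMMA))
print("p(o=1):", round(lik, 6), " affine transform of J:", round(check, 6))
```

Both quantities agree to numerical precision, so maximizing the likelihood p(o = 1; θ) and maximizing J(θ) pick out the same policy.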

EM Algorithm
Since the planning of DEC-POMDP can be interpreted as the maximum likelihood estimation for the latent variable model, it can be solved by the EM algorithm [16]. The EM algorithm is the typical algorithm for the maximum likelihood estimation of latent variable models, and it iterates two steps, the E step and the M step [23].
In the E step, we calculate the Q function, which is defined as follows:

Q(θ; θ_k) := E_{p(x_{0:T}, y_{1:T}, z_{0:T}, a_{0:T}, T | o = 1; θ_k)} [ log p(o = 1, x_{0:T}, y_{1:T}, z_{0:T}, a_{0:T}, T; θ) ],    (7)

where θ_k is the current estimator of the optimal policy. In the M step, we update θ_k to θ_{k+1} by maximizing the Q function:

θ_{k+1} = argmax_θ Q(θ; θ_k).    (8)

Since each iteration of the E step and the M step monotonically increases the likelihood p(o = 1; θ_k), we can find a θ* that locally maximizes the likelihood p(o = 1; θ).

M Step

Proposition 1 ([16]). In the EM algorithm for DEC-POMDP, Equation (8) can be solved in closed form, yielding the updates in Equations (9)-(11), where a^{-i} := (a^1, ..., a^{i-1}, a^{i+1}, ..., a^N), and y^{-i} and z^{-i} are defined in the same way. F(x, z; θ) and V(x, z; θ) are defined as follows:

F(x, z; θ) := Σ_{t=0}^{∞} γ^t p(x_t = x, z_t = z; θ),    (12)
V(x, z; θ) := Σ_{t=0}^{∞} γ^t E[ r̄(x_t, a_t) | x_0 = x, z_0 = z; θ ].    (13)

F(x, z; θ) quantifies how frequently the state x and the memory z are visited, and it is called the frequency function in this paper. V(x, z; θ) quantifies the probability of o = 1 when the initial state and memory are x and z, respectively. Since the probability of o = 1 is proportional to the reward, V(x, z; θ) is called the value function in this paper. Indeed, V(x, z; θ) corresponds to the value function of [33].

E Step
F(x, z; θ_k) and V(x, z; θ_k) need to be obtained to calculate Equations (9)-(11). In [16], F(x, z; θ_k) and V(x, z; θ_k) are calculated by the forward-backward algorithm, which has been used in the EM algorithm for hidden Markov models [23].
In [16], the forward probability α_t(x, z; θ_k) and the backward probability β_t(x, z; θ_k) are defined as follows:

α_t(x, z; θ_k) := p(x_t = x, z_t = z; θ_k),    (14)
β_t(x, z; θ_k) := E[ r̄(x_t, a_t) | x_0 = x, z_0 = z; θ_k ].    (15)

It is easy to calculate α_0(x, z; θ_k) and β_0(x, z; θ_k):

α_0(x, z; θ_k) = p_0(x, z; ν_k) := p(x_0 = x) Π_{i=1}^{N} ν_k^i(z^i),    (16)
β_0(x, z; θ_k) = r̄(x, z; π_k) := Σ_a r̄(x, a) Π_{i=1}^{N} π_k^i(a^i|z^i).    (17)

Moreover, α_{t+1}(x, z; θ_k) and β_{t+1}(x, z; θ_k) are easily calculated from α_t(x, z; θ_k) and β_t(x, z; θ_k):

α_{t+1}(x′, z′; θ_k) = Σ_{x,z} p(x′, z′|x, z; θ_k) α_t(x, z; θ_k),    (18)
β_{t+1}(x, z; θ_k) = Σ_{x′,z′} p(x′, z′|x, z; θ_k) β_t(x′, z′; θ_k),    (19)

where p(x′, z′|x, z; θ_k) is the joint transition probability of the state and the memory under the policy θ_k:

p(x′, z′|x, z; θ_k) := Σ_a Π_{i=1}^{N} π_k^i(a^i|z^i) p(x′|x, a) Σ_{y′} p(y′|x′, a) Π_{i=1}^{N} λ_k^i(z′^i|z^i, y′^i).    (20)

Equations (18) and (19) are called the forward and backward equations, respectively. Using Equations (16)-(19), α_t(x, z; θ_k) and β_t(x, z; θ_k) can be efficiently calculated from t = 0 to t = ∞, which is called the forward-backward algorithm [23]. By calculating the forward-backward algorithm from t = 0 to t = ∞, F(x, z; θ_k) and V(x, z; θ_k) can be obtained as follows [16]:

F(x, z; θ_k) = Σ_{t=0}^{∞} γ^t α_t(x, z; θ_k),    (21)
V(x, z; θ_k) = Σ_{t=0}^{∞} γ^t β_t(x, z; θ_k).    (22)

However, F(x, z; θ_k) and V(x, z; θ_k) cannot be calculated exactly by this approach because it is practically impossible to run the forward-backward algorithm until t = ∞. Therefore, the forward-backward algorithm needs to be terminated at a finite t = T_max. In this case, F(x, z; θ_k) and V(x, z; θ_k) are approximated as follows:

F(x, z; θ_k) ≈ Σ_{t=0}^{T_max} γ^t α_t(x, z; θ_k),    (23)
V(x, z; θ_k) ≈ Σ_{t=0}^{T_max} γ^t β_t(x, z; θ_k).    (24)

T_max needs to be large enough to reduce the approximation errors. In the previous study [16], a heuristic termination condition was proposed (Equation (25)). However, the relation between T_max and the approximation errors is unclear in Equation (25). We propose a new termination condition that guarantees the approximation errors:

Proposition 2. Set an acceptable error bound ε > 0. If

T_max ≥ log((1 - γ)ε)/log γ - 1    (26)

is satisfied, then

‖F(θ_k) - Σ_{t=0}^{T_max} γ^t α_t(θ_k)‖_1 ≤ ε,    (27)
‖V(θ_k) - Σ_{t=0}^{T_max} γ^t β_t(θ_k)‖_∞ ≤ ε    (28)

are satisfied.
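The truncated forward-backward computation can be sketched as follows. The 3-state chain below is a hypothetical stand-in for the joint (x, z) process under θ_k; the script runs the forward and backward recursions up to a T_max chosen so that the discarded geometric tail is at most ε:

```python
import math

# Truncated forward-backward sketch on a small joint chain over s = (x, z)
# (3 aggregated joint states, assumed for illustration). alpha_t is the
# marginal p(s_t); beta_t is the t-step expected normalized reward.
GAMMA, EPS = 0.95, 0.1
P = [[0.7, 0.2, 0.1],   # P[s][s'] = p(s'|s) under the current policy theta_k
     [0.1, 0.8, 0.1],
     [0.3, 0.3, 0.4]]
p0 = [1.0, 0.0, 0.0]     # initial distribution p_0(s)
rbar = [0.0, 0.5, 1.0]   # normalized one-step reward rbar(s) in [0, 1]

def forward_step(alpha):  # alpha_{t+1}(s') = sum_s alpha_t(s) P[s][s']
    return [sum(alpha[s] * P[s][sp] for s in range(3)) for sp in range(3)]

def backward_step(beta):  # beta_{t+1}(s) = sum_s' P[s][s'] beta_t(s')
    return [sum(P[s][sp] * beta[sp] for sp in range(3)) for s in range(3)]

# Choosing T_max = ceil(log((1-gamma)*eps)/log(gamma)) - 1 makes the discarded
# tail sum_{t > T_max} gamma^t = gamma^(T_max+1)/(1-gamma) at most eps.
T_max = math.ceil(math.log((1 - GAMMA) * EPS) / math.log(GAMMA)) - 1

alpha, beta = p0[:], rbar[:]
F, V = [0.0] * 3, [0.0] * 3
for t in range(T_max + 1):
    for s in range(3):
        F[s] += GAMMA**t * alpha[s]   # truncated discounted occupancy
        V[s] += GAMMA**t * beta[s]    # truncated discounted reward-to-go
    alpha, beta = forward_step(alpha), backward_step(beta)

tail = GAMMA**(T_max + 1) / (1 - GAMMA)
print("T_max:", T_max, " tail bound:", round(tail, 4))
```

With γ = 0.95 and ε = 0.1, the recursion already needs on the order of a hundred steps; at γ = 0.99, as in the experiments, T_max grows into the thousands, which is the inefficiency the Bellman equations remove.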

Summary
In summary, the EM algorithm for DEC-POMDP is given by Algorithm 1. In the E step, we calculate α_t(x, z; θ_k) and β_t(x, z; θ_k) from t = 0 to t = T_max by the forward-backward algorithm. In the M step, we update θ_k to θ_{k+1} using Equations (9)-(11). The time complexities of the E step and the M step are O((|X||Z|)^2 T_max) and O((|X||Z|)^2 |Y||A|), respectively, where A := ⊗_{i=1}^{N} A^i, and Y and Z are defined in the same way. The EM algorithm for DEC-POMDP is less efficient when the discount factor γ is closer to 1 or the acceptable error bound ε is smaller because T_max needs to be larger in these cases.

Bellman EM Algorithm
In the EM algorithm for DEC-POMDP, α t (x, z; θ k ) and β t (x, z; θ k ) are calculated from t = 0 to t = T max to obtain F(x, z; θ k ) and V(x, z; θ k ). However, T max needs to be large to reduce the approximation errors of F(x, z; θ k ) and V(x, z; θ k ), which impairs the computational efficiency of the EM algorithm for DEC-POMDP [32,33]. In this section, we calculate F(x, z; θ k ) and V(x, z; θ k ) directly without calculating α t (x, z; θ k ) and β t (x, z; θ k ) to resolve the drawback of EM.

Forward and Backward Bellman Equations
The following equations are useful for obtaining F(x, z; θ_k) and V(x, z; θ_k) directly:

Theorem 2. F(x, z; θ) and V(x, z; θ) satisfy the following equations:

F(x′, z′; θ) = p_0(x′, z′; ν) + γ Σ_{x,z} p(x′, z′|x, z; θ) F(x, z; θ),    (29)
V(x, z; θ) = r̄(x, z; π) + γ Σ_{x′,z′} p(x′, z′|x, z; θ) V(x′, z′; θ).    (30)

Equations (29) and (30) are called the forward Bellman equation and the backward Bellman equation, respectively.
Proof. See Appendix B.
Note that the direction of time differs between Equations (29) and (30). In Equation (29), x and z are an earlier state and memory than x′ and z′, respectively. In Equation (30), x′ and z′ are a later state and memory than x and z, respectively.
The backward Bellman Equation (30) corresponds to the traditional Bellman equation, which has been used in other algorithms for DEC-POMDP [11][12][13][14]. In contrast, the forward Bellman equation, which is introduced in this paper, is similar to that recently proposed in the offline reinforcement learning [37][38][39].
Since the forward and backward Bellman equations are linear in F and V, they can be solved exactly:

F(θ) = (I - γ P(θ)^⊤)^{-1} p_0(ν),    (31)
V(θ) = (I - γ P(θ))^{-1} r̄(π),    (32)

where F(θ), V(θ), p_0(ν), and r̄(π) are the vectors whose entries are F(x, z; θ), V(x, z; θ), p_0(x, z; ν), and r̄(x, z; π), respectively, and P(θ) is the |X||Z| × |X||Z| matrix whose ((x, z), (x′, z′)) entry is p(x′, z′|x, z; θ). Therefore, we can obtain F(x, z; θ_k) and V(x, z; θ_k) from the forward and backward Bellman equations.
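A minimal sketch of the BEM-style direct solve, on the same kind of hypothetical 3-state chain: build I - γP, solve the backward Bellman equation by Gaussian elimination, and check the Bellman residual.

```python
# Solving the backward Bellman equation (I - gamma*P) V = rbar directly, as
# BEM does, via Gaussian elimination (toy 3-state chain, assumed dynamics).
GAMMA = 0.95
P = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]
rbar = [0.0, 0.5, 1.0]
n = 3

# Build A = I - gamma*P and solve A V = rbar.
A = [[(1.0 if i == j else 0.0) - GAMMA * P[i][j] for j in range(n)]
     for i in range(n)]
b = rbar[:]
for col in range(n):                      # forward elimination with pivoting
    piv = max(range(col, n), key=lambda row: abs(A[row][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for row in range(col + 1, n):
        m = A[row][col] / A[col][col]
        for c in range(col, n):
            A[row][c] -= m * A[col][c]
        b[row] -= m * b[col]
V = [0.0] * n
for i in reversed(range(n)):              # back substitution
    V[i] = (b[i] - sum(A[i][j] * V[j] for j in range(i + 1, n))) / A[i][i]

# Check the Bellman residual: V should satisfy V = rbar + gamma * P V.
residual = max(abs(V[i] - (rbar[i] + GAMMA * sum(P[i][j] * V[j]
               for j in range(n)))) for i in range(n))
print("V:", [round(v, 3) for v in V], " residual:", residual)
```

The elimination costs O(n^3) for an n × n system with n = |X||Z|, which is exactly the cost that makes BEM unattractive when the state or memory space is large.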

Bellman EM Algorithm (BEM)
The forward-backward algorithm from t = 0 to t = T max in the EM algorithm for DEC-POMDP can be replaced by the forward and backward Bellman equations. In this paper, the EM algorithm for DEC-POMDP that uses the forward and backward Bellman equations instead of the forward-backward algorithm from t = 0 to t = T max is called the Bellman EM algorithm (BEM).

Comparison of EM and BEM
BEM is summarized as Algorithm 2. The M step in Algorithm 2 is exactly the same as that in Algorithm 1; only the E step is different. While the time complexity of the E step in EM is O((|X||Z|)^2 T_max), that in BEM is O((|X||Z|)^3) because BEM calculates the inverse matrix (I - γP(θ_k))^{-1}.
Moreover, BEM can be more efficient than EM when the discount factor γ is close to 1 or the acceptable error bound ε is small because T max needs to be large enough in these cases. However, when the size of the state space |X | or that of the joint memory space |Z | is large, BEM cannot always be more efficient than EM because BEM needs to calculate the inverse matrix (I − γP(θ k )) −1 . To circumvent this shortcoming, we propose a new algorithm, the modified Bellman EM algorithm (MBEM), to obtain F(x, z; θ k ) and V(x, z; θ k ) without calculating the inverse matrix.

Forward and Backward Bellman Operators
We define the forward and backward Bellman operators, A_θ and B_θ, as follows:

(A_θ f)(x′, z′) := p_0(x′, z′; ν) + γ Σ_{x,z} p(x′, z′|x, z; θ) f(x, z),    (33)
(B_θ v)(x, z) := r̄(x, z; π) + γ Σ_{x′,z′} p(x′, z′|x, z; θ) v(x′, z′),    (34)

for any f, v : X × Z → R. From the forward and backward Bellman equations, A_θ and B_θ satisfy the following equations:

A_θ F(θ) = F(θ),    (35)
B_θ V(θ) = V(θ).    (36)

Thus, F(x, z; θ_k) and V(x, z; θ_k) are the fixed points of A_θ and B_θ, respectively. A_θ and B_θ have the following useful property:

Proposition 3. A_θ and B_θ are contractive operators:

‖A_θ f - A_θ g‖_1 ≤ γ ‖f - g‖_1,    (37)
‖B_θ u - B_θ v‖_∞ ≤ γ ‖u - v‖_∞,    (38)

for any f, g, u, v : X × Z → R.
Note that the norm differs between Equations (37) and (38), which is caused by the difference in the time direction between A_θ and B_θ. While x and z are an earlier state and memory than x′ and z′ in the forward Bellman operator A_θ, x′ and z′ are a later state and memory than x and z in the backward Bellman operator B_θ.
We obtain F(x, z; θ_k) and V(x, z; θ_k) using Equations (35)-(38) as follows:

F(x, z; θ) = lim_{L→∞} (A_θ^L f)(x, z),    (39)
V(x, z; θ) = lim_{L→∞} (B_θ^L v)(x, z),    (40)

where f and v are arbitrary initial functions. Therefore, F(x, z; θ_k) and V(x, z; θ_k) can be calculated by applying the forward and backward Bellman operators, A_θ and B_θ, to arbitrary initial functions, f(x, z) and v(x, z), infinitely many times.
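The fixed-point property can be sketched directly: apply the backward Bellman operator to a deliberately poor initial function and watch it converge. The 3-state chain is again a hypothetical stand-in for the joint (x, z) process.

```python
# Applying the backward Bellman operator B_theta to an arbitrary initial
# function v: by contraction, B^L v -> V as L -> infinity (Equation (40)).
GAMMA = 0.95
P = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]  # assumed p(s'|s)
rbar = [0.0, 0.5, 1.0]

def B(v):  # (B v)(s) = rbar(s) + gamma * sum_s' p(s'|s) v(s')
    return [rbar[s] + GAMMA * sum(P[s][sp] * v[sp] for sp in range(3))
            for s in range(3)]

v = [10.0, -3.0, 0.0]          # arbitrary (and deliberately bad) initial guess
for _ in range(2000):
    v = B(v)

residual = max(abs(v[s] - B(v)[s]) for s in range(3))
print("fixed point V:", [round(x, 3) for x in v], " residual:", residual)
```

Whatever the starting function, the iterates contract toward the unique fixed point at rate at most γ per application, so the residual becomes negligible.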

Modified Bellman EM Algorithm (MBEM)
The calculation of the forward and backward Bellman equations in BEM can be replaced by that of the forward and backward Bellman operators. In this paper, the variant of BEM that uses the forward and backward Bellman operators instead of the forward and backward Bellman equations is called the modified Bellman EM algorithm (MBEM).

Comparison of EM, BEM, and MBEM
Since MBEM does not need the inverse matrix, MBEM can be more efficient than BEM when the size of the state space |X| and that of the joint memory space |Z| are large. Thus, MBEM resolves the drawback of BEM.
On the other hand, MBEM shares a problem with EM. MBEM applies A_{θ_k} and B_{θ_k} infinitely many times to obtain F(x, z; θ_k) and V(x, z; θ_k). Since this is practically impossible, the calculation needs to be terminated after a finite number of applications L_max. In this case, F(x, z; θ_k) and V(x, z; θ_k) are approximated as follows:

F(x, z; θ_k) ≈ (A_{θ_k}^{L_max} f)(x, z),    (41)
V(x, z; θ_k) ≈ (B_{θ_k}^{L_max} v)(x, z).    (42)

L_max needs to be large enough to reduce the approximation errors of F(x, z; θ_k) and V(x, z; θ_k), which impairs the computational efficiency of MBEM. Thus, MBEM can potentially suffer from the same problem as EM. However, we can theoretically show that MBEM is more efficient than EM by comparing T_max and L_max under the condition that the approximation errors of F(x, z; θ_k) and V(x, z; θ_k) are smaller than the acceptable error bound ε.
When f(x, z) = p_0(x, z; ν_k) and v(x, z) = r̄(x, z; π_k), Equations (41) and (42) coincide with Equations (23) and (24), respectively. Thus, in this case, L_max = T_max, and the computational efficiency of MBEM is the same as that of EM. However, MBEM has two useful properties that EM does not have, and these properties can make MBEM more efficient than EM. We explain them in more detail below.

The first property of MBEM is the contractibility of the forward and backward Bellman operators, A_{θ_k} and B_{θ_k}. From the contractibility of the Bellman operators, L_max is determined adaptively as follows:

Proposition 5. Set an acceptable error bound ε > 0. If

‖A_{θ_k}^{L_max} f - A_{θ_k}^{L_max-1} f‖_1 ≤ (1 - γ)ε/γ,    (43)
‖B_{θ_k}^{L_max} v - B_{θ_k}^{L_max-1} v‖_∞ ≤ (1 - γ)ε/γ    (44)

are satisfied, then

‖F(θ_k) - A_{θ_k}^{L_max} f‖_1 ≤ ε,    (45)
‖V(θ_k) - B_{θ_k}^{L_max} v‖_∞ ≤ ε    (46)

are satisfied.
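The adaptive termination rule can be sketched as follows (hypothetical 3-state chain; the threshold (1 - γ)ε/γ on the successive difference is the standard stopping rule that follows from the γ-contraction argument):

```python
# Adaptive termination: stop applying B once the successive difference
# ||B^L v - B^(L-1) v||_inf <= (1-gamma)*eps/gamma, which by contraction
# guarantees ||B^L v - V||_inf <= eps.
GAMMA, EPS = 0.95, 0.1
P = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]  # assumed p(s'|s)
rbar = [0.0, 0.5, 1.0]

def B(v):
    return [rbar[s] + GAMMA * sum(P[s][sp] * v[sp] for sp in range(3))
            for s in range(3)]

v, L = [0.0, 0.0, 0.0], 0
while True:
    v_new = B(v)
    L += 1
    diff = max(abs(a - b) for a, b in zip(v_new, v))
    v = v_new
    if diff <= (1 - GAMMA) * EPS / GAMMA:
        break

# Compare with a near-exact fixed point obtained by many more applications.
v_star = v[:]
for _ in range(3000):
    v_star = B(v_star)
err = max(abs(a - b) for a, b in zip(v, v_star))
print("L_max:", L, " sup error:", round(err, 4))
```

The measured error stays below ε, and the loop stops as soon as the iterates certify this, rather than after a fixed, worst-case number of sweeps.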

Proof. See Appendix C.3.
T_max is always constant for every E step: T_max = ⌈log((1 - γ)ε)/log γ⌉ - 1. Thus, even if the approximation errors of F(x, z; θ_k) and V(x, z; θ_k) are smaller than ε at some t < T_max, the forward-backward algorithm cannot be terminated before t = T_max because the approximation errors cannot be evaluated within the forward-backward algorithm.
In contrast, L_max is adaptively determined depending on A_{θ_k}^L f(x, z) and B_{θ_k}^L v(x, z). Thus, if A_{θ_k}^L f(x, z) and B_{θ_k}^L v(x, z) are close enough to F(x, z; θ_k) and V(x, z; θ_k), the E step of MBEM can be terminated, because the approximation errors of F(x, z; θ_k) and V(x, z; θ_k) can be evaluated owing to the contractibility of the forward and backward Bellman operators.
The second property of MBEM is the arbitrariness of the initial functions, f(x, z) and v(x, z). In MBEM, the initial functions converge to the fixed points, F(x, z; θ_k) and V(x, z; θ_k), as the forward and backward Bellman operators, A_{θ_k} and B_{θ_k}, are applied repeatedly. Therefore, if the initial functions are close to the fixed points, L_max can be reduced. The question, then, is which initial functions are close to the fixed points.
We suggest setting F(x, z; θ_{k-1}) and V(x, z; θ_{k-1}) as the initial functions, f(x, z) and v(x, z). In most cases, θ_{k-1} is close to θ_k, and when θ_{k-1} is close to θ_k, F(x, z; θ_{k-1}) and V(x, z; θ_{k-1}) are expected to be close to F(x, z; θ_k) and V(x, z; θ_k). Therefore, setting F(x, z; θ_{k-1}) and V(x, z; θ_{k-1}) as the initial functions is expected to reduce L_max. Hence, MBEM can be more efficient than EM because MBEM can utilize the results of the previous iteration through this arbitrariness of the initial functions.
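The effect of warm-starting can be sketched numerically. In the toy example below (hypothetical chain; the M-step policy update is mimicked by a small change in the reward vector), iterating from the previous fixed point reaches the tolerance in far fewer applications than iterating from scratch:

```python
# Warm-starting the operator iteration: after a small policy update
# theta_k -> theta_{k+1}, starting from the previous fixed point needs far
# fewer applications of B than starting from zero.
GAMMA, TOL = 0.95, 1e-3

def make_B(P, rbar):
    def B(v):
        return [rbar[s] + GAMMA * sum(P[s][sp] * v[sp] for sp in range(3))
                for s in range(3)]
    return B

def iterate_to_tol(B, v):
    """Apply B until successive iterates differ by at most TOL; count steps."""
    L = 0
    while True:
        v_new = B(v)
        L += 1
        if max(abs(a - b) for a, b in zip(v_new, v)) <= TOL:
            return v_new, L
        v = v_new

P = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]  # assumed p(s'|s)
B_old = make_B(P, [0.0, 0.5, 1.0])
V_old, _ = iterate_to_tol(B_old, [0.0, 0.0, 0.0])

B_new = make_B(P, [0.0, 0.52, 0.98])        # slightly changed policy
_, L_cold = iterate_to_tol(B_new, [0.0, 0.0, 0.0])
_, L_warm = iterate_to_tol(B_new, V_old)     # warm start from previous V
print("cold start L:", L_cold, " warm start L:", L_warm)
```

The warm start begins with a much smaller initial Bellman residual, so with the same contraction rate it crosses the tolerance in fewer sweeps; this is the mechanism behind L_max << T_max in the later iterations of MBEM.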
However, it is unclear how much smaller than T_max L_max can become by setting F(x, z; θ_{k-1}) and V(x, z; θ_{k-1}) as the initial functions; numerical evaluations are therefore needed. Moreover, in the first iteration, the results of a previous iteration are not available. Therefore, in the first iteration, we set f(x, z) = p_0(x, z; ν_k) and v(x, z) = r̄(x, z; π_k) because these initial functions guarantee L_max ≤ T_max by Proposition 6.
MBEM is summarized as Algorithm 3. The M step of Algorithm 3 is exactly the same as that of Algorithms 1 and 2; only the E step is different. The time complexity of the E step in MBEM is O((|X||Z|)^2 L_max). MBEM does not use the inverse matrix, which resolves the drawback of BEM. Moreover, MBEM can reduce L_max through the contractibility of the Bellman operators and the arbitrariness of the initial functions, which can resolve the drawback of EM.

Summary of EM, BEM, and MBEM
EM, BEM, and MBEM are summarized in Table 1. The M step is exactly the same among these algorithms, and only the E step is different:

• EM obtains F(x, z; θ_k) and V(x, z; θ_k) by calculating the forward-backward algorithm up to T_max. T_max needs to be large enough to reduce the approximation errors of F(x, z; θ_k) and V(x, z; θ_k), which impairs the computational efficiency.

• BEM obtains F(x, z; θ_k) and V(x, z; θ_k) by solving the forward and backward Bellman equations. BEM can be more efficient than EM because it calculates the forward and backward Bellman equations instead of the forward-backward algorithm up to T_max. However, BEM cannot always be more efficient than EM when the size of the state space |X| or that of the memory space |Z| is large, because BEM calculates an inverse matrix to solve the forward and backward Bellman equations.

• MBEM obtains F(x, z; θ_k) and V(x, z; θ_k) by applying the forward and backward Bellman operators, A_{θ_k} and B_{θ_k}, to the initial functions, f(x, z) and v(x, z), L_max times. Since MBEM does not need to calculate the inverse matrix, MBEM may be more efficient than EM even when the size of problems is large, which resolves the drawback of BEM. Although L_max needs to be large enough to reduce the approximation errors, which is the same problem as EM, MBEM can evaluate the approximation errors more tightly owing to the contractibility of A_{θ_k} and B_{θ_k}, and can utilize the results of the previous iteration, F(x, z; θ_{k-1}) and V(x, z; θ_{k-1}), as the initial functions. These properties enable MBEM to be more efficient than EM.

Numerical Experiment
In this section, we compare the performance of EM, BEM, and MBEM using numerical experiments on four benchmarks for DEC-POMDP: broadcast [40], recycling robot [15], wireless network [27], and box pushing [41]. Detailed settings such as the state transition probability, the observation probability, and the reward function are described at http://masplan.org/problem_domains, accessed on 22 June 2020. We implement EM, BEM, and MBEM in C++. Figure 3 shows the experimental results. In all the experiments, we set the number of agents N = 2, the discount factor γ = 0.99, the acceptable error bound ε = 0.1, and the size of the memory available to the ith agent |Z^i| = 2. The sizes of the state space |X|, the action space |A^i|, and the observation space |Y^i| differ across problems and are shown on each panel. We note that the state space |X| is small in the broadcast (a,e,i) and the recycling robot (b,f,j) problems, whereas it is large in the wireless network (c,g,k) and the box pushing (d,h,l) problems. While the expected return J(θ_k) with respect to the computational time differs between the algorithms (a-d), that with respect to the iteration k is almost the same (e-h). This is because the M step of these algorithms is exactly the same; the difference in computational time is therefore caused by the E step.
The convergence of BEM is faster than that of EM in the small state size problems, i.e., Figure 3a,b. This is because EM calculates the forward-backward algorithm from t = 0 to t = T max , where T max is large. On the other hand, the convergence of BEM is slower than that of EM in the large state size problems, i.e., Figure 3c,d. This is because BEM calculates the inverse matrix.
The convergence of MBEM is faster than that of EM in all the experiments in Figure 3a-d. This is because L_max is smaller than T_max, as shown in Figure 3i-l. While EM requires about 1000 iterations of the forward-backward algorithm to guarantee that the approximation errors of F(x, z; θ_k) and V(x, z; θ_k) are smaller than ε, MBEM requires only about 10 applications of the forward and backward Bellman operators. Thus, MBEM is more efficient than EM. The reason why L_max is smaller than T_max is that MBEM can utilize the results of the previous iteration, F(x, z; θ_{k-1}) and V(x, z; θ_{k-1}), as the initial functions, f(x, z) and v(x, z). This is seen by comparing L_max and T_max in the first iteration. In the first iteration k = 0, L_max is almost the same as T_max because the previous results cannot be used as the initial functions. In the subsequent iterations k ≥ 1, L_max is much smaller than T_max because MBEM can utilize the results of the previous iteration, F(x, z; θ_{k-1}) and V(x, z; θ_{k-1}), as the initial functions f(x, z) and v(x, z).

Conclusions and Future Works
In this paper, we proposed the Bellman EM algorithm (BEM) and the modified Bellman EM algorithm (MBEM) by introducing the forward and backward Bellman equations into the EM algorithm for DEC-POMDP. BEM can be more efficient than EM because BEM does not calculate the forward-backward algorithm up to the infinite horizon. However, BEM cannot always be more efficient than EM when the size of the state space or that of the memory space is large because BEM calculates the inverse matrix. MBEM can be more efficient than EM regardless of the size of problems because MBEM does not calculate the inverse matrix. Although MBEM needs to apply the forward and backward Bellman operators many times, MBEM can evaluate the approximation errors more tightly owing to the contractibility of these operators and can utilize the results of the previous iteration owing to the arbitrariness of the initial functions, which enables MBEM to be more efficient than EM. We verified this theoretical evaluation by numerical experiments, which demonstrate that the convergence of MBEM is much faster than that of EM regardless of the size of problems.
Our algorithms still leave room for further improvements for real-world problems, which often have a large discrete or continuous state space. Some of these improvements may be achieved by advanced techniques based on the Bellman equation [1][2][3][4]. For example, MBEM may be accelerated by the Gauss-Seidel method [2]. The convergence rate of the E step of MBEM is given by the discount factor γ, which is the same as that of EM. The Gauss-Seidel method modifies the Bellman operators so that the convergence rate of MBEM can be smaller than the discount factor γ. Therefore, even if F(x, z; θ_{k-1}) and V(x, z; θ_{k-1}) are not close to F(x, z; θ_k) and V(x, z; θ_k), MBEM may be more efficient than EM with the Gauss-Seidel method. Moreover, in DEC-POMDP with a large discrete or continuous state space, F(x, z; θ_k) and V(x, z; θ_k) cannot be represented exactly because doing so requires too much memory. This problem may be resolved by value function approximation [34][35][36], which approximates F(x, z; θ_k) and V(x, z; θ_k) using parametric models such as neural networks. The remaining problem is how to find the optimal approximation parameters, and value function approximation finds them via the Bellman equation. Therefore, these potential extensions of our algorithms may lead to applications to real-world DEC-POMDP problems.
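The Gauss-Seidel acceleration mentioned above can be sketched as follows: a Gauss-Seidel sweep reuses freshly updated entries within the same pass over the (hypothetical) 3-state chain, and typically needs no more sweeps, and usually fewer, than the plain operator to reach a fixed tolerance.

```python
# Plain (Jacobi-style) sweeps of the backward Bellman operator versus
# Gauss-Seidel sweeps, which reuse updated entries within the same pass.
GAMMA = 0.95
P = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]  # assumed p(s'|s)
rbar = [0.0, 0.5, 1.0]

def jacobi_sweep(v):
    return [rbar[s] + GAMMA * sum(P[s][sp] * v[sp] for sp in range(3))
            for s in range(3)]

def gauss_seidel_sweep(v):
    v = v[:]                       # update in place, reusing fresh values
    for s in range(3):
        v[s] = rbar[s] + GAMMA * sum(P[s][sp] * v[sp] for sp in range(3))
    return v

def sweeps_to_tol(step, tol=1e-6):
    """Count sweeps until successive iterates differ by at most tol."""
    v, L = [0.0, 0.0, 0.0], 0
    while True:
        v_new = step(v)
        L += 1
        if max(abs(a - b) for a, b in zip(v_new, v)) <= tol:
            return L
        v = v_new

jac_L = sweeps_to_tol(jacobi_sweep)
gs_L = sweeps_to_tol(gauss_seidel_sweep)
print("Jacobi sweeps:", jac_L, " Gauss-Seidel sweeps:", gs_L)
```

Both sweeps converge to the same fixed point; the Gauss-Seidel splitting simply has a smaller effective contraction factor, which is the source of the speedup suggested in the text.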

Acknowledgments:
We would like to thank the lab members for a fruitful discussion.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Proof in Section 3
Appendix A.1. Proof of Theorem 1

J(θ) can be calculated as follows: In order to prove Proposition 1, we calculate Q(θ; θ_k). It can be calculated as follows: where C is a constant independent of θ.

Appendix B. Proof in Section 4
Proof of Theorem 2

F(x, z; θ) can be calculated as follows: Therefore, Equation (29) is proved. Equation (30) can be proved in the same way.

Appendix C. Proof in Section 5
Appendix C.4. Proof of Proposition 6

When f(x, z) = p_0(x, z; ν_k) and v(x, z) = r̄(x, z; π_k), the termination conditions of MBEM can be evaluated explicitly, which yields a bound on L_max. Here, ⌈·⌉ : R → Z is defined by ⌈x⌉ := min{n ∈ Z | n > x}. From Proposition 2, the minimum T_max is given by T_max = ⌈log((1 - γ)ε)/log γ⌉ - 1. Therefore, L_max ≤ T_max holds.

Appendix D. A Note on the Algorithm Proposed by Song et al.
In this section, we show that a parameter dependency is overlooked in the algorithm of [32]. We outline the derivation of the algorithm in [32] and discuss the parameter dependency. Since we use the notation of the present paper rather than that of [32], it is recommended to consult [32] alongside this section.
The problem, then, is how to calculate Equation (A46). It cannot be calculated analytically because the θ dependency of Ṽ(z_0^i, a_0^i, y_1^i, z_1^i; θ) is too complex. However, ref. [32] overlooked the parameter dependencies of r̃(z_0^i, a_0^i; π^{-i}) and Ṽ(z_0^i, a_0^i, y_1^i, z_1^i; θ), and therefore calculated Equation (A46) as Equations (A47) and (A48). However, Equations (A47) and (A48) do not correspond to Equation (A46), and therefore the algorithm as a whole may not always provide the optimal policy.