Article

Forward and Backward Bellman Equations Improve the Efficiency of the EM Algorithm for DEC-POMDP

by
Takehiro Tottori
1,* and
Tetsuya J. Kobayashi
1,2,3,4
1
Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8654, Japan
2
Department of Electrical Engineering and Information Systems, Graduate School of Engineering, The University of Tokyo, Tokyo 113-8654, Japan
3
Institute of Industrial Science, The University of Tokyo, Tokyo 153-8505, Japan
4
Universal Biology Institute, The University of Tokyo, Tokyo 113-8654, Japan
*
Author to whom correspondence should be addressed.
Entropy 2021, 23(5), 551; https://doi.org/10.3390/e23050551
Submission received: 19 March 2021 / Revised: 22 April 2021 / Accepted: 26 April 2021 / Published: 29 April 2021
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract:
Decentralized partially observable Markov decision process (DEC-POMDP) models sequential decision making problems by a team of agents. Since the planning of DEC-POMDP can be interpreted as the maximum likelihood estimation for the latent variable model, DEC-POMDP can be solved by the EM algorithm. However, in EM for DEC-POMDP, the forward–backward algorithm needs to be calculated up to the infinite horizon, which impairs the computational efficiency. In this paper, we propose the Bellman EM algorithm (BEM) and the modified Bellman EM algorithm (MBEM) by introducing the forward and backward Bellman equations into EM. BEM can be more efficient than EM because BEM calculates the forward and backward Bellman equations instead of the forward–backward algorithm up to the infinite horizon. However, BEM cannot always be more efficient than EM when the size of problems is large because BEM calculates an inverse matrix. We circumvent this shortcoming in MBEM by calculating the forward and backward Bellman equations without the inverse matrix. Our numerical experiments demonstrate that the convergence of MBEM is faster than that of EM.

1. Introduction

Markov decision process (MDP) models sequential decision making problems and has been used for planning and reinforcement learning [1,2,3,4]. MDP consists of an environment and an agent. The agent observes the state of the environment and controls it by taking actions. The planning of MDP is to find the optimal control policy maximizing the objective function, which is typically solved by the Bellman equation-based algorithms such as value iteration and policy iteration [1,2,3,4].
Decentralized partially observable MDP (DEC-POMDP) is an extension of MDP to a multiagent and partially observable setting, which models sequential decision making problems by a team of agents [5,6,7]. DEC-POMDP consists of an environment and multiple agents, and the agents cannot observe the state of the environment and the actions of the other agents completely. The agents infer the environmental state and the other agents’ actions from their observation histories and control them by taking actions. The planning of DEC-POMDP is to find not only the optimal control policy but also the optimal inference policy for each agent, which maximize the objective function [5,6,7]. Applications of DEC-POMDP include planetary exploration by a team of rovers [8], target tracking by a team of sensors [9], and information transmission by a team of devices [10]. Since the agents cannot observe the environmental state and the other agents’ actions completely, it is difficult to extend the Bellman equation-based algorithms for MDP to DEC-POMDP straightforwardly [11,12,13,14,15].
DEC-POMDP can be solved using control as inference [16,17]. Control as inference is a framework that interprets a control problem as an inference problem by introducing auxiliary variables [18,19,20,21,22]. Although control as inference has several variants, Toussaint and Storkey showed that the planning of MDP can be interpreted as the maximum likelihood estimation for a latent variable model [18]. Thus, the planning of MDP can be solved by the EM algorithm, a standard algorithm for the maximum likelihood estimation of latent variable models [23]. Since the EM algorithm is more general than the Bellman equation-based algorithms, it can be straightforwardly extended to POMDP [24,25] and DEC-POMDP [16,17]. The computational efficiency of the EM algorithm for DEC-POMDP is comparable to that of other algorithms for DEC-POMDP [16,17,26,27,28], and extensions to the average reward setting and to the reinforcement learning setting have been studied [29,30,31].
However, the EM algorithm for DEC-POMDP is not efficient enough to be applied to real-world problems, which often involve a large number of agents or a large environment. Several studies have therefore attempted to improve the computational efficiency of the EM algorithm for DEC-POMDP [26,27]. However, because these studies achieve their improvements by restricting the possible interactions between agents, their applicability is limited. Improving the efficiency for more general DEC-POMDP problems thus remains desirable.
In order to improve the computational efficiency of the EM algorithm for general DEC-POMDP problems, two problems need to be resolved. The first problem is the forward–backward algorithm up to the infinite horizon. The EM algorithm for DEC-POMDP uses the forward–backward algorithm, which has also been used in the EM algorithm for hidden Markov models [23]. However, in the EM algorithm for DEC-POMDP, the forward–backward algorithm needs to be calculated up to the infinite horizon, which impairs the computational efficiency [32,33]. The second problem is the Bellman equation. The EM algorithm for DEC-POMDP does not use the Bellman equation, which plays a central role in planning and reinforcement learning for MDP [1,2,3,4]. Therefore, the EM algorithm for DEC-POMDP cannot use the advanced techniques based on the Bellman equation, which make it possible to solve large problems [34,35,36].
Some previous studies attempted to resolve these problems by replacing the forward–backward algorithm up to the infinite horizon with the Bellman equation [32,33]. However, these studies did not fully improve the computational efficiency. For example, Song et al. replaced the forward–backward algorithm with the Bellman equation and showed through numerical experiments that their algorithm is more efficient than EM and other DEC-POMDP algorithms [32]. However, since a parameter dependency is overlooked in [32], their algorithm may not find the optimal policy in a general situation (see Appendix D for more details). Moreover, Kumar et al. showed that the forward–backward algorithm can be replaced by linear programming with the Bellman equation as a constraint [33]. However, their algorithm may be less efficient than the EM algorithm when the size of problems is large. Therefore, previous studies have not yet fully improved the computational efficiency of the EM algorithm for DEC-POMDP.
In this paper, we propose algorithms for DEC-POMDP that are more efficient than the EM algorithm by introducing the forward and backward Bellman equations into it. The backward Bellman equation corresponds to the traditional Bellman equation, which has been used in previous studies [32,33]. In contrast, the forward Bellman equation has not yet been used explicitly for the planning of DEC-POMDP. This equation is similar to that recently proposed in offline reinforcement learning for MDP [37,38,39]. In offline reinforcement learning for MDP, the forward Bellman equation is used to correct the difference between the data sampling policy and the policy to be evaluated. In the planning of DEC-POMDP, the forward Bellman equation plays an important role in inferring the environmental state.
We propose the Bellman EM algorithm (BEM) and the modified Bellman EM algorithm (MBEM) by replacing the forward–backward algorithm with the forward and backward Bellman equations. The two algorithms differ in how they solve the forward and backward Bellman equations. BEM solves the forward and backward Bellman equations by calculating an inverse matrix. BEM can be more efficient than EM because BEM does not calculate the forward–backward algorithm up to the infinite horizon. However, since BEM calculates the inverse matrix, it cannot always be more efficient than EM when the size of problems is large, which is the same problem as in [33]. In fact, BEM is essentially equivalent to the algorithm of [33]. In the linear programming problem of [33], the number of variables is equal to the number of constraints, which enables us to solve it from the constraints alone, without optimization. Therefore, the algorithm in [33] becomes equivalent to BEM, and it suffers from the same problem as BEM.
This problem is addressed by MBEM. MBEM solves the forward and backward Bellman equations by applying the forward and backward Bellman operators to arbitrary initial functions infinitely many times. Although MBEM needs to calculate the forward and backward Bellman operators infinitely many times, which is the same problem as EM, MBEM can evaluate the approximation errors more tightly owing to the contractibility of these operators. It can also utilize the information of the previous iteration owing to the arbitrariness of the initial functions. These properties enable MBEM to be more efficient than EM. Moreover, MBEM resolves the drawback of BEM because MBEM does not calculate the inverse matrix. Therefore, MBEM can be more efficient than EM even when the size of problems is large. Our numerical experiments demonstrate that the convergence of MBEM is faster than that of EM regardless of the size of problems.
The paper is organized as follows: In Section 2, DEC-POMDP is formulated. In Section 3, the EM algorithm for DEC-POMDP, which was proposed in [16], is briefly reviewed. In Section 4, the forward and backward Bellman equations are derived, and the Bellman EM algorithm (BEM) is proposed. In Section 5, the forward and backward Bellman operators are defined, and the modified Bellman EM algorithm (MBEM) is proposed. In Section 6, EM, BEM, and MBEM are summarized and compared. In Section 7, the performances of EM, BEM, and MBEM are compared through numerical experiments. In Section 8, this paper is concluded, and future work is discussed.

2. DEC-POMDP

DEC-POMDP consists of an environment and N agents (Figure 1 and Figure 2a) [7,16]. $x_t \in \mathcal{X}$ is the state of the environment at time $t$. $y_t^i \in \mathcal{Y}^i$, $z_t^i \in \mathcal{Z}^i$, and $a_t^i \in \mathcal{A}^i$ are the observation, the memory, and the action available to agent $i \in \{1, \dots, N\}$, respectively. $\mathcal{X}$, $\mathcal{Y}^i$, $\mathcal{Z}^i$, and $\mathcal{A}^i$ are finite sets. $y_t := (y_t^1, \dots, y_t^N)$, $z_t := (z_t^1, \dots, z_t^N)$, and $a_t := (a_t^1, \dots, a_t^N)$ are the joint observation, the joint memory, and the joint action of the N agents, respectively.
The time evolution of the environmental state $x_t$ is given by the initial state probability $p(x_0)$ and the state transition probability $p(x_{t+1} \mid x_t, a_t)$. Thus, agents can control the environmental state $x_{t+1}$ by taking appropriate actions $a_t$. Agent $i$ cannot observe the environmental state $x_t$ and the joint action $a_{t-1}$ completely, and obtains the observation $y_t^i$ instead. Thus, the observation $y_t$ obeys the observation probability $p(y_t \mid x_t, a_{t-1})$. Agent $i$ updates its memory from $z_{t-1}^i$ to $z_t^i$ based on the observation $y_t^i$. Thus, the memory $z_t^i$ obeys the initial memory probability $\nu^i(z_0^i)$ and the memory transition probability $\lambda^i(z_t^i \mid z_{t-1}^i, y_t^i)$. Agent $i$ takes the action $a_t^i$ based on the memory $z_t^i$ by following the action probability $\pi^i(a_t^i \mid z_t^i)$. The reward function $r(x_t, a_t)$ defines the amount of reward obtained at each step depending on the state of the environment $x_t$ and the joint action $a_t$ taken by the agents.
The objective function in the planning of DEC-POMDP is given by the expected return, which is the expected discounted cumulative reward:
$$ J(\theta) = \mathbb{E}_{\theta}\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r(x_t, a_t) \right]. \tag{1} $$
$\theta := (\pi, \lambda, \nu)$ is the policy, where $\pi := (\pi^1, \dots, \pi^N)$, $\lambda := (\lambda^1, \dots, \lambda^N)$, and $\nu := (\nu^1, \dots, \nu^N)$. $\gamma \in (0,1)$ is the discount factor, which decreases the weight of the future reward. The closer $\gamma$ is to 1, the closer the weight of the future reward is to that of the current reward.
The planning of DEC-POMDP is to find the policy $\theta^{*}$ that maximizes the expected return $J(\theta)$ as follows:
$$ \theta^{*} := \arg\max_{\theta} J(\theta). \tag{2} $$
In other words, the planning of DEC-POMDP is to find how to take the action and how to update the memory for each agent to maximize the expected return.
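To make the ingredients above concrete, the following minimal sketch encodes a small two-agent DEC-POMDP as NumPy arrays. All sizes, variable names, and the random parameterization are illustrative assumptions (the benchmark problems used later are specified at the URL in Section 7); the sketch only fixes the shapes of $p(x_0)$, $p(x_{t+1}\mid x_t,a_t)$, $p(y_t\mid x_t,a_{t-1})$, $r(x_t,a_t)$, and the policy $\theta = (\pi,\lambda,\nu)$.

```python
import numpy as np

# Illustrative two-agent DEC-POMDP; all names and sizes are hypothetical.
rng = np.random.default_rng(0)
N = 2                                  # number of agents
nX, nY, nZ, nA = 3, 2, 2, 2            # |X|, |Y^i|, |Z^i|, |A^i|
gamma = 0.99                           # discount factor

def random_dist(shape, axis):
    """Random conditional distribution, normalized along `axis`."""
    p = rng.random(shape)
    return p / p.sum(axis=axis, keepdims=True)

p0 = random_dist((nX,), axis=0)                        # p(x_0)
P  = random_dist((nX, nX, nA, nA), axis=0)             # p(x' | x, a^1, a^2)
O  = random_dist((nY, nY, nX, nA, nA), axis=(0, 1))    # p(y^1, y^2 | x, a^1, a^2)
r  = rng.random((nX, nA, nA))                          # r(x, a^1, a^2)

# Policy θ = (π, λ, ν), one factor per agent.
pi  = [random_dist((nA, nZ), axis=0) for _ in range(N)]      # π^i(a^i | z^i)
lam = [random_dist((nZ, nZ, nY), axis=0) for _ in range(N)]  # λ^i(z'^i | z^i, y'^i)
nu  = [random_dist((nZ,), axis=0) for _ in range(N)]         # ν^i(z^i_0)
```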

3. EM Algorithm for DEC-POMDP

In this section, we explain the EM algorithm for DEC-POMDP, which was proposed in [16].

3.1. Control as Inference

In this subsection, we show that the planning of DEC-POMDP can be interpreted as the maximum likelihood estimation for a latent variable model (Figure 2b).
We introduce two auxiliary random variables: the time horizon $T \in \{0, 1, 2, \dots\}$ and the optimal variable $o \in \{0, 1\}$. These variables obey the following probabilities:
$$ p(T) = (1-\gamma)\gamma^{T}, \tag{3} $$
$$ p(o=1 \mid x_T, a_T) = \bar{r}(x_T, a_T) := \frac{r(x_T, a_T) - r_{\min}}{r_{\max} - r_{\min}} \tag{4} $$
where $r_{\max}$ and $r_{\min}$ are the maximum and the minimum value of the reward function $r(x,a)$, respectively. Thus, $\bar{r}(x,a) \in [0,1]$ is satisfied.
By introducing these variables, DEC-POMDP changes from Figure 2a to Figure 2b. While Figure 2a considers the infinite time horizon, Figure 2b considers the finite time horizon $T$, which obeys Equation (3). Moreover, while the reward $r_t := r(x_t, a_t)$ is generated at each time in Figure 2a, the optimal variable $o$ is generated only at the time horizon $T$ in Figure 2b.
Theorem 1
([16]). The expected return $J(\theta)$ in DEC-POMDP (Figure 2a) is linearly related to the likelihood $p(o=1;\theta)$ in the latent variable model (Figure 2b) as follows:
$$ J(\theta) = (1-\gamma)^{-1}\left[ (r_{\max} - r_{\min})\, p(o=1;\theta) + r_{\min} \right]. \tag{5} $$
Note that $o$ is the observable variable, and $x_{0:T}, y_{1:T}, z_{0:T}, a_{0:T}, T$ are the latent variables.
Proof. 
See Appendix A.1. □
Therefore, the planning of DEC-POMDP is equivalent to the maximum likelihood estimation for the latent variable model as follows:
$$ \theta^{*} = \arg\max_{\theta}\, p(o=1;\theta). \tag{6} $$
Intuitively, while the planning of DEC-POMDP is to find the policy which maximizes the reward, the maximum likelihood estimation for the latent variable model is to find the policy which maximizes the probability of the optimal variable. Since the probability of the optimal variable is proportional to the reward, the planning of DEC-POMDP is equivalent to the maximum likelihood estimation for the latent variable model.

3.2. EM Algorithm

Since the planning of DEC-POMDP can be interpreted as the maximum likelihood estimation for the latent variable model, it can be solved by the EM algorithm [16]. The EM algorithm is a standard algorithm for the maximum likelihood estimation of latent variable models, which iterates two steps, the E step and the M step [23].
In the E step, we calculate the Q function, which is defined as follows:
$$ Q(\theta; \theta_k) := \mathbb{E}_{\theta_k}\left[ \log p(o=1, x_{0:T}, y_{0:T}, z_{0:T}, a_{0:T}, T; \theta) \,\middle|\, o=1 \right] \tag{7} $$
where $\theta_k$ is the current estimator of the optimal policy.
In the M step, we update θ k to θ k + 1 by maximizing the Q function as follows:
$$ \theta_{k+1} := \arg\max_{\theta} Q(\theta; \theta_k). \tag{8} $$
Since each iteration between the E step and the M step monotonically increases the likelihood $p(o=1;\theta_k)$, we can find $\theta^{*}$ that locally maximizes the likelihood $p(o=1;\theta)$.

3.3. M Step

Proposition 1
([16]). In the EM algorithm for DEC-POMDP, Equation (8) can be calculated as follows:
$$ \pi_{k+1}^{i}(a^i \mid z^i) = \frac{\sum_{a^{-i}, z^{-i}} \pi_k(a \mid z) \sum_{x, x', z'} p(x', z' \mid x, z, a; \lambda_k)\, F(x, z; \theta_k) \left[ \bar{r}(x, a) + \gamma V(x', z'; \theta_k) \right]}{\sum_{a, z^{-i}} \pi_k(a \mid z) \sum_{x, x', z'} p(x', z' \mid x, z, a; \lambda_k)\, F(x, z; \theta_k) \left[ \bar{r}(x, a) + \gamma V(x', z'; \theta_k) \right]}, \tag{9} $$
$$ \lambda_{k+1}^{i}(z'^i \mid z^i, y'^i) = \frac{\sum_{z'^{-i}, z^{-i}, y'^{-i}} \lambda_k(z' \mid z, y') \sum_{x, x'} p(x', y' \mid x, z; \pi_k)\, F(x, z; \theta_k)\, V(x', z'; \theta_k)}{\sum_{z', z^{-i}, y'^{-i}} \lambda_k(z' \mid z, y') \sum_{x, x'} p(x', y' \mid x, z; \pi_k)\, F(x, z; \theta_k)\, V(x', z'; \theta_k)}, \tag{10} $$
$$ \nu_{k+1}^{i}(z^i) = \frac{\sum_{z^{-i}} \nu_k(z) \sum_{x} p_0(x)\, V(x, z; \theta_k)}{\sum_{z} \nu_k(z) \sum_{x} p_0(x)\, V(x, z; \theta_k)}. \tag{11} $$
$a^{-i} := (a^1, \dots, a^{i-1}, a^{i+1}, \dots, a^N)$. $y^{-i}$ and $z^{-i}$ are defined in the same way. $F(x, z; \theta)$ and $V(x, z; \theta)$ are defined as follows:
$$ F(x, z; \theta) := \sum_{t=0}^{\infty} \gamma^{t}\, p_t(x, z; \theta), \tag{12} $$
$$ V(x, z; \theta) := \sum_{t=0}^{\infty} \gamma^{t}\, p_0(o=1 \mid x, z, T=t; \theta) \tag{13} $$
where $p_t(x, z; \theta) := p(x_t = x, z_t = z; \theta)$ and $p_0(o=1 \mid x, z, T; \theta) := p(o=1 \mid x_0 = x, z_0 = z, T; \theta)$.
Proof. 
See Appendix A.2. □
$F(x, z; \theta)$ quantifies the frequency of the state $x$ and the memory $z$, which is called the frequency function in this paper. $V(x, z; \theta)$ quantifies the probability of $o=1$ when the initial state and memory are $x$ and $z$, respectively. Since the probability of $o=1$ is proportional to the reward, $V(x, z; \theta)$ is called the value function in this paper. Indeed, $V(x, z; \theta)$ corresponds to the value function of [33].

3.4. E Step

$F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ need to be obtained to calculate Equations (9)–(11). In [16], $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ are calculated by the forward–backward algorithm, which has been used in the EM algorithm for the hidden Markov model [23].
In [16], the forward probability $\alpha_t(x, z)$ and the backward probability $\beta_t(x, z)$ are defined as follows:
$$ \alpha_t(x, z; \theta_k) := p_t(x, z; \theta_k), \tag{14} $$
$$ \beta_t(x, z; \theta_k) := p_0(o=1 \mid x, z, T=t; \theta_k). \tag{15} $$
It is easy to calculate $\alpha_0(x, z; \theta_k)$ and $\beta_0(x, z; \theta_k)$ as follows:
$$ \alpha_0(x, z; \theta_k) = p_0(x, z; \nu_k) := p_0(x)\, \nu_k(z), \tag{16} $$
$$ \beta_0(x, z; \theta_k) = \bar{r}(x, z; \pi_k) := \sum_{a} \pi_k(a \mid z)\, \bar{r}(x, a). \tag{17} $$
Moreover, $\alpha_{t+1}(x, z; \theta_k)$ and $\beta_{t+1}(x, z; \theta_k)$ are easily calculated from $\alpha_t(x, z; \theta_k)$ and $\beta_t(x, z; \theta_k)$:
$$ \alpha_{t+1}(x', z'; \theta_k) = \sum_{x, z} p(x', z' \mid x, z; \theta_k)\, \alpha_t(x, z; \theta_k), \tag{18} $$
$$ \beta_{t+1}(x, z; \theta_k) = \sum_{x', z'} \beta_t(x', z'; \theta_k)\, p(x', z' \mid x, z; \theta_k) \tag{19} $$
where
$$ p(x', z' \mid x, z; \theta_k) = \sum_{y', a} \lambda_k(z' \mid z, y')\, p(y' \mid x', a)\, p(x' \mid x, a)\, \pi_k(a \mid z). \tag{20} $$
Equations (18) and (19) are called the forward and backward equations, respectively. Using Equations (16)–(19), $\alpha_t(x, z; \theta_k)$ and $\beta_t(x, z; \theta_k)$ can be efficiently calculated from $t=0$ to $t=\infty$, which is called the forward–backward algorithm [23].
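The joint kernel of Equation (20) is the only model-dependent quantity that the E step needs. A possible way to assemble it for the illustrative two-agent arrays sketched in Section 2 (again an assumption-laden sketch, not the authors' implementation) is:

```python
import numpy as np

def joint_kernel(P, O, pi, lam):
    """p(x', z'^1, z'^2 | x, z^1, z^2; θ) of Equation (20) for two agents.
    Index letters: u=x', x=x, (m,n)=z'^i, (s,t)=z^i, (p,q)=y'^i, (a,b)=a^i."""
    return np.einsum("msp,ntq,pquab,uxab,as,bt->umnxst",
                     lam[0], lam[1], O, P, pi[0], pi[1])

# Usage with the arrays from the earlier sketch:
# T_joint = joint_kernel(P, O, pi, lam)
# K = T_joint.reshape(nX * nZ * nZ, nX * nZ * nZ)   # matrix P(θ): K[i, j] = p(i | j)
# assert np.allclose(K.sum(axis=0), 1.0)            # each column is a distribution
```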
By calculating the forward–backward algorithm from $t=0$ to $t=\infty$, $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ can be obtained as follows [16]:
$$ F(x, z; \theta_k) = \sum_{t=0}^{\infty} \gamma^{t}\, \alpha_t(x, z; \theta_k), \tag{21} $$
$$ V(x, z; \theta_k) = \sum_{t=0}^{\infty} \gamma^{t}\, \beta_t(x, z; \theta_k). \tag{22} $$
However, $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ cannot be calculated exactly by this approach because it is practically impossible to run the forward–backward algorithm until $t=\infty$. Therefore, the forward–backward algorithm needs to be terminated at $t = T_{\max}$, where $T_{\max}$ is finite. In this case, $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ are approximated as follows:
$$ F(x, z; \theta_k) = \sum_{t=0}^{\infty} \gamma^{t}\, \alpha_t(x, z; \theta_k) \approx \sum_{t=0}^{T_{\max}} \gamma^{t}\, \alpha_t(x, z; \theta_k), \tag{23} $$
$$ V(x, z; \theta_k) = \sum_{t=0}^{\infty} \gamma^{t}\, \beta_t(x, z; \theta_k) \approx \sum_{t=0}^{T_{\max}} \gamma^{t}\, \beta_t(x, z; \theta_k). \tag{24} $$
$T_{\max}$ needs to be large enough to reduce the approximation errors. In the previous study, a heuristic termination condition was proposed as follows [16]:
$$ \gamma^{T_{\max}}\, p(o=1 \mid T_{\max}; \theta_k) \ll \sum_{T=0}^{T_{\max}-1} \gamma^{T}\, p(o=1 \mid T; \theta_k) \tag{25} $$
where $p(o=1 \mid T; \theta_k) = \sum_{x, z} \alpha_{T'}(x, z; \theta_k)\, \beta_{T''}(x, z; \theta_k)$ with $T = T' + T''$. However, the relation between $T_{\max}$ and the approximation errors is unclear in Equation (25). We propose a new termination condition to guarantee the approximation errors as follows:
Proposition 2.
We set an acceptable error bound $\varepsilon > 0$. If
$$ T_{\max} > \frac{\log\left[(1-\gamma)\varepsilon\right]}{\log \gamma} - 1 \tag{26} $$
is satisfied, then
$$ \left| F(x, z; \theta_k) - \sum_{t=0}^{T_{\max}} \gamma^{t}\, \alpha_t(x, z; \theta_k) \right| < \varepsilon, \tag{27} $$
$$ \left| V(x, z; \theta_k) - \sum_{t=0}^{T_{\max}} \gamma^{t}\, \beta_t(x, z; \theta_k) \right| < \varepsilon \tag{28} $$
are satisfied.
Proof. 
See Appendix A.3. □
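For instance, with $\gamma = 0.99$ and $\varepsilon = 0.1$ (the values used in the experiments of Section 7), the bound evaluates to
$$ T_{\max} > \frac{\log\left[(1-0.99)\times 0.1\right]}{\log 0.99} - 1 = \frac{\log 10^{-3}}{\log 0.99} - 1 \approx 686.3, $$
so the smallest admissible integer is $T_{\max} = 687$, i.e., on the order of $10^3$ forward–backward steps in every E step.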

3.5. Summary

In summary, the EM algorithm for DEC-POMDP is given by Algorithm 1. In the E step, we calculate $\alpha_t(x, z; \theta_k)$ and $\beta_t(x, z; \theta_k)$ from $t=0$ to $t=T_{\max}$ by the forward–backward algorithm. In the M step, we update $\theta_k$ to $\theta_{k+1}$ using Equations (9)–(11). The time complexities of the E step and the M step are $O((|\mathcal{X}||\mathcal{Z}|)^2 T_{\max})$ and $O((|\mathcal{X}||\mathcal{Z}|)^2 |\mathcal{Y}||\mathcal{A}|)$, respectively. Note that $\mathcal{A} := \prod_{i=1}^{N} \mathcal{A}^i$, and $\mathcal{Y}$ and $\mathcal{Z}$ are defined in the same way. The EM algorithm for DEC-POMDP is less efficient when the discount factor $\gamma$ is closer to 1 or the acceptable error bound $\varepsilon$ is smaller because $T_{\max}$ needs to be larger in these cases.
Algorithm 1 EM algorithm for DEC-POMDP
  • $k \leftarrow 0$, Initialize $\theta_k$.
  • $T_{\max} \leftarrow \lceil \log[(1-\gamma)\varepsilon] / \log\gamma - 1 \rceil$
  • while $\theta_k$ or $J(\theta_k)$ do not converge do
  •    Calculate $p(x', z' \mid x, z; \theta_k)$ by Equation (20).
  •    //—E step—//
  •    $\alpha_0(x, z; \theta_k) \leftarrow p_0(x, z; \nu_k)$
  •    $\beta_0(x, z; \theta_k) \leftarrow \bar{r}(x, z; \pi_k)$
  •    for $t = 1, 2, \dots, T_{\max}$ do
  •        $\alpha_t(x', z'; \theta_k) \leftarrow \sum_{x, z} p(x', z' \mid x, z; \theta_k)\, \alpha_{t-1}(x, z; \theta_k)$
  •        $\beta_t(x, z; \theta_k) \leftarrow \sum_{x', z'} \beta_{t-1}(x', z'; \theta_k)\, p(x', z' \mid x, z; \theta_k)$
  •    end for
  •    $F(x, z; \theta_k) \leftarrow \sum_{t=0}^{T_{\max}} \gamma^t \alpha_t(x, z; \theta_k)$
  •    $V(x, z; \theta_k) \leftarrow \sum_{t=0}^{T_{\max}} \gamma^t \beta_t(x, z; \theta_k)$
  •    //—M step—//
  •    Update $\theta_k$ to $\theta_{k+1}$ by Equations (9)–(11).
  •    $k \leftarrow k + 1$
  • end while
  • return $\theta_k$
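The E step of Algorithm 1 can be written compactly once the kernel of Equation (20) is flattened into a matrix $K$ with $K_{ij} = p((x,z)=i \mid (x,z)=j;\theta_k)$. The following sketch reuses the hypothetical arrays from Section 2 and is only an illustration, not the authors' implementation:

```python
import numpy as np

def em_e_step(K, alpha0, beta0, gamma, eps):
    """E step of Algorithm 1: forward-backward recursions (18)-(19) up to
    T_max of Proposition 2, accumulating F and V as in (23)-(24)."""
    T_max = int(np.ceil(np.log((1.0 - gamma) * eps) / np.log(gamma) - 1.0))
    alpha, beta = alpha0.copy(), beta0.copy()
    F, V = alpha0.copy(), beta0.copy()          # t = 0 terms
    for t in range(1, T_max + 1):
        alpha = K @ alpha                       # forward equation (18)
        beta = K.T @ beta                       # backward equation (19)
        F += gamma**t * alpha
        V += gamma**t * beta
    return F, V

# Usage with the illustrative arrays (K from joint_kernel):
# rbar   = (r - r.min()) / (r.max() - r.min())                       # r̄(x, a)
# alpha0 = np.einsum("x,s,t->xst", p0, nu[0], nu[1]).ravel()         # p_0(x, z; ν)
# beta0  = np.einsum("xab,as,bt->xst", rbar, pi[0], pi[1]).ravel()   # r̄(x, z; π)
# F, V = em_e_step(K, alpha0, beta0, gamma=0.99, eps=0.1)
```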

4. Bellman EM Algorithm

In the EM algorithm for DEC-POMDP, $\alpha_t(x, z; \theta_k)$ and $\beta_t(x, z; \theta_k)$ are calculated from $t=0$ to $t=T_{\max}$ to obtain $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$. However, $T_{\max}$ needs to be large to reduce the approximation errors of $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$, which impairs the computational efficiency of the EM algorithm for DEC-POMDP [32,33]. In this section, we calculate $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ directly, without calculating $\alpha_t(x, z; \theta_k)$ and $\beta_t(x, z; \theta_k)$, to resolve this drawback of EM.

4.1. Forward and Backward Bellman Equations

The following equations are useful to obtain $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ directly:
Theorem 2.
$F(x, z; \theta)$ and $V(x, z; \theta)$ satisfy the following equations:
$$ F(x, z; \theta) = p_0(x, z; \nu) + \gamma \sum_{x', z'} p(x, z \mid x', z'; \theta)\, F(x', z'; \theta), \tag{29} $$
$$ V(x, z; \theta) = \bar{r}(x, z; \pi) + \gamma \sum_{x', z'} p(x', z' \mid x, z; \theta)\, V(x', z'; \theta). \tag{30} $$
Equations (29) and (30) are called the forward Bellman equation and the backward Bellman equation, respectively.
Proof. 
See Appendix B.1. □
Note that the direction of time is different between Equations (29) and (30). In Equation (29), $x'$ and $z'$ are an earlier state and memory than $x$ and $z$, respectively. In Equation (30), $x'$ and $z'$ are a later state and memory than $x$ and $z$, respectively.
The backward Bellman Equation (30) corresponds to the traditional Bellman equation, which has been used in other algorithms for DEC-POMDP [11,12,13,14]. In contrast, the forward Bellman equation, which is introduced in this paper, is similar to that recently proposed in offline reinforcement learning [37,38,39].
Since the forward and backward Bellman equations are linear equations, they can be solved exactly as follows:
$$ \boldsymbol{F}(\theta) = (I - \gamma P(\theta))^{-1}\, \boldsymbol{p}(\nu), \tag{31} $$
$$ \boldsymbol{V}(\theta) = \left( (I - \gamma P(\theta))^{-1} \right)^{\mathsf{T}} \boldsymbol{r}(\pi) \tag{32} $$
where
$F_i(\theta) := F((x,z)=i;\theta)$, $\; V_i(\theta) := V((x,z)=i;\theta)$, $\; P_{ij}(\theta) := p((x,z)=i \mid (x,z)=j;\theta)$, $\; p_i(\nu) := p_0((x,z)=i;\nu)$, and $r_i(\pi) := \bar{r}((x,z)=i;\pi)$.
Therefore, we can obtain $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ from the forward and backward Bellman equations.
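As an illustration of Equations (31) and (32), the BEM E step reduces to two linear solves with the same matrix $I - \gamma P(\theta_k)$. The sketch below (same illustrative setting and flattening as before) uses a linear solver rather than forming the inverse explicitly, which is numerically preferable but has the same $O((|\mathcal{X}||\mathcal{Z}|)^3)$ cost discussed in Section 4.3:

```python
import numpy as np

def bem_e_step(K, alpha0, beta0, gamma):
    """E step of BEM: solve the forward and backward Bellman equations
    (29)-(30) exactly via the linear systems (31)-(32)."""
    A = np.eye(K.shape[0]) - gamma * K
    F = np.linalg.solve(A, alpha0)      # F(θ) = (I - γP(θ))^{-1} p(ν)
    V = np.linalg.solve(A.T, beta0)     # V(θ) = ((I - γP(θ))^{-1})^T r̄(π)
    return F, V
```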

4.2. Bellman EM Algorithm (BEM)

The forward–backward algorithm from $t=0$ to $t=T_{\max}$ in the EM algorithm for DEC-POMDP can be replaced by the forward and backward Bellman equations. In this paper, the EM algorithm for DEC-POMDP that uses the forward and backward Bellman equations instead of the forward–backward algorithm from $t=0$ to $t=T_{\max}$ is called the Bellman EM algorithm (BEM).

4.3. Comparison of EM and BEM

BEM is summarized as Algorithm 2. The M step in Algorithm 2 is exactly the same as that in Algorithm 1; only the E step is different. While the time complexity of the E step in EM is $O((|\mathcal{X}||\mathcal{Z}|)^2 T_{\max})$, that in BEM is $O((|\mathcal{X}||\mathcal{Z}|)^3)$.
Algorithm 2 Bellman EM algorithm (BEM)
  • $k \leftarrow 0$, Initialize $\theta_k$.
  • while $\theta_k$ or $J(\theta_k)$ do not converge do
  •    Calculate $p(x', z' \mid x, z; \theta_k)$ by Equation (20).
  •    //—E step—//
  •    $\boldsymbol{F}(\theta_k) \leftarrow (I - \gamma P(\theta_k))^{-1} \boldsymbol{p}(\nu_k)$
  •    $\boldsymbol{V}(\theta_k) \leftarrow ((I - \gamma P(\theta_k))^{-1})^{\mathsf{T}} \boldsymbol{r}(\pi_k)$
  •    //—M step—//
  •    Update $\theta_k$ to $\theta_{k+1}$ by Equations (9)–(11).
  •    $k \leftarrow k + 1$
  • end while
  • return $\theta_k$
BEM can calculate $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ exactly. Moreover, BEM can be more efficient than EM when the discount factor $\gamma$ is close to 1 or the acceptable error bound $\varepsilon$ is small because $T_{\max}$ needs to be large in these cases. However, when the size of the state space $|\mathcal{X}|$ or that of the joint memory space $|\mathcal{Z}|$ is large, BEM cannot always be more efficient than EM because BEM needs to calculate the inverse matrix $(I - \gamma P(\theta_k))^{-1}$. To circumvent this shortcoming, we propose a new algorithm, the modified Bellman EM algorithm (MBEM), to obtain $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ without calculating the inverse matrix.

5. Modified Bellman EM Algorithm

5.1. Forward and Backward Bellman Operators

We define the forward and backward Bellman operators as follows:
$$ A_{\theta} f(x, z) := p_0(x, z; \nu) + \gamma \sum_{x', z'} p(x, z \mid x', z'; \theta)\, f(x', z'), \tag{33} $$
$$ B_{\theta} v(x, z) := \bar{r}(x, z; \pi) + \gamma \sum_{x', z'} p(x', z' \mid x, z; \theta)\, v(x', z') \tag{34} $$
where $f, v : \mathcal{X} \times \mathcal{Z} \to \mathbb{R}$. From the forward and backward Bellman equations, $A_{\theta}$ and $B_{\theta}$ satisfy the following equations:
$$ F(x, z; \theta) = A_{\theta} F(x, z; \theta), \tag{35} $$
$$ V(x, z; \theta) = B_{\theta} V(x, z; \theta). \tag{36} $$
Thus, $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ are the fixed points of $A_{\theta}$ and $B_{\theta}$, respectively. $A_{\theta}$ and $B_{\theta}$ have the following useful property:
Proposition 3.
$A_{\theta}$ and $B_{\theta}$ are contractive operators as follows:
$$ \| A_{\theta} f - A_{\theta} g \|_{1} \le \gamma \| f - g \|_{1}, \tag{37} $$
$$ \| B_{\theta} u - B_{\theta} v \|_{\infty} \le \gamma \| u - v \|_{\infty} \tag{38} $$
where $f, g, u, v : \mathcal{X} \times \mathcal{Z} \to \mathbb{R}$.
Proof. 
See Appendix C.1. □
Note that the norm is different between Equations (37) and (38). This is caused by the difference in the time direction between $A_{\theta}$ and $B_{\theta}$. While $x'$ and $z'$ are an earlier state and memory than $x$ and $z$, respectively, in the forward Bellman operator $A_{\theta}$, $x'$ and $z'$ are a later state and memory than $x$ and $z$, respectively, in the backward Bellman operator $B_{\theta}$.
We obtain $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ using Equations (35)–(38), as follows:
Proposition 4.
$$ \lim_{L \to \infty} A_{\theta}^{L} f(x, z) = F(x, z; \theta), \tag{39} $$
$$ \lim_{L \to \infty} B_{\theta}^{L} v(x, z) = V(x, z; \theta) \tag{40} $$
where $f, v : \mathcal{X} \times \mathcal{Z} \to \mathbb{R}$.
Proof. 
See Appendix C.2. □
Therefore, $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ can be calculated by applying the forward and backward Bellman operators, $A_{\theta}$ and $B_{\theta}$, to arbitrary initial functions, $f(x, z)$ and $v(x, z)$, infinitely many times.
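In the flattened representation used above, each operator is a single matrix–vector product. A minimal sketch (assuming the same illustrative arrays as before) together with a numerical check of the fixed-point property (35)–(36) might look as follows:

```python
import numpy as np

def A_op(K, alpha0, gamma, f):
    """Forward Bellman operator (33): (A_θ f)(x,z) = p_0(x,z;ν) + γ Σ p(x,z|x',z';θ) f(x',z')."""
    return alpha0 + gamma * (K @ f)

def B_op(K, beta0, gamma, v):
    """Backward Bellman operator (34): (B_θ v)(x,z) = r̄(x,z;π) + γ Σ p(x',z'|x,z;θ) v(x',z')."""
    return beta0 + gamma * (K.T @ v)

# Fixed-point check against the BEM solution:
# F, V = bem_e_step(K, alpha0, beta0, gamma)
# assert np.allclose(A_op(K, alpha0, gamma, F), F)
# assert np.allclose(B_op(K, beta0, gamma, V), V)
```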

5.2. Modified Bellman EM Algorithm (MBEM)

The calculation of the forward and backward Bellman equations in BEM can be replaced by that of the forward and backward Bellman operators. In this paper, the variant of BEM that uses the forward and backward Bellman operators instead of the forward and backward Bellman equations is called the modified Bellman EM algorithm (MBEM).

5.3. Comparison of EM, BEM, and MBEM

Since MBEM does not need the inverse matrix, MBEM can be more efficient than BEM when the size of the state space $|\mathcal{X}|$ and that of the joint memory space $|\mathcal{Z}|$ are large. Thus, MBEM resolves the drawback of BEM.
On the other hand, MBEM has the same problem as EM. MBEM calculates $A_{\theta_k}$ and $B_{\theta_k}$ infinitely many times to obtain $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$. However, since it is practically impossible to apply $A_{\theta_k}$ and $B_{\theta_k}$ infinitely many times, the calculation of $A_{\theta_k}$ and $B_{\theta_k}$ needs to be terminated after $L_{\max}$ applications, where $L_{\max}$ is finite. In this case, $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ are approximated as follows:
$$ F(x, z; \theta_k) = \lim_{L \to \infty} A_{\theta_k}^{L} f(x, z) \approx A_{\theta_k}^{L_{\max}} f(x, z), \tag{41} $$
$$ V(x, z; \theta_k) = \lim_{L \to \infty} B_{\theta_k}^{L} v(x, z) \approx B_{\theta_k}^{L_{\max}} v(x, z). \tag{42} $$
$L_{\max}$ needs to be large enough to reduce the approximation errors of $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$, which impairs the computational efficiency of MBEM. Thus, MBEM can potentially suffer from the same problem as EM. However, we can theoretically show that MBEM is more efficient than EM by comparing $T_{\max}$ and $L_{\max}$ under the condition that the approximation errors of $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ are smaller than the acceptable error bound $\varepsilon$.
When $f(x, z) = p_0(x, z; \nu_k)$ and $v(x, z) = \bar{r}(x, z; \pi_k)$, Equations (41) and (42) can be calculated as follows:
$$ A_{\theta_k}^{L_{\max}} f(x, z) = \sum_{t=0}^{L_{\max}} \gamma^{t}\, p_t(x, z; \theta_k), \tag{43} $$
$$ B_{\theta_k}^{L_{\max}} v(x, z) = \sum_{t=0}^{L_{\max}} \gamma^{t}\, p_0(o=1 \mid x, z, T=t; \theta_k), \tag{44} $$
which are the same as Equations (23) and (24), respectively. Thus, in this case, $L_{\max} = T_{\max}$, and the computational efficiency of MBEM is the same as that of EM. However, MBEM has two useful properties that EM does not have, and therefore MBEM can be more efficient than EM. In the following, we explain these properties in more detail.
The first property of MBEM is the contractibility of the forward and backward Bellman operators, $A_{\theta_k}$ and $B_{\theta_k}$. From the contractibility of the Bellman operators, $L_{\max}$ is determined adaptively as follows:
Proposition 5.
We set an acceptable error bound $\varepsilon > 0$. If
$$ \| A_{\theta_k}^{L} f - A_{\theta_k}^{L-1} f \|_{1} < \frac{1-\gamma}{\gamma} \varepsilon, \tag{45} $$
$$ \| B_{\theta_k}^{L} v - B_{\theta_k}^{L-1} v \|_{\infty} < \frac{1-\gamma}{\gamma} \varepsilon \tag{46} $$
are satisfied, then
$$ \left| F(x, z; \theta_k) - A_{\theta_k}^{L} f(x, z) \right| < \varepsilon, \tag{47} $$
$$ \left| V(x, z; \theta_k) - B_{\theta_k}^{L} v(x, z) \right| < \varepsilon \tag{48} $$
are satisfied.
Proof. 
See Appendix C.3. □
$T_{\max}$ is always constant for every E step: $T_{\max} = \lceil \log[(1-\gamma)\varepsilon]/\log\gamma - 1 \rceil$. Thus, even if the approximation errors of $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ are already smaller than $\varepsilon$ at some $t \le T_{\max}$, the forward–backward algorithm cannot be terminated until $t = T_{\max}$ because the approximation errors of $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ cannot be evaluated within the forward–backward algorithm.
$L_{\max}$ is adaptively determined depending on $A_{\theta_k}^{L} f(x, z)$ and $B_{\theta_k}^{L} v(x, z)$. Thus, if $A_{\theta_k}^{L} f(x, z)$ and $B_{\theta_k}^{L} v(x, z)$ are close enough to $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$, the E step of MBEM can be terminated early because the approximation errors of $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ can be evaluated owing to the contractibility of the forward and backward Bellman operators.
Indeed, when $f(x, z) = p_0(x, z; \nu_k)$ and $v(x, z) = \bar{r}(x, z; \pi_k)$, MBEM is more efficient than EM as follows:
Proposition 6.
When $f(x, z) = p_0(x, z; \nu_k)$ and $v(x, z) = \bar{r}(x, z; \pi_k)$, $L_{\max} \le T_{\max}$ is satisfied.
Proof. 
See Appendix C.4. □
The second property of MBEM is the arbitrariness of the initial functions, $f(x, z)$ and $v(x, z)$. In MBEM, the initial functions, $f(x, z)$ and $v(x, z)$, converge to the fixed points, $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$, by applying the forward and backward Bellman operators, $A_{\theta_k}$ and $B_{\theta_k}$, $L_{\max}$ times. Therefore, if the initial functions, $f(x, z)$ and $v(x, z)$, are close to the fixed points, $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$, $L_{\max}$ can be reduced. The question, then, is what kind of initial functions are close to the fixed points.
We suggest setting $F(x, z; \theta_{k-1})$ and $V(x, z; \theta_{k-1})$ as the initial functions, $f(x, z)$ and $v(x, z)$. In most cases, $\theta_{k-1}$ is close to $\theta_k$. When $\theta_{k-1}$ is close to $\theta_k$, $F(x, z; \theta_{k-1})$ and $V(x, z; \theta_{k-1})$ are expected to be close to $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$. Therefore, by setting $F(x, z; \theta_{k-1})$ and $V(x, z; \theta_{k-1})$ as the initial functions $f(x, z)$ and $v(x, z)$, respectively, $L_{\max}$ is expected to be reduced. Hence, MBEM can be more efficient than EM because MBEM can utilize the results of the previous iteration, $F(x, z; \theta_{k-1})$ and $V(x, z; \theta_{k-1})$, through this arbitrariness of the initial functions.
However, it is unclear how much smaller than $T_{\max}$ $L_{\max}$ becomes when $F(x, z; \theta_{k-1})$ and $V(x, z; \theta_{k-1})$ are set as the initial functions $f(x, z)$ and $v(x, z)$. Therefore, numerical evaluations are needed. Moreover, in the first iteration, the results of a previous iteration, $F(x, z; \theta_{k-1})$ and $V(x, z; \theta_{k-1})$, are not available. Therefore, in the first iteration, we set $f(x, z) = p_0(x, z; \nu_k)$ and $v(x, z) = \bar{r}(x, z; \pi_k)$ because these initial functions guarantee $L_{\max} \le T_{\max}$ by Proposition 6.
MBEM is summarized as Algorithm 3. The M step of Algorithm 3 is exactly the same as that of Algorithms 1 and 2; only the E step is different. The time complexity of the E step in MBEM is $O((|\mathcal{X}||\mathcal{Z}|)^2 L_{\max})$. MBEM does not use the inverse matrix, which resolves the drawback of BEM. Moreover, MBEM can reduce $L_{\max}$ owing to the contractibility of the Bellman operators and the arbitrariness of the initial functions, which can resolve the drawback of EM.
Algorithm 3 Modified Bellman EM algorithm (MBEM)
  • $k \leftarrow 0$, Initialize $\theta_k$.
  • $F(x, z; \theta_{k-1}) \leftarrow p_0(x, z; \nu_k)$
  • $V(x, z; \theta_{k-1}) \leftarrow \bar{r}(x, z; \pi_k)$
  • while $\theta_k$ or $J(\theta_k)$ do not converge do
  •    Calculate $p(x', z' \mid x, z; \theta_k)$ by Equation (20).
  •    //—E step—//
  •    $F_0(x, z) \leftarrow F(x, z; \theta_{k-1})$
  •    $V_0(x, z) \leftarrow V(x, z; \theta_{k-1})$
  •    $L \leftarrow 0$
  •    repeat
  •        $F_{L+1}(x, z) \leftarrow A_{\theta_k} F_L(x, z)$
  •        $V_{L+1}(x, z) \leftarrow B_{\theta_k} V_L(x, z)$
  •        $L \leftarrow L + 1$
  •    until $\max\{ \| F_L - F_{L-1} \|_1, \| V_L - V_{L-1} \|_{\infty} \} < \frac{1-\gamma}{\gamma}\varepsilon$
  •    //—M step—//
  •    Update $\theta_k$ to $\theta_{k+1}$ by Equations (9)–(11).
  •    $k \leftarrow k + 1$
  • end while
  • return $\theta_k$
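A compact sketch of the E step of Algorithm 3 in the same illustrative setting is given below; it applies the two operators until the contraction-based test of Proposition 5 is met, and accepts warm-start guesses so that $F(x, z; \theta_{k-1})$ and $V(x, z; \theta_{k-1})$ from the previous iteration can be reused (hypothetical code, not the authors' implementation):

```python
import numpy as np

def mbem_e_step(K, alpha0, beta0, gamma, eps, F_init, V_init):
    """E step of Algorithm 3 (MBEM): iterate the forward/backward Bellman
    operators from warm-start guesses until
    max{‖F_L - F_{L-1}‖_1, ‖V_L - V_{L-1}‖_∞} < (1-γ)ε/γ (Proposition 5)."""
    tol = (1.0 - gamma) / gamma * eps
    F, V = F_init.copy(), V_init.copy()
    while True:
        F_next = alpha0 + gamma * (K @ F)        # one application of A_θk
        V_next = beta0 + gamma * (K.T @ V)       # one application of B_θk
        gap = max(np.abs(F_next - F).sum(),      # ‖F_L - F_{L-1}‖_1
                  np.abs(V_next - V).max())      # ‖V_L - V_{L-1}‖_∞
        F, V = F_next, V_next
        if gap < tol:
            return F, V

# First iteration: F_init = alpha0, V_init = beta0 (as in Algorithm 3);
# afterwards, pass the F, V returned by the previous E step as warm starts.
```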

6. Summary of EM, BEM, and MBEM

EM, BEM, and MBEM are summarized in Table 1. The M step is exactly the same among these algorithms; only the E step is different:
  • EM obtains $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ by calculating the forward–backward algorithm up to $T_{\max}$. $T_{\max}$ needs to be large enough to reduce the approximation errors of $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$, which impairs the computational efficiency.
  • BEM obtains $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ by solving the forward and backward Bellman equations. BEM can be more efficient than EM because BEM calculates the forward and backward Bellman equations instead of the forward–backward algorithm up to $T_{\max}$. However, BEM cannot always be more efficient than EM when the size of the state space $|\mathcal{X}|$ or that of the memory space $|\mathcal{Z}|$ is large because BEM calculates an inverse matrix to solve the forward and backward Bellman equations.
  • MBEM obtains $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ by applying the forward and backward Bellman operators, $A_{\theta_k}$ and $B_{\theta_k}$, to the initial functions, $f(x, z)$ and $v(x, z)$, $L_{\max}$ times. Since MBEM does not need to calculate the inverse matrix, MBEM may be more efficient than EM even when the size of problems is large, which resolves the drawback of BEM. Although $L_{\max}$ needs to be large enough to reduce the approximation errors of $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$, which is the same problem as EM, MBEM can evaluate the approximation errors more tightly owing to the contractibility of $A_{\theta_k}$ and $B_{\theta_k}$, and can utilize the results of the previous iteration, $F(x, z; \theta_{k-1})$ and $V(x, z; \theta_{k-1})$, as the initial functions, $f(x, z)$ and $v(x, z)$. These properties enable MBEM to be more efficient than EM.

7. Numerical Experiment

In this section, we compare the performance of EM, BEM, and MBEM using numerical experiments on four DEC-POMDP benchmarks: broadcast [40], recycling robot [15], wireless network [27], and box pushing [41]. Detailed settings such as the state transition probability, the observation probability, and the reward function are described at http://masplan.org/problem_domains, accessed on 22 June 2020. We implement EM, BEM, and MBEM in C++.
Figure 3 shows the experimental results. In all the experiments, we set the number of agents $N = 2$, the discount factor $\gamma = 0.99$, the upper bound of the approximation error $\varepsilon = 0.1$, and the size of the memory available to the $i$-th agent $|\mathcal{Z}^i| = 2$. The sizes of the state space $|\mathcal{X}|$, the action space $|\mathcal{A}^i|$, and the observation space $|\mathcal{Y}^i|$ differ between problems and are shown on each panel. We note that the state size $|\mathcal{X}|$ is small in the broadcast (a,e,i) and the recycling robot (b,f,j) problems, whereas it is large in the wireless network (c,g,k) and the box pushing (d,h,l) problems.
While the expected return $J(\theta_k)$ as a function of the computational time differs between the algorithms (a–d), the expected return as a function of the iteration $k$ is almost the same (e–h). This is because the M step of these algorithms is exactly the same. Therefore, the difference in total computational time is caused by the computational time of the E step.
The convergence of BEM is faster than that of EM in the small state size problems, i.e., Figure 3a,b. This is because EM calculates the forward–backward algorithm from $t=0$ to $t=T_{\max}$, where $T_{\max}$ is large. On the other hand, the convergence of BEM is slower than that of EM in the large state size problems, i.e., Figure 3c,d. This is because BEM calculates the inverse matrix.
The convergence of MBEM is faster than that of EM in all the experiments in Figure 3a–d. This is because $L_{\max}$ is smaller than $T_{\max}$, as shown in Figure 3i–l. While EM requires about 1000 calculations of the forward–backward algorithm to guarantee that the approximation errors of $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ are smaller than $\varepsilon$, MBEM requires only about 10 calculations of the forward and backward Bellman operators. Thus, MBEM is more efficient than EM. The reason why $L_{\max}$ is smaller than $T_{\max}$ is that MBEM can utilize the results of the previous iteration, $F(x, z; \theta_{k-1})$ and $V(x, z; \theta_{k-1})$, as the initial functions, $f(x, z)$ and $v(x, z)$. This can be seen from $L_{\max}$ and $T_{\max}$ in the first iteration. In the first iteration $k=0$, $L_{\max}$ is almost the same as $T_{\max}$ because $F(x, z; \theta_{k-1})$ and $V(x, z; \theta_{k-1})$ cannot be used as the initial functions $f(x, z)$ and $v(x, z)$. In the subsequent iterations $k \ge 1$, $L_{\max}$ is much smaller than $T_{\max}$ because MBEM can utilize the results of the previous iteration, $F(x, z; \theta_{k-1})$ and $V(x, z; \theta_{k-1})$, as the initial functions $f(x, z)$ and $v(x, z)$.

8. Conclusions and Future Works

In this paper, we propose the Bellman EM algorithm (BEM) and the modified Bellman EM algorithm (MBEM) by introducing the forward and backward Bellman equations into the EM algorithm for DEC-POMDP. BEM can be more efficient than EM because BEM does not calculate the forward–backward algorithm up to the infinite horizon. However, BEM cannot always be more efficient than EM when the size of the state or that of the memory is large because BEM calculates the inverse matrix. MBEM can be more efficient than EM regardless of the size of problems because MBEM does not calculate the inverse matrix. Although MBEM needs to calculate the forward and backward Bellman operators infinitely many times, MBEM can evaluate the approximation errors more tightly owing to the contractibility of these operators, and can utilize the results of the previous iteration owing to the arbitrariness of the initial functions, which enables MBEM to be more efficient than EM. We verified this theoretical evaluation through numerical experiments, which demonstrate that the convergence of MBEM is much faster than that of EM regardless of the size of problems.
Our algorithms still leave room for further improvements to deal with real-world problems, which often have a large discrete or continuous state space. Some of these may be addressed by advanced techniques based on the Bellman equation [1,2,3,4]. For example, MBEM may be accelerated by the Gauss–Seidel method [2]. The convergence rate of the E step of MBEM is given by the discount factor $\gamma$, which is the same as that of EM. However, the Gauss–Seidel method modifies the Bellman operators, which allows the convergence rate of MBEM to be smaller than the discount factor $\gamma$. Therefore, even if $F(x, z; \theta_{k-1})$ and $V(x, z; \theta_{k-1})$ are not close to $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$, MBEM may still be more efficient than EM with the Gauss–Seidel method. Moreover, in DEC-POMDP with a large discrete or continuous state space, $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ cannot be expressed exactly because doing so requires a large space complexity. This problem may be resolved by value function approximation [34,35,36], which approximates $F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ using parametric models such as neural networks. The remaining problem is how to find the optimal approximation parameters, which value function approximation finds via the Bellman equation. Therefore, these potential extensions of our algorithms may lead to applications to real-world DEC-POMDP problems.
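As a rough illustration of the Gauss–Seidel idea mentioned above (our own sketch of the standard in-place variant, not an algorithm from this paper), each component of $V$ can be overwritten during a sweep so that later components already see the updated values:

```python
import numpy as np

def gauss_seidel_backward_sweep(K, beta0, gamma, V):
    """One in-place Gauss-Seidel sweep of the backward Bellman operator (34):
    components updated earlier in the sweep are reused immediately, which is
    what can push the effective convergence rate below γ."""
    for i in range(V.shape[0]):
        V[i] = beta0[i] + gamma * (K[:, i] @ V)   # Σ_j p(j | i) V[j], partly updated
    return V
```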

Author Contributions

Conceptualization, T.T., T.J.K.; Formal analysis, T.T., T.J.K.; Funding acquisition, T.J.K.; Writing—original draft, T.T., T.J.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by JSPS KAKENHI Grant Number 19H05799 and by JST CREST Grant Number JPMJCR2011.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank the lab members for a fruitful discussion.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof in Section 3

Appendix A.1. Proof of Theorem 1

$J(\theta)$ can be calculated as follows:
$$ \begin{aligned} J(\theta) &:= \mathbb{E}_{\theta}\left[ \sum_{t=0}^{\infty} \gamma^{t} r(x_t, a_t) \right] = \sum_{x_{0:\infty}, a_{0:\infty}} p(x_{0:\infty}, a_{0:\infty}; \theta) \sum_{t=0}^{\infty} \gamma^{t} r(x_t, a_t) \\ &= \sum_{t=0}^{\infty} \gamma^{t} \sum_{x_t, a_t} p(x_t, a_t; \theta)\, r(x_t, a_t) \\ &= (1-\gamma)^{-1} \sum_{T=0}^{\infty} p(T) \sum_{x_T, a_T} p(x_T, a_T; \theta) \left[ (r_{\max} - r_{\min})\, p(o=1 \mid x_T, a_T) + r_{\min} \right] \\ &= (1-\gamma)^{-1} \left[ (r_{\max} - r_{\min})\, p(o=1; \theta) + r_{\min} \right] \end{aligned} \tag{A1} $$
where $x_{0:\infty} := \{x_0, \dots, x_\infty\}$ and $a_{0:\infty} := \{a_0, \dots, a_\infty\}$.

Appendix A.2. Proof of Proposition 1

In order to prove Proposition 1, we calculate $Q(\theta; \theta_k)$. It can be calculated as follows:
$$ \begin{aligned} Q(\theta; \theta_k) &:= \mathbb{E}_{\theta_k}\left[ \log p(o=1, x_{0:T}, y_{0:T}, z_{0:T}, a_{0:T}, T; \theta) \,\middle|\, o=1 \right] \\ &= \mathbb{E}_{\theta_k}\left[ \sum_{i=1}^{N} \left\{ \sum_{t=0}^{T} \log \pi^i(a_t^i \mid z_t^i) + \sum_{t=1}^{T} \log \lambda^i(z_t^i \mid z_{t-1}^i, y_t^i) + \log \nu^i(z_0^i) \right\} \,\middle|\, o=1 \right] + C \\ &= \sum_{i=1}^{N} \left[ Q(\pi^i; \theta_k) + Q(\lambda^i; \theta_k) + Q(\nu^i; \theta_k) \right] + C, \end{aligned} \tag{A2} $$
where
$$ Q(\pi^i; \theta_k) := \mathbb{E}_{\theta_k}\left[ \sum_{t=0}^{T} \log \pi^i(a_t^i \mid z_t^i) \,\middle|\, o=1 \right], \tag{A3} $$
$$ Q(\lambda^i; \theta_k) := \mathbb{E}_{\theta_k}\left[ \sum_{t=1}^{T} \log \lambda^i(z_t^i \mid z_{t-1}^i, y_t^i) \,\middle|\, o=1 \right], \tag{A4} $$
$$ Q(\nu^i; \theta_k) := \mathbb{E}_{\theta_k}\left[ \log \nu^i(z_0^i) \,\middle|\, o=1 \right]. \tag{A5} $$
C is a constant independent of θ .
Proposition A1
([16]). $Q(\pi^i; \theta_k)$, $Q(\lambda^i; \theta_k)$, and $Q(\nu^i; \theta_k)$ are calculated as follows:
$$ Q(\pi^i; \theta_k) \propto \sum_{a, z} \pi_k(a \mid z) \log \pi^i(a^i \mid z^i) \sum_{x, x', z'} p(x', z' \mid x, z, a; \lambda_k)\, F(x, z; \theta_k) \left[ \bar{r}(x, a) + \gamma V(x', z'; \theta_k) \right], \tag{A6} $$
$$ Q(\lambda^i; \theta_k) \propto \sum_{z, z', y'} \lambda_k(z' \mid z, y') \log \lambda^i(z'^i \mid z^i, y'^i) \sum_{x, x'} p(x', y' \mid x, z; \pi_k)\, F(x, z; \theta_k)\, V(x', z'; \theta_k), \tag{A7} $$
$$ Q(\nu^i; \theta_k) \propto \sum_{z} p_0(z; \nu_k) \log \nu^i(z^i) \sum_{x} p_0(x)\, V(x, z; \theta_k). \tag{A8} $$
$F(x, z; \theta_k)$ and $V(x, z; \theta_k)$ are defined by Equations (12) and (13).
Proof. 
Firstly, we prove Equation (A6). $Q(\pi^i; \theta_k)$ can be calculated as follows:
$$ \begin{aligned} Q(\pi^i; \theta_k) &:= \mathbb{E}_{\theta_k}\left[ \sum_{t=0}^{T} \log \pi^i(a_t^i \mid z_t^i) \,\middle|\, o=1 \right] \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{T=0}^{\infty} \sum_{x_{0:T}, y_{0:T}, z_{0:T}, a_{0:T}} p(o=1, x_{0:T}, y_{0:T}, z_{0:T}, a_{0:T}, T; \theta_k) \sum_{t=0}^{T} \log \pi^i(a_t^i \mid z_t^i) \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{T=0}^{\infty} \sum_{t=0}^{T} \sum_{x_t, z_t, a_t} p(o=1, x_t, z_t, a_t, T; \theta_k) \log \pi^i(a_t^i \mid z_t^i) \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{T=0}^{\infty} \sum_{t=0}^{T} \sum_{x_t, z_t, a_t} p(T)\, p_t(x_t, z_t; \theta_k)\, \pi_k(a_t \mid z_t)\, p_t(o=1 \mid x_t, z_t, a_t, T; \theta_k) \log \pi^i(a_t^i \mid z_t^i) \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{T=0}^{\infty} \sum_{t=0}^{T} \sum_{x, z, a} p(T)\, p_t(x, z; \theta_k)\, \pi_k(a \mid z)\, p_t(o=1 \mid x, z, a, T; \theta_k) \log \pi^i(a^i \mid z^i) \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{x, z, a} \pi_k(a \mid z) \log \pi^i(a^i \mid z^i) \sum_{T=0}^{\infty} p(T) \sum_{t=0}^{T} p_t(x, z; \theta_k)\, p_t(o=1 \mid x, z, a, T; \theta_k). \end{aligned} \tag{A9} $$
Since $\sum_{T=0}^{\infty}\sum_{t=0}^{T} = \sum_{t=0}^{\infty}\sum_{T=t}^{\infty}$, we have
$$ \begin{aligned} Q(\pi^i; \theta_k) &= \frac{1}{p(o=1; \theta_k)} \sum_{x, z, a} \pi_k(a \mid z) \log \pi^i(a^i \mid z^i) \sum_{t=0}^{\infty} p_t(x, z; \theta_k) \sum_{T=t}^{\infty} p(T)\, p_t(o=1 \mid x, z, a, T; \theta_k) \\ &= \frac{1-\gamma}{p(o=1; \theta_k)} \sum_{x, z, a} \pi_k(a \mid z) \log \pi^i(a^i \mid z^i) \sum_{t=0}^{\infty} p_t(x, z; \theta_k) \sum_{T=t}^{\infty} \gamma^{T} p_t(o=1 \mid x, z, a, T; \theta_k). \end{aligned} \tag{A10} $$
Since $p_t(o=1 \mid x, z, a, T; \theta_k) = p_0(o=1 \mid x, z, a, T-t; \theta_k)$,
$$ \begin{aligned} Q(\pi^i; \theta_k) &= \frac{1-\gamma}{p(o=1; \theta_k)} \sum_{x, z, a} \pi_k(a \mid z) \log \pi^i(a^i \mid z^i) \sum_{t=0}^{\infty} p_t(x, z; \theta_k) \sum_{T=t}^{\infty} \gamma^{T} p_0(o=1 \mid x, z, a, T-t; \theta_k) \\ &= \frac{1-\gamma}{p(o=1; \theta_k)} \sum_{x, z, a} \pi_k(a \mid z) \log \pi^i(a^i \mid z^i) \sum_{t=0}^{\infty} \gamma^{t} p_t(x, z; \theta_k) \sum_{T=0}^{\infty} \gamma^{T} p_0(o=1 \mid x, z, a, T; \theta_k) \\ &= \frac{1-\gamma}{p(o=1; \theta_k)} \sum_{x, z, a} \pi_k(a \mid z) \log \pi^i(a^i \mid z^i)\, F(x, z; \theta_k) \sum_{T=0}^{\infty} \gamma^{T} p_0(o=1 \mid x, z, a, T; \theta_k). \end{aligned} \tag{A11} $$
$\sum_{T=0}^{\infty} \gamma^{T} p_0(o=1 \mid x, z, a, T; \theta_k)$ can be calculated as follows:
$$ \begin{aligned} \sum_{T=0}^{\infty} \gamma^{T} p_0(o=1 \mid x, z, a, T; \theta_k) &= \bar{r}(x, a) + \sum_{T=1}^{\infty} \gamma^{T} p_0(o=1 \mid x, z, a, T; \theta_k) \\ &= \bar{r}(x, a) + \sum_{T=1}^{\infty} \gamma^{T} \sum_{x', z'} p_1(o=1 \mid x', z', T; \theta_k)\, p(x', z' \mid x, z, a; \lambda_k) \\ &= \bar{r}(x, a) + \gamma \sum_{x', z'} p(x', z' \mid x, z, a; \lambda_k) \sum_{T=0}^{\infty} \gamma^{T} p_0(o=1 \mid x', z', T; \theta_k) \\ &= \bar{r}(x, a) + \gamma \sum_{x', z'} p(x', z' \mid x, z, a; \lambda_k)\, V(x', z'; \theta_k). \end{aligned} \tag{A12} $$
From Equations (A11) and (A12), we obtain
$$ Q(\pi^i; \theta_k) = \frac{1-\gamma}{p(o=1; \theta_k)} \sum_{a, z} \pi_k(a \mid z) \log \pi^i(a^i \mid z^i) \sum_{x, x', z'} p(x', z' \mid x, z, a; \lambda_k)\, F(x, z; \theta_k) \left[ \bar{r}(x, a) + \gamma V(x', z'; \theta_k) \right]. \tag{A13} $$
Therefore, Equation (A6) is proved.
Secondly, we prove Equation (A7). $Q(\lambda^i; \theta_k)$ can be calculated as follows:
$$ \begin{aligned} Q(\lambda^i; \theta_k) &:= \mathbb{E}_{\theta_k}\left[ \sum_{t=1}^{T} \log \lambda^i(z_t^i \mid z_{t-1}^i, y_t^i) \,\middle|\, o=1 \right] \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{T=1}^{\infty} \sum_{x_{0:T}, y_{0:T}, z_{0:T}, a_{0:T}} p(o=1, x_{0:T}, y_{0:T}, z_{0:T}, a_{0:T}, T; \theta_k) \sum_{t=1}^{T} \log \lambda^i(z_t^i \mid z_{t-1}^i, y_t^i) \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{T=1}^{\infty} \sum_{t=1}^{T} \sum_{x_{t-1:t}, y_t, z_{t-1:t}} p(o=1, x_{t-1:t}, y_t, z_{t-1:t}, T; \theta_k) \log \lambda^i(z_t^i \mid z_{t-1}^i, y_t^i) \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{T=1}^{\infty} \sum_{t=1}^{T} \sum_{x_{t-1:t}, y_t, z_{t-1:t}} p(T)\, p_{t-1}(x_{t-1}, z_{t-1}; \theta_k)\, p(x_t, y_t \mid x_{t-1}, z_{t-1}; \pi_k) \\ &\qquad \times \lambda_k(z_t \mid z_{t-1}, y_t)\, p_t(o=1 \mid x_t, z_t, T; \theta_k) \log \lambda^i(z_t^i \mid z_{t-1}^i, y_t^i) \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{x, x', y', z, z'} \lambda_k(z' \mid z, y') \log \lambda^i(z'^i \mid z^i, y'^i)\, p(x', y' \mid x, z; \pi_k) \\ &\qquad \times \sum_{T=1}^{\infty} p(T) \sum_{t=1}^{T} p_{t-1}(x, z; \theta_k)\, p_t(o=1 \mid x', z', T; \theta_k). \end{aligned} \tag{A14} $$
Since $\sum_{T=1}^{\infty}\sum_{t=1}^{T} = \sum_{t=1}^{\infty}\sum_{T=t}^{\infty}$, we have
$$ \begin{aligned} Q(\lambda^i; \theta_k) &= \frac{1}{p(o=1; \theta_k)} \sum_{x, x', y', z, z'} \lambda_k(z' \mid z, y') \log \lambda^i(z'^i \mid z^i, y'^i)\, p(x', y' \mid x, z; \pi_k) \sum_{t=1}^{\infty} p_{t-1}(x, z; \theta_k) \sum_{T=t}^{\infty} p(T)\, p_t(o=1 \mid x', z', T; \theta_k) \\ &= \frac{1-\gamma}{p(o=1; \theta_k)} \sum_{x, x', y', z, z'} \lambda_k(z' \mid z, y') \log \lambda^i(z'^i \mid z^i, y'^i)\, p(x', y' \mid x, z; \pi_k) \sum_{t=1}^{\infty} p_{t-1}(x, z; \theta_k) \sum_{T=t}^{\infty} \gamma^{T} p_t(o=1 \mid x', z', T; \theta_k). \end{aligned} \tag{A15} $$
Since $p_t(o=1 \mid x', z', T; \theta_k) = p_0(o=1 \mid x', z', T-t; \theta_k)$,
$$ \begin{aligned} Q(\lambda^i; \theta_k) &= \frac{1-\gamma}{p(o=1; \theta_k)} \sum_{x, x', y', z, z'} \lambda_k(z' \mid z, y') \log \lambda^i(z'^i \mid z^i, y'^i)\, p(x', y' \mid x, z; \pi_k) \sum_{t=1}^{\infty} p_{t-1}(x, z; \theta_k) \sum_{T=t}^{\infty} \gamma^{T} p_0(o=1 \mid x', z', T-t; \theta_k) \\ &= \frac{(1-\gamma)\gamma}{p(o=1; \theta_k)} \sum_{x, x', y', z, z'} \lambda_k(z' \mid z, y') \log \lambda^i(z'^i \mid z^i, y'^i)\, p(x', y' \mid x, z; \pi_k) \sum_{t=0}^{\infty} \gamma^{t} p_t(x, z; \theta_k) \sum_{T=0}^{\infty} \gamma^{T} p_0(o=1 \mid x', z', T; \theta_k) \\ &= \frac{(1-\gamma)\gamma}{p(o=1; \theta_k)} \sum_{x, x', y', z, z'} \lambda_k(z' \mid z, y') \log \lambda^i(z'^i \mid z^i, y'^i)\, p(x', y' \mid x, z; \pi_k)\, F(x, z; \theta_k)\, V(x', z'; \theta_k). \end{aligned} \tag{A16} $$
Therefore, Equation (A7) is proved.
Finally, we prove Equation (A8). $Q(\nu^i; \theta_k)$ can be calculated as follows:
$$ \begin{aligned} Q(\nu^i; \theta_k) &:= \mathbb{E}_{\theta_k}\left[ \log \nu^i(z_0^i) \,\middle|\, o=1 \right] \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{T=0}^{\infty} \sum_{x_{0:T}, y_{0:T}, z_{0:T}, a_{0:T}} p(o=1, x_{0:T}, y_{0:T}, z_{0:T}, a_{0:T}, T; \theta_k) \log \nu^i(z_0^i) \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{T=0}^{\infty} \sum_{x_0, z_0} p(o=1, x_0, z_0, T; \theta_k) \log \nu^i(z_0^i) \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{T=0}^{\infty} \sum_{x_0, z_0} p(T)\, p_0(x_0)\, \nu_k(z_0)\, p_0(o=1 \mid x_0, z_0, T; \theta_k) \log \nu^i(z_0^i) \\ &= \frac{1}{p(o=1; \theta_k)} \sum_{z} \nu_k(z) \log \nu^i(z^i) \sum_{x} p_0(x) \sum_{T=0}^{\infty} p(T)\, p_0(o=1 \mid x, z, T; \theta_k) \\ &= \frac{1-\gamma}{p(o=1; \theta_k)} \sum_{z} \nu_k(z) \log \nu^i(z^i) \sum_{x} p_0(x)\, V(x, z; \theta_k). \end{aligned} \tag{A17} $$
Therefore, Equation (A8) is proved. □
Equations (9)–(11) can be calculated from Equations (A6)–(A8) using the Lagrange multiplier method [16]. Therefore, Proposition 1 is proved.

Appendix A.3. Proof of Proposition 2

The left-hand side of Equation (27) can be calculated as follows:
$$ \left| F(x, z; \theta_k) - \sum_{t=0}^{T_{\max}} \gamma^{t} \alpha_t(x, z; \theta_k) \right| = \sum_{t=T_{\max}+1}^{\infty} \gamma^{t} \alpha_t(x, z; \theta_k) \le \sum_{t=T_{\max}+1}^{\infty} \gamma^{t} = \frac{\gamma^{T_{\max}+1}}{1-\gamma}. \tag{A18} $$
Therefore, if
$$ \frac{\gamma^{T_{\max}+1}}{1-\gamma} < \varepsilon \tag{A19} $$
is satisfied, Equation (27) is satisfied. From Equation (A19), we obtain Equation (26). Therefore, Equation (26)⇒Equation (27) is proved. Equation (26)⇒Equation (28) can be proved in the same way.

Appendix B. Proof in Section 4

Proof of Theorem 2

$F(x, z; \theta)$ can be calculated as follows:
$$ \begin{aligned} F(x, z; \theta) &:= \sum_{t=0}^{\infty} \gamma^{t} p_t(x, z; \theta) = p_0(x, z; \nu) + \sum_{t=1}^{\infty} \gamma^{t} p_t(x, z; \theta) \\ &= p_0(x, z; \nu) + \sum_{t=1}^{\infty} \gamma^{t} \sum_{x', z'} p(x, z \mid x', z'; \theta)\, p_{t-1}(x', z'; \theta) \\ &= p_0(x, z; \nu) + \gamma \sum_{x', z'} p(x, z \mid x', z'; \theta) \sum_{t=1}^{\infty} \gamma^{t-1} p_{t-1}(x', z'; \theta) \\ &= p_0(x, z; \nu) + \gamma \sum_{x', z'} p(x, z \mid x', z'; \theta)\, F(x', z'; \theta). \end{aligned} \tag{A20} $$
Therefore, Equation (29) is proved. Equation (30) can be proved in the same way.

Appendix C. Proof in Section 5

Appendix C.1. Proof of Proposition 3

The left-hand side of Equation (37) can be calculated as follows:
$$ \| A_{\theta} f - A_{\theta} g \|_{1} = \gamma \| P(\theta) f - P(\theta) g \|_{1} \le \gamma \| P(\theta) \|_{1} \| f - g \|_{1} = \gamma \| f - g \|_{1} \tag{A21} $$
where $P_{ij}(\theta) := p((x,z)=i \mid (x,z)=j; \theta)$, $f_i := f((x,z)=i)$, and $g_i := g((x,z)=i)$. Equation (38) can be proved in the same way.

Appendix C.2. Proof of Proposition 4

We prove Equation (39) by showing $\lim_{L \to \infty} \| \boldsymbol{F}(\theta) - A_{\theta}^{L} f \|_{1} = 0$. Since $F(x, z; \theta)$ is the fixed point of $A_{\theta}$,
$$ \| \boldsymbol{F}(\theta) - A_{\theta}^{L} f \|_{1} = \| A_{\theta}^{L} \boldsymbol{F}(\theta) - A_{\theta}^{L} f \|_{1}. \tag{A22} $$
From the contractibility of $A_{\theta}$,
$$ \| A_{\theta}^{L} \boldsymbol{F}(\theta) - A_{\theta}^{L} f \|_{1} \le \gamma^{L} \| \boldsymbol{F}(\theta) - f \|_{1}. \tag{A23} $$
Since $\| \boldsymbol{F}(\theta) - f \|_{1}$ is finite, the right-hand side converges to 0 as $L \to \infty$. Hence,
$$ \lim_{L \to \infty} \| \boldsymbol{F}(\theta) - A_{\theta}^{L} f \|_{1} = 0 \tag{A24} $$
is satisfied, and Equation (39) is proved. Equation (40) can be proved in the same way.

Appendix C.3. Proof of Proposition 5

From the definition of the norm,
$$ \| \boldsymbol{F}(\theta_k) - A_{\theta_k}^{L} f \|_{\infty} \le \| \boldsymbol{F}(\theta_k) - A_{\theta_k}^{L} f \|_{1}. \tag{A25} $$
The right-hand side can be calculated as follows:
$$ \begin{aligned} \| \boldsymbol{F}(\theta_k) - A_{\theta_k}^{L} f \|_{1} &\le \| \boldsymbol{F}(\theta_k) - A_{\theta_k}^{L+1} f \|_{1} + \| A_{\theta_k}^{L+1} f - A_{\theta_k}^{L} f \|_{1} \\ &= \| A_{\theta_k} \boldsymbol{F}(\theta_k) - A_{\theta_k}^{L+1} f \|_{1} + \| A_{\theta_k}^{L+1} f - A_{\theta_k}^{L} f \|_{1} \\ &\le \gamma \| \boldsymbol{F}(\theta_k) - A_{\theta_k}^{L} f \|_{1} + \gamma \| A_{\theta_k}^{L} f - A_{\theta_k}^{L-1} f \|_{1}. \end{aligned} \tag{A26} $$
From this inequality, we have
$$ \| \boldsymbol{F}(\theta_k) - A_{\theta_k}^{L} f \|_{1} \le \frac{\gamma}{1-\gamma} \| A_{\theta_k}^{L} f - A_{\theta_k}^{L-1} f \|_{1}. \tag{A27} $$
Therefore, if $\| A_{\theta_k}^{L} f - A_{\theta_k}^{L-1} f \|_{1} < (1-\gamma)\gamma^{-1}\varepsilon$ is satisfied,
$$ \| \boldsymbol{F}(\theta_k) - A_{\theta_k}^{L} f \|_{\infty} < \varepsilon \tag{A28} $$
holds. Equation (48) is proved in the same way.

Appendix C.4. Proof of Proposition 6

When $f(x, z) = p_0(x, z; \nu_k)$ and $v(x, z) = \bar{r}(x, z; \pi_k)$, the termination conditions of MBEM can be calculated as follows:
$$ \| A_{\theta_k}^{L} f - A_{\theta_k}^{L-1} f \|_{1} \le \gamma^{L-1} \| A_{\theta_k} f - f \|_{1} = \gamma^{L-1} \| \gamma P(\theta_k)\, \boldsymbol{p}(\nu_k) \|_{1} \le \gamma^{L} \| P(\theta_k) \|_{1} \| \boldsymbol{p}(\nu_k) \|_{1} = \gamma^{L}, \tag{A29} $$
$$ \| B_{\theta_k}^{L} v - B_{\theta_k}^{L-1} v \|_{\infty} \le \gamma^{L}. \tag{A30} $$
The calculation of $\| B_{\theta_k}^{L} v - B_{\theta_k}^{L-1} v \|_{\infty}$ is omitted because it is the same as that of $\| A_{\theta_k}^{L} f - A_{\theta_k}^{L-1} f \|_{1}$. If
$$ \gamma^{L} < \frac{1-\gamma}{\gamma} \varepsilon \;\Longleftrightarrow\; L > \frac{\log\left[(1-\gamma)\varepsilon\right]}{\log \gamma} - 1 \tag{A31} $$
is satisfied,
$$ \| A_{\theta_k}^{L} f - A_{\theta_k}^{L-1} f \|_{1} < (1-\gamma)\gamma^{-1}\varepsilon, \tag{A32} $$
$$ \| B_{\theta_k}^{L} v - B_{\theta_k}^{L-1} v \|_{\infty} < (1-\gamma)\gamma^{-1}\varepsilon \tag{A33} $$
holds. Thus, $L_{\max}$ satisfies
$$ L_{\max} \le \left\lceil \frac{\log\left[(1-\gamma)\varepsilon\right]}{\log \gamma} - 1 \right\rceil. \tag{A34} $$
$\lceil \cdot \rceil : \mathbb{R} \to \mathbb{Z}$ is defined by $\lceil x \rceil := \min\{ n \in \mathbb{Z} \mid n > x \}$. From Proposition 2, the minimum $T_{\max}$ is given by the following equation:
$$ T_{\max} = \left\lceil \frac{\log\left[(1-\gamma)\varepsilon\right]}{\log \gamma} - 1 \right\rceil. \tag{A35} $$
Therefore, $L_{\max} \le T_{\max}$ holds.

Appendix D. A Note on the Algorithm Proposed by Song et al.

In this section, we show that a parameter dependency is overlooked in the algorithm of [32]. We outline the derivation of the algorithm in [32] and discuss the parameter dependency. Since we use the notation in this paper, it is recommended to read the full paper before reading this section.
Firstly, we calculate the expected return $J(\theta)$ to derive the algorithm in [32]. Since [32] considers the case where $\nu(z) = \delta_{z, z_0}$, we also consider the same case in this section. The expected return $J(\theta)$ can be calculated as follows:
$$ \begin{aligned} J(\theta) &:= \mathbb{E}_{\theta}\left[ \sum_{t=0}^{\infty} \gamma^{t} r(x_t, a_t) \right] \\ &= \sum_{x_0, a_0} p(x_0)\, \pi(a_0 \mid z_0) \left[ r(x_0, a_0) + \gamma \sum_{x_1, y_1, z_1} p(x_1 \mid x_0, a_0)\, p(y_1 \mid x_1, a_0)\, \lambda(z_1 \mid z_0, y_1)\, V(x_1, z_1; \theta) \right]. \end{aligned} \tag{A36} $$
$V(x, z; \theta)$ is the value function, which is defined as follows:
$$ V(x, z; \theta) := \mathbb{E}_{\theta}\left[ \sum_{t=0}^{\infty} \gamma^{t} r(x_t, a_t) \,\middle|\, x_0 = x, z_0 = z \right]. \tag{A37} $$
Equation (A36) can be rewritten as:
$$ J(\theta) = \sum_{a_0^i} \pi^i(a_0^i \mid z_0^i)\, \tilde{r}(z_0^i, a_0^i; \pi^{-i}) + \sum_{a_0^i, y_1^i, z_1^i} \pi^i(a_0^i \mid z_0^i)\, \lambda^i(z_1^i \mid z_0^i, y_1^i)\, \tilde{V}(z_0^i, a_0^i, y_1^i, z_1^i; \theta) \tag{A38} $$
where
$$ \tilde{r}(z_0^i, a_0^i; \pi^{-i}) := \sum_{x_0, a_0^{-i}} p(x_0)\, \pi^{-i}(a_0^{-i} \mid z_0^{-i})\, r(x_0, a_0), \tag{A39} $$
$$ \tilde{V}(z_0^i, a_0^i, y_1^i, z_1^i; \theta) := \gamma \sum_{x_0, a_0^{-i}, x_1, y_1^{-i}, z_1^{-i}} p(x_0)\, \pi^{-i}(a_0^{-i} \mid z_0^{-i})\, p(x_1 \mid x_0, a_0)\, p(y_1 \mid x_1, a_0)\, \lambda^{-i}(z_1^{-i} \mid z_0^{-i}, y_1^{-i})\, V(x_1, z_1; \theta). \tag{A40} $$
Maximizing $J(\theta)$ is equivalent to maximizing $\log J(\theta)$, and $\log J(\theta)$ can be calculated as follows:
$$ \log J(\theta) = \log\left[ \sum_{a_0^i} \eta(z_0^i, a_0^i; \theta_k)\, \frac{\pi^i(a_0^i \mid z_0^i)\, \tilde{r}(z_0^i, a_0^i; \pi^{-i})}{\eta(z_0^i, a_0^i; \theta_k)} + \sum_{a_0^i, y_1^i, z_1^i} \rho(z_0^i, a_0^i, y_1^i, z_1^i; \theta_k)\, \frac{\pi^i(a_0^i \mid z_0^i)\, \lambda^i(z_1^i \mid z_0^i, y_1^i)\, \tilde{V}(z_0^i, a_0^i, y_1^i, z_1^i; \theta)}{\rho(z_0^i, a_0^i, y_1^i, z_1^i; \theta_k)} \right] \tag{A41} $$
where
$$ \eta(z_0^i, a_0^i; \theta_k) = \frac{\pi_k^i(a_0^i \mid z_0^i)\, \tilde{r}(z_0^i, a_0^i; \pi_k^{-i})}{\xi(z_0^i; \theta_k)}, \tag{A42} $$
$$ \rho(z_0^i, a_0^i, y_1^i, z_1^i; \theta_k) = \frac{\pi_k^i(a_0^i \mid z_0^i)\, \lambda_k^i(z_1^i \mid z_0^i, y_1^i)\, \tilde{V}(z_0^i, a_0^i, y_1^i, z_1^i; \theta_k)}{\xi(z_0^i; \theta_k)}, \tag{A43} $$
$$ \xi(z_0^i; \theta_k) = \sum_{a_0^i} \pi_k^i(a_0^i \mid z_0^i)\, \tilde{r}(z_0^i, a_0^i; \pi_k^{-i}) + \sum_{a_0^i, y_1^i, z_1^i} \pi_k^i(a_0^i \mid z_0^i)\, \lambda_k^i(z_1^i \mid z_0^i, y_1^i)\, \tilde{V}(z_0^i, a_0^i, y_1^i, z_1^i; \theta_k). \tag{A44} $$
By Jensen's inequality, Equation (A41) can be bounded as follows:
$$ \begin{aligned} \log J(\theta) &\ge \sum_{a_0^i} \eta(z_0^i, a_0^i; \theta_k) \log \frac{\pi^i(a_0^i \mid z_0^i)\, \tilde{r}(z_0^i, a_0^i; \pi^{-i})}{\eta(z_0^i, a_0^i; \theta_k)} \\ &\quad + \sum_{a_0^i, y_1^i, z_1^i} \rho(z_0^i, a_0^i, y_1^i, z_1^i; \theta_k) \log \frac{\pi^i(a_0^i \mid z_0^i)\, \lambda^i(z_1^i \mid z_0^i, y_1^i)\, \tilde{V}(z_0^i, a_0^i, y_1^i, z_1^i; \theta)}{\rho(z_0^i, a_0^i, y_1^i, z_1^i; \theta_k)} \\ &=: Q(\theta; \theta_k). \end{aligned} \tag{A45} $$
$\log J(\theta) = Q(\theta; \theta_k)$ is satisfied when $\theta = \theta_k$.
Then, $\theta_{k+1}$ is defined as follows:
$$ \theta_{k+1} := \arg\max_{\theta} Q(\theta; \theta_k). \tag{A46} $$
In this case, the following proposition holds:
Proposition A2
([32]). $\log J(\theta_{k+1}) \ge \log J(\theta_k)$.
Proof. 
From Equation (A45), $\log J(\theta_{k+1}) \ge Q(\theta_{k+1}; \theta_k)$. From Equation (A46), $Q(\theta_{k+1}; \theta_k) \ge Q(\theta_k; \theta_k) = \log J(\theta_k)$. Therefore, $\log J(\theta_{k+1}) \ge \log J(\theta_k)$ is satisfied. □
Therefore, since Equation (A46) monotonically increases $J(\theta)$, we can find $\theta^{*}$, which locally maximizes $J(\theta)$. This is the algorithm in [32].
Then, the problem is how to calculate Equation (A46). It cannot be calculated analytically because the $\theta$ dependency of $\tilde{V}(z_0^i, a_0^i, y_1^i, z_1^i; \theta)$ is too complex. However, ref. [32] overlooked the parameter dependency of $\tilde{r}(z_0^i, a_0^i; \pi^{-i})$ and $\tilde{V}(z_0^i, a_0^i, y_1^i, z_1^i; \theta)$, and therefore calculated Equation (A46) as follows:
$$ \pi_{k+1}^i(a_0^i \mid z_0^i) = \frac{\eta(z_0^i, a_0^i; \theta_k) + \sum_{y_1^i, z_1^i} \rho(z_0^i, a_0^i, y_1^i, z_1^i; \theta_k)}{\sum_{a_0^i} \left[ \eta(z_0^i, a_0^i; \theta_k) + \sum_{y_1^i, z_1^i} \rho(z_0^i, a_0^i, y_1^i, z_1^i; \theta_k) \right]}, \tag{A47} $$
$$ \lambda_{k+1}^i(z_1^i \mid z_0^i, y_1^i) = \frac{\sum_{a_0^i} \rho(z_0^i, a_0^i, y_1^i, z_1^i; \theta_k)}{\sum_{a_0^i, z_1^i} \rho(z_0^i, a_0^i, y_1^i, z_1^i; \theta_k)}. \tag{A48} $$
However, Equations (A47) and (A48) do not correspond to Equation (A46), and therefore, the algorithm as a whole may not always provide the optimal policy.
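For concreteness, the following sketch forms the updates of Equations (A47) and (A48) from given η and ρ, showing that they are simply normalized sums of these weights. The tensor names and shapes are hypothetical and follow the previous sketch.

```python
import numpy as np

def m_step_song_et_al(eta, rho):
    """Form the updates of Equations (A47) and (A48) from eta and rho.

    Hypothetical shapes for agent i with z0 fixed:
      eta[a]        eta(z0, a; theta_k)
      rho[a, y, z]  rho(z0, a, y, z; theta_k)
    """
    # Equation (A47): add the value weight marginalized over (y, z), then normalize over a.
    w = eta + rho.sum(axis=(1, 2))
    pi_next = w / w.sum()
    # Equation (A48): marginalize over a, then normalize over the next memory z for each y.
    lam_marg = rho.sum(axis=0)                               # shape [y, z]
    lam_next = lam_marg / lam_marg.sum(axis=1, keepdims=True)
    return pi_next, lam_next
```

Since η and ρ are evaluated at the old parameters θ_k only, these closed-form updates indeed ignore the θ-dependence of $\tilde{r}$ and $\tilde{V}$ discussed above.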

References

1. Bertsekas, D.P. Dynamic Programming and Optimal Control: Vol. 1; Athena Scientific: Belmont, MA, USA, 2000.
2. Puterman, M.L. Markov Decision Processes: Discrete Stochastic Dynamic Programming; John Wiley & Sons: Hoboken, NJ, USA, 2014.
3. Sutton, R.S.; Barto, A.G. Introduction to Reinforcement Learning; MIT Press: Cambridge, MA, USA, 1998; Volume 135.
4. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction; MIT Press: Cambridge, MA, USA, 2018.
5. Kochenderfer, M.J. Decision Making under Uncertainty: Theory and Application; MIT Press: Cambridge, MA, USA, 2015.
6. Oliehoek, F. Value-Based Planning for Teams of Agents in Stochastic Partially Observable Environments; Amsterdam University Press: Amsterdam, The Netherlands, 2010.
7. Oliehoek, F.A.; Amato, C. A Concise Introduction to Decentralized POMDPs; Springer: Berlin/Heidelberg, Germany, 2016; Volume 1.
8. Becker, R.; Zilberstein, S.; Lesser, V.; Goldman, C.V. Solving transition independent decentralized Markov decision processes. J. Artif. Intell. Res. 2004, 22, 423–455.
9. Nair, R.; Varakantham, P.; Tambe, M.; Yokoo, M. Networked distributed POMDPs: A synthesis of distributed constraint optimization and POMDPs. In Proceedings of the 20th National Conference on Artificial Intelligence (AAAI-05), Pittsburgh, PA, USA, 9–13 July 2005; pp. 133–139.
10. Bernstein, D.S.; Givan, R.; Immerman, N.; Zilberstein, S. The complexity of decentralized control of Markov decision processes. Math. Oper. Res. 2002, 27, 819–840.
11. Bernstein, D.S.; Hansen, E.A.; Zilberstein, S. Bounded policy iteration for decentralized POMDPs. In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh, UK, 30 July–5 August 2005; pp. 52–57.
12. Bernstein, D.S.; Amato, C.; Hansen, E.A.; Zilberstein, S. Policy iteration for decentralized control of Markov decision processes. J. Artif. Intell. Res. 2009, 34, 89–132.
13. Amato, C.; Bernstein, D.S.; Zilberstein, S. Optimizing fixed-size stochastic controllers for POMDPs and decentralized POMDPs. Auton. Agents Multi-Agent Syst. 2010, 21, 293–320.
14. Amato, C.; Bonet, B.; Zilberstein, S. Finite-state controllers based on Mealy machines for centralized and decentralized POMDPs. In Proceedings of the AAAI Conference on Artificial Intelligence, Atlanta, GA, USA, 11–15 July 2010.
15. Amato, C.; Bernstein, D.S.; Zilberstein, S. Optimizing memory-bounded controllers for decentralized POMDPs. arXiv 2012, arXiv:1206.5258.
16. Kumar, A.; Zilberstein, S. Anytime planning for decentralized POMDPs using expectation maximization. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, Catalina Island, CA, USA, 8–11 July 2010; pp. 294–301.
17. Kumar, A.; Zilberstein, S.; Toussaint, M. Probabilistic inference techniques for scalable multiagent decision making. J. Artif. Intell. Res. 2015, 53, 223–270.
18. Toussaint, M.; Storkey, A. Probabilistic inference for solving discrete and continuous state Markov decision processes. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 945–952.
19. Todorov, E. General duality between optimal control and estimation. In Proceedings of the 47th IEEE Conference on Decision and Control, Cancun, Mexico, 9–11 December 2008; pp. 4286–4292.
20. Kappen, H.J.; Gómez, V.; Opper, M. Optimal control as a graphical model inference problem. Mach. Learn. 2012, 87, 159–182.
21. Levine, S. Reinforcement learning and control as probabilistic inference: Tutorial and review. arXiv 2018, arXiv:1805.00909.
22. Sun, X.; Bischl, B. Tutorial and survey on probabilistic graphical model and variational inference in deep reinforcement learning. In Proceedings of the IEEE Symposium Series on Computational Intelligence (SSCI), Xiamen, China, 6–9 December 2019; pp. 110–119.
23. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006.
24. Toussaint, M.; Harmeling, S.; Storkey, A. Probabilistic Inference for Solving (PO)MDPs; Technical Report EDI-INF-RR-0934; School of Informatics, University of Edinburgh: Edinburgh, UK, 2006.
25. Toussaint, M.; Charlin, L.; Poupart, P. Hierarchical POMDP controller optimization by likelihood maximization. UAI 2008, 24, 562–570.
26. Kumar, A.; Zilberstein, S.; Toussaint, M. Scalable multiagent planning using probabilistic inference. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Spain, 16–22 July 2011.
27. Pajarinen, J.; Peltonen, J. Efficient planning for factored infinite-horizon DEC-POMDPs. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, Barcelona, Spain, 16–22 July 2011; Volume 22, p. 325.
28. Pajarinen, J.; Peltonen, J. Periodic finite state controllers for efficient POMDP and DEC-POMDP planning. Adv. Neural Inf. Process. Syst. 2011, 24, 2636–2644.
29. Pajarinen, J.; Peltonen, J. Expectation maximization for average reward decentralized POMDPs. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Proceedings of the European Conference, ECML PKDD 2013, Prague, Czech Republic, 23–27 September 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 129–144.
30. Wu, F.; Zilberstein, S.; Jennings, N.R. Monte-Carlo expectation maximization for decentralized POMDPs. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, Beijing, China, 3–9 August 2013.
31. Liu, M.; Amato, C.; Anesta, E.; Griffith, J.; How, J. Learning for decentralized control of multiagent systems in large, partially-observable stochastic environments. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30.
32. Song, Z.; Liao, X.; Carin, L. Solving DEC-POMDPs by expectation maximization of value function. In Proceedings of the AAAI Spring Symposia, Palo Alto, CA, USA, 21–23 March 2016.
33. Kumar, A.; Mostafa, H.; Zilberstein, S. Dual formulations for optimizing Dec-POMDP controllers. In Proceedings of the AAAI, Phoenix, AZ, USA, 12–17 February 2016.
34. Bertsekas, D.P. Approximate policy iteration: A survey and some new methods. J. Control Theory Appl. 2011, 9, 310–335.
35. Liu, D.R.; Li, H.L.; Wang, D. Feature selection and feature learning for high-dimensional batch reinforcement learning: A survey. Int. J. Autom. Comput. 2015, 12, 229–242.
36. Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A.A.; Veness, J.; Bellemare, M.G.; Graves, A.; Riedmiller, M.; Fidjeland, A.K.; Ostrovski, G.; et al. Human-level control through deep reinforcement learning. Nature 2015, 518, 529–533.
37. Hallak, A.; Mannor, S. Consistent on-line off-policy evaluation. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 1372–1383.
38. Gelada, C.; Bellemare, M.G. Off-policy deep reinforcement learning by bootstrapping the covariate shift. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 3647–3655.
39. Levine, S.; Kumar, A.; Tucker, G.; Fu, J. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv 2020, arXiv:2005.01643.
40. Hansen, E.A.; Bernstein, D.S.; Zilberstein, S. Dynamic programming for partially observable stochastic games. In Proceedings of the AAAI, Palo Alto, CA, USA, 22–24 March 2004; Volume 4, pp. 709–715.
41. Seuken, S.; Zilberstein, S. Improved memory-bounded dynamic programming for decentralized POMDPs. arXiv 2012, arXiv:1206.5295.
Figure 1. Schematic diagram of DEC-POMDP. DEC-POMDP consists of an environment and N agents ( N = 2 in this figure). x t is the state of the environment at time t. y t i , z t i , and a t i are the observation, the memory, and the action available to the agent i { 1 , , N } , respectively. The agents update their memories based on their observations, and take their actions based on their memories to control the environmental state. The planning of DEC-POMDP is to find their optimal memory updates and action selections that maximize the objective function.
Figure 2. Dynamic Bayesian networks of DEC-POMDP (a) and the latent variable model for the time horizon T { 0 , 1 , 2 , } (b). x t is the state of the environment at time t. y t i , z t i , and a t i are the observation, the memory, and the action available to the agent i { 1 , , N } , respectively. (a) r t R is the reward, which is generated at each time. (b) o { 0 , 1 } is the optimal variable, which is generated only at the time horizon T.
Figure 3. Experimental results of four benchmarks for DEC-POMDP: (a,e,i) broadcast; (b,f,j) recycling robot; (c,g,k) wireless network; (d,h,l) box pushing. (a–d) The expected return J(θ_k) as a function of the computational time. (e–h) The expected return J(θ_k) as a function of the iteration k. (i–l) T_max and L_max as functions of the iteration k. In all the experiments, we set the number of agents N = 2, the discount factor γ = 0.99, the upper bound of the approximation error ε = 0.1, and the size of the memory available to each agent |Z^i| = 2. The sizes of the state space |X|, the action space |A^i|, and the observation space |Y^i| differ for each problem and are shown on each panel.
Table 1. Summary of EM, BEM, and MBEM.

E step:
- EM: forward–backward algorithm; $F(x,z;\theta_k) \leftarrow \sum_{t=0}^{T_{\max}} \gamma^{t} \alpha_t(x,z;\theta_k)$ and $V(x,z;\theta_k) \leftarrow \sum_{t=0}^{T_{\max}} \gamma^{t} \beta_t(x,z;\theta_k)$; complexity $O((|X||Z|)^2 T_{\max})$.
- Bellman EM (BEM): forward and backward Bellman equations; $F(\theta_k) = (I - \gamma P(\theta_k))^{-1} p(\nu_k)$ and $V(\theta_k) = ((I - \gamma P(\theta_k))^{-1})^{\top} r(\pi_k)$; complexity $O((|X||Z|)^3)$.
- Modified Bellman EM (MBEM): forward and backward Bellman operators; $F(x,z;\theta_k) \leftarrow A_{\theta_k}^{L_{\max}} f(x,z)$ and $V(x,z;\theta_k) \leftarrow B_{\theta_k}^{L_{\max}} v(x,z)$; complexity $O((|X||Z|)^2 L_{\max})$.

M step (identical for EM, BEM, and MBEM): Equations (9)–(11); complexity $O((|X||Z|)^2 |Y||A|)$.
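To make the complexity comparison in Table 1 concrete, the following Python sketch contrasts the three E-step computations on the joint (x, z) space of size n = |X||Z|. The matrix P and the vectors p0 and r stand for P(θ_k), p(ν_k), and r(π_k) in the table; their construction follows the definitions in the main text and is assumed to be given here, and all function and variable names are our own illustrative choices.

```python
import numpy as np

def e_step_em(P, p0, r, gamma, T_max):
    """EM: truncated forward-backward sums, O(n^2 * T_max)."""
    F = np.zeros(len(p0))
    V = np.zeros(len(r))
    alpha, beta = p0.copy(), r.copy()
    for t in range(T_max + 1):
        F += gamma**t * alpha      # F ~ sum_t gamma^t alpha_t
        V += gamma**t * beta       # V ~ sum_t gamma^t beta_t
        alpha = P @ alpha          # forward message alpha_{t+1}
        beta = P.T @ beta          # backward message beta_{t+1}
    return F, V

def e_step_bem(P, p0, r, gamma):
    """BEM: forward and backward Bellman equations via a linear solve, O(n^3)."""
    A = np.eye(len(p0)) - gamma * P
    F = np.linalg.solve(A, p0)     # F = (I - gamma P)^{-1} p(nu_k)
    V = np.linalg.solve(A.T, r)    # V = ((I - gamma P)^{-1})^T r(pi_k)
    return F, V

def e_step_mbem(P, p0, r, gamma, L_max, f0=None, v0=None):
    """MBEM: L_max applications of the Bellman operators, O(n^2 * L_max).

    f0 and v0 are initial guesses (for instance, the estimates carried over
    from the previous EM iteration).
    """
    F = p0.copy() if f0 is None else f0.copy()
    V = r.copy() if v0 is None else v0.copy()
    for _ in range(L_max):
        F = p0 + gamma * (P @ F)   # forward Bellman operator
        V = r + gamma * (P.T @ V)  # backward Bellman operator
    return F, V
```

All three routines approximate the same fixed points, F = (I − γP)^{-1} p0 and V = ((I − γP)^{-1})^T r; they differ only in whether the geometric series is truncated at T_max, solved exactly, or iterated L_max times from a warm start.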
