Integral Reinforcement-Learning-Based Optimal Containment Control for Partially Unknown Nonlinear Multiagent Systems

This paper focuses on the optimal containment control problem for nonlinear multiagent systems with partially unknown dynamics via an integral reinforcement learning algorithm. By employing integral reinforcement learning, the requirement of the drift dynamics is relaxed. The integral reinforcement learning method is proved to be equivalent to model-based policy iteration, which guarantees the convergence of the proposed control algorithm. For each follower, the Hamilton–Jacobi–Bellman equation is solved by a single critic neural network with a modified updating law, which guarantees that the weight error dynamic is asymptotically stable. Using input–output data, the approximate optimal containment control protocol of each follower is obtained from the critic neural network. The closed-loop containment error system is guaranteed to be stable under the proposed optimal containment control scheme. Simulation results demonstrate the effectiveness of the presented control scheme.


Introduction
Distributed coordination control of multiagent systems (MASs) has drawn extensive interest due to its potential applications in agricultural irrigation [1], disaster rescue [2], microgrid scheduling [3], marine survey [4] and wireless communication [5]. Distributed coordination control aims to guarantee that all agents, which exchange local information with their neighbors, reach an agreement on some variables of interest [6]. Over the last decade, containment control has received increasing attention because of its remarkable performance in addressing secure control issues, such as hazardous material treatment [7] and fire rescue [8]. The goal of containment control is to drive the followers to enter and remain within the convex hull spanned by multiple leaders. Numerous interesting and significant results on containment control have been presented. Reference [9] developed a fuzzy-observer-based backstepping control to achieve the containment of MASs. An adaptive funnel containment control was proposed in [10], where the containment errors converged to an adjustable funnel boundary. In practical applications, containment control has been developed for autonomous surface vehicles [4], unmanned aerial vehicles [11] and spacecraft [12]. Notice that most of the aforementioned works have ignored the control performance with minimum energy consumption.
It is well known that the Riccati equation or the Hamilton–Jacobi–Bellman equation (HJBE) is solved to acquire the optimal control for linear or nonlinear systems [13], respectively. In other words, the Riccati equation is a particular case of the HJBE. As a classical optimization algorithm, dynamic programming (DP) [14] is regarded as an effective way to obtain the optimal solution of the HJBE. However, as the dimension of the state variables increases, the computation of the DP approach grows as a geometric series, which gives rise to the "curse of dimensionality". With the success of AlphaGo, reinforcement learning (RL) has stimulated increasing enthusiasm among scholars. The main contributions of this paper are summarized as follows. (1) Different from existing control schemes [9,20], an integral reinforcement learning (IRL) method is introduced to construct the integral Bellman equation without system identification. Furthermore, the IRL method is proved to be equivalent to model-based policy iteration (PI), which guarantees the convergence of the developed control algorithm. (2) The IRL-based optimal containment control (OCC) scheme is implemented by a critic-only architecture for nonlinear MASs with unknown drift dynamics, rather than by an actor-critic architecture for linear MASs [25][26][27]. Thus, the proposed scheme simplifies the control structure. (3) In contrast to the existing OCC schemes [20][21][22], which only guarantee the weight errors to be uniformly ultimately bounded (UUB), a modified weight-updating law is presented to tune the critic neural network (NN) weights, whose weight error dynamic is asymptotically stable.
This paper is organized as follows. In Section 2, graph theory and its application to the containment of MASs are outlined. In Section 3, the IRL-based OCC scheme and its convergence proof are presented for nonlinear MASs. Then, the stability of the closed-loop containment error systems is analyzed in detail. In Section 4, two simulation examples demonstrate the effectiveness of the proposed scheme. In Section 5, concluding remarks are drawn.

Graph Theory
For a network with N agents, the information interactions among agents are reflected by a weighted graph G = (V, ε, A) with the nonempty finite set of nodes V = {υ_1, . . . , υ_N}, the edge set ε ⊆ V × V and the nonnegative weighted adjacency matrix A = [a_ip]. If node υ_i links to node υ_p, the edge (υ_i, υ_p) ∈ ε is available with a_ip > 0; otherwise, a_ip = 0. For a node υ_i, the node υ_p is named as a neighbor of υ_i when (υ_i, υ_p) ∈ ε, and the neighbor set of υ_i is denoted by N_i = {υ_p | (υ_i, υ_p) ∈ ε}. The in-degree matrix is D = diag{d_1, . . . , d_N} with d_i = ∑_{p∈N_i} a_ip, and the Laplacian matrix is defined as L = D − A, which implies that each row sum of L equals zero. A sequence of edges (υ_1, υ_2), (υ_2, υ_3), . . . with υ_i ∈ V is defined as a directed path. For arbitrary υ_i, υ_p ∈ V, a directed graph is strongly connected if there is a directed path from υ_i to υ_p, while the directed graph is said to contain a spanning tree if there exists a directed path from a root node to every other node of G. This paper focuses on a strongly connected digraph with a spanning tree.
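These definitions can be checked numerically. The following snippet is a minimal sketch (assuming NumPy, with a hypothetical 3-node adjacency matrix, not the graph of the simulation examples) confirming that every row of the Laplacian L = D − A sums to zero.

```python
import numpy as np

# Hypothetical weighted adjacency matrix A = [a_ip] for a 3-node digraph;
# a_ip > 0 iff node i receives information from node p.
A = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 1.0],
              [0.7, 0.0, 0.0]])

# In-degree matrix D = diag{d_1, ..., d_N} with d_i = sum_p a_ip.
D = np.diag(A.sum(axis=1))

# Graph Laplacian L = D - A; each row of L sums to zero by construction.
L = D - A
print(L.sum(axis=1))  # every entry is 0
```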

Problem Description
Consider the leader–follower nonlinear MASs in the form of the graph G with M leaders and N followers, where the node dynamic of the ith follower is modeled by

ẋ_i = f(x_i) + g_i(x_i)μ_i, (1)

where x_i ∈ R^n is the state vector of the ith follower, μ_i ∈ R^m is the control input vector, i = 1, 2, . . . , N, and the nonlinear functions f(x_i) ∈ R^n and g_i(x_i) ∈ R^{n×m} represent the unknown drift dynamic and the control input matrix, respectively. Denote the global state vector as x = [x_1^T, x_2^T, . . . , x_N^T]^T.

Assumption 1. f(x_i) and g_i(x_i) are Lipschitz continuous on the compact set Ω_i with f(0) = 0, and the system (1) is controllable.
Define the node dynamic of the jth leader as

ṙ_j = h_j(r_j), (2)

where r_j ∈ R^n stands for the state vector of the jth leader, j = 1, 2, . . . , M, and h_j(r_j) ∈ R^n satisfies Lipschitz continuity.
Definition 1 (Convex hull [8]). A set C ⊆ R^{M×n} is convex if for any y_1, y_2 ∈ C and ∀ρ ∈ (0, 1), (1 − ρ)y_1 + ρy_2 ∈ C. A convex hull of a finite set Y = {y_1, y_2, . . . , y_M} is the minimal convex set, i.e., Co(Y) = {∑_{j=1}^M ρ_j y_j | y_j ∈ Y, ρ_j ≥ 0, ∑_{j=1}^M ρ_j = 1}.

The containment control aims to find a set of distributed control protocols μ = {μ_1, μ_2, . . . , μ_N} such that all followers stay in the convex hull formed by the leaders, i.e., x_i(t) → Co(Y) with Y = {r_1, r_2, . . . , r_M}. For the ith follower, the local neighborhood containment error e_i is formulated as

e_i = ∑_{p∈N_i} a_ip(x_i − x_p) + ∑_{j=1}^M b_ij(x_i − r_j), (3)

where e_i ∈ R^n and b_ij ≥ 0 represents the pinning gain.
In fact, the connection between the ith follower and the jth leader is available if and only if b_ij > 0. Denote the communication graph of the whole MAS by Ḡ. Here, I_n represents the n-dimensional identity matrix, 1_M stands for the M-dimensional column vector whose every element equals 1 and B = [B_1, B_2, . . . , B_M] ∈ R^{N×NM} with B_j = diag{b_1j, . . . , b_Nj}. Considering (1), (2) and (3), for the ith follower, the local neighborhood containment error dynamic is formulated as

ė_i = (d_i + b_i)(f(x_i) + g_i(x_i)μ_i) − ∑_{p∈N_i} a_ip(f(x_p) + g_p(x_p)μ_p) − ∑_{j=1}^M b_ij h_j(r_j), (4)

where d_i = ∑_{p∈N_i} a_ip and b_i = ∑_{j=1}^M b_ij. For the ith follower, the local neighborhood containment error is dominated not only by the local states and local control inputs, but also by the information from its neighbors and the leaders. In order to achieve the containment of the partially unknown nonlinear MASs (i.e., e_i → 0), an IRL-based OCC scheme is designed in the next subsection.
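As an illustration of (3), the following sketch computes the local neighborhood containment error for hypothetical follower and leader states; the edge weights a_ip and pinning gains b_ij are made-up values, not those of the simulation examples.

```python
import numpy as np

def containment_error(i, x, r, A, B):
    """Local neighborhood containment error of Eq. (3):
    e_i = sum_p a_ip (x_i - x_p) + sum_j b_ij (x_i - r_j).
    x: list of follower states, r: list of leader states,
    A: follower adjacency matrix, B: pinning gain matrix."""
    e = np.zeros_like(x[i])
    for p in range(len(x)):
        e += A[i, p] * (x[i] - x[p])   # disagreement with neighbor followers
    for j in range(len(r)):
        e += B[i, j] * (x[i] - r[j])   # disagreement with pinned leaders
    return e

# Hypothetical two-follower, one-leader example with scalar states.
x = [np.array([1.0]), np.array([0.0])]
r = [np.array([2.0])]
A = np.array([[0.0, 0.5], [0.5, 0.0]])
B = np.array([[1.0], [0.0]])
print(containment_error(0, x, r, A, B))  # 0.5*(1-0) + 1*(1-2) = [-0.5]
```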

Optimal Containment Control
For the local neighborhood containment error dynamic (4), define the cost function as

C_i(e_i(t)) = ∫_t^∞ P_i(e_i(τ), μ_i(τ), μ_{-i}(τ)) dτ, (5)

where P_i(e_i, μ_i, μ_{-i}) = e_i^T Q_i e_i + ∑_{p∈{N_i,i}} μ_p^T R_ip μ_p is a utility function, μ_{-i} = {μ_p | p ∈ N_i} represents the set of the local control protocols from the neighbors of node υ_i, and Q_i ∈ R^{n×n} and R_ip ∈ R^{m×m} are positive definite matrices.
Definition 2 (Admissible control policies [17]). The feedback control policies μ_i(e_i) (i ∈ I) are defined to be admissible with respect to (5) on a compact set Ω_i, denoted by μ_i(e_i) ∈ A(Ω_i), if μ_i(e_i) is continuous on Ω_i with μ_i(0) = 0, μ_i(e_i) stabilizes the error dynamic (4) on Ω_i and the cost function (5) is finite.

Definition 3 (Nash equilibrium [17]). An N-tuple admissible control policy μ*(e) = {μ_1^*(e_1), μ_2^*(e_2), . . . , μ_N^*(e_N)} is said to constitute a Nash equilibrium solution in graph G_x if the following N inequalities are satisfied:

C_i(μ_i^*, μ_{-i}^*) ≤ C_i(μ_i, μ_{-i}^*), ∀μ_i ∈ A(Ω_i), i = 1, 2, . . . , N.

This paper aims to find an N-tuple optimal admissible control policy μ*(e) to minimize the cost function (5) for each follower such that the Nash equilibrium solution in G_x (i.e., the OCC protocols) is obtained.
For arbitrary μ_i(e_i) ∈ A(Ω_i) of the ith follower, define the value function

C_i(e_i(t)) = ∫_t^∞ P_i(e_i(τ), μ_i(τ), μ_{-i}(τ)) dτ. (6)

When (6) is finite, the Bellman equation is

0 = P_i(e_i, μ_i, μ_{-i}) + (∇C_i(e_i))^T ė_i, (7)

where C_i(0) = 0 and ∇C_i(e_i) = ∂C_i(e_i)/∂e_i. For the ith follower, the local Hamiltonian is

H_i(e_i, μ_i, μ_{-i}, ∇C_i) = P_i(e_i, μ_i, μ_{-i}) + (∇C_i(e_i))^T ė_i.

Define the optimal value function as

C_i^*(e_i) = min_{μ_i∈A(Ω_i)} ∫_t^∞ P_i(e_i(τ), μ_i(τ), μ_{-i}(τ)) dτ. (8)

According to [13], the optimal value function C_i^*(e_i) satisfies the HJBE as follows:

min_{μ_i} H_i(e_i, μ_i, μ_{-i}, ∇C_i^*) = 0. (9)

The local OCC protocol is

μ_i^*(e_i) = −(1/2)(d_i + b_i) R_ii^{-1} g_i^T(x_i) ∇C_i^*(e_i). (10)

It should be mentioned that the analytical solution of the HJBE is intractable to obtain since C_i^*(e_i) is unknown. According to [15], the solution of the HJBE is successively approximated through a sequence of iterations with the policy evaluation

0 = P_i(e_i, μ_i^{(k)}, μ_{-i}^{(k)}) + (∇C_i^{(k)}(e_i))^T ė_i (11)

and the policy improvement

μ_i^{(k+1)}(e_i) = −(1/2)(d_i + b_i) R_ii^{-1} g_i^T(x_i) ∇C_i^{(k)}(e_i), (12)

where (k) represents the kth iteration index with k ∈ N+. From (11), we can see that the policy evaluation requires the accurate mathematical model of (1). However, the accurate mathematical model is always difficult to obtain in practice. To break this bottleneck, the IRL method is developed to relax the requirement of the accurate model in the policy evaluation.
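The alternation of policy evaluation (11) and policy improvement (12) can be illustrated on a scalar linear-quadratic problem, where each step has a closed form and the limit can be checked against the Riccati solution. This is only a sketch with assumed parameters (a, b, q, r), not the paper's nonlinear containment setting.

```python
import math

# Scalar plant dx/dt = a*x + b*u with cost integral of q*x^2 + r*u^2,
# value V(x) = P*x^2 and feedback u = -K*x.  All values are hypothetical.
a, b, q, r = -1.0, 1.0, 1.0, 1.0

K = 0.0                                  # initial admissible (stabilizing) gain
for k in range(20):
    # Policy evaluation (scalar analogue of Eq. (11)):
    # solve 2*(a - b*K)*P + q + r*K**2 = 0 for P.
    P = -(q + r * K**2) / (2.0 * (a - b * K))
    # Policy improvement (scalar analogue of Eq. (12)): K = (b/r)*P.
    K = (b / r) * P

# The iterates converge to the positive root of the scalar Riccati
# equation 2*a*P - (b**2/r)*P**2 + q = 0.
P_riccati = r * (a + math.sqrt(a**2 + b**2 * q / r)) / b**2
print(P, P_riccati)  # both approximately 0.41421
```

Each evaluation step is a one-dimensional Lyapunov equation, which is why the iteration is model-based: the drift coefficient a appears explicitly in the update.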

Integral Reinforcement Learning
For t_τ > 0, (6) can be rewritten as the integral Bellman equation

C_i(e_i(t)) = ∫_t^{t+t_τ} P_i(e_i(τ), μ_i(τ), μ_{-i}(τ)) dτ + C_i(e_i(t + t_τ)). (13)

Based on the integral Bellman Equation (13), the policy evaluation (11) is replaced by

C_i^{(k)}(e_i(t)) = ∫_t^{t+t_τ} P_i(e_i(τ), μ_i^{(k)}(τ), μ_{-i}^{(k)}(τ)) dτ + C_i^{(k)}(e_i(t + t_τ)). (14)

Compared to (7), the policy evaluation (14) does not require the accurate system dynamics in (1).
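To make the model-free character of (14) concrete, the following sketch estimates the value weight of a scalar example by least squares over the integral Bellman equation. The drift coefficient a appears only in the data-generating simulation and is never read by the estimator; all numerical values are assumptions for illustration.

```python
# Scalar plant dx/dt = a*x + b*u, fixed policy u = -K*x, value V(x) = P*x^2.
# The integral Bellman equation gives, over each interval of length T:
#     P*x(t)^2 = integral of (q*x^2 + r*u^2) + P*x(t+T)^2,
# i.e. P*(x(t)^2 - x(t+T)^2) = measured cost, solved for P by least squares.
a, b, q, r, K = -1.0, 1.0, 1.0, 1.0, 0.0   # hypothetical values
dt, T = 1e-3, 0.1                          # Euler step and interval t_tau

x = 1.0
num = den = 0.0
for _ in range(5):                         # a few reinforcement intervals
    x0, cost = x, 0.0
    for _ in range(int(T / dt)):
        u = -K * x
        cost += (q * x**2 + r * u**2) * dt # running utility integral (data)
        x += (a * x + b * u) * dt          # simulation only; estimator never uses a
    num += cost * (x0**2 - x**2)           # least-squares accumulation
    den += (x0**2 - x**2) ** 2
P_hat = num / den
print(P_hat)  # close to the Lyapunov value 0.5 for these parameters
```

The only inputs to the estimate are measured state samples and integrated utilities, mirroring how (14) relaxes the requirement of the drift dynamics.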
Theorem 1. For the ith follower, a continuously differentiable function C_i^{(k)}(e_i) with C_i^{(k)}(0) = 0 satisfies the integral Bellman equation

C_i^{(k)}(e_i(t)) = ∫_t^{t+t_τ} P_i(e_i(τ), μ_i^{(k)}(τ), μ_{-i}^{(k)}(τ)) dτ + C_i^{(k)}(e_i(t + t_τ)) (15)

if and only if C_i^{(k)}(e_i) is the only solution of (11).
Proof of Theorem 1. Considering (11), the time derivative of C_i^{(k)}(e_i) along (4) satisfies

Ċ_i^{(k)}(e_i) = (∇C_i^{(k)}(e_i))^T ė_i = −P_i(e_i, μ_i^{(k)}, μ_{-i}^{(k)}). (16)

Integrating both sides of (16) within [t, t + t_τ] yields

C_i^{(k)}(e_i(t + t_τ)) − C_i^{(k)}(e_i(t)) = −∫_t^{t+t_τ} P_i(e_i(τ), μ_i^{(k)}(τ), μ_{-i}^{(k)}(τ)) dτ. (17)

According to (16) and (17), C_i^{(k)}(e_i) satisfies the integral Bellman Equation (15). Next, we verify the uniqueness of the solution. Suppose that Υ_i(e_i) with Υ_i(0) = 0 is another solution of (15); its time derivative satisfies the counterpart of (16), denoted by (18). Subtracting (16) from (18) yields

d/dt[Υ_i(e_i) − C_i^{(k)}(e_i)] = 0. (19)

Solving (19) with the boundary condition Υ_i(0) = C_i^{(k)}(0) = 0, we have Υ_i(e_i) = C_i^{(k)}(e_i). One can derive that C_i^{(k)}(e_i) is the only solution of (11). Theorem 1 reveals that the IRL algorithm with (15) and (12) is theoretically equivalent to the model-based PI algorithm, whose convergence analysis was provided in [15]. Hence, the IRL algorithm is guaranteed to converge.

Theorem 2.
Considering the nonlinear MAS with partially unknown dynamic as (1), the local neighborhood containment error dynamic as (4) and the optimal value function C * i (e i ) as (8), the closed-loop containment error system is guaranteed to be asymptotically stable under the local OCC protocol (10). Furthermore, the containment control is achieved with a set of the OCC protocols {µ * 1 , µ * 2 , . . . , µ * N } if there is a spanning tree in the directed graph.
Proof of Theorem 2. Select the Lyapunov function candidate as C_i^*(e_i). Combining (7), (8) and (10) yields

(∇C_i^*(e_i))^T ė_i = −P_i(e_i, μ_i^*, μ_{-i}^*). (20)

Substituting (20) into the time derivative of C_i^*(e_i), then

Ċ_i^*(e_i) = (∇C_i^*(e_i))^T ė_i = −e_i^T Q_i e_i − ∑_{p∈{N_i,i}} μ_p^{*T} R_ip μ_p^*.

Therefore, Ċ_i^*(e_i) ≤ 0. One can conclude that the closed-loop containment error system (4) is asymptotically stable with the local OCC protocol (10). Since a spanning tree exists in the directed graph, the containment control of the nonlinear MAS with partially unknown dynamic can be achieved.

Critic NN Implementation
Based on the Stone–Weierstrass approximation theorem, on the compact set Ω_i, the optimal value function C_i^*(e_i) and its partial gradient can be established by a critic NN as

C_i^*(e_i) = φ_i^{*T} σ_i(e_i) + ω_i(e_i), (21)

∇C_i^*(e_i) = (∇σ_i(e_i))^T φ_i^* + ∇ω_i(e_i), (22)

where φ_i^* ∈ R^{l_i} represents the ideal weight, σ_i(·) ∈ R^{l_i} represents the activation function, l_i represents the number of hidden neurons and ω_i(e_i) stands for the reconstruction error.
Since the ideal weight vector is unknown, the approximations of C_i^*(e_i) and ∇C_i^*(e_i) are expressed as

Ĉ_i(e_i) = φ̂_i^T σ_i(e_i), ∇Ĉ_i(e_i) = (∇σ_i(e_i))^T φ̂_i, (23)

where ∇σ_i(e_i) = ∂σ_i(e_i)/∂e_i and φ̂_i ∈ R^{l_i} represents the estimation of φ_i^*. Then, the local OCC protocol (10) can be approximated by

μ̂_i(e_i) = −(1/2)(d_i + b_i) R_ii^{-1} g_i^T(x_i)(∇σ_i(e_i))^T φ̂_i. (24)

The approximate local Hamiltonian is the integral Bellman residual

e_ci = ∫_t^{t+t_τ} P_i(e_i(τ), μ̂_i(τ), μ̂_{-i}(τ)) dτ + Ĉ_i(e_i(t + t_τ)) − Ĉ_i(e_i(t)). (25)

Combining (14) and (21) with (25) yields (26), where φ̃_i = φ_i^* − φ̂_i represents the weight estimation error and Φ_i denotes the corresponding residual term. In order to tune φ̂_i, the steepest descent algorithm is employed to minimize E_ci = (1/2)e_ci^2. A modified updating law of φ̂_i is given by (27), where l_ci > 0 and η̂_i, the estimation of η_i, is updated by (28), where l_si > 0 is a design constant. Considering (26) and (27), the weight estimation error dynamic (29) is obtained.

Theorem 3. Considering the nonlinear MAS with partially unknown dynamic as (1), the local neighborhood containment error dynamic as (4) and the critic NN with the modified updating laws (27) and (28), the weight estimation error dynamic of φ̃_i is guaranteed to be asymptotically stable.
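The plain steepest-descent part of the critic update can be sketched as follows. The stabilizing modification of (27) and (28) is specific to the paper and omitted here, so this is only the baseline gradient step on E_ci = (1/2)e_ci^2 with a hypothetical single quadratic feature σ(x) = x^2 and made-up scalar data.

```python
# Critic weight tuning by steepest descent on the integral Bellman residual.
# Scalar plant dx/dt = a*x + b*u, policy u = -K*x, critic V_hat(x) = w*x^2.
# All numerical values are assumptions for illustration.
a, b, q, r, K = -1.0, 1.0, 1.0, 1.0, 0.0
dt, T = 1e-3, 0.1
l_c = 0.5                                  # learning rate (design constant)

# Collect (integral utility, feature difference) samples along one trajectory.
samples, x = [], 1.0
for _ in range(10):
    x0, cost = x, 0.0
    for _ in range(int(T / dt)):
        u = -K * x
        cost += (q * x**2 + r * u**2) * dt
        x += (a * x + b * u) * dt
    samples.append((cost, x**2 - x0**2))   # (measured cost, sigma(x_T) - sigma(x_0))

w = 0.0                                    # initial critic weight
for _ in range(2000):                      # gradient sweeps over the data
    for cost, d_sigma in samples:
        e_c = cost + w * d_sigma           # residual e_c of the integral Bellman eq.
        w -= l_c * e_c * d_sigma           # steepest descent on 0.5*e_c**2
print(w)  # approaches the Lyapunov value 0.5 for these parameters
```

The gradient of E_c with respect to w is e_c·dσ, so each update is exactly the steepest-descent direction the section describes; the paper's modified law additionally shapes the weight error dynamic to be asymptotically stable.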
Under the framework of the critic-only architecture, the IRL-based OCC scheme is presented. For each follower, the local neighborhood containment error (3) is established by communicating with its neighbors and the leaders. The value function of each follower is approximated by the critic NN (23), whose weights are tuned by a modified weight updating law (27). Based on (1), (3) and (23), the local OCC protocol (24) is obtained. The structural diagram of the developed IRL-based OCC scheme is shown in Figure 1.

Remark 1.
In the actor-critic architecture, the optimal value function and the optimal control policy are approximated by a critic NN and an actor NN, respectively. For the critic-only architecture, the optimal value function is approximated by a critic NN and the optimal control policy is directly obtained by combining (10) and (22). Hence, the critic-only architecture keeps the same performance as the actor-critic one. Moreover, the critic-only architecture utilizes only a single critic NN, which implies that the control structure is simplified and the computational burden is reduced.

Theorem 4.
Considering the nonlinear MAS with partially unknown dynamics as (1), the local neighborhood containment error dynamic as (4), the optimal value function as (8) and the critic NN which is updated by (27) and (28), the local containment control protocol (24) can guarantee the closed-loop containment error system (4) to be UUB.

Proof of Theorem 4. The Lyapunov function candidate is chosen as (33). Considering (20), (21) and Assumption 3, the time derivative of (33) along (4) is obtained as (34). After bounding the residual terms, (34) becomes (35). It shows that L̇_i2 < 0 if e_i lies outside a certain compact set. Therefore, the closed-loop containment error system (4) is UUB under the local containment control protocol (24).

Remark 2.
From Assumption 1, we know that the nonlinear functions f(x_i) and g_i(x_i) are Lipschitz continuous on a compact set Ω_i containing the origin, with f(0) = 0. It indicates that the developed control scheme is effective on the compact set Ω_i. If the system states are outside this compact set, the scheme might be invalid. In Theorem 4, we analyzed the system stability within such a compact set via the Lyapunov direct method, which means that the closed-loop system is stable on the compact set under the developed IRL-based OCC scheme.

Simulation Study
This section provides two simulation examples to support the developed IRL-based OCC scheme.

Example 1
Consider a six-node graph network connected by three leader nodes. The directed topology of the graph is displayed in Figure 2, where nodes 1-3 stand for the leaders 1-3 and nodes 4-6 represent the followers 1-3. In (3), the edge weights and pinning gains were set to 0.5. The node dynamic of the jth leader is described as ṙ_j = Ār_j, where r_j = [r_j1, r_j2]^T ∈ R^2 represents the state vector, j = 1, 2, 3 and Ā is a constant system matrix. For the ith follower, the node dynamic takes the form of (1). The related parameters were chosen as Q_i = 5I_2, R_ip = R_ii = 1, l_ci = 0.1 and l_si = 0.1.
The simulation results using the developed IRL-based OCC protocols are shown in Figures 3-5. The evolution of the local neighborhood containment errors of the three followers is shown in Figure 3, which indicates that the local neighborhood containment errors were regulated to zero under the developed control protocols. Thus, the containment control of the MAS could be achieved. Figures 4 and 5 depict the state curves of the leaders and the followers, where all followers moved and stayed within the region formed by the envelope curves, which implies that satisfactory containment control performance was acquired. The state curves of the followers and the leaders are displayed as a 2-D phase plane plot in Figure 6, and the region enveloped by the three leaders υ_1, υ_2 and υ_3 is shown at three different instants (t = 16.0 s, 20.3 s and 25.0 s). We can observe from Figure 6 that the followers converged to the convex hull.

Example 2
Consider the nonlinear MAS consisting of three single-link robot arms as followers and three leader nodes. Each robot arm is a rigid link attached via a gear train to a direct current motor [28]. The directed topology among these robot arms is shown in Figure 2. All edge weights and pinning gains were chosen as 1.
For the ith follower, the model (36) can be rewritten in the form of (1). Similar to Example 1, the local neighborhood containment error vector was given as e_i = [e_i1, e_i2]^T ∈ R^2. The critic NN structures and the related activation functions were initialized as in Example 1. The critic NN weights were initialized as random values within (0, 36) and the initial parameters were chosen as r_1(0) = [0, 0.6]^T. Figures 7-11 show the simulation results. The local neighborhood containment errors converged to a small region around zero, as depicted in Figure 7, which shows that the containment control of the nonlinear MAS was achieved. In Figures 8 and 9, it can be found that the state trajectories of the single-link robot arms (36) entered and stayed within the region enveloped by the leader nodes as time progressed, which indicates the satisfactory performance of the developed scheme. The evolution curves of all agents are illustrated as a 2-D phase plane plot in Figure 10. We can see that the convex hull formed by the leaders υ_1, υ_2 and υ_3 contains the followers at the time instants t = 5.0 s, 10.0 s, 14.5 s and 26.0 s, which implies that the followers converged to the convex hull. Figure 11 describes the curves of the containment control inputs, which shows the regulation process of the containment error system.

Conclusions
This paper investigated the OCC problem of nonlinear MASs with partially unknown dynamics via the IRL method. Based on the IRL method, the integral Bellman equation was constructed to relax the requirement of the drift dynamics. The proposed control algorithm was guaranteed to converge by analyzing the equivalence between IRL and model-based PI. With the aid of the universal approximation capability of NNs, the solution of the HJBE was acquired by a critic NN with a modified weight-updating law which guaranteed the asymptotic stability of the weight error dynamics. By using the Lyapunov stability theorem, we showed that the closed-loop containment error system was UUB. The simulation results of two examples illustrated the effectiveness of the proposed IRL-based OCC scheme. In the considered MASs, the information among all agents was transmitted over a communication network, which is always confronted with security issues such as attacks and packet dropouts. The focus of our future work is to develop a novel distributed resilient containment control for MASs subjected to attacks and packet dropouts.