Article

Edge-Event-Triggered Synchronization for Multi-Agent Systems with Nonlinear Controller Outputs

Jie Liu, Ming-Zhe Dai, Chengxi Zhang and Jin Wu
1 The School of Aeronautics and Astronautics, Central South University, Changsha 410083, China
2 The School of Electronic and Information Engineering, Harbin Institute of Technology, Shenzhen 518055, China
3 The Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(15), 5250; https://doi.org/10.3390/app10155250
Submission received: 18 June 2020 / Revised: 19 July 2020 / Accepted: 22 July 2020 / Published: 30 July 2020
(This article belongs to the Section Robotics and Automation)

Abstract

This paper addresses the synchronization problem of multi-agent systems with nonlinear controller outputs via event-triggered control. The combined edge state information is utilized, and all controller outputs are modeled as nonlinear to capture both their inherent nonlinear characteristics and the effects of data transmission in digital communication networks. First, an edge-event-triggered policy is proposed to implement intermittent controller updates without Zeno behavior. Then, an edge-self-triggered solution is further investigated to achieve discontinuous monitoring of sensors. Compared with previous event-triggered mechanisms, our policy design considers the controller output nonlinearities. Furthermore, the system's inherent nonlinear characteristics and networked data transmission effects are combined in a unified framework. Numerical simulations demonstrate the effectiveness of our theoretical results.

1. Introduction

In recent years, distributed coordinated control of multi-agent systems has attracted considerable attention from researchers [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]. Attention has mainly focused on two aspects: how to describe the models of multi-agent systems more accurately, and how to design the information interaction mechanisms between agents to achieve control objectives. For these two aspects, researchers have not only studied the coordinated control of multi-agent systems with various dynamics, but also combined the design of data interaction mechanisms with the effects of communication networks, sensors, and controllers, for instance, data packet loss, sensor noise, and external disturbances.
Recently, event- and self-triggered sampling [21,22,23,24,25,26,27,28,29,30,31,32,33] have been introduced into multi-agent coordinated control [32,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51], where the sampling instants are determined by event-triggered conditions rather than by synchronous or asynchronous digital clock signals. In comparison with traditional periodic sampled-data control, distributed event-triggered control performs data sampling according to the needs of agents, avoiding redundant sampling at unnecessary times and thereby saving system resources. According to the information acquisition methods of the sensors carried by the agents, distributed event-triggered sampling can usually be divided into node-based [34,35,36,38,44,52,53] and edge-based [39,40,41,42,43,45,46,47,54,55] schemes. These two methods are suitable for different scenarios. For node-based event-triggered control, the sensors mounted on each agent are required to sample the individual's absolute state information and communicate with neighboring individuals; for instance, each agent employs GPS and inertial navigation equipment to obtain its absolute velocity and position in a given inertial reference frame. Edge-based event-triggered control is applicable in scenarios where absolute state information is difficult to measure; for instance, in the deep sea or deep space, each AUV or spacecraft employs sonar or LiDAR to measure the relative velocity and position between itself and its neighbors to achieve multi-agent coordination goals. In [39,45,46,47,54,55], all relevant edge state information of each agent is acquired and processed together, which differs from the way of asynchronously acquiring and processing edge state information [40,41,42,43,45]. However, most previous results have not considered controller output nonlinearities. On one hand, the controller device may have inherent nonlinear characteristics that distort the output signals; for instance, the nonlinear characteristics of the control circuits may lead to output distortion and saturation. On the other hand, the employment of quantizers may also lead to distortion of the controller output information.
In this paper, we provide edge-event- and edge-self-triggered control solutions to the synchronization problem of Lipschitz nonlinear multi-agent systems with controller output nonlinearities, in which the edge states associated with each individual are processed together. In addition, we take the effects of controller output nonlinearities into account. These include not only the inherent nonlinear characteristics of the controllers but also the effects of the nonlinear coding required for data transmission in digital communication networks. To avoid the continuous sensor scanning required by continuous monitoring of the measurement errors in the proposed event-triggered condition, an edge-self-triggered policy is proposed to reduce sensor resource consumption. In the designed event-triggered policy, neighboring individuals do not need to employ communication networks to transmit data, whereas in the proposed self-triggered policy a small amount of data communication is necessary, which brings savings in sensor resources. There is thus a trade-off between communication resource consumption and sensor resource consumption between the two policies, which makes them suitable for different application scenarios. Nonlinear dynamic models describe a large number of practical systems in engineering, for example, various types of nonlinear circuits, spring-mass-vehicle devices, inverted pendulums, etc. Research on coordination control of nonlinear multi-agent systems can therefore provide solutions for such scenarios. The proposed triggering mechanisms can save the resources consumed by interaction, and the considered output nonlinearities capture the impact of communication transmission and actuator nonlinearities. Although there have been some results on the coordination of nonlinear multi-agent systems based on event-triggered control [45,54,55], the influences of data quantization and actuator nonlinearities have not been considered. Our work evaluates the applicability of event-triggered mechanisms in such scenarios. The novelties of this study are summarized as follows.
  • The proposed event- and self-triggered policies take the effects of controller output nonlinearities into account, including not only the controllers' inherent nonlinear properties but also the signal distortion caused by network data transmission. In comparison with the edge-based distributed event-triggered control in [39,40,41,42,43,45,46,47,54,55], our policies consider non-ideal digital signal transmission in practical application scenarios, paving the way for digital applications of distributed sampled-data control.
  • The proposed event- and self-triggered schemes trade off different system resources. The event-triggered policy does not require digital signal transmission between neighboring individuals, but it requires all sensors to continuously obtain edge state information to evaluate the triggering conditions and implement intermittent controller updates. The proposed self-triggered policy avoids continuous acquisition of edge state information but requires a small amount of information transmission between neighboring individuals. This trade-off makes the two policies suitable for scenarios with different levels of resource schedulability. Furthermore, in comparison with previous distributed self-triggered approaches [34,39,41,42,46,47,52], the interactive control inputs in our algorithm are quantized information, which is a more efficient solution for saving communication bandwidth.
The rest of this article is organized as follows. Section 2 gives preliminaries on graph theory and the problem formulation. Section 3 provides the convergence analysis of the proposed edge-event-triggered policy. Section 4 further designs the edge-self-triggered control algorithm. Section 5 presents the numerical simulation results. Finally, Section 6 concludes this paper.

2. Preliminaries and Problem Formulation

2.1. Preliminaries on Graph Theory

Consider an undirected graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$ consisting of $N$ nodes and $m$ edges, with node set $\mathcal{V} = \{v_1, v_2, \ldots, v_N\}$ and edge set $\mathcal{E} = \{(v_i, v_j) : v_i, v_j \in \mathcal{V}\}$. The set of neighbors of node $i$ is denoted by $\mathcal{N}_i = \{v_j \in \mathcal{V} : (v_j, v_i) \in \mathcal{E}\}$, where $|\mathcal{N}_i|$ represents the cardinality of $\mathcal{N}_i$, i.e., $2m = |\mathcal{N}_1| + |\mathcal{N}_2| + \cdots + |\mathcal{N}_N|$. Then, we introduce several important matrices in graph theory. The matrix that relates the nodes to the edges is called the incidence matrix, denoted by $H = \{h_{ki}\} \in \mathbb{R}^{m \times N}$ [47]. By choosing an arbitrary orientation for all edges of the undirected graph $\mathcal{G}$, the entries of its incidence matrix are defined as $h_{ki} = 1$ if the $k$th edge sinks at node $i$, $h_{ki} = -1$ if the $k$th edge leaves node $i$, and $h_{ki} = 0$ otherwise. The Laplacian matrix is then represented as $L = H^\top W H$ and the edge Laplacian matrix as $L_E = H H^\top$, where $W = \mathrm{diag}(w_1, w_2, \ldots, w_m)$ and each $w_p > 0$, $p = 1, 2, \ldots, m$, is the weight of the $p$th edge. Moreover, the matrices $L_E$ and $H^\top H$ share the same nonzero eigenvalues.
For the edge $(v_i, v_j) \in \mathcal{E}$ with orientation from agent $v_j$ to $v_i$, let $y_p(t) = y_{(i,j)}(t) = x_i(t) - x_j(t)$ be the corresponding edge state.
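To make the incidence-matrix notation concrete, the short Python sketch below builds $H$, $L = H^\top W H$, and $L_E = H H^\top$ for a small example graph and verifies that $L_E$ and $H^\top H$ share the same nonzero eigenvalues. The 4-node ring used here and its edge orientations are illustrative assumptions; only the edge weights are borrowed from the simulation section.

```python
import numpy as np

# Illustrative 4-node, 4-edge ring; H[k, i] = +1 if edge k sinks at node i,
# -1 if it leaves node i, and 0 otherwise (orientation is arbitrary).
H = np.array([
    [ 1, -1,  0,  0],
    [ 0,  1, -1,  0],
    [ 0,  0,  1, -1],
    [-1,  0,  0,  1],
], dtype=float)
W = np.diag([0.85, 0.97, 1.12, 0.78])   # edge weights w_1, ..., w_m

L = H.T @ W @ H      # weighted graph Laplacian, N x N
L_E = H @ H.T        # edge Laplacian, m x m

# L_E = H H^T and H^T H share the same nonzero eigenvalues.
print(np.round(np.linalg.eigvalsh(L_E), 3))
print(np.round(np.linalg.eigvalsh(H.T @ H), 3))
```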

2.2. Problem Formulation

We consider a multi-agent system composed of a group of agents with nonlinear dynamics, described as
$\dot{x}_i(t) = f(x_i) + u_i(t), \quad i \in \mathcal{V},$ (1)
where $x_i(t) \in \mathbb{R}^n$ is the state vector of the $i$th agent, $u_i(t) \in \mathbb{R}^n$ is the control input, and $f(x_i) \in \mathbb{R}^n$ is the inherent nonlinear dynamics.
Definition 1.
The nonlinear function $f(x)$ in system (1) satisfies the Lipschitz condition in $x \in \mathbb{R}^n$, i.e.,
$\|f(x_i) - f(x_j)\| \le L \|x_i - x_j\|, \quad \forall x_i, x_j \in \mathbb{R}^n,$ (2)
where $L$ represents the Lipschitz constant.
In this article, the main goal is to design edge-based event- and self-triggered policies to solve the synchronization problem of nonlinear multi-agent systems. Assume that for each agent the given event conditions are constantly monitored to determine whether to update, and let the event time sequence of each agent $v_i$ be $t_0^i, t_1^i, t_2^i, \ldots$. For agent $v_i$, denote the latest sampled state of edge $\varepsilon_p$ by $\hat{y}_p^i(t) = \hat{y}_p^i(t_k^i) = y_p(t_k^i)$, $t \in [t_k^i, t_{k+1}^i)$. For each agent $v_i$, $i = 1, 2, \ldots, N$, we design the control protocol with nonlinear controller outputs as follows,
$u_i(t) = -g\Big(c \sum_{p \in \mathcal{E}_i} h_{ip} w_p \hat{y}_p^i(t)\Big), \quad t \in [t_k^i, t_{k+1}^i),$ (3)
where the control gain $c > 0$. Here, the nonlinear vector function $g(Y) : \mathbb{R}^n \to \mathbb{R}^n$ has the form $g(Y) = [g_1(Y_1)^\top, g_2(Y_2)^\top, \ldots, g_N(Y_N)^\top]^\top$, and, for any $q = 1, 2, \ldots, N$, $g_q(Y_q)$ satisfies the following properties.
P1. $g_q(Y_q) = 0$ if and only if $Y_q = 0$.
P2. $g_q(-Y_q) = -g_q(Y_q)$.
P3. $k_1 \|Y_q\| \le \|g_q(Y_q)\| \le k_2 \|Y_q\|$ with positive scalars $k_1 < k_2$.
Besides, the following inequalities can be derived based on the properties P1–P3.
  • $k_1^2 \|Y_q\|^2 \le \|g_q(Y_q)\|^2 \le k_2^2 \|Y_q\|^2$.
  • $k_1 \|Y_q\|^2 \le Y_q^\top g_q(Y_q) \le k_2 \|Y_q\|^2$.
  • $\frac{k_1}{k_2} \|g_q(Y_q)\|^2 \le Y_q^\top g_q(Y_q) \le \frac{k_2}{k_1} \|g_q(Y_q)\|^2$.
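As a quick numerical illustration of P1–P3 and the derived inequalities, the sketch below checks them for the simple odd, sector-bounded scalar nonlinearity $g_q(v) = v(1 + 0.3\cos v)$, which satisfies P1–P3 with $k_1 = 0.7$ and $k_2 = 1.3$; this particular $g_q$ is an assumption chosen only for illustration and is not the nonlinearity used later in the paper.

```python
import numpy as np

k1, k2 = 0.7, 1.3

def g(v):
    """Illustrative odd, sector-bounded nonlinearity: k1*|v| <= |g(v)| <= k2*|v|."""
    return v * (1.0 + 0.3 * np.cos(v))

rng = np.random.default_rng(0)
for _ in range(1000):
    Y = rng.uniform(-10.0, 10.0)
    gY = g(Y)
    # P1-P3 and the three derived inequalities (scalar case).
    assert np.isclose(g(-Y), -gY)                                   # P2: odd symmetry
    assert k1 * abs(Y) <= abs(gY) <= k2 * abs(Y) + 1e-12            # P3
    assert k1**2 * Y**2 <= gY**2 <= k2**2 * Y**2 + 1e-12            # derived inequality (i)
    assert k1 * Y**2 <= Y * gY <= k2 * Y**2 + 1e-12                 # derived inequality (ii)
    assert k1 / k2 * gY**2 <= Y * gY <= k2 / k1 * gY**2 + 1e-12     # derived inequality (iii)
print("All sector-bound checks passed.")
```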
For system (1), its graph $\mathcal{G}$ satisfies the following assumption, and we recall the following lemma.
Assumption 1.
The interaction topology G of system (1) is connected.
Lemma 1.
For a connected graph $\mathcal{G}$, its Laplacian matrix $L$ and edge Laplacian matrix $L_E$ satisfy $\lambda_2 \xi^\top \xi \le \xi^\top L \xi \le \lambda_N \xi^\top \xi$ and $\bar{\lambda}_2 \zeta^\top \zeta \le \zeta^\top L_E \zeta \le \bar{\lambda}_N \zeta^\top \zeta$ for any real vectors $\xi$ and $\zeta$ of compatible dimensions, where $\lambda_2$, $\bar{\lambda}_2$ and $\lambda_N$, $\bar{\lambda}_N$ are the minimum and maximum non-zero eigenvalues of $L$ and $L_E$, respectively, with the properties that $\xi^\top \mathbf{1}_N = 0$ and that $\zeta$ lies in the range space of $H$.
Remark 1.
The proposed controller output nonlinearities mainly include two aspects: the first is the inherent nonlinear characteristics of controllers, such as the nonlinear oscillation characteristics of the control circuit; the second is the nonlinear encoding required for network transmission of information, such as quantized transmission. These influencing factors can be described by nonlinear functions that satisfy the properties P1–P3.

3. Edge-Event-Triggered Policy

We now introduce the following event-triggered condition. For $i = 1, 2, \ldots, N$, denote $r_i(t) = \sum_{p \in \mathcal{E}_i} h_{ip} w_p y_p(t)$ and $\hat{r}_i(t) = \hat{r}_i(t_k^i) = \sum_{p \in \mathcal{E}_i} h_{ip} w_p \hat{y}_p^i(t)$. Define the state measurement error by
$e_i(t) = \hat{r}_i(t) - r_i(t), \quad t \in [t_k^i, t_{k+1}^i).$ (4)
Each triggering instant $t_k^i$ is determined by
$t_{k+1}^i = \inf\{t > t_k^i : \bar{f}_i(t) \ge 0\}, \quad i \in \mathcal{V},$ (5)
where
$\bar{f}_i(t) = \|e_i(t)\|^2 - \sigma_i \|g(c \hat{r}_i(t_k^i))\|^2 - \frac{\beta_1}{\beta_2 + \beta_3 t^2}.$ (6)
Here, $\sigma_i$, $i = 1, 2, \ldots, N$, and $\beta_1$, $\beta_2$, $\beta_3$ are positive constants to be designed.
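A minimal sketch of how the measurement error (4) and the triggering test (5)–(6) could be evaluated in code is given below; the function names and calling convention are implementation assumptions rather than part of the proposed policy.

```python
import numpy as np

def f_bar(e_i, g_c_r_hat_i, sigma_i, beta1, beta2, beta3, t):
    """Triggering function (6): a nonnegative value means an event must be triggered."""
    return (np.linalg.norm(e_i) ** 2
            - sigma_i * np.linalg.norm(g_c_r_hat_i) ** 2
            - beta1 / (beta2 + beta3 * t ** 2))

def check_event(r_i, r_hat_i, g_c_r_hat_i, sigma_i, beta1, beta2, beta3, t):
    """Evaluate the measurement error (4) and the event condition (5)-(6)."""
    e_i = r_hat_i - r_i                      # state measurement error (4)
    return f_bar(e_i, g_c_r_hat_i, sigma_i, beta1, beta2, beta3, t) >= 0.0
```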
Denoting $\tilde{f}(y_p) = f(x_i) - f(x_j)$, the edge state vector $y(t)$ satisfies
$\dot{y}(t) = F(y) - c(H \otimes I_n) G(c \hat{r}) = F(y) - c(H \otimes I_n) G(c(r + e)),$ (7)
where $x(t) = [x_1^\top(t)\ x_2^\top(t)\ \cdots\ x_N^\top(t)]^\top$, $y(t) = [y_1^\top(t)\ y_2^\top(t)\ \cdots\ y_m^\top(t)]^\top$, $e(t) = [e_1^\top(t)\ e_2^\top(t)\ \cdots\ e_N^\top(t)]^\top$, $F(y) = [\tilde{f}(y_1)^\top\ \tilde{f}(y_2)^\top\ \cdots\ \tilde{f}(y_m)^\top]^\top$, and $G(c\hat{r}) = [g(c\hat{r}_1)^\top\ g(c\hat{r}_2)^\top\ \cdots\ g(c\hat{r}_N)^\top]^\top$. Here, $r(t) = (H^\top W \otimes I_n) y(t)$ with $r(t) = [r_1(t)^\top\ r_2(t)^\top\ \cdots\ r_N(t)^\top]^\top$.
The following theorem presents sufficient conditions for achieving synchronization of system (1) with event-triggered information.
Theorem 1.
Suppose that the underlying topology $\mathcal{G}$ of the Lipschitz nonlinear multi-agent system (1) with control input (3) and triggering condition (6) is connected. If the nonlinear function $g(\cdot)$ satisfies properties P1–P3 and there exist positive scalars $a$, $b_1$, $b_2$, $c$, $\beta_1$, $\beta_2$, $\beta_3$, $k_1$, $k_2$, and $\sigma_i$, $i = 1, 2, \ldots, N$, such that the parameter $S_3$ defined in (16) satisfies $S_3 > 0$, then system (1) achieves state synchronization asymptotically for any initial condition. Moreover, all triggering time intervals are lower bounded by a positive scalar, so that all agents achieve Zeno-free triggering.
Proof. 
Choose the following Lyapunov function.
$V(t) = y(t)^\top (W \otimes I_n) y(t).$ (8)
Taking the time derivative of $V(t)$ along the trajectory of (7) yields
$\frac{dV(t)}{dt} = 2 y(t)^\top (W \otimes I_n) F(y) - 2 y(t)^\top (W H \otimes I_n) G(c\hat{r}) \le a w_{\max}\, y(t)^\top (W \otimes I_n) y(t) + \frac{1}{a} F(y)^\top F(y) - 2 \big[(H^\top W \otimes I_n) y(t)\big]^\top G(c\hat{r}) \le \Big(a w_{\max} + \frac{L^2}{a w_{\min}}\Big) y(t)^\top (W \otimes I_n) y(t) - 2 \hat{r}(t)^\top G(c\hat{r}) + 2 e(t)^\top G(c\hat{r}),$ (9)
where we use the fact that $2 y(t)^\top (W \otimes I_n) F(y) \le a\, y(t)^\top (W^2 \otimes I_n) y(t) + \frac{1}{a} F(y)^\top F(y)$ for any positive real constant $a$ to derive the first inequality, and the second inequality takes advantage of the Lipschitz condition on $f(\cdot)$. Note that the constant $a$ has to be fixed in view of the parameter $S_1$ defined below.
Furthermore, Young's inequality $\xi^\top \zeta \le \frac{b}{2} \xi^\top \xi + \frac{1}{2b} \zeta^\top \zeta$ holds for any real vectors $\xi$, $\zeta$ and positive scalar $b$, which implies that
$r(t)^\top r(t) = [\hat{r}(t) - e(t)]^\top [\hat{r}(t) - e(t)] \le (1 + b_1)\, \hat{r}(t)^\top \hat{r}(t) + \Big(1 + \frac{1}{b_1}\Big) e(t)^\top e(t),$ (10)
and
$2 e(t)^\top G(c\hat{r}) \le b_2\, e(t)^\top e(t) + \frac{1}{b_2} G(c\hat{r})^\top G(c\hat{r}) \le \Big(b_2 \sigma_{\max} + \frac{1}{b_2}\Big) G(c\hat{r})^\top G(c\hat{r}) + \frac{b_2 \beta_1 N}{\beta_2 + \beta_3 t^2} \le \Big(b_2 \sigma_{\max} k_2^2 c^2 + \frac{k_2^2 c^2}{b_2}\Big) \hat{r}(t)^\top \hat{r}(t) + \frac{b_2 \beta_1 N}{\beta_2 + \beta_3 t^2},$ (11)
where the second inequality uses the event-triggered condition (6), and the final inequality uses the fact that the nonlinear vector function $G(c\hat{r})$ satisfies $\|G(c\hat{r})\|^2 \le k_2^2 \|c\hat{r}(t)\|^2$. Note that $\sigma_{\max} = \max_{i \in \{1, 2, \ldots, N\}} \sigma_i$. From the inequality (10), one gets
$\hat{r}(t)^\top \hat{r}(t) \ge \frac{1}{1 + b_1} r(t)^\top r(t) - \frac{1}{b_1} e(t)^\top e(t) \ge \frac{1}{1 + b_1} r(t)^\top r(t) - \frac{\sigma_{\max}}{b_1} G(c\hat{r})^\top G(c\hat{r}) - \frac{\beta_1 N}{b_1 (\beta_2 + \beta_3 t^2)} \ge \frac{1}{1 + b_1} r(t)^\top r(t) - \frac{\sigma_{\max} k_2^2 c^2}{b_1} \hat{r}(t)^\top \hat{r}(t) - \frac{\beta_1 N}{b_1 (\beta_2 + \beta_3 t^2)}.$ (12)
Then, collecting the $\hat{r}(t)^\top \hat{r}(t)$ terms in (12), we have
$\hat{r}(t)^\top \hat{r}(t) \ge \frac{b_1}{(1 + b_1)(b_1 + \sigma_{\max} k_2^2 c^2)} r(t)^\top r(t) - \frac{\beta_1 N}{(b_1 + \sigma_{\max} k_2^2 c^2)(\beta_2 + \beta_3 t^2)}.$ (13)
The inequality (9) yields
$\dot{V}(t) \le \Big(a w_{\max} + \frac{L^2}{a w_{\min}}\Big) y(t)^\top (W \otimes I_n) y(t) + \Big(b_2 \sigma_{\max} k_2^2 c^2 + \frac{k_2^2 c^2}{b_2}\Big) \hat{r}(t)^\top \hat{r}(t) - 2 \hat{r}(t)^\top G(c\hat{r}) + \frac{b_2 \beta_1 N}{\beta_2 + \beta_3 t^2} \le \Big(a w_{\max} + \frac{L^2}{a w_{\min}}\Big) y(t)^\top (W \otimes I_n) y(t) - \Big[2 k_1 c - \Big(b_2 \sigma_{\max} k_2^2 c^2 + \frac{k_2^2 c^2}{b_2}\Big)\Big] \hat{r}(t)^\top \hat{r}(t) + \frac{b_2 \beta_1 N}{\beta_2 + \beta_3 t^2} \le K y(t)^\top (W \otimes I_n) y(t) - S_1\, r(t)^\top r(t) + \frac{\beta_1 N}{\beta_2 + \beta_3 t^2} S_2,$ (14)
where
$K = a w_{\max} + \frac{L^2}{a w_{\min}}, \quad S_1 = \frac{2 b_1 b_2 k_1 c - b_1 b_2^2 \sigma_{\max} k_2^2 c^2 - b_1 k_2^2 c^2}{b_2 (1 + b_1)(b_1 + \sigma_{\max} k_2^2 c^2)}, \quad S_2 = \frac{1 + b_1}{b_1} S_1,$ (15)
and where we use the property $\hat{r}(t)^\top G(c\hat{r}) \ge k_1 c\, \hat{r}(t)^\top \hat{r}(t)$ of the nonlinear vector function. Let
$S_3 = \bar{\lambda}_2 w_{\min} S_1 - K.$ (16)
The inequality (14) yields
$\dot{V}(t) \le (K - \bar{\lambda}_2 w_{\min} S_1)\, y(t)^\top (W \otimes I_n) y(t) + \frac{\beta_1 N}{\beta_2 + \beta_3 t^2} S_2 \le -S_3 V(t) + \frac{\beta_1 N}{\beta_2 + \beta_3 t^2} S_2.$ (17)
Therefore, according to the comparison principle in [56], it follows that $V(t) \le \bar{V}(t)$, where $\dot{\bar{V}}(t) = -S_3 \bar{V}(t) + \frac{\beta_1 N}{\beta_2 + \beta_3 t^2} S_2$. Here, $\frac{\beta_1 N}{\beta_2 + \beta_3 t^2} S_2$ is an $\mathcal{L}_p$ signal and $\lim_{t \to \infty} \frac{\beta_1 N}{\beta_2 + \beta_3 t^2} S_2 = 0$, so $\lim_{t \to \infty} \bar{V}(t) = 0$ [47]. Therefore, synchronization is achieved.
Next, we show that the considered system does not exhibit Zeno behavior. If there is a positive lower bound on the inter-event intervals of every agent under the event-triggered condition (6), Zeno-free triggering is achieved. For each pair of adjacent agents $v_i$ and $v_j$, define the auxiliary functions $U_p(t) = \big\|\sum_{p \in \mathcal{E}_i} h_{ip} w_p [g(c\hat{r}_i) - g(c\hat{r}_j)]\big\|$ and $\Pi_p(t) = L \|\hat{r}_i(t)\| + U_p(t)$, which are evaluated between any two consecutive event instants of agent $v_i$ or $v_j$. Let $U_k^p = \max_{t \in [t_k^i, t_{k+1}^i)} U_p(t)$ and $\Pi_k^p = \max_{t \in [t_k^i, t_{k+1}^i)} \Pi_p(t)$. The measurement error vector $e_i(t)$ satisfies
$\|\dot{e}_i(t)\| = \|\dot{r}_i(t)\| \le \Big\|\sum_{p \in \mathcal{E}_i} h_{ip} w_p [f(x_i) - f(x_j)]\Big\| + U_k^p \le L \Big\|\sum_{p \in \mathcal{E}_i} h_{ip} w_p y_p(t)\Big\| + U_k^p \le L \|e_i(t)\| + \Pi_k^p.$ (18)
Since $e_i(t_k^i) = 0$ holds, $\frac{d}{dt}\|e_i(t)\|^2$ satisfies
$\frac{d}{dt}\|e_i(t)\|^2 \le 2 \|\dot{e}_i(t)\| \|e_i(t)\| \le (2L + \beta) \|e_i(t)\|^2 + \frac{(\Pi_k^p)^2}{\beta},$ (19)
with the scalar β > 0 , which implies that
$\|e_i(t)\|^2 \le \int_{t_k^i}^{t} \frac{(\Pi_k^p)^2\, e^{(2L+\beta)(t-s)}}{\beta}\, ds = \frac{(\Pi_k^p)^2 \big(e^{(2L+\beta)(t - t_k^i)} - 1\big)}{\beta (2L + \beta)}.$ (20)
Applying the condition (6), the next event does not occur before $\|e_i(t)\|^2 = \sigma_i \|g(c\hat{r}_i(t_k^i))\|^2 + \frac{\beta_1}{\beta_2 + \beta_3 t^2}$, which implies that the inter-event time interval is lower bounded by $\tau_k^i = t - t_k^i$ satisfying
$\frac{(\Pi_k^p)^2 \big(e^{(2L+\beta)\tau_k^i} - 1\big)}{\beta (2L + \beta)} = \sigma_i \|g(c\hat{r}_i)\|^2 + \frac{\beta_1}{\beta_2 + \beta_3 t^2}.$ (21)
Therefore, it follows that
$\tau_k^i > \frac{1}{2L + \beta} \ln\Big(1 + \frac{\sigma_i \beta (2L + \beta) \|g(c\hat{r}_i)\|^2}{(\Pi_k^p)^2}\Big).$ (22)
Equation (22) holds because the function $\frac{\beta_1}{\beta_2 + \beta_3 t^2}$ remains strictly positive for all finite $t$. Therefore, no Zeno behavior occurs. The proof is completed. □
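To illustrate how the feasibility condition of Theorem 1 can be checked in practice, the following Python sketch evaluates $K$, $S_1$, $S_2$, and $S_3$ from (15)–(16) for a candidate parameter set and reports whether $S_3 > 0$. The helper name and all numerical values below (in particular $a$, $b_1$, $b_2$, $L$, $\sigma_{\max}$, and $\bar{\lambda}_2$) are illustrative assumptions, not parameters taken from the paper.

```python
def theorem1_margins(a, b1, b2, c, k1, k2, L, sigma_max, w_min, w_max, lam2_bar):
    """Evaluate K, S1, S2, S3 from (15)-(16); Theorem 1 requires S3 > 0 (and S1 > 0)."""
    K = a * w_max + L**2 / (a * w_min)
    num = 2*b1*b2*k1*c - b1*b2**2*sigma_max*k2**2*c**2 - b1*k2**2*c**2
    den = b2 * (1 + b1) * (b1 + sigma_max * k2**2 * c**2)
    S1 = num / den
    S2 = (1 + b1) / b1 * S1
    S3 = lam2_bar * w_min * S1 - K
    return K, S1, S2, S3

# Purely illustrative (assumed) parameter values:
K, S1, S2, S3 = theorem1_margins(a=1.0, b1=1.0, b2=20.0, c=6.78, k1=0.7, k2=1.3,
                                 L=1.0, sigma_max=0.001, w_min=0.78, w_max=1.12,
                                 lam2_bar=2.0)
print(f"K={K:.3f}, S1={S1:.3f}, S2={S2:.3f}, S3={S3:.3f}, feasible={S1 > 0 and S3 > 0}")
```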

4. Edge-Self-Triggered Policy

In the event-triggered formulation, it is apparent that continuous monitoring of the measurement error norm $\|e_i(t)\|$ is required to check condition (6). In the following self-triggered control algorithm, this requirement is relaxed. Specifically, the next time $t_{k+1}^i$ at which the control law is updated is predetermined at the previous event time $t_k^i$, and no state or error measurement is required between the control update instants. Define an increasing positive sequence $k_0^i, k_1^i, \ldots, k_{\bar{q}}^i$, taking into account the effects of neighboring agent updates, with $t_{k_0^i} = t_k^i$ and $t_{k_{\bar{q}}^i} \le t_{k+1}^i$. By the inequality (20), the measurement error $e_i(t)$ satisfies
$\|e_i(t)\|^2 \le \frac{(\Pi_k^p)^2 \big(e^{(2L+\beta)(t - t_k^i)} - 1\big)}{\beta (2L + \beta)},$ (23)
where
$U_p(t) = \Big\|\sum_{p \in \mathcal{E}_i} h_{ip} w_p \big[g(c\hat{r}_i) - g(c\hat{r}_j)\big]\Big\|, \quad U_k^p = \max_{t \in [t_k^i, t_{k+1}^i)} U_p(t),$ (24)
and
$\Pi_p(t) = L \|\hat{r}_i(t)\| + U_p(t), \quad \Pi_k^p = \max_{t \in [t_k^i, t_{k+1}^i)} \Pi_p(t).$ (25)
By (6) and (23), the self-triggered function is as follows,
$\tilde{f}(\tau_k^i) = \frac{(\Pi_k^p)^2 \big(e^{(2L+\beta)\tau_k^i} - 1\big)}{\beta (2L + \beta)} - \sigma_i \|g(c\hat{r}_i)\|^2 - \frac{\beta_1}{\beta_2 + \beta_3 (t_k^i + \tau_k^i)^2}.$ (26)
For each $i = 1, 2, \ldots, N$, the self-triggered rule defines the next update time as follows. If there is a $\tau_k^i > 0$ such that $\tilde{f}(\tau_k^i) = 0$, the next update time $t_{k+1}^i$ takes place at most $\tau_k^i$ time units after $t_k^i$. Moreover, agent $v_i$ also rechecks the condition (26) whenever one of its neighbors updates. Otherwise, if the inequality $\tilde{f}(\tau_k^i) < 0$ holds for all $\tau_k^i > 0$, then agent $v_i$ waits until the next update of the control law of one of its neighbors to recheck its condition.
According to the above analysis, we give Algorithm 1 and Theorem 2 as follows.
Algorithm 1: Edge-Self-Triggered Control Algorithm
Step 1: Set the algorithm execution time $T$. At the initial time $t = t_0$, let $k = 0$, $t_k^i = t_0$, $q = 0$, and $k_q^i = 0$. Let $\hat{r}_i(t) = r_i(t_0)$; update $u_i(t_0) = -g(c\hat{r}_i)$; compute $U_k^p$ and $\Pi_k^p$ by (24) and (25); compute the maximum allowable value $\tau_k^i$ from (26) such that $\tilde{f}(\tau_k^i) = 0$.
Step 2: While $0 < t \le T$, check the events related to agent $v_i$ and its neighbors $v_j$, and perform the following steps:
– If any neighboring agent updates before $t_k^i + \tau_k^i$, let $q = q + 1$; recompute $U_k^p$ and $\Pi_k^p$ by (24) and (25), and recompute $\tau_k^i$ from (26) such that $\tilde{f}(\tau_k^i) = 0$;
– If no agent updates before $t_k^i + \tau_k^i$, the event related to agent $v_i$ occurs at $t_k^i + \tau_k^i$; let $k = k + 1$; update the event instant $t_k^i$; let $\hat{r}_i(t) = r_i(t_k^i)$; update $u_i(t_k^i) = -g(c\hat{r}_i)$; let $q = 0$; compute $U_k^p$ and $\Pi_k^p$ by (24) and (25); compute the maximum allowable value $\tau_k^i$ from (26) such that $\tilde{f}(\tau_k^i) = 0$.
Step 3: When $t > T$, terminate Algorithm 1.
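As a complement to the steps above, the following Python sketch shows one possible way an agent could evaluate the self-triggered function (26) and solve $\tilde{f}(\tau_k^i) = 0$ numerically by bisection. The function names, the search upper bound, and the tolerance are implementation assumptions; the sketch only illustrates the root-finding step of Algorithm 1, not the full multi-agent scheduling logic.

```python
import numpy as np

def f_tilde(tau, Pi_k, L, beta, sigma_i, g_c_r_hat_i, beta1, beta2, beta3, t_k):
    """Self-triggered function (26) evaluated at a candidate inter-event time tau."""
    growth = Pi_k**2 * (np.exp((2*L + beta) * tau) - 1.0) / (beta * (2*L + beta))
    threshold = (sigma_i * np.linalg.norm(g_c_r_hat_i)**2
                 + beta1 / (beta2 + beta3 * (t_k + tau)**2))
    return growth - threshold

def next_update_interval(Pi_k, L, beta, sigma_i, g_c_r_hat_i,
                         beta1, beta2, beta3, t_k, tau_max=10.0, tol=1e-6):
    """Return tau solving f_tilde(tau) = 0, or None if f_tilde < 0 on (0, tau_max]."""
    if f_tilde(tau_max, Pi_k, L, beta, sigma_i, g_c_r_hat_i, beta1, beta2, beta3, t_k) < 0:
        return None   # wait for a neighbor's update and then recheck (last case in the rule)
    lo, hi = 0.0, tau_max
    while hi - lo > tol:          # bisection: f_tilde(lo) < 0 <= f_tilde(hi), f_tilde increasing
        mid = 0.5 * (lo + hi)
        if f_tilde(mid, Pi_k, L, beta, sigma_i, g_c_r_hat_i, beta1, beta2, beta3, t_k) < 0:
            lo = mid
        else:
            hi = mid
    return hi
```

In the full algorithm, this root is recomputed whenever a neighboring agent updates, as described in Step 2.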
Theorem 2.
Under the assumptions and conditions in Theorem 1, the system (1) with the control law (3) and Algorithm 1 can achieve synchronization for any initial condition. Furthermore, Zeno-free triggering can be achieved for all agents.
Remark 2.
The proposed edge-event- and edge-self-triggered policies are suitable for different scenarios with different system resources. The proposed event-triggered policy employs sensors to continuously measure edge state information to generate the measurement errors and does not require communication resources; for instance, each individual can employ LiDAR, ultrasonic rangefinders, or sonar to measure the relative positions and relative velocities of all its neighbors. In contrast, the proposed self-triggered policy avoids continuous monitoring by these sensors, but it requires communication networks for information exchange. Therefore, the two policies involve a trade-off between the scheduling of sensor resources and communication resources, which makes them suitable for scenarios with different levels of resource availability.
Remark 3.
Compared with the previous results on self-triggered control [34,39,41,42,46,47,52], the interactive control input information in our algorithm contains the considered nonlinear factors, including the influence of the logarithmic quantizer on the exchanged information. This setting saves communication bandwidth and also extends the distributed self-triggered mechanism to such scenarios.

5. Simulations

This section provides two numerical simulations to verify the effectiveness of the proposed policies. We consider a connected nonlinear multi-agent system with 4 agents and 4 edges, whose topology is shown in Figure 1.
Let the nonlinear dynamics of each agent be expressed as follows,
$\dot{x}_i(t) = x_i + 2\cos(x_i)\sin(t) + u_i(t),$ (27)
where $x_i(t) = [x_{i,1}\ x_{i,2}]^\top$. For the controller, taking into account both its inherent nonlinearity and the nonlinear effects caused by quantized data communication, the inherent nonlinearity function and the quantization function are specified as follows. Let the controllers' inherent nonlinearity function $F(\cdot)$ be
$F_1(c\hat{r}_{i,1}) = c\hat{r}_{i,1}\big(1 - 0.3\cos(0.5 c\hat{r}_{i,1} + 0.5\sin(c\hat{r}_{i,1}) + 0.2 c\hat{r}_{i,1}\cos(0.5 c\hat{r}_{i,1}) + 0.5\sin(3 c\hat{r}_{i,1}))\big), \quad F_2(c\hat{r}_{i,2}) = c\hat{r}_{i,2}\big(1 - 0.3\cos(0.5 c\hat{r}_{i,2} + \sin(c\hat{r}_{i,2}))\big).$ (28)
Let the logarithmic quantizer function $g_q \circ F_q(\cdot)$, $q = 1, 2$, satisfy
$g\big(F_q(\cdot)\big) = \begin{cases} 0, & \text{if } F_q(\cdot) = 0, \\ \mathrm{sign}\big(F_q(\cdot)\big)\, e^{\,\mu \left\lfloor \ln|F_q(\cdot)|/\mu + \frac{1}{2} \right\rfloor}, & \text{otherwise}, \end{cases}$ (29)
with the quantization parameter $\mu = 0.3$. The nonlinear functions $g_q \circ F_q(\cdot)$, $q = 1, 2$, satisfy the properties P1–P3 with $k_1 = 0.7$ and $k_2 = 1.3$. The aforementioned vector functions $F(\cdot)$ and $g(\cdot)$ are plotted in Figure 2. Let the weight matrix be $W = \mathrm{diag}(0.85, 0.97, 1.12, 0.78)$ with $w_{\min} = 0.78$ and $w_{\max} = 1.12$. Let the triggering thresholds be $\sigma_i = 0.1 + 0.01 i$, $i = 1, 2, 3, 4$. Let $\beta_1 = 0.02$ and $\beta_2 = \beta_3 = 1$. The control gain is given by $c = 6.78$. It can be verified that all parameter settings meet the conditions of Theorem 1. The initial values of all agents are given by $[x_1(0)^\top\ x_2(0)^\top\ x_3(0)^\top\ x_4(0)^\top]^\top = [2\ 6\ 4\ 4\ 6\ 8\ 8\ 2]^\top$. The results of the edge-event- and edge-self-triggered policies are illustrated in Figure 3 and Figure 4 and in Table 1 and Table 2, respectively. We show the evolution of the state trajectories, the update times of all controllers, and the update values of all controllers. In subfigures (a) of Figure 3 and Figure 4, Xset-1 and Xset-2 represent the values of $[x_{1,1}\ x_{2,1}\ x_{3,1}\ x_{4,1}]^\top$ and $[x_{1,2}\ x_{2,2}\ x_{3,2}\ x_{4,2}]^\top$, respectively. It can be observed that the proposed control policies drive all agents to state synchronization even though the controllers are updated intermittently and the considered system includes controller output nonlinearities. Furthermore, Table 2 also shows the effectiveness of the trade-off between different system resources in scenarios with nonlinear controller outputs: the event-triggered policy has fewer sampling times but requires continuous employment of sensors, whereas the self-triggered policy has more sampling times and requires communication networks, but saves sensor resources.
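For readers who want to experiment with the setup, the following Python sketch implements a minimal forward-Euler simulation of the edge-event-triggered policy using the dynamics (27), the nonlinearities (28)–(29), and the parameters listed above. The ring topology assumed in place of Figure 1, the integration step, and the exact closed form of the logarithmic quantizer are assumptions made for illustration, so the resulting update counts and trajectories are only indicative of Figure 3 and Table 1, not a reproduction of them.

```python
import numpy as np

# Assumed ring topology with 4 agents and 4 edges (Figure 1 itself is not reproduced here).
H = np.array([[1, -1, 0, 0], [0, 1, -1, 0], [0, 0, 1, -1], [-1, 0, 0, 1]], dtype=float)
W = np.diag([0.85, 0.97, 1.12, 0.78])
N, n, c = 4, 2, 6.78
sigma = np.array([0.11, 0.12, 0.13, 0.14])        # sigma_i = 0.1 + 0.01 i
beta1, beta2, beta3 = 0.02, 1.0, 1.0
mu = 0.3                                          # assumed quantization parameter

def f(x, t):                                      # agent dynamics (27), elementwise
    return x + 2.0 * np.cos(x) * np.sin(t)

def F_nl(v):                                      # controllers' inherent nonlinearity (28)
    out = np.empty_like(v)
    out[0] = v[0] * (1 - 0.3*np.cos(0.5*v[0] + 0.5*np.sin(v[0])
                     + 0.2*v[0]*np.cos(0.5*v[0]) + 0.5*np.sin(3*v[0])))
    out[1] = v[1] * (1 - 0.3*np.cos(0.5*v[1] + np.sin(v[1])))
    return out

def log_quantizer(v):                             # assumed logarithmic quantizer form (29)
    q = np.zeros_like(v)
    nz = v != 0
    q[nz] = np.sign(v[nz]) * np.exp(mu * np.floor(np.log(np.abs(v[nz])) / mu + 0.5))
    return q

def g(v):                                         # overall controller output nonlinearity
    return log_quantizer(F_nl(v))

dt, T = 1e-3, 10.0
x = np.array([[2., 6.], [4., 4.], [6., 8.], [8., 2.]])   # initial states
r_hat = np.zeros((N, n))                                 # latest sampled combined edge states
update_counts = np.zeros(N, dtype=int)

for step in range(int(T / dt)):
    t = step * dt
    y = H @ x                                   # edge states y_p = x_i - x_j
    r = H.T @ W @ y                             # combined edge state r_i of each agent
    for i in range(N):
        e_i = r_hat[i] - r[i]                   # measurement error (4)
        f_bar = (np.linalg.norm(e_i)**2 - sigma[i]*np.linalg.norm(g(c*r_hat[i]))**2
                 - beta1 / (beta2 + beta3*t**2))
        if step == 0 or f_bar >= 0:             # event condition (5)-(6)
            r_hat[i] = r[i].copy()
            update_counts[i] += 1
    u = -np.array([g(c * r_hat[i]) for i in range(N)])   # control law (3)
    x = x + dt * (f(x, t) + u)                  # forward-Euler integration (assumed)

print("controller update counts:", update_counts)
print("final states:\n", np.round(x, 3))
```

Because the quantizer form and topology are assumptions, the printed update counts should be read only qualitatively.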

6. Conclusions

This paper investigates the synchronization problem of nonlinear multi-agent systems with controller output nonlinearities based on edge-event- and edge-self-triggered policies. The combined edge state is utilized by all controllers, and it is affected by the controllers' inherent nonlinear characteristics and by the quantized transmission of communication networks. The event-triggered strategy ensures the intermittent update of all controllers instead of continuous execution, while the self-triggered strategy further prevents the sensors from continuously capturing information, thereby saving sensor resources. The simulations show that the proposed policies are effective and that the trade-off between the two policies with respect to different system resources is meaningful. Our future work on multi-agent systems will address performance design, network security, and switching topologies.

Author Contributions

Conceptualization, M.-Z.D. and C.Z.; methodology, J.L.; software, C.Z.; validation, J.L., J.W. and C.Z.; formal analysis, J.L. and M.-Z.D.; investigation, J.L. and M.-Z.D.; resources, J.W.; data curation, C.Z.; writing—original draft preparation, J.L.; writing—review and editing, J.L., M.-Z.D. and J.W.; visualization, C.Z.; supervision, J.W.; project administration, M.-Z.D.; funding acquisition, M.-Z.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ren, W.; Beard, R.W. Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Control 2005, 50, 655–661.
  2. Olfati-Saber, R.; Fax, J.A.; Murray, R.M. Consensus and cooperation in networked multi-agent systems. Proc. IEEE 2007, 95, 215–233.
  3. Cao, Y.; Yu, W.; Ren, W.; Chen, G. An overview of recent progress in the study of distributed multi-agent coordination. IEEE Trans. Ind. Inform. 2012, 9, 427–438.
  4. Qin, J.; Gao, H. A sufficient condition for convergence of sampled-data consensus for double-integrator dynamics with nonuniform and time-varying communication delays. IEEE Trans. Autom. Control 2012, 57, 2417–2422.
  5. Oh, K.K.; Park, M.C.; Ahn, H.S. A survey of multi-agent formation control. Automatica 2015, 53, 424–440.
  6. Jing, G.; Zheng, Y.; Wang, L. Consensus of multiagent systems with distance-dependent communication networks. IEEE Trans. Neural Netw. Learn. Syst. 2016, 28, 2712–2726.
  7. Zheng, Y.; Ma, J.; Wang, L. Consensus of hybrid multi-agent systems. IEEE Trans. Neural Netw. Learn. Syst. 2017, 29, 1359–1365.
  8. Wang, Q.; Wang, J. Fully Distributed Fault-Tolerant Consensus Protocols for Lipschitz Nonlinear Multi-Agent Systems. IEEE Access 2018, 6, 17313–17325.
  9. Liu, G.; Zhao, L.; Yu, J. Adaptive Finite-Time Consensus Tracking for Nonstrict Feedback Nonlinear Multi-Agent Systems with Unknown Control Directions. IEEE Access 2019, 7, 155262–155269.
  10. Qin, J.; Zhang, G.; Zheng, W.X.; Kang, Y. Adaptive sliding mode consensus tracking for second-order nonlinear multiagent systems with actuator faults. IEEE Trans. Cybern. 2018, 49, 1605–1615.
  11. Qin, J.; Li, M.; Shi, Y.; Ma, Q.; Zheng, W.X. Optimal synchronization control of multiagent systems with input saturation via off-policy reinforcement learning. IEEE Trans. Neural Netw. Learn. Syst. 2018, 30, 85–96.
  12. Ren, C.; Shi, Z.; Du, T. Distributed Observer-Based Leader-Following Consensus Control for Second-Order Stochastic Multi-Agent Systems. IEEE Access 2018, 6, 20077–20084.
  13. Wei, B.; Xiao, F. Distributed Consensus Control of Linear Multiagent Systems with Adaptive Nonlinear Couplings. IEEE Trans. Syst. Man Cybern. Syst. 2019.
  14. He, L.; Zhang, J.; Hou, Y.; Liang, X.; Bai, P. Time-Varying Formation Control for Second-Order Discrete-Time Multi-Agent Systems with Directed Topology and Communication Delay. IEEE Access 2019, 7, 33517–33527.
  15. Ma, J.; Ye, M.; Zheng, Y.; Zhu, Y. Consensus analysis of hybrid multiagent systems: A game-theoretic approach. Int. J. Robust Nonlinear Control 2019, 29, 1840–1853.
  16. Wei, B.; Xiao, F.; Shi, Y. Fully distributed synchronization of dynamic networked systems with adaptive nonlinear couplings. IEEE Trans. Cybern. 2019.
  17. Qin, J.; Zhang, G.; Zheng, W.X.; Kang, Y. Neural network-based adaptive consensus control for a class of nonaffine nonlinear multiagent systems with actuator faults. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3633–3644.
  18. Fu, W.; Qin, J.; Shi, Y.; Zheng, W.X.; Kang, Y. Resilient Consensus of Discrete-Time Complex Cyber-Physical Networks under Deception Attacks. IEEE Trans. Ind. Inform. 2020, 16, 4868–4877.
  19. Wei, B.; Xiao, F.; Shi, Y. Synchronization in Kuramoto oscillator networks with sampled-data updating law. IEEE Trans. Cybern. 2019.
  20. Qin, J.; Ma, Q.; Yu, X.; Wang, L. On Synchronization of Dynamical Systems over Directed Switching Topologies: An Algebraic and Geometric Perspective. IEEE Trans. Autom. Control 2020.
  21. Åström, K.J.; Bernhardsson, B. Comparison of periodic and event based sampling for first-order stochastic systems. IFAC Proc. Vol. 1999, 32, 5006–5011.
  22. Velasco, M.; Fuertes, J.; Marti, P. The self triggered task model for real-time control systems. In Proceedings of the Work-in-Progress Session of the 24th IEEE Real-Time Systems Symposium (RTSS03), Cancun, Mexico, 3–5 December 2003; Volume 384.
  23. Miskowicz, M. Send-on-delta concept: An event-based data reporting strategy. Sensors 2006, 6, 49–63.
  24. Tabuada, P. Event-triggered real-time scheduling of stabilizing control tasks. IEEE Trans. Autom. Control 2007, 52, 1680–1685.
  25. Miskowicz, M. Event-Based Control and Signal Processing; CRC Press: Boca Raton, FL, USA, 2018.
  26. Pan, L.; Lu, Q.; Yin, K.; Zhang, B. Signal source localization of multiple robots using an event-triggered communication scheme. Appl. Sci. 2018, 8, 977.
  27. Theodosis, D.; Dimarogonas, D.V. Event-Triggered Control of Nonlinear Systems with Updating Threshold. IEEE Control Syst. Lett. 2019, 3, 655–660.
  28. Zhu, C.; Li, C.; Chen, X.; Zhang, K.; Xin, X.; Wei, H. Event-Triggered Adaptive Fault Tolerant Control for a Class of Uncertain Nonlinear Systems. Entropy 2020, 22, 598.
  29. Pérez-Torres, R.; Torres-Huitzil, C.; Galeana-Zapién, H. A Cognitive-Inspired Event-Based Control for Power-Aware Human Mobility Analysis in IoT Devices. Sensors 2019, 19, 832.
  30. Thakur, S.S.; Abdul, S.S.; Chiu, H.Y.S.; Roy, R.B.; Huang, P.Y.; Malwade, S.; Nursetyo, A.A.; Li, Y.C.J. Artificial-intelligence-based prediction of clinical events among hemodialysis patients using non-contact sensor data. Sensors 2018, 18, 2833.
  31. Socas, R.; Dormido, R.; Dormido, S. New control paradigms for resources saving: An approach for mobile robots navigation. Sensors 2018, 18, 281.
  32. Hu, Y.; Lu, Q.; Hu, Y. Event-based communication and finite-time consensus control of mobile sensor networks for environmental monitoring. Sensors 2018, 18, 2547.
  33. Zietkiewicz, J.; Horla, D.; Owczarkowski, A. Sparse in the Time Stabilization of a Bicycle Robot Model: Strategies for Event- and Self-Triggered Control Approaches. Robotics 2018, 7, 77.
  34. Dimarogonas, D.V.; Frazzoli, E.; Johansson, K.H. Distributed event-triggered control for multi-agent systems. IEEE Trans. Autom. Control 2012, 57, 1291–1297.
  35. Seyboth, G.S.; Dimarogonas, D.V.; Johansson, K.H. Event-based broadcasting for multi-agent average consensus. Automatica 2013, 49, 245–252.
  36. Guo, M.; Dimarogonas, D.V. Nonlinear consensus via continuous, sampled, and aperiodic updates. Int. J. Control 2013, 86, 567–578.
  37. Zhang, X.; Zhang, J. Distributed event-triggered control of multiagent systems with general linear dynamics. J. Control Sci. Eng. 2014, 2014, 698546.
  38. Nowzari, C.; Cortés, J. Distributed event-triggered coordination for average consensus on weight-balanced digraphs. Automatica 2016, 68, 237–244.
  39. Zhu, W.; Jiang, Z.P. Event-based leader-following consensus of multi-agent systems with input time delay. IEEE Trans. Autom. Control 2015, 60, 1362–1367.
  40. Liuzza, D.; Dimarogonas, D.V.; Di Bernardo, M.; Johansson, K.H. Distributed model based event-triggered control for synchronization of multi-agent systems. Automatica 2016, 73, 1–7.
  41. De Persis, C.; Postoyan, R. A Lyapunov redesign of coordination algorithms for cyber-physical systems. IEEE Trans. Autom. Control 2016, 62, 808–823.
  42. Adaldo, A.; Liuzza, D.; Dimarogonas, D.V.; Johansson, K.H. Cloud-Supported Formation Control of Second-Order Multiagent Systems. IEEE Trans. Control Netw. Syst. 2017, 5, 1563–1574.
  43. Xiao, F.; Shi, Y.; Ren, W. Robustness analysis of asynchronous sampled-data multiagent networks with time-varying delays. IEEE Trans. Autom. Control 2017, 63, 2145–2152.
  44. Gao, J.; Tian, D.; Hu, J. Event-Triggered Cooperative Output Regulation for Heterogeneous Multi-Agent Systems with an Uncertain Leader. IEEE Access 2019, 7, 174270–174279.
  45. Wang, J.; Luo, X.; Yan, J.; Guan, X. Event-Triggered Consensus Control for Second-Order Multi-Agent Systems with/without Input Time Delay. IEEE Access 2019, 7, 156993–157002.
  46. Hu, W.; Yang, C.; Huang, T.; Gui, W. A distributed dynamic event-triggered control approach to consensus of linear multiagent systems with directed networks. IEEE Trans. Cybern. 2018, 63, 2145–2152.
  47. Sun, Z.; Huang, N.; Anderson, B.D.; Duan, Z. Event-Based Multiagent Consensus Control: Zeno-Free Triggering via Lp Signals. IEEE Trans. Cybern. 2018, 50, 284–296.
  48. Wang, Y.W.; Lei, Y.; Bian, T.; Guan, Z.H. Distributed control of nonlinear multiagent systems with unknown and nonidentical control directions via event-triggered communication. IEEE Trans. Cybern. 2020, 50, 1820–1832.
  49. Nguyen, A.T.; Nguyen, T.B.; Hong, S.K. Dynamic Event-Triggered Time-Varying Formation Control of Second-Order Dynamic Agents: Application to Multiple Quadcopters Systems. Appl. Sci. 2020, 10, 2814.
  50. Xu, G.H.; Xu, M.; Ge, M.F.; Ding, T.F.; Qi, F.; Li, M. Distributed Event-Based Control of Hierarchical Leader-Follower Networks with Time-Varying Layer-To-Layer Delays. Energies 2020, 13, 1808.
  51. Shen, Y.; Kong, Z.; Ding, L. Flocking of Multi-Agent System with Nonlinear Dynamics via Distributed Event-Triggered Control. Appl. Sci. 2019, 9, 1336.
  52. Hu, W.; Liu, L.; Feng, G. An event-triggered control approach to cooperative output regulation of heterogeneous multi-agent systems. IFAC-PapersOnLine 2016, 49, 564–569.
  53. Li, H.; Chen, G.; Huang, T.; Zhu, W.; Xiao, L. Event-triggered consensus in nonlinear multi-agent systems with nonlinear dynamics and directed network topology. Neurocomputing 2016, 185, 105–112.
  54. Li, Z.; Yan, J.; Yu, W.; Qiu, J. Event-Triggered Control for a Class of Nonlinear Multiagent Systems with Directed Graph. IEEE Trans. Syst. Man Cybern. Syst. 2020.
  55. Li, Z.; Yan, J.; Yu, W.; Qiu, J. Adaptive Event-Triggered Control for Unknown Second-Order Nonlinear Multiagent Systems. IEEE Trans. Cybern. 2020.
  56. Khalil, H.K. Nonlinear Systems; Prentice Hall: Upper Saddle River, NJ, USA, 2002.
Figure 1. Connected topology.
Figure 2. The functions of controller output nonlinearities. (a) The controllers' inherent nonlinearity function $F(\cdot)$. (b) The logarithmic quantizer function $g(\cdot)$.
Figure 3. Simulation results of edge-event-triggered control. (a) State trajectories of all individuals. (b) Update times of all controllers. (c) Update values of all controllers.
Figure 4. Simulation results of edge-self-triggered control. (a) State trajectories of all individuals. (b) Update times of all controllers. (c) Update values of all controllers.
Table 1. Comparisons of the triggering numbers of each agent in the time interval $t \in [0, 10]$.

Agents                            v_1   v_2   v_3   v_4   Total Numbers   Average Time Intervals
The edge-event-triggered policy    35    35    23    20   113             0.354
The edge-self-triggered policy     71    97   145    48   361             0.111
Table 2. Comparisons of the trade-off between different system resources in $t \in [0, 10]$.

System Resources                  Sampling Times   Communication Times   Controller Update Times
The edge-event-triggered policy   Continuous       0                     113
The edge-self-triggered policy    361              361                   361
