On Leader-Following Consensus in Multi-Agent Systems with Discrete Updates at Random Times

This paper studies the leader-following consensus problem in continuous-time multi-agent networks with communications/updates occurring only at random times. The time between two consecutive controller updates is exponentially distributed. Sufficient conditions are derived for the design of a control law that ensures leader-following consensus is asymptotically reached (in the sense of the expected value of a stochastic process). Numerical examples are worked out to demonstrate the effectiveness of our theoretical results.


Introduction
In recent years, we have witnessed increasing attention on the distributed cooperative control of dynamic multi-agent systems due to their vast applications in various fields. In many situations, groups of dynamic agents need to interact with each other, and their goal is to reach an agreement (consensus) on a certain task. Examples include flocking of birds during migration [1,2] so that they eventually reach their destinations, or robot teams synchronizing in order to accomplish their collective tasks [3,4]. The main challenge for distributed cooperative control of multi-agent systems is that interaction between agents is based only on local information. There already exists a vast literature concerning first-order [3,5], second-order [6,7], and fractional-order [8][9][10] networks. For a survey of recent results, we refer the reader to [11]. Among the different approaches to the consensus problem in multi-agent networks, one can find continuous-time evolution of the agents' states (the state trajectory is a continuous curve) [3,5,12,13], discrete-time evolution (the state trajectory is a sequence of values) [14][15][16][17][18][19], and mixed continuous and discrete-time evolution (the domain of the state trajectory is an arbitrary time scale) [20][21][22][23]. An important question connected with the consensus problem is whether the communication topology is fixed over time or is time-varying; that is, whether communication channels are allowed to change over time [24]. The latter case seems more realistic; therefore, researchers mostly focus on it. Going further, in real-world situations, it may happen that the agents' states are continuous but the exchange of information between agents occurs only at discrete time instants (update times). This issue has already been addressed in the literature [25]. In this paper we also investigate such a situation.
However, our approach is new and more challenging: we consider the case when agents exchange information with each other at random instants of time. Another question to be answered is whether the consensus problem is considered with or without a leader. Based on the existence of a leader, there are two kinds of consensus problems: leaderless consensus and leader-following consensus. The latter problem consists in establishing conditions under which, through local interactions, all the agents reach an agreement upon a common state (consensus), which is defined by the dynamic leader. A great number of works have already been devoted to the consensus problem with a leader (see, e.g., [24,[26][27][28] and the references given there).
In the present paper, we investigate leader-following consensus for multi-agent systems. It is assumed that the agents' state variables are continuous but the exchange of information between them occurs only at discrete time instants (update times) appearing randomly. In other words, the consensus control law is applied at those update times. We analyze the case when the sequence of update times is a sequence of random variables and the waiting time between two consecutive updates is exponentially distributed. To avoid unnecessary complexity, we assume that the update times are the same for all agents. Combining the continuity of the state variables with discrete random communication times requires the introduction of an artificial state variable for each agent that evolves in continuous time and is allowed to have discontinuities; the primary state variable is continuous for all time. Between update times, both the original state and the artificial variable evolve continuously according to some specified dynamics. At randomly occurring update times, the state variable keeps its current value, while the artificial variable is updated by the state values received from other agents, including the leader. It is worth noting that, in the case of deterministic update times known initially, the idea of artificial state variables is applied in [15,16,25]. The presence of randomly occurring update times in the model leads to a total change in the behavior of the solutions: they change from deterministic real-valued functions to stochastic processes. This requires combining results from probability theory with those from the theory of ordinary differential equations with impulses. In order to analyze the leader-following consensus problem, we define the state error system and a sample path solution of this system.
Since solutions to the studied model of a multi-agent system with discrete-time updates at random times are stochastic processes, asymptotically reaching leader-following consensus is understood in the sense of the expected value.
The paper is organized in the following manner. In Section 2, we describe the multi-agent system of our interest in detail. Some necessary definitions, lemmas, and propositions from probability theory are given in Section 3. Section 4 contains our main results. First, we describe a stochastic process that is a solution to the continuous-time system with communications at random times. Next, sufficient conditions for global asymptotic leader-following consensus in a continuous-time multi-agent system with discrete-time updates occurring at random times are proven. In Section 5, illustrative examples with numerical simulations are presented to verify the theoretical discussion. Some concluding remarks are drawn in Section 6.
Notation: For a given vector x ∈ R^n, ‖x‖ stands for its Euclidean norm ‖x‖ = √(x^T x). For a given square n × n matrix A = [a_ij], ‖A‖ stands for its spectral norm ‖A‖ = max_i {√λ_i}, where λ_i are the eigenvalues of A A^T. We have ‖A‖ ≤ n max_{i,j} |a_ij| and ‖e^A‖ ≤ e^{‖A‖}.
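As a quick numerical sanity check of these norm inequalities (our sketch, not part of the original paper; the test matrix below is arbitrary), one can verify ‖A‖ ≤ n max_{i,j} |a_ij| and ‖e^A‖ ≤ e^{‖A‖} with NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))          # arbitrary test matrix

spec = np.linalg.norm(A, 2)              # spectral norm: max_i sqrt(lambda_i(A A^T))
bound = n * np.abs(A).max()              # entrywise bound n * max|a_ij|
exp_norm = np.linalg.norm(expm(A), 2)    # norm of the matrix exponential

assert spec <= bound + 1e-12             # ||A|| <= n max|a_ij|
assert exp_norm <= np.exp(spec) + 1e-9   # ||e^A|| <= e^||A||
```

Both bounds follow from submultiplicativity of the spectral norm, which is why the check passes for any choice of A.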

Statement of the Model
We consider a multi-agent system consisting of N agents and one leader. The state of agent i is denoted by y i : [t 0 , ∞) → R, i = 1, . . . , N, and the state of the leader by y r : [t 0 , ∞) → R, where t 0 ≥ 0 is a given initial time. Without information exchange between agents, the leader has no influence on the other agents (see Example 1, Case 1.1, and Example 2, Case 2.1). In order to analyze the influence of the leader on the behavior of the other agents, we assume that there is information exchange between agents but it occurs only at random update times. In other words, the model is set up as the continuous-time multi-agent system with discrete-time communications/updates occurring only at random times.
Let us denote by (Ω, F, P) a probability space, where Ω is the sample space, F is a σ-algebra on Ω, and P is the probability measure on F. Consider a sequence of independent, exponentially distributed random variables {τ_k}_{k=1}^∞ with parameter λ > 0 and such that ∑_{k=1}^∞ τ_k = ∞ with probability 1. Define the sequence of random variables {ξ_k}_{k=0}^∞ by ξ_0 = t_0 and ξ_k = ξ_{k−1} + τ_k, k = 1, 2, . . . , where t_0 is a given initial time. The random variable τ_k measures the waiting time between the (k − 1)-st and the k-th controller update, and the random variable ξ_k denotes the time at which the k-th controller update occurs for t ≥ t_0. At each time ξ_k agent i updates its state variable according to the equation Δy_i(ξ_k) = u_i(ξ_k), where u_i : R → R is the control input function for the i-th agent. Here, Δy_i(ξ_k) is the difference between the value of the state variable of the i-th agent after the update, y_i(ξ_k + 0), and before it, y_i(ξ_k); i.e., Δy_i(ξ_k) = y_i(ξ_k + 0) − y_i(ξ_k). The state of the leader remains unchanged; that is, Δy_r(ξ_k) = 0. For each agent i we consider a control law, applied at the random times ξ_k, k = 1, 2, . . . , based on the information it receives from its neighboring agents and the leader: where the weights a_ii(t) ≡ 0, i = 1, 2, . . . , N, and a_ij(t) ≥ 0, t ≥ t_0, i, j = 1, 2, . . . , N, are the entries of the weighted connectivity matrix A(t) at time t, and ω_i(t) > 0 if the virtual leader is available to agent i at time t, while ω_i(t) = 0 otherwise. Between two update times ξ_{k−1} and ξ_k, any agent i has information only about its own state. More precisely, the dynamics of agent i are described by: The leader for the multi-agent system is an isolated agent with constant reference state y_r(t) = 0. Observe that the model described above can be written as a system of differential equations with impulses at the random times ξ_k, k = 1, 2, . . .
, and the waiting times τ_k between two consecutive updates, as follows: with initial conditions: We introduce an additional (artificial) variable Y_i for each state y_i, i = 1, 2, . . . , N, such that it has discontinuities at the random times ξ_k, and Y_r = y_r. These variables allow us to keep the state of each agent y_i, i = 1, 2, . . . , N, as a continuous function of time. Between two update times ξ_{k−1} and ξ_k, the evolution of Y_i and Y_r is given by: Then, by Equations (2) and (4), we obtain: Consequently, we get the following system: At each update time we set: The initial conditions for (5) and (6) are: Observe that the dynamics described by (5) lead to a decrease of the absolute difference between a state variable y_i and the artificial variable Y_i, i = 1, 2, . . . , N, whereas, by (6), the value of Y_i is updated using the information received, while y_i remains unchanged. Therefore, Equations (5) and (6) provide a formal description of the multi-agent system with continuous-time states of the agents and information exchange between the agents occurring at discrete time instants.
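The random update mechanism above is easy to simulate. The following sketch (our illustration; the rate λ = 45 is borrowed from the examples in Section 5) draws exponential waiting times τ_k and accumulates them into update times ξ_k = t_0 + ∑_{j≤k} τ_j:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, t0, K = 45.0, 0.0, 10_000                   # rate lambda, initial time, number of updates

tau = rng.exponential(scale=1.0 / lam, size=K)   # waiting times tau_k ~ Exp(lambda)
xi = t0 + np.cumsum(tau)                         # update times xi_k = t0 + sum of tau_j

assert np.all(np.diff(xi) > 0)                   # update times strictly increase
assert abs(tau.mean() - 1.0 / lam) < 2e-3        # sample mean close to 1/lambda
```

Note that NumPy's `exponential` takes the scale 1/λ, not the rate λ; the sample mean of the waiting times then concentrates around 1/λ, so updates occur at average rate λ.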
Let x_i(t) = y_i(t) − y_r(t) and X_i(t) = Y_i(t) − y_r(t), i = 1, 2, . . . , N, be the errors between the states y_i or Y_i and the leader state y_r at time t. Then, by (5)-(7), one gets the following error system: where the coefficients d_ij are the entries of the matrix: Now let us introduce the 2N × 2N-dimensional matrices C(t) and D(t). Then, denoting Z = (x_1, X_1, x_2, X_2, . . . , x_N, X_N)^T, we can write error system (8) in the following matrix form:

Some Preliminary Results from Probability Theory
In this section, having in mind the definitions of random variables {τ k } ∞ k=1 and {ξ k } ∞ k=1 , given in Section 2, we list some facts from probability theory that will be used in the proofs of our main results.
Let t ≥ t_0 be a fixed point. Consider the events: Note that, for any fixed point t and any element ω ∈ Ω, there exists a natural number k such that ω ∈ S_k(t) and ω ∉ S_j(t) for j ≠ k; equivalently, for any fixed point t there exists a natural number k such that Δ_k(t) = 1 and Δ_j(t) = 0 for j ≠ k.
where E{.} denotes the mathematical expectation.

Corollary 1 ([30]).
The probability that exactly k controller updates of each agent occur by the time t, t ≥ t_0, is given by the equality P(ξ_k ≤ t < ξ_{k+1}) = ((λ(t − t_0))^k / k!) e^{−λ(t − t_0)}.
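Corollary 1 is the classical fact that the number of renewals of an exponential waiting-time process by time t follows a Poisson law with mean λ(t − t_0). A Monte Carlo sketch (our illustration; the parameter values below are arbitrary) agrees with this formula:

```python
import math
import numpy as np

rng = np.random.default_rng(2)
lam, t0, t, k, trials = 2.0, 0.0, 1.5, 3, 50_000
mu = lam * (t - t0)

# Simulate many sample paths: cumulative sums of Exp(lambda) waiting times.
# Twenty waiting times suffice here, since P(20 or more updates by t) is negligible.
tau = rng.exponential(1.0 / lam, size=(trials, 20))
xi = t0 + np.cumsum(tau, axis=1)
counts = (xi <= t).sum(axis=1)                        # updates that occurred by time t

empirical = np.mean(counts == k)
poisson = math.exp(-mu) * mu**k / math.factorial(k)   # probability of exactly k updates
assert abs(empirical - poisson) < 0.01
```

The empirical frequency of "exactly k updates by time t" matches the Poisson probability to within Monte Carlo error.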

Definition 1 ([29]).
We say that the stochastic processes m and n satisfy the inequality m(t) ≤ n(t), t ∈ J ⊂ R, if the state space of the stochastic process m(t) − n(t) is a subset of (−∞, 0].

Proposition 2 ([29]).
If the stochastic processes m and n satisfy the inequality m(t) ≤ n(t) for t ∈ J ⊂ R, then E(m(t)) ≤ E(n(t)) for t ∈ J.

Leader-Following Consensus
Consider the sequence of points {t_k}_{k=1}^∞, where t_k is an arbitrarily chosen value of the corresponding random variable τ_k, k = 1, 2, . . . . Define the increasing sequence of points {T_k}_{k=0}^∞ by T_0 = t_0 and T_k = T_0 + ∑_{j=1}^k t_j for k = 1, 2, . . . .

Remark 1.
Note that if t k is a value of the random variable τ k , k = 1, 2, . . . , then T k is a value of the random variable ξ k , k = 1, 2, . . . , correspondingly.
Since the multi-agent system with the leader described by system (2)-(3) is equivalent to system (9), we focus on initial value problem (9).
Let us consider the following system of impulsive differential equations with fixed points of impulses and fixed length of action of the impulses: or its equivalent matrix form: Note that system (11) is a system of impulsive differential equations with impulses at the deterministic time moments {T_k}_{k=0}^∞. For a deeper discussion of impulsive differential equations we refer the reader to [31] and the references given there. The solution to (11) depends not only on the initial condition (t_0, Z_0) but also on the moments of impulses T_k, k = 1, 2, . . . , i.e., on the arbitrarily chosen values t_k of the random variables τ_k, k = 1, 2, . . . , and is given by: The set of all solutions Z(t; t_0, Z_0, {T_k}) of the initial value problems of type (11), for any values t_k of the random variables τ_k, k = 1, 2, . . . , generates a stochastic process with state space R^2N. We denote it by Z(t; t_0, Z_0, {τ_k}) and call it a solution to initial value problem (9). Following the ideas of a sample path of a stochastic process [29,32], we define a sample path solution of the studied system (9).
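A sample path of such an impulsive system can be computed numerically by alternating the continuous flow e^{C(t − T_{k−1})} with the impulse map at each T_k. The sketch below is our illustration only: it assumes constant matrices C and D with hypothetical 2 × 2 values chosen by us, not the actual matrices of system (9):

```python
import numpy as np
from scipy.linalg import expm

def sample_path(C, D, z0, t0, impulse_times, t):
    """One sample path of z' = C z with jumps z -> D z at the given impulse times."""
    z, s = np.asarray(z0, dtype=float), t0
    for Tk in impulse_times:
        if Tk >= t:
            break
        z = D @ expm(C * (Tk - s)) @ z   # flow up to T_k, then apply the impulse
        s = Tk
    return expm(C * (t - s)) @ z         # flow from the last impulse before t to t

# hypothetical data: a marginally stable C and a contractive impulse map D
C = np.array([[-1.0, 1.0], [1.0, -1.0]])
D = 0.5 * np.eye(2)
rng = np.random.default_rng(3)
times = np.cumsum(rng.exponential(1.0 / 5.0, size=50))     # values T_k of xi_k
z_end = sample_path(C, D, np.array([1.0, 2.0]), 0.0, times, 5.0)
assert np.linalg.norm(z_end) < np.linalg.norm([1.0, 2.0])  # impulses contract the error
```

For these hypothetical matrices the flow does not expand the state and each impulse halves it, so the error norm shrinks along every sample path, mirroring the contraction argument used in the consensus proof.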

Definition 2.
For any given values t k of the random variables τ k , k = 1, 2, 3, . . . , respectively, the solution Z(t; t 0 , Z 0 , {T k }) of the corresponding initial value problem (10) is called a sample path solution of initial value problem (9).

Definition 3.
A stochastic process Z(t; t 0 , Z 0 , {τ k }) with an uncountable state space R 2N is said to be a solution of initial value problem (9) if, for any values t k of the random variables τ k , k = 1, 2, . . . , the corresponding function Z(t; t 0 , Z 0 , {T k }) is a sample path solution of initial value problem (9).

Remark 2.
Observe that, since x_i(t) = y_i(t) − y_r(t), i = 1, . . . , N, and initial value problem (2)-(3) is equivalent to initial value problem (9), equality (12) means that: Now we prove the main results of the paper, which are sufficient conditions for the leader-following consensus in a continuous-time multi-agent system with discrete-time updates occurring at random times.
Theorem 1. Assume that:
(A1) The weights of system (9) are such that there exists a real α ∈ (0, 1) for which conditions (14) and (15) hold;
(A2) The random variables τ_k, k = 1, 2, . . . , are independent and exponentially distributed with parameter λ such that λ > 2N/(1 − α).
Then, for any initial point t_0 ≥ 0, the solution Z(t; t_0, Z_0, {τ_k}) of the initial value problem with random moments of impulses (9) is given by the formula: and the expected value of the solution satisfies the inequality: Proof. Let t_0 ≥ 0 be an arbitrary given initial time. According to (A1), we have: and ‖e^{C(t)}‖ ≤ e^{2N} for t ≥ t_0. For any k = 1, 2, . . . , we choose an arbitrary value t_k of the random variable τ_k and define the increasing sequence of points T_0 = t_0, T_k = t_0 + ∑_{j=1}^k t_j, k = 1, 2, 3, . . . . By Remark 1, for any natural k, T_k is a value of the random variable ξ_k. Consider the initial value problem of impulsive differential equations with fixed points of impulses (11). The solution of initial value problem (11) is given by the formula: Then, for t ∈ (T_{k−1}, T_k], we get the following estimate: The solutions Z(t; t_0, Z_0, {T_k}) generate the stochastic process Z(t; t_0, Z_0, {τ_k}) defined by (16). It is a solution to the initial value problem of impulsive differential equations with random moments of impulses (9). According to Proposition 2, Proposition 3, and inequality (17), we get: Therefore, applying Corollary 1, we obtain:
Remark 3. Conditions (14) and (15) are satisfied only for ω_i(t), i = 1, 2, . . . , N, such that ω_i(t) ≠ 0 for all i = 1, 2, . . . , N and t ≥ t_0. Indeed, assume that there exist i = 1, 2, . . . , N and t* ≥ t_0 such that ω_i(t*) = 0. Then inequality (14) reduces to |1 − ∑_{j=1}^N a_ij(t*)| < α/(2N). If 1 < ∑_{j=1}^N a_ij(t*), then from (14) it follows that 1 ≤ α(N − 1)/(2N), i.e., 2N/(N − 1) ≤ α, which is not possible since α ∈ (0, 1). Therefore, assume that 1 ≥ ∑_{j=1}^N a_ij(t*) and

Theorem 2.
If the assumptions of Theorem 1 are satisfied, then the leader-following consensus for multi-agent system (2) is reached asymptotically.
According to Remark 3, condition (A1) is satisfied only in the case when the leader is available to each agent at any random update time. This situation can be interpreted as follows. The leader can be viewed as the root node of the communication network; if there exists a directed path from the root to each agent (device), then all the agents can track the objective successfully. Since the leader can perceive more information in order to guide the whole group to complete the task (consensus), it seems reasonable to demand that it be available to each follower at any random update time.

Illustrative Examples
In this section, numerical examples are given to verify the effectiveness of the proposed sufficient conditions for a multi-agent system to asymptotically achieve leader-following consensus.
In all examples, we set t 0 = 0 and consider a sequence of independent exponentially distributed random variables {τ k } ∞ k=1 with parameter λ > 0 (it will be defined later in each example) and the sequence of random variables {ξ k } ∞ k=0 defined by (1).

Example 1.
Let us consider a system of three agents and the leader. In order to illustrate the meaningfulness of the studied model and the obtained results, we consider three cases. Case 1.1. There is no information exchange between agents and the leader is not available. The dynamics of the agents are given by: y′_r(t) = 0. (18) Figure 1 shows the solution to system (18) with the initial values y_1^0 = 1, y_2^0 = 2, y_3^0 = 3, y_r^0 = 3/2. From the graphs in Figure 1 it can be seen that the leader-following consensus is not reached.

Case 1.2.
There is information exchange between agents (including the leader) occurring at random update times.
The dynamics between two update times of each agent and of the leader are given by (compare with (18)): y′_r(t) = 0. The consensus control law at any update time ξ_k, k = 1, 2, . . . , is given by: Hence: Observe that, for α ∈ (0.6, 1), Assumption (A1) of Theorem 1 is fulfilled. Let λ = 45. Then, for α = 0.7, Assumption (A2) of Theorem 1 holds. Therefore, by Theorem 2, the leader-following consensus for multi-agent system (19) with the consensus control law (20) at any update time is reached asymptotically.
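The parameter check behind Case 1.2 can be scripted. Taking the values stated in the text (N = 3, α = 0.7, λ = 45), Assumption (A2) requires λ > 2N/(1 − α) = 20:

```python
N, alpha, lam = 3, 0.7, 45.0         # values used in Case 1.2

threshold = 2 * N / (1 - alpha)      # Assumption (A2): lambda must exceed 2N/(1 - alpha)
assert 0 < alpha < 1                 # alpha from Assumption (A1)
assert abs(threshold - 20.0) < 1e-9  # 2*3/0.3 = 20
assert lam > threshold               # 45 > 20, so (A2) holds
```

Since 45 > 20, the exponential rate of the update times is large enough for the expected-value bound of Theorem 1 to decay.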
Clearly, the leader state is y_r(t) ≡ 1.5. For each value of the random variables (i)-(iv), we get a system of impulsive differential equations with fixed points of impulses of type (11), with N = 3 and the matrices C(t), D(t) given above. Figures 2-4 present the state trajectories of the leader y_r(t) and of the agents y_1(t), y_2(t), and y_3(t), respectively. It is visible that the leader-following consensus is reached for all considered sample path solutions.
Case 1.3.
The dynamics between two update times of each agent are given by (19), and at update time ξ_k, k = 1, 2, . . . , the following control law is applied: u_1(ξ_k) = −(1 − 0.09 cos²(τ_k))(y_1(ξ_k) − y_r(ξ_k)), u_2(ξ_k) = −(1 + 0.09 sin(τ_k))(y_2(ξ_k) − y_r(ξ_k)), Therefore: and C(t) is the same as in Case 1.2. It is easy to check that, for α = 0.7 and λ = 45, Assumptions (A1) and (A2) are fulfilled. According to Theorem 2, the leader-following consensus for multi-agent system (19) with the consensus control law (21) at any update time is reached asymptotically.
To illustrate the behavior of the solutions of the model with impulses occurring at random times, we consider sample path solutions with the same data as in Case 1.2. Figures 5-7 present the state trajectories of the leader y_r(t) and of the agents y_1(t), y_2(t), and y_3(t), respectively. It is visible that the leader-following consensus is reached in all considered sample path solutions.
Example 2.
Let the system consist of four agents and the leader. In order to illustrate the meaningfulness of the studied model and the obtained results, we consider four cases.
Clearly, the leader state is y_r(t) ≡ 1.5. Figures 9-12 present the state trajectories of the leader y_r(t) and of the agents y_1(t), y_2(t), y_3(t), and y_4(t), respectively. It is visible that the leader-following consensus is reached for all considered sample path solutions.

Case 2.3.
There is information exchange between agents occurring at random update times but the leader is not available for agents.
To illustrate the behavior of the solutions of the model with impulses occurring at random times, we fix λ = 55 and consider sample path solutions with the same data as in Case 2.2. Figures 13-16 present the state trajectories of the leader y_r(t) and of the agents y_1(t), y_2(t), y_3(t), and y_4(t), respectively. Observe that the leader-following consensus is not reached in any of the considered sample path solutions.
Case 2.4.
The leader is not available to one agent at all update times. The dynamics between two update times of each agent are given by (23), and at update time ξ_k, k = 1, 2, . . . , the following control law is applied: −(y_4(ξ_k) − y_r(ξ_k)), k = 1, 2, . . . .
To illustrate the behavior of the solutions of the model with impulses occurring at random times, we consider sample path solutions with the same data as in Case 2.2.
Figures 17-20 present the state trajectories of the four agents and the leader, respectively. Observe that the leader-following consensus is not reached in any of the considered sample path solutions. This is visible in Figure 18, where the graphs of the state trajectory of the second agent for various values of the random variables are presented. This shows the importance of Assumption (A1). However, we emphasize that, in the model considered in this paper, the information exchange between agents is possible only at discrete random update times and the waiting time between two consecutive updates is exponentially distributed (similarly to queuing theory). Of course, in general, it is obvious that if the leader is continuously available to the agents, then the leader-following consensus is reached. But in this paper we consider the situation when the leader is available only from time to time, at random times (so it is not available continuously). We deliver conditions under which, in spite of the lack of this continuous information flow from the leader to the agents, the leader-following consensus is still reached. Both examples illustrate that interaction between the leader and the other agents only at random update times significantly changes the behavior of the agents. If conditions (A1) and (A2) are satisfied, then the leader-following consensus is reached in multi-agent system (2).

Conclusions
The leader-following consensus problem is a key point in the analysis of dynamic multi-agent networks. In this paper, we considered the situation when agents exchange information only at discrete-time instants that occur randomly. The proposed control law was distributed, in the sense that only information from neighboring agents was included, and it was applied only at the randomly occurring update times. In the case wherein the random update times equal initially given deterministic times, our model reduces to the continuous-time multi-agent system with discrete-time communications studied in [25]. The main difference between our model and previous approaches is that we considered the sequence of update times as a sequence of random variables. Moreover, unlike in other investigations, the waiting time between two consecutive updates was exponentially distributed; this choice was motivated by the prominent role of the exponential distribution in queuing theory. The presence of randomly occurring update times required using results from probability theory and the theory of differential equations with impulses in order to describe the solution to the considered multi-agent system. We provided conditions on the control law that ensure asymptotic leader-following consensus in the sense of the expected value of a stochastic process. This work may be treated as a first step towards the analysis of consensus problems for multi-agent systems with discrete updates at random times. For example, one possible problem to be investigated in the future is to deliver conditions under which consensus is achieved in a multi-agent system with discrete updates at random times in spite of denial-of-service attacks, or for systems with double-integrator dynamics. Another important and interesting issue is to work out a model of a real-world system of agents and to apply our theoretical results to it.
For this purpose, we have to develop or adapt existing numerical procedures for simulating the evolution of a system with a greater number of agents. This problem is currently under investigation.

Funding: The APC was funded by the Bialystok University of Technology Grant W/WI-IIT/1/2020, financed from a subsidy provided by the Minister of Science and Higher Education.