1. Introduction
The problem of dynamic consensus, also referred to as dynamic average tracking, consists of computing the average of a set of time-varying signals distributed across a communication network. This problem has recently attracted considerable attention due to its applications in cyber-physical systems and robotics. For instance, dynamic consensus algorithms can be used for multi-robot coordination, formation control, and target tracking, as in [1,2]. A more detailed overview of the common problems in the field of multi-robot coordination can be found in [3].
Many dynamic consensus approaches in continuous time exist in the literature [4]. For example, a high-order linear dynamic consensus approach is proposed in [5], which achieves arbitrarily small steady-state errors for persistently varying signals by tuning a protocol parameter. The protocol in [6] adopts a similar linear approach and analyses its performance against bounded cyber-attacks in the communication network. In [7], another linear consensus protocol is proposed to address the problem of having active and passive sensing nodes with time-varying roles. While these approaches obtain good theoretical results in continuous time, only discrete-time communication is possible in practice. In contrast, other works are designed directly in a discrete-time setting, where local time-varying signals are sampled with a fixed sampling step shared across the network. For example, a high-order discrete-time linear dynamic consensus algorithm is proposed in [8], with performance similar to that of its continuous-time counterparts. The proposal in [9] extends the previous work to make it robust to initialization errors. Similar ideas are applied to distributed convex optimization in [10].
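To make the discrete-time linear setting concrete, the following is a minimal sketch of a generic first-order dynamic average consensus update. This is a textbook-style scheme, not the specific protocols of the cited works; the gain, sampling step, graph, and sinusoidal signals are illustrative choices.

```python
import math

def linear_dynamic_consensus(signals, neighbors, kappa=10.0, T=0.01, steps=2000):
    """Generic first-order discrete-time dynamic average consensus.

    signals[i] is a function of time returning agent i's local signal;
    each estimate x[i] tracks the network-wide average of all signals.
    """
    n = len(signals)
    x = [signals[i](0.0) for i in range(n)]  # exact initialization
    prev = list(x)                           # previous local samples
    for k in range(1, steps + 1):
        t = k * T
        new = []
        for i in range(n):
            mix = sum(x[j] - x[i] for j in neighbors[i])  # consensus coupling
            feed = signals[i](t) - prev[i]                # local signal increment
            new.append(x[i] + kappa * T * mix + feed)
        prev = [signals[i](t) for i in range(n)]
        x = new
    return x

# Three agents on a complete graph with sinusoidal local signals.
sigs = [lambda t, p=p: math.sin(t + p) for p in (0.0, 1.0, 2.0)]
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
estimates = linear_dynamic_consensus(sigs, graph)
avg = sum(s(2000 * 0.01) for s in sigs) / 3  # true average at the final time
```

Because the update is symmetric and initialized from the local samples, the sum of the estimates equals the sum of the signals at every step, and each estimate tracks the time-varying average with a small bounded error, consistent with the steady-state error behaviour of linear protocols discussed above.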
Besides having discrete communication, maintaining a low communication burden in the network is desirable, from the perspectives of both power consumption and communication bandwidth usage. For this purpose, an event-triggered approach can be used to decide when agents should communicate instead of doing so at every sampling instant. Several works use event-triggered conditions in which an agent only shares its estimate of the average signal when it changes sufficiently with respect to the last transmitted value. For example, an event-triggered communication rule is applied to a continuous-time dynamic consensus problem in [11]. In [12], an event-triggered dynamic consensus approach is analysed for homogeneous and heterogeneous multi-agent systems. These ideas have proven helpful in distributed state estimation. For instance, deterministic [13] and stochastic [14] event-triggering rules have been used to construct distributed Kalman filters. The proposal in [15] borrows these ideas to construct an extended Kalman filter. Distributed set-membership estimators have also been studied in the context of event-triggered communication [16]. Event-triggered schemes for estimation typically result in a trade-off between the quality of the estimates and the number of triggered communication events, as pointed out in [17]. This allows the user to tune the parameters of the event-triggering condition to reduce communication while achieving a desired estimation error for the consensus estimates. A more in-depth look into event-triggered strategies can be found in [18].
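The event-triggering rule described above, i.e., transmit only when the current estimate deviates from the last transmitted value by more than a threshold, can be sketched as follows (the function and variable names, as well as the threshold value, are illustrative):

```python
def make_trigger(threshold):
    """Event trigger: transmit only when the current estimate differs from
    the last transmitted value by more than `threshold`."""
    last_sent = {}  # agent id -> last transmitted value

    def should_send(agent, estimate):
        if agent not in last_sent or abs(estimate - last_sent[agent]) > threshold:
            last_sent[agent] = estimate  # record the newly broadcast value
            return True
        return False                     # skip communication this step

    return should_send

trigger = make_trigger(threshold=0.05)
sent = [trigger(0, v) for v in (0.00, 0.02, 0.06, 0.08, 0.12)]
# The first sample always transmits; afterwards only deviations larger
# than the threshold (relative to the last transmitted value) trigger.
```

A larger threshold yields fewer transmissions at the cost of a larger estimation error, which is exactly the trade-off noted in [17].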
However, most event-triggered and discrete-time approaches for dynamic consensus are based on linear techniques, even in recent works [15]. This means that the estimates cannot be exact when the dynamic average is persistently varying, e.g., when the local signals are sinusoidal. Concretely, these approaches typically attain a bounded steady-state error, which can be made arbitrarily small by increasing some parameters of the algorithm [4]. Nevertheless, as is typical with high-gain arguments, increasing such parameters can reduce the algorithm’s robustness when only imperfect measurements or noisy local signals are available [19]. In contrast, exact dynamic consensus algorithms can achieve exact convergence towards the dynamic average under reasonable assumptions on the local signals, including the persistently varying case. For example, a discontinuous First-Order Sliding Mode (FOSM) approach is used in [20] to achieve exact convergence in this setting. Similar ideas are applied in [21] for continuous-time state estimation. The work in [22] proposes a High-Order Sliding Mode (HOSM) approach for exact dynamic consensus, which was then extended in [23] to account for initialization errors. Note that these approaches rely mainly on sliding modes to achieve such exact convergence. As a result, the available analyses for these protocols are mostly in continuous time.
Unfortunately, the performance of sliding modes deteriorates in the discrete-time setting due to the so-called chattering ([24], Chapter 3). In particular, the performance of the FOSM approaches for dynamic consensus in [20,21] degrades in the discrete-time setting by an amount proportional to the time step. A standard solution to attenuate chattering is to use HOSM [24]. To the best of our knowledge, the only methods that use HOSM for dynamic consensus are [22,23]. However, discrete-time analyses or event-triggered extensions of these approaches have not yet been discussed in the literature.
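The chattering effect can be illustrated with a scalar toy example, unrelated to the multi-agent protocols cited above: an explicit-Euler discretization of the FOSM tracker e' = -k·sign(e) + r'(t) settles into a residual oscillation whose amplitude shrinks roughly in proportion to the time step.

```python
import math

def fosm_error(T, k=2.0, steps=5000):
    """Explicit-Euler discretization of the scalar FOSM error dynamics
    e' = -k*sign(e) + r'(t), with r(t) = sin(t); returns the maximum
    |e| over the second half of the simulation (the chattering band)."""
    sgn = lambda v: (v > 0) - (v < 0)
    e = 1.0
    tail = []
    for i in range(steps):
        t = i * T
        # Discretized correction step plus the disturbance increment.
        e = e - k * T * sgn(e) + (math.sin(t + T) - math.sin(t))
        if i > steps // 2:
            tail.append(abs(e))
    return max(tail)

coarse = fosm_error(T=0.01)
fine = fosm_error(T=0.001)
# Shrinking the step shrinks the chattering band roughly in proportion.
```

Since k exceeds the disturbance slope, the error reaches a band of width proportional to k·T and oscillates there; this is the discretization-induced steady-state error that HOSM techniques attenuate.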
As a result of this discussion, we contribute a novel distributed dynamic consensus algorithm with the following features. First, the algorithm uses discrete-time samples of the local signals. Second, we propose event-triggered communication between agents to reduce the communication burden. Our proposal is consistently exact, meaning that exact estimation of the average signal is recovered as the time step approaches zero and events are triggered at each sampling step. We show the advantage of our method over linear approaches, in which the permanent consensus error cannot be eliminated. Moreover, in the general discrete-time case, we show that our proposal reduces chattering compared to exact FOSM approaches.
This article is organized as follows. Section 2 provides a description of the problem of interest and our proposal to solve the consensus problem in a discrete-time setting using event-triggered communication. A formal analysis of the convergence of our protocol can be found in Section 3. In Section 4, we validate our proposal and compare it to other methods through simulation experiments. Furthermore, Section 5 provides a qualitative discussion comparing our proposal to other related works. The conclusions and future work are drawn in Section 6.
3. Protocol Convergence
In this Section, we provide a formal analysis of the convergence of our consensus protocol. To do so, we introduce three technical lemmas before providing the proof of Theorem 1 in Section 3.1.
First, define
and
. Moreover, define:
where
, as well as
. In addition, note that
where
by construction from (
2). Hence,
with
and
.
Thus, the protocol in (
3) can be written in vector form as:
for which the condition
in Theorem 1 can be written as
.
The purpose of the internal variables
is to approximate the disagreement between local signals expressed as
with
recalling that
such that if
exactly, then all the agents reach consensus towards:
The first step in showing this approximation is to prove that the internal variables are always orthogonal to the consensus direction, as stated in the following result.
Lemma 1. Let be connected and hold . Then, (5) satisfies . Proof. The proof follows by strong induction. Note that,
from (
5) since
. Hence,
due to the initial condition
. Now, use this as an induction base for the rest of the
. Assume from the induction hypothesis that
with
and
. Then, from (
5):
from which it follows that
similarly as before, completing the proof. □
Now, let the error and recall from Corollary A2 in Appendix B that can be expanded as:
where
. Therefore, combining (
5) and (
6), the error dynamics for
can be written as:
using
due to
. Rearranging (
7) leads to:
Hence, all trajectories of (
8) satisfy the inclusions:
with
,
, and the set valued functions:
To show the convergence of the error system in (
9), we write an equivalent continuous-time system as follows. First, we extend solutions to (
7) as
for any
according to:
where
is an arbitrary vector
such that (
11) complies (
9) at
. Hence, it follows that:
Finally, note that
satisfies
and hence
. Thus, we have:
In the following, we derive some important properties for (
13).
Lemma 2. Given any , the differential inclusion (13) is invariant under the transformation: Proof. Write
with
and note that:
and
. The same reasoning applies to the case where
. Now, compute the dynamics of the transformed variables in the new time
:
which completes the proof. □
Lemma 3. Let Assumption 1 hold. Then, if , there exist sufficiently high gains such that (13) is finite-time stable towards the origin. Proof. The proof follows by noting that:
for
and:
Hence, (13) with is equivalent to (A2) in Appendix A, which is finite-time stable towards the origin according to Corollary A1, completing the proof. □
With these results, we are ready to show Theorem 1.
3.1. Proof of Theorem 1
We use similar reasoning to that of Theorem 2 in [27], where the asymptotics in Theorem 1 are obtained through an argument of continuity of solutions for differential inclusions with respect to the parameter [28]. First, let the transformation:
and note that the dynamics of the new variables comply with:
according to Lemma 2. Lemma 3 implies that if
then (
15) complies with
for some
. Hence, by the continuity of the solutions of the inclusion (
15), there exists a sufficiently small
such that (
15) complies
for some
.
Using
, the original variables comply that for
:
with
, which is a constant since
is fixed. Therefore,
. This means that:
and, thus,
. Consequently, evaluating the previous inequality component-wise and at
, then
for some
where we used
, completing the proof of Theorem 1. A similar argument is used for the additional outputs in (
4) in order to show Corollary 1.
4. Numerical Experiments
To validate our proposal, we show experiments on a simulation scenario. The communication network for the agents is described by the graph shown in Figure 2. As an example, we consider . For this case, we set the gains for all agents, which were chosen as established in Remark 4. We have arbitrarily chosen initial conditions as and as well as to ensure . The internal reference signals for each agent have been provided as . For the sake of generality, we have arbitrarily chosen the following amplitudes , frequencies , and phases , respectively:
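Local reference signals of this sinusoidal form, together with their network-wide average, can be generated as in the following sketch. The amplitude, frequency, and phase values below are hypothetical placeholders, not the ones used in the experiments.

```python
import math

# Hypothetical (amplitude, frequency, phase) triples, one per agent;
# these are NOT the values used in the article's experiments.
params = [(1.0, 1.0, 0.0), (0.5, 2.0, 1.0), (1.5, 0.5, 2.0), (0.8, 1.5, 0.5)]

def reference(i, t):
    """Local sinusoidal reference signal of agent i at time t."""
    a, w, p = params[i]
    return a * math.sin(w * t + p)

def dynamic_average(t):
    """Signal of interest: network-wide average of the local references."""
    return sum(reference(i, t) for i in range(len(params))) / len(params)
```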
Our goal is to show that we can achieve consistently exact dynamic consensus on the average of the reference signals, , as defined in Remark 1. For our protocol, the error due to the discretization and event-triggered communication is in an arbitrarily small neighbourhood of zero. The desired neighbourhood can be tuned with and , according to Theorem 1. Thus, we have tested our proposal with different sampling times and event thresholds .
Figure 3 shows the consensus results for our protocol with a sampling period of and no event-triggered communication, as an approximation of the ideal continuous-time case with full communication. For the figures, we define for . The desired dynamic average of the signals, , is plotted as a dashed red line. It can be observed that, after some time, the consensus error falls close to zero.
When we add event-triggered communication, the consensus error increases with the value of the triggering threshold . This can be observed in Figure 4 and Figure 5, which show the results with and , respectively, keeping . Note that, especially for a small , the event-triggered error is more apparent at the instants where the measured signal changes its slope. This is due to the shape of the event-triggering condition (2), since events are triggered when the current estimate differs from the last transmitted one by more than some threshold . Around a change in slope, this difference remains below the triggering threshold for a longer time, since the signal stays near a flat region. Hence, a more noticeable trigger-induced error appears.
The sampling period also affects the magnitude of the steady-state consensus error. Figure 6 shows the consensus results with . When compared to Figure 4, it is apparent that the consensus error has increased with the sampling period. Note that the oscillating shape of the error is due to the sinusoidal shape of the signal of interest.
The increase in error due to is common in event-triggered setups, where there is a trade-off between the number of triggered events and the quality of the results. When we set a higher value of , we reduce the communication between agents, since information is only exchanged at event instants over the corresponding links, at the cost of an increase in the resulting error. To further analyse this trade-off, Figure 7 shows the effect that the reduction in communication has on the consensus error. This figure shows that communication through the network of agents can be significantly reduced with respect to the full-communication case without a large increase in the steady-state consensus error. The communication rate has been computed as follows:
A communication rate of 100% represents full communication, where each agent transmits to all its neighbours at every sampling time. For our proposal, note that when an event is triggered in the link , a message is sent from agent i to its neighbour j, and agent j replies with another message to agent i. Thus, for every triggered event, two messages are sent through the network. We have considered that when events are triggered in the links and simultaneously, only one message is sent in each direction.
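Under this convention (two directed messages per undirected link per sampling step in the full-communication case), the communication rate can be computed as in the following sketch; the function name and example numbers are illustrative, and the formula is our reading of the description above.

```python
def communication_rate(messages_sent, num_edges, num_steps):
    """Percentage of messages actually sent relative to full communication,
    where every agent transmits to every neighbour at every sampling step
    (i.e., two directed messages per undirected link per step)."""
    full = 2 * num_edges * num_steps
    return 100.0 * messages_sent / full

# e.g., 10 links and 1000 steps give 20,000 messages under full
# communication; 3000 event-triggered messages correspond to 15%.
rate = communication_rate(3000, num_edges=10, num_steps=1000)
```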
A similar trade-off is found for the sampling time. Figure 7 also shows the evolution of the consensus error with different values of the sampling period. Note that, if event-triggered communication is used (), the value of has a higher impact on the steady-state error than the sampling period when . In that case, the steady-state error does not asymptotically tend to zero but rather remains in a neighbourhood of zero that is bounded according to .
To compare with other approaches in the literature, we have obtained consensus results for the same simulation scenario using a linear protocol and a First-Order Sliding Mode (FOSM) protocol, two existing options for dynamic consensus in the state of the art, as described in the following. Note that, with some adaptations, the linear and FOSM protocols can be obtained as particular cases of our protocol. To implement the linear protocol, we use (3) with , which is a setting similar to those in [4,9]. For the FOSM protocol, we use the equations in (3), setting , which corresponds to one of the proposals in [20]. For both cases, we set and to simulate the case with full communication.
Figure 8 compares the consensus error for the linear and FOSM protocols against our proposal. The FOSM protocol can eliminate the steady-state consensus error in continuous-time implementations, but its discretized version suffers from chattering. In contrast, our method reduces chattering, improving the steady-state error. The linear approach cannot eliminate the permanent consensus error in the estimates of the dynamic average , regardless of the value of (note that the shape of the error is due to the sinusoidal nature of the signals used in the experiment).
We summarize our comparison with the numerical results in Table 1. We include the maximum steady-state error obtained for the linear and FOSM protocols and for our method. We report the results with full communication () for comparison, as well as the values obtained with our event-triggered setup (). To highlight the advantage of event-triggered communication, we also include the level of communication in the network for each approach. The results in Table 1 show that our proposal significantly reduces the steady-state error compared to the linear and FOSM protocols. Moreover, using event-triggered communication instead of having each agent transmit to all neighbours at every sampling instant drastically reduces the amount of communication through the network, while still performing better than the linear and FOSM protocols with full communication.
Finally, we showed in Corollary 1 that our proposal can also recover the derivatives of the signal of interest with some bounded error. We compare the results obtained for the derivatives against those of a linear protocol. For the FOSM protocol, recall that is used in (3) to compute it and, therefore, it only obtains the internal state , but not the corresponding internal states for the derivatives. Although the linear approach can recover the derivatives of the desired average signal, our proposal is more accurate, as shown in Figure 9 and Figure 10. With our protocol, the derivatives are recovered with some bounded error due to the sampling and event-triggered communication. It can be seen in Figure 10 that this error is more apparent in higher-order derivatives, and its magnitude can be tuned through the parameters , as shown in Corollary 1. Hence, the improvement of our protocol over the linear one is also evident.
5. Comparison to Related Work
In this Section, we offer a qualitative comparison of our work with other methods in the literature. Note that our work is related to the EDCHO protocol presented in [22], which provides a continuous-time formulation of an exact dynamic consensus algorithm. Both works are similar in that they use HOSM to achieve exact convergence. However, since discrete communication is used in practice, our work contributes a discrete-time formulation that guarantees exact convergence as the sampling step approaches zero. Moreover, we have included event-triggered communication to alleviate the communication load in the network of agents.
Other exact dynamic consensus approaches appear in [20,21], which use FOSM. In the discrete-time consensus problem, FOSM protocols suffer from chattering, a steady-state error caused by discretization. With FOSM, chattering degrades the performance of the consensus algorithm by an amount proportional to the sampling step . Using HOSM, we have shown in Theorem 1 that our protocol diminishes the effect of chattering. In particular, the error bound is proportional to , with m being the order of the sliding modes. We showcased this improvement in the experimental results from Figure 8 and Table 1 in Section 4.
Even though the use of sliding modes results in exact convergence of the consensus algorithm in continuous-time implementations, several works still rely on linear protocols. A high-order, discrete-time implementation of linear dynamic consensus protocols appears in [9], whose performance does not depend on the sampling step, contrary to our work. Nonetheless, this approach requires the agents to communicate at each sampling instant. Our approach improves on this through the use of event-triggered communication.
Many recent works featuring event-triggered schemes to reduce communication among the agents [13,15,29] attain a similar trade-off between consensus quality and communication rate. However, they also rely on linear consensus protocols. When persistently varying signals are used, this always results in a non-zero steady-state error, which cannot be eliminated even in continuous-time implementations. Hence, even though our event-triggering mechanism is similar to those in other works, our HOSM approach can achieve a vanishing error when are arbitrarily small.