Open Access

*Algorithms* **2019**, *12*(9), 185; https://doi.org/10.3390/a12090185

Article

Consensus Tracking by Iterative Learning Control for Linear Heterogeneous Multiagent Systems Based on Fractional-Power Error Signals

^{1} Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Institute of Automation, Jiangnan University, Wuxi 214122, China

^{2} Shanghai Keliang Information Technology & Engineering Company, Ltd., Shanghai 200233, China

^{*} Author to whom correspondence should be addressed.

Received: 25 July 2019 / Accepted: 3 September 2019 / Published: 5 September 2019

## Abstract


This paper deals with the consensus tracking problem of heterogeneous linear multiagent systems in a repeatable operating environment, and adopts a proportional differential (PD)-type iterative learning control (ILC) algorithm based on the fractional-power tracking error. According to graph theory and operator theory, a convergence condition is obtained for systems whose interconnection topology contains a spanning tree rooted at the reference trajectory, which is regarded as the leader. Our algorithm based on the fractional-power tracking error achieves a faster convergence rate than the usual PD-type ILC algorithm based on the integer-order tracking error. Simulation examples illustrate the correctness of the proposed algorithm.

Keywords:

heterogeneous linear multiagent systems; consensus tracking; fractional-power tracking error; PD-type iterative learning control

## 1. Introduction

In the past decades, the cooperative control problem of multiagent systems [1,2] has been extensively studied due to its wide applications in many engineering fields, e.g., multirobot cooperative control, formation control of unmanned aerial vehicles, traffic control, smart grids, and so on. As a fundamental problem of distributed cooperative control, the consensus problem requires all agents to reach an agreement on the states of interest, and various consensus problems have been thoroughly studied, such as leader-following consensus [3], group consensus [4], finite-time consensus [5], and so on.

Iterative learning control (ILC), a classic learning control strategy, is designed to improve tracking performance by applying information obtained from past control trials [6,7,8]. The current input signal is usually generated from the previous input signal plus a correction based on the tracking error (either the derivative or the integral of the tracking error). ILC is used to accomplish control tasks that repeat on finite-time intervals, and it does not require an accurate system model. Due to these attractive features, ILC has been widely used in engineering practice, for example, in industrial robots that perform repetitive tasks such as welding and handling [9], and in servo systems whose command signals are periodic functions [10].

Many industrial tasks require repeated execution and coordination among several subsystems, for example, the cooperative control of several robotic arms on an industrial production line. Recently, the consensus tracking problem of multiagent systems under ILC strategies has attracted the attention of researchers, since many such tasks can be solved by ILC algorithms [11,12,13,14]. Unfortunately, existing results on consensus problems by ILC algorithms are mostly given for homogeneous multiagent systems, in which all the agents have the same dynamics. In some real engineering applications, because of various restrictions or the need to reach the goals at the lowest cost, the cooperating agents have distinct dynamics, e.g., in the coordination control of unmanned aerial vehicles and unmanned ground vehicles, so the study of heterogeneous multiagent systems is more practical and meaningful. Yang et al. [15] proposed ILC algorithms to solve the consensus tracking problems of homogeneous and heterogeneous multiagent systems, respectively, and output consensus conditions were obtained based on the concept of the graph-dependent matrix norm. Li [16] considered a heterogeneous multiagent system composed of first- and second-order dynamics, where the leader was assumed to have second-order dynamics; different protocols were designed for the heterogeneous following agents, so that all the following agents tracked the state of the leader asymptotically [16].

As an important evaluation index of an ILC algorithm, the convergence rate refers to the speed at which the multiagent systems approach the reference trajectory, and it has stimulated research interest for a long time. In order to accelerate the convergence rate of the ILC algorithm, taking the proportional differential (PD)-type learning law as an example, an acceleration correction algorithm with variable gain and adjustment of the learning interval was designed for linear time-invariant systems [17]. Tao et al. proposed an interpolating algorithm that regulates the reference trajectory in order to achieve a faster convergence rate [18]. The convergence rate of the closed-loop ILC algorithm is obviously faster than that of the open-loop ILC algorithm, due to the real-time feedback of the closed-loop scheme [19]. The convergence rate was greatly improved by introducing adaptive gains into the ILC algorithm [20]. Sun et al. used a terminal converging strategy for the ILC algorithm and achieved finite-time convergence [21]. For the consensus tracking problem of multiagent systems, Yang et al. proposed an ILC algorithm with input sharing, i.e., each agent shares its input information with its neighbors, and obtained a faster convergence rate [22].

This paper focuses on the output consensus tracking problem of heterogeneous linear multiagent systems, in which the following agents are required to track the output trajectory of the leader. It is worth noting that the state dimensions of the agents may be different, but the outputs must have the same dimension. In order to accelerate the consensus convergence rate of the multiagent systems, a novel PD-type ILC consensus algorithm is proposed by using the fractional-power tracking error. A sufficient consensus condition, which depends on the control parameters, is obtained based on operator theory.

## 2. Preliminaries

#### 2.1. Digraph

In this paper, we study the consensus tracking problem via leader-following coordination control structure. The information flow among following agents forms a digraph G [23], and the leader is denoted by vertex 0.

The digraph $G=\left(V,E,\mathcal{A}\right)$ composed of the following agents contains the vertex set $V=\left\{1,\cdots ,n\right\}$, the edge set $E\subseteq V\times V$, and the adjacency matrix $\mathcal{A}=\left[{a}_{ij}\right]\in {R}^{n\times n}$. A directed edge from i to j in G is denoted by ${e}_{i,j}=(i,j)\in E$, meaning that node j can obtain information from node i. Assume ${a}_{j,i}>0\iff {e}_{i,j}\in E$ and ${a}_{i,i}=0$ for all $i\in V$. The set of neighbors of node i is denoted by ${N}_{i}=\{j\in V:(j,i)\in E\}$. The Laplacian matrix of the digraph G is defined as $L=D-\mathcal{A}=\left[{l}_{i,j}\right]\in {R}^{n\times n}$, where $D=\mathrm{diag}\{{\sum}_{j=1}^{n}{a}_{i,j},i=1,\cdots ,n\}$ is the degree matrix. In the digraph G, a directed path from node ${i}_{1}$ to node ${i}_{s}$ is a sequence of ordered edges of the form $({i}_{1},{i}_{2}),\cdots ,({i}_{s-1},{i}_{s})$, where ${i}_{j}\in V$. A digraph is said to have a spanning tree if there exists a node from which there is a directed path to every other node.
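As a quick numerical illustration of the Laplacian construction $L=D-\mathcal{A}$, the sketch below builds L from a small adjacency matrix; the three-agent weights are assumptions for illustration only.

```python
import numpy as np

# Illustrative 3-agent digraph (assumed weights): row i of A collects the
# weights a_ij of edges pointing into node i.
A = np.array([[0.0, 1.0, 0.0],   # agent 1 hears agent 2
              [0.0, 0.0, 2.0],   # agent 2 hears agent 3
              [1.0, 0.0, 0.0]])  # agent 3 hears agent 1

D = np.diag(A.sum(axis=1))       # degree matrix: row sums on the diagonal
L = D - A                        # Laplacian L = D - A

print(L)                         # every row of L sums to zero
```

By construction every row of L sums to zero, so L always has a zero eigenvalue with eigenvector of all ones.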

Then, the interconnection among all agents is characterized by a compound digraph $\overline{G}=\left\{0\right\}\cup G$, and the coupling weight of the ith following agent to the leader is denoted by ${s}_{i}$: ${s}_{i}>0$ if agent i can obtain information from the leader; otherwise, ${s}_{i}=0$. Moreover, let $S=\mathrm{diag}\{{s}_{i},i\in V\}$.

Throughout this paper, we make the following assumption on the topology.

**Assumption**

**1.**

The interconnection topology of the following agents and the leader contains a spanning tree with the leader being its root.

Under Assumption 1, the eigenvalues of $L+S$ all have positive real parts [24].
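This cited fact can be checked numerically on a small assumed topology in which the leader pins agent 1 and a directed path 1 → 2 → 3 exists, so Assumption 1 holds:

```python
import numpy as np

# Assumed example: leader pins agent 1 (s_1 = 1) and the chain 1 -> 2 -> 3
# gives a spanning tree rooted at the leader, so eig(L + S) should all lie
# in the open right half-plane.
L = np.array([[0.0, 0.0, 0.0],    # agent 1 hears only the leader
              [-1.0, 1.0, 0.0],   # agent 2 hears agent 1
              [0.0, -1.0, 1.0]])  # agent 3 hears agent 2
S = np.diag([1.0, 0.0, 0.0])      # pinning gains s_i

eigs = np.linalg.eigvals(L + S)
print(eigs)                        # all real parts are positive
```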

#### 2.2. Critical Definitions

**Definition**

**1.**

For a given vector $q={[{q}_{1},{q}_{2},\cdots ,{q}_{n}]}^{T}\in {R}^{n}$, $\|q\|$ is any vector norm. For any matrix $W=[{w}_{ij}{]}_{m\times l}\in {R}^{m\times l}$, $\|W\|$ is the matrix norm induced by the vector norm. In particular, ${\|W\|}_{\infty}=\underset{1\le i\le m}{max}{\displaystyle \sum _{j=1}^{l}}\left|{w}_{ij}\right|$, ${\|W\|}_{1}=\underset{1\le j\le l}{max}{\displaystyle \sum _{i=1}^{m}}\left|{w}_{ij}\right|$, and ${\|W\|}_{2}=\sqrt{\rho \left({W}^{T}W\right)}$, where $\rho \left(\cdot \right)$ is the spectral radius. For all matrices $W\in {R}^{m\times l}$, $Z\in {R}^{l\times n}$, and $F=WZ\in {R}^{m\times n}$, the following property is satisfied.

$${\|WZ\|}_{m\times n}\le {\|W\|}_{m\times l}{\|Z\|}_{l\times n}.$$
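The induced norms of Definition 1 and the submultiplicativity property can be verified numerically; W and Z below are fixed illustrative matrices (assumed values).

```python
import numpy as np

# Numeric check of the induced norms in Definition 1 on assumed matrices.
W = np.array([[1.0, -2.0], [0.5, 3.0]])
Z = np.array([[2.0, 0.0], [-1.0, 1.0]])

norm_inf = np.abs(W).sum(axis=1).max()   # max absolute row sum
norm_1 = np.abs(W).sum(axis=0).max()     # max absolute column sum
norm_2 = np.sqrt(np.linalg.eigvals(W.T @ W).real.max())  # spectral norm

# Submultiplicativity ||WZ|| <= ||W|| ||Z|| for each induced norm
submult = all(np.linalg.norm(W @ Z, p) <= np.linalg.norm(W, p) * np.linalg.norm(Z, p) + 1e-12
              for p in (1, 2, np.inf))
print(norm_inf, norm_1, norm_2, submult)
```

The hand-rolled formulas agree with NumPy's built-in matrix norms (`ord=1`, `ord=2`, `ord=np.inf`).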

**Definition**

**2.**

[21] If for a function $f\left(x\right):{R}^{n}\to {R}^{n}$ there exist $D\subset {R}^{n}$, $L>0$, and $0<\alpha \le 1$ such that
holds for all ${x}_{1},{x}_{2}\in D$, then $f\left(x\right)$ is Hölder continuous in the region D.

$$\|f\left({x}_{1}\right)-f\left({x}_{2}\right)\|\le L{\|{x}_{1}-{x}_{2}\|}^{\alpha},$$
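As a numerical sanity check of Definition 2, the scalar map $f(x)={\left|x\right|}^{\alpha}$ used later in the paper is Hölder continuous with exponent α and constant $L=1$; the grid below (an illustrative sketch) verifies the inequality pointwise.

```python
import numpy as np

# Grid check (illustrative): f(x) = |x|**alpha is Hoelder continuous on R
# with exponent alpha and constant L = 1, a standard fact used by the paper.
alpha = 0.5
pts = np.linspace(-2.0, 2.0, 201)
X1, X2 = np.meshgrid(pts, pts)
lhs = np.abs(np.abs(X1) ** alpha - np.abs(X2) ** alpha)
rhs = np.abs(X1 - X2) ** alpha

mask = rhs > 0                       # skip the diagonal where both sides vanish
worst = (lhs[mask] / rhs[mask]).max()
print(worst)                         # never exceeds 1
```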

#### 2.3. Consensus Tracking Problem

The consensus tracking problem analyzed in this paper requires that spatially separated agents track a desired reference trajectory via cooperative control, and we adopt ILC strategies for each agent. Additionally, we take into account a class of linear heterogeneous multiagent systems, and the dynamics of agent i are in the following form at the kth iteration.
where ${x}_{i,k}\left(t\right)\in {R}^{{p}_{i}}$, ${u}_{i,k}\left(t\right)\in {R}^{{r}_{i}}$, and ${y}_{i,k}\left(t\right)\in {R}^{m}$ are the state, control input, and output of agent i, respectively, and ${A}_{i}\in {R}^{{p}_{i}\times {p}_{i}}$, ${B}_{i}\in {R}^{{p}_{i}\times {r}_{i}}$, and ${C}_{i}\in {R}^{m\times {p}_{i}}$ are matrices with proper dimensions. The state dimensions of the agents may differ, because this paper studies the consensus problem of heterogeneous multiagent systems, as mentioned in [15]. Since this paper deals with the output tracking problem, the output dimensions of all agents must be consistent with that of the reference trajectory.

$$\left\{\begin{array}{c}{\dot{x}}_{i,k}\left(t\right)={A}_{i}{x}_{i,k}\left(t\right)+{B}_{i}{u}_{i,k}\left(t\right)\hfill \\ {y}_{i,k}\left(t\right)={C}_{i}{x}_{i,k}\left(t\right)\hfill \end{array}\right.,\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}\phantom{\rule{3.33333pt}{0ex}}i=1,2,\cdots ,n,$$

The reference trajectory ${y}_{d}\left(t\right)\in {R}^{m}$ is defined on a time interval $t\in [0,T]$ and is regarded as the leader’s output trajectory. In the distributed coordination control framework, the leader’s output trajectory ${y}_{d}\left(t\right)$ is accessed by only a subset of the following agents, and the remaining agents reach the leader’s trajectory indirectly. The control objective is to design a new ILC algorithm so that the output signals of all following agents converge to the reference trajectory ${y}_{d}\left(t\right)$ asymptotically, i.e.,

$$\underset{k\to \infty}{lim}\left[{y}_{d}\left(t\right)-{y}_{i,k}\left(t\right)\right]=0,i\in V,t\in [0,T].$$

**Remark**

**1.**

The leader in this paper is a virtual leader, i.e., the reference trajectory is generated by the virtual leader, but the leader does not physically exist, so there will be no collision between the leader and the following agents. Moreover, we assume that there will be no collision among the followers, which we avoid by placing all followers in reasonable positions. For example, if system (2) formulates the dynamics of a robot on an industrial production line and the output denotes the angles of the robot’s joints, the robots will not have collision problems due to their judicious spacing.

## 3. Design And Analysis of ILC Consensus Algorithm

Firstly, we define the tracking error of each agent as
and
where ${e}_{i,k}\left(t\right)$ and ${e}_{ij,k}\left(t\right)$ are the tracking errors of agent i at the kth iteration with respect to the leader and to the neighboring agent j, respectively.

$${e}_{i,k}\left(t\right)={y}_{d}\left(t\right)-{y}_{i,k}\left(t\right),$$

$${e}_{ij,k}\left(t\right)={y}_{j,k}\left(t\right)-{y}_{i,k}\left(t\right),j\in {N}_{i},$$

Let ${\xi}_{i,k}\left(t\right)$ denote the information received or measured by agent i at the kth iteration, and we get
where ${a}_{ij}$ are the adjacency elements associated with the edges of the digraph, and ${s}_{i}$ is the coupling weight of the ith following agent to the leader.

$${\xi}_{i,k}\left(t\right)=\sum _{j\in {N}_{i}}{a}_{ij}{e}_{ij,k}^{}\left(t\right)+{s}_{i}{e}_{i,k}^{}\left(t\right),$$

For an ILC algorithm, the tracking error decreases as the iteration number increases, so the convergence rate becomes slow once the tracking error is small. Hence, we are encouraged to appropriately amplify the tracking error when it is small, so that the following agents can track the leader’s trajectory more quickly.

In this paper, we adopt the PD-type ILC consensus algorithm and introduce a fractional-power proportional part as follows,
where ${\mathsf{\Gamma}}_{i}\in {R}^{{r}_{i}\times m}$, ${K}_{i}\in {R}^{{r}_{i}\times m}$ are the learning gains, $0<\alpha <1$, and the sign function $sign\left({\xi}_{i,k}\left(t\right)\right)$ is defined as
Moreover, the concrete steps of the algorithm (6) are shown in Algorithm 1.

$$\begin{array}{cc}\hfill & {u}_{i,k+1}\left(t\right)\hfill \\ =& {u}_{i,k}\left(t\right)+{\mathsf{\Gamma}}_{i}{\dot{\xi}}_{i,k}\left(t\right)+{K}_{i}sign\left({\xi}_{i,k}\left(t\right)\right){\left|{\xi}_{i,k}\left(t\right)\right|}^{\alpha},\hfill \end{array}$$

$$sign\left({\xi}_{i,k}\left(t\right)\right)=\left\{\begin{array}{c}\begin{array}{cc}1& {\xi}_{i,k}\left(t\right)>0,\end{array}\\ \begin{array}{cc}0& {\xi}_{i,k}\left(t\right)=0,\end{array}\\ \begin{array}{cc}-1& {\xi}_{i,k}\left(t\right)<0.\end{array}\end{array}\right.$$
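The effect of the fractional-power term in (6) can be seen directly: for error magnitudes below one it amplifies the signal, while large errors are attenuated. The helper below is an illustrative sketch; the gain values are assumptions.

```python
import numpy as np

def frac_term(xi, K=1.0, alpha=0.5):
    """Fractional-power proportional term K * sign(xi) * |xi|**alpha from (6)."""
    return K * np.sign(xi) * np.abs(xi) ** alpha

# For |xi| < 1 the fractional power AMPLIFIES the error signal, which is the
# mechanism behind the faster convergence claimed in the paper:
print(frac_term(0.01))    # 0.1, ten times larger than xi itself
print(frac_term(-0.01))   # -0.1, the sign is preserved
print(frac_term(4.0))     # 2.0, large errors are attenuated instead
```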

Algorithm 1 Iterative learning control (ILC) consensus algorithm based on fractional-power error signals

Parameters: agent i, iteration index k, ${a}_{ij}$, ${s}_{i}$, ${\mathsf{\Gamma}}_{i}$, ${K}_{i}$, $\alpha $
Input: the reference trajectory ${y}_{d}\left(t\right)$
Output: the control input ${u}_{i,k}\left(t\right)$

1: initialize the iteration index $k=1$
2: while ${e}_{i,k}\left(t\right)\ne 0$ for some $i\in V,t\in [0,T]$ do
3: initialize the initial state ${x}_{i,k}\left(0\right)=0$
4: calculate the output ${y}_{i,k}\left(t\right)$ by Equation (2)
5: calculate the tracking error ${e}_{i,k}\left(t\right)$ of each agent by Equation (3)
6: calculate the tracking errors ${e}_{ij,k}\left(t\right)$ with the neighboring agents by Equation (4)
7: calculate ${\xi}_{i,k}\left(t\right)$ by Equation (5)
8: if $k=1$ then ${u}_{i,k}\left(t\right)=0$
9: else ${u}_{i,k}\left(t\right)={u}_{i,k-1}\left(t\right)+{\mathsf{\Gamma}}_{i}{\dot{\xi}}_{i,k-1}\left(t\right)+{K}_{i}sign\left({\xi}_{i,k-1}\left(t\right)\right){\left|{\xi}_{i,k-1}\left(t\right)\right|}^{\alpha}$
10: end if
11: set $k=k+1$
12: end while
13: print the control input ${u}_{i,k}\left(t\right)$
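The steps of Algorithm 1 can be sketched in runnable code. The following is a minimal illustration for two heterogeneous scalar agents; the dynamics, topology, gains, and reference trajectory are assumed values chosen so that the convergence condition of Theorem 1 holds, and they are not the paper's simulation setup.

```python
import numpy as np

dt, T = 0.01, 1.0
t = np.arange(0.0, T + dt, dt)            # time grid on [0, T]
yd = np.sin(2 * np.pi * t)                # assumed leader trajectory, yd(0) = 0
a = np.array([-1.0, -0.5])                # heterogeneous dynamics x' = a*x + b*u
b = np.array([1.0, 1.0])
c = np.array([1.0, 1.0])
LS = np.array([[1.0, 0.0],                # L + S: leader pins agent 1,
               [-1.0, 1.0]])              # agent 2 hears agent 1
Gamma, K, alpha = 0.5, 0.2, 0.5           # learning gains (assumed)

# Theorem 1 check for scalar outputs: rho(I - diag(c*b*Gamma) (L+S)) < 1
rho = np.max(np.abs(np.linalg.eigvals(np.eye(2) - np.diag(c * b * Gamma) @ LS)))

u = np.zeros((2, t.size))
err_hist = []
for k in range(30):                       # iteration domain
    y = np.zeros((2, t.size))
    for i in range(2):                    # forward-Euler rollout of (2)
        x = 0.0                           # identical initial resetting
        for n in range(t.size):
            y[i, n] = c[i] * x
            if n + 1 < t.size:
                x += dt * (a[i] * x + b[i] * u[i, n])
    e = yd - y                            # tracking error (3)
    xi = LS @ e                           # distributed error signal
    xi_dot = np.gradient(xi, dt, axis=1)
    # PD-type update with fractional-power proportional term (6)
    u = u + Gamma * xi_dot + K * np.sign(xi) * np.abs(xi) ** alpha
    err_hist.append(np.abs(e).max())

print(rho, err_hist[0], err_hist[-1])     # the maximum error shrinks with k
```

With this choice of gains the spectral radius is 0.5, and the maximum tracking error drops by orders of magnitude over the iterations.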

**Remark**

**2.**

Mishra et al. [23] used sliding mode control to solve the finite-time consensus problem, where the sign function is used to force the state trajectories onto the sliding surface from any arbitrary initial location in the phase plane. In contrast, the sign function in this paper is used to extract the sign of the tracking errors, after which the fractional power of the absolute value of the tracking errors is taken. Fractional-power tracking errors can accelerate the convergence rate when the tracking errors are small.

Then, it is obtained from (3) and (4) that
and then (5) turns to be

$$\begin{array}{c}{e}_{ij,k}\left(t\right)={e}_{i,k}\left(t\right)-{e}_{j,k}\left(t\right),\hfill \end{array}$$

$$\begin{array}{c}\hfill {\xi}_{i,k}\left(t\right)=\sum _{j\in {N}_{i}}{a}_{ij}\left({e}_{i,k}^{}\left(t\right)-{e}_{j,k}^{}\left(t\right)\right)+{s}_{i}{e}_{i,k}^{}\left(t\right).\end{array}$$

**Remark**

**3.**

It should be pointed out that, in order to ensure the convergence of the multiagent systems, α lies strictly between 0 and 1 and should not be chosen too small.

Let ${x}_{k}\left(t\right)={[{x}_{1,k}^{T}\left(t\right),{x}_{2,k}^{T}\left(t\right),\cdots ,{x}_{n,k}^{T}\left(t\right)]}^{T}$ and ${u}_{k}\left(t\right)={[{u}_{1,k}^{T}\left(t\right),{u}_{2,k}^{T}\left(t\right),\cdots ,{u}_{n,k}^{T}\left(t\right)]}^{T}$; then, the systems (2) are written in the compact-vector form as
where $A=\mathrm{diag}\{{A}_{1},{A}_{2},\cdots ,{A}_{n}\}$, $B=\mathrm{diag}\{{B}_{1},{B}_{2},\cdots ,{B}_{n}\}$, and $C=\mathrm{diag}\{{C}_{1},{C}_{2},\cdots ,{C}_{n}\}$. Defining ${e}_{k}\left(t\right)={[{e}_{1,k}^{T}\left(t\right),{e}_{2,k}^{T}\left(t\right),\cdots ,{e}_{n,k}^{T}\left(t\right)]}^{T}$ and ${\xi}_{k}\left(t\right)={[{\xi}_{1,k}^{T}\left(t\right),{\xi}_{2,k}^{T}\left(t\right),\cdots ,{\xi}_{n,k}^{T}\left(t\right)]}^{T}$, Equation (8) can be written as
where $S=\mathrm{diag}\{{s}_{1},{s}_{2},\cdots ,{s}_{n}\}$, and L is the Laplacian matrix of G.

$$\left\{\begin{array}{c}{\dot{x}}_{k}\left(t\right)=A{x}_{k}\left(t\right)+B{u}_{k}\left(t\right),\hfill \\ {y}_{k}\left(t\right)=C{x}_{k}\left(t\right),\hfill \end{array}\right.$$

$$\begin{array}{c}\hfill {\xi}_{k}\left(t\right)=\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(t\right),\end{array}$$
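The Kronecker-product form above is just a stacked rewriting of (8); a small numerical check (with an assumed $L+S$, $n=3$ agents, and $m=2$ outputs) confirms that the two computations agree.

```python
import numpy as np

n, m = 3, 2
LS = np.array([[1.5, 0.0, 0.0],
               [-1.0, 1.0, 0.0],
               [0.0, -1.0, 1.0]])               # assumed L + S
e = np.arange(1.0, n * m + 1)                   # stacked e_k = [e_1; e_2; e_3]

# Kronecker form: xi_k = ((L+S) kron I_m) e_k
xi_kron = np.kron(LS, np.eye(m)) @ e

# per-agent form: xi_i = sum_j (L+S)_{ij} e_j
xi_loop = np.concatenate([sum(LS[i, j] * e[j * m:(j + 1) * m] for j in range(n))
                          for i in range(n)])
print(np.allclose(xi_kron, xi_loop))            # True
```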

Next, we define the following operators $\overline{sign}\left({\xi}_{k}\left(t\right)\right):{C}_{r}[0,T]\to {C}_{r}[0,T]$ and ${f}^{\alpha}\left({\xi}_{k}\left(t\right)\right):{C}_{r}[0,T]\to {C}_{r}[0,T]$,
and

$$\begin{array}{cc}\hfill & \overline{sign}\left({\xi}_{k}\left(t\right)\right)\hfill \\ =& \mathrm{diag}\left\{sign\left({\xi}_{1,k}\left(t\right)\right),\cdots ,sign\left({\xi}_{n,k}\left(t\right)\right)\right\},\hfill \end{array}$$

$$\begin{array}{cc}\hfill & {f}^{\alpha}\left({\xi}_{k}\left(t\right)\right)\hfill \\ =& {\left[{\left|{\xi}_{1,k}\left(t\right)\right|}^{\alpha},{\left|{\xi}_{2,k}\left(t\right)\right|}^{\alpha},\cdots ,{\left|{\xi}_{n,k}\left(t\right)\right|}^{\alpha}\right]}^{T}.\hfill \end{array}$$

Consequently, the compact-vector version of algorithm (6) is
where $\mathsf{\Gamma}=\mathrm{diag}\left\{{\mathsf{\Gamma}}_{1},{\mathsf{\Gamma}}_{2},\cdots ,{\mathsf{\Gamma}}_{n}\right\}$, and $K=\mathrm{diag}\left\{{K}_{1},{K}_{2},\cdots ,{K}_{n}\right\}$.

$$\begin{array}{cc}\hfill & {u}_{k+1}\left(t\right)\hfill \\ =& {u}_{k}\left(t\right)+\mathsf{\Gamma}{\dot{\xi}}_{k}\left(t\right)+K\overline{sign}\left({\xi}_{k}\left(t\right)\right){f}^{\alpha}\left({\xi}_{k}\left(t\right)\right),\hfill \end{array}$$

Substituting (10) into (13), we have

$$\begin{array}{cc}\hfill & {u}_{k+1}\left(t\right)\hfill \\ =& {u}_{k}\left(t\right)+\mathsf{\Gamma}\left((L+S)\otimes {I}_{m}\right){\dot{e}}_{k}\left(t\right)\hfill \\ & +K\overline{sign}\left({\xi}_{k}\left(t\right)\right){f}^{\alpha}\left(\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(t\right)\right).\hfill \end{array}$$

To realize the consensus tracking, we need the following assumption on the initial states of agents.

**Assumption**

**2.**

The initial tracking error resetting condition is satisfied for all agents, i.e., ${\xi}_{i,k}\left(0\right)=0$.

Now, we go to the main result, and some important lemmas are listed first.

**Lemma**

**1.**

[25] Let $x\left(t\right)$, $c\left(t\right)$, and $a\left(t\right)$ be real-valued continuous functions on $[0,T]$ with $a\left(t\right)\ge 0$. If
then,

$$x\left(t\right)\le c\left(t\right)+{\int}_{0}^{t}a\left(\tau \right)x\left(\tau \right)d\tau ,t\in [0,T],$$

$$x\left(t\right)\le c\left(t\right)+{\int}_{0}^{t}a\left(\tau \right)c\left(t\right){e}^{{\int}_{\tau}^{t}a\left(\sigma \right)d\sigma}d\tau ,t\in [0,T].$$
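A numerical illustration of Lemma 1 under assumed data: the function $x(t)=0.9{c}_{0}{e}^{{a}_{0}t}$ satisfies the integral hypothesis with $c(t)={c}_{0}$ and $a(t)={a}_{0}$, and the lemma's bound ${c}_{0}{e}^{{a}_{0}t}$ indeed dominates it.

```python
import numpy as np

a0, c0 = 0.8, 1.0                      # assumed constants
t = np.linspace(0.0, 1.0, 501)
dt = t[1] - t[0]
x = 0.9 * c0 * np.exp(a0 * t)          # candidate function

# hypothesis of the lemma: x(t) <= c0 + int_0^t a0*x(s) ds  (trapezoid rule)
cum = np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) * dt / 2.0)))
hyp_rhs = c0 + a0 * cum
bound = c0 * np.exp(a0 * t)            # conclusion of the lemma (c(t) constant)

print(np.all(x <= hyp_rhs + 1e-9), np.all(x <= bound + 1e-9))
```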

**Lemma**

**2.**

[25] Let ${\left\{{b}_{k}\right\}}_{k\ge 0}({b}_{k}\ge 0)$ be a constant sequence that converges to zero. Define operators ${Q}_{k}:{C}_{r}[0,T]\to {C}_{r}[0,T]$ satisfying
where $M\ge 1$ is a constant, and ${C}_{r}[0,T]$ is equipped with the maximum norm. Suppose that $P\left(t\right)$ is a continuous $r\times r$ matrix function, and let $P:{C}_{r}[0,T]\to {C}_{r}[0,T]$ satisfy
Then, $\underset{n\to \infty}{lim}(P+{Q}_{n})\cdots (P+{Q}_{0})\left(u\right)\left(t\right)=0$ holds uniformly in t, if $\rho \left(P\right)<1$.

$$\|{Q}_{k}\left(u\right)\left(t\right)\|\le M({b}_{k}+{\int}_{0}^{t}\|u\left(s\right)\|ds),$$

$$P\left(u\right)\left(t\right)=P\left(t\right)u\left(t\right).$$

**Theorem**

**1.**

Consider the heterogeneous multiagent systems (2) with the novel PD-type ILC consensus algorithm (6). With Assumptions 1 and 2, the following agents track the leader’s trajectory as $k\to \infty $, i.e., ${lim}_{k\to \infty}{e}_{k}\left(t\right)=0,t\in [0,T]$, if the following inequality holds,

$$\rho \left({I}_{mn}-CB\mathsf{\Gamma}((L+S)\otimes {I}_{m})\right)<1.$$

**Proof.**

From Assumption 2, we have ${x}_{k+1}\left(0\right)-{x}_{k}\left(0\right)=0$. Integrating the dynamical system (9) and substituting (14), we get

$$\begin{array}{cc}\hfill & {x}_{k+1}\left(t\right)-{x}_{k}\left(t\right)\hfill \\ =& {x}_{k+1}\left(0\right)-{x}_{k}\left(0\right)+{\int}_{0}^{t}A\left({x}_{k+1}\left(\tau \right)-{x}_{k}\left(\tau \right)\right)d\tau \hfill \\ & +{\int}_{0}^{t}B\left({u}_{k+1}\left(\tau \right)-{u}_{k}\left(\tau \right)\right)d\tau \hfill \\ =& A{\int}_{0}^{t}\left({x}_{k+1}\left(\tau \right)-{x}_{k}\left(\tau \right)\right)d\tau +B\mathsf{\Gamma}((L+S)\otimes {I}_{m}){e}_{k}\left(t\right)\hfill \\ & +BK{\int}_{0}^{t}\overline{sign}\left({\xi}_{k}\left(\tau \right)\right){f}^{\alpha}\left(\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(\tau \right)\right)d\tau .\hfill \end{array}$$

Then, taking norms on two sides of (16) yields

$$\begin{array}{cc}\hfill & \|{x}_{k+1}\left(t\right)-{x}_{k}\left(t\right)\|\hfill \\ \le & \|A\|{\int}_{0}^{t}\|{x}_{k+1}\left(\tau \right)-{x}_{k}\left(\tau \right)\|d\tau +\|B\mathsf{\Gamma}((L+S)\otimes {I}_{m})\|\|{e}_{k}\left(t\right)\|\hfill \\ & +\|BK\|{\int}_{0}^{t}\|\overline{sign}\left({\xi}_{k}\left(\tau \right)\right)\|\|{f}^{\alpha}\left(\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(\tau \right)\right)\|d\tau \hfill \\ \le & \|A\|{\int}_{0}^{t}\|{x}_{k+1}\left(\tau \right)-{x}_{k}\left(\tau \right)\|d\tau +\|B\mathsf{\Gamma}((L+S)\otimes {I}_{m})\|\|{e}_{k}\left(t\right)\|\hfill \\ & +\|BK\|{\int}_{0}^{t}\|{f}^{\alpha}\left(\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(\tau \right)\right)\|d\tau .\hfill \end{array}$$

With Lemma 1, Equation (17) becomes
where ${f}^{\alpha}\left(\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(t\right)\right)$ is a continuous vector function.

$$\begin{array}{cc}\hfill & \|{x}_{k+1}\left(t\right)-{x}_{k}\left(t\right)\|\hfill \\ \le & \|B\mathsf{\Gamma}((L+S)\otimes {I}_{m})\|\|{e}_{k}\left(t\right)\|+\|BK\|{\int}_{0}^{t}\|{f}^{\alpha}\left(\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(\tau \right)\right)\|d\tau \hfill \\ & +\|A\|{\int}_{0}^{t}[\|B\mathsf{\Gamma}((L+S)\otimes {I}_{m})\|\|{e}_{k}\left(\tau \right)\|+\|BK\|{\int}_{0}^{\tau}\|{f}^{\alpha}\left(\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(s\right)\right)\|ds]{e}^{{\int}_{\tau}^{t}\|A\|d\sigma}d\tau .\hfill \end{array}$$

According to the Hölder continuity of the vector function, there is a constant ${f}_{0}$ such that the following inequality holds,

$$\begin{array}{cc}\hfill & {\int}_{0}^{t}\|{f}^{\alpha}\left(\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(\tau \right)\right)\|d\tau \hfill \\ \le & {\int}_{0}^{t}{f}_{0}\|\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(\tau \right)\|d\tau .\hfill \end{array}$$

We deduce from (18) and (19) that
where ${M}_{1}=\|B\mathsf{\Gamma}((L+S)\otimes {I}_{m})\|,{M}_{2}={f}_{0}\|BK\|\|(L+S)\otimes {I}_{m}\|+\|A\|{e}^{\|A\|T}\|B\mathsf{\Gamma}((L+S)\otimes {I}_{m})\|+{f}_{0}\|A\|T{e}^{\|A\|T}\|BK\|\|(L+S)\otimes {I}_{m}\|.$

$$\begin{array}{cc}\hfill & \|{x}_{k+1}\left(t\right)-{x}_{k}\left(t\right)\|\hfill \\ \le & \|B\mathsf{\Gamma}((L+S)\otimes {I}_{m})\|\|{e}_{k}\left(t\right)\|+{f}_{0}\|BK\|{\int}_{0}^{t}\|\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(\tau \right)\|d\tau \hfill \\ & +\|A\|{\int}_{0}^{t}[\|B\mathsf{\Gamma}((L+S)\otimes {I}_{m})\|\|{e}_{k}\left(\tau \right)\|+{f}_{0}\|BK\|{\int}_{0}^{\tau}\|\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(s\right)\|ds]{e}^{{\int}_{\tau}^{t}\|A\|d\sigma}d\tau \hfill \\ \le & \|B\mathsf{\Gamma}((L+S)\otimes {I}_{m})\|\|{e}_{k}\left(t\right)\|\hfill \\ & +[{f}_{0}\|BK\|\|(L+S)\otimes {I}_{m}\|+\|A\|{e}^{\|A\|T}\|B\mathsf{\Gamma}((L+S)\otimes {I}_{m})\|+{f}_{0}\|A\|T{e}^{\|A\|T}\|BK\|\|(L+S)\otimes {I}_{m}\|]\hfill \\ & \times {\int}_{0}^{t}\|{e}_{k}^{}\left(\tau \right)\|d\tau \hfill \\ =& {M}_{1}\|{e}_{k}\left(t\right)\|+{M}_{2}{\int}_{0}^{t}\|{e}_{k}\left(\tau \right)\|d\tau ,\hfill \end{array}$$

Thus,

$$\begin{array}{cc}\hfill & {\int}_{0}^{t}\|{x}_{k+1}\left(s\right)-{x}_{k}\left(s\right)\|ds\hfill \\ \le & {M}_{1}{\int}_{0}^{t}\|{e}_{k}^{}\left(s\right)\|ds+{M}_{2}{\int}_{0}^{t}{\int}_{0}^{s}\|{e}_{k}^{}\left(\tau \right)\|d\tau ds\hfill \\ \le & ({M}_{1}+{M}_{2}T){\int}_{0}^{t}\|{e}_{k}^{}\left(s\right)\|ds.\hfill \end{array}$$

Next, we analyze the tracking error, and it follows from (3) that

$$\begin{array}{cc}\hfill & {e}_{i,k+1}\left(t\right)\hfill \\ =& {y}_{d}\left(t\right)-{y}_{i,k+1}\left(t\right)\hfill \\ =& {e}_{i,k}\left(t\right)-({y}_{i,k+1}\left(t\right)-{y}_{i,k}\left(t\right)).\hfill \end{array}$$

Then, it follows from (16) that

$$\begin{array}{cc}\hfill & {e}_{k+1}\left(t\right)-{e}_{k}\left(t\right)\hfill \\ =& C({x}_{k}\left(t\right)-{x}_{k+1}\left(t\right))\hfill \\ =& CA{\int}_{0}^{t}\left({x}_{k}\left(\tau \right)-{x}_{k+1}\left(\tau \right)\right)d\tau -CB\mathsf{\Gamma}((L+S)\otimes {I}_{m}){e}_{k}\left(t\right)\hfill \\ & -CBK{\int}_{0}^{t}\overline{sign}\left({\xi}_{k}\left(\tau \right)\right){f}^{\alpha}\left(\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(\tau \right)\right)d\tau .\hfill \end{array}$$

Evidently, we get

$$\begin{array}{cc}\hfill & {e}_{k+1}\left(t\right)\hfill \\ =& ({I}_{mn}-CB\mathsf{\Gamma}((L+S)\otimes {I}_{m})){e}_{k}\left(t\right)+CA{\int}_{0}^{t}\left({x}_{k}\left(\tau \right)-{x}_{k+1}\left(\tau \right)\right)d\tau \hfill \\ & -CBK{\int}_{0}^{t}\overline{sign}\left({\xi}_{k}\left(\tau \right)\right){f}^{\alpha}\left(\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(\tau \right)\right)d\tau .\hfill \end{array}$$

To apply Lemma 2, we define the following operators $P:{C}_{r}[0,T]\to {C}_{r}[0,T]$ and ${Q}_{k}:{C}_{r}[0,T]\to {C}_{r}[0,T]$,
and
It is obvious that P and ${Q}_{k}$ satisfy the definitions in Lemma 2.

$$P\left({e}_{k}\right)\left(t\right)=({I}_{mn}-CB\mathsf{\Gamma}((L+S)\otimes {I}_{m})){e}_{k}\left(t\right),$$

$${Q}_{k}\left({e}_{k}\right)\left(t\right)=CA{\int}_{0}^{t}\left({x}_{k}\left(\tau \right)-{x}_{k+1}\left(\tau \right)\right)d\tau -CBK{\int}_{0}^{t}\overline{sign}\left({\xi}_{k}\left(\tau \right)\right){f}^{\alpha}\left(\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(\tau \right)\right)d\tau .$$

Referring to Equations (25) and (26), Equation (24) becomes

$$\begin{array}{cc}\hfill & {e}_{k+1}\left(t\right)\hfill \\ =& P\left({e}_{k}\right)\left(t\right)+{Q}_{k}\left({e}_{k}\right)\left(t\right)\hfill \\ =& (P+{Q}_{k})\left({e}_{k}\right)\left(t\right)\hfill \\ =& (P+{Q}_{k})(P+{Q}_{k-1})\left({e}_{k-1}\right)\left(t\right)\hfill \\ =& (P+{Q}_{k})(P+{Q}_{k-1})\cdots (P+{Q}_{0})\left({e}_{0}\right)\left(t\right).\hfill \end{array}$$

Taking the norm of both sides of Equation (26) and substituting Equations (19) and (21) into it, we get
where $M=max(1,\|CA\|({M}_{1}+{M}_{2}T)+{f}_{0}\|CBK\|\|(L+S)\otimes {I}_{m}\|)$. Consequently, the operator ${Q}_{k}$ satisfies the condition of the operator defined in Lemma 2. Since condition (15) guarantees $\rho \left(P\right)<1$, applying Lemma 2 to the composition $(P+{Q}_{k})\cdots (P+{Q}_{0})\left({e}_{0}\right)\left(t\right)$ in (27) yields $\underset{k\to \infty}{lim}{e}_{k}\left(t\right)=0$ uniformly on $[0,T]$, which completes the proof. □

$$\begin{array}{cc}& \|{Q}_{k}\left({e}_{k}\right)\left(t\right)\|\hfill \\ \le & \|CA\|{\int}_{0}^{t}\|{x}_{k}\left(\tau \right)-{x}_{k+1}\left(\tau \right)\|d\tau \hfill \\ & +\|CBK\|{\int}_{0}^{t}\|\overline{sign}\left({\xi}_{k}\left(\tau \right)\right)\|\|{f}^{\alpha}\left(\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(\tau \right)\right)\|d\tau \hfill \\ \le & \|CA\|{\int}_{0}^{t}\|{x}_{k}\left(\tau \right)-{x}_{k+1}\left(\tau \right)\|d\tau \hfill \\ & +{f}_{0}\|CBK\|{\int}_{0}^{t}\|\left((L+S)\otimes {I}_{m}\right){e}_{k}\left(\tau \right)\|d\tau \hfill \\ \le & (\|CA\|({M}_{1}+{M}_{2}T)+{f}_{0}\|CBK\|\|(L+S)\otimes {I}_{m}\|){\int}_{0}^{t}\|{e}_{k}\left(\tau \right)\|d\tau \hfill \\ \le & M{\int}_{0}^{t}\|{e}_{k}\left(\tau \right)\|d\tau ,\hfill \end{array}$$

In the following, we consider two simple special cases for the heterogeneous multiagent systems (2) and ILC consensus algorithm (6).

**Corollary**

**1.**

Consider the heterogeneous multiagent systems (2) with algorithm (6), and suppose that ${C}_{i}{B}_{i}{\mathsf{\Gamma}}_{i}=\beta \times {I}_{m}$, where $\beta \in R$ is a constant. With Assumptions 1 and 2, the following agents track the leader’s trajectory with $t\in [0,T]$ as $k\to \infty $, if there exists a positive constant $\beta >0$ satisfying that
where ${\lambda}_{i},i=1,2,\cdots ,n$ are eigenvalues of $L+S$.

$$\underset{i=1,2,\cdots ,n}{max}\left|1-\beta {\lambda}_{i}\right|<1,$$

**Proof.**

When ${C}_{i}{B}_{i}{\mathsf{\Gamma}}_{i}=\beta \times {I}_{m}$, the convergence condition in Theorem 1 becomes

$$\rho \left({I}_{mn}-\beta ((L+S)\otimes {I}_{m})\right)<1.$$

Under Assumption 1, $L+S$ is a nonsingular matrix, and its eigenvalues have positive real parts [24]. Therefore, the convergence condition on the spectral radius in Theorem 1 simplifies to the convergence condition on the eigenvalues. □

Condition (30) requires that $\beta {\lambda}_{i},i=1,2,\cdots ,n$ lie in the circle shown in Figure 1.
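The eigenvalue test of Corollary 1 is easy to evaluate numerically; the topology and the range of β below are assumptions for illustration.

```python
import numpy as np

# Assumed topology: leader pins agent 1 (s_1 = 1), edges 1->2, 2->3, and
# 3->1 with weight a_13 = 0.5, so a spanning tree rooted at the leader exists.
LS = np.array([[1.5, 0.0, -0.5],
               [-1.0, 1.0, 0.0],
               [0.0, -1.0, 1.0]])
lams = np.linalg.eigvals(LS)

def satisfies(beta):
    """Eigenvalue test of Corollary 1: max_i |1 - beta*lambda_i| < 1."""
    return np.max(np.abs(1.0 - beta * lams)) < 1.0

feasible = [beta for beta in np.linspace(0.05, 2.0, 40) if satisfies(beta)]
print(len(feasible) > 0, satisfies(0.5))
```

Small positive values of β keep every $\beta {\lambda}_{i}$ inside the unit circle centered at 1, while overly large β pushes the complex eigenvalues out of it.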

Additionally, we can obtain a more conservative distributed condition when the topology is symmetric, i.e., ${a}_{ij}={a}_{ji}$.

**Corollary**

**2.**

For the heterogeneous multiagent system (2) with algorithm (6), we suppose that ${C}_{i}{B}_{i}{\mathsf{\Gamma}}_{i}=\beta \times {I}_{m}$, where $\beta \in R$ is a constant. Assume that the topology of the agents (2) and the leader is symmetric and has a spanning tree rooted at the leader. With Assumption 2, if there exists a positive constant $\beta >0$ satisfying
the following agents track the leader’s trajectory with $t\in [0,T]$ as $k\to \infty $.

$$\sum _{j\in {N}_{i}}{a}_{ij}+{s}_{i}<\frac{2}{\beta},i=1,2,\cdots ,n,$$
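A sketch of Corollary 2 on an assumed symmetric topology: whenever the row-sum condition holds, the eigenvalue test of Corollary 1 passes as well (a Gershgorin-type argument).

```python
import numpy as np

# Assumed symmetric weights a_ij = a_ji on the path 1 - 2 - 3, leader pins
# agent 1; beta is an assumed learning-gain constant.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
s = np.array([1.0, 0.0, 0.0])
LS = np.diag(A.sum(axis=1)) - A + np.diag(s)   # symmetric L + S
beta = 0.5

row_cond = np.all(A.sum(axis=1) + s < 2.0 / beta)            # Corollary 2
eig_cond = np.max(np.abs(1.0 - beta * np.linalg.eigvals(LS))) < 1.0  # Corollary 1
print(row_cond, eig_cond)   # -> True True
```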

## 4. Simulation And Discussion

In order to verify the efficiency of our proposed consensus algorithm, we consider a system consisting of six heterogeneous following agents, given by
The leader’s trajectory is chosen as ${y}_{d}\left(t\right)=0.25{t}^{2}(5-t),t\in [0,5]$.

$$\left\{\begin{array}{c}{\dot{x}}_{1}\left(t\right)=\left[\begin{array}{cc}0& 2\\ -2& -3\end{array}\right]{x}_{1}\left(t\right)+\left[\begin{array}{c}0\\ 1\end{array}\right]{u}_{1}\left(t\right),\phantom{\rule{3.0pt}{0ex}}\hfill \\ {y}_{1}\left(t\right)=\left[\begin{array}{cc}0& 0.1\end{array}\right]{x}_{1}\left(t\right),\hfill \end{array}\right.$$

$$\left\{\begin{array}{c}{\dot{x}}_{2}\left(t\right)=\left[\begin{array}{cc}0.4& -0.2\\ 2& -1\end{array}\right]{x}_{2}\left(t\right)+\left[\begin{array}{c}0\\ 0.4\end{array}\right]{u}_{2}\left(t\right),\phantom{\rule{3.0pt}{0ex}}\hfill \\ {y}_{2}\left(t\right)=\left[\begin{array}{cc}0.2& 1\end{array}\right]{x}_{2}\left(t\right),\hfill \end{array}\right.$$

$$\left\{\begin{array}{c}{\dot{x}}_{i}\left(t\right)=\left[\begin{array}{ccc}1& -0.5& 0\\ 0.1& 0& 0.2\\ 1& -2& -3\end{array}\right]{x}_{i}\left(t\right)+\left[\begin{array}{c}0.1\\ 0\\ 1\end{array}\right]{u}_{i}\left(t\right),\phantom{\rule{3.0pt}{0ex}}\hfill \\ {y}_{i}\left(t\right)=\left[\begin{array}{ccc}0.1& 0.2& 0.4\end{array}\right]{x}_{i}\left(t\right),\begin{array}{cc}& i=\end{array}3,4,\hfill \end{array}\right.$$

$$\left\{\begin{array}{c}{\dot{x}}_{i}\left(t\right)=\left[\begin{array}{cccc}1& 0.2& 0.3& 0.5\\ 0.2& 0.3& 0& 0\\ 0.1& 0& 0.2& 1\\ 1& -4& -1& -3\end{array}\right]{x}_{i}\left(t\right)+\left[\begin{array}{c}0.5\\ 0\\ 0\\ 1\end{array}\right]{u}_{i}\left(t\right),\phantom{\rule{3.0pt}{0ex}}\hfill \\ {y}_{i}\left(t\right)=\left[\begin{array}{cccc}0& 0& 0& 0.2\end{array}\right]{x}_{i}\left(t\right),\begin{array}{cc}& i=\end{array}5,6.\hfill \end{array}\right.$$
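Since the ILC gains act through the products ${C}_{i}{B}_{i}$, it is worth confirming numerically that every agent model above has ${C}_{i}{B}_{i}\ne 0$ (relative degree one). The short check below (Python with NumPy is our choice of tooling, not the authors') evaluates these products from the matrices given above.

```python
import numpy as np

# Output and input matrices of the four agent models above; the third and
# fourth entries are shared by agents 3-4 and 5-6, respectively.
C = [np.array([0.0, 0.1]),
     np.array([0.2, 1.0]),
     np.array([0.1, 0.2, 0.4]),
     np.array([0.0, 0.0, 0.0, 0.2])]
B = [np.array([0.0, 1.0]),
     np.array([0.0, 0.4]),
     np.array([0.1, 0.0, 1.0]),
     np.array([0.5, 0.0, 0.0, 1.0])]

CB = [c @ b for c, b in zip(C, B)]   # approximately [0.1, 0.4, 0.41, 0.2]
print(CB)
assert all(abs(v) > 0 for v in CB)   # each agent has relative degree one
```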

The digraph formed by the six following agents and the leader is shown in Figure 2, where vertex 0 represents the leader and the vertices 1–6 represent the following agents.

The Laplacian of the six following agents is

$$L=\left[\begin{array}{cccccc}2& 0& -1& -1& 0& 0\\ -1& 1& 0& 0& 0& 0\\ 0& -1& 1& 0& 0& 0\\ 0& 0& 0& 1& -1& 0\\ 0& 0& -1& 0& 2& -1\\ 0& 0& 0& -1& 0& 1\end{array}\right],$$

and $S=\mathrm{diag}\{1.5,0,0,2,0,0\}$, so the eigenvalues of $L+S$ are $2.6624\pm 0.5623j,1.8774\pm 0.7449j,0.3551,0.6753$.

The parameters of fractional-power ILC algorithm (14) are chosen as $\mathsf{\Gamma}=\mathrm{diag}\{4,1,1,1,2,2\}$ and $K=\mathrm{diag}\{6,1.2,1.5,1.5,3,3\}$, and it is easy to verify that $\rho \left({I}_{mn}-CB\mathsf{\Gamma}((L+S)\otimes {I}_{m})\right)=0.86<1$, i.e., these parameters chosen above satisfy the condition (15) of Theorem 1.
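This spectral-radius check can be reproduced numerically. The sketch below rebuilds $L$, $S$, and the diagonal gain products ${C}_{i}{B}_{i}{\mathsf{\Gamma}}_{i}$ from the data above (here $m=1$, so the Kronecker product with ${I}_{m}$ is trivial) and verifies condition (15); the code and tooling are ours, not the authors'.

```python
import numpy as np

# Laplacian and leader-weight matrix from the example in Section 4.
L = np.array([[ 2,  0, -1, -1,  0,  0],
              [-1,  1,  0,  0,  0,  0],
              [ 0, -1,  1,  0,  0,  0],
              [ 0,  0,  0,  1, -1,  0],
              [ 0,  0, -1,  0,  2, -1],
              [ 0,  0,  0, -1,  0,  1]], dtype=float)
S = np.diag([1.5, 0.0, 0.0, 2.0, 0.0, 0.0])

# C_i B_i for the six agents (scalars, since m = 1) and Gamma = diag{4,1,1,1,2,2}.
CB = np.array([0.1, 0.4, 0.41, 0.41, 0.2, 0.2])
Gamma = np.array([4.0, 1.0, 1.0, 1.0, 2.0, 2.0])

M = L + S
assert np.all(np.linalg.eigvals(M).real > 0)  # spanning tree rooted at the leader

rho = np.max(np.abs(np.linalg.eigvals(np.eye(6) - np.diag(CB * Gamma) @ M)))
print(rho)        # the paper reports rho = 0.86 < 1
assert rho < 1.0  # condition (15) of Theorem 1 holds
```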

By Assumption 2, the initial tracking errors are zero, so all agents are reset to the same initial position at the start of each iteration. The control input signals of all agents are set to zero at the first iteration.

We choose $\alpha =0.7$; the tracking processes at the 10th and 70th iterations are shown in Figure 3 and Figure 4, respectively. Figure 5 shows the control input signals when the six following agents fully track the leader’s trajectory. In addition, Figure 6 illustrates the evolution of each agent’s tracking error ${e}_{i,k}\left(t\right)$. Taking ${max}_{t\in [0,T]}\left|{e}_{i,k}\left(t\right)\right|<{10}^{-3}$ as the precision requirement, Figure 6 clearly shows that all the following agents track the leader’s trajectory by the 70th iteration.
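To make the fractional-power mechanism concrete, the following minimal scalar sketch mimics the idea of a PD-type update on the signal $\mathrm{sig}(e)^{\alpha}=|e|^{\alpha}\mathrm{sign}(e)$. It is an illustration under our own assumptions — a single first-order plant $\dot{x}=-x+u$, $y=x$, reference $\sin(\pi t)$, and hand-picked gains — not the paper's multiagent algorithm (14) or its simulation setup.

```python
import numpy as np

def sig(v, alpha):
    """Fractional-power signal sig(v)^alpha = |v|^alpha * sign(v)."""
    return np.abs(v) ** alpha * np.sign(v)

def run_ilc(alpha, iters=30, gamma=0.5, kappa=0.2):
    t = np.linspace(0.0, 1.0, 101)
    dt = t[1] - t[0]
    yd = np.sin(np.pi * t)            # assumed reference trajectory
    u = np.zeros_like(t)              # zero input at the first iteration
    errs = []
    for _ in range(iters):
        x = np.zeros_like(t)          # identical zero initial state each run
        for j in range(len(t) - 1):   # Euler simulation of x' = -x + u, y = x
            x[j + 1] = x[j] + dt * (-x[j] + u[j])
        e = yd - x                    # tracking error of this iteration
        errs.append(np.max(np.abs(e)))
        # PD-type update on the fractional-power error signals
        u = u + gamma * sig(np.gradient(e, dt), alpha) + kappa * sig(e, alpha)
    return errs

errs = run_ilc(alpha=0.7)
print(errs[0], errs[-1])  # the maximum error shrinks over iterations
```

With $0<\alpha<1$ the update term $|e|^{\alpha}$ exceeds $|e|$ once the error drops below one, which is the amplification effect the paper credits for the faster convergence.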

Then, we choose $\alpha =0.85$ with the other parameters unchanged. Figure 7 shows the tracking errors of the six following agents; with $\alpha =0.85$, the novel ILC consensus algorithm needs 80 iterations for all agents to completely track the leader’s trajectory. Obviously, different control parameters yield distinct consensus convergence rates, and searching for the best control parameters could follow the main idea of the agent-based simulator for tourist urban routes in the work by the authors of [26].

In order to compare the tracking performance of the novel ILC consensus algorithm based on fractional-power error signals with that of the traditional PD-type ILC algorithm, we choose $\alpha =1$, so that algorithm (14) reduces to the traditional PD-type ILC algorithm. In Figure 8, all agents with the traditional ILC algorithm completely track the leader’s trajectory after the 100th iteration, but the convergence rate is slower than that of the ILC algorithm based on fractional-power error signals with $\alpha \in [0.7,1)$. As discussed above, fractional-power error signals have a beneficial influence on the convergence rate.

In order to account for external environmental effects as well as model uncertainties of the multiagent systems, we use the following model to illustrate the robustness of the proposed ILC algorithm:

$$\begin{array}{c}\hfill \left\{\begin{array}{c}{\dot{x}}_{i,k}\left(t\right)={A}_{i}{x}_{i,k}\left(t\right)+{f}_{i}\left({x}_{i,k}\left(t\right)\right)+{B}_{i}{u}_{i,k}\left(t\right)+{w}_{i,k}\left(t\right)\hfill \\ {y}_{i,k}\left(t\right)={C}_{i}{x}_{i,k}\left(t\right)\hfill \end{array}\right.,i=1,2,3,4,5,6,\end{array}$$

where ${f}_{i}\left({x}_{i,k}\left(t\right)\right)$ represents unmodeled dynamics and ${w}_{i,k}\left(t\right)$ denotes a repetitive disturbance. We choose ${f}_{1}\left({x}_{1}\right)={x}_{1}$, ${f}_{2}\left({x}_{2}\right)=sin{x}_{2}$, ${f}_{3}\left({x}_{3}\right)=cos{x}_{3}$, ${f}_{4}\left({x}_{4}\right)={x}_{4}$, ${f}_{5}\left({x}_{5}\right)=sin{x}_{5}$, ${f}_{6}\left({x}_{6}\right)=cos{x}_{6}$, ${w}_{1,k}\left(t\right)={w}_{2,k}\left(t\right)=\left[\begin{array}{c}sin\left(10\pi t\right)\\ cos\left(10\pi t\right)\end{array}\right]$, ${w}_{3,k}\left(t\right)={w}_{4,k}\left(t\right)=0.5\left[\begin{array}{c}\begin{array}{c}cos\left(10\pi t\right)\\ 2sin\left(10\pi t\right)\end{array}\\ 3sin\left(10\pi t\right)\end{array}\right]$, ${w}_{5,k}\left(t\right)={w}_{6,k}\left(t\right)=0.5\left[\begin{array}{c}sin\left(10\pi t\right)\\ 2sin\left(10\pi t\right)\\ 3cos\left(10\pi t\right)\\ 4cos\left(10\pi t\right)\end{array}\right]$, with the other parameters unchanged. Figure 9 shows that the proposed ILC algorithm is sufficiently robust to model uncertainties and repetitive disturbances.

## 5. Conclusions

In this paper, we adopt a PD-type ILC consensus algorithm to solve the consensus tracking problem for a class of linear heterogeneous multiagent systems. In the algorithm, the usual proportional tracking-error term is replaced by a fractional-power tracking error. Under an interconnection topology containing a spanning tree rooted at the leader, which represents the desired trajectory, a convergence condition is obtained in the light of graph theory and operator theory. Fractional-power tracking-error feedback appropriately amplifies the tracking error when it is small, and therefore achieves a faster convergence rate than integer-power tracking-error feedback.

## Author Contributions

All authors contributed to the initial motivation of the work, to the research and numerical experiments and to the writing. All authors have read and approved the final manuscript.

## Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 61973139, 61473138), the Six Talent Peaks Project in Jiangsu Province (Grant No. 2015-DZXX-011), and the Natural Science Foundation of Jiangsu Province (Grant No. BK20151130).

## Conflicts of Interest

The authors declare no conflicts of interest.

## References

- Ge, X.; Han, Q.L.; Ding, D. A survey on recent advances in distributed sampled-data cooperative control of multi-agent systems. Neurocomputing
**2017**, 275, 1684–1701. [Google Scholar] [CrossRef] - Seyboth, G.S.; Wei, R.; Allgöwer, F. Cooperative control of linear multiagent systems via distributed output regulation and transient synchronization. Automatica
**2016**, 68, 132–139. [Google Scholar] [CrossRef] - Liu, K.X.; Duan, P.H.; Duan, Z.S.; Cai, H.; Lu, J. Leader-following consensus of multiagent systems with switching networks and event-triggered control. IEEE Trans. Circuits Syst. I Reg. Pap.
**2018**, 65, 1696–1706. [Google Scholar] [CrossRef] - Yu, J.Y.; Wang, L. Group consensus in multiagent systems with switching topologies and communication delays. Syst. Control Lett.
**2010**, 59, 340–348. [Google Scholar] [CrossRef] - Wang, H.; Yu, W.W.; Wen, G.H. Finite-time bipartite consensus for multiagent systems on directed signed networks. IEEE Trans. Circuits Syst. I Reg. Pap.
**2018**, 65, 4336–4348. [Google Scholar] [CrossRef] - Bristow, D.A.; Tharayil, M.; Alleyne, A.G. A survey of iterative learning control. IEEE Control Syst. Mag.
**2006**, 26, 96–114. [Google Scholar] - Chen, Y.Y.; Chu, B.; Freeman, C.T. Point-to-point iterative learning control with optimal tracking time allocation. IEEE Trans. Control Syst. Technol.
**2018**, 26, 1685–1698. [Google Scholar] [CrossRef] - Jin, X. Iterative learning control for non-repetitive trajectory tracking of robot manipulators with joint position constraints and actuator faults. Int. J. Adapt. Control Signal Process.
**2017**, 31, 859–875. [Google Scholar] [CrossRef] - Lambrecht, J.; Kleinsorge, M.; Rosenstrauch, M. Spatial programming for industrial robots through task demonstration. Int. J. Adv. Robot Syst.
**2013**, 10, 254. [Google Scholar] [CrossRef] - Fen, Y.; Xu, J.M. Segment filtering iterative learning control for motor servo systems. In Proceedings of the 27th Chinese Control and Decision Conference, Qingdao, China, 23–25 May 2015; pp. 6103–6106. [Google Scholar]
- Zhang, T.; Li, J.M. Iterative Learning Control for multiagent systems with finite-leveled sigma-delta quantization and random packet losses. IEEE Trans. Circuits Syst. I Reg. Pap.
**2017**, 64, 2171–2181. [Google Scholar] [CrossRef] - Li, J.S.; Liu, S.Y.; Li, J.M. Observer-based distributed adaptive iterative learning control for linear multiagent systems. Int. J. Syst. Sci.
**2017**, 48, 2948–2955. [Google Scholar] [CrossRef] - Wu, Q.F.; Liu, S. Iterative learning control of multiagent consensus with initial error correction. Comput. Eng. Appl.
**2014**, 50, 29–35. [Google Scholar] - Yang, S.P.; Xu, J.X.; Yu, M. An iterative learning control approach for synchronization of multiagent systems under iteration-varying graph. In Proceedings of the 52nd IEEE Conference on Decision and Control, Florence, Italy, 10–13 December 2013; pp. 6682–6687. [Google Scholar]
- Yang, S.P.; Xu, J.X.; Huang, D.Q. Optimal iterative learning control design for multiagent systems consensus tracking. Syst. Control Lett.
**2014**, 69, 80–89. [Google Scholar] [CrossRef] - Li, J.S.; Li, J.M. Iterative learning control approach for a kind of heterogeneous multiagent systems with distributed initial state learning. Appl. Math. Comput.
**2015**, 265, 1044–1057. [Google Scholar] - Lan, T.Y.; Lin, H. Accelerated iterative learning control algorithm with variable gain and adjustment of interval in sense of Lebesgue-p norm. Control Decis.
**2017**, 32, 2071–2075. [Google Scholar] - Tao, H.F.; Dong, X.Q.; Yang, H.Z. Optimal algorithm and application for point to point iterative learning control via updating reference trajectory. Control Theory Appl.
**2016**, 9, 1207–1213. [Google Scholar] - Sun, M.X.; Wang, D. Closed-loop iterative learning control for non-linear systems with initial shifts. Int. J. Adapt. Contr. Signal Process.
**2002**, 16, 515–538. [Google Scholar] [CrossRef] - Cao, W.; Cong, W.; Li, J. Iterative learning control of variable gain with initial state study. Control Decis.
**2012**, 27, 473–476. [Google Scholar] - Sun, M.X.; Bi, H.B.; Zhou, G.L.; Wang, H.F. Feedback assisted PD type iterative learning control: Initial value problem and correction strategy. Acta Autom. Sin.
**2015**, 41, 157–164. [Google Scholar] - Yang, S.P.; Xu, J.X.; Li, X.F. Iterative learning control with input sharing for multiagent consensus tracking. Syst. Control Lett.
**2016**, 94, 97–106. [Google Scholar] [CrossRef] - Mishra, R.K.; Sinha, A. Event-triggered sliding mode based consensus tracking in second order heterogeneous nonlinear multiagent systems. Eur. J. Control
**2019**, 45, 30–44. [Google Scholar] [CrossRef] - Lin, Z.; Francis, B.; Maggiore, M. Necessary and sufficient graphical conditions for formation control of unicycles. IEEE Trans. Autom. Control
**2005**, 50, 121–127. [Google Scholar] - Lin, H.; Wang, L. Iterative Learning Control Theory; Northwestern Polytechnical University Press: Xi’an, China, 1998. [Google Scholar]
- García-Magariño, I. ABSTUR: An Agent-based Simulator for Tourist Urban Routes. Expert Syst. Appl.
**2015**, 42, 5287–5302. [Google Scholar] [CrossRef]

**Figure 6.**Tracking error of ILC algorithm based on fractional-power error signals with $\alpha =0.7$.

**Figure 7.**Tracking error of ILC algorithm based on fractional-power error signals with $\alpha =0.85$.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).