Fixed-Time Distributed Time-Varying Optimization for Nonlinear Fractional-Order Multiagent Systems With Unbalanced Digraphs

Abstract: This paper investigates the problem of fixed-time distributed time-varying optimization of a nonlinear fractional-order multiagent system (FOMAS) over a weight-unbalanced directed graph (digraph), where heterogeneous unknown nonlinear functions and disturbances are involved. The aim is to cooperatively minimize a convex time-varying global cost function, given by the sum of time-varying local cost functions, within a fixed time, where each time-varying local cost function need not be convex. Using a three-step design procedure, a fully distributed fixed-time optimization algorithm is constructed to achieve the objective. The first step is to design a fully distributed fixed-time estimator that estimates some centralized optimization terms within a fixed time T_0. The second step is to develop a novel discontinuous fixed-time sliding-mode algorithm with a nominal controller that drives all agents to the sliding-mode surface within a fixed time T_1, on which the dynamics of each agent are described by a single-integrator MAS with the nominal controller. In the third step, a novel estimator-based fully distributed fixed-time nominal controller for the single-integrator MAS is presented to guarantee that all agents reach consensus within a fixed time T_2 and afterwards minimize the convex time-varying global cost function within a fixed time T_3. The upper bound of each fixed time T_m (m = 0, 1, 2, 3) is given explicitly and is independent of the initial states. Finally, a numerical example is provided to validate the results.


Introduction
More recently, distributed optimization in a group of multiagent systems (MASs) has received considerable attention owing to its broad applications, including but not limited to sensor networks [1], economic dispatch of power grids [2,3] and wireless resource management [4]. The primary goal of distributed optimization in MASs is to cooperatively minimize the sum of local cost functions (termed the global cost function), where each agent has a local cost function known only to itself. Existing results mainly focus on implementing distributed optimization of MASs in a discrete-time manner [1,5,6], which differs from the case where each agent has continuous-time dynamics. Since many physical systems have continuous-time dynamics [7], the distributed optimization problem of MASs with various continuous-time dynamics has been extensively studied, such as first-order MASs [8,9], second-order MASs [10,11] and higher-order MASs [12,13]. Normally, the implementation of distributed optimization depends on several factors, namely, algorithm performance, the information-exchange network topology, the agents' inherent dynamics, and the characteristics of the local cost functions. Existing works on these four factors are discussed below to point out their corresponding limitations, and further to bring out the research motivation and contribution of this paper.

Related Work and Its Limitations
(1) Fixed-Time Optimal Convergence: One of the paramount criteria for evaluating the performance of a distributed optimization algorithm is the optimal convergence rate. Nonetheless, in the aforementioned discrete-time and continuous-time optimization algorithms, all agents reach an agreement and converge to the global optimal solution only as time approaches infinity. To satisfy the finite-time convergence requirement of some practical tasks, finite-time distributed optimization algorithms are designed in [3,14,15]. For these algorithms, however, the convergence time depends directly on the initial states, so a predetermined convergence time cannot be guaranteed unless certain initial state data are supplied in advance. It is therefore necessary to design fixed-time distributed optimization algorithms whose convergence time is independent of any initial states. So far, only a little work on the fixed-time distributed optimization problem of MASs [16-19] has been reported, utilizing the idea of fixed-time stability [20]. Note that all the aforementioned finite- and fixed-time optimization algorithms have some constraints, i.e., each local cost function and its gradient are respectively required to be (strongly) convex and/or Lipschitz, and the agents' topology is assumed to be undirected.
(2) Weight-Unbalanced Directed Topology: The aforesaid optimization algorithms are designed under the assumption that the information-exchange network topology among agents is either undirected or directed but weight-balanced [21]; they generally fail when applied to unbalanced digraphs. Unbalanced digraphs present a unique challenge that has received widespread recognition in the optimization community [5]. Existing attempts to handle weight-unbalanced digraphs rely on certain additional information, such as the in-neighbors and out-degree information [22,23], the out-neighbors and in-degree information [24], or the left eigenvector associated with the zero eigenvalue of the Laplacian matrix [10,25,26], which might not be available in practice [5,27]. Without employing such information, the distributed optimization problem of MASs over unbalanced digraphs is studied by designing a scaling-function-based discrete-time algorithm in [5] and a continuous-time coordination algorithm in [27]. However, the algorithms in [5,27] depend on certain global information, assume that each local cost function is strongly convex, and can only achieve asymptotic optimal convergence. These limitations hinder the algorithms' implementation in real applications.
(3) Heterogeneous Nonlinear Fractional-Order Dynamics: Additionally, existing works on distributed optimization usually assume that all agents have homogeneous linear dynamics and/or share a single dynamical mode, i.e., single-integrator dynamics [28,29], double-integrator dynamics [10,11] or general linear dynamics [12]. Oftentimes, however, many physical systems are inherently nonlinear and subject to heterogeneous disturbances. The tools used in the homogeneous and linear frameworks do not carry over to heterogeneous and nonlinear ones, and optimization algorithms designed for homogeneous linear MASs are generally not suitable for heterogeneous nonlinear MASs. The distributed optimization problems for MASs with heterogeneous disturbances and for heterogeneous nonlinear MASs are respectively studied in [8,14,18] and in [13,30], where the considered topologies are either undirected [8,13,14] or directed but weight-balanced [30]. Note that all the MASs and controllers above have integer-order dynamics. A fractional-order (FO) system can correctly describe the dynamics of anomalous systems with memory or hereditary features [31,32]. In addition, FO controllers are more reliable and offer more design flexibility than integer-order controllers [33]. The FO system is thus attracting growing attention and offers a wide range of practical applications [34]. Recently, the authors in [35] studied the distributed optimization problem of nonlinear uncertain fractional-order MASs (FOMASs) by designing an adaptive surface control protocol with an asymptotic optimal convergence rate. To the best of our knowledge, the fixed-time distributed optimization of FOMASs with heterogeneous nonlinear functions and disturbances over a weight-unbalanced digraph has not been reported.
(4) Time-Varying Local Cost Functions: Furthermore, while time-varying local cost functions frequently arise in applications such as tracking a time-varying optimal solution, the studies listed above primarily focus on the distributed optimization problem with time-invariant local cost functions. The authors of [11,13,15,19,28,36,37] study the distributed optimization problem with time-varying local cost functions, also known as the distributed time-varying optimization problem, which is more challenging to solve because its optimal point (trajectory) may be time-varying. A connectivity-preserving optimization algorithm is developed in [11] to make all agents achieve consensus within a finite time, with the consensus values converging to a time-varying optimal trajectory asymptotically. Via distributed average tracking, optimization algorithms are designed to cooperatively minimize the sum of time-varying local cost functions within a finite time in [15] and within a fixed time in [19]. At present, the aforementioned distributed time-varying optimization results have several limitations. For example, the network topologies considered in [11,13,15,19,28,37] are undirected, and the algorithms in [11,13,28,37] achieve only asymptotic convergence. Moreover, the authors in [11,15,19,28,37] consider only integrator-type agent dynamics, and each time-varying local cost function and its Hessian are required to be convex and invertible, respectively, in [11,13,37]. This paper intends to overcome the aforementioned limitations.

Research Motivation
Motivated by the discussion of the aforementioned four factors, this paper aims to solve the fixed-time distributed time-varying optimization problem of an FOMAS with time-varying local cost functions, heterogeneous unknown nonlinear functions and disturbances over a weight-unbalanced directed network. In this paper, each time-varying local cost function and its Hessian are not required to be convex and invertible, respectively. As discussed above, the four factors (i.e., fixed-time optimal convergence, weight-unbalanced directed topology, heterogeneous nonlinear fractional-order dynamics, and time-varying local cost functions) each have their own motivation, and existing works on them have their corresponding limitations or constraints. Fixed-time optimal convergence offers a fast convergence rate and meets the demand for a predefined convergence time independent of any initial states. The benefit of a weight-unbalanced directed topology is that it requires fewer communication channels and less equipment. Heterogeneous nonlinear dynamics and time-varying local cost functions are common in many physical systems and applications. From a practical perspective, the above four factors are comprehensively considered in a unified framework, and the existing limitations of related works are removed in this paper. The studied problem is very challenging, and the derived results are of great significance in both theory and practice.

Research Contribution
This paper focuses on the distributed optimization problem of an FOMAS through comprehensive consideration of the following four factors: fixed-time optimal convergence, weight-unbalanced directed topology, heterogeneous nonlinear agent dynamics, and time-varying local cost functions. By integrating the idea of fixed-time stability [20], the distributed estimator (or tracking control) method [14], the sliding-mode control technique and the distributed leaderless consensus control method, this paper proposes an estimator-based fully distributed fixed-time optimization algorithm to solve this problem. Compared with existing related works, the research contribution of this work is threefold.
(1) A fixed-time optimal convergence protocol independent of any initial states is designed; this is different from the asymptotic optimal convergence protocols designed in [5,30,37] and the finite-time optimal convergence protocols in [3,14,15], which depend on the initial states. Fixed-time optimal convergence protocols are also designed in [16-19], but the topologies considered there are undirected. (2) A weight-unbalanced directed topology is considered without employing certain additional information; it includes the undirected topologies considered in [8,16,18], the weight-balanced directed topologies in [19,21,30], and the weight-unbalanced directed topologies employing certain additional information in [23-26] as special cases. A weight-unbalanced directed topology without such additional information is also considered in [5,27], but the protocols designed there achieve only asymptotic optimal convergence. (3) An FOMAS with time-varying local cost functions, heterogeneous unknown nonlinear functions and disturbances is investigated; this is in contrast to the MASs with linear, homogeneous integer-order dynamics studied in [9,12,28,29]. Note that each local cost function is required to be convex in [8,11-13,29] or strongly convex in [5,17,27,37], and the Hessians of the local cost functions are required to be invertible and equal in [9-11,13,37]. In this paper, by contrast, only the global cost function is required to be convex, not necessarily each local cost function, and only the Hessian of the global cost function is required to be invertible, not necessarily the Hessian of each local cost function.
In sum, the work of this paper is an extension of and/or an improvement on the above-mentioned works.

Notations
Let R, R_+, R^n and R^{n×m} denote the sets of all real numbers, nonnegative real numbers, n-dimensional real column vectors and n × m real matrices, respectively. Symbols ⊗, I_N and I_N respectively represent the Kronecker product, the index set {1, 2, ..., N} and the N × N identity matrix. For a real number p > 0 and a vector x = [x_1, x_2, ..., x_n]^T ∈ R^n, x^p = [x_1^p, x_2^p, ..., x_n^p]^T, sig^p(x) = [sig^p(x_1), sig^p(x_2), ..., sig^p(x_n)]^T with sig^p(x_i) = |x_i|^p sgn(x_i), and sgn(x) = [sgn(x_1), sgn(x_2), ..., sgn(x_n)]^T, where sgn(x_i) represents the signum function of x_i. The symbols d/dt and ∂/∂x_i represent the differential operator with respect to t and the partial differential operator with respect to x_i, respectively. For a twice differentiable function f(x, t) : R^n × R_+ → R, its gradient with respect to x is denoted by ∇f(x, t) = ∂f(x, t)/∂x and its Hessian by ∇²f(x, t) = ∂²f(x, t)/∂x².
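The elementwise notation sig^p(x) = |x|^p sgn(x) above is used throughout the paper's protocols. A minimal NumPy sketch of this notation (illustration only; the function name is mine, not the paper's):

```python
import numpy as np

def sig(x, p):
    """Elementwise sig^p(x) = |x|^p * sgn(x), as defined in the Notations."""
    x = np.asarray(x, dtype=float)
    return np.abs(x) ** p * np.sign(x)

# sig^{0.5} preserves the sign of each entry while compressing magnitude.
x = np.array([-4.0, 0.0, 9.0])
print(sig(x, 0.5))  # -> [-2.  0.  3.]
```

Note that for 0 < p < 1 the map is continuous but non-Lipschitz at the origin, which is exactly what enables finite/fixed-time convergence in the later protocols.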

Fractional Integral and Derivative
In this paper, let p ∈ (0, 1]. The following definitions and property are available in [38].
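The fractional-derivative definitions from [38] are not reproduced in this chunk; to give the flavor, the following is a standard Grünwald–Letnikov numerical sketch (my own, not the paper's), which for 0 < p ≤ 1 and f(0) = 0 coincides with the Caputo derivative used in such works:

```python
import math

def gl_fractional_derivative(f, t, p, h=1e-3):
    """Grunwald-Letnikov approximation of the order-p derivative of f at t.
    For 0 < p <= 1 and f(0) = 0 it coincides with the Caputo derivative."""
    n = int(round(t / h))
    coeff, acc = 1.0, 0.0
    for k in range(n + 1):
        acc += coeff * f(t - k * h)          # sum of w_k * f(t - k h)
        coeff *= (k - p) / (k + 1)           # w_{k+1} = w_k * (k - p)/(k + 1)
    return acc / h ** p

# Sanity check against the closed form D^{1/2} t = 2 sqrt(t/pi) at t = 1.
approx = gl_fractional_derivative(lambda s: s, 1.0, 0.5)
print(approx)  # close to 2/sqrt(pi) ~ 1.1284
```

For p = 1 the weights collapse to the first difference (f(t) − f(t−h))/h, recovering the ordinary derivative.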

Directed Graph Theories
The following directed graph theories can be found in [5,27]. A weighted directed graph (digraph) among N nodes (agents) is modeled as G. Let V = {V_1, V_2, ..., V_N} be the node set and A = [a_ij]_{N×N} with weights a_ij ≥ 0 be the adjacency matrix, where a_ij > 0 if and only if agent j's information is available to agent i, and a_ij = 0 otherwise. The Laplacian matrix L = [l_ij]_{N×N} is defined by l_ii = Σ_{j≠i} a_ij and l_ij = −a_ij for i ≠ j. The quantities Σ_{j=1}^{N} a_ij and Σ_{j=1}^{N} a_ji are called the in-degree and out-degree of agent i, respectively. A digraph is weight-balanced if and only if the in-degree equals the out-degree for every agent. A digraph is termed strongly connected if there exists a directed path between any two nodes. A digraph has a directed spanning tree if and only if there exists a node (termed the root) with a directed path to all other nodes. Assumption 1. The digraph G is time-invariant and strongly connected.
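The graph-theoretic conditions above are easy to check numerically. The following sketch (my own helper names) builds the Laplacian L = D_in − A and tests weight-balance and strong connectivity for a small digraph of the kind considered in the paper:

```python
import numpy as np

def laplacian(A):
    """L = D_in - A, with D_in = diag of row sums (in-degrees)."""
    return np.diag(A.sum(axis=1)) - A

def is_weight_balanced(A):
    """Weight-balanced iff every agent's in-degree equals its out-degree."""
    return np.allclose(A.sum(axis=1), A.sum(axis=0))

def is_strongly_connected(A):
    """Strong connectivity via Boolean reachability closure."""
    n = len(A)
    reach = (A > 0).astype(int) + np.eye(n, dtype=int)
    for _ in range(n):
        reach = ((reach @ reach) > 0).astype(int)
    return bool(reach.all())

# Directed 3-cycle with one heavier edge: strongly connected but
# weight-unbalanced, the class of topologies this paper targets.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0],
              [1.0, 0.0, 0.0]])
print(is_strongly_connected(A), is_weight_balanced(A))  # True False
```

Note the Laplacian rows always sum to zero (L·1 = 0), while the columns do so only in the weight-balanced case.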
If Assumption 1 holds, we have the following important lemma.
Lemma 2 ([40]). If H is a nonsingular M-matrix, there exists an N × N positive diagonal matrix Ξ such that H̄ = Ξ H + H^T Ξ is positive definite, where λ(H̄) represents the minimal eigenvalue of H̄.

Problem Statement
Consider an FOMAS composed of N agents under a digraph G. Inspired by [18,30], the FOMAS is assumed to satisfy the following continuous-time heterogeneous nonlinear dynamics:

D^{p_i} x_i(t) = h_i(x_i, t) + u_i(t) + τ_i(t), i ∈ I_N, (5)

where 0 < p_i ≤ 1, x_i ∈ R and τ_i ∈ R are, respectively, agent i's state and disturbance, h_i(x_i, t) is an unknown heterogeneous nonlinear function, and u_i ∈ R is the control input to be designed. This study aims to find a fully distributed optimization protocol to solve the fixed-time distributed time-varying optimization problem formulated below.
Fixed-time distributed time-varying optimization problem: Find a fully distributed controller u_i in (5) for each agent such that, for any given initial states x_i(0), there exists a fixed time T independent of the initial states and x_i converges to the optimization point x*_i within the fixed time T, i.e., x_i = x*_i for t ≥ T and each i ∈ I_N, where x*_i ∈ R is the minimizer of the distributed time-varying optimization problem (6), and the constant δ_i represents the final consensus configuration (or expected formation) such that x_i − x_j = δ_i − δ_j for all i, j ∈ I_N. Remark 1. It is worth pointing out that existing optimization problems usually require all x_i to reach an exact consensus (see [8-19]), i.e., x_i = x_j, ∀i, j ∈ I_N, and to converge to a common optimization point x*. But achieving perfect consensus of all x_i in real applications would be incredibly difficult; in other words, there is always an offset between x_i and x_j for i ≠ j. Thus, in the optimization problem (6), it is required that x_i − x_j = δ_i − δ_j for all i, j ∈ I_N, and each x_i converges to its own optimization point x*_i = x* + δ_i, where x* is a common minimizer. The studied time-varying optimization problem (6) is therefore more generic and practical, and has a wider range of applications in resource management/allocation problems [4], economic dispatch problems [18] and optimal rendezvous formation problems [29].
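As a concrete, hypothetical instance of problem (6) (not the paper's example), take quadratic local costs f_i(x, t) = (x − g_i(t))²: the global cost Σ_i f_i(x, t) is minimized at the time-varying point x*(t) = mean_i g_i(t), and with formation offsets δ_i each agent's target is x_i*(t) = x*(t) + δ_i:

```python
import numpy as np

def optimal_trajectory(g_funcs, deltas, t):
    """Minimizer of sum_i (x - g_i(t))^2 is x*(t) = mean_i g_i(t);
    with offsets delta_i, agent i tracks x_i*(t) = x*(t) + delta_i."""
    xstar = np.mean([g(t) for g in g_funcs])
    return xstar + np.asarray(deltas)

# Hypothetical reference signals g_i(t) = sin(t) + i for 4 agents.
g = [lambda t, i=i: np.sin(t) + i for i in range(4)]
deltas = [0.0, 1.0, 2.0, 3.0]
print(optimal_trajectory(g, deltas, 0.0))  # x*(0) = mean(0,1,2,3) = 1.5
```

This illustrates why the optimal points are trajectories rather than fixed values: x*(t) moves with the g_i(t), which is what forces the time-varying analysis in this paper.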
The following assumptions are required to solve the fixed-time distributed time-varying optimization problem of the nonlinear FOMAS (5). Assumption 2. There exists a positive scalar function ĥ_i(x_i, t), available to agent i, such that |h_i(x_i, t) + τ_i| ≤ ĥ_i(x_i, t). Assumption 3. The time-varying global cost function f_0(x_0, t) is twice continuously differentiable with respect to x_0, it is convex in x_0, and its Hessian ∇²f_0(x_0, t) = ∂²f_0(x_0, t)/∂x_0² is invertible. Remark 2. By Assumption 2, each agent's nonlinear function and disturbance are converted into a simplified relation that involves only an available (computable) scalar function. Assumption 2 is mild and has been used for consensus algorithm design of uncertain nonidentical MASs in [43-45]. Additionally, under Assumption 2, both h_i(x_i, t) and τ_i are unavailable; only ĥ_i(x_i, t) is available to agent i ∈ I_N and can be used in the algorithm design. Remark 3. Note that each local cost function is required to be convex in [8,11-13,29] or strongly convex in [5,17,27,37], and the Hessians of the local cost functions are required to be invertible and equal in [9-11,13,37]. In Assumption 3, by contrast, only the global cost function is required to be convex, not necessarily each local cost function, and only the Hessian of the global cost function is required to be invertible, not necessarily the Hessian of each local cost function. Thus, Assumption 3 is mild. The invertibility of ∇²f_0(x_0, t) in Assumption 3 implies that Σ_{i=1}^{N} f_i(x_0, t) is strictly convex; thus, the time-varying optimization problem (6) has a unique solution.

Fixed-Time Sliding Mode Control
A sliding-mode-based optimization controller is first proposed as (7), where the functions r_i ∈ R and w_i ∈ R satisfy the fractional-order dynamics (8), and u*_i is the nominal controller to be designed later.
Remark 4. Specifically, if p_i = 1 for all i ∈ I_N, the fractional-order dynamics (8) reduce to integer-order dynamics.
Theorem 1. Under Assumption 2, the nonlinear FOMAS (5) with the protocol (7), consisting of the sliding-mode manifold (8), reaches the sliding-mode surface r_i = 0 within a fixed time T_1 satisfying (10).
Proof. Let V_r(t) = Σ_{i=1}^{N} r_i² be the Lyapunov function. By using Property 1, (5), (7) and (8), i ∈ I_N, we obtain (11). From (11), Assumption 2 and Lemma 3, V̇_r(t) satisfies the fixed-time differential inequality (12). By invoking Lemma 4, V_r(t) = 0 within a fixed time T_1 satisfying (10). Therefore, the sliding-mode surface r_i = 0 for each i ∈ I_N is reached within the fixed time T_1.
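The fixed-time bound in Theorem 1 rests on a Lyapunov inequality of the standard form V̇ ≤ −a V^{1−ε} − b V^{1+ε}, whose settling time is at most 1/(aε) + 1/(bε) regardless of V(0). A quick numerical check of this initial-state independence (gains and exponent are my own illustrative choices):

```python
def settle_time(V0, a=1.0, b=1.0, eps=0.5, dt=1e-4, t_max=50.0):
    """Euler-integrate V' = -a*V**(1-eps) - b*V**(1+eps) and return the
    first time V (numerically) reaches zero."""
    V, t = float(V0), 0.0
    while V > 1e-12 and t < t_max:
        V = max(V - dt * (a * V ** (1 - eps) + b * V ** (1 + eps)), 0.0)
        t += dt
    return t

bound = 1.0 / (1.0 * 0.5) + 1.0 / (1.0 * 0.5)   # 1/(a*eps) + 1/(b*eps) = 4
print([round(settle_time(V0), 2) for V0 in (0.1, 10.0, 1e6)], "bound:", bound)
```

Even for V(0) = 10^6 the settling time stays below the bound: the V^{1+ε} term dominates far from the origin and the V^{1−ε} term near it, which is the mechanism behind all the T_m bounds in this paper.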
As proved in Theorem 1, r_i = 0 for t ≥ T_1; thus, ṙ_i = D^{p_i} x_i − D^{p_i} z_i = 0, i.e., D^{p_i} x_i = D^{p_i} z_i = w_i according to (8). Hence, for t ≥ T_1, the dynamics of each agent can be described by the single-integrator MAS (13), where Property 1 and (8) are used in the first and last equalities, respectively.
Remark 5. According to Theorem 1, when t ≥ T_1, the nonlinear FOMAS (5) with the proposed protocol (7) is equivalent to the single-integrator MAS (13) with the nominal controller. In view of (8), for each i ∈ I_N, r_i is independent of the unknown information h_i(x_i, t) and τ_i, and depends only on the agent's own absolute state information x_i and the nominal controller u*_i; the same holds for the protocol (7). In the following, it remains only to design a fully distributed nominal controller u*_i such that the fixed-time distributed time-varying optimization problem of the single-integrator MAS (13) is solved.

Main Results
In this section, over a strongly connected digraph (possibly weight-unbalanced), we first design a centralized fixed-time optimization protocol in Section 5.1 by embedding some centralized optimization terms into the fixed-time optimization control scheme. In Section 5.2, the centralized optimization protocol is transformed into a distributed one by designing a distributed fixed-time estimator that estimates the centralized optimization terms.

Centralized Fixed-Time Optimization Protocol Design
Before designing the centralized fixed-time optimization protocol, three centralized optimization terms F_1, F_2 and F_3 about the time-varying global cost function are introduced, and a neighborhood position error variable e^x_i is designed. Based on Assumption 3, we design the nominal controller (15) for each i ∈ I_N, where the optimization term φ_i(t) is defined in (16).
Theorem 2. Under Assumptions 1-3, consider the nonlinear FOMAS (5) controlled by the fixed-time optimization controller (7) consisting of the sliding-mode manifold (8), the nominal controller (15) and the optimization term (16). Then x_i = x*_i is achieved within a fixed time T with an explicitly computable upper bound.
Proof. The proof consists of two steps. Since r_i = 0 within a fixed time T_1, we show in Step 1 that, for all i, j ∈ I_N, x_i − δ_i = x_j − δ_j along the sliding-mode surface r_i = 0 within a fixed time T_2; afterwards, we show in Step 2 that x_i = x*_i within a fixed time T_3.
Step 2 (Fixed-time optimization): combining the sliding-mode dynamics with (16) yields a differential inequality of the fixed-time type. By invoking Lemma 4, the optimization error vanishes within a fixed time T_3. Therefore, according to Lemma 5, we get that, as t ≥ T = Σ_{i=1}^{3} T_i, x_i = x*_i for all i ∈ I_N, where x*_i is the unique minimizer of problem (6).
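To illustrate the structure of the nominal controller, consensus terms plus a centralized optimization term, in a simple setting, the following hypothetical sketch simulates single integrators with time-invariant quadratic local costs f_j(x) = (x − g_j)², for which a Newton-like centralized term reduces to φ_i = (x_i − δ_i) − mean(g). The gains a2, b2, eps2 are stand-ins for those in (15); this is my simplification, not the paper's exact protocol:

```python
import numpy as np

def sig(x, p):
    return np.abs(x) ** p * np.sign(x)

def simulate(A, g, delta, x0, a2=1.0, b2=1.0, eps2=0.5, dt=1e-3, T=10.0):
    """Single integrators: fixed-time consensus terms on the neighborhood
    error of y_i = x_i - delta_i, plus a centralized optimization term
    phi_i = y_i - mean(g) (valid for quadratic costs (x - g_j)^2)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        y = x - delta
        cons = (A * (y[:, None] - y[None, :])).sum(axis=1)  # sum_j a_ij (y_i - y_j)
        u = -a2 * sig(cons, 1 - eps2) - b2 * sig(cons, 1 + eps2) \
            - (y - np.mean(g))
        x = x + dt * u
    return x

A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], float)  # directed cycle
g = np.array([0.0, 1.0, 2.0]); delta = np.array([0.0, 1.0, 2.0])
xT = simulate(A, g, delta, x0=[5.0, -3.0, 0.0])
print(xT)  # approx mean(g) + delta = [1, 2, 3]
```

The consensus terms align the offset states y_i, while the optimization term drags the agreed value onto the minimizer, mirroring the two-step (consensus, then optimization) argument in the proof of Theorem 2.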
Remark 6. It follows from Theorem 2 that the proposed fixed-time protocol (7) with (8), (15) and (16) not only guarantees that the system state converges to an optimal solution in a fixed time, but also avoids the requirements that each f_i(x_i, t) be convex as in [8,12,13,37] and that each ∇²f_i(x, t) be invertible and equal as in [9,10,13,37]. Unfortunately, the protocol (7) is centralized because φ_i(t) given in (16) depends on the knowledge of F_1, F_2 and F_3, which is global information (the centralized optimization terms). In the following subsection, a class of tracking-based fixed-time estimators is designed to reconstruct the global information in (16) in a fully distributed manner.

Distributed Fixed-Time Optimization Protocol Design
To rebuild the centralized optimization terms, a distributed fixed-time estimator is developed for each agent below. The distributed fixed-time optimization protocol for the nonlinear FOMAS (5) is then constructed in accordance with the result of Theorem 2 and the developed distributed estimator. Each local cost function f_j(x_j, t) is unique to agent j ∈ I_N, and as a result, the signal ω_j is also unique. In the leaderless FOMAS (5) with a digraph G, each agent j ∈ I_N may be regarded as a virtual leader. Then, by building a distributed fixed-time leader-following network estimator, the information ω_j of each virtual leader can be tracked in fixed time by all agents i ∈ I_N (treated as N virtual followers). Before proceeding, denote the diagonal matrices Ā_j = diag(ā_{j1}, ..., ā_{ji}, ..., ā_{jN}) ∈ R^{N×N} with ā_{jj} > 0 and ā_{ji} = 0 for all i ≠ j, i, j ∈ I_N. If Assumption 1 holds (so that each leader-following network has a directed spanning tree), then each matrix H_j = L + Ā_j, which defines the interaction topology of the corresponding leader-following network, is a nonsingular M-matrix. Under Assumption 1, it follows from Lemma 2 that there exist a positive diagonal matrix Ξ_j and a positive constant η_j = λ(H̄_j). In this section, a distributed fixed-time estimator (or tracking protocol) (30) is constructed for each agent i ∈ I_N, where y^i_j = [y^i_{j,1}, y^i_{j,2}, y^i_{j,3}]^T, l_j ≥ sup_t ∥ω̇_j∥, c_j, d_j > 0 are constants, and the estimator exponent lies in (0, 1) for all i, j ∈ I_N. Here, ω̇_j is assumed to be bounded for any j ∈ I_N and t ∈ R_+.
Lemma 6. Under Assumption 1, consider the distributed fixed-time estimator (30) for each agent i ∈ I_N. The centralized optimization term φ(t) given in (16) is reconstructed in a distributed manner as (31) within a fixed time T_0, where F_{i,1}, F_{i,2} and F_{i,3} are the corresponding agentwise sums of the estimator states.
Proof. Denote H_j = [h^j_{ik}]_{N×N} = L + Ā_j, and let χ^i_{j,1} denote the error between the first estimator state and ∇f_j(x_j, t) for all i, j ∈ I_N. From (30), for all i, j ∈ I_N, the neighborhood-based error variable y^i_{j,1} satisfies the error dynamics (33). Recall that each matrix H_j is a nonsingular M-matrix if Assumption 1 holds; it thus follows from Lemma 2
that there exists a positive diagonal matrix Ξ_j such that H̄_j = Ξ_j H_j + H_j^T Ξ_j > 0, where η_j = λ(H̄_j) > 0 and j ∈ I_N. Choose a Lyapunov function V_j(t) for each j ∈ I_N. Differentiating V_j(t) along (33) yields, for each j ∈ I_N, a fixed-time differential inequality. It thus follows from Lemma 4 that V_j(t) = 0 within a fixed time T*_j satisfying (32) for j ∈ I_N. Thus, as t ≥ T*_j, the estimation errors vanish, i.e., F_{i,1} = F_1, F_{i,2} = F_2 and F_{i,3} = F_3 as t ≥ T_0 for each agent i ∈ I_N. Therefore, as t ≥ T_0, the distributed optimization term φ_i(t) given in (31) for each agent i ∈ I_N is equivalent to the centralized optimization term φ(t) given in (16). Summarizing Lemma 6 and Theorems 1 and 2, we can now state and establish the main theorem of this paper.
Theorem 3. Under Assumptions 1-3, consider the nonlinear FOMAS (5) controlled by the distributed optimization controller (7) consisting of the sliding-mode manifold (8), the nominal controller (15), the estimator (30) and the optimization term (31). Then x_i = x*_i is achieved within a fixed time Σ_{m=0}^{3} T_m satisfying (44),
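The estimator (30) combines sig-power consensus terms with a signum term whose gain l_j dominates sup_t ∥ω̇_j∥. A heavily simplified numerical sketch of this idea (my own parameter choices; one agent is pinned to a scalar time-varying reference w(t), playing the role of a virtual leader's signal ω_j):

```python
import numpy as np

def sig(x, p):
    return np.abs(x) ** p * np.sign(x)

def track(A, abar, w, l, T=5.0, dt=1e-4, c=1.0, d=1.0, eps=0.5):
    """Signum-based distributed tracking: each agent updates along
    -c sig^{1-eps}(Delta_i) - d sig^{1+eps}(Delta_i) - l sgn(Delta_i),
    where Delta_i is the neighborhood error w.r.t. neighbors and (if
    pinned, abar_i > 0) the leader signal w(t). Requires l > sup|w'|."""
    y, t = np.zeros(len(A)), 0.0
    for _ in range(int(T / dt)):
        delta = (A * (y[:, None] - y[None, :])).sum(axis=1) + abar * (y - w(t))
        y += dt * (-c * sig(delta, 1 - eps) - d * sig(delta, 1 + eps)
                   - l * np.sign(delta))
        t += dt
    return y, w(t)

A = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], float)  # directed cycle
abar = np.array([1.0, 0.0, 0.0])        # only agent 1 "hears" the leader
y, wT = track(A, abar, lambda t: np.sin(0.5 * t), l=1.0)  # sup|w'| = 0.5
print(np.abs(y - wT).max())  # small residual tracking error
```

The signum gain is what absorbs the unknown drift ω̇_j; with Euler discretization a small chattering residual of order dt·l remains, matching the chattering caveat raised later in Remark 11.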
where i ∈ I_N and ρ_j is defined therein.
Proof. The proof includes four steps. By repeating the proof processes of Lemma 6, Theorem 1, and Steps 1 and 2 of Theorem 2, respectively, it can be shown that, for all i, j ∈ I_N, φ_i(t) = φ(t) as t ≥ T_0 in Step 1, each agent reaches the sliding-mode surface within a fixed time in Step 2, all agents reach consensus within a fixed time in Step 3, and each x_i converges to x*_i within a fixed time in Step 4; the details are hence omitted here.
If Assumptions 1-3 are satisfied, the implementation process of the fixed-time distributed optimization of the nonlinear FOMAS (5) with a generic digraph is summarized in Algorithm 1.

Remark 7.
To prevent the divergence that the time-varying signals ω_j would otherwise cause, a signum-function term −l_j sgn(Δ^i_j) is introduced in (30), where l_j ≥ sup_t ∥ω̇_j∥. Note that the condition l_j ≥ sup_t ∥ω̇_j∥ is similar to Assumption 4.6 in [28], and ω̇_j is bounded if all of d/dt(∇f_j(x_j, t)), d/dt((∂/∂t)∇f_j(x_j, t)) and d/dt(∇²f_j(x_j, t)) are bounded. The boundedness of ω̇_j might be restrictive, but it is satisfied for many cost functions. For example, for f_i(x_i, t) = (a_i x_i + g_i(t))², ω̇_j is bounded if g_i(t)², ġ_i(t)² and g̈_i(t)² are bounded [28]; examples include g_i(t) = i(sin(t) + cos(t)), i e^{−t} sin(t), i/(1+t) or i tanh(t), and so on.
Remark 8. In the algorithm (7), the sliding-mode control term −ĥ_i(x_i, t) sgn(r_i) − a_1 sig^{1−ε_1}(r_i) − b_1 sig^{1+ε_1}(r_i) is used to drive all agents to the sliding-mode surface in a fixed time. The nominal controller u*_i designed by (15) consists of the consensus control term −a_2 sig^{1−ε_2}(e^x_i) − b_2 sig^{1+ε_2}(e^x_i) and the estimator-based optimization term −φ_i(t) estimated by (30) and (31). The consensus control term is used for reaching consensus in a fixed time, and the estimator-based optimization term is used for capturing the minimizer of the optimization problem (6) within a fixed time.
Remark 9. Since none of the parameters, including ε_k, a_k, b_k, c_j, d_j and the estimator exponents, k = 1, 2, 3 and j = 1, 2, ..., N, depends on any global information, the designed distributed optimization controller (7) with (8), (15), (30) and (31) is fully distributed. The designed fully distributed optimization controller is also suitable for weight-unbalanced digraphs, and task requirements can be satisfied by tuning the parameters of the expected settling time in (44).
Algorithm 1 Fixed-time distributed optimization algorithm: a four-stage implementation. If Assumptions 1-3 are satisfied, the whole fixed-time distributed optimization procedure is summarized by the following four cascading stages. Stage 1: Fixed-time estimation of the centralized optimization term φ(t): update (5) using (7) with (8), (15), (30) and (31). According to Lemma 6, as t ≥ T_0 = max_{j∈I_N}{T*_j}, the distributed optimization term φ_i(t) given in (31) is equivalent to the centralized optimization term φ(t) given in (16) for each i ∈ I_N. Stage 2: Fixed-time reaching of the sliding-mode surface r_i = 0, ∀i ∈ I_N: continue updating (5) using (7) with (8), (15), (30) and (31). According to Theorem 1, as t ≥ T_0 + T_1, r_i = 0 for each i ∈ I_N.
Stage 3: Fixed-time consensus of x_i − δ_i, ∀i ∈ I_N: continue updating (5) using (7) with (8), (15), (30) and (31). According to the proof of Step 1 in Theorem 2, as t ≥ T_0 + T_1 + T_2, x_i − δ_i = x_j − δ_j for all i, j ∈ I_N. Stage 4: Fixed-time convergence of x_i to x*_i, ∀i ∈ I_N: continue updating (5) using (7) with (8), (15), (30) and (31). According to the proof of Step 2 in Theorem 2, as t ≥ Σ_{m=0}^{3} T_m, x_i = x*_i for each i ∈ I_N.
Remark 10. The previous works [11,13,15,19,28,37] on the distributed time-varying optimization problem of MASs are applicable only to a connected undirected topology, whereas our work applies to a strongly connected directed topology. Furthermore, both fixed-time optimal convergence and nonlinear dynamics are considered here. A detailed comparison between our work and the works in [11,13,15,19,28,37] is listed in Table 1. If the nonlinear FOMAS (5) reduces to the single-integrator MAS (13), we have, as a byproduct, the following theorem.
Theorem 4. Under Assumptions 1 and 3, consider the single-integrator MAS (13) controlled by the continuous distributed optimization controller (15) consisting of the estimator (30) and the optimization term (31). Then x_i = x*_i is achieved within a fixed time T_0 + T_2 + T_3, where i ∈ I_N, T_0 = max_{j∈I_N}{T*_j} with T*_j given by (32), and T_2 and T_3 are given by (27) and (29), respectively.
Remark 11. The proposed discontinuous controller (7) in Theorem 3 may cause chattering, and both the discontinuous controller (7) in Theorem 3 and the continuous controller (15) in Theorem 4 may suffer from singularity when F_{i,3} in (31) is not invertible at some point, i.e., F_{i,3} = 0 at some t = t* with t* ∈ (0, T_0). Avoiding these undesirable phenomena requires further control techniques, which are interesting topics for future research.
To avoid the singularity caused by F_{i,3} in (31), a special case of Assumption 3 is studied; that is, the Hessians ∇²f_i(x_i, t) in Assumption 3 are equal [37], i.e., ∇²f_i(x_i, t) = ∇²f_j(x_j, t) for all i, j ∈ I_N. Then, the optimization term φ_i(t) in (15) is designed as (45), where the estimator states y^i_{j,1} and y^i_{j,2} are generated by the estimator (30), l_j ≥ sup_t ∥ω̇_j∥, c_j, d_j > 0, and the estimator exponent lies in (0, 1) for all i, j ∈ I_N.
By using the optimization term (45) in (15), a singularity-free distributed optimization controller is derived. The following theorems, as byproducts of Theorem 3, are stated and established directly.
Theorem 5. Under Assumptions 1-3, if ∇²f_i(x_i, t) = ∇²f_j(x_j, t) for all i, j ∈ I_N, consider the nonlinear FOMAS (5) controlled by the distributed optimization controller (7) consisting of the sliding-mode manifold (8), the nominal controller (15), the estimator (30) and the optimization term (45). Then x_i = x*_i is achieved within a fixed time Σ_{m=0}^{3} T_m satisfying (44), i ∈ I_N.
Theorem 6. Under Assumptions 1 and 3, if ∇²f_i(x_i, t) = ∇²f_j(x_j, t) for all i, j ∈ I_N, consider the single-integrator MAS (13) controlled by the continuous distributed optimization controller (15) consisting of the estimator (30) and the optimization term (45). Then x_i = x*_i is achieved within a fixed time T_0 + T_2 + T_3, where i ∈ I_N, T_0 = max_{j∈I_N}{T*_j} with T*_j given by (32), and T_2 and T_3 are given by (27) and (29), respectively.
The gradient of each time-varying cost function f_j(x_j, t), j = 0, 1, ..., 5, and its two partial derivatives with respect to t and x_j are given in Table 2, where f_0(x_0, t) = Σ_{i=1}^{N} f_i(x_0, t) is the time-varying global cost function and x_0 = x_i − δ_i, ∀i ∈ I_N. It is easily verified from Table 2 that the time-varying local cost functions f_2(x_2, t), f_4(x_4, t) and f_5(x_5, t) are non-convex, the Hessians of the time-varying local cost functions are not equal, and ∇²f_2(x_2, t) is not invertible; therefore, the methods in the literature mentioned in Remark 3 fail to work. It is also verified from Table 2 that the time-varying global cost function f_0(x_0, t) is strictly convex; thus, Assumption 3 is satisfied and the optimization problem (6) has a unique minimizer. By some calculations, the unique minimizer of (6) is x*_i = 0.4 sin(πt) + i − 3 for each i ∈ I_N, which shows that the optimal points (trajectories) are time-varying. Note that x* = x*_i − δ_i = 0.4 sin(πt) − 3 is a common optimization point of (6). The trajectories of φ_i in (31) and φ in (16) are shown in Figure 2; they imply that φ_i = φ as t ≥ 1 s, ∀i ∈ I_N. The trajectories of r_i, x_i, x*_i, x_i − δ_i, and x* are provided in Figures 3-5, respectively. It is seen from Figure 3 that, ∀i ∈ I_N, r_i = 0 is achieved as t ≥ 2 s (all agents achieve fixed-time sliding-mode control), although chattering phenomena exist. It is seen from Figures 4 and 5 that, ∀i, j ∈ I_N, x_i − δ_i = x_j − δ_j is achieved as t ≥ 4 s (all agents reach fixed-time consensus), and x_i − δ_i = x*_i − δ_i = x* is achieved as t ≥ 5 s (all agents reach fixed-time optimization). The trajectories of x*_1 = 0.4 sin(πt) − 2 and of x_1 with four different initial states x_1(0) = 2, x_1(0) = 0, x_1(0) = −2 and x_1(0) = −4 (all other parameters and initial values as given above) are shown in Figure 6. It is observed from Figure 6 that the fixed time within which x_1 = x*_1 is achieved is independent of the initial states. Therefore, Theorem 3 is verified by the simulation results.

Conclusions
This study has considered the fixed-time distributed optimization problem of a nonlinear FOMAS with heterogeneous time-varying cost functions, nonlinear functions and disturbances under a generic weight-unbalanced digraph. By integrating fixed-time Lyapunov stability theory and the sliding-mode control technique, a novel estimator-based fully distributed optimization algorithm with fixed-time optimal convergence has been designed to solve the problem. A simulation has been given to demonstrate the effectiveness of our method over a wide range of time-varying local cost functions under a weight-unbalanced digraph. An interesting research direction would be to extend the results of this study to FOMASs whose digraph merely contains a directed spanning tree, to time-varying consensus configurations, or to a continuous fixed-time optimization algorithm.

Figure 5. Trajectories of x_i − δ_i and x*.
Figure 4. Trajectories of x i and x * i .

Table 1 .
Comparison of Distributed Time-Varying Optimization.

Table 2 .
The Gradient of Each Time-Varying Cost Function, and its Two Partial Differential Operators.
j | ∇f_j(x_j, t) | (∂/∂t)∇f_j(x_j, t) | ∇²f_j(x_j, t)