Article

A Continuous-Time Distributed Optimization Algorithm for Multi-Agent Systems with Parametric Uncertainties over Unbalanced Digraphs

School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2025, 13(16), 2692; https://doi.org/10.3390/math13162692
Submission received: 5 July 2025 / Revised: 6 August 2025 / Accepted: 18 August 2025 / Published: 21 August 2025

Abstract

This paper investigates distributed optimization problems for multi-agent systems with parametric uncertainties over unbalanced directed communication networks. To solve this class of optimization problems, a continuous-time algorithm is proposed that integrates adaptive control techniques with an output feedback tracking protocol. By systematically employing Lyapunov stability theory, perturbed-system analysis, and input-to-state stability theory, we rigorously establish the asymptotic convergence of the proposed algorithm. A numerical simulation further demonstrates the effectiveness of the algorithm in computing the global optimal solution.

1. Introduction

Distributed optimization in multi-agent systems has attracted substantial interest in recent years due to its broad applicability in diverse domains such as smart grids [1], distributed machine learning [2,3], resource allocation [4,5], and cooperative robotics [6]. Specifically, distributed optimization can be utilized to achieve state-of-charge balancing control at the battery-pack level in grid-connected battery energy storage systems, thereby enhancing overall system efficiency and effectively preventing potential damage caused by overcharging or discharging during operation [7]. Compared to centralized approaches, distributed optimization offers notable advantages in terms of preserving data privacy, computational scalability, and robustness against node or link failures [8,9].
Research on distributed optimization problems (DOPs) can generally be categorized into discrete-time and continuous-time frameworks. Following the decentralized optimization paradigm introduced in [10], a wide range of discrete-time algorithms have been developed [11,12,13]. In parallel, continuous-time distributed algorithms have also gained considerable attention. For scenarios where the communication network is undirected [14,15,16,17], several methods have been proposed. To address complex setups that involve both consensus constraints and nonsmooth cost functions, the authors of [14] proposed an adaptive method based on a distance penalty function, while the authors of [15] developed a subgradient algorithm exhibiting exponential convergence. Furthermore, zero-gradient-sum algorithms that are initialization-free and based on sliding mode control were proposed in [16,17] to address DOPs with consensus constraints.
Nevertheless, algorithms designed under the assumption of undirected graphs often encounter difficulties in accommodating more general network topologies. To overcome these limitations, recent research has explored distributed methods applicable to weight-balanced digraphs [18,19,20,21]. The authors of [18] proposed a primal–dual consensus algorithm that removes the dependency on initial states. For solving systems of linear equations in a distributed setting, a predefined-time consensus protocol was introduced in [19]. Moreover, to accommodate parametric uncertainties in multi-agent systems, an adaptive control strategy was developed in [20]. In the presence of external disturbances, the authors of [21] introduced a distributed algorithm that ensures finite-time state convergence for all agents.
Despite these advances, the above-mentioned algorithms are not directly applicable under unbalanced digraphs, which frequently arise in practice. To address this gap, several continuous-time algorithms have been proposed specifically for such networks [22,23,24]. For example, the authors of [22] introduced an auxiliary variable to counteract the imbalance effects inherent in directed topologies. In addressing resource allocation problems with nondifferentiable cost functions, the authors of [23] proposed an adaptive protocol that estimates a positive right eigenvector of the out-Laplacian matrix to facilitate convergence to the optimal solution. Furthermore, a gradient tracking method specifically designed for consensus-constrained problems was proposed in [24]; however, its applicability is restricted to scenarios where the Laplacian matrix is asymmetric, rendering it unsuitable for undirected graphs.
Leveraging insights from existing methods, this paper investigates the design of distributed consensus algorithms for multi-agent systems with parametric uncertainties. In practical implementations, such systems are likely to operate over unbalanced directed networks due to inherent limitations in communication links or nodes. To this end, we develop a continuous-time algorithm capable of operating over unbalanced digraphs, with particular emphasis on handling consensus constraints and parametric uncertainties. The primary contributions of this study are summarized as follows:
  • A novel distributed continuous-time optimization algorithm is developed for multi-agent systems over unbalanced digraphs. Unlike existing methods such as [17,19,20], which are restricted to undirected or weight-balanced graphs, the proposed adaptive algorithm is applicable to more general directed graphs.
  • This protocol employs a virtual vector to construct a reference signal, enabling the vector to approach the optimal solution of the considered problem. Moreover, the proposed approach can effectively handle parametric uncertainties, which are not considered in some related works such as [24,25,26].
  • The asymptotic convergence of the proposed algorithm is rigorously established through the integration of Lyapunov stability theory and input-to-state stability (ISS) analysis. Additionally, the method improves agent-level privacy by eliminating the need to access the cost function (sub)gradients of neighboring agents, in contrast to existing approaches such as [24,27].
The remainder of this paper is organized as follows. Section 2 introduces mathematical preliminaries, graph theory fundamentals, and the problem formulation. Section 3 presents the main results, including the algorithm design and convergence analysis. Section 4 validates the efficacy of the algorithm through a numerical simulation, and Section 5 concludes this work.

2. Preliminaries and Problem Formulation

2.1. Notations

Let $\mathbb{R}$ and $\mathbb{R}_+$ denote the sets of real numbers and positive real numbers, respectively. The symbols $\mathbb{R}^n$ and $\mathbb{R}^{n\times m}$ represent the spaces of $n$-dimensional real vectors and $n\times m$ real matrices, respectively. The identity matrix of size $n$ is denoted by $I_n$. The vectors $1_n$ and $0_n$ denote $n$-dimensional column vectors whose entries are all ones and all zeros, respectively. The Kronecker product is represented by $\otimes$. $\mathrm{diag}(x_1,x_2,\ldots,x_n)$ denotes a diagonal matrix with scalar diagonal entries $x_1,x_2,\ldots,x_n$, while $\mathrm{blkdiag}(A_1,A_2,\ldots,A_n)$ denotes a block diagonal matrix composed of matrices $A_1,A_2,\ldots,A_n$ on its diagonal and zero matrices elsewhere. For any vector $y\in\mathbb{R}^n$, the Euclidean norm is denoted by $\|y\|$.
Consider a differentiable function $f:\mathbb{R}^n\to\mathbb{R}$. Its gradient is denoted by $\nabla f$. The function $f$ is said to be strongly convex on a convex set $\Omega\subseteq\mathbb{R}^n$ if there exists a constant $C\in\mathbb{R}_+$ such that $(x-z)^T\big(\nabla f(x)-\nabla f(z)\big)>C\|x-z\|^2$ for all $x,z\in\Omega$ with $x\neq z$. In addition, if the gradient $\nabla g(x)$ of a function $g:\mathbb{R}^n\to\mathbb{R}$ is globally $l$-Lipschitz-continuous on $\mathbb{R}^n$, then there exists a constant $l\in\mathbb{R}_+$ such that $\|\nabla g(x)-\nabla g(z)\|\leq l\|x-z\|$ for all $x,z\in\mathbb{R}^n$.

2.2. Graph Theory

The interaction topology of a multi-agent system is modeled by a directed graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{A})$, where $\mathcal{V}=\{v_1,v_2,\ldots,v_N\}$ denotes the set of vertices, $\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}$ is the set of directed edges, and $\mathcal{A}=[a_{ij}]\in\mathbb{R}^{N\times N}$ represents the adjacency matrix corresponding to the communication topology. A directed edge $(v_i,v_j)\in\mathcal{E}$ indicates that agent $v_i$ can access the state information of agent $v_j$. A directed path from vertex $v_i$ to vertex $v_j$ is a sequence of directed edges of the form $(v_i,v_{i_1}),(v_{i_1},v_{i_2}),\ldots,(v_{i_k},v_j)$. The digraph $\mathcal{G}$ is said to be strongly connected if there exists a directed path between every pair of distinct nodes. The index set of all agents is denoted by $S=\{1,2,\ldots,N\}$. Furthermore, all entries of $\mathcal{A}$ are non-negative, and $a_{ij}>0$ if and only if $(v_i,v_j)\in\mathcal{E}$. The graph Laplacian matrix $L=[l_{ij}]\in\mathbb{R}^{N\times N}$ is defined by $l_{ii}=\sum_{j=1}^{N}a_{ij}$ and $l_{ij}=-a_{ij}$ for $i\neq j$. The graph $\mathcal{G}$ is said to be weight-balanced if $1_N^T L=0_N^T$; otherwise, it is unbalanced.
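As a quick numerical illustration of these definitions (not taken from the paper; the adjacency matrix below is a made-up four-agent example, not the graph of Figure 1), the following Python sketch builds the Laplacian and tests strong connectivity and weight balance:

```python
import numpy as np

# Hypothetical adjacency matrix: a_ij > 0 means agent i can access agent j's state.
A = np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
], dtype=float)
N = A.shape[0]

# Laplacian: l_ii = sum_j a_ij and l_ij = -a_ij for i != j.
L = np.diag(A.sum(axis=1)) - A

# Strong connectivity: (I + A)^(N-1) has no zero entry iff every node reaches every other node.
strongly_connected = (np.linalg.matrix_power(np.eye(N) + A, N - 1) > 0).all()

# Weight balance: the digraph is weight-balanced iff 1_N^T L = 0_N^T.
weight_balanced = np.allclose(np.ones(N) @ L, np.zeros(N))

print(strongly_connected, weight_balanced)   # True, False: strongly connected but unbalanced
```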
Lemma 1 
([28]). Let $L\in\mathbb{R}^{N\times N}$ be the Laplacian matrix of a strongly connected digraph $\mathcal{G}$. Then, the following properties hold:
1. 
$L$ has a simple zero eigenvalue with the associated right eigenvector $1_N$, and all other eigenvalues have positive real parts.
2. 
Let $q=[q_1,\ldots,q_N]^T$ with $q_i\in\mathbb{R}_+$ for $i\in S$ denote the left eigenvector of $L$ corresponding to the zero eigenvalue. Define $Q=\mathrm{diag}(q_1,\ldots,q_N)$. Then the matrix $\hat{L}=QL+L^TQ$ satisfies
$$\min_{\vartheta^T x=0,\ x\neq 0}\frac{x^T\hat{L}x}{x^Tx}>\frac{\lambda_2(\hat{L})}{N},$$
where $\vartheta$ is any vector with positive entries and $\lambda_2(\hat{L})$ represents the second smallest eigenvalue of $\hat{L}$. Moreover, $q=1_N$ if and only if $\mathcal{G}$ is weight-balanced.
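The sketch below (illustrative only; it reuses the hypothetical four-agent graph from the previous snippet) computes the positive left eigenvector $q$ of Lemma 1 numerically and spot-checks the Rayleigh-quotient bound on random vectors orthogonal to a positive vector $\vartheta$:

```python
import numpy as np

A = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 1, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
N = len(A)

# Left eigenvector of L for the zero eigenvalue = right eigenvector of L^T; normalize q^T 1 = 1.
w, V = np.linalg.eig(L.T)
q = np.real(V[:, np.argmin(np.abs(w))])
q = q / q.sum()                                # entries are positive for a strongly connected digraph

Q = np.diag(q)
L_hat = Q @ L + L.T @ Q
lam2 = np.sort(np.linalg.eigvalsh(L_hat))[1]   # second smallest eigenvalue of the symmetric L_hat

rng = np.random.default_rng(0)
theta = rng.uniform(0.5, 1.5, size=N)          # an arbitrary positive vector
for _ in range(1000):
    x = rng.standard_normal(N)
    x -= (theta @ x) / (theta @ theta) * theta  # enforce theta^T x = 0
    assert x @ L_hat @ x > lam2 / N * (x @ x) - 1e-9
print("Lemma 1 bound held on sampled vectors; lambda_2(L_hat) =", lam2)
```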

2.3. Problem Formulation

Consider a multi-agent system comprising N agents. The continuous-time dynamics can be described by
$$\dot{x}_i(t)=u_i(t)+\psi_i^T(x_i(t))\,\bar{\omega}_i,\qquad y_i(t)=x_i(t),\qquad i\in S, \qquad (1)$$
where $x_i(t)\in\mathbb{R}^n$ is the state associated with agent $i$, $u_i(t)\in\mathbb{R}^n$ is the control input, and $y_i(t)\in\mathbb{R}^n$ is the control output. The vector $\bar{\omega}_i\in\mathbb{R}^{m_i}$ contains unknown system parameters, and $\psi_i(x_i):\mathbb{R}^n\to\mathbb{R}^{m_i\times n}$ denotes a matrix-valued regressor, which can be constructed using the output $y_i$ and the known system dynamics.
The objective of this study is to formulate a distributed continuous-time optimization algorithm such that each agent, operating under an unbalanced directed communication topology, can drive $y_i(t)$ to the optimal solution of the following problem:
$$\min_{x\in\mathbb{R}^n} f(x)=\sum_{i=1}^{N}f_i(x), \qquad (2)$$
where $f_i:\mathbb{R}^n\to\mathbb{R}$ is a private objective function known only to agent $i$, and $f(x)$ represents the global objective function. The optimization problem above can be equivalently reformulated as
$$\min_{x_i\in\mathbb{R}^n}\sum_{i=1}^{N}f_i(x_i),\qquad \text{subject to } x_i=x_j,\ \forall i,j\in S. \qquad (3)$$
Assumption 1. 
Each individual cost function $f_i(x_i)$ is differentiable and strongly convex, with its gradient $\nabla f_i(x_i)$ being $\ell_i$-Lipschitz-continuous.
Assumption 2. 
The unbalanced directed communication graph of agents is strongly connected.
Remark 1. 
The distributed problem formulated in this paper is of practical significance and has been previously investigated in works such as [16,24,29]. Moreover, Assumptions 1 and 2 are commonly adopted in distributed optimization [16,17,30]. Under Assumption 1, Problem (3) is guaranteed to admit a unique globally optimal solution; both existence and uniqueness can be established rigorously.
Define $y^*$ as the optimal output of the dynamical system (1). Then the objective of this paper is to develop a control protocol that ensures each agent's control output $y_i(t)$ asymptotically approaches $y^*$; in other words,
$$\lim_{t\to\infty}\|y_i(t)-y^*\|=0,\qquad \forall i\in S. \qquad (4)$$
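Since the simulation in Section 4 checks the distributed outputs against an optimum obtained with standard gradient methods, the following toy Python sketch (with made-up quadratic costs, not the paper's $f_i$; `centralized_optimum`, the step size, and iteration count are all illustrative assumptions) shows such a centralized baseline for $y^*$:

```python
import numpy as np

def centralized_optimum(grads, x0, step=0.05, iters=20000):
    """Plain gradient descent on f(x) = sum_i f_i(x); a centralized reference for y*.
    Assumes the step size is small enough for the supplied costs."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x = x - step * sum(g(x) for g in grads)
    return x

# Two illustrative strongly convex local costs: f_1(x) = ||x - a||^2 and f_2(x) = 2||x - b||^2.
a, b = np.array([1.0, 0.0]), np.array([0.0, 2.0])
grads = [lambda x: 2 * (x - a), lambda x: 4 * (x - b)]
print(centralized_optimum(grads, x0=np.zeros(2)))   # approx (a + 2b)/3 = [0.333, 1.333]
```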

3. Main Results

3.1. Algorithm Design

In this section, we propose a distributed continuous-time optimization algorithm with adaptive coupling gains for unbalanced directed communication networks. The algorithm is designed as follows:
$$\begin{aligned}
\dot{y}_i &= -c_i\,(y_i-r_i)+\psi_i^T(x_i)\,(\bar{\omega}_i-\tilde{\omega}_i), &\text{(5a)}\\
\dot{r}_i &= -\theta_1\,\frac{\nabla f_i(y_i)}{z_{ii}}-\theta_2\,(\alpha_i+\beta_i)\sum_{j\in S}a_{ij}(r_i-r_j)-\sum_{j\in S}a_{ij}(\upsilon_i-\upsilon_j), &\text{(5b)}\\
\dot{\upsilon}_i &= \theta_2\,(\alpha_i+\beta_i)\sum_{j\in S}a_{ij}(r_i-r_j), &\text{(5c)}\\
\dot{z}_i &= -\sum_{j\in S}a_{ij}(z_i-z_j),\qquad \dot{\tilde{\omega}}_i=\Lambda_i\,\psi_i(x_i)\,(y_i-r_i),\qquad \dot{\alpha}_i=e_i^Te_i, &\text{(5d)}
\end{aligned}$$
where $\beta_i=e_i^Te_i$ and $e_i=\sum_{j\in S}a_{ij}(r_i-r_j)$. Here, $r_i\in\mathbb{R}^n$, $\upsilon_i\in\mathbb{R}^n$, $\tilde{\omega}_i\in\mathbb{R}^{m_i}$, and $z_i\in\mathbb{R}^N$ are auxiliary variables, with $z_{ik}$ representing the $k$-th entry of $z_i$. The initial condition of $z_i$ satisfies $z_{ii}(0)=1$ and $z_{ik}(0)=0$ for $k\neq i$. The parameters $c_i$, $\theta_1$, and $\theta_2$ are positive constants, and $\Lambda_i\in\mathbb{R}^{m_i\times m_i}$ is positive definite. The adaptive coupling gain $\alpha_i$ is initialized with a positive value, i.e., $\alpha_i(0)\in\mathbb{R}_+$. In Equation (5), each agent $i$ transmits only its local variables $r_i$, $\upsilon_i$, and $z_i$ to its neighbors, without sharing (sub)gradient information as is done in [24,27], thereby enhancing agent-level privacy.
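To make the update concrete, here is a minimal forward-Euler sketch of one step of (5), based on the reconstruction above (the sign conventions follow the compact form (6), and since $y_i=x_i$ the regressor is evaluated at $y_i$); it is an illustrative stand-in, not the authors' implementation. The dictionary `state` bundles the per-agent variables (`y`, `r`, `v` as $N\times n$ arrays, `z` as an $N\times N$ array, `alpha` as a length-$N$ array, and `w_tld`, `w_bar` as lists of per-agent parameter vectors), while `grads`, `psi`, and `Lam` are user-supplied local gradients, regressors $\psi_i(\cdot)$, and gains $\Lambda_i$:

```python
import numpy as np

def algorithm5_step(state, grads, psi, A, Lam, c, th1, th2, dt):
    """One forward-Euler step of protocol (5). The true parameters w_bar enter only through
    the simulated plant dynamics (5a); the controller itself never uses them."""
    y, r, v, z = state["y"], state["r"], state["v"], state["z"]
    w_tld, w_bar, alpha = state["w_tld"], state["w_bar"], state["alpha"]
    N = y.shape[0]
    # e_i = sum_j a_ij (r_i - r_j) and beta_i = e_i^T e_i
    e = np.array([sum(A[i, j] * (r[i] - r[j]) for j in range(N)) for i in range(N)])
    beta = np.einsum("ij,ij->i", e, e)
    dy, dr, dv, dz = np.zeros_like(y), np.zeros_like(r), np.zeros_like(v), np.zeros_like(z)
    dw = [None] * N
    for i in range(N):
        dy[i] = -c[i] * (y[i] - r[i]) + psi[i](y[i]).T @ (w_bar[i] - w_tld[i])      # (5a)
        dr[i] = (-th1 * grads[i](y[i]) / z[i, i]                                     # (5b)
                 - th2 * (alpha[i] + beta[i]) * e[i]
                 - sum(A[i, j] * (v[i] - v[j]) for j in range(N)))
        dv[i] = th2 * (alpha[i] + beta[i]) * e[i]                                    # (5c)
        dz[i] = -sum(A[i, j] * (z[i] - z[j]) for j in range(N))                      # (5d)
        dw[i] = Lam[i] @ (psi[i](y[i]) @ (y[i] - r[i]))                              # omega_tilde update
    state["y"] = y + dt * dy
    state["r"] = r + dt * dr
    state["v"] = v + dt * dv
    state["z"] = z + dt * dz
    state["alpha"] = alpha + dt * beta                                               # alpha_i' = e_i^T e_i
    state["w_tld"] = [w_tld[i] + dt * dw[i] for i in range(N)]
    return state
```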
Remark 2. 
Equation (5) introduces the virtual vector r i to generate a reference signal that converges to the minimizer of Problem (3). The variable ω ˜ i is introduced to estimate the unknown parameter vector, with the estimation process regulated by the matrix Λ i , while the adaptive coupling gains α i and β i are designed to drive the agents toward the minimizer of Problem (3). The auxiliary variable υ i is introduced to ensure that r i eventually reaches consensus, while the parameters θ 1 , θ 2 , and c i are incorporated to facilitate the subsequent rigorous convergence analysis. Additionally, the variable z i is introduced to facilitate the estimation of the left eigenvector of L associated with the zero eigenvalue, without requiring prior knowledge of this information, as is needed in [31].
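A quick numerical check of this last point (illustrative only, reusing the hypothetical four-agent graph from the earlier snippets): stacking the $z_i$ as rows of a matrix $Z$ with $Z(0)=I_N$, the dynamics in (5d) become $\dot{Z}=-LZ$, and each diagonal entry $z_{ii}(t)$ converges to the corresponding entry of the normalized left eigenvector $\xi$ (with $\xi^T 1_N=1$):

```python
import numpy as np

A = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 1, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
N = len(A)

Z, dt = np.eye(N), 0.01            # row i of Z is z_i; z_ii(0) = 1 and z_ik(0) = 0 for k != i
for _ in range(20000):
    Z = Z + dt * (-L @ Z)          # forward-Euler integration of Zdot = -L Z

w, V = np.linalg.eig(L.T)
xi = np.real(V[:, np.argmin(np.abs(w))])
xi = xi / xi.sum()                 # normalized left eigenvector, xi^T 1 = 1

print(np.allclose(np.diag(Z), xi, atol=1e-6))   # True: z_ii(t) estimates xi_i
```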
Remark 3. 
Rather than being predetermined, the coupling gains α i and β i are adaptively tuned during the algorithm’s execution. Their updates depend exclusively on the local error e i arising from the discrepancy between virtual vectors of neighboring agents. In particular, α i asymptotically converges to a finite constant as the system outputs achieve consensus, which will be established in the subsequent convergence analysis.
The control Equation (5) admits a compact representation as
$$\begin{aligned}
\dot{y} &= -(c\otimes I_n)(y-r)+\psi^T(x)(\bar{\omega}-\tilde{\omega}), &\text{(6a)}\\
\dot{r} &= -\theta_1(\Phi_3^{-1}\otimes I_n)\nabla f(y)-\theta_2\big((\Phi_1+\Phi_2)L\otimes I_n\big)r-(L\otimes I_n)\upsilon, &\text{(6b)}\\
\dot{\upsilon} &= \theta_2\big((\Phi_1+\Phi_2)L\otimes I_n\big)r, &\text{(6c)}\\
\dot{z} &= -(L\otimes I_N)z,\qquad \dot{\tilde{\omega}}=\Lambda\psi(x)(y-r),\qquad \dot{\alpha}_i=e_i^Te_i, &\text{(6d)}
\end{aligned}$$
where $y=[y_1^T,y_2^T,\ldots,y_N^T]^T\in\mathbb{R}^{Nn}$, $r=[r_1^T,r_2^T,\ldots,r_N^T]^T\in\mathbb{R}^{Nn}$, $\upsilon=[\upsilon_1^T,\upsilon_2^T,\ldots,\upsilon_N^T]^T\in\mathbb{R}^{Nn}$, $\nabla f(y)=[\nabla f_1(y_1)^T,\nabla f_2(y_2)^T,\ldots,\nabla f_N(y_N)^T]^T\in\mathbb{R}^{Nn}$, $c=\mathrm{diag}(c_1,c_2,\ldots,c_N)\in\mathbb{R}^{N\times N}$, $\tilde{\omega}=[\tilde{\omega}_1^T,\tilde{\omega}_2^T,\ldots,\tilde{\omega}_N^T]^T\in\mathbb{R}^{\sum_{i=1}^N m_i}$, $\Lambda=\mathrm{blkdiag}(\Lambda_1,\Lambda_2,\ldots,\Lambda_N)\in\mathbb{R}^{\sum_{i=1}^N m_i\times\sum_{i=1}^N m_i}$, $\psi(x)=\mathrm{blkdiag}(\psi_1(x_1),\ldots,\psi_N(x_N))\in\mathbb{R}^{\sum_{i=1}^N m_i\times Nn}$, $\Phi_1=\mathrm{diag}(\alpha_1,\alpha_2,\ldots,\alpha_N)\in\mathbb{R}^{N\times N}$, $\Phi_2=\mathrm{diag}(\beta_1,\beta_2,\ldots,\beta_N)\in\mathbb{R}^{N\times N}$, and $\Phi_3^{-1}=\mathrm{diag}\big(\tfrac{1}{z_{11}},\tfrac{1}{z_{22}},\ldots,\tfrac{1}{z_{NN}}\big)\in\mathbb{R}^{N\times N}$.
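For readers who prefer working directly with the stacked form, the following short sketch (illustrative only; the graph, gain values, and function name `rdot` are assumptions, not from the paper) assembles the Kronecker-product operators appearing in (6b) with NumPy:

```python
import numpy as np

N, n = 4, 2
A = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 1, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

alpha = np.ones(N)                  # Phi_1 = diag(alpha_1, ..., alpha_N)
beta = np.zeros(N)                  # Phi_2 = diag(beta_1, ..., beta_N)
z_diag = np.ones(N)                 # diagonal entries z_11, ..., z_NN of the z-estimates

Phi12 = np.diag(alpha + beta)       # Phi_1 + Phi_2
Phi3_inv = np.diag(1.0 / z_diag)    # Phi_3^{-1}

def rdot(r, v, grad_f_stack, th1=0.5, th2=1.0):
    """Stacked right-hand side of (6b):
    rdot = -th1 (Phi_3^{-1} (x) I_n) grad f(y) - th2 ((Phi_1 + Phi_2) L (x) I_n) r - (L (x) I_n) v."""
    return (-th1 * np.kron(Phi3_inv, np.eye(n)) @ grad_f_stack
            - th2 * np.kron(Phi12 @ L, np.eye(n)) @ r
            - np.kron(L, np.eye(n)) @ v)
```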
Lemma 2. 
Under Assumptions 1 and 2, define the variables $\delta=y-r$ and $\hat{\omega}=\bar{\omega}-\tilde{\omega}$. If $\delta=\hat{\omega}\equiv 0$ and $(\tilde{y},\tilde{\upsilon})$ satisfies
$$\begin{aligned}
0 &= -\theta_1(\Xi^{-1}\otimes I_n)\nabla f(\tilde{y})-\theta_2\big((\Phi_1+\Phi_2)L\otimes I_n\big)\tilde{y}-(L\otimes I_n)\tilde{\upsilon}, &\text{(7a)}\\
0 &= \theta_2\big((\Phi_1+\Phi_2)L\otimes I_n\big)\tilde{y}, &\text{(7b)}
\end{aligned}$$
where $\tilde{y}=[\tilde{y}_1^T,\tilde{y}_2^T,\ldots,\tilde{y}_N^T]^T$ and $\Xi^{-1}=\mathrm{diag}\big(\tfrac{1}{\xi_1},\tfrac{1}{\xi_2},\ldots,\tfrac{1}{\xi_N}\big)$, with $\xi=[\xi_1,\xi_2,\ldots,\xi_N]^T$ being the left eigenvector of $L$ corresponding to its zero eigenvalue, then $\tilde{y}=1_N\otimes y^*$, where $y^*$ is the optimal solution to Problem (3).
Proof of Lemma 2. 
Since $\delta=\hat{\omega}\equiv 0$, it follows from (6) that the point $(\tilde{y},\tilde{\upsilon})$ satisfying (7) constitutes an equilibrium of the dynamical system described by Equation (6). From (7b), we obtain $\tilde{y}=1_N\otimes d$ for some $d\in\mathbb{R}^n$. Left-multiplying both sides of (7a) by $\xi^T\otimes I_n$ gives $\sum_{i=1}^{N}\nabla f_i(\tilde{y}_i)=0$. Under Assumption 1, the optimality condition $\sum_{i=1}^{N}\nabla f_i(y^*)=0$ holds. Therefore, it follows that $\tilde{y}=1_N\otimes y^*$. □
Remark 4. 
Lemma 2 establishes the optimality condition for Equation (5), deriving the necessary equations for the optimal solution y * , which forms the basis for the subsequent convergence analysis.

3.2. Convergence Analysis

In this section, we prove the convergence property of Equation (5) by employing Lemma 2 in conjunction with global uniform asymptotic stability and ISS theory.
Based on Lemma 2, to prove $\lim_{t\to\infty}y_i(t)=y^*$, we must show the convergence of $(y,\upsilon)$ to $(\tilde{y},\tilde{\upsilon})$. Through coordinate transformations, we define the new variables $\delta=y-r$, $\hat{\omega}=\bar{\omega}-\tilde{\omega}$, $\eta=r-\tilde{y}$, and $\mu=\upsilon-\tilde{\upsilon}$. Taking the difference of (6) and (7), the transformed dynamics of $(\delta,\eta,\mu,\hat{\omega},z,\alpha_i)$ derived from (6) are given by the following differential equations:
$$\begin{aligned}
\dot{\delta} &= -(c\otimes I_n)\delta+\psi^T(x)\hat{\omega}-\dot{r}, &\text{(8a)}\\
\dot{\eta} &= -\theta_1(\Phi_3^{-1}\otimes I_n)\varphi+\theta_1\big((\Xi^{-1}-\Phi_3^{-1})\otimes I_n\big)\nabla f(\tilde{y})-\theta_2\big((\Phi_1+\Phi_2)L\otimes I_n\big)\eta-(L\otimes I_n)\mu, &\text{(8b)}\\
\dot{\mu} &= \theta_2\big((\Phi_1+\Phi_2)L\otimes I_n\big)\eta, &\text{(8c)}\\
\dot{z} &= -(L\otimes I_N)z,\qquad \dot{\hat{\omega}}=-\Lambda\psi(x)\delta,\qquad \dot{\alpha}_i=e_i^Te_i, &\text{(8d)}
\end{aligned}$$
where $\varphi=\nabla f(y)-\nabla f(\tilde{y})$ and $e_i=\sum_{j\in S}a_{ij}(\eta_i-\eta_j)$.
Based on Lemma 2 and (8), we conclude that to prove the convergence of the variable $y_i$ in (5) to the optimal point $y^*$ of Problem (3), we need to show that $\delta$, $\hat{\omega}$, and $\eta$ converge to $0$ as time tends to infinity. Since $\Xi^{-1}\neq\Phi_3^{-1}$ in general, the origin is not an equilibrium point of (8). For the subsequent convergence analysis, we introduce additional coordinate transformations for $\eta$ and $\mu$, defining the new variables $\hat{\eta}=(L\otimes I_n)\eta$ and $\hat{\mu}=(L\otimes I_n)\mu$. Therefore, we first need to prove $\lim_{t\to\infty}\delta(t)=0$, $\lim_{t\to\infty}\hat{\eta}(t)=0$, $\lim_{t\to\infty}\hat{\mu}(t)=0$, and $\lim_{t\to\infty}\hat{\omega}(t)=0$, and then prove $\lim_{t\to\infty}\eta(t)=1_N\otimes\sigma_1$ and $\lim_{t\to\infty}\mu(t)=1_N\otimes\sigma_2$, where $\sigma_1,\sigma_2\in\mathbb{R}^n$ are two constant vectors. Finally, by proving $\sigma_1=0$ and $\|\sigma_2\|<\infty$, we ultimately prove $\lim_{t\to\infty}y_i(t)=y^*$.
Let $\hat{\eta}_i\in\mathbb{R}^n$ represent the column vector consisting of the $((i-1)n+1)$-th to the $(in)$-th elements of the vector $\hat{\eta}$. Employing $e_i=\sum_{j\in S}a_{ij}(\eta_i-\eta_j)$, we obtain $\hat{\eta}_i=e_i$, and the dynamics of $(\delta,\hat{\eta},\hat{\mu},\hat{\omega},z,\alpha_i)$ can thus be rewritten as
$$\begin{aligned}
\dot{\delta} &= -(c\otimes I_n)\delta+\psi^T(x)\hat{\omega}-\dot{r}, &\text{(9a)}\\
\dot{\hat{\eta}} &= -\theta_1(\Phi_3^{-1}\otimes I_n)\varphi+\theta_1\big((\Xi^{-1}-\Phi_3^{-1})\otimes I_n\big)\nabla f(\tilde{y})-\theta_2\big((\Phi_1+\Phi_2)L\otimes I_n\big)\hat{\eta}-(L\otimes I_n)\hat{\mu}, &\text{(9b)}\\
\dot{\hat{\mu}} &= \theta_2\big((\Phi_1+\Phi_2)L\otimes I_n\big)\hat{\eta}, &\text{(9c)}\\
\dot{z} &= -(L\otimes I_N)z,\qquad \dot{\hat{\omega}}=-\Lambda\psi(x)\delta,\qquad \dot{\alpha}_i=\hat{\eta}_i^T\hat{\eta}_i. &\text{(9d)}
\end{aligned}$$
For the subsequent convergence analysis, we define $w=\mathrm{col}(\hat{\eta},\hat{\mu},\delta)$. Then the dynamics of $w$ can be written as
$$\dot{w}=\begin{bmatrix}\dot{\hat{\eta}}\\ \dot{\hat{\mu}}\\ \dot{\delta}\end{bmatrix}=\underbrace{\begin{bmatrix}-\theta_1(L\Phi_3^{-1}\otimes I_n)\varphi-\theta_2\big(L(\Phi_1+\Phi_2)\otimes I_n\big)\hat{\eta}-(L\otimes I_n)\hat{\mu}\\[2pt] \theta_2\big(L(\Phi_1+\Phi_2)\otimes I_n\big)\hat{\eta}\\[2pt] -(c\otimes I_n)\delta+\psi^T(x)\hat{\omega}-\dot{r}\end{bmatrix}}_{h_1(w)}+\underbrace{\begin{bmatrix}\theta_1\big(L(\Xi^{-1}-\Phi_3^{-1})\otimes I_n\big)\nabla f(\tilde{y})\\ 0\\ 0\end{bmatrix}}_{h_2(t)}, \qquad (10)$$
where w ˙ = h 1 ( w ) represents the unperturbed system.
Remark 5. 
Under Assumption 1, it can be verified that $h_1(w)$ is locally Lipschitz-continuous for $w\in\Omega$, where $\Omega\subset\mathbb{R}^{3Nn}$ is any compact set. Furthermore, $h_2(t)$ remains bounded for all $t\geq 0$. According to Theorem 4.19 in [32], to verify the asymptotic convergence of system (10) to the origin when $h_2(t)\neq 0$, we must first demonstrate the global uniform asymptotic stability of the unperturbed system and the ISS of system (10).
Theorem 1. 
Suppose Assumptions 1 and 2 hold, and the initial adaptive coupling gains satisfy $\alpha_i(0)>0$ for all $i\in S$. Then the proposed initialization-free continuous-time optimization algorithm described by Equation (5) ensures that the control output $y_i(t)$ tracks the optimal trajectory $y^*$ through an appropriate selection of the parameters $c_i$, $\theta_1$, and $\theta_2$.
Proof of Theorem 1. 
The proof consists of three main steps:
  • Prove the global uniform asymptotic stability of the unperturbed system $\dot{w}=h_1(w)$;
  • Establish the ISS property of system (10);
  • Demonstrate the convergence of the variables $\eta$ and $\mu$ in (8) to $0$ and $1_N\otimes\sigma_2$, respectively, as $t\to\infty$.
The detailed proof proceeds as follows.
Step 1. Consider $w=0$ as the equilibrium point of the unperturbed system $\dot{w}=h_1(w)$. Accordingly, the following Lyapunov function candidate is considered:
$$V=V_1+s_1V_2+V_3+s_2V_4, \qquad (11)$$
where $s_1=\frac{33N\bar{\lambda}_N(L^TL)}{\theta_2\lambda_2^2(\bar{L})}$, with $\bar{\lambda}_N(L^TL)$ denoting the maximum eigenvalue of $L^TL$ and $\lambda_2(\bar{L})$ representing the second smallest eigenvalue of the matrix $\bar{L}=\Xi L+L^T\Xi$ (abbreviated as $\bar{\lambda}_N$ and $\lambda_2$ for simplicity). Moreover, the components of the Lyapunov function are defined as
$$V_1=\frac{1}{2}\sum_{i=1}^{N}(\alpha_i-\tilde{\alpha})^2+\frac{1}{2}\sum_{i=1}^{N}\xi_i(2\alpha_i+\beta_i)\hat{\eta}_i^T\hat{\eta}_i,\qquad V_2=\frac{1}{2}(\hat{\eta}+\hat{\mu})^T(\Xi\otimes I_n)(\hat{\eta}+\hat{\mu}),$$
$$V_3=\frac{1}{2}\delta^T\delta+\frac{1}{2}\hat{\omega}^T\Lambda^{-1}\hat{\omega},\qquad V_4=\frac{1}{2}X^TP_1X,$$
where $\tilde{\alpha}$ and $s_2$ are positive constants, $X=[\eta^T,\mu^T]^T$, $P_1=\begin{bmatrix}\tau_{11} & \tau_{12}\\ \tau_{12} & \tau_{22}\end{bmatrix}\otimes I_{Nn}$, and $\tau_{11}$, $\tau_{12}$, and $\tau_{22}$ are positive constants satisfying the positive definiteness condition $\tau_{11}\tau_{22}>\tau_{12}^2$.
The derivative of $V_1$ along the trajectories of the unperturbed system $\dot{w}=h_1(w)$ is given by
$$\begin{aligned}
\dot{V}_1 &= \hat{\eta}^T\big((\Phi_1-\tilde{\alpha}I_N)\otimes I_n\big)\hat{\eta}+\sum_{i=1}^{N}\xi_i(2\alpha_i+2\beta_i)\hat{\eta}_i^T\dot{\hat{\eta}}_i+\sum_{i=1}^{N}\xi_i\beta_i\dot{\alpha}_i\\
&= -\theta_2\hat{\eta}^T\big((\Phi_1+\Phi_2)(\Xi L+L^T\Xi)(\Phi_1+\Phi_2)\otimes I_n\big)\hat{\eta}+\hat{\eta}^T\big((\Phi_1+\Xi\Phi_2-\tilde{\alpha}I_N)\otimes I_n\big)\hat{\eta}\\
&\quad-\theta_1\hat{\eta}^T\big((\Phi_1+\Phi_2)(\Xi L+L^T\Xi)\Phi_3^{-1}\otimes I_n\big)\varphi-\hat{\eta}^T\big((\Phi_1+\Phi_2)(\Xi L+L^T\Xi)\otimes I_n\big)\hat{\mu}. \qquad (12)
\end{aligned}$$
Define $\pi=\big((\Phi_1+\Phi_2)\otimes I_n\big)\hat{\eta}$. Since both $\Phi_1$ and $\Phi_2$ are positive diagonal matrices, there exists a positive vector $\vartheta=(\Phi_1+\Phi_2)^{-1}\xi\otimes 1_n$ such that $\vartheta^T\pi=\big(\xi^T(\Phi_1+\Phi_2)^{-1}\otimes 1_n^T\big)\big((\Phi_1+\Phi_2)\otimes I_n\big)\hat{\eta}=(\xi^TL\otimes 1_n^T)\eta=0$. Therefore, according to Lemma 1, we obtain
$$-\theta_2\hat{\eta}^T\big((\Phi_1+\Phi_2)(\Xi L+L^T\Xi)(\Phi_1+\Phi_2)\otimes I_n\big)\hat{\eta}\leq-\frac{\theta_2\lambda_2}{N}\hat{\eta}^T\big((\Phi_1+\Phi_2)^2\otimes I_n\big)\hat{\eta}. \qquad (13)$$
Since $z_{ii}(t)>0$ for $t>0$ and $i\in S$, and defining $\hat{z}=\min\big\{\inf_{t>0}z_{ii}(t),\ i\in S\big\}$, we can employ Young's inequality to obtain
$$-2\theta_1\hat{\eta}^T\big((\Phi_1+\Phi_2)\Xi L\Phi_3^{-1}\otimes I_n\big)\varphi\leq\frac{\theta_2\lambda_2}{4N}\hat{\eta}^T\big((\Phi_1+\Phi_2)^2\otimes I_n\big)\hat{\eta}+\frac{4\theta_1^2N\bar{\lambda}_N}{\theta_2\lambda_2\hat{z}^2}\|\varphi\|^2, \qquad (14)$$
$$-2\hat{\eta}^T\big((\Phi_1+\Phi_2)\Xi L\otimes I_n\big)\hat{\mu}\leq\frac{\theta_2\lambda_2}{4N}\hat{\eta}^T\big((\Phi_1+\Phi_2)^2\otimes I_n\big)\hat{\eta}+\frac{4N\bar{\lambda}_N}{\theta_2\lambda_2}\|\hat{\mu}\|^2. \qquad (15)$$
Substituting (13), (14), and (15) into (12), we can further derive
$$\dot{V}_1\leq-\frac{\theta_2\lambda_2}{2N}\hat{\eta}^T\big((\Phi_1+\Phi_2)^2\otimes I_n\big)\hat{\eta}+\frac{4\theta_1^2N\bar{\lambda}_N}{\theta_2\lambda_2\hat{z}^2}\|\varphi\|^2+\frac{4N\bar{\lambda}_N}{\theta_2\lambda_2}\|\hat{\mu}\|^2+\hat{\eta}^T\big((\Phi_1+\Xi\Phi_2-\tilde{\alpha}I_N)\otimes I_n\big)\hat{\eta}. \qquad (16)$$
The derivative of V 2 along the trajectories of the unperturbed system is given by
$$\begin{aligned}
\dot{V}_2 &= -\theta_1\hat{\eta}^T(\Xi L\Phi_3^{-1}\otimes I_n)\varphi-\hat{\eta}^T(\Xi L\otimes I_n)\hat{\mu}-\theta_1\hat{\mu}^T(\Xi L\Phi_3^{-1}\otimes I_n)\varphi-\hat{\mu}^T(\Xi L\otimes I_n)\hat{\mu}\\
&\leq-\frac{\lambda_2}{4}\|\hat{\mu}\|^2+\frac{\lambda_2+2\bar{\lambda}_N}{\lambda_2}\|\hat{\eta}\|^2+\frac{(8+\lambda_2)\theta_1^2\bar{\lambda}_N}{4\lambda_2\hat{z}^2}\|\varphi\|^2. \qquad (17)
\end{aligned}$$
To derive the above inequality, we employ Lemma 1 in conjunction with Young’s inequality; specifically,
$$\begin{aligned}
-\hat{\mu}^T(\Xi L\otimes I_n)\hat{\mu}&\leq-\frac{\lambda_2}{2}\|\hat{\mu}\|^2, &
-\hat{\eta}^T(\Xi L\otimes I_n)\hat{\mu}&\leq\frac{\lambda_2}{8}\|\hat{\mu}\|^2+\frac{2\bar{\lambda}_N}{\lambda_2}\|\hat{\eta}\|^2,\\
-\theta_1\hat{\mu}^T(\Xi L\Phi_3^{-1}\otimes I_n)\varphi&\leq\frac{\lambda_2}{8}\|\hat{\mu}\|^2+\frac{2\theta_1^2\bar{\lambda}_N}{\lambda_2\hat{z}^2}\|\varphi\|^2, &
-\theta_1\hat{\eta}^T(\Xi L\Phi_3^{-1}\otimes I_n)\varphi&\leq\|\hat{\eta}\|^2+\frac{\theta_1^2\bar{\lambda}_N}{4\hat{z}^2}\|\varphi\|^2.
\end{aligned}$$
Next, let V ^ = V 1 + s 1 V 2 , and then the derivative of V ^ along the trajectories of the unperturbed system satisfies
$$\dot{\hat{V}}\leq\hat{\eta}^T\Big(\Big(-\frac{\theta_2\lambda_2}{2N}(\Phi_1+\Phi_2)^2+(\Phi_1+\Xi\Phi_2)-\tilde{\alpha}I_N+\frac{33N\bar{\lambda}_N(\lambda_2+2\bar{\lambda}_N)}{\theta_2\lambda_2^3}I_N\Big)\otimes I_n\Big)\hat{\eta}-\frac{17N\bar{\lambda}_N}{4\theta_2\lambda_2}\|\hat{\mu}\|^2+\bar{k}\|\varphi\|^2, \qquad (18)$$
where $\bar{k}=\frac{4\theta_1^2N\bar{\lambda}_N}{\theta_2\lambda_2\hat{z}^2}+\frac{33N\theta_1^2\bar{\lambda}_N^2(8+\lambda_2)}{4\theta_2\lambda_2^3\hat{z}^2}$.
According to Assumption 1, we have $\|\varphi\|=\|\nabla f(y)-\nabla f(\tilde{y})\|\leq\hat{\ell}\|y-\tilde{y}\|\leq\hat{\ell}(\|\delta\|+\|\eta\|)$, where $\hat{\ell}=\max\{\ell_1,\ell_2,\ldots,\ell_N\}$. Let $\bar{\lambda}_2(L^TL)$ denote the second smallest eigenvalue of $L^TL$, abbreviated as $\bar{\lambda}_2$. Then we obtain $\|\hat{\eta}\|^2=\eta^T(L^TL\otimes I_n)\eta\geq\bar{\lambda}_2\|\eta\|^2$. Therefore, by applying Young's inequality, we can further derive
$$\varphi^T\varphi\leq\hat{\ell}^2(1+\gamma_1)\|\delta\|^2+\hat{\ell}^2\Big(1+\frac{1}{\gamma_1}\Big)\|\eta\|^2\leq\hat{\ell}^2(1+\gamma_1)\|\delta\|^2+\hat{\ell}^2\Big(1+\frac{1}{\gamma_1}\Big)\frac{\|\hat{\eta}\|^2}{\bar{\lambda}_2}, \qquad (19)$$
where $\gamma_1>0$. Substituting (19) into (18) yields
$$\dot{\hat{V}}\leq-\hat{\eta}^T\Big(\frac{\theta_2\lambda_2}{2N}\Big((\Phi_1+\Phi_2)-\frac{N}{\lambda_2}I_N\Big)^2\otimes I_n\Big)\hat{\eta}-\Big(\tilde{\alpha}-\epsilon_1-\epsilon_2-\frac{\theta_2N}{2\lambda_2}\Big)\|\hat{\eta}\|^2-\frac{17N\bar{\lambda}_N}{4\theta_2\lambda_2}\|\hat{\mu}\|^2+\epsilon_3\|\delta\|^2, \qquad (20)$$
where $\epsilon_1=\frac{33N\bar{\lambda}_N(\lambda_2+2\bar{\lambda}_N)}{\theta_2\lambda_2^3}$, $\epsilon_2=\frac{\bar{k}\hat{\ell}^2}{\bar{\lambda}_2}\big(1+\frac{1}{\gamma_1}\big)$, and $\epsilon_3=\bar{k}\hat{\ell}^2(1+\gamma_1)$. By choosing $\tilde{\alpha}\geq 1+\epsilon_1+\epsilon_2+\frac{\theta_2N}{2\lambda_2}$, we get
$$\dot{\hat{V}}\leq-\|\hat{\eta}\|^2-\frac{17N\bar{\lambda}_N}{4\theta_2\lambda_2}\|\hat{\mu}\|^2+\epsilon_3\|\delta\|^2. \qquad (21)$$
Subsequently, we compute the derivative of $V_3$ along the trajectories of the unperturbed system $\dot{w}=h_1(w)$ and apply Young's inequality to obtain
$$\dot{V}_3=-\delta^T(c\otimes I_n)\delta-\delta^T\dot{r}\leq-\delta^T(c\otimes I_n)\delta+\frac{\gamma_2}{2}\|\delta\|^2+\frac{1}{2\gamma_2}\|\dot{r}\|^2, \qquad (22)$$
where $\gamma_2>0$. Further processing $\|\dot{r}\|^2$ yields
$$\begin{aligned}
\|\dot{r}\|^2=\|\dot{\eta}\|^2&=\big\|-\theta_1(\Phi_3^{-1}\otimes I_n)\varphi-\theta_2\big((\Phi_1+\Phi_2)L\otimes I_n\big)\eta-(L\otimes I_n)\mu\big\|^2\\
&\leq(1+\gamma_3)\theta_1^2\varphi^T(\Phi_3^{-1}\Phi_3^{-1}\otimes I_n)\varphi+\Big(1+\frac{1}{\gamma_3}\Big)X^TP_2X\\
&\leq\frac{(1+\gamma_3)\theta_1^2}{\hat{z}^2}\varphi^T\varphi+\Big(1+\frac{1}{\gamma_3}\Big)X^TP_2X, \qquad (23)
\end{aligned}$$
where $\gamma_3>0$ and $P_2=\begin{bmatrix}\theta_2^2(\Phi_1+\Phi_2)^2L^TL & \theta_2(\Phi_1+\Phi_2)L^TL\\ \theta_2(\Phi_1+\Phi_2)L^TL & L^TL\end{bmatrix}\otimes I_n$. Substituting (19) and (23) into (22), we can obtain
$$\dot{V}_3\leq-\varepsilon\|\delta\|^2+\frac{1}{2\gamma_2}\Big(1+\frac{1}{\gamma_3}\Big)X^TP_2X+\frac{(1+\gamma_3)}{2\gamma_2\hat{z}^2}\Big(1+\frac{1}{\gamma_1}\Big)\theta_1^2\hat{\ell}^2\|\eta\|^2, \qquad (24)$$
where $\varepsilon=c_{\min}-\frac{\gamma_2}{2}-\frac{(1+\gamma_3)(1+\gamma_1)\theta_1^2\hat{\ell}^2}{2\gamma_2\hat{z}^2}$, with $c_{\min}=\min_{i\in S}\{c_i\}$.
The derivative of V 4 along the trajectories of the unperturbed system is
$$\dot{V}_4=-X^TP_3X-\theta_3\tau_{12}\mu^T\mu-\theta_1\tau_{11}\eta^T(\Phi_3^{-1}\otimes I_n)\varphi-\theta_1\tau_{12}\mu^T(\Phi_3^{-1}\otimes I_n)\varphi, \qquad (25)$$
where $P_3=\begin{bmatrix}\theta_2(\tau_{11}-\tau_{12})(\Phi_1+\Phi_2)L & \tau_{11}L\\ \theta_2(\tau_{12}-\tau_{22})(\Phi_1+\Phi_2)L & (1-\theta_3)\tau_{12}L\end{bmatrix}\otimes I_n$, with $0<\theta_3<1$. Applying Young's inequality and (19), we derive
$$\begin{aligned}
&-\theta_1\tau_{11}\eta^T(\Phi_3^{-1}\otimes I_n)\varphi-\theta_1\tau_{12}\mu^T(\Phi_3^{-1}\otimes I_n)\varphi\\
&\qquad\leq\frac{\theta_1\tau_{11}\gamma_4}{2}\|\eta\|^2+\Big(\frac{\theta_1\tau_{11}}{2\gamma_4}+\frac{\theta_1\tau_{12}}{2\gamma_5}\Big)\frac{\hat{\ell}^2}{\hat{z}^2}\Big((1+\gamma_1)\|\delta\|^2+\Big(1+\frac{1}{\gamma_1}\Big)\|\eta\|^2\Big)+\frac{\theta_1\tau_{12}\gamma_5}{2}\|\mu\|^2. \qquad (26)
\end{aligned}$$
Substituting (26) back into (25) yields
$$\dot{V}_4\leq-X^TP_3X+b_1\|\eta\|^2-b_2\|\mu\|^2+b_3\|\delta\|^2, \qquad (27)$$
where $\gamma_4>0$, $\gamma_5>0$, $b_1=\frac{\theta_1\tau_{11}\gamma_4}{2}+\frac{\hat{\ell}^2}{\hat{z}^2}\big(\frac{\theta_1\tau_{11}}{2\gamma_4}+\frac{\theta_1\tau_{12}}{2\gamma_5}\big)\big(1+\frac{1}{\gamma_1}\big)$, $b_2=\theta_3\tau_{12}-\frac{\theta_1\tau_{12}\gamma_5}{2}$, $b_3=\frac{\hat{\ell}^2}{\hat{z}^2}\big(\frac{\theta_1\tau_{11}}{2\gamma_4}+\frac{\theta_1\tau_{12}}{2\gamma_5}\big)(1+\gamma_1)$, and $\theta_2(\tau_{12}-\tau_{22})=\tau_{11}$. By appropriately choosing $\theta_1$ and $\theta_3$, we can ensure $b_2>0$.
Combining (21), (24), and (27), the derivative of V along the trajectories of the unperturbed system is obtained as
$$\dot{V}\leq-\|\hat{\eta}\|^2-\frac{17N\bar{\lambda}_N}{4\theta_2\lambda_2}\|\hat{\mu}\|^2-s_2b_2\|\mu\|^2+(\epsilon_3-\varepsilon+s_2b_3)\|\delta\|^2+\frac{1}{2\gamma_2}\Big(1+\frac{1}{\gamma_3}\Big)X^TP_1X-s_2X^TP_3X+s_3\|\eta\|^2, \qquad (28)$$
where $s_3=\frac{(1+\gamma_3)}{2\gamma_2\hat{z}^2}\big(1+\frac{1}{\gamma_1}\big)\theta_1^2\hat{\ell}^2+s_2b_1$. Since $P_1$ is positive definite, by appropriately selecting $s_2$ and ensuring that $c_{\min}\geq\epsilon_3+s_2b_3+\frac{\gamma_2}{2}+\frac{(1+\gamma_3)(1+\gamma_1)\theta_1^2\hat{\ell}^2}{2\gamma_2\hat{z}^2}$, together with choosing $\tau_{11}/\tau_{12}$ sufficiently large, it is guaranteed that $\dot{V}$ is negative definite. This shows that the origin is a globally uniformly asymptotically stable equilibrium point of the unperturbed system $\dot{w}=h_1(w)$, and the adaptive control gain $\alpha_i$ converges to a positive constant.
Step 2. Prove the ISS of system (10) and the convergence of $w(t)=\mathrm{col}(\hat{\eta},\hat{\mu},\delta)$ to zero as $t\to\infty$.
From Step 1, the derivative of V with respect to time along trajectories of system (10) satisfies
$$\begin{aligned}
\dot{V}\leq&-\hat{\eta}^T\Big(\frac{\theta_2\lambda_2}{2N}\Big((\Phi_1+\Phi_2)-\frac{N}{\lambda_2}I_N\Big)^2\otimes I_n\Big)\hat{\eta}-\Big(\tilde{\alpha}-\epsilon_1-\epsilon_2-\frac{N}{2\lambda_2}\Big)\|\hat{\eta}\|^2-\frac{17N\bar{\lambda}_N}{4\theta_2\lambda_2}\|\hat{\mu}\|^2+\epsilon_3\|\delta\|^2+\dot{V}_3+s_2\dot{V}_4\\
&+2\theta_1\hat{\eta}^T\big((\Phi_1+\Phi_2)\Xi L(\Xi^{-1}-\Phi_3^{-1}(t))\otimes I_n\big)\nabla f(\tilde{y})+s_4\theta_1(\hat{\eta}+\hat{\mu})^T\big(\Xi L(\Xi^{-1}-\Phi_3^{-1}(t))\otimes I_n\big)\nabla f(\tilde{y}), \qquad (29)
\end{aligned}$$
where $s_4=\frac{33N\bar{\lambda}_N}{\lambda_2^2}$. Defining the input disturbance $v(t)=\big(\Xi L(\Xi^{-1}-\Phi_3^{-1}(t))\otimes I_n\big)\nabla f(\tilde{y})$, we can obtain
$$2\theta_1\hat{\eta}^T\big((\Phi_1+\Phi_2)\Xi L(\Xi^{-1}-\Phi_3^{-1}(t))\otimes I_n\big)\nabla f(\tilde{y})\leq\frac{\theta_2\lambda_2}{4N}\hat{\eta}^T\big((\Phi_1+\Phi_2)^2\otimes I_n\big)\hat{\eta}+\frac{4\theta_1^2N}{\theta_2\lambda_2}\|v(t)\|^2, \qquad (30)$$
$$\theta_1(\hat{\eta}+\hat{\mu})^T\big(\Xi L(\Xi^{-1}-\Phi_3^{-1}(t))\otimes I_n\big)\nabla f(\tilde{y})\leq\frac{\lambda_2}{8\theta_2}\|\hat{\mu}\|^2+\|\hat{\eta}\|^2+\frac{8\theta_2+\lambda_2}{4\lambda_2}\theta_1^2\|v(t)\|^2. \qquad (31)$$
Substituting (30) and (31) into (29), and appropriately selecting $s_2$, we further get
$$\begin{aligned}
\dot{V}\leq&-\hat{\eta}^T\Big(\frac{\theta_2\lambda_2}{4N}\Big((\Phi_1+\Phi_2)-\frac{2N}{\lambda_2}I_N\Big)^2\otimes I_n\Big)\hat{\eta}-\Big(\tilde{\alpha}-\epsilon_1-\epsilon_2-s_4-\frac{\theta_2N}{\lambda_2}\Big)\|\hat{\eta}\|^2-\frac{N\bar{\lambda}_N}{8\theta_2\lambda_2}\|\hat{\mu}\|^2\\
&+(\epsilon_3-\varepsilon+s_2b_3)\|\delta\|^2+\Big(\frac{4\theta_1^2N}{\theta_2\lambda_2}+\frac{s_4\theta_1^2(8+\lambda_2)}{4\lambda_2}\Big)\|v(t)\|^2. \qquad (32)
\end{aligned}$$
Define $\kappa=\min\big\{1,\ \frac{N\bar{\lambda}_N}{8\theta_2\lambda_2},\ \varepsilon-\epsilon_3-s_2b_3\big\}$ and $\rho=\big(\frac{4\theta_1^2N}{\theta_2\lambda_2}+\frac{s_4\theta_1^2(8+\lambda_2)}{4\lambda_2}\big)N$. By selecting $\tilde{\alpha}\geq 1+\epsilon_1+\epsilon_2+s_4+\frac{\theta_2N}{\lambda_2}$ and ensuring $c_{\min}\geq\epsilon_3+s_2b_3+\frac{\gamma_2}{2}+\frac{(1+\gamma_3)(1+\gamma_1)\theta_1^2\hat{\ell}^2}{2\gamma_2\hat{z}^2}$, one obtains
$$\dot{V}\leq-\|\hat{\eta}\|^2-\frac{N\bar{\lambda}_N}{8\theta_2\lambda_2}\|\hat{\mu}\|^2-(\varepsilon-\epsilon_3-s_2b_3)\|\delta\|^2+\rho\|v(t)\|^2\leq-\kappa\|w(t)\|^2+\rho\|v(t)\|^2. \qquad (33)$$
For any $0<\zeta<1$, when $\|w(t)\|\geq\sqrt{\frac{\rho}{\kappa\zeta}}\,\|v(t)\|$, it holds that
$$\dot{V}\leq-(1-\zeta)\kappa\|w(t)\|^2. \qquad (34)$$
According to Theorem 4.19 in [32], system (10) is thus input-to-state stable. Furthermore, we can show that $\lim_{t\to\infty}z(t)=1_N\otimes\xi$, which implies $\lim_{t\to\infty}\Phi_3^{-1}(t)=\Xi^{-1}$. Following the ISS definition, we prove the convergence of system (10) to zero as $t\to\infty$; i.e., for the perturbed system with $w(t)=\mathrm{col}(\hat{\eta},\hat{\mu},\delta)$, we have $\lim_{t\to\infty}\delta(t)=0$, $\lim_{t\to\infty}\hat{\eta}(t)=0$, $\lim_{t\to\infty}\hat{\mu}(t)=0$, $\lim_{t\to\infty}\hat{\omega}(t)=0$, and $\lim_{t\to\infty}\alpha_i(t)=\tilde{\alpha}$.
Step 3. Prove $\lim_{t\to\infty}\eta(t)=0$ and $\lim_{t\to\infty}\mu(t)=1_N\otimes\sigma_2$ with $\|\sigma_2\|<\infty$.
From the established results $\lim_{t\to\infty}\hat{\eta}(t)=0$ and $\lim_{t\to\infty}\hat{\mu}(t)=0$, we obtain $\lim_{t\to\infty}\eta(t)=1_N\otimes\sigma_1$ and $\lim_{t\to\infty}\mu(t)=1_N\otimes\sigma_2$, where $\sigma_1,\sigma_2\in\mathbb{R}^n$. Taking limits on both sides of (8b) yields
$$0=\lim_{t\to\infty}(\Phi_3^{-1}(t)\otimes I_n)\big(\nabla f(y)-\nabla f(\tilde{y})\big)=(\Xi^{-1}\otimes I_n)\big(\nabla f(\tilde{y})-\nabla f(\eta+\tilde{y})\big)=(\Xi^{-1}\otimes I_n)\big(\nabla f(\tilde{y})-\nabla f(1_N\otimes\sigma_1+\tilde{y})\big). \qquad (35)$$
Thereby, we obtain
$$(\Xi^{-1}\otimes I_n)\nabla f(\tilde{y})=(\Xi^{-1}\otimes I_n)\nabla f(1_N\otimes\sigma_1+\tilde{y}). \qquad (36)$$
Left-multiplying both sides of (36) by $\xi^T\otimes I_n$ gives $\sum_{i=1}^{N}\nabla f_i(\tilde{y}_i)=\sum_{i=1}^{N}\nabla f_i(\tilde{y}_i+\sigma_1)$. Under Assumption 1, the optimal solution to Problem (3) is unique, which implies $\sigma_1=0$. Meanwhile, from (8c), it is straightforward to verify that $\|\sigma_2\|<\infty$. Thus, we have proved $\lim_{t\to\infty}\eta(t)=0$ and $\lim_{t\to\infty}\mu(t)=1_N\otimes\sigma_2$, thereby establishing the convergence of $(y,\upsilon)$ to $(\tilde{y},\tilde{\upsilon})$. In conclusion, the states $y_i$ in Equation (5) converge to the optimal solution $y^*$ of Problem (2). □
Remark 6. 
For (28) and (33), it suffices to select the parameters $c_i$ such that $c_{\min}$ is sufficiently large, ensuring that the coefficient of $\|\delta\|^2$ remains negative, thus guaranteeing global uniform asymptotic stability and ISS.
Remark 7. 
While the introduction of adaptive gains α i and β i inevitably increases the computational burden to some extent, it avoids the need for computing the inverse of Hessian matrices, as required in [16,17,29], thereby maintaining a relatively low overall computational complexity. In addition, the adaptive gains allow for dynamic adjustment of the control law, which helps improve the convergence performance of the system. To further reduce communication overhead, especially in bandwidth-constrained scenarios, our future work will consider integrating event-triggered communication mechanisms into the proposed framework.

4. Numerical Simulation

This section verifies the effectiveness of Equation (5) through a numerical simulation. The simulation considers a strongly connected but unbalanced directed communication network composed of six agents, where each directed edge has a communication weight of 1. The topology is shown in Figure 1. All local objective functions $f_i(x)$ ($x\in\mathbb{R}^2$, $i=1,2,\ldots,6$) are defined as follows:
$$f_1(x)=0.4(x_1-0.6)^2+0.6x_1x_2+0.2x_2^2,$$
$$f_2(x)=0.6(x_1-2.4)^2+0.4x_1x_2+0.2(x_2+0.2)^2,$$
$$f_3(x)=1.2(x_1-1.5)^2+0.8x_2^2+\ln(3+0.6x_1^2)+\ln(1.5+1.2x_2^2),$$
$$f_4(x)=2.5(x_1-1.5)^2+0.4(x_2-1.6)^2+\ln(4+0.8x_1^2)+\ln(2.5+2x_2^2),$$
$$f_5(x)=2.2(0.5x_1-2.5)^2+(0.5x_2-3)^2+\frac{0.8x_1}{5x_1^2+2.5}+0.4\exp\!\big(-0.7x_2^2\big),$$
$$f_6(x)=1.2(0.6x_1-3)^2+(0.6x_2-5)^2+\frac{0.2x_1}{6x_1^2+1}+0.8\exp\!\big(-0.6x_2^2\big).$$
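To plug such costs into the earlier `algorithm5_step` sketch, the callables below transcribe $f_1$–$f_4$ from the displayed formulas and use central-difference gradients to avoid hand-derived expressions (only $f_1$–$f_4$ are included, so the minimizer printed below is not the paper's $y^*$; `num_grad` and the step/iteration choices are illustrative assumptions, not a reproduction of the paper's experiment):

```python
import numpy as np

f = [
    lambda x: 0.4*(x[0]-0.6)**2 + 0.6*x[0]*x[1] + 0.2*x[1]**2,
    lambda x: 0.6*(x[0]-2.4)**2 + 0.4*x[0]*x[1] + 0.2*(x[1]+0.2)**2,
    lambda x: 1.2*(x[0]-1.5)**2 + 0.8*x[1]**2 + np.log(3+0.6*x[0]**2) + np.log(1.5+1.2*x[1]**2),
    lambda x: 2.5*(x[0]-1.5)**2 + 0.4*(x[1]-1.6)**2 + np.log(4+0.8*x[0]**2) + np.log(2.5+2*x[1]**2),
]

def num_grad(fi, x, h=1e-6):
    """Central-difference approximation of the gradient of a local cost."""
    g = np.zeros(len(x))
    for k in range(len(x)):
        e = np.zeros(len(x)); e[k] = h
        g[k] = (fi(x + e) - fi(x - e)) / (2 * h)
    return g

grads = [lambda x, fi=fi: num_grad(fi, np.asarray(x, dtype=float)) for fi in f]

# Sanity check: centralized gradient descent on f_1 + ... + f_4 only.
x = np.zeros(2)
for _ in range(5000):
    x = x - 0.05 * sum(g(x) for g in grads)
print("minimizer of f_1 + ... + f_4:", x)
```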
First, the optimal control output $y^*=[1.916,\,1.603]^T$ of the optimization problem is determined with standard gradient-based solvers so that the effectiveness of Equation (5) can be verified against it. The variables $x_i(0)$, $y_i(0)$, $r_i(0)$, and $\upsilon_i(0)$ are randomly initialized, with the parameter settings $\theta_1=0.5$, $\theta_2=1$, $c_i=2$, $\alpha_i(0)=1$, $\psi_i^T(x_i)=x_i$, $\bar{\omega}_i=1.5$, and $\Lambda_i=1$, and a step size of 0.001. Figure 2 and Figure 3 demonstrate that both components of the outputs of all six agents converge asymptotically to 1.916 and 1.603, respectively. The evolution of the global cost function $f(x)=\sum_{i=1}^{6}f_i(x_i)$ is shown in Figure 4, confirming convergence to the optimal value $f(x^*)=46.04$, where $x^*=y^*$ denotes the exact optimal solution of Problem (2). Figure 5 presents the adaptive coupling gains $\alpha_i(t)$, which converge to positive constants. Therefore, the obtained results validate the effectiveness of the proposed algorithm in solving Problem (2) over unbalanced digraphs.
Meanwhile, the trajectories of the estimated parameters ω ˜ i ( t ) for each agent over time are depicted in Figure 6, illustrating that ω ˜ i ( t ) eventually converges to the true unknown parameter ω ¯ i for i = 1 ,   2 ,   ,   6 . Consequently, the proposed algorithm is capable of addressing DOPs for control systems with unknown parameters.
In contrast to its guaranteed convergence to the optimal solution over weight-balanced digraphs, Algorithm (4) proposed in [20] fails to achieve convergence under the same unbalanced directed network, as evidenced by the trajectories depicted in Figure 7.

5. Conclusions

In this paper, an adaptive continuous-time algorithm has been proposed that solves the DOPs of multi-agent systems with unknown parameters over unbalanced directed communication networks. The proposed approach is fully distributed, and each agent exchanges only auxiliary-variable information with neighboring agents rather than gradient information. By utilizing Lyapunov stability theory, the developed approach has been proved to be globally convergent. Additionally, a simulation example has been provided to demonstrate the effectiveness and superiority of our scheme. In future work, we will further study DOPs with local constraints and unknown system parameters over unbalanced digraphs.

Author Contributions

Conceptualization, Q.Y. and C.J.; methodology, C.J.; software, C.J.; validation, Q.Y.; formal analysis, Q.Y.; investigation, C.J.; resources, Q.Y.; data curation, C.J. and Q.Y.; writing—original draft, C.J.; writing—review and editing, C.J. and Q.Y.; visualization, C.J.; supervision, Q.Y.; project administration, Q.Y. and C.J.; funding acquisition, Q.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the project of the Science and Technology Research Program of Chongqing Municipal Education Commission, grant number KJQN202100744.

Data Availability Statement

The data generated during this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DOPs: Distributed optimization problems
ISS: Input-to-state stability

References

  1. Cherukuri, A.; Cortes, J. Initialization-free distributed coordination for economic dispatch under varying loads and generator commitment. Automatica 2016, 74, 183–193. [Google Scholar] [CrossRef]
  2. Li, M.; Andersen, D.G.; Smola, A.; Yu, K. Communication efficient distributed machine learning with the parameter server. Adv. Neural Inf. Process. Syst. 2014, 27, 19–27. [Google Scholar]
  3. Yi, X.; Zhang, S.; Yang, T.; Chai, T.; Johansson, K.H. A primal-dual SGD algorithm for distributed nonconvex optimization. IEEE/CAA J. Autom. Sin. 2022, 9, 812–833. [Google Scholar] [CrossRef]
  4. Beck, A.; Nedic, A.; Ozdaglar, A.; Teboulle, M. Optimal distributed gradient methods for network resource allocation problems. IEEE Trans. Control Netw. Syst. 2014, 1, 64–74. [Google Scholar] [CrossRef]
  5. Doan, T.T.; Beck, C.L. Distributed resource allocation over dynamic networks with uncertainty. IEEE Trans. Autom. Control 2020, 66, 4378–4384. [Google Scholar] [CrossRef]
  6. Carnevale, G.; Camisa, A.; Notarstefano, G. Distributed online aggregative optimization for dynamic multirobot coordination. IEEE Trans. Autom. Control 2022, 68, 3736–3743. [Google Scholar] [CrossRef]
  7. Ning, B.; Han, Q.L.; Zuo, Z. Distributed optimization for multiagent systems: An edge-based fixed-time consensus approach. IEEE Trans. Cybern. 2017, 49, 122–132. [Google Scholar] [CrossRef]
  8. Olfati-Saber, R.; Fax, J.A.; Murray, R.M. Consensus and cooperation in networked multi-agent systems. Proc. IEEE 2007, 95, 215–233. [Google Scholar] [CrossRef]
  9. Nedic, A.; Ozdaglar, A. Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 2009, 54, 48–61. [Google Scholar] [CrossRef]
  10. Ho, Y.; Servi, L.; Suri, R. A class of center-free resource allocation algorithms. IFAC Proc. Vol. 1980, 13, 475–482. [Google Scholar] [CrossRef]
  11. Lü, Q.; Liao, X.; Li, H.; Huang, T. Achieving acceleration for distributed economic dispatch in smart grids over directed networks. IEEE Trans. Netw. Sci. Eng. 2020, 7, 1988–1999. [Google Scholar] [CrossRef]
  12. Wang, Y.; Zhang, H.; Li, Z. Distributed Optimization Control for the System with Second-Order Dynamic. Mathematics 2024, 12, 3347. [Google Scholar] [CrossRef]
  13. Shi, Y.; Ran, L.; Tang, J.; Wu, X. Distributed optimization algorithm for composite optimization problems with non-smooth function. Mathematics 2022, 10, 3135. [Google Scholar] [CrossRef]
  14. Zhou, H.; Zeng, X.; Hong, Y. Adaptive exact penalty design for constrained distributed optimization. IEEE Trans. Autom. Control 2019, 64, 4661–4667. [Google Scholar] [CrossRef]
  15. Li, W.; Zeng, X.; Liang, S.; Hong, Y. Exponentially convergent algorithm design for constrained distributed optimization via nonsmooth approach. IEEE Trans. Autom. Control 2021, 67, 934–940. [Google Scholar] [CrossRef]
  16. Guo, G.; Zhang, R.; Zhou, Z.D. A local-minimization-free zero-gradient-sum algorithm for distributed optimization. Automatica 2023, 157, 111247. [Google Scholar] [CrossRef]
  17. Guo, G.; Zhou, Z.D.; Zhang, R. Distributed fixed-time optimization with time-varying cost: Zero-gradient-sum scheme. IEEE Trans. Circuits Syst. II Express Briefs 2024, 71, 3086–3090. [Google Scholar] [CrossRef]
  18. Ji, L.; Yu, L.; Zhang, C.; Guo, X.; Li, H. Initialization-free distributed prescribed-time consensus based algorithm for economic dispatch problem over directed network. Neurocomputing 2023, 533, 1–9. [Google Scholar] [CrossRef]
  19. Lian, M.; Guo, Z.; Wen, S.; Huang, T. Distributed predefined-time algorithm for system of linear equations over directed networks. IEEE Trans. Circuits Syst. II Express Briefs 2023, 71, 2139–2143. [Google Scholar] [CrossRef]
  20. Su, P.; Wang, T.; Yu, J.; Dong, X.; Li, Q.; Ren, Z.; Tan, Q.; Lv, R.; Liang, Z. Continuous-Time Algorithms for Distributed Optimization Problem on Directed Digraphs. In Proceedings of the 2024 43rd Chinese Control Conference (CCC), Kunming, China, 28–31 July 2024; IEEE: New York, NY, USA, 2024; pp. 5772–5776. [Google Scholar] [CrossRef]
  21. Wang, J.; Liu, D.; Feng, J.; Zhao, Y. Distributed Optimization Control for Heterogeneous Multiagent Systems under Directed Topologies. Mathematics 2023, 11, 1479. [Google Scholar] [CrossRef]
  22. Zhu, Y.; Yu, W.; Wen, G.; Ren, W. Continuous-time coordination algorithm for distributed convex optimization over weight-unbalanced directed networks. IEEE Trans. Circuits Syst. II Express Briefs 2018, 66, 1202–1206. [Google Scholar] [CrossRef]
  23. Lian, M.; Guo, Z.; Wen, S.; Huang, T. Distributed adaptive algorithm for resource allocation problem over weight-unbalanced graphs. IEEE Trans. Netw. Sci. Eng. 2023, 11, 416–426. [Google Scholar] [CrossRef]
  24. Dhullipalla, M.H.; Chen, T. A Continuous-Time Gradient-Tracking Algorithm for Directed Networks. IEEE Control Syst. Lett. 2024, 8, 2199–2204. [Google Scholar] [CrossRef]
  25. Zhang, J.; Liu, L.; Wang, X.; Ji, H. Fully distributed algorithm for resource allocation over unbalanced directed networks without global lipschitz condition. IEEE Trans. Autom. Control 2022, 68, 5119–5126. [Google Scholar] [CrossRef]
  26. Zhang, J.; Hao, Y.; Liu, L.; Wang, X.; Ji, H. Fully Distributed Continuous-Time Algorithm for Nonconvex Optimization over Unbalanced Digraphs. In Proceedings of the 2023 9th International Conference on Control, Decision and Information Technologies (CoDIT), Rome, Italy, 3–6 July 2023; IEEE: New York, NY, USA, 2023; pp. 1–6. [Google Scholar] [CrossRef]
  27. Dai, H.; Jia, J.; Yan, L.; Fang, X.; Chen, W. Distributed fixed-time optimization in economic dispatch over directed networks. IEEE Trans. Ind. Inform. 2020, 17, 3011–3019. [Google Scholar] [CrossRef]
  28. Li, Z.; Duan, Z. Cooperative Control of Multi-Agent Systems: A Consensus Region Approach; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  29. Zhou, S.; Wei, Y.; Cao, J.; Liu, Y. Multi/Single-Stage Sliding Manifold Approaches for Prescribed-Time Distributed Optimization. IEEE Trans. Autom. Control 2025, 70, 2794–2801. [Google Scholar] [CrossRef]
  30. Li, H.; Zhang, M.; Yin, Z.; Zhao, Q.; Xi, J.; Zheng, Y. Prescribed-time distributed optimization problem with constraints. ISA Trans. 2024, 148, 255–263. [Google Scholar] [CrossRef]
  31. Li, Z.; Ding, Z.; Sun, J.; Li, Z. Distributed adaptive convex optimization on directed graphs via continuous-time algorithms. IEEE Trans. Autom. Control 2017, 63, 1434–1441. [Google Scholar] [CrossRef]
  32. Khalil, H.K. Nonlinear Systems, 3rd ed.; Prentice Hall Inc.: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
Figure 1. An unbalanced digraph of six agents, identified by numbers 1 to 6.
Figure 2. The trajectories of the system control outputs $y_{ij}$ over time, where $y_{ij}$ represents the $j$-th component of the control output $y_i$, with $i\in\{1,2,\ldots,6\}$ and $j\in\{1,2\}$.
Figure 3. The trajectory of the output error $\sum_{i=1}^{6}\|y_i(t)-y^*\|^2$ over time.
Figure 4. The trajectory of the global cost function $f(x)=\sum_{i=1}^{6}f_i(x)$ over time, with $f(x^*)$ denoting the optimal value of this function.
Figure 5. The trajectories of the adaptive coupling gains α i ( t ) over time.
Figure 6. The trajectories of the unknown parameter estimation variables ω ˜ i ( t ) over time.
Figure 7. The trajectories of the state variables over time in Algorithm (4) of [20] under the same unbalanced directed communication network, where y 1 * and y 2 * denote the first and second components of y * , respectively.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


