Abstract
This paper investigates distributed optimization problems for multi-agent systems with parametric uncertainties over unbalanced directed communication networks. To solve this class of optimization problems, a continuous-time algorithm is proposed by integrating adaptive control techniques with an output feedback tracking protocol. By systematically employing Lyapunov stability theory, perturbed system analysis, and input-to-state stability theory, we rigorously establish the asymptotic convergence of the proposed algorithm. A numerical simulation further demonstrates the effectiveness of the algorithm in computing the global optimal solution.
Keywords:
distributed optimization; continuous-time algorithm; unbalanced directed networks; adaptive control; parametric uncertainties
MSC:
93D50
1. Introduction
Distributed optimization in multi-agent systems has attracted substantial interest in recent years due to its broad applicability in diverse domains such as smart grids [], distributed machine learning [,], resource allocation [,], and cooperative robotics []. Specifically, distributed optimization can be utilized to achieve state-of-charge balancing control at the battery-pack level in grid-connected battery energy storage systems, thereby enhancing overall system efficiency and effectively preventing potential damage caused by overcharging or discharging during operation []. Compared to centralized approaches, distributed optimization offers notable advantages in terms of preserving data privacy, computational scalability, and robustness against node or link failures [,].
Research on distributed optimization problems (DOPs) can generally be categorized into discrete-time and continuous-time frameworks. Following the decentralized optimization paradigm introduced in [], a wide range of discrete-time algorithms have been developed [,,]. In parallel, continuous-time distributed algorithms have also gained considerable attention. For scenarios where the communication network is undirected [,,,], several methods have been proposed. To address complex setups that involve both consensus constraints and nonsmooth cost functions, the authors of [] proposed an adaptive method based on a distance penalty function, while the authors of [] developed a subgradient algorithm exhibiting exponential convergence. Furthermore, zero-gradient-sum algorithms that are initialization-free and based on sliding mode control were proposed in [,] to address DOPs with consensus constraints.
Nevertheless, algorithms designed under the assumption of undirected graphs often encounter difficulties in accommodating more general network topologies. To overcome these limitations, recent research has explored distributed methods applicable to weight-balanced digraphs [,,,]. The authors of [] proposed a primal–dual consensus algorithm that removes the dependency on initial states. For solving systems of linear equations in a distributed setting, a predefined-time consensus protocol was introduced in []. Moreover, to accommodate parametric uncertainties in multi-agent systems, an adaptive control strategy was developed in []. In the presence of external disturbances, the authors of [] introduced a distributed algorithm that ensures finite-time state convergence for all agents.
Despite these advances, the above-mentioned algorithms are not directly applicable under unbalanced digraphs, which frequently arise in practice. To address this gap, several continuous-time algorithms have been proposed specifically for such networks [,,]. For example, the authors of [] introduced an auxiliary variable to counteract the imbalance effects inherent in directed topologies. In addressing resource allocation problems with nondifferentiable cost functions, the authors of [] proposed an adaptive protocol that estimates a positive right eigenvector of the out-Laplacian matrix to facilitate convergence to the optimal solution. Furthermore, a gradient tracking method specifically designed for consensus-constrained problems was proposed in []; however, its applicability is restricted to scenarios where the Laplacian matrix is asymmetric, rendering it unsuitable for undirected graphs.
Leveraging insights from existing methods, this paper investigates the design of distributed consensus algorithms for multi-agent systems with parametric uncertainties. In practical implementations, such systems are likely to operate over unbalanced directed networks due to inherent limitations in communication links or nodes. To this end, we develop a continuous-time algorithm capable of operating over unbalanced digraphs, with particular emphasis on handling consensus constraints and parametric uncertainties. The primary contributions of this study are summarized as follows:
- A novel distributed continuous-time optimization algorithm is developed for multi-agent systems over unbalanced digraphs. Unlike existing methods such as [,,], which are restricted to undirected or weight-balanced graphs, the proposed adaptive algorithm is applicable to more general directed graphs.
- This protocol employs a virtual vector to construct a reference signal, enabling the vector to approach the optimal solution of the considered problem. Moreover, the proposed approach can effectively handle parametric uncertainties, which are not considered in some related works such as [,,].
- The asymptotic convergence of the proposed algorithm is rigorously established through the integration of Lyapunov stability theory and input-to-state stability (ISS) analysis. Additionally, the method improves agent-level privacy by eliminating the need to access the cost function (sub)gradients of neighboring agents, in contrast to existing approaches such as [,].
The remainder of this paper is organized as follows. The second section introduces mathematical preliminaries, graph theory fundamentals, and the problem formulation. The third section presents the main results, including the algorithm design and convergence analysis. The fourth section validates the efficacy of the algorithm through a numerical simulation, and the last section concludes this work.
2. Preliminaries and Problem Formulation
2.1. Notations
Let $\mathbb{R}$ and $\mathbb{R}_{+}$ denote the sets of real numbers and positive real numbers, respectively. The symbols $\mathbb{R}^{n}$ and $\mathbb{R}^{n \times m}$ represent the spaces of $n$-dimensional real vectors and $n \times m$ real matrices, respectively. The identity matrix of size $n$ is denoted by $I_{n}$. The vectors $\mathbf{1}_{n}$ and $\mathbf{0}_{n}$ denote $n$-dimensional column vectors whose entries are all ones and all zeros, respectively. The Kronecker product is represented by ⊗. $\mathrm{diag}\{a_{1}, \dots, a_{n}\}$ denotes a diagonal matrix with scalar diagonal entries $a_{1}, \dots, a_{n}$, while $\mathrm{blkdiag}\{A_{1}, \dots, A_{n}\}$ denotes a block diagonal matrix composed of the matrices $A_{1}, \dots, A_{n}$ on its diagonal and zero matrices elsewhere. For any vector $x \in \mathbb{R}^{n}$, the Euclidean norm is denoted by $\|x\|$.
Consider a differentiable function $f: \mathbb{R}^{n} \to \mathbb{R}$. Its gradient is denoted by $\nabla f$. The function $f$ is said to be strongly convex on a convex set $\Omega \subseteq \mathbb{R}^{n}$ if there exists a constant $\mu > 0$ such that $(\nabla f(x) - \nabla f(y))^{\top}(x - y) \geq \mu \|x - y\|^{2}$ for all $x, y \in \Omega$ with $x \neq y$. In addition, if the gradient of a function $f$ is globally $l$-Lipschitz-continuous on $\mathbb{R}^{n}$, then there exists a constant $l > 0$ such that $\|\nabla f(x) - \nabla f(y)\| \leq l \|x - y\|$ for all $x, y \in \mathbb{R}^{n}$.
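For a concrete example (a hypothetical quadratic cost, not one used later in the paper), both constants can be read off the eigenvalues of the Hessian, and the two inequalities above can be checked numerically:

```python
import numpy as np

# Illustrative quadratic cost f(x) = 0.5 * x^T Q x + b^T x  (hypothetical data).
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])        # symmetric positive definite Hessian
b = np.array([1.0, -2.0])

grad = lambda x: Q @ x + b        # gradient of f

eigs = np.linalg.eigvalsh(Q)
mu, lip = eigs.min(), eigs.max()  # strong-convexity modulus and Lipschitz constant

# Numerically verify the two inequalities on random pairs of points.
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    gdiff, xy = grad(x) - grad(y), x - y
    assert gdiff @ xy >= mu * (xy @ xy) - 1e-9                        # strong convexity
    assert np.linalg.norm(gdiff) <= lip * np.linalg.norm(xy) + 1e-9   # Lipschitz gradient
print(f"mu = {mu:.3f}, l = {lip:.3f}")
```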
2.2. Graph Theory
The interaction topology of a multi-agent system is modeled by a directed graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{A})$, where $\mathcal{V} = \{1, 2, \dots, N\}$ denotes the set of vertices, $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the set of directed edges, and $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$ represents the adjacency matrix corresponding to the communication topology. A directed edge $(j, i) \in \mathcal{E}$ indicates that agent $i$ can access the state information of agent $j$. A directed path from vertex $i$ to vertex $j$ is a sequence of directed edges of the form $(i, i_{1}), (i_{1}, i_{2}), \dots, (i_{k}, j)$. The digraph $\mathcal{G}$ is said to be strongly connected if there exists a directed path between every pair of distinct nodes. The index set of all agents is denoted by $\mathcal{N} = \{1, 2, \dots, N\}$. Furthermore, all entries of $\mathcal{A}$ are non-negative, and $a_{ij} > 0$ if and only if $(j, i) \in \mathcal{E}$. The graph Laplacian matrix $L = [l_{ij}] \in \mathbb{R}^{N \times N}$ is defined by $l_{ii} = \sum_{j \neq i} a_{ij}$ and $l_{ij} = -a_{ij}$ for $i \neq j$. The graph is said to be weight-balanced if $\sum_{j=1}^{N} a_{ij} = \sum_{j=1}^{N} a_{ji}$ for all $i \in \mathcal{N}$; otherwise, it is unbalanced.
Lemma 1
([]). Let $L$ be the Laplacian matrix of a strongly connected digraph $\mathcal{G}$. Then, the following properties hold:
- 1.
- $L$ has a simple zero eigenvalue with the associated right eigenvector $\mathbf{1}_{N}$, and all other eigenvalues have positive real parts.
- 2.
- Let $\xi = [\xi_{1}, \dots, \xi_{N}]^{\top}$ with $\xi_{i} > 0$ for $i \in \mathcal{N}$ denote the left eigenvector of $L$ corresponding to the zero eigenvalue. Define $\Xi = \mathrm{diag}\{\xi_{1}, \dots, \xi_{N}\}$. Then the matrix $\hat{L} = \Xi L + L^{\top} \Xi$ satisfies
$$\min_{x \neq 0,\ \vartheta^{\top} x = 0} \frac{x^{\top} \hat{L} x}{x^{\top} x} > \frac{\lambda_{2}(\hat{L})}{N},$$
where ϑ is any vector with positive entries and $\lambda_{2}(\hat{L})$ represents the second smallest eigenvalue of $\hat{L}$. Moreover, $\xi$ is a positive multiple of $\mathbf{1}_{N}$ if and only if $\mathcal{G}$ is weight-balanced.
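To make Lemma 1 concrete, the following minimal Python sketch builds the Laplacian of a hypothetical strongly connected, unbalanced four-node digraph, extracts the left eigenvector associated with the zero eigenvalue, and forms the symmetric matrix appearing in property 2 (the graph is illustrative only and is not the one used in Section 4):

```python
import numpy as np

# Adjacency matrix of a hypothetical strongly connected, unbalanced 4-node digraph
# (a_ij > 0 means agent i receives information from agent j; illustrative only).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A            # graph Laplacian with the row-sum convention

eigvals, left_vecs = np.linalg.eig(L.T)   # left eigenvectors of L are eigenvectors of L^T
k = np.argmin(np.abs(eigvals))            # index of the (simple) zero eigenvalue
xi = np.real(left_vecs[:, k])
xi = xi / xi.sum()                        # normalize; entries are positive for strongly connected digraphs

print("eigenvalues of L:", np.round(eigvals, 3))   # one zero, the rest with positive real parts
print("left eigenvector xi:", np.round(xi, 3))
print("weight-balanced?", np.allclose(A.sum(axis=1), A.sum(axis=0)))

Xi = np.diag(xi)
L_hat = Xi @ L + L.T @ Xi                 # symmetric matrix appearing in property 2 of Lemma 1
print("eigenvalues of Xi*L + L^T*Xi:", np.round(np.linalg.eigvalsh(L_hat), 3))
```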
2.3. Problem Formulation
Consider a multi-agent system comprising N agents. The continuous-time dynamics can be described by
where is the state associated with agent i, is the control input, and is the control output. The vector contains unknown system parameters, and denotes a matrix-valued regressor, which can be constructed using the output and the known system dynamics.
The objective of this study is to formulate a distributed continuous-time optimization algorithm such that each agent, operating under an unbalanced directed communication topology, can drive its control output to track the solution that optimally solves the following problem:
where $f_{i}$ is a private objective function known only to agent $i$, and $f = \sum_{i=1}^{N} f_{i}$ represents the global objective function. The optimization problem above can be equivalently reformulated as
Assumption 1.
Each individual cost function $f_{i}$ is differentiable and strongly convex, with its gradient being $l_{i}$-Lipschitz-continuous.
Assumption 2.
The unbalanced directed communication graph of agents is strongly connected.
Remark 1.
The distributed problem formulated in this paper is of practical significance and has been previously investigated in works such as [,,]. Moreover, Assumptions 1 and 2 are commonly adopted in distributed optimization [,,]. Under Assumption 1, Problem (3) is guaranteed to admit a unique globally optimal solution.
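As a brief illustration of this optimality property (using hypothetical quadratic local costs and a plain centralized solver, not the algorithm proposed in this paper), the unique global minimizer is the point at which the local gradients sum to zero:

```python
import numpy as np

# Hypothetical strongly convex quadratic local costs f_i(y) = 0.5*(y - c_i)^T Q_i (y - c_i).
rng = np.random.default_rng(1)
N, n = 6, 2
Qs = [np.diag(rng.uniform(1.0, 3.0, n)) for _ in range(N)]   # positive definite Hessians
cs = [rng.uniform(-2.0, 2.0, n) for _ in range(N)]

grad_sum = lambda y: sum(Q @ (y - c) for Q, c in zip(Qs, cs))  # gradient of the global cost

# The optimality condition sum_i grad f_i(y*) = 0 reduces to a linear system for quadratics.
y_star = np.linalg.solve(sum(Qs), sum(Q @ c for Q, c in zip(Qs, cs)))

# Cross-check with plain (centralized) gradient descent on the global cost.
y = np.zeros(n)
for _ in range(2000):
    y -= 0.05 * grad_sum(y)
print("closed form :", np.round(y_star, 4))
print("grad descent:", np.round(y, 4))
```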
Define $y^{\star}$ as the optimal output of the dynamical system (1). Then the objective of this paper is to develop a control protocol that ensures each agent’s control output asymptotically approaches $y^{\star}$; in other words, $\lim_{t \to \infty} \| y_{i}(t) - y^{\star} \| = 0$ for all $i \in \mathcal{N}$.
3. Main Results
3.1. Algorithm Design
In this section, we propose a distributed continuous-time optimization algorithm with adaptive coupling gains for unbalanced directed communication networks. The algorithm is designed as follows:
where and . Here, , , , and are auxiliary variables, with representing the k-th entry of . The initial condition of satisfies and for . The parameters , , and are positive constants, and is positive definite. The adaptive coupling gain is initialized with a positive value, i.e., . In Equation (5), each agent i transmits only its local variables , , and to its neighbors, without sharing (sub)gradient information as is required in [,], thereby enhancing agent-level privacy.
Remark 2.
Equation (5) introduces the virtual vector to generate a reference signal that converges to the minimizer of Problem (3). The variable is introduced to estimate the unknown parameter vector, with the estimation process regulated by the matrix , while the adaptive coupling gains and are designed to drive the agents toward the minimizer of Problem (3). The auxiliary variable is introduced to ensure that eventually reaches consensus, while the parameters , , and are incorporated to facilitate the subsequent rigorous convergence analysis. Additionally, the variable is introduced to facilitate the estimation of the left eigenvector of L associated with the zero eigenvalue, without requiring prior knowledge of this information, as is needed in [].
Remark 3.
Rather than being predetermined, the coupling gains and are adaptively tuned during the algorithm’s execution. Their updates depend exclusively on the local error arising from the discrepancy between virtual vectors of neighboring agents. In particular, asymptotically converges to a finite constant as the system outputs achieve consensus, which will be established in the subsequent convergence analysis.
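To illustrate the adaptive-gain mechanism described in Remarks 2 and 3, the following Euler-discretized Python sketch runs a generic adaptive consensus flow in which each coupling gain grows with the local disagreement between neighboring virtual vectors. It is only a schematic of the gain-adaptation idea (the digraph, gains, step size, and update rules are hypothetical) and not an implementation of Equation (5):

```python
import numpy as np

# Euler-discretized sketch of a generic adaptive-gain consensus flow: each gain
# kappa_i grows with the local disagreement between agent i's virtual vector z_i
# and those of its in-neighbors, so no global coupling gain must be chosen a priori.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)   # hypothetical unbalanced digraph (a_ij: i receives from j)
N, n, dt, T = A.shape[0], 2, 1e-3, 20000

rng = np.random.default_rng(2)
z = rng.standard_normal((N, n))             # virtual vectors (arbitrary initialization)
kappa = np.ones(N)                          # adaptive coupling gains, kappa_i(0) > 0

for _ in range(T):
    disagreement = np.array([sum(A[i, j] * (z[i] - z[j]) for j in range(N)) for i in range(N)])
    local_error = np.array([sum(A[i, j] * np.linalg.norm(z[i] - z[j]) ** 2 for j in range(N)) for i in range(N)])
    kappa += dt * local_error               # gains driven only by local neighbor errors
    z += dt * (-kappa[:, None] * disagreement)

print("gains:", np.round(kappa, 3))         # settle to finite constants once consensus is reached
print("z spread:", np.round(np.ptp(z, axis=0), 6))
```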
The control Equation (5) admits a compact representation as
where , , , , , , , , , , and .
Lemma 2.
Under Assumptions 1 and 2, define variables and . If 0, and satisfy
where , , with being the left eigenvector of L corresponding to its zero eigenvalue, then , where is the optimal solution to Problem (3).
Proof of Lemma 2.
Since , it follows from (6) that the point satisfying (7) constitutes an equilibrium of the dynamical system described by Equation (6). From (7b), we obtain , where . Left-multiplying both sides of (7a) by gives . Under Assumption 1, the optimality condition holds. Therefore, it follows that . □
Remark 4.
Lemma 2 establishes the optimality condition for Equation (5), deriving the necessary equations for the optimal solution , which forms the basis for the subsequent convergence analysis.
3.2. Convergence Analysis
In this section, we prove the convergence property of Equation (5) by employing Lemma 2 in conjunction with global uniform asymptotic stability and ISS theory.
Based on Lemma 2, to prove , we must show the convergence of to . Through coordinate transformations, we define new variables , , , and . Taking the difference of (6) and (7), the transformed dynamics of derived from (6) are given by the following differential equations:
where and .
Based on Lemma 2 and (8), we can conclude that to prove the convergence of variable in (5) to the optimal point of Problem (3), we need to show that , , and converge to as time tends to infinity. Since , the origin is not an equilibrium point of (8). For subsequent convergence analysis, we introduce additional coordinate transformations for and , defining new variables and . Therefore, we first need to prove , , , and and then prove and , where are two constant vectors. Finally, by proving and , we ultimately prove .
Let represent the column vector consisting of elements from the -th to the -th of the vector . Employing , we obtain , and the dynamics of can thus be rewritten as
For the subsequent convergence analysis, we define . Then the dynamics of w can be written as
where represents the unperturbed system.
Remark 5.
Under Assumption 1, it can be verified that is locally Lipschitz-continuous for , where is any compact set. Furthermore, maintains boundedness for all . According to Theorem 4.19 in [], to verify the asymptotic convergence of system (10) to the origin when the perturbation term converges to 0 as time tends to infinity, we must first demonstrate the global uniform asymptotic stability of the unperturbed system and the ISS of system (10).
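For reference, the ISS estimate invoked here takes the standard form found in Khalil’s Nonlinear Systems; it is stated below with generic symbols $w$ for the state and $g$ for the perturbation, since the paper’s own notation for the perturbed system is not reproduced above:

```latex
% Standard ISS estimate for a perturbed system \dot{w} = F(w) + g(t):
% there exist a class-KL function \beta and a class-K function \gamma such that
\| w(t) \| \;\le\; \beta\!\left( \| w(t_0) \|,\, t - t_0 \right)
           \;+\; \gamma\!\left( \sup_{t_0 \le \tau \le t} \| g(\tau) \| \right),
\qquad \forall\, t \ge t_0 .
% Consequently, if g(t) \to 0 as t \to \infty, then w(t) \to 0 as well.
```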
Theorem 1.
Suppose Assumptions 1 and 2 hold, and the initial adaptive coupling gains satisfy for all . Then the proposed initialization-free continuous-time optimization algorithm described by Equation (5) can ensure that the control output tracks the optimal trajectory through the appropriate selection of parameters , , and .
Proof of Theorem 1.
The proof consists of three main steps:
- Prove the global uniform asymptotic stability of the unperturbed system ;
- Establish the ISS property of the system (10);
- Demonstrate the convergence of variables and in (8) to and , respectively, as .
The detailed proof proceeds as follows.
Step 1. Consider 0 as the equilibrium point of the unperturbed system . Accordingly, the following Lyapunov function candidate is considered:
where , with denoting the maximum eigenvalue of and representing the second smallest eigenvalue of the matrix (abbreviated as and for simplicity). Moreover, the components of the Lyapunov function are defined as
where and are positive constants, , , and , , and are positive constants satisfying the positive definiteness condition .
The derivative of along the trajectories of the unperturbed system is given by
Define . Since both and are positive diagonal matrices, there exists a positive vector such that . Therefore, according to Lemma 1, we obtain
Since for , , and defining , we can employ Young’s inequality to obtain
Substituting (13), (14), and (15) into (12), we can further derive
The derivative of along the trajectories of the unperturbed system is given by
To derive the above inequality, we employ Lemma 1 in conjunction with Young’s inequality; specifically,
Next, let , and then the derivative of along the trajectories of the unperturbed system satisfies
where .
According to Assumption 1, we have , where . Let denote the second smallest eigenvalue of , abbreviated as . Then we obtain . Therefore, by applying Young’s inequality, we can further derive
where . Substituting (19) into (18) yields
where , , and . By choosing , we get
Subsequently, we compute the derivative of along the trajectories of the unperturbed system and apply Young’s inequality to obtain
where . Further processing yields
where and . Substituting (19) and (23) into (22), we can obtain
where , with .
The derivative of along the trajectories of the unperturbed system is
where , with . Applying Young’s inequality and (19), we derive
Substituting (26) back into (25) yields
where , , , , , and . Appropriately choosing and , we can ensure .
Combining (21), (24), and (27), the derivative of V along the trajectories of the unperturbed system is obtained as
where . Since is positive definite, by appropriately selecting and ensuring that , together with choosing sufficiently large, it is guaranteed that is negative definite. This shows that the origin is a globally uniformly asymptotically stable equilibrium point of the unperturbed system , and the adaptive control gain converges to a positive constant.
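For completeness, the Young-type inequality applied repeatedly in Step 1 has the generic form below (stated with arbitrary vectors $a, b$ and a constant $\epsilon > 0$, since the specific instances are not reproduced here):

```latex
2\, a^{\top} b \;\le\; \epsilon \, \|a\|^{2} + \frac{1}{\epsilon} \, \|b\|^{2},
\qquad \forall\, a, b \in \mathbb{R}^{n},\ \epsilon > 0 .
```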
Step 2. Prove the ISS of system (10) and the convergence of to zero as .
From Step 1, the derivative of V with respect to time along trajectories of system (10) satisfies
where . Defining the input disturbance , we can obtain
Define and . By selecting and ensuring , one obtains
For any , when , it holds that
According to Theorem 4.19 in [], system (10) is input-to-state stable. Furthermore, we can show that , which implies . By the ISS property, the state of system (10) therefore converges to zero as time tends to infinity; i.e., for the perturbed system , we have , , , and .
Step 3. Prove and with .
From the established results and , we obtain and , where . Taking limits on both sides of (8b) yields
Thereby we obtain
Left-multiplying both sides of (36) by gives . Under Assumption 1, the optimal solution to Problem (3) is unique, which implies . Meanwhile, from (8c), it is straightforward to verify that . Thus, we have proved and , thereby establishing the convergence of to . In conclusion, the states in Equation (5) converge to the optimal solution of Problem (2). □
Remark 6.
Remark 7.
While the introduction of adaptive gains and inevitably increases the computational burden to some extent, it avoids the need for computing the inverse of Hessian matrices, as required in [,,], thereby maintaining a relatively low overall computational complexity. In addition, the adaptive gains allow for dynamic adjustment of the control law, which helps improve the convergence performance of the system. To further reduce communication overhead, especially in bandwidth-constrained scenarios, our future work will consider integrating event-triggered communication mechanisms into the proposed framework.
4. Numerical Simulation
This section verifies the effectiveness of Equation (5) through a numerical simulation. The simulation considers a strongly connected but unbalanced directed communication network composed of six agents, where each directed edge has a communication weight of 1. The topology is shown in Figure 1, and each agent $i \in \{1, \dots, 6\}$ is assigned a private, strongly convex local objective function $f_{i}$.
Figure 1.
An unbalanced digraph of six agents, identified by numbers 1 to 6.
First, to provide a benchmark for verifying the effectiveness of Equation (5), the optimal control output of the optimization problem is computed in advance by a standard gradient-based solver. The variables , , , and are randomly initialized, with appropriately chosen parameter values and a step size of 0.001. Figure 2 and Figure 3 demonstrate that both components of the outputs of all six agents converge asymptotically to and , respectively. The evolution of the global cost function is shown in Figure 4, confirming convergence to the optimal value , where denotes the exact optimal solution of Problem (2). Figure 5 presents the adaptive coupling gains , which converge to positive constants. Therefore, the obtained results validate the effectiveness of the proposed algorithm in solving Problem (2) over unbalanced digraphs.
Figure 2.
The trajectories of the system control outputs over time, where represents the j-th component of the control output , with and .
Figure 3.
The trajectory of the error of system control outputs over time.
Figure 4.
The trajectory of the global cost function over time, with denoting the optimal value of this function.
Figure 5.
The trajectories of the adaptive coupling gains over time.
Meanwhile, the trajectories of the estimated parameters for each agent over time are depicted in Figure 6, illustrating that eventually converges to the true unknown parameter for . Consequently, the proposed algorithm is capable of addressing DOPs for control systems with unknown parameters.
Figure 6.
The trajectories of the unknown parameter estimation variables over time.
In contrast to its guaranteed convergence to the optimal solution over weight-balanced digraphs, Algorithm (4) proposed in [] fails to achieve convergence under the same unbalanced directed network, as evidenced by the trajectories depicted in Figure 7.
Figure 7.
The trajectories of the state variables over time in Algorithm (4) of [] under the same unbalanced directed communication network, where and denote the first and second components of , respectively.
5. Conclusions
In this paper, an adaptive continuous-time algorithm has been proposed to solve DOPs for multi-agent systems with unknown parameters over unbalanced directed communication networks. The proposed approach is fully distributed, and each agent exchanges only auxiliary-variable information with its neighboring agents rather than gradient information. By utilizing Lyapunov stability theory, the developed approach has been proved to be globally convergent. Additionally, a simulation example has been provided to demonstrate the effectiveness and superiority of our scheme. In future work, we will further study DOPs with local constraints and unknown system parameters over unbalanced digraphs.
Author Contributions
Conceptualization, Q.Y. and C.J.; methodology, C.J.; software, C.J.; validation, Q.Y.; formal analysis, Q.Y.; investigation, C.J.; resources, Q.Y.; data curation, C.J. and Q.Y.; writing—original draft, C.J.; writing—review and editing, C.J. and Q.Y.; visualization, C.J.; supervision, Q.Y.; project administration, Q.Y. and C.J.; funding acquisition, Q.Y. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the project of the Science and Technology Research Program of Chongqing Municipal Education Commission, grant number KJQN202100744.
Data Availability Statement
The data generated during this study are available from the corresponding author upon reasonable request.
Conflicts of Interest
The authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| DOPs | Distributed optimization problems |
| ISS | Input-to-state stability |
References
- Cherukuri, A.; Cortes, J. Initialization-free distributed coordination for economic dispatch under varying loads and generator commitment. Automatica 2016, 74, 183–193. [Google Scholar] [CrossRef]
- Li, M.; Andersen, D.G.; Smola, A.; Yu, K. Communication efficient distributed machine learning with the parameter server. Adv. Neural Inf. Process. Syst. 2014, 27, 19–27. [Google Scholar]
- Yi, X.; Zhang, S.; Yang, T.; Chai, T.; Johansson, K.H. A primal-dual SGD algorithm for distributed nonconvex optimization. IEEE/CAA J. Autom. Sin. 2022, 9, 812–833. [Google Scholar] [CrossRef]
- Beck, A.; Nedic, A.; Ozdaglar, A.; Teboulle, M. Optimal distributed gradient methods for network resource allocation problems. IEEE Trans. Control Netw. Syst. 2014, 1, 64–74. [Google Scholar] [CrossRef]
- Doan, T.T.; Beck, C.L. Distributed resource allocation over dynamic networks with uncertainty. IEEE Trans. Autom. Control 2020, 66, 4378–4384. [Google Scholar] [CrossRef]
- Carnevale, G.; Camisa, A.; Notarstefano, G. Distributed online aggregative optimization for dynamic multirobot coordination. IEEE Trans. Autom. Control 2022, 68, 3736–3743. [Google Scholar] [CrossRef]
- Ning, B.; Han, Q.L.; Zuo, Z. Distributed optimization for multiagent systems: An edge-based fixed-time consensus approach. IEEE Trans. Cybern. 2017, 49, 122–132. [Google Scholar] [CrossRef]
- Olfati-Saber, R.; Fax, J.A.; Murray, R.M. Consensus and cooperation in networked multi-agent systems. Proc. IEEE 2007, 95, 215–233. [Google Scholar] [CrossRef]
- Nedic, A.; Ozdaglar, A. Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 2009, 54, 48–61. [Google Scholar] [CrossRef]
- Ho, Y.; Servi, L.; Suri, R. A class of center-free resource allocation algorithms. IFAC Proc. Vol. 1980, 13, 475–482. [Google Scholar] [CrossRef]
- Lü, Q.; Liao, X.; Li, H.; Huang, T. Achieving acceleration for distributed economic dispatch in smart grids over directed networks. IEEE Trans. Netw. Sci. Eng. 2020, 7, 1988–1999. [Google Scholar] [CrossRef]
- Wang, Y.; Zhang, H.; Li, Z. Distributed Optimization Control for the System with Second-Order Dynamic. Mathematics 2024, 12, 3347. [Google Scholar] [CrossRef]
- Shi, Y.; Ran, L.; Tang, J.; Wu, X. Distributed optimization algorithm for composite optimization problems with non-smooth function. Mathematics 2022, 10, 3135. [Google Scholar] [CrossRef]
- Zhou, H.; Zeng, X.; Hong, Y. Adaptive exact penalty design for constrained distributed optimization. IEEE Trans. Autom. Control 2019, 64, 4661–4667. [Google Scholar] [CrossRef]
- Li, W.; Zeng, X.; Liang, S.; Hong, Y. Exponentially convergent algorithm design for constrained distributed optimization via nonsmooth approach. IEEE Trans. Autom. Control 2021, 67, 934–940. [Google Scholar] [CrossRef]
- Guo, G.; Zhang, R.; Zhou, Z.D. A local-minimization-free zero-gradient-sum algorithm for distributed optimization. Automatica 2023, 157, 111247. [Google Scholar] [CrossRef]
- Guo, G.; Zhou, Z.D.; Zhang, R. Distributed fixed-time optimization with time-varying cost: Zero-gradient-sum scheme. IEEE Trans. Circuits Syst. II Express Briefs 2024, 71, 3086–3090. [Google Scholar] [CrossRef]
- Ji, L.; Yu, L.; Zhang, C.; Guo, X.; Li, H. Initialization-free distributed prescribed-time consensus based algorithm for economic dispatch problem over directed network. Neurocomputing 2023, 533, 1–9. [Google Scholar] [CrossRef]
- Lian, M.; Guo, Z.; Wen, S.; Huang, T. Distributed predefined-time algorithm for system of linear equations over directed networks. IEEE Trans. Circuits Syst. II Express Briefs 2023, 71, 2139–2143. [Google Scholar] [CrossRef]
- Su, P.; Wang, T.; Yu, J.; Dong, X.; Li, Q.; Ren, Z.; Tan, Q.; Lv, R.; Liang, Z. Continuous-Time Algorithms for Distributed Optimization Problem on Directed Digraphs. In Proceedings of the 2024 43rd Chinese Control Conference (CCC), Kunming, China, 28–31 July 2024; IEEE: New York, NY, USA, 2024; pp. 5772–5776. [Google Scholar] [CrossRef]
- Wang, J.; Liu, D.; Feng, J.; Zhao, Y. Distributed Optimization Control for Heterogeneous Multiagent Systems under Directed Topologies. Mathematics 2023, 11, 1479. [Google Scholar] [CrossRef]
- Zhu, Y.; Yu, W.; Wen, G.; Ren, W. Continuous-time coordination algorithm for distributed convex optimization over weight-unbalanced directed networks. IEEE Trans. Circuits Syst. II Express Briefs 2018, 66, 1202–1206. [Google Scholar] [CrossRef]
- Lian, M.; Guo, Z.; Wen, S.; Huang, T. Distributed adaptive algorithm for resource allocation problem over weight-unbalanced graphs. IEEE Trans. Netw. Sci. Eng. 2023, 11, 416–426. [Google Scholar] [CrossRef]
- Dhullipalla, M.H.; Chen, T. A Continuous-Time Gradient-Tracking Algorithm for Directed Networks. IEEE Control Syst. Lett. 2024, 8, 2199–2204. [Google Scholar] [CrossRef]
- Zhang, J.; Liu, L.; Wang, X.; Ji, H. Fully distributed algorithm for resource allocation over unbalanced directed networks without global lipschitz condition. IEEE Trans. Autom. Control 2022, 68, 5119–5126. [Google Scholar] [CrossRef]
- Zhang, J.; Hao, Y.; Liu, L.; Wang, X.; Ji, H. Fully Distributed Continuous-Time Algorithm for Nonconvex Optimization over Unbalanced Digraphs. In Proceedings of the 2023 9th International Conference on Control, Decision and Information Technologies (CoDIT), Rome, Italy, 3–6 July 2023; IEEE: New York, NY, USA, 2023; pp. 1–6. [Google Scholar] [CrossRef]
- Dai, H.; Jia, J.; Yan, L.; Fang, X.; Chen, W. Distributed fixed-time optimization in economic dispatch over directed networks. IEEE Trans. Ind. Inform. 2020, 17, 3011–3019. [Google Scholar] [CrossRef]
- Li, Z.; Duan, Z. Cooperative Control of Multi-Agent Systems: A Consensus Region Approach; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
- Zhou, S.; Wei, Y.; Cao, J.; Liu, Y. Multi/Single-Stage Sliding Manifold Approaches for Prescribed-Time Distributed Optimization. IEEE Trans. Autom. Control 2025, 70, 2794–2801. [Google Scholar] [CrossRef]
- Li, H.; Zhang, M.; Yin, Z.; Zhao, Q.; Xi, J.; Zheng, Y. Prescribed-time distributed optimization problem with constraints. ISA Trans. 2024, 148, 255–263. [Google Scholar] [CrossRef]
- Li, Z.; Ding, Z.; Sun, J.; Li, Z. Distributed adaptive convex optimization on directed graphs via continuous-time algorithms. IEEE Trans. Autom. Control 2017, 63, 1434–1441. [Google Scholar] [CrossRef]
- Khalil, H.K. Nonlinear Systems, 3rd ed.; Prentice Hall Inc.: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).