Technical Note

Distributed Constrained Optimization Algorithms for Drones

School of Mathematics, Southeast University, Nanjing 210096, China
Drones 2025, 9(1), 36; https://doi.org/10.3390/drones9010036
Submission received: 18 November 2024 / Revised: 28 December 2024 / Accepted: 4 January 2025 / Published: 6 January 2025

Abstract

The present study addresses a critical issue in the drone domain: distributed constrained optimization. We investigate an optimization scenario in which the decision variable is confined to a closed convex set, and the primary objective is to develop distributed algorithms capable of solving this problem. To this end, we design distributed algorithms for both balanced and unbalanced graphs, employing the method of feasible directions to handle the considered constraint and a left-eigenvector estimation scheme to handle the graph imbalance, while incorporating momentum terms. We prove that the algorithms converge linearly when the local objective functions are smooth and strongly convex and the step-sizes are appropriately chosen. Additionally, simulation results validate the efficacy of the proposed distributed algorithms.

1. Introduction

Recently, distributed optimization has emerged as a significant research area for drones, leading to substantial advances. Although several attempts have been made over the past years to employ distributed optimization algorithms for the decision problems involved in drones (see [1,2]), their usage in multi-robot systems remains limited to a handful of examples, as discussed in [3]. On the other hand, it is also noted in [3] that many problems, including multi-robot target tracking, cooperative estimation, distributed simultaneous localization and mapping, and collaborative motion planning in multi-robot coordination and collaboration, can be formulated and solved within the framework of distributed optimization. Therefore, in this paper, we focus on designing novel distributed optimization algorithms and exploring their potential applications in drones.
Numerous influential results on distributed optimization have been achieved in this domain, encompassing both continuous-time algorithms (see [4,5,6,7,8,9,10,11]) and discrete-time algorithms (see [12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]). Notably, distributed constrained optimization has attracted significant attention, particularly through discrete-time algorithms designed for closed convex set constraints.
Initially, research primarily centered on distributed optimization algorithms that employed nonnegative decaying step-sizes, leading to algorithms with sub-linear convergence rates. The earliest studies addressed unconstrained optimization problems, with distributed algorithms over balanced graphs developed in [12]. Subsequent work expanded these methods to include closed convex set constraints in [13] and global general constraints in [14]. When fixed unbalanced graphs were considered, distributed algorithms were developed to handle optimization problems with N non-identical set constraints in [15] and those with additional equality and inequality constraints in [16]. Under time-varying unbalanced graph sequences, the Push-sum framework facilitated the design of distributed algorithms for unconstrained optimization in [17], while an improved push–pull framework enabled solutions to problems with equality constraints and set constraints in [18].
More recent efforts have emphasized the design of distributed optimization algorithms achieving linear convergence rates. In [19], a distributed algorithm over balanced graphs was developed for unconstrained optimization, while further studies addressed the case of fixed unbalanced graphs in [20,21,22]. For time-varying unbalanced graphs, the Push-DIGing algorithm and a push–pull-based algorithm were proposed in [23,24]. These methods, however, are limited to solving unconstrained optimization problems. For constrained cases, an algorithm over a fixed unbalanced graph was developed for problems with a closed convex set constraint in [25].
To further accelerate convergence, momentum terms were introduced into the distributed optimization algorithms proposed in [27,28,29], which studied only unconstrained problems. This approach was later extended to the case with a global closed convex set constraint in [30]. Although the momentum-based algorithm in [30] achieves linear convergence, its use of dual iterative scales adds computational complexity. This paper therefore aims to design novel distributed optimization algorithms that solve the constrained optimization problem with only one iterative scale and thus with lower computational complexity.
This paper’s main contribution lies in proposing the distributed optimization algorithms with linear convergence and momentum terms for problems with a global closed convex set constraint. Utilizing the method of feasible directions to handle the constraint, this approach eliminates the need for the dual iterative scales used in [30], thus reducing computational complexity.
We structure the remainder of the paper as follows. Section 2 introduces the relevant notations, graph theory, and matrix theory. Section 3 presents the decision problem of drones, the proposed distributed algorithm, the main theorem, and a detailed convergence analysis. In Section 4 and Section 5, the results of simulation and conclusions are given, respectively.

2. Preliminaries

2.1. Notations

In this paper, the set of $n$-dimensional real vectors is denoted by $\mathbb{R}^n$. In particular, $\mathbf{1}$ denotes the vector of appropriate dimension with all entries equal to 1. For a vector $x \in \mathbb{R}^n$, $x_i$ denotes its $i$-th entry, $x^T$ its transpose, and $\|x\|$ its 2-norm. If all $x_i$, $i = 1, 2, \ldots, n$, are positive (non-negative), $x$ is called positive (non-negative). Moreover, $e_i$ stands for the vector of appropriate dimension whose $i$-th entry is 1 and whose other entries are 0. Additionally, for a function $f$ defined on $\mathbb{R}^n$, $\nabla f(x)$ represents the gradient of $f$ at $x$. Let $\mathbb{R}^{m \times n}$ denote the set of $m \times n$ real matrices, and let $I_N$ denote the $N \times N$ identity matrix. For a matrix $M \in \mathbb{R}^{m \times n}$, its transpose is denoted by $M^T$, its induced 2-norm by $\|M\|$, and its $ij$-th entry by $M_{ij}$. If all $M_{ij}$, $i = 1, 2, \ldots, m$, $j = 1, 2, \ldots, n$, are positive (non-negative), $M$ is called positive (non-negative). Finally, for a closed convex set $X_0 \subseteq \mathbb{R}^n$ and a vector $w$, $P_{X_0}(w)$ denotes the projection of $w$ onto $X_0$.

2.2. Graph Theory

In this paper, the notations and definitions associated with the graph are the same as those described in [25] and thus are omitted here.

2.3. Matrix Theory

This subsection gives the definition of doubly stochastic matrices and some important results on doubly stochastic and non-negative matrices.
Definition 1.
Let $A \in \mathbb{R}^{N \times N}$ be a non-negative matrix. $A$ is called a non-negative stochastic matrix if $A\mathbf{1} = \mathbf{1}$. Moreover, $A$ is called a non-negative doubly stochastic matrix if $A\mathbf{1} = \mathbf{1}$ and $\mathbf{1}^T A = \mathbf{1}^T$.
Lemma 1
([31,32]). Let $A \in \mathbb{R}^{N \times N}$ be a non-negative doubly stochastic matrix associated with the strongly connected directed graph $\mathcal{G}$, and suppose that $A_{ii} > 0$ for all $1 \le i \le N$. Then, we have $\big\|A - \frac{1}{N}\mathbf{1}\mathbf{1}^T\big\| \le \rho$, with $0 < \rho < 1$ being a constant.
Lemma 2
([32]). For a non-negative matrix $G \in \mathbb{R}^{n \times n}$, if $G\delta < \theta\delta$ holds entrywise for a positive constant $\theta$ and a positive vector $\delta \in \mathbb{R}^n$, then $\rho(G) < \theta$, where $\rho(G)$ denotes the spectral radius of $G$.
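Lemma 2 suggests a practical certificate: exhibiting one positive vector $\delta$ with $G\delta < \theta\delta$ entrywise certifies $\rho(G) < \theta$ without computing eigenvalues. A minimal numerical illustration (the matrix and vector below are assumed toy values, not from the paper):

```python
import numpy as np

# A small non-negative matrix and a candidate positive vector delta.
G = np.array([[0.5, 0.2],
              [0.1, 0.6]])
delta = np.array([1.0, 1.0])
theta = 0.9

# Entrywise certificate G @ delta < theta * delta ...
certified = np.all(G @ delta < theta * delta)

# ... implies the spectral radius of G is below theta.
spectral_radius = max(abs(np.linalg.eigvals(G)))
print(certified, spectral_radius < theta)  # True True
```

This is exactly how Lemma 2 is used later in the proof of Theorem 1, with $\theta = 1$.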

3. Main Results

We will give the decision problem of drones, the proposed distributed algorithm and main theorem, and the convergence analysis in this section.

3.1. Decision Problem of Drones

In recent years, drones have seen significant application across various practical fields such as the military, terrain exploration, and power grid fault detection. As multi-scenario tasks become more complex, task completion often requires the coordination of multiple drones. Due to the lengthy time required and the lack of robustness in solving large-scale optimization problems, centralized optimal decision-making methods cannot meet the intelligent demands of multi-drone clusters for handling complex tasks. Consequently, distributed optimization algorithms are increasingly employed to enable efficient autonomous decision-making in multi-drone clusters. For instance, in terrain exploration tasks, the influence of complex terrain requires that multi-drone clusters consider not only their inherent optimization objectives but also the impact of terrain constraints on drone actions. Therefore, the autonomous decision-making problem for multi-drone clusters can be formulated as the following mathematical model:
$$\min_{k}\; h(k) = \frac{1}{N}\sum_{i=1}^{N} h_i(k) \quad \text{s.t.}\quad k \in K_0, \tag{1}$$
where $h_i: \mathbb{R}^m \to \mathbb{R}$ is the convex function representing the optimization objective of drone $i$, and $K_0 \subseteq \mathbb{R}^m$ is a closed convex set standing for the constraint observed by the drones. As discussed in [25], an optimal solution $k^*$ to problem Equation (1) satisfies
$$k^* = P_{K_0}\big(k^* - \nu_0 \nabla h(k^*)\big), \tag{2}$$
with $\nu_0$ being an arbitrary positive constant. For convenience, let $m = 1$ in the subsequent analyses; the case with $m > 1$ has been discussed in detail in [25].
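The fixed-point property in Equation (2) can be checked numerically. The sketch below uses a toy instance with assumed data (not from the paper): $h(k) = (k-3)^2$ on $K_0 = [-2, 2]$, whose constrained minimizer is $k^* = 2$:

```python
import numpy as np

# Toy instance (illustrative): h(k) = (k - 3)^2 on K0 = [-2, 2].
# The constrained minimizer is k* = 2 (the unconstrained one, 3, is infeasible).
def grad_h(k):
    return 2.0 * (k - 3.0)

def project_K0(w, lo=-2.0, hi=2.0):
    return np.clip(w, lo, hi)

k_star = 2.0
nu0 = 0.1  # arbitrary positive constant, as in Equation (2)

# Fixed-point property: k* = P_K0(k* - nu0 * grad h(k*)).
fixed_point = project_K0(k_star - nu0 * grad_h(k_star))
print(fixed_point)  # 2.0, i.e. k* is reproduced
```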
In the following, we first give a necessary assumption for problem Equation (1), which is standard and usually introduced in the works researching the distributed optimization algorithms having linear convergence.
Assumption 1.
All $h_i$ are $L$-smooth and $\mu$-strongly convex on $\mathbb{R}^m$, where $\mu$ and $L$ are positive constants.

3.2. Proposed Distributed Algorithm and Main Theorem

In this subsection, the distributed optimization algorithm is designed on the strongly connected balanced graph $\mathcal{G}$, and the main theorem establishing the linear convergence of the proposed algorithm is given.
Motivated by [26,30], the discrete-time distributed algorithm is designed as in Algorithm 1, where $k_i(a) \in \mathbb{R}^m$ is the estimate of the optimal solution held by node $i$ at time $a$; $A$ is the weight matrix associated with $\mathcal{G}$, which is assumed to be non-negative doubly stochastic; $l_i(a) \in \mathbb{R}^m$ is the estimate of the gradient of $h$ held by node $i$ at time $a$; and $\alpha$, $\beta$, $\eta > 0$ are fixed step-sizes. In particular, $k_i(0)$ can be selected arbitrarily in $\mathbb{R}^m$, and $l_i(0) = \nabla h_i(k_i(0))$.
Algorithm 1: Distributed Algorithm over Balanced Graph
I: Input: $\alpha > 0$, $\beta > 0$, $\eta > 0$, $N$, $A$;
II: Initialize: For all $i$, $k_i(0) = k_i(-1) \in \mathbb{R}^m$, $l_i(0) = \nabla h_i(k_i(0))$;
III: Iteration rule:
$$k_i(a+1) = \sum_{j=1}^{N} A_{ij} k_j(a) + \eta e_i(a) + \beta d_i(a), \qquad l_i(a+1) = \sum_{j=1}^{N} A_{ij} l_j(a) + r_i(a), \tag{3}$$
with
$$e_i(a) = k_i(a) - k_i(a-1), \quad d_i(a) = P_{K_0}\Big(k_i(a) - \frac{\alpha}{\beta} l_i(a)\Big) - k_i(a), \quad r_i(a) = \nabla h_i(k_i(a+1)) - \nabla h_i(k_i(a)).$$
IV: Output: $k_i(a)$, for all $i$.
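The iteration rule above can be sketched in a few lines. The following is a minimal toy instance (illustrative, not the paper's simulation): quadratic local objectives, a 4-node ring with doubly stochastic weights, and trial step-sizes in the spirit of Remark 2:

```python
import numpy as np

# Minimal sketch of Algorithm 1 (all data illustrative): agent i holds
# h_i(k) = (k - y[i])^2 and the constraint set is K0 = [-2, 2]. The
# unconstrained minimizer of the average is 3, so the constrained optimum is 2.
N = 4
y = np.array([1.5, 2.5, 3.5, 4.5])

def grad_h(i, k):
    return 2.0 * (k - y[i])                    # gradient of h_i

P_K0 = lambda w: np.clip(w, -2.0, 2.0)         # projection onto K0

# Doubly stochastic weight matrix for a 4-node ring (balanced graph).
A = np.array([[1/3, 1/3, 0, 1/3],
              [1/3, 1/3, 1/3, 0],
              [0, 1/3, 1/3, 1/3],
              [1/3, 0, 1/3, 1/3]])

alpha, beta, eta = 0.05, 0.5, 0.05             # trial step-sizes, alpha/beta < 2/L

k = np.zeros(N)                                 # k_i(0)
k_prev = k.copy()                               # k_i(-1) = k_i(0)
l = np.array([grad_h(i, k[i]) for i in range(N)])   # l_i(0) = grad h_i(k_i(0))

for _ in range(2000):
    e = k - k_prev                              # momentum term e_i(a)
    d = P_K0(k - (alpha / beta) * l) - k        # feasible direction d_i(a)
    k_next = A @ k + eta * e + beta * d
    r = np.array([grad_h(i, k_next[i]) - grad_h(i, k[i]) for i in range(N)])
    l = A @ l + r                               # gradient-tracking update
    k_prev, k = k, k_next

print(k)  # all agents agree on the constrained optimum, approx. 2.0
```

Here $L = 2$ for each $h_i$, so $\alpha/\beta = 0.1 < 2/L$; the step-sizes were found by trial, as Remark 2 suggests.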
To rewrite Equation (3) in compact form, for convenience in the subsequent analysis, we introduce the following variables:
$$k(a) = (k_1(a), \ldots, k_N(a))^T, \quad l(a) = (l_1(a), \ldots, l_N(a))^T, \quad d(a) = (d_1(a), \ldots, d_N(a))^T, \quad e(a) = (e_1(a), \ldots, e_N(a))^T, \quad H(k(a)) = (\nabla h_1(k_1(a)), \ldots, \nabla h_N(k_N(a)))^T, \tag{4}$$
and $K = K_0^N$. Then, we can rewrite Equation (3) as
$$k(a+1) = A k(a) + \eta e(a) + \beta d(a), \tag{5a}$$
$$l(a+1) = A l(a) + H(k(a+1)) - H(k(a)). \tag{5b}$$
Theorem 1.
With feasible step-sizes $\alpha$, $\beta$, and $\eta$, whose feasible regions are defined in detail in the subsequent convergence analysis, $k_i(a)$ under Algorithm 1 converges, for all $i$, linearly to the unique optimal solution of problem Equation (1) under Assumption 1.
Proof. 
See Section 3.3 for the detailed proof. □

3.3. Convergence Analysis for Algorithm 1

This subsection presents the detailed convergence analysis of Algorithm 1, together with the rigorous proof of Theorem 1.
We first introduce several variables needed in the subsequent convergence analysis, defined as
$$\bar k(a) = \frac{1}{N}\mathbf{1}^T k(a), \qquad \bar l(a) = \frac{1}{N}\mathbf{1}^T l(a).$$
Lemma 3.
For Algorithm 1, one has
$$\bar k(a+1) = \bar k(a) + \frac{1}{N}\eta\mathbf{1}^T e(a) + \frac{1}{N}\beta\mathbf{1}^T d(a), \tag{6a}$$
$$\bar l(a+1) = \frac{1}{N}\mathbf{1}^T H(k(a+1)) = \frac{1}{N}\sum_{i=1}^{N} \nabla h_i(k_i(a+1)). \tag{6b}$$
Proof. 
This proof is omitted here since it can be easily completed based on the proof of ([25] [Lemma 6]). □
Next, by introducing the variable
$$x(a) = \Big(\|k(a) - \mathbf{1}\bar k(a)\|,\; \sqrt{N}\,\|\bar k(a) - k^*\|,\; \|e(a)\|,\; \|l(a) - \mathbf{1}\bar l(a)\|\Big)^T,$$
we give a key lemma for establishing the linear convergence property of Algorithm 1.
Lemma 4.
Under Assumption 1 and when $0 < \frac{\alpha}{\beta} < \frac{2}{L}$, there holds, entrywise,
$$x(a+1) \le G\, x(a), \tag{7}$$
where
$$G = \begin{pmatrix} \rho + 3\beta & 0 & 2\eta & 2\alpha \\ \beta + \alpha L & \gamma & \eta & \alpha \\ 2 + 2\beta + \alpha L & 2\beta + \alpha L & \eta & \alpha \\ 2L(2 + 2\beta + \alpha L) & 2L(2\beta + \alpha L) & 2L\eta & \rho + 2L\alpha \end{pmatrix}.$$
Proof. 
See Appendix A for the detailed proof. □
In the following, the detailed proof of Theorem 1 is prepared.
Proof of Theorem 1.
Clearly, to complete the proof it suffices to select feasible step-sizes $\alpha$, $\beta$, and $\eta$ such that $\rho(G) < 1$. Considering $0 < \frac{\alpha}{\beta} < \frac{2}{L}$, we can select $\beta = 2L\alpha$. Furthermore, noting Lemma 2, we can complete the proof by showing that there is a positive vector $\delta = [\delta_1, \delta_2, \delta_3, \delta_4]^T$ satisfying $G\delta < \delta$, i.e.,
$$\begin{aligned} \delta_1 &> (\rho + 3\beta)\delta_1 + 2\eta\delta_3 + 2\alpha\delta_4, \\ \delta_2 &> (\beta + \alpha L)\delta_1 + \gamma\delta_2 + \eta\delta_3 + \alpha\delta_4, \\ \delta_3 &> (2 + 2\beta + \alpha L)\delta_1 + (2\beta + \alpha L)\delta_2 + \eta\delta_3 + \alpha\delta_4, \\ \delta_4 &> 2L(2 + 2\beta + \alpha L)\delta_1 + 2L(2\beta + \alpha L)\delta_2 + 2\eta L\delta_3 + (\rho + 2\alpha L)\delta_4. \end{aligned} \tag{8}$$
Then, substituting $\beta = 2\alpha L$ into Equation (8), we can obtain
$$\begin{aligned} \delta_1 &> (\rho + 6\alpha L)\delta_1 + 2\eta\delta_3 + 2\alpha\delta_4, \\ \delta_2 &> 3\alpha L\,\delta_1 + \gamma\delta_2 + \eta\delta_3 + \alpha\delta_4, \\ \delta_3 &> (2 + 7\alpha L)\delta_1 + 7\alpha L\,\delta_2 + \eta\delta_3 + \alpha\delta_4, \\ \delta_4 &> 2L(2 + 7\alpha L)\delta_1 + 14\alpha L^2\delta_2 + 2\eta L\delta_3 + (\rho + 2\alpha L)\delta_4, \end{aligned} \tag{9}$$
which is equivalent to
$$\begin{aligned} 2\eta\delta_3 &< (1 - \rho - 6\alpha L)\delta_1 - 2\alpha\delta_4, \\ \eta\delta_3 &< -3\alpha L\,\delta_1 + (1 - \gamma)\delta_2 - \alpha\delta_4, \\ \eta\delta_3 &< -(2 + 7\alpha L)\delta_1 - 7\alpha L\,\delta_2 + \delta_3 - \alpha\delta_4, \\ 2\eta L\delta_3 &< -2L(2 + 7\alpha L)\delta_1 - 14\alpha L^2\delta_2 + (1 - \rho - 2\alpha L)\delta_4. \end{aligned} \tag{10}$$
Since $\eta$, $\delta_3$, and $L$ are positive, if there exist positive constants $\delta_1$, $\delta_2$, $\delta_3$, $\delta_4$, and $\alpha$ such that
$$\begin{aligned} 0 &< (1 - \rho - 6\alpha L)\delta_1 - 2\alpha\delta_4, \\ 0 &< -3\alpha L\,\delta_1 + (1 - \gamma)\delta_2 - \alpha\delta_4, \\ 0 &< -(2 + 7\alpha L)\delta_1 - 7\alpha L\,\delta_2 + \delta_3 - \alpha\delta_4, \\ 0 &< -2L(2 + 7\alpha L)\delta_1 - 14\alpha L^2\delta_2 + (1 - \rho - 2\alpha L)\delta_4, \end{aligned} \tag{11}$$
holds, then a positive constant $\eta$ exists such that Equation (10) holds. For an arbitrary positive constant $\delta_3$, considering $\gamma = 1 - \mu\alpha$ with $\beta = 2L\alpha$, we select $\delta_1 = q_1\delta_3$, $\delta_2 = q_2\delta_3$, and $\delta_4 = q_3\delta_3$, with $q_1 = \frac{1}{4}$, $q_2 = \frac{9}{4\mu(1-\rho)}$, and $q_3 = \frac{3}{2(1-\rho)}$. Then, when
$$\alpha < \min\left\{ \frac{(1-\rho)q_1}{6Lq_1 + 2q_3},\; \frac{1 - 2q_1}{7Lq_1 + 7Lq_2 + q_3},\; \frac{(1-\rho)q_3 - 4Lq_1}{14L^2q_1 + 14L^2q_2 + 2Lq_3} \right\}, \tag{12}$$
Equation (11) holds. Then, with the selected $\alpha$, we can select
$$\eta < \min\left\{ \frac{(1-\rho)q_1 - (6Lq_1 + 2q_3)\alpha}{2},\; (\mu q_2 - 3Lq_1 - q_3)\alpha,\; (1 - 2q_1) - (7Lq_1 + 7Lq_2 + q_3)\alpha,\; \frac{(1-\rho)q_3 - 4Lq_1 - (14L^2q_1 + 14L^2q_2 + 2Lq_3)\alpha}{2L} \right\}, \tag{13}$$
to let Equation (10) hold. Therefore, the proof has been completed. □
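The feasibility of chosen step-sizes can also be verified numerically by forming the matrix $G$ of Lemma 4 and checking $\rho(G) < 1$ directly. A sketch with assumed problem constants $\rho$, $L$, and $\mu$ (illustrative values, not from the paper):

```python
import numpy as np

# Numerical feasibility check for Theorem 1: build G from Lemma 4 and verify
# that its spectral radius is below 1. rho, L, mu are assumed constants.
rho, L, mu = 0.5, 2.0, 1.0
alpha = 0.005
beta = 2 * L * alpha           # the choice beta = 2*L*alpha from the proof
eta = 0.001
gamma = 1 - mu * alpha         # gamma = 1 - mu*alpha when beta = 2*L*alpha

G = np.array([
    [rho + 3*beta, 0, 2*eta, 2*alpha],
    [beta + alpha*L, gamma, eta, alpha],
    [2 + 2*beta + alpha*L, 2*beta + alpha*L, eta, alpha],
    [2*L*(2 + 2*beta + alpha*L), 2*L*(2*beta + alpha*L), 2*L*eta, rho + 2*L*alpha],
])
spectral_radius = max(abs(np.linalg.eigvals(G)))
print(spectral_radius)  # below 1 for these step-sizes
```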

3.4. Distributed Algorithm on Unbalanced Graph

It can be noted that in Section 3.2, we only consider the case in which the communication graph between all drones is balanced, which means that any two directly connected drones (agents) can exchange information bidirectionally. However, in many cases requiring privacy protection, the information exchange between two directly connected drones (agents) is unidirectional, which implies that the communication graph is unbalanced. Furthermore, it is difficult to design a nonnegative doubly stochastic matrix for an unbalanced graph, so a nonnegative (row-)stochastic matrix is typically employed in distributed optimization algorithms for cases involving unbalanced graphs.
Accordingly, when the matrix $A$ is only a stochastic matrix, Algorithm 1 cannot address problem Equation (1). To explain the detailed reason, consider the case without the constraint and without the momentum terms $e_i(a)$, in which Algorithm 1 can be rewritten as
$$k_i(a+1) = \sum_{j=1}^{N} A_{ij} k_j(a) - \alpha l_i(a), \tag{14a}$$
$$l_i(a+1) = \sum_{j=1}^{N} A_{ij} l_j(a) + r_i(a), \tag{14b}$$
with the definition of r i ( a ) being the same as that in Section 3.2.
It is worth noting from Lemma 3 that $\bar l(a)$ can approximately be seen as the gradient of the global objective function $h$ at $\bar k(a)$ once all $k_i(a)$ reach consensus. Moreover, from the property of the non-negative doubly stochastic matrix given in Lemma 1, all $k_i(a)$ achieve consensus linearly. Furthermore, the consensus state $\bar k(a)$ satisfies
$$\bar k(a+1) = \bar k(a) - \alpha \bar l(a), \tag{15}$$
which can approximately be seen as the classical gradient-descent iteration for minimizing $h(k)$. Therefore, all $k_i(a)$ under Algorithm 1 converge to the minimizer of $h(k)$.
However, as discussed in [25], when $A$ is only a non-negative stochastic (rather than doubly stochastic) matrix, the result in Lemma 1 becomes $\big\|A - \frac{1}{N}\mathbf{1}\pi^T\big\| \le \rho$, where $\pi$ is a nonnegative left eigenvector of $A$ with respect to the eigenvalue 1, satisfying $\mathbf{1}^T\pi = N$. Then, we should re-define $\bar k(a)$ and $\bar l(a)$ as
$$\bar k(a) = \frac{1}{N}\pi^T k(a), \qquad \bar l(a) = \frac{1}{N}\pi^T l(a),$$
and can obtain that all $l_i(a)$ converge to the consensus state $\bar l(a)$, with $\bar l(a+1) = \frac{1}{N}\pi^T H(k(a+1)) = \frac{1}{N}\sum_{i=1}^{N}\pi_i\nabla h_i(k_i(a+1))$. Therefore, $\bar l(a)$ can only approximately be seen as the gradient of $\frac{1}{N}\sum_{i=1}^{N}\pi_i h_i(k)$ rather than of $h(k)$, and thus Algorithm 1 will not converge to the minimizer of $h(k)$.
Clearly, if the value of each $\pi_i$ could be obtained exactly, the term $r_i(a)$ in Algorithm 1 could be modified as
$$r_i(a) = \frac{\nabla h_i(k_i(a+1))}{\pi_i} - \frac{\nabla h_i(k_i(a))}{\pi_i}, \tag{16}$$
to ensure that Algorithm 1 remains effective with $A$ being only non-negative stochastic. However, it is difficult to obtain $\pi$ exactly in most cases, and thus the iteration
$$x_i(a+1) = \sum_{j=1}^{N} A_{ij} x_j(a),$$
with $x_i(0) = e_i \in \mathbb{R}^N$, is designed to estimate $\pi$. Furthermore, as discussed in [25], all $x_i(a)$ converge to $\pi$ linearly, and thus $x_i^i(a)$, the $i$-th entry of $x_i(a)$, converges to $\pi_i$ linearly. Finally, with this estimate of $\pi$, we can modify the term $r_i(a)$ in Algorithm 1 as
$$r_i(a) = \frac{\nabla h_i(k_i(a+1))}{x_i^i(a+1)} - \frac{\nabla h_i(k_i(a))}{x_i^i(a)},$$
to ensure that Algorithm 1 remains effective with $A$ being only non-negative stochastic.
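The left-eigenvector estimation above can be sketched in a few lines. Stacking the $x_i(a)$ as rows gives $X(a) = A^a$ (since $X(0) = I$), so the diagonal entries $x_i^i(a)$ converge to the entries of the left Perron vector of $A$. Note that this sketch normalizes the eigenvector to sum 1, whereas the paper normalizes $\mathbf{1}^T\pi = N$; the row-stochastic matrix below is an assumed example:

```python
import numpy as np

# Sketch of the left-eigenvector estimation for an unbalanced graph.
A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.25, 0.25, 0.5]])   # row-stochastic, strongly connected

X = np.eye(3)
for _ in range(200):
    X = A @ X                        # each node runs x_i(a+1) = sum_j A_ij x_j(a)
pi_est = np.diag(X)                  # x_i^i(a) estimates the i-th entry of pi

# Reference: left eigenvector of A for eigenvalue 1, normalized to sum 1.
w, V = np.linalg.eig(A.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
print(pi_est)  # approx. [0.2, 0.4, 0.4], matching pi
```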
Finally, motivated by [26,30], the distributed algorithm over the unbalanced graph is designed as in Algorithm 2.
Algorithm 2: Distributed Algorithm over Unbalanced Graph
I: Input: $\alpha > 0$, $\beta > 0$, $\eta > 0$, $N$, $A$;
II: Initialize: For all $i$, $k_i(0) = k_i(-1) \in \mathbb{R}^m$, $l_i(0) = \nabla h_i(k_i(0))$, $x_i(0) = e_i \in \mathbb{R}^N$;
III: Iteration rule:
$$k_i(a+1) = \sum_{j=1}^{N} A_{ij} k_j(a) + \eta e_i(a) + \beta d_i(a), \qquad l_i(a+1) = \sum_{j=1}^{N} A_{ij} l_j(a) + r_i(a), \qquad x_i(a+1) = \sum_{j=1}^{N} A_{ij} x_j(a), \tag{17}$$
with
$$e_i(a) = k_i(a) - k_i(a-1), \quad d_i(a) = P_{K_0}\Big(k_i(a) - \frac{\alpha}{\beta} l_i(a)\Big) - k_i(a), \quad r_i(a) = \frac{\nabla h_i(k_i(a+1))}{x_i^i(a+1)} - \frac{\nabla h_i(k_i(a))}{x_i^i(a)}.$$
IV: Output: $k_i(a)$, for all $i$.
Letting
$$W(k(a)) = \left(\frac{\nabla h_1(k_1(a))}{x_1^1(a)}, \ldots, \frac{\nabla h_N(k_N(a))}{x_N^N(a)}\right)^T, \tag{18}$$
we can rewrite Equation (17) in compact form as
$$k(a+1) = A k(a) + \eta e(a) + \beta d(a), \tag{19a}$$
$$l(a+1) = A l(a) + W(k(a+1)) - W(k(a)). \tag{19b}$$
Theorem 2.
With feasible step-sizes $\alpha$, $\beta$, and $\eta$, $k_i(a)$ under Algorithm 2 converges, for all $i$, linearly to the unique optimal solution of problem Equation (1) under Assumption 1.
Proof. 
See Section 3.5 for the detailed proof. □

3.5. Convergence Analysis for Algorithm 2

Now, we can establish the convergence properties of Algorithm 2.
Lemma 5.
With the given initial values, one has
$$\bar k(a+1) = \bar k(a) + \frac{1}{N}\eta\pi^T e(a) + \frac{1}{N}\beta\pi^T d(a),$$
$$\bar l(a+1) = \frac{1}{N}\pi^T W(k(a+1)) = \frac{1}{N}\sum_{i=1}^{N} \frac{\pi_i}{x_i^i(a+1)}\nabla h_i(k_i(a+1)).$$
In the sequel, let $X(a) = \operatorname{diag}\{(x_1^1(a))^{-1}, (x_2^2(a))^{-1}, \ldots, (x_N^N(a))^{-1}\}$.
Proof. 
This proof is omitted here as it can be readily accomplished based on the proof of ([25] [Lemma 8]). □
Next, define the variable
$$x(a) = \Big(\|k(a) - \mathbf{1}\bar k(a)\|,\; \sqrt{N}\,\|\bar k(a) - k^*\|,\; \|e(a)\|,\; \|l(a) - \mathbf{1}\bar l(a)\|\Big)^T.$$
Lemma 6.
Under Assumption 1 and when $0 < \frac{\alpha}{\beta} < \frac{2}{L}$, there holds, entrywise,
$$x(a+1) \le G\, x(a) + \phi(a),$$
where $G = G_1(\alpha, \beta) + G_2$, with $G_1(\alpha, \beta)$, $G_2$, and $\phi(a)$ defined as
$$G_1(\alpha, \beta) = \begin{pmatrix} 3\beta & 0 & 0 & 2\alpha \\ \beta + \alpha BL & 0 & 0 & \alpha \\ 2\beta + \alpha BL & 2\beta + \alpha L & 0 & \alpha \\ 2BL(2\beta + \alpha BL) & 2BL(2\beta + \alpha L) & 0 & 2BL\alpha \end{pmatrix},$$
$$G_2 = \begin{pmatrix} \rho & 0 & 2\eta & 0 \\ 0 & \gamma & \eta & 0 \\ 2 & 0 & \eta & 0 \\ 4BL & 0 & 2BL\eta & \rho \end{pmatrix},$$
$$\phi(a) = \begin{pmatrix} 0 \\ \sqrt{N}\,\phi_1(a) \\ \sqrt{N}\,\phi_1(a) \\ 2BL\sqrt{N}\,\phi_1(a) + 2\phi_2(a) \end{pmatrix}.$$
Proof. 
See Appendix A for the detailed proof. □
With Lemma 6, the proof of Theorem 2 can be similarly completed based on the proof of Theorem 1; therefore, we omit it.
Remark 1.
It can be obtained from the proof of Theorem 1 that, with the desired step-sizes $\alpha$, $\beta$, and $\eta$, the optimization error satisfies
$$\|k_i(a) - k^*\| \le C_0\,[\rho(G_1)]^a,$$
where $C_0$ is a positive constant and $G_1$ denotes the matrix $G$ associated with Algorithm 1. Therefore, for an arbitrarily small positive constant $\epsilon$, it is necessary to execute Algorithm 1 at least $\frac{\ln\epsilon}{\ln\rho(G_1)} - \frac{\ln C_0}{\ln\rho(G_1)}$ times to ensure that $\|k_i(a) - k^*\| \le \epsilon$, while $(M+1)\big(\frac{\ln\epsilon}{\ln\rho(G_1)} - \frac{\ln C_0}{\ln\rho(G_1)}\big)$ iterations are needed for Algorithm 1 in [30], where $M$ is a positive constant satisfying $M > \frac{\ln\frac{1}{2}}{\ln\rho}$ with $0 < \rho < 1$ being a constant. Clearly, Algorithm 2 in this paper and Algorithm 2 in [30] compare similarly in terms of computational complexity.

4. Simulations

Here, a simulation example is shown to verify the theoretical results developed in this paper. In particular, the multi-drone target tracking problem is considered, which was also discussed in [3] and can be formulated in simplified form as
$$\min_{k_t \in K_{0,t}}\; \sum_{i=1}^{N}\sum_{t=1}^{T} \|y_{i,t} - k_t\|^2,$$
where $N$ represents the number of drones tracking the moving target; $T$ represents the number of moving steps of the target; and $y_{i,t} = k_t^* + u_{i,t}$, with $k_t^*$ being the real state of the target at step $t$ and $u_{i,t}$ being the sample at step $t$ of the random variable $u_i$ introduced to represent the measurement noise of drone $i$. The aim of this task is to employ $N$ drones to track the real target trajectory $k_1^*, k_2^*, \ldots, k_T^*$. In this simulation, $N$ is selected as 8, $T$ is set as 5, $k^* = (k_1^*, \ldots, k_5^*)^T = (-2, -1, 0, 1, 2)^T$, and $u_i$ takes values in $[-0.1, 0.1]$ randomly and uniformly. For convenience, all parameter values involved in this simulation are collected in Table 1.
Moreover, it is assumed that the state of the moving target at the step t is contained in K 0 , t , which is selected as [ 2 , 2 ] for all t = 1 , 2 , 3 , 4 , 5 . Additionally, the balanced communication graph necessarily involved in the distributed Algorithm 1 is depicted as in Figure 1. Accordingly, we selected the weighted matrix A as
$$A = \begin{pmatrix} 1/3 & 1/3 & 0 & 0 & 0 & 0 & 0 & 1/3 \\ 1/3 & 1/3 & 1/3 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1/3 & 1/3 & 1/3 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/3 & 1/3 & 1/3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/3 & 1/3 & 1/3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1/3 & 1/3 & 1/3 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1/3 & 1/3 & 1/3 \\ 1/3 & 0 & 0 & 0 & 0 & 0 & 1/3 & 1/3 \end{pmatrix}.$$
Moreover, the step-sizes are selected as $\alpha = 0.01$, $\beta = 0.2$, and $\eta = 0.1$ in this simulation. In Figure 2, the transient behaviors of all $k_{i,t}(a)$ under Algorithm 1 are shown, which shows that all $k_{i,t}(a)$ converge to the unique optimal solution $k^* = (-2, -1, 0, 1, 2)^T$.
Furthermore, with the same settings, except that the unbalanced communication graph involved in Algorithm 2 is depicted in Figure 3 and the weight matrix $A$ is selected as
$$A = \begin{pmatrix} 1/2 & 0 & 0 & 0 & 0 & 0 & 0 & 1/2 \\ 1/2 & 1/2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1/2 & 1/2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/2 & 1/2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/3 & 1/3 & 1/3 & 0 & 0 & 0 \\ 0 & 1/3 & 0 & 0 & 1/3 & 1/3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1/2 & 1/2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1/2 & 1/2 \end{pmatrix},$$
the transient behaviors of all $k_{i,t}(a)$ under Algorithm 2 are shown in Figure 4, which shows that all $k_{i,t}(a)$ converge to the unique optimal solution $k^* = (-2, -1, 0, 1, 2)^T$.
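As a sanity check on the two weight matrices (re-entered below as parsed from the text), the balanced-graph matrix is doubly stochastic while the unbalanced-graph one is only row-stochastic:

```python
import numpy as np

# The balanced-graph matrix is the ring with weights 1/3 on {i-1, i, i+1} mod 8.
N = 8
A_bal = np.zeros((N, N))
for i in range(N):
    for j in (i - 1, i, i + 1):
        A_bal[i, j % N] = 1 / 3

assert np.allclose(A_bal.sum(axis=1), 1)  # row sums are 1
assert np.allclose(A_bal.sum(axis=0), 1)  # column sums are 1: doubly stochastic

# The unbalanced-graph matrix: every row sums to 1, but columns do not.
A_unb = np.array([
    [1/2, 0, 0, 0, 0, 0, 0, 1/2],
    [1/2, 1/2, 0, 0, 0, 0, 0, 0],
    [0, 1/2, 1/2, 0, 0, 0, 0, 0],
    [0, 0, 1/2, 1/2, 0, 0, 0, 0],
    [0, 0, 1/3, 1/3, 1/3, 0, 0, 0],
    [0, 1/3, 0, 0, 1/3, 1/3, 0, 0],
    [0, 0, 0, 0, 0, 1/2, 1/2, 0],
    [0, 0, 0, 0, 0, 0, 1/2, 1/2],
])
print(np.allclose(A_unb.sum(axis=1), 1), np.allclose(A_unb.sum(axis=0), 1))  # True False
```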
Moreover, to highlight the reduction in computational complexity compared with Algorithm 1 in [30], further simulations of Algorithm 1 in this paper and Algorithm 1 in [30] were made under the same settings. In particular, we introduce the convergence index (CI) as
$$\mathrm{CI} = \frac{1}{N}\sum_{i=1}^{N} \frac{\|k_i(a) - k^*\|}{\|k^*\|},$$
with $k_i(a) = (k_{i,1}(a), k_{i,2}(a), \ldots, k_{i,T}(a))^T$, and show the transient behaviors of CI under the two algorithms in Figure 5 for comparison. It can be seen from Figure 5 that the two algorithms share a similar convergence rate. On the other hand, the GPU operating time of each algorithm was recorded during the simulations: to obtain similar convergence performance, 0.079 s is required for Algorithm 1 in this paper, while 0.240 s is needed for Algorithm 1 in [30].
Additionally, to explore how the momentum parameter affects the convergence rate, simulations of Algorithm 1 with different momentum parameters were made under otherwise identical settings. The transient behaviors of CI under these settings are shown in Figure 6 for comparison, which shows that larger momentum parameters yield faster convergence in terms of CI, while Algorithm 1 fails to converge when the momentum parameter is too large ($\eta \ge 1$).
Remark 2.
It is worth mentioning that although the upper bounds of the desired $\alpha$, $\beta$, and $\eta$ are given in Equations (12) and (13), it is still difficult to state these bounds explicitly, since the global information $L$, $\mu$, and $\rho$ is involved in their definitions. On the other hand, Equations (12) and (13) show that the proposed algorithms converge when $\alpha$, $\beta$, and $\eta$ are sufficiently small. Therefore, when simulating the proposed algorithms, arbitrary positive constants can be selected for the step-sizes first; if the algorithms fail to converge with the selected step-sizes, smaller positive constants can be chosen until convergence is achieved.
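The trial-and-shrink procedure described in Remark 2 can be sketched as follows (all names and constants are illustrative):

```python
# Sketch of the step-size selection heuristic from Remark 2: start from
# arbitrary positive step-sizes and shrink them until the algorithm converges,
# since the explicit bounds require unavailable global constants.
def tune_step_sizes(run_algorithm, alpha=0.5, beta=0.5, eta=0.5, shrink=0.5, max_tries=20):
    """run_algorithm(alpha, beta, eta) should return True if the iterates converged."""
    for _ in range(max_tries):
        if run_algorithm(alpha, beta, eta):
            return alpha, beta, eta
        alpha, beta, eta = alpha * shrink, beta * shrink, eta * shrink
    return None

# Toy stand-in for the real algorithm: "converges" once all step-sizes < 0.1.
result = tune_step_sizes(lambda a, b, e: max(a, b, e) < 0.1)
print(result)  # (0.0625, 0.0625, 0.0625)
```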

5. Conclusions

This paper has examined a constrained optimization problem over a closed convex set and designed distributed algorithms incorporating momentum terms. In particular, the method of feasible directions and the method of estimating the left eigenvector have been employed to address the considered constraint and the graph imbalance, respectively. Linear convergence has been demonstrated for carefully chosen step-sizes, whose feasible ranges are clearly defined. Compared with the existing algorithm designed in [30], the algorithms proposed in this paper have lower computational complexity, and the simulation results have confirmed their effectiveness. Future research will explore the impact of time-varying graph sequences and more complex constraints, the scalability of the momentum-based algorithms in large-scale drone networks, and a sensitivity analysis with respect to the step-size and momentum-parameter settings.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. 62203110, the Natural Science Foundation of Jiangsu Province under Grant BK20242027, the Elite Medical Professionals Project of China-Japan Friendship Hospital under Grant No. ZRJY2021-GG09, and the CAMS Innovation Fund for Medical Sciences (CIFMS)-Clinical and Translational Medicine Research under Grant No. 2020-I2M-C&T-B-093.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

In this section, the proofs of Lemmas 4 and 6 are collected.
Proof of Lemma 4.
This proof mainly consists of four steps, in which the bounds of $\|k(a+1) - \mathbf{1}\bar k(a+1)\|$, $\sqrt N\|\bar k(a+1) - k^*\|$, $\|e(a+1)\|$, and $\|l(a+1) - \mathbf{1}\bar l(a+1)\|$ are, respectively, established.
Step 1: Bound $\|k(a+1) - \mathbf{1}\bar k(a+1)\|$. Noting Equations (5a) and (6a), we have
$$\begin{aligned} k(a+1) - \mathbf{1}\bar k(a+1) ={}& A k(a) + \eta e(a) + \beta d(a) - \mathbf{1}\bar k(a) - \frac{1}{N}\mathbf{1}\mathbf{1}^T\big(\eta e(a) + \beta d(a)\big) \\ ={}& \Big(A - \frac{1}{N}\mathbf{1}\mathbf{1}^T\Big)\big(k(a) - \mathbf{1}\bar k(a)\big) + \Big(I - \frac{1}{N}\mathbf{1}\mathbf{1}^T\Big)\eta e(a) \\ & + \beta\Big[P_K\Big(k(a) - \frac{\alpha}{\beta}l(a)\Big) - k(a) - P_K\Big(\mathbf{1}\bar k(a) - \frac{\alpha}{\beta}\mathbf{1}\bar l(a)\Big) + \mathbf{1}\bar k(a)\Big] \\ & + \frac{\beta}{N}\mathbf{1}\mathbf{1}^T\Big[P_K\Big(\mathbf{1}\bar k(a) - \frac{\alpha}{\beta}\mathbf{1}\bar l(a)\Big) - \mathbf{1}\bar k(a) - P_K\Big(k(a) - \frac{\alpha}{\beta}l(a)\Big) + k(a)\Big]. \end{aligned} \tag{A1}$$
Then, based on Lemma 1 and the non-expansiveness of the projection operator, we can obtain
$$\begin{aligned} \|k(a+1) - \mathbf{1}\bar k(a+1)\| \le{}& \rho\|k(a) - \mathbf{1}\bar k(a)\| + 2\eta\|e(a)\| + \beta\|k(a) - \mathbf{1}\bar k(a)\| \\ & + \beta\Big\|k(a) - \frac{\alpha}{\beta}l(a) - \mathbf{1}\bar k(a) + \frac{\alpha}{\beta}\mathbf{1}\bar l(a)\Big\| + \beta\Big\|\mathbf{1}\bar k(a) - \frac{\alpha}{\beta}\mathbf{1}\bar l(a) - k(a) + \frac{\alpha}{\beta}l(a)\Big\| \\ \le{}& (\rho + 3\beta)\|k(a) - \mathbf{1}\bar k(a)\| + 2\eta\|e(a)\| + 2\alpha\|l(a) - \mathbf{1}\bar l(a)\|. \end{aligned} \tag{A2}$$
Step 2: Bound $\sqrt N\|\bar k(a+1) - k^*\|$. From Equation (6a), one has
$$\begin{aligned} \bar k(a+1) - k^* ={}& \bar k(a) + \frac{1}{N}\eta\mathbf{1}^T e(a) + \frac{1}{N}\beta\mathbf{1}^T d(a) - k^* \\ ={}& \bar k(a) + \beta\Big[P_{K_0}\Big(\bar k(a) - \frac{\alpha}{\beta}\nabla h(\bar k(a))\Big) - \bar k(a)\Big] - k^* \\ & - \frac{1}{N}\beta\mathbf{1}^T P_K\Big(\mathbf{1}\bar k(a) - \frac{\alpha}{\beta}\mathbf{1}\nabla h(\bar k(a))\Big) + \frac{1}{N}\beta\mathbf{1}^T P_K\Big(k(a) - \frac{\alpha}{\beta}l(a)\Big) + \frac{1}{N}\eta\mathbf{1}^T e(a). \end{aligned} \tag{A3}$$
Then, based on ([26] [Lemma 7]) and with $0 < \frac{\alpha}{\beta} < \frac{2}{L}$, we can obtain
$$\begin{aligned} \sqrt N\|\bar k(a+1) - k^*\| \le{}& \sqrt N\gamma\|\bar k(a) - k^*\| + \eta\|e(a)\| + \beta\Big\|\mathbf{1}\bar k(a) - \frac{\alpha}{\beta}\mathbf{1}\nabla h(\bar k(a)) - k(a) + \frac{\alpha}{\beta}l(a)\Big\| \\ \le{}& \sqrt N\gamma\|\bar k(a) - k^*\| + \eta\|e(a)\| + \beta\|k(a) - \mathbf{1}\bar k(a)\| + \alpha\|\mathbf{1}\nabla h(\bar k(a)) - l(a)\| \\ \le{}& \sqrt N\gamma\|\bar k(a) - k^*\| + \eta\|e(a)\| + \beta\|k(a) - \mathbf{1}\bar k(a)\| \\ & + \alpha\Big\|\frac{1}{N}\mathbf{1}\mathbf{1}^T H(\mathbf{1}\bar k(a)) - \frac{1}{N}\mathbf{1}\mathbf{1}^T H(k(a))\Big\| + \alpha\|l(a) - \mathbf{1}\bar l(a)\|, \end{aligned} \tag{A4}$$
where $\gamma = 1 - \beta(1 - \gamma_0)$ and
$$\gamma_0 = \max\Big\{\Big|1 - \frac{\alpha}{\beta}L\Big|,\; \Big|1 - \frac{\alpha}{\beta}\mu\Big|\Big\}. \tag{A5}$$
Then, noting that $H(\cdot)$ is also $L$-Lipschitz under Assumption 1, we have
$$\begin{aligned} \sqrt N\|\bar k(a+1) - k^*\| \le{}& \sqrt N\gamma\|\bar k(a) - k^*\| + \eta\|e(a)\| + \beta\|k(a) - \mathbf{1}\bar k(a)\| + \alpha L\|k(a) - \mathbf{1}\bar k(a)\| + \alpha\|l(a) - \mathbf{1}\bar l(a)\| \\ \le{}& \sqrt N\gamma\|\bar k(a) - k^*\| + (\beta + \alpha L)\|k(a) - \mathbf{1}\bar k(a)\| + \eta\|e(a)\| + \alpha\|l(a) - \mathbf{1}\bar l(a)\|. \end{aligned} \tag{A6}$$
Step 3: Bound $\|e(a+1)\|$. Noting $e(a+1) = k(a+1) - k(a)$ and that the last bracket below is zero by Equation (2), we can obtain
$$\begin{aligned} e(a+1) ={}& A k(a) + \eta e(a) + \beta d(a) - k(a) \\ ={}& A\big(k(a) - \mathbf{1}\bar k(a)\big) + \mathbf{1}\bar k(a) - k(a) + \eta e(a) \\ & + \beta\Big[P_K\Big(k(a) - \frac{\alpha}{\beta}l(a)\Big) - k(a)\Big] - \beta\Big[P_K\Big(\mathbf{1}k^* - \frac{\alpha}{\beta}\mathbf{1}\nabla h(k^*)\Big) - \mathbf{1}k^*\Big], \end{aligned} \tag{A7}$$
which means that
$$\begin{aligned} \|e(a+1)\| \le{}& (\rho + 1)\|k(a) - \mathbf{1}\bar k(a)\| + \eta\|e(a)\| + 2\beta\|k(a) - \mathbf{1}k^*\| + \alpha\|l(a) - \mathbf{1}\nabla h(k^*)\| \\ \le{}& 2\|k(a) - \mathbf{1}\bar k(a)\| + \eta\|e(a)\| + 2\beta\|k(a) - \mathbf{1}k^*\| + \alpha\|l(a) - \mathbf{1}\bar l(a)\| \\ & + \alpha\Big\|\frac{1}{N}\mathbf{1}\mathbf{1}^T H(k(a)) - \frac{1}{N}\mathbf{1}\mathbf{1}^T H(\mathbf{1}k^*)\Big\| \\ \le{}& 2\|k(a) - \mathbf{1}\bar k(a)\| + \eta\|e(a)\| + \alpha\|l(a) - \mathbf{1}\bar l(a)\| + (2\beta + \alpha L)\|k(a) - \mathbf{1}k^*\| \\ \le{}& (2 + 2\beta + \alpha L)\|k(a) - \mathbf{1}\bar k(a)\| + \eta\|e(a)\| + (2\beta + \alpha L)\sqrt N\|\bar k(a) - k^*\| + \alpha\|l(a) - \mathbf{1}\bar l(a)\|. \end{aligned} \tag{A8}$$
Step 4: Bound $\|l(a+1)-\mathbf{1}\bar{l}(a+1)\|$. From Equation (5b), we can obtain
$$\begin{aligned}
l(a+1)-\mathbf{1}\bar{l}(a+1) ={}& Al(a)+H(k(a+1))-H(k(a))-\mathbf{1}\bar{l}(a)-\tfrac{1}{N}\mathbf{1}\mathbf{1}^{T}\big(H(k(a+1))-H(k(a))\big) \\
={}& \big(Al(a)-\mathbf{1}\bar{l}(a)\big)+\Big(I-\tfrac{1}{N}\mathbf{1}\mathbf{1}^{T}\Big)\big(H(k(a+1))-H(k(a))\big) \\
={}& \Big(A-\tfrac{1}{N}\mathbf{1}\mathbf{1}^{T}\Big)\big(l(a)-\mathbf{1}\bar{l}(a)\big)+\Big(I-\tfrac{1}{N}\mathbf{1}\mathbf{1}^{T}\Big)\big(H(k(a+1))-H(k(a))\big).
\end{aligned}$$
Then, from Equation (A7), we can obtain
$$\begin{aligned}
\|l(a+1)-\mathbf{1}\bar{l}(a+1)\| \le{}& \rho\|l(a)-\mathbf{1}\bar{l}(a)\| + 2L\|e(a+1)\| \\
\le{}& \rho\|l(a)-\mathbf{1}\bar{l}(a)\| + 2L(2+2\beta+\alpha L)\|k(a)-\mathbf{1}\bar{k}(a)\| \\
&+ 2L(2\beta+\alpha L)\sqrt{N}\|\bar{k}(a)-k^\ast\| + 2L\eta\|e(a)\| + 2L\alpha\|l(a)-\mathbf{1}\bar{l}(a)\| \\
\le{}& (\rho+2L\alpha)\|l(a)-\mathbf{1}\bar{l}(a)\| + 2L\eta\|e(a)\| \\
&+ 2L(2+2\beta+\alpha L)\|k(a)-\mathbf{1}\bar{k}(a)\| + 2L(2\beta+\alpha L)\sqrt{N}\|\bar{k}(a)-k^\ast\|.
\end{aligned}$$
Finally, combining Equations (A2), (A5), (A7) and (A9) completes the proof. □
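The four bounds (A2), (A5), (A7) and (A9) can be stacked as $s(a+1)\le Ms(a)$ for the error vector $s(a)=\big(\|k(a)-\mathbf{1}\bar{k}(a)\|,\ \sqrt{N}\|\bar{k}(a)-k^\ast\|,\ \|e(a)\|,\ \|l(a)-\mathbf{1}\bar{l}(a)\|\big)^{T}$, and linear convergence follows once the step sizes make the spectral radius of $M$ less than one. The sketch below only illustrates how such a coefficient matrix could be assembled and its spectral radius checked numerically; the values of $\rho$, $L$, $\mu$ and the step sizes are hypothetical placeholders, not parameters tuned to satisfy the lemma.

```python
import numpy as np

# Assemble the nonnegative coefficient matrix M collecting the four bounds;
# rho_ is the mixing rate of the weight matrix A, L_/mu_ are smoothness and
# strong-convexity constants -- all illustrative placeholders.
def error_matrix(alpha, beta, eta, rho_, L_, mu_):
    s = alpha / beta                                  # effective step size
    g0 = max(abs(1.0 - s * L_), abs(1.0 - s * mu_))   # centralized factor
    g = 1.0 - beta * (1.0 - g0)                       # gamma in the proof
    return np.array([
        [rho_ + 3 * beta, 0.0, 2 * eta, 2 * alpha],
        [beta + alpha * L_, g, eta, alpha],
        [2 + 2 * beta + alpha * L_, 2 * beta + alpha * L_, eta, alpha],
        [2 * L_ * (2 + 2 * beta + alpha * L_),
         2 * L_ * (2 * beta + alpha * L_), 2 * L_ * eta, rho_ + 2 * L_ * alpha],
    ])

M = error_matrix(alpha=0.01, beta=0.2, eta=0.1, rho_=0.5, L_=4.0, mu_=1.0)
print(max(abs(np.linalg.eigvals(M))))   # linear convergence needs this < 1
```

For a nonnegative matrix, the spectral radius is at least the largest diagonal entry, which is why every diagonal coefficient above must first be driven below one by the step-size choice.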
Proof of Lemma 6.
The proof proceeds in the same four steps as the proof of Lemma 4.
Step 1: Bound $\|k(a+1)-\mathbf{1}\bar{k}(a+1)\|$. Similar to Equation (A10), we can obtain directly
$$\begin{aligned}
k(a+1)-\mathbf{1}\bar{k}(a+1) ={}& Ak(a)+\eta e(a)+\beta d(a)-\mathbf{1}\bar{k}(a)-\tfrac{1}{N}\mathbf{1}\pi^{T}\big(\eta e(a)+\beta d(a)\big) \\
={}& \Big(A-\tfrac{1}{N}\mathbf{1}\pi^{T}\Big)\big(k(a)-\mathbf{1}\bar{k}(a)\big)+\Big(I-\tfrac{1}{N}\mathbf{1}\pi^{T}\Big)\eta e(a) \\
&+ \beta\Big(P_{K}\Big[k(a)-\tfrac{\alpha}{\beta}l(a)\Big]-k(a)-\Big(P_{K}\Big[\mathbf{1}\bar{k}(a)-\tfrac{\alpha}{\beta}\mathbf{1}\bar{l}(a)\Big]-\mathbf{1}\bar{k}(a)\Big)\Big) \\
&+ \tfrac{1}{N}\beta\mathbf{1}\pi^{T}\Big(P_{K}\Big[\mathbf{1}\bar{k}(a)-\tfrac{\alpha}{\beta}\mathbf{1}\bar{l}(a)\Big]-\mathbf{1}\bar{k}(a)-\Big(P_{K}\Big[k(a)-\tfrac{\alpha}{\beta}l(a)\Big]-k(a)\Big)\Big),
\end{aligned}$$
which yields
$$\|k(a+1)-\mathbf{1}\bar{k}(a+1)\| \le (\rho+3\beta)\|k(a)-\mathbf{1}\bar{k}(a)\| + 2\eta\|e(a)\| + 2\alpha\|l(a)-\mathbf{1}\bar{l}(a)\|.$$
Step 2: Bound $\sqrt{N}\|\bar{k}(a+1)-k^\ast\|$. From Equation (19a), one has
$$\begin{aligned}
\bar{k}(a+1)-k^\ast ={}& \bar{k}(a) + \tfrac{1}{N}\eta\pi^{T}e(a) + \tfrac{1}{N}\beta\pi^{T}d(a) - k^\ast \\
={}& \bar{k}(a) + \beta\Big(P_{K_0}\Big[\bar{k}(a)-\tfrac{\alpha}{\beta}h(\bar{k}(a))\Big]-\bar{k}(a)\Big) - k^\ast \\
&- \tfrac{1}{N}\beta\pi^{T}\Big(P_{K}\Big[\mathbf{1}\bar{k}(a)-\tfrac{\alpha}{\beta}\mathbf{1}h(\bar{k}(a))\Big]-P_{K}\Big[k(a)-\tfrac{\alpha}{\beta}l(a)\Big]\Big) + \tfrac{1}{N}\eta\pi^{T}e(a).
\end{aligned}$$
According to ([26] (Lemma 7)) and under the condition that $0<\alpha/\beta<2/L$, it follows that
$$\begin{aligned}
\sqrt{N}\|\bar{k}(a+1)-k^\ast\| \le{}& \sqrt{N}\gamma\|\bar{k}(a)-k^\ast\| + \eta\|e(a)\| + \beta\Big\|\mathbf{1}\bar{k}(a)-\tfrac{\alpha}{\beta}\mathbf{1}h(\bar{k}(a))-\Big(k(a)-\tfrac{\alpha}{\beta}l(a)\Big)\Big\| \\
\le{}& \sqrt{N}\gamma\|\bar{k}(a)-k^\ast\| + \eta\|e(a)\| + \beta\|k(a)-\mathbf{1}\bar{k}(a)\| + \alpha\|\mathbf{1}h(\bar{k}(a))-l(a)\| \\
\le{}& \sqrt{N}\gamma\|\bar{k}(a)-k^\ast\| + \eta\|e(a)\| + \beta\|k(a)-\mathbf{1}\bar{k}(a)\| \\
&+ \alpha\Big\|\tfrac{1}{N}\mathbf{1}\mathbf{1}^{T}H(\mathbf{1}\bar{k}(a))-\tfrac{1}{N}\mathbf{1}\pi^{T}X(a)H(k(a))\Big\| + \alpha\|l(a)-\mathbf{1}\bar{l}(a)\|.
\end{aligned}$$
It is clear that, for all i, both $\pi_i/x_{ii}(a)$ and $[\pi_i/x_{ii}(a)]^{-1}$ are uniformly bounded with respect to a. Therefore, we assume that $\max\big\{x_{ii}(a),\,[x_{ii}(a)]^{-1},\,\pi_i/x_{ii}(a),\,[\pi_i/x_{ii}(a)]^{-1}\big\}\le B$.
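The boundedness of $\pi_i/x_{ii}(a)$ rests on the left-eigenvector estimation converging: with a row-stochastic $A$ over a strongly connected graph, iterating $Y(a+1)=AY(a)$ from $Y(0)=I$ gives $Y(a)\to\mathbf{1}\pi^{T}$, so each diagonal entry $x_{ii}(a)=[Y(a)]_{ii}$ converges to $\pi_i>0$. A hedged sketch of this mechanism (the weight matrix below is an illustrative example, not the one used in the simulations):

```python
import numpy as np

# Left-eigenvector estimation for a row-stochastic weight matrix A:
# iterate Y <- A Y from Y = I, so Y converges to 1*pi^T and the diagonal
# entries converge to the left Perron vector pi (normalized to sum to 1).
A = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.4, 0.0, 0.6]])    # illustrative strongly connected weights

Y = np.eye(3)
for _ in range(200):
    Y = A @ Y                      # agent i only needs rows from in-neighbors

pi = np.diag(Y).copy()             # agent i's local estimate of pi_i
print(np.round(pi, 4))             # pi satisfies pi^T A = pi^T, sum(pi) = 1
```

Since $\pi_i>0$ on a strongly connected graph, both $\pi_i/x_{ii}(a)$ and its inverse stay bounded once the iteration has mixed, which is exactly the role of the constant B above.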
Based on Assumption 1, it follows from the fourth term on the right-hand side of Equation (A12) that
$$\begin{aligned}
&\alpha\Big\|\tfrac{1}{N}\mathbf{1}\pi^{T}X(a)H(k(a))-\tfrac{1}{N}\mathbf{1}\mathbf{1}^{T}H(\mathbf{1}\bar{k}(a))\Big\| \\
&\quad\le \tfrac{\alpha}{\sqrt{N}}\big\|\pi^{T}X(a)H(k(a))-\pi^{T}X(a)H(\mathbf{1}\bar{k}(a))\big\| + \tfrac{\alpha}{\sqrt{N}}\big\|\pi^{T}X(a)H(\mathbf{1}\bar{k}(a))-\mathbf{1}^{T}H(\mathbf{1}\bar{k}(a))\big\| \\
&\quad\le \tfrac{\alpha}{\sqrt{N}}\sum_{i=1}^{N}\pi_i x_{ii}(a)\big\|h_i(k_i(a))-h_i(\bar{k}(a))\big\| + \sqrt{N}\phi_1(a) \\
&\quad\le \tfrac{\alpha BL}{\sqrt{N}}\sum_{i=1}^{N}\|k_i(a)-\bar{k}(a)\| + \sqrt{N}\phi_1(a) \\
&\quad\le \alpha BL\Big(\sum_{i=1}^{N}\|k_i(a)-\bar{k}(a)\|^{2}\Big)^{1/2} + \sqrt{N}\phi_1(a) = \alpha BL\|k(a)-\mathbf{1}\bar{k}(a)\| + \sqrt{N}\phi_1(a),
\end{aligned}$$
where $\phi_1(a)=\tfrac{\alpha}{N}\big\|\big(\mathbf{1}^{T}-\pi^{T}X(a)\big)H(\mathbf{1}\bar{k}(a))\big\|$.
Then, combining Equation (A13) with Equation (A12) yields
$$\sqrt{N}\|\bar{k}(a+1)-k^\ast\| \le \sqrt{N}\gamma\|\bar{k}(a)-k^\ast\| + \eta\|e(a)\| + \sqrt{N}\phi_1(a) + \alpha\|l(a)-\mathbf{1}\bar{l}(a)\| + (\beta+\alpha BL)\|k(a)-\mathbf{1}\bar{k}(a)\|.$$
Step 3: Bound $\|e(a+1)\|$. Similar to Equation (A6), we can directly obtain
$$\begin{aligned}
e(a+1) ={}& A\big(k(a)-\mathbf{1}\bar{k}(a)\big)+\mathbf{1}\bar{k}(a)-k(a)+\eta e(a) \\
&+ \beta\Big(P_{K}\Big[k(a)-\tfrac{\alpha}{\beta}l(a)\Big]-k(a)\Big) - \beta\Big(P_{K}\Big[\mathbf{1}k^\ast-\tfrac{\alpha}{\beta}\mathbf{1}h(k^\ast)\Big]-\mathbf{1}k^\ast\Big),
\end{aligned}$$
from which it follows that
$$\begin{aligned}
\|e(a+1)\| \le{}& \big\|A\big(k(a)-\mathbf{1}\bar{k}(a)\big)\big\| + \|\mathbf{1}\bar{k}(a)-k(a)\| + \eta\|e(a)\| + 2\beta\|k(a)-\mathbf{1}k^\ast\| + \alpha\|l(a)-\mathbf{1}h(k^\ast)\| \\
\le{}& (\rho+1)\|k(a)-\mathbf{1}\bar{k}(a)\| + \eta\|e(a)\| + 2\beta\|k(a)-\mathbf{1}k^\ast\| + \alpha\|l(a)-\mathbf{1}\bar{l}(a)\| \\
&+ \alpha\Big\|\mathbf{1}\bar{l}(a)-\tfrac{1}{N}\mathbf{1}\sum_{j=1}^{N}h_j(\bar{k}(a))\Big\| + \alpha\Big\|\tfrac{1}{N}\mathbf{1}\sum_{j=1}^{N}h_j(\bar{k}(a))-\tfrac{1}{N}\mathbf{1}\sum_{j=1}^{N}h_j(k^\ast)\Big\|.
\end{aligned}$$
From Equation (A13), we have
$$\alpha\Big\|\bar{l}(a)-\tfrac{1}{N}\sum_{j=1}^{N}h_j(\bar{k}(a))\Big\| = \tfrac{\alpha}{N}\big\|\pi^{T}X(a)H(k(a))-\mathbf{1}^{T}H(\mathbf{1}\bar{k}(a))\big\| \le \tfrac{\alpha BL}{\sqrt{N}}\|k(a)-\mathbf{1}\bar{k}(a)\| + \phi_1(a).$$
According to Assumption 1, substituting Equation (A17) into Equation (A16) yields
$$\begin{aligned}
\|e(a+1)\| \le{}& 2\|k(a)-\mathbf{1}\bar{k}(a)\| + \eta\|e(a)\| + 2\beta\|k(a)-\mathbf{1}k^\ast\| + \alpha\|l(a)-\mathbf{1}\bar{l}(a)\| \\
&+ \alpha BL\|k(a)-\mathbf{1}\bar{k}(a)\| + \sqrt{N}\phi_1(a) + \alpha L\sqrt{N}\|\bar{k}(a)-k^\ast\| \\
\le{}& (2+2\beta+\alpha BL)\|k(a)-\mathbf{1}\bar{k}(a)\| + \eta\|e(a)\| + \alpha\|l(a)-\mathbf{1}\bar{l}(a)\| \\
&+ \sqrt{N}\phi_1(a) + (2\beta+\alpha L)\sqrt{N}\|\bar{k}(a)-k^\ast\|.
\end{aligned}$$
Step 4: Bound $\|l(a+1)-\mathbf{1}\bar{l}(a+1)\|$. From Equation (18b), we obtain
$$l(a+1)-\mathbf{1}\bar{l}(a+1) = \Big(A-\tfrac{1}{N}\mathbf{1}\pi^{T}\Big)\big(l(a)-\mathbf{1}\bar{l}(a)\big) + \Big(I-\tfrac{1}{N}\mathbf{1}\pi^{T}\Big)\big(W(k(a+1))-W(k(a))\big).$$
Based on ([25] (Lemma 9)), we have
$$\|W(k(a+1))-W(k(a))\| \le BL\|e(a+1)\| + \phi_2(a),$$
where $\phi_2(a)=\|(X(a+1)-X(a))H(k(a))\|$. Then, by combining Equations (A18)–(A20), it follows that
$$\begin{aligned}
\|l(a+1)-\mathbf{1}\bar{l}(a+1)\| \le{}& 2BL(2+2\beta+\alpha BL)\|k(a)-\mathbf{1}\bar{k}(a)\| + 2BL\eta\|e(a)\| + (2BL\alpha+\rho)\|l(a)-\mathbf{1}\bar{l}(a)\| \\
&+ 2BL\sqrt{N}\phi_1(a) + 2BL(2\beta+\alpha L)\sqrt{N}\|\bar{k}(a)-k^\ast\| + 2\phi_2(a).
\end{aligned}$$
Finally, writing the bounds in Steps 1–4, namely Equations (A10), (A14), (A18) and (A21), as a single matrix inequality completes the proof. □

References

1. Shorinwa, O.; Haksar, R.N.; Washington, P.; Schwager, M. Distributed multirobot task assignment via consensus ADMM. IEEE Trans. Robot. 2023, 39, 1781–1800.
2. Huang, Y.; Kuai, J.; Cui, S.; Meng, Z.; Sun, J. Distributed algorithms via saddle-point dynamics for multi-robot task assignment. IEEE Robot. Automat. Lett. 2024, 9, 11178–11185.
3. Shorinwa, O.; Halsted, T.; Yu, J.; Schwager, M. Distributed optimization methods for multi-robot systems: Part 1-A tutorial. IEEE Robot. Automat. Mag. 2024, 31, 121–138.
4. Wang, J.; Elia, N. A control perspective for centralized and distributed convex optimization. In Proceedings of the 2011 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, FL, USA, 12–15 December 2011; pp. 3800–3805.
5. Gharesifard, B.; Cortés, J. Distributed continuous-time convex optimization on weight-balanced digraphs. IEEE Trans. Autom. Control 2014, 59, 781–786.
6. Hong, H.; Yu, X.; Yu, W.; Zhang, D.; Wen, G. Distributed convex optimization on state-dependent undirected graphs: Homogeneity technique. IEEE Trans. Control Netw. Syst. 2020, 7, 42–52.
7. Zhu, Y.; Yu, W.; Wen, G.; Ren, W. Continuous-time coordination algorithm for distributed convex optimization over weight-unbalanced directed networks. IEEE Trans. Circuits Syst. II Exp. Briefs 2019, 66, 1202–1206.
8. Liu, Q.; Wang, J. A second-order multi-agent network for bound-constrained distributed optimization. IEEE Trans. Autom. Control 2015, 60, 3310–3315.
9. Lin, P.; Ren, W.; Farrell, J.A. Distributed continuous-time optimization: Nonuniform gradient gains, finite-time convergence, and convex constraint set. IEEE Trans. Autom. Control 2017, 62, 2239–2253.
10. Yuan, D.; Ho, D.W.C.; Xu, S. Regularized primal-dual subgradient method for distributed constrained optimization. IEEE Trans. Cybern. 2016, 46, 2109–2118.
11. Yang, S.; Liu, Q.; Wang, J. A multi-agent system with a proportional-integral protocol for distributed constrained optimization. IEEE Trans. Autom. Control 2017, 62, 3461–3467.
12. Nedić, A.; Ozdaglar, A. Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 2009, 54, 48–61.
13. Nedić, A.; Ozdaglar, A.; Parrilo, P.A. Constrained consensus and optimization in multi-agent networks. IEEE Trans. Autom. Control 2010, 55, 922–938.
14. Zhu, M.; Martínez, S. On distributed convex optimization under inequality and equality constraints. IEEE Trans. Autom. Control 2012, 57, 151–164.
15. Mai, V.S.; Abed, E.H. Distributed optimization over directed graphs with row stochasticity and constraint regularity. Automatica 2019, 102, 94–104.
16. Liu, H.; Zheng, W.X.; Yu, W. Distributed discrete-time algorithms for convex optimization with general local constraints on weight-unbalanced digraph. IEEE Trans. Control Netw. Syst. 2021, 8, 51–64.
17. Nedić, A.; Olshevsky, A. Distributed optimization over time-varying directed graphs. IEEE Trans. Autom. Control 2015, 60, 601–615.
18. Liu, H.; Yu, W.; Wen, G.; Zheng, W.X. Distributed algorithm over time-varying unbalanced graphs for optimization problem subject to multiple local constraints. IEEE Trans. Control Netw. Syst. 2024.
19. Qu, G.; Li, N. Harnessing smoothness to accelerate distributed optimization. IEEE Trans. Control Netw. Syst. 2018, 5, 1245–1260.
20. Xi, C.; Xin, R.; Khan, U.A. ADD-OPT: Accelerated distributed directed optimization. IEEE Trans. Autom. Control 2018, 63, 1329–1339.
21. Xi, C.; Mai, V.S.; Xin, R.; Abed, E.H.; Khan, U.A. Linear convergence in optimization over directed graphs with row-stochastic matrices. IEEE Trans. Autom. Control 2018, 63, 3558–3565.
22. Pu, S.; Shi, W.; Xu, J.; Nedić, A. A push-pull gradient method for distributed optimization in networks. IEEE Trans. Autom. Control 2020, 66, 1–16.
23. Nedić, A.; Olshevsky, A.; Shi, W. Achieving geometric convergence for distributed optimization over time-varying graphs. SIAM J. Optim. 2017, 27, 2597–2633.
24. Saadatniaki, F.; Xin, R.; Khan, U.A. Decentralized optimization over time-varying directed graphs with row and column-stochastic matrices. IEEE Trans. Autom. Control 2020, 65, 4769–4780.
25. Liu, H.; Yu, W.; Chen, G. Discrete-time algorithms for distributed constrained convex optimization with linear convergence rates. IEEE Trans. Cybern. 2022, 52, 4874–4885.
26. Liu, H.; Yu, W.; Zheng, W.X.; Nedić, A.; Zhu, Y. Distributed constrained optimization algorithms with linear convergence rate over time-varying unbalanced graphs. Automatica 2024, 159, 111346.
27. Xin, R.; Khan, U.A. Distributed heavy-ball: A generalization and acceleration of first-order methods with gradient tracking. IEEE Trans. Autom. Control 2020, 65, 2627–2633.
28. Li, H.; Cheng, H.; Wang, Z.; Wu, G.-C. Distributed Nesterov gradient and heavy-ball double accelerated asynchronous optimization. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 5723–5737.
29. Gao, J.; Liu, X.; Dai, Y.-H.; Huang, Y.; Yang, P. A family of distributed momentum methods over directed graphs with linear convergence. IEEE Trans. Autom. Control 2023, 68, 1085–1092.
30. Luan, M.; Wen, G.; Liu, H.; Huang, T.; Chen, G.; Yu, W. Distributed discrete-time convex optimization with closed convex set constraints: Linearly convergent algorithm design. IEEE Trans. Cybern. 2024, 54, 2271–2283.
31. Ren, W.; Beard, R.W. Consensus seeking in multiagent systems under dynamically changing interaction topologies. IEEE Trans. Autom. Control 2005, 50, 655–661.
32. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 2012.
Figure 1. Balanced graph with eight interacting agents.
Figure 2. Behaviors of the states $k_{i,t}(a)$ under Algorithm 1.
Figure 3. Unbalanced graph with eight interacting agents.
Figure 4. Behaviors of the states $k_{i,t}(a)$ under Algorithm 2.
Figure 5. Behaviors of the CI under Algorithm 1 in this paper and under Algorithm 1 in [30].
Figure 6. Behaviors of the CI under Algorithm 1 with different $\eta$.
Table 1. Values of parameters.

Parameters: N = 8, T = 5, α = 0.01, β = 0.2, η = 0.1.

Liu, H. Distributed Constrained Optimization Algorithms for Drones. Drones 2025, 9, 36. https://doi.org/10.3390/drones9010036
