Article

Event-Triggered Discrete-Time ZNN Algorithm for Distributed Optimization with Time-Varying Objective Functions

by
Liu He
1,
Hui Cheng
1,* and
Yunong Zhang
2
1
School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China
2
School of Intelligent Systems Engineering, Sun Yat-sen University, Shenzhen 518107, China
*
Author to whom correspondence should be addressed.
Electronics 2025, 14(7), 1359; https://doi.org/10.3390/electronics14071359
Submission received: 23 February 2025 / Revised: 22 March 2025 / Accepted: 26 March 2025 / Published: 28 March 2025
(This article belongs to the Section Systems & Control Engineering)

Abstract

This paper investigates discrete-time distributed optimization for multi-agent systems (MASs) with time-varying objective functions. A novel fully distributed event-triggered discrete-time zeroing neural network (ET-DTZNN) algorithm is proposed to address the discrete-time distributed time-varying optimization (DTDTVO) problem without relying on periodic communication. Each agent determines the optimal solutions by relying solely on local information, such as its own objective function. Moreover, information is exchanged with neighboring agents exclusively when event-triggered conditions are satisfied, significantly reducing communication consumption. The ET-DTZNN algorithm is derived by discretizing a proposed event-triggered continuous-time ZNN (ET-CTZNN) model via the Euler formula. The ET-CTZNN model addresses the time-varying optimization problem in a semi-centralized framework under continuous-time dynamics. Theoretical analyses rigorously establish the convergence of both the ET-CTZNN model and the ET-DTZNN algorithm. Simulation results highlight the algorithm’s effectiveness, precision, and superior communication efficiency compared with traditional periodic communication-based approaches.

1. Introduction

The rapid advancement of large-scale networks and progress in big data theory have significantly fueled the exploration of distributed optimization algorithms, which have found widespread application across scientific and engineering domains [1,2,3,4]. These algorithms prove particularly valuable in contexts such as power economic dispatch [2], Nash equilibrium seeking [3], and smart grids [4]. Within distributed optimization architectures, the collective objective function of a multi-agent network is the sum of the individual agents' local sub-objectives. During operation, each agent maintains exclusive access to its designated sub-objective function while exchanging state information with neighboring agents. The global optimal solution is attained through consensus across all participating agents [1]. Recent advancements in distributed optimization have focused on various innovative approaches to enhance efficiency and effectiveness. For example, Pan et al. presented a robust optimization method for active distribution networks with variable-speed pumped storage to mitigate the risks associated with new energy sources [5]. Luo et al. introduced a consensus-based distributed optimal dispatch algorithm for integrated energy microgrids, aiming to achieve economic operation and power supply–demand balance [6]. Additionally, Tariq et al. presented an optimal control strategy for centralized thermoelectric generation systems using the barnacles mating optimization algorithm to address nonuniform temperature distribution [7]. Further, Yang et al. proposed the heavy-ball Barzilai–Borwein algorithm, a discrete-time method combining heavy-ball momentum and Barzilai–Borwein step sizes for static distributed optimization over time-varying directed graphs [8]. Lastly, a privacy-preserving distributed optimization algorithm for economic dispatch over time-varying directed networks was developed by An et al. to protect sensitive information while ensuring fully distributed and efficient operation [9].
While prior research has extensively addressed distributed optimization with static objective functions, real-world implementations frequently necessitate handling problems whose objective functions evolve dynamically through time [10,11,12,13]. Whereas the distributed static optimization problem deals with fixed objective functions, the distributed time-varying optimization (DTVO) problem involves objective functions that change over time. This dynamic nature introduces several challenges, such as the need for continuous adaptation to changing conditions, convergence to time-dependent optimal solutions, and communication efficiency among agents [12]. Recent work on the DTVO problem has introduced several innovative algorithms to address these dynamic challenges. Zheng et al. proposed a specified-time convergent algorithm tailored for distributed optimization problems with time-varying objective functions, which guarantees convergence within a predefined timeframe [14]. Wang et al. developed a distributed algorithm for time-varying quadratic optimal resource allocation, focusing on applications like multi-quadrotor hose transportation [15]. Ding et al. developed a distributed algorithm based on output regulation theory to handle both equality and inequality constraints in time-varying optimization problems [16]. Rahili et al. studied the time-varying distributed convex optimization problem for continuous-time multi-agent systems and introduced control algorithms for single-integrator and double-integrator dynamics [17].
Introduced by Zhang et al. in 2002 [18], zeroing neural network (ZNN) models and ZNN algorithms offer a solution for tackling a variety of challenging time-varying problems. Thanks to their ability to handle parallel computing, their suitability for hardware implementation, and their distributed nature, ZNN models have been extensively applied to solve various time-varying challenges since their creation [19,20,21,22]. Moreover, in recent years, researchers have introduced several innovative algorithms to address distributed time-varying problems. For example, Xiao et al. proposed a distributed fixed-time ZNN algorithm designed to achieve fixed-time and robust consensus in multi-agent systems under switching topology [22]. Liao et al. introduced a novel ZNN algorithm for real-time management of multi-vehicle cooperation, illustrating its effectiveness and robustness in achieving real-time cooperation among multiple vehicles [23]. Yu et al. presented a data-driven control algorithm based on discrete ZNN for continuum robots, ensuring real-time control and adaptability to varying conditions [24]. Jin et al. introduced a distributed approach to collaboratively generate motion within a network of redundant robotic manipulators, utilizing a noise-tolerant ZNN to handle communication interferences and computational errors [25].
The aforementioned distributed ZNN models and algorithms tackle continuous-time DTVO problems. It is essential to develop discrete-time ZNN algorithms for the practical implementation of theoretical continuous-time solutions in digital systems, ensuring accurate and efficient real-time computations while leveraging the advantages of digital hardware [26]. While there is plenty of research on centralized discrete-time ZNN algorithms [26,27], research on distributed ZNN algorithms that tackle discrete-time distributed time-varying problems is scarce. Notably, conventional discrete-time ZNN algorithms predominantly employ periodic iteration mechanisms [26,27]. If applied to distributed ZNN algorithms, this architecture inherently incurs elevated communication consumption, particularly under small sampling intervals or reduced iteration step sizes, while communication consumption constitutes a critical performance metric for assessing distributed algorithmic efficacy. In contrast, event-triggered communication paradigms regulate inter-agent data transmission through state-dependent triggering criteria, thereby achieving substantial communication efficiency gains. Therefore, some researchers have dedicated their efforts to event-triggered ZNN algorithms, with significant progress in addressing time-varying problems. For instance, Dai et al. introduced an adaptive fuzzy ZNN model integrated with an event-triggered mechanism to address the time-varying matrix inversion problem; their approach achieved finite-time convergence and demonstrated robustness [28]. Kong et al. introduced a predefined-time fuzzy ZNN algorithm with an event-triggered mechanism for kinematic planning of manipulators, ensuring predefined-time convergence and reducing computational burden [29].
Motivated by the need to address the discrete-time DTVO (DTDTVO) problem, a novel event-triggered discrete-time ZNN (ET-DTZNN) algorithm is proposed. This algorithm is built upon an event-triggered continuous-time zeroing neural network (ET-CTZNN) model, which is specifically designed to address the continuous-time distributed time-varying optimization (CTDTVO) problem within a semi-centralized framework. By discretizing the ET-CTZNN model using the Euler formula, the ET-DTZNN algorithm enables agents to compute optimal solutions based on local time-varying objective functions and to exchange information with neighbors only when specific event-triggered conditions are met. This algorithm significantly reduces communication consumption while preserving the temporal tracking capabilities inherent to ZNN algorithms. In this paper, the ET-DTZNN algorithm is designed to solve unconstrained distributed optimization problems under the assumption that each agent's objective function is strongly convex.
The structure of this paper consists of six sections. In Section 2, mathematical notations, graph theory preliminaries, and the DTDTVO problem formulation are introduced. Section 3 focuses on the ET-CTZNN model, encompassing the formulation of the CTDTVO problem in Section 3.1 and the detailed design process of the ET-CTZNN model in Section 3.2. The design process and description of the ET-DTZNN algorithm are provided in Section 4. Section 5 delivers theoretical proofs demonstrating the convergence of both the ET-CTZNN model and the ET-DTZNN algorithm. Numerical examples showcasing the effectiveness, precision, and enhanced communication efficiency of the ET-DTZNN algorithm are presented in Section 6. Finally, Section 7 summarizes the paper and outlines the key contributions of this research.
The key contributions of this paper are summarized as follows:
  • An innovative ET-CTZNN model is introduced to tackle the CTDTVO problem. The ET-CTZNN model operates within a semi-centralized framework, solving the CTDTVO problem in continuous time.
  • The ET-CTZNN model is discretized using the Euler formula to develop the fully distributed ET-DTZNN algorithm. The ET-DTZNN algorithm enables agents to compute optimal solutions based on local time-varying objective functions and exchange state information with neighbors only when specific event-triggered conditions are met, significantly reducing communication consumption.
  • Comprehensive theoretical analyses are conducted, establishing the convergence of both the ET-CTZNN model and the ET-DTZNN algorithm, ensuring their effectiveness in solving the DTVO problem.

2. Preliminaries and Problem Formulation for DTDTVO

To lay the basis for further investigation, some preliminaries concerning mathematical notations and graph theory are given in Section 2.1, and the formulation of the DTDTVO problem is given in Section 2.2.

2.1. Preliminaries

This subsection outlines key preliminaries related to mathematical notations and graph theory. The notation $\|\cdot\|_2$ denotes the 2-norm of a vector or a matrix, while $(\cdot)^{\mathrm{T}}$ signifies the transpose. Furthermore, $I_n$ represents the $n \times n$ identity matrix, $1_d = [1, \ldots, 1]^{\mathrm{T}} \in \mathbb{R}^d$ is a $d$-dimensional vector of ones, and $\otimes$ denotes the Kronecker product [30]. The Kronecker product is used to extend graph operators like the Laplacian matrix into higher-dimensional spaces, which is essential for distributed algorithms to handle multi-dimensional agent states.
The following paragraph introduces fundamental concepts from graph theory [31]. These principles establish the theoretical basis needed for modeling and analyzing MASs. The notation $G = \{V, Y, A\}$ denotes a weighted graph, where $V = \{v_1, v_2, \ldots, v_n\}$ represents the vertex set and $Y \subseteq V \times V$ denotes the edge set. The adjacency matrix $A = (a_{ij})_{n \times n} \in \mathbb{R}^{n \times n}$ is defined such that $a_{ij} > 0$ if $(v_i, v_j) \in Y$ and $a_{ij} = 0$ otherwise. The neighbor set of the $i$th vertex is represented by $N_i$. Graph $G$ is classified as connected and undirected if there exists a path connecting any pair of vertices in $G$ and if $(v_i, v_j) \in Y$ implies $(v_j, v_i) \in Y$. The degree matrix $D = \mathrm{diag}\{d_1, d_2, \ldots, d_n\} \in \mathbb{R}^{n \times n}$ is given by $d_i = \sum_{j=1}^{n} a_{ij}$. Moreover, the Laplacian matrix is defined as $L = D - A$. In distributed optimization, agents work together across a network to solve a shared problem. Consensus means all agents agree on the same solution, which is essential for success. The Laplacian matrix reflects the network's structure, helping agents align their solutions by emphasizing connections between them and reducing differences with their neighbors. This alignment aids the optimization process by coordinating local efforts.
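As a concrete illustration of these graph constructs, the following Python sketch builds the degree and Laplacian matrices and the Kronecker extension $L \otimes I_d$ for a hypothetical four-agent ring topology (the topology and all variable names are our assumptions, chosen only for illustration), and verifies that a consensus state lies in the null space of the extended Laplacian.

```python
import numpy as np

# Hypothetical 4-agent ring topology (an illustrative assumption).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))       # degree matrix D = diag{d_1, ..., d_n}
L = D - A                        # graph Laplacian L = D - A
d = 2                            # dimension of each agent's state
L_big = np.kron(L, np.eye(d))    # extended Laplacian (L kron I_d)

# A consensus state (all agents holding the same d-dimensional vector)
# lies in the null space of the extended Laplacian.
x_consensus = np.tile([3.0, -1.0], 4)
print(np.allclose(L_big @ x_consensus, 0))  # True
```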
Assumption 1.
Graph G is undirected and connected.
If G is not connected, the system may split into isolated subnetworks, preventing global consensus and leading to inconsistent optimization solutions across agents.

2.2. Problem Formulation for DTDTVO

To establish a foundation for further exploration and discussion, this subsection provides the formulation of the DTDTVO problem. The details are outlined to facilitate a clear understanding of the problem structure and its associated constraints.
Let us consider the following DTDTVO problem with the optimal solution $x_{k+1}^{*}$ to be solved by a multi-agent system consisting of $n$ agents at each computational time interval $[k\tau, (k+1)\tau) \subseteq [t_0, t_f) \subseteq [0, +\infty)$:
$$\min.\quad f(x_{k+1}, t_{k+1}) = \sum_{i=1}^{n} f_i(x_{k+1}, t_{k+1}), \qquad (1)$$
where $x_{k+1} \in \mathbb{R}^d$ is the shared optimization variable, common across all agents in the MAS, which the agents collaboratively optimize. In addition, $f_i(x_{k+1}, t_{k+1}): \mathbb{R}^d \to \mathbb{R}$ represents the local time-varying objective function of agent $i$, required to be strongly convex and second-order differentiable for all $i = 1, 2, \ldots, n$. The objective function $f_i(x_{k+1}, t_{k+1})$ is obtained by sampling $f_i(x(t), t)$ at the time instant $t = (k+1)\tau$, referred to as $t_{k+1}$. Here, $\tau \in \mathbb{R}^+$ represents the sampling interval, and $k \in \mathbb{N}$ denotes the update index.
Given that the communication graph G is connected and undirected, the Laplacian matrices associated with G facilitate consensus among agents. According to [32], DTDTVO Problem (1) can be transformed into an equivalent formulation, as described in the following lemma.
Lemma 1.
When the graph $G$ is connected and undirected, DTDTVO Problem (1) can be equivalently represented by the following formulation [32]:
$$\min.\quad \varphi(x_{k+1}, t_{k+1}) = \sum_{i=1}^{n} f_i(x_i^{k+1}, t_{k+1}), \qquad (2)$$
$$\mathrm{s.t.}\quad \mathcal{L} x_{k+1} = 0_{nd}, \qquad (3)$$
with $x_i^{k+1} \in \mathbb{R}^d$ denoting the local solution of the shared optimization variable computed by agent $i$. In addition, the collective state of the whole MAS is defined as $x_{k+1} = [x_1^{k+1}; x_2^{k+1}; \ldots; x_n^{k+1}] \in \mathbb{R}^{nd}$, and the coefficient matrix is defined as $\mathcal{L} = L \otimes I_d$.
In the real-time solving process of DTDTVO Problem (2)–(3), all computations must be executed using present and/or previous data. To address the DTDTVO problem in a fully distributed way, the data utilized are limited to local information, including the local objective function and observations of neighbors' states. For example, at time instant $t_k$, for agent $i$ with a neighboring set $N_i$, only information such as $x_i^p$, $f_i(x_i^p, t_p)$, and $x_j^p$ with $j \in N_i$ for $p \leq k$ is available. Future unknown information such as $f_i(x_i^{k+1}, t_{k+1})$ and $x_j^{k+1}$ has not been computed yet and cannot be used for computing the future unknown $x_{k+1}$ during $[t_k, t_{k+1})$. Moreover, since $\dot{x}_j^k$ is also computed during $[t_k, t_{k+1})$, it is likewise unknown to agent $i$. This paper aims to find the future unknown optimal solution $x_{k+1}$ during $[t_k, t_{k+1})$ from present and/or previous data in a fully distributed manner.

3. Event-Triggered Continuous-Time ZNN Model

In this section, a CTZNN model is developed to solve the CTDTVO problem in Section 3.1. On the basis of the CTZNN model, an ET-CTZNN model is designed by introducing an event-triggered mechanism in Section 3.2.

3.1. Continuous-Time ZNN Model

To develop the distributed discrete-time ZNN algorithm to achieve the solution of (2)–(3) effectively, a continuous-time ZNN (CTZNN) model is constructed first. Let us consider the continuous-time counterpart of (2)–(3) expressed as follows:
$$\min.\quad \varphi(x(t), t) = \sum_{i=1}^{n} f_i(x_i(t), t), \qquad (4)$$
$$\mathrm{s.t.}\quad \mathcal{L} x(t) = 0_{nd}, \qquad (5)$$
with $x(t) = [x_1(t); x_2(t); \ldots; x_n(t)] \in \mathbb{R}^{nd}$, where $x_i(t)$ denotes the local solution of the shared optimization variable computed by agent $i$ in continuous time.
Under Lemma 1, the consensus problem is redefined as Equality Constraint (5). While equality constraints are traditionally addressed using the Lagrange function in the conventional ZNN design method [26], this approach is not feasible for Equality Constraint (5) due to the inherent rank deficiency of the Laplacian matrix $L$; consequently, $\mathcal{L} = L \otimes I_d$ is also rank-deficient. Therefore, an alternative method is required to derive the optimal solution for CTDTVO Problem (4)–(5), and a penalty function is introduced to handle the consensus constraint. The penalty function is defined as
$$P(x(t)) = \frac{\sigma}{4} \sum_{i=1}^{n} \sum_{j \in N_i} \| x_i(t) - x_j(t) \|_2^2, \qquad (6)$$
with $\sigma > 0$ being a large positive parameter. Clearly, as $\mathcal{L} x(t)$ converges to $0_{nd}$, the value of $P(x(t))$ tends toward zero. We select a quadratic penalty $P(x(t))$ to smoothly enforce consensus by minimizing state differences across agents, leveraging its convexity for stable convergence. Alternatives like an absolute-value penalty could work but might slow convergence due to non-differentiability, though they would still ensure consensus. As a result, Problem (4)–(5) is reformulated into an equivalent CTDTVO problem without equality constraints, described as follows:
$$\min.\quad F(x(t), t) = \varphi(x(t), t) + P(x(t)). \qquad (7)$$
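The role of the quadratic penalty can be checked numerically: its gradient should coincide with the consensus term $\sigma \mathcal{L} x(t)$ that appears in the optimality condition derived below. The following Python sketch (assuming a hypothetical three-agent path graph with scalar agent states, so $d = 1$; all names are illustrative) compares a central-difference gradient of the penalty against $\sigma L x$.

```python
import numpy as np

sigma = 10.0
# Hypothetical 3-agent path graph (an illustrative assumption).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

def penalty(x):
    # P(x) = (sigma/4) * sum_i sum_{j in N_i} (x_i - x_j)^2; each undirected
    # edge is counted twice in the double sum, hence the factor 1/4.
    return (sigma / 4.0) * sum(A[i, j] * (x[i] - x[j]) ** 2
                               for i in range(3) for j in range(3))

x = np.array([1.0, 2.0, 4.0])
eps = 1e-6
num_grad = np.array([
    (penalty(x + eps * np.eye(3)[i]) - penalty(x - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)
])
# The gradient of the penalty equals sigma * L @ x.
print(np.allclose(num_grad, sigma * L @ x, atol=1e-4))  # True
```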
Assuming that $\partial F(x(t), t) / \partial x(t)$ exists and remains continuous, the optimal solution is determined by satisfying the following equation:
$$\frac{\partial F(x(t), t)}{\partial x(t)} = \nabla \varphi(x(t), t) + \sigma \mathcal{L} x(t) = 0, \qquad (8)$$
where
$$\nabla \varphi(x(t), t) = \frac{\partial \varphi(x(t), t)}{\partial x(t)} = [\nabla f_1(x_1(t), t), \nabla f_2(x_2(t), t), \ldots, \nabla f_n(x_n(t), t)]^{\mathrm{T}},$$
with
$$\nabla f_i(x_i(t), t) = \frac{\partial f_i(x_i(t), t)}{\partial x_i(t)}.$$
To derive the optimal solution of Equation (8), an error function is introduced as
$$\varepsilon(t) = \frac{\partial F(x(t), t)}{\partial x(t)} = \nabla \varphi(x(t), t) + \sigma \mathcal{L} x(t). \qquad (9)$$
To ensure that the error function $\varepsilon(t)$ defined in Equation (9) approaches zero, its time derivative is designed to be negative definite along $\varepsilon(t)$ [26]. As a result, the ZNN design formula is applied as follows:
$$\frac{\mathrm{d}\varepsilon(t)}{\mathrm{d}t} = -\gamma \Psi(\varepsilon(t)), \qquad (10)$$
where $\Psi(\cdot)$ denotes an array composed of activation functions, and the parameter $\gamma > 0$ regulates the convergence rate. To meet the necessary conditions, each activation function must adhere to two fundamental properties: it needs to be odd and monotonically increasing. For simplicity, this paper selects the linear function $\Psi(x) = x$ as the activation function. Substituting Equation (9) into Equation (10) yields the following model:
$$\big( H(x(t), t) + \sigma \mathcal{L} \big) \dot{x}(t) + \dot{\nabla} \varphi(x(t), t) = -\gamma \varepsilon(t), \qquad (11)$$
where
$$H(x(t), t) = \begin{bmatrix} H_1(x_1(t), t) & 0 & \cdots & 0 \\ 0 & H_2(x_2(t), t) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & H_n(x_n(t), t) \end{bmatrix},$$
with H i ( x i ( t ) , t ) R d × d representing the Hessian matrix associated with agent i, where i = 1 , 2 , , n . The specific definition of H i ( x i ( t ) , t ) is provided as follows:
$$H_i(x_i(t), t) = \begin{bmatrix} \dfrac{\partial^2 f_i(x_i(t), t)}{\partial x_{i1}^2} & \dfrac{\partial^2 f_i(x_i(t), t)}{\partial x_{i1} \partial x_{i2}} & \cdots & \dfrac{\partial^2 f_i(x_i(t), t)}{\partial x_{i1} \partial x_{id}} \\ \dfrac{\partial^2 f_i(x_i(t), t)}{\partial x_{i2} \partial x_{i1}} & \dfrac{\partial^2 f_i(x_i(t), t)}{\partial x_{i2}^2} & \cdots & \dfrac{\partial^2 f_i(x_i(t), t)}{\partial x_{i2} \partial x_{id}} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f_i(x_i(t), t)}{\partial x_{id} \partial x_{i1}} & \dfrac{\partial^2 f_i(x_i(t), t)}{\partial x_{id} \partial x_{i2}} & \cdots & \dfrac{\partial^2 f_i(x_i(t), t)}{\partial x_{id}^2} \end{bmatrix},$$
where x i j ( t ) denotes the jth element of vector x i ( t ) . In addition, the notation ˙ φ ( x ( t ) , t ) is defined as
$$\dot{\nabla} \varphi(x(t), t) = [\dot{\nabla} f_1(x_1(t), t), \dot{\nabla} f_2(x_2(t), t), \ldots, \dot{\nabla} f_n(x_n(t), t)]^{\mathrm{T}},$$
in which $\dot{\nabla} f_i(x_i(t), t) = \partial^2 f_i(x_i(t), t) / \partial x_i \partial t$.
By performing straightforward matrix computations, the distributed version of Equation (11), specifically designed for agent i, is derived as follows:
$$H_i(x_i(t), t) \dot{x}_i(t) + \sigma \sum_{j \in N_i} \big( \dot{x}_i(t) - \dot{x}_j(t) \big) = -\dot{\nabla} f_i(x_i(t), t) - \gamma \Big( \nabla f_i(x_i(t), t) + \sigma \sum_{j \in N_i} \big( x_i(t) - x_j(t) \big) \Big). \qquad (12)$$
Let us define an auxiliary term $e_i(t) = \sum_{j \in N_i} x_j(t)$. Hence, Equation (12) is reformulated as
$$\big( H_i(x_i(t), t) + \sigma d_i I \big) \dot{x}_i(t) = \sigma \dot{e}_i(t) - \dot{\nabla} f_i(x_i(t), t) - \gamma \big( \nabla f_i(x_i(t), t) + \sigma ( d_i x_i(t) - e_i(t) ) \big),$$
where $d_i$ denotes the degree of agent $i$, i.e., the $i$th diagonal entry of the degree matrix $D$, and $\dot{e}_i(t) = \sum_{j \in N_i} \dot{x}_j(t)$. The matrix $J_i(t)$ is defined as $J_i(t) = H_i(x_i(t), t) + \sigma d_i I$. Hence, one has
$$J_i(t) \dot{x}_i(t) = \sigma \dot{e}_i(t) - \dot{\nabla} f_i(x_i(t), t) - \gamma \big( \nabla f_i(x_i(t), t) + \sigma ( d_i x_i(t) - e_i(t) ) \big).$$
Therefore, the CTZNN model is finalized as
$$\dot{x}_i(t) = -\gamma J_i^{-1}(t) \big( \nabla f_i(x_i(t), t) + \sigma ( d_i x_i(t) - e_i(t) ) \big) + \sigma J_i^{-1}(t) \dot{e}_i(t) - J_i^{-1}(t) \dot{\nabla} f_i(x_i(t), t). \qquad (13)$$
Theoretically speaking, $J_i(t)$ is non-singular, since strong convexity of $f_i(x_i(t), t)$ ensures that the Hessian $H_i(x_i(t), t)$ is positive definite, and $\sigma d_i I$ (with $\sigma > 0$ and $d_i > 0$ for connected agents) adds a positive shift, maintaining invertibility. In practice, numerical errors might make $H_i(x_i(t), t)$ nearly singular, risking solution stability. If $f_i(x_i(t), t)$ fails to be strongly convex due to parameter issues, convergence could fail. Regularization, such as $J_i(t) = H_i(x_i(t), t) + \sigma d_i I + \varrho I$ with a small $\varrho > 0$, could stabilize the computation by ensuring positive definiteness, though it slightly modifies the optimum.
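To make the model concrete, the following Python sketch evaluates the right-hand side of the CTZNN model (13) for a single agent, assuming a hypothetical quadratic objective $f_i(x_i, t) = 0.5\,\|x_i - c(t)\|_2^2$ with an illustrative target trajectory $c(t) = [\sin t, \cos t]^{\mathrm{T}}$; the objective, gains, and all function names are our assumptions, and the optional $\varrho I$ regularization mentioned above is exposed as a parameter.

```python
import numpy as np

gamma, sigma, d_i = 10.0, 10.0, 2   # gains and degree of agent i (assumed values)

def c(t):
    return np.array([np.sin(t), np.cos(t)])

def grad_f(x, t):      # gradient of f_i:  x_i - c(t)
    return x - c(t)

def hess_f(x, t):      # Hessian of f_i: identity for this quadratic
    return np.eye(2)

def grad_f_dot(x, t):  # partial derivative of the gradient w.r.t. t: -c'(t)
    return -np.array([np.cos(t), -np.sin(t)])

def ctznn_rhs(x_i, e_i, e_i_dot, t, rho_reg=0.0):
    """Right-hand side of model (13): dx_i/dt for agent i."""
    J = hess_f(x_i, t) + sigma * d_i * np.eye(2) + rho_reg * np.eye(2)
    J_inv = np.linalg.inv(J)
    err = grad_f(x_i, t) + sigma * (d_i * x_i - e_i)  # local distributed error
    return -gamma * J_inv @ err + sigma * J_inv @ e_i_dot - J_inv @ grad_f_dot(x_i, t)

x_dot = ctznn_rhs(x_i=np.array([0.5, 0.5]),
                  e_i=np.array([1.0, 1.0]),   # sum of neighbors' states
                  e_i_dot=np.zeros(2), t=0.0)
print(x_dot.shape)  # (2,)
```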

3.2. Event-Triggered Continuous-Time ZNN Model

The ET-CTZNN model incorporates an event-triggered mechanism, significantly reducing computational costs compared to the CTZNN model (13). The subsequent sections detail the mechanism and the event-triggered conditions of the ET-CTZNN model.
The following reference measurement ϕ i ( t ) is introduced to construct the event-triggering condition:
$$\phi_i(t) = \varepsilon_i(t) - \varepsilon_i(t_i^a), \quad t \in [t_i^a, t_i^b),$$
where $t_i^a$ represents the triggering time of the current event while $t_i^b$ corresponds to the next event's triggering time. The interval $[t_i^a, t_i^b)$ between two consecutive triggered instants denotes the triggered time interval. The local distributed error $\varepsilon_i(t)$ serves as a metric to evaluate the system state within the time interval $[t_i^a, t_i^b)$ and is defined as $\varepsilon_i(t) = \nabla f_i(x_i(t), t) + \sigma \sum_{j \in N_i} (x_i(t) - x_j(t))$, with $\varepsilon(t) = [\varepsilon_1(t); \varepsilon_2(t); \ldots; \varepsilon_n(t)]$.
The event-triggered condition is designed on the basis of the relation between the distributed system error $\|\varepsilon_i(t)\|_2$ and the reference value $\|\phi_i(t)\|_2$. When $\|\phi_i(t)\|_2$ exceeds the fraction $\rho$ of $\|\varepsilon_i(t)\|_2$, the event is triggered, and the next triggering time $t_i^b$ is reached. Thus, the event-triggered condition for agent $i$ is given as follows:
$$t_i^b = \inf \big\{ t > t_i^a \;\big|\; \|\phi_i(t)\|_2 - \rho \|\varepsilon_i(t)\|_2 > 0 \big\}, \qquad (14)$$
where the parameter $\rho$ satisfies $0 < \rho < 1$. Therefore, for agent $i$, when $t \in [t_i^a, t_i^b)$, the ET-CTZNN model is finalized as
$$\dot{x}_i(t) = -\gamma J_i^{-1}(t) \big( \nabla f_i(x_i(t_i^a), t_i^a) + \sigma ( d_i x_i(t_i^a) - e_i(t_i^a) ) \big) + \sigma J_i^{-1}(t) \dot{e}_i(t) - J_i^{-1}(t) \dot{\nabla} f_i(x_i(t), t). \qquad (15)$$
The event-triggered mechanism’s use of frozen values between triggering events may lead to larger errors when the local functions f i ( x i ( t ) , t ) vary rapidly over time, as the outdated information may not capture the current dynamics, though our theoretical analysis under controlled conditions (e.g., with bounded noise) in Section 5.2.1 indicates acceptable performance.
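The event-triggered check itself reduces to a norm comparison between the drift since the last event and the current error. A minimal Python sketch of Condition (14) follows; the function and variable names are our assumptions, not from the paper.

```python
import numpy as np

def event_triggered(eps_now, eps_frozen, rho=0.6):
    """Condition (14): fire when ||phi_i||_2 - rho * ||eps_i||_2 > 0,
    where phi_i = eps_i(t) - eps_i(t_i^a) is the drift since the last event."""
    phi = eps_now - eps_frozen
    return np.linalg.norm(phi) - rho * np.linalg.norm(eps_now) > 0

eps_frozen = np.array([1.0, 0.0])  # error broadcast at the last triggering time
print(event_triggered(np.array([1.05, 0.0]), eps_frozen))  # small drift: False
print(event_triggered(np.array([0.2, 0.0]), eps_frozen))   # large drift: True
```

A smaller `rho` makes the condition fire more often (more communication, tighter tracking), matching the parameter discussion in Section 4.2.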

4. Event-Triggered Discrete-Time ZNN Algorithm

For convenient implementation on digital hardware and the development of numerical algorithms, it is important to discretize the proposed ET-CTZNN model into a discrete-time algorithm. In Section 4.1, we first propose a semi-centralized ET-DTZNN model to solve the DTDTVO problem. Then, by introducing the Euler formula, a fully distributed ET-DTZNN algorithm is designed in Section 4.2. The proposed ET-DTZNN algorithm addresses the DTDTVO problem through a fully distributed approach, achieving a substantial reduction in communication overhead among agents.

4.1. Semi-Centralized ET-DTZNN Model

Following the CTZNN model (13), the discrete-time ZNN (DTZNN) model is given as
$$\dot{x}_i^k = -\gamma (J_i^k)^{-1} \big( \nabla f_i(x_i^k, t_k) + \sigma ( d_i x_i^k - e_i^k ) \big) + \sigma (J_i^k)^{-1} \dot{e}_i^k - (J_i^k)^{-1} \dot{\nabla} f_i(x_i^k, t_k), \qquad (16)$$
where $J_i^k = J_i(t_k)$, $e_i^k = e_i(t_k)$, and $\dot{e}_i^k = \dot{e}_i(t_k)$. Furthermore, based on the ET-CTZNN model (15) and the DTZNN model (16), an ET-DTZNN model is constructed. The event-triggered times of agent $i$ in discrete time are denoted as $t_i^0 = 0 < t_i^1 < t_i^2 < \cdots < t_i^s < \cdots$. Under the discrete-time event-triggered mechanism, when $t_k \in [t_i^s, t_i^{s+1})$, one has $\dot{x}_i^k = \dot{x}_i^s$. Therefore, on the basis of the ET-CTZNN model (15), for agent $i$, the function that determines the triggered event in the ET-DTZNN model is defined as follows:
$$t_i^{s+1} = \inf \big\{ t_k > t_i^s \;\big|\; \|\phi_i^k\|_2 - \rho \|\varepsilon_i^k\|_2 > 0 \big\}, \qquad (17)$$
where $\varepsilon_i^k = \nabla f_i(x_i^k, t_k) + \sigma \sum_{j \in N_i} (x_i^k - x_j^k)$ and $\phi_i^k = \varepsilon_i^k - \varepsilon_i^s$. For agent $i$, when $t_k \in [t_i^s, t_i^{s+1})$, the ET-DTZNN model is finalized as
$$\dot{x}_i^k = -\gamma (J_i^k)^{-1} \big( \nabla f_i(x_i^s, t_i^s) + \sigma ( d_i x_i^s - e_i^s ) \big) + \sigma (J_i^k)^{-1} \dot{e}_i^k - (J_i^k)^{-1} \dot{\nabla} f_i(x_i^k, t_k). \qquad (18)$$
Upon examining the ET-CTZNN model (15) and the ET-DTZNN model (18), one observes that $\dot{e}_i(t)$ and $\dot{e}_i^k$ are required for agent $i$ to compute $\dot{x}_i(t)$ and $x_i^{k+1}$, respectively. Consequently, if an analytical solution is required, before the first event-triggered time instant, instead of each agent independently computing $\dot{x}_i(t)$ and $x_i^{k+1}$, the ET-CTZNN model (15) and the ET-DTZNN model (18) can only compute $\dot{x}(t)$ and $x^{k+1}$ in a semi-centralized manner. In other words, while information exchange between agents is distributed, the ET-CTZNN model (15) and the ET-DTZNN model (18) are ultimately solved in a centralized way before the first event-triggered time instant. Furthermore, in real-world applications, directly knowing or obtaining the time derivatives $\dot{x}_i(t)$ and $\dot{x}_i^k$ may be difficult. Therefore, it is necessary to develop a fully distributed ET-DTZNN algorithm that addresses the DTDTVO problem through a fully distributed approach.

4.2. Fully Distributed ET-DTZNN Algorithm

In traditional distributed optimization algorithms, agents achieve consensus by exchanging x i k with their neighbors. However, as an alternative, this paper proposes that each agent retains a short-term memory of its neighbors’ states, eliminating the need for direct exchanges of x i k . Using the Euler formula, agents can effectively estimate x ˙ j k for j N i based on the stored information, enabling an accurate approximation of e ˙ i k .
The Euler formula referenced in this paper is commonly used for approximating derivatives in discrete systems [26]:
$$\dot{x}_i^k = \dot{\tilde{x}}_i^k + O(\tau) = \frac{x_i^k - x_i^{k-1}}{\tau} + O(\tau). \qquad (19)$$
Each agent stores just one prior state $x_j^{k-1}$ per neighbor, which is sufficient for the Euler approximation. The truncation error associated with this Euler formula is of order $O(\tau)$. As a result, the approximation of $\dot{e}_i^k$ can be expressed as follows:
$$\dot{e}_i^k = \dot{\tilde{e}}_i^k + O(\tau) = \sum_{j \in N_i} \dot{\tilde{x}}_j^k + O(\tau). \qquad (20)$$
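Equations (19) and (20) can be implemented with a one-step memory per neighbor: each stored previous state is differenced against the current one and the results are summed. A minimal Python sketch, with assumed names:

```python
import numpy as np

tau = 0.01  # sampling interval (assumed value)

def estimate_e_dot(neighbor_states_now, neighbor_states_prev, tau):
    """Backward-difference estimate of e_i's derivative per (19)-(20):
    sum over neighbors of (x_j^k - x_j^{k-1}) / tau."""
    return sum((x_now - x_prev) / tau
               for x_now, x_prev in zip(neighbor_states_now, neighbor_states_prev))

now  = [np.array([1.00, 2.00]), np.array([0.50, 0.50])]
prev = [np.array([0.99, 2.01]), np.array([0.50, 0.48])]
e_dot_tilde = estimate_e_dot(now, prev, tau)
print(np.round(e_dot_tilde, 6))  # [1. 1.]
```

The estimate carries the $O(\tau)$ truncation error noted above, so a smaller sampling interval improves the approximation at the cost of more iterations.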
By incorporating the two-step time-discretization (TD) formula [26]
$$\dot{\xi}^k = \frac{\xi^{k+1} - \xi^k}{\tau} + O(\tau), \qquad (21)$$
the fully distributed ET-DTZNN algorithm, designed to solve the DTDTVO problem, is derived as follows:
$$x_i^{k+1} \approx x_i^k - h (J_i^k)^{-1} \big( \nabla f_i(x_i^s, t_i^s) + \sigma ( d_i x_i^s - e_i^s ) \big) + \tau \sigma (J_i^k)^{-1} \dot{\tilde{e}}_i^k - \tau (J_i^k)^{-1} \dot{\nabla} f_i(x_i^k, t_k), \qquad (22)$$
where the convergence rate is regulated by the parameter $h = \tau \gamma$. Distinct from the ET-CTZNN model (15) and the ET-DTZNN model (18), the newly designed ET-DTZNN algorithm (22) tackles the DTDTVO problem in a fully distributed fashion. Importantly, the communication among agents is restricted to sharing their respective solutions $x_i^k$, ensuring that essential local information, such as each agent's objective function, stays completely private and hidden from neighboring agents.
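A single update of the ET-DTZNN algorithm (22) can be sketched as follows. The helper signature and all names are illustrative assumptions: the frozen quantities (superscript $s$, from the last triggering event) and the current Hessian and gradient-derivative evaluations are passed in explicitly, as they would be supplied by the agent's own objective.

```python
import numpy as np

def et_dtznn_step(x_k, x_s, e_s, e_dot_tilde, grad_f_s, hess_k, grad_f_dot_k,
                  h, tau, sigma, d_i):
    """One iteration of (22) for agent i (a sketch, names assumed)."""
    n = len(x_k)
    J = hess_k + sigma * d_i * np.eye(n)          # J_i^k = H_i^k + sigma*d_i*I
    J_inv = np.linalg.inv(J)
    err_s = grad_f_s + sigma * (d_i * x_s - e_s)  # frozen distributed error term
    return (x_k - h * J_inv @ err_s
            + tau * sigma * J_inv @ e_dot_tilde
            - tau * J_inv @ grad_f_dot_k)

# Toy call with d = 2, degree d_i = 2, and placeholder objective data.
x_next = et_dtznn_step(x_k=np.array([0.4, 0.6]), x_s=np.array([0.4, 0.6]),
                       e_s=np.array([0.8, 1.2]), e_dot_tilde=np.zeros(2),
                       grad_f_s=np.array([0.1, -0.1]), hess_k=np.eye(2),
                       grad_f_dot_k=np.zeros(2),
                       h=0.1, tau=0.01, sigma=10.0, d_i=2)
print(x_next.shape)  # (2,)
```

Note that only $x_i^k$ would ever be transmitted between agents; `grad_f_s`, `hess_k`, and `grad_f_dot_k` stay local, matching the privacy property described above.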
The performance of the ET-DTZNN algorithm depends on the parameters $\rho$, $\gamma$, $\tau$, and $\sigma$, which influence convergence speed, accuracy, communication efficiency, and consensus quality. The event-triggering threshold $\rho$ controls the sensitivity of the triggering condition: a smaller $\rho$ increases the frequency of updates, enhancing responsiveness and potentially speeding up convergence by reducing approximation errors, but at the cost of higher communication overhead, while a larger $\rho$ reduces communication, possibly slowing convergence. The convergence parameter $\gamma$ accelerates convergence at larger values by increasing the speed of error correction, but if too large, it may lead to overshooting and instability due to overly aggressive updates. The sampling gap $\tau$ affects accuracy: a smaller $\tau$ improves accuracy by closely approximating the continuous-time dynamics, while a larger $\tau$ may degrade performance and risk instability if it exceeds stability limits. The penalty parameter $\sigma$ enforces consensus, with larger values ensuring tighter agreement among agents but potentially increasing sensitivity to noise if too large.
The value of parameter ρ should be chosen according to the desired balance between communication efficiency and convergence speed; a moderate value between 0.5 and 0.7 often provides a good trade-off, ensuring sufficient updates without excessive communication. The value of parameter τ should be chosen according to the balance between accuracy and computational efficiency, ensuring it remains within stability limits; for problems requiring high accuracy, a smaller τ is preferable, while less stiff problems may tolerate a larger τ . The value of parameter γ should be chosen according to the desired convergence speed, starting with a moderate value and adjusting based on τ and the problem’s stiffness to avoid overshooting. The value of parameter σ should be chosen according to the need for consensus strength, typically in the range of 10 to 100, depending on the network size and expected noise levels, to enforce agreement without over-penalizing.

5. Theoretical Analyses

This section establishes and proves the convergence theorem for the ET-CTZNN model (15) in Section 5.1. Section 5.2 provides a theoretical analysis of the model’s robustness and its ability to exclude Zeno behavior. In addition, Section 5.3 provides convergence theorems and investigates the maximum steady-state residual error (MSSRE) associated with the ET-DTZNN algorithm (22).

5.1. Convergence Theorem for ET-CTZNN Model

Based on the analysis outlined in Section 3, CTDTVO Problem (4)–(5) is reformulated as the time-varying Equation (8). Consequently, solving the CTDTVO problem equates to resolving the matrix Equation (8). The convergence theorem for ET-CTZNN Model (15) is presented as Theorem 1.
Theorem 1.
Consider CTDTVO Problem (4)–(5) under the following assumptions: (i) each function $f_i(x_i(t), t)$ is at least twice continuously differentiable with respect to $x_i$ and continuous with respect to $t$; (ii) the communication graph $\mathcal{G}$ is connected and undirected; (iii) the parameters satisfy $\gamma > 0$ and $\rho \in (0, 1)$. Starting from any initial state $x(0) \in \mathbb{R}^{nd}$, the residual error $\|\varepsilon(t)\|_2$ of the entire multi-agent system converges to zero, i.e., $\lim_{t \to +\infty} \|\varepsilon(t)\|_2 = 0$.
Proof of Theorem 1.
For agent i, a Lyapunov candidate function is defined as
$$V_i(t) = \frac{\|\varepsilon_i(t)\|_2^2}{2} = \frac{\varepsilon_i^T(t)\,\varepsilon_i(t)}{2} \ge 0.$$
Therefore, the time derivative of V i ( t ) can be expressed as follows:
$$\dot{V}_i(t) = \varepsilon_i^T(t)\,\dot{\varepsilon}_i(t) = -\gamma\,\varepsilon_i^T(t)\,\varepsilon_i(t_i^a) = -\gamma\,\varepsilon_i^T(t)\,\varepsilon_i(t) - \gamma\,\varepsilon_i^T(t)\bigl(\varepsilon_i(t_i^a) - \varepsilon_i(t)\bigr) = -\gamma\|\varepsilon_i(t)\|_2^2 - \gamma\,\varepsilon_i^T(t)\,\phi_i(t).$$
According to the event-triggered Condition (14) with the parameter ρ satisfying $0 < \rho < 1$, it follows that $\|\phi_i(t)\|_2 \le \rho\|\varepsilon_i(t)\|_2 < \|\varepsilon_i(t)\|_2$ for all $t \in [t_i^a, t_i^b)$. Consequently, the inequality $\dot{V}_i(t) < 0$ holds for agent i whenever $\varepsilon_i(t) \ne 0$. Based on the Lyapunov stability theorem [33], this ensures that $\|\varepsilon_i(t)\|_2$ converges to zero, i.e., $\lim_{t \to +\infty} \|\varepsilon_i(t)\|_2 = 0$ for $i \in \{1, 2, \ldots, n\}$. Since $\varepsilon(t) = [\varepsilon_1(t); \varepsilon_2(t); \ldots; \varepsilon_n(t)]$, it follows that $\lim_{t \to +\infty} \|\varepsilon(t)\|_2 = 0$. Furthermore, for all agents, $x_i(t)$ globally converges to the optimal solution $x_i^*(t)$. Therefore, the entire system's state vector $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]$ also converges to $x^*(t)$. Thus, the proof is completed. □

5.2. Robustness and Zeno Behavior Analysis for ET-CTZNN Model

In this subsection, we analyze the robustness and Zeno behavior of the ET-CTZNN model. The robustness analysis evaluates the model’s performance under bounded noise for each agent, while the Zeno behavior analysis ensures the absence of an infinite number of triggering events in a finite time, enhancing the practical feasibility of the model.

5.2.1. Analysis on Robustness for ET-CTZNN Model

We begin by introducing a bounded noise term to ET-CTZNN Model (15) to assess its robustness. For each agent i, we consider the state derivative perturbed by an additive noise term $\delta_i(t)$, defined as
$$\dot{\hat{x}}_i(t) = \dot{x}_i(t) + \delta_i(t),$$
where $\delta_i(t) \in \mathbb{R}^d$ is the noise affecting $\dot{x}_i(t)$ of agent i, bounded by $\|\delta_i(t)\|_2 \le \Delta$ for some constant $\Delta > 0$. This noise may originate from measurement inaccuracies, communication delays, or computational errors.
To analyze the impact of bounded noise δ i ( t ) , we define the Lyapunov function for agent i as
$$V_i(t) = \frac{1}{2}\|\varepsilon_i(t)\|_2^2.$$
Hence, the time derivative of $V_i(t)$ under the noisy environment is derived as
$$\dot{V}_i(t) = \varepsilon_i^T(t)\,\dot{\varepsilon}_i(t) = \varepsilon_i^T(t)\Bigl(J_i(t)\,\dot{\hat{x}}_i(t) - \sigma\dot{e}_i(t) + \frac{\partial \nabla f_i(x_i(t), t)}{\partial t}\Bigr) = \varepsilon_i^T(t)\Bigl(J_i(t)\,\dot{x}_i(t) - \sigma\dot{e}_i(t) + \frac{\partial \nabla f_i(x_i(t), t)}{\partial t}\Bigr) + \varepsilon_i^T(t)\,J_i(t)\,\delta_i(t) = \varepsilon_i^T(t)\bigl(-\gamma\,\varepsilon_i(t_i^a)\bigr) + \varepsilon_i^T(t)\,J_i(t)\,\delta_i(t) = -\gamma\,\varepsilon_i^T(t)\bigl(\varepsilon_i(t) + \phi_i(t)\bigr) + \varepsilon_i^T(t)\,J_i(t)\,\delta_i(t) \le -\gamma\|\varepsilon_i(t)\|_2^2 + \gamma\|\varepsilon_i(t)\|_2\|\phi_i(t)\|_2 + \|\varepsilon_i(t)\|_2\,\|J_i(t)\|_F\,\|\delta_i(t)\|_2.$$
The Hessian matrix $H_i(x_i(t), t)$ is bounded and $J_i(t) = H_i(x_i(t), t) + \sigma d_i I$ with σ and $d_i$ being positive constants. Therefore, matrix $J_i(t)$ is also bounded, and we assume that $\|J_i(t)\|_F \le B_J$, where $B_J$ is a positive constant. Hence, based on the event-triggering Condition (14), we obtain
$$\dot{V}_i(t) \le -\gamma(1-\rho)\|\varepsilon_i(t)\|_2^2 + B_J\Delta\|\varepsilon_i(t)\|_2, \quad \text{i.e.,} \quad \dot{V}_i(t) \le -\|\varepsilon_i(t)\|_2\bigl(\gamma(1-\rho)\|\varepsilon_i(t)\|_2 - B_J\Delta\bigr).$$
Subsequently, based on the sign of $\gamma(1-\rho)\|\varepsilon_i(t)\|_2 - B_J\Delta$, the analysis is divided into the following three cases.
  • Case 1: If $\|\varepsilon_i(t)\|_2 > B_J\Delta/(\gamma(1-\rho))$, then we obtain $\dot{V}_i(t) < 0$, which implies that $\|\varepsilon_i(t)\|_2$ tends to decrease and converges towards $B_J\Delta/(\gamma(1-\rho))$. That is, $\lim_{t\to\infty}\|\varepsilon_i(t)\|_2 = B_J\Delta/(\gamma(1-\rho))$.
  • Case 2: If $\|\varepsilon_i(t)\|_2 = B_J\Delta/(\gamma(1-\rho))$, then we obtain $\dot{V}_i(t) \le 0$. (i) When $\dot{V}_i(t) < 0$, $V_i(t)$ is a decreasing function, which implies that $\|\varepsilon_i(t)\|_2$ becomes less than $B_J\Delta/(\gamma(1-\rho))$, and the situation turns into Case 3. (ii) When $\dot{V}_i(t) = 0$, $\|\varepsilon_i(t)\|_2$ remains on the surface of a sphere with a radius of $B_J\Delta/(\gamma(1-\rho))$. That is, $\lim_{t\to\infty}\|\varepsilon_i(t)\|_2 = B_J\Delta/(\gamma(1-\rho))$.
  • Case 3: If $\|\varepsilon_i(t)\|_2 < B_J\Delta/(\gamma(1-\rho))$, then the upper bound $-\|\varepsilon_i(t)\|_2\bigl(\gamma(1-\rho)\|\varepsilon_i(t)\|_2 - B_J\Delta\bigr) > 0$, so the sign of $\dot{V}_i(t)$ is not determined. (i) When $\dot{V}_i(t) < 0$, $V_i(t)$ is a decreasing function, which indicates that $\lim_{t\to\infty}\|\varepsilon_i(t)\|_2 < B_J\Delta/(\gamma(1-\rho))$. (ii) When $\dot{V}_i(t) = 0$, $\|\varepsilon_i(t)\|_2$ remains on the surface of a sphere with a radius of less than $B_J\Delta/(\gamma(1-\rho))$. That is, $\lim_{t\to\infty}\|\varepsilon_i(t)\|_2 < B_J\Delta/(\gamma(1-\rho))$. (iii) When $\dot{V}_i(t) > 0$, $V_i(t)$ is an increasing function, which indicates that the residual error $\|\varepsilon_i(t)\|_2$ tends to converge towards $B_J\Delta/(\gamma(1-\rho))$ as time increases. That is, $\lim_{t\to\infty}\|\varepsilon_i(t)\|_2 = B_J\Delta/(\gamma(1-\rho))$.
In summary, the upper bound of the residual error $\|\varepsilon_i(t)\|_2$ under bounded noise is
$$\lim_{t\to\infty}\|\varepsilon_i(t)\|_2 \le B_J\Delta/(\gamma(1-\rho)),$$
indicating that, with bounded noise, the residual error converges to a value proportional to the noise bound Δ and inversely proportional to the product of the convergence parameter γ and the factor $(1-\rho)$ set by the event-triggering threshold.
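The bound above can be probed numerically. The following sketch simulates a scalar surrogate of the perturbed residual dynamics $\dot{\varepsilon}_i = -\gamma\varepsilon_i(t_i^a) + J_i\delta_i$ with $J_i = 1$ (so $B_J = 1$); the parameter values and the sinusoidal disturbance are assumptions chosen for illustration, not taken from the paper's experiments.

```python
import math

# Illustrative check of the steady-state bound B_J*Delta/(gamma*(1 - rho)) on a
# scalar surrogate of the perturbed residual dynamics (B_J = 1 here); parameter
# values and the disturbance signal are assumptions for this demo.

gamma, rho, Delta, tau = 10.0, 0.5, 0.2, 1e-4
eps, eps_held = 1.0, 1.0          # residual and its value held since the last event
for k in range(200000):           # 20 s of simulated time
    phi = eps_held - eps
    if abs(phi) > rho * abs(eps):     # scalar form of event-triggering Condition (14)
        eps_held = eps                # trigger: refresh the held residual
    d = Delta * math.sin(0.01 * k)    # bounded disturbance, |d| <= Delta
    eps += tau * (-gamma * eps_held + d)

bound = Delta / (gamma * (1 - rho))   # theoretical residual bound (= 0.04 here)
print(abs(eps) <= 1.1 * bound)        # prints True
```

The final residual settles well inside the theoretical bound of 0.04, consistent with the analysis.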

5.2.2. Analysis on Zeno Behavior for ET-CTZNN Model

Zeno behavior occurs if an infinite number of triggering events happen in a finite time, undermining the practical utility of the event-triggered mechanism. In this subsection, we exclude Zeno behavior for the ET-CTZNN model.
If the inter-event time interval $t_i^b - t_i^a$ admits a strictly positive lower bound, then Zeno behavior is excluded. Since the event-triggering condition is based on the relation between $\|\phi_i(t)\|_2$ and $\|\varepsilon_i(t)\|_2$, we investigate the time derivative of $\|\phi_i(t)\|_2$.
For $t \in [t_i^a, t_i^b)$, one has $\dot{\phi}_i(t) = -\dot{\varepsilon}_i(t)$. Hence,
$$\frac{d\|\phi_i(t)\|_2}{dt} \le \|\dot{\phi}_i(t)\|_2 = \|\dot{\varepsilon}_i(t)\|_2 = \gamma\|\varepsilon_i(t_i^a)\|_2.$$
Since $\|\phi_i(t_i^a)\|_2 = 0$, one further obtains
$$\|\phi_i(t_i^b)\|_2 \le \int_{t_i^a}^{t_i^b} \|\dot{\phi}_i(s)\|_2\,ds \le \gamma\|\varepsilon_i(t_i^a)\|_2\,(t_i^b - t_i^a).$$
Additionally, considering the event-triggering Condition (14), for $t \in [t_i^a, t_i^b)$, one obtains $\|\varepsilon_i(t)\|_2 \ge \|\varepsilon_i(t_i^a)\|_2 - \|\varepsilon_i(t) - \varepsilon_i(t_i^a)\|_2 = \|\varepsilon_i(t_i^a)\|_2 - \|\phi_i(t)\|_2 \ge \|\varepsilon_i(t_i^a)\|_2 - \rho\|\varepsilon_i(t)\|_2$. Therefore, it is determined that $\|\varepsilon_i(t)\|_2 \ge \|\varepsilon_i(t_i^a)\|_2/(1+\rho)$. For the next event-triggering time instant $t_i^b$, the following inequality holds: $\|\phi_i(t_i^b)\|_2 \ge \rho\|\varepsilon_i(t_i^b)\|_2 \ge \rho\|\varepsilon_i(t_i^a)\|_2/(1+\rho)$. Hence, it is determined that
$$\gamma\|\varepsilon_i(t_i^a)\|_2\,(t_i^b - t_i^a) \ge \|\phi_i(t_i^b)\|_2 \ge \frac{\rho\|\varepsilon_i(t_i^a)\|_2}{1+\rho}.$$
The lower bound of the inter-event time interval can thus be expressed as
$$t_i^b - t_i^a \ge \frac{\rho\|\varepsilon_i(t_i^a)\|_2}{\gamma(1+\rho)\|\varepsilon_i(t_i^a)\|_2} = \frac{\rho}{\gamma(1+\rho)} > 0.$$
This strictly positive lower bound demonstrates the nonexistence of Zeno behavior.
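The lower bound $\rho/(\gamma(1+\rho))$ can be checked in simulation. The sketch below integrates the noise-free scalar residual dynamics $\dot{\varepsilon} = -\gamma\varepsilon(t_i^a)$ with a small step and records the gaps between consecutive triggering events; all numerical values are illustrative assumptions.

```python
# Illustrative check of the inter-event lower bound rho/(gamma*(1 + rho)) on the
# noise-free scalar residual dynamics eps' = -gamma*eps(t_a); values are assumptions.

gamma, rho, tau = 10.0, 0.5, 1e-5
eps, eps_held = 1.0, 1.0
last_event, gaps = 0.0, []
for k in range(1, 300001):            # 3 s of simulated time
    t = k * tau
    eps += tau * (-gamma * eps_held)  # eps(t_a) is held between events
    phi = eps_held - eps
    if abs(phi) > rho * abs(eps):     # event-triggering Condition (14), scalar form
        gaps.append(t - last_event)
        last_event, eps_held = t, eps

lower = rho / (gamma * (1 + rho))     # = 1/30 s here
print(len(gaps) > 0 and min(gaps) >= lower)   # prints True: no Zeno behavior
```

Every recorded gap is at least 1/30 s, matching the derived bound.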

5.3. Convergence Theorems of ET-DTZNN Algorithm

This subsection provides a comprehensive theoretical analysis of ET-DTZNN Algorithm (22), which is regarded as a linear two-step algorithm. To aid in understanding, several relevant lemmas are introduced below [26].
Lemma 2.
A linear N-step method is formulated as $\sum_{j=0}^{N} \omega_j \alpha_{k+j} = \tau \sum_{j=0}^{N} \upsilon_j \beta_{k+j}$. The characteristic polynomials of this method are $z(\iota) = \sum_{j=0}^{N} \omega_j \iota^j$ and $\zeta(\iota) = \sum_{j=0}^{N} \upsilon_j \iota^j$. For zero-stability, all complex roots of $z(\iota)$ must satisfy $|\iota| \le 1$; furthermore, any root with $|\iota| = 1$ must be simple.
Lemma 3.
A linear N-step method is defined as $\sum_{j=0}^{N} \omega_j \alpha_{k+j} = \tau \sum_{j=0}^{N} \upsilon_j \beta_{k+j}$. The order of the truncation error can be evaluated by calculating $w_0 = \sum_{j=0}^{N} \omega_j$ and, for $i \ge 1$, $w_i = \sum_{j=0}^{N} (j^i/i!)\,\omega_j - \sum_{j=0}^{N} (j^{i-1}/(i-1)!)\,\upsilon_j$. If $w_q \ne 0$ while $w_i = 0$ for all $i < q$, the N-step method exhibits a truncation error of $O(\tau^q)$.
Lemma 4.
A linear N-step method is expressed as $\sum_{j=0}^{N} \omega_j \alpha_{k+j} = \tau \sum_{j=0}^{N} \upsilon_j \beta_{k+j}$. Its characteristic polynomials are represented as $z(\iota) = \sum_{j=0}^{N} \omega_j \iota^j$ and $\zeta(\iota) = \sum_{j=0}^{N} \upsilon_j \iota^j$. To ensure consistency, the conditions $z(1) = 0$ and $\dot{z}(1) = \zeta(1)$ must hold, where $\dot{z}$ denotes the derivative of $z(\iota)$.
Lemma 5.
A linear N-step method is expressed as $\sum_{j=0}^{N} \omega_j \alpha_{k+j} = \tau \sum_{j=0}^{N} \upsilon_j \beta_{k+j}$. The convergence of the N-step method is ensured if, and only if, the conditions of Lemmas 2 and 4 are satisfied; that is, an N-step method is convergent if, and only if, it is both zero-stable and consistent.
Building upon the established lemmas, the convergence theorem of the ET-DTZNN algorithm (22) is proven. These lemmas provide the theoretical foundation to demonstrate the algorithm’s convergence properties rigorously.
Theorem 2.
We consider the CTDTVO problem (4)–(5). We assume that, for every agent i, its local objective function $f_i(x_i(t), t)$ possesses continuous second-order derivatives. With the design parameter $\gamma > 0$ and a sufficiently small sampling gap τ, ET-DTZNN Algorithm (22) is convergent, with a truncation error of $O(\tau^2)$.
Proof of Theorem 2.
In accordance with Lemma 5, ET-DTZNN Algorithm (22) achieves convergence provided it satisfies both zero-stability and consistency. Additionally, as highlighted in Lemma 2, the algorithm’s first characteristic polynomial is defined as follows:
$$z(\iota) = \iota^2 - \iota.$$
The roots of the first characteristic polynomial are $\iota = 0$ and $\iota = 1$. It follows that all roots satisfy $|\iota| \le 1$, and the only root with $|\iota| = 1$ is simple. Hence, ET-DTZNN Algorithm (22) is zero-stable.
On the basis of (21) and (22), the ET-DTZNN algorithm (22) is described by the following equation:
$$x_i^{k+1} = x_i^k + \tau\dot{x}_i^k + \sigma O(\tau^2) + \tau O(\tau) = x_i^k + \tau\dot{x}_i^k + O(\tau^2).$$
According to Lemma 3, $w_i = \sum_{j=0}^{N} (j^i/i!)\,\omega_j - \sum_{j=0}^{N} (j^{i-1}/(i-1)!)\,\upsilon_j$. Therefore, $w_2$ is calculated as
$$w_2 = \frac{2^2}{2!} - \frac{1^2}{2!} - \frac{1^1}{1!} = \frac{1}{2} \ne 0.$$
We compute $w_i = 0$ for $i = 0, 1$. Therefore, based on Lemma 3, ET-DTZNN Algorithm (22) possesses a truncation error of order $O(\tau^2)$. Additionally, the second characteristic polynomial of ET-DTZNN Algorithm (22) is $\zeta(\iota) = \iota$. Consequently, it follows that $z(1) = 0$ and $\dot{z}(1) = \zeta(1) = 1$. Thus, according to Lemma 4, ET-DTZNN Algorithm (22) is consistent, with an order of $O(\tau^2)$.
In alignment with Lemma 5, ET-DTZNN Algorithm (22) achieves convergence as it fulfills the criteria of zero-stability alongside consistency of order O ( τ 2 ) . Thus, the proof is concluded. □
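The coefficient computation in the proof can be reproduced mechanically. The sketch below evaluates the Lemma 3 coefficients $w_i$ for the two-step form of the Euler-discretized update, assuming the coefficients $\omega_1 = -1$, $\omega_2 = 1$, and $\upsilon_1 = 1$.

```python
from math import factorial

# Reproduces the Lemma 3 computation for the two-step form of the
# Euler-discretized update, assuming omega_1 = -1, omega_2 = 1, upsilon_1 = 1.

omega = {1: -1.0, 2: 1.0}
upsilon = {1: 1.0}

def w(i):
    """Lemma 3 coefficient w_i of the linear N-step method."""
    if i == 0:
        return sum(omega.values())
    s = sum(j ** i / factorial(i) * c for j, c in omega.items())
    s -= sum(j ** (i - 1) / factorial(i - 1) * c for j, c in upsilon.items())
    return s

print([w(0), w(1), w(2)])   # -> [0.0, 0.0, 0.5]: first nonzero at i = 2, so O(tau^2)
```

The first nonzero coefficient is $w_2 = 1/2$, confirming the $O(\tau^2)$ truncation error.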
Theorem 3.
We consider CTDTVO Problem (4)–(5). We assume that, for every agent i, its local objective function $f_i(x_i(t), t)$ possesses continuous second-order derivatives. With the design parameter $\gamma > 0$ and a sufficiently small sampling gap τ, the maximum steady-state residual error (MSSRE) produced by ET-DTZNN Algorithm (22), $\lim_{k \to +\infty} \sup \|\varepsilon_i^k\|_2$, is of order $O(\tau^2)$.
Proof of Theorem 3.
We let $x_i^{k+1*}$ represent the actual solution to the problem at $t_{k+1}$, i.e., $\varepsilon_i^{k+1} = 0$ when $x_i^{k+1} = x_i^{k+1*}$. According to Theorem 2, ET-DTZNN Algorithm (22) exhibits a truncation error of $O(\tau^2)$, implying that $x_i^{k+1} = x_i^{k+1*} + O(\tau^2)$. By employing the Taylor expansion, it follows that
$$\varepsilon_i^{k+1}(x_i^{k+1}, t_{k+1}) = \varepsilon_i^{k+1}\bigl(x_i^{k+1*} + O(\tau^2),\, t_{k+1}\bigr) = \varepsilon_i^{k+1}(x_i^{k+1*}, t_{k+1}) + \frac{\partial \varepsilon_i^{k+1}(x_i^{k+1*}, t_{k+1})}{\partial x_i^{k+1*}}\,O(\tau^2) + O(\tau^4) \approx \frac{\partial \varepsilon_i^{k+1}(x_i^{k+1*}, t_{k+1})}{\partial x_i^{k+1*}}\,O(\tau^2),$$
since $\varepsilon_i^{k+1}(x_i^{k+1*}, t_{k+1}) = 0$ by the definition of $x_i^{k+1*}$.
Hence, the MSSRE is deduced as
$$\lim_{k \to +\infty} \sup\, \|\varepsilon_i^{k+1}(x_i^{k+1}, t_{k+1})\|_2 \le \lim_{k \to +\infty} \sup\, \Bigl\|\frac{\partial \varepsilon_i^{k+1}(x_i^{k+1*}, t_{k+1})}{\partial x_i^{k+1*}}\,O(\tau^2)\Bigr\|_2 \le \lim_{k \to +\infty} \sup\, \Bigl\|\frac{\partial \varepsilon_i^{k+1}(x_i^{k+1*}, t_{k+1})}{\partial x_i^{k+1*}}\Bigr\|_F\,O(\tau^2),$$
where $\|\cdot\|_F$ denotes the Frobenius norm of a matrix. Given that $x_i^{k+1*}$ is the actual solution of the problem, $\|\partial \varepsilon_i^{k+1}(x_i^{k+1*}, t_{k+1})/\partial x_i^{k+1*}\|_F$ is bounded. Consequently, the MSSRE produced by ET-DTZNN Algorithm (22) is of order $O(\tau^2)$. Thus, the proof is complete. □
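The $O(\tau^2)$ scaling of the MSSRE can be illustrated on a scalar toy problem. The sketch below runs an Euler-discretized ZNN tracking the root of $f(x, t) = x - \sin(t)$ with the step ratio $h = \gamma\tau$ held fixed; halving τ should then roughly quarter the steady-state residual. The toy problem and parameter values are illustrative assumptions, not the paper's examples.

```python
import math

# Illustrative scalar toy (not the paper's example): Euler-discretized ZNN
# x_{k+1} = x_k + tau*cos(t_k) - h*(x_k - sin(t_k)) tracking x*(t) = sin(t),
# with the step ratio h = gamma*tau held fixed while tau is halved.

def mssre(tau, h=0.3, T=10.0):
    """Maximum steady-state residual over the second half of the run."""
    x, worst = 0.0, 0.0
    steps = int(round(T / tau))
    for k in range(steps):
        t = k * tau
        x += tau * math.cos(t) - h * (x - math.sin(t))
        if k > steps // 2:
            worst = max(worst, abs(x - math.sin((k + 1) * tau)))
    return worst

r1, r2 = mssre(0.01), mssre(0.005)
print(r1 / r2)   # close to 4, consistent with the O(tau^2) MSSRE
```

Halving τ from 0.01 s to 0.005 s reduces the steady-state residual by a factor close to 4, matching Theorem 3.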

6. Numerical Experiments

In this section, three numerical examples are provided to demonstrate the validity and effectiveness of the proposed ET-DTZNN Algorithm (22) discussed in this paper.

6.1. Example 1: DTDTVO Problem Solved by ET-DTZNN Algorithm

We consider a DTDTVO problem involving a multi-agent system comprising five agents. The DTDTVO problem is formulated as follows:
$$\min. \; f(x^{k+1}, t_{k+1}) = \sum_{i=1}^{5} f_i(x_i^{k+1}, t_{k+1}),$$
where $x_i^k \in \mathbb{R}^2$. The structure of the network's communication topology, represented by $\mathcal{G}$, is presented in Figure 1, illustrating the connections among the agents. Detailed time-varying local objective functions $f_i$ for each agent are provided in Table 1.
Prior to the experiment, the parameters of ET-DTZNN Algorithm (22) need to be appropriately set. The operation duration is defined as T = 10 s, with a sampling gap τ of 0.001 s and parameter h configured at 0.3. For the penalty function, σ is assigned a value of 50. Furthermore, the initial states $x_i^0$ are randomly initialized within the range $x_{ij}^0 \in [-1, 1]$. The experimental results are showcased in Figure 2, Figure 3, Figure 4 and Figure 5.
Figure 2 and Figure 3 illustrate the trajectories of $x_{i1}^k$ and $x_{i2}^k$ for all agents synthesized by ET-DTZNN Algorithm (22) with τ = 0.001 s and ρ = 0.6. Each figure includes five trajectories, represented by different colored dashed lines: $x_1^k$ (blue), $x_2^k$ (orange), $x_3^k$ (green), $x_4^k$ (red), and $x_5^k$ (purple). The experimental results depicted in Figure 2 and Figure 3 illustrate the effectiveness of ET-DTZNN Algorithm (22) in achieving convergence and consensus for all agents. At the beginning of the simulation, all trajectories exhibit some initial oscillations as the agents adjust their states based on the event-triggered conditions and local objective functions. Over time, the trajectories of all agents converge to a steady state, indicating that ET-DTZNN Algorithm (22) successfully computes the optimal solutions for the time-varying objective functions.
Since achieving consensus is a central aspect of distributed optimization, a specific metric that captures the degree of consensus among the agents over time is introduced. Let us define the maximum steady-state deviation (MSSD) as $C_{\max}(t_k) = \max_{i,j \in \{1, 2, \ldots, n\}} \|x_i^k - x_j^k\|_2$. The corresponding trajectory of the MSSD is illustrated in Figure 4. From Figure 4, it is evident that the MSSD swiftly declines to nearly zero within the initial 2 s, showcasing the algorithm's capability to effectively achieve consensus. This swift convergence highlights the capability of the ET-DTZNN algorithm to minimize state deviations across agents, ensuring they collaboratively reach a unified solution for the DTDTVO problem efficiently.
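The MSSD metric can be computed directly from a snapshot of the agents' states, as in the following pure-Python sketch (the example states are illustrative).

```python
# The MSSD metric defined above, computed from a snapshot of agent states
# (pure-Python sketch; the example states are illustrative).

def mssd(states):
    """Maximum pairwise Euclidean distance among agent state vectors."""
    n = len(states)
    return max(
        sum((a - b) ** 2 for a, b in zip(states[i], states[j])) ** 0.5
        for i in range(n)
        for j in range(n)
    )

print(mssd([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]]))   # -> 5.0
```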
Additionally, several experiments are conducted with ET-DTZNN Algorithm (22) solving the DTDTVO problem with different τ. The MSSRE for the whole system is defined via $\|\varepsilon^k\|_2 = \sum_{i=1}^{5} \|\varepsilon_i^k\|_2$. The corresponding trajectories of the MSSREs $\|\varepsilon^k\|_2$ are presented in Figure 5 and Table 2.
In Figure 5, the MSSREs $\|\varepsilon^k\|_2$ are shown for various values of τ. The figure illustrates three distinct trajectories: the blue dashed line represents a sampling gap of τ = 0.001 s, the orange dashed line corresponds to τ = 0.0005 s, and the green dashed line indicates τ = 0.0001 s. For τ = 0.001 s, the MSSRE decreases from approximately $10^{2}$ to about $10^{-3}$. For τ = 0.0005 s, the MSSRE shows faster convergence, reducing to $10^{-4}$. The trajectory for τ = 0.0001 s exhibits the most rapid decrease, with the MSSRE reducing from $10^{2}$ to $10^{-5}$, highlighting the improved precision achieved with smaller time steps. The figure reveals an evident pattern: a decrease in τ leads to a reduction in the MSSRE. From Figure 5 and Table 2, it can be observed that the MSSRE $\|\varepsilon^k\|_2$ follows an order of $O(\tau^2)$, which is consistent with and validates the theoretical analyses. It is worth pointing out that an MSSRE of $10^{-3}$ is acceptable for applications like robotic coordination, where errors below $10^{-2}$ are typically sufficient, whereas energy networks often demand stricter accuracy, such as $10^{-4}$ or lower, which is achieved with smaller τ (e.g., an MSSRE of $10^{-5}$ at τ = 0.0001 s).
In addition, several experiments are conducted with ET-DTZNN Algorithm (22) solving the DTDTVO problem with different values of ρ. The numbers of trigger events for ET-DTZNN Algorithm (22) with different values of ρ for the five agents are presented in Figure 6 and Table 3.
From Table 3, one sees that, for ρ = 0.1, the number of trigger events is significantly higher for all agents compared with ρ = 0.3 and ρ = 0.6. Specifically, Agent 4 has the highest number of trigger events with 6459, followed by Agent 5 with 6161 and Agent 2 with 5141. As ρ increases to 0.3, the number of trigger events decreases for all agents, indicating that a higher ρ value leads to fewer trigger events. This trend largely continues as ρ increases to 0.6, with the number of trigger events decreasing further for most agents. Agent 4 still has the highest number of trigger events at 2417 for ρ = 0.3 and 2156 for ρ = 0.6, followed by Agent 5 with 2312 for ρ = 0.3 and 2345 for ρ = 0.6. Figure 6 further illustrates the impact of ρ on the convergence rate, showing trajectories of the MSSRE $\|\varepsilon^k\|_2$ for τ = 0.001 s. For ρ = 0.1, the convergence rate is the fastest, reaching a low MSSRE within approximately 2 s, while for ρ = 0.3 and ρ = 0.6, the convergence slows, taking around 3 and 4 s, respectively. At ρ = 0.8, the convergence rate is significantly slower, requiring over 6 s to reach a similar MSSRE. This highlights that a smaller ρ accelerates convergence by triggering more frequent updates. However, there is a diminishing return in balancing communication cost and convergence rate; for example, when ρ = 0.8, the communication cost (number of trigger events) does not decrease significantly compared with ρ = 0.6 (Agent 4 has 1968 triggers at ρ = 0.8 versus 2156 at ρ = 0.6), but the convergence rate is much slower, taking over 6 s compared with 4 s for ρ = 0.6. The results showcase that the ET-DTZNN algorithm effectively reduces the number of trigger events as ρ increases, highlighting its efficiency in minimizing communication consumption, though at the cost of slower convergence for larger ρ values.

6.2. Example 2: Comparative Experiments

The widely used gradient neural network (GNN) algorithm addresses time-varying problems effectively. In this subsection, a comparative experiment is performed to evaluate a GNN-based algorithm against the proposed ET-DTZNN algorithm.
First, based on the algorithm presented in [34], a periodic sampling GNN (PSGNN) algorithm is developed to address the DTDTVO problem. Under the PSGNN algorithm, agents periodically exchange their information with neighboring agents at time intervals given by t k = k τ . For agent i, the scalar-valued energy function is expressed as
$$\mu_i^k = \frac{1}{2}\|\varepsilon_i^k\|_2^2,$$
where $\varepsilon_i^k = \nabla f_i(x_i^k, t_k) + \sigma \sum_{j \in N_i}(x_i^k - x_j^k) = \nabla f_i(x_i^k, t_k) + \sigma d_i x_i^k - \sigma e_i^k$. By utilizing the gradient information of Energy Function (28), the following expression is derived:
$$\dot{x}_i^k = -\gamma_G \frac{\partial \mu_i^k}{\partial x_i^k} = -\gamma_G \Bigl(\frac{\partial \varepsilon_i^k}{\partial x_i^k}\Bigr)^T \varepsilon_i^k = -\gamma_G (H_i^k + d_i\sigma I)^T \varepsilon_i^k = -\gamma_G (J_i^k)^T \bigl(\nabla f_i(x_i^k, t_k) + \sigma d_i x_i^k - \sigma e_i^k\bigr),$$
where the parameter γ G scales the convergence rate of the GNN algorithm [3]. Thus, the PSGNN algorithm for addressing the DTDTVO problem is defined as
$$x_i^{k+1} = x_i^k - \tau\gamma_G (J_i^k)^T \bigl(\nabla f_i(x_i^k, t_k) + \sigma d_i x_i^k - \sigma e_i^k\bigr).$$
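The qualitative gap between the two approaches can be reproduced on a scalar toy problem $x^*(t) = \sin(t)$: the GNN-style update applies only gradient feedback, while the ZNN-style update adds the time-derivative feedforward term. The sketch below is an illustrative comparison under these assumptions, not the paper's experiment.

```python
import math

# Illustrative scalar comparison on x*(t) = sin(t): a GNN-style update uses only
# gradient feedback, while a ZNN-style Euler update adds the d(sin t)/dt term.
# Toy problem and parameter values are assumptions for this demo.

tau, T = 0.001, 10.0
gamma = 300.0                      # gamma = gamma_G for a like-for-like comparison
x_gnn = x_znn = 0.0
r_gnn = r_znn = 0.0
steps = int(T / tau)
for k in range(steps):
    t = k * tau
    x_gnn += -tau * gamma * (x_gnn - math.sin(t))                     # gradient feedback only
    x_znn += tau * math.cos(t) - tau * gamma * (x_znn - math.sin(t))  # + time-derivative term
    if k > steps // 2:                                                # steady-state portion
        t1 = (k + 1) * tau
        r_gnn = max(r_gnn, abs(x_gnn - math.sin(t1)))
        r_znn = max(r_znn, abs(x_znn - math.sin(t1)))

print(r_znn < r_gnn)   # prints True: the ZNN-style residual is far smaller
```

The GNN-style residual lags by roughly $O(1/\gamma_G)$, while the ZNN-style residual is of order $O(\tau^2)$, mirroring the comparison reported below.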
Subsequently, both PSGNN and ET-DTZNN algorithms are applied to address the DTDTVO problem. To ensure a fair comparison, the parameter σ and the initial states are kept consistent with those defined in Section 6.1, while parameters γ and γ G are set to γ = γ G = 3000 . The resulting experimental outcomes are displayed in Table 4 and Figure 7, which provide a detailed comparison of the performance of the PSGNN algorithm and the ET-DTZNN algorithm for solving the DTDTVO problem.
Figure 7 illustrates the trajectories of the MSSRE $\|\varepsilon^k\|_2$ over time t for the two algorithms: PSGNN and ET-DTZNN. The y-axis represents $\|\varepsilon^k\|_2$ on a logarithmic scale, ranging from $10^{-5}$ to $10^{2}$. The blue dashed line represents the ET-DTZNN algorithm, which shows a rapid initial decrease in $\|\varepsilon^k\|_2$, stabilizing around $10^{-5}$. This indicates that the ET-DTZNN algorithm quickly reduces the residual errors and maintains a low error level throughout the experiment. On the other hand, the orange dash–dot line represents the PSGNN algorithm, which shows a slower decrease in $\|\varepsilon^k\|_2$ and stabilizes around $10^{-2}$. This indicates that the PSGNN algorithm requires more time to decrease the residual errors and sustains a higher error level than the ET-DTZNN algorithm. Overall, the ET-DTZNN algorithm presents superior performance in terms of convergence rate and accuracy compared with the PSGNN algorithm. The analysis showcases the success of the ET-DTZNN algorithm in addressing the DTDTVO problem under the configuration of ρ = 0.6 and τ = 0.0001 s, highlighting its efficiency and precision.
Table 4 presents the MSSREs of the PSGNN and ET-DTZNN algorithms in solving the DTDTVO problem with different values of τ. From Table 4, it is observed that, as the value of τ decreases, the MSSRE values for both algorithms also decrease. Specifically, for τ = 0.001 s, the MSSRE of the PSGNN algorithm is $4.21 \times 10^{-1}$ and the MSSRE of the ET-DTZNN algorithm is $7.71 \times 10^{-3}$. For τ = 0.0005 s, the MSSRE of the PSGNN algorithm is $1.15 \times 10^{-1}$ and the MSSRE of the ET-DTZNN algorithm is $5.81 \times 10^{-4}$. Finally, for τ = 0.0001 s, the MSSRE of the PSGNN algorithm is $2.19 \times 10^{-2}$ and the MSSRE of the ET-DTZNN algorithm is $6.49 \times 10^{-5}$. Overall, the ET-DTZNN algorithm consistently outperforms the PSGNN algorithm in terms of precision across all tested τ values. The results highlight the superior performance of the ET-DTZNN algorithm in achieving more accurate solutions compared with the PSGNN algorithm.

6.3. Example 3: DTDTVO Problem Solved by ET-DTZNN Algorithm in Large MASs

In this subsection, two experiments employ the ET-DTZNN algorithm to tackle a DTDTVO problem within a large MAS under static and dynamic topology, respectively. We consider a DTDTVO problem involving a MAS with 20 agents. The DTDTVO problem is formulated as follows:
$$\min. \; f(x^{k+1}, t_{k+1}) = \sum_{i=1}^{20} f_i(x_i^{k+1}, t_{k+1}),$$
where $x_i^k \in \mathbb{R}^2$. Each agent i ($i = 1, \ldots, 20$) possesses a local time-varying objective function $f_i(x_i^k, t_k) = a_{i1}(t_k)(x_{i1}^k)^2 + a_{i2}(t_k)(x_{i2}^k)^2 + b_{i1}(t_k)x_{i1}^k + b_{i2}(t_k)x_{i2}^k$, where $x_i^k = [x_{i1}^k, x_{i2}^k]^T \in \mathbb{R}^2$. The coefficients are $a_{i1}(t_k) = 0.5 + 0.2\sin(t_k + 0.1i)$, $a_{i2}(t_k) = 0.5 + 0.2\cos(t_k + 0.1i)$, $b_{i1}(t_k) = \cos(2t_k + 0.1i)$, and $b_{i2}(t_k) = \sin(2t_k + 0.1i)$, rendering the functions time-varying and distinct, with a phase difference of 0.1 radians between consecutive agents.
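For reference, the local objective and its coefficients can be coded directly from the definitions above (plain Python; the helper names are illustrative).

```python
import math

# The local objective of agent i, coded directly from the definitions above
# (helper names are illustrative).

def coeffs(i, t):
    """Time-varying coefficients with a 0.1-radian phase offset per agent."""
    a1 = 0.5 + 0.2 * math.sin(t + 0.1 * i)
    a2 = 0.5 + 0.2 * math.cos(t + 0.1 * i)
    b1 = math.cos(2 * t + 0.1 * i)
    b2 = math.sin(2 * t + 0.1 * i)
    return a1, a2, b1, b2

def f_i(i, x, t):
    """f_i(x_i, t) = a_i1 x_i1^2 + a_i2 x_i2^2 + b_i1 x_i1 + b_i2 x_i2."""
    a1, a2, b1, b2 = coeffs(i, t)
    return a1 * x[0] ** 2 + a2 * x[1] ** 2 + b1 * x[0] + b2 * x[1]

print(f_i(1, [0.0, 0.0], 0.0))   # -> 0.0 (all terms vanish at the origin)
```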
Before the experiment begins, it is essential to configure the parameters of the ET-DTZNN algorithm properly. In both the static and dynamic topology scenarios, the total operation duration is set at T = 10 s, with parameter h defined as 0.3 and ρ assigned a value of 0.2. Due to the scale of the MAS, the penalty parameter σ is assigned a value of 200. Additionally, the initial states $x_i^0$ are randomly generated within the range $x_{ij}^0 \in [-1, 1]$.

6.3.1. ET-DTZNN in Large MAS Under Static Topology

Under the static topology, the agents are linked via a ring-shaped communication topology, enabling each agent i to exchange information bidirectionally with neighbors $(i-1)$ and $(i+1)$ (indices taken modulo 20). The adjacency matrix A assigns $A_{i,j} = 1$ for $j = i \pm 1$ and 0 otherwise, with degree matrix $D_{i,i} = 2$ and Laplacian $L = D - A$, ensuring network connectivity for distributed consensus. The experimental results are illustrated through Figure 8, Figure 9 and Figure 10.
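The described ring topology can be constructed as follows; the sketch assumes the wraparound (indices modulo n) that a ring implies.

```python
# Ring topology for n = 20 agents as described; assumes wraparound indices
# (i +/- 1 taken modulo n), which a ring implies.

n = 20
A = [[1 if (j - i) % n in (1, n - 1) else 0 for j in range(n)] for i in range(n)]
D = [[2 if i == j else 0 for j in range(n)] for i in range(n)]
L = [[D[i][j] - A[i][j] for j in range(n)] for i in range(n)]

print(all(sum(row) == 0 for row in L))   # -> True: Laplacian rows sum to zero
```

Zero row sums of L reflect the standard consensus property $L\mathbf{1} = \mathbf{0}$.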
Figure 8 and Figure 9 illustrate the trajectories of x i 1 k and x i 2 k for all agents under static topology synthesized by ET-DTZNN Algorithm (22) with τ = 0.01 s and ρ = 0.2 . The experimental findings, as shown in Figure 8 and Figure 9, demonstrate the capability of the ET-DTZNN algorithm to ensure both convergence and consensus across all agents within a large MAS. At the beginning of the simulation, all trajectories exhibit some initial deviations as the agents adjust their states based on the event-triggered conditions and local objective functions. As time progresses, the trajectories of all agents gradually stabilize, demonstrating the capability of the ET-DTZNN algorithm to effectively resolve the DTDTVO problem in large multi-agent systems.
Additionally, experiments are conducted with the ET-DTZNN algorithm solving the DTDTVO problem in a large MAS with different τ . Figure 10 showcases a semi-log plot illustrating the MSSRE ε k 2 over a 10-second interval for the ET-DTZNN algorithm addressing a DTDTVO problem with 20 agents. This plot compares the results of two trials, where τ = 0.01 s is represented by the blue dashed line and τ = 0.001 s is depicted by the orange dotted line. The results show that the smaller sampling interval ( τ = 0.001 s) yields a low MSSRE of 10 3 , while τ = 0.01 s exhibits periodic oscillations peaking at 10 1 , driven by the time-varying objective function, confirming the theoretical MSSRE scaling of O ( τ 2 ) .

6.3.2. ET-DTZNN in Large MAS Under Dynamic Topology

Under the dynamic topology, the agents are linked via a time-varying communication structure, where the topology switches every 1 s to a new randomly generated strongly connected undirected graph. At each switching instant, the adjacency matrix A is constructed using an Erdős–Rényi random graph model with a connection probability of 0.3 [31], ensuring bidirectional information exchange between connected agents. The generation process guarantees strong connectivity by verifying that the resulting graph allows a path between any pair of agents i and j in both directions when considered as a directed graph. This dynamic, strongly connected topology ensures network adaptability and persistent consensus capability despite frequent structural changes. The experimental results are illustrated through Figure 11, Figure 12 and Figure 13.
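The topology regeneration step can be sketched as follows: sample an Erdős–Rényi graph with p = 0.3 and resample until a connectivity check passes. The generator below is an illustrative reconstruction; the paper's exact sampling procedure may differ.

```python
import random

# Illustrative reconstruction of the switching-topology generator: sample an
# Erdos-Renyi graph with connection probability p and resample until connected.
# The exact procedure used in the paper may differ.

def connected_er_graph(n, p, rng):
    while True:
        A = [[0] * n for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    A[i][j] = A[j][i] = 1      # undirected edge
        seen, stack = {0}, [0]                 # depth-first connectivity check
        while stack:
            u = stack.pop()
            for v in range(n):
                if A[u][v] and v not in seen:
                    seen.add(v)
                    stack.append(v)
        if len(seen) == n:                     # connected: accept this sample
            return A

A = connected_er_graph(20, 0.3, random.Random(1))
print(all(A[i][j] == A[j][i] for i in range(20) for j in range(20)))   # -> True
```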
Figure 11 and Figure 12 illustrate the trajectories of x i 1 k and x i 2 k for all agents synthesized by ET-DTZNN Algorithm (22) under dynamic topology with τ = 0.01 s and ρ = 0.2 . The experimental results depicted in Figure 11 and Figure 12 illustrate the effectiveness of the ET-DTZNN algorithm in achieving convergence and consensus for all agents in a large MAS under dynamic topology. The dynamic topology introduces noticeable perturbations at each switching instant (e.g., at t = 1 , 2 , 3 , ), manifesting as slight discontinuities or oscillations in the trajectories. These perturbations are more pronounced in the early stages when the states are farther apart but diminish as consensus is approached, indicating the robustness of the ET-DTZNN algorithm to topological changes. The impact of the dynamic topology is evident in the small, periodic deviations from smooth convergence, yet the algorithm maintains overall stability and achieves consensus, highlighting its adaptability to frequent structural changes in the communication network.
Additionally, experiments are conducted with the ET-DTZNN algorithm solving the DTDTVO problem in a large MAS with different τ. Figure 13 displays a semi-log plot illustrating the MSSRE $\|\varepsilon^k\|_2$ over a 10 s interval. This plot evaluates the ET-DTZNN algorithm's performance in solving a DTDTVO problem with 20 agents under dynamic topology. Two trials are compared, with the blue dashed line representing τ = 0.01 s and the orange dotted line depicting τ = 0.001 s. The MSSRE trajectories demonstrate the algorithm's convergence despite the dynamic topology, with both experiments showing a rapid initial decrease followed by periodic fluctuations. For τ = 0.01 s, the MSSRE drops from $10^{3}$ to around $10^{-1}$ within the first second, while for τ = 0.001 s, it decreases to around $10^{-3}$, reflecting the theoretical $O(\tau^2)$ error bound. The dynamic topology introduces noticeable spikes in the MSSRE at each switching instant (e.g., t = 1, 2, 3, …), as the sudden change in communication structure perturbs the consensus process. These spikes are more pronounced for τ = 0.01 s due to the larger sampling gap, which limits the algorithm's ability to quickly adapt to topological changes. In contrast, the smaller τ = 0.001 s, combined with the correspondingly larger γ, results in a lower MSSRE and smaller fluctuations, indicating better adaptability to the dynamic topology. Despite these perturbations, the MSSRE remains bounded, underscoring the robustness of ET-DTZNN in handling frequent topological switches while maintaining effective distributed optimization.
It is worth mentioning that the fully distributed nature of the ET-DTZNN algorithm ensures that each agent's computation time per iteration depends primarily on the size of its local neighborhood rather than the total number of agents in the MAS. As shown in Equation (22), each agent computes its update using only its own objective function and the states of its neighbors, resulting in a computational complexity that scales with the degree $d_i$ of the agent in graph $\mathcal{G}$. This design keeps per-agent computation lightweight even as the system size grows. Furthermore, the event-triggered mechanism, governed by the condition in Equation (17), reduces communication overhead by limiting inter-agent exchanges to instances when the local error $\|\phi_i^k\|_2$ exceeds a threshold relative to $\|\varepsilon_i^k\|_2$. This approach significantly mitigates communication delays, a critical factor in real-time applications, as evidenced by the rapid convergence observed in Figure 8 and Figure 11 across both static and dynamic topologies.

7. Conclusions

To address the DTDTVO problem for MASs characterized by time-varying objective functions, a novel ET-CTZNN model is introduced. This model effectively tackles the CTDTVO problem using a semi-centralized approach. Building upon the ET-CTZNN framework, the ET-DTZNN algorithm is developed. By discretizing the ET-CTZNN model using the Euler formula, the ET-DTZNN algorithm enables agents to compute optimal solutions based on local information and communicate with neighbors only when specific event-triggered conditions are met, significantly reducing communication consumption. For both the ET-CTZNN model and the ET-DTZNN algorithm, their convergence theorems are thoroughly validated. Numerical experiments highlight the effectiveness of the proposed ET-DTZNN algorithm, showcasing its precision and enhanced communication efficiency. Looking forward, the ET-DTZNN algorithm shows potential for applications in smart grids, sensor networks, and collaborative robotics, where it can optimize time-varying objectives while minimizing communication. However, the ET-DTZNN algorithm assumes strong convexity of objective functions, uses a static event-triggering threshold, and relies on Euler discretization. Future work may focus on real-world validation to address challenges like distributed non-convex optimization, integrating adaptive event-triggering for better adaptability, and exploring higher-order discretization methods like Runge–Kutta to enhance stability and accuracy.

Author Contributions

Conceptualization, L.H.; methodology, L.H. and H.C.; software, L.H.; validation, H.C. and Y.Z.; formal analysis, H.C. and L.H.; investigation, H.C. and L.H.; resources, L.H. and Y.Z.; data curation, L.H.; writing—original draft preparation, L.H.; writing—review and editing, L.H. and Y.Z.; visualization, L.H. and Y.Z.; supervision, H.C. and Y.Z.; project administration, Y.Z.; funding acquisition, H.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Key R&D Program of China (Grant No. 2022ZD0119602), the National Natural Science Foundation of China (Grant No. 62376290), and the Natural Science Foundation of Guangdong Province (Grant No. 2024A1515011016).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors express their sincere thanks to Sun Yat-sen University for its support and assistance in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this paper:
DTVO: Distributed Time-Varying Optimization
MASs: Multi-Agent Systems
ZNN: Zeroing Neural Network
CTDTVO: Continuous-Time Distributed Time-Varying Optimization
DTDTVO: Discrete-Time Distributed Time-Varying Optimization
CTZNN: Continuous-Time Zeroing Neural Network
DTZNN: Discrete-Time Zeroing Neural Network
ET-CTZNN: Event-Triggered Continuous-Time Zeroing Neural Network
ET-DTZNN: Event-Triggered Discrete-Time Zeroing Neural Network
MSSRE: Maximum Steady-State Residual Error
MSSD: Maximum Steady-State Deviation
GNN: Gradient Neural Network
PSGNN: Periodic Sampling Gradient Neural Network

Figure 1. Communication topology graph G of the multi-agent system in Examples 1 and 2.
Figure 2. Trajectories of x_{i1}^k for all agents synthesized by the ET-DTZNN algorithm with τ = 0.001 s and ρ = 0.6.
Figure 3. Trajectories of x_{i2}^k for all agents synthesized by the ET-DTZNN algorithm with τ = 0.001 s and ρ = 0.6.
Figure 4. Trajectory of MSSD C_max(t_k) for the ET-DTZNN algorithm when solving the DTDTVO problem with ρ = 0.6, σ = 50, and τ = 0.001 s.
Figure 5. Trajectories of MSSRE ‖ε^k‖_2 for the ET-DTZNN algorithm when solving the DTDTVO problem with ρ = 0.6 and different τ.
Figure 6. Trajectories of MSSRE ‖ε^k‖_2 for the ET-DTZNN algorithm when solving the DTDTVO problem with τ = 0.001 s and different ρ.
Figure 7. Trajectories of MSSREs ‖ε^k‖_2 for the PSGNN and ET-DTZNN algorithms when solving the DTDTVO problem with ρ = 0.6 and τ = 0.0001 s.
Figure 8. Trajectories of x_1^k for all agents synthesized by the ET-DTZNN algorithm when solving the DTDTVO problem with 20 agents under the static topology with τ = 0.01 s.
Figure 9. Trajectories of x_2^k for all agents synthesized by the ET-DTZNN algorithm when solving the DTDTVO problem with 20 agents under the static topology with τ = 0.01 s.
Figure 10. Trajectories of MSSREs ‖ε^k‖_2 for the ET-DTZNN algorithm when solving the DTDTVO problem with 20 agents under the static topology.
Figure 11. Trajectories of x_1^k for all agents synthesized by the ET-DTZNN algorithm when solving the DTDTVO problem with 20 agents under the dynamic topology with τ = 0.01 s.
Figure 12. Trajectories of x_2^k for all agents synthesized by the ET-DTZNN algorithm when solving the DTDTVO problem with 20 agents under the dynamic topology with τ = 0.01 s.
Figure 13. Trajectories of MSSREs ‖ε^k‖_2 for the ET-DTZNN algorithm when solving the DTDTVO problem with 20 agents under the dynamic topology.
Table 1. Expressions of the time-varying objective functions in Example 1.

f_1(x_1^{t_k}, t_k) = sin(0.1 t_k^2) (x_{11}^{t_k})^2 / 4 + (x_{12}^{t_k})^2 + cos(0.1 t_k^2) x_{11}^{t_k} x_{12}^{t_k}
f_2(x_2^{t_k}, t_k) = (x_{21}^{t_k})^2 + sin(0.1 t_k^2) (x_{22}^{t_k})^2 / 4 + cos(0.3 t_k^2) x_{21}^{t_k}
f_3(x_3^{t_k}, t_k) = (x_{31}^{t_k})^2 + (x_{32}^{t_k})^2 + sin(3 t_k^2) x_{32}^{t_k}
f_4(x_4^{t_k}, t_k) = (x_{41}^{t_k})^4 + cos(0.2 t_k^2) (x_{42}^{t_k})^2 + sin(0.3 t_k^2) x_{42}^{t_k}
f_5(x_5^{t_k}, t_k) = cos(0.2 t_k^2) (x_{51}^{t_k})^2 + (x_{52}^{t_k})^4 + sin(0.3 t_k^2) x_{51}^{t_k}
Table 2. MSSREs of the ET-DTZNN algorithm when solving the DTDTVO problem with different τ.

τ = 0.001 s:  MSSRE ‖ε^k‖_2 = 7.71 × 10^-3
τ = 0.0005 s: MSSRE ‖ε^k‖_2 = 5.81 × 10^-4
τ = 0.0001 s: MSSRE ‖ε^k‖_2 = 6.49 × 10^-5
Table 3. Numbers of trigger events of the ET-DTZNN algorithm when solving the DTDTVO problem with τ = 0.001 s and different ρ.

Agent    ρ = 0.1   ρ = 0.3   ρ = 0.6   ρ = 0.8
Agent 1  1915      1840      1404      1289
Agent 2  5141      1614      1512      1444
Agent 3  4867      1598      1439      1304
Agent 4  6459      2417      2156      1968
Agent 5  6161      2312      2345      2132
Table 4. MSSREs of the PSGNN and ET-DTZNN algorithms in solving the DTDTVO problem with γ = γ_G and different values of τ.

τ          MSSRE ‖ε^k‖_2 of PSGNN   MSSRE ‖ε^k‖_2 of ET-DTZNN
0.001 s    4.21 × 10^-1             7.71 × 10^-3
0.0005 s   1.15 × 10^-1             5.81 × 10^-4
0.0001 s   2.19 × 10^-2             6.49 × 10^-5