Review

Survey of Neurodynamic Methods for Control and Computation in Multi-Agent Systems

1 Department of Economics, National and Kapodistrian University of Athens, 10559 Athens, Greece
2 College of Computer Science and Engineering, Jishou University, Jishou 416000, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(6), 936; https://doi.org/10.3390/sym17060936
Submission received: 5 May 2025 / Revised: 4 June 2025 / Accepted: 5 June 2025 / Published: 12 June 2025
(This article belongs to the Special Issue Symmetry and Asymmetry in Intelligent Control and Computing)

Abstract

Neurodynamics is recognized as a powerful tool for addressing various problems in engineering, control, and intelligent systems. Over the past decade, neurodynamics-based methods and models have been rapidly developed, particularly in emerging areas such as neural computation and multi-agent systems. In this paper, we provide a brief survey of neurodynamics applied to computation and multi-agent systems. Specifically, we highlight key models and approaches related to time-varying computation, as well as cooperative and competitive behaviors in multi-agent systems. Furthermore, we discuss current challenges, potential opportunities, and promising future directions in this evolving field.

1. Introduction

Neurodynamics, inspired by the working principles of the human brain, is a powerful tool for handling various problems. The early work on neurodynamics can be traced back to the 1980s, when Hopfield proposed a famous neural network that was later called the Hopfield neural network. Neurodynamics is sometimes referred to as recurrent neural networks when we are concerned with its architecture of neuron connections, in contrast to the traditional feedforward neural networks widely used in pattern recognition [1,2,3,4]. It is also referred to as dynamic neural networks when we are concerned with its dynamic features, as neurodynamic models or methods are characterized by differential equations. Neurodynamics has a strong connection with cybernetics or control theory, since feedback plays an important role in neurodynamic methods and models, which ensures that such methods and models can have theoretically guaranteed performance, such as global asymptotic convergence. Neurodynamics, together with feedforward neural networks [5,6,7], fuzzy systems [8,9], evolutionary computation [10,11,12,13,14,15,16], and meta-heuristic algorithms [17,18,19], constitutes a major component of computational intelligence. Recently, some researchers have tried to use neurodynamics to boost the performance of feedforward neural networks [20,21]. This field now plays a significant role in bridging biological inspiration and engineering implementation, especially in contexts where traditional static neural architectures fall short in solving dynamic, time-sensitive, or optimization-intensive problems.
Neurodynamics has been applied to the localization of underwater acoustic sensor networks [22], resource allocation for end–edge collaboration in the internet of vehicles [23], portfolio analysis in high-frequency trading [24], path planning [25,26,27], and the motion control of redundant manipulators [28,29,30,31,32,33]. In particular, the theory and application of neurodynamics for computation have been extensively investigated in the past decade. The basic idea of using neurodynamics to handle computation problems is to develop neurodynamic models whose equilibria include the solutions of the considered problems, where a proper design can ensure that the equilibrium is globally asymptotically stable. With suitable activation functions acting as nonlinear mappings of feedback information, some neurodynamic models can achieve finite-time stability, by which some state variables converge to the solutions of the computation problems within finite time [34,35,36,37,38]. An interesting aspect of neurodynamics for computation is its extendability to the various applications stated above by converting the problems arising in those applications into computation problems such as linear equations, matrix inversion problems, or optimization problems. Furthermore, compared with traditional numerical solvers, neurodynamic models possess strong capabilities in terms of parallel implementation, robustness against noise, and real-time convergence, which have made them particularly attractive in embedded control and autonomous decision-making scenarios. Recent works have further explored fixed-time and finite-time neural solvers [39], neural ODE frameworks [40], and biologically plausible dynamics that aim to generalize neurodynamics to more complex nonlinear and time-varying systems. Meanwhile, neurodynamics has also been applied to multi-agent systems. In a multi-agent system, there are multiple agents with sensing, computation, communication, and actuation capabilities, and fundamental problems for such systems include distributed optimization and distributed control. In terms of distributed optimization, agents in a multi-agent system try to solve an optimization problem through local interaction with neighboring agents with whom they have communication links [41]. In terms of distributed control, a fundamental problem in multi-agent systems is the consensus problem, which requires that certain state variables of the agents reach common values through local negotiations [42].
The integration of neurodynamics with multi-agent systems offers a promising framework to model decentralized intelligence and emergent collective behaviors. This synergy is particularly important for large-scale systems such as smart grids, autonomous vehicle fleets, and sensor swarms, where coordination must be achieved with limited global information and under stringent real-time constraints. Neurodynamic models enable each agent to evolve its internal state based on local observations and interactions, thereby achieving system-level objectives through local computation. Despite the rapid progress and increasing interest in this area, a comprehensive survey that systematically reviews the intersection of neurodynamics and both computational theory and multi-agent systems is still lacking. Previous reviews have largely focused on static neural networks or specific application domains, leaving a gap in unified understanding across neurodynamic models, convergence theory, application scenarios, and emerging challenges.
Thus, in this paper, we aim to provide a brief survey of this topic by introducing the related methods and models, based on which we also outline some interesting topics worth further investigation. The main contributions of this survey are listed as follows.
(1)
Methods and models of neurodynamics for computation and multi-agent systems proposed in the past decade as well as their related engineering applications are analyzed and compared.
(2)
Challenges and opportunities in the area of neurodynamics for computation and multi-agent systems are discussed.
The rest of this paper is organized as follows. Some background knowledge for neurodynamics and multi-agent systems is given in Section 2. The advances of neurodynamics for computation and multi-agent systems in the past decade are reviewed in Section 3 and Section 4, respectively. Section 5 identifies research opportunities and challenges in the considered area. Then, Section 6 concludes this paper.

2. Preliminary

In this section, we provide background knowledge that can be useful for newcomers in the area of neurodynamics and multi-agent systems. This includes a review of the mathematical formulation of neurodynamic systems, graph-theoretic tools for describing inter-agent connectivity, and basic concepts underpinning stability and convergence analysis.

2.1. Notations for Neurodynamics

Let $\mathbb{R}$ denote the set of real numbers and $\mathbb{R}^n$ the set of $n$-dimensional real vectors. Neurodynamics is characterized by the dynamics of the neurons. Let $x = [x_1, x_2, \ldots, x_n]^T \in \mathbb{R}^n$ denote the state vector of the neurons in a dynamic neural network, with $x_i$ being the state variable of the $i$-th neuron. The dynamics of the neurons is represented by the following differential equation:
$\varepsilon \dot{x}(t) = f(t, x(t)),$
where $\varepsilon > 0 \in \mathbb{R}$ is a gain parameter scaling the state evolution speed of the neurons, $t$ is a virtual time variable, $\dot{x} = \mathrm{d}x/\mathrm{d}t$ is the derivative of the state vector with respect to time $t$, and $f(\cdot): \mathbb{R}^{n+1} \to \mathbb{R}^n$ is the mapping governing the evolution of the neuron states. For different neurodynamic models, the difference lies in the number of neurons and the definition of the mapping $f(\cdot)$. For the convenience of presentation, in this paper, the time variable $t$ is omitted whenever there is no confusion. As seen in the general neurodynamic model (1), the evolution of the state vector $x$ of the neurons is driven by its previous values, which is called feedback in control theory or cybernetics. It is the existence of this feedback information that makes neurodynamics essentially different from traditional feedforward neural networks. Typically, equilibrium points of (1) are designed to correspond to solutions of specific computational problems such as optimization or equation solving. In the area of neurodynamics, stability of the neurodynamic models is essential. The stability analysis is often based on Lyapunov stability theory.

2.2. Algebraic Graph Theory

When dealing with the coordination or competition of multi-agent systems, it is necessary to model the communication topology of the agents, and algebraic graph theory is a tool to achieve this goal. Depending on the characteristics of the communication topologies, different types of graphs may be used. In this modeling process, an agent in a multi-agent system is viewed as a node in a graph, and there is an edge between two nodes if there is a communication channel between the corresponding two agents. Consider an undirected graph $G = \{V, E\}$ consisting of $N$ nodes, where $V = \{1, 2, \ldots, N\}$ is the set of nodes and $E$ is the set of edges, where each edge $(i, j)$ means that there is a bidirectional communication link between agent $i$ and agent $j$. The weight of the edge $(i, j)$ is denoted by $w_{ij}$, which is identical to $w_{ji}$ for an undirected graph. Usually, $w_{ij} > 0$ if the corresponding edge exists, and $w_{ij} = 0$ if the corresponding edge does not exist, while $w_{ii} = 0$, since self-loops are not allowed. The weights $w_{ij}$ form a matrix $W$ called the weight matrix. Based on the definition of the weight matrix, a degree matrix can be defined as a diagonal matrix $D$ with its diagonal elements given by $d_{ii} = \sum_{j=1}^{N} w_{ij}$. A matrix widely used in graph analysis is the Laplacian matrix, denoted by $L = D - W$. The set of neighbors of an agent $i$ is denoted by $N(i) = \{j \in V \mid w_{ij} > 0\}$. The undirected graph $G$ is connected if there is a path between any two nodes, which is equivalent to the positiveness of the algebraic connectivity of the graph. The algebraic connectivity is defined as the second smallest eigenvalue of the Laplacian matrix [43].
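As a minimal numerical illustration of these notions, the following Python sketch builds the weight, degree, and Laplacian matrices of an assumed small undirected graph (a three-node path with unit edge weights) and checks connectivity through the algebraic connectivity; the graph and all variable names are illustrative.

import numpy as np

# Hypothetical undirected graph with 3 nodes and edges (1,2) and (2,3), unit weights.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

D = np.diag(W.sum(axis=1))      # degree matrix: d_ii = sum_j w_ij
L = D - W                       # graph Laplacian L = D - W

eigvals = np.sort(np.linalg.eigvalsh(L))
algebraic_connectivity = eigvals[1]     # second smallest eigenvalue of L
print("Laplacian eigenvalues:", eigvals)
print("Graph is connected:", algebraic_connectivity > 1e-9)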

3. Neurodynamics for Computation

Basically, there are three types of neurodynamics that are widely encountered, i.e., gradient neurodynamics, zeroing neurodynamics, and projection neurodynamics.

3.1. Gradient Neurodynamics

Gradient neurodynamics is constructed based on gradient descent, and an early instance is the Hopfield neural network. Gradient neurodynamics is widely considered a powerful tool for handling static computation problems whose theoretical solutions do not change over time $t$. However, its application scope is traditionally limited to static problems, and the extension to time-varying problems remains nontrivial and often problem-specific.
The design of gradient neurodynamics to solve computation problems consists of three steps. Firstly, an energy function is designed. The energy function is a nonnegative function of the unknown variable of the computation problem, and its value is zero only when the unknown variable equals the theoretical solution. Secondly, the gradient descent method is used to obtain the evolution rule of the unknown variable such that the energy function vanishes over time. Finally, activation functions are designed to accelerate the convergence of the gradient neurodynamic model. While these steps are systematic and mathematically elegant, they tend to rely on heuristic choices of energy and activation functions, which may not generalize well across problem types. As an example to illustrate the design of gradient neurodynamic models, consider the following linear equation:
$A x = b,$
where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^{m}$ are the known coefficient matrix and vector, respectively, and $x \in \mathbb{R}^{n}$ is the unknown vector to be found. An energy function for the linear equation is selected as
$E = \frac{1}{2}\|A x - b\|_2^2,$
where $\|\cdot\|_2$ denotes the 2-norm of a vector. The gradient of the energy function is
$\frac{\partial E}{\partial x} = A^T (A x - b).$
Starting from a randomly generated initial state $x(0)$, to make $x$ move toward the theoretical solution, the following evolution rule is adopted:
$\varepsilon \dot{x} = -A^T (A x - b),$
where $\varepsilon > 0 \in \mathbb{R}$ is the gain parameter. The gradient neurodynamic model activated by nonlinear activation functions is further developed as follows [44]:
$\varepsilon \dot{x} = -A^T \phi(A x - b),$
where $\phi(\cdot)$ is the activation function, i.e., an element-wise nonlinear mapping, which is monotonically increasing and odd, for which $\phi(x) = [\phi(x_1), \phi(x_2), \ldots, \phi(x_n)]^T$.
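For concreteness, the following Python sketch integrates the activated gradient neurodynamic model above with a simple forward-Euler scheme; the coefficient matrix, the linear-plus-tanh activation, the gain, and the step size are illustrative assumptions rather than values taken from the cited works.

import numpy as np

def phi(e):
    # element-wise, odd, monotonically increasing activation (illustrative linear-plus-tanh form)
    return e + np.tanh(e)

# Illustrative static linear equation A x = b
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

eps = 0.01               # gain parameter epsilon (smaller eps -> faster state evolution)
dt = 1e-4                # Euler step size
x = np.random.randn(2)   # randomly generated initial state x(0)

for _ in range(20000):
    # epsilon * dx/dt = -A^T phi(A x - b)
    x = x + dt * (-A.T @ phi(A @ x - b)) / eps

print("neurodynamic solution:", x)
print("reference solution   :", np.linalg.solve(A, b))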
Existing research has revealed that, with selected nonlinear activation functions, the convergence speed of neurodynamic models can be accelerated. For linear Equation (2), without using nonlinear activation functions, exponential convergence can be achieved, for which the convergence time is infinite and we have
$\lim_{t \to +\infty} x(t) = x^{*},$
where $x^{*}$ denotes the theoretical solution of linear Equation (2). Exponential convergence implies asymptotic behavior over an infinite horizon, which may not be acceptable in real-time or embedded computing scenarios. However, with properly chosen activation functions, finite-time convergence can be achieved, for which
$\lim_{t \to t_f} x(t) = x^{*},$
where $t_f > 0$ is a finite value. Evidently, finite-time convergence is more favorable than exponential convergence since the convergence time is finite. There is a particular case in which the convergence time $t_f$ is independent of the initial state $x(0)$, and this case is referred to as fixed-time convergence. In the past two decades, much effort has been devoted to the development of activation functions to accelerate the convergence speed of neurodynamic models. A comparison of activation functions is shown in Table 1. Yet, the increasing complexity of activation functions often introduces additional computational burdens, and may compromise numerical stability or hardware implementability.
Based on the above framework, gradient neurodynamic models have been developed to handle various static computation problems. Conventionally, it is known that gradient neurodynamic models following the above design steps cannot accurately solve time-varying computation problems, whose theoretical solutions vary over time, no matter what activation functions are adopted. Fortunately, recent work indicates that, by using norm correction, gradient neurodynamic models can indeed solve time-varying computation problems accurately and robustly with finite-time convergence. Consider the following time-varying linear equation:
$A(t) x(t) = b(t),$
where $A(t) \in \mathbb{R}^{m \times n}$ and $b(t) \in \mathbb{R}^{m}$ are the known time-varying smooth coefficient matrix and vector, respectively, and $x(t) \in \mathbb{R}^{n}$ is the unknown vector to be found. Since the parameters vary over time, the theoretical solution of the equation is a function of time $t$. A finite-time convergent gradient neurodynamic model based on norm correction is given as follows [51]:
$\varepsilon \|A(t) x(t) - b(t)\|_2 \, \dot{x} = -A^T(t) (A(t) x(t) - b(t)).$
It has been proved in [51] that the state vector $x(t)$ of the gradient neurodynamic model (10) converges to the theoretical solution of the time-varying linear Equation (9) in finite time, even in the presence of bounded additive noise, if the parameter $\varepsilon$ is large enough. The norm correction method is quite interesting, as it provides another way to improve the performance of neurodynamic models. Compared with the complicated nonlinear activation functions shown in Table 1, the norm correction method makes it feasible to design finite-time convergent neurodynamic models with a much lower computational burden. Currently, gradient neurodynamic models based on norm correction have also been developed for time-varying matrix inversion [52], time-varying Moore–Penrose inversion [53], and time-varying Lyapunov equations [54]. The combination of norm correction with the Davidenko method for time-varying nonlinear equations was also reported [55].
The gradient neurodynamic model based on norm correction relies on choosing the parameter $\varepsilon$ based on some prior knowledge about the time-varying problem, but it does not need accurate feedback of the real-time derivatives of the parameters of the time-varying problem, such as the derivatives of $A(t)$ and $b(t)$ with respect to time $t$ in the time-varying linear Equation (9). To some extent, norm correction provides a mechanism somewhat similar to sliding mode control, where a sufficiently large gain can compensate for bounded unknown disturbances.
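The norm-correction model can be simulated in a similar fashion; the Python sketch below tracks the solution of an assumed time-varying linear equation with known true solution, and the small constant added to the residual norm is purely an implementation safeguard against division by zero in discrete time, not part of the continuous-time model.

import numpy as np

# Illustrative time-varying linear equation A(t) x(t) = b(t) with true solution x*(t) = [sin t, cos t]^T.
def A(t):
    return np.array([[3.0 + np.sin(t), 1.0],
                     [1.0, 3.0 + np.cos(t)]])

def x_true(t):
    return np.array([np.sin(t), np.cos(t)])

def b(t):
    return A(t) @ x_true(t)

eps = 0.2           # design parameter of the norm-correction model (illustrative choice)
dt = 1e-4           # Euler step size
delta = 1e-8        # safeguard against a vanishing residual norm (implementation choice only)
x = np.zeros(2)

t = 0.0
for _ in range(int(5.0 / dt)):
    r = A(t) @ x - b(t)
    # eps * ||A(t)x - b(t)||_2 * dx/dt = -A^T(t) (A(t)x - b(t))
    x = x + dt * (-A(t).T @ r) / (eps * (np.linalg.norm(r) + delta))
    t += dt

print("state at t = 5 :", x)
print("true solution  :", x_true(5.0))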

3.2. Zeroing Neurodynamics

Zeroing neurodynamics is currently the mainstream method for solving time-varying computation problems whose solutions vary over time, and many zeroing neurodynamic models have been developed for a variety of time-varying computation problems since the first zeroing neurodynamic model was proposed in 2002 [56]. The core idea of zeroing neurodynamic methods is to convert a computation problem into a zero-finding problem, where a preferred dynamics of a selected error function is designed to ensure that the error function converges to zero. The first step in developing zeroing neurodynamic models is to convert the computation problem into an equation, which can be linear or nonlinear. The second step is the selection of the error function for the equation. The third step is to design an error dynamics for the error function to make it converge. Then, similar to gradient neurodynamics, nonlinear activation functions can be added to the derived zeroing neurodynamics to further accelerate convergence. Although this approach is systematic and widely applicable, it often relies heavily on the careful hand-design of error dynamics and activation functions, which may lack generalizability and require heuristic tuning in different application scenarios. Consider the time-varying linear Equation (9) as an example. To develop a zeroing neurodynamic model for solving (9), we can select an error function as follows:
$e = A(t) x(t) - b(t).$
Then, the following error dynamics with nonlinear activation functions can be used to ensure the convergence of $e(t)$:
$\dot{e} = -\lambda \phi(e),$
where $\lambda > 0 \in \mathbb{R}$ is the gain parameter. Substituting (11) into (12), the following zeroing neurodynamic model is obtained for solving the time-varying linear Equation (9) [57,58,59,60]:
$A(t) \dot{x} = -\lambda \phi(A(t) x(t) - b(t)) - \dot{A}(t) x(t) + \dot{b}(t).$
In view of the above steps, different error dynamics, error functions, or activation functions give rise to different zeroing neurodynamic models for solving a time-varying computation problem. In the past decade, much effort has been devoted to the design of error dynamics, error functions, and activation functions for boosting the performance of zeroing neurodynamic models. As shown in (13), zeroing neurodynamic models are represented by implicit dynamics, for which there is a mass matrix in the differential equation, such as $A(t)$ multiplying $\dot{x}$. It should be noted that this kind of dynamics can be realized in analog circuits by following the Kirchhoff circuit laws. However, for the implementation of such models on digital computers, additional effort is needed, yielding models with a more complex structure. For example, for the zeroing neurodynamic model (13), when $A(t)$ is a rectangular matrix with $m \geq n$, the following model is used for the implementation of (13) on digital computers:
$\dot{x} = (A^T(t) A(t))^{-1} A^T(t) \left( -\lambda \phi(A(t) x(t) - b(t)) - \dot{A}(t) x(t) + \dot{b}(t) \right),$
which works for the case where A ( t ) is full rank.
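As a sketch of how this explicit form can be simulated on a digital computer, the following Python code integrates it with a forward-Euler scheme and a linear activation on an assumed time-varying example whose true solution is known; all data, gains, and step sizes are illustrative.

import numpy as np

# Illustrative time-varying data; the true solution is x*(t) = [sin t, cos t]^T.
def A(t):  return np.array([[2.0 + np.sin(t), 0.5], [0.3, 2.0 + np.cos(t)]])
def dA(t): return np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
def x_true(t): return np.array([np.sin(t), np.cos(t)])
def b(t):  return A(t) @ x_true(t)
def db(t): return dA(t) @ x_true(t) + A(t) @ np.array([np.cos(t), -np.sin(t)])

lam = 10.0            # convergence gain lambda
dt = 1e-3             # Euler step size
x = np.zeros(2)       # initial state

t = 0.0
for _ in range(int(5.0 / dt)):
    e = A(t) @ x - b(t)
    # explicit form: xdot = (A^T A)^{-1} A^T ( -lam*phi(e) - dA x + db ), with phi(e) = e here
    rhs = -lam * e - dA(t) @ x + db(t)
    xdot = np.linalg.solve(A(t).T @ A(t), A(t).T @ rhs)
    x = x + dt * xdot
    t += dt

print("state at t = 5 :", x)
print("true solution  :", x_true(5.0))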
A time-varying nonlinear equation is given as follows:
$f(t, x(t)) = 0,$
where $f(\cdot): \mathbb{R}^{n+1} \to \mathbb{R}^{n}$ is a smooth nonlinear function and $x(t) \in \mathbb{R}^{n}$ is an unknown vector. Based on the idea of zeroing neurodynamics, the following error function can be selected:
$e(t) = f(t, x).$
Then, by using (12), a zeroing neurodynamic model for solving the time-varying nonlinear Equation (15) is derived as follows [61,62]:
$J(t, x) \dot{x} = -\lambda \phi(f(t, x)) - \frac{\partial f(t, x)}{\partial t},$
where $J(t, x) = \partial f(t, x)/\partial x$. To deal with disturbances during the implementation of zeroing neurodynamic models for time-varying nonlinear equations, Li et al. [63] further proposed the following error dynamics:
$\dot{e} = -\alpha \Phi(e) - \beta \int_0^t \Psi(e(\tau)) \, \mathrm{d}\tau,$
where $\Phi(\cdot)$ and $\Psi(\cdot)$ are element-wise activation functions like $\phi(\cdot)$, while $\alpha > 0 \in \mathbb{R}$ and $\beta > 0 \in \mathbb{R}$ are gain parameters. Based on the error dynamics (18), the following zeroing neurodynamic model is developed for the time-varying nonlinear equation above:
$J(t, x) \dot{x} = -\alpha \Phi(f(t, x)) - \frac{\partial f(t, x)}{\partial t} - \beta \int_0^t \Psi(f(\tau, x(\tau))) \, \mathrm{d}\tau.$
To remove the requirement of complicated nonlinear activation functions for finite-time convergence and the integral operation for improving robustness, Dai et al. [64] proposed the following error dynamics to develop zeroing neurodynamic models for solving the time-varying nonlinear equation above:
$\|e\|_2 \, \dot{e} = -\lambda e,$
by which the corresponding zeroing neurodynamic model is given as
$\|f(t, x)\|_2 \, J(t, x) \dot{x} = -\lambda \phi(f(t, x)) - \|f(t, x)\|_2 \, \frac{\partial f(t, x)}{\partial t}.$
It has been proved in [64] that the zeroing neurodynamic model (21) has finite-time convergence for solving time-varying nonlinear equations even if there are bounded disturbances. Although this approach achieves finite-time convergence under bounded disturbances, it remains sensitive to vanishing error norms near convergence, potentially resulting in numerical stiffness or discontinuities in digital implementation.
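A scalar illustration of the basic zeroing model (17) is sketched below for an assumed time-varying nonlinear equation; the example function, gain, and step size are illustrative, and a linear activation is used for simplicity.

import numpy as np

# Illustrative scalar time-varying nonlinear equation f(t, x) = x^3 + x - sin(t) = 0.
def f(t, x):      return x**3 + x - np.sin(t)
def J(t, x):      return 3.0 * x**2 + 1.0      # partial f / partial x
def df_dt(t, x):  return -np.cos(t)            # partial f / partial t

lam = 10.0     # gain lambda
dt = 1e-3      # Euler step size
x = 1.0        # initial guess

t = 0.0
for _ in range(int(5.0 / dt)):
    # zeroing model (17): J(t,x) xdot = -lam * phi(f(t,x)) - df/dt, with phi taken linear here
    xdot = (-lam * f(t, x) - df_dt(t, x)) / J(t, x)
    x = x + dt * xdot
    t += dt

print("residual |f(5, x)| at t = 5:", abs(f(5.0, x)))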
By following the idea of zeroing neurodynamics for addressing linear equations and nonlinear equations, zeroing neurodynamics has also been applied to time-varying matrix inversion [65,66,67,68,69,70,71], time-varying Moore–Penrose inversion [72,73], and time-varying matrix equations [74,75,76,77,78,79,80,81,82]. A general time-varying matrix equation is given as
$A(t) X(t) B(t) = C(t),$
with A ( t ) , B ( t ) , and C ( t ) denoting the smooth dynamic coefficient matrices, for which the error function can be defined as
$E(t) = A(t) X(t) B(t) - C(t)$
and the error dynamics can be designed as
$\dot{E} = -\lambda \phi(E).$
Then, the following zeroing neurodynamic model can be obtained for the above matrix equation:
$A(t) \dot{X}(t) B(t) = -\lambda \phi(A(t) X(t) B(t) - C(t)) - \dot{A}(t) X(t) B(t) - A(t) X(t) \dot{B}(t) + \dot{C}(t).$
Let $\mathrm{vec}(A)$ denote the vectorization of a matrix $A$, which stacks its columns into a column vector, i.e., $\mathrm{vec}(A) = [A_{:,1}^T, A_{:,2}^T, \ldots, A_{:,n}^T]^T$. Let $\otimes$ denote the Kronecker product. Then, the zeroing neurodynamic model (25) can be rewritten as
$(B^T(t) \otimes A(t)) \, \mathrm{vec}(\dot{X}(t)) = -\lambda \phi\big((B^T(t) \otimes A(t)) \, \mathrm{vec}(X(t)) - \mathrm{vec}(C(t))\big) - (B^T(t) \otimes \dot{A}(t)) \, \mathrm{vec}(X(t)) - (\dot{B}^T(t) \otimes A(t)) \, \mathrm{vec}(X(t)) + \mathrm{vec}(\dot{C}(t)),$
which can be used for the implementation of model (25) in digital computers. Despite their generality, such formulations often suffer from scalability issues when applied to large-scale matrix systems, and Kronecker-based representations introduce high memory and computational costs.
Dai et al. [83] showed that zeroing neurodynamics can also be adopted to handle the following time-varying linear inequality:
$A(t) x \leq b(t),$
where the operator "$\leq$" works in an element-wise manner, by converting the time-varying linear inequality into the following equation:
$\max\{0, A(t) x(t) - b(t)\} = 0.$
Zeroing neurodynamics has also been used to handle the following unconstrained time-varying optimization problem:
$\min_{x \in \mathbb{R}^n} \; h(t, x),$
where $h(t, x) \in \mathbb{R}$ is a convex smooth function with respect to $x \in \mathbb{R}^n$ and the time variable $t \in \mathbb{R}$. By convex optimization theory, the above minimization problem can be converted into the following problem:
$\frac{\partial h(t, x)}{\partial x} = 0,$
which is a time-varying nonlinear equation and can thus be solved by using zeroing neurodynamics [84,85]. Such an idea has also been extended to equality-constrained time-varying quadratic minimization and even quadratic programs with inequality constraints [86,87,88,89,90]. By combining zeroing neurodynamics with evolutionary computation techniques, nonconvex time-varying optimization problems can also be solved [91,92,93,94]. The efficiency of zeroing neurodynamics for computation problems makes it a promising alternative for handling motion control problems in robotics, especially the redundancy resolution problem of manipulators [95,96,97,98,99]. In robotics, the inverse kinematics problem requires finding the joint-space configuration for a given end-effector configuration, and the mapping between the joint vector and the end-effector pose vector is a nonlinear function. Due to redundancy, for which the number of joints of a serial manipulator is larger than the degrees of freedom needed to complete a given task, there are infinitely many solutions to the nonlinear equation of the inverse kinematics, which makes it possible to optimize the motion of the manipulators. By using zeroing neurodynamics, constraints on the motion at the joint-angle level can be converted into constraints at the joint-velocity level, which are affine constraints.

3.3. Projection Neurodynamics

Being widely deployed to solve constrained optimization problems, projection neurodynamics is characterized by the usage of projection operators. Given a closed set $\Omega \subseteq \mathbb{R}^n$, the projection of a vector $x \in \mathbb{R}^n$ onto $\Omega$ is defined as
$P_\Omega(x) = \operatorname{argmin}_{y \in \Omega} \|x - y\|_2.$
In the following, we review the classical applications and principles of projection neurodynamics.
The minimum-velocity-norm redundancy resolution problem of robot manipulators can be formulated as the following optimization problem:
$\min_{u \in \mathbb{R}^n} \; \frac{1}{2} u^T u \quad \text{subject to} \;\; \dot{r}_d = J(\theta) u - \nu (r_d(t) - f(\theta)), \;\; \eta^- \leq u \leq \eta^+,$
where $J(\theta)$ is the kinematic Jacobian, $u = \dot{\theta}$ is the joint velocity vector of the manipulator while $\theta$ is the joint angle vector, $r_d$ is the reference trajectory for the end-effector, $\eta^-$ is the lower bound vector of the joint velocity while $\eta^+$ is the upper bound of the joint velocity vector, $f(\theta)$ is the mapping from the joint angle space to the workspace, and $\nu > 0 \in \mathbb{R}$ is a parameter for velocity feedback. Let $\Omega = \{x \mid \eta^- \leq x \leq \eta^+\}$. A classical projection neurodynamic model for the redundancy resolution problem (32) is given as follows [100,101]:
$\dot{u} = \varepsilon(-u + P_\Omega(J^T(\theta) \lambda)), \quad \dot{\lambda} = \varepsilon(\dot{r}_d - J(\theta) u + \nu (r_d(t) - f(\theta))),$
with ε > 0 R denoting the design parameter. The projection neurodynamic model can also deal with the case that constraints on the varying rate of the joint velocity are imposed. For example, in [100], an additional constraint is added into the redundancy resolution problem (32), i.e.,
$\|\dot{u}\|_2 \leq C,$
for which a modified projection neurodynamic model was developed to deal with redundancy resolution at the velocity level while considering joint acceleration constraints:
$\rho \dot{u} = \varepsilon(-u + P_\Omega(J^T(\theta) \lambda)), \quad \rho \dot{\lambda} = \varepsilon(\dot{r}_d - J(\theta) u + \nu (r_d(t) - f(\theta))),$
where the adaptive parameter $\rho$ is defined as
$\rho = \frac{\|\varepsilon(-u + P_\Omega(J^T(\theta) \lambda))\|_2}{\|P_C(\varepsilon(-u + P_\Omega(J^T(\theta) \lambda)))\|_2},$
with $C = \{x \in \mathbb{R}^n \mid \|x\|_2 \leq C\}$. By the definition of the projection rule, the calculation formula of $P_C(x)$ is given as follows:
$P_C(x) = \begin{cases} x, & \text{if } \|x\|_2 \leq C, \\ \dfrac{C x}{\|x\|_2}, & \text{otherwise}. \end{cases}$
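For reference, the two projection operators used above, namely the box projection onto Ω and the ball projection P_C, admit simple closed forms; the following Python sketch implements both with illustrative bounds.

import numpy as np

def project_box(x, eta_minus, eta_plus):
    # projection onto the box {x | eta_minus <= x <= eta_plus}: element-wise clipping
    return np.clip(x, eta_minus, eta_plus)

def project_ball(x, C):
    # projection onto the 2-norm ball {x | ||x||_2 <= C}
    norm_x = np.linalg.norm(x)
    return x if norm_x <= C else C * x / norm_x

# Illustrative usage
x = np.array([1.5, -2.0, 0.3])
print(project_box(x, -1.0, 1.0))   # clipped element-wise to [-1, 1]
print(project_ball(x, 2.0))        # rescaled to have 2-norm equal to 2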
By formulating the redundancy resolution or kinematic control problem of redundant manipulators as constrained optimizations, extensive effort has been put into the design of neurodynamics-based control laws [102]. However, while projection neurodynamics provides a systematic approach to incorporating constraints via projection operators, it often relies on the availability of an accurate and computationally tractable projection operator, which may become computationally intensive in high-dimensional or nonconvex constraint sets. Moreover, the convergence behavior of the dynamic system can be sensitive to the choice of the projection set geometry and the regularization parameters.
One of the advantages of projection neurodynamics for handling such a problem is the capability to deal with kinematic uncertainty, for which the Jacobian matrix J ( θ ) is unknown. A general kinematic control problem of redundant manipulators with unknown D-H parameters is formulated as follows [103]:
$\min_{u \in \mathbb{R}^n} \; \frac{1}{2} u^T A u + b^T u \quad \text{subject to} \;\; \dot{r}_d = J(\theta) u - \nu (r_d(t) - f(\theta)), \;\; J(\theta) = W \varphi(\theta), \;\; \eta^- \leq u \leq \eta^+,$
where $A \in \mathbb{R}^{n \times n}$ is a symmetric positive definite matrix and $b \in \mathbb{R}^{n}$ is a vector, $W$ is an unknown constant matrix, and $\varphi(\theta)$ is a known matrix whose elements are functions of $\theta$. Zhang et al. [103] proposed the following adaptive projection neurodynamic model to solve problem (38):
$\dot{\check{u}} = \varepsilon(-\check{u} + P_\Omega(\check{u} - A\check{u} - b + \varphi^T(\theta) \hat{W}^T \lambda)), \quad \dot{\lambda} = \varepsilon(\dot{r}_d - \hat{W}\varphi(\theta)\check{u} + \nu(r_d(t) - f(\theta))), \quad \dot{\hat{W}} = -\varepsilon(\hat{W}\varphi(\theta)\check{u} - \dot{r})\, u^T \varphi^T(\theta), \quad \check{u} = u + \rho,$
where $\rho$ is bounded independent and identically distributed random noise with zero mean and $r$ is the end-effector pose of the manipulator. In the projection neurodynamic model (39), $\hat{W}$ is an estimate of $W$ used to reconstruct the actual Jacobian matrix of the forward kinematics. It has been proved in [103] that model (39) ensures parameter convergence of $\hat{W}$ to the theoretical parameter $W$ while finding the optimal solution to (38) with user-specified accuracy. Projection neurodynamics has also been applied to kinematic teleoperation of serial manipulators under deception attacks [104,105]. Despite its practical flexibility, the reliance on gradient-type dynamics and the linear feedback structure of projection neurodynamic models limits their ability to handle global nonlinearities or nonconvexities in the objective and constraints. This makes it challenging to extend such models to more complex multi-task coordination scenarios or discontinuous control tasks without substantial reformulation.
Projection neurodynamics has also been considered for solving visual servoing problems of manipulators like image-based visual servoing, where the motion control of manipulators is based on image sensing results [106,107]. In [106,107], the image-based visual servoing problem of serial manipulators is formulated as
$\min_{u \in \mathbb{R}^n} \; \frac{1}{2} u^T u \quad \text{subject to} \;\; -\gamma (s - s_d) = J_{\mathrm{img}}(s, z) J_r(\theta) u, \;\; \eta^- \leq u \leq \eta^+,$
where $s \in \mathbb{R}^2$ and $s_d \in \mathbb{R}^2$ denote the actual and desired feature coordinates in the image plane, respectively; $J_{\mathrm{img}}(s, z)$ is the image Jacobian matrix with $z$ denoting the image depth, and $J_r(\theta)$ is the Jacobian matrix of the forward kinematics of the manipulator. Traditionally, when dealing with visual servoing problems, there are two steps, i.e., finding the mapping from the feature point to the end-effector pose and obtaining the joint configuration from the desired end-effector pose. Problem (40) combines these into a single step by directly solving an optimization problem, and includes the joint constraints of the manipulator, which is more desirable for practical visual servoing. A projection neurodynamic model for handling (40) is given as
$\dot{u} = \varepsilon(-u + P_\Omega(J_r^T(\theta) J_{\mathrm{img}}^T(s, z) \lambda)), \quad \dot{\lambda} = \varepsilon(-J_{\mathrm{img}}(s, z) J_r(\theta) u - \nu (s - s_d)).$
The above model requires that the depth information z is available. For the case where the depth information is not available, projection neurodynamic models can also be designed to solve the image-based visual servoing problem (40) [108]. Nevertheless, the effectiveness of projection neurodynamics in visual servoing tasks largely depends on accurate depth information and image Jacobians, which are difficult to acquire robustly under occlusions, varying lighting conditions, or dynamic environments. These dependencies raise robustness concerns and indicate a need for integrating more adaptive or learning-based perception models into the framework.
Projection neurodynamics is also useful for designing near-optimal control laws for nonlinear systems [109,110,111,112,113,114,115]. For example, consider the following underactuated nonlinear system:
$\dot{x} = f(x) + g(x) u, \quad y = h(x),$
with $x \in \mathbb{R}^n$, $u \in \mathbb{R}^r$, and $y \in \mathbb{R}^r$ being its state, input, and output vectors, respectively. A finite-horizon optimal control problem of the above system is described as
$\min_{u \in \mathbb{R}^r} \; J(t) \quad \text{subject to} \;\; \dot{x} = f(x) + g(x) u, \;\; y = h(x), \;\; u^- \leq u \leq u^+,$
where $u^-$ and $u^+$ are the lower and upper bounds of the input vector, respectively, and the performance index $J(t)$ is defined as
$J(t) = \int_0^T (y(t + \tau) - y_d(t + \tau))^T Q \, (y(t + \tau) - y_d(t + \tau)) \, \mathrm{d}\tau,$
where $T > 0 \in \mathbb{R}$ represents the length of the horizon, $y_d(t)$ represents the desired output at time instant $t$, and $Q \in \mathbb{R}^{r \times r}$ is a positive definite symmetric matrix. In [109,110,111,112,113,114,115], it has been shown that the above optimal control problem can be converted into a quadratic programming problem by approximating $y(t + \tau)$ and $y_d(t + \tau)$ using Taylor expansion [116]. Then, a projection neurodynamic model was adopted for solving the problem:
$\lambda \dot{u} = -u + P_\Omega(u - 2 \Xi u - p),$
where $\Xi = \kappa (L_g L_f^{\rho - 1} h(x))^T Q \, L_g L_f^{\rho - 1} h(x)$, $\Omega = \{u \mid u^- \leq u \leq u^+\}$, $p = 2 (L_g L_f^{\rho - 1} h(x))^T Q^T E v^T$, $E = Y_d - Y$, and the definitions of $Y$, $Y_d$, and $\kappa$ can be found in [109]. The above design can also be extended to cope with model uncertainty of the nonlinear systems as well as external disturbances such as periodic disturbances [109,112,113,114,115]. In terms of parameter uncertainty, auxiliary systems can be designed to reconstruct the dynamics of the nonlinear system, based on which the design of the projection neurodynamic model can be conducted. Yet, most projection-based neurodynamic schemes rely on continuous-time formulations, and their discrete-time implementations often lack rigorous convergence analysis. This gap between theory and implementation deserves more attention, particularly in real-time embedded robotic systems where sampling effects and model uncertainties can significantly affect performance. The properties of classical neurodynamics mentioned in this paper are compared in Table 2.

4. Neurodynamics for Multi-Agent Systems

In the past two decades, there have been a lot of papers on multi-agent systems, covering distributed computation and control. It is interesting that the past decade has witnessed the development of neurodynamics-based methods for multi-agent systems.

4.1. Neurodynamics for Distributed Competition

One of the attractive areas of neurodynamic applications is the k-winners-take-all (k-WTA), which selects the k largest inputs among n inputs ( n > k > 0 ) where each agent owns one input. The k-winners-take-all is the extension of winner-take-all which is the special case of k-WTA with k = 1 . Both k-WTA and WTA can be used to model competition behaviors in social and biological networks [117].
Suppose that there is a network of $n$ agents whose communication graph is described by a graph Laplacian matrix $L$. Each agent $i$ has a decision input $x_i$ and a decision output $z_i$. Let $\bar{x}_k$ denote the $k$-th largest input among all the $n$ inputs of the agents. For k-WTA, the relationship between each input and output is defined as follows [118]:
$z_i = \begin{cases} 1, & \text{if } x_i \geq \bar{x}_k, \\ 0, & \text{otherwise}. \end{cases}$
Agents with output $z_i = 1$ are called the winners, while those with $z_i = 0$ are called the losers. Although the binary output of the k-WTA model is appealing for its simplicity, such a hard selection mechanism may not be robust to small fluctuations or noise in the input signals, potentially leading to instability in dynamic or uncertain environments. The k-WTA problem can be formulated as the following quadratic program:
$\min_{z \in \mathbb{R}^n} \; \frac{\alpha}{2} z^T z - x^T z \quad \text{subject to} \;\; \mathbf{1}^T z = k, \;\; \mathbf{0} \leq z \leq \mathbf{1},$
where $\alpha > 0 \in \mathbb{R}$ is a parameter, $\mathbf{0} \in \mathbb{R}^n$ is the vector with all elements being 0, and $\mathbf{1} \in \mathbb{R}^n$ is the vector with all elements being 1. It was proved in [119] that the solution to the quadratic program (47) satisfies the k-WTA operation if $\alpha$ satisfies
$0 < \alpha \leq |\bar{x}_k - \bar{x}_{k+1}|,$
where $\bar{x}_{k+1}$ denotes the $(k+1)$-th largest element among the inputs $x_i$.
Based on the above quadratic program, a classical projection neurodynamic model for k-WTA operation is given as follows [120]:
$\dot{\chi} = \gamma\Big(-\mathbf{1}^T z + k\Big), \quad z = P_\Omega\!\left(\frac{\mathbf{1}\chi + x}{\alpha}\right),$
where $\Omega = \{z \mid \mathbf{0} \leq z \leq \mathbf{1}\}$, $\gamma > 0 \in \mathbb{R}$, $x = [x_1, x_2, \ldots, x_n]^T$, and $z = [z_1, z_2, \ldots, z_n]^T$. The neurodynamics-based k-WTA model (49) has global asymptotic convergence. With (49), for each agent $i$, the dynamics for computing $z_i$ is given as
$\dot{\chi} = \gamma\Big(-\sum_{i=1}^n z_i + k\Big), \quad z_i = g\!\left(\frac{\chi + x_i}{\alpha}\right),$
where $g(x) = P_{\bar{\Omega}}(x)$ with $\bar{\Omega} = \{x \in \mathbb{R} \mid 0 \leq x \leq 1\}$. Evidently, for each agent $i$, to update the decision output $z_i$, it needs to know the updated value of $\chi$, whose computation relies on the decision outputs of all the agents in the network. Thus, the neurodynamics-based k-WTA model (49) is centralized. This raises concerns about scalability, fault tolerance, and single-point-of-failure vulnerabilities, which are often overlooked in theoretical designs but are crucial for deployment in autonomous and resilient multi-agent systems. A centralized model has several disadvantages. Firstly, the model is not suitable for large-scale networks, as it would impose great requirements on the communication and computation capacity of the agent dealing with the computation of $\chi$. Secondly, centralized models are vulnerable to failures of central nodes. Thus, distributed k-WTA models are worth investigating.
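The behavior of the centralized model (49)–(50) can be illustrated with a short simulation; the inputs, gains, and step size in the following Python sketch are assumed for illustration only.

import numpy as np

x = np.array([0.9, 0.1, 0.5, 0.7, 0.3])   # illustrative competition inputs of n = 5 agents
k = 2                                     # number of winners to select
alpha = 0.05                              # satisfies 0 < alpha <= gap between k-th and (k+1)-th inputs
gamma = 50.0
dt = 1e-3

chi = 0.0
for _ in range(20000):
    z = np.clip((chi + x) / alpha, 0.0, 1.0)     # z_i = P_[0,1]((chi + x_i)/alpha)
    chi = chi + dt * gamma * (k - z.sum())       # chi integrates the constraint residual k - sum(z)

print("outputs z:", np.round(z, 3))   # expected to approach [1, 0, 0, 1, 0]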
The first distributed k-WTA model is based on projection neurodynamics; its design builds on the centralized k-WTA model (49) and consensus filters, and the obtained projection neurodynamic model is described as follows [121]:
$\varepsilon \dot{u} = -C_0 y - z + \frac{k}{n}\mathbf{1}_n - C_0 L u, \quad \varepsilon \dot{y} = -L(y + z), \quad z = P_\Omega\!\left(\frac{u + x}{\alpha}\right),$
where $u = [u_1, u_2, \ldots, u_n]^T$ and $y = [y_1, y_2, \ldots, y_n]^T$ are auxiliary vectors, and $C_0 > 0 \in \mathbb{R}$ and $\varepsilon > 0 \in \mathbb{R}$ are design parameters. Based on (51), each agent in the network has two auxiliary variables, i.e., $u_i$ and $y_i$, and the dynamics of each agent's states is
$\varepsilon \dot{u}_i = -C_0 y_i - z_i + \frac{k}{n} - C_0 \sum_{j \in N(i)} w_{ij} (u_i - u_j), \quad \varepsilon \dot{y}_i = -\sum_{j \in N(i)} w_{ij} (y_i + z_i - y_j - z_j), \quad z_i = P_{\bar{\Omega}}\!\left(\frac{u_i + x_i}{\alpha}\right).$
Evidently, in view of (52), the k-WTA model (51) is distributed, since updating each agent's state variables only requires its own state information and the information of neighboring agents. As proved in [121], if $y(0) = \mathbf{0}$, $\lambda_{2\min}(L) \geq 7.5 C_0$, and the communication graph of the network is undirected and connected, where $\lambda_{2\min}$ denotes the algebraic connectivity of the Laplacian matrix, then the output $z$ of the distributed k-WTA model (51) based on projection neurodynamics globally asymptotically converges to the solution of the k-WTA problem. The requirement on the initialization of $y$ and the setting of parameter $C_0$ could limit the deployment of the model in engineering applications. While the distributed implementation alleviates some scalability issues, the model's reliance on multiple auxiliary variables and tightly tuned parameters (such as $y(0) = \mathbf{0}$ and $\lambda_{2\min}(L) \geq 7.5 C_0$) could hinder practical deployment in dynamically changing networks where such initialization and global spectral properties are difficult to guarantee.
To address the limitations of the neurodynamic model (51) for the k-WTA operation, by observing that $y(0) = \mathbf{0}$ and that its steady-state value is also $\mathbf{0}$ for undirected connected communication topologies modeled by the Laplacian matrix $L$, the following neurodynamic model was proposed [122]:
$\varepsilon \dot{u} = -z + \frac{k}{n}\mathbf{1}_n - L u, \quad z = P_\Omega\!\left(\frac{u + x}{\alpha}\right),$
for which the dynamics of each agent is given as
$\varepsilon \dot{u}_i = -z_i + \frac{k}{n} - \sum_{j \in N(i)} w_{ij} (u_i - u_j), \quad z_i = P_{\bar{\Omega}}\!\left(\frac{u_i + x_i}{\alpha}\right).$
Evidently, compared with neurodynamic model (51), neurodynamic model (53) has a simpler structure, making its computational efficiency more favorable. It has been proved in [122] that, if $\bar{x}_k - \bar{x}_{k+1} > 2\alpha(n \lambda_{2\min}^{-1}(L) + 1)$ with $\alpha > 0$, then the output $z$ of neurodynamic model (53) globally asymptotically converges to the solution of the k-WTA problem. An interesting property of the k-WTA model (53) is its inherent privacy-protection capability. If the competition input $x_i$ is private data that agents do not want to share with neighboring agents, then neurodynamic model (51) may not be desirable. Given that $z_i \in [0, 1]$, by the definition of the projection operator, we have $x_i = \alpha z_i - u_i$ when the projection is not saturated, which means that obtaining the value of $z_i$ indicates obtaining the value of $x_i$. For neurodynamic model (51), agents need to share $z_i$ with neighboring agents, while such a requirement is not needed for neurodynamic model (53).
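A sketch of the distributed model (53) on an assumed five-agent ring network is given below; each agent's update only uses its own state and its neighbors' states through the Laplacian, and the chosen α satisfies the gap condition stated above for this particular input vector. All numerical values are illustrative.

import numpy as np

# Illustrative ring network of n = 5 agents with unit edge weights.
n, k = 5, 2
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

x = np.array([0.9, 0.1, 0.5, 0.7, 0.3])   # private competition inputs
alpha = 0.01
eps = 0.1
dt = 1e-4

u = np.zeros(n)
for _ in range(200000):
    z = np.clip((u + x) / alpha, 0.0, 1.0)
    # eps * du/dt = -z + (k/n) 1 - L u  (each agent only uses its own and neighbors' u)
    u = u + dt * (-z + k / n - L @ u) / eps

print("winner indicators z:", np.round(z, 3))   # expected to select the two largest inputs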
The above two distributed k-WTA models are essentially based on a centralized k-WTA model by performing distributed estimation of centralized terms. Such a design intrinsically yields a two-step convergence process for finding the k-WTA solution; that is, convergence of estimators for centralized terms followed by convergence of the equivalent centralized k-WTA model, making the overall convergence speed slow. In [123], it is shown that the k-WTA problem can be formulated into the following form:
$\min_{z \in \mathbb{R}^n} \; \frac{\alpha}{2} z^T z - x^T z \quad \text{subject to} \;\; z = \frac{k}{n}\mathbf{1}_n + L y, \;\; \mathbf{0} \leq z \leq \mathbf{1},$
where the Laplacian matrix L of the undirected connected communication graph is embedded, and y is an unknown vector to be found. The problem description (55) allows direct design of distributed k-WTA models based on projection neurodynamics, and the following neurodynamic model was designed [123]:
$\varepsilon \dot{z} = -z + P_\Omega\!\left(\frac{x - \lambda}{\alpha}\right), \quad \varepsilon \dot{y} = \frac{L\lambda}{\alpha}, \quad \varepsilon \dot{\lambda} = \frac{x}{\alpha} - \frac{k}{n\alpha}\mathbf{1}_n - \frac{L y}{\alpha} - \frac{L \lambda}{\alpha},$
by which the state evolution of each agent is described by
$\varepsilon \dot{z}_i = -z_i + P_{\bar{\Omega}}\!\left(\frac{x_i - \lambda_i}{\alpha}\right), \quad \varepsilon \dot{y}_i = \frac{1}{\alpha}\sum_{j \in N(i)} w_{ij} (\lambda_i - \lambda_j), \quad \varepsilon \dot{\lambda}_i = \frac{x_i}{\alpha} - \frac{k}{n\alpha} - \frac{1}{\alpha}\sum_{j \in N(i)} w_{ij} (y_i - y_j) - \frac{1}{\alpha}\sum_{j \in N(i)} w_{ij} (\lambda_i - \lambda_j),$
indicating that k-WTA model (56) based on projection neurodynamics is also distributed. One of the advantages of the design for k-WTA model (56) based on the novel formulation of the k-WTA problem with embedded graph Laplacian matrices is the ability to cope with various local constraints on decision outputs z i . In practice, agents generally have computation, actuation, communication, and sensing abilities. However, in some exceptional cases, such as system faults, agents may lose actuation ability, making them not suitable for performing tasks, but they are still able to propagate information. One of the potential applications of distributed k-WTA models is distributed task allocations, and k-WTA model (56) can be easily extended to deal with local constraints, which is illustrated in [123] in detail.
In [41], the performance of a distributed k-WTA model on various types of communication graphs is tested using extensive computer simulations. It is shown that the distributed k-WTA model works well on time-varying undirected connected graphs and jointly connected undirected communication graphs. In [41], an application of the distributed k-WTA model based on projection neurodynamics is illustrated to assign tasks to mobile robots. The dynamics of the mobile robots is described as follows:
$\dot{p}_i = v_i,$
with $i = 1, 2, \ldots, n$, where $p_i = [p_{ix}, p_{iy}]^T$ and $v_i = [v_{ix}, v_{iy}]^T$ represent the position and velocity input of each robot. There is a moving target whose real-time position is denoted by $p_t = [p_{tx}, p_{ty}]^T$. The communication graph of the mobile robots is undirected and connected with Laplacian matrix $L$. The objective is to capture the moving target using a competition mechanism, for which, at each time, only $k$ mobile robots track the moving target, and the assignment of mobile robots to the tracking task is based on the distances from the mobile robots to the moving target. To this end, the competition input of every agent $i$ is defined as
$u_i = -\|p_i - p_t\|_2,$
and the velocity input of each mobile robot is given as
$v_i = -z_i c \, \phi(p_i - p_t) + z_i \dot{p}_t,$
with $c > 0 \in \mathbb{R}$ being a design parameter, $z_i \in [0, 1]$ the task assignment signal generated by a k-WTA model, and $\phi$ an activation function applied element-wise and defined as
$\phi(y) = \begin{cases} |y|^{r} + |y|^{1/r}, & \text{if } y > 0, \\ 0, & \text{if } y = 0, \\ -|y|^{r} - |y|^{1/r}, & \text{if } y < 0, \end{cases}$
where $r \in (0, 1)$ is a function parameter.
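The target-capture scheme can be sketched as follows; for brevity, the task-assignment signal z is produced here by directly sorting the competition inputs instead of running a k-WTA neurodynamic model, and all positions, gains, and parameters are illustrative.

import numpy as np

def phi(y, r=0.5):
    # element-wise odd activation: sign(y) * (|y|^r + |y|^(1/r))
    return np.sign(y) * (np.abs(y) ** r + np.abs(y) ** (1.0 / r))

n, k, c = 5, 2, 2.0
dt = 1e-3
p = np.random.uniform(-5.0, 5.0, size=(n, 2))   # robot positions

for step in range(5000):
    t = step * dt
    p_t = np.array([np.cos(t), np.sin(t)])       # moving target position
    dp_t = np.array([-np.sin(t), np.cos(t)])     # target velocity
    u = -np.linalg.norm(p - p_t, axis=1)         # competition inputs: negative distances
    # task-assignment signal z; a sort stands in for the k-WTA neurodynamic model here
    z = np.zeros(n)
    z[np.argsort(u)[-k:]] = 1.0
    v = -z[:, None] * c * phi(p - p_t) + z[:, None] * dp_t
    p = p + dt * v

print("final distances to target:", np.round(np.linalg.norm(p - p_t, axis=1), 3))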
Overall, the neurodynamics-based k-WTA models represent a promising direction for distributed decision-making. However, current models are primarily validated in deterministic and idealized settings, with limited investigation into their robustness under model uncertainties, communication delays, or asynchronous updates, factors that are intrinsic to most real-world multi-agent systems. Bridging this theory–application gap is a critical avenue for future research.

4.2. Neurodynamics for Distributed Coordination

The coordination of multi-agent systems is a powerful tool to enlarge the ability of agents to complete complicated tasks that may not be finished by a single agent, which is useful in various fields such as robotics and embedded systems [124,125,126,127,128,129,130,131]. By coordination, we mean that agents work cooperatively to complete a given task. In the scenario of distributed coordination, agents are located in different positions and complete tasks cooperatively by leveraging their sensing, communication, computation, and actuation abilities [132]. However, current coordination models often assume ideal communication links and perfectly synchronized updates, which may not hold in practical scenarios involving time delays, packet loss, or asynchronous events. These assumptions limit the robustness and real-time applicability of some proposed algorithms.
Consider a multi-agent system with $n$ agents whose communication topology is represented by an undirected connected graph with Laplacian matrix $L$. The agent dynamics is given as
$\dot{x}_i = v_i, \quad \dot{v}_i = u_i,$
where $x_i$, $v_i$, and $u_i$ denote the position, velocity, and control input of the $i$-th agent. The objective is to design distributed control inputs $u_i$ to reach state consensus with $x_1 = x_2 = \cdots = x_n$. The agents have limited actuation capabilities, modeled by $u_i^- \leq u_i \leq u_i^+$, where $u_i^-$ and $u_i^+$ are the lower and upper bounds of the input of the $i$-th agent. The agent dynamics is written in a compact form as follows:
$\dot{x} = v, \quad \dot{v} = u,$
where $x = [x_1, x_2, \ldots, x_n]^T \in \mathbb{R}^n$, $v = [v_1, v_2, \ldots, v_n]^T \in \mathbb{R}^n$, and $u = [u_1, u_2, \ldots, u_n]^T \in \mathbb{R}^n$. The input constraints of the agents are also written in a compact form as $u \in \Omega$, where $\Omega = \{u \in \mathbb{R}^n \mid u^- \leq u \leq u^+\}$, with $u^- = [u_1^-, u_2^-, \ldots, u_n^-]^T \in \mathbb{R}^n$ and $u^+ = [u_1^+, u_2^+, \ldots, u_n^+]^T \in \mathbb{R}^n$ denoting the lower and upper bounds of the control inputs. In [42], a projection neurodynamics-based method is proposed to address the consensus problem within an optimal control framework, where three performance indices are considered and the corresponding distributed consensus protocols are designed and tested, as shown in Table 3. Using different performance indices, different purposes can be achieved. For example, the performance index $J(t) = \int_0^T x^T(t + \tau) L x(t + \tau) \, \mathrm{d}\tau + \frac{1}{2} u^T(t) u(t)$ ensures that the velocity of the agents is zero when consensus is reached, which means that the positions of the agents will eventually be static. The consensus approach proposed in [42] can be readily extended to multi-dimensional multi-agent systems, where the position state of each agent is a vector instead of a scalar, by following the method in [133]. It was later shown in [134] that the method proposed in [42] can also be extended to the case where privacy protection of the agents' initial positions is necessary, by leveraging homomorphic encryption techniques such as the Paillier cryptosystem. These methods typically rely on convexity assumptions and predefined communication topologies. In realistic environments with dynamic graphs or nonconvex performance surfaces, the stability and convergence properties of projection-based neurodynamics may not be preserved without significant modifications.
A multiobjective optimization problem is described as
$\min \; f(x) = [f_1(x), f_2(x), \ldots, f_k(x)]^T \quad \mathrm{s.t.} \;\; x \in \Gamma,$
where $x \in \mathbb{R}^n$ is the decision variable, and $\Gamma = \{x \in \mathbb{R}^n \mid Ax \leq b, \; g(x) \leq 0, \; x \in \Omega\}$ is called the feasible region; $f(x) \in \mathbb{R}^k$ is a vector of objective functions with $f_i(x) \in \mathbb{R}$ being the $i$-th objective function; $\Omega$ is a closed convex set. The problem is assumed to be convex, for which all the objective functions are convex and the feasible region is also convex. In [135], the multiobjective optimization problem (64) is converted to
$\min \; \sum_{i=1}^{k} w_i f_i(x_i) \quad \mathrm{s.t.} \;\; \mathcal{L} \tilde{x} = 0, \;\; x_i \in \Gamma,$
with $w_i > 0 \in \mathbb{R}$, where $\tilde{x} = \mathrm{col}\{x_1, x_2, \ldots, x_k\} \in \mathbb{R}^{nk}$, $\mathcal{L} = L \otimes I_n$ with $I_n$ representing the $n$-by-$n$ identity matrix, and $L$ is the Laplacian matrix of the undirected connected communication graph. Problem (65) is called a consensus optimization problem, where each agent has a state vector $x_i$ and a local objective function $f_i(x_i)$, and the agents collectively work to reach a common value of $x$ that minimizes the cost function while satisfying all the constraints. A collective projection neurodynamic model was proposed in [135] to solve the above problem, and the dynamics of the $i$-th agent is given by
$\dot{x}_i = -x_i + P_\Omega\!\left(x_i - w_i \frac{\partial f_i(x_i)}{\partial x_i} - \Big(\frac{\partial g(x_i)}{\partial x_i}\Big)^{T} y_i - A^T z_i\right) + \sum_{j \in N(i)} w_{ij} (x_j + \eta_j - x_i - \eta_i), \quad \dot{y}_i = -y_i + (y_i + g(x_i))^{+}, \quad \dot{z}_i = A x_i - b, \quad \dot{\eta}_i = \sum_{j \in N(i)} w_{ij} (x_i - x_j),$
where $(x)^{+} = \max\{x, 0\}$. It was proved in [135] that the neurodynamic model (66) is globally asymptotically stable and can find an accurate solution to the above consensus optimization problem. Based on consensus optimization, projection neurodynamics has also been applied to other distributed optimization problems [136,137,138]. It is worth mentioning that the scalability of such neurodynamic models to large-scale systems remains an open challenge. The computational and communication complexity grows with network size, and the real-time feasibility under strict latency and resource constraints is yet to be thoroughly validated. Projection neurodynamics was also applied to the distributed maneuvering of autonomous surface vehicles in [139] in the presence of unknown kinetics and state constraints. Nonetheless, most existing works still rely on relatively idealized assumptions about model knowledge and network connectivity. Future studies should rigorously investigate the robustness of neurodynamic methods under practical uncertainties, model mismatches, and adversarial communication environments.

4.3. Scalability and Practical Constraints

The issue of scalability is a key consideration in the practical deployment of neurodynamics-based models for multi-agent systems. In this regard, centralized k-WTA model (49) poses evident limitations. As the number of agents increases, the requirement for a central unit to compute the global variable χ becomes increasingly prohibitive in terms of communication and computation, thereby making the model unsuitable for large-scale systems. Moreover, such centralized designs are vulnerable to single points of failure and cannot tolerate the dynamic nature of real-world networks where agents may join or leave.
To mitigate these limitations, distributed models such as (51) and (53) have been developed. These models only require local communication among neighboring agents, which significantly improves scalability and robustness. Nonetheless, practical deployment still faces several constraints. First, the model in (51) requires careful parameter tuning, particularly the choice of C 0 and the initialization y ( 0 ) = 0 . These conditions, while theoretically reasonable, may not always be guaranteed in real-world implementations, especially in asynchronous or delayed communication environments. The improved model (53) addresses some of these issues by eliminating auxiliary variables and reducing parameter sensitivity, leading to a simpler and more scalable architecture. However, this model still relies on continuous-time dynamics and assumes ideal communication among neighbors, which may not hold in practical digital systems with quantization, communication delay, or packet loss.
Furthermore, another practical constraint lies in the need for agents to exchange intermediate variables (e.g., u i or z i ). While the original input x i may be private and not directly shared, these derived variables could still leak sensitive information through reverse engineering. Thus, privacy-preserving mechanisms or noise-injection strategies may be needed for privacy-critical applications. Finally, in large-scale settings, issues such as computation burden per agent, convergence speed, and robustness to network changes (e.g., time-varying topologies or link failures) must also be addressed.

5. Challenges and Opportunities

In this section, based on the above review on the development of neurodynamics for computation and multi-agent systems in the past decade, the challenges and opportunities in the field of neurodynamics are discussed.

5.1. Gradient Neurodynamics for Time-Varying Computation Problems

Recently, it has been reported that gradient neurodynamics based on norm correction can be adopted to solve some time-varying problems with robust finite-time convergence, covering time-varying linear equations, time-varying matrix inversion, time-varying Moore–Penrose inversion, and time-varying Lyapunov equations [51,52,53,54]. There are many time-varying problems that have been extensively investigated using the zeroing neurodynamic method with accurate time-derivative feedback, which is not required in the gradient neurodynamic method, making gradient neurodynamics more favorable for engineering applications where accurately measuring derivatives could be expensive or infeasible. The performance of such gradient neurodynamics on other time-varying computation problems is worth investigating. Meanwhile, discretization techniques for norm-based gradient neurodynamics remain unexplored. Due to the presence of the norm operation, the development of efficient discrete-time models for norm-based gradient neurodynamics is a challenging problem.

5.2. Zeroing Neurodynamics for Distributed Control and Optimization

While zeroing neurodynamics is widely recognized as a powerful tool for handling time-varying problems, the existing zeroing neurodynamic models are centralized and are not applicable to the scenario of distributed control and optimization. Considering the mature techniques for ensuring finite-time convergence and robustness to external disturbances, extending zeroing neurodynamics to distributed control and optimization would significantly enlarge its impact. However, such an extension is challenging due to the implicit dynamics inherent in the conventional design of zeroing neurodynamic models. Thanks to the recent work of [140], it becomes feasible to design explicit zeroing neurodynamic models to solve time-varying nonlinear equations by choosing a norm-type error function. However, the method in [140] still cannot be directly extended to distributed computation scenarios due to the existence of a global error norm. New design techniques are still needed for zeroing neurodynamics to address distributed computation and control problems.

5.3. Projection Neurodynamics for Distributed Coordination

Currently, most existing work on the distributed coordination of multi-agent systems is based on the assumption that the communication graphs are undirected and connected. Considering that actual communication may not be bidirectional and that the communication networks of physical agents may change over time owing to physical factors or cyber attacks, it would be useful to investigate the application of projection neurodynamics to the distributed coordination of multi-agent systems over directed communication topologies and even jointly connected communication topologies [141,142]. In addition, the effects of cyber attacks and the requirement of privacy protection can be considered. In terms of privacy protection, it remains a challenging problem to develop computationally efficient, privacy-preserving distributed coordination methods based on projection neurodynamics with formal privacy analysis such as differential privacy.

5.4. Applications of Neurodynamics

The applications of neurodynamics can be further exploited. For example, in anomaly detection [143,144,145], existing methods ignore the inherently time-varying nature of the problem, in which anomalous samples can change over time. It would be interesting to combine neurodynamics with existing forward neural networks to improve the efficiency and accuracy of detection, and this idea also applies to applications of neurodynamics in the Internet of Things [146,147], social network analysis [148], bioinformatics [149], and games [150,151].

5.5. Practical Deployment Considerations

Despite the promising theoretical performance of neurodynamics in solving optimization and control problems, their practical deployment in real-world systems remains an active area of exploration. Real-time constraints, hardware limitations, and noise sensitivity pose significant challenges for directly applying these models in embedded or edge devices. For instance, while fixed-time and finite-time neurodynamics offer desirable convergence properties, they may require higher computational precision or exhibit numerical stiffness in discretized implementations. Moreover, integrating neurodynamic solvers into larger control architectures (e.g., in autonomous vehicles or industrial robotics) necessitates careful co-design with perception and communication modules. As such, research efforts are increasingly focusing on lightweight, robust, and interpretable neural dynamics that can be deployed under resource-constrained or safety-critical conditions. This highlights the gap between theoretical advances and system-level implementation, and points to important directions for future work.
Some recent works have begun bridging this gap. For example, in [152], neurodynamics-inspired controllers have been tested in real-time path tracking of unmanned aerial vehicles and energy-efficient multi-robot coordination. These deployments often require careful tuning of system parameters and model structure to balance accuracy, convergence speed, and computational load. While still limited in scope, such case studies provide initial evidence of the applicability of neurodynamics in embedded systems and industrial settings.
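To illustrate the precision issue mentioned above, the following Python sketch applies a plain forward-Euler discretization to the finite-time error dynamics $\dot{e} = -\gamma\,\mathrm{sign}(e)\sqrt{|e|}$, a signed-power form of activation commonly associated with finite-time neurodynamics. The continuous-time model reaches zero exactly in finite time, whereas the discretized model leaves a chattering residual that shrinks only as the step size is refined; the specific dynamics and parameters are illustrative assumptions rather than any particular published model.

```python
import numpy as np

def terminal_error(h, gamma=10.0, e0=1.0, T=5.0):
    """Forward-Euler simulation of de/dt = -gamma * sign(e) * sqrt(|e|);
    returns the magnitude of the error after T seconds."""
    e = e0
    for _ in range(int(T / h)):
        e = e - h * gamma * np.sign(e) * np.sqrt(abs(e))
    return abs(e)

# Coarser steps leave a larger chattering residual around zero, even though
# the underlying continuous-time dynamics converge exactly in finite time.
for h in (1e-2, 1e-3, 1e-4):
    print(f"step size {h:g}: terminal |e| ~ {terminal_error(h):.2e}")
```

Such step-size and precision trade-offs are one concrete reason why embedded implementations of finite-time and fixed-time models demand careful numerical co-design.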

5.6. Future Work

Despite the remarkable advancements in the application of neurodynamics to computation and multi-agent systems, several critical directions remain insufficiently explored and offer rich potential for future investigation.
(1)
Resilience and robustness under complex agent interactions: Real-world multi-agent systems are often subject to communication delays, asynchronous updates, adversarial behaviors, and heterogeneous dynamics. There is a pressing need to develop robust neurodynamic models that can maintain stability and coordination performance under such adversarial or uncertain settings. Investigating the interplay between robustness, convergence rate, and computational efficiency in neurodynamics remains an open research challenge.
(2)
High-dimensional and distributed optimization: As multi-agent systems increasingly engage in distributed machine learning, federated optimization, and cooperative decision-making, extending neurodynamic models to high-dimensional, nonconvex, and communication-efficient optimization tasks becomes imperative. This calls for scalable neurodynamic architectures that leverage sparsity, structure exploitation, and decentralization principles.
(3)
Cross-disciplinary integration with deep learning and neuroscience: Inspired by biological systems, the integration of neurodynamics with deep neural architectures or spiking neural networks can bring new perspectives to adaptive control and collective intelligence. On the one hand, embedding neurodynamics into neural learning modules could improve interpretability and dynamical adaptability. On the other hand, introducing biologically plausible mechanisms such as plasticity, inhibition, and homeostasis into neurodynamic control may enrich the expressiveness and learning capacity of multi-agent coordination models.
(4)
Real-world deployment and standardization: A critical step toward broader adoption lies in validating neurodynamic methods in realistic benchmark environments, such as robot swarms, UAV formations, distributed sensor networks, and autonomous vehicle systems. Moreover, establishing standardized evaluation metrics, public codebases, and simulation platforms for neurodynamic algorithms will facilitate reproducibility and practical deployment, helping bridge the gap between theoretical advances and engineering applications.

6. Conclusions

In this paper, we have provided a brief review of neurodynamics for computation and multi-agent systems over the past decade. Gradient neurodynamics, zeroing neurodynamics, and projection neurodynamics have been covered in this survey with respect to their applications to computation problems, and the basic design ideas of the corresponding neurodynamic models have been illustrated. The applications of neurodynamics in multi-agent systems have also been discussed, including distributed competition and distributed coordination. Based on this survey of the related literature from the past decade, research challenges and opportunities of neurodynamics for computation and multi-agent systems have also been identified.

Author Contributions

Conceptualization, B.L. and C.H.; literature search and analysis, V.N.K. and C.H.; writing—original draft preparation, V.N.K. and C.H.; writing—review and editing, V.N.K. and C.H.; visualization, V.N.K. and C.H.; supervision, B.L.; project administration, C.H.; funding acquisition, B.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant number 62466019.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
k-WTA  k-Winners-Take-All

References

  1. Sun, L.; Mo, Z.; Yan, F.; Xia, L. Adaptive feature selection guided deep forest for COVID-19 classification with chest CT. IEEE J. Biomed. Health Inform. 2020, 24, 2798–2805. [Google Scholar] [CrossRef] [PubMed]
  2. Yang, C.; Zhang, Y.; Khan, A.H. Undetectable attack to deep neural networks without using model parameters. In Proceedings of the International Conference on Intelligent Computing, Zhengzhou, China, 10–13 August 2023; pp. 46–57. [Google Scholar]
  3. Sun, Q.; Wu, X. A deep learning-based approach for emotional analysis of sports dance. PeerJ Comput. Sci. 2023, 9, e1441. [Google Scholar] [CrossRef]
  4. Peng, C.; Liao, B. Heavy-head sampling for fast imitation learning of machine learning based combinatorial auction solver. Neural Process. Lett. 2023, 55, 631–644. [Google Scholar] [CrossRef]
  5. Luo, M.; Wang, K.; Cai, Z.; Liu, A.; Li, Y.; Cheang, C.F. Using imbalanced triangle synthetic data for machine learning anomaly detection. CMC Comput. Mater. Contin. 2019, 58, 15–26. [Google Scholar] [CrossRef]
  6. Zhang, Z.; Ding, C.; Zhang, M.; Luo, Y.; Mai, J. DCDLN: A densely connected convolutional dynamic learning network for malaria disease diagnosis. Neural Netw. 2024, 176, 106339. [Google Scholar] [CrossRef]
  7. Chen, L.; Jin, L.; Shang, M. Efficient loss landscape reshaping for convolutional neural networks. IEEE Trans. Neural Netw. 2024, in press. [Google Scholar] [CrossRef]
  8. Jiang, W.; Zhou, K.; Sarkheyli-Hagele, A.; Zain, A.M. Modeling, reasoning, and application of fuzzy Petri net model: A survey. Artif. Intell. Rev. 2022, 55, 6567–6605. [Google Scholar] [CrossRef]
  9. Luo, Z.; Lan, Y.; Zheng, L.; Ding, L. Improved-equivalent-input-disturbance-based preview repetitive control for Takagi-Sugeno fuzzy system with state delay. Int. J. Wavelets Multiresolut. Inf. Process. 2024, 22, 2450022. [Google Scholar] [CrossRef]
  10. Qu, C.; Zhang, L.; Li, J.; Deng, F.; Tang, Y.; Zeng, X.; Peng, X. Improving feature selection performance for classification of gene expression data using Harris Hawks optimizer with variable neighborhood learning. Brief. Bioinform. 2021, 22, bbab097. [Google Scholar] [CrossRef]
  11. Ou, Y.; Qin, F.; Zhou, K.Q.; Yin, P.F.; Mo, L.P.; Mohd Zain, A. An improved grey wolf optimizer with multi-strategies coverage in wireless sensor networks. Symmetry 2024, 16, 286. [Google Scholar] [CrossRef]
  12. Wu, W.; Tian, Y.; Jin, T. A label based ant colony algorithm for heterogeneous vehicle routing with mixed backhaul. Appl. Soft. Comput. 2016, 47, 224–234. [Google Scholar] [CrossRef]
  13. Huang, Z.; Zhang, Z.; Hua, C.; Liao, B.; Li, S. Leveraging enhanced egret swarm optimization algorithm and artificial intelligence-driven prompt strategies for portfolio selection. Sci. Rep. 2024, 14, 26681. [Google Scholar] [CrossRef] [PubMed]
  14. Lin, Y.; Yang, Y.; Zhang, Y. Improved differential evolution with dynamic mutation parameters. Soft Comput. 2023, 27, 17923–17941. [Google Scholar] [CrossRef]
  15. Qin, F.; Zain, A.M.; Zhou, K.Q. Harmony search algorithm and related variants: A systematic review. Swarm Evol. Comput. 2022, 74, 101126. [Google Scholar] [CrossRef]
  16. Ye, S.; Zhou, K.; Zain, A.M.; Wang, F.; Yusoff, Y. A modified harmony search algorithm and its applications in weighted fuzzy production rule extraction. Front. Inform. Technol. Electron. Eng. 2023, 24, 1574–1590. [Google Scholar] [CrossRef]
  17. Zhang, Y.; Li, S.; Xu, B. Convergence analysis of beetle antennae search algorithm and its applications. Soft Comput. 2021, 25, 10595–10608. [Google Scholar] [CrossRef]
  18. Liu, J.; Qu, C.; Zhang, L.; Tang, Y.; Li, J.; Feng, H.; Peng, X. A new hybrid algorithm for three-stage gene selection based on whale optimization. Sci. Rep. 2023, 13, 3783. [Google Scholar] [CrossRef]
  19. Liu, J.; Feng, H.; Tang, Y.; Zhang, L.; Qu, C.; Zeng, X.; Peng, X. A novel hybrid algorithm based on Harris Hawks for tumor feature gene selection. PeerJ Comput. Sci. 2023, 13, e1229. [Google Scholar] [CrossRef]
  20. Liu, M.; Jiang, Q.; Li, H.; Cao, X.; Lv, X. Finite-time-convergent support vector neural dynamics for classification. Neurocomputing 2025, 617, 128810. [Google Scholar] [CrossRef]
  21. Zhang, Z.; He, Y.; Mai, W.; Luo, Y.; Li, X.; Cheng, Y.; Huang, X.; Lin, R. Convolutional dynamically convergent differential neural network for brain signal classification. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 8166–8177. [Google Scholar] [CrossRef]
  22. Liu, J.; Du, X.; Jin, L. A localization algorithm for underwater acoustic sensor networks with improved newton iteration and simplified Kalman filter. IEEE Trans. Mobile Comput. 2024, 23, 14459–14470. [Google Scholar] [CrossRef]
  23. Wang, C.; Wang, Y.; Yuan, Y.; Peng, S.; Li, G.; Yin, P. Joint computation offloading and resource allocation for end-edge collaboration in internet of vehicles via multi-agent reinforcement learning. Neural Netw. 2024, 179, 102261. [Google Scholar] [CrossRef] [PubMed]
  24. Cao, X.; Peng, C.; Zheng, Y.; Li, S.; Ha, T.T.; Shutyaev, V.; Stanimirovic, P. Neural networks for portfolio analysis in high-frequency trading. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 18052–18061. [Google Scholar] [CrossRef] [PubMed]
  25. Zhang, Y.N.; Li, S. Distributed biased min-consensus with applications to shortest path planning. IEEE Trans. Autom. Control 2017, 62, 5429–5436. [Google Scholar] [CrossRef]
  26. Zhang, Y.N.; Li, S. Perturbing consensus for complexity: A finite-time discrete biased min-consensus under time-delay and asynchronism. Automatica 2017, 85, 441–447. [Google Scholar] [CrossRef]
  27. Zhang, Y.; Li, S.; Guo, H. A type of biased consensus-based distributed neural network for path planning. Nonlinear Dyn. 2017, 89, 1803–1815. [Google Scholar] [CrossRef]
  28. Jin, L.; Huang, R.; Liu, M.; Ma, X. Cerebellum-inspired learning and control scheme for redundant manipulators at joint velocity level. IEEE Trans. Cybern. 2024, 54, 6297–6306. [Google Scholar] [CrossRef]
  29. Tang, Z.; Zhang, Y.N.; Ming, L. Novel snap-layer MMPC scheme via neural dynamics equivalency and solver for redundant robot arms with five-layer physical limits. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 3534–3546. [Google Scholar] [CrossRef]
  30. Xiang, Z.; Xiang, C.; Li, T.; Guo, Y. A self-adapting hierarchical actions and structures joint optimization framework for automatic design of robotic and animation skeletons. Soft Comput. 2021, 25, 263–276. [Google Scholar] [CrossRef]
  31. Zhang, Y.; Li, S.; Kathy, S.; Liao, B. Recurrent neural network for kinematic control of redundant manipulators with periodic input disturbance and physical constraints. IEEE Trans. Cybern. 2019, 49, 4194–4205. [Google Scholar] [CrossRef]
  32. Xiao, L.; Zhang, Y.; Liao, B.; Zhang, Z.; Ding, L.; Jin, L. A velocity-level bi-criteria optimization scheme for coordinated path tracking of dual robot manipulators using recurrent neural network. Front. Neurorobot. 2017, 11, 47. [Google Scholar] [CrossRef]
  33. Zhang, Z.; Cao, Z.; Li, X. Neural dynamic fault-tolerant scheme for collaborative motion planning of dual-redundant robot manipulators. IEEE Trans. Neural Netw. Learn. Syst. 2024, in press. [Google Scholar] [CrossRef] [PubMed]
  34. Lv, X.; Xiao, L.; Tan, Z.; Yang, Z.; Yuan, J. Improved gradient neural networks for solving Moore-Penrose inverse of full-rank matrix. Neural Process. Lett. 2019, 50, 1993–2005. [Google Scholar] [CrossRef]
  35. Liao, B.; Han, L.; Cao, X.; Li, S.; Li, J. Double integral-enhanced Zeroing neural network with linear noise rejection for time-varying matrix inverse. CAAI Trans. Intell. Technol. 2023, 9, 197–210. [Google Scholar] [CrossRef]
  36. Zhang, Y.N.; Zhang, Y.N.; Chen, D.; Xiao, Z.; Yan, X. From Davidenko method to Zhang dynamics for nonlinear equation systems solving. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2817–2830. [Google Scholar] [CrossRef]
  37. Fu, J.; Zhang, Y.; Geng, G.; Liu, Z. Recurrent neural network with scheduled varying gain for solving time-varying QP. IEEE Trans. Circuits Syst. II Express Briefs 2024, 71, 882–886. [Google Scholar] [CrossRef]
  38. Long, C.; Zhang, G.; Zeng, Z.; Hu, J. Finite-time stabilization of complex-valued neural networks with proportional delays and inertial terms: A non-separation approach. Neural Netw. 2022, 148, 86–95. [Google Scholar] [CrossRef]
  39. Li, J.; Qu, L.; Zhu, Y.; Li, Z.; Liao, B. Novel Zeroing Neural Network for Time-Varying Matrix Pseudoinversion in the Presence of Linear Noises. Tsinghua Sci. Technol. 2025, 30, 1911–1926. [Google Scholar] [CrossRef]
  40. Hua, C.; Cao, X.; Liao, B. Real-Time Solutions for Dynamic Complex Matrix Inversion and Chaotic Control Using ODE-Based Neural Computing Methods. Comput. Intell. 2025, 41, e70042. [Google Scholar] [CrossRef]
  41. Liu, K.; Zhang, Y. Distributed dynamic task allocation for moving target tracking of networked mobile robots using k-WTA network. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 5795–5802. [Google Scholar] [CrossRef]
  42. Deng, Q.; Zhang, Y. Distributed near-optimal consensus of double-integrator multi-agent systems with input constraints. In Proceedings of the 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, 18–22 July 2021. [Google Scholar] [CrossRef]
  43. Zhang, Y.; Li, S.; Weng, J. Distributed estimation of algebraic connectivity. IEEE Trans. Cybern. 2022, 52, 3047–3056. [Google Scholar] [CrossRef] [PubMed]
  44. Xiao, L.; Li, K.; Tan, Z.; Zhang, Z.; Liao, B.; Chen, K.; Long, J.; Li, S. Nonlinear gradient neural network for solving system of linear equations. Inf. Process. Lett. 2019, 142, 35–40. [Google Scholar] [CrossRef]
  45. Li, L.; Xiao, L.; Wang, Z.; Zuo, Q. A survey on zeroing neural dynamics: Models, theories, and applications. Int. J. Syst. Sci. 2025, 56, 1360–1393. [Google Scholar] [CrossRef]
  46. Jin, L.; Li, S.; Yu, J.; He, J. Robot manipulator control using neural networks: A survey. Neurocomputing 2018, 285, 23–34. [Google Scholar] [CrossRef]
  47. Liu, X.; Zhao, L.; Jin, J. A noise-tolerant fuzzy-type zeroing neural network for robust synchronization of chaotic systems. Concurr. Comput. Pract. Exp. 2024, 36, e8218. [Google Scholar] [CrossRef]
  48. Jin, L.; Zhang, Y.N.; Li, S.; Zhang, Y. Modified ZNN for time-varying quadratic programming with inherent tolerance to noises and its application to kinematic redundancy resolution of robot manipulators. IEEE Trans. Ind. Electron. 2016, 63, 6978–6988. [Google Scholar] [CrossRef]
  49. Zhao, L.; Liu, X.; Jin, J. A novel adaptive parameter zeroing neural network for the synchronization of complex chaotic systems and its field programmable gate array implementation. Measurement 2025, 242, 115989. [Google Scholar] [CrossRef]
  50. Li, S.; Ma, C. A novel predefined-time noise-tolerant zeroing neural network for solving time-varying generalized linear matrix equations. J. Frankl. Inst. 2023, 360, 11788–11808. [Google Scholar] [CrossRef]
  51. Zhang, Y.; Liao, B.; Geng, G. GNN model with robust finite-time convergence for time-varying systems of linear equations. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 4786–4797. [Google Scholar] [CrossRef]
  52. Zhang, Y.; Li, S.; Weng, J.; Liao, B. GNN Model for time-varying matrix inversion with robust finite-time convergence. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 559–569. [Google Scholar] [CrossRef]
  53. Zhang, Y.; Zhang, J.; Weng, J. Dynamic Moore–Penrose inversion with unknown derivatives: Gradient neural network approach. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 10919–10929. [Google Scholar] [CrossRef] [PubMed]
  54. Zhang, Y.Y. Improved GNN method with finite-time convergence for time-varying Lyapunov equation. Inf. Sci. 2022, 611, 494–503. [Google Scholar] [CrossRef]
  55. Zhang, Y.; Geng, G. Finite-time convergent modified Davidenko method for dynamic nonlinear equations. IEEE Trans. Circuits Syst. II Exp. Briefs 2023, 70, 1630–1634. [Google Scholar] [CrossRef]
  56. Zhang, Y.N.; Jiang, D.; Wang, J. A recurrent neural network for solving Sylvester equation with time-varying coefficients. IEEE Trans. Neural Netw. 2002, 13, 1053–1063. [Google Scholar] [CrossRef]
  57. Xiao, L.; Yi, Q.; Dai, J.; Li, K.; Hu, Z. Design and analysis of new complex zeroing neural network for a set of dynamic complex linear equations. Neurocomputing 2019, 363, 171–181. [Google Scholar] [CrossRef]
  58. Tang, Z.; Zhang, Y. Continuous and discrete gradient-Zhang neuronet (GZN) with analyses for time-variant overdetermined linear equation system solving as well as mobile localization applications. Neurocomputing 2023, 561, 126883. [Google Scholar] [CrossRef]
  59. Lv, X.; Xiao, L.; Tan, Z. Improved Zhang neural network with finite-time convergence for time-varying linear system of equations solving. Inf. Process. Lett. 2019, 147, 88–93. [Google Scholar] [CrossRef]
  60. Ding, L.; Xiao, L.; Liao, B.; Lu, R.; Peng, H. An improved recurrent neural network for complex-valued systems of linear equation and its application to robotic motion tracking. Front. Neurorobot. 2017, 11, 45. [Google Scholar] [CrossRef]
  61. Xiao, L. A nonlinearly activated neural dynamics and its finite-time solution to time-varying nonlinear equation. Neurocomputing 2016, 173, 1983–1988. [Google Scholar] [CrossRef]
  62. Xiao, L.; Lu, R. Finite-time solution to nonlinear equation using recurrent neural dynamics with a specially-constructed activation function. Neurocomputing 2015, 151, 246–251. [Google Scholar] [CrossRef]
  63. Li, W.; Xiao, L.; Liao, B. A finite-time convergent and noise-rejection recurrent neural network and its discretization for dynamic nonlinear equations solving. IEEE Trans. Cybern. 2020, 50, 3195–3207. [Google Scholar] [CrossRef] [PubMed]
  64. Dai, L.; Xu, H.; Zhang, Y.; Liao, B. Norm-based zeroing neural dynamics for time-variant non-linear equations. CAAI Trans. Intell. Technol. 2024, 9, 1561–1571. [Google Scholar] [CrossRef]
  65. Xiao, L.; Zhang, Y.; Dai, J.; Chen, K.; Yang, S.; Li, W.; Liao, B.; Ding, L.; Li, J. A new noise-tolerant and predefined-time ZNN model for time-dependent matrix inversion. Neural Netw. 2019, 117, 124–134. [Google Scholar] [CrossRef]
  66. Jin, L.; Zhang, Y.N.; Li, S.; Zhang, Y. Noise-tolerant ZNN models for solving time-varying zero-finding problems: A control-theoretic approach. IEEE Trans. Autom. Control 2017, 62, 992–997. [Google Scholar] [CrossRef]
  67. Xiao, L.; Tan, H.; Jia, L.; Dai, J.; Zhang, Y. New error function designs for finite-time ZNN models with application to dynamic matrix inversion. Neurocomputing 2020, 402, 395–408. [Google Scholar] [CrossRef]
  68. Xiao, L. A new design formula exploited for accelerating Zhang neural network and its application to time-varying matrix inversion. Theor. Comput. Sci. 2016, 647, 50–58. [Google Scholar] [CrossRef]
  69. Xiao, L.; Zhang, Y.; Li, K.; Liao, B.; Tan, Z. A novel recurrent neural network and its finite-time solution to time-varying complex matrix inversion. Neurocomputing 2019, 331, 483–492. [Google Scholar] [CrossRef]
  70. Jin, L.; Zhang, Y.; Li, S. Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises. IEEE Trans. Neural Netw. Learn. Syst. 2015, 27, 2615–2627. [Google Scholar] [CrossRef]
  71. Xiao, L.; He, Y.; Dai, J.; Liu, X.; Liao, B.; Tan, H. A variable-parameter noise-tolerant zeroing neural network for time-variant matrix inversion with guaranteed robustness. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 1535–1545. [Google Scholar] [CrossRef]
  72. Xiang, Q.; Liao, B.; Xiao, L.; Lin, L.; Li, S. Discrete-time noise-tolerant Zhang neural network for dynamic matrix pseudoinversion. Soft Comput. 2019, 23, 755–766. [Google Scholar] [CrossRef]
  73. Liao, B.; Wang, Y.; Li, J.; Guo, D.; He, Y. Harmonic noise-tolerant ZNN for dynamic matrix pseudoinversion and its application to robot manipulator. Front. Neurorobot. 2022, 16, 928636. [Google Scholar] [CrossRef] [PubMed]
  74. Liao, B.; Xiang, Q.; Li, S. Bounded Z-type neurodynamics with limited-time convergence and noise tolerance for calculating time-dependent Lyapunov equation. Neurocomputing 2019, 325, 234–241. [Google Scholar] [CrossRef]
  75. Lv, X.; Xiao, L.; Tan, Z.; Yang, Z. Wsbp function activated Zhang dynamic with finite-time convergence applied to Lyapunov equation. Neurocomputing 2018, 314, 310–315. [Google Scholar] [CrossRef]
  76. Xiao, L.; Liao, B. A convergence-accelerated Zhang neural network and its solution application to Lyapunov equation. Neurocomputing 2016, 193, 213–218. [Google Scholar] [CrossRef]
  77. Xiao, L.; Liao, B.; Li, S.; Chen, K. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw. 2018, 98, 102–113. [Google Scholar] [CrossRef]
  78. Xiao, L. A finite-time convergent neural dynamics for online solution of time-varying linear complex matrix equation. Neurocomputing 2015, 167, 254–259. [Google Scholar] [CrossRef]
  79. Li, W.; Liao, B.; Xiao, L.; Lu, R. A recurrent neural network with predefined-time convergence and improved noise tolerance for dynamic matrix square root finding. Neurocomputing 2019, 337, 262–273. [Google Scholar] [CrossRef]
  80. Xiao, L.; Li, L.; Tao, J.; Li, W. A predefined-time and anti-noise varying-parameter ZNN model for solving time-varying complex Stein equations. Neurocomputing 2023, 526, 156–168. [Google Scholar] [CrossRef]
  81. Zhang, Z.; Zheng, L.; Weng, J.; Mao, Y.; Lu, W.; Xiao, L. A new varying-parameter recurrent neural-network for online solution of time-varying Sylvester equation. IEEE Trans. Cybern. 2018, 48, 3135–3148. [Google Scholar] [CrossRef]
  82. Xiao, L.; Zhang, Z.; Zhang, Z.; Li, W.; Li, S. Design, verification and robotic application of a novel recurrent neural network for computing dynamic Sylvester equation. Neural Netw. 2018, 105, 185–196. [Google Scholar] [CrossRef]
  83. Dai, L.; Zhang, Y.; Geng, G. Norm-based finite-time convergent recurrent neural network for dynamic linear inequality. IEEE Trans. Ind. Inform. 2024, 20, 4874–4883. [Google Scholar] [CrossRef]
  84. Xiao, L.; Dai, J.; Lu, R.; Li, S.; Li, J.; Wang, S. Design and comprehensive analysis of a noise-tolerant ZNN model with limited-time convergence for time-dependent nonlinear minimization. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 5339–5348. [Google Scholar] [CrossRef] [PubMed]
  85. Liu, M.; Liao, B.; Ding, L.; Xiao, L. Performance analyses of recurrent neural network models exploited for online time-varying nonlinear optimization. Comput. Sci. Inform. Syst. 2016, 13, 691–705. [Google Scholar] [CrossRef]
  86. Xiao, L.; Li, S.; Yang, J.; Zhang, Z. A new recurrent neural network with noise-tolerance and finite-time convergence for dynamic quadratic minimization. Neurocomputing 2018, 285, 125–132. [Google Scholar] [CrossRef]
  87. Xiao, L.; Li, K.; Du, M. Computing time-varying quadratic optimization with finite-time convergence and noise tolerance: A unified framework for zeroing neural network. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3360–3369. [Google Scholar] [CrossRef]
  88. Xiao, L.; He, Y.; Wang, Y.; Dai, J.; Wang, R.; Tang, W. A segmented variable-parameter ZNN for dynamic quadratic minimization with improved convergence and robustness. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 2413–2424. [Google Scholar] [CrossRef]
  89. Xiao, L. A nonlinearly-activated neurodynamic model and its finite-time solution to equality-constrained quadratic optimization with nonstationary coefficients. Appl. Soft. Comput. 2016, 40, 252–259. [Google Scholar] [CrossRef]
  90. Liao, B.; Zhang, Y.; Jin, L. Taylor O(h3) discretization of ZNN models for dynamic equality-constrained quadratic programming with application to manipulators. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 225–237. [Google Scholar] [CrossRef]
  91. Zhang, Z.; Zheng, L.; Li, L.; Deng, X.; Xiao, L.; Huang, G. A new finite-time varying-parameter convergent-differential neural-network for solving nonlinear and nonconvex optimization problems. Neurocomputing 2018, 319, 74–83. [Google Scholar] [CrossRef]
  92. Luo, Y.; Li, X.; Li, Z.; Xie, J.; Zhang, Z.; Li, X. A novel swarm-exploring neurodynamic network for obtaining global optimal solutions to nonconvex nonlinear programming problems. IEEE Trans. Cybern. 2024, 54, 5866–5876. [Google Scholar] [CrossRef]
  93. Zhang, Z.; Yu, H.; Ren, X.; Luo, Y. A swarm exploring neural dynamics method for solving convex multi-objective optimization problem. Neurocomputing 2024, 601, 128203. [Google Scholar] [CrossRef]
  94. Wei, L.; Jin, L. Collaborative neural solution for time-varying nonconvex optimization with noise rejection. IEEE Trans. Emerg. Topics Comput. Intell. 2024, 8, 2935–2948. [Google Scholar] [CrossRef]
  95. Zhang, Y.; Xiao, G.; Li, S. Adaptive quadratic optimisation with application to kinematic control of redundant robot manipulators. Int. J. Syst. Sci. 2023, 54, 717–730. [Google Scholar] [CrossRef]
  96. Jin, L.; Liao, B.; Liu, M.; Xiao, L.; Guo, D.; Yan, X. Different-level simultaneous minimization scheme for fault tolerance of redundant manipulator aided with discrete-time recurrent neural network. Front. Neurorobot. 2017, 11, 50. [Google Scholar] [CrossRef]
  97. Yan, J.; Jin, L.; Hu, B. Data-driven model predictive control for redundant manipulators with unknown model. IEEE Trans. Cybern. 2024, 54, 5901–5911. [Google Scholar] [CrossRef]
  98. Tang, Z.; Zhang, Y.N. Refined self-motion scheme with zero initial velocities and time-varying physical limits via Zhang neurodynamics equivalency. Front. Neurorobot. 2022, 16, 945346. [Google Scholar] [CrossRef]
  99. Zhang, Y.; Li, S.; Zou, J.; Khan, A.H. A passivity-based approach for kinematic control of manipulators with constraints. IEEE Trans. Ind. Inform. 2020, 16, 3029–3038. [Google Scholar] [CrossRef]
  100. Zhang, Y.; Li, S.; Gui, J.; Luo, X. Velocity-level control with compliance to acceleration-level constraints: A novel scheme for manipulator redundancy resolution. IEEE Trans. Ind. Inform. 2018, 14, 921–930. [Google Scholar] [CrossRef]
  101. Zhang, Y.; Li, S.; Zhou, X. Recurrent-neural-network-based velocity-level redundancy resolution for manipulators subject to a joint acceleration limit. IEEE Trans. Ind. Electron. 2019, 66, 3573–3582. [Google Scholar] [CrossRef]
  102. Zhang, Y.Y. Tri-projection neural network for redundant manipulators. IEEE Trans. Circuits Syst. II Exp. Briefs 2022, 69, 4879–4883. [Google Scholar] [CrossRef]
  103. Zhang, Y.; Chen, S.; Li, S.; Zhang, Z. Adaptive projection neural network for kinematic control of redundant manipulators with unknown physical parameters. IEEE Trans. Ind. Electron. 2018, 65, 4909–4920. [Google Scholar] [CrossRef]
  104. Zhang, C.; Zhang, Y.; Dai, L. Deception-attack-resilient kinematic control of redundant manipulators: A projection neural network approach. In Proceedings of the 2023 International Annual Conference on Complex Systems and Intelligent Science (CSIS-IAC), Shenzhen, China, 20–22 October 2023; pp. 483–488. [Google Scholar]
  105. Zhang, Y.; Li, S. Kinematic control of serial manipulators under false data injection attack. IEEE/CAA J. Autom. Sin. 2023, 10, 1009–1019. [Google Scholar] [CrossRef]
  106. Zhang, Y.; Li, S. A neural controller for image-based visual servoing of manipulators with physical constraints. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5419–5429. [Google Scholar] [CrossRef]
  107. Zhang, Y.; Li, S.; Liao, B.; Jin, L.; Zheng, L. A recurrent neural network approach for visual servoing of manipulators. In Proceedings of the 2017 IEEE International Conference on Information and Automation (ICIA), Macao, China, 18–20 July 2017; pp. 614–619. [Google Scholar]
  108. Zhang, Y.; Zheng, Y.; Gao, F.; Li, S. Image-based visual servoing of manipulators with unknown depth: A recurrent neural network approach. IEEE Trans. Neural Netw. Learn. Syst. 2024, in press. [Google Scholar] [CrossRef]
  109. Zhang, Y.; Li, S. Time-scale expansion-based approximated optimal control for underactuated systems using projection neural networks. IEEE Trans. Syst. Man Cybern. Syst. 2018, 48, 1957–1967. [Google Scholar] [CrossRef]
  110. Zhang, Y.; Li, S.; Jiang, X. Near-optimal control without solving HJB equations and its applications. IEEE Trans. Ind. Electron. 2018, 65, 7173–7184. [Google Scholar] [CrossRef]
  111. Zhang, Y.; Li, S.; Weng, J. Learning and near-optimal control of underactuated surface vessels with periodic disturbances. IEEE Trans. Cybern. 2021, 52, 7453–7463. [Google Scholar] [CrossRef]
  112. Zhang, Y.; Li, S.; Liu, X. Neural network-based model-free adaptive near-optimal tracking control for a class of nonlinear systems. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 6227–6241. [Google Scholar] [CrossRef]
  113. Zhang, Y.; Li, S.; Luo, X.; Shang, M.S. A dynamic neural controller for adaptive optimal control of permanent magnet DC motors. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 839–844. [Google Scholar]
  114. Zhang, Y.; Li, S.; Liu, X. Adaptive near-optimal control of uncertain systems with application to underactuated surface vessels. IEEE Trans. Control Syst. Technol. 2018, 26, 1204–1218. [Google Scholar] [CrossRef]
  115. Zhang, Y.; Li, S.; Liao, L. Near-optimal control of nonlinear dynamical systems: A brief survey. Annu. Rev. Control 2019, 47, 71–80. [Google Scholar] [CrossRef]
  116. Zhang, Y.; Li, S.; Liao, L. Input delay estimation for input-affine dynamical systems based on Taylor expansion. IEEE Trans. Circuits Syst. II Express Briefs 2021, 68, 1298–1302. [Google Scholar] [CrossRef]
  117. Liu, M.; Li, Y.; Chen, Y.; Qi, Y.; Jin, L. A distributed competitive and collaborative coordination for multirobot systems. IEEE Trans. Mob. Comput. 2024, 23, 11436–11448. [Google Scholar] [CrossRef]
  118. Zhang, Y.; Li, S.; Geng, G. Initialization-based k-winners-take-all neural network model using modified gradient descent. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 4130–4138. [Google Scholar] [CrossRef] [PubMed]
  119. Liu, S.; Wang, J. A simplified dual neural network for quadratic programming with its KWTA application. IEEE Trans. Neural Netw. 2006, 17, 1500–1510. [Google Scholar]
  120. Xia, Y.; Sun, C. A novel neural dynamical approach to convex quadratic program and its efficient applications. Neural Netw. 2009, 22, 1463–1470. [Google Scholar] [CrossRef]
  121. Zhang, Y.; Li, S.; Xu, B.; Yang, Y. Analysis and design of a distributed k-winners-take-all model. Automatica 2020, 115, 108868. [Google Scholar] [CrossRef]
  122. Zhang, Y.; Li, S.; Zhou, X.; Weng, J.; Geng, G. Single-state distributed k-winners-take-all neural network model. Inf. Sci. 2023, 647, 119528. [Google Scholar] [CrossRef]
  123. Zhang, Y.; Li, S.; Weng, J. Distributed k-winners-take-all network: An optimization perspective. IEEE Trans. Cybern. 2023, 53, 5069–5081. [Google Scholar] [CrossRef]
  124. Liao, B.; Hua, C.; Xu, Q.; Cao, X.; Li, S. Inter-robot management via neighboring robot sensing and measurement using a zeroing neural dynamics approach. Expert Syst. Appl. 2024, 244, 122938. [Google Scholar] [CrossRef]
  125. Li, X.; Ren, X.; Zhang, Z.; Guo, J.; Luo, Y.; Mai, J.; Liao, B. A varying-parameter complementary neural network for multi-robot tracking and formation via model predictive control. Neurocomputing 2024, 609, 128384. [Google Scholar] [CrossRef]
  126. Xu, H.; Li, R.; Zeng, L.; Li, K.; Pan, C. Energy-efficient scheduling with reliability guarantee in embedded real-time systems. Sustain. Comput. Inform. 2018, 18, 137–148. [Google Scholar] [CrossRef]
  127. Xu, H.; Zhang, B.; Pan, C.; Li, K. Energy-efficient triple modular redundancy scheduling on heterogeneous multi-core real-time systems. J. Parallel Distrib. Comput. 2024, 191, 104915. [Google Scholar] [CrossRef]
  128. Xu, H.; Zhang, B.; Pan, C.; Li, K. Energy-efficient scheduling for parallel applications with reliability and time constraints on heterogeneous distributed systems. J. Syst. Archit. 2024, 152, 103137. [Google Scholar] [CrossRef]
  129. Xu, H.; Zhang, B. A two-phase algorithm for reliable and energy-efficient heterogeneous embedded systems. IEICE Trans. Inf. Syst. 2024, E107.D, 1285–1296. [Google Scholar] [CrossRef]
  130. Xu, H.; Li, R.; Pan, C.; Li, K. Minimizing energy consumption with reliability goal on heterogeneous embedded systems. J. Parallel Distrib. Comput. 2019, 127, 44–57. [Google Scholar] [CrossRef]
  131. Xie, M.; An, B.; Jia, X.; Zhou, M.; Lu, J. Simultaneous update of sensing and control data using free-ride codes in vehicular networks: An age and energy perspective. Comput. Netw. 2024, 252, 110667. [Google Scholar] [CrossRef]
  132. Zhang, Y.; Li, S.; Wu, Y.; Deng, Q. Distributed connectivity maximization for networked mobile robots with collision avoidance. In Proceedings of the 2021 33rd Chinese Control and Decision Conference (CCDC), Kunming, China, 22–24 May 2021; pp. 5584–5588. [Google Scholar]
  133. Zhang, Y. Near-optimal consensus of multi-dimensional double-integrator multi-agent systems. In Proceedings of the 2020 3rd International Conference on Unmanned Systems (ICUS), Harbin, China, 27–28 November 2020; pp. 13–18. [Google Scholar]
  134. Deng, Q.; Liu, K.; Zhang, Y. Privacy-preserving consensus of double-integrator multi-agent systems with input constraints. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 4119–4129. [Google Scholar] [CrossRef]
  135. Yang, S.; Liu, Q.; Wang, J. A collaborative neurodynamic approach to multiple-objective distributed optimization. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 981–992. [Google Scholar] [CrossRef]
  136. Le, X.; Chen, S.; Yan, Z.; Xi, J. A neurodynamic approach to distributed optimization with globally coupled constraints. IEEE Trans. Cybern. 2018, 48, 3149–3158. [Google Scholar] [CrossRef]
  137. Li, H.; Qin, S. A neurodynamic approach for solving time-dependent nonlinear equation system: A distributed optimization perspective. IEEE Trans. Ind. Inform. 2024, 20, 10031–10039. [Google Scholar] [CrossRef]
  138. Xia, Z.; Liu, Y.; Wang, J. A collaborative neurodynamic approach to distributed global optimization. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 3141–3151. [Google Scholar] [CrossRef]
  139. Peng, Z.; Wang, J.; Wang, D. Distributed maneuvering of autonomous surface vehicles based on neurodynamic optimization and fuzzy approximation. IEEE Trans. Control Syst. Technol. 2018, 26, 1083–1090. [Google Scholar] [CrossRef]
  140. Zheng, K.; Li, S.; Zhang, Y. Low-computational-complexity zeroing neural network model for solving systems of dynamic nonlinear equations. IEEE Trans. Autom. Control 2024, 69, 4366–4379. [Google Scholar] [CrossRef]
  141. Zhang, Y.; Li, S. Machine Behavior Design And Analysis: A Consensus Perspective; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  142. Zhang, Y.; Li, S.; Liao, L. Consensus of high-order discrete-time multiagent systems with switching topology. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 721–730. [Google Scholar] [CrossRef]
  143. Xiao, R.; Li, W.; Lu, J.; Jin, S. ContexLog: Non-parsing log anomaly detection with all information preservation and enhanced contextual representation. IEEE Trans. Netw. Serv. Manag. 2024, 21, 4750–4762. [Google Scholar] [CrossRef]
  144. Zhang, P.; Zhang, Y. A BAS algorithm based neural network for intrusion detection. In Proceedings of the 2021 11th International Conference on Intelligent Control and Information Processing (ICICIP), Dali, China, 3–7 December 2021; pp. 22–27. [Google Scholar]
  145. Yang, X.; Lei, K.; Peng, S.; Cao, X.; Gao, X. Analytical expressions for the probability of false-alarm and decision threshold of Hadamard ratio detector in non-asymptotic scenarios. IEEE Commun. Lett. 2018, 22, 1018–1021. [Google Scholar] [CrossRef]
  146. Lu, J.; Li, W.; Sun, J.; Xiao, R.; Liao, B. Secure and real-time traceable data sharing in cloud-assisted IoT. IEEE Internet Things J. 2024, 11, 6521–6536. [Google Scholar] [CrossRef]
  147. Dai, Z.; Guo, X. Investigation of E-commerce security and data platform based on the era of big data of the internet of things. Mobile Inform. Syst. 2022, 2022, 3023298. [Google Scholar] [CrossRef]
  148. Liu, Z.; Wu, X. Structural analysis of the evolution mechanism of online public opinion and its development stages based on machine learning and social network analysis. Int. J. Comput. Intell. Syst. 2023, 16, 99. [Google Scholar] [CrossRef]
  149. Chu, H.M.; Kong, X.Z.; Liu, J.X.; Zheng, C.H.; Zhang, H. A new binary biclustering algorithm based on weight adjacency difference matrix for analyzing gene expression data. IEEE/ACM Trans. Comput. Biol. Bioinform. 2023, 20, 2802–2809. [Google Scholar] [CrossRef]
  150. Yu, Y.; Wang, D.; Faisal, M.; Jabeen, F.; Johar, S. Decision support system for evaluating the role of music in network-based game for sustaining effectiveness. Soft Comput. 2022, 26, 10775–10788. [Google Scholar] [CrossRef]
  151. Xiang, Z.; Guo, Y. Controlling melody structures in automatic game soundtrack compositions with adversarial learning guided Gaussian mixture models. IEEE Trans. Games 2021, 13, 193–204. [Google Scholar] [CrossRef]
  152. Wang, T.; Hua, C.; Wang, Y.; Cao, W.; Liao, B.; Li, S. Real-Time Formation Planning for Multi-robot Cooperation: A Neural Informatics Perspective. IEEE Trans. Ind. Electron. 2025, 52, 3047–3056. [Google Scholar]
Table 1. Comparison of activation functions in neurodynamics.
Literature | Activation Function
[34] | $\phi(x)=\begin{cases}x^{p}, & |x|>1\\ \dfrac{1-\exp(-\varepsilon x)}{1+\exp(-\varepsilon x)}\cdot\dfrac{1+\exp(-\varepsilon)}{1-\exp(-\varepsilon)}, & |x|\le 1\end{cases}$
[45] | $\phi(x)=\left(|x|^{\alpha}+|x|^{1/\alpha}\right)\operatorname{sign}(x)$
[46] | $\phi(x)=\left(\rho_{1}|x|^{\alpha}+\rho_{2}|x|^{\nu}\right)\operatorname{sign}(x)$
[44] | $\phi(x)=\left(\rho_{1}|x|^{\alpha}+\rho_{2}|x|^{\nu}\right)\operatorname{sign}(x)+\rho_{3}x+\rho_{4}\operatorname{sign}(x)$
[47] | $\phi(x)=\dfrac{\eta}{c}\exp\left(|x|^{c}\right)|x|^{1-c}\operatorname{sign}(x)+\nu\operatorname{sign}(x)$
[44] | $\phi(x)=\begin{cases}\rho_{1}|x|^{\alpha}\operatorname{sign}(x)+\rho_{3}x, & |x|\ge 1\\ \rho_{2}|x|^{\nu}\operatorname{sign}(x)+\rho_{3}x, & \text{otherwise}\end{cases}$
[44] | $\phi(x)=\left(\rho_{1}|x|^{\alpha}+\rho_{2}|x|^{\nu}\right)\operatorname{sign}(x)+\rho_{3}x$
[48] | $\phi(x)=\sum_{k=1}^{3}x^{2k-1}$
[49] | $\phi(x)=\dfrac{\eta}{c}\exp\left(|x|^{c}\right)|x|^{1-c}\operatorname{sign}(x)+\nu\operatorname{sign}(x)$
[50] | $\phi(e)=\left(c_{1}\sum_{i=1}^{m}|e|^{\nu_{i}}+c_{2}\sum_{i=1}^{m}|e|^{\kappa_{i}}\right)\operatorname{sgn}(e)+c_{3}e+c_{4}\sinh(e)$
Table 2. Comparison of different neurodynamics.
Literature | Problem | Model Type | Model Features
[44] | linear matrix equations | gradient neurodynamics | finite-time convergence
[51] | linear matrix equations | gradient neurodynamics | noise tolerance, finite-time convergence
[53] | Moore–Penrose inverses | gradient neurodynamics | finite-time convergence
[54] | Lyapunov equations | gradient neurodynamics | noise tolerance, finite-time convergence
[75,76] | Lyapunov equations | zeroing neurodynamics | finite-time convergence
[74] | Lyapunov equations | zeroing neurodynamics | noise tolerance, finite-time convergence
[82] | Sylvester equations | zeroing neurodynamics | finite-time convergence, noise tolerance
[81] | Sylvester equations | zeroing neurodynamics | time-varying parameter
[84,85] | nonlinear optimization | zeroing neurodynamics | noise tolerance, limited-time convergence
[86,87] | quadratic programming | zeroing neurodynamics | noise tolerance, finite-time convergence
[91,92] | nonconvex optimization | zeroing neurodynamics | noise tolerance, finite-time convergence
[80] | Stein equations | zeroing neurodynamics | predefined-time convergence
[79] | matrix square root | zeroing neurodynamics | predefined-time convergence, noise tolerance
[77,78] | linear matrix equations | zeroing neurodynamics | finite-time convergence
[100,101] | redundancy resolution | projection neurodynamics | global convergence
[103] | kinematic control | projection neurodynamics | adaptive parameter convergence
[108] | image-based visual servoing | projection neurodynamics | global asymptotic convergence
[110,111,112] | near-optimal control | projection neurodynamics | exponential stability
Table 3. Comparison of performance index and consensus protocols in [42] using projection neurodynamics.
Performance Index | Consensus Protocol
$J(t)=\int_{0}^{T}x^{\top}(t+\tau)Lx(t+\tau)\,\mathrm{d}\tau$ | $\varepsilon\dot{u}=-u+P_{\Omega}\left(u-\frac{T^{5}}{10}Lu-\frac{T^{3}}{3}Lx-\frac{T^{4}}{4}Lv\right)$
$J(t)=\int_{0}^{T}x^{\top}(t+\tau)Lx(t+\tau)\,\mathrm{d}\tau+\frac{1}{2}u^{\top}(t)u(t)$ | $\varepsilon\dot{u}=-u+P_{\Omega}\left(-\frac{T^{5}}{10}Lu-\frac{T^{3}}{3}Lx-\frac{T^{4}}{4}Lv\right)$
$J(t)=\int_{0}^{T}\left[x^{\top}(t+\tau)Lx(t+\tau)+v^{\top}(t+\tau)v(t+\tau)\right]\mathrm{d}\tau+\frac{1}{2}u^{\top}(t)u(t)$ | $\varepsilon\dot{u}=-u+P_{\Omega}\left(-\frac{T^{5}}{10}Lu-\frac{T^{3}}{3}Lx-\frac{T^{4}}{4}Lv-\frac{2T^{3}}{3}u-T^{2}v\right)$
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
