Article

A Survey on Biomimetic and Intelligent Algorithms with Applications

1 College of Computer Science and Engineering, Jishou University, Jishou 416000, China
2 School of Communication and Electronic Engineering, Jishou University, Jishou 416000, China
* Authors to whom correspondence should be addressed.
Biomimetics 2024, 9(8), 453; https://doi.org/10.3390/biomimetics9080453
Submission received: 19 June 2024 / Revised: 12 July 2024 / Accepted: 22 July 2024 / Published: 24 July 2024

Abstract

The question "How does it work?" has motivated many scientists. Through the study of natural phenomena and behaviors, many intelligent algorithms have been proposed to solve various optimization problems. This paper aims to offer an informative guide for researchers interested in tackling optimization problems with intelligence algorithms. First, a special neural network, the zeroing neural network (ZNN), is discussed comprehensively; it is intended specifically for solving time-varying optimization problems, and its origin, basic principles, operation mechanism, model variants, and applications are covered. This paper presents a new classification method based on the performance index of ZNNs. Then, two classic bio-inspired algorithms, the genetic algorithm and the particle swarm optimization algorithm, are outlined as representatives, including their origins, design processes, basic principles, and applications. Finally, to emphasize the applicability of intelligent algorithms, three practical domains are introduced: gene feature extraction, intelligent communication, and image processing.

1. Introduction

The question "How does it work?" has motivated many scientists. In past decades, biomimetics has promoted a great deal of science and engineering research and changed people's lives dramatically. Neural networks, which are inspired by biological neurons, have achieved significant success and have been used in impactful applications [1,2]. For instance, ChatGPT, which is based on a multi-layered deep neural network, has attracted wide attention since its release. As a large language model, it can be used for various natural language-processing tasks, and it has been widely applied in social media, customer-service chatbots, and virtual assistants. Neural networks aim to solve complex problems, and they have been widely used in tackling optimization problems, or mathematical programming. Optimization is a typical mathematical problem encountered in all engineering disciplines; it refers to finding an optimal/desirable solution. Optimization problems are complex and numerous, so approaches to solving them remain an active topic. Optimization approaches can essentially be divided into deterministic and stochastic methods. However, the former often require enormous computational resources, which motivates the exploration of more efficient methods.
As a class of neural networks, recurrent neural networks (RNNs) have demonstrated great computational power, and they have been investigated in many engineering and scientific fields over the past decades. The authors of [3] proposed a gradient-based RNN (GBRNN) model for solving matrix inversion problems. However, there are some restrictions on methods built on traditional neural networks, such as the GBRNN, whose ability to deal with time-varying problems is limited. In [4], the author verified that the residual error of the GBRNN accumulates over time, which means it cannot efficiently deal with time-varying problems. This paper focuses on a special RNN, named the zeroing neural network (ZNN), designed specifically for solving time-varying problems. Unlike existing surveys, this paper presents a new classification method based on the performance index of ZNNs. According to their different performance abilities, ZNNs can be classified into the following three categories: (1) accelerated-convergence ZNNs, which have fast convergence characteristics; (2) noise-tolerant ZNNs, which have noise-tolerance characteristics; and (3) discrete-time ZNNs, which can achieve higher computational accuracy and are more easily implemented in hardware. It is worth noting that these three types of ZNNs do not exist in isolation; they can be organically integrated.
As an optimization technique, swarm intelligence algorithms are driven by natural phenomena and behaviors, and they are particularly suited to problems that are non-linear and multi-modal and have complex search spaces. These algorithms excel at exploring diverse regions of the solution space and efficiently finding near-optimal solutions. Accordingly, many swarm intelligence algorithms have been applied in scientific and engineering fields [5,6,7,8]. For example, a bio-inspired algorithm named the ant colony algorithm has been proposed for vehicle heterogeneity and backhaul mixed-load problems [9]; this algorithm jointly optimizes the vehicle type, the vehicle number, and the travel routes while minimizing the total service cost. Through the introduction of intelligent algorithms, many fields have ushered in new developments and opportunities.
This paper aims to present a comprehensive survey of intelligence algorithms, including a neural network designed to solve time-varying problems, swarm intelligence algorithms, and some practical fields in which these intelligent algorithms are applied. The algorithms' origins, basic principles, recent advances, future challenges, and applications are described in detail. The paper is organized as follows. In Section 3, ZNNs are discussed in detail, including their origin, basic principle, operation mechanism, model variants, and applications. The remaining intelligence algorithms are discussed in Section 4, including the genetic algorithm, the particle swarm algorithm, and some real applications. Conclusions are drawn in Section 5.

2. Related Work

There are review articles on neural networks, as well as on bio-inspired algorithms, mostly devoted to the origins and inspirations of the algorithms and to categorizing existing ones. Ref. [10] classified bio-inspired algorithms into seven main groups: stochastic algorithms, evolutionary algorithms, physical algorithms, probabilistic algorithms, swarm intelligence algorithms, immune algorithms, and natural algorithms. Other researchers have started from a specific domain and explored the algorithms applied to it; for example, ref. [11] started from the field of electrical drives and discussed the bio-inspired algorithms applied to this field. There have been numerous reviews of bio-inspired algorithms written in various ways; however, very few of them have focused on a specific algorithm. Therefore, starting from how the optimization problem is solved, this paper divides optimization problems into time-varying/dynamic and time-invariant/static optimization problems, highlighting a neural network, the ZNN, for time-varying/dynamic optimization problems. For the remaining algorithms, the authors aim to give readers an understanding of their origins, fundamentals, and applications.
ZNNs are a subfield of neural networks based on neurodynamics. To the best of our knowledge, only a few attempts have been made to comprehensively review ZNNs. In [12], the authors focused on mathematical programming problems in optimization by categorizing mathematical programming problems and discussing the ZNN models corresponding to each; however, ref. [12] only explored the application of ZNN models to different mathematical programming problems and did not discuss ZNNs themselves in depth. Ref. [13] exhaustively described the application areas involved in ZNNs, as well as the model variants, but like ref. [12], it did not provide a good introduction to ZNNs themselves, mention their origin, or mention their topology. In [13], ZNN models are divided by application area, a classification that involves considerable subjectivity and uncertainty. Compared to other works, this paper provides an in-depth overview of ZNNs, comprehensively introducing their origin, structure, operation mechanism, model variants, and applications. A novel classification of ZNNs is first proposed, categorizing models into accelerated-convergence ZNNs, noise-tolerant ZNNs, and discrete-time ZNNs, each offering unique advantages for solving time-varying optimization problems. The main contribution of this paper is that it identifies the structural topology of ZNNs, analyzes and explains their operating mechanism to some extent, and classifies ZNNs structurally.
Remark 1. 
Neural networks and bio-inspired optimization algorithms have a lot in common. Firstly, both are types of bio-heuristic algorithms; in essence, both are distilled from inspiration gained from nature. For example, ref. [14] explored neural networks and bio-inspired optimization algorithms as two different classes of bio-heuristic algorithms and discussed their applications in the field of sustainable energy systems. Ref. [15] described a bio-inspired algorithm, i.e., the particle swarm algorithm, and neural networks in terms of concepts, methodology, and performance. Secondly, related previous research [9,16,17] has treated neural networks and bio-inspired optimization algorithms under the same concept, i.e., "computational intelligence"; both are considered powerful computing tools.
Neural networks and bio-inspired optimization algorithms have many intersections and correlations. Firstly, bio-inspired optimization algorithms are widely used in neural network parameter optimization [18,19]. Especially in [20], the author paid special attention to the optimization of recurrent neural network parameters using bio-inspired optimization algorithms and provided a comprehensive analysis. Secondly, neural networks and bio-inspired optimization algorithms are used in common applications, and they are always used for comparison, such as comparing intelligent feature selection and classification [21,22], as well as structural health monitoring [23].
Recurrent neural networks are a class of neural networks, and we believe that it is feasible to combine them with bio-inspired optimization algorithms in one text. Particularly in this paper, from the perspective of solving optimization problems, the author classified optimization problems as dynamic/time-varying and static/time-invariant, and in particular, they introduced a recurrent neural network that has been widely used to solve dynamic/time-varying optimization problems.
Remark 2. 
The writing logic of the paper revolves around how to solve the optimization problem. By observing the laws or habits of organisms in nature, researchers have been surprised to find that modeling these laws and habits can be constructively helpful in solving practical problems. As a result, many such algorithms have been born, and they are collectively known as intelligent algorithms. Since these algorithms mainly realize computational functions, some researchers refer to them as "computational intelligence" [10,16,17]. In this paper, optimization problems are classified into time-varying/dynamic optimization problems and time-invariant/static optimization problems; a class of neural networks for solving time-varying/dynamic optimization problems is described in detail in Section 3, while the remaining bio-inspired algorithms used for optimization problems, including swarm intelligence algorithms and genetic algorithms, are described in Section 4.
In the field of application, neural networks and bio-inspired optimization algorithms have a broad and deep intersection. Bio-inspired optimization algorithms, especially swarm intelligence algorithms, are widely used for hyperparameter optimization and training process optimization for neural networks. In development trends, there is an increasing number of studies that have attempted to combine swarm intelligence algorithms with neural networks in order to create hybrid algorithms, for example, using PSO to optimize the architecture or parameters of convolutional neural networks [24]. Some research has focused on the co-evolution of swarm intelligence algorithms and neural networks, optimizing both simultaneously within the same system and promoting each to evolve towards better solutions. As future prospects, future research may develop more efficient swarm intelligence optimization methods, further enhancing the performance and training efficiency of neural networks. As technology advances, the combination of swarm intelligence algorithms and neural networks is expected to play a crucial role in more emerging fields, such as autonomous driving, smart manufacturing, and personalized medicine.
Remark 3. 
Biomimetic algorithms, bio-inspired algorithms, and bio-inspired intelligence algorithms can be viewed as synonyms, all denoting algorithms that draw on the good design and functionality found in biological systems. Intelligent algorithms refer to a class of algorithms that use artificial intelligence techniques to solve complex problems. These algorithms often mimic intelligent behaviors observed in biological systems to find optimal or near-optimal solutions in complex environments. They possess adaptive, learning, and optimization capabilities that enable them to perform well in dynamic and uncertain situations. In addition to the bio-heuristic algorithms mentioned above, intelligent algorithms include several statistically or probabilistically inspired algorithms such as Bayesian optimization, random forest algorithms, and K-means algorithms. The conclusion is that bio-inspired algorithms are a subset of intelligent algorithms.

3. Bio-Inspired Neural Network Models

3.1. Origin

The interior of each neuron cell is filled with a conducting ionic solution, and the cell is surrounded by a membrane of high resistivity. Ion-specific pumps maintain an electrical potential difference between the inside and the outside of the cell by transporting ions such as K$^+$ and Na$^+$ across the membrane. Neuron cells respond to outside stimulation by dynamically changing the conductivity of specific ion species in synapses and cell membranes. Based on the response mechanisms of biological neuron cells, John Hopfield [25] used a neural dynamics equation to describe this change in conductivity, applying his physics background to the field of neurobiology to help people better understand how the brain works.
In 1982, Hopfield [26] introduced the concept of “computational energy”, and he proposed the Hopfield neural network (HNN) model. Two years later, he proposed the continuous-time HNN model [27], which pioneered a new way for neural networks to be used for associative memory and optimization calculations. This pioneering research work has a strong impetus to the study of neural networks. In 1985, Hopfield and Tank [28] proposed an HNN model for solving a difficult but well-defined optimization problem, the traveling-salesman problem, and used it to illustrate HNNs’ computational power. After that, numerous neural networks have been proposed, owing to this seminal work.
Similarly, Zhang introduced the concept of an "error function" and proposed a kind of RNN named the zeroing neural network (ZNN). ZNNs are easier to implement in hardware, and they are non-trained models. Essentially, the reason ZNNs can effectively tackle time-varying problems is that they utilize time-derivative information. For any given random initial input values, ZNNs operate in a neurodynamic manner, evolving in the direction of a decreasing error function and eventually reaching a steady state. The ZNN neural dynamics can be formulated as follows:
$$\frac{\mathrm{d}E(t)}{\mathrm{d}t} = -\gamma\,\phi(E(t)),$$
where $E(t)$ is the error function, $\gamma > 0$ is the design parameter, and $\phi(\cdot)$ denotes the activation function. The key step in designing a ZNN model is to construct an error function $E(t)$. For an arbitrary time-varying problem $WX(t) = b$, the error function can be formulated as $E(t) = WX(t) - b$, where $X(t)$ is the time-varying unknown to be solved. The structure of a ZNN is described in Figure 1, and the input-output relationship can be described as follows:
$$e_i(t + \Delta t) = e_i(t) + \frac{\mathrm{d}e_i(t)}{\mathrm{d}t}\,\Delta t,$$
where $e_i = \sum_{j=1}^{n} \omega_{ij} x_j(t) - b_i$ is the $i$th neuron's sub-error, $\omega_{ij}$ is the weight coefficient, and $t + \Delta t$ refers to the next time instant. When all sub-error increments converge to zero, the system achieves stability, i.e., $E(t) = WX(t) - b = 0$. Based on the superiority of ZNNs in real-time processing, many fruitful academic outcomes have been reported [29,30,31,32,33,34]. The following subsections discuss the convergence and stability of ZNNs, as well as model variants and applications.
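To make the operation mechanism concrete, the following minimal Python sketch integrates the ZNN dynamics for a small time-varying linear system $Wx(t) = b(t)$ with a linear activation function; the matrix, right-hand side, gain $\gamma$, and Euler step size are illustrative choices rather than values from any cited work:

```python
import numpy as np

# Minimal sketch of a continuous-time ZNN solving W x(t) = b(t), integrated
# with a simple Euler step. The error function E(t) = W x(t) - b(t) follows
# dE/dt = -gamma * phi(E); with a linear activation phi, this yields
#   x_dot = W^{-1} (b_dot - gamma * (W x - b)).
# W, b(t), gamma, and the step size are illustrative, not from any paper.

gamma, dt, T = 10.0, 1e-3, 2.0
W = np.array([[3.0, 1.0], [1.0, 2.0]])
b = lambda t: np.array([np.sin(t), np.cos(t)])
b_dot = lambda t: np.array([np.cos(t), -np.sin(t)])

x = np.zeros(2)                      # arbitrary initial state
W_inv = np.linalg.inv(W)
for k in range(int(T / dt)):
    t = k * dt
    E = W @ x - b(t)                 # error function E(t)
    x = x + dt * W_inv @ (b_dot(t) - gamma * E)

print("residual:", np.linalg.norm(W @ x - b(T)))  # should be near zero
```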

3.2. Convergence and Stability

The key performance aspects of ZNNs are convergence and stability. Generally speaking, these properties can be proven via three approaches, i.e., Lyapunov theory, ordinary differential equations, and Laplace transformation.
Proof. 
(Based on Lyapunov theory). For example, consider a time-varying, nonlinear minimization problem with target function $f(x(t), t) \in \mathbb{R}$ and $x(t) \in \mathbb{R}^{n}$; ref. [35] designed an error function as
$$E(t) = \frac{\partial f(x(t), t)}{\partial x(t)}.$$
Then, a Lyapunov function candidate is constructed:
$$V(t) = \frac{1}{2}E^{T}(t)E(t);$$
it is obvious that $V(t)$ is positive definite. Its time derivative can then be described as
$$\dot{V}(t) = -\gamma E^{T}(t)E(t),$$
which is negative definite. This means that the residual error of the ZNN model converges globally to zero. Notably, by substituting the definition of the error function $E(t)$, the global convergence of all extant continuous-time ZNN models can be analyzed using a similar methodology. This analytical approach represents the predominant method for verifying continuous-time ZNN models, and it has been extensively investigated in [36,37,38,39]. □
Proof. 
(Based on an ordinary differential equation). The ordinary differential equation is mainly used to verify the convergence speed of ZNN models activated via linear functions. For example, when considering the problem in [35], solving the $i$th subsystem of the design formula $\dot{E}_i(t) = -\gamma E_i(t)$ leads to
$$E_i(t) = E_i(0)\exp(-\gamma t),$$
where $E_i(0)$ denotes the initial value of $E_i(t)$. Hence, the residual error of the ZNN model converges globally and exponentially to zero. Extensive research has been presented in [40,41]. □
Proof. 
(Based on Laplace transformation). Applying the Laplace transform to $\dot{E}(t) = -\gamma E(t)$ gives
$$sE(s) - E(0) = -\gamma E(s),$$
and, further,
$$E(s) = \frac{E(0)}{s + \gamma}.$$
Since $\gamma > 0$, the final value theorem can be applied, which gives
$$\lim_{t \to \infty} E(t) = \lim_{s \to 0} sE(s) = \lim_{s \to 0} \frac{sE(0)}{s + \gamma} = 0;$$
the proof is thus completed. This approach was studied in [42,43,44]. □

3.3. Accelerated Convergence

To address some of the shortcomings of conventional neural networks, e.g., long training times and high computational resource consumption, accelerated-convergence neural networks undoubtedly provide an effective solution. Over the past decades, there have been two main kinds of accelerated-convergence neural networks: finite-time neural networks and predefined-time neural networks. What these two kinds have in common is the use of specially designed activation functions (AFs). For convenience, the main AFs in use are listed as follows (a small code sketch after the list illustrates several of them):
General linear AF:
$$\phi(x) = x.$$
Power AF:
$$\phi(x) = x^{n},$$
where $n$ is an odd integer and $n \geq 3$.
Bipolar sigmoid AF:
$$\phi(x) = \frac{1 - e^{-nx}}{1 + e^{-nx}},$$
where n > 1 .
Hyperbolic sine AF:
$$\phi(x) = \frac{e^{nx} - e^{-nx}}{2},$$
where n > 1 .
Power-sigmoid AF:
$$\phi(x) = \begin{cases} x^{n}, & \text{if } |x| \geq 1, \\ \dfrac{1 + e^{-n}}{1 - e^{-n}} \cdot \dfrac{1 - e^{-nx}}{1 + e^{-nx}}, & \text{otherwise}. \end{cases}$$
Sign function:
$$\operatorname{sgn}(x) = \begin{cases} 1, & \text{if } x > 0; \\ 0, & \text{if } x = 0; \\ -1, & \text{if } x < 0. \end{cases}$$
Sign–bi-power AF:
$$\operatorname{sbp}(x) = \operatorname{sgn}^{n}(x) + \operatorname{sgn}^{1/n}(x),$$
where $\operatorname{sbp}(\cdot): \mathbb{R} \to \mathbb{R}$ with parameter $n \in (0, 1)$, and $\operatorname{sgn}^{n}(\cdot)$ is defined as
$$\operatorname{sgn}^{n}(x) = \begin{cases} |x|^{n}, & \text{if } x > 0; \\ 0, & \text{if } x = 0; \\ -|x|^{n}, & \text{if } x < 0. \end{cases}$$
Weighted-sign–bi-power AF:
$$\operatorname{wsbp}(x) := \mu_1 \operatorname{sgn}^{n}(x) + \mu_2 \operatorname{sgn}^{1/n}(x) + \mu_3 x,$$
where $\mu_1$, $\mu_2$, and $\mu_3$ are tunable positive design parameters.
Nonlinear AF 1:
$$\phi(x) = \operatorname{sgn}^{n}(x),$$
where $0 < n < 1$.
Nonlinear AF 2:
$$\phi(x) = \xi \operatorname{sgn}^{n}(x) + \lambda x,$$
where $0 < n < 1$, $\xi > 0$, and $\lambda > 0$.
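For reference, a minimal NumPy sketch of several of the AFs listed above follows; the parameter values are arbitrary, and the vectorized forms are simply a restatement of the definitions:

```python
import numpy as np

# A few of the activation functions listed above, in vectorized NumPy form.
# Parameter values are illustrative.

def power_af(x, n=3):                 # power AF, n an odd integer >= 3
    return x ** n

def bipolar_sigmoid(x, n=4):          # bipolar sigmoid AF, n > 1
    return (1 - np.exp(-n * x)) / (1 + np.exp(-n * x))

def sgn_pow(x, n):                    # sgn^n(x) = |x|^n * sign(x)
    return np.abs(x) ** n * np.sign(x)

def sbp(x, n=0.5):                    # sign-bi-power AF, 0 < n < 1
    return sgn_pow(x, n) + sgn_pow(x, 1.0 / n)

def wsbp(x, n=0.5, mu1=1.0, mu2=1.0, mu3=1.0):   # weighted sign-bi-power AF
    return mu1 * sgn_pow(x, n) + mu2 * sgn_pow(x, 1.0 / n) + mu3 * x

e = np.linspace(-1.0, 1.0, 5)
print(sbp(e))                         # odd, monotone, steep near the origin
```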

3.3.1. Finite-Time Neural Network

The finite-time convergence of neural network models refers to the ability of a neural network to converge to a satisfactory state or achieve a predetermined level of performance within a finite period of time. In [45], Xiao proposed two nonlinear ZNN models based on nonlinear activation functions (Equations (18) and (19)) to efficiently solve linear matrix equations. According to Lyapunov theory, the two nonlinear ZNNs are proven to converge within finite time, and an upper bound on the convergence time is derived. According to this paper, the convergence bound is
$$T_u \leq \max\{T_u^{-},\, T_u^{+}\} \leq \max\left\{ \frac{\left|\eta^{-}(0)\right|^{1-\xi}}{\lambda(1-\xi)},\ \frac{\left|\eta^{+}(0)\right|^{1-\xi}}{\lambda(1-\xi)} \right\},$$
where $T_u$ denotes the convergence upper bound.
In [46], a ZNN with a specially constructed activation function was proposed and investigated to find the roots of nonlinear equations. Comparing it with gradient neural networks, Xiao [47] pointed out that this model has better consistency in actual situations and a stronger capability in dynamic systems. The specially constructed activation function can be defined as Equation (16); it is named the sign–bi-power function or Li function [48]. Due to its finite-time convergence, most researchers have employed it. For example, ref. [49] proposed an improved ZNN model for solving a time-varying linear equation system; such a ZNN model is activated via an array of continuous sign–bi-power functions (Equation (15)). In [50], to solve dynamic matrix inversion problems, two finite-time ZNNs with the sign–bi-power activation function were proposed by designing two novel error functions. Furthermore, Lv [51] introduced a weighted sign–bi-power function (Equation (17)) into ZNNs. The resulting ZNN model makes full use of all the terms of the weighted sign–bi-power function, and thus, it obtains a lower upper bound on the convergence time.
Differing from the previous processing methods, ref. [52] proposed a new design formula, which can also accelerate a neural network to finite-time convergence; this neural network achieves a breakthrough in convergence performance. According to this paper, an indefinite matrix-valued, time-varying error function $E(t)$ is defined as follows:
$$E(t) = A(t)X(t) - I \in \mathbb{R}^{n \times n}.$$
Then, the new design formula for $E(t)$ is proposed as follows:
$$\dot{E}(t) = -\gamma\,\varphi\!\left(\lambda_1 E(t) + \lambda_2 E^{q/p}(t)\right),$$
where $\varphi(\cdot)$ denotes an activation-function array; $\gamma > 0$ is used to adjust the convergence rate together with the design parameters $\lambda_1 > 0$ and $\lambda_2 > 0$; and $p$ and $q$ denote positive odd integers satisfying $p > q$.
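To make the effect of the fractional-power term concrete, the following scalar sketch integrates the new design formula alongside the classic rule $\dot{E}(t) = -\gamma E(t)$; the parameter values, the linear $\varphi$, and the Euler integration are illustrative choices rather than the setup of [52]:

```python
import numpy as np

# Scalar illustration of the finite-time design formula
#   E_dot = -gamma * phi(lambda1 * E + lambda2 * E^{q/p}),
# with a linear phi and q/p = 1/3 (p, q odd, p > q). Compared with the
# classic rule E_dot = -gamma * E, the fractional-power term keeps the
# decay rate bounded away from zero near E = 0, so the error reaches zero
# in finite time rather than only asymptotically. Parameters illustrative.

gamma, lam1, lam2, q_over_p = 1.0, 1.0, 1.0, 1.0 / 3.0
dt, e_ft, e_classic = 1e-4, 1.0, 1.0

def frac_pow(e, r):                  # odd fractional power, sign-preserving
    return np.sign(e) * np.abs(e) ** r

for k in range(int(10.0 / dt)):
    e_ft += dt * (-gamma * (lam1 * e_ft + lam2 * frac_pow(e_ft, q_over_p)))
    e_classic += dt * (-gamma * e_classic)

print(f"finite-time rule: {e_ft:.2e}, classic rule: {e_classic:.2e}")
```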

3.3.2. Predefined-Time Neural Network

The predefined-time convergence of a neural network refers to the ability of a neural network to converge to a satisfactory state or achieve a predetermined level of performance within a predefined time. In some actual applications [53,54] that must fulfill strict time constraints, a neural network model is required to guarantee timely convergence. For instance, a common problem in numerous fields of science and engineering is solving the dynamic-matrix square root (DMSR) [55,56]. It is preferable to solve the DMSR via a neural network with an explicitly and antecedently definable convergence time. Li [57] proposed a predefined-time neural network, the convergence time of which can be explicitly defined as a prior constant parameter. In that paper, the maximum predefined time can be formulated as
$$T_{\max} = \frac{1}{\lambda(1 - a)} + \frac{1}{\mu(b - 1)},$$
where $\lambda > 0$, $\mu > 0$, $0 < a < 1$, and $b > 1$ are prior constant parameters. Similarly, Xiao [58] presented and introduced a new ZNN model using a versatile activation function for solving time-dependent matrix inversion. Unlike the existing ZNN models, the proposed ZNN model not only converges to zero within a predefined, finite time but also tolerates several kinds of noise in solving the time-dependent matrix inversion. Differing from the AF-based approach, ref. [59] proposed a parameter-focused neural network to improve the ZNN model's convergence speed, which is more compatible with the characteristics of actual hardware parameters.

3.4. Noise Tolerance Neural Network

Noise in a neural network refers to uncertainty or randomness in the data, which can originate from various sources, including sensor errors, environmental interference, and incidental factors during data collection. In neural networks, noise can significantly impact model training and performance. Jim [60] first discussed the impact of noise on recurrent neural networks and pointed out that the introduction of noise can also help improve the convergence and generalization performance of recurrent neural network models. For the most part, however, the impact of noise on neural networks is negative. To eliminate the impact of noise, many researchers have devoted themselves to designing noise filters, such as the Gaussian filter [61], which has achieved major success in image-recognition denoising [62]. But, in practical engineering applications, it is hard to gauge in advance which type of noise will occur and, thus, hard to design the corresponding filter. Researchers have therefore focused on improving the robustness of neural networks themselves. Liao [63] analyzed the traditional ZNN model and verified that it tolerates low-amplitude noise under multiple noise interferences. There are two main approaches to tolerating noise interference: (1) introducing AFs and (2) introducing integral terms. In [64,65], some AFs were introduced to improve the robustness of neural networks. To tolerate harmonic noise interference, a Li activation function [66] was introduced to further improve the convergence rate and robustness; the convergence and robustness under harmonic noises of the proposed neural network models were proven through theoretical analyses.
The research in [67] first proposed a noise-tolerant ZNN model with a single integral term for solving the time-varying matrix inversion problem, which can be mathematically formulated as follows:
$$\dot{E}(t) = -\gamma E(t) - \mu \int_{0}^{t} E(\tau)\,\mathrm{d}\tau,$$
where $\gamma, \mu \in \mathbb{R}_{>0}$. Essentially, a single-integral-structure ZNN model uses the error-integration information to mitigate constant bias errors. No matter how large the matrix-form constant noise is, these kinds of ZNN models can converge to the theoretical solution with an arbitrarily small residual error. These design models can serve as a fundamental framework for future research on solving time-varying problems, and they open a door to solving time-varying problems with constant noise. Based on [67], the research in [68] presented a ZNN model with a single integral term to compute matrix inverses subject to null-space and specific range constraints under various noises. Further enhancing the robustness of ZNN models is an important topic. Thus, Liao [29] extended the single-integral framework in [67] and designed a double-integral-structure ZNN model to solve various types of time-varying problems, whose structure can be described as follows:
$$\dot{E}(t) = -3\gamma E(t) - 3\gamma^{2} \int_{0}^{t} E(\tau)\,\mathrm{d}\tau - \gamma^{3} \int_{0}^{t}\!\int_{0}^{\tau} E(\iota)\,\mathrm{d}\iota\,\mathrm{d}\tau.$$
Then, ref. [69] illustrated that a double-integral-structure ZNN model can effectively tolerate linear noise when solving matrix inversion problems. Similarly, to enhance robustness in solving matrix inversion problems, ref. [70] proposed a variable-parameter, noise-tolerant ZNN model. Compared with a single-integral-structure ZNN model for matrix inversion, numerical simulations revealed that a double-integral-structure ZNN model has better robustness under the same external noise interference.
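The following scalar sketch contrasts the basic design rule with the single-integral rule above under a constant additive noise; the parameter values are illustrative:

```python
import numpy as np

# Scalar sketch contrasting the basic design rule with the single-integral
# noise-tolerant rule under a constant additive noise delta:
#   basic:    e_dot = -gamma * e + delta
#   integral: e_dot = -gamma * e - mu * int_0^t e(tau) dtau + delta
# The integral term accumulates the constant bias and cancels it, so the
# residual error is still driven toward zero. Parameters are illustrative.

gamma, mu, delta, dt = 5.0, 5.0, 0.5, 1e-4
e_basic, e_int, acc = 1.0, 1.0, 0.0   # acc holds the running integral of e

for k in range(int(10.0 / dt)):
    e_basic += dt * (-gamma * e_basic + delta)
    acc += dt * e_int
    e_int += dt * (-gamma * e_int - mu * acc + delta)

print(f"basic rule residual:    {e_basic:.4f}")   # settles near delta/gamma
print(f"integral rule residual: {e_int:.2e}")     # driven toward zero
```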

3.5. Convergence Accuracy

As mentioned before, a series of continuous-time ZNN models have been proposed and designed to solve different problems. However, the step sizes used in simulating continuous-time systems are variable, while a digital computer often requires constant time steps, so it is hard to implement continuous-time ZNN models on digital computers. Additionally, continuous-time ZNN models always work under ideal conditions, i.e., it is supposed that neurons communicate and respond without any delay. Nevertheless, because of the sampling gap, a time delay is inevitable, and it unavoidably reduces the computing accuracy of ZNN models. Thus, researchers have proposed and investigated discrete-time ZNN models. For example, Xiang [71] proposed a discrete-time ZNN model for solving dynamic matrix pseudoinversion. There are some challenges in constructing discrete-time ZNN models, and they are listed as follows.
In view of time, any time-varying problem can be considered a causal system. The computation should be based on existing data, which include present and/or previous data. For instance, when solving the time-varying matrix inversion problem discretely, at time instant $t_k$, we can only use known information, such as $A(t_k)$ and $\dot{A}(t_k)$, not unknown information, such as $A(t_{k+1})$ and $\dot{A}(t_{k+1})$, to compute the inverse of $A(t_{k+1})$, i.e., $X(t_{k+1})$. Therefore, a fundamental requirement for constructing a discrete-time ZNN model is that unknown data cannot be used.
This means that a potential numerical differentiation formula for discretizing a continuous-time ZNN model should have, and should only have, one point ahead of the target point. Thus, in the process of discretizing the continuous-time ZNN model using the numerical differentiation formulation, the discrete-time ZNN model has only one unknown point, $X(t_{k+1})$, to be computed. Hence, no matter how small the truncation error of each formula is, discrete-time ZNN models cannot be constructed using backward or multiple-point central differentiation rules; only one-step-ahead forward differentiation formulas can be considered for discrete-time ZNNs.
Time is valuable for solving time-varying problems. It is crucial to design a very straightforward discrete-time ZNN model with minimal time consumption.
In [72], Zhang proposed the first discrete-time ZNN model for solving constant matrix inversion. This model built a critical bridge between discrete-time ZNN frameworks and the traditional Newton iteration. At the beginning, refs. [73,74] constructed discrete-time ZNN models using the Euler forward difference. Notably, the error pattern of discrete-time ZNN models derived from the Euler forward difference approach is $O(\tau^{2})$, where $\tau$ is the sampling interval. For instance, the Euler-type discrete-time ZNN model for time-varying matrix inversion is directly formulated as [75]
$$X_{k+1} = X_k - \tau X_k \dot{A}_k X_k - h X_k (A_k X_k - I),$$
where $k$ denotes the iteration index, and $h = \tau\gamma > 0$. In particular, by omitting the term $\tau X_k \dot{A}_k X_k$ and setting $h = 1$, the Euler-type discrete-time ZNN model (24) is simplified to
$$X_{k+1} = X_k - X_k (A_k X_k - I),$$
which aligns with the traditional Newton iteration. Hence, the traditional Newton iteration can be considered a special case of the Euler-type discrete-time ZNN model (24). Furthermore, a linkage between the Getz–Marsden dynamic system and discrete-time ZNN models was established in [76]. To enhance the accuracy of ZNN model discretization, a Taylor-type numerical differentiation formula was proposed in [77] for first-order derivative approximation. This formula exhibits a truncation error of $O(\tau^{2})$, and it is expressed as
$$\dot{f}(t_k) = \frac{2f(t_{k+1}) - 3f(t_k) + 2f(t_{k-1}) - f(t_{k-2})}{2\tau} + O(\tau^{2}).$$
Subsequently, a novel Taylor-type discrete-time ZNN model was developed for time-varying matrix inversion:
$$X_{k+1} = -\tau X_k \dot{A}_k X_k - h X_k (A_k X_k - I) + \frac{3}{2} X_k - X_{k-1} + \frac{1}{2} X_{k-2}.$$
This Taylor-type discrete-time ZNN model converges to the theoretical solution of the time-varying problem with a residual error of $O(\tau^{3})$.
Recently, a numerical differentiation rule with a truncation error of $O(\tau^{3})$ was established in [78] for first-order derivative approximation. By utilizing this new formula, a five-step discrete-time ZNN model was proposed for time-varying matrix inversion, with a residual error of $O(\tau^{4})$. It is imperative to note that discrete-time ZNN models can be considered time-delay systems; consequently, a high value of $h$ may induce oscillations in the model. To mitigate instability, reducing the value of $h$ is a direct remedy. However, decreasing $h$ significantly decelerates the convergence of discrete-time ZNN models.
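As a concrete illustration, the following sketch iterates the Euler-type discrete-time ZNN model (24) to track the inverse of a smoothly varying matrix; the matrix $A(t)$, step size $\tau$, and gain $h$ are illustrative choices, and $\dot{A}$ is obtained analytically here, though a backward difference of $A$ would also serve:

```python
import numpy as np

# Euler-type discrete-time ZNN for time-varying matrix inversion:
#   X_{k+1} = X_k - tau * X_k A_dot_k X_k - h * X_k (A_k X_k - I),
# tracking the inverse of a smoothly varying 2x2 matrix A(t).

tau, h, steps = 1e-3, 0.5, 5000
I = np.eye(2)

def A(t):
    return np.array([[2.0 + np.sin(t), 0.5], [0.5, 2.0 + np.cos(t)]])

def A_dot(t):
    return np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])

X = np.linalg.inv(A(0.0))            # start from the exact initial inverse
for k in range(steps):
    t = k * tau
    X = X - tau * X @ A_dot(t) @ X - h * X @ (A(t) @ X - I)

t_end = steps * tau
print("tracking error:", np.linalg.norm(A(t_end) @ X - I))
```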

3.6. Time-Varying Linear System Solving

Time-varying linear systems have extensive applications in engineering fields such as circuit design [79], communication systems [80,81,82], and signal processing [83,84]. Many engineering problems can be effectively modeled as time-varying linear systems [85,86]. Time-varying linear systems can be constructed using a series of time-varying matrices that store system information, which changes over time.
One notable contribution in this field is the work of Lu [87], who proposed a novel ZNN model for solving time-varying underdetermined linear systems. This model satisfies variable constraints and effectively drives the residual errors to convergence. Through extensive theoretical analyses and numerical simulations, the authors demonstrated the effectiveness and validity of the proposed ZNN model and showcased its applicability in controlling the PUMA560 robot under physical constraints. Additionally, Xiao [88] designed a ZNN model specifically for time-varying linear matrix equations; their study included a theoretical analysis of the maximum convergence time and demonstrated exceptional performance in solving time-varying linear equations. Zhang [89] proposed a varying-gain ZNN model for solving linear systems of the form $A(t)B(t)C(t) = D(t)$; due to its varying gain parameter, this ZNN model can achieve finite-time convergence. Moreover, several other ZNN models, proposed in [90,91], have been presented for solving time-varying Sylvester equations of the form $A(t)B(t) - B(t)A(t) = C(t)$.
Similarly, complex time-varying linear systems have also been widely discussed and investigated. The complex number domain is used widely and profoundly in science and engineering, where it not only serves as a mathematical tool but also plays a key role in solving many practical problems. For the online solving of complex time-varying linear equations, ref. [92] proposed and investigated a neural network that adequately utilizes the time-derivative information of the time-varying complex matrix coefficients; it was theoretically proven to converge to the theoretical solution in finite time. In [93], a complex-valued, nonlinear, recurrent neural network was designed for time-varying matrix inversion in complex number fields; this complex-valued, nonlinear ZNN model was established on the basis of a nonlinear evolution formula and possesses better finite-time convergence. Long [94] mainly discussed the finite-time stabilization of a complex-valued ZNN with a proportional delay via a direct analysis method, without separating the real and imaginary parts of the complex values. To achieve higher precision and a higher convergence rate, ref. [95] proposed a new ZNN model; compared with the GBRNN model and existing ZNN models, the illustrative results showed that it has higher precision and a higher convergence rate.

3.7. Time-Varying Nonlinear System Solving

Nonlinear systems are prevalent in many real-world applications, and real-time solutions for nonlinear systems have been a hot topic. In [42], a continuous-time ZNN model was presented and investigated for online time-varying nonlinear optimization. This paper focused on the convergence of continuous-time ZNN models, and it theoretically verified that the continuous-time ZNN model possesses a global exponential convergence property. To satisfy both finite-time convergence and noise tolerance, Xiao [96] proposed a finite-time robust ZNN to solve time-varying nonlinear minimization under various external disturbances. Ref. [97] proposed a finite-time, varying-parameter, convergent-differential ZNN for solving nonlinear optimization; the proposed model has super-exponential convergence, finite-time convergence, and strong robustness.
Quadratic minimization is a fundamental problem in nonlinear optimization, and it has wide applicability. The goal of quadratic minimization is to find values of variables that minimize a quadratic objective function subject to certain constraints. Ref. [98] proposed a segmented variable-parameter ZNN model for solving time-varying quadratic problems. They achieved strong robustness by keeping the time-varying parameters stable. The general time-varying quadratic problem can be expressed as follows:
$$\begin{aligned} \min.\quad & x^{T}(t)H(t)x(t) - f^{T}(t)x(t), \\ \text{s.t.}\quad & A(t)x(t) = \mu(t), \\ & B(t)x(t) \leq \upsilon(t), \end{aligned}$$
where the vector $x(t) \in \mathbb{R}^{n}$ is the solution to be obtained using the model; $A(t) \in \mathbb{R}^{m \times n}$, $B(t) \in \mathbb{R}^{w \times n}$, $f(t) \in \mathbb{R}^{n}$, $\mu(t) \in \mathbb{R}^{m}$, and $\upsilon(t) \in \mathbb{R}^{w}$; and $H(t) \in \mathbb{R}^{n \times n}$ is a symmetric positive-definite matrix; these time-varying coefficients are known. In [99], a single-integral-structure ZNN model with a nonlinear activation function was proposed to solve dynamic quadratic minimization with additive noises considered. Differing from the above-mentioned ZNN models, ref. [100] proposed a new design formula; based on this formula, a finite-time, noise-tolerant ZNN was proposed for solving time-varying quadratic optimization problems under various levels of additive noise interference.
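As an illustration of how a ZNN can be attached to such a problem, the sketch below handles only the equality-constrained part of the quadratic program by zeroing the residual of the time-varying KKT system; this is one common route rather than the specific formulation of [98,99,100], and all problem data are illustrative:

```python
import numpy as np

# ZNN applied to the equality-constrained part of a time-varying QP,
#   min x'H(t)x - f'(t)x  s.t.  A(t)x = mu(t).
# The KKT conditions give a time-varying linear system M(t) y(t) = u(t)
# with M = [[2H, A'], [A, 0]], y = [x; lambda], u = [f; mu]; the ZNN then
# drives E(t) = M y - u to zero. M_dot and u_dot use finite differences.

gamma, dt, T, eps = 50.0, 1e-4, 3.0, 1e-6

def M(t):
    H = np.array([[2.0 + np.sin(t), 0.0], [0.0, 2.0 + np.cos(t)]])
    A = np.array([[1.0, 1.0]])
    top = np.hstack([2.0 * H, A.T])
    bot = np.hstack([A, np.zeros((1, 1))])
    return np.vstack([top, bot])

def u(t):
    f = np.array([np.cos(t), np.sin(t)])
    mu = np.array([1.0])
    return np.concatenate([f, mu])

y = np.zeros(3)                       # [x1, x2, lambda]
for k in range(int(T / dt)):
    t = k * dt
    M_dot = (M(t + eps) - M(t)) / eps
    u_dot = (u(t + eps) - u(t)) / eps
    E = M(t) @ y - u(t)
    y += dt * np.linalg.solve(M(t), u_dot - M_dot @ y - gamma * E)

print("KKT residual:", np.linalg.norm(M(T) @ y - u(T)))
```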

3.8. Robot Control

As an effective approach to solving time-varying problems, ZNNs have been widely applied to robot control, such as redundant manipulators [101] and multiple-robot control [102,103]. For example, because the end-effector task may be incomplete due to a manipulator’s physical limitations or space limitations, it is important to adjust the manipulator configuration from one state to another state. In [104], Tang proposed a refined self-motion control scheme based on ZNNs, which can be described as follows:
$$\begin{aligned} \text{minimize}\quad & \|\Omega(t) + q(t)\|_{2}^{2}/2, \\ \text{subject to}\quad & J(\Omega(t))\dot{\Omega}(t) = -\gamma_1 \big(F(\Omega(t)) - F(\Omega(0))\big), \\ & A^{-} \leq \dot{\Omega} \leq A^{+}, \\ \text{with}\quad & q(t) = \gamma_2\, t\,\big(\Omega(t) - \Omega_g\big), \\ & A^{-}(t) = \max\{\omega^{-}(t) + k(\omega^{-}(t) - \Omega(t)),\ \theta^{-}(t)\}, \\ & A^{+}(t) = \max\{\omega^{+}(t) + k(\omega^{+}(t) - \Omega(t)),\ \theta^{+}(t)\}, \end{aligned}$$
where $\dot{\Omega}(t) \in \mathbb{R}^{n}$ and $\Omega(t) \in \mathbb{R}^{n}$ are the joint-angle-velocity vector and joint-angle vector, respectively; $\Omega_g$ denotes the given joint-angle vector; $\Omega(0)$ denotes the initial joint-angle vector; $\|\cdot\|_2$ is the 2-norm of a vector; $q(t) \in \mathbb{R}^{n}$ is defined according to the self-motion task; and $J(\Omega) = \partial F(\Omega)/\partial \Omega$ is the Jacobian matrix. $\gamma_1$ and $\gamma_2$ are positive design parameters, and $k$ is a scaling coefficient. In addition, $\theta^{-}$ and $\theta^{+}$ denote the time-varying joint-angle lower and upper bounds, and $\omega^{-}$ and $\omega^{+}$ represent the joint-angle-velocity lower and upper bounds. This self-motion scheme can adjust the manipulator configuration from the initial state to the final state while keeping the end effector immobile at its current orientation or position. Considering higher accuracy, ref. [105] proposed a discrete-time ZNN model to solve the motion planning of redundant manipulators.
To effectively manage and control multi-robot systems, Liao [106] proposed a strategy for real-time control. This strategy achieves real-time measurement and minimization of inter-robot distances, which can be formulated as
$$\min \sum_{j=1}^{N} \Big( \big|P_j(t) - P_{j+1}(t)\big|^{2} + \big|P_j(t) - P_{j-1}(t)\big|^{2} \Big)/2,$$
where the complex number $P_j = x_j + y_j i$ encodes the position of the $j$th follower robot, the real part $x_j$ and the imaginary part $y_j$ represent the $x$- and $y$-coordinates, and $P_{j-1}(t)$ and $P_{j+1}(t)$ are its two neighboring robots. Based on leader-follower frameworks, the desired formation is known only by the leader robots, and the follower robots simply follow the motion of the leaders (this paper chose the robots at the edges as leaders). Furthermore, to verify the correctness of this strategy, the author employed a wheeled mobile robot, whose kinematic model can be described as
$$\begin{aligned} \dot{x}(t) &= (\nu_r(t) + \nu_l(t))\cos\omega(t)/2, \\ \dot{y}(t) &= (\nu_r(t) + \nu_l(t))\sin\omega(t)/2, \\ \dot{\omega}(t) &= (\nu_r(t) - \nu_l(t))/2. \end{aligned}$$
Then, according to differential flatness, the left- and right-wheel velocities can be represented via finite-order derivatives of the position information, formulated as follows:
$$\begin{aligned} \nu_r(t) &= \sqrt{\dot{x}^{2}(t) + \dot{y}^{2}(t)} + \frac{\ddot{y}(t)\dot{x}(t) - \dot{y}(t)\ddot{x}(t)}{\dot{x}^{2}(t) + \dot{y}^{2}(t)}, \\ \nu_l(t) &= \sqrt{\dot{x}^{2}(t) + \dot{y}^{2}(t)} - \frac{\ddot{y}(t)\dot{x}(t) - \dot{y}(t)\ddot{x}(t)}{\dot{x}^{2}(t) + \dot{y}^{2}(t)}. \end{aligned}$$
To achieve real-time control, the author constructed a complex ZNN model to find the real-time positions. Compared to the GBRNN model, this model can eliminate a large lagging error.
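The following sketch evaluates this flatness mapping (as reconstructed above) for a smooth reference trajectory; the circular trajectory and its derivatives are illustrative:

```python
import numpy as np

# Differential-flatness mapping: given a smooth reference trajectory
# (x(t), y(t)), recover the left/right wheel velocities from first- and
# second-order derivatives of the position. Circular test trajectory.

def wheel_velocities(xd, yd, xdd, ydd):
    v = np.hypot(xd, yd)                             # translational speed
    omega = (ydd * xd - yd * xdd) / (xd**2 + yd**2)  # turning-rate term
    return v + omega, v - omega                      # (v_r, v_l)

t = 1.0                                        # circle of radius 2
xd, yd = -2.0 * np.sin(t), 2.0 * np.cos(t)     # first derivatives
xdd, ydd = -2.0 * np.cos(t), -2.0 * np.sin(t)  # second derivatives
print(wheel_velocities(xd, yd, xdd, ydd))
```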

3.9. Discussion and Challenges

Overall, research on ZNNs has achieved considerable success in solving time-varying problems. However, there are still many challenges to be solved in the future. This paper provides some prospective suggestions for future directions.
(1)
Keep exploring and discovering more useful one-step-ahead numerical differentiation formulas for constructing discrete-time ZNNs. One goal is for future discrete-time ZNN models to possess larger step sizes and higher computational accuracy, since larger step sizes consume less computation time. Such advancements align closely with the development of applied mathematics and computational mathematics.
(2)
How to achieve faster convergence remains an open problem. Additionally, understanding how to guarantee convergence conditions is meaningful for the development of ZNNs.
(3)
Apart from global stability, more theory is also needed to improve robustness.
All of these future developments will coincide with the advancement of mathematical theory, particularly in applied mathematics and computational mathematics. For instance, it is necessary to derive new models for addressing inequality constraints in time-varying optimization problems. These advancements will parallel the progress in computational mathematics, which is crucial for constructing and developing neural networks. It is important to recognize that different types of neural networks, such as discrete-time ZNN models or accelerated-convergence ZNN models, each have their own applicable ranges. Therefore, it is unrealistic to expect that an isolated ZNN model can solve all computational problems.

4. Intelligence Algorithms with Applications

Bio-inspired intelligence algorithms can be categorized into two specific categories: genetic algorithms and swarm intelligence algorithms. This section mainly discusses one of the classic evolutionary algorithms, i.e., the genetic algorithm, and one of the classic swarm intelligence algorithms, i.e., the particle swarm optimization algorithm, introducing their origins, design processes, and basic principles. It also introduces a number of other swarm intelligence algorithms, such as the ant colony optimization algorithm, the artificial fish swarm algorithm, and the Harris hawks optimizer. Then, we select some practical domains, i.e., gene feature extraction, intelligent communication, and image processing, to emphasize the applicability of intelligent algorithms.

4.1. Bio-Inspired Intelligence Algorithm

Bio-inspired optimization algorithms draw inspiration from biological evolution and swarm intelligence, aiming to solve complex optimization problems. These algorithms leverage the principles underlying biological systems, such as evolution, swarm intelligence, and natural selection, to tackle optimization, search, and decision-making tasks across diverse domains.
One prominent category of bio-inspired algorithms is evolutionary algorithms, which include genetic algorithms, evolutionary strategies, evolutionary programming, and genetic programming. These algorithms simulate the process of natural evolution, where candidate solutions/individuals evolve and adapt over successive generations through mechanisms such as selection, crossover/reproduction, and mutation, ultimately converging toward optimal or near-optimal solutions.
Another prominent category is swarm intelligence algorithms, which are inspired by the collective behavior of decentralized, self-organized systems observed in nature, such as ant colonies and bird flocks [107,108]. Swarm intelligence algorithms include particle swarm optimization, the egret swarm optimization algorithm [109], the beetle antennae search algorithm [110,111], and the gray wolf algorithm [112], which mimic the cooperative behaviors of social animals to efficiently explore solution spaces and find high-quality solutions.
These intelligence algorithms offer the following advantages:
Distributed robustness: Individuals are distributed across the search space, and they interact with each other. Due to the lack of a centralized control center, these algorithms exhibit strong robustness. Even if certain individuals fail or become inactive, the overall optimization process is not significantly affected.
Simple structure and easy implementation: Individuals have simple structures and behavior rules. They typically only perceive local information and interact with others through simple mechanisms. This simplicity makes the algorithms easy to implement and understand.
Self-organization: Individuals exhibit complex self-organizing behaviors through interactions and information exchanges. The intelligent behavior of the entire swarm emerges from the interactions among simple individuals.

4.1.1. Genetic Algorithm

In 1975, Holland [113] first proposed the genetic algorithm (GA), which was inspired by natural selection. As the most successful type of evolutionary algorithm, GA is a stochastic optimization algorithm with global search potential. Essentially, GA emulates the process of natural selection to find optimal solutions for complex optimization and search problems. The core idea of GA is the survival of the fittest: a fitness function measures the merit of individuals, and better-performing individuals have a higher probability of being selected to produce offspring in the next generation. Through the iterative process of reproduction and selection, GA gradually improves the solution set and converges toward the optimal or a near-optimal solution. The design procedure can be described as follows (a minimal code sketch is given after the list), and the flow diagram is shown in Figure 2.
(1)
Initialize the population.
(2)
Calculate the fitness function for each individual.
(3)
Select individuals with a high fitness function, and let them undergo crossover/reproduction and mutation to generate offspring.
(4)
Calculate the fitness function for each individual.
(5)
If the termination condition is satisfied, select the individual with the highest fitness, or else return to step (2).
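A minimal real-coded GA following steps (1)-(5) is sketched below; the fitness function (a negated sphere), population size, and mutation scale are illustrative choices:

```python
import numpy as np

# Minimal real-coded GA following steps (1)-(5) above, maximizing an
# illustrative fitness (a negated sphere function).

rng = np.random.default_rng(0)
pop_size, dim, generations = 40, 5, 200

def fitness(x):                       # higher is better; optimum at x = 0
    return -np.sum(x**2)

pop = rng.uniform(-5, 5, (pop_size, dim))              # (1) initialize
for _ in range(generations):
    scores = np.array([fitness(ind) for ind in pop])   # (2)/(4) evaluate
    order = np.argsort(scores)[::-1]
    parents = pop[order[: pop_size // 2]]              # (3) select fittest
    mates = parents[rng.permutation(len(parents))]
    alpha = rng.random((len(parents), 1))
    children = alpha * parents + (1 - alpha) * mates   # crossover
    children += rng.normal(0, 0.1, children.shape)     # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]   # (5) best individual
print("best fitness:", fitness(best))
```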
Due to its remarkable performance in optimization, GA has been applied in many optimization problems, especially where the search space is large, complex, or poorly known. Ou [114] developed a new hybrid knowledge extraction framework by combining GA and backpropagation neural networks. The efficacy of the framework was demonstrated through a case study using the Wisconsin breast cancer dataset. Furthermore, Li [115] investigated the application of the harmonic search algorithm in this framework.

4.1.2. Particle Swarm Optimization Algorithm

In 1995, by observing the social behavior of birds flocking and searching for food, Kennedy and Eberhart proposed the particle swarm optimization (PSO) algorithm, which is stochastic and computational-intelligence-oriented [116]. "Particles" usually refers to population members with an arbitrarily small mass and volume. Each particle encodes a solution with four vectors: its current position, its current optimal solution, the current optimal solution found by its neighbor particles, and its velocity. In each iteration, each particle updates its position in the search space based on its own optimal position and the optimal position of its neighbors. The update rule for each particle's position and velocity can be described as follows:
$$\begin{aligned} P_{j+1}^{i} &= P_{j}^{i} + \nu_{j+1}^{i}, \\ \nu_{j+1}^{i} &= \nu_{j}^{i} + a_1 b_1 (P^{i} - P_{j}^{i}) + a_2 b_2 (H^{*} - P_{j}^{i}), \end{aligned}$$
where $P_{j+1}^{i}$ denotes the $i$th particle's position in the $(j+1)$th iteration; $\nu_{j}^{i}$ denotes the $i$th particle's velocity in the $j$th iteration; $P^{i}$ denotes the $i$th particle's current optimal position; $H^{*}$ denotes the whole swarm's optimal position; $a_1$ and $a_2$ denote the cognitive and social parameters; and $b_1, b_2 \in [0, 1]$ are random numbers. The design procedure can be described as follows (a minimal code sketch follows the list), and the flow diagram is shown in Figure 3.
(1)
Initialize the particle swarm.
(2)
Calculate the fitness function for each particle.
(3)
Update each particle’s current optimal solution.
(4)
Update the whole particle’s current optimal solution.
(5)
Update each particle’s position and velocity.
(6)
If the termination condition is satisfied, select the individual with the highest fitness or else return to step (2).
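A minimal PSO following steps (1)-(6) and the update rule above is sketched below; the sphere objective and the coefficient values are common but arbitrary choices:

```python
import numpy as np

# Minimal PSO following steps (1)-(6) above, minimizing an illustrative
# sphere function. Swarm size and coefficients (a1, a2) are arbitrary.

rng = np.random.default_rng(0)
n, dim, iters, a1, a2 = 30, 5, 200, 1.5, 1.5

def cost(x):
    return np.sum(x**2)

pos = rng.uniform(-5, 5, (n, dim))            # (1) initialize swarm
vel = np.zeros((n, dim))
pbest = pos.copy()                            # each particle's best
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)]          # swarm-wide best

for _ in range(iters):
    b1, b2 = rng.random((n, dim)), rng.random((n, dim))
    vel = vel + a1 * b1 * (pbest - pos) + a2 * b2 * (gbest - pos)  # (5)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])  # (2) evaluate
    improved = costs < pbest_cost             # (3) update personal bests
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)]      # (4) update global best

print("best cost:", cost(gbest))
```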
Due to the PSO algorithm's simple concept, easy implementation, computational efficiency, and unique searching mechanism, it has extensive applications in various engineering optimizations. To solve problems associated with Takagi–Sugeno fuzzy neural networks, Peng [117] proposed an enhanced chaotic quantum-inspired PSO algorithm, which effectively alleviates problems such as slow convergence and long calculation times. Additionally, Yang [118] proposed an improved PSO (IPSO) algorithm to identify the parameters of the Preisach model for modeling hysteresis phenomena. Compared with traditional PSO methods, IPSO offers superior convergence, less computation time, and higher accuracy.

4.2. Ant Colony Optimization Algorithm

The ant colony optimization (ACO) algorithm was first introduced by M. Dorigo and colleagues in 1996 as a nature-inspired, meta-heuristic method aimed at solving combinatorial optimization problems. This algorithm draws inspiration from stigmergy, where communication within a swarm is achieved through environmental manipulation. It has been demonstrated that ants communicate by depositing pheromones on the ground or objects to send specific signals to other ants. Various pheromones are used for different tasks within a colony. For example, ants mark distinct paths from their nests to a food source to guide other ants for transportation. The shortest path is naturally selected because longer paths allow more time for pheromone evaporation before re-deposition.
In the ACO’s original version, the problem must be represented as a graph, where each ant represents a tour, and the goal is to identify the shortest one. There are two key matrices: distance and pheromones, which ants use to update their routes. Through multiple iterations and updates of these matrices, a tour is established that all ants follow, which is considered the best solution for the problem. ACO has various adaptations that can address problems with continuous variables, constraints, multiple objectives, and more. Bullnheimer [119] introduced an innovative rank-based variant of the ant system. In this version, the paths of the ants are sorted from the shortest to the longest after each iteration. The algorithm assigns different weights based on the path length, with shorter paths receiving higher weights. Hu [120] developed a new ACO called the “continuous orthogonal ACO” to solve continuous problems, using a pheromone deposition mechanism that allows ants to efficiently search for solutions. Gupta [121] introduced the concept of depth into a recursive ACO, where depth determines the number of recursions, each based on a standard ant colony algorithm. Gao [122] combined the K-means clustering algorithm with the ACO and introduced three immigrant schemes to solve the dynamic location routing problem. Hemmatian [123] applied an elitist ACO algorithm to the multi-objective optimization of hybrid laminates, aiming to minimize the cost during the computation process.
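The following compact sketch implements the basic ant-system mechanics described above (probabilistic tour construction from pheromone and distance, evaporation, and length-weighted deposition) on a small random TSP instance; the city layout and the parameters alpha, beta, rho, and Q are illustrative choices:

```python
import numpy as np

# Compact ant-system sketch for a small symmetric TSP, following the
# pheromone/distance mechanics described above.

rng = np.random.default_rng(0)
n_cities, n_ants, iters = 10, 20, 100
alpha, beta, rho, Q = 1.0, 3.0, 0.5, 1.0

coords = rng.random((n_cities, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
np.fill_diagonal(dist, np.inf)                 # forbid self-loops
tau = np.ones((n_cities, n_cities))            # pheromone matrix
best_len, best_tour = np.inf, None

for _ in range(iters):
    deposits = np.zeros_like(tau)
    for _ant in range(n_ants):
        tour = [rng.integers(n_cities)]
        while len(tour) < n_cities:            # probabilistic construction
            i = tour[-1]
            weights = (tau[i] ** alpha) * (dist[i] ** -beta)
            weights[tour] = 0.0                # exclude visited cities
            tour.append(rng.choice(n_cities, p=weights / weights.sum()))
        edges = list(zip(tour, tour[1:] + tour[:1]))
        length = sum(dist[i, j] for i, j in edges)
        for i, j in edges:                     # shorter tours deposit more
            deposits[i, j] += Q / length
            deposits[j, i] += Q / length
        if length < best_len:
            best_len, best_tour = length, tour
    tau = (1 - rho) * tau + deposits           # evaporation + deposition

print("best tour length:", round(best_len, 3))
```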

4.3. Artificial Fish Swarm

In nature, fish can identify more nutritious areas by either searching individually or following others, and areas with more fish tend to be richer in nutrients. In 2002, Li [124] first introduced the stochastic, population-based artificial fish swarm (AFS) optimization algorithm. The algorithm's core idea is to imitate fish behaviors such as swarming, preying, and following, combining these with a local search to generate a global optimum [125]. The AFS algorithm generally has the same advantages as genetic algorithms, while reaching faster convergence and needing fewer parameter adjustments. Unlike genetic algorithms, AFS does not involve crossover and mutation processes, making it simpler to execute. It operates as a population-based optimizer: the system begins with a set of randomly generated potential solutions and iteratively searches for the best one [126]. Due to its random search and parallel optimization method, AFS has global search capabilities and fast convergence. The fish group is represented by a set of points or solutions, with each agent symbolizing a candidate solution. The feasible solution space represents the "waters" where artificial fish move and search for the optimum. The algorithm reaches the optimal solution through survival, competition, and coordination mechanisms. In the AFS algorithm, the swarming behavior is characterized by movement toward the central point of the visual scope. Numerous researchers have developed and improved the AFS algorithm, resulting in a variety of novel variants. For example, a binary AFS was proposed to solve 0-1 multidimensional knapsack problems; in this approach, a 0/1-bit binary string represents a point, and each bit of a trial point is generated by copying the relevant bit from itself or another specified point with equal probability. Zhang introduced a Pareto-improved AFS algorithm for addressing a multi-objective fuzzy disassembly-line-balancing problem, which helped reduce uncertainty. To enhance the global search ability and convergence speed, Zhu proposed a new quantum AFS algorithm based on principles of quantum computing, such as quantum bits and quantum gates.
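As a toy illustration of the preying, swarming, and following behaviors described above, the following sketch runs a heavily simplified AFS on a 2-D objective; the visual range, step size, and behavior-selection logic are simplified illustrative choices rather than a faithful reproduction of [124]:

```python
import numpy as np

# Toy sketch of three AFS behaviors (preying, swarming, following) on a
# 2-D objective. All parameters are illustrative simplifications.

rng = np.random.default_rng(0)
n_fish, dim, visual, step, iters = 25, 2, 1.5, 0.3, 200

def food(x):                                   # higher is better; peak at 0
    return -np.sum(x**2)

fish = rng.uniform(-5, 5, (n_fish, dim))
for _ in range(iters):
    for i in range(n_fish):
        d = np.linalg.norm(fish - fish[i], axis=1)
        mates = fish[(d > 0) & (d < visual)]   # fish within the visual scope
        moved = False
        if len(mates):
            center = mates.mean(axis=0)        # swarming: move to center
            if food(center) > food(fish[i]):
                gap = center - fish[i]
                fish[i] += step * gap / (np.linalg.norm(gap) + 1e-9)
                moved = True
            else:
                best = mates[np.argmax([food(m) for m in mates])]
                if food(best) > food(fish[i]): # following: chase best mate
                    gap = best - fish[i]
                    fish[i] += step * gap / (np.linalg.norm(gap) + 1e-9)
                    moved = True
        if not moved:                          # preying: random local probe
            trial = fish[i] + visual * rng.uniform(-1, 1, dim)
            if food(trial) > food(fish[i]):
                fish[i] = trial

print("best found:", fish[np.argmax([food(f) for f in fish])])
```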

4.4. Harris Hawks Optimizer

The Harris hawks optimizer (HHO) was developed by Heidari [127], and it is based on the natural hunting strategies of Harris hawks. This algorithm uses different equations to update the positions of hawks in a search space, emulating the different hunting techniques these birds use to capture prey. The following equation is used to provide exploratory behavior for HHO [127]:
$$X(t+1) = \begin{cases} X_s(t) - a_1 \left| X_s(t) - 2 a_2 X(t) \right|, & b \geq 0.5, \\ \big(X_p(t) - X_{mp}(t)\big) - a_3 \big(L + a_4 (U - L)\big), & b < 0.5, \end{cases}$$
where $X(t)$ denotes the position at the $t$th iteration, $X_p(t)$ is the prey position, $a_1$, $a_2$, $a_3$, $a_4$, and $b$ are random numbers in $[0, 1]$, $L$ and $U$ are the parameters' lower and upper bounds, respectively, $X_s(t)$ indicates a hawk randomly selected from the current population, and $X_{mp}(t) = \frac{1}{N}\sum_{i=1}^{N} X_i(t)$ represents the mean position of the current population of hawks. The exploitation phase employs two distinct besieging strategies, soft and hard:
$$
\bar{X}(t) = X_p(t) - X(t), \qquad
X(t+1) =
\begin{cases}
\bar{X}(t) - a_5 \left| C X_p(t) - X(t) \right| & \text{(soft besiege)},\\[2pt]
X_p(t) - a_5 \left| \bar{X}(t) \right| & \text{(hard besiege)},
\end{cases}
$$
where $\bar{X}(t)$ represents the distance between the current solution and the prey at the $t$th iteration, and $a_5$ and $C$ are random variables.
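The update rules above translate almost line by line into code. The sketch below applies one HHO iteration to a population matrix X (one hawk per row); gating exploration versus exploitation with a coin flip, and drawing the jump strength C uniformly from (0, 2), are simplifications of the escape-energy schedule used in [127].

```python
import numpy as np

def hho_step(X, X_p, L, U, rng=np.random.default_rng()):
    """One HHO position update for a population X (one hawk per row)."""
    n, _ = X.shape
    X_mp = X.mean(axis=0)                        # mean position of the hawks
    X_new = np.empty_like(X)
    for i in range(n):
        a1, a2, a3, a4, a5, b = rng.random(6)
        if rng.random() < 0.5:                   # exploration phase
            if b >= 0.5:
                X_s = X[rng.integers(n)]         # randomly selected hawk
                X_new[i] = X_s - a1 * np.abs(X_s - 2 * a2 * X[i])
            else:
                X_new[i] = (X_p - X_mp) - a3 * (L + a4 * (U - L))
        else:                                    # exploitation: besiege the prey
            X_bar = X_p - X[i]                   # distance to the prey
            C = 2 * rng.random()                 # jump strength, assumed in (0, 2)
            if rng.random() < 0.5:
                X_new[i] = X_bar - a5 * np.abs(C * X_p - X[i])   # soft besiege
            else:
                X_new[i] = X_p - a5 * np.abs(X_bar)              # hard besiege
    return np.clip(X_new, L, U)
```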

4.5. The Rest of the Swarm Intelligence Algorithms

Swarm intelligence algorithms are predominantly derived from simulations of natural ecosystems, especially those of insects and animals. They optimize by simulating the foraging or information-exchange processes within these ecosystems, and most employ directional iterative methods based on probabilistic search strategies. As swarm intelligence has progressed, optimization algorithms have moved beyond merely incorporating the traits of biological populations: some studies have introduced human biological characteristics into swarm intelligence algorithms, such as the human immune system [128]. Recent innovations in swarm intelligence algorithms, as well as enhancements to classical ones, mainly aim at reducing parameters, streamlining processes, accelerating computation, and improving search capability, particularly for high-dimensional and multi-objective optimization problems [129]. The application domains of swarm intelligence algorithms keep expanding, and their usage in turn guides further development directions.
In addition to the well-known swarm intelligence algorithms, many extended algorithms have also been extensively discussed and utilized. The “cuckoo search” algorithm, inspired by the brood parasitic behavior of some cuckoo species and the Lévy flight behavior of birds and fruit flies, addresses optimization problems [130]. Pigeon-inspired optimization, used for air–robot path planning, employs a map and compass operator model based on the magnetic field and the sun, along with a landmark operator model grounded in physical landmarks [131]. The bat algorithm, which leverages the echolocation behavior of bats, is effective for solving single- and multi-objective problems within continuous solution spaces [132]. The gray wolf optimizer, inspired by the social hierarchy and hunting strategies of gray wolves, simulates their natural leadership dynamics [133]. An artificial immune system (ARTIS) encompasses attributes of natural immune systems such as diversity, distributed computation, error tolerance, dynamic learning and adaptation, and self-monitoring, functioning as a robust framework for distributed adaptive systems across various domains [134]. The fruit fly optimization algorithm (FOA), based on the foraging behavior of fruit flies, exploits their keen sense of smell and vision to locate superior food sources and gathering spots [135]. Glow-worm swarm optimization, inspired by glow-worm behavior, is utilized for computing multiple optima of multimodal functions simultaneously [136]. Lastly, invasive weed optimization, inspired by the colonization strategies of weeds, is a numerical stochastic optimization algorithm that mimics the robustness, adaptability, and randomness of invasive weeds using a simple yet effective optimization approach [137].
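The Lévy flights mentioned for cuckoo search are a recurring ingredient in several of these algorithms. A common way to draw such heavy-tailed steps is Mantegna's algorithm, sketched below; the exponent beta = 1.5 and the step scale alpha are conventional but assumed values.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, alpha=0.01, rng=np.random.default_rng()):
    """One heavy-tailed Lévy-flight step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)     # numerator: wider Gaussian
    v = rng.normal(0.0, 1.0, dim)       # denominator: standard Gaussian
    return alpha * u / np.abs(v) ** (1 / beta)
```

In cuckoo search, a new nest is then typically generated as x_new = x + levy_step(dim) * (x - x_best), so most moves stay small while occasional long jumps help escape local optima.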

4.6. Gene Feature Extraction

Gene feature extraction refers to the process of extracting meaningful information from genomic data, focusing in particular on the characteristics or attributes of genes. It plays a key role in understanding the biological functions of genes, deciphering the genetic mechanisms underlying diseases, and developing computational models for predictive analysis. First, researchers preprocess the raw data to ensure data quality. Second, relevant features are selected; for example, the gene HER2 is commonly associated with breast cancer and stomach cancer. Then, meaningful features are extracted, such as gene expression levels, sequence features, and functional annotations. Finally, based on the extracted features, downstream tasks such as classifying cancer genes can be carried out. Intelligent algorithms are closely related to gene feature extraction, and the two are often combined in genomics and bioinformatics research to better understand the function and biological significance of genes.
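The four-step pipeline just described maps naturally onto standard tooling. The scikit-learn sketch below is one minimal instantiation for a gene expression matrix X (samples × genes) with labels y; the particular filters and classifier are illustrative assumptions, not the methods of the works cited in the next paragraph.

```python
from sklearn.feature_selection import SelectKBest, VarianceThreshold, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def gene_feature_pipeline(X, y, k=50):
    """Cross-validated accuracy of a simple gene feature extraction pipeline."""
    pipe = make_pipeline(
        VarianceThreshold(1e-3),      # step 1: drop near-constant genes (data quality)
        StandardScaler(),             # step 1: normalize expression levels
        SelectKBest(f_classif, k=k),  # steps 2-3: keep the k most class-relevant genes
        SVC(kernel="linear"),         # step 4: classify, e.g., tumor vs. normal
    )
    # assumes k does not exceed the number of genes retained by the filter
    return cross_val_score(pipe, X, y, cv=5).mean()
```

Wrapper methods such as the Harris hawks approaches discussed below replace the fixed SelectKBest filter with a search over gene subsets driven by classification accuracy.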
Because some preprocessing steps may introduce noise or information loss, ref. [138] proposed a new preprocessing method, named mean–standard deviation, to mitigate this problem. In addition, to handle overlapping biclusters, Chu [139] introduced a new biclustering algorithm called weight-adjacency-difference-matrix binary biclustering; GO enrichment analysis on real datasets showed that the method yields biologically meaningful results. Liu [140] applied the Harris hawks optimization algorithm to accurately select a subset of cancer-related feature genes. On eight published microarray gene expression datasets, the method achieved 100% classification accuracy for gastric cancer, acute lymphoblastic leukemia, and ovarian cancer and an average classification accuracy of 95.33% across a variety of other cancers. Subsequently, ref. [141] improved the Harris hawks optimization algorithm, and the experimental results showed a classification accuracy greater than 96.128% for tumors of the colon, nervous system, and lungs, and 100% for the rest. Compared with seven other algorithms, the experimental results demonstrated the superiority of this algorithm in classification accuracy, fitness value, and AUC for feature selection on gene expression data. Inspired by the predatory behavior of humpback whales, ref. [142] employed a whale optimization algorithm to select the optimal feature gene subset; compared with other advanced feature selection algorithms, the experimental results showed significant advantages across various evaluation indicators.

4.7. Intelligence Communication

Intelligence communication is an emerging and active field of communication engineering that uses intelligence algorithms to improve the quality of communication. A communication system can be described simply in three parts: (1) the transmitter, which converts the information to be sent into a signal suitable for transmission and feeds it into the transmission medium; (2) the transmission medium, which carries the signal; and (3) the receiver, which picks the signal up from the transmission medium and converts it back into an intelligible form of information.
Signal-to-noise ratio estimation is an important task for improving communication quality. Based on deep learning, Xie [143] proposed a new signal-to-noise ratio estimation algorithm using constellation diagrams. The algorithm converts the received signal into a constellation diagram and constructs three deep neural networks to perform the estimation. The results showed that this algorithm outperforms traditional algorithms, especially in low-signal-to-noise-ratio situations.
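As an illustration of the preprocessing step in such approaches, the fragment below rasterizes received complex baseband symbols into a constellation-diagram image that a deep network could take as input; the grid resolution and axis limits are assumptions, and this is not the exact front end of [143].

```python
import numpy as np

def constellation_image(symbols, bins=64, lim=2.0):
    """2D histogram of complex symbols as a normalized single-channel image."""
    img, _, _ = np.histogram2d(symbols.real, symbols.imag,
                               bins=bins, range=[[-lim, lim], [-lim, lim]])
    return img / img.max()
```

At low signal-to-noise ratios, the point clouds of neighboring constellation points blur into each other, which is exactly the regime where such learned estimators are reported to help.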
Cognitive radio is an intelligent wireless communication technology aimed at enhancing the efficiency and reliability of radio spectrum utilization. Traditional wireless communication systems typically allocate fixed spectrum resources to specific users or services, but this allocation method can lead to spectrum waste and congestion [144]. Cognitive radio dynamically senses, analyzes, and adapts to the radio spectrum environment to achieve intelligent management and optimization of spectrum resources. The false alarm probability and the missed detection probability are usually used to evaluate the performance of spectrum sensing, so a key problem is how to set the detection threshold that trades off these two probabilities. Many studies have addressed threshold selection and cognitive radio networks [145,146]. Peng [147] proposed two new adaptive threshold algorithms that exploit the Markovian behavior of a primary user to reduce the Bayesian cost of false alarms and missed detections in spectrum sensing; both achieved higher detection accuracy for TV white spaces. To shorten the detection time of a cognitive radio network, ref. [148] proposed a fast cooperative energy detection algorithm under accuracy constraints, and the results showed that it can achieve minimal detection time by appropriately choosing the number of secondary users. Differing from the improvements above, Liu [149] combined butterfly and Clos networks into a new topology to enhance network throughput.
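To illustrate the threshold trade-off, the sketch below uses the common Gaussian approximation of a simple energy detector: a target false alarm probability fixes the threshold, which in turn determines the missed detection probability. The noise model (real Gaussian samples, known variance) and the SNR treatment are simplifying assumptions, not the algorithms of [147,148].

```python
import numpy as np
from scipy.stats import norm

def energy_threshold(p_fa, n_samples, noise_var=1.0):
    """Threshold on normalized energy meeting a target false alarm probability."""
    # Under H0 (noise only), the statistic is approx. N(noise_var, 2*noise_var^2/N).
    return noise_var * (1.0 + norm.isf(p_fa) * np.sqrt(2.0 / n_samples))

def missed_detection_prob(thresh, n_samples, noise_var=1.0, snr=0.1):
    """Missed detection probability for a Gaussian signal at the given SNR."""
    mean_h1 = noise_var * (1.0 + snr)               # mean energy under H1
    std_h1 = mean_h1 * np.sqrt(2.0 / n_samples)     # approximate std under H1
    return norm.cdf((thresh - mean_h1) / std_h1)
```

For example, with n_samples = 1000 and p_fa = 0.05, lowering the threshold reduces missed detections but inflates false alarms, which is precisely the Bayesian cost balanced in [147].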

4.8. Image Processing

Image processing is a field of study within computer science and engineering that focuses on analyzing, enhancing, and manipulating digital images. With the development of computer science, more and more intelligence algorithms are being used in image processing [150,151,152,153]. The worldwide outbreak of COVID-19 placed a heavy burden on clinicians, and Sun [154] proposed an adaptive-feature-selection deep-forest algorithm for classifying chest computed tomography images of COVID-19 patients; evaluated on a dataset of 1495 COVID-19 patients and 1027 community-acquired pneumonia patients, the algorithm exhibited superior performance. For texture feature representation, Wang [138] investigated multifractal theory on a set of test textures, verifying the robustness of the proposed approach in three respects: noise tolerance, degree of image blurring, and compression ratio. From a coding perspective, Li [155] proposed an early merge-mode decision approach for 3D high-efficiency video coding that effectively decreases the computational cost. In terms of image denoising, ref. [156] proposed an approach for restoring images based on mean curvature energy minimization, which retains the valid information while filtering out noise.

4.9. Other Applications

Recently, significant research has been directed towards applying intelligence algorithms to clustering challenges in wireless IoT networks. For instance, the study in [157] proposed a clustering framework for wireless IoT sensor networks that aims to enhance energy efficiency and balance energy consumption using a fitness function optimized via a chicken swarm optimization algorithm. An improved ACO-based UAV path-planning architecture was proposed in [158], which considered obstacles within a controlled area monitored by several radars. The study in [159] proposed a set-based PSO algorithm with adaptive weights to develop an optimal path-planning scheme for a UAV surveillance system, with the objective of minimizing energy consumption and enhancing the disturbance rejection response of UAVs. Ref. [160] introduced a novel ACO-based fuzzy identification algorithm designed to determine the initial consequent parameters and the initial membership function matrix. Zhou [161] presented a new ACO algorithm for continuous domains that effectively minimizes total fuel costs by appropriately allocating power demands to generator modules while adhering to various physical and operational constraints. The work in [162] applied the ACO algorithm to develop a new SVM model, exploring a finite subset of possible values to identify parameters that minimize the generalization error. In the context of flux-cored arc welding, a fusion welding process in which a tubular wire electrode is continuously fed into the weld area, the welding input parameters are crucial in determining the quality of the weld joint. Table 1 lists some applications of bio-inspired intelligence algorithms and the corresponding papers, and Figure 4 depicts when each intelligent algorithm was introduced together with some of its main applications.

4.10. Discussion and Future Direction

In summary, intelligence algorithms offer versatile solutions across diverse domains, ranging from gene feature extraction to image processing. Whether bio-inspired or not, they can tackle complex optimization problems effectively and drive advances in many fields of research and application. Nevertheless, challenges remain, such as local optima traps and overfitting, which have encouraged researchers to explore new optimization strategies. This paper offers some suggestions for future directions.
(1)
Multi-objective optimization: Extending intelligence algorithms to multi-objective optimization domains remains an active topic.
(2)
Adaptive and self-learning techniques: Developing adaptive and self-learning algorithms that automatically adjust their parameters and strategies in response to changes in the problem and the environment can improve robustness and adaptability.
(3)
Hybridization and fusion algorithms: Integrating intelligence algorithms with other optimization methods, such as deep learning and reinforcement learning, can leverage the strengths of the various algorithms and enhance solution efficiency and accuracy.

5. Conclusions

In this paper, we have provided a comprehensive survey of intelligence algorithms. First, we presented an in-depth overview of ZNNs, comprehensively introducing their origin, structure, operation mechanism, model variants, and applications. A novel classification of ZNNs was proposed, categorizing models into accelerated-convergence ZNNs, noise-tolerance ZNNs, and discrete-time ZNNs, each offering unique advantages in solving time-varying optimization problems; these three types can also be integrated to solve a variety of time-varying optimization problems. Concerning applications, ZNN models have been applied in many different areas, and some future directions for ZNNs were suggested from different perspectives. We then analyzed and outlined other bio-inspired intelligence algorithms, such as GA and PSO, introducing their origins, basic principles, design procedures, and applications. For the remaining intelligence algorithms, real applications were introduced to emphasize the practical applicability of intelligent algorithms. In summary, this paper has offered an informative guide for researchers interested in tackling optimization problems using intelligence algorithms.

Author Contributions

Conceptualization, H.L., B.L. and S.L.; methodology, H.L. and B.L.; validation, J.L. and S.L.; formal analysis, J.L. and B.L.; investigation, H.L. and S.L.; writing—original draft preparation, H.L.; writing—review and editing, H.L., J.L. and S.L.; supervision, B.L., H.L. and S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62066015 and Grant 62006095.

Data Availability Statement

Some or all of the data and models that support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sun, Q.; Wu, X. A deep learning-based approach for emotional analysis of sports dance. PeerJ Comput. Sci. 2023, 9, e1441. [Google Scholar] [CrossRef] [PubMed]
  2. Cao, X.; Peng, C.; Zheng, Y.; Li, S.; Ha, T.T.; Shutyaev, V.; Katsikis, V.; Stanimirovic, P. Neural Networks for Portfolio Analysis in High-Frequency Trading. Available online: https://ieeexplore.ieee.org/abstract/document/10250899 (accessed on 13 September 2023).
  3. Zhang, Y.; Li, S.; Weng, J.; Liao, B. GNN Model for Time-Varying Matrix Inversion With Robust Finite-Time Convergence. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 559–569. [Google Scholar] [CrossRef] [PubMed]
  4. Zhang, Y.; Li, Z.; Yi, C.; Chen, K. Zhang neural network versus gradient neural network for online time-varying quadratic function minimization. In Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence, Proceedings of the 4th International Conference on Intelligent Computing, ICIC 2008, Shanghai, China, 15–18 September 2008; Proceedings 4; Springer: Berlin/Heidelberg, Germany, 2008; pp. 807–814. [Google Scholar]
  5. Xu, H.; Li, R.; Pan, C.; Li, K. Minimizing energy consumption with reliability goal on heterogeneous embedded systems. J. Parallel Distrib. Comput. 2019, 127, 44–57. [Google Scholar] [CrossRef]
  6. Peng, C.; Liao, B. Heavy-head sampling for fast imitation learning of machine learning based combinatorial auction solver. Neural Process. Lett. 2023, 55, 631–644. [Google Scholar] [CrossRef]
  7. Xu, H.; Li, R.; Zeng, L.; Li, K.; Pan, C. Energy-efficient scheduling with reliability guarantee in embedded real-time systems. Sustain. Comput. Inform. Syst. 2018, 18, 137–148. [Google Scholar] [CrossRef]
  8. Qin, F.; Zain, A.M.; Zhou, K.Q. Harmony search algorithm and related variants: A systematic review. Swarm Evol. Comput. 2022, 74, 101126. [Google Scholar] [CrossRef]
  9. Wu, W.; Tian, Y.; Jin, T. A label based ant colony algorithm for heterogeneous vehicle routing with mixed backhaul. Appl. Soft Comput. 2016, 47, 224–234. [Google Scholar] [CrossRef]
  10. Sindhuja, P.; Ramamoorthy, P.; Kumar, M.S. A brief survey on nature inspired algorithms: Clever algorithms for optimization. Asian J. Comput. Sci. Technol. 2018, 7, 27–32. [Google Scholar] [CrossRef]
  11. Sakunthala, S.; Kiranmayi, R.; Mandadi, P.N. A review on artificial intelligence techniques in electrical drives: Neural networks, fuzzy logic, and genetic algorithm. In Proceedings of the 2017 International Conference on Smart Technologies for Smart Nation (SmartTechCon), Bengaluru, India, 17–19 August 2017; pp. 11–16. [Google Scholar]
  12. Lachhwani, K. Application of neural network models for mathematical programming problems: A state of art review. Arch. Comput. Methods Eng. 2020, 27, 171–182. [Google Scholar] [CrossRef]
  13. Wang, T.; Zhang, Z.; Huang, Y.; Liao, B.; Li, S. Applications of Zeroing neural networks: A survey. IEEE Access 2024, 12, 51346–51363. [Google Scholar] [CrossRef]
  14. Zheng, Y.J.; Chen, S.Y.; Lin, Y.; Wang, W.L. Bio-inspired optimization of sustainable energy systems: A review. Math. Probl. Eng. 2013, 2013, e354523. [Google Scholar] [CrossRef]
  15. Zajmi, L.; Ahmed, F.Y.; Jaharadak, A.A. Concepts, methods, and performances of particle swarm optimization, backpropagation, and neural networks. Appl. Comput. Intell. Soft Comput. 2018, 2018, e9547212. [Google Scholar] [CrossRef]
  16. Cosma, G.; Brown, D.; Archer, M.; Khan, M.; Pockley, A.G. A survey on computational intelligence approaches for predictive modeling in prostate cancer. Expert Syst. Appl. 2017, 70, 1–19. [Google Scholar] [CrossRef]
  17. Goel, L. An extensive review of computational intelligence-based optimization algorithms: Trends and applications. Soft Comput. 2020, 24, 16519–16549. [Google Scholar] [CrossRef]
  18. Ding, S.; Li, H.; Su, C.; Yu, J.; Jin, F. Evolutionary artificial neural networks: A review. Artif. Intell. Rev. 2013, 39, 251–260. [Google Scholar] [CrossRef]
  19. Abd Elaziz, M.; Dahou, A.; Abualigah, L.; Yu, L.; Alshinwan, M.; Khasawneh, A.M.; Lu, S. Advanced metaheuristic optimization techniques in applications of deep neural networks: A review. Neural Comput. Appl. 2021, 33, 14079–14099. [Google Scholar] [CrossRef]
  20. Alqushaibi, A.; Abdulkadir, S.J.; Rais, H.M.; Al-Tashi, Q. A review of weight optimization techniques in recurrent neural networks. In Proceedings of the 2020 International Conference on Computational Intelligence (ICCI), Bandar Seri Iskandar, Malaysia, 8–9 October 2020; pp. 196–201. [Google Scholar]
  21. Ganapathy, S.; Kulothungan, K.; Muthurajkumar, S.; Vijayalakshmi, M.; Yogesh, P.; Kannan, A. Intelligent feature selection and classification techniques for intrusion detection in networks: A survey. EURASIP J. Wirel. Commun. Netw. 2013, 2013, 271. [Google Scholar] [CrossRef]
  22. Vijh, S.; Gaurav, P.; Pandey, H.M. Hybrid bio-inspired algorithm and convolutional neural network for automatic lung tumor detection. Neural Comput. Appl. 2023, 35, 23711–23724. [Google Scholar] [CrossRef]
  23. Hassani, S.; Dackermann, U. A systematic review of optimization algorithms for structural health monitoring and optimal sensor placement. Sensors 2023, 23, 3293. [Google Scholar] [CrossRef]
  24. Gad, A.G. Particle swarm optimization algorithm and its applications: A systematic review. Arch. Comput. Methods Eng. 2022, 29, 2531–2561. [Google Scholar] [CrossRef]
  25. Hopfield, J.J. Neurons, Dynamics and Computation. Phys. Today 1994, 47, 40–46. [Google Scholar] [CrossRef]
  26. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef] [PubMed]
  27. Hopfield, J.J. Neurons with graded response have collective computational properties like those of two-state neurons. Proc. Natl. Acad. Sci. USA 1984, 81, 3088–3092. [Google Scholar] [CrossRef] [PubMed]
  28. Hopfield, J.; Tank, D. Neural Computation of Decisions in Optimization Problems. Biol. Cybern. 1985, 52, 141–152. [Google Scholar] [CrossRef] [PubMed]
  29. Liao, B.; Hua, C.; Cao, X.; Katsikis, V.N.; Li, S. Complex noise-resistant zeroing neural network for computing complex time-dependent Lyapunov equation. Mathematics 2022, 10, 2817. [Google Scholar] [CrossRef]
  30. Liao, B.; Han, L.; He, Y.; Cao, X.; Li, J. Prescribed-time convergent adaptive ZNN for time-varying matrix inversion under harmonic noise. Electronics 2022, 11, 1636. [Google Scholar] [CrossRef]
  31. Liao, B.; Huang, Z.; Cao, X.; Li, J. Adopting nonlinear activated beetle antennae search algorithm for fraud detection of public trading companies: A computational finance approach. Mathematics 2022, 10, 2160. [Google Scholar] [CrossRef]
  32. Liu, Z.; Wu, X. Structural Analysis of the Evolution Mechanism of Online Public Opinion and its Development Stages Based on Machine Learning and Social Network Analysis. Int. J. Comput. Intell. Syst. 2023, 16, 99. [Google Scholar] [CrossRef]
  33. Chen, S.; Zhou, C.; Li, J.; Peng, H. Asynchronous introspection theory: The underpinnings of phenomenal consciousness in temporal illusion. Minds Mach. 2017, 27, 315–330. [Google Scholar] [CrossRef]
  34. Luo, M.; Ke, W.; Cai, Z.; Liu, A.; Li, Y.; Cheang, C. Using Imbalanced Triangle Synthetic Data for Machine Learning Anomaly Detection. Comput. Mater. Contin. 2019, 58, 15–26. [Google Scholar] [CrossRef]
  35. Jin, L.; Zhang, Y. Continuous and discrete Zhang dynamics for real-time varying nonlinear optimization. Numer. Algorithms 2016, 73, 115–140. [Google Scholar] [CrossRef]
  36. Zhang, Z.; Zheng, L.; Weng, J.; Mao, Y.; Lu, W.; Xiao, L. A New Varying-Parameter Recurrent Neural-Network for Online Solution of Time-Varying Sylvester Equation. IEEE Trans. Cybern. 2018, 48, 3135–3148. [Google Scholar] [CrossRef] [PubMed]
  37. Xiao, L.; Liao, B. A convergence-accelerated Zhang neural network and its solution application to Lyapunov equation. Neurocomputing 2016, 193, 213–218. [Google Scholar] [CrossRef]
  38. Peng, C.; Ling, Y.; Wang, Y.; Yu, X.; Zhang, Y. Three new ZNN models with economical dimension and exponential convergence for real-time solution of moore-penrose pseudoinverse. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; pp. 2788–2793. [Google Scholar]
  39. Guo, D.; Peng, C.; Jin, L.; Ling, Y.; Zhang, Y. Different ZFs lead to different nets: Examples of Zhang generalized inverse. In Proceedings of the 2013 Chinese Automation Congress, Changsha, China, 7–8 November 2013; pp. 453–458. [Google Scholar]
  40. Lv, X.; Xiao, L.; Tan, Z.; Yang, Z.; Yuan, J. Improved gradient neural networks for solving Moore–Penrose inverse of full-rank matrix. Neural Process. Lett. 2019, 50, 1993–2005. [Google Scholar] [CrossRef]
  41. Xiao, L. A nonlinearly activated neural dynamics and its finite-time solution to time-varying nonlinear equation. Neurocomputing 2016, 173, 1983–1988. [Google Scholar] [CrossRef]
  42. Liu, M.; Liao, B.; Ding, L.; Xiao, L. Performance analyses of recurrent neural network models exploited for online time-varying nonlinear optimization. Comput. Sci. Inf. Syst. 2016, 13, 691–705. [Google Scholar] [CrossRef]
  43. Sun, Z.; Wang, G.; Jin, L.; Cheng, C.; Zhang, B.; Yu, J. Noise-suppressing zeroing neural network for online solving time-varying matrix square roots problems: A control-theoretic approach. Expert Syst. Appl. 2022, 192, 116272. [Google Scholar] [CrossRef]
  44. Jin, L.; Zhang, Y.; Li, S.; Zhang, Y. Modified ZNN for time-varying quadratic programming with inherent tolerance to noises and its application to kinematic redundancy resolution of robot manipulators. IEEE Trans. Ind. Electron. 2016, 63, 6978–6988. [Google Scholar] [CrossRef]
  45. Xiao, L.; Liao, B.; Li, S.; Chen, K. Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw. 2018, 98, 102–113. [Google Scholar] [CrossRef]
  46. Xiao, L.; Lu, R. Finite-time solution to nonlinear equation using recurrent neural dynamics with a specially-constructed activation function. Neurocomputing 2015, 151, 246–251. [Google Scholar] [CrossRef]
  47. Xiao, L. A nonlinearly-activated neurodynamic model and its finite-time solution to equality-constrained quadratic optimization with nonstationary coefficients. Appl. Soft Comput. 2016, 40, 252–259. [Google Scholar] [CrossRef]
  48. Liao, B.; Zhang, Y. From different ZFs to different ZNN models accelerated via Li activation functions to finite-time convergence for time-varying matrix pseudoinversion. Neurocomputing 2014, 133, 512–522. [Google Scholar] [CrossRef]
  49. Lv, X.; Xiao, L.; Tan, Z. Improved Zhang neural network with finite-time convergence for time-varying linear system of equations solving. Inf. Process. Lett. 2019, 147, 88–93. [Google Scholar] [CrossRef]
  50. Xiao, L.; Tan, H.; Jia, L.; Dai, J.; Zhang, Y. New error function designs for finite-time ZNN models with application to dynamic matrix inversion. Neurocomputing 2020, 402, 395–408. [Google Scholar] [CrossRef]
  51. Lv, X.; Xiao, L.; Tan, Z.; Yang, Z. Wsbp function activated Zhang dynamic with finite-time convergence applied to Lyapunov equation. Neurocomputing 2018, 314, 310–315. [Google Scholar] [CrossRef]
  52. Xiao, L. A new design formula exploited for accelerating Zhang neural network and its application to time-varying matrix inversion. Theor. Comput. Sci. 2016, 647, 50–58. [Google Scholar] [CrossRef]
  53. Jin, L.; Li, S.; Liao, B.; Zhang, Z. Zeroing neural networks: A survey. Neurocomputing 2017, 267, 597–604. [Google Scholar] [CrossRef]
  54. Hua, C.; Cao, X.; Liao, B.; Li, S. Advances on intelligent algorithms for scientific computing: An overview. Front. Neurorobot. 2023, 17, 1190977. [Google Scholar] [CrossRef] [PubMed]
  55. Cordero, A.; Soleymani, F.; Torregrosa, J.R.; Ullah, M.Z. Numerically stable improved Chebyshev–Halley type schemes for matrix sign function. J. Comput. Appl. Math. 2017, 318, 189–198. [Google Scholar] [CrossRef]
  56. Soleymani, F.; Tohidi, E.; Shateyi, S.; Khaksar Haghani, F. Some Matrix Iterations for Computing Matrix Sign Function. J. Appl. Math. 2014, 2014, 425654. [Google Scholar] [CrossRef]
  57. Li, W.; Liao, B.; Xiao, L.; Lu, R. A recurrent neural network with predefined-time convergence and improved noise tolerance for dynamic matrix square root finding. Neurocomputing 2019, 337, 262–273. [Google Scholar] [CrossRef]
  58. Xiao, L.; Zhang, Y.; Dai, J.; Chen, K.; Yang, S.; Li, W.; Liao, B.; Ding, L.; Li, J. A new noise-tolerant and predefined-time ZNN model for time-dependent matrix inversion. Neural Netw. 2019, 117, 124–134. [Google Scholar] [CrossRef]
  59. Xiao, L.; Li, L.; Tao, J.; Li, W. A predefined-time and anti-noise varying-parameter ZNN model for solving time-varying complex Stein equations. Neurocomputing 2023, 526, 158–168. [Google Scholar] [CrossRef]
  60. Jim, K.C.; Giles, C.; Horne, W. An analysis of noise in recurrent neural networks: Convergence and generalization. IEEE Trans. Neural Netw. 1996, 7, 1424–1438. [Google Scholar]
  61. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef] [PubMed]
  62. Kumar, A.; Sodhi, S.S. Comparative analysis of gaussian filter, median filter and denoise autoenocoder. In Proceedings of the 2020 7th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 12–14 March 2020; pp. 45–51. [Google Scholar]
  63. Liao, B.; Zhang, Y. Different Complex ZFs Leading to Different Complex ZNN Models for Time-Varying Complex Generalized Inverse Matrices. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1621–1631. [Google Scholar] [CrossRef]
  64. Li, W.; Xiao, L.; Liao, B. A Finite-Time Convergent and Noise-Rejection Recurrent Neural Network and Its Discretization for Dynamic Nonlinear Equations Solving. IEEE Trans. Cybern. 2020, 50, 3195–3207. [Google Scholar] [CrossRef]
  65. Liao, B.; Xiang, Q.; Li, S. Bounded Z-type neurodynamics with limited-time convergence and noise tolerance for calculating time-dependent Lyapunov equation. Neurocomputing 2019, 325, 234–241. [Google Scholar] [CrossRef]
  66. Liao, B.; Wang, Y.; Li, J.; Guo, D.; He, Y. Harmonic Noise-Tolerant ZNN for Dynamic Matrix Pseudoinversion and Its Application to Robot Manipulator. Front. Neurorobot. 2022, 16, 928636. [Google Scholar] [CrossRef]
  67. Jin, L.; Zhang, Y.; Li, S. Integration-Enhanced Zhang Neural Network for Real-Time-Varying Matrix Inversion in the Presence of Various Kinds of Noises. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2615–2627. [Google Scholar] [CrossRef]
  68. Stanimirović, P.S.; Katsikis, V.N.; Li, S. Integration enhanced and noise tolerant ZNN for computing various expressions involving outer inverses. Neurocomputing 2019, 329, 129–143. [Google Scholar] [CrossRef]
  69. Liao, B.; Han, L.; Cao, X.; Li, S.; Li, J. Double integral-enhanced Zeroing neural network with linear noise rejection for time-varying matrix inverse. CAAI Trans. Intell. Technol. 2024, 9, 197–210. [Google Scholar] [CrossRef]
  70. Xiao, L.; He, Y.; Dai, J.; Liu, X.; Liao, B.; Tan, H. A Variable-Parameter Noise-Tolerant Zeroing Neural Network for Time-Variant Matrix Inversion With Guaranteed Robustness. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 1535–1545. [Google Scholar] [CrossRef] [PubMed]
  71. Xiang, Q.; Liao, B.; Xiao, L.; Lin, L.; Li, S. Discrete-time noise-tolerant Zhang neural network for dynamic matrix pseudoinversion. Soft Comput. 2019, 23, 755–766. [Google Scholar] [CrossRef]
  72. Zhang, Y.; Cai, B.; Liang, M.; Ma, W. On the variable step-size of discrete-time Zhang neural network and Newton iteration for constant matrix inversion. In Proceedings of the 2008 Second International Symposium on Intelligent Information Technology Application, Shanghai, China, 20–22 December 2008; Volume 1, pp. 34–38. [Google Scholar]
  73. Zhang, Y.; Ma, W.; Cai, B. From Zhang neural network to Newton iteration for matrix inversion. IEEE Trans. Circuits Syst. I Regul. Pap. 2008, 56, 1405–1415. [Google Scholar] [CrossRef]
  74. Mao, M.; Li, J.; Jin, L.; Li, S.; Zhang, Y. Enhanced discrete-time Zhang neural network for time-variant matrix inversion in the presence of bias noises. Neurocomputing 2016, 207, 220–230. [Google Scholar] [CrossRef]
  75. Zhang, Y.; Jin, L.; Guo, D.; Yin, Y.; Chou, Y. Taylor-type 1-step-ahead numerical differentiation rule for first-order derivative approximation and ZNN discretization. J. Comput. Appl. Math. 2015, 273, 29–40. [Google Scholar] [CrossRef]
  76. Guo, D.; Zhang, Y. Zhang neural network, Getz–Marsden dynamic system, and discrete-time algorithms for time-varying matrix inversion with application to robots’ kinematic control. Neurocomputing 2012, 97, 22–32. [Google Scholar] [CrossRef]
  77. Liao, B.; Zhang, Y.; Jin, L. Taylor O(h3) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 225–237. [Google Scholar] [CrossRef]
  78. Guo, D.; Nie, Z.; Yan, L. Novel discrete-time Zhang neural network for time-varying matrix inversion. IEEE Trans. Syst. Man Cybern. Syst. 2017, 47, 2301–2310. [Google Scholar] [CrossRef]
  79. Zhang, J. Analysis and Construction of Software Engineering OBE Talent Training System Structure Based on Big Data. Secur. Commun. Netw. 2022, 2022, e3208318. [Google Scholar]
  80. Jin, J. Low power current-mode voltage controlled oscillator for 2.4GHz wireless applications. Comput. Electr. Eng. 2014, 40, 92–99. [Google Scholar] [CrossRef]
  81. Tan, W.; Huang, W.; Yang, X.; Shi, Z.; Liu, W.; Fan, L. Multiuser precoding scheme and achievable rate analysis for massive MIMO system. EURASIP J. Wirel. Commun. Netw. 2018, 2018, 210. [Google Scholar] [CrossRef]
  82. Yang, X.; Lei, K.; Peng, S.; Cao, X.; Gao, X. Analytical Expressions for the Probability of False-Alarm and Decision Threshold of Hadamard Ratio Detector in Non-Asymptotic Scenarios. IEEE Commun. Lett. 2018, 22, 1018–1021. [Google Scholar] [CrossRef]
  83. Dai, Z.; Guo, X. Investigation of E-Commerce Security and Data Platform Based on the Era of Big Data of the Internet of Things. Mob. Inf. Syst. 2022, 2022, 3023298. [Google Scholar] [CrossRef]
  84. Lu, J.; Li, W.; Sun, J.; Xiao, R.; Liao, B. Secure and Real-Time Traceable Data Sharing in Cloud-Assisted IoT. IEEE Internet Things J. 2024, 11, 6521–6536. [Google Scholar] [CrossRef]
  85. Xiao, L.; Li, K.; Tan, Z.; Zhang, Z.; Liao, B.; Chen, K.; Jin, L.; Li, S. Nonlinear gradient neural network for solving system of linear equations. Inf. Process. Lett. 2019, 142, 35–40. [Google Scholar] [CrossRef]
  86. Xiao, L.; Yi, Q.; Dai, J.; Li, K.; Hu, Z. Design and analysis of new complex zeroing neural network for a set of dynamic complex linear equations. Neurocomputing 2019, 363, 171–181. [Google Scholar] [CrossRef]
  87. Lu, H.; Jin, L.; Luo, X.; Liao, B.; Guo, D.; Xiao, L. RNN for Solving Perturbed Time-Varying Underdetermined Linear System With Double Bound Limits on Residual Errors and State Variables. IEEE Trans. Ind. Inform. 2019, 15, 5931–5942. [Google Scholar] [CrossRef]
  88. Xiao, L.; Jia, L.; Zhang, Y.; Hu, Z.; Dai, J. Finite-Time Convergence and Robustness Analysis of Two Nonlinear Activated ZNN Models for Time-Varying Linear Matrix Equations. IEEE Access 2019, 7, 135133–135144. [Google Scholar] [CrossRef]
  89. Zhang, Z.; Deng, X.; Qu, X.; Liao, B.; Kong, L.D.; Li, L. A Varying-Gain Recurrent Neural Network and Its Application to Solving Online Time-Varying Matrix Equation. IEEE Access 2018, 6, 77940–77952. [Google Scholar] [CrossRef]
  90. Li, S.; Li, Y. Nonlinearly activated neural network for solving time-varying complex Sylvester equation. IEEE Trans. Cybern. 2013, 44, 1397–1407. [Google Scholar] [CrossRef] [PubMed]
  91. Liao, S.; Liu, J.; Xiao, X.; Fu, D.; Wang, G.; Jin, L. Modified gradient neural networks for solving the time-varying Sylvester equation with adaptive coefficients and elimination of matrix inversion. Neurocomputing 2020, 379, 1–11. [Google Scholar] [CrossRef]
  92. Xiao, L. A finite-time convergent neural dynamics for online solution of time-varying linear complex matrix equation. Neurocomputing 2015, 167, 254–259. [Google Scholar] [CrossRef]
  93. Xiao, L.; Zhang, Y.; Li, K.; Liao, B.; Tan, Z. A novel recurrent neural network and its finite-time solution to time-varying complex matrix inversion. Neurocomputing 2019, 331, 483–492. [Google Scholar] [CrossRef]
  94. Long, C.; Zhang, G.; Zeng, Z.; Hu, J. Finite-time stabilization of complex-valued neural networks with proportional delays and inertial terms: A non-separation approach. Neural Netw. 2022, 148, 86–95. [Google Scholar] [CrossRef]
  95. Ding, L.; Xiao, L.; Liao, B.; Lu, R.; Peng, H. An improved recurrent neural network for complex-valued systems of linear equation and its application to robotic motion tracking. Front. Neurorobot. 2017, 11, 45. [Google Scholar] [CrossRef] [PubMed]
  96. Xiao, L.; Dai, J.; Lu, R.; Li, S.; Li, J.; Wang, S. Design and Comprehensive Analysis of a Noise-Tolerant ZNN Model With Limited-Time Convergence for Time-Dependent Nonlinear Minimization. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 5339–5348. [Google Scholar] [CrossRef]
  97. Zhang, Z.; Zheng, L.; Li, L.; Deng, X.; Xiao, L.; Huang, G. A new finite-time varying-parameter convergent-differential neural-network for solving nonlinear and nonconvex optimization problems. Neurocomputing 2018, 319, 74–83. [Google Scholar] [CrossRef]
  98. Xiao, L.; He, Y.; Wang, Y.; Dai, J.; Wang, R.; Tang, W. A Segmented Variable-Parameter ZNN for Dynamic Quadratic Minimization With Improved Convergence and Robustness. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 2413–2424. [Google Scholar] [CrossRef]
  99. Xiao, L.; Li, S.; Yang, J.; Zhang, Z. A new recurrent neural network with noise-tolerance and finite-time convergence for dynamic quadratic minimization. Neurocomputing 2018, 285, 125–132. [Google Scholar] [CrossRef]
  100. Xiao, L.; Li, K.; Duan, M. Computing Time-Varying Quadratic Optimization With Finite-Time Convergence and Noise Tolerance: A Unified Framework for Zeroing Neural Network. IEEE Trans. Neural Netw. Learn. Syst. 2019, 30, 3360–3369. [Google Scholar] [CrossRef] [PubMed]
  101. Zhang, Y.; Li, S.; Kadry, S.; Liao, B. Recurrent Neural Network for Kinematic Control of Redundant Manipulators With Periodic Input Disturbance and Physical Constraints. IEEE Trans. Cybern. 2019, 49, 4194–4205. [Google Scholar] [CrossRef] [PubMed]
  102. Li, X.; Xu, Z.; Su, Z.; Wang, H.; Li, S. Distance- and Velocity-Based Simultaneous Obstacle Avoidance and Target Tracking for Multiple Wheeled Mobile Robots. IEEE Trans. Intell. Transp. Syst. 2024, 25, 1736–1748. [Google Scholar] [CrossRef]
  103. Xiao, L.; Zhang, Y.; Liao, B.; Zhang, Z.; Ding, L.; Jin, L. A velocity-level bi-criteria optimization scheme for coordinated path tracking of dual robot manipulators using recurrent neural network. Front. Neurorobot. 2017, 11, 47. [Google Scholar] [CrossRef] [PubMed]
  104. Tang, Z.; Zhang, Y. Refined Self-Motion Scheme With Zero Initial Velocities and Time-Varying Physical Limits via Zhang Neurodynamics Equivalency. Front. Neurorobot. 2022, 16, 945346. [Google Scholar] [CrossRef] [PubMed]
  105. Jin, L.; Liao, B.; Liu, M.; Xiao, L.; Guo, D.; Yan, X. Different-Level Simultaneous Minimization Scheme for Fault Tolerance of Redundant Manipulator Aided with Discrete-Time Recurrent Neural Network. Front. Neurorobot. 2017, 11, 50. [Google Scholar] [CrossRef] [PubMed]
  106. Liao, B.; Hua, C.; Xu, Q.; Cao, X.; Li, S. Inter-robot management via neighboring robot sensing and measurement using a zeroing neural dynamics approach. Expert Syst. Appl. 2024, 244, 122938. [Google Scholar] [CrossRef]
  107. Zhang, C.X.; Zhou, K.Q.; Ye, S.Q.; Zain, A.M. An improved cuckoo search algorithm utilizing nonlinear inertia weight and differential evolution for function optimization problem. IEEE Access 2021, 9, 161352–161373. [Google Scholar] [CrossRef]
  108. Ye, S.Q.; Zhou, K.Q.; Zhang, C.X.; Mohd Zain, A.; Ou, Y. An improved multi-objective cuckoo search approach by exploring the balance between development and exploration. Electronics 2022, 11, 704. [Google Scholar] [CrossRef]
  109. Chen, Z.; Francis, A.; Li, S.; Liao, B.; Xiao, D.; Ha, T.T.; Li, J.; Ding, L.; Cao, X. Egret Swarm Optimization Algorithm: An Evolutionary Computation Approach for Model Free Optimization. Biomimetics 2022, 7, 144. [Google Scholar] [CrossRef] [PubMed]
  110. Khan, A.T.; Cao, X.; Liao, B.; Francis, A. Bio-inspired Machine Learning for Distributed Confidential Multi-Portfolio Selection Problem. Biomimetics 2022, 7, 124. [Google Scholar] [CrossRef] [PubMed]
  111. Khan, A.H.; Cao, X.; Xu, B.; Li, S. Beetle Antennae Search: Using Biomimetic Foraging Behaviour of Beetles to Fool a Well-Trained Neuro-Intelligent System. Biomimetics 2022, 7, 84. [Google Scholar] [CrossRef]
  112. Ou, Y.; Yin, P.; Mo, L. An Improved Grey Wolf Optimizer and Its Application in Robot Path Planning. Biomimetics 2023, 8, 84. [Google Scholar] [CrossRef] [PubMed]
  113. Holland, J.H. Genetic algorithms and the optimal allocation of trials. SIAM J. Comput. 1973, 2, 88–105. [Google Scholar] [CrossRef]
  114. Ou, Y.; Ye, S.Q.; Ding, L.; Zhou, K.Q.; Zain, A.M. Hybrid knowledge extraction framework using modified adaptive genetic algorithm and BPNN. IEEE Access 2022, 10, 72037–72050. [Google Scholar] [CrossRef]
  115. Li, H.C.; Zhou, K.Q.; Mo, L.P.; Zain, A.M.; Qin, F. Weighted fuzzy production rule extraction using modified harmony search algorithm and BP neural network framework. IEEE Access 2020, 8, 186620–186637. [Google Scholar] [CrossRef]
  116. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  117. Peng, Y.; Lei, K.; Yang, X.; Peng, J. Improved chaotic quantum-behaved particle swarm optimization algorithm for fuzzy neural network and its application. Math. Probl. Eng. 2020, 2020, 9464593. [Google Scholar]
  118. Yang, L.; Ding, B.; Liao, W.; Li, Y. Identification of preisach model parameters based on an improved particle swarm optimization method for piezoelectric actuators in micro-manufacturing stages. Micromachines 2022, 13, 698. [Google Scholar] [CrossRef]
  119. Bullnheimer, B.; Hartl, R.F.; Strauss, C. A New Rank Based Version of the Ant System–A Computational Study. Cent. Eur. J. Oper. Res. 1999, 7, 25–38. [Google Scholar]
  120. Hu, X.M.; Zhang, J.; Li, Y. Orthogonal methods based ant colony search for solving continuous optimization problems. J. Comput. Sci. Technol. 2008, 23, 2–18. [Google Scholar] [CrossRef]
  121. Gupta, D.K.; Arora, Y.; Singh, U.K.; Gupta, J.P. Recursive ant colony optimization for estimation of parameters of a function. In Proceedings of the 2012 1st International Conference on Recent Advances in Information Technology (RAIT), Dhanbad, India, 15–17 March 2012; pp. 448–454. [Google Scholar]
  122. Gao, S.; Wang, Y.; Cheng, J.; Inazumi, Y.; Tang, Z. Ant colony optimization with clustering for solving the dynamic location routing problem. Appl. Math. Comput. 2016, 285, 149–173. [Google Scholar] [CrossRef]
  123. Hemmatian, H.; Fereidoon, A.; Sadollah, A.; Bahreininejad, A. Optimization of laminate stacking sequence for minimizing weight and cost using elitist ant system optimization. Adv. Eng. Softw. 2013, 57, 8–18. [Google Scholar] [CrossRef]
  124. Li, X.L. An optimizing method based on autonomous animats: Fish-swarm algorithm. Syst. Eng. Theory Pract. 2002, 22, 32–38. [Google Scholar]
  125. Shen, W.; Guo, X.; Wu, C.; Wu, D. Forecasting stock indices using radial basis function neural networks optimized by artificial fish swarm algorithm. Knowl. Based Syst. 2011, 24, 378–385. [Google Scholar] [CrossRef]
  126. Neshat, M.; Sepidnam, G.; Sargolzaei, M.; Toosi, A.N. Artificial fish swarm algorithm: A survey of the state-of-the-art, hybridization, combinatorial and indicative applications. Artif. Intell. Rev. 2014, 42, 965–997. [Google Scholar] [CrossRef]
  127. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  128. Timmis, J.; Andrews, P.; Hart, E. On artificial immune systems and swarm intelligence. Swarm Intell. 2010, 4, 247–273. [Google Scholar] [CrossRef]
  129. Gao, K.; Cao, Z.; Zhang, L.; Chen, Z.; Han, Y.; Pan, Q. A review on swarm intelligence and evolutionary algorithms for solving flexible job shop scheduling problems. IEEE/CAA J. Autom. Sin. 2019, 6, 904–916. [Google Scholar] [CrossRef]
  130. Roy, S.; Chaudhuri, S.S. Cuckoo search algorithm using Lévy flight: A review. Int. J. Mod. Educ. Comput. Sci. 2013, 5, 10. [Google Scholar] [CrossRef]
  131. Duan, H.; Qiao, P. Pigeon-inspired optimization: A new swarm intelligence optimizer for air robot path planning. Int. J. Intell. Comput. Cybern. 2014, 7, 24–37. [Google Scholar] [CrossRef]
  132. Yang, X.S.; Hossein Gandomi, A. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef]
  133. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  134. Hofmeyr, S.A.; Forrest, S. Architecture for an artificial immune system. Evol. Comput. 2000, 8, 443–473. [Google Scholar] [CrossRef]
  135. Pan, W.T. A new fruit fly optimization algorithm: Taking the financial distress model as an example. Knowl. Based Syst. 2012, 26, 69–74. [Google Scholar] [CrossRef]
  136. Krishnanand, K.; Ghose, D. Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions. Swarm Intell. 2009, 3, 87–124. [Google Scholar] [CrossRef]
  137. Mehrabian, A.R.; Lucas, C. A novel numerical optimization algorithm inspired from weed colonization. Ecol. Inform. 2006, 1, 355–366. [Google Scholar] [CrossRef]
  138. Wang, F.; Li, Z.S.; Liao, G.P. Multifractal detrended fluctuation analysis for image texture feature representation. Int. J. Pattern Recognit. Artif. Intell. 2014, 28, 1455005. [Google Scholar] [CrossRef]
  139. Chu, H.M.; Kong, X.Z.; Liu, J.X.; Zheng, C.H.; Zhang, H. A New Binary Biclustering Algorithm Based on Weight Adjacency Difference Matrix for Analyzing Gene Expression Data. IEEE/ACM Trans. Comput. Biol. Bioinform. 2023, 20, 2802–2809. [Google Scholar] [CrossRef]
  140. Liu, J.; Feng, H.; Tang, Y.; Zhang, L.; Qu, C.; Zeng, X.; Peng, X. A novel hybrid algorithm based on Harris Hawks for tumor feature gene selection. PeerJ Comput. Sci. 2023, 9, e1229. [Google Scholar] [CrossRef]
  141. Qu, C.; Zhang, L.; Li, J.; Deng, F.; Tang, Y.; Zeng, X.; Peng, X. Improving feature selection performance for classification of gene expression data using Harris Hawks optimizer with variable neighborhood learning. Briefings Bioinform. 2021, 22, bbab097. [Google Scholar] [CrossRef]
  142. Liu, J.; Qu, C.; Zhang, L.; Tang, Y.; Li, J.; Feng, H.; Zeng, X.; Peng, X. A new hybrid algorithm for three-stage gene selection based on whale optimization. Sci. Rep. 2023, 13, 3783. [Google Scholar] [CrossRef] [PubMed]
  143. Xie, X.; Peng, S.; Yang, X. Deep learning-based signal-to-noise ratio estimation using constellation diagrams. Mob. Inf. Syst. 2020, 2020, 8840340. [Google Scholar] [CrossRef]
  144. Yang, X.; Lei, K.; Peng, S.; Hu, L.; Li, S.; Cao, X. Threshold Setting for Multiple Primary User Spectrum Sensing via Spherical Detector. IEEE Wirel. Commun. Lett. 2019, 8, 488–491. [Google Scholar] [CrossRef]
  145. Jin, J. Multi-function current differencing cascaded transconductance amplifier (MCDCTA) and its application to current-mode multiphase sinusoidal oscillator. Wirel. Pers. Commun. 2016, 86, 367–383. [Google Scholar] [CrossRef]
  146. Jin, J. Resonant amplifier-based sub-harmonic mixer for zero-IF transceiver applications. Integration 2017, 57, 69–73. [Google Scholar] [CrossRef]
  147. Peng, S.; Gao, R.; Zheng, W.; Lei, K. Adaptive Algorithms for Bayesian Spectrum Sensing Based on Markov Model. KSII Trans. Internet Inf. Syst. (TIIS) 2018, 12, 3095–3111. [Google Scholar]
  148. Peng, S.; Zheng, W.; Gao, R.; Lei, K. Fast cooperative energy detection under accuracy constraints in cognitive radio networks. Wirel. Commun. Mob. Comput. 2017, 2017, 3984529. [Google Scholar] [CrossRef]
  149. Liu, H.; Xie, L.; Liu, J.; Ding, L. Application of butterfly Clos-network in network-on-Chip. Sci. World J. 2014, 2014, 102651. [Google Scholar] [CrossRef]
  150. Yu, Y.; Wang, D.; Faisal, M.; Jabeen, F.; Johar, S. Decision support system for evaluating the role of music in network-based game for sustaining effectiveness. Soft Comput. 2022, 26, 10775–10788. [Google Scholar] [CrossRef]
  151. Xiang, Z.; Guo, Y. Controlling Melody Structures in Automatic Game Soundtrack Compositions With Adversarial Learning Guided Gaussian Mixture Models. IEEE Trans. Games 2021, 13, 193–204. [Google Scholar] [CrossRef]
  152. Xiang, Z.; Xiang, C.; Li, T.; Guo, Y. A self-adapting hierarchical actions and structures joint optimization framework for automatic design of robotic and animation skeletons. Soft Comput. 2021, 25, 263–276. [Google Scholar] [CrossRef]
  153. Qin, Z.; Tang, Y.; Tang, F.; Xiao, J.; Huang, C.; Xu, H. Efficient XML query and update processing using a novel prime-based middle fraction labeling scheme. China Commun. 2017, 14, 145–157. [Google Scholar] [CrossRef]
  154. Sun, L.; Mo, Z.; Yan, F.; Xia, L.; Shan, F.; Ding, Z.; Song, B.; Gao, W.; Shao, W.; Shi, F.; et al. Adaptive Feature Selection Guided Deep Forest for COVID-19 Classification With Chest CT. IEEE J. Biomed. Health Inform. 2020, 24, 2798–2805. [Google Scholar] [CrossRef] [PubMed]
  155. Li, Y.; Yang, G.; Zhu, Y.; Ding, X.; Gong, R. Probability model-based early Merge mode decision for dependent views in 3D-HEVC. ACM Trans. Multimed. Comput. Commun. Appl. 2018, 14, 1–15. [Google Scholar] [CrossRef]
  156. Yang, F.; Chen, K.; Yu, B.; Fang, D. A relaxed fixed point method for a mean curvature-based denoising model. Optim. Methods Softw. 2014, 29, 274–285. [Google Scholar] [CrossRef]
  157. Osamy, W.; El-Sawy, A.A.; Salim, A. CSOCA: Chicken swarm optimization based clustering algorithm for wireless sensor networks. IEEE Access 2020, 8, 60676–60688. [Google Scholar] [CrossRef]
  158. Cekmez, U.; Ozsiginan, M.; Sahingoz, O.K. Multi colony ant optimization for UAV path planning with obstacle avoidance. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems (ICUAS), Arlington, VA, USA, 7–10 June 2016; pp. 47–52. [Google Scholar] [CrossRef]
  159. Wai, R.J.; Prasetia, A.S. Adaptive neural network control and optimal path planning of UAV surveillance system with energy consumption prediction. IEEE Access 2019, 7, 126137–126153. [Google Scholar] [CrossRef]
  160. Tsai, S.H.; Chen, Y.W. A novel fuzzy identification method based on ant colony optimization algorithm. IEEE Access 2016, 4, 3747–3756. [Google Scholar] [CrossRef]
  161. Zhou, J.; Wang, C.; Li, Y.; Wang, P.; Li, C.; Lu, P.; Mo, L. A multi-objective multi-population ant colony optimization for economic emission dispatch considering power system security. Appl. Math. Model. 2017, 45, 684–704. [Google Scholar] [CrossRef]
  162. Zhang, X.; Chen, X.; He, Z. An ACO-based algorithm for parameter optimization of support vector machines. Expert Syst. Appl. 2010, 37, 6618–6628. [Google Scholar] [CrossRef]
  163. Qamhan, A.A.; Ahmed, A.; Al-Harkan, I.M.; Badwelan, A.; Al-Samhan, A.M.; Hidri, L. An exact method and ant colony optimization for single machine scheduling problem with time window periodic maintenance. IEEE Access 2020, 8, 44836–44845. [Google Scholar] [CrossRef]
  164. Zhou, Y.; Li, W.; Wang, X.; Qiu, Y.; Shen, W. Adaptive gradient descent enabled ant colony optimization for routing problems. Swarm Evol. Comput. 2022, 70, 101046. [Google Scholar] [CrossRef]
  165. Zhao, D.; Liu, L.; Yu, F.; Heidari, A.A.; Wang, M.; Oliva, D.; Muhammad, K.; Chen, H. Ant colony optimization with horizontal and vertical crossover search: Fundamental visions for multi-threshold image segmentation. Expert Syst. Appl. 2021, 167, 114122. [Google Scholar] [CrossRef]
  166. Ejigu, D.A.; Liu, X. Gradient descent-particle swarm optimization based deep neural network predictive control of pressurized water reactor power. Prog. Nucl. Energy 2022, 145, 104108. [Google Scholar] [CrossRef]
  167. Papazoglou, G.; Biskas, P. Review and comparison of genetic algorithm and particle swarm optimization in the optimal power flow problem. Energies 2023, 16, 1152. [Google Scholar] [CrossRef]
  168. Tiwari, S.; Kumar, A. Advances and bibliographic analysis of particle swarm optimization applications in electrical power system: Concepts and variants. Evol. Intell. 2023, 16, 23–47. [Google Scholar] [CrossRef]
  169. Souza, D.A.; Batista, J.G.; dos Reis, L.L.; Júnior, A.B. PID controller with novel PSO applied to a joint of a robotic manipulator. J. Braz. Soc. Mech. Sci. Eng. 2021, 43, 377. [Google Scholar] [CrossRef]
  170. Abbas, M.; Alshehri, M.A.; Barnawi, A.B. Potential Contribution of the Grey Wolf Optimization Algorithm in Reducing Active Power Losses in Electrical Power Systems. Appl. Sci. 2022, 12, 6177. [Google Scholar] [CrossRef]
  171. Abasi, A.K.; Aloqaily, M.; Guizani, M. Grey wolf optimizer for reducing communication cost of federated learning. In Proceedings of the GLOBECOM 2022-2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022; pp. 1049–1054. [Google Scholar]
  172. Li, Y.; Lin, X.; Liu, J. An improved gray wolf optimization algorithm to solve engineering problems. Sustainability 2021, 13, 3208. [Google Scholar] [CrossRef]
  173. Nadimi-Shahraki, M.H.; Zamani, H.; Mirjalili, S. Enhanced whale optimization algorithm for medical feature selection: A COVID-19 case study. Comput. Biol. Med. 2022, 148, 105858.
  174. Husnain, G.; Anwar, S. An intelligent cluster optimization algorithm based on Whale Optimization Algorithm for VANETs (WOACNET). PLoS ONE 2021, 16, e0250271.
  175. Zhang, Z.; Yang, J. A discrete cuckoo search algorithm for traveling salesman problem and its application in cutting path optimization. Comput. Ind. Eng. 2022, 169, 108157.
  176. Zhang, L.; Yu, Y.; Luo, Y.; Zhang, S. Improved cuckoo search algorithm and its application to permutation flow shop scheduling problem. J. Algorithms Comput. Technol. 2020, 14, 1748302620962403.
  177. Harshavardhan, A.; Boyapati, P.; Neelakandan, S.; Abdul-Rasheed Akeji, A.A.; Singh Pundir, A.K.; Walia, R. LSGDM with biogeography-based optimization (BBO) model for healthcare applications. J. Healthc. Eng. 2022, 2022, 2170839.
  178. Zhang, X.; Wen, S.; Wang, D. Multi-population biogeography-based optimization algorithm and its application to image segmentation. Appl. Soft Comput. 2022, 124, 109005.
  179. Albashish, D.; Hammouri, A.I.; Braik, M.; Atwan, J.; Sahran, S. Binary biogeography-based optimization based SVM-RFE for feature selection. Appl. Soft Comput. 2021, 101, 107026.
  180. Zhang, Y.; Gu, X. Biogeography-based optimization algorithm for large-scale multistage batch plant scheduling. Expert Syst. Appl. 2020, 162, 113776.
  181. Lalljith, S.; Fleming, I.; Pillay, U.; Naicker, K.; Naidoo, Z.J.; Saha, A.K. Applications of flower pollination algorithm in electrical power systems: A review. IEEE Access 2021, 10, 8924–8947.
  182. Ong, K.M.; Ong, P.; Sia, C.K. A new flower pollination algorithm with improved convergence and its application to engineering optimization. Decis. Anal. J. 2022, 5, 100144.
  183. Subashini, S.; Mathiyalagan, P. A cross layer design and flower pollination optimization algorithm for secured energy efficient framework in wireless sensor network. Wirel. Pers. Commun. 2020, 112, 1601–1628.
  184. Kumari, G.V.; Rao, G.S.; Rao, B.P. Flower pollination-based K-means algorithm for medical image compression. Int. J. Adv. Intell. Paradig. 2021, 18, 171–192.
  185. Alyasseri, Z.A.A.; Khader, A.T.; Al-Betar, M.A.; Yang, X.S.; Mohammed, M.A.; Abdulkareem, K.H.; Kadry, S.; Razzak, I. Multi-objective flower pollination algorithm: A new technique for EEG signal denoising. Neural Comput. Appl. 2022, 11, 7943–7962.
  186. Shen, X.; Wu, Y.; Li, L.; Zhang, T. A modified adaptive beluga whale optimization based on spiral search and elitist strategy for short-term hydrothermal scheduling. Electr. Power Syst. Res. 2024, 228, 110051.
  187. Omar, M.B.; Bingi, K.; Prusty, B.R.; Ibrahim, R. Recent advances and applications of spiral dynamics optimization algorithm: A review. Fractal Fract. 2022, 6, 27.
  188. Ekinci, S.; Izci, D.; Al Nasar, M.R.; Abu Zitar, R.; Abualigah, L. Logarithmic spiral search based arithmetic optimization algorithm with selective mechanism and its application to functional electrical stimulation system control. Soft Comput. 2022, 26, 12257–12269.
  189. Nonita, S.; Xalikovich, P.A.; Kumar, C.R.; Rakhra, M.; Samori, I.A.; Maquera, Y.M.; Gonzáles, J.L.A. Intelligent water drops algorithm-based aggregation in heterogeneous wireless sensor network. J. Sensors 2022, 2022, e6099330.
  190. Kaur, S.; Chaudhary, G.; Dinesh Kumar, J.; Pillai, M.S.; Gupta, Y.; Khari, M.; García-Díaz, V.; Parra Fuente, J. Optimizing Fast Fourier Transform (FFT) Image Compression Using Intelligent Water Drop (IWD) Algorithm. 2022. Available online: https://reunir.unir.net/handle/123456789/13930 (accessed on 3 November 2021).
  191. Gao, B.; Hu, X.; Peng, Z.; Song, Y. Application of intelligent water drop algorithm in process planning optimization. Int. J. Adv. Manuf. Technol. 2020, 106, 5199–5211.
  192. Kowalski, P.A.; Łukasik, S.; Charytanowicz, M.; Kulczycki, P. Optimizing clustering with cuttlefish algorithm. In Information Technology, Systems Research, and Computational Physics; Springer: Berlin/Heidelberg, Germany, 2020; pp. 34–43.
  193. Joshi, P.; Gavel, S.; Raghuvanshi, A. Developed Optimized Routing Based on Modified LEACH and Cuttlefish Optimization Approach for Energy-Efficient Wireless Sensor Networks. In Microelectronics, Communication Systems, Machine Learning and Internet of Things: Select Proceedings of MCMI 2020; Springer: Berlin/Heidelberg, Germany, 2022; pp. 29–39.
Figure 1. ZNN structure.
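To connect the structure in Figure 1 with the dynamics it implements: in the ZNN literature, a time-varying problem is encoded as an error function E(t) that the network drives to zero. A minimal statement of the standard design formula, where γ > 0 is a tunable convergence gain and Φ(·) is a monotonically increasing activation function applied element-wise, is

\[ \dot{E}(t) = -\gamma\,\Phi\big(E(t)\big), \qquad \gamma > 0. \]

For instance, for time-varying matrix inversion one may take E(t) = A(t)X(t) − I, and substituting this choice into the design formula yields the classical ZNN model for that problem.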
Figure 2. General procedure of GA.
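To complement the flowchart in Figure 2, below is a minimal Python sketch of the general GA loop (initialization, tournament selection, crossover, and mutation). The sphere objective, population size, and crossover/mutation rates are illustrative assumptions rather than settings from any surveyed work, and the sketch omits refinements such as elitism.

```python
import random

def sphere(x):
    # Toy objective for illustration: minimize the sum of squares.
    return sum(v * v for v in x)

def genetic_algorithm(dim=5, pop_size=30, generations=100,
                      crossover_rate=0.9, mutation_rate=0.1, bound=5.0):
    # Initialization: random population within the search bounds.
    pop = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: binary tournament on fitness.
        def tournament():
            a, b = random.sample(pop, 2)
            return a if sphere(a) < sphere(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = tournament(), tournament()
            # Crossover: uniform mixing of parent genes.
            if random.random() < crossover_rate:
                child = [g1 if random.random() < 0.5 else g2
                         for g1, g2 in zip(p1, p2)]
            else:
                child = p1[:]
            # Mutation: random reset of individual genes.
            child = [random.uniform(-bound, bound)
                     if random.random() < mutation_rate else g
                     for g in child]
            offspring.append(child)
        pop = offspring
    # Return the best individual of the final generation.
    return min(pop, key=sphere)

best = genetic_algorithm()
print(best, sphere(best))
```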
Figure 3. General procedure of PSO.
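Similarly, as a textual counterpart to Figure 3, the following is a minimal Python sketch of the general PSO loop, with the velocity update combining inertia, cognitive, and social terms. The inertia weight w, acceleration coefficients c1 and c2, and the sphere objective are illustrative assumptions.

```python
import random

def sphere(x):
    # Toy objective for illustration: minimize the sum of squares.
    return sum(v * v for v in x)

def pso(dim=5, swarm=30, iters=100, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    # Initialization: random positions, zero velocities.
    pos = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=sphere)[:]           # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # Personal-best and global-best updates.
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
print(best, sphere(best))
```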
Figure 4. Applications of bio-inspired intelligence algorithms.
Table 1. Bio-inspired intelligence algorithms and their applications.

Algorithm | Applications
Ant colony optimization (ACO) | Scheduling problems [163]; routing problems [164]; image processing [165]
Particle swarm optimization (PSO) | Neural network training [166]; power systems [167,168]; robotics [169]
Gray wolf optimizer (GWO) | Electrical engineering [170]; communication [171]; mechanical engineering [172]
Whale optimization algorithm (WOA) | Feature selection [173]; data clustering [174]
Cuckoo search (CS) | Path optimization [175]; scheduling problems [176]
Biogeography-based optimization (BBO) | Healthcare [177]; image segmentation [178]; feature selection [179]; scheduling problems [180]
Flower pollination algorithm (FPA) | Electrical power systems [181]; engineering optimization [182]; wireless sensor networks [183]; signal and image processing [184,185]
Spiral optimization algorithm (SOA) | Scheduling problems [186]; path optimization [187]; electrical systems [188]
Intelligent water drops (IWD) | Wireless sensor networks [189]; image processing [190]; path optimization [191]
Cuttlefish optimization (CFO) | Data clustering [192]; signal processing [193]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
