Abstract
The complex time-dependent Lyapunov equation (CTDLE), as an important means of stability analysis of control systems, has been extensively employed in mathematics and engineering application fields. Recurrent neural networks (RNNs) have been reported as an effective method for solving the CTDLE. In previous work, zeroing neural networks (ZNNs) have been established to find the accurate solution of the time-dependent Lyapunov equation (TDLE) under noise-free conditions. However, noises are inevitable in the actual implementation process. In order to suppress the interference of various noises in practical applications, a complex noise-resistant ZNN (CNRZNN) model is proposed in this paper and employed for the CTDLE solution. Additionally, the convergence and robustness of the CNRZNN model are analyzed and proved theoretically. For verification and comparison, three experiments and the existing noise-tolerant ZNN (NTZNN) model are introduced to investigate the effectiveness, convergence and robustness of the CNRZNN model. Compared with the NTZNN model, the CNRZNN model has more generality and stronger robustness. Specifically, the NTZNN model is a special form of the CNRZNN model, and the residual error of the CNRZNN model can converge rapidly and stably to order when solving CTDLE under complex linear noises, which is much lower than the order achieved by the NTZNN model. Analogously, under complex quadratic noises, the residual error of the CNRZNN model can converge quickly and stably to order, while the residual error of the NTZNN model is divergent.
Keywords:
complex time-dependent Lyapunov equation; zeroing neural network (ZNN); complex linear noise; complex quadratic noise; noise-suppression
MSC:
92B20; 68Q32; 68T05
1. Introduction
The Lyapunov equation is widely used in the stability analysis of dynamic systems [,] in mathematics and engineering control fields. Therefore, the solution of the Lyapunov equation is indispensable in practical applications [,,]. In the past decade, many numerical methods, such as direct methods and iterative methods, have been proposed for the rapid computation of the Lyapunov equation [,,]. The Bartels–Stewart method based on Schur decomposition is a well-known direct method [], which can effectively solve small-scale Lyapunov equations. For the iterative methods, in [], the piecewise alternating direction implicit iterative method is used to solve Lyapunov equations. In addition, Stykel et al. cleverly solved the problem by using the low-rank iterative method [], and the feasibility of this method was further verified. Although the above numerical methods can effectively solve small-scale Lyapunov equations, they are inefficient for large-scale and real-time Lyapunov equation problems.
To overcome this shortcoming of numerical methods, recurrent neural networks (RNNs) were further designed and studied. At present, RNNs are universally applied to practical engineering and application problems [,,,,,,,,,,,,]. Additionally, RNNs have the characteristic of parallel distributed processing; hence, they have been extensively employed for solving the time-dependent Lyapunov equation (TDLE) [,,,,]. In [], Zhang et al. compared two types of RNN (i.e., the gradient neural network, GNN, and the zeroing neural network, ZNN) for solving TDLE, and they concluded that the ZNN has better solving performance than the GNN. The ZNN is a branch of RNN, which originated from the Hopfield neural network (HNN). The ZNN model can realize real-time tracking of the state matrix by exploiting the time derivative of the coefficient matrix []. Therefore, when solving time-dependent problems, the ZNN model can find theoretical solutions quickly and accurately. In the task of solving Lyapunov equations, the ZNN model has been further developed and analyzed. Ding et al. presented an improved complex ZNN model for computing complex time-dependent Sylvester equations (CTDSE) []. In [], Xiao et al. presented an arctan-type VP-ZNN model for solving time-dependent Sylvester equations (TDSE), which achieves convergence in finite time and whose varying design parameter finally converges to a constant.
It is noteworthy that the aforementioned ZNN models solve the time-dependent Lyapunov and Sylvester equations in a noise-free environment. However, in the actual implementation process, there will inevitably be interference from external noises, which usually include constant noises, linear noises and quadratic noises. Although one can preprocess these noises, for example by employing a prefilter, this will undoubtedly reduce the efficiency of the real-time solution. Therefore, some ZNN models with noise tolerance were further studied and widely used in the solution of TDLE [,,,,,,,]. Jin et al. studied a classical integral-enhanced ZNN (IEZNN) model in []. On the basis of the IEZNN, Yang et al. further designed a noise-tolerant ZNN (NTZNN) model to calculate the time-dependent Lyapunov equation under various noises []. Furthermore, in [], Xiao et al. designed a class of robust nonlinear ZNN (RNZNN) models by adding two nonlinear activation functions (AFs) and applied them to the solution of TDLE under various noises.
This paper considers the solution of the complex time-dependent Lyapunov equation (CTDLE) under complex noises. It is noteworthy that the aforementioned TDLE and real-valued noises are special forms of the CTDLE and complex noises, respectively. Hence, the CTDLE solution under complex noises is more general. For solving the CTDLE under various complex noises, a novel complex noise-resistant ZNN (CNRZNN) model is proposed in this paper. Compared with the existing NTZNN model, the CNRZNN model has better robustness, especially against complex linear noises and complex quadratic noises. Specifically, the NTZNN model cannot completely suppress complex linear noise, and under complex quadratic noise disturbance, the residual error of the NTZNN model is divergent. However, complex linear noise and complex quadratic noise are very common in practical engineering and applications []. At the same time, other nonlinear noises can be approximated as linear noises or polynomial noises (quadratic noise is a kind of polynomial noise) by Taylor expansion. Different from our previous single-integration-enhanced ZNN model for solving the real-valued TDLE [], we propose a more robust and general CNRZNN model for computing the CTDLE in the current work. The CNRZNN model contains a double integral term, and real-time tracking of the state solution is realized through the time derivative of the state matrix. Therefore, it can achieve complete suppression of complex linear noises and exhibits excellent suppression performance for complex quadratic noises. As far as we know, there is no complex noise-resistant ZNN model with double integrals for solving the CTDLE. The differences between this paper and previous works are compared in Table 1.
Table 1.
Comparison between the present study and the previous works.
The rest of the paper is organized in four sections. Section 2 presents the CTDLE, the CNRZNN design formula and the model design process. For comparison, this section also introduces the NTZNN model. In Section 3, the CNRZNN model is analyzed and deduced, and the convergence and robustness of the CNRZNN model are proved. Section 4 provides two completely different CTDLE instances for validation and comparison. In this section, the effectiveness, convergence and robustness of the CNRZNN model under various complex noises are further verified. Meanwhile, the performance under complex linear noise and complex quadratic noise is analyzed separately and compared with that of the NTZNN model. In addition, the performance of the CNRZNN model under real noises is verified and compared with that of the NTZNN model. Section 5 summarizes the work of this paper. Finally, the main contributions of this paper are introduced as follows:
- This paper proposes and investigates a complex double-integral noise-resistant ZNN model, which is used for the first time to solve the CTDLE. It is noteworthy that the CNRZNN model is more general: when a coefficient of the CNRZNN model is set to 0, it reduces to the existing NTZNN model, i.e., the NTZNN model is a special form of the CNRZNN model.
- The CNRZNN model is analyzed and deduced, and the convergence and robustness of the CNRZNN model are proved theoretically. It is shown that the CNRZNN model has an inherent tolerance to complex constant noises, complex linear noises and complex quadratic noises.
- Three different experiments verify the effectiveness, convergence and robustness of the CNRZNN model. Meanwhile, the NTZNN model is introduced to make a robustness comparison with the CNRZNN model under the condition of complex linear noise, complex quadratic noise and various real noises.
- Compared with the NTZNN model, the CNRZNN model has more outstanding robustness for solving CTDLE under complex linear noises and complex quadratic noises. To be precise, in the case of complex linear noise, the residual error of the CNRZNN model converges stably to order , which is much lower than that of the NTZNN model at order . Similarly, for complex quadratic noise, the residual of the CNRZNN model can achieve stable convergence, while the residual of the NTZNN model is divergent.
2. Problem Formulation and Models Design
In this section, the problem expression of CTDLE is offered first. Then, the CNRZNN design formula is proposed and the existing NTZNN model is presented.
2.1. Problem Formulation
The complex time-dependent Lyapunov equation can be formulated as
where and are the non-singular, smooth, time-dependent complex coefficient matrices, denotes the transpose of , and is the complex state matrix to be solved in real time. It is noteworthy that for the -dimensional complex non-singular time-dependent matrices and , the rank condition rank() = rank() = n holds. For comparison and verification, represents the theoretical solution of CTDLE (1). This paper aims at computing the complex state matrix so that CTDLE (1) holds true at an arbitrary time, i.e., . It is well known that a complex matrix contains both a real part and an imaginary part. In this case, using the theoretical solution as an example, its complex structure can be written as , where represents the imaginary part of and represents the real part of . Note that the imaginary unit .
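For concreteness, a common Lyapunov-type formulation consistent with the description above is sketched below; the coefficient names, the use of the transpose, and the sign convention are assumptions made for illustration and may differ from the exact Equation (1).

```latex
% Assumed CTDLE form (illustrative notation, not necessarily the authors' Equation (1)):
\[
  A^{T}(t)\,X(t) + X(t)\,A(t) + C(t) = 0 ,
\]
% with the unknown complex state matrix decomposed into real and imaginary parts:
\[
  X(t) = X_{\mathrm{re}}(t) + i\,X_{\mathrm{im}}(t), \qquad i = \sqrt{-1} .
\]
```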
For solving the CTDLE (1), the time-varying complex coefficient matrices and need to be considered as known and bounded. Meanwhile, the time derivatives and of the complex coefficient matrices are also assumed to be known and bounded. Furthermore, before solving the complex time-dependent Lyapunov equation, the existence and uniqueness of the solution of the CTDLE need to be considered, and the relevant lemma is given as follows.
2.2. CNRZNN Design Formula
For computing the CTDLE, based on the design formula of Zhang et al. in [], we first define a complex error function expressed as
If the error function , then is equal to the theoretical solution of CTDLE (1).
To achieve an accurate and effective solution of CTDLE (1), we let all subelements () of converge rapidly to 0. Therefore, according to the original ZNN design formula
where is the design parameter and , a new error function is expressed as . Furthermore, based on (3), one can obtain
which is further rewritten as
According to Equation (4), an error function is defined, and combined with ZNN design Formula (3), we have
Finally, a novel complex ZNN design formula is proposed as
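As a hedged sketch of what such a cascaded, double-integral design typically looks like, the classical ZNN formula and one commonly used noise-resistant extension are written below; the coefficients 3γ, 3γ² and γ³ are an assumption (one natural choice that places a triple pole at −γ, consistent with the repeated application of formula (3) above) and are not claimed to be the exact coefficients of formula (5).

```latex
% Classical ZNN design formula with a linear activation (Zhang et al.):
\[
  \dot{E}(t) = -\gamma\,E(t), \qquad \gamma > 0 .
\]
% Assumed double-integral (noise-resistant) design, obtained by applying the
% ZNN evolution twice in cascade so that (d/dt + \gamma)^{3} E(t) = 0:
\[
  \dot{E}(t) = -3\gamma\,E(t)
               - 3\gamma^{2}\!\int_{0}^{t}\!E(\tau)\,d\tau
               - \gamma^{3}\!\int_{0}^{t}\!\!\int_{0}^{\sigma}\!E(\tau)\,d\tau\,d\sigma .
\]
```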
2.3. CNRZNN Model
In this subsection, the error function is substituted into design Formula (5), and a complex noise-resistant ZNN model is further obtained as
where , and . In the actual calculation, we need to convert the CNRZNN model from matrix form into vector form. Firstly, we convert CTDLE (1) into a vector form:
thereinto,
where denotes the identity matrix, and vec(·) and the symbol ⊗ represent the vectorization and Kronecker product operations, respectively. Hence, the vector-form CNRZNN model is expressed as
The CNRZNN model considering various types of noises can be rewritten as
where denotes an arbitrary complex vector-form noise. In this paper, complex constant noise, complex linear noise and complex quadratic noise are considered in solving CTDLE (1). It is noteworthy that each type of noise satisfies the Pauta (3σ) criterion [,]; hence, the coefficient values of the noises in the subsequent numerical experiments obey this criterion, i.e., .
For the readers’ convenience, the NTZNN model is presented as []
where and . Furthermore, the vector-form NTZNN model is described as
In this section, we have completed the description of the CTDLE and the design of the CNRZNN model. For convenient calculation and clear comparison, the NTZNN model has been introduced for comparison with the CNRZNN model, and the vectorized forms of both models have been given.
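To illustrate the Kronecker-product vectorization step described above, the following minimal NumPy sketch builds the vectorized coefficient matrix for a small static Lyapunov-type equation and checks the residual in the Frobenius norm. The assumed form A^T X + X A + C = 0, the column-major vec convention and all numerical values are hypothetical and do not reproduce the system matrix of model (7).

```python
import numpy as np

# Hypothetical 2x2 complex example of the vectorization used for the
# CNRZNN/NTZNN models: a Lyapunov-type matrix equation is rewritten as a
# linear system in vec(X) via Kronecker products (assumed form, for illustration).
n = 2
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

I = np.eye(n)
# vec(A^T X + X A) = (I kron A^T + A^T kron I) vec(X), using column-major vec.
M = np.kron(I, A.T) + np.kron(A.T, I)
x = np.linalg.solve(M, -C.flatten(order="F"))   # vec of the static solution
X = x.reshape((n, n), order="F")

# Residual check in the Frobenius norm, mirroring the error measure ||E(t)||_F.
print(np.linalg.norm(A.T @ X + X @ A + C, ord="fro"))
```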
3. Theoretical Analysis and Results
Firstly, we discuss the convergence performance and robustness of the proposed CNRZNN model (7). In addition, the Frobenius norm of is used to intuitively show the residual error of the CTDLE-solving process, that is, .
3.1. Convergence of CNRZNN Model
In this work, a theorem is proposed to illustrate the efficiency and excellent convergence performance of CNRZNN model (7) for computing CTDLE (1) under noise-free conditions.
Theorem 1.
Proof of Theorem 1.
By taking the time derivative of the design formula Equation (5) twice, we have
Let , , and be the th subelements of , , and , respectively, . Then, Equation (10) can be rewritten element-wise as follows:
For linear differential equations, the general Laplace transform equation [] is expressed as
where stands for the Laplace transform operation. It is worth mentioning that is continuous over any finite interval and with constants and . Furthermore, taking the Laplace transform of Equation (11), we obtain
which is equivalent to
The pole positions of the transfer function determine the stability of the closed-loop system. According to Equation (12), the three poles of the system are (), which are located in the left half of the s-plane. Therefore, the system is stable, and the final value theorem of the Laplace transform can be applied on the basis of the stable system:
Finally, we can conclude that the Frobenius norm of the error function converges to zero. The proof of Theorem 1 is complete. □
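For reference, the standard Laplace-transform derivative rule and final value theorem invoked in this proof are written below; the factorization in the final comment is an assumption consistent with the triple-pole design sketched in Section 2.2, not a restatement of Equation (12).

```latex
% Laplace transform of derivatives (standard rule):
\[
  \mathcal{L}\{f^{(n)}(t)\} = s^{n}F(s) - s^{n-1}f(0) - \cdots - f^{(n-1)}(0) .
\]
% Final value theorem, valid when all poles of sF(s) lie in the open left half-plane:
\[
  \lim_{t\to\infty} f(t) = \lim_{s\to 0} sF(s) .
\]
% If, as assumed in the sketch of Section 2.2, the design places a triple pole at -\gamma,
% the element-wise characteristic polynomial is
% (s+\gamma)^{3} = s^{3} + 3\gamma s^{2} + 3\gamma^{2}s + \gamma^{3},
% so in the noise-free case sE_{ij}(s) -> 0 as s -> 0, i.e., e_{ij}(t) -> 0.
```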
3.2. Robustness of CNRZNN Model
This subsection investigates the robustness of noise-perturbed CNRZNN model (8), considering complex constant noise , complex linear noise and complex quadratic noise , where A, B and C are complex noise coefficient matrices.
Theorem 2.
Proof of Theorem 2.
The proof of this theorem consists of the following two parts:
(1) Complex constant noise: From the proof of Theorem 1, entry-wise, we have
where and denote the th elements of and , respectively. Employing the Laplace transform on the noise-perturbed CNRZNN model (13), we obtain
where is the Laplace transform of . Furthermore,
which is similar to Equation (12) and corresponds to a stable system; then
(2) Complex linear noise: Similar to the proof for complex constant noise, we have
where and . Thus,
The above two proofs yield the same result, i.e., as . Finally, we know that the norm of the matrix-form error function is 0, i.e., . The proof of Theorem 2 is now complete. □
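A compact way to see why complex constant and linear noises are completely rejected, under the double-integral design assumed in the sketch of Section 2.2 (the coefficients there are assumptions), is the following.

```latex
% Differentiating the assumed perturbed design formula twice element-wise gives
\[
  \dddot{e}(t) + 3\gamma\,\ddot{e}(t) + 3\gamma^{2}\dot{e}(t) + \gamma^{3}e(t) = \ddot{n}(t) .
\]
% For a complex constant noise n(t) = a or a complex linear noise n(t) = b\,t,
% the forcing term \ddot{n}(t) = 0, so by the final value theorem
\[
  \lim_{t\to\infty} e(t)
  = \lim_{s\to 0} \frac{s\,[\text{initial-condition terms}]}{(s+\gamma)^{3}} = 0 .
\]
```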
Remark 1.
In practical applications, noises are usually of real type. However, the complex numbers are a necessary part of the numerical world, and complex numbers have better plasticity and flexibility than real numbers. Therefore, the complex noises considered in this paper are mainly a mathematical extension based on real noises, making the problem mathematically more general.
Remark 2.
At present, the noise suppression performance of the CNRZNN model is the focus of this work. The proposed CNRZNN model does not have finite-time convergence because the linear activation function is adopted. Therefore, the upper limit of the convergence time cannot be deduced. However, by employing nonlinear activation functions, we can design a finite-time convergent CNRZNN model in the future.
Theorem 3.
For solving CTDLE under the condition of complex quadratic noise , the upper bound of the steady-state residual error of the noise-perturbed CNRZNN model (8) is , and when the design parameter , we then obtain .
Proof of Theorem 3.
Following the proof of Theorem 2, the Laplace transform is similarly used to rewrite the CNRZNN model (8) disturbed by complex quadratic noise as
where is the Laplace transform of . All poles of the system are located in the left half of the s-plane; hence, it is a stable system. We then have
and . The proof of Theorem 3 is thus complete. □
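Continuing the same hedged sketch (still under the assumed triple-pole design of Section 2.2), a worked final-value computation for an element-wise quadratic noise suggests why the steady-state residual error is bounded and scales as 1/γ³, so that increasing the design parameter drives it toward zero, in line with Theorem 3.

```latex
% For an element-wise quadratic noise n(t) = c\,t^{2}, we have \ddot{n}(t) = 2c and
% \mathcal{L}\{\ddot{n}(t)\} = 2c/s.  Ignoring initial-condition terms,
\[
  \lim_{t\to\infty} e(t)
  = \lim_{s\to 0} s \cdot \frac{2c/s}{(s+\gamma)^{3}}
  = \frac{2c}{\gamma^{3}} ,
\]
% which is finite and vanishes as \gamma \to \infty (a sketch under the assumed design).
```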
4. Illustrative Examples and Results
In this section, the validity, convergence and robustness of CNRZNN model (7) are verified by three experiments, i.e., two complex experiments and one real experiment. Furthermore, for comparison and connection, NTZNN model (9) is presented for solving CTDLE (1) under the same conditions.
4.1. Example 1
The objective of this example is to solve the complex state matrix of CTDLE (14) and to achieve the fitting of the theoretical solution .
Before verifying the robustness of CNRZNN model (7), the validity and convergence of CNRZNN model (7) need to be examined. In the absence of noise, the proposed CNRZNN model (7) is used to solve CTDLE (14), and the relevant results are depicted in Figure 1 and Figure 2. More specifically, with design parameter , Figure 1 shows the results of the real-time state matrix of CNRZNN model (7), while Figure 2 shows the convergence of the Frobenius norm of the error matrix , which is called the residual error in this paper. From Figure 1, the state matrix of CNRZNN model (7) starts from five initial states and fits the theoretical solution of CTDLE (14), represented by the red dotted line.
It is noteworthy that, for simplicity and clarity of presentation, Figure 1a represents the real part of the state matrix and Figure 1b represents the imaginary part of the state matrix . Meanwhile, in Figure 2a, the five residual error curves of CNRZNN model (7) converge to zero within 1.4 s. When the design parameter is increased to , the five curves converge to zero within 0.35 s, which is depicted in Figure 2b. Based on the above experimental results, the validity and convergence of CNRZNN model (7) and the influence of the design parameter are preliminarily verified.
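For readers who wish to reproduce the qualitative behavior reported in this example, the following minimal SciPy sketch integrates a scalar analogue of the assumed double-integral error dynamics from Section 2.2; the design parameter, the initial error and the noise term are hypothetical values, and this is not a reimplementation of model (7).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy simulation of the assumed double-integral error dynamics (a sketch only):
#   de/dt = -3*g*e - 3*g**2 * p - g**3 * q + n(t),  dp/dt = e,  dq/dt = p,
# where p and q are the single and double time-integrals of the scalar error e.
g = 10.0                                   # design parameter gamma (assumed)
noise = lambda t: 0.0                      # noise-free here; try (2 + 1j) * t for linear noise

def dynamics(t, y):
    e, p, q = y
    return [-3 * g * e - 3 * g**2 * p - g**3 * q + noise(t), e, p]

y0 = np.array([1.0 + 0.5j, 0.0, 0.0], dtype=complex)   # hypothetical complex initial error
sol = solve_ivp(dynamics, (0.0, 2.0), y0, max_step=1e-3)
print(abs(sol.y[0, -1]))                   # |e(t)| decays toward zero
```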
Remark 3.
To investigate the suppression ability of the CNRZNN model against bounded random noises, we consider bounded random noises in this example, and the corresponding experimental results are shown in Figure 3. As seen from Figure 3a,b, the CNRZNN model can efficiently and accurately converge to the theoretical solution of the complex Lyapunov equation. Meanwhile, in Figure 3c, the residual error of the CNRZNN model converges stably to order after 2 s. It can be concluded that the CNRZNN model has excellent suppression performance for bounded random noises.
Figure 3.
CNRZNN model for solving CTDLE (14) under bounded random noises , where the solid blue line represents the state trajectory and the red dotted line stands for the theoretical solution.
4.2. Example 2
On the basis of Example 1, we consider another CTDLE (1) to further analyze the convergence and robustness of CNRZNN model (7). Then, the complex matrices and of the new CTDLE are defined as
In this work, we first verify the convergence of CNRZNN model (7) and consider the CNRZNN model with and for computing CTDLE (15) under the noise-free condition, and the corresponding results are illustrated in Figure 2c,d and Figure 4. From Figure 2c,d, the residual error of the CNRZNN model with different design parameters can rapidly and stably converge to zero. From Figure 4a, the real part of of CNRZNN model (7) is fitted rapidly from a random initial state to . Similarly, Figure 4b shows that the imaginary part of of CNRZNN model (7) is also fully and effectively fitted to . Thus, the validity and convergence of CNRZNN model (7) are verified under noise-free conditions.
However, various noises cannot be ignored in the real implementation of CTDLE (1). This paper considers the interference of complex constant noises , complex linear noises and complex quadratic noises . CNRZNN model (7) is used to solve CTDLE (15) under the above three noise conditions, and the results are presented in Figure 5. To comprehensively verify the robustness of CNRZNN model (7) under the above three kinds of noises, in this example, small and large coefficients are considered for each kind of noise. Specifically, the complex constant noises considered are and , the complex linear noises are and , and the complex quadratic noises are and . Firstly, the robustness of CNRZNN model (7) under complex constant noises and is considered, and the related results are illustrated in Figure 5a. From Figure 5a, for the CTDLE (15) solution with a complex constant noise of any size, the of CNRZNN model (7) can converge stably to the order – within 2.5 s. Analogously, in Figure 5b, the robustness analysis results of CNRZNN model (7) under complex linear noises and with different coefficients are displayed. These results show that the of CNRZNN model (7) can effectively and stably converge to the order – within 2.5 s. In Figure 5c, the CNRZNN model solves CTDLE (15) under the complex quadratic noises and , and the residual errors of the CNRZNN model converge stably to below the orders and , respectively.
For comparison and correlation, NTZNN model (9) is introduced to compare with CNRZNN model (7) under the same complex linear noises and complex quadratic noises. Under the linear noises , the NTZNN and CNRZNN models solve the same CTDLE (15), and the results are shown in Figure 6. To present the results succinctly and clearly, the real part and imaginary part of the state matrix are represented in Figure 6a,b, respectively. As can be seen from Figure 6a, the real part of of CNRZNN model (7) can accurately converge to , while NTZNN model (9) cannot achieve effective convergence. Correspondingly, the fitting effect of the imaginary part of the state matrix of the two models in Figure 6b is the same as that in Figure 6a. The residual errors of the NTZNN and CNRZNN models with are shown in Figure 7a,b. From these two figures, one can find that the of CNRZNN model (7) and NTZNN model (9) are of the orders and , respectively. In summary, one can conclude that CNRZNN model (7) has a stronger ability to suppress complex linear noise than NTZNN model (9).
Figure 6.
CNRZNN model and NTZNN model for solving CTDLE (15) under complex linear noises . (a) Real part of the state matrix . (b) Imaginary part of the state matrix .
Additionally, the CNRZNN model is further compared with the NTZNN model under complex quadratic noise. In this work, the CNRZNN model and NTZNN model are employed to solve CTDLE (15) under complex quadratic noises , and the is used to estimate the robustness of the models. In Figure 7c, the of CNRZNN model (7) with design parameter is of order . However, under the same conditions, the of NTZNN model (9) is divergent. Then, we adjust the design parameter to 50, and the results are shown in Figure 7d. One can see that the of CNRZNN model (7) is of order , while the of NTZNN model (9) is again divergent. According to Theorem 3, when the design parameter increases by five times, the decreases by times. Obviously, the experimental results are consistent with the theoretical results. For readers' convenience, the steady-state residual error values of the CNRZNN and NTZNN models under different design parameters and noises are presented in Table 2.
Table 2.
Comparison of steady-state residual error values between NTZNN model and CNRZNN model under different design parameters and noises.
4.3. Experimental Comparison of Real Lyapunov Equation under Real Noises
In this section, CNRZNN model (7) is considered to solve the real Lyapunov equation under real noises and is compared with NTZNN model (9). The real Lyapunov equation and real noises in [] are selected for analysis and comparison. Firstly, we consider the real Lyapunov equation as
in which,
Secondly, the real noises in [] are considered: constant noises and linear noises . In addition, we further consider real quadratic noises . Finally, we use the CNRZNN model to solve the real Lyapunov Equation (16) under the above three kinds of real noises and compare it with NTZNN model (9).
Case 1 (Real constant noises): In this case, we use the CNRZNN model with design parameter to solve real Lyapunov Equation (16) under real constant noises , and the corresponding experimental results are depicted in Figure 8a,b. In Figure 8a, the state trajectory of the CNRZNN model is rapidly fitted to the theoretical solution. In Figure 8b, the residual error of the CNRZNN model converges rapidly and stably to order when solving real Lyapunov Equation (16) under real constant noises , which is lower than the order achieved by NTZNN model (9). Therefore, the CNRZNN model has better real constant noise suppression performance.
Figure 8.
CNRZNN model with solves Lyapunov Equation (16) starting from initial complex matrices under real constant noises , real linear noises and real quadratic noises .
Case 2 (Real linear noises): Similar to Case 1 above, real linear noises are considered in this case. The comparison of the residual error between the CNRZNN and NTZNN models is illustrated in Figure 8c. From Figure 8c, we obtain results similar to those in Figure 8b, where the residual error of the CNRZNN model converges to order , while that of NTZNN model (9) only approaches order . Thus, the CNRZNN model also performs better under real linear noises.
Case 3 (Real quadratic noises): In this case, we consider real quadratic noises . Similarly, the experimental results are shown in Figure 8d, and we obtain the same results as in the complex experiment of Example 2. That is, the residual error of CNRZNN model (7) can steadily converge to order , while the residual error of NTZNN model (9) is divergent.
5. Conclusions
For solving the CTDLE in the presence of various noises, this paper has proposed a new design formula, and the CNRZNN model has been derived on the basis of this formula. The convergence and robustness of the CNRZNN model have been proved by theoretical derivation. Concretely, in the presence of complex constant noises and complex linear noises, the residual error of the CNRZNN model can effectively and accurately converge to zero. Similarly, under complex quadratic noises, the residual error of the CNRZNN model is bounded, and the upper bound is determined by the design parameter . Finally, three time-dependent Lyapunov examples and the NTZNN model were introduced for verification and comparison, respectively. Theoretical analysis and experimental results show that the CNRZNN model has better anti-noise ability than the NTZNN model, especially for complex linear noises and complex quadratic noises. It is worth pointing out that designing a finite-time convergent CNRZNN model by using nonlinear activation functions may be a future research direction of this work.
Author Contributions
Conceptualization, B.L. and S.L.; methodology, S.L. and B.L.; software, C.H. and V.N.K.; validation, B.L. and C.H.; formal analysis, C.H.; investigation, X.C. and C.H.; data curation, V.N.K. and C.H.; writing—original draft preparation, C.H.; writing—review and editing, B.L. and S.L.; visualization, C.H.; supervision, B.L. and X.C.; project administration, B.L.; funding acquisition, B.L. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported in part by the National Natural Science Foundation of China under Grant No. 62066015, the Natural Science Foundation of Hunan Province of China under Grant No. 2020JJ4511, and the Research Foundation of Education Bureau of Hunan Province, China, under Grant No. 20A396.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
| HNN | Hopfield neural network |
| RNN | recurrent neural network |
| ZNN | zeroing neural network |
| IEZNN | integral-enhanced ZNN |
| NTZNN | noise-tolerant ZNN |
| RNZNN | robust nonlinear ZNN |
| VPZNN | varying-parameter ZNN |
| CNRZNN | complex noise-resistant ZNN |
| TDLE | time-dependent Lyapunov equation |
| TDSE | time-dependent Sylvester equation |
| CTDLE | complex time-dependent Lyapunov equation |
| CTDSE | complex time-dependent Sylvester equation |
| AF | activation function |
References
- Guo, D.; Zhang, Y. Zhang neural network, Getz-Marsden dynamic system, and discrete-time algorithms for time-varying matrix inversion with application to robots’ kinematic control. Neurocomputing 2012, 97, 22–32. [Google Scholar] [CrossRef]
- Shi, Y.; Zhang, Y. New discrete-time models of zeroing neural network solving systems of time-variant linear and nonlinear inequalities. IEEE Trans. Syst. Man Cybern. Syst. 2020, 50, 565–576. [Google Scholar] [CrossRef]
- Huang, H.; Fu, D.; Wang, G.; Jin, L.; Liao, S.; Wang, H. Modified Newton integration algorithm with noise suppression for online dynamic nonlinear optimization. Numer. Algorithms 2021, 87, 575–599. [Google Scholar] [CrossRef]
- Tanaka, N.; Iwamoto, H. Active boundary control of an Euler–Bernoulli beam for generating vibration-free state. J. Sound Vib. 2007, 304, 570–586. [Google Scholar] [CrossRef]
- Zhang, H.; Li, Z.; Qu, Z.; Lewis, F.L. On constructing Lyapunov functions for multi-agent systems. Automatica 2015, 58, 39–42. [Google Scholar] [CrossRef] [Green Version]
- Mathews, J.H.; Fink, K.D. Numerical Methods Using MATLAB; Prentice Hall: Hoboken, NJ, USA, 2004. [Google Scholar]
- Penzl, T. Numerical solution of generalized Lyapunov equations. Adv. Comput. Math. 1998, 8, 33–48. [Google Scholar] [CrossRef]
- Stykel, T. Low-rank iterative methods for projected generalized Lyapunov equations. Electron. Trans. Numer. Anal. 2008, 30, 187–202. [Google Scholar]
- Zhang, Y.; Li, Z.; Li, K. Complex-valued Zhang neural network for online complex-valued time-varying matrix inversion. Appl. Math. Comput. 2011, 217, 10066–10073. [Google Scholar] [CrossRef]
- Stanimirović, P.S.; Petković, M.D. Improved GNN models for constant matrix inversion. Neural Process. Lett. 2019, 50, 321–339. [Google Scholar] [CrossRef]
- Jiang, W.; Lin, C.L.; Katsikis, V.N.; Mourtas, S.D.; Stanimirović, P.S.; Simos, T.E. Zeroing neural network approaches based on direct and indirect methods for solving the Yang–Baxter-like matrix equation. Mathematics 2022, 10, 1950. [Google Scholar] [CrossRef]
- Li, X.; Xu, Z.; Li, S.; Su, Z.; Zhou, X. Simultaneous obstacle avoidance and target tracking of multiple wheeled mobile robots with certified safety. IEEE Trans. Cybern. 2021. online ahead of print. [Google Scholar] [CrossRef] [PubMed]
- Kornilova, M.; Kovalnogov, V.; Fedorov, R.; Zamaleev, M.; Katsikis, V.N.; Mourtas, S.D.; Simos, T.E. Zeroing neural network for pseudoinversion of an arbitrary time-varying matrix based on singular value decomposition. Mathematics 2022, 10, 1208. [Google Scholar] [CrossRef]
- Jin, L.; Yan, J.; Du, X.; Xiao, X.; Fu, D. RNN for solving time-variant generalized Sylvester equation with applications to robots and acoustic source localization. IEEE Trans. Ind. Inform. 2020, 16, 6359–6369. [Google Scholar] [CrossRef]
- Khan, A.T.; Li, S.; Cao, X. Control framework for cooperative robots in smart home using bio-inspired neural network. Measurement 2021, 167, 108253. [Google Scholar] [CrossRef]
- Stanimirović, P.S.; Srivastava, S.; Gupta, D.K. From Zhang neural network to scaled hyperpower iterations. J. Comput. Appl. Math. 2018, 331, 133–155. [Google Scholar] [CrossRef]
- Xiao, L. A finite-time convergent Zhang neural network and its application to real-time matrix square root finding. Neural. Comput. Appl. 2019, 31, 793–800. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhang, J.; Weng, J. Dynamic Moore-Penrose inversion with unknown derivatives: Gradient neural network approach. IEEE Trans. Neural Netw. Learn. Syst. 2022. online ahead of print. [Google Scholar] [CrossRef]
- Guo, D.; Zhang, Y. ZNN for solving online time-varying linear matrix–vector inequality via equality conversion. Appl. Math. Comput. 2015, 259, 327–338. [Google Scholar] [CrossRef]
- Stanimirović, P.S.; Petković, M.D. Gradient neural dynamics for solving matrix equations and their applications. Neurocomputing 2018, 306, 200–212. [Google Scholar] [CrossRef]
- Uhlig, F. Zhang neural networks for fast and accurate computations of the field of values. Linear Multilinear A 2019, 68, 1894–1910. [Google Scholar] [CrossRef] [Green Version]
- Zhang, Y.; Chen, K.; Li, X.; Yi, C.; Zhu, H. Simulink modeling and comparison of Zhang neural networks and gradient neural networks for time-varying Lyapunov equation solving. Proc. Int. Conf. Nat. Comput. ICNC 2008, 3, 521–525. [Google Scholar]
- Yi, C.; Zhang, Y.; Guo, D. A new type of recurrent neural networks for real-time solution of Lyapunov equation with time-varying coefficient matrices. Math. Comput. Simul. 2013, 92, 40–52. [Google Scholar] [CrossRef]
- Lv, X.; Xiao, L.; Tan, Z.; Yang, Z. Wsbp function activated Zhang dynamic with finite-time convergence applied to Lyapunov equation. Neurocomputing 2018, 314, 310–315. [Google Scholar] [CrossRef]
- Shi, Y.; Mou, C.; Qi, Y.; Li, B.; Li, S.; Yang, B. Design, analysis and verification of recurrent neural dynamics for handling time-variant augmented Sylvester linear system. Neurocomputing 2021, 426, 274–284. [Google Scholar] [CrossRef]
- Xiao, L.; Tao, J.; Li, W. An arctan-type varying-parameter ZNN for solving time-varying complex Sylvester equations in finite time. IEEE Trans. Ind. Inform. 2022, 18, 3651–3660. [Google Scholar] [CrossRef]
- Guo, D.; Yi, C.; Zhang, Y. Zhang neural network versus gradient-based neural network for time-varying linear matrix equation solving. Neurocomputing 2011, 74, 3708–3712. [Google Scholar] [CrossRef]
- Ding, L.; Xiao, L.; Zhou, K.; Lan, Y.; Zhang, Y.; Li, J. An improved complex-valued recurrent neural network model for time-varying complex-valued Sylvester equation. IEEE Access 2019, 7, 19291–19302. [Google Scholar] [CrossRef]
- Jin, L.; Zhang, Y.; Li, S. Integration-enhanced Zhang neural network for real-time-varying matrix inversion in the presence of various kinds of noises. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 2615–2627. [Google Scholar] [CrossRef] [PubMed]
- Yan, J.; Xiao, X.; Li, H.; Zhang, J.; Yan, J.; Liu, M. Noise-tolerant zeroing neural network for solving non-stationary Lyapunov equation. IEEE Access 2019, 7, 41517–41524. [Google Scholar] [CrossRef]
- Xiao, L.; Zhang, Y.; Hu, Z.; Dai, J. Performance benefits of robust nonlinear zeroing neural network for finding accurate solution of Lyapunov equation in presence of various noises. IEEE Trans. Ind. Inform. 2019, 15, 5161–5171. [Google Scholar] [CrossRef]
- Xiang, Q.; Li, W.; Liao, B.; Huang, Z. Noise-resistant discrete-time neural dynamics for computing time-dependent Lyapunov equation. IEEE Access 2018, 6, 45359–45371. [Google Scholar] [CrossRef]
- He, Y.; Liao, B.; Xiao, L.; Han, L.; Xiao, X. Double accelerated convergence ZNN with noise-suppression for handling dynamic matrix inversion. Mathematics 2021, 10, 50. [Google Scholar] [CrossRef]
- Wang, G.; Hao, Z.; Zhang, B.; Jin, L. Convergence and robustness of bounded recurrent neural networks for solving dynamic Lyapunov equations. Inf. Sci. 2022, 588, 106–123. [Google Scholar] [CrossRef]
- Xiao, L.; He, Y. A noise-suppression ZNN model with new variable parameter for dynamic Sylvester equation. IEEE Trans. Ind. Inform. 2021, 17, 7513–7522. [Google Scholar] [CrossRef]
- Liao, B.; Xiang, Q.; Li, S. Bounded Z-type neurodynamics with limited-time convergence and noise tolerance for calculating time-dependent Lyapunov equation. Neurocomputing 2019, 325, 234–241. [Google Scholar] [CrossRef]
- Sun, Z.; Wang, G.; Jin, L.; Cheng, C.; Zhang, B.; Yu, J. Noise-suppressing zeroing neural network for online solving time-varying matrix square roots problems: A control-theoretic approach. Expert Syst. Appl. 2022, 192, 116272. [Google Scholar] [CrossRef]
- Trefethen, L.N.; Bau, D., III. Numerical Linear Algebra; Siam: Philadelphia, PA, USA, 1997; Volume 50. [Google Scholar]
- Zhang, Y.; Jiang, D.; Wang, J. A recurrent neural network for solving Sylvester equation with time-varying coefficients. IEEE Trans. Neural Netw. 2002, 13, 1053–1063. [Google Scholar] [CrossRef]
- Durrett, R. Probability: Theory and Examples; Cambridge University Press: Cambridge, UK, 2019; Volume 49. [Google Scholar]
- Zhang, Y.; Jin, L.; Guo, D.; Yin, Y.; Chou, Y. Taylor-type 1-step-ahead numerical differentiation rule for first-order derivative approximation and ZNN discretization. J. Appl. Math. Comput. 2015, 273, 29–40. [Google Scholar] [CrossRef]
- Oppenheim, A.V.; Willsky, A.S. Signals & Systems; Prentice-Hall: Englewood Cliffs, NJ, USA, 1997. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).