Abstract
Wavelet neural networks have been widely applied to dynamical system identification. The most difficult issue lies in selecting the optimal control parameters (the wavelet base type and the corresponding resolution level) of the network structure. This paper exploits the advantages of Legendre multiwavelet (LW) bases to construct a Legendre multiwavelet neural network (LWNN), whose simple structure consists of an input layer, a hidden layer, and an output layer. It is noted that the activation functions in the hidden layer are adopted as LW bases. This selection is based on the rich properties of LW bases, such as piecewise polynomials, orthogonality, and various regularities. These properties make LWNN more effective than traditional wavelet neural networks in approximating the complex characteristics exhibited by uncertainties, steps, nonlinearities, and ramps in dynamical systems. Then, the number of selected LW bases and the corresponding resolution level are effectively optimized by the simple Genetic Algorithm, and the improved gradient descent algorithm is implemented to learn the weight coefficients of LWNN. Finally, four nonlinear dynamical system identification problems are used to validate the efficiency and feasibility of the proposed LWNN-GA method. The experimental results indicate that the LWNN-GA method achieves better identification accuracy with a simpler network structure than other existing methods.
Keywords:
wavelet neural networks; Legendre multiwavelet; Legendre multiwavelet neural network; nonlinear dynamical system identification; genetic algorithm

MSC:
37M05; 37N30
1. Introduction
Nonlinear dynamical system identification is very important in various engineering applications [1,2]. Several system identification approaches, encompassing both mathematical model-based and data-driven methods, have been developed to address the difficulties posed by the complex characteristics of dynamical systems, which often exhibit high nonlinearity, uncertainties, delays, steps, disturbances, and more [3,4,5,6]. Among the various methods for system identification, as it is difficult to establish an accurate mathematical model for dynamical systems, the data-driven intelligent methods considered in this study have recently been widely accepted and applied in nonlinear dynamical system identification [2,7,8]. Various intelligent methods, such as neural networks (NN) based on biological knowledge, fuzzy systems based on the operator’s knowledge of the dynamical systems, fuzzy neural networks (FNN) [9,10,11,12,13], wavelet neural networks (WNN) [14,15,16], fuzzy wavelet neural networks (FWNN), and more [17,18,19,20], have been implemented to identify nonlinear dynamical systems, and successful identification results have been attained.
It is known that different variants of NN, such as the multi-layer perceptron NN (MLPNN), radial basis function network (RBFN), recurrent neural network (RNN) [19], and long short-term memory (LSTM) network, have been widely used for nonlinear dynamical system identification [21,22]. The main issue of the MLPNN model is that the weight adjustment does not exploit any of the local data structure, and the function approximation is very sensitive to the available training data [21]. The presence of a feedback loop in RNNs enhances their ability to approximate nonlinear dynamical systems. RNNs are deemed some of the most attractive models for processing information generated by various dynamical systems, successfully overcoming the disadvantages of the MLPNN model [22]. For example, Kumar proposed a memory recurrent Elman neural network-based method for time-delayed nonlinear dynamical system identification [19]. Another requirement of the identification model is robustness, that is, it should be able to compensate for the effects of uncertainties such as parameter variations and disturbance signals. An adaptive sliding-mode control system using a double-loop RNN structure was proposed for a class of nonlinear dynamic systems in Ref. [23], which realized the merits of high precision, fast speed, and strong robustness. Chu and Fei [8] proposed an adaptive dynamic global sliding-mode controller based on a proportional integral derivative sliding surface using a radial basis function NN for a three-phase active power filter to obtain global robustness. Ghahderijani et al. [24] presented a sliding-mode control scheme to provide high robustness to external disturbances and a good transient response under large and abrupt load changes.
Although NN-based methods have the ability to approximate any deterministic nonlinear process with little knowledge and no assumptions [3,14,25], random weight initialization is generally accompanied by extended training cost, and the training algorithm may converge to local minima. In particular, there is no theoretical relationship between the specific learning parameters of the network and the optimal network architecture [26]. To effectively solve these issues, WNN, an alternative NN, was proposed to alleviate the aforementioned weaknesses [27,28]. Usually, three types of wavelet bases, the Gaussian derivative, the second derivative of the Gaussian, and the Morlet wavelet, are suggested to construct WNN [29]. However, in order to obtain fewer iteration steps in the training procedure and arrive at the global minimum of the loss function, various approaches have been proposed to initialize the dilation and translation parameters of the wavelet bases used as the activation functions of the network [30,31,32,33]. For example, Luo et al. developed a discrete-time multiscale wavelet neural network for the identification of autonomous nonlinear dynamical systems [16]. Emami developed a wavelet neural network approach for the identification of nonlinear time-varying systems [34]. However, wavelet neural networks involve adjusting multiple parameters, including the selection of wavelet functions, the determination of wavelet scales, and the number of layers and nodes in the neural network. Adjusting these parameters requires experimentation and optimization, often involving iterative attempts with different parameter combinations and evaluating the model’s performance using appropriate evaluation metrics.
Additionally, FWNN combines the advantages of fuzzy systems and WNN, which can decrease the number of rules significantly, and it has been successfully applied to dynamical system identification [35,36]. For example, Wang et al. combined the fuzzy-neural structure with a long short-term memory (LSTM) mechanism to identify nonlinear dynamical systems [37]. However, FWNN needs a large number of neurons to obtain a reasonably good performance in nonlinear dynamical system identification, which increases the dimension and the number of configuration parameters of the network [38,39,40].
Consequently, it is necessary to design a new effective neural network and optimize the control parameters of the network structure so as to effectively identify nonlinear dynamical systems. Fortunately, attractive properties such as piecewise polynomials, compact support, orthogonality, and various regularities of the LW bases constructed by Alpert et al. [41] provide an alternative structure for WNN. These properties are very beneficial for constructing a simpler network with lower computational complexity.
In this paper, our main objective is to design an LWNN based on the advantages of LW base functions and to optimize the control parameters, i.e., the number of LW bases and the corresponding resolution level of the network, using a simple Genetic Algorithm, which enhances the effectiveness and adaptability of the network applied to nonlinear dynamical system identification. More precisely, the basic structure of LWNN is composed of the input layer, hidden layer, and output layer, which are described in Figure 1 in detail. Due to the compact support and orthogonality properties of LW bases, the input data are partitioned into subintervals according to the optimal resolution level, which leads to local connections and shared weights in each subinterval, decreasing the calculation cost of the network training. In the hidden layer, each main neuron is essentially a linear combination of LW bases. Therefore, the number of LW bases and the resolution level measure the complexity of the LWNN structure. The two control parameters of the network are optimized by the simple Genetic Algorithm to effectively and adaptively learn the salient features of the nonlinear dynamical system. In particular, the number of LW base functions and the resolution level are usually small positive integers, and they are easier to optimize using the simple Genetic Algorithm than with the various complex algorithms for initializing the dilation and translation parameters used in traditional WNN [37]. The performance of the LWNN-GA method is supported by rigorous wavelet analysis theory, and any suitable function can be approximated with arbitrary accuracy through sufficient training of LWNN [41].
Figure 1.
The structure of LWNN.
To summarize, in comparison with other neural networks, the contributions of our proposed method in terms of the NN model mainly lie in the theoretical guidance, the local learning structure, and the adaptive adjustment mechanism, which make the network topology more compact and simpler with a single hidden layer, attaining high learning efficiency.
In order to demonstrate the effectiveness and feasibility of the proposed method, the improved gradient descent algorithm is implemented to learn the network weight coefficients of the structure model optimized by the simple Genetic Algorithm, which can effectively approximate a benchmark piecewise function and identify three nonlinear dynamical systems. The main contributions of this paper are listed as follows:
- (1)
- This paper proposes a simplified LWNN to identify nonlinear dynamical systems. In essence, each main neuron in the hidden layer is a linear combination of orthogonal explicit LW polynomial bases instead of the traditional non-polynomial activation functions, which can effectively decrease the number of learned weight coefficients and avoid numerical instability in the nonlinear dynamical system identification process;
- (2)
- The two control parameters of this network are optimized by the simple Genetic Algorithm. To be specific, the resolution level and order of LW bases are optimized to attain the optimal network structure, and the improved gradient descent algorithm is utilized to learn the network weight coefficients; this procedure is simpler than the initialization algorithms used by traditional WNN;
- (3)
- The essential attribute of the adaptive piece-wise polynomial approximation enables the proposed method to locally connect and share weights involving only a small subset of LW coefficients. This local process structure effectively decreases the training cost with the improved gradient descent algorithm;
- (4)
- Various LW bases with rich vanishing moments and regularities provide a strong approximation tool to thoroughly learn the complex characteristics shown by uncertainties, steps, ramps, and disturbances in a nonlinear dynamical system. In particular, the approximation error converges exponentially with the optimal resolution level and order of LW bases.
As demonstrated by the numerical experiment results in Section 4, the proposed method attains better identification accuracy than other complex neural networks, showing great potential for complex nonlinear system identification.
The remainder of this paper is organized as follows: Section 2 introduces the rich properties of LW bases, and the basic structure of LWNN is elaborately designed. Section 3 uses the simple Genetic Algorithm to optimize the order of the adopted LW bases and the resolution level. Then, the weight coefficients of the optimal LWNN structure are learned by the improved gradient descent algorithm. Finally, the detailed flowchart of the proposed method for identifying the nonlinear dynamic system is described. In Section 4, the performance evaluation measure of the experiments is described. Then, nonlinear dynamical system identification problems commonly used in the literature are implemented to verify the effectiveness of the proposed method. Finally, Section 5 gives some conclusions of this research and prospects for future work.
2. Legendre Multiwavelet Neural Network
In this section, the concept and properties of LW bases are first introduced. In the second step, the type of the nonlinear dynamical system identification is described. Finally, the basic structure of LWNN is specifically designed in this context.
2.1. Legendre Multiwavelet Bases
In this context, let $P_k(x)$ denote the Legendre polynomial of order k, which is defined by the three-term recurrence
$$P_0(x)=1,\quad P_1(x)=x,\quad (k+1)P_{k+1}(x)=(2k+1)\,x\,P_k(x)-k\,P_{k-1}(x). \qquad (1)$$
By using a simple transformation [42], the LW bases defined on the resolution level 0 can be obtained as follows:
$$\phi_k(x)=\sqrt{2k+1}\,P_k(2x-1),\qquad x\in[0,1],\quad k=0,1,\ldots,p-1, \qquad (2)$$
where the whole set $\{\phi_k\}_{k=0}^{p-1}$ forms an orthogonal base for the subspace $V_0^{p}\subset L^2([0,1])$.
Then, for the resolution level $n$ and the corresponding translation $j=0,1,\ldots,2^n-1$, define the interval $I_{n,j}=[2^{-n}j,\,2^{-n}(j+1)]$. Usually, the subspace $V_n^{p}$ is spanned by the $2^n p$ base functions, which are obtained from $\phi_k$ by dilation and translation, i.e.,
$$\phi_{j,k}^{n}(x)=2^{n/2}\,\phi_k(2^{n}x-j),\qquad x\in I_{n,j}, \qquad (3)$$
which form an orthogonal base for $V_n^{p}$.
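The constructions above can be checked numerically. The following is a minimal NumPy sketch (our illustration, not the authors' implementation) that evaluates the dilated and translated LW bases and verifies their orthonormality with a midpoint quadrature:

```python
import numpy as np

def legendre_poly(k, x):
    """Evaluate the Legendre polynomial P_k(x) via the three-term recurrence."""
    p_prev, p = np.ones_like(x), np.array(x, dtype=float)
    if k == 0:
        return p_prev
    for i in range(1, k):
        p_prev, p = p, ((2 * i + 1) * x * p - i * p_prev) / (i + 1)
    return p

def lw_basis(k, n, j, x):
    """LW base phi_{j,k}^n(x) = 2^{n/2} sqrt(2k+1) P_k(2(2^n x - j) - 1),
    compactly supported on the j-th subinterval of [0, 1] at resolution level n."""
    t = 2.0 ** n * x - j                        # map the subinterval onto [0, 1]
    inside = (t >= 0.0) & (t <= 1.0)            # compact support
    val = 2.0 ** (n / 2) * np.sqrt(2 * k + 1) * legendre_poly(k, 2.0 * t - 1.0)
    return np.where(inside, val, 0.0)

# Midpoint-rule check of orthonormality on level n = 2, translation j = 1
x = (np.arange(200000) + 0.5) / 200000.0
for k1 in range(3):
    for k2 in range(3):
        ip = np.mean(lw_basis(k1, 2, 1, x) * lw_basis(k2, 2, 1, x))
        assert abs(ip - (1.0 if k1 == k2 else 0.0)) < 1e-3
```

The compact support means each base interacts only with samples inside its own subinterval, which is what later permits the local connections in LWNN.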
2.2. Type of Nonlinear System Identification
In this subsection, the mathematical model of the nonlinear dynamical system identified in this paper is elaborately described as follows:
$$y(t+1)=f\big(y(t),y(t-1),\ldots,y(t-j+1),x(t)\big), \qquad (4)$$
where f is usually unknown and operates on the input x to generate the output y. Then, $y(t+1)$ denotes the one-step-ahead value of the nonlinear dynamical system, j denotes the order of the plant, and $x(t)$ represents the present input to the nonlinear dynamical system.
It is noted that the issue of nonlinear dynamical system identification is then to find an approximate form of f. In this paper, the proposed LWNN is used as an identifier of this nonlinear dynamical system, and the strict mathematical foundation is described as follows: any function $f\in L^2([0,1])$ can be approximated by LW bases in the form [40]
$$f(x)\approx \sum_{j=0}^{2^n-1}\sum_{k=0}^{p-1} c_{j,k}\,\phi_{j,k}^{n}(x), \qquad (5)$$
which demonstrates that the approximation error converges exponentially with the resolution level n and the order p of the LW bases; this is very beneficial for constructing a simpler network with lower computational complexity. Based on the calculation technique, the approximation coefficients can be computed by the integral approach as
$$c_{j,k}=\int_{0}^{1} f(x)\,\phi_{j,k}^{n}(x)\,dx. \qquad (6)$$
However, based on the data-driven intelligence method in this paper, the discrete data pairs of x and y obtained from the nonlinear dynamical system are utilized to train LWNN to adaptively attain the above approximation coefficients.
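For intuition, the integral coefficients described above can be approximated by quadrature rather than learned from data. The sketch below (our illustration; the test function is an arbitrary smooth example) shows the rapid error decay as the resolution level grows:

```python
import numpy as np
from numpy.polynomial import legendre as L

def lw_project(f, n, p, m=2000):
    """Approximate the coefficients c[j, k] = integral of f * phi_{j,k}^n over
    the j-th subinterval, using a midpoint rule (a numerical sketch)."""
    h = 2.0 ** (-n)
    t = (np.arange(m) + 0.5) / m                 # midpoints of the unit interval
    c = np.zeros((2 ** n, p))
    for j in range(2 ** n):
        x = h * (j + t)                          # samples on the j-th subinterval
        for k in range(p):
            phi = 2.0 ** (n / 2) * np.sqrt(2 * k + 1) * L.legval(2 * t - 1, np.eye(p)[k])
            c[j, k] = np.mean(f(x) * phi) * h    # midpoint quadrature of f * phi
    return c

def lw_eval(c, x):
    """Evaluate the truncated LW expansion sum_{j,k} c[j,k] * phi_{j,k}^n(x)."""
    n = int(np.log2(c.shape[0]))
    j = np.minimum((x * 2 ** n).astype(int), 2 ** n - 1)
    t = x * 2 ** n - j                           # local coordinate on [0, 1]
    y = np.zeros_like(x, dtype=float)
    for k in range(c.shape[1]):
        y += c[j, k] * 2.0 ** (n / 2) * np.sqrt(2 * k + 1) * L.legval(2 * t - 1, np.eye(c.shape[1])[k])
    return y

# The approximation error shrinks rapidly as the resolution level n grows.
f = lambda x: np.sin(6 * np.pi * x) * np.exp(-x)
x = np.linspace(0.0, 1.0, 5001)
errs = [np.max(np.abs(lw_eval(lw_project(f, n, p=4), x) - f(x))) for n in (1, 3, 5)]
```

Training LWNN on sampled data pairs, as done in this paper, targets the same coefficients without requiring f in closed form.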
2.3. Structure of the Legendre Multiwavelet Neural Network
In this subsection, the basic structure of LWNN is designed by combining NN with the rigorous approximation ability of LW bases according to (5). In contrast to the traditional NN, the activation functions in the hidden layer are replaced by LW bases. Then, according to the approximation form in (5), the constructed LWNN is a three-layer feed-forward neural network, which consists of the input layer, hidden layer, and output layer illustrated in Figure 1 in detail. The operations involved in each layer of LWNN are elaborately described as follows.
Input layer: The resolution level n and the order p of the LW bases are first initialized. Then, the input data vector is partitioned into $2^n$ subintervals according to the resolution level n. Second, the partitioned vector is fed into the neurons in the input layer. Finally, it is directly transmitted to the hidden layer.
Hidden layer: Each main neuron in this layer is essentially a linear combination of LW bases. Figure 2 shows the structure of the main neuron, which contains p orthogonal LW bases.
Figure 2.
The structure of the main neuron in the hidden layer of LWNN.
As demonstrated in Figure 1 and Figure 2, the input layer is locally connected with the hidden layer, and each main neuron shares the network weights.
Then, the output of the $j$-th main neuron in the hidden layer fed to the output layer is described as
$$u_j(x)=\sum_{k=0}^{p-1} w_{j,k}\,\phi_{j,k}^{n}(x). \qquad (7)$$
Output layer: The output of LWNN is calculated using the following form:
$$\hat{y}(x)=\sum_{j=0}^{2^n-1} u_j(x)=\sum_{j=0}^{2^n-1}\sum_{k=0}^{p-1} w_{j,k}\,\phi_{j,k}^{n}(x). \qquad (8)$$
In the output layer, the identification result of the nonlinear dynamical system is estimated by (8), which is conducive to the rapid convergence and provides better identification accuracy compared with the traditional NN, WNN, and FWNN methods.
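A compact forward-pass sketch of this three-layer structure can be written as follows (the class, names, and placeholder weights are our illustration, not the authors' code):

```python
import numpy as np
from numpy.polynomial import legendre as L

class LWNN:
    """Three-layer sketch: the input layer partitions x in [0, 1] into 2^n
    subintervals, each hidden main neuron is a linear combination of p LW bases
    on its own subinterval, and the output layer sums the main-neuron outputs."""

    def __init__(self, n, p):
        self.n, self.p = n, p
        self.w = np.zeros((2 ** n, p))          # weight coefficients to be learned

    def bases(self, t):
        """The p orthonormal LW bases evaluated at the local coordinate t."""
        return np.stack([2 ** (self.n / 2) * np.sqrt(2 * k + 1)
                         * L.legval(2 * t - 1, np.eye(self.p)[k])
                         for k in range(self.p)], axis=-1)

    def forward(self, x):
        # Input layer: partition x by the resolution level n
        j = np.minimum((x * 2 ** self.n).astype(int), 2 ** self.n - 1)
        t = x * 2 ** self.n - j                 # local coordinate on [0, 1]
        # Hidden layer: each main neuron responds only to samples in its subinterval
        u = self.w[j] * self.bases(t)
        # Output layer: sum of the main-neuron outputs, cf. (8)
        return u.sum(axis=1)

net = LWNN(n=3, p=2)
net.w[:, 0] = 1.0                               # hypothetical weights for illustration
y = net.forward(np.linspace(0.0, 1.0, 9))       # constant 2^{3/2} by the base scaling
```

Because each sample activates only one main neuron, a forward pass touches just p weights per sample, which is the local connection and weight sharing noted above.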
3. Optimal Algorithm of the Proposed Method
In this section, the two control parameters of the constructed LWNN are first optimized by the simple Genetic Algorithm. In the second step, the improved gradient descent algorithm is implemented to learn the weight coefficients of LWNN. Finally, the flowchart of the proposed method applied to the nonlinear dynamical system identification is elaborately described in Figure 3.
Figure 3.
The flowchart of the proposed method for nonlinear system identification.
3.1. Optimal Control Parameters of LWNN
The structural complexity of LWNN depends entirely on the resolution level and the order of LW bases, which are usually small positive integers, as validated by many engineering applications based on wavelet methods [42]. In this subsection, the simple Genetic Algorithm is reliably implemented to optimize the two control parameters of LWNN, aiming to attain the optimal network structure without relying on a large number of numerical experiments.
In this context, each chromosome is composed of two types of genes, representing the resolution level and the order of LW bases, respectively, which together denote a certain network structure of LWNN. More precisely, the devised chromosome in this paper includes three genes for the resolution level and four genes for the order of LW bases, which are elaborately described in Figure 3. Furthermore, the fitness function is adopted as the identification accuracy of the nonlinear dynamical system. It is noted that the identification accuracy is attained by using the following improved gradient descent algorithm to learn the weight coefficients of LWNN.
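The chromosome and the simple Genetic Algorithm can be sketched as follows. This is a minimal illustration, not the authors' implementation: binary genes and the +1 decoding offsets (n in 1..8, p in 1..16) are our assumptions, and a toy fitness stands in for the identification accuracy of a trained LWNN.

```python
import random

N_BITS, P_BITS = 3, 4        # 3 genes encode n and 4 genes encode p, as in Section 3.1

def decode(chrom):
    """Map the binary chromosome to (n, p); the offsets are assumed for illustration."""
    n = int("".join(map(str, chrom[:N_BITS])), 2) + 1
    p = int("".join(map(str, chrom[N_BITS:])), 2) + 1
    return n, p

def evolve(fitness, pop_size=20, generations=30, pc=0.8, pm=0.05):
    """Simple Genetic Algorithm: tournament selection, one-point crossover,
    bit-flip mutation, and elitist tracking of the best chromosome."""
    length = N_BITS + P_BITS
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    best = max(pop, key=lambda c: fitness(*decode(c)))
    for _ in range(generations):
        new = []
        while len(new) < pop_size:
            a, b = (max(random.sample(pop, 3), key=lambda c: fitness(*decode(c)))[:]
                    for _ in range(2))
            if random.random() < pc:             # one-point crossover
                cut = random.randrange(1, length)
                a[cut:], b[cut:] = b[cut:], a[cut:]
            for child in (a, b):                 # bit-flip mutation
                for i in range(length):
                    if random.random() < pm:
                        child[i] ^= 1
                new.append(child)
        pop = new[:pop_size]
        best = max(pop + [best], key=lambda c: fitness(*decode(c)))
    return decode(best)

# Toy fitness standing in for the identification accuracy; it peaks at n = 5, p = 3.
n_opt, p_opt = evolve(lambda n, p: -abs(n - 5) - abs(p - 3))
```

In the actual method, evaluating a chromosome means training LWNN with the decoded (n, p) by the improved gradient descent algorithm and returning the resulting accuracy.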
3.2. Learning Weight Coefficients of LWNN
Generally, a good training algorithm can decrease the training cost while achieving better identification accuracy. In the present study, the improved gradient descent algorithm is used to learn the weight coefficients of LWNN, which facilitates comparison with other methods for nonlinear dynamical system identification.
The traditional WNN learning process includes complex calculations of the dilation and translation parameters of the wavelet activation functions and the weight optimization of the neural network [43]. However, in this context, the structure of LWNN is optimized by the above simple Genetic Algorithm to enhance the ability of adaptively learning the optimal network structure. The improved gradient descent algorithm is implemented to learn the weight coefficients between the hidden layer and the output layer of LWNN. To be specific, the purpose of learning the weight coefficients is to minimize the loss function, which is described as follows:
$$E=\frac{1}{2}\sum_{i=1}^{M}\big(y_i-\hat{y}_i\big)^{2}, \qquad (9)$$
where $y_i$ and $\hat{y}_i$ denote the actual values and the output values of LWNN, respectively. The error E describes the difference between the actual values and the network output values.
Then, the partial derivative of the loss function with respect to the weight coefficients of LWNN is computed as
$$\frac{\partial E}{\partial w_{j,k}}=-\sum_{x_i\in I_{n,j}}\big(y_i-\hat{y}_i\big)\,\phi_{j,k}^{n}(x_i). \qquad (10)$$
In fact, this network is trained until the optimal weight vector of the LW bases is found by minimizing the above loss function through an iterative technique. To be specific, the above derivative of the loss function is calculated at each iteration step t. Then, the update of the weight coefficients is performed by the improved gradient descent algorithm as follows:
$$w_{j,k}(t+1)=w_{j,k}(t)-\frac{\eta}{M_j}\,\frac{\partial E}{\partial w_{j,k}}, \qquad (11)$$
where $\eta$ is the learning rate, which is constant, and $M_j$ is the number of sample points in the subinterval $I_{n,j}$; this enhances the learning speed because the learning step varies with the number of sample points.
The adjustment of the network weight coefficients ends when the loss function reaches a fixed lower bound or when the number of iterations reaches a fixed maximum. Accordingly, the optimal values of the resolution level n and the order p of LW bases for the structure of LWNN can be obtained by combining the simple Genetic Algorithm with this weight coefficient learning process.
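The weight-learning loop above can be sketched end to end as follows. This is a self-contained NumPy illustration, not the authors' code: the target function, the learning rate value, and the choice n = 4, p = 3 are our assumptions for demonstration.

```python
import numpy as np
from numpy.polynomial import legendre as L

def phi(k, t):
    """Orthonormal shifted Legendre base of order k on the local coordinate t."""
    return np.sqrt(2 * k + 1) * L.legval(2 * t - 1, np.eye(k + 1)[k])

def train_lwnn(x, y, n, p, eta=0.05, epochs=200):
    """Full-batch gradient descent on the LWNN weights with the per-subinterval
    step eta / M_j, where M_j counts the training samples in the j-th subinterval."""
    w = np.zeros((2 ** n, p))                               # weight coefficients
    j = np.minimum((x * 2 ** n).astype(int), 2 ** n - 1)    # subinterval index
    t = x * 2 ** n - j                                      # local coordinate
    Phi = np.stack([2 ** (n / 2) * phi(k, t) for k in range(p)], axis=1)
    for _ in range(epochs):
        e = y - np.sum(w[j] * Phi, axis=1)                  # sample errors y - y_hat
        for jj in range(2 ** n):
            mask = j == jj
            m_j = int(mask.sum())
            if m_j:                                         # dE/dw_jk = -sum e * phi
                w[jj] += (eta / m_j) * (e[mask, None] * Phi[mask]).sum(axis=0)
    return w, float(np.sqrt(np.mean(e ** 2)))

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 800)
y = np.where(x < 0.5, np.sin(4 * np.pi * x), 1.0 + 0.2 * x)  # step + smooth target
w, rmse = train_lwnn(x, y, n=4, p=3)
```

Because the subintervals are disjoint, each block update is an exact full-batch gradient step on its own small subset of weights, which is the local training-cost reduction noted in the contributions.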
3.3. The Flowchart of the Proposed Method
In essence, LWNN involves two important issues to be solved for nonlinear dynamical system identification in this paper. The first is to optimize the control parameters of the network structure using the simple Genetic Algorithm. The second is to optimize the connection weight coefficient vector between the hidden layer and the output layer of LWNN, which is learned by the improved gradient descent algorithm. It is very important to achieve the optimal configuration parameters for nonlinear dynamical system identification with fewer hidden nodes, a simpler structure, and better identification accuracy. The schematic of the proposed LWNN-GA method is demonstrated in Figure 3.
Accordingly, the optimal Algorithm 1 of the proposed method applied to the nonlinear dynamical system identification is demonstrated in detail as follows.
| Algorithm 1: LWNN-GA method for dynamical system identification |
Then, the above learning algorithm of the proposed method is implemented to identify the following nonlinear dynamical systems.
4. Numerical Experiments and Results Analysis
In this section, in order to verify the effectiveness and efficiency of the proposed method, four representative dynamical system identification examples are simulated and their identification results are analyzed. It is noted that the adopted examples include various complex features such as uncertainties, steps, nonlinearities, and ramps, which can be effectively recognized by the proposed LWNN-GA method.
Some researchers and scholars have proposed various methods to solve the system identification issues mentioned above. For effectiveness and clarity, the same performance measure, the root mean squared error (RMSE), is utilized to compare the proposed method with other existing methods. The RMSE calculates the difference between the estimated values and the actual sample data, and it is defined as follows:
$$\mathrm{RMSE}=\sqrt{\frac{1}{M}\sum_{i=1}^{M}\big(y_i-\hat{y}_i\big)^{2}}, \qquad (12)$$
where $y_i$ is the actual value of the sample data, $\hat{y}_i$ is the estimated output of LWNN at the $i$-th point, and M is the number of sample points.
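For reference, the RMSE measure used throughout the experiments reduces to a one-line helper (a trivial NumPy sketch of the definition above):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between actual sample values and LWNN estimates."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```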
Furthermore, the configuration parameters of the four examples simulated in this paper are elaborately described in Table 1 as follows
Table 1.
The detailed configuration parameters of the simulated experiments.
To avoid particularity and contingency, the samples are randomly selected from each of the nonlinear dynamical systems, and the specific training samples and testing samples for each example corresponding to different nonlinear dynamical systems are elaborately described in Table 1.
In addition, all numerical simulation experiments are conducted on a computer (Intel Core i5, 2.79 GHz, 8 GB RAM, Windows Vista) using MATLAB R2020b.
4.1. Example 1
In this simulation experiment, the benchmark piecewise function proposed by Zhang and Benveniste, Ganjefar and Tofighi, and Carrillo et al. [39] is used to compare the performance of the proposed LWNN-GA method with other existing methods. Specifically, the structure of LWNN is optimized by the simple Genetic Algorithm, and the weight coefficients are learned by the improved gradient descent algorithm, which are implemented to approximate the function described as follows.
It is noted that the samples of this experiment are composed of 200 input–output pairs uniformly distributed in the defined interval. Then, 100 training samples and 100 testing samples are randomly selected for this experiment, as shown in Table 1. To be specific, the data corresponding to the variable x are transformed into the interval $[0,1]$ according to (5) by a simple linear transformation; these are the inputs of the proposed LWNN model, and the expected outputs of this model are the corresponding function values. Furthermore, the identification accuracy of the proposed model also depends on the learning rate value. In this paper, the learning step term in (11) is $\eta/M_j$, where $M_j$ varies with the number of sample points in the subinterval; this variable learning step is devised to effectively enhance the convergence speed of the proposed model.
Finally, the approximation results for different resolution level n and the order p of LW bases are elaborately described in Table 2 as follows
Table 2.
Simulation results with different resolution levels and orders (Example 1).
As demonstrated in Table 2, the optimal control parameters of LWNN are attained at the resolution level $n=5$ and the order $p=3$ of LW bases by utilizing the simple Genetic Algorithm to optimize the structure of LWNN. Furthermore, the simulation results specifically show that the defined interval is partitioned into 32 subintervals, and three polynomials on each subinterval are utilized to approximate the function. The corresponding optimal RMSE value is reported in Table 2. Finally, Figure 4 demonstrates the strong approximation ability of LWNN, and Figure 5 describes the approximation error at the corresponding discrete data points of the function.
Figure 4.
Comparison of the actual values and the estimated values (Example 1).
Figure 5.
Error between the actual values and the estimated values (Example 1).
In Figure 4, the solid line and dotted line denote the actual values and estimated values, respectively. As shown in Figure 5, good results are achieved using the proposed method.
In addition, the RMSE iteration process of the proposed method is illustrated in Figure 6, and the learned network weight coefficients of the optimized structure are elaborately described in Figure 7, as follows.
Figure 6.
RMSE varies with the iterations (Example 1).

Figure 7.
Learned network weight coefficients using the proposed method (Example 1).
As shown in Figure 7, the low-order LW bases approximate the trend of the function, and the high-order LW base functions learn the detailed features. The learned weight coefficients of LWNN record the non-differentiable feature of the function on the 13th subinterval. In particular, the higher-order weight coefficients of LWNN demonstrate the step variation of the function.
Finally, other existing methods are compared to the proposed method, and Table 3 shows the comparison results.
Table 3.
The proposed method compared to other research results (Example 1).
As described in Table 3, the proposed method provides the highest approximation accuracy compared to other existing methods.
4.2. Example 2
In this experiment, a nonlinear dynamical system studied by different existing methods and recognition models [36,38,39,48] is identified using the proposed method in this article. The corresponding dynamical system is described by the following difference equation
As represented by (13), the current output of the system depends on the previous system outputs and the previous input. Correspondingly, the input of this dynamical system is described by
Then, the training and testing samples of the proposed method are composed of the control variable x and the current output values, which form the input–output sample pairs for this dynamical system identification. By substituting the control variable data x into (14), the control sample data are obtained, and they are substituted into the system (13) to generate 1000 training and testing samples. From the generated samples, 800 are randomly selected as training samples and the remaining ones are adopted as testing samples. Finally, the identification results of this nonlinear dynamical system using the proposed method with different resolution levels n and orders p of LW bases are shown in Table 4.
Table 4.
Identification results with different resolution levels and orders (Example 2).
As described in Table 4, the optimal structure of LWNN is attained at the resolution level and order of LW bases reported there, and the corresponding optimal RMSE value for the testing samples is also listed in Table 4. The good performance is further illustrated by comparing the actual values (solid line) with the estimated values (dashed line) in Figure 8 and Figure 9, respectively.
Figure 8.
Comparison of the actual values and the estimated values (Example 2).
Figure 9.
Error between the actual values and the estimated values (Example 2).
Finally, the RMSE iteration process of training LWNN to identify this dynamical system and the network weight coefficients learned by the improved gradient descent algorithm are shown in Figure 10 and Figure 11, respectively.
Figure 10.
Illustration of how RMSE varies with the iterations (Example 2).
Figure 11.
Learned network weight coefficients using the proposed method (Example 2).
As demonstrated in Figure 11, the network weight coefficients with different orders of LW bases illustrate the strong ability of LWNN to learn the essential features of this dynamical system. Specifically, LWNN has effectively learned the step features of this complex dynamical system, and the corresponding network weight coefficients show a sudden variation between positive and negative values at the step points. Similarly, the ramp feature of this dynamical system is captured by the network weight coefficients on the corresponding subintervals. Additionally, the comparison results with other existing methods are elaborately described in Table 5 as follows.
Table 5.
The proposed method compared to other existing results (Example 2).
From the obtained results in Table 5, the proposed method significantly enhances the identification accuracy because the main neurons in LWNN are represented by polynomials, which provides the highest accuracy with the optimal control parameters of the network structure.
4.3. Example 3
In this example, the nonlinear dynamical system mentioned in Refs. [47,57,58] is represented by the following difference equation
where the function f has the following form
The input of this dynamical system is described by
The input–output data pairs for this dynamical system identification are again composed of the control variable x and the current output of the dynamical system. Accordingly, 2000 samples are randomly generated by (15)–(17); 1800 samples are used as training data and the rest as testing data. The simulation results of system identification with different resolution levels n and orders p of LW bases are illustrated in Table 6.
Table 6.
Identification results with different resolution levels and orders (Example 3).
The optimal RMSE value of this dynamical system identification can be found in Table 6, attained at the optimal resolution level and order of LW bases. Additionally, Figure 12 and Figure 13 show that LWNN obtains a good response for this dynamical system.
Figure 12.
Comparison of the actual values and the estimated values (Example 3).
Figure 13.
Error between the actual values and the estimated values (Example 3).
Furthermore, the error iteration process of LWNN identification of this dynamical system and the network weight coefficients obtained are shown in Figure 14 and Figure 15, respectively.
Figure 14.
Illustration of how RMSE varies with the iterations (Example 3).

Figure 15.
Network weight coefficients with the optimal control parameters by LWNN (Example 3).
In Figure 15, the obtained network weight coefficients describe the ability of LWNN to learn the features of this dynamical system. Where the impulse signal presented by this dynamical system becomes more complicated, the Legendre multiwavelet bases of one order record the main trend characteristic of the system, while those of another order effectively learn this complex impulse-signal feature on the corresponding subintervals.
Table 7 shows the comparison results with the other methods.
Table 7.
The proposed method compared to other research results (Example 3).
From the obtained results in Table 7, LWNN can provide better accuracy in identifying this dynamical system.
4.4. Example 4
In this experiment, a second-order nonlinear dynamical system studied with various models [33,36,37,54] is identified using the proposed LWNN-GA method. The difference equation of this dynamical system and the function f are given by (15) and (16) in Example 3, and the corresponding control signal of this complex dynamical system is given by (14). Then, 2500 data samples randomly generated using (14)–(16) are applied to identify this second-order nonlinear dynamical system with LWNN; 2000 samples are used as training data and the remaining 500 as testing data. The simulation results of this second-order nonlinear system identification with different decomposition scales n and orders p of the Legendre multiwavelet basis are described in Table 8.
Table 8.
Simulation results with different resolution levels and orders (Example 4).
As described in Table 8, the optimal RMSE value is attained at the optimal resolution level and order of the LW bases. Furthermore, Figure 16 and Figure 17 demonstrate that the proposed method obtains a good response in identifying this complex dynamical system.
Figure 16.
Comparison of the actual values and the estimated values (Example 4).
Figure 17.
Error between the actual values and the estimated values (Example 4).
From the obtained results, it can be seen that LWNN has good performance for identification of this complex dynamical system. Then, the error iteration process of LWNN identification of this dynamical system and the network weight coefficients are shown in Figure 18 and Figure 19, respectively.
Figure 18.
Illustration of how RMSE varies with the iterations (Example 4).
Figure 19.
Network weight coefficients with the optimal control parameters of LWNN (Example 4).
From the variation of the network weight coefficients shown in Figure 19, the proposed method is able to thoroughly learn the complex features of this dynamical system, such as uncertainty, step, ramp, and nonlinear behavior. The main reason is that LW bases possess rich properties such as compact support, orthogonality, vanishing moments, and especially various regularities, which allow them to effectively approximate the nonlinear dynamical system. Specifically, the optimal structure of LWNN with a range of orders of LW bases effectively learns the main trend feature of this complex dynamical system, while the LW bases of the corresponding order still demonstrate the step and ramp characteristics at the relevant points; the learned network weight coefficients describe these trends on the corresponding subintervals.
Additionally, Table 9 shows the comparison results with the other methods.
Table 9.
Comparison of the proposed method with the existing methods (Example 4).
To summarize, across the above four examples, the identification accuracy shown in Table 2, Table 4, Table 6, Table 8, and Table 9 using the proposed method is basically consistent with the approximation error in (4). Furthermore, as described in Figure 6, Figure 10, Figure 14, and Figure 18, the proposed method effectively decreases the number of learning iterations by combining the simple Genetic Algorithm with the improved gradient descent algorithm in nonlinear dynamical system identification. Correspondingly, the learned network weight coefficients of LWNN with the optimal structure can describe the complex features of the dynamical systems, as shown in Figure 7, Figure 11, Figure 15, and Figure 19. Therefore, the proposed method demonstrates good performance and generalization ability for the identification of various complex dynamical systems.
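The overall optimization scheme summarized above, a Genetic Algorithm searching over the two control parameters (resolution level n, order p) with gradient descent training the weights inside each fitness evaluation, can be sketched as follows. This is a minimal illustration under stated assumptions: the toy target function, the parameter ranges, the crossover and mutation rates, and the piecewise-polynomial surrogate standing in for LWNN are all assumptions of this sketch, not the paper's exact implementation.

```python
import math
import random

def fit_rmse(n, p, xs, ys, lr=0.1, epochs=300):
    """Fitness: train a piecewise-polynomial model (2**n subintervals on
    [0, 1), order-p polynomial on each) by gradient descent; return RMSE."""
    m = 2 ** n
    w = [[0.0] * (p + 1) for _ in range(m)]

    def predict(x):
        i = min(int(x * m), m - 1)   # which subinterval x falls in
        t = x * m - i                # local coordinate in [0, 1)
        return sum(w[i][j] * t ** j for j in range(p + 1)), i, t

    for _ in range(epochs):
        for x, y in zip(xs, ys):
            y_hat, i, t = predict(x)
            err = y_hat - y
            for j in range(p + 1):   # gradient step on the local weights
                w[i][j] -= lr * err * t ** j
    se = sum((predict(x)[0] - y) ** 2 for x, y in zip(xs, ys))
    return math.sqrt(se / len(xs))

def ga_search(xs, ys, pop_size=6, gens=4, seed=1):
    """Simple GA over the two control parameters (n, p)."""
    rng = random.Random(seed)
    pop = [(rng.randint(0, 3), rng.randint(1, 4)) for _ in range(pop_size)]
    best = (float("inf"), pop[0])
    for _ in range(gens):
        scored = sorted((fit_rmse(n, p, xs, ys), (n, p)) for n, p in pop)
        best = min(best, scored[0])
        parents = [cand for _, cand in scored[: pop_size // 2]]
        pop = [best[1]]              # elitism: keep the best individual
        while len(pop) < pop_size:
            (n1, _), (_, p2) = rng.sample(parents, 2)
            n, p = n1, p2            # crossover: swap the two genes
            if rng.random() < 0.2:   # mutation: re-draw a gene
                n = rng.randint(0, 3)
            if rng.random() < 0.2:
                p = rng.randint(1, 4)
            pop.append((n, p))
    return best

# Illustrative target, not one of the paper's benchmark systems.
xs = [k / 32 for k in range(32)]
ys = [math.sin(2 * math.pi * x) for x in xs]
best_rmse, (best_n, best_p) = ga_search(xs, ys)
```

The design choice mirrors the paper's two-level scheme: the GA only has to explore a small discrete grid of (n, p) pairs, while gradient descent does the continuous weight learning inside each fitness call.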
5. Conclusions
In this paper, the two control parameters of the LWNN structure are optimized using the simple Genetic Algorithm, and the weight coefficients of the optimal network are learned using the improved gradient descent algorithm; the resulting models are effectively applied to identify four nonlinear dynamical systems. The simple Genetic Algorithm is utilized to find the appropriate LW bases in the hidden layer of LWNN, which avoids the complex calculation of the translation and dilation parameters of traditional WNNs. In addition, better identification accuracy of the nonlinear dynamical systems is obtained with the optimal network structure. In future research, the proposed method should be combined with the advantages of the RNN model, which pairs a feedforward network with a feedback loop, to devise more effective LWNNs with feedback. These enhanced models could then be applied to identify nonlinear dynamical systems in practical application fields. Additionally, the approach should be extended to construct multidimensional LWNNs and to study their implementation in various applications.
Author Contributions
Methodology, C.L.; Software, C.L.; Investigation, C.L.; Resources, X.Z.; Writing—original draft, X.Z. and C.L.; Writing—review & editing, Z.Y.; Visualization, S.L.; Supervision, X.Z.; Project administration, X.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This work is funded by the Fundamental and Advanced Research Project of Chongqing CSTC of China (project Nos. cstc2019jcyj-msxmX0386 and cstc2020jcyj-msxmX0232).
Data Availability Statement
The data presented in this study are available on request from the corresponding author. The data are not publicly available as they come from simulation experiments.
Acknowledgments
The authors are extremely grateful to the editor and referees for their valuable comments, which greatly improved the quality of this paper.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Jin, L.; Liu, Z.; Li, L. Prediction and identification of nonlinear dynamical systems using machine learning approaches. J. Ind. Inf. Integr. 2023, 35, 100503. [Google Scholar] [CrossRef]
- Cheng, A.; Low, Y.M. Improved generalization of NARX neural networks for enhanced metamodeling of nonlinear dynamic systems under stochastic excitations. Mech. Syst. Signal Process. 2023, 200, 110543. [Google Scholar] [CrossRef]
- Quaranta, G.; Lacarbonara, W.; Masri, S.F. A review on computational intelligence for identification of nonlinear dynamical systems. Nonlinear Dyn. 2020, 99, 1709–1761. [Google Scholar] [CrossRef]
- Truong, H.V.A.; Nguyen, M.H.; Tran, D.T.; Ahn, K.K. A novel adaptive neural network-based time-delayed estimation control for nonlinear systems subject to disturbances and unknown dynamics. ISA Trans. 2023, 142, 214–227. [Google Scholar] [CrossRef]
- Brewick, P.T.; Masri, S.F. An evaluation of data-driven identification strategies for complex nonlinear dynamic systems. Nonlinear Dyn. 2016, 85, 1297–1318. [Google Scholar] [CrossRef]
- Chen, H.; Liu, Z.; Alippi, C.; Huang, B.; Liu, D. Explainable intelligent fault diagnosis for nonlinear dynamic systems: From unsupervised to supervised learning. IEEE Trans. Neural. Netw. Learn. Syst. 2022, 1–14. [Google Scholar] [CrossRef] [PubMed]
- Chen, H.; Li, L.; Shang, C.; Huang, B. Fault detection for nonlinear dynamic systems with consideration of modeling errors: A data-driven approach. IEEE Trans. Cybern. 2022. [Google Scholar] [CrossRef] [PubMed]
- Chu, Y.; Fei, J.; Hou, S. Adaptive global sliding-mode control for dynamic systems using double hidden layer recurrent neural network structure. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 1297–1309. [Google Scholar] [CrossRef]
- Revay, M.; Wang, R.; Manchester, I.R. Recurrent equilibrium networks: Flexible dynamic models with guaranteed stability and robustness. IEEE Trans. Autom. Control. 2023. [Google Scholar] [CrossRef]
- Otto, S.E.; Rowley, C.W. Linearly recurrent autoencoder networks for learning dynamics. SIAM J. Appl. Dyn. Syst. 2019, 18, 558–593. [Google Scholar] [CrossRef]
- de Campos Souza, P.V. Fuzzy neural networks and neuro-fuzzy networks: A review the main techniques and applications used in the literature. Appl. Soft Comput. 2020, 92, 106275. [Google Scholar] [CrossRef]
- Fei, J.; Liu, L. Real-time nonlinear model predictive control of active power filter using self-feedback recurrent fuzzy neural network estimator. IEEE Trans. Ind. Electron. 2021, 69, 8366–8376. [Google Scholar] [CrossRef]
- Wu, X.; Han, H.; Liu, Z.; Qiao, J. Data-knowledge-based fuzzy neural network for nonlinear system identification. IEEE Trans. Fuzzy Syst. 2019, 28, 2209–2221. [Google Scholar] [CrossRef]
- Ribeiro, G.T.; Mariani, V.C.; dos Santos Coelho, L. Enhanced ensemble structures using wavelet neural networks applied to short-term load forecasting. Eng. Appl. Artif. Intell. 2019, 82, 272–281. [Google Scholar] [CrossRef]
- Jin, M.; Brake, M.R.; Song, H. Comparison of nonlinear system identification methods for free decay measurements with application to jointed structures. J. Sound Vib. 2019, 453, 268–293. [Google Scholar] [CrossRef]
- Luo, G.; Yang, Z.; Zhang, Q. Identification of autonomous nonlinear dynamical system based on discrete-time multiscale wavelet neural network. Neural Comput. Appl. 2021, 33, 15191–15203. [Google Scholar] [CrossRef]
- Hamedani, M.H.; Zekri, M.; Sheikholeslam, F.; Selvaggio, M.; Ficuciello, F.; Siciliano, B. Recurrent fuzzy wavelet neural network variable impedance control of robotic manipulators with fuzzy gain dynamic surface in an unknown varied environment. Fuzzy Sets Syst. 2021, 416, 1–26. [Google Scholar] [CrossRef]
- Sheikhlar, Z.; Hedayati, M.; Tafti, A.D.; Farahani, H.F. Fuzzy Elman Wavelet Network: Applications to function approximation, system identification, and power system control. Inf. Sci. 2022, 583, 306–331. [Google Scholar] [CrossRef]
- Kumar, R. Memory recurrent Elman neural network-based identification of time-delayed nonlinear dynamical system. IEEE Trans. Syst. Man Cybern. Syst. 2022, 53, 753–762. [Google Scholar] [CrossRef]
- Luo, S.; Lewis, F.L.; Song, Y.; Garrappa, R. Dynamical analysis and accelerated optimal stabilization of the fractional-order self-sustained electromechanical seismograph system with fuzzy wavelet neural network. Nonlinear Dyn. 2021, 104, 1389–1404. [Google Scholar] [CrossRef]
- Sharifi, A.; Sharafian, A.; Ai, Q. Adaptive MLP neural network controller for consensus tracking of Multi-Agent systems with application to synchronous generators. Expert Syst. Appl. 2021, 184, 115460. [Google Scholar] [CrossRef]
- Bukhari, A.H.; Raja, M.A.Z.; Rafiq, N.; Shoaib, M.; Kiani, A.K.; Shu, C.M. Design of intelligent computing networks for nonlinear chaotic fractional Rossler system. Chaos Solitons Fractals 2022, 157, 111985. [Google Scholar] [CrossRef]
- Fei, J.; Lu, C. Adaptive sliding mode control of dynamic systems using double loop recurrent neural network structure. IEEE Trans. Neural Netw.Learn. Syst. 2017, 29, 1275–1286. [Google Scholar] [CrossRef] [PubMed]
- Moradi Ghahderijani, M.; Castilla, M.; Momeneh, A.; Miret, J.; Garcia de Vicuna, L. Robust and Fast sliding-mode control for a DC–DC current-source parallel-resonant converter. IET Power Electron. 2018, 11, 262–271. [Google Scholar] [CrossRef]
- Zeng, W.; Li, M.; Yuan, C.; Wang, Q.; Liu, F.; Wang, Y. Identification of epileptic seizures in EEG signals using time-scale decomposition (ITD), discrete wavelet transform (DWT), phase space reconstruction (PSR) and neural networks. Artif. Intell. Rev. 2020, 53, 3059–3088. [Google Scholar] [CrossRef]
- Kumar, R.; Srivastava, S.; Gupta, J.; Mohindru, A. Comparative study of neural networks for dynamic nonlinear systems identification. Soft Comput. 2019, 23, 101–114. [Google Scholar] [CrossRef]
- Zhang, Q.; Benveniste, A. Wavelet networks. IEEE Trans. Neural Netw. 1992, 3, 889–898. [Google Scholar] [CrossRef]
- Alexandridis, A.K.; Zapranis, A.D. Wavelet neural networks: A practical guide. Neural Netw. 2013, 42, 1–27. [Google Scholar] [CrossRef]
- Guo, T.; Zhang, T.; Lim, E.; Lopez-Benitez, M.; Ma, F.; Yu, L. A review of wavelet analysis and its applications: Challenges and opportunities. IEEE Access 2022, 10, 58869–58903. [Google Scholar] [CrossRef]
- Ko, C.N. Identification of nonlinear systems with outliers using wavelet neural networks based on annealing dynamical learning algorithm. Eng. Appl. Artif. Intell. 2012, 25, 533–543. [Google Scholar] [CrossRef]
- Yoo, S.J.; Park, J.B.; Choi, Y.H. Indirect adaptive control of nonlinear dynamic systems using self recurrent wavelet neural networks via adaptive learning rates. Inf. Sci. 2007, 177, 3074–3098. [Google Scholar] [CrossRef]
- Zainuddin, Z.; Pauline, O. Modified wavelet neural network in function approximation and its application in prediction of time-series pollution data. Appl. Soft Comput. 2011, 11, 4866–4874. [Google Scholar] [CrossRef]
- Samanta, S.; Suresh, S.; Senthilnath, J.; Sundararajan, N. A new neuro-fuzzy inference system with dynamic neurons (nfis-dn) for system identification and time series forecasting. Appl. Soft Comput. 2019, 82, 105567. [Google Scholar] [CrossRef]
- Emami, S.A.; Roudbari, A. Identification of nonlinear time-varying systems using wavelet neural networks. Adv. Control Appl. Eng. Ind. Syst. 2020, 2, e59. [Google Scholar] [CrossRef]
- Davanipoor, M.; Zekri, M.; Sheikholeslam, F. Fuzzy wavelet neural network with an accelerated hybrid learning algorithm. IEEE Trans. Fuzzy Syst. 2011, 20, 463–470. [Google Scholar] [CrossRef]
- Abiyev, R.H.; Kaynak, O. Fuzzy wavelet neural networks for identification and control of dynamic plants—A novel structure and a comparative study. IEEE Trans. Ind. Electron. 2008, 55, 3133–3140. [Google Scholar] [CrossRef]
- Wang, H.; Luo, C.; Wang, X. Synchronization and identification of nonlinear systems by using a novel self-evolving interval type-2 fuzzy LSTM-neural network. Eng. Appl. Artif. Intell. 2019, 81, 79–93. [Google Scholar] [CrossRef]
- Cheng, R.; Bai, Y. A novel approach to fuzzy wavelet neural network modeling and optimization. Int. J. Electr. Power Energy Syst. 2015, 64, 671–678. [Google Scholar] [CrossRef]
- Ganjefar, S.; Tofighi, M. Single-hidden-layer fuzzy recurrent wavelet neural network: Applications to function approximation and system identification. Inf. Sci. 2015, 294, 269–285. [Google Scholar] [CrossRef]
- Loussifi, H.; Nouri, K.; Braiek, N.B. A new efficient hybrid intelligent method for nonlinear dynamical systems identification: The Wavelet Kernel Fuzzy Neural Network. Commun. Nonlinear Sci. Numer. Simul. 2016, 32, 10–30. [Google Scholar] [CrossRef]
- Alpert, B.K. A class of bases in L2 for the sparse representation of integral operators. SIAM J. Math. Anal. 1993, 24, 246–262. [Google Scholar] [CrossRef]
- Alpert, B.; Beylkin, G.; Gines, D.; Vozovoi, L. Adaptive solution of partial differential equations in multiwavelet bases. J. Comput. Phys. 2002, 182, 149–190. [Google Scholar] [CrossRef]
- Ling, S.H.; Iu, H.H.C.; Leung, F.H.F.; Chan, K.Y. Improved hybrid particle swarm optimized wavelet neural network for modeling the development of fluid dispensing for electronic packaging. IEEE Trans. Ind. Electron. 2008, 55, 3447–3460. [Google Scholar] [CrossRef]
- Safavi, A.; Romagnoli, J. Application of wavelet-based neural networks to the modelling and optimisation of an experimental distillation column. Eng. Appl. Artif. Intell. 1997, 10, 301–313. [Google Scholar] [CrossRef]
- Ebadat, A.; Noroozi, N.; Safavi, A.A.; Mousavi, S.H. New fuzzy wavelet network for modeling and control: The modeling approach. Commun. Nonlinear Sci. Numer. Simul. 2011, 16, 3385–3396. [Google Scholar] [CrossRef]
- Tzeng, S.T. Design of fuzzy wavelet neural networks using the GA approach for function approximation and system identification. Fuzzy Sets Syst. 2010, 161, 2585–2596. [Google Scholar] [CrossRef]
- Carrillo-Santos, C.; Seck-Tuoh-Mora, J.; Hernandez-Romero, N.; Ramos-Velasco, L. Wavenet identification of dynamical systems by a modified PSO algorithm. Eng. Appl. Artif. Intell. 2018, 73, 1–9. [Google Scholar] [CrossRef]
- Juang, C.F. A TSK-type recurrent fuzzy network for dynamic systems processing by neural network and genetic algorithms. IEEE Trans. Fuzzy Syst. 2002, 10, 155–170. [Google Scholar] [CrossRef]
- Sastry, P.; Santharam, G.; Unnikrishnan, K. Memory neuron networks for identification and control of dynamical systems. IEEE Trans. Neural Netw. 1994, 5, 306–319. [Google Scholar] [CrossRef]
- Gough, J. Asymptotic stochastic transformations for nonlinear quantum dynamical systems. Rep. Math. Phys. 1999, 44, 313–338. [Google Scholar] [CrossRef]
- Juang, C.F.; Lin, C.T. A recurrent self-organizing neural fuzzy inference network. IEEE Trans. Neural Netw. 1999, 10, 828–845. [Google Scholar] [CrossRef]
- Wang, J.S.; Chen, Y.P. A fully automated recurrent neural network for unknown dynamic system identification and control. IEEE Trans. Circuits Syst. I Regul. Pap. 2006, 53, 1363–1372. [Google Scholar] [CrossRef]
- Juang, C.F.; Chiou, C.T.; Lai, C.L. Hierarchical singleton-type recurrent neural fuzzy networks for noisy speech recognition. IEEE Trans. Neural Netw. 2007, 18, 833–843. [Google Scholar] [CrossRef] [PubMed]
- Yilmaz, S.; Oysal, Y. Fuzzy wavelet neural network models for prediction and identification of dynamical systems. IEEE Trans. Neural Netw. 2010, 21, 1599–1609. [Google Scholar] [CrossRef] [PubMed]
- Ko, C.N. WSVR-based fuzzy neural network with annealing robust algorithm for system identification. J. Frankl. Inst. 2012, 349, 1758–1780. [Google Scholar] [CrossRef]
- Zhao, H.; Gao, S.; He, Z.; Zeng, X.; Jin, W.; Li, T. Identification of nonlinear dynamic system using a novel recurrent wavelet neural network based on the pipelined architecture. IEEE Trans. Ind. Electron. 2013, 61, 4171–4182. [Google Scholar] [CrossRef]
- Kumpati, S.N.; Kannan, P. Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Netw. 1990, 1, 4–27. [Google Scholar]
- Ho, D.W.; Zhang, P.A.; Xu, J. Fuzzy wavelet networks for function learning. IEEE Trans. Fuzzy Syst. 2001, 9, 200–211. [Google Scholar] [CrossRef]
- Majhi, B.; Panda, G. Development of efficient identification scheme for nonlinear dynamic systems using swarm intelligence techniques. Expert Syst. Appl. 2010, 37, 556–566. [Google Scholar] [CrossRef]
- Abiyev, R.H.; Kaynak, O. Type 2 fuzzy neural structure for identification and control of time-varying plants. IEEE Trans. Ind. Electron. 2010, 57, 4147–4159. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
