Article

Nonlinear Hammerstein System Identification: A Novel Application of Marine Predator Optimization Using the Key Term Separation Technique

by Khizer Mehmood 1, Naveed Ishtiaq Chaudhary 2,*, Zeshan Aslam Khan 1, Khalid Mehmood Cheema 3, Muhammad Asif Zahoor Raja 2, Ahmad H. Milyani 4 and Abdullah Ahmed Azhari 5

1 Department of Electrical and Computer Engineering, International Islamic University Islamabad (IIUI), Islamabad 44000, Pakistan
2 Future Technology Research Center, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliou, Yunlin 64002, Taiwan
3 Department of Electronic Engineering, Fatima Jinnah Women University, Rawalpindi 46000, Pakistan
4 Department of Electrical and Computer Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
5 The Applied College, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(22), 4217; https://doi.org/10.3390/math10224217
Submission received: 3 October 2022 / Revised: 8 November 2022 / Accepted: 8 November 2022 / Published: 11 November 2022

Abstract
The mathematical modelling and optimization of nonlinear problems arising in diversified engineering applications is an area of great interest. The Hammerstein structure is widely used in the modelling of various nonlinear processes found in a range of applications. This study investigates the parameter optimization of the nonlinear Hammerstein model using the abilities of the marine predator algorithm (MPA) and the key term separation technique. MPA is a population-based metaheuristic inspired by the behavior of predators for catching prey, and utilizes Brownian/Levy movement for predicting the optimal interaction between predator and prey. A detailed analysis of MPA is conducted to verify the accurate and robust behavior of the optimization scheme for nonlinear Hammerstein model identification.

1. Introduction

The identification of nonlinear systems is considered a challenging task because of the inherent stiffness and complex system representation [1,2]. Modelling a nonlinear system through the block-oriented Hammerstein structure is a relatively simple approach, in which a static nonlinear block is followed by a linear dynamical subsystem [3,4]. The identification of Hammerstein models is of the utmost importance owing to their ability to model various nonlinear processes [5,6,7]. The pioneering work on nonlinear Hammerstein systems was presented in the 1960s by Narendra et al., who proposed an iterative identification algorithm [8]. Chang et al. proposed a non-iterative scheme [9]. Vörös presented a key term separation principle for the identification of Hammerstein models [10,11]. Ding et al. introduced hierarchical least squares [12,13] and hierarchical gradient descent algorithms [14] for the Hammerstein model. Chaudhary et al. introduced the concept of fractional gradient algorithms for Hammerstein systems by proposing normalized fractional least mean square (FLMS) [15], sign FLMS [16], multi-innovation FLMS [17], hierarchical quasi-fractional gradient descent [18], and fractional hierarchical gradient descent [19] algorithms.
Different system identification scenarios have multimodal error surfaces, and traditional gradient-descent-based approaches may converge to a sub-optimal solution [20,21]. System identification may instead be expressed as an optimization problem that can be solved through a stochastic search methodology such as evolutionary and swarm heuristics [20]. The research community has proposed several such population-based heuristics for Hammerstein model identification. For example, Raja et al. exploited genetic algorithms [22]; Mehmood et al. presented differential evolution [23], weighted differential evolution [24], and backtracking search heuristics [25]; and Altaf et al. presented adaptive evolutionary heuristics [26]. Swarm-based optimization heuristics for Hammerstein system identification include particle swarm optimization [27], the cuckoo search algorithm [28], the snake optimizer algorithm [29], and fractional swarming optimization heuristics [30].
Metaheuristics have been applied to a range of engineering optimization problems. They are mainly categorized as (a) swarm intelligence, (b) physics-based, or (c) evolutionary algorithms. Swarm-intelligence methods mimic the behavior of herds found in nature, physics-based methods are inspired by the laws of physics, and evolutionary methods are inspired by biological processes in nature. Significant methods from each of these domains are summarized in Table 1.
Among the various swarm-intelligence-based techniques, the marine predator algorithm (MPA) [59] has recently been proposed and applied to problems such as optimal power flow [60], economic load dispatch [61], wireless sensor networks [62], and path planning [63]. Wadood et al. [64] applied MPA to minimize breakdowns in an electrical power system. Lu et al. [65] applied MPA along with a convolutional neural network for the optimal detection of lung cancer. Hoang et al. [66] utilized MPA to optimize the hyperparameters of support vector machines for remote sensing. Yang et al. [67] applied a multi-strategy MPA to semi-supervised extreme learning machine (ELM) classification.
In this study, we explored MPA for effective parameter estimation of the Hammerstein structure. The detailed performance evaluation of the proposed scheme for Hammerstein identification was conducted for different noise conditions. The reliability of the proposed approach in comparison with the other recently introduced metaheuristics was established through detailed analyses based on multiple independent executions and statistical tests.
The remainder of the paper is outlined as follows: Section 2 describes the Hammerstein model structure. Section 3 presents the MPA methodology with a flow chart description. Section 4 provides the results of detailed simulations in terms of graphical and tabular representation. Finally, Section 5 concludes the study by presenting the main findings of the current investigation.

2. System Model

Consider the Hammerstein output error (HOE) model, whose output is written as

$$ y_m(t) = y(t) + \varepsilon(t), \tag{1} $$

where $y(t)$ is the noise-free output and $\varepsilon(t)$ is the additive white noise. The noise-free output of the HOE model is presented in (2):

$$ y(t) + \sum_{s=1}^{n_g} g_s\, y(t-s) = \sum_{s=0}^{n_h} h_s\, \bar{\mu}(t-s), \tag{2} $$

where $\bar{\mu}(t)$ is the output of the nonlinear block with basis functions $(k_1, k_2, \ldots, k_p)$ and corresponding parameters $o_1, o_2, \ldots, o_p$, represented as

$$ \bar{\mu}(t) = \sum_{l=1}^{p} o_l\, k_l(\mu(t)). $$

By choosing $\bar{\mu}(t)$ as the key term, applying the key term separation principle [18,19], and letting $h_0 = 1$, (2) can be expressed as

$$ \begin{aligned} y(t) &= -\sum_{s=1}^{n_g} g_s\, y(t-s) + \sum_{s=0}^{n_h} h_s\, \bar{\mu}(t-s) \\ &= -\sum_{s=1}^{n_g} g_s\, y(t-s) + h_0\, \bar{\mu}(t) + \sum_{s=1}^{n_h} h_s\, \bar{\mu}(t-s) \\ &= -\sum_{s=1}^{n_g} g_s\, y(t-s) + \sum_{s=1}^{n_h} h_s\, \bar{\mu}(t-s) + \sum_{l=1}^{p} o_l\, k_l(\mu(t)). \end{aligned} \tag{3} $$

The parameter vectors $\mathbf{g}$, $\mathbf{h}$, $\mathbf{o}$ and the information vectors $\boldsymbol{\delta}_g(t)$, $\boldsymbol{\delta}_h(t)$, $\mathbf{k}(t)$ are presented in (4) and (5), respectively:

$$ \mathbf{g} = [g_1 \;\; g_2 \;\; \cdots \;\; g_{n_g}]^T \in \mathbb{R}^{n_g}, \quad \mathbf{h} = [h_1 \;\; h_2 \;\; \cdots \;\; h_{n_h}]^T \in \mathbb{R}^{n_h}, \quad \mathbf{o} = [o_1 \;\; o_2 \;\; \cdots \;\; o_p]^T \in \mathbb{R}^{p}, \tag{4} $$

$$ \boldsymbol{\delta}_g(t) = [-y(t-1) \;\; \cdots \;\; -y(t-n_g)]^T \in \mathbb{R}^{n_g}, \quad \boldsymbol{\delta}_h(t) = [\bar{\mu}(t-1) \;\; \cdots \;\; \bar{\mu}(t-n_h)]^T \in \mathbb{R}^{n_h}, \quad \mathbf{k}(t) = [k_1(\mu(t)) \;\; \cdots \;\; k_p(\mu(t))]^T \in \mathbb{R}^{p}. \tag{5} $$

Then, (3) can be expressed as presented in (6):

$$ y(t) = \boldsymbol{\delta}_g^T(t)\, \mathbf{g} + \boldsymbol{\delta}_h^T(t)\, \mathbf{h} + \mathbf{k}^T(t)\, \mathbf{o}. \tag{6} $$
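For concreteness, the following minimal Python sketch (not the authors' code) simulates the noise-free key-term-separated output in (6); the function name and the polynomial basis $k_l(\mu) = \mu^l$, used later in the case studies, are illustrative choices.

```python
# A minimal sketch of the key-term-separated HOE output (6):
# y(t) = delta_g(t)^T g + delta_h(t)^T h + k(t)^T o.
import numpy as np

def simulate_hammerstein(mu, g, h, o):
    """Noise-free HOE output y(t) for an input sequence mu (1-D array)."""
    mu = np.asarray(mu, dtype=float)
    n_g, n_h = len(g), len(h)
    # Static nonlinear block: mu_bar(t) = sum_l o_l * mu(t)^l (the key term).
    mu_bar = sum(o[l] * mu ** (l + 1) for l in range(len(o)))
    y = np.zeros(len(mu))
    for t in range(len(mu)):
        # delta_g(t)^T g: past outputs enter with a minus sign, cf. (3) and (5).
        ar = -sum(g[s] * y[t - s - 1] for s in range(n_g) if t - s - 1 >= 0)
        # delta_h(t)^T h: delayed nonlinear-block outputs (h_0 = 1 is the key term).
        ma = sum(h[s] * mu_bar[t - s - 1] for s in range(n_h) if t - s - 1 >= 0)
        y[t] = ar + ma + mu_bar[t]
    return y
```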

3. Methodology

In this section, the MPA-based methodology for the parameter estimation of the Hammerstein model is presented.

3.1. Marine Predator Algorithm

The MPA is a population-based metaheuristic inspired by the behavior of predators when catching prey [59]. It utilizes Brownian and Levy movement to model the optimal interaction between predator and prey. Its mathematical model, pseudocode, and algorithm flowchart are presented below.

3.1.1. Formulation

MPA starts by initializing the population uniformly over the search space, as presented in (7):

$$ J_0 = J_{\min} + \mathrm{rand} \cdot (J_{\max} - J_{\min}). \tag{7} $$

Two matrices are also constructed: the elite matrix (EM), which stacks the position vector with the best fitness, and the prey matrix (PM), which holds the random positions proposed during initialization, as presented in (8) and (9):

$$ \mathrm{EM} = \begin{bmatrix} J^{I}_{1,1} & \cdots & J^{I}_{1,b} \\ \vdots & \ddots & \vdots \\ J^{I}_{N_p,1} & \cdots & J^{I}_{N_p,b} \end{bmatrix}_{N_p \times b}, \tag{8} $$

$$ \mathrm{PM} = \begin{bmatrix} J_{1,1} & \cdots & J_{1,b} \\ \vdots & \ddots & \vdots \\ J_{N_p,1} & \cdots & J_{N_p,b} \end{bmatrix}_{N_p \times b}, \tag{9} $$

where $J^{I}$ represents the top predator and $J_{a,b}$ is the $b$-th dimension of the $a$-th prey.
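As an illustration, the initialization step (7)–(9) could be realized as below; the helper names and the use of NumPy are assumptions, not the authors' implementation.

```python
# An illustrative realization of the initialization (7)-(9).
import numpy as np

rng = np.random.default_rng(0)

def initialize(Np, b, J_min, J_max):
    """Prey matrix PM: Np candidates spread uniformly over [J_min, J_max]^b."""
    return J_min + rng.random((Np, b)) * (J_max - J_min)   # Eqs. (7), (9)

def build_elite(PM, fit):
    """Elite matrix EM: the top predator (best candidate) replicated Np times."""
    return np.tile(PM[np.argmin(fit)], (PM.shape[0], 1))   # Eq. (8)
```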

3.1.2. Optimization

The optimization of MPA consists of three phases, as presented below.

Phase I

In this phase, the predator moves faster than the prey, using Brownian movement during the first third of the iterations. The prey matrix is updated as presented in (10) and (11):

$$ \mathrm{SS}_i = R_B \otimes \left( \mathrm{EM}_i - R_B \otimes \mathrm{PM}_i \right), \quad i = 1, 2, \ldots, N_p, \tag{10} $$

$$ \mathrm{PM}_i = \mathrm{PM}_i + \left( C \cdot R \otimes \mathrm{SS}_i \right), \tag{11} $$

where $R_B$ is a normally distributed vector representing the Brownian movement, $\otimes$ denotes entry-wise multiplication, $C = 0.5$, and $R$ is a vector of uniform random numbers in $[0, 1]$.
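A minimal sketch of the Phase-I update (10) and (11), with the products taken entry-wise as in the original MPA formulation [59], reusing the rng from the initialization sketch:

```python
# Phase I: Brownian exploration during the first third of the iterations.
def phase1(PM, EM, C=0.5):
    RB = rng.standard_normal(PM.shape)   # Brownian (normally distributed) steps
    R = rng.random(PM.shape)             # uniform random vector in [0, 1]
    SS = RB * (EM - RB * PM)             # Eq. (10)
    return PM + C * R * SS               # Eq. (11)
```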

Phase II

In this phase, both the prey and the predator move at the same pace, between one third and two thirds of the maximum iterations: the prey (first half of the population) moves with Levy movement, while the predator (second half) moves with Brownian movement. The updates are presented in (12)–(15):

$$ \mathrm{SS}_i = R_L \otimes \left( \mathrm{EM}_i - R_L \otimes \mathrm{PM}_i \right), \quad i = 1, 2, \ldots, N_p/2, \tag{12} $$

$$ \mathrm{PM}_i = \mathrm{PM}_i + \left( C \cdot R \otimes \mathrm{SS}_i \right), \tag{13} $$

$$ \mathrm{SS}_i = R_B \otimes \left( R_B \otimes \left( \mathrm{EM}_i - \mathrm{PM}_i \right) \right), \quad i = N_p/2 + 1, \ldots, N_p, \tag{14} $$

$$ \mathrm{PM}_i = \mathrm{EM}_i + \left( C \cdot \mathrm{DF} \otimes \mathrm{SS}_i \right), \tag{15} $$

where $R_L$ is a Levy-distributed vector and $\mathrm{DF} = \left( 1 - \frac{\mathrm{It}}{T} \right)^{2\,\mathrm{It}/T}$ is an adaptive factor that controls the step size of the predator movement.
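The Phase-II updates (12)–(15) could be sketched as follows; the Mantegna sampler for the Levy steps (with index 1.5) is a common choice but an assumption here, since the text does not specify a sampler.

```python
# Phase II: first half of the population takes Levy steps, second half Brownian.
import numpy as np
from math import gamma, pi, sin

def levy(shape, beta=1.5):
    """Levy-distributed steps via the Mantegna scheme (an assumed sampler)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.standard_normal(shape) * sigma
    v = rng.standard_normal(shape)
    return u / np.abs(v) ** (1 / beta)

def phase2(PM, EM, it, T, C=0.5):
    half = PM.shape[0] // 2
    # First half: prey explores with Levy steps, Eqs. (12)-(13).
    RL = levy((half, PM.shape[1]))
    R = rng.random((half, PM.shape[1]))
    PM[:half] += C * R * (RL * (EM[:half] - RL * PM[:half]))
    # Second half: predator moves with Brownian steps, Eqs. (14)-(15).
    RB = rng.standard_normal((PM.shape[0] - half, PM.shape[1]))
    DF = (1 - it / T) ** (2 * it / T)    # adaptive step-size factor
    PM[half:] = EM[half:] + C * DF * (RB * (RB * (EM[half:] - PM[half:])))
    return PM
```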

Phase III

In this phase, the predator moves faster than the prey, using Levy movement during the final third of the iterations, and the prey matrix is updated as presented in (16) and (17):

$$ \mathrm{SS}_i = R_L \otimes \left( R_L \otimes \left( \mathrm{EM}_i - \mathrm{PM}_i \right) \right), \quad i = 1, 2, \ldots, N_p, \tag{16} $$

$$ \mathrm{PM}_i = \mathrm{EM}_i + \left( C \cdot \mathrm{DF} \otimes \mathrm{SS}_i \right). \tag{17} $$
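A corresponding sketch of the Phase-III update (16) and (17), reusing the levy() helper defined in the Phase-II sketch:

```python
# Phase III: Levy-driven exploitation during the final third of the iterations.
def phase3(PM, EM, it, T, C=0.5):
    RL = levy(PM.shape)                  # assumes levy() from the Phase-II sketch
    DF = (1 - it / T) ** (2 * it / T)
    SS = RL * (RL * (EM - PM))           # Eq. (16)
    return EM + C * DF * SS              # Eq. (17)
```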

3.1.3. Fish Aggregating Devices (FADs) Effect

The fish aggregating devices (FADs) effect is added to MPA to simulate the natural behavior of fish, which spend 80% of their time in their immediate locations and 20% searching other spaces [59], as presented in (18) and (19):

$$ \mathrm{PM}_i = \begin{cases} \mathrm{PM}_i + \mathrm{DF} \left[ J_{\min} + R \otimes \left( J_{\max} - J_{\min} \right) \right] \otimes U, & \text{if } q \le \mathrm{FADs}, \quad (18) \\ \mathrm{PM}_i + \left[ \mathrm{FADs} (1 - q) + q \right] \left( \mathrm{PM}_{q_1} - \mathrm{PM}_{q_2} \right), & \text{if } q > \mathrm{FADs}, \quad (19) \end{cases} $$

where $\mathrm{FADs} = 0.2$, $U$ is a binary vector, $q$ is a random number in $[0, 1]$, $J_{\max}$ and $J_{\min}$ are the upper and lower bounds, respectively, and $q_1$ and $q_2$ are random indices into the prey matrix. The flowchart for MPA is shown in Figure 1.
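The FADs perturbation (18) and (19) might be implemented as below; drawing the binary mask U by thresholding uniform numbers at FADs follows the reference implementation of [59] and is an assumption here.

```python
# FADs effect with FADs = 0.2, as stated in the text.
import numpy as np

def fads_effect(PM, J_min, J_max, it, T, FADs=0.2):
    Np, b = PM.shape
    DF = (1 - it / T) ** (2 * it / T)
    q = rng.random()
    if q <= FADs:                        # Eq. (18): jump toward a random point
        U = (rng.random((Np, b)) < FADs).astype(float)
        return PM + DF * (J_min + rng.random((Np, b)) * (J_max - J_min)) * U
    # Eq. (19): shuffle-based perturbation between random prey pairs.
    q1, q2 = rng.permutation(Np), rng.permutation(Np)
    return PM + (FADs * (1 - q) + q) * (PM[q1] - PM[q2])
```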

4. Performance Analysis

This section presents a detailed performance analysis of MPA for the HOE model. HOE model identification was conducted over numerous noise levels, numbers of generations, and population sizes. The accuracy of the proposed scheme is measured by the fitness function presented in (20):

$$ \text{Fitness} = \frac{1}{J} \sum_{w=1}^{J} \left[ y_m(t_w) - \tilde{y}_m(t_w) \right]^2, \tag{20} $$

where $\tilde{y}_m$ is the response estimated through MPA and $y_m$ is the actual response of the HOE model.
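A one-line sketch of (20), the mean squared error between the measured and estimated responses over J samples:

```python
# Fitness (20): mean squared error between measured and estimated responses.
import numpy as np

def fitness(y_measured, y_estimated):
    return np.mean((np.asarray(y_measured) - np.asarray(y_estimated)) ** 2)
```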

4.1. Case Study 1

For the simulation study, we considered the following HOE model:

$$ \begin{gathered} y_m(t) = y(t) + \varepsilon(t), \\ y(t) + g_1 y(t-1) + g_2 y(t-2) = h_0 \bar{\mu}(t) + h_1 \bar{\mu}(t-1) + h_2 \bar{\mu}(t-2), \quad h_0 = 1, \end{gathered} \tag{21} $$

$$ \bar{\mu}(t) = o_1 k_1(\mu(t)) + o_2 k_2(\mu(t)) + o_3 k_3(\mu(t)) = o_1 \mu(t) + o_2 \mu^2(t) + o_3 \mu^3(t), \tag{22} $$

$$ \mathbf{g} = [g_1, g_2]^T = [-1.100,\; 0.900]^T, \tag{23} $$

$$ \mathbf{h} = [h_1, h_2]^T = [-0.800,\; -0.600]^T, \tag{24} $$

$$ \mathbf{o} = [o_1, o_2, o_3]^T = [-0.900,\; 0.600,\; 0.200]^T, \tag{25} $$

$$ \boldsymbol{\omega} = [g_1, g_2, h_1, h_2, o_1, o_2, o_3]^T = [-1.100,\; 0.900,\; -0.800,\; -0.600,\; -0.900,\; 0.600,\; 0.200]^T. \tag{26} $$
The input is taken as a zero-mean, unit-variance random signal with a data length of 25, and the number of generations is set to 500 and 1000. The performance of MPA is assessed by adding AWGN at five levels, i.e., [60 dB, 50 dB, 40 dB, 30 dB, 10 dB], to the HOE model for two variations of generations (T) [500, 1000] and populations (Np) [50, 100]; the fitness plots are demonstrated in Figure 2.
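Before examining the figures, the sketches from Sections 2 and 3 can be strung together into an illustrative estimation loop. The search bounds [−2, 2] are an assumption (the text does not report the bounds), the one-third phase schedule follows Section 3, and the memory-saving step of [59] is omitted for brevity.

```python
# An illustrative estimation loop assembling the earlier sketches
# (simulate_hammerstein, initialize, build_elite, phase1-3, fads_effect, fitness).
import numpy as np

def mpa_estimate(y_measured, mu, Np=50, T=500, J_min=-2.0, J_max=2.0):
    def evaluate(PM):
        # omega = [g1, g2, h1, h2, o1, o2, o3] has 7 entries in case study 1.
        return np.array([fitness(y_measured,
                                 simulate_hammerstein(mu, w[:2], w[2:4], w[4:]))
                         for w in PM])

    PM = initialize(Np, b=7, J_min=J_min, J_max=J_max)
    for it in range(T):
        EM = build_elite(PM, evaluate(PM))
        if it < T / 3:
            PM = phase1(PM, EM)
        elif it < 2 * T / 3:
            PM = phase2(PM, EM, it, T)
        else:
            PM = phase3(PM, EM, it, T)
        PM = fads_effect(PM, J_min, J_max, it, T)
        PM = np.clip(PM, J_min, J_max)       # keep candidates inside the bounds
    fit = evaluate(PM)
    return PM[np.argmin(fit)], fit.min()     # estimated omega and best fitness
```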
The fitness curves in Figure 2a correspond to the best fitness of the MPA algorithm for T = 500 generations, while Figure 2b corresponds to the same variation for T = 1000. It is noted from Figure 2a,b that the fitness of MPA for the two generation settings, i.e., [500, 1000], decreases with the increase in population size.
The performance of MPA is also evaluated in terms of the generation variations as demonstrated by the fitness curves in Figure 3.
The fitness curves in Figure 3a correspond to the best fitness of the MPA algorithm for population size Np = 50, whereas Figure 3b is related to the same variation for population Np = 100. It is noted from Figure 3a,b that the fitness of MPA for the two population sizes, i.e., [50, 100], is reduced significantly with the increase in generations.
The behavior of MPA for different noise variations is evaluated by fixing the population size [50, 100] and changing the generation size [500, 1000] for five values of noise levels [60 dB, 50 dB, 40 dB, 30 dB, 10 dB], and the fitness plots are presented in Figure 4.
It is noted from the fitness plots in Figure 4a–d that, for a fixed population size and number of generations, the fitness achieved by MPA at the lowest noise level, i.e., 60 dB, is considerably lower than the fitness at higher noise levels, i.e., [50 dB, 40 dB, 30 dB, 10 dB]. Moreover, MPA achieves its minimum fitness value for the smallest noise level, i.e., 60 dB, with Np = 100 and T = 1000. The curves in Figure 4 therefore confirm that the performance of MPA degrades at higher noise levels.
To further investigate MPA, it is compared with other metaheuristics, namely prairie dog optimization (PDO) [68], the sine cosine algorithm (SCA) [69], and the whale optimization algorithm (WOA) [70], for Np = 100 and T = 1000 at noise levels of [60 dB, 50 dB, 40 dB, 30 dB, 10 dB]; the results are presented in Figure 5.
Figure 5a–e demonstrates the convergence of MPA with SCA, WOA, and PDO at Np = 100 and T = 1000 over all of the noise levels, i.e., [60 dB, 50 dB, 40 dB, 30 dB, 10 dB]. It is noted from Figure 5a–e that with the increase in noise level, the fitness value also increases. However, MPA achieves the lowest fitness compared with SCA, WOA, and PDO for all of the noise levels.
MPA is further evaluated statistically against PDO, WOA, and SCA at Np = 100 and T = 1000 for 20 independent runs. Figure 6 shows the fitness value of MPA, SCA, WOA, and PDO on run#1, run#10, and run#20 for all of the noise levels.
It is noted from Figure 6a–e that MPA achieves the lowest fitness compared with SCA, WOA, and PDO for different independent runs for all of the noise levels. Moreover, it is also observed that fitness increases with the increase in noise levels for all of the methods. Further statistical analysis on all of the independent runs is demonstrated in Figure 7.
It is noted from Figure 7a–e that MPA outperforms SCA, WOA, and PDO for noise levels of [60 dB, 50 dB, 40 dB, 30 dB, 10 dB] in all of the independent runs. Moreover, it is also observed that the fitness values increase at high noise levels for all of the methods.
Table 2, Table 3, Table 4, Table 5 and Table 6 show the performance of all of the algorithms in terms of the average fitness, best fitness, and estimated weights for the [60 dB, 50 dB, 40 dB, 30 dB, 10 dB] noise levels. It is noted that at lower noise levels, i.e., 60 dB, MPA gives better results than at higher noise levels. Moreover, at a low noise level, the estimated weights are closer to the true values. It is also observed from Table 2 that the best fitness achieved for a noise level of 60 dB is 2.6 × 10⁻⁷. Similarly, the best fitness values for noise levels of [50 dB, 40 dB, 30 dB, 10 dB], provided in Table 3, Table 4, Table 5 and Table 6, are [2.6 × 10⁻⁶, 2.9 × 10⁻⁵, 2.4 × 10⁻⁴, 2.6 × 10⁻²], respectively. Therefore, Table 2, Table 3, Table 4, Table 5 and Table 6 validate that the fitness of MPA increases with the noise level and decreases with increasing population size and number of generations.

4.2. Case Study 2

The performance of the MPA is also considered for another Hammerstein model based on the autoregressive exogenous input structure, i.e., H-ARX [13,14], with the same model parameters as considered in case study 1. The H-ARX model is mathematically defined as
$$ y(t) = \frac{H(q)}{G(q)}\, \bar{\mu}(t) + \frac{1}{G(q)}\, \varepsilon(t), \tag{27} $$

where $G(q)$ and $H(q)$ are polynomials in the shift operator $q$. Rearranging (27) gives the following:

$$ \begin{gathered} G(q)\, y(t) = H(q)\, \bar{\mu}(t) + \varepsilon(t), \\ y(t) = -g_1 y(t-1) - g_2 y(t-2) + h_0 \bar{\mu}(t) + h_1 \bar{\mu}(t-1) + h_2 \bar{\mu}(t-2) + \varepsilon(t), \quad h_0 = 1, \end{gathered} \tag{28} $$

$$ \bar{\mu}(t) = o_1 k_1(\mu(t)) + o_2 k_2(\mu(t)) + o_3 k_3(\mu(t)) = o_1 \mu(t) + o_2 \mu^2(t) + o_3 \mu^3(t), \tag{29} $$

$$ \boldsymbol{\omega} = [g_1, g_2, h_1, h_2, o_1, o_2, o_3]^T = [-1.100,\; 0.900,\; -0.800,\; -0.600,\; -0.900,\; 0.600,\; 0.200]^T. \tag{30} $$
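A minimal sketch of the H-ARX recursion (28) and (29) under additive white Gaussian noise with constant variance, as used in this case study; this is illustrative, not the authors' code.

```python
# H-ARX simulation: y(t) = -g1 y(t-1) - g2 y(t-2) + mu_bar(t)
#                          + h1 mu_bar(t-1) + h2 mu_bar(t-2) + e(t),  h0 = 1.
import numpy as np

def simulate_harx(mu, g, h, o, noise_var, rng=np.random.default_rng(1)):
    mu = np.asarray(mu, dtype=float)
    mu_bar = o[0] * mu + o[1] * mu ** 2 + o[2] * mu ** 3   # Eq. (29)
    e = rng.normal(0.0, np.sqrt(noise_var), len(mu))       # constant-variance AWGN
    y = np.zeros(len(mu))
    for t in range(len(mu)):                               # Eq. (28)
        y[t] = (-sum(g[s] * y[t - s - 1] for s in range(2) if t - s - 1 >= 0)
                + mu_bar[t]
                + sum(h[s] * mu_bar[t - s - 1] for s in range(2) if t - s - 1 >= 0)
                + e[t])
    return y
```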
The input data are generated in the same way as in case study 1, while the noise in the ARX model is taken as white Gaussian with constant variance, i.e., [0.01, 0.05, 0.10]. Two variations of generations (T) [200, 500] and a population size of Np = 50 are considered. The fitness for case study 2 is calculated through (20) using $y$ instead of $y_m$; the fitness plots are demonstrated in Figure 8.
The fitness curves in Figure 8 correspond to the best fitness of the MPA algorithm for population size Np = 50. It is noted from Figure 8 that the fitness of MPA reduces significantly with the increase in generations.
The behavior of MPA for different noise variations is evaluated at population size Np = 50 and changing the generation size [200, 500] for three values of noise levels [0.01,0.05,0.10], and the fitness plots are presented in Figure 9.
It is noted from the fitness plots in Figure 9a,b that, for a fixed population size and number of generations, the fitness achieved by MPA at the lowest noise level, i.e., 0.01, is considerably lower than the fitness at higher noise levels, i.e., [0.05, 0.10]. Moreover, MPA achieves its minimum fitness value for the smallest noise level, i.e., 0.01, with Np = 50 and T = 500. The curves in Figure 9 therefore confirm that the performance of MPA degrades at higher noise levels.
As in case study 1, MPA is further compared with prairie dog optimization (PDO) [68], the sine cosine algorithm (SCA) [69], and the whale optimization algorithm (WOA) [70] for Np = 50 and T = 500 at noise levels of [0.01, 0.05, 0.10]; the results are presented in Figure 10.
Figure 10a–c demonstrates the convergence of MPA with SCA, WOA, and PDO at Np = 50 and T = 500 over all the noise levels, i.e., [0.01,0.05,0.10]. It is noted from Figure 10a–c that with the increase in noise level, the fitness value also increases. However, MPA achieves the lowest fitness compared with SCA, WOA, and PDO for all of the noise levels.
MPA is further evaluated statistically against PDO, WOA, and SCA at Np = 50 and T = 500 for 20 independent runs. Figure 11 shows the fitness value of MPA, SCA, WOA, and PDO on run#1, run#10, and run#20 for all of the noise levels.
It is noted from Figure 11a–c that MPA achieves the lowest fitness compared with SCA, WOA, and PDO across different independent runs for all noise levels. Moreover, it is also observed that the fitness increases with the noise level for all of the methods. Further statistical analysis on all independent runs is demonstrated in Figure 12.
It is noted from Figure 12a–c that MPA outperforms SCA, WOA, and PDO for noise levels of [0.01, 0.05, 0.10] in all of the independent runs. Moreover, it is also observed that the fitness values increase at high noise levels for all of the methods.
Table 7, Table 8 and Table 9 show the performance of all of the algorithms in terms of the average fitness, best fitness, and estimated weights for the [0.01, 0.05, 0.10] noise levels at Np = 50. It is noted that at lower noise levels, i.e., 0.01, MPA gives better results than at higher noise levels. Moreover, at a low noise level, the estimated weights are closer to the true values. It is also observed from Table 7 that the best fitness achieved for a noise level of 0.01 is 5.2 × 10⁻⁵. Similarly, the best fitness values for noise levels of [0.05, 0.10], provided in Table 8 and Table 9, are [1.3 × 10⁻³, 5.3 × 10⁻³], respectively. Therefore, Table 7, Table 8 and Table 9 validate that the fitness of MPA increases with the noise level and decreases with an increasing number of generations.
The results of the detailed simulations imply that the MPA-based swarming optimization heuristic efficiently approximates the parameters of Hammerstein systems.

5. Conclusions

The main findings of the investigation are as follows:
The parameter optimization of the nonlinear mathematical model represented through the Hammerstein structure is studied using the marine predator algorithm (MPA) and the key term separation principle. The mathematical model and working principle of MPA are inspired by the behavior of predators when catching prey. The optimal interaction between the predator and prey is modelled using Brownian and Levy movement. Key term separation is a technique used to avoid redundancy in the estimation of the Hammerstein model parameters, thus making MPA more efficient. The accuracy and robustness of the MPA-based optimization scheme with the key term separation technique is established for HOE and H-ARX estimation through simulation results for different noise scenarios.
Future work may extend the application of MPA to optimize the muscle fatigue systems [71,72] and solve fractional order problems [73,74,75].

Author Contributions

Methodology, K.M.; visualization, Z.A.K.; formal analysis, Z.A.K., N.I.C. and M.A.Z.R.; writing—original draft preparation, K.M.; writing—review and editing, N.I.C., Z.A.K. and M.A.Z.R.; project administration, K.M.C., A.A.A. and A.H.M.; funding acquisition, K.M.C., A.A.A. and A.H.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

M.A.Z. Raja would like to acknowledge the support of the National Science and Technology Council (NSTC), Taiwan, under grant no. NSTC 111-2221-E-224-043.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schoukens, J.; Ljung, L. Nonlinear System Identification: A User-Oriented Road Map. IEEE Control Syst. 2019, 39, 28–99. [Google Scholar] [CrossRef]
  2. Yukai, S.; Chao, Y.; Zhigang, W.; Liuyue, B. Nonlinear System Identification of an All Movable Fin with Rotational Freeplay by Subspace-Based Method. Appl. Sci. 2020, 10, 1205. [Google Scholar] [CrossRef] [Green Version]
  3. Giri, F.; Bai, E.W. (Eds.) Block-Oriented Nonlinear System Identification; Springer: Berlin/Heidelberg, Germany, 2010; Volume 1. [Google Scholar]
  4. Billings, S.A. Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains; John Wiley & Sons: New York, NY, USA, 2013. [Google Scholar]
  5. Tissaoui, K. Forecasting implied volatility risk indexes: International evidence using Hammerstein-ARX approach. Int. Rev. Financial Anal. 2019, 64, 232–249. [Google Scholar] [CrossRef]
  6. Bai, Y.-T.; Wang, X.-Y.; Jin, X.-B.; Zhao, Z.-Y.; Zhang, B.-H. A Neuron-Based Kalman Filter with Nonlinear Autoregressive Model. Sensors 2020, 20, 299. [Google Scholar] [CrossRef] [Green Version]
  7. Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice Hall: Hoboken, NJ, USA, 1987. [Google Scholar]
  8. Narendra, K.; Gallman, P. An iterative method for the identification of nonlinear systems using a Hammerstein model. IEEE Trans. Autom. Control 1966, 11, 546–550. [Google Scholar] [CrossRef]
  9. Chang, F.; Luus, R. A noniterative method for identification using Hammerstein model. IEEE Trans. Autom. Control 1971, 16, 464–468. [Google Scholar] [CrossRef]
  10. Vörös, J. Parameter identification of discontinuous Hammerstein systems. Automatica 1997, 33, 1141–1146. [Google Scholar] [CrossRef]
  11. Vörös, J. Recursive identification of Hammerstein systems with discontinuous nonlinearities containing dead-zones. IEEE Trans. Autom. Control 2003, 48, 2203–2206. [Google Scholar] [CrossRef]
  12. Chen, H.; Ding, F. Hierarchical Least Squares Identification for Hammerstein Nonlinear Controlled Autoregressive Systems. Circuits Syst. Signal Process. 2014, 34, 61–75. [Google Scholar] [CrossRef]
  13. Ding, F.; Chen, H.; Xu, L.; Dai, J.; Li, Q.; Hayat, T. A hierarchical least squares identification algorithm for Hammerstein nonlinear systems using the key term separation. J. Frankl. Inst. 2018, 355, 3737–3752. [Google Scholar] [CrossRef]
  14. Chen, H.; Xiao, Y.; Ding, F. Hierarchical gradient parameter estimation algorithm for Hammerstein nonlinear systems using the key term separation principle. Appl. Math. Comput. 2014, 247, 1202–1210. [Google Scholar] [CrossRef]
  15. Chaudhary, N.I.; Khan, Z.A.; Zubair, S.; Raja, M.A.Z.; Dedovic, N. Normalized fractional adaptive methods for nonlinear control autoregressive systems. Appl. Math. Model. 2018, 66, 457–471. [Google Scholar] [CrossRef]
  16. Chaudhary, N.I.; Aslam, M.S.; Baleanu, D.; Raja, M.A.Z. Design of sign fractional optimization paradigms for parameter estimation of nonlinear Hammerstein systems. Neural Comput. Appl. 2019, 32, 8381–8399. [Google Scholar] [CrossRef]
  17. Chaudhary, N.I.; Raja, M.A.Z.; He, Y.; Khan, Z.A.; Machado, J.T. Design of multi innovation fractional LMS algorithm for parameter estimation of input nonlinear control autoregressive systems. Appl. Math. Model. 2021, 93, 412–425. [Google Scholar] [CrossRef]
  18. Chaudhary, N.I.; Raja, M.A.Z.; Khan, Z.A.; Cheema, K.M.; Milyani, A.H. Hierarchical Quasi-Fractional Gradient Descent Method for Parameter Estimation of Nonlinear ARX Systems Using Key Term Separation Principle. Mathematics 2021, 9, 3302. [Google Scholar] [CrossRef]
  19. Chaudhary, N.I.; Raja, M.A.Z.; Khan, Z.A.; Mehmood, A.; Shah, S.M. Design of fractional hierarchical gradient descent algorithm for parameter estimation of nonlinear control autoregressive systems. Chaos Solitons Fractals 2022, 157, 111913. [Google Scholar] [CrossRef]
  20. Gotmare, A.; Bhattacharjee, S.S.; Patidar, R.; George, N.V. Swarm and evolutionary computing algorithms for system identification and filter design: A comprehensive review. Swarm Evol. Comput. 2017, 32, 68–84. [Google Scholar] [CrossRef]
  21. Jui, J.J.; Ahmad, M.A. A hybrid metaheuristic algorithm for identification of continuous-time Hammerstein systems. Appl. Math. Model. 2021, 95, 339–360. [Google Scholar] [CrossRef]
  22. Raja, M.A.Z.; Shah, A.A.; Mehmood, A.; Chaudhary, N.I.; Aslam, M.S. Bio-inspired computational heuristics for parameter estimation of nonlinear Hammerstein controlled autoregressive system. Neural Comput. Appl. 2018, 29, 1455–1474. [Google Scholar] [CrossRef]
  23. Mehmood, A.; Aslam, M.S.; Chaudhary, N.I.; Zameer, A.; Raja, M.A.Z. Parameter estimation for Hammerstein control autoregressive systems using differential evolution. Signal Image Video Process. 2018, 12, 1603–1610. [Google Scholar] [CrossRef]
  24. Mehmood, A.; Raja, M.A.Z.; Shi, P.; Chaudhary, N.I. Weighted differential evolution-based heuristic computing for identification of Hammerstein systems in electrically stimulated muscle modeling. Soft Comput. 2022, 26, 8929–8945. [Google Scholar] [CrossRef]
  25. Mehmood, A.; Zameer, A.; Chaudhary, N.I.; Raja, M.A.Z. Backtracking search heuristics for identification of electrical muscle stimulation models using Hammerstein structure. Appl. Soft Comput. 2019, 84, 105705. [Google Scholar] [CrossRef]
  26. Altaf, F.; Chang, C.-L.; Chaudhary, N.I.; Raja, M.A.Z.; Cheema, K.M.; Shu, C.-M.; Milyani, A.H. Adaptive Evolutionary Computation for Nonlinear Hammerstein Control Autoregressive Systems with Key Term Separation Principle. Mathematics 2022, 10, 1001. [Google Scholar] [CrossRef]
  27. Nanda, S.J.; Panda, G.; Majhi, B. Improved identification of Hammerstein plants using new CPSO and IPSO algorithms. Expert Syst. Appl. 2010, 37, 6818–6831. [Google Scholar] [CrossRef]
  28. Gotmare, A.; Patidar, R.; George, N.V. Nonlinear system identification using a cuckoo search optimized adaptive Hammerstein model. Expert Syst. Appl. 2015, 42, 2538–2546. [Google Scholar] [CrossRef]
  29. Janjanam, L.; Saha, S.K.; Kar, R. Optimal Design of Hammerstein Cubic Spline Filter for Non-Linear System Modelling Based on Snake Optimiser. IEEE Trans. Ind. Electron. 2022. [Google Scholar] [CrossRef]
  30. Altaf, F.; Chang, C.-L.; Chaudhary, N.I.; Cheema, K.M.; Raja, M.A.Z.; Shu, C.-M.; Milyani, A.H. Novel Fractional Swarming with Key Term Separation for Input Nonlinear Control Autoregressive Systems. Fractal Fract. 2022, 6, 348. [Google Scholar] [CrossRef]
  31. Zheng, J.; Li, K.; Zhang, X. Wi-Fi Fingerprint-Based Indoor Localization Method via Standard Particle Swarm Optimization. Sensors 2022, 22, 5051. [Google Scholar] [CrossRef]
  32. Jiang, L.; Tajima, Y.; Wu, L. Application of Particle Swarm Optimization for Auto-Tuning of the Urban Flood Model. Water 2022, 14, 2819. [Google Scholar] [CrossRef]
  33. Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Cheema, K.M.; Raja, M.A.Z.; Milyani, A.H.; Azhari, A.A. Dwarf Mongoose Optimization Metaheuristics for Autoregressive Exogenous Model Identification. Mathematics 2022, 10, 3821. [Google Scholar] [CrossRef]
  34. Alissa, K.A.; Elkamchouchi, D.H.; Tarmissi, K.; Yafoz, A.; Alsini, R.; Alghushairy, O.; Mohamed, A.; Al Duhayyim, M. Dwarf Mongoose Optimization with Machine-Learning-Driven Ransomware Detection in Internet of Things Environment. Appl. Sci. 2022, 12, 9513. [Google Scholar] [CrossRef]
  35. Ji, H.; Hu, H.; Peng, X. Multi-Underwater Gliders Coverage Path Planning Based on Ant Colony Optimization. Electronics 2022, 11, 3021. [Google Scholar] [CrossRef]
  36. Liu, Q.; Zhu, S.; Chen, M.; Liu, W. Detecting Dynamic Communities in Vehicle Movements Using Ant Colony Optimization. Appl. Sci. 2022, 12, 7608. [Google Scholar] [CrossRef]
  37. Al-Shammaa, A.A.; MAbdurraqeeb, A.; Noman, A.M.; Alkuhayli, A.; Farh, H.M. Hardware-In-the-Loop Validation of Direct MPPT Based Cuckoo Search Optimization for Partially Shaded Photovoltaic System. Electronics 2022, 11, 1655. [Google Scholar] [CrossRef]
  38. Hameed, K.; Khan, W.; Abdalla, Y.S.; Al-Harbi, F.F.; Armghan, A.; Asif, M.; Qamar, M.S.; Ali, F.; Miah, S.; Alibakhshikenari, M.; et al. Far-Field DOA Estimation of Uncorrelated RADAR Signals through Coprime Arrays in Low SNR Regime by Implementing Cuckoo Search Algorithm. Electronics 2022, 11, 558. [Google Scholar] [CrossRef]
  39. Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Raja, M.A.Z.; Cheema, K.M.; Milyani, A.H. Design of Aquila Optimization Heuristic for Identification of Control Autoregressive Systems. Mathematics 2022, 10, 1749. [Google Scholar] [CrossRef]
  40. Lee, J.G.; Chim, S.; Park, H.H. Energy-efficient cluster-head selection for wireless sensor networks using sampling-based spider monkey optimization. Sensors 2019, 19, 5281. [Google Scholar] [CrossRef] [Green Version]
  41. He, F.; Ye, Q. A Bearing Fault Diagnosis Method Based on Wavelet Packet Transform and Convolutional Neural Network Optimized by Simulated Annealing Algorithm. Sensors 2022, 22, 1410. [Google Scholar] [CrossRef]
  42. Xiao, S.; Tan, X.; Wang, J. A Simulated Annealing Algorithm and Grid Map-Based UAV Coverage Path Planning Method for 3D Reconstruction. Electronics 2021, 10, 853. [Google Scholar] [CrossRef]
  43. Thiagarajan, K.; Anandan, M.M.; Stateczny, A.; Divakarachari, P.B.; Lingappa, H.K. Satellite Image Classification Using a Hierarchical Ensemble Learning and Correlation Coefficient-Based Gravitational Search Algorithm. Remote Sens. 2021, 13, 4351. [Google Scholar] [CrossRef]
  44. Owolabi, T.; Rahman, M.A. Energy Band Gap Modeling of Doped Bismuth Ferrite Multifunctional Material Using Gravitational Search Algorithm Optimized Support Vector Regression. Crystals 2021, 11, 246. [Google Scholar] [CrossRef]
  45. Qais, M.H.; Hasanien, H.M.; Turky, R.A.; Alghuwainem, S.; Tostado-Véliz, M.; Jurado, F. Circle Search Algorithm: A Geometry-Based Metaheuristic Optimization Algorithm. Mathematics 2022, 10, 1626. [Google Scholar] [CrossRef]
  46. Qais, M.H.; Hasanien, H.M.; Turky, R.A.; Alghuwainem, S.; Loo, K.-H.; Elgendy, M. Optimal PEM Fuel Cell Model Using a Novel Circle Search Algorithm. Electronics 2022, 11, 1808. [Google Scholar] [CrossRef]
  47. Ehteram, M.; Ahmed, A.N.; Ling, L.; Fai, C.M.; Latif, S.D.; Afan, H.A.; Banadkooki, F.B.; El-Shafie, A. Pipeline scour rates prediction-based model utilizing a multilayer perceptron-colliding body algorithm. Water 2020, 12, 902. [Google Scholar] [CrossRef] [Green Version]
  48. Yang, W.; Xia, K.; Li, T.; Xie, M.; Zhao, Y. An Improved Transient Search Optimization with Neighborhood Dimensional Learning for Global Optimization Problems. Symmetry 2021, 13, 244. [Google Scholar] [CrossRef]
  49. Almabrok, A.; Psarakis, M.; Dounis, A. Fast Tuning of the PID Controller in An HVAC System Using the Big Bang–Big Crunch Algorithm and FPGA Technology. Algorithms 2018, 11, 146. [Google Scholar] [CrossRef] [Green Version]
  50. Wu, T.; Li, X.; Zhou, D.; Li, N.; Shi, J. Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks. Sensors 2021, 21, 880. [Google Scholar] [CrossRef] [PubMed]
  51. Li, M.; Zhang, D.; Lu, S.; Tang, X.; Phung, T. Differential Evolution-Based Overcurrent Protection for DC Microgrids. Energies 2021, 14, 5026. [Google Scholar] [CrossRef]
  52. Drachal, K.; Pawłowski, M. A Review of the Applications of Genetic Algorithms to Forecasting Prices of Commodities. Economies 2021, 9, 6. [Google Scholar] [CrossRef]
  53. Awan, W.A.; Zaidi, A.; Hussain, M.; Hussain, N.; Syed, I. The design of a wideband antenna with notching characteristics for small devices using a genetic algorithm. Mathematics 2021, 9, 2113. [Google Scholar] [CrossRef]
  54. Strumberger, I.; Minovic, M.; Tuba, M.; Bacanin, N. Performance of Elephant Herding Optimization and Tree Growth Algorithm Adapted for Node Localization in Wireless Sensor Networks. Sensors 2019, 19, 2515. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  55. Abualigah, L.; Diabat, A.; Sumari, P.; Gandomi, A. A Novel Evolutionary Arithmetic Optimization Algorithm for Multilevel Thresholding Segmentation of COVID-19 CT Images. Processes 2021, 9, 1155. [Google Scholar] [CrossRef]
  56. Sharma, A.; Khan, R.A.; Sharma, A.; Kashyap, D.; Rajput, S. A Novel Opposition-Based Arithmetic Optimization Algorithm for Parameter Extraction of PEM Fuel Cell. Electronics 2021, 10, 2834. [Google Scholar] [CrossRef]
  57. Faradonbeh, R.S.; Monjezi, M.; Armaghani, D.J. Genetic programing and non-linear multiple regression techniques to predict backbreak in blasting operation. Eng. Comput. 2015, 32, 123–133. [Google Scholar] [CrossRef]
  58. Masoumi, N.; Xiao, Y.; Rivaz, H. ARENA: Inter-modality affine registration using evolutionary strategy. Int. J. Comput. Assist. Radiol. Surg. 2018, 14, 441–450. [Google Scholar] [CrossRef]
  59. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  60. Shaheen, M.A.; Yousri, D.; Fathy, A.; Hasanien, H.M.; Alkuhayli, A.; Muyeen, S.M. A novel application of improved marine predators algorithm and particle swarm optimization for solving the ORPD problem. Energies 2020, 13, 5679. [Google Scholar] [CrossRef]
  61. Ebeed, M.; Alhejji, A.; Kamel, S.; Jurado, F. Solving the optimal reactive power dispatch using marine predators algorithm considering the uncertainties in load and wind-solar generation systems. Energies 2020, 13, 4316. [Google Scholar] [CrossRef]
  62. He, Q.; Lan, Z.; Zhang, D.; Yang, L.; Luo, S. Improved Marine Predator Algorithm for Wireless Sensor Network Coverage Optimization Problem. Sustainability 2022, 14, 9944. [Google Scholar] [CrossRef]
  63. Yang, L.; He, Q.; Yang, L.; Luo, S. A Fusion Multi-Strategy Marine Predator Algorithm for Mobile Robot Path Planning. Appl. Sci. 2022, 12, 9170. [Google Scholar] [CrossRef]
  64. Wadood, A.; Khan, S.; Khan, B.M.; Ali, H.; Rehman, Z. Application of Marine Predator Algorithm in Solving the Problem of Directional Overcurrent Relay in Electrical Power System. Eng. Proc. 2021, 12, 9. [Google Scholar]
  65. Lu, X.; Nanehkaran, Y.A.; Fard, M.K. A Method for Optimal Detection of Lung Cancer Based on Deep Learning Optimized by Marine Predators Algorithm. Comput. Intell. Neurosci. 2021, 2021, 3694723. [Google Scholar] [CrossRef] [PubMed]
  66. Hoang, N.-D.; Tran, X.-L. Remote Sensing–Based Urban Green Space Detection Using Marine Predators Algorithm Optimized Machine Learning Approach. Math. Probl. Eng. 2021, 2021, 5586913. [Google Scholar] [CrossRef]
  67. Yang, W.; Xia, K.; Li, T.; Xie, M.; Song, F. A Multi-Strategy Marine Predator Algorithm and Its Application in Joint Regularization Semi-Supervised ELM. Mathematics 2021, 9, 291. [Google Scholar] [CrossRef]
  68. Ezugwu, A.E.; Agushaka, J.O.; Abualigah, L.; Mirjalili, S.; Gandomi, A.H. Prairie Dog Optimization Algorithm. Neural Comput. Appl. 2022, 34, 20017–20065. [Google Scholar] [CrossRef]
  69. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  70. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  71. Xiao, Y.; Liu, J.; Alkhathlan, A. Informatisation of educational reform based on fractional differential equations. Appl. Math. Nonlinear Sci. 2021. [Google Scholar] [CrossRef]
  72. Zhang, X.; Alahmadi, D. Study on the maximum value of flight distance based on the fractional differential equation for calculating the best path of shot put. Appl. Math. Nonlinear Sci. 2021. [Google Scholar] [CrossRef]
  73. Che, Y.; Keir, M.Y.A. Study on the training model of football movement trajectory drop point based on fractional differential equation. Appl. Math. Nonlinear Sci. 2021, 7, 425–430. [Google Scholar] [CrossRef]
  74. Chandra, S.; Hayashibe, M.; Thondiyath, A. Muscle Fatigue Induced Hand Tremor Clustering in Dynamic Laparoscopic Manipulation. IEEE Trans. Syst. Man, Cybern. Syst. 2018, 50, 5420–5431. [Google Scholar] [CrossRef]
  75. Xu, W.; Chu, B.; Rogers, E. Iterative learning control for robotic-assisted upper limb stroke rehabilitation in the presence of muscle fatigue. Control Eng. Pract. 2014, 31, 63–72. [Google Scholar] [CrossRef]
Figure 1. MPA flowchart.
Figure 2. Fitness plots for MPA w.r.t. population sizes for case study 1.
Figure 3. Fitness plots for MPA w.r.t. generation variations for case study 1.
Figure 4. Fitness plots for MPA w.r.t. noise levels for case study 1.
Figure 5. Fitness plot comparison of MPA with PDO, SCA, and WOA at Np = 100 and T = 1000 for case study 1.
Figure 6. Run# fitness comparison of MPA with PDO, SCA, and WOA at Np = 100 and T = 1000 for case study 1.
Figure 7. Statistical comparison of MPA with PDO, SCA, and WOA at Np = 100 and T = 1000 for case study 1.
Figure 8. Fitness plots for MPA w.r.t. generation variations for case study 2.
Figure 9. Fitness plots for MPA w.r.t. noise levels for case study 2.
Figure 10. Fitness plot comparison of MPA with PDO, SCA, and WOA at Np = 50 and T = 500 for case study 2.
Figure 11. Run# fitness comparison of MPA with PDO, SCA, and WOA at Np = 50 and T = 500 for case study 2.
Figure 12. Statistical comparison of MPA with PDO, SCA, and WOA at Np = 50 and T = 200 for case study 2.
Table 1. Classification of metaheuristics.

| Domain | Technique |
|---|---|
| Swarm intelligence | Particle swarm optimization (PSO) [31,32] |
| Swarm intelligence | Dwarf mongoose optimization (DMO) [33,34] |
| Swarm intelligence | Ant colony optimization (ACO) [35,36] |
| Swarm intelligence | Cuckoo search [37,38] |
| Swarm intelligence | Aquila optimizer (AO) [39] |
| Swarm intelligence | Spider monkey optimization [40] |
| Physics-based | Simulated annealing [41,42] |
| Physics-based | Gravitational search algorithm [43,44] |
| Physics-based | Circle search algorithm [45,46] |
| Physics-based | Colliding bodies optimizer [47] |
| Physics-based | Transient search optimizer [48] |
| Physics-based | Big bang–big crunch [49] |
| Evolutionary | Differential evolution [50,51] |
| Evolutionary | Genetic algorithm [52,53] |
| Evolutionary | Tree growth algorithm [54] |
| Evolutionary | Arithmetic optimization algorithm [55,56] |
| Evolutionary | Genetic programming [57] |
| Evolutionary | Evolutionary strategy [58] |
Table 2. Estimated weight analysis w.r.t generation and population sizes at 60 dB noise for case study 1.

| Method | T | Np | g₁ | g₂ | h₁ | h₂ | o₁ | o₂ | o₃ | Best Fitness | Avg Fitness |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MPA | 500 | 50 | −1.0999 | 0.9003 | −0.7999 | −0.6007 | −0.8994 | 0.5998 | 0.1998 | 3.6 × 10⁻⁷ | 6.6 × 10⁻⁷ |
| MPA | 500 | 100 | −1.1004 | 0.9004 | −0.8003 | −0.5999 | −0.8983 | 0.6010 | 0.1999 | 4.3 × 10⁻⁷ | 5.5 × 10⁻⁷ |
| MPA | 1000 | 50 | −1.0997 | 0.8997 | −0.7998 | −0.5993 | −0.9028 | 0.5992 | 0.2003 | 3.5 × 10⁻⁷ | 5.2 × 10⁻⁷ |
| MPA | 1000 | 100 | −1.0999 | 0.9000 | −0.7997 | −0.5998 | −0.9003 | 0.6002 | 0.2002 | 2.6 × 10⁻⁷ | 4.9 × 10⁻⁷ |
| PDO | 500 | 50 | −1.0518 | 0.8524 | −0.6261 | −0.7220 | −0.9588 | 0.5851 | 0.2329 | 0.0097 | 0.1631 |
| PDO | 500 | 100 | −1.0600 | 0.8566 | −0.7128 | −0.6272 | −1.0817 | 0.5417 | 0.2249 | 0.0059 | 0.1618 |
| PDO | 1000 | 50 | −1.0701 | 0.8637 | −0.6899 | −0.8059 | −0.8724 | 0.5222 | 0.1791 | 0.0119 | 0.1677 |
| PDO | 1000 | 100 | −1.0730 | 0.8670 | −0.7100 | −0.6694 | −0.9522 | 0.5646 | 0.2155 | 0.0039 | 0.1848 |
| SCA | 500 | 50 | −1.1554 | 0.8944 | −1.2713 | −0.3910 | −1.3633 | 0.2760 | 0.0024 | 0.0785 | 0.1676 |
| SCA | 500 | 100 | −1.0469 | 0.8100 | −0.9144 | −0.3546 | −1.8413 | 0.2371 | 0.2565 | 0.0719 | 0.1567 |
| SCA | 1000 | 50 | −0.9063 | 0.6642 | −0.8466 | −0.6712 | −1.7836 | −0.0064 | 0.0990 | 0.1167 | 0.1977 |
| SCA | 1000 | 100 | −1.0292 | 0.8255 | −1.0403 | −0.3248 | −1.2033 | 0.5034 | 0.1878 | 0.0464 | 0.1285 |
| WOA | 500 | 50 | −1.1307 | 0.9794 | −1.0294 | −0.7458 | −0.0103 | 0.8304 | 0.2124 | 0.1346 | 0.3724 |
| WOA | 500 | 100 | −1.1050 | 0.9013 | −0.8204 | −0.7087 | −0.8466 | 0.5344 | 0.1570 | 0.0042 | 0.3019 |
| WOA | 1000 | 50 | −1.0270 | 0.8621 | −0.4477 | −0.6913 | −0.7818 | 0.8436 | 0.3795 | 0.0294 | 0.3219 |
| WOA | 1000 | 100 | −0.9356 | 0.7506 | −0.5725 | −0.6927 | −1.4624 | 0.4017 | 0.2705 | 0.0521 | 0.2748 |
| True values | — | — | −1.1000 | 0.9000 | −0.8000 | −0.6000 | −0.9000 | 0.6000 | 0.2000 | 0 | 0 |
Table 3. Estimated weight analysis w.r.t generation and population sizes at 50 dB noise for case study 1.

| Method | T | Np | g₁ | g₂ | h₁ | h₂ | o₁ | o₂ | o₃ | Best Fitness | Avg Fitness |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MPA | 500 | 50 | −1.0990 | 0.8998 | −0.7971 | −0.6027 | −0.9000 | 0.6006 | 0.2000 | 3.5 × 10⁻⁶ | 5.4 × 10⁻⁶ |
| MPA | 500 | 100 | −1.1007 | 0.9007 | −0.8010 | −0.6005 | −0.8976 | 0.5999 | 0.1995 | 3.2 × 10⁻⁶ | 4.7 × 10⁻⁶ |
| MPA | 1000 | 50 | −1.1007 | 0.9007 | −0.8016 | −0.6004 | −0.9021 | 0.5988 | 0.1989 | 3.1 × 10⁻⁶ | 4.6 × 10⁻⁶ |
| MPA | 1000 | 100 | −1.0999 | 0.8996 | −0.7972 | −0.6010 | −0.9034 | 0.5994 | 0.2006 | 2.6 × 10⁻⁶ | 4.2 × 10⁻⁶ |
| PDO | 500 | 50 | −1.0989 | 0.8856 | −0.6730 | −0.8708 | −0.8491 | 0.4867 | 0.1613 | 0.0122 | 0.2468 |
| PDO | 500 | 100 | −1.0950 | 0.8850 | −0.7609 | −0.6493 | −1.0193 | 0.5226 | 0.1898 | 0.0026 | 0.1971 |
| PDO | 1000 | 50 | −1.0568 | 0.8489 | −0.6975 | −0.6522 | −1.0537 | 0.5183 | 0.2255 | 0.0070 | 0.2406 |
| PDO | 1000 | 100 | −1.0821 | 0.8792 | −0.6982 | −0.7261 | −0.8873 | 0.5679 | 0.2011 | 0.0031 | 0.1237 |
| SCA | 500 | 50 | −1.0086 | 0.7452 | −0.4993 | −0.7635 | −1.4064 | 0.4192 | 0.2437 | 0.0902 | 0.1928 |
| SCA | 500 | 100 | −1.1220 | 1.0084 | −0.3534 | −1.1780 | −0.1565 | 0.8028 | 0.2544 | 0.0801 | 0.1949 |
| SCA | 1000 | 50 | −1.1596 | 0.8947 | −1.0397 | −0.4481 | −1.0611 | 0.3793 | 0.1573 | 0.0295 | 0.1382 |
| SCA | 1000 | 100 | −0.9708 | 0.7561 | −0.4713 | −0.7527 | −1.2920 | 0.4789 | 0.2752 | 0.0588 | 0.1212 |
| WOA | 500 | 50 | −1.0691 | 0.8544 | −0.9376 | −0.4316 | −1.1141 | 0.5184 | 0.2023 | 0.0104 | 0.3667 |
| WOA | 500 | 100 | −0.9926 | 0.8132 | −0.6119 | −0.6398 | −1.0712 | 0.6237 | 0.2896 | 0.0216 | 0.3111 |
| WOA | 1000 | 50 | −1.0723 | 0.9288 | −0.2641 | −0.9755 | −0.2221 | 0.9602 | 0.3590 | 0.0657 | 0.4094 |
| WOA | 1000 | 100 | −1.0765 | 0.8998 | −0.3190 | −1.1725 | −0.5884 | 0.6273 | 0.2237 | 0.0370 | 0.2990 |
| True values | — | — | −1.1000 | 0.9000 | −0.8000 | −0.6000 | −0.9000 | 0.6000 | 0.2000 | 0 | 0 |
Table 4. Estimated weight analysis w.r.t generation and population sizes at 40 dB noise for case study 1.

| Method | T | Np | g₁ | g₂ | h₁ | h₂ | o₁ | o₂ | o₃ | Best Fitness | Avg Fitness |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MPA | 500 | 50 | −1.0976 | 0.9002 | −0.8108 | −0.5990 | −0.9099 | 0.5912 | 0.1952 | 3.8 × 10⁻⁵ | 5.5 × 10⁻⁵ |
| MPA | 500 | 100 | −1.0992 | 0.8985 | −0.7965 | −0.6014 | −0.9045 | 0.5973 | 0.2016 | 3.2 × 10⁻⁵ | 5.0 × 10⁻⁵ |
| MPA | 1000 | 50 | −1.0978 | 0.8969 | −0.7999 | −0.6023 | −0.9044 | 0.5961 | 0.1984 | 3.1 × 10⁻⁵ | 4.2 × 10⁻⁵ |
| MPA | 1000 | 100 | −1.0978 | 0.8965 | −0.7981 | −0.5994 | −0.9046 | 0.5976 | 0.2016 | 2.9 × 10⁻⁵ | 3.9 × 10⁻⁵ |
| PDO | 500 | 50 | −1.1077 | 0.8619 | −0.7954 | −0.8527 | −0.6526 | 0.5071 | 0.1500 | 0.0260 | 0.2769 |
| PDO | 500 | 100 | −1.0527 | 0.8379 | −0.7737 | −0.6079 | −0.9045 | 0.5709 | 0.2188 | 0.0071 | 0.1967 |
| PDO | 1000 | 50 | −1.0243 | 0.8187 | −0.8046 | −0.5028 | −1.2278 | 0.5109 | 0.2319 | 0.0110 | 0.1863 |
| PDO | 1000 | 100 | −1.1020 | 0.8933 | −0.8539 | −0.5378 | −0.9603 | 0.5789 | 0.1965 | 4.3 × 10⁻⁴ | 0.1612 |
| SCA | 500 | 50 | −1.2458 | 1.0114 | −1.2823 | −0.5958 | −0.4440 | 0.6779 | 0.0084 | 0.1362 | 0.2290 |
| SCA | 500 | 100 | −1.1005 | 0.9112 | −0.7042 | −0.7149 | −0.7310 | 0.6718 | 0.2431 | 0.0195 | 0.1565 |
| SCA | 1000 | 50 | −1.0389 | 0.8189 | −0.8901 | −0.5615 | −1.1855 | 0.4103 | 0.2097 | 0.0556 | 0.1500 |
| SCA | 1000 | 100 | −1.1279 | 0.8664 | −1.1919 | −0.3155 | −1.4402 | 0.2008 | 0.1338 | 0.0530 | 0.1164 |
| WOA | 500 | 50 | −1.1736 | 0.9365 | −1.3780 | −0.3366 | −0.5491 | 0.6183 | 0.0930 | 0.0515 | 0.4267 |
| WOA | 500 | 100 | −0.4672 | 0.3825 | −0.4320 | −1.1173 | −1.9501 | −0.1694 | 0.1607 | 0.1660 | 0.4192 |
| WOA | 1000 | 50 | −1.0051 | 0.8141 | −0.2655 | −0.9009 | −1.0431 | 0.5755 | 0.3375 | 0.0660 | 0.4354 |
| WOA | 1000 | 100 | −0.7311 | 0.6054 | −0.3689 | −1.0833 | −1.0464 | 0.3766 | 0.2605 | 0.1053 | 0.2973 |
| True values | — | — | −1.1000 | 0.9000 | −0.8000 | −0.6000 | −0.9000 | 0.6000 | 0.2000 | 0 | 0 |
Table 5. Estimated weight analysis w.r.t generation and population sizes at 30 dB noise for case study 1.

| Method | T | Np | g₁ | g₂ | h₁ | h₂ | o₁ | o₂ | o₃ | Best Fitness | Avg Fitness |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MPA | 500 | 50 | −1.0993 | 0.9027 | −0.8065 | −0.5937 | −0.9485 | 0.5739 | 0.2001 | 3.0 × 10⁻⁴ | 5.2 × 10⁻⁴ |
| MPA | 500 | 100 | −1.0884 | 0.8840 | −0.7943 | −0.6330 | −0.8651 | 0.5970 | 0.1908 | 2.4 × 10⁻⁴ | 4.5 × 10⁻⁴ |
| MPA | 1000 | 50 | −1.1002 | 0.8957 | −0.7948 | −0.5956 | −0.9017 | 0.5975 | 0.2046 | 2.7 × 10⁻⁴ | 4.1 × 10⁻⁴ |
| MPA | 1000 | 100 | −1.1034 | 0.9003 | −0.8125 | −0.5908 | −0.9229 | 0.5861 | 0.1983 | 2.4 × 10⁻⁴ | 3.8 × 10⁻⁴ |
| PDO | 500 | 50 | −0.9118 | 0.6720 | −1.0179 | −0.4754 | −1.3536 | 0.2024 | 0.1578 | 0.0461 | 0.2711 |
| PDO | 500 | 100 | −1.0106 | 0.8154 | −0.6439 | −0.8667 | −0.9536 | 0.4742 | 0.1776 | 0.0095 | 0.2513 |
| PDO | 1000 | 50 | −0.8903 | 0.7105 | −0.6975 | −0.6399 | −1.3032 | 0.3473 | 0.2361 | 0.0230 | 0.2707 |
| PDO | 1000 | 100 | −1.0686 | 0.8515 | −0.9238 | −0.4909 | −0.9426 | 0.5455 | 0.1848 | 0.0048 | 0.1520 |
| SCA | 500 | 50 | −1.0744 | 0.8886 | −0.8612 | −0.4488 | −1.3053 | 0.3867 | 0.1863 | 0.0784 | 0.2060 |
| SCA | 500 | 100 | −1.1646 | 0.8605 | −1.3450 | −0.1424 | −1.3534 | 0.1530 | 0.1114 | 0.0709 | 0.1638 |
| SCA | 1000 | 50 | −1.0843 | 0.9227 | −0.6941 | −0.8631 | −1.0894 | 0.4127 | 0.1170 | 0.0719 | 0.1513 |
| SCA | 1000 | 100 | −1.0649 | 0.8689 | −0.8942 | −0.4560 | −0.9881 | 0.6281 | 0.2674 | 0.0489 | 0.1265 |
| WOA | 500 | 50 | −1.1271 | 0.9493 | 0.5670 | −1.9626 | 0.0465 | 0.8384 | 0.3408 | 0.1945 | 0.4343 |
| WOA | 500 | 100 | −0.9911 | 0.8016 | 0.0386 | −1.9189 | −0.5169 | 0.4345 | 0.1694 | 0.0826 | 0.3993 |
| WOA | 1000 | 50 | −1.2537 | 0.8268 | −1.9499 | 0.4810 | −1.8826 | −0.1519 | 0.0787 | 0.1643 | 0.3945 |
| WOA | 1000 | 100 | −1.1938 | 0.9840 | −0.8044 | −0.4657 | 0.0998 | 1.1197 | 0.3217 | 0.0991 | 0.3120 |
| True values | — | — | −1.1000 | 0.9000 | −0.8000 | −0.6000 | −0.9000 | 0.6000 | 0.2000 | 0 | 0 |
Table 6. Estimated weight analysis w.r.t generation and population sizes at 10 dB noise for case study 1.

| Method | T | Np | g₁ | g₂ | h₁ | h₂ | o₁ | o₂ | o₃ | Best Fitness | Avg Fitness |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MPA | 500 | 50 | −1.0044 | 0.9248 | −0.6585 | −0.5989 | −1.2175 | 0.5851 | 0.2879 | 0.0266 | 0.0651 |
| MPA | 500 | 100 | −0.4721 | 0.3117 | 0.0368 | −1.0912 | −1.7806 | 0.1460 | 0.4641 | 0.0350 | 0.0585 |
| MPA | 1000 | 50 | −1.0797 | 0.8075 | −1.0865 | −0.3263 | −0.8897 | 0.5562 | 0.2113 | 0.0257 | 0.0537 |
| MPA | 1000 | 100 | −1.0986 | 0.9158 | −0.6987 | −0.6994 | −0.9153 | 0.5574 | 0.1941 | 0.0263 | 0.0417 |
| PDO | 500 | 50 | −0.4146 | 0.2163 | −0.3158 | −1.3845 | −1.1701 | 0.0619 | 0.1626 | 0.0591 | 0.3364 |
| PDO | 500 | 100 | −1.0651 | 0.6972 | −1.6685 | −0.2576 | −0.5222 | 0.4760 | 0.0516 | 0.0810 | 0.2740 |
| PDO | 1000 | 50 | −0.7514 | 0.5082 | −0.2147 | −0.6914 | −1.2349 | 0.8102 | 0.5757 | 0.0707 | 0.3043 |
| PDO | 1000 | 100 | −0.7529 | 0.5434 | −0.3480 | −1.1650 | −0.9728 | 0.3101 | 0.1813 | 0.0390 | 0.2830 |
| SCA | 500 | 50 | −0.6897 | 0.4400 | −0.8164 | −0.9964 | −1.5210 | −0.0562 | 0.1077 | 0.1750 | 0.2764 |
| SCA | 500 | 100 | −1.1314 | 0.8923 | −1.5527 | −0.7299 | −0.8486 | 0.2096 | 0.0101 | 0.0933 | 0.1738 |
| SCA | 1000 | 50 | −0.0228 | 0.0203 | −0.0002 | −2.0000 | −1.7703 | −0.2665 | 0.1090 | 0.1005 | 0.1781 |
| SCA | 1000 | 100 | −0.2253 | −0.0028 | −0.1909 | −1.6631 | −1.0498 | 0.1043 | 0.2336 | 0.0966 | 0.1620 |
| WOA | 500 | 50 | −0.1134 | −0.0868 | −0.3416 | −0.7243 | −0.4913 | 0.7683 | 0.4553 | 0.1016 | 0.3102 |
| WOA | 500 | 100 | −1.3169 | 0.8183 | −1.6177 | 0.1795 | −1.4909 | 0.0159 | 0.1090 | 0.0760 | 0.2376 |
| WOA | 1000 | 50 | −0.7933 | 0.6662 | −1.2328 | −0.5992 | −1.9345 | −0.2411 | 0.0103 | 0.0824 | 0.2924 |
| WOA | 1000 | 100 | 0.0301 | 0.1867 | −1.9827 | −1.9827 | −1.0564 | −0.1705 | 0.0200 | 0.1355 | 0.2732 |
| True values | — | — | −1.1000 | 0.9000 | −0.8000 | −0.6000 | −0.9000 | 0.6000 | 0.2000 | 0 | 0 |
Table 7. Estimated weight analysis w.r.t generations at a 0.01 noise level for case study 2.

| Method | T | g₁ | g₂ | h₁ | h₂ | o₁ | o₂ | o₃ | Best Fitness | Avg Fitness |
|---|---|---|---|---|---|---|---|---|---|---|
| MPA | 200 | −1.1017 | 0.8978 | −0.8044 | −0.5999 | −0.8932 | 0.6001 | 0.1980 | 5.2 × 10⁻⁵ | 1.9 × 10⁻³ |
| MPA | 500 | −1.1018 | 0.8979 | −0.8044 | −0.5996 | −0.8920 | 0.6010 | 0.1982 | 5.2 × 10⁻⁵ | 5.2 × 10⁻⁵ |
| PDO | 200 | −1.0836 | 0.8766 | −0.7975 | −0.6473 | −1.0023 | 0.5269 | 0.1740 | 6.3 × 10⁻³ | 0.5494 |
| PDO | 500 | −1.0899 | 0.8948 | −0.5766 | −0.6963 | −0.7177 | 0.7521 | 0.2727 | 0.0146 | 0.3834 |
| SCA | 200 | −1.1650 | 0.9793 | −1.8630 | −1.2628 | 0.0036 | 0.6294 | 0.0072 | 0.2384 | 0.8104 |
| SCA | 500 | −1.1364 | 0.8896 | −1.6057 | −0.2327 | −0.7748 | 0.5465 | 0.0233 | 0.0867 | 0.4969 |
| WOA | 200 | −0.9963 | 0.7957 | −0.3799 | −0.7507 | −1.3856 | 0.6501 | 0.3586 | 0.2331 | 1.0182 |
| WOA | 500 | −1.1450 | 0.9706 | −0.2738 | −0.6472 | 0.3553 | 1.4730 | 0.4975 | 0.1753 | 0.6517 |
| True values | — | −1.1000 | 0.9000 | −0.8000 | −0.6000 | −0.9000 | 0.6000 | 0.2000 | 0 | 0 |
Table 8. Estimated weight analysis w.r.t generations at a 0.05 noise level for case study 2.

| Method | T | g₁ | g₂ | h₁ | h₂ | o₁ | o₂ | o₃ | Best Fitness | Avg Fitness |
|---|---|---|---|---|---|---|---|---|---|---|
| MPA | 200 | −1.1109 | 0.8910 | −0.8271 | −0.5952 | −0.8589 | 0.6044 | 0.1903 | 1.3 × 10⁻³ | 3.7 × 10⁻³ |
| MPA | 500 | −1.1109 | 0.8910 | −0.8262 | −0.5954 | −0.8583 | 0.6049 | 0.1906 | 1.3 × 10⁻³ | 1.3 × 10⁻³ |
| PDO | 200 | −1.0406 | 0.8332 | −0.4206 | −1.3664 | −0.7828 | 0.4256 | 0.1194 | 0.0872 | 0.6333 |
| PDO | 500 | −1.0866 | 0.8555 | −0.8423 | −0.6316 | −1.0984 | 0.4647 | 0.1520 | 0.0172 | 0.4124 |
| SCA | 200 | −1.1050 | 0.8054 | −1.4742 | −0.1922 | −1.0218 | 0.5351 | 0.0089 | 0.2073 | 0.8758 |
| SCA | 500 | −1.0695 | 0.8453 | −1.1957 | −1.4701 | −0.1459 | 0.5871 | 0.0042 | 0.1845 | 0.4258 |
| WOA | 200 | −0.9064 | 0.7102 | −0.3235 | −1.1091 | −1.2099 | 0.4528 | 0.2038 | 0.3539 | 1.1520 |
| WOA | 500 | −1.0549 | 0.8315 | −1.0059 | −0.9560 | −1.2754 | 0.1507 | −0.0263 | 0.1994 | 0.5828 |
| True values | — | −1.1000 | 0.9000 | −0.8000 | −0.6000 | −0.9000 | 0.6000 | 0.2000 | 0 | 0 |
Table 9. Estimated weight analysis w.r.t generations at a 0.10 noise level for case study 2.

| Method | T | g₁ | g₂ | h₁ | h₂ | o₁ | o₂ | o₃ | Best Fitness | Avg Fitness |
|---|---|---|---|---|---|---|---|---|---|---|
| MPA | 200 | −1.1267 | 0.8856 | −0.8717 | −0.5808 | −0.8127 | 0.6069 | 0.1776 | 5.3 × 10⁻³ | 6.8 × 10⁻³ |
| MPA | 500 | −1.1265 | 0.8857 | −0.8653 | −0.5835 | −0.8107 | 0.6093 | 0.1794 | 5.3 × 10⁻³ | 5.3 × 10⁻³ |
| PDO | 200 | −1.0194 | 0.7655 | −0.6219 | −0.9296 | −1.1517 | 0.3711 | 0.1296 | 0.1452 | 0.7990 |
| PDO | 500 | −1.0938 | 0.8706 | −0.6854 | −0.7346 | −0.8723 | 0.5692 | 0.2007 | 0.0172 | 0.3089 |
| SCA | 200 | −1.0574 | 0.9286 | −0.0290 | −1.3075 | −0.6739 | 0.5813 | 0.3124 | 0.2892 | 0.6840 |
| SCA | 500 | −1.1517 | 0.8870 | −1.4557 | −0.1035 | −1.4434 | 0.2615 | 0.0165 | 0.1655 | 0.4139 |
| WOA | 200 | −1.1503 | 0.8178 | −1.9646 | 0.2299 | −1.9588 | −0.0416 | −0.0822 | 0.3225 | 1.0572 |
| WOA | 500 | −1.0863 | 0.8635 | −0.3088 | −0.8407 | −0.6800 | 0.8381 | 0.3392 | 0.0734 | 0.6578 |
| True values | — | −1.1000 | 0.9000 | −0.8000 | −0.6000 | −0.9000 | 0.6000 | 0.2000 | 0 | 0 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
