Abstract
The mathematical modelling and optimization of nonlinear problems arising in diverse engineering applications is an area of great interest. The Hammerstein structure is widely used in the modelling of various nonlinear processes found in a range of applications. This study investigates the parameter optimization of the nonlinear Hammerstein model using the abilities of the marine predator algorithm (MPA) and the key term separation technique. MPA is a population-based metaheuristic inspired by the behavior of predators when catching prey, and utilizes Brownian/Lévy movement to model the optimal interaction between predator and prey. A detailed analysis of MPA is conducted to verify the accurate and robust behavior of the optimization scheme for nonlinear Hammerstein model identification.
MSC:
93C10; 68T20
1. Introduction
The identification of nonlinear systems is considered a challenging task because of the inherent stiffness and complex system representation [1,2]. Modelling a nonlinear system through the block-oriented Hammerstein structure is a relatively simple approach, where a static nonlinear block is followed by a linear dynamical subsystem [3,4]. The identification of Hammerstein models is of the utmost importance owing to their ability to model various nonlinear processes [5,6,7]. The pioneering work for nonlinear Hammerstein systems was presented by Narendra et al. in the 1960s, who proposed an iterative identification algorithm [8]. Chang et al. proposed a non-iterative scheme [9]. Vörös et al. presented a key term separation principle for the identification of a Hammerstein model [10,11]. Ding et al. introduced hierarchical least squares [12,13] and hierarchical gradient descent algorithms [14] for the Hammerstein model. Chaudhary et al. introduced the concept of fractional gradient algorithms for Hammerstein systems by proposing normalized fractional least mean square (FLMS) [15], sign FLMS [16], multi-innovation FLMS [17], hierarchical quasi-fractional gradient descent [18], and fractional hierarchical gradient descent [19] algorithms.
Different system identification scenarios have multimodal error surfaces, and traditional gradient-descent-based approaches may converge to a sub-optimal solution [20,21]. System identification may be expressed as an optimization problem that can be solved through a stochastic search methodology such as evolutionary and swarm heuristics [20]. The research community has proposed such population-based heuristics for Hammerstein model identification. For example, Raja et al. exploited genetic algorithms [22]; Mehmood et al. presented differential evolution [23], weighted differential evolution [24], and backtracking search heuristics [25]; and Altaf et al. presented adaptive evolutionary heuristics [26]. The swarm-based optimization heuristics for Hammerstein system identification include particle swarm optimization [27], the cuckoo search algorithm [28], the snake optimizer algorithm [29], and fractional swarming optimization heuristics [30].
Metaheuristics have been applied in different engineering optimization problems. Heuristics are mainly categorized as (a) swarm intelligence, (b) physics-based, or (c) evolutionary algorithms. Swarm intelligence mimics the behavior of herds present in nature. Physics-based methods are inspired by the laws of physics, while evolutionary methods are inspired by biological processes in nature. The significant methods proposed in all of these domains are summarized in Table 1.
Table 1.
Classification of metaheuristics.
Among the various swarm-intelligence-based techniques, the Marine Predator Algorithm (MPA) [59] has recently been proposed and applied in various applications such as optimal power flow [60], economic load dispatch [61], wireless sensor networks [62], and path planning [63]. Wadood et al. [64] applied MPA to minimize breakdowns in an electrical power system. Lu et al. [65] applied MPA along with a convolutional neural network for the optimal detection of lung cancer. Hoang et al. [66] utilized MPA in the optimization of the hyperparameters of support vector machines for remote sensing. Yang et al. [67] applied a multi-strategy MPA to semi-supervised extreme learning machines for classification.
In this study, we explored MPA for effective parameter estimation of the Hammerstein structure. The detailed performance evaluation of the proposed scheme for Hammerstein identification was conducted for different noise conditions. The reliability of the proposed approach in comparison with the other recently introduced metaheuristics was established through detailed analyses based on multiple independent executions and statistical tests.
The remainder of the paper is outlined as follows: Section 2 describes the Hammerstein model structure. Section 3 presents the MPA methodology with a flow chart description. Section 4 provides the results of detailed simulations in terms of graphical and tabular representation. Finally, Section 5 concludes the study by presenting the main findings of the current investigation.
2. System Model
Considering the Hammerstein output error (HOE) model, whose measured output is written as

y(t) = x(t) + v(t), (1)

where x(t) is the noise-free output and v(t) is the additive white noise. The noise-free output of the HOE model is presented in (2):

x(t) = \frac{B(q)}{A(q)} \bar{u}(t), (2)

with A(q) = 1 + a_1 q^{-1} + \dots + a_n q^{-n} and B(q) = b_1 q^{-1} + \dots + b_m q^{-m}, where \bar{u}(t) = f(u(t)) is the output of the nonlinear block with basis functions (k_1, k_2, \dots, k_p) and (c_1, c_2, \dots, c_p) are the corresponding parameters, represented as

\bar{u}(t) = f(u(t)) = \sum_{j=1}^{p} c_j k_j(u(t)).

By choosing \bar{u}(t) as the key term, applying the key term separation principle [18,19], and letting b_1 = 1, (2) can be expressed as

x(t) = -\sum_{i=1}^{n} a_i x(t-i) + \sum_{j=2}^{m} b_j \bar{u}(t-j) + \sum_{j=1}^{p} c_j k_j(u(t-1)). (3)

The parameter vector \theta and information vector \varphi(t) are presented in (4) and (5), respectively:

\theta = [a_1, \dots, a_n, b_2, \dots, b_m, c_1, \dots, c_p]^T, (4)

\varphi(t) = [-x(t-1), \dots, -x(t-n), \bar{u}(t-2), \dots, \bar{u}(t-m), k_1(u(t-1)), \dots, k_p(u(t-1))]^T. (5)

Then, (3) can be expressed as presented in (6):

x(t) = \varphi^T(t)\theta. (6)
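As an illustration, the key-term-separated noise-free output can be simulated as in the sketch below. The coefficient layout (with b_1 fixed to 1 by the key term separation) follows the standard formulation; all parameter values, names, and the identity basis used in the test are illustrative assumptions, not the paper's case-study model.

```python
import numpy as np

def hammerstein_output(u, a, b, c, basis):
    """Noise-free output x(t) of a key-term-separated Hammerstein model.

    a: AR coefficients a_1..a_n; b: coefficients b_2..b_m (b_1 is fixed to 1
    by the key term separation); c: nonlinearity coefficients c_1..c_p;
    basis: list of basis functions k_1..k_p. All names are illustrative.
    """
    n, m = len(a), len(b) + 1
    # Output of the static nonlinear block, ubar(t) = sum_j c_j * k_j(u(t))
    ubar = np.array([sum(cj * kj(ui) for cj, kj in zip(c, basis)) for ui in u])
    x = np.zeros(len(u))
    for t in range(len(u)):
        ar = -sum(a[i] * x[t - 1 - i] for i in range(n) if t - 1 - i >= 0)
        ma = sum(b[j - 2] * ubar[t - j] for j in range(2, m + 1) if t - j >= 0)
        key = ubar[t - 1] if t >= 1 else 0.0   # separated key term ubar(t-1)
        x[t] = ar + ma + key
    return x
```

With an identity nonlinearity and no linear dynamics, the model reduces to a one-step delay, which gives a quick sanity check of the recursion.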
3. Methodology
In this section, the MPA-based methodology for the parameter estimation of the Hammerstein model is presented.
3.1. Marine Predator Algorithm
The MPA is a population-based metaheuristic inspired by the behavior of predators when catching prey [59]. It utilizes Brownian and Lévy movement for the optimal interaction between predator and prey. Its mathematical model, pseudocode, and algorithm flowchart are presented below.
3.1.1. Formulation
MPA starts with the initialization of a population of n search agents uniformly distributed over the search space, as presented in (7):

X_0 = X_{min} + r \otimes (X_{max} - X_{min}), (7)

where X_{min} and X_{max} are the lower and upper bounds of the search space and r is a uniform random vector in [0, 1]. Two matrices are also constructed, the Elite matrix (EM) and the Prey matrix (PM), which consist of n copies of the position vector with the best fitness and of the random positions proposed during initialization, respectively, as presented in (8) and (9):

EM = [X^I; X^I; \dots; X^I] \in \mathbb{R}^{n \times d}, (8)

PM = [X_1; X_2; \dots; X_n] \in \mathbb{R}^{n \times d}, (9)

where X^I represents the top predator and d is the dimension of the prey.
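A minimal sketch of the initialization and the construction of the Elite and Prey matrices, assuming a minimization objective; the function and variable names are illustrative.

```python
import numpy as np

def initialize(n, d, xmin, xmax, rng):
    """Uniform initialization of the n-by-d Prey matrix over [xmin, xmax], Eq. (7)."""
    return xmin + rng.random((n, d)) * (xmax - xmin)

def elite_matrix(prey, fitness_values):
    """Elite matrix: n copies of the top-predator (best-fitness) position, Eq. (8)."""
    best = prey[np.argmin(fitness_values)]   # minimization is assumed
    return np.tile(best, (prey.shape[0], 1))
```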
3.1.2. Optimization
The optimization of MPA consists of three phases, as presented below.
Phase I
In this phase, the predator moves faster than the prey by using Brownian movement for the first third of the iterations. The prey matrix is updated as presented in (10) and (11):

S_i = R_B \otimes (Elite_i - R_B \otimes Prey_i), \quad i = 1, \dots, n, (10)

Prey_i = Prey_i + P \cdot R \otimes S_i, (11)

where R_B is the normally distributed vector representing the Brownian movement, \otimes denotes element-wise multiplication, P = 0.5 is a constant, and R is a uniform random vector in [0, 1].
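Following the standard MPA formulation [59], the phase I prey update can be sketched as below; P = 0.5 is the usual step-size constant, and the names are illustrative.

```python
P = 0.5  # step-size constant used throughout MPA [59]

def phase1_update(prey, elite, rng):
    """Brownian-driven prey update of phase I, Eqs. (10)-(11)."""
    RB = rng.standard_normal(prey.shape)   # Brownian (normal) steps R_B
    R = rng.random(prey.shape)             # uniform random vector in [0, 1]
    step = RB * (elite - RB * prey)        # Eq. (10)
    return prey + P * R * step             # Eq. (11)
```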
Phase II
In this phase, both the prey and the predator move at the same pace, between one third and two thirds of the maximum iterations. The prey moves with Lévy movement, while the predator moves using Brownian movement. The first half of the population is updated as presented in (12) and (13), and the second half as in (14) and (15):

S_i = R_L \otimes (Elite_i - R_L \otimes Prey_i), \quad i = 1, \dots, n/2, (12)

Prey_i = Prey_i + P \cdot R \otimes S_i, (13)

S_i = R_B \otimes (R_B \otimes Elite_i - Prey_i), \quad i = n/2 + 1, \dots, n, (14)

Prey_i = Elite_i + P \cdot CF \otimes S_i, \quad CF = (1 - t/T)^{2t/T}, (15)

where R_L is the Lévy-distributed random vector and CF is an adaptive factor that controls the step size of the predator movement.
Phase III
In this phase, the predator moves faster than the prey by using Lévy movement for the last third of the iterations, and the prey matrix is updated as presented in (16) and (17):

S_i = R_L \otimes (R_L \otimes Elite_i - Prey_i), \quad i = 1, \dots, n, (16)

Prey_i = Elite_i + P \cdot CF \otimes S_i. (17)
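A hedged sketch of the phase II and phase III updates. Mantegna's algorithm is assumed for drawing the Lévy steps (the exponent 1.5 follows [59]); CF denotes the adaptive step-size factor, and all names are illustrative.

```python
import numpy as np
from math import gamma, sin, pi

P = 0.5  # step-size constant used throughout MPA [59]

def levy(shape, rng, beta=1.5):
    """Lévy-distributed steps via Mantegna's algorithm (exponent 1.5 as in [59])."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.standard_normal(shape) * sigma / np.abs(rng.standard_normal(shape)) ** (1 / beta)

def phase2_update(prey, elite, CF, rng):
    """Phase II: Lévy prey for the first half, Brownian predator for the second."""
    out = prey.copy()
    half = prey.shape[0] // 2
    RL, RB = levy(prey.shape, rng), rng.standard_normal(prey.shape)
    R = rng.random(prey.shape)
    step1 = RL[:half] * (elite[:half] - RL[:half] * prey[:half])   # Eq. (12)
    out[:half] = prey[:half] + P * R[:half] * step1                # Eq. (13)
    step2 = RB[half:] * (RB[half:] * elite[half:] - prey[half:])   # Eq. (14)
    out[half:] = elite[half:] + P * CF * step2                     # Eq. (15)
    return out

def phase3_update(prey, elite, CF, rng):
    """Phase III: Lévy-driven update in the low-velocity-ratio phase."""
    RL = levy(prey.shape, rng)
    step = RL * (RL * elite - prey)    # Eq. (16)
    return elite + P * CF * step       # Eq. (17)
```

CF itself shrinks over the run, e.g. `CF = (1 - t / T) ** (2 * t / T)` at iteration t of T, so late-phase predator moves become progressively smaller.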
3.1.3. Fish Aggregating Devices' (FADs) Effect
The fish aggregating devices' (FADs) effect is added in MPA to simulate the natural behavior of fish, as they spend 80% of their time in the immediate vicinity of FADs and 20% of the time searching other spaces [59], as presented in (18) and (19):

Prey_i = Prey_i + CF [X_{min} + R \otimes (X_{max} - X_{min})] \otimes U, \quad \text{if } r \le FADs, (18)

Prey_i = Prey_i + [FADs(1 - r) + r](Prey_{r_1} - Prey_{r_2}), \quad \text{if } r > FADs, (19)

where FADs = 0.2 is the probability of the FADs effect, U is a binary vector, r is a uniform random number in [0, 1], X_{max} and X_{min} are the upper and lower bounds, respectively, and r_1, r_2 are random subscripts of the prey matrix. The flowchart for MPA is shown in Figure 1.
Figure 1.
MPA flowchart.
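The FADs perturbation of (18) and (19) can be sketched as follows; clipping the updated positions back to the search bounds is an assumed safeguard common in MPA implementations, not something stated above, and the names are illustrative.

```python
import numpy as np

def fads_effect(prey, xmin, xmax, CF, rng, fads=0.2):
    """FADs effect, Eqs. (18)-(19): occasional long jumps with probability FADs = 0.2."""
    n, d = prey.shape
    out = prey.copy()
    r = rng.random()
    if r <= fads:
        U = (rng.random((n, d)) < fads).astype(float)                 # binary vector U
        out += CF * (xmin + rng.random((n, d)) * (xmax - xmin)) * U   # Eq. (18)
    else:
        r1, r2 = rng.permutation(n), rng.permutation(n)               # random prey subscripts
        out += (fads * (1 - r) + r) * (prey[r1] - prey[r2])           # Eq. (19)
    return np.clip(out, xmin, xmax)  # assumed safeguard: keep candidates in bounds
```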
4. Performance Analysis
The detailed performance analysis of MPA for the HOE model is presented in this section. HOE model identification is conducted for several noise levels, numbers of generations, and population sizes. The proposed scheme is examined in terms of accuracy, measured by the fitness function presented in (20):

Fitness = \frac{1}{N} \sum_{t=1}^{N} [y(t) - \hat{y}(t)]^2, (20)

where \hat{y}(t) is the estimated response through MPA and y(t) is the actual response of the HOE model.
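Assuming the mean-squared-error form of (20), the fitness evaluation reduces to a few lines; the function name is illustrative.

```python
import numpy as np

def fitness(y_actual, y_estimated):
    """Mean-squared-error fitness between actual and estimated responses, Eq. (20)."""
    y_actual = np.asarray(y_actual, dtype=float)
    y_estimated = np.asarray(y_estimated, dtype=float)
    return float(np.mean((y_actual - y_estimated) ** 2))
```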
4.1. Case Study 1
For the simulation study, we considered the following HOE model
The input is taken as a zero-mean, unit-variance random signal with a data length of 25, and the number of generations is taken as 500 and 1000. The performance of MPA is assessed by introducing AWGN at five noise levels, i.e., [60 dB, 50 dB, 40 dB, 30 dB, 10 dB], to the HOE model for two variations of generations (T) [500, 1000] and population sizes (Np) [50, 100], and the fitness plots are demonstrated in Figure 2.
Figure 2.
Fitness plots for MPA w.r.t population sizes for case study 1.
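Corrupting the model output with AWGN at a prescribed SNR in dB can be sketched as below; scaling the noise power from the measured signal power is the standard construction, and the function name is illustrative.

```python
import numpy as np

def add_awgn(x, snr_db, rng):
    """Add white Gaussian noise at a prescribed SNR in dB (e.g. 60, 50, ..., 10 dB)."""
    x = np.asarray(x, dtype=float)
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / 10 ** (snr_db / 10)   # SNR = 10*log10(Ps/Pn)
    return x + rng.standard_normal(x.shape) * np.sqrt(noise_power)
```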
The fitness curves in Figure 2a correspond to the best fitness of the MPA algorithm for generation T = 500, while Figure 2b corresponds to the same variation for generation T = 1000. It is noted from Figure 2a,b that the fitness of MPA for the two generation variations, i.e., [500, 1000], reduces with the increase in population size.
The performance of MPA is also evaluated in terms of the generation variations as demonstrated by the fitness curves in Figure 3.
Figure 3.
Fitness plots for MPA w.r.t generation variations for case study 1.
The fitness curves in Figure 3a correspond to the best fitness of the MPA algorithm for population size Np = 50, whereas Figure 3b is related to the same variation for population Np = 100. It is noted from Figure 3a,b that the fitness of MPA for the two population sizes, i.e., [50, 100], is reduced significantly with the increase in generations.
The behavior of MPA for different noise variations is evaluated by fixing the population size [50, 100] and changing the generation size [500, 1000] for five values of noise levels [60 dB, 50 dB, 40 dB, 30 dB, 10 dB], and the fitness plots are presented in Figure 4.
Figure 4.
Fitness plots for MPA w.r.t noise levels for case study 1.
It is noted from the fitness plots demonstrated in Figure 4a–d that, for a fixed population size and number of generations, the fitness achieved by MPA at the lowest noise level, i.e., 60 dB, is considerably lower than the fitness at the higher noise levels, i.e., [50 dB, 40 dB, 30 dB, 10 dB]. Moreover, MPA achieves the minimum fitness value for the smallest noise level, i.e., 60 dB, with Np = 100 and T = 1000. Therefore, the curves in Figure 4 confirm that the performance of MPA degrades at higher noise levels.
To further investigate MPA, it is compared with other metaheuristics, namely prairie dog optimization (PDO) [68], the sine cosine algorithm (SCA) [69], and the whale optimization algorithm (WOA) [70], for Np = 100 and T = 1000 and noise levels of [60 dB, 50 dB, 40 dB, 30 dB, 10 dB]; the results are presented in Figure 5.
Figure 5.
Fitness plot comparison of MPA with PDO, SCA, and WOA at Np = 100 and T = 1000 for case study 1.
Figure 5a–e demonstrates the convergence of MPA with SCA, WOA, and PDO at Np = 100 and T = 1000 over all of the noise levels, i.e., [60 dB, 50 dB, 40 dB, 30 dB, 10 dB]. It is noted from Figure 5a–e that with the increase in noise level, the fitness value also increases. However, MPA achieves the lowest fitness compared with SCA, WOA, and PDO for all of the noise levels.
MPA is further evaluated statistically against PDO, WOA, and SCA at Np = 100 and T = 1000 for 20 independent runs. Figure 6 shows the fitness value of MPA, SCA, WOA, and PDO on run#1, run#10, and run#20 for all of the noise levels.

Figure 6.
Run# fitness comparison of MPA with PDO, SCA and WOA at Np = 100 and T = 1000 for case study 1.
It is noted from Figure 6a–e that MPA achieves the lowest fitness compared with SCA, WOA, and PDO for different independent runs for all of the noise levels. Moreover, it is also observed that fitness increases with the increase in noise levels for all of the methods. Further statistical analysis on all of the independent runs is demonstrated in Figure 7.
Figure 7.
Statistical comparison of MPA with PDO, SCA, and WOA at Np = 100 and T = 1000 for case study 1.
It is noted from Figure 7a–e that MPA outperforms SCA, WOA, and PDO for noise levels of [60 dB, 50 dB, 40 dB, 30 dB, 10 dB] in all of the independent runs. Moreover, it is also observed that there is an increase in fitness values at high noise levels for all of the methods.
Table 2, Table 3, Table 4, Table 5 and Table 6 show the performance of all of the algorithms in terms of the average fitness, best fitness, and estimated weights for the [60 dB, 50 dB, 40 dB, 30 dB, 10 dB] noise levels, respectively. It is noted that MPA gives better results at the lower noise levels, i.e., 60 dB, than at the higher ones. Moreover, at a low noise level, the estimated weights are closer to the true values. The best fitness values achieved for each noise level are reported in the corresponding tables. Therefore, it is validated from Table 2, Table 3, Table 4, Table 5 and Table 6 that the fitness of MPA increases with the increase in noise level and decreases with increasing population size and number of generations.
Table 2.
Estimated weight analysis w.r.t generation and population sizes at 60 dB noise for case study 1.
Table 3.
Estimated weights analysis w.r.t generation and population sizes at 50 dB noise for case study 1.
Table 4.
Estimated weight analysis w.r.t generation and population sizes at 40 dB noise for case study 1.
Table 5.
Estimated weight analysis w.r.t generation and population sizes at 30 dB noise for case study 1.
Table 6.
Estimated weight analysis w.r.t generation and population sizes at 10 dB noise for case study 1.
4.2. Case Study 2
The performance of MPA is also considered for another Hammerstein model based on the autoregressive exogenous input structure, i.e., H-ARX [13,14], with the same model parameters as considered in case study 1. The H-ARX model is mathematically defined as

y(t) = G(q)\bar{u}(t) + H(q)v(t), \quad G(q) = \frac{B(q)}{A(q)}, \quad H(q) = \frac{1}{A(q)}, (27)

where G(q) and H(q) are rational functions of the polynomials A(q) and B(q) in the shift operator, and rearranging (27) gives the following:

A(q)y(t) = B(q)\bar{u}(t) + v(t).
The input data are generated in the same way as mentioned in case study 1, while the noise in the ARX model is taken as white Gaussian with constant variance, i.e., [0.01, 0.05, 0.10]. Two variations of generations (T) [500, 1000] and a population size of Np = 50 are considered. The fitness for case study 2 is calculated through (20) by using the measured output y(t) in place of the noise-free response, and the fitness plots are demonstrated in Figure 8.
Figure 8.
Fitness plots for MPA w.r.t generation variations for case study 2.
The fitness curves in Figure 8 correspond to the best fitness of the MPA algorithm for population size Np = 50. It is noted from Figure 8 that the fitness of MPA reduces significantly with the increase in generations.
The behavior of MPA for different noise variations is evaluated by fixing the population size at Np = 50 and changing the generation size [200, 500] for three values of noise variance [0.01, 0.05, 0.10], and the fitness plots are presented in Figure 9.
Figure 9.
Fitness plots for MPA w.r.t noise levels for case study 2.
It is noted from the fitness plots demonstrated in Figure 9a,b that, for a fixed population size and number of generations, the fitness achieved by MPA at the lowest noise variance, i.e., 0.01, is considerably lower than the fitness at the higher noise variances, i.e., [0.05, 0.10]. Moreover, MPA achieves the minimum fitness value for the smallest noise variance, i.e., 0.01, with Np = 50 and T = 500. Therefore, the curves in Figure 9 confirm that the performance of MPA degrades at higher noise levels.
Similar to case study 1, MPA is further compared with prairie dog optimization (PDO) [68], the sine cosine algorithm (SCA) [69], and the whale optimization algorithm (WOA) [70] for Np = 50 and T = 500 and noise levels of [0.01, 0.05, 0.10]; the results are presented in Figure 10.

Figure 10.
Fitness plots comparison of MPA with PDO, SCA, and WOA at Np = 50 and T = 500 for case study 2.
Figure 10a–c demonstrates the convergence of MPA with SCA, WOA, and PDO at Np = 50 and T = 500 over all the noise levels, i.e., [0.01,0.05,0.10]. It is noted from Figure 10a–c that with the increase in noise level, the fitness value also increases. However, MPA achieves the lowest fitness compared with SCA, WOA, and PDO for all of the noise levels.
MPA is further evaluated statistically against PDO, WOA, and SCA at Np = 50 and T = 500 for 20 independent runs. Figure 11 shows the fitness value of MPA, SCA, WOA, and PDO on run#1, run#10, and run#20 for all of the noise levels.
Figure 11.
Run# fitness comparison of MPA with PDO, SCA, and WOA at Np = 50 and T = 500 for case study 2.
It is noted from Figure 11a–c that MPA achieves the lowest fitness compared with SCA, WOA, and PDO for different independent runs for all noise levels. Moreover, it is also observed that the fitness increases with the increase in noise levels for all of the methods. Further statistical analysis on all independent runs is demonstrated in Figure 12.
Figure 12.
Statistical comparison of MPA with PDO, SCA, and WOA at Np = 50 and T = 200 for case study 2.
It is noted from Figure 12a–c that MPA outperforms SCA, WOA, and PDO for noise levels of [0.01, 0.05, 0.10] in all of the independent runs. Moreover, it is also observed that there is an increase in fitness value at high noise levels for all of the methods.
Table 7, Table 8 and Table 9 show the performance of all of the algorithms in terms of the average fitness, best fitness, and estimated weights for the [0.01, 0.05, 0.10] noise levels at Np = 50, respectively. It is noted that MPA gives better results at the lower noise level, i.e., 0.01, than at the higher ones. Moreover, at a low noise level, the estimated weights are closer to the true values. The best fitness values achieved for each noise level are reported in the corresponding tables. Therefore, it is validated from Table 7, Table 8 and Table 9 that the fitness of MPA increases with the increase in noise level and decreases with an increasing number of generations.
Table 7.
Estimated weight analysis w.r.t generations at a 0.01 noise level for case study 2.
Table 8.
Estimated weight analysis w.r.t generations at a 0.05 noise level for case study 2.
Table 9.
Estimated weight analysis w.r.t generations at a 0.10 noise level for case study 2.
The results of the detailed simulations imply that the MPA-based swarming optimization heuristic efficiently approximates the parameters of Hammerstein systems.
5. Conclusions
The main findings of the investigation are as follows:
The parameter optimization of the nonlinear mathematical model represented through the Hammerstein structure is studied using the marine predator algorithm (MPA) and the key term separation principle. The mathematical model and working principle of MPA are inspired by the behavior of predators when catching prey. The optimal interaction between the predator and prey is modelled using Brownian and Lévy movement. Key term separation is a technique used to avoid redundancy in the estimation of the Hammerstein model parameters, thus making MPA more efficient. The accuracy and robustness of the MPA-based optimization scheme with the key term separation technique are established for HOE and H-ARX estimation through simulation results for different noise scenarios.
Future work may extend the application of MPA to optimize the muscle fatigue systems [71,72] and solve fractional order problems [73,74,75].
Author Contributions
Methodology, K.M.; visualization, Z.A.K.; formal analysis, Z.A.K., N.I.C. and M.A.Z.R.; writing—original draft preparation, K.M.; writing—review and editing, N.I.C., Z.A.K. and M.A.Z.R.; project administration, K.M.C., A.A.A. and A.H.M.; funding acquisition, K.M.C., A.A.A. and A.H.M. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Not applicable.
Acknowledgments
M.A.Z. Raja would like to acknowledge the support of the National Science and Technology Council (NSTC), Taiwan, under grant no. NSTC 111-2221-E-224-043.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Schoukens, J.; Ljung, L. Nonlinear System Identification: A User-Oriented Road Map. IEEE Control Syst. 2019, 39, 28–99. [Google Scholar] [CrossRef]
- Yukai, S.; Chao, Y.; Zhigang, W.; Liuyue, B. Nonlinear System Identification of an All Movable Fin with Rotational Freeplay by Subspace-Based Method. Appl. Sci. 2020, 10, 1205. [Google Scholar] [CrossRef]
- Giri, F.; Bai, E.W. (Eds.) Block-Oriented Nonlinear System Identification; Springer: Berlin/Heidelberg, Germany, 2010; Volume 1. [Google Scholar]
- Billings, S.A. Nonlinear System Identification: NARMAX Methods in the Time, Frequency, and Spatio-Temporal Domains; John Wiley & Sons: New York, NY, USA, 2013. [Google Scholar]
- Tissaoui, K. Forecasting implied volatility risk indexes: International evidence using Hammerstein-ARX approach. Int. Rev. Financial Anal. 2019, 64, 232–249. [Google Scholar] [CrossRef]
- Bai, Y.-T.; Wang, X.-Y.; Jin, X.-B.; Zhao, Z.-Y.; Zhang, B.-H. A Neuron-Based Kalman Filter with Nonlinear Autoregressive Model. Sensors 2020, 20, 299. [Google Scholar] [CrossRef]
- Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice Hall: Hoboken, NJ, USA, 1999. [Google Scholar]
- Narendra, K.; Gallman, P. An iterative method for the identification of nonlinear systems using a Hammerstein model. IEEE Trans. Autom. Control 1966, 11, 546–550. [Google Scholar] [CrossRef]
- Chang, F.; Luus, R. A noniterative method for identification using Hammerstein model. IEEE Trans. Autom. Control 1971, 16, 464–468. [Google Scholar] [CrossRef]
- Vörös, J. Parameter identification of discontinuous hammerstein systems. Automatica 1997, 33, 1141–1146. [Google Scholar] [CrossRef]
- Voros, J. Recursive identification of hammerstein systems with discontinuous nonlinearities containing dead-zones. IEEE Trans. Autom. Control 2003, 48, 2203–2206. [Google Scholar] [CrossRef]
- Chen, H.; Ding, F. Hierarchical Least Squares Identification for Hammerstein Nonlinear Controlled Autoregressive Systems. Circuits Syst. Signal Process. 2014, 34, 61–75. [Google Scholar] [CrossRef]
- Ding, F.; Chen, H.; Xu, L.; Dai, J.; Li, Q.; Hayat, T. A hierarchical least squares identification algorithm for Hammerstein nonlinear systems using the key term separation. J. Frankl. Inst. 2018, 355, 3737–3752. [Google Scholar] [CrossRef]
- Chen, H.; Xiao, Y.; Ding, F. Hierarchical gradient parameter estimation algorithm for Hammerstein nonlinear systems using the key term separation principle. Appl. Math. Comput. 2014, 247, 1202–1210. [Google Scholar] [CrossRef]
- Chaudhary, N.I.; Khan, Z.A.; Zubair, S.; Raja, M.A.Z.; Dedovic, N. Normalized fractional adaptive methods for nonlinear control autoregressive systems. Appl. Math. Model. 2018, 66, 457–471. [Google Scholar] [CrossRef]
- Chaudhary, N.I.; Aslam, M.S.; Baleanu, D.; Raja, M.A.Z. Design of sign fractional optimization paradigms for parameter estimation of nonlinear Hammerstein systems. Neural Comput. Appl. 2019, 32, 8381–8399. [Google Scholar] [CrossRef]
- Chaudhary, N.I.; Raja, M.A.Z.; He, Y.; Khan, Z.A.; Machado, J.T. Design of multi innovation fractional LMS algorithm for parameter estimation of input nonlinear control autoregressive systems. Appl. Math. Model. 2021, 93, 412–425. [Google Scholar] [CrossRef]
- Chaudhary, N.I.; Raja, M.A.Z.; Khan, Z.A.; Cheema, K.M.; Milyani, A.H. Hierarchical Quasi-Fractional Gradient Descent Method for Parameter Estimation of Nonlinear ARX Systems Using Key Term Separation Principle. Mathematics 2021, 9, 3302. [Google Scholar] [CrossRef]
- Chaudhary, N.I.; Raja, M.A.Z.; Khan, Z.A.; Mehmood, A.; Shah, S.M. Design of fractional hierarchical gradient descent algorithm for parameter estimation of nonlinear control autoregressive systems. Chaos Solitons Fractals 2022, 157, 111913. [Google Scholar] [CrossRef]
- Gotmare, A.; Bhattacharjee, S.S.; Patidar, R.; George, N.V. Swarm and evolutionary computing algorithms for system identification and filter design: A comprehensive review. Swarm Evol. Comput. 2017, 32, 68–84. [Google Scholar] [CrossRef]
- Jui, J.J.; Ahmad, M.A. A hybrid metaheuristic algorithm for identification of continuous-time Hammerstein systems. Appl. Math. Model. 2021, 95, 339–360. [Google Scholar] [CrossRef]
- Raja, M.A.Z.; Shah, A.A.; Mehmood, A.; Chaudhary, N.I.; Aslam, M.S. Bio-inspired computational heuristics for parameter estimation of nonlinear Hammerstein controlled autoregressive system. Neural Comput. Appl. 2018, 29, 1455–1474. [Google Scholar] [CrossRef]
- Mehmood, A.; Aslam, M.S.; Chaudhary, N.I.; Zameer, A.; Raja, M.A.Z. Parameter estimation for Hammerstein control autoregressive systems using differential evolution. Signal Image Video Process. 2018, 12, 1603–1610. [Google Scholar] [CrossRef]
- Mehmood, A.; Raja, M.A.Z.; Shi, P.; Chaudhary, N.I. Weighted differential evolution-based heuristic computing for identification of Hammerstein systems in electrically stimulated muscle modeling. Soft Comput. 2022, 26, 8929–8945. [Google Scholar] [CrossRef]
- Mehmood, A.; Zameer, A.; Chaudhary, N.I.; Raja, M.A.Z. Backtracking search heuristics for identification of electrical muscle stimulation models using Hammerstein structure. Appl. Soft Comput. 2019, 84, 105705. [Google Scholar] [CrossRef]
- Altaf, F.; Chang, C.-L.; Chaudhary, N.I.; Raja, M.A.Z.; Cheema, K.M.; Shu, C.-M.; Milyani, A.H. Adaptive Evolutionary Computation for Nonlinear Hammerstein Control Autoregressive Systems with Key Term Separation Principle. Mathematics 2022, 10, 1001. [Google Scholar] [CrossRef]
- Nanda, S.J.; Panda, G.; Majhi, B. Improved identification of Hammerstein plants using new CPSO and IPSO algorithms. Expert Syst. Appl. 2010, 37, 6818–6831. [Google Scholar] [CrossRef]
- Gotmare, A.; Patidar, R.; George, N.V. Nonlinear system identification using a cuckoo search optimized adaptive Hammerstein model. Expert Syst. Appl. 2015, 42, 2538–2546. [Google Scholar] [CrossRef]
- Janjanam, L.; Saha, S.K.; Kar, R. Optimal Design of Hammerstein Cubic Spline Filter for Non-Linear System Modelling Based on Snake Optimiser. IEEE Trans. Ind. Electron. 2022. [Google Scholar] [CrossRef]
- Altaf, F.; Chang, C.-L.; Chaudhary, N.I.; Cheema, K.M.; Raja, M.A.Z.; Shu, C.-M.; Milyani, A.H. Novel Fractional Swarming with Key Term Separation for Input Nonlinear Control Autoregressive Systems. Fractal Fract. 2022, 6, 348. [Google Scholar] [CrossRef]
- Zheng, J.; Li, K.; Zhang, X. Wi-Fi Fingerprint-Based Indoor Localization Method via Standard Particle Swarm Optimization. Sensors 2022, 22, 5051. [Google Scholar] [CrossRef]
- Jiang, L.; Tajima, Y.; Wu, L. Application of Particle Swarm Optimization for Auto-Tuning of the Urban Flood Model. Water 2022, 14, 2819. [Google Scholar] [CrossRef]
- Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Cheema, K.M.; Raja, M.A.Z.; Milyani, A.H.; Azhari, A.A. Dwarf Mongoose Optimization Metaheuristics for Autoregressive Exogenous Model Identification. Mathematics 2022, 10, 3821. [Google Scholar] [CrossRef]
- Alissa, K.A.; Elkamchouchi, D.H.; Tarmissi, K.; Yafoz, A.; Alsini, R.; Alghushairy, O.; Mohamed, A.; Al Duhayyim, M. Dwarf Mongoose Optimization with Machine-Learning-Driven Ransomware Detection in Internet of Things Environment. Appl. Sci. 2022, 12, 9513. [Google Scholar] [CrossRef]
- Ji, H.; Hu, H.; Peng, X. Multi-Underwater Gliders Coverage Path Planning Based on Ant Colony Optimization. Electronics 2022, 11, 3021. [Google Scholar] [CrossRef]
- Liu, Q.; Zhu, S.; Chen, M.; Liu, W. Detecting Dynamic Communities in Vehicle Movements Using Ant Colony Optimization. Appl. Sci. 2022, 12, 7608. [Google Scholar] [CrossRef]
- Al-Shammaa, A.A.; MAbdurraqeeb, A.; Noman, A.M.; Alkuhayli, A.; Farh, H.M. Hardware-In-the-Loop Validation of Direct MPPT Based Cuckoo Search Optimization for Partially Shaded Photovoltaic System. Electronics 2022, 11, 1655. [Google Scholar] [CrossRef]
- Hameed, K.; Khan, W.; Abdalla, Y.S.; Al-Harbi, F.F.; Armghan, A.; Asif, M.; Qamar, M.S.; Ali, F.; Miah, S.; Alibakhshikenari, M.; et al. Far-Field DOA Estimation of Uncorrelated RADAR Signals through Coprime Arrays in Low SNR Regime by Implementing Cuckoo Search Algorithm. Electronics 2022, 11, 558. [Google Scholar] [CrossRef]
- Mehmood, K.; Chaudhary, N.I.; Khan, Z.A.; Raja, M.A.Z.; Cheema, K.M.; Milyani, A.H. Design of Aquila Optimization Heuristic for Identification of Control Autoregressive Systems. Mathematics 2022, 10, 1749. [Google Scholar] [CrossRef]
- Lee, J.G.; Chim, S.; Park, H.H. Energy-efficient cluster-head selection for wireless sensor networks using sampling-based spider monkey optimization. Sensors 2019, 19, 5281. [Google Scholar] [CrossRef]
- He, F.; Ye, Q. A Bearing Fault Diagnosis Method Based on Wavelet Packet Transform and Convolutional Neural Network Optimized by Simulated Annealing Algorithm. Sensors 2022, 22, 1410. [Google Scholar] [CrossRef]
- Xiao, S.; Tan, X.; Wang, J. A Simulated Annealing Algorithm and Grid Map-Based UAV Coverage Path Planning Method for 3D Reconstruction. Electronics 2021, 10, 853. [Google Scholar] [CrossRef]
- Thiagarajan, K.; Anandan, M.M.; Stateczny, A.; Divakarachari, P.B.; Lingappa, H.K. Satellite Image Classification Using a Hierarchical Ensemble Learning and Correlation Coefficient-Based Gravitational Search Algorithm. Remote Sens. 2021, 13, 4351. [Google Scholar] [CrossRef]
- Owolabi, T.; Rahman, M.A. Energy Band Gap Modeling of Doped Bismuth Ferrite Multifunctional Material Using Gravitational Search Algorithm Optimized Support Vector Regression. Crystals 2021, 11, 246. [Google Scholar] [CrossRef]
- Qais, M.H.; Hasanien, H.M.; Turky, R.A.; Alghuwainem, S.; Tostado-Véliz, M.; Jurado, F. Circle Search Algorithm: A Geometry-Based Metaheuristic Optimization Algorithm. Mathematics 2022, 10, 1626.
- Qais, M.H.; Hasanien, H.M.; Turky, R.A.; Alghuwainem, S.; Loo, K.-H.; Elgendy, M. Optimal PEM Fuel Cell Model Using a Novel Circle Search Algorithm. Electronics 2022, 11, 1808.
- Ehteram, M.; Ahmed, A.N.; Ling, L.; Fai, C.M.; Latif, S.D.; Afan, H.A.; Banadkooki, F.B.; El-Shafie, A. Pipeline scour rates prediction-based model utilizing a multilayer perceptron-colliding body algorithm. Water 2020, 12, 902.
- Yang, W.; Xia, K.; Li, T.; Xie, M.; Zhao, Y. An Improved Transient Search Optimization with Neighborhood Dimensional Learning for Global Optimization Problems. Symmetry 2021, 13, 244.
- Almabrok, A.; Psarakis, M.; Dounis, A. Fast Tuning of the PID Controller in An HVAC System Using the Big Bang–Big Crunch Algorithm and FPGA Technology. Algorithms 2018, 11, 146.
- Wu, T.; Li, X.; Zhou, D.; Li, N.; Shi, J. Differential Evolution Based Layer-Wise Weight Pruning for Compressing Deep Neural Networks. Sensors 2021, 21, 880.
- Li, M.; Zhang, D.; Lu, S.; Tang, X.; Phung, T. Differential Evolution-Based Overcurrent Protection for DC Microgrids. Energies 2021, 14, 5026.
- Drachal, K.; Pawłowski, M. A Review of the Applications of Genetic Algorithms to Forecasting Prices of Commodities. Economies 2021, 9, 6.
- Awan, W.A.; Zaidi, A.; Hussain, M.; Hussain, N.; Syed, I. The design of a wideband antenna with notching characteristics for small devices using a genetic algorithm. Mathematics 2021, 9, 2113.
- Strumberger, I.; Minovic, M.; Tuba, M.; Bacanin, N. Performance of Elephant Herding Optimization and Tree Growth Algorithm Adapted for Node Localization in Wireless Sensor Networks. Sensors 2019, 19, 2515.
- Abualigah, L.; Diabat, A.; Sumari, P.; Gandomi, A. A Novel Evolutionary Arithmetic Optimization Algorithm for Multilevel Thresholding Segmentation of COVID-19 CT Images. Processes 2021, 9, 1155.
- Sharma, A.; Khan, R.A.; Sharma, A.; Kashyap, D.; Rajput, S. A Novel Opposition-Based Arithmetic Optimization Algorithm for Parameter Extraction of PEM Fuel Cell. Electronics 2021, 10, 2834.
- Faradonbeh, R.S.; Monjezi, M.; Armaghani, D.J. Genetic programing and non-linear multiple regression techniques to predict backbreak in blasting operation. Eng. Comput. 2015, 32, 123–133.
- Masoumi, N.; Xiao, Y.; Rivaz, H. ARENA: Inter-modality affine registration using evolutionary strategy. Int. J. Comput. Assist. Radiol. Surg. 2018, 14, 441–450.
- Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
- Shaheen, M.A.; Yousri, D.; Fathy, A.; Hasanien, H.M.; Alkuhayli, A.; Muyeen, S.M. A novel application of improved marine predators algorithm and particle swarm optimization for solving the ORPD problem. Energies 2020, 13, 5679.
- Ebeed, M.; Alhejji, A.; Kamel, S.; Jurado, F. Solving the optimal reactive power dispatch using marine predators algorithm considering the uncertainties in load and wind-solar generation systems. Energies 2020, 13, 4316.
- He, Q.; Lan, Z.; Zhang, D.; Yang, L.; Luo, S. Improved Marine Predator Algorithm for Wireless Sensor Network Coverage Optimization Problem. Sustainability 2022, 14, 9944.
- Yang, L.; He, Q.; Yang, L.; Luo, S. A Fusion Multi-Strategy Marine Predator Algorithm for Mobile Robot Path Planning. Appl. Sci. 2022, 12, 9170.
- Wadood, A.; Khan, S.; Khan, B.M.; Ali, H.; Rehman, Z. Application of Marine Predator Algorithm in Solving the Problem of Directional Overcurrent Relay in Electrical Power System. Eng. Proc. 2021, 12, 9.
- Lu, X.; Nanehkaran, Y.A.; Fard, M.K. A Method for Optimal Detection of Lung Cancer Based on Deep Learning Optimized by Marine Predators Algorithm. Comput. Intell. Neurosci. 2021, 2021, 3694723.
- Hoang, N.-D.; Tran, X.-L. Remote Sensing–Based Urban Green Space Detection Using Marine Predators Algorithm Optimized Machine Learning Approach. Math. Probl. Eng. 2021, 2021, 5586913.
- Yang, W.; Xia, K.; Li, T.; Xie, M.; Song, F. A Multi-Strategy Marine Predator Algorithm and Its Application in Joint Regularization Semi-Supervised ELM. Mathematics 2021, 9, 291.
- Ezugwu, A.E.; Agushaka, J.O.; Abualigah, L.; Mirjalili, S.; Gandomi, A.H. Prairie Dog Optimization Algorithm. Neural Comput. Appl. 2022, 34, 20017–20065.
- Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
- Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
- Xiao, Y.; Liu, J.; Alkhathlan, A. Informatisation of educational reform based on fractional differential equations. Appl. Math. Nonlinear Sci. 2021.
- Zhang, X.; Alahmadi, D. Study on the maximum value of flight distance based on the fractional differential equation for calculating the best path of shot put. Appl. Math. Nonlinear Sci. 2021.
- Che, Y.; Keir, M.Y.A. Study on the training model of football movement trajectory drop point based on fractional differential equation. Appl. Math. Nonlinear Sci. 2021, 7, 425–430.
- Chandra, S.; Hayashibe, M.; Thondiyath, A. Muscle Fatigue Induced Hand Tremor Clustering in Dynamic Laparoscopic Manipulation. IEEE Trans. Syst. Man Cybern. Syst. 2018, 50, 5420–5431.
- Xu, W.; Chu, B.; Rogers, E. Iterative learning control for robotic-assisted upper limb stroke rehabilitation in the presence of muscle fatigue. Control Eng. Pract. 2014, 31, 63–72.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).