Article

Improved Manta Ray Foraging Optimization for Parameters Identification of Magnetorheological Dampers

1 State Key Laboratory of Mechanical Behavior and System Safety of Traffic Engineering Structures, Shijiazhuang Tiedao University, Shijiazhuang 050043, China
2 School of Civil Engineering, Shijiazhuang Tiedao University, Shijiazhuang 050043, China
3 School of Water Conservancy and Hydropower, Hebei University of Engineering, Handan 056038, China
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(18), 2230; https://doi.org/10.3390/math9182230
Submission received: 13 August 2021 / Revised: 4 September 2021 / Accepted: 6 September 2021 / Published: 10 September 2021
(This article belongs to the Section Engineering Mathematics)

Abstract

Magnetorheological (MR) dampers play a crucial role in various engineering systems, and identifying the control parameters of MR damper models without any prior knowledge has become a pressing problem. In this study, to identify the control parameters of MR damper models more accurately, an improved manta ray foraging optimization (IMRFO) is proposed. The new algorithm introduces a searching control factor designed to counter the weak exploration ability of MRFO, which effectively increases the global exploration of the algorithm. To prevent premature convergence to local optima, an adaptive weight coefficient based on the Levy flight is designed. Moreover, by introducing the Morlet wavelet mutation strategy into the algorithm, the mutation space is adaptively adjusted to enhance both the ability of the algorithm to escape stagnation and its convergence rate. The performance of the IMRFO is evaluated on two sets of benchmark functions, and the results confirm the competitiveness of the proposed algorithm. Additionally, the IMRFO is applied to identifying the control parameters of MR dampers; the simulation results reveal the effectiveness and practicality of the IMRFO in engineering applications.

1. Introduction

Magnetorheological (MR) dampers are a kind of intelligent semi-active control device whose output damping force can be controlled by adjusting the current in the coil [1,2]. MR dampers are known for their fast response, low energy consumption and wide control range; therefore, they are widely applied in various engineering domains, e.g., vehicle systems [3,4]. However, the mathematical models of MR dampers are very complicated because of their special mechanical characteristic, the hysteretic property. Xu et al. [5] introduced a new single-rod MR damper with a combined volume compensator and proposed a design method for the combined compensator with independent functions of volume compensation and compensation force. Yu et al. [6] proposed a novel compact rotary MR damper with variable damping and stiffness, designing a unique structure that contains two driven disks and an active rotary disk. Boreiry et al. [7] investigated the chaotic response of a nonlinear seven-degree-of-freedom full vehicle model equipped with an MR damper, and the equations of motion were derived by employing the modified Bouc–Wen model for the MR damper. Many MR damper models with different uses and functions have emerged. Among them, the Bouc–Wen model is a typical one, which contains numerous unknown parameters. In addition, a range of other factors, including the current magnitude, piston speed and excitation, also affect the accuracy of the model, which makes parameter identification of the model more difficult. Therefore, finding an effective method to identify the control parameters of MR dampers is of great importance and has become a challenging task. In addition, the possible magneto-thermal effect of the MR damper on the MR fluid is also becoming a pressing issue [8,9,10].
With the rapid development of computer technologies, meta-heuristic algorithms have become a suitable technology for establishing accurate and reliable MR damper models, and many scholars have carried out relevant research and achieved promising results. Giuclea et al. [11] used a genetic algorithm (GA) to identify the control parameters of a modified Bouc–Wen model, in which the applied currents were determined using curve fitting. However, in addition to its implementation complexity, a GA may suffer from premature convergence, and its optimization performance depends mostly on the probabilities chosen for the crossover, mutation and selection operators [12]. Kwok et al. [13] presented a parameter identification technique for a Bouc–Wen damper model using particle swarm optimization (PSO), employing experimental force–velocity data under various operating conditions. Although PSO was improved with a stopping criterion in that study, it easily becomes trapped in local optima and suffers from premature convergence [14]. Thus, the application of PSO to MR dampers is limited, owing to its inherent drawbacks.
Despite the shortcomings of PSO and GAs, meta-heuristic methods are particularly popular for their randomness, easy implementation and black-box treatment of problems [15,16]. An increasing number of scholars have proposed meta-heuristic algorithms inspired by different mechanisms, studying animal behaviors and physical phenomena in nature [17,18,19]. Some of the most popular algorithms are the bat algorithm (BA) [20], squirrel search algorithm (SSA) [21], artificial ecosystem-based optimization [22], whale optimization algorithm (WOA) [23], virus colony search (VCS) [24], fruit fly optimization algorithm (FOA) [25], butterfly optimization algorithm (BOA) [26], spotted hyena optimizer (SHO) [27], grasshopper optimization algorithm (GOA) [28], flower pollination algorithm (FPA) [29], crow search algorithm (CSA) [30], grey wolf optimizer (GWO) [31], water cycle algorithm (WCA) [32], gravitational search algorithm (GSA) [33], atom search optimization (ASO) [34], Henry gas solubility optimization (HGSO) [35], charged system search (CSS) [36], water evaporation optimization (WEO) [37], equilibrium optimizer (EO) [38] and supply–demand-based optimization (SDO) [39]. In the last ten years, a great number of bio-inspired and nature-inspired optimization algorithms have been proposed. These algorithms are clearly distinguished by their natural or biological inspirations; therefore, each can be categorized easily and without ambiguity. In addition, the algorithms differ in their search behaviors, i.e., how they update candidate solutions during the iteration process [40]. Although these meta-heuristic methods perform well in tackling some challenging real-world problems, they have disadvantages, and their optimization performance still remains to be improved, especially for inherently highly nonlinear and non-convex problems, such as the economic dispatch (ED) problem [41].
To overcome the drawbacks of these algorithms, hybridization can produce new algorithmic behaviors that are effective in improving their optimization performance.
Manta ray foraging optimization (MRFO) [42,43] is a new meta-heuristic algorithm that simulates the foraging behavior of manta rays in nature. Local exploitation is performed by the chain foraging and somersault foraging behaviors, while global exploration is performed by the cyclone foraging behavior. This algorithm shows strong optimization capability and has been successfully applied in several engineering domains, such as geophysics [44], energy allocation [45,46,47], image processing [48] and electric power [49]. These successful applications in the literature confirm that MRFO is effective in solving different complex real-world problems. Even though MRFO belongs to the category of meta-heuristic algorithms, it is significantly distinct from other widely employed meta-heuristics in ideology and conception. The major difference between MRFO and PSO lies in their biological inspirations: PSO is inspired by the movement of bird flocks or fish schools in nature, whereas MRFO is inspired by the social foraging behaviors of manta rays. Another distinct difference between the two is the solution searching mechanism. The solutions in PSO are produced from a combination of the global best solution found thus far, the local best solution and the movement velocity of the individuals, whereas in MRFO, the solutions are produced from a combination of the global best solution found thus far and the solution in front of the current individual, switching among different movement strategies. A GA and MRFO are also quite different. A GA is inspired by Darwin's theory of evolution, which is quite different from the social foraging behaviors of manta rays in MRFO. The second difference is the representation of problem variables: in GAs, the problem variables are encoded in fixed-length bit strings, while in MRFO, the problem variables are used directly.
Moreover, in GAs, by performing the roulette wheel selection strategy, better solutions have a greater probability of creating new solutions, and worse solutions are likely to be replaced by new, better solutions [27]; in MRFO, by contrast, all the individuals in the population have the same probability of improving their solutions. Although both MRFO and bacterial foraging optimization (BFO) [50] are swarm-based, bio-inspired meta-heuristic techniques that model biological foraging behaviors, there are still some significant differences. The major difference between MRFO and BFO is that the search concepts in the foraging behaviors are different: MRFO models three foraging behaviors of manta rays, namely chain foraging, cyclone foraging and somersault foraging, while BFO models the social foraging behaviors of Escherichia coli, such as chemotaxis, swarming, reproduction and elimination–dispersal. The second difference is that BFO produces new solutions by adding a random direction with a fixed-length step, whereas MRFO produces new solutions with respect to the best solution and the solution in front of the current individual. Additionally, when updating the solutions in BFO, the half of the solutions with higher health is reproduced and the half with lower health is discarded, while MRFO accepts new solutions that are better than the current ones. The last difference is that BFO uses the health degree of the bacteria as the fitness values of solutions, while MRFO generally uses the function values of the given problem as the fitness values. It is obvious that BFO and MRFO follow totally different approaches when solving optimization problems.
However, similar to other meta-heuristics, MRFO also suffers from some disadvantages, i.e., premature convergence, trapping into local optima and weak exploration [51,52]. Many scholars have adopted various methods to enhance the performance of MRFO in the literature. Turgut [51] integrated the ten best-performing chaotic maps into MRFO and proposed a novel chaotic MRFO for the thermal design problem; this improved version can effectively escape local optima and increase the convergence rate of the algorithm. Calasan et al. [53] enhanced the optimization ability of the algorithm by integrating chaotic numbers produced by the logistic map into MRFO; this variant was used to estimate transformer parameters. Houssein et al. [52] proposed an efficient MRFO algorithm by hybridizing opposition-based learning with the initialization steps of MRFO, which was used to solve the image segmentation problem. Hassan et al. [54] solved economic emission dispatch problems by hybridizing the gradient-based optimizer with MRFO, which may help avoid local optima and accelerate the convergence rate. Elaziz et al. [55] utilized the heredity and non-locality properties of the Grunwald–Letnikov fractional differintegral operator to improve the exploitation ability of the algorithm. Elaziz et al. [56] proposed a modified MRFO based on differential evolution as an effective feature selection approach to identify COVID-19 patients by analyzing their chest images. Jena et al. [57] introduced a new attacking MRFO algorithm, which was used for maximum 3D Tsallis entropy-based multilevel thresholding of brain MR images. Yang et al. [58] devised a hybrid framework based on MRFO and grey prediction theory to forecast and evaluate the influence of PM10 on public health. Ramadan et al. [59] used MRFO to search for a feasible configuration of the distribution network to achieve a fault-tolerant, fast-recovery, reliably configurable DN in smart grids.
Houssein et al. [60] used MRFO to extract the PV parameters of single-, double- and triple-diode models. MRFO shows a good ability to exploit the search space for unimodal problems, but its weak exploration and tendency to become trapped in local optima for multimodal problems, as well as the imbalance between exploration and exploitation for hybrid problems, remain major issues.
It is obvious that designing improved versions of MRFO is feasible, and the above implementations are good examples of this capability. However, it seems difficult for these improved versions to simultaneously improve all the aspects in which the algorithm falls short, such as exploration, exploitation, the balance between the two and the convergence rate. Therefore, it is necessary to conduct research attempts to establish a comprehensive improvement strategy for effectively tackling complex optimization problems. Based on the investigations mentioned above, a new hybrid model that combines different operators and designs several search strategies to comprehensively enhance the optimization performance of the algorithm is one of the major research directions. This is the main motivation of this study. An effective optimization method can accurately obtain the parameters of the complex MR damper. These parameters have a large influence on the output damping force, which gives the damper better hysteretic and dynamic characteristics. Therefore, such a method has wide application value in industrial and civil fields.
In light of the facts mentioned above, a hybrid MR damper model based on an optimization technique is proposed in this study. To obtain the optimal control parameters of the MR damper, an improved MRFO (IMRFO) combining a searching control factor, an adaptive weight coefficient with the Levy flight and a wavelet mutation is presented. Firstly, based on the analysis of the exploration probability of MRFO, the weak exploration capability of MRFO is remedied by introducing the searching control mechanism, in which a searching control factor adaptively adjusts the search option according to the number of iterations, promoting the exploration ability of the algorithm. Secondly, the weight coefficient in the cyclone foraging is redefined by combining it with the Levy flight; this search mechanism can effectively promote the diversity of search individuals and prevent premature convergence to local optima. Finally, the Morlet wavelet mutation strategy is employed to dynamically adjust the mutation space by calculating the energy concentration of the wavelet function; this strategy can improve the ability of the algorithm to step out of stagnation and its convergence rate. The effectiveness of the IMRFO is verified on two sets of standard benchmark functions. Additionally, the proposed IMRFO is applied to optimally identify the control parameters of MR dampers. The results are compared with those of other algorithms, demonstrating the superiority of the IMRFO in solving complex engineering problems.
The novelty points of this study are highlighted below.
In this study, a searching control factor, an adaptive weight coefficient and a wavelet mutation strategy are designed and integrated into MRFO.
The proposed searching control factor is a decreasing time-dependent function with random oscillation; it is shown to effectively improve the exploration ability of the algorithm.
The adaptive weight coefficient is equipped with the Levy flight, which adjusts the step size of the individuals according to a Levy distribution over the iterations; it strengthens the diversity of search individuals.
The Morlet wavelet mutation utilizes the multi-resolution and energy concentration characteristics of the wavelet function to step out of stagnation and increase the convergence rate of the algorithm.
The Bouc–Wen model is established under various working conditions; the results show that the IMRFO is more successful than its competitors in identifying the control parameters of the MR damper models.
The rest of this paper is organized as follows: Section 2 briefly introduces the basic MRFO, and Section 3 describes the proposed IMRFO algorithm in detail. The performance analysis of the IMRFO on benchmark functions is provided in Section 4. The experiment results and analysis for identifying the control parameters of MR dampers are presented in Section 5. Section 6 gives the conclusions and suggests a direction for future research.

2. Manta Ray Foraging Optimization (MRFO)

MRFO performs the optimization process by simulating the foraging behaviors of manta rays in nature to obtain the global optimum in the search space. MRFO has the following three foraging behaviors: chain foraging, cyclone foraging and somersault foraging. The chain foraging and somersault foraging behaviors contribute mainly to the exploitation search, while the cyclone foraging focuses on the exploration search. When solving an optimization problem using MRFO, the following update mechanisms are employed via the three foraging behaviors.
The chain foraging strategy of manta rays is as follows [42]:
x_i(t+1) = \begin{cases} x_i(t) + r\,(x_{best}(t) - x_i(t)) + \alpha\,(x_{best}(t) - x_i(t)), & i = 1 \\ x_i(t) + r\,(x_{i-1}(t) - x_i(t)) + \alpha\,(x_{best}(t) - x_i(t)), & i = 2, \ldots, N \end{cases} \quad (1)

\alpha = 2 r \sqrt{|\log(r)|} \quad (2)
The cyclone foraging of manta rays is as follows [42]:
x_i(t+1) = \begin{cases} x_{best}(t) + r\,(x_{best}(t) - x_i(t)) + \beta\,(x_{best}(t) - x_i(t)), & i = 1 \\ x_{best}(t) + r\,(x_{i-1}(t) - x_i(t)) + \beta\,(x_{best}(t) - x_i(t)), & i = 2, \ldots, N \end{cases} \quad (3)

\beta = 2 e^{r_1 \frac{T - t + 1}{T}} \sin(2 \pi r_1) \quad (4)

and

x_i(t+1) = \begin{cases} x_{rand}(t) + r\,(x_{rand}(t) - x_i(t)) + \beta\,(x_{rand}(t) - x_i(t)), & i = 1 \\ x_{rand}(t) + r\,(x_{i-1}(t) - x_i(t)) + \beta\,(x_{rand}(t) - x_i(t)), & i = 2, \ldots, N \end{cases} \quad (5)

x_{rand}(t) = Lb + r\,(Ub - Lb) \quad (6)
The somersault foraging of manta rays is as follows [42]:
x_i(t+1) = x_i(t) + S\,(r_2 x_{best} - r_3 x_i(t)), \quad i = 1, \ldots, N \quad (7)

where xi(t) is the position of the ith individual at iteration t, r is a random vector in [0, 1], r1, r2 and r3 are random numbers in [0, 1], xbest(t) is the food with the highest concentration, xrand(t) is a position randomly produced in the search space, Lb and Ub are the lower and upper boundaries of the variables, respectively, T is the maximum number of iterations and S is the somersault factor (set to 2 in [42]).
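Under the definitions above, one MRFO iteration can be sketched in Python as follows. This is a minimal, illustrative reading of Equations (1)–(7), not the authors' reference implementation; the somersault factor S = 2 follows the original MRFO paper [42], and all function names are ours.

```python
import numpy as np

def mrfo_step(pop, best, t, T, lb, ub, rng):
    """One MRFO position update (Eqs. (1)-(6)): chain foraging with
    probability 0.5, otherwise cyclone foraging, whose reference point
    switches from a random position (exploration) to the best-so-far
    position (exploitation) as t/T grows."""
    N, D = pop.shape
    new = np.empty_like(pop)
    for i in range(N):
        r = rng.random(D)
        if rng.random() < 0.5:  # cyclone foraging, Eqs. (3)-(6)
            r1 = rng.random(D)
            beta = 2.0 * np.exp(r1 * (T - t + 1) / T) * np.sin(2.0 * np.pi * r1)
            if t / T < rng.random():               # explore around x_rand
                ref = lb + rng.random(D) * (ub - lb)
            else:                                  # exploit around x_best
                ref = best
            front = ref if i == 0 else pop[i - 1]  # the manta ray in front
            new[i] = ref + r * (front - pop[i]) + beta * (ref - pop[i])
        else:                   # chain foraging, Eqs. (1)-(2)
            alpha = 2.0 * r * np.sqrt(np.abs(np.log(r)))
            front = best if i == 0 else pop[i - 1]
            new[i] = pop[i] + r * (front - pop[i]) + alpha * (best - pop[i])
    return np.clip(new, lb, ub)

def somersault(pop, best, rng, S=2.0):
    """Somersault foraging, Eq. (7), with somersault factor S."""
    N, _ = pop.shape
    r2 = rng.random((N, 1))
    r3 = rng.random((N, 1))
    return pop + S * (r2 * best - r3 * pop)
```

In a full optimizer, each update would be followed by a fitness evaluation that keeps only improving positions.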

3. Improved Manta Ray Foraging Optimization (IMRFO)

In the exploitation phase of MRFO, each individual updates its position with respect to the individual with the best fitness by adjusting the values of α and β, which results in a decrease in population diversity and stagnation in local optima; meanwhile, the lack of fine-tuning ability also causes weak solution stability. To overcome these disadvantages, an improved MRFO, named IMRFO, is proposed. In MRFO, the value of t/T is employed to balance exploratory and exploitative searches, and the exploration probability of 0.25 indicates the weak exploration ability of the algorithm; therefore, a searching control factor is proposed in the IMRFO to improve the searching and exploration processes. To enhance the search efficiency of the algorithm, an adaptive weight coefficient with the Levy flight is introduced, which can maximize the diversification of individuals and maintain the balance between population diversity and concentration. To avoid premature convergence to local optima and to guarantee solution stability, the Morlet wavelet mutation with fine-tuning ability is incorporated into the algorithm.

3.1. Searching Control Factor

In MRFO, over the whole iteration process, there is a probability of 50% of performing the cyclone foraging, in which the first half of the searches contributes to exploration, according to the value of t/T. That is, the probability of exploration is only 25%, and this relatively low probability of exploration occurs only in the first half of the optimization process, indicating the weak exploration ability of MRFO. The searching behavior of MRFO is controlled by the value of t/T, which increases linearly as the iterations rise; in the IMRFO algorithm, by contrast, the searching behavior is controlled by a coefficient, ps, called the searching control factor, which not only enables the algorithm to perform exploration with high probability in the first half of the optimization process but also makes it possible for the algorithm to perform exploration in the second half. The searching control factor ps is defined as follows:
p_s(t) = \left(1 - \frac{t}{T}\right)^{5r} \quad (8)
where r is a random number in the interval (0, 1]. Figure 1 gives the time-dependent curve of the searching control factor. When ps > 0.5, the IMRFO performs exploration; otherwise, the algorithm performs exploitation. Obviously, from Figure 1, the searching control factor shows a decreasing trend with random oscillation, which also obliges the algorithm to explore the search space in the later iterations.
Let θ = 1 − t/T; then p_s(t) = θ^{5r}. Treating r and θ as uniform over the unit square, the probability of ps > 0.5 is given by the following:

P\{p_s(t) > 0.5\} \approx 0.8509 \quad (9)
Therefore, the probability of exploration in the IMRFO algorithm is 0.5 × 0.8509 = 0.4254 over the course of optimization. Figure 2 depicts the schematic diagram of the exploration probability of the algorithm based on the searching control factor ps. Hence, the global exploration of the algorithm is improved from 25% to 42.54% through the searching control factor ps.
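As a sketch, assuming the form p_s(t) = (1 − t/T)^{5r} for the searching control factor, the factor and the resulting explore/exploit switch can be written as follows (the horizon T and the random seed are arbitrary illustration choices):

```python
import numpy as np

def searching_control_factor(t, T, rng):
    """Searching control factor p_s: decreasing in t, with random
    oscillation supplied by r ~ U(0, 1]."""
    r = 1.0 - rng.random()                 # uniform on (0, 1]
    return (1.0 - t / T) ** (5.0 * r)

rng = np.random.default_rng(1)
T = 1000
# The IMRFO explores while p_s > 0.5 and exploits otherwise.
phases = ["explore" if searching_control_factor(t, T, rng) > 0.5 else "exploit"
          for t in range(1, T + 1)]
```

Plotting `phases` over t reproduces the qualitative behavior described above: exploration dominates early, yet still occurs occasionally late in the run.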

3.2. Adaptive Weight Coefficient with Levy Flight

Levy flight, as a simulation of the foraging activities of creatures, has been developed as an efficient search mechanism for an unknown space; therefore, it is widely employed to promote the search efficiency of various meta-heuristics [61,62,63,64,65,66]. In this part, the fat-tailed characteristic of the Levy flight, which produces numerous short steps punctuated by a few long steps, is used to construct a non-local random search mechanism that improves the search behavior in the cyclone foraging strategy of MRFO; this can effectively promote the diversity of search individuals and prevent premature convergence to local optima.
The random step length of the Levy flight is offered by the following Levy distribution [66]:
L(s) \sim t^{-\lambda}, \quad 1 < \lambda \leq 3 \quad (10)
where λ is a stability/tail index, and s is the step length. According to Mantegna’s algorithm [66], the step length of the Levy flight is given as follows:
s = \frac{u}{|v|^{1/\beta}} \quad (11)
where u and v obey normal distributions. That is:
u \sim N(0, \sigma_u^2), \quad v \sim N(0, 1) \quad (12)
\sigma_u = \left[ \frac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right)\,\beta\,2^{(\beta-1)/2}} \right]^{1/\beta} \quad (13)
Γ is the standard Gamma function and the default value of β is set to 1.5.
Therefore, an adaptive weight coefficient with the Levy flight in the cyclone foraging strategy is designed as follows:
\beta_L = e^{\frac{2(T - t + 1)}{T}} \cdot \frac{u}{2\,|v|^{1/\beta}} \quad (14)
Observing Equation (14), on the one hand, the multiple short steps produced frequently by the Levy flight promote the exploitation capacity, while the long steps produced occasionally enhance the exploration capacity, guaranteeing the local optima avoidance of the algorithm. On the other hand, e^{2(T-t+1)/T} is a function that decreases with the iterations; following its decreasing trend, a larger search scope is provided during the early iterations and a smaller search scope in the later iterations. This characteristic can promote the search efficiency of the algorithm and prevent the step lengths from going beyond the boundaries of the variables. The behavior of βL during two runs with 1000 iterations is demonstrated in Figure 3.
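Mantegna's step and the adaptive weight can be sketched in Python as follows. This is a minimal illustration assuming the reconstruction β_L = e^{2(T−t+1)/T} · u/(2|v|^{1/β}); the function names are ours.

```python
import math
import numpy as np

def mantegna_step(size, rng, beta=1.5):
    """Levy-distributed step lengths via Mantegna's algorithm (Eqs. (10)-(13))."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)     # u ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, size)         # v ~ N(0, 1)
    return u / np.abs(v) ** (1.0 / beta)

def adaptive_weight(t, T, size, rng, beta=1.5):
    """Adaptive weight beta_L: a Levy step whose scale factor
    e^{2(T-t+1)/T} shrinks as the iteration counter t approaches T."""
    return np.exp(2.0 * (T - t + 1) / T) * mantegna_step(size, rng, beta) / 2.0
```

Sampling `mantegna_step` many times shows the fat-tailed pattern described in the text: mostly short steps, with rare long jumps that carry individuals away from crowded regions.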
The cyclone foraging of the IMRFO algorithm can be given as follows:
x_i(t+1) = \begin{cases} x_{best}(t) + r\,(x_{best}(t) - x_i(t)) + \beta_L\,(x_{best}(t) - x_i(t)), & i = 1 \\ x_{best}(t) + r\,(x_{i-1}(t) - x_i(t)) + \beta_L\,(x_{best}(t) - x_i(t)), & i = 2, \ldots, N \end{cases}, \quad p_s < 0.5 \quad (15)

x_i(t+1) = \begin{cases} x_{rand}(t) + r\,(x_{rand}(t) - x_i(t)) + \beta_L\,(x_{rand}(t) - x_i(t)), & i = 1 \\ x_{rand}(t) + r\,(x_{i-1}(t) - x_i(t)) + \beta_L\,(x_{rand}(t) - x_i(t)), & i = 2, \ldots, N \end{cases}, \quad p_s \geq 0.5 \quad (16)

3.3. Wavelet Mutation Strategy

It is easy for MRFO to become trapped in local optima, which results in an ineffective search for the global optimum and instability of solutions. To improve the ability of the algorithm to step out of stagnation, as well as its convergence rate and the stability of its solutions, the Morlet wavelet mutation is incorporated into the somersault foraging strategy of the IMRFO algorithm. The wavelet mutation dynamically adjusts the mutation by combining the translations and dilations of a wavelet function [66,67]. For fine-tuning purposes, the dilation parameter of the wavelet function is controlled to reduce its amplitude, which constrains the mutation space as the iterations increase.
Assume that pm is the mutation probability and r4 is a random number in [0, 1]. By incorporating the wavelet mutation, the somersault foraging strategy is improved as follows:
x_i(t+1) = \begin{cases} x_i(t) + \sigma_w\,(x_i(t) - Lb), & \sigma_w < 0,\ r_4 < p_m \\ x_i(t) + \sigma_w\,(Ub - x_i(t)), & \sigma_w \geq 0,\ r_4 < p_m \\ x_i(t) + S\,(r_2 x_{best} - r_3 x_i(t)), & r_4 \geq p_m \end{cases} \quad (17)
where pm is the mutation probability, set to 0.1, and σw is the mutation coefficient obtained from the wavelet function, which is given as follows:
\sigma_w = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{\varphi_i}{a}\right) \quad (18)
where ψ ( x ) is the Morlet wavelet function and it is defined as follows:
\psi(x) = e^{-\frac{x^2}{2}} \cos(5x) \quad (19)
More than 99% of the total energy of the wavelet function is contained in [−2.5, 2.5]. Thus, φi can be randomly generated from [−2.5a, 2.5a] [67]. a is the dilation parameter, which increases from 1 to g as the number of iterations increases. To avoid missing the global optimum, a monotonically increasing function can be given as follows [67]:
a = e^{-\ln(g)\left(1 - \frac{t}{T}\right) + \ln(g)} \quad (20)
where g is a constant set to 100,000. The calculation of the Morlet wavelet mutation is visualized in Figure 4. From Figure 4, the dilation parameter increases as t increases, and thus the amplitude of the Morlet wavelet function decreases, which gradually reduces the magnitude of the mutation; this merit ensures that the algorithm can jump out of local optima while improving the precision of the solutions. Moreover, the overall positive and negative mutations are nearly balanced during the iteration process, and this property also ensures good solution stability.
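The wavelet mutation machinery can be sketched as follows. This is a minimal illustration assuming σ_w = ψ(φ/a)/√a with φ drawn uniformly from [−2.5a, 2.5a]; the function names are ours.

```python
import numpy as np

def morlet(x):
    """Morlet wavelet: exp(-x^2/2) * cos(5x)."""
    return np.exp(-x ** 2 / 2.0) * np.cos(5.0 * x)

def mutation_coeff(t, T, rng, g=100_000):
    """Mutation coefficient sigma_w. The dilation parameter `a` grows
    from about 1 to g over the run, which shrinks the wavelet amplitude
    and hence the mutation magnitude."""
    a = np.exp(-np.log(g) * (1.0 - t / T) + np.log(g))
    phi = rng.uniform(-2.5 * a, 2.5 * a)   # ~99% of the wavelet energy lies here
    return morlet(phi / a) / np.sqrt(a)

def wavelet_mutate(x, lb, ub, sigma_w):
    """Mutation branch: move toward a boundary while keeping x feasible."""
    if sigma_w < 0:
        return x + sigma_w * (x - lb)
    return x + sigma_w * (ub - x)
```

Because |ψ| ≤ 1 and a ≥ 1, the mutated position always stays inside [lb, ub], which matches the fine-tuning behavior described above.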

3.4. The Proposed IMRFO Algorithm

In the IMRFO, the improvements to MRFO consist of the following three parts: first, a searching control factor is designed to address the weak exploration ability of MRFO, which can effectively increase the global exploration of the algorithm; second, to prevent premature convergence to local optima, an adaptive weight coefficient based on the Levy flight is designed to promote the search efficiency of the algorithm; third, the Morlet wavelet mutation strategy is introduced into the algorithm, in which the mutation space is adaptively adjusted according to the energy concentration of the wavelet function, helping the algorithm to step out of stagnation and speeding up the convergence rate. In the somersault foraging of the IMRFO, when the condition r4 < pm is met, the Morlet wavelet mutation strategy is performed. The mutation coefficient σw is calculated via Equations (18)–(20), and the position of the individual is then updated with respect to the upper or lower boundary of the search space according to the sign of σw, based on Equation (17). In this way, the Morlet wavelet mutation strategy is well integrated into the algorithm.
The pseudocode of the IMRFO algorithm is given in Figure 5, and the specific steps are as follows:
Step 1: The related parameters of the IMRFO are initialized, such as the population size, N, the maximum number of iterations, T, and the mutation probability, pm.
Step 2: The initial population {x1(0), x2(0), …, xN(0)} is randomly produced, the fitness of each individual is evaluated and the best solution x*(0) is stored.
Step 3: If t ≤ T, then for each individual xi(t), i = 1, …, N, if the condition rand < 0.5 is met, the searching control factor ps is calculated according to Equation (8).
Step 4: All the individuals are sorted by fitness in ascending order and the adaptive weight coefficient is calculated according to Equations (10)–(14); if ps < 0.5, the individual position is updated according to Equation (15); otherwise, it is updated according to Equation (16). If the condition rand < 0.5 is not met, the individual position is updated according to Equation (1).
Step 5: Any individual that is out of the boundaries is relocated within the search space; the fitness value of each individual is evaluated and the best solution found thus far, x*(t), is stored.
Step 6: For each individual xi(t), the mutation coefficient σw is calculated according to Equations (18)–(20), and the individual position is updated according to Equation (17).
Step 7: Any individual that is out of the boundaries is relocated within the search space; the fitness value of each individual is evaluated and the best solution found thus far, x*(t), is stored.
Step 8: If the stop criterion is met, the best solution found thus far, x*(t), is returned; otherwise, set t = t + 1 and go to Step 3.
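Putting the steps above together, a compact, self-contained Python sketch of the whole IMRFO loop might look like the following. This is illustrative rather than the authors' implementation: pm = 0.1 and g = 100,000 follow the values stated in the text, the equation forms for ps and βL are our reconstructions, and the population size and iteration budget are arbitrary defaults.

```python
import math
import numpy as np

def imrfo(f, lb, ub, dim, n=50, T=500, pm=0.1, beta=1.5, g=1e5, seed=0):
    """Minimize f over [lb, ub]^dim following the IMRFO steps (a sketch)."""
    rng = np.random.default_rng(seed)
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    pop = lb + rng.random((n, dim)) * (ub - lb)            # initialization
    fit = np.array([f(x) for x in pop])
    best = pop[fit.argmin()].copy()
    for t in range(1, T + 1):
        order = fit.argsort()
        pop, fit = pop[order], fit[order]                  # sort by fitness
        for i in range(n):
            r = rng.random(dim)
            if rng.random() < 0.5:                         # cyclone with p_s, beta_L
                ps = (1.0 - t / T) ** (5.0 * (1.0 - rng.random()))
                u = rng.normal(0.0, sigma_u, dim)
                v = rng.normal(0.0, 1.0, dim)
                bL = np.exp(2.0 * (T - t + 1) / T) * u / (2.0 * np.abs(v) ** (1 / beta))
                ref = best if ps < 0.5 else lb + rng.random(dim) * (ub - lb)
                front = ref if i == 0 else pop[i - 1]
                pop[i] = ref + r * (front - pop[i]) + bL * (ref - pop[i])
            else:                                          # chain foraging
                alpha = 2.0 * r * np.sqrt(np.abs(np.log(r)))
                front = best if i == 0 else pop[i - 1]
                pop[i] = pop[i] + r * (front - pop[i]) + alpha * (best - pop[i])
        pop = np.clip(pop, lb, ub)                         # boundary handling
        fit = np.array([f(x) for x in pop])
        if fit.min() < f(best):
            best = pop[fit.argmin()].copy()
        for i in range(n):                                 # somersault / mutation
            if rng.random() < pm:                          # Morlet wavelet mutation
                a = np.exp(-np.log(g) * (1.0 - t / T) + np.log(g))
                phi = rng.uniform(-2.5 * a, 2.5 * a)
                sw = np.exp(-(phi / a) ** 2 / 2) * np.cos(5 * phi / a) / np.sqrt(a)
                pop[i] = pop[i] + sw * (pop[i] - lb) if sw < 0 else pop[i] + sw * (ub - pop[i])
            else:                                          # somersault, S = 2
                pop[i] = pop[i] + 2.0 * (rng.random() * best - rng.random() * pop[i])
        pop = np.clip(pop, lb, ub)
        fit = np.array([f(x) for x in pop])
        if fit.min() < f(best):
            best = pop[fit.argmin()].copy()
    return best, f(best)
```

For example, `imrfo(lambda x: float(np.sum(x ** 2)), -10.0, 10.0, dim=5)` returns an approximate minimizer of the sphere function together with its objective value.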

4. Experimental Analysis and Results

To analyze the performance of the IMRFO algorithm, a set of benchmark functions is employed in this study. This function suite contains the following three types: unimodal (UF), multimodal (MF) and composite (CF) functions. The UF functions (f1–f7) check the exploitation ability of the algorithms, and the MF functions (f8–f13) reveal their exploration ability. The CF functions (f14–f21) are selected from the CEC 2014 benchmark suite [68]; they are often employed in evaluating optimization algorithms and can check the balance between exploration and exploitation. Details of these benchmark problems are described in the literature [69,70]. The performance of the IMRFO is compared with that of other optimization methods. The comparative optimizers in this study include: PSO, as the most popular swarm-based algorithm; GSA, as the most well-known physics-based algorithm; WOA and MRFO, as recent optimizers; the comprehensive learning particle swarm optimizer (CLPSO) [71] and the covariance matrix adaptation evolution strategy (CMA-ES) [72], as highly effective optimizers; and PSOGSA and improved grey wolf optimization (IGWO) [7], as recent improved meta-heuristics with high performance.
For all the considered optimizers, the swarm size and maximum fitness evaluations (FEs) are set to 50 and 25,000, respectively. All the results of the algorithms are based on the average performance of 30 independent runs. The parameter settings of all the algorithms are given in Table 1.
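The evaluation protocol above (a fixed FE budget, results averaged over 30 independent runs) can be sketched as a small harness. The random-search "optimizer" below is only a hypothetical stand-in for the compared algorithms, and the sphere and Rastrigin functions stand in for the UF and MF suites; none of this reproduces the IMRFO itself.

```python
import math
import random
import statistics

def sphere(x):
    """Typical unimodal benchmark (like f1): tests exploitation."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Typical multimodal benchmark (like f9): tests exploration."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def random_search(f, dim, max_fes, lb=-5.12, ub=5.12, rng=random):
    """Placeholder optimizer sharing the paper's budget interface:
    only the number of fitness evaluations (FEs) is controlled."""
    best = float("inf")
    for _ in range(max_fes):
        x = [rng.uniform(lb, ub) for _ in range(dim)]
        best = min(best, f(x))
    return best

def benchmark(f, dim=10, runs=30, max_fes=25_000):
    """Average performance over independent runs, as in the paper's protocol."""
    results = [random_search(f, dim, max_fes) for _ in range(runs)]
    return statistics.mean(results), statistics.stdev(results)
```

Swapping `random_search` for any population-based optimizer with the same signature would reproduce the tabulated 'Mean'/'Std' entries for that algorithm.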

4.1. Exploitation Evaluation

Table 2 shows the results provided by the IMRFO and the other optimizers on UF functions f1–f7 with different dimensions. Inspecting Table 2, the IMRFO is very competitive with the other optimizers on the UF functions. In particular, there are significant improvements on functions f5 and f7 for all dimensions. For the remaining UF functions, the IMRFO provides almost the same results as MRFO, followed by the other algorithms. Therefore, the proposed algorithm is very effective at exploiting the region around the optimum and thus ensures a good exploitation ability.

4.2. Exploration Evaluation

The MF functions, with their numerous local optima, are useful for evaluating the exploration ability of the algorithms. Table 3 gives the results provided by the IMRFO and the other optimizers on MF functions f8–f13 with different dimensions. From Table 3, the IMRFO performs best on all the MF functions for 50 dimensions; for 10 dimensions it offers the best results on all the MF functions except f12 and f13; and for 30 dimensions it outperforms all the other methods on all the MF functions except f13. Therefore, the IMRFO provides excellent results on these MF functions, and its performance remains consistently superior across the 10-, 30- and 50-dimensional cases. These superior results benefit significantly from the high exploration ability of the IMRFO: the searching control factor increases the number of explorations of the search space, and the Morlet wavelet mutation strategy further improves the exploration ability of the algorithm.

4.3. Evaluation of Local Optima Avoidance

Composite functions form the most challenging test suite; they are therefore specifically used to evaluate the local optima avoidance of algorithms, which results from a proper balance between exploration and exploitation [73]. The results provided by the IMRFO and the other optimizers on CF functions f14–f21 are presented in Table 4. Based on Table 4, the results of the IMRFO are not inferior to those of the other algorithms on any function except f17, where the IMRFO is second only to IGWO. Overall, the IMRFO considerably outperforms the other algorithms and provides the best results on 87.5% of the f14–f21 problems. Therefore, the results in Table 4 confirm that the IMRFO benefits from a good balance between exploratory and exploitative search, which helps the optimizer effectively avoid local optima.

4.4. Convergence Evaluation

The convergence curves of the IMRFO and the other optimizers on the UF and MF functions are depicted in Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10. The curves are based on the mean of the best-so-far solutions at each iteration over 30 runs. These curves share common characteristics: in the early iterations the curves fluctuate strongly; the convergence rate accelerates as the iterations increase; and in the later iterations the curves exhibit only small variations. Compared to the other algorithms, the convergence rate of the IMRFO accelerates with the iterations. This is owing to the Morlet wavelet mutation strategy, which enables the algorithm to search the promising regions of the search space in the initial iterations and converge towards the global optimum more rapidly in the subsequent iterations. Overall, these convergence characteristics are attributed to the improved strategies in the IMRFO, by which the individuals of the population effectively search the variable space and update their positions towards the global optimum. The excellent convergence performance of the IMRFO thus reveals that the algorithm achieves a better balance between exploitation and exploration than the other algorithms during the optimization process.
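The curve construction described above, i.e., the mean of the best-so-far solutions at each iteration over independent runs, can be sketched directly:

```python
def best_so_far_curve(fitness_per_iter):
    """Turn raw per-iteration best fitness into a monotone best-so-far curve."""
    curve, best = [], float("inf")
    for f in fitness_per_iter:
        best = min(best, f)
        curve.append(best)
    return curve

def mean_convergence(all_runs):
    """Average the best-so-far curves element-wise over independent runs,
    as done for the curves in the convergence figures."""
    curves = [best_so_far_curve(run) for run in all_runs]
    n = len(curves)
    return [sum(c[i] for c in curves) / n for i in range(len(curves[0]))]
```

Plotting `mean_convergence` of the per-run fitness histories of each algorithm yields exactly the kind of curve shown in the figures.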

4.5. CEC 2017 Benchmarking

To further comprehensively investigate the performance of the IMRFO, a set of challenging functions CEC 2017 [74] is employed in this experiment. The comparative optimizers are the same as in the previous experiment and their parameters are set to be the same as mentioned in Table 1. The results are based on 30 independent runs of each algorithm with the population size of 50 and FEs of 50,000.
To investigate whether the performance difference between the IMRFO and the other algorithms is significant, and whether the IMRFO statistically outperforms them when solving these optimization tasks, the Wilcoxon Signed-Rank Test (WSRT) is used at the 5% significance level [75]. The WSRT results between the IMRFO and the other optimizers are listed in Table 5. In Table 5 and Table 6, '=' signifies that there is no significant difference between the IMRFO and the compared algorithm, '+' signifies that the null hypothesis is rejected and the IMRFO outperforms the compared algorithm, and '−' signifies the opposite. The counts of '=', '−' and '+' for all the algorithms are given in the last row of each table. From Table 5 and Table 6, there is a statistically significant difference between the IMRFO and the other optimizers, and the IMRFO is superior to the other eight competitive optimizers on the majority of the test tasks.
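A minimal sketch of the WSRT decision rule at the 5% level, using `scipy.stats.wilcoxon`; the two sets of run results below are synthetic stand-ins, not the paper's data:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
# Hypothetical best-fitness values from 30 paired runs of two optimizers
# on one test function (lower is better).
imrfo_runs = rng.normal(loc=1.0, scale=0.2, size=30)
other_runs = rng.normal(loc=1.5, scale=0.2, size=30)

stat, p = wilcoxon(imrfo_runs, other_runs)   # paired, non-parametric test
if p < 0.05:                                 # 5% significance level
    mark = "+" if imrfo_runs.mean() < other_runs.mean() else "-"
else:
    mark = "="                               # no significant difference
```

Running this pairwise for every function and competitor produces the '+'/'−'/'=' entries tallied in Tables 5 and 6.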
To further evaluate the differences between the IMRFO and its counterparts and to rank them statistically, another statistical test, the Friedman Test (FT), is employed based on the average performance of the algorithms. The FT performance ranks of all the considered algorithms on each of the 29 functions are given in Table 7, and their average ranks are depicted in Figure 11, where a lower rank denotes a better algorithm. Regarding the results in Table 7, the IMRFO achieves the best results on 12 functions, WOA on 3 functions and IGWO on 14 functions. Observing Figure 11, the IMRFO attains the lowest average rank of all the comparative algorithms, indicating that the IMRFO achieves the best overall performance, followed by IGWO, MRFO and CLPSO. Consequently, this statistical test confirms the effectiveness and superiority of the combination of strategies proposed in the IMRFO.
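The FT ranking procedure (rank the algorithms on each function, then average the ranks per algorithm) can be sketched as follows; the score matrix is hypothetical, standing in for the mean errors in Table 7:

```python
import numpy as np
from scipy.stats import rankdata, friedmanchisquare

# Hypothetical mean errors of 4 algorithms (columns) on 6 functions (rows);
# lower is better.
scores = np.array([
    [0.1, 0.4, 0.3, 0.2],
    [0.2, 0.5, 0.1, 0.3],
    [0.1, 0.3, 0.2, 0.4],
    [0.3, 0.6, 0.4, 0.5],
    [0.2, 0.2, 0.3, 0.1],
    [0.1, 0.5, 0.4, 0.2],
])

ranks = np.apply_along_axis(rankdata, 1, scores)  # rank within each function row
avg_rank = ranks.mean(axis=0)                     # average rank per algorithm
stat, p = friedmanchisquare(*scores.T)            # overall significance of rank differences
```

The algorithm with the lowest entry of `avg_rank` is the best overall, which is how Figure 11 ranks the optimizers.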

5. Optimal Parameter Identification of MR Damper

5.1. Principle of Operation

The MR damper is a kind of intelligent semi-active control device: adjusting the current fed to the coil changes the strength of the magnetic field, which directly affects the viscosity of the magnetorheological fluid and thereby controls the output of the damper. The general structure of the MR damper is plotted in Figure 12 [76]. The MR fluid is enclosed in a cylinder and flows through a small orifice, while a magnetizing coil is embedded in the piston. When a current is fed to the coil, the particles suspended in the MR fluid align, which enables the fluid to change from a liquid to a semisolid state within milliseconds, thus generating a controllable damping force.
The mechanical performance test of the MR damper is carried out on a tensile test bed. The experimental platform applies amplitude signals of different frequencies from the exciter, together with the current supplied to the MR shimmy damper, to generate the data signals, i.e., the damping force and the cylinder rod displacement. Sinusoidal signals are used to generate the excitation of the MR damper, expressed as follows:
x = A \sin(2 \pi f t)
where x is the displacement of the damper piston, A is the amplitude of the signal, f is its frequency and t is the time. When the amplitude and frequency of the signal are constant, displacement-versus-time curves for the different loading currents can be generated. Therefore, the dynamic responses of the MR damper, i.e., damping force–displacement and damping force–velocity, can be obtained by processing the test data under various operating conditions. Figure 13 shows typical damping force–displacement and damping force–velocity test curves for amplitude A = 10 mm and frequency f = 0.5 Hz [76].
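The excitation and the corresponding velocity signal can be generated directly from the expression above; the sampling rate `fs` is an assumption for illustration:

```python
import numpy as np

def excitation(amplitude_mm, freq_hz, duration_s, fs=1000.0):
    """Sinusoidal excitation x = A*sin(2*pi*f*t) and its analytic velocity."""
    t = np.arange(0.0, duration_s, 1.0 / fs)
    x = amplitude_mm * np.sin(2 * np.pi * freq_hz * t)
    v = 2 * np.pi * freq_hz * amplitude_mm * np.cos(2 * np.pi * freq_hz * t)
    return t, x, v

# Test condition from Figure 13: A = 10 mm, f = 0.5 Hz
t, x, v = excitation(10.0, 0.5, 2.0)
```

Pairing the measured damping force with `x` gives the force–displacement loop and pairing it with `v` gives the force–velocity loop.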

5.2. Bouc–Wen Model

Various MR damper models have been proposed to reproduce the dynamic responses of the MR damper, among which the Bouc–Wen model is one of the most commonly employed owing to its good hysteretic characteristics and strong universality [77]; the Bouc–Wen model is depicted in Figure 14.
The mechanical behavior of the Bouc–Wen model is described as follows:
F = c_0 \dot{x} + k_0 (x - x_0) + \alpha z
\dot{z} = -\gamma |\dot{x}| z |z|^{n-1} - \beta \dot{x} |z|^{n} + A \dot{x}
where x is the damper displacement, c0 is the damping coefficient of the dashpot, x0 is the initial deflection of the spring with stiffness k0, z is the hysteretic variable, and α, β, γ and n are model parameters of the MR damper; the current-dependent parameters are given as follows [76]:
\alpha = \alpha_{\alpha} I + \alpha_{b}
c_0 = c_{0\alpha} e^{c_{0b} I}
\beta = \beta_{\alpha} e^{\beta_{b} I}
A = A_{\alpha} I^{2} + A_{b} I + A_{c}
The following 15 parameters need to be determined in our study:
\alpha_{\alpha}, \alpha_{b}, c_{0\alpha}, c_{0b}, k_0, c_1, k_1, \gamma, \beta_{\alpha}, \beta_{b}, A_{\alpha}, A_{b}, A_{c}, n, \ \text{and}\ x_0
Once these parameters are determined, the output damping force can be calculated using the Bouc–Wen model. Therefore, the Bouc–Wen model is implemented in SIMULINK.
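The paper implements the model in SIMULINK; as a rough alternative, the Bouc–Wen equations above can be integrated with a simple explicit Euler scheme. The parameter values in the usage example below are purely illustrative, not identified values from the paper.

```python
import numpy as np

def bouc_wen_force(t, x, c0, k0, x0, alpha, gamma, beta, A, n):
    """Simulate the Bouc-Wen hysteresis with explicit Euler integration.

    Integrates zdot = -gamma*|xdot|*z*|z|^(n-1) - beta*xdot*|z|^n + A*xdot
    and returns the damping force F = c0*xdot + k0*(x - x0) + alpha*z.
    """
    xdot = np.gradient(x, t)                 # numerical velocity
    z = np.zeros_like(x)
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        zdot = (-gamma * abs(xdot[i]) * z[i] * abs(z[i]) ** (n - 1)
                - beta * xdot[i] * abs(z[i]) ** n
                + A * xdot[i])
        z[i + 1] = z[i] + zdot * dt
    return c0 * xdot + k0 * (x - x0) + alpha * z

# Illustrative run: A = 10 mm, f = 0.5 Hz excitation (in metres), made-up parameters
t = np.linspace(0.0, 2.0, 2001)
x = 0.01 * np.sin(2 * np.pi * 0.5 * t)
F = bouc_wen_force(t, x, c0=1e3, k0=1e3, x0=0.0,
                   alpha=5e3, gamma=1e3, beta=1e3, A=100.0, n=2.0)
```

Plotting `F` against `x` (or against the velocity) reproduces the hysteresis loops that the identification procedure fits to the measured curves.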

5.3. Simulation Results and Analysis

After the model is established, a fitness function must be defined to evaluate the quality of the model for a given set of parameters. Thus, in this study, the mean error rate (MER) between the simulated data and the experimental data is employed as the following fitness function [76]:
f_{ER}^{j} = \frac{\sum_{i=1}^{m} \left( F_i^{j} - F_i^{*j} \right)^2}{\sum_{i=1}^{m} \left( F_i^{j} - \frac{1}{m} \sum_{i=1}^{m} F_i^{j} \right)^2}
f_{MER} = \frac{1}{c} \sum_{j=1}^{c} f_{ER}^{j}
where m is the number of damping-force samples, c is the number of constant loading currents, F i j and F i * j are the ith experimental force and the ith simulated force from the model under the jth loading current, respectively, and f E R j is the error rate under the jth loading current. Therefore, to find the optimal model parameters, the proposed IMRFO algorithm is adopted and the results are compared with those of the other optimizers.
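The MER fitness translates directly into code, assuming each current's experimental and simulated forces are given as equal-length arrays:

```python
import numpy as np

def error_rate(f_exp, f_sim):
    """Error rate under one loading current: normalized squared error
    between experimental and simulated forces."""
    return np.sum((f_exp - f_sim) ** 2) / np.sum((f_exp - f_exp.mean()) ** 2)

def mean_error_rate(exp_by_current, sim_by_current):
    """MER fitness: average of the per-current error rates over c currents."""
    return np.mean([error_rate(fe, fs)
                    for fe, fs in zip(exp_by_current, sim_by_current)])
```

The optimizer minimizes `mean_error_rate`, so a perfect parameter set drives it to zero.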
The experimental data, i.e., the damper amplitude, the velocity and the output damping force, are obtained under a wide range of operating conditions. Table 8 gives the test operating conditions. In Table 8, the working frequencies are set to 0.5, 1 and 1.5 Hz, while the amplitude is fixed at 10 mm; moreover, the loading current is varied from 0 to 3 A in steps of 0.5 A. Thus, there are three combinations of frequency and displacement, with seven loading currents for each combination. For each case, the results offered by the IMRFO are compared with those offered by the other algorithms. For all the algorithms, the population size and the maximum FEs are set to 30 and 6000, respectively. For different loading currents, the search space of each parameter differs slightly. For example, when I = 0.5 A, the search space of the 15 parameters is given as follows:
Lb = [104, 104, 103, 10−2, 100, 106, 102, 102, 103, −1, −10, −100, 100, 0.1, 10−4];
Ub = [105, 105, 104, 1, 103, 107, 103, 104, 104, −0.1, −1, −1, 103, 1, 1].
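Population initialization and boundary relocation over these bounds can be sketched as below; the uniform sampling scheme and clamping-style relocation are assumptions, while the bound vectors are those stated in the text for I = 0.5 A:

```python
import numpy as np

rng = np.random.default_rng(0)

# Search bounds for the 15 Bouc-Wen parameters at I = 0.5 A (from the text)
Lb = np.array([1e4, 1e4, 1e3, 1e-2, 1e0, 1e6, 1e2, 1e2, 1e3,
               -1.0, -10.0, -100.0, 1e0, 0.1, 1e-4])
Ub = np.array([1e5, 1e5, 1e4, 1.0, 1e3, 1e7, 1e3, 1e4, 1e4,
               -0.1, -1.0, -1.0, 1e3, 1.0, 1.0])

def init_population(pop_size):
    """Uniform initialization of the swarm inside [Lb, Ub]."""
    return Lb + rng.random((pop_size, Lb.size)) * (Ub - Lb)

def relocate(pop):
    """Clamp out-of-bounds individuals back into the search space."""
    return np.clip(pop, Lb, Ub)
```

Note that several parameters span strictly negative ranges (e.g., the 10th to 12th entries), so the clamping must use the per-parameter bounds rather than a single global box.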
Table 9 lists the experimental results provided by the IMRFO and its competitors for the three cases. From Table 9, the IMRFO yields the most favorable performance in case one in terms of 'Mean' and 'Std', followed by WOA, IGWO and CMA-ES; in case two, the IMRFO achieves the most reliable performance compared to the other algorithms, followed by IGWO, PSOGSA and CLPSO; and in case three, the IMRFO again offers better results than the other optimizers, followed by IGWO, WOA and CMA-ES. Therefore, Table 9 shows that the IMRFO obtains more stable, higher-quality solutions than its counterparts when solving the Bouc–Wen model under different operating conditions. The 15 parameters of the Bouc–Wen model identified by the different algorithms for the loading current of 0.5 A are provided in Table 10.
The convergence curves of all the algorithms for the three cases are depicted in Figure 15. It can be observed that the IMRFO displays the highest convergence rate during the entire optimization process for cases one and two. For case three, the convergence rate of the IMRFO is significantly superior to the other competitors at the later optimization process. Moreover, the IMRFO offers the final solutions with the highest precision in the three cases, demonstrating its superior convergence performance for finding the optimum solution.
The comparisons between the experimental data and the simulated results from the Bouc–Wen model, using the parameters identified by the IMRFO, are shown in Figure 16, in which the damping forces obtained from the experiment are plotted as solid lines and the damping forces from the model are plotted as dots. From Figure 16, the damping force in the experimental data increases with the loading current, while the increment decreases gradually. Observing the damping force–displacement and damping force–velocity test curves in Figure 16, for each loading current the simulated data from the IMRFO-based Bouc–Wen model agree well with the experimental data; these results show the effectiveness of the proposed method in identifying the damping parameters of the MR damper.
Figure 17 and Figure 18 show the comparisons of the damping force–velocity and damping force–displacement test curves between the experimental data and the simulated results for case one using the other algorithms, respectively. It can be seen from Figure 17 and Figure 18 that there are some large discrepancies between the experimental data and the simulated data from the Bouc–Wen models provided by PSO, GSA, WOA, CLPSO, CMA-ES and PSOGSA. Although it is difficult to visually distinguish the matching quality among the IMRFO, MRFO and IGWO in these figures, Table 9 shows that the IMRFO is more competitive than the other methods.
To verify whether the IMRFO-based Bouc–Wen model can reflect the mechanical properties of the MR damper, two extensive simulations, i.e., cases two and three, are performed by changing the frequency of the model. The comparisons between the experimental data and simulated data from the Bouc–Wen model using the parameters identified by the IMRFO for cases two and three are shown in Figure 19 and Figure 20, respectively; Figure 21 and Figure 22 show the comparison of the damping force–velocity test curves between the experimental data and the simulated data from the Bouc–Wen model using the parameters identified by the other different algorithms for cases two and three, respectively; and Figure 23 and Figure 24 show the comparison of the damping force–displacement test curves between the experimental data and the simulated data from the Bouc–Wen model using the parameters identified by the other different algorithms for cases two and three, respectively. These results reveal that the simulated data from the IMRFO-based model show better coincidence with the experimental data compared to its competitors, and the IMRFO-based model can accurately describe the dynamic performance of the MR damper with different frequencies.
To evaluate the matching performance of the MR damper models from different algorithms, an indicator named the mean Nash–Sutcliffe efficiency (MNSE) is employed; the MNSE is formulated as follows:
f_{NSE}^{j} = 1 - \frac{\sum_{i=1}^{m} \left( F_i^{j} - F_i^{*j} \right)^2}{\sum_{i=1}^{m} \left( F_i^{j} - \frac{1}{m} \sum_{i=1}^{m} F_i^{j} \right)^2}
f_{MNSE} = \frac{1}{c} \sum_{j=1}^{c} f_{NSE}^{j}
where f N S E j is a single Nash–Sutcliffe efficiency (NSE) value under a given loading current. In general, the closer the value of the NSE is to one, the better the matching between the experimental data and simulated data from the model will be.
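The NSE and MNSE follow directly from the definitions above, again assuming the per-current forces are given as equal-length arrays:

```python
import numpy as np

def nse(f_exp, f_sim):
    """Single Nash-Sutcliffe efficiency under one loading current;
    1 means a perfect match, 0 means no better than the mean force."""
    return 1.0 - np.sum((f_exp - f_sim) ** 2) / np.sum((f_exp - f_exp.mean()) ** 2)

def mnse(exp_by_current, sim_by_current):
    """Mean NSE over the c loading currents; closer to 1 is better."""
    return np.mean([nse(fe, fs) for fe, fs in zip(exp_by_current, sim_by_current)])
```

Since the NSE is one minus the MER's per-current error rate, maximizing the MNSE and minimizing the MER are equivalent criteria.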
Figure 25, Figure 26 and Figure 27 show the matching comparison of the models based on the different algorithms for cases 1–3 using the NSE, respectively. In these figures, the NSE of each algorithm's model under each of the considered loading currents is depicted for each case, together with the MNSE of each model over all the considered loading currents. From Figure 25 (case one), the IMRFO-based model obtains the largest NSE for each loading current and the largest MNSE compared to its competitors, demonstrating the superior matching performance of the model from our optimizer. Thus, the NSE results in Figure 25 show that the IMRFO-based model provides the best simulated results for case one. From Figure 26, the IMRFO-based model obtains the best matching performance measured by the NSE; the MRFO-based model is the second best, followed by the IGWO-based, CLPSO-based and PSOGSA-based models. Note that the GSA-based model yields the worst matching performance in terms of the NSE, indicating that it provides unfavorable simulated data for case two. From Figure 27, for case three, the IMRFO-based model offers the most favorable matching performance, almost the same as that of the IGWO-based model, followed by the PSOGSA-based, MRFO-based and CLPSO-based models.
Table 11 summarizes the matching results of the models from different algorithms for cases 1–3. From Table 11, the IMRFO-based model attains the results that are significantly better than, or almost the same as, the results of the models from the other eight peer algorithms for cases 1–3. Additionally, the average MNSE of the three cases from the IMRFO-based model outperforms that from the models based on the other algorithms, indicating the best overall performance of the IMRFO-based model. Based on the above analysis, the proposed IMRFO can provide superior and very competitive optimization performance in identifying the control parameters of the MR damper model.
To evaluate the performance of an algorithm properly, we need to consider not only the solution quality but also the computational burden. Therefore, the running times of the IMRFO and its competitors are compared under identical conditions. The mean running times of the algorithms for cases 1–3 are given in Table 12. From Table 12, each algorithm spends most of its time executing SIMULINK. The mean running time of the IMRFO is very close to that of the other algorithms, with no significant difference between the IMRFO and its competitors in this respect; however, the solution quality of the IMRFO is significantly better than that of the other methods.

6. Conclusions

In this study, a new hybrid MRFO is proposed for global optimization and for the parameter identification of MR damper models. In this improvement, the searching control factor is introduced to alleviate the weak exploration ability of the algorithm, and the adaptive weight coefficient is employed to promote its search efficiency. Additionally, the Morlet wavelet mutation strategy is integrated into the algorithm to enhance its ability to escape stagnation and to improve its convergence rate. Two sets of challenging benchmarks, i.e., CEC 2014 and CEC 2017, are used to evaluate the optimization performance of the IMRFO, and the results demonstrate that the IMRFO is superior to the other algorithms. A comprehensive simulation of the MR damper model is established, covering three different test working conditions, in which 15 control parameters are identified using the proposed IMRFO algorithm. From these identified parameters, the damping force–velocity and damping force–displacement curves can be obtained, which coincide highly satisfactorily with the experimental data. Meanwhile, the comparative analysis shows that the IMRFO-based MR damper model provides the best simulation results.
Overall, the results on the benchmark functions and the optimal parameter identification of the MR damper model show that the proposed optimizer has significant potential for solving engineering optimization problems. The Bouc–Wen damper model identified by the IMRFO is not only consistent with the experimental data used in the simulation, but can also accurately express the dynamic response of the damper with different amplitudes and frequencies under sinusoidal excitation.
The improved strategies introduced in the IMRFO achieve competitive performance, as verified by the benchmark experiments and the MR damper model. However, two open problems need to be investigated further in our future work. One is how to design an efficient adaptive strategy that adjusts the movement step of individuals according to their neighbors' solutions. The other is how to establish an information-sharing mechanism to promote search efficiency. Moreover, in future work it would be interesting to extend the IMRFO to other, more complex MR damper models in which more control parameters need to be identified.

Author Contributions

Conceptualization, Y.L. and W.Z.; methodology, Y.L., W.Z. and L.W.; validation, L.W.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L., W.Z. and L.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under grants 11972144, 12072098, 11790282, 12072208 and 52072249, the One Hundred Outstanding Innovative Scholars of Colleges and Universities in Hebei Province under grant SLRC2019022 and the Opening Foundation of State Key Laboratory of Shijiazhuang Tiedao University Province under grant ZZ2021-13.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tse, T.; Chang, C.C. Shear-mode rotary magnetorheological damper for small-scale structural control experiments. J. Struct. Eng. 2004, 130, 904–911. [Google Scholar] [CrossRef]
  2. Liu, Y.Q.; Yang, S.P.; Liao, Y.Y. Simulation analysis on lateral semi-active control of suspension system for high-speed emus. J. Vib. Shock 2010, 29, 51–54. [Google Scholar]
  3. Atabay, E.; Ozkol, I. Application of a magnetorheological damper modeled using the current–dependent bouc–wen model for shimmy suppression in a torsional nose landing gear with and without freeplay. J. Vib. Control 2014, 20, 1622–1644. [Google Scholar] [CrossRef]
  4. Yang, S.; Li, S.; Wang, X.; Gordaninejad, F.; Hitchcock, G. A hysteresis model for magneto-rheological damper. Int. J. Nonlinear Sci. Numer. Simul. 2005, 6, 139–144. [Google Scholar] [CrossRef]
  5. Xu, Z.D.; Xu, Y.W.; Wang, C.; Zhao, Y.L.; Ji, B.H.; Du, Y.L. Force tracking model and experimental verification on a novel magnetorheological damper with combined compensator for stay cables of bridge. Structures 2021, 32, 1971–1985. [Google Scholar] [CrossRef]
  6. Yu, J.; Dong, X.; Su, X.; Qi, S. Development and characterization of a novel rotary magnetorheological fluid damper with variable damping and stiffness. Mech. Syst. Signal Process. 2022, 165, 108320. [Google Scholar] [CrossRef]
  7. Boreiry, M.; Ebrahimi-Nejad, S.; Marzbanrad, J. Sensitivity analysis of chaotic vibrations of a full vehicle model with magnetorheological damper. Chaos Solitons Fractals 2019, 127, 428–442. [Google Scholar] [CrossRef]
  8. Patel, D.M.; Upadhyay, R.V. Predicting the thermal sensitivity of MR damper performance based on thermo-rheological properties. Mater. Res. Express 2018, 6, 015707. [Google Scholar] [CrossRef]
  9. Gołdasz, J.; Sapinski, B. Influence of Temperature on the MR Squeeze-Mode Damper. In Proceedings of the 2019 20th International Carpathian Control Conference (ICCC); IEEE: Krakow-Wieliczka, Poland, 2019; pp. 1–6. [Google Scholar]
  10. Versaci, M.; Cutrupi, A.; Palumbo, A. A magneto-thermo-static study of a magneto-rheological fluid damper: A finite element analysis. IEEE Trans. Magn. 2020, 57, 1–10. [Google Scholar] [CrossRef]
  11. Giuclea, M.; Sireteanu, T.; Stancioiu, D.; Stammers, C.W. Model parameter identification for vehicle vibration control with magnetorheological dampers using computational intelligence methods. Proc. Inst. Mech. Eng. Part I J. Syst. Control Eng. 2004, 218, 569–581. [Google Scholar] [CrossRef]
  12. Gogna, A.; Tayal, A. Metaheuristics: Review and application. J. Exp. Theor. Artif. Intell. 2013, 25, 503–526. [Google Scholar] [CrossRef]
  13. Kwok, N.M.; Ha, Q.P.; Nguyen, T.H.; Li, J.; Samali, B. A novel hysteretic model for magnetorheological fluid dampers and parameter identification using particle swarm optimization. Sens. Actuators A Phys. 2006, 132, 441–451. [Google Scholar] [CrossRef]
  14. Gou, J.; Lei, Y.X.; Guo, W.P.; Wang, C.; Cai, Y.Q.; Luo, W. A novel improved particle swarm optimization algorithm based on individual difference evolution. Appl. Soft Comput. 2017, 57, 468–481. [Google Scholar] [CrossRef]
  15. Zhao, W.; Shi, T.; Wang, L.; Cao, Q.; Zhang, H. An adaptive hybrid atom search optimization with particle swarm optimization and its application to optimal no-load PID design of hydro-turbine governor. J. Comput. Des. Eng. 2021, 8, 1204–1233. [Google Scholar]
  16. Caselli, N.; Soto, R.; Crawford, B.; Valdivia, S.; Olivares, R. A self-adaptive cuckoo search algorithm using a machine learning technique. Mathematics 2021, 9, 1840. [Google Scholar] [CrossRef]
  17. Zhao, W.; Wang, L.; Zhang, Z. A novel atom search optimization for dispersion coefficient estimation in groundwater. Future Gener. Comput. Syst. 2019, 91, 601–610. [Google Scholar] [CrossRef]
  18. Soleimani Amiri, M.; Ramli, R.; Ibrahim, M.F.; Abd Wahab, D.; Aliman, N. Adaptive particle swarm optimization of PID gain tuning for lower-limb human exoskeleton in virtual environment. Mathematics 2020, 8, 2040. [Google Scholar] [CrossRef]
  19. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  20. Yang, X.S.; Gandomi, A.H. Bat algorithm: A novel approach for global engineering optimization. Eng. Comput. 2012, 29, 464–483. [Google Scholar] [CrossRef] [Green Version]
  21. Jain, M.; Singh, V.; Rani, A. A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm Evol. Comput. 2019, 44, 148–175. [Google Scholar] [CrossRef]
  22. Zhao, W.; Wang, L.; Zhang, Z. Artificial ecosystem-based optimization: A novel nature-inspired meta-heuristic algorithm. Neural Comput. Appl. 2019, 32, 1–43. [Google Scholar] [CrossRef]
  23. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  24. Li, M.D.; Zhao, H.; Weng, X.W.; Han, T. A novel nature-inspired algorithm for optimization: Virus colony search. Adv. Eng. Softw. 2016, 92, 65–88. [Google Scholar] [CrossRef]
  25. Pan, W.T. A new fruit fly optimization algorithm: Taking the financial distress model as an example. Knowl.-Based Syst. 2012, 26, 69–74. [Google Scholar] [CrossRef]
  26. Arora, S.; Singh, S. Butterfly optimization algorithm: A novel approach for global optimization. Appl. Soft Comput. 2019, 23, 715–734. [Google Scholar] [CrossRef]
  27. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based meta-heuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70. [Google Scholar] [CrossRef]
  28. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef] [Green Version]
  29. Yang, X.S. Flower pollination algorithm for global optimization. In Proceedings of the International Conference on Unconventional Computing and Natural Computation; Springer: Berlin/Heidelberg, Germany, 2012; pp. 240–249. [Google Scholar]
  30. Askarzadeh, A. A novel meta-heuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  31. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  32. EskandaR, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm-A novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166. [Google Scholar] [CrossRef]
  33. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. J. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  34. Zhao, W.; Wang, L.; Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl.-Based Syst. 2019, 163, 283–304. [Google Scholar] [CrossRef]
  35. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  36. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289. [Google Scholar] [CrossRef]
  37. Kaveh, A.; Bakhshpoori, T. Water evaporation optimization: A novel physically inspired optimization algorithm. Comput. Struct. 2016, 167, 69–85. [Google Scholar] [CrossRef]
Figure 1. Time-dependent curve of the searching control factor.
Figure 2. Schematic diagram of the exploration probability of the algorithm based on the searching control probability ps.
Figure 3. Value of βL during two runs with 1000 iterations.
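Figure 3 traces the Lévy-flight coefficient βL used in the adaptive weight. Lévy-stable step lengths are commonly drawn with Mantegna's algorithm; the sketch below is a minimal illustration of that scheme, with the function name and the default stability index beta = 1.5 chosen for illustration rather than taken from the paper.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-flight step length via Mantegna's algorithm.

    beta is the stability index (0 < beta <= 2); the step
    s = u / |v|**(1/beta) follows a heavy-tailed Levy-stable law.
    """
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)   # numerator sample ~ N(0, sigma_u^2)
    v = random.gauss(0, 1)         # denominator sample ~ N(0, 1)
    return u / abs(v) ** (1 / beta)
```

The heavy tail is what produces the occasional large jumps visible in Figure 3: most steps are small, but rare long steps help the search escape stagnation.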
Figure 4. Calculation of the Morlet wavelet mutation.
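Figure 4 illustrates the Morlet wavelet mutation. The sketch below follows the general wavelet-mutation scheme of Ling et al., assuming the usual Morlet mother wavelet ψ(x) = exp(−x²/2)·cos(5x) and a dilation parameter that grows with the iteration count; the schedule constants g and zeta are illustrative assumptions, not the paper's values.

```python
import math
import random

def morlet(x):
    """Morlet mother wavelet: psi(x) = exp(-x^2/2) * cos(5x)."""
    return math.exp(-x * x / 2) * math.cos(5 * x)

def wavelet_mutate(x, lb, ub, t, t_max, g=10000, zeta=2):
    """Mutate one decision variable x in [lb, ub] with a Morlet wavelet.

    The dilation parameter a grows from 1 toward g as t -> t_max, so the
    mutation magnitude |sigma| shrinks adaptively: coarse steps early in
    the run, fine steps near the end.
    """
    a = math.exp(-math.log(g) * (1 - t / t_max) ** zeta + math.log(g))
    phi = random.uniform(-2.5 * a, 2.5 * a)   # effective wavelet support
    sigma = morlet(phi / a) / math.sqrt(a)
    if sigma > 0:
        return x + sigma * (ub - x)           # push toward the upper bound
    return x + sigma * (x - lb)               # push toward the lower bound
```

Because |sigma| never exceeds 1/sqrt(a) ≤ 1, the mutated value always stays inside [lb, ub].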
Figure 5. Pseudocode of IMRFO.
Figure 6. Convergence curves of algorithms for UF functions f1–f4 with different dimensions.
Figure 7. Convergence curves of algorithms for UF functions f5–f7 with different dimensions.
Figure 8. Convergence curves of algorithms for MF functions f8–f10 with different dimensions.
Figure 9. Convergence curves of algorithms for MF functions f11–f13 with different dimensions.
Figure 10. Convergence curves of algorithms for CF functions f14–f21 with 30 dimensions.
Figure 11. Average ranks of algorithms based on FT on 29 functions.
Figure 12. Structure of the MR damper.
Figure 13. (a) Damping force–displacement test curves and (b) damping force–velocity test curves.
Figure 14. Bouc–Wen model of an MR damper.
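Figure 14 depicts the Bouc–Wen model. As a rough sketch of how such a model turns a displacement history into a damping force, the simple (current-independent) Bouc–Wen formulation can be integrated numerically. The forward-Euler scheme and the parameter values in the test are illustrative only; the paper's identified model (Table 10) additionally makes several coefficients depend on the loading current.

```python
import math

def bouc_wen_force(xs, dt, alpha, c0, k0, x0, gamma, beta, A, n):
    """Damping-force history of a simple Bouc-Wen MR damper model.

    xs is a sampled displacement history; velocity is approximated by
    finite differences and the hysteretic variable z by forward Euler:
        F     = c0*v + k0*(x - x0) + alpha*z
        dz/dt = -gamma*|v|*z*|z|**(n-1) - beta*v*|z|**n + A*v
    """
    z = 0.0
    forces = []
    for i in range(1, len(xs)):
        v = (xs[i] - xs[i - 1]) / dt          # finite-difference velocity
        # z*|z|**(n-1) is written as sign(z)*|z|**n so it stays defined at z = 0
        dz = (-gamma * abs(v) * math.copysign(abs(z) ** n, z)
              - beta * v * abs(z) ** n + A * v)
        z += dz * dt
        forces.append(c0 * v + k0 * (xs[i] - x0) + alpha * z)
    return forces
```

Parameter identification then amounts to choosing (alpha, c0, k0, ..., n) so that the simulated force history matches the measured one, which is the search problem handed to IMRFO.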
Figure 15. Convergence curves of algorithms for three cases.
Figure 16. Comparison between experimental data and simulated results for case 1 using the proposed algorithm: (a) damping force–velocity test curves and (b) damping force–displacement test curves.
Figure 17. Comparison of damping force–velocity test curves between experimental data and simulated results for case 1 using different algorithms.
Figure 18. Comparison of damping force–displacement test curves between experimental data and simulated results for case 1 using different algorithms.
Figure 19. Comparison between experimental data and simulated results for case 2 using the proposed algorithm: (a) damping force–velocity test curves and (b) damping force–displacement test curves.
Figure 20. Comparison between experimental data and simulated results for case 3 using the proposed algorithm: (a) damping force–velocity test curves and (b) damping force–displacement test curves.
Figure 21. Comparison of damping force–velocity test curves between experimental data and simulated results for case 2 using the other algorithms.
Figure 22. Comparison of damping force–velocity test curves between experimental data and simulated results for case 3 using the other algorithms.
Figure 23. Comparison of damping force–displacement test curves between experimental data and simulated results for case 2 using the other algorithms.
Figure 24. Comparison of damping force–displacement test curves between experimental data and simulated results for case 3 using the other algorithms.
Figure 25. Matching comparison of models based on different algorithms for case 1 using NSE.
Figure 26. Matching comparison of models based on different algorithms for case 2 using NSE.
Figure 27. Matching comparison of models based on different algorithms for case 3 using NSE.
Table 1. Parameter settings for each algorithm.

Algorithm | Parameter | Value
PSO | Inertia weight; acceleration coefficients | Decreases from 0.9 to 0.4; 2, 2
GSA | Gravitational constant; decreasing coefficient | 100; 20
WOA | Convergence parameter | Decreases from 2 to 0
CMA-ES | Expected initial distance to optimum per coordinate | 5
CLPSO | Inertia weight; acceleration coefficients | Decreases from 0.9 to 0.4; 1.49445, 1.49445
PSOGSA | Gravitational constant; decreasing coefficient; weighting factors | 1; 20; 0.5, 1.5
IGWO | a | Decreases from 2 to 0
IMRFO | Mutation probability | 0.1
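Several entries in Table 1 are coefficients that decrease linearly over the run (e.g., the PSO inertia weight from 0.9 to 0.4, the WOA convergence parameter from 2 to 0). A one-line sketch of such a schedule (the function name is illustrative):

```python
def linear_decrease(w_max, w_min, t, t_max):
    """Linearly decreasing coefficient, e.g., PSO inertia 0.9 -> 0.4."""
    return w_max - (w_max - w_min) * t / t_max
```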
Table 2. Results provided by the algorithms for UF functions.

Fun. | Dim. | Index | PSO | GSA | MRFO | WOA | CLPSO | CMA-ES | PSOGSA | IGWO | IMRFO
f110Mean1.38 × 10−133.26 × 10−182.07 × 10−2159.62 × 10−912.31 × 10−141.12 × 10−422.28 × 10−201.50 × 10−347.22 × 10−208
STD2.81 × 10−131.50 × 10−1803.46 × 10−901.50 × 10−41.41 × 10−427.27 × 10−214.35 × 10−340
30Mean2.633.44 × 10−174.08 × 10−2071.08 × 10−831.74 × 1027.93 × 10−101.00 × 1032.73 × 10−158.50 × 10−199
STD2.901.03 × 10−1705.38 × 10−8351.75.42 × 10−103.05 × 1033.07 × 10−150
50Mean4.45 × 10227.13.23 × 10−2092.01 × 10−803.03 × 1033.48 × 10−55.14 × 1033.61 × 10−102.90 × 10−199
STD1.61 × 10232.201.10 × 10−794.55 × 1021.82 × 10−56.62 × 1032.51 × 10−100
f210Mean6.70 × 10−95.37 × 10−91.47 × 10−1107.45 × 10−561.46 × 10−33.58 × 10−213.18 × 10−102.93 × 10−212.18 × 10−104
STD8.09 × 10−91.21 × 10−94.19 × 10−1103.86 × 10−555.51 × 10−41.54 × 10−215.22 × 10−114.00 × 10−216.59 × 10−104
30Mean1.21 × 10−13.35 × 10−81.05 × 10−1066.52 × 10−554.195.49 × 10−511.28.18 × 10−104.01 × 10−101
STD5.44 × 10−25.85 × 10−94.48 × 10−1062.09 × 10−545.56 × 10−11.77 × 10−525.74.24 × 10−102.15 × 10−100
50Mean4.151.32 × 10−13.83 × 10−1071.17 × 10−5130.19.99 × 10−351.27.55 × 10−71.15 × 10−101
STD1.092.74 × 10−11.58 × 10−1066.12 × 10−513.192.16 × 10−347.14.33 × 10−73.81 × 10−101
f310Mean5.23 × 10−34.301.08 × 10−20945.81.22 × 1022.31 × 10−331.67 × 1023.39 × 10−151.89 × 10−198
STD8.29 × 10−36.81069.365.54.30 × 10−339.13 × 1021.13 × 10−140
30Mean8.72 × 1035.28 × 1021.56 × 10−2022.78 × 1041.75 × 1043.64 × 10−11.32 × 1043.06 × 10−19.86 × 10−191
STD4.34 × 1032.29 × 10201.00 × 1042.84 × 1033.83 × 10−15.97 × 1034.21 × 10−10
50Mean3.83 × 1041.82 × 1034.00 × 10−2031.47 × 1056.55 × 1045.23 × 1023.91 × 1042.15 × 1022.94 × 10−186
STD1.15 × 1045.36 × 10202.90 × 1047.82 × 1031.95 × 1031.34 × 1042.32 × 1020
f410Mean5.58 × 10−41.19 × 10−95.31 × 10−1091.968.832.21 × 10−193.15 × 10−13.81 × 10−115.49 × 10−102
STD4.43 × 10−42.51 × 10−101.20 × 10−1085.811.519.91 × 10−201.194.35 × 10−112.78 × 10−101
30Mean25.23.731.34 × 10−10527.839.56.68 × 10−4532.98 × 10−32.60 × 10−99
STD5.121.333.60 × 10−10528.82.482.05 × 10−423.81.75 × 10−39.43 × 10−99
50Mean47.78.544.71 × 10−10453.853.59.07 × 10−277.52.11 × 10−14.93 × 10−98
STD5.381.822.15 × 10−10333.02.954.18 × 10−214.91.07 × 10−11.30 × 10−97
f510Mean5.9218.12.186.3027.53.04 × 10−23.02 × 1033.864.49 × 10−2
STD1.0032.83.87 × 10−13.43 × 10−1121.15 × 10−21.64 × 1046.57 × 10−11.77 × 10−1
30Mean8.81 × 10239.024.127.34.22 × 10420.073.825.14.12 × 10−5
STD7.60 × 10232.74.77 × 10−14.54 × 10−11.52 × 1047.11 × 10−166.77.94 × 10−17.76 × 10−13
50Mean3.11 × 1052.92 × 10244.747.99.66 × 10557.35.37 × 10646.51.27 × 10−3
STD2.35 × 1051.66 × 1024.11 × 10−15.04 × 10−12.33 × 10537.12.03 × 1071.462.40 × 10−3
f610Mean0000006.33 × 10−100
STD0000008.09 × 10−100
30Mean6.931.67 × 10−1001.50 × 10201.84 × 10300
STD3.533.79 × 10−10031.303.30 × 10300
50Mean4.98 × 1021.40 × 102002.68 × 10309.03 × 10300
STD2.89 × 10284.9004.25 × 10207.13 × 10300
f710Mean4.52 × 1035.92 × 10−31.73 × 10−41.28 × 1036.87 × 10−31.78 × 10−31.31 × 10−27.56 × 10−41.46 × 10−4
STD2.14 × 10−33.03 × 10−31.62 × 10−41.06 × 10−33.30 × 10−38.04 × 10−48.82 × 10−35.87 × 10−41.34 × 10−4
30Mean1.23 × 10−12.91 × 10−21.97 × 10−42.36 × 10−31.61 × 10−17.23 × 10−31.14 × 10−13.86 × 10−31.75 × 10−4
STD4.94 × 10−21.01 × 10−21.43 × 10−43.18 × 10−33.76 × 10−22.52 × 10−34.72 × 10−21.47 × 10−31.95 × 10−4
50Mean1.231.15 × 10−12.28 × 10−42.60 × 10−31.111.20 × 10−24.33 × 10−16.79 × 10−31.73 × 10−4
STD4.13 × 10−14.92 × 10−21.39 × 10−43.08 × 10−32.86 × 10−13.30 × 10−31.48 × 10−12.07 × 10−31.80 × 10−4
Table 3. Results provided by the algorithms for MF functions.

Fun. | Dim. | Index | PSO | GSA | MRFO | WOA | CLPSO | CMA-ES | PSOGSA | IGWO | IMRFO
f810Mean−2.87 × 103−1.70 × 103−3.63 × 103−3.36 × 103−3.94 × 103−2.50 × 103−3.00 × 103−3.96 × 103−4.17 × 103
STD2.44 × 1023.40 × 1022.60 × 1025.83 × 1027.97 × 1021.86 × 1022.83 × 1022.70 × 1023.55 × 102
30Mean−5.00 × 103−2.76 × 103−8.58 × 103−1.09 × 104−1.07 × 104−4.39 × 103−7.75 × 103−7.86 × 103−1.11 × 104
STD6.35 × 1024.32 × 1026.28 × 1021.47 × 1032.94 × 1023.07 × 1021.16 × 1031.79 × 1031.70 × 103
50Mean−6.11 × 103−3.66 × 103−1.29 × 104−1.79 × 104−1.44 × 104−5.74 × 103−1.19 × 104−9.28 × 103−1.89 × 104
STD4.83 × 1026.84 × 1028.46 × 1022.85 × 1035.03 × 1023.42 × 1021.04 × 1033.47 × 1032.79 × 103
f910Mean4.023.3201.671.99 × 10−113.535.15.730
STD1.8913.706.451.82 × 10−19.1215.05.710
30Mean56.117.005.68 × 10−1560.91.64 × 1021.37 × 10245.70
STD17.44.2702.29 × 10−148.659.1729.047.10
50Mean1.34 × 10233.4002.01 × 1022.67 × 1022.47 × 10264.20
STD23.580.70017.31.04 × 10261.824.80
f1010Mean1.73 × 10−72.47 × 10−98.88 × 10−164.20 × 10−151.42 × 10−21.01 × 10−151.99 × 10−19.53 × 10−158.88 × 10−16
STD2.44 × 10−75.15 × 10−1002.07 × 10−155.39 × 10−36.49 × 10−165.33 × 10−13.32 × 10−150
30Mean1.354.91 × 10−98.88 × 10−164.68 × 10−156.579.11 × 10−612.91.04 × 10−88.88 × 10−16
STD6.57 × 10−19.60 × 10−1001.60 × 10−155.50 × 10−12.59 × 10−165.085.84 × 10−90
50Mean5.571.52 × 10−18.88 × 10−163.61 × 10−1511.41.31 × 10−317.62.91 × 10−68.88 × 10−16
STD7.47 × 10−13.25 × 10−102.59 × 10−154.45 × 10−13.18 × 10−41.491.12 × 10−60
f1110Mean9.67 × 10−21.6806.48 × 10−25.47 × 10−201.69 × 10−13.33 × 10−20
STD5.16 × 10−21.2001.08 × 10−12.54 × 10−208.21 × 10−22.32 × 10−20
30Mean9.03 × 10−116.3002.725.16 × 10−912.84.28 × 10−30
STD2.14 × 10−13.84005.36 × 10−12.98 × 10−931.18.30 × 10−30
50Mean5.211.02 × 10203.70 × 10−1826.92.63 × 10−479.62.47 × 10−30
STD2.3013.402.03 × 10−175.069.54 × 10−574.54.74 × 10−30
f1210Mean1.22 × 10−145.58 × 10−201.04 × 10−22.00 × 10−32.75 × 10−54.71 × 10−324.88 × 10−11.89 × 10−61.46 × 10−27
STD1.84 × 10−142.55 × 10−205.68 × 10−25.91 × 10−31.82 × 10−51.67 × 10−471.135.29 × 10−77.04 × 10−27
30Mean6.354.70 × 10−12.83 × 10−87.01 × 10−38.706.00 × 10−118.941.14 × 10−24.10 × 10−11
STD3.564.28 × 10−12.94 × 10−84.61 × 10−32.115.93 × 10−114.982.03 × 10−25.56 × 10−11
50Mean1.19 × 1051.793.311.23 × 10−21.85 × 1049.57 × 10−72.56 × 1075.96 × 10−21.21 × 10−7
STD2.30 × 1056.28 × 10−11.93 × 10−59.72 × 10−31.93 × 1044.03 × 10−77.81 × 1073.37 × 10−21.51 × 10−7
f1310Mean1.11 × 10−132.87 × 10−192.03 × 10−22.80 × 10−32.44 × 10−41.35 × 10−327.32 × 10−49.87 × 10−61.46 × 10−3
STD2.25 × 10−131.02 × 10−134.07 × 10−23.92 × 10−31.96 × 10−45.57 × 10−482.79 × 10−33.69 × 10−63.80 × 10−3
30Mean25.12.072.471.74 × 10−11.36 × 1024.20 × 10−1038.12.24 × 10−17.32 × 10−4
STD21.42.971.111.35 × 10−11.77 × 1023.37 × 10−109.541.23 × 10−12.79 × 10−3
50Mean4.62 × 10528.84.925.12 × 10−16.18 × 1052.30 × 10−55.47 × 1071.415.10 × 10−6
STD5.08 × 10510.31.26 × 10−12.32 × 10−12.84 × 1051.03 × 10−51.42 × 1083.08 × 10−11.89 × 10−5
Table 4. Results provided by the algorithms for CF functions.

Fun. | Index | PSO | GSA | MRFO | WOA | CLPSO | CMA-ES | PSOGSA | IGWO | IMRFO
f14Mean2647.812578.602500.002681.932638.702706.032634.002617.822500.00
STD9.03117.43018.634.64148.008.610.820
f15Mean2633.482620.132600.002608.332654.292617.532669.752600.262600.00
STD7.731.7308.382.3251.6420.490.200
f16Mean2727.812706.182700.002725.712723.212713.202717.142707.262700.00
STD10.952.13020.593.460.999.141.980
f17Mean2702.912795.252700.572710.462703.672704.872771.202700.502700.52
STD0.9715.310.1231.580.680.8748.480.090.09
f18Mean3508.114624.612900.013740.093331.543680.233708.633195.692900.00
STD229.02216.010.04424.5580.2590.59125.3884.770
f19Mean7322.835455.383000.005693.725155.907452.474669.263849.983000.00
STD622.89751.150623.81370.96707.92526.87132.400
f20Mean3.07 × 1072.77 × 1063614.611.14 × 1073.34 × 1061.59 × 1087.15 × 1062.18 × 1043586.18
STD2.91 × 1076.45 × 106718.749.34 × 1061.77 × 1066.19 × 1076.62 × 1068529.33635.79
f21Mean1.43 × 1052.06 × 1069110.493.46 × 1051.22 × 105985,119.8431,366.181.30 × 1046905.11
STD1.81 × 1058.58 × 1052684.131.92 × 1053.86 × 1046.26 × 1052.70 × 1043389.181818.54
Table 5. WSRT results between IMRFO and PSO, GSA, MRFO and WOA.

Fun. | PSO vs. IMRFO | GSA vs. IMRFO | MRFO vs. IMRFO | WOA vs. IMRFO
(each comparison lists p-Value, T+, T−, and Winner)
F11.73 × 10−60465+2.83 × 10−456409+3.72 × 10−5433321.73 × 10−60465+
F21.73 × 10−60465+1.73 × 10−60465+1.73 × 10−60465+1.73 × 10−60465+
F31.73 × 10−60465+1.73 × 10−60465+8.94 × 10−471394+1.73 × 10−60465+
F41.73 × 10−60465+1.73 × 10−60465+2.61 × 10−455410+1.73 × 10−60465+
F54.28 × 10−1194271=2.13 × 10−62463+3.29 × 10−1280185=1.73 × 10−60465+
F69.59 × 10−1230235=1.92 × 10−61464+8.19 × 10−541424+1.73 × 10−60465+
F73.72 × 10−5433323.39 × 10−1279186=6.16 × 10−4399661.92 × 10−61464+
F83.71 × 10−1276189=1.06 × 10−1154311=3.49 × 10−1278187=4.29 × 10−69456+
F92.99 × 10−1283182=6.32 × 10−538427+7.51 × 10−540425+1.73 × 10−60465+
F102.60 × 10−64461+4.72 × 10−2136329+2.37 × 10−527438+1.73 × 10−60465+
F111.73 × 10−60465+1.73 × 10−60465+1.59 × 10−3386791.73 × 10−60465+
F121.73 × 10−60465+1.73 × 10−60465+1.73 × 10−60465+1.73 × 10−60465+
F137.04 × 10−1214251=7.71 × 10−469396+1.73 × 10−60465+1.73 × 10−60465+
F149.32 × 10−617448+1.73 × 10−60465+1.74 × 10−450415+1.73 × 10−60465+
F154.99 × 10−396369+6.16 × 10−366399+6.34 × 10−613452+1.92 × 10−61464+
F161.73 × 10−60465+1.73 × 10−60465+1.85 × 10−2118347+1.73 × 10−60465+
F171.65 × 10−1165300=1.73 × 10−60465+2.37 × 10−527438+2.88 × 10−65460+
F182.88 × 10−65460+3.16 × 10−389376+5.98 × 10−2141324=2.88 × 10−65460+
F193.82 × 10−1190275=1.73 × 10−60465+1.73 × 10−60465+1.73 × 10−60465+
F201.85 × 10−23471181.73 × 10−60465+5.44 × 10−1203262=2.60 × 10−64461+
F212.37 × 10−527438+1.73 × 10−60465+3.39 × 10−1186279=1.73 × 10−60465+
F222.84 × 10−529436+1.73 × 10−60465+1.04 × 10−373392+1.73 × 10−60465+
F231.73 × 10−60465+1.73 × 10−60465+4.20 × 10−461404+2.35 × 10−63462+
F241.73 × 10−60465+1.73 × 10−60465+3.11 × 10−530435+1.73 × 10−60465+
F251.73 × 10−60465+1.73 × 10−60465+1.74 × 10−450415+1.73 × 10−60465+
F262.37 × 10−527438+1.73 × 10−60465+2.11 × 10−3382834.29 × 10−69456+
F271.73 × 10−60465+1.73 × 10−60465+3.71 × 10−1189276=2.35 × 10−63462+
F281.73 × 10−60465+1.73 × 10−60465+3.88 × 10−68457+1.73 × 10−60465+
F298.47 × 10−616449+1.73 × 10−60465+5.29 × 10−464401+1.73 × 10−60465+
+/=/− | 20/7/2 | 27/2/0 | 19/6/4 | 29/0/0
Table 6. WSRT results between IMRFO and CLPSO, CMA-ES, PSOGSA and IGWO.

Fun. | CLPSO vs. IMRFO | CMA-ES vs. IMRFO | PSOGSA vs. IMRFO | IGWO vs. IMRFO
(each comparison lists p-Value, T+, T−, and Winner)
F11.73 × 10−60465+8.94 × 10−471394+2.16 × 10−526439+1.73 × 10−60465+
F21.73 × 10−60465+1.73 × 10−60465+1.73 × 10−60465+1.73 × 10−60465+
F31.73 × 10−60465+1.73 × 10−60465+1.73 × 10−60465+2.05 × 10−452413+
F41.73 × 10−60465+1.74 × 10−450415+6.98 × 10−614451+3.33 × 10−2129336+
F51.41 × 10−1304161=7.51 × 10−540425+1.92 × 10−61464+1.59 × 10−338679
F61.73 × 10−646505.32 × 10−397368+3.52 × 10−67458+1.73 × 10−64650
F73.72 × 10−5433324.07 × 10−5432339.32 × 10−617448+3.88 × 10−64578
F81.41 × 10−1304161=1.04 × 10−373392+1.24 × 10−520445+1.36 × 10−544421
F93.39 × 10−1279186=3.72 × 10−5433321.73 × 10−60465+1.73 × 10−64650
F101.24 × 10−520445+1.73 × 10−60465+1.96 × 10−382383+9.71 × 10−543422+
F111.73 × 10−60465+1.73 × 10−60465+1.73 × 10−60465+1.38 × 10−377388+
F121.73 × 10−60465+1.73 × 10−60465+1.13 × 10−519446+2.13 × 10−62463+
F131.73 × 10−60465+1.73 × 10−60465+6.32 × 10−538427+1.92 × 10−61464+
F141.73 × 10−60465+1.73 × 10−60465+1.92 × 10−61464+3.11 × 10−630435+
F152.88 × 10−65460+1.73 × 10−60465+1.83 × 10−381384+1.02 × 10−518447+
F169.10 × 10−1227238=2.60 × 10−64461+1.97 × 10−525440+1.36 × 10−1305160=
F172.26 × 10−3381841.32 × 10−2112353+3.18 × 10−66459+1.49 × 10−544322
F183.11 × 10−530435+2.60 × 10−64461+5.29 × 10−464401+5.71 × 10−465400+
F192.60 × 10−64461+1.73 × 10−60465+4.20 × 10−461404+1.32 × 10−2112353+
F208.94 × 10−4394713.88 × 10−68457+2.88 × 10−65460+6.34 × 10−645213
F211.83 × 10−381384+1.73 × 10−60465+1.73 × 10−60465+2.77 × 10−337887
F222.16 × 10−526439+9.59 × 10−1235230=4.29 × 10−69456+1.97 × 10−525440+
F232.89 × 10−1181284=1.73 × 10−60465+1.73 × 10−60465+2.41 × 10−441154
F245.71 × 10−465400+1.73 × 10−60465+4.29 × 10−69456+1.80 × 10−544124
F251.73 × 10−60465+2.71 × 10−1286179=9.71 × 10−643422+1.96 × 10−382383+
F262.21 × 10−1292173=4.29 × 10−69456+1.74 × 10−450415+1.29 × 10−338976
F272.77 × 10−3378871.73 × 10−60465+5.58 × 10−1204261=1.73 × 10−64650
F281.73 × 10−60465+4.99 × 10−396369+1.73 × 10−60465+2.35 × 10−63462+
F294.65 × 10−1197268=1.53 × 10−1163302=1.13 × 10−519446+6.89 × 10−542639
+/=/− | 17/7/5 | 24/3/2 | 28/1/0 | 15/1/13
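The T+ and T− columns in Tables 5 and 6 are the Wilcoxon signed-rank sums over the 30 paired runs; T+ + T− = 30·31/2 = 465 whenever no paired differences are zero. A pure-Python sketch of how those two sums are computed (the function name is illustrative; a library routine such as scipy.stats.wilcoxon would normally also supply the p-value):

```python
def wsrt_statistics(a, b):
    """Wilcoxon signed-rank sums T+ / T- for paired samples a and b.

    Absolute paired differences are ranked (average ranks for ties,
    zero differences dropped); T+ sums the ranks of positive differences
    and T- those of negative ones.
    """
    diffs = [x - y for x, y in zip(a, b) if x != y]   # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):                             # average ranks over ties
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1                         # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    t_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    t_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return t_plus, t_minus
```

A row such as "T+ = 0, T− = 465" therefore means IMRFO produced the smaller error on every one of the 30 runs against that rival.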
Table 7. Performance rank of algorithms for each function.

Fun. | IMRFO | PSO | GSA | MRFO | WOA | CLPSO | CMA-ES | PSOGSA | IGWO
F1 | 2 | 6 | 5 | 1 | 7 | 4 | 9 | 8 | 3
F2 | 1 | 6 | 7 | 2 | 4 | 5 | 8 | 9 | 3
F3 | 1 | 4 | 6 | 2 | 9 | 7 | 8 | 5 | 3
F4 | 1 | 8 | 5 | 2 | 6 | 4 | 9 | 7 | 3
F5 | 2 | 4 | 6 | 5 | 9 | 3 | 7 | 8 | 1
F6 | 3 | 5 | 7 | 4 | 9 | 2 | 6 | 8 | 1
F7 | 6 | 2 | 5 | 7 | 8 | 3 | 4 | 9 | 1
F8 | 2 | 4 | 6 | 5 | 8 | 3 | 9 | 7 | 1
F9 | 5 | 4 | 7 | 6 | 9 | 3 | 2 | 8 | 1
F10 | 1 | 8 | 3 | 2 | 7 | 5 | 9 | 4 | 6
F11 | 2 | 4 | 8 | 1 | 9 | 5 | 7 | 6 | 3
F12 | 1 | 6 | 5 | 2 | 7 | 4 | 9 | 8 | 3
F13 | 1 | 7 | 3 | 2 | 5 | 6 | 9 | 8 | 4
F14 | 1 | 5 | 8 | 3 | 9 | 6 | 7 | 4 | 2
F15 | 1 | 6 | 3 | 2 | 8 | 7 | 9 | 4 | 5
F16 | 2 | 7 | 6 | 3 | 8 | 4 | 9 | 5 | 1
F17 | 3 | 5 | 9 | 4 | 7 | 2 | 6 | 8 | 1
F18 | 1 | 6 | 4 | 2 | 8 | 5 | 7 | 9 | 3
F19 | 2 | 3 | 6 | 1 | 9 | 5 | 7 | 8 | 4
F20 | 4 | 3 | 9 | 5 | 7 | 2 | 6 | 8 | 1
F21 | 2 | 5 | 8 | 3 | 7 | 4 | 9 | 6 | 1
F22 | 1 | 3 | 8 | 2 | 9 | 6 | 4 | 7 | 5
F23 | 2 | 7 | 9 | 4 | 6 | 3 | 8 | 5 | 1
F24 | 2 | 7 | 9 | 3 | 6 | 4 | 8 | 5 | 1
F25 | 1 | 8 | 6 | 3 | 7 | 4 | 5 | 9 | 2
F26 | 3 | 5 | 7 | 4 | 8 | 2 | 9 | 6 | 1
F27 | 4 | 7 | 9 | 3 | 6 | 2 | 8 | 5 | 1
F28 | 1 | 8 | 7 | 2 | 5 | 4 | 9 | 6 | 3
F29 | 2 | 6 | 9 | 4 | 8 | 3 | 5 | 7 | 1
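Figure 11 averages per-function ranks like those in Table 7 in the style of the Friedman test (FT): each function ranks the algorithms independently (1 = best), and the ranks are then averaged column-wise. A minimal sketch that ignores rank ties for brevity (names are illustrative):

```python
def average_ranks(scores):
    """Average Friedman-style ranks from a scores[function][algorithm] matrix.

    Each row is ranked independently (rank 1 = best, i.e., lowest error);
    each algorithm's ranks are then averaged across all functions.
    """
    n_funcs, n_algs = len(scores), len(scores[0])
    totals = [0.0] * n_algs
    for row in scores:
        order = sorted(range(n_algs), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            totals[j] += rank
    return [t / n_funcs for t in totals]
```

The algorithm with the lowest average rank (here IMRFO, which takes rank 1 on most functions in Table 7) is the overall winner under this criterion.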
Table 8. Test operating conditions of the Bouc–Wen model.

Case | Frequency (Hz) | Amplitude (mm) | Loading Current (A)
Case 1 | 0.5 | 10 | 0, 0.5, 1, 1.5, 2, 2.5, 3
Case 2 | 1 | 10 | 0, 0.5, 1, 1.5, 2, 2.5, 3
Case 3 | 1.5 | 10 | 0, 0.5, 1, 1.5, 2, 2.5, 3
Table 9. Results provided by different algorithms for three cases.

Objective function (fMER):
Case | Index | PSO | GSA | MRFO | WOA | CLPSO | CMA-ES | PSOGSA | IGWO | IMRFO
Case 1 | Mean | 1.091 × 10−1 | 2.343 × 10−1 | 1.345 × 10−1 | 7.345 × 10−2 | 1.877 × 10−1 | 7.896 × 10−2 | 1.478 × 10−1 | 7.402 × 10−2 | 6.706 × 10−2
Case 1 | STD | 9.035 × 10−2 | 1.145 × 10−2 | 2.290 × 10−3 | 3.010 × 10−3 | 6.465 × 10−3 | 2.185 × 10−3 | 3.800 × 10−3 | 5.897 × 10−4 | 2.238 × 10−4
Case 2 | Mean | 2.325 × 10−1 | 6.219 × 10−1 | 2.667 × 10−1 | 1.930 × 10−1 | 2.380 × 10−1 | 2.092 × 10−1 | 2.935 × 10−1 | 2.277 × 10−1 | 1.773 × 10−1
Case 2 | STD | 1.952 × 10−2 | 1.658 × 10−1 | 3.179 × 10−3 | 3.036 × 10−2 | 1.061 × 10−2 | 1.048 × 10−3 | 5.771 × 10−3 | 2.516 × 10−3 | 1.717 × 10−3
Case 3 | Mean | 2.655 × 10−1 | 7.154 × 10−1 | 2.667 × 10−1 | 2.387 × 10−1 | 2.380 × 10−1 | 2.535 × 10−1 | 2.935 × 10−1 | 2.277 × 10−1 | 2.275 × 10−1
Case 3 | STD | 6.687 × 10−2 | 4.544 × 10−2 | 2.850 × 10−2 | 6.458 × 10−2 | 8.677 × 10−2 | 5.653 × 10−2 | 3.634 × 10−2 | 3.596 × 10−2 | 1.004 × 10−2
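Table 9 minimises an objective fMER whose precise definition appears in the body of the paper rather than in this excerpt. A typical choice in damper identification is a normalised error between the measured and simulated force histories; the following is a hypothetical sketch of such a measure, not necessarily the paper's exact fMER:

```python
def relative_error(measured, simulated):
    """Normalised error between measured and simulated force histories.

    Returns sqrt(sum((f_m - f_s)^2)) / sqrt(sum(f_m^2)); smaller is better,
    and 0 means the simulated force matches the measurement exactly.
    """
    num = sum((m - s) ** 2 for m, s in zip(measured, simulated)) ** 0.5
    den = sum(m ** 2 for m in measured) ** 0.5
    return num / den
```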
Table 10. Parameters of the Bouc–Wen model for loading current = 0.5 A.

Parameter | PSO | GSA | MRFO | WOA | CLPSO | CMA-ES | PSOGSA | IGWO | IMRFO
αa | 3.59 × 10^4 | 9.17 × 10^4 | 2.58 × 10^4 | 3.71 × 10^4 | 3.19 × 10^4 | 5.33 × 10^4 | 8.01 × 10^4 | 2.29 × 10^4 | 4.43 × 10^4
αb | 7.11 × 10^4 | 4.78 × 10^4 | 1.29 × 10^4 | 3.08 × 10^4 | 2.02 × 10^4 | 2.11 × 10^4 | 6.11 × 10^4 | 1.25 × 10^4 | 4.18 × 10^4
c0a | 2.81 × 10^3 | 5.01 × 10^3 | 3.89 × 10^3 | 3.07 × 10^3 | 3.55 × 10^3 | 5.67 × 10^3 | 7.05 × 10^3 | 2.78 × 10^3 | 2.61 × 10^3
c0b | 6.19 × 10^−1 | 4.95 × 10^−1 | 1.77 × 10^−1 | 6.87 × 10^−1 | 4.36 × 10^−1 | 1.90 × 10^−1 | 3.26 × 10^−1 | 3.37 × 10^−1 | 4.83 × 10^−1
k0 | 8.45 × 10^2 | 2.46 × 10^2 | 2.10 × 10^2 | 2.38 × 10^2 | 2.87 × 10^2 | 5.84 × 10^2 | 1.77 × 10^2 | 7.76 × 10^2 | 8.70 × 10^2
c1 | 6.21 × 10^6 | 9.60 × 10^6 | 5.16 × 10^6 | 6.67 × 10^6 | 3.86 × 10^6 | 7.31 × 10^6 | 1.00 × 10^6 | 5.64 × 10^6 | 4.61 × 10^6
k1 | 6.55 × 10^2 | 2.05 × 10^2 | 2.82 × 10^2 | 3.38 × 10^2 | 1.05 × 10^2 | 2.77 × 10^2 | 1.33 × 10^2 | 1.59 × 10^2 | 1.26 × 10^2
γ | 6.36 × 10^3 | 7.42 × 10^2 | 4.50 × 10^2 | 6.72 × 10^2 | 7.15 × 10^2 | 6.36 × 10^3 | 7.09 × 10^3 | 7.48 × 10^3 | 1.94 × 10^3
βa | 8.30 × 10^3 | 7.10 × 10^3 | 7.60 × 10^3 | 6.34 × 10^3 | 9.91 × 10^3 | 4.21 × 10^3 | 6.81 × 10^3 | 5.15 × 10^3 | 8.43 × 10^3
βb | −7.24 × 10^−1 | −2.25 × 10^−1 | −2.93 × 10^−1 | −8.85 × 10^−1 | −7.26 × 10^−1 | −4.29 × 10^−1 | −8.85 × 10^−1 | −3.63 × 10^−1 | −3.91 × 10^−1
Aa | −4.90 | −1.31 | −8.03 | −3.21 | −4.99 | −1.05 | −5.54 | −2.98 | −9.26
Ab | −47.8 | −26.7 | −17.8 | −88.5 | −35.9 | −61.3 | −86.0 | −24.5 | −34.2
Ac | 4.55 × 10^2 | 2.13 × 10^2 | 3.92 × 10^2 | 6.91 × 10^2 | 3.50 × 10^2 | 5.22 × 10^2 | 9.35 × 10^2 | 4.08 × 10^2 | 3.89 × 10^2
n | 5.49 × 10^−1 | 7.67 × 10^−1 | 7.66 × 10^−1 | 5.52 × 10^−1 | 7.81 × 10^−1 | 6.02 × 10^−1 | 4.20 × 10^−1 | 7.96 × 10^−1 | 6.02 × 10^−1
x0 | 9.86 × 10^−3 | 3.24 × 10^−1 | 2.35 × 10^−2 | 7.01 × 10^−2 | 1.63 × 10^−2 | 2.09 × 10^−2 | 1.00 × 10^−4 | 7.83 × 10^−3 | 1.42 × 10^−2
Table 11. Matching results of models from different algorithms for cases 1–3.

Mean Nash–Sutcliffe efficiency (fMNSE):
Case | PSO | GSA | MRFO | WOA | CLPSO | CMA-ES | PSOGSA | IGWO | IMRFO
Case 1 | 0.987 | 0.933 | 0.981 | 0.993 | 0.963 | 0.993 | 0.975 | 0.994 | 0.995
Case 2 | 0.944 | 0.709 | 0.95 | 0.962 | 0.954 | 0.955 | 0.927 | 0.959 | 0.968
Case 3 | 0.926 | 0.477 | 0.928 | 0.942 | 0.942 | 0.935 | 0.909 | 0.947 | 0.947
Average of fMNSE | 0.952 | 0.706 | 0.953 | 0.966 | 0.953 | 0.961 | 0.937 | 0.967 | 0.970
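Table 11 and Figures 25–27 score model fit with the Nash–Sutcliffe efficiency (NSE), which equals 1 for a perfect match between experimental and simulated forces. A minimal sketch of the standard definition:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency of a simulation against observations.

    NSE = 1 - SSE / sum((obs - mean(obs))^2); 1 is a perfect match,
    0 means the model is no better than predicting the observed mean.
    """
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - sse / var
```

On this scale, the IMRFO-identified model's 0.995 for case 1 indicates the simulated damping force reproduces the measured curve almost exactly.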
Table 12. Mean running time of different algorithms for cases 1–3.

Mean running time (s):
Case | PSO | GSA | MRFO | WOA | CLPSO | CMA-ES | PSOGSA | IGWO | IMRFO
Case 1 | 1754 | 1820 | 1839 | 1815 | 1787 | 1893 | 1829 | 1838 | 1825
Case 2 | 1685 | 1730 | 1730 | 1716 | 1680 | 1746 | 1743 | 1774 | 1728
Case 3 | 1778 | 1946 | 1868 | 1847 | 1814 | 1879 | 1879 | 2003 | 1862
Liao, Y.; Zhao, W.; Wang, L. Improved Manta Ray Foraging Optimization for Parameters Identification of Magnetorheological Dampers. Mathematics 2021, 9, 2230. https://doi.org/10.3390/math9182230