Article

Modified Remora Optimization Algorithm with Multistrategies for Global Optimization Problem

1 School of Information Engineering, Sanming University, Sanming 365004, China
2 School of Education and Music, Sanming University, Sanming 365004, China
3 School of Computer Science and Technology, Hainan University, Haikou 570228, China
4 Hourani Center for Applied Scientific Research, Al-Ahliyya Amman University, Amman 19328, Jordan
5 Faculty of Information Technology, Middle East University, Amman 11831, Jordan
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(19), 3604; https://doi.org/10.3390/math10193604
Submission received: 4 September 2022 / Revised: 25 September 2022 / Accepted: 28 September 2022 / Published: 2 October 2022
(This article belongs to the Special Issue Computational Intelligence Methods in Bioinformatics)

Abstract

The Remora Optimization Algorithm (ROA) is a metaheuristic optimization algorithm, proposed in 2021, that simulates the parasitic attachment, experience attack, and host-feeding behavior of the remora in the ocean. However, the optimization performance of the original ROA is limited. Considering that the remora relies on its host to find food, and in order to improve the performance of ROA, we designed a new host-switching mechanism. By adding the new host-switching mechanism, joint opposite selection, and a restart strategy, a modified remora optimization algorithm (MROA) is proposed. We use 23 standard benchmark functions and the CEC2020 test suite to evaluate MROA and compare it with nine state-of-the-art optimization algorithms. The experimental results show that MROA has better optimization performance and robustness. Finally, the ability of MROA to solve practical problems is demonstrated on five classical engineering problems.

1. Introduction

The optimization process in any problem consists of finding the best solution subject to some constraints. However, with the development of science and technology, the computational complexity of practical problems keeps increasing. Many scholars favor metaheuristic optimization algorithms because they are derivative-free, mechanistically simple, and computationally cheap. Although metaheuristic algorithms are slightly inferior to gradient-descent methods on linear problems, for nonlinear problems they typically generate a group of random vectors and search the solution space for the optimal solution without any gradient information, which makes them effective on practical problems whose derivative information is unknown. Metaheuristic optimization algorithms can be divided into four subclasses: evolutionary-based, human-based, swarm-based, and physics-based.
Evolutionary-based algorithms mainly simulate the survival laws of nature in order to advance the population as a whole and finally obtain the optimal solution. For example, the Genetic Algorithm (GA) [1] mimics the inheritance, variation, and crossover of population genes; Biogeography-based Optimization (BBO) [2] simulates the migration of species between habitats; and Gene Expression Programming (GEP) [3] is based on the structure and function of biological genes. Human-based algorithms are mainly inspired by human behaviors. For example, Teaching–Learning-Based Optimization (TLBO) [4] and the Group Teaching Optimization Algorithm (GTOA) [5] simulate the teaching behavior of teachers and the learning behavior of students; Harmony Search (HS) [6] simulates the principles of musical performance; and the Brain Storm Optimization (BSO) algorithm [7] is inspired by the human creative problem-solving process, namely the brainstorming conference. Swarm-based algorithms are mainly inspired by the collective wisdom of animal swarms: Particle Swarm Optimization (PSO) [8] simulates the foraging of birds; the Whale Optimization Algorithm (WOA) [9] simulates the bubble-net attacking behavior of whales; and the mating behavior of snakes inspires the Snake Optimizer (SO) [10]. Physics-based algorithms mainly derive from the physical rules and chemical reactions of the universe. For example, the Arithmetic Optimization Algorithm (AOA) [11] searches for the optimal solution through the four mixed operations of addition, subtraction, multiplication, and division; the Sine Cosine Algorithm (SCA) [12] lets search agents move toward the optimal position through sine and cosine operations; Simulated Annealing (SA) [13] is inspired by the annealing principle; and the Multi-Verse Optimizer (MVO) [14] is based on the multiverse theory in physics.
However, according to the No Free Lunch (NFL) theorem [15], an algorithm's high performance on one set of problems comes at the cost of poor performance on the remaining problems, so no optimization algorithm can solve all practical problems. While this motivates scholars to propose new algorithms, it is equally important to improve existing ones. For example, Zhang et al. [16] proposed a Group Teaching Optimization Algorithm with Information Sharing (ISGTOA) by strengthening the interaction between excellent and backward students in GTOA, and Sun et al. [17] improved the traditional Slime Mould Algorithm (SMA) by integrating Brownian motion and a tournament mechanism and further accelerated the search by combining it with an adaptive β hill-climbing method.
Although algorithms inspired by evolutionary theory, human behavior, swarm foraging, and physical phenomena are very common, some metaheuristic algorithms offer no substantive innovation: they are merely variations of Particle Swarm Optimization (PSO), the Ant Colony Optimization (ACO) algorithm [18], or the Genetic Algorithm (GA), supported only by a metaphor drawn from natural processes. For example, the Harmony Search (HS) and Intelligent Water Drops (IWD) [19] algorithms have been identified by many scholars as restatements of other popular optimization algorithms. Reference [20] pointed out that the harmony search algorithm is a variant of the genetic algorithm, and reference [21] showed that the intelligent water drops algorithm is only a simplification, or special case, of the ant colony algorithm. Sörensen [22] argued that such algorithms make no actual contribution to the field of optimization. Guided by this literature, this paper does not propose a new metaphor; instead, it optimizes and upgrades the existing ROA.
The Remora Optimization Algorithm (ROA) [23] is a swarm intelligence optimization algorithm proposed by Jia et al. in 2021 that imitates the feeding habits of the remora; it is inspired by the remora's parasitic behavior of attaching itself to sailfish and whales. Many scholars have already improved ROA. For example, Liu et al. [24] proposed a remora optimization algorithm integrating Brownian motion and lens opposition-based learning: Brownian motion improves the exploration ability of the algorithm, the lens opposition-based learning strategy improves its ability to jump out of local optima, and the resulting algorithm is effective on specific engineering problems such as multilevel thresholding image segmentation. Zheng et al. [25] proposed an improved remora optimization algorithm with a probabilistic autonomous foraging mechanism that restores the natural habits of the remora. Wang et al. [26] used Lévy flight to improve the Sailfish Optimizer (SFO) strategy of ROA, added an adaptive dynamic probability to improve the host-switching mechanism, and introduced a restart strategy to improve the overall performance of the algorithm.
Based on the above improvement of ROA and inspired by the remora’s habit of relying on the host to find food, we design a new host-switching mechanism. The new mechanism is more in line with the habits of remora. It evaluates the surrounding environment around the host and determines whether the host needs to be switched. At the same time, joint opposite selection [27] and restart strategy [28] are also employed to improve the optimization performance of ROA. The modified algorithm has a significant effect on both test functions and classical engineering problems.
The main contribution of this paper can be summarized as follows:
  • A modified remora optimization algorithm (MROA) is proposed with three strategies: a new host-switching mechanism, joint opposite selection, and a restart strategy.
  • The performance of MROA is tested on 23 standard benchmark functions and the CEC2020 test suite.
  • MROA has been tested under four different dimensions (dim = 30, 100, 500, and 1000).
  • Five traditional engineering problems are used to validate the engineering applicability of the proposed MROA.
In the rest of the paper, Section 2 introduces the specific details of ROA. Section 3 introduces the new host-switching mechanism, joint opposite selection strategy, and restart strategy and gives the pseudocode and flowchart of MROA. Section 4 gives the test results of MROA on benchmark functions. Section 5 gives the relevant results of MROA on engineering problems. Section 6 is the conclusions.

2. Remora Optimization Algorithm (ROA)

The Remora Optimization Algorithm is a metaheuristic optimization algorithm proposed in 2021 that mimics the parasitic behavior of the remora. Regarded as one of the smartest fish in the sea, the remora attaches itself to sailfish, whales, or other creatures to find food while avoiding enemies and saving energy. ROA therefore borrows formulas from the Sailfish Optimizer (SFO) and the Whale Optimization Algorithm (WOA) to update positions. In addition, to determine whether it is necessary to change hosts, the remora moves around the host in a small range; this is the experience attack. If there is no need to replace the host, the remora conducts host feeding. Figure 1 shows the detailed process of remora predation.

2.1. Initialization

In ROA, the calculation equation for initializing the remora population is as follows:
X_{i,j} = lb_j + rand × (ub_j − lb_j)    (1)
where X_{i,j} is the jth dimension of the ith remora's position, lb_j and ub_j are the lower and upper boundaries of the search space, and rand is a random number between 0 and 1.
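For concreteness, a minimal NumPy sketch of this initialization step (the function and variable names are ours, not the paper's):

import numpy as np

rng = np.random.default_rng()

def initialize_population(n_agents, dim, lb, ub):
    # Equation (1): scatter the remoras uniformly inside [lb, ub]^dim
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (dim,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (dim,))
    return lb + rng.random((n_agents, dim)) * (ub - lb)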

2.2. Free Travel (Exploration)

2.2.1. SFO Strategy

When the remora is attached to a sailfish, it moves with the sailfish. Based on the elite strategy of the SFO algorithm, the position-update formula of SFO is modified to obtain:
X_i^{t+1} = X_Best^t − (rand × (X_Best^t + X_rand^t)/2 − X_rand^t)    (2)
where t is the current iteration number, X_Best^t is the current best position, and X_rand^t is the position of a randomly selected remora.
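A one-line sketch of Equation (2) in the same style (x_best and x_rand are assumed to be NumPy vectors):

import numpy as np

rng = np.random.default_rng()

def sfo_update(x_best, x_rand):
    # Equation (2): move with the sailfish host (elite SFO strategy)
    return x_best - (rng.random() * (x_best + x_rand) / 2.0 - x_rand)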

2.2.2. Experience Attack

When the remora is attached to a host, it moves around the host in a small range, guided by its previous position and the current host, to determine whether the host needs to be replaced. This process is similar to the accumulation of experience. The mathematical formula is as follows:
X_att = X_i^t + (X_i^t − X_pre) × randn    (3)
where X_att is a tentative movement of the remora, X_pre is the position of the previous generation of remora, which can be regarded as experience, and randn is a normally distributed random number (mean 0, standard deviation 1).
After this small-range movement, the remora judges whether to switch hosts according to Equation (4), and the host is switched through Equation (5):
f(X_i^t) < f(X_att)    (4)
H(i) = round(rand)    (5)
Here, H(i) determines the host the remora attaches to and is initialized to 0 or 1. If H(i) equals 0, the remora attaches to a sailfish; if H(i) equals 1, it attaches to a whale. round(·) is the rounding function, and f(X_i^t) and f(X_att) are the fitness values of X_i^t and X_att, respectively.
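A sketch of the experience attack and the host-switching test, following the condition used in Algorithm 1 (the helper names are ours):

import numpy as np

rng = np.random.default_rng()

def experience_attack(x_i, x_pre):
    # Equation (3): tentative move around the host, guided by the previous position
    return x_i + (x_i - x_pre) * rng.standard_normal(x_i.shape)

def maybe_switch_host(f, x_i, x_att, h_i):
    # Equations (4) and (5): re-roll H(i) to 0 or 1 if the tentative move is better
    if f(x_att) < f(x_i):  # the switching test of Algorithm 1, line 13
        h_i = int(round(rng.random()))
    return h_i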

2.3. Eat Thoughtfully (Exploitation)

2.3.1. WOA Strategy

When the host is a whale, the remora moves synchronously with it. The calculation formulas are as follows:
X_i^{t+1} = D × e^l × cos(2πl) + X_i^t    (6)
D = |X_Best^t − X_i^t|    (7)
l = rand × (a − 1) + 1    (8)
a = −(1 + t/T)    (9)
where D is the distance between the current best position and the current position before the update, e is the base of the natural logarithm (≈2.7183), l is a random number in [a, 1], a decreases linearly from −1 to −2 over the iterations, and T is the maximum number of iterations.
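Equations (6)–(9) can be sketched as follows (names are ours):

import numpy as np

rng = np.random.default_rng()

def woa_update(x_i, x_best, t, T):
    # Equations (6)-(9): spiral movement with the whale host
    a = -(1.0 + t / T)                  # decreases linearly from -1 to -2, Eq. (9)
    l = rng.random() * (a - 1.0) + 1.0  # random number in [a, 1], Eq. (8)
    D = np.abs(x_best - x_i)            # distance to the current best, Eq. (7)
    return D * np.exp(l) * np.cos(2.0 * np.pi * l) + x_i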

2.3.2. Host Feeding

Host feeding is a further subdivision of the exploitation stage in which the search range shrinks and the remora looks for food around the host. The mathematical formulas are as follows:
X_i^{t+1} = X_i^t + A    (10)
A = B × (X_i^t − C × X_Best)    (11)
B = 2 × V × rand − V    (12)
V = 2 × (1 − t/T)    (13)
where A is the moving distance of the remora, which is related to the volumes of the remora and the host. The remora factor C limits the position of the remora and is set to 0.1 in ROA. B simulates the host's volume, and V simulates the volume of the remora.
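A corresponding sketch of Equations (10)–(13):

import numpy as np

rng = np.random.default_rng()

def host_feeding(x_i, x_best, t, T, C=0.1):
    # Equations (10)-(13): small steps around the host
    V = 2.0 * (1.0 - t / T)         # remora volume, shrinks over time, Eq. (13)
    B = 2.0 * V * rng.random() - V  # host volume factor in [-V, V], Eq. (12)
    A = B * (x_i - C * x_best)      # moving distance, Eq. (11)
    return x_i + A                  # Eq. (10)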
By adopting the above methods, ROA performs better than the WOA and SFO algorithms. The specific pseudocode is shown in Algorithm 1.
Algorithm 1. The pseudocode of ROA
1.        Initialize the population size N and the maximum number of iterations T
2.        Initialize the positions of population Xi(i = 1, 2,…N)
3.        While (t < T)
4.                Check if any search agent goes beyond the search space and amend it
5.                Calculate the fitness value of each remora and find the best position XBest
6.                For each remora indexed by i
7.                        If H(i) == 0 then
8.                                Using Equation (2) to update the position of attached sailfishes
9.                        Else if H(i) == 1
10.                               Using Equations (6)–(9) to update the position of attached whales
11.                       End if
12.                       Make a one-step prediction by Equation (3)
13.                       If f(Xatt) < f(Xit)
14.                               Using Equation (5) to switch hosts
15.                       Else
16.                               Using Equations (10)–(13) as the host feeding mode for remora;
17.                       End if
18.               End for
19.       End While
20.      Return Xbest

3. The Proposed Multistrategies for ROA

The mechanism of ROA is simple and easy to implement, and it is widely used to solve optimization problems. However, there is an imbalance between ROA's exploration and exploitation abilities. Therefore, this paper improves the host-switching mechanism of ROA and adds a joint opposite selection strategy to balance the exploration and exploitation stages of the algorithm. A restart strategy is also added to improve the ability of the algorithm to jump out of local optima. The specific steps of MROA are as follows.

3.1. Host-Switching Mechanism

ROA judges whether the host needs to be replaced after the experience attack, which depends only on the state of the remora and the current host. However, the remora mainly relies on the host to obtain food and has a poor ability to find food by itself; if it never switches hosts, it may fail to find food at all. Therefore, this paper proposes a new host-switching mechanism that additionally evaluates the environment surrounding the host and then judges whether to switch hosts, reducing the influence of the remora's weak self-feeding ability. Based on this foraging behavior of the remora, the improvement proposed in this paper is as follows:
X_new = X_i^t + k × step,  if rand < P    (14)
k = β × (1 − rand) + rand    (15)
step = X_r1^t − X_r2^t    (16)
P = 0.5 × (2 − t/T)    (17)
f(X_i^t) < f(X_new)    (18)
where X_new is a new solution, k is a random factor, and β is a constant set to 0.2 in this paper. step is the distance between two random solutions X_r1^t and X_r2^t, f(X_new) is the fitness value of X_new, and P is a factor that decreases linearly from 1 to 0.5 and controls the frequency of generating new solutions.
The above strategy generates a new solution around the current solution through Equation (14) and compares the fitness values of the new and current solutions through Equation (18). If the fitness value of the new solution is better, the remora switches hosts through Equation (5).
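A sketch of the proposed mechanism as a single helper, following lines 14–17 of Algorithm 2 (we assume Equation (15) reuses one random draw; the paper's notation leaves this implicit):

import numpy as np

rng = np.random.default_rng()

def new_host_switching(f, x_i, population, h_i, t, T, beta=0.2):
    # Equations (14)-(18): probe the host's surroundings; switch host if the probe is better
    P = 0.5 * (2.0 - t / T)  # decreases linearly from 1 to 0.5, Eq. (17)
    if rng.random() < P:
        r1, r2 = rng.choice(len(population), size=2, replace=False)
        r = rng.random()
        k = beta * (1.0 - r) + r                             # random factor, Eq. (15)
        x_new = x_i + k * (population[r1] - population[r2])  # Eqs. (14) and (16)
        if f(x_new) < f(x_i):                                # a better environment was found
            h_i = int(round(rng.random()))                   # re-select the host, Eq. (5)
    return h_i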

3.2. Joint Opposite Selection

Joint opposite selection combines the advantages of selective leading opposition and dynamic opposition [29]. It further balances the exploration and exploitation abilities of the algorithm, while also enhancing its global search and its ability to find the optimal solution.

3.2.1. Selective Leading Opposition (SLO)

SLO is developed from selective opposition (SO) [30]. It calculates the distance between the current solution and the best solution in each dimension and compares it with a threshold: dimensions whose distance exceeds the threshold form the faraway-distance set (df), and dimensions whose distance is below the threshold form the close-distance set (dc). At the same time, Spearman's rank correlation coefficient between the current and best solutions is calculated, and the SLO strategy is carried out for positions whose correlation is less than 0.
Take the two-dimensional problem as an example. As shown in Figure 2, it is assumed that P1, P2, and P3 represent the current solutions, and G is the best solution. Assuming that the threshold is 3, the difference distance between P1 and G in the first dimension is 5, and the difference distance between P1 and G in the second dimension is 2. Only the difference distance in the second dimension is less than the threshold. In this case, the SLO strategy is applied to the second dimension. Similarly, for solution P2, only the first dimension needs to apply the SLO strategy. For solution P3, both the first and second dimensions must carry out the SLO strategy.
The calculation equations of the SLO strategy are as follows:
X̄_{i,dc} = lb + ub − X_{i,dc},  if src < 0 and size(dc) > size(df)    (19)
src = 1 − (6 × Σ_{j=1}^{dim} (dd_{i,j})^2)/(n × (n^2 − 1))    (20)
where X_{i,dc} denotes the close-distance dimensions of the ith solution and X̄_{i,dc} is their opposite. src is Spearman's rank correlation coefficient, size(dc) is the number of close-distance dimensions (dc) of the current solution, and size(df) is the number of faraway-distance dimensions (df). The distance between the current solution and the optimal solution in each dimension is calculated by Equation (21); a dimension whose distance is below the threshold belongs to dc, and otherwise to df. The threshold is calculated by Equation (22).
dd_{i,j}^t = |X_{Best,j}^t − X_{i,j}^t|    (21)
threshold = 2 − 2t/T    (22)
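A sketch of SLO using SciPy's Spearman correlation, assuming scalar bounds lb and ub:

import numpy as np
from scipy.stats import spearmanr

def selective_leading_opposition(x_i, x_best, lb, ub, t, T):
    # Equations (19)-(22): oppose only the dimensions close to the best solution
    dd = np.abs(x_best - x_i)        # per-dimension distance, Eq. (21)
    threshold = 2.0 - 2.0 * t / T    # Eq. (22)
    close = dd < threshold           # close-distance dimensions (dc)
    src, _ = spearmanr(x_best, x_i)  # Spearman's rank correlation, Eq. (20)
    if src < 0 and close.sum() > (~close).sum():  # condition of Eq. (19)
        x_i = x_i.copy()
        x_i[close] = lb + ub - x_i[close]         # opposite positions on dc
    return x_i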

3.2.2. Dynamic Opposite (DO)

Dynamic opposition combines the ideas of quasi-opposition [31] and quasi-reflection [32]. Its advantage is that it searches the space dynamically and moves asymmetrically within it, which helps the algorithm jump out of local optima. The calculation equations are as follows:
X̄_i = lb + ub − X_i    (23)
X_r = rand × X̄_i    (24)
X̄_do = X_i + rand × (X_r − X_i),  if rand < Jr    (25)
Here, X̄_i is the opposite solution of the ith solution X_i, X_r is the random opposite solution, and X̄_do is the dynamic opposite solution. Jr is the jump rate, i.e., the probability of carrying out dynamic opposition. The literature [27] indicates that the effect is optimal when Jr is 0.25.
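A sketch of DO, again with scalar bounds:

import numpy as np

rng = np.random.default_rng()

def dynamic_opposite(x_i, lb, ub, jr=0.25):
    # Equations (23)-(25): jump toward a randomly scaled opposite point with probability Jr
    if rng.random() < jr:
        x_opp = lb + ub - x_i                    # opposite solution, Eq. (23)
        x_r = rng.random() * x_opp               # random opposite solution, Eq. (24)
        return x_i + rng.random() * (x_r - x_i)  # dynamic opposite solution, Eq. (25)
    return x_i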

3.3. Restart Strategy (RS)

The restart strategy is generally used to help poor individuals jump out of local optima and escape stagnation. Zhang et al. [28] used a trial vector trial(i) to record how long an individual has been stuck at a local optimum: if the solution has not improved, trial(i) is increased by 1; otherwise, trial(i) is reset to 0. When trial(i) is not less than a predefined limitation limit, two new solutions T1 and T2 are produced by Equations (26) and (27); if T2 exceeds the boundary, it is pulled back by Equation (28). The better of T1 and T2 then replaces the current solution. In this paper, limit is set to log t: a small limit in the early stage helps enhance the algorithm's global search, while a large limit in the later stage prevents the algorithm from straying far from the optimal solution.
T1 = (ub − lb) × rand + lb    (26)
T2 = (ub + lb) × rand − X_i    (27)
T2 = (ub − lb) × rand + lb,  if T2 > ub || T2 < lb    (28)
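A sketch of the regeneration step of Equations (26)–(28), with scalar bounds:

import numpy as np

rng = np.random.default_rng()

def restart(f, x_i, lb, ub):
    # Equations (26)-(28): replace a stagnant individual with the better of two candidates
    t1 = (ub - lb) * rng.random(x_i.shape) + lb   # Eq. (26)
    t2 = (ub + lb) * rng.random(x_i.shape) - x_i  # Eq. (27)
    out = (t2 > ub) | (t2 < lb)                   # components that left the search space
    t2[out] = (ub - lb) * rng.random(np.count_nonzero(out)) + lb  # pull back, Eq. (28)
    return t1 if f(t1) < f(t2) else t2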

3.4. The Proposed MROA

The MROA proposed in this paper remedies the shortcomings of the host-switching mechanism of the original ROA. We propose a new host-switching mechanism and add joint opposite selection and a restart strategy to enhance the global performance of the algorithm.
In ROA, the remora first updates its position based on the host it is attached to (through Equation (2) if attached to a sailfish, or Equations (6)–(9) if attached to a whale) and then conducts an experience attack to determine whether it needs to reselect the host; if there is no need to switch hosts, host feeding is carried out. In MROA, after the experience attack, the algorithm evaluates the surroundings of the host through Equation (14) to determine whether the host should be switched (the host-switching mechanism proposed in this paper). At the same time, the SLO strategy (Equation (19)) and the DO strategy (Equation (25)) of joint opposite selection are applied at the beginning and end of each position update. Finally, if a remora stagnates at a position for a long time, Equations (26) and (27) regenerate two candidate positions and the better one replaces the original position.
The pseudocode of MROA is shown in Algorithm 2, and the flowchart of MROA is shown in Figure 3.
Algorithm 2. The pseudocode of MROA
1.        Initialize the population size N and the maximum number of iterations T
2.        Initialize the positions of population Xi(i = 1, 2,…N)
3.        While (t < T)
4.                Check if any search agent goes beyond the search space and amend it
5.                Calculate the fitness value of each remora and find the best solution XBest
6.                Perform SLO for each position by Equation (19)
7.                For each remora indexed by i
8.                        If H(i) == 0 then
9.                                Using Equation (2) to update the position of attached sailfishes
10.                       Else if H(i) == 1
11.                               Using Equations (6)–(9) to update the position of attached whales
12.                       End if
13.                       Perform experience attack by Equation (3)
14.                       If rand < P
15.                               Make a prediction around host by Equation (14)
16.                               If f(Xnew) < f(Xit)
17.                                       Using Equation (5) to switch hosts
18.                               End if
19.                       End if
20.                       Using Equation (10) as the host feeding mode for remora;
21.                       Perform DO for each position by Equation (25)
22.                       Update trial(i) for each remora
23.                       If trial(i) > limit
24.                               Generate positions by Equations (26) and (27)
25.                               Choose the position with the better fitness value
26.                       End if
27.               End for
28.       End While
29.       Return Xbest

3.5. The Computational Complexity of MROA

Time complexity is very important for evaluating an algorithm. The computational complexity of MROA mainly comes from initialization parameters, initialization population, update of population’s positions, and other factors. The specific complexity analysis is as follows:
  • Complexity to initialize the parameters: O(1).
  • Complexity to initialize the population: O(N × dim), where N is the number of search agents and dim is the dimension size.
  • The computational complexity of updating the positions of the population comes from several parts. The SFO and WOA strategies cost O(N × dim × T), and host feeding and the experience attack each cost O(N × dim × T) as well. The complexity of evaluating the solutions around the host is uncertain, so we assume its maximum value O(N × dim × T). The complexity of JOS is O(SLO) + O(DO) = O(N × size(dc) × T) + O(N × dim × T × Jr), where size(dc) is the number of close-distance dimensions and Jr is the jump rate. According to the literature [28], the restart strategy costs O(2 × N × dim × T/limit), where limit is the predefined limitation. Summing up, the computational complexity of updating the positions of the population is O(N × T × (dim × (Jr + 2/limit + 4) + size(dc))).
The computational complexity of initializing the parameters and the population is very small and can be ignored. Therefore, the computational complexity of MROA is O(N × T × (dim × (Jr + 2/limit + 4) + size(dc))), whereas that of ROA is O(3 × N × T × dim). Although the computational complexity of MROA is higher, its performance improves even more: the experimental results in Section 4 show that MROA significantly outperforms ROA.

4. Experimental Tests and Analysis

All experiments in this paper are carried out in MATLAB R2021a on a PC with Intel (R) Core (TM) i7-11700 CPU @ 2.50 GHz and RAM 16 GB memory on OS Windows 11.
In order to test the performance of MROA and demonstrate the effectiveness of its improvements, this paper selects the Remora Optimization Algorithm (ROA), Whale Optimization Algorithm (WOA), Sooty Tern Optimization Algorithm (STOA) [33], Sine Cosine Algorithm (SCA), Arithmetic Optimization Algorithm (AOA), Group Teaching Optimization Algorithm (GTOA), Bald Eagle Search (BES) [34], HPGSOS [35], and PSOS [36] as comparison algorithms. The parameter settings of MROA and the other algorithms are shown in Table 1. We set the population size to 30, the maximum number of iterations to 500, and the dimension to 30/100/500/1000 or a fixed dimension, as appropriate.

4.1. Experiments on Standard Benchmark Functions

In this section, 23 standard benchmark functions are selected to test the performance of MROA, among which F1–F7 are unimodal functions, F8–F13 are multimodal functions, and F14–F23 are multimodal functions with fixed dimensions. Table 2 shows the formula, dimension, range, and optimal value of 23 standard benchmark functions.
Figure 4 shows the images of part of the 23 standard benchmark functions, the search history of MROA in this function, the trajectory of the first search agent, and the convergence curve. It is not difficult to see that most of the search agents of MROA can approach the theoretical optimum in the later stage of iteration, which verifies the strong optimization capacity of MROA.
The results of MROA and the other optimization algorithms on the 23 standard benchmark functions are shown in Tables 3 and 4, where Best is the best fitness value, Mean is the average fitness value, and Std is the standard deviation of the fitness values. MROA clearly performs excellently on most functions except F5, F6, F12, and F13. Inf in Table 4 represents infinity and NaN an invalid number: SCA cannot converge on F2 (dim = 1000), and the solutions obtained in MATLAB are too large to be represented, which also makes the standard deviation incomputable. Although the standard deviation of MROA's fitness values on some test functions is not as good as that of other algorithms, MROA attains smaller best and average fitness values and thus has a stronger optimization capacity. Figures 5–9 show the convergence curves of MROA and the comparison algorithms on the 23 standard benchmark functions; MROA converges faster than the other algorithms on most benchmark functions except F5, F6, F12, and F13.
In order to study the significance of the differences between MROA and the comparison algorithms, the results of the Wilcoxon rank-sum test are given in Tables 5 and 6, with a significance level of 5%: when p is less than 0.05 there is a significant difference, and otherwise the algorithms are relatively similar. On F1, F9, F10, and F11, p = 1 occurs in many cases because these benchmark functions are not complex enough, and MROA and some other algorithms all find the optimal value. From the Wilcoxon rank-sum results and the experimental data in Tables 3 and 4, MROA performs well on most functions except F5, F6, F12, and F13.
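For readers who want to reproduce this analysis, a minimal SciPy sketch of the rank-sum test on two sets of final fitness values (the arrays below are placeholders, not results from the paper):

import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
mroa_runs = rng.normal(0.0, 1e-3, 30)   # placeholder: 30 final fitness values for MROA
other_runs = rng.normal(5.0, 1e-3, 30)  # placeholder: 30 final fitness values for a rival

stat, p = ranksums(mroa_runs, other_runs)
verdict = "significant difference" if p < 0.05 else "no significant difference"
print(f"p = {p:.3g}: {verdict}")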

4.2. Experiments on CEC2020 Test Suite

In this section, we will use CEC2020 to test the performance of MROA. The specific details of CEC2020 are shown in Table 7.
The results of MROA and the other algorithms over 30 runs on CEC2020 are shown in Table 8. It is worth mentioning that on the unimodal function CEC 1, MROA always converges and finds the optimal value while most algorithms cannot, which shows the strong exploitation ability of MROA. On the other, more complex functions, MROA also performs excellently. Figure 10 shows the convergence curves of MROA on CEC2020, confirming its strong convergence ability.
Table 9 shows the Wilcoxon rank-sum results of MROA and the other algorithms. On function CEC 4 the p-value is often 1 because the function is relatively simple and most algorithms find the optimal value; in most other cases the p-value is less than 0.05. Tables 8 and 9 show that MROA achieves good results on most CEC2020 functions.

4.3. Sensitivity Analysis of β on MROA

β is a parameter introduced in MROA that scales the random factor k and has a considerable impact on the algorithm. We varied β from 0.1 to 0.9 and recorded the results of 30 runs on CEC2020 for each value. The results are shown in Table 10; MROA performs best when β is 0.2.

5. Results of Constrained Engineering Design Problems

The ultimate purpose of studying optimization algorithms is to solve practical problems, and the test function cannot reflect the actual performance of MROA in engineering problems. Therefore, five classical engineering problems are selected in this section and compared with other optimization algorithms to verify the engineering applicability of MROA further.

5.1. Welded Beam Design Problem

The welded beam design problem is to minimize the cost of a welded beam subject to four decision variables and seven constraints. The four variables to be optimized are the thickness of the weld (h), the length of the welded part of the bar (l), the height of the bar (t), and the thickness of the bar (b). The constraints involve the shear stress (τ), the bending stress (σ), the buckling load on the bar (Pc), the deflection of the beam (δ), and boundary conditions. The schematic diagram of its structural design is shown in Figure 11.
The mathematical model of welded beam design is as follows:
Consider:
x = [x1, x2, x3, x4] = [h, l, t, b]
Objective function:
f(x) = 1.10471 x1^2 x2 + 0.04811 x3 x4 (14.0 + x2)
Subject to:
g1(x) = τ(x) − τ_max ≤ 0
g2(x) = σ(x) − σ_max ≤ 0
g3(x) = δ(x) − δ_max ≤ 0
g4(x) = x1 − x4 ≤ 0
g5(x) = P − Pc(x) ≤ 0
g6(x) = 0.125 − x1 ≤ 0
g7(x) = 1.10471 x1^2 + 0.04811 x3 x4 (14.0 + x2) − 5.0 ≤ 0
where:
τ(x) = sqrt((τ′)^2 + 2τ′τ″ × x2/(2R) + (τ″)^2),  τ′ = P/(√2 × x1 x2),  τ″ = MR/J,
M = P(L + x2/2),  R = sqrt(x2^2/4 + ((x1 + x3)/2)^2),  σ(x) = 6PL/(x4 x3^2),
J = 2{√2 × x1 x2 [x2^2/4 + ((x1 + x3)/2)^2]},  δ(x) = 6PL^3/(E x4 x3^2),
Pc(x) = (4.013 E sqrt(x3^2 x4^6/36)/L^2) × (1 − (x3/(2L)) × sqrt(E/(4G))),
P = 6000 lb,  L = 14 in,  δ_max = 0.25 in,  E = 30 × 10^6 psi,  G = 12 × 10^6 psi,
τ_max = 13,600 psi,  and  σ_max = 30,000 psi
Boundaries:
0.1 ≤ x_i ≤ 2, i = 1, 4;  0.1 ≤ x_i ≤ 10, i = 2, 3
Table 11 shows the results of MROA and other algorithms on the design of welded beams. When h is 0.2062185, l is 3.254893, t is 9.02003, and b is 0.206489, MROA achieves the minimum cost of 1.699058. It can be seen that MROA is superior to other algorithms in the design of welded beams.

5.2. The Tension/Compression Spring Design Problem

The purpose of the tension/compression spring design problem is to minimize the mass of the spring subject to three variables and four constraints. The constraints are the minimum deflection (g1), the shear stress (g2), the surge frequency (g3), and the outer-diameter limit (g4); the corresponding variables are the wire diameter d, the mean coil diameter D, and the number of active coils N. f(x) is the spring mass to be minimized. The model is shown in Figure 12.
The mathematical model of the tension/compression spring design is described as follows:
Consider:
x = [x1, x2, x3] = [d, D, N]
Objective function:
f(x) = (x3 + 2) × x2 × x1^2
Subject to:
g1(x) = 1 − (x2^3 × x3)/(71785 × x1^4) ≤ 0
g2(x) = (4 × x2^2 − x1 × x2)/(12566 × (x2 × x1^3 − x1^4)) + 1/(5108 × x1^2) − 1 ≤ 0
g3(x) = 1 − (140.45 × x1)/(x2^2 × x3) ≤ 0
g4(x) = (x1 + x2)/1.5 − 1 ≤ 0
Boundaries:
0.05 ≤ x1 ≤ 2.0;  0.25 ≤ x2 ≤ 1.3;  2.0 ≤ x3 ≤ 15.0
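To illustrate how such a constrained model can be handed to MROA or any of the comparison algorithms, here is a sketch of the spring objective with a simple static penalty; the penalty scheme is our illustration, since the paper does not state its constraint-handling method, and the same pattern applies to the other four engineering problems:

import numpy as np

def spring_cost(x, penalty=1e6):
    # Objective and constraints g1-g4 of the tension/compression spring problem
    d, D, N = x
    f = (N + 2.0) * D * d**2
    g = [
        1.0 - (D**3 * N) / (71785.0 * d**4),
        (4.0 * D**2 - d * D) / (12566.0 * (D * d**3 - d**4)) + 1.0 / (5108.0 * d**2) - 1.0,
        1.0 - 140.45 * d / (D**2 * N),
        (d + D) / 1.5 - 1.0,
    ]
    # static penalty: add a large cost for each violated constraint
    return f + penalty * sum(max(0.0, gi) ** 2 for gi in g)

print(spring_cost(np.array([0.05, 0.374430, 8.5497203])))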
The results of MROA and the other algorithms on the tension/compression spring design are shown in Table 12. MROA achieves the minimum spring mass when d is 0.05, D is 0.374430, and N is 8.5497203, which is clearly superior to the other comparison algorithms.

5.3. Pressure Vessel Design Problem

The purpose of the pressure vessel design problem is to meet production needs while minimizing the total cost of the vessel. Its four variables are the shell thickness Ts, the head thickness Th, the inner radius R, and the length L of the vessel excluding the head, where Ts and Th are integer multiples of 0.625 and R and L are continuous variables. The schematic diagram of the structural optimization design is shown in Figure 13.
The mathematical model of the pressure vessel design problem is as follows:
Consider:
x = [x1, x2, x3, x4] = [Ts, Th, R, L]
Objective function:
f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3
Subject to:
g1(x) = −x1 + 0.0193 x3 ≤ 0
g2(x) = −x2 + 0.00954 x3 ≤ 0
g3(x) = −π x3^2 x4 − (4/3) π x3^3 + 1,296,000 ≤ 0
g4(x) = x4 − 240 ≤ 0
Boundaries:
0 ≤ x1 ≤ 99,  0 ≤ x2 ≤ 99,  10 ≤ x3 ≤ 200,  10 ≤ x4 ≤ 200
The comparison results between MROA and the other optimization algorithms on the pressure vessel design are shown in Table 13. MROA attains the minimum cost of 5735.8501 when Ts is 0.742578894, Th is 0.368384814, R is 40.33385234, and L is 199.802664, so MROA is superior to the other algorithms on the design of pressure vessels.

5.4. Speed Reducer Design Problem

In a mechanical system, the speed reducer is one of the critical parts of the gearbox and can be used in many applications. The speed reducer design problem is a minimization problem whose main objective is to find the minimum weight of the reducer subject to four design constraints: the bending stress of the gear teeth, the surface stress, the transverse deflection of the shafts, and the stresses in the shafts. The problem has seven variables: the face width x1, the gear module x2, the number of teeth on the pinion x3, the length of the first shaft between bearings x4, the length of the second shaft between bearings x5, the diameter of the first shaft x6, and the diameter of the second shaft x7. The specific model is shown in Figure 14.
The mathematical model of reducer design is as follows:
Objective function:
f(x) = 0.7854 × x1 × x2^2 × (3.3333 × x3^2 + 14.9334 × x3 − 43.0934) − 1.508 × x1 × (x6^2 + x7^2) + 7.4777 × (x6^3 + x7^3) + 0.7854 × (x4 × x6^2 + x5 × x7^2)
Subject to:
g1(x) = 27/(x1 × x2^2 × x3) − 1 ≤ 0
g2(x) = 397.5/(x1 × x2^2 × x3^2) − 1 ≤ 0
g3(x) = (1.93 × x4^3)/(x2 × x3 × x6^4) − 1 ≤ 0
g4(x) = (1.93 × x5^3)/(x2 × x3 × x7^4) − 1 ≤ 0
g5(x) = (1/(110 × x6^3)) × sqrt((745 × x4/(x2 × x3))^2 + 16.9 × 10^6) − 1 ≤ 0
g6(x) = (1/(85 × x7^3)) × sqrt((745 × x5/(x2 × x3))^2 + 157.5 × 10^6) − 1 ≤ 0
g7(x) = (x2 × x3)/40 − 1 ≤ 0
g8(x) = (5 × x2)/x1 − 1 ≤ 0
g9(x) = x1/(12 × x2) − 1 ≤ 0
g10(x) = (1.5 × x6 + 1.9)/x4 − 1 ≤ 0
g11(x) = (1.1 × x7 + 1.9)/x5 − 1 ≤ 0
Boundaries:
2.6 ≤ x1 ≤ 3.6,  0.7 ≤ x2 ≤ 0.8,  17 ≤ x3 ≤ 28,  7.3 ≤ x4 ≤ 8.3,  7.3 ≤ x5 ≤ 8.3,  2.9 ≤ x6 ≤ 3.9,  5.0 ≤ x7 ≤ 5.5
It can be seen from Table 14 that MROA obtains the minimum reducer weight of 2995.437447 when x = [3.497571, 0.7, 17, 7.3, 7.8, 3.350057, 5.285540], which is superior to the other comparison algorithms.

5.5. Multiple Disc Clutch Brake Problem

The purpose of the multiple disc clutch brake problem is to find the values of five variables that minimize the mass of the multiple disc clutch brake under eight constraints. The five variables are the inner radius ri, the outer radius ro, the disc thickness t, the actuating force F, and the number of friction surfaces Z. The specific model is shown in Figure 15.
The mathematical model of multiple disc clutch brake design is as follows:
Consider:
x = [x1, x2, x3, x4, x5] = [ri, ro, t, F, Z]
Objective function:
f(x) = π × (ro^2 − ri^2) × t × (Z + 1) × ρ,  where ρ = 0.0000078
Subject to:
g1(x) = ro − ri − Δr ≥ 0
g2(x) = l_max − (Z + 1) × (t + δ) ≥ 0
g3(x) = P_max − P_rz ≥ 0
g4(x) = P_max × v_sr,max − P_rz × v_sr ≥ 0
g5(x) = v_sr,max − v_sr ≥ 0
g6(x) = T_max − T ≥ 0
g7(x) = M_h − s × M_s ≥ 0
g8(x) = T ≥ 0
Variable range:
60 ≤ x1 ≤ 80,  90 ≤ x2 ≤ 110,  1 ≤ x3 ≤ 3,  600 ≤ x4 ≤ 1000,  2 ≤ x5 ≤ 9
Other parameters:
M_h = (2/3) × μ F Z × (ro^3 − ri^3)/(ro^2 − ri^2),  P_rz = F/(π × (ro^2 − ri^2)),
v_sr = (2 π n (ro^3 − ri^3))/(90 × (ro^2 − ri^2)),  T = (I_z × π × n)/(30 × (M_h + M_f)),
Δr = 20 mm,  I_z = 55 kg·mm^2,  P_max = 1 MPa,  F_max = 1000 N,
T_max = 15 s,  μ = 0.5,  s = 1.5,  M_s = 40 N·m,  M_f = 3 N·m,
n = 250 rpm,  v_sr,max = 10 m/s,  l_max = 30 mm
Table 15 shows the solutions of different algorithms for the design of multiple disc clutch brakes. MROA obtains the minimum mass of 0.235242458 when the inner radius is 69.9999995, the outer radius is 90, the disc thickness is 1, the actuating force is 600, and the number of friction surfaces is 2, which is clearly better than the other comparison algorithms.

6. Conclusions

To improve the performance of ROA, and starting from the remora's reliance on its host to find food, a new host-switching mechanism is proposed in this paper. This mechanism judges whether it is necessary to switch hosts by evaluating the positions around the host, which is consistent with the remora's habits. At the same time, a joint opposite selection strategy and a restart strategy are introduced to enhance the global performance of the algorithm. MROA is tested on 23 standard benchmark functions and the CEC2020 test suite; the experiments show that MROA performs better than ROA and other optimization algorithms and is more robust. In addition, the significant differences between MROA and the other algorithms are verified by the Wilcoxon rank-sum test, which further supports the effectiveness of MROA. Finally, the engineering applicability of MROA is verified by solving five classical engineering problems. In future work, we plan to apply the modified remora optimization algorithm to more practical problems, such as multithreshold image segmentation and feature selection, and to propose a multiobjective version to solve other engineering problems.

Author Contributions

Conceptualization, C.W. and H.J.; methodology, C.W. and H.J.; software, C.W. and D.W.; validation, C.W., H.J. and H.R.; formal analysis, C.W. and D.W.; investigation, C.W. and H.J.; resources, Q.L. and S.L.; data curation, Q.L. and L.A.; writing—original draft preparation, C.W. and H.J.; writing—review and editing, H.J., L.A. and D.W.; visualization, H.J., H.R. and D.W.; supervision, D.W. and H.J.; funding acquisition, H.J. and L.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by Sanming University National Natural Science Foundation Breeding Project (PYT2105) and Fujian Natural Science Foundation Project (2021J011128). The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4320277DSR09).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the support of Fujian Key Lab of Agriculture IOT Application and IOT Application Engineering Research Center of Fujian Province Colleges and Universities, as well as the anonymous reviewers, for helping us improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  2. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef] [Green Version]
  3. Ferreira, C. Gene Expression Programming: A New Adaptive Algorithm for Solving Problems. Complex Syst. 2001, 13, 87–129. [Google Scholar]
  4. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Design. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  5. Zhang, Y.; Jin, Z. Group teaching optimization algorithm: A novel metaheuristic method for solving global optimization problems. Expert Syst. Appl. 2020, 148, 113246. [Google Scholar] [CrossRef]
  6. Zong, W.G.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68. [Google Scholar] [CrossRef]
  7. Cheng, S.; Qin, Q.; Chen, J.; Shi, Y. Brain storm optimization algorithm. Artif. Intell. Rev. 2016, 46, 445–458. [Google Scholar] [CrossRef]
  8. Fearn, T. Particle swarm optimisation. NIR News 2014, 25, 27. [Google Scholar] [CrossRef]
  9. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  10. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl. Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  11. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  12. Mirjalili, S. SCA: A Sine Cosine Algorithm for Solving Optimization Problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  13. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  14. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-verse optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513. [Google Scholar] [CrossRef]
  15. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef] [Green Version]
  16. Zhang, Y.; Chi, A. Group teaching optimization algorithm with information sharing for numerical optimization and engineering optimization. J. Intell. Manuf. 2021. [Google Scholar] [CrossRef]
  17. Sun, K.; Jia, H.; Li, Y.; Jiang, Z. Hybrid improved slime mould algorithm with adaptive β hill climbing for numerical optimization. J. Intell. Fuzzy Syst. 2020, 40, 1–13. [Google Scholar] [CrossRef]
  18. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intel. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  19. Shah-Hosseini, H. The intelligent water drops algorithm: A nature-inspired swarm-based optimization algorithm. Int. J. Bio-Inspir. Comput. 2009, 1, 71–79. [Google Scholar]
  20. Weyland, D. A rigorous analysis of the harmony search algorithm: How the research community can be misled by a “novel” methodology. Int. J. Appl. Metaheur. Comput. 2010, 1, 50–60. [Google Scholar] [CrossRef] [Green Version]
  21. Camacho-Villalón, C.L.; Dorigo, M.; Stützle, T. The intelligent water drops algorithm: Why it cannot be considered a novel algorithm. Swarm Intell. 2019, 13, 173–192. [Google Scholar] [CrossRef]
  22. Sörensen, K. Metaheuristics—the metaphor exposed. Int. Trans. Oper. Res. 2015, 22, 3–18. [Google Scholar] [CrossRef]
  23. Jia, H.; Peng, X.; Lang, C. Remora optimization algorithm. Expert Syst. Appl. 2021, 185, 115665. [Google Scholar] [CrossRef]
  24. Liu, Q.; Li, N.; Jia, H.; Qi, Q.; Abualigah, L. Modified Remora Optimization Algorithm for Global Optimization and Multilevel Thresholding Image Segmentation. Mathematics 2022, 10, 1014. [Google Scholar] [CrossRef]
  25. Zheng, R.; Jia, H.; Abualigah, L.; Wang, S.; Wu, D. An improved remora optimization algorithm with autonomous foraging mechanism for global optimization problems. Math. Biosci. Eng. 2022, 19, 3994–4037. [Google Scholar] [CrossRef]
  26. Wang, S.; Hussien, A.G.; Jia, H.; Abualigah, L.; Zheng, R. Enhanced Remora Optimization Algorithm for Solving Constrained Engineering Optimization Problems. Mathematics 2022, 10, 1696. [Google Scholar] [CrossRef]
  27. Arini, F.Y.; Chiewchanwattana, S.; Soomlek, C.; Sunat, K. Joint Opposite Selection (JOS): A premiere joint of selective leading opposition and dynamic opposite enhanced Harris’ hawks optimization for solving single-objective problems. Expert Syst. Appl. 2022, 188, 116001. [Google Scholar] [CrossRef]
  28. Zhang, H.; Wang, Z.; Chen, W.; Heidari, A.A.; Wang, M.; Zhao, X.; Liang, G.; Chen, H.; Zhang, X. Ensemble mutation-driven salp swarm algorithm with restart mechanism: Framework and fundamental analysis. Expert Syst. Appl. 2021, 165, 113897. [Google Scholar] [CrossRef]
  29. Xu, Y.; Yang, Z.; Li, X.; Kang, H.; Yang, X. Dynamic opposite learning enhanced teaching–learning-based optimization. Knowl. Based Syst. 2020, 188, 104966. [Google Scholar] [CrossRef]
  30. Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective opposition based grey wolf optimization. Expert Syst. Appl. 2020, 151, 113389. [Google Scholar] [CrossRef]
  31. Mandal, B.; Roy, P.K. Optimal reactive power dispatch using quasi-oppositional teaching learning based optimization. Int. J. Electr. Power 2013, 53, 123–134. [Google Scholar] [CrossRef]
  32. Fan, Q.; Chen, Z.; Xia, Z. A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems. Soft Comput. 2020, 24, 14825–14843. [Google Scholar] [CrossRef]
  33. Dhiman, G.; Kaur, A. STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng. Appl. Artif. Intel. 2019, 82, 148–174. [Google Scholar] [CrossRef]
  34. Alsattar, H.A.; Zaidan, A.A.; Zaidan, B.B. Novel meta-heuristic bald eagle search optimisation algorithm. Artif. Intel. Rev. 2020, 53, 2237–2264. [Google Scholar] [CrossRef]
  35. Farnad, B.; Jafarian, A. A new nature-inspired hybrid algorithm with a penalty method to solve constrained problem. Int. J. Comput. Methods 2018, 15, 1850069. [Google Scholar] [CrossRef]
  36. Jafarian, A.; Farnad, B. Hybrid PSOS Algorithm for Continuous Optimization. Int. J. Ind. Math. 2019, 11, 143–156. [Google Scholar]
  37. Hussien, A.G.; Amin, M.; Abd El Aziz, M. A comprehensive review of moth-flame optimisation: Variants, hybrids, and applications. J. Exp. Theor. Artif. Intell. 2020, 32, 705–725. [Google Scholar] [CrossRef]
  38. Kaveh, A.; Mahdavi, V. Colliding bodies optimization: A novel meta-heuristic method. Comput. Struct. 2014, 139, 18–27. [Google Scholar] [CrossRef]
  39. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579. [Google Scholar] [CrossRef]
  40. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  41. He, Q.; Wang, L. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intel. 2007, 20, 89–99. [Google Scholar] [CrossRef]
  42. Kaveh, A.; Khayatazad, M. A new meta-heuristic method: Ray optimization. Comput. Struct. 2012, 112, 283–294. [Google Scholar] [CrossRef]
  43. Hussien, A.G.; Amin, M. A self-adaptive harris hawks optimization algorithm with opposition-based learning and chaotic local search strategy for global optimization and feature selection. Int. J. Mach. Learn. Cybern. 2022, 13, 309–336. [Google Scholar] [CrossRef]
  44. Huang, F.-z.; Wang, L.; He, Q. An effective co-evolutionary differential evolution for constrained optimization. Appl. Math. Comput. 2007, 186, 340–356. [Google Scholar] [CrossRef]
  45. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  46. Kaveh, A.; Farhoudi, N. A new optimization method: Dolphin echolocation. Adv. Eng. Softw. 2013, 59, 53–70. [Google Scholar] [CrossRef]
  47. Dhiman, G.; Kumar, V. Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 2017, 114, 48–70. [Google Scholar] [CrossRef]
  48. Kaveh, A.; Talatahari, S. An improved ant colony optimization for constrained engineering design problems. Eng. Comput. 2010, 27, 155–182. [Google Scholar] [CrossRef]
  49. Efr’en, M.; Carlos, C. An empirical study about the usefulness of evolution strategies to solve constrained optimization problem. Int. J. Gen. Syst. 2008, 37, 443–473. [Google Scholar]
  50. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  51. Gandomi, A.H.; Yang, X.; Alavi, A.H.; Talatahari, S. Bat algorithm for constrained optimization tasks. Neural Comput. Appl. 2013, 22, 1239–1255. [Google Scholar] [CrossRef]
  52. He, Q.; Wang, L. A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Appl. Math. Comput. 2007, 186, 1407–1422. [Google Scholar] [CrossRef]
  53. Kaveh, A.; Talatahari, S. A novel heuristic optimization method: Charged system search. Acta Mech. 2010, 213, 267–289. [Google Scholar] [CrossRef]
  54. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A Nature-inspired Metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  55. Baykasoglu, A.; Akpinar, S. Weighted superposition attraction (wsa): A swarm intelligence algorithm for optimization problems–part2: Constrained optimization. Appl. Soft Comput. 2015, 37, 396–415. [Google Scholar] [CrossRef]
  56. Czerniak, J.M.; Zarzycki, H.; Ewald, D. Aao as a new strategy in modeling and simulation of constructional problems optimization. Simul. Model. Pract. Theory 2017, 76, 22–33. [Google Scholar] [CrossRef]
  57. Gandomi, A.H.; Yang, X.-S.; Alavi, A.H. Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Eng. Comput. 2013, 29, 17–35. [Google Scholar] [CrossRef]
  58. Baykasoglu, A.; Ozsoydan, F.B. Adaptive firefly algorithm with chaos for mechanical design optimization problems. Appl. Soft Comput. 2015, 36, 152–164. [Google Scholar] [CrossRef]
  59. Abualigah, L.; Elaziz, M.A.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2021, 191, 116158. [Google Scholar] [CrossRef]
  60. Kamboj, V.K.; Nandi, A.; Bhadoria, A.; Sehgal, S. An intensify harris hawks optimizer for numerical and engineering optimization problems. Appl. Soft Comput. 2020, 89, 106018. [Google Scholar] [CrossRef]
  61. Eskandar, H.; Sadollah, A.; Bahreininejad, A.; Hamdi, M. Water cycle algorithm–a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 2012, 110, 151–166. [Google Scholar] [CrossRef]
  62. Sayed, G.I.; Darwish, A.; Hassanien, A.E. A new chaotic multi-verse optimization algorithm for solving engineering optimization problems. J. Exp. Theor. Artif. Intell. 2018, 30, 293–317. [Google Scholar] [CrossRef]
Figure 1. The different states of ROA.
Figure 2. Two-dimensional schematic diagram of selective leading opposition.
Figure 3. The flowchart of the MROA.
Figure 4. Quantitative measurement effect charts of functions (F1, F2, F3, F5, F7, F12, F13, F15, F16).
Figure 5. The convergence curves for the optimization algorithms on test functions (F1–F13) with dim = 30.
Figure 6. The convergence curves for the optimization algorithms on test functions (F1–F13) with dim = 100.
Figure 7. The convergence curves for the optimization algorithms on test functions (F1–F13) with dim = 500.
Figure 8. The convergence curves for the optimization algorithms on test functions (F1–F13) with dim = 1000.
Figure 9. The convergence curves for the optimization algorithms on test functions (F14–F23).
Figure 10. The convergence curves for the optimization algorithms on test functions (CEC 1–CEC 10).
Figure 11. Model of welded beam design.
Figure 12. Model of tension/compression spring design.
Figure 13. Model of pressure vessel design.
Figure 14. Model of speed reducer design.
Figure 15. Model of multiple disc clutch brake.
Table 1. Parameter setting of each algorithm.
Algorithm | Parameters Setting
MROA | C = 0.1; β = 0.2; Jr = 0.25; limit = log t
ROA [23] | C = 0.1
WOA [9] | a1 = [2, 0]; a2 = [−2, −1]; b = 1
STOA [33] | Cf = 2; u = 1; v = 1
SCA [12] | α = 2
AOA [11] | MOP_Max = 1; MOP_Min = 0.2; A = 5; Mu = 0.499
GTOA [5] | -
BES [34] | α = [1.5, 2.0]; r = [0, 1]
HPGSOS [35] | Pm = 0.2; Pc = 0.7; c1, c2 = 2; w = 1
PSOS [36] | c1, c2 = 2; w = 1
Table 2. Details of the 23 standard benchmark functions (dim indicates the dimension).

| F | dim | Range | fmin |
| --- | --- | --- | --- |
| $F_1(x)=\sum_{i=1}^{n}x_i^2$ | 30/100/500/1000 | [−100, 100] | 0 |
| $F_2(x)=\sum_{i=1}^{n}\lvert x_i\rvert+\prod_{i=1}^{n}\lvert x_i\rvert$ | 30/100/500/1000 | [−10, 10] | 0 |
| $F_3(x)=\sum_{i=1}^{n}\bigl(\sum_{j=1}^{i}x_j\bigr)^2$ | 30/100/500/1000 | [−100, 100] | 0 |
| $F_4(x)=\max\{\lvert x_i\rvert,\ 1\le i\le n\}$ | 30/100/500/1000 | [−100, 100] | 0 |
| $F_5(x)=\sum_{i=1}^{n-1}[100(x_{i+1}-x_i^2)^2+(x_i-1)^2]$ | 30/100/500/1000 | [−30, 30] | 0 |
| $F_6(x)=\sum_{i=1}^{n}(x_i+0.5)^2$ | 30/100/500/1000 | [−100, 100] | 0 |
| $F_7(x)=\sum_{i=1}^{n}ix_i^4+\mathrm{random}[0,1)$ | 30/100/500/1000 | [−1.28, 1.28] | 0 |
| $F_8(x)=\sum_{i=1}^{n}-x_i\sin(\sqrt{\lvert x_i\rvert})$ | 30/100/500/1000 | [−500, 500] | −418.9829 × dim |
| $F_9(x)=\sum_{i=1}^{n}[x_i^2-10\cos(2\pi x_i)+10]$ | 30/100/500/1000 | [−5.12, 5.12] | 0 |
| $F_{10}(x)=-20\exp\bigl(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}x_i^2}\bigr)-\exp\bigl(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\bigr)+20+e$ | 30/100/500/1000 | [−32, 32] | 0 |
| $F_{11}(x)=\tfrac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\bigl(\tfrac{x_i}{\sqrt{i}}\bigr)+1$ | 30/100/500/1000 | [−600, 600] | 0 |
| $F_{12}(x)=\tfrac{\pi}{n}\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2[1+10\sin^2(\pi y_{i+1})]+(y_n-1)^2\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\tfrac{x_i+1}{4}$ and $u(x_i,a,k,m)=k(x_i-a)^m$ if $x_i>a$; $0$ if $-a<x_i<a$; $k(-x_i-a)^m$ if $x_i<-a$ | 30/100/500/1000 | [−50, 50] | 0 |
| $F_{13}(x)=0.1\bigl(\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2[1+\sin^2(3\pi x_{i+1})]+(x_n-1)^2[1+\sin^2(2\pi x_n)]\bigr)+\sum_{i=1}^{n}u(x_i,5,100,4)$ | 30/100/500/1000 | [−50, 50] | 0 |
| $F_{14}(x)=\bigl(\tfrac{1}{500}+\sum_{j=1}^{25}\tfrac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\bigr)^{-1}$ | 2 | [−65, 65] | 1 |
| $F_{15}(x)=\sum_{i=1}^{11}\bigl[a_i-\tfrac{x_1(b_i^2+b_ix_2)}{b_i^2+b_ix_3+x_4}\bigr]^2$ | 4 | [−5, 5] | 0.00030 |
| $F_{16}(x)=4x_1^2-2.1x_1^4+\tfrac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$ | 2 | [−5, 5] | −1.0316 |
| $F_{17}(x)=\bigl(x_2-\tfrac{5.1}{4\pi^2}x_1^2+\tfrac{5}{\pi}x_1-6\bigr)^2+10\bigl(1-\tfrac{1}{8\pi}\bigr)\cos x_1+10$ | 2 | [−5, 5] | 0.398 |
| $F_{18}(x)=[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)]\times[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)]$ | 2 | [−2, 2] | 3 |
| $F_{19}(x)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\bigr)$ | 3 | [−1, 2] | −3.86 |
| $F_{20}(x)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\bigr)$ | 6 | [0, 1] | −3.32 |
| $F_{21}(x)=-\sum_{i=1}^{5}[(X-a_i)(X-a_i)^T+c_i]^{-1}$ | 4 | [0, 10] | −10.1532 |
| $F_{22}(x)=-\sum_{i=1}^{7}[(X-a_i)(X-a_i)^T+c_i]^{-1}$ | 4 | [0, 10] | −10.4028 |
| $F_{23}(x)=-\sum_{i=1}^{10}[(X-a_i)(X-a_i)^T+c_i]^{-1}$ | 4 | [0, 10] | −10.5363 |
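To make the definitions in Table 2 concrete, here is a minimal sketch implementing three representative benchmarks (the sphere F1, Rastrigin F9, and Ackley F10) in plain NumPy; the function names are ours, chosen for illustration.

```python
import numpy as np

def f1_sphere(x):
    """F1: sum of squares; global minimum 0 at the origin."""
    return np.sum(x ** 2)

def f9_rastrigin(x):
    """F9: highly multimodal; global minimum 0 at the origin."""
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def f10_ackley(x):
    """F10: nearly flat outer region with a deep central basin; minimum 0."""
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

x = np.zeros(30)  # dim = 30, as in the low-dimensional tests
print(f1_sphere(x), f9_rastrigin(x), f10_ackley(x))  # all ~0 at the optimum
```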
Table 3. Results of the MROA and other metaheuristic algorithms on test functions (F1–F23) in low dimensions (run 100 times; bold indicates the best results).

| F | dim | Metric | MROA | ROA | WOA | STOA | SCA | AOA | GTOA | BES | HPGSOS | PSOS |
F130Best001.39 × 10−871.41 × 10−98.31 × 10−43.03 × 10−1721.82 × 10−1601.43 × 10−2773.62 × 10−139
Mean01.27 × 10−3113.24 × 10−717.66 × 10−71.44 × 1015.23 × 10−141.32 × 10−603.86 × 10−2681.08 × 10−135
Std002.81 × 10−701.94 × 10−63.83 × 1015.12 × 10−138.35 × 10−6002.57 × 10−135
100Best003.69 × 10−861.95 × 10−51.35 × 1031.51 × 10−32.27 × 10−2109 × 10−2611.36 × 10−125
Mean06.32 × 10−3078.54 × 10−711.12 × 10−21.15 × 1042.51 × 10−21.97 × 10−65.9 × 10−3083.94 × 10−2556.14 × 10−123
Std008.3 × 10−702.41 × 10−26.85 × 1031.03 × 10−27.31 × 10−6001.54 × 10−122
F230Best01.56 × 10−1811.32 × 10−575.28 × 10−72.13 × 10−405.15 × 10−81.98 × 10−2327.32 × 10−1391.75 × 10−70
Mean2.33 × 10−2421.15 × 10−1537.7 × 10−491.2 × 10−52.54 × 10−204.52 × 10−42.96 × 10−1482.88 × 10−1354.14 × 10−69
Std01.15 × 10−1527.55 × 10−481.43 × 10−54.87 × 10−201.04 × 10−32.96 × 10−1472.3 × 10−1346.49 × 10−69
100Best03.01 × 10−1852.94 × 10−591.93 × 10−42.35 × 10−11.34 × 10−1236.15 × 10−85.58 × 10−2323.32 × 10−1323.76 × 10−65
Mean9.21 × 10−2392.99 × 10−1611.05 × 10−492.58 × 10−36.442.68 × 10−449.88 × 10−45.10 × 10−1621.15 × 10−1285.35 × 10−64
Std02.81 × 10−1607.92 × 10−492.03 × 10−35.062.68 × 10−431.78 × 10−35.1 × 10−1613.25 × 10−1285.91 × 10−64
F330Best001.23 × 1041.71 × 10−44.59 × 1029.49 × 10−1637.5 × 10−1802.84 × 10−1457.98 × 10−50
Mean04.06 × 10−2814.5 × 1048.09 × 10−28.5 × 1038.7 × 10−31.31 × 10−41.25 × 1028.42 × 10−1289.25 × 10−44
Std001.42 × 1042.52 × 10−15.47 × 1032.12 × 10−24.39 × 10−41.22 × 1038.42 × 10−1278.53 × 10−43
100Best07.68 × 10−3205.72 × 1051.15 × 1028.5 × 1041.99 × 10−12.08 × 10−1701.88 × 10−1275.43 × 10−34
Mean01.35 × 10−2721.05 × 1061.95 × 1032.55 × 1051.43 × 1033.82 × 10−355.49 × 10−1102.96 × 10−27
Std002.56 × 1051.85 × 1036.41 × 1041.43 × 1041.99 × 10−249.75.19 × 10−1091.5 × 10−26
F430Best01.56 × 10−1821.23 × 10−14.74 × 10−38.61.69 × 10−662.93 × 10−92.96 × 10−2459.43 × 10−1247.39 × 10−57
Mean6.09 × 10−2336.57 × 10−15346.45.19 × 10−235.52.69 × 10−22.3 × 10−41.11 × 10−1681.35 × 10−1205.39 × 10−55
Std06.57 × 10−15229.38.6 × 10−210.82.01 × 10−25.02 × 10−404.42 × 10−1206.46 × 10−55
100Best04.22 × 10−1803.9110.679.37.00 × 10−28.03 × 10−105.5 × 10−2381.12 × 10−1164.6 × 10−49
Mean3.32 × 10−2493.57 × 10−15676.970.989.69.22 × 10−21.68 × 10−45.47 × 10−1552.63 × 10−1142.72 × 10−47
Std03.11 × 10−15522.816.62.861.25 × 10−23.41 × 10−45.45 × 10−1547.11 × 10−1145.16 × 10−47
F530Best6.35 × 10−926.227.127.286.527.828.91.82 × 10−1024
Mean6.9326.32828.41.13 × 10528.528.9251.125.2
Std11.24.355.06 × 10−14.72 × 10−17.04 × 1053.57 × 10−13.32 × 10−29.365.426.6 × 10−1
100Best2.05 × 10−796.597.599.32.41 × 10798.598.91.61 × 10−1094
Mean23.197.698.21.07 × 1021.21 × 10898.998.9854.93 × 10−3095.1
Std41.24.96 × 10−12.3 × 10−110.16.29 × 1071.07 × 10−13.75 × 10−232.74.93 × 10−297.5 × 10−1
F630Best4.77 × 10−121.82 × 10−21.01 × 10−11.94.722.563.981.83 × 10−400
Mean2.95 × 10−51.11 × 10−14.73 × 10−12.7128.13.225.721.8905.08 × 10−33
Std3.64 × 10−51.25 × 10−12.73 × 10−14.96 × 10−169.53.02 × 10−18.16 × 10−13.1707.75 × 10−33
100Best8.53 × 10−75.88 × 10−12.0116.51.44 × 10316.820.82.59 × 10−401.26 × 10−13
Mean1.11 × 10−11.754.3217.91.25 × 10418.323.25.3401.38 × 10−11
Std1.69 × 10−17.57 × 10−11.318.3 × 10−19.61 × 1035.56 × 10−18.85 × 10−19.8903.91 × 10−11
F730Best3.78 × 10−73.48 × 10−65.17 × 10−51.21 × 10−36.79 × 10−31.19 × 10−64.53 × 10−58.85 × 10−44.56 × 10−51.11 × 10−4
Mean7.17 × 10−51.67 × 10−44.29 × 10−36.42 × 10−31.43 × 10−17.71 × 10−53.78 × 10−46.36 × 10−31.52 × 10−45.36 × 10−4
Std5.69 × 10−51.99 × 10−45.63 × 10−33.78 × 10−31.97 × 10−17.26 × 10−52.59 × 10−43.91 × 10−37.98 × 10−52.68 × 10−4
100Best6.48 × 10−78.82 × 10−67.46 × 10−56.9 × 10−319.52.91 × 10−66.85 × 10−57.3 × 10−44.26 × 10−52.13 × 10−4
Mean7.24 × 10−51.68 × 10−44.59 × 10−33.02 × 10−21.44 × 1027.89 × 10−54.53 × 10−46.07 × 10−31.76 × 10−47.02 × 10−4
Std5.89 × 10−51.84 × 10−45.93 × 10−31.69 × 10−277.38.89 × 10−53.68 × 10−43.44 × 10−39.44 × 10−52.84 × 10−4
F830Best−1.26 × 104−1.26 × 104−1.26 × 104−6.28 × 103−4.22 × 103−6.06 × 103−6.54 × 103−1.26 × 104−1.26 × 104−1.23 × 104
Mean−1.26 × 104−1.23 × 104−9.92 × 103−5.06 × 103−3.75 × 103−5.22 × 103−5.09 × 103−9.56 × 103−1.26 × 104−1.10 × 104
Std3.79 × 10−46.94 × 1021.87 × 1034.65 × 1023.14 × 1024.56 × 1027.09 × 1021.76 × 1031.83 × 10−126.94 × 102
100Best−4.19 × 104−4.19 × 104−4.19 × 104−1.33 × 104−8.1 × 103−1.12 × 104−1.27 × 104−4.19 × 104−4.19 × 104−3.73 × 104
Mean−4.19 × 104−4.12 × 104−3.42 × 104−1.03 × 104−6.87 × 103−9.94 × 103−9.79 × 103−3.04 × 104−4.19 × 104−2.96 × 104
Std6.29 × 10−31.76 × 1036.06 × 1031.41 × 1036.02 × 1027.17 × 1021.48 × 1035.9 × 1032.61 × 10−114.54 × 103
F930Best0004.41 × 10−95.12 × 10−300000
Mean005.68 × 10−166.9237.408.02 × 10−67.3200
Std005.68 × 10−156.9435.402.2 × 10−536.300
100Best0007.72 × 10−679.800000
Mean003.41 × 10−1514.82.64 × 10202.26 × 10−5000
Std002.53 × 10−1419.91.17 × 10201.38 × 10−4000
F1030Best8.88 × 10−168.88 × 10−168.88 × 10−16201.61 × 10−28.88 × 10−161.3 × 10−118.88 × 10−168.88 × 10−168.88 × 10−16
Mean8.88 × 10−168.88 × 10−164.55 × 10−152013.78.88 × 10−161.77 × 10−48.88 × 10−163.34 × 10−154.16 × 10−15
Std002.50 × 10−151.55 × 10−38.804.05 × 10−401.65 × 10−159.69 × 10−16
100Best8.88 × 10−168.88 × 10−168.88 × 10−16206.638.88 × 10−161.41 × 10−118.88 × 10−168.88 × 10−168.88 × 10−16
Mean8.88 × 10−168.88 × 10−164.23 × 10−152018.63.55 × 10−41.17 × 10−49.24 × 10−164.19 × 10−154.37 × 10−15
Std002.41 × 10−152.96 × 10−44.348.68 × 10−42.16 × 10−43.55 × 10−169.11 × 10−165 × 10−16
F1130Best0007.65 × 10−93.07 × 10−33.33 × 10−40000
Mean007.36 × 10−33.19 × 10−29.81 × 10−11.84 × 10−12.9 × 10−6000
Std003.54 × 10−23.58 × 10−24.61 × 10−11.45 × 10−11.25 × 10−5000
100Best0002.24 × 10−67.751.99 × 1020000
Mean007.93 × 10−33.67 × 10−295.86.33 × 1027.95 × 10−6000
Std004.63 × 10−25.8 × 10−2661.98 × 1022.88 × 10−5000
F1230Best6.45 × 10−132.34 × 10−36.21 × 10−31.03 × 10−19.69 × 10−14.32 × 10−12.7 × 10−18.68 × 10−61.57 × 10−321.57 × 10−32
Mean1.04 × 10−61.06 × 10−27.2 × 10−22.78 × 10−13.25 × 1055.24 × 10−15.88 × 10−11.75 × 10−11.57 × 10−321.63 × 10−32
Std1.72 × 10−66.81 × 10−34.43 × 10−11.65 × 10−11.65 × 1065.27 × 10−22.3 × 10−14.03 × 10−15.5 × 10−481.43 × 10−33
100Best7.06 × 10−165.64 × 10−31.54 × 10−25.74 × 10−17.48 × 1078.61 × 10−17.63 × 10−14.73 × 10−64.71 × 10−331.67 × 10−15
Mean7.51 × 10−52.58 × 10−25.18 × 10−28.4 × 10−13.59 × 1089.06 × 10−19.9 × 10−11.65 × 10−14.71 × 10−339.79 × 10−14
Std1.25 × 10−41.25 × 10−23.09 × 10−22.93 × 10−11.85 × 1082.37 × 10−21.15 × 10−14.17 × 10−16.88 × 10−494.79 × 10−13
F1330Best3.15 × 10−124.87 × 10−21.19 × 10−11.463.582.622.26.86 × 10−41.35 × 10−321.35 × 10−32
Mean8.92 × 10−42.54 × 10−15.81 × 10−11.921.15 × 1062.832.871.441.35 × 10−322.78 × 10−31
Std3.31 × 10−31.57 × 10−12.84 × 10−13.13 × 10−16.4 × 1061.05 × 10−12.52 × 10−11.465.5 × 10−482.04 × 10−30
100Best9.55 × 10−92.82 × 10−11.29.061.48 × 1089.839.981.33 × 10−41.35 × 10−321.59 × 10−12
Mean5.92 × 10−31.343.0710.76.28 × 1089.969.993.511.35 × 10−322.4
Std1.43 × 10−27.66 × 10−11.171.063.09 × 1086.38 × 10−24.2 × 10−34.645.5 × 10−483.65
F142Best9.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−19.98 × 10−1
Mean9.98 × 10−15.143.592.312.1410.41.243.349.98 × 10−19.98 × 10−1
Std4.92 × 10−1653.662.492.363.88.56 × 10−11.5400
F154Best3.07 × 10−43.08 × 10−43.17 × 10−43.44 × 10−43.7 × 10−43.49 × 10−43.07 × 10−43.47 × 10−43.08 × 10−43.07 × 10−4
Mean3.17 × 10−44.78 × 10−48.58 × 10−42.91 × 10−31.13 × 10−31.95 × 10−23.35 × 10−37.63 × 10−33.34 × 10−43.56 × 10−4
Std9.16 × 10−52.71 × 10−41.33 × 10−35.86 × 10−33.66 × 10−42.76 × 10−26.8 × 10−31.05 × 10−21.33 × 10−42.01 × 10−4
F162Best−1.03−1.03−1.03−1.03−1.03−1.03−1.03−1.03−1.03−1.03
Mean−1.03−1.03−1.03−1.03−1.03−1.03−1.03−9.56 × 10−1−1.03−1.03
Std6.51 × 10−166.58 × 10−83.89 × 10−93.11 × 10−64.28 × 10−51.43 × 10−71.81 × 10−142.29 × 10−14.27 × 10−76.64 × 10−16
F172Best3.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−1
Mean3.98 × 10−13.98 × 10−13.98 × 10−13.98 × 10−14 × 10−13.98 × 10−13.98 × 10−16.93 × 10−13.98 × 10−13.98 × 10−1
Std01.15 × 10−52.27 × 10−51.21 × 10−41.9 × 10−31.34 × 10−73.96 × 10−165.36 × 10−11.82 × 10−160
F185Best3333333333
Mean333.543310.83.815.183.273
Std2.65 × 10−151.72 × 10−43.82.42 × 10−44.22 × 10−415.58.15.992.79.82 × 10−16
F193Best−3.86−3.86−3.86−3.86−3.86−3.86−3.86−3.86−3.86−3.86
Mean−3.86−3.86−3.86−3.86−3.85−3.85−3.86−3.64−3.86−3.86
Std1.74 × 10−152.29 × 10−37.08 × 10−35.17 × 10−37.6 × 10−34.41 × 10−39.46 × 10−152.63 × 10−12.16 × 10−152.23 × 10−15
F206Best−3.32−3.32−3.32−3.32−3.32−3.32−3.32−3.32−3.32−3.32
Mean−3.28−3.23−3.21−2.85−2.82−3.04−3.24−2.84−3.3−3.25
Std5.74 × 10−21.43 × 10−11.75 × 10−15.28 × 10−14.27 × 10−11.16 × 10−18.2 × 10−23.34 × 10−14.87 × 10−25.86 × 10−2
F214Best−10.2−10.2−10.2−10.1−5.27−6.77−10.2−10.1−10.2−10.2
Mean−10.2−10.1−8.19−3.48−2.08−3.74−8.08−5.97−10.2−8.83
Std4.79 × 10−153.45 × 10−22.794.161.791.32.742.7201.03 × 10−142.24
F224Best−10.4−10.4−10.4−10.4−6.59−6.72−10.4−10.4−10.4−10.4
Mean−10.4−10.4−7.27−5.26−2.82−3.76−7.69−6.07−10.4−9.39
Std5.13 × 10−152.89 × 10−23.144.451.741.743.213.089.11 × 10−152.1
F234Best−10.5−10.5−10.5−10.5−7.39−7.4−10.5−10.5−10.5−10.5
Mean−10.5−10.4−6.65−5.94−3.92−3.75−7.62−5.83−10.5−9.94
Std4.46 × 10−157.72 × 10−13.534.261.771.593.312.862.8 × 10−151.71
Table 4. Results of the MROA and other metaheuristic algorithms on test functions (F1–F13) in high dimensions (run 30 times; bold indicates the best results).

| F | dim | Metric | MROA | ROA | WOA | STOA | SCA | AOA | GTOA | BES | HPGSOS | PSOS |
F1500Best005.1 × 10−838.92 × 10−17.97 × 1045.66 × 10−16.58 × 10−1602.86 × 10−2551.62 × 10−119
Mean01.03 × 10−3091.79 × 10−719.562.14 × 1056.38 × 10−15.66 × 10−65.22 × 10−3157.48 × 10−2483.55 × 10−116
Std006.88 × 10−717.657.63 × 1044.2 × 10−21.51 × 10−5001.8 × 10−115
1000Best004.65 × 10−8115.51.37 × 1051.571.73 × 10−1401.85 × 10−2543.02 × 10−118
Mean02.02 × 10−3131.46 × 10−6572.34.96 × 1051.721.94 × 10−506.28 × 10−2491.45 × 10−115
Std007.97 × 10−6546.61.76 × 1058.26 × 10−26.68 × 10−5005.15 × 10−115
F2500Best02.45 × 10−1835.07 × 10−571.02 × 10−232.25.14 × 10−132.47 × 10−91.68 × 10−2341.27 × 10−1271.67 × 10−62
Mean9.80 × 10−2374.31 × 10−1601.09 × 10−497.64 × 10−21.04 × 1028.77 × 10−42.54 × 10−37.25 × 10−1741.99 × 10−1241.82 × 10−61
Std02.31 × 10−1595.05 × 10−495.91 × 10−2571.35 × 10−35.59 × 10−301.04 × 10−1232.6 × 10−61
1000Best02.62 × 10−1792.75 × 10−566.96 × 10−2Inf5.13 × 10−37.25 × 10−91.99 × 10−2225.59 × 10−1277.39 × 10−62
Mean5.16 × 10−2401.8 × 10−1583.37 × 10−482.72 × 10−1Inf1.43 × 10−21.15 × 10−29.01 × 10−1732.57 × 10−1241.11 × 10−60
Std09.86 × 10−1581.36 × 10−472.27 × 10−1NaN4.95 × 10−32.22 × 10−206.33 × 10−1241.39 × 10−60
F3500Best04.48 × 10−2991.56 × 1071.71 × 1054.29 × 10613.21.12 × 10−1902.04 × 10−1144.21 × 10−24
Mean01.06 × 10−2572.93 × 1074.9 × 1056.77 × 10631.55.35 × 10−23.09 × 10−13.98 × 10−742.83 × 10−19
Std008.13 × 1061.56 × 1051.41 × 10613.11.35 × 10−11.692.18 × 10−731.14 × 10−18
1000Best06.45 × 10−2926.045 × 1071.25 × 1061.225 × 10760.72.665 × 10−801.145 × 10−1122.035 × 10−22
Mean03.215 × 10−2401.295 × 1082.385 × 1062.655 × 1071.815 × 1045.115 × 10−11.435 × 1031.165 × 10−443.745 × 10−17
Std005.455 × 1077.485 × 1057.285 × 1069.855 × 1042.127.835 × 1036.375 × 10−441.95 × 10−16
F4500Best02.175 × 10−17450.696.9981.625 × 10−12.695 × 10−94.235 × 10−2483.815 × 10−1133.525 × 10−44
Mean4.495 × 10−2342.825 × 10−15790.498.699.11.8 × 10−12.29 × 10−49.23 × 10−1542.03 × 10−1107.94 × 10−43
Std01.43 × 10−15610.67.42 × 10−13.11 × 10−11.23 × 10−25.62 × 10−45.05 × 10−1534.61 × 10−1101.32 × 10−42
1000Best08.44 × 10−17428.19999.31.95 × 10−11.47 × 10−61.73 × 10−2292.04 × 10−1115.36 × 10−43
Mean6.25 × 10−2322.12 × 10−15780.299.599.62.12 × 10−14.52 × 10−42.17 × 10−1842.41 × 10−1096.64 × 10−42
Std08.71 × 10−15719.71.85 × 10−11.14 × 10−19.2 × 10−31.07 × 10−308.79 × 10−1096.42 × 10−42
F5500Best6.72 × 10−94.94 × 1024.96 × 1023.34 × 1031.43 × 1094.99 × 1024.99 × 10240.204.94 × 102
Mean1.01 × 1024.95 × 1024.96 × 1022.13 × 1042.03 × 1094.99 × 1024.99 × 1023.83 × 10204.96 × 102
Std2 × 1023.6 × 10−15.12 × 10−14.58 × 1044.95 × 1081 × 10−14.11 × 10−22.11 × 10207.35 × 10−1
1000Best9.73 × 10−59.89 × 1029.92 × 1022.81 × 1042.12 × 1099.99 × 1029.99 × 1026.3609.96 × 102
Mean4.34 × 1029.9 × 1029.9 × 1021.48 × 1054.35 × 1099.99 × 1029.99 × 1028.98 × 10209.97 × 102
Std4.95 × 1024.52 × 10−18.8 × 10−11.29 × 1051.03 × 1091.02 × 10−13.91 × 10−22.93 × 10205.67 × 10−1
F6500Best2.54 × 10−68.9822.81.17 × 1021.23 × 1051.14 × 1021.22 × 1022.7 × 10−2022.1
Mean2.3517.935.11.28 × 1022.23 × 1051.16 × 1021.23 × 10230.3048.5
Std3.176.9810.112.68.2 × 1041.391.0253.2017.1
1000Best1.45 × 10−310.131.92.49 × 1022.02 × 1052.41 × 1022.45 × 1027.75 × 10−301.69 × 102
Mean10.633.264.42.97 × 1025.3 × 1052.43 × 1022.48 × 10227.201.92 × 102
Std14.315.216.2341.25 × 1051.111.1275.6012.8
F7500Best2.99 × 10−61.16 × 10−51.29 × 10−41.68 × 10−11.14 × 1043.12 × 10−65.49 × 10−51.19 × 10−35.6 × 10−53.66 × 10−4
Mean8.25 × 10−52.27 × 10−43.98 × 10−34.31 × 10−11.49 × 1049.94 × 10−54.97 × 10−45.77 × 10−31.78 × 10−48.83 × 10−4
Std5.64 × 10−52.46 × 10−44.64 × 10−32.32 × 10−13.38 × 1036.76 × 10−55.59 × 10−43.17 × 10−38.93 × 10−53.85 × 10−4
1000Best2.29 × 10−66.63 × 10−62.37 × 10−44.95 × 10−13.54 × 1044.59 × 10−67.19 × 10−57.83 × 10−44.69 × 10−53.89 × 10−4
Mean7.69 × 10−51.71 × 10−44.72 × 10−33.196.68 × 1041.01 × 10−45.79 × 10−44.96 × 10−31.81 × 10−41.05 × 10−3
Std6.5 × 10−51.81 × 10−45.03 × 10−32.251.29 × 1049.81 × 10−54.14 × 10−43.11 × 10−39.47 × 10−55 × 10−4
F8500Best−2.09 × 105−2.09 × 105−2.09 × 105−2.94 × 104−1.91 × 104−2.53 × 104−2.97 × 104−1.59 × 105−2.09 × 105−1.75 × 105
Mean−2.09 × 105−2.02 × 105−1.83 × 105−2.48 × 104−1.55 × 104−2.24 × 104−2.28 × 104−5.87 × 104−2.09 × 105−1.58 × 105
Std7.76 × 1021.41 × 1043.11 × 1042.91 × 1031.2 × 1031.49 × 1033.42 × 1034.68 × 1042.96 × 10−117.93 × 103
1000Best−4.19 × 105−4.19 × 105−4.19 × 105−4.62 × 104−2.84 × 104−3.85 × 104−4.1 × 104−3.36 × 105−4.19 × 105−3.6 × 105
Mean−4.19 × 105−4.11 × 105−3.56 × 105−3.7 × 104−2.23 × 104−3.19 × 104−3.19 × 104−1.28 × 105−4.19 × 105−3.19 × 105
Std4.16 × 10−22.01 × 1045.73 × 1045.62 × 1031.97 × 1032.92 × 1033.88 × 1031.12 × 1051.18 × 10−102.19 × 104
F9500Best0001.585.7 × 10200000
Mean009.09 × 10−1419.11.46 × 1036.81 × 10−62.47 × 10−5000
Std003.66 × 10−1315.85.32 × 1027.31 × 10−69.79 × 10−5000
1000Best0001.48.39 × 1021.46 × 10−50000
Mean001.21 × 10−1325.12.13 × 1036.36 × 10−55.07 × 10−5000
Std006.64 × 10−1319.46.94 × 1021.82 × 10−51.56 × 10−4000
F10500Best8.88 × 10−168.88 × 10−168.88 × 10−16205.737.28 × 10−32.57 × 10−138.88 × 10−164.44 × 10−154.44 × 10−15
Mean8.88 × 10−168.88 × 10−163.97 × 10−152019.17.87 × 10−31.74 × 10−48.88 × 10−164.32 × 10−154.44 × 10−15
Std002.23 × 10−155.9 × 10−54.123.34 × 10−42.84 × 10−406.49 × 10−160
1000Best8.88 × 10−168.88 × 10−168.88 × 10−16209.178.76 × 10−37.86 × 10−98.88 × 10−164.44 × 10−154.44 × 10−15
Mean8.88 × 10−168.88 × 10−163.61 × 10−152019.39.3 × 10−31.61 × 10−41.13 × 10−154.44 × 10−154.44 × 10−15
Std002.41 × 10−152.97 × 10−53.622.85 × 10−43.17 × 10−49.01 × 10−1600
F11500Best0001.72 × 10−16.07 × 1026.49 × 1031.11 × 10−16000
Mean0005.93 × 10−11.77 × 1031.05 × 1041.93 × 10−5000
Std0003.15 × 10−15.94 × 1022.72 × 1036.7 × 10−5000
1000Best0006.27 × 10−16.46 × 1022.71 × 1040000
Mean0001.344.21 × 1032.83 × 1043.9 × 10−6000
Std0003.91 × 10−11.6 × 1034.2 × 1021.04 × 10−5000
F12500Best3.21 × 10−81.12 × 10−23.42 × 10−21.734.82 × 1091.051.071.97 × 10−79.42 × 10−341.16 × 10−2
Mean1.18 × 10−43.5 × 10−29.51 × 10−24.896.74 × 1091.081.143.59 × 10−39.42 × 10−343.16 × 10−2
Std1.66 × 10−41.56 × 10−24.96 × 10−22.258.71 × 1081.16 × 10−23.08 × 10−26.49 × 10−31.74 × 10−492.02 × 10−2
1000Best5.01 × 10−72.65 × 10−34.19 × 10−25.488 × 1091.11.131.06 × 10−54.71 × 10−341.6 × 10−1
Mean1.02 × 10−33.5 × 10−21.29 × 10−18.74 × 10313.31.121.174.25 × 10−24.71 × 10−342.49 × 10−1
Std4.57 × 10−31.88 × 10−26 × 10−22.43 × 1042.72 × 1096.47 × 10−31.2 × 10−22.17 × 10−18.7 × 10509.77 × 10−2
F13500Best1.02 × 10−73.8412.71.06 × 1027.76 × 10950.2501.09 × 1021.35 × 10−3249.5
Mean1.16 × 10−29.2719.16.41 × 10210.150.250221.35 × 10−3249.6
Std1.98 × 10−24.987.892.62 × 1032.28 × 1095.43 × 10−23.28 × 10−324.65.57 × 10−489.31 × 10−2
1000Best4.53 × 10−71.7714.53.77 × 10213.71 × 1021 × 1029.36 × 10−31.35 × 10−3299.6
Mean7.56 × 10−116.133.86.29 × 10422.11.01 × 1021 × 102191.35 × 10−329.97
Std2.429.5511.48.64 × 1044.07 × 1095.74 × 10−24.88 × 10−336.75.57 × 10−488.03 × 10−2
Table 5. The Wilcoxon signed-rank test results between MROA and other metaheuristic algorithms on test functions (F1–F23) in low dimensions (run 100 times; bold indicates statistically significant results).

| F | dim | MROA vs. ROA | MROA vs. WOA | MROA vs. STOA | MROA vs. SCA | MROA vs. AOA | MROA vs. GTOA | MROA vs. BES | MROA vs. HPGSOS | MROA vs. PSOS |
F1307.81 × 10−33.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−1813.9 × 10−183.9 × 10−18
1001.25 × 10−13.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−1813.9 × 10−183.9 × 10−18
F2303.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−185.28 × 10−143.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−18
1003.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−18
F3308.33 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−186.25 × 10−23.9 × 10−183.9 × 10−18
1005.70 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−189.77 × 10−43.9 × 10−183.9 × 10−18
F4303.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−18
1003.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−18
F5303.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−182.26 × 10−161.85 × 10−145.03 × 10−17
1004.27 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−181.2 × 10−153.9 × 10−183.41 × 10−9
F6303.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−18
1003.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−181.19 × 10−83.9 × 10−183.9 × 10−18
F7301.2 × 10−55.27 × 10−183.9 × 10−183.9 × 10−185.29 × 10−12.08 × 10−173.9 × 10−184.16 × 10−124.27 × 10−18
1004.83 × 10−48.02 × 10−183.9 × 10−183.9 × 10−185.76 × 10−34.92 × 10−153.9 × 10−189.55 × 10−94.02 × 10−18
F8305.27 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−181.22 × 10−173.9 × 10−18
1005.43 × 10−184.02 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−185.7 × 10−183.9 × 10−18
F93015 × 10−13.9 × 10−183.9 × 10−1815.7 × 10−182.5 × 10−111
100113.9 × 10−183.9 × 10−1813.9 × 10−18111
F103019.87 × 10−153.9 × 10−183.9 × 10−1813.9 × 10−185 × 10−12.84 × 10−181.44 × 10−21
10012.01 × 10−173.9 × 10−183.9 × 10−181.65 × 10−83.9 × 10−1811.15 × 10−222.53 × 10−23
F113017.81 × 10−33.9 × 10−183.9 × 10−183.9 × 10−185.7 × 10−18111
10011.25 × 10−13.9 × 10−183.9 × 10−183.9 × 10−185.7 × 10−18111
F12303.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−184.67 × 10−183.9 × 10−183.9 × 10−18
1003.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−181.27 × 10−153.9 × 10−183.9 × 10−18
F13303.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−181.22 × 10−173.9 × 10−183.9 × 10−18
1003.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−184.22 × 10−173.9 × 10−181.71 × 10−3
F1423.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−181.71 × 10−53.9 × 10−182.62 × 10−142.62 × 10−14
F1541.07 × 10−125.88 × 10−139.89 × 10−181.24 × 10−162.93 × 10−164.44 × 10−135.77 × 10−182.55 × 10−113.13 × 10−7
F1623.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−182.5 × 10−13.9 × 10−185 × 10−11
F1723.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−1813.9 × 10−1811
F1853.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−184.7 × 10−13.9 × 10−185.11 × 10−12.58 × 10−18
F1933.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−186.25 × 10−23.9 × 10−1815 × 10−1
F2061.35 × 10−41.36 × 10−54.81 × 10−184.4 × 10−184.27 × 10−181.27 × 10−24.02 × 10−185.05 × 10−46.73 × 10−1
F2143.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−181.22 × 10−173.9 × 10−182.78 × 10−83.76 × 10−1
F2243.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.89 × 10−183.9 × 10−181.87 × 10−91.55 × 10−2
F2343.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−183.9 × 10−189.14 × 10−183.9 × 10−181.18 × 10−93.33 × 10−7
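The p-values in Tables 5, 6, and 9 come from a pairwise Wilcoxon signed-rank test on the per-run results of two algorithms; values below 0.05 indicate a statistically significant difference, while a value of 1 means the paired results are indistinguishable (e.g., both algorithms reach the exact optimum in every run). A minimal sketch using SciPy, with placeholder data standing in for the recorded runs:

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder data: final fitness values of two algorithms over 30 paired runs.
rng = np.random.default_rng(1)
mroa_runs = rng.normal(loc=0.0, scale=1e-3, size=30)
roa_runs = rng.normal(loc=5e-3, scale=1e-3, size=30)

stat, p_value = wilcoxon(mroa_runs, roa_runs)
verdict = "significant" if p_value < 0.05 else "not significant"
print(f"p = {p_value:.3e} -> {verdict} at the 0.05 level")
```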
Table 6. The Wilcoxon signed-rank test results between MROA and other metaheuristic algorithms on test functions (F1–F13) in high dimensions (run 30 times; bold indicates statistically significant results).

| F | dim | MROA vs. ROA | MROA vs. WOA | MROA vs. STOA | MROA vs. SCA | MROA vs. AOA | MROA vs. GTOA | MROA vs. BES | MROA vs. HPGSOS | MROA vs. PSOS |
F150001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−611.73 × 10−61.73 × 10−6
10003.13 × 10−21.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−611.73 × 10−61.73 × 10−6
F25001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
10001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F35001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−65 × 10−11.73 × 10−61.73 × 10−6
10001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.25 × 10−11.73 × 10−61.73 × 10−6
F45001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
10001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6
F55002.35 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−63.06 × 10−41.73 × 10−61.73 × 10−6
10006.34 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−62.37 × 10−51.73 × 10−61.73 × 10−6
F65001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.89 × 10−41.73 × 10−61.73 × 10−6
10009.32 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−67.19 × 10−11.73 × 10−61.73 × 10−6
F75009.27 × 10−31.73 × 10−61.73 × 10−61.73 × 10−65.98 × 10−22.13 × 10−61.73 × 10−62.11 × 10−31.73 × 10−6
10001.40 × 10−21.92 × 10−61.73 × 10−61.73 × 10−64.28 × 10−21.73 × 10−61.73 × 10−66.64 × 10−41.73 × 10−6
F85003.52 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−62.56 × 10−61.73 × 10−6
10001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−63.79 × 10−61.73 × 10−6
F9500111.73 × 10−61.73 × 10−61.96 × 10−42.56 × 10−6111
1000111.73 × 10−61.73 × 10−61.73 × 10−68.3 × 10−6111
F1050013.89 × 10−61.73 × 10−61.73 × 10−61.95 × 10−31.73 × 10−62.5 × 10−14.32 × 10−84.32 × 10−8
100015.31 × 10−51.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−614.32 × 10−84.32 × 10−8
F11500111.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6111
1000111.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−6111
F125001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−64.86 × 10−51.73 × 10−61.73 × 10−6
10001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.97 × 10−51.73 × 10−61.73 × 10−6
F135001.92 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−64.45 × 10−51.73 × 10−61.73 × 10−6
10001.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−61.73 × 10−64.07 × 10−51.73 × 10−61.73 × 10−6
Table 7. Details of CEC2020 test functions.

| No. | Type | Function | dim | Range | fmin |
| --- | --- | --- | --- | --- | --- |
| CEC 1 | Unimodal Function | Shifted and Rotated Bent Cigar Function | 10 | [−100, 100] | 100 |
| CEC 2 | Simple Multimodal Functions | Shifted and Rotated Schwefel's Function | 10 | [−100, 100] | 1100 |
| CEC 3 | Simple Multimodal Functions | Shifted and Rotated Lunacek bi-Rastrigin Function | 10 | [−100, 100] | 700 |
| CEC 4 | Simple Multimodal Functions | Expanded Rosenbrock's plus Griewangk's Function | 10 | [−100, 100] | 1900 |
| CEC 5 | Hybrid Functions | Hybrid Function 1 (N = 3) | 10 | [−100, 100] | 1700 |
| CEC 6 | Hybrid Functions | Hybrid Function 2 (N = 4) | 10 | [−100, 100] | 1600 |
| CEC 7 | Hybrid Functions | Hybrid Function 3 (N = 5) | 10 | [−100, 100] | 2100 |
| CEC 8 | Composition Functions | Composition Function 1 (N = 3) | 10 | [−100, 100] | 2200 |
| CEC 9 | Composition Functions | Composition Function 2 (N = 4) | 10 | [−100, 100] | 2400 |
| CEC 10 | Composition Functions | Composition Function 3 (N = 5) | 10 | [−100, 100] | 2500 |
Table 8. Results of the MROA and other metaheuristic algorithms on test functions (CEC 1–CEC 10) with dim = 10 (run 30 times; bold indicates the best results).

| CEC | Metric | MROA | ROA | WOA | STOA | SCA | AOA | GTOA | BES | HPGSOS | PSOS |
CEC 1Best1 × 1022.57 × 1064.08 × 1062.11 × 1063.41 × 1081.13 × 1097.62 × 1075.64 × 1081.48 × 1021.31 × 102
Mean2.56 × 1038.25 × 1084.86 × 1072.46 × 1081.02 × 1099.94 × 1091.21 × 1094.26 × 1092.54 × 1033.71 × 103
Std2.05 × 1039.36 × 1085.06 × 1072.18 × 1083.64 × 1084.27 × 1091.11 × 1093.34 × 1092.51 × 1033.48 × 103
CEC 2Best1.26 × 1031.76 × 1031.81 × 1031.7 × 1032.23 × 1031.92 × 1031.55 × 1032.22 × 1031.32 × 1031.13 × 103
Mean1.92 × 1032.14 × 1032.32 × 1032.09 × 1032.6 × 1032.3 × 1032.14 × 1032.71 × 1031.51 × 1031.37 × 103
Std2.02 × 1023.12 × 1023.11 × 1022.71 × 1022.09 × 1022.24 × 1023.6 × 1022.17 × 1022.27 × 1021.87 × 102
CEC 3Best7.11 × 1027.67 × 1027.6 × 1027.46 × 1027.72 × 1027.81 × 1027.46 × 1027.75 × 1027.19 × 1027.21 × 102
Mean7.46 × 1027.93 × 1027.98 × 1027.68 × 1027.88 × 1028.06 × 1027.65 × 1028.08 × 1027.3 × 1027.3 × 102
Std12.922.14014.912.816.524.525.59.888.04
CEC 4Best1.90 × 1031.90 × 1031.90 × 1031.90 × 1031.90 × 1031.90 × 1031.90 × 1031.90 × 1031.90 × 1031.90 × 103
Mean1.90 × 1031.90 × 1031.90 × 1031.90 × 1031.90 × 1031.90 × 1031.90 × 1031.90 × 1031.90 × 1031.90 × 103
Std002.51 × 10−14.08 × 10−11.1001.2 × 10−121.2200
CEC 5Best1.8 × 1033.12 × 1039.08 × 1034.65 × 1031.18 × 1041.47 × 1042.53 × 1034.87 × 1045.27 × 1031.91 × 103
Mean4.29 × 1036.03 × 1043.66 × 1051.23 × 1056.5 × 1044.49 × 1054.93 × 1039.66 × 1051.31 × 1055.86 × 103
Std1.79 × 1039.9 × 1045.81 × 1051.79 × 1058.02 × 1043.2 × 1053.17 × 1031.43 × 1061.41 × 1057.62 × 103
CEC 6Best1.6 × 1031.72 × 1031.74 × 1031.66 × 1031.73 × 1031.89 × 1031.73 × 1031.75 × 1031.6 × 1031.6 × 103
Mean1.7 × 1031.85 × 1031.88 × 1031.8 × 1031.87 × 1032.11 × 1031.9 × 1032.03 × 1031.72 × 1031.67 × 103
Std87.41.44 × 1021.58 × 1021.10 × 10292.21.8 × 1021.45 × 1021.94 × 1028984.9
CEC 7Best2.19 × 1032.61 × 1039.37 × 1033.2 × 1036.11 × 1033.99 × 1032.43 × 1034.65 × 1032.62 × 1032.14 × 103
Mean2.68 × 1031.1 × 1046.24 × 1059.81 × 1031.5 × 1048.44 × 1053.14 × 1031.66 × 1054.16 × 1042.82 × 103
Std3.64 × 1021.01 × 1041.38 × 1067.53 × 1038.69 × 1032.12 × 1065.89 × 1024.25 × 1057.36 × 1041.14 × 103
CEC 8Best2.24 × 1032.31 × 1032.25 × 1032.25 × 1032.3 × 1032.41 × 1032.28 × 1032.42 × 1032.3 × 1032.3 × 103
Mean2.3 × 1032.4 × 1032.4 × 1032.65 × 1032.4 × 1033.07 × 1032.46 × 1032.81 × 1032.31 × 1032.3 × 103
Std17.484.73.03 × 1025.52 × 102493.45 × 1021.27 × 1023.34 × 1025.373.54
CEC 9Best2.50 × 1032.75 × 1032.76 × 1032.75 × 1032.78 × 1032.79 × 1032.76 × 1032.78 × 1032.5 × 1032.5 × 103
Mean2.76 × 1032.77 × 1032.78 × 1032.76 × 1032.8 × 1032.88 × 1032.78 × 1032.8 × 1032.67 × 1032.71 × 103
Std11.958.657.414.410.494.274.457.61.22 × 10285.2
CEC 10Best2.6 × 1032.94 × 1032.94 × 1032.92 × 1032.96 × 1033.17 × 1032.95 × 1033.02 × 1032.9 × 1032.9 × 103
Mean2.92 × 1033.0 × 1032.99 × 1032.95 × 1032.99 × 1033.46 × 1033.04 × 1033.28 × 1032.94 × 1032.93 × 103
Std63.91.1 × 1021.16 × 10233.731.52.86 × 1021.50 × 1022.93 × 10222.424.1
Table 9. Results of the Wilcoxon signed-rank test between MROA and other metaheuristic algorithms (CEC 1–CEC 10) with dim = 10 (run 30 times; bold indicates statistically significant results).

| CEC | MROA vs. ROA | MROA vs. WOA | MROA vs. STOA | MROA vs. SCA | MROA vs. AOA | MROA vs. GTOA | MROA vs. BES | MROA vs. HPGSOS | MROA vs. PSOS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CEC 1 | 1.73 × 10−6 | 1.73 × 10−6 | 1.73 × 10−6 | 1.73 × 10−6 | 1.73 × 10−6 | 1.73 × 10−6 | 1.73 × 10−6 | 2.7 × 10−2 | 2.41 × 10−3 |
| CEC 2 | 1.71 × 10−3 | 2.61 × 10−4 | 8.59 × 10−2 | 4.29 × 10−6 | 1.96 × 10−2 | 1.59 × 10−1 | 1.97 × 10−5 | 4.73 × 10−6 | 1.92 × 10−6 |
| CEC 3 | 2.37 × 10−5 | 4.86 × 10−5 | 2.41 × 10−3 | 3.18 × 10−6 | 1.73 × 10−6 | 2.05 × 10−4 | 3.88 × 10−6 | 1.8 × 10−5 | 8.19 × 10−5 |
| CEC 4 | 1 | 1.56 × 10−2 | 2.44 × 10−4 | 1.32 × 10−4 | 1 | 3.91 × 10−3 | 2.5 × 10−1 | 1 | 1 |
| CEC 5 | 3.18 × 10−6 | 1.92 × 10−6 | 1.92 × 10−6 | 1.73 × 10−6 | 1.73 × 10−6 | 2.06 × 10−1 | 1.73 × 10−6 | 1.92 × 10−6 | 1.78 × 10−1 |
| CEC 6 | 2.83 × 10−4 | 8.22 × 10−3 | 1.25 × 10−2 | 3.32 × 10−4 | 2.6 × 10−6 | 8.73 × 10−3 | 6.34 × 10−6 | 5.71 × 10−2 | 1.11 × 10−3 |
| CEC 7 | 1.73 × 10−6 | 1.73 × 10−6 | 1.73 × 10−6 | 1.73 × 10−6 | 1.73 × 10−6 | 3.33 × 10−2 | 1.73 × 10−6 | 3.18 × 10−6 | 4.95 × 10−2 |
| CEC 8 | 1.73 × 10−6 | 3.41 × 10−5 | 1.92 × 10−6 | 1.73 × 10−6 | 1.73 × 10−6 | 2.35 × 10−6 | 1.73 × 10−6 | 2.99 × 10−1 | 2.35 × 10−6 |
| CEC 9 | 2.22 × 10−4 | 4.53 × 10−4 | 8.59 × 10−2 | 2.26 × 10−3 | 3.88 × 10−6 | 5.79 × 10−5 | 3.06 × 10−4 | 2.6 × 10−5 | 1.97 × 10−5 |
| CEC 10 | 3.88 × 10−6 | 3.16 × 10−2 | 1.04 × 10−2 | 3.18 × 10−6 | 1.73 × 10−6 | 3.88 × 10−6 | 1.73 × 10−6 | 2.18 × 10−2 | 1.20 × 10−3 |
Table 10. Parameter sensitivity analysis (bold indicates the best results).

| CEC | Metric | β = 0.1 | β = 0.2 | β = 0.3 | β = 0.4 | β = 0.5 | β = 0.6 | β = 0.7 | β = 0.8 | β = 0.9 |
CEC 1Best1.03 × 1021 × 1021.6 × 1021.13 × 1021.01 × 1021.02 × 1021 × 1021.17 × 1021.71 × 102
Mean4.08 × 1032.56 × 1034.19 × 1032.89 × 1032.53 × 1034.32 × 1033.55 × 1033.55 × 1033.4 × 103
Std3.68 × 1032.05 × 1033.47 × 1031.98 × 1033.02 × 1032.91 × 1033.07 × 1033.09 × 1033.69 × 103
CEC 2Best1.34 × 1031.26 × 1031.42 × 1031.33 × 1031.7 × 1031.73 × 1031.35 × 1031.46 × 1031.34 × 103
Mean2.01 × 1031.92 × 1032 × 1032 × 1032.09 × 1032.08 × 1032.03 × 1032 × 1031.92 × 103
Std2.54 × 1022.02 × 1022 × 1022.58 × 1021.62 × 1021.48 × 1021.97 × 1022.38 × 1022.54 × 102
CEC 3Best7.25 × 1027.11 × 1027.24 × 1027.19 × 1027.23 × 1027.17 × 1027.16 × 1027.17 × 1027.23 × 102
Mean7.55 × 1027.46 × 1027.5 × 1027.49 × 1027.5 × 1027.56 × 1027.51 × 1027.51 × 1027.50 × 102
Std17.512.918.317.620.327.116.218.618.2
CEC 4Best1.9 × 1031.9 × 1031.9 × 1031.9 × 1031.9 × 1031.9 × 1031.9 × 1031.9 × 1031.9 × 103
Mean1.9 × 1031.9 × 1031.9 × 1031.9 × 1031.9 × 1031.9 × 1031.9 × 1031.9 × 1031.9 × 103
Std000000000
CEC 5Best2.09 × 1031.80 × 1031.91 × 1031.88 × 1032.05 × 1031.98 × 1032.06 × 1032.01 × 1032 × 103
Mean4.18 × 1034.29 × 1033.75 × 1033.59 × 1033.64 × 1033.72 × 1034.13 × 1034.09 × 1033.69 × 103
Std2.12 × 1031.79 × 1031.93 × 1031.35 × 1031.74 × 1031.61 × 1031.96 × 1031.59 × 1031.59 × 103
CEC 6Best1.60 × 1031.6 × 1031.60 × 1031.60 × 1031.60 × 1031.60 × 1031.60 × 1031.60 × 1031.60 × 103
Mean1.75 × 1031.7 × 1031.76 × 1031.74 × 1031.74 × 1031.75 × 1031.76 × 1031.75 × 1031.72 × 103
Std1.08 × 10287.41.07 × 1021.13 × 1021.08 × 1021.1 × 10299.81.2 × 10285.7
CEC 7Best2.15 × 1032.19 × 1032.13 × 1032.15 × 1032.27 × 1032.2 × 1032.17 × 1032.27 × 1032.23 × 103
Mean2.66 × 1032.68 × 1032.59 × 1032.66 × 1032.68 × 1032.89 × 1032.76 × 1033.01 × 1033.12 × 103
Std3.4 × 1023.64 × 1023.41 × 1023.81 × 1022.5 × 1024.83 × 1026.13 × 1028.67 × 1027.58 × 102
CEC 8Best2.26 × 1032.24 × 1032.23 × 1032.26 × 1032.27 × 1032.21 × 1032.24 × 1032.3 × 1032.23 × 103
Mean2.31 × 1032.3 × 1032.31 × 1032.31 × 1032.31 × 1032.3 × 1032.31 × 1032.31 × 1032.3 × 103
Std9.8617.414.7108.9222.613.74.8120.1
CEC 9Best2.74 × 1032.5 × 1032.50 × 1032.74 × 1032.50 × 1032.50 × 1032.73 × 1032.50 × 1032.74 × 103
Mean2.76 × 1032.76 × 1032.75 × 1032.76 × 1032.75 × 1032.75 × 1032.75 × 1032.75 × 1032.76 × 103
Std15.611.947.91149.548.611.748.711
CEC 10Best2.9 × 1032.6 × 1032.9 × 1032.9 × 1032.9 × 1032.9 × 1032.9 × 1032.9 × 1032.9 × 103
Mean2.94 × 1032.92 × 1032.93 × 1032.94 × 1032.94 × 1032.94 × 1032.93 × 1032.94 × 1032.93 × 103
Std22.563.922.131.626.72030.12624.7
Table 11. Comparison of optimal solutions for the welded beam design problem.

| Algorithm | h | l | t | b | Optimum Weight |
| --- | --- | --- | --- | --- | --- |
| MROA | 0.2062185 | 3.254893 | 9.020003 | 0.206489 | 1.699058 |
| ROA [23] | 0.200077 | 3.365754 | 9.011182 | 0.206893 | 1.706447 |
| MFO [37] | 0.2057 | 3.4703 | 9.0364 | 0.2057 | 1.72452 |
| CBO [38] | 0.205722 | 3.47041 | 9.037276 | 0.205735 | 1.724663 |
| IHS [39] | 0.20573 | 3.47049 | 9.03662 | 0.2057 | 1.7248 |
| GWO [40] | 0.205676 | 3.478377 | 9.03681 | 0.205778 | 1.72624 |
| MVO [14] | 0.205463 | 3.473193 | 9.044502 | 0.205695 | 1.72645 |
| WOA [9] | 0.205396 | 3.484293 | 9.037426 | 0.206276 | 1.730499 |
| CPSO [41] | 0.202369 | 3.544214 | 9.04821 | 0.205723 | 1.73148 |
| RO [42] | 0.203687 | 3.528467 | 9.004233 | 0.207241 | 1.735344 |
| IHHO [43] | 0.20533 | 3.47226 | 9.0364 | 0.2010 | 1.7238 |
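The weights in Table 11 follow the standard welded beam cost function f(h, l, t, b) = 1.10471 h²l + 0.04811 tb(14 + l), used throughout the cited literature. A quick sketch to check a reported solution (the problem's shear, bending, and buckling constraints are omitted here):

```python
def welded_beam_cost(h, l, t, b):
    """Standard welded beam objective (fabrication cost); constraints omitted."""
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

# The MROA solution from Table 11 reproduces the reported weight of ~1.699058.
print(welded_beam_cost(0.2062185, 3.254893, 9.020003, 0.206489))
```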
Table 12. Comparison of optimal solutions for the tension/compression spring design problem.

| Algorithm | d | D | N | f(x) |
| --- | --- | --- | --- | --- |
| MROA | 0.05 | 0.374430 | 8.5497203 | 0.009875331 |
| GA [1] | 0.05148 | 0.351661 | 11.632201 | 0.01270478 |
| HS [6] | 0.051154 | 0.349871 | 12.076432 | 0.0126706 |
| CSCA [44] | 0.051609 | 0.354714 | 11.410831 | 0.0126702 |
| PSO [41] | 0.051728 | 0.357644 | 11.244543 | 0.0126747 |
| AOA [11] | 0.05 | 0.349809 | 11.8637 | 0.012124 |
| WOA [9] | 0.051207 | 0.345215 | 12.004032 | 0.0126763 |
| GSA [45] | 0.050276 | 0.32368 | 13.52541 | 0.0127022 |
| DE [46] | 0.051609 | 0.354714 | 11.410831 | 0.0126702 |
| RLTLBO [33] | 0.055118 | 0.5059 | 5.1167 | 0.010938 |
| RO [42] | 0.05137 | 0.349096 | 11.76279 | 0.0126788 |
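For the spring, the objective is the wire volume f(d, D, N) = (N + 2)Dd²; plugging Table 12's MROA solution into this standard formulation reproduces the reported value:

```python
def spring_weight(d, D, N):
    """Tension/compression spring objective: (N + 2) * D * d^2; constraints omitted."""
    return (N + 2) * D * d**2

print(spring_weight(0.05, 0.374430, 8.5497203))  # ~0.009875, matching Table 12
```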
Table 13. Comparison of optimal solutions for the pressure vessel design problem.

| Algorithm | Ts | Th | R | L | Optimum Weight |
| --- | --- | --- | --- | --- | --- |
| MROA | 0.742578894 | 0.368384814 | 40.33385234 | 199.802664 | 5735.8501 |
| SHO [47] | 0.77821 | 0.384889 | 40.315042 | 200 | 5885.5773 |
| GWO [40] | 0.8125 | 0.4345 | 42.089181 | 176.758731 | 6051.5639 |
| ACO [48] | 0.8125 | 0.4375 | 42.103624 | 176.572656 | 6059.0888 |
| WOA [9] | 0.8125 | 0.4375 | 42.0982699 | 176.638998 | 6059.741 |
| ES [49] | 0.8125 | 0.4375 | 42.098087 | 176.640518 | 6059.7456 |
| SMA [50] | 0.7931 | 0.3932 | 40.6711 | 196.2178 | 5994.1857 |
| BA [51] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143 |
| HPSO [52] | 0.8125 | 0.4375 | 42.0984 | 176.6366 | 6059.7143 |
| CSS [53] | 0.8125 | 0.4375 | 42.1036 | 176.5727 | 6059.0888 |
| MPA [54] | 0.77816876 | 0.38464966 | 40.31962084 | 199.9999935 | 5885.3353 |
Table 14. Comparison of optimal solutions for the speed reducer design problem.

| Algorithm | x1 | x2 | x3 | x4 | x5 | x6 | x7 | Optimal Weight |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MROA | 3.497571 | 0.7 | 17 | 7.3 | 7.8 | 3.350057 | 5.285540 | 2995.437447 |
| AOA [11] | 3.50384 | 0.7 | 17 | 7.3 | 7.72933 | 3.35649 | 5.2867 | 2997.9157 |
| MFO [37] | 3.497455 | 0.7 | 17 | 7.82775 | 7.712457 | 3.351787 | 5.286352 | 2998.94083 |
| WSA [55] | 3.5 | 0.7 | 17 | 7.3 | 7.8 | 3.350215 | 5.286683 | 2996.348225 |
| AAO [56] | 3.499 | 0.6999 | 17 | 7.3 | 7.8 | 3.3502 | 5.2872 | 2996.783 |
| CS [57] | 3.5015 | 0.7 | 17 | 7.605 | 7.8181 | 3.352 | 5.2875 | 3000.981 |
| FA [58] | 3.507495 | 0.7001 | 17 | 7.719674 | 8.080854 | 3.351512 | 5.287051 | 3010.137492 |
| RSA [59] | 3.50279 | 0.7 | 17 | 7.30812 | 7.74715 | 3.35067 | 5.28675 | 2996.5157 |
| HS [6] | 3.520124 | 0.7 | 17 | 8.37 | 7.8 | 3.36697 | 5.288719 | 3029.002 |
| hHHO-SCA [60] | 3.506119 | 0.7 | 17 | 7.3 | 7.99141 | 3.452569 | 5.286749 | 3029.873076 |
Table 15. Results of optimal solutions for the multiple disc clutch brake design problem.

| Algorithm | x1 | x2 | x3 | x4 | x5 | Optimum Weight |
| --- | --- | --- | --- | --- | --- | --- |
| MROA | 69.99999995 | 90 | 1 | 600 | 2 | 0.235242458 |
| TLBO [4] | 70 | 90 | 1 | 810 | 3 | 0.313656611 |
| WCA [61] | 70 | 90 | 1 | 910 | 3 | 0.313656 |
| MVO [14] | 70 | 90 | 1 | 910 | 3 | 0.313656 |
| CMVO [62] | 70 | 90 | 1 | 910 | 3 | 0.313656 |
| MFO [37] | 70 | 90 | 1 | 910 | 3 | 0.313656 |
| RSA [59] | 70.0347 | 90.0349 | 1 | 801.7285 | 2.974 | 0.31176 |
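For the multiple disc clutch brake, the objective is the brake mass f = π(x2² − x1²)x3(x5 + 1)ρ, with x1 the inner radius, x2 the outer radius, x3 the disc thickness, x5 the number of friction surfaces, and material density ρ = 7.8 × 10⁻⁶ kg/mm³ (x4, the actuating force, enters only the constraints). A quick check against Table 15:

```python
import math

RHO = 0.0000078  # material density in kg/mm^3

def clutch_brake_mass(r_in, r_out, thickness, n_surfaces):
    """Multiple disc clutch brake objective (mass); constraints omitted."""
    return math.pi * (r_out**2 - r_in**2) * thickness * (n_surfaces + 1) * RHO

print(clutch_brake_mass(69.99999995, 90, 1, 2))  # ~0.235242, matching MROA
print(clutch_brake_mass(70, 90, 1, 3))           # ~0.313657, matching TLBO/WCA
```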