Enhanced Remora Optimization Algorithm for Solving Constrained Engineering Optimization Problems

Remora Optimization Algorithm (ROA) is a recent population-based algorithm that mimics the intelligent traveler behavior of the remora. However, the performance of ROA is barely satisfactory: it may become stuck in locally optimal regions or converge slowly, especially on high-dimensional, complicated problems. To overcome these limitations, this paper develops an improved version of ROA, called Enhanced ROA (EROA), using three different techniques: adaptive dynamic probability, the SFO strategy with Levy flight, and a restart strategy. The performance of EROA is tested on two different benchmark suites and seven real-world engineering problems. The statistical analysis and experimental results show the efficiency of EROA.


Introduction
The process of determining the best values of design variables to minimize/maximize a fitness function while fulfilling the requirements of the whole system is known as optimization [1,2]. Optimization problems exist in almost every field, such as engineering, business, science, etc. Optimization methods can be classified into two large categories: (1) exact algorithms and (2) heuristic and metaheuristic algorithms [3][4][5]. The former category can be considered less applicable and practical, as it requires highly complicated calculations that take much time. In contrast, the latter class (metaheuristic algorithms) shows some randomized/stochastic behavior and performs an "educated search decision" toward "wise regions" [6,7].
These days, the optimization field has gained huge interest from many scholars, as it has become one of the hot topics in computer science, appearing in many domains such as cloud computing tasks [8], face detection [9], power [10,11], and engineering problems [12]. In the literature, there exists an enormous number of optimization algorithms, since no single algorithm is able to find the optimal solution for all problems, as stated by the No Free Lunch (NFL) theorem [13]. In other words, if an algorithm is able to find the optimal solution for one type of problem, it will fail on other types. This theorem encourages researchers to introduce novel algorithms and enhance existing ones.

The main contributions of this paper are as follows:
• An enhanced version of ROA is proposed based on three strategies: adaptive dynamic probability, the SFO strategy with Levy flight, and a restart strategy (RS);
• EROA is tested on 23 classical functions (CEC2005), 29 functions from CEC2017, and 7 real-world engineering problems;
• EROA is tested using three dimension sizes (D = 30, 100, and 500);
• EROA is compared with the original algorithm and six other different algorithms.
This paper is organized as follows: Section 2 gives a brief description of ROA, whereas Section 3 illustrates the operators used (adaptive dynamic probability, Levy flight, and the restart strategy) and gives the framework of the proposed algorithm. Sections 4 and 5 present the experimental results of the proposed algorithm on benchmark and constrained engineering problems, whereas Section 6 concludes the paper.

Remora Optimization Algorithm (ROA)
ROA is a new metaheuristic optimization algorithm inspired by the remora, the "intelligent traveler" of the ocean (Figure 1). It mimics the parasitism and random host replacement of the remora. Remoras can attach to whales and swordfish to learn the effective characteristics of the hosts, so ROA borrows two strategies from WOA and SFO [57]. ROA contains "Free travel" and "Eat thoughtfully" phases, corresponding to the exploration and exploitation stages. The algorithm switches between the exploration and exploitation phases through a "one small step try".
ROA has many advantages, such as: • Easy to implement; • Few parameters; • Good balance between exploration and exploitation. However, like all other metaheuristic algorithms, it may become stuck in local optima or exhibit slow convergence. A brief description of ROA's mathematical model is given as follows.


Sailfish Optimization (SFO) Strategy
When the remora attaches to the swordfish, its position can be considered as the swordfish's position. ROA improves the location update formula based on the elite idea and obtains the following formula: where t is the number of the current iteration; XBest(t) represents the best solution obtained so far and Xrand(t) indicates a random location; and rand is a uniformly distributed random number between 0 and 1.

Experience Attempt
At the same time, Remora continuously takes a small step around the host to accumulate experience to determine whether the host needs to be replaced. The mathematical formula is as follows.
where Xatt(t + 1) represents a tentative step; Xpre(t) is the position of the previous generation and X(t) indicates the current position; and randn is a normally distributed random number between 0 and 1.
After this "small global" movement, Remora will compare the fitness values of the SFO strategy f(X) and the experience attempt f(Xatt) to choose whether to change the host. The position with the smaller fitness value is retained. When the remora attaches to the whale, the position update formula of the remora is described as follows.
where T is the maximum number of iterations, Dist indicates the distance between the best position and the current position, α is a random number in [−1, 1], and a decreases linearly from −1 to −2.

Host Feeding
"Host feeding" is a small step in the exploitation process, which creates a solution space that converges gradually around the host, refining and enhancing the ability of local optimization. This stage can be mathematically modeled as: where A denotes a small step movement related to the volume space of the host and Remora, and factor C is a constant number equal to 0.1, used to narrow the position of remora. It is worth noting that a random integer argument H (0 or 1) is used to decide whether to choose the WOA Strategy or the SFO Strategy. The pseudo-code of ROA is shown in Algorithm 1.
Algorithm 1 Pseudo-code of ROA
1: Set initial values of the population size N and the maximum number of iterations T
2: Initialize positions of the population Xi (i = 1, 2, 3, ..., N)
3: Initialize the best solution Xbest and corresponding best fitness f(Xbest)
4: While t < T do
5:   Calculate the fitness value of each Remora
6:   Check if any search agent goes beyond the search space and amend it
7:   Update a, α, V and H
8:   For each Remora indexed by i do
9:     If H(i) = 0 then
10:      Update the position using Equation (3)
11:    Elseif H(i) = 1 then
12:      Update the position using Equation (1)
13:    Endif
14:    Make a one-step prediction by Equation (2)
15:    Compare fitness values to judge whether host replacement is necessary
16:    If the host is not replaced, Equation (7) is used as the host feeding mode for Remora
17:  End for
18: End while
19: Return Xbest
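The phases above can be sketched compactly in Python. The numbered equations are not reproduced in this text, so the update bodies below follow the formulations commonly given for the original ROA [57] and should be read as assumptions rather than the authors' exact code:

```python
import math
import random

def roa_step(X, X_pre, X_best, X_rand, t, T):
    """One illustrative ROA update for a single remora (each argument is a list
    of floats). Covers the SFO strategy (Eq. (1)), the experience attempt
    (Eq. (2)), the WOA strategy (Eq. (3)), and host feeding (Eq. (7))."""
    dim = len(X)
    if random.randint(0, 1) == 1:
        # SFO strategy: elite-guided jump between the best and a random remora
        new = [X_best[d] - (random.random() * (X_best[d] + X_rand[d]) / 2 - X_rand[d])
               for d in range(dim)]
    else:
        # WOA strategy: spiral move around the current position
        a = -(1 + t / T)                       # decreases linearly from -1 to -2
        alpha = random.random() * (a - 1) + 1  # random number in [a, 1]
        new = [abs(X_best[d] - X[d]) * math.exp(alpha) * math.cos(2 * math.pi * alpha) + X[d]
               for d in range(dim)]
    # Experience attempt: a tentative "one small step try" around the host
    att = [new[d] + (new[d] - X_pre[d]) * random.gauss(0, 1) for d in range(dim)]
    # Host feeding: small exploitation step narrowing toward the host (C = 0.1)
    V = 2 * (1 - t / T)
    B = 2 * V * random.random() - V
    fed = [X[d] + B * (X[d] - 0.1 * X_best[d]) for d in range(dim)]
    return new, att, fed
```

In the full algorithm, the fitness of the tentative step is compared against the strategy step to decide whether the host is replaced, exactly as in steps 14–16 of Algorithm 1.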

The Proposed Approach
As a newly proposed algorithm, ROA has achieved good results on some test functions. However, experiment results show that it still has the defects of insufficient global exploration and local optimum stagnation. The global exploration is implemented by the SFO Strategy and the "small global" movement experience attempt. The lack of global exploration capacity can be attributed to the deficient SFO Strategy. Thus, adaptive dynamic probability and Levy flight are utilized to improve the global search ability in this work. Meanwhile, a restart strategy is added to help the algorithm escape from local optima.
To the best of our knowledge, this is the first time the following three operators have been combined with ROA.

Adaptive Dynamic Probability
As mentioned above, H is used to decide whether to choose the WOA strategy or the SFO strategy; that is, H determines whether to explore or exploit the search space. However, H is a random integer, which means that the probability of exploration and of exploitation is the same during both early and late iterations. This is not in line with our desire for the optimization algorithm to focus on exploration in the early stage and on exploitation in the later stage. Thus, the adaptive dynamic probability of H is designed as follows: where p denotes the probability that H takes 0 or 1. Obviously, as the number of iterations increases, the probability of H taking 0 increases while the probability of it taking 1 decreases; that is, the probability that individuals exploit increases and the probability that they explore decreases.
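Equation (11) is not reproduced in this text, so the sketch below assumes a simple linear schedule p = t/T purely for illustration; the key point is that the selector shifts from exploration toward exploitation as iterations pass:

```python
import random

def adaptive_H(t, T):
    """Illustrative adaptive dynamic probability for the host selector H.
    Assumed schedule (not the paper's Eq. (11)): p = t / T, the probability
    that H takes 0. Early iterations therefore favor H = 1 (exploration via
    the SFO strategy); late iterations favor H = 0 (exploitation via WOA)."""
    p = t / T
    return 0 if random.random() < p else 1
```

At t = 0 the selector always returns 1 (pure exploration), and as t approaches T it almost always returns 0 (pure exploitation).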

Sailfish Optimization (SFO) Strategy with Levy Flight
Levy flight is a stochastic strategy widely used in optimization algorithms. It has a relatively high probability of large strides in random walking, which can effectively improve the randomness of the algorithm. To further enhance the exploration ability of the method, Levy flight is integrated into the formula of the SFO Strategy, which is described as follows: where Levy represents the Levy flight function, and D is the dimension size of the problem. u and v are random values between 0 and 1, and β is a constant number equal to 1.5.
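A minimal sketch of the Levy step is given below. It follows the text's description (u and v random values in (0, 1), β = 1.5); note that Mantegna's original algorithm draws u and v from normal distributions, and the 0.01 scaling factor is a common convention borrowed from Levy-flight-based optimizers, so both should be treated as assumptions:

```python
import math
import random

def levy(beta=1.5):
    """Levy flight step following the paper's description: u, v in (0, 1),
    beta = 1.5. The sigma term is the standard Mantegna scale factor."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.random() * sigma
    v = random.random() + 1e-12   # guard against division by zero
    return 0.01 * u / v ** (1 / beta)
```

Multiplying the SFO step by this quantity occasionally produces very large strides, which is what improves the randomness and global reach of the search.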

Restart Strategy (RS)
Restart schemes can help worse individuals jump out of local optima, so they are used to prevent the population from stagnating. Zhang et al. [58] proposed an RS with a trial vector recording the number of times the position of each individual has not been improved. If the position of the ith individual has not been improved in the current search, the trial value of this individual is increased by 1; otherwise, the trial value is reset to zero. If the trial value is not less than the predefined Limit, the position is replaced by choosing the location with the better fitness value from Equations (15) and (16).
where lb and ub are the lower and upper bounds of the problem, respectively. In this paper, we replace Equation (16) with Equation (17) from the random opposition-based learning (ROL) strategy [59] to obtain an opposite position. The better solution generated from Equations (15) and (17) is adopted if the trial value is not less than the Limit.
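The restart logic can be sketched as follows. Equations (15) and (17) are not reproduced in this text, so two common forms are assumed here: Eq. (15) as a uniform re-initialization within [lb, ub], and Eq. (17) as the ROL opposite position lb + ub − rand · x [59]:

```python
import random

def restart(X, fit, trial, lb, ub, limit, f):
    """Illustrative restart strategy for one stagnating individual.
    Assumed stand-ins: Eq. (15) = uniform re-initialization in [lb, ub];
    Eq. (17) = ROL opposite position lb + ub - rand * x."""
    if trial < limit:
        return X, fit, trial          # still improving often enough; no restart
    cand1 = [random.uniform(lb, ub) for _ in X]          # assumed Eq. (15)
    cand2 = [lb + ub - random.random() * x for x in X]   # assumed Eq. (17), ROL
    cand2 = [min(max(v, lb), ub) for v in cand2]         # clip back into bounds
    best = min((cand1, cand2), key=f)                    # keep the better of the two
    return best, f(best), 0                              # reset the trial counter
```

Individuals whose trial counter reaches the Limit are thus relocated, either randomly or to the opposite side of the search space, whichever evaluates better.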

The Proposed EROA
EROA is proposed by combining the above three strategies. The overall process of EROA is similar to ROA, except that the update method of H is replaced by the adaptive dynamic probability, the SFO strategy integrates Levy flight, and the RS is added at the end. The pseudo-code of EROA is given in Algorithm 2, and the summarized flowchart is illustrated in Figure 2.

Algorithm 2 Pseudo-code of EROA
1: Set initial values of the population size N and the maximum number of iterations T
2: Initialize positions of the population Xi (i = 1, 2, 3, ..., N)
3: Initialize the best solution Xbest and corresponding best fitness f(Xbest)
4: While t < T do
5:   Calculate the fitness value of each Remora
6:   Check if any search agent goes beyond the search space and amend it
7:   Update a, α, and V
8:   Update H based on Equation (11)
9:   For each Remora indexed by i do
10:    If H(i) = 0 then
11:      Update the position using Equation (3)
12:    Elseif H(i) = 1 then
13:      Update the position using Equation (12)
14:    End if
15:    Make a one-step prediction by Equation (2)
16:    Compare fitness values to judge whether host replacement is necessary
17:    If the host is not replaced, Equation (7) is used as the host feeding mode for Remora
18:    Update trial(i) for remora
19:    If trial(i) >= Limit then
20:      Generate positions using Equations (15) and (17)
21:    End if
22:  End for
23: End while
24: Return Xbest
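Putting the three strategies together, the overall loop can be sketched as below. Since the numbered equations are not reproduced in this text, every update body is a simplified stand-in (linear schedule for the adaptive probability, a Levy-scaled jump toward the best solution for the SFO strategy, and uniform/ROL restarts), so this is an illustration of the control flow, not the authors' implementation:

```python
import math
import random

def eroa(f, lb, ub, dim, n=30, t_max=200, limit=5):
    """Compact illustrative sketch of the EROA loop (Algorithm 2) with
    simplified stand-ins for the paper's numbered equations."""
    def levy(beta=1.5):
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                 (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = random.random() * sigma
        v = random.random() + 1e-12          # guard against division by zero
        return 0.01 * u / v ** (1 / beta)

    clip = lambda v: min(max(v, lb), ub)
    X = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    fit = [f(x) for x in X]
    trial = [0] * n
    b = min(range(n), key=lambda i: fit[i])
    best, best_fit = X[b][:], fit[b]
    for t in range(t_max):
        p = t / t_max                                   # adaptive dynamic probability
        for i in range(n):
            if random.random() < p:                     # H = 0: WOA-style exploitation
                cand = [clip(best[d] + (best[d] - X[i][d]) * random.random())
                        for d in range(dim)]
            else:                                       # H = 1: SFO strategy with Levy flight
                cand = [clip(best[d] + (best[d] - X[i][d]) * levy())
                        for d in range(dim)]
            c_fit = f(cand)
            if c_fit < fit[i]:                          # greedy selection + trial update
                X[i], fit[i], trial[i] = cand, c_fit, 0
            else:
                trial[i] += 1
            if trial[i] >= limit:                       # restart strategy (Eqs. (15)/(17))
                r1 = [random.uniform(lb, ub) for _ in range(dim)]
                r2 = [clip(lb + ub - random.random() * x) for x in X[i]]
                X[i] = min((r1, r2), key=f)
                fit[i], trial[i] = f(X[i]), 0
            if fit[i] < best_fit:
                best, best_fit = X[i][:], fit[i]
    return best, best_fit
```

Running this sketch on a simple sphere function shows the intended behavior: wide Levy-driven exploration early on, tightening exploitation around the incumbent best later, with stagnating individuals periodically relocated.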

Numerical Experiment Results
In this section, two different types of benchmark functions are used to evaluate the performance of EROA. First, experiments on 23 standard benchmark functions are carried out to evaluate the performance of EROA in solving simple numerical optimization problems. Then, the CEC2017 test suite, comprising 29 benchmark functions, is utilized to evaluate the performance of EROA in solving complex numerical problems. EROA is compared with seven well-known metaheuristic methods: ROA, AO, AOA, HHO, WOA, SCA, and STOA [60]. We set the population size N = 30, the dimension size D = 30/100/500, and the maximum number of iterations T = 500, and all algorithms are run 30 times independently. The parameter settings of each algorithm are shown in Table 1. All experiments are carried out in MATLAB R2016a on a PC with an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz and 8 GB of RAM, running Windows 10.


Experiments on Standard Benchmark Functions
Here, the EROA performance is tested using 23 mathematical benchmark functions. This benchmark contains seven unimodal, six multimodal, and ten fixed-dimension multimodal functions. The mathematical description of each type is given in Tables 2-4, where Fun refers to a mathematical function, D refers to the number of dimensions, Range shows the interval of the search space, and fmin refers to the optimal value that the corresponding function can achieve. Table 5 shows the results of the introduced algorithm and its competitors; the parameter settings of each algorithm are illustrated in Table 1. From Table 5, it can be seen that EROA ranked first in 19 of the 23 functions. In the unimodal functions, it ranked first in 5 out of 7, whereas in the multimodal functions it achieved the best results in 4 out of 6. On the other hand, it achieved the best results in all of the fixed-dimension multimodal functions. Figure 3 shows the convergence curves for all functions. From this figure, it can be noticed that EROA converges faster than its competitors.
To test the scalability of the proposed algorithm, we carry out the experiments on F1-F13 using two other dimension sizes, D = 100 and D = 500, as shown in Table 5. Moreover, the convergence curves for these dimensions are shown in Figures 4 and 5.
Furthermore, a non-parametric test, the Wilcoxon rank-sum test, is used at a 5% level of significance to make a fair comparison between EROA and the other algorithms' results over the independent runs. Table 6 shows the results of this test. From this table, it can be seen that the p-values for almost all functions are less than 0.05.
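For readers reproducing this comparison, a minimal sketch of the rank-sum test using the standard normal approximation (without tie correction) is shown below; off-the-shelf equivalents such as scipy.stats.ranksums behave the same way for samples of 30 runs:

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (no tie correction); suitable for e.g. 30 independent runs per algorithm."""
    n1, n2 = len(a), len(b)
    pooled = sorted(list(a) + list(b))
    # assign average ranks to tied values
    rank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2   # mean of 1-based ranks i+1 .. j
        i = j
    r1 = sum(rank[v] for v in a)            # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    z = (r1 - mean) / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2)) # two-sided p-value
```

A p-value below 0.05 indicates that the two sets of runs differ significantly at the 5% level used in Table 6.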

Experiments on CEC2017 Test Suite
In this subsection, the performance of the suggested algorithm, EROA, on the CEC2017 test suite is discussed. The results of EROA and the other algorithms are given in Table 7. This table shows that EROA achieved the best results in 9 functions out of 29, whereas STOA achieved the best results in only eight functions. Moreover, EROA achieved the second-best results in two functions (F20 and F21) and the third-best in eight functions (F3, F5, F18, F19, F24, F26, and F30). To make a fair comparison in algorithm ranking, we perform a Friedman test, as shown in Table 8. From this table, it can be noticed that EROA ranked first overall in solving CEC2017. Furthermore, Figure 6 shows some convergence curves of the introduced algorithm compared with the classical ROA and the six other algorithms; EROA achieved fast convergence.
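The overall ranking in Table 8 is typically produced from mean Friedman ranks; a minimal sketch of that computation (with average ranks for ties) is:

```python
def friedman_mean_ranks(results):
    """Mean Friedman ranks: results[f][a] is the score of algorithm a on
    function f (lower is better). Returns one mean rank per algorithm;
    the lowest mean rank wins the overall ranking."""
    n_alg = len(results[0])
    totals = [0.0] * n_alg
    for row in results:
        order = sorted(range(n_alg), key=lambda a: row[a])
        r = [0.0] * n_alg
        i = 0
        while i < n_alg:                      # average ranks over tied scores
            j = i
            while j < n_alg and row[order[j]] == row[order[i]]:
                j += 1
            avg = (i + 1 + j) / 2             # mean of 1-based ranks i+1 .. j
            for k in range(i, j):
                r[order[k]] = avg
            i = j
        for a in range(n_alg):
            totals[a] += r[a]
    return [t / len(results) for t in totals]
```

Feeding in one row per CEC2017 function (mean error per algorithm) yields the per-algorithm mean ranks from which the overall ordering is read off.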

Constrained Engineering Design Problems
The previous experiments show EROA's ability to solve numerical optimization problems. Here, to reveal the power of EROA in solving real constrained engineering problems, seven different problems are used, namely: pressure vessel design, speed reducer design, tension/compression spring design, three-bar truss design, welded beam design, tubular column design, and the gear train design problem.


Pressure Vessel Design Problem
Pressure vessel design is a minimization problem that consists of four variables, as shown in Figure 7. The mathematical formulation of this problem is shown below. The results of EROA in solving the pressure vessel design problem are compared with those of the Aquila Optimizer (AO), Harris Hawks Optimization (HHO), Whale Optimization Algorithm (WOA), Slime Mould Algorithm (SMA) [61], Grey Wolf Optimizer (GWO), Multi-Verse Optimizer (MVO), Evolution Strategy (ES) [62], Gravitational Search Algorithm (GSA), Genetic Algorithm (GA), and Co-evolutionary Particle Swarm Optimization (CPSO) [63], as shown in Table 9. It can be seen from this table that EROA has the smallest cost, 5935.7301, with X = (0.8434295, 0.4007618, 44.786, 145.9578).


Speed Reducer Design Problem
The second engineering problem is the speed reducer design, which minimizes the reducer weight. It has seven variables, as shown in Figure 8. The mathematical formulation is given below.


Tension/Compression Spring Design Problem
Tension/compression spring design problem aims to determine the minimum cost of spring fabrication. It has three variables as shown in Figure 9. The mathematical model can be shown below.


The results for the tension/compression spring design problem are given in Table 11, where EROA is compared with the Aquila Optimizer (AO), Harris Hawks Optimization (HHO), Whale Optimization Algorithm (WOA), Salp Swarm Algorithm (SSA), Grey Wolf Optimizer (GWO), Multi-Verse Optimizer (MVO), Particle Swarm Optimization (PSO), Improved Teaching-Learning-Based Optimization algorithm (RLTLBO) [66], Genetic Algorithm (GA), and Harmony Search (HS). From this table, we can see that EROA achieved the best results.

Three-Bar Truss Design Problem
The three-bar truss design problem aims to minimize the weight of the structure. It has two variables, as shown in Figure 10. The mathematical model is given below.
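Since the equations are not reproduced in this text, the sketch below states the three-bar truss problem as it is commonly formulated in the literature (assumed constants: bar length l = 100 cm, load P = 2 kN/cm², allowable stress σ = 2 kN/cm²; x1, x2 are cross-sectional areas in [0, 1]), together with a static penalty wrapper of the kind typically used to hand constrained problems to a metaheuristic such as EROA:

```python
import math

# Assumed constants from the common literature formulation of the problem
L_BAR, P, SIGMA = 100.0, 2.0, 2.0

def truss_weight(x):
    """Objective: structural weight (2*sqrt(2)*x1 + x2) * l."""
    x1, x2 = x
    return (2 * math.sqrt(2) * x1 + x2) * L_BAR

def truss_constraints(x):
    """Stress constraints g_i(x) <= 0 on the three bars."""
    x1, x2 = x
    d = math.sqrt(2) * x1 * x1 + 2 * x1 * x2
    g1 = (math.sqrt(2) * x1 + x2) / d * P - SIGMA
    g2 = x2 / d * P - SIGMA
    g3 = 1 / (math.sqrt(2) * x2 + x1) * P - SIGMA
    return (g1, g2, g3)

def penalized(x, w=1e6):
    """Static penalty: feasible points keep their weight, violations are punished."""
    return truss_weight(x) + w * sum(max(0.0, g) ** 2 for g in truss_constraints(x))
```

The widely reported best design, roughly x = (0.7887, 0.4082), is feasible under these constraints with a weight of about 263.9, which matches the magnitude of the values usually tabulated for this problem.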

Welded Beam Design Problem
The fifth engineering problem is welded beam design, which tries to find the minimum cost of welded beam manufacturing. It has four variables, as shown in Figure 11. The mathematical model of welded beam design is given below.


Tubular Column Design Problem
The sixth engineering problem is called tubular column design [65], which aims to find the lowest cost of designing a tubular column. It consists of two variables, and the mathematical description of this problem is shown below.

Gear Train Design Problem
The last engineering problem discussed here is the gear train design problem [14], which aims to minimize the gear ratio error. It consists of four variables, and the mathematical formulation is given below.
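As the formulation is not reproduced in this text, the sketch below states the problem as it is commonly given in the literature: choose four integer tooth counts in [12, 60] so that the train ratio approximates 1/6.931 (the variable ordering in the ratio is an assumption of this sketch):

```python
def gear_ratio_error(x):
    """Gear train design objective (common literature form): squared error
    between the target ratio 1/6.931 and the train ratio (nA*nB)/(nC*nD),
    with x = (nA, nB, nC, nD) integer tooth counts in [12, 60]."""
    n_a, n_b, n_c, n_d = x
    return (1 / 6.931 - (n_a * n_b) / (n_c * n_d)) ** 2
```

The tooth counts commonly reported as optimal, e.g. (19, 16, 43, 49), reproduce the best known error of roughly 2.7e-12, which is the order of magnitude usually listed in comparison tables for this problem.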