Subtraction-Average-Based Optimizer: A New Swarm-Inspired Metaheuristic Algorithm for Solving Optimization Problems

This paper presents a new evolutionary-based approach called the Subtraction-Average-Based Optimizer (SABO) for solving optimization problems. The fundamental inspiration of the proposed SABO is to use the subtraction average of searcher agents to update the position of population members in the search space. The different steps of the SABO's implementation are described and then mathematically modeled for optimization tasks. The performance of the proposed SABO approach is tested on the optimization of fifty-two standard benchmark functions, consisting of unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types, and the CEC 2017 test suite. The optimization results show that the proposed SABO approach effectively solves the optimization problems by balancing exploration and exploitation in the search process of the problem-solving space. The results of the SABO are compared with the performance of twelve well-known metaheuristic algorithms. The analysis of the simulation results shows that the proposed SABO approach provides superior results for most of the benchmark functions and a much more competitive performance than its competitor algorithms. Additionally, the proposed approach is implemented for four engineering design problems to evaluate the SABO in handling optimization tasks for real-world applications. The optimization results show that the proposed SABO approach can solve real-world applications and provides more optimal designs than its competitor algorithms.


Introduction
Optimization is a comprehensive concept in various fields of science. An optimization problem is a type of problem that has more than one feasible solution. Therefore, the goal of optimization is to find the best solution among all these feasible solutions. From a mathematical point of view, an optimization problem is explained using three parts: decision variables, constraints, and objective function [1]. The problem solving techniques in optimization studies are placed into two groups: deterministic and stochastic approaches [2].
Deterministic approaches, which are placed into two classes, gradient-based and non-gradient-based, are effective in solving linear, convex, simple, low-dimensional, continuous, and differentiable optimization problems [3]. However, increasing the complexity of these optimization problems leads to disruption in the performance of the deterministic approaches, and these methods get stuck in inappropriate local optima. On the other hand, many optimization problems within science and real-world applications have characteristics such as a high dimensionality, a high complexity, a non-convex, non-continuous, non-linear, and non-differentiable objective function, and a non-linear and unknown search space [4]. These optimization task characteristics and the difficulties of deterministic approaches have led researchers to introduce new techniques called stochastic approaches.
Metaheuristic algorithms are one of the most widely used stochastic approaches that effectively solve complex optimization problems. They have efficiency in solving non-linear, non-convex, non-differentiable, high-dimensional, and NP-hard optimization problems. An efficiency in addressing discrete, non-linear, and unknown search spaces, the simplicity of their concepts, their easy implementation, and their non-dependence on the type of problem are among the advantages that have led to the popularity of metaheuristic algorithms [5]. Metaheuristic algorithms are employed in various optimization applications within science, such as index tracking [6], energy [7][8][9][10], protection [11], energy carriers [12,13], and electrical engineering [14][15][16][17][18][19].
The optimization process of these metaheuristic algorithms is based on a random search of the problem solving space and the use of random operators. Initially, candidate solutions are randomly generated. Then, during a repetition-based process, and based on the steps of the algorithm, the positions of the candidate solutions in the problem solving space are updated to improve the quality of these initial solutions. In the end, the best candidate solution is presented as the solution to the problem. Using random search in the optimization process does not guarantee that a metaheuristic algorithm will reach the global optimum. For this reason, the solutions that are obtained from metaheuristic algorithms are called quasi-optimal [20]. To organize an effective search in the problem solving space, metaheuristic algorithms should be able to provide and manage search operations well at both the global and local levels. Global search, with the concept of exploration, leads to a comprehensive search of the problem solving space and an escape from local optima. Local search, with the concept of exploitation, leads to a detailed search around the promising solutions for a convergence towards possibly better solutions. Considering that exploration and exploitation pursue opposite goals, the key to the success of metaheuristic algorithms is to create a balance between exploration and exploitation during the search process [21].
On the one hand, the concepts of the random search process and quasi-optimal solutions, and, on the other hand, the desire to achieve better quasi-optimal solutions for these optimization problems, have led to the development of numerous metaheuristic algorithms by researchers.
The main research question is this: now that so many metaheuristic algorithms have been designed, is there still a need to introduce a newer algorithm to deal with optimization problems? In response to this question, the No Free Lunch (NFL) theorem [22] explains that the high success of a particular algorithm in solving one set of optimization problems does not guarantee the same performance of that algorithm on other optimization problems. Thus, it cannot be assumed in advance that implementing a given algorithm on an optimization problem will be successful. According to the NFL theorem, no particular metaheuristic algorithm is the best optimizer for solving all optimization problems. The NFL theorem motivates researchers to search for better solutions to optimization problems by designing newer metaheuristic algorithms. It has also inspired the authors of this paper to provide more effective solutions for dealing with optimization problems by creating a new metaheuristic algorithm.
The innovation and novelty of this paper lie in the introduction of a new metaheuristic algorithm called the Subtraction-Average-Based Optimizer (SABO) for solving optimization problems in different sciences. The main contributions of this study are as follows:

• The basic idea behind the design of the SABO is the mathematical concept of the subtraction average of the algorithm's search agents.

• The steps of the SABO's implementation are described and its mathematical model is presented.

• The efficiency of the proposed SABO approach is evaluated on fifty-two standard benchmark functions.

• The quality of the SABO's results is compared with the performance of twelve well-known algorithms.

• To evaluate the capability of the SABO in handling real-world applications, the proposed approach is implemented for four engineering design problems.
Algorithm Initialization
The SABO is a population-based optimizer whose search space is modeled by the decision variables of the given problem. According to their position in the search space, algorithm searcher agents (i.e., population members) determine the values for the decision variables. Therefore, each search agent contains the information of the decision variables and is mathematically modeled using a vector. The set of search agents together forms the population of the algorithm. From a mathematical point of view, the population of the algorithm can be represented using a matrix, according to Equation (1). The primary positions of the search agents in the search space are randomly initialized using Equation (2).

X = [X_1, . . . , X_i, . . . , X_N]^T = [x_{i,d}]_{N×m}, (1)

x_{i,d} = lb_d + r_{i,d} · (ub_d − lb_d), i = 1, . . . , N, d = 1, . . . , m, (2)
where X is the SABO population matrix, X_i is the ith search agent (population member), x_{i,d} is its dth dimension in the search space (decision variable), N is the number of search agents, m is the number of decision variables, r_{i,d} is a random number in the interval [0, 1], and lb_d and ub_d are the lower and upper bounds of the dth decision variable, respectively. Each search agent is a candidate solution to the problem that suggests values for the decision variables. Therefore, the objective function of the problem can be evaluated based on each search agent. The evaluated values of the objective function can be represented using a vector F, according to Equation (3). Based on the values specified by each population member for the decision variables of the problem, the objective function is evaluated and stored in the vector F; therefore, the number of elements of the vector F is equal to the number of population members N.

F = [F_1, . . . , F_i, . . . , F_N]^T = [F(X_1), . . . , F(X_i), . . . , F(X_N)]^T, (3)

where F is the vector of the values of the objective function, and F_i is the evaluated value of the objective function based on the ith search agent.
The evaluated values for the objective function are a suitable criterion for analyzing the quality of the solutions that are proposed by the search agents. Therefore, the best value that is calculated for the objective function corresponds to the best search agent. Similarly, the worst value that is calculated for the objective function corresponds to the worst search agent. Considering that the position of the search agents in the search space is updated in each iteration, the process of identifying and saving the best search agent continues until the last iteration of the algorithm.
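The initialization phase described above can be sketched in a few lines of NumPy; the bounds, the values of N and m, and the sphere objective below are illustrative choices of ours, not values prescribed by the SABO.

```python
import numpy as np

rng = np.random.default_rng(42)
N, m = 5, 3                               # number of search agents, decision variables
lb, ub = np.full(m, -10.0), np.full(m, 10.0)

# Equation (2): x_{i,d} = lb_d + r_{i,d} * (ub_d - lb_d), with r_{i,d} in [0, 1];
# the rows of X form the N x m population matrix of Equation (1).
X = lb + rng.random((N, m)) * (ub - lb)

def sphere(x):                            # an example objective function
    return float(np.sum(x ** 2))

# Equation (3): the vector F of objective values, one entry per search agent.
F = np.array([sphere(x) for x in X])
best_agent = X[F.argmin()]                # best search agent of this population
```

The best (here, minimum) entry of F identifies the best search agent, which is the quantity tracked across iterations in the later sections.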

Mathematical Modelling of SABO
The basic inspiration for the design of the SABO is mathematical concepts such as averages, the differences in the positions of the search agents, and the sign of the difference of two values of the objective function. The idea of using the arithmetic mean location of all the search agents (i.e., the population members of the tth iteration), instead of just using, e.g., the location of the best or worst search agent, to update the position of all the search agents (i.e., the construction of all the population members of the (t + 1)th iteration) is not new, but the SABO's computation of the arithmetic mean is wholly unique, as it is based on a special operation "−_v", called the v-subtraction of the search agent B from the search agent A, which is defined as follows:

A −_v B = sign(F(A) − F(B)) · (A − v ∗ B), (4)

where v is a vector of dimension m whose components are random numbers generated from the set {1, 2}, the operation "∗" represents the Hadamard product of two vectors (i.e., each component of the resulting vector is formed by multiplying the corresponding components of the two given vectors), F(A) and F(B) are the values of the objective function of the search agents A and B, respectively, and sign is the signum function. It is worth noting that, due to the use of the random vector v with components from the set {1, 2} in the definition of the v-subtraction, the result of this operation can be any point of a subset of the search space with a cardinality of 2^{m+1}.
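The v-subtraction operation just defined can be sketched as a small helper; the function name and signature are our own, not part of the original formulation.

```python
import numpy as np

def v_subtraction(A, B, F_A, F_B, rng):
    """Return the v-subtraction of search agent B from search agent A:
    sign(F(A) - F(B)) * (A - v * B), with v drawn componentwise from {1, 2}."""
    v = rng.integers(1, 3, size=A.shape)      # random components from the set {1, 2}
    return np.sign(F_A - F_B) * (A - v * B)   # '*' acts as the Hadamard product here

rng = np.random.default_rng(0)
A, B = np.array([1.0, 2.0]), np.array([0.0, 0.0])
# With B = 0 the result reduces to sign(F(A) - F(B)) * A for any draw of v.
result = v_subtraction(A, B, F_A=5.0, F_B=1.0, rng=rng)
```

Because each of the m components of v takes one of two values and the sign factor contributes a further binary choice, the result ranges over the 2^{m+1} points mentioned in the text, which is what makes this operation a source of exploration.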
In the proposed SABO, the displacement of any search agent X_i in the search space is calculated by the arithmetic mean of the v-subtractions of each search agent X_j, j = 1, 2, . . . , N, from the search agent X_i. Thus, the new position for each search agent is calculated using Equation (5):

X_i^new = X_i + r_i ∗ (1/N) · Σ_{j=1}^{N} (X_i −_v X_j), (5)

where X_i^new is the new proposed position for the ith search agent X_i, N is the total number of search agents, and r_i is a vector of dimension m whose components have a normal distribution with values from the interval [0, 1].
Then, if this proposed new position improves the value of the objective function, it is accepted as the new position of the corresponding agent, according to Equation (6):

X_i = X_i^new if F_i^new < F_i, and X_i remains unchanged otherwise, (6)

where F_i and F_i^new are the objective function values of the search agents X_i and X_i^new, respectively.
Clearly, the v-subtraction X_i −_v X_j represents a vector χ_ij, and Equation (5) can be viewed as the motion equation of the search agent X_i, since the arithmetic mean of the vectors χ_ij determines the direction of the movement of the search agent X_i to its new position X_i^new. The search mechanism based on the arithmetic mean of the v-subtractions, presented in Equation (5), has the essential property of realizing both the exploration and exploitation phases while exploring the promising areas of the search space. The exploration phase, in particular, is driven by the operation of v-subtraction, whose random vector v allows the resulting displacement to point to any of the 2^{m+1} points discussed above.

Repetition Process, Pseudocode, and Flowchart of SABO
After updating all the search agents, the first iteration of the algorithm is completed. Then, based on the new values that have been evaluated for the positions of the search agents and objective function, the algorithm enters its next iteration. In each iteration, the best search agent is stored as the best candidate solution so far. This process of updating the search agents continues until the last iteration of the algorithm, based on (3) to (5). In the end, the best candidate solution that was stored during the iterations of the algorithm is presented as the solution to the problem. The implementation steps of the SABO are shown as a flowchart in Figure 2 and presented as a pseudocode in Algorithm 1.

Algorithm 1. Pseudocode of SABO.
Start SABO.
1. Input problem information: variables, objective function, and constraints.
2. Set SABO population size (N) and iterations (T).
3. Generate the initial search agents' matrix at random using Equation (2).
4. Evaluate the objective function.
5. For t = 1 to T
6.   For i = 1 to N
7.     Calculate new proposed position for ith SABO search agent using Equation (5).
8.     Update ith SABO search agent using Equation (6).
9.   end
10.  Save the best candidate solution so far.
11. end
12. Output best quasi-optimal solution obtained with the SABO.
End SABO.
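The full update loop can be sketched compactly in NumPy. This is a hedged reading of the text, not a reference implementation: we update the agents one at a time rather than simultaneously, clip proposals to the bounds, draw the components of r_i uniformly from [0, 1], and the function name and default parameters are our own choices.

```python
import numpy as np

def sabo(objective, lb, ub, n_agents=30, n_iter=200, seed=0):
    """A compact sketch of the SABO loop (Equations (2), (5), and (6),
    built on the v-subtraction operation)."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    m = lb.size
    # Equation (2): random initialization of the N x m population matrix.
    X = lb + rng.random((n_agents, m)) * (ub - lb)
    F = np.array([objective(x) for x in X])
    best_i = F.argmin()
    best_x, best_f = X[best_i].copy(), F[best_i]
    for _ in range(n_iter):
        for i in range(n_agents):
            # v-subtraction of every agent X_j from X_i, with v drawn
            # from {1, 2}^m independently for each pair (i, j).
            v = rng.integers(1, 3, size=(n_agents, m))
            chi = np.sign(F[i] - F)[:, None] * (X[i] - v * X)
            # Equation (5): move along the arithmetic mean of the v-subtractions.
            x_new = np.clip(X[i] + rng.random(m) * chi.mean(axis=0), lb, ub)
            f_new = objective(x_new)
            # Equation (6): accept the proposal only if it improves F_i.
            if f_new < F[i]:
                X[i], F[i] = x_new, f_new
                if f_new < best_f:
                    best_x, best_f = x_new.copy(), f_new
    return best_x, best_f
```

For example, `sabo(lambda x: float(np.sum(x ** 2)), [-5, -5], [5, 5])` drives a population toward the minimum of the sphere function; the greedy acceptance in Equation (6) guarantees that the best objective value never worsens across iterations.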

Computational Complexity of SABO
In this subsection, the computational complexity of the proposed SABO approach is evaluated. The initialization steps of the SABO for dealing with an optimization problem with m decision variables have a complexity that is equal to O(Nm), where N is the number of search agents. Furthermore, the process of updating these search agents has a complexity that is equal to O(NmT), where T is the total number of iterations of the algorithm. Therefore, the computational complexity of the SABO is equal to O(Nm(1 + T)).
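The evaluation count behind this complexity can be checked with a short instrumented skeleton: the objective is called N times at initialization and N times per iteration, i.e., N·(1 + T) times in total. The skeleton below only mimics the SABO's evaluation pattern (random proposals with the greedy acceptance of Equation (6)); it is not the full update rule.

```python
import random

N, T, m = 30, 100, 5
calls = 0

def objective(x):
    global calls
    calls += 1                                        # count every evaluation
    return sum(v * v for v in x)

population = [[random.uniform(-5, 5) for _ in range(m)] for _ in range(N)]
fitness = [objective(x) for x in population]          # N evaluations at initialization
for _ in range(T):                                    # T iterations
    for i in range(N):                                # one candidate per agent
        candidate = [v + random.gauss(0, 0.1) for v in population[i]]
        f = objective(candidate)                      # 1 evaluation per candidate
        if f < fitness[i]:                            # greedy acceptance, as in (6)
            population[i], fitness[i] = candidate, f
```

After the loop, `calls` equals N·(1 + T); multiplying by the per-evaluation and per-update cost in the m decision variables gives the stated O(Nm(1 + T)).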

Simulation Studies and Results
In this section, the effectiveness of the proposed SABO approach in solving optimization problems is challenged. For this purpose, a set of fifty-two standard benchmark functions is employed, consisting of seven unimodal functions (F1 to F7), six high-dimensional multimodal functions (F8 to F13), ten fixed-dimensional multimodal functions (F14 to F23), and twenty-nine functions from the CEC 2017 test suite [59]. To analyze the performance quality of the SABO in optimization tasks, the results that were obtained from the proposed approach have been compared with twelve well-known metaheuristic algorithms: GA, PSO, GSA, TLBO, GWO, MVO, WOA, MPA, TSA, RSA, WSO, and AVOA. The values of the control parameters of the competitor algorithms are specified in Table 1. The proposed SABO and each of the competitor algorithms are implemented for twenty independent runs on the benchmark functions, where each independent run includes 1000 iterations. The optimization results are reported using six indicators: the mean, best, worst, standard deviation (std), median, and rank. The ranking criterion of these metaheuristic algorithms is based on providing a better value for the mean index.

Evaluation of the Unimodal Functions
The unimodal objective functions, F1 to F7, due to their lack of local optima, are suitable options for analyzing the exploitation ability of the metaheuristic algorithms. The optimization results of the F1 to F7 functions, using the SABO and the competitor algorithms, are reported in Table 2. Based on the obtained results, the proposed SABO, with its high exploitation ability, converged to the global optimum when solving the F1, F2, F3, F4, and F6 functions. Additionally, the SABO is the best optimizer for the F5 and F7 functions. A comparison of the simulation results shows that the SABO, by obtaining the first rank overall, provided a superior performance in solving the unimodal problems F1 to F7 compared to the competitor algorithms.

Evaluation of the High-Dimensional Multimodal Functions
The high-dimensional multimodal objective functions, F8 to F13, due to their large number of local optima, are suitable options for evaluating the exploration ability of the metaheuristic algorithms. The results of implementing the SABO and its competitor algorithms on the functions F8 to F13 are reported in Table 3.
Based on the optimization results, the SABO, with its high exploration ability, obtained the global optimum for the F9 and F11 functions. The proposed SABO approach is the best optimizer for solving the functions F8, F10, F12, and F13. The analysis of the simulation results shows that the SABO provided a superior performance in handling the high-dimensional multimodal problems compared to its competitor algorithms.

Evaluation of the Fixed-Dimensional Multimodal Functions
The fixed-dimensional multimodal objective functions, F14 to F23, have fewer local optima than the functions F8 to F13. These functions are suitable options for evaluating the ability of the metaheuristic algorithms to create a balance between exploration and exploitation. The optimization results of the functions F14 to F23, using the SABO and its competitor algorithms, are presented in Table 4.
Based on the obtained results, the SABO is the best optimizer for the functions F15 and F21. In solving the other benchmark functions of this group, the SABO achieved results similar to those of some of its competitor algorithms with respect to the mean criterion. However, the proposed SABO algorithm performed better in solving these functions by providing better values for the std index. Furthermore, the analysis of the simulation results shows that, compared to the competitor algorithms, the SABO provided a superior performance by balancing exploration and exploitation in the optimization of the fixed-dimensional multimodal problems.
The performances of the proposed SABO approach and the competitor algorithms in solving the functions F1 to F23 are presented in the form of boxplot diagrams in Figure 3.

Evaluation of the CEC 2017 Test Suite
In this subsection, the efficiency of the SABO in solving the complex optimization problems from the CEC 2017 test suite is evaluated. The CEC 2017 test suite has thirty benchmark functions, consisting of three unimodal functions, C17-F1 to C17-F3, seven multimodal functions, C17-F4 to C17-F10, ten hybrid functions, C17-F11 to C17-F20, and ten composition functions, C17-F21 to C17-F30. The C17-F2 function was removed from this test suite due to its unstable behavior. The complete information on the CEC 2017 test suite is provided by [59]. The results of implementing the proposed SABO approach and its competitor algorithms on the CEC 2017 test suite are reported in Table 5.
Based on the obtained results, the SABO is the best optimizer for the functions C17-F1, C17-F3 to C17-F23, and C17-F25 to C17-F30. The analysis of the simulation results shows that the proposed SABO approach provided better results for most of the benchmark functions. Overall, by winning the first rank, it provided a superior performance in handling the CEC 2017 test suite compared to the competitor algorithms. The performances of the SABO and its competitor algorithms in solving the CEC 2017 test suite are plotted as boxplot diagrams in Figure 4.

Statistical Analysis
In this subsection, statistical analyses are presented for the results of the proposed SABO approach and its competing algorithms to determine whether the superiority of the SABO over the competing algorithms is significant from a statistical point of view. For this purpose, the Wilcoxon rank sum test [60] was used, which is a non-parametric statistical analysis that is used to determine the significant difference between the averages of two data samples. In this test, an index called the p-value is used to determine the significant difference. The results of implementing the Wilcoxon rank sum test on the performances of the SABO and the competitor algorithms are presented in Table 6. Based on the simulation results, in cases where the p-value was less than 0.05, the proposed SABO approach had a significant statistical superiority over the corresponding metaheuristic algorithm.
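The test used here is readily available in SciPy as `scipy.stats.ranksums`. In the sketch below, the two samples are synthetic stand-ins of our own for twenty per-run best objective values of the SABO and one competitor algorithm; they are not results from the paper.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
sabo_runs = rng.normal(loc=0.01, scale=0.005, size=20)   # hypothetical SABO results
rival_runs = rng.normal(loc=0.50, scale=0.050, size=20)  # hypothetical competitor results

# Wilcoxon rank sum test: small p-values indicate a significant difference
# between the central tendencies of the two samples.
stat, p_value = ranksums(sabo_runs, rival_runs)
significant = p_value < 0.05
```

With samples this clearly separated, `p_value` falls far below the 0.05 threshold used in the text, so the difference would be declared statistically significant.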

Advantages and Disadvantages of SABO
The proposed SABO approach is a metaheuristic algorithm that performs the optimization process based on the search power of its population through an iteration-based process. Among the advantages of the SABO is that, apart from the population size N and the maximum number of iterations T, which are common to all metaheuristic algorithms, it has no control parameters; for this reason, it does not need a parameter-tuning process. The simplicity of its equations, its easy implementation, and its simple concepts are other advantages of the SABO, as is its ability to balance exploration and exploitation during the search process in the problem solving space. Despite these advantages, the proposed approach also has several disadvantages. The SABO belongs to the group of stochastic techniques for solving optimization problems; for this reason, its first disadvantage is that there is no guarantee of it achieving the global optimum. Another disadvantage is that, based on the NFL theorem, it cannot be claimed that the proposed approach performs best in all optimization applications. Finally, there is always the possibility that newer metaheuristic algorithms will be designed that perform better than the proposed approach in handling some optimization tasks.

SABO for Real-World Applications
In this subsection, the capability of the proposed SABO approach in handling optimization tasks in real-world applications is challenged by four engineering design optimization problems.

Pressure Vessel Design Problem
The pressure vessel design is an optimization challenge with the aim of minimizing construction costs. The pressure vessel design schematic is shown in Figure 5.

The mathematical model of the pressure vessel design problem is given in [61]. The optimization results for the pressure vessel design, using the SABO and its competing algorithms, are reported in Tables 7 and 8.
Based on the obtained results, the SABO provided the optimal solution, with the values of the design variables being equal to (0.778027075, 0.384579186, 40.3122837, and 200) and the value of the objective function being equal to 5882.901334. The analysis of the simulation results shows that the SABO more effectively dealt with the pressure vessel design compared to its competing algorithms. The convergence curve of the SABO during the pressure vessel design optimization is drawn in Figure 6.

Speed Reducer Design Problem
The speed reducer design is a real-world application within engineering science with the aim of minimizing the weight of the speed reducer. The speed reducer design schematic is shown in Figure 7.

The mathematical model of the speed reducer design problem is given in [62,63].
The results of implementing the proposed SABO approach and its competing algorithms on the speed reducer design problem are presented in Tables 9 and 10.
Based on the obtained results, the SABO provided the optimal solution, with the values of the design variables being equal to (3.5, 0.7, 17, 7.3, 7.8, 3.350214666, and 5.28668323) and the value of the objective function being equal to 2996.348165. What can be concluded from the comparison of the simulation results is that the proposed SABO approach provided better results and a superior performance in dealing with the speed reducer design problem compared to the competing algorithms. The convergence curve of the SABO while achieving the optimal solution for the speed reducer design problem is drawn in Figure 8.

Welded Beam Design
The welded beam design is a real-world optimization challenge with the aim of minimizing its production costs. The welded beam design schematic is shown in Figure 9.

Based on the obtained results, the SABO provided the optimal solution, with the values of the design variables being equal to (0.20572964, 3.470488666, 9.03662391, and 0.20572964) and the value of the objective function being equal to 1.724852309. Comparing these optimization results indicates the superior performance of the SABO over the competing algorithms in optimizing the welded beam design. The SABO convergence curve while providing the solution for the welded beam design problem is drawn in Figure 10.

Tension/Compression Spring Design
The tension/compression spring design is an engineering challenge with the aim of minimizing the weight of the tension/compression spring. The tension/compression spring design schematic is shown in Figure 11.

The mathematical model of the tension/compression spring design problem is as follows [32]. Consider X = [x_1, x_2, x_3] = [d, D, P]. Minimize f(x) = (x_3 + 2) x_2 x_1^2, subject to the constraint functions given in [32], with 0.05 ≤ x_1 ≤ 2, 0.25 ≤ x_2 ≤ 1.3, and 2 ≤ x_3 ≤ 15.
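The spring objective f(x) = (x_3 + 2) x_2 x_1^2 is simple enough to check by hand against the design reported below; the variable names d, D, P follow the text's X = [x_1, x_2, x_3] = [d, D, P].

```python
# Evaluate the spring weight objective at the design reported in the text.
d, D, P = 0.051689061, 0.356717736, 11.28896595
weight = (P + 2) * D * d ** 2      # f(x) = (x3 + 2) * x2 * x1^2
print(round(weight, 9))            # close to the reported 0.012665233
```

This confirms that the reported objective value is consistent with the reported design variables under the stated model.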
The results of employing the SABO and the competing algorithms to handle the tension/compression spring design problem are presented in Tables 13 and 14. Based on the obtained results, the SABO provided the optimal solution, with the values of the design variables being equal to (0.051689061, 0.356717736, and 11.28896595) and the value of the objective function being equal to 0.012665233. What is evident from the analysis of the simulation results is that the SABO was more effective in optimizing the tension/compression spring design than the competing algorithms. The SABO convergence curve in reaching the optimal design for the tension/compression spring problem is drawn in Figure 12.

Conclusions and Future Works
In this paper, a new metaheuristic algorithm called the Subtraction-Average-Based Optimizer (SABO) was designed. The main idea of the design of the SABO was to use mathematical concepts and information on the average of the differences of the searcher agents to update the population of the algorithm. The mathematical modeling of the proposed SABO approach was presented for optimization applications. The SABO's ability to solve optimization problems was evaluated on fifty-two standard benchmark functions, including unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions, and the CEC 2017 test suite. The optimization results indicated the SABO's ability to create a balance between exploration and exploitation while scanning the search space to provide suitable solutions for the optimization problems. A total of twelve well-known metaheuristic algorithms were employed for comparison with the proposed SABO approach. Comparing the simulation results showed that the SABO performed better than its competitor algorithms, providing better results for most of the benchmark functions. The implementation of the proposed optimization method on four engineering design problems demonstrated the SABO's ability to handle optimization tasks in real-world applications.
With the introduction of the proposed SABO approach, several research avenues open up for further study. The design of binary and multi-objective versions of the SABO is one of the most notable research potentials of this study. Employing the SABO to solve optimization problems within various sciences and real-world applications is another suggestion for further studies.