1. Introduction
The term “optimization” refers to obtaining the best solution from among all available solutions to a problem [1]. Optimization appears widely in real-world problems: engineers aim to design a product with the best performance, traders seek to maximize the profits of their transactions, and investors try to minimize investment risk [2]. These types of problems must be modeled mathematically and then optimized using an appropriate method. Each optimization problem is composed of three parts: (a) decision variables, (b) constraints, and (c) objective functions, which can be modeled using Equations (1)–(4).
Minimize:
$$F(X), \quad X = (x_1, x_2, \ldots, x_m), \tag{1}$$
Subject to:
$$g_j(X) \le 0, \quad j = 1, 2, \ldots, p, \tag{2}$$
$$h_k(X) = 0, \quad k = 1, 2, \ldots, q, \tag{3}$$
$$lb_i \le x_i \le ub_i, \quad i = 1, 2, \ldots, m, \tag{4}$$
where $m$ is the number of problem variables, $X$ is the vector of problem variables, $F(X)$ is the value of the objective function for the problem variables, $g_j$ is the $j$th inequality constraint, $p$ is the total number of inequality constraints, $h_k$ is the $k$th equality constraint, $q$ is the total number of equality constraints, and $lb_i$ and $ub_i$ are the lower and upper bounds of the $i$th problem variable $x_i$, respectively.
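To make the general model (1)–(4) concrete, the following minimal MATLAB sketch shows one way such a problem could be encoded in code; the struct fields and the toy objective and constraints are illustrative assumptions, not part of the paper:

    % A hypothetical encoding of the model (1)-(4); field names are our choice.
    problem.m  = 2;                                % number of decision variables
    problem.lb = [-5 -5];                          % lower bounds lb_i
    problem.ub = [ 5  5];                          % upper bounds ub_i
    problem.F  = @(x) (x(1) - 1)^2 + (x(2) + 2)^2; % objective F(X) to minimize
    problem.g  = {@(x) x(1) + x(2) - 4};           % inequality constraints g_j(X) <= 0
    problem.h  = {@(x) x(1) - 2*x(2)};             % equality constraints h_k(X) = 0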
Problem-solving techniques in the study of optimization fall into two categories. The first consists of “exact algorithms,” which find optimal solutions to these problems and guarantee their optimality. The second consists of “approximate algorithms,” which are usually designed for optimization problems that exact methods are unable to solve [3]. In contrast to exact algorithms, approximate algorithms can generate solutions of appropriate quality for many optimization problems in a reasonable amount of time. However, the important issue with approximate algorithms is that there is no assurance that the problem’s global optimal solution will be found [4]. As a result, solutions derived from approximate approaches are referred to as quasi-optimal [5]. A quasi-optimal solution should be as near to the global optimum as feasible.
Random-based optimization algorithms are among the most extensively utilized approximate algorithms for solving optimization problems. By employing random operators and randomly scanning the problem’s search space, these algorithms can provide acceptable quasi-optimal solutions for objective functions [6]. The proximity of the offered quasi-optimal solution to the global optimum is the key criterion of one optimization algorithm’s superiority over another. Scholars have therefore created numerous optimization techniques with the goal of finding quasi-optimal solutions that are closer to the global optimum. These random-based optimization algorithms are also used to solve combinatorial optimization problems.
The main question that arises is whether there is still a need to design new optimizers, given that numerous optimization algorithms have already been produced. According to the No Free Lunch (NFL) theorem [7], even if an optimization method is very good at solving a certain set of optimization problems, there is no guarantee that it will be an effective optimizer for other problems. As a result, no specific method can be declared the best optimizer for all optimization challenges. The NFL theorem has prompted researchers to design new optimizers to handle optimization issues in a variety of fields [8]. This motivated the authors of this study to develop a novel optimization approach, one that is both effective and gradient-free, for optimizing real-world engineering problems.
The novelty and innovation of this paper are in developing a novel population-based optimization method called the One-to-One-Based Optimizer (OOBO) to handle diverse optimization problems. The main contributions of this paper are as follows:
- The key idea behind the suggested OOBO algorithm is the effective use of different members of the population, without relying on specific members, during the population-updating process.
- The theory of the suggested OOBO algorithm is discussed, and its mathematical model for solving optimization problems is offered.
- OOBO’s ability to provide appropriate solutions is evaluated on fifty-two distinct objective functions.
- The effectiveness of OOBO in real-world applications is tested on four engineering design problems.
- The performance of OOBO is compared with that of eight well-known algorithms to assess its quality and ability.
The proposed OOBO approach has advantages such as simple concepts, simple equations, and convenient implementation. Its main advantage is that it has no control parameters and therefore needs no parameter tuning (except, of course, for the population size N and the maximum number of iterations T, which are present in all population-based metaheuristic algorithms). In addition, the optimization process in the proposed OOBO ensures that each population member is employed exactly once to guide another member in each iteration of the algorithm. Therefore, all members participate in guiding the OOBO population.
The rest of the paper is organized as follows: a literature review is presented in Section 2. Section 3 introduces the suggested OOBO algorithm. Section 4 contains simulation studies and results. The evaluation of OOBO in optimizing four real-life problems is presented in Section 5. Finally, conclusions and several recommendations for further research are stated in Section 6.
2. Literature Review
Optimization algorithms are classified into five types, based on their primary design concepts: (a) swarm-based, (b) physics-based, (c) evolutionary-based, (d) human-based, and (e) game-based approaches.
Swarm-based optimization methods are inspired by natural phenomena and the behaviors of living organisms in nature. Particle swarm optimization (PSO) is among the oldest and most extensively used algorithms in this category and was designed based on the natural behaviors of fish and birds [9]. Ant colony optimization (ACO) is another swarm-based technique, focused on simulating the behavior of ants as they travel between nests and food sources and deposit pheromones along their paths; more pheromone on a path indicates that the path is closer to the food source [10]. The bat algorithm (BA) imitates the echolocation of bats in locating prey, obstacles, and nests [11]. Grey wolf optimization (GWO) is a nature-based technique that models the hierarchical structure of grey wolves’ social behavior during hunting [12]. Some of the other swarm-based optimization algorithms are green anaconda optimization (GAO) [13], the spotted hyena optimizer (SHO) [14], northern goshawk optimization (NGO) [15], the orca predation algorithm (OPA) [16], the artificial fish-swarm algorithm (AFSA) [17], the reptile search algorithm (RSA) [18], the firefly algorithm (FA) [19], the grasshopper optimization algorithm (GOA) [20], dolphin partner optimization (DPO) [21], the whale optimization algorithm (WOA) [22], the hunting search (HS) [23], moth–flame optimization (MFO) [24], the seagull optimization algorithm (SOA) [25], the subtraction-average-based optimizer (SABO) [26], the remora optimization algorithm (ROA) [27], the marine predators algorithm (MPA) [28], the artificial hummingbird algorithm (AHA) [29], red fox optimization (RFO) [30], the tunicate swarm algorithm (TSA) [31], the pelican optimization algorithm (POA) [32], the cat- and mouse-based optimizer (CMBO) [33], the selecting-some-variables-to-update-based algorithm (SSVUBA) [34], the good, the bad, and the ugly optimizer (GBUO) [35], the group mean-based optimizer (GMBO) [36], and the snake optimizer (SO) [37].
Physics-based optimization algorithms draw inspiration from physical phenomena and their governing laws. Simulated annealing (SA) is one of the methods in this group; it originates from the annealing process, in which a molten metal at very high temperature is cooled gradually [38]. The gravitational search algorithm (GSA) models the force of gravity and Newton’s laws of motion in an artificial system in which masses apply forces to each other at different distances and move according to these laws [39]. Some of the other physics-based optimization algorithms are the galaxy-based search algorithm (GbSA) [40], the small world optimization algorithm (SWOA) [41], Henry gas solubility optimization (HGSO) [42], central force optimization (CFO) [43], ray optimization (RO) [44], the flow regime algorithm (FRA) [45], curved space optimization (CSO) [46], the billiards-inspired optimization algorithm (BOA) [47], and nuclear reaction optimization (NRO) [48].
Evolutionary-based optimization algorithms are based on simulating biological evolution and the theory of natural selection. This category includes the genetic algorithm (GA), one of the earliest approximate optimizers. The GA was developed by modeling the reproductive process according to Darwin’s theory of evolution using three operators: (a) selection, (b) crossover, and (c) mutation [49]. Some of the other evolutionary-based optimization algorithms are the biogeography-based optimizer (BBO) [50], the memetic algorithm (MA) [51], evolutionary programming (EP) [52], the drawer algorithm (DA) [53], evolution strategy (ES) [54], differential evolution (DE) [55], and genetic programming (GP) [56].
Human-based optimization algorithms are developed by modeling human behavior. Teaching–learning-based optimization (TLBO) is among the most employed human-based algorithms and models the educational process between teacher and students in a classroom. In TLBO, the educational process is implemented in two phases: (a) a teaching phase, in which the teacher shares knowledge with the students, and (b) a learner phase, in which the students share knowledge with each other [57]. Some of the other human-based optimization algorithms are the mother optimization algorithm (MOA) [58], the exchange market algorithm (EMA) [59], the group counseling optimizer (GCO) [60], the teamwork optimization algorithm (TOA) [6], dual-population social group optimization (DPSGO) [61], and the election-based optimization algorithm (EBOA) [6].
Game-based optimization algorithms originate from the rules of various group or individual games. The volleyball premier league (VPL) algorithm models the interaction and competition among volleyball teams during a season and the coaching process during a match [62]. Some of the other game-based optimization algorithms are football game-based optimization (FGBO) [63], ring toss game-based optimization (RTGBO) [64], the golf optimization algorithm (GOA) [65], and shell game optimization (SGO) [66].
Some other recently proposed metaheuristic algorithms are monarch butterfly optimization (MBO) [67], the slime mold algorithm (SMA) [68], the moth search algorithm (MSA) [69], the Hunger Games search (HGS) [70], the Runge Kutta method (RUN) [71], the colony predation algorithm (CPA) [72], the weighted mean of vectors (INFO) [73], Harris hawks optimization (HHO) [74], and the Rime optimization algorithm (RIME) [75].
3. One-to-One-Based Optimizer
In this section, the proposed OOBO algorithm is described and its mathematical model is presented. OOBO is a population-based metaheuristic algorithm that provides effective solutions to optimization problems through an iteration-based process, using the search power of a population in the problem-solving space.
3.1. Basis of the Algorithm
The basis of OOBO is that, first, several feasible solutions are generated based on the constraints of the problem. Then, in each iteration, the positions of these solutions in the search space are updated employing the algorithm’s main idea. Excessive reliance on specific population members during the update process prevents a thorough scan of the problem’s search space and can lead the algorithm to converge on locally optimal areas. The main idea in designing the proposed OOBO algorithm is therefore the effective use of information from all population members in the population update process, while preventing over-reliance on specific members such as the best, worst, or mean members. Accordingly, the update process is designed so that (a) population updates do not rely on specific members; (b) all members are involved in the updating process; and (c) each population member is employed in a one-to-one correspondence to guide another member in the search space.
3.2. Algorithm Initialization
In the OOBO algorithm, each population member is a proposed solution to the given problem, specifying values for the decision variables according to its location in the search space. As a result, in OOBO, each population member is mathematically represented by a vector whose number of elements equals the number of decision variables. A population member can be represented using Equation (5):
$$X_i = (x_{i,1}, \ldots, x_{i,j}, \ldots, x_{i,m}), \quad i = 1, 2, \ldots, N. \tag{5}$$
To generate the initial population of OOBO, the population members are randomly positioned in the search space utilizing Equation (6):
$$x_{i,j} = lb_j + rand() \cdot (ub_j - lb_j), \quad i = 1, 2, \ldots, N, \quad j = 1, 2, \ldots, m, \tag{6}$$
where $X_i$ is the $i$th population member (that is, the proposed solution), $x_{i,j}$ is its $j$th dimension (that is, the proposed value for the $j$th variable), rand() is a function generating a uniform random number from the interval $[0, 1]$, and $N$ is the size of the population.
In OOBO, the algorithm population is represented using a matrix according to Equation (7):
$$X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix} = \begin{bmatrix} x_{1,1} & \cdots & x_{1,j} & \cdots & x_{1,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{i,1} & \cdots & x_{i,j} & \cdots & x_{i,m} \\ \vdots & \ddots & \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,j} & \cdots & x_{N,m} \end{bmatrix}_{N \times m}. \tag{7}$$
The optimization problem’s objective function can be assessed for each population member, as each is a proposed solution. Thus, in each iteration, as many objective function values are obtained as there are population members, which can be described mathematically by Equation (8):
$$F = \begin{bmatrix} F_1 \\ \vdots \\ F_i \\ \vdots \\ F_N \end{bmatrix} = \begin{bmatrix} F(X_1) \\ \vdots \\ F(X_i) \\ \vdots \\ F(X_N) \end{bmatrix}_{N \times 1}, \tag{8}$$
where $F$ is the objective function vector and $F_i$ is the objective function value for the $i$th proposed solution.
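As an illustration of Equations (5)–(8), the initialization and evaluation steps could be written in MATLAB as in the sketch below; the sphere objective and the bound values are placeholder assumptions, not settings from the paper:

    N = 30;  m = 5;                           % population size and number of variables
    lb = -10 * ones(1, m);                    % lower bounds lb_j
    ub =  10 * ones(1, m);                    % upper bounds ub_j
    F  = @(x) sum(x.^2);                      % placeholder objective (sphere function)

    X = lb + rand(N, m) .* (ub - lb);         % Equation (6): population matrix (7), one member per row
    Fvals = zeros(N, 1);                      % objective function vector (8)
    for i = 1:N
        Fvals(i) = F(X(i, :));                % F_i = F(X_i)
    end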
3.3. Mathematical Modeling of OOBO
At this stage of the mathematical modeling of the OOBO algorithm, the positions of the population members must be updated in the search space. The main difference between metaheuristic algorithms lies in how they update the positions of population members. In many metaheuristic algorithms, the population update process depends strongly on the best member. This may decrease the algorithm’s exploration ability for global search in the problem-solving space and lead to it getting stuck in local optima. In fact, moving the population towards the best member can cause convergence to inappropriate local solutions, especially in complex optimization problems. In the design of OOBO, by contrast, dependence of the population update process on the best member is avoided. Hence, by moving the population of the algorithm to different areas of the search space, the exploration power of OOBO is increased to provide a global search. The main idea of OOBO in this process is that all members of the population should participate in the population update. Therefore, each population member is selected only once, and at random, to guide a different member of the population in the search space. We can describe this idea mathematically using an $N$-tuple with the following properties: (a) each member is randomly selected from the positive integers from 1 to $N$; (b) there are no duplicate members; and (c) no member has a value equal to its position in this $N$-tuple.
To model this one-to-one correspondence, the member position numbers in the population matrix are used. The random process of forming the set $K$, “the set of the positions of guiding members,” is modeled by Equation (9):
$$K = (k_1, \ldots, k_i, \ldots, k_N), \quad K \in \{ P \in S : p_i \neq i, \ i = 1, 2, \ldots, N \}, \tag{9}$$
where $S$ is the set of all permutations of the set $\{1, 2, \ldots, N\}$ and $k_i$ is the $i$th element of the vector $K$.
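In code, the set $K$ is a random permutation of 1 to N with no fixed points (a derangement). A simple rejection-sampling sketch in MATLAB follows; the function name is ours, not from the paper’s code, and the loop assumes N >= 2, since no derangement exists for N = 1:

    function K = guidePositions(N)
    % Returns a random permutation K of 1:N with K(i) ~= i for all i (Equation (9)).
    % Rejection sampling: roughly a fraction 1/e of all permutations are
    % derangements, so on average fewer than three draws are needed.
        K = randperm(N);
        while any(K == 1:N)        % some member would guide itself: redraw
            K = randperm(N);
        end
    end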
In OOBO, to guide the $i$th member ($X_i$), the population member at position number $k_i$ ($X_{k_i}$) in the population matrix is selected. Based on the objective function values of these two members, if the status of member $X_{k_i}$ in the search space is better than that of member $X_i$, member $X_i$ moves towards member $X_{k_i}$; otherwise, it moves away from member $X_{k_i}$. Based on these concepts, the process of calculating the new status of the population members in the search space is modeled employing Equations (10) and (11):
$$x_{i,j}^{new} = \begin{cases} x_{i,j} + rand() \cdot (x_{k_i,j} - I \cdot x_{i,j}), & F_{k_i} < F_i, \\ x_{i,j} + rand() \cdot (x_{i,j} - x_{k_i,j}), & \text{otherwise}, \end{cases} \tag{10}$$
$$I = round(1 + rand()), \tag{11}$$
where $x_{i,j}^{new}$ is the new suggested status of the $i$th member in the $j$th dimension, $x_{k_i,j}$ is the $j$th dimension of the member selected to guide the $i$th member, $F_{k_i}$ is the objective function value obtained for $X_{k_i}$, and the variable $I$ takes values from the set $\{1, 2\}$.
The updating process of the population members in the proposed algorithm is such that the suggested new status of a member is acceptable only if it improves the value of the objective function; otherwise, the member stays in its previous position. This step of modeling OOBO is formulated as Equation (12):
$$X_i = \begin{cases} X_i^{new}, & F_i^{new} < F_i, \\ X_i, & \text{otherwise}, \end{cases} \tag{12}$$
where $X_i^{new}$ is the new suggested status in the search space for the $i$th population member and $F_i^{new}$ is its value of the objective function.
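Putting Equations (10)–(12) together, the update of one member might look as follows in MATLAB; the function wrapper and variable names are ours, matching the earlier sketches, and the clipping of the new position to the bounds is a common safeguard we add rather than a step stated in the equations:

    function [X, Fvals] = updateMember(i, X, Fvals, K, F, lb, ub)
    % Applies Equations (10)-(12) to the i-th member of the population X
    % (N-by-m), given objective values Fvals (N-by-1), guide positions K,
    % objective handle F, and row-vector bounds lb, ub.
        m = size(X, 2);
        k = K(i);                                 % position of the guiding member
        I = round(1 + rand);                      % Equation (11): I is 1 or 2
        if Fvals(k) < Fvals(i)                    % guide is better: move towards it
            Xnew = X(i,:) + rand(1, m) .* (X(k,:) - I * X(i,:));   % Eq. (10), first case
        else                                      % guide is worse: move away from it
            Xnew = X(i,:) + rand(1, m) .* (X(i,:) - X(k,:));       % Eq. (10), second case
        end
        Xnew = min(max(Xnew, lb), ub);            % safeguard: stay inside the bounds
        Fnew = F(Xnew);
        if Fnew < Fvals(i)                        % Equation (12): accept only improvements
            X(i,:) = Xnew;
            Fvals(i) = Fnew;
        end
    end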
3.4. Repetition Process, Pseudocode, and Flowchart of OOBO
At this stage of OOBO, after the positions of all population members have been updated in the search space, the algorithm completes one iteration and enters the next iteration based on the members’ new statuses. The updating of population members is repeated using Equations (9)–(12) until the algorithm reaches its stopping rule. After the algorithm has been fully executed on the given problem, OOBO provides the best solution found as a quasi-optimal solution. The implementation steps of OOBO are presented as pseudocode in Algorithm 1. The complete set of codes is available at the following repository:
https://www.mathworks.com/matlabcentral/fileexchange/135807-one-to-one-based-optimizer-oobo (accessed on 22 September 2023).
Algorithm 1. Pseudocode of OOBO.
Start OOBO.
1. Input optimization problem information.
2. Set N and T.
3. Create an initial population matrix.
4. Evaluate the objective function.
5. for t ← 1 to T do
6.   Update K based on Equation (9).
7.   for i ← 1 to N do
8.     Calculate X_i^new based on Equations (10) and (11).
9.     Compute F_i^new based on X_i^new.
10.    Update X_i using Equation (12).
11.  end for
12.  Save the best solution found so far.
13. end for
14. Output the best quasi-optimal solution.
End OOBO.
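For reference, the pseudocode above assembles into a compact MATLAB function along the following lines. This is our sketch, not the authors’ code (which is available at the File Exchange link above), and the bound clipping is again an added safeguard:

    function [bestX, bestF] = oobo(F, lb, ub, N, T)
    % Minimizes F over the box [lb, ub] with the OOBO scheme of Algorithm 1.
        m = numel(lb);
        X = lb + rand(N, m) .* (ub - lb);              % step 3: initial population
        Fvals = zeros(N, 1);
        for i = 1:N, Fvals(i) = F(X(i, :)); end        % step 4: evaluate objective
        for t = 1:T                                    % steps 5-13: main loop
            K = randperm(N);                           % step 6: Equation (9)
            while any(K == 1:N), K = randperm(N); end  % enforce k_i ~= i
            for i = 1:N                                % steps 7-11
                k = K(i);
                I = round(1 + rand);                   % Equation (11)
                if Fvals(k) < Fvals(i)
                    Xnew = X(i,:) + rand(1, m) .* (X(k,:) - I * X(i,:)); % Eq. (10)
                else
                    Xnew = X(i,:) + rand(1, m) .* (X(i,:) - X(k,:));
                end
                Xnew = min(max(Xnew, lb), ub);         % keep within bounds
                Fnew = F(Xnew);
                if Fnew < Fvals(i)                     % Equation (12)
                    X(i,:) = Xnew;  Fvals(i) = Fnew;
                end
            end
        end                                            % greedy acceptance means the final
        [bestF, idx] = min(Fvals);                     % population holds the best solution found
        bestX = X(idx, :);
    end

For example, a call such as [x, fx] = oobo(@(v) sum(v.^2), -100*ones(1,30), 100*ones(1,30), 20, 1000) minimizes the 30-dimensional sphere function.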
3.5. Computational Complexity of OOBO
Next, the computational complexity of the OOBO algorithm, including the time complexity and space complexity, is studied.
The time complexity of OOBO is determined by the initialization process, the calculation of the objective function, and the population updating, as follows:
- The algorithm initialization process requires $O(N \cdot m)$ time, where, as mentioned, N is the number of population members and m is the number of decision variables.
- In each iteration, the objective function is calculated for each population member. Therefore, calculating the objective function requires $O(N \cdot T)$ time, where T is the number of iterations of the algorithm.
- The updating of the population members requires $O(N \cdot m \cdot T)$ time.
Therefore, the total time complexity of the OOBO algorithm is $O(N \cdot m + N \cdot T + N \cdot m \cdot T)$, which can be simplified to $O(N \cdot m \cdot T)$. Competitor algorithms such as GA, PSO, GSA, GWO, WOA, TSA, and MPA have a time complexity equal to $O(N \cdot m \cdot T)$, and TLBO, which updates and evaluates the population twice per iteration (once in each of its two phases), has a time complexity equal to $O(2 \cdot N \cdot m \cdot T)$. Of course, since time complexity is usually expressed without constants and slower-growing terms, this expression simplifies to $O(N \cdot m \cdot T)$. Thus, the proposed OOBO approach has a time complexity similar to that of the seven competitor algorithms mentioned above. Compared to TLBO, the OOBO approach has a lower time complexity and better conditions from this perspective.
The space complexity of the OOBO algorithm is $O(N \cdot m)$, which is the maximum amount of space required during its initialization process. The competitor algorithms likewise have a space complexity of $O(N \cdot m)$; in this respect, there is no difference between OOBO and the competitor algorithms.
4. Simulation Studies and Results
In this section, OOBO’s ability to solve optimization problems and provide quasi-optimal solutions is evaluated. For this purpose, OOBO was tested on 52 objective functions, categorized into (a) seven unimodal functions, F1 to F7, (b) six high-dimensional multimodal functions, F8 to F13, and (c) ten fixed-dimensional multimodal test functions, F14 to F23, as well as twenty-nine functions from the CEC 2017 test suite (C17-F1, C17-F3 to C17-F30). Detailed information and a complete description of the benchmark functions F1 to F23 are provided in [76], and the CEC 2017 test suite is described in [77]. In addition, the performance of OOBO was evaluated on four real-world optimization problems.
4.1. Intuitive Analysis in Two-Dimensional Search Space
Next, to visually observe the optimization process of the OOBO approach, the algorithm was applied to ten objective functions, F1 to F10, in two dimensions. In this experiment, the number of OOBO population members was set equal to five. To show the mechanism of the OOBO algorithm in solving F1 to F10, convergence curves, search history curves, and trajectory curves are presented in Figure 1. The horizontal axis in the convergence and trajectory curves represents the number of iterations of the algorithm. These curves display OOBO’s behavior in scanning the problem search space, finding solutions, and converging, showing how it achieves better solutions based on the update process after each iteration as the objective function values decrease. The conclusion from this experiment is that the OOBO approach, by improving the initial candidate solutions as the iterations progress, can converge towards the optimal solution, providing acceptable quasi-optimal solutions for the given problem.
4.2. Experimental Setup
To further analyze the quality of OOBO, its results were compared with those of eight well-known optimization algorithms: PSO, TLBO, GWO, WOA, MPA, TSA, GSA, and GA. The reasons for choosing these competitors were as follows: GA and PSO are among the most famous and widely used optimization algorithms and have been employed in many applications; GSA, TLBO, and GWO are highly cited algorithms, which shows that researchers have consistently trusted and used them; and WOA, MPA, and TSA are recently published methods whose acceptable performance has made them popular in the short time since their publication. In summary, the eight competitor algorithms in this study were selected based on the following three criteria:
- (i)
The most widely used algorithms: GA and PSO.
- (ii)
Highly cited algorithms: GSA, TLBO, and GWO.
- (iii)
Recently published and widely used algorithms: WOA, MPA, and TSA.
The values used for the control parameters of these competitors are specified in Table 1. To provide a fair comparison, the standard versions of the metaheuristic algorithms were used. Experiments were implemented in MATLAB R2022a on a 64-bit Core i7 processor with 3.20 GHz and 16 GB of main memory.
4.3. Performance Comparison
The ability of OOBO was compared with that of the eight competitor algorithms on different objective functions of unimodal and multimodal types. Five indicators (mean, best, worst, standard deviation, and median) of the best solutions found were used to report the performance of the algorithms. For each objective function, OOBO was executed in 20 independent runs, each containing 1000 iterations. Convergence curves for each benchmark function were drawn based on the average performance of the metaheuristic algorithms over the 20 independent runs. Random optimization algorithms are stochastic approaches that provide a solution to a problem through an iterative process. An essential point in implementing optimization algorithms is determining the stopping rule for the algorithm’s iterations. There are various stopping rules (criteria) for optimization algorithms, including the total number of iterations, the total number of function evaluations, no change in the value of the objective function after a certain number of iterations, and a specified error level between the objective function values in several consecutive iterations. Among them, the total number of iterations has been the criterion most commonly employed by researchers. Hence, the present investigation considered the total number of iterations (T) as the stopping rule when solving functions F1 to F23 and the number of function evaluations (FEs) when solving the CEC 2017 test suite.
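This reporting protocol can be reproduced with a harness such as the following sketch, which reuses the oobo function drafted in Section 3; the population size N = 20 and the sphere test function are our assumptions for illustration:

    runs = 20;  T = 1000;  N = 20;                 % 20 independent runs of 1000 iterations
    best = zeros(runs, 1);
    for r = 1:runs
        [~, best(r)] = oobo(@(v) sum(v.^2), -100*ones(1, 30), 100*ones(1, 30), N, T);
    end
    % The five indicators reported in the tables:
    fprintf('mean %.4e | best %.4e | worst %.4e | std %.4e | median %.4e\n', ...
            mean(best), min(best), max(best), std(best), median(best));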
The first group of objective functions analyzed to assess the competence of OOBO comprised the seven unimodal functions. Table 2 reports the implementation results of OOBO and the eight competitors. The analysis of the simulation results makes clear that OOBO is the first-best optimizer for functions F1, F2, F3, F4, F5, F6, and F7 compared to the competitor algorithms. The comparison of the simulation results demonstrates that the proposed OOBO has a great capacity for solving unimodal problems and is far more competitive than the other eight algorithms.
The second set of objective functions, chosen to assess the efficacy of the optimization algorithms, consisted of the six high-dimensional multimodal objective functions, F8 to F13. Table 3 presents the outcomes of optimizing these objective functions using the proposed OOBO and the eight competitor techniques. Based on the simulation results, OOBO provides the optimal solution for F9 and F11 and is also the first-best optimizer for F8, F10, F12, and F13. Similarly, it was determined that OOBO has a more efficient ability than the competitor algorithms to provide suitable solutions for F8 to F13.
The ten fixed-dimensional multimodal functions, F14 to F23, were considered as the third group of objective functions to test the performance of the optimization techniques. Table 4 provides the outcomes of implementing the proposed OOBO and the eight competitor algorithms on F14 to F23. The simulation results reveal that OOBO outperforms the competitor algorithms on six of these functions. For the remaining four functions, although several algorithms perform equally well from the “mean” perspective, OOBO has a better “standard deviation,” providing more reliable solutions. The simulation results demonstrate that OOBO is more efficient than the competitor algorithms at solving this sort of objective function.
Figure 2 depicts boxplots of the performance of the optimization algorithms in solving objective functions F1 to F23. In addition, the convergence curves of the OOBO approach and all competitor algorithms for benchmark functions F1 to F23 are presented in Figure 3. The best score in the convergence curves refers to the best value obtained for the objective function up to each iteration; this index is updated in each iteration by comparison with its value in the previous iteration. The analysis of the convergence curves indicates that, when solving unimodal problems with objective functions F1 to F7, the proposed OOBO converges to much better solutions than its eight competitor algorithms and has superior performance. When solving high-dimensional multimodal problems based on F8 to F13, OOBO has greater convergence strength than its eight competitors. When solving fixed-dimensional multimodal problems using F14 to F23, the proposed OOBO approach has a faster convergence speed and greater convergence strength than the eight competitor algorithms.
4.4. Sensitivity Analysis
The proposed OOBO employs two parameters in its implementation, the number of population members (N) and the maximum number of iterations (T). The sensitivity of OOBO to these two parameters was assessed next. To investigate the sensitivity of the proposed method to the population-size parameter, OOBO was executed in independent runs with N = 10, 20, 30, and 100 on F1 to F23. The simulation results of this part of the study are reported in Table 5, and the behavior of the convergence curves under changes in population size is displayed in Figure 4. The simulation results show that the values of all objective functions decline as the population size increases. To investigate the proposed algorithm’s sensitivity to T, OOBO was executed in independent runs with T = 200, 500, 800, and 1000 for optimizing the functions F1 to F23. Table 6 and Figure 5 show the results of the sensitivity analysis of OOBO regarding T. The inference from this analysis is that the algorithm converges on better optimal solutions when it is run for a larger number of iterations.
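Both sensitivity experiments amount to re-running the algorithm while sweeping one parameter and fixing the other; a sketch of this loop is shown below, with the fixed counterpart values and the test function being our assumptions:

    sphere = @(v) sum(v.^2);  lb = -100*ones(1, 30);  ub = 100*ones(1, 30);
    for N = [10 20 30 100]                       % population-size sensitivity (T fixed)
        [~, fbest] = oobo(sphere, lb, ub, N, 1000);
        fprintf('N = %3d  ->  best F = %.4e\n', N, fbest);
    end
    for T = [200 500 800 1000]                   % iteration-count sensitivity (N fixed)
        [~, fbest] = oobo(sphere, lb, ub, 20, T);
        fprintf('T = %4d  ->  best F = %.4e\n', T, fbest);
    end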
4.5. Scalability Analysis
Next, a scalability study is presented to analyze the performance of OOBO in optimizing objective functions under changes in the problem dimension. For this purpose, OOBO was employed on F1 to F13 in different dimensions (30, 50, 80, 100, 250, and 500). The OOBO convergence curves for the various dimensions are presented in Figure 6, and the simulation results obtained from the scalability study are reported in Table 7. From the analysis of the results in this table, we can deduce that the efficiency of OOBO does not degrade much as the dimension of the given problem increases. OOBO’s performance under changes in the problem dimension is due to its ability to achieve a proper balance between exploration and exploitation.
4.6. Evaluation of the CEC 2017 Test Suite
Next, the performance of the proposed OOBO approach in handling the CEC 2017 test suite was evaluated. The CEC 2017 test suite has thirty benchmark functions, consisting of three unimodal functions (C17-F1 to C17-F3), seven multimodal functions (C17-F4 to C17-F10), ten hybrid functions (C17-F11 to C17-F20), and ten composition functions (C17-F21 to C17-F30). The function C17-F2 was removed from this test suite due to its unstable behavior (as in similar papers): especially in higher dimensions, it shows significant performance variations for the same algorithm implemented in MATLAB. Complete information on the CEC 2017 test suite is provided in [77]. The implementation results of OOBO and the competitor algorithms on the CEC 2017 test suite are reported in Table 8, and the boxplot diagrams obtained from the performance of the metaheuristic algorithms in handling this suite are drawn in Figure 7.
Based on the simulation results, OOBO is the first-best optimizer for the functions C17-F1, C17-F4 to C17-F6, C17-F8, C17-F10 to C17-F24, and C17-F26 to C17-F30. The analysis of the simulation results shows that the proposed OOBO approach provides superior performance in solving the CEC 2017 test suite, delivering better results than the competitor algorithms on most of the benchmark functions.
4.7. Statistical Analysis
Next, the Wilcoxon rank sum test [78] was utilized to evaluate the performance of the optimization algorithms, in addition to the statistical analysis of the average and standard deviation. The Wilcoxon rank sum test determines whether there is a statistically significant difference between two sets of data. The p-value in this test reveals whether the difference between OOBO and each of the competitor algorithms is statistically significant. Table 9 reports the results of this statistical analysis. Based on the analysis of the simulation results, the proposed OOBO has a p-value less than 0.05 for each of the three types of objective functions compared to each of the competitor algorithms. This result indicates that OOBO is statistically significantly different from the eight compared algorithms.
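In MATLAB, this test is available as ranksum in the Statistics and Machine Learning Toolbox; the sketch below compares the 20 best-run values of OOBO against those of one competitor, with the variable names being ours:

    % ooboBest and rivalBest: 20-by-1 vectors of best objective values per run.
    p = ranksum(ooboBest, rivalBest);    % two-sided Wilcoxon rank sum test
    if p < 0.05
        disp('The difference is statistically significant at the 5% level.');
    else
        disp('No statistically significant difference was detected.');
    end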
5. OOBO for Real-World Applications
In this section, the proposed OOBO and the eight competitor algorithms are applied to four science/engineering design problems to evaluate their capacity to solve real-world problems. These design problems concern a pressure vessel, a speed reducer, a welded beam, and a tension/compression spring.
5.1. Pressure Vessel Design Problem
The mathematical model of the pressure vessel design problem was adapted from [79]. The main goal of this problem is to minimize the design cost. A schematic view of the pressure vessel design problem is shown in Figure 8.
To formulate the model, consider that $X = (x_1, x_2, x_3, x_4) = (T_s, T_h, R, L)$; then, the mathematical program is given by
Minimize:
$$f(X) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3,$$
Subject to:
$$g_1(X) = -x_1 + 0.0193 x_3 \le 0, \qquad g_2(X) = -x_2 + 0.00954 x_3 \le 0,$$
$$g_3(X) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1296000 \le 0, \qquad g_4(X) = x_4 - 240 \le 0,$$
with $0 \le x_1, x_2 \le 100$ and $10 \le x_3, x_4 \le 200$.
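Since OOBO itself only handles box constraints, one common way to treat the four inequality constraints is a static penalty, as in the hedged MATLAB sketch below; the penalty weight and this handling choice are ours, not the paper’s:

    function f = pressureVesselCost(x)
    % Penalized objective for the pressure vessel problem; x = [Ts Th R L].
        cost = 0.6224*x(1)*x(3)*x(4) + 1.7781*x(2)*x(3)^2 ...
             + 3.1661*x(1)^2*x(4) + 19.84*x(1)^2*x(3);
        g = [-x(1) + 0.0193*x(3);                          % g1: shell thickness
             -x(2) + 0.00954*x(3);                         % g2: head thickness
             -pi*x(3)^2*x(4) - (4/3)*pi*x(3)^3 + 1296000;  % g3: minimum working volume
              x(4) - 240];                                 % g4: length limit
        f = cost + 1e6 * sum(max(g, 0).^2);                % static penalty (weight is our choice)
    end

A call such as oobo(@pressureVesselCost, [0 0 10 10], [100 100 200 200], 20, 1000) then searches the bounded design space; the other three design problems can be handled analogously.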
The solutions obtained using OOBO and the eight competitor algorithms are presented in Table 10. Based on the results in this table, OOBO found the optimal solution at (0.7781, 0.3832, 40.3150, 200), with an objective function value equal to 5870.8460. An analysis of the results in this table shows that the proposed OOBO performs well in solving the problem at a low cost. The statistical results of the performance of the optimization algorithms when solving this problem are presented in Table 11; OOBO provides better values for the best, mean, and median indices than the other eight compared algorithms. The convergence curve of the proposed OOBO while achieving the optimal solution is presented in Figure 9.
5.2. Speed Reducer Design Problem
The mathematical model of the speed reducer design problem was first formulated in [80]; here, we use an adapted formulation from [81]. The main goal of this problem is to minimize the weight of the speed reducer. A schematic view of the speed reducer design problem is shown in Figure 10.
To formulate the model, consider that $X = (x_1, x_2, x_3, x_4, x_5, x_6, x_7) = (b, m, p, l_1, l_2, d_1, d_2)$; then, the mathematical program is given by
Minimize:
$$f(X) = 0.7854 x_1 x_2^2 (3.3333 x_3^2 + 14.9334 x_3 - 43.0934) - 1.508 x_1 (x_6^2 + x_7^2) + 7.4777 (x_6^3 + x_7^3) + 0.7854 (x_4 x_6^2 + x_5 x_7^2),$$
Subject to:
$$g_1(X) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0, \qquad g_2(X) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0,$$
$$g_3(X) = \frac{1.93 x_4^3}{x_2 x_3 x_6^4} - 1 \le 0, \qquad g_4(X) = \frac{1.93 x_5^3}{x_2 x_3 x_7^4} - 1 \le 0,$$
$$g_5(X) = \frac{\sqrt{(745 x_4 / (x_2 x_3))^2 + 16.9 \times 10^6}}{110 x_6^3} - 1 \le 0, \qquad g_6(X) = \frac{\sqrt{(745 x_5 / (x_2 x_3))^2 + 157.5 \times 10^6}}{85 x_7^3} - 1 \le 0,$$
$$g_7(X) = \frac{x_2 x_3}{40} - 1 \le 0, \qquad g_8(X) = \frac{5 x_2}{x_1} - 1 \le 0, \qquad g_9(X) = \frac{x_1}{12 x_2} - 1 \le 0,$$
$$g_{10}(X) = \frac{1.5 x_6 + 1.9}{x_4} - 1 \le 0, \qquad g_{11}(X) = \frac{1.1 x_7 + 1.9}{x_5} - 1 \le 0,$$
with $2.6 \le x_1 \le 3.6$, $0.7 \le x_2 \le 0.8$, $17 \le x_3 \le 28$, $7.3 \le x_4 \le 8.3$, $7.8 \le x_5 \le 8.3$, $2.9 \le x_6 \le 3.9$, and $5 \le x_7 \le 5.5$.
The results of implementing the proposed OOBO and the eight compared algorithms on this problem are presented in Table 12. OOBO found the optimal solution at (3.5012, 0.7, 17, 7.3, 7.8, 3.33412, 5.26531), with an objective function value of 2989.8520. Table 13 presents the statistical results obtained from the proposed OOBO and the eight competitor algorithms. Based on the simulation results, OOBO outperforms all eight algorithms in solving the speed reducer design problem. The convergence curve of the proposed OOBO is presented in Figure 11.
5.3. Welded Beam Design
The mathematical model of the welded beam design problem was adapted from [22]. The main goal in solving this design problem is to minimize the fabrication cost of the welded beam. A schematic view of the welded beam design problem is shown in Figure 12.
To formulate the model, consider that $X = (x_1, x_2, x_3, x_4) = (h, l, t, b)$; then, the mathematical program is given by
Minimize:
$$f(X) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14.0 + x_2),$$
Subject to:
$$g_1(X) = \tau(X) - 13600 \le 0, \qquad g_2(X) = \sigma(X) - 30000 \le 0,$$
$$g_3(X) = x_1 - x_4 \le 0, \qquad g_4(X) = 0.10471 x_1^2 + 0.04811 x_3 x_4 (14.0 + x_2) - 5.0 \le 0,$$
$$g_5(X) = 0.125 - x_1 \le 0, \qquad g_6(X) = \delta(X) - 0.25 \le 0, \qquad g_7(X) = P - P_c(X) \le 0,$$
where
$$\tau(X) = \sqrt{(\tau')^2 + 2 \tau' \tau'' \frac{x_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2} x_1 x_2}, \quad \tau'' = \frac{MR}{J}, \quad M = P \left( L + \frac{x_2}{2} \right),$$
$$R = \sqrt{\frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2}, \quad J = 2 \left\{ \sqrt{2} x_1 x_2 \left[ \frac{x_2^2}{12} + \left( \frac{x_1 + x_3}{2} \right)^2 \right] \right\},$$
$$\sigma(X) = \frac{6PL}{x_4 x_3^2}, \quad \delta(X) = \frac{4PL^3}{E x_3^3 x_4}, \quad P_c(X) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left( 1 - \frac{x_3}{2L} \sqrt{\frac{E}{4G}} \right),$$
with $P = 6000$ lb, $L = 14$ in, $E = 30 \times 10^6$ psi, $G = 12 \times 10^6$ psi, $0.1 \le x_1, x_4 \le 2$, and $0.1 \le x_2, x_3 \le 10$.
The results of implementing the proposed OOBO and the compared algorithms on this problem are presented in Table 14. The simulation results show that the proposed algorithm found the optimal solution at (0.20328, 3.47115, 9.03500, 0.20116), with an objective function value of 1.72099. An analysis of the statistical results of the implemented algorithms is presented in Table 15; the proposed OOBO is superior to the compared algorithms in the best, mean, and median indices. The convergence curve of OOBO when solving the welded beam design problem is shown in Figure 13.
5.4. Tension/Compression Spring Design
The mathematical model of this problem was adapted from [22]. The main goal of this design problem is to minimize the weight of the tension/compression spring. A schematic view of the tension/compression spring design problem is shown in Figure 14.
To formulate the model, consider that $X = (x_1, x_2, x_3) = (d, D, N)$; then, the mathematical program is given by
Minimize:
$$f(X) = (x_3 + 2) x_2 x_1^2,$$
Subject to:
$$g_1(X) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0, \qquad g_2(X) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0,$$
$$g_3(X) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0, \qquad g_4(X) = \frac{x_1 + x_2}{1.5} - 1 \le 0,$$
with $0.05 \le x_1 \le 2$, $0.25 \le x_2 \le 1.3$, and $2 \le x_3 \le 15$.
The performance of all optimization algorithms in terms of the objective values and design variable values is presented in Table 16. The optimization results show that the proposed OOBO found the optimal solution at (0.05107, 0.34288, 12.08809), with an objective function value of 0.01266. A comparison of the results shows that OOBO outperforms the other eight algorithms in solving this problem. A comparison of the statistical results of the proposed OOBO against the eight competitor algorithms is provided in Table 17; OOBO offers a more competitive performance in the best, mean, and median indices. The convergence curve of the proposed OOBO while achieving the obtained optimal solution is shown in Figure 15.
6. Conclusions and Future Works
A new optimization technique called the One-to-One-Based Optimizer (OOBO) was proposed in this study. The main idea in designing OOBO was the participation of all population members in the algorithm’s updating process, based on a one-to-one correspondence between the set of population members and a set of selected members acting as guides. Thus, each population member was selected exactly once as a guide for another member and then used to update the position of that member. The performance of the proposed OOBO in solving optimization problems was tested on 52 objective functions of the unimodal, high-dimensional multimodal, and fixed-dimensional multimodal types, as well as on the CEC 2017 test suite. The findings indicated OOBO’s strong exploitation ability, based on the results for the unimodal functions; its strong exploration ability, based on the results for the high-dimensional multimodal functions; and its acceptable balance of exploitation and exploration, based on the results for the fixed-dimensional multimodal, hybrid, and composition functions.
In addition, the performance of the proposed approach in solving optimization problems was compared with that of eight well-known algorithms. The simulation results showed that the proposed algorithm provides quasi-optimal solutions with better convergence than the compared algorithms. Furthermore, the power of the proposed approach to provide suitable solutions for real-world applications was tested by applying it to four science/engineering design problems. The optimization results of this experiment make clear that the proposed OOBO is applicable to solving real-world optimization problems. In response to the main research question about the need to introduce a new optimization algorithm, the simulation findings showed that the proposed OOBO approach performed better than its competitors on most of the benchmark functions. This successful and acceptable performance justifies the introduction and design of the proposed approach.
Alongside advantages such as a strong ability to balance exploration and exploitation and effectiveness in handling real-world applications, the proposed OOBO approach has limitations and disadvantages. The first limitation, shared by all optimization algorithms, is that, based on the NFL theorem, there is always a possibility that newer algorithms will be designed that perform better than OOBO. A second limitation is that, due to the nature of random search, there is no guarantee that OOBO will achieve the global optimum. Another limitation is that, although OOBO performed successfully on the optimization problems studied in this paper, there is no guarantee that it will perform similarly in other optimization applications. Therefore, we in no way claim that OOBO is the best optimizer for all optimization applications.
Several directions for future research follow from the present study, including the design of multi-objective and binary versions of the proposed OOBO algorithm. Moreover, applying OOBO to NP-hard/NP-complete problems and to optimization problems in science, engineering, data mining, data clustering, sensor placement, big data, medicine, and feature selection offers additional potential for further studies based on this paper.