Light Spectrum Optimizer: A Novel Physics-Inspired Metaheuristic Optimization Algorithm

This paper introduces a novel physics-inspired metaheuristic algorithm called the "Light Spectrum Optimizer (LSO)" for continuous optimization problems. The proposed algorithm is inspired by the dispersion of light at different angles while passing through rain droplets, which causes the meteorological phenomenon of the colorful rainbow spectrum. To validate the proposed algorithm, three different experiments are conducted. First, LSO is tested on the CEC2005 benchmarks, and the obtained results are compared with a wide range of well-regarded metaheuristics. In the second experiment, LSO is used to solve four CEC single-objective optimization benchmark suites (CEC2014, CEC2017, CEC2020, and CEC2022), and its results are compared with eleven well-established and recently-published optimizers: swarm-based algorithms such as the grey wolf optimizer (GWO), whale optimization algorithm (WOA), and salp swarm algorithm (SSA); evolutionary algorithms such as differential evolution (DE); and recently-published optimizers including the gradient-based optimizer (GBO), artificial gorilla troops optimizer (GTO), Runge–Kutta optimizer (RUN), African vultures optimization algorithm (AVOA), equilibrium optimizer (EO), Reptile Search Algorithm (RSA), and slime mould algorithm (SMA). In addition, several engineering design problems are solved, and the results are compared with many algorithms from the literature. The experimental results, together with the statistical analysis, demonstrate the merits and highly competitive performance of the proposed LSO algorithm.


Introduction
The practical applications of metaheuristic algorithms have spread widely, especially in the last few years. The reason behind this is the speed, high-quality solutions, and problem-independent characteristics of metaheuristics [1][2][3][4][5]. Unfortunately, no metaheuristic can efficiently solve all types of optimization problems. Consequently, a significant number of metaheuristics have been proposed over time, aiming to find efficient metaheuristics that suit various types of optimization problems. Typically, a metaheuristic models the progress or movement behavior of a specific phenomenon or creature. By simulating such progress or movement, a metaheuristic can traverse the search space of a problem as if it were the environment of the simulated phenomenon or creature.
Metaheuristics depend on two search mechanisms while trying to find the best solution to a given problem. The first mechanism is exploration, which visits previously unvisited areas of the search space. The second mechanism is exploitation, which searches around the best solution found so far [6]. The main factor in any metaheuristic's success is balancing these two mechanisms. In particular, overemphasizing exploration slows convergence and may prevent the search from refining toward the global best solution, whereas overemphasizing exploitation may lead to trapping in local optima. In general, the searching mechanisms of metaheuristics stem from natural phenomena or the behavior of creatures.
These advantages are demonstrated through three different validation experiments that include several optimization problems with various characteristics. Moreover, LSO is compared with many other optimization algorithms, and the results are analyzed with the appropriate statistical tests. The experimental findings affirm the superiority of LSO compared to all the rival algorithms. Finally, the main contributions of this study are listed as follows:

• Proposing a novel physics-based metaheuristic algorithm called the light spectrum optimizer (LSO), inspired by the rainbow phenomenon caused by sunlight rays passing through rain droplets.

• Validating LSO using four challenging mathematical benchmark suites (CEC2014, CEC2017, CEC2020, and CEC2022), as well as several engineering design problems.

• The experimental findings, along with the Wilcoxon rank-sum test as a statistical test, illustrate the merits and superior performance of the proposed LSO algorithm.

The remainder of this work is organized as follows. Section 2 gives the background illustration of the inspiration and the mathematical modelling of the rainbow phenomenon. Section 3 explains the mathematical formulation and the searching procedure of LSO. In Section 4, various experiments are conducted on the CEC2005, CEC2014, CEC2017, CEC2020, and CEC2022 benchmarks, and their experimental results are analyzed with the proper statistical analysis. Additionally, the sensitivity of LSO is presented. In Section 5, popular engineering design problems are solved with LSO.

Background
The rainbow is one of the most fabulous meteorological wonders. From the physical perspective, it is a half-circle of spectral colors created by the dispersion and internal reflection of sunlight rays that hit spherical rain droplets [109]. When a white ray hits a water droplet, it changes direction by refracting and reflecting inside and outside the droplet (sometimes more than once) [110]. In other words, the rainbow is formed by the refraction, reflection, and dispersion of light rays through water droplets.
According to Descartes's laws [111,112], refraction occurs when light rays travel from one material to another with a different refractive index. When light rays hit the outer surface of a droplet, some of them reflect away from the droplet while the others are refracted. The refracted rays hit the inner surface of the droplet, causing another reflection, and finally refract away from the droplet at different angles, which disperses the white sunlight into its seven spectral colors: red, orange, yellow, green, blue, indigo, and violet, as depicted in Figure 2. These spectral colors differ according to their angles of deviation, which range from 40° (violet) to 42° (red) [113,114] (see Figure 2). Mathematically, the refraction and reflection of the rainbow spectrum are described by Snell's law, which states that the ratio between the sines of the incident and refracted angles equals the ratio between the refractive indices of air and water [115]:

sin θ1 / sin θ2 = k2 / k1

where θ1 is the incident angle, θ2 is the refracted angle, k2 is the refractive index of water, and k1 is the refractive index of air. In this work, Snell's law is used in its vector form. As shown in Figure 3, all normal, incident, refracted, and reflected rays are represented as vectors. With k = k2/k1, the refracted ray can be expressed as [116]:

L1 = (1/k)(L0 − (L0 · nA) nA) − nA sqrt(1 − (1/k²)(1 − (L0 · nA)²))

where L1 is the refracted light ray, k is the refractive index of the droplet, L0 is the incident light ray, and nA is the normal at the point of incidence. Meanwhile, the inner reflected ray can be formulated as:

L2 = L1 − 2 (L1 · nB) nB

where L2 is the inner reflected light ray and nB is the normal at the point of inner reflection. Finally, the outer refracted ray is expressed as:

L3 = k (L2 − (L2 · nC) nC) − nC sqrt(1 − k²(1 − (L2 · nC)²))

where L3 is the outer refracted light ray and nC is the normal at the point of outer refraction.
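As an illustration, the vector form of Snell's law and the reflection rule described above can be sketched in a few lines of Python. This is a hedged sketch under standard conventions (unit incident vector, unit normal oriented against the incident ray); the helper names `refract` and `reflect` are ours, not the paper's.

```python
import math

def refract(L0, n, k):
    """Vector form of Snell's law: refract unit vector L0 at a surface with
    unit normal n, for relative refractive index k = k2/k1.
    Returns None on total internal reflection."""
    cos_i = -sum(a * b for a, b in zip(L0, n))       # n assumed to face L0
    sin2_t = (1.0 / k**2) * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                                   # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple((1.0 / k) * a + ((1.0 / k) * cos_i - cos_t) * b
                 for a, b in zip(L0, n))

def reflect(L, n):
    """Mirror reflection: L - 2 (L . n) n."""
    d = sum(a * b for a, b in zip(L, n))
    return tuple(a - 2 * d * b for a, b in zip(L, n))
```

At normal incidence the refracted ray is unchanged, and the transverse component of an oblique refracted ray satisfies sin θ2 = sin θ1 / k, consistent with the scalar law above.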
where is the incident angle, is the refracted angle, is the refractive index of water, and is the refractive index of air. In this work, Snell's law is used in its vector form. As shown in Figure 3, all normal, incidents, refracted, and reflected rays are converted to vectors. The mathematical formulation of the refracted ray can be expressed as [116]: where is the refracted light ray, is the refractive index of the droplet, is the incident light ray, and is the normal line at the point of incidence. Meanwhile, the inner reflected ray can be formulated as: where is the inner reflected light ray and is the normal line at the point of inner reflection. Finally, the outer refracted ray is expressed as: where is the outer refracted light ray and is the normal line at the point of outer refraction.


Light Spectrum Optimizer (LSO)
As discussed before, the rainbow spectrum rays are caused by colorful light dispersion. The proposed algorithm takes its inspiration from this meteorological phenomenon. In particular, LSO is based on the following assumptions:

(1) Each colorful ray represents a candidate solution.
(2) The dispersion of light rays ranges from 40° to 42°, corresponding to a refractive index that varies between k_red = 1.331 and k_violet = 1.344.
(3) The population of light rays has a global best solution, which is the best dispersion reached so far.
(4) The refraction and reflection (inner or outer) are controlled randomly.
(5) The fitness of the current solution, compared with that of the best-so-far solution, controls the first and second scattering phases of a colorful rainbow curve. If the two fitness values are very close, the algorithm applies the first scattering phase to exploit the regions around the current solution, because it might be close to the near-optimal solution. Otherwise, the second phase is applied to help the proposed algorithm avoid getting stuck in the regions of the best-so-far solution, because it might be a local minimum.
Next, the detailed mathematical formulation of LSO will be discussed.

Initialization Step
The search process of LSO begins with the random initialization of the initial population of white lights:

x0 = lb + RV1 ⊗ (ub − lb)

where x0 is an initial solution, RV1 is a vector of uniform random numbers generated in [0, 1] with length equal to the given problem dimension (d), ⊗ denotes element-wise multiplication, and lb and ub are the lower and upper bounds of the search space, respectively. After that, the generated initial solutions are evaluated in order to determine the global and personal best solutions.
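The initialization step above amounts to uniform sampling within the box constraints. A minimal sketch (the function name is ours):

```python
import random

def initialize_population(n, d, lb, ub):
    """Uniform random initialization of n candidate solutions in d dimensions.
    lb and ub are per-dimension bound lists of length d, mirroring
    x0 = lb + RV1 * (ub - lb)."""
    return [[lb[j] + random.random() * (ub[j] - lb[j]) for j in range(d)]
            for _ in range(n)]
```

Each generated solution would then be evaluated once to seed the global and personal best solutions.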

Colorful Dispersion of Light Rays
In this subsection, we discuss the mathematical formulation of rainbow spectrum directions, colorful rays scattering, and the exploration and exploitation mechanisms of LSO.

The Direction of Rainbow Spectrums
After the initialization, the normal vectors of inner refraction x_nA, inner reflection x_nB, and outer refraction x_nC are calculated from a randomly selected solution x_r^t in the current population at iteration t, the current solution x_p^t, and the global best solution found so far x*, respectively. Here, norm(·) indicates the Euclidean norm of a vector, computed according to the following formula:

norm(x) = sqrt( Σ_{j=1}^{d} x_j² )

where d stands for the number of dimensions of the optimization problem, x is the input vector to be normalized, and x_j is the jth dimension of x. The incident light ray is calculated as follows:

x_L0 = X_mean / norm(X_mean)   (11)

where x_L0 is the incident light ray, X_mean is the mean of the current population of solutions x_i (i = 1, . . . , N), and N is the population size. Then, the vectors of the inner refracted, inner reflected, and outer refracted light rays, x_L1, x_L2, and x_L3, respectively, are calculated from the vector form of Snell's law. k_r stands for the refractive index, which is updated randomly between k_red and k_violet to define a random spectrum color:

k_r = k_red + RV1 × (k_violet − k_red)   (15)

where RV1 is a uniform random number generated between [0, 1]. In the illustrative table, the components of x* are generated randomly between −100 and 100, and the values of the inner refracted and inner reflected vectors are evident from it. The outer refracted vectors, however, cannot be employed alone to update individuals bounded between −100 and 100, because the change rate in the updated solutions would be very low; many function evaluations would then be consumed to reach better solutions. Therefore, the equations described in the next section are adapted to deal with this problem by strongly encouraging the exploration operator of the newly-proposed algorithm.
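The quantities defined above (normalized direction vectors, the population-mean incident ray of Eq. (11), and the random refractive index of Eq. (15)) can be sketched as follows. The exact composition of the three normal vectors is an assumption based on the prose (each is tied to the random solution, the global best, and the current solution, respectively); only `normalize`, the mean-based incident ray, and the k_r range are taken directly from the text.

```python
import math
import random

K_RED, K_VIOLET = 1.331, 1.344  # refractive-index range given in the text

def normalize(v):
    """Euclidean normalization: norm(x) = sqrt(sum_j x_j^2)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n > 0 else list(v)

def spectrum_directions(x_p, x_r, x_best, population):
    """Hedged sketch: direction vectors for one LSO update step.
    x_p: current solution; x_r: randomly selected solution; x_best: global
    best; population: list of all solutions."""
    n_a = normalize(x_r)     # assumption: normal built from a random solution
    n_b = normalize(x_best)  # assumption: normal built from the global best
    n_c = normalize(x_p)     # assumption: normal built from the current solution
    d = len(x_p)
    mean = [sum(ind[j] for ind in population) / len(population)
            for j in range(d)]
    x_l0 = normalize(mean)   # Eq. (11): incident ray from the population mean
    k_r = K_RED + random.random() * (K_VIOLET - K_RED)  # Eq. (15)
    return n_a, n_b, n_c, x_l0, k_r
```

These unit vectors would then feed the vector-form Snell relations of Section 2 to obtain x_L1, x_L2, and x_L3.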

Generating New Colorful Ray: Exploration Mechanism
After calculating the rays' directions, we generate candidate solutions according to the value of a randomly generated probability p between 0 and 1. In particular, if p is lower than a second number generated randomly between 0 and 1, the new candidate solution is calculated with Equation (16); otherwise, it is calculated with Equation (17). Here, x^{t+1} is the newly generated candidate solution and x^t is the current candidate solution at iteration t; r1, r2, r3, and r4 are the indices of four solutions selected randomly from the current population; RV^n_1 and RV^n_2 are vectors of uniform random numbers generated in [0, 1]; the scaling factor is calculated using (18); and GI is an adaptive control factor based on the inverse incomplete gamma function, computed according to (19). In (18), RV^n_3 is a vector of normally distributed random numbers with mean zero and standard deviation one, and a is an adaptive parameter that can be calculated using (20).

GI is an adaptive control factor. r is a uniform random number in [0, 1] that is inverted to promote the exploration operator throughout the optimization process, because inverting a random value in (0, 1) yields a number greater than 1, which might move the current solution to far-away regions of the search space in search of a better solution. P^{-1} is the inverse incomplete gamma function evaluated for the corresponding value of a.

In (20), t is the current iteration number, RV2 is a scalar uniform random number generated between [0, 1], and Tmax is the maximum number of iterations.

When its input is greater than 0.5, the inverse incomplete gamma function generates high values, starting from almost 0.8 and ending near 5.5, as described in Figure 4; otherwise, it generates decimal values down to 0. High inputs therefore encourage the exploration operator. However, very high values might push updated solutions outside the search boundary, in which case the algorithm could degenerate into a randomization process, because the boundary-checking method moves those infeasible solutions back into the search space. Therefore, the factor a, described in (20), is combined with the inverse incomplete gamma values to reduce their magnitude and avoid this randomization when the inputs are high. Both the inverse function and the factor a decrease gradually as the iterations proceed, so the optimization process gradually shifts from exploration to exploitation, which might lead to falling into local minima. To support exploration throughout the optimization process, the inverse of a number generated randomly between 0 and 1 is therefore combined with both the inverse function and the factor a, as defined in (19).
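Since the incomplete gamma function and its inverse drive the adaptive factor GI described above, a self-contained sketch may help. The series expansion of the regularized lower incomplete gamma function P(a, x) and a bisection-based inverse are standard; the composition in `gi_factor` (inverse gamma value amplified by 1/r) is an assumption based on the prose of (19), not a quotation of the paper's formula.

```python
import math
import random

def gamma_p(a, x, eps=1e-12):
    """Regularized lower incomplete gamma P(a, x) via its series expansion."""
    if x <= 0:
        return 0.0
    term = 1.0 / a
    total = term
    n = 0
    while abs(term) > eps * abs(total):
        n += 1
        term *= x / (a + n)
        total += term
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def gamma_p_inv(a, y, hi=200.0):
    """Inverse of P(a, .) by bisection; a simple stand-in for P^-1 in (19)."""
    lo_x, hi_x = 0.0, hi
    for _ in range(200):
        mid = 0.5 * (lo_x + hi_x)
        if gamma_p(a, mid) < y:
            lo_x = mid
        else:
            hi_x = mid
    return 0.5 * (lo_x + hi_x)

def gi_factor(a):
    """Hedged sketch of GI: the inverse incomplete gamma value for a random
    input, amplified by 1/r to promote exploration (assumed composition)."""
    r = random.random() or 1e-12
    return gamma_p_inv(a, random.random()) / r
```

For a = 1, P(1, x) = 1 − e^(−x), which makes the pair of functions easy to verify against a closed form.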

Colorful Rays Scattering: Exploitation Mechanism
This phase scatters the rays in the directions of the best-so-far solution, the current solution, and a solution selected randomly from the current population, to improve the exploitation operator. At the start, the algorithm scatters the rays around the current solution to exploit the region around it and reach better outcomes. However, this might reduce the convergence speed of LSO, so an additional step, applied with a predefined probability β, is integrated to move the current solution toward the best-so-far solution. The mathematical model of scattering around the current solution is given in Equation (21), where x* is the best-so-far solution, x_r1 and x_r2 are two solutions selected randomly from the current population, RV3 is a number selected randomly in the interval [0, 1], and RV^n_4 is a vector of numbers generated randomly between 0 and 1. The second scattering phase generates rays in a new position based on the best-so-far solution and the current solution according to Equation (22), where r1 is a randomly generated number in the interval [0, 1] and π is the ratio of a circle's perimeter to its diameter. Exchanging between the first and second scattering phases is achieved with a predefined probability P_e, as shown in Equation (23), where R is a number generated randomly between 0 and 1. The last scattering phase generates a new solution from a randomly selected solution and the current solution according to Equation (24), where RV5 is a scalar normally distributed random number with mean zero and standard deviation one, U is a vector of random 0/1 values, and | . . . | denotes the absolute value, which converts negative values into positive ones and passes positive values through.

Exchanging between Equations (23) and (24) is based on computing the difference between the fitness value of each solution and that of the best-so-far solution, normalized between 0 and 1 according to (25). If this difference is less than a threshold value R1 generated randomly between 0 and 1, (23) is applied; otherwise, (24) is applied. Our hypothesis is based on computing the probabilistic fitness value using (25) to determine how close the current light ray is to the best-so-far light ray. If the probabilistic fitness value of the ith light ray is smaller than R1, it is preferable to scatter this light ray in the direction of the best-so-far solution. The proposed algorithm adopts this hypothesis to maximize its performance on optimization problems that need a strong exploitation operator to accelerate convergence and save computational cost.

F' = (F − F_b) / (F_w − F_b)   (25)

where F, F_b, and F_w indicate the fitness values of the current solution, the best-so-far solution, and the worst solution, respectively. However, the probability of applying (23) when the value of F' is high is low. For example, Figure 5 tracks the values of F' for an agent and the random number R1 during the optimization of a test function; the F' values are greater than R1 for most of the optimization process, as shown by the red points, which far outnumber the blue points in the nine subgraphs of Figure 5, so the chance of firing the first and second scattering stages is very low when relying solely on the factor F'. Therefore, exchanging between (23) and (24) is also applied with a predefined probability P_s to further promote the first and second scattering stages and accelerate convergence toward the best-so-far solution. Finally, exchanging between these two equations is formulated in Equation (26), where R and R1 are numbers generated randomly between 0 and 1.
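The phase-selection rule described above can be sketched as follows. The normalization (F − F_b)/(F_w − F_b) follows the prose description of (25) (minimization assumed); the default value of `p_s` and the exact switching order are assumptions, not values quoted from the text.

```python
import random

def normalized_fitness(f, f_best, f_worst):
    """Probabilistic fitness F' = (F - F_b) / (F_w - F_b), reconstructed from
    the prose description of Eq. (25); minimization is assumed."""
    if f_worst == f_best:
        return 0.0
    return (f - f_best) / (f_worst - f_best)

def choose_scattering(f, f_best, f_worst, p_s=0.5):
    """Sketch of the switch around Eqs. (23)/(24): scatter toward the
    best-so-far solution when F' < R1, or anyway with probability p_s."""
    f_prime = normalized_fitness(f, f_best, f_worst)
    r1 = random.random()
    if f_prime < r1 or random.random() < p_s:
        return "scatter toward best-so-far"   # Eq. (23) branch
    return "scatter via random solution"      # Eq. (24) branch
```

A solution whose fitness is close to the best-so-far value yields a small F', so it is more likely to be scattered toward the best-so-far solution, which matches the exploitation rationale above.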

LSO Pseudocode
The pseudocode of the proposed algorithm is stated in Algorithm 1, and the same steps are depicted in Figure 6. Some solutions might leave the problem's search space, so they have to be returned into the search space to remain feasible. There are two common ways to repair such infeasible solutions: the first sets dimensions below the lower bound to the lower bound and dimensions above the upper bound to the upper bound, while the second generates new random values within the search boundaries for the dimensions that leave the search space. Our proposed algorithm hybridizes these two methods, using the first to improve the convergence rate and the second to improve exploration. This hybridization is controlled by a predefined probability P_h, which is estimated in the experiments section.

Figure 5. Tracing the values of F' versus R1 for an individual over 9 independent runs: the red points indicate that R1 < F', and the blue points indicate that R1 > F'.
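The hybrid boundary handling described above can be sketched as follows; the default value of `p_h` is an assumption (the text states only that P_h is estimated experimentally).

```python
import random

def repair(x, lb, ub, p_h=0.5):
    """Hybrid boundary handling: each out-of-bound dimension is either
    clipped to the violated bound (with probability p_h) or re-sampled
    uniformly inside the bounds."""
    y = list(x)
    for j in range(len(y)):
        if y[j] < lb[j] or y[j] > ub[j]:
            if random.random() < p_h:
                y[j] = min(max(y[j], lb[j]), ub[j])               # clipping
            else:
                y[j] = lb[j] + random.random() * (ub[j] - lb[j])  # re-sample
    return y
```

Clipping keeps the repaired solution near the boundary it violated (favoring convergence), while re-sampling relocates it randomly (favoring exploration), which is exactly the trade-off P_h controls.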


Algorithm 1: LSO Pseudo-Code
Input: population size of light rays N, maximum number of iterations Tmax
Output: the best light dispersion x* and its fitness
1   Generate an initial random population of light rays x_i (i = 1, 2, 3, . . . , N)
2   for each light ray
3       evaluate the fitness value
4   t = t + 1
5   keep the current global best x*
6   update the current solution if the updated solution is better
9   update the refractive index k_r
10  update a, the scaling factor, and GI
11  generate two random numbers p and q between 0 and 1
    %%%% Generating a new colorful ray: exploration phase
12  if p ≤ q
13      update the next light dispersion using Equation (16)


Searching Behavior and Complexity of LSO
In this section, we will discuss the searching schema of LSO and its computational complexity.
A. Searching behavior of LSO

As discussed before, LSO alternates between different methods of finding the next solution. In particular, x_nA is calculated from a randomly selected solution, which ensures exploration of the search space, while the calculations of x_nB and x_nC depend on the global and personal best solutions, respectively, which preserves exploitation of the search space. A further exploitation consolidation comes from the use of the inverse incomplete gamma function [117]. The regularized lower incomplete gamma function can be expressed as follows:

P(a, w) = (1/Γ(a)) ∫₀^w t^(a−1) e^(−t) dt

where w is a scaling factor that is greater than or equal to 0, and P^{-1}(a, ·) denotes its inverse. Figure 7 depicts LSO's exploration and exploitation operators to illustrate the behavior of LSO experimentally. This figure is plotted by displaying the search core of an individual during the exploration and exploitation phases for the first two dimensions (X1 and X2). Figure 7a, which depicts the exploitation operator, shows that this operator focuses its search on a specific region, often the best-so-far region, exploring the solutions around and inside it in the hope of reaching better solutions within a lower number of function evaluations. On the other side, Figure 7b pictures the exploration behavior of LSO and shows how far LSO can reach: the individuals try to visit different regions, far from the current one, within the search space in order to find the most promising region, which is then attacked using the exploitation operator discussed above.


B. Space and Time Complexity
(1) LSO Space Complexity. The space complexity of any metaheuristic can be defined as the maximum space required during the search process. The big-O space complexity of LSO can be stated as O(N × d), where N is the number of search agents and d is the dimension of the given optimization problem.
(2) LSO Time Complexity The time complexity of LSO is analyzed in this study using asymptotic analysis, which could analyze the performance of an algorithm based on the input size. Other than the input, all the other operations, like the exploration and exploitation operators, are considered constant. There are three asymptotic notations: big-O, omega, and theta, which are commonly used to analyze the running time complexity of an algorithm. The big-O notation is considered in this study to analyze the time complexity of LSO because it expresses the upper bound of the running time required by LSO for reaching the outcomes.
The time complexity of any metaheuristic depends on the required time for each step of the algorithm, like generating the initial population, updating candidate solutions, etc. Thus, the total time complexity is the sum of all such time measures. The time complexity of LSO results from three main algorithm steps: (1) Generation of the initial population.
The initialization step has a time complexity of O(N × d). The candidate-solution calculation has a time complexity of O(Tmax × N × d), which includes the evaluation of the generated solutions and the updating of the current best solution, where Tmax is the maximum number of search iterations. So, the total time complexity of LSO in big-O notation is O(Tmax × N × d), which is confirmed in detail in Table 3.

Table 3. Execution time of each line of the proposed algorithm according to the asymptotic analysis.

Difference between LSO, RO, and LRO
This section compares the proposed algorithm with two other metaheuristic algorithms inspired by light reflection and refraction, to demonstrate that LSO differs from them in inspiration, in the formulation of candidate solutions, and in the variation of the updating process, as illustrated in Table 4.

Table 4. Comparison between LSO, RO, and LRO.

Inspiration
- LSO: Simulates the movement and orientation of light in the rainbow meteorological phenomenon.
- RO: Simulates Snell's law of light refraction when light transfers from a lighter medium to a darker medium.
- LRO: Simulates the light's reflection and refraction.

Formulation
- LSO: Mainly depends on the vector representation of the rainbow and its dispersion in the sky.
- RO: Depends on the general Snell's law of light-ray transformation from a medium to a darker one, applied to ray tracing in 2-dimensional and 3-dimensional spaces.
- LRO: Updates candidate solutions by dividing the search space into grid cells and then considering these cells as reflection and refraction points.

Drawbacks relative to LSO
- RO: A weak exploration operator, unable to find the optimal solution for most fixed-dimensional multimodal optimization problems with many local minima, which necessitate a strong exploration operator; our proposed algorithm, in contrast, can solve all the fixed-dimensional multimodal problems. Its performance on some unimodal problems requiring a strong exploitation operator is also subpar.
- LRO: Its updating process is limited by its reliance solely on refraction and reflection, and its convergence is slower than LSO's due to the weakness of its exploitation operator.

Experimental Results and Discussions
In this section, we investigate the efficiency of LSO by different benchmarks, including CEC2005, CEC2014, CEC2017, CEC2020, and CEC2022. In addition, the sensitivity and scalability analyses of the proposed algorithm are introduced in this section.

Benchmarks and Compared Optimizers
We first validate the efficiency of LSO by solving 20 classical CEC2005 benchmarks selected from [118][119][120]. The selected benchmarks consist of three classes: unimodal, multimodal, and fixed-dimension multimodal. Both the unimodal and multimodal functions of CEC2005 are solved in 100 dimensions. Appendix A Tables A1-A7 show the characteristics of these three classes: the mathematical formulation of each benchmark, its dimension (D), the boundaries of the search space (B), and the global optimal solution (OS). Furthermore, the proposed algorithm is tested on more challenging benchmarks, namely CEC2014, CEC2017, CEC2020, and CEC2022, which are also described in Appendix A Tables A1-A7. The dimensions of these challenging benchmarks are set to 10. In addition, the Wilcoxon test [121] is performed with a 5% significance level to analyze the algorithms' performance during the 30 runs.
The experimental results of LSO are compared with highly-cited state-of-the-art optimization algorithms such as grey wolf optimizer (GWO) [122], whale optimization algorithm (WOA) [123], and salp swarm algorithm (SSA) [23]; evolutionary algorithms such as differential evolution (DE); and recently-published optimizers including gradient-based optimizer (GBO) [124], artificial gorilla troops optimizer (GTO) [27], Runge-Kutta method (RUN) beyond the metaphor [125], African vultures optimization algorithm (AVOA) [106], equilibrium optimizer (EO) [76], reptile search algorithm (RSA) [92], and slime mold algorithm (SMA) [40]. The comparisons are based on the standard deviation (SD), the average of the fitness values (Avr), and the rank. All the algorithms are coded in MATLAB 2019. All experiments are performed on a 64-bit operating system with a 2.60 GHz CPU and 32 GB of RAM. For a fair comparison, each algorithm is run 25 independent times, and the maximum number of function evaluations and the population size are 50,000 and 20, respectively (these parameters are held constant across all validated benchmarks in our experiments). The other algorithms' parameters are kept at their standard values. The used parameters are given in Table 5.
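As a concrete illustration of the decision rule used in these comparisons, the sketch below computes a two-sided Wilcoxon rank-sum p-value via the normal approximation and tests it against the 5% level. This is a dependency-free approximation (no tie-variance correction) and not the authors' code; in practice `scipy.stats.ranksums` provides the same test.

```python
import math

def ranksum_pvalue(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation), adequate
    for the sample sizes of 25-30 runs used in the comparisons."""
    n1, n2 = len(a), len(b)
    allv = sorted(a + b)
    ranks = {}                    # value -> average rank (ties averaged)
    i = 0
    while i < len(allv):
        j = i
        while j < len(allv) and allv[j] == allv[i]:
            j += 1
        ranks[allv[i]] = (i + 1 + j) / 2   # mean of ranks i+1 .. j
        i = j
    r1 = sum(ranks[v] for v in a)          # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (r1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
```

A p-value below 0.05 then indicates a statistically significant difference between the two samples of run results.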

Sensitivity Analysis of LSO
Extensive experiments have been conducted to perform a sensitivity analysis of the four controlling parameters of LSO: Pe, Ps, Ph, and β. For each of these parameters, experiments were run using several different values on two test functions, F57 and F58, and the obtained outcomes are depicted in Figure 8. This figure shows that the most effective values of the four parameters Pe, Ps, Ph, and β for the two observed test functions are 0.9, 0.05, 0.4, and 0.05, respectively.
The first investigated parameter is Ph (responsible for the tradeoff between two boundary-checking methods to improve the LSO's searchability), which is analyzed in Figure 8a,b using various randomly-picked values between 0 and 1.0. These figures show that LSO could reach the optimal value for the test function F58 when Ph = 0.4. Based on that, this value is assigned to Ph within the experiments conducted in this study.
For the parameter β (responsible for improving the convergence speed of LSO), Figure 8c,d depicts the performance of LSO under various randomly-picked values between 0 and 0.6 over the two test problems F57 and F58. According to this figure, on F57, the performance of LSO improves substantially as the value of this parameter increases up to 0.3, after which it deteriorates again. On F58, in contrast, the performance of LSO worsens as the value of this parameter increases. We therefore found that the value of β most broadly suitable for the test functions is 0.05, since LSO with this value could reach the optimal value for F58. It is worth mentioning that this parameter is responsible for accelerating the convergence of LSO toward the near-optimal solution in as few function evaluations as possible. Therefore, an additional experiment has been conducted to depict the convergence speed of LSO under various values of β over F58 (see Figure 8e).
Figure 8e further affirms that the best value for this parameter is 0.05. The third parameter is Pe, employed in LSO to switch between the first and second scattering phases. Figure 8f,g compare the influence of various values of this parameter on the test functions F57 and F58. According to these figures, the best value for this parameter is 0.9, since LSO with this value could reach 900 and 805.9 for F58 and F57, respectively. Regarding the parameter Ps, which is employed to further promote the first and second scattering stages and thereby improve the exploitation operator of LSO, Figure 8h,i report the influence of various values between 0 and 0.6 for this parameter. Inspecting these figures shows that LSO reaches its top performance on the two investigated test functions F57 and F58 when Ps has a value of 0.05.
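The one-at-a-time tuning procedure described above can be sketched as follows. `run_lso` is a placeholder standing in for a full LSO run that returns the best fitness found under a given parameter value; the toy objective is ours, constructed only so the sweep has a known minimizer near 0.05.

```python
import random

def sweep(run_lso, values, runs=5):
    """For each candidate parameter value, average the best fitness over
    several independent runs and return the best value plus the table."""
    results = {}
    for v in values:
        results[v] = sum(run_lso(v) for _ in range(runs)) / runs
    return min(results, key=results.get), results

# Toy stand-in objective: pretend fitness is minimized near beta = 0.05.
random.seed(1)
toy = lambda beta: (beta - 0.05) ** 2 + 0.001 * random.random()
best_beta, table = sweep(toy, [0.0, 0.05, 0.1, 0.3, 0.6])
```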

Evaluation of Exploitation and Exploration Operators
Unimodal benchmarks have only one global optimal solution. This feature allows testing and validating a metaheuristic's exploitation capabilities. Tables 6 and 7 show that LSO is competitive with the other comparators on F1, F2, and F3. On F3 and F5, LSO has inferior performance compared to some of the recently-published rival algorithms. In general, LSO proves that it has a competitive exploitation operator. Multimodal functions can effectively probe the exploration capability of metaheuristics, as they have many local optima. As observed in Table 6, LSO is able to reach the optimal solution for 13 benchmarks, especially the fixed-dimension ones, including F11-F20. In addition, LSO is competitive with the other comparators on F6-F8. To affirm the difference between the outcomes produced by LSO and those of the rival algorithms, the Wilcoxon rank-sum test is employed to compute the p-values; a p-value below 5% indicates a significant difference, and otherwise there is none. Table 7 introduces the p-values between LSO and each rival algorithm on the unimodal and multimodal test functions. These values clarify that there are differences between the outcomes of LSO and most rival algorithms on most test functions. NaN in this table indicates that the independent outcomes of LSO and the corresponding optimizer are identical. As a result, the results and discussion given herein confirm the soundness of LSO's exploration and exploitation capabilities.

LSO for Challengeable CEC2017
This section compares the performance of LSO and the other optimizers on the CEC2017 test suite to further validate LSO against the comparators on more challenging mathematical test functions [126]. CEC2017 is composed of four mathematical function families: unimodal (F51-F52), multimodal (F53-F59), composition (F60-F69), and hybrid (F70-F79). As previously described, unimodal test functions are preferable for evaluating the exploitation operator of optimization algorithms because they involve only one global best solution, and multimodal test functions contain multiple local optimal solutions, which makes them particularly well-suited for evaluating the exploration operator of newly proposed optimizers, while composition and hybrid test functions are designed to evaluate an optimization algorithm's ability to escape from local optima. The dimension of this benchmark is set to 10 within the experiments conducted in this section. Appendix A Tables A1-A7 contain the characteristics of the CEC2017 benchmark. Table 10 shows the Avr, SD, and Rank values of 25 independent findings obtained by the proposed and rival optimizers on this suite. According to this table, LSO comes in the first rank compared to all optimizers for unimodal, multimodal, composition, and hybrid test functions, since it could reach better Avr and SD for all test functions. Figures 11 and 12 display the average of the rank and standard deviation values presented in Table 10 for all test functions for each algorithm. According to these figures, LSO is the best since it occupies the 1st rank with a value of 1 and has the lowest standard deviation of 32, while RSA is the worst.
The Wilcoxon rank-sum test is used to determine the difference between the outcomes of LSO and those of each rival optimizer on the CEC2017 test functions. The Wilcoxon rank-sum test demonstrates a significant difference between the outcomes of LSO and the rival algorithms, as the p-values in Table 11 support the alternative hypothesis. Ultimately, LSO is a strong optimizer, as demonstrated by its ability to defeat GBO, RUN, GTO, AVOA, SMA, RSA, and EO, which are the most recently published optimizers, as well as four highly-cited metaheuristic algorithms: WOA, GWO, SSA, and DE.

LSO for Challengeable CEC2020
In this section, additional testing is carried out on the CEC2020 test suite to determine whether the proposed algorithm has stable performance on more challenging test functions. An algorithm's ability to explore, exploit, and escape local minima can be evaluated using this suite, which consists of ten test functions divided into four categories: unimodal, multimodal, hybrid, and compositional. The characteristics of this suite are shown in Appendix A Tables A1-A7. LSO has superior performance on all test functions in the CEC2020 test suite except F83, as evidenced by the Avr, rank, and SD values presented in Table 12 and obtained after analyzing the results of 25 independent runs. Figures 13 and 14 show the average of the rank values and the SD on all test functions of CEC2020. According to these figures, LSO is the best because it is ranked first with a value of 1.7 and has the lowest standard deviation of 38, whereas RSA is the worst because it is ranked last with a value of 12. Finally, the Wilcoxon rank-sum test is used to determine the difference between the results of LSO and those of each rival optimizer on this suite. The Wilcoxon rank-sum test results demonstrate a statistically significant difference between the outcomes of LSO and the rival algorithms for most test functions, as evidenced by the p-values in Table 13, which support the alternative hypothesis. This section thus provides additional experimental evidence that LSO belongs to the class of strong optimizers.

LSO for Challengeable CEC2022
The proposed algorithm and the other methods are tested again on the CEC2022 test suite. This suite contains 12 test functions divided into unimodal, multimodal, hybrid, and compositional categories. The properties of this test suite are also listed in Appendix A (Tables A1-A7), and its dimension in the conducted experiments is 10. Table 14 shows the Avr, rank, and SD over 25 independent runs, demonstrating LSO's superior performance on 9 out of the 12 test functions of the CEC2022 test suite. The averages of the rank values and standard deviations over all test functions of CEC2022 are depicted in Figures 15 and 16. These figures show that LSO is the best because it is ranked first with a value of 1.6 and has the lowest standard deviation of 12, whereas RSA is the worst because it is ranked last with a value of 12 and has the highest standard deviation. In the end, the Wilcoxon rank-sum test is used to determine whether there is a significant difference between the results of LSO and those of each rival optimizer on this suite of problems. For most test functions, the Wilcoxon rank-sum test results demonstrate a statistically significant difference between the outcomes of LSO and the rival algorithms, as demonstrated by the p-values in Table 15. This section further affirms that LSO belongs to the category of high-performing optimizers.

The Overall Effectiveness of the Proposed Algorithm
In the previous sections, LSO has been separately assessed using five mathematical benchmarks (CEC2005, CEC2014, CEC2017, CEC2020, and CEC2022) and compared with twenty-two well-established metaheuristic algorithms, but the overall performance of LSO over all benchmarks has yet to be elaborated. Therefore, this section compares the overall performance of LSO and the other algorithms over all the test functions of each benchmark and over all benchmarks together. The average rank values and SD values for each benchmark are computed and reported in Table 16. This table also indicates the overall effectiveness of the proposed algorithm and the rival algorithms using an additional metric known as overall effectiveness (OE), computed according to the following formula [127]:

OE_i = ((N - L_i) / N) × 100%

where N denotes the total number of test functions, and L_i denotes the number of test functions on which the i-th algorithm is a loser. Inspecting this table reveals that the proposed algorithm is superior in terms of SD, rank, and OE for four challenging benchmarks, and competitive regarding rank and superior regarding SD and OE for the remaining benchmark. The averages of the rank values, OE values, and SD values of each algorithm across all benchmarks are computed and reported in the last rows of Table 16 to measure the overall effectiveness of each algorithm across all benchmarks. According to those rows, LSO ranks first on all indicators, with a significant difference from the nearest well-performing algorithm. LSO's strong performance is due to the variation of its search process, which gives the algorithm strong exploration and exploitation operators during optimization, helping preserve population diversity, avoid getting stuck in local optima, and accelerate convergence toward the best-so-far solution.
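The OE metric described here, OE = (N - L_i)/N × 100, reduces to a one-line computation (the function name is ours, for illustration):

```python
def overall_effectiveness(n_functions, losses):
    """OE as described above: N test functions, `losses` of which the
    algorithm loses (is not the best performer on)."""
    return (n_functions - losses) / n_functions * 100.0
```

For example, an algorithm losing on 2 of 10 test functions has OE = 80%.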
It is worth noting that both the inverse incomplete gamma function and the inverse random number are capable of preserving population diversity and avoiding stagnation in local minima, owing to their ability to generate large numbers that help jump the current solution to far-away regions of the search space throughout the optimization process. On the other hand, the three different scattering stages provide variation in the exploitation operator of LSO for rapidly reaching the near-optimal solution on optimization problems of varying difficulty. Figure 17 compares the convergence rates of LSO and the competing algorithms to show how they differ in reaching the near-optimal solution in fewer function evaluations. This figure illustrates that all LSO convergence curves exhibit an accelerated decreasing pattern within the various stages of the optimization process for the four families of test functions (unimodal, multimodal, composition, and hybrid). The LSO optimizer converges significantly faster than any of the other competing algorithms, as shown by the convergence curves in Figure 17.
Exploration and exploitation operators of LSO can work together in harmony, which prevents stagnation in local minima and speeds up convergence in the right direction to the most promising regions.

Qualitative Analysis
The following metrics are used to evaluate the LSO performance during optimization: diversity, convergence curve, average fitness value, trajectory in the first dimension, and search history. The diversity metric shows how far apart an individual is on average from other individuals in the population; the convergence curve depicts the best-fitting value that was obtained within each iteration; the average fitness value represents the average of all individuals' fitness values throughout each iteration; the trajectory curve shows how a solution's first dimension changes over time as it progresses through the optimization process; and search history shows how a solution's position changed during the optimization process.
The diversity metric shown in the second column of Figure 18 is computed by summing, over every pair of solutions in the population, the mean absolute difference between their positions, according to the following formula:

Div = Σ_{i=1}^{N-1} Σ_{j=i+1}^{N} (1/d) Σ_{k=1}^{d} |x_{i,k} - x_{j,k}|

where d indicates the number of dimensions, N stands for the population size, and x_{i,k} and x_{j,k} indicate the kth dimension of the ith and jth solutions such that j > i. Observing Figure 18 shows that the diversity metric of LSO decreases over time, indicating that LSO's optimization process gradually shifts from exploration to exploitation. LSO initially explores most regions of the search space to avoid stagnation in local minima and then shifts gradually to the exploitation operator, reducing diversity quickly during the second half of the optimization process to accelerate convergence toward the most promising region discovered thus far. The LSO convergence curves show an accelerated decreasing pattern on a variety of test functions during the latter half of the optimization process, when population diversity is reduced and the exploratory phase has largely given way to the exploitative phase, as illustrated in the third column of Figure 18. At the beginning of the optimization process, LSO converges slowly to avoid becoming stuck in local minima; convergence then improves markedly in the second half of the optimization process.
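A minimal sketch of this diversity measure as we read the verbal description (the exact normalization used in the paper is not recoverable from the text, so this version sums the per-pair mean absolute per-dimension differences over all solution pairs):

```python
def diversity(pop):
    """Sum over all solution pairs of the mean absolute difference
    across dimensions (one plausible reading of the described metric)."""
    n, d = len(pop), len(pop[0])
    total = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            total += sum(abs(pop[i][k] - pop[j][k]) for k in range(d)) / d
    return total
```

A population collapsed onto a single point yields a diversity of 0, matching the shrinking curves in Figure 18's second column.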
The depiction of the average fitness history in Figure 18 shows that the average fitness of LSO's population decreases over time as all solutions focus on exploiting the regions around the best-so-far solution; as a result, the fitness values of all solutions nearly converge toward the same region, which contains the best-so-far solution. Figure 18 also depicts the trajectory of LSO's search for the optimal position in the first dimension as it gradually explores the search space. Because of the need to find a better solution in a short period of time, the exploratory approach is progressively replaced by an exploitative approach that restricts the scope of the search. As can be seen from LSO's trajectory curve, the optimization process begins with an exploratory trend before moving to an exploitative trend in search of better outcomes before coming to an end.
In the final column of Figure 18, the history of LSO positions is depicted. The search history is investigated by depicting the search course followed by LSO's solutions throughout the whole optimization process in the first two dimensions, X1 and X2, of an optimization problem; the same pattern is followed in the other dimensions. As can be seen in this column, LSO does not follow a consistent pattern for all test functions. Consider F21 as an example: to find an optimal solution for this problem, LSO first explores the entire search space before narrowing its focus to the range 0-50. The search history graph shows that LSO's performance is more dispersed for the multimodal and composition test functions, while for the unimodal test function it is more concentrated around the optimum points.
Figure 18. Diversity, convergence curve, average fitness history, trajectory, and search history.

Computational Cost
The average computational cost consumed by each algorithm on the investigated test functions is shown in Figure 19. The graph shows that the CPU time for all algorithms is nearly the same, except RSA and SMA, which take a long time, and WOA, which takes less than half the time required by the rest. LSO is thus far superior in terms of the convergence speed and the quality of the obtained outcomes, with a negligible difference in CPU time.


LSO for Engineering Design Problems
In this section, LSO is applied to solve three constrained engineering benchmarks, including the tension/compression spring design optimization problem, the welded beam design problem, and the pressure vessel design problem. The best values found by LSO during 25 runs are compared with those of many optimization algorithms. For the compared algorithms' parameter settings, all parameters are left at the defaults suggested by their authors. The parameters of LSO are kept as mentioned in Table 5, except for Ps, which substantially affects the performance of LSO depending on the nature of the solved problem. Therefore, an extensive experiment has been done under various values of this parameter, and the obtained outcomes are depicted in Figure 20. This figure shows that the performance of LSO is maximized when Ps = 0.6. In addition to all rival algorithms used in the previous comparisons, five recently-published metaheuristic algorithms, namely political optimizer (PO) [45], continuous-state cellular automata algorithm (CCAA) [128], snake optimizer (SO) [91], beluga whale optimization (BWO) [96], and driving training-based optimization (DTBO) [50], are added in the next experiments to further show the superiority of LSO when tackling real-world optimization problems such as engineering design problems. Additionally, LSO is compared to some of the state-of-the-art optimizers proposed for each constrained engineering benchmark according to the cited results.
Engineering design problems are characterized by many different constraints. To handle these constraints, we employ penalty-based constraint-handling techniques with LSO. There are several methods for handling the constraints of optimization problems based on the penalty function. In this work, we choose to implement the Death Penalty method (the rejection of infeasible solutions) [129], in which infeasible solutions are rejected and regenerated, so an infeasible solution is automatically omitted from the candidate solutions. The main advantage of the Death Penalty method is its simple implementation and low computational complexity.
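A minimal sketch of the death-penalty scheme under these semantics (a rejected candidate is simply regenerated until every constraint g(x) <= 0 is satisfied); the sampler, the toy constraint, and the `max_tries` safety cap are illustrative additions, not part of the original method description:

```python
import random

def generate_feasible(sample, constraints, max_tries=10000):
    """Draw candidates until one satisfies all constraints g(x) <= 0;
    infeasible candidates are rejected outright (death penalty)."""
    for _ in range(max_tries):
        x = sample()
        if all(g(x) <= 0 for g in constraints):
            return x
    raise RuntimeError("no feasible solution found")

# Toy usage: points in [0, 1]^2 restricted to the region x + y <= 1.
random.seed(0)
pt = generate_feasible(lambda: [random.random(), random.random()],
                       [lambda x: x[0] + x[1] - 1])
```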

Tension/Compression Spring Design Optimization Problem
The main objective of the tension/compression spring design optimization problem is to find the minimum volume f(X) of a coil spring under compression, subject to constraints on minimum deflection, shear stress, surge frequency, and limits on the outside diameter and the design variables (see Figure 21a). Mathematically, the problem can be formulated as [130]:

min f(X) = (x3 + 2) x2 x1^2,

subject to:
g1(X) = 1 − x2^3 x3 / (71785 x1^4) ≤ 0,
g2(X) = (4 x2^2 − x1 x2) / (12566 (x2 x1^3 − x1^4)) + 1 / (5108 x1^2) − 1 ≤ 0,
g3(X) = 1 − 140.45 x1 / (x2^2 x3) ≤ 0,
g4(X) = (x1 + x2) / 1.5 − 1 ≤ 0,
with 0.05 ≤ x1 ≤ 2, 0.25 ≤ x2 ≤ 1.3, 2 ≤ x3 ≤ 15,

where x1 is the wire diameter, x2 is the mean coil diameter, and x3 is the number of active coils.
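The objective and constraints of this standard formulation can be evaluated directly; the sketch below follows the formulation cited above [130], with the near-optimal design vector taken as an approximate value widely reported in the literature (function names are our own).

```python
import numpy as np

def spring_volume(x):
    """f(X) = (x3 + 2) * x2 * x1**2 : volume of the coil spring."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1**2

def spring_constraints(x):
    """Standard inequality constraints, each required to satisfy g_i(X) <= 0."""
    x1, x2, x3 = x
    return np.array([
        1.0 - x2**3 * x3 / (71785.0 * x1**4),              # minimum deflection
        (4.0 * x2**2 - x1 * x2) / (12566.0 * (x2 * x1**3 - x1**4))
            + 1.0 / (5108.0 * x1**2) - 1.0,                # shear stress
        1.0 - 140.45 * x1 / (x2**2 * x3),                  # surge frequency
        (x1 + x2) / 1.5 - 1.0,                             # outside diameter
    ])

# approximate best-known design from the literature
x_best = np.array([0.0517, 0.3577, 11.2445])
print(spring_volume(x_best))       # ≈ 0.0127
print(spring_constraints(x_best))  # near zero at the active constraints
```

At the optimum, the deflection and shear-stress constraints are active, so rounding the design vector can push those g_i values marginally above zero.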
The compared algorithms also include the optimization algorithm with social interaction (PSIGA) [138]. Table 17 shows the results obtained by LSO for the tension/compression spring design optimization problem. As shown, LSO is better able to reach minimum values than the other algorithms. Figure 21b shows the convergence speed of LSO.

Welded Beam Design Problem
The problem of designing welded beams can be defined as finding the feasible dimensions of a welded beam x1, x2, x3, and x4 (the thickness of the weld, the length of the clamped bar, the height of the bar, and the thickness of the bar, respectively) that minimize the total manufacturing cost f(X) subject to a set of constraints. Figure 22a shows a representation of the welded beam design problem. Mathematically, the problem can be formulated as [130]:

min f(X) = 1.10471 x1^2 x2 + 0.04811 x3 x4 (14.0 + x2),

subject to constraints on the shear stress τ(X), the bending stress σ(X), the end deflection δ(X), and the buckling load

Pc(X) = (4.013 E √(x3^2 x4^6 / 36) / L^2) (1 − (x3 / (2L)) √(E / (4G))),

where τ is the shear stress, σ is the bending stress, Pc is the buckling load, and δ is the end deflection, with P = 6000 lb, L = 14 in, E = 30 × 10^6 psi, and G = 12 × 10^6 psi. The proposed algorithm is compared with nine additional algorithms from the literature, including hybrid and improved ones: RO [67], WOA [41], HS [141], hybrid charged system search and particle swarm optimization (PSO) algorithms I (CSS&PSO I) [136] and II (CSS&PSO II) [136], the particle swarm optimization algorithm with struggle selection (PSOStr) [140], the firefly algorithm (FA) [142], differential evolution with dynamic stochastic selection (DE) [143], and the hybrid artificial immune system and genetic algorithm (AIS-GA). As shown in Table 18, on the one hand, LSO obtains results competitive with GBO, GTO, EO, and DE, while being superior in terms of convergence speed, as shown in Figure 22b. On the other hand, LSO is superior to all the other optimizers.
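The cost function and the stress, deflection, and buckling responses of this standard formulation can be sketched as below. The constants and formulas follow the classical welded beam model cited above [130]; the function names and the near-optimal design vector are illustrative assumptions.

```python
import numpy as np

# welded beam constants of the classical formulation
P, L, E, G = 6000.0, 14.0, 30e6, 12e6

def beam_cost(x):
    """Total manufacturing cost f(X)."""
    x1, x2, x3, x4 = x
    return 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def beam_responses(x):
    """Shear stress tau, bending stress sigma, end deflection delta,
    and buckling load Pc for a candidate design."""
    x1, x2, x3, x4 = x
    tau_p = P / (np.sqrt(2.0) * x1 * x2)                    # primary shear
    M = P * (L + x2 / 2.0)                                  # bending moment
    R = np.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0)**2)
    J = 2.0 * np.sqrt(2.0) * x1 * x2 * (x2**2 / 12.0 + ((x1 + x3) / 2.0)**2)
    tau_pp = M * R / J                                      # torsional shear
    tau = np.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * x2 / (2.0 * R) + tau_pp**2)
    sigma = 6.0 * P * L / (x4 * x3**2)
    delta = 4.0 * P * L**3 / (E * x3**3 * x4)
    pc = (4.013 * E * np.sqrt(x3**2 * x4**6 / 36.0) / L**2) \
         * (1.0 - (x3 / (2.0 * L)) * np.sqrt(E / (4.0 * G)))
    return tau, sigma, delta, pc

# near-optimal design widely reported in the literature (approximate)
x = np.array([0.2057, 3.4705, 9.0366, 0.2057])
print(beam_cost(x))   # ≈ 1.725
```

The feasibility constraints then read tau <= 13,600 psi, sigma <= 30,000 psi, delta <= 0.25 in, and Pc >= P; at the optimum several of these are active, so rounded design vectors sit right on their boundaries.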


Conclusions
In this work, a novel LSO metaheuristic algorithm is introduced that is inspired by sunlight dispersion through a water droplet, which causes the rainbow phenomenon. The proposed algorithm is tested on several selected benchmarks. For the CEC2005 benchmarks, LSO performs significantly well, especially on fixed-dimensional multimodal functions, indicating that LSO has strong exploration capabilities. In addition, the sensitivity analysis of the LSO parameters shows that the selected parameter values are the best. Finally, for CEC2020, CEC2017, CEC2022, and CEC2014, LSO achieves superior performance compared to several well-established and recently published metaheuristic algorithms such as DE, WOA, SMA, EO, GWO, GTO, GBO, RSA, SSA, RUN, and AVOA, which were selected for our comparison due to their stability and recency compared to other optimization algorithms such as MBO, EWA, EHO, MS, HGS, CPA, and HHO proposed for tackling the same benchmarks. This indicates that LSO achieves a good balance between exploration and exploitation. LSO also yields competitive results on engineering design problems compared to other algorithms, including improved and hybrid metaheuristics. For future work, we suggest developing binary and multi-objective versions of LSO. In addition, several enhancements can be proposed for LSO, such as using fuzzy controllers or chaotic maps to define the LSO controlling probabilities, and hybridization with other algorithms. Finally, we suggest applying LSO to recent real-life optimization problems such as sensor allocation, smart management of the power grid, and smart routing of vehicles.

Nomenclature
Nomenclature of symbols used in this study:

θ: angle of reflection or refraction
k ∈ R: refractive index of a medium
L_i (i = 0, . . . , 3): the i-th refracted or reflected light ray
n_s: normal line at a point s
p ∈ [0, 1]: controlling probability of inner and outer reflection and refraction
q ∈ [0, 1]: controlling probability of the first scattering phase
z ∈ [0, 1]: controlling probability of the second scattering phase
t ∈ N: iteration number
x→_0 ∈ R: initial candidate solution
N ∈ N: population size
d ∈ N: problem dimension
lb ∈ R: lower bound of the search space
ub ∈ R: upper bound of the search space
RV ∈ [0, 1]: vector of uniform random numbers
x→_t ∈ R: candidate solution at iteration t
w ∈ [0, ∞]: scaling factor
GI ∈ R: scaling factor
∈ R: scaling factor
ginv: inverse incomplete gamma function

Appendix A

Table A1. Description of uni-modal benchmark functions.