Article

LSEWOA: An Enhanced Whale Optimization Algorithm with Multi-Strategy for Numerical and Engineering Design Optimization Problems

by Junhao Wei 1, Yanzhao Gu 1, Yuzheng Yan 1, Zikun Li 2, Baili Lu 3, Shirou Pan 3 and Ngai Cheong 1,*
1 Faculty of Applied Sciences, Macao Polytechnic University, Macao 999078, China
2 School of Economics and Management, South China Normal University, Guangzhou 510006, China
3 College of Animal Science and Technology, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(7), 2054; https://doi.org/10.3390/s25072054
Submission received: 8 February 2025 / Revised: 24 February 2025 / Accepted: 28 February 2025 / Published: 25 March 2025
(This article belongs to the Section Intelligent Sensors)

Abstract: The Whale Optimization Algorithm (WOA) is a bio-inspired metaheuristic algorithm known for its simple structure and ease of implementation. However, WOA suffers from premature convergence, low population diversity in the later stages of iteration, a slow convergence rate, low convergence accuracy, and an imbalance between exploration and exploitation. In this paper, we propose an enhanced whale optimization algorithm with multi-strategy (LSEWOA). LSEWOA employs Good Nodes Set Initialization to generate uniformly distributed whale individuals, a newly designed Leader-Followers Search-for-Prey Strategy, a Spiral-based Encircling Prey Strategy inspired by the concept of Spiral flight, and an Enhanced Spiral Updating Strategy. Additionally, we redesigned the update mechanism for the convergence factor a to better balance exploration and exploitation. The effectiveness of the proposed LSEWOA was evaluated on CEC2005, and the impact of each improvement strategy was analyzed. We also performed a quantitative analysis of LSEWOA and compared it with other state-of-the-art metaheuristic algorithms in 30/50/100 dimensions. Finally, we applied LSEWOA to nine engineering design optimization problems to verify its capability to solve real-world optimization challenges. Experimental results demonstrate that LSEWOA outperformed the other algorithms and successfully addressed the shortcomings of the classic WOA.

1. Introduction

Metaheuristic algorithms combine randomized and local search techniques to solve a wide range of complex optimization problems. In recent decades they have been extensively developed, studied, and applied. Because many real-world problems are complex and diverse, exact algorithms often struggle to find optimal solutions within a reasonable time frame. Metaheuristic algorithms build on heuristic methods: their core idea is to progressively approximate the optimal solution by searching through the problem space. They can provide feasible solutions within acceptable computational time and space, although the degree of deviation from the optimal solution may not be predictable in advance. Their main advantage is the ability to handle complex, nonlinear problems without requiring assumptions about the specific models of different problems. A metaheuristic typically consists of two phases: a global exploration phase and a local exploitation phase. Since the optimal solution may exist at any location within the search space, the exploration phase aims to cover the entire search space as thoroughly as possible. The exploitation phase focuses on utilizing effective information: in most cases, superior solutions are correlated, and these correlations are used to progressively adjust toward better solutions. Compared with other numerical optimizers, metaheuristic algorithms integrate global and local search strategies and control the balance between them, which increases the probability of finding the global optimum in complex problems. They also introduce randomness into the search process to avoid being trapped in local optima, enhancing their ability to find better solutions. Because they do not require problem-specific knowledge and typically do not rely on the specific nature or structure of the problem, they are highly versatile and capable of handling a variety of optimization problems. Moreover, metaheuristic algorithms are adaptive, capable of adjusting parameters and strategies in real time based on the progress of the optimization process.
Given the many advantages of metaheuristic algorithms in numerical optimization, they have received increasing attention, and more and more researchers are developing improved metaheuristic algorithms to better solve complex optimization problems. Some classic optimization algorithms are shown in Table 1 and Table 2. The Simulated Annealing (SA) algorithm, rooted in the Metropolis algorithm of 1953, maps a class of optimization problems onto the thermal equilibrium problem of statistical thermodynamics and seeks the global optimal solution, or an approximation of it, by simulating the annealing of high-temperature materials [1]. The Genetic Algorithm (GA), first introduced by John Holland in 1975, is based on Darwin’s theory of evolution and Mendel’s genetics. It uses processes such as reproduction, mutation, and competition among individuals in a population to exchange information and perform natural selection, progressively approaching the optimal solution to the problem [2]. The Ant Colony Optimization (ACO) algorithm, proposed by Dorigo et al. in 1991, is a stochastic search algorithm that simulates the foraging behavior of real ants in nature [3]. In 1995, the American psychologist Kennedy and electrical engineer Eberhart, inspired by the foraging behavior of birds, proposed the Particle Swarm Optimization (PSO) algorithm [4]. Storn and Price proposed the Differential Evolution (DE) algorithm for real-valued function optimization in 1997 [5]. In 2005, Karaboga et al., inspired by honeybee foraging behavior, proposed the Artificial Bee Colony (ABC) algorithm, which simulates the intelligent foraging behavior of bee colonies and likens the global optimal solution to the nectar-richest flower source [6]. Heidari et al., inspired by the hunting behavior of Harris hawks, proposed the Harris Hawks Optimization (HHO) algorithm in 2019 [7]. In 2021, Abualigah et al. proposed the Aquila Optimizer (AO), inspired by the hunting behavior of Aquila eagles in nature [8]. In 2022, Zhong et al., inspired by the swimming, foraging, and whale-fall phenomena of beluga whales in nature, proposed the Beluga Whale Optimization algorithm (BWO) [9]. In 2023, Jia et al., inspired by the social behaviors (foraging, cooling, and competitive behaviors) of crayfish in nature, proposed the Crayfish Optimization Algorithm (COA) [10]. Owing to their excellent optimization capabilities and versatility, metaheuristic algorithms are widely applied in various fields, such as Advanced Planning and Scheduling (APS) [11], engineering design optimization [12], feature selection [13], urban site selection [14], path planning [15], the traveling salesman problem [16], and antenna design optimization [17].
However, metaheuristic algorithms also have certain limitations. These algorithms often face challenges such as the difficulty in balancing exploration and exploitation, difficulties in parameter selection, poor population quality in the later stages of iteration, and the tendency to become trapped in local optima. Particle Swarm Optimization (PSO) faces the risk of premature convergence when dealing with complex optimization problems, often leading to early convergence to local optima, halting further exploration of better solutions. The Harris Hawks Optimization (HHO) algorithm is known for its strong exploration and exploitation capabilities, but its numerous parameters and complex position update strategies make its tuning more difficult. The Whale Optimization Algorithm (WOA) is simple in structure and easy to implement, but it suffers from slow convergence, low solution accuracy, and the tendency to get trapped in local optima in complex problems [18]. Therefore, improving the balance between exploration and exploitation, enhancing the efficiency of the exploration phase, increasing the accuracy of the exploitation phase, and maintaining population diversity in the later stages of iteration have become the major challenges in enhancing the performance of metaheuristic algorithms.
Table 2. Current research on improved metaheuristic algorithms.
| Algorithm | Year | Author | Source of Inspiration |
|---|---|---|---|
| COGWO2D [19] | 2018 | Ibrahim R. A. et al. | Opposition-based learning, differential evolution, and disruption operator |
| MEGWO [20] | 2019 | Tu Q. et al. | Adaptable cooperative strategy and disperse foraging strategy |
| QMPA [21] | 2021 | Abd Elaziz M. et al. | Schrödinger wave function |
| ISSA [22] | 2023 | Xue Z. et al. | Circle chaotic mapping, GWO, and chaotic sine-cosine mechanism |
| ACRIME [23] | 2024 | Abdel-Salam M. et al. | Symbiotic Organism Search (SOS) and restart strategy |
| BWOA [24] | 2019 | Chen H. et al. | Levy flight and chaotic local search |
| SMWOA [25] | 2020 | Guo W. et al. | Linear incremental probability, social learning principle, and Morlet wavelet mutation |
| HSWOA [26] | 2021 | Aala Kalananda V. K. R. et al. | Social Group Optimization algorithm (SGO) |
| ImWOA [27] | 2022 | Chakraborty S. et al. | Cooperative hunting strategy and improved exploration-exploitation logic |
In recent years, researchers have continuously attempted to integrate different methods to improve metaheuristic algorithms. In 2004, Y. Gao et al. introduced chaotic mapping into population initialization, generating higher-quality populations, which enhanced the optimization ability of the Particle Swarm Optimization (PSO) algorithm to some extent. In 2018, Rehab Ali Ibrahim et al. proposed the Chaotic Opposition-based Grey Wolf Optimizer (GWO) based on Differential Evolution and Disruption Operator. They incorporated logistic mapping, opposition-based learning (OBL), differential evolution (DE), and disruption operator (DO) into GWO to improve its exploration and exploitation capabilities while maintaining population diversity [19]. In 2019, Qiang Tu et al. introduced the Multi-strategy Ensemble Grey Wolf Optimizer (MEGWO), which incorporated the Enhanced Global-best Leading Strategy, Adaptable Cooperative Hunting Strategy, and Scattered Hunting Strategy into the canonical GWO to overcome the limitations of a single search strategy when solving various optimization problems [20]. In 2021, Mohamed Abd Elaziz et al. proposed the Quantum Marine Predators Algorithm (QMPA), which uses the probability function from the Schrödinger wave function to determine the position of particles at any given moment, thus enhancing the exploration and exploitation abilities of the Marine Predators Algorithm (MPA) [21]. In 2023, Zhilu Xue et al. proposed the ISSA, which introduced chaotic mapping, integrated the information exchange mechanism from the Grey Wolf Optimizer (GWO), and utilized chaotic sine-cosine strategies to improve the optimization accuracy and convergence speed of the Social Spider Algorithm (SSA) [22]. In 2024, Mahmoud Abdel-Salam et al. introduced chaotic mapping, adaptive improvement of the Symbiotic Organisms Search (SOS) mutualistic phase, hybrid mutation strategies, and restart strategies into the RIME algorithm, proposing the Chaotic RIME Optimization Algorithm with Adaptive Mutualism (ACRIME) to address feature selection problems [23]. These outstanding algorithms, which integrate various novel improvement strategies, provide new insights for enhancing metaheuristic algorithms.
The Whale Optimization Algorithm (WOA), proposed by Mirjalili et al. in 2016, is a metaheuristic optimization algorithm inspired by the hunting behavior of humpback whales [18]. WOA has a relatively simple structure, making it easy to understand and implement. However, it suffers from a poor balance between exploration and exploitation, and population quality tends to degrade significantly during iterations, leading to insufficient global exploration and premature convergence to local optima. These drawbacks make WOA less competitive on complex, real-world optimization problems. As a result, many researchers have attempted to improve the performance of WOA in recent years. In 2019, Chen et al. proposed an improved Whale Optimization Algorithm (BWOA) that integrates Levy flight and chaotic local search (CLS) to balance global exploration and local search more effectively [24]. In 2020, Guo et al. introduced a modified Whale Optimization Algorithm (SMWOA) incorporating social learning and wavelet mutation strategies; it significantly enhances global search efficiency through a newly designed, linearly increasing probability [25]. In 2021, Aala Kalananda et al. proposed two hybrid optimization algorithms, the Hybrid Social Whale Optimization Algorithms (HS-WOA and HS-WOA+), which combine the advantages of WOA and Social Group Optimization, integrating WOA's exploration ability with SGO's convergence properties to achieve a strong balance between exploration and exploitation [26]. In 2022, Chakraborty et al. introduced a novel improved Whale Optimization Algorithm (ImWOA), which employs two different exploration strategies for food sources and a new collaborative hunting strategy, designed to address the limited solution diversity and local-optima issues of the canonical WOA [27]. However, although these algorithms balance exploration and exploitation well, they do not effectively improve convergence speed and accuracy. Therefore, this paper proposes an enhanced Whale Optimization Algorithm based on multi-strategy (LSEWOA) to address these issues.

2. Development History and Current Research of Engineering Design

In ancient times, the design of structures largely relied on experience and intuition. When constructing the pyramids in ancient Egypt, designers determined the dimensions and shapes of the structures based on human experience and mathematical principles. In ancient architecture or mechanical design, the concept of “optimization” was generally absent, with designs being based more on manual calculations and intuitive judgment to create reasonable structures. During the Industrial Revolution, with the advancement of machine manufacturing and mass production, engineering design began to focus on optimization to enhance structural performance and reduce costs, although it still relied on manual design. Optimization was typically achieved through experimentation, adjustment, and correction. In fields like thermodynamics and mechanical structural design, designers would conduct elaborate experiments to adjust design parameters until they met practical requirements.
In the early 20th century, mathematical optimization theory gradually developed and began to serve as a core tool in engineering design. The initial optimization methods were based on classical mathematical analysis, such as calculus, which provided optimal solutions through analytical derivations. In the 1940s, linear programming (LP) was proposed by George Dantzig and others, providing a mathematical foundation for optimization, particularly in applications such as economics, transportation, and resource allocation. With the emergence of nonlinear systems, designers required more complex mathematical tools to solve these problems. As a result, numerical methods like Newton’s method and gradient descent were developed and applied to engineering optimization, addressing more complex design problems. However, due to the lack of powerful computational tools at the time, engineering design remained a time-consuming and labor-intensive process. The optimization process required designers to make certain assumptions to find solutions, and the problems solved were often simplified or idealized.
From the 1950s to the 1970s, with the development of computers, engineering design optimization gradually became computerized. Engineers began using computer programs to solve optimization problems, and breakthroughs were made, particularly in finite element analysis (FEA) and dynamic programming (DP). The finite element method allowed engineers to model and analyze complex structures, solving stress and deformation problems for various materials and geometries, providing more detailed and accurate computational tools for engineering optimization. In control theory and scheduling problems, dynamic programming methods were widely applied, breaking down problems into smaller subproblems and optimizing them step by step, solving many complex design issues. However, in these early computer-aided designs, optimization problems were typically solved using deterministic mathematical methods and were limited to certain types of problems (e.g., linear and nonlinear issues). When faced with large-scale, complex engineering problems, the computational effort and difficulty in solving these problems remained substantial.
As the scale and complexity of engineering design problems continued to increase, canonical mathematical optimization methods struggled to handle high-dimensional, nonlinear, and complexly constrained problems. By the 1980s, researchers began to turn to metaheuristic algorithms to address complex engineering optimization problems. Metaheuristic algorithms, such as simulated annealing, genetic algorithms, particle swarm optimization, and ant colony optimization, were developed and put into practical use. These algorithms effectively avoided getting trapped in local optima and provided better optimization solutions, offering a new approach to engineering design optimization, especially for complex, irregular, or high-dimensional problems. At the same time, significant improvements in computer hardware enabled these algorithms to handle larger-scale optimization problems. In the 2010s, the rapid development of artificial intelligence (AI) and deep learning (DL) began to offer additional options for engineering design optimization. For example, neural networks (NN) can automatically predict and help designers optimize structural designs by learning from large amounts of historical data. However, AI models require extensive training time and large datasets to perform effectively, and for engineering design problems where data is scarce or difficult to obtain, their performance may be suboptimal. Additionally, AI depends on the mathematical model of the problem, and once trained, it can only handle known engineering design problems within that model.
Given that metaheuristic algorithms are robust and maintain high optimization performance across different application scenarios, typically without relying on the mathematical model or derivative information of the problem, they are well-suited for various complex and difficult-to-model problems. Consequently, metaheuristic algorithms remain the primary method for solving engineering design optimization problems today.

3. Organization of the Paper

Section 4 presents the main contributions of this research. Section 5 explains the principles of WOA in detail, along with its advantages and disadvantages. Section 6 introduces the LSEWOA proposed in this paper. Section 7 evaluates the performance of LSEWOA through a series of experiments. Section 8 tests various metaheuristic algorithms and LSEWOA on different engineering design optimization problems to validate the practicality and robustness of LSEWOA.

4. Major Contributions

The Whale Optimization Algorithm (WOA) has shown subpar performance in the field of engineering design optimization, yet its simple structure holds significant potential for further development. We aimed to improve WOA so that the resulting variant matches state-of-the-art (SOTA) algorithms in convergence speed and optimization accuracy on numerical optimization tasks. Additionally, we intended for this variant to outperform several SOTA algorithms and the original WOA in engineering design optimization, addressing the shortcomings of WOA in this area and exploring the potential application of WOA in engineering design optimization.
WOA struggles to balance exploration and exploitation, and the population quality tends to deteriorate significantly over iterations, leading to insufficient global exploration and premature convergence to local optima. To address these issues, this paper proposes LSEWOA. LSEWOA introduces the Good Nodes Set method to generate uniformly distributed populations, employs a newly designed Leader-Followers Search-for-Prey Strategy to enhance global exploration, incorporates a novel Spiral-based Encircling Prey Strategy that integrates Spiral flight, utilizes an Enhanced Spiral Updating Strategy combining inertia weight and Tangent flight, and introduces a new update mechanism for the parameter a to better balance exploration and exploitation. Experiments show that LSEWOA effectively addresses the drawbacks of WOA. Furthermore, compared to the classical WOA and other state-of-the-art metaheuristic algorithms, LSEWOA demonstrates significant advantages in both numerical optimization and real-world optimization problems.

5. WOA

The Whale Optimization Algorithm (WOA), proposed by Mirjalili et al. in 2016, is a metaheuristic optimization algorithm inspired by the hunting behavior of humpback whales [18]. In WOA, the spiral upward strategy and encircling prey strategy of humpback whales are simulated.

5.1. Encircling Prey

Encircling prey behavior is described by Equations (1) and (2).
$$D = \left| C \cdot X^*(t) - X(t) \right| \tag{1}$$
$$X(t+1) = X^*(t) - A \cdot D \tag{2}$$
where $t$ is the current iteration; $A$ and $C$ are coefficient vectors; $X^*(t)$ is the position of the current best solution; $X(t)$ is the position of the whale.
At each iteration, the best solution is updated whenever a whale finds a position with a better fitness value.
The coefficients A and C are calculated as follows:
$$A = 2a \cdot r - a \tag{3}$$
$$a = 2 - \frac{2t}{T} \tag{4}$$
$$C = 2 \cdot r \tag{5}$$
where $r$ is a random vector with components between 0 and 1; the convergence factor $a$ decreases linearly from 2 to 0 over the course of iterations, as shown in Figure 1.
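For concreteness, the following is a minimal NumPy sketch of the encircling-prey update defined by Equations (1)-(5); the function name and the convention of one whale per array row are our own illustration, not code from the paper.

```python
import numpy as np

def encircling_prey(X_i, X_best, t, T, rng):
    """One encircling-prey step for a single whale (Equations (1)-(5))."""
    dim = X_i.shape[0]
    a = 2 - 2 * t / T                  # Eq. (4): a decays linearly from 2 to 0
    r = rng.random(dim)
    A = 2 * a * r - a                  # Eq. (3): components of A lie in [-a, a]
    C = 2 * rng.random(dim)            # Eq. (5): components of C lie in [0, 2]
    D = np.abs(C * X_best - X_i)       # Eq. (1): distance to the best solution
    return X_best - A * D              # Eq. (2): move toward the best solution
```

Because the magnitude of A shrinks with a, later iterations take progressively smaller steps around the best solution.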

5.2. Bubble-Net Attacking Method

In addition to encircling prey, whales use a bubble-net attacking method to trap prey by spiraling upward while creating bubbles. This strategy involves two main mechanisms: shrinking encircling and spiral updating.

5.2.1. Shrinking Encircling

This behavior is modeled by decreasing the value of the convergence factor a in Equation (4), which in turn shrinks the fluctuation range of A.

5.2.2. Spiral Updating

The distance between the whale and the prey is calculated by Equation (7), and a spiral equation simulates the upward spiral motion used to encircle the prey, as in Equation (6).
$$X(t+1) = X^*(t) + D' \cdot e^{bl} \cdot \cos(2\pi l) \tag{6}$$
$$D' = \left| X^*(t) - X(t) \right| \tag{7}$$
$$l = (a_1 - 1) \cdot Rand + 1 \tag{8}$$
$$a_1 = -1 - \frac{t}{T} \tag{9}$$
where $b$ is a constant that defines the shape of the logarithmic spiral, usually set to 1; $a_1$ decreases linearly over $[-2, -1]$; $Rand$ is a random number between 0 and 1; the spiral coefficient $l$ therefore takes values in $[-2, 1]$.
When WOA updates a whale's position, the shrinking encircling strategy and the spiral updating strategy are each selected with 50% probability, that is:
$$X(t+1) = \begin{cases} X^*(t) - A \cdot D, & p < 0.5 \\ X^*(t) + D' \cdot e^{bl} \cdot \cos(2\pi l), & p \ge 0.5 \end{cases} \tag{10}$$
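A matching sketch of the spiral update, Equations (6)-(9), under the same conventions as the earlier snippet; in the full algorithm, the coin flip p of Equation (10) chooses between this update and the encircling update above.

```python
import numpy as np

def spiral_update(X_i, X_best, t, T, rng, b=1.0):
    """Spiral updating step (Equations (6)-(9))."""
    a1 = -1 - t / T                        # Eq. (9): a1 decays from -1 to -2
    l = (a1 - 1) * rng.random() + 1        # Eq. (8): l falls in [-2, 1]
    D_prime = np.abs(X_best - X_i)         # Eq. (7): distance to the best whale
    return X_best + D_prime * np.exp(b * l) * np.cos(2 * np.pi * l)  # Eq. (6)
```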

5.3. Searching for Prey

If a whale moves beyond the position where the prey exists, it abandons its previous direction of movement and searches randomly for other prey in other directions, which helps the algorithm avoid falling into local optima. The search for prey is modeled as follows:
$$D = \left| C \cdot X_{rand} - X(t) \right| \tag{11}$$
$$X(t+1) = X_{rand} - A \cdot D \tag{12}$$
where $X_{rand}$ is a random whale chosen from the current population; $A$ and $C$ are as defined in Equations (3) and (5).
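A sketch of the search-for-prey step, Equations (11)-(12), where A and C come from Equations (3) and (5); passing the whole population X as an (N, D) array is our assumption.

```python
import numpy as np

def search_for_prey(X, i, A, C, rng):
    """Exploration step (Equations (11)-(12)): follow a random whale."""
    X_rand = X[rng.integers(len(X))]       # random whale from the population
    D = np.abs(C * X_rand - X[i])          # Eq. (11)
    return X_rand - A * D                  # Eq. (12)
```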

5.4. Population Initialization

Like most metaheuristic algorithms, WOA uses a pseudo-random number method for population initialization. A population initialized with pseudo-random numbers for a population size of N = 300 is shown in Figure 2.
$$X_{i,j} = (ub - lb) \cdot Rand + lb \tag{13}$$
where $X_{i,j}$ is the randomly generated population; $ub$ and $lb$ are the upper and lower bounds of the problem; $Rand$ is a random number between 0 and 1.
This approach, while simple and direct, often results in poor diversity and uneven distribution of solutions. The phenomenon of population aggregation can easily occur, which can lead to inefficiency in the search process.
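Equation (13) amounts to a single line of NumPy; a minimal sketch, assuming scalar bounds shared by all dimensions:

```python
import numpy as np

def random_init(N, D, lb, ub, rng):
    """Pseudo-random initialization (Equation (13)): N whales, D dimensions."""
    return (ub - lb) * rng.random((N, D)) + lb
```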

5.5. Pseudo-Code of WOA

The pseudo-code of WOA is provided in Algorithm 1.
Algorithm 1 WOA
  • Begin
  • Initialize the parameters (T, N, p, etc.);
  • Calculate the fitness of each search agent;
  • The best search agent is $X^*$;
  •     while  t < T
  •       for each search agent
  •          Update a, A , C , l, and p;
  •          if  p < 0.5
  •            if  | A | < 1
  •                            (Encircling Prey)
  •               Update the position of the current search agent by Equation (2);
  •            else
  •                            (Search For Prey)
  •               Update the position of the current search agent by Equation (12);
  •            end if
  •          else
  •                            (Spiral Updating)
  •              Update the position of the current search agent by Equation (6);
  •          end if
  •       end for
  •       Check if any search agent goes beyond the search space and amend it;
  •       Calculate the fitness of each search agent;
  •       Update $X^*$ if there is a better solution;
  •       t = t + 1
  •     end while
  • return $X^*$
  • End
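Putting the pieces together, here is a compact, runnable Python rendering of Algorithm 1. It is a sketch rather than the reference implementation: for simplicity it draws A and C as scalars per agent (the vector form draws them per dimension) and assumes scalar bounds.

```python
import numpy as np

def woa(f, lb, ub, dim, N=30, T=500, b=1.0, seed=0):
    """Minimal Whale Optimization Algorithm following Algorithm 1."""
    rng = np.random.default_rng(seed)
    X = (ub - lb) * rng.random((N, dim)) + lb        # Eq. (13)
    fit = np.array([f(x) for x in X])
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    for t in range(T):
        a = 2 - 2 * t / T                            # Eq. (4)
        a1 = -1 - t / T                              # Eq. (9)
        for i in range(N):
            A = 2 * a * rng.random() - a             # Eq. (3), scalar form
            C = 2 * rng.random()                     # Eq. (5)
            if rng.random() < 0.5:
                if abs(A) < 1:                       # encircling prey
                    D = np.abs(C * best - X[i])      # Eq. (1)
                    X[i] = best - A * D              # Eq. (2)
                else:                                # search for prey
                    X_rand = X[rng.integers(N)]
                    D = np.abs(C * X_rand - X[i])    # Eq. (11)
                    X[i] = X_rand - A * D            # Eq. (12)
            else:                                    # spiral updating
                l = (a1 - 1) * rng.random() + 1      # Eq. (8)
                D = np.abs(best - X[i])              # Eq. (7)
                X[i] = best + D * np.exp(b * l) * np.cos(2 * np.pi * l)  # Eq. (6)
        X = np.clip(X, lb, ub)                       # amend out-of-bounds agents
        fit = np.array([f(x) for x in X])
        if fit.min() < best_fit:                     # update X* if improved
            best, best_fit = X[fit.argmin()].copy(), fit.min()
    return best, best_fit

# example: 30-dimensional sphere function
# x_star, f_star = woa(lambda x: float(np.sum(x ** 2)), -100.0, 100.0, 30)
```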

5.6. Advantages and Disadvantages of WOA

As Algorithm 1 shows, the structure of WOA is relatively simple, with few parameters, making it easy to understand and implement. By dynamically adjusting the convergence factor a, WOA balances global exploration in the early stages and local exploitation in the later stages of the iteration, reducing the risk of premature convergence to local optima. However, WOA has limitations, particularly on complex multi-modal problems, where it may not explore the search space adequately and may converge prematurely. Meanwhile, WOA suffers from an imbalance between exploration and exploitation. There is therefore considerable room for improvement in WOA, which motivates the proposed LSEWOA.

6. LSEWOA

6.1. Good Nodes Set Initialization

The classical WOA uses a pseudo-random number method to generate the population, as shown in Figure 2. While simple and direct, this approach often results in poor population diversity and uneven distribution, which leads to inefficient searches, especially when individuals cluster together.
To address these shortcomings, we adopt the Good Nodes Set method for population initialization [28], which ensures a more uniform distribution of solutions. The concept of the Good Nodes Set, first introduced by the Chinese mathematician Loo-keng Hua, is a method for generating evenly distributed points. Its advantage is evident not only in two-dimensional space but also in high-dimensional spaces, as the construction of a Good Nodes Set is dimension-independent. Thus, Good Nodes Set Initialization can enhance the quality of whale populations and improve the exploration capabilities of WOA. The population generated by Good Nodes Set Initialization for a population size of N = 300 is shown in Figure 3. Compared with the pseudo-random number method, this population is more uniformly distributed, effectively avoiding the clustering of individuals. Assuming $U_D$ is a unit hypercube in D-dimensional Euclidean space, the Good Nodes Set takes the form of Equation (14):
$$P_r(M) = \left\{ p(k) = \left( \{k \cdot r\}, \{k \cdot r^2\}, \ldots, \{k \cdot r^D\} \right) \mid k = 1, 2, \ldots, M \right\} \tag{14}$$
where $\{x\}$ denotes the fractional part of $x$; $M$ is the number of nodes; $r$ is a deviation parameter greater than zero; and $C(r, \varepsilon)$, which bounds the deviation of the set, is a constant depending only on $r$ and $\varepsilon$, where $\varepsilon$ is a constant greater than zero.
This set $P_r(M)$ is called a Good Nodes Set, and each node $p(k)$ in it is called a Good Node. Assume that the upper and lower bounds of the $i$th dimension of the search space are $X_{max}^i$ and $X_{min}^i$; then the formula for mapping the Good Nodes Set to the actual search space is:
$$X_k^i = X_{min}^i + p^i(k) \cdot \left( X_{max}^i - X_{min}^i \right) \tag{15}$$
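A sketch of Equations (14)-(15). The text does not state how the deviation parameter r is chosen; the prime-based choice below is a common convention in the good-point-set literature and is our assumption. Note that raising r to high powers loses floating-point precision, so this literal form is best suited to low-dimensional illustrations such as Figure 3.

```python
import numpy as np

def _next_prime(n):
    """Smallest prime >= n (helper for choosing r; our convention)."""
    is_prime = lambda m: m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))
    while not is_prime(n):
        n += 1
    return n

def good_nodes_init(N, D, lb, ub):
    """Good Nodes Set initialization (Equations (14)-(15))."""
    p = _next_prime(2 * D + 3)
    r = 2 * np.cos(2 * np.pi / p)            # assumed choice of deviation parameter
    k = np.arange(1, N + 1).reshape(-1, 1)   # k = 1, ..., M with M = N
    j = np.arange(1, D + 1)
    P = np.mod(k * r ** j, 1.0)              # Eq. (14): fractional parts {k * r^j}
    return lb + P * (ub - lb)                # Eq. (15): map nodes into the bounds
```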

6.2. Leader-Followers Search-for-Prey Strategy

The search-for-prey strategy of the original WOA updates positions by randomly selecting a whale individual, which increases the diversity of WOA to some extent. However, it also leads to unstable convergence, leaving the search paths of individuals without clear direction or regularity. Particularly in later iterations, the search may rely too heavily on randomness, resulting in premature convergence or trapping in local optima. Additionally, the search-for-prey strategy in WOA depends excessively on randomly selected individuals to update positions, which can leave the population's information underused. To address these issues, this paper mimics the behavior of a leader guiding followers toward the prey during whale foraging and proposes the Leader-Followers Search-for-Prey Strategy. The detailed process is shown in Figure 4. This strategy aims to resolve the drawbacks of the WOA exploration phase, namely insufficient utilization of population information and excessive reliance on randomness in the search process. The Leader-Followers Search-for-Prey Strategy is modeled as follows:
$$X(t+1) = \varepsilon \cdot X^*(t) + \left| X_R(t) - X(t) \right| \tag{16}$$
$$\varepsilon = 1 - \frac{t}{T} \tag{17}$$
$$X_R(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t) \tag{18}$$
where $\varepsilon$ is the attraction coefficient of the Leader, calculated by Equation (17); $t$ is the current iteration number; $T$ is the maximum number of iterations; $X_i$ represents the position of whale individual $i$; $X_R$ is the average position of all whale individuals, calculated by Equation (18); $N$ represents the population size; and $X^*$ denotes the position of the current best solution.
The Leader-Followers Search-for-Prey Strategy updates the position of whale individuals by using the position of the current best whale and the average position of all whales. This approach makes better use of the collective information of the population, allowing the movement of whale individuals to be more based on the population structure characteristics rather than purely random behavior, thus avoiding too little or too much mutual influence between whale individuals. By leveraging the dominant influence of the Leader and combining the population average, the position update of each individual is attracted by the global optimal position while being constrained by the population distribution, thus expanding the search range of the algorithm. This helps to make the search more targeted, reduces randomness, and, by decreasing ε , gradually reduces the update intensity, enabling the algorithm to quickly identify potential solutions in a larger search space. In the early stages, when ε is larger, the Leader’s attraction to the Followers is stronger, enhancing the exploratory behavior. In later stages, this attraction weakens, promoting more exploitative behavior, thus helping the algorithm to better balance exploration and exploitation. The Leader’s strong guidance helps the whale population to form a clear global search direction, laying the foundation for later development. In the early stages, the positions of the whale individuals are more dispersed. If the dependency on the Leader is small, the updates between individuals may lack concentration, leading to disordered search behavior. In later stages, when ε is smaller, the weaker attraction from the Leader prevents the whale population from prematurely converging around the Leader, which helps preserve population diversity. During each position update, whale individuals, by focusing on the relationship between themselves and the population average, can perform fine searches within their neighborhood with smaller steps, contributing to greater convergence stability and avoiding large fluctuations in position, thereby improving the accuracy of the final solution. Additionally, the linear variation of ε makes the search strategy smoother and more gradual, helping to reduce unnecessary fluctuations during convergence and improving both the convergence accuracy and speed of the algorithm.
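A vectorized sketch of Equations (16)-(18), applied to the whole population at once; the array conventions are ours.

```python
import numpy as np

def leader_followers(X, X_best, t, T):
    """Leader-Followers Search-for-Prey (Equations (16)-(18))."""
    eps = 1 - t / T                          # Eq. (17): leader attraction decays
    X_R = X.mean(axis=0)                     # Eq. (18): mean position of the pod
    return eps * X_best + np.abs(X_R - X)    # Eq. (16), broadcast over all whales
```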

6.3. Spiral-Based Encircling Prey Strategy

In the encircling prey phase of the original WOA, the position update method is based on the Euclidean distance between the current best solution and the whale’s individual position. This linear shrinking strategy often leads to a monotonous search behavior, lacking flexibility, and may get trapped in local optima, especially in later stages. Additionally, when the value of A is small, the search range is restricted, which could prevent the algorithm from escaping local optima. Inspired by the Spiral flight mechanism, this paper proposes a Spiral-based Encircling Prey Strategy. The modeling of the Spiral-based Encircling Prey Strategy is shown in Equations (19)–(22). Figure 5 illustrates the concept of the Spiral-based Encircling Prey Strategy.
$$X(t+1) = X^*(t) + e^{Z \cdot L} \cdot \cos(2\pi L) \cdot \left| A \cdot D \right| \tag{19}$$
$$D = \left| C \cdot X^*(t) - X(t) \right| \tag{20}$$
$$L = 2 \cdot r - 1 \tag{21}$$
$$Z = e^{k \cdot \cos\left( \pi \cdot \left( 1 - \frac{t}{T} \right) \right)} \tag{22}$$
where $A$ and $C$ are coefficient vectors; $Z$ represents the Spiral flight step size; $k$ is a spiral coefficient; and $r$ is a random number between 0 and 1.
The Spiral-based Encircling Prey Strategy increases the randomness and nonlinear variation of position updates by introducing the nonlinear step size Z of Spiral flight. This enables whale individuals to take diversified paths when approaching the prey. During the process of approaching the prey, the whale individuals can explore the local space more thoroughly, and when trapped in a local optimum, they have a higher probability of escaping and finding a better solution. This strategy helps avoid premature convergence in complex problems.
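A sketch of Equations (19)-(22); A and C are the coefficient vectors of Equations (3) and (5), and the spiral coefficient k is left as a parameter because its value is not fixed in the text.

```python
import numpy as np

def spiral_encircling(X_i, X_best, A, C, t, T, k, rng):
    """Spiral-based Encircling Prey (Equations (19)-(22))."""
    Z = np.exp(k * np.cos(np.pi * (1 - t / T)))   # Eq. (22): spiral-flight step
    L = 2 * rng.random() - 1                      # Eq. (21): L in [-1, 1]
    D = np.abs(C * X_best - X_i)                  # Eq. (20)
    return X_best + np.exp(Z * L) * np.cos(2 * np.pi * L) * np.abs(A * D)  # Eq. (19)
```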

6.4. Enhanced Spiral Updating Strategy

The Spiral Updating Strategy in the original WOA aids local search, but because it lacks randomness and disturbance mechanisms, the position update of each whale individual depends on the Leader's position in the same fixed way. This results in a lack of diversity and a weak ability to escape local optima during the exploitation phase, making the algorithm prone to getting trapped in locally optimal solutions. Therefore, this paper introduces an inertia weight and Tangent flight into the original Spiral Updating Strategy and proposes an Enhanced Spiral Updating Strategy.

6.4.1. Inertia Weight ω

The concept of the inertia weight $\omega$ was first introduced by Shi et al. in PSO [29], where it led to significant performance improvements. Numerous later studies demonstrated that the inertia weight is effective in balancing global exploration and local exploitation. Therefore, an inertia weight $\omega$ based on the Sigmoid function is introduced into the prey-capture strategy of WOA, as shown in Equation (23). Figure 6 compares common inertia weights with the inertia weight proposed in this paper for $k_1$ = 10, 15, and 20. This inertia weight $\omega$ increases from 0 to 0.9: slowly at first, rapidly in the middle phase, and slowly again toward the later stage. Furthermore, as $k_1$ increases, the rate of change in the middle phase becomes faster.
$$\omega = \frac{0.9}{1 + e^{-k_1 \left( \frac{t}{T} - 0.5 \right)}} \tag{23}$$
where $t$ is the current iteration; $T$ is the maximum number of iterations; and $k_1$ is the slope parameter of the Sigmoid function, whose value is discussed in Section 7.1.
In the early stages of iteration, the inertia weight ω is small, and when updating the positions, the distance D between the whale individual and the leader has a lower reference value. As a result, the whale individual is less influenced by the leader’s attraction, allowing for greater freedom to perform global exploration. In the later stages of iteration, when the inertia weight is larger, the whale individual is more strongly influenced by the leader’s attraction. At this point, the freedom of the whale individual is restricted, and the whale individual closely follows the leader for detailed local exploitation.
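Equation (23) in code form; the default k1 = 20 follows the sensitivity analysis in Section 7.1.

```python
import numpy as np

def inertia_weight(t, T, k1=20):
    """Sigmoid inertia weight (Equation (23)): grows from ~0 to 0.9."""
    return 0.9 / (1 + np.exp(-k1 * (t / T - 0.5)))
```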

6.4.2. Tangent Flight

Tangent flight is a step-size calculation method based on the tangent function, introduced in 2021 with the Tangent Search Algorithm (TSA) [30]. Tangent flight follows a heavy-tailed distribution and is characterized by occasional large step-size movements. Figure 7 shows a simulation of Tangent flight. The Tangent flight step size is calculated as follows:
$$T_f = \tan(\theta) \tag{24}$$
$$\theta = rand \cdot \frac{\pi}{2} \tag{25}$$
where $T_f$ is the step size of Tangent flight; $rand$ is a random number from 0 to 1.
Larger Tangent flight steps benefit exploration, while smaller steps benefit exploitation. From the formula, $\theta$ ranges from 0 to $\frac{\pi}{2}$. The closer $\theta$ is to $\frac{\pi}{2}$, the larger $T_f$ becomes, corresponding to larger steps that favor global exploration. Conversely, the closer $\theta$ is to 0, the smaller $T_f$ becomes, producing small steps suited to local exploitation. Compared with Levy flight, Tangent flight produces large steps more frequently, which compensates for Levy flight's tendency to generate search distances that are either too large or too small, and makes Tangent flight more effective at escaping local optima. Tangent flight enhances the global exploration ability of WOA while effectively addressing its local exploitation shortcomings, which in turn accelerates WOA's convergence rate and improves overall performance.
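The Tangent flight step of Equations (24)-(25) is a two-line sketch; since rand < 1, theta stays strictly below pi/2, so the tangent remains finite but occasionally very large (the heavy tail).

```python
import numpy as np

def tangent_flight(rng):
    """Tangent flight step size (Equations (24)-(25))."""
    theta = rng.random() * np.pi / 2    # Eq. (25): theta in [0, pi/2)
    return np.tan(theta)                # Eq. (24): heavy-tailed step size
```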

6.5. Calculation of Enhanced Spiral Updating Strategy

The Enhanced Spiral Updating Strategy, combining the inertia weight and the Tangent flight step size, is modeled as follows:
$$X(t+1) = X^*(t) \cdot T_f + \omega \cdot D' \cdot e^{bl} \cdot \cos(2\pi l) \tag{26}$$
where $T_f$ is the step size of Tangent flight; $\omega$ is the proposed inertia weight; and $D'$ and $l$ are as defined in Equations (7) and (8).
The Enhanced Spiral Updating Strategy enhances the diversity of the whale individual’s search. The original spiral ascent strategy itself, through the combination of exponential functions and cosine waves, makes the whale individual’s search trajectory more complex and directional. After introducing the inertial weight ω , the amplitude and jump range of the trajectory are dynamically adjusted with each iteration, further enriching the whale’s search path. By combining the step size T f of Tangent flight with heavy-tail characteristics, the whale individual’s movement ability is expanded globally, allowing for large random jumps. While accelerating the algorithm’s convergence speed, the larger step sizes provide WOA with an effective ability to escape local optima, making it more efficient in solving complex optimization problems.
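Combining the two ingredients gives the following sketch of Equation (26), reusing l and D' from Equations (7)-(9) and the defaults discussed above:

```python
import numpy as np

def enhanced_spiral(X_i, X_best, t, T, rng, b=1.0, k1=20):
    """Enhanced Spiral Updating (Equation (26))."""
    omega = 0.9 / (1 + np.exp(-k1 * (t / T - 0.5)))  # Eq. (23): inertia weight
    Tf = np.tan(rng.random() * np.pi / 2)            # Eqs. (24)-(25): Tangent flight
    l = (-2 - t / T) * rng.random() + 1              # Eqs. (8)-(9): l in [-2, 1]
    D_prime = np.abs(X_best - X_i)                   # Eq. (7)
    return X_best * Tf + omega * D_prime * np.exp(b * l) * np.cos(2 * np.pi * l)  # Eq. (26)
```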

6.6. Redesigned Convergence Factor a

This paper proposes a new update method for the convergence factor a based on the Sigmoid function to balance global exploration and local exploitation, with its calculation shown in Equation (27). Figure 8 compares the proposed approach, with $k_2$ = 15, 20, and 25, to the original linear decay method of the classical WOA. The update strategy yields a slower decrease in the early stage, a rapid reduction in the middle stage, and a slower decrease again in the later stage. Moreover, as $k_2$ increases, the speed of change in the middle stage becomes faster. This approach simulates a more complex variation process, giving the algorithm different convergence characteristics at different stages: the slower reduction in the early stage helps to broadly explore the search space; the rapid decrease in the middle stage accelerates convergence; and the slower reduction in the later stage preserves a certain level of exploration ability while enhancing local exploitation, facilitating a more refined local search. This strategy provides a better balance between global and local search, offers more flexible convergence behavior, alleviates premature convergence, increases adaptability to different search environments, and improves the precision of the final solution. These improvements make WOA more efficient and reliable when dealing with complex optimization problems.
$$a = 2 - \frac{2}{1 + e^{-k_2 \left( \frac{t}{T} - 0.5 \right)}} \tag{27}$$
where $t$ is the current iteration count; $T$ is the maximum number of iterations; and $k_2$ is the slope parameter of the Sigmoid function, whose value is discussed in Section 7.1.
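Equation (27) in code; the default k2 = 25 follows the sensitivity analysis in Section 7.1.

```python
import numpy as np

def convergence_factor(t, T, k2=25):
    """Sigmoid-based convergence factor (Equation (27)): 2 -> 0, slow-fast-slow."""
    return 2 - 2 / (1 + np.exp(-k2 * (t / T - 0.5)))
```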

6.7. Pseudo-Code of LSEWOA

The pseudo-code of LSEWOA is provided in Algorithm 2.
Algorithm 2 LSEWOA
  • Begin
  • Initialize the parameters (T, N, p, etc.);
  • Initialize population using Good Nodes Set Initialization;
  • Calculate the fitness of each search agent;
  • The best search agent is $X^*$;
  •     while  t < T
  •       for each search agent
  •          Update a, ω , ε , A , C , l, and p;
  •          if  p < 0.5
  •            if  | A | < 1
  •                               (Spiral-based Encircling Prey)
  •               Update the position of the current search agent by Equation (19);
  •            else
  •                            (Leader-Followers Search-for-Prey)
  •               Update the position of the current search agent by Equation (16);
  •            end if
  •          else
  •                           (Enhanced Spiral Updating Strategy)
  •              Update the position of the current search agent by Equation (26);
  •          end if
  •       end for
  •       Check if any search agent goes beyond the search space and amend it;
  •       Calculate the fitness of each search agent;
  •       Update $X^*$ if there is a better solution;
  •       t = t + 1
  •     end while
  • return $X^*$
  • End

6.8. Time Complexity Analysis

6.8.1. Time Complexity of WOA

Assume that the time complexity of initialization in WOA is O ( N D ) . During each iteration, the time complexity of boundary checking is O ( N D ) , the time complexity of fitness evaluation is O ( N D ) , and the total time complexity of position updates is O ( N D ) . Therefore, the total time complexity per iteration is O ( N D ) . If the algorithm iterates T times, the total time complexity is calculated as:
Total time complexity of WOA = initialization + $T$ × (per-iteration cost) = $O(ND) + T \cdot O(ND) = O(TND)$

6.8.2. Time Complexity of LSEWOA

Assume that the time complexity of initialization in LSEWOA is O ( N D ) . During each iteration, the time complexity of boundary checking is O ( N D ) , the time complexity of fitness evaluation is O ( N D ) , and the total time complexity of position updates is O ( N D ) . Therefore, the total time complexity per iteration is O ( N D ) . If the algorithm iterates T times, the total time complexity is calculated as:
Total time complexity of LSEWOA = initialization + $T$ × (per-iteration cost) = $O(ND) + T \cdot O(ND) = O(TND)$
In summary, the time complexities of LSEWOA and WOA are the same: both are $O(TND)$.

7. Experiments

The experimental setup for this study includes a Windows 11 (64-bit) operating system, an Intel(R) Core(TM) i5-8300H CPU @ 2.30 GHz processor, and 8 GB of RAM. The simulation platform is MATLAB R2023a. The algorithm was tested on the 23 classical benchmark functions of the CEC2005 test suite listed in Table 3 and Table A1 [31]. To verify the performance and effectiveness of LSEWOA, the following experiments were conducted.
  • A parameter sensitivity analysis experiment was performed on LSEWOA with various $k_1$ and $k_2$, aiming to choose suitable values of $k_1$ for the inertia weight $\omega$ and $k_2$ for the convergence factor a, so as to better balance exploration and exploitation.
  • A qualitative analysis experiment was performed by applying LSEWOA on the 23 benchmark functions to comprehensively evaluate the performance, robustness and exploration-exploitation balance of LSEWOA in different types of problems, by assessing search behavior, exploration-exploitation capability and population diversity.
  • An ablation study was performed by removing each of the five improvement strategies from LSEWOA and testing on 23 benchmark functions.
  • LSEWOA was tested against five excellent WOA variants on the benchmark functions.
  • LSEWOA was compared with the canonical WOA and several state-of-the-art algorithms on the benchmark functions.

7.1. Parameter Sensitivity Analysis Experiment

As shown in Equations (23) and (27), the values of ω with different k 1 and parameter a with different k 2 can significantly affect the performance of LSEWOA. The ω with different k 1 values has a substantial impact on the change in the degree of dependency of whale individuals on the leader during the position update in the Enhanced Spiral Updating Strategy. Meanwhile, as parameter a influences the balance between exploration and exploitation, it is necessary to explore the shape of the Sigmoid function of parameter a. In this experiment, we discuss the values of k 1 and k 2 . Specifically, we performed the Friedman test on LSEWOA with different combinations of k 1 and k 2 over 23 benchmark functions. The number of iterations was set to T = 500, and the population size was set to N = 30. Each version of LSEWOA with different k 1 and k 2 was executed 30 times on the 23 benchmark functions listed in Table 3. The Friedman values are recorded in Table 4. ‘LSEWOA(20, 15)’ means k 1 = 20, k 2 = 15; ‘LSEWOA(15, 20)’ means k 1 = 15, k 2 = 20. ‘Average’ means average Friedman value.
The results indicate that LSEWOA with k 1 = 20 and k 2 = 25 performed the best. Notably, the Friedman value for LSEWOA with k 1 = 15 and k 2 = 25 was 4.0913, for k 1 = 20 and k 2 = 25 it was 3.7935, and for k 1 = 25 and k 2 = 25 it was 3.9443. This confirms the importance of exploring the values of k 1 and k 2 .

7.2. Ablation Study

In this ablation study, we removed five improvement strategies from LSEWOA individually:
  • LSEWOA1: We replaced the Good Nodes Set Initialization with pseudo-random number initialization in LSEWOA, which is referred to as LSEWOA1;
  • LSEWOA2: We replaced the Leader-Followers Search-for-Prey Strategy with the original WOA’s Search-for-prey strategy, referred to as LSEWOA2;
  • LSEWOA3: We replaced the Spiral-based Encircling Prey Strategy with the original WOA’s encircling prey strategy, referred to as LSEWOA3;
  • LSEWOA4: We replaced the Enhanced Spiral Updating Strategy with the original WOA’s spiral updating mechanism, referred to as LSEWOA4;
  • LSEWOA5: We replaced the proposed update mechanism of parameter a with the one in classical WOA, referred to as LSEWOA5.
Additionally, we set the number of iterations to T = 500 and the population size to N = 30. Each algorithm was run 30 times independently on 23 benchmark functions in 30 dimensions to validate the effectiveness of each improvement strategy. The iteration curves are shown in Figure 9.
As seen in the figure, the Good Nodes Set Initialization generates a more uniformly distributed whale population, allowing whale individuals to explore the solution space more effectively. This initialization strategy demonstrates a significant advantage in handling complex multi-modal problems such as F12–F15. The Leader-Followers Search-for-Prey Strategy uses the average position of whale individuals and the leader’s distance as one of the references for position updates, fully utilizing the information within the population. Furthermore, with the leader’s decaying attraction, this strategy better guides the followers toward the prey, enabling the algorithm to balance exploration and exploitation more naturally throughout the iterations, thus improving the convergence speed for functions like F1–F4. The Spiral-based Encircling Prey Strategy, by introducing a nonlinear step size through spiral flight, increases the randomness and nonlinear variation of position updates, introducing a certain degree of periodicity and randomness into the algorithm. This allows the algorithm to continuously escape local optima, thus preventing premature convergence when tackling complex problems like F5–F6 and F12–F13. The Enhanced Spiral Updating Strategy further enriches the whale’s search path. While accelerating the algorithm’s convergence speed, the larger step size enables WOA to effectively escape local optima, greatly improving the convergence speed on functions like F1–F4, F9–F11. Additionally, the Sigmoid function-based update for parameter a, proposed in this paper, provides WOA with a better ability to balance global exploration and local exploitation, further improving the solution accuracy. This allows WOA to focus more effectively on local exploitation in the later iterations, continuously searching for better solutions.

7.3. Qualitative Analysis Experiment

In the qualitative analysis experiment, we set the number of iterations to T = 500 and the population size to N = 30, and ran LSEWOA independently on the 23 benchmark functions in 30 dimensions listed in Table 3 to analyze its search history, exploration-exploitation ratio, and population diversity. In addition, we provide the landscape of the benchmark functions and the iteration curves for reference. The results of the qualitative analysis are shown in Figure 10, Figure 11 and Figure 12, which include:
  • the landscape of benchmark functions;
  • the search history of the whale population;
  • the exploration-exploitation ratio;
  • the changes in population diversity;
  • the iteration curves.
The search history represents the positions and distribution of the whale individuals. In the figures of the search history of the whale population, red circles indicate the location of the global optimum, and blue circles represent the search history of the whale individuals. It is noteworthy that LSEWOA effectively explores the entire search space, as indicated by the positions of the whale individuals. LSEWOA demonstrates a rapid convergence speed in single-modal functions such as F1–F4, where whale individuals find the optimal solution within a limited number of iterations, resulting in a concentrated distribution of individuals within the solution space. In the case of complex multi-modal functions like F8, which have many local optima, LSEWOA first conducts a quick global exploration and then performs detailed exploitation in promising regions. During the exploitation phase, the oscillation term in the Spiral-based Encircling Prey Strategy and the tangent flight step size in the Enhanced Spiral Updating Strategy allow LSEWOA to not only converge rapidly but also effectively escape from local optima, thereby exploring more promising solutions.
In terms of balancing exploration and exploitation, LSEWOA performs excellently. When dealing with uni-modal functions such as F1–F4, the exploitation ratio of LSEWOA increases rapidly in the early iterations, indicating strong exploitation ability. When handling multi-modal functions like F14–F15 and composite functions like F21–F23, LSEWOA exhibits a higher exploration ratio in the early iterations, demonstrating its strong global exploration capability and its ability to continuously identify more promising regions in the search space.
In functions F17–F23, the population diversity curve of LSEWOA consistently fluctuates and maintains a high value. This indicates that LSEWOA is able to maintain high population diversity when handling complex multi-modal functions, effectively preventing premature convergence caused by the population clustering in certain areas.

7.4. Comparative Experiment with State-of-the-Art WOA Variants

To evaluate the superiority of LSEWOA, we selected the following five state-of-the-art WOA variants as controls and tested them alongside LSEWOA on the 23 benchmark functions in 30 dimensions listed in Table 3.
  • WOAV1: The WOA variant (eWOA) that introduces adaptive parameter adjustment, multi-strategy search mechanisms, and elite retention strategies is referred to as WOAV1 [32];
  • WOAV2: The WOA variant (NHWOA) that incorporates multiple subpopulations, dynamically adjusted control parameters, adaptive position update mechanisms, and Levy flight perturbations is referred to as WOAV2 [33];
  • WOAV3: The WOA variant (MSWOA) that introduces adaptive weights, dynamic convergence factors, and Levy flight is referred to as WOAV3 [34];
  • WOAV4: The WOA variant (MWOA) that uses an iteration-based cosine function and exponential decay adjustment for parameters, hybrid mutation strategies, Levy flight, and hybrid update mechanisms is referred to as WOAV4 [35];
  • WOAV5: The WOA variant (WOA_LFDE) that introduces Levy flight and Differential Evolution strategies is referred to as WOAV5 [36].
We uniformly set the number of iterations to T = 500 and the population size to N = 30. Each of the five WOA variants and LSEWOA was run 30 times on the benchmark functions, recording the average fitness (Ave), standard deviation (Std), p-values of the Wilcoxon rank-sum test, and Friedman values for performance analysis. Finally, we evaluated the overall effectiveness (OE). The iteration curves are shown in Figure 13, and the results are reported in Table 5, Table 6 and Table 7.

7.4.1. Parametric Analysis

The experimental results show that LSEWOA performed excellently in this comparative experiment. LSEWOA outperformed all other algorithms in terms of both mean and standard deviation on F1–F17 and F20–F23. On F18–F19, although LSEWOA achieved the best mean value, its stability was lower than that of the other variants.
In algorithm performance evaluation, the average fitness and standard deviation are commonly used to measure convergence and stability, but they do not provide an intuitive representation of algorithm performance. Relying solely on the average fitness and standard deviation to compare different algorithms has limitations, which is why non-parametric tests, such as the Wilcoxon rank-sum test and Friedman test, are often introduced. These statistical tests provide deeper analysis and reliability verification.

7.4.2. Non-Parametric Wilcoxon Rank-Sum Test

The Wilcoxon rank-sum test is a non-parametric method used to determine whether the distributions of two independent samples differ significantly. It avoids the bias that can arise from relying solely on the mean and variance, is less sensitive to outliers, and provides a more robust performance evaluation than the mean and standard deviation alone. When comparing two optimization algorithms, the Wilcoxon test helps determine whether the observed difference is statistically significant. If the p-value is smaller than the chosen significance level (usually 0.05), we can conclude that the performance difference between the two algorithms is significant rather than caused by random error. In the Wilcoxon rank-sum test, LSEWOA showed significant differences from WOAV2 and WOAV5 across all benchmark functions. LSEWOA did not show significant differences from WOAV1 and WOAV4 on F1 and F9–F11 because, on these functions, both algorithms quickly converge to the optimal solution. Similarly, LSEWOA did not show significant differences from WOAV3 on F10–F11, as WOAV3 also converged rapidly on these functions, nor from WOAV4 on F3, as both algorithms quickly reached the optimal solution there.

7.4.3. Non-Parametric Friedman Test

The Friedman test is also a non-parametric method used to compare differences among three or more related samples. It is a non-parametric version of repeated-measures ANOVA and is suitable for scenarios where different algorithms are repeatedly tested on the same dataset. The Friedman test can identify statistically significant differences among algorithms. By comparing the performance of multiple algorithms across multiple datasets or test environments, the Friedman test effectively eliminates sample bias and provides a fairer comparison. In the Friedman test, LSEWOA had an average Friedman value of 1.6710, ranking first, followed by WOAV1, WOAV2, and WOAV3 in second, third, and fourth places, respectively. WOAV5 and WOAV4 ranked fifth and sixth.
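To illustrate how the two tests are applied to per-run fitness results, here is a small SciPy sketch; the data below are random placeholders, not results from the paper.

```python
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

rng = np.random.default_rng(0)
# 30 best-fitness values per algorithm on one benchmark function (placeholders)
lsewoa = rng.normal(0.10, 0.02, 30)
woav1 = rng.normal(0.30, 0.05, 30)
woav2 = rng.normal(0.25, 0.04, 30)

_, p = ranksums(lsewoa, woav1)                  # Wilcoxon rank-sum: two algorithms
print(f"rank-sum p = {p:.3e}")                  # p < 0.05 -> significant difference

_, p = friedmanchisquare(lsewoa, woav1, woav2)  # Friedman: three or more samples
print(f"Friedman p = {p:.3e}")
```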

7.5. Scalability Experiment of LSEWOA

Among the 23 benchmark functions, F1–F13 have expandable dimensions, while F14–F23 have fixed dimensions. To validate the ability of LSEWOA to handle problems of different dimensions and complexities, this experiment expands the expandable functions (F1–F13) to 50 and 100 dimensions while keeping the dimensions of F14–F23 fixed. LSEWOA was compared with WOAV1, WOAV2, WOAV3, WOAV4, and WOAV5 on the 23 benchmark functions in the higher dimensions of 50 (D = 50) and 100 (D = 100). The comparison results are shown in Table 8. The results show that LSEWOA performed excellently in the scalability comparison experiment and had a significant advantage in handling problems of different dimensions.

7.5.1. Overall Effectiveness of LSEWOA

Table 7 summarizes all performance results of LSEWOA and the other algorithms using a single metric, the overall effectiveness (OE). In Table 7, w indicates a win, t indicates a tie, and l indicates a loss. The OE of each algorithm is computed by Equation (28) [37]:
$$OE = \frac{N - L}{N} \cdot 100\%$$
where N is the total number of tests and L is the total number of losing tests for each algorithm.
The results reveal that LSEWOA, with an overall effectiveness of 97.10%, was the most effective algorithm; it lost only 2 of its 69 tests, so OE = (69 − 2)/69 · 100% ≈ 97.10%. In summary, LSEWOA demonstrated exceptional performance on the classical benchmark functions and showed clear differences compared to the five selected WOA variants.

7.6. Comparative Experiment with State-of-the-Art Metaheuristic Algorithms in 30 Dimensions

To further validate the effectiveness of LSEWOA, this study compares it with several state-of-the-art metaheuristic algorithms in 30 dimensions, including the Grey Wolf Optimizer (GWO) [38], the Harris Hawk Optimization algorithm (HHO) [7], the Zebra Optimization Algorithm (ZOA) [39], the Slime Mould Algorithm (SMA) [40], the Sine Cosine Algorithm (SCA) [41], the Attraction-Repulsion Optimization Algorithm (AROA) [42], the Rime Optimization Algorithm (RIME) [43], and the Whale Optimization Algorithm (WOA) [18]; these algorithms are detailed in Table 9. The parameter settings for each algorithm are shown in Table 10. The number of iterations was uniformly set to T = 500, with a population size of N = 30. Each algorithm was independently run 30 times on the 23 benchmark functions, and the average fitness (Ave), standard deviation (Std), p-values of the Wilcoxon rank-sum test, and Friedman values were recorded for performance analysis. The experimental results are shown in Figure 14 and Table 11, Table 12 and Table 13.
The experimental results demonstrate that LSEWOA achieved the best overall performance among all the algorithms in 30 dimensions and showed a significant improvement over the original WOA. As shown in Figure 14 and Table 11, LSEWOA outperformed all algorithms in terms of both the average fitness and the standard deviation when solving F1–F17 and F20–F23. However, when solving F18–F19, the stability of LSEWOA was lower than that of SMA. For most functions, LSEWOA quickly found the optimal solution, exhibiting a faster convergence rate and higher solution accuracy. This confirms that LSEWOA has good adaptability and robustness when handling different types of problems.
In the Wilcoxon rank-sum test, as shown in Table 12, LSEWOA showed significant differences from GWO, SCA, AROA, and RIME across all test functions. LSEWOA did not show significant differences from HHO, ZOA, and SMA on F9–F11 because all of these algorithms quickly found the optimal values of these three functions within the given number of iterations. LSEWOA also did not show a significant difference from RIME on F19.
Ranking the average Friedman values of each algorithm, as shown in Table 12, LSEWOA had the lowest average Friedman value of 1.5587, ranking first. SMA had an average Friedman value of 2.7565, ranking second. ZOA and HHO had average Friedman values of 3.8956 and 4.9297, ranking third and fourth, respectively. GWO, WOA, and RIME had average Friedman values of 5.2312, 5.2587, and 6.0000, ranking fifth, sixth, and seventh, respectively. AROA and SCA had average Friedman values of 7.2913 and 8.0783, ranking eighth and ninth.

7.7. Comparative Experiment with State-of-the-Art Metaheuristic Algorithms in Higher Dimensions

The parameter settings for each algorithm are shown in Table 10. The number of iterations was set to T = 500, and the population size was set to N = 30. Each algorithm was run independently 30 times on the benchmark functions in 50 and 100 dimensions, with the p-values of the Wilcoxon rank-sum test and the Friedman values recorded for performance analysis. The experimental results are shown in Table 14.
The results show that LSEWOA performed excellently in this scalability comparison experiment. Table 13 reveals that LSEWOA, with an overall effectiveness of 97.10%, was the most effective algorithm. In summary, LSEWOA exhibited the best overall performance among all the algorithms, demonstrating strong competitiveness compared to other state-of-the-art metaheuristic algorithms.

8. Engineering Optimization

In this section, nine engineering design problems are used to evaluate the performance of the developed LSEWOA in solving various practical applications. LSEWOA is compared with GWO, HHO, ZOA, SMA, SCA, AROA, RIME, and WOA. The parameter settings for each algorithm are shown in Table 10. The number of iterations is uniformly set to T = 500 and the population size to N = 30. Each algorithm is run 30 times independently on each engineering design optimization problem, with the average fitness (Ave) and standard deviation (Std) recorded for performance analysis.

8.1. Three-Bar Truss

The three-bar truss is a simple structural system consisting of three members, as shown in Figure 15. It is commonly used to support concentrated loads and is widely applied in engineering fields such as bridges, buildings, and aerospace. The three-bar truss design problem is a classic structural optimization problem, often used to study the mechanical behavior of simple structures under external loading conditions. In the three-bar truss design problem, the objective is to optimize the cross-sectional areas of the truss members to minimize material usage while ensuring that the structure meets the required mechanical performance.
This optimization problem involves a nonlinear objective function, three nonlinear inequality constraints, and two continuous decision variables x 1 and x 2 . The objective function for the three-bar truss design problem can be described as follows:
Variable:
$$x = [x_1, x_2]$$
Minimize:
$$f(x) = (2\sqrt{2}\,x_1 + x_2) \cdot l$$
Subject to:
$$g_1(x) = \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\, P - \sigma \le 0$$
$$g_2(x) = \frac{x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\, P - \sigma \le 0$$
$$g_3(x) = \frac{1}{\sqrt{2}\,x_2 + x_1}\, P - \sigma \le 0$$
Where:
$$l = 100\ \text{cm};\quad P = 2\ \text{kN/cm}^2;\quad \sigma = 2\ \text{kN/cm}^2$$
Variable range:
$$0 \le x_i \le 1,\quad i = 1, 2$$
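To make the formulation concrete, the following minimal Python sketch evaluates the truss objective with a static penalty for the three constraints; the penalty weight (and the use of a penalty method at all) is an illustrative assumption, since the paper does not describe its constraint-handling mechanism:

```python
import numpy as np

L, P, SIGMA = 100.0, 2.0, 2.0  # l = 100 cm; P = sigma = 2 kN/cm^2

def truss_penalized(x, weight=1e6):
    """Three-bar truss volume with a static penalty for constraint violations."""
    x1, x2 = x
    volume = (2 * np.sqrt(2) * x1 + x2) * L
    g = [
        (np.sqrt(2) * x1 + x2) / (np.sqrt(2) * x1**2 + 2 * x1 * x2) * P - SIGMA,
        x2 / (np.sqrt(2) * x1**2 + 2 * x1 * x2) * P - SIGMA,
        1.0 / (np.sqrt(2) * x2 + x1) * P - SIGMA,
    ]
    # Only violated constraints (g_i > 0) contribute to the penalty.
    penalty = sum(max(0.0, gi) ** 2 for gi in g)
    return volume + weight * penalty

# Near the optimum reported in the literature (~263.9), all constraints hold.
print(truss_penalized([0.7887, 0.4082]))
```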
The experimental results are shown in Figure 16 and Table 15. From Table 15, it can be observed that LSEWOA significantly outperforms other algorithms in terms of stability, with the best optimization accuracy among all algorithms. This indicates that LSEWOA has a significant advantage when handling such optimization problems.

8.2. Tension/Compression Spring

The tension/compression spring, as shown in Figure 17, plays a crucial role in modern industry, with widespread applications in fields such as automotive, home appliances, and electronics. Design optimization of the spring not only helps improve product performance and extend service life but also reduces costs and enhances manufacturing efficiency. Through reasonable design optimization, the spring can achieve optimal performance in dynamic working environments and meet various stringent requirements. The optimization objective of the tension/compression spring design problem is to minimize the spring's mass. The problem must be solved under constraints on shear stress, deflection, surge frequency, and outer diameter. There are three design variables in this problem: the wire diameter d, the mean coil diameter D, and the number of active coils N, together with four constraints, g1 to g4. The mathematical model of the problem is as follows,
Variable:
$$x = [d, D, N] = [x_1, x_2, x_3]$$
Minimize:
$$f(x) = (x_3 + 2)\, x_2 x_1^2$$
Subject to:
$$g_1(x) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \le 0$$
$$g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \le 0$$
$$g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \le 0$$
$$g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0$$
Variable range:
$$0.05 \le x_1 \le 2,\quad 0.25 \le x_2 \le 1.3,\quad 2.0 \le x_3 \le 15$$
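A direct transcription of this model into Python is shown below; it simply evaluates the mass and the four constraints at a candidate design, which is all a metaheuristic needs from the problem. The near-optimal point used in the check is a well-known result from the literature, not one of this paper's outputs:

```python
def spring_mass(x):
    """Mass of the spring: f(x) = (N + 2) * D * d^2."""
    d, D, N = x
    return (N + 2) * D * d**2

def spring_constraints(x):
    """Constraint values g1..g4; g_i <= 0 means the design is feasible."""
    d, D, N = x
    return [
        1 - (D**3 * N) / (71785 * d**4),                      # minimum deflection
        (4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
        + 1 / (5108 * d**2) - 1,                              # shear stress
        1 - 140.45 * d / (D**2 * N),                          # surge frequency
        (d + D) / 1.5 - 1,                                    # outer diameter
    ]

# A well-known near-optimal design; g1 and g2 are active (close to 0) here.
x_best = [0.051689, 0.356718, 11.288966]
print(spring_mass(x_best), spring_constraints(x_best))
```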
The experimental results are presented in Figure 18 and Table 15. As shown in Table 15, the stability of LSEWOA in the Tension/Compression Spring design problem significantly surpasses other algorithms, and it achieves the highest optimization accuracy among all algorithms. This indicates that LSEWOA has a significant advantage in handling such optimization problems.

8.3. Speed Reducer

A speed reducer is a mechanical transmission device and one of the key components of a gearbox, shown in Figure 19. It is primarily used to reduce the rotational speed of an electric motor or other power sources while increasing the output torque. The reducer achieves this speed reduction through gears, worm gears, or other transmission mechanisms. It is typically applied in situations where there is a need to decrease the rotational speed, increase torque, or adjust the direction of motion.
In the optimization design of a reducer, the goal is to minimize the weight of the reducer. This problem involves seven variables, which are as follows: the width of the gear teeth x 1 , the gear module x 2 , the number of teeth on the small gear x 3 , the length of the first shaft between the bearings x 4 , the length of the second shaft between the bearings x 5 , the diameter of the first shaft x 6 , and the diameter of the second shaft x 7 . Furthermore, this problem also involves eleven constraints, g 1 to g 11 . The mathematical formulation of the problem is as follows,
Variable:
$$x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7]$$
Minimize:
$$y = f(x)$$
Subject to:
$$g_1 = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0$$
$$g_2 = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0$$
$$g_3 = \frac{1.93 x_4^3}{x_2 x_6^4 x_3} - 1 \le 0$$
$$g_4 = \frac{1.93 x_5^3}{x_2 x_7^4 x_3} - 1 \le 0$$
$$g_5 = \frac{\sqrt{16.91 \times 10^6 + \left( \frac{745 x_4}{x_2 x_3} \right)^2}}{110 x_6^3} - 1 \le 0$$
$$g_6 = \frac{\sqrt{157.5 \times 10^6 + \left( \frac{745 x_5}{x_2 x_3} \right)^2}}{85 x_7^3} - 1 \le 0$$
$$g_7 = \frac{x_2 x_3}{40} - 1 \le 0$$
$$g_8 = \frac{5 x_2}{x_1} - 1 \le 0$$
$$g_9 = \frac{x_1}{12 x_2} - 1 \le 0$$
$$g_{10} = \frac{1.5 x_6 + 1.9}{x_4} - 1 \le 0$$
$$g_{11} = \frac{1.1 x_7 + 1.9}{x_5} - 1 \le 0$$
Variable range:
$$2.6 \le x_1 \le 3.6;\ 0.7 \le x_2 \le 0.8;\ 17 \le x_3 \le 28;\ 7.3 \le x_4 \le 8.3;$$
$$7.3 \le x_5 \le 8.3;\ 2.9 \le x_6 \le 3.9;\ 5 \le x_7 \le 5.5$$
The experimental results are presented in Figure 20 and Table 15. As shown in Table 15, the stability of LSEWOA in the Speed Reducer design problem significantly surpasses other algorithms, and it achieves the highest optimization accuracy among all algorithms. This indicates that LSEWOA has a significant advantage in handling such optimization problems.

8.4. Cantilever Beam

A cantilever beam is a common structural form, fixed at one end and free at the other, as shown in Figure 21. The cantilever beam design problem is a classic engineering structural optimization problem, with the objective of minimizing material usage or beam weight while satisfying constraints on strength, stability, and other factors. This optimization problem is widely used in civil engineering, mechanical design, and aerospace fields.
The cantilever beam consists of five hollow square cross-section units. As shown in Figure 21, each unit is defined by one variable, and the thickness is constant. Therefore, the design problem includes five structural parameters, which correspond to five decision variables, denoted as s 1 , s 2 , s 3 , s 4 , s 5 . The objective function for the cantilever beam design problem can be expressed as:
Variable:
$$x = [s_1, s_2, s_3, s_4, s_5] = [x_1, x_2, x_3, x_4, x_5]$$
Minimize:
$$f(x) = 0.0624 (x_1 + x_2 + x_3 + x_4 + x_5)$$
Subject to:
$$g(x) = \frac{61}{x_1^3} + \frac{37}{x_2^3} + \frac{19}{x_3^3} + \frac{7}{x_4^3} + \frac{1}{x_5^3} - 1 \le 0$$
Variable range:
$$0.01 \le x_i \le 100,\quad i = 1, \ldots, 5$$
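This problem is compact enough to transcribe in a few lines; the sketch below evaluates the weight and the single constraint. The trial design is a near-optimal point commonly reported in the literature, used here only to sanity-check the transcription:

```python
def cantilever_weight(x):
    """Weight of the five-segment cantilever beam (to be minimized)."""
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    """Single constraint g(x) <= 0 from the problem statement."""
    x1, x2, x3, x4, x5 = x
    return 61 / x1**3 + 37 / x2**3 + 19 / x3**3 + 7 / x4**3 + 1 / x5**3 - 1

# A near-optimal design from the literature; the constraint is active (~0).
x_best = [6.016, 5.309, 4.494, 3.502, 2.153]
print(cantilever_weight(x_best), cantilever_constraint(x_best))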
The experimental results are presented in Figure 22 and Table 15. As shown in Table 15, the stability of LSEWOA in the Cantilever Beam design problem significantly surpasses other algorithms, and it achieves the highest optimization accuracy among all algorithms. This indicates that LSEWOA has a significant advantage in handling such optimization problems.

8.5. I-Beam

An I-beam, named for its cross-sectional shape resembling the letter ‘I’, is a type of steel with high strength and low self-weight. It is widely used in various engineering structures. Its superior mechanical properties make it applicable in multiple fields, particularly in structures subjected to bending moments and axial forces. The objective of I-beam design optimization is to select the geometric parameters of the I-beam (such as width, height, thickness, etc.) in a way that maximizes its performance. This typically involves maximizing its load-bearing capacity, minimizing material usage, controlling structural deformations, and reducing costs. Optimizing I-beam design in engineering can enhance the safety, economy, and efficiency of structures. As shown in Figure 23, the I-beam design optimization problem involves four variables ( x 1 , x 2 , x 3 and x 4 ) and two constraints ( g 1 and g 2 ). x 1 , x 2 , x 3 and x 4 represent the web height, flange width, web thickness, and flange thickness of the I-beam, respectively. The objective function for the I-beam design problem can be described as:
Variable:
$$x = [x_1, x_2, x_3, x_4]$$
Minimize (the vertical deflection of the beam):
$$f(x) = \frac{5000}{\frac{x_3 (x_1 - 2 x_4)^3}{12} + \frac{x_2 x_4^3}{6} + 2 x_2 x_4 \left( \frac{x_1 - x_4}{2} \right)^2}$$
Subject to:
$$g_1(x) = 2 x_2 x_4 + x_3 (x_1 - 2 x_4) - 300 \le 0$$
$$g_2(x) = \frac{18 \times 10^4\, x_1}{x_3 (x_1 - 2 x_4)^3 + 2 x_2 x_4 \left[ 4 x_4^2 + 3 x_1 (x_1 - 2 x_4) \right]} - 6 \le 0$$
Variable range:
$$10 \le x_1 \le 80;\ 10 \le x_2 \le 50;\ 0.9 \le x_3 \le 5;\ 0.9 \le x_4 \le 5$$
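Under the deflection form of the objective given above (the standard formulation of this benchmark), the model transcribes directly; the stress bound of 6 kN/cm² in g2 is taken from that standard formulation:

```python
def ibeam_deflection(x):
    """Vertical deflection of the I-beam (the objective to be minimized)."""
    x1, x2, x3, x4 = x  # web height, flange width, web thickness, flange thickness
    moment_term = (x3 * (x1 - 2 * x4) ** 3) / 12 \
        + (x2 * x4 ** 3) / 6 \
        + 2 * x2 * x4 * ((x1 - x4) / 2) ** 2
    return 5000.0 / moment_term

def ibeam_constraints(x):
    """g1 (cross-sectional area <= 300 cm^2) and g2 (bending stress <= 6)."""
    x1, x2, x3, x4 = x
    g1 = 2 * x2 * x4 + x3 * (x1 - 2 * x4) - 300
    g2 = 18e4 * x1 / (x3 * (x1 - 2 * x4) ** 3
                      + 2 * x2 * x4 * (4 * x4**2 + 3 * x1 * (x1 - 2 * x4))) - 6
    return [g1, g2]

# A near-optimal design from the literature: deflection ~0.0131, g1 active.
x_best = [80.0, 50.0, 0.9, 2.32]
print(ibeam_deflection(x_best), ibeam_constraints(x_best))
```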
The experimental results are shown in Figure 24 and Table 15. As shown in Table 15, LSEWOA significantly outperforms the other algorithms in terms of both optimization accuracy and stability on the I-beam design problem, which indicates that LSEWOA has a significant advantage in handling such optimization problems.

8.6. Piston Lever

The piston lever is a typical mechanical structure, as shown in Figure 25, and its design problem is a classic engineering optimization problem. It involves the design adjustment of multiple geometric and mechanical parameters, with the goal of minimizing material usage or structural weight while ensuring that constraints such as strength and stability are satisfied. This type of optimization problem is widely applied in mechanical engineering, vehicle design, and other industrial scenarios, particularly when aiming for lightweight and efficient designs of moving components.
In the piston lever optimization problem, the objective is to minimize the total material consumption of the piston lever while ensuring that the structural strength and performance meet the design requirements. The geometric structure of the piston lever is defined by several design parameters, which describe the relationships of its key dimensions. From the geometric relationships, the variables x 1 to x 4 can be interpreted as follows: x 1 , x 2 are the main length and width parameters of the geometric structure, controlling the overall lever arm; x 3 is the cross-sectional radius at the point of force application, influencing the force distribution; x 4 is a geometric dimension related to the support point. The objective function for the piston lever design problem can be described as:
Variable:
$$x = [x_1, x_2, x_3, x_4]$$
Minimize:
$$f(x) = \frac{1}{4} \pi x_3^2 (L_2 - L_1)$$
Subject to:
$$g_1(x) = Q L \cos\theta - R F \le 0$$
$$g_2(x) = Q (L - x_4) - M_{max} \le 0$$
$$g_3(x) = 1.2 (L_2 - L_1) - L_1 \le 0$$
$$g_4(x) = \frac{x_3}{2} - x_2 \le 0$$
Variable range:
$$0.05 \le x_1 \le 500;\ 0.05 \le x_2 \le 500;\ 0.05 \le x_4 \le 500;\ 0.05 \le x_3 \le 120$$
Where:
$$Q = 10000;\quad P = 1500;\quad L = 240;\quad M_{max} = 1.8 \times 10^6$$
$$L_1 = \sqrt{(x_4 - x_2)^2 + x_1^2};\quad L_2 = \sqrt{(x_4 \sin\theta + x_1)^2 + (x_2 - x_4 \cos\theta)^2}$$
$$R = \frac{\left| -x_4 (x_4 \sin\theta + x_1) + x_1 (x_2 - x_4 \cos\theta) \right|}{\sqrt{(x_4 - x_2)^2 + x_1^2}}$$
$$F = \frac{1}{4} \pi P x_3^2$$
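For illustration, the geometry helpers and the first constraint can be coded as follows; the lever angle θ is not specified in the text, so the sketch assumes the 45° value used in the standard version of this benchmark:

```python
import math

# Constants from the formulation above; theta is our assumption (45 degrees).
Q, P, L, M_MAX = 10000.0, 1500.0, 240.0, 1.8e6
THETA = math.radians(45)

def piston_geometry(x):
    """L1, L2, and R from the 'Where' definitions above."""
    x1, x2, x3, x4 = x
    l1 = math.sqrt((x4 - x2) ** 2 + x1 ** 2)
    l2 = math.sqrt((x4 * math.sin(THETA) + x1) ** 2
                   + (x2 - x4 * math.cos(THETA)) ** 2)
    r = abs(-x4 * (x4 * math.sin(THETA) + x1)
            + x1 * (x2 - x4 * math.cos(THETA))) / l1
    return l1, l2, r

def piston_objective(x):
    """Oil volume 0.25 * pi * x3^2 * (L2 - L1), to be minimized."""
    l1, l2, _ = piston_geometry(x)
    return 0.25 * math.pi * x[2] ** 2 * (l2 - l1)

def piston_g1(x):
    """Force balance: Q * L * cos(theta) - R * F <= 0."""
    _, _, r = piston_geometry(x)
    force = 0.25 * math.pi * P * x[2] ** 2
    return Q * L * math.cos(THETA) - r * force

x_trial = [100.0, 100.0, 100.0, 100.0]  # an arbitrary point inside the bounds
print(piston_objective(x_trial), piston_g1(x_trial))
```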
The experimental results are shown in Figure 26 and Table 15. From Table 15, it can be seen that in the piston lever design optimization problem, LSEWOA significantly outperforms other algorithms in both optimization accuracy and stability. This indicates that LSEWOA has a significant advantage when dealing with such optimization problems.

8.7. Multi-Disc Clutch Brake

The multiple-disc clutch brake is commonly used in transmission systems and mechanical devices to enhance the performance and efficiency of the clutch brake [44]. The structure of the multiple-disc clutch brake is shown in Figure 27. The objective of the multiple-disc clutch brake optimization problem is to minimize the system's cost by adjusting the design parameters of the clutch brake system (such as the thickness of the discs, the inner and outer radii, the actuating force, and the number of friction surfaces) while satisfying various constraints. The problem involves five design variables and eight constraints. The meanings of the variables x1 to x5 are as follows: x1 is the inner radius; x2 is the outer radius; x3 is the thickness of the discs; x4 is the actuating force; x5 is the number of friction surfaces. The objective function for the multiple-disc clutch brake design problem can be described as:
Variable:
$$x = [x_1, x_2, x_3, x_4, x_5]$$
Minimize:
$$y = f(x)$$
Subject to:
$$g_1 = \Delta r + x_1 - x_2 \le 0$$
$$g_2 = (x_5 + 1)(x_3 + \delta) - l_{max} \le 0$$
$$g_3 = P_{rz} - P_{max} \le 0$$
$$g_4 = P_{rz} \cdot V_{sr} - P_{max} \cdot V_{sr,max} \le 0$$
$$g_5 = V_{sr} - V_{sr,max} \le 0$$
$$g_6 = T - T_{max} \le 0$$
$$g_7 = s \cdot M_s - M_h \le 0$$
$$g_8 = -T \le 0$$
Where:
$$f(x) = \pi x_3 \rho \left( x_2^2 - x_1^2 \right) (x_5 + 1)$$
$$M_h = \frac{2}{3} \mu F x_5 \frac{x_2^3 - x_1^3}{x_2^2 - x_1^2}$$
$$P_{rz} = \frac{F}{\pi \left( x_2^2 - x_1^2 \right)}$$
$$V_{sr} = \frac{2 \pi n \left( x_2^3 - x_1^3 \right)}{90 \left( x_2^2 - x_1^2 \right)}$$
$$T = \frac{I_z \pi n}{30 (M_h + M_f)}$$
$$P_{rz} = \frac{x_4}{\pi \left( x_2^2 - x_1^2 \right)}$$
$$V_{sr} = \frac{\pi R_{sr} n}{30}$$
$$R_{sr} = \frac{2}{3} \cdot \frac{x_2^3 - x_1^3}{x_2^2 - x_1^2}$$
$$\Delta r = 20;\ t_{max} = 3;\ t_{min} = 1.5;\ l_{max} = 30;\ Z_{max} = 10;\ V_{sr,max} = 10;\ \mu = 0.6;\ \delta = 0.5;\ M_s = 40;$$
$$M_f = 3;\ n = 250;\ P_{max} = 1;\ I_z = 55;\ T_{max} = 15;\ F_{max} = 1000;\ r_{i,min} = 55;\ r_{o,max} = 110;\ \rho = 0.0000078$$
Variable range:
$$60 \le x_1 \le 80;\ 90 \le x_2 \le 110;\ 1 \le x_3 \le 3;\ 0 \le x_4 \le 1000;\ 2 \le x_5 \le 9$$
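The cost function itself is a one-liner; the sketch below evaluates it together with the first geometric constraint (the trial design is merely feasible for these two expressions, not a result claimed by the paper):

```python
import math

RHO = 0.0000078  # material density constant from the problem statement

def clutch_cost(x):
    """f(x) = pi * x3 * rho * (x2^2 - x1^2) * (x5 + 1), to be minimized."""
    x1, x2, x3, x4, x5 = x
    return math.pi * x3 * RHO * (x2**2 - x1**2) * (x5 + 1)

def clutch_g1(x, delta_r=20.0):
    """g1 <= 0: the outer radius must exceed the inner radius by delta_r."""
    x1, x2, _, _, _ = x
    return delta_r + x1 - x2

x_trial = [70, 90, 1, 810, 2]
print(clutch_cost(x_trial), clutch_g1(x_trial))
```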
The experimental results are shown in Figure 28 and Table 15. From Table 15, it can be observed that in the multiple-disc clutch brake optimization problem, LSEWOA significantly outperforms other algorithms in both optimization accuracy and stability.

8.8. Gas Transmission System

The gas transmission system is a crucial component of the modern energy supply chain, widely used in various industries, urban natural gas supply, and multinational energy transportation. Since the transportation of natural gas relies on gas transmission compressors and pipeline networks, the design optimization of these devices is essential for ensuring energy transmission efficiency and reducing energy waste. The objective of the gas transmission compressor optimization problem is to design and optimize the parameters of the natural gas transmission compressor so that it delivers optimal performance under different working conditions, reduces energy consumption, extends service life, and minimizes costs. As shown in Figure 29, the problem involves four design variables and one constraint. The meanings of the variables x1 to x4 are: x1 indicates the length between compressor stations; x2 indicates the compression ratio of the compressor; x3 indicates the pipe inside diameter; x4 indicates the gas speed on the output side. The mathematical model of the gas transmission compressor optimization problem is as follows:
Variable:
$$x = [x_1, x_2, x_3, x_4]$$
Minimize:
$$y = 8.61 \times 10^5\, x_1^{1/2} x_2\, x_3^{-2/3} x_4^{-1/2} + 3.69 \times 10^4\, x_3 + 7.72 \times 10^8\, x_1^{-1} x_2^{0.219} - 765.43 \times 10^6\, x_1^{-1}$$
Subject to:
$$g = x_4 x_2^{-2} + x_2^{-2} - 1 \le 0$$
Variable range:
$$20 < x_1 < 50;\quad 1 < x_2 < 10;\quad 20 < x_3 < 45;\quad 0.1 < x_4 < 60$$
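Because the objective is a closed-form expression, the whole problem fits in a few lines of Python; the trial point below is only an illustrative, nearly feasible design, not a result reported by the paper:

```python
def gas_cost(x):
    """Gas transmission compressor cost function (to be minimized)."""
    x1, x2, x3, x4 = x
    return (8.61e5 * x1**0.5 * x2 * x3**(-2 / 3) * x4**(-0.5)
            + 3.69e4 * x3
            + 7.72e8 / x1 * x2**0.219
            - 765.43e6 / x1)

def gas_constraint(x):
    """g(x) <= 0 couples the output-side gas speed to the compression ratio."""
    _, x2, _, x4 = x
    return x4 / x2**2 + 1 / x2**2 - 1

x_trial = [50.0, 1.178, 24.6, 0.385]
print(gas_cost(x_trial), gas_constraint(x_trial))
```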
The experimental results are shown in Figure 30 and Table 15. From Table 15, it can be seen that in the Gas Transmission Compressor optimization problem, LSEWOA significantly outperforms other algorithms in both optimization accuracy and stability.

8.9. Industrial Refrigeration System

In chemical plant design, the industrial refrigeration system is one of the key auxiliary facilities. It is widely used in chemical production processes, especially in operations such as chemical reactions, storage, transportation, and refining, where temperature control and heat exchange are critical. Chemical plants often require significant cooling capacity to maintain reaction stability, ensure product quality, reduce energy consumption and emissions, and keep equipment functioning properly. The industrial refrigeration system design problem, shown in Figure 31, focuses on minimizing energy consumption and cost while ensuring efficient cooling performance. The objective is to configure the system components, such as compressors, condensers, and evaporators, to achieve the lowest operating cost and optimal heat-exchange efficiency. The problem includes fourteen variables: x1 and x2 are the compressor powers, which control the cooling capacity; x3 through x6 are the refrigerant flow rates and mass flows through the condensers, evaporators, and receivers; x7 and x8 are the sizing parameters of the condenser and evaporator; x9 and x10 define the compression degree and compressor efficiency; x11 and x12 manage the temperature differentials for heat exchange; and x13 and x14 govern the flow rate of cooling water or refrigerant, affecting overall system performance. The industrial refrigeration system design problem is modeled below.
Variable:
$$x = [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_{10}, x_{11}, x_{12}, x_{13}, x_{14}]$$
Minimize:
$$y = f(x)$$
Subject to:
$$g_1 = \frac{1.524}{x_7} - 1 \le 0$$
$$g_2 = \frac{1.524}{x_8} - 1 \le 0$$
$$g_3 = 0.07789 x_1 - \frac{2 x_9}{x_7} - 1 \le 0$$
$$g_4 = \frac{7.05305 x_1^2 x_{10}}{x_9 x_8 x_2 x_{14}} - 1 \le 0$$
$$g_5 = \frac{0.0833 x_{14}}{x_{13}} - 1 \le 0$$
$$g_6 = \frac{47.136 x_2^{0.333} x_{12}}{x_{10}} - 1.333 x_8 x_{13}^{2.1195} + \frac{62.08 x_{13}^{2.1195} x_{10}^{0.2}}{x_8 x_{12}} - 1 \le 0$$
$$g_7 = 0.04771 x_{10} x_8^{1.8812} x_{12}^{0.3424} - 1 \le 0$$
$$g_8 = 0.0488 x_9 x_7^{1.893} x_{11}^{0.316} - 1 \le 0$$
$$g_9 = \frac{0.0099 x_1}{x_3} - 1 \le 0$$
$$g_{10} = \frac{0.0193 x_2}{x_4} - 1 \le 0$$
$$g_{11} = \frac{0.0298 x_1}{x_5} - 1 \le 0$$
$$g_{12} = \frac{0.056 x_2}{x_6} - 1 \le 0$$
$$g_{13} = \frac{2}{x_9} - 1 \le 0$$
$$g_{14} = \frac{2}{x_{10}} - 1 \le 0$$
$$g_{15} = \frac{x_{12}}{x_{11}} - 1 \le 0$$
Where:
$$f(x) = 63098.88 x_2 x_4 x_{12} + 5441.5 x_2^2 x_{12} + 115055.5 x_2^{1.664} x_6 + 6172.27 x_2^2 x_6 + 63098.88 x_1 x_3 x_{11} + 5441.5 x_1^2 x_{11}$$
$$+ 115055.5 x_1^{1.664} x_5 + 6172.27 x_1^2 x_5 + 140.53 x_1 x_{11} + 281.29 x_3 x_{11} + 70.26 x_1^2 + 281.29 x_1 x_3 + 281.29 x_3^2$$
$$+ \frac{14437 x_8^{1.8812} x_{12}^{0.3424} x_{10} x_1^2 x_7}{x_{14} x_9} + 20470.2 x_7^{2.893} x_{11}^{0.316} x_{12}$$
Variable range:
$$0.001 < x_i < 5,\quad i = 1, \ldots, 14$$
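Given the closed-form objective and constraints above, a metaheuristic only needs a scalar evaluation function. The sketch below shows the objective plus the first few constraints wired into a static penalty; the penalty scheme and weight are illustrative assumptions, and the remaining constraints transcribe in exactly the same way:

```python
def refrigeration_cost(x):
    """Closed-form objective of the industrial refrigeration system design."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14 = x
    return (63098.88 * x2 * x4 * x12 + 5441.5 * x2**2 * x12
            + 115055.5 * x2**1.664 * x6 + 6172.27 * x2**2 * x6
            + 63098.88 * x1 * x3 * x11 + 5441.5 * x1**2 * x11
            + 115055.5 * x1**1.664 * x5 + 6172.27 * x1**2 * x5
            + 140.53 * x1 * x11 + 281.29 * x3 * x11 + 70.26 * x1**2
            + 281.29 * x1 * x3 + 281.29 * x3**2
            + 14437 * x8**1.8812 * x12**0.3424 * x10 * x1**2 * x7 / (x14 * x9)
            + 20470.2 * x7**2.893 * x11**0.316 * x12)

def refrigeration_constraints(x):
    """First four of the fifteen constraints g_i(x) <= 0 (the rest are analogous)."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14 = x
    return [
        1.524 / x7 - 1,
        1.524 / x8 - 1,
        0.07789 * x1 - 2 * x9 / x7 - 1,
        7.05305 * x1**2 * x10 / (x9 * x8 * x2 * x14) - 1,
    ]

def penalized_cost(x, weight=1e9):
    """Static-penalty wrapper: only violated constraints (g_i > 0) are penalized."""
    violation = sum(max(0.0, g) ** 2 for g in refrigeration_constraints(x))
    return refrigeration_cost(x) + weight * violation

print(penalized_cost([1.0] * 14))
```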
The experimental results are presented in Figure 32 and Table 15. The results demonstrate that LSEWOA consistently escapes local optima, continuing to search for better solutions even when the other algorithms are trapped in sub-optimal states. Compared with the other algorithms, LSEWOA shows exceptional stability and accuracy in solution-seeking. This indicates that LSEWOA has a significant advantage when dealing with such optimization problems.

9. Conclusions

The Whale Optimization Algorithm (WOA) suffers from several issues, including premature convergence, low population diversity, slow convergence speed, low convergence accuracy, and an imbalance between exploration and exploitation. These drawbacks make WOA less competitive in solving complex, real-world optimization problems. To address these limitations, this paper presents an enhanced WOA, LSEWOA, which integrates multiple strategies aimed at better balancing exploration and exploitation, improving convergence speed, and enhancing optimization accuracy.
In the ablation study, we validated the significance of the five improvement strategies incorporated into LSEWOA. A qualitative analysis experiment was conducted to examine the search behavior of LSEWOA across different functions, the ratio of exploitation to exploration, and the population diversity. The results demonstrated that LSEWOA effectively explored the solution space and identified either optimal or near-optimal solutions. The exploitation-exploration ratio charts show that LSEWOA successfully balanced exploration and exploitation. The population diversity curves indicate that LSEWOA maintained high population diversity, avoided premature convergence, and kept searching until it approached an optimal solution.
In the comparison experiments, LSEWOA was tested on the 23 selected classic benchmark functions alongside superior WOA variants and other state-of-the-art metaheuristic algorithms in 30/50/100 dimensions. The results over 30 runs show that LSEWOA was highly competitive, achieving optimal or near-optimal solutions with higher efficiency. In nine engineering design optimization problems, LSEWOA demonstrated strong optimization capability and stability, indicating that it can be used as an optimization tool for addressing complex real-world problems. LSEWOA provides new insights into the application of WOA in real-world scenarios.
In future research, we will conduct further studies by rigorously testing prototypes of various mechanical components, validating against real-world scenarios, and incorporating practical constraints into the optimization process to achieve more reliable and effective mechanical design. Ultimately, LSEWOA aims to improve the reliability and effectiveness of engineering design, aligning with contemporary industrial demands. We recommend LSEWOA as a tool for design, simulation, and manufacturing, for use by researchers and practitioners in the field. We will also explore more application scenarios of LSEWOA and extend its use to more challenging problems, such as path planning, multi-objective optimization, constrained optimization, and parameter optimization.

Author Contributions

Conceptualization, J.W.; methodology, J.W.; software, J.W.; validation, J.W.; formal analysis, Y.Y. and J.W.; investigation, Z.L.; resources, N.C.; data curation, B.L., S.P. and J.W.; writing—original draft preparation, J.W.; writing—review and editing, J.W. and N.C.; visualization, J.W. and Y.G.; supervision, N.C.; project administration, B.L.; funding acquisition, N.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research and the APC were funded by Macao Polytechnic University (MPU Grant No.: RP/FCA-06/2022) and the Macao Science and Technology Development Fund (FDCT Grant No.: 0044/2023/ITP2).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No dataset was used in this research.

Acknowledgments

The support provided by Macao Polytechnic University (MPU Grant No.: RP/FCA-06/2022) and the Macao Science and Technology Development Fund (FDCT Grant No.: 0044/2023/ITP2) enabled us to conduct data collection, analysis, and interpretation, as well as to cover expenses related to research materials and participant recruitment. The investment of MPU and FDCT in our work has significantly contributed to the quality and impact of our research findings.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
Ave | Average fitness
Std | Standard deviation
OE | Overall effectiveness

Appendix A

Table A1. Standard benchmark functions [31].
Function | Function's Name | Best Value
$F_1(x) = \sum_{i=1}^{n} x_i^2$ | Sphere | 0
$F_2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | Schwefel's Problem 2.22 | 0
$F_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | Schwefel's Problem 1.2 | 0
$F_4(x) = \max_{1 \le i \le n} |x_i|$ | Schwefel's Problem 2.21 | 0
$F_5(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | Generalized Rosenbrock's Function | 0
$F_6(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ | Step Function | 0
$F_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | Quartic Function | 0
$F_8(x) = \sum_{i=1}^{n} -x_i \sin\left( \sqrt{|x_i|} \right)$ | Generalized Schwefel's Function | −12,569.5
$F_9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | Generalized Rastrigin's Function | 0
$F_{10}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos 2\pi x_i \right) + 20 + e$ | Ackley's Function | 0
$F_{11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | Generalized Griewank's Function | 0
$F_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = k (x_i - a)^m$ if $x_i > a$; $0$ if $-a \le x_i \le a$; $k (-x_i - a)^m$ if $x_i < -a$ | Generalized Penalized Function 1 | 0
$F_{13}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_{i+1}) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | Generalized Penalized Function 2 | 0
$F_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | Shekel's Foxholes Function | 0.998
$F_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | Kowalik's Function | 0.0003075
$F_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | Six-Hump Camel-Back Function | −1.0316
$F_{17}(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10$ | Branin Function | 0.398
$F_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right]$ | Goldstein-Price Function | 3
$F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$ | Hartman's Function 1 | −3.8628
$F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | Hartman's Function 2 | −3.32
$F_{21}(x) = -\sum_{i=1}^{5} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$ | Shekel's Function 1 | −10.1532
$F_{22}(x) = -\sum_{i=1}^{7} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$ | Shekel's Function 2 | −10.4029
$F_{23}(x) = -\sum_{i=1}^{10} \left[ (x - a_i)(x - a_i)^T + c_i \right]^{-1}$ | Shekel's Function 3 | −10.5364
The modeling of the benchmark functions has been uploaded to Figshare; the specific modeling of the standard benchmark functions (D = 30) is available at https://figshare.com/s/aea70ae3f8877f7c8461 for reference and further analysis by the readers.

References

  1. Laarhoven, P.J.M.V.; Aarts, E.H.L. Simulated Annealing; Springer: Eindhoven, The Netherlands, 1987. [Google Scholar]
  2. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  3. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39. [Google Scholar] [CrossRef]
  4. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27–31 March 1995; Volume 4, pp. 1942–1948. [Google Scholar]
  5. Das, S.; Suganthan, P.N. Differential evolution: A survey of the state-of-the-art. IEEE Trans. Evol. Comput. 2010, 15, 4–31. [Google Scholar] [CrossRef]
  6. Cai, J.; Wan, H.; Sun, Y.; Qin, T. Artificial bee colony algorithm-based self-optimization of base station antenna azimuth and down-tilt angle. Telecommun. Sci. 2021, 37, 69–75. [Google Scholar]
  7. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  8. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-qaness, M.A.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  9. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  10. Jia, H.; Rao, H.; Wen, C.; Mirjalili, S. Crayfish optimization algorithm. Artif. Intell. Rev. 2023, 56 (Suppl. S2), 1919–1979. [Google Scholar] [CrossRef]
  11. Hvolby, H.H.; Steger-Jensen, K. Technical and industrial issues of Advanced Planning and Scheduling (APS) systems. Comput. Ind. 2010, 61, 845–851. [Google Scholar] [CrossRef]
  12. Nadimi-Shahraki, M.H.; Fatahi, A.; Zamani, H.; Mirjalili, S.; Abualigah, L. An improved moth-flame optimization algorithm with adaptation mechanism to solve numerical and mechanical engineering problems. Entropy 2021, 23, 1637. [Google Scholar] [CrossRef]
  13. Kabir, M.M.; Shahjahan, M.; Murase, K. A new hybrid ant colony optimization algorithm for feature selection. Expert Syst. Appl. 2012, 39, 3747–3763. [Google Scholar] [CrossRef]
  14. Masoomi, Z.; Mesgari, M.S.; Hamrah, M. Allocation of urban land uses by Multi-Objective Particle Swarm Optimization algorithm. Int. J. Geogr. Inf. Sci. 2013, 27, 542–566. [Google Scholar] [CrossRef]
  15. Wei, J.; Gu, Y.; Law, K.L.E.; Zhang, X.; Li, Z.; Wang, Q.; Liu, Y.; Chen, P.; Zhang, L.; Wang, R.; et al. Adaptive Position Updating Particle Swarm Optimization for UAV Path Planning. In Proceedings of the 2024 22nd International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), Athens, Greece, 14–17 May 2024; pp. 124–131. [Google Scholar]
  16. Shi, X.H.; Liang, Y.C.; Lee, H.P.; Lu, C.; Wang, Q.X. Particle swarm optimization-based algorithms for TSP and generalized TSP. Inf. Process. Lett. 2007, 103, 169–176. [Google Scholar] [CrossRef]
  17. Adegboye, O.R.; Feda, A.K.; Ishaya, M.M.; Agyekum, E.B.; Kim, K.-C.; Mbasso, W.F.; Kamel, S. Antenna S-parameter optimization based on golden sine mechanism based honey badger algorithm with tent chaos. Heliyon 2023, 9, e13087. [Google Scholar] [CrossRef] [PubMed]
  18. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  19. Ibrahim, R.A.; Abd Elaziz, M.; Lu, S. Chaotic opposition-based grey-wolf optimization algorithm based on differential evolution and disruption operator for global optimization. Expert Syst. Appl. 2018, 108, 1–27. [Google Scholar] [CrossRef]
  20. Tu, Q.; Chen, X.; Liu, X. Multi-strategy ensemble grey wolf optimizer and its application to feature selection. Appl. Soft Comput. 2019, 76, 16–30. [Google Scholar] [CrossRef]
  21. Abd Elaziz, M.; Mohammadi, D.; Oliva, D.; Gandomi, A.H.; Al-qaness, M.A.A.; Ewees, A.A. Quantum marine predators algorithm for addressing multilevel image segmentation. Appl. Soft Comput. 2021, 110, 107598. [Google Scholar] [CrossRef]
  22. Xue, Z.; Yu, J.; Zhao, A.; Zong, Y.; Yang, S.; Wang, M. Optimal chiller loading by improved sparrow search algorithm for saving energy consumption. J. Build. Eng. 2023, 67, 105980. [Google Scholar] [CrossRef]
  23. Abdel-Salam, M.; Hu, G.; Çelik, E.; Gharehchopogh, F.S.; El-Hasnony, I.M. Chaotic RIME optimization algorithm with adaptive mutualism for feature selection problems. Comput Biol Med. 2024, 179, 108803. [Google Scholar] [CrossRef] [PubMed]
  24. Chen, H.; Xu, Y.; Wang, M.; Zhao, X. A Balanced Whale Optimization Algorithm for Constrained Engineering Design Problems. Appl. Math. Model. 2019, 71, 45–59. [Google Scholar] [CrossRef]
  25. Guo, W.; Liu, T.; Dai, F.; Xu, P. An improved whale optimization algorithm for forecasting water resources demand. Appl. Soft Comput. 2020, 86, 105925. [Google Scholar] [CrossRef]
  26. Kalananda, V.K.R.A.; Komanapalli, V.L.N. A combinatorial social group whale optimization algorithm for numerical and engineering optimization problems. Appl. Soft Comput. 2021, 99, 106903. [Google Scholar] [CrossRef]
  27. Chakraborty, S.; Sharma, S.; Saha, A.K.; Saha, A. A novel improved whale optimization algorithm to solve numerical optimization and real-world applications. Artif. Intell. Rev. 2022, 55, 4605–4716. [Google Scholar] [CrossRef]
  28. Xiao, C.; Cai, Z.; Wang, Y. A good nodes set evolution strategy for constrained optimization. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation, Singapore, 25–28 September 2007; pp. 943–950. [Google Scholar]
  29. Shi, Y.; Eberhart, R. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation Proceedings, IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360), Anchorage, AK, USA, 4–9 May 1998; pp. 69–73. [Google Scholar]
  30. Layeb, A. Tangent search algorithm for solving optimization problems. Neural Comput. Appl. 2022, 34, 8853–8884. [Google Scholar] [CrossRef]
  31. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.-P.; Auger, A.; Tiwari, S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Kangal Rep. 2005, 2005, 2005005. [Google Scholar]
  32. Chakraborty, S.; Saha, A.K.; Chakraborty, R.; Saha, S. An enhanced whale optimization algorithm for large scale optimization problems. Knowl.-Based Syst. 2021, 233, 107543. [Google Scholar] [CrossRef]
  33. Lin, X.; Yu, X.; Li, W. A heuristic whale optimization algorithm with niching strategy for global multi-dimensional engineering optimization. Comput. Ind. Eng. 2022, 171, 108361. [Google Scholar] [CrossRef]
  34. Yang, W.; Xia, K.; Fan, S.; Wang, L.; Li, T.; Zhang, J.; Feng, Y. A Multi-Strategy Whale Optimization Algorithm and Its Application. Eng. Appl. Artif. Intell. 2022, 108, 104558. [Google Scholar] [CrossRef]
  35. Anitha, J.; Pian, S.I.A.; Agnes, S.A. An efficient multilevel color image thresholding based on modified whale optimization algorithm. Expert Syst. Appl. 2021, 178, 115003. [Google Scholar] [CrossRef]
  36. Liu, M.; Yao, X.; Li, Y. Hybrid whale optimization algorithm enhanced with Lévy flight and differential evolution for job shop scheduling problems. Appl. Soft Comput. 2020, 87, 105954. [Google Scholar] [CrossRef]
  37. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S.; Faris, H. MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems. Appl. Soft Comput. 2020, 97, 106761. [Google Scholar] [CrossRef]
  38. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  39. Trojovská, E.; Dehghani, M.; Trojovský, P. Zebra optimization algorithm: A new bio-inspired optimization algorithm for solving optimization algorithm. IEEE Access 2022, 10, 49445–49473. [Google Scholar] [CrossRef]
  40. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  41. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  42. Cymerys, K.; Oszust, M. Attraction-Repulsion Optimization Algorithm for Global Optimization Problems. Swarm Evol. Comput. 2024, 84, 101459. [Google Scholar] [CrossRef]
  43. Su, H.; Zhao, D.; Heidari, A.A.; Liu, L.; Zhang, X.; Mafarja, M.; Chen, H. RIME: A physics-based optimization. Neurocomputing 2023, 532, 183–214. [Google Scholar] [CrossRef]
  44. Yildiz, B.S.; Pholdee, N.; Bureerat, S.; Yildiz, A.R.; Sait, S.M. Enhanced grasshopper optimization algorithm using elite opposition-based learning for solving real-world engineering problems. Eng. Comput. 2022, 38, 4207–4219. [Google Scholar] [CrossRef]
Figure 1. The linearly decreasing convergence factor a.
Figure 2. Population initialized by pseudo-random number (N = 300). The portions enclosed by the dashed black line represent the phenomenon of population aggregation.
Figure 3. Population initialized by Good Nodes Set (N = 300).
Figure 4. Leader-Followers Search-for-Prey Strategy.
Figure 5. Simulation of Spiral flight.
Figure 6. Comparison of different types of inertia weight ω. Regular ω1: ω = 0.9·(t/T); Regular ω2: ω = 0.9·(t/T)²; Regular ω3: ω = 0.9·√(t/T).
Figure 7. Comparison of Levy flight and Tangent flight.
Figure 8. Comparison of the original convergence factor a and the proposed a.
Figure 9. Iteration curves of the LSEWOAs in the ablation study.
Figure 10. Results of qualitative analysis experiment (F1–F8).
Figure 11. Results of qualitative analysis experiment (F9–F16).
Figure 12. Results of qualitative analysis experiment (F17–F23).
Figure 13. Iterative curves of different WOA variants in the comparison experiment in 30 dimensions.
Figure 14. Iterative curves of different algorithms in the comparison experiment in 30 dimensions.
Figure 15. The structure of a three-bar truss.
Figure 16. Iteration curves of the algorithms in Three-bar Truss design.
Figure 17. The structure of a tension/compression spring.
Figure 18. Iteration curves of the algorithms in Tension/Compression Spring design.
Figure 19. The structure of a speed reducer.
Figure 20. Iteration curves of the algorithms in Speed Reducer design.
Figure 21. The structure of a cantilever beam.
Figure 22. Iteration curves of the algorithms in Cantilever Beam design.
Figure 23. The structure of an I-beam.
Figure 24. Iteration curves of the algorithms in I-beam design.
Figure 25. The structure of a piston lever.
Figure 26. Iteration curves of the algorithms in Piston Lever design.
Figure 27. The structure of a multiple-disc clutch brake.
Figure 28. Iteration curves of the algorithms in Multiple-disc Clutch Brake design.
Figure 29. The structure of a gas transmission system.
Figure 30. Iteration curves of the algorithms in Gas Transmission System design.
Figure 31. The structure of an industrial refrigeration system.
Figure 32. Iteration curves of the algorithms in Industrial Refrigeration System design.
Table 1. Basic metaheuristic algorithms.
Algorithm | Year | Author | Source of Inspiration
Simulated Annealing (SA) [1] | 1953 | Metropolis et al. | The annealing process.
Genetic Algorithm (GA) [2] | 1975 | John Holland et al. | Darwin's theory of evolution and Mendelian genetics.
Ant Colony Optimization (ACO) [3] | 1991 | Dorigo et al. | Foraging behavior of ants.
Particle Swarm Optimization (PSO) [4] | 1995 | Kennedy et al. | Foraging behavior of birds.
Differential Evolution (DE) [5] | 1997 | Rainer Storn et al. | Mutation, crossover, and selection.
Artificial Bee Colony (ABC) [6] | 2005 | Karaboga et al. | Honeybees' foraging behavior.
Harris Hawks Optimization (HHO) [7] | 2019 | Heidari et al. | Hunting behavior of Harris hawks.
Aquila Optimizer (AO) [8] | 2021 | Laith Abualigah et al. | Hunting behavior of Aquila eagles.
Beluga Whale Optimization (BWO) [9] | 2022 | C. Zhong et al. | Swimming, foraging, and whale-fall phenomena of beluga whales.
Crayfish Optimization Algorithm (COA) [10] | 2023 | H. Jia et al. | Foraging, cooling, and competitive behaviors of crayfish.
Table 3. Standard benchmark functions [31].
Function | Function's Name | Type | Dimension (Dim) | Best Value
F1 | Sphere | Uni-modal, Scalable | 30/50/100 | 0
F2 | Schwefel's Problem 2.22 | Uni-modal, Scalable | 30/50/100 | 0
F3 | Schwefel's Problem 1.2 | Uni-modal, Scalable | 30/50/100 | 0
F4 | Schwefel's Problem 2.21 | Uni-modal, Scalable | 30/50/100 | 0
F5 | Generalized Rosenbrock's Function | Uni-modal, Scalable | 30/50/100 | 0
F6 | Step Function | Uni-modal, Scalable | 30/50/100 | 0
F7 | Quartic Function | Uni-modal, Scalable | 30/50/100 | 0
F8 | Generalized Schwefel's Function | Multi-modal, Scalable | 30/50/100 | −418.98·Dim
F9 | Generalized Rastrigin's Function | Multi-modal, Scalable | 30/50/100 | 0
F10 | Ackley's Function | Multi-modal, Scalable | 30/50/100 | 0
F11 | Generalized Griewank's Function | Multi-modal, Scalable | 30/50/100 | 0
F12 | Generalized Penalized Function 1 | Multi-modal, Scalable | 30/50/100 | 0
F13 | Generalized Penalized Function 2 | Multi-modal, Scalable | 30/50/100 | 0
F14 | Shekel's Foxholes Function | Multi-modal, Unscalable | 2 | 0.998
F15 | Kowalik's Function | Composition, Unscalable | 4 | 0.0003075
F16 | Six-Hump Camel-Back Function | Composition, Unscalable | 2 | −1.0316
F17 | Branin Function | Composition, Unscalable | 2 | 0.398
F18 | Goldstein-Price Function | Composition, Unscalable | 2 | 3
F19 | Hartman's Function 1 | Composition, Unscalable | 3 | −3.8628
F20 | Hartman's Function 2 | Composition, Unscalable | 6 | −3.32
F21 | Shekel's Function 1 | Composition, Unscalable | 4 | −10.1532
F22 | Shekel's Function 2 | Composition, Unscalable | 4 | −10.4029
F23 | Shekel's Function 3 | Composition, Unscalable | 4 | −10.5364
Table 4. Results of parameter sensitivity analysis experiment.
Function | LSEWOA(15, 15) | LSEWOA(15, 20) | LSEWOA(15, 25) | LSEWOA(20, 15) | LSEWOA(20, 20) | LSEWOA(20, 25) | LSEWOA(25, 15) | LSEWOA(25, 20) | LSEWOA(25, 25)
F1 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000
F2 | 5.1500 | 5.0600 | 4.9700 | 4.9700 | 4.9700 | 4.9700 | 4.9700 | 4.9700 | 4.9700
F3 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000
F4 | 5.6200 | 5.0100 | 4.9100 | 4.9100 | 4.9100 | 4.9100 | 4.9100 | 4.9100 | 4.9100
F5 | 7.0200 | 4.9600 | 3.1200 | 6.7200 | 4.1000 | 3.7800 | 7.1000 | 4.3200 | 3.8800
F6 | 7.9200 | 4.5600 | 3.1000 | 7.2800 | 4.6600 | 2.7000 | 7.7400 | 4.2400 | 2.8000
F7 | 4.4000 | 4.6800 | 4.8800 | 5.1400 | 6.2200 | 4.6000 | 4.9800 | 5.1400 | 4.9600
F8 | 7.9800 | 5.0000 | 2.3000 | 7.9800 | 4.8200 | 2.0800 | 7.7600 | 4.8800 | 2.2000
F9 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000
F10 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000
F11 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000
F12 | 6.4600 | 3.8800 | 4.3000 | 6.6400 | 4.0400 | 4.4000 | 6.6800 | 3.8200 | 4.7800
F13 | 7.1600 | 3.9800 | 4.2400 | 7.0600 | 4.2600 | 3.6600 | 6.8200 | 3.9000 | 3.9200
F14 | 7.6400 | 4.5600 | 2.9500 | 7.5900 | 5.1300 | 2.5800 | 7.4200 | 4.7900 | 2.3400
F15 | 4.7000 | 4.3800 | 5.5400 | 4.4200 | 4.8800 | 5.7200 | 4.8600 | 4.9400 | 5.5600
F16 | 7.6000 | 5.0900 | 2.1100 | 8.0000 | 5.2000 | 2.2000 | 7.8400 | 4.8100 | 2.1500
F17 | 8.1200 | 5.2400 | 2.2600 | 7.7800 | 5.0000 | 2.0700 | 7.7000 | 4.9200 | 1.9100
F18 | 4.2800 | 4.3000 | 5.9600 | 4.0000 | 5.4000 | 5.2000 | 4.2800 | 5.4800 | 6.1000
F19 | 5.8400 | 4.4800 | 4.9000 | 6.4000 | 4.0800 | 4.9800 | 6.0400 | 4.0200 | 4.2600
F20 | 7.6600 | 5.2600 | 7.8600 | 7.8800 | 4.8400 | 2.2200 | 2.2400 | 2.2600 | 4.7800
F21 | 7.9800 | 4.8800 | 1.7400 | 8.1400 | 5.1000 | 2.2000 | 7.8600 | 5.0400 | 2.0600
F22 | 8.1800 | 4.9800 | 2.1400 | 7.8200 | 5.1600 | 1.8600 | 7.9400 | 4.9200 | 2.0000
F23 | 8.1000 | 4.7600 | 1.8200 | 7.8400 | 5.1000 | 2.1200 | 8.0600 | 5.0600 | 2.1400
Average | 6.3830 | 4.7852 | 4.0913 | 6.3291 | 4.9074 | 3.7935 | 6.0957 | 4.6704 | 3.9443
Rank | 9 | 5 | 3 | 8 | 6 | 1 | 7 | 4 | 2
Table 5. Comparative results of different WOA variants in 30 dimensions.
Function | Metrics | WOAV1 | WOAV2 | WOAV3 | WOAV4 | WOAV5 | LSEWOA
F1 | Ave | 0.0000E+00 | 9.8638E-17 | 1.3408E-149 | 0.0000E+00 | 5.6634E-30 | 0.0000E+00
F1 | Std | 0.0000E+00 | 2.5718E-16 | 3.8808E-149 | 0.0000E+00 | 1.3704E-29 | 0.0000E+00
F2 | Ave | 2.0330E-169 | 2.1140E-14 | 4.1665E-81 | 1.9041E-229 | 4.4123E-23 | 0.0000E+00
F2 | Std | 2.6530E-169 | 2.2621E-14 | 9.4838E-81 | 2.0640E-229 | 5.8699E-23 | 0.0000E+00
F3 | Ave | 9.7597E-283 | 1.1229E+04 | 2.2191E-138 | 0.0000E+00 | 1.1728E+03 | 0.0000E+00
F3 | Std | 9.9857E-283 | 4.4476E+03 | 8.0227E-138 | 0.0000E+00 | 8.1830E+02 | 0.0000E+00
F4 | Ave | 1.2894E-165 | 2.7553E+01 | 8.5275E-71 | 1.7281E-200 | 3.4477E+01 | 0.0000E+00
F4 | Std | 1.6534E-165 | 1.4118E+01 | 6.5641E-71 | 1.8576E-200 | 1.2159E+01 | 0.0000E+00
F5 | Ave | 2.8434E+01 | 2.5671E+01 | 3.9129E+00 | 2.8674E+01 | 2.5411E+01 | 7.0226E-04
F5 | Std | 1.9274E-01 | 9.7131E-01 | 9.8399E+00 | 1.5929E-01 | 1.6234E+00 | 7.1473E-04
F6 | Ave | 3.0652E-01 | 1.0685E-04 | 1.0465E-03 | 1.5137E+00 | 1.8336E-02 | 4.4627E-07
F6 | Std | 1.5010E-01 | 6.0965E-05 | 9.1790E-04 | 4.0534E-01 | 6.2416E-02 | 8.5647E-07
F7 | Ave | 1.2704E-04 | 1.8777E-02 | 1.6854E-04 | 1.0146E-04 | 3.6796E-02 | 9.0617E-05
F7 | Std | 1.6348E-04 | 8.9501E-03 | 1.1422E-04 | 1.0148E-04 | 2.7993E-02 | 9.8754E-05
F8 | Ave | −1.1587E+04 | −5799.7097 | −9.6835E+03 | −5071.082 | −6.2135E+03 | −1.2569E+04
F8 | Std | 8.6513E+02 | 1.7944E+02 | 1.4073E+03 | 1.9181E+03 | 3.0826E+02 | 8.8190E-03
F9 | Ave | 0.0000E+00 | 4.3725E+01 | 7.2948E-02 | 0.0000E+00 | 1.0214E+02 | 0.0000E+00
F9 | Std | 0.0000E+00 | 3.5049E+01 | 3.9955E-01 | 0.0000E+00 | 3.3930E+01 | 0.0000E+00
F10 | Ave | 4.4409E-16 | 5.5139E-10 | 4.4409E-16 | 4.4409E-16 | 3.4122E+00 | 4.4409E-16
F10 | Std | 0.0000E+00 | 7.5953E-10 | 0.0000E+00 | 0.0000E+00 | 4.4409E-16 | 0.0000E+00
F11 | Ave | 0.0000E+00 | 6.5879E-03 | 0.0000E+00 | 0.0000E+00 | 9.8103E-03 | 0.0000E+00
F11 | Std | 0.0000E+00 | 1.7301E-02 | 0.0000E+00 | 0.0000E+00 | 1.6744E-02 | 0.0000E+00
F12 | Ave | 2.1045E-02 | 1.5641E+00 | 2.0810E-02 | 8.2819E-02 | 4.5985E+00 | 1.1056E-06
F12 | Std | 1.0780E-02 | 1.6340E+00 | 1.1291E-01 | 3.0729E-02 | 4.2231E+00 | 3.3784E-06
F13 | Ave | 4.0665E-01 | 8.1110E-02 | 7.6209E-03 | 7.0082E-01 | 7.7391E+00 | 1.1074E-03
F13 | Std | 1.9499E-01 | 8.3807E-02 | 1.9818E-02 | 2.2757E-01 | 1.2779E+01 | 3.3682E-03
F14 | Ave | 4.7805E+00 | 1.3287E+00 | 1.5967E+00 | 6.6164E+00 | 3.4841E+00 | 9.9800E-01
F14 | Std | 4.4631E+00 | 7.5207E-01 | 1.3094E+00 | 4.6380E+00 | 3.5220E+00 | 1.5423E-16
F15 | Ave | 3.2035E-04 | 5.6925E-04 | 1.6779E-03 | 6.0438E-04 | 2.4352E-03 | 3.1131E-04
F15 | Std | 4.057E-05 | 3.4712E-04 | 3.6112E-03 | 2.2205E-04 | 6.0863E-03 | 8.9892E-06
F16 | Ave | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −9.9542E-01 | −1.0316E+00 | −1.0316E+00
F16 | Std | 6.3208E-16 | 6.7122E-16 | 1.5322E-05 | 3.5053E-02 | 6.3208E-16 | 6.1358E-16
F17 | Ave | 3.9789E-01 | 3.9789E-01 | 3.9807E-01 | 4.1714E-01 | 3.9789E-01 | 3.9789E-01
F17 | Std | 1.8233E-09 | 0.0000E+00 | 2.4150E-04 | 2.1294E-02 | 0.0000E+00 | 3.0227E-14
F18 | Ave | 3.0000E+00 | 3.0000E+00 | 3.0003E+00 | 9.7182E+00 | 3.9000E+00 | 3.0000E+00
F18 | Std | 1.4523E-14 | 1.5003E-15 | 2.6184E-04 | 1.0779E+01 | 4.9295E+00 | 9.1567E-05
F19 | Ave | −3.8628E+00 | −3.8628E+00 | −3.8610E+00 | −3.7703E+00 | −3.8628E+00 | −3.8628E+00
F19 | Std | 1.6154E-12 | 2.7101E-15 | 1.4594E-03 | 8.7085E-02 | 2.5684E-15 | 4.5466E-05
F20 | Ave | −3.2970E+00 | −3.2705E+00 | −3.1149E+00 | −2.8895E+00 | −3.2546E+00 | −3.3139E+00
F20 | Std | 5.0399E-02 | 5.9923E-02 | 2.3645E-02 | 2.2151E-01 | 5.9923E-02 | 1.0705E-02
F21 | Ave | −1.0153E+01 | −8.0347E+00 | −8.3530E+00 | −4.7618E+00 | −7.1336E+00 | −1.0153E+01
F21 | Std | 1.2439E-03 | 2.6741E+00 | 2.3778E+00 | 1.0865E+00 | 3.3901E+00 | 9.4998E-11
F22 | Ave | −1.0402E+01 | −8.3325E+00 | −6.9258E+00 | −4.7869E+00 | −6.9124E+00 | −1.0403E+01
F22 | Std | 4.1015E-03 | 2.8050E+00 | 3.5457E+00 | 8.6837E-01 | 3.4437E+00 | 1.0354E-10
F23 | Ave | −1.0536E+01 | −9.0041E+00 | −8.3377E+00 | −4.7300E+00 | −6.2289E+00 | −1.0536E+01
F23 | Std | 1.2232E-04 | 2.6270E+00 | 3.4851E+00 | 8.9833E-01 | 2.9646E+00 | 1.5291E-10
Table 6. Results of non-parametric tests of different WOA variants in 30 dimensions.
Algorithm | Rank | Average Friedman Value | +/=/−
WOAV1 | 2 | 3.2188 | 19/4/0
WOAV2 | 4 | 3.8783 | 23/0/0
WOAV3 | 3 | 3.8370 | 21/2/0
WOAV4 | 6 | 4.3645 | 18/5/0
WOAV5 | 5 | 4.0304 | 23/0/0
LSEWOA | 1 | 1.6710 | -
Table 7. Effectiveness of LSEWOA and other WOA variants.
Metrics | WOAV1 (w/t/l) | WOAV2 (w/t/l) | WOAV3 (w/t/l) | WOAV4 (w/t/l) | WOAV5 (w/t/l) | LSEWOA (w/t/l)
D = 30 | 0/4/19 | 0/0/23 | 0/2/21 | 0/5/18 | 0/0/23 | 18/5/0
D = 50 | 1/4/18 | 0/0/23 | 1/2/20 | 1/5/17 | 0/0/23 | 17/5/1
D = 100 | 1/4/18 | 0/0/23 | 1/2/20 | 1/5/17 | 0/0/23 | 17/5/1
Total | 2/12/55 | 0/0/69 | 2/6/61 | 2/15/52 | 0/0/69 | 52/15/2
OE | 20.29% | 0.00% | 11.59% | 24.64% | 0.00% | 97.10%
Table 8. Results of non-parametric tests of different WOA variants in higher dimensions.
Dimension | Algorithm | Rank | Average Friedman Value | +/=/−
D = 50 | WOAV1 | 2 | 3.1717 | 22/0/1
D = 50 | WOAV2 | 5 | 4.0130 | 19/3/1
D = 50 | WOAV3 | 3 | 3.7203 | 20/3/0
D = 50 | WOAV4 | 6 | 4.3406 | 19/3/1
D = 50 | WOAV5 | 4 | 3.8754 | 22/0/1
D = 50 | LSEWOA | 1 | 1.7790 | -
D = 100 | WOAV1 | 2 | 3.1717 | 22/0/1
D = 100 | WOAV2 | 5 | 4.0130 | 19/3/1
D = 100 | WOAV3 | 3 | 3.7203 | 20/3/0
D = 100 | WOAV4 | 6 | 4.3406 | 19/3/1
D = 100 | WOAV5 | 4 | 3.8754 | 22/0/1
D = 100 | LSEWOA | 1 | 1.5551 | -
Table 9. Details of the metaheuristic algorithms.
Algorithm | Year | Author(s) | Source of Inspiration
Grey Wolf Optimizer (GWO) [38] | 2014 | Seyedali Mirjalili et al. | The leadership hierarchy and hunting system of grey wolves.
Harris Hawk Optimization algorithm (HHO) [7] | 2019 | A. A. Heidari et al. | The predatory behavior of Harris's hawks.
Zebra Optimization Algorithm (ZOA) [39] | 2022 | E. Trojovská et al. | Foraging and defense strategies of zebras.
Slime Mould Algorithm (SMA) [40] | 2020 | S. Li et al. | Foraging behavior of slime moulds.
Sine Cosine Algorithm (SCA) [41] | 2016 | Seyedali Mirjalili | Mathematical model of the sine and cosine functions.
Attraction-Repulsion Optimization Algorithm (AROA) [42] | 2024 | K. Cymerys et al. | The attraction-repulsion phenomenon.
Rime Optimization Algorithm (RIME) [43] | 2023 | Su Hang et al. | The formation process of rime in nature.
Whale Optimization Algorithm (WOA) [18] | 2016 | Seyedali Mirjalili et al. | The hunting behavior of humpback whales.
Table 10. Parameter settings for different metaheuristic algorithms.
Algorithm | Parameter | Value
GWO | Convergence factor a | 2 decreasing to 0
HHO | Threshold | 0.5
ZOA | R | 0.1
SMA | z | 0.03
SMA | vc | 1 decreasing to 0
SCA | a | 2
AROA | Attraction factor c | 0.95
AROA | Local search scaling factor 1 | 0.15
AROA | Local search scaling factor 2 | 0.6
AROA | Attraction probability 1 | 0.2
AROA | Local search probability | 0.8
AROA | Expansion factor | 0.4
AROA | Local search threshold 1 | 0.9
AROA | Local search threshold 2 | 0.85
AROA | Local search threshold 3 | 0.9
RIME | ω | 5
WOA | Convergence factor a | 2 decreasing to 0
WOA | Spiral factor b | 1
LSEWOA | Convergence factor a | 2 decreasing to 0
LSEWOA | Inertia weight ω | 0 increasing to 0.9
LSEWOA | Spiral factor k | 1
Table 11. Comparative results of different metaheuristic algorithms in 30 dimensions.
Function | Metrics | GWO | HHO | ZOA | SMA | SCA | AROA | RIME | WOA | LSEWOA
F1 | Ave | 2.0895E-27 | 1.6901E-56 | 1.3535E-249 | 3.4272E-319 | 1.1493E+01 | 3.9851E+00 | 2.1360E+00 | 3.122E-72 | 0.0000E+00
F1 | Std | 3.4562E-27 | 9.2573E-56 | 1.4325E-249 | 3.653E-319 | 1.2787E+01 | 2.7759E+00 | 1.2179E+00 | 1.4327E-71 | 0.0000E+00
F2 | Ave | 9.8045E-17 | 7.2027E-38 | 2.4225E-130 | 6.1088E-142 | 1.5330E-02 | 7.2023E-01 | 1.5665E+00 | 1.0548E-49 | 0.0000E+00
F2 | Std | 6.078E-17 | 2.6592E-37 | 8.6167E-130 | 3.3458E-141 | 2.0414E-02 | 2.2962E-01 | 1.0821E+00 | 4.055E-49 | 0.0000E+00
F3 | Ave | 2.6528E-05 | 6.4586E-66 | 2.317E-154 | 9.577E-296 | 7.5000E+03 | 1.9846E+02 | 1.4642E+03 | 4.7902E+04 | 0.0000E+00
F3 | Std | 1.1192E-04 | 3.5282E-65 | 1.2691E-153 | 9.5432E-296 | 5.2636E+03 | 2.7161E+02 | 4.5706E+02 | 1.6365E+04 | 0.0000E+00
F4 | Ave | 6.8653E-07 | 1.2801E-36 | 4.223E-114 | 3.4892E-159 | 3.7443E+01 | 1.8233E+00 | 7.5008E+00 | 5.4785E+01 | 0.0000E+00
F4 | Std | 7.816E-07 | 5.0091E-36 | 1.4688E-113 | 1.9017E-158 | 1.2096E+01 | 8.0749E-01 | 3.2426E+00 | 2.5634E+01 | 0.0000E+00
F5 | Ave | 2.7214E+01 | 1.2597E+01 | 2.8435E+01 | 7.9363E+00 | 8.2844E+04 | 9.4260E+01 | 3.8156E+02 | 2.7943E+01 | 2.1804E-04
F5 | Std | 7.6933E-01 | 1.4341E+01 | 4.6934E-01 | 1.1387E+01 | 1.4598E+05 | 6.3976E+01 | 5.9200E+02 | 4.7547E-01 | 1.9583E-04
F6 | Ave | 7.5490E-01 | 1.1604E-01 | 2.6854E+00 | 5.6277E-03 | 2.3240E+01 | 1.0211E+01 | 2.0228E+00 | 3.7597E-01 | 2.4235E-07
F6 | Std | 4.1382E-01 | 2.0970E-01 | 5.4786E-01 | 2.7533E-03 | 3.3771E+01 | 3.0057E+00 | 6.5657E-01 | 1.9799E-01 | 3.0661E-07
F7 | Ave | 2.1070E-03 | 1.6577E-04 | 1.2183E-04 | 2.0192E-04 | 1.3017E-01 | 3.3554E-02 | 4.2646E-02 | 3.3823E-03 | 9.2310E-05
F7 | Std | 1.1727E-03 | 2.0006E-04 | 9.8147E-05 | 1.4495E-04 | 1.2503E-01 | 2.6544E-02 | 1.7281E-02 | 3.4709E-03 | 9.4816E-05
F8 | Ave | −6.0646E+03 | −12569.413 | −6.5366E+03 | −12569.0803 | −3.7756E+03 | −4.5711E+03 | −9.9880E+03 | −1.0387E+04 | −12569.4810
F8 | Std | 8.6178E+02 | 1.8809E-01 | 6.6667E+02 | 2.4234E-01 | 2.9546E+02 | 7.0335E+02 | 4.8839E+02 | 1.9207E+03 | 7.1526E-03
F9 | Ave | 2.5289E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 3.6167E+01 | 5.1770E+01 | 6.7959E+01 | 1.8948E-15 | 0.0000E+00
F9 | Std | 4.8781E+00 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 3.2372E+01 | 6.5822E+01 | 1.2253E+01 | 1.0378E-14 | 0.0000E+00
F10 | Ave | 1.0501E-13 | 4.4409E-16 | 4.4409E-16 | 4.4409E-16 | 1.2727E+01 | 8.2919E-01 | 2.1549E+00 | 3.76E-15 | 4.4409E-16
F10 | Std | 2.0391E-14 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 9.4703E+00 | 3.8480E-01 | 5.0321E-01 | 2.6279E-15 | 0.0000E+00
F11 | Ave | 4.0717E-03 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 8.5792E-01 | 9.8457E-01 | 9.9393E-01 | 3.7007E-18 | 0.0000E+00
F11 | Std | 9.2179E-03 | 0.0000E+00 | 0.0000E+00 | 0.0000E+00 | 2.4644E-01 | 1.1340E-01 | 3.4509E-02 | 2.027E-17 | 0.0000E+00
F12 | Ave | 4.5354E-02 | 7.6756E-04 | 1.6897E-01 | 8.3303E-03 | 1.3194E+04 | 1.3013E+00 | 3.3013E+00 | 2.5013E-02 | 1.1132E-06
F12 | Std | 2.4764E-02 | 1.6439E-03 | 6.6607E-02 | 9.6578E-03 | 4.3933E+04 | 3.2708E-01 | 2.1232E+00 | 2.5786E-02 | 1.8246E-06
F13 | Ave | 6.1504E-01 | 4.8366E-02 | 2.2662E+00 | 6.2369E-03 | 1.2142E+05 | 4.0630E+00 | 2.1907E-01 | 4.5703E-01 | 7.3709E-04
F13 | Std | 2.3088E-01 | 9.7235E-02 | 2.7059E-01 | 6.4921E-03 | 2.2536E+05 | 4.7363E-01 | 6.5521E-02 | 2.0359E-01 | 2.7942E-03
F14 | Ave | 4.2279E+00 | 1.4941E+00 | 2.7431E+00 | 9.9800E-01 | 1.6626E+00 | 6.4242E+00 | 9.9800E-01 | 1.9840E+00 | 9.9800E-01
F14 | Std | 4.2406E+00 | 8.5423E-01 | 2.0626E+00 | 1.1416E-12 | 9.4904E-01 | 4.4124E+00 | 6.8753E-12 | 2.0261E+00 | 4.2751E-16
F15 | Ave | 4.4172E-03 | 3.8813E-04 | 1.7074E-03 | 5.4518E-04 | 9.5536E-04 | 4.2943E-03 | 7.1893E-03 | 7.4320E-04 | 3.0960E-04
F15 | Std | 8.1125E-03 | 7.6667E-05 | 5.0750E-03 | 2.4870E-04 | 3.4759E-04 | 6.0518E-03 | 1.2544E-02 | 5.6744E-04 | 5.1968E-06
F16 | Ave | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00 | −1.0316E+00
F16 | Std | 2.5294E-08 | 1.1537E-06 | 3.6246E-10 | 1.0767E-09 | 6.1383E-05 | 2.8188E-05 | 9.1271E-08 | 2.3332E-09 | 1.19E-15
F17 | Ave | 3.9791E-01 | 3.9817E-01 | 3.9789E-01 | 3.9789E-01 | 3.9913E-01 | 3.9889E-01 | 3.9789E-01 | 3.9791E-01 | 3.9789E-01
F17 | Std | 1.0285E-04 | 1.0967E-03 | 2.5577E-08 | 8.5519E-08 | 9.2873E-04 | 3.1123E-03 | 7.8052E-07 | 3.9929E-05 | 1.8276E-14
F18 | Ave | 3.0001E+00 | 3.9007E+00 | 5.7000E+00 | 3.0000E+00 | 3.0001E+00 | 3.9922E+00 | 3.0000E+00 | 3.9004E+00 | 3.0000E+00
F18 | Std | 9.6496E-05 | 4.9298E+00 | 8.2385E+00 | 1.0303E-10 | 1.6082E-04 | 4.9382E+00 | 6.9447E-07 | 4.9312E+00 | 5.6509E-05
F19 | Ave | −3.8615E+00 | −3.8024E+00 | −3.8623E+00 | −3.8628E+00 | −3.8549E+00 | −3.8591E+00 | −3.8628E+00 | −3.8535E+00 | −3.8628E+00
F19 | Std | 2.5697E-03 | 7.5654E-02 | 4.5151E-04 | 1.9366E-07 | 3.3445E-03 | 5.6257E-03 | 2.5256E-07 | 1.5083E-02 | 4.59E-05
F20 | Ave | −3.2599E+00 | −2.5371E+00 | −3.3178E+00 | −3.2583E+00 | −2.9681E+00 | −3.2171E+00 | −3.2665E+00 | −3.2337E+00 | −3.3220E+00
F20 | Std | 7.5667E-02 | 4.3035E-01 | 2.1897E-02 | 6.0626E-02 | 2.8675E-01 | 7.5154E-02 | 6.0327E-02 | 1.2029E-01 | 2.5098E-12
F21 | Ave | −9.3930E+00 | −2.9686E+00 | −9.8132E+00 | −1.0153E+01 | −2.3603E+00 | −6.5719E+00 | −7.0419E+00 | −7.8461E+00 | −1.0153E+01
F21 | Std | 2.0038E+00 | 1.4372E+00 | 1.2934E+00 | 3.6940E-04 | 1.9595E+00 | 3.2936E+00 | 3.0763E+00 | 2.6970E+00 | 6.4992E-11
F22 | Ave | −1.0401E+01 | −3.6079E+00 | −9.6935E+00 | −1.0403E+01 | −3.0380E+00 | −6.9924E+00 | −8.8560E+00 | −7.5651E+00 | −1.0403E+01
F22 | Std | 1.3707E-03 | 1.1968E+00 | 1.8374E+00 | 3.3640E-04 | 1.5471E+00 | 3.3121E+00 | 2.9167E+00 | 2.8153E+00 | 8.9896E-11
F23 | Ave | −1.0264E+01 | −3.0811E+00 | −9.6350E+00 | −1.0536E+01 | −3.5309E+00 | −6.0164E+00 | −9.0563E+00 | −6.6449E+00 | −1.0536E+01
F23 | Std | 1.4812E+00 | 1.4661E+00 | 2.0499E+00 | 2.5217E-04 | 2.1461E+00 | 3.3571E+00 | 2.7900E+00 | 3.6057E+00 | 1.1943E-10
Table 12. Results of non-parametric tests of different metaheuristic algorithms in 30 dimensions.

Algorithm | Rank | Average Friedman Value | +/=/−
GWO | 5 | 5.2312 | 23/0/0
HHO | 4 | 4.9297 | 20/3/0
ZOA | 3 | 3.8956 | 20/3/0
SMA | 2 | 2.7565 | 20/3/0
SCA | 9 | 8.0783 | 23/0/0
AROA | 8 | 7.2913 | 23/0/0
RIME | 7 | 6.0000 | 22/0/1
WOA | 6 | 5.2587 | 23/0/0
LSEWOA | 1 | 1.5587 | —
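To make the "Average Friedman Value" column concrete: algorithms are ranked within each benchmark function (rank 1 = best, i.e., smallest mean for these minimization problems) and the ranks are then averaged across functions. The sketch below illustrates this with a random placeholder matrix, not the paper's results; scipy's friedmanchisquare is included only to show the associated significance test.

```python
import numpy as np
from scipy.stats import rankdata, friedmanchisquare

# Placeholder data: rows = 23 benchmark functions, columns = 9 algorithms.
rng = np.random.default_rng(0)
results = rng.random((23, 9))

# Rank algorithms within each function (smaller value -> smaller rank),
# then average the ranks column-wise to get the Friedman values.
ranks = np.apply_along_axis(rankdata, 1, results)
avg_friedman = ranks.mean(axis=0)
print("average Friedman values:", np.round(avg_friedman, 4))

# Friedman test over the same matrix (one sample per algorithm).
stat, p = friedmanchisquare(*results.T)
print(f"chi-square = {stat:.3f}, p = {p:.3g}")
```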
Table 13. Effectiveness of LSEWOA and other metaheuristic algorithms.

Metrics | GWO (w/t/l) | HHO (w/t/l) | ZOA (w/t/l) | SMA (w/t/l) | SCA (w/t/l) | AROA (w/t/l) | RIME (w/t/l) | WOA (w/t/l) | LSEWOA (w/t/l)
D = 30 | 0/0/23 | 0/3/20 | 0/3/20 | 0/3/20 | 0/0/23 | 0/0/23 | 1/0/22 | 0/0/23 | 22/1/0
D = 50 | 1/0/22 | 1/3/19 | 0/3/20 | 1/3/19 | 1/0/22 | 0/0/23 | 1/0/22 | 1/1/21 | 19/3/1
D = 100 | 1/0/22 | 1/3/19 | 0/3/20 | 1/3/19 | 1/0/22 | 0/0/23 | 1/0/22 | 1/2/20 | 19/3/1
Total | 2/0/67 | 2/9/58 | 0/9/60 | 2/9/58 | 2/0/67 | 0/0/69 | 3/0/66 | 2/3/64 | 60/7/2
OE | 2.90% | 15.94% | 13.04% | 15.94% | 2.90% | 0.00% | 4.35% | 7.25% | 97.10%
Table 14. Results of non-parametric tests of different algorithms in higher dimensions.

Dimension | Algorithm | Rank | Average Friedman Value | +/=/−
D = 50 | GWO | 6 | 5.2225 | 22/0/1
D = 50 | HHO | 4 | 4.9471 | 19/3/1
D = 50 | ZOA | 3 | 3.8826 | 20/3/0
D = 50 | SMA | 2 | 2.8232 | 19/3/1
D = 50 | SCA | 9 | 8.1261 | 22/0/1
D = 50 | AROA | 8 | 7.1536 | 23/0/0
D = 50 | RIME | 7 | 6.0232 | 22/0/1
D = 50 | WOA | 5 | 5.1471 | 22/0/1
D = 50 | LSEWOA | 1 | 1.6746 | —
D = 100 | GWO | 6 | 5.3551 | 22/0/1
D = 100 | HHO | 4 | 4.8101 | 19/3/1
D = 100 | ZOA | 3 | 3.8442 | 20/3/0
D = 100 | SMA | 2 | 2.7645 | 19/3/1
D = 100 | SCA | 9 | 8.3072 | 22/0/1
D = 100 | AROA | 8 | 6.9710 | 23/0/0
D = 100 | RIME | 7 | 6.2884 | 22/0/1
D = 100 | WOA | 5 | 5.1043 | 22/1/0
D = 100 | LSEWOA | 1 | 1.5551 | —
Table 15. Average fitness and standard deviation of each algorithm across the nine engineering design problems.

Challenges | Metric | GWO | HHO | ZOA | SMA | SCA | AROA | RIME | WOA | LSEWOA
Three-bar Truss | Ave | 259.805063 | 259.815011 | 259.805048 | 263.072647 | 259.820148 | 259.832780 | 259.806407 | 259.863959 | 259.805047
Three-bar Truss | Std | 0.000015 | 0.014659 | 0.000001 | 2.666551 | 0.012243 | 0.083901 | 0.001547 | 0.081753 | 0.000000
Tension/Compression Spring | Ave | 0.121526 | 0.121522 | 0.121523 | 0.121522 | 0.121740 | 0.124473 | 0.122241 | 0.121921 | 0.121522
Tension/Compression Spring | Std | 0.000005 | 0.000000 | 0.000001 | 0.000000 | 0.000231 | 0.009336 | 0.003592 | 0.001725 | 0.000000
Speed Reducer | Ave | 2638.848210 | 2638.824969 | 2638.820667 | 2638.819863 | 2647.800460 | 2640.757902 | 2638.866459 | 2638.820024 | 2638.819842
Speed Reducer | Std | 0.025308 | 0.023320 | 0.000996 | 0.000062 | 6.014099 | 2.752896 | 0.101059 | 0.000388 | 0.000020
Cantilever Beam | Ave | 13.360394 | 13.390888 | 13.360290 | 13.360645 | 13.963792 | 20.620153 | 13.584452 | 15.444253 | 13.360259
Cantilever Beam | Std | 0.000108 | 0.021638 | 0.000050 | 0.000308 | 0.212685 | 4.997151 | 0.179248 | 1.779201 | 0.000000
I-beam | Ave | 6.702705 | 6.702689 | 6.702962 | 6.703047 | 6.664008 | 5.782476 | 6.457600 | 6.365987 | 6.703048
I-beam | Std | 0.000320 | 0.000531 | 0.000105 | 0.000001 | 0.035199 | 0.995907 | 0.289582 | 0.322097 | 0.000000
Piston Lever | Ave | 12.179036 | 274.057852 | 2.953538 | 34.340337 | 1.219121 | 281.004695 | 52.942343 | 28.643545 | 1.057195
Piston Lever | Std | 42.317243 | 238.155716 | 7.405989 | 67.704274 | 0.071245 | 202.116656 | 132.158718 | 79.421364 | 0.000099
Multi-disc Clutch Brake | Ave | 0.235302 | 0.235243 | 0.235258 | 0.235243 | 0.237668 | 0.236633 | 0.235452 | 0.235244 | 0.235242
Multi-disc Clutch Brake | Std | 0.000082 | 0.000001 | 0.000015 | 0.000001 | 0.002042 | 0.002617 | 0.000278 | 0.000009 | 0.000000
Gas Transmission System | Ave | 1224745.959830 | 1224745.937224 | 1224745.938295 | 1224745.937223 | 1224901.598433 | 1226318.258250 | 1224745.952955 | 1224745.937227 | 1224745.937222
Gas Transmission System | Std | 0.017907 | 0.000005 | 0.001656 | 0.000002 | 118.092819 | 3538.385964 | 0.022237 | 0.000007 | 0.000000
Industrial Refrigeration System | Ave | 642.809538 | 897.655989 | 13.111876 | 642.333923 | 9.729020 | 23312.391077 | 8.430037 | 868.334558 | 8.249197
Industrial Refrigeration System | Std | 3477.950844 | 3927.456866 | 5.330660 | 3475.959550 | 1.070188 | 24530.767002 | 2.012544 | 4131.221271 | 0.496502