Article

A Novel Bio-Inspired Optimization Algorithm Based on Mantis Shrimp Survival Tactics

by José Alfonso Sánchez Cortez 1, Hernán Peraza Vázquez 1,* and Adrián Fermin Peña Delgado 2,*
1 Instituto Politécnico Nacional, CICATA-Altamira, Km. 14.5 Carretera Tampico-Puerto Industrial Altamira, Altamira 89600, Tamaulipas, Mexico
2 Departamento de Mecatrónica y Energías Renovables, Universidad Tecnológica de Altamira, Boulevard de los Ríos Km. 3+100, Puerto Industrial Altamira, Altamira 89608, Tamaulipas, Mexico
* Authors to whom correspondence should be addressed.
Mathematics 2025, 13(9), 1500; https://doi.org/10.3390/math13091500
Submission received: 26 March 2025 / Revised: 20 April 2025 / Accepted: 29 April 2025 / Published: 1 May 2025
(This article belongs to the Special Issue Advances in Metaheuristic Optimization Algorithms)

Abstract:
This paper presents a novel meta-heuristic algorithm inspired by the visual capabilities of the mantis shrimp (Gonodactylus smithii), which can detect linearly and circularly polarized light signals to determine information regarding the polarized light source emitter. Inspired by these unique visual characteristics, the Mantis Shrimp Optimization Algorithm (MShOA) mathematically covers three visual strategies based on the detected signals: random navigation foraging, strike dynamics in prey engagement, and decision-making for defense or retreat from the burrow. These strategies balance exploitation and exploration procedures for local and global search over the solution space. MShOA’s performance was tested with 20 testbench functions and compared against 14 other optimization algorithms. Additionally, it was tested on 10 real-world optimization problems taken from the IEEE CEC2020 competition. Moreover, MShOA was applied to solve three studied cases related to the optimal power flow problem in an IEEE 30-bus system. Wilcoxon and Friedman’s statistical tests were performed to demonstrate that MShOA offered competitive, efficient solutions in benchmark tests and real-world applications.

1. Introduction

Metaheuristics are methods for solving optimization problems to obtain an optimal (or near-optimal) solution, including non-linear problems with non-linear constraints [1]; real-world engineering applications [2,3,4,5], such as optimization problems in power electronics [6]; or even problems that traditional methods cannot solve [7]. However, as described in [8,9], no single method is capable of solving every optimization problem, as some approaches may perform better than others. Metaheuristics involve exploring the local and global solution search space efficiently [10]. The local exploration strategy refers to a search focused on a promising area of the space or domain, whereas global exploration focuses on searching for new areas within the problem domain. A good balance between global and local searches can improve metaheuristic performance.
Metaheuristics can be categorized as evolutionary-based, trajectory-based, and nature-inspired; see Figure 1. A short description of these metaheuristic types can be summarized as follows. The process of biological evolution inspires evolutionary algorithms [11,12,13,14,15]. Nature-inspired algorithms include swarm algorithms, which are metaheuristics based on the collective behavior of biological systems [16,17,18,19,20,21,22,23,24,25,26,27,28]. They also include physics-inspired algorithms that use physical principles for optimization [29,30,31,32,33,34], and algorithms inspired by human behavior [35,36,37,38]. Moreover, trajectory-based metaheuristics follow a trajectory through the solution space, aiming to improve the solution found at each step. Unlike population-based algorithms, which work with multiple solutions simultaneously, trajectory-based metaheuristics focus on a single solution that evolves and improves iteratively. A more extensive metaheuristic classification can be found in [39,40].
A schematic representation of a metaheuristic algorithm is shown in Figure 2. In the first stage, a set of vectors, also known as search agents, is randomly generated. The length of each vector equals the number of variables of the problem, and its values represent a possible solution.
In the second stage, vectors are updated by functions representing living organisms’ behavior or physical and chemical phenomena as bio-inspired models. These vector modifications present an additional opportunity to investigate novel regions within the solution search space.
In the third stage, vectors are evaluated by the objective function, where the calculated value is referred to as the fitness. In minimization, the ideal fitness vector is the minimum within the population.
In the fourth stage, during each iteration, the vector exhibiting the best fitness is thereafter compared with the fitness of the previously recombined vectors.
This process is repeated until the stop criterion is reached, i.e., either the maximum number of iterations or the maximum number of objective function evaluations is satisfied, at which point the best solution found is returned.
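The four-stage loop described above can be sketched in a few lines. The following is an illustrative Python skeleton (the paper's implementation is in MATLAB and is not reproduced here); the Gaussian perturbation and the sphere objective are placeholder assumptions standing in for a concrete bio-inspired update rule:

```python
import numpy as np

def generic_metaheuristic(objective, lb, ub, dim, n_agents=30, max_iter=200, seed=0):
    """Skeleton of the four-stage loop described above (illustrative only)."""
    rng = np.random.default_rng(seed)
    # Stage 1: random initial population of search agents.
    X = lb + rng.random((n_agents, dim)) * (ub - lb)
    fitness = np.array([objective(x) for x in X])
    best = X[np.argmin(fitness)].copy()
    for _ in range(max_iter):
        # Stage 2: perturb vectors (placeholder for a bio-inspired update rule).
        X_new = np.clip(X + rng.normal(scale=0.1, size=X.shape), lb, ub)
        # Stage 3: evaluate the objective function (fitness).
        f_new = np.array([objective(x) for x in X_new])
        # Stage 4: keep improvements and track the best vector so far.
        improved = f_new < fitness
        X[improved], fitness[improved] = X_new[improved], f_new[improved]
        best = X[np.argmin(fitness)].copy()
    return best, fitness.min()

# Example run on a 2-D sphere function (minimum 0 at the origin).
best, f_best = generic_metaheuristic(lambda x: np.sum(x**2), -5.0, 5.0, dim=2)
```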
The contributions of this work are the following:
  • A novel bio-inspired optimization algorithm named MShOA, which is based on the mantis shrimp’s visual ability to detect polarized light, is proposed. Once the polarization state is determined, these detection capabilities allow the mantis shrimp to decide whether to search for food, hit predators, defend itself, or escape from its burrow.
  • The performance of MShOA was evaluated with a set of 20 unimodal and multimodal functions reported in the literature. Furthermore, it was applied to 10 real-world constrained optimization problems from CEC2020 and three study cases of an electrical engineering optimal power flow problem.
  • In the Wilcoxon and Friedman statistical tests, MShOA outperformed 14 algorithms taken from the state of the art.
  • MShOA’s source code is available for the scientific community.
This paper is structured as follows: Section 2 shows the MShOA bio-inspiration, outlines the mathematical model, provides the pseudocode, and includes the general flowchart of the algorithm. Section 3 shows the performance of MShOA compared with 14 recent bio-inspired algorithms. Section 4 presents the results and discusses the algorithm. Section 5 describes the application of MShOA to real-world optimization problems. Finally, Section 6 summarizes the conclusions and future work.

2. Mantis Shrimp Optimization Algorithm (MShOA)

2.1. Biological Fundamentals

The purple-spotted mantis shrimp (Gonodactylus smithii) is a stomatopod crustacean that primarily inhabits shallow tropical marine waters worldwide. They are sedentary animals that spend most of the day hidden in burrows on the seabed. However, they make forays to feed, exhibiting a remarkable ability to return to their burrows [41]. During these outings, they behave in a cautious and observant manner, using their complex visual systems to detect dangers and opportunities, especially when searching for food [42]. They have the most complex eyes in the animal kingdom, with up to 12 types of photoreceptors that allow them to discriminate linearly and circularly polarized light [43,44,45]. Their eyes are also extremely mobile, capable of performing proactive torsional rotations of up to 90 degrees. These movements can be coordinated or independent during visual scanning, allowing them to individually adjust the orientation of each eye in response to specific stimuli, such as polarized light signals [46]. Mantis shrimps use these specific polarization signals, both linear and circular, in critical social contexts, such as mating and territorial defense [47,48]. This adaptation enables them to accurately orient themselves in their surroundings to detect objects, predators, prey, and communicate visually with other mantis shrimp, optimizing their visual perception and behavior in their environment [49]. Additionally, they are also characterized by their fierce competition for their burrows, which are highly valuable resources due to their multifunctional use. They use their acute vision to determine the size of the opponent and their fighting ability. The opponent’s fighting ability is decisive in deciding whether to fight for the burrow or avoid the confrontation [50]. Figure 3 summarizes most of the aforementioned mantis shrimps’ behaviors. Figure 3a illustrates how shrimp hide in their shelter to increase their chances of staying safe. 
The shrimp’s eyes gaze in multiple directions at once for further decision-making, which may include strike, defense or shelter, burrow, or cautiously forage, as seen in Figure 3b. Figure 3c shows a lateral view of a red mantis shrimp. Additionally, Figure 3d illustrates the impact of a mantis shrimp’s strike on a prey shell.
Overall, when observing its surroundings, the mantis shrimp can see independently with both eyes. In this way, each eye detects a predominant type of polarization (vertical, horizontal, or circular) within its field of view, which allows it to identify prey, a threat, or a potential mate. Subsequently, the shrimp compares the information captured by both eyes and selects the detected signal that is the most predominant, which then guides its subsequent behavior. Based on this process, this study considered that the mantis shrimp engages in foraging when vertical polarization dominates; attacks when horizontal polarization is more intense; and burrows, defends, or shelters when circular polarization is detected, as depicted in Figure 4.

2.2. Initialization

The algorithm’s initialization consists of two phases to obtain the initial values. As seen in Equation (1), a set of multidimensional solutions representing the initial population is randomly generated in the first phase:
$$X_{i,j} = lb_{1,j} + rand_{i,j}\cdot(ub_{1,j} - lb_{1,j}), \qquad \text{for } i = 1, 2, \ldots, N \text{ and } j = 1, 2, \ldots, dim \tag{1}$$
where X_{i,j} is the initial population, N is defined as the number of search agents, and dim represents the dimension of the problem (number of variables). Figure 5a presents a graphical representation of the initial population X. Finally, lb and ub represent the lower and upper bounds of the search space, respectively. As described in Equation (2), a vector is randomly generated in the second phase to represent the detected polarization's Polarization Type Indicator (PTI):
$$PTI = round\left(1 + 2\cdot rand\right) \tag{2}$$
In both phases, the rand function is assumed to follow a uniform distribution in the range [0, 1]. The round function restricts the PTI value to 1, 2, or 3, as seen in Figure 5b. These values correspond to the reference angles (π/2), (0 or π), and (π/4 or 3π/4), which are related to vertically linearly, horizontally linearly, and circularly polarized light, respectively. Each of these polarization states is based on the visual capabilities of the mantis shrimp. In addition, a PTI value of 1 activates the mantis shrimp's foraging strategy, while a PTI value of 2 activates the attack strategy. The final strategy, burrow, defense, or shelter, is triggered by a PTI of 3. Figure 5c illustrates the correlation between the PTI value and the type of polarized light detected, as well as its relationship with the forage, attack, and burrow, defense, or shelter strategies.
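As a concrete illustration of Equations (1) and (2), the two initialization phases can be sketched in Python (the bounds, population size, and dimension below are assumed example values, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
N, dim = 30, 5            # number of search agents and problem dimension (assumed values)
lb, ub = -10.0, 10.0      # lower and upper bounds of the search space (assumed values)

# Equation (1): random initial population X within [lb, ub].
X = lb + rng.random((N, dim)) * (ub - lb)

# Equation (2): Polarization Type Indicator, one value in {1, 2, 3} per agent,
# since round(1 + 2*rand) with rand in [0, 1) yields 1, 2, or 3.
PTI = np.round(1 + 2 * rng.random(N)).astype(int)
```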

2.3. Polarization Type Identifier (PTI) Vector Update Process

The following steps outline how the PTI vector is updated. The whole process is summarized in Figure 6.
  • The calculation of the polarization angle was inspired by the visual system of the mantis shrimp, where each eye performs independent perception. The left eye employs vectors from both the initial population (X) and the updated population (X′) to compute the Left Polarization Angle (LPA), as described in Equation (3). In contrast, the Right Polarization Angle (RPA) is determined through Equation (4).
    $$LPA = \arccos\left(X_i \cdot X'_i\right) \tag{3}$$
    $$RPA = rand \cdot \pi \tag{4}$$
    Furthermore, the angular difference between LPA and RPA is computed for subsequent consideration in the PTI update process.
  • Before the PTI value is updated, the detected left- and right-eye polarization types should be calculated. The Left Polarization Type (LPT) vector and the Right Polarization Type (RPT) vector are determined by the criteria described in Equation (5):
    $$\text{Left eye: } LPT = \begin{cases} 1, & \text{if } \tfrac{3\pi}{8} \le LPA \le \tfrac{5\pi}{8} \\ 2, & \text{if } 0 \le LPA \le \tfrac{\pi}{8} \text{ or } \tfrac{7\pi}{8} \le LPA \le \pi \\ 3, & \text{if } \tfrac{\pi}{8} < LPA < \tfrac{3\pi}{8} \text{ or } \tfrac{5\pi}{8} < LPA < \tfrac{7\pi}{8} \end{cases} \qquad \text{Right eye: } RPT = \begin{cases} 1, & \text{if } \tfrac{3\pi}{8} \le RPA \le \tfrac{5\pi}{8} \\ 2, & \text{if } 0 \le RPA \le \tfrac{\pi}{8} \text{ or } \tfrac{7\pi}{8} \le RPA \le \pi \\ 3, & \text{if } \tfrac{\pi}{8} < RPA < \tfrac{3\pi}{8} \text{ or } \tfrac{5\pi}{8} < RPA < \tfrac{7\pi}{8} \end{cases} \tag{5}$$
  • The Left-Eye Angular Difference (LAD) between the Left Polarization Angle (LPA) and the reference angles of the polarized light is calculated using Equation (6). Similarly, the Right-Eye Angular Difference (RAD) is calculated with the Right Polarization Angle (RPA).
    $$LAD = \begin{cases} LPA, & \text{if } 0 \le LPA \le \tfrac{\pi}{8} \\ \pi - LPA, & \text{if } \tfrac{7\pi}{8} \le LPA \le \pi \\ \left|\tfrac{\pi}{2} - LPA\right|, & \text{if } \tfrac{3\pi}{8} \le LPA \le \tfrac{5\pi}{8} \\ \left|\tfrac{\pi}{4} - LPA\right|, & \text{if } \tfrac{\pi}{8} < LPA < \tfrac{3\pi}{8} \\ \left|\tfrac{3\pi}{4} - LPA\right|, & \text{if } \tfrac{5\pi}{8} < LPA < \tfrac{7\pi}{8} \end{cases} \qquad RAD = \begin{cases} RPA, & \text{if } 0 \le RPA \le \tfrac{\pi}{8} \\ \pi - RPA, & \text{if } \tfrac{7\pi}{8} \le RPA \le \pi \\ \left|\tfrac{\pi}{2} - RPA\right|, & \text{if } \tfrac{3\pi}{8} \le RPA \le \tfrac{5\pi}{8} \\ \left|\tfrac{\pi}{4} - RPA\right|, & \text{if } \tfrac{\pi}{8} < RPA < \tfrac{3\pi}{8} \\ \left|\tfrac{3\pi}{4} - RPA\right|, & \text{if } \tfrac{5\pi}{8} < RPA < \tfrac{7\pi}{8} \end{cases} \tag{6}$$
  • Finally, the PTI value is calculated by Equation (7). The pseudocode for this Polarization Type Identifier (PTI) vector update process is described in Algorithm 1 and illustrated in Figure 7.
    $$PTI_i = \begin{cases} LPT_i, & \text{if } LAD_i < RAD_i \\ RPT_i, & \text{if } LAD_i \ge RAD_i \end{cases} \tag{7}$$
Algorithm 1 Polarization Type Identifier (PTI) Vector Update Process
1: procedure PTI_Vector_Update
2:     Step 1: compute polarization angles.
3:     Left-Eye Polarization Angle (LPA) calculated by Equation (3).
4:     Right-Eye Polarization Angle (RPA) calculated by Equation (4).
5:     Step 2: determine polarization type.
6:     Left-Eye Polarization Type (LPT) calculated by Equation (5).
7:     Right-Eye Polarization Type (RPT) calculated by Equation (5).
8:     Step 3: compute angular differences.
9:     Left-Eye Angular Difference (LAD) calculated by Equation (6).
10:    Right-Eye Angular Difference (RAD) calculated by Equation (6).
11:    Step 4: update PTI vector.
12:    if LAD_i < RAD_i then
13:        PTI_i ← LPT_i
14:    else
15:        PTI_i ← RPT_i
16:    end if
17:    Output: updated PTI vector.
18: end procedure
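Algorithm 1 can be sketched as follows in Python. This is an illustrative reading of Equations (3)–(7), not the authors' code; in particular, normalizing the vectors before the arccos in Equation (3) so the dot product lies in [−1, 1] is our assumption:

```python
import numpy as np

def update_pti(X, X_new, rng):
    """One PTI update following Algorithm 1 (illustrative sketch)."""
    # Equation (3): Left Polarization Angle from current and updated vectors.
    # Normalizing before arccos is our assumption to keep the argument in [-1, 1].
    u = X / np.linalg.norm(X, axis=1, keepdims=True)
    v = X_new / np.linalg.norm(X_new, axis=1, keepdims=True)
    LPA = np.arccos(np.clip(np.sum(u * v, axis=1), -1.0, 1.0))
    # Equation (4): Right Polarization Angle is random in [0, pi].
    RPA = rng.random(len(X)) * np.pi

    def ptype(a):          # Equation (5): map an angle to a polarization type.
        if 3 * np.pi / 8 <= a <= 5 * np.pi / 8:
            return 1       # vertical linear polarization
        if a <= np.pi / 8 or a >= 7 * np.pi / 8:
            return 2       # horizontal linear polarization
        return 3           # circular polarization

    def adiff(a):          # Equation (6): distance to the nearest reference angle.
        refs = [0.0, np.pi, np.pi / 2, np.pi / 4, 3 * np.pi / 4]
        return min(abs(a - r) for r in refs)

    LPT = np.array([ptype(a) for a in LPA])
    RPT = np.array([ptype(a) for a in RPA])
    LAD = np.array([adiff(a) for a in LPA])
    RAD = np.array([adiff(a) for a in RPA])
    # Equation (7): keep the eye whose angle is closest to a reference angle.
    return np.where(LAD < RAD, LPT, RPT)

rng = np.random.default_rng(0)
X = rng.random((30, 5)); Xn = rng.random((30, 5))
PTI = update_pti(X, Xn, rng)
```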

2.4. Mathematical Model and Optimization Algorithm

Overall, the mantis shrimp’s behavior is characterized by foraging trips, in which it remains cautious and observant to detect prey or predators. In its burrow, it remains vigilant for any danger or opportunity for food. As previously described, it performs all these activities thanks to its eye’s independent movement skills, making it capable of perceiving linearly horizontal, linearly vertical, and circularly polarized light. This study used the polarized light detection signal capabilities of the mantis shrimp to determine the crustacean’s movement strategy, whether it is a forage; attack; or burrow, defense, or shelter survival tactic.
A mathematical model that simulates each eye’s rotational and independent movements is introduced to explore the mantis shrimp’s polarized light detection skills. Each eye of the mantis shrimp is considered a polarizing filter. The left eye is set to observe a known environment, while the right eye randomly explores an area, searching for new interactions within the environment. The intensity of the polarized light detected by each eye is meticulously compared, and then, the final decision is based on the signal with the highest intensity, which is directly related to the proximity of the polarization angle. Table 1 summarizes the polarized light detection capabilities of the mantis shrimp and their relationships with their behavioral strategies.

2.5. Strategy 1: Foraging

The characteristic movement of the mantis shrimp when searching for food can be described as Brownian motion. Therefore, we can analyze this behavior starting with the generalized Langevin equation. This equation describes the motion of a particle without external forces, known as a free Brownian particle [51]. The dynamics of this particle are described by Equation (8). In this case, by assuming that no external forces are acting on the particle, W(q) is set to 0, which simplifies Equation (8) to Equation (9). This represents the equilibrium between friction (ζ₀) and the noise term R(t). A detailed explanation of this simplification process can be found in [51].
$$\mu\ddot{q} = -\frac{dW}{dq} - \int_0^t \dot{q}(\tau)\,\zeta(t-\tau)\,d\tau + R(t) \tag{8}$$
$$\mu\ddot{q} = -\zeta_0\,\dot{q} + R(t) \tag{9}$$
Given that only q̈ and q̇ appear in the equation of motion, Equation (9) can be rewritten in terms of the velocity v = q̇ to obtain Equation (10), where the quantity ζ(t) in the Langevin equation is called the dynamic friction kernel, while R(t) is known as a random force. In Equation (11), a diffusion constant D, which scales the random component of the model, is then incorporated:
$$\mu\dot{v} = -\zeta_0\,v + R(t) \tag{10}$$
$$\mu\dot{v} = -\zeta_0\,v + D\cdot R(t) \tag{11}$$
Moreover, the current position is updated by adding the random movement defined by the Langevin equation. For this work, the value of ζ₀ is set to 1; thus, the equation can be rewritten as
$$x_i(t+1) = x_{best} - v + D\cdot R(t); \qquad v = x_i(t) - x_{best}; \qquad R(t) = x_r(t) - x_i(t) \tag{12}$$
where x_i(t+1) represents the new position of the mantis shrimp in the (t+1)-th iteration; x_best represents the best position found for the mantis shrimp; v is defined by the difference between the current position x_i(t) and the best position x_best; R(t) represents the difference between x_r(t) and the current vector x_i(t), where r ∈ [1, population size], r ≠ i; and D is a random diffusion value set within [−1, 1]. The foraging strategy is summarized in Figure 8.
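A minimal Python sketch of the foraging move in Equation (12) (illustrative only; the function and variable names are ours):

```python
import numpy as np

def foraging_step(X, i, best, rng):
    """Equation (12): Brownian-style foraging move (illustrative sketch)."""
    r = rng.choice([j for j in range(len(X)) if j != i])  # random other agent, r != i
    v = X[i] - best                 # velocity term v = x_i(t) - x_best
    R = X[r] - X[i]                 # random force R(t) = x_r(t) - x_i(t)
    D = rng.uniform(-1.0, 1.0)      # diffusion value D in [-1, 1]
    return best - v + D * R         # x_i(t+1) = x_best - v + D * R(t)

rng = np.random.default_rng(1)
X = rng.random((5, 3))
best = X[0].copy()
x_new = foraging_step(X, 2, best, rng)
```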

2.6. Strategy 2: Attack

The mantis shrimp’s strike is renowned in biology for its capacity to fracture hard shells, comparable to the impact of a bullet [52,53,54,55]. Equation (13) shows how this hit can be represented by a circular motion parametric equation in a two-dimensional plane:
$$x_t = r\cos\theta \tag{13}$$
where the vector r represents the mantis shrimp's front appendages and θ is the angular strike motion.
Finally, the mantis shrimp’s attack can be expressed as follows:
$$x_i(t+1) = x_{best}\cos\theta \tag{14}$$
where x_i(t+1) represents the new position of the mantis shrimp in the (t+1)-th iteration, x_best represents the best position found for the mantis shrimp, and θ is randomly generated with θ ∈ [π, 2π]. The graphical representation of the attack strategy is shown in Figure 9.
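Equation (14) reduces to scaling the best position by a random cosine, as in this illustrative sketch:

```python
import numpy as np

def attack_step(best, rng):
    """Equation (14): strike move scaling x_best by cos(theta)."""
    theta = rng.uniform(np.pi, 2 * np.pi)   # theta randomly drawn from [pi, 2*pi]
    return best * np.cos(theta)             # x_i(t+1) = x_best * cos(theta)

rng = np.random.default_rng(2)
best = np.array([1.0, -2.0, 0.5])
x_new = attack_step(best, rng)
```

Since cos θ lies in [−1, 1], the new position never exceeds x_best in magnitude, component-wise.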

2.7. Strategy 3: Burrow, Defense, or Shelter

The mantis shrimp bases its decision-making strategy regarding defending, sheltering, or burrowing on its exceptional visual ability to assess an opponent’s threat. Using their keen vision, they determine whether the opponent is significantly larger and likely stronger, choosing to flee, or if similar in size or smaller, they aggressively decide to defend or shelter in their territory [56,57,58,59,60]. These animals hide in their shelters to increase the chance of staying safe [61]. Equation (15) is used to describe the mantis shrimp’s defense or shelter strategy:
$$\text{Defense: } x_i(t+1) = x_{best} + k\,(x_{best}), \qquad \text{Shelter: } x_i(t+1) = x_{best} - k\,(x_{best}), \qquad k \in [0, 0.3] \tag{15}$$
where x_i(t+1) represents the new position of the mantis shrimp in the (t+1)-th iteration, x_best represents the best position found for the mantis shrimp, and k is a scaling factor randomly generated between 0 and 0.3. The graphical representation of the burrow, defense, or shelter strategy is shown in Figure 10.
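Equation (15) can be sketched as follows; the 50/50 choice between the defense and shelter branches is our assumption, as the text does not specify how one branch is selected:

```python
import numpy as np

def burrow_step(best, rng, k_max=0.3):
    """Equation (15): defense (+) or shelter (-) move around x_best."""
    k = rng.uniform(0.0, k_max)        # scaling factor k in [0, 0.3]
    if rng.random() < 0.5:             # branch choice is our assumption
        return best + k * best         # defense
    return best - k * best             # shelter

rng = np.random.default_rng(3)
best = np.array([2.0, -1.0])
x_new = burrow_step(best, rng)
```

Either branch keeps the new position within ±30% of x_best, component-wise.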

Sensitivity Analysis of k Parameter

The third strategy introduces a scaling parameter k, whose upper bound is fixed at 0.3. To evaluate the algorithm's sensitivity to this parameter, nine values of k ranging from 0.1 to 0.9 in increments of 0.1 were studied. The analysis included 10 unimodal and 10 multimodal benchmark functions; see Table A1 and Table A2. Each test was executed independently 30 times, with a population size of 30 and 200 iterations. The statistical results of this sensitivity analysis are presented in Table 2, Table 3, Table 4, Table 5 and Table 6. Figure 11 illustrates the behavior of MShOA when using different values of k, as tested on one representative unimodal function and one multimodal function.
Subsequently, the nonparametric Wilcoxon signed-rank test was applied to determine whether any alternative value of k led to statistically significant performance differences when compared with k = 0.3 . The results of this test, as summarized in Table 7, indicate that no statistically significant differences were observed at the 5% significance level.
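A pairwise comparison of this kind can be reproduced with SciPy's Wilcoxon signed-rank test for paired samples. The data below are synthetic stand-ins for the 30 per-run best fitness values, for illustration only:

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired best-fitness samples for k = 0.3 vs. an alternative k.
# Synthetic data for illustration -- NOT the paper's measurements.
rng = np.random.default_rng(0)
runs_k03 = rng.normal(1.00, 0.05, size=30)              # 30 runs at k = 0.3
runs_alt = runs_k03 + rng.normal(0.0, 0.05, size=30)    # alternative k, same mean

stat, p = wilcoxon(runs_k03, runs_alt)
# A p-value above 0.05 indicates no statistically significant difference
# at the 5% significance level, mirroring the conclusion drawn from Table 7.
```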

2.8. Pseudocode for MShOA

The pseudocode and flowchart of MShOA are described in Algorithm 2 and Figure 12, respectively.
Algorithm 2 Optimization Algorithm of the Mantis Shrimp
1: procedure MShOA
2:     Initialization of parameters: specify the number of search agents, population size, and maximum number of iterations.
3:     Randomly generate the initial population.
4:     Randomly generate the Polarization Type Indicator (PTI).
5:     while iteration < maximum number of iterations do
6:         if PTI_i = 1 then                % vertical linearly polarized light
7:             Strategy 1: foraging (Equation (12)).
8:         else if PTI_i = 2 then           % horizontal linearly polarized light
9:             Strategy 2: attack (Equation (14)).
10:        else if PTI_i = 3 then           % circularly polarized light
11:            Strategy 3: burrow, defense, or shelter (Equation (15)).
12:        end if
13:        Update the Polarization Type Identifier (PTI) vector (Algorithm 1).
14:        Update the population.
15:        Calculate the fitness value from the new population.
16:        Update the fitness and the best solution found.
17:        iteration = iteration + 1.
18:    end while
19:    Display the best solution found.
20: end procedure
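Putting the three strategies together, Algorithm 2 can be sketched as a compact Python loop. This is an illustrative simplification, not the authors' MATLAB code: the PTI vector is re-randomized each iteration instead of running the full Algorithm 1 update, and greedy replacement of worse solutions is assumed:

```python
import numpy as np

def mshoa(objective, lb, ub, dim, n_ms=30, max_iter=200, seed=0):
    """Simplified MShOA loop following Algorithm 2 (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n_ms, dim)) * (ub - lb)          # Equation (1)
    fit = np.array([objective(x) for x in X])
    best = X[np.argmin(fit)].copy(); f_best = fit.min()
    PTI = np.round(1 + 2 * rng.random(n_ms)).astype(int)  # Equation (2)
    for _ in range(max_iter):
        for i in range(n_ms):
            if PTI[i] == 1:                               # Strategy 1: foraging, Eq. (12)
                r = rng.choice([j for j in range(n_ms) if j != i])
                x_new = best - (X[i] - best) + rng.uniform(-1, 1) * (X[r] - X[i])
            elif PTI[i] == 2:                             # Strategy 2: attack, Eq. (14)
                x_new = best * np.cos(rng.uniform(np.pi, 2 * np.pi))
            else:                                         # Strategy 3: burrow/defense/shelter, Eq. (15)
                k = rng.uniform(0.0, 0.3)
                x_new = best + (k if rng.random() < 0.5 else -k) * best
            x_new = np.clip(x_new, lb, ub)
            f_new = objective(x_new)
            if f_new < fit[i]:                            # greedy replacement (assumed)
                X[i], fit[i] = x_new, f_new
                if f_new < f_best:
                    best, f_best = x_new.copy(), f_new
        # Simplified PTI refresh (the full update is Algorithm 1).
        PTI = np.round(1 + 2 * rng.random(n_ms)).astype(int)
    return best, f_best

# Example run on a 5-D sphere function (minimum 0 at the origin).
best, f_best = mshoa(lambda x: np.sum(x**2), -10.0, 10.0, dim=5)
```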

MShOA’s Time Complexity

The computational complexity of an optimization method is characterized by a function that relates the method’s runtime to the size of the problem’s input. Big-O notation functions as a widely accepted representation. The time complexity of MShOA is described as follows:
$$O(\mathrm{MShOA}) = O(\text{initial population}) + O(\text{generate PTI}) + O(\text{Strategy 1}) + O(\text{Strategy 2}) + O(\text{Strategy 3}) + O(\text{update PTI by Algorithm 1}) + O(\text{update population}) + O(\text{update fitness}) \tag{16}$$
In addition to Equation (16), the time complexity of MShOA depends on the number of iterations ( M a x I t e r ), the population size of mantis shrimps (nMS), the dimensionality of the problem (dim), and the cost of the objective function (f). Table 8 describes each part of Equation (16).
Therefore, the overall time complexity of MShOA can be computed as follows:
$$O(\mathrm{MShOA}) = O(nMS \cdot dim) + O(dim \cdot 1) + O(MaxIter \cdot 1) + O(MaxIter \cdot 1) + O(MaxIter \cdot 1) + O(MaxIter \cdot 1) + O(MaxIter \cdot nMS) + O(MaxIter \cdot nMS \cdot dim \cdot f) \tag{17}$$
In O notation, when we add several terms, the fastest-growing one dominates. Hence, the time complexity of MShOA can be expressed as follows:
$$O(\mathrm{MShOA}) = O(MaxIter \cdot nMS \cdot dim \cdot f) \tag{18}$$

3. Experimental Setup

The efficiency and stability of MShOA were evaluated by solving 20 optimization functions from the literature; see Appendix A. MShOA was compared with the 14 bio-inspired algorithms described below:
  • Ant Lion Optimizer (ALO): The algorithm presents the predatory behavior of the ant lion in nature and mathematically models five stages: random movement, building traps, entrapment of ants in traps, catching prey, and rebuilding traps [21].
  • Arithmetic Optimization Algorithm (AOA): This algorithm takes advantage of the distribution behavior of fundamental arithmetic operations in mathematics, including multiplication (M), division (D), subtraction (S), and addition (A) [62].
  • Beluga Whale Optimization (BWO): This algorithm was inspired by the natural behaviors of beluga whales and mathematically models their behavior in pair swimming, preying, and whale fall [63].
  • Dandelion Optimizer (DO): This algorithm simulates the long-distance flight of dandelion seeds relying on the wind and is divided into three stages: the rising stage, the descending stage, and the landing stage [64].
  • Evolutionary Mating Algorithm (EMA): the evolutionary algorithm is based on a random mating concept from the Hardy–Weinberg equilibrium [65].
  • Grey Wolf Optimizer (GWO): This algorithm is inspired by grey wolves (Canis lupus) and mimics their leadership hierarchy and hunting mechanisms in nature [22].
  • Liver Cancer Algorithm (LCA): This algorithm mimics liver tumor’s growth and takeover processes and mathematically models their ability to replicate and spread to other organs [66].
  • Mexican Axolotl Optimization Algorithm (MAO): This algorithm was inspired by the way axolotls live in their aquatic environment and is modeled after their processes of regeneration, reproduction, and tissue restoration [67].
  • Marine Predators Algorithm (MPA): This algorithm is inspired by the interactions between predators and prey in marine ecosystems and models the widespread foraging strategies and optimal encounter rate policies [68].
  • Salp Swarm Algorithm (SSA): This algorithm is inspired by the behavior of salps in nature and primarily models their swarming behavior when navigating and foraging in oceans [17].
  • Synergistic Swarm Optimization Algorithm (SSOA): This algorithm combines swarm intelligence with synergistic cooperation to find optimal solutions. It mathematically models a cooperation mechanism, where particles exchange information and learn from one another, enhancing their search behaviors and improving the overall performance [69].
  • Tunicate Swarm Algorithm (TSA): this algorithm imitates the behavior of tunicates and models their use of jet propulsion and collective movements while navigating and searching for food [70].
  • Whale Optimization Algorithm (WOA): this algorithm imitates the social behavior of humpback whales in nature and mathematically models their bubble-net hunting strategy [18].
  • Catch Fish Optimization Algorithm (CFOA): this algorithm is inspired by traditional rural fishing practices and models the strategic process of fish capture through two main phases: an exploration phase combining individual intuition and group collaboration, and an exploitation phase based on coordinated collective action [71].
Each algorithm was executed independently 30 times, with a population size of 30 and 200 iterations. The initial configuration parameters of all the employed algorithms are detailed in Table 9. The Wilcoxon test was applied to compare the performances of the algorithms. The four best-ranked algorithms, as computed using the Friedman test, were selected for further evaluation on 10 optimization problems taken from the CEC2020 benchmark suite detailed in Table 24. Moreover, three engineering study cases based on the optimal power flow optimization problem were also studied.
All experiments were conducted on a standard desktop with the following specifications: Intel Core i9-13900K 5.8 GHz processor, 192 GB RAM, Linux Ubuntu 24.04 LTS operating system, and MATLAB R2024a compiler.

4. Results and Discussion

The computational results of MShOA, GWO, BWO, DO, WOA, MPA, LCA, SSA, EMA, ALO, MAO, AOA, SSOA, TSA, and CFOA on 20 benchmark test functions are presented in Table 10, Table 11, Table 12, Table 13, Table 14, Table 15, Table 16 and Table 17, where the average, standard deviation, and best values are provided for comparison measurements.
Three stages of nonparametric Wilcoxon signed-rank tests, at a 5% significance level, and Friedman tests determined the algorithms’ performances. In stage 1, 10 unimodal functions were analyzed, whereas in stage 2, 10 multimodal functions were studied. The unimodal and multimodal functions evaluated the algorithms’ capabilities in exploitation and exploration in the solution space, respectively. In stage 3, the set of 20 previously used functions—unimodal and multimodal—was analyzed with the Wilcoxon and Friedman statistical tests; see Table A1 and Table A2. The Wilcoxon test assessed whether MShOA exhibited statistically superior performance; a p-value below 0.05 signified that MShOA outperformed the algorithm under comparison. The Friedman test evaluated algorithms by ranking them according to their average performance and benchmark scores.
In stage 1, according to the Wilcoxon test (see Table 18), the MShOA algorithm was better than all the others in local searches. The results of the Friedman test (see Table 19) show that MShOA ranked first among the analyzed algorithms. Figure 13 presents the convergence curves for each unimodal function.
In stage 2, the algorithms’ performances were evaluated with 10 multimodal functions; see Table A1. The Wilcoxon test results (see Table 20) indicate that MShOA outperformed SSA, EMA, ALO, MAO, and CFOA; meanwhile, they also demonstrate competitive results compared with GWO, BWO, DO, WOA, MPA, LCA, AOA, SSOA, and TSA. The Friedman test results (see Table 21) indicate that the BWO algorithm ranked first. In the global search, MShOA demonstrated a competitive performance by ranking in second place. Figure 14 presents the convergence curves for each multimodal function.
In stage 3, the set of 20 previously used functions—unimodal and multimodal—was analyzed with the Wilcoxon and Friedman statistical tests; see Table A1. The Wilcoxon test results, shown in Table 22, indicate that MShOA outperformed the following optimization algorithms: ALO, DO, EMA, GWO, LCA, MAO, MPA, SSA, TSA, and WOA. However, no significant differences were observed with BWO, AOA, and SSOA. The Friedman test analysis (see Table 23) shows that BWO ranked first, MShOA ranked second, and SSOA and AOA ranked third and fourth, respectively.

5. Real-World Applications

The performance of MShOA was evaluated by solving 10 optimization problems from CEC 2020 [72]. These problems are presented in Table 24 and described in Sections 5.2–5.11. In addition, MShOA was tested on three different cases of the optimal power flow problem for the IEEE 30-bus configuration: fuel cost, active power, and reactive power. The results for each of these engineering problems were obtained with MShOA and the top three algorithms ranked according to the Friedman test: BWO, SSOA, and AOA; see Table 23. The results for each real-world optimization problem described in Table 24 are summarized in tables that include the decision variables and the feasible objective function value found; see Table 25, Table 26, Table 27, Table 28, Table 29, Table 30, Table 31, Table 32, Table 33 and Table 34. The constraint-handling method used for each of the real-world optimization problems is described in Section 5.1.

5.1. Constraint Handling

The penalization method taken from [39], which is applied to engineering problems with constraints, is presented in Equation (19):
$$F(x) = \begin{cases} f(x), & \text{if } MCV(x) \le 0 \\ f_{\max} + MCV(x), & \text{otherwise} \end{cases} \tag{19}$$
where f(x) is the fitness function value of a feasible solution (i.e., a solution that satisfies all the constraints). Meanwhile, f_max represents the fitness function value of the worst solution in the population, and MCV(x) is the Mean Constraint Violation [39], given in Equation (20):
$$MCV(x) = \frac{\sum_{i=1}^{p} G_i(x) + \sum_{j=1}^{m} H_j(x)}{p + m}$$
Here, $MCV(x)$ is the average of the inequality-constraint violations $G_i(x)$ and the equality-constraint violations $H_j(x)$, defined in Equations (21) and (22), respectively. Note that each inequality constraint $g_i(x)$ and each equality constraint $h_j(x)$ contributes a single value, namely the magnitude of its violation, which acts as the penalty applied when the constraint is violated.
$$G_i(x) = \begin{cases} 0, & \text{if } g_i(x) \leq 0 \\ g_i(x), & \text{otherwise} \end{cases}$$
$$H_j(x) = \begin{cases} 0, & \text{if } |h_j(x)| \leq \delta \\ |h_j(x)|, & \text{otherwise} \end{cases}$$
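The penalty scheme of Equations (19)–(22) can be sketched in a few lines of Python; the function names and the toy constraint below are illustrative, not part of the original implementation:

```python
DELTA = 1e-4  # tolerance for equality constraints (delta in Eq. (22))

def mcv(x, ineq, eq, delta=DELTA):
    """Mean Constraint Violation (Eq. (20)): average of the violation
    magnitudes G_i (Eq. (21)) and H_j (Eq. (22)) over all p + m constraints."""
    g_viol = [max(0.0, g(x)) for g in ineq]                          # G_i(x)
    h_viol = [0.0 if abs(h(x)) <= delta else abs(h(x)) for h in eq]  # H_j(x)
    return (sum(g_viol) + sum(h_viol)) / (len(ineq) + len(eq))

def penalized_fitness(x, f, f_max, ineq, eq):
    """Penalty method of Eq. (19): a feasible solution keeps its raw fitness;
    an infeasible one is pushed past the worst member of the population."""
    v = mcv(x, ineq, eq)
    return f(x) if v <= 0 else f_max + v
```

A feasible candidate keeps its objective value, while an infeasible one is shifted beyond `f_max` by its mean violation, so any feasible solution always ranks better than any infeasible one.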

5.2. Process Synthesis Problem

The process synthesis problem includes two decision variables and two inequality constraints. The mathematical representation of the problem is shown in Equation (23). The best-known feasible objective value taken from Table 24 is 2.
$$\begin{aligned}
\text{Minimize: } & f(\bar{x}) = x_2 + 2x_1 \\
\text{Subject to: } & g_1(\bar{x}) = -x_1^2 - x_2 + 1.25 \leq 0 \\
& g_2(\bar{x}) = x_1 + x_2 - 1.6 \leq 0 \\
\text{With bounds: } & 0 \leq x_1 \leq 1.6, \quad x_2 \in \{0, 1\}
\end{aligned}$$
Table 25 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, SSOA and AOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 7.2546608720 × 10 09 . The convergence graph is shown in Figure 15.
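For reference, the objective and constraints of Equation (23) are simple to evaluate directly. The sketch below assumes the standard Kocis–Grossmann statement of this benchmark, with both constraints written in $\leq 0$ form (function names are illustrative):

```python
def objective(x1, x2):
    # Objective of the process synthesis problem (Eq. (23))
    return x2 + 2.0 * x1

def constraints(x1, x2):
    # Both constraints in g_k(x) <= 0 form (standard Kocis-Grossmann statement)
    g1 = -x1**2 - x2 + 1.25
    g2 = x1 + x2 - 1.6
    return g1, g2
```

At the best-known solution $x_1 = 0.5$, $x_2 = 1$, the objective is exactly 2 and $g_1$ is active ($g_1 = 0$), which matches the best-known feasible value in Table 24.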

5.3. Process Synthesis and Design Problem

The process synthesis and design problem includes three decision variables, one inequality constraint, and one equality constraint. The mathematical representation of the problem is shown in Equation (24). The best-known feasible objective value taken from Table 24 is 2.5576545740 .
$$\begin{aligned}
\text{Minimize: } & f(\bar{x}) = x_3 + x_2 + 2x_1 \\
\text{Subject to: } & h_1(\bar{x}) = -2\exp(-x_2) + x_1 = 0 \\
& g_1(\bar{x}) = x_2 - x_1 + x_3 \leq 0 \\
\text{With bounds: } & 0.5 \leq x_1, x_2 \leq 1.4, \quad x_3 \in \{0, 1\}
\end{aligned}$$
Table 26 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. However, only the results of MShOA did not violate any problem constraints. MShOA’s difference from the best-known feasible objective function value was 1.8912987813 × 10 04 . The convergence graph is shown in Figure 16.

5.4. Process Flow Sheeting Problem

The process flow sheeting problem includes three decision variables and three inequality constraints. The mathematical representation of the problem is shown in Equation (25). The best-known feasible objective value from Table 24 is 1.0765430833 .
$$\begin{aligned}
\text{Minimize: } & f(\bar{x}) = -0.7x_3 + 0.8 + 5(0.5 - x_1)^2 \\
\text{Subject to: } & g_1(\bar{x}) = -\exp(x_1 - 0.2) - x_2 \leq 0 \\
& g_2(\bar{x}) = x_2 + 1.1x_3 + 1.0 \leq 0 \\
& g_3(\bar{x}) = x_1 - x_3 - 0.2 \leq 0 \\
\text{With bounds: } & -2.22554 \leq x_2 \leq -1, \quad 0.2 \leq x_1 \leq 1, \quad x_3 \in \{0, 1\}
\end{aligned}$$
Table 27 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, BWO and AOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 3.4569167 × 10 03 . The convergence graph is shown in Figure 17.

5.5. Weight Minimization of a Speed Reducer

The weight minimization of a speed reducer includes seven decision variables and eleven inequality constraints. The mathematical representation of the problem is shown in Equation (26). The best-known feasible objective value taken from Table 24 is 2.9944244658 × 10 + 03 .
$$\begin{aligned}
\text{Minimize: } & f(\bar{x}) = 0.7854\, x_2^2 x_1 \left(14.9334 x_3 - 43.0934 + 3.3333 x_3^2\right) + 0.7854\left(x_5 x_7^2 + x_4 x_6^2\right) \\
& \quad - 1.508\, x_1 \left(x_7^2 + x_6^2\right) + 7.477\left(x_7^3 + x_6^3\right) \\
\text{Subject to: } & g_1(\bar{x}) = -x_1 x_2^2 x_3 + 27 \leq 0 \\
& g_2(\bar{x}) = -x_1 x_2^2 x_3^2 + 397.5 \leq 0 \\
& g_3(\bar{x}) = -x_2 x_6^4 x_3 x_4^{-3} + 1.93 \leq 0 \\
& g_4(\bar{x}) = -x_2 x_7^4 x_3 x_5^{-3} + 1.93 \leq 0 \\
& g_5(\bar{x}) = 10\, x_6^{-3} \sqrt{16.91 \times 10^6 + \left(745\, x_4 x_2^{-1} x_3^{-1}\right)^2} - 1100 \leq 0 \\
& g_6(\bar{x}) = 10\, x_7^{-3} \sqrt{157.5 \times 10^6 + \left(745\, x_5 x_2^{-1} x_3^{-1}\right)^2} - 850 \leq 0 \\
& g_7(\bar{x}) = x_2 x_3 - 40 \leq 0 \\
& g_8(\bar{x}) = -x_1 x_2^{-1} + 5 \leq 0 \\
& g_9(\bar{x}) = x_1 x_2^{-1} - 12 \leq 0 \\
& g_{10}(\bar{x}) = 1.5\, x_6 - x_4 + 1.9 \leq 0 \\
& g_{11}(\bar{x}) = 1.1\, x_7 - x_5 + 1.9 \leq 0 \\
\text{With bounds: } & 0.7 \leq x_2 \leq 0.8, \quad 17 \leq x_3 \leq 28, \quad 2.6 \leq x_1 \leq 3.6, \\
& 5 \leq x_7 \leq 5.5, \quad 7.3 \leq x_5, x_4 \leq 8.3, \quad 2.9 \leq x_6 \leq 3.9.
\end{aligned}$$
Table 28 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, SSOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 8.3485313744 × 10 + 1 . The convergence graph is shown in Figure 18.

5.6. Tension/Compression Spring Design (Case 1)

The tension/compression spring design (case 1) problem includes three decision variables and four inequality constraints. The mathematical representation of the problem is shown in Equation (27). The best-known feasible objective value taken from Table 24 is 1.2665232788 × 10 02 .
$$\begin{aligned}
\text{Minimize: } & f(\bar{x}) = x_1^2 x_2 (2 + x_3) \\
\text{Subject to: } & g_1(\bar{x}) = 1 - \frac{x_2^3 x_3}{71785\, x_1^4} \leq 0, \\
& g_2(\bar{x}) = \frac{4 x_2^2 - x_1 x_2}{12566 \left(x_2 x_1^3 - x_1^4\right)} + \frac{1}{5108\, x_1^2} - 1 \leq 0, \\
& g_3(\bar{x}) = 1 - \frac{140.45\, x_1}{x_2^2 x_3} \leq 0, \\
& g_4(\bar{x}) = \frac{x_1 + x_2}{1.5} - 1 \leq 0, \\
\text{With bounds: } & 0.05 \leq x_1 \leq 2.00, \quad 0.25 \leq x_2 \leq 1.30, \quad 2.00 \leq x_3 \leq 15.00.
\end{aligned}$$
Table 29 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, SSOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 1.3085251562 × 10 04 . The convergence graph is shown in Figure 19.
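As a sanity check, the objective and constraints of Equation (27) can be evaluated at a near-optimal design that is widely reported for this benchmark in the literature (the specific point below is an external reference value, not taken from the paper's result tables):

```python
def spring_objective(x1, x2, x3):
    # Spring weight (Eq. (27)): f = x1^2 * x2 * (2 + x3)
    return x1**2 * x2 * (2.0 + x3)

def spring_constraints(x1, x2, x3):
    # The four inequality constraints of Eq. (27), each in g_k <= 0 form
    g1 = 1.0 - (x2**3 * x3) / (71785.0 * x1**4)
    g2 = ((4.0*x2**2 - x1*x2) / (12566.0*(x2*x1**3 - x1**4))
          + 1.0/(5108.0*x1**2) - 1.0)
    g3 = 1.0 - 140.45*x1 / (x2**2 * x3)
    g4 = (x1 + x2)/1.5 - 1.0
    return g1, g2, g3, g4

# Widely reported near-optimal design (wire diameter, coil diameter, turns)
x_ref = (0.0516891, 0.3567177, 11.288966)
```

At `x_ref` the objective is about 0.012665, matching the best-known feasible value cited in Table 24, with all constraints satisfied to numerical tolerance.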

5.7. Welded Beam Design

The welded beam design includes four decision variables and five inequality constraints. The mathematical representation of the problem is shown in Equation (28). The best-known feasible objective value from Table 24 is 1.6702177263 .
$$\begin{aligned}
\text{Minimize: } & f(\bar{x}) = 0.04811\, x_3 x_4 (x_2 + 14) + 1.10471\, x_1^2 x_2, \\
\text{Subject to: } & g_1(\bar{x}) = x_1 - x_4 \leq 0 \\
& g_2(\bar{x}) = \delta(\bar{x}) - \delta_{\max} \leq 0 \\
& g_3(\bar{x}) = P - P_c(\bar{x}) \leq 0 \\
& g_4(\bar{x}) = -\tau_{\max} + \tau(\bar{x}) \leq 0 \\
& g_5(\bar{x}) = \sigma(\bar{x}) - \sigma_{\max} \leq 0 \\
\text{Where: } & \tau = \sqrt{\tau'^2 + \tau''^2 + \frac{2\, \tau' \tau'' x_2}{2R}}, \quad \tau'' = \frac{R M}{J}, \quad \tau' = \frac{P}{\sqrt{2}\, x_1 x_2}, \\
& M = P\left(\frac{x_2}{2} + L\right), \quad R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}, \\
& J = 2\left[\sqrt{2}\, x_1 x_2 \left(\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2\right)\right], \\
& \sigma(\bar{x}) = \frac{6 P L}{x_4 x_3^2}, \quad \delta(\bar{x}) = \frac{6 P L^3}{E x_3^2 x_4}, \\
& P_c(\bar{x}) = \frac{4.013\, E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right), \\
\text{Constants: } & L = 14\ \text{in}, \quad P = 6000\ \text{lb}, \quad E = 30 \times 10^6\ \text{psi}, \quad \sigma_{\max} = 30{,}000\ \text{psi}, \\
& \tau_{\max} = 13{,}600\ \text{psi}, \quad G = 12 \times 10^6\ \text{psi}, \quad \delta_{\max} = 0.25\ \text{in}, \\
\text{With bounds: } & 0.125 \leq x_1 \leq 2, \quad 0.1 \leq x_2, x_3 \leq 10, \quad 0.1 \leq x_4 \leq 2.
\end{aligned}$$
Table 30 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, BWO and SSOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 6.7148143232 × 10 02 . The convergence graph is shown in Figure 20.

5.8. Multiple Disk Clutch Brake Design Problem

The multiple disk clutch brake design problem includes five decision variables and eight inequality constraints. The mathematical representation of the problem is shown in Equation (29). The best-known feasible objective value taken from Table 24 is 2.3524245790 × 10 01 .
$$\begin{aligned}
\text{Minimize: } & f(\bar{x}) = \pi \left(x_2^2 - x_1^2\right) x_3 (x_5 + 1)\, \rho \\
\text{Subject to: } & g_1(\bar{x}) = -p_{\max} + p_{rz} \leq 0, \\
& g_2(\bar{x}) = p_{rz} V_{sr} - V_{sr,\max}\, p_{\max} \leq 0, \\
& g_3(\bar{x}) = R + x_1 - x_2 \leq 0, \\
& g_4(\bar{x}) = -L_{\max} + (x_5 + 1)(x_3 + \delta) \leq 0, \\
& g_5(\bar{x}) = s M_s - M_h \leq 0, \\
& g_6(\bar{x}) = -T \leq 0, \\
& g_7(\bar{x}) = -V_{sr,\max} + V_{sr} \leq 0, \\
& g_8(\bar{x}) = T - T_{\max} \leq 0. \\
\text{Where: } & M_h = \frac{2}{3}\, \mu\, x_4 x_5\, \frac{x_2^3 - x_1^3}{x_2^2 - x_1^2}\ \text{N·mm}, \quad \omega = \frac{\pi n}{30}\ \text{rad/s}, \\
& A = \pi \left(x_2^2 - x_1^2\right)\ \text{mm}^2, \quad p_{rz} = \frac{x_4}{A}\ \text{N/mm}^2, \\
& V_{sr} = \frac{\pi R_{sr} n}{30}\ \text{mm/s}, \quad R_{sr} = \frac{2}{3}\, \frac{x_2^3 - x_1^3}{x_2^2 - x_1^2}\ \text{mm}, \quad T = \frac{I_z \omega}{M_h + M_f}, \\
\text{Constants: } & R = 20\ \text{mm}, \quad L_{\max} = 30\ \text{mm}, \quad \mu = 0.6, \quad V_{sr,\max} = 10\ \text{m/s}, \\
& \delta = 0.5\ \text{mm}, \quad s = 1.5, \quad T_{\max} = 15\ \text{s}, \quad n = 250\ \text{rpm}, \\
& I_z = 55\ \text{kg·m}^2, \quad M_s = 40\ \text{Nm}, \quad M_f = 3\ \text{Nm}, \quad \text{and } p_{\max} = 1. \\
\text{With bounds: } & 60 \leq x_1 \leq 80, \quad 90 \leq x_2 \leq 110, \quad 1 \leq x_3 \leq 3, \\
& 0 \leq x_4 \leq 1000, \quad 2 \leq x_5 \leq 9
\end{aligned}$$
Table 31 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, BWO, SSOA, and AOA did not satisfy one or more problem constraints. MShOA’s difference from the best-known feasible objective function value was 3.8493744275 × 10 04 . The convergence graph is shown in Figure 21.

5.9. Planetary Gear Train Design Optimization Problem

The planetary gear train design optimization problem includes six decision variables and eleven inequality constraints. The mathematical representation of the problem is shown in Equation (30). The best-known feasible objective value from Table 24 is 5.2576870748 × 10 01 .
$$\begin{aligned}
\text{Minimize: } & f(\bar{x}) = \max_k \left| i_k - i_{0k} \right|, \quad k \in \{1, 2, R\} \\
\text{Subject to: } & g_1(\bar{x}) = m_3 (N_6 + 2.5) - D_{\max} \leq 0, \\
& g_2(\bar{x}) = m_1 (N_1 + N_2) + m_1 (N_2 + 2) - D_{\max} \leq 0, \\
& g_3(\bar{x}) = m_3 (N_4 + N_5) + m_3 (N_5 + 2) - D_{\max} \leq 0, \\
& g_4(\bar{x}) = \left| m_1 (N_1 + N_2) - m_3 (N_6 - N_3) \right| - m_1 - m_3 \leq 0, \\
& g_5(\bar{x}) = -(N_1 + N_2) \sin(\pi / p) + N_2 + 2 + \delta_{22} \leq 0, \\
& g_6(\bar{x}) = -(N_6 - N_3) \sin(\pi / p) + N_3 + 2 + \delta_{33} \leq 0, \\
& g_7(\bar{x}) = -(N_4 + N_5) \sin(\pi / p) + N_5 + 2 + \delta_{55} \leq 0, \\
& g_8(\bar{x}) = \left(N_3 + N_5 + 2 + \delta_{35}\right)^2 - (N_6 - N_3)^2 - (N_4 + N_5)^2 \\
& \qquad + 2 (N_6 - N_3)(N_4 + N_5) \cos\left(\frac{2\pi}{p} - \beta\right) \leq 0, \\
& g_9(\bar{x}) = N_4 - N_6 + 2 N_5 + 2\delta_{56} + 4 \leq 0, \\
& g_{10}(\bar{x}) = 2 N_3 - N_6 + N_4 + 2\delta_{34} + 4 \leq 0, \\
& h_1(\bar{x}) = \frac{N_6 - N_4}{p} = \text{integer}, \\
\text{Where: } & i_1 = \frac{N_6}{N_4}, \quad i_{01} = 3.11, \quad i_2 = \frac{N_6 (N_1 N_3 + N_2 N_4)}{N_1 N_3 (N_6 - N_4)}, \quad i_{02} = 1.84, \\
& i_R = -\frac{N_2 N_6}{N_1 N_3}, \quad i_{0R} = -3.11, \\
& \bar{x} = \left(p, N_6, N_5, N_4, N_3, N_2, N_1, m_3, m_1\right), \\
& \delta_{22} = \delta_{33} = \delta_{55} = \delta_{35} = \delta_{56} = 0.5, \\
& \beta = \cos^{-1}\left[\frac{(N_4 + N_5)^2 + (N_6 - N_3)^2 - (N_3 + N_5)^2}{2 (N_6 - N_3)(N_4 + N_5)}\right], \quad D_{\max} = 220, \\
\text{With bounds: } & p \in \{3, 4, 5\}, \quad m_1, m_3 \in \{1.75, 2.0, 2.25, 2.5, 2.75, 3.0\}, \\
& 17 \leq N_1 \leq 96, \quad 14 \leq N_2 \leq 54, \quad 14 \leq N_3 \leq 51, \\
& 17 \leq N_4 \leq 46, \quad 14 \leq N_5 \leq 51, \quad 48 \leq N_6 \leq 124, \quad \text{and } N_i = \text{integer}.
\end{aligned}$$
Table 32 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, SSOA and AOA did not satisfy one or more problem constraints. MShOA's difference from the best-known feasible objective function value was 4.2312925200 × 10 03 . The convergence graph is shown in Figure 22.

5.10. Tension/Compression Spring Design (Case 2)

The tension/compression spring design (case 2) problem includes three decision variables and eight inequality constraints. The mathematical representation of the problem is shown in Equation (31). The best-known feasible objective value taken from Table 24 is 2.6138840583 .
$$\begin{aligned}
\text{Minimize: } & f(\bar{x}) = \frac{\pi^2 x_2 x_3^2 (x_1 + 2)}{4} \\
\text{Subject to: } & g_1(\bar{x}) = \frac{8000\, C_f\, x_2}{\pi x_3^3} - 189000 \leq 0, \\
& g_2(\bar{x}) = l_f - 14 \leq 0, \\
& g_3(\bar{x}) = 0.2 - x_3 \leq 0, \\
& g_4(\bar{x}) = x_2 - 3 \leq 0, \\
& g_5(\bar{x}) = 3 - \frac{x_2}{x_3} \leq 0, \\
& g_6(\bar{x}) = \sigma_p - 6 \leq 0, \\
& g_7(\bar{x}) = \sigma_p + \frac{700}{K} + 1.05 (x_1 + 2) x_3 - l_f \leq 0, \\
& g_8(\bar{x}) = 1.25 - \frac{700}{K} \leq 0, \\
\text{Where: } & C_f = \frac{4 (x_2 / x_3) - 1}{4 (x_2 / x_3) - 4} + \frac{0.615\, x_3}{x_2}, \quad K = \frac{11.5 \times 10^6\, x_3^4}{8 x_1 x_2^3}, \\
& \sigma_p = \frac{300}{K}, \quad l_f = \frac{1000}{K} + 1.05 (x_1 + 2) x_3, \\
\text{With bounds: } & 1 \leq x_1\ (\text{integer}) \leq 70, \quad 0.6 \leq x_2\ (\text{continuous}) \leq 3, \\
& x_3\ (\text{discrete}) \in \{0.009, 0.0095, 0.0104, 0.0118, 0.0128, 0.0132, 0.014, 0.015, \\
& \quad 0.0162, 0.0173, 0.018, 0.020, 0.023, 0.025, 0.028, 0.032, 0.035, 0.041, \\
& \quad 0.047, 0.054, 0.063, 0.072, 0.080, 0.092, 0.105, 0.120, 0.135, 0.148, \\
& \quad 0.162, 0.177, 0.192, 0.207, 0.225, 0.244, 0.263, 0.283, 0.307, 0.331, \\
& \quad 0.362, 0.394, 0.4375, 0.500\}.
\end{aligned}$$
Table 33 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. MShOA’s difference from the best-known feasible objective function value was 8.5610383350 × 10 02 . The convergence graph is shown in Figure 23.
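Because $x_1$ is an integer and $x_3$ is restricted to a catalog of wire diameters, a metaheuristic that searches a continuous space needs a repair step before evaluating a candidate. One common approach, sketched below (an illustration, not necessarily the authors' mechanism), rounds the integer variable and snaps the discrete variable to the nearest allowed value; only a subset of the diameter catalog is listed:

```python
# A subset of the allowed wire diameters for x3 in Eq. (31)
DIAMETERS = [0.009, 0.0095, 0.0104, 0.0118, 0.0128, 0.0132, 0.014,
             0.015, 0.0162, 0.0173, 0.018, 0.020, 0.023, 0.025]

def repair(x1_cont, x2_cont, x3_cont):
    """Map a continuous candidate onto the mixed search space:
    x1 -> nearest integer clipped to [1, 70], x2 -> clipped to [0.6, 3],
    x3 -> nearest value in the discrete diameter catalog."""
    x1 = min(70, max(1, round(x1_cont)))
    x2 = min(3.0, max(0.6, x2_cont))
    x3 = min(DIAMETERS, key=lambda d: abs(d - x3_cont))
    return x1, x2, x3
```

After repair, the candidate can be scored with the penalized fitness of Section 5.1, so the optimizer itself never needs to know about the integer or discrete structure.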

5.11. Himmelblau’s Function

The Himmelblau’s function problem includes five decision variables and six inequality constraints. The mathematical representation of the problem is shown in Equation (32). The best-known feasible objective value taken from Table 24 is 3.0665538672 × 10 + 04 .
$$\begin{aligned}
\text{Minimize: } & f(\bar{x}) = 5.3578547\, x_3^2 + 0.8356891\, x_1 x_5 + 37.293239\, x_1 - 40792.141 \\
\text{Subject to: } & g_1(\bar{x}) = -G_1 \leq 0, \quad g_2(\bar{x}) = G_1 - 92 \leq 0, \\
& g_3(\bar{x}) = 90 - G_2 \leq 0, \quad g_4(\bar{x}) = G_2 - 110 \leq 0, \\
& g_5(\bar{x}) = 20 - G_3 \leq 0, \quad g_6(\bar{x}) = G_3 - 25 \leq 0, \\
\text{Where: } & G_1 = 85.334407 + 0.0056858\, x_2 x_5 + 0.0006262\, x_1 x_4 - 0.0022053\, x_3 x_5, \\
& G_2 = 80.51249 + 0.0071317\, x_2 x_5 + 0.0029955\, x_1 x_2 + 0.0021813\, x_3^2, \\
& G_3 = 9.300961 + 0.0047026\, x_3 x_5 + 0.0012547\, x_1 x_3 + 0.0019085\, x_3 x_4, \\
\text{With bounds: } & 78 \leq x_1 \leq 102, \quad 33 \leq x_2 \leq 45, \quad 27 \leq x_3, x_4, x_5 \leq 45.
\end{aligned}$$
Table 34 compares MShOA, BWO, AOA, and SSOA, all of which provided effective and competitive solutions. Nonetheless, BWO, SSOA, and AOA did not satisfy one or more problem constraints. MShOA's difference from the best-known feasible objective function value was 1.9624794082 × 10 + 02 . The convergence graph is shown in Figure 24.
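The objective of Equation (32) depends only on $x_1$, $x_3$, and $x_5$, so it is easy to verify against the optimum widely reported for this benchmark; the solution vector below comes from the literature on the constrained Himmelblau problem, not from the paper's tables:

```python
def himmelblau(x1, x2, x3, x4, x5):
    # Constrained Himmelblau objective (Eq. (32))
    return 5.3578547*x3**2 + 0.8356891*x1*x5 + 37.293239*x1 - 40792.141

def G1(x1, x2, x3, x4, x5):
    # First response function; G1 <= 92 is the active constraint at the optimum
    return (85.334407 + 0.0056858*x2*x5
            + 0.0006262*x1*x4 - 0.0022053*x3*x5)

# Widely reported best feasible solution of the constrained problem
x_best = (78.0, 33.0, 29.995256, 45.0, 36.775813)
```

At `x_best` the objective evaluates to about $-30665.54$, matching the best-known value cited in Table 24, and $G_1$ sits exactly on its upper bound of 92.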

5.12. Optimal Power Flow

The optimal power flow (OPF) problem was initially treated as an extension of the conventional economic dispatch (ED) problem because both problems were solved simultaneously [73]. Over time, the OPF has evolved into a non-linear optimization problem that seeks the operating conditions of an electrical system while satisfying the power balance equations and the constraints of the transmission network [74,75]. The mathematical formulation of the OPF, in its canonical form, is given in Equation (33):
$$\begin{aligned}
\text{Minimize: } & J(x, u), \\
\text{Subject to: } & g(x, u) = 0, \quad h(x, u) \leq 0 \\
\text{Where: } & J(x, u) = \text{the objective function}, \\
& g(x, u) = \text{the set of equality constraints}, \\
& h(x, u) = \text{the inequality constraints}.
\end{aligned}$$
In Equation (34), the set of control variables u is defined, which includes P G (active power generation at PV buses), V G (voltage magnitudes at PV buses), Q C (VAR compensators), and T (transformer tap settings). The parameter N G is the number of generators, N C is the number of VAR compensators, and N T is the number of regulating transformers.
$$u^T = \left[ P_{G_2} \dots P_{G_{N_G}},\; V_{G_1} \dots V_{G_{N_G}},\; Q_{C_1} \dots Q_{C_{N_C}},\; T_1 \dots T_{N_T} \right]$$
In Equation (35), the set of state variables $x^T$ comprises the following:
  • P G 1 —active output power (generation) of the reference node.
  • V L —voltage at the load node.
  • Q G —reactive power at the output (generation) of all generators.
  • S l —loading (apparent power flow) of the transmission lines.
$$x^T = \left[ P_{G_1},\; V_{L_1} \dots V_{L_{N_L}},\; Q_{G_1} \dots Q_{G_{N_G}},\; S_{l_1} \dots S_{l_{n_l}} \right]$$
where the parameter N L is the number of load nodes and n l is the number of transmission lines.
The constraints of the active power equality ( P ) are defined in Equation (36), which indicates that the active power injected into node ( i ) is equal to the sum of the active power demanded at node ( i ) plus the active power losses in the transmission lines connected to the node:
$$P_{G_i} - P_{D_i} - V_i \sum_{j=1}^{N_B} V_j \left[ G_{ij} \cos(\theta_{ij}) + B_{ij} \sin(\theta_{ij}) \right] = 0$$
The constraints of the reactive power equality ( Q ) are defined in Equation (37), which indicates that the reactive power injected into node ( i ) is equal to the reactive power demanded at node ( i ) plus the reactive power associated with the flows in the transmission lines connected to the node:
$$Q_{G_i} - Q_{D_i} - V_i \sum_{j=1}^{N_B} V_j \left[ G_{ij} \sin(\theta_{ij}) - B_{ij} \cos(\theta_{ij}) \right] = 0$$
The real and reactive power equality constraints are defined in Equations (36) and (37), respectively, where
  • P G —the active power generation.
  • Q G —the reactive power generation.
  • P D —the active load demand.
  • Q D —the reactive load demand.
  • N B —the number of buses.
  • G i j —the conductance between buses i and j.
  • B i j —the susceptance between buses i and j.
  • Y i j = G i j + j B i j (the admittance matrix).
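Equations (36) and (37) are the standard polar-form power flow equations. The sketch below, using assumed toy data for a two-bus system, evaluates the injected powers with the $G_{ij}$/$B_{ij}$ trigonometric form and cross-checks them against the equivalent complex form $S_i = V_i \left( \sum_j Y_{ij} V_j \right)^*$:

```python
import cmath
import math

# Assumed toy data: one line with series impedance 0.02 + j0.06 p.u.
y = 1.0 / complex(0.02, 0.06)
Ybus = [[y, -y], [-y, y]]        # bus admittance matrix, Y_ij = G_ij + jB_ij
V = [1.05, 0.98]                 # voltage magnitudes (p.u.)
th = [0.0, -0.05]                # voltage angles (rad)

def injections(Ybus, V, th):
    """Injected powers per bus, matching Eqs. (36)-(37):
    P_i = V_i * sum_j V_j (G_ij cos th_ij + B_ij sin th_ij)
    Q_i = V_i * sum_j V_j (G_ij sin th_ij - B_ij cos th_ij)"""
    n = len(V)
    P, Q = [0.0] * n, [0.0] * n
    for i in range(n):
        for j in range(n):
            G, B = Ybus[i][j].real, Ybus[i][j].imag
            t = th[i] - th[j]
            P[i] += V[i] * V[j] * (G * math.cos(t) + B * math.sin(t))
            Q[i] += V[i] * V[j] * (G * math.sin(t) - B * math.cos(t))
    return P, Q

# Cross-check against the complex form S_i = V_i * conj(sum_j Y_ij V_j)
Vc = [V[i] * cmath.exp(1j * th[i]) for i in range(2)]
S = [Vc[i] * sum(Ybus[i][j] * Vc[j] for j in range(2)).conjugate()
     for i in range(2)]
P, Q = injections(Ybus, V, th)
```

Both forms agree to machine precision, which makes this a convenient unit test when implementing the equality constraints of the OPF: the residuals of Equations (36) and (37) are then $P_{G_i} - P_{D_i} - P_i$ and $Q_{G_i} - Q_{D_i} - Q_i$.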
The inequality constraints in an optimal power flow (OPF) problem [76] cover the operational limits of the system’s components. The generator constraints are represented by Equation (38), and these constraints are shown in Table 35:
$$\begin{aligned}
V_{G_i}^{\min} &\leq V_{G_i} \leq V_{G_i}^{\max}, \quad i = 1, \dots, N_G \\
P_{G_i}^{\min} &\leq P_{G_i} \leq P_{G_i}^{\max}, \quad i = 1, \dots, N_G \\
Q_{G_i}^{\min} &\leq Q_{G_i} \leq Q_{G_i}^{\max}, \quad i = 1, \dots, N_G
\end{aligned}$$
The transformer constraints are described in Equation (39) and in Table 36:
$$T_i^{\min} \leq T_i \leq T_i^{\max}, \quad i = 1, \dots, N_T$$
The shunt VAR compensator constraints are defined by Equation (40) and in Table 37:
$$Q_{C_i}^{\min} \leq Q_{C_i} \leq Q_{C_i}^{\max}, \quad i = 1, \dots, N_C$$
Finally, the security constraints are represented in Equation (41):
$$\begin{aligned}
V_{L_i}^{\min} &\leq V_{L_i} \leq V_{L_i}^{\max}, \quad i = 1, \dots, N_L \\
S_{l_i} &\leq S_{l_i}^{\max}, \quad i = 1, \dots, n_l
\end{aligned}$$
Following the previously described methodology in Section 5, the Mantis Shrimp Optimization Algorithm (MShOA), the Beluga Whale Optimization (BWO), the Arithmetic Optimization Algorithm (AOA), and the Synergistic Swarm Optimization Algorithm (SSOA) are applied to the optimal power flow problem for the IEEE-30 Bus test system; see Figure 25. Under these tests, three case studies were analyzed: the optimization of the total fuel cost for generation, the optimization of active power losses, and the optimization of reactive power losses.
Case 1 included the total fuel cost of the six generating units connected at buses 1, 2, 5, 8, 11, and 13. The objective function was defined as follows:
$$J = \sum_{i=1}^{N_G} f_i \quad (\$/\text{h})$$
where the quadratic function f i is described in [76].
The objective functions for Case 2 and Case 3, which involved optimizing the active power losses and optimizing the reactive power losses, respectively, are described in Equations (43) and (44):
$$J = \sum_{i=1}^{N_B} \Delta P_i = \sum_{i=1}^{N_B} P_{G_i} - \sum_{i=1}^{N_B} P_{D_i}$$
$$J = \sum_{i=1}^{N_B} \Delta Q_i = \sum_{i=1}^{N_B} Q_{G_i} - \sum_{i=1}^{N_B} Q_{D_i}$$
An analysis of the convergence graphs showed that MShOA outperformed the Beluga Whale Optimization (BWO), Arithmetic Optimization Algorithm (AOA), and Synergistic Swarm Optimization Algorithm (SSOA) in all three cases of this study, as illustrated in Figure 26, Figure 27 and Figure 28, respectively. The tests were conducted with a population size of 30 and 500 iterations. The minimum values obtained by each algorithm for each case are summarized in Table 38.

6. Conclusions

This paper discusses a novel metaheuristic optimization technique inspired by the behavior of the mantis shrimp, called the Mantis Shrimp Optimization Algorithm (MShOA). The shrimp's behavior comprises three strategies: foraging (food search), attacking (the mantis shrimp's strike), and burrow defense and sheltering. These strategies are triggered by the type of polarized light detected by the mantis shrimp.
The algorithm’s performance was evaluated on 20 testbench functions, on 10 real-world optimization problems taken from IEEE CEC2020, and on 3 study cases related to the optimal power flow problem: optimization of fuel cost in electric power generation, and optimization of active and reactive power losses. In addition, the algorithm’s performance was compared with fourteen algorithms selected from the scientific literature.
The statistical analysis results indicate that MShOA outperformed CFOA, ALO, DO, EMA, GWO, LCA, MAO, MPA, SSA, TSA, and WOA. In addition, it proved to be competitive with BWO, AOA, and SSOA.
The main results and conclusions of this study are as follows:
  • The mantis shrimp's biological strategies modeled in this study incorporate polarization principles from optics as new mechanisms for driving the optimization process.
  • MShOA has no algorithm-specific parameters to tune in the modeling of its strategies. The only parameters are those common to all bio-inspired algorithms, e.g., population size and number of iterations.
  • The Wilcoxon rank and Friedman tests were performed to analyze the 10 unimodal and 10 multimodal functions. The Wilcoxon test results indicate that MShOA outperformed all the other algorithms in unimodal functions. In addition, it ranked first in the Friedman test. Furthermore, in the Wilcoxon test, MShOA outperformed the following algorithms: SSA, EMA, ALO, MAO, and CFOA in multimodal functions. Additionally, it ranked second in the Friedman statistical tests. These results demonstrate that MShOA has a remarkable balance between local and global searches.
  • The set of 20 functions, unimodal and multimodal, was analyzed with Wilcoxon and Friedman statistical tests. The Wilcoxon test results indicate that MShOA outperformed the following optimization algorithms: CFOA, ALO, DO, EMA, GWO, LCA, MAO, MPA, SSA, TSA, and WOA. On the other hand, the Friedman test analysis shows that MShOA was ranked second.
  • MShOA demonstrated outstanding results in 80% of the real-world IEEE CEC2020 engineering problems: process synthesis problem, process synthesis and design problem, process flow sheeting problem, welded beam design, planetary gear train design optimization problem, and tension/compression spring design.
  • In the optimal power flow (OPF) problem study cases, MShOA obtained better solutions than BWO, AOA, and SSOA.
  • MShOA was able to effectively solve real-world problems with unknown search spaces.
The proposed algorithm demonstrated competitive performances. However, it has some limitations that open avenues for future work. Currently, it is designed for single-objective optimization and has not yet been extended to handle multi-objective scenarios. Additionally, while it performed well on the tested optimal power flow (OPF) cases, its generalization to more complex or large-scale OPF models remains to be fully explored. Future work will address these limitations by extending the algorithm to multi-objective optimization and evaluating its applicability to broader and more diverse OPF problem instances.

Author Contributions

Conceptualization, J.A.S.C., H.P.V. and A.F.P.D.; Methodology, H.P.V. and A.F.P.D.; Software, J.A.S.C.; Validation, H.P.V. and A.F.P.D.; Formal analysis, J.A.S.C., H.P.V. and A.F.P.D.; Investigation, J.A.S.C.; Resources, H.P.V.; Writing—original draft preparation, J.A.S.C.; Writing—review and editing, H.P.V. and A.F.P.D.; Visualization, J.A.S.C.; Supervision, H.P.V. and A.F.P.D.; Project administration, H.P.V. and A.F.P.D.; Funding acquisition, H.P.V. All authors have read and agreed to the published version of this manuscript.

Funding

This project was supported by the Instituto Politécnico Nacional (IPN) through grant SIP no. 20250569.

Data Availability Statement

The source code used to support the findings of this study has been deposited in the MathWorks repository at https://www.mathworks.com/matlabcentral/fileexchange/180937-mantis-shrimp-optimization-algorithm-mshoa, available since 30 April 2025.

Acknowledgments

The first author acknowledges support from SECIHTI to pursue his Ph.D. in advanced technology at the Instituto Politécnico Nacional (IPN)–CICATA Altamira.

Conflicts of Interest

The authors declared no potential conflicts of interest with respect to the research, authorship, funding, and/or publication of this article.

Appendix A

Table A1. Classification of testbench functions.
IDFunction NameUnimodalMultimodaln-DimensionalNon-SeparableConvexDifferentiableContinuousNon-ConvexNon-DifferentiableSeparableRandom
F1Brownx xxxx
F2Griewankx xx xx
F3Schwefel 2.20x xx xxx
F4Schwefel 2.21x x x x xx
F5Schwefel 2.22x x x x xx
F6Schwefel 2.23x x xxx x
F7Spherex x xxx x
F8Sum Squaresx x xxx x
F9Xin-She Yang N. 3x xx x x
F10Zakharovx x x x
F11Ackley xx xxx
F12Ackley N. 4 xxx x x
F13Periodic xx xxxx
F14Quartic xx xx xx
F15Rastrigin xx xxx x
F16Rosenbrock xxx xxx
F17Salomon xxx xxx
F18Xin-She Yang xx xxx
F19Xin-She Yang N. 2 x x x xx
F20Xin-She Yang N. 4 xxx xx
Table A2. Descriptions of the testbench functions.
ID | Function | Dim | Interval | $f_{\min}$
F1 | $f(x) = \sum_{i=1}^{n-1} \left[ (x_i^2)^{x_{i+1}^2 + 1} + (x_{i+1}^2)^{x_i^2 + 1} \right]$ | 30 | $[-1, 4]$ | 0
F2 | $f(x) = 1 + \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right)$ | 30 | $[-600, 600]$ | 0
F3 | $f(x) = \sum_{i=1}^{n} |x_i|$ | 30 | $[-100, 100]$ | 0
F4 | $f(x) = \max_{i=1,\dots,n} |x_i|$ | 30 | $[-100, 100]$ | 0
F5 | $f(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | $[-100, 100]$ | 0
F6 | $f(x) = \sum_{i=1}^{n} x_i^{10}$ | 30 | $[-10, 10]$ | 0
F7 | $f(x) = \sum_{i=1}^{n} x_i^2$ | 30 | $[-5.12, 5.12]$ | 0
F8 | $f(x) = \sum_{i=1}^{n} i x_i^2$ | 30 | $[-10, 10]$ | 0
F9 | $f(x) = \exp\left( -\sum_{i=1}^{n} (x_i/\beta)^{2m} \right) - 2 \exp\left( -\sum_{i=1}^{n} x_i^2 \right) \prod_{i=1}^{n} \cos^2 x_i$ | 30 | $[-2\pi, 2\pi]$, $m = 5$, $\beta = 15$ | $-1$
F10 | $f(x) = \sum_{i=1}^{n} x_i^2 + \left( \sum_{i=1}^{n} 0.5 i x_i \right)^2 + \left( \sum_{i=1}^{n} 0.5 i x_i \right)^4$ | 30 | $[-5, 10]$ | 0
F11 | $f(x) = -a \exp\left( -b \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos(c x_i) \right) + a + \exp(1)$ | 30 | $[-32, 32]$, $a = 20$, $b = 0.3$, $c = 2\pi$ | 0
F12 | $f(x) = \sum_{i=1}^{n-1} \left[ e^{-0.2} \sqrt{x_i^2 + x_{i+1}^2} + 3 \left( \cos 2x_i + \sin 2x_{i+1} \right) \right]$ | 2 | $[-35, 35]$ | $\approx -4.5901$
F13 | $f(x) = 1 + \sum_{i=1}^{n} \sin^2(x_i) - 0.1\, e^{-\sum_{i=1}^{n} x_i^2}$ | 30 | $[-10, 10]$ | 0.9
F14 | $f(x) = \sum_{i=1}^{n} i x_i^4 + \text{random}[0, 1)$ | 30 | $[-1.28, 1.28]$ | 0 + random noise
F15 | $f(x) = 10n + \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) \right]$ | 30 | $[-5.12, 5.12]$ | 0
F16 | $f(x) = \sum_{i=1}^{n-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + (1 - x_i)^2 \right]$ | 30 | $[-5, 10]$ | 0
F17 | $f(x) = 1 - \cos\left( 2\pi \sqrt{\sum_{i=1}^{n} x_i^2} \right) + 0.1 \sqrt{\sum_{i=1}^{n} x_i^2}$ | 30 | $[-100, 100]$ | 0
F18 | $f(x) = \sum_{i=1}^{n} \epsilon_i |x_i|^i$ | 30 | $[-5, 5]$, $\epsilon$ random | 0
F19 | $f(x) = \left( \sum_{i=1}^{n} |x_i| \right) \exp\left( -\sum_{i=1}^{n} \sin(x_i^2) \right)$ | 30 | $[-2\pi, 2\pi]$ | 0
F20 | $f(x) = \left( \sum_{i=1}^{n} \sin^2(x_i) - e^{-\sum_{i=1}^{n} x_i^2} \right) e^{-\sum_{i=1}^{n} \sin^2 \sqrt{|x_i|}}$ | 30 | $[-10, 10]$ | $-1$

References

  1. Sang-To, T.; Le-Minh, H.; Wahab, M.A.; Thanh, C.L. A new metaheuristic algorithm: Shrimp and Goby association search algorithm and its application for damage identification in large-scale and complex structures. Adv. Eng. Softw. 2023, 176, 103363. [Google Scholar] [CrossRef]
  2. Lodewijks, G.; Cao, Y.; Zhao, N.; Zhang, H. Reducing CO2 Emissions of an Airport Baggage Handling Transport System Using a Particle Swarm Optimization Algorithm. IEEE Access 2021, 9, 121894–121905. [Google Scholar] [CrossRef]
  3. Kumar, A. Chapter 5—Application of nature-inspired computing paradigms in optimal design of structural engineering problems—A review. In Nature-Inspired Computing Paradigms in Systems; Intelligent Data-Centric Systems; Mellal, M.A., Pecht, M.G., Eds.; Academic Press: Cambridge, MA, USA, 2021; pp. 63–74. [Google Scholar] [CrossRef]
  4. Sharma, S.; Saha, A.K.; Lohar, G. Optimization of weight and cost of cantilever retaining wall by a hybrid metaheuristic algorithm. Eng. Comput. 2022, 38, 2897–2923. [Google Scholar] [CrossRef]
  5. Peraza-Vázquez, H.; Peña-Delgado, A.; Ranjan, P.; Barde, C.; Choubey, A.; Morales-Cepeda, A.B. A bio-inspired method for mathematical optimization inspired by arachnida salticidade. Mathematics 2022, 10, 102. [Google Scholar] [CrossRef]
  6. Peña-Delgado, A.F.; Peraza-Vázquez, H.; Almazán-Covarrubias, J.H.; Cruz, N.T.; García-Vite, P.M.; Morales-Cepeda, A.B.; Ramirez-Arredondo, J.M. A Novel Bio-Inspired Algorithm Applied to Selective Harmonic Elimination in a Three-Phase Eleven-Level Inverter. Math. Probl. Eng. 2020, 8856040. [Google Scholar] [CrossRef]
  7. Tzanetos, A.; Dounias, G. Nature inspired optimization algorithms or simply variations of metaheuristics? Artif. Intell. Rev. 2021, 54, 1841–1862. [Google Scholar] [CrossRef]
  8. Joyce, T.; Herrmann, J.M. A Review of no free lunch theorems, and their implications for metaheuristic optimisation. In Nature-Inspired Algorithms and Applied Optimization; Yang, X.S., Ed.; Springer International Publishing: Cham, Switzerland, 2018; pp. 27–51. [Google Scholar] [CrossRef]
  9. Almazán-Covarrubias, J.H.; Peraza-Vázquez, H.; Peña-Delgado, A.F.; García-Vite, P.M. An Improved Dingo Optimization Algorithm Applied to SHE-PWM Modulation Strategy. Appl. Sci. 2022, 12, 992. [Google Scholar] [CrossRef]
  10. Abualigah, L.; Hanandeh, E.S.; Zitar, R.A.; Thanh, C.L.; Khatir, S.; Gandomi, A.H. Revolutionizing sustainable supply chain management: A review of metaheuristics. Eng. Appl. Artif. Intell. 2023, 126, 106839. [Google Scholar] [CrossRef]
  11. Beyer, H.G.; Beyer, H.G.; Schwefel, H.P.; Schwefel, H.P. Evolution strategies—A comprehensive introduction. Nat. Comput. 2002, 1, 3–52. [Google Scholar] [CrossRef]
  12. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102. [Google Scholar] [CrossRef]
  13. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  14. Holland, J.H. Genetic algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  15. Koza, J.R. Genetic programming as a means for programming computers by natural selection. Stat. Comput. 1994, 4, 87–112. [Google Scholar] [CrossRef]
  16. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  17. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  18. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  19. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  20. Mirjalili, S. Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput. Appl. 2016, 27, 1053–1073. [Google Scholar] [CrossRef]
  21. Mirjalili, S. The ant lion optimizer. Adv. Eng. Softw. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  22. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  23. Kaveh, A.; Farhoudi, N. A new optimization method: Dolphin echolocation. Adv. Eng. Softw. 2013, 59, 53–70. [Google Scholar] [CrossRef]
  24. Gandomi, A.H.; Yang, X.S.; Alavi, A.H.; Talatahari, S. Bat algorithm for constrained optimization tasks. Neural Comput. Appl. 2013, 22, 1239–1255. [Google Scholar] [CrossRef]
  25. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
  26. Yang, X.S. Firefly algorithms for multimodal optimization. In Stochastic Algorithms: Foundations and Applications; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5792. [Google Scholar] [CrossRef]
  27. Dorigo, M.; Caro, G.D. Ant Colony Optimization: A New Meta-Heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation (CEC 1999), Washington, DC, USA, 6–9 July 1999. [Google Scholar] [CrossRef]
  28. Kennedy, J.; Eberhart, R. Particle Swarm Optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995. [Google Scholar] [CrossRef]
  29. Kaveh, A.; Dadras, A. A novel meta-heuristic optimization algorithm: Thermal exchange optimization. Adv. Eng. Softw. 2017, 110, 69–84. [Google Scholar] [CrossRef]
  30. Kaveh, A.; Bakhshpoori, T. Water Evaporation Optimization: A novel physically inspired optimization algorithm. Comput. Struct. 2016, 167, 69–85. [Google Scholar] [CrossRef]
  31. Kashan, A.H. A new metaheuristic for optimization: Optics inspired optimization (OIO). Comput. Oper. Res. 2015, 55, 99–125. [Google Scholar] [CrossRef]
  32. Rashedi, E.; Nezamabadi-pour, H.; Saryazdi, S. GSA: A Gravitational Search Algorithm. Inf. Sci. 2009, 179, 2232–2248. [Google Scholar] [CrossRef]
  33. Formato, R.A. Central force optimization: A new metaheuristic with applications in applied electromagnetics. Prog. Electromagn. Res. 2007, 77, 425–491. [Google Scholar] [CrossRef]
  34. Erol, O.K.; Eksin, I. A new optimization method: Big Bang-Big Crunch. Adv. Eng. Softw. 2006, 37, 106–111. [Google Scholar] [CrossRef]
  35. Zhu, A.; Gu, Z.; Hu, C.; Niu, J.; Xu, C.; Li, Z. Political optimizer with interpolation strategy for global optimization. PLoS ONE 2021, 16, e0251204. [Google Scholar] [CrossRef]
  36. Fadakar, E.; Ebrahimi, M. A new metaheuristic football game inspired algorithm. In Proceedings of the 1st Conference on Swarm Intelligence and Evolutionary Computation (CSIEC 2016), Bam, Iran, 9–11 March 2016. [Google Scholar] [CrossRef]
  37. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems. CAD—Comput.-Aided Des. 2011, 43, 303–315. [Google Scholar] [CrossRef]
  38. Atashpaz-Gargari, E.; Lucas, C. Imperialist Competitive Algorithm: An Algorithm for Optimization Inspired by Imperialistic Competition. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), Singapore, 25–28 September 2007. [Google Scholar] [CrossRef]
  39. Peraza-Vázquez, H.; Peña-Delgado, A.; Merino-Treviño, M.; Morales-Cepeda, A.B.; Sinha, N. A novel metaheuristic inspired by horned lizard defense tactics. Artif. Intell. Rev. 2024, 57, 1–65. [Google Scholar] [CrossRef]
  40. Harifi, S.; Mohammadzadeh, J.; Khalilian, M.; Ebrahimnejad, S. Giza Pyramids Construction: An ancient-inspired metaheuristic algorithm for optimization. Evol. Intell. 2021, 14, 1743–1761. [Google Scholar] [CrossRef]
41. Patel, R.N.; Khil, V.; Abdurahmonova, L.; Driscoll, H.; Patel, S.; Pettyjohn-Robin, O.; Shah, A.; Goldwasser, T.; Sparklin, B.; Cronin, T.W. Mantis shrimp identify an object by its shape rather than its color during visual recognition. J. Exp. Biol. 2021, 224, jeb242256. [Google Scholar] [CrossRef] [PubMed]
  42. Patel, R.N.; Cronin, T.W. Landmark navigation in a mantis shrimp. Proc. R. Soc. B 2020, 287, 20201898. [Google Scholar] [CrossRef] [PubMed]
  43. Streets, A.; England, H.; Marshall, J. Colour vision in stomatopod crustaceans: More questions than answers. J. Exp. Biol. 2022, 225, jeb243699. [Google Scholar] [CrossRef]
  44. Thoen, H.H.; How, M.J.; Chiou, T.H.; Marshall, J. A different form of color vision in mantis shrimp. Science 2014, 343, 411–413. [Google Scholar] [CrossRef]
  45. Chiou, T.H.; Kleinlogel, S.; Cronin, T.; Caldwell, R.; Loeffler, B.; Siddiqi, A.; Goldizen, A.; Marshall, J. Circular polarization vision in a stomatopod crustacean. Curr. Biol. 2008, 18, 429–434. [Google Scholar] [CrossRef]
  46. Zhong, B.; Wang, X.; Gan, X.; Yang, T.; Gao, J. A biomimetic model of adaptive contrast vision enhancement from mantis shrimp. Sensors 2020, 20, 4588. [Google Scholar] [CrossRef]
  47. Cronin, T.W.; Chiou, T.H.; Caldwell, R.L.; Roberts, N.; Marshall, J. Polarization signals in mantis shrimps. In Proceedings of the Polarization Science and Remote Sensing IV, San Diego, CA, USA, 3–4 August 2009; Volume 7461. [Google Scholar] [CrossRef]
  48. Wang, T.; Wang, S.; Gao, B.; Li, C.; Yu, W. Design of Mantis-Shrimp-Inspired Multifunctional Imaging Sensors with Simultaneous Spectrum and Polarization Detection Capability at a Wide Waveband. Sensors 2024, 24, 1689. [Google Scholar] [CrossRef]
  49. Daly, I.M.; How, M.J.; Partridge, J.C.; Temple, S.E.; Marshall, N.J.; Cronin, T.W.; Roberts, N.W. Dynamic polarization vision in mantis shrimps. Nat. Commun. 2016, 7, 1–9. [Google Scholar] [CrossRef] [PubMed]
  50. Gagnon, Y.L.; Templin, R.M.; How, M.J.; Marshall, N.J. Circularly polarized light as a communication signal in mantis shrimps. Curr. Biol. 2015, 25, 3074–3078. [Google Scholar] [CrossRef] [PubMed]
  51. Alavi, S. Statistical Mechanics: Theory and Molecular Simulation. By Mark E. Tuckerman. Angew. Chem. Int. Ed. 2011, 50, 12138–12139. [Google Scholar] [CrossRef]
  52. Patek, S.N.; Caldwell, R.L. Extreme impact and cavitation forces of a biological hammer: Strike forces of the peacock mantis shrimp Odontodactylus scyllarus. J. Exp. Biol. 2005, 208, 3655–3664. [Google Scholar] [CrossRef]
  53. Patek, S.N.; Nowroozi, B.N.; Baio, J.E.; Caldwell, R.L.; Summers, A.P. Linkage mechanics and power amplification of the mantis shrimp’s strike. J. Exp. Biol. 2007, 210, 3677–3688. [Google Scholar] [CrossRef]
54. DeVries, M.S.; Murphy, E.A.; Patek, S.N. Strike mechanics of an ambush predator: The spearing mantis shrimp. J. Exp. Biol. 2012, 215, 4374–4384. [Google Scholar] [CrossRef]
  55. Cox, S.M.; Schmidt, D.; Modarres-Sadeghi, Y.; Patek, S.N. A physical model of the extreme mantis shrimp strike: Kinematics and cavitation of Ninjabot. Bioinspiration Biomimetics 2014, 9, 016014. [Google Scholar] [CrossRef] [PubMed]
56. Caldwell, R.L.; Dingle, J. The Influence of Size Differential on Agonistic Encounters in the Mantis Shrimp, Gonodactylus viridis. Behaviour 2008, 69, 255–264. [Google Scholar] [CrossRef]
  57. Caldwell, R.L. Cavity occupation and defensive behaviour in the stomatopod Gonodactylus festai: Evidence for chemically mediated individual recognition. Anim. Behav. 1979, 27, 194–201. [Google Scholar] [CrossRef]
  58. Caldwell, R.L.; Dingle, H. Ecology and evolution of agonistic behavior in stomatopods. Die Naturwissenschaften 1975, 62, 214–222. [Google Scholar] [CrossRef]
59. Berzins, I.K.; Caldwell, R.L. The effect of injury on the agonistic behavior of the stomatopod Gonodactylus bredini (Manning). Mar. Behav. Physiol. 1983, 10, 83–96. [Google Scholar] [CrossRef]
  60. Steger, R.; Caldwell, R.L. Intraspecific deception by bluffing: A defense strategy of newly molted stomatopods (Arthropoda: Crustacea). Science 1983, 221, 558–560. [Google Scholar] [CrossRef] [PubMed]
  61. Caldwell, R.L. The Deceptive Use of Reputation by Stomatopods; State University of New York Press: New York, NY, USA, 1986; pp. 129–145. [Google Scholar]
  62. Abualigah, L.; Diabat, A.; Mirjalili, S.; Elaziz, M.A.; Gandomi, A.H. The Arithmetic Optimization Algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609. [Google Scholar] [CrossRef]
  63. Zhong, C.; Li, G.; Meng, Z. Beluga whale optimization: A novel nature-inspired metaheuristic algorithm. Knowl.-Based Syst. 2022, 251, 109215. [Google Scholar] [CrossRef]
  64. Zhao, S.; Zhang, T.; Ma, S.; Chen, M. Dandelion Optimizer: A nature-inspired metaheuristic algorithm for engineering applications. Eng. Appl. Artif. Intell. 2022, 114, 105075. [Google Scholar] [CrossRef]
  65. Sulaiman, M.H.; Mustaffa, Z.; Saari, M.M.; Daniyal, H.; Mirjalili, S. Evolutionary mating algorithm. Neural Comput. Appl. 2023, 35, 487–516. [Google Scholar] [CrossRef]
  66. Houssein, E.H.; Oliva, D.; Samee, N.A.; Mahmoud, N.F.; Emam, M.M. Liver Cancer Algorithm: A novel bio-inspired optimizer. Comput. Biol. Med. 2023, 165, 107389. [Google Scholar] [CrossRef] [PubMed]
  67. Villuendas-Rey, Y.; Velázquez-Rodríguez, J.L.; Alanis-Tamez, M.D.; Moreno-Ibarra, M.A.; Yáñez-Márquez, C. Mexican axolotl optimization: A novel bioinspired heuristic. Mathematics 2021, 9, 781. [Google Scholar] [CrossRef]
  68. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  69. Alzoubi, S.; Abualigah, L.; Sharaf, M.; Daoud, M.S.; Khodadadi, N.; Jia, H. Synergistic Swarm Optimization Algorithm. CMES Comput. Model. Eng. Sci. 2024, 139, 2557–2604. [Google Scholar] [CrossRef]
  70. Kaur, S.; Awasthi, L.K.; Sangal, A.L.; Dhiman, G. Tunicate Swarm Algorithm: A new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. Artif. Intell. 2020, 90, 103541. [Google Scholar] [CrossRef]
  71. Jia, H.; Wen, Q.; Wang, Y.; Mirjalili, S. Catch fish optimization algorithm: A new human behavior algorithm for solving clustering problems. Clust. Comput. 2024, 27, 13295–13332. [Google Scholar] [CrossRef]
  72. Kumar, A.; Wu, G.; Ali, M.Z.; Mallipeddi, R.; Suganthan, P.N.; Das, S. A test-suite of non-convex constrained optimization problems from the real-world and some baseline results. Swarm Evol. Comput. 2020, 56, 100693. [Google Scholar] [CrossRef]
73. Lin, J.; Magnago, F.H. Optimal power flow. In Electricity Markets: Theories and Applications; Wiley: Hoboken, NJ, USA, 2017; pp. 147–171. [Google Scholar] [CrossRef]
  74. Nucci, C.A.; Borghetti, A.; Napolitano, F.; Tossani, F. Basics of Power systems analysis. In Springer Handbook of Power Systems; Papailiou, K.O., Ed.; Springer: Singapore, 2021; pp. 273–366. [Google Scholar] [CrossRef]
75. Huneault, M.; Galiana, F.D. A Survey of the Optimal Power Flow Literature. IEEE Trans. Power Syst. 1991, 6, 762–770. [Google Scholar] [CrossRef]
  76. Ela, A.A.A.E.; Abido, M.A.; Spea, S.R. Optimal power flow using differential evolution algorithm. Electr. Power Syst. Res. 2010, 80, 878–885. [Google Scholar] [CrossRef]
Figure 1. Metaheuristic classification.
Figure 2. Schematic representation of a metaheuristic algorithm.
Figure 3. (a) The shrimp inside its shelter. Photograph (published under a CC BY license). Author unknown. (b) The shrimp observing in different directions simultaneously. Photograph (published under a CC BY-SA license). Author unknown. (c) The purple-spotted mantis shrimp (Gonodactylus smithii). Photograph by Roy L. Caldwell (published under a CC BY-SA license). (d) Strike of a mantis shrimp. Photograph (published under a CC BY license). Author unknown.
Figure 4. Shrimp strategies based on detected polarized light.
Figure 5. (a) Schematic representation of the initial population, where dim stands for the vector size; (b) Polarization Type Identifier (PTI) vector representation; (c) strategy activated by the type of polarized light detected.
Figure 6. Schematic diagram of the Polarization Type Identifier (PTI) vector update process.
Figure 7. Flowchart of Algorithm 1's Polarization Type Identifier (PTI) vector update process.
Figure 8. Foraging strategy.
Figure 9. Mantis shrimp's attack strike.
Figure 10. Burrow, defense, or shelter strategy.
Figure 11. Convergence performances of unimodal function F3 and multimodal function F14.
Figure 12. MShOA flowchart.
Figure 13. Convergence curves of 10 unimodal functions.
Figure 14. Convergence curves of 10 multimodal functions.
Figure 15. Convergence graph of the process synthesis problem.
Figure 16. Convergence graph of the process synthesis and design problem.
Figure 17. Convergence graph of the process flow sheeting problem.
Figure 18. Convergence graph of the weight minimization of a speed reducer.
Figure 19. Convergence graph of the tension/compression spring design (case 1).
Figure 20. Convergence graph of the welded beam design.
Figure 21. Convergence graph of the multiple disk clutch brake design problem.
Figure 22. Convergence graph of the planetary gear train design optimization problem.
Figure 23. Convergence graph of the tension/compression spring design (case 2).
Figure 24. Convergence graph of Himmelblau's function.
Figure 25. IEEE 30-bus system.
Figure 26. Case 1: minimization of fuel cost.
Figure 27. Case 2: minimization of active power transmission losses.
Figure 28. Case 3: minimization of reactive power transmission losses.
Table 1. Behavioral strategies of the mantis shrimp based on detected polarized light.
Type of Light Detected by the Mantis Shrimp | Action to Take
Vertical linearly polarized light | Foraging
Horizontal linearly polarized light | Attack
Circularly polarized light | Burrow, defense, or shelter
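The mapping in Table 1 is effectively a three-way dispatch on the detected light type. A minimal sketch is shown below; the strategy bodies are placeholders only (the paper's actual update rules are Equations (12), (14), and (15)), and all names are illustrative:

```python
from enum import Enum, auto

class Polarization(Enum):
    """Types of polarized light the mantis shrimp can detect (Table 1)."""
    LINEAR_VERTICAL = auto()
    LINEAR_HORIZONTAL = auto()
    CIRCULAR = auto()

# Placeholder strategy stubs; the real update rules are Equations (12), (14), (15).
def foraging(position):
    return position  # random navigation foraging

def attack(position):
    return position  # strike dynamics in prey engagement

def burrow(position):
    return position  # defense or retreat to the burrow

STRATEGY = {
    Polarization.LINEAR_VERTICAL: foraging,
    Polarization.LINEAR_HORIZONTAL: attack,
    Polarization.CIRCULAR: burrow,
}

def apply_strategy(light, position):
    """Select a search agent's update rule from the detected light type."""
    return STRATEGY[light](position)
```

In MShOA, the Polarization Type Identifier (PTI) vector plays the role of `light` here, deciding which of the three strategies updates each search agent.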
Table 2. Comparative performances of MShOA with k = 0.1 to 0.9 on benchmark functions.
F | f_min | k = 0.1 (Best; Ave; Std) | k = 0.2 (Best; Ave; Std)
F1 | 0 | 0; 0; 0 | 0; 0; 0
F2 | 0 | 0; 0; 0 | 0; 0; 0
F3 | 0 | 0; 3.84 × 10^−279; 0 | 0; 1.08 × 10^−283; 0
F4 | 0 | 0; 2.15 × 10^−290; 0 | 0; 1.05 × 10^−289; 0
F5 | 0 | 0; 8.90 × 10^−285; 0 | 0; 5.42 × 10^−287; 0
F6 | 0 | 0; 0; 0 | 0; 0; 0
F7 | 0 | 0; 0; 0 | 0; 0; 0
F8 | 0 | 0; 0; 0 | 0; 0; 0
F9 | −1 | −1.00; −1.00; 0 | −1.00; −1.00; 0
F10 | 0 | 0; 0; 0 | 0; 0; 0
F11 | 0 | 4.44 × 10^−16; 4.44 × 10^−16; 0 | 4.44 × 10^−16; 4.44 × 10^−16; 0
F12 | −4.59 | −4.59; −2.89; 1.65 | −4.59; −2.85; 1.60
F13 | 0.9 | 9.00 × 10^−01; 9.00 × 10^−01; 4.52 × 10^−16 | 9.00 × 10^−01; 9.00 × 10^−01; 4.52 × 10^−16
F14 | 0 | 4.22 × 10^−06; 1.72 × 10^−04; 1.42 × 10^−04 | 1.63 × 10^−05; 2.36 × 10^−04; 2.02 × 10^−04
F15 | 0 | 0; 0; 0 | 0; 0; 0
F16 | 0 | 2.90 × 10^+01; 2.90 × 10^+01; 8.43 × 10^−03 | 2.89 × 10^+01; 2.90 × 10^+01; 2.63 × 10^−02
F17 | 0 | 0; 0; 0 | 0; 0; 0
F18 | 0 | 1.02 × 10^−161; 3.43 × 10^−33; 1.88 × 10^−32 | 1.59 × 10^−169; 4.68 × 10^−13; 1.96 × 10^−12
F19 | 0 | 7.51 × 10^−08; 1.17 × 10^−05; 2.59 × 10^−05 | 8.19 × 10^−08; 6.22 × 10^−06; 1.20 × 10^−05
F20 | −1 | −1.00; −1.00; 0 | −1.00; −1.00; 0
Table 3. Continuation of comparative performances of MShOA with k = 0.1 to 0.9 on benchmark functions.
F | f_min | k = 0.3 (Best; Ave; Std) | k = 0.4 (Best; Ave; Std)
F1 | 0 | 0; 0; 0 | 0; 0; 0
F2 | 0 | 0; 0; 0 | 0; 0; 0
F3 | 0 | 0; 1.99 × 10^−283; 0 | 0; 2.44 × 10^−281; 0
F4 | 0 | 0; 3.83 × 10^−285; 0 | 0; 1.92 × 10^−286; 0
F5 | 0 | 0; 5.36 × 10^−285; 0 | 0; 9.14 × 10^−287; 0
F6 | 0 | 0; 0; 0 | 0; 0; 0
F7 | 0 | 0; 0; 0 | 0; 0; 0
F8 | 0 | 0; 0; 0 | 0; 0; 0
F9 | −1 | −1.00; −1.00; 0 | −1.00; −1.00; 0
F10 | 0 | 0; 0; 0 | 0; 0; 0
F11 | 0 | 4.44 × 10^−16; 4.44 × 10^−16; 0 | 4.44 × 10^−16; 4.44 × 10^−16; 0
F12 | −4.59 | −4.59; −3.39; 1.22 | −4.59; −2.72; 1.61
F13 | 0.9 | 9.00 × 10^−01; 9.00 × 10^−01; 4.52 × 10^−16 | 9.00 × 10^−01; 9.00 × 10^−01; 4.52 × 10^−16
F14 | 0 | 7.37 × 10^−06; 1.23 × 10^−04; 1.33 × 10^−04 | 7.17 × 10^−06; 1.55 × 10^−04; 1.61 × 10^−04
F15 | 0 | 0; 0; 0 | 0; 0; 0
F16 | 0 | 2.90 × 10^+01; 2.90 × 10^+01; 1.16 × 10^−02 | 2.89 × 10^+01; 2.90 × 10^+01; 1.77 × 10^−02
F17 | 0 | 0; 0; 0 | 0; 0; 0
F18 | 0 | 3.41 × 10^−170; 2.45 × 10^−24; 1.31 × 10^−23 | 5.08 × 10^−174; 4.79 × 10^−34; 2.62 × 10^−33
F19 | 0 | 5.96 × 10^−08; 7.01 × 10^−06; 8.81 × 10^−06 | 2.10 × 10^−238; 1.99 × 10^−05; 5.03 × 10^−05
F20 | −1 | −1.00; −1.00; 0 | −1.00; −1.00; 0
Table 4. Continuation of comparative performances of MShOA with k = 0.1 to 0.9 on benchmark functions.
F | f_min | k = 0.5 (Best; Ave; Std) | k = 0.6 (Best; Ave; Std)
F1 | 0 | 0; 0; 0 | 0; 0; 0
F2 | 0 | 0; 0; 0 | 0; 0; 0
F3 | 0 | 0; 5.68 × 10^−286; 0 | 0; 6.66 × 10^−284; 0
F4 | 0 | 0; 1.35 × 10^−286; 0 | 0; 5.04 × 10^−285; 0
F5 | 0 | 0; 7.30 × 10^−281; 0 | 0; 1.52 × 10^−285; 0
F6 | 0 | 0; 0; 0 | 0; 0; 0
F7 | 0 | 0; 0; 0 | 0; 0; 0
F8 | 0 | 0; 0; 0 | 0; 0; 0
F9 | −1 | −1.00; −1.00; 0 | −1.00; −1.00; 0
F10 | 0 | 0; 0; 0 | 0; 0; 0
F11 | 0 | 4.44 × 10^−16; 4.44 × 10^−16; 0 | 4.44 × 10^−16; 4.44 × 10^−16; 0
F12 | −4.59 | −4.59; −2.48; 1.61 | −4.59; −3.18; 1.49
F13 | 0.9 | 9.00 × 10^−01; 9.00 × 10^−01; 4.52 × 10^−16 | 9.00 × 10^−01; 9.00 × 10^−01; 4.52 × 10^−16
F14 | 0 | 4.95 × 10^−06; 1.87 × 10^−04; 1.89 × 10^−04 | 1.09 × 10^−06; 1.69 × 10^−04; 1.37 × 10^−04
F15 | 0 | 0; 0; 0 | 0; 0; 0
F16 | 0 | 2.89 × 10^+01; 2.90 × 10^+01; 2.02 × 10^−02 | 2.89 × 10^+01; 2.90 × 10^+01; 1.69 × 10^−02
F17 | 0 | 0; 0; 0 | 0; 0; 0
F18 | 0 | 2.35 × 10^−157; 1.69 × 10^−37; 9.26 × 10^−37 | 9.64 × 10^−158; 1.53 × 10^−07; 8.35 × 10^−07
F19 | 0 | 1.87 × 10^−07; 9.20 × 10^−06; 1.16 × 10^−05 | 3.15 × 10^−07; 1.85 × 10^−05; 2.58 × 10^−05
F20 | −1 | −1.00; −1.00; 0 | −1.00; −1.00; 0
Table 5. Continuation of comparative performances of MShOA with k = 0.1 to 0.9 on benchmark functions.
F | f_min | k = 0.7 (Best; Ave; Std) | k = 0.8 (Best; Ave; Std)
F1 | 0 | 0; 0; 0 | 0; 0; 0
F2 | 0 | 0; 0; 0 | 0; 0; 0
F3 | 0 | 0; 9.43 × 10^−284; 0 | 0; 5.55 × 10^−290; 0
F4 | 0 | 0; 1.55 × 10^−279; 0 | 0; 6.40 × 10^−286; 0
F5 | 0 | 0; 9.67 × 10^−291; 0 | 0; 5.62 × 10^−293; 0
F6 | 0 | 0; 0; 0 | 0; 0; 0
F7 | 0 | 0; 0; 0 | 0; 0; 0
F8 | 0 | 0; 0; 0 | 0; 0; 0
F9 | −1 | −1.00; −1.00; 0 | −1.00; −1.00; 0
F10 | 0 | 0; 0; 0 | 0; 0; 0
F11 | 0 | 4.44 × 10^−16; 4.44 × 10^−16; 0 | 4.44 × 10^−16; 4.44 × 10^−16; 0
F12 | −4.59 | −4.59; −3.19; 1.28 | −4.58; −2.91; 1.47
F13 | 0.9 | 9.00 × 10^−01; 9.00 × 10^−01; 4.52 × 10^−16 | 9.00 × 10^−01; 9.00 × 10^−01; 4.52 × 10^−16
F14 | 0 | 5.75 × 10^−06; 1.78 × 10^−04; 1.76 × 10^−04 | 5.16 × 10^−06; 1.34 × 10^−04; 1.54 × 10^−04
F15 | 0 | 0; 0; 0 | 0; 0; 0
F16 | 0 | 2.89 × 10^+01; 2.90 × 10^+01; 1.80 × 10^−02 | 2.90 × 10^+01; 2.90 × 10^+01; 1.19 × 10^−02
F17 | 0 | 0; 0; 0 | 0; 0; 0
F18 | 0 | 8.57 × 10^−166; 1.04 × 10^−23; 5.67 × 10^−23 | 4.63 × 10^−152; 4.56 × 10^−11; 2.50 × 10^−10
F19 | 0 | 3.55 × 10^−07; 2.48 × 10^−05; 5.23 × 10^−05 | 9.23 × 10^−08; 1.00 × 10^−05; 1.41 × 10^−05
F20 | −1 | −1.00; −1.00; 0 | −1.00; −1.00; 0
Table 6. Continuation of comparative performances of MShOA with k = 0.1 to 0.9 on benchmark functions.
F | f_min | k = 0.9 (Best; Ave; Std)
F1 | 0 | 0; 0; 0
F2 | 0 | 0; 0; 0
F3 | 0 | 0; 8.01 × 10^−289; 0
F4 | 0 | 0; 5.30 × 10^−291; 0
F5 | 0 | 0; 7.29 × 10^−290; 0
F6 | 0 | 0; 0; 0
F7 | 0 | 0; 0; 0
F8 | 0 | 0; 0; 0
F9 | −1 | −1.00; −1.00; 0
F10 | 0 | 0; 0; 0
F11 | 0 | 4.44 × 10^−16; 4.44 × 10^−16; 0
F12 | −4.59 | −4.59; −2.77; 1.66
F13 | 0.9 | 9.00 × 10^−01; 9.00 × 10^−01; 4.52 × 10^−16
F14 | 0 | 1.71 × 10^−06; 1.51 × 10^−04; 1.31 × 10^−04
F15 | 0 | 0; 0; 0
F16 | 0 | 2.90 × 10^+01; 2.90 × 10^+01; 1.11 × 10^−02
F17 | 0 | 0; 0; 0
F18 | 0 | 2.01 × 10^−165; 3.79 × 10^−14; 2.07 × 10^−13
F19 | 0 | 4.03 × 10^−08; 6.07 × 10^−06; 7.16 × 10^−06
F20 | −1 | −1.00; −1.00; 0
Table 7. Wilcoxon signed-rank test comparing different values of the parameter k in MShOA over 10 unimodal and 10 multimodal functions at a 5% significance level.
Comparison | (+/=/−) | p-Value
k = 0.3 vs. k = 0.1 | (5/13/2) | 1.76 × 10^−01
k = 0.3 vs. k = 0.2 | (3/13/4) | 6.12 × 10^−01
k = 0.3 vs. k = 0.4 | (4/13/3) | 2.37 × 10^−01
k = 0.3 vs. k = 0.5 | (4/13/3) | 2.37 × 10^−01
k = 0.3 vs. k = 0.6 | (5/13/2) | 1.28 × 10^−01
k = 0.3 vs. k = 0.7 | (5/13/2) | 6.30 × 10^−02
k = 0.3 vs. k = 0.8 | (4/13/3) | 1.76 × 10^−01
k = 0.3 vs. k = 0.9 | (3/13/4) | 6.12 × 10^−01
Table 8. Computational complexity of the MShOA algorithm.
Instruction | Big-O Notation | Observation
Initial population | O(n_MS · dim) | Executed once; not inside the main loop.
Polarization Type Identifier | O(dim) | Executed once; not inside the main loop.
Strategy 1: foraging (Equation (12)) | O(MaxIter) | A sequence of constant-time operations, independent of the input size; repeated once per iteration of the main loop.
Strategy 2: attack (Equation (14)) | O(MaxIter) | A sequence of constant-time operations, independent of the input size; repeated once per iteration of the main loop.
Strategy 3: burrow, defense, or shelter (Equation (15)) | O(MaxIter) | A sequence of constant-time operations, independent of the input size; repeated once per iteration of the main loop.
Update Polarization Type Identifier (PTI) by Algorithm 1 | O(MaxIter) | Algorithm 1 is a sequence of constant-time operations, repeated once per iteration of the main loop.
Update population | O(MaxIter · n_MS) | Repeated once per iteration of the main loop.
Update fitness | O(MaxIter · n_MS · dim · f) | Repeated once per iteration of the main loop; f denotes the cost of one fitness evaluation.
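Summing the terms in Table 8, the fitness update dominates, so the overall cost is O(MaxIter · n_MS · dim · f). The skeleton below shows where each term arises. It is a structural sketch only, not the published algorithm: the three strategy equations and the PTI update are replaced by a single placeholder greedy move toward the current best, and all parameter names are illustrative:

```python
import random

def mshoa_skeleton(fitness, dim, n_ms=30, max_iter=200, lb=-1.0, ub=1.0):
    """Structural cost sketch of an MShOA-style loop (placeholder moves only)."""
    # Initial population: O(n_ms * dim)
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_ms)]
    fit = [fitness(x) for x in pop]
    best = min(range(n_ms), key=lambda i: fit[i])
    for _ in range(max_iter):                      # main loop: MaxIter iterations
        # The PTI update (Algorithm 1) would go here: O(1) per iteration.
        for i in range(n_ms):                      # population update: O(n_ms)
            cand = [c + 0.1 * random.uniform(-1.0, 1.0) * (pop[best][j] - c)
                    for j, c in enumerate(pop[i])]  # placeholder, not Eqs. (12)-(15)
            cand = [min(max(c, lb), ub) for c in cand]  # keep within bounds
            f_cand = fitness(cand)                 # fitness: O(n_ms * dim * f) in total
            if f_cand < fit[i]:                    # greedy replacement
                pop[i], fit[i] = cand, f_cand
        best = min(range(n_ms), key=lambda i: fit[i])
    return pop[best], fit[best]
```

Doubling either n_MS or MaxIter doubles the number of fitness evaluations, consistent with the dominant row of Table 8 and with the fixed budget (population 30, 200 iterations) used in the experiments.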
Table 9. Initial parameters for each algorithm.
Algorithm | Parameter | Value
All algorithms | Population size for all problems | 30
All algorithms | Maximum iterations for testbench functions and real-world problems | 200
All algorithms | Number of repetitions for testbench functions | 30
MShOA | No additional parameters | –
ALO | I ratio | 10
ALO | w | 2.0–0.6
AOA | α | 5.0
AOA | μ | 0.5
BWO | W_f | [0.1, 0.05]
DO | α | [0, 1]
DO | k | [0, 1]
EMA | r | [0, 0.2]
EMA | C_r | [0, 1]
GWO | α | 2.0–0.0
LCA | f | 1
MAO | d_p | 0.5
MAO | r_p | 0.1
MAO | k | 3
MAO | λ | 0.5
MPA | P | 0.5
MPA | FADs | 0.2
SSA | v | 0.0
SSOA | w | 0.7
SSOA | C_1 | 2.0
SSOA | C_2 | 2.0
SSOA | k | 0.5
TSA | P_min | 1
TSA | P_max | 4
WOA | α | 2.0–0.0
WOA | b | 2.0
CFOA | No additional parameters | –
Table 10. Comparative performances of MShOA and 14 algorithms on benchmark functions.
F | f_min | MShOA (Best; Ave; Std) | GWO (Best; Ave; Std)
F1 | 0 | 0; 0; 0 | 1.12 × 10^−12; 1.51 × 10^−11; 1.39 × 10^−11
F2 | 0 | 0; 0; 0 | 1.07 × 10^−09; 1.08 × 10^−02; 1.54 × 10^−02
F3 | 0 | 0; 1.16 × 10^−286; 0 | 2.14 × 10^−05; 4.62 × 10^−05; 1.73 × 10^−05
F4 | 0 | 0; 1.21 × 10^−288; 0 | 4.92 × 10^−03; 3.08 × 10^−02; 1.86 × 10^−02
F5 | 0 | 0; 5.63 × 10^−284; 0 | 3.26 × 10^−05; 8.29 × 10^−05; 3.22 × 10^−05
F6 | 0 | 0; 0; 0 | 1.43 × 10^−39; 2.07 × 10^−31; 9.76 × 10^−31
F7 | 0 | 0; 0; 0 | 4.10 × 10^−13; 1.95 × 10^−11; 2.26 × 10^−11
F8 | 0 | 0; 0; 0 | 1.97 × 10^−10; 1.22 × 10^−09; 1.51 × 10^−09
F9 | −1 | −1.00; −1.00; 0 | −9.96 × 10^−01; −9.97 × 10^−01; 2.53 × 10^−04
F10 | 0 | 0; 0; 0 | 1.73 × 10^−02; 3.47 × 10^−01; 3.22 × 10^−01
F11 | 0 | 4.44 × 10^−16; 4.44 × 10^−16; 0 | 6.44 × 10^−06; 1.67 × 10^−05; 7.44 × 10^−06
F12 | −4.59 | −4.59; −2.79; 1.36 | −4.59; −4.59; 1.73 × 10^−06
F13 | 0.9 | 9.00 × 10^−01; 9.00 × 10^−01; 4.52 × 10^−16 | 1.19; 2.36; 1.95
F14 | 0 | 3.68 × 10^−07; 1.51 × 10^−04; 1.46 × 10^−04 | 2.39 × 10^−03; 7.00 × 10^−03; 4.16 × 10^−03
F15 | 0 | 0; 0; 0 | 4.43; 1.62 × 10^+01; 8.43
F16 | 0 | 2.89 × 10^+01; 2.90 × 10^+01; 1.58 × 10^−02 | 2.61 × 10^+01; 2.73 × 10^+01; 7.56 × 10^−01
F17 | 0 | 0; 0; 0 | 2.00 × 10^−01; 2.98 × 10^−01; 5.61 × 10^−02
F18 | 0 | 2.75 × 10^−162; 1.33 × 10^−27; 7.27 × 10^−27 | 1.40 × 10^−16; 9.98 × 10^−06; 5.46 × 10^−05
F19 | 0 | 4.34 × 10^−07; 1.54 × 10^−05; 2.70 × 10^−05 | 1.67 × 10^−10; 1.27 × 10^−07; 2.58 × 10^−07
F20 | −1 | −1.00; −1.00; 0 | 3.56 × 10^−15; 7.86 × 10^−15; 3.17 × 10^−15
Table 11. Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.
F | f_min | BWO (Best; Ave; Std) | DO (Best; Ave; Std)
F1 | 0 | 9.60 × 10^−117; 7.51 × 10^−107; 2.97 × 10^−106 | 1.18 × 10^−05; 9.70 × 10^−05; 5.74 × 10^−05
F2 | 0 | 0; 0; 0 | 1.25 × 10^−02; 6.76 × 10^−02; 4.38 × 10^−02
F3 | 0 | 1.97 × 10^−56; 4.47 × 10^−52; 1.09 × 10^−51 | 2.29 × 10^−01; 4.54 × 10^−01; 1.57 × 10^−01
F4 | 0 | 4.29 × 10^−55; 5.33 × 10^−52; 1.00 × 10^−51 | 2.71; 1.09 × 10^+01; 5.30
F5 | 0 | 2.07 × 10^−56; 1.95 × 10^−52; 4.28 × 10^−52 | 1.77 × 10^−01; 8.69; 4.33 × 10^+01
F6 | 0 | 0; 0; 0 | 5.34 × 10^−16; 1.34 × 10^−09; 5.13 × 10^−09
F7 | 0 | 5.11 × 10^−118; 7.58 × 10^−106; 3.97 × 10^−105 | 1.57 × 10^−05; 5.59 × 10^−05; 4.28 × 10^−05
F8 | 0 | 1.01 × 10^−112; 9.62 × 10^−103; 5.19 × 10^−102 | 1.22 × 10^−03; 3.74 × 10^−03; 2.10 × 10^−03
F9 | −1 | −1.00; −1.00; 0 | −9.95 × 10^−01; −9.96 × 10^−01; 3.48 × 10^−04
F10 | 0 | 1.47 × 10^−101; 3.55 × 10^−93; 8.37 × 10^−93 | 1.77; 8.49; 6.83
F11 | 0 | 4.44 × 10^−16; 4.44 × 10^−16; 0 | 1.02 × 10^−02; 3.83 × 10^−01; 6.86 × 10^−01
F12 | −4.59 | −4.59; −4.58; 1.53 × 10^−02 | −4.59; −4.59; 1.35 × 10^−10
F13 | 0.9 | 9.00 × 10^−01; 9.00 × 10^−01; 4.52 × 10^−16 | 1.03; 1.13; 7.30 × 10^−02
F14 | 0 | 1.24 × 10^−06; 2.76 × 10^−04; 2.13 × 10^−04 | 2.20 × 10^−02; 6.61 × 10^−02; 3.19 × 10^−02
F15 | 0 | 0; 0; 0 | 5.38; 4.29 × 10^+01; 2.29 × 10^+01
F16 | 0 | 2.26 × 10^−04; 7.66 × 10^−03; 1.16 × 10^−02 | 2.57 × 10^+01; 2.76 × 10^+01; 7.95 × 10^−01
F17 | 0 | 6.65 × 10^−51; 5.03 × 10^−43; 1.72 × 10^−42 | 7.00 × 10^−01; 1.08; 2.25 × 10^−01
F18 | 0 | 4.73 × 10^−36; 1.84 × 10^−17; 1.01 × 10^−16 | 1.49 × 10^−05; 1.73 × 10^−01; 2.36 × 10^−01
F19 | 0 | 3.51 × 10^−12; 3.69 × 10^−12; 7.63 × 10^−13 | 3.04 × 10^−11; 1.26 × 10^−10; 1.07 × 10^−10
F20 | −1 | −1.00; −1.00; 0 | 1.20 × 10^−15; 3.28 × 10^−15; 1.44 × 10^−15
Table 12. Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.
F | f_min | WOA (Best; Ave; Std) | MPA (Best; Ave; Std)
F1 | 0 | 4.28 × 10^−38; 3.44 × 10^−31; 1.06 × 10^−30 | 1.24 × 10^−10; 1.09 × 10^−09; 9.38 × 10^−10
F2 | 0 | 0; 3.68 × 10^−02; 1.41 × 10^−01 | 1.83 × 10^−07; 1.15 × 10^−05; 4.85 × 10^−05
F3 | 0 | 1.10 × 10^−23; 2.81 × 10^−18; 1.02 × 10^−17 | 1.90 × 10^−04; 1.27 × 10^−03; 7.01 × 10^−04
F4 | 0 | 6.78; 5.70 × 10^+01; 2.16 × 10^+01 | 2.03 × 10^−03; 4.22 × 10^−03; 1.16 × 10^−03
F5 | 0 | 3.53 × 10^−22; 3.44 × 10^−17; 1.63 × 10^−16 | 2.66 × 10^−04; 1.28 × 10^−03; 5.84 × 10^−04
F6 | 0 | 1.65 × 10^−102; 4.46 × 10^−66; 1.96 × 10^−65 | 7.15 × 10^−41; 3.13 × 10^−36; 1.40 × 10^−35
F7 | 0 | 2.56 × 10^−38; 4.80 × 10^−27; 2.62 × 10^−26 | 7.58 × 10^−11; 1.29 × 10^−09; 9.13 × 10^−10
F8 | 0 | 5.50 × 10^−34; 2.96 × 10^−28; 1.24 × 10^−27 | 2.71 × 10^−09; 6.67 × 10^−08; 5.53 × 10^−08
F9 | −1 | −1.00; −1.00; 1.09 × 10^−16 | −9.73 × 10^−01; −2.56 × 10^−01; 8.42 × 10^−01
F10 | 0 | 3.47 × 10^+02; 5.08 × 10^+02; 1.02 × 10^+02 | 4.35 × 10^−01; 1.39; 6.78 × 10^−01
F11 | 0 | 3.11 × 10^−15; 2.23 × 10^−14; 1.81 × 10^−14 | 6.58 × 10^−05; 1.57 × 10^−04; 6.56 × 10^−05
F12 | −4.59 | −4.59; −4.59; 2.17 × 10^−05 | −4.59; −4.59; 2.08 × 10^−13
F13 | 0.9 | 9.00 × 10^−01; 1.41; 6.32 × 10^−01 | 1.01; 1.14; 1.00 × 10^−01
F14 | 0 | 6.84 × 10^−04; 1.08 × 10^−02; 8.88 × 10^−03 | 1.44 × 10^−03; 3.89 × 10^−03; 1.89 × 10^−03
F15 | 0 | 0; 1.33 × 10^−14; 3.23 × 10^−14 | 4.65 × 10^−07; 1.30 × 10^−02; 3.78 × 10^−02
F16 | 0 | 2.76 × 10^+01; 2.85 × 10^+01; 2.97 × 10^−01 | 2.62 × 10^+01; 2.69 × 10^+01; 4.03 × 10^−01
F17 | 0 | 9.89 × 10^−12; 1.30 × 10^−01; 6.51 × 10^−02 | 2.00 × 10^−01; 2.07 × 10^−01; 2.54 × 10^−02
F18 | 0 | 6.80 × 10^−16; 6.75 × 10^+03; 3.70 × 10^+04 | 6.95 × 10^−16; 1.11 × 10^−08; 3.87 × 10^−08
F19 | 0 | 3.51 × 10^−12; 5.63 × 10^−12; 3.42 × 10^−12 | 3.51 × 10^−12; 8.06 × 10^−12; 4.12 × 10^−12
F20 | −1 | −1.00; −1.67 × 10^−01; 3.79 × 10^−01 | 1.47 × 10^−14; 1.46 × 10^−13; 1.01 × 10^−13
Table 13. Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.
F | f_min | LCA (Best; Ave; Std) | SSA (Best; Ave; Std)
F1 | 0 | 1.22 × 10^−06; 7.62 × 10^−04; 1.10 × 10^−03 | 3.82 × 10^−02; 2.85; 4.62
F2 | 0 | 5.22 × 10^−03; 3.62 × 10^−01; 3.26 × 10^−01 | 1.08; 1.45; 4.03 × 10^−01
F3 | 0 | 1.39 × 10^−01; 2.13; 1.59 | 2.75 × 10^+01; 8.03 × 10^+01; 2.92 × 10^+01
F4 | 0 | 1.72 × 10^−03; 8.44 × 10^−02; 6.41 × 10^−02 | 1.00 × 10^+01; 1.70 × 10^+01; 4.35
F5 | 0 | 2.93 × 10^−01; 1.98; 1.45 | 4.20 × 10^+03; 1.17 × 10^+19; 4.18 × 10^+19
F6 | 0 | 4.60 × 10^−35; 1.66 × 10^−16; 8.89 × 10^−16 | 1.01 × 10^−03; 1.67 × 10^+01; 2.80 × 10^+01
F7 | 0 | 2.22 × 10^−05; 7.77 × 10^−04; 1.47 × 10^−03 | 2.29 × 10^−02; 1.32 × 10^−01; 8.38 × 10^−02
F8 | 0 | 5.85 × 10^−05; 5.90 × 10^−02; 1.51 × 10^−01 | 7.19; 3.18 × 10^+01; 2.17 × 10^+01
F9 | −1 | −1.00; −9.97 × 10^−01; 4.25 × 10^−03 | −9.96 × 10^−01; −9.96 × 10^−01; 1.61 × 10^−04
F10 | 0 | 1.09 × 10^−02; 7.77; 1.25 × 10^+01 | 1.34 × 10^+02; 2.39 × 10^+02; 7.25 × 10^+01
F11 | 0 | 2.96 × 10^−03; 1.57 × 10^−01; 1.70 × 10^−01 | 2.90; 5.38; 1.41
F12 | −4.59 | −4.56; −3.62; 9.77 × 10^−01 | −4.59; −4.59; 7.94 × 10^−12
F13 | 0.9 | 9.00 × 10^−01; 5.48; 4.76 | 1.01; 1.16; 1.51 × 10^−01
F14 | 0 | 9.68 × 10^−05; 1.73 × 10^−03; 1.54 × 10^−03 | 1.38 × 10^−01; 3.52 × 10^−01; 1.58 × 10^−01
F15 | 0 | 6.90 × 10^−04; 1.53 × 10^−01; 2.86 × 10^−01 | 2.43 × 10^+01; 5.58 × 10^+01; 2.03 × 10^+01
F16 | 0 | 2.54 × 10^−03; 1.34 × 10^−01; 1.32 × 10^−01 | 4.63 × 10^+01; 1.18 × 10^+02; 4.82 × 10^+01
F17 | 0 | 1.01 × 10^−01; 2.23 × 10^−01; 1.18 × 10^−01 | 2.90; 4.59; 7.56 × 10^−01
F18 | 0 | 9.52 × 10^−06; 1.15 × 10^−04; 1.08 × 10^−04 | 8.17 × 10^−02; 8.98 × 10^+01; 2.71 × 10^+02
F19 | 0 | 3.51 × 10^−12; 3.52 × 10^−12; 1.27 × 10^−14 | 1.70 × 10^−11; 1.40 × 10^−10; 1.87 × 10^−10
F20 | −1 | −9.66 × 10^−01; −8.06 × 10^−01; 1.43 × 10^−01 | 8.25 × 10^−14; 4.88 × 10^−13; 2.67 × 10^−13
Table 14. Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.
F | f_min | EMA (Best; Ave; Std) | ALO (Best; Ave; Std)
F1 | 0 | 1.16 × 10^−04; 1.50 × 10^−03; 1.26 × 10^−03 | 1.21 × 10^−05; 2.72 × 10^−02; 5.38 × 10^−02
F2 | 0 | 4.72 × 10^−02; 5.24 × 10^−01; 2.43 × 10^−01 | 1.17; 9.93; 6.33
F3 | 0 | 2.87 × 10^−02; 2.64 × 10^−01; 1.82 × 10^−01 | 1.73 × 10^+02; 2.19 × 10^+02; 3.65 × 10^+01
F4 | 0 | 3.06 × 10^+01; 6.15 × 10^+01; 1.70 × 10^+01 | 1.47 × 10^+01; 2.30 × 10^+01; 5.59
F5 | 0 | 1.40 × 10^−02; 2.77 × 10^−01; 1.90 × 10^−01 | 4.80 × 10^+02; 3.42 × 10^+24; 1.54 × 10^+25
F6 | 0 | 9.16 × 10^−06; 6.04 × 10^+01; 2.20 × 10^+02 | 8.06 × 10^−01; 3.01 × 10^+03; 1.12 × 10^+04
F7 | 0 | 2.10 × 10^−04; 6.44 × 10^−03; 9.76 × 10^−03 | 1.06 × 10^−01; 2.52; 1.76
F8 | 0 | 6.14 × 10^−03; 9.74 × 10^−02; 9.60 × 10^−02 | 1.34 × 10^+01; 1.11 × 10^+02; 5.17 × 10^+01
F9 | −1 | −9.95 × 10^−01; −9.95 × 10^−01; 5.22 × 10^−16 | −1.00; −9.83 × 10^−01; 2.85 × 10^−02
F10 | 0 | 2.93 × 10^+02; 4.40 × 10^+02; 8.74 × 10^+01 | 1.74 × 10^+02; 3.85 × 10^+02; 9.94 × 10^+01
F11 | 0 | 1.81 × 10^−02; 1.12 × 10^−01; 7.69 × 10^−02 | 7.97; 1.24 × 10^+01; 2.09
F12 | −4.59 | −4.59; −4.53; 2.23 × 10^−01 | −4.59; −4.59; 2.61 × 10^−11
F13 | 0.9 | 2.51; 3.37; 6.31 × 10^−01 | 9.40 × 10^−01; 1.07; 5.54 × 10^−02
F14 | 0 | 4.97 × 10^−02; 1.45 × 10^−01; 7.41 × 10^−02 | 4.80 × 10^−01; 1.04; 5.72 × 10^−01
F15 | 0 | 1.29 × 10^−01; 1.58 × 10^+01; 2.08 × 10^+01 | 3.81 × 10^+01; 8.90 × 10^+01; 2.55 × 10^+01
F16 | 0 | 2.88 × 10^+01; 3.53 × 10^+01; 7.84 | 8.56 × 10^+01; 3.69 × 10^+02; 2.39 × 10^+02
F17 | 0 | 8.03 × 10^−01; 1.15; 2.30 × 10^−01 | 5.30; 7.89; 1.49
F18 | 0 | 1.03 × 10^−06; 4.35 × 10^−02; 8.95 × 10^−02 | 6.14 × 10^+03; 3.37 × 10^+09; 7.40 × 10^+09
F19 | 0 | 2.07 × 10^−11; 2.51 × 10^−11; 1.88 × 10^−12 | 0; 3.27 × 10^−12; 6.66 × 10^−12
F20 | −1 | 3.27 × 10^−13; 9.86 × 10^−13; 4.08 × 10^−13 | 1.02 × 10^−14; 6.56 × 10^−13; 2.72 × 10^−12
Table 15. Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.
F | f_min | MAO (Best; Ave; Std) | AOA (Best; Ave; Std)
F1 | 0 | 9.33 × 10^+01; 2.68 × 10^+02; 1.47 × 10^+02 | 2.27; 6.62; 8.88
F2 | 0 | 3.40 × 10^+01; 6.16 × 10^+01; 1.45 × 10^+01 | 9.00 × 10^−03; 3.99 × 10^−01; 2.05 × 10^−01
F3 | 0 | 2.78 × 10^+02; 3.57 × 10^+02; 4.38 × 10^+01 | 2.03 × 10^−35; 3.33 × 10^−10; 1.82 × 10^−09
F4 | 0 | 2.59 × 10^+01; 3.59 × 10^+01; 4.45 | 1.11 × 10^−17; 2.80 × 10^−02; 2.15 × 10^−02
F5 | 0 | 4.35 × 10^+20; 8.75 × 10^+25; 2.39 × 10^+26 | 7.48 × 10^−46; 6.62 × 10^−08; 3.62 × 10^−07
F6 | 0 | 1.26 × 10^+05; 7.67 × 10^+05; 6.73 × 10^+05 | 0; 0; 0
F7 | 0 | 1.27 × 10^+01; 1.86 × 10^+01; 3.71 | 0; 0; 0
F8 | 0 | 5.66 × 10^+02; 9.44 × 10^+02; 2.35 × 10^+02 | 0; 2.62 × 10^−240; 0
F9 | −1 | −9.99 × 10^−01; −9.99 × 10^−01; 1.01 × 10^−04 | −1.00; −7.34 × 10^−01; 6.91 × 10^−01
F10 | 0 | 1.36 × 10^+02; 7.07 × 10^+05; 2.25 × 10^+06 | 2.52 × 10^+02; 3.98 × 10^+02; 6.68 × 10^+01
F11 | 0 | 1.29 × 10^+01; 1.44 × 10^+01; 8.01 × 10^−01 | 4.44 × 10^−16; 4.44 × 10^−16; 0
F12 | −4.59 | −4.59; −3.96; 5.87 × 10^−01 | −4.59; −4.59; 1.14 × 10^−05
F13 | 0.9 | 6.83; 8.54; 8.00 × 10^−01 | 9.00 × 10^−01; 9.00 × 10^−01; 4.52 × 10^−16
F14 | 0 | 1.00; 2.31; 7.98 × 10^−01 | 1.19 × 10^−07; 1.44 × 10^−04; 1.31 × 10^−04
F15 | 0 | 2.00 × 10^+02; 2.38 × 10^+02; 1.49 × 10^+01 | 0; 0; 0
F16 | 0 | 2.18 × 10^+03; 3.90 × 10^+03; 1.23 × 10^+03 | 2.87 × 10^+01; 2.88 × 10^+01; 4.70 × 10^−02
F17 | 0 | 6.91; 8.86; 9.47 × 10^−01 | 9.99 × 10^−02; 9.99 × 10^−02; 6.56 × 10^−08
F18 | 0 | 2.75 × 10^+01; 2.25 × 10^+05; 1.09 × 10^+06 | 0; 0; 0
F19 | 0 | 2.06 × 10^−07; 3.66 × 10^−06; 4.38 × 10^−06 | 6.21 × 10^−08; 4.46 × 10^−05; 1.16 × 10^−04
F20 | −1 | 1.65 × 10^−11; 8.97 × 10^−11; 7.30 × 10^−11 | 1.13 × 10^−08; 9.19 × 10^−08; 6.28 × 10^−08
Table 16. Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.
| F | f_min | SSOA Best | SSOA Ave | SSOA Std | TSA Best | TSA Ave | TSA Std |
| F1 | 0 | 1.45 × 10^−170 | 7.68 × 10^−155 | 4.18 × 10^−154 | 4.61 × 10^−79 | 4.25 × 10^−76 | 6.57 × 10^−76 |
| F2 | 0 | 0 | 0 | 0 | 0 | 3.11 × 10^−03 | 5.48 × 10^−03 |
| F3 | 0 | 6.92 × 10^−84 | 6.63 × 10^−78 | 3.31 × 10^−77 | 3.38 × 10^−40 | 2.07 × 10^−38 | 2.36 × 10^−38 |
| F4 | 0 | 7.10 × 10^−82 | 3.21 × 10^−76 | 9.69 × 10^−76 | 3.94 × 10^−37 | 4.64 × 10^−36 | 5.72 × 10^−36 |
| F5 | 0 | 6.86 × 10^−83 | 2.68 × 10^−78 | 9.53 × 10^−78 | 7.84 × 10^−41 | 2.26 × 10^−38 | 2.74 × 10^−38 |
| F6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| F7 | 0 | 2.75 × 10^−168 | 1.05 × 10^−156 | 5.72 × 10^−156 | 3.56 × 10^−82 | 8.62 × 10^−78 | 2.69 × 10^−77 |
| F8 | 0 | 6.39 × 10^−167 | 9.21 × 10^−157 | 3.74 × 10^−156 | 9.46 × 10^−82 | 3.99 × 10^−76 | 1.62 × 10^−75 |
| F9 | −1 | −9.98 × 10^−01 | −9.99 × 10^−01 | 1.69 × 10^−04 | −9.97 × 10^−01 | −9.98 × 10^−01 | 2.59 × 10^−04 |
| F10 | 0 | 1.39 × 10^−104 | 8.76 × 10^−92 | 2.42 × 10^−91 | 2.31 × 10^−74 | 3.94 × 10^−71 | 9.39 × 10^−71 |
| F11 | 0 | 4.44 × 10^−16 | 4.44 × 10^−16 | 0 | 3.11 × 10^−15 | 3.35 × 10^−15 | 9.01 × 10^−16 |
| F12 | −4.59 | −4.59 | −4.46 | 1.77 × 10^−01 | −4.59 | −4.32 | 4.11 × 10^−01 |
| F13 | 0.9 | 9.00 × 10^−01 | 9.00 × 10^−01 | 4.52 × 10^−16 | 4.00 | 7.48 | 1.33 |
| F14 | 0 | 4.57 × 10^−06 | 2.09 × 10^−04 | 2.25 × 10^−04 | 2.62 × 10^−05 | 2.90 × 10^−04 | 2.01 × 10^−04 |
| F15 | 0 | 0 | 0 | 0 | 0 | 1.58 × 10^+01 | 4.69 × 10^+01 |
| F16 | 0 | 2.88 × 10^+01 | 2.89 × 10^+01 | 7.06 × 10^−02 | 2.79 × 10^+01 | 2.87 × 10^+01 | 3.02 × 10^−01 |
| F17 | 0 | 3.20 × 10^−69 | 8.63 × 10^−02 | 3.96 × 10^−02 | 9.99 × 10^−02 | 1.10 × 10^−01 | 3.05 × 10^−02 |
| F18 | 0 | 3.37 × 10^−75 | 1.24 × 10^−31 | 6.82 × 10^−31 | 7.70 × 10^−23 | 4.18 × 10^−07 | 1.64 × 10^−06 |
| F19 | 0 | 1.56 × 10^−07 | 2.28 × 10^−05 | 2.25 × 10^−05 | 2.80 × 10^−08 | 7.30 × 10^−06 | 1.82 × 10^−05 |
| F20 | −1 | 3.99 × 10^−10 | 1.84 × 10^−09 | 1.24 × 10^−09 | 1.99 × 10^−11 | 1.18 × 10^−10 | 1.34 × 10^−10 |
Table 17. Continuation of comparative performances of MShOA and 14 algorithms on benchmark functions.
| F | f_min | CFOA Best | CFOA Ave | CFOA Std |
| F1 | 0 | 2.45 × 10^+02 | 3.89 × 10^+04 | 1.33 × 10^+05 |
| F2 | 0 | 6.35 × 10^+01 | 1.23 × 10^+02 | 5.28 × 10^+01 |
| F3 | 0 | 3.15 × 10^+02 | 4.98 × 10^+02 | 1.06 × 10^+02 |
| F4 | 0 | 2.92 × 10^+01 | 4.77 × 10^+01 | 9.91 × 10^+00 |
| F5 | 0 | 5.05 × 10^+24 | 6.42 × 10^+33 | 2.83 × 10^+34 |
| F6 | 0 | 2.35 × 10^+05 | 5.12 × 10^+07 | 1.21 × 10^+08 |
| F7 | 0 | 8.95 × 10^+00 | 3.36 × 10^+01 | 1.45 × 10^+01 |
| F8 | 0 | 6.66 × 10^+02 | 1.64 × 10^+03 | 7.98 × 10^+02 |
| F9 | −1 | −9.95 × 10^−01 | −9.96 × 10^−01 | 5.13 × 10^−04 |
| F10 | 0 | 2.73 × 10^+02 | 1.01 × 10^+05 | 5.46 × 10^+05 |
| F11 | 0 | 1.16 × 10^+01 | 1.57 × 10^+01 | 1.46 × 10^+00 |
| F12 | −4.59 | −4.59 × 10^+00 | −3.36 × 10^+00 | 1.16 × 10^+00 |
| F13 | 0.9 | 6.40 × 10^+00 | 8.54 × 10^+00 | 1.23 × 10^+00 |
| F14 | 0 | 1.41 × 10^+00 | 7.13 × 10^+00 | 5.45 × 10^+00 |
| F15 | 0 | 1.87 × 10^+02 | 2.65 × 10^+02 | 4.74 × 10^+01 |
| F16 | 0 | 2.85 × 10^+03 | 1.31 × 10^+04 | 9.58 × 10^+03 |
| F17 | 0 | 7.40 × 10^+00 | 1.20 × 10^+01 | 2.76 × 10^+00 |
| F18 | 0 | 3.40 × 10^+01 | 5.11 × 10^+09 | 2.06 × 10^+10 |
| F19 | 0 | 1.60 × 10^−09 | 3.38 × 10^−07 | 6.10 × 10^−07 |
| F20 | −1 | 5.66 × 10^−11 | 5.60 × 10^−10 | 8.06 × 10^−10 |
Table 18. Statistical analysis of the Wilcoxon signed-rank test comparing MShOA with other algorithms for 10 unimodal functions using a 5% significance level.
| Comparison | (+/=/−) | p-Value |
| MShOA–GWO | (10/0/0) | **5.06 × 10^−03** |
| MShOA–BWO | (7/3/0) | **1.80 × 10^−02** |
| MShOA–DO | (10/0/0) | **5.06 × 10^−03** |
| MShOA–WOA | (9/1/0) | **7.69 × 10^−03** |
| MShOA–MPA | (10/0/0) | **5.06 × 10^−03** |
| MShOA–LCA | (10/0/0) | **5.06 × 10^−03** |
| MShOA–SSA | (10/0/0) | **5.06 × 10^−03** |
| MShOA–EMA | (10/0/0) | **5.06 × 10^−03** |
| MShOA–ALO | (10/0/0) | **5.06 × 10^−03** |
| MShOA–MAO | (10/0/0) | **5.06 × 10^−03** |
| MShOA–AOA | (8/2/0) | **1.17 × 10^−02** |
| MShOA–SSOA | (8/2/0) | **1.17 × 10^−02** |
| MShOA–TSA | (9/1/0) | **7.69 × 10^−03** |
| MShOA–CFOA | (10/0/0) | **5.06 × 10^−03** |
The bold numbers in this table indicate a significant difference between the two compared algorithms, where MShOA demonstrated a superior performance.
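The Wilcoxon comparisons above pair the two algorithms' results function by function. The following stdlib-only Python sketch implements the standard two-sided signed-rank test under the usual normal approximation (SciPy's `scipy.stats.wilcoxon` provides equivalent functionality); it is an illustration of the test itself, not the authors' code.

```python
import math

def wilcoxon_signed_rank(a, b):
    """Two-sided Wilcoxon signed-rank test via the normal approximation.
    Returns (W, p). Zero differences are dropped; tied |d| values
    receive average ranks."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        # Extend j over a run of tied absolute differences.
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(w_plus, w_minus)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return w, p
```

With 10 paired results where one algorithm wins every time (a 10/0/0 split, no ties), this approximation gives p ≈ 5.06 × 10^−03, the value that recurs throughout Table 18.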
Table 19. Performance comparison on 10 unimodal functions by Friedman test.
| Algorithm | MShOA | GWO | BWO | DO | WOA |
| Mean of ranks | **1.45** | 7.20 | **2.90** | 9.15 | 6.90 |
| Global ranking | **1** | 8 | **2** | 10 | 7 |
| Algorithm | MPA | LCA | SSA | EMA | ALO |
| Mean of ranks | 6.80 | 8.70 | 11.15 | 10.80 | 11.50 |
| Global ranking | 6 | 9 | 12 | 11 | 13 |
| Algorithm | MAO | AOA | SSOA | TSA | CFOA |
| Mean of ranks | 13.95 | **6.45** | **3.65** | **5.10** | 14.30 |
| Global ranking | 14 | **5** | **3** | **4** | 15 |
The five algorithms with the best overall performance according to the Friedman test are highlighted in bold.
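The mean-of-ranks rows are obtained by ranking the 15 algorithms on each benchmark function (lower result = lower rank) and averaging over the functions. A minimal pure-Python sketch of that computation and of the Friedman chi-square statistic (SciPy's `scipy.stats.friedmanchisquare` offers the same statistic with a p-value) is:

```python
def friedman(scores):
    """Friedman test on an n-functions x k-algorithms matrix of results
    (lower is better). Returns (mean_ranks, chi2), where chi2 is the
    Friedman chi-square statistic with k-1 degrees of freedom."""
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])
        i = 0
        while i < k:
            j = i
            # Group tied scores and assign them their average rank.
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1
            for t in range(i, j + 1):
                rank_sums[order[t]] += avg_rank
            i = j + 1
    mean_ranks = [s / n for s in rank_sums]
    chi2 = (12.0 / (n * k * (k + 1)) * sum(s * s for s in rank_sums)
            - 3.0 * n * (k + 1))
    return mean_ranks, chi2
```

For example, with three algorithms and algorithm 1 always best, the mean ranks come out as [1.0, 2.0, 3.0] and a large chi-square value rejects the hypothesis that all algorithms perform equally.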
Table 20. Statistical analysis of the Wilcoxon signed-rank test comparing MShOA with other algorithms for 10 multimodal functions using a 5% significance level.
| Comparison | (+/=/−) | p-Value |
| MShOA–GWO | (7/0/3) | 3.86 × 10^−01 |
| MShOA–BWO | (3/4/3) | 4.63 × 10^−01 |
| MShOA–DO | (7/0/3) | 3.33 × 10^−01 |
| MShOA–WOA | (7/0/3) | 3.33 × 10^−01 |
| MShOA–MPA | (7/0/3) | 5.08 × 10^−01 |
| MShOA–LCA | (7/0/3) | 3.86 × 10^−01 |
| MShOA–SSA | (8/0/2) | **2.84 × 10^−02** |
| MShOA–EMA | (8/0/2) | **4.69 × 10^−02** |
| MShOA–ALO | (8/0/2) | **2.84 × 10^−02** |
| MShOA–MAO | (8/0/2) | **1.66 × 10^−02** |
| MShOA–AOA | (3/3/4) | 8.66 × 10^−01 |
| MShOA–SSOA | (4/3/3) | 8.66 × 10^−01 |
| MShOA–TSA | (7/0/3) | 2.85 × 10^−01 |
| MShOA–CFOA | (8/0/2) | **1.25 × 10^−02** |
The bold numbers in this table indicate a significant difference between the two compared algorithms, where MShOA demonstrated a superior performance.
Table 21. Performance comparison on 10 multimodal functions by Friedman test.
| Algorithm | MShOA | GWO | BWO | DO | WOA |
| Mean of ranks | **5.30** | 7.40 | **3.10** | 7.90 | 6.50 |
| Global ranking | **2** | 8 | **1** | 9 | 6 |
| Algorithm | MPA | LCA | SSA | EMA | ALO |
| Mean of ranks | **5.90** | 7.10 | 9.90 | 9.55 | 9.80 |
| Global ranking | **4** | 7 | 13 | 11 | 12 |
| Algorithm | MAO | AOA | SSOA | TSA | CFOA |
| Mean of ranks | 13.20 | **5.55** | 6.25 | 8.45 | 14.10 |
| Global ranking | 14 | **3** | 5 | 10 | 15 |
The four algorithms with the best overall performance according to the Friedman test are highlighted in bold.
Table 22. Statistical analysis of the Wilcoxon signed-rank test comparing MShOA with other algorithms for 20 functions using a 5% significance level.
| Comparison | (+/=/−) | p-Value |
| MShOA–GWO | (17/0/3) | **1.69 × 10^−02** |
| MShOA–BWO | (10/7/3) | 4.63 × 10^−01 |
| MShOA–DO | (17/0/3) | **5.73 × 10^−03** |
| MShOA–WOA | (16/1/3) | **2.18 × 10^−02** |
| MShOA–MPA | (17/0/3) | **2.76 × 10^−02** |
| MShOA–LCA | (17/0/3) | **1.11 × 10^−02** |
| MShOA–SSA | (18/0/2) | **2.93 × 10^−04** |
| MShOA–EMA | (18/0/2) | **6.81 × 10^−04** |
| MShOA–ALO | (18/0/2) | **2.93 × 10^−04** |
| MShOA–MAO | (18/0/2) | **1.63 × 10^−04** |
| MShOA–AOA | (11/5/4) | 7.83 × 10^−02 |
| MShOA–SSOA | (12/5/3) | 1.40 × 10^−01 |
| MShOA–TSA | (16/1/3) | **2.69 × 10^−02** |
| MShOA–CFOA | (18/0/2) | **1.40 × 10^−04** |
The bold values in this table indicate a significant difference between the two compared algorithms, where MShOA demonstrated a superior performance.
Table 23. Performance comparison of algorithms on 20 unimodal and multimodal functions by Friedman test.
| Algorithm | MShOA | GWO | BWO | DO | WOA |
| Mean of ranks | **3.38** | 7.30 | **3.00** | 8.53 | 6.70 |
| Global ranking | **2** | 8 | **1** | 10 | 6 |
| Algorithm | MPA | LCA | SSA | EMA | ALO |
| Mean of ranks | 6.35 | 7.90 | 10.53 | 10.18 | 10.65 |
| Global ranking | 5 | 9 | 12 | 11 | 13 |
| Algorithm | MAO | AOA | SSOA | TSA | CFOA |
| Mean of ranks | 13.58 | **6.00** | **4.95** | 6.78 | 14.20 |
| Global ranking | 14 | **4** | **3** | 7 | 15 |
The four algorithms with the best overall performance according to the Friedman test are highlighted in bold.
Table 24. Real-world optimization problems from the CEC2020 benchmark, where D is the problem dimension, g is the number of inequality constraints, h is the number of equality constraints, and f(x̄*) is the best-known feasible objective function value.
| Prob | Name | D | g | h | f(x̄*) |
| CEC20-RC08 | Process synthesis problem | 2 | 2 | 0 | 2.0000000000 |
| CEC20-RC09 | Process synthesis and design problem | 3 | 1 | 1 | 2.5576545740 |
| CEC20-RC10 | Process flow sheeting problem | 3 | 3 | 0 | 1.0765430833 |
| CEC20-RC15 | Weight minimization of a speed reducer | 7 | 11 | 0 | 2.9944244658 × 10^+03 |
| CEC20-RC17 | Tension/compression spring design (case 1) | 3 | 3 | 0 | 1.2665232788 × 10^−02 |
| CEC20-RC19 | Welded beam design | 4 | 5 | 0 | 1.6702177263 |
| CEC20-RC21 | Multiple disk clutch brake design problem | 5 | 8 | 0 | 2.3524245790 × 10^−01 |
| CEC20-RC22 | Planetary gear train design optimization problem | 9 | 10 | 1 | 5.2576870748 × 10^−01 |
| CEC20-RC30 | Tension/compression spring design (case 2) | 3 | 8 | 0 | 2.6138840583 |
| CEC20-RC32 | Himmelblau's function | 5 | 6 | 0 | −3.0665538672 × 10^+04 |
Table 25. Results of the process synthesis problem.
| Algorithms | x1 | x2 | f_min |
| BWO | 0.500 | 1.49 | 2.00 |
| MShOA | 0.500 | 0.937 | **2.00** |
| SSOA | 0 | 0.894 | 2.01 * |
| AOA | 0.498 | 0.924 | 2.00 * |
* This solution does not satisfy one or more constraints.
Table 26. Comparison results of the process synthesis and design problem.
| Algorithms | x1 | x2 | x3 | f_min |
| BWO | 0.860 | 0.844 | 0.109 | 2.57 * |
| MShOA | 0.853 | 0.852 | 0.0845 | **2.56** |
| SSOA | 0.903 | 0.793 | 0.365 | 2.61 * |
| AOA | 0.862 | 0.841 | 2.95 × 10^−04 | 2.57 * |
* This solution does not satisfy one or more constraints.
Table 27. Comparison results of the process flow sheeting problem.
| Algorithms | x1 | x2 | x3 | f_min |
| BWO | 0.948 | 2.09 | 0.611 | 1.18 * |
| MShOA | 0.943 | 2.10 | 0.850 | **1.08** |
| SSOA | 0.967 | 2.11 | 1.42 | 1.19 |
| AOA | 0.938 | 2.09 | 1.49 | 1.14 * |
* This solution does not satisfy one or more constraints.
Table 28. Comparison results of the weight minimization of a speed reducer.
| Parameters | BWO | MShOA | SSOA | AOA |
| x1 | 3.54 | 3.50 | 3.58 | 3.60 |
| x2 | 0.700 | 0.700 | 0.713 | 0.700 |
| x3 | 17.0 | 17.2 | 25.6 | 17.0 |
| x4 | 7.90 | 7.34 | 7.66 | 7.30 |
| x5 | 8.15 | 7.96 | 7.77 | 8.30 |
| x6 | 3.40 | 3.39 | 3.85 | 3.50 |
| x7 | 5.29 | 5.34 | 5.32 | 5.31 |
| f_min | **3.04 × 10^+03** | 3.08 × 10^+03 | 6.04 × 10^+03 * | 3.10 × 10^+03 |
* This solution does not satisfy one or more constraints.
Table 29. Comparison results of the tension/compression spring design (case 1).
| Algorithms | x1 | x2 | x3 | f_min |
| BWO | 0.0500 | 0.317 | 14.1 | **1.27 × 10^−02** |
| MShOA | 0.0543 | 0.423 | 8.27 | 1.28 × 10^−02 |
| SSOA | 0.0617 | 0.709 | 3.51 | 2.68 × 10^−02 * |
| AOA | 0.0500 | 0.311 | 15.0 | 1.32 × 10^−02 |
* This solution does not satisfy one or more constraints.
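The asterisked entries flag constraint violations, which are straightforward to verify for this problem. The sketch below uses the classical tension/compression spring model with x = (d, D, N) = (wire diameter, mean coil diameter, number of active coils), assuming CEC20-RC17 follows this standard formulation:

```python
def spring_design(x):
    """Classical tension/compression spring model. Returns the spring
    weight and the list of constraint values; a design is feasible
    when every g_i <= 0."""
    d, D, N = x
    weight = (N + 2) * D * d ** 2
    g = [
        1 - D ** 3 * N / (71785 * d ** 4),                 # shear stress
        (4 * D ** 2 - d * D) / (12566 * (D ** 3 * d - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                         # surge stress
        1 - 140.45 * d / (D ** 2 * N),                     # deflection
        (d + D) / 1.5 - 1,                                 # outer diameter
    ]
    return weight, g
```

Plugging in the BWO row of Table 29, (0.0500, 0.317, 14.1), reproduces f ≈ 1.27 × 10^−02 with all four constraints satisfied.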
Table 30. Comparison results of the welded beam design.
| Algorithms | x1 | x2 | x3 | x4 | f_min |
| BWO | 0.240 | 2.80 | 9.59 | 0.200 | 1.78 * |
| MShOA | 0.189 | 3.94 | 9.17 | 0.200 | **1.74** |
| SSOA | 0.258 | 3.83 | 7.63 | 0.374 | 2.73 * |
| AOA | 0.125 | 6.04 | 10.0 | 0.196 | 1.99 |
* This solution does not satisfy one or more constraints.
Table 31. Comparison results of the multiple disk clutch brake design problem.
| Variables | BWO | MShOA | SSOA | AOA |
| x1 | 70.0 | 70.1 | 66.1 | 70.0 |
| x2 | 90.0 | 90.1 | 90.0 | 90.0 |
| x3 | 1.00 | 1.00 | 1.00 | 1.00 |
| x4 | 77.1 | 204 | 4.15 | 1000 |
| x5 | 2.00 | 2.00 | 2.00 | 2.00 |
| f_min | **0.235** * | 0.236 | 0.274 * | 0.235 * |
* This solution does not satisfy one or more constraints.
Table 32. Comparison results of the planetary gear train design optimization problem (transposed).
| Variables | BWO | MShOA | SSOA | AOA |
| x1 | 24.4 | 45.3 | 24.0 | 22.7 |
| x2 | 13.5 | 30.5 | 13.5 | 13.5 |
| x3 | 13.5 | 24.2 | 13.5 | 13.5 |
| x4 | 16.5 | 24.8 | 16.5 | 16.5 |
| x5 | 13.5 | 18.2 | 13.5 | 13.5 |
| x6 | 55.7 | 91.1 | 58.7 | 58.1 |
| x7 | 0.510 | 0.906 | 0.510 | 1.05 |
| x8 | 2.07 | 1.01 | 0.510 | 6.49 |
| x9 | 0.510 | 1.64 | 0.615 | 3.71 |
| f_min | 0.777 | **0.530** | 0.870 * | 0.666 * |
* This solution does not satisfy one or more constraints.
Table 33. Comparison results of the tension/compression spring design (case 2).
| Algorithms | x1 | x2 | x3 | f_min |
| BWO | 10.5 | 1.16 | 35.5 | 2.99 |
| MShOA | 5.48 | 1.66 | 36.9 | **2.70** |
| SSOA | 6.31 | 1.63 | 37.3 | 3.04 |
| AOA | 17.2 | 0.898 | 34.5 | 2.91 |
Table 34. Comparison results of Himmelblau’s function.
| Variables | BWO | MShOA | SSOA | AOA |
| x1 | 78.0 | 78.7 | 83.3 | 78.0 |
| x2 | 33.0 | 34.4 | 36.7 | 33.6 |
| x3 | 30.1 | 30.8 | 32.5 | 30.7 |
| x4 | 45.0 | 44.0 | 37.9 | 45.0 |
| x5 | 36.5 | 35.0 | 34.3 | 34.2 |
| f_min | **−3.06 × 10^+04** * | −3.05 × 10^+04 | −2.96 × 10^+04 * | −3.02 × 10^+04 * |
* This solution does not satisfy one or more constraints.
Table 35. Generator inequality constraints.
| | Min | Max | Initial |
| P_G1 | 50 | 200 | 99.23 |
| P_G2 | 20 | 80 | 80 |
| P_G5 | 15 | 50 | 50 |
| P_G8 | 10 | 35 | 20 |
| P_G11 | 10 | 30 | 20 |
| P_G13 | 12 | 40 | 20 |
| V_G1 | 0.95 | 1.1 | 1.05 |
| V_G2 | 0.95 | 1.1 | 1.04 |
| V_G5 | 0.95 | 1.1 | 1.01 |
| V_G8 | 0.95 | 1.1 | 1.01 |
| V_G11 | 0.95 | 1.1 | 1.05 |
| V_G13 | 0.95 | 1.1 | 1.05 |
Table 36. Transformer inequality constraints.
| | Min | Max | Initial |
| T_11 | 0.9 | 1.1 | 1.078 |
| T_12 | 0.9 | 1.1 | 1.069 |
| T_15 | 0.9 | 1.1 | 1.032 |
| T_36 | 0.9 | 1.1 | 1.068 |
Table 37. Shunt VAR compensator inequality constraints.
| | Min | Max | Initial |
| Q_C10 | 0 | 5 | 0 |
| Q_C12 | 0 | 5 | 0 |
| Q_C15 | 0 | 5 | 0 |
| Q_C17 | 0 | 5 | 0 |
| Q_C20 | 0 | 5 | 0 |
| Q_C21 | 0 | 5 | 0 |
| Q_C23 | 0 | 5 | 0 |
| Q_C24 | 0 | 5 | 0 |
| Q_C29 | 0 | 5 | 0 |
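Tables 35-37 jointly define a box constraint for every control variable of the IEEE 30-bus optimal power flow problem. A minimal sketch of how a population-based method such as MShOA could transcribe and enforce these limits follows; the dictionary keys are shorthand for the table rows, and whether the optimizer treats P_G1 as a control or as the slack injection is a modelling choice not specified here:

```python
# (min, max) bounds transcribed from Tables 35-37.
BOUNDS = {
    # Generator active power outputs (MW)
    "PG1": (50, 200), "PG2": (20, 80), "PG5": (15, 50),
    "PG8": (10, 35), "PG11": (10, 30), "PG13": (12, 40),
    # Generator bus voltages (p.u.)
    **{f"VG{b}": (0.95, 1.1) for b in (1, 2, 5, 8, 11, 13)},
    # Transformer tap settings (p.u.)
    **{f"T{b}": (0.9, 1.1) for b in (11, 12, 15, 36)},
    # Shunt VAR compensators (MVAr)
    **{f"QC{b}": (0, 5) for b in (10, 12, 15, 17, 20, 21, 23, 24, 29)},
}

def clamp(name, value):
    """Repair step: project an out-of-range candidate back onto its box."""
    lo, hi = BOUNDS[name]
    return min(max(value, lo), hi)
```

Clamping is only one common repair strategy; penalty terms or reflection at the bounds are frequently used alternatives.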
Table 38. Obtained results.
| Minimization Studies | BWO | MShOA | SSOA | AOA |
| Fuel cost | 2.237 × 10^4 | 8.054 × 10^2 | 1.797 × 10^4 | 8.102 × 10^2 |
| Active power | 7.640 | 4.758 | 1.158 × 10^1 | 4.936 |
| Reactive power | 1.065 × 10^1 | 1.923 × 10^1 | 3.288 × 10^1 | 6.280 × 10^1 |
Sánchez Cortez, J.A.; Peraza Vázquez, H.; Peña Delgado, A.F. A Novel Bio-Inspired Optimization Algorithm Based on Mantis Shrimp Survival Tactics. Mathematics 2025, 13, 1500. https://doi.org/10.3390/math13091500
