Article

An Efficient Multi-Objective White Shark Algorithm

School of Science, Xi’an University of Technology, Xi’an 710048, China
* Author to whom correspondence should be addressed.
Biomimetics 2025, 10(2), 112; https://doi.org/10.3390/biomimetics10020112
Submission received: 19 December 2024 / Revised: 30 January 2025 / Accepted: 9 February 2025 / Published: 13 February 2025

Abstract
To balance the diversity and convergence of Pareto solutions in multi-objective optimization, this paper introduces a multi-objective White Shark Optimization algorithm (MONSWSO). MONSWSO integrates non-dominated sorting and crowding distance into the White Shark Optimization framework to select the optimal solution within the population. The uniformity of the initial population is enhanced through a chaotic reverse initialization learning strategy. The adaptive updating of individual positions is facilitated by an elite-guided forgetting mechanism, which incorporates escape energy and eddy aggregation behavior inspired by marine organisms to improve exploration in key areas. To evaluate the effectiveness of MONSWSO, it is benchmarked against five state-of-the-art multi-objective algorithms using four metrics: inverted generational distance, spatial homogeneity, spatial distribution, and hypervolume, on 27 typical problems comprising 23 multi-objective functions and 4 multi-objective engineering examples. Furthermore, the practical applicability of MONSWSO is demonstrated through an example of optimizing the design of subway tunnel foundation pits. The comprehensive results reveal that MONSWSO outperforms the comparison algorithms, achieving impressive and satisfactory outcomes.

1. Introduction

In the domain of multi-objective (MO) optimization, researchers are tirelessly pursuing more efficient methodologies to tackle these intricate challenges, aiming to procure sets of Pareto optimal solutions (POS) that meticulously mirror the genuine Pareto front (PF). The recent advancements in MO Evolutionary Algorithms (MOEAs) can be primarily categorized into three distinct groups: (1) decomposition-based MOEAs, (2) indicator-based MOEAs, and (3) those grounded in Pareto dominance-based principles. The detailed description of MOEAs is shown in Figure 1.
(1)
Decomposition-based MOEAs
The core idea behind MOEAs based on decomposition, known as MOEA/D, involves employing a set of well-distributed weight vectors within the objective space to convert the intricate multi-objective optimization problem into a sequence of simpler, more manageable sub-problems. By leveraging the interrelations among these sub-problems, the algorithm computes objective function values, which function as fitness indicators, ultimately culminating in a comprehensive solution set. Research endeavors concerning this algorithm can be encapsulated into five pivotal domains [1]: (1) weight vector design, (2) decomposition technique, (3) reorganization approach, (4) replacement mechanism, and (5) computational resource allocation framework.
(1) Weight vectors (WCs) play a pivotal role in ensuring population diversity within decomposition-based MOEAs. Research endeavors in this realm primarily concentrate on two vital facets: the generation of uniformly distributed WCs and methodologies for adaptive WCs generation. The decomposition-based MO optimization method MOEA/D, initially proposed by Zhang et al. [2], utilizes the simplex method to generate uniformly distributed WCs. Building upon this foundation, Tan et al. [3] further refined the MOEA/D by introducing the UDEM algorithm, which surpassed the simplex lattice method in achieving a more balanced distribution of vectors within the objective space. Ma et al. [4] then introduced an even more adaptive approach in the objective space, seamlessly blending the UDEM and simplex mesh design methods to create alternative WCs. Harada et al. [5] contributed by offering a method to adaptively augment the WCs for particularly challenging sub-problems. Li et al. [6] took a novel route by introducing a technique that automatically recognizes the state of vectors corresponding to individual sub-problems, thereby generating weight vectors for each sub-problem while anticipating the computational complexity associated with each. More recently, Qi et al. [7] suggested an innovative two-stage strategy to fine-tune the generation of WCs.
(2) Decomposition-based methods encompass the Weighted Sum, Tchebycheff, and Penalty-based Boundary Intersection approaches. Given the unique attributes of diverse optimization problems, traditional decomposition methods necessitate specific enhancements. Sato [8] put forth the innovative inverted Penalty-based Boundary Intersection method. Jiang et al. [9] introduced two functions, MSF and PBSF, to dynamically adjust the extent of the decomposition region. Liu et al. [10], meanwhile, integrated the MOEA/D framework to tackle decomposition problems. This method not only dynamically adjusts the region’s size but also significantly boosts the diversity of the algorithms.
(3) The reorganization strategy primarily aims to enhance parent selection and devise novel methods for generating offspring, tailored to the characteristics of the evolutionary process. Wang et al. [11] seamlessly integrated evolutionary operators to bolster the effectiveness and diversity of the original method. Ke et al. [12] harmoniously united MOEA/D with ACO, leveraging the strengths of both global and local information to optimize a suite of MO sub-problems.
(4) The replacement strategy is designed to mitigate the risk of losing population diversity when a new solution supplants an existing one. Typically, algorithms strike a balance between diversity and convergence by imposing constraints on the replacement scope of new solutions and meticulously selecting promising candidates to pair with their corresponding sub-problems. As an illustration, Li et al. [13] seamlessly integrated the NDS into the MOEA/D framework, adopting a Pareto dominance-based replacement strategy.
(5) This strategy allocates computational resources to the more challenging sub-problems by evaluating the difficulty of each sub-problem’s solution, thereby enhancing the discovery of superior solutions while optimizing resource utilization. Cai et al. [14] utilize both the NDS and decomposition-based methods to maintain a steady count of internal and external populations, determining the probability of resource allocation to each sub-problem based on the number of individuals within it.
(2)
Indicator-based MOEAs
The fundamental principle of indicator-based MOEAs is to sieve through individuals, selecting those suitable for continued iteration based on specific performance indicators, thereby guiding the population towards evolving higher-quality solutions. For instance, the HypE algorithm [15] employs the hypervolume (HV) metric to filter individuals, whereas the MaOEA/IGD algorithm [16] utilizes the Inverted Generational Distance (IGD) metric for similar purposes. Although the HV metric is widely used in low-dimensional problems, its computational complexity increases dramatically with the number of objectives and iterations. Consequently, there is a pressing need for algorithms capable of swiftly computing HV metrics. However, significant advancements in reducing HV complexity remain elusive.
(3)
Domination-based MOEAs
In MO problems characterized by conflicting objectives, the term “dominance” is utilized to assess solution quality. The core concept of dominance-based MOEAs involves utilizing dominance relationships to stratify the population into distinct levels, where individuals within the same level exhibit no dominance over each other, thereby necessitating the selection of individuals through various mechanisms. Research on tackling MO problems through non-dominated sorting is pivotal for subsequent investigations and can be broadly classified into three primary areas: (1) dominance relations, (2) density estimation techniques, and (3) individual updating strategies.
(1) Traditional dominance relations exhibit varied performance across diverse problems, prompting the proposal of numerous refined dominance relations aimed at enhancing the efficiency of MO problem solutions. Yang et al. [17] introduced grid dominance, which boosts the dominance probability of individuals by segmenting the objective space into hypergrids. Adaptive fuzzy domination [18] redefined a robust dominance relation by incorporating fuzzy logic principles. Yuan and Elarbi, alongside others [19], integrated the concept of weight vectors from decomposition algorithms into dominance-based MOEAs, presenting hybrid dominance and RP [20] domination. Tian et al. [21] unveiled a novel non-dominated sorting method based on the target vector perspective, termed SDR dominance. Zitzler et al. [22] introduced the SPEAII algorithm, while Chalabi et al. [23] revealed a GMOMPA framework grounded in epsilon dominance.
(2) These strategies are utilized to select individuals within the same classification, ensuring that solutions with the greatest potential for development are preserved in each iteration. However, traditional density estimation strategies, such as the CD method, run the risk of allowing individuals situated further from the PF to persist, potentially compromising convergence. To overcome these limitations, Adra et al. [24] employed a Diversity Management operator to bolster algorithm diversity, striking a balance between convergence and diversity. Li et al. [25] introduced a novel Solution Distribution Estimation approach to fine-tune solution positions in the objective space, further enhancing solution performance.
(3) This area has witnessed numerous proposals and enhancements in MO optimization algorithms. Many researchers have substituted the genetic algorithms in NSGA-II with newer swarm intelligence algorithms that exhibit superior solving capabilities, embedding these algorithms into MO frameworks to augment both convergence and diversity. For instance, Hancer et al. [26] innovated a Pareto-based MO Artificial Bee Colony (MOABC) algorithm, incorporating an archive mechanism for characteristic selection. Abdallahi et al. [27] introduced a MO Efficient Symbiotic Organism Search (MOESOS) algorithm, applied to task scheduling optimization. Houssein et al. [28] developed a MOSMA that integrates the SMA with Pareto dominance and crowding distance. Khishe et al. [29] presented an MOChOA, demonstrating a superior capacity to evade local optima across diverse benchmark problems.
MO optimization algorithms stand at the vanguard of computational intelligence, diligently seeking solutions that concurrently optimize multiple, often conflicting objectives. Despite their proven effectiveness, there remain several avenues for further refinement. One critical area is enhancing their convergence properties to guarantee that the discovered solutions are closer to the genuine Pareto front. Another essential aspect is improving the diversity of solutions, thereby ensuring a more extensive and comprehensive coverage of the solution space.
In 2022, Malik Braik et al. [30] introduced the White Shark Optimizer (WSO), a novel meta-heuristic approach inspired by the feeding behavior of white sharks. This method encompasses four distinct phases: advancing towards prey, surrounding optimal targets, converging on sharks, and mimicking fish schooling behavior. While the WSO exhibits robust global search capabilities, it also encounters challenges, including limited computational accuracy and a propensity for premature convergence.
To facilitate MO intelligent optimization, an approach for solving MO optimization problems is presented by integrating MO thinking with the WSO. The key highlights of this article are as follows:
  • Introduces a MONSWSO solution framework based on NSGA-II. The WSO boasts impressive exploration and development capabilities. By integrating the WSO with an elite non-dominated sorting (NDS) mechanism and a Pareto archive, the MONSWSO was developed. This novel method exhibits enhanced robustness and more efficient search capabilities.
  • By incorporating a chaotic reverse initialization learning strategy, we generate a more diverse initialization population. Additionally, an adaptive evolution design is introduced to enhance local exploitation capabilities. Furthermore, a hybrid escape energy vortex fish aggregation strategy is utilized to promote the exploration of potential regions.
  • Through a series of case studies with varying characteristics, including 23 MO benchmark functions and 4 MO engineering optimization problems, the performance of MONSWSO is rigorously verified through the analysis of four key metrics. A practical MO optimization example, the optimal setup of an underpass tunnel above a pit, is presented to demonstrate the reliability of MONSWSO in tackling real-world problems.
The analytical conclusions affirm that the algorithm excels in handling a wide range of benchmarking and Multi-Objective (MO) engineering problems, achieving a Pareto Front (PF) with superior agglomeration and diversity. Comparative results highlight that MONSWSO demonstrates advantages in the majority of case studies and tends to surpass other comparative methods in performance.
Section 2 provides the definitions pertinent to this study. Section 3 elaborates on the WSO and the proposed MONSWSO in detail. Section 4 presents the results of MONSWSO on the benchmark problems, followed by the calculation results and analysis of MO engineering problems solved with MONSWSO. Finally, the paper concludes with a summary. The abbreviations used in this article, together with their meanings, are listed in Table A1 in Appendix A.

2. Related Concepts

For convenience of presentation, this section introduces the relevant concepts and techniques needed in this paper.

2.1. MO Optimization

A standard MO optimization problem includes at least two conflicting objectives that must be optimized simultaneously. A typical MO minimization problem can be expressed as Equation (1):
$$\min\; F(x) = \big(F_1(x), F_2(x), \ldots, F_m(x)\big) \quad \text{s.t.} \quad GB_k(x) \le 0,\; k = 1, 2, \ldots, K; \quad HE_j(x) = 0,\; j = 1, 2, \ldots, J; \quad lo_i \le x_i \le up_i,\; i = 1, 2, \ldots, D \tag{1}$$
where $x$ denotes a decision variable of dimension $D$, $F(x)$ denotes an objective vector of dimension $m$, $GB_k(x)$ denotes the $k$-th inequality constraint, $HE_j(x)$ denotes the $j$-th equality constraint, and $lo_i$ and $up_i$ denote the lower and upper bounds of the $i$-th decision variable.
For the MO problem, the prevailing research approach currently involves acquiring the POS set through Pareto domination and illustrating its distribution via the PF, which is defined as follows:
Definition 1.
Pareto domination. Let $x_A$ and $x_B$ be two solutions of the MO problem satisfying the constraints. If $F_i(x_A) \le F_i(x_B)$ for all objectives and $F_j(x_A) < F_j(x_B)$ for at least one objective, then $x_A$ is said to Pareto-dominate $x_B$, denoted $x_A \prec x_B$.
Figure 2 provides an intuitive depiction of the pertinent concepts in the bi-objective scenario. In the target space comprising imaginary and real points, for any imaginary point C associated with a particular solution x C , at least one real point A corresponding to a feasible solution x A can be identified such that x A x C . Furthermore, for any two real points A and B linked to their respective solutions x A , x B , neither dominates the other. Consequently, the curves traced by connecting these real points constitute the PF.
Definition 2.
POS. A feasible solution $x^*$ of the MO problem is a Pareto optimal solution (POS) if there exists no feasible solution $x$ such that $x \prec x^*$. The set of all such solutions is called the POS set, and the collection of objective vectors formed by its elements is called the PF, i.e., $PF = \{F(x^*) \mid x^* \in \text{the POS set}\}$.
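To make Definition 1 concrete, a minimal dominance check for a minimization problem can be sketched as follows (the function name and the plain-tuple representation of objective vectors are our own illustrative choices, not part of the paper):

```python
def dominates(fa, fb):
    """Return True if objective vector fa Pareto-dominates fb (minimization).

    Per Definition 1: fa is no worse than fb in every objective and
    strictly better in at least one objective."""
    no_worse = all(a <= b for a, b in zip(fa, fb))
    strictly_better = any(a < b for a, b in zip(fa, fb))
    return no_worse and strictly_better

# Example in the bi-objective plane: A = (1, 2) dominates C = (2, 3),
# while A and B = (2, 1) are mutually non-dominated, as in Figure 2.
print(dominates((1, 2), (2, 3)))  # True
print(dominates((1, 2), (2, 1)))  # False
print(dominates((2, 1), (1, 2)))  # False
```

The two mutually non-dominated points correspond to the real points on the PF in Figure 2: neither can replace the other without worsening one objective.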

2.2. NDS and CD

This section elucidates the fast NDS method [31] utilized for hierarchical classification of individuals, along with the CD calculation employed for merit-based selection within a rank. Given a group $G$ of $N$ individuals, the POS set within this group is denoted $G_1$. Initially, the individuals in $G_1$ are placed in the first layer $L_1$. Subsequently, the individuals within the POS set of $G \setminus G_1$ are ranked in the second layer $L_2$, and this process continues iteratively, with the remaining non-dominated solutions being ranked in subsequent layers up to the $t$-th layer $L_t$, thereby completing the ranking of all individuals in $G$. An intuitive graphical representation of the NDS process is depicted in Figure 3.
For NDS populations, solutions occupying higher ranks are deemed superior to those in lower ranks. When it comes to comparing the superiority of solutions among individuals within the same rank, numerous literature sources have adopted the CD as a criterion. This metric assesses the relative superiority of two individuals within the same rank, with a larger CD indicating a better solution. The CD of the τ - th solution x τ i in L i is evaluated as shown in Equation (2)
$$d_{L_i}(\tau) = \sum_{j=1}^{m} \frac{\left| f_j^{\tau+1} - f_j^{\tau-1} \right|}{f_j^{\max} - f_j^{\min}}, \quad \tau = 1, 2, \ldots, |G_i| \tag{2}$$
where $f_j^{\max}$ and $f_j^{\min}$ are the maximum and minimum values of the $j$-th objective in $G$, and $f_j^{\tau-1}$ and $f_j^{\tau+1}$ denote the $j$-th objective values of the two solutions neighboring $F(x_\tau^i)$ in the objective space.
A larger value of $d_{L_i}(\tau)$ indicates that there are fewer solutions in the vicinity of the point in the objective space corresponding to $x_\tau^i$, suggesting that positions farther away from $x_\tau^i$ in the objective space are not easily replaceable. Consequently, such positions hold greater significance and are thus more optimal. Figure 4 provides a visualization of the crowding distance, which demonstrates that point B has a greater crowding distance than point E. Therefore, point B is considered preferable.
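Equation (2) can be sketched in code as follows. The helper operates on a single non-dominated layer; assigning infinite distance to the two boundary solutions of each objective follows standard NSGA-II practice, which is our assumption since the text does not state how boundary points are handled:

```python
def crowding_distance(front):
    """Crowding distance of each solution in one non-dominated layer L_i.

    `front` is a list of m-dimensional objective vectors. For each objective,
    solutions are sorted and each interior solution accumulates the normalized
    gap between its two neighbors, as in Equation (2)."""
    n, m = len(front), len(front[0])
    d = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda t: front[t][j])
        f_min, f_max = front[order[0]][j], front[order[-1]][j]
        # Boundary solutions are always kept (assumed NSGA-II convention).
        d[order[0]] = d[order[-1]] = float("inf")
        if f_max == f_min:
            continue
        for pos in range(1, n - 1):
            tau = order[pos]
            d[tau] += (front[order[pos + 1]][j] - front[order[pos - 1]][j]) / (f_max - f_min)
    return d
```

For the four points (0, 4), (1, 2), (2, 1), (4, 0), each interior point receives 0.5 + 0.75 = 1.25 while the boundary points get infinity, so the interior points would be the first candidates for truncation.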

2.3. Elite Retention Strategies

To accelerate the rate of population evolution and elevate the quality of the final PF, the newly formed group from each iteration is merged with the existing group, and the combined population is then filtered. Initially, this merged population undergoes fast NDS to establish its hierarchical structure. Following this, the CD of each solution is calculated. Based on this CD information, a selection strategy retains $N$ individuals, where $N$ signifies the designated population size. These carefully selected individuals continue to evolve until the algorithm terminates. Figure 5 provides a clear illustration of the elite retention strategy in action.
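Combining the layering of Section 2.2 with the CD tie-break, the elite retention step of Figure 5 can be sketched as follows. This is a simple quadratic-time sketch rather than the bookkeeping-efficient fast NDS of [31], and all function names are ours:

```python
import math

def dominates(fa, fb):
    # fa Pareto-dominates fb (minimization): no worse everywhere, better somewhere.
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def fast_nds(objs):
    """Split the indices of `objs` into non-dominated layers L_1, L_2, ..."""
    remaining = list(range(len(objs)))
    layers = []
    while remaining:
        layer = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        layers.append(layer)
        remaining = [i for i in remaining if i not in layer]
    return layers

def crowding(front):
    # Crowding distance of each objective vector within one layer (Equation (2)).
    n, m = len(front), len(front[0])
    d = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda t: front[t][j])
        lo, hi = front[order[0]][j], front[order[-1]][j]
        d[order[0]] = d[order[-1]] = math.inf
        if hi > lo:
            for p in range(1, n - 1):
                d[order[p]] += (front[order[p + 1]][j] - front[order[p - 1]][j]) / (hi - lo)
    return d

def elite_select(objs, N):
    """Keep N individuals: admit whole layers, truncate the last one by CD."""
    chosen = []
    for layer in fast_nds(objs):
        if len(chosen) + len(layer) <= N:
            chosen.extend(layer)
        else:
            cd = crowding([objs[i] for i in layer])
            order = sorted(range(len(layer)), key=lambda p: -cd[p])
            chosen.extend(layer[p] for p in order[: N - len(chosen)])
            break
    return chosen
```

In MONSWSO this step would be applied to the merged parent-plus-offspring population of size $2N$ to retain the $N$ survivors.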

3. Multi-Objective White Shark Algorithm

To enhance the search capabilities of the WSO algorithm, this section introduces an enhanced version of WSO, built upon the foundation of the basic WSO algorithm and a thorough analysis of its shortcomings. By integrating this improved WSO into an MO framework, we propose a novel MO optimization algorithm, MONSWSO.

3.1. WSO

In 2022, Malik Braik and his colleagues introduced a novel meta-heuristic approach named WSO, which mimics the feeding behavior of white sharks on prey across four distinct phases: advancing toward the prey, surrounding the most promising target, converging toward fellow sharks, and exhibiting schooling behavior. This method boasts a potent global search capability.

3.1.1. Move Towards the Quarry

When chasing prey, the white shark moves towards the prey in a fluctuating manner, as shown in Equation (3):
$$v_{k+1}^i = \mu \left[ v_k^i + p_1 c_1 \left( X_{gbest}^k - X_k^i \right) + p_2 c_2 \left( X_{best}^{v_i} - X_k^i \right) \right] \tag{3}$$
where $i = 1, \ldots, N$; $X_{gbest}^k$ is the best solution obtained so far; $X_k^i$ is the $i$-th solution in the $k$-th iteration; $X_{best}^{v_i}$ denotes the best position visited by the $i$-th individual in the population; $c_1$ and $c_2$ are random numbers in $[0, 1]$; and $\mu = 0.352$. $p_1$ and $p_2$ denote the influence of $X_{gbest}^k$ and $X_{best}^{v_i}$, respectively, and are calculated as follows:
$$p_1 = p_{ma} + (p_{ma} - p_{mi}) \, e^{-(4k/K_{\max})^2} \tag{4}$$
$$p_2 = p_{mi} + (p_{ma} - p_{mi}) \, e^{-(4k/K_{\max})^2} \tag{5}$$
where $k$ and $K_{\max}$ denote the current and the maximum number of iterations, and $p_{mi}$ and $p_{ma}$ denote the minimum and maximum speeds, with $p_{mi} = 0.5$ and $p_{ma} = 1.5$.

3.1.2. Surrounding the Best Prey

White sharks gather information through their hearing and sense of smell, continuously adjusting their positions to seek out potential prey. The updates to their positions are governed by an adaptive switching probability $mv_k$, which increases as the iterations proceed, specifically:
$$X_{k+1}^{i,1} = \begin{cases} X_k^i, & rand < mv_k \\ X_k^i + v_{k+1}^i / h, & rand \ge mv_k \end{cases} \tag{6}$$
where $h = 0.8992$ is the frequency of wave motion, and $mv_k$ is expressed as
$$mv_k = \frac{1}{6.25 + e^{0.01 \left( K_{\max}/2 - k \right)}} \tag{7}$$

3.1.3. Moving Closer to the Best Sharks

During foraging, white sharks utilize both visual and olfactory cues to zero in on the most promising individuals, thereby enhancing their chances of capturing prey. They subsequently update their positions accordingly.
$$X_{k+1}^{i,2} = \begin{cases} X_{gbest}^k + r_1 D_i \, \mathrm{sgn}(r_2 - 0.5), & r_3 < ss & \text{(8a)} \\ X_{k+1}^{i,1}, & \text{otherwise} & \text{(8b)} \end{cases}$$
where $\mathrm{sgn}(r_2 - 0.5)$ is a sign function and $D_i$ is the gap between the shark and the optimal value, with
$$D_i = r_4 \left| X_{gbest}^k - X_{k+1}^{i,1} \right| \tag{9}$$
$$ss = \left| 1 - e^{-0.0005 \, k / K_{\max}} \right| \tag{10}$$

3.1.4. Cluster Behavior

For the position updated through Equations (6) and (8), the white shark determines its final location based on the clustering behavior.
$$X_{k+1}^i = \frac{X_{k+1}^{i,1} + X_{k+1}^{i,2}}{2 \, r_5} \tag{11}$$
where $r_5 \in [0, 1]$. The population is updated cyclically by Equations (3)–(11) to obtain the ideal solution to the problem.
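To make the four phases concrete, one WSO iteration for a single shark can be sketched as follows. The constants mirror those stated above ($\mu = 0.352$, $h = 0.8992$, $p_{mi} = 0.5$, $p_{ma} = 1.5$); the NumPy phrasing, the final box-clipping, and the small epsilon guarding the schooling division are our own assumptions rather than part of the original method:

```python
import numpy as np

rng = np.random.default_rng(0)

def wso_step(x, v, x_gbest, x_bestv, k, K_max, lo, hi,
             mu=0.352, p_mi=0.5, p_ma=1.5, h=0.8992):
    # Equations (4)-(5): iteration-dependent speed factors.
    decay = np.exp(-(4 * k / K_max) ** 2)
    p1 = p_ma + (p_ma - p_mi) * decay
    p2 = p_mi + (p_ma - p_mi) * decay
    # Equation (3): wavy movement towards the prey.
    c1, c2 = rng.random(2)
    v_new = mu * (v + p1 * c1 * (x_gbest - x) + p2 * c2 * (x_bestv - x))
    # Equations (6)-(7): either hold position or follow the wave motion.
    mv = 1.0 / (6.25 + np.exp(0.01 * (K_max / 2 - k)))
    x1 = x if rng.random() < mv else x + v_new / h
    # Equations (8)-(10): move closer to the best shark with probability ss.
    ss = abs(1 - np.exp(-0.0005 * k / K_max))
    if rng.random() < ss:
        r1, r2, r4 = rng.random(3)
        d = r4 * np.abs(x_gbest - x1)
        x2 = x_gbest + r1 * d * np.sign(r2 - 0.5)
    else:
        x2 = x1
    # Equation (11): schooling behaviour; the epsilon and clip are our guards.
    r5 = rng.random()
    x_new = (x1 + x2) / (2 * r5 + 1e-12)
    return np.clip(x_new, lo, hi), v_new
```

Applying this update to every shark in the population and looping over $k$ reproduces the cyclic update described above.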

3.2. MONSWSO

In the realm of MO optimization, researchers strive to balance both solution diversity and algorithm convergence. A uniform distribution of solutions is crucial in maintaining diversity within an MO context. Next, we introduce an enhanced version of WSO, named MONSWSO, seamlessly integrated into the framework of MO algorithms.

3.2.1. Improved WSO

WSO employs a stochastic initial value strategy, resulting in a weak initialization population distribution that hinders the algorithm's otherwise efficient search capabilities. Moreover, when Equation (8) is utilized for exploitation based on the visual and olfactory intensity $ss$, an excessively small $ss$ value leads to underdevelopment in WSO and a sluggish convergence rate. To address these two shortcomings, the MONSWSO algorithm is introduced, leveraging chaotic reverse initialization, adaptive evolution, and the vortex effect to enhance WSO's performance. The framework of this study mirrors that of NSGAII, with the main process comprising NDS, CD assignment, and elite sorting. In this algorithm, a more diverse initialized population is generated through a chaotic reverse initialization learning strategy, while an adaptive evolution strategy is introduced to facilitate local exploitation of potential areas. Furthermore, a hybrid escape energy mechanism and vortex fish aggregation strategy are employed to bolster the exploitation of potential regions and prevent the algorithm from falling into local optima.
(1)
Chaotic reverse initialization
The inherent convenience and stochastic nature of chaos enable it to generate more homogeneous solutions [32], while reverse learning [33] techniques can produce potential solutions that are closer to the optimal one. By integrating these two concepts into the initialization process of WSO, a high-quality initial population can be obtained. Tent Chaos, in particular, exhibits excellent uniformity and swift iteration speed. Its evolution process is as follows:
$$C_{s+1} = \begin{cases} 2 C_s, & C_s < 0.5 \\ 2 (1 - C_s), & C_s \ge 0.5 \end{cases} \tag{12}$$
For an optimization problem involving a population size of $N$ and a decision space of $D$ dimensions, the Tent mapping is initially utilized to generate $N \times D$ points, which are then arranged into a chaotic matrix $C = [C_{i,d}]_{N \times D}$. Subsequently, the $d$-th dimension of the $i$-th individual within the population is determined by:
$$X_0^{i,d} = lo_d + C_{i,d} \left( up_d - lo_d \right), \quad i = 1, 2, \ldots, N, \; d = 1, 2, \ldots, D \tag{13}$$
$$OX_0^{i,d} = lo_d + up_d - X_0^{i,d} \tag{14}$$
For the chaotic population $G_0 = [X_0^{i,d}]_{N \times D}$ and the reverse population $OG_0 = [OX_0^{i,d}]_{N \times D}$, $N$ individuals are selected from the combined $2N$ candidates to form the starting group according to the selection strategy.
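The chaotic reverse initialization above can be sketched as follows. The function name is ours, and the restart on degenerate Tent values is our guard: in floating-point arithmetic the Tent iterates eventually collapse to zero, which the idealized map does not do:

```python
import numpy as np

def tent_chaotic_reverse_init(N, D, lo, hi, c0=0.37):
    """Chaotic reverse initialization sketch (Tent map plus reverse learning).

    Iterates the Tent map to fill an N-by-D chaotic matrix, maps it into the
    box [lo, hi], forms the reverse population, and returns the 2N stacked
    candidates; the caller then keeps the N best by elite NDS."""
    rng = np.random.default_rng(1)
    c = np.empty(N * D)
    c[0] = c0
    for s in range(1, N * D):
        prev = c[s - 1]
        nxt = 2 * prev if prev < 0.5 else 2 * (1 - prev)
        # Restart on 0 or 1 to avoid floating-point collapse (our addition).
        c[s] = nxt if 0.0 < nxt < 1.0 else rng.random()
    C = c.reshape(N, D)
    X0 = lo + C * (hi - lo)       # map Tent values into [lo, hi]
    OX0 = lo + hi - X0            # reverse (opposition) points
    return np.vstack([X0, OX0])
```

For scalar bounds the reverse half mirrors the chaotic half about the box center, so each candidate and its opposite always sum to $lo + hi$.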
(2)
Adaptive evolution and vortex effects
In WSO, the exploitation phase relies heavily on the probabilistic factor $ss$. When $ss$ is too small, the algorithm adopts fewer update modes during the execution of Equation (8a), leading to insufficient exploitation ability and a slowdown in convergence speed. To address this, we incorporate a parameter $E$ defined by an exponential decay function [34] with stochastic properties. This allows us to design diverse update modes for the exploitation phase, thereby balancing the algorithm's exploration and exploitation capabilities. The defining equation for $E_k$ [35,36,37] is as follows:
$$E_k = 4 E_r r_6 \, e^{-1.5 \, k / K_{\max}} \tag{15}$$
where $E_r$ is a randomly given value on $[-1, 1]$ and $r_6$ is a random number on $[0, 1]$.
When $E_k > 1$, the prey possesses greater escape energy and is situated farther from the target; incorporating the elite-guided forgetting mechanism can help the group explore new directions, enhancing the global leading capability during the exploitation process. At this point, Equation (8) is updated to:
$$X_{k+1}^i = \begin{cases} \omega \ln(r_7) \, X_{gbest}^k + r_1 D_i \, \mathrm{sgn}(r_2 - 0.5), & r_3 < ss \\ X_{k+1}^{i,1}, & r_3 \ge ss \end{cases} \tag{16}$$
where the adaptive evolution factor is given in the following equation:
$$\omega = e^{-0.8 \ln \left( 1 + 10 \, k / K_{\max} \right)} \tag{17}$$
When $E_k \le 1$, the prey is close to the target, and the use of random wandering and vortex effects helps the agent jump out of local troughs in order to obtain a better solution, specifically:
$$X_{k+1}^{i,3} = X_{gbest}^k + 0.5 \left( 1 - \frac{k}{K_{\max}} \right)^{2k/K_{\max}} U_L \otimes \left( X_{gbest}^k - X_{k+1}^{i,1} \right) \tag{18}$$
where $U_L$ is a vector drawn from the Levy [38] distribution and $\otimes$ denotes the point-to-point multiplication of two vectors.
The vortex effect is updated as:
$$X_{k+1}^i = \begin{cases} X_{k+1}^{i,3} + \left( 1 - \frac{k}{K_{\max}} \right)^{2k/K_{\max}} \left( l + r_9 (u - l) \right) U_i, & r_8 \le 0.2 \\ X_{k+1}^{i,3} + 0.2 (1 - r_{10}) + r_{11} \left( X_{k+1}^{rt_1} - X_{k+1}^{rt_2} \right), & r_8 > 0.2 \end{cases} \tag{19}$$
where $X_{k+1}^{rt_1}$ and $X_{k+1}^{rt_2}$ are two random solutions of the group composed of the $X_{k+1}^{i,3}$, and $U_i$ is a selection function, specified as:
$$U_i = \begin{cases} 0, & r_{10} \le 0.2 \\ 1, & r_{10} > 0.2 \end{cases} \tag{20}$$
where $r_8$, $r_9$, $r_{10}$, and $r_{11}$ are random numbers on $[0, 1]$.
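A small sketch of the escape-energy schedule and the adaptive evolution factor may help illustrate the switching behaviour; the function names are ours, and the constants follow the definitions of $E_k$ and $\omega$ given above:

```python
import numpy as np

rng = np.random.default_rng(42)

def escape_energy(k, K_max):
    """Randomized escape energy E_k: its magnitude decays exponentially over
    the run, so late iterations increasingly favour the vortex branch."""
    E_r = rng.uniform(-1.0, 1.0)   # E_r on [-1, 1]
    r6 = rng.random()              # r6 on [0, 1]
    return 4.0 * E_r * r6 * np.exp(-1.5 * k / K_max)

def adaptive_factor(k, K_max):
    """Adaptive evolution factor: shrinks the elite-guided step from 1 at the
    start of the run towards roughly 0.15 at the final iteration."""
    return np.exp(-0.8 * np.log(1.0 + 10.0 * k / K_max))

def choose_branch(k, K_max):
    # E_k > 1: elite-guided forgetting branch; otherwise Levy walk plus vortex.
    return "elite" if escape_energy(k, K_max) > 1.0 else "vortex"
```

At $k = K_{\max}$ the magnitude of $E_k$ is bounded by $4 e^{-1.5} \approx 0.89 < 1$, so the final iterations always take the random-wandering and vortex branch, which matches the intended shift from exploration to local refinement.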

3.2.2. Multi-Objective WSO Algorithm

The WSO algorithm, enhanced with chaotic reverse initialization, adaptive evolution, and vortex effect improvements, significantly boosts optimization performance by utilizing a high-quality initialized population and an evolutionary strategy that balances exploration and exploitation. By integrating it with the MO framework, we obtain the proposed WSO-based MO algorithm, MONSWSO.
The cornerstone of MO intelligent algorithms lies in the comparison of solution merits and the distribution of solution diversity. The MONSWSO algorithm enhances both the distribution and quality of solutions, ensuring a superior performance in addressing MO optimization problems.
  • The most optimal individual $X_{gbest}^k$ of the population $G_k$ in generation $k$ is selected by applying NDS to $G_k$ and randomly selecting one of the individuals in the first layer $L_1$ as $X_{gbest}^k$.
  • The initial population is selected by merging the chaotic initialized population and the reverse population, selecting $N$ individuals from the elite non-dominated sorting of the resulting $2N$ individuals to form the initial population $G_0$, and recording the optimal individual $X_{gbest}^0$.
The iterative process of MONSWSO is listed in Algorithm 1.
Algorithm 1: The iterative process of MONSWSO
Input: N , D , K max , G 0
Output: G K
1:     Select $G_0$, $X_{gbest}^0$ according to Equations (12)–(14)
2:     While k < K max  do
3:     Update pop 1 = G k
4:     Update m v k , s s , E k , ω k
5:     For i = 1   t o   N
6:       Use Equations (3)–(6) to renew solutions
7:     End for
8:     For i = 1   t o   N
9:       If E k > 1
10:   Update individuals according to Equation (16)
11:  Else
12:   Update individuals according to Equations (18) and (19)
13:  End if
14: End for
15: Combine G k and pop 1
16: Sort the combined group with the elitist NDS and find N excellent individuals
17: k = k + 1
18: End while
19: Obtain the optimal population
MONSWSO employs a chaotic reverse initialization strategy to enhance the uniformity of population distribution and the adaptability of solutions. The approach of merging groups and selecting the final updated population through elite NDS and diversity retains high-quality solutions while bolstering solution diversity.
Furthermore, MONSWSO incorporates an energy switching strategy and selects an adaptive evolutionary path, enabling the group to swiftly converge towards the PF of the problem. This is facilitated by random wandering and vortex effects, which augment the algorithm’s exploration of potential solution regions.

4. Numerical Simulations

The superiority of MONSWSO is demonstrated through three representative types of MO problems: 23 well-known MO functions, 4 engineering problems, and the optimization challenge of subway tunnel pit excavation.

4.1. Experimental Setting

Table 1 details the 23 MO problems, while Table 2 outlines the characteristics of the 4 engineering design problems. In the experiments, five leading and advanced multi-objective algorithms (NSGAII, PESAII, MOPSO, MOALO, and MOGWO) are selected for comparison on the classical test functions. For the engineering examples, NSGAII, PESAII, MOPSO, IBEA, and SMPSO are chosen, with parameter settings $N = 300$, $K_{\max} = 300$ consistent with their original literature. The algorithms are independently run 10 times, and the mean (M) and standard deviation (Sd) of the comparison indices are calculated for analysis. Four comparison metrics, IGD [39], Spacing, Spread, and HV [40], are used to measure the advantage of the algorithms.

4.2. Multi-Target Testing Experiments

This subsection presents the experimental results of MONSWSO on 11 two-objective test problems and 12 three-objective test problems and analyzes the superiority of MONSWSO from a statistical perspective.

4.2.1. The Two-Objective Test Problem

Table 3 presents the M and Sd of all metrics and test results for MONSWSO, NSGAII, PESAII [41], MOPSO [42], MOALO [43], and MOGWO [44] on the two-objective benchmark tests. Bold values indicate the best results. In terms of convergence accuracy, MONSWSO's results surpass the comparison algorithms on the majority of test functions, with the exception of DEB1, FON2, and LAU, where it performs slightly worse than NSGAII. This indicates that MONSWSO possesses exceptional convergence capabilities.
The Spacing index results clearly indicate that our algorithm outperforms all other comparative algorithms across the entire spectrum of test functions, with the sole exception of ZDT6, where its performance is marginally inferior to NSGA-II. This underscores the exceptional uniformity of the solutions derived by MONSWSO. Furthermore, the Spread metric attests to MONSWSO’s outstanding performance across all test functions, highlighting its impressive breadth.
Regarding the HV metrics, while MONSWSO’s results for ZDT1 are inferior to NSGAII and those for ZDT3 lag behind PESAII, MOALO, and MOGWO, it nonetheless achieves excellent results on all other functions. This demonstrates that the algorithm possesses strong overall performance.
The Wilcoxon rank-sum test was utilized to evaluate the differences between MONSWSO and the other algorithms on the test problems for the IGD, Spacing, Spread, and HV metrics. The symbol ‘−’ signifies that MONSWSO surpasses the comparative algorithm. For the IGD metric, all values diverge significantly, except in the cases where NSGA-II and MOALO performed better than MONSWSO. For the Spacing metric, all values differ significantly from those of the proposed algorithm, except that PESAII’s result on ZDT6 and MOPSO’s result on ZDT4 showed no significant difference from MONSWSO’s. For the Spread metric, only the NSGA-II results on ZDT4 and ZDT6 did not differ significantly from MONSWSO’s, while all other results exhibited significant differences. For the HV metric, the only non-significant difference was between MONSWSO and PESAII; all other algorithms differed significantly from the proposed algorithm.
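The pairwise comparisons above can be reproduced by applying a rank-sum test to the 10 independent runs of each pair of algorithms. The sketch below is a minimal, self-contained implementation using the large-sample normal approximation (no tie-variance correction); the function name and the samples in the usage note are illustrative, not taken from the paper.

```python
import math

def ranksum_test(a, b):
    """Two-sided Wilcoxon rank-sum test (normal approximation).
    Returns the z statistic and the two-sided p-value."""
    combined = sorted((v, i) for i, v in enumerate(list(a) + list(b)))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):              # assign average ranks over ties
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1             # 1-based average rank
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                   # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

With 10 runs per algorithm, |z| > 1.96 corresponds roughly to significance at the 0.05 level under this approximation.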
Figure 6 compares the PF obtained by MONSWSO on the two-objective test functions with the true PF. It demonstrates that MONSWSO can swiftly and efficiently approximate the true PF of each problem.

4.2.2. Three-Objective Test Problems

Table 4 presents the M and Sd of the four metrics, along with the test results obtained on the three-objective problems; bold values indicate the best result.
In terms of convergence accuracy, MONSWSO performs less effectively than NSGAII and PESAII on WFG4, WFG5, WFG7, and WFG8. However, MONSWSO’s results surpass the comparative algorithms on the other test functions, indicating that MONSWSO possesses good convergence properties.
Regarding the Spacing metric, MONSWSO’s result on DTLZ4 falls behind MOPSO, and its result on DTLZ7 lags behind both MOPSO and MOALO. On WFG2 it is inferior to NSGAII, PESAII, MOPSO, MOALO, and MOGWO; on WFG4 it is outperformed by NSGAII, PESAII, and MOGWO; and on WFG7 it is worse only than MOGWO. On the remaining functions, MONSWSO outperforms the other algorithms, demonstrating its good uniformity. Furthermore, according to the Spread metric, MONSWSO achieves the best results across all functions, highlighting its breadth.
From the HV metrics, with the exception of DTLZ6 and WFG6, the metric values for all other test functions fall below those of NSGAII. Furthermore, more than half of these metric values are inferior to those of PESAII. Nevertheless, they generally outperform the remaining comparison algorithms, indicating the potential for enhancing the overall capability of MONSWSO.
Regarding the IGD metric, MONSWSO exhibited significant differences compared to NSGAII, PESAII, and MOGWO on 4, 6, and 11 of the 12 problems, respectively, and demonstrated significant divergence from MOPSO and MOALO on all 12 test functions. In terms of the Spacing metric, MONSWSO showed significant distinctions from NSGAII, PESAII, MOPSO, MOALO, and MOGWO on 7, 8, 5, 10, and 7 test functions, respectively.
When analyzing Spread metrics, NSGAII and PESAII were not notably different from MONSWSO on the DTLZ2 problem, and both were indistinguishable from MOPSO on 5 test functions. As for HV metrics, MONSWSO demonstrated significant variations from NSGAII, PESAII, and MOPSO on 2, 4, and 9 test functions, respectively. Furthermore, MONSWSO exhibited significant differences from both MOALO and MOGWO across all applicable test functions.
Figure 7 compares the PF obtained by MONSWSO on the three-objective test functions with the true PF, illustrating that MONSWSO remains proficient at efficiently locating the true PF even in three-objective scenarios.

4.3. MO Engineering Design Issues

Because MO benchmark problems can only partially represent real-world conditions, validation on real-world examples is more compelling. To assess the utility of MONSWSO, four MO engineering optimization problems were further employed to validate its optimization capabilities. Consistent with the previous section, MONSWSO was independently executed ten times for each instance, and its results were compared with those of NSGAII, MOPSO, PESAII, SMPSO [45], and IBEA [46] using the Spacing metric. In this paper, constraints were handled using the penalty function approach. Figure 8a–d present schematic diagrams of the four engineering optimization problems.
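A minimal sketch of the static penalty approach mentioned above: the total constraint violation, scaled by a penalty factor, is added to every objective, so infeasible solutions are pushed behind feasible ones during selection. The function name, the factor rho, and the toy two-objective problem in the test are illustrative assumptions, not the paper's exact formulation.

```python
def penalized(objectives, constraints, x, rho=1e6):
    """Static penalty: add rho * (total violation) to every objective.
    Each constraint g is assumed to satisfy g(x) <= 0 when feasible."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return [f(x) + rho * violation for f in objectives]
```

A feasible point is returned unchanged, while any infeasible point receives the same large penalty on all objectives, so it cannot dominate a feasible one.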
The cantilever beam design [47] problem represented by Figure 8a is an important MO-constrained problem, in which the length x11 and width x12 of the beam are designed to minimize the weight and deflection of the beam under certain constraints.
The disc brake problem [48] represented by Figure 8b involves four variables, x21, x22, x23, and x24, whose meanings are the same as in the original literature.
The objective of the I-beam design [49] represented by Figure 8c is to minimize the cross-sectional area and the beam deflection, respectively.
The four-bar truss problem [50,51] represented by Figure 8d aims to optimize the cross-sectional areas x41, x42, x43, and x44 of structural members 1–4, thereby minimizing the structural weight and nodal displacements.
Table 5 gives the M and Sd of the Spacing metric for the six algorithms on the four engineering instances; bold values indicate the best result. Figure 9a–d show the optimal PF obtained by each algorithm for the four engineering instances.
Table 5 and Figure 9 indicate that the MONSWSO algorithm outperforms the compared methods in terms of solution homogeneity for the real-world instance, achieving continuous Pareto frontiers with excellent solution coverage.

4.4. Optimization Design for Foundation Pit Above Metro Tunnel

Subway tunnels situated below large pit excavations experience an altered stress field, leading to upward displacement. Significant vertical displacement can have a profound impact on the durability of subway tunnels, not only increasing operational costs but also compromising subway safety. The focal point of subway tunnel pit optimization design [52] lies in devising solutions for the MO optimization problem of minimizing vertical displacement and project cost. This is achieved by taking the number of longitudinal excavation blocks x1, the thickness of the anti-floating plate x2, the length of the anti-floating piles x3, the diameter of the anti-floating piles x4, and the number of unilateral anti-floating piles x5 as decision variables. These variables collectively form the basis of the optimization design model of the pit.
min f1(x) = 17.01 × (−0.0000131849 x1^5 + 0.000469463 x1^4 − 0.00604901 x1^3 + 0.0343602 x1^2 − 0.00992232 x1 + 1.26013) × (−0.01459 x2^5 + 0.0959586 x2^4 − 0.212419 x2^3 + 0.199501 x2^2 − 0.231241 x2 + 1.1189) × (0.00000216142 x3^5 − 0.000113656 x3^4 + 0.00217459 x3^3 − 0.018897 x3^2 + 0.0466485 x3 + 1.32445) × (−0.033511 x4 + 1.03334) × (−0.0141426 x5 + 1.39165)
min f2(x) = (119850 + 4500 − 80.5 x1) x1 + 1638175 x2 + 450 π x4^2 x3 x5 + 500 π x4^2 x5
where f1 ≤ 20 mm, 1 ≤ x1 ≤ 13 with x1 ∈ N, 0 ≤ x2 ≤ 2.8 with x2 ∈ 0.1 × N, 3 ≤ x3 ≤ 21 with x3 ∈ 0.5 × N, 0.6 ≤ x4 ≤ 1.6 with x4 ∈ 0.05 × N, and 6 ≤ x5 ≤ 11 with x5 ∈ N.
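One simple way to honour the discrete variable domains above (e.g., x2 restricted to multiples of 0.1) inside a continuous optimizer is to snap each candidate position onto its admissible grid and clip it into its bounds before evaluating f1 and f2. A minimal sketch; the function and constant names are illustrative:

```python
def snap_to_grid(x, steps, bounds):
    """Round each variable to its admissible step (e.g. x2 to multiples
    of 0.1) and clip the result into its bounds before evaluation."""
    snapped = []
    for v, step, (lo, hi) in zip(x, steps, bounds):
        snapped.append(min(hi, max(lo, round(v / step) * step)))
    return snapped

# Grids and bounds taken from the constraint list above:
STEPS = [1, 0.1, 0.5, 0.05, 1]
BOUNDS = [(1, 13), (0, 2.8), (3, 21), (0.6, 1.6), (6, 11)]
```

For example, a candidate [4.3, 1.27, 10.26, 0.93, 7.6] is mapped to the admissible point [4, 1.3, 10.5, 0.95, 8].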
The results obtained by MONSWSO, NSGAII, PESAII, MOPSO, SMPSO, IBEA, MOALO, and MOGWO are rigorously evaluated using the Spacing and HV indicators. The two solutions with the best objective values and the largest crowding distances, together with their decision-variable values, are listed in Table 6. Table 7 presents the M, Sd, best value (Best), and median value (Mid) of the Spacing and HV indicators for each algorithm.
The PF and the box plots are depicted in Figure 10 and Figure 11, respectively. Notably, Table 7 demonstrates that MONSWSO achieves the best performance on both the Spacing and HV metrics (bold values indicate the best result). Furthermore, Figure 10 and Figure 11 vividly illustrate that MONSWSO provides a satisfactory solution with exceptional uniformity and stability.
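For two objectives, the HV indicator used throughout these comparisons can be computed exactly by sweeping the front in ascending order of the first objective and accumulating the rectangles it dominates with respect to a reference point. A minimal sketch for a minimization problem; the function name and the reference point in the test are illustrative choices:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front relative to a
    reference point ref = (r1, r2); larger is better."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):          # ascending in the first objective
        if f2 < prev_f2:                  # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For instance, the front {(0, 1), (1, 0)} with reference point (2, 2) dominates an area of 3.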

5. Conclusions

The MONSWSO algorithm seamlessly integrates the WSO with NDS and CD to enhance its performance significantly. It employs a chaotic inverse initialization strategy to generate a high-quality initial population, thereby improving both the uniformity of population distribution and the adaptability of solutions. A variable evolution scheme is utilized to bolster the algorithm’s local exploitation capabilities, while stochastic wandering and vortex effects further enhance the exploration of potential regions, enabling a rapid convergence to the problem’s PF. As a novel MO optimization method, MONSWSO demonstrates notable advantages.
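As a minimal illustration of the NDS and CD components referred to above, the sketch below implements the standard definitions of non-dominated filtering and crowding distance for a minimization problem; it is a didactic sketch, not the paper's exact implementation.

```python
def nondominated(points):
    """Indices of the non-dominated points (minimization)."""
    nd = []
    for i, p in enumerate(points):
        dominated = any(all(q[k] <= p[k] for k in range(len(p))) and
                        any(q[k] < p[k] for k in range(len(p)))
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            nd.append(i)
    return nd

def crowding_distance(front):
    """Crowding distance of each solution on one front (larger = less crowded);
    boundary solutions receive infinite distance, as in NSGA-II."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        dist[order[0]] = dist[order[-1]] = float('inf')
        span = front[order[-1]][k] - front[order[0]][k] or 1.0
        for a in range(1, n - 1):
            dist[order[a]] += (front[order[a + 1]][k] -
                               front[order[a - 1]][k]) / span
    return dist
```

Elite selection then keeps solutions from the best fronts first and, within the last admitted front, prefers the largest crowding distances.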
MONSWSO achieves superior uniformity, extensiveness, and comprehensiveness in generating Pareto solutions. The enhancements in uniformity and extensiveness are attributed to the hybrid initialization strategy, energy switching strategy, and adaptive evolutionary approach. The comprehensive performance and solution coverage are further bolstered by non-dominated ordering, crowding distance, and elite selection mechanisms.
When applied to optimize instances, MONSWSO demonstrates a broader range of informative solutions. However, it also has certain limitations; specifically, its performance on two-objective problems outshines that on three-objective problems. Therefore, further refinement of the algorithm is required to address more complex three-objective challenges. The nature of NDS can sometimes lead to a large number of non-dominated solutions generated during iterations, negatively impacting convergence performance. Future research should focus on designing WSO-based MO algorithms that can effectively solve problems with three or more objectives, as well as their application to high-dimensional network optimization problems. Additionally, integrating machine learning techniques to predict and guide the search process has the potential to significantly boost efficiency. Scalability is another critical aspect, as real-world problems often involve a large number of objectives and variables, necessitating robust and scalable algorithms.

Author Contributions

W.G.: Supervision, Methodology, Formal analysis, Writing—original draft, Writing—review & editing. Y.Q.: Software, Data curation, Conceptualization, Visualization, Formal analysis, Writing—original draft. F.D.: Methodology, Formal analysis, Data curation, Writing—review & editing. J.W.: Methodology, Supervision, Data curation, Writing—review & editing. S.L.: Software, Data curation, Formal analysis, Supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China No. 6237621.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data will be made available on request.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Appendix A

Table A1. The meaning of abbreviations.
Abbreviation | Meaning
MO | multi-objective
WSO | White Shark Optimization algorithm
NDS | non-dominated sorting
CD | crowding distance
IGD | inverse generation distance
Spacing | spatial homogeneity
Spread | spatial distribution
HV | hypervolume
PF | Pareto front
MONSWSO | multi-objective White Shark Optimization algorithm
WCs | weight vectors
POS | Pareto optimal solutions

References

  1. Gao, W.; Liu, L.; Wang, Z. A Review of Decomposition-Based Evolutionary Multi-Objective Optimization Algorithms. J. Softw. 2022, 34, 4743–4771. [Google Scholar]
  2. Zhang, Q.F.; Hui, L. MOEA/D: A multi-objective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2008, 11, 712–731. [Google Scholar] [CrossRef]
  3. Tan, Y.Y.; Jiao, Y.C.; Li, H.; Wang, X.-K. MOEA/D+ uniform design: A new version of MOEA/D for optimization problems with many objectives. Comput. Oper. Res. 2013, 40, 1648–1660. [Google Scholar] [CrossRef]
  4. Ma, X.L.; Qi, Y.T.; Li, L.L.; Liu, F.; Jiao, L.; Wu, J. MOEA/D with uniform decomposition measurement for many-objective problems. Soft Comput. 2014, 18, 2541–2564. [Google Scholar] [CrossRef]
  5. Harada, K.; Hiwa, S.; Hiroyasu, T. Adaptive weight vector assignment method for MOEA/D. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27 November–1 December 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–9. [Google Scholar]
  6. Li, Z.X.; He, L.; Chu, Y.J. An improved decomposition multi-objective optimization algorithm with weight vector adaptation strategy. In Proceedings of the 2017 13th International Conference on Semantics, Knowledge and Grids (SKG), Beijing, China, 13–14 August 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 19–24. [Google Scholar]
  7. Qi, Y.T.; Ma, X.L.; Liu, F.; Jiao, L.; Sun, J.; Wu, J. MOEA/D with adaptive weight adjustment. Evol. Comput. 2014, 22, 231–264. [Google Scholar] [CrossRef]
  8. Sato, H. Analysis of inverted PBI and comparison with other scalarizing functions in decomposition based MOEAs. J. Heuristics 2015, 21, 819–849. [Google Scholar] [CrossRef]
  9. Jiang, S.Y.; Yang, S.X.; Wang, Y.; Liu, X. Scalarizing functions in decomposition based multi-objective evolutionary algorithms. IEEE Trans. Evol. Comput. 2017, 22, 296–313. [Google Scholar] [CrossRef]
  10. Liu, H.L.; Gu, F.Q.; Zhan, Q.F. Decomposition of a multi-objective optimization problem into a number of simple multi-objective subproblems. IEEE Trans. Evol. Comput. 2014, 18, 450–455. [Google Scholar] [CrossRef]
  11. Wang, Z.K.; Zhang, Q.F.; Li, H. Balancing convergence and diversity by using two different reproduction operators in MOEA/D: Some preliminary work. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 2849–2854. [Google Scholar]
  12. Ke, L.J.; Zhang, Q.F.; Battiti, R. MOEA/D-ACO: A multi-objective evolutionary algorithm using decomposition and antcolony. IEEE Trans. Cybern. 2013, 43, 1845–1859. [Google Scholar] [CrossRef] [PubMed]
  13. Li, K.; Deb, K.; Zhang, Q.; Kwong, S. An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Trans. Evol. Comput. 2014, 19, 694–716. [Google Scholar] [CrossRef]
  14. Cai, X.Y.; Li, Y.X.; Fan, Z.; Zhang, Q. An external archive guided multi-objective evolutionary algorithm based on decomposition for combinatorial optimization. IEEE Trans. Evol. Comput. 2014, 19, 508–523. [Google Scholar]
  15. Bader, J.; Zitzler, E. HypE: An algorithm for fast hypervolume-based many-objective optimization. Evol. Comput. 2011, 19, 45–76. [Google Scholar] [CrossRef] [PubMed]
  16. Sun, Y.N.; Yen, G.G.; Yi, Z. IGD indicator-based evolutionary algorithm for many-objective optimization problems. IEEE Trans. Evol. Comput. 2018, 23, 173–187. [Google Scholar] [CrossRef]
  17. Yang, S.X.; Li, M.Q.; Liu, X.C.; Zheng, J. A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2013, 17, 721–736. [Google Scholar] [CrossRef]
  18. Yu, W.; Xie, C.; Bi, Y. A High-Dimensional Multi-Objective Particle Swarm Optimization Algorithm Based on Adaptive Fuzzy Dominance. Acta Autom. Sin. 2018, 44, 2278–2289. [Google Scholar]
  19. Yuan, Y.; Xu, H.; Wang, B.; Yao, X. A new dominance relation-based evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2016, 20, 16–37. [Google Scholar] [CrossRef]
  20. Elarbi, M.; Bechikh, S.; Gupta, A.; Ben Said, L.; Ong, Y.-S. A new decomposition based NSGA-II for many-objective optimization. IEEE Trans. Syst. Man Cybern. 2017, 48, 1191–1210. [Google Scholar] [CrossRef]
  21. Tian, Y.; Zhang, X.Y.; Cheng, R.; He, C.; Jin, Y. Guiding evolutionary multi-objective optimization with generic front modeling. IEEE Trans. Cybern. 2018, 50, 1106–1119. [Google Scholar] [CrossRef]
  22. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm; TIK Report; ETH Zurich, Computer Engineering and Networks Laboratory: Zurich, Switzerland, 2001. [Google Scholar]
  23. Chalabi, N.E.; Attia, A.; Bouziane, A.; Hassaballah, M. An improved marine predator algorithm based on epsilon dominance and pareto archive for multi-objective optimization. Eng. Appl. Artif. Intell. 2023, 119, 105718–105743. [Google Scholar] [CrossRef]
  24. Adra, S.F.; Fleming, P.J. Diversity management in evolutionary many-objective optimization. IEEE Trans. Evol. Comput. 2011, 15, 183–195. [Google Scholar] [CrossRef]
  25. Li, M.Q.; Yang, S.X.; Liu, X.H. Shift-based density estimation for pareto based algorithms in many-objective optimization. IEEE Trans. Evol. Comput. 2014, 18, 348–365. [Google Scholar] [CrossRef]
  26. Hancer, E.; Xue, B.; Zhang, M.; Karaboga, D.; Akay, B. A multi-objective artificial bee colony approach to feature selection using fuzzy mutual information. In Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), Sendai, Japan, 25–28 May 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 2420–2427. [Google Scholar]
  27. Abdullahi, M.; Ngadi, M.A.; Dishing, S.I.; Abdulhamid, S.M.; Ahmad, B.I. An efficient symbiotic organisms search algorithm with chaotic optimization strategy for multi-objective task scheduling problems in cloud computing environment. J. Netw. Comput. Appl. 2019, 133, 60–74. [Google Scholar] [CrossRef]
  28. Houssein, E.H.; Mahdy, M.A.; Shebl, D.; Manzoor, A.; Sarkar, R.; Mohamed, W.M. An efficient slime mould algorithm for solving multi-objective optimization problems. Expert Syst. Appl. 2022, 187, 115870. [Google Scholar] [CrossRef]
  29. Khishe, M.; Orouji, N.; Mosavi, M.R. Multi-objective chimp optimizer: An innovative algorithm for multi-objective problems. Expert Syst. Appl. 2023, 211, 118734. [Google Scholar] [CrossRef]
  30. Braik, M.; Hammouri, A.; Atwan, J.; Al-Betar, M.A.; Awadallah, M.A. White shark optimizer: A novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl. Based Syst. 2022, 243, 108457. [Google Scholar] [CrossRef]
  31. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  32. Haupt, R.; Haupt, S. Practical Genetic Algorithm; John Wiley and Sons: New York, NY, USA, 2004; pp. 38–39. [Google Scholar]
  33. Tizhoosh, H. Opposition-based learn: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Vienna, Austria, 28–30 November 2005; IEEE Computer Society: Washington, DC, USA, 2005; Volume 1, pp. 695–701. [Google Scholar]
  34. Feng, Z.K.; Duan, J.F.; Niu, W.J.; Jiang, Z.-Q.; Liu, Y. Enhanced sine cosine algorithm using opposition learning, adaptive evolution and neighborhood search strategies for multivariable parameter optimization problems. Appl. Soft Comput. 2022, 119, 108562. [Google Scholar] [CrossRef]
  35. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  36. Wang, X.; Zou, X. Modeling the fear effect in predator prey interactions with adaptive avoidance of predators. Bull. Math. Biol. 2017, 79, 1–35. [Google Scholar] [CrossRef] [PubMed]
  37. Zhang, X.; Wang, S.; Zhao, K.; Wang, Y. A salp swarm algorithm based on harris eagle foraging strategy. Math. Comput. Simul. 2023, 203, 858–877. [Google Scholar] [CrossRef]
  38. Chegini, S.N.; Bagheri, A.; Najafi, F. PSOSCALF: A new hybrid pso based on sine cosine algorithm and levy flight for solving optimization problems. Appl. Soft Comput. 2018, 73, 697–726. [Google Scholar] [CrossRef]
  39. Sierra, M.R.; Coello Coello, C.A. Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance. In Evolutionary Multi-Criterion Optimization; Springer: Berlin/Heidelberg, Germany, 2005; pp. 505–519. [Google Scholar]
  40. Schott, J.R. Fault Tolerant Design Using Single and Multi-Criteria Genetic Algorithm Optimization. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1995. [Google Scholar]
  41. Knowles, J.; Corne, D. The pareto archived evolution strategy: A new baseline algorithm for pareto multi-objective optimisation. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; IEEE: Piscataway, NJ, USA, 1999; Volume 1, pp. 98–105. [Google Scholar]
  42. Coello, C.C.; Pulido, G.T.; Lechuga, M.S. Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 256–279. [Google Scholar] [CrossRef]
  43. Mirjalili, S.; Jangir, P.; Saremi, S. Multi-objective ant lion optimizer: A multi-objective optimization algorithm for solving engineering problems. Appl. Intell. 2017, 46, 79–95. [Google Scholar] [CrossRef]
  44. Emary, E.; Yamany, W.; Hassanien, A.E.; Snasel, V. Multi-objective gray-wolf optimization for attribute reduction. Procedia Comput. Sci. 2015, 65, 623–632. [Google Scholar] [CrossRef]
  45. Nebro, A.J.; Durillo, J.J.; Garcia-Nieto, J.; Coello, C.C.; Luna, F.; Alba, E. SMPSO: A new PSO-based metaheuristic for multi-objective optimization. In Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making (MCDM), Nashville, TN, USA, 30 March–2 April 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 66–73. [Google Scholar]
  46. Zitzler, E.; Künzli, S. Indicator-based selection in multi-objective search. In International Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 2004; pp. 832–842. [Google Scholar]
  47. Gurugubelli, S.; Kallepalli, D. Weight and deflection optimization of cantilever beam using a modified non-dominated sorting genetic algorithm. IOSR J. Eng. 2014, 4, 19–23. [Google Scholar] [CrossRef]
  48. Gong, W.Y.; Cai, Z.H.; Zhu, L. An efficient multi-objective differential evolution algorithm for engineering design. Struct. Multidiscip. Optim. 2009, 38, 137–157. [Google Scholar] [CrossRef]
  49. Khazaee, A.; Naimi, H.M. Two multi-objective genetic algorithms for finding optimum design of an I-beam. Engineering 2011, 3, 1054–1061. [Google Scholar] [CrossRef]
  50. Coello, C.C.; Pulido, G.T. Multi-objective structural optimization using a microgenetic algorithm. Struct. Multidiscip. Optim. 2005, 30, 388–403. [Google Scholar] [CrossRef]
  51. Cheng, F.Y.; Li, X.S. Generalized center method for multi-objective engineering optimization. Eng. Optim. 1999, 31, 641–661. [Google Scholar] [CrossRef]
  52. Bu, K.; Zhao, Y.; Zheng, X. Optimization Design of Metro Tunnel Excavation Above Foundation Pit Engineering Based on NSGA-II Genetic Algorithm. J. Railw. Sci. Eng. 2021, 18, 459–467. [Google Scholar]
Figure 1. Detailed description of MOEAs.
Figure 2. Graphical representation of related terms.
Figure 3. Schematic diagram of NDS.
Figure 4. Schematic diagram of crowding distance.
Figure 5. Elite selection process.
Figure 6. MONSWSO-generated PF and real PF.
Figure 7. Comparison of MONSWSO-generated Pareto with real Pareto.
Figure 8. Schematic diagram of the 4 engineering optimization problems.
Figure 9. Best Pareto front for each algorithm on four engineering optimization problems.
Figure 10. PF obtained by each algorithm.
Figure 11. Box plots of Spacing and HV metrics for each algorithm.
Table 1. Validation of 23 MO problems.
Function | ZDT1–ZDT4 | ZDT6 | DEB1–DEB3 | FON1–FON2 | LAU | DTLZ2, DTLZ4–7 | WFG2, WFG4–9
Targets | 2 | 2 | 2 | 2 | 2 | 3 | 3
Dimensions | 30 | 10 | 2 | 2 | 2 | 12 | 12
Variable range | [0,1] | [0,1] | [0,1] | [−4,4] | [−50,50] | [0,1] | [0,2:2:24]
Table 2. Description of 4 engineering problems.
Function | Four-Bar Truss | Cantilever Beam | Disk Brake | I-Beam
Targets | 2 | 2 | 2 | 2
Dimensions (constraints) | 4 (0) | 2 (2) | 4 (5) | 4 (1)
Table 3. Comparison of MONSWSO indicators on two-objective test functions.
Problem | Indicator | MONSWSO M (Sd) | NSGAII M (Sd) | PESAII M (Sd) | MOPSO M (Sd) | MOALO M (Sd) | MOGWO M (Sd)
ZDT1 | IGD | 0.00132 (0.00006) | 0.00223 (0.00024) − | 0.0049 (0.00064) − | 0.30310 (0.0049) − | 0.23260 (0.03573) − | 0.00595 (0.00542) −
 | Spacing | 0.00224 (0.00010) | 0.00321 (0.01786) − | 0.00439 (0.00074) − | 0.00582 (0.00062) − | 0.00232 (0.00278) − | 0.00517 (0.00438) −
 | Spread | 0.35420 (0.01856) | 0.47234 (0.04477) − | 0.98300 (0.04530) − | 0.85800 (0.04300) − | 1.09400 (0.03887) − | 1.26600 (0.12940) −
 | HV | 0.7229 (0.00004) | 0.7229 (0.00002) + | 0.36316 (0.17000) = | 0.71922 (0.00122) − | 0.51140 (0.02140) − | 0.70600 (0.00651) −
ZDT2 | IGD | 0.00201 (0.00052) | 0.01990 (0.09580) − | 0.00592 (0.00099) − | 0.47000 (0.50800) − | 0.58610 (0.01956) − | 0.00616 (0.00496) −
 | Spacing | 0.00233 (0.00034) | 0.00582 (0.01260) − | 0.00445 (0.00096) − | 0.00582 (0.00367) − | 0.00319 (0.00017) − | 0.00618 (0.00456) −
 | Spread | 0.35250 (0.02730) | 0.42000 (0.11200) − | 0.92000 (0.05020) − | 0.86500 (0.11000) − | 1.00200 (0.00225) − | 1.16200 (0.17660) −
 | HV | 0.44750 (0.00001) | 0.44721 (0.00003) − | 0.43879 (0.00239) − | 0.01394 (0.03050) − | 0.09167 (0.00129) − | 0.41970 (0.00781) −
ZDT3 | IGD | 0.00164 (0.00005) | 0.00217 (0.00007) − | 0.00390 (0.00119) − | 0.32400 (0.11100) − | 0.05629 (0.03323) − | 0.00436 (0.00480) −
 | Spacing | 0.00225 (0.00014) | 0.00327 (0.00026) − | 0.00457 (0.00090) − | 0.00680 (0.00165) − | 0.00749 (0.00806) − | 0.00759 (0.0032) −
 | Spread | 0.35960 (0.02214) | 0.40700 (0.02940) − | 0.92700 (0.04870) − | 0.85000 (0.02960) − | 1.31400 (0.16610) − | 1.11600 (0.06785) −
 | HV | 0.60070 (0.00004) | 0.59064 (0.00003) − | 0.60941 (0.02800) + | 0.28254 (0.07930) − | 0.66060 (0.05070) + | 0.60810 (0.03257) +
ZDT4 | IGD | 0.00150 (0.00007) | 0.00302 (0.00055) − | 0.00361 (0.00205) − | 11.50000 (5.44000) − | 0.14828 (0.06440) − | 0.14489 (0.00465) −
 | Spacing | 0.00217 (0.00012) | 0.00342 (0.00028) − | 0.00480 (0.00105) − | 0.00801 (0.00858) = | 0.00549 (0.00394) − | 0.00363 (0.00527) −
 | Spread | 0.36740 (0.02234) | 0.38800 (0.04290) = | 0.90800 (0.10400) − | 0.98100 (0.01690) − | 1.08300 (0.05164) − | 1.22140 (0.05273) −
 | HV | 0.73150 (0.00006) | 0.71276 (0.00026) − | 0.71550 (0.00202) − | NaN (NaN) − | 0.55400 (0.04146) − | 0.70630 (0.00533) −
ZDT6 | IGD | 0.00135 (0.00049) | 0.00193 (0.00008) − | 0.00274 (0.00040) − | 0.00265 (0.02600) − | 0.01759 (0.01080) − | 0.00445 (0.04722) −
 | Spacing | 0.00337 (0.00537) | 0.00283 (0.00019) + | 0.04880 (0.00064) = | 0.01890 (0.01720) − | 0.00752 (0.01445) − | 0.01107 (0.01149) −
 | Spread | 0.29960 (0.11480) | 0.38400 (0.02780) = | 1.09000 (0.31100) − | 1.01000 (0.20600) − | 1.58100 (0.11970) − | 1.05300 (0.07541) −
 | HV | 0.40980 (0.00071) | 0.39070 (0.00004) − | 0.38751 (0.00059) − | 0.38212 (0.01423) − | 0.34680 (0.01225) − | 0.36670 (0.00838) −
DEB1 | IGD | 0.00186 (0.00054) | 0.00162 (0.00005) | 0.00277 (0.00022) − | 0.00477 (0.00030) − | 0.02599 (0.00581) − | 0.00584 (0.00307) −
 | Spacing | 0.00227 (0.00028) | 0.00251 (0.00009) − | 0.00373 (0.00026) − | 0.00601 (0.00062) − | 0.00509 (0.00386) − | 0.00622 (0.00433) −
 | Spread | 0.34620 (0.03558) | 0.42300 (0.03140) − | 0.82300 (0.04650) − | 0.91800 (0.04620) − | 1.22200 (0.10150) − | 1.16000 (0.06317) −
 | HV | 0.44750 (0.00012) | 0.44236 (0.00005) − | 0.44350 (0.00110) − | 0.44608 (0.00025) − | 0.39830 (0.00383) − | 0.42210 (0.00494) −
DEB2 | IGD | 0.00164 (0.00011) | 0.13600 (0.00002) − | 0.15600 (0.00003) − | 0.15600 (0.00044) − | 0.02847 (0.01877) − | 0.00542 (0.00452) −
 | Spacing | 0.00211 (0.00005) | 0.00269 (0.00014) − | 0.00412 (0.00031) − | 0.00447 (0.00030) − | 0.00454 (0.01530) − | 0.00766 (0.00989) −
 | Spread | 0.37490 (0.01518) | 0.60800 (0.03400) − | 0.90200 (0.02830) − | 1.15000 (0.03610) − | 1.33100 (0.29300) − | 1.16700 (0.12040) −
 | HV | 0.47640 (0.00002) | 0.45039 (0.00001) − | 0.44994 (0.00007) − | 0.45020 (0.00048) − | 0.44570 (0.01793) − | 0.45660 (0.00541) −
DEB3 | IGD | 0.00159 (0.00016) | 0.00524 (0.00094) − | 0.00763 (0.00092) − | 0.00928 (0.00653) − | 0.03210 (0.02386) − | 0.00710 (0.02386) −
 | Spacing | 0.00197 (0.00013) | 0.00664 (0.00054) − | 0.00867 (0.00071) − | 0.00848 (0.00073) − | 0.00446 (0.00506) − | 0.00616 (0.00506) −
 | Spread | 0.34710 (0.02220) | 0.42400 (0.05370) − | 0.83400 (0.07320) − | 0.74900 (0.05960) − | 1.35800 (0.17400) − | 1.01800 (0.17400) −
 | HV | 0.24310 (0.00010) | 0.23158 (0.00011) − | 0.22980 (0.00075) − | 0.22943 (0.00237) − | 0.20430 (0.00631) − | 0.21700 (0.00626) −
FON1 | IGD | 0.00203 (0.00019) | 0.00284 (0.00006) − | 0.00335 (0.00024) − | 0.00293 (0.00018) | 0.05434 (0.01743) | 0.07805 (0.00944)
 | Spacing | 0.00242 (0.00038) | 0.00284 (0.00010) − | 0.00375 (0.00024) − | 0.00328 (0.00018) − | 0.00145 (0.00454) + | 0.01701 (0.00235) −
 | Spread | 0.36340 (0.02442) | 0.41600 (0.02910) − | 0.89600 (0.03140) − | 0.75500 (0.02730) − | 1.07200 (0.12220) + | 1.04600 (0.12670) −
 | HV | 0.22590 (0.00001) | 0.22585 (0.00003) − | 0.22409 (0.00056) − | 0.22544 (0.00019) − | 0.18950 (0.00964) − | 0.21190 (0.00371) −
FON2 | IGD | 0.00216 (0.00023) | 0.00201 (0.00015) + | 0.00482 (0.00115) | 0.00356 (0.00057) | 0.04959 (0.01091) | 0.01508 (0.00407)
 | Spacing | 0.00232 (0.00003) | 0.00239 (0.00008) − | 0.00407 (0.00038) − | 0.00354 (0.00035) − | 0.00294 (0.00379) − | 0.00412 (0.00159) −
 | Spread | 0.35780 (0.02195) | 0.41000 (0.02840) − | 0.92500 (0.02970) − | 0.81000 (0.03920) − | 1.03500 (0.11720) − | 0.86640 (0.01767) −
 | HV | 0.43130 (0.00006) | 0.42085 (0.00007) − | 0.42570 (0.00163) − | 0.42965 (0.00062) − | 0.38510 (0.00695) − | 0.41080 (0.00252) −
LAU | IGD | 0.00688 (0.00065) | 0.00674 (0.00076) + | 0.01560 (0.00161) − | 0.01190 (0.00163) − | 0.14070 (0.03893) − | 0.02493 (0.02529) −
 | Spacing | 0.00832 (0.00040) | 0.01030 (0.00048) − | 0.01690 (0.00258) − | 0.01470 (0.00065) − | 0.04747 (0.01946) − | 0.03520 (0.01049) −
 | Spread | 0.34570 (0.02212) | 0.48300 (0.03310) − | 1.01000 (0.05430) − | 0.82600 (0.02760) − | 1.37100 (0.10770) − | 1.15600 (0.13540) −
 | HV | 0.88100 (0.00002) | 0.76107 (0.00001) − | 0.85937 (0.00037) − | 0.81007 (0.00013) − | 0.84080 (0.00355) − | 0.85330 (0.00262) −
+/−/= | IGD | | 2/9/0 | 0/11/0 | 0/11/0 | 1/10/0 | 0/11/0
 | Spacing | | 1/10/0 | 0/10/0 | 0/10/0 | 1/10/0 | 0/11/0
 | Spread | | 0/9/2 | 0/11/0 | 0/11/0 | 0/11/0 | 0/11/0
 | HV | | 1/10/0 | 1/9/1 | 0/11/0 | 1/10/0 | 1/10/0
Table 4. Comparison of MONSWSO indicators on the three-objective test functions.

| Function | Indicator | MONSWSO M (Sd) | NSGAII M (Sd) | PESAII M (Sd) | MOPSO M (Sd) | MOALO M (Sd) | MOGWO M (Sd) |
|---|---|---|---|---|---|---|---|
| DTLZ2 | IGD | 0.03697 (0.00042) | 0.03910 (0.00094) = | 0.03930 (0.00062) = | 0.04840 (0.00346) − | 0.11950 (0.01543) − | 0.09500 (0.02922) − |
| | Spacing | 0.03002 (0.00228) | 0.03210 (0.00108) − | 0.03290 (0.00112) − | 0.03130 (0.00175) − | 0.06412 (0.00824) − | 0.03458 (0.00169) − |
| | Spread | 0.34650 (0.00312) | 0.50135 (0.01760) = | 0.55242 (0.03930) = | 0.38481 (0.02241) = | 1.36412 (0.00824) − | 0.55458 (0.00169) − |
| | HV | 0.54650 (0.00310) | 0.56335 (0.00265) + | 0.56312 (0.00195) + | 0.54534 (0.00926) − | 0.40042 (0.02112) − | 0.43421 (0.03326) − |
| DTLZ4 | IGD | 0.03851 (0.00067) | 0.03920 (0.00069) = | 0.03959 (0.00069) − | 0.14384 (0.10000) − | 0.36850 (0.01417) − | 0.09210 (0.02972) − |
| | Spacing | 0.02868 (0.00148) | 0.03260 (0.00127) = | 0.03332 (0.00085) − | 0.02575 (0.01460) + | 0.03810 (0.01984) − | 0.03430 (0.00246) − |
| | Spread | 0.39160 (0.02081) | 0.51175 (0.02330) − | 0.53451 (0.03280) − | 0.55518 (0.13900) − | 1.47900 (0.08540) − | 0.81820 (0.02073) − |
| | HV | 0.56371 (0.00168) | 0.56511 (0.00151) + | 0.56752 (0.00192) + | 0.48967 (0.04455) = | 0.29450 (0.06226) − | 0.17904 (0.01743) − |
| DTLZ5 | IGD | 0.00181 (0.00011) | 0.00189 (0.00007) = | 0.00412 (0.00042) − | 0.00411 (0.00040) − | 0.03650 (0.02382) − | 0.01494 (0.01102) − |
| | Spacing | 0.00277 (0.00008) | 0.00302 (0.00022) = | 0.00568 (0.00081) − | 0.00543 (0.00054) − | 0.02064 (0.01341) − | 0.00711 (0.00188) − |
| | Spread | 0.38140 (0.01083) | 0.43658 (0.04740) − | 0.92704 (0.04120) − | 0.95067 (0.06150) − | 1.38600 (0.12390) − | 1.17800 (0.08354) − |
| | HV | 0.20142 (0.00003) | 0.20157 (0.000001) + | 0.19886 (0.00135) = | 0.19771 (0.00165) = | 0.13895 (0.01752) − | 0.1669 (0.02245) − |
| DTLZ6 | IGD | 0.00165 (0.00014) | 0.00188 (0.00005) − | 0.00462 (0.00027) − | 1.77190 (0.87200) − | 0.05422 (0.04962) − | 0.00373 (0.00136) − |
| | Spacing | 0.00283 (0.00011) | 0.00357 (0.00016) − | 0.00531 (0.00052) − | 0.06821 (0.02460) − | 0.08127 (0.05243) − | 0.00391 (0.00148) − |
| | Spread | 0.40600 (0.02047) | 0.61763 (0.03520) − | 1.16700 (0.04690) − | 0.59714 (0.07260) − | 1.46300 (0.20750) − | 0.86790 (0.07747) − |
| | HV | 0.20173 (0.00006) | 0.20162 (0.00003) − | 0.19909 (0.00074) − | NaN (NaN) − | 0.16622 (0.01276) − | 0.1921 (0.00822) − |
| DTLZ7 | IGD | 0.03978 (0.00311) | 0.04108 (0.00230) = | 0.04209 (0.00223) = | 0.72841 (0.39800) − | 0.57880 (0.02365) − | 0.04730 (0.06067) = |
| | Spacing | 0.03029 (0.00217) | 0.03808 (0.00345) − | 0.03496 (0.00205) − | 0.01827 (0.01090) + | 0.01196 (0.00506) + | 0.03877 (0.00791) − |
| | Spread | 0.44560 (0.01032) | 0.48693 (0.02530) − | 0.58002 (0.04870) − | 0.57881 (0.16200) − | 1.09600 (0.04316) − | 0.70110 (0.06978) − |
| | HV | 0.21452 (0.00195) | 0.28210 (0.00058) + | 0.28087 (0.00129) + | 0.12391 (0.07210) − | 0.16390 (0.02612) − | 0.10900 (0.03344) − |
| WFG2 | IGD | 0.11830 (0.00380) | 0.12433 (0.00621) − | 0.12430 (0.00710) − | 0.17405 (0.01980) − | 0.28800 (0.02969) − | 0.15840 (0.01953) − |
| | Spacing | 0.14530 (0.01960) | 0.12909 (0.04000) + | 0.10783 (0.00670) + | 0.09866 (0.04640) + | 0.14490 (0.01973) + | 0.11460 (0.02953) + |
| | Spread | 0.36950 (0.01541) | 0.47087 (0.02160) − | 0.52292 (0.04230) − | 0.44214 (0.02980) − | 1.08400 (0.09980) − | 0.47540 (0.03116) − |
| | HV | 0.93256 (0.00191) | 0.93443 (0.00081) + | 0.93132 (0.00182) − | 0.86914 (0.01842) − | 0.82560 (0.02620) − | 0.8645 (0.00572) − |
| WFG4 | IGD | 0.21010 (0.00484) | 0.16118 (0.00246) + | 0.16457 (0.00337) + | 0.22022 (0.00712) − | 0.61810 (0.10020) − | 0.38840 (0.23370) − |
| | Spacing | 0.12400 (0.00762) | 0.12022 (0.00626) + | 0.11795 (0.00625) + | 0.12415 (0.00837) = | 0.17380 (0.02253) − | 0.13520 (0.03277) + |
| | Spread | 0.39918 (0.00729) | 0.42288 (0.02560) − | 0.42300 (0.02340) − | 0.40542 (0.03290) − | 1.49400 (0.03258) − | 0.52890 (0.03700) − |
| | HV | 0.5175 (0.00176) | 0.55013 (0.00212) − | 0.54642 (0.00310) − | 0.49404 (0.00439) − | 0.37492 (0.02349) − | 0.3317 (0.02252) − |
| WFG5 | IGD | 0.19690 (0.00326) | 0.18070 (0.00334) + | 0.18195 (0.00323) + | 0.20280 (0.01350) − | 0.39650 (0.05902) − | 0.49797 (0.15670) − |
| | Spacing | 0.10710 (0.00505) | 0.11930 (0.00541) − | 0.12474 (0.00949) − | 0.11514 (0.00817) − | 0.19550 (0.01606) − | 0.12720 (0.02631) − |
| | Spread | 0.36950 (0.01541) | 0.47087 (0.02160) − | 0.52292 (0.04230) − | 0.44214 (0.02980) − | 1.08400 (0.09980) − | 0.47540 (0.03116) − |
| | HV | 0.50263 (0.00253) | 0.51788 (0.00317) + | 0.49825 (0.00531) − | 0.46605 (0.00881) − | 0.41842 (0.02600) − | 0.27884 (0.01552) − |
| WFG6 | IGD | 0.16722 (0.00433) | 0.20847 (0.00874) − | 0.19431 (0.01260) − | 0.23406 (0.02620) − | 0.60000 (0.07342) − | 0.47700 (0.07941) − |
| | Spacing | 0.11364 (0.00617) | 0.12953 (0.00768) − | 0.12812 (0.00712) = | 0.17190 (0.00677) − | 0.20220 (0.02319) − | 0.15830 (0.02722) − |
| | Spread | 0.38990 (0.01927) | 0.49648 (0.02690) − | 0.52410 (0.01850) − | 0.40585 (0.02080) = | 1.58300 (0.03202) − | 0.62880 (0.04042) − |
| | HV | 0.54564 (0.00222) | 0.49859 (0.00952) − | 0.50025 (0.01250) − | 0.46985 (0.00964) − | 0.33832 (0.0263) − | 0.26622 (0.01669) − |
| WFG7 | IGD | 0.16956 (0.00359) | 0.16431 (0.00348) + | 0.16579 (0.00223) + | 0.24305 (0.01340) − | 0.57120 (0.05648) − | 0.61830 (0.08866) − |
| | Spacing | 0.11299 (0.00376) | 0.12813 (0.00938) = | 0.12793 (0.00662) = | 0.11785 (0.00805) = | 0.19230 (0.01449) − | 0.10970 (0.03292) + |
| | Spread | 0.39097 (0.01326) | 0.53448 (0.02520) − | 0.54623 (0.03090) − | 0.41414 (0.02160) = | 1.46500 (0.02506) − | 0.61280 (0.04507) − |
| | HV | 0.54378 (0.00242) | 0.56474 (0.00128) + | 0.54612 (0.00403) + | 0.47741 (0.00639) − | 0.35372 (0.00930) − | 0.26422 (0.01316) − |
| WFG8 | IGD | 0.27467 (0.00301) | 0.27150 (0.00373) + | 0.25909 (0.00557) + | 0.39959 (0.01510) − | 0.75280 (0.07509) − | 0.66374 (0.15700) − |
| | Spacing | 0.11605 (0.00352) | 0.13303 (0.00665) − | 0.13346 (0.00661) − | 0.12294 (0.00562) = | 0.17890 (0.02275) − | 0.14190 (0.02361) = |
| | Spread | 0.40327 (0.01505) | 0.54412 (0.02860) − | 0.55864 (0.03350) − | 0.40969 (0.02190) = | 1.55200 (0.03676) − | 0.58990 (0.03998) − |
| | HV | 0.53927 (0.00220) | 0.56474 (0.00128) + | 0.54612 (0.00403) + | 0.47741 (0.00639) − | 0.30262 (0.00840) − | 0.19697 (0.02371) − |
| WFG9 | IGD | 0.15622 (0.00445) | 0.16649 (0.00399) − | 0.16057 (0.00172) − | 0.18701 (0.01120) − | 0.46700 (0.06412) − | 0.54952 (0.25570) − |
| | Spacing | 0.10778 (0.00371) | 0.11692 (0.00481) − | 0.11654 (0.00549) − | 0.11812 (0.00448) − | 0.18730 (0.01528) − | 0.12890 (0.02646) − |
| | Spread | 0.38916 (0.01534) | 0.46482 (0.02810) − | 0.44877 (0.03030) − | 0.40412 (0.02940) = | 1.37900 (0.27080) − | 0.59610 (0.03859) − |
| | HV | 0.46776 (0.00393) | 0.53804 (0.00174) + | 0.52620 (0.00255) + | 0.50723 (0.00739) + | 0.42399 (0.03350) − | 0.26681 (0.02237) − |
| +/−/= | IGD | | 4/4/4 | 4/6/2 | 0/12/0 | 0/12/0 | 0/11/1 |
| | Spacing | | 2/7/3 | 2/8/2 | 3/5/4 | 2/10/0 | 3/7/2 |
| | Spread | | 0/11/1 | 0/11/1 | 0/7/5 | 0/12/0 | 0/12/0 |
| | HV | | 10/2/0 | 7/4/1 | 1/9/2 | 0/12/0 | 0/12/0 |

+, −, and = indicate that the comparison algorithm performs better than, worse than, or comparably to MONSWSO on that indicator.
Table 5. Spacing metric results for four engineering optimization problems.

| Algorithm | Problem a | Problem b | Problem c | Problem d |
|---|---|---|---|---|
| MONSWSO | 0.00427 (0.00032) | 0.01752 (0.00051) | 1.11920 (0.01200) | 0.87604 (0.04870) |
| NSGAII | 0.00560 (0.00012) | 0.02985 (0.01010) | 4.26730 (0.40300) | 0.88125 (0.05470) |
| PESAII | 0.01014 (0.00096) | NaN (NaN) | 5.42750 (0.86800) | 1.19350 (0.09310) |
| MOPSO | 0.00926 (0.00082) | NaN (NaN) | 5.89620 (1.81000) | 1.16510 (0.09820) |
| SMPSO | 0.00669 (0.00065) | NaN (NaN) | 6.00830 (0.73800) | 0.86004 (0.04320) |
| IBEA | 0.04012 (0.01533) | NaN (NaN) | NaN (NaN) | 1.07990 (0.03230) |

Values are reported as M (Sd).
Table 6. Optimized design results of the pit above the subway tunnel.

| Algorithm | f1 | f2 | x1 | x2 | x3 | x4 | x5 |
|---|---|---|---|---|---|---|---|
| MONSWSO | 12.31259 | 282.03516 | 10 | 0.5 | 20 | 0.6 | 41 |
| | 13.27690 | 237.44859 | 5 | 0.6 | 19.5 | 0.6 | 41 |
| NSGAII | 12.95089 | 251.52024 | 6 | 0.6 | 20.5 | 0.6 | 41 |
| | 12.45820 | 275.49024 | 8 | 0.6 | 20.5 | 0.6 | 41 |
| PESAII | 11.93554 | 315.12135 | 10 | 0.7 | 20 | 0.6 | 41 |
| | 11.95881 | 313.31412 | 10 | 0.7 | 20.5 | 0.6 | 41 |
| MOPSO | 11.89600 | 320.23870 | 9 | 0.8 | 20.5 | 0.6 | 41 |
| | 11.40560 | 363.94390 | 10 | 1 | 20 | 0.6 | 41 |
| SMPSO | 11.52850 | 351.95890 | 9 | 1 | 20 | 0.6 | 41 |
| | 12.96330 | 276.69480 | 7 | 0.7 | 20 | 0.6 | 39 |
| IBEA | 12.62490 | 253.66840 | 9 | 0.4 | 20 | 0.6 | 41 |
| | 12.56850 | 284.28370 | 6 | 0.8 | 20.5 | 0.6 | 41 |
| MOALO | 11.80487 | 332.80187 | 8 | 0.9 | 20 | 0.6 | 41 |
| | 12.54890 | 267.25680 | 8 | 0.5 | 20 | 0.6 | 41 |
| MOGWO | 12.55031 | 260.80876 | 9 | 0.4 | 21 | 0.6 | 41 |
| | 12.45241 | 294.43582 | 6 | 0.8 | 20.5 | 0.6 | 41 |

f1 and f2 are the problem design objectives; x1–x5 are the problem design parameters.
Table 7. Spacing and HV results.

| Algorithm | Spacing M (Sd) | Spacing Best | Spacing Mid | HV M (Sd) | HV Best | HV Mid |
|---|---|---|---|---|---|---|
| MONSWSO | 2.11610 (0.03834) | 2.06270 | 2.10952 | 0.71729 (0.00112) | 0.71993 | 0.71040 |
| NSGAII | 2.14021 (0.10221) | 2.08900 | 2.13762 | 0.65374 (0.00001) | 0.65411 | 0.65374 |
| PESAII | 2.31313 (0.38521) | 2.11154 | 2.30851 | 0.65224 (0.00029) | 0.65350 | 0.65219 |
| MOPSO | 2.16012 (0.36815) | 2.07235 | 2.17433 | 0.65263 (0.05120) | 0.69272 | 0.65277 |
| SMPSO | 2.11922 (0.04162) | 2.07525 | 2.12742 | 0.65383 (0.04160) | 0.70644 | 0.65364 |
| IBEA | 3.79824 (0.55214) | 3.24743 | 3.55814 | 0.64373 (0.02110) | 0.65385 | 0.64373 |
| MOALO | 3.18780 (0.50137) | 2.73218 | 3.32476 | 0.47226 (0.00601) | 0.48693 | 0.47108 |
| MOGWO | 2.26113 (0.41588) | 2.19739 | 2.23096 | 0.48691 (0.00082) | 0.48868 | 0.48729 |
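Because the subway-tunnel pit problem in Table 6 has two objectives (f1, f2), the HV figures in Table 7 amount to the area dominated by each front below a reference point. A minimal sketch for two-objective minimization is shown below; the function name and the reference point are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def hypervolume_2d(front, ref_point):
    """Hypervolume for a 2-objective minimization front: the area
    dominated by the front and bounded by ref_point."""
    pts = np.asarray(front, dtype=float)
    # keep only points that strictly dominate the reference point
    pts = pts[(pts < np.asarray(ref_point)).all(axis=1)]
    if len(pts) == 0:
        return 0.0
    # sort by f1 ascending; along a Pareto front, f2 then decreases
    pts = pts[np.argsort(pts[:, 0])]
    hv, prev_f2 = 0.0, ref_point[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # skip points dominated within the set
            hv += (ref_point[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

Larger HV means a front that is both closer to the true trade-off and better spread, which is why MONSWSO's 0.71729 against roughly 0.65 for NSGAII, PESAII, MOPSO, and SMPSO indicates a stronger front on this problem.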

Share and Cite

MDPI and ACS Style

Guo, W.; Qiang, Y.; Dai, F.; Wang, J.; Li, S. An Efficient Multi-Objective White Shark Algorithm. Biomimetics 2025, 10, 112. https://doi.org/10.3390/biomimetics10020112
