Article

Salmon Salar Optimization: A Novel Nature-Inspired Metaheuristic Method for Deep-Sea Probe Design for Unconventional Subsea Oil Wells

1 Hubei Key Laboratory of Digital Finance Innovation, Hubei University of Economics, Wuhan 430205, China
2 School of Information Engineering, Hubei University of Economics, Wuhan 430205, China
3 Hubei Internet Finance Information Engineering Technology Research Center, Hubei University of Economics, Wuhan 430205, China
4 Faculty of Computer and Information Sciences, Hosei University, Tokyo 102-8160, Japan
* Author to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2024, 12(10), 1802; https://doi.org/10.3390/jmse12101802
Submission received: 12 August 2024 / Revised: 20 September 2024 / Accepted: 28 September 2024 / Published: 10 October 2024

Abstract

As global energy demands continue to rise, the development of unconventional oil resources has become a critical priority. However, the complexity and high dimensionality of these problems often cause existing optimization methods to get trapped in local optima when designing key tools, such as deep-sea probes. To address this challenge, this study proposes a novel meta-heuristic approach—the Salmon Salar Optimization algorithm, which simulates the social structure and collective behavior of salmon to perform high-precision searches in high-dimensional spaces. The Salmon Salar Optimization algorithm demonstrated superior performance across two benchmark function sets and successfully solved the constrained optimization problem in deep-sea probe design. These results indicate that the proposed method is highly effective in meeting the optimization needs of complex engineering systems, particularly in the design optimization of deep-sea probes for unconventional oil exploration.

1. Introduction

The development of unconventional oil resources is increasingly recognized as a critical component in addressing global energy demands. As conventional oil reserves dwindle and geopolitical factors introduce uncertainties in supply, the ability to access unconventional resources, such as shale oil, oil sands, and ultra-deepwater reserves, has become imperative [1,2]. Although these resources are abundant, they require advanced extraction technologies and sophisticated engineering solutions due to their complex geological formations and challenging extraction environments [3,4].
Deep-sea probes play an indispensable role in the exploration and extraction of unconventional oil resources, particularly in ultra-deepwater environments. These probes must navigate and analyze extreme underwater conditions, characterized by high geological complexity and a hostile operational environment. The design and deployment of deep-sea probes present high-dimensional optimization challenges, where multiple factors—such as structural integrity, sensor accuracy, energy efficiency, and environmental resilience—must be considered simultaneously [5]. The complexity of this task is further amplified by the need to ensure reliable long-term operation under conditions of extreme pressure, temperature fluctuations, and corrosive seawater.
While metaheuristic methods have demonstrated success in solving many complex optimization problems, several challenges remain. These include the risk of stagnation in local optima, particularly in high-dimensional problems whose vast search spaces make it difficult for algorithms to escape toward the global solution. Additionally, the convergence of these algorithms can be slow in high-dimensional settings, where computational resource consumption is substantial, reducing overall efficiency. In practice, tasks such as deep-sea probe design for unconventional oil exploration require highly precise, adaptable, and robust optimization methods capable of handling complex, multi-dimensional environments under extreme conditions. To address these limitations, this study proposes the Salmon Salar Optimization (SSO) algorithm, inspired by the social and cooperative behavior of salmon, as a novel approach to achieving more effective and scalable solutions in high-dimensional optimization tasks.

2. Related Works

Evolutionary algorithms, such as particle swarm optimization (PSO) [6], firefly algorithms [7], differential evolution [8], and the wolf pack algorithm [9], are effective methods for solving high-dimensional optimization problems. PSO is inspired by the social behavior of animals like birds and bees and is used to find global optimal solutions to optimization problems. In PSO, a group of particles represents potential solutions moving through the search space. These particles are initialized with random positions and velocities and are guided by two factors: their personal best position (the best solution each particle has found) and the global best position (the best solution found by any particle in the swarm). At each iteration, particles update their velocities and positions based on these two factors, aiming to move toward more promising regions of the search space. The movement of the particles is controlled by acceleration constants, which determine how much influence the personal and global best positions have on the particle’s trajectory.
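For concreteness, the canonical update described above can be written in a few lines of NumPy. The following is a minimal sketch, in which the inertia weight w, the acceleration constants c1 and c2, the swarm size, and the sphere test function are illustrative assumptions rather than settings used in this paper.

import numpy as np

def pso(func, dim, n_particles=30, iters=1000, lo=-100.0, hi=100.0,
        w=0.729, c1=1.494, c2=1.494, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))               # positions
    v = rng.uniform(-(hi - lo), hi - lo, (n_particles, dim))  # velocities
    pbest = x.copy()                                          # personal best positions
    pbest_f = np.apply_along_axis(func, 1, x)                 # personal best fitness
    g = pbest[pbest_f.argmin()].copy()                        # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity pulled toward the personal and global best positions
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(func, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Example: minimize the 10-dimensional sphere function.
best_x, best_f = pso(lambda z: float(np.sum(z**2)), dim=10)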
In recent years, convex optimization, which deals with optimization problems where the objective function is convex, has gained increasing importance. Other active research areas in optimization include global optimization [10] and multi-objective optimization [11].
Optimization problems are widely applied across various fields, such as supply chain management [12], transportation [13], colored TSP problems [14,15], networks [16], permanent magnets [17], sound classification [18], feature selection [19], risk prediction [20], route design [21], and financial management [22]. As the Internet continues to grow rapidly, the complexity and scale of optimization problems have also increased, making traditional optimization algorithms less suited to modern needs. As a result, researchers continue to develop and improve optimization algorithms to meet these evolving challenges.
In 2015, Gao [23] proposed a selectively informed PSO (SIPSO). In SIPSO, densely connected particles receive information from all their neighbors, while sparsely connected particles follow only the best-performing neighbor. Liang [24] introduced an adaptive PSO based on clustering (APSO-C), which involves two main steps: first, a K-means clustering method is used to divide the swarm into several sub-groups, and second, the inertia weights of individuals are adjusted based on the evaluation of clusters and swarm states. Li [25] proposed a composite PSO with historical memory (HMPSO), in which each particle considers three candidate positions: its historical memory, personal best position, and the global best of the swarm.
In 2016, Pornsing [26] proposed techniques for self-adaptive inertia weight and time-varying adaptive swarm topology to enhance the performance of particle swarm optimization (PSO). Guo [27] introduced several variants of the Bare Bones PSO (BBPSO) algorithm, including pair-wise bare bones PSO (PBBPSO), dynamic local search bare bones PSO (DLS-BBPSO) [28], and hierarchical bare bones PSO (HBBPSO) [29], all aimed at improving the search capabilities of the original BBPSO [30].
In 2017, Xu [31] proposed a chaotic PSO (CPSO) for combinatorial optimization problems, which combines a chaos method with the traditional PSO. In 2018, Tian [32] introduced a modified PSO that incorporates chaos-based initialization and robust update mechanisms. That same year, Guo [28] developed a dynamic allocation bare bones PSO (DABBPSO) and a dynamic reconstruction bare bones PSO (DRBBPSO) to further improve the original BBPSO.
In 2019, Ghasemi [33] proposed the Phasor PSO (PPSO), while Guo [34] introduced a fission-fusion hybrid BBPSO (FBBPSO) for single-objective optimization problems. Xu [35] presented a dimensional learning strategy for PSO.
In 2020, Xu [36] proposed a reinforcement learning-based communication topology for PSO. In 2021, Yamanaka [37] developed a simple gravitational particle swarm algorithm for multi-modal optimization problems, and Liu [38] introduced a Sigmoid-Function-Based adaptive weighted PSO (SAWPSO). Wang [39] proposed an adaptive granularity learning distributed PSO (AGLDPSO). In 2022, Li [40] introduced a multi-population PSO with neighborhood learning (MPSO-NL), and Tian [41] proposed an electronic transition-based BBPSO for high-dimensional problems. Guo [42] later introduced a twinning strategy for BBPSO, and Zhou [43] proposed an atomic retrospective learning BBPSO (ARBBPSO).
Many researchers have also drawn inspiration from the natural world and developed various metaheuristic algorithms, such as the Dung Beetle Optimizer (DBO) [44], Whale Optimization Algorithm (WOA) [45], Grey Wolf Optimizer (GWO) [46], Harris Hawks Optimization (HHO) [47], African Vultures Optimization Algorithm (AVOA) [48], and Gorilla Troops Optimizer [49]. These algorithms have demonstrated strong performance in solving both benchmark and engineering problems.
With the advancement of artificial intelligence technology, optimization problems have grown in both scale and dimensionality. Traditional optimization algorithms are no longer sufficient to address the wide range of complex optimization challenges. To overcome these issues, this work proposes a new metaheuristic method, Salmon Salar Optimization (SSO). The strength of the SSO method lies in its ability to simulate the social structure and collective behavior of salmon, effectively balancing exploration and exploitation in high-dimensional search spaces. This approach helps the algorithm escape local optima and improves the convergence rate, even in highly complex and constrained environments. These features make SSO particularly well-suited for solving complex engineering optimization problems, such as the design of deep-sea probes for unconventional oil exploration. The major contributions of this work are summarized as follows:
(1) Biologically-Inspired Design: The Salmon Salar Optimization algorithm is a novel high-dimensional optimization method inspired by the collective behavior of salmon in nature. Unlike traditional algorithms, it emulates the migration and cooperative strategies of salmon, making it more effective in tackling high-dimensional optimization problems.
(2) High-Precision Solutions: The SSO algorithm excels in high-dimensional spaces, demonstrating a unique ability to discover highly accurate solutions. It is particularly effective at addressing complex multi-dimensional problems, providing robust solutions across engineering, scientific, and commercial applications.
(3) Adaptiveness: The SSO algorithm exhibits a high degree of adaptiveness, allowing it to dynamically adjust based on the nature and dimensionality of the problem. This adaptability enables it to handle various types of high-dimensional optimization problems without requiring extensive parameter tuning beforehand.
The rest of this paper is organized as follows: Section 2 reviews related work; Section 3 introduces the design and structure of the Salmon Salar Optimization algorithm; Section 4 presents the experimental details and results; and Section 5 provides the conclusion of this work.

3. Materials and Methods

3.1. Salmon Salar Optimization

Drawing inspiration from the structural composition and social behaviors observed in salmon salar populations, the Salmon Salar Optimization (SSO) Algorithm is proposed. The SSO Algorithm derives its conceptual foundation from three principal search objectives: food procurement, breeding habitat identification, and hazard avoidance. Primarily, the food procurement objective corresponds to the imperative for sustenance, enabling the algorithm to pursue optimal solutions within the problem domain to ensure individual health and viability. Subsequently, the breeding habitat identification objective reflects the algorithm’s capacity to discern suitable problem domain solutions in support of reproduction and ensuing population expansion. Lastly, the hazard avoidance objective equips the algorithm with adaptability to volatile problem conditions, enabling it to circumvent potential threats or adverse factors, thereby ensuring search process stability.
Consequently, the SSO amalgamates these three fundamental search strategies to achieve efficient, diversified, and adaptive search processes. This approach, when applied to the resolution of intricate high-dimensional optimization problems, evinces significant potential by not only furnishing robust solutions but also introducing an innovative optimization instrument applicable across diverse domains, encompassing engineering, scientific inquiry, and commercial enterprises. By emulating the efficacious survival strategies exhibited by salmon salar populations, the SSO introduces biologically inspired paradigms into the realm of optimization, promising noteworthy advancements in the solution of complex problem sets.

3.2. The Multi-Purpose Fusion Search Strategy

Considering the complexity of the high-dimensional optimization problem, the Gaussian distribution is used to select the position of the particle in the next iteration. The next position of a salmon is calculated by Equation (1).
$$\alpha = \frac{Salmon + LeaderSalmon}{2}, \qquad \beta = \left| Salmon - LeaderSalmon \right|, \qquad Salmon\_candi = Gaus_i(\alpha, \beta) \tag{1}$$
where $Salmon$ is the current position of a normal salmon and $LeaderSalmon$ contains the food memory, the breeding memory, and the danger memory. $Gaus_i(\alpha, \beta)$ is a Gaussian distribution with mean $\alpha$ and standard deviation $\beta$. Since $LeaderSalmon$ has three levels, $Salmon\_candi$ contains three candidate positions. After the candidate positions are generated, the next position of a normal $Salmon$ is calculated by Equation (2).
$$candidatelist^{t+1} = [Salmon^{t},\ Salmon\_candi^{t+1}], \qquad Salmon^{t+1} = FindBest(candidatelist^{t+1}, 1) \tag{2}$$
where $candidatelist^{t+1}$ is a list containing the position of a $Salmon$ in the $t$th generation and its three candidate positions in the $(t+1)$th generation, and $FindBest(X, 1)$ is a function that returns the best position in $X$. After every $Salmon$ has found its new position in the $(t+1)$th generation, the next position of the $LeaderSalmon$ is calculated by Equation (3).
$$\begin{aligned} LeaderSalmon\_candi^{t+1} &= FindBest(Salmon^{t+1}, 1) \\ leadercandidatelist^{t+1} &= [LeaderSalmon^{t},\ LeaderSalmon\_candi^{t+1}] \\ LeaderSalmon^{t+1} &= FindBest(leadercandidatelist^{t+1}, 3) \end{aligned} \tag{3}$$
where $leadercandidatelist^{t+1}$ is a list containing the positions of the $LeaderSalmon$ in the $t$th generation and the best position of all $Salmon$s in the $(t+1)$th generation, and $FindBest(X, 3)$ returns the best three positions in $X$.
To better describe the operation mode of SSO, this section details the pseudo-code and flowchart of SSO. The flowchart is shown in Figure 1; pseudo-code of SSO is shown in Algorithm 1.
Algorithm 1 Pseudo-code of SSO.
Require: IT, maximum number of iterations
Require: func, fitness function
Require: Ran, search range
Require: Max, maximum number of generations
Require: n, number of salmon
1: Randomly generate the initial positions of the Salmons
2: Find the LeaderSalmon among all Salmons
3: for gen = 1 to Max do
4:    Update the positions of the Salmons using Equations (1) and (2)
5:    Update the position of the LeaderSalmon using Equation (3)
6: end for
7: Output: LeaderSalmon
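To make Algorithm 1 concrete, the following Python sketch implements one reading of the SSO loop under Equations (1)-(3). Treating the three LeaderSalmon levels as the three best positions found so far, clipping candidates to the search range, and the default population settings are our assumptions; the paper fixes only the update rules themselves.

import numpy as np

def sso(func, dim, n_salmon=100, max_gen=1000, lo=-100.0, hi=100.0, seed=0):
    rng = np.random.default_rng(seed)
    salmon = rng.uniform(lo, hi, (n_salmon, dim))      # initial positions
    fit = np.apply_along_axis(func, 1, salmon)
    leaders = salmon[np.argsort(fit)[:3]].copy()       # three LeaderSalmon levels
    for _ in range(max_gen):
        for i in range(n_salmon):
            # Equation (1): one Gaussian candidate per leader level
            alpha = (salmon[i] + leaders) / 2.0        # means, shape (3, dim)
            beta = np.abs(salmon[i] - leaders)         # standard deviations
            cand = rng.normal(alpha, beta).clip(lo, hi)
            # Equation (2): keep the best of the current position and the candidates
            pool = np.vstack([salmon[i], cand])
            pool_fit = np.apply_along_axis(func, 1, pool)
            best = pool_fit.argmin()
            salmon[i], fit[i] = pool[best], pool_fit[best]
        # Equation (3): leaders become the best three of {old leaders, best salmon}
        pool = np.vstack([leaders, salmon[fit.argmin()]])
        pool_fit = np.apply_along_axis(func, 1, pool)
        leaders = pool[np.argsort(pool_fit)[:3]].copy()
    return leaders[0], float(func(leaders[0]))

Note that each salmon evaluates several candidate positions per generation; this is the source of the fitness-evaluation cost discussed in Section 4.4.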

4. Results

4.1. Numerical Experiments with CEC2017

To explore the optimization ability of SSO, the CEC2017 benchmark functions are selected for the simulation tests. CEC2017 contains four groups of functions:
  • Unimodal functions, f1–f2;
  • Simple multimodal functions, f3–f9;
  • Hybrid functions, f10–f19;
  • Composition functions, f20–f29.
Eight state-of-the-art optimization algorithms (DBO, WOA, GWO, HHO, AVOA, GTO, ARBBPSO, and ETBBPSO) are selected as the control group. To ensure the fairness of the experiment, all algorithms use the same number of particles and the same number of iterations. To reduce the error caused by chance, all calculations are repeated thirty-seven times; the mean and variance of these runs are recorded and used to evaluate the performance of the algorithms. The parameters of the experiments are listed below:
  • Population size for all algorithms: 100;
  • Maximum number of iterations: 1.000 × 10^5;
  • Dimension: 100;
  • Individual runs: 37;
  • Search range: [−100, +100].
To easily compare the optimization ability of the algorithms, the final error (FE) is used in the results analysis. The FE is defined in Equation (4).
$$FE = \left| finalgbest - TOV \right| \tag{4}$$
where $finalgbest$ is the gbest value of an algorithm after the last evolution and $TOV$ is the theoretical optimal value of the test function. Clearly, when comparing two methods, the one with the smaller FE has the better optimization performance. Numerical and ranked results for CEC2017 are shown in Table 1, Table 2, Table 3 and Table 4. The convergence curves of the experiments are shown in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7.
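In code, the FE of Equation (4) is a one-line computation; the numbers in the comment below are invented for illustration only.

def final_error(final_gbest: float, tov: float) -> float:
    # Equation (4): absolute gap between the gbest value after the last
    # evolution and the theoretical optimal value (TOV) of the function.
    return abs(final_gbest - tov)

# e.g., if a run ends with gbest = 128.5 on a function whose TOV is 100,
# final_error(128.5, 100.0) returns 28.5.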

Numerical Analysis

Among the nine algorithms, SSO achieved eight first places, nine second places, five third places, two fourth places, three fifth places, and two sixth places, with an average rank of 2.62, the best among all algorithms. The per-function comparisons are listed below:
  • In f1, the rank of SSO is 3; the first method is AVOA; the difference between the two methods is 76.174%;
  • In f2, the rank of SSO is 3; the first method is AVOA; the difference between the two methods is 100%;
  • In f3, the rank of SSO is 6; the first method is AVOA; the difference between the two methods is 98.343%;
  • In f4, the rank of SSO is 1; the second method is ARBBPSO; the difference between the two methods is 5.42%;
  • In f5, the rank of SSO is 4; the first method is GWO; the difference between the two methods is 34.647%;
  • In f6, the rank of SSO is 3; the first method is GWO; the difference between the two methods is 21.938%;
  • In f7, the rank of SSO is 1; the second method is ARBBPSO; the difference between the two methods is 6.178%;
  • In f8, the rank of SSO is 2; the first method is GWO; the difference between the two methods is 26.383%;
  • In f9, the rank of SSO is 4; the first method is GWO; the difference between the two methods is 39.515%;
  • In f10, the rank of SSO is 5; the first method is GWO; the difference between the two methods is 20.694%;
  • In f11, the rank of SSO is 1; the second method is ARBBPSO; the difference between the two methods is 14.329%;
  • In f12, the rank of SSO is 2; the first method is AVOA; the difference between the two methods is 34.981%;
  • In f13, the rank of SSO is 3; the first method is ETBBPSO; the difference between the two methods is 18.873%;
  • In f14, the rank of SSO is 2; the first method is AVOA; the difference between the two methods is 51.522%;
  • In f15, the rank of SSO is 1; the second method is ETBBPSO; the difference between the two methods is 14.277%;
  • In f16, the rank of SSO is 3; the first method is GWO; the difference between the two methods is 28.221%;
  • In f17, the rank of SSO is 5; the first method is GWO; the difference between the two methods is 42.281%;
  • In f18, the rank of SSO is 5; the first method is AVOA; the difference between the two methods is 86.234%;
  • In f19, the rank of SSO is 1; the second method is ARBBPSO; the difference between the two methods is 27.192%;
  • In f20, the rank of SSO is 2; the first method is GWO; the difference between the two methods is 28.455%;
  • In f21, the rank of SSO is 2; the first method is GWO; the difference between the two methods is 22.436%;
  • In f22, the rank of SSO is 6; the first method is GWO; the difference between the two methods is 31.869%;
  • In f23, the rank of SSO is 2; the first method is GWO; the difference between the two methods is 8.565%;
  • In f24, the rank of SSO is 2; the first method is GWO; the difference between the two methods is 12.753%;
  • In f25, the rank of SSO is 2; the first method is ARBBPSO; the difference between the two methods is 0.131%;
  • In f26, the rank of SSO is 2; the first method is GWO; the difference between the two methods is 26.263%;
  • In f27, the rank of SSO is 1; the second method is ARBBPSO; the difference between the two methods is 0%;
  • In f28, the rank of SSO is 1; the second method is ARBBPSO; the difference between the two methods is 0%;
  • In f29, the rank of SSO is 1; the second method is GWO; the difference between the two methods is 20.934%.
In summary, SSO demonstrates outstanding performance, versatility, efficiency, robustness, and broad applicability in addressing high-dimensional optimization problems.

4.2. Numerical Experiments with CEC2022

To further validate the performance of SSO, CEC2022 was utilized for simulation experiments. The experimental results showed that out of a total of 12 test functions, SSO secured eight first-place rankings, one second-place ranking, and three third-place rankings, with an average ranking of 1.58. Numerical and ranked results of CEC2022 are shown in Table 5 and Table 6.

4.3. Optimization Problem Formulation for Deep-Sea Probe Design

In this extended design optimization problem for a deep-sea probe, we aim to minimize the total weight of the probe while accounting for various critical design variables and constraints. The problem is formulated with eight design variables: the wall thicknesses of the cylindrical section and the end caps, the radius, the length, the material density, the internal pressure capacity, the battery energy storage, and the sensor diameter. The objective function reflects the total weight, which is influenced by these variables, while the constraints ensure the probe’s structural integrity, energy sufficiency, and appropriate sensor arrangement within the probe’s internal volume. The design variables are shown in Equation (5).
$$\mathbf{x} = [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8] = [t_s, t_h, r, l, \rho_{material}, P_{max}, E_{battery}, d_{sensor}] \tag{5}$$
where:
  • $t_s$: wall thickness of the cylindrical section;
  • $t_h$: wall thickness of the end caps;
  • $r$: radius of the probe;
  • $l$: length of the probe;
  • $\rho_{material}$: material density;
  • $P_{max}$: maximum internal pressure capacity;
  • $E_{battery}$: battery energy storage;
  • $d_{sensor}$: sensor diameter.
The objective function, which represents the total weight of the probe, is shown in Equation (6).
$$\text{Minimize } f(\mathbf{x}) = c_1 x_1 x_3 x_4 + c_2 x_2 x_3^2 + c_3 x_1^2 x_4 + c_4 x_1^2 x_3 + c_5 \rho_{material} x_3^2 x_4 + c_6 E_{battery} + c_7 d_{sensor}^2 \tag{6}$$
where $c_1, c_2, \ldots, c_7$ are constants related to material properties, gravitational acceleration, and design specifics. The problem is subject to the constraints shown in Equation (7):
$$\begin{aligned} g_1(\mathbf{x}) &= \frac{P_{max}\, r}{t_s} - \sigma_{allowable} \le 0 \\ g_2(\mathbf{x}) &= \frac{P_{external}\, r}{t_h} - \sigma_{allowable} \le 0 \\ g_3(\mathbf{x}) &= W_{total} - \rho_{water}\, g\, V_{displaced} \le 0 \\ g_4(\mathbf{x}) &= -\pi r^2 l - \frac{4 \pi r^3}{3} + V_{required}(d_{sensor}, E_{battery}) \le 0 \\ g_5(\mathbf{x}) &= -E_{battery} + E_{required} \le 0 \\ g_6(\mathbf{x}) &= d_{sensor} - d_{max} \le 0 \end{aligned} \tag{7}$$
The goal is to find the optimal set of variables x that minimizes the total weight while satisfying all the constraints. Experimental results are shown in Table 7. The convergence graph is shown in Figure 8, and the standard deviation is shown in Figure 9. The experimental results demonstrate that SSO provides high-precision solutions for the Deep-Sea Probe Design Problem. A sketch of how this formulation can be evaluated in code is given below.
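To illustrate how Equations (5)-(7) can be wired into a metaheuristic, the following Python sketch evaluates the probe design with a static penalty function. The constants c1 to c7, the material and environmental limits, and the V_required model are hypothetical placeholders: the paper does not report their numerical values.

import numpy as np

# Hypothetical c1..c7; the paper does not report these constants.
C = np.array([0.6224, 1.7781, 3.1661, 19.84, 1.0, 1.0, 1.0])

def probe_weight(x):
    # Objective of Equation (6): total weight of the probe.
    ts, th, r, l, rho, p_max, e_batt, d_sens = x
    return (C[0]*ts*r*l + C[1]*th*r**2 + C[2]*ts**2*l + C[3]*ts**2*r
            + C[4]*rho*r**2*l + C[5]*e_batt + C[6]*d_sens**2)

def constraint_violations(x):
    # Positive parts of g1..g6 from Equation (7); all limits are
    # illustrative stand-ins for the paper's unspecified constants.
    ts, th, r, l, rho, p_max, e_batt, d_sens = x
    sigma_allow, p_ext = 2.0e8, 1.1e8           # Pa (hypothetical)
    rho_water, grav = 1025.0, 9.81              # kg/m^3, m/s^2
    e_required, d_max = 5.0e3, 0.5              # hypothetical energy / diameter limits
    v_displaced = np.pi*r**2*l + 4.0/3.0*np.pi*r**3
    v_required = 0.1*d_sens**3 + 1.0e-4*e_batt  # placeholder internal-volume model
    g = np.array([
        p_max*r/ts - sigma_allow,                         # g1: cylinder hoop stress
        p_ext*r/th - sigma_allow,                         # g2: end-cap stress
        probe_weight(x) - rho_water*grav*v_displaced,     # g3: buoyancy
        -np.pi*r**2*l - 4.0/3.0*np.pi*r**3 + v_required,  # g4: internal volume
        -e_batt + e_required,                             # g5: energy sufficiency
        d_sens - d_max,                                   # g6: sensor diameter
    ])
    return np.maximum(g, 0.0)

def penalized_fitness(x, weight=1.0e6):
    # Static penalty: infeasible designs are heavily penalized, so an
    # unconstrained search such as SSO is steered toward the feasible region.
    return probe_weight(x) + weight * float(constraint_violations(x).sum())

With this wrapper, penalized_fitness can be passed directly as the fitness function func of the SSO sketch in Section 3, with the search range restricted to physically meaningful bounds for each variable.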

4.4. Discussion

The Salmon Salar Optimization (SSO) algorithm offers several key advantages in high-dimensional optimization problems, primarily due to its ability to balance exploration and exploitation effectively. By simulating the collective behavior of salmon, the algorithm dynamically adjusts its search strategy, allowing it to explore new areas of the search space while refining known solutions. This enables SSO to avoid local optima, a common issue in high-dimensional problems, and consistently move toward global optimal solutions. The algorithm’s adaptability further enhances its performance by allowing it to adjust key parameters based on the nature and complexity of the problem, ensuring robust and efficient optimization without the need for extensive parameter tuning.
However, in each iteration, SSO performs three fitness function evaluations (FFEs), leading to a significant increase in computational time, particularly for high-dimensional and complex optimization problems. This increased computational load can hinder the algorithm’s efficiency, making it less suitable for large-scale or time-sensitive applications that demand fast convergence. The need to perform multiple FFEs per iteration poses a challenge, as it directly impacts the algorithm’s scalability and practicality in real-world scenarios. One of the primary limitations of SSO lies in this computational cost, which can become a bottleneck in certain applications. Future research should focus on reducing the number of FFEs or optimizing the evaluation process to improve the algorithm’s efficiency without compromising solution accuracy. Addressing this limitation is essential for enhancing SSO’s competitiveness and applicability in complex engineering and scientific problems.

5. Conclusions

A novel metaheuristic approach, Salmon Salar Optimization (SSO), is proposed to address the complex design challenges associated with deep-sea probes, which are crucial for unconventional oil exploration. SSO distinguishes itself as an innovative solution to the intricate issues of high-dimensional optimization by emulating the social structure and collective behavior of salmon populations.
To rigorously evaluate SSO’s effectiveness, a series of simulation experiments were conducted using the challenging CEC2017 benchmark functions. In these benchmarks, SSO was tested against eight historically top-performing algorithms, serving as a stringent control group. The results were remarkable. Out of the nine algorithms considered, SSO achieved an impressive record: eight first-place rankings, nine second-place rankings, five third-place rankings, and several additional top placements. With an average ranking of 2.62, SSO clearly emerged as the leading algorithm among its competitors.
SSO demonstrated its ability to effectively tackle the complex, high-dimensional design challenges associated with deep-sea probes, which are critical for offshore exploration and extraction of unconventional oil resources. By optimizing both the structural and functional aspects of these probes under stringent conditions, SSO has made significant advancements in deep-sea exploration technologies, facilitating the efficient and sustainable development of unconventional oil resources.
In summary, SSO exhibits outstanding characteristics, including high performance, adaptability, efficiency, robustness, and broad applicability to the multifaceted challenges of high-dimensional optimization problems. These qualities underscore SSO’s position as a powerful tool for addressing complex optimization issues and pave the way for promising future research directions.
Future work should focus on further fine-tuning SSO parameters to enhance optimization performance across diverse problem domains. Additionally, investigating SSO’s potential in multi-objective optimization scenarios and extending its application to real-world problem-solving contexts represent exciting research avenues that merit exploration.

Author Contributions

Conceptualization, J.G.; methodology, J.G.; software, J.G.; validation, J.G.; formal analysis, J.G.; investigation, J.G.; resources, J.G.; data curation, Z.Y., Q.Z. and Y.S.; writing—original draft preparation, J.G.; writing—review and editing, Y.S.; visualization, Z.Y., Q.Z. and Y.S.; supervision, Y.S.; project administration, Q.Z.; funding acquisition, J.G., Z.Y. and Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Hubei Province (2023AFB003, 2023AFB004); the Education Department Scientific Research Program Project of Hubei Province of China (Q20222208, Q20232206); and JSPS KAKENHI Grant Number JP22K12185.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available on request.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Petrovic, A.; Damaševičius, R.; Jovanovic, L.; Toskovic, A.; Simic, V.; Bacanin, N.; Zivkovic, M.; Spalević, P. Marine Vessel Classification and Multivariate Trajectories Forecasting Using Metaheuristics-Optimized eXtreme Gradient Boosting and Recurrent Neural Networks. Appl. Sci. 2023, 13, 9181. [Google Scholar] [CrossRef]
  2. Yaseen, Z.M.; Melini Wan Mohtar, W.H.; Homod, R.Z.; Alawi, O.A.; Abba, S.I.; Oudah, A.Y.; Togun, H.; Goliatt, L.; Ul Hassan Kazmi, S.S.; Tao, H. Heavy metals prediction in coastal marine sediments using hybridized machine learning models with metaheuristic optimization algorithm. Chemosphere 2024, 352, 141329. [Google Scholar] [CrossRef] [PubMed]
  3. Zhang, Y.H.; Wang, X.J.; Zhang, X.Z.; Saad, M.; Zhao, R.J. Numerical Investigation of the Impacts of Large Particles on the Turbulent Flow and Surface Wear in Series-Connected Bends. J. Mar. Sci. Eng. 2024, 12, 164. [Google Scholar] [CrossRef]
  4. Nguyen, T.H.H.; Hou, T.H.; Pham, H.A.; Tsai, C.C. Oil Spill Sensitivity Analysis of the Coastal Waters of Taiwan Using an Integrated Modelling Approach. J. Mar. Sci. Eng. 2024, 12, 155. [Google Scholar] [CrossRef]
  5. Xing, R.; Zhang, Y.; Feng, Y.; Ji, F. Performance Analysis of a WPCN-Based Underwater Acoustic Communication System. J. Mar. Sci. Eng. 2023, 12, 43. [Google Scholar] [CrossRef]
  6. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  7. Fister, I.; Yang, X.S.; Brest, J. A comprehensive review of firefly algorithms. Swarm Evol. Comput. 2013, 13, 34–46. [Google Scholar] [CrossRef]
  8. Zhou, X.G.; Zhang, G.J. Differential evolution with underestimation-based multimutation strategy. IEEE Trans. Cybern. 2019, 49, 1353–1364. [Google Scholar] [CrossRef] [PubMed]
  9. Chen, L.; Liu, Y.; Gao, Y.; Wang, J. Carbon Emission Trading Policy and Carbon Emission Efficiency: An Empirical Analysis of China’s Prefecture-Level Cities. Front. Energy Res. 2021, 9, 793601. [Google Scholar] [CrossRef]
  10. Zhang, X.; Zou, D.; Shen, X. A novel simple particle swarm optimization algorithm for global optimization. Mathematics 2018, 6, 287. [Google Scholar] [CrossRef]
  11. Qiao, J.; Zhou, H.; Yang, C. Bare-Bones Multiobjective Particle Swarm Optimization Based on Parallel Cell Balanceable Fitness Estimation. IEEE Access 2018, 6, 32493–32506. [Google Scholar] [CrossRef]
  12. Zhang, X.; Melbourne, S.; Sarkar, C.; Chiaradia, A.; Webster, C. Effects of green space on walking: Does size, shape and density matter? Urban Stud. 2020, 57, 3402–3420. [Google Scholar] [CrossRef]
  13. Singh, G.; Singh, A. A hybrid algorithm using particle swarm optimization for solving transportation problem. Neural Comput. Appl. 2020, 32, 11699–11716. [Google Scholar] [CrossRef]
  14. Meng, X.; Li, J.; Member, S.; Dai, X.; Dou, J. Variable Neighborhood Search for a Colored Traveling Salesman Problem. IEEE Trans. Intell. Transp. Syst. 2018, 19, 1018–1026. [Google Scholar] [CrossRef]
  15. Meng, X.; Li, J.; Member, S.; Zhou, M.; Dai, X.; Dou, J. Population-Based Incremental Learning Algorithm for a Serial Colored Traveling Salesman Problem. IEEE Trans. Syst. Man, Cybern. Syst. 2018, 48, 277–288. [Google Scholar] [CrossRef]
  16. Al-Andoli, M.; Tan, S.C.; Cheah, W.P. Parallel stacked autoencoder with particle swarm optimization for community detection in complex networks. Appl. Intell. 2022, 52, 3366–3386. [Google Scholar] [CrossRef]
  17. Ahandani, M.A.; Abbasfam, J.; Kharrati, H. Parameter identification of permanent magnet synchronous motors using quasi-opposition-based particle swarm optimization and hybrid chaotic particle swarm optimization algorithms. Appl. Intell. 2022, 52, 13082–13096. [Google Scholar] [CrossRef]
  18. Zhang, J.; Zhao, X.; Jin, S.; Greaves, D. Phase-resolved real-time ocean wave prediction with quantified uncertainty based on variational Bayesian machine learning. Appl. Energy 2022, 324, 119711. [Google Scholar] [CrossRef]
  19. Hu, P.; Pan, J.S.; Chu, S.C.; Sun, C. Multi-surrogate assisted binary particle swarm optimization algorithm and its application for feature selection. Appl. Soft Comput. 2022, 121, 108736. [Google Scholar] [CrossRef]
  20. Wang, H.; Zhang, Z. Forecasting Chinese provincial carbon emissions using a novel grey prediction model considering spatial correlation. Expert Syst. Appl. 2022, 209, 118261. [Google Scholar] [CrossRef]
  21. Lu, B.; Zhou, C. Particle Swarm Algorithm and Its Application in Tourism Route Design and Optimization. Comput. Intell. Neurosci. 2022, 2022, 6467086. [Google Scholar] [CrossRef]
  22. Pan, J.; Bardhan, R. Evaluating the risk of accessing green spaces in COVID-19 pandemic: A model for public urban green spaces (PUGS) in London. Urban For. Urban Green. 2022, 74, 127648. [Google Scholar] [CrossRef] [PubMed]
  23. Gao, Y.; Du, W.; Yan, G. Selectively-informed particle swarm optimization. Sci. Rep. 2015, 5, 9295. [Google Scholar] [CrossRef] [PubMed]
  24. Liang, X.; Li, W.; Zhang, Y.; Zhou, M. An adaptive particle swarm optimization method based on clustering. Soft Comput. 2015, 19, 431–448. [Google Scholar] [CrossRef]
  25. Li, J.; Zhang, J.; Jiang, C.; Zhou, M. Composite Particle Swarm Optimizer with Historical Memory for Function Optimization. IEEE Trans. Cybern. 2015, 45, 2350–2363. [Google Scholar] [CrossRef]
  26. Pornsing, C.; Sodhi, M.S.; Lamond, B.F. Novel self-adaptive particle swarm optimization methods. Soft Comput. 2016, 20, 3579–3593. [Google Scholar] [CrossRef]
  27. Guo, J.; Sato, Y. A pair-wise bare bones particle swarm optimization algorithm. In Proceedings of the 2017 IEEE/ACIS 16th International Conference on Computer and Information Science (ICIS), Wuhan, China, 24–26 May 2017; Number 1. pp. 353–358. [Google Scholar] [CrossRef]
  28. Guo, J.; Sato, Y. A Bare Bones Particle Swarm Optimization Algorithm with Dynamic Local Search. In Advances in Swarm Intelligence: 8th International Conference, ICSI 2017, Fukuoka, Japan, 27 July–1 August 2017, Proceedings, Part I; Tan, Y., Takagi, H., Shi, Y., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 158–165. [Google Scholar] [CrossRef]
  29. Guo, J.; Sato, Y. A Hierarchical Bare Bones Particle Swarm Optimization Algorithm. In Proceedings of the 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 1936–1941. [Google Scholar] [CrossRef]
  30. Kennedy, J. Bare bones particle swarms. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium, SIS’03 (Cat. No.03EX706), Indianapolis, IN, USA, 26 April 2003; pp. 80–87. [Google Scholar] [CrossRef]
  31. Xu, X.; Rong, H.; Trovati, M.; Liptrott, M.; Bessis, N. CS-PSO: Chaotic particle swarm optimization algorithm for solving combinatorial optimization problems. Soft Comput. 2018, 22, 783–795. [Google Scholar] [CrossRef]
  32. Tian, D.; Shi, Z. MPSO: Modified particle swarm optimization and its applications. Swarm Evol. Comput. 2018, 41, 49–68. [Google Scholar] [CrossRef]
  33. Ghasemi, M.; Akbari, E.; Rahimnejad, A.; Razavi, S.E.; Ghavidel, S.; Li, L. Phasor particle swarm optimization: A simple and efficient variant of PSO. Soft Comput. 2019, 23, 9701–9718. [Google Scholar] [CrossRef]
  34. Guo, J.; Sato, Y. A fission-fusion hybrid bare bones particle swarm optimization algorithm for single-objective optimization problems. Appl. Intell. 2019, 49, 3641–3651. [Google Scholar] [CrossRef]
  35. Xu, G.; Cui, Q.; Shi, X.; Ge, H.; Zhan, Z.H.; Lee, H.P.; Liang, Y.; Tai, R.; Wu, C. Particle swarm optimization based on dimensional learning strategy. Swarm Evol. Comput. 2019, 45, 33–51. [Google Scholar] [CrossRef]
  36. Xu, Y.; Pi, D. A reinforcement learning-based communication topology in particle swarm optimization. Neural Comput. Appl. 2020, 32, 10007–10032. [Google Scholar] [CrossRef]
  37. Yamanaka, Y.; Yoshida, K. Simple gravitational particle swarm algorithm for multimodal optimization problems. PLoS ONE 2021, 16, e0248470. [Google Scholar] [CrossRef] [PubMed]
  38. Liu, J.; Jin, B.; Yang, J.; Xu, L. Sea surface temperature prediction using a cubic B-spline interpolation and spatiotemporal attention mechanism. Remote Sens. Lett. 2021, 12, 478–487. [Google Scholar] [CrossRef]
  39. Wang, Z.J.; Zhan, Z.H.; Kwong, S.; Jin, H.; Zhang, J. Adaptive Granularity Learning Distributed Particle Swarm Optimization for Large-Scale Optimization. IEEE Trans. Cybern. 2021, 51, 1175–1188. [Google Scholar] [CrossRef] [PubMed]
  40. Li, X.; Wang, Z.; Ying, Y.; Xiao, F. Multipopulation Particle Swarm Optimization Algorithm with Neighborhood Learning. Sci. Program. 2022, 2022, 8312450. [Google Scholar] [CrossRef]
  41. Tian, H.; Guo, J.; Xiao, H.; Yan, K.; Sato, Y. An electronic transition-based bare bones particle swarm optimization algorithm for high dimensional optimization problems. PLoS ONE 2022, 17, e0271925. [Google Scholar] [CrossRef]
  42. Guo, Q.; Su, Z.; Chiao, C. Carbon emissions trading policy, carbon finance, and carbon emissions reduction: Evidence from a quasi-natural experiment in China. Econ. Chang. Restruct. 2022, 55, 1445–1480. [Google Scholar] [CrossRef]
  43. Zhou, G.; Guo, J.; Yan, K.; Zhou, G.; Li, B. An Atomic Retrospective Learning Bare Bone Particle Swarm Optimization. In Advances in Swarm Intelligence. ICSI 2023; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 13968, pp. 168–179. [Google Scholar] [CrossRef]
  44. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  45. Mirjalili, S.; Lewis, A. The Whale Optimization Algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  46. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef]
  47. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  48. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  49. Abdollahzadeh, B.; Soleimanian Gharehchopogh, F.; Mirjalili, S. Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. Int. J. Intell. Syst. 2021, 36, 22535. [Google Scholar] [CrossRef]
Figure 1. Flowchart of SSO.
Figure 2. Convergence graph and standard deviation of F1 to F5. In the Error figure, the red square stands for the mean value, and the blue line stands for the error.
Figure 3. Convergence graph and standard deviation of F6 to F10. In the Error figure, the red square stands for the mean value, and the blue line stands for the error.
Figure 4. Convergence graph and standard deviation of F11 to F15. In the Error figure, the red square stands for the mean value, and the blue line stands for the error.
Figure 5. Convergence graph and standard deviation of F16 to F20. In the Error figure, the red square stands for the mean value, and the blue line stands for the error.
Figure 6. Convergence graph and standard deviation of F21 to F25. In the Error figure, the red square stands for the mean value, and the blue line stands for the error.
Figure 7. Convergence graph and standard deviation of F26 to F29. In the Error figure, the red square stands for the mean value, and the blue line stands for the error.
Figure 8. Convergence graph of the simulated Deep-Sea Probe Design Problem.
Figure 9. Standard deviation of the simulated Deep-Sea Probe Design Problem. The red square stands for the mean value, and the blue line stands for the error.
Table 1. The simulation results of SSO, DBO, WOA, GWO, HHO, AVOA, GTO, ARBBPSO and ETBBPSO, f1–f8; best results are shown in bold.
F | Type | SSO | DBO | WOA | GWO | HHO | AVOA | GTO | ARBBPSO | ETBBPSO
1 | Mean | 2.808 × 10^4 | 5.748 × 10^7 | 3.515 × 10^7 | 3.184 × 10^10 | 2.097 × 10^8 | 6.690 × 10^3 | 2.604 × 10^9 | 2.912 × 10^4 | 1.656 × 10^4
1 | Std | 2.760 × 10^4 | 3.705 × 10^7 | 1.173 × 10^7 | 7.385 × 10^9 | 2.181 × 10^7 | 9.224 × 10^3 | 1.822 × 10^9 | 3.091 × 10^4 | 2.186 × 10^4
1 | Best | 1.851 × 10^1 | 1.490 × 10^6 | 1.560 × 10^7 | 2.009 × 10^10 | 1.621 × 10^8 | 9.860 × 10^0 | 7.170 × 10^8 | 1.369 × 10^2 | 4.990 × 10^0
1 | Worst | 8.870 × 10^4 | 1.431 × 10^8 | 7.225 × 10^7 | 5.204 × 10^10 | 2.517 × 10^8 | 3.450 × 10^4 | 8.877 × 10^9 | 1.281 × 10^5 | 1.120 × 10^5
1 | Rank | 3 | 6 | 5 | 9 | 7 | 1 | 8 | 4 | 2
2 | Mean | 1.909 × 10^89 | 1.845 × 10^134 | 8.159 × 10^135 | 6.547 × 10^125 | 1.117 × 10^73 | 1.326 × 10^30 | 3.553 × 10^119 | 9.078 × 10^125 | 7.782 × 10^114
2 | Std | 7.879 × 10^89 | 1.122 × 10^135 | 4.955 × 10^136 | 3.976 × 10^126 | 5.615 × 10^73 | 5.600 × 10^30 | 1.423 × 10^120 | 5.522 × 10^126 | 3.073 × 10^115
2 | Best | 1.679 × 10^68 | 2.434 × 10^98 | 2.676 × 10^110 | 5.418 × 10^97 | 6.231 × 10^59 | 3.731 × 10^13 | 9.303 × 10^99 | 6.574 × 10^84 | 2.233 × 10^88
2 | Worst | 4.579 × 10^90 | 6.822 × 10^135 | 3.014 × 10^137 | 2.418 × 10^127 | 3.375 × 10^74 | 2.681 × 10^31 | 7.773 × 10^120 | 3.359 × 10^127 | 1.718 × 10^116
2 | Rank | 3 | 8 | 9 | 6 | 2 | 1 | 5 | 7 | 4
3 | Mean | 1.227 × 10^6 | 3.241 × 10^5 | 6.068 × 10^5 | 2.008 × 10^5 | 3.406 × 10^4 | 2.034 × 10^4 | 2.030 × 10^6 | 1.589 × 10^6 | 2.934 × 10^6
3 | Std | 1.186 × 10^6 | 1.856 × 10^4 | 1.497 × 10^5 | 1.767 × 10^4 | 8.364 × 10^3 | 6.828 × 10^3 | 7.150 × 10^6 | 6.647 × 10^5 | 1.808 × 10^6
3 | Best | 4.535 × 10^5 | 2.578 × 10^5 | 3.368 × 10^5 | 1.638 × 10^5 | 1.796 × 10^4 | 8.612 × 10^3 | 2.438 × 10^5 | 6.560 × 10^5 | 1.059 × 10^6
3 | Worst | 4.888 × 10^6 | 3.516 × 10^5 | 8.400 × 10^5 | 2.421 × 10^5 | 5.648 × 10^4 | 3.839 × 10^4 | 4.276 × 10^7 | 3.261 × 10^6 | 8.476 × 10^6
3 | Rank | 6 | 4 | 5 | 3 | 2 | 1 | 8 | 7 | 9
4 | Mean | 1.493 × 10^2 | 4.300 × 10^2 | 6.164 × 10^2 | 2.268 × 10^3 | 4.530 × 10^2 | 2.495 × 10^2 | 1.473 × 10^3 | 1.578 × 10^2 | 1.683 × 10^2
4 | Std | 4.601 × 10^1 | 7.513 × 10^1 | 8.827 × 10^1 | 6.743 × 10^2 | 5.750 × 10^1 | 4.472 × 10^1 | 4.581 × 10^2 | 5.156 × 10^1 | 5.145 × 10^1
4 | Best | 8.963 × 10^1 | 2.765 × 10^2 | 4.498 × 10^2 | 1.096 × 10^3 | 3.622 × 10^2 | 1.930 × 10^2 | 7.983 × 10^2 | 7.623 × 10^1 | 7.982 × 10^1
4 | Worst | 2.934 × 10^2 | 5.789 × 10^2 | 8.222 × 10^2 | 4.021 × 10^3 | 5.927 × 10^2 | 3.663 × 10^2 | 2.757 × 10^3 | 3.117 × 10^2 | 2.893 × 10^2
4 | Rank | 1 | 5 | 7 | 9 | 6 | 4 | 8 | 2 | 3
5 | Mean | 8.255 × 10^2 | 1.089 × 10^3 | 9.223 × 10^2 | 5.395 × 10^2 | 9.242 × 10^2 | 7.989 × 10^2 | 8.184 × 10^2 | 9.809 × 10^2 | 9.808 × 10^2
5 | Std | 1.336 × 10^2 | 1.545 × 10^2 | 1.244 × 10^2 | 5.188 × 10^1 | 4.758 × 10^1 | 6.290 × 10^1 | 1.499 × 10^2 | 1.781 × 10^2 | 1.639 × 10^2
5 | Best | 5.462 × 10^2 | 8.141 × 10^2 | 7.312 × 10^2 | 4.092 × 10^2 | 8.221 × 10^2 | 6.398 × 10^2 | 6.021 × 10^2 | 6.248 × 10^2 | 6.706 × 10^2
5 | Worst | 1.081 × 10^3 | 1.361 × 10^3 | 1.250 × 10^3 | 6.650 × 10^2 | 1.038 × 10^3 | 9.522 × 10^2 | 1.393 × 10^3 | 1.369 × 10^3 | 1.270 × 10^3
5 | Rank | 4 | 9 | 5 | 1 | 6 | 2 | 3 | 8 | 7
6 | Mean | 3.455 × 10^1 | 7.188 × 10^1 | 7.737 × 10^1 | 2.697 × 10^1 | 7.590 × 10^1 | 4.148 × 10^1 | 7.272 × 10^1 | 3.194 × 10^1 | 3.806 × 10^1
6 | Std | 9.455 × 10^0 | 9.560 × 10^0 | 9.166 × 10^0 | 4.047 × 10^0 | 3.312 × 10^0 | 4.854 × 10^0 | 1.090 × 10^1 | 5.052 × 10^0 | 7.284 × 10^0
6 | Best | 1.108 × 10^1 | 5.496 × 10^1 | 6.168 × 10^1 | 1.967 × 10^1 | 6.720 × 10^1 | 3.192 × 10^1 | 4.958 × 10^1 | 1.972 × 10^1 | 2.120 × 10^1
6 | Worst | 5.381 × 10^1 | 9.361 × 10^1 | 1.048 × 10^2 | 3.501 × 10^1 | 8.179 × 10^1 | 5.283 × 10^1 | 9.305 × 10^1 | 4.202 × 10^1 | 5.119 × 10^1
6 | Rank | 3 | 6 | 9 | 1 | 8 | 5 | 7 | 2 | 4
7 | Mean | 8.635 × 10^2 | 1.507 × 10^3 | 2.561 × 10^3 | 1.044 × 10^3 | 2.740 × 10^3 | 2.101 × 10^3 | 1.754 × 10^3 | 9.204 × 10^2 | 9.485 × 10^2
7 | Std | 1.510 × 10^2 | 3.905 × 10^2 | 1.393 × 10^2 | 1.037 × 10^2 | 8.270 × 10^1 | 1.582 × 10^2 | 2.081 × 10^2 | 1.256 × 10^2 | 1.575 × 10^2
7 | Best | 6.102 × 10^2 | 1.036 × 10^3 | 2.212 × 10^3 | 8.211 × 10^2 | 2.560 × 10^3 | 1.773 × 10^3 | 1.337 × 10^3 | 6.931 × 10^2 | 6.999 × 10^2
7 | Worst | 1.147 × 10^3 | 3.145 × 10^3 | 2.894 × 10^3 | 1.309 × 10^3 | 2.932 × 10^3 | 2.442 × 10^3 | 2.239 × 10^3 | 1.212 × 10^3 | 1.326 × 10^3
7 | Rank | 1 | 5 | 8 | 4 | 9 | 7 | 6 | 2 | 3
8 | Mean | 7.509 × 10^2 | 1.143 × 10^3 | 1.069 × 10^3 | 5.528 × 10^2 | 1.026 × 10^3 | 9.013 × 10^2 | 8.395 × 10^2 | 9.049 × 10^2 | 9.439 × 10^2
8 | Std | 1.620 × 10^2 | 1.525 × 10^2 | 1.231 × 10^2 | 7.252 × 10^1 | 5.340 × 10^1 | 8.687 × 10^1 | 1.798 × 10^2 | 1.495 × 10^2 | 1.705 × 10^2
8 | Best | 5.174 × 10^2 | 7.631 × 10^2 | 8.041 × 10^2 | 4.036 × 10^2 | 8.413 × 10^2 | 7.323 × 10^2 | 5.891 × 10^2 | 5.801 × 10^2 | 5.612 × 10^2
8 | Worst | 1.132 × 10^3 | 1.405 × 10^3 | 1.476 × 10^3 | 7.012 × 10^2 | 1.108 × 10^3 | 1.102 × 10^3 | 1.533 × 10^3 | 1.296 × 10^3 | 1.223 × 10^3
8 | Rank | 2 | 9 | 8 | 1 | 7 | 4 | 3 | 5 | 6
Table 2. The simulation results of SSO, DBO, WOA, GWO, HHO, AVOA, GTO, ARBBPSO and ETBBPSO, f9–f16; best results are shown in bold.
F | Type | SSO | DBO | WOA | GWO | HHO | AVOA | GTO | ARBBPSO | ETBBPSO
9 | Mean | 3.189 × 10^4 | 3.533 × 10^4 | 3.694 × 10^4 | 1.929 × 10^4 | 2.902 × 10^4 | 2.176 × 10^4 | 4.350 × 10^4 | 3.872 × 10^4 | 3.285 × 10^4
9 | Std | 8.257 × 10^3 | 9.423 × 10^3 | 1.313 × 10^4 | 1.023 × 10^4 | 2.590 × 10^3 | 1.672 × 10^3 | 1.855 × 10^4 | 1.076 × 10^4 | 1.002 × 10^4
9 | Best | 1.398 × 10^4 | 1.751 × 10^4 | 2.303 × 10^4 | 9.616 × 10^3 | 2.394 × 10^4 | 1.901 × 10^4 | 2.294 × 10^4 | 1.777 × 10^4 | 1.762 × 10^4
9 | Worst | 4.255 × 10^4 | 4.880 × 10^4 | 8.197 × 10^4 | 4.220 × 10^4 | 3.367 × 10^4 | 2.774 × 10^4 | 7.876 × 10^4 | 8.490 × 10^4 | 4.980 × 10^4
9 | Rank | 4 | 6 | 7 | 1 | 3 | 2 | 9 | 8 | 5
10 | Mean | 1.808 × 10^4 | 1.735 × 10^4 | 2.023 × 10^4 | 1.434 × 10^4 | 1.783 × 10^4 | 1.521 × 10^4 | 2.516 × 10^4 | 2.355 × 10^4 | 2.671 × 10^4
10 | Std | 7.565 × 10^3 | 1.543 × 10^3 | 2.902 × 10^3 | 2.535 × 10^3 | 1.718 × 10^3 | 1.573 × 10^3 | 6.647 × 10^3 | 5.728 × 10^3 | 8.382 × 10^3
10 | Best | 1.069 × 10^4 | 1.446 × 10^4 | 1.545 × 10^4 | 1.057 × 10^4 | 1.334 × 10^4 | 1.213 × 10^4 | 1.270 × 10^4 | 1.233 × 10^4 | 9.839 × 10^3
10 | Worst | 3.297 × 10^4 | 2.025 × 10^4 | 2.601 × 10^4 | 2.693 × 10^4 | 2.084 × 10^4 | 1.979 × 10^4 | 3.375 × 10^4 | 2.947 × 10^4 | 3.364 × 10^4
10 | Rank | 5 | 3 | 6 | 1 | 4 | 2 | 8 | 7 | 9
11 | Mean | 4.957 × 10^2 | 1.867 × 10^4 | 6.803 × 10^3 | 3.741 × 10^4 | 1.861 × 10^3 | 1.202 × 10^3 | 1.087 × 10^5 | 5.786 × 10^2 | 3.974 × 10^3
11 | Std | 1.357 × 10^2 | 3.081 × 10^4 | 3.448 × 10^3 | 1.107 × 10^4 | 1.889 × 10^2 | 2.255 × 10^2 | 9.458 × 10^4 | 2.244 × 10^2 | 5.112 × 10^3
11 | Best | 2.573 × 10^2 | 2.700 × 10^3 | 4.322 × 10^3 | 1.755 × 10^4 | 1.512 × 10^3 | 7.107 × 10^2 | 2.676 × 10^4 | 2.266 × 10^2 | 3.565 × 10^2
11 | Worst | 7.958 × 10^2 | 1.483 × 10^5 | 2.567 × 10^4 | 6.831 × 10^4 | 2.247 × 10^3 | 1.602 × 10^3 | 5.289 × 10^5 | 1.058 × 10^3 | 2.318 × 10^4
11 | Rank | 1 | 7 | 6 | 8 | 4 | 3 | 9 | 2 | 5
12 | Mean | 1.830 × 10^7 | 4.235 × 10^8 | 6.533 × 10^8 | 4.330 × 10^9 | 3.029 × 10^8 | 1.190 × 10^7 | 4.165 × 10^8 | 2.783 × 10^7 | 4.958 × 10^7
12 | Std | 1.196 × 10^7 | 2.298 × 10^8 | 2.408 × 10^8 | 2.095 × 10^9 | 9.060 × 10^7 | 6.478 × 10^6 | 1.147 × 10^8 | 1.477 × 10^7 | 2.681 × 10^7
12 | Best | 6.615 × 10^6 | 7.719 × 10^7 | 2.185 × 10^8 | 1.473 × 10^9 | 1.635 × 10^8 | 3.719 × 10^6 | 2.026 × 10^8 | 7.105 × 10^6 | 5.747 × 10^6
12 | Worst | 5.860 × 10^7 | 9.782 × 10^8 | 1.079 × 10^9 | 9.595 × 10^9 | 5.550 × 10^8 | 2.737 × 10^7 | 7.120 × 10^8 | 8.331 × 10^7 | 1.144 × 10^8
12 | Rank | 2 | 7 | 8 | 9 | 5 | 1 | 6 | 3 | 4
13 | Mean | 9.419 × 10^3 | 9.405 × 10^6 | 9.002 × 10^4 | 4.787 × 10^8 | 3.105 × 10^6 | 3.811 × 10^4 | 3.234 × 10^4 | 9.378 × 10^3 | 7.642 × 10^3
13 | Std | 1.186 × 10^4 | 1.835 × 10^7 | 3.417 × 10^4 | 4.293 × 10^8 | 5.306 × 10^5 | 1.059 × 10^4 | 1.306 × 10^4 | 1.530 × 10^4 | 1.233 × 10^4
13 | Best | 6.528 × 10^2 | 1.196 × 10^5 | 4.051 × 10^4 | 9.201 × 10^4 | 2.014 × 10^6 | 2.153 × 10^4 | 1.419 × 10^4 | 2.912 × 10^2 | 2.868 × 10^2
13 | Worst | 3.599 × 10^4 | 9.361 × 10^7 | 1.596 × 10^5 | 1.853 × 10^9 | 4.130 × 10^6 | 6.888 × 10^4 | 6.511 × 10^4 | 8.090 × 10^4 | 4.816 × 10^4
13 | Rank | 3 | 8 | 6 | 9 | 7 | 5 | 4 | 2 | 1
14 | Mean | 4.505 × 10^5 | 3.598 × 10^6 | 1.782 × 10^6 | 3.591 × 10^6 | 8.700 × 10^5 | 2.184 × 10^5 | 7.977 × 10^5 | 6.176 × 10^5 | 1.325 × 10^6
14 | Std | 2.583 × 10^5 | 3.441 × 10^6 | 1.059 × 10^6 | 2.115 × 10^6 | 3.002 × 10^5 | 9.406 × 10^4 | 6.201 × 10^5 | 4.363 × 10^5 | 6.751 × 10^5
14 | Best | 1.425 × 10^5 | 1.266 × 10^5 | 4.800 × 10^5 | 6.837 × 10^5 | 3.137 × 10^5 | 5.758 × 10^4 | 9.775 × 10^4 | 1.427 × 10^5 | 4.959 × 10^5
14 | Worst | 1.131 × 10^6 | 1.302 × 10^7 | 4.792 × 10^6 | 9.880 × 10^6 | 1.634 × 10^6 | 4.261 × 10^5 | 3.816 × 10^6 | 2.290 × 10^6 | 2.803 × 10^6
14 | Rank | 2 | 9 | 7 | 8 | 5 | 1 | 4 | 3 | 6
15 | Mean | 6.377 × 10^3 | 7.555 × 10^5 | 1.308 × 10^5 | 7.000 × 10^7 | 7.655 × 10^5 | 2.200 × 10^4 | 7.456 × 10^3 | 7.748 × 10^3 | 7.439 × 10^3
15 | Std | 5.773 × 10^3 | 1.481 × 10^6 | 2.144 × 10^5 | 1.005 × 10^8 | 3.090 × 10^5 | 7.872 × 10^3 | 4.781 × 10^3 | 1.019 × 10^4 | 1.312 × 10^4
15 | Best | 1.849 × 10^2 | 4.700 × 10^4 | 2.620 × 10^4 | 4.352 × 10^5 | 1.394 × 10^5 | 6.418 × 10^3 | 3.032 × 10^3 | 1.612 × 10^2 | 1.635 × 10^2
15 | Worst | 2.138 × 10^4 | 7.265 × 10^6 | 1.054 × 10^6 | 3.816 × 10^8 | 1.849 × 10^6 | 4.234 × 10^4 | 2.806 × 10^4 | 4.044 × 10^4 | 7.599 × 10^4
15 | Rank | 1 | 7 | 6 | 9 | 8 | 5 | 3 | 4 | 2
16 | Mean | 5.132 × 10^3 | 5.921 × 10^3 | 7.751 × 10^3 | 3.684 × 10^3 | 5.377 × 10^3 | 4.724 × 10^3 | 5.854 × 10^3 | 5.957 × 10^3 | 6.546 × 10^3
16 | Std | 1.326 × 10^3 | 9.500 × 10^2 | 1.853 × 10^3 | 6.423 × 10^2 | 5.029 × 10^2 | 5.282 × 10^2 | 1.846 × 10^3 | 2.099 × 10^3 | 2.552 × 10^3
16 | Best | 3.655 × 10^3 | 3.863 × 10^3 | 4.193 × 10^3 | 2.205 × 10^3 | 4.483 × 10^3 | 3.733 × 10^3 | 3.794 × 10^3 | 3.357 × 10^3 | 3.885 × 10^3
16 | Worst | 9.912 × 10^3 | 7.955 × 10^3 | 1.340 × 10^4 | 4.936 × 10^3 | 6.998 × 10^3 | 6.212 × 10^3 | 9.924 × 10^3 | 1.031 × 10^4 | 1.209 × 10^4
16 | Rank | 3 | 6 | 9 | 1 | 4 | 2 | 5 | 7 | 8
Table 3. The simulation results of SSO, DBO, WOA, GWO, HHO, AVOA, GTO, ARBBPSO and ETBBPSO, f17–f24; best results are shown in bold.
F | Type | SSO | DBO | WOA | GWO | HHO | AVOA | GTO | ARBBPSO | ETBBPSO
17 | Mean | 4.331 × 10^3 | 5.316 × 10^3 | 5.418 × 10^3 | 2.500 × 10^3 | 4.207 × 10^3 | 4.230 × 10^3 | 3.994 × 10^3 | 4.892 × 10^3 | 4.818 × 10^3
17 | Std | 8.247 × 10^2 | 1.054 × 10^3 | 7.340 × 10^2 | 5.376 × 10^2 | 4.684 × 10^2 | 4.697 × 10^2 | 9.162 × 10^2 | 1.001 × 10^3 | 1.093 × 10^3
17 | Best | 2.686 × 10^3 | 3.639 × 10^3 | 4.328 × 10^3 | 1.546 × 10^3 | 3.531 × 10^3 | 3.003 × 10^3 | 2.678 × 10^3 | 2.963 × 10^3 | 3.252 × 10^3
17 | Worst | 6.121 × 10^3 | 8.138 × 10^3 | 7.694 × 10^3 | 4.048 × 10^3 | 5.525 × 10^3 | 4.990 × 10^3 | 6.177 × 10^3 | 7.715 × 10^3 | 7.388 × 10^3
17 | Rank | 5 | 8 | 9 | 1 | 3 | 4 | 2 | 7 | 6
18 | Mean | 2.582 × 10^6 | 5.795 × 10^6 | 2.207 × 10^6 | 3.302 × 10^6 | 2.056 × 10^6 | 3.554 × 10^5 | 1.581 × 10^6 | 3.123 × 10^6 | 7.391 × 10^6
18 | Std | 1.838 × 10^6 | 5.146 × 10^6 | 7.891 × 10^5 | 1.528 × 10^6 | 6.659 × 10^5 | 1.237 × 10^5 | 8.044 × 10^5 | 1.601 × 10^6 | 4.976 × 10^6
18 | Best | 7.702 × 10^5 | 4.827 × 10^5 | 7.334 × 10^5 | 7.652 × 10^5 | 8.964 × 10^5 | 1.826 × 10^5 | 4.847 × 10^5 | 7.828 × 10^5 | 1.886 × 10^6
18 | Worst | 7.819 × 10^6 | 2.227 × 10^7 | 4.676 × 10^6 | 6.225 × 10^6 | 3.805 × 10^6 | 6.416 × 10^5 | 4.410 × 10^6 | 8.323 × 10^6 | 2.448 × 10^7
18 | Rank | 5 | 8 | 4 | 7 | 3 | 1 | 2 | 6 | 9
19 | Mean | 5.543 × 10^3 | 1.876 × 10^6 | 1.434 × 10^7 | 7.688 × 10^7 | 3.721 × 10^6 | 1.077 × 10^4 | 2.169 × 10^4 | 7.613 × 10^3 | 1.652 × 10^4
19 | Std | 7.741 × 10^3 | 2.501 × 10^6 | 6.244 × 10^6 | 9.678 × 10^7 | 1.493 × 10^6 | 8.269 × 10^3 | 2.103 × 10^4 | 1.145 × 10^4 | 1.577 × 10^4
19 | Best | 1.186 × 10^2 | 4.900 × 10^4 | 2.480 × 10^6 | 5.702 × 10^6 | 1.125 × 10^6 | 2.071 × 10^3 | 1.116 × 10^3 | 1.396 × 10^2 | 1.491 × 10^2
19 | Worst | 4.335 × 10^4 | 1.205 × 10^7 | 2.536 × 10^7 | 4.576 × 10^8 | 6.880 × 10^6 | 4.351 × 10^4 | 7.993 × 10^4 | 4.363 × 10^4 | 5.512 × 10^4
19 | Rank | 1 | 6 | 8 | 9 | 7 | 3 | 5 | 2 | 4
20 | Mean | 3.053 × 10^3 | 3.737 × 10^3 | 4.162 × 10^3 | 2.184 × 10^3 | 3.717 × 10^3 | 3.664 × 10^3 | 4.850 × 10^3 | 3.737 × 10^3 | 3.296 × 10^3
20 | Std | 5.343 × 10^2 | 6.889 × 10^2 | 5.279 × 10^2 | 4.699 × 10^2 | 5.790 × 10^2 | 4.433 × 10^2 | 1.186 × 10^3 | 8.359 × 10^2 | 1.128 × 10^3
20 | Best | 1.318 × 10^3 | 2.061 × 10^3 | 3.318 × 10^3 | 1.209 × 10^3 | 2.461 × 10^3 | 2.420 × 10^3 | 2.189 × 10^3 | 2.255 × 10^3 | 1.451 × 10^3
20 | Worst | 3.761 × 10^3 | 4.976 × 10^3 | 5.401 × 10^3 | 3.215 × 10^3 | 4.876 × 10^3 | 4.418 × 10^3 | 7.470 × 10^3 | 5.557 × 10^3 | 6.233 × 10^3
20 | Rank | 2 | 7 | 8 | 1 | 5 | 4 | 9 | 6 | 3
21 | Mean | 9.983 × 10^2 | 1.432 × 10^3 | 1.747 × 10^3 | 7.743 × 10^2 | 1.664 × 10^3 | 1.370 × 10^3 | 1.011 × 10^3 | 1.090 × 10^3 | 1.088 × 10^3
21 | Std | 1.302 × 10^2 | 1.335 × 10^2 | 2.281 × 10^2 | 1.130 × 10^2 | 1.967 × 10^2 | 1.409 × 10^2 | 2.289 × 10^2 | 1.394 × 10^2 | 1.405 × 10^2
21 | Best | 8.102 × 10^2 | 1.145 × 10^3 | 1.290 × 10^3 | 6.484 × 10^2 | 1.267 × 10^3 | 1.049 × 10^3 | 7.903 × 10^2 | 8.929 × 10^2 | 8.240 × 10^2
21 | Worst | 1.282 × 10^3 | 1.724 × 10^3 | 2.225 × 10^3 | 1.326 × 10^3 | 2.080 × 10^3 | 1.761 × 10^3 | 1.600 × 10^3 | 1.577 × 10^3 | 1.443 × 10^3
21 | Rank | 2 | 7 | 9 | 1 | 8 | 6 | 3 | 5 | 4
22 | Mean | 2.166 × 10^4 | 1.800 × 10^4 | 2.085 × 10^4 | 1.476 × 10^4 | 1.969 × 10^4 | 1.730 × 10^4 | 2.692 × 10^4 | 2.551 × 10^4 | 2.484 × 10^4
22 | Std | 8.214 × 10^3 | 2.021 × 10^3 | 3.096 × 10^3 | 1.398 × 10^3 | 1.251 × 10^3 | 1.118 × 10^3 | 5.593 × 10^3 | 5.254 × 10^3 | 8.665 × 10^3
22 | Best | 1.341 × 10^4 | 1.397 × 10^4 | 1.660 × 10^4 | 1.157 × 10^4 | 1.537 × 10^4 | 1.492 × 10^4 | 1.829 × 10^4 | 1.133 × 10^4 | 1.276 × 10^4
22 | Worst | 3.406 × 10^4 | 2.250 × 10^4 | 2.691 × 10^4 | 1.859 × 10^4 | 2.211 × 10^4 | 2.052 × 10^4 | 3.415 × 10^4 | 2.974 × 10^4 | 3.452 × 10^4
22 | Rank | 6 | 3 | 5 | 1 | 4 | 2 | 9 | 8 | 7
23 | Mean | 1.203 × 10^3 | 1.789 × 10^3 | 2.420 × 10^3 | 1.100 × 10^3 | 2.264 × 10^3 | 1.418 × 10^3 | 1.436 × 10^3 | 1.301 × 10^3 | 1.298 × 10^3
23 | Std | 9.380 × 10^1 | 1.926 × 10^2 | 2.668 × 10^2 | 6.199 × 10^1 | 2.421 × 10^2 | 1.267 × 10^2 | 2.203 × 10^2 | 1.001 × 10^2 | 1.120 × 10^2
23 | Best | 1.051 × 10^3 | 1.370 × 10^3 | 1.855 × 10^3 | 9.835 × 10^2 | 1.804 × 10^3 | 1.134 × 10^3 | 1.169 × 10^3 | 1.116 × 10^3 | 1.046 × 10^3
23 | Worst | 1.376 × 10^3 | 2.222 × 10^3 | 3.034 × 10^3 | 1.235 × 10^3 | 2.739 × 10^3 | 1.726 × 10^3 | 1.949 × 10^3 | 1.479 × 10^3 | 1.508 × 10^3
23 | Rank | 2 | 7 | 9 | 1 | 8 | 5 | 6 | 4 | 3
24 | Mean | 1.753 × 10^3 | 2.492 × 10^3 | 3.476 × 10^3 | 1.530 × 10^3 | 3.168 × 10^3 | 2.330 × 10^3 | 1.952 × 10^3 | 1.927 × 10^3 | 1.815 × 10^3
24 | Std | 1.663 × 10^2 | 2.807 × 10^2 | 4.040 × 10^2 | 9.677 × 10^1 | 2.494 × 10^2 | 1.994 × 10^2 | 2.455 × 10^2 | 1.607 × 10^2 | 1.550 × 10^2
24 | Best | 1.462 × 10^3 | 1.758 × 10^3 | 2.537 × 10^3 | 1.297 × 10^3 | 2.619 × 10^3 | 1.899 × 10^3 | 1.587 × 10^3 | 1.629 × 10^3 | 1.484 × 10^3
24 | Worst | 2.172 × 10^3 | 3.261 × 10^3 | 4.256 × 10^3 | 1.706 × 10^3 | 3.718 × 10^3 | 2.763 × 10^3 | 2.540 × 10^3 | 2.250 × 10^3 | 2.172 × 10^3
24 | Rank | 2 | 7 | 9 | 1 | 8 | 6 | 5 | 4 | 3
Table 4. The simulation results of SSO, DBO, WOA, GWO, HHO, AVOA, GTO, ARBBPSO and ETBBPSO, f25–f29, best results are shwon in bold.
Table 4. The simulation results of SSO, DBO, WOA, GWO, HHO, AVOA, GTO, ARBBPSO and ETBBPSO, f25–f29, best results are shwon in bold.
| F | Type | SSO | DBO | WOA | GWO | HHO | AVOA | GTO | ARBBPSO | ETBBPSO |
|---|------|-----|-----|-----|-----|-----|------|-----|---------|---------|
| 25 | Mean | 7.624 × 10^2 | 2.310 × 10^3 | 1.113 × 10^3 | 2.813 × 10^3 | 1.021 × 10^3 | 8.076 × 10^2 | 1.766 × 10^3 | **7.614 × 10^2** | 7.745 × 10^2 |
|  | Std | 6.221 × 10^1 | 3.353 × 10^3 | 5.379 × 10^1 | 6.758 × 10^2 | 7.246 × 10^1 | 6.130 × 10^1 | 1.783 × 10^2 | 5.359 × 10^1 | 6.691 × 10^1 |
|  | Best | 6.506 × 10^2 | 7.359 × 10^2 | 1.010 × 10^3 | 1.868 × 10^3 | 8.443 × 10^2 | 6.832 × 10^2 | 1.467 × 10^3 | 6.328 × 10^2 | 6.448 × 10^2 |
|  | Worst | 9.385 × 10^2 | 1.559 × 10^4 | 1.203 × 10^3 | 4.875 × 10^3 | 1.140 × 10^3 | 9.031 × 10^2 | 2.123 × 10^3 | 8.624 × 10^2 | 8.997 × 10^2 |
|  | Rank | 2 | 8 | 6 | 9 | 5 | 4 | 7 | 1 | 3 |
| 26 | Mean | 1.346 × 10^4 | 1.788 × 10^4 | 2.922 × 10^4 | **9.924 × 10^3** | 2.127 × 10^4 | 1.854 × 10^4 | 1.510 × 10^4 | 1.433 × 10^4 | 1.445 × 10^4 |
|  | Std | 1.774 × 10^3 | 3.869 × 10^3 | 3.190 × 10^3 | 7.893 × 10^2 | 1.640 × 10^3 | 2.705 × 10^3 | 2.739 × 10^3 | 1.675 × 10^3 | 2.077 × 10^3 |
|  | Best | 1.043 × 10^4 | 1.018 × 10^4 | 2.117 × 10^4 | 8.657 × 10^3 | 1.867 × 10^4 | 1.417 × 10^4 | 1.026 × 10^4 | 1.085 × 10^4 | 1.021 × 10^4 |
|  | Worst | 1.875 × 10^4 | 2.475 × 10^4 | 3.657 × 10^4 | 1.159 × 10^4 | 2.513 × 10^4 | 2.298 × 10^4 | 2.191 × 10^4 | 1.800 × 10^4 | 1.965 × 10^4 |
|  | Rank | 2 | 6 | 9 | 1 | 8 | 7 | 5 | 3 | 4 |
| 27 | Mean | **5.000 × 10^2** | 1.206 × 10^3 | 2.317 × 10^3 | 1.153 × 10^3 | 1.396 × 10^3 | 1.265 × 10^3 | 1.018 × 10^3 | 5.000 × 10^2 | 5.000 × 10^2 |
|  | Std | 3.785 × 10^-4 | 2.531 × 10^2 | 5.599 × 10^2 | 9.433 × 10^1 | 2.785 × 10^2 | 1.821 × 10^2 | 1.125 × 10^2 | 5.206 × 10^-4 | 5.264 × 10^-4 |
|  | Best | 5.000 × 10^2 | 7.054 × 10^2 | 1.422 × 10^3 | 9.829 × 10^2 | 1.056 × 10^3 | 9.671 × 10^2 | 8.444 × 10^2 | 5.000 × 10^2 | 5.000 × 10^2 |
|  | Worst | 5.000 × 10^2 | 1.959 × 10^3 | 3.740 × 10^3 | 1.351 × 10^3 | 2.277 × 10^3 | 1.732 × 10^3 | 1.268 × 10^3 | 5.000 × 10^2 | 5.000 × 10^2 |
|  | Rank | 1 | 6 | 9 | 5 | 8 | 7 | 4 | 2 | 3 |
| 28 | Mean | **5.000 × 10^2** | 1.200 × 10^4 | 9.275 × 10^2 | 4.120 × 10^3 | 7.670 × 10^2 | 5.559 × 10^2 | 1.835 × 10^3 | 5.000 × 10^2 | 5.000 × 10^2 |
|  | Std | 4.422 × 10^-4 | 7.865 × 10^3 | 4.799 × 10^1 | 1.190 × 10^3 | 4.914 × 10^1 | 3.666 × 10^1 | 3.997 × 10^2 | 4.211 × 10^-4 | 5.674 × 10^-4 |
|  | Best | 5.000 × 10^2 | 6.750 × 10^2 | 8.462 × 10^2 | 1.866 × 10^3 | 6.391 × 10^2 | 5.000 × 10^2 | 1.188 × 10^3 | 5.000 × 10^2 | 5.000 × 10^2 |
|  | Worst | 5.000 × 10^2 | 2.256 × 10^4 | 1.051 × 10^3 | 8.132 × 10^3 | 8.968 × 10^2 | 6.575 × 10^2 | 2.845 × 10^3 | 5.000 × 10^2 | 5.000 × 10^2 |
|  | Rank | 1 | 9 | 6 | 8 | 5 | 4 | 7 | 2 | 3 |
| 29 | Mean | **3.553 × 10^3** | 6.307 × 10^3 | 1.093 × 10^4 | 4.494 × 10^3 | 5.608 × 10^3 | 4.691 × 10^3 | 6.756 × 10^3 | 4.502 × 10^3 | 4.514 × 10^3 |
|  | Std | 8.641 × 10^2 | 1.198 × 10^3 | 1.755 × 10^3 | 5.604 × 10^2 | 6.617 × 10^2 | 5.392 × 10^2 | 1.300 × 10^3 | 9.586 × 10^2 | 1.046 × 10^3 |
|  | Best | 1.911 × 10^3 | 3.721 × 10^3 | 8.172 × 10^3 | 3.257 × 10^3 | 4.514 × 10^3 | 3.146 × 10^3 | 4.721 × 10^3 | 2.849 × 10^3 | 3.025 × 10^3 |
|  | Worst | 5.768 × 10^3 | 9.492 × 10^3 | 1.531 × 10^4 | 6.276 × 10^3 | 6.787 × 10^3 | 5.560 × 10^3 | 9.779 × 10^3 | 7.920 × 10^3 | 8.239 × 10^3 |
|  | Rank | 1 | 7 | 9 | 2 | 6 | 5 | 8 | 3 | 4 |
|  | Average Rank | **2.6207** | 6.6897 | 7.2759 | 4.3793 | 5.6897 | 3.5862 | 5.7931 | 4.3448 | 4.6207 |
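The Rank rows assign each algorithm an integer from 1 (lowest mean error) to 9 (highest) on every function, and the Average Rank row is the mean of those per-function ranks over all benchmark functions. The following minimal sketch, written in Python/NumPy for illustration (it is not the authors' code), reproduces this aggregation from the f22 and f23 Mean rows of Table 3:

```python
import numpy as np

# Mean errors copied from the f22 and f23 rows of Table 3; columns follow
# the order SSO, DBO, WOA, GWO, HHO, AVOA, GTO, ARBBPSO, ETBBPSO.
means = np.array([
    [2.166e4, 1.800e4, 2.085e4, 1.476e4, 1.969e4, 1.730e4, 2.692e4, 2.551e4, 2.484e4],  # f22
    [1.203e3, 1.789e3, 2.420e3, 1.100e3, 2.264e3, 1.418e3, 1.436e3, 1.301e3, 1.298e3],  # f23
])

# Rank algorithms on each function: 1 = smallest mean error.
# argsort-of-argsort yields 0-based ranks (assuming no ties), so add 1.
ranks = means.argsort(axis=1).argsort(axis=1) + 1
print(ranks)              # [[6 3 5 1 4 2 9 8 7], [2 7 9 1 8 5 6 4 3]]

# Averaging the per-function ranks column-wise gives the "Average Rank" row.
print(ranks.mean(axis=0))
```

Running this reproduces the Rank rows 6 3 5 1 4 2 9 8 7 and 2 7 9 1 8 5 6 4 3 from Table 3; the argsort-of-argsort trick is valid here because no two mean values tie.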
Table 5. The simulation results of SSO, DBO, WOA, GWO, HHO, AVOA, and GTO on CEC2022 f1–f5; best results are shown in bold.
| F | Type | SSO | DBO | WOA | GWO | HHO | AVOA | GTO |
|---|------|-----|-----|-----|-----|-----|------|-----|
| 1 | Mean | **8.450 × 10^-14** | 9.707 × 10^0 | 1.223 × 10^1 | 5.130 × 10^3 | 1.502 × 10^0 | 2.827 × 10^-13 | 5.531 × 10^4 |
|  | Std | 3.934 × 10^-14 | 2.158 × 10^1 | 1.619 × 10^1 | 3.105 × 10^3 | 6.404 × 10^-1 | 1.164 × 10^-13 | 2.432 × 10^5 |
|  | Best | 5.684 × 10^-14 | 5.684 × 10^-14 | 6.837 × 10^-1 | 8.533 × 10^2 | 3.373 × 10^-1 | 1.137 × 10^-13 | 8.955 × 10^2 |
|  | Worst | 1.705 × 10^-13 | 8.868 × 10^1 | 7.809 × 10^1 | 1.232 × 10^4 | 2.876 × 10^0 | 6.253 × 10^-13 | 1.487 × 10^6 |
|  | Rank | 1 | 4 | 5 | 6 | 3 | 2 | 7 |
| 2 | Mean | **2.039 × 10^0** | 4.625 × 10^1 | 5.740 × 10^1 | 6.977 × 10^1 | 5.700 × 10^1 | 3.560 × 10^1 | 5.656 × 10^1 |
|  | Std | 1.824 × 10^0 | 2.145 × 10^1 | 1.793 × 10^1 | 2.433 × 10^1 | 2.279 × 10^1 | 2.364 × 10^1 | 1.401 × 10^1 |
|  | Best | 1.986 × 10^-1 | 6.377 × 10^0 | 6.135 × 10^0 | 4.497 × 10^1 | 4.462 × 10^0 | 9.160 × 10^-4 | 9.619 × 10^0 |
|  | Worst | 6.481 × 10^0 | 1.207 × 10^2 | 9.476 × 10^1 | 1.759 × 10^2 | 1.454 × 10^2 | 6.775 × 10^1 | 7.504 × 10^1 |
|  | Rank | 1 | 3 | 6 | 7 | 5 | 2 | 4 |
| 3 | Mean | **7.457 × 10^-3** | 1.299 × 10^1 | 5.617 × 10^1 | 1.395 × 10^0 | 3.816 × 10^1 | 1.102 × 10^1 | 1.680 × 10^1 |
|  | Std | 2.705 × 10^-2 | 6.720 × 10^0 | 1.235 × 10^1 | 1.513 × 10^0 | 1.011 × 10^1 | 6.929 × 10^0 | 9.457 × 10^0 |
|  | Best | 1.137 × 10^-13 | 1.712 × 10^0 | 3.042 × 10^1 | 3.802 × 10^-2 | 2.057 × 10^1 | 6.057 × 10^-1 | 2.112 × 10^0 |
|  | Worst | 1.232 × 10^-1 | 2.645 × 10^1 | 8.445 × 10^1 | 6.357 × 10^0 | 6.284 × 10^1 | 2.599 × 10^1 | 5.112 × 10^1 |
|  | Rank | 1 | 4 | 7 | 2 | 6 | 3 | 5 |
| 4 | Mean | 5.623 × 10^1 | 8.186 × 10^1 | 1.172 × 10^2 | **3.746 × 10^1** | 8.384 × 10^1 | 8.659 × 10^1 | 7.928 × 10^1 |
|  | Std | 2.091 × 10^1 | 2.469 × 10^1 | 3.671 × 10^1 | 1.430 × 10^1 | 1.422 × 10^1 | 2.900 × 10^1 | 2.119 × 10^1 |
|  | Best | 2.288 × 10^1 | 3.285 × 10^1 | 4.975 × 10^1 | 1.431 × 10^1 | 5.699 × 10^1 | 2.885 × 10^1 | 4.020 × 10^1 |
|  | Worst | 1.224 × 10^2 | 1.304 × 10^2 | 2.060 × 10^2 | 9.085 × 10^1 | 1.219 × 10^2 | 1.662 × 10^2 | 1.598 × 10^2 |
|  | Rank | 2 | 4 | 7 | 1 | 5 | 6 | 3 |
| 5 | Mean | **3.504 × 10^1** | 3.804 × 10^2 | 2.158 × 10^3 | 7.464 × 10^1 | 1.415 × 10^3 | 1.552 × 10^3 | 8.221 × 10^2 |
|  | Std | 7.383 × 10^1 | 3.077 × 10^2 | 1.139 × 10^3 | 5.327 × 10^1 | 2.363 × 10^2 | 4.757 × 10^2 | 8.107 × 10^2 |
|  | Best | 1.791 × 10^-1 | 4.877 × 10^0 | 6.416 × 10^2 | 1.225 × 10^0 | 9.214 × 10^2 | 2.568 × 10^2 | 4.134 × 10^1 |
|  | Worst | 4.034 × 10^2 | 1.065 × 10^3 | 5.899 × 10^3 | 2.777 × 10^2 | 1.843 × 10^3 | 2.986 × 10^3 | 2.835 × 10^3 |
|  | Rank | 1 | 3 | 7 | 2 | 5 | 6 | 4 |
Table 6. The simulation results of SSO, DBO, WOA, GWO, HHO, AVOA, and GTO on CEC2022 f6–f12; best results are shown in bold.
| F | Type | SSO | DBO | WOA | GWO | HHO | AVOA | GTO |
|---|------|-----|-----|-----|-----|-----|------|-----|
| 6 | Mean | 5.617 × 10^3 | 1.832 × 10^4 | 4.887 × 10^3 | 9.515 × 10^5 | 7.657 × 10^3 | **3.968 × 10^3** | 5.974 × 10^3 |
|  | Std | 6.412 × 10^3 | 4.973 × 10^4 | 5.665 × 10^3 | 3.539 × 10^6 | 6.873 × 10^3 | 4.852 × 10^3 | 9.473 × 10^3 |
|  | Best | 6.222 × 10^1 | 2.311 × 10^2 | 1.899 × 10^2 | 2.544 × 10^2 | 3.994 × 10^2 | 1.379 × 10^2 | 1.454 × 10^2 |
|  | Worst | 2.102 × 10^4 | 3.079 × 10^5 | 1.867 × 10^4 | 1.773 × 10^7 | 2.848 × 10^4 | 1.861 × 10^4 | 5.192 × 10^4 |
|  | Rank | 3 | 6 | 2 | 7 | 5 | 1 | 4 |
| 7 | Mean | **3.543 × 10^1** | 8.382 × 10^1 | 1.520 × 10^2 | 4.246 × 10^1 | 1.010 × 10^2 | 7.208 × 10^1 | 1.589 × 10^2 |
|  | Std | 1.194 × 10^1 | 3.448 × 10^1 | 5.237 × 10^1 | 1.361 × 10^1 | 3.128 × 10^1 | 3.763 × 10^1 | 8.407 × 10^1 |
|  | Best | 2.171 × 10^1 | 3.203 × 10^1 | 5.223 × 10^1 | 2.271 × 10^1 | 5.024 × 10^1 | 3.137 × 10^1 | 6.022 × 10^1 |
|  | Worst | 6.577 × 10^1 | 1.829 × 10^2 | 2.940 × 10^2 | 8.749 × 10^1 | 1.558 × 10^2 | 1.919 × 10^2 | 4.697 × 10^2 |
|  | Rank | 1 | 4 | 6 | 2 | 5 | 3 | 7 |
| 8 | Mean | **2.105 × 10^1** | 4.558 × 10^1 | 4.910 × 10^1 | 3.523 × 10^1 | 3.935 × 10^1 | 2.643 × 10^1 | 1.045 × 10^2 |
|  | Std | 4.800 × 10^-1 | 3.979 × 10^1 | 3.129 × 10^1 | 3.342 × 10^1 | 2.003 × 10^1 | 7.248 × 10^0 | 7.315 × 10^1 |
|  | Best | 2.003 × 10^1 | 2.186 × 10^1 | 2.824 × 10^1 | 2.164 × 10^1 | 2.816 × 10^1 | 2.108 × 10^1 | 3.293 × 10^1 |
|  | Worst | 2.244 × 10^1 | 1.649 × 10^2 | 1.717 × 10^2 | 1.471 × 10^2 | 1.492 × 10^2 | 4.390 × 10^1 | 3.282 × 10^2 |
|  | Rank | 1 | 5 | 6 | 3 | 4 | 2 | 7 |
| 9 | Mean | **1.653 × 10^2** | 1.809 × 10^2 | 1.813 × 10^2 | 1.921 × 10^2 | 1.813 × 10^2 | 1.808 × 10^2 | 1.809 × 10^2 |
|  | Std | 3.039 × 10^-13 | 5.324 × 10^-2 | 5.481 × 10^-1 | 1.362 × 10^1 | 3.341 × 10^-1 | 9.752 × 10^-9 | 2.468 × 10^-1 |
|  | Best | 1.653 × 10^2 | 1.808 × 10^2 | 1.808 × 10^2 | 1.808 × 10^2 | 1.808 × 10^2 | 1.808 × 10^2 | 1.808 × 10^2 |
|  | Worst | 1.653 × 10^2 | 1.809 × 10^2 | 1.828 × 10^2 | 2.329 × 10^2 | 1.821 × 10^2 | 1.808 × 10^2 | 1.823 × 10^2 |
|  | Rank | 1 | 4 | 5 | 7 | 6 | 2 | 3 |
| 10 | Mean | 1.272 × 10^2 | 1.206 × 10^2 | 1.431 × 10^3 | 4.844 × 10^2 | 3.147 × 10^2 | 2.645 × 10^2 | **1.016 × 10^2** |
|  | Std | 9.533 × 10^1 | 9.312 × 10^1 | 1.078 × 10^3 | 5.006 × 10^2 | 2.695 × 10^2 | 2.349 × 10^2 | 5.654 × 10^-1 |
|  | Best | 1.525 × 10^1 | 1.003 × 10^2 | 1.008 × 10^2 | 1.003 × 10^2 | 5.298 × 10^1 | 4.508 × 10^1 | 1.007 × 10^2 |
|  | Worst | 4.689 × 10^2 | 6.317 × 10^2 | 3.247 × 10^3 | 1.850 × 10^3 | 9.701 × 10^2 | 8.968 × 10^2 | 1.032 × 10^2 |
|  | Rank | 3 | 2 | 7 | 6 | 5 | 4 | 1 |
| 11 | Mean | 3.216 × 10^2 | 3.426 × 10^2 | **2.985 × 10^2** | 6.291 × 10^2 | 3.496 × 10^2 | 3.270 × 10^2 | 3.152 × 10^2 |
|  | Std | 4.173 × 10^1 | 1.483 × 10^2 | 9.834 × 10^1 | 2.079 × 10^2 | 6.459 × 10^1 | 4.502 × 10^1 | 1.122 × 10^2 |
|  | Best | 3.000 × 10^2 | 4.547 × 10^-13 | 6.195 × 10^-1 | 3.006 × 10^2 | 2.661 × 10^1 | 3.000 × 10^2 | 3.556 × 10^-3 |
|  | Worst | 4.000 × 10^2 | 7.009 × 10^2 | 4.001 × 10^2 | 1.171 × 10^3 | 4.050 × 10^2 | 4.000 × 10^2 | 7.605 × 10^2 |
|  | Rank | 3 | 5 | 1 | 7 | 6 | 4 | 2 |
| 12 | Mean | **2.000 × 10^2** | 2.754 × 10^2 | 3.029 × 10^2 | 2.534 × 10^2 | 3.098 × 10^2 | 2.616 × 10^2 | 2.486 × 10^2 |
|  | Std | 2.000 × 10^-4 | 2.589 × 10^1 | 4.893 × 10^1 | 1.284 × 10^1 | 5.648 × 10^1 | 2.173 × 10^1 | 8.500 × 10^0 |
|  | Best | 2.000 × 10^2 | 2.459 × 10^2 | 2.446 × 10^2 | 2.351 × 10^2 | 2.517 × 10^2 | 2.392 × 10^2 | 2.406 × 10^2 |
|  | Worst | 2.000 × 10^2 | 3.662 × 10^2 | 4.432 × 10^2 | 2.884 × 10^2 | 5.061 × 10^2 | 3.453 × 10^2 | 2.864 × 10^2 |
|  | Rank | 1 | 5 | 6 | 3 | 7 | 4 | 2 |
|  | Average Rank | **1.58** | 4.08 | 5.42 | 4.42 | 5.17 | 3.25 | 4.08 |
Table 7. The simulation results of SSO, DBO, WOA, GWO, HHO, AVOA, and GTO; best results are shown in bold.
| Type | SSO | DBO | WOA | GWO | HHO | AVOA | GTO |
|------|-----|-----|-----|-----|-----|------|-----|
| Mean | **5.265 × 10^2** | 5.414 × 10^2 | 6.669 × 10^2 | 5.295 × 10^2 | 6.939 × 10^2 | 5.396 × 10^2 | 5.357 × 10^2 |
| Std | 2.496 × 10^0 | 1.613 × 10^1 | 5.779 × 10^1 | 2.538 × 10^0 | 5.889 × 10^1 | 1.169 × 10^1 | 7.616 × 10^0 |
| Best | 5.248 × 10^2 | 5.248 × 10^2 | 5.532 × 10^2 | 5.251 × 10^2 | 5.908 × 10^2 | 5.270 × 10^2 | 5.264 × 10^2 |
| Worst | 5.327 × 10^2 | 5.909 × 10^2 | 7.907 × 10^2 | 5.324 × 10^2 | 8.123 × 10^2 | 5.918 × 10^2 | 5.563 × 10^2 |
| Rank | 1 | 5 | 6 | 2 | 7 | 4 | 3 |
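Each Mean/Std/Best/Worst block in Tables 3–7 summarizes a set of independent runs of one algorithm on one problem. A minimal sketch of that bookkeeping, assuming a hypothetical `run_optimizer` callable and a placeholder run count of 30 (neither is taken from the paper):

```python
import numpy as np

def summarize(run_optimizer, n_runs=30, seed=0):
    """Mean/Std/Best/Worst over independent runs (minimization).

    `run_optimizer`, `n_runs`, and `seed` are illustrative assumptions;
    the callable should return the best objective value of one run.
    """
    rng = np.random.default_rng(seed)
    results = np.array([run_optimizer(rng) for _ in range(n_runs)])
    return {
        "Mean": results.mean(),
        "Std": results.std(ddof=1),  # sample standard deviation
        "Best": results.min(),       # smaller is better for minimization
        "Worst": results.max(),
    }

# Stand-in objective that only mimics the scale of the SSO column in
# Table 7 -- it is NOT the probe-design model from the paper.
print(summarize(lambda rng: rng.normal(526.5, 2.5)))
```

Ranking the algorithms' Mean values of such summaries, as in the sketch after Table 4, yields the Rank row above.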