Article

Snake Optimization Algorithm Augmented by Adaptive t-Distribution Mixed Mutation and Its Application in Energy Storage System Capacity Optimization

School of Intelligent Manufacturing and Electronic Engineering, Wenzhou University of Technology, Wenzhou 325035, China
*
Author to whom correspondence should be addressed.
Biomimetics 2025, 10(4), 244; https://doi.org/10.3390/biomimetics10040244
Submission received: 9 March 2025 / Revised: 12 April 2025 / Accepted: 14 April 2025 / Published: 16 April 2025

Abstract

To address the drawbacks of the traditional snake optimization method, such as a random population initialization, slow convergence speed, and low accuracy, an adaptive t-distribution mixed mutation snake optimization strategy is proposed. Initially, Tent-based chaotic mapping and the quasi-reverse learning approach are utilized to enhance the quality of the initial solution and the population initialization process of the original method. During the evolution stage, a novel adaptive t-distribution mixed mutation foraging strategy is introduced to substitute the original foraging stage method. This strategy perturbs and mutates at the optimal solution position to generate new solutions, thereby improving the algorithm’s ability to escape local optima. The mating mode in the evolution stage is replaced with an opposite-sex attraction mechanism, providing the algorithm with more opportunities for global exploration and exploitation. The improved snake optimization method accelerates convergence and improves accuracy while balancing the algorithm’s local and global exploitation capabilities. The experimental results demonstrate that the improved method outperforms other optimization methods, including the standard snake optimization technique, in terms of solution robustness and accuracy. Additionally, each improvement technique complements and amplifies the effects of the others.

1. Introduction

In the current context of rapid technological growth, many different sectors face an endless variety of challenging optimization problems. The development of swarm intelligence optimization algorithms has flourished due in large part to these circumstances [1]. From an engineering perspective, the layout design of large-scale integrated circuits requires that numerous components be arranged in a limited chip space while meeting tight requirements, including signal transmission and heat dissipation. The optimization of aircraft forms in aeronautical engineering is linked to several metrics, including stability, energy consumption, and flight efficiency, and involves intricate constraints and a large number of design variables [2]. Even a small change in the scheduling of the production process in industrial manufacturing can set off a chain of events, including delivery delays and resource idleness. When dealing with high-dimensional, nonlinear, non-differentiable, and even dynamically changing objective functions and constraints, traditional optimization techniques that rely on exact mathematical models and gradient information are inadequate. These algorithms' computational complexity increases exponentially with the size of the problem, and they rely heavily on precise mathematical representations of the issue [3]. Once they encounter practical problems lacking explicit expressions, they are at a loss as to how to proceed.
Swarm intelligence optimization methods have become quite important in this context. These algorithms mimic the intelligent behaviors of real biological populations, such as fish schooling, bird migration, and ant colony foraging. They do not necessitate a thorough comprehension of the intricate inner workings of a problem. Instead, they can discover approximately optimal solutions in complicated and changing situations based on just basic individual behavior rules, interactions, and collaboration among individuals. This not only demonstrates outstanding robustness and adaptability, allowing for rapid adaptation to various conditions, but also loosens the stringent requirements for prior knowledge. Consequently, it significantly decreases the time it takes to find a solution, lowers computing expenses, and creates new opportunities for the academic and industrial sectors to address difficult issues. The advantages have been felt in many industries. For example, the ant colony method is used in distribution and logistics to optimize routes and save on transportation costs. The particle swarm optimization technique is used in power systems to optimize reactive power and preserve grid stability [4]. The snake optimization method has emerged within the "family" of swarm intelligence optimization algorithms, inspired by the way snakes hunt and survive in the wild. Snakes rely on their keen senses, adaptable movement, and teamwork to seek prey in challenging environments and avoid obstacles [5]. The snake optimization algorithm condenses this trait into a special optimization technique: snake-like individuals move and "swim" around the search space, constantly changing their routes based on peer locations and environmental information. In the early stages, they boldly explore, and in the later stages, they precisely converge.
This method approaches the ideal answer due to the group’s very effective information exchange and combined efforts [6]. It has distinct advantages over other conventional swarm intelligence algorithms in that it avoids local optima and strikes a balance between exploration and exploitation. It provides fresh viewpoints and effective solutions for resolving cutting-edge issues including dynamic environment optimization and multimodal function optimization. The theoretical depth and application range of swarm intelligence optimization algorithms are constantly being enhanced and broadened.

1.1. Problem Statement and Motivation

Swarm intelligence algorithms, characterized by their unique population collaboration mechanisms and search strategies, offer novel approaches to solving complex optimization problems. They hold remarkable application value across diverse fields. Currently, as the scale of engineering projects and the complexity of data continue to increase, traditional optimization algorithms face challenges such as slow convergence and a tendency to become trapped in local optima when dealing with large-scale high-dimensional problems. Research on swarm intelligence optimization algorithms can not only enrich the theoretical framework of optimization algorithms and deeply analyze their search performance and convergence characteristics in different scenarios, but also provide a theoretical basis for solving practical engineering problems. By leveraging swarm intelligence optimization algorithms, we can enhance the prediction accuracy and generalization ability of machine learning models while tuning their parameters, reducing computational costs. In resource scheduling scenarios, these algorithms enable efficient resource allocation, contributing to cost reduction and efficiency improvement for enterprises. At the moment, metaheuristics and mathematical programming are the two primary categories of algorithmic optimization techniques in use [7]. Mathematical programming encompasses traditional approaches such as linear programming and integer programming, which are challenging to apply in the above application fields because of their complexity [8]. Because of its benefits, which include ease of implementation, flexibility, and the ability to avoid becoming stuck in local optima, a metaheuristic algorithm (MA) is able to identify the optimal or a nearly optimal solution [9].
Constrained optimization problems (COPs) are a significant topic in the optimization community. Compared to unconstrained optimization problems, constrained optimization problems are more challenging to solve [10]. Consequently, there are substantial theoretical and practical benefits to studying solution techniques for constrained optimization problems. Finding decision variables that maximize or minimize the objective function while satisfying linear and nonlinear constraints within the search space is the crucial step in addressing constrained optimization problems [11,12]. Constrained optimization problems have been solved using a variety of linear and nonlinear numerical optimization approaches over the last ten years. Constrained optimization problems can be efficiently solved using these numerical optimization strategies. Nevertheless, gradient information is necessary for the majority of these numerical optimization techniques [13,14]. For many real-world optimization situations, obtaining the gradient information is difficult. Consequently, many complicated constrained optimization problems are hard to solve using numerical optimization algorithms [15]. In contrast to numerical optimization algorithms, swarm intelligence optimization algorithms exhibit strong flexibility and accuracy in solving constrained optimization problems. Consequently, an increasing number of swarm intelligence algorithms are being used to solve constrained optimization problems [16].
Hashim et al. [17] proposed a novel metaheuristic swarm method named snake optimization (SO). The iterative process of the SO algorithm offers more optimization techniques and fewer adjustment parameters. Nevertheless, the SO algorithm suffers from a slow convergence speed in the early stage, poor stage interaction, high stochasticity in the initial population, and a tendency to converge quickly to a local optimal solution. In view of these problems, this study examines improvements to the SO algorithm proposed by previous researchers. One such scheme improves the efficiency of algorithm optimization, but its optimization speed is relatively slow in the early stage of the optimization iteration. To enhance the algorithm's optimization ability, the refined methods in [18,19] utilized multi-strategy boosting to increase the diversity of particles in the late iteration stage. Although this approach achieved certain results, the improved snake optimization algorithm still converges slowly, and its optimization effect is not significant enough. Ruba et al. introduced three evolutionary crossover operators (i.e., one-point crossover, two-point crossover, and uniform crossover) to improve the search-space exploration of the BSO algorithm and controlled them through switching probabilities. Two newly developed feature selection (FS) algorithms, BSO and BSO-CV, were implemented and evaluated on a real-world COVID-19 dataset and 23 disease benchmark datasets [20]. Although this method improved the accuracy of dataset prediction, the optimization speed was relatively slow in the early stage of the optimization iteration. Shourbaji et al. proposed the reptile search algorithm–snake optimizer (RSA-SO) feature selection algorithm, which uses both the RSA and SO methods in a parallel mechanism.
This mechanism reduces the likelihood of the two methods falling into local optima and improves their ability to balance exploration and exploitation [21]. The improved optimization algorithm performs better than the traditional snake optimization algorithm in terms of avoiding local optima, but its slow convergence speed remains a problem. Belabbes et al. proposed using the snake optimization metaheuristic algorithm to extract parameters for three types of photovoltaic cells: monocrystalline silicon, amorphous silicon, and RTC France. The snake algorithm was applied to single-diode and double-diode models, extracting five and seven parameters, respectively. Root-mean-square-error-based statistical tests were conducted to verify the algorithm's performance [22]. The optimization effect of the proposed algorithm was not significant enough, and further improvement was needed in terms of convergence speed, optimization accuracy, and other aspects. To address the issues with the aforementioned algorithms, an intelligent optimization algorithm with strong search and optimization capabilities is needed. The snake optimizer (SO) algorithm has the advantages of simple parameter settings and high optimization accuracy, making it suitable for solving complex function optimization problems. However, it also has problems such as a slow convergence speed, susceptibility to local optima, and severe randomness in population initialization.

1.2. Contribution

This study applies an optimized initialization strategy, adaptive phase switching, and a new population renewal method to further improve the phase linkage and population richness of the SO algorithm. Based on the above analysis, an adaptive t-distribution mixed mutation snake optimization method (DTHSO) is proposed. The main contributions of this article, which distinguish it from the work of others, are as follows:
  • It introduces the Tent chaotic map and quasi-reverse learning strategy to initialize the population, improving the spatial distribution of individuals in the initialization stage and accelerating the convergence of the algorithm.
  • It dynamically selects between the exploration and development stages based on the algorithm's fitness throughout the optimization process, which effectively cuts down on needless computation and quickens the rate of convergence.
  • It replaces the original algorithm's population renewal technique with the adaptive t-distribution mixed mutation foraging approach. In the development stage, the novel renewal method guarantees population variety; in the exploration stage, it increases the individual richness of the algorithm and boosts its optimization capability.
  • It verifies the DTHSO algorithm's engineering viability through simulation tests on problems from two engineering domains and a comparison with alternative optimization methods on 23 CEC2005 test functions.

2. Related Works

Constrained optimization problems are tackled using swarm intelligence optimization techniques. Better solution outcomes can be obtained by combining swarm intelligence optimization algorithms with specific constraint-handling strategies, such as the multi-objective optimization method [23], the feasibility rule [24], the random ranking method [25], and the penalty function method [26]. Using the objective function, constraint conditions, and penalty factor as a basis, the adaptive penalty function approach builds a penalty function.
The constrained optimization problem is then converted into a sequence of unconstrained optimization problems by modifying the penalty factor of the penalty function in accordance with the proportion of feasible solutions in the current population. To tackle the drawbacks of constrained optimization problems, a unique adaptive penalty function technique was suggested in [27], which was combined with a straightforward evolutionary approach to create a new evolutionary algorithm for constrained optimization problems. The selection of individuals from the population that satisfy the constraints is the foundation of the feasibility rule [28]. Based on each individual's level of constraint violation, the feasibility rule determines which individuals move on to the next iteration during the iterative process. When it comes to handling constraints, the feasibility rule offers the benefits of a straightforward theory and simplicity in practice. A unique method for tackling constrained optimization problems, called CSMO, was created by combining the feasibility rule with the improved spider monkey optimization strategy [29]. According to experimental studies, CSMO performs better than the comparison approaches when it comes to solving constrained optimization problems, which pose a global optimization challenge. One innovative way to deal with constraints is the random ranking method. Its goal is to rank individuals according to a probability that is based on the objective function value and the degree of constraint violation [30]. Gao et al. developed an estimation-of-distribution method and used an evolutionary sampling technique to produce offspring in order to solve optimization problems with only variable boundary constraints. They also suggested a method for resolving any kind of constrained optimization problem [31].
By transforming the constraint conditions of a restricted optimization problem into the objective function, the multi-objective constraint-handling approach transforms the constrained optimization problem into a multi-objective unconstrained optimization problem [32]. The multi-objective constraint-handling approach is well adapted to optimization problems in low-feasibility regions and offers excellent convergence performance and impact. Wang et al. proposed a differential multi-objective evolutionary algorithm for multi-mutation strategy fusion [33]. It effectively combined the use of mutation strategies with the multi-objective optimization method in both the early and late stages of iteration, accelerated the algorithm’s convergence, and reduced the high computational cost of the multi-objective optimization method during application. The results showed that the algorithm improved convergence speed and accuracy.
Although significant progress has been made with the above-mentioned methods in handling constrained optimization problems, there is still room for these algorithms to improve in terms of accuracy and efficiency. In 2022, Hashim and Hussien presented the snake optimization method (SO), a novel swarm intelligence optimization method mainly used for solving unconstrained optimization problems. This method is distinguished from other swarm intelligence optimization techniques by its simple concept and small number of parameters. Currently, there is relatively little research on applying the snake optimization technique to solve constrained optimization problems; instead, it is mainly used to tackle unconstrained optimization problems. By combining the snake optimization algorithm with appropriate constraint-handling techniques, we aim to explore a new algorithm with greater solving capability and efficiency. This enables the advantages of the snake optimization algorithm in solving unconstrained optimization problems to be carried over to constrained optimization problems. When dealing with constrained optimization problems, the newly explored method should have a higher solving accuracy and a shorter solving time. However, since the snake optimization process may converge prematurely and is prone to being trapped in local optima, this study proposes an adaptive t-distribution mixed mutation snake optimization methodology. Initially, the population initialization process of the original method and the quality of the initial solution are enhanced through the use of Tent-based chaotic mapping and the quasi-reverse learning technique. The algorithm's ability to escape local optima is improved by perturbing and mutating at the optimal solution position to generate new solutions.
A mechanism of opposite-sex attraction is proposed to replace the mating strategy in the mating mode of the evolution stage, providing the algorithm with more opportunities for global exploration and development. The improved snake optimization algorithm balances the algorithm’s local and global development while accelerating its convergence speed and improving its accuracy.

3. Snake Optimization Algorithm (SO)

Based on the survival strategies of snakes, the SO algorithm is a metaheuristic algorithm. When there is food and a low temperature, snakes will mate; otherwise, they will search for food or consume what is already there [34,35]. The authors categorize these phenomena into several key sections, including population initialization, the division of the population into male and female groups, the assessment of temperature and food amount, and the exploration and development stages associated with the search process. The ideal solution is reached and the best individual in the population is selected after several iterative procedures [36].

3.1. Initialize Population

The standard SO algorithm population initialization randomly generates a population in the search space.
X_i = X_min + r × (X_max − X_min)
The parameter r is a random number in [0, 1]. X_i is the initial position of the i-th individual in the population. X_max and X_min are the upper and lower limits of the search space, respectively.
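As a concrete illustration, the initialization formula above can be sketched in NumPy; the function name and the vectorized (whole-population) form are our own choices, not part of the original SO description:

```python
import numpy as np

def init_population(n, dim, x_min, x_max, rng=None):
    """Standard SO initialization: X_i = X_min + r * (X_max - X_min), r ~ U[0, 1].

    Generates n individuals of dimension dim uniformly within the search bounds.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = rng.random((n, dim))  # one random factor per coordinate
    return x_min + r * (x_max - x_min)

pop = init_population(20, 5, -10.0, 10.0)
print(pop.shape)  # (20, 5)
```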

3.2. Division of Male and Female Populations

After completing population initialization, all populations are divided equally into two groups based on the total population, with half being male and the other half being female. The population partitioning formula is
N_m = N/2
N_f = N − N_m
where the parameter N is the total number of individuals in the population, and Nm and Nf represent the individual numbers of male and female populations, respectively.
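A minimal sketch of the population split; taking the floor of N/2 for the male group when N is odd is our assumption, since the paper does not specify the rounding:

```python
def split_population(pop):
    """Equal split into male and female groups: N_m = N // 2, N_f = N - N_m."""
    n_m = len(pop) // 2          # assumed floor division for odd N
    return pop[:n_m], pop[n_m:]

males, females = split_population(list(range(9)))
print(len(males), len(females))  # 4 5
```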

3.3. Evaluation of Temperature and Food Intake

The definition equation for temperature is
Temp = exp(−t/T)
The parameter t is the current number of iterations. T is the total number of iterations. As the number of iterations increases, the overall temperature decreases.
The definition equation for food Q is:
Q = c1 × exp((t − T)/T)
The parameter c1 is a constant of 0.5. As the number of iterations increases, the overall amount of food increases [37].

3.4. Exploration Stage

When the food quantity Q < 0.25, the snake searches for food by selecting a random position and then updates its position. The position update formula at this stage is as follows:
X_i,m(t+1) = X_rand,m(t) ± c2 × A_m × ((X_max − X_min) × rand + X_min)
X_i,f(t+1) = X_rand,f(t) ± c2 × A_f × ((X_max − X_min) × rand + X_min)
X_i,m and X_i,f represent the positions of the male and female snakes, respectively. X_rand,m and X_rand,f represent the positions of randomly selected male and female snakes, respectively. The parameter rand is a random number in [0, 1]. The parameters A_m and A_f are the food-searching abilities of the male and female snakes, respectively. The calculation method is as follows:
A_m = exp(−f_rand,m / f_i,m)
A_f = exp(−f_rand,f / f_i,f)
The parameter f_rand,m represents the fitness of X_rand,m; f_i,m represents the fitness of X_i,m; f_rand,f represents the fitness of X_rand,f; and f_i,f represents the fitness of X_i,f. Fitness is a metric used in algorithms to measure the quality of a solution, calculated through a fitness function to guide the algorithm in searching for better solutions. The parameter c2 is a constant of 0.05 [38].
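Putting the exploration-stage formulas together, a hedged NumPy sketch for one sub-population might look like this; the bound clipping and the small eps guard against division by zero are our additions, not part of the original formulation:

```python
import numpy as np

def exploration_step(pop, fit, x_min, x_max, c2=0.05, rng=None):
    """Exploration-stage update (Q < 0.25) for one sub-population.

    X_i(t+1) = X_rand(t) +/- c2 * A_i * ((X_max - X_min) * rand + X_min),
    with foraging ability A_i = exp(-f_rand / f_i).
    """
    rng = np.random.default_rng() if rng is None else rng
    n, dim = pop.shape
    new_pop = np.empty_like(pop)
    eps = 1e-12  # guard against division by zero in the fitness ratio (our addition)
    for i in range(n):
        j = rng.integers(n)                              # random peer index
        a = np.exp(-fit[j] / (abs(fit[i]) + eps))        # foraging ability A_i
        sign = rng.choice([-1.0, 1.0])                   # the +/- in the formula
        new_pop[i] = pop[j] + sign * c2 * a * ((x_max - x_min) * rng.random(dim) + x_min)
    return np.clip(new_pop, x_min, x_max)                # bound handling (our addition)
```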

3.5. Development Phase

When the amount of food is Q > 0.25 and Temp ≥ 0.6, the ambient temperature is in a hot state, and the snake only searches for food. The formula for updating its position is
X_i,j(t+1) = X_food ± c3 × Temp × rand × (X_food − X_i,j(t))
The parameter X_i,j is the position of the snake. X_food is the current optimal individual position. The parameter c3 is a constant of 2.
When Q > 0.25 and Temp < 0.6 are met, the ambient temperature is in a cold state, and the snake will be in a combat or mating mode.
(1)
Combat mode.
Update the position in combat mode to
X_i,m(t+1) = X_i,m(t) + c3 × FM × rand × (Q × X_best,f − X_i,m(t))
X_i,f(t+1) = X_i,f(t) + c3 × FF × rand × (Q × X_best,m − X_i,f(t))
X_best,f and X_best,m represent the optimal individual positions in the female and male populations, respectively. The parameters FM and FF represent the combat capabilities of the male and female snakes, respectively, and are calculated by the following formulas:
FM = exp(−f_best,f / f_i)
FF = exp(−f_best,m / f_i)
The parameters f_best,f and f_best,m are the optimal fitness values of the female and male snakes, respectively. The parameter f_i is the fitness of individual i.
(2)
Mating mode.
Update the position during the mating mode to
X_i,m(t+1) = X_i,m(t) + c3 × M_m × rand × (Q × X_i,f(t) − X_i,m(t))
X_i,f(t+1) = X_i,f(t) + c3 × M_f × rand × (Q × X_i,m(t) − X_i,f(t))
The parameters M_m and M_f represent the mating abilities of the male and female snakes, respectively. The calculation formula is
M_m = exp(−f_i,f / f_i,m)
M_f = exp(−f_i,m / f_i,f)
The parameters f_i,m and f_i,f represent the fitness of the i-th male and female individuals, respectively. If a snake egg hatches, the worst male and female individuals in the population are randomly replaced:
X_worst,m = X_min + rand × (X_max − X_min)
X_worst,f = X_min + rand × (X_max − X_min)
The parameters X_worst,m and X_worst,f represent the worst individuals in the male and female snake populations, respectively [39].
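As an illustrative sketch of one branch of the development phase, the combat-mode update can be written as follows; assuming minimization, vectorizing over the whole sub-population, and the eps guard are our choices, not stated in the paper:

```python
import numpy as np

def combat_step(males, females, f_m, f_f, Q, c3=2.0, rng=None):
    """Combat-mode update (Q > 0.25, Temp < 0.6).

    X_{i,m}(t+1) = X_{i,m}(t) + c3 * FM * rand * (Q * X_{best,f} - X_{i,m}(t)),
    with FM = exp(-f_{best,f} / f_i), and symmetrically for the females.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = 1e-12                                  # division-by-zero guard (our addition)
    best_f = females[np.argmin(f_f)]             # best female (assumes minimization)
    best_m = males[np.argmin(f_m)]               # best male
    fm = np.exp(-f_f.min() / (np.abs(f_m) + eps))[:, None]   # combat ability FM per male
    ff = np.exp(-f_m.min() / (np.abs(f_f) + eps))[:, None]   # combat ability FF per female
    new_m = males + c3 * fm * rng.random(males.shape) * (Q * best_f - males)
    new_f = females + c3 * ff * rng.random(females.shape) * (Q * best_m - females)
    return new_m, new_f
```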

4. Snake Optimization Algorithm with Adaptive t-Distribution Mixed Mutation

The SO algorithm has a strong optimization ability, but it easily falls into local optimal solutions during the local development stage; that is, its global search ability is insufficient. Therefore, in order to better balance the algorithm's global exploration and local development abilities, accelerate its convergence speed, and enhance its robustness, the following improvement strategies are proposed.

4.1. Chaotic Map Based on Tent and Quasi-Reverse Learning Strategy

The diversity of the initial positions of the population plays an important role in enabling the snake optimization algorithm to find the global optimal solution. Chaotic motion has the characteristics of randomness, ergodicity, regularity, and sensitivity to the initial value, which help intelligent algorithms jump out of local optimal solutions and obtain better global search capabilities [40]. Among chaotic maps, the random numbers generated by the Tent chaotic map in [0, 1] are relatively uniform, which can better enhance the diversity of the initial population positions of the snake optimization algorithm. Therefore, the Tent chaotic map is introduced to initialize the snake positions [41].
The chaotic sequence based on Tent chaotic mapping is
z_(i+1) = z_i / ε,              0 ≤ z_i ≤ ε
z_(i+1) = (1 − z_i) / (1 − ε),  ε < z_i ≤ 1        (i = 1, 2, …)
The parameter z_i is the i-th chaotic value of the chaotic sequence, z_i ∈ [0, 1]. The parameter ε is the control parameter; in this article, ε is taken as 0.6.
According to Equation (21), the initial positions of the snake-group individuals based on Tent chaotic mapping can be obtained with
X_i = b_0 + z_i × (b_1 − b_0),   i = 1, 2, …, N_p
In order to further improve the diversity of the initial positions of the snake groups, the quasi-reverse learning strategy is introduced. Based on the basic principle of the quasi-reverse learning strategy, if there exists a point x ∈ [c_1, c_2], where c_1 and c_2 are the minimum and maximum values of the interval of x, respectively, then its quasi-reverse point x* is defined as
x* = (c_1 + c_2)/2 + ((c_1 + c_2)/2 − x) × rd
According to Formula (22), x* is a uniformly distributed random number within the interval [(c_1 + c_2)/2, c_1 + c_2 − x]. Combining Tent chaotic mapping with the quasi-reverse learning strategy forms the Tent chaotic mapping quasi-reverse learning strategy. Using Equations (22) and (23), the initial positions of the snake swarm individuals based on the Tent chaotic mapping quasi-reverse learning strategy are obtained.
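The combined Tent-map and quasi-reverse initialization described above can be sketched as follows; returning both candidate populations (so the caller can keep the fitter individuals) and the randomized seed values of the chaotic sequence are implementation choices of ours:

```python
import numpy as np

def tent_sequence(n, eps=0.6, z0=0.37):
    """Tent chaotic sequence: z_{i+1} = z_i/eps if z_i <= eps else (1 - z_i)/(1 - eps)."""
    z = np.empty(n)
    z[0] = z0
    for i in range(n - 1):
        z[i + 1] = z[i] / eps if z[i] <= eps else (1.0 - z[i]) / (1.0 - eps)
    return z

def tent_qrl_init(n, dim, b0, b1, rng=None):
    """Tent-map initial positions plus their quasi-reverse points.

    Quasi-reverse of x in [c1, c2]: x* = (c1 + c2)/2 + ((c1 + c2)/2 - x) * rd.
    Both candidate populations are returned; in practice the fitter half is kept.
    """
    rng = np.random.default_rng() if rng is None else rng
    # one chaotic row per individual, seeded away from 0 and 1 (our choice)
    z = np.array([tent_sequence(dim, z0=rng.random() * 0.99 + 0.005) for _ in range(n)])
    pop = b0 + z * (b1 - b0)                      # X_i = b0 + z_i * (b1 - b0)
    mid = (b0 + b1) / 2.0
    q_pop = mid + (mid - pop) * rng.random((n, dim))  # quasi-reverse points
    return pop, q_pop
```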

4.2. Adaptive t-Distribution Mutation SO Algorithm Strategy (DTHSO)

The introduction of Cauchy mutation and Gaussian mutation into intelligent optimization algorithms has been proven to be effective in improving algorithm performance. Among them, Cauchy mutation can enrich population diversity, while Gaussian mutation gives the algorithm a good local search ability. The Cauchy distribution and the Gaussian distribution are two special forms of the t-distribution [42]. As the number of iterations and the degrees-of-freedom parameter t increase, the curve of the t-distribution gradually approaches the Gaussian distribution, changing from the Cauchy distribution to the Gaussian distribution [43].
Generating a new solution that conforms to a t-distribution mutation near the position of the optimal solution can combine the advantages of the Gaussian distribution and the Cauchy distribution at the same time. In the early stage of algorithm iteration, the value of the degrees-of-freedom parameter t is small [44]. At this time, the t-distribution mainly presents the characteristics of the Cauchy distribution, enriching the diversity of the population and effectively improving the global search ability of the algorithm. In the middle and late stages of the iteration, the value of the degrees-of-freedom parameter t is large, and the t-distribution comes infinitely close to the Gaussian distribution, which enhances the local development ability of the algorithm and improves its convergence accuracy. In order to enrich the population diversity in the early stage and retain the elite solutions of the snake population in the later stage, an adaptive parameter is introduced at the same time. In the early stage of iteration, the adaptive parameter can take a larger value, and the new solutions generated by the t-distribution mutation can be used to increase population diversity [45]. With the increase in the number of iterations, the algorithm gradually approaches the optimal solution, and the influence of the adaptive parameter-controlled t-distribution on the new solution gradually decreases, fully retaining the elite solutions of the snake population. The expressions for the new solution conforming to the t-distribution mutation are shown in Formulas (24) and (25).
P_news(t) = P_bst + ω × TD(t) × P_bst
ω = a + (b − a) × (T − t)/T
In the formulas, TD(t) represents the t-distribution with degrees-of-freedom parameter t, a = 0.1, b = 1, and T is the maximum number of iterations.
A new solution generated by the adaptive t-distribution mutation is accepted with a certain probability, and the parameter p_e ∈ [0, 1] is randomly generated. The determination of the new optimal snake individual position is shown in Formula (26).
P_best = P_bs,     p_e > 0.5
P_best = P_news,   p_e ≤ 0.5
In this way, when the algorithm performs iterative optimization, there will be two options when determining the optimal solution through probability: one is to continue to select the optimal solution according to the original algorithm, maintaining population diversity while retaining the elite solution; the second is to choose a new solution generated by the mutation perturbation of the adaptive t-distribution, which combines the advantages of Gaussian distribution and Cauchy distribution.
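A compact sketch of the adaptive t-distribution mutation step, using NumPy's Student-t sampler with the iteration count as the degrees of freedom; tying the degrees of freedom directly to the iteration number and drawing a fresh uniform p_e follow the description above, but these details are our reading, not a verbatim implementation:

```python
import numpy as np

def t_mutation_step(x_best, t, T, a=0.1, b=1.0, rng=None):
    """Adaptive t-distribution mutation around the current best solution.

    P_new = P_best + w * TD(t) * P_best, with weight w = a + (b - a) * (T - t) / T.
    The degrees of freedom grow with the iteration count, so the perturbation
    shifts from Cauchy-like (global search) to Gaussian-like (local refinement).
    """
    rng = np.random.default_rng() if rng is None else rng
    w = a + (b - a) * (T - t) / T                       # large early, small late
    td = rng.standard_t(df=max(t, 1), size=x_best.shape)  # df = iteration count
    x_new = x_best + w * td * x_best
    # accept the mutated solution when p_e <= 0.5, keep the original otherwise
    return x_new if rng.random() <= 0.5 else x_best
```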

4.3. Heterogeneous Attraction Strategy

When the standard SO algorithm is in mating mode, its position update depends on the positions and fitness of the i-th male and female snakes, and there is a certain probability that the worst individuals in the two populations will be replaced. Although all individuals move closer to the optimum over the whole process, they are also more likely to fall into local optima and be unable to jump out. To address this, an opposite-sex attraction strategy is proposed to replace the mating mode [46].
Firstly, the three best individuals in the current male and female snake populations are retained into the next generation; the moved positions of the remaining individuals are then calculated from the three best individuals of the opposite sex. The positional update of the male snakes is explained here; the same applies to the female snakes. The calculation formulas are
$$X_{m,1}(t) = X_{\text{one},f}(t) - A_1 \times D_{\text{one},f} \quad (27)$$
$$X_{m,2}(t) = X_{\text{two},f}(t) - A_2 \times D_{\text{two},f} \quad (28)$$
$$X_{m,3}(t) = X_{\text{three},f}(t) - A_3 \times D_{\text{three},f} \quad (29)$$
$$X_m(t+1) = \frac{X_{m,1} + X_{m,2} + X_{m,3}}{3} \quad (30)$$
$X_{m,1}(t)$, $X_{m,2}(t)$, and $X_{m,3}(t)$ represent the positions of the t-th-generation male snake after being attracted by the current three best female individuals, and $X_{\text{one},f}(t)$, $X_{\text{two},f}(t)$, and $X_{\text{three},f}(t)$ represent the positions of the three best individuals in the t-th-generation female population. The parameter A is the convergence factor, calculated by Equation (31). $D_{\text{one},f}$, $D_{\text{two},f}$, and $D_{\text{three},f}$ are the reference distances from the male snake to the three best female individuals, calculated by Equations (32)–(34).
$$A = 2 \times (\text{rand} - 1) \times \left(1 - \frac{t}{T}\right) \quad (31)$$
Among them, the parameter t is the current number of iterations, and T is the total number of iterations.
$$D_{\text{one},f} = \left| C_1 \times X_{\text{one},f} - X_m(t) \right| \quad (32)$$
$$D_{\text{two},f} = \left| C_2 \times X_{\text{two},f} - X_m(t) \right| \quad (33)$$
$$D_{\text{three},f} = \left| C_3 \times X_{\text{three},f} - X_m(t) \right| \quad (34)$$
Among them, the parameter C is the oscillation factor, $C = 2 \times \text{rand}$.
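Equations (27)–(34) can be combined into a single position-update routine. The sketch below is an illustration under our own naming, not the authors' reference implementation, showing how one male individual is attracted by the three best females:

```python
import numpy as np

def opposite_attraction(x_male, best_females, t, T, rng=None):
    """Update one male position from the three best female individuals
    (Equations (27)-(34)); the symmetric formula applies to females.
    An illustrative sketch, not the authors' reference code."""
    rng = np.random.default_rng() if rng is None else rng
    candidates = []
    for x_f in best_females:                      # three elite opposite-sex positions
        A = 2 * (rng.random() - 1) * (1 - t / T)  # Eq. (31): convergence factor
        C = 2 * rng.random()                      # oscillation factor C = 2 * rand
        D = np.abs(C * x_f - x_male)              # Eqs. (32)-(34): reference distance
        candidates.append(x_f - A * D)            # Eqs. (27)-(29)
    return np.mean(candidates, axis=0)            # Eq. (30): average of the pulls
```

Because the convergence factor shrinks to zero as t approaches T, late-stage updates collapse onto the mean of the three elite positions, which matches the step-size behavior described below.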
When Q > 0.25 and Temp < 0.6, the execution probabilities of the standard SO algorithm's combat mode and mating mode are 0.6 and 0.4, respectively. Repeated experiments show that raising the execution probability of the opposite-sex attraction strategy further improves the optimization ability of the algorithm. While the algorithm is in combat mode, the solution quality changes significantly as the iteration progresses, because the combat-mode position update uses a larger step size and can therefore quickly find better solutions. However, once a certain quality level is reached, the algorithm falls into a local optimum and cannot improve further. In the opposite-sex attraction phase, apart from the few best individuals that are carried over to the next generation unchanged, all other individuals update their positions relative to the best individuals of the opposite-sex population, which reduces the chance of falling into local optima. Moreover, as the number of iterations increases, the optimization step size of this position update gradually decreases, improving the local optimization performance of the algorithm. Therefore, in the DTHSO algorithm, the execution probabilities of the combat mode and the opposite-sex attraction strategy are changed to 0.4 and 0.6, respectively, to improve the optimization ability of the algorithm.

4.4. Analysis of Algorithm Time Complexity

Assume the time complexity of the SO algorithm is T(n), the search-space dimension is D, the population size of the snake optimization algorithm is N, and the maximum number of iterations is Tmax. The complexity of population initialization is O(N), and the complexity of the first-stage position update is O(ND). In the second stage, the temperature and food quantity are calculated first and then the position update is carried out, with complexity O(ND + 1); the time complexity of the SO algorithm is therefore T(n) = O[Tmax(2ND + 1) + N]. Compared with the SO algorithm, the DTHSO algorithm adds three processes: Tent chaotic mapping with quasi-reverse learning, adaptive t-distribution mutation, and the opposite-sex attraction strategy, and the complexity increases accordingly. The reverse population initialization generates two different populations using Tent chaotic mapping and reverse learning, with complexity O(2N). The complexity of the adaptive t-distribution mutation is O(D). The opposite-sex attraction strategy is nested inside the second-stage position update, so its complexity is unchanged. Therefore, the time complexity of the DTHSO algorithm is T(n) = O[2N + Tmax(2ND + D + 1)], only a slight increase over the original with little practical difference.
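Collecting the terms above, the two totals can be written side by side:

```latex
\begin{aligned}
T_{\mathrm{SO}}(n)    &= O\big[\,T_{\max}(2ND + 1) + N\,\big],\\
T_{\mathrm{DTHSO}}(n) &= O\big[\,2N + T_{\max}(2ND + D + 1)\,\big].
\end{aligned}
```

For fixed D, the two expressions share the dominant $O(T_{\max} N D)$ term and differ only in lower-order terms, consistent with the claim of a slight increase.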

4.5. The Algorithm Process of DTHSO Algorithm

The implementation steps of the DTHSO algorithm are as follows:
Step 1: Based on the objective function that needs to be calculated, establish a mathematical model and determine the optimization objective.
Step 2: Set the number of snake populations and the maximum number of iterations, and initialize the snake population based on the Tent chaotic map and quasi-reverse learning strategy based on Formula (19).
Step 3: Divide the population into two groups, male and female, according to Equations (2) and (3); set a fitness function; and calculate the corresponding fitness to find the current best male and female individuals.
Step 4: Define the ambient temperature Temp and food quantity Q based on Equations (4) and (5).
Step 5: Determine whether to only search for food or engage in battles and mating based on the amount of food Q. If Q < 0.25, search for food and update the position of the individual snake according to Equation (6).
Step 6: If there is sufficient food and the Temp > 0.6, only search for food and eat existing food, and update the position according to Equation (10).
Step 7: Determine whether to enter combat mode or mating mode based on the mode's random number, rand. In combat mode, update the position according to Equation (11) and replace the fighting ability FM using Equation (13); replace the best individual with the current individual and update the position. If the eggs hatch, select the worst individuals and replace them.
Step 8: Process the updated position and update the individual’s historical best value.
Step 9: Update the position again based on the improved adaptive t-distribution mutation according to Formulas (24)–(26).
Step 10: Update individual historical best values using the opposite attraction strategy.
Step 11: Update the global best fit again and update the food location.
Step 12: Determine if the number of iterations has been reached, and if it does not meet the requirements, proceed to the next iteration; if satisfied, end the iteration and output the optimal position.
The workflow of the improved DTHSO algorithm is shown in Figure 1.
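Steps 1–12 can be summarized as a control-flow skeleton. The sketch below is not the authors' implementation: the Tent/quasi-reverse initialization and the mode-specific updates are replaced by simple stand-ins, and only the general shapes of Temp and Q follow the standard SO definitions; it merely mirrors the loop structure of Figure 1.

```python
import numpy as np

def dthso_skeleton(obj, dim, n=30, t_max=500, lb=-100.0, ub=100.0, seed=0):
    """Control-flow skeleton of Steps 1-12 (stand-in operators, not the
    authors' code). `obj` maps a position vector to a scalar fitness."""
    rng = np.random.default_rng(seed)
    # Step 2 stand-in: uniform initialization instead of Tent chaotic map
    # plus quasi-reverse learning.
    pop = lb + (ub - lb) * rng.random((n, dim))
    best = min(pop, key=obj).copy()
    for t in range(1, t_max + 1):
        temp = np.exp(-t / t_max)                 # Step 4: ambient temperature
        q = 0.5 * np.exp((t - t_max) / t_max)     # Step 4: food quantity
        for i in range(n):
            if q < 0.25:                          # Step 5: forage (exploration)
                step = 0.05 * (ub - lb) * rng.standard_normal(dim)
            elif temp > 0.6:                      # Step 6: move toward food
                step = rng.random(dim) * (best - pop[i])
            else:                                 # Step 7 stand-in: combat/attraction
                step = temp * rng.random(dim) * (best - pop[i])
            pop[i] = np.clip(pop[i] + step, lb, ub)
        # Step 9 stand-in: t-distribution perturbation of the best position.
        trial = np.clip(best + 0.1 * rng.standard_t(df=t) * best, lb, ub)
        # Steps 10-11: keep the best of population, trial, and incumbent.
        cand = min(pop, key=obj)
        for c in (cand, trial):
            if obj(c) < obj(best):
                best = c.copy()
    return best, obj(best)                        # Step 12: output optimum
```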

5. Algorithm Comparison and Result Analysis

5.1. Comparison of Test Function Results

The simulation testing environment is configured as follows: the operating system is Windows 10, 64-bit edition; the processor is a 12th Generation Intel Core(TM) i9-12900K (Intel Corporation, Santa Clara, CA, USA) with a base frequency of 3.20 GHz; the system is equipped with 64 GB of RAM; and the simulation software is MATLAB R2022b. The algorithms selected for comparison include the Grey Wolf Optimizer (GWO) [47], particle swarm optimization (PSO) [48], the Whale Optimization Algorithm (WOA) [49], the Dung Beetle Optimizer (DBO) [50], the Subtraction-Average-Based Optimizer (SABO) [51], Sand Cat Swarm Optimization (SCSO) [52], standard snake optimization (SO) [17], and the proposed DTHSO algorithm. To ensure a fair comparison, the population size for all algorithms is set to 30 and the number of iterations to 500. This study employs the 23 standard benchmark functions from CEC2005 for comparative experiments, which are designed to mimic varying degrees of difficulty in actual search spaces. Detailed information on these functions is presented in Table 1.
After 30 rounds of average calculation, the iterative calculation results of eight algorithms are shown in Figure 2. The bar charts of the test function results for the eight algorithms are shown in Figure 3. Table 2 displays a comparison of the experimental results for the eight algorithms.
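The 30-run averaging protocol under identical settings (population 30, 500 iterations) can be expressed as a small harness. In the sketch below, the optimizer is a trivial random-search stand-in used only to show the interface; all names and the signature are illustrative assumptions:

```python
import numpy as np

def sphere(x):
    """F1 (sphere): the simplest unimodal benchmark, minimum 0 at the origin."""
    return float(np.sum(np.asarray(x) ** 2))

def random_search(func, dim, pop=30, iters=500, seed=0):
    """Trivial stand-in optimizer used only to exercise the harness."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(iters):
        cand = rng.uniform(-100.0, 100.0, (pop, dim))
        best = min(best, min(func(c) for c in cand))
    return best

def benchmark(optimizer, func, dim, runs=30, pop=30, iters=500, seed0=0):
    """Repeat one algorithm `runs` times under identical settings and
    report the mean and standard deviation, as in Table 2."""
    results = np.array([optimizer(func, dim, pop=pop, iters=iters, seed=seed0 + r)
                        for r in range(runs)])
    return results.mean(), results.std()
```

Each compared algorithm would be wrapped in the same callable signature so that all of them see identical budgets and seeds.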
To increase the visual comparability of the data, radar plots of the eight algorithms on the 23 test functions are provided, as shown in Figure 4. The performance comparison and rankings of the eight algorithms are shown in Figure 5.
To compare the convergence speed and accuracy of the algorithms more intuitively, this article presents the iterative convergence curves on the 23 benchmark test functions, as shown in Figure 2. The curves show that the proposed DTHSO algorithm converges faster and more accurately than PSO, GWO, WOA, DBO, SABO, SCSO, and the standard SO algorithm. F1~F6 are high-dimensional unimodal functions that reflect an algorithm's global exploration and convergence capabilities. Tent chaotic mapping and the quasi-reverse learning strategy effectively optimize the initial population, while the adaptive t-distribution mutation strategy and the opposite-sex attraction strategy together reduce the search space, help the algorithm jump out of local optima, and improve its optimization ability. The convergence curves for F1~F6 show that the DTHSO algorithm achieves high convergence speed and accuracy, which indicates that these two strategies mitigate the weak global search ability of the SO algorithm. The high-dimensional multimodal functions mainly test the local exploitation ability of the algorithms. The comparative convergence curves show that the DTHSO algorithm performs best and is less likely to fall into local optima than PSO, GWO, WOA, DBO, SABO, SCSO, and the standard SO algorithm, improving on the shortcomings of the original algorithm. Overall, the convergence graphs show that the DTHSO algorithm also demonstrates significant superiority on the problem of finding the minimum value of a function.
The experimental results in Table 2 show that the proposed DTHSO algorithm directly finds the optimal values for F1~F6, achieving a 100% optimization effect. For F7, the average and standard deviation of the DTHSO algorithm are also much smaller than those of the other algorithms, indicating high stability; F7~F10 mainly verify an algorithm's global exploration ability. For F11 and F15, the optimization effect of the DTHSO algorithm is also satisfactory: it outperforms PSO, GWO, WOA, DBO, SABO, SCSO, and the standard SO algorithm, with the fastest convergence and the best performance. For F16 and F17, all eight algorithms perform poorly, so these results are not given here. For F18~F23, the advantage of the DTHSO algorithm over the other algorithms is not significant in terms of best value, but its average and standard deviation are far better than those of the compared algorithms. This analysis indicates that the overall optimization performance of the DTHSO algorithm is superior to that of the other seven algorithms.
The radar charts of the eight algorithms on the 23 test functions, together with the performance comparison and ranking chart, are given in Figures 4 and 5. It can be clearly seen that the DTHSO algorithm proposed in this article has significant performance advantages and achieves the best results compared with the other seven algorithms.
In summary, the DTHSO algorithm significantly improves the optimization performance on the 23 benchmark test functions and achieves good stability, effectively remedying the shortcomings of the original algorithm and demonstrating its feasibility and effectiveness. This article proposes a multi-strategy improved DTHSO algorithm based on the original SO algorithm and further refines certain behaviors by considering the type of snake and its role in the population. DTHSO is compared with seven other well-established algorithms on the 23 commonly used benchmark functions across multiple dimensions, and the convergence curves of each algorithm are analyzed. The significance of the differences is verified by the Wilcoxon signed-rank test, and the results fully demonstrate the effectiveness of the improved strategies. The results indicate that the DTHSO algorithm has good optimization ability and robustness on complex problems.
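A Wilcoxon signed-rank test of the kind used here can be run with SciPy on the paired per-run results of two algorithms; the numbers below are synthetic placeholders for illustration, not the paper's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic paired per-run results (30 runs) of two algorithms on one
# benchmark function; magnitudes are placeholders, not the paper's data.
rng = np.random.default_rng(1)
so_runs = rng.normal(loc=1e-3, scale=1e-4, size=30)     # stand-in: standard SO
dthso_runs = rng.normal(loc=1e-5, scale=1e-6, size=30)  # stand-in: DTHSO

# Two-sided Wilcoxon signed-rank test on the paired differences;
# p < 0.05 is read as a statistically significant difference.
stat, p = wilcoxon(so_runs, dthso_runs)
print(f"statistic = {stat:.1f}, p = {p:.3e}")
```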

5.2. Comparison of Energy Storage Optimization Scheduling

To improve the economic efficiency of wind–solar complementary power generation energy storage systems and reduce their operating costs, a capacity optimization configuration model for such systems is studied, and the improved optimization algorithm and a hybrid energy storage capacity optimization method are explored. In the wind–solar complementary power generation system, a battery–supercapacitor hybrid is used as the energy storage device. The system composition is shown in Figure 6; it consists of wind turbines, photovoltaic arrays, batteries, supercapacitors, converters, loads, etc.
The life cycle cost (LCC), also known as the total lifecycle cost, refers to the sum of all expenses incurred during the planning, manufacturing, installation, use, maintenance, and disposal of equipment throughout its lifecycle. To estimate the LCC reasonably and accurately, a breakdown structure for the entire lifecycle cost is established. Based on the work items of each stage of the lifecycle, the total cost is gradually decomposed into basic units, forming a unit cost system sorted and arranged in sequence, known as the cost breakdown structure (CBS). An LCC estimation model based on this breakdown structure is then established. The cost model, which does not consider the time value of funds, is a static cost model, where the LCC is
$$LCC = C_1 + C_O + C_M + C_D$$
Here, LCC is the full lifecycle cost; C1 is the purchase cost of the equipment; CO is the operating cost; CM is the maintenance cost; and CD is the disposal cost (residual value and scrap cost) of the equipment.
The objective function is the full lifecycle cost of hybrid energy storage devices, which can also be defined as the sum of the four major costs. The LCC model of a hybrid energy storage system is
$$\min C = C_1 + C_O + C_M + C_D = (1 + f_{ob} + f_{mb} + f_{db}) N_b P_b + (1 + f_{oc} + f_{dc}) N_c P_c$$
The parameters Nb and Nc represent the numbers of batteries and supercapacitors, respectively; Pb and Pc are their unit prices; fob and foc are the operating coefficients of the battery and supercapacitor; fmb and fmc are the maintenance coefficients (supercapacitors are generally maintenance-free, so fmc = 0); and fdb and fdc are the disposal coefficients. The specific mathematical model can be found in references [53,54]. The comparison of capacity optimization iterations for the hybrid energy storage systems using the eight algorithms is shown in Figure 7. The minimum lifecycle costs obtained by the intelligent optimization algorithms are shown in Figure 8, the numbers of batteries in Figure 9, and the numbers of capacitors in Figure 10.
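The objective in the minimization above is a simple linear function of the device counts. A minimal sketch, with placeholder prices and coefficients rather than the paper's actual values, is:

```python
def lcc(n_b, n_c, p_b=300.0, p_c=20.0,
        f_ob=0.05, f_mb=0.03, f_db=0.02, f_oc=0.02, f_dc=0.01):
    """Full-lifecycle cost min C = (1 + f_ob + f_mb + f_db) * Nb * Pb
    + (1 + f_oc + f_dc) * Nc * Pc. All prices and coefficients here are
    placeholder values; supercapacitors are maintenance-free (f_mc = 0),
    so no maintenance coefficient appears in the second term."""
    battery_cost = (1 + f_ob + f_mb + f_db) * n_b * p_b
    supercap_cost = (1 + f_oc + f_dc) * n_c * p_c
    return battery_cost + supercap_cost
```

An optimizer such as DTHSO then searches over the integer pair (Nb, Nc) subject to the load outage-rate constraint.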
The simulation results show that PSO, GWO, WOA, DBO, SABO, SCSO, and the standard SO algorithm converge relatively slowly, requiring approximately 15 iterations to find an approximately optimal solution, whereas the proposed DTHSO algorithm converges noticeably faster, after approximately 10 iterations. The results obtained with all eight methods satisfy the minimum load outage rate requirement, and the optimization effect of the proposed DTHSO algorithm is superior to that of the other seven intelligent optimization algorithms.
Compared with PSO, GWO, WOA, DBO, SABO, SCSO, and the standard SO algorithm, the DTHSO algorithm requires 49,827 batteries and 5,623,400 supercapacitors, yielding a minimum lifecycle cost of about CNY 159,880 and a load outage rate of 0.032. When an asymmetric acceleration factor is used, approximately 47,212 batteries and 5,685,300 supercapacitors need to be configured, resulting in a minimum lifecycle cost of CNY 157,130 and a load shortage rate of 0.035. The DTHSO algorithm proposed in this article therefore has better optimization ability. The capacity configuration optimization design of the battery–supercapacitor hybrid energy storage device is carried out with the goal of minimizing the total lifecycle cost of the system under the corresponding constraints, such as the power outage rate; this includes the design of a capacity configuration optimization model and the use of the proposed DTHSO algorithm. Simulation results show that the DTHSO algorithm converges in approximately 10 iterations, accelerating convergence and providing better optimization ability. This optimization model based on the DTHSO algorithm has reference value for the capacity configuration and economy of hybrid energy storage devices in wind–solar complementary power generation systems.

6. Conclusions

To overcome the shortcomings of the traditional snake optimization process, including poor population diversity, sluggish convergence, and an inadequate balance between the exploration and exploitation stages, this work proposes an adaptive t-distribution mixed mutation snake optimization approach. The method starts with Tent chaotic mapping and a quasi-reverse learning strategy, substitutes an adaptive t-distribution mixed mutation foraging strategy for the original foraging-stage technique, and employs an opposite-sex attraction strategy in place of the mating strategy. The enhanced approach provides higher solution accuracy and a faster rate of convergence. The DTHSO algorithm shows the highest accuracy among the compared approaches, and the outcomes on 23 test functions confirm the optimization strategy's efficacy. To lower the total system lifecycle cost while satisfying the related constraints, such as the power outage rate, a capacity configuration optimization design is performed for hybrid energy storage devices comprising batteries and supercapacitors. When the capacity configuration optimization model is solved with the adaptive t-distribution mixed mutation snake optimization approach, the number of batteries needed is significantly reduced. This offers a useful reference for the economy and capacity arrangement of hybrid energy storage devices in wind–solar complementary power generation systems.
The improved SO algorithm demonstrates superior optimization ability, robustness, and practicality. Nevertheless, it has not yielded uniformly strong outcomes across function testing and the capacity optimization of hybrid energy storage systems, and certain constraints remain. The primary limitation is that for two or three test functions, the optimization results of the DTHSO technique are less favorable than those of the traditional SO method. Even though it can circumvent local optima in the majority of situations, the DTHSO algorithm sometimes fails to break free of them. Compared with previous swarm intelligence algorithms, the DTHSO approach substantially enhances performance on unimodal functions, yet has a more limited influence on some multimodal functions. In light of these concerns, relevant improvement techniques will be put forward and applied to real-life problems in diverse fields, such as signal processing, image segmentation, and unmanned aerial vehicle (UAV) route optimization.

Author Contributions

Conceptualization, L.C.; writing—review and editing, Y.Y.; validation, L.C.; data curation, C.C.; writing—original draft preparation, Y.Y.; methodology, B.C.; software, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Natural Science Foundation of Zhejiang Province under Grant LY23F010002, and the School-level Scientific Research Project of Wenzhou University of Technology under Grant ky202403.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon request. There are no restrictions on data availability.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. Tang, J.; Duan, H.; Lao, S. Swarm intelligence algorithms for multiple unmanned aerial vehicles collaboration: A comprehensive review. Artif. Intell. Rev. 2023, 56, 4295–4327. [Google Scholar] [CrossRef]
  2. Attiya, I.; Abd Elaziz, M.; Abualigah, L.; Nguyen, T.; Abd, A. An improved hybrid swarm intelligence for scheduling iot application tasks in the cloud. IEEE Trans. Ind. Inform. 2022, 18, 6264–6272. [Google Scholar] [CrossRef]
  3. Nasir, M.H.; Khan, S.A.; Khan, M.M.; Fatima, M. Swarm intelligence inspired intrusion detection systems—A systematic literature review. Comput. Netw. 2022, 205, 108708. [Google Scholar] [CrossRef]
  4. Sakovich, N.; Aksenov, D.; Pleshakova, E.; Gataullin, S. MAMGD: Gradient-Based Optimization Method Using Exponential Decay. Technologies 2024, 12, 154. [Google Scholar] [CrossRef]
  5. Wang, S.; Cao, L.; Chen, Y.; Chen, C.; Yue, Y.; Zhu, W. Gorilla optimization algorithm combining sine cosine and cauchy variations and its engineering applications. Sci. Rep. 2024, 14, 7578. [Google Scholar] [CrossRef]
  6. Jiang, S.; Yue, Y.; Chen, C.; Chen, Y.; Cao, Y. A Multi-Objective Optimization Problem Solving Method Based on Improved Golden Jackal Optimization Algorithm and Its Application. Biomimetics 2024, 9, 270. [Google Scholar] [CrossRef] [PubMed]
  7. Chen, C.; Cao, L.; Chen, Y.; Yue, Y. A comprehensive survey of convergence analysis of beetle antennae search algorithm and its applications. Artif. Intell. Rev. 2024, 57, 141. [Google Scholar] [CrossRef]
  8. Hu, G.; Huang, F.; Chen, K.; Wei, G. MNEARO: A meta swarm intelligence optimization algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2024, 419, 116664. [Google Scholar] [CrossRef]
  9. Yue, Y.; Cao, L.; Lu, D.; Hu, Z.; Xu, M.; Wang, S.; Li, B.; Ding, H. Review and empirical analysis of sparrow search algorithm. Artif. Intell. Rev. 2023, 56, 10867–10919. [Google Scholar] [CrossRef]
  10. Prity, F.S.; Uddin, K.M.A.; Nath, N. Exploring swarm intelligence optimization techniques for task scheduling in cloud computing: Algorithms, performance analysis, and future prospects. Iran J. Comput. Sci. 2024, 7, 337–358. [Google Scholar] [CrossRef]
  11. Bacanin, N.; Zivkovic, M.; Stoean, C.; Antonijevic, M.; Janicijevic, S.; Sarac, M.; Janicijevic, I. Application of natural language processing and machine learning boosted with swarm intelligence for spam email filtering. Mathematics 2022, 10, 4173. [Google Scholar] [CrossRef]
  12. Klein, L.; Zelinka, I.; Seidl, D. Optimizing parameters in swarm intelligence using reinforcement learning: An application of Proximal Policy Optimization to the iSOMA algorithm. Swarm Evol. Comput. 2024, 85, 101487. [Google Scholar] [CrossRef]
  13. Chen, B.; Cao, L.; Chen, C.; Chen, Y.; Yue, Y. A comprehensive survey on the chicken swarm optimization algorithm and its applications: State-of-the-art and research challenges. Artif. Intell. Rev. 2024, 57, 170. [Google Scholar] [CrossRef]
  14. Maleki, A. Optimization based on modified swarm intelligence techniques for a stand-alone hybrid photovoltaic/diesel/battery system. Sustain. Energy Technol. Assess. 2022, 51, 101856. [Google Scholar] [CrossRef]
  15. Liu, Y.; Huang, H.; Zhou, J. A dual cluster head hierarchical routing protocol for wireless sensor networks based on hybrid swarm intelligence optimization. IEEE Internet Things J. 2024, 11, 16710–16721. [Google Scholar] [CrossRef]
  16. Cheng, S.; Wang, X.; Zhang, M.; Lei, X.; Lu, H.; Shi, Y. Solving multimodal optimization problems by a knowledge-driven brain storm optimization algorithm. Appl. Soft Comput. 2024, 150, 111105. [Google Scholar] [CrossRef]
  17. Hashim, F.A.; Hussien, A.G. Snake Optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 2022, 242, 108320. [Google Scholar] [CrossRef]
  18. Yao, L.; Yuan, P.; Tsai, C.Y.; Zhang, T.; Lu, Y.; Ding, S. ESO: An enhanced snake optimizer for real-world engineering problems. Expert Syst. Appl. 2023, 230, 120594. [Google Scholar] [CrossRef]
  19. Hu, G.; Yang, R.; Abbas, M.; Wei, G. BEESO: Multi-strategy boosted snake-inspired optimizer for engineering applications. J. Bionic Eng. 2023, 20, 1791–1827. [Google Scholar] [CrossRef]
  20. Khurma, R.A.; Albashish, D.; Braik, M.; Alzaqebah, A.; Qasem, A.; Adwan, O. An augmented Snake Optimizer for diseases and COVID-19 diagnosis. Biomed. Signal Process. Control 2023, 84, 104718. [Google Scholar] [CrossRef]
  21. Al-Shourbaji, I.; Kachare, P.H.; Alshathri, S.; Duraibi, S.; Elnaim, B.; Elaziz, M.A. An efficient parallel reptile search algorithm and snake optimizer approach for feature selection. Mathematics 2022, 10, 2351. [Google Scholar] [CrossRef]
  22. Belabbes, F.; Cotfas, D.T.; Cotfas, P.A.; Medles, M. Using the snake optimization metaheuristic algorithms to extract the photovoltaic cells parameters. Energy Convers. Manag. 2023, 292, 117373. [Google Scholar] [CrossRef]
  23. Xu, C.; Liu, Q.; Huang, T. Resilient penalty function method for distributed constrained optimization under byzantine attack. Inf. Sci. 2022, 596, 362–379. [Google Scholar] [CrossRef]
  24. Li, W.; Sun, B.; Sun, Y.; Huang, Y.; Cheung, Y.; Gu, F. DC-SHADE-IF: An infeasible–feasible regions constrained optimization approach with diversity controller. Expert Syst. Appl. 2023, 224, 119999. [Google Scholar] [CrossRef]
  25. Hussien, A.G.; Heidari, A.A.; Ye, X.; Liang, G.; Chen, H.; Pan, Z. Boosting whale optimization with evolution strategy and Gaussian random walks: An image segmentation method. Eng. Comput. 2023, 39, 1935–1979. [Google Scholar] [CrossRef]
  26. Pereira, J.L.J.; Oliver, G.A.; Francisco, M.B.; Cunha, S.; Gomes, G.F. A review of multi-objective optimization: Methods and algorithms in mechanical engineering problems. Arch. Comput. Methods Eng. 2022, 29, 2285–2308. [Google Scholar] [CrossRef]
  27. Xiao, N.; Liu, X.; Yuan, Y. A class of smooth exact penalty function methods for optimization problems with orthogonality constraints. Optim. Methods Softw. 2022, 37, 1205–1241. [Google Scholar] [CrossRef]
  28. Kumari, S.; Khurana, P.; Singla, S.; Kumar, A. Solution of constrained problems using particle swarm optimiziation. Int. J. Syst. Assur. Eng. Manag. 2022, 13, 1688–1695. [Google Scholar] [CrossRef]
  29. Liao, W.; Xia, X.; Jia, X.; Shen, S.; Zhuang, H.; Zhang, X. A Spider Monkey Optimization Algorithm Combining Opposition-Based Learning and Orthogonal Experimental Design. Comput. Mater. Contin. 2023, 76, 3297–3323. [Google Scholar] [CrossRef]
  30. Wei, W.; Wang, J.; Tao, M. Constrained differential evolution with multiobjective sorting mutation operators for constrained optimization. Appl. Soft Comput. 2015, 33, 207–222. [Google Scholar] [CrossRef]
  31. Gao, S.; de Silva, C.W. Estimation distribution algorithms on constrained optimization problems. Appl. Math. Comput. 2018, 339, 323–345. [Google Scholar] [CrossRef]
  32. Rahimi, I.; Gandomi, A.H.; Chen, F.; Mezura, E. A review on constraint handling techniques for population-based algorithms: From single-objective to multi-objective optimization. Arch. Comput. Methods Eng. 2023, 30, 2181–2209. [Google Scholar] [CrossRef]
  33. Wang, Y.; Liu, Z.; Wang, G.G. Improved differential evolution using two-stage mutation strategy for multimodal multi-objective optimization. Swarm Evol. Comput. 2023, 78, 101232. [Google Scholar] [CrossRef]
  34. Masood, A.; Hameed, M.M.; Srivastava, A.; Pham, Q.B.; Ahmad, K.; Razali, F.M.; Baowidan, S.A. Improving PM2.5 prediction in New Delhi using a hybrid extreme learning machine coupled with snake optimization algorithm. Sci. Rep. 2023, 13, 21057. [Google Scholar] [CrossRef] [PubMed]
  35. Wang, C.; Jiao, S.; Li, Y.; Zhang, Q. Capacity optimization of a hybrid energy storage system considering wind-Solar reliability evaluation based on a novel multi-strategy snake optimization algorithm. Expert Syst. Appl. 2023, 231, 120602. [Google Scholar] [CrossRef]
  36. Wang, L.; Fan, G.; Wang, Q.; Li, H.; Huo, J.; Wei, S.; Niu, Q. Snake optimizer LSTM-based UWB positioning method for unmanned crane. PLoS ONE 2023, 18, e0293618. [Google Scholar] [CrossRef]
  37. Li, H.; Xu, G.; Chen, B.; Huang, S.; Xia, Y.; Chai, S. Dual-mutation mechanism-driven snake optimizer for scheduling multiple budget constrained workflows in the cloud. Appl. Soft Comput. 2023, 149, 110966. [Google Scholar] [CrossRef]
  38. Janjanam, L.; Saha, S.K.; Kar, R. Optimal design of Hammerstein cubic spline filter for nonlinear system modeling based on snake optimizer. IEEE Trans. Ind. Electron. 2022, 70, 8457–8467. [Google Scholar] [CrossRef]
  39. Cheng, R.; Qiao, Z.; Li, J.; Huang, J. Traffic signal timing optimization model based on video surveillance data and snake optimization algorithm. Sensors 2023, 23, 5157. [Google Scholar] [CrossRef]
  40. Li, C.; Feng, B.; Li, S.; Kurths, J.; Chen, G. Dynamic analysis of digital chaotic maps via state-mapping networks. IEEE Trans. Circuits Syst. I Regul. Pap. 2019, 66, 2322–2335. [Google Scholar] [CrossRef]
  41. Yu, W.; Zhou, P.; Miao, Z.; Zhao, H.; Mou, J.; Zhou, W. Energy performance prediction of pump as turbine (PAT) based on PIWOA-BP neural network. Renew. Energy 2024, 222, 119873. [Google Scholar] [CrossRef]
  42. Yang, X.; Liu, J.; Liu, Y.; Xu, P.; Yu, L.; Zhu, L.; Chen, H.; Deng, W. A novel adaptive sparrow search algorithm based on chaotic mapping and t-distribution mutation. Appl. Sci. 2021, 11, 11192. [Google Scholar] [CrossRef]
  43. Ilboudo, W.E.L.; Kobayashi, T.; Matsubara, T. Adaterm: Adaptive t-distribution estimated robust moments for noise-robust stochastic gradient optimization. Neurocomputing 2023, 557, 126692. [Google Scholar] [CrossRef]
  44. Yin, S.; Luo, Q.; Du, Y.; Zhou, Y. DTSMA: Dominant swarm with adaptive t-distribution mutation-based slime mould algorithm. Math. Biosci. Eng. 2022, 19, 2240–2285. [Google Scholar] [CrossRef]
  45. Zhang, H.; Huang, Q.; Ma, L.; Zhang, Z. Sparrow search algorithm with adaptive t distribution for multi-objective low-carbon multimodal transportation planning problem with fuzzy demand and fuzzy time. Expert Syst. Appl. 2024, 238, 122042. [Google Scholar] [CrossRef]
  46. Corbett, B.; Loi, R.; Zhou, W.; Liu, D.; Ma, Z. Transfer print techniques for heterogeneous integration of photonic components. Prog. Quantum Electron. 2017, 52, 1–17. [Google Scholar] [CrossRef]
  47. Zamfirache, I.A.; Precup, R.E.; Roman, R.C.; Petriu, E.M. Policy iteration reinforcement learning-based control using a grey wolf optimizer algorithm. Inf. Sci. 2022, 585, 162–175. [Google Scholar] [CrossRef]
  48. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  49. Amiriebrahimabadi, M.; Mansouri, N. A comprehensive survey of feature selection techniques based on whale optimization algorithm. Multimed. Tools Appl. 2024, 83, 47775–47846. [Google Scholar] [CrossRef]
  50. Xue, J.; Shen, B. Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. J. Supercomput. 2023, 79, 7305–7336. [Google Scholar] [CrossRef]
  51. Trojovský, P.; Dehghani, M. Subtraction-average-based optimizer: A new swarm-inspired metaheuristic algorithm for solving optimization problems. Biomimetics 2023, 8, 149. [Google Scholar] [CrossRef] [PubMed]
  52. Wu, D.; Rao, H.; Wen, C.; Jia, H.; Liu, Q.; Abualigah, L. Modified sand cat swarm optimization algorithm for solving constrained engineering optimization problems. Mathematics 2022, 10, 4350. [Google Scholar] [CrossRef]
  53. Wang, Y.; Song, F.; Ma, Y.; Zhang, Y.; Yang, J.; Liu, Y.; Zhang, F.; Zhu, J. Research on capacity planning and optimization of regional integrated energy system based on hybrid energy storage system. Appl. Therm. Eng. 2020, 180, 115834. [Google Scholar] [CrossRef]
  54. Pang, M.; Shi, Y.; Wang, W.; Pang, S. Optimal sizing and control of hybrid energy storage system for wind power using hybrid parallel PSO-GA algorithm. Energy Explor. Exploit. 2019, 37, 558–578. [Google Scholar] [CrossRef]
Figure 1. The workflow of the improved DTHSO algorithm.
Figure 2. Comparison of iterative computation results for eight algorithms.
Figure 3. Bar charts of test function results for eight algorithms.
Figure 4. The radar plots of eight algorithms in 23 test functions.
Figure 5. The performance comparison and ranking of the eight additional algorithms.
Figure 6. Wind–solar complementary hybrid energy storage structure.
Figure 7. Comparison of the energy storage optimization scheduling of eight algorithms.
Figure 8. Comparison of minimum lifecycle costs.
Figure 9. Comparison of battery quantity.
Figure 10. Comparison of the number of capacitors.
Table 1. CEC2005 test functions.

| Function | Equation | Dimension | Bounds | Optimum |
| --- | --- | --- | --- | --- |
| F1 | $\sum_{i=1}^{d} x_i^2$ | 30 | [−100, 100] | 0 |
| F2 | $\sum_{i=1}^{d} \lvert x_i \rvert + \prod_{i=1}^{d} \lvert x_i \rvert$ | 30 | [−10, 10] | 0 |
| F3 | $\sum_{i=1}^{d} \bigl( \sum_{j=1}^{i} x_j \bigr)^2$ | 30 | [−100, 100] | 0 |
| F4 | $\max_i \{ \lvert x_i \rvert, 1 \le i \le d \}$ | 30 | [−100, 100] | 0 |
| F5 | $\sum_{i=1}^{d-1} [100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2]$ | 30 | [−30, 30] | 0 |
| F6 | $\sum_{i=1}^{d} (x_i + 0.5)^2$ | 30 | [−100, 100] | 0 |
| F7 | $\sum_{i=1}^{d} i x_i^4 + \mathrm{rand}(0, 1)$ | 30 | [−1.28, 1.28] | 0 |
| F8 | $\sum_{i=1}^{d} \lvert x_i \sin(x_i) + 0.1 x_i \rvert$ | 30 | [−10, 10] | 0 |
| F9 | $10 d + \sum_{i=1}^{d} [x_i^2 - 10 \cos(2 \pi x_i)]$ | 30 | [−5.12, 5.12] | 0 |
| F10 | $-20 \exp\bigl( -0.2 \sqrt{\tfrac{1}{d} \sum_{i=1}^{d} x_i^2} \bigr) - \exp\bigl( \tfrac{1}{d} \sum_{i=1}^{d} \cos(2 \pi x_i) \bigr) + 20 + e$ | 30 | [−32, 32] | 0 |
| F11 | $\tfrac{1}{4000} \sum_{i=1}^{d} x_i^2 - \prod_{i=1}^{d} \cos\bigl( \tfrac{x_i}{\sqrt{i}} \bigr) + 1$ | 30 | [−600, 600] | 0 |
| F12 | $\tfrac{\pi}{d} \bigl\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{d-1} (y_i - 1)^2 [1 + 10 \sin^2(\pi y_{i+1})] + (y_d - 1)^2 \bigr\} + \sum_{i=1}^{d} u(x_i, 10, 100, 4)$, where $y_i = 1 + \tfrac{x_i + 1}{4}$ and $u(x_i, a, k, m) = k (x_i - a)^m$ if $x_i > a$; $0$ if $-a \le x_i \le a$; $k (-x_i - a)^m$ if $x_i < -a$ | 30 | [−50, 50] | 0 |
| F13 | $0.1 \bigl\{ \sin^2(3 \pi x_1) + \sum_{i=1}^{d-1} (x_i - 1)^2 [1 + \sin^2(3 \pi x_{i+1})] + (x_d - 1)^2 [1 + \sin^2(2 \pi x_d)] \bigr\} + \sum_{i=1}^{d} u(x_i, 5, 100, 4)$ | 30 | [−50, 50] | 0 |
| F14 | $\bigl( \tfrac{1}{500} + \sum_{j=1}^{25} \tfrac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \bigr)^{-1}$ | 2 | [−65, 65] | 0.998 |
| F15 | $\sum_{i=1}^{11} \bigl[ a_i - \tfrac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \bigr]^2$ | 4 | [−5, 5] | 0.0003 |
| F16 | $4 x_1^2 - 2.1 x_1^4 + \tfrac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | 2 | [−5, 5] | −1.0316 |
| F17 | $\bigl( x_2 - \tfrac{5.1}{4 \pi^2} x_1^2 + \tfrac{5}{\pi} x_1 - 6 \bigr)^2 + 10 \bigl( 1 - \tfrac{1}{8 \pi} \bigr) \cos x_1 + 10$ | 2 | [−5, 10] | 0.3978 |
| F18 | $[1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2)] \times [30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2)]$ | 2 | [−2, 2] | 3 |
| F19 | $-\sum_{i=1}^{4} c_i \exp\bigl( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \bigr)$ | 3 | [1, 3] | −3.86 |
| F20 | $-\sum_{i=1}^{4} c_i \exp\bigl( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \bigr)$ | 6 | [0, 1] | −3.32 |
| F21 | $-\sum_{i=1}^{5} [(X - a_i)(X - a_i)^T + c_i]^{-1}$ | 4 | [0, 10] | −10.1532 |
| F22 | $-\sum_{i=1}^{7} [(X - a_i)(X - a_i)^T + c_i]^{-1}$ | 4 | [0, 10] | −10.4028 |
| F23 | $-\sum_{i=1}^{10} [(X - a_i)(X - a_i)^T + c_i]^{-1}$ | 4 | [0, 10] | −10.5363 |
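The closed forms in Table 1 translate directly into code. The sketch below (Python, not the authors' implementation) evaluates three representative benchmarks, F1 (sphere), F9 (Rastrigin), and F10 (Ackley), and could serve as the objective function for any of the compared optimizers.

```python
import math

# Minimal sketch of three Table 1 benchmarks; all have optimum 0 at the origin.

def f1_sphere(x):
    """F1: sum of squared coordinates."""
    return sum(xi ** 2 for xi in x)

def f9_rastrigin(x):
    """F9: 10d + sum(x_i^2 - 10 cos(2 pi x_i)); highly multimodal."""
    d = len(x)
    return 10 * d + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def f10_ackley(x):
    """F10: Ackley function; nearly flat outer region, deep central basin."""
    d = len(x)
    term1 = -20 * math.exp(-0.2 * math.sqrt(sum(xi ** 2 for xi in x) / d))
    term2 = -math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / d)
    return term1 + term2 + 20 + math.e

zeros = [0.0] * 30  # the global optimum of all three functions at d = 30
```

At `zeros`, each function returns 0 (up to floating-point rounding for F10), which matches the Optimum column of Table 1.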
Table 2. Comparison of experimental results of eight algorithms.

| Function | Algorithm | Mean | Std | Best | Worst |
| --- | --- | --- | --- | --- | --- |
| F1 | PSO | 7.23 × 10−10 | 1.29 × 10−9 | 5.28 × 10−12 | 5.18 × 10−9 |
| F1 | GWO | 1.25 × 10−56 | 5.57 × 10−56 | 3.41 × 10−60 | 3.07 × 10−55 |
| F1 | WOA | 6.99 × 10−74 | 3.83 × 10−73 | 7.73 × 10−91 | 2.1 × 10−72 |
| F1 | DBO | 1.2 × 10−102 | 6.76 × 10−102 | 3 × 10−154 | 3.7 × 10−101 |
| F1 | SABO | 1.6 × 10−199 | 0 | 5.8 × 10−205 | 3.2 × 10−198 |
| F1 | SCSO | 2.1 × 10−131 | 7.61 × 10−131 | 7.6 × 10−146 | 4.1 × 10−130 |
| F1 | SO | 3.96 × 10−161 | 1.6 × 10−160 | 2.47 × 10−174 | 8.08 × 10−160 |
| F1 | DTHSO | 0 | 0 | 0 | 0 |
| F2 | PSO | 1.45 × 10−6 | 8.83 × 10−7 | 2.94 × 10−7 | 4 × 10−6 |
| F2 | GWO | 7.94 × 10−33 | 1.55 × 10−32 | 2.9 × 10−35 | 7.8 × 10−32 |
| F2 | WOA | 4.86 × 10−51 | 2.66 × 10−50 | 3.93 × 10−60 | 1.46 × 10−49 |
| F2 | DBO | 2.7 × 10−55 | 9.45 × 10−55 | 2.4 × 10−85 | 4.53 × 10−54 |
| F2 | SABO | 1.5 × 10−109 | 1.9 × 10−109 | 4 × 10−112 | 7.2 × 10−109 |
| F2 | SCSO | 5.12 × 10−69 | 2.4 × 10−68 | 4.34 × 10−76 | 1.31 × 10−67 |
| F2 | SO | 2.307 × 10−87 | 5.077 × 10−87 | 5.735 × 10−91 | 2.574 × 10−86 |
| F2 | DTHSO | 0 | 0 | 0 | 0 |
| F3 | PSO | 0.002794 | 0.003631 | 9.86 × 10−5 | 0.01412 |
| F3 | GWO | 1.59 × 10−24 | 4.89 × 10−24 | 5.58 × 10−31 | 2.12 × 10−23 |
| F3 | WOA | 255.6735 | 476.0306 | 0.380031 | 2629.944 |
| F3 | DBO | 3.84 × 10−89 | 2.11 × 10−88 | 8.9 × 10−161 | 1.15 × 10−87 |
| F3 | SABO | 6.56 × 10−74 | 3.59 × 10−73 | 1.7 × 10−122 | 1.97 × 10−72 |
| F3 | SCSO | 5.9 × 10−111 | 3.3 × 10−110 | 1 × 10−126 | 1.8 × 10−109 |
| F3 | SO | 1.96 × 10−131 | 9.46 × 10−131 | 4.85 × 10−144 | 5.19 × 10−130 |
| F3 | DTHSO | −2.5 × 10−27 | 4.19 × 10−26 | 0 | −7.4 × 10−26 |
| F4 | PSO | 0.001705 | 0.001292 | 0.000198 | 0.00573 |
| F4 | GWO | 3.47 × 10−18 | 6.38 × 10−18 | 1.98 × 10−20 | 3.11 × 10−17 |
| F4 | WOA | 5.260061 | 13.52821 | 0.000802 | 64.69591 |
| F4 | DBO | 9.37 × 10−44 | 5.13 × 10−43 | 4.59 × 10−79 | 2.81 × 10−42 |
| F4 | SABO | 7.44 × 10−85 | 1.47 × 10−84 | 7.91 × 10−88 | 6.65 × 10−84 |
| F4 | SCSO | 1.77 × 10−57 | 9.08 × 10−57 | 2.16 × 10−64 | 4.98 × 10−56 |
| F4 | SO | 4.259 × 10−72 | 2.307 × 10−71 | 3.529 × 10−77 | 1.264 × 10−70 |
| F4 | DTHSO | 0 | 0 | 0 | 0 |
| F5 | PSO | 8.409876 | 11.91073 | 3.796652 | 69.3338 |
| F5 | GWO | 6.497992 | 0.501592 | 6.090313 | 8.062182 |
| F5 | WOA | 11.54476 | 25.59158 | 6.131159 | 147.0338 |
| F5 | DBO | 5.467078 | 0.723774 | 4.935106 | 8.073513 |
| F5 | SABO | 7.414 | 0.442234 | 6.832093 | 8.704741 |
| F5 | SCSO | 6.973817 | 0.616379 | 6.154244 | 8.072619 |
| F5 | SO | 2.1036632 | 0.4884498 | 1.2576818 | 3.1305641 |
| F5 | DTHSO | 5.002573 | 0.336522 | 4.322848 | 6.107432 |
| F6 | PSO | 6.35 × 10−10 | 1.34 × 10−9 | 9.24 × 10−12 | 6.14 × 10−9 |
| F6 | GWO | 3.62 × 10−6 | 1.24 × 10−6 | 1.55 × 10−6 | 6.49 × 10−6 |
| F6 | WOA | 0.002019 | 0.004102 | 0.000199 | 0.019776 |
| F6 | DBO | 2.31 × 10−23 | 8.34 × 10−23 | 6.63 × 10−30 | 4.52 × 10−22 |
| F6 | SABO | 0.044301 | 0.089992 | 0.000408 | 0.283947 |
| F6 | SCSO | 0.050203 | 0.138549 | 2.38 × 10−7 | 0.505572 |
| F6 | SO | 1.002 × 10−14 | 3.614 × 10−14 | 5.11 × 10−22 | 1.777 × 10−13 |
| F6 | DTHSO | 1.4 × 10−12 | 2.22 × 10−12 | 7.53 × 10−16 | 1.02 × 10−11 |
| F7 | PSO | 0.002595 | 0.001898 | 0.000397 | 0.008926 |
| F7 | GWO | 0.000609 | 0.000427 | 9.96 × 10−5 | 0.001877 |
| F7 | WOA | 0.002982 | 0.003414 | 0.000123 | 0.015604 |
| F7 | DBO | 0.000979 | 0.000631 | 7.1 × 10−5 | 0.002622 |
| F7 | SABO | 0.000177 | 0.000154 | 1.19 × 10−5 | 0.000602 |
| F7 | SCSO | 0.000137 | 0.000285 | 1.23 × 10−6 | 0.001529 |
| F7 | SO | 0.0002989 | 0.0002446 | 3.4 × 10−5 | 0.0009868 |
| F7 | DTHSO | 0.000143 | 9.87 × 10−5 | 5.39 × 10−6 | 0.000431 |
| F8 | PSO | −3096.79 | 288.6078 | −3854.25 | −2432.97 |
| F8 | GWO | −2743.88 | 281.8803 | −3298.59 | −2220.43 |
| F8 | WOA | −3399.65 | 564.8312 | −4189.78 | −2451.76 |
| F8 | DBO | −3483.12 | 461.9972 | −4188.44 | −2522.59 |
| F8 | SABO | −1752.48 | 169.5616 | −2290.32 | −1493.77 |
| F8 | SCSO | −2578.91 | 308.291 | −3143.56 | −2040.75 |
| F8 | SO | −3399.49 | 410.44483 | −4189.829 | −2161.412 |
| F8 | DTHSO | −3588.87 | 292.6576 | −4071.39 | −3003.93 |
| F9 | PSO | 6.311851 | 2.836308 | 1.989918 | 13.92943 |
| F9 | GWO | 0.716722 | 1.688813 | 0 | 6.30707 |
| F9 | WOA | 2.190373 | 9.516246 | 0 | 50.33645 |
| F9 | DBO | 1.094453 | 4.183475 | 0 | 17.90923 |
| F9 | SABO | 0 | 0 | 0 | 0 |
| F9 | SCSO | 0 | 0 | 0 | 0 |
| F9 | SO | 0 | 0 | 0 | 0 |
| F9 | DTHSO | 0 | 0 | 0 | 0 |
| F10 | PSO | 8.19 × 10−6 | 6.06 × 10−6 | 1.2 × 10−6 | 2.26 × 10−5 |
| F10 | GWO | 6.84 × 10−15 | 1.45 × 10−15 | 4 × 10−15 | 7.55 × 10−15 |
| F10 | WOA | 3.76 × 10−15 | 2.79 × 10−15 | 4.44 × 10−16 | 7.55 × 10−15 |
| F10 | DBO | 4.44 × 10−16 | 0 | 4.44 × 10−16 | 4.44 × 10−16 |
| F10 | SABO | 4 × 10−15 | 0 | 4 × 10−15 | 4 × 10−15 |
| F10 | SCSO | 4.44 × 10−16 | 0 | 4.44 × 10−16 | 4.44 × 10−16 |
| F10 | SO | 4.441 × 10−16 | 0 | 4.441 × 10−16 | 4.441 × 10−16 |
| F10 | DTHSO | 4.44 × 10−16 | 0 | 4.44 × 10−16 | 4.44 × 10−16 |
| F11 | PSO | 0.09904 | 0.04232 | 0.019701 | 0.211569 |
| F11 | GWO | 0.018567 | 0.01878 | 0 | 0.057892 |
| F11 | WOA | 0.056709 | 0.119564 | 0 | 0.436088 |
| F11 | DBO | 0.014353 | 0.033802 | 0 | 0.137842 |
| F11 | SABO | 0.005436 | 0.029776 | 0 | 0.163089 |
| F11 | SCSO | 0 | 0 | 0 | 0 |
| F11 | SO | 0 | 0 | 0 | 0 |
| F11 | DTHSO | 0 | 0 | 0 | 0 |
| F12 | PSO | 1.53 × 10−10 | 7.41 × 10−10 | 3.1 × 10−13 | 4.07 × 10−9 |
| F12 | GWO | 0.008348 | 0.012343 | 2.31 × 10−7 | 0.041028 |
| F12 | WOA | 0.008671 | 0.010178 | 0.000297 | 0.031783 |
| F12 | DBO | 1.17 × 10−24 | 4.3 × 10−24 | 2.59 × 10−30 | 2.11 × 10−23 |
| F12 | SABO | 0.043332 | 0.035159 | 0.001848 | 0.137659 |
| F12 | SCSO | 0.026321 | 0.031426 | 1.84 × 10−7 | 0.160187 |
| F12 | SO | 1.236 × 10−15 | 6.554 × 10−15 | 2.017 × 10−20 | 3.593 × 10−14 |
| F12 | DTHSO | 8.28 × 10−13 | 1.34 × 10−12 | 8.69 × 10−15 | 6.75 × 10−12 |
| F13 | PSO | 2.55 × 10−9 | 1.32 × 10−8 | 7.47 × 10−13 | 7.26 × 10−8 |
| F13 | GWO | 0.013255 | 0.034356 | 1.41 × 10−6 | 0.100487 |
| F13 | WOA | 0.055794 | 0.080734 | 0.000845 | 0.3936 |
| F13 | DBO | 0.025435 | 0.048919 | 1.25 × 10−28 | 0.196254 |
| F13 | SABO | 0.169119 | 0.118543 | 0.002289 | 0.426229 |
| F13 | SCSO | 0.144537 | 0.139871 | 1.11 × 10−6 | 0.431344 |
| F13 | SO | 0.0116692 | 0.026104 | 2.409 × 10−20 | 0.0988826 |
| F13 | DTHSO | 0.023507 | 0.09711 | 8.78 × 10−12 | 0.496478 |
| F14 | PSO | 1.130146 | 0.565887 | 0.998004 | 3.96825 |
| F14 | GWO | 4.591674 | 4.023538 | 0.998004 | 12.67051 |
| F14 | WOA | 4.295642 | 4.112566 | 0.998004 | 10.76318 |
| F14 | DBO | 2.276595 | 2.557553 | 0.998004 | 10.76318 |
| F14 | SABO | 3.896304 | 3.329865 | 1.031531 | 12.67081 |
| F14 | SCSO | 3.871835 | 3.954056 | 0.998004 | 12.67051 |
| F14 | SO | 2.5357039 | 3.0117596 | 0.9980038 | 10.763181 |
| F14 | DTHSO | 1.262551 | 0.685995 | 0.998004 | 2.982105 |
| F15 | PSO | 0.003209 | 0.006853 | 0.000307 | 0.020363 |
| F15 | GWO | 0.008244 | 0.013001 | 0.000309 | 0.05662 |
| F15 | WOA | 0.000553 | 0.000255 | 0.000308 | 0.001377 |
| F15 | DBO | 0.000771 | 0.000346 | 0.000307 | 0.001333 |
| F15 | SABO | 0.000905 | 0.001796 | 0.000316 | 0.010149 |
| F15 | SCSO | 0.000547 | 0.000399 | 0.000308 | 0.001595 |
| F15 | SO | 0.0053913 | 0.0087844 | 0.0003075 | 0.0225533 |
| F15 | DTHSO | 0.000311 | 3.94 × 10−6 | 0.000307 | 0.000323 |
| F16 | PSO | −1.03163 | 6.05 × 10−16 | −1.03163 | −1.03163 |
| F16 | GWO | −1.03163 | 1.76 × 10−8 | −1.03163 | −1.03163 |
| F16 | WOA | −1.03163 | 1.17 × 10−9 | −1.03163 | −1.03163 |
| F16 | DBO | −1.03163 | 6.32 × 10−16 | −1.03163 | −1.03163 |
| F16 | SABO | −1.02525 | 0.012055 | −1.03163 | −0.98597 |
| F16 | SCSO | −1.03163 | 7.84 × 10−10 | −1.03163 | −1.03163 |
| F16 | SO | −1.031628 | 5.904 × 10−16 | −1.031628 | −1.031628 |
| F16 | DTHSO | −1.03163 | 5.3 × 10−16 | −1.03163 | −1.03163 |
| F17 | PSO | 0.397887 | 0 | 0.397887 | 0.397887 |
| F17 | GWO | 0.397895 | 2.7 × 10−5 | 0.397887 | 0.398037 |
| F17 | WOA | 0.397894 | 1.54 × 10−5 | 0.397887 | 0.397961 |
| F17 | DBO | 0.397887 | 0 | 0.397887 | 0.397887 |
| F17 | SABO | 0.429105 | 0.053091 | 0.397924 | 0.588487 |
| F17 | SCSO | 0.397887 | 4.69 × 10−8 | 0.397887 | 0.397888 |
| F17 | SO | 0.3978874 | 0 | 0.3978874 | 0.3978874 |
| F17 | DTHSO | 0.397887 | 0 | 0.397887 | 0.397887 |
| F18 | PSO | 3 | 1.68 × 10−15 | 3 | 3 |
| F18 | GWO | 3.000034 | 5.53 × 10−5 | 3 | 3.000225 |
| F18 | WOA | 3.000045 | 7.69 × 10−5 | 3 | 3.000341 |
| F18 | DBO | 3 | 2.86 × 10−15 | 3 | 3 |
| F18 | SABO | 4.58353 | 4.544002 | 3.000243 | 26.43964 |
| F18 | SCSO | 3.000009 | 1.08 × 10−5 | 3 | 3.000038 |
| F18 | SO | 11.1 | 24.715415 | 3 | 84 |
| F18 | DTHSO | 3 | 9.55 × 10−16 | 3 | 3 |
| F19 | PSO | −3.83701 | 0.141133 | −3.86278 | −3.08976 |
| F19 | GWO | −3.86184 | 0.002065 | −3.86278 | −3.8549 |
| F19 | WOA | −3.85786 | 0.007432 | −3.8627 | −3.8231 |
| F19 | DBO | −3.86199 | 0.002405 | −3.86278 | −3.8549 |
| F19 | SABO | −3.63445 | 0.21976 | −3.85595 | −2.97954 |
| F19 | SCSO | −3.86065 | 0.003528 | −3.86278 | −3.8549 |
| F19 | SO | −3.862257 | 0.0019996 | −3.862782 | −3.854901 |
| F19 | DTHSO | −3.86278 | 2.56 × 10−15 | −3.86278 | −3.86278 |
| F20 | PSO | −3.29029 | 0.053475 | −3.322 | −3.2031 |
| F20 | GWO | −3.26275 | 0.071901 | −3.32199 | −3.11526 |
| F20 | WOA | −3.1885 | 0.141879 | −3.32168 | −2.63803 |
| F20 | DBO | −3.23231 | 0.110709 | −3.322 | −2.91606 |
| F20 | SABO | −3.24041 | 0.124298 | −3.32092 | −2.91461 |
| F20 | SCSO | −3.17712 | 0.220596 | −3.32199 | −2.26724 |
| F20 | SO | −3.253135 | 0.0733724 | −3.321995 | −3.132697 |
| F20 | DTHSO | −3.24273 | 0.057005 | −3.322 | −3.2031 |
| F21 | PSO | −5.52228 | 3.451826 | −10.1532 | −2.63047 |
| F21 | GWO | −8.97841 | 2.44141 | −10.1528 | −2.68255 |
| F21 | WOA | −7.52317 | 2.926188 | −10.1525 | −2.62409 |
| F21 | DBO | −7.3624 | 2.689533 | −10.1532 | −2.63047 |
| F21 | SABO | −4.88783 | 0.574508 | −6.62134 | −3.409 |
| F21 | SCSO | −5.68644 | 2.207265 | −10.1532 | −0.88199 |
| F21 | SO | −9.593368 | 2.142856 | −10.1532 | −0.880982 |
| F21 | DTHSO | −5.31576 | 1.387429 | −10.1532 | −2.63047 |
| F22 | PSO | −6.17398 | 3.390893 | −10.4029 | −2.75193 |
| F22 | GWO | −10.4012 | 0.000611 | −10.4025 | −10.4 |
| F22 | WOA | −7.49996 | 3.452079 | −10.402 | −1.8352 |
| F22 | DBO | −8.02273 | 3.057761 | −10.4029 | −1.83759 |
| F22 | SABO | −4.83898 | 0.550266 | −5.08625 | −2.66295 |
| F22 | SCSO | −6.78202 | 2.637799 | −10.4029 | −2.7659 |
| F22 | SO | −8.748751 | 3.0586222 | −10.40294 | −2.765897 |
| F22 | DTHSO | −5.98244 | 2.30892 | −10.4029 | −2.7659 |
| F23 | PSO | −6.22872 | 3.903871 | −10.5364 | −1.67655 |
| F23 | GWO | −10.5344 | 0.001083 | −10.5363 | −10.5314 |
| F23 | WOA | −6.8525 | 3.314703 | −10.5348 | −1.85892 |
| F23 | DBO | −8.8277 | 2.69506 | −10.5364 | −2.42173 |
| F23 | SABO | −4.85353 | 1.101254 | −9.5443 | −2.1511 |
| F23 | SCSO | −6.29194 | 2.805714 | −10.5364 | −0.94888 |
| F23 | SO | −8.697027 | 3.4025103 | −10.53641 | −1.85948 |
| F23 | DTHSO | −7.46185 | 3.206559 | −10.5364 | −2.42173 |
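The Mean, Std, Best, and Worst columns in Table 2 are per-function summaries over repeated independent runs of each algorithm. A minimal sketch of that aggregation (a hypothetical helper, not the authors' code) using the sample standard deviation:

```python
import statistics

# Hypothetical helper: collapse a list of final objective values from
# independent runs into the four columns reported in Table 2.
def summarize(run_results):
    return {
        "Mean": statistics.mean(run_results),
        "Std": statistics.stdev(run_results),  # sample std dev (n - 1 divisor)
        "Best": min(run_results),              # best = minimum, since all problems are minimized
        "Worst": max(run_results),
    }

# Illustrative data: 27 runs at the Goldstein-Price global optimum (3) and
# 3 runs stuck at a known local optimum (84).
runs = [3.0] * 27 + [84.0] * 3
stats = summarize(runs)
```

A few outlier runs dominate both the mean and the standard deviation, which is why Best and Worst are reported alongside them.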
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yue, Y.; Cao, L.; Chen, C.; Chen, Y.; Chen, B. Snake Optimization Algorithm Augmented by Adaptive t-Distribution Mixed Mutation and Its Application in Energy Storage System Capacity Optimization. Biomimetics 2025, 10, 244. https://doi.org/10.3390/biomimetics10040244

