Article

DTSA: Dynamic Tree-Seed Algorithm with Velocity-Driven Seed Generation and Count-Based Adaptive Strategies

1 Center for Artificial Intelligence, Jilin University of Finance and Economics, Changchun 130117, China
2 Jilin Province Key Laboratory of Fintech, Jilin University of Finance and Economics, Changchun 130117, China
3 School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(7), 795; https://doi.org/10.3390/sym16070795
Submission received: 26 May 2024 / Revised: 18 June 2024 / Accepted: 19 June 2024 / Published: 25 June 2024
(This article belongs to the Special Issue Advanced Optimization Methods and Their Applications)

Abstract:
The Tree-Seed Algorithm (TSA) has been effective in addressing a multitude of optimization issues. However, it has faced challenges with early convergence and difficulties in managing high-dimensional, intricate optimization problems. To tackle these shortcomings, this paper introduces a TSA variant (DTSA). DTSA incorporates a suite of methodological enhancements that significantly bolster TSA's capabilities. It introduces a PSO-inspired seed generation mechanism, which draws inspiration from Particle Swarm Optimization (PSO) to integrate velocity vectors, thereby enhancing the algorithm's ability to explore and exploit solution spaces. Moreover, DTSA's adaptive velocity adaptation mechanism based on count parameters employs a counter to dynamically adjust these velocity vectors, effectively curbing the risk of premature convergence and strategically reversing vectors to evade local optima. DTSA also integrates the tree population integrated evolutionary strategy, which leverages arithmetic crossover and natural selection to bolster population diversity, accelerate convergence, and improve solution accuracy. Through experimental validation on the IEEE CEC 2014 benchmark functions, DTSA has demonstrated its enhanced performance, outperforming recent TSA variants such as STSA, EST-TSA, fb-TSA, and MTSA, as well as established benchmark algorithms such as GWO, PSO, BOA, GA, and RSA. In addition, the study analyzed the best value, mean, and standard deviation to demonstrate the algorithm's efficiency and stability in handling complex optimization issues. Finally, DTSA's robustness and efficiency are demonstrated through its successful application to five complex, constrained engineering scenarios, where it dynamically optimizes solutions and overcomes the inherent limitations of the traditional TSA.

1. Introduction

An optimization problem involves the pursuit of the optimal or an approximate optimal solution within prescribed constraints [1]. This typically necessitates identifying variable values that either maximize or minimize a specific objective function [2,3]. Such problems are fundamental in scientific inquiry, engineering, and various other disciplines, driving advancements in diverse fields through systematic problem-solving approaches [4,5,6]. However, when solving optimization problems in the real world, we encounter countless complexities, including high computational requirements, nonlinear constraints, dynamic or noisy objective functions, etc. [7]. These challenges significantly impact the decision-making process for selecting the appropriate optimization algorithms. Against this backdrop, exact algorithms provide precise global optimum solutions. However, as the number of variables increases, their execution time grows exponentially. This exponential growth makes them impractical for large-scale or highly complex scenarios [8]. Therefore, stochastic optimization algorithms, especially heuristic and metaheuristic algorithms within the realm of approximate solutions, emerge as practical tools for tackling complex issues [9,10].
Heuristic and metaheuristic algorithms are favored for their ability to search for optimal or near-optimal solutions within a reasonable timeframe by employing probabilistic and statistical methods [11,12]. These algorithms are particularly suited for addressing complex problems that are beyond the reach of traditional methods. On the one hand, heuristics utilize problem-specific strategies, relying on intuitive logic to make decisions that guide the search process towards promising areas of the solution space [13,14]. On the other hand, metaheuristics provide a higher level of abstraction, offering a framework adaptable to various optimization problems without the need for customization to a specific issue [15]. This shift towards heuristic and metaheuristic algorithms represents a natural response to the limitations of exact algorithms, underscoring their significance in solving the complex optimization challenges characteristic of many real-world applications [6,16,17].
As depicted in Figure 1, metaheuristic algorithms can be broadly categorized as non-nature-inspired or nature-inspired. Examples of non-nature-inspired algorithms include Tabu Search (TS) [18,19], Iterated Local Search (ILS) [20,21], and Adaptive Dimensional Search (ADS) [22]. In contrast, many algorithms draw inspiration from natural phenomena. Nature-inspired algorithms, such as Differential Evolution (DE) [23,24], Particle Swarm Optimization (PSO) [25,26], Artificial Bee Colony (ABC) [27,28], Krill Herd (KH) [29,30], and Gravitational Search Algorithm (GSA) [31,32], are valued for their versatility and simplicity in tackling complex optimization challenges [23]. However, it is essential to note that while some algorithms excel in addressing specific problems, their effectiveness may vary across different or intricate scenarios [33].
Among them, the Tree-Seed Algorithm (TSA) stands out as a successful nature-inspired metaheuristic developed in 2015, drawing inspiration from the natural interplay between tree and seed evolution [34]. TSA approaches optimization problems by exploring the locations of trees and seeds, representing feasible solutions [35,36]. Notably, TSA has gained widespread adoption, surpassing some swarm intelligence algorithms in popularity due to its simpler structure, higher precision, and greater robustness [37]. Nevertheless, TSA exhibits drawbacks, including premature convergence and a susceptibility to stagnate in local optima [38]. This study aims to mitigate these issues through novel approaches, including a distinct initial design and algorithmic modifications and hybridizations.

1.1. Motivations

TSA is a population-based evolutionary approach that takes its inspiration from the relationship between trees and the seeds they spread onto the ground for reproduction. TSA is simple, has few parameters, is conceptually easy to understand, and is effective in solving continuous optimization problems [34]. However, TSA has its limits. During exploration of the search space, it is often trapped in a locally optimal solution and struggles to escape [39]. In addition, the optimal solutions generated in each round are underutilized, resulting in a relatively limited set of optimal positions [40]. These shortcomings highlight the need to refine the TSA methodology to improve its effectiveness and extend its applicability to optimization tasks, and it is precisely these shortcomings that motivate this work. The motivations driving this research are as follows:
  • The existing seed generation mechanism in TSA lacks consideration of population attributes and may lead to seeds being generated in less favorable regions of the search space [41]. By redesigning the tree selection process, we aim to improve the quality of generated seeds and enhance the algorithm’s overall performance.
  • In the TSA evolutionary process, interactions solely between trees and seeds ignore potential interactions among trees, reducing population diversity [42]. To tackle this, we propose a tree population strategy to boost diversity and speed up convergence.
  • Many optimization algorithms rely on simplistic initialization methods, such as uniform random sampling, which may result in a less diverse initial population [43]. By rethinking the initialization process, we aim to improve the exploration–exploitation balance and enhance the algorithm’s robustness across various optimization scenarios.

1.2. Contribution

The no free lunch (NFL) theorem [44] states that no single optimization algorithm can outperform all others across every class of problems, which motivates the continued development of specialized variants such as the one proposed here. This study is notably driven by the explicit goal of reducing the probability of the conventional TSA encountering stagnation in local optima and premature convergence. It aims to enhance the balance between exploitation and exploration by incorporating the PSO velocity update mechanism and an improved balance mechanism into the original TSA. The primary contributions of this paper can be succinctly summarized as follows.
  • PSO-Inspired seed generation mechanism: The significant advancement of this TSA variant lies in its seed generation technique, which is inspired by PSO. By utilizing velocity vectors for updating seed positions, this approach introduces a dynamic and adaptive element to the exploration–exploitation equilibrium, thereby augmenting the algorithm’s efficacy in traversing the search space efficiently.
  • Adaptive dynamic parameter update mechanism: The incorporation of adaptive weight (w) and constant (k) updates during the optimization process is a novel aspect. This adaptive mechanism allows the algorithm to dynamically adjust its exploration and exploitation tendencies based on the current iteration, contributing to improved convergence behavior and solution quality.
  • Adaptive velocity adaptation mechanism based on count parameters: The introduction of a count-based adaptive mechanism for updating the velocity vectors contributes to the algorithm’s ability to dynamically adjust its behavior during different phases of the optimization process. The count parameter influences the exploration–exploitation trade-off, allowing the algorithm to adapt its strategy as the optimization progresses.
  • Population-based evolutionary strategy with information exchange: The variant integrates an evolutionary strategy characterized by dynamic partitioning of the population into distinct subpopulations based on their respective fitness values. This innovative approach incorporates a combination of grouping, crossover, and natural selection operations. Crossover events are facilitated between the superior subpopulation and a subset of the inferior subpopulation, facilitating structured information exchange. Concurrently, a natural selection operation replaces the inferior subpopulation with the superior counterpart in terms of both position and velocity. This sophisticated methodology enhances the algorithm’s adaptability, facilitating the effective exploitation of promising solutions while concurrently preserving population diversity to mitigate premature convergence.
  • Dynamic seeding with chaotic map: The sine chaotic map is used for generating random numbers during the initialization phase and seed production [45]. Chaotic maps can provide a better and more dynamic exploration of the search space compared to uniform random numbers. This can enhance the diversity of the seeds produced and potentially improve the algorithm’s ability to escape from local optima.

2. Related Work

2.1. A Brief Introduction to TSA

The TSA, proposed by Mustafa Servet Kiran in 2015, is a swarm intelligence algorithm inspired by the relationship between trees and seeds on land [34]. TSA is designed for solving continuous optimization problems and is widely applicable in heuristic and population-based search scenarios. The key principles of TSA are outlined below:
  • Tree position initialization: The initial position of each tree ( T i ) is determined by randomly selecting values for each dimension (j) within the specified bounds of the search space, using Equation (1).
    $$T_{i,j} = L_{j,\min} + r_{i,j} \times \left( H_{j,\max} - L_{j,\min} \right)$$
    where T i , j is the j-th dimension of the i-th tree, L j , min and H j , max are the lower and upper bounds of the search space for dimension j, and  r i , j is a random number in the range of [ 0 , 1 ] .
  • Tree-seed renewal mechanism: The seed renewal mechanism involves two update formulas for generating new seeds, considering both the current tree’s location and the optimal location of the entire tree population, which are calculated in Equations (2) and (3).
    $$\text{Local Search Formula:} \quad S_{i,j} = T_{i,j} + \alpha_{i,j} \times \left( B_j - T_{r,j} \right)$$
    $$\text{Global Search Formula:} \quad S_{i,j} = T_{i,j} + \alpha_{i,j} \times \left( T_{i,j} - T_{r,j} \right)$$
    where $S_{i,j}$ is the j-th dimension of the seed to be produced by the i-th tree, $T_{i,j}$ is the j-th dimension of the i-th tree, $B_j$ is the j-th dimension of the best tree location obtained so far, $T_{r,j}$ is the j-th dimension of a randomly selected tree (r) from the population, and $\alpha_{i,j}$ is a scaling factor randomly generated in the range $[-1, 1]$.
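For illustration, the following is a minimal Python sketch of Equations (1)–(3) under a minimization setting; the function names (`init_trees`, `generate_seed`) and the NumPy-based random generator are illustrative choices rather than the reference implementation of the original TSA.

```python
import numpy as np

def init_trees(n_trees, dim, low, high, rng):
    # Equation (1): every tree starts at a uniformly random point inside the bounds
    return low + rng.random((n_trees, dim)) * (high - low)

def generate_seed(trees, best, i, st, rng):
    # Equations (2) and (3): each dimension is perturbed either towards the best
    # tree (local search) or relative to a randomly selected tree (global search)
    n_trees, dim = trees.shape
    seed = trees[i].copy()
    for d in range(dim):
        r = rng.integers(n_trees)
        while r == i:                          # pick a random tree other than i
            r = rng.integers(n_trees)
        alpha = rng.uniform(-1.0, 1.0)         # scaling factor in [-1, 1]
        if rng.random() < st:                  # ST: search tendency parameter
            seed[d] = trees[i, d] + alpha * (best[d] - trees[r, d])      # Eq. (2)
        else:
            seed[d] = trees[i, d] + alpha * (trees[i, d] - trees[r, d])  # Eq. (3)
    return seed
```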

2.2. Literature Review

The TSA has gained significant attention in the swarm intelligence research community. It has become widely adopted due to its simplicity, precision, and robustness in comparison to other swarm intelligence algorithms [46]. Researchers have continuously proposed variants to improve TSA’s performance, with a focus on enhancing seed generation, tree migration, and expanding its application domains [35].
  • Tree migration variants: The Migration Tree-Seed Algorithm (MTSA) incorporates hierarchical gravity learning and random-based migration, drawing inspiration from the Grey Wolf Optimizer [38]. This approach effectively mitigates challenges related to exploration–exploitation imbalance, local stagnation, and premature convergence. Additionally, the Triple Tree-Seed Algorithm (TriTSA) introduces triple learning-based mechanisms, amalgamating migration strategies with sine random distribution to further enhance algorithmic performance [47].
  • Innovations in seed generation: Various innovations have emerged to enhance seed generation and improve the effectiveness of the optimization process. Jiang’s integration of the Sine Cosine Algorithm (SCA) with TSASC introduces a novel mechanism for updating seed positions, refining weight factors to pursue optimal solutions [48]. Additionally, the Sine Tree-Seed Algorithm (STSA) dynamically adjusts seed quantity, transitioning from higher to lower counts to emphasize output bolstering during initial search phases [42]. Other TSA variants like ITSA, incorporating an acceleration coefficient for faster updates [49], and EST-TSA, leveraging the current optimal population position for improved local search, make significant contributions [50]. Innovations such as fb-TSA, which integrates seeds and search tendencies via feedback mechanisms [51], and LTSA, introducing a Lévy flight random walk strategy to seed position equations [52], collectively refine TSA’s performance and adaptability in optimization tasks.
  • Algorithm applications: TSA and its various iterations are applied across a wide array of fields. For instance, CTSA is adept at handling constrained optimization problems by leveraging Deb’s rules for tree and seed selection [53]. Meanwhile, DTSA integrates swap, shift, and symmetry transformation operators to tackle permutation-coded optimization problems [54]. In financial risk assessment, Jiang introduces the sinhTSA-MLP model for identifying credit default risks with remarkable precision [55]. Moreover, in the medical domain, Aslan proposes the TSA-ANN structure for precise COVID-19 diagnosis, optimizing artificial neural networks to classify deep architectural features [56].
The broad range of applications demonstrates the adaptability and efficacy of TSA and its derivatives in tackling multifaceted challenges spanning various domains. Continuous improvements and fine-tuning bolster TSA’s optimization prowess, solidifying its role as a pivotal tool in diverse problem-solving contexts.

2.3. An Overview of PSO

PSO is a popular metaheuristic algorithm designed for solving global optimization problems. Proposed by Dr. Eberhart and Dr. Kennedy in 1995, PSO draws inspiration from the social behavior of bird flocks and fish schools [57]. The working principle of the PSO algorithm can be briefly summarized as follows.
In the initialization phase, particles are randomly generated within the search space. Each particle is characterized by its position X i = ( x i 1 , x i 2 , , x i d , , x i D ) and velocity V i = ( v i 1 , v i 2 , , v i d , , v i D ) , where D denotes the problem’s dimensionality and i is the index of the particle. Next, the fitness of each particle is evaluated by applying the objective function to its current position by Equation (4).
Fitness i = Objective Function ( X i )
Subsequently, the local best position ( P best i ) for each particle and the global best position ( G best ) for the entire swarm are updated based on fitness improvement, as shown in Equation (5) below:
$$P_{\text{best}_i} = \begin{cases} X_i, & \text{if } \text{Fitness}_i < \text{Fitness}_{\text{best}_i} \\ P_{\text{best}_i}, & \text{otherwise} \end{cases} \qquad G_{\text{best}} = X_{\arg\min(\text{Fitness})}$$
Then, the particle velocities are updated using the velocity update formula, considering inertia (w), cognitive ($c_1$), and social ($c_2$) components, as shown below in Equation (6), and each particle's position is then advanced by its updated velocity ($X_i = X_i + V_i$):
$$V_i = w V_i + c_1 r_1 \left( P_{\text{best}_i} - X_i \right) + c_2 r_2 \left( G_{\text{best}} - X_i \right)$$
where w is the inertia weight, c 1 and c 2 are learning rates, and  r 1 and r 2 are random numbers between 0 and 1. Finally, through iterative optimization, the swarm converges towards the global optimum or an approximate solution.
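As a concrete illustration of Equations (4)–(6), the minimal Python sketch below performs one PSO iteration over the whole swarm; it also applies the standard position step $X_i = X_i + V_i$, and all names (`pso_step`, `fit_fn`) and default parameter values are illustrative assumptions, not a reference implementation.

```python
import numpy as np

def pso_step(X, V, pbest, pbest_fit, fit_fn, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO iteration over the whole swarm (Equations (4)-(6))."""
    rng = rng or np.random.default_rng()
    n, d = X.shape
    gbest = pbest[np.argmin(pbest_fit)]                         # current global best
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # Eq. (6)
    X = X + V                                                   # standard position step
    fit = np.apply_along_axis(fit_fn, 1, X)                     # Eq. (4): evaluate fitness
    improved = fit < pbest_fit                                  # Eq. (5): personal bests
    pbest[improved] = X[improved]
    pbest_fit[improved] = fit[improved]
    return X, V, pbest, pbest_fit
```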

3. Methods

3.1. PSO-Inspired Seed Generation Mechanism

The original TSA employed a rudimentary seed generation mechanism based on random perturbations of tree positions within the search space. This approach lacked a systematic and adaptive strategy, which could lead to suboptimal exploration and exploitation. To address these limitations, we introduce a novel PSO-inspired seed generation mechanism in which each tree carries an initial velocity. Furthermore, during seed generation, the velocity update of each tree is also governed by ST. As a result, two update strategies emerge: a velocity update strategy for local exploitation and a velocity update strategy for global exploration, as depicted in Equation (7) and illustrated in Figure 2. The adaptive coefficient k used by these strategies is detailed in Equation (8) and demonstrated in Figure 3.
$$v_{j+1,d} = \begin{cases} w \cdot v_{j,d} + \text{rand} \cdot \left( \text{best\_params}(d) - \text{trees}(i,d) \right) \cdot k \cdot \cos(2\pi \cdot \text{rand}), & \text{if } ST < 0.1 \\ w \cdot v_{j,d} + \text{rand} \cdot \left( \text{trees}(r,d) - \text{trees}(i,d) \right) \cdot k \cdot \cos(2\pi \cdot \text{rand}), & \text{if } ST \geq 0.1 \end{cases}$$
$$k = 2 - \frac{2t}{\text{Max\_Gen}}$$
In the velocity vector update mechanism of seeds, $w \cdot v_{j,d}$ represents the inertia or influence of the previous velocity on the current velocity. The inertia weight (w) ensures that the algorithm retains some information from the previous iteration, contributing to the smooth transition between consecutive velocities and aiding in the stability of the optimization process. $\text{rand} \cdot \left( \text{trees}(r,d) - \text{trees}(i,d) \right)$ introduces a random perturbation to the velocity, facilitating exploration of the solution space. The subtraction $\text{trees}(r,d) - \text{trees}(i,d)$ signifies the difference in positions between the current tree (i) and a randomly chosen neighboring tree (r). Multiplying by rand introduces a random scaling factor, adding stochasticity to the algorithm. $k \cdot \cos(2\pi \cdot \text{rand})$ utilizes k as a control parameter to adjust the amplitude of the cosine term. The cosine function introduces periodicity and oscillatory behavior to the velocity update, aiding exploration by allowing the algorithm to escape local optima and explore diverse regions of the solution space. Multiplying by k provides adaptive control over the amplitude of the cosine term.
Meanwhile, the updated velocity is incorporated into Equations (2) and (3), as described in Equation (9), which uses the velocity vector to update the seed position. Integrating velocity vectors into the seed position update is pivotal for enhancing the exploration–exploitation balance and the adaptability of the algorithm.
$$\text{seeds}(j,d) = \begin{cases} \text{trees}(i,d) + \left( \text{best\_params}(d) - \text{trees}(r,d) \right) \cdot (\text{rand} - 0.5) \cdot 2 + v(j+1,d), & \text{if } ST < 0.1 \\ \text{trees}(i,d) + \left( \text{trees}(r,d) - \text{trees}(i,d) \right) \cdot (\text{rand} - 0.5) \cdot 2 + v(j+1,d), & \text{if } ST \geq 0.1 \end{cases}$$
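A minimal Python sketch of the velocity-driven seed generation described by Equations (7)–(9) is given below; it follows the ST-based branching exactly as written above, the variable names (`trees`, `best_params`, `v`) mirror the notation of the equations, and the default inertia weight is an illustrative assumption.

```python
import numpy as np

def dtsa_seed(trees, v, best_params, i, st, t, max_gen, w=0.5, rng=None):
    """Produce one seed and its velocity for tree i (Equations (7)-(9))."""
    rng = rng or np.random.default_rng()
    n, dim = trees.shape
    k = 2 - 2 * t / max_gen                     # Eq. (8): adaptive amplitude control
    r = rng.integers(n)                         # randomly selected neighbouring tree
    while r == i:
        r = rng.integers(n)
    seed, v_new = np.empty(dim), np.empty(dim)
    for d in range(dim):
        if st < 0.1:
            # local branch: velocity and position are pulled towards the best tree
            v_new[d] = (w * v[d] + rng.random() * (best_params[d] - trees[i, d])
                        * k * np.cos(2 * np.pi * rng.random()))
            seed[d] = (trees[i, d] + (best_params[d] - trees[r, d])
                       * (rng.random() - 0.5) * 2 + v_new[d])
        else:
            # global branch: velocity and position move relative to the random tree
            v_new[d] = (w * v[d] + rng.random() * (trees[r, d] - trees[i, d])
                        * k * np.cos(2 * np.pi * rng.random()))
            seed[d] = (trees[i, d] + (trees[r, d] - trees[i, d])
                       * (rng.random() - 0.5) * 2 + v_new[d])
    return seed, v_new
```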

3.2. Adaptive Velocity Adaptation Mechanism Based on Count Parameters

The static nature of TSA, with a fixed velocity update strategy, limits its adaptability across various optimization scenarios, leading to issues such as premature convergence and insufficient exploration. The introduced count-based adaptive mechanism dynamically adjusts the algorithm's behavior, effectively balancing exploration and exploitation. A pivotal enhancement is introduced through the incorporation of a counter, denoted as count, to monitor changes in the global optimal tree. If the global optimal tree is replaced, the counter resets to zero. Conversely, if the global optimal tree remains unchanged over consecutive iterations and the count exceeds a threshold of 15, the search is considered trapped in a local optimum. To overcome this challenge and facilitate escape from local optima, the velocity vector is inverted (Equation (10) and Figure 4). This inversion disrupts the pattern leading to local optima, enabling seeds to break free from suboptimal solutions. This approach significantly bolsters the algorithm's global search capability by promoting diversity and exploration, enhancing resilience against convergence to suboptimal solutions.
$$V(j+1,d) = \begin{cases} -V(j+1,d), & \text{if } Count > 15 \\ V(j+1,d), & \text{if } Count \leq 15 \end{cases}$$
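The count-based adaptation can be summarized in a few lines; the sketch below is an illustrative rendering of Equation (10) and the counter rule described above (threshold of 15), assuming the velocities are stored in a NumPy array, and is not the authors' reference code.

```python
def update_count_and_velocity(count, best_improved, v):
    # reset the stagnation counter whenever the global best tree is replaced,
    # otherwise keep counting unchanged iterations
    count = 0 if best_improved else count + 1
    if count > 15:
        v = -v          # Eq. (10): invert velocities to escape the local optimum
    return count, v
```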

3.3. Population-Based Evolutionary Strategy with Information Exchange Mechanism

Due to the standard TSA's slow convergence speed and diminished population diversity in later iterations, achieving the global optimal value becomes challenging. To address this, arithmetic crossover and natural selection strategies are incorporated to enhance tree diversity, convergence speed, and accuracy, as illustrated in Figure 5. The trees are ranked by the absolute values of their fitness values, as shown in Equation (11), and divided into groups according to Equation (12).
$$\left[ \, |P_1|, |P_2|, |P_3|, \ldots, |P_N| \, \right]$$
$$\begin{aligned} \xi_1 &= \{ |P_1|, |P_2|, |P_3|, \ldots, |P_{N/5}| \} \\ \xi_2 &= \{ |P_{N/5+1}|, \ldots, |P_{N/2}| \} \\ \xi_3 &= \{ |P_{N/2+1}|, \ldots, |P_{4N/5}| \} \\ \xi_4 &= \{ |P_{4N/5+1}|, \ldots, |P_N| \} \end{aligned}$$
In Equation (12), the fitness values are partitioned into four components: ξ 1 , ξ 2 , ξ 3 , and  ξ 4 . The stochastic nature of the TSA algorithm categorizes fitness values into two groups: favorable ( ξ 1 , ξ 2 ) and unfavorable ( ξ 3 , ξ 4 ). Specifically, ξ 1 and ξ 4 each constitute 20% of the total population, while ξ 2 and ξ 3 each represent 30%.
The crossover operation is applied to ξ 2 and ξ 3 , facilitating the fusion of trees with favorable and unfavorable fitness values. This approach generates offspring with increased diversity and propels less fit trees towards improved fitness values.
Simultaneously, natural selection acts upon ξ 1 and ξ 4 . This process involves directly replacing the velocity and position of less fit trees ( ξ 4 ) with those of more fit trees ( ξ 1 ), analogous to reducing the entire population by 20%, significantly accelerating the convergence speed of particles.
The specific strategies are as follows (a short code sketch after the list illustrates them):
  • Arithmetic crossover: This paper proposes a novel crossover strategy for trees based on the crossover strategy of Differential Evolution (DE). The update equations for the position $x_{\xi_3}$ and velocity $v_{\xi_3}$ of trees at the locations $\xi_2$ and $\xi_3$, respectively, are defined as follows:
    $$x_{\xi_3} = \text{rand} \times x_{\xi_2} + (1 - \text{rand}) \times x_{\xi_3}$$
    $$v_{\xi_3} = \text{rand} \times v_{\xi_2} + (1 - \text{rand}) \times v_{\xi_3}$$
    where $x_{\xi_2}$, $x_{\xi_3}$, $v_{\xi_2}$, and $v_{\xi_3}$ represent the positions and velocities of trees, as defined in Equations (11) and (12), and rand is a random number uniformly distributed in the range [0, 1].
  • Natural selection: To expedite the convergence speed of trees, a mechanism is employed whereby well-performing trees replace less effective ones. The procedure is expressed as Equations (15) and (16):
    $$x_{\xi_4} = x_{\xi_1}$$
    $$v_{\xi_4} = v_{\xi_1}$$
    where $x_{\xi_1}$, $x_{\xi_4}$, $v_{\xi_1}$, and $v_{\xi_4}$ represent the positions and velocities of trees as defined in Equation (12). This natural selection process involves the direct replacement of the position and velocity of less effective trees ($\xi_4$) with those of more effective trees ($\xi_1$), significantly accelerating the convergence of trees.
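For clarity, a short Python sketch of the 20/30/30/20 partition, the arithmetic crossover of Equations (13) and (14), and the natural selection of Equations (15) and (16) is shown below; it assumes that smaller absolute fitness values are treated as better (as in the benchmark functions used here), and the function name `evolve_population` is illustrative.

```python
import numpy as np

def evolve_population(pos, vel, fitness, rng=None):
    """20/30/30/20 grouping, arithmetic crossover, and natural selection."""
    rng = rng or np.random.default_rng()
    n = len(fitness)
    order = np.argsort(np.abs(fitness))                       # Eq. (11): rank by |fitness|
    g1, g2 = order[: n // 5], order[n // 5 : n // 2]          # xi_1 (best 20%), xi_2 (30%)
    g3, g4 = order[n // 2 : 4 * n // 5], order[4 * n // 5 :]  # xi_3 (30%), xi_4 (worst 20%)
    # Eqs. (13)-(14): arithmetic crossover pulls xi_3 towards xi_2
    m = min(len(g2), len(g3))
    r = rng.random((m, 1))
    pos[g3[:m]] = r * pos[g2[:m]] + (1 - r) * pos[g3[:m]]
    vel[g3[:m]] = r * vel[g2[:m]] + (1 - r) * vel[g3[:m]]
    # Eqs. (15)-(16): natural selection copies xi_1 over xi_4
    m = min(len(g1), len(g4))
    pos[g4[:m]] = pos[g1[:m]]
    vel[g4[:m]] = vel[g1[:m]]
    return pos, vel
```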

3.4. DTSA: A Novel Tree-Seed Algorithm

The proposed algorithm represents a substantial improvement over the original TSA by addressing its limitations through a multifaceted approach. The adaptive velocity-driven seed generation method introduces a PSO-inspired mechanism, integrating velocity vectors into seed generation to dynamically update positions. This enhancement significantly improves exploration and exploitation, providing adaptability and a refined global search capability. The count-based adaptive velocity update method introduces a dynamic adjustment mechanism based on a counter, effectively preventing premature convergence and enhancing exploration by strategically inverting velocity vectors. Lastly, the tree population integrated evolutionary strategy tackles slow convergence and reduced diversity issues by incorporating arithmetic crossover and natural selection. This integrated evolutionary approach transforms the population structure, accelerating convergence and improving overall algorithm performance. Together, these methods contribute to a more dynamic, adaptive, and efficient optimization algorithm, effectively mitigating the drawbacks of the original TSA.
The flowchart illustrating DTSA is presented in Figure 6, while the pseudo-code is provided in Algorithm 1.

3.5. Time Complexity Analysis of DTSA

To study the time complexity of DTSA, it is crucial to understand the main components of the code and their frequencies of execution. DTSA primarily comprises initialization, iterative updates, crossover operations, and natural selection. Here is an analysis of the time complexity of each part.
Initialization: This stage employs the sine chaotic map to generate random numbers for each dimension of every tree. With two nested loops (the outer iterating N times over the trees and the inner D times over the problem dimensions), the time complexity is estimated at $O(N \times D)$.
Iterative update phase: Within a loop iterating up to a maximum of $MaxIters$ times, several key steps unfold. First, adaptive weight calculation occurs, which is computationally light with a time complexity of $O(1)$. Proceeding to seed generation, each tree produces a variable number of seeds within a dynamic range defined by 'low' and 'high', resulting in a complexity of $O(N \times (low + high) \times D)$. Estimating the expected number of seeds $N_S$ as $\frac{N \times (low + high)}{2}$, calculating the fitness for each seed incurs a cost of $O(N_S \times D)$. Updates to tree positions and tracking the best solution both occur with a linear time complexity, $O(N)$, while updating the counter is negligible at $O(1)$. Consequently, the aggregate time complexity for this iterative process is approximately $O(MaxIters \times N \times D)$.
Crossover and natural selection: This phase initiates with sorting trees based on fitness, demanding $O(N \log N)$ time. Segregation of the population follows with $O(N)$ complexity. Crossover operations and natural selection, both executed in $O(N \times D)$ time, precede an analogous double-loop operation updating tree velocities and positions, also in $O(N \times D)$. This segment, integrating sorting, population manipulation, genetic mechanisms, and status refreshes, accumulates to a total time complexity of around $O(MaxIters \times N \times D)$.
In summary, while DTSA’s time complexity mirrors TSA’s in most procedural aspects, the crossover and natural selection phases introduce additional computational demands without fundamentally elevating the overall time complexity of the algorithm.
Algorithm 1 The pseudo-code of the DTSA

Step 1. Initialize the population
    Set the initial number of trees and the dimensions of the problem;
    Set the ST parameter of the algorithm and the termination conditions;
    Evaluate the location of each tree with the specified objective function;
    Creation1:
        Generate the positions and velocities of all trees and seeds through the sine chaotic map
    End Creation1
Step 2. Search with seeds
    for all trees do
        Decide the number of seeds produced for this tree;
        for all seeds do
            for all dimensions do
                if rand < ST then
                    Creation2:
                    Generate the seed velocity through Equations (7) and (8)
                    Generate the seed position by Equation (9)
                else
                    Generate the seed velocity through Equations (7) and (8)
                    Generate the seed position by Equation (9)
                    End Creation2
                end if
                Creation3:
                if Count > 15 then
                    Update the seed velocity by Equation (10)
                end if
                End Creation3
            end for
        end for
        Choose the best seeds and compare them with the trees;
        If the seed is in a better position than the tree, replace the tree with the seed.
    end for
Step 3. Select the best solution
    Select the best solution of the population;
    if the new best solution is not better than the previous best solution then
        Count = Count + 1
    else
        Count = 0
        The new best solution replaces the previous best solution
    end if
    Creation4:
        Divide the trees into good and bad groups by Equations (11) and (12)
        Perform the crossover operation between the good and bad groups by Equations (13) and (14)
        Perform the natural selection operation on the bad group by Equations (15) and (16)
    End Creation4
Step 4. Test the termination condition
    If the condition is not met, return to Step 2.
Step 5. Report
    Report the best solution.

4. Results and Discussion

In this section, we present a series of experiments to assess the performance of the proposed DTSA and analyze its advantages on the basis of six comprehensive sets of experimental results. Section 4.1 outlines the parameter configurations used in the subsequent experiments, providing a comprehensive overview of the experimental setup. Section 4.2 begins the exploration with a qualitative analysis of DTSA, and Section 4.3 delves into the quantitative aspects of the analysis. Section 4.4 conducts an in-depth examination of the experimental data from the preceding quantitative analysis. Further reinforcing the significance of DTSA, Section 4.5 presents statistical experiments demonstrating its superior performance compared to other algorithms. Finally, in Section 4.6, we extend the validation of DTSA by applying it to five real-world engineering problems.

4.1. Experiment Setting

DTSA underwent a comprehensive performance assessment by being pitted against various cutting-edge algorithms and several modifications. Notable contenders included EST-TSA [50], fb-TSA [51], TSA [34], STSA [42], MTSA [38], GA [58], PSO [57], GWO [59], BA [60], BOA, and RSA [61].
To ensure experimental fairness, this paper meticulously records the datasets and parameters employed by the comparison algorithms in their original research, as presented in Table 1. The experimental dataset is based on the IEEE CEC 2014 benchmark functions, where $F_1$–$F_3$ are unimodal functions, $F_4$–$F_{16}$ are simple multimodal functions, $F_{17}$–$F_{23}$ are hybrid functions, and $F_{24}$–$F_{30}$ are composition functions [62]. Further details regarding the CEC 2014 benchmark functions can be found in Table 2 and Table 3.

4.2. Qualitative Analysis

In order to qualitatively analyze the proposed algorithm, we conducted convergence behavior analysis, population diversity analysis, and exploration and exploitation analysis experiments to observe its performance. We selected the classical unimodal function $F_1$ to evaluate the exploitation ability of the algorithm and the classical multimodal function $F_8$ to evaluate its exploration ability, as shown in Figure 7 and Figure 8.

4.2.1. Convergence Behavior Analysis

In this experiment, there are four convergence analysis graphs for each function (Figure 9 and Figure 10); their specific meanings are as follows:
  • The first image illustrates the optimization process of the DTSA algorithm. The black dots represent the areas covered by current seeds, while the red dot indicates the best position found, representing the optimal solution. The clustering of black dots around the red dot demonstrates the step-by-step optimization of DTSA towards convergence.
  • The second figure depicts the convergence of DTSA, showcasing its rapid convergence towards the optimal solution. The sharp decline in the convergence curve underscores DTSA’s efficiency in finding optimal solutions.
  • The third graph monitors changes in the first dimension, offering insights into the algorithm’s behavior and its avoidance of premature convergence to local optima. Empirical evidence suggests that the DTSA algorithm effectively navigates away from local optima.
  • In the fourth graph, the convergence of the mean over multiple iterations is presented. The noticeable decline in the curve indicates the significant overall convergence effect of DTSA, further affirming its efficacy in optimization tasks.

4.2.2. Population Diversity Analysis

In optimization algorithms, population diversity serves as a pivotal metric for gauging efficiency. Its level has a direct impact on the exploration depth within the algorithm's search space. A heightened population diversity is instrumental in averting entrapment in local optima, preserving the algorithm's prowess for global search. This fosters a more extensive exploration of the search space during initial stages, thereby enhancing the likelihood of discovering the global optimal solution. This broadened search capability enables DTSA to adeptly handle problems with varying shapes and structures, mitigating over-optimization for specific instances. Consequently, the maintenance of appropriate population diversity emerges as a critical factor for ensuring the robustness and global search performance of algorithms. In practice, population diversity can be measured in several ways; in this paper, the dispersion of individuals around the population centroid is used to represent population diversity, where Equations (17) and (18) are used to calculate the population diversity $I_c$.
$$I_c(t) = \sum_{i=1}^{N} \sum_{d=1}^{D} \left( x_{id}(t) - c_d(t) \right)^2$$
$$c_d(t) = \frac{1}{N} \sum_{i=1}^{N} x_{id}(t)$$
We easily observe from Figure 11 and Figure 12 that in the single-modal function F 1 , both DTSA and TSA exhibit a rapid decrease in population diversity. The algorithms exhibit strong local exploitation capabilities, facilitating the rapid identification of global optimum solutions. Despite this, DTSA demonstrates faster convergence compared to TSA while preserving higher population diversity throughout the process. For the F 8 multimodal function, DTSA benefits from the sine chaotic map for population initialization, resulting in increased diversity early in the algorithm and maintaining it through mid-term iterations. Although population diversity diminishes after encountering local optima, the count-based adaptive velocity update strategy enables DTSA to bolster diversity, aiding its escape from such solutions.
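As a small illustration, the centroid-dispersion diversity of Equations (17) and (18) can be computed as follows; this is a sketch assuming the population is stored as an N × D NumPy array, and the function name is illustrative.

```python
import numpy as np

def centroid_diversity(positions):
    # positions: (N, D) array of tree positions at iteration t
    centroid = positions.mean(axis=0)              # Eq. (18): per-dimension centroid
    return np.sum((positions - centroid) ** 2)     # Eq. (17): dispersion around it
```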

4.2.3. Exploration and Exploitation Analysis

In swarm intelligence optimization algorithms, exploration–exploitation analysis is often used to evaluate the search behavior of an algorithm. The diversity of the swarm across iterations is a key factor in understanding how well the algorithm explores the solution space. The diversity at iteration t ($Div_t$) can be calculated using the average distance between particles in the swarm, as in Equation (19):
$$Div_t = \frac{1}{N(N-1)} \sum_{i=1}^{N} \sum_{j \neq i}^{N} \sqrt{ \sum_{d=1}^{D} \left( x_{id}^{t} - x_{jd}^{t} \right)^2 }$$
This measures the average Euclidean distance between all pairs of particles in the swarm. The exploration and exploitation percentages are then defined relative to the maximum diversity observed, as shown by Equations (20) and (21):
$$\text{Exploration}(\%) = \frac{Div_t}{Div_{\max}}$$
$$\text{Exploitation}(\%) = \frac{\left| Div_t - Div_{\max} \right|}{Div_{\max}}$$
In Figure 13 and Figure 14, a higher value of Exploration indicates better exploration capacity and a higher value of Exploitation indicates better exploitation capacity at iteration t. Comparing DTSA with the original algorithm, both exhibit rapid exploitation on the unimodal function $F_1$, indicating strong exploitation capabilities. For the multimodal function $F_8$, DTSA achieves a balance between exploitation and exploration at 212 iterations, gradually decreasing exploration while maintaining excellent global exploration ability. In contrast, TSA reaches this balance only at 1042 iterations, indicating good global exploration but weaker practical applicability under a maximum budget of 500 iterations.
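The diversity and percentage measures of Equations (19)–(21) can be computed with a few lines of NumPy; the sketch below is illustrative, assuming the per-iteration diversity values are collected into a history list, and the function names are hypothetical.

```python
import numpy as np

def swarm_diversity(positions):
    # Eq. (19): mean pairwise Euclidean distance between all particles
    n = len(positions)
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    return dist.sum() / (n * (n - 1))              # the zero diagonal adds nothing

def exploration_exploitation(div_history):
    # Eqs. (20)-(21): ratios of the per-iteration diversity to its maximum
    div = np.asarray(div_history, dtype=float)
    div_max = div.max()
    return div / div_max, np.abs(div - div_max) / div_max
```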

4.3. Quantitative Analysis

In this section, three strategically crafted experiments establish DTSA’s superior performance. Section 4.3.1 rigorously compares DTSA with various TSA variants, unequivocally demonstrating its superiority. Section 4.3.2 extends the comparison to include additional metaheuristic algorithms, highlighting DTSA’s exceptional ability to tackle intricate problems. Section 4.3.3 presents boxplots derived from 30 rounds of experiment results, offering visual insights into DTSA’s stability across multiple iterations.

4.3.1. Comparative Experiment 1: DTSA versus EST-TSA, fb-TSA, TSA, STSA, and MTSA

The first comparative experiment evaluates basic TSA [34] and its recent variants (EST-TSA [50], fb-TSA [51], STSA [42], and MTSA [38]) to showcase their strengths. Comparative assessments of DTSA and other algorithms across 30, 50, and 100 dimensions are presented in Table 4, Table 5 and Table 6. Additionally, graphical representations of the convergence curves for these algorithms are depicted in Figure 15, Figure 16 and Figure 17.
The tables present the average optimal values obtained from 30 experimental runs, each comprising 500 iterations, offering insights into the convergence performance of the algorithms. These aggregated data are crucial for discerning algorithmic superiority.
Moreover, meticulously documented convergence processes over 500 iterations for each experimental iteration shed light on performance dynamics. By averaging 30 local optimal solutions at each iteration point, the computed average convergence process reveals nuanced performance details. The slopes of these curves provide quantitative assessments of convergence speed.
The results underscore the efficacy of leveraging the fusion mechanism of tree population to enhance population diversity, thereby avoiding local optima. Additionally, refinements in the position update formula (Equation (9)) significantly improve convergence speed compared to previously proposed TSA variants.
Based on the empirical evidence, we confidently assert the superiority of the algorithm introduced in this study over other TSA algorithms and their variants in optimization endeavors.

4.3.2. Comparative Experiment 2: DTSA versus Classical and Recent Swarm Intelligence Algorithms

To substantiate the efficacy of the proposed algorithm, additional experiments were meticulously conducted. Classical evolutionary and swarm intelligence algorithms, including GA [58] and PSO [63], and some recent and well-known swarm intelligence algorithms, such as RSA [64] and BOA [65], were deliberately chosen for comparative analyses.
The experimental setup rigorously adhered to the specifications outlined in Section 4.3.1, comprising 30 rounds of experiments with 500 iterations per round. Problem dimensions varied across 30, 50, and 100, maintaining consistency with earlier experiments. Furthermore, parameters for each algorithm were meticulously configured according to the settings outlined in their respective original papers, as delineated in Table 1.
Table 7, Table 8 and Table 9 document the average best values attained by DTSA and its counterparts across 30 rounds of experimentation, providing insights into the average rankings of each algorithm. These tables validate DTSA’s robust performance, particularly in addressing multimodal problems. Additionally, graphical depictions of convergence curves for these algorithms are illustrated in Figure 18, Figure 19 and Figure 20.

4.3.3. Comparative Experiment 3: Analyzing the Stability of the DTSA

In this section, boxplots are presented to facilitate stability analysis of the DTSA algorithm. Figure 21, Figure 22 and Figure 23 depict these boxplots, with the x-axis representing the algorithm proposed in this study and other comparative algorithms. The y-axis shows objective function evaluations derived from 30 rounds of experiments for each algorithm.
Each plot displays uppermost and lowermost black lines representing maximum and minimum values from the 30 evaluations. The bounds of the box signify upper and lower quartiles, while the red line within the box indicates the median. Plus signs outside the box highlight outliers, denoting exceptional or subpar performance instances in specific rounds. This graphical representation aids in comprehensive assessment of the algorithm’s stability and performance across multiple experimental rounds.
Upon scrutinizing the comparative boxplots, it is evident that DTSA demonstrates commendable consistency in performance across various experimental setups. Its robust boxplot manifestations and minimal presence of outliers substantiate DTSA’s capability to consistently yield favorable results in diverse experimental environments, affirming its distinguished attribute of elevated stability. This empirical validation underscores DTSA’s reliability and effectiveness in optimization tasks, confirming its utility in complex problem-solving scenarios with confidence.
In summary, DTSA’s performance in the quantitative analysis highlights a substantial boost in its optimization efficacy. By integrating a PSO-inspired seed generation mechanism and a count-based adaptive strategy, it effectively addresses the traditional TSA’s limitations regarding premature convergence and challenges with high-dimensional, intricate optimization issues. Notably, the experiments not only validate DTSA’s superiority through comparisons with numerous cutting-edge algorithms but also reinforce its efficiency and stability in real-world problem applications. The comprehensive assessment on the IEEE CEC 2014 benchmark functions demonstrates DTSA’s rapid and steady convergence trend, particularly excelling in navigating multimodal, mixed, and composite functions by adeptly avoiding local optima to reach global optimality.
However, when confronted with simplistic, unimodal functions, where all algorithms effortlessly locate the optimal solution due to the inherently low complexity, the enhancements brought by DTSA become less conspicuous. In alignment with the no free lunch theorem [44], it is a recognized fact that no single algorithm can universally prevail in every scenario; their superiority is inherently tied to the specific complexities embedded within each distinct problem. Hence, it is inevitable that DTSA exhibits a differentiated effectiveness, finely attuned to the intricacy and dimensional breadth of the tasks it encounters.

4.4. Further Analysis

To elucidate the performance characteristics of DTSA, it is imperative to undertake an advanced analysis that integrates findings from both qualitative and quantitative investigations. This approach will provide a comprehensive understanding of the experimental observations associated with DTSA’s functionality.
Firstly, DTSA showcases superior performance over several TSA variants, as it integrates three key innovative mechanisms that collectively enhance its operational efficiency. These innovations significantly improve DTSA's adaptability, diversity, and convergence speed. By dynamically updating seed positions, adjusting search behavior to prevent premature convergence, and incorporating genetic diversity through arithmetic crossover and natural selection, DTSA achieves a remarkable balance between exploration and exploitation. This strategic combination not only accelerates the algorithm's convergence towards optimal solutions but also maintains a high level of population diversity, setting DTSA apart in optimization tasks and showcasing its advanced capability in handling complex optimization problems.
Secondly, in addressing low-dimensional optimization challenges, DTSA does not exhibit a marked superiority over competing algorithms, attributable to its sophisticated mechanisms primarily optimized for the intricacies of high-dimensional search spaces. These mechanisms, while potent in navigating and exploiting the complex landscapes of high-dimensional problems, yield diminishing returns in less complex, low-dimensional scenarios. Consequently, the inherent advantage of DTSA’s advanced features becomes less pronounced, as simpler algorithmic solutions prove equally adept in these contexts. This observation underscores a critical avenue for future research aimed at refining DTSA’s adaptability and efficiency across a diverse array of problem scales, thereby enhancing its utility in a wider spectrum of optimization tasks.
Thirdly, the empirical evidence, as delineated by the boxplot visualizations in Figure 21, Figure 22 and Figure 23, unequivocally establishes the superior stability and quality of DTSA in comparison to the algorithms it was tested against within this study. Specifically, the more compact and lower interquartile ranges of DTSA’s boxplots for certain functions distinctly underscore its enhanced performance metrics. This excellence is directly attributable to the innovative integration of the tree population integrated evolutionary strategy and adaptive velocity-driven seed generation mechanisms, as introduced in this research. These methodologies empower the DTSA to dynamically refine the optimization trajectory across varied benchmark function evaluations, culminating in reduced variability (as evidenced by smaller standard deviations) and improved reliability in experimental outcomes. This adaptability and precision in handling diverse optimization scenarios underscore the algorithm’s robustness and its potential applicability in solving complex optimization problems with heightened efficiency and consistency.

4.5. Statistical Experiments

Table 10 presents the outcomes of the Wilcoxon’s signed-rank test applied to the experimental data of the DTSA algorithm in comparison to 11 other algorithms [66]. The table includes p-values obtained at the significance level α . Columns labeled TRUE and FALSE indicate whether the hypothesis is rejected or not rejected at the specified significance level of α . The data for this analysis are sourced from the comparative experiments detailed in Section 4.3, ensuring consistency with the experimental setup.
Our assertion is that DTSA demonstrates superior performance compared to the other algorithms. A smaller p-value resulting from the experiment between DTSA and a comparative algorithm, below the significance level α , indicates DTSA’s superior performance. Table 10 reveals that DTSA consistently exhibits very low p-values compared to the majority of the algorithms. Consequently, under both significance levels ( α = 0.1 and α = 0.05), our hypothesis is confirmed. These findings affirm that, in this experiment, the DTSA algorithm significantly outperforms the other algorithms in terms of performance.
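For reproducibility, the signed-rank comparison can be carried out with SciPy's `scipy.stats.wilcoxon`; the snippet below is a sketch with synthetic run data standing in for the 30-run best values of DTSA and one competitor, and the significance thresholds match those used in Table 10.

```python
import numpy as np
from scipy.stats import wilcoxon

# synthetic stand-ins for the 30-run best values of DTSA and one competitor
rng = np.random.default_rng(0)
dtsa_runs = rng.normal(100.0, 1.0, 30)
other_runs = dtsa_runs + np.abs(rng.normal(0.5, 0.2, 30))   # competitor is slightly worse

stat, p_value = wilcoxon(dtsa_runs, other_runs)             # paired signed-rank test
for alpha in (0.1, 0.05):
    print(f"alpha = {alpha}: reject H0 -> {p_value < alpha}")
```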

4.6. Practical Engineering Problems of Mathematical Modeling

In light of the advanced analysis delineated in Section 4.4, which evidences the Dynamic Tree-Seed Algorithm’s (DTSA) proficiency in navigating high-dimensional complex problem spaces, this section is dedicated to an empirical evaluation of DTSA’s adeptness in addressing complex constrained optimization challenges. Specifically, we scrutinize its performance through the lens of intricate objective functions and multifarious constraints, employing a suite of benchmark problems for this purpose: Tension spring design [67], three-bar truss design [68], welded beam design [69], cantilever beam design [70], and step-cone pulley design problems [71]. The analytical results, encapsulated in Table 11, Table 12, Table 13, Table 14 and Table 15, are articulated through several statistical measures: best signifies the optimal value achieved by DTSA across 30 experimental iterations; mean delineates the average result; std refers to the standard deviation, indicating result variability; worst captures the least favorable outcome; and the x denotes the optimal solution configuration ascertained by DTSA at the juncture of achieving its best result. This methodological approach affords a rigorous assessment of DTSA’s capability to effectively resolve complex constrained optimization tasks, furnishing a comprehensive perspective on its performance efficacy.

4.6.1. Example 1: Tension Spring Design Problem

The tension/compression spring design problem aims to minimize the weight of the spring, denoted as f(x), while satisfying constraints on minimum deflection, shear stress, and surge frequency. This optimization problem is defined by three design variables: the wire diameter d ($x_1$), the mean coil diameter D ($x_2$), and the number of active coils N ($x_3$). The mathematical model of the tension spring design problem is detailed in [67]:
Objective function:
$$\text{minimize } f(x) = (N + 2) D d^2$$
Subject to constraints:
$$g_1(x) = 1 - \frac{D^3 N}{71785\, d^4} \leq 0$$
$$g_2(x) = \frac{4D^2 - dD}{12566\,(D d^3 - d^4)} + \frac{1}{5108\, d^2} - 1 \leq 0$$
$$g_3(x) = 1 - \frac{140.45\, d}{D^2 N} \leq 0$$
$$g_4(x) = \frac{D + d}{1.5} - 1 \leq 0$$
Variable bounds:
$$0.05 \leq x_1 \leq 2, \quad 0.25 \leq x_2 \leq 1.3, \quad 2 \leq x_3 \leq 15$$
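A minimal sketch of how this formulation can be evaluated inside a metaheuristic is shown below; the static-penalty value and the function names are illustrative assumptions, since the paper does not specify its constraint-handling code.

```python
import numpy as np

def spring_objective(x):
    d, D, N = x                               # wire diameter, coil diameter, active coils
    return (N + 2) * D * d ** 2

def spring_constraints(x):
    d, D, N = x
    return np.array([
        1 - (D ** 3 * N) / (71785 * d ** 4),
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4)) + 1 / (5108 * d ** 2) - 1,
        1 - 140.45 * d / (D ** 2 * N),
        (D + d) / 1.5 - 1,
    ])                                        # feasible when every entry is <= 0

def penalized_fitness(x, penalty=1e6):
    # static-penalty handling of the inequality constraints
    violation = np.maximum(spring_constraints(x), 0).sum()
    return spring_objective(x) + penalty * violation
```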
The results of the tension spring design problem are shown in Table 11. The results show that the DTSA has better performance.
Table 11. Tension spring design problem.

        DTSA     TSA      DBO      HHO      GWO      SO       WFO      GOA
Best    0.013    0.013    0.013    0.014    0.013    0.013    0.018    0.013
Mean    0.013    0.013    0.013    0.015    0.013    0.013    0.036    0.014
Std     0.000    0.000    0.000    0.001    0.000    0.000    0.026    0.001
Worst   0.000    0.013    0.013    0.016    0.013    0.013    0.054    0.015
X1      0.053    0.053    0.050    0.060    0.050    0.050    0.068    0.058
X2      0.389    0.381    0.317    0.597    0.319    0.317    0.872    0.521
X3      9.602    10.015   14.028   4.426    13.918   14.028   2.439    5.648

4.6.2. Example 2: Three-Bar Truss Design Problem

The efficacy of the algorithm is evaluated in test 2, utilizing the three-bar truss design problem. The primary objective is to minimize the volumetric attribute of the statically loaded three-bar truss, constrained by a triplet of inequality conditions. The optimization’s objective function is structured to compute the optimal cross-sectional areas corresponding to the design variables, denoted as x 1 and x 2 . This delineation is emblematic of structural optimization paradigms, where the optimization of material usage and adherence to mechanical constraints are of critical importance, thus providing a stringent assessment of the algorithm’s navigational acumen within multifaceted design domains.
Consider $X = [x_1, x_2]$,
Objective function:
$$\text{minimize } f(X) = \left( 2\sqrt{2}\, x_1 + x_2 \right) \times l$$
Subject to constraints:
$$g_1(X) = \frac{\sqrt{2}\, x_1 + x_2}{\sqrt{2}\, x_1^2 + 2 x_1 x_2}\, P - \sigma \leq 0, \quad g_2(X) = \frac{x_2}{\sqrt{2}\, x_1^2 + 2 x_1 x_2}\, P - \sigma \leq 0, \quad g_3(X) = \frac{1}{\sqrt{2}\, x_2 + x_1}\, P - \sigma \leq 0,$$
Where:
$$l = 100\ \text{cm}, \quad P = 2\ \text{kN/cm}^2, \quad \sigma = 2\ \text{kN/cm}^2,$$
Variable bounds:
$$0 \leq x_1, x_2 \leq 1.$$
The results of the three-bar truss design problem are shown in Table 12. The results show that the DTSA has better performance.
Table 12. Three-bar truss design problem.

        DTSA         TSA          DBO          HHO          GWO          SO           WFO          GOA
Best    263.8958     2.64 × 10^2  263.8963     263.9051     263.8982     263.8959     265.4799     263.9054
Mean    263.8958     263.8958     263.8968     264.0205     263.9012     263.8969     265.5999     263.975
Std     2.81 × 10^-6 3.91 × 10^-6 7.95 × 10^-4 0.163226     4.28 × 10^-3 0.001494     0.169762     0.098407
Worst   2.64 × 10^2  263.8959     263.8974     264.1359     263.9043     263.898      265.4799     264.0446
X1      0.7886       0.7887       0.7879       0.7851       0.7895       0.7885       0.822665     0.785091
X2      0.4082       0.4081       0.4104       0.4182       0.4059       0.4087       0.3279       0.4184

4.6.3. Example 3: Welded Beam Design Problem

The welded beam design (WBD) problem is a classical optimization task that seeks to minimize the manufacturing cost of a beam’s design. The problem is defined by four design variables: the beam’s length l, height t, thickness b, and the weld thickness h. The objective of the optimization process is to ascertain the values of these variables that result in the minimum production cost while adhering to the constraints imposed by shear stress τ , bending stress θ , buckling load on the bar P c , end deflection δ , and other prescribed boundary conditions. The WBD problem thus serves as a paradigmatic nonlinear programming challenge, wherein the cost function is intricately linked with a set of physical constraints that are interdependent. The mathematical formulation of the WBD problem is as follows:
Variables:
$$l = [l_1, l_2, l_3, l_4] = [h, l, t, b] = [x_1, x_2, x_3, x_4]$$
Objective function:
$$\text{minimize } f(l) = 1.10471\, l_1^2 l_2 + 0.04811\, l_3 l_4 (14.0 + l_2)$$
Variable bounds:
$$0.1 \leq l_1 \leq 2, \quad 0.1 \leq l_2 \leq 10, \quad 0.1 \leq l_3 \leq 10, \quad 0.1 \leq l_4 \leq 2$$
Subject to constraints:
$$s_1(l) = \tau(l) - \tau_{\max} \leq 0$$
$$s_2(l) = \sigma(l) - \sigma_{\max} \leq 0$$
$$s_3(l) = \delta(l) - \delta_{\max} \leq 0$$
$$s_4(l) = l_1 - l_4 \leq 0$$
$$s_5(l) = P - P_c(l) \leq 0$$
$$s_6(l) = 0.125 - l_1 \leq 0$$
$$s_7(l) = 1.10471\, l_1^2 + 0.04811\, l_3 l_4 (14.0 + l_2) - 5.0 \leq 0$$
where:
$$\sigma_{\max} = 30{,}000\ \text{psi}, \quad P = 6000\ \text{lb}, \quad L = 14\ \text{in.}, \quad \delta_{\max} = 0.25\ \text{in.}, \quad E = 30 \times 10^6\ \text{psi}, \quad \tau_{\max} = 13{,}600\ \text{psi}, \quad G = 1.2 \times 10^7\ \text{psi}.$$
$$\tau(l) = \sqrt{ (\tau')^2 + 2\tau'\tau''\frac{l_2}{2R} + (\tau'')^2 }$$
$$\tau' = \frac{P}{\sqrt{2}\, l_1 l_2}, \quad \tau'' = \frac{M R}{J}, \quad M = P\left( L + \frac{l_2}{2} \right)$$
$$R = \sqrt{ \frac{l_2^2}{4} + \left( \frac{l_1 + l_3}{2} \right)^2 }$$
$$J = 2\left\{ \sqrt{2}\, l_1 l_2 \left[ \frac{l_2^2}{12} + \left( \frac{l_1 + l_3}{2} \right)^2 \right] \right\}$$
$$P_c(l) = \frac{4.013\, E \sqrt{ l_3^2 l_4^6 / 36 }}{L^2} \left( 1 - \frac{l_3}{2L} \sqrt{ \frac{E}{4G} } \right)$$
The results of the welded beam design problem are shown in Table 13. The results show that the DTSA has better performance.
Table 13. Welded beam design problem.

        DTSA      TSA           DBO           HHO      GWO      SO       WFO      GOA
Best    1.6927    1.6927        1.6927        1.7399   1.6954   1.6928   1.9850   2.3256
Mean    1.6927    1.6927        1.6927        1.7838   1.6957   1.6940   2.0007   2.6348
Std     0         1.03 × 10^-11 5.55 × 10^-6  0.0620   0.0004   0.001    0.02226  0.4372
Worst   1.6927    1.6927        1.6927        1.8276   1.6960   1.6951   2.0164   2.9439
X1      0.2057    0.2057        0.2057        0.1877   0.2055   0.2057   0.24336  0.278417
X2      3.2349    3.2349        3.2349        3.5996   3.2416   3.2346   2.8060   3.1226
X3      9.0366    9.0366        9.0366        9.2250   9.0506   9.0372   8.5761   6.7455
X4      0.2057    0.2057        0.2057        0.2048   0.2056   0.2057   0.2597   0.3704

4.6.4. Example 4: Cantilever Beam Design Problem

The cantilever beam optimization problem represents a fundamental structural engineering design challenge focused on the weight reduction of a cantilever beam featuring a square cross-section. This beam is characterized by a fixed support at one end and experiences a vertical load at the free end. Comprised of five hollow square segments of uniform thickness—specifically, 2/3 of an inch—the primary design variables are the heights or widths of these segments. The optimization seeks to minimize the total weight while ensuring compliance with structural requirements, including stress limits, deflection tolerances, and vibrational characteristics. It also considers practical aspects such as manufacturability and cost-effectiveness. The mathematical model encapsulating this optimization problem involves an objective function for weight calculation and a set of constraints ensuring the beam’s structural feasibility and functional adequacy under applied loads. This problem can be represented by the following mathematical equation:
Objective function:
f ( X ) = 0.0624 ( x 1 + x 2 + x 3 + x 4 + x 5 )
Subject to constraints:
g(X) = 61/x1³ + 37/x2³ + 19/x3³ + 7/x4³ + 1/x5³ − 1 ≤ 0
Variable bounds:
0.01 ≤ xi ≤ 100,  i = 1, 2, …, 5
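As an illustration of how such a constrained design is typically evaluated inside a metaheuristic, the following minimal Python sketch encodes the objective and the single inequality constraint given above and checks them at the DTSA design reported in Table 14; the penalty coefficient rho is an assumed illustrative value.

import numpy as np

def cantilever_weight(x):
    # f(X) = 0.0624 * (x1 + x2 + x3 + x4 + x5)
    return 0.0624 * np.sum(x)

def cantilever_constraint(x):
    # g(X) = 61/x1^3 + 37/x2^3 + 19/x3^3 + 7/x4^3 + 1/x5^3 - 1 <= 0
    c = np.array([61.0, 37.0, 19.0, 7.0, 1.0])
    return np.sum(c / x**3) - 1.0

def penalized_weight(x, rho=1e6):
    # rho is an assumed penalty coefficient, not a value taken from the paper
    return cantilever_weight(x) + rho * max(cantilever_constraint(x), 0.0) ** 2

x_dtsa = np.array([6.0161, 5.3094, 4.4947, 3.5007, 2.1525])   # DTSA column of Table 14
print(round(cantilever_weight(x_dtsa), 4))       # about 1.3399, the Best value reported for DTSA
print(round(cantilever_constraint(x_dtsa), 4))   # about 0: the constraint is active at the optimum
                                                  # (the tiny positive residue comes from rounding the reported values)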
The results of the cantilever beam design problem are shown in Table 14. DTSA matches the best weight of 1.3399 and exhibits the smallest standard deviation among the compared algorithms, indicating the most stable convergence.
Table 14. Cantilever beam design problem.
         DTSA            TSA             DBO             HHO       GWO             SO              WFO        GOA
Best     1.3399          1.3399          1.3399          1.3437    1.3400          1.3400          2.6214     1.3737
Mean     1.3399          1.3399          1.3399          1.3440    1.3400          1.3400          2.6740     1.4063
Std      2.08 × 10^−7    7.40 × 10^−7    6.63 × 10^−6    0.0004    5.03 × 10^−5    2.11 × 10^−5    0.0743     0.0462
Worst    1.3399          1.3399          1.3399          1.3443    1.3401          1.3400          2.7266     1.4390
X1       6.0161          6.0192          6.0354          5.8279    6.0266          5.9732          4.9741     5.9102
X2       5.3094          5.3085          5.3191          5.2299    5.2760          5.3042          16.3215    6.0045
X3       4.4947          4.4934          4.4922          4.8202    4.4834          4.5395          7.1959     3.9133
X4       3.5007          3.4994          3.4801          3.4528    3.5322          3.5006          10.6454    4.1481
X5       2.1525          2.1529          2.1471          2.2033    2.1565          2.1575          2.8738     2.0382

4.6.5. Example 5: Step-Cone Pulley Problem

The step-cone pulley design problem is a weight minimization task encountered in structural engineering, focusing on the optimization of a multi-stepped pulley system. Each step of the pulley, with a distinct diameter, contributes to the overall design that must adhere to specific mechanical constraints and performance goals. The challenge lies in determining the optimal configuration of these diameters and other design aspects to achieve minimal weight without compromising the pulley’s functional integrity. This problem is emblematic of complex engineering design tasks, which require sophisticated optimization techniques to solve due to their nonlinear and constrained nature.
Objective function:
minimize f(x) = ρ ω [ d1² (1 + (N1/N)²) + d2² (1 + (N2/N)²) + d3² (1 + (N3/N)²) + d4² (1 + (N4/N)²) ]
Subject to constraints:
h1(x) = C1 − C2 = 0,  h2(x) = C1 − C3 = 0,  h3(x) = C1 − C4 = 0,
g1,2,3,4(x): Ri ≥ 2,  g5,6,7,8(x): Pi ≥ 0.75 × 745.6998,
where:
Ci = (π di / 2)(1 + Ni/N) + (Ni/N − 1)² di²/(4a) + 2a,   i = 1, 2, 3, 4
Ri = exp( μ [ π − 2 sin⁻¹( (Ni/N − 1) di/(2a) ) ] ),   i = 1, 2, 3, 4
Pi = s t ω [ 1 − exp( −μ [ π − 2 sin⁻¹( (Ni/N − 1) di/(2a) ) ] ) ] · (π di Ni / 60),   i = 1, 2, 3, 4
ρ = 7200 kg/m³,  a = 3 m,  μ = 0.35,  s = 1.75 MPa,  t = 8 mm
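For completeness, the sketch below illustrates, under stated assumptions, how this mixed set of equality and inequality constraints can be folded into a single penalized fitness value of the kind a tree–seed style optimizer would minimize. The driver speed N, the output speeds Ni, the equality tolerance eps, and the penalty coefficient are assumed illustrative values that are not given in the text, and the belt-length, wrap-angle, and power expressions follow the common literature statement of this benchmark rather than the paper's exact implementation.

import numpy as np

# Material and geometry constants listed above; the driver speed N and the four
# required output speeds Ni are NOT given in the text and are assumed here
# purely for illustration.
rho, a, mu, s, t = 7200.0, 3.0, 0.35, 1.75e6, 8e-3   # SI units
N = 350.0
Ni = np.array([750.0, 450.0, 250.0, 150.0])

def belt_lengths(d):
    # Open-belt length C_i for each step
    return (np.pi * d / 2.0) * (1.0 + Ni / N) + (Ni / N - 1.0) ** 2 * d ** 2 / (4.0 * a) + 2.0 * a

def wrap_angles(d):
    return np.pi - 2.0 * np.arcsin((Ni / N - 1.0) * d / (2.0 * a))

def tension_ratios(d):
    # R_i = exp(mu * wrap angle); the design requires R_i >= 2
    return np.exp(mu * wrap_angles(d))

def transmitted_power(d, w):
    # P_i = s*t*w*(1 - exp(-mu*theta_i)) * pi*d_i*N_i/60, required >= 0.75*745.6998 W
    return s * t * w * (1.0 - np.exp(-mu * wrap_angles(d))) * np.pi * d * Ni / 60.0

def penalized_fitness(x, rho_pen=1e8, eps=1e-4):
    d, w = x[:4], x[4]
    weight = rho * w * np.sum(d ** 2 * (1.0 + (Ni / N) ** 2))          # objective given above
    C = belt_lengths(d)
    h = C[1:] - C[0]                                                   # C_1 = C_2 = C_3 = C_4
    g = np.concatenate([2.0 - tension_ratios(d),                       # R_i >= 2
                        0.75 * 745.6998 - transmitted_power(d, w)])    # P_i >= 0.75 hp (in watts)
    viol = np.sum(np.maximum(np.abs(h) - eps, 0.0) ** 2) + np.sum(np.maximum(g, 0.0) ** 2)
    return weight + rho_pen * viol

x_trial = np.array([0.04, 0.055, 0.073, 0.0875, 0.087])   # arbitrary candidate (d1..d4 and width, in metres)
print(penalized_fitness(x_trial))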
The results of the step-cone pulley problem are shown in Table 15. DTSA obtains the lowest mean value (about 1.67 × 10^1) with a small standard deviation, while several competitors return values that are many orders of magnitude larger.
Table 15. Step-cone pulley problem.
         DTSA            TSA             DBO             HHO             GWO             SO              WFO              GOA
Best     1.67 × 10^1     2.84 × 10^81    1.67 × 10^1     4.60 × 10^87    4.31 × 10^92    1.67 × 10^1     6.77 × 10^103    4.86 × 10^93
Mean     1.67 × 10^1     2.49 × 10^84    1.75 × 10^1     2.17 × 10^91    8.53 × 10^92    1.44 × 10^75    1.03 × 10^104    3.01 × 10^96
Std      1.17 × 10^−1    3.51 × 10^3     1.08 × 10^0     3.07 × 10^91    5.97 × 10^92    2.03 × 10^75    5.00 × 10^103    4.25 × 10^96
Worst    1.68 × 10^1     4.97 × 10^84    1.83 × 10^1     4.34 × 10^91    1.28 × 10^93    2.87 × 10^75    1.38 × 10^104    6.01 × 10^96
X1       3.98 × 10^1     3.91 × 10^1     4.00 × 10^1     3.97 × 10^1     3.95 × 10^1     3.89 × 10^1     5.82 × 10^1      3.89 × 10^1
X2       5.47 × 10^1     5.38 × 10^1     5.50 × 10^1     5.46 × 10^1     5.44 × 10^1     5.35 × 10^1     5.98 × 10^1      5.35 × 10^1
X3       7.29 × 10^1     7.17 × 10^1     7.33 × 10^1     7.28 × 10^1     7.26 × 10^1     7.14 × 10^1     7.78 × 10^1      7.14 × 10^1
X4       8.75 × 10^1     8.59 × 10^1     8.79 × 10^1     8.73 × 10^1     8.70 × 10^1     8.56 × 10^1     8.23 × 10^1      8.55 × 10^1
X5       8.69 × 10^1     8.92 × 10^1     8.65 × 10^1     8.73 × 10^1     8.95 × 10^1     8.99 × 10^1     8.81 × 10^1      8.92 × 10^1

5. Conclusions and Future Work

The Dynamic Tree-Seed Algorithm presents a series of quantitative improvements over existing optimization algorithms, which can be specifically highlighted in terms of numerical results and comparative evaluations. In the studies and experiments conducted, DTSA demonstrates its efficacy through the following achievements:
  • Performance enhancement: DTSA was tested against recent TSA variants such as STSA, EST-TSA, fb-TSA, and MTSA, along with established algorithms like GA, PSO, GWO, BOA, and RSA. It consistently outperformed these algorithms across multiple dimensions (30D, 50D, 100D) and on different types of functions (unimodal, multimodal, hybrid, and composite), as shown in the IEEE CEC 2014 benchmark tests.
  • Convergence and robustness: The convergence curves depicted in figures like Figure 17 illustrate DTSA’s faster convergence rate and stability even in higher dimensional spaces and for complex functions such as hybrid and composite ones. This indicates that DTSA effectively balances exploration and exploitation, leading to quicker and more accurate solutions.
  • Statistical measures: Across experiments, DTSA’s performance was quantified using measures like best, mean, std (standard deviation), worst, and X (optimal solution configuration). These metrics provided a comprehensive view of its effectiveness, showing consistent superiority in finding optimal or near-optimal solutions with reduced variability.
  • Engineering applications: When applied to real-world engineering problems like tension spring design, three-bar truss design, and others, DTSA achieved optimal values, as documented in Table 11, Table 12, Table 13, Table 14 and Table 15, indicating its practical utility and robustness in solving constrained optimization tasks.
These outcomes and comparisons solidify DTSA’s scientific contribution by offering a quantifiable advancement in optimization efficiency, accuracy, and versatility across a broad spectrum of problem domains.

6. Research Constraints and Considerations

While DTSA has undeniably demonstrated significant enhancements in its optimization capabilities, particularly excelling in addressing high-dimensional and multimodal optimization problems, there remain several aspects warranting further exploration and refinement to augment its versatility and practical utility. These areas include:
Firstly, universality and generalization: despite strong results on many IEEE CEC 2014 benchmark functions, DTSA's advantage is less pronounced on problems with simple structures or unimodal landscapes. The algorithm therefore needs further refinement to accommodate a wider range of problem characteristics, including low-dimensional and straightforward optimization scenarios, in order to improve its generalization ability.
Secondly, parameter tuning and sensitivity: DTSA's performance depends on careful parameter calibration. The present study does not systematically analyze parameter sensitivity, so a more thorough investigation of parameter selection and tuning strategies is needed to establish default or adaptive settings applicable to a broader range of problems and to reduce the need for manual tuning.
Lastly, computational efficiency for large-scale problems: as problem dimensionality grows, DTSA's computational cost and memory requirements increase, which may limit its application to ultra-large-scale optimization tasks. Improved search strategies and data structures, or the incorporation of distributed computing frameworks to parallelize the algorithm, are promising avenues for handling such problems more efficiently.
To facilitate further research and practical application, we have made the source code publicly accessible at www.jianhuajiang.com (accessed on 1 June 2024), enabling the community to engage with and enhance the algorithm.

Author Contributions

J.J. and J.H. conceived and designed the methodology and experiments; J.H. performed the experiments, analyzed the results and wrote the paper; J.H., J.W., J.L., X.Y. and W.L. revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

The authors gratefully acknowledge the financial support from the Foundation of the Jilin Provincial Department of Science and Technology (No. YDZJ202201ZYTS565), the Foundation of Social Science of Jilin Province, China (No. 2022B84), and the Jilin Provincial Department of Education Science and Technology (No. JJKH20240198KJ).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  2. Homaifar, A.; Qi, C.X.; Lai, S.H. Constrained optimization via genetic algorithms. Simulation 1994, 62, 242–253. [Google Scholar] [CrossRef]
  3. Deb, K. An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 2000, 186, 311–338. [Google Scholar] [CrossRef]
  4. Liao, T.W. Two hybrid differential evolution algorithms for engineering design optimization. Appl. Soft Comput. 2010, 10, 1188–1199. [Google Scholar] [CrossRef]
  5. Yildiz, K.; Lesieutre, G.A. Sizing and prestress optimization of Class-2 tensegrity structures for space boom applications. Eng. Comput. 2022, 38, 1451–1464. [Google Scholar] [CrossRef]
  6. Osaba, E.; Villar-Rodriguez, E.; Del Ser, J.; Nebro, A.J.; Molina, D.; Latorre, A.; Suganthan, P.N.; Coello, C.A.C.; Herrera, L.E.F. A Tutorial On the design, experimentation and application of metaheuristic algorithms to real-World optimization problems. Swarm Evol. Comput. 2021, 64, 100888. [Google Scholar] [CrossRef]
  7. Tang, J.; Liu, G.; Pan, Q. A review on representative swarm intelligence algorithms for solving optimization problems: Applications and trends. IEEE/CAA J. Autom. Sin. 2021, 8, 1627–1643. [Google Scholar] [CrossRef]
  8. Tian, Y.; Si, L.; Zhang, X.; Cheng, R.; He, C.; Tan, K.C.; Jin, Y. Evolutionary large-scale multi-objective optimization: A survey. ACM Comput. Surv. (CSUR) 2021, 54, 1–34. [Google Scholar] [CrossRef]
  9. Lodi, A.; Martello, S.; Vigo, D. Heuristic and metaheuristic approaches for a class of two-dimensional bin packing problems. INFORMS J. Comput. 1999, 11, 345–357. [Google Scholar] [CrossRef]
  10. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  11. Hussain, K.; Mohd Salleh, M.N.; Cheng, S.; Shi, Y. Metaheuristic research: A comprehensive survey. Artif. Intell. Rev. 2019, 52, 2191–2233. [Google Scholar] [CrossRef]
  12. Gharehchopogh, F.S. Quantum-inspired metaheuristic algorithms: Comprehensive survey and classification. Artif. Intell. Rev. 2023, 56, 5479–5543. [Google Scholar] [CrossRef]
  13. Braik, M.; Sheta, A.; Al-Hiary, H. A novel meta-heuristic search algorithm for solving optimization problems: Capuchin search algorithm. Neural Comput. Appl. 2021, 33, 2515–2547. [Google Scholar] [CrossRef]
  14. Jiang, J.; Wu, J.; Luo, J.; Yang, X.; Huang, Z. MOBCA: Multi-Objective Besiege and Conquer Algorithm. Biomimetics 2024, 9, 316. [Google Scholar] [CrossRef] [PubMed]
  15. Parejo, J.A.; Ruiz-Cortés, A.; Lozano, S.; Fernandez, P. Metaheuristic optimization frameworks: A survey and benchmarking. Soft Comput. 2012, 16, 527–561. [Google Scholar] [CrossRef]
  16. Abualigah, L. Group search optimizer: A nature-inspired meta-heuristic optimization algorithm with its results, variants, and applications. Neural Comput. Appl. 2021, 33, 2949–2972. [Google Scholar] [CrossRef]
  17. Bai, J.; Jia, L.; Peng, Z. A new insight on augmented Lagrangian method with applications in machine learning. J. Sci. Comput. 2024, 99, 53. [Google Scholar] [CrossRef]
  18. Lin, Y.; Bian, Z.; Liu, X. Developing a dynamic neighborhood structure for an adaptive hybrid simulated annealing–tabu search algorithm to solve the symmetrical traveling salesman problem. Appl. Soft Comput. 2016, 49, 937–952. [Google Scholar] [CrossRef]
  19. Xue, X.; Chen, J. Using compact evolutionary tabu search algorithm for matching sensor ontologies. Swarm Evol. Comput. 2019, 48, 25–30. [Google Scholar] [CrossRef]
  20. Li, J.; Pardalos, P.M.; Sun, H.; Pei, J.; Zhang, Y. Iterated local search embedded adaptive neighborhood selection approach for the multi-depot vehicle routing problem with simultaneous deliveries and pickups. Expert Syst. Appl. 2015, 42, 3551–3561. [Google Scholar] [CrossRef]
  21. Derbel, H.; Jarboui, B.; Hanafi, S.; Chabchoub, H. Genetic algorithm with iterated local search for solving a location-routing problem. Expert Syst. Appl. 2012, 39, 2865–2871. [Google Scholar] [CrossRef]
  22. Vrugt, J.A.; Robinson, B.A.; Hyman, J.M. Self-adaptive multimethod search for global optimization in real-parameter spaces. IEEE Trans. Evol. Comput. 2008, 13, 243–259. [Google Scholar] [CrossRef]
  23. Deng, W.; Xu, J.; Song, Y.; Zhao, H. Differential evolution algorithm with wavelet basis function and optimal mutation strategy for complex optimization problem. Appl. Soft Comput. 2021, 100, 106724. [Google Scholar] [CrossRef]
  24. Das, S.; Abraham, A.; Chakraborty, U.K.; Konar, A. Differential evolution using a neighborhood-based mutation operator. IEEE Trans. Evol. Comput. 2009, 13, 526–553. [Google Scholar] [CrossRef]
  25. Wang, D.; Tan, D.; Liu, L. Particle swarm optimization algorithm: An overview. Soft Comput. 2018, 22, 387–408. [Google Scholar] [CrossRef]
  26. Zhang, Y.; Wang, S.; Ji, G. A comprehensive survey on particle swarm optimization algorithm and its applications. Math. Probl. Eng. 2015, 2015, 931256. [Google Scholar] [CrossRef]
  27. Kıran, M.S.; Fındık, O. A directed artificial bee colony algorithm. Appl. Soft Comput. 2015, 26, 454–462. [Google Scholar] [CrossRef]
  28. Xue, Y.; Jiang, J.; Zhao, B.; Ma, T. A self-adaptive artificial bee colony algorithm based on global best for global optimization. Soft Comput. 2018, 22, 2935–2952. [Google Scholar] [CrossRef]
  29. Gandomi, A.H.; Alavi, A.H. Krill herd: A new bio-inspired optimization algorithm. Commun. Nonlinear Sci. Numer. Simul. 2012, 17, 4831–4845. [Google Scholar] [CrossRef]
  30. Bolaji, A.L.; Al-Betar, M.A.; Awadallah, M.A.; Khader, A.T.; Abualigah, L.M. A comprehensive review: Krill Herd algorithm (KH) and its applications. Appl. Soft Comput. 2016, 49, 437–446. [Google Scholar] [CrossRef]
  31. Rashedi, E.; Rashedi, E.; Nezamabadi-Pour, H. A comprehensive survey on gravitational search algorithm. Swarm Evol. Comput. 2018, 41, 141–158. [Google Scholar] [CrossRef]
  32. Taradeh, M.; Mafarja, M.; Heidari, A.A.; Faris, H.; Aljarah, I.; Mirjalili, S.; Fujita, H. An evolutionary gravitational search-based feature selection. Inf. Sci. 2019, 497, 219–239. [Google Scholar] [CrossRef]
  33. MiarNaeimi, F.; Azizyan, G.; Rashki, M. Horse herd optimization algorithm: A nature-inspired algorithm for high-dimensional optimization problems. Knowl.-Based Syst. 2021, 213, 106711. [Google Scholar] [CrossRef]
  34. Kiran, M.S. TSA: Tree-seed algorithm for continuous optimization. Expert Syst. Appl. 2015, 42, 6686–6698. [Google Scholar] [CrossRef]
  35. El-Fergany, A.A.; Hasanien, H.M. Tree-seed algorithm for solving optimal power flow problem in large-scale power systems incorporating validations and comparisons. Appl. Soft Comput. 2018, 64, 307–316. [Google Scholar] [CrossRef]
  36. Jiang, J.; Yang, X.; Li, M.; Chen, T. ATSA: An Adaptive Tree Seed Algorithm based on double-layer framework with tree migration and seed intelligent generation. Knowl.-Based Syst. 2023, 279, 110940. [Google Scholar] [CrossRef]
  37. Jiang, J.; Wu, J.; Meng, X.; Qian, L.; Luo, J.; Li, K. Katsa: Knn Ameliorated Tree-Seed Algorithm for Complex Optimization Problems. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4636664 (accessed on 1 June 2024).
  38. Jiang, J.; Meng, X.; Qian, L.; Wang, H. Enhance tree-seed algorithm using hierarchy mechanism for constrained optimization problems. Expert Syst. Appl. 2022, 209, 118311. [Google Scholar] [CrossRef]
  39. Beşkirli, M. Solving continuous optimization problems using the tree seed algorithm developed with the roulette wheel strategy. Expert Syst. Appl. 2021, 170, 114579. [Google Scholar] [CrossRef]
  40. Beşkirli, A.; Özdemir, D.; Temurtaş, H. A comparison of modified tree–seed algorithm for high-dimensional numerical functions. Neural Comput. Appl. 2020, 32, 6877–6911. [Google Scholar] [CrossRef]
  41. Caponetto, R.; Fortuna, L.; Fazzino, S.; Xibilia, M.G. Chaotic sequences to improve the performance of evolutionary algorithms. IEEE Trans. Evol. Comput. 2003, 7, 289–304. [Google Scholar] [CrossRef]
  42. Jiang, J.; Xu, M.; Meng, X.; Li, K. STSA: A sine Tree-Seed Algorithm for complex continuous optimization problems. Phys. A Stat. Mech. Appl. 2020, 537, 122802. [Google Scholar] [CrossRef]
  43. Bajer, D.; Martinović, G.; Brest, J. A population initialization method for evolutionary algorithms based on clustering and Cauchy deviates. Expert Syst. Appl. 2016, 60, 294–310. [Google Scholar] [CrossRef]
  44. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  45. Zheng, Y.; Li, L.; Qian, L.; Cheng, B.; Hou, W.; Zhuang, Y. Sine-SSA-BP ship trajectory prediction based on chaotic mapping improved sparrow search algorithm. Sensors 2023, 23, 704. [Google Scholar] [CrossRef] [PubMed]
  46. Beşkirli, M.; Kiran, M.S. Optimization of Butterworth and Bessel Filter Parameters with Improved Tree-Seed Algorithm. Biomimetics 2023, 8, 540. [Google Scholar] [CrossRef]
  47. Jiang, J.; Liu, Y.; Zhao, Z. TriTSA: Triple Tree-Seed Algorithm for dimensional continuous optimization and constrained engineering problems. Eng. Appl. Artif. Intell. 2021, 104, 104303. [Google Scholar] [CrossRef]
  48. Jiang, J.; Han, R.; Meng, X.; Li, K. TSASC: Tree–seed algorithm with sine–cosine enhancement for continuous optimization problems. Soft Comput. 2020, 24, 18627–18646. [Google Scholar] [CrossRef]
  49. Linden, A. ITSA: Stata Module to Perform Interrupted Time Series Analysis for Single and Multiple Groups. 2021. Available online: https://ideas.repec.org/c/boc/bocode/s457793.html (accessed on 1 June 2024).
  50. Jiang, J.; Jiang, S.; Meng, X.; Qiu, C. EST-TSA: An effective search tendency based to tree seed algorithm. Phys. A Stat. Mech. Appl. 2019, 534, 122323. [Google Scholar] [CrossRef]
  51. Jiang, J.; Meng, X.; Chen, Y.; Qiu, C.; Liu, Y.; Li, K. Enhancing tree-seed algorithm via feed-back mechanism for optimizing continuous problems. Appl. Soft Comput. 2020, 92, 106314. [Google Scholar] [CrossRef]
  52. Chen, X.; Przystupa, K.; Ye, Z.; Chen, F.; Wang, C.; Liu, J.; Gao, R.; Wei, M.; Kochan, O. Forecasting short-term electric load using extreme learning machine with improved tree seed algorithm based on Levy flight. Eksploat. I Niezawodn. 2022, 24, 153–162. [Google Scholar] [CrossRef]
  53. Babalik, A.; Cinar, A.C.; Kiran, M.S. A modification of tree-seed algorithm using Deb’s rules for constrained optimization. Appl. Soft Comput. 2018, 63, 289–305. [Google Scholar] [CrossRef]
  54. Kanna, S.R.; Sivakumar, K.; Lingaraj, N. Development of deer hunting linked earthworm optimization algorithm for solving large scale traveling salesman problem. Knowl.-Based Syst. 2021, 227, 107199. [Google Scholar] [CrossRef]
  55. Jiang, J.; Meng, X.; Liu, Y.; Wang, H. An enhanced TSA-MLP model for identifying credit default problems. SAGE Open 2022, 12, 21582440221094586. [Google Scholar] [CrossRef]
  56. Aslan, M.F.; Sabanci, K.; Ropelewska, E. A new approach to COVID-19 detection: An ANN proposal optimized through tree-seed algorithm. Symmetry 2022, 14, 1310. [Google Scholar] [CrossRef]
  57. Luo, X.; Chen, J.; Yuan, Y.; Wang, Z. Pseudo Gradient-Adjusted Particle Swarm Optimization for Accurate Adaptive Latent Factor Analysis. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 2213–2226. [Google Scholar] [CrossRef]
  58. Ahmad, A.; Yadav, A.K.; Singh, A.; Singh, D.K.; Ağbulut, Ü. A hybrid RSM-GA-PSO approach on optimization of process intensification of linseed biodiesel synthesis using an ultrasonic reactor: Enhancing biodiesel properties and engine characteristics with ternary fuel blends. Energy 2024, 288, 129077. [Google Scholar] [CrossRef]
  59. Nadimi-Shahraki, M.H.; Zamani, H.; Varzaneh, Z.A.; Sadiq, A.S.; Mirjalili, S. A Systematic Review of Applying Grey Wolf Optimizer, its Variants, and its Developments in Different Internet of Things Applications. Internet Things 2024, 26, 101135. [Google Scholar] [CrossRef]
  60. Yılmaz, S.; Küçüksille, E.U. A new modification approach on bat algorithm for solving optimization problems. Appl. Soft Comput. 2015, 28, 259–275. [Google Scholar] [CrossRef]
  61. Ekinci, S.; Izci, D.; Abu Zitar, R.; Alsoud, A.R.; Abualigah, L. Development of Lévy flight-based reptile search algorithm with local search ability for power systems engineering design problems. Neural Comput. Appl. 2022, 34, 20263–20283. [Google Scholar] [CrossRef]
  62. Ajani, O.S.; Kumar, A.; Mallipeddi, R. Covariance matrix adaptation evolution strategy based on correlated evolution paths with application to reinforcement learning. Expert Syst. Appl. 2024, 246, 123289. [Google Scholar] [CrossRef]
  63. Kocak, O.; Erkan, U.; Toktas, A.; Gao, S. PSO-based image encryption scheme using modular integrated logistic exponential map. Expert Syst. Appl. 2024, 237, 121452. [Google Scholar] [CrossRef]
  64. Zheng, Y.; Wang, J.S.; Zhu, J.H.; Zhang, X.Y.; Xing, Y.X.; Zhang, Y.H. MORSA: Multi-objective reptile search algorithm based on elite non-dominated sorting and grid indexing mechanism for wind farm layout optimization problem. Energy 2024, 293, 130771. [Google Scholar] [CrossRef]
  65. He, K.; Zhang, Y.; Wang, Y.K.; Zhou, R.H.; Zhang, H.Z. EABOA: Enhanced adaptive butterfly optimization algorithm for numerical optimization and engineering design problems. Alex. Eng. J. 2024, 87, 543–573. [Google Scholar] [CrossRef]
  66. Doğan, B.; Ölmez, T. A new metaheuristic for numerical function optimization: Vortex Search algorithm. Inf. Sci. 2015, 293, 125–145. [Google Scholar] [CrossRef]
  67. Askarzadeh, A. A novel metaheuristic method for solving constrained engineering optimization problems: Crow search algorithm. Comput. Struct. 2016, 169, 1–12. [Google Scholar] [CrossRef]
  68. Li, Y.; Zhao, Y.; Liu, J. Dimension by dimension dynamic sine cosine algorithm for global optimization problems. Appl. Soft Comput. 2021, 98, 106933. [Google Scholar] [CrossRef]
  69. Russo, I.L.; Bernardino, H.S.; Barbosa, H.J. Knowledge discovery in multiobjective optimization problems in engineering via Genetic Programming. Expert Syst. Appl. 2018, 99, 93–102. [Google Scholar] [CrossRef]
  70. Baghmisheh, M.V.; Peimani, M.; Sadeghi, M.H.; Ettefagh, M.M.; Tabrizi, A.F. A hybrid particle swarm–Nelder–Mead optimization method for crack detection in cantilever beams. Appl. Soft Comput. 2012, 12, 2217–2226. [Google Scholar] [CrossRef]
  71. Gupta, S.; Abderazek, H.; Yıldız, B.S.; Yildiz, A.R.; Mirjalili, S.; Sait, S.M. Comparison of metaheuristic optimization algorithms for solving constrained mechanical design optimization problems. Expert Syst. Appl. 2021, 183, 115351. [Google Scholar] [CrossRef]
Figure 1. The classification of nature-inspired algorithms.
Figure 2. The principle of velocity-driven seed generation.
Figure 3. The variation in k * cos (2 * pi * rand) with the number of iterations.
Figure 4. Adaptive velocity adaptation mechanism based on count parameters.
Figure 5. Schematic of population-based evolutionary strategy with information exchange.
Figure 6. The flowchart for the DTSA.
Figure 7. Unimodal function F1.
Figure 8. Multimodal function F8.
Figure 9. The qualitative analysis of DTSA in unimodal function (F1) (search history (a), average fitness history (b), trajectory history (c), convergence curve (d)).
Figure 10. The qualitative analysis of DTSA in multimodal function (F8) (search history (a), average fitness history (b), trajectory history (c), convergence curve (d)).
Figure 11. Unimodal functions.
Figure 12. Multimodal functions.
Figure 13. Unimodal functions.
Figure 14. Multimodal functions.
Figure 15. Convergence curves of the DTSA and TSA and its recent variants in 30 dimensions.
Figure 16. Convergence curves of the DTSA and TSA and its recent variants in 50 dimensions.
Figure 17. Convergence curve of the DTSA and TSA and its recent variants in 100 dimensions.
Figure 18. Convergence curves of the DTSA and TSA and its recent variants in 30 dimensions.
Figure 19. Convergence curve of the DTSA and TSA and its recent variants in 50 dimensions.
Figure 20. Convergence curve of the DTSA and TSA and its recent variants in 100 dimensions.
Figure 21. Boxplots of all experiment algorithms in 30 dimensions.
Figure 22. Boxplots of all experiment algorithms in 50 dimensions.
Figure 23. Boxplots of all experiment algorithms in 100 dimensions.
Table 1. The initial parameters of comparative algorithms.
Algorithm    Parameter                                                      Value
DTSA         ST                                                             0.1
EST-TSA      ST                                                             0.1
fb-TSA       ST                                                             0.1
TSA          ST                                                             0.1
STSA         ST                                                             0.1
MTSA         ST                                                             0.1
GWO          a                                                              Linearly decreased from 2 to 0
GA           Type                                                           Real coded
             Selection                                                      Roulette wheel
             Crossover                                                      Probability = 0.7
             Mutation                                                       Probability = 0.2
BOA          p                                                              0.8
RSA          Evolutionary sense                                             2 × randn × (1 − (iter/maxiter))
             Sensitive parameter controlling the exploration accuracy       0.05
             Sensitive parameter controlling the exploitation accuracy      0.1
HHO          E0                                                             Range from [−1, 1]
             β                                                              1.5
GOA          α                                                              Linearly decreased from 2 to 0
             p                                                              0.1
             Power exponent                                                 0.01
             Sensory modality
DBO          Producers                                                      0.2
WFO          Probability of laminar flow                                    0.3
             Probability of spiral flow in turbulent flow                   0.7
SO           Threshold                                                      0.25
             Threshold2                                                     0.6
             C1                                                             0.5
             C2                                                             0.05
             C3                                                             2
Table 2. Definitions of the basic IEEE benchmark functions.
Function Name                                     Function Details
High Conditioned Elliptic Function                f1(X) = Σ_{i=1}^{D} (10^6)^((i−1)/(D−1)) · Xi²
Bent Cigar Function                               f2(X) = X1² + 10^6 Σ_{i=2}^{D} Xi²
Discus Function                                   f3(X) = 10^6 X1² + Σ_{i=2}^{D} Xi²
Rosenbrock's Function                             f4(X) = Σ_{i=1}^{D−1} [100(Xi² − X_{i+1})² + (Xi − 1)²]
Ackley's Function                                 f5(X) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} Xi²)) − exp((1/D) Σ_{i=1}^{D} cos(2π Xi)) + 20 + e
Weierstrass Function                              f6(X) = Σ_{i=1}^{D} Σ_{k=0}^{kmax} [a^k cos(2π b^k (Xi + 0.5))] − D Σ_{k=0}^{kmax} [a^k cos(2π b^k · 0.5)], with a = 0.5, b = 3, kmax = 20
Griewank's Function                               f7(X) = Σ_{i=1}^{D} Xi²/4000 − Π_{i=1}^{D} cos(Xi/√i) + 1
Rastrigin's Function                              f8(X) = Σ_{i=1}^{D} (Xi² − 10 cos(2π Xi) + 10)
Modified Schwefel's Function                      f9(X) = 418.9829 × D − Σ_{i=1}^{D} g(zi), with zi = Xi + 4.209687462275036 × 10^2
Katsuura Function                                 f10(X) = (10/D²) Π_{i=1}^{D} (1 + i Σ_{j=1}^{32} |2^j Xi − round(2^j Xi)|/2^j)^(10/D^1.2) − 10/D²
HappyCat Function                                 f11(X) = |Σ_{i=1}^{D} Xi² − D|^(1/4) + (0.5 Σ_{i=1}^{D} Xi² + Σ_{i=1}^{D} Xi)/D + 0.5
HGBat Function                                    f12(X) = |(Σ_{i=1}^{D} Xi²)² − (Σ_{i=1}^{D} Xi)²|^(1/2) + (0.5 Σ_{i=1}^{D} Xi² + Σ_{i=1}^{D} Xi)/D + 0.5
Expanded Griewank's plus Rosenbrock's Function    f13(X) = f7(f4(X1, X2)) + f7(f4(X2, X3)) + … + f7(f4(X_{D−1}, X_D)) + f7(f4(X_D, X1))
Expanded Scaffer's F6 Function                    g(X, Y) = 0.5 + (sin²(√(X² + Y²)) − 0.5)/(1 + 0.001(X² + Y²))²;  f14(X) = g(X1, X2) + g(X2, X3) + … + g(X_{D−1}, X_D) + g(X_D, X1)
Table 3. Benchmark functions of IEEE CEC 2014.
A. Unimodal Functions
Rotated High Conditioned Elliptic Function        F1(x) = f1(M(x − o1)) + 100
Rotated Bent Cigar Function                       F2(x) = f2(M(x − o2)) + 200
Rotated Discus Function                           F3(x) = f3(M(x − o3)) + 300
B. Multimodal Functions
Shifted and Rotated Rosenbrock's Function         F4(x) = f4(M(2.048(x − o4)/100) + 1) + 400
Shifted and Rotated Ackley's Function             F5(x) = f5(M(x − o5)) + 500
Shifted and Rotated Weierstrass Function          F6(x) = f6(M(0.5(x − o6)/100)) + 600
Shifted and Rotated Griewank's Function           F7(x) = f7(M(600(x − o7)/100)) + 700
Shifted Rastrigin's Function                      F8(x) = f8(M(5.12(x − o8)/100)) + 800
Shifted and Rotated Rastrigin's Function          F9(x) = f8(M(5.12(x − o9)/100)) + 900
Shifted Schwefel's Function                       F10(x) = f9(M(1000(x − o10)/100)) + 1000
Shifted and Rotated Schwefel's Function           F11(x) = f9(M(1000(x − o11)/100)) + 1100
Shifted and Rotated Katsuura Function             F12(x) = f10(M(5(x − o12)/100)) + 1200
Shifted and Rotated HappyCat Function             F13(x) = f11(M(5(x − o13)/100)) + 1300
Shifted and Rotated HGBat Function                F14(x) = f12(M(5(x − o14)/100)) + 1400
Shifted and Rotated Expanded Griewank's plus Rosenbrock's Function    F15(x) = f13(M(5(x − o15)/100) + 1) + 1500
Shifted and Rotated Expanded Scaffer's F6 Function                    F16(x) = f14(M(x − o16) + 1) + 1600
C. Hybrid Functions
F17 = f9(M1 Z1) + f8(M2 Z2) + f1(M3 Z3) + 1700,  p = [0.3, 0.3, 0.4]
F18 = f2(M1 Z1) + f12(M2 Z2) + f8(M3 Z3) + 1800,  p = [0.3, 0.3, 0.4]
F19 = f7(M1 Z1) + f6(M2 Z2) + f4(M3 Z3) + f14(M4 Z4) + 1900,  p = [0.2, 0.2, 0.3, 0.3]
F20 = f12(M1 Z1) + f3(M2 Z2) + f13(M3 Z3) + f8(M4 Z4) + 2000,  p = [0.2, 0.2, 0.3, 0.3]
F21 = f14(M1 Z1) + f12(M2 Z2) + f4(M3 Z3) + f9(M4 Z4) + f1(M5 Z5) + 2100,  p = [0.1, 0.2, 0.2, 0.2, 0.3]
F22 = f10(M1 Z1) + f11(M2 Z2) + f13(M3 Z3) + f9(M4 Z4) + f5(M5 Z5) + 2200,  p = [0.1, 0.2, 0.2, 0.2, 0.3]
D. Composition Functions
F23 = w1·F4(x) + w2·[1e−6·F1(x) + 100] + w3·[1e−26·F2(x) + 200] + w4·[1e−6·F3(x) + 300] + w5·[1e−6·F1(x) + 400] + 2300,  σ = [10, 20, 30, 40, 50]
F24 = w1·F10(x) + w2·[F9(x) + 100] + w3·[F14(x) + 200] + 2400,  σ = [20, 20, 20]
F25 = w1·0.25·F11(x) + w2·[F9(x) + 100] + w3·[1e−7·F1(x) + 200] + 2500,  σ = [10, 30, 50]
F26 = w1·0.25·F11(x) + w2·[F13(x) + 100] + w3·[1e−7·F1(x) + 200] + w4·[2.5·F6(x) + 300] + w5·[10·F7(x) + 400] + 2600,  σ = [10, 10, 10, 10, 10]
F27 = w1·10·F14(x) + w2·[10·F9(x) + 100] + w3·[2.5·F11(x) + 200] + w4·[25·F6(x) + 300] + w5·[1e−6·F1(x) + 400] + 2700,  σ = [10, 10, 10, 20, 20]
F28 = w1·2.5·F15(x) + w2·[10·F13(x) + 100] + w3·[2.5·F11(x) + 200] + w4·[5e−4·F16(x) + 300] + w5·[1e−6·F1(x) + 400] + 2800,  σ = [10, 20, 30, 40, 50]
F29 = w1·F17(x) + w2·[F18(x) + 100] + w3·[F19(x) + 200] + 2900,  σ = [10, 30, 50]
F30 = w1·F20(x) + w2·[F21(x) + 100] + w3·[F22(x) + 200] + 3000,  σ = [10, 30, 50]
Table 4. The mean values for the DTSA, EST-TSA, fb-TSA, TSA, STSA, and MTSA in 30 dimensions.
Function    DTSA    EST-TSA    MTSA    STSA    TSA    fb-TSA
F11.8534 × 10 + 6 6.6216 × 10 + 7 5.6627 × 10 + 6 5.2357 × 10 + 8 1.0857 × 10 + 8 9.2412 × 10 + 6
F25.3718 × 10 + 3 9.5866 × 10 + 5 1.9370 × 10 + 5 2.8175 × 10 + 10 3.1216 × 10 + 8 2.9438 × 10 + 3
F35.3235 × 10 + 2 3.3472 × 10 + 4 3.6163 × 10 + 3 1.0183 × 10 + 5 3.9619 × 10 + 4 1.9082 × 10 + 4
F44.4918 × 10 + 2 6.2177 × 10 + 2 4.8830 × 10 + 2 2.8145 × 10 + 3 6.9366 × 10 + 2 4.4972 × 10 + 2
F55.2003 × 10 + 2 5.2101 × 10 + 2 5.2098 × 10 + 2 5.2096 × 10 + 2 5.2102 × 10 + 2 5.2100 × 10 + 2
F66.0130 × 10 + 2 6.2733 × 10 + 2 6.0111 × 10 + 2 6.3899 × 10 + 2 6.2873 × 10 + 2 6.0093 × 10 + 2
F77.0000 × 10 + 2 7.0051 × 10 + 2 7.0031 × 10 + 2 9.5051 × 10 + 2 7.0427 × 10 + 2 7.0001 × 10 + 2
F88.2189 × 10 + 2 9.9125 × 10 + 2 8.2342 × 10 + 2 1.0774 × 10 + 3 9.8800 × 10 + 2 8.2388 × 10 + 2
F99.2388 × 10 + 2 1.1516 × 10 + 3 1.0439 × 10 + 3 1.2170 × 10 + 3 1.1215 × 10 + 3 9.7803 × 10 + 2
F101.3552 × 10 + 3 5.5110 × 10 + 3 1.7501 × 10 + 3 7.6039 × 10 + 3 6.7713 × 10 + 3 2.7224 × 10 + 3
F113.6458 × 10 + 3 7.5516 × 10 + 3 8.0938 × 10 + 3 8.4115 × 10 + 3 8.3971 × 10 + 3 6.5521 × 10 + 3
F121.2028 × 10 + 3 1.2023 × 10 + 3 1.2025 × 10 + 3 1.2030 × 10 + 3 1.2029 × 10 + 3 1.2033 × 10 + 3
F131.3003 × 10 + 3 1.3005 × 10 + 3 1.3004 × 10 + 3 1.3043 × 10 + 3 1.3006 × 10 + 3 1.3005 × 10 + 3
F141.4003 × 10 + 3 1.4003 × 10 + 3 1.4003 × 10 + 3 1.4800 × 10 + 3 1.4004 × 10 + 3 1.4003 × 10 + 3
F151.5034 × 10 + 3 1.5219 × 10 + 3 1.5172 × 10 + 3 1.7233 × 10 + 4 1.5277 × 10 + 3 1.5139 × 10 + 3
F161.6102 × 10 + 3 1.6128 × 10 + 3 1.6122 × 10 + 3 1.6133 × 10 + 3 1.6128 × 10 + 3 1.6122 × 10 + 3
F172.3889 × 10 + 5 1.3751 × 10 + 6 5.7574 × 10 + 5 1.8705 × 10 + 7 1.8190 × 10 + 6 4.2697 × 10 + 5
F181.8997 × 10 + 3 2.1487 × 10 + 3 4.5423 × 10 + 3 1.2407 × 10 + 8 2.1233 × 10 + 3 1.9254 × 10 + 3
F191.9035 × 10 + 3 1.9081 × 10 + 3 1.9355 × 10 + 3 2.0009 × 10 + 3 1.9127 × 10 + 3 1.9073 × 10 + 3
F204.6994 × 10 + 3 2.2861 × 10 + 4 7.6132 × 10 + 3 4.6643 × 10 + 4 1.7044 × 10 + 4 1.2748 × 10 + 4
F211.2688 × 10 + 5 4.7358 × 10 + 5 1.5364 × 10 + 5 3.0916 × 10 + 6 4.3843 × 10 + 5 2.0241 × 10 + 5
F222.4970 × 10 + 3 2.7481 × 10 + 3 2.3768 × 10 + 3 3.2253 × 10 + 3 2.7129 × 10 + 3 2.4806 × 10 + 3
F232.6152 × 10 + 3 2.6155 × 10 + 3 2.6152 × 10 + 3 2.7273 × 10 + 3 2.6191 × 10 + 3 2.6152 × 10 + 3
F242.6244 × 10 + 3 2.6000 × 10 + 3 2.6252 × 10 + 3 2.6007 × 10 + 3 2.6543 × 10 + 3 2.6132 × 10 + 3
F252.7085 × 10 + 3 2.7000 × 10 + 3 2.7066 × 10 + 3 2.7518 × 10 + 3 2.7238 × 10 + 3 2.7089 × 10 + 3
F262.7003 × 10 + 3 2.7527 × 10 + 3 2.7004 × 10 + 3 2.7040 × 10 + 3 2.7007 × 10 + 3 2.7004 × 10 + 3
F273.0204 × 10 + 3 3.3265 × 10 + 3 3.1028 × 10 + 3 3.9204 × 10 + 3 3.2294 × 10 + 3 3.0030 × 10 + 3
F283.6667 × 10 + 3 4.0400 × 10 + 3 3.6696 × 10 + 3 5.5382 × 10 + 3 4.0981 × 10 + 3 3.7407 × 10 + 3
F293.9272 × 10 + 3 7.2542 × 10 + 4 4.0237 × 10 + 3 2.4545 × 10 + 7 1.7331 × 10 + 5 4.0245 × 10 + 3
F304.5599 × 10 + 3 2.3030 × 10 + 4 4.9883 × 10 + 3 4.8079 × 10 + 5 2.1804 × 10 + 4 4.7449 × 10 + 3
Rank first    23    3    1    0    0    3
Table 5. The mean values for the DTSA, EST-TSA, fb-TSA, TSA, STSA, and MTSA in 50 dimensions.
Function    DTSA    EST-TSA    MTSA    STSA    TSA    fb-TSA
F13.9446 × 10 + 6 3.9755 × 10 + 8 1.4848 × 10 + 7 2.3766 × 10 + 9 4.3731 × 10 + 8 1.8934 × 10 + 7
F24.0065 × 10 + 3 9.7082 × 10 + 8 7.8296 × 10 + 6 1.1836 × 10 + 11 8.9880 × 10 + 9 4.8431 × 10 + 4
F37.4047 × 10 + 3 1.1842 × 10 + 5 6.6952 × 10 + 4 2.7862 × 10 + 5 1.1875 × 10 + 5 9.0153 × 10 + 4
F45.2444 × 10 + 2 1.4599 × 10 + 3 5.2644 × 10 + 2 3.4377 × 10 + 4 1.9098 × 10 + 3 5.2544 × 10 + 2
F55.2024 × 10 + 2 5.2117 × 10 + 2 5.2121 × 10 + 2 5.2118 × 10 + 2 5.2121 × 10 + 2 5.2121 × 10 + 2
F66.0826 × 10 + 2 6.5543 × 10 + 2 6.1059 × 10 + 2 6.7526 × 10 + 2 6.5902 × 10 + 2 6.2356 × 10 + 2
F77.0000 × 10 + 2 7.0925 × 10 + 2 7.0108 × 10 + 2 1.8056 × 10 + 3 7.8536 × 10 + 2 7.0015 × 10 + 2
F88.5124 × 10 + 2 1.2534 × 10 + 3 8.9569 × 10 + 2 1.4234 × 10 + 3 1.2436 × 10 + 3 8.8733 × 10 + 2
F99.5224 × 10 + 2 1.3821 × 10 + 3 1.0803 × 10 + 3 1.6733 × 10 + 3 1.3558 × 10 + 3 1.0980 × 10 + 3
F103.2518 × 10 + 3 1.1643 × 10 + 4 3.9794 × 10 + 3 1.4554 × 10 + 4 1.2651 × 10 + 4 4.9779 × 10 + 3
F115.6490 × 10 + 3 1.3677 × 10 + 4 1.4605 × 10 + 4 1.5495 × 10 + 4 1.4906 × 10 + 4 1.3739 × 10 + 4
F121.2034 × 10 + 3 1.2036 × 10 + 3 1.2041 × 10 + 3 1.2039 × 10 + 3 1.2042 × 10 + 3 1.2043 × 10 + 3
F131.3004 × 10 + 3 1.3008 × 10 + 3 1.3006 × 10 + 3 1.3071 × 10 + 3 1.3012 × 10 + 3 1.3006 × 10 + 3
F141.4004 × 10 + 3 1.4005 × 10 + 3 1.4004 × 10 + 3 1.7103 × 10 + 3 1.4265 × 10 + 3 1.4004 × 10 + 3
F151.5083 × 10 + 3 5.0779 × 10 + 3 1.5372 × 10 + 3 6.6525 × 10 + 6 4.9848 × 10 + 3 1.5313 × 10 + 3
F161.6183 × 10 + 3 1.6226 × 10 + 3 1.6224 × 10 + 3 1.6230 × 10 + 3 1.6228 × 10 + 3 1.6224 × 10 + 3
F171.0135 × 10 + 6 1.3693 × 10 + 7 1.0388 × 10 + 6 1.5580 × 10 + 8 2.0814 × 10 + 7 1.0716 × 10 + 6
F182.1716 × 10 + 3 3.6138 × 10 + 3 3.1925 × 10 + 3 2.5696 × 10 + 9 1.0451 × 10 + 5 2.5001 × 10 + 3
F191.9680 × 10 + 3 1.9831 × 10 + 3 1.9324 × 10 + 3 2.3253 × 10 + 3 1.9917 × 10 + 3 1.9371 × 10 + 3
F206.8302 × 10 + 3 4.2957 × 10 + 4 1.5457 × 10 + 4 2.7659 × 10 + 5 4.8036 × 10 + 4 3.7643 × 10 + 4
F214.4947 × 10 + 5 8.5454 × 10 + 6 2.2217 × 10 + 6 4.0869 × 10 + 7 7.2602 × 10 + 6 1.0551 × 10 + 6
F223.1275 × 10 + 3 3.6816 × 10 + 3 3.6274 × 10 + 3 5.2984 × 10 + 3 3.9806 × 10 + 3 3.7124 × 10 + 3
F232.6440 × 10 + 3 2.5795 × 10 + 3 2.6441 × 10 + 3 3.4661 × 10 + 3 2.6688 × 10 + 3 2.6440 × 10 + 3
F242.6736 × 10 + 3 2.6000 × 10 + 3 2.6628 × 10 + 3 2.8856 × 10 + 3 2.7318 × 10 + 3 2.6711 × 10 + 3
F252.7209 × 10 + 3 2.7000 × 10 + 3 2.7168 × 10 + 3 2.9058 × 10 + 3 2.7741 × 10 + 3 2.7325 × 10 + 3
F262.7505 × 10 + 3 2.8000 × 10 + 3 2.7507 × 10 + 3 2.7070 × 10 + 3 2.7017 × 10 + 3 2.8030 × 10 + 3
F273.1411 × 10 + 3 4.2662 × 10 + 3 3.3051 × 10 + 3 4.8809 × 10 + 3 4.4235 × 10 + 3 3.5434 × 10 + 3
F284.2500 × 10 + 3 6.3406 × 10 + 3 4.1564 × 10 + 3 8.8173 × 10 + 3 6.0780 × 10 + 3 4.8144 × 10 + 3
F294.3020 × 10 + 3 1.4916 × 10 + 6 7.1278 × 10 + 3 2.3249 × 10 + 8 7.0369 × 10 + 6 4.3833 × 10 + 3
F30    1.6597 × 10^4    1.4608 × 10^5    1.9499 × 10^4    3.2828 × 10^6    2.6339 × 10^5    2.0468 × 10^4
Rank first    24    3    2    0    1    1
Table 6. The mean values for the DTSA, EST-TSA, fb-TSA, TSA, STSA, and MTSA in 100 dimensions.
Function    DTSA    EST-TSA    MTSA    STSA    TSA    fb-TSA
F15.7243 × 10 + 7 1.4918 × 10 + 9 1.2921 × 10 + 8 1.1868 × 10 + 10 3.1047 × 10 + 9 2.2997 × 10 + 8
F27.6447 × 10 + 4 6.2758 × 10 + 10 2.6417 × 10 + 8 4.6906 × 10 + 11 8.1438 × 10 + 10 1.9459 × 10 + 9
F32.8429 × 10 + 4 3.0542 × 10 + 5 2.1455 × 10 + 5 7.8482 × 10 + 5 3.4631 × 10 + 5 3.0294 × 10 + 5
F47.7869 × 10 + 2 1.0509 × 10 + 4 8.7401 × 10 + 2 1.5260 × 10 + 5 1.3924 × 10 + 4 1.0689 × 10 + 3
F55.2115 × 10 + 2 5.2137 × 10 + 2 5.2138 × 10 + 2 5.2134 × 10 + 2 5.2138 × 10 + 2 5.2137 × 10 + 2
F66.4302 × 10 + 2 7.3663 × 10 + 2 6.6467 × 10 + 2 7.6389 × 10 + 2 7.4510 × 10 + 2 6.7667 × 10 + 2
F77.0018 × 10 + 2 1.2792 × 10 + 3 7.0693 × 10 + 2 5.0465 × 10 + 3 1.5563 × 10 + 3 7.1093 × 10 + 2
F89.7213 × 10 + 2 1.9392 × 10 + 3 1.1618 × 10 + 3 2.6293 × 10 + 3 1.9059 × 10 + 3 1.1462 × 10 + 3
F91.0736 × 10 + 3 2.1834 × 10 + 3 1.7852 × 10 + 3 2.9956 × 10 + 3 2.0958 × 10 + 3 1.5184 × 10 + 3
F107.0297 × 10 + 3 2.9105 × 10 + 4 1.8204 × 10 + 4 3.2641 × 10 + 4 3.0755 × 10 + 4 1.4705 × 10 + 4
F111.4719 × 10 + 4 3.0524 × 10 + 4 3.1888 × 10 + 4 3.2506 × 10 + 4 3.2241 × 10 + 4 3.1913 × 10 + 4
F121.2024 × 10 + 3 1.2043 × 10 + 3 1.2045 × 10 + 3 1.2045 × 10 + 3 1.2049 × 10 + 3 1.2046 × 10 + 3
F131.3007 × 10 + 3 1.3040 × 10 + 3 1.3008 × 10 + 3 1.3120 × 10 + 3 1.3049 × 10 + 3 1.3009 × 10 + 3
F141.4004 × 10 + 3 1.5730 × 10 + 3 1.4007 × 10 + 3 2.7335 × 10 + 3 1.6373 × 10 + 3 1.4004 × 10 + 3
F151.5510 × 10 + 3 4.8840 × 10 + 5 1.6074 × 10 + 3 2.1651 × 10 + 8 1.2171 × 10 + 6 2.0121 × 10 + 3
F161.6448 × 10 + 3 1.6470 × 10 + 3 1.6467 × 10 + 3 1.6477 × 10 + 3 1.6470 × 10 + 3 1.6470 × 10 + 3
F173.6189 × 10 + 6 1.6916 × 10 + 8 1.6867 × 10 + 7 1.1497 × 10 + 9 2.7832 × 10 + 8 3.9192 × 10 + 7
F182.3118 × 10 + 3 4.0168 × 10 + 4 5.9916 × 10 + 4 2.5461 × 10 + 10 2.0551 × 10 + 8 3.0225 × 10 + 3
F192.0177 × 10 + 3 2.2319 × 10 + 3 2.0154 × 10 + 3 5.9575 × 10 + 3 2.3520 × 10 + 3 2.0079 × 10 + 3
F203.5926 × 10 + 4 2.5544 × 10 + 5 8.3978 × 10 + 4 3.2150 × 10 + 6 3.0385 × 10 + 5 2.0031 × 10 + 5
F212.6598 × 10 + 6 6.2281 × 10 + 7 7.2953 × 10 + 6 4.8294 × 10 + 8 1.0373 × 10 + 8 7.2459 × 10 + 6
F224.0180 × 10 + 3 6.9075 × 10 + 3 5.2165 × 10 + 3 9.8067 × 10 + 3 6.7491 × 10 + 3 6.9634 × 10 + 3
F232.6485 × 10 + 3 2.5000 × 10 + 3 2.6555 × 10 + 3 5.9704 × 10 + 3 2.9402 × 10 + 3 2.6556 × 10 + 3
F242.7984 × 10 + 3 2.6000 × 10 + 3 2.7736 × 10 + 3 3.9225 × 10 + 3 3.0772 × 10 + 3 2.8196 × 10 + 3
F252.7765 × 10 + 3 2.7000 × 10 + 3 2.7716 × 10 + 3 3.8529 × 10 + 3 3.0918 × 10 + 3 2.7956 × 10 + 3
F262.8014 × 10 + 3 2.8000 × 10 + 3 2.8031 × 10 + 3 2.7185 × 10 + 3 3.0466 × 10 + 3 2.8094 × 10 + 3
F273.9995 × 10 + 3 6.3806 × 10 + 3 4.5261 × 10 + 3 7.4034 × 10 + 3 6.5736 × 10 + 3 5.0424 × 10 + 3
F286.3978 × 10 + 3 2.2158 × 10 + 4 7.0157 × 10 + 3 2.2799 × 10 + 4 1.9432 × 10 + 4 1.2124 × 10 + 4
F296.7693 × 10 + 3 4.9012 × 10 + 7 1.2955 × 10 + 5 1.5016 × 10 + 9 1.6496 × 10 + 8 1.8635 × 10 + 4
F303.7315 × 10 + 4 3.2730 × 10 + 6 1.5121 × 10 + 5 6.5268 × 10 + 7 7.2847 × 10 + 6 1.1320 × 10 + 5
Rank first    25    3    0    1    0    1
Table 7. The mean values for the DTSA, EST-TSA, fb-TSA, TSA, STSA, MTSA, PSO, GWO, BOA, RSA, and GA in 30 dimensions.
Function    DTSA    TSA    STSA    MTSA    EST-TSA    fb-TSA    PSO    GWO    BOA    RSA    GA
Unimodal
functions
1.65 × 10+69.39 × 10 + 7 5.92 × 10 + 8 5.92 × 10 + 6 1.04 × 10 + 8 8.07 × 10 + 6 3.84 × 10 + 7 1.32 × 10 + 8 1.64 × 10 + 9 1.05 × 10 + 9 6.05 × 10 + 8
5.80 × 10 + 3 3.93 × 10 + 5 2.63 × 10 + 10 2.24 × 10 + 5 3.10 × 10 + 6 5.59 × 10 + 3 4.42 × 10 + 6 5.36 × 10 + 9 7.75 × 10 + 10 7.18 × 10 + 10 3.62 × 10 + 10
4.48 × 10 + 3 3.74 × 10 + 4 8.66 × 10 + 4 4.01 × 10 + 3 3.94 × 10 + 4 1.19 × 10 + 4 3.29 × 10 + 4 5.64 × 10 + 4 7.61 × 10 + 4 7.95 × 10 + 4 7.14 × 10 + 4
Simple
multimodal
functions
5.00 × 10 + 2 5.53 × 10 + 2 2.89 × 10 + 3 5.01 × 10 + 2 5.91 × 10 + 2 4.96 × 10 + 2 6.44 × 10 + 2 8.03 × 10 + 2 1.64 × 10 + 4 8.78 × 10 + 3 5.48 × 10 + 3
5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2
6.02 × 10 + 2 6.23 × 10 + 2 6.39 × 10 + 2 6.03 × 10 + 2 6.26 × 10 + 2 6.03 × 10 + 2 6.18 × 10 + 2 6.19 × 10 + 2 6.38 × 10 + 2 6.40 × 10 + 2 6.37 × 10 + 2
7.00 × 10 + 2 7.00 × 10 + 2 9.50 × 10 + 2 7.00 × 10 + 2 7.00 × 10 + 2 7.00 × 10 + 2 7.01 × 10 + 2 7.54 × 10 + 2 1.47 × 10 + 3 1.30 × 10 + 3 1.07 × 10 + 3
8.23 × 10 + 2 9.68 × 10 + 2 1.08 × 10 + 3 8.31 × 10 + 2 9.91 × 10 + 2 8.37 × 10 + 2 8.57 × 10 + 2 9.10 × 10 + 2 1.12 × 10 + 3 1.16 × 10 + 3 1.05 × 10 + 3
9.32 × 10 + 2 1.11 × 10 + 3 1.20 × 10 + 3 9.75 × 10 + 2 1.14 × 10 + 3 9.77 × 10 + 2 1.03 × 10 + 3 1.04 × 10 + 3 1.25 × 10 + 3 1.24 × 10 + 3 1.17 × 10 + 3
1.60 × 10 + 3 6.28 × 10 + 3 7.74 × 10 + 3 1.96 × 10 + 3 6.02 × 10 + 3 2.20 × 10 + 3 3.01 × 10 + 3 4.51 × 10 + 3 8.58 × 10 + 3 7.92 × 10 + 3 7.17 × 10 + 3
3.79 × 10 + 3 8.18 × 10 + 3 8.45 × 10 + 3 7.49 × 10 + 3 7.53 × 10 + 3 7.80 × 10 + 3 7.30 × 10 + 3 5.81 × 10 + 3 8.98 × 10 + 3 8.81 × 10 + 3 7.99 × 10 + 3
1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3
1.30 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.31 × 10 + 3 1.31 × 10 + 3 1.31 × 10 + 3
1.40 × 10 + 3 1.40 × 10 + 3 1.48 × 10 + 3 1.40 × 10 + 3 1.40 × 10 + 3 1.40 × 10 + 3 1.40 × 10 + 3 1.42 × 10 + 3 1.71 × 10 + 3 1.57 × 10 + 3 1.55 × 10 + 3
1.50 × 10 + 3 1.52 × 10 + 3 1.37 × 10 + 4 1.52 × 10 + 3 1.52 × 10 + 3 1.51 × 10 + 3 1.52 × 10 + 3 2.36 × 10 + 3 3.98 × 10 + 5 2.0 × 10 + 5 1.71 × 10 + 4
1.61 × 10 + 3 1.61 × 10 + 3 1.61 × 10 + 3 1.61 × 10 + 3 1.61 × 10 + 3 1.61 × 10 + 3 1.61 × 10 + 3 1.61 × 10 + 3 1.61 × 10 + 3 1.61 × 10 + 3 1.61 × 10 + 3
Hybrid
functions
3.83 × 10+52.32 × 10 + 6 1.50 × 10 + 7 5.30 × 10 + 5 2.04 × 10 + 6 5.78 × 10 + 5 2.60 × 10 + 6 5.95 × 10 + 6 1.91 × 10 + 8 1.08 × 10 + 8 3.38 × 10 + 7
2.62 × 10 + 3 2.32 × 10 + 3 1.19 × 10 + 8 2.81 × 10 + 3 2.21 × 10 + 3 2.22 × 10 + 3 7.05 × 10 + 5 1.97 × 10 + 7 6.28 × 10 + 9 4.51 × 10 + 9 7.71 × 10 + 8
1.90 × 10 + 3 1.91 × 10 + 3 2.00 × 10 + 3 1.91 × 10 + 3 1.91 × 10 + 3 1.91 × 10 + 3 1.92 × 10 + 3 1.94 × 10 + 3 2.44 × 10 + 3 2.25 × 10 + 3 2.17 × 10 + 3
7.62 × 10 + 3 1.62 × 10 + 4 5.49 × 10 + 4 7.66 × 10 + 3 2.21 × 10 + 4 1.44 × 10 + 4 3.39 × 10 + 4 5.10 × 10 + 4 3.70 × 10 + 5 1.60 × 10 + 5 6.39 × 10 + 4
1.37 × 10+54.03 × 10 + 5 3.14 × 10 + 6 1.53 × 10 + 5 4.84 × 10 + 5 1.52 × 10 + 5 4.80 × 10 + 5 2.05 × 10 + 6 5.24 × 10 + 7 4.28 × 10 + 7 7.92 × 10 + 6
2.41 × 10 + 3 2.69 × 10 + 3 3.24 × 10 + 3 2.37 × 10 + 3 2.68 × 10 + 3 2.42 × 10 + 3 2.60 × 10 + 3 2.76 × 10 + 3 3.48 × 10 + 4 2.06 × 10 + 4 3.53 × 10 + 3
2.62 × 10 + 3 2.62 × 10 + 3 2.72 × 10 + 3 2.62 × 10 + 3 2.60 × 10 + 3 2.62 × 10 + 3 2.62 × 10 + 3 2.65 × 10 + 3 2.50 × 10 + 3 2.50 × 10 + 3 2.79 × 10 + 3
Composition
functions
2.63 × 10 + 3 2.63 × 10 + 3 2.60 × 10 + 3 2.62 × 10 + 3 2.60 × 10 + 3 2.63 × 10 + 3 2.64 × 10 + 3 2.60 × 10 + 3 2.60 × 10 + 3 2.60 × 10 + 3 2.62 × 10 + 3
2.71 × 10 + 3 2.72 × 10 + 3 2.75 × 10 + 3 2.71 × 10 + 3 2.70 × 10 + 3 2.71 × 10 + 3 2.71 × 10 + 3 2.70 × 10 + 3 2.70 × 10 + 3 2.70 × 10 + 3 2.71 × 10 + 3
2.70 × 10 + 3 2.70 × 10 + 3 2.70 × 10 + 3 2.71 × 10 + 3 2.77 × 10 + 3 2.70 × 10 + 3 2.74 × 10 + 3 2.72 × 10 + 3 2.77 × 10 + 3 2.80 × 10 + 3 2.72 × 10 + 3
3.04 × 10 + 3 3.22 × 10 + 3 3.72 × 10 + 3 3.06 × 10 + 3 3.30 × 10 + 3 3.06 × 10 + 3 3.40 × 10 + 3 3.55 × 10 + 3 3.56 × 10 + 3 4.20 × 10 + 3 3.74 × 10 + 3
3.71 × 10 + 3 4.04 × 10 + 3 5.29 × 10 + 3 3.67 × 10 + 3 4.09 × 10 + 3 3.71 × 10 + 3 4.65 × 10 + 3 3.35 × 10 + 3 5.84 × 10 + 3 5.73 × 10 + 3 8.72 × 10 + 3
3.97 × 10 + 3 4.06 × 10 + 4 2.32 × 10 + 7 3.97 × 10 + 3 4.84 × 10 + 4 3.94 × 10 + 3 4.42 × 10 + 6 3.11 × 10 + 3 9.16 × 10 + 6 1.63 × 10 + 7 2.98 × 10 + 8
4.51 × 10 + 3 1.27 × 10 + 4 4.00 × 10 + 5 4.99 × 10 + 3 1.84 × 10 + 4 5.07 × 10 + 3 1.99 × 10 + 4 3.56 × 10 + 3 3.51 × 10 + 6 2.49 × 10 + 6 1.37 × 10 + 6
Ranking first    20    0    0    1    1    2    0    3    2    1    0
Table 8. The mean values for the DTSA, EST-TSA, fb-TSA, TSA, STSA, MTSA, PSO, GWO, BOA, RSA, and GA in 50 dimensions.
Function    DTSA    TSA    STSA    MTSA    EST-TSA    fb-TSA    PSO    GWO    BOA    RSA    GA
Unimodal
functions
3.06 × 10+63.56× 10 + 8 2.35× 10 + 9 8.95× 10 + 6 2.99× 10 + 8 4.43× 10 + 7 9.03× 10 + 7 4.72× 10 + 8 5.45× 10 + 9 2.93× 10 + 9 2.43× 10 + 9
2.34 × 10 + 3 2.45 × 10 + 8 1.15 × 10 + 11 8.51 × 10 + 6 1.07 × 10 + 9 2.10 × 10 + 4 2.44 × 10 + 8 1.66 × 10 + 10 1.79 × 10 + 11 1.40 × 10 + 11 9.68 × 10 + 10
1.36 × 10+41.19 × 10 + 5 2.98 × 10 + 5 7.72 × 10 + 4 1.10 × 10 + 5 1.33 × 10 + 5 1.45 × 10 + 5 1.66 × 10 + 5 1.99 × 10 + 5 1.48 × 10 + 5 1.43 × 10 + 5
Simple
multimodal
functions
5.15 × 10 + 2 8.83 × 10 + 2 2.84 × 10 + 4 5.40 × 10 + 2 1.26 × 10 + 3 5.36 × 10 + 2 7.50 × 10 + 2 2.91 × 10 + 3 5.13 × 10 + 4 2.51 × 10 + 4 2.33 × 10 + 4
5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2
6.07 × 10 + 2 6.54 × 10 + 2 6.74 × 10 + 2 6.08 × 10 + 2 6.54 × 10 + 2 6.16 × 10 + 2 6.42 × 10 + 2 6.39 × 10 + 2 6.68 × 10 + 2 6.75 × 10 + 2 6.68 × 10 + 2
7.00 × 10 + 2 7.02 × 10 + 2 1.81 × 10 + 3 7.01 × 10 + 2 7.12 × 10 + 2 7.00 × 10 + 2 7.04 × 10 + 2 9.23 × 10 + 2 2.30 × 10 + 3 1.90 × 10 + 3 1.64 × 10 + 3
8.50 × 10 + 2 1.21 × 10 + 3 1.44 × 10 + 3 8.83 × 10 + 2 1.23 × 10 + 3 8.98 × 10 + 2 9.26 × 10 + 2 1.05 × 10 + 3 1.43 × 10 + 3 1.48 × 10 + 3 1.28 × 10 + 3
9.74 × 10 + 2 1.37 × 10 + 3 1.65 × 10 + 3 1.09 × 10 + 3 1.40 × 10 + 3 1.15 × 10 + 3 1.16 × 10 + 3 1.21 × 10 + 3 1.60 × 10 + 3 1.61 × 10 + 3 1.56 × 10 + 3
2.24 × 10 + 3 1.30 × 10 + 4 1.51 × 10 + 4 4.48 × 10 + 3 1.23 × 10 + 4 5.23 × 10 + 3 6.37 × 10 + 3 8.41 × 10 + 3 1.56 × 10 + 4 1.47 × 10 + 4 1.30 × 10 + 4
5.32 × 10 + 3 1.45 × 10 + 4 1.53 × 10 + 4 1.45 × 10 + 4 1.41 × 10 + 4 1.48 × 10 + 4 1.23 × 10 + 4 8.52 × 10 + 3 1.59 × 10 + 4 1.53 × 10 + 4 1.43 × 10 + 4
1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3
1.30 × 10 + 3 1.30 × 10 + 3 1.31 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.31 × 10 + 3 1.31 × 10 + 3 1.31 × 10 + 3
1.40 × 10 + 3 1.40 × 10 + 3 1.70 × 10 + 3 1.40 × 10 + 3 1.40 × 10 + 3 1.40 × 10 + 3 1.40 × 10 + 3 1.46 × 10 + 3 1.81 × 10 + 3 1.72 × 10 + 3 1.63 × 10 + 3
1.51 × 10 + 3 1.78 × 10 + 3 3.09 × 10 + 6 1.54 × 10 + 3 3.46 × 10 + 3 1.53 × 10 + 3 1.57 × 10 + 3 7.03 × 10 + 4 5.77 × 10 + 6 1.72 × 10 + 6 5.12 × 10 + 5
1.62 × 10 + 3 1.62 × 10 + 3 1.62 × 10 + 3 1.62 × 10 + 3 1.62 × 10 + 3 1.62 × 10 + 3 1.62 × 10 + 3 1.62 × 10 + 3 1.62 × 10 + 3 1.62 × 10 + 3 1.62 × 10 + 3
Hybrid
functions
5.27 × 10+51.56 × 10 + 7 1.14 × 10 + 8 2.27 × 10 + 6 2.45 × 10 + 7 1.76 × 10 + 6 1.31 × 10 + 7 1.85 × 10 + 7 9.37 × 10 + 8 3.85 × 10 + 8 1.75 × 10 + 8
2.29 × 10 + 3 2.76 × 10 + 3 2.35 × 10 + 9 2.49 × 10 + 3 3.00 × 10 + 3 2.13 × 10 + 3 2.30 × 10 + 3 4.37 × 10 + 7 2.46 × 10 + 10 1.05 × 10 + 10 5.29 × 10 + 9
1.93 × 10 + 3 1.98 × 10 + 3 2.34 × 10 + 3 1.95 × 10 + 3 1.99 × 10 + 3 1.92 × 10 + 3 1.97 × 10 + 3 2.03 × 10 + 3 6.52 × 10 + 3 4.17 × 10 + 3 2.70 × 10 + 3
1.18 × 10 + 4 3.66 × 10 + 4 4.29 × 10 + 5 2.61 × 10 + 4 3.11 × 10 + 4 5.02 × 10 + 4 7.18 × 10 + 4 2.05 × 10 + 5 2.71 × 10 + 5 1.60 × 10 + 5 6.96 × 10 + 4
1.20 × 10+66.05 × 10 + 6 3.83 × 10 + 7 1.51 × 10 + 6 4.30 × 10 + 6 5.56 × 10+58.61 × 10 + 6 7.82 × 10 + 6 1.07 × 10 + 8 5.64 × 10 + 7 1.90 × 10 + 7
2.82 × 10 + 3 3.94 × 10 + 3 5.23 × 10 + 3 3.01 × 10 + 3 3.92 × 10 + 3 3.34 × 10 + 3 3.95 × 10 + 3 3.53 × 10 + 3 6.75 × 10 + 5 1.21 × 10 + 6 7.32 × 10 + 3
2.64 × 10 + 3 2.65 × 10 + 3 3.38 × 10 + 3 2.64 × 10 + 3 2.58 × 10 + 3 2.64 × 10 + 3 2.66 × 10 + 3 2.83 × 10 + 3 2.50 × 10 + 3 2.50 × 10 + 3 2.89 × 10 + 3
Composition
functions
2.67 × 10 + 3 2.70 × 10 + 3 2.91 × 10 + 3 2.67 × 10 + 3 2.60 × 10 + 3 2.67 × 10 + 3 2.70 × 10 + 3 2.64 × 10 + 3 2.60 × 10 + 3 2.60 × 10 + 3 2.66 × 10 + 3
2.72 × 10 + 3 2.78 × 10 + 3 2.87 × 10 + 3 2.71 × 10 + 3 2.70 × 10 + 3 2.72 × 10 + 3 2.73 × 10 + 3 2.71 × 10 + 3 2.70 × 10 + 3 2.70 × 10 + 3 2.71 × 10 + 3
2.80 × 10 + 3 2.77 × 10 + 3 2.71 × 10 + 3 2.80 × 10 + 3 2.80 × 10 + 3 2.75 × 10 + 3 2.90 × 10 + 3 2.86 × 10 + 3 2.80 × 10 + 3 2.80 × 10 + 3 2.80 × 10 + 3
3.20 × 10 + 3 4.11 × 10 + 3 4.98 × 10 + 3 3.29 × 10 + 3 4.29 × 10 + 3 3.56 × 10 + 3 4.10 × 10 + 3 4.11 × 10 + 3 5.40 × 10 + 3 5.11 × 10 + 3 5.31 × 10 + 3
4.16 × 10 + 3 5.70 × 10 + 3 9.86 × 10 + 3 4.24 × 10 + 3 8.31 × 10 + 3 4.33 × 10 + 3 6.48 × 10 + 3 3.51 × 10 + 3 1.48 × 10 + 4 1.20 × 10 + 4 1.59 × 10 + 4
3.70 × 10 + 3 8.68 × 10 + 5 2.99 × 10 + 8 6.75 × 10 + 3 1.57 × 10 + 6 5.16 × 10 + 3 1.12 × 10 + 5 3.12 × 10 + 3 3.10 × 10 + 3 3.10 × 10 + 3 1.09 × 10 + 9
1.33 × 10 + 4 1.40 × 10 + 5 3.19 × 10 + 6 1.84 × 10 + 4 2.08 × 10 + 5 1.68 × 10 + 4 1.88 × 10 + 5 4.41 × 10 + 3 6.83 × 10 + 6 2.20 × 10 + 7 2.03 × 10 + 7
Ranking first    20    0    1    0    1    3    0    2    3    3    0
Table 9. The mean values for the DTSA, EST-TSA, fb-TSA, TSA, STSA, MTSA, PSO, GWO, BOA, RSA, and GA in 100 dimensions.
Function    DTSA    TSA    STSA    MTSA    EST-TSA    fb-TSA    PSO    GWO    BOA    RSA    GA
Unimodal
functions
5.87 × 10 + 7 2.42 × 10 + 9 1.17 × 10 + 10 1.34 × 10 + 8 1.55 × 10 + 9 2.15 × 10 + 8 5.60 × 10 + 8 9.18 × 10 + 8 1.08 × 10 + 10 7.88 × 10 + 9 3.90 × 10 + 9
7.81 × 10+45.33 × 10 + 10 4.67 × 10 + 11 4.47 × 10 + 8 6.23 × 10 + 10 1.02 × 10 + 9 1.09 × 10 + 10 9.34 × 10 + 10 3.12 × 10 + 11 2.79 × 10 + 11 2.16 × 10 + 11
2.63 × 10+43.46 × 10 + 5 8.09 × 10 + 5 2.14 × 10 + 5 2.85 × 10 + 5 3.23 × 10 + 5 4.46 × 10 + 5 3.42 × 10 + 5 3.23 × 10 + 5 3.08 × 10 + 5 2.85 × 10 + 5
Simple
multimodal
functions
7.81 × 10 + 2 9.36 × 10 + 3 1.59 × 10 + 5 9.20 × 10 + 2 1.12 × 10 + 4 1.05 × 10 + 3 2.27 × 10 + 3 1.11 × 10 + 4 1.07 × 10 + 5 8.06 × 10 + 4 5.35 × 10 + 4
5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2 5.21 × 10 + 2
6.48 × 10 + 2 7.42 × 10 + 2 7.63 × 10 + 2 6.59 × 10 + 2 7.38 × 10 + 2 6.83 × 10 + 2 7.08 × 10 + 2 7.08 × 10 + 2 7.57 × 10 + 2 7.58 × 10 + 2 7.50 × 10 + 2
7.00 × 10 + 2 1.17 × 10 + 3 4.98 × 10 + 3 7.06 × 10 + 2 1.29 × 10 + 3 7.11 × 10 + 2 7.85 × 10 + 2 1.65 × 10 + 3 3.83 × 10 + 3 3.47 × 10 + 3 2.92 × 10 + 3
9.55 × 10 + 2 1.89 × 10 + 3 2.57 × 10 + 3 1.17 × 10 + 3 1.94 × 10 + 3 1.19 × 10 + 3 1.34 × 10 + 3 1.58 × 10 + 3 2.15 × 10 + 3 2.27 × 10 + 3 2.02 × 10 + 3
1.10 × 10 + 3 2.11 × 10 + 3 3.01 × 10 + 3 1.79 × 10 + 3 2.21 × 10 + 3 1.52 × 10 + 3 1.85 × 10 + 3 1.70 × 10 + 3 2.39 × 10 + 3 2.38 × 10 + 3 2.26 × 10 + 3
7.42 × 10 + 3 2.99 × 10 + 4 3.29 × 10 + 4 1.71 × 10 + 4 2.93 × 10 + 4 1.90 × 10 + 4 1.77 × 10 + 4 2.01 × 10 + 4 3.30 × 10 + 4 3.07 × 10 + 4 3.08 × 10 + 4
1.61 × 10+43.21 × 10 + 4 3.28 × 10 + 4 3.19 × 10 + 4 2.94 × 10 + 4 3.20 × 10 + 4 3.12 × 10 + 4 2.78 × 10 + 4 3.26 × 10 + 4 3.14 × 10 + 4 3.05 × 10 + 4
1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.20 × 10 + 3 1.21 × 10 + 3 1.20 × 10 + 3
1.30 × 10 + 3 1.30 × 10 + 3 1.31 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.30 × 10 + 3 1.31 × 10 + 3 1.31 × 10 + 3 1.31 × 10 + 3 1.31 × 10 + 3
1.40 × 10 + 3 1.54 × 10 + 3 2.63 × 10 + 3 1.40 × 10 + 3 1.58 × 10 + 3 1.40 × 10 + 3 1.43 × 10 + 3 1.65 × 10 + 3 2.34 × 10 + 3 2.22 × 10 + 3 2.05 × 10 + 3
1.55 × 10 + 3 6.26 × 10 + 5 2.15 × 10 + 8 1.61 × 10 + 3 4.66 × 10 + 5 1.77 × 10 + 3 1.62 × 10 + 4 2.75 × 10 + 5 3.05 × 10 + 7 1.34 × 10 + 7 4.49 × 10 + 6
1.65 × 10 + 3 1.65 × 10 + 3 1.65 × 10 + 3 1.65 × 10 + 3 1.65 × 10 + 3 1.65 × 10 + 3 1.65 × 10 + 3 1.65 × 10 + 3 1.65 × 10 + 3 1.65 × 10 + 3 1.65 × 10 + 3
Hybrid
functions
5.02 × 10+62.22 × 10 + 8 1.15 × 10 + 9 1.18 × 10 + 7 1.68 × 10 + 8 2.55 × 10 + 7 8.84 × 10 + 7 1.07 × 10 + 8 2.05 × 10 + 9 1.27 × 10 + 9 7.33 × 10 + 8
2.45 × 10 + 3 2.98 × 10 + 3 2.16 × 10 + 10 4.69 × 10 + 4 5.14 × 10+42.99 × 10 + 3 2.09 × 10 + 7 2.35 × 10 + 9 4.47 × 10 + 10 3.22 × 10 + 10 2.16 × 10 + 10
2.00 × 10 + 3 2.15 × 10 + 3 6.32 × 10 + 3 2.01 × 10 + 3 2.24 × 10 + 3 2.03 × 10 + 3 2.15 × 10 + 3 2.57 × 10 + 3 1.29 × 10 + 4 8.82 × 10 + 3 5.74 × 10 + 3
5.05 × 10+42.70 × 10 + 5 4.24 × 10 + 6 1.14 × 10 + 5 2.35 × 10 + 5 1.89 × 10 + 5 2.96 × 10 + 5 2.89 × 10 + 5 1.40 × 10 + 6 8.38 × 10 + 5 4.30 × 10 + 5
3.33 × 10+68.85 × 10 + 7 5.02 × 10 + 8 6.83 × 10 + 6 5.72 × 10 + 7 9.40 × 10 + 6 3.75 × 10 + 7 4.44 × 10 + 7 7.12 × 10 + 8 3.97 × 10 + 8 1.83 × 10 + 8
4.22 × 10 + 3 7.12 × 10 + 3 1.07 × 10 + 4 6.66 × 10 + 3 6.75 × 10 + 3 6.68 × 10 + 3 6.24 × 10 + 3 5.96 × 10 + 3 4.27 × 10 + 5 1.45 × 10 + 5 1.88 × 10 + 4
2.65 × 10 + 3 2.77 × 10 + 3 5.89 × 10 + 3 2.66 × 10 + 3 2.50 × 10 + 3 2.66 × 10 + 3 2.72 × 10 + 3 3.12 × 10 + 3 2.50 × 10 + 3 2.50 × 10 + 3 3.14 × 10 + 3
Composition
functions
2.79 × 10 + 3 3.00 × 10 + 3 3.93 × 10 + 3 2.78 × 10 + 3 2.60 × 10 + 3 2.82 × 10 + 3 2.92 × 10 + 3 2.60 × 10 + 3 2.60 × 10 + 3 2.60 × 10 + 3 2.75 × 10 + 3
2.78 × 10 + 3 3.06 × 10 + 3 3.85 × 10 + 3 2.77 × 10 + 3 2.70 × 10 + 3 2.81 × 10 + 3 2.86 × 10 + 3 2.74 × 10 + 3 2.70 × 10 + 3 2.70 × 10 + 3 2.73 × 10 + 3
2.80 × 10 + 3 2.99 × 10 + 3 2.72 × 10 + 3 2.80 × 10 + 3 2.80 × 10 + 3 2.81 × 10 + 3 2.85 × 10 + 3 2.86 × 10 + 3 2.80 × 10 + 3 2.80 × 10 + 3 2.80 × 10 + 3
4.02 × 10 + 3 6.42 × 10 + 3 7.46 × 10 + 3 4.45 × 10 + 3 6.45 × 10 + 3 4.93 × 10 + 3 5.79 × 10 + 3 5.94 × 10 + 3 7.99 × 10 + 3 7.57 × 10 + 3 8.41 × 10 + 3
6.49 × 10 + 3 2.13 × 10 + 4 2.30 × 10 + 4 8.19 × 10 + 3 2.18 × 10 + 4 1.02 × 10 + 4 1.43 × 10 + 4 5.37 × 10 + 3 3.00 × 10 + 4 2.29 × 10 + 4 3.56 × 10 + 4
5.73 × 10 + 3 2.50 × 10 + 7 1.43 × 10 + 9 1.63 × 10 + 5 4.53 × 10 + 7 2.73 × 10 + 4 4.56 × 10 + 6 3.14 × 10 + 3 3.10 × 10 + 3 3.10 × 10 + 3 2.13 × 10 + 9
5.56 × 10 + 4 2.65 × 10 + 6 5.64 × 10 + 7 1.46 × 10 + 5 4.23 × 10 + 6 1.97 × 10 + 5 8.55 × 10 + 5 6.14 × 10 + 3 3.36 × 10 + 7 3.63 × 10 + 7 1.28 × 10 + 8
Ranking first    23    0    1    0    2    0    0    2    3    4    0
Table 10. Results of Wilcoxon's test for DTSA and other algorithms.
α = 0.1    TSA             STSA            MTSA            EST-TSA         fb-TSA          PSO             GWO             BOA             RSA             GA
D = 30     1.13 × 10^−5    3.18 × 10^−6    4.68 × 10^−3    1.60 × 10^−4    1.29 × 10^−3    1.73 × 10^−6    8.31 × 10^−4    1.24 × 10^−5    1.24 × 10^−5    3.52 × 10^−6
           TRUE            TRUE            TRUE            TRUE            TRUE            TRUE            TRUE            TRUE            TRUE            TRUE
D = 50     4.29 × 10^−6    3.18 × 10^−6    1.36 × 10^−5    4.45 × 10^−5    2.11 × 10^−3    1.73 × 10^−6    5.71 × 10^−4    5.31 × 10^−5    4.86 × 10^−5    6.34 × 10^−6
           TRUE            TRUE            TRUE            TRUE            TRUE            TRUE            TRUE            TRUE