Article

Personalized-Template-Guided Intelligent Evolutionary Algorithm

1 School of Information Science and Technology, Jinan University, Guangzhou 510632, China
2 Engineering Research Center of Trustworthy AI, Ministry of Education, Jinan University, Guangzhou 510632, China
3 School of Mathematics and Statistics, Changchun University of Technology, Changchun 130012, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(15), 8642; https://doi.org/10.3390/app15158642
Submission received: 5 July 2025 / Revised: 27 July 2025 / Accepted: 30 July 2025 / Published: 4 August 2025
(This article belongs to the Special Issue Novel Research and Applications on Optimization Algorithms)

Abstract

Existing heuristic algorithms are built around their sources of inspiration rather than around optimization principles, and they make limited use of historical information, which can reduce the efficiency, accuracy, and stability of the search. To address this problem, a personalized-template-guided intelligent evolutionary algorithm named PTG is proposed. The core idea of PTG is to generate personalized templates that guide particle optimization. We find that high-quality templates for guiding the exploration and exploitation of particles can be generated by using the information of the population particles recorded while the optimal value remains unchanged, knowledge of changes in the population distribution, and the dimensional distribution properties of the particles themselves. Through an ablation study and comparative experiments on the challenging CEC2022 and CEC2005 test functions, we validate the effectiveness of our method and show that the stability and accuracy of the solutions obtained by PTG are superior to those of competing algorithms. Finally, we further verify the effectiveness of PTG on four engineering problems.

1. Introduction

Traditional optimization methods, including linear programming, quadratic programming, nonlinear programming, penalty function methods, and Newton's method, rely on precise mathematical models of the problems they address. However, when dealing with large-scale non-convex optimization problems, these conventional approaches not only demand substantial computational resources but also tend to converge to local optima [1]. In contrast, heuristic algorithms leverage historical information while incorporating stochastic elements for exploration. This not only reduces the computational burden but also enhances the ability to escape local optima, making heuristics particularly suitable for complex problems where exact solutions are difficult to obtain, notably NP-hard problems [2]. Heuristic algorithms have been widely used to solve diverse problems, such as portfolio optimization [3], path planning [4], electrical system control [5], service requests in the industrial internet of things [6], water resources management [7], urban planning [8], wireless sensor point coverage [9], etc.
Most existing heuristic algorithms can be classified by their inspiration sources into four categories: evolutionary algorithms (EA), physics-based methods (PM), human behavior models (HM), and swarm intelligence (SI).
(1) Evolutionary algorithms. Evolutionary algorithms draw their inspiration from the theory of biological evolution; their most notable feature is the simulation of biological inheritance and variation to produce new individuals. The Genetic Algorithm (GA) [10] explores the solution space by recombining the genetic information of parents through crossover. It also uses mutation to explore areas not covered by crossover in the early stage and to refine solutions in the later stage. The Differential Evolution (DE) algorithm [11] differs from GA in its mutation strategy: it scales the difference between two randomly selected individuals by a scaling factor and adds it to a third individual to obtain a mutant, which then enters the crossover stage. In crossover, each dimension is replaced by the mutated dimension value according to a user-defined crossover probability, which helps retain good dimension values and combine good genetic characteristics into new individuals. The Genetic Programming (GP) algorithm [12] follows the same algorithmic flow as GA, but the individuals GP operates on are usually programs in the form of tree structures, and its main purpose is to generate or optimize computer programs. Other evolution-based algorithms include the Evolutionary Programming (EP) algorithm [13], Evolution Strategy (ES) [14], and the Biogeography-Based Optimizer (BBO) [15].
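As an illustration of the DE/rand/1/bin scheme described above, the following Python sketch builds one trial vector (the function name and the defaults F = 0.5, CR = 0.9 are our own illustrative choices, not prescribed by the cited papers):

```python
import random

def de_trial_vector(pop, i, F=0.5, CR=0.9):
    """Build one DE/rand/1/bin trial vector for individual i.

    pop is a list of real-valued vectors; F scales the difference
    vector, CR is the user-defined crossover probability.
    """
    n, dim = len(pop), len(pop[i])
    # pick three distinct individuals, all different from i
    r1, r2, r3 = random.sample([k for k in range(n) if k != i], 3)
    # mutation: base individual shifted by the scaled difference vector
    mutant = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(dim)]
    # binomial crossover: dimension j_rand always comes from the mutant
    j_rand = random.randrange(dim)
    return [mutant[j] if (random.random() < CR or j == j_rand) else pop[i][j]
            for j in range(dim)]
```

The forced dimension `j_rand` guarantees that the trial vector differs from the parent in at least one position even when CR is small.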
(2) Physics-based methods. These methods use physical phenomena to model movement through the solution space and use physical principles to balance exploration and exploitation during the search. For example, the Gravitational Search Algorithm (GSA) [16] uses the fitness value of a particle to represent its mass, and the velocity governing the particle position update is calculated via Newton's law of universal gravitation and Newton's second law of motion. The Simulated Annealing (SA) algorithm [17] imitates the annealing process in physics, in which slowly lowering the temperature helps a material reach a more stable structure; in optimization, the balance between exploration and exploitation is controlled by regulating the diversity of the offspring solutions through the cooling schedule. Big-Bang Big-Crunch (BB-BC) optimization [18] works by simulating the expansion and contraction of the universe, that is, by controlling the distance between particles: in the big-bang stage, a global search is carried out by increasing the distance between particles, and in the big-crunch stage, a local search is carried out by reducing it. The electromagnetism-like algorithm [19] searches the solution space by simulating the attraction and repulsion between charged particles; it uses the fitness value to represent charge intensity and updates particle positions through Coulomb's law, Newton's second law, and the kinematic equations. Other examples include Quantum-Inspired metaheuristic algorithms (QI) [20], Central Force Optimization (CFO) [21], the Charged System Search algorithm (CSS) [22], the Ray Optimization (RO) algorithm [23], and so on.
(3) Human behavior models. In the Teaching-Learning-Based Optimization algorithm (TLBO) [24], each particle represents a student, and particle positions are updated by simulating two phases, teaching and learning. In the teaching phase, each particle learns from the optimal and average positions in the population; in the learning phase, particles learn from each other, with higher-fitness particles guiding lower-fitness ones. In the Imperialist Competitive Algorithm (ICA) [25], each particle represents a country, and the particles are divided into imperialist countries and colonies according to fitness. Imperialist competition determines the probability of a colony being taken over by another empire, and colonies update their solutions according to the positions of their imperialist countries during assimilation. In the Educational Competition Optimization (ECO) algorithm [26], schools represent the evolution of the population and students represent the particles. The algorithm has three stages: in the primary-school stage, a school represents the average position of the population; in the middle-school stage, a school represents the average and optimal positions, and students move toward their nearby school; in the high-school stage, schools represent the average, optimal, and worst positions, and students move toward the optimal position. In the Human Learning Optimization (HLO) algorithm [27], each dimension value of each particle is produced by one of three learning operators selected by probability: the random learning operator generates random values to simulate broad human learning, the individual learning operator stores and reuses the best individual experience, and the social learning operator generates new values through knowledge sharing across the population.
HM categories include Driving Training-Based Optimization (DTBO) [28], Supply–Demand-Based Optimization (SDO) [29], Student Psychology Based Optimization (SPBO) algorithm [30], Poor and Rich Optimization (PRO) algorithm [31], Fans Optimization (FO) [32], Political Optimizer (PO) [33], and so on.
(4) Swarm intelligence. The most widely used algorithm in this category is Particle Swarm Optimization (PSO) [34]. In PSO, each bird represents the position of a solution, and individual positions are updated by simulating the foraging behavior of a flock, recording the historical best positions found by each bird and by the whole flock. The Moth-Flame Optimization algorithm (MFO) [35] simulates the way moths locate and approach light sources at night. The Whale Optimization Algorithm (WOA) [36] treats each candidate solution as a humpback whale and updates positions through the random search, encircling, and spiral movements whales use when hunting. In the Harris Hawks Optimization algorithm (HHO) [37], each hawk represents a potential solution, and positions are updated by simulating the besieging, scattered search, and direct attack of hawks during hunting. In the Coati Optimization Algorithm (COA) [38], each coati represents a candidate solution, and the solution space is searched effectively by simulating coatis hunting and escaping from predators. The Tunicate Swarm Algorithm (TSA) [39] uses tunicates to represent candidate solutions, and its position-updating strategy draws on the jet propulsion and swarm cooperation that tunicate swarms exhibit while navigating and foraging. In the Reptile Search Algorithm (RSA) [40], each reptile represents the position of a solution, and the algorithm performs global and local search by mimicking the two phases of reptile hunting. SI algorithms also include the Zebra Optimization Algorithm (ZOA) [41], the Artificial Bee Colony (ABC) algorithm [42], the Spotted Hyena Optimizer (SHO) [43], the Hermit Crab Optimization Algorithm (HCOA) [44], the Grasshopper Optimization Algorithm (GOA) [45], and so on.
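For reference, the canonical PSO update described above can be written in a few lines of Python (the function name and the coefficient defaults w = 0.7, c1 = c2 = 1.5 are illustrative choices, not values prescribed by [34]):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update for a single particle.

    x, v, pbest are this particle's position, velocity, and personal
    best; gbest is the swarm's best-known position.  w is the inertia
    weight; c1, c2 are the cognitive and social acceleration factors.
    """
    new_v = [w * v[j]
             + c1 * random.random() * (pbest[j] - x[j])   # pull toward own best
             + c2 * random.random() * (gbest[j] - x[j])   # pull toward swarm best
             for j in range(len(x))]
    new_x = [x[j] + new_v[j] for j in range(len(x))]
    return new_x, new_v
```

Note that when the particle already sits at both its personal and the global best with zero velocity, the update leaves it in place, which is the fixed point of the scheme.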
However, existing heuristic algorithms derive their optimization mechanisms from their sources of inspiration rather than building an optimization paradigm from optimization principles, which limits both the explanatory power and the stability of these algorithms. They also make poor use of information about historical stagnation and the distribution of the particle itself, so they are prone to missing opportunities to evolve toward good dimension values and find it difficult to converge to the global optimum.
Based on the above problems, we establish a paradigm that balances exploration and exploitation. Instead of letting particles interact directly with the population and historical solutions through step-size adjustments, we aim to make the most of historical information: we first create a template based on particle characteristics and population history, and then use the template to guide particle optimization. The whole algorithm requires no user-defined parameters.
The main contributions of this paper are as follows:
  • We propose a PTG algorithm based on an optimization principle. In the template generation stage, a personalized template containing key exploration areas is generated. In the template guidance stage, the particle exploits key areas under the guidance of the template. By gradually narrowing and locking the optimal value area, this paradigm solves the difficult problems of setting the search step size and defining parameters.
  • We find that historical stagnation information can enhance the exploitation ability of the particle. So we extract this information as the history-retrace template’s base points. Historical stagnation information refers to the population particle information when the optimal value of the population is unchanged. The timely use of this information can guide particles to mine in the historical optimal solution area and avoid missing the global optimal value.
  • From the perspective of historical distribution information, the template interval expansion strategy is proposed to extract the distribution change knowledge of population offspring, expand the particle exploration space, and enhance the particle’s exploration ability. We find that the particle’s own dimension distribution attribute can provide effective information to speed up particle optimization, so we propose the personalized template generation strategy based on particle dimensional distribution to generate a personalized template for each particle.
The remainder of this paper is organized as follows. In Section 1, the existing heuristic algorithms are introduced. In Section 2, we present the proposed PTG algorithm. In Section 3, we conduct experiments and analyze the results. In Section 4, we summarize the paper. To describe newly generated individuals, population members, and populations concisely and vividly, this paper borrows the terminology of the PSO algorithm and refers to individuals as particles; however, the method proposed in this paper is unrelated to PSO.

2. Methods

A personalized-template-guided intelligent evolutionary algorithm includes the template generation stage and the template guidance stage. The purpose of the template generation stage is to generate a guiding particle for the current particle, that is, a personalized template. Firstly, each particle randomly selects whether to generate a value-driven template or a history-retrace template to increase the diversity of exploration. Then the selection strategy of the template base point set is applied to determine the base point set of the template. Then, based on the template interval expansion strategy, the template interval surrounded by basic points is expanded. Finally, the personalized template generation strategy based on particle dimensional distribution is adopted to generate a personalized template in the corresponding interval dimension. In the template guidance stage, the template-guided knowledge transfer strategy is used to update the position of the particle. See Algorithm 1 for the specific process.
The detailed flowchart of the algorithm is shown in Figure 1. First, the population and parameter values are initialized and the fitness values are calculated. The template interval expansion strategy is used to obtain the value Val, and the loop is then entered. The fd value of the i-th particle is calculated, and a random integer L decides whether to generate the history-retrace template or the value-driven template. When L_i = 1, the left half of the flow is entered to generate a history-retrace template: the selection strategy of the template base point set computes the base point A_i of the history-retrace template by Equation (1). Then M_i is calculated by Equation (12), which determines whether to use the personalized template generation strategy based on particle dimensional distribution. When M_i = 1, the dimension to be learned is obtained by Equation (13), and the history-retrace template a_i is generated by Equations (14)-(19) in combination with the interval expansion value Val. When M_i = 2, only the template generation strategy is used, and a_i is generated by Equations (14)-(19) together with Val. When L_i = 2, the right half of the flow is entered to generate a value-driven template: the base point GP_i is computed by Equation (2), and M_i is calculated by Equation (12). When M_i = 1, the dimension to be learned is obtained by Equation (13), and the value-driven template v_i is generated by combining the interval spacing value with Equations (20) and (21); when M_i = 2, v_i is generated directly from the interval spacing value and Equations (20) and (21). After the template is generated, the algorithm enters the template guidance stage. First, a random number r_7 is drawn. When r_7 = 1, a new solution X_i is obtained by Equations (22) and (23); when r_7 = 2, a new solution X_i is obtained by Equation (24). The M and r_5 values of the i-th particle are then updated. After N new solutions have been generated for the N particles, the fitness values of the population are calculated, and the values of β and Val are updated. While t < T, the above cycle continues; otherwise the loop ends, and the algorithm returns the optimal solution value and its position.
Algorithm 1: Pseudocode of PTG
Input: population size N, number of iterations T
Output: the best solution of the objective function
 1: β ← 0
 2: Calculate val based on Equation (7)
 3: Calculate the population fitness values, sort, and record the optimal value
 4: while t < T do
 5:     for i = 1 to N do
 6:         Calculate fd_i based on Equations (8)–(11)
 7:         L_i ← Random({1, 2})
 8:         r_7 ← Random({1, 2})
 9:         for j = 1 to D do
10:             if L_i = 1 then
11:                 Calculate A_i based on Equation (1)
12:                 Calculate M_i^L based on Equation (12)
13:                 if M_i^L = 1 then
14:                     Calculate r_5i^L based on Equation (13)
15:                 end if
16:                 Generate a_i^j based on Equations (14)–(19)
17:             end if
18:             if L_i = 2 then
19:                 Calculate GP_i based on Equation (2)
20:                 Calculate M_i^L based on Equation (12)
21:                 if M_i^L = 1 then
22:                     Calculate r_5i^L based on Equation (13)
23:                 end if
24:                 Generate v_i^j based on Equations (20) and (21)
25:             end if
26:             if r_7 = 1 then
27:                 Generate a new particle based on Equations (22) and (23)
28:             end if
29:             if r_7 = 2 then
30:                 Generate a new particle based on Equation (24)
31:             end if
32:         end for
33:         Record M_i^L, r_5i^L
34:     end for
35:     Calculate and rank population fitness values
36:     Update the spacing value val based on Equations (3)–(7)
37:     Update β
38: end while
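To make the control flow of Algorithm 1 concrete, the following Python sketch reproduces only its overall two-stage structure (template generation, then template-guided update with elite selection). The template-generation details of Equations (1)-(21) are replaced by a simple stand-in, uniform sampling in an interval of shrinking width around a base point, so this is a structural illustration, not the full PTG method:

```python
import random

def ptg_skeleton(obj, dim, n=20, iters=100):
    """Structural sketch of Algorithm 1 (NOT the full PTG method).

    obj: objective to minimize over [0, 1]^dim.  The personalized
    template of Equations (1)-(21) is replaced by a stand-in: each
    dimension is sampled near a base point (gbest or the particle
    itself) within a radius `val` that shrinks over the iterations.
    """
    pop = [[random.random() for _ in range(dim)] for _ in range(n)]
    fit = [obj(x) for x in pop]
    best = min(range(n), key=lambda i: fit[i])
    gbest, gfit = pop[best][:], fit[best]
    for t in range(iters):
        val = 0.5 * (1 - t / iters)          # stand-in interval spacing
        for i in range(n):
            base = gbest if random.random() < 0.5 else pop[i]
            template = [min(1.0, max(0.0, base[j] + random.uniform(-val, val)))
                        for j in range(dim)]
            # template guidance stage: move the particle toward its template
            cand = [pop[i][j] + random.random() * (template[j] - pop[i][j])
                    for j in range(dim)]
            f = obj(cand)
            if f < fit[i]:                   # greedy (elite) selection
                pop[i], fit[i] = cand, f
                if f < gfit:
                    gbest, gfit = cand[:], f
    return gbest, gfit
```

Because the selection step is greedy, the best fitness is non-increasing over iterations, mirroring the elite selection used by PTG.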

2.1. Template Generation Stage

Firstly, an example illustrates the template generation process in brief. Suppose a particle has two dimensions. Figure 2 shows the process of generating a personalized template for this particle; the same applies to the other particles. For specific details, please refer to the rest of this section. The pink part on the left of the figure represents the generation process of the history-retrace template, while the part on the right shows the generation process of the value-driven template. The length of the yellow square is the interval expansion value Val obtained through the interval expansion strategy. The two small dots at the ends of the black line segment represent the upper and lower bounds of the particle dimension values. j denotes the particle dimension, a the generated history-retrace template, and v the generated value-driven template.
If the particle chooses to generate a history-retrace template, there are two possible cases for the selection of template base points. When h < 2, there is only one template base point; when h ≥ 2, there are two. In the case h ≥ 2, the selection strategy of the template base point set for the history-retrace template applies, yielding two base points in each dimension of the particle. Next, if M = 1, the personalized template generation strategy based on particle dimensional distribution is implemented. Suppose the learning object selected by the particle is dimension 2; then, starting from the template base point of dimension j = 2, the interval is expanded by Val, and a value drawn within the dashed-box range is used as the value of template dimensions 1 and 2. If M = 2, the plain template generation strategy is executed: based on the base points of particle dimensions 1 and 2, the intervals are expanded by Val, and new values are generated within the respective ranges to form the template.
If the particle selects to generate a value-driven template, its template has only one base point. Under the base point selection strategy, one base point is obtained in each of the particle's two dimensions. The remaining steps mirror those of the history-retrace template, although the specific principles and formulas of the two differ; for details, please refer to the rest of this section.

2.1.1. Selection Strategy of Template Base Point Set

The choice of the base point set is very important because it determines which area the template guides the particle to explore. In particular, the algorithm stores a random particle from the merged population whenever a generation's optimal value is unchanged, which helps the algorithm revisit previous solutions and extract historical stagnation information from them. In each step, a particle randomly chooses to generate a history-retrace template or a value-driven template, which increases the diversity of exploration. To balance exploration and exploitation, the corresponding base point selection strategy is chosen under the guidance of the optimal value signal β. This realizes collaborative exploration of the population: some particles revisit previously stagnant regions, while others exploit the neighborhood of the optimal value.
Specifically, when the signal indicates that the previous generation's optimal value changed, the base points of the history-retrace template are the random particles stored from the merged population during the last two generations in which the optimal value was unchanged, while the value-driven template randomly selects one of the top three elite particles of the parent population as its base point, realizing collaborative exploration of the population: some particles revisit the regions where the search previously stalled, and others exploit near the optimum, jointly promoting further improvement of the solution. When the signal indicates that the previous generation's optimal value is unchanged, the history-retrace template's base points are selected from the archived historical particles, and the value-driven template's base point is a random particle of the parent population, which expands the exploration scope, helps particles escape local optima, and explores new regions.
$$A_i = \begin{cases} \{C_{t_1}, C_{t_2}\}, & \text{if } \beta = 0 \text{ and } h \ge 2 \\ \{Q_{h-1}, Q_h\}, & \text{if } \beta = 1 \text{ and } h \ge 2 \\ \{gbest\}, & \text{if } \beta = 1 \text{ and } h < 2 \\ \{gbest\}, & \text{if } \beta = 0 \text{ and } h < 2 \end{cases} \quad (1)$$
where $A_i$ is the base point set of the history-retrace template of the i-th particle. In each iteration, one of the top N elite individuals of the merged population is randomly selected and stored in C. t is the current iteration number, $t = 1, 2, \ldots, T$, and T is the total number of iterations. $t_1$ and $t_2$ are random integers in $[1, t]$, and $C_{t_1}$ denotes the $t_1$-th individual in C. When the optimal value of the t-th iteration population remains unchanged, a random individual of the t-th iteration merged population is stored in Q. h is the number of individuals stored in Q, and N is the population size. $gbest$ is the position of the optimal solution of the population. $\beta$ is the optimal value signal: when the optimal value of the population changes, $\beta$ is 1; otherwise it is 0.
$$GP_i = \begin{cases} P_{r_4}, & \text{if } \beta = 1 \\ P_{r_1}, & \text{if } \beta = 0 \end{cases} \quad (2)$$
where $GP_i$ is the base point set of the value-driven template of the i-th particle, P is the parent population, $r_4$ is a random integer in $[1, 3]$, and $r_1$ is a random integer in $[1, N]$.
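A minimal Python sketch of the two selection rules in Equations (1) and (2) follows. The function name and data layout are our own assumptions: Q (stagnation records), C (archived elites), and the fitness-sorted parent list are assumed to be maintained elsewhere in the algorithm:

```python
import random

def select_base_points(beta, h, Q, C, gbest, parents):
    """Base-point selection following Equations (1) and (2).

    beta: 1 if the population's optimal value changed last generation,
    else 0.  Q stores particles archived while the optimal value was
    stagnant (h = len(Q)); C archives one elite individual per past
    iteration; parents is the parent population sorted by fitness.
    Returns (A, GP): the history-retrace and value-driven base points.
    """
    if h >= 2:
        if beta == 1:
            A = (Q[h - 2], Q[h - 1])         # the last two stagnation records
        else:
            t1 = random.randrange(len(C))
            t2 = random.randrange(len(C))
            A = (C[t1], C[t2])               # two randomly archived elites
    else:
        A = (gbest,)                         # fall back on the global best
    if beta == 1:
        GP = parents[random.randrange(3)]    # one of the top-3 parents
    else:
        GP = parents[random.randrange(len(parents))]
    return A, GP
```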

2.1.2. Template Interval Expansion Strategy

The selection strategy of the template base point set relies only on historical information. If particles are guided to explore only the template interval enclosed by the base points they select, the algorithm easily falls into local optima, especially in the late convergence stage. To solve this problem, we use Equations (3)-(7) to calculate the interval value $Val^j$.
When the optimal value of the population changes, the offspring, parent, and merged populations all contain valid information about the distribution of optimal values. Note that $V_c^j$ is the spacing of the elite population obtained after the elite selection strategy, which lacks diversity compared with $V_{op}^j$. We therefore set the parameter θ to balance the diversity and convergence of the interval value. In the early stage, when the optimal-value region is uncertain, the weight of $V_c^j$ decreases gradually while that of $V_{op}^j$ increases, so the randomness of the template interval expansion value grows with the iterations and helps particles escape local optima. After the midpoint of the run, the weight of $V_{op}^j$ gradually decreases and that of $V_c^j$ increases, so the randomness of the expansion value shrinks and the particles gradually converge.
When the optimal value of the population is unchanged, the merged population may contain new distribution knowledge, because some offspring particles may be better than some parent particles. We therefore set the interval expansion range to the absolute difference between two random particles in the merged population, which helps particles escape local optima.
$$Val^j = \beta \times \left( V_{op}^j \times \theta + V_c^j \times (1 - \theta) \right) + (1 - \beta) \times V_{cr}^j \quad (3)$$
$$V_{op}^j = \left| \frac{2}{N} \sum_{i=1}^{N/2} O_i^j - \frac{2}{N} \sum_{i=1}^{N/2} P_i^j \right| \quad (4)$$
$$V_c^j = \left| \frac{2}{N} \sum_{i=1}^{N/2} Ct_i^j - \frac{2}{N} \sum_{i=N/2+1}^{N} Ct_i^j \right| \quad (5)$$
$$\theta = \sin\!\left( \frac{t \pi}{T} \right) \quad (6)$$
$$V_{cr}^j = \left| Ct_{r_1}^j - Ct_{r_2}^j \right| \quad (7)$$
where $Val^j$ is the interval value in the j-th dimension at the current iteration, $j = 1, 2, \ldots, D$, and D is the maximum dimension. $O_i^j$ is the j-th dimension value of the i-th individual in the offspring population, and $P_i^j$ is the j-th dimension value of the i-th individual in the parent population. $V_{op}^j$ is the mean difference between the offspring and parent populations in the j-th dimension. $Ct$ is the collection of the top N individuals in the current merged population, and $V_c^j$ is the difference between the averages of the top 50% and bottom 50% of individuals in $Ct$. θ is the adjustment parameter. $V_{cr}^j$ is the absolute difference between two random individuals of $Ct$ in the j-th dimension, where $r_1$ and $r_2$ are distinct random integers in $[1, N]$.
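Under the same assumptions about data layout (fitness-sorted lists of equal-length vectors), Equations (3)-(7) can be sketched as follows; the function name is our own:

```python
import math
import random

def interval_value(beta, t, T, offspring, parents, Ct):
    """Template interval expansion value Val_j (Equations (3)-(7)).

    offspring, parents: the child and parent populations, fitness-sorted;
    Ct: the top-N individuals of the merged population, fitness-sorted.
    """
    n, dim = len(Ct), len(Ct[0])
    half = n // 2
    theta = math.sin(t * math.pi / T)                            # Equation (6)
    r1, r2 = random.sample(range(n), 2)                          # r1 != r2
    val = []
    for j in range(dim):
        v_op = abs(sum(o[j] for o in offspring[:half]) / half
                   - sum(p[j] for p in parents[:half]) / half)   # Equation (4)
        v_c = abs(sum(c[j] for c in Ct[:half]) / half
                  - sum(c[j] for c in Ct[half:]) / (n - half))   # Equation (5)
        v_cr = abs(Ct[r1][j] - Ct[r2][j])                        # Equation (7)
        val.append(beta * (v_op * theta + v_c * (1 - theta))
                   + (1 - beta) * v_cr)                          # Equation (3)
    return val
```

At t = T/2 the weight θ reaches 1, so with β = 1 the interval value reduces to the offspring-versus-parent term, matching the schedule described above.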

2.1.3. Personalized Template Generation Strategy Based on Particle Dimensional Distribution

There are three kinds of dimensional distribution of a particle: the first is concentrated distribution, the second is similar to normal distribution, and the third is random distribution. In the first two cases, just finding the mean point of the dimensional distribution can help the particle converge quickly. Therefore, we put forward the particle dimensional concentration degree to judge the particle dimensional distribution. For the particle that meets one of the first two conditions, all dimension values of the personalized template are the same, so as to guide the particle to mine near this dimension value, and then get the mean point of particle dimension distribution through the elite selection strategy.
We use the parameter δ to regulate the value of $fd_i$. In the initial stage of optimization, the particle dimension distribution is not yet stable, so $fd_i$ should be small. As the iterations proceed, under the elite selection strategy, the dimension distributions of the particles gradually stabilize; at this point, a personalized guidance template can be built for each particle according to its dimensional distribution to help mine its potential knowledge. In the later stage, the dimension distribution of each particle is already very stable, and each dimension should be optimized and converged separately, so the value of $fd_i$ should again be reduced.
$$fd_i = \begin{cases} \delta, & \text{if } w_i = 0 \\ \left( 1 - \dfrac{s_i}{w_i} \right) \times \delta, & \text{otherwise} \end{cases} \quad (8)$$
$$\delta = \sin\!\left( \frac{t \pi}{T} \right) \quad (9)$$
$$s_i = \left| \frac{1}{D} \sum_{j=1}^{D} x_i^j - \frac{2}{D} \sum_{j=1}^{D/2} x_{ir}^j \right| \quad (10)$$
$$w_i = \left| \max(x_i) - \min(x_i) \right| \quad (11)$$
where $fd_i$ is the dimension concentration of the i-th particle $x_i$, δ is the concentration control parameter, and $x_{ir}$ is a new particle obtained by shuffling the dimension order of $x_i$.
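Equations (8)-(11) can be sketched as follows (the function name is ours; note that $s_i \le w_i$ always holds, since both means lie inside the particle's dimension range, so $fd_i$ stays in $[0, \delta]$):

```python
import math
import random

def dimension_concentration(x, t, T):
    """Particle dimensional concentration fd_i (Equations (8)-(11)).

    Compares the particle's overall dimension mean with the mean of
    half of a randomly shuffled copy; a small gap relative to the
    dimension range w means the dimension values are concentrated.
    """
    d = len(x)
    delta = math.sin(t * math.pi / T)     # concentration control parameter
    w = max(x) - min(x)                   # spread of the dimension values
    if w == 0:
        return delta                      # all dimensions identical
    shuffled = x[:]
    random.shuffle(shuffled)
    s = abs(sum(x) / d - sum(shuffled[:d // 2]) / (d // 2))
    return (1 - s / w) * delta
```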
When $fd_i$ is large, a template with identical dimension values is more likely to be generated. In addition, with a certain probability, the dimension chosen for knowledge guidance is inherited from the choice made by the corresponding parent particle for the same template type; if the parent's historical selection is empty, the dimension is generated randomly. See Equations (12) and (13) below for the specific dimension selection.
$$M_i^L = \begin{cases} 1, & \text{if } r < fd_i \text{ or } \left( \beta = 1 \text{ and } r < (t/T)^2 \text{ and } PM_i^L = 1 \right) \\ 2, & \text{if } r \ge fd_i \text{ or } \left( \beta = 1 \text{ and } r < (t/T)^2 \text{ and } PM_i^L = 2 \right) \end{cases} \quad (12)$$
where $M_i^L$ represents the dimension learning mode selected when the i-th particle generates template L. If $M_i^L = 1$, a template with identical dimension values is generated. In this expression, the second condition takes precedence over the first. r is a random number between 0 and 1. $PM_i^L$ represents the dimension learning mode selected when the i-th parent particle generated template L. L is a random integer equal to 1 or 2: the history-retrace template is selected when L = 1, and the value-driven template when L = 2.
$$r_{5i}^L = \begin{cases} pr_{5i}^L, & \text{if } \beta = 1 \text{ and } r < (t/T)^2 \text{ and } pr_{5i}^L \in \mathbb{R} \\ r_8, & \text{otherwise} \end{cases} \quad (13)$$
where $r_{5i}^L$ represents the dimension selected by the i-th particle when generating the type-L template, $pr_{5i}^L$ represents the dimension selected by the i-th parent particle when generating the type-L template, and $r_8$ is a random integer from 1 to D.
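A sketch of the selection in Equations (12) and (13) follows. The function signature is our own; `prev_mode` and `prev_dim` stand for $PM_i^L$ and $pr_{5i}^L$, with None denoting an empty history, and the inheritance test is given precedence as stated above:

```python
import random

def choose_learning_mode(fd, beta, t, T, prev_mode, prev_dim, D):
    """Dimension-learning mode M and learned dimension r5 (Eqs. (12)-(13)).

    With probability (t/T)^2 (when beta = 1 and a history exists) the
    particle inherits its parent's mode and dimension; otherwise mode 1
    ("same value in every dimension") is drawn with probability fd.
    """
    r = random.random()
    inherit = beta == 1 and r < (t / T) ** 2
    if inherit and prev_mode is not None:
        mode = prev_mode                  # inheritance takes precedence
    else:
        mode = 1 if r < fd else 2
    if mode == 1:
        if inherit and prev_dim is not None:
            dim = prev_dim               # inherit the learned dimension too
        else:
            dim = random.randrange(D)
    else:
        dim = None                       # mode 2: each dimension learned separately
    return mode, dim
```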
Generate a history-retrace template
$$a_i^j = \begin{cases} f1_i^{r_5} + \left( f2_i^{r_5} - f1_i^{r_5} \right) \times r, & \text{if } M_i^1 = 1 \\ f1_i^j + \left( f2_i^j - f1_i^j \right) \times r, & \text{if } M_i^1 = 2 \end{cases} \quad (14)$$
$$f1_i = \max\!\left( \min(A_i^1, A_i^2) - Val, \; 0 \right) \quad (15)$$
$$f2_i = \min\!\left( \max(A_i^1, A_i^2) + Val, \; 1 \right) \quad (16)$$
When h ≥ 2, Equations (14)–(16) are used to generate the history-retrace template, where a_i is the history-retrace template generated for the i-th particle, f1_i is the lower bound of the interval, and f2_i is the upper bound. A_{i1} and A_{i2} are the first and second values in A_i, respectively.
a_{ij} = \begin{cases} e1_{i,r5} + (e2_{i,r5} - e1_{i,r5}) \times r, & \text{if } M_i^1 = 1 \\ e1_{ij} + (e2_{ij} - e1_{ij}) \times r, & \text{if } M_i^1 = 2 \end{cases}
e1_i = \max(A_i - Val,\; 0)
e2_i = \min(A_i + Val,\; 1)
When h < 2, the history-retrace template is generated using Equations (17)–(19), where e1_i is the lower bound of the interval and e2_i is the upper bound.
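A compact sketch of the history-retrace construction for the h ≥ 2 case follows. For brevity the interval bounds are treated as scalars (in Eq. (14) they can carry a per-dimension index), and the clipping to [0, 1] assumes a normalized search space, consistent with the max(·, 0)/min(·, 1) operations above.

```python
import random

def retrace_bounds_pair(a1, a2, val):
    """Eqs. (15)-(16) for h >= 2: widen the interval spanned by the two
    base points A_i1, A_i2 by val and clip it to the normalized space [0, 1]."""
    f1 = max(min(a1, a2) - val, 0.0)
    f2 = min(max(a1, a2) + val, 1.0)
    return f1, f2

def retrace_template(f1, f2, D, mode):
    """Eq. (14): sample the template inside [f1, f2]. Mode 1 copies one
    shared random value into every dimension; mode 2 samples each
    dimension independently."""
    if mode == 1:
        v = f1 + (f2 - f1) * random.random()
        return [v] * D
    return [f1 + (f2 - f1) * random.random() for _ in range(D)]
```

The interval expansion by Val is what lets the template reach slightly beyond the two historical base points instead of only interpolating between them.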
Generate a value-driven template
When the optimal value changes, the dimension-wise difference between the previous generation's optimal solution and the current one carries information about the optimization trend, and following this change rule may help a particle locate the optimum.
A particle selects the value-driven template generation method through a random number r_3. When r_3 = 1, the template follows the changing law of the optimal value; when r_3 = 2, the template is generated randomly. r_9 is used to randomly select whether the template value is increased or decreased.
Here, r_3 and r_9 are random integers equal to 1 or 2. When r_3 = 1 and, in the selected dimension, gbest^t < gbest^{t-1}, or when r_3 = 2 and r_9 = 1, Equation (20) below is used to generate the value-driven template.
v_{ij} = \begin{cases} GP_{i,r5} - Val_{r5} + Val_{r5} \times r, & \text{if } M_i^2 = 1 \\ GP_{ij} - Val_j + Val_j \times r, & \text{if } M_i^2 = 2 \end{cases}
When r_3 = 1 and gbest^t ≥ gbest^{t-1}, or r_3 = 2 and r_9 = 2, Equation (21) below is used to generate the value-driven template.
v_{ij} = \begin{cases} GP_{i,r5} + Val_{r5} \times r, & \text{if } M_i^2 = 1 \\ GP_{ij} + Val_j \times r, & \text{if } M_i^2 = 2 \end{cases}
where v i j is the value-driven template generated for the i-th individual in the j-th dimension.
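Equations (20) and (21) amount to sampling each template value in a one-sided band of width Val around the global best: below it for a falling trend, above it for a rising one. A minimal sketch, with our own function name and an explicit `decrease` flag replacing the r_3/r_9 branching:

```python
import random

def value_driven_template(gbest, val, D, mode, r5, decrease):
    """Eqs. (20)-(21): build a template around the global best GP.
    decrease=True samples in [GP - Val, GP] (falling trend),
    otherwise in [GP, GP + Val] (rising trend)."""
    def one(g, v):
        r = random.random()
        return g - v + v * r if decrease else g + v * r
    if mode == 1:                      # one shared value from dimension r5
        shared = one(gbest[r5], val[r5])
        return [shared] * D
    return [one(gbest[j], val[j]) for j in range(D)]
```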

2.2. Template Guidance Stage

Particle optimization needs to explore the right spatial range at the right time. The template generation stage produces the space that should be explored with emphasis, so in the template guidance stage we adopt a template-guided knowledge transfer strategy. To improve optimization efficiency, the strategy further controls the particle's step size according to the current iteration, guiding the particle to search for key optimal points within the focused exploration space. r_7 is a random integer equal to 1 or 2; to increase randomness, each particle uses r_7 to choose between particle-led and template-led knowledge transfer. When r < fd_i, all dimensions use the same random number. To balance exploration and exploitation, step size and direction are not constrained by the iteration count when the optimal value changes.
r_{d1} = \frac{t}{T} + \left( \left( 2 - \frac{t}{2T} \right) - \frac{t}{T} \right) \times r
X_i = m_i + (m_i - x_i) \times \sin(r_{d1} \pi) \times (1 - \beta) \times \frac{t}{T} + \beta \times \sin(2 \pi r) \times (m_i - x_i)
When r_7 = 1, Equations (22) and (23) above generate the new particle X_i through knowledge transfer. r_{d1} is a regulatory parameter, and m_i is the template selected by the i-th individual: when L = 1, m_i = a_i, and when L = 2, m_i = v_i.
When the optimal value remains unchanged, the knowledge transfer strategy that takes the template as the main learning object gradually moves toward the particle's direction as iterations proceed, and the step size gradually increases.
X_i = x_i + (x_i - m_i) \times \cos(2 \pi r_6) \times (1 - \beta) \times \left( 1 - \frac{t}{T} \right) + \beta \times \sin(2 \pi r) \times (x_i - m_i)
When r 7 = 2 , Equation (24) generates new particles X i through knowledge transfer. r 6 is a random number in the range [ 1 , 1 ] .
When the optimal value remains unchanged, the knowledge transfer strategy centered on the particle itself moves in a random direction, and the particle's step size toward the template gradually decreases with iteration.
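The two transfer modes can be sketched per dimension as follows, using our reconstruction of Equations (22)–(24) above. The scalar form, the function name, and the injectable random number r (for deterministic testing) are our own; β = 1 encodes "the optimal value changed this generation".

```python
import math
import random

def guide(x, m, t, T, beta, mode, r=None):
    """One-dimensional sketch of the two knowledge-transfer modes.
    mode 1: template-led (Eqs. (22)-(23)); mode 2: particle-led (Eq. (24))."""
    r = random.random() if r is None else r
    if mode == 1:
        # Eq. (22): rd1 is random in [t/T, 2 - t/(2T)]
        rd1 = t / T + ((2 - t / (2 * T)) - t / T) * r
        return (m + (m - x) * math.sin(rd1 * math.pi) * (1 - beta) * (t / T)
                + beta * math.sin(2 * math.pi * r) * (m - x))
    r6 = 2 * r - 1                     # random number in [-1, 1]
    return (x + (x - m) * math.cos(2 * math.pi * r6) * (1 - beta) * (1 - t / T)
            + beta * math.sin(2 * math.pi * r) * (x - m))
```

With β = 1 both modes reduce to an unconstrained sinusoidal step between particle and template, matching the statement above that step size and direction are not time-constrained when the optimal value changes.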

2.3. Computation Complexity

This section analyzes the time complexity and running time of PTG. Suppose the population size is N, the number of iterations is T, and the problem dimension is D. The main loop executes T iterations, and in each iteration the algorithm generates particles through the template generation stage and the template guidance stage. In the template generation stage, the dimension analysis value fd of each particle must be computed, giving a time complexity of O(NDT). In the template interval expansion strategy, the Val value is shared by all particles within an iteration, so its cost is O((N/2)DT). Generating history-retrace or value-driven templates for the particles costs O(NDT). In the template guidance stage, both guidance modes cost O(NDT). The overall time complexity is therefore O(NDT) + O((N/2)DT) + O(NDT) + O(NDT), which reduces to O(NDT) since constant factors can be ignored.

3. Results

3.1. Experimental Setup

(1)
Benchmark functions: the CEC2005 [46] data set of 23 benchmark test functions is adopted, where F1–F13 are high-dimensional problems. F1–F5 are unimodal, F6 is a step function, F7 is a noisy quartic function, and F8–F13 are multimodal. F14–F23 are low-dimensional functions with only a few local minima. The CEC2022 [47] test suite, with more complex optimization landscapes, is also used to evaluate and verify the algorithm. CEC2022 consists of 12 benchmark functions covering unimodal, multimodal, hybrid, and composition functions: F1 is unimodal, F2–F5 are multimodal, F6–F8 are hybrid (unimodal or multimodal), and F9–F12 are composition functions and multimodal.
(2)
Comparison algorithms: we compare the proposed algorithm with classical algorithms, namely the Moth-Flame Optimization algorithm (MFO) [35], Whale Optimization Algorithm (WOA) [36], Harris Hawks Optimization algorithm (HHO) [37], and Tunicate Swarm Algorithm (TSA) [39]. We also compare it with recent state-of-the-art optimization algorithms, namely the Educational Competition Optimizer (ECO) [26] (Q2, 2024), Coati Optimization Algorithm (COA) [38] (Q1, 2023), Reptile Search Algorithm (RSA) [40] (Q1, 2022), Fata Morgana Algorithm (FATA) [48] (Q2, 2024), and Hippopotamus Optimization algorithm (HO) [49] (Q2, 2024). See Table 1 for the detailed control parameter settings; the population size of all algorithms is set to 30, the number of iterations to 1000, and the maximum number of evaluations to 30,000.
(3)
Experimental environment: all experiments were carried out under the 64-bit version of the Windows 11 operating system with MATLAB R2022a, on an Intel Core processor with 8 logical cores, a base frequency of 1.00 GHz, and a maximum turbo frequency of 1.2 GHz.

3.2. Ablation Experiment

To verify the validity of our proposed framework, we set up the ablation study. The specific results are shown in Table 2.
(1)
PTG-noQ means that the history-retrace template's base point set does not extract the stagnant historical value Q in Equation (1). The table shows that the stability and accuracy of the PTG solution are better than those of PTG-noQ, which indicates that timely backtracking and mining of historical stagnation optima helps the algorithm refine the search and reduces wasteful oscillation and misguided searches.
(2)
PTG-noPDD refers to adopting the template generation strategy that is not based on the particle’s dimension distribution, specifically, M = 2 . PTG is superior to PTG-noPDD in all 11 functions, especially F1, F6, and F11, which indicates that the dimensional distribution information of the particle itself is very important for solving unimodal function problems, mixed function problems, and complex and multimodal function problems. It also shows that the personalized template generation strategy based on particle dimensional distribution can adjust the dimension value of the template promptly according to the dimensional distribution law of the particle itself and guide the particle to efficiently search for optimization.
(3)
PTG-noIE stands for eliminating the template interval expansion strategy. PTG is superior to PTG-noIE in 12 functions, especially in complex functions such as F5, F6, and F10. This shows that the template interval expansion strategy can help particles jump out of local optimization and enhance the exploration ability of the algorithm.
The ablation experiment also shows that the order of magnitude of the function values rarely changes after a key strategy is removed, which demonstrates the effectiveness of the overall algorithm framework guided by the optimal-value signal.

3.3. Comparison of PTG with Other Algorithms

3.3.1. Analysis of Experimental Results of CEC2005 Test Function

The comparison results between PTG and the competing algorithms on the CEC2005 test set are shown in Table 3 and Table 4. In the tables, the mean of an algorithm is the average of the optimal solutions obtained over 30 independent runs, and the standard deviation of these 30 solutions measures how far the data points deviate from the mean; the larger the value, the more dispersed the distribution. The 10 algorithms are ranked based on their means and standard deviations, and a smaller rank indicates better performance. The convergence of the algorithms is shown in Figure 3, where the vertical axis represents the mean value of the algorithm at the same iteration point over the 30 independent runs; for example, the value plotted at iteration 1 is the mean of the 30 solutions recorded at iteration 1 across the 30 runs. Because the algorithms converge quickly on some functions, to show the convergence trend more clearly on F1, F2, F5, F6, F7, F11, F12, F13, F14, F15, F16, F17, and F18, only the first 50 of the 1000 mean points, plus the solution values at iterations 200, 400, 600, 800, and 1000, are taken as x-coordinate points.
As can be seen from Table 3 and Table 4, in the comparison on the unimodal benchmark functions, PTG ranks first on F1–F4 and F6, and on F5 its final result is far superior to all algorithms except COA, with a standard deviation of 0 and a second-place rank. Figure 3 shows that the convergence speed of PTG is not always the fastest on F1–F4 and F6, but it converges to a solution better than or equal to that of the other algorithms within about 50 iterations. On F4, although PTG converges more slowly than four of the algorithms, it attains better function values, so PTG maintains diversity while converging. As can be seen from Figure 4, on F1–F7 the results of PTG's 30 independent runs are concentrated, and unlike the other algorithms there are no outliers; a "+" in the figure marks an outlier. In summary, the algorithm is effective at solving unimodal benchmark function problems.
The performance of the algorithm on the F8–F13 high-dimensional multimodal test functions of CEC2005 is as follows. As shown in Table 3 and Table 4, the PTG algorithm always ranks first, and its standard deviation is always the minimum. Figure 3 shows that PTG always converges to a solution better than or equal to that of the other algorithms within the first 200 generations. As Figure 4 shows, the data of the PTG algorithm is the most concentrated, with no outliers. In summary, the algorithm has excellent global search ability.
The evaluation on the F14–F23 multimodal benchmark functions is as follows. As shown in Table 3 and Table 4, except for F15, F20, and F21, the algorithm ranks first on the other seven functions, and on F20 it is second only to the HO algorithm. Figure 3 shows that on F22 the algorithm found a better solution again after converging to a good value and holding it for a number of iterations, indicating a strong ability to balance exploration and exploitation.

3.3.2. Analysis of Experimental Results of CEC2022 Test Function

The comparison results of PTG and the other algorithms on CEC2022 are shown in Table 5 and Table 6. As before, the mean of an algorithm is the average of the optimal solutions obtained over 30 independent runs, and the standard deviation of these 30 solutions measures the dispersion of the results; the larger the value, the more dispersed the distribution. The 10 algorithms are ranked based on their means and standard deviations, with a smaller rank indicating better performance. The tables show that PTG ranks first among the 10 algorithms and has the highest solution accuracy; compared with the other algorithms, it also has the smallest standard deviation and the best stability. On the F1 unimodal function, the algorithm is significantly superior to the comparison algorithms, demonstrating strong global search ability. On the F5 multimodal function, the algorithm is an order of magnitude better than the other algorithms, which means it can traverse multi-peak terrain so that the population continuously finds better regions. On the hybrid function F6, its solution is significantly better than that of the other algorithms, indicating an ability to solve complex problems.
Since CEC2022 contains a variety of test functions, we record the time required for a single run on each function in Table 5 and Table 6. As the tables show, the running time of PTG is comparatively long, shorter only than that of HO, and more than 2 s longer than those of the other algorithms. Analysis shows that most of the time is spent in the template generation stage, where, after the base point and the expanded interval are determined, the dimension information of the particles is integrated to generate the template. The ablation experiment shows that this design is reasonable and effective: the generated template integrates historical information to guide particles efficiently during the guidance stage, while eliminating custom parameters and thus the difficulty of parameter setting. The experiments in the next section also show that the algorithm has high accuracy and high stability.
Convergence curves of PTG and the 9 comparison algorithms on CEC2022 are shown in Figure 5, where the vertical axis represents the mean value of the algorithm at the same iteration point over 30 independent runs. Since the algorithm converges very quickly on some functions (e.g., F8), to show the convergence trend more clearly, on F8 only the first 50 of the 1000 mean points, plus the solution values at iterations 200, 400, 600, 800, and 1000, are taken as x-coordinate points. As the figure shows, on F2, F3, F9, F10, and F12, PTG converges faster than the other algorithms. On F10 in particular, the algorithm had not stagnated by the 200th, 400th, or 600th generation and could still find better solutions, demonstrating excellent exploitation capability. On F4, F5, F6, and F7, the convergence of PTG is slower than that of some algorithms in the early stage but gradually speeds up in subsequent iterations; when the other algorithms stagnated, PTG caught up with and surpassed them, obtaining the optimal solution.
The box plots of optimization algorithms on CEC2022 benchmark functions are shown in Figure 6, where a “ + ” indicates an outlier of the data. It is drawn using the 30 optimal solutions obtained from 30 independent runs of each algorithm. The box plot helps us analyze the degree of dispersion and outliers of the data. As can be seen from the figure, compared with other comparison algorithms, our algorithm has a more concentrated distribution on the F1, F2, F3, F5, F6, F8, F9, F10, F11, and F12 functions. Among the 12 functions, except for the F10 function, PTG generates at most one outlier and has relatively high stability.

3.3.3. Statistical Testing

The Friedman test [50] is a nonparametric method that uses ranks to test whether multiple population distributions differ significantly. The results are shown in Table 7: on both the CEC2005 and CEC2022 test functions, the proposed algorithm ranks first in the Friedman rank test, from which we conclude that PTG is significantly superior to the other algorithms.
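For reference, the Friedman statistic underlying such a ranking can be computed with a short pure-Python sketch. The function name and the toy matrix are our own; the paper's actual rank data is in Table 7, and a full implementation would additionally assign averaged ranks to ties.

```python
def friedman_statistic(results):
    """Friedman chi-square statistic for a matrix of n problems x k
    algorithms, ranking lower values (errors) as better. Tie handling
    is omitted; tied values would receive averaged ranks."""
    n, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for row in results:
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return (12.0 / (n * k * (k + 1)) * sum(s * s for s in rank_sums)
            - 3.0 * n * (k + 1))
```

When one algorithm dominates on every problem (ranks always 1, 2, 3 over n = 4 problems), the statistic evaluates to 8, its maximum for k = 3 and n = 4.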

3.4. Engineering Problem Experiment

We applied PTG to four engineering problems, using 30 independent runs, with 30 individuals in the population and 1000 iterations.

3.4.1. The Problem of Compression Spring Design

This problem involves three parameters, including the diameter of spring wire d ( x 1 ) , the average diameter of spring coil D ( x 2 ) , and the number of active coils P ( x 3 ) . The goal is to minimize the weight of a spring. When the spring is subjected to an external force, several factors are constrained: the maximum allowable deformation at its end or designated position, the ratio of shear force generated by the internal material due to the external force to the material’s cross-sectional area, and the number of vibrations per unit time when subjected to a periodic external force. For specific definitions, see Equations (25)–(27), and the problem diagram is shown in Figure 7.
Definition:
X = [x_1, x_2, x_3] = [d, D, P]
Minimization function:
f(x) = (x_3 + 2) x_2 x_1^2
Set of constraint function:
g_1(x) = 1 - \frac{x_2^3 x_3}{71785 x_1^4} \leq 0, \quad g_2(x) = \frac{4 x_2^2 - x_1 x_2}{12566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \leq 0,
g_3(x) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \leq 0, \quad g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \leq 0,
0.05 \leq x_1 \leq 2, \quad 0.25 \leq x_2 \leq 1.3, \quad 2 \leq x_3 \leq 15
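The objective and constraints of Equations (25)–(27) translate directly into an evaluation routine of the kind a metaheuristic would call. The function names are our own, and the static penalty factor is an illustrative choice, not a value taken from the paper.

```python
def spring_objective(x):
    """Eq. (26): minimize the spring weight f(x) = (x3 + 2) * x2 * x1^2."""
    x1, x2, x3 = x
    return (x3 + 2) * x2 * x1 ** 2

def spring_constraints(x):
    """Eq. (27): g1..g4; a design is feasible when every value is <= 0."""
    x1, x2, x3 = x
    return [
        1 - x2 ** 3 * x3 / (71785 * x1 ** 4),
        (4 * x2 ** 2 - x1 * x2) / (12566 * (x2 * x1 ** 3 - x1 ** 4))
        + 1 / (5108 * x1 ** 2) - 1,
        1 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1,
    ]

def penalized(x, factor=1e6):
    """Static-penalty fitness: objective plus squared constraint violations."""
    return spring_objective(x) + factor * sum(
        max(0.0, g) ** 2 for g in spring_constraints(x))
```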
The solution results of PTG and the comparison algorithms are shown in Table 8. Compared with the other algorithms, PTG obtains the optimal value, its mean and mean square error over 30 independent runs are the smallest, and its results are the most stable, so its performance on the compression spring design problem is better than that of the competing algorithms.

3.4.2. Welded Beam Structure Problem

The structural design of a welded beam involves four parameters, including weld thickness h, bar clamping length l, bar height t, and bar thickness b. The objective function is to obtain the minimum manufacturing cost of a welded beam under the premise of satisfying the shear stress τ , the bending stress in the beam σ , the bending load of the beam P c , and the deflection of the beam end δ . The specific problem is illustrated in Figure 8. For specific definitions, see Equations (28)–(30).
Definition of the problem:
X = [x_1, x_2, x_3, x_4] = [h, l, t, b]
The objective function:
f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14 + x_2)
Constraint setting:
g_1(x) = \tau(x) - \tau_{\max} \leq 0, \quad g_2(x) = \sigma(x) - \sigma_{\max} \leq 0, \quad g_3(x) = \delta(x) - \delta_{\max} \leq 0,
g_4(x) = x_1 - x_4 \leq 0, \quad g_5(x) = P - P_c(x) \leq 0, \quad g_6(x) = 0.125 - x_1 \leq 0,
g_7(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14 + x_2) - 5 \leq 0,
0.1 \leq x_1 \leq 2, \quad 0.1 \leq x_2 \leq 10, \quad 0.1 \leq x_3 \leq 10, \quad 0.1 \leq x_4 \leq 2,
\tau(x) = \sqrt{ (\tau')^2 + 2 \tau' \tau'' \frac{x_2}{2R} + (\tau'')^2 }, \quad \tau' = \frac{P}{\sqrt{2} x_1 x_2}, \quad \tau'' = \frac{M R}{J},
M = P \left( L + \frac{x_2}{2} \right), \quad R = \sqrt{ \frac{x_2^2}{4} + \left( \frac{x_1 + x_3}{2} \right)^2 },
J = 2 \left\{ \sqrt{2} x_1 x_2 \left[ \frac{x_2^2}{12} + \left( \frac{x_1 + x_3}{2} \right)^2 \right] \right\},
\sigma(x) = \frac{6 P L}{x_4 x_3^2}, \quad \delta(x) = \frac{4 P L^3}{E x_3^3 x_4},
P_c(x) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2} \left( 1 - \frac{x_3}{2L} \sqrt{\frac{E}{4G}} \right),
P = 6000, \quad L = 14, \quad \delta_{\max} = 0.25, \quad E = 30 \times 10^6, \quad G = 12 \times 10^6, \quad \tau_{\max} = 13600, \quad \sigma_{\max} = 30000
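The cost function and stress quantities of Equations (28)–(30) can be coded directly; a sketch follows, with our own function names and the constants from Equation (30) as module-level values.

```python
import math

# Constants from Eq. (30)
P, L, E, G = 6000.0, 14.0, 30e6, 12e6

def beam_cost(x):
    """Eq. (29): f(x) = 1.10471*h^2*l + 0.04811*t*b*(14 + l)."""
    h, l, t, b = x
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14 + l)

def shear_stress(x):
    """tau(x) assembled from tau', tau'', M, R, and J as in Eq. (30)."""
    h, l, t, b = x
    tau_p = P / (math.sqrt(2) * h * l)
    M = P * (L + l / 2)
    R = math.sqrt(l ** 2 / 4 + ((h + t) / 2) ** 2)
    J = 2 * (math.sqrt(2) * h * l * (l ** 2 / 12 + ((h + t) / 2) ** 2))
    tau_pp = M * R / J
    return math.sqrt(tau_p ** 2 + 2 * tau_p * tau_pp * l / (2 * R) + tau_pp ** 2)

def bending_stress(x):
    """sigma(x) = 6PL / (b * t^2)."""
    h, l, t, b = x
    return 6 * P * L / (b * t ** 2)
```

Evaluating `beam_cost` at the solution reported in Table 9 reproduces an objective value close to 1.7244.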
The solution results of PTG and the comparison algorithms are shown in Table 9. The optimal solution found by PTG is x_1 = 0.206395, x_2 = 3.456223, x_3 = 9.036624, x_4 = 0.205730, with a corresponding objective value of 1.724399. Compared with the other algorithms, PTG obtains the optimal value, its mean and mean square error over 30 independent runs are the smallest, and its results are the most stable, so its performance on the welded beam design problem is better than that of the competing algorithms.

3.4.3. Pressure Vessel Design Problem

The goal of pressure vessel design is to minimize the manufacturing cost of a pressure vessel (material, forming, and welding). Both ends of the vessel are capped, with the head end capped in a hemispherical shape. The wall thickness T_s of the cylindrical section and the wall thickness T_h of the head are integer multiples of 0.625, and the goal is to minimize the manufacturing cost of the vessel while satisfying the constraints. The design of the pressure vessel is shown in Figure 9; for the specific definitions, see Equations (31)–(33).
Definition of the problem:
X = [ x 1 , x 2 , x 3 , x 4 ] = [ T s , T h , R , L ]
The objective function:
f(x) = 0.6224 x_1 x_3 x_4 + 1.778 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3
Constraint setting:
g_1(x) = -x_1 + 0.0193 x_3 \leq 0, \quad g_2(x) = -x_2 + 0.00954 x_3 \leq 0,
g_3(x) = -\pi x_3^2 x_4 - \frac{4}{3} \pi x_3^3 + 1296000 \leq 0, \quad g_4(x) = x_4 - 240 \leq 0,
0 \leq x_1, x_2 \leq 100, \quad 10 \leq x_3, x_4 \leq 200
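As with the previous problems, Equations (31)–(33) map directly onto an evaluation routine. The function names below are our own.

```python
import math

def vessel_cost(x):
    """Eq. (32): manufacturing cost of the pressure vessel."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.778 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

def vessel_constraints(x):
    """Eq. (33): g1..g4; a design is feasible when every value is <= 0."""
    ts, th, r, l = x
    return [
        -ts + 0.0193 * r,
        -th + 0.00954 * r,
        -math.pi * r ** 2 * l - (4.0 / 3.0) * math.pi * r ** 3 + 1296000,
        l - 240,
    ]
```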
The solution results of PTG and the comparison algorithms are shown in Table 10. The L value obtained by all algorithms is 200. The optimal solution found by PTG is x_1 = 0, x_2 = 0, x_3 = 40.31961872, x_4 = 200, with a corresponding objective value of 753.5014127298541. The table shows that PTG obtains the best value among all algorithms except MFO and ECO, its mean square deviation over 30 independent runs is the smallest, and its results are the most stable; its optimal value is second only to those of MFO and ECO, while its mean and mean square deviation over the 30 runs are better than those of ECO.

3.4.4. Reducer Design Issues

The design of the reducer involves seven parameters, including the face width b, the tooth module m, the number of gear teeth z, the length of the first shaft between bearings l_1, the length of the second shaft between bearings l_2, the diameter of the first shaft d_1, and the diameter of the second shaft d_2. By optimizing the weight of the gears and the axial deformation of the shafts, the weight of the reducer is minimized. The diagram of the problem is shown in Figure 10; for the specific definitions, see Equations (34)–(36).
Definition of the problem:
X = [x_1, x_2, x_3, x_4, x_5, x_6, x_7] = [b, m, z, l_1, l_2, d_1, d_2]
The objective function:
f(x) = 0.7854 x_1 x_2^2 (3.3333 x_3^2 + 14.9334 x_3 - 43.0934) - 1.508 x_1 (x_6^2 + x_7^2) + 7.4777 (x_6^3 + x_7^3) + 0.7854 (x_4 x_6^2 + x_5 x_7^2)
Constraint setting:
g_1(x) = \frac{27}{x_1 x_2^2 x_3} - 1 \leq 0, \quad g_2(x) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \leq 0,
g_3(x) = \frac{1.93 x_4^3}{x_2 x_3 x_6^4} - 1 \leq 0, \quad g_4(x) = \frac{1.93 x_5^3}{x_2 x_3 x_7^4} - 1 \leq 0,
g_5(x) = \frac{1}{110 x_6^3} \sqrt{ \left( \frac{745 x_4}{x_2 x_3} \right)^2 + 16.9 \times 10^6 } - 1 \leq 0,
g_6(x) = \frac{1}{85 x_7^3} \sqrt{ \left( \frac{745 x_5}{x_2 x_3} \right)^2 + 157.5 \times 10^6 } - 1 \leq 0,
g_7(x) = \frac{x_2 x_3}{40} - 1 \leq 0, \quad g_8(x) = \frac{5 x_2}{x_1} - 1 \leq 0, \quad g_9(x) = \frac{x_1}{12 x_2} - 1 \leq 0,
g_{10}(x) = \frac{1.5 x_6 + 1.9}{x_4} - 1 \leq 0, \quad g_{11}(x) = \frac{1.1 x_7 + 1.9}{x_5} - 1 \leq 0,
2.6 \leq x_1 \leq 3.6, \quad 0.7 \leq x_2 \leq 0.8, \quad 17 \leq x_3 \leq 28, \quad 7.3 \leq x_4 \leq 8.3, \quad 7.3 \leq x_5 \leq 8.3, \quad 2.9 \leq x_6 \leq 3.9, \quad 5 \leq x_7 \leq 5.5
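The objective of Equation (35) is again a direct polynomial evaluation; a sketch follows, with two representative constraints from Equation (36) included for illustration. The function names are our own.

```python
def reducer_weight(x):
    """Eq. (35): weight of the reducer to be minimized."""
    b, m, z, l1, l2, d1, d2 = x
    return (0.7854 * b * m ** 2 * (3.3333 * z ** 2 + 14.9334 * z - 43.0934)
            - 1.508 * b * (d1 ** 2 + d2 ** 2)
            + 7.4777 * (d1 ** 3 + d2 ** 3)
            + 0.7854 * (l1 * d1 ** 2 + l2 * d2 ** 2))

def reducer_g1_g7(x):
    """Two of the eleven constraints in Eq. (36): g1 and g7.
    A design is feasible when the values are <= 0."""
    b, m, z, l1, l2, d1, d2 = x
    return 27 / (b * m ** 2 * z) - 1, m * z / 40 - 1
```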
The solution results of PTG and the comparison algorithms are shown in Table 11. All algorithms obtained b = 3.6, m = 0.8, z = 28, and d_2 = 5. The optimal value of PTG is better than that of FATA and matches that obtained by two other comparison algorithms, but its mean and mean square error over 30 independent runs are the smallest among all algorithms and its results are the most stable, so its performance on the reducer design problem is better than that of the competing algorithms.

4. Conclusions

Most existing heuristic algorithms are based on inspiration sources, which makes them random and unstable. Therefore, based on optimization principles, this paper proposes a highly stable algorithm named PTG. In particular, PTG effectively uses knowledge of a particle's own dimension distribution to accelerate convergence, and it uses knowledge of the population distribution to help particles escape local optima. Unlike the comparison algorithms, PTG preserves and reuses random particles from generations in which the optimal value remained unchanged, which helps the algorithm retrace previously reached points and avoid missing good solutions.
The experimental results show that, compared with the nine comparison algorithms, PTG performs best on the 23 classical benchmark functions and the CEC2022 benchmark functions, indicating that PTG is suitable for solving both multimodal problems and unimodal benchmark functions with global convergence. In addition, PTG solved four engineering problems with practical application value: compression spring design, welded beam structure, pressure vessel design, and reducer design. PTG also ranks first in the Friedman test, which shows that it has a clear advantage on two different types of test functions. Although PTG has many advantages, several limitations remain. First, when dealing with high-dimensional data, the performance of PTG is similar to that of the comparison algorithms, indicating that the dimension distribution strategy requires a more comprehensive design to handle high-dimensional complex problems. Second, real-world problems are intricate and complex; for problems under complex constraints, the PTG algorithm may perform poorly, and these issues need to be addressed in future work. In the future, we also plan to extend the PTG algorithm to multi-objective optimization problems and to the optimization of large deep models.

Author Contributions

Conceptualization, D.H.; Data curation, D.H.; Formal analysis, D.H.; Investigation, D.H.; Methodology, D.H.; Project administration, X.H.; Resources, D.H.; Software, D.H.; Supervision, X.H.; Validation, D.H.; Visualization, D.H.; Writing—original draft, D.H.; Writing—review and editing, D.H., X.H., M.G., Y.C. and T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Social Science Foundation of China (24BTJ041).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are contained in this paper. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, S.; Chen, H.; Wang, M.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  2. Mi, Z.; Yang, B. Heuristic Algorithms. Cornell University Computational Optimization Open Textbook. 2024. Available online: https://optimization.cbe.cornell.edu/index.php?title=Heuristic_algorithms (accessed on 15 December 2024).
  3. Yu, D.M.; Cheng, B.R.; Cheng, H.; Wu, L.; Li, X.Y. A quantum portfolio optimization algorithm based on hard constraint and warm starting. J. Univ. Electron. Sci. Technol. China 2025, 54, 116–124. [Google Scholar] [CrossRef]
  4. Zhang, X.; Yu, L.; Li, S.; Zhang, A.; Zhang, B. Research Progress of Metaheuristic Algorithm in Path Planning of Plant Protection UAV. J. Agric. Mech. Res. 2025, 47, 1–9. [Google Scholar]
  5. Roni, M.H.K.; Rana, M.; Pota, H.; Hasan, M.M.; Hussain, M.S. Recent trends in bio-inspired meta-heuristic optimization techniques in control applications for electrical systems: A review. Int. J. Dyn. Control. 2022, 10, 999–1011. [Google Scholar] [CrossRef]
  6. Natesha, B.; Guddeti, R.M.R. Meta-heuristic based hybrid service placement strategies for two-level fog computing architecture. J. Netw. Syst. Manag. 2022, 30, 47. [Google Scholar] [CrossRef]
  7. Bhavya, R.; Elango, L. Ant-inspired metaheuristic algorithms for combinatorial optimization problems in water resources management. Water 2023, 15, 1712. [Google Scholar] [CrossRef]
  8. Alghamdi, M. Smart city urban planning using an evolutionary deep learning model. Soft Comput. 2024, 28, 447–459. [Google Scholar] [CrossRef]
  9. Yang, J.; Xia, Y. Coverage and routing optimization of wireless sensor networks using improved cuckoo algorithm. IEEE Access 2024, 12, 39564–39577. [Google Scholar] [CrossRef]
  10. Woodcock, A.; Zhang, L. Genetic Algorithm with Reinforcement Learning based Parameter Optimisation. In Proceedings of the 2024 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Kuching, Malaysia, 6–10 October 2024; pp. 4833–4839. [Google Scholar] [CrossRef]
  11. Xu, Q.; Meng, Z. Differential Evolution with multi-stage parameter adaptation and diversity enhancement mechanism for numerical optimization. Swarm Evol. Comput. 2025, 92, 101829.
  12. Zhong, J.; Dong, J.; Liu, W.L.; Feng, L.; Zhang, J. Multiform Genetic Programming Framework for Symbolic Regression Problems. IEEE Trans. Evol. Comput. 2025, 29, 429–443.
  13. Zhang, Y.; Zhang, H. Enhancing robot path planning through a twin-reinforced chimp optimization algorithm and evolutionary programming algorithm. IEEE Access 2023, 12, 170057–170078.
  14. Hussien, A.G.; Heidari, A.A.; Ye, X.; Liang, G.; Chen, H.; Pan, Z. Boosting whale optimization with evolution strategy and Gaussian random walks: An image segmentation method. Eng. Comput. 2023, 39, 1935–1979.
  15. Zhang, Z.; Gao, Y.; Liu, Y.; Zuo, W. A hybrid biogeography-based optimization algorithm to solve high-dimensional optimization problems and real-world engineering problems. Appl. Soft Comput. 2023, 144, 110514.
  16. Su, F.; Wang, Y.; Yang, S.; Yao, Y. A Manifold-Guided Gravitational Search Algorithm for High-Dimensional Global Optimization Problems. Int. J. Intell. Syst. 2024, 2024, 5806437.
  17. Wei, J.; Zhang, Y.; Wei, W. Augmenting Particle Swarm Optimization with Simulated Annealing and Dimensional Learning for UAVs Path Planning. In Proceedings of the 2024 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Kuching, Malaysia, 6–10 October 2024; pp. 3721–3726.
  18. Abirami, A.; Palanikumar, S. BBBC-DDRL: A hybrid big-bang big-crunch optimization and deliberated deep reinforced learning mechanisms for cyber-attack detection. Comput. Electr. Eng. 2023, 109, 108773.
  19. Zhu, C.; Zhang, Y.; Wang, M.; Deng, J.; Cai, Y.; Wei, W.; Guo, M. Optimization, validation and analyses of a hybrid PV-battery-diesel power system using enhanced electromagnetic field optimization algorithm and ε-constraint. Energy Rep. 2024, 11, 5335–5349.
  20. Gharehchopogh, F.S. Quantum-inspired metaheuristic algorithms: Comprehensive survey and classification. Artif. Intell. Rev. 2023, 56, 5479–5543.
  21. Charest, T.; Green, R.C. Implementing Central Force optimization on the Intel Xeon Phi. In Proceedings of the 2020 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), New Orleans, LA, USA, 18–22 May 2020; IEEE: New York, NY, USA, 2020; pp. 502–511.
  22. Talatahari, S.; Azizi, M. An extensive review of charged system search algorithm for engineering optimization applications. In Nature-Inspired Metaheuristic Algorithms for Engineering Optimization Applications; Springer: Singapore, 2021; pp. 309–334.
  23. Kaveh, A. Ray optimization algorithm. In Advances in Metaheuristic Algorithms for Optimal Design of Structures; Springer: Singapore, 2017; pp. 237–280.
  24. Bao, Y.Y.; Xing, C.; Wang, J.S.; Zhao, X.R.; Zhang, X.Y.; Zheng, Y. Improved teaching–learning-based optimization algorithm with Cauchy mutation and chaotic operators. Appl. Intell. 2023, 53, 21362–21389.
  25. Tang, Y.; Zhou, F. An improved imperialist competition algorithm with adaptive differential mutation assimilation strategy for function optimization. Expert Syst. Appl. 2023, 211, 118686.
  26. Lian, J.; Zhu, T.; Ma, L.; Wu, X.; Heidari, A.A.; Chen, Y.; Chen, H.; Hui, G. The educational competition optimizer. Int. J. Syst. Sci. 2024, 55, 3185–3222.
  27. Du, J.; Wen, Y.; Wang, L.; Zhang, P.; Fei, M.; Pardalos, P.M. An adaptive human learning optimization with enhanced exploration–exploitation balance. Ann. Math. Artif. Intell. 2023, 91, 177–216.
  28. Rehman, H.; Sajid, I.; Sarwar, A.; Tariq, M.; Bakhsh, F.I.; Ahmad, S.; Mahmoud, H.A.; Aziz, A. Driving training-based optimization (DTBO) for global maximum power point tracking for a photovoltaic system under partial shading condition. IET Renew. Power Gener. 2023, 17, 2542–2562.
  29. Daqaq, F.; Hassan, M.H.; Kamel, S.; Hussien, A.G. A leader supply-demand-based optimization for large scale optimal power flow problem considering renewable energy generations. Sci. Rep. 2023, 13, 14591.
  30. Bao, Y.Y.; Wang, J.S.; Liu, J.X.; Zhao, X.R.; Yang, Q.D.; Zhang, S.H. Student psychology based optimization algorithm integrating differential evolution and hierarchical learning for solving data clustering problems. Evol. Intell. 2025, 18, 1–23.
  31. Wang, Y.; Zhou, S. An improved poor and rich optimization algorithm. PLoS ONE 2023, 18, e0267633.
  32. Wang, X.; Xu, J.; Huang, C. Fans Optimizer: A human-inspired optimizer for mechanical design problems optimization. Expert Syst. Appl. 2023, 228, 120242.
  33. Bashkandi, A.H.; Sadoughi, K.; Aflaki, F.; Alkhazaleh, H.A.; Mohammadi, H.; Jimenez, G. Combination of political optimizer, particle swarm optimizer, and convolutional neural network for brain tumor detection. Biomed. Signal Process. Control 2023, 81, 104434.
  34. Elleuch, S.; Brahmi, I.; Hamdi, M.; Zarai, F. Efficient PSO Coupled with a Local Search Heuristic for Radio Resource Allocation in V2X Communications. In Proceedings of the 2024 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Kuching, Malaysia, 6–10 October 2024; pp. 2507–2512.
  35. Zamani, H.; Nadimi-Shahraki, M.H.; Mirjalili, S.; Soleimanian Gharehchopogh, F.; Oliva, D. A critical review of moth-flame optimization algorithm and its variants: Structural reviewing, performance evaluation, and statistical analysis. Arch. Comput. Methods Eng. 2024, 31, 2177–2225.
  36. Amiriebrahimabadi, M.; Mansouri, N. A comprehensive survey of feature selection techniques based on whale optimization algorithm. Multimed. Tools Appl. 2024, 83, 47775–47846.
  37. Akl, D.T.; Saafan, M.M.; Haikal, A.Y.; El-Gendy, E.M. IHHO: An improved Harris Hawks optimization algorithm for solving engineering problems. Neural Comput. Appl. 2024, 36, 12185–12298.
  38. Dehghani, M.; Montazeri, Z.; Trojovská, E.; Trojovskỳ, P. Coati Optimization Algorithm: A new bio-inspired metaheuristic algorithm for solving optimization problems. Knowl.-Based Syst. 2023, 259, 110011.
  39. Jumakhan, H.; Abouelnour, S.; Redhaei, A.A.; Makhadmeh, S.N.; Al-Betar, M.A. Recent Versions and Applications of Tunicate Swarm Algorithm. Arch. Comput. Methods Eng. 2025, 1–30.
  40. Abualigah, L.; Abd Elaziz, M.; Sumari, P.; Geem, Z.W.; Gandomi, A.H. Reptile Search Algorithm (RSA): A nature-inspired meta-heuristic optimizer. Expert Syst. Appl. 2022, 191, 116158.
  41. Punia, P.; Raj, A.; Kumar, P. Enhanced zebra optimization algorithm for reliability redundancy allocation and engineering optimization problems. Clust. Comput. 2025, 28, 267.
  42. Zhu, S.; Pun, C.M.; Zhu, H.; Li, S.; Huang, X.; Gao, H. An artificial bee colony algorithm with a balance strategy for wireless sensor network. Appl. Soft Comput. 2023, 136, 110083.
  43. Sharma, N.; Gupta, V.; Johri, P.; Elngar, A.A. SHO-CH: Spotted hyena optimization for cluster head selection to optimize energy in wireless sensor network. Peer-to-Peer Netw. Appl. 2025, 18, 1–18.
  44. Guo, J.; Zhou, G.; Yan, K.; Shi, B.; Di, Y.; Sato, Y. A novel hermit crab optimization algorithm. Sci. Rep. 2023, 13, 9934.
  45. Alirezapour, H.; Mansouri, N.; Mohammad Hasani Zade, B. A comprehensive survey on feature selection with grasshopper optimization algorithm. Neural Process. Lett. 2024, 56, 28.
  46. Yao, X.; Liu, Y.; Lin, G. Evolutionary programming made faster. IEEE Trans. Evol. Comput. 1999, 3, 82–102.
  47. Kumar, A.; Price, K.V.; Mohamed, A.W.; Hadi, A.A.; Suganthan, P.N. Problem Definitions and Evaluation Criteria for the 2022 Special Session and Competition on Single Objective Bound Constrained Numerical Optimization; Technical Report; Nanyang Technological University: Singapore, 2021.
  48. Qi, A.; Zhao, D.; Heidari, A.A.; Liu, L.; Chen, Y.; Chen, H. FATA: An efficient optimization method based on geophysics. Neurocomputing 2024, 607, 128289.
  49. Amiri, M.H.; Hashjin, N.M.; Montazeri, M.; Mirjalili, S.; Khodadadi, N. Hippopotamus optimization algorithm: A novel nature-inspired optimization algorithm. Sci. Rep. 2024, 14, 5032.
  50. Watanabe, K. Current status of the position on labor progress prediction for contemporary pregnant women using Friedman curves: An updated review. J. Obstet. Gynaecol. Res. 2024, 50, 313–321.
Figure 1. The flowchart of the PTG.
Figure 2. Graph of the template generation stage.
Figure 3. Convergence curves of PTG and 9 optimization algorithms in F1–F23 on CEC2005.
Figure 4. Box diagram of PTG and 9 optimization algorithms in CEC2005 (F1–F23).
Figure 5. Convergence curves (F1–F12) of PTG and 9 optimization algorithms on CEC2022.
Figure 6. Box plots (F1–F12) of optimization algorithms on CEC2022 benchmark functions.
Figure 7. Schematic diagram of design problems of compression spring.
Figure 8. Schematic diagram of structural design problems of welded beams.
Figure 9. Schematic diagram of pressure vessel design problems.
Figure 10. Schematic diagram of reducer design problems.
Table 1. Algorithm parameters and their values.

| Algorithm | Parameter | Value |
| --- | --- | --- |
| FATA | α | 0.2 |
| ECO | H | 0.5 |
|  | a | 4 |
| HO | -- | -- |
| COA | -- | -- |
| RSA | a | 0.1 |
|  | β | 0.005 |
| TSA | P_min | 1 |
|  | P_max | 4 |
| HHO | E_0 | [−1, 1) |
| WOA | Convergence parameter (a) | Linear reduction from 2 to 0 |
| MFO | b | 1 |
|  | r | Linear reduction from −1 to −2 |
Table 2. Ablation experiment of PTG on CEC2022.

| Functions | Metric | PTG | PTG-noQ | PTG-noPDD | PTG-noIE |
| --- | --- | --- | --- | --- | --- |
| F1 | mean | 8.17 × 10^3 | 9.44 × 10^3 | 1.95 × 10^4 | 8.32 × 10^3 |
|  | std | 2.70 × 10^3 | 3.79 × 10^3 | 3.51 × 10^3 | 3.70 × 10^3 |
| F2 | mean | 4.52 × 10^2 | 4.60 × 10^2 | 4.63 × 10^2 | 4.70 × 10^2 |
|  | std | 1.49 × 10^1 | 2.14 × 10^1 | 6.27 | 2.81 × 10^1 |
| F3 | mean | 6.00 × 10^2 | 6.01 × 10^2 | 6.07 × 10^2 | 6.16 × 10^2 |
|  | std | 3.12 × 10^−1 | 3.30 × 10^−1 | 1.20 | 8.35 |
| F4 | mean | 8.26 × 10^2 | 8.28 × 10^2 | 8.52 × 10^2 | 8.64 × 10^2 |
|  | std | 9.37 | 1.07 × 10^1 | 1.29 × 10^1 | 1.81 × 10^1 |
| F5 | mean | 9.15 × 10^2 | 9.19 × 10^2 | 9.30 × 10^2 | 1.68 × 10^3 |
|  | std | 1.58 × 10^1 | 2.42 × 10^1 | 1.35 × 10^1 | 6.06 × 10^2 |
| F6 | mean | 6.93 × 10^3 | 9.42 × 10^3 | 4.12 × 10^5 | 7.20 × 10^3 |
|  | std | 5.72 × 10^3 | 6.50 × 10^3 | 2.12 × 10^5 | 6.66 × 10^3 |
| F7 | mean | 2.04 × 10^3 | 2.05 × 10^3 | 2.07 × 10^3 | 2.10 × 10^3 |
|  | std | 1.22 × 10^1 | 1.37 × 10^1 | 1.38 × 10^1 | 4.16 × 10^1 |
| F8 | mean | 2.226 × 10^3 | 2.227 × 10^3 | 2.231 × 10^3 | 2.233 × 10^3 |
|  | std | 2.61 | 2.44 | 1.92 | 2.22 × 10^1 |
| F9 | mean | 2.48 × 10^3 | 2.48 × 10^3 | 2.50 × 10^3 | 2.48 × 10^3 |
|  | std | 3.86 × 10^−2 | 9.00 × 10^−2 | 6.36 | 1.80 × 10^−1 |
| F10 | mean | 2.56 × 10^3 | 2.61 × 10^3 | 2.50 × 10^3 | 3.15 × 10^3 |
|  | std | 1.98 × 10^2 | 3.66 × 10^2 | 1.90 × 10^−1 | 5.55 × 10^2 |
| F11 | mean | 2.92 × 10^3 | 2.94 × 10^3 | 3.19 × 10^3 | 2.97 × 10^3 |
|  | std | 1.19 × 10^2 | 1.05 × 10^2 | 6.78 × 10^1 | 1.75 × 10^2 |
| F12 | mean | 2.95 × 10^3 | 2.95 × 10^3 | 2.96 × 10^3 | 2.97 × 10^3 |
|  | std | 6.74 | 9.64 | 7.78 | 1.92 × 10^1 |
Table 3. Performance metrics for algorithms (PTG, WOA, HHO, COA, TSA) on CEC2005 (F1–F23).

| Functions | Metric | PTG | WOA | HHO | COA | TSA |
| --- | --- | --- | --- | --- | --- | --- |
| F1 | mean | 0 | 4.23 × 10^−149 | 1.90 × 10^−184 | 0 | 3.61 × 10^−46 |
|  | std | 0 | 2.29 × 10^−148 | 0 | 0 | 1.62 × 10^−45 |
|  | rank | 1 | 7 | 6 | 1 | 9 |
| F2 | mean | 0 | 9.94 × 10^−108 | 1.28 × 10^−94 | 0 | 2.71 × 10^−49 |
|  | std | 0 | 3.01 × 10^−107 | 6.99 × 10^−94 | 0 | 5.92 × 10^−49 |
|  | rank | 1 | 6 | 7 | 1 | 9 |
| F3 | mean | 0 | 1.901 × 10^1 | 3.22 × 10^−156 | 0 | 5.78 × 10^−54 |
|  | std | 0 | 3.278 × 10^1 | 1.76 × 10^−155 | 0 | 1.74 × 10^−53 |
|  | rank | 1 | 10 | 6 | 1 | 8 |
| F4 | mean | 0 | 8.70 × 10^−1 | 1.80 × 10^−91 | 0 | 1.55 × 10^−21 |
|  | std | 0 | 2.59 | 8.77 × 10^−91 | 0 | 4.58 × 10^−21 |
|  | rank | 1 | 10 | 6 | 1 | 8 |
| F5 | mean | 1.15 × 10^−26 | 6.19 | 7.35 × 10^−4 | 0 | 8.49 |
|  | std | 0 | 4.10 × 10^−1 | 9.41 × 10^−4 | 0 | 9.20 × 10^−1 |
|  | rank | 2 | 8 | 3 | 1 | 9 |
| F6 | mean | 0 | 6.35 × 10^−5 | 1.32 × 10^−5 | 0 | 1.12 |
|  | std | 0 | 6.57 × 10^−5 | 1.86 × 10^−5 | 0 | 4.00 × 10^−1 |
|  | rank | 1 | 7 | 5 | 1 | 9 |
| F7 | mean | 1.01 × 10^−4 | 1.10 × 10^−3 | 1.10 × 10^−4 | 2.60 × 10^−5 | 1.60 × 10^−3 |
|  | std | 6.68 × 10^−5 | 1.10 × 10^−3 | 1.12 × 10^−4 | 2.40 × 10^−5 | 6.93 × 10^−4 |
|  | rank | 5 | 8 | 6 | 1 | 9 |
| F8 | mean | −4.19 × 10^3 | −3.66 × 10^3 | −4.18 × 10^3 | −4.19 × 10^3 | −2.52 × 10^3 |
|  | std | 2.78 × 10^−12 | 6.6747 × 10^2 | 7.9308 × 10^1 | 2.80 × 10^−3 | 3.5426 × 10^2 |
|  | rank | 1 | 7 | 5 | 3 | 9 |
| F9 | mean | 0 | 4.74 × 10^−16 | 0 | 0 | 2.56364 × 10^1 |
|  | std | 0 | 2.59 × 10^−15 | 0 | 0 | 1.225 × 10^1 |
|  | rank | 1 | 8 | 1 | 1 | 10 |
| F10 | mean | 4.44 × 10^−16 | 3.05 × 10^−15 | 4.44 × 10^−16 | 4.44 × 10^−16 | 1.13 |
|  | std | 0 | 2.46 × 10^−15 | 0 | 0 | 1.63 |
|  | rank | 1 | 8 | 1 | 1 | 10 |
| F11 | mean | 0 | 2.40 × 10^−2 | 0 | 0 | 4.39 × 10^−1 |
|  | std | 0 | 6.12 × 10^−2 | 0 | 0 | 2.55 × 10^−1 |
|  | rank | 1 | 8 | 1 | 1 | 10 |
| F12 | mean | 4.71 × 10^−32 | 2.90 × 10^−3 | 5.50 × 10^−6 | 4.71 × 10^−32 | 2.56 |
|  | std | 1.67 × 10^−47 | 5.40 × 10^−3 | 6.74 × 10^−6 | 1.67 × 10^−47 | 3.71 |
|  | rank | 1 | 6 | 4 | 1 | 10 |
| F13 | mean | 1.35 × 10^−32 | 7.90 × 10^−3 | 2.96 × 10^−5 | 1.35 × 10^−32 | 6.38 × 10^−1 |
|  | std | 5.57 × 10^−48 | 1.84 × 10^−2 | 5.35 × 10^−5 | 5.57 × 10^−48 | 2.70 × 10^−1 |
|  | rank | 1 | 7 | 4 | 1 | 10 |
| F14 | mean | 9.98 × 10^−1 | 2.1781 | 1.0311 | 9.98 × 10^−1 | 1.05407 × 10^1 |
|  | std | 0 | 2.49 | 1.82 × 10^−1 | 9.73 × 10^−11 | 3.98 |
|  | rank | 1 | 6 | 4 | 3 | 10 |
| F15 | mean | 3.38 × 10^−4 | 5.98 × 10^−4 | 3.63 × 10^−4 | 4.23 × 10^−4 | 1.31 × 10^−2 |
|  | std | 1.67 × 10^−4 | 3.24 × 10^−4 | 1.66 × 10^−4 | 2.40 × 10^−4 | 2.00 × 10^−2 |
|  | rank | 3 | 6 | 4 | 5 | 10 |
| F16 | mean | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0264 |
|  | std | 6.78 × 10^−16 | 1.31 × 10^−10 | 9.31 × 10^−12 | 4.41 × 10^−5 | 1.20 × 10^−2 |
|  | rank | 1 | 5 | 4 | 8 | 10 |
| F17 | mean | 3.979 × 10^−1 | 3.979 × 10^−1 | 3.979 × 10^−1 | 3.984 × 10^−1 | 3.979 × 10^−1 |
|  | std | 0 | 4.33 × 10^−7 | 1.14 × 10^−7 | 1.50 × 10^−3 | 2.04 × 10^−5 |
|  | rank | 1 | 6 | 5 | 9 | 8 |
| F18 | mean | 3.00 | 3.00 | 3.00 | 3.38 | 6.60 |
|  | std | 1.36 × 10^−15 | 2.57 × 10^−5 | 4.78 × 10^−8 | 8.87 × 10^−1 | 9.34 |
|  | rank | 1 | 7 | 5 | 8 | 9 |
| F19 | mean | −3.8628 | −3.8585 | −3.861 | −3.8206 | −3.8624 |
|  | std | 2.71 × 10^−15 | 5.50 × 10^−3 | 2.20 × 10^−3 | 4.66 × 10^−2 | 1.40 × 10^−3 |
|  | rank | 1 | 7 | 6 | 10 | 4 |
| F20 | mean | −3.2744 | −3.2325 | −3.1903 | −2.7113 | −3.2051 |
|  | std | 5.92 × 10^−2 | 1.16 × 10^−1 | 9.15 × 10^−2 | 3.25 × 10^−1 | 1.91 × 10^−1 |
|  | rank | 2 | 4 | 8 | 9 | 7 |
| F21 | mean | −9.9848 | −9.3022 | −5.2232 | −10.1532 | −6.0689 |
|  | std | 9.22 × 10^−1 | 1.93 | 9.23 × 10^−1 | 7.30 × 10^−5 | 2.94 |
|  | rank | 4 | 6 | 9 | 2 | 8 |
| F22 | mean | −10.4029 | −8.7292 | −5.4334 | −10.4028 | −7.8476 |
|  | std | 8.73 × 10^−16 | 2.63 | 1.32 | 2.17 × 10^−5 | 3.23 |
|  | rank | 1 | 6 | 9 | 3 | 7 |
| F23 | mean | −10.5364 | −8.629 | −5.1279 | −10.5363 | −7.337 |
|  | std | 2.06 × 10^−15 | 3.05 | 6.64 × 10^−4 | 3.16 × 10^−5 | 3.58 |
|  | rank | 1 | 6 | 10 | 3 | 8 |
Table 4. Performance metrics for algorithms (RSA, FATA, MFO, ECO, HO) on CEC2005 (F1–F23).

| Functions | Metric | RSA | FATA | MFO | ECO | HO |
| --- | --- | --- | --- | --- | --- | --- |
| F1 | mean | 0 | 0 | 2.00 × 10^3 | 6.72 × 10^−111 | 0 |
|  | std | 0 | 0 | 4.07 × 10^3 | 3.48 × 10^−110 | 0 |
|  | rank | 1 | 1 | 10 | 8 | 1 |
| F2 | mean | 0 | 0 | 1 | 2.73 × 10^−59 | 0 |
|  | std | 0 | 0 | 3.05 | 1.50 × 10^−58 | 0 |
|  | rank | 1 | 1 | 10 | 8 | 1 |
| F3 | mean | 0 | 0 | 1.94 × 10^−6 | 3.10 × 10^−99 | 0 |
|  | std | 0 | 0 | 6.77 × 10^−6 | 1.70 × 10^−98 | 0 |
|  | rank | 1 | 1 | 9 | 7 | 1 |
| F4 | mean | 0 | 0 | 4.15 × 10^−1 | 5.92 × 10^−57 | 0 |
|  | std | 0 | 0 | 5.76 × 10^−1 | 3.24 × 10^−56 | 0 |
|  | rank | 1 | 1 | 9 | 7 | 1 |
| F5 | mean | 2.921 × 10^−1 | 2.8708 | 3.22 × 10^3 | 5.6281 | 9.92 × 10^−4 |
|  | std | 1.60 | 3.25 | 1.64 × 10^4 | 4.51 × 10^1 | 8.10 × 10^−4 |
|  | rank | 5 | 6 | 10 | 7 | 4 |
| F6 | mean | 1.9942 | 4.39 × 10^−2 | 4.82 × 10^−30 | 3.90 × 10^−14 | 2.15 × 10^−5 |
|  | std | 3.26 × 10^−1 | 6.8 × 10^−2 | 1.50 × 10^−29 | 7.68 × 10^−14 | 1.29 × 10^−5 |
|  | rank | 10 | 8 | 3 | 4 | 6 |
| F7 | mean | 5.02 × 10^−5 | 3.35 × 10^−5 | 7.2 × 10^−3 | 1.36 × 10^−4 | 8.42 × 10^−5 |
|  | std | 4.11 × 10^−5 | 3.27 × 10^−5 | 5.2 × 10^−3 | 1.09 × 10^−4 | 6.42 × 10^−5 |
|  | rank | 3 | 2 | 10 | 7 | 4 |
| F8 | mean | −1.96 × 10^3 | −4.19 × 10^3 | −3.16 × 10^3 | −3.93 × 10^3 | −4.19 × 10^3 |
|  | std | 6.232 × 10^1 | 1.23 × 10^2 | 3.46 × 10^2 | 3.08 × 10^2 | 4.38 × 10^−4 |
|  | rank | 10 | 4 | 8 | 6 | 2 |
| F9 | mean | 0 | 0 | 2.2062 × 10^1 | 0 | 0 |
|  | std | 0 | 0 | 1.26044 × 10^1 | 0 | 0 |
|  | rank | 1 | 1 | 9 | 1 | 1 |
| F10 | mean | 4.44 × 10^−16 | 4.44 × 10^−16 | 6.408 × 10^−1 | 4.44 × 10^−16 | 4.44 × 10^−16 |
|  | std | 0 | 0 | 3.30 | 0 | 0 |
|  | rank | 1 | 1 | 9 | 1 | 1 |
| F11 | mean | 0 | 0 | 1.49 × 10^−1 | 0 | 0 |
|  | std | 0 | 0 | 8.8 × 10^−2 | 0 | 0 |
|  | rank | 1 | 1 | 9 | 1 | 1 |
| F12 | mean | 7.75 × 10^−1 | 3.9 × 10^−3 | 5.79 × 10^−1 | 2.03 × 10^−14 | 2.28 × 10^−5 |
|  | std | 5.00 × 10^−1 | 1.02 × 10^−2 | 1.32 | 6.43 × 10^−14 | 2.08 × 10^−5 |
|  | rank | 9 | 7 | 8 | 3 | 5 |
| F13 | mean | 2.06 × 10^−31 | 1.30 × 10^−2 | 7.70 × 10^−3 | 3.48 × 10^−2 | 1.35 × 10^−4 |
|  | std | 2.23 × 10^−31 | 3.16 × 10^−2 | 1.45 × 10^−2 | 1.80 × 10^−1 | 1.51 × 10^−4 |
|  | rank | 3 | 8 | 6 | 9 | 5 |
| F14 | mean | 4.2171 | 3.3325 | 3.1292 | 1.1629 | 9.98 × 10^−1 |
|  | std | 3.36 | 4.7488 | 2.9899 | 7.38 × 10^−1 | 2.43 × 10^−14 |
|  | rank | 9 | 8 | 7 | 5 | 2 |
| F15 | mean | 1.40 × 10^−3 | 3.15 × 10^−4 | 1.20 × 10^−3 | 3.20 × 10^−3 | 3.07 × 10^−4 |
|  | std | 3.96 × 10^−4 | 3.03 × 10^−5 | 1.40 × 10^−3 | 6.90 × 10^−3 | 3.45 × 10^−9 |
|  | rank | 8 | 2 | 7 | 9 | 1 |
| F16 | mean | −1.0307 | −1.0316 | −1.0316 | −1.0316 | −1.0316 |
|  | std | 1.60 × 10^−3 | 3.84 × 10^−8 | 6.78 × 10^−16 | 2.88 × 10^−9 | 1.62 × 10^−12 |
|  | rank | 9 | 7 | 1 | 6 | 3 |
| F17 | mean | 4.072 × 10^−1 | 3.979 × 10^−1 | 3.979 × 10^−1 | 3.979 × 10^−1 | 3.979 × 10^−1 |
|  | std | 1.13 × 10^−2 | 1.56 × 10^−6 | 0 | 1.32 × 10^−9 | 2.19 × 10^−10 |
|  | rank | 10 | 7 | 1 | 4 | 3 |
| F18 | mean | 6.62 | 3 | 3 | 3 | 3 |
|  | std | 9.40 | 3.93 × 10^−7 | 1.84 × 10^−15 | 1.91 × 10^−15 | 3.47 × 10^−11 |
|  | rank | 10 | 6 | 2 | 3 | 4 |
| F19 | mean | −3.8358 | −3.8623 | −3.8628 | −3.837 | −3.8628 |
|  | std | 2.10 × 10^−2 | 5.92 × 10^−4 | 2.71 × 10^−15 | 1.41 × 10^−1 | 1.94 × 10^−9 |
|  | rank | 9 | 5 | 1 | 8 | 3 |
| F20 | mean | −2.6615 | −3.2145 | −3.2319 | −3.2586 | −3.2850 |
|  | std | 4.46 × 10^−1 | 5.99 × 10^−2 | 6.32 × 10^−2 | 6.03 × 10^−2 | 5.75 × 10^−2 |
|  | rank | 10 | 6 | 5 | 3 | 1 |
| F21 | mean | −5.0552 | −10.1531 | −6.3007 | −9.9024 | −10.1532 |
|  | std | 3.84 × 10^−7 | 1.46 × 10^−4 | 3.34 | 1.37 | 1.15 × 10^−7 |
|  | rank | 10 | 3 | 7 | 5 | 1 |
| F22 | mean | −5.0877 | −10.4027 | −7.7248 | −9.5259 | −10.4029 |
|  | std | 1.02 × 10^−6 | 1.82 × 10^−4 | 3.4044 | 2.6962 | 1.77 × 10^−7 |
|  | rank | 10 | 4 | 8 | 5 | 2 |
| F23 | mean | −5.1285 | −10.5362 | −7.6631 | −9.0018 | −10.5364 |
|  | std | 1.61 × 10^−6 | 1.61 × 10^−4 | 3.62 | 3.14 | 9.97 × 10^−8 |
|  | rank | 9 | 4 | 7 | 5 | 2 |
Table 5. Performance Comparison of Algorithms (PTG, WOA, HHO, COA, TSA) on CEC2022.

| Function | Metric | PTG | WOA | HHO | COA | TSA |
| --- | --- | --- | --- | --- | --- | --- |
| F1 | mean | 8.32 × 10^3 | 2.43 × 10^4 | 8.79 × 10^3 | 4.56 × 10^4 | 1.75 × 10^4 |
|  | std | 3.16 × 10^3 | 6.46 × 10^3 | 3.41 × 10^3 | 1.22 × 10^4 | 6.45 × 10^3 |
|  | rank | 1 | 7 | 2 | 10 | 6 |
|  | time | 3.50 | 8.42 × 10^−2 | 2.45 × 10^−1 | 1.87 × 10^−1 | 1.34 × 10^−1 |
| F2 | mean | 4.52 × 10^2 | 5.66 × 10^2 | 4.94 × 10^2 | 3.09 × 10^3 | 7.74 × 10^2 |
|  | std | 1.49 × 10^1 | 6.11 × 10^1 | 3.87 × 10^1 | 8.97 × 10^2 | 2.06 × 10^2 |
|  | rank | 1 | 7 | 4 | 10 | 8 |
|  | time | 3.63 | 8.45 × 10^−2 | 2.41 × 10^−1 | 1.59 × 10^−1 | 1.45 × 10^−1 |
| F3 | mean | 6.00 × 10^2 | 6.68 × 10^2 | 6.60 × 10^2 | 6.8079 × 10^2 | 6.62 × 10^2 |
|  | std | 3.12 × 10^−1 | 1.17 × 10^1 | 7.05 | 1.07 × 10^1 | 1.44 × 10^1 |
|  | rank | 1 | 8 | 6 | 9 | 7 |
|  | time | 3.58 | 1.85 × 10^−1 | 5.15 × 10^−1 | 4.18 × 10^−1 | 2.34 × 10^−1 |
| F4 | mean | 8.28 × 10^2 | 9.2405 × 10^2 | 8.9052 × 10^2 | 9.7333 × 10^2 | 9.5783 × 10^2 |
|  | std | 1.04 × 10^1 | 3.00 × 10^1 | 1.33 × 10^1 | 1.75 × 10^1 | 3.12 × 10^1 |
|  | rank | 1 | 7 | 4 | 10 | 8 |
|  | time | 3.55 | 1.24 × 10^−1 | 3.14 × 10^−1 | 2.37 × 10^−1 | 1.63 × 10^−1 |
| F5 | mean | 9.15 × 10^2 | 3.99 × 10^3 | 2.87 × 10^3 | 3.50 × 10^3 | 4.16 × 10^3 |
|  | std | 1.57559 × 10^1 | 1.36 × 10^3 | 2.5840 × 10^2 | 3.9193 × 10^2 | 1.52 × 10^3 |
|  | rank | 1 | 9 | 5 | 8 | 10 |
|  | time | 3.48 | 1.24 × 10^−1 | 3.34 × 10^−1 | 2.43 × 10^−1 | 1.66 × 10^−1 |
| F6 | mean | 6.93 × 10^3 | 7.62 × 10^5 | 1.07 × 10^5 | 2.22 × 10^9 | 1.65 × 10^8 |
|  | std | 5.72 × 10^3 | 1.14 × 10^6 | 6.18 × 10^4 | 1.07 × 10^9 | 4.61 × 10^8 |
|  | rank | 1 | 6 | 4 | 10 | 8 |
|  | time | 3.54 | 9.58 × 10^−2 | 2.70 × 10^−1 | 2.02 × 10^−1 | 1.55 × 10^−1 |
| F7 | mean | 2.04 × 10^3 | 2.21 × 10^3 | 2.19 × 10^3 | 2.22 × 10^3 | 2.25 × 10^3 |
|  | std | 1.22 × 10^1 | 8.44 × 10^1 | 6.68 × 10^1 | 4.82 × 10^1 | 9.53 × 10^1 |
|  | rank | 1 | 7 | 6 | 8 | 10 |
|  | time | 3.97 | 2.30 × 10^−1 | 6.05 × 10^−1 | 5.57 × 10^−1 | 2.78 × 10^−1 |
| F8 | mean | 2.23 × 10^3 | 2.27 × 10^3 | 2.28 × 10^3 | 2.43 × 10^3 | 2.41 × 10^3 |
|  | std | 2.61 | 4.86 × 10^1 | 7.74 × 10^1 | 1.36 × 10^2 | 3.84 × 10^2 |
|  | rank | 1 | 4 | 7 | 9 | 8 |
|  | time | 3.96 | 2.74 × 10^−1 | 6.82 × 10^−1 | 6.42 × 10^−1 | 3.23 × 10^−1 |
| F9 | mean | 2.48 × 10^3 | 2.57 × 10^3 | 2.51 × 10^3 | 3.48 × 10^3 | 2.71 × 10^3 |
|  | std | 3.86 × 10^−2 | 4.82 × 10^1 | 3.69 × 10^1 | 3.51 × 10^2 | 1.62 × 10^2 |
|  | rank | 1 | 7 | 4 | 10 | 8 |
|  | time | 3.75 | 2.22 × 10^−1 | 5.64 × 10^−1 | 5.37 × 10^−1 | 2.70 × 10^−1 |
| F10 | mean | 2.68 × 10^3 | 4.69 × 10^3 | 3.76 × 10^3 | 6.24 × 10^3 | 5.03 × 10^3 |
|  | std | 5.45 × 10^2 | 1.09 × 10^3 | 7.84 × 10^2 | 1.45 × 10^3 | 1.24 × 10^3 |
|  | rank | 1 | 7 | 5 | 10 | 9 |
|  | time | 3.70 | 1.91 × 10^−1 | 5.20 × 10^−1 | 4.07 × 10^−1 | 2.30 × 10^−1 |
| F11 | mean | 2.92 × 10^3 | 3.66 × 10^3 | 3.10 × 10^3 | 9.07 × 10^3 | 6.83 × 10^3 |
|  | std | 1.19 × 10^2 | 7.79 × 10^2 | 2.01 × 10^2 | 5.60 × 10^2 | 1.11 × 10^3 |
|  | rank | 1 | 5 | 4 | 9 | 7 |
|  | time | 3.93 | 3.12 × 10^−1 | 6.82 × 10^−1 | 6.44 × 10^−1 | 3.67 × 10^−1 |
| F12 | mean | 2.9471 × 10^3 | 3.1041 × 10^3 | 3.1355 × 10^3 | 3.6458 × 10^3 | 3.3283 × 10^3 |
|  | std | 6.74 × 10^1 | 1.21 × 10^2 | 1.25 × 10^2 | 3.09 × 10^2 | 1.87 × 10^2 |
|  | rank | 1 | 4 | 7 | 10 | 9 |
|  | time | 3.86 | 3.27 × 10^−1 | 7.76 × 10^−1 | 7.27 × 10^−1 | 3.70 × 10^−1 |
Table 6. Performance Comparison of Algorithms (RSA, FATA, MFO, ECO, HO) on CEC2022.

| Function | Metric | RSA | FATA | MFO | ECO | HO |
| --- | --- | --- | --- | --- | --- | --- |
| F1 | mean | 3.30 × 10^4 | 1.11 × 10^4 | 3.88 × 10^4 | 1.10 × 10^4 | 1.41 × 10^4 |
|  | std | 9.03 × 10^3 | 4.71 × 10^3 | 2.08 × 10^4 | 6.95 × 10^3 | 7.06 × 10^3 |
|  | rank | 8 | 4 | 9 | 3 | 5 |
|  | time | 8.05 × 10^−1 | 1.64 × 10^−1 | 1.43 × 10^−1 | 2.64 × 10^−1 | 3.78 |
| F2 | mean | 2.15 × 10^3 | 4.9043 × 10^2 | 5.1656 × 10^2 | 4.9377 × 10^2 | 5.18 × 10^2 |
|  | std | 7.4513 × 10^2 | 3.5924 × 10^1 | 6.3323 × 10^1 | 5.5792 × 10^1 | 5.1648 × 10^1 |
|  | rank | 9 | 2 | 5 | 3 | 6 |
|  | time | 8.16 × 10^−1 | 1.46 × 10^−1 | 1.30 × 10^−1 | 2.58 × 10^−1 | 3.66 |
| F3 | mean | 6.8129 × 10^2 | 6.5411 × 10^2 | 6.2229 × 10^2 | 6.3994 × 10^2 | 6.53 × 10^2 |
|  | std | 7.5845 | 9.1956 | 8.6708 | 1.1516 × 10^1 | 7.5945 |
|  | rank | 10 | 5 | 2 | 3 | 4 |
|  | time | 9.60 × 10^−1 | 2.43 × 10^−1 | 2.29 × 10^−1 | 3.56 × 10^−1 | 4.14 |
| F4 | mean | 9.6710 × 10^2 | 9.0017 × 10^2 | 9.0193 × 10^2 | 8.8036 × 10^2 | 8.74 × 10^2 |
|  | std | 1.235 × 10^1 | 1.2315 × 10^1 | 2.8429 × 10^1 | 2.0832 × 10^1 | 1.1217 × 10^1 |
|  | rank | 9 | 5 | 6 | 3 | 2 |
|  | time | 8.87 × 10^−1 | 1.78 × 10^−1 | 1.67 × 10^−1 | 2.78 × 10^−1 | 3.83 |
| F5 | mean | 3.34 × 10^3 | 2.50 × 10^3 | 2.89 × 10^3 | 2.22 × 10^3 | 2.16 × 10^3 |
|  | std | 2.8232 × 10^2 | 3.1463 × 10^2 | 1.10 × 10^3 | 4.3103 × 10^2 | 2.81 × 10^2 |
|  | rank | 7 | 4 | 6 | 3 | 2 |
|  | time | 9.38 × 10^−1 | 1.85 × 10^−1 | 1.55 × 10^−1 | 2.78 × 10^−1 | 3.88 |
| F6 | mean | 1.15 × 10^9 | 1.28 × 10^5 | 1.07 × 10^7 | 7.47 × 10^3 | 9.65 × 10^3 |
|  | std | 1.15 × 10^9 | 1.48 × 10^5 | 1.66 × 10^7 | 5.81 × 10^3 | 7.98 × 10^3 |
|  | rank | 9 | 5 | 7 | 2 | 3 |
|  | time | 8.16 × 10^−1 | 1.74 × 10^−1 | 1.42 × 10^−1 | 2.83 × 10^−1 | 3.73 |
| F7 | mean | 2.23 × 10^3 | 2.14 × 10^3 | 2.12 × 10^3 | 2.14 × 10^3 | 2.14 × 10^3 |
|  | std | 3.25 × 10^1 | 2.57 × 10^1 | 5.84 × 10^1 | 5.66 × 10^1 | 2.27 × 10^1 |
|  | rank | 9 | 4 | 2 | 5 | 3 |
|  | time | 9.93 × 10^−1 | 2.86 × 10^−1 | 2.72 × 10^−1 | 3.86 × 10^−1 | 4.22 |
| F8 | mean | 2.56 × 10^3 | 2.27 × 10^3 | 2.26 × 10^3 | 2.27 × 10^3 | 2.24 × 10^3 |
|  | std | 5.60 × 10^2 | 5.38 × 10^1 | 4.81 × 10^1 | 7.23 × 10^1 | 8.34 |
|  | rank | 10 | 5 | 3 | 6 | 2 |
|  | time | 1.09 | 3.33 × 10^−1 | 3.18 × 10^−1 | 4.80 × 10^−1 | 4.37 |
| F9 | mean | 3.14 × 10^3 | 2.51 × 10^3 | 2.52 × 10^3 | 2.48 × 10^3 | 2.53 × 10^3 |
|  | std | 2.23 × 10^2 | 1.83 × 10^1 | 4.86 × 10^1 | 2.67 | 2.85 × 10^1 |
|  | rank | 9 | 3 | 5 | 2 | 6 |
|  | time | 9.87 × 10^−1 | 2.91 × 10^−1 | 2.78 × 10^−1 | 3.84 × 10^−1 | 4.20 |
| F10 | mean | 4.70 × 10^3 | 3.48 × 10^3 | 3.88 × 10^3 | 3.70 × 10^3 | 3.74 × 10^3 |
|  | std | 1.78 × 10^3 | 9.80 × 10^2 | 9.58 × 10^2 | 1.08 × 10^3 | 1.06 × 10^3 |
|  | rank | 8 | 2 | 6 | 3 | 4 |
|  | time | 9.76 × 10^−1 | 2.42 × 10^−1 | 2.26 × 10^−1 | 3.30 × 10^−1 | 4.06 |
| F11 | mean | 8.59 × 10^3 | 5.39 × 10^3 | 1.88 × 10^4 | 3.00 × 10^3 | 3.03 × 10^3 |
|  | std | 4.67 × 10^2 | 1.17 × 10^3 | 1.99 × 10^4 | 1.62 × 10^2 | 1.79 × 10^2 |
|  | rank | 8 | 6 | 10 | 2 | 3 |
|  | time | 1.06 | 3.45 × 10^−1 | 3.56 × 10^−1 | 4.49 × 10^−1 | 4.38 |
| F12 | mean | 3.2313 × 10^3 | 3.1168 × 10^3 | 2.9545 × 10^3 | 3.0011 × 10^3 | 3.1103 × 10^3 |
|  | std | 1.81 × 10^2 | 6.83 × 10^1 | 1.31 × 10^1 | 3.72 × 10^1 | 9.81 × 10^1 |
|  | rank | 8 | 6 | 2 | 3 | 5 |
|  | time | 1.05 | 3.61 × 10^−1 | 3.63 × 10^−1 | 4.56 × 10^−1 | 4.46 |
Table 7. Results of the Friedman rank test.

| Algorithm | CEC2022 | CEC2005 |
| --- | --- | --- |
| PTG | 1.0000 | 2.5217 |
| WOA | 6.5833 | 6.8478 |
| HHO | 4.7500 | 5.5435 |
| COA | 9.4167 | 4.0652 |
| TSA | 8.1667 | 8.7391 |
| RSA | 8.6667 | 6.8696 |
| FATA | 4.3333 | 4.8261 |
| MFO | 5.2500 | 6.9130 |
| ECO | 3.0833 | 5.5217 |
| HO | 3.7500 | 3.1522 |
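Table 7 reports Friedman mean ranks: each algorithm is ranked on every benchmark function (rank 1 = best mean error, ties receive the average of the tied ranks), and the ranks are then averaged over all functions, so a lower value indicates a more consistently strong algorithm. A minimal pure-Python sketch of this rank-averaging step (the helper name `mean_ranks` and the toy data are illustrative, not from the paper):

```python
def mean_ranks(scores):
    """scores[p][a]: result of algorithm a on problem p (lower is better).
    Returns the Friedman-style average rank of each algorithm."""
    n_prob, n_alg = len(scores), len(scores[0])
    totals = [0.0] * n_alg
    for row in scores:
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            # find the group of algorithms tied at this value
            j = i
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of the tied group
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / n_prob for t in totals]

# toy data: 2 problems, 3 algorithms
print(mean_ranks([[1.0, 2.0, 3.0], [1.0, 3.0, 2.0]]))  # [1.0, 2.5, 2.5]
```

With the 23 CEC2005 (or 12 CEC2022) mean-error rows as input, the output is exactly the kind of per-algorithm average rank listed in Table 7.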
Table 8. Comparison results of PTG and advanced algorithm in compression spring design.

| Algorithm | d | D | P | Best | Mean | Std |
| --- | --- | --- | --- | --- | --- | --- |
| PTG | 5.17 × 10^−2 | 3.57 × 10^−1 | 1.13 × 10^1 | 1.266506 × 10^−2 | 1.27 × 10^−2 | 6.57 × 10^−5 |
| WOA | 5.16 × 10^−2 | 3.54 × 10^−1 | 1.14 × 10^1 | 1.266523 × 10^−2 | 1.35 × 10^−2 | 1.08 × 10^−3 |
| HHO | 5.27 × 10^−2 | 3.81 × 10^−1 | 1.00 × 10^1 | 1.268195 × 10^−2 | 1.37 × 10^−2 | 1.04 × 10^−3 |
| COA | 5.13 × 10^−2 | 3.47 × 10^−1 | 1.19 × 10^1 | 1.268248 × 10^−2 | 3.83 × 10^−1 | 2.02 |
| TSA | 5.21 × 10^−2 | 3.66 × 10^−1 | 1.08 × 10^1 | 1.267502 × 10^−2 | 1.29 × 10^−2 | 2.19 × 10^−4 |
| RSA | 5.00 × 10^−2 | 3.10 × 10^−1 | 1.50 × 10^1 | 1.319337 × 10^−2 | 2.26 × 10^−2 | 2.46 × 10^−2 |
| FATA | 5.13 × 10^−2 | 3.48 × 10^−1 | 1.18 × 10^1 | 1.266776 × 10^−2 | 1.29 × 10^−2 | 3.20 × 10^−4 |
| MFO | 5.17 × 10^−2 | 3.58 × 10^−1 | 1.12 × 10^1 | 1.266508 × 10^−2 | 1.32 × 10^−2 | 6.45 × 10^−4 |
| ECO | 5.20 × 10^−2 | 3.65 × 10^−1 | 1.08 × 10^1 | 1.266712 × 10^−2 | 1.33 × 10^−2 | 7.13 × 10^−4 |
| HO | 5.17 × 10^−2 | 3.58 × 10^−1 | 1.12 × 10^1 | 1.268031 × 10^−2 | 1.30 × 10^−2 | 4.97 × 10^−4 |
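The compression spring problem in Table 8 searches over the wire diameter d, mean coil diameter D, and number of active coils P to minimise spring weight. A sketch of the widely used formulation of this benchmark (the function names are ours, and we assume the paper solves this standard variant):

```python
def spring_weight(d, D, N):
    # Objective: spring weight proportional to (N + 2) * D * d^2.
    return (N + 2) * D * d * d

def spring_constraints(d, D, N):
    # Standard g_i(x) <= 0 constraint set: minimum deflection, shear
    # stress, surge frequency, and outer-diameter limit.
    g1 = 1 - D**3 * N / (71785 * d**4)
    g2 = ((4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
          + 1 / (5108 * d**2) - 1)
    g3 = 1 - 140.45 * d / (D**2 * N)
    g4 = (d + D) / 1.5 - 1
    return [g1, g2, g3, g4]

# PTG's reported solution (Table 8), rounded: d=0.0517, D=0.357, P=11.3
w = spring_weight(0.0517, 0.357, 11.3)
print(w)  # roughly 0.0127, matching the reported best of 1.2665e-2
```

Evaluating the rounded PTG solution reproduces the reported objective to about three significant figures, and all four constraints are satisfied up to rounding of the printed variables.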
Table 9. Comparison results of PTG and advanced algorithm in structural design of welded beams.

| Algorithm | h | l | t | b | Best | Mean | Std |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PTG | 2.06395 × 10^−1 | 3.456223 | 9.036624 | 2.05730 × 10^−1 | 1.724399 | 1.724400 | 4.5766 × 10^−5 |
| WOA | 1.85061 × 10^−1 | 4.047632 | 9.066901 | 2.05579 × 10^−1 | 1.771565 | 2.199500 | 6.84900 × 10^−1 |
| HHO | 2.03490 × 10^−1 | 3.497705 | 9.092332 | 2.05502 × 10^−1 | 1.732927 | 1.929300 | 1.60300 × 10^−1 |
| COA | 2.10017 × 10^−1 | 3.304639 | 9.249082 | 2.10017 × 10^−1 | 1.778173 | 2.524100 | 4.29600 × 10^−1 |
| TSA | 2.07034 × 10^−1 | 3.430732 | 9.089197 | 2.05678 × 10^−1 | 1.731993 | 1.741000 | 5.70000 × 10^−3 |
| RSA | 2.48422 × 10^−1 | 3.132122 | 8.357501 | 2.47278 × 10^−1 | 1.918210 | 2.287000 | 2.84500 × 10^−1 |
| FATA | 2.06643 × 10^−1 | 3.450326 | 9.039539 | 2.05739 × 10^−1 | 1.724936 | 1.726900 | 3.30000 × 10^−3 |
| MFO | 2.06394 × 10^−1 | 3.456249 | 9.036624 | 2.05730 × 10^−1 | 1.724399 | 1.779500 | 9.01000 × 10^−2 |
| ECO | 2.04278 × 10^−1 | 3.502072 | 9.036455 | 2.05737 × 10^−1 | 1.726881 | 1.886600 | 1.61700 × 10^−1 |
| HO | 2.07432 × 10^−1 | 3.445703 | 9.008467 | 2.07018 × 10^−1 | 1.729205 | 1.971400 | 2.36400 × 10^−1 |
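For the welded beam problem in Table 9, the objective is the fabrication cost in the weld height h, weld length l, bar thickness t, and bar width b, and the tightest constraint is usually the weld shear stress. A hedged sketch of the standard formulation (P = 6000 lb and L = 14 in are the constants commonly used for this benchmark; we assume the paper uses the same variant):

```python
import math

P, L = 6000.0, 14.0  # applied load (lb) and overhang length (in)

def cost(h, l, t, b):
    # Fabrication cost: weld material term + bar material term.
    return 1.10471 * h * h * l + 0.04811 * t * b * (L + l)

def shear_stress(h, l, t, b):
    # Maximum shear stress in the weld (constraint tau <= 13,600 psi).
    tau_p = P / (math.sqrt(2.0) * h * l)               # primary (direct) shear
    M = P * (L + l / 2.0)                              # bending moment at the weld
    R = math.sqrt(l * l / 4.0 + ((h + t) / 2.0) ** 2)  # distance to weld extremity
    J = 2.0 * math.sqrt(2.0) * h * l * (l * l / 12.0 + ((h + t) / 2.0) ** 2)
    tau_pp = M * R / J                                 # secondary (torsional) shear
    return math.sqrt(tau_p**2 + 2.0 * tau_p * tau_pp * l / (2.0 * R) + tau_pp**2)

# PTG's reported design from Table 9
x = (0.206395, 3.456223, 9.036624, 0.205730)
print(cost(*x))  # roughly 1.724, matching the reported best
```

Plugging PTG's reported design into this formulation reproduces the best cost of about 1.724 and drives the weld shear stress close to its 13,600 psi limit, which is the expected behaviour at the constrained optimum.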
Table 10. Comparison results of PTG and advanced algorithm in pressure vessel design.

| Algorithm | T_s | T_h | R | Best | Mean | Std |
| --- | --- | --- | --- | --- | --- | --- |
| PTG | 0 | 0 | 4.031961872 × 10^1 | 7.535014127298541 × 10^2 | 7.535014127298540 × 10^2 | 1.16 × 10^−13 |
| WOA | 0 | 0 | 4.031961875 × 10^1 | 7.535014137 × 10^2 | 8.291512734 × 10^2 | 2.2867 × 10^2 |
| HHO | 0 | 0 | 4.031961872 × 10^1 | 7.535014127 × 10^2 | 1.107287597 × 10^3 | 3.5883 × 10^2 |
| COA | 0 | 0 | 4.032086747 × 10^1 | 7.535480873 × 10^2 | 1.442735757 × 10^3 | 7.6608 × 10^2 |
| TSA | 3.792 × 10^−1 | 1.986 × 10^−1 | 4.031967744 × 10^1 | 7.535036074 × 10^2 | 9.015074985 × 10^2 | 3.0102 × 10^2 |
| RSA | 6.0034 × 10^−10 | 4.6716 × 10^−24 | 4.032531618 × 10^1 | 7.537143781 × 10^2 | 7.639325533 × 10^2 | 1.48806 × 10^1 |
| FATA | 0 | 0 | 4.031966742 × 10^1 | 7.535032326 × 10^2 | 7.535234204 × 10^2 | 2.03 × 10^−2 |
| MFO | 0 | 0 | 4.031961872 × 10^1 | 7.535014127298540 × 10^2 | 7.535014127298539 × 10^2 | 1.16 × 10^−13 |
| ECO | 1.5404 × 10^−7 | 7.76 × 10^−2 | 4.031961872 × 10^1 | 7.535014127298540 × 10^2 | 8.767789495 × 10^2 | 2.8037 × 10^2 |
| HO | 2.709 × 10^−1 | 0 | 4.031961875 × 10^1 | 7.535014139 × 10^2 | 8.274684181 × 10^2 | 2.2569 × 10^2 |
Table 11. Comparison results of PTG and advanced algorithm in reducer design.

| Algorithm | l_1 | l_2 | d_1 | Best | Mean | Std |
| --- | --- | --- | --- | --- | --- | --- |
| PTG | 7.3 | 7.3 | 3.227269460 | 3.3305403487 × 10^4 | 3.3305403 × 10^4 | 1.4801 × 10^−11 |
| WOA | 7.3 | 7.3 | 3.227269518 | 3.3305403487 × 10^4 | 3.3312162 × 10^4 | 1.03881 × 10^1 |
| HHO | 7.3 | 7.3 | 3.227269464 | 3.3305403487 × 10^4 | 3.3305410 × 10^4 | 3.6 × 10^−2 |
| COA | 8.3 | 7.3 | 3.225813184 | 3.3313710127 × 10^4 | 3.3511267 × 10^4 | 1.4624 × 10^2 |
| TSA | 7.3 | 7.3 | 3.227253000 | 3.3305403488 × 10^4 | 3.3305405 × 10^4 | 2.2 × 10^−3 |
| RSA | 7.3 | 8.3 | 3.298115800 | 3.3330883055 × 10^4 | 3.3432371 × 10^4 | 7.28574 × 10^1 |
| FATA | 7.3 | 7.3 | 3.227204486 | 3.3305403493 × 10^4 | 3.3305405 × 10^4 | 3.6 × 10^−3 |
| MFO | 7.3 | 7.3 | 3.227269478 | 3.3305403487 × 10^4 | 3.3306053 × 10^4 | 3.5549 |
| ECO | 7.3 | 7.3 | 3.227269496 | 3.3305403487 × 10^4 | 3.3306053 × 10^4 | 3.5549 |
| HO | 7.3 | 7.3 | 3.227269500 | 3.3305403487 × 10^4 | 3.3306503 × 10^4 | 3.2239 |
Hu, D.; Han, X.; Gao, M.; Chu, Y.; Zhou, T. Personalized-Template-Guided Intelligent Evolutionary Algorithm. Appl. Sci. 2025, 15, 8642. https://doi.org/10.3390/app15158642