Article

IA-DTPSO: A Multi-Strategy Integrated Particle Swarm Optimization for Predicting the Total Urban Water Resources in China

1 School of Architecture, Chang’an University, Xi’an 710061, China
2 College of Architecture, Xi’an University of Architecture and Technology, Xi’an 710055, China
3 Xi’an International Science and Technology Cooperation Base: International Joint Research Center for Green Urban Rural and Land Space Smart Construction Technology Innovation, Xi’an 710000, China
* Authors to whom correspondence should be addressed.
Biomimetics 2025, 10(4), 233; https://doi.org/10.3390/biomimetics10040233
Submission received: 11 February 2025 / Revised: 20 March 2025 / Accepted: 1 April 2025 / Published: 8 April 2025

Abstract

In order to overcome the drawbacks of low search efficiency and susceptibility to local optima in PSO, this study proposes a multi-strategy particle swarm optimization (PSO) with information acquisition, referred to as IA-DTPSO. Firstly, Sobol sequence initialization is applied to the particles to achieve a more uniform initial population distribution. Secondly, an update scheme based on information acquisition is established, which adopts different information processing methods according to the evaluation status of particles at different stages to improve the accuracy of information shared between particles. Then, Spearman’s correlation coefficient (SCC) is introduced to determine the dimensions that require reverse solution position updates, and the tangent flight strategy is used to diversify the inherently single update method of PSO. Finally, a dimension learning strategy is introduced to strengthen the activity of individual particles, thereby improving the diversity of the entire particle population. To analyze IA-DTPSO comprehensively, its exploration and exploitation (ENE) capability is first validated on CEC2022. Subsequently, the performance of IA-DTPSO and other algorithms is validated on different dimensions of CEC2022; the results show that IA-DTPSO wins on 58.33% and 41.67% of the functions in 10 and 20 dimensions of CEC2022, respectively. Finally, IA-DTPSO is employed to optimize the parameters of the time-dependent gray model (1,1,r,ξ,Csz) (TDGM (1,1,r,ξ,Csz)) and applied to simulate and predict the total urban water resources (TUWRs) in China. Using four error evaluation indicators, this method is compared with other algorithms and existing models. The results show that the total MAPE (%) value obtained by simulation after IA-DTPSO optimization is 5.9439, the smallest error among all comparison methods and models, verifying the effectiveness of this method for predicting TUWRs in China.

1. Introduction

Optimization is the procedure of determining the most efficient solution to practical problems. For an optimization problem, it is necessary to first clarify its three basic elements: the number of decision variables, the optimization range of the variables, and the objective function. At present, optimization techniques have been employed in various domains such as business, engineering, science, and medicine [1,2,3,4]. However, classical numerical methods struggle to offer accurate solutions for non-convex problems with non-linear constraints, leading to extended computation time [5]. Meta-heuristics are key tools for developing efficient optimizers that can effectively solve challenging real-world problems [6,7]. Metaheuristic algorithms (MAs) can effectively traverse the solution space in uncertain environments to recognize global optima or find an approximate optimum [8]. This means that although MAs cannot guarantee an exact solution, they can reliably generate a near-optimal solution [9]. MAs are highly scalable, can be designed and implemented directly, and can surmount challenges related to the enormous sophistication of mathematical inference [10,11]. Therefore, when traditional optimization techniques cannot produce exact solutions, MAs become an alternative method for quickly solving large-scale optimization problems [12]. The main features of MAs can be generalized as below:
  • It is an approximate method that is not specific to a particular problem.
  • It is a process of continuously learning towards the optimal solution through trial and error.
  • It demonstrates significant versatility and robustness.
  • It is an optimization logic used to determine approximate solutions to complex global optimization problems.
All population-based MAs possess these characteristics, with differences only in the operators and mechanisms used. In addition, MAs include two essential search tactics, namely ENE [13,14]. Exploration is the ability to search the solution space on a global scale, which is associated with avoiding and escaping local optima. Exploitation is about making optimal decisions among promising nearby solutions to improve the local solution quality of MAs [15]. Therefore, whether a MA achieves excellent performance depends on whether an appropriate balance can be struck between these two strategies. This typically involves how search operators are used to effectively extract and utilize information, thereby generating more promising solutions to problems [16].
Thirty years ago, MAs represented by PSO [17] gained widespread recognition in the research community, and more and more MAs rapidly emerged under its influence. So far, thousands of MAs have been made public. According to different sources of inspiration, MAs can be broadly divided into four categories: swarm-behavior inspired, human-behavior inspired, evolution-phenomena inspired, and nature-science-phenomena inspired.
(i)
Swarm-behavior inspired: Swarm-behavior-inspired algorithms are techniques that mimic collaborative behavior in biological social systems to solve problems. They organize a large number of simple individual units (such as ants, bees, bird swarm agents) together, allowing them to interact and learn in complex environments, and jointly search for optimal solutions. In recent years, newly proposed population-based algorithms include: Whale Optimization Algorithm (WOA) [18], Northern Goshawk Optimization (NGO) [19], Bottlenose Dolphin Optimizer (BDO) [20], Nutcracker Optimization Algorithm (NOA) [21], Mantis Search Algorithm (MSA) [22], Genghis Khan Shark Optimizer (GKSO) [23], Black-winged kite algorithm (BKA) [24], Secretary Bird Optimization Algorithm (SBOA) [25], and Horned Lizard Optimization Algorithm (HLOA) [26].
(ii)
Human-behavior inspired: Human-behavior-inspired algorithms typically draw inspiration from human creativity, artistic thinking, and problem-solving approaches, simulating the process of humans making a series of decisions through team collaboration. In recent years, this type of algorithm includes: Enterprise Development Optimizer (EDO) [27], Hiking Optimization Algorithm (HOA) [28], Great Wall Construction Algorithm (GWCA) [29], Football Team Training Algorithm (FTTA) [30], Alpine Skiing Optimization (ASO) [31], Information Acquisition Optimizer (IAO) [32], Adolescent Identity Search Algorithm (AISA) [33], and Information Decision Search Algorithm (IDSE) [34].
(iii)
Evolution-phenomena inspired: Evolution-phenomena-inspired algorithms are a type of computational technology that draws inspiration from the theory of biological evolution. These mainly include Genetic Algorithm (GA) [35], Genetic Programming (GP) [36], Evolutionary Programming (EP) [37], Evolutionary Strategy (ES) [38], Differential Evolution (DE) algorithm [39], Biogeography-based optimization (BBO) [40], Clonal Selection Algorithm (CSA) [41], and Alpha Evolution (AE) [42].
(iv)
Nature-science-phenomena inspired: Nature-science-phenomena-inspired algorithms mainly come from observations of natural phenomena and scientific laws in various fields. The latest achievements in this research direction mainly include: Tangent Search Algorithm (TSA) [43], Kepler Optimization Algorithm (KOA) [44], Exponential-Trigonometric Optimization (ETO) algorithm [45], Artemisinin Optimization (AO) algorithm [46], Weighted Average Algorithm (WAA) [5], Newton-Raphson-based Optimizer (NRBO) [47], Polar Lights Optimization (PLO) [48], and FATA morgana algorithm (FATA) [49].
In addition to these classic MAs, many improved versions of MAs have emerged in this field and been applied in various practical applications. Yan et al. [50] developed an enhanced human memory optimizer to solve engineering optimization problems. Hu et al. [51] studied a multi-strategy DE algorithm for the smooth path planning of multi-scale robots and obtained a motion path with higher smoothness. Gobashy et al. [7] used WOA to solve the problem of spontaneous potential energy anomalies caused by 2D tilted plates of infinite horizontal length. Li et al. [52] proposed an improved seagull optimizer for fault location in distribution networks. Jamal et al. [53] proposed an improved Pelican optimization algorithm to solve non-convex stochastic optimal power flow problems in power systems, thereby reducing generation costs and emissions.
Although the successive emergence of various new MAs has added great vitality to the field of intelligent optimization, existing MAs still have several limitations, which can be generalized as below:
  • Difficulty in achieving the optimal balance of ENE, causing MAs to fall into local optima.
  • Multiple operators are typically used to approximate the optimum, complicating the search scenario.
  • Performance degradation in high-dimensional search space.
As one of the classic MAs, PSO has hundreds or thousands of improved versions, but it is difficult to select the best among them because of the No Free Lunch (NFL) theorem, to which all MAs are subject [54]. This theorem emphasizes that no MA can be universally applicable to all types of problems. That is to say, different MAs may perform better on specific types of problems but may not be as effective on others. Furthermore, the basis vectors of search operators in PSO typically determine the starting point of the search and are sampled straightforwardly from the solution set instead of being adaptively selected. In addition, particles rely excessively on obtaining information from two historical best positions while lacking the capacity to gain more information from other particles. Finally, PSO still exhibits an imbalance of ENE. In response to these shortcomings of PSO, and combined with the NFL theorem, this study proposes a multi-strategy improved PSO (IA-DTPSO), which is based on the information acquisition strategy and involves four other improvement strategies for targeted auxiliary improvement. Compared with the large number of existing PSO variants, this method has a more novel structure and a more refined update method.
In recent years, combining MAs with predictive models has become a hot topic. However, existing prediction models have the characteristics of slow technological updates and slow application of new technologies. Usually, these models are mostly based on historical data, which makes it difficult to cope with complex changes in the future, thereby affecting prediction accuracy and efficiency. At present, this field has achieved relatively satisfactory optimization results by utilizing various hybrid MAs or other machine learning methods to process models. However, these methods often have low universality, and there is still room for improvement in terms of prediction accuracy and efficiency. In order to overcome these limitations, this study combines the proposed IA-DTPSO with the simulation and prediction of China’s TUWRs. Through the distinctive update method of PSO variants, the parameters of TDGM (1,1,r,ξ,Csz) are optimized step by step to achieve the solution of the simulation and prediction problem of China’s TUWRs.
This study’s main contributions are as below:
(i)
A multi-strategy PSO with information acquisition, referred to as IA-DTPSO, is proposed and the entire optimization process is modeled.
(ii)
The good ENE ability of IA-DTPSO is validated on CEC2022.
(iii)
IA-DTPSO is compared with 11 other algorithms on different dimensions of CEC2022, verifying the superiority of IA-DTPSO.
(iv)
IA-DTPSO and seven other algorithms are employed to optimize parameters of TDGM (1,1,r,ξ,Csz) and applied to predict TUWRs in China. In addition, the IA-DTPSO optimized model is compared with three existing models, and the results indicate that the model optimized by IA-DTPSO achieves the minimum error among the four error evaluation metrics in both comparisons.
This study’s remaining parts are arranged as below: Section 2 reviews PSO’s basic framework. Section 3 gives an optimization model for IA-DTPSO. Section 4 analyzes and discusses the experimental results of IA-DTPSO on CEC2022. Section 5 utilizes the proposed IA-DTPSO to optimize TDGM (1,1,r,ξ,Csz) and applies it to simulate and predict TUWRs in China. Section 6 provides a summary of this study and prospects for the future.

2. The Classic PSO

PSO [16] regards bird flocks as a group of particles with self-activity trajectories, and the activity trajectories of particles depend on their velocity v and position x. Then, the calculation formula for v at time t + 1 is shown in Equation (1) [16]:
v_i^{t+1} = ω × v_i^t + c_1 × r_1 × (PB_i^t − x_i^t) + c_2 × r_2 × (GB^t − x_i^t),
where ω is the inertia weight, c1 and c2 represent individual and social cognitive factors, respectively, and r1 and r2 are random numbers between [0, 1]. Assuming there are N populations, PB i t and GB t represent the historical best positions found by the i-th and all N particles up to time t.
At time t + 1, x is updated according to the current position x i t of the particle and the rate of change v i t + 1 towards the next position, as shown in Equation (2) [16]:
x_i^{t+1} = x_i^t + v_i^{t+1}.
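As a concrete illustration, the two update rules above can be sketched in a few lines of NumPy (a vectorized sketch, not the authors' implementation; the parameter defaults ω = 0.7, c1 = c2 = 1.5 are illustrative assumptions):

```python
import numpy as np

def pso_step(x, v, pb, gb, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One classic PSO update: velocity (Eq. (1)) then position (Eq. (2)).

    x, v, pb: (N, D) arrays of positions, velocities, and personal bests;
    gb: (D,) global best position.  Parameter defaults are illustrative.
    """
    rng = rng or np.random.default_rng()
    n, d = x.shape
    r1 = rng.random((n, d))   # individual cognition randomness
    r2 = rng.random((n, d))   # social cognition randomness
    v_new = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)   # Eq. (1)
    x_new = x + v_new                                          # Eq. (2)
    return x_new, v_new
```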

3. The Proposed IA-DTPSO

In this section, we introduce five strategies to optimize PSO and propose an improved PSO (IA-DTPSO) method. The proposed IA-DTPSO is described in detail below.

3.1. Sobol Sequence Initialization

The distribution of the initial solutions is an important prerequisite affecting the convergence speed of MAs. A homogeneously spread initial population can effectively improve the search efficiency of MAs. Therefore, this article uses a Sobol sequence [55] to initialize the population instead of the random initialization scheme in PSO. The Sobol sequence is a low-discrepancy sequence that uses a deterministic quasi-random number sequence instead of a pseudo-random one to fill points as evenly as possible into a multidimensional hypercube, thereby producing wider coverage of the solution space. The initial population positions generated by the Sobol sequence are given in Equation (3) [55]:
X_i = lb + Sobol_i × (ub − lb),   Sobol_i ∈ [0, 1],
where Soboli denotes the i-th randomly generated number in the sequence. ub and lb denote the upper and lower bounds, respectively. The population spatial distribution of Sobol sequence initialization is shown in Figure 1.
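For readers who want to reproduce Equation (3), the sketch below maps a low-discrepancy sequence into [lb, ub]. It uses a Halton sequence (per-dimension radical inverses over distinct prime bases) as a dependency-free stand-in for Sobol; in practice a library generator such as scipy.stats.qmc.Sobol would be used:

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def quasi_random_init(n, lb, ub, primes=(2, 3, 5, 7, 11, 13)):
    """Low-discrepancy initial population mapped into [lb, ub] (Equation (3)).

    Halton sequence used as a stand-in for Sobol; supports up to len(primes)
    dimensions in this sketch.
    """
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    pts = np.array([[radical_inverse(i + 1, primes[j]) for j in range(d)]
                    for i in range(n)])
    return lb + pts * (ub - lb)      # X_i = lb + seq_i * (ub - lb)
```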

3.2. Information Acquisition Strategy

The information acquisition strategy collects and acquires useful information through three key stages: information gathering, information filtering and evaluation, and information analysis and organization.

3.2.1. Information Gathering

Information gathering is a crucial step in gaining valuable feedback. Therefore, particles use various approaches and utilize a variety of channels to gather information, forming a more complete initial information system. This procedure can be expressed as [32]:
x_i^{t+1} = x_i^t + μ × (x_{r1}^t − x_{r2}^t),
where x r 1 t and x r 2 t are two randomly generated particles at time t. μ is used as a random number between [−1, 1] to control the strength and direction of particle information collection. Generally speaking, collecting more information is not necessarily better. A large amount of information may lead to new candidate solutions exceeding the global optimum, thereby weakening the exploitation of the equation.
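Equation (4) amounts to a scaled difference of two randomly chosen particles; a minimal NumPy sketch, with the two "randomly generated" particles drawn from the current population (one plausible reading):

```python
import numpy as np

def gather_information(x, i, mu=None, rng=None):
    """Information-gathering move (Equation (4)) for particle i.

    Two distinct random particles are differenced and scaled by mu in
    [-1, 1], which controls the strength and direction of collection.
    """
    rng = rng or np.random.default_rng()
    n = x.shape[0]
    if mu is None:
        mu = rng.uniform(-1, 1)
    r1, r2 = rng.choice(n, size=2, replace=False)
    return x[i] + mu * (x[r1] - x[r2])
```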

3.2.2. Information Filtering and Evaluation

After the particles have collected the information, they need to quickly identify the relevant useful information, and this key mechanism can be expressed as
x_i^{t+1} = x_i^t − σ × r_3 × (x_{rN}^t − x_i^t),   if r_3 < 0.5,
x_i^{t+1} = x_i^t + σ × r_3 × (x_{rN}^t − x_i^t),   else,
where r3 is a random number between [0, 1], and rN is a particle randomly selected from N populations.  σ is the error generated when subjective factors filter and evaluate information, defined by Equations (6)–(9) [32]:
σ = cos(π/2 × γ)^Ω,
Ω = 2 × [3.468 × r_4 × (1 − r_5) × arccos(r_6 × 10^−4)],
γ = λ + sin(π/4 × it/T) + log10(it/T)/8,
λ = cos(2 × r_7 + 1) × (1 − it/T),
where Ω is the subjective influencing factor, which serves as a quantitative indicator of particle subjectivity. It may make two extreme judgments on the information, thereby changing the result of information acquisition. Owing to changes in subjective states, the evaluations of different particles, or of the same particle at different time points, may vary. Another key factor is γ, which characterizes the algorithm’s ability to self-adjust based on the information quality at different iteration stages. Here, λ represents the information quality factor, which prevents the algorithm from neglecting the basic requirements of information quality due to excessive optimization iteration dynamics. Furthermore, ri (i = 4, 5, 6, 7) are random numbers between [0, 1] in these equations.
Figure 2 shows a schematic diagram of information filtering and evaluation, from which it can be seen that particles exhibit adaptive adjustment behavior when evaluating different information, which not only effectively eliminates unconventional information but also significantly improves the overall quality of information.

3.2.3. Information Analysis and Organization

After filtering out the information, particles need to seek out existing valuable information. They increase the likelihood of obtaining the optimal target information by converting the convertible information identified in the preceding stage into valuable information. This operation can be shown by Equations (10) to (11) [32]:
x_i^{t+1} = GB_i^t × cos(π/2 × δ^{1/3}) − r_8 × ((1/N) × Σ_{i=1}^{N} GB_i^t − GB_i^t),   if λ ≥ 0.5,
x_i^{t+1} = GB_i^t × cos(π/2 × δ^{1/3}) − 0.8 × (r_9 × r_10 × (1/N) × Σ_{i=1}^{N} GB_i^t − (2 × r_11 − 1) × GB_i^t),   else,
δ = 2^(γ − 2).
where ri (i = 8, 9, 10, 11) is a random number between [0, 1]. δ indicates a controlling factor, the trend of which is shown in Figure 3.
During this process, particles can dynamically optimize the depth and breadth of this stage according to the quality of the information, thus increasing the accuracy of the target information body. In the next iteration, the new information subject completely replaces the previous one. Figure 4 depicts the entire framework of the information acquisition strategy.

3.3. SCC Method

SCC is a method used to evaluate the statistical dependence between two ranking sequences (here are two candidate solution positions) [56]. By measuring the consistency of the particle ranking differences between two candidate solutions, the statistical correlation between these two rankings can be evaluated. The expression of this method is shown in Equation (12) [56]:
S_C = 1 − (1/(D × (D² − 1))) × Σ_{i=1}^{N} 6 × z_i²,
where z_i = m_i − n_i, and m_i and n_i are the rankings of the N populations in the two sequences, respectively. When two sequences are completely identical, they are considered positively correlated. In this case, m_i = n_i for each individual i, so Σ_i z_i² = 0 and S_C = 1. Similarly, when there is inconsistency between the two sequences, it can be inferred that S_C < 1.
In IA-DTPSO, the SCC calculation shown in Equation (12) is used to measure the correlation between GB and each particle in each dimension, which determines the dimensions that require an inverse solution position update. The specific calculation formulas are given in Equations (13)–(15) [56]:
x_i^{t+1}(τ) = ub(τ) + lb(τ) − x_i^t(τ),   ∀ τ ∈ {j : z_i(j) > a},
z_i(j) = |GB(j) − x_i(j)|,
a = 2 − 2 × it/T,
where j is the j-th dimension on the D-dimensional problem. This targeted non-complete reverse operation helps the algorithm improve computational accuracy while maintaining its fast convergence.
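Equations (12)–(15) can be sketched as follows. This is one plausible reading, in which SCC is computed over the dimensions of a single particle against GB, and the opposition flip is applied only to the weakly correlated dimensions:

```python
import numpy as np

def spearman(u, v):
    """Spearman's correlation between two D-dimensional positions (Eq. (12))."""
    d = u.size
    ru = np.argsort(np.argsort(u))   # rank of each dimension's value in u
    rv = np.argsort(np.argsort(v))
    z = ru - rv
    return 1 - 6 * np.sum(z**2) / (d * (d**2 - 1))

def selective_opposition(x_i, gb, lb, ub, it, T):
    """Reverse only weakly correlated dimensions (Equations (13)-(15)).

    A dimension is flipped to its opposite position when its distance to the
    global best exceeds the shrinking threshold a = 2 - 2*it/T, and only if
    the particle is non-positively correlated with GB overall.
    """
    a = 2 - 2 * it / T                       # Eq. (15)
    z = np.abs(gb - x_i)                     # Eq. (14)
    x_new = x_i.copy()
    if spearman(x_i, gb) <= 0:
        mask = z > a
        x_new[mask] = ub[mask] + lb[mask] - x_i[mask]   # Eq. (13)
    return x_new
```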

3.4. Tangent Flight Strategy

Due to the fact that there is only one update method in PSO that calculates the next position based on the rate of change, this single-search method often carries the risk of convergence stagnation. Therefore, based on the PSO update method, this section utilizes the tangent flight strategy to compensate for this deficiency. The updated formula obtained by combining Equations (1) and (2) and introducing the tangent flight strategy is shown in Equation (16) [43]:
x_i^{t+1} = x_i^t + v_i^{t+1},   if r_12 < 0.5,
x_i^{t+1} = x_i^t + step × tan(θ),   else,
where r12 is a random number between [0, 1]. In tangent flight, all motion equations are controlled by a global step of the form step × tan(θ), where θ is a random number in [0, π/2). step is the size of the move, and its calculation formula is
step = sign(r_13 − 0.5) × norm(GB^t) × log10(1 + 10 × D × N/(it × T)),
where r13 is a random number between [0, 1], the sign function controls the direction of ENE, and norm(·) is the Euclidean norm.
As shown in Figure 5a,b, the steps generated by tangent flight span a large interval with low randomness, which keeps the search distance stable during the iteration process and greatly shortens the algorithm’s optimization cycle. In addition, particles can obtain more information in this large-step search and thus escape the constraints of local optima.
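A sketch of the combined update of Equations (16) and (17); reading the fraction in the step formula as 10 × D × N/(it × T) is an assumption about the original typesetting:

```python
import numpy as np

def tangent_flight(x_i, v_i, gb, it, T, n, rng=None):
    """Position update with tangent flight (Equations (16)-(17)).

    With probability 0.5 the classic PSO move x + v is kept; otherwise the
    particle takes a tangent-flight jump step * tan(theta).
    """
    rng = rng or np.random.default_rng()
    d = x_i.size
    if rng.random() < 0.5:
        return x_i + v_i                                   # classic PSO move
    r13 = rng.random()
    step = (np.sign(r13 - 0.5) * np.linalg.norm(gb)
            * np.log10(1 + 10 * d * n / (it * T)))          # Eq. (17)
    # theta drawn from [0, pi/2), capped slightly below pi/2 for safety
    theta = rng.random() * np.pi / 2 * 0.999
    return x_i + step * np.tan(theta)
```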

3.5. Dimension Learning Strategy

Particles in PSO are passively confined to the influence of PB and GB and cannot extract more effective information from other particles. In the introduced dimension learning strategy, particles can learn from the behavior of their neighbors. The radius between a particle and other candidate particles is calculated based on the Euclidean distance [57]:
R_i^t = ‖x_i^t − x_i^{t+1}‖,
The neighborhood of x i t can be expressed as:
U_i^t = { x_k^t | Δ_i(x_i^t, x_k^t) ≤ R_i^t, x_k^t ∈ N },
where Δ i is the Euclidean distance between x i t and x k t . Once the neighborhood of x i t is constructed, a neighbor particle can be randomly selected from the j-th dimensional neighborhood for updating using Equation (20):
x_i^{t+1}(j) = x_i^t(j) + r_14 × (x_u^t(j) − x_r^t(j)),
where r14 is a random number between [0, 1], x u t ( j ) is a randomly selected neighbor from the neighborhood U i t , and x r t ( j ) is a randomly selected particle from N populations.
Dimension learning strategy increases the algorithm’s exploration capacity and ability to retain population diversity by increasing the interaction between particles and their neighbors and introducing other randomly selected particles from the population.
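The three steps of the dimension learning strategy (Equations (18)–(20)) can be sketched per particle as follows; x_next_i stands for the particle's candidate next position used to define the radius (an assumed argument name):

```python
import numpy as np

def dimension_learning(x, i, x_next_i, rng=None):
    """Dimension learning update (Equations (18)-(20)) for particle i.

    Neighbors are all particles whose Euclidean distance to x[i] does not
    exceed the radius R = ||x[i] - x_next_i||; each dimension then learns
    from one random neighbor and one random population member.
    """
    rng = rng or np.random.default_rng()
    n, d = x.shape
    radius = np.linalg.norm(x[i] - x_next_i)               # Eq. (18)
    dist = np.linalg.norm(x - x[i], axis=1)
    neighbors = np.flatnonzero(dist <= radius)             # Eq. (19), never empty
    out = x[i].copy()
    for j in range(d):                                     # Eq. (20), per dimension
        u = rng.choice(neighbors)                          # random neighbor
        r = rng.integers(n)                                # random population member
        out[j] = x[i, j] + rng.random() * (x[u, j] - x[r, j])
    return out
```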
To present the structure and process of IA-DTPSO more intuitively, Algorithm 1 provides IA-DTPSO’s pseudo-code, and its flowchart is shown in Figure 6.
Algorithm 1: IA-DTPSO’s pseudo-code
Start IA-DTPSO
Input: Particles’ number (N) and iterations (T)
Output: The optimum
1:  Use Equation (3) for Sobol sequence initialization and store the current optimum
2: While (it < T) Do
3:    For i = 1 to N Do
4:   Use Equation (4) to form the initial information system
5:    End For
6:    Update the parameter a using Equation (15)
7:    Calculate the Spearman’s correlation coefficient Sc using Equations (12) and (14)
8:    For i = 1 to N Do
9:   For j = 1 to D Do
10:    If Sc <= 0
11:     For τ j : z i ( j ) > a
12:      Use Equation (13) to determine the dimension that requires reverse solution position update
13:     End For
14:    End If
15:   End For
16:    End For
17:    For i = 1 to N Do
18:   Calculate the movement size step using Equation (17)
19:    Use Equation (16) for the tangent flight or PSO update scheme to randomly update particles’ position
20:    End For
21:    For i = 1 to N Do
22: Exploration
23:   Update relevant parameters using Equations (6)–(9)
24:   Use Equation (5) for information filtering and evaluation process
25: End
26: Exploitation
27:   Update parameter δ using Equation (11)
28:   Use Equation (10) for information analysis and organization
29: End
30:    End For
31:    For i = 1 to N Do
32:   Update radius R i t using Equation (18)
33:   Construct the neighborhood U i t using Equation (19)
34:   For j = 1 to D Do
35:    Update a randomly selected neighbor particle on the neighborhood using Equation (20)
36:   End For
37:    End For
38:    Compute fitness values and store the current optimum
39:    it = it + 1
40: End While
41: Output the optimum
End IA-DTPSO

3.6. Time Complexity Analysis of IA-DTPSO

In this section, we discuss the time complexity of IA-DTPSO, which mainly depends on four parts: Sobol sequence initialization O(N × D), fitness evaluation O(5 × N), fitness ranking O(N × logN), and updates O(5 × N × D). Therefore, the time complexity of IA-DTPSO is as follows:
O(IA-DTPSO) = O(N × D + N × T × (5 + log N + 5 × D)).

4. Experimental Results and Discussion

4.1. Experimental Design and Parameter Setting

In this section, we test IA-DTPSO against 11 different types of comparison algorithms on the 10 and 20 dimensions of CEC2022 [23]. This test set provides a series of challenging test functions. As shown in Algorithm 1, by inputting variables N and T, position updates and iterations proceed continuously until the optimum is output; the entire process reflects the solving performance of IA-DTPSO on these functions. Therefore, CEC2022 is an effective tool for fair comparison between different MAs. The comparison algorithms can be categorized into the following two types:
(1)
New MAs proposed in recent years: RUNge Kutta Optimizer (RUN) [58], Northern Goshawk Optimization (NGO) [19], Nutcracker Optimization Algorithm (NOA) [21], Genghis Khan Shark Optimizer (GKSO) [23], and IVY Algorithm (IVYA) [59].
(2)
PSO [17] and its various improved versions: Elite Archives-driven PSO (EAPSO) [60], Gaussian Quantum-behaved PSO (G-QPSO) [61], Hybrid algorithm based on Jellyfish Search PSO (HJSPSO) [62], single-objective variant PSO (PSO-sono) [63], and Multi-strategy PSO incorporating Snow Ablation Optimizer (SAO-MPSO) [64].
Table 1 displays the parameter settings for each MA. To avoid the influence of unexpected factors on the experiment, all MAs are independently run 20 times on each test function. Meanwhile, N is set to 100 and T to 1000. The optimization results are evaluated through six metrics: Best, Worst, Mean, Wilcoxon Rank Sum Test (WRST), Friedman Test (FT), and Rank. In addition, the results are assessed through three error indicators: standard deviation (Std), root mean square error (RMSE), and relative error (δ). All tests are conducted on the equipment specified in Table 2.

4.2. ENE Behavior Analysis

In order to further confirm that the improvement strategy proposed in this study is promising and effective in solving potential problems, this section discusses the trend of ENE rate changes of IA-DTPSO on CEC2022. The relevant formulas are as follows [64]:
Div_j = (1/N) × Σ_{i=1}^{N} |median(x^j) − x_i^j|,
Div = (1/D) × Σ_{j=1}^{D} Div_j,
E1% = (Div/Div_max) × 100%,
E2% = (|Div − Div_max|/Div_max) × 100%,
where median(x^j) denotes the median of all particles on the j-th dimension. Div_max indicates the maximum diversity. E1% and E2% denote the exploration rate and exploitation rate, respectively.
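Equations (21)–(24) reduce to a median-based diversity measure; a minimal NumPy sketch (here div_max is taken as the largest diversity observed so far in the run):

```python
import numpy as np

def ene_rates(x, div_max):
    """Exploration/exploitation percentages (Equations (21)-(24)).

    x: (N, D) population.  Diversity is the mean absolute deviation of each
    dimension from its median, averaged over dimensions.
    """
    med = np.median(x, axis=0)                       # per-dimension median
    div_j = np.mean(np.abs(med - x), axis=0)         # Eq. (21)
    div = np.mean(div_j)                             # Eq. (22)
    explore = div / div_max * 100                    # Eq. (23)
    exploit = abs(div - div_max) / div_max * 100     # Eq. (24)
    return explore, exploit
```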
The two intersecting nonlinear curves shown in Figure 7 represent the ENE change rate of IA-DTPSO on a subset of the CEC2022 functions. From these graphs, it can be seen that ENE rapidly approaches the intersection point at the beginning of iteration, which is attributed to the fact that the initial population under Sobol sequence initialization can effectively improve the search efficiency of particles. Subsequently, ENE reaches the first equilibrium point, where the two curves intertwine and the ENE rate is 50%. This is because the update Equations (5) and (9) in the information acquisition strategy effectively balance ENE. As shown in F1, F7, and F10 in Figure 7, after the first intersection of ENE, IA-DTPSO quickly transitions from exploration to exploitation, focusing on local exploitation and completing the entire search in subsequent iterations. This is due to the targeted non-complete reverse operation of SCC, which helps IA-DTPSO improve convergence accuracy. However, ENE may fluctuate and even intersect multiple times on certain functions. As shown in F6, ENE intersects several times at 50% and eventually stabilizes. This is because, in tangent flight, the random step frequency and size generated by particles can obtain more information, and this random execution of exploration or exploitation operations can effectively break free from the constraints of local optima. The dimension learning strategy enhances the algorithm’s exploration ability and its ability to maintain population diversity by increasing the interaction between particles and their neighbors. Specifically, as shown in F11, after 500 iterations the proportion of exploration continues to increase, indicating that IA-DTPSO is still searching for the global optimum. In summary, the proposed strategies play their respective roles in solving CEC2022 and jointly promote the convergence of IA-DTPSO towards the theoretical optimum.

4.3. Experimental Results and Analysis

Table 3 displays the statistical results of IA-DTPSO and the other MAs on the 10-dimensional CEC2022, with the optimal data highlighted in bold. In addition, the Theoretical Optimal (TO) values for F1–F12 in CEC2022 are 300, 400, 600, 800, 900, 1800, 2000, 2200, 2300, 2400, 2600, and 2700, respectively. All values that reach the theoretical optimum in the experimental results of this section are replaced by “TO”. For an algorithm, the more times it reaches TO, the better its performance. Firstly, IA-DTPSO achieves smaller values on 7 out of 12 test functions, accounting for 58.33% of CEC2022. Secondly, IA-DTPSO performs particularly well on the uni-modal function (F1), hybrid functions (F6–F8), and composition functions (F9–F12), on all of which it ranks at least in the top 2. Finally, according to the final ranking results, IA-DTPSO has a mean rank of 1.833 and a mean FT of 2.681, both leading the other comparison MAs. This denotes that IA-DTPSO has a great ability to solve complex optimization problems, and a smaller FT also means that IA-DTPSO has better stability. Whereas FT is designed for repeated testing, WRST tests the pairing between two groups. Table 3 presents the WRST results under the condition of significance level α = 0.05. The symbol “−” denotes the number of functions on which a comparison algorithm is inferior to IA-DTPSO; “+” is the number of functions with the opposite outcome; “=” represents the number of functions with performance similar to IA-DTPSO. The final numbers of functions that are superior/similar/inferior to IA-DTPSO for each comparison algorithm are 2/0/10, 0/1/11, 4/3/5, 0/0/12, 0/1/11, 0/0/12, 1/2/9, 0/0/12, 3/2/7, 0/0/12, and 0/2/10, respectively. The WRST results show that NOA, IVYA, G-QPSO, and PSO-sono obtain the same test data, and these algorithms do not outperform IA-DTPSO on any of the functions.
RUN and GKSO also obtain the same test data: they perform similarly to IA-DTPSO on only one function and are inferior to it on the others. Although NGO and HJSPSO, ranked second and third, respectively, outperform IA-DTPSO on some functions, they are inferior to it on more; NGO and HJSPSO can therefore still be considered competitive. In addition, PSO, ranked seventh, is inferior to IA-DTPSO on 10 functions, indicating that IA-DTPSO substantially improves on the optimization ability of PSO. However, PSO still outperforms IA-DTPSO on two functions, and further improving IA-DTPSO's performance on these functions can be considered in the future.
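The pairwise WRST comparison above can be sketched as follows. This is a minimal illustration with made-up run data, using the normal approximation of the Wilcoxon rank-sum statistic (and assuming no tied values); the paper's actual test data are in Table 3.

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Assumes no tied values; returns the z statistic and the p-value."""
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    r1 = sum(pooled.index(v) + 1 for v in a)         # rank sum of sample a
    mu = n1 * (n1 + n2 + 1) / 2                      # mean of r1 under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # std of r1 under H0
    z = (r1 - mu) / sigma
    return z, math.erfc(abs(z) / math.sqrt(2))       # two-sided p-value

# Illustrative best-error samples over 5 repeated runs (not the paper's data).
ia_dtpso = [300.0, 300.4, 300.1, 300.3, 300.2]
rival = [305.3, 304.8, 306.1, 305.0, 305.7]
z, p = rank_sum_test(ia_dtpso, rival)
# "-": rival significantly worse, "+": significantly better, "=": no difference
mark = ("-" if sum(rival) > sum(ia_dtpso) else "+") if p < 0.05 else "="
print(f"p = {p:.4f}, mark = {mark}")
```

Here the two samples are clearly separated, so the test marks the rival algorithm as significantly inferior.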
To further compare the performance gap between IA-DTPSO and the other MAs, Table 4 provides the error data for IA-DTPSO and the other algorithms in solving the 10-dimensional CEC2022. According to Table 4, the Std values of PSO are relatively high on F1 and F6, indicating that PSO is unstable across test functions. Meanwhile, the RMSE and δ values of PSO on F6 are also high, indicating that PSO has low accuracy on complex problems and inconsistent performance across runs. Besides PSO, NOA, IVYA, and PSO-sono also perform poorly on complex problems, with low stability and accuracy. EAPSO and NGO perform well on simple problems but are still slightly inferior to IA-DTPSO on complex ones. IA-DTPSO attains relatively small RMSE and δ values on most functions, suggesting high accuracy, with most experimental results close to the TO solution.
Figure 8 shows the convergence curves of IA-DTPSO and the other MAs on the 12 functions. IA-DTPSO converges quickly toward the global optimum in the early stage and finds the TO solution within about 100 iterations. This is because the Sobol sequence initialization generates good initial solutions for the particle population, giving IA-DTPSO a strong optimization ability. In addition, when solving F4, IA-DTPSO still shows a downward convergence trend after 1000 iterations, indicating that it retains the capacity to find the global optimum. This is also due to the information acquisition strategy, which maintains the diversity between particles and results in a steady increase in population diversity. In the box plots shown in Figure 9, IA-DTPSO's boxes are relatively narrow and positioned low, indicating that IA-DTPSO has good robustness and high accuracy.
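The Sobol initialization credited above is built from per-dimension direction numbers; its first coordinate reduces to the base-2 radical-inverse (van der Corput) sequence, which is enough to illustrate why such points cover a range more evenly than random draws. The sketch below is a simplified one-dimensional stand-in, not the initializer used in the paper (a full multi-dimensional implementation is available in, e.g., scipy.stats.qmc.Sobol).

```python
def van_der_corput(n, base=2):
    """First n points of the radical-inverse sequence in the given base.
    In base 2 this is the first coordinate of the Sobol sequence."""
    seq = []
    for i in range(n):
        x, denom = 0.0, 1.0
        while i > 0:
            i, rem = divmod(i, base)
            denom *= base
            x += rem / denom
        seq.append(x)
    return seq

# Map the unit-interval points onto a search range [lb, ub] for one dimension.
lb, ub = -100.0, 100.0
positions = [lb + u * (ub - lb) for u in van_der_corput(8)]
print(van_der_corput(8))  # evenly fills [0, 1): 0, 0.5, 0.25, 0.75, 0.125, ...
```

Each new point lands in the largest remaining gap, so even a small initial population spreads over the whole search interval rather than clustering.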
Table 5 presents the statistical results of IA-DTPSO and the other MAs on the 20-dimensional CEC2022. Firstly, IA-DTPSO achieves smaller values on 5 of the 12 test functions, accounting for 41.67% of CEC2022. Secondly, IA-DTPSO performs particularly well on the uni-modal function (F1) and the hybrid functions (F6–F8), ranking in the top two on all of them. Finally, according to the final ranking results, IA-DTPSO has a mean rank of 2.417 and a mean FT of 3.242, both leading the comparison algorithms. This indicates that IA-DTPSO retains strong optimization ability as the problem dimension increases. Meanwhile, Table 5 presents the WRST results of IA-DTPSO and the other MAs. The numbers of functions on which each comparison MA is superior/similar/inferior to IA-DTPSO are 1/1/10, 0/2/10, 3/2/7, 0/0/12, 0/2/10, 1/0/11, 3/4/5, 0/0/12, 4/1/7, 0/0/12, and 4/3/5, respectively. From these WRST results, NOA, G-QPSO, and PSO-sono obtain the same outcome: none of them outperforms IA-DTPSO on any function. As on the 10-dimensional CEC2022, RUN and GKSO also obtain the same test data; they perform similarly to IA-DTPSO on only one function and are inferior on the others. In addition, PSO, ranked eighth, outperforms IA-DTPSO on only one function and is inferior on ten, indicating that IA-DTPSO substantially improves on the optimization level of PSO. It is worth mentioning that HJSPSO and SAO-MPSO, ranked fourth and fifth, respectively, outperform IA-DTPSO on more functions than NGO and EAPSO, ranked second and third, respectively, do. Combined with the mean rank results, there is no appreciable discrepancy in the performance of these MAs.
Table 6 provides the error data for IA-DTPSO and the other algorithms in solving the 20-dimensional CEC2022. Based on the best and worst values in Table 5, IA-DTPSO has smaller Std values on most functions. Combined with the Mean data in Table 5, IA-DTPSO also attains smaller RMSE and δ values on most functions. Thus, IA-DTPSO has high accuracy and consistently approaches the TO solution with minimal error, making it the algorithm with the best overall performance. In addition, EAPSO and SAO-MPSO perform well on simple problems but are slightly inferior to IA-DTPSO on complex ones. It is worth mentioning that PSO, NOA, IVYA, and PSO-sono still perform poorly on high-dimensional complex problems, showing low stability and accuracy.
Figure 10 shows the convergence curves of IA-DTPSO and the other MAs on the 20-dimensional CEC2022. IA-DTPSO has a faster convergence rate on most functions, and its curves reach lower final values within the limited number of iterations. In addition, when solving F3, F4, and F7, IA-DTPSO still shows a downward convergence trend at termination, indicating that it retains the capacity to find the global optimum. Figure 11 shows the box plots of IA-DTPSO and the other MAs on the 20-dimensional CEC2022; IA-DTPSO's boxes are relatively narrow and positioned low, indicating good robustness and high accuracy.
Figure 12 compares, as line graphs, the mean rank and mean FT of IA-DTPSO and the other algorithms on the different dimensions of CEC2022, where (a) is the comparison in 10 dimensions and (b) the comparison in 20 dimensions. In both sub-graphs, IA-DTPSO has the smallest mean rank and mean FT, owing to its stable performance in both dimensions under the influence of the dimension learning strategy. In addition, because of the different statistical approaches, each algorithm's mean FT and mean rank differ slightly; the smaller this difference, the more stable the algorithm's behavior. Finally, Figure 13 shows the stacked bar chart of ranks of IA-DTPSO and the other algorithms across dimensions, in which IA-DTPSO obtains the lowest cumulative column height. In summary, the comprehensive performance of IA-DTPSO is clearly better than that of the compared MAs.

5. Simulation and Prediction of TUWRs in China Based on IA-DTPSO and TDGM(1,1,r,ξ,Csz)

The superiority of the proposed IA-DTPSO was verified on the CEC2022 test suite above. In this section, IA-DTPSO is used to optimize TDGM (1,1,r,ξ,Csz), and the optimized TDGM (1,1,r,ξ,Csz) model is applied to simulate and predict TUWRs in China.

5.1. TDGM(1,1,r,ξ,Csz)

Definition 1.
Let a data sequence $X^{(0)} = (x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n))$ be given; $X^{(r)}$ is the $r$-order accumulation sequence of $X^{(0)}$, as shown in Equation (26):
$$X^{(r)} = \bigl(x^{(r)}(1), x^{(r)}(2), \ldots, x^{(r)}(n)\bigr), \quad r \in \mathbb{R}^{+},$$
where the calculation formula for $x^{(r)}(n)$ is shown in Equation (27):
$$x^{(r)}(n) = \sum_{i=1}^{n} \frac{\Gamma(r+n-i)}{\Gamma(n-i+1)\,\Gamma(r)}\, x^{(0)}(i),$$
where the $\Gamma$ function is utilized to optimize the space of the order $r$ in the model. Figure 14 shows the graph of the $\Gamma$ function.
It is not difficult to derive from Definition 1 the expression for the inverse $r$-order accumulation sequence $X^{(-r)}$ of $X^{(0)}$, which will not be repeated here.
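Equation (27) can be sketched directly from the Gamma-function form above. This is an illustrative implementation, not the authors' code; for $r = 1$ every coefficient reduces to 1 and the result is the ordinary first-order accumulated generating operation.

```python
from math import gamma

def frac_accumulate(x0, r):
    """r-order accumulation X^(r) of X^(0), following Eq. (27):
    x^(r)(n) = sum_{i=1}^{n} Gamma(r+n-i) / (Gamma(n-i+1) * Gamma(r)) * x^(0)(i)."""
    n = len(x0)
    return [sum(gamma(r + k - i) / (gamma(k - i + 1) * gamma(r)) * x0[i - 1]
                for i in range(1, k + 1))
            for k in range(1, n + 1)]

print(frac_accumulate([1.0, 2.0, 3.0], 1))    # -> [1.0, 3.0, 6.0] (cumulative sum)
print(frac_accumulate([1.0, 2.0, 3.0], 0.5))  # half-order (fractional) accumulation
```

Non-integer orders $r$ interpolate smoothly between "no accumulation" and the ordinary cumulative sum, which is exactly the extra degree of freedom the optimizer tunes.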
Definition 2.
$Z^{(r)}$ is the mean sequence generated by consecutive neighbors of $X^{(r)}$, as shown in Equation (28):
$$Z^{(r)} = \bigl(z^{(r)}(2), z^{(r)}(3), \ldots, z^{(r)}(n)\bigr), \quad r \in \mathbb{R}^{+},$$
where the calculation method of $z^{(r)}(n)$ is shown in Equation (29):
$$z^{(r)}(n) = \xi \times x^{(r)}(n) + (1-\xi) \times x^{(r)}(n-1).$$
Definition 3.
If $X^{(0)}$, $X^{(r)}$, and $Z^{(r)}$ have the same definitions as above, then
$$x^{(r-1)}(k) + a\,z^{(r)}(k) = kb + c, \quad k = 1, 2, \ldots, n.$$
Equation (30) is denoted as TDGM (1,1,r,ξ,Csz).
Theorem 1.
Let $\hat{C} = (a, b, c)^{T}$ be computed as shown in Equation (31):
$$\hat{C} = (a, b, c)^{T} = (B^{T}B)^{-1}B^{T}Y,$$
where $Y$ and $B$ are $(n-1) \times 1$ and $(n-1) \times 3$ matrices, respectively, expressed as
$$Y = \begin{bmatrix} x^{(r-1)}(2) \\ x^{(r-1)}(3) \\ \vdots \\ x^{(r-1)}(n) \end{bmatrix}, \quad B = \begin{bmatrix} -z^{(r)}(2) & 2 & 1 \\ -z^{(r)}(3) & 3 & 1 \\ \vdots & \vdots & \vdots \\ -z^{(r)}(n) & n & 1 \end{bmatrix},$$
where the sign of the first column of $B$ follows from moving $a\,z^{(r)}(k)$ to the right-hand side of Equation (30).
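Theorem 1 is an ordinary least-squares fit and can be sketched with NumPy as below. The data are synthetic, generated from known parameters $a = 0.5$, $b = 2$, $c = 1$ so the recovery can be checked; this is an illustration of Equation (31), not the paper's implementation.

```python
import numpy as np

def estimate_params(x_rm1, z_r):
    """C_hat = (a, b, c)^T = (B^T B)^{-1} B^T Y, cf. Eq. (31).
    x_rm1: x^(r-1)(k) for k = 2..n; z_r: background values z^(r)(k) for k = 2..n."""
    m = len(x_rm1)                                        # m = n - 1 rows
    Y = np.asarray(x_rm1, dtype=float)
    B = np.column_stack([-np.asarray(z_r, dtype=float),   # coefficient of a
                         np.arange(2, m + 2, dtype=float),  # time term k
                         np.ones(m)])                       # constant term
    return np.linalg.inv(B.T @ B) @ B.T @ Y               # (a, b, c)

# Synthetic check: x^(r-1)(k) = -a*z^(r)(k) + k*b + c with a=0.5, b=2, c=1.
z = [1.0, 2.0, 5.0, 3.0]                     # k = 2, 3, 4, 5
x = [-0.5 * zk + 2.0 * k + 1.0 for k, zk in enumerate(z, start=2)]
a, b, c = estimate_params(x, z)
print(round(a, 6), round(b, 6), round(c, 6))  # recovers 0.5, 2.0, 1.0
```

In practice `np.linalg.lstsq(B, Y)` is numerically preferable to forming $(B^{T}B)^{-1}$ explicitly; the explicit form is kept here only to mirror Equation (31).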
Theorem 2.
The $r$-order time response function of TDGM (1,1,r,ξ,Csz) is
$$\hat{x}^{(r)}(k) = C_{sz} \times \alpha_{2}^{\,k-1} + \sum_{g=0}^{k-2} \bigl[(k-g) \times \beta_{2} + \gamma_{2}\bigr] \times \alpha_{2}^{\,g},$$
where $\alpha_{2} = \dfrac{1 - a \times (1-\xi)}{1 + \xi \times a}$, $\beta_{2} = \dfrac{b}{1 + \xi \times a}$, $\gamma_{2} = \dfrac{c}{1 + \xi \times a}$, and
$$\hat{x}^{(0)}(k) = \bigl(\hat{x}^{(r)}\bigr)^{(-r)}(k) = \sum_{i=0}^{k-1} (-1)^{i}\, \frac{\Gamma(r+1)}{\Gamma(i+1)\,\Gamma(r-i+1)}\, \hat{x}^{(r)}(k-i).$$
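The inverse accumulation in the last equation of Theorem 2 can likewise be sketched. The coefficient $\Gamma(r+1)/(\Gamma(i+1)\Gamma(r-i+1))$ is the generalized binomial coefficient $C(r, i)$, which is zero whenever $r-i+1$ is a non-positive integer; the code guards that case explicitly because `math.gamma` raises at those poles. This is an illustrative sketch, not the authors' code; for $r = 1$ it reduces to plain first differencing.

```python
from math import gamma

def gen_binom(r, i):
    """Generalized binomial coefficient C(r, i); zero at the Gamma poles."""
    t = r - i + 1
    if t <= 0 and t == int(t):
        return 0.0
    return gamma(r + 1) / (gamma(i + 1) * gamma(t))

def frac_difference(xr, r):
    """Inverse r-order accumulation: recovers X^(0) from X^(r)."""
    return [sum((-1) ** i * gen_binom(r, i) * xr[k - 1 - i] for i in range(k))
            for k in range(1, len(xr) + 1)]

# r = 1: plain first differences undo the cumulative sum [1, 3, 6] -> [1, 2, 3].
print(frac_difference([1.0, 3.0, 6.0], 1))
```

Applied after the time response function, this step converts the fitted accumulated series back into the original data scale.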
The proof process of TDGM (1,1,r,ξ,Csz) is described in reference [65].

5.2. Investigation Data Analysis

TUWRs refer to the surface and underground water production formed by urban precipitation, that is, the sum of surface runoff and precipitation infiltration recharge [66]. China's total water resources are abundant, but they are unevenly distributed and scarce per capita [67]. With the acceleration of urbanization, the demand for water resources has risen sharply. To improve prediction accuracy and water resource utilization efficiency, this section selects the TUWRs in China from 2004 to 2023 for simulation and prediction. Table 7 presents the total urban water resources data in China over the past 20 years, sourced from the National Bureau of Statistics (https://data.stats.gov.cn/, accessed on 20 March 2025). As the input data for this method, 75% of the data (2004–2018) are used as the training set and 25% (2019–2023) as the test set.
Figure 15 shows the distribution of the proportion of TUWRs in China from 2004 to 2023. It can be seen that China's TUWRs do not increase linearly over time; the annual totals are influenced by actual social conditions. Figure 16 shows the growth rate of TUWRs in China, from which it can be seen that growth was most pronounced from 2009 to 2013, while the growth rate in the past two years has been relatively slow.

5.3. Model Evaluation Criteria

This study uses four error evaluation metrics to gauge the predictive performance of TDGM (1,1,r,ξ,Csz): the Absolute Percentage Error (APE), the simulation MAPE (MAPEsimulation), the prediction MAPE (MAPEprediction), and the total Mean Absolute Percentage Error (MAPE). These indicators are defined in Equations (35)–(38) [68]:
$$\mathrm{APE} = \left| \frac{\hat{x}(k) - x(k)}{x(k)} \right| \times 100\%,$$
$$\mathrm{MAPE}_{\mathrm{simulation}} = \frac{1}{g} \sum_{k=1}^{g} \left| \frac{x(k) - \hat{x}(k)}{x(k)} \right| \times 100\%,$$
$$\mathrm{MAPE}_{\mathrm{prediction}} = \frac{1}{n-g} \sum_{k=g+1}^{n} \left| \frac{x(k) - \hat{x}(k)}{x(k)} \right| \times 100\%,$$
$$\mathrm{MAPE} = \frac{g \times \mathrm{MAPE}_{\mathrm{simulation}} + (n-g) \times \mathrm{MAPE}_{\mathrm{prediction}}}{n},$$
where $\hat{x}(k)$ and $x(k)$ are the fitted value and the raw data, respectively, and $g$ is the number of samples in the training set.
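Equations (35)–(38) can be sketched as below; the series values are made-up placeholders chosen so the arithmetic is easy to follow, not the paper's data.

```python
def mape(actual, fitted):
    """Mean absolute percentage error (in %) over paired series."""
    return sum(abs((a - f) / a) for a, f in zip(actual, fitted)) / len(actual) * 100

# Illustrative split: g = 2 training points, n - g = 1 test point.
train_true, train_fit = [100.0, 110.0], [102.0, 107.8]
test_true, test_fit = [120.0], [126.0]
g, n = len(train_true), len(train_true) + len(test_true)

mape_sim = mape(train_true, train_fit)            # (2% + 2%) / 2 = 2.0
mape_pred = mape(test_true, test_fit)             # 6/120 = 5.0
total = (g * mape_sim + (n - g) * mape_pred) / n  # (2*2 + 1*5) / 3 = 3.0
print(mape_sim, mape_pred, total)
```

The total MAPE is simply the sample-size-weighted average of the simulation and prediction errors, as Equation (38) states.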

5.4. IA-DTPSO and Other Algorithms for Parameter Optimization and Prediction of TDGM (1,1,r,ξ,Csz)

This section uses IA-DTPSO, PSO [17], GKSO [23], IVYA [59], EAPSO [60], HJSPSO [62], PSO-sono [63], and SAO-MPSO [64] to optimize TDGM (1,1,r,ξ,Csz) and applies them to simulate and predict TUWRs in China. Table 8 shows the statistical results of IA-DTPSO and the other MAs for solving TUWRs in China, where SimD denotes the simulated data and ResE the residuals; all data in Table 8 other than the true values for these years are output data. The MAPEsimulation (%) and MAPEprediction (%) of IA-DTPSO on the training and test sets are 5.6366 and 6.8041, respectively, and the total MAPE (%) is 5.9439; all three performance indicators reach the minimum values among the eight compared MAs. By comparison, the MAPEsimulation (%) and MAPEprediction (%) of PSO are 6.6432 and 7.2254, respectively, with a total MAPE (%) of 6.7964. Clearly, IA-DTPSO significantly reduces the error in predicting TUWRs in China compared with PSO. Table 9 shows the parameters calculated by IA-DTPSO and the other algorithms after optimizing TDGM (1,1,r,ξ,Csz). Finally, Table 10 displays the forecast results of TUWRs in China for the next five years: TUWRs in China will reach 3368.846 hundred million cubic meters in 2028, the highest total since 2004. How to reasonably utilize and manage these water resources will be a challenge in the future.

5.5. Four Models for Simulating and Predicting TUWRs in China

In this section, the IA-DTPSO-optimized TDGM (1,1,r,ξ,Csz) and the existing GM (1,1) [69], DGM (1,1) [70], and NGBM (1,1) [71] models are used to simulate and predict TUWRs in China; the statistical results are shown in Table 11. From the output data in the table, the TDGM (1,1,r,ξ,Csz) model optimized by IA-DTPSO (referred to as ID_T) attains a MAPEsimulation (%) of 5.6366 and a MAPEprediction (%) of 6.8041 on the training and test sets, respectively, and a total MAPE (%) of 5.9439; all three performance indicators reach the minimum values among the four compared models, further indicating the superiority of IA-DTPSO. Table 12 presents the forecast results of TUWRs in China for the next five years under the four models. The GM (1,1), DGM (1,1), and NGBM (1,1) models predict a relatively flat growth trend over the next five years, while the forecasts of the TDGM (1,1,r,ξ,Csz) model optimized by IA-DTPSO fluctuate more strongly; this is closely related to the iterative, stochastic nature of meta-heuristic methods and may better reflect the actual fluctuation in the data.

6. Conclusions and Future Prospects

This study proposes a multi-strategy improved PSO (IA-DTPSO). Firstly, Sobol sequences are introduced to produce a wider coverage of the initial particles. Secondly, an update mechanism based on information acquisition methods is established, which applies three different types of information processing operations to different stages. Among them, information gathering is the preparatory stage for particles to obtain useful information, and the remaining two methods correspond to the ENE stage of algorithms. This method improves the overall quality of information obtained by particles. Then, the SCC is introduced to gauge the correlation between GB and particles in each dimension, determining the dimensions that require reverse solution position updates, ensuring that the algorithm improves computational accuracy without sacrificing convergence speed. In addition, the use of tangent flight strategy combined with the original update method of PSO prevents the algorithm from falling into convergence stagnation. Finally, the introduced dimension learning strategy increases the interactivity between particles, enhances the overall particle vitality, and sustains the population diversity.
In the experimental section, the changing trend of the ENE rate of IA-DTPSO on CEC2022 was first discussed, confirming the effectiveness of the improvement strategies proposed in this study. Then, IA-DTPSO was compared with 11 other algorithms on different dimensions of CEC2022, and the results show that IA-DTPSO achieves the minimum mean rank and mean FT in both dimensions. According to the WRST results, in pairwise comparisons IA-DTPSO outperforms the other algorithms on more functions than it loses. Therefore, this study utilizes IA-DTPSO to optimize the TDGM (1,1,r,ξ,Csz) model and applies it to simulate and predict TUWRs in China. Numerical experiments comparing IA-DTPSO with seven other algorithms and three existing models show that TDGM (1,1,r,ξ,Csz) optimized by IA-DTPSO obtains the smallest error on the four error evaluation indicators in both comparison groups, verifying the superiority of the proposed method. Finally, the total urban water resources in China for the next five years are predicted; by 2028, the TUWRs in China will reach 3368.846 hundred million cubic meters. In summary, the proposed IA-DTPSO achieves good results in both the numerical experiments and the application example.
However, IA-DTPSO still has some limitations. It failed to surpass PSO on two functions of the 10-dimensional CEC2022, and its accuracy on F3, F4, and F11 of the 20-dimensional CEC2022 is not high. Further improvement of IA-DTPSO's performance on these functions can be considered in the future. In future work, IA-DTPSO can also be applied to more existing models and compared with the model optimized in this study. IA-DTPSO may not be a perfect optimizer, but it demonstrates applicability in a wide range of fields, and it can be used to solve other complex optimization problems, such as engineering optimization [72], feature selection [73], and path planning [74].

Author Contributions

Conceptualization, J.W. and K.Y.; Methodology, Z.Z., J.W. and K.Y.; Software, Z.Z.; Validation, Z.Z.; Formal analysis, J.W. and K.Y.; Investigation, Z.Z. and K.Y.; Resources, J.W. and K.Y.; Data curation, Z.Z. and J.W.; Writing—original draft, Z.Z., J.W. and K.Y.; Writing—review & editing, Z.Z., J.W. and K.Y.; Visualization, Z.Z.; Supervision, K.Y.; Project administration, J.W. and K.Y.; Funding acquisition, K.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (grant Nos. 52278047), the Key Research Projects of the Shaanxi Provincial Government Research Office in 2024 (grant Nos. 2024HZ1186), and the 2024 Shaanxi Provincial Communist Youth League and Youth Work Research Project (grant Nos. 2024HZ1236).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analyzed during the study are included in this published article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, J.; Wang, Y.; Yang, Y.; Ma, Y.; Dai, Z. Fault diagnosis and intelligent maintenance of industry 4.0 power system based on internet of things technology and thermal energy optimization. Therm. Sci. Eng. Prog. 2024, 55, 102902. [Google Scholar]
  2. Li, Z.; Song, P.; Li, G.; Han, Y.; Ren, X.; Bai, L.; Su, J. AI energized hydrogel design, optimization and application in biomedicine. Mater. Today Bio 2024, 25, 101014. [Google Scholar]
  3. Yaiprasert, C.; Hidayanto, A.N. AI-powered ensemble machine learning to optimize cost strategies in logistics business. Int. J. Inf. Manag. Data Insights 2024, 4, 100209. [Google Scholar]
  4. Hong, Q.; Jun, M.; Bo, W.; Sichao, T.; Jiayi, Z.; Biao, L.; Tong, L.; Ruifeng, T. Application of Data-Driven technology in nuclear Engineering: Prediction, classification and design optimization. Ann. Nucl. Energy 2023, 194, 110089. [Google Scholar]
  5. Cheng, J.; De Waele, W. Weighted average algorithm: A novel meta-heuristic optimization algorithm based on the weighted average position concept. Knowl.-Based Syst. 2024, 305, 112564. [Google Scholar]
  6. Huang, L.; Wang, Y.; Guo, Y.; Hu, G. An Improved Reptile Search Algorithm Based on Lévy Flight and Interactive Crossover Strategy to Engineering Application. Mathematics 2022, 10, 2329. [Google Scholar] [CrossRef]
  7. Gobashy, M.; Abdelazeem, M. Metaheuristics Inversion of Self-Potential Anomalies. In Self-Potential Method: Theoretical Modeling and Applications in Geosciences; Biswas, A., Ed.; Springer International Publishing: Cham, Switzerland, 2021; pp. 35–103. [Google Scholar]
  8. Peng, L.; Cai, Z.; Heidari, A.A.; Zhang, L.; Chen, H. Hierarchical Harris hawks optimizer for feature selection. J. Adv. Res. 2023, 53, 261–278. [Google Scholar]
  9. Zamani, H.; Nadimi-Shahraki, M.H.; Mirjalili, S.; Gharehchopogh, F.S.; Oliva, D. A Critical Review of Moth-Flame Optimization Algorithm and Its Variants: Structural Reviewing, Performance Evaluation, and Statistical Analysis. Arch. Comput. Methods Eng. 2024, 31, 2177–2225. [Google Scholar]
  10. Sahoo, S.K.; Houssein, E.H.; Premkumar, M.; Saha, A.K.; Emam, M.M. Self-adaptive moth flame optimizer combined with crossover operator and Fibonacci search strategy for COVID-19 CT image segmentation. Expert Syst. Appl. 2023, 227, 120367. [Google Scholar]
  11. Li, X.; Lin, Z.; Lv, H.; Yu, L.; Heidari, A.A.; Zhang, Y.; Chen, H.; Liang, G. Advanced slime mould algorithm incorporating differential evolution and Powell mechanism for engineering design. iScience 2023, 26, 107736. [Google Scholar]
  12. Minh, H.-L.; Sang-To, T.; Wahab, M.A.; Cuong-Le, T. A new metaheuristic optimization based on K-means clustering algorithm and its application to structural damage identification. Knowl.-Based Syst. 2022, 251, 109189. [Google Scholar]
  13. Salcedo-Sanz, S. Modern meta-heuristics based on nonlinear physics processes: A review of models and design procedures. Phys. Rep. 2016, 655, 1–70. [Google Scholar]
  14. Abualigah, L.; Diabat, A.; Geem, Z.W. A Comprehensive Survey of the Harmony Search Algorithm in Clustering Applications. Appl. Sci. 2020, 10, 3827. [Google Scholar] [CrossRef]
  15. Hu, G.; Guo, Y.; Sheng, G. Salp Swarm Incorporated Adaptive Dwarf Mongoose Optimizer with Lévy Flight and Gbest-Guided Strategy. J. Bionic Eng. 2024, 21, 2110–2144. [Google Scholar]
  16. Mohamed, A.W.; Hadi, A.A.; Mohamed, A.K. Gaining-sharing knowledge based algorithm for solving optimization problems: A novel nature-inspired algorithm. Int. J. Mach. Learn. Cybern. 2020, 11, 1501–1529. [Google Scholar]
  17. Kennedy, J.; Eberhart, R. Particle swarm optimization. Proc. ICNN’95—Int. Conf. Neural Netw. 1995, 1944, 1942–1948. [Google Scholar]
  18. Mahmood, S.; Bawany, N.Z.; Tanweer, M.R. A comprehensive survey of whale optimization algorithm: Modifications and classification. Indones. J. Electr. Eng. Comput. Sci. 2023, 29, 899–910. [Google Scholar]
  19. Dehghani, M.; Hubálovský, Š.; Trojovský, P. Northern Goshawk Optimization: A New Swarm-Based Algorithm for Solving Optimization Problems. IEEE Access 2021, 9, 162059–162080. [Google Scholar]
  20. Srivastava, A.; Das, D.K. A bottlenose dolphin optimizer: An application to solve dynamic emission economic dispatch problem in the microgrid. Knowl.-Based Syst. 2022, 243, 108455. [Google Scholar]
  21. Abdel-Basset, M.; Mohamed, R.; Jameel, M.; Abouhawwash, M. Nutcracker optimizer: A novel nature-inspired metaheuristic algorithm for global optimization and engineering design problems. Knowl.-Based Syst. 2023, 262, 110248. [Google Scholar]
  22. Abdel-Basset, M.; Mohamed, R.; Zidan, M.; Jameel, M.; Abouhawwash, M. Mantis Search Algorithm: A novel bio-inspired algorithm for global optimization and engineering design problems. Comput. Methods Appl. Mech. Eng. 2023, 415, 116200. [Google Scholar]
  23. Hu, G.; Guo, Y.; Wei, G.; Abualigah, L. Genghis Khan shark optimizer: A novel nature-inspired algorithm for engineering optimization. Adv. Eng. Inform. 2023, 58, 102210. [Google Scholar]
  24. Wang, J.; Wang, W.-C.; Hu, X.-X.; Qiu, L.; Zang, H.-F. Black-winged kite algorithm: A nature-inspired meta-heuristic for solving benchmark functions and engineering problems. Artif. Intell. Rev. 2024, 57, 98. [Google Scholar]
  25. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar]
  26. Peraza-Vázquez, H.; Peña-Delgado, A.; Merino-Treviño, M.; Morales-Cepeda, A.B.; Sinha, N. A novel metaheuristic inspired by horned lizard defense tactics. Artif. Intell. Rev. 2024, 57, 59. [Google Scholar]
  27. Truong, D.-N.; Chou, J.-S. Metaheuristic algorithm inspired by enterprise development for global optimization and structural engineering problems with frequency constraints. Eng. Struct. 2024, 318, 118679. [Google Scholar]
  28. Oladejo, S.O.; Ekwe, S.O.; Mirjalili, S. The Hiking Optimization Algorithm: A novel human-based metaheuristic approach. Knowl.-Based Syst. 2024, 296, 111880. [Google Scholar] [CrossRef]
  29. Guan, Z.; Ren, C.; Niu, J.; Wang, P.; Shang, Y. Great Wall Construction Algorithm: A novel meta-heuristic algorithm for engineer problems. Expert Syst. Appl. 2023, 233, 120905. [Google Scholar]
  30. Tian, Z.; Gai, M. Football team training algorithm: A novel sport-inspired meta-heuristic optimization algorithm for global optimization. Expert Syst. Appl. 2024, 245, 123088. [Google Scholar]
  31. Yuan, Y.; Ren, J.; Wang, S.; Wang, Z.; Mu, X.; Zhao, W. Alpine skiing optimization: A new bio-inspired optimization algorithm. Adv. Eng. Softw. 2022, 170, 103158. [Google Scholar] [CrossRef]
  32. Wu, X.; Li, S.; Jiang, X.; Zhou, Y. Information acquisition optimizer: A new efficient algorithm for solving numerical and constrained engineering optimization problems. J. Supercomput. 2024, 80, 25736–25791. [Google Scholar] [CrossRef]
  33. Bogar, E.; Beyhan, S. Adolescent Identity Search Algorithm (AISA): A novel metaheuristic approach for solving optimization problems. Appl. Soft Comput. 2020, 95, 106503. [Google Scholar] [CrossRef]
  34. Wang, K.; Guo, M.; Dai, C.; Li, Z. Information-decision searching algorithm: Theory and applications for solving engineering optimization problems. Inf. Sci. 2022, 607, 1465–1531. [Google Scholar] [CrossRef]
  35. Holland, J.H. Genetic Algorithms. Sci. Am. 1992, 267, 66–73. [Google Scholar] [CrossRef]
  36. Banzhaf, W.; Koza, J.R.; Ryan, C.; Spector, L.; Jacob, C. Genetic programming. IEEE Intell. Syst. Their Appl. 2000, 15, 74–84. [Google Scholar] [CrossRef]
  37. Sinha, N.; Chakrabarti, R.; Chattopadhyay, P.K. Evolutionary programming techniques for economic load dispatch. IEEE Trans. Evol. Comput. 2003, 7, 83–94. [Google Scholar] [CrossRef]
  38. Bäck, T. Evolution strategies: An alternative evolutionary algorithm. In Artificial Evolution; Alliot, J.-M., Lutton, E., Ronald, E., Schoenauer, M., Snyers, D., Eds.; Springer: Berlin/Heidelberg, Germany, 1996; pp. 1–20. [Google Scholar]
  39. Storn, R.; Price, K. Differential Evolution–A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  40. Simon, D. Biogeography-Based Optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713. [Google Scholar] [CrossRef]
  41. De Castro, L.N.; Von Zuben, F.J. The clonal selection algorithm with engineering applications. In Proceedings of the GECCO, Las Vegas, NV, USA, 8–12 July 2000; pp. 36–39. [Google Scholar]
  42. Gao, H.; Zhang, Q. Alpha evolution: An efficient evolutionary algorithm with evolution path adaptation and matrix generation. Eng. Appl. Artif. Intell. 2024, 137, 109202. [Google Scholar] [CrossRef]
  43. Layeb, A. Tangent search algorithm for solving optimization problems. Neural Comput. Appl. 2022, 34, 8853–8884. [Google Scholar] [CrossRef]
  44. Abdel-Basset, M.; Mohamed, R.; Azeem, S.A.A.; Jameel, M.; Abouhawwash, M. Kepler optimization algorithm: A new metaheuristic algorithm inspired by Kepler’s laws of planetary motion. Knowl.-Based Syst. 2023, 268, 110454. [Google Scholar]
  45. Luan, T.M.; Khatir, S.; Tran, M.T.; De Baets, B.; Cuong-Le, T. Exponential-trigonometric optimization algorithm for solving complicated engineering problems. Comput. Methods Appl. Mech. Eng. 2024, 432, 117411. [Google Scholar]
  46. Yuan, C.; Zhao, D.; Heidari, A.A.; Liu, L.; Chen, Y.; Wu, Z.; Chen, H. Artemisinin optimization based on malaria therapy: Algorithm and applications to medical image segmentation. Displays 2024, 84, 102740. [Google Scholar] [CrossRef]
  47. Sowmya, R.; Premkumar, M.; Jangir, P. Newton-Raphson-based optimizer: A new population-based metaheuristic algorithm for continuous optimization problems. Eng. Appl. Artif. Intell. 2024, 128, 107532. [Google Scholar]
  48. Yuan, C.; Zhao, D.; Heidari, A.A.; Liu, L.; Chen, Y.; Chen, H. Polar lights optimizer: Algorithm and applications in image segmentation and feature selection. Neurocomputing 2024, 607, 128427. [Google Scholar]
  49. Qi, A.; Zhao, D.; Heidari, A.A.; Liu, L.; Chen, Y.; Chen, H. FATA: An efficient optimization method based on geophysics. Neurocomputing 2024, 607, 128289. [Google Scholar] [CrossRef]
  50. Yan, J.; Hu, G.; Shu, B. MGCHMO: A dynamic differential human memory optimization with Cauchy and Gauss mutation for solving engineering problems. Adv. Eng. Softw. 2024, 198, 103793. [Google Scholar]
  51. Hu, G.; Gong, C.; Shu, B.; Xu, Z.; Wei, G. DHRDE: Dual-population hybrid update and RPR mechanism based differential evolutionary algorithm for engineering applications. Comput. Methods Appl. Mech. Eng. 2024, 431, 117251. [Google Scholar]
  52. Li, Y.; Su, S.; Hu, F.; He, X.; Su, J.; Zhang, J.; Li, B.; Liu, S.; Man, W. A novel fault location method for distribution networks with distributed generators based on improved seagull optimization algorithm. Energy Rep. 2025, 13, 3237–3245. [Google Scholar]
  53. Jamal, R.; Khan, N.H.; Ebeed, M.; Zeinoddini-Meymand, H.; Shahnia, F. An improved pelican optimization algorithm for solving stochastic optimal power flow problem of power systems considering uncertainty of renewable energy resources. Results Eng. 2025, 26, 104553. [Google Scholar] [CrossRef]
  54. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1997, 1, 67–82. [Google Scholar] [CrossRef]
  55. Sirsant, S.; Reddy, M.J. Improved MOSADE algorithm incorporating Sobol sequences for multi-objective design of Water Distribution Networks. Appl. Soft Comput. 2022, 120, 108682. [Google Scholar]
  56. Dhargupta, S.; Ghosh, M.; Mirjalili, S.; Sarkar, R. Selective Opposition based Grey Wolf Optimization. Expert Syst. Appl. 2020, 151, 113389. [Google Scholar]
  57. Nadimi-Shahraki, M.H.; Taghian, S.; Mirjalili, S. An improved grey wolf optimizer for solving engineering problems. Expert Syst. Appl. 2021, 166, 113917. [Google Scholar]
  58. Ahmadianfar, I.; Heidari, A.A.; Gandomi, A.H.; Chu, X.; Chen, H. RUN beyond the metaphor: An efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 2021, 181, 115079. [Google Scholar] [CrossRef]
  59. Ghasemi, M.; Zare, M.; Trojovský, P.; Rao, R.V.; Trojovská, E.; Kandasamy, V. Optimization based on the smart behavior of plants with its engineering applications: Ivy algorithm. Knowl.-Based Syst. 2024, 295, 111850. [Google Scholar]
  60. Zhang, Y. Elite archives-driven particle swarm optimization for large scale numerical optimization and its engineering applications. Swarm Evol. Comput. 2023, 76, 101212. [Google Scholar]
  61. Coelho, L.D.S. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Syst. Appl. 2010, 37, 1676–1683. [Google Scholar]
  62. Nayyef, H.M.; Ibrahim, A.A.; Zainuri, M.A.A.; Zulkifley, M.A.; Shareef, H. A Novel Hybrid Algorithm Based on Jellyfish Search and Particle Swarm Optimization. Mathematics 2023, 11, 3210. [Google Scholar] [CrossRef]
  63. Meng, Z.; Zhong, Y.; Mao, G.; Liang, Y. PSO-sono: A novel PSO variant for single-objective numerical optimization. Inf. Sci. 2022, 586, 176–191. [Google Scholar]
  64. Hu, G.; Guo, Y.; Zhao, W.; Houssein, E.H. An adaptive snow ablation-inspired particle swarm optimization with its application in geometric optimization. Artif. Intell. Rev. 2024, 57, 332. [Google Scholar] [CrossRef]
  65. Zeng, B.; He, C.; Mao, C.; Wu, Y. Forecasting China’s hydropower generation capacity using a novel grey combination optimization model. Energy 2023, 262, 125341. [Google Scholar] [CrossRef]
  66. Guan, Y.; Xiao, Y.; Niu, R.; Zhang, N.; Shao, C. Characterizing the water resource-environment-ecology system harmony in Chinese cities using integrated datasets: A Beautiful China perspective assessment. Sci. Total Environ. 2024, 921, 171094. [Google Scholar] [CrossRef] [PubMed]
  67. Wang, F.; Wang, W.; Wu, Y.; Li, W. Assessment of water retention dynamic in water resource formation area: A case study of the northern slope of the Qinling Mountains in China. J. Hydrol. Reg. Stud. 2024, 56, 102063. [Google Scholar] [CrossRef]
  68. Hu, G.; Song, K.; Abdel-salam, M. Sub-population evolutionary particle swarm optimization with dynamic fitness-distance balance and elite reverse learning for engineering design problems. Adv. Eng. Softw. 2025, 202, 103866. [Google Scholar] [CrossRef]
  69. Yuan, C.Q.; Liu, S.F.; Fang, Z.G. Comparison of China’s primary energy consumption forecasting by using ARIMA (the autoregressive integrated moving average) model and GM(1,1) model. Energy 2016, 100, 384–390. [Google Scholar] [CrossRef]
  70. Ye, J.; Dang, Y.; Ding, S.; Yang, Y. A novel energy consumption forecasting model combining an optimized DGM (1, 1) model with interval grey numbers. J. Clean. Prod. 2019, 229, 256–267. [Google Scholar] [CrossRef]
  71. Wang, Z.X.; Hipel, K.W.; Wang, Q.; He, S.W. An optimized NGBM(1,1) model for forecasting the qualified discharge rate of industrial wastewater in China. Appl. Math. Model. 2011, 35, 5524–5532. [Google Scholar] [CrossRef]
  72. Hu, G.; Guo, Y.; Zhong, J.; Wei, G. IYDSE: Ameliorated Young’s double-slit experiment optimizer for applied mechanics and engineering. Comput. Methods Appl. Mech. Eng. 2023, 412, 116062. [Google Scholar] [CrossRef]
  73. Abdel-Salam, M.; Alzahrani, A.I.; Alblehai, F.; Zitar, R.A.; Abualigah, L. An improved Genghis Khan optimizer based on enhanced solution quality strategy for global optimization and feature selection problems. Knowl. -Based Syst. 2024, 302, 112347. [Google Scholar] [CrossRef]
  74. Hu, G.; Huang, F.; Seyyedabbasi, A.; Wei, G. Enhanced multi-strategy bottlenose dolphin optimizer for UAVs path planning. Appl. Math. Model. 2024, 130, 243–271. [Google Scholar] [CrossRef]
Figure 1. Population spatial distribution of Sobol sequence initialization. (a) Distribution in 2D space. (b) Distribution in 3D space.
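The even coverage shown in Figure 1 comes from replacing pseudo-random initialization with a low-discrepancy Sobol sequence. A minimal sketch of such an initializer using SciPy's quasi-Monte Carlo module; the population size, dimension, and bounds here are illustrative, not the paper's settings:

```python
from scipy.stats import qmc

def sobol_init(pop_size, dim, lb, ub, seed=0):
    """Draw an initial population from a scrambled Sobol sequence.

    Low-discrepancy points cover the box [lb, ub]^dim more evenly
    than uniform pseudo-random sampling, as illustrated in Figure 1.
    """
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit = sampler.random(pop_size)      # points in the unit hypercube
    return qmc.scale(unit, lb, ub)       # map onto the search bounds

# e.g., 32 particles in a 10-dimensional search space on [-100, 100]
pop = sobol_init(32, 10, [-100.0] * 10, [100.0] * 10)
print(pop.shape)  # (32, 10)
```

Using a power-of-two population size keeps the Sobol sample balanced; for other sizes the sequence still works but loses some of its uniformity guarantees.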
Figure 2. Schematic diagram of information filtering and evaluation.
Figure 3. Trend plot of the control factor δ.
Figure 4. Schematic diagram of information acquisition strategy.
Figure 5. Schematic diagram of tangent flight random walk. (a) Distribution in 2D space. (b) Distribution in 3D space.
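The occasional long jumps visible in Figure 5 come from the heavy tail of the tangent function. A minimal sketch of a tangent-flight random walk; the step scale and the clipping of angles away from ±π/2 are illustrative assumptions, and the paper's exact operator may differ:

```python
import numpy as np

def tangent_flight_step(dim, scale=0.1, rng=None):
    """One tangent-flight step: scale * tan(theta), theta in (-pi/2, pi/2).

    tan stays small for most angles but diverges near +/-pi/2, so the
    walk mixes many short moves with rare very long jumps, much like a
    Levy flight.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.uniform(-0.49 * np.pi, 0.49 * np.pi, size=dim)
    return scale * np.tan(theta)

rng = np.random.default_rng(7)
steps = np.array([tangent_flight_step(2, rng=rng) for _ in range(500)])
walk = np.cumsum(steps, axis=0)  # a 2-D trajectory like Figure 5a
print(walk.shape)  # (500, 2)
```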
Figure 6. The flowchart of the proposed IA-DTPSO.
Figure 7. ENE trends of IA-DTPSO on selected CEC2022 test functions.
Figure 8. Convergence curves of IA-DTPSO and other MAs for addressing 10-dimensional CEC2022.
Figure 9. Box plots of IA-DTPSO and other MAs for solving 10-dimensional CEC2022.
Figure 10. Convergence curves of IA-DTPSO and other MAs for solving 20-dimensional CEC2022.
Figure 11. Box plots of IA-DTPSO and other MAs for solving 20-dimensional CEC2022.
Figure 12. Comparative line graphs of mean rank and mean FT for each MA on different dimensions of CEC2022. (a) Comparison results on 10-dimensional CEC2022. (b) Comparison results on 20-dimensional CEC2022.
Figure 13. Cumulative rank sum of IA-DTPSO and other MAs on different dimensions.
Figure 14. The Γ function.
Figure 15. Distribution of the proportion of TUWRs in China from 2004 to 2023.
Figure 16. Growth rate of total urban annual water resources in China.
Table 1. Parameter settings of IA-DTPSO and other different types of MAs.
Algorithms | Proposed Year | Parameter | Value
PSO | 1995 | ω, c1, c2 | 0.8, 2, 2
RUN | 2021 | a, b | 20, 12
NGO | 2021 | — | —
NOA | 2023 | Prp, Pa2, N, δ | 0.2, 0.4, 25, 0.05
GKSO | 2023 | m | 1.5
IVYA | 2024 | — | —
EAPSO | 2023 | — | —
G-QPSO | 2010 | ω1, ω2, c1, c2 | 0.6, 0.8, 2, 2
HJSPSO | 2023 | cmin, cmax, ωmin, ωmax, β, γ, c0 | 0.5, 2.5, 0.4, 0.9, 0.1, 0.1, 0.5
PSO-sono | 2022 | ωmin, ωmax, iw, r | 0.6, 0.8, [0.4, 0.9], 0.5
SAO-MPSO | 2024 | m, fads, Jump | 1.5, 2, [0, 1]
IA-DTPSO | 2025 | θ, a, ω, c1, c2 | [−1, 1], [0, 2], 0.8, 2, 2
Table 2. PC configuration.
Settings | Specifications
OS | Windows 11 Version 23H2 22631.4317
CPU | 11th Gen Intel(R) Core(TM) i7-11700 @ 2.50 GHz
RAM | 8 GB
Language (version) | Matlab (R2024a)
Table 3. Statistical results of IA-DTPSO and other MAs on 10-dimensional CEC2022.
F | Index | Algorithms
PSO | RUN | NGO | NOA | GKSO | IVYA | EAPSO | G-QPSO | HJSPSO | PSO-sono | SAO-MPSO | IA-DTPSO
F1Best3.001 × 102TO TO2.628 × 103TOTOTO1.891 × 103TO3.875 × 103TOTO
Worst3.009 × 102TOTO1.997 × 104TO3.075 × 102TO3.548 × 103TO2.374 × 104TOTO
Mean3.004 × 102TOTO8.435 × 103TO3.011 × 102TO2.679 × 103TO1.147 × 104TOTO
WRST8.007 × 10−9/-8.007 × 10−9/-7.992 × 10−9/-8.007 × 10−9/-6.054 × 10−9/-8.007 × 10−9/-3.338 × 10−4/-8.007 × 10−9/-8.007 × 10−9/-8.007 × 10−9/-1.427 × 10−6/--
FT8.8007.1005.30011.3003.6758.1002.27510.0005.70011.7002.6751.375
Rank875114931061221
F2BestTOTOTO4.562 × 102TOTOTO5.000 × 102TO4.576 × 102TOTO
Worst4.073 × 1024.089 × 1024.071 × 1026.144 × 1024.089 × 1024.742 × 1024.089 × 1026.265 × 1024.041 × 1026.218 × 1024.089 × 1024.001 × 102
Mean4.025 × 1024.036 × 1024.024 × 1025.194 × 1024.055 × 1024.101 × 1024.050 × 1025.678 × 1024.007 × 1025.004 × 1024.049 × 102TO
WRST4.388 × 10−2/-6.949 × 10−1/=5.310 × 10−2/=△/-1.604 × 10−4/-6.220 × 10−4/-1.135 × 10−2/-△/-1.264 × 10−1/=△/-5.842 × 10−7/--
FT5.3004.4004.75010.9005.7506.3505.47511.8003.85010.2506.1253.050
Rank453118971221061
F3Best6.001 × 1026.002 × 102TO6.111 × 102TOTOTO6.246 × 102TO6.184 × 102TOTO
Worst6.103 × 1026.209 × 102TO6.414 × 1026.100 × 1026.005 × 102TO6.437 × 102TO6.387 × 1026.004 × 1026.002 × 102
Mean6.034 × 1026.097 × 102TO6.278 × 1026.018 × 102TOTO6.365 × 102TO6.260 × 102TOTO
WRST1.065 × 10−7/-△/-2.946 × 10−8/+△/-2.062 × 10−6/-4.355 × 10−7/-2.439 × 10−8/+△/-6.810 × 10−7/+△/-6.092 × 10−7/--
FT8.0008.7003.20010.8007.2501.4752.82511.8004.62510.3503.0755.900
Rank892117611231054
F4Best8.080 × 1028.109 × 1028.030 × 1028.369 × 1028.090 × 1028.090 × 1028.040 × 1028.304 × 1028.030 × 1028.397 × 1028.060 × 1028.025 × 102
Worst8.448 × 1028.338 × 1028.107 × 1028.640 × 1028.328 × 1028.348 × 1028.318 × 1028.432 × 1028.090 × 1028.691 × 1028.259 × 1028.149 × 102
Mean8.194 × 1028.205 × 1028.069 × 1028.512 × 1028.176 × 1028.186 × 1028.174 × 1028.374 × 1028.064 × 1028.540 × 1028.133 × 1028.083 × 102
WRST3.293 × 10−5/-1.431 × 10−7/-4.388 × 10−2/+△/-1.198 × 10−6/-1.200 × 10−6/-1.103 × 10−5/-△/-7.114 × 10−3/+△/-5.111 × 10−3/--
FT6.7007.4002.10011.3506.2506.9006.35010.0001.95011.5004.5502.950
Rank892116751011243
F5BestTO9.023 × 102TO9.975 × 102TO1.007 × 103TO1.079 × 103TO1.001 × 103TOTO
Worst9.001 × 1021.021 × 1039.001 × 1021.381 × 1039.017 × 1021.710 × 1039.005 × 1021.182 × 1039.005 × 1021.366 × 1039.005 × 1029.006 × 102
MeanTO9.714 × 102TO1.199 × 1039.003 × 1021.243 × 103TO1.110 × 103TO1.161 × 1039.001 × 102TO
WRST2.745 × 10−4/+△/-9.996 × 10−7/+△/-4.088 × 10−1/=△/-6.326 × 10−6/-△/-4.703 × 10−3/+△/-2.033 × 10−2/--
FT5.6508.0002.87511.0004.90010.9001.6009.7504.55010.3503.3755.050
Rank281117125931064
F6Best1.866 × 1031.903 × 1031.828 × 1038.824 × 1051.826 × 1031.857 × 1031.942 × 1033.479 × 1051.843 × 1034.901 × 1051.902 × 103TO
Worst6.960 × 1034.950 × 1031.943 × 1033.144 × 1075.528 × 1038.090 × 1037.181 × 1034.242 × 1063.069 × 1032.214 × 1077.657 × 1031.803 × 103
Mean3.069 × 1033.096 × 1031.882 × 1038.279 × 1062.236 × 1034.031 × 1034.335 × 1032.007 × 1062.109 × 1036.362 × 1064.436 × 1031.801 × 103
WRST△/-△/-△/-△/-△/-△/-△/-△/-△/-△/-△/--
FT5.7006.4502.75011.3503.9506.8507.15010.5004.15011.1507.0001.000
Rank562124781031191
F7Best2.002 × 1032.017 × 1032.001 × 1032.046 × 1032.002 × 1032.001 × 103TO2.070 × 1032.001 × 1032.051 × 1032.001 × 1032.001 × 103
Worst2.045 × 1032.058 × 1032.011 × 1032.096 × 1032.026 × 1032.084 × 1032.054 × 1032.105 × 1032.025 × 1032.121 × 1032.025 × 1032.022 × 103
Mean2.028 × 1032.036 × 1032.004 × 1032.072 × 1032.020 × 1032.021 × 1032.017 × 1032.088 × 1032.010 × 1032.085 × 1032.012 × 1032.009 × 103
WRST2.062 × 10−6/-1.431 × 10−7/-1.782 × 10−3/+△/-1.610 × 10−4/-1.143 × 10−2/-1.404 × 10−1/=△/-8.392 × 10−1/=△/-8.392 × 10−1/=-
FT7.5008.0002.05010.3005.7005.7004.65011.4004.10011.2003.8003.600
Rank891106751231142
F8Best2.202 × 1032.204 × 1032.208 × 1032.228 × 103TO2.201 × 103TO2.224 × 1032.210 × 1032.221 × 103TO2.207 × 103
Worst2.228 × 1032.226 × 1032.223 × 1032.245 × 1032.221 × 1032.224 × 1032.222 × 1032.235 × 1032.227 × 1032.312 × 1032.221 × 1032.215 × 103
Mean2.223 × 1032.222 × 1032.218 × 1032.237 × 1032.216 × 1032.219 × 1032.220 × 1032.232 × 1032.223 × 1032.246 × 1032.216 × 1032.210 × 103
WRST1.201 × 10−6/-1.201 × 10−6/-3.705 × 10−5/-△/-7.114 × 10−3/-1.807 × 10−5/-1.201 × 10−6/-△/-4.539 × 10−7/-△/-7.114 × 10−3/--
FT7.2507.1004.80011.1503.4005.2004.50010.1007.55011.5003.4502.000
Rank874113561081221
F9Best2.486 × 1032.529 × 1032.529 × 1032.570 × 1032.529 × 1032.529 × 1032.529 × 1032.643 × 1032.529 × 1032.586 × 1032.529 × 1032.486 × 103
Worst2.486 × 1032.529 × 1032.529 × 1032.710 × 1032.529 × 1032.676 × 1032.529 × 1032.669 × 1032.529 × 1032.683 × 1032.529 × 1032.490 × 103
Mean2.486 × 1032.529 × 1032.529 × 1032.620 × 1032.529 × 1032.537 × 1032.529 × 1032.659 × 1032.529 × 1032.640 × 1032.529 × 1032.488 × 103
WRST△/+△/-1.127 × 10−8/-△/-6.777 × 10−8/-5.366 × 10−8/-8.007 × 10−9/-△/-6.644 × 10−8/-△/-1.945 × 10−8/--
FT1.0008.8504.25010.4007.5256.3754.17511.5506.55010.9004.4252.000
Rank184107931261152
F10BestTOTOTO2.503 × 103TOTOTO2.508 × 103TO2.502 × 103TOTO
Worst2.633 × 1032.619 × 103TO2.676 × 103TO2.638 × 1032.618 × 1032.651 × 103TO2.684 × 103TOTO
Mean2.555 × 1032.534 × 103TO2.533 × 103TO2.541 × 1032.512 × 1032.539 × 103TO2.547 × 103TOTO
WRST△/-6.674 × 10−6/-8.604 × 10−1/=△/-6.868 × 10−4/-4.540 × 10−6/-1.105 × 10−5/-△/-3.048 × 10−4/-△/-6.750 × 10−1/=-
FT6.7508.1004.0009.9503.5507.9506.55010.4005.6509.6003.6501.850
Rank128572106931141
F11Best2.601 × 103TOTO2.762 × 103TOTOTO2.822 × 103TO2.769 × 103TOTO
Worst3.001 × 1033.184 × 103TO2.871 × 1033.000 × 1033.000 × 1033.000 × 1032.899 × 103TO3.429 × 1033.184 × 103TO
Mean2.672 × 1032.659 × 103TO2.815 × 1032.640 × 1032.768 × 1032.673 × 1032.865 × 103TO2.835 × 1032.964 × 103TO
WRST3.473 × 10−8/-3.473 × 10−8/-2.512 × 10−1/=3.473 × 10−8/-1.889 × 10−4/-6.682 × 10−5/-5.164 × 10−2/=3.473 × 10−8/-1.512 × 10−5/-3.473 × 10−8/-8.221 × 10−6/--
FT7.7506.7502.4008.9004.6757.1004.07510.0505.2008.9009.9002.300
Rank652948711310121
F12Best2.801 × 1032.862 × 1032.859 × 1032.872 × 1032.863 × 1032.864 × 1032.863 × 1032.931 × 1032.865 × 1032.875 × 1032.862 × 1032.846 × 103
Worst2.926 × 1032.867 × 1032.864 × 1032.967 × 1032.868 × 1032.920 × 1032.866 × 1032.966 × 1032.871 × 1033.000 × 1032.866 × 1032.849 × 103
Mean2.857 × 1032.864 × 1032.862 × 1032.891 × 1032.865 × 1032.871 × 1032.864 × 1032.948 × 1032.866 × 1032.898 × 1032.864 × 1032.847 × 103
WRST1.803 × 10−6/-△/-△/-△/-6.786 × 10−8/-6.757 × 10−8/-6.786 × 10−8/-△/-△/-△/-6.786 × 10−8/--
FT2.5505.3003.30010.4505.9008.6005.15011.8507.60010.4005.8001.100
Rank263107951281141
Mean Rank | 6.000 | 7.250 | 2.833 | 10.333 | 5.083 | 8.167 | 5.083 | 10.750 | 4.083 | 10.917 | 5.250 | 1.833
Final Ranking | 7 | 8 | 2 | 10 | 4 | 9 | 4 | 11 | 3 | 12 | 6 | 1
Mean FT | 6.079 | 7.179 | 3.481 | 10.654 | 5.210 | 6.792 | 4.565 | 10.767 | 5.123 | 10.650 | 4.819 | 2.681
Final FT | 7 | 9 | 2 | 11 | 6 | 8 | 3 | 12 | 5 | 10 | 4 | 1
+/=/− | 2/0/10 | 0/1/11 | 4/3/5 | 0/0/12 | 0/1/11 | 0/0/12 | 1/2/9 | 0/0/12 | 3/2/7 | 0/0/12 | 0/2/10 | -/-/-
△: the WRST p-value 6.796 × 10⁻⁸ is abbreviated as △ throughout the table.
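The WRST rows can be reproduced with SciPy's rank-sum test: a competitor's independent runs are compared against IA-DTPSO's runs at α = 0.05, and the mark records whether the competitor is significantly worse (−), significantly better (+), or statistically indistinguishable (=). A sketch with hypothetical run data (the run counts and values below are illustrative, not the paper's results):

```python
import numpy as np
from scipy.stats import ranksums

def wrst_mark(proposed_runs, rival_runs, alpha=0.05):
    """Wilcoxon rank-sum comparison in the table's +/=/- convention.

    '-' : the rival is significantly worse than the proposed algorithm,
    '+' : significantly better, '=' : no significant difference.
    CEC2022 objectives are minimized, so lower values are better.
    """
    _, p = ranksums(proposed_runs, rival_runs)
    if p >= alpha:
        return p, "="
    better = np.median(proposed_runs) < np.median(rival_runs)
    return p, "-" if better else "+"

rng = np.random.default_rng(0)
proposed = rng.normal(300.0, 0.1, 20)  # hypothetical IA-DTPSO runs on F1
rival = rng.normal(305.0, 1.0, 20)     # hypothetical competitor runs
p, mark = wrst_mark(proposed, rival)
print(mark)  # '-'
```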
Table 4. Errors of IA-DTPSO and other MAs on 10-dimensional CEC2022.
F | Index | Algorithms
PSO | RUN | NGO | NOA | GKSO | IVYA | EAPSO | G-QPSO | HJSPSO | PSO-sono | SAO-MPSO | IA-DTPSO
F1Std2.461 × 10−11.625 × 10−55.384 × 10−104.211 × 1031.258 × 10−132.2494.124 × 10−144.362 × 1023.157 × 10−95.113 × 1032.916 × 10−140.000
RMSE2.994 × 1022.990 × 1022.990 × 1029.379 × 1032.990 × 1023.001 × 1022.990 × 1022.711 × 1032.990 × 1021.251 × 1042.990 × 1022.990 × 102
δ2.991 × 1022.990 × 1022.990 × 1022.627 × 1032.990 × 1022.990 × 1022.990 × 1021.890 × 1032.990 × 1023.874 × 1032.990 × 1022.990 × 102
F2Std2.2654.4812.5383.991 × 1013.8342.175 × 1013.6603.138 × 1011.4583.467 × 1014.0423.083 × 10−2
RMSE4.015 × 1024.026 × 1023.990 × 1025.198 × 1024.045 × 1024.096 × 1024.040 × 1025.676 × 1023.997 × 1025.005 × 1024.039 × 1024.014 × 102
δ3.990 × 1023.990 × 1023.990 × 1024.552 × 1023.990 × 1023.990 × 1023.990 × 1024.990 × 1023.990 × 1024.566 × 1023.990 × 1023.990 × 102
F3Std2.6027.3591.744 × 10−67.3422.5931.228 × 10−16.901 × 10−143.9304.006 × 10−34.2099.164 × 10−24.160 × 10−2
RMSE6.024 × 1026.087 × 1025.990 × 1026.268 × 1026.008 × 1025.990 × 1025.990 × 1026.355 × 1025.990 × 1026.250 × 1025.990 × 1025.990 × 102
δ5.991 × 1025.992 × 1025.990 × 1026.101 × 1025.990 × 1025.990 × 1025.990 × 1026.236 × 1025.990 × 1026.174 × 1025.990 × 1025.990 × 102
F4Std9.6125.6222.0127.6386.5156.8336.5873.6181.6188.3906.0202.867
RMSE8.185 × 1028.196 × 1028.059 × 1028.503 × 1028.166 × 1028.176 × 1028.165 × 1028.364 × 1028.054 × 1028.531 × 1028.124 × 1028.073 × 102
δ8.070 × 1028.099 × 1028.020 × 1028.359 × 1028.080 × 1028.080 × 1028.030 × 1028.294 × 1028.020 × 1028.387 × 1028.050 × 1028.015 × 102
F5Std3.486 × 10−23.548 × 1012.002 × 10−21.137 × 1024.573 × 10−11.707 × 1021.543 × 10−12.964 × 1011.139 × 10−18.556 × 1011.385 × 10−11.334 × 10−1
RMSE8.990 × 1029.710 × 1028.990 × 1021.203 × 1038.993 × 1021.254 × 1038.990 × 1021.109 × 1038.990 × 1021.163 × 1038.991 × 1028.990 × 102
δ8.990 × 1029.013 × 1028.990 × 1029.965 × 1028.990 × 1021.006 × 1038.990 × 1021.078 × 1038.990 × 1029.998 × 1028.990 × 1028.990 × 102
F6Std1.600 × 1031.128 × 1033.221 × 1019.251 × 1068.287 × 1022.139 × 1032.003 × 1031.298 × 1063.062 × 1025.700 × 1062.261 × 1039.067 × 10−1
RMSE3.442 × 1033.284 × 1031.881 × 1031.224 × 1072.376 × 1034.537 × 1034.753 × 1032.372 × 1062.129 × 1038.446 × 1064.952 × 103TO
δ1.865 × 1031.902 × 1031.827 × 1038.824 × 1051.825 × 1031.856 × 1031.941 × 1033.479 × 1051.842 × 1034.901 × 1051.901 × 1031.799 × 103
F7Std9.6561.171 × 1013.4491.378 × 1016.5711.823 × 1011.383 × 1019.6489.0781.718 × 1019.9435.864
RMSE2.027 × 1032.035 × 1032.003 × 1032.071 × 1032.019 × 1032.020 × 1032.016 × 1032.087 × 1032.009 × 1032.084 × 1032.011 × 1032.008 × 103
δ2.001 × 1032.016 × 103TO2.045 × 1032.001 × 103TO1.999 × 1032.069 × 103TO2.050 × 103TOTO
F8Std5.3404.5075.1464.4508.8825.8764.6532.7534.4661.754 × 1018.8452.329
RMSE2.222 × 1032.221 × 1032.217 × 1032.236 × 1032.215 × 1032.218 × 1032.219 × 1032.231 × 1032.222 × 1032.246 × 1032.215 × 1032.209 × 103
δ2.201 × 1032.203 × 1032.207 × 1032.227 × 1032.199 × 103TO2.199 × 1032.223 × 1032.209 × 1032.220 × 1032.199 × 1032.206 × 103
F9Std1.141 × 10−33.325 × 10−51.043 × 10−133.961 × 1014.092 × 10−93.286 × 1010.0008.3155.291 × 10−122.874 × 1011.807 × 10−131.108
RMSE2.485 × 1032.528 × 1032.528 × 1032.619 × 1032.528 × 1032.536 × 1032.528 × 1032.659 × 1032.528 × 1032.639 × 1032.528 × 1032.487 × 103
δ2.485 × 1032.528 × 1032.528 × 1032.569 × 1032.528 × 1032.528 × 1032.528 × 1032.642 × 1032.528 × 1032.585 × 1032.528 × 1032.485 × 103
F10Std6.277 × 1015.267 × 1018.037 × 10−25.130 × 1015.454 × 10−25.760 × 1013.488 × 1014.883 × 1017.003 × 10−27.598 × 1017.548 × 10−25.131 × 10−2
RMSE2.555 × 1032.534 × 1032.499 × 1032.532 × 1032.499 × 1032.541 × 1032.511 × 1032.538 × 1032.499 × 1032.547 × 1032.499 × 1032.499 × 103
δ2.499 × 1032.499 × 1032.499 × 1032.502 × 1032.499 × 1032.499 × 1032.499 × 1032.507 × 1032.499 × 1032.501 × 1032.499 × 1032.499 × 103
F11Std1.455 × 1021.378 × 1026.211 × 10−103.421 × 1011.231 × 1021.808 × 1021.352 × 1022.235 × 1012.612 × 10−91.603 × 1021.852 × 1023.460 × 10−13
RMSE2.674 × 1032.662 × 1032.599 × 1032.815 × 1032.642 × 1032.772 × 1032.675 × 1032.864 × 1032.599 × 1032.839 × 1032.969 × 1032.599 × 103
δTO2.599 × 1032.599 × 1032.761 × 1032.599 × 1032.599 × 1032.599 × 1032.821 × 1032.599 × 1032.768 × 1032.599 × 1032.599 × 103
F12Std2.080 × 1011.1451.6882.276 × 1011.3621.232 × 1011.0649.5631.7493.320 × 1019.740 × 10−17.480 × 10−1
RMSE2.856 × 1032.863 × 1032.861 × 1032.890 × 1032.864 × 1032.870 × 1032.863 × 1032.947 × 1032.865 × 1032.897 × 1032.863 × 1032.846 × 103
δ2.800 × 1032.861 × 1032.858 × 1032.871 × 1032.862 × 1032.863 × 1032.862 × 1032.930 × 1032.864 × 1032.874 × 1032.861 × 1032.845 × 103
Table 5. Statistical results of IA-DTPSO and other MAs on 20-dimensional CEC2022.
F | Index | Algorithms
PSO | RUN | NGO | NOA | GKSO | IVYA | EAPSO | G-QPSO | HJSPSO | PSO-sono | SAO-MPSO | IA-DTPSO
F1Best3.153 × 102TO2.016 × 1032.665 × 104TO5.133 × 103TO1.452 × 1046.166 × 1022.779 × 104TOTO
Worst3.434 × 102TO6.215 × 1036.186 × 104TO1.751 × 104TO2.237 × 1042.105 × 1039.624 × 104TOTO
Mean3.247 × 102TO4.014 × 1033.917 × 104TO9.801 × 103TO1.873 × 1041.314 × 1035.183 × 104TOTO
WRST△/-△/-△/-△/-△/-△/-5.075 × 10−1/=△/-△/-△/-6.653 × 10−8/+-
FT6.0004.0008.00011.2005.0009.0002.75010.0007.00011.8001.0002.250
Rank648115931071212
F2Best4.154 × 102TO4.002 × 1028.056 × 102TO4.289 × 102TO9.092 × 1024.449 × 1027.176 × 1024.026 × 1024.105 × 102
Worst4.753 × 1024.491 × 1024.747 × 1021.499 × 1034.685 × 1024.723 × 1024.491 × 1021.118 × 1034.755 × 1021.157 × 1034.686 × 1024.316 × 102
Mean4.429 × 1024.417 × 1024.536 × 1021.045 × 1034.402 × 1024.522 × 1024.388 × 1021.025 × 1034.546 × 1029.211 × 1024.462 × 1024.238 × 102
WRST8.182 × 10−1/=1.610 × 10−4/-1.201 × 10−6/-△/-1.481 × 10−3/-7.898 × 10−8/-1.217 × 10−3/-△/-△/-△/-1.587 × 10−5/--
FT4.9004.6507.05011.3004.5006.6503.05011.3507.60010.3504.3002.300
Rank548123721191061
F3Best6.154 × 1026.164 × 102TO6.294 × 1026.087 × 102TOTO6.518 × 102TO6.291 × 102TO6.026 × 102
Worst6.459 × 1026.431 × 1026.002 × 1026.817 × 1026.324 × 1026.223 × 102TO6.686 × 1026.033 × 1026.675 × 1026.030 × 1026.181 × 102
Mean6.337 × 1026.277 × 102TO6.585 × 1026.184 × 1026.020 × 102TO6.630 × 1026.006 × 1026.511 × 1026.004 × 1026.063 × 102
WRST1.235 × 10−7/-2.218 × 10−7/-△/+△/-2.690 × 10−6/-1.251 × 10−5/+△/+△/-7.898 × 10−8/+△/-7.898 × 10−8/+-
FT8.7008.0003.00010.9507.2002.5001.55011.5504.35010.4003.8006.000
Rank982117511241036
F4Best8.399 × 1028.438 × 1028.247 × 1029.495 × 1028.328 × 1028.428 × 1028.199 × 1029.244 × 1028.195 × 1029.414 × 1028.139 × 1028.319 × 102
Worst8.897 × 1028.955 × 1028.515 × 1029.987 × 1029.025 × 1028.866 × 1028.955 × 1029.565 × 1028.468 × 1021.008 × 1038.557 × 1028.762 × 102
Mean8.642 × 1028.739 × 1028.396 × 1029.745 × 1028.643 × 1028.655 × 1028.414 × 1029.444 × 1028.315 × 1029.737 × 1028.318 × 1028.531 × 102
WRST2.074 × 10−2/-4.680 × 10−5/-3.382 × 10−4/+△/-6.557 × 10−3/-4.320 × 10−3/-8.355 × 10−3/+△/-7.948 × 10−7/+△/-9.278 × 10−5/+-
FT6.6507.9503.40011.3006.8006.9503.55010.1502.00011.5502.5005.200
Rank693127841011125
F5Best9.019 × 1021.341 × 1039.081 × 1022.247 × 1039.091 × 1021.940 × 103TO2.579 × 1039.001 × 1022.379 × 1039.002 × 1029.006 × 102
Worst2.213 × 1032.316 × 1031.514 × 1034.371 × 1032.304 × 1032.498 × 1039.258 × 1023.076 × 1039.551 × 1025.071 × 1031.590 × 1039.079 × 102
Mean1.511 × 1031.732 × 1031.186 × 1033.080 × 1031.306 × 1032.295 × 1039.015 × 1022.780 × 1039.061 × 1023.171 × 1031.017 × 1039.042 × 102
WRST4.680 × 10−5/-△/-△/-△/-△/-△/-1.306 × 10−6/+△/-1.404 × 10−1/=△/-4.903 × 10−1/=-
FT6.7007.4005.30011.1005.9508.9501.25010.5503.20011.2503.7502.600
Rank785116911031242
F6Best5.703 × 1031.923 × 1032.288 × 1037.281 × 1071.864 × 1031.926 × 1031.930 × 1033.348 × 1071.842 × 1039.198 × 1071.947 × 1031.815 × 103
Worst7.610 × 1044.447 × 1034.619 × 1035.995 × 1082.266 × 1045.921 × 1032.277 × 1041.987 × 1084.698 × 1033.962 × 1082.505 × 1041.919 × 103
Mean2.908 × 1043.547 × 1033.083 × 1032.420 × 1081.002 × 1043.351 × 1038.944 × 1031.201 × 1082.769 × 1032.049 × 1089.325 × 1031.833 × 103
WRST△/-△/-△/-△/-9.173 × 10−8/-△/-△/-△/-1.431 × 10−7/-△/-△/--
FT8.7505.0004.40011.4006.2504.3005.90010.3503.40011.2505.9001.100
Rank953128461021171
F7Best2.048 × 1032.045 × 1032.045 × 1032.143 × 1032.027 × 1032.067 × 1032.021 × 1032.163 × 1032.029 × 1032.168 × 1032.021 × 1032.045 × 103
Worst2.167 × 1032.142 × 1032.085 × 1032.318 × 1032.142 × 1032.155 × 1032.172 × 1032.195 × 1032.059 × 1032.352 × 1032.181 × 1032.072 × 103
Mean2.099 × 1032.109 × 1032.064 × 1032.209 × 1032.070 × 1032.112 × 1032.062 × 1032.182 × 1032.045 × 1032.244 × 1032.068 × 1032.056 × 103
WRST1.600 × 10−5/-1.803 × 10−6/-4.679 × 10−2/-△/-1.332 × 10−2/-1.235 × 10−7/-5.979 × 10−1/=△/-1.481 × 10−3/+△/-9.892 × 10−1/=-
FT6.5507.6504.50010.9505.1007.8503.75010.3002.20011.6003.8503.700
Rank784116931011252
F8Best2.225 × 1032.223 × 1032.223 × 1032.250 × 1032.221 × 1032.221 × 1032.221 × 1032.238 × 1032.225 × 1032.239 × 1032.221 × 1032.216 × 103
Worst2.363 × 1032.243 × 1032.229 × 1032.424 × 1032.341 × 1032.576 × 1032.358 × 1032.263 × 1032.236 × 1032.532 × 1032.240 × 1032.231 × 103
Mean2.243 × 1032.227 × 1032.227 × 103TO2.232 × 1032.329 × 1032.264 × 1032.253 × 1032.230 × 1032.376 × 1032.227 × 1032.226 × 103
WRST1.116 × 10−3/-5.250 × 10−1/=9.031 × 10−1/=△/-6.787 × 10−2/=8.292 × 10−5/-5.979 × 10−1/=△/-1.159 × 10−4/-△/-1.636 × 10−1/=-
FT6.5504.3504.50010.3003.5509.2005.5008.9506.15011.2504.3003.400
Rank732106119851241
F9Best2.465 × 1032.481 × 1032.481 × 1032.565 × 1032.481 × 1032.481 × 1032.481 × 1032.712 × 1032.481 × 1032.579 × 1032.481 × 1032.472 × 103
Worst2.465 × 1032.481 × 1032.481 × 1032.775 × 1032.481 × 1032.482 × 1032.481 × 1032.916 × 1032.481 × 1032.816 × 1032.481 × 1032.481 × 103
Mean2.465 × 1032.481 × 1032.481 × 1032.650 × 1032.481 × 1032.481 × 1032.481 × 1032.809 × 1032.481 × 1032.703 × 1032.481 × 1032.476 × 103
WRST△/+△/-△/-△/-△/-△/-5.903 × 10−8/-△/-△/-△/-6.541 × 10−8/--
FT1.0007.5505.30010.2005.6009.0003.20011.8507.45010.9503.9002.000
Rank184106931271152
F10BestTO2.501 × 103TO2.538 × 103TOTO2.501 × 1032.601 × 1032.501 × 1032.522 × 103TOTO
Worst4.867 × 1032.627 × 1032.625 × 1036.769 × 1032.711 × 1035.023 × 1033.985 × 1032.674 × 1032.637 × 1037.760 × 1034.510 × 1032.501 × 103
Mean3.639 × 1032.507 × 1032.507 × 1033.024 × 1032.511 × 1033.197 × 1032.886 × 1032.634 × 1032.508 × 1034.594 × 1033.056 × 1032.501 × 103
WRST1.227 × 10−3/-2.218 × 10−7/-7.205 × 10−2/=△/-2.561 × 10−3/-1.929 × 10−2/-4.540 × 10−6/-△/-1.794 × 10−4/-△/-5.874 × 10−6/--
FT8.9005.9503.6508.3502.0506.6007.7508.6004.7509.5009.0502.850
Rank113285107641291
F11Best2.651 × 1032.900 × 103TO4.176 × 1032.900 × 1032.900 × 1032.900 × 1036.225 × 103TO3.676 × 1032.900 × 103TO
Worst3.008 × 1033.000 × 1033.000 × 1037.276 × 1033.360 × 1033.000 × 1033.000 × 1037.128 × 1033.000 × 1035.998 × 1032.900 × 1033.038 × 103
Mean2.950 × 1032.910 × 1032.888 × 1035.612 × 1032.963 × 1032.930 × 1032.945 × 1036.742 × 1032.885 × 1034.959 × 1032.900 × 1032.922 × 103
WRST3.852 × 10−2/-7.557 × 10−1/=3.382 × 10−4/+△/-8.103 × 10−2/=3.639 × 10−3/-5.231 × 10−2/=△/-2.561 × 10−3/+△/-7.656 × 10−7/+-
FT7.0506.2504.50010.8505.5004.4004.35011.9505.15010.2001.3506.450
Rank842119671211035
F12Best2.896 × 1032.941 × 1032.935 × 1033.113 × 1032.944 × 1032.947 × 1032.934 × 1033.469 × 1032.954 × 1033.059 × 1032.945 × 1032.900 × 103
Worst3.394 × 1032.984 × 1032.947 × 1033.371 × 1032.981 × 1033.060 × 1032.999 × 1033.668 × 1032.998 × 1033.596 × 1033.016 × 1032.900 × 103
Mean3.198 × 1032.954 × 1032.939 × 1033.215 × 1032.957 × 1032.971 × 1032.953 × 1033.574 × 1032.972 × 1033.191 × 1032.970 × 1032.900 × 103
WRST1.201 × 10−6/-△/-△/-△/-△/-△/-△/-△/-△/-△/-△/--
FT9.5504.7502.25010.3004.8005.9004.40011.9506.8009.7006.5501.050
Rank104211573128961
Mean Rank | 7.167 | 5.667 | 3.750 | 10.833 | 6.083 | 7.833 | 4.083 | 10.250 | 4.333 | 11.000 | 4.583 | 2.417
Final Ranking | 8 | 6 | 2 | 11 | 7 | 9 | 3 | 10 | 4 | 12 | 5 | 1
Mean FT | 6.775 | 6.125 | 4.654 | 10.683 | 5.192 | 6.775 | 3.917 | 10.629 | 5.004 | 10.817 | 4.188 | 3.242
Final FT | 8 | 7 | 4 | 11 | 6 | 8 | 2 | 10 | 5 | 12 | 3 | 1
+/=/− | 1/1/10 | 0/2/10 | 3/2/7 | 0/0/12 | 0/2/10 | 1/0/11 | 3/4/5 | 0/0/12 | 4/1/7 | 0/0/12 | 4/3/5 | -/-/-
△: the WRST p-value 6.796 × 10⁻⁸ is abbreviated as △ throughout the table.
Table 6. Errors of IA-DTPSO and other MAs on 20-dimensional CEC2022.
F | Index | Algorithms
PSO | RUN | NGO | NOA | GKSO | IVYA | EAPSO | G-QPSO | HJSPSO | PSO-sono | SAO-MPSO | IA-DTPSO
F1Std6.8987.562 × 10−41.241 × 1038.173 × 1038.985 × 10−43.391 × 1038.880 × 10−61.993 × 1033.761 × 1021.630 × 1044.304 × 10−135.222 × 10−7
RMSE3.238 × 1022.990 × 1024.192 × 1033.998 × 1042.995 × 1021.034 × 1042.990 × 1021.883 × 1041.363 × 1035.420 × 1042.990 × 1022.990 × 102
δ3.143 × 1022.990 × 1022.015 × 1032.665 × 1042.990 × 1025.132 × 1032.990 × 1021.452 × 1046.156 × 1022.779 × 1042.990 × 1022.990 × 102
F2Std2.793 × 1011.797 × 1011.656 × 1011.700 × 1022.180 × 1011.055 × 1011.948 × 1015.161 × 1011.057 × 1011.316 × 1021.585 × 1013.711
RMSE4.427 × 1024.411 × 1024.529 × 1021.057 × 1034.397 × 1024.513 × 1024.383 × 1021.025 × 1034.537 × 1029.290 × 1024.455 × 1024.228 × 102
δ4.144 × 1023.990 × 1023.992 × 1028.046 × 1023.990 × 1024.279 × 1023.990 × 1029.082 × 1024.439 × 1027.166 × 1024.016 × 1024.095 × 102
F3Std9.0178.6517.371 × 10−21.211 × 1017.6276.0649.377 × 10−43.7978.564 × 10−19.4567.621 × 10−14.275
RMSE6.328 × 1026.267 × 1025.990 × 1026.576 × 1026.174 × 1026.011 × 1025.990 × 1026.620 × 1025.996 × 1026.501 × 1025.994 × 1026.053 × 102
δ6.144 × 1026.154 × 1025.990 × 1026.284 × 1026.077 × 1025.990 × 1025.990 × 1026.508 × 1025.990 × 1026.281 × 1025.990 × 1026.016 × 102
F4Std1.502 × 1011.415 × 1016.5271.365 × 1011.409 × 1011.287 × 1011.670 × 1018.1257.0391.627 × 1011.267 × 1011.231 × 101
RMSE8.633 × 1028.730 × 1028.386 × 1029.736 × 1028.634 × 1028.646 × 1028.406 × 1029.434 × 1028.305 × 1029.729 × 1028.309 × 1028.522 × 102
δ8.389 × 1028.428 × 1028.237 × 1029.485 × 1028.318 × 1028.418 × 1028.189 × 1029.234 × 1028.185 × 1029.404 × 1028.129 × 1028.309 × 102
F5Std3.758 × 1022.793 × 1022.020 × 1025.955 × 1023.863 × 1021.738 × 1025.7271.492 × 1021.234 × 1016.723 × 1021.766 × 1022.139
RMSE1.554 × 1031.752 × 1031.202 × 1033.133 × 1031.358 × 1032.301 × 1039.005 × 1022.782 × 1039.052 × 1023.237 × 1031.030 × 1039.032 × 102
δ9.009 × 1021.340 × 1039.071 × 1022.246 × 1039.081 × 1021.939 × 1038.990 × 1022.578 × 1038.991 × 1022.378 × 1038.992 × 1028.996 × 102
F6Std2.008 × 1047.669 × 1026.916 × 1021.425 × 1087.584 × 1031.261 × 1037.327 × 1033.893 × 1078.278 × 1029.403 × 1077.738 × 1032.375 × 101
RMSE3.505 × 1043.624 × 1033.155 × 1032.790 × 1081.245 × 1043.568 × 1031.144 × 1041.259 × 1082.883 × 1032.244 × 1081.199 × 1041.832 × 103
δ5.702 × 1031.922 × 1032.287 × 1037.281 × 1071.863 × 1031.925 × 1031.929 × 1033.348 × 1071.841 × 1039.198 × 1071.946 × 1031.814 × 103
F7Std3.216 × 1012.292 × 1011.181 × 1014.638 × 1012.474 × 1012.839 × 1013.994 × 1019.0789.0925.094 × 1014.697 × 1018.845
RMSE2.099 × 1032.108 × 1032.063 × 1032.208 × 1032.070 × 1032.112 × 1032.062 × 1032.181 × 1032.044 × 1032.243 × 1032.068 × 1032.055 × 103
δ2.047 × 1032.044 × 1032.044 × 1032.142 × 1032.026 × 1032.066 × 1032.020 × 1032.162 × 1032.028 × 1032.167 × 1032.020 × 1032.044 × 103
F8Std3.882 × 1014.2571.4114.428 × 1012.676 × 1019.088 × 1015.559 × 1016.5032.4907.743 × 1018.0263.748
RMSE2.242 × 1032.226 × 1032.226 × 1032.299 × 1032.231 × 1032.329 × 1032.263 × 1032.252 × 1032.229 × 1032.376 × 1032.226 × 1032.225 × 103
δ2.224 × 1032.222 × 1032.222 × 1032.249 × 1032.220 × 1032.220 × 1032.220 × 1032.237 × 1032.224 × 1032.238 × 1032.220 × 1032.215 × 103
F9Std1.829 × 10−23.946 × 10−31.702 × 10−65.092 × 1019.247 × 10−52.422 × 10−11.368 × 10−125.391 × 1011.078 × 10−36.991 × 1013.306 × 10−52.286
RMSE2.464 × 1032.480 × 1032.480 × 1032.649 × 1032.480 × 1032.480 × 1032.480 × 1032.808 × 1032.480 × 1032.703 × 1032.480 × 1032.476 × 103
δ2.464 × 1032.480 × 1032.480 × 1032.564 × 1032.480 × 1032.480 × 1032.480 × 1032.711 × 1032.480 × 1032.578 × 1032.480 × 1032.471 × 103
F10Std8.723 × 1022.812 × 1012.785 × 1011.170 × 1034.710 × 1018.874 × 1024.374 × 1021.884 × 1013.044 × 1012.323 × 1034.664 × 1021.253 × 10−1
RMSE3.736 × 1032.506 × 1032.506 × 1033.231 × 1032.510 × 1033.311 × 1032.917 × 1032.633 × 1032.507 × 1035.120 × 1033.088 × 103TO
δ2.499 × 103TO2.499 × 1032.537 × 1032.499 × 1032.499 × 103TOTOTO2.521 × 1032.499 × 1032.499 × 103
F11Std7.486 × 1013.072 × 1019.754 × 1018.589 × 1021.058 × 1024.702 × 1015.104 × 1012.046 × 1021.187 × 1026.222 × 1027.807 × 10−139.178 × 101
RMSE2.950 × 1032.909 × 1032.888 × 1035.673 × 1032.964 × 1032.929 × 1032.944 × 1036.744 × 1032.886 × 1034.995 × 1032.899 × 1032.923 × 103
δ2.650 × 1032.899 × 1032.599 × 1034.175 × 1032.899 × 1032.899 × 1032.899 × 1036.224 × 1032.599 × 1033.675 × 1032.899 × 1032.599 × 103
F12Std1.481 × 1021.042 × 1013.1397.512 × 1011.007 × 1012.848 × 1011.459 × 1015.104 × 1011.208 × 1011.195 × 1021.768 × 1011.037 × 10−4
RMSE3.200 × 1032.953 × 1032.938 × 1033.215 × 1032.956 × 1032.970 × 1032.952 × 1033.573 × 1032.971 × 1033.192 × 1032.969 × 1032.899 × 103
δ2.895 × 1032.940 × 1032.934 × 1033.112 × 1032.943 × 1032.946 × 1032.933 × 1033.468 × 1032.953 × 1033.058 × 1032.944 × 1032.899 × 103
Table 7. TUWRs in China from 2004 to 2023 (Unit: Hundred million cubic meters).
Years | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013
TUWRs | 24,129.6 | 28,053.1 | 25,330.1 | 25,255.2 | 27,434.3 | 24,180.2 | 30,906.4 | 23,256.7 | 29,528.8 | 27,957.9
Years | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 | 2022 | 2023
TUWRs | 27,266.9 | 27,962.6 | 32,466.4 | 28,761.2 | 27,462.5 | 29,041.0 | 31,605.2 | 29,638.2 | 27,088.1 | 24,780.0
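The per-year APE and overall MAPE columns reported for this series follow the standard definitions: APE_t = |x̂_t − x_t| / x_t × 100 and MAPE = the mean of the APE_t values. A quick self-contained check against the 2005 IA-DTPSO entry (real 28,053.1 vs. simulated 27,916.57, reported APE 0.49%):

```python
def ape(real, sim):
    """Absolute percentage error of one simulated value, in percent."""
    return abs(sim - real) / real * 100.0

def mape(reals, sims):
    """Mean absolute percentage error over a series, in percent."""
    return sum(ape(r, s) for r, s in zip(reals, sims)) / len(reals)

# 2005 entry for IA-DTPSO: real 28,053.1 vs simulated 27,916.57
print(round(ape(28053.1, 27916.57), 2))  # 0.49
```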
Table 8. Statistical results of IA-DTPSO and other MAs for addressing TUWRs in China.
Years | Real Value | IA-DTPSO | PSO | GKSO | IVYA
— | — | SimD | ResE | APE (%) | SimD | ResE | APE (%) | SimD | ResE | APE (%) | SimD | ResE | APE (%)
2005 | 28,053.1 | 27,916.57 | −136.53 | 0.49 | 24,398.68 | −3654.42 | 13.03 | 26,529.37 | −1523.73 | 5.43 | 25,472.86 | −2580.23 | 9.19
2006 | 25,330.1 | 25,594.21 | 264.11 | 1.04 | 25,454.02 | 123.92 | 0.49 | 26,220.56 | 890.46 | 3.52 | 25,966.25 | 636.15 | 2.51
2007 | 25,255.2 | 25,617.06 | 361.86 | 1.43 | 26,417.97 | 1162.77 | 4.60 | 26,288.93 | 1033.73 | 4.09 | 26,595.10 | 1339.90 | 5.30
2008 | 27,434.3 | 26,010.91 | −1423.39 | 5.19 | 27,180.11 | −254.19 | 0.93 | 26,494.88 | −939.41 | 3.42 | 27,052.92 | −381.37 | 1.39
2009 | 24,180.2 | 26,496.10 | 2315.90 | 9.58 | 27,742.49 | 3562.29 | 14.73 | 26,770.27 | 2590.07 | 10.71 | 27,386.22 | 3206.02 | 13.25
2010 | 30,906.4 | 26,978.82 | −3927.58 | 12.71 | 28,130.40 | −2776.00 | 8.98 | 27,071.13 | −3835.27 | 12.41 | 27,628.87 | −3277.52 | 10.60
2011 | 23,256.7 | 27,423.85 | 4167.15 | 17.92 | 28,372.31 | 5115.61 | 22.00 | 27,373.44 | 4116.74 | 17.70 | 27,805.52 | 4548.82 | 19.55
2012 | 29,528.8 | 27,818.17 | −1710.63 | 5.79 | 28,494.53 | −1034.27 | 3.50 | 27,665.48 | −1863.32 | 6.31 | 27,934.13 | −1594.66 | 5.40
2013 | 27,957.9 | 28,158.21 | 200.31 | 0.72 | 28,519.84 | 561.94 | 2.01 | 27,942.35 | −15.55 | 0.06 | 28,027.75 | 69.85 | 0.24
2014 | 27,266.9 | 28,444.74 | 1177.84 | 4.32 | 28,467.42 | 1200.52 | 4.40 | 28,202.71 | 935.81 | 3.43 | 28,095.92 | 829.02 | 3.04
2015 | 27,962.6 | 28,680.48 | 717.88 | 2.57 | 28,353.20 | 390.60 | 1.40 | 28,446.89 | 484.29 | 1.73 | 28,145.54 | 182.94 | 0.65
2016 | 32,466.4 | 28,869.04 | −3597.36 | 11.08 | 28,190.29 | −4276.11 | 13.17 | 28,675.96 | −3790.44 | 11.67 | 28,181.67 | −4284.72 | 13.19
2017 | 28,761.2 | 29,014.29 | 253.09 | 0.88 | 27,989.42 | −771.78 | 2.68 | 28,891.27 | 130.07 | 0.45 | 28,207.97 | −553.22 | 1.92
2018 | 27,462.5 | 29,120.10 | 1657.60 | 6.04 | 27,759.33 | 296.83 | 1.08 | 29,094.15 | 1631.65 | 5.94 | 28,227.12 | 764.62 | 2.78
MAPEsimulation (%) | 5.6366 | 6.6432 | 6.2061 | 6.3627
2019 | 29,041 | 29,106.99 | 65.99 | 0.23 | 28,640.37 | −400.63 | 1.3795 | 29,285.85 | 244.85 | 0.84 | 28,241.06 | −799.93 | 2.75
2020 | 31,605.2 | 29,112.08 | −2493.12 | 7.89 | 28,817.00 | −2788.20 | 8.8220 | 29,467.49 | −2137.70 | 6.76 | 28,251.21 | −3353.98 | 10.61
2021 | 29,638.2 | 29,086.75 | −551.45 | 1.86 | 29,009.75 | −628.45 | 2.1204 | 29,640.08 | 1.88 | 0.01 | 28,258.59 | −1379.60 | 4.65
2022 | 27,088.1 | 29,034.33 | 1946.23 | 7.18 | 29,216.74 | 2128.64 | 7.8582 | 29,804.48 | 2716.37 | 10.02 | 28,263.97 | 1175.87 | 4.34
2023 | 24,780 | 28,957.86 | 4177.86 | 16.86 | 29,436.13 | 4656.13 | 18.7899 | 29,961.43 | 5181.43 | 20.90 | 28,267.89 | 3487.89 | 14.07
MAPEprediction (%) | 6.8041 | 7.2254 | 7.7102 | 7.2876
MAPE (%) | 5.9439 | 6.7964 | 6.6019 | 6.6061
YearsReal ValueEAPSOHJSPSOPSO-sonoSAO-MPSO
SimDResEAPE (%)SimDResEAPE (%)SimDResEAPE (%)SimDResEAPE (%)
200528,053.128,036.28−16.810.0627,865.98−187.110.6627,772.07−281.021.0027,744.11−308.981.10
200625,330.124,823.89−506.201.9926,510.911180.814.6626,521.711191.614.7025,728.56398.461.57
200725,255.225,798.03542.832.1425,896.87641.672.5425,936.51681.312.6926,184.17928.973.67
200827,434.326,288.65−1145.644.1725,813.99−1620.305.9025,854.32−1579.975.7526,590.81−843.483.07
200924,180.226,745.132564.9310.6026,043.521863.327.7026,074.901894.707.8326,952.852772.6511.46
201030,906.427,137.43−3768.9612.1926,433.89−4472.5014.4726,454.43−4451.9614.4027,270.65−3635.7411.76
201123,256.727,484.064227.3618.1726,897.133640.4315.6526,907.613650.9115.6927,546.624289.9218.44
201229,528.827,794.22−1734.575.8727,385.82−2142.977.2527,387.58−2141.217.2527,784.27−1744.525.90
201327,957.928,075.01117.110.4127,875.64−82.250.2927,869.90−87.990.3127,987.4629.560.10
201427,266.928,331.621064.723.9028,354.761087.863.9828,342.481075.583.9428,159.99893.093.27
201527,962.628,568.01605.402.1628,817.97855.373.0528,799.80837.202.9928,305.45342.851.22
201632,466.428,787.22−3679.1711.3329,263.39−3203.009.8629,239.83−3226.569.9328,427.14−4039.2512.44
201728,761.228,991.69230.490.8029,690.86929.663.2329,662.25901.053.1328,528.04−233.150.81
201827,462.529,183.331720.836.2630,100.992638.499.6030,067.592605.099.4828,610.821148.324.18
MAPEsimulation (%)5.72336.35086.36885.6466
201929,04129,363.72322.721.1130,494.741453.745.0030,456.751415.754.8728,677.83−363.161.25
202031,605.229,534.15−2071.046.5530,873.19−732.012.3130,830.78−774.412.4528,731.17−2874.029.09
202129,638.229,695.7357.530.1931,237.401599.205.3931,190.711552.515.2328,772.67−865.522.92
202227,088.129,849.352761.2510.1931,588.414500.3116.6131,537.544449.4416.4228,803.961715.866.33
202324,78029,995.805215.8021.0431,927.157147.1528.8431,872.237092.2328.6228,826.434046.4316.32
MAPEprediction (%)7.820111.634711.5227.1856
MAPE (%)6.27517.74137.72496.0516
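The error columns in Table 8 follow the usual definitions: the residual error (ResE) is the simulated value minus the real value, APE is the absolute percentage error of a single year, and MAPE averages the APE over the chosen span (simulation years, prediction years, or both). A minimal sketch, not the authors' code:

```python
def ape(real, sim):
    """Absolute percentage error (%) for one year."""
    return abs(sim - real) / real * 100.0

def mape(real_values, sim_values):
    """Mean absolute percentage error (%) over a span of years."""
    errors = [ape(r, s) for r, s in zip(real_values, sim_values)]
    return sum(errors) / len(errors)

# Example with the 2005 IA-DTPSO row of Table 8:
# real 28,053.1, simulated 27,916.57 -> ResE = -136.53, APE ≈ 0.49%
real_2005, sim_2005 = 28053.1, 27916.57
print(round(sim_2005 - real_2005, 2))       # -136.53
print(round(ape(real_2005, sim_2005), 2))   # 0.49
```

Applying `mape` to the 2005–2018 rows reproduces the MAPEsimulation row, and to the 2019–2023 rows the MAPEprediction row, up to the rounding of the printed values.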
Table 9. Parameter statistics of IA-DTPSO and other algorithms for solving the TUWRs in China.

Parameters | IA-DTPSO | PSO | GKSO | IVYA | EAPSO | HJSPSO | PSO-sono | SAO-MPSO
Csz | 24,123.6 | 24,160.2 | 24,385.5 | 24,346.2 | 24,488.9 | 24,146.9 | 24,500 | 24,245.4
ξ | 0.080129 | 0.560555 | 0.377924 | 0.439050 | 0.198431 | 0.029187 | 0.345376 | 0.067554
r | 1.453199 | 0.360578 | 0.927702 | 0.899874 | 0.905127 | 0.937307 | 1 | 0.925674
a | 0.058716 | 0.17176 | 0.64349 | 0.27198 | 1.4176 | 0.64964 | 0.64406 | 0.12418
b | 6780.2789 | −39.7023 | 13,448.3342 | 7691.1691 | 30,532.6828 | 9443.9293 | 9426.1429 | 4152.6776
c | 23,519.5322 | 8199.7156 | 20,248.424 | 16,383.655 | 6180.5493 | 24,060.5466 | 24,153.8696 | 21,550.6303
Table 10. Prediction results of TUWRs in China in the next five years.

Years | 2024      | 2025      | 2026      | 2027      | 2028
TUWRs | 26,376.97 | 26,028.78 | 24,960.55 | 28,731.54 | 33,688.46
Table 11. Statistical results of solving TUWRs in China using different models.

Years | Real Value | ID_T |  |  | GM(1,1) |  |  | DGM(1,1) |  |  | NGBM(1,1) |  | 
      |            | SimD | ResE | APE (%) | SimD | ResE | APE (%) | SimD | ResE | APE (%) | SimD | ResE | APE (%)
2005 | 28,053.1 | 27,916.57 | −136.53 | 0.49 | 25,987.99 | −2065.11 | 7.36 | 26,023.11 | −2029.99 | 7.24 | 25,987.99 | −2065.11 | 7.36
2006 | 25,330.1 | 25,594.21 | 264.11 | 1.04 | 26,221.03 | 890.93 | 3.52 | 26,251.27 | 921.17 | 3.64 | 26,221.03 | 890.93 | 3.52
2007 | 25,255.2 | 25,617.06 | 361.86 | 1.43 | 26,456.15 | 1200.95 | 4.76 | 26,481.43 | 1226.23 | 4.86 | 26,456.15 | 1200.95 | 4.76
2008 | 27,434.3 | 26,010.91 | −1423.39 | 5.19 | 26,693.38 | −740.92 | 2.70 | 26,713.60 | −720.70 | 2.63 | 26,693.38 | −740.92 | 2.70
2009 | 24,180.2 | 26,496.10 | 2315.90 | 9.58 | 26,932.74 | 2752.54 | 11.38 | 26,947.82 | 2767.62 | 11.45 | 26,932.74 | 2752.54 | 11.38
2010 | 30,906.4 | 26,978.82 | −3927.58 | 12.71 | 27,174.25 | −3732.15 | 12.08 | 27,184.08 | −3722.32 | 12.04 | 27,174.25 | −3732.15 | 12.08
2011 | 23,256.7 | 27,423.85 | 4167.15 | 17.92 | 27,417.92 | 4161.22 | 17.89 | 27,422.42 | 4165.72 | 17.91 | 27,417.92 | 4161.22 | 17.89
2012 | 29,528.8 | 27,818.17 | −1710.63 | 5.79 | 27,663.78 | −1865.02 | 6.32 | 27,662.85 | −1865.95 | 6.32 | 27,663.78 | −1865.02 | 6.32
2013 | 27,957.9 | 28,158.21 | 200.31 | 0.72 | 27,911.84 | −46.06 | 0.16 | 27,905.39 | −52.51 | 0.19 | 27,911.84 | −46.06 | 0.16
2014 | 27,266.9 | 28,444.74 | 1177.84 | 4.32 | 28,162.12 | 895.22 | 3.28 | 28,150.05 | 883.15 | 3.24 | 28,162.12 | 895.22 | 3.28
2015 | 27,962.6 | 28,680.48 | 717.88 | 2.57 | 28,414.65 | 452.05 | 1.62 | 28,396.85 | 434.25 | 1.55 | 28,414.65 | 452.05 | 1.62
2016 | 32,466.4 | 28,869.04 | −3597.36 | 11.08 | 28,669.45 | −3796.95 | 11.70 | 28,645.83 | −3820.57 | 11.77 | 28,669.45 | −3796.95 | 11.70
2017 | 28,761.2 | 29,014.29 | 253.09 | 0.88 | 28,926.52 | 165.32 | 0.57 | 28,896.98 | 135.78 | 0.47 | 28,926.52 | 165.32 | 0.57
2018 | 27,462.5 | 29,120.10 | 1657.60 | 6.04 | 29,185.91 | 1723.41 | 6.28 | 29,150.34 | 1687.84 | 6.15 | 29,185.91 | 1723.41 | 6.28
MAPEsimulation (%): ID_T 5.6366; GM(1,1) 6.4009; DGM(1,1) 6.3887; NGBM(1,1) 6.4005
2019 | 29,041.0 | 29,106.99 | 65.99 | 0.23 | 29,447.62 | 406.62 | 1.40 | 29,405.91 | 364.91 | 1.26 | 29,447.62 | 406.62 | 1.40
2020 | 31,605.2 | 29,112.08 | −2493.12 | 7.89 | 29,711.67 | −1893.53 | 5.99 | 29,663.73 | −1941.47 | 6.14 | 29,711.67 | −1893.53 | 5.99
2021 | 29,638.2 | 29,086.75 | −551.45 | 1.86 | 29,978.10 | 339.90 | 1.15 | 29,923.81 | 285.61 | 0.96 | 29,978.10 | 339.90 | 1.15
2022 | 27,088.1 | 29,034.33 | 1946.23 | 7.18 | 30,246.91 | 3158.81 | 11.66 | 30,186.17 | 3098.07 | 11.44 | 30,246.91 | 3158.81 | 11.66
2023 | 24,780.0 | 28,957.86 | 4177.86 | 16.86 | 30,518.14 | 5738.14 | 23.16 | 30,450.83 | 5670.83 | 22.88 | 30,518.14 | 5738.14 | 23.16
MAPEprediction (%): ID_T 6.8041; GM(1,1) 8.6711; DGM(1,1) 8.5370; NGBM(1,1) 8.5839
MAPE (%): ID_T 5.9439; GM(1,1) 6.9983; DGM(1,1) 6.9540; NGBM(1,1) 6.9751
Table 12. Prediction results of TUWRs in China for the next five years under four different models.

Years     | 2024      | 2025      | 2026      | 2027      | 2028
ID_T      | 26,376.97 | 26,028.78 | 24,960.55 | 28,731.54 | 33,688.46
GM(1,1)   | 30,791.79 | 31,067.90 | 31,346.48 | 31,627.57 | 31,911.17
DGM(1,1)  | 30,717.80 | 30,987.12 | 31,258.80 | 31,532.87 | 31,809.33
NGBM(1,1) | 29,691.42 | 29,811.48 | 29,928.85 | 30,043.79 | 30,156.50
Share and Cite

MDPI and ACS Style

Zhu, Z.; Wang, J.; Yu, K. IA-DTPSO: A Multi-Strategy Integrated Particle Swarm Optimization for Predicting the Total Urban Water Resources in China. Biomimetics 2025, 10, 233. https://doi.org/10.3390/biomimetics10040233
