Article

ELPSO-C: A Clustering-Based Strategy for Dimension-Wise Diversity Control in Enhanced Leader Particle Swarm Optimization

Graduate School of Advanced Science and Engineering, Higashihiroshima Campus, Hiroshima University, Higashihiroshima 739-8527, Japan
* Author to whom correspondence should be addressed.
AppliedMath 2025, 5(4), 159; https://doi.org/10.3390/appliedmath5040159
Submission received: 31 May 2025 / Revised: 4 August 2025 / Accepted: 16 August 2025 / Published: 7 November 2025
(This article belongs to the Special Issue Advances in Intelligent Control for Solving Optimization Problems)

Abstract

In high-dimensional optimization, particle swarm optimization (PSO) algorithms often suffer from premature convergence due to stagnation in certain dimensions. This study proposes an enhanced variant, ELPSO-C, which integrates dimension-wise convergence detection with adaptive exploration mechanisms. By applying agglomerative clustering to inter-particle velocity diversity, ELPSO-C identifies dimensions showing signs of stagnation and selectively reintroduces diversity through targeted mutation strategies. The algorithm preserves global search capability while reducing unnecessary perturbation in well-explored dimensions. Experimental results on a suite of 18 benchmark functions across various dimensions demonstrate that ELPSO-C consistently achieves superior performance compared to existing PSO variants, especially in high-dimensional and complex landscapes. These findings suggest that dimension-aware adaptation is an effective strategy for improving PSO’s robustness and convergence quality.

1. Introduction

Particle swarm optimization (PSO) was originally proposed by Kennedy and Eberhart as a stochastic optimization algorithm inspired by the collective behavior observed in bird flocking and fish schooling [1,2]. Due to its ability to balance exploration and exploitation using simple mathematical operations, PSO has become one of the most widely used metaheuristic algorithms in computational intelligence [3]. Its advantages include high search efficiency, ease of implementation, and few control parameters, making it particularly suitable for a wide range of practical applications. These applications span various fields such as machine learning [4], control systems [5], electric power systems [6], and financial engineering [7]. As a result, PSO has been extensively studied and continuously improved over the past decades.
One of the most critical issues in PSO is its tendency to prematurely converge to local optima, especially when dealing with complex multimodal objective functions [7,8]. In such scenarios, particles often lose diversity and become trapped in limited regions of the search space, preventing the algorithm from sufficiently exploring other promising areas. This phenomenon arises because the social and cognitive components in the PSO update equations tend to rapidly align particle trajectories, leading to loss of exploration capability as the search progresses [9]. As a result, maintaining adequate diversity within the swarm is essential for achieving global search performance and avoiding stagnation [10].
To address the limitations of standard PSO in handling local optima, the Enhanced Leader Particle Swarm Optimization (ELPSO) algorithm was proposed [11]. ELPSO enhances the leader particles by applying a multi-stage mutation strategy, thereby increasing the algorithm’s ability to escape from local minima. Specifically, it adopts diverse mutation mechanisms depending on the current stage of convergence, such as Gaussian, Cauchy, and Lévy-based perturbations, to reinforce the global exploration capability of the best-performing individuals. Although ELPSO improves search accuracy and robustness through leader enhancement, it does not explicitly control the diversity of the entire swarm. As a result, convergence may still occur prematurely when diversity loss propagates among non-leader particles.
To overcome the limitations of ELPSO, this study proposes a novel method called ELPSO-C, which introduces a clustering-based diversity control mechanism at the dimension level. The core idea of ELPSO-C is to monitor and manage the diversity of the swarm on a per-dimension basis, rather than treating the swarm as a whole. By applying clustering to the particles’ distribution in each dimension, ELPSO-C identifies which dimensions are undergoing premature convergence, characterized by reduced diversity. Adaptive operations are then applied selectively to those dimensions in order to restore and maintain diversity. By integrating the leader-enhancing mutation mechanisms of ELPSO with dimension-wise diversity control over the entire swarm, ELPSO-C achieves improved balance between global exploration and local exploitation, thus enhancing overall optimization performance.
The remainder of this paper is organized as follows. Section 2 provides a review of related work in the field of particle swarm optimization and diversity control strategies. Section 3 describes the proposed method, ELPSO-C, in detail, including its clustering-based mechanism and dimension-wise adaptation. Section 4 presents numerical experiments conducted to evaluate the performance of the proposed method using benchmark functions. Finally, Section 5 concludes the paper with a summary of findings and future research directions.

2. Related Work

Particle swarm optimization (PSO), originally proposed by Kennedy and Eberhart [1,2], is a stochastic, population-based optimization algorithm inspired by the collective behavior of biological swarms. Owing to its simplicity, computational efficiency, and few control parameters, PSO has emerged as one of the most widely adopted metaheuristic algorithms. It has been successfully applied in various domains, including machine learning, control systems, electric power systems, and engineering design [3,5,6,7].
A major limitation of standard PSO lies in its tendency to prematurely converge in complex multimodal landscapes, where particles lose diversity and become trapped in local optima [8,10]. The velocity and position of each particle are updated at each iteration according to the following equations:
$$v_i(t+1) = \omega v_i(t) + c_1 r_1 \left( p_i^{\text{best}} - x_i(t) \right) + c_2 r_2 \left( g^{\text{best}} - x_i(t) \right), \tag{1}$$
$$x_i(t+1) = x_i(t) + v_i(t+1), \tag{2}$$
where $\omega$ denotes the inertia weight, $c_1$ and $c_2$ are the acceleration coefficients, and $r_1, r_2$ are random variables drawn uniformly from $[0, 1]$. These equations demonstrate how particle movements are guided by their personal best and the global best positions, potentially resulting in a rapid loss of diversity as the search progresses.
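As a concrete illustration, the two update rules above can be sketched in Python for a single particle. The parameter values (inertia 0.7, acceleration coefficients 1.5) and the function name are illustrative choices, not values prescribed by this paper:

```python
import random

def pso_step(x, v, pbest, gbest, omega=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update for a single particle.

    x, v, pbest are lists of length D; gbest is the swarm's best position.
    Implements Equations (1) and (2): inertia term plus cognitive and
    social attraction, then a simple position integration.
    """
    d = len(x)
    new_v = []
    for j in range(d):
        r1, r2 = random.random(), random.random()  # r1, r2 ~ U[0, 1]
        vj = (omega * v[j]
              + c1 * r1 * (pbest[j] - x[j])
              + c2 * r2 * (gbest[j] - x[j]))
        new_v.append(vj)
    new_x = [x[j] + new_v[j] for j in range(d)]
    return new_x, new_v
```

Note that when a particle sits exactly on both its personal best and the global best with zero velocity, the update leaves it in place, which is precisely the stagnation scenario the mutation strategies below address.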
In addition to standard PSO and ELPSO frameworks, various studies have explored leader-based or cooperative enhancements to PSO. For example, Li et al. [12] proposed a multi-component PSO algorithm with a leader learning mechanism that combined four PSO strategies in a cooperative strategy pool to improve structural damage detection accuracy at the expense of a higher computational effort. Chen et al. [13] introduced ALC-PSO, where the global-best leader was assigned a lifespan and replaced by challengers as it ages, helping to prevent premature convergence and maintain swarm diversity. Wang et al. [14] applied PSO as an optimization tool for tuning gain parameters in leader–follower cooperative control laws of multi-agent systems, demonstrating PSO’s utility in distributed control design.
Several recent PSO variants have adopted Gaussian-based strategies to enhance swarm diversity and mitigate premature convergence. For instance, Chen et al. [15] proposed an Adaptive PSO with Gaussian Perturbation and Mutation (AGMPSO), where Gaussian noise was injected into the global best particle when stagnation was detected. Wang et al. [16] introduced an Adaptive Dimensional Gaussian Mutation PSO (ADGMPSO), which applied Gaussian perturbation to elite particles on a per-dimension basis with adaptive control. Han et al. [17] developed a Group Learning PSO (GLPSO) that incorporated adaptive Gaussian noise into particle velocities to promote exploration. While these methods apply Gaussian perturbations globally or at the level of selected particles, they do not explicitly identify stagnated dimensions across the swarm. In contrast, our approach performs a clustering-based analysis of particle positions to detect stagnated dimensions and selectively reset their values using a targeted diversity control strategy. This dimension-wise mechanism enables more precise intervention, which is particularly beneficial in high-dimensional optimization problems.
To overcome this issue, various improvements have been proposed. Enhanced Leader PSO (ELPSO) [11] applies multi-stage mutation strategies—such as Gaussian, Cauchy, and opposition-based mutations—to enhance the exploration ability of leader particles. While ELPSO increases robustness against local optima, it does not explicitly manage diversity at the dimension level. Clustering techniques have also been integrated into swarm and evolutionary computation frameworks to support adaptive diversity control and structure-aware search [4,18]. In particular, hierarchical agglomerative clustering using complete linkage helps to identify structural patterns in the search space and to guide localized adaptation. Building on these concepts, the proposed ELPSO-C combines leader-enhancing mutation mechanisms with clustering-based dimension-wise control to improve search diversity, particularly in high-dimensional and complex optimization problems.
In addition to clustering-based approaches, recent studies have applied domination landscape concepts to better understand and exploit the structure of local optima in swarm and evolutionary optimization. Hao et al. [19] first proposed the domination landscape (DL) as a directed graph of solutions ordered by dominance relationships, providing a hierarchical view of optima in the search space. Building on this idea, later studies have applied DL analysis in swarm algorithms to preserve solution diversity and avoid premature convergence. For example, Li et al. [20] employ a DL-based matrix to compare successive fitness landscapes in dynamic environments, allowing their evolutionary swarm to detect shifts in the distribution of optima and reuse or diversify solutions accordingly. Similarly, Hou et al. [21] incorporate DL theory into particle swarm optimization, using dominance relations to guide particles towards multiple high-quality peaks rather than a single dominant solution, thereby maintaining diversity and reducing the risk of entrapment in local optima. These approaches show that modeling the fitness landscape as a dominance hierarchy helps characterize the landscape’s multi-optima structure and inform strategies (like diversity preservation and knowledge transfer) that align with the goals of the current study.
Other well-known swarm intelligence algorithms such as Artificial Bee Colony (ABC), Gray Wolf Optimizer (GWO), and Tree-Seed Algorithm (TSA) have also been proposed and widely applied to various continuous optimization problems [22,23,24]. ABC is inspired by the foraging behavior of honey bees and balances exploration and exploitation through the roles of employed and onlooker bees. GWO mimics the hierarchical leadership and hunting strategies of gray wolves, effectively balancing local and global search. TSA models the propagation behavior of tree seeds, combining random and directional steps to switch between different search modes. While these algorithms have shown strong performance on many problems, our proposed methods are built on the PSO framework with a specific focus on dimension-wise diversity control and adaptive intervention strategies.

2.1. Enhanced Leader Particle Swarm Optimization (ELPSO)

ELPSO is a variant of the standard PSO algorithm designed to mitigate premature convergence. It enhances global search capabilities by incorporating multi-stage mutation strategies into the behavior of leader particles, thereby promoting swarm diversity. This section outlines the motivation and conceptual basis of ELPSO, details its mutation strategy, and presents the overall algorithmic structure.

2.1.1. Motivation and Concept

A fundamental limitation of standard PSO is its tendency to suffer from premature convergence, particularly in complex multimodal search spaces. As particles are iteratively attracted toward their personal best ($p_i^{\text{best}}$) and the global best ($g^{\text{best}}$) solutions, the diversity of the swarm often diminishes rapidly. This loss of diversity reduces the ability of the algorithm to escape local optima and conduct sufficient exploration. A key contributing factor to this issue is the dominant influence exerted by the leader particles. Once a leader emerges, other particles tend to follow its trajectory too closely, resulting in the entire swarm collapsing into a limited region of the search space [8,11].
To overcome the limitations of standard PSO, Jordehi proposed the Enhanced Leader Particle Swarm Optimization (ELPSO) algorithm [11]. The core idea of ELPSO is to reinforce the role of leader particles through a multi-stage mutation strategy designed to prevent their stagnation, which often leads to premature convergence around suboptimal regions. Unlike conventional PSO, in which particles update their positions solely based on deterministic equations, ELPSO adaptively applies different mutation types to leader particles depending on their search state. These mutations introduce diverse perturbations to leader positions, thereby preserving exploration potential and enabling the swarm to escape local optima. This approach helps maintain higher swarm diversity throughout the search process and enhances the algorithm’s ability to identify global optima in complex landscapes.
ELPSO preserves the core structure of standard PSO while enhancing search performance by modifying the behavior of leader particles. Through the use of adaptive mutation strategies, ELPSO offers greater flexibility and improved global exploration compared to earlier PSO variants, making it an effective approach for mitigating premature convergence and stagnation. However, a notable limitation of ELPSO is its lack of explicit mechanisms for controlling diversity at the dimension level. Although the behavior of leader particles is diversified, the algorithm does not monitor or preserve variation along individual dimensions. This shortcoming motivated the development of ELPSO-C, which extends ELPSO by introducing clustering-based, dimension-wise diversity control.

2.1.2. Multi-Stage Mutation Strategy

To mitigate the stagnation of leader particles in standard PSO, ELPSO employs a multi-stage mutation strategy consisting of five distinct operators: Gaussian mutation, Cauchy mutation, dimension-wise opposition-based mutation, global opposition-based mutation, and DE-based mutation [11]. Instead of relying on a single perturbation method, ELPSO adaptively selects operators based on the degree of stagnation. Milder mutations are applied during early stagnation, whereas stronger perturbations are triggered under prolonged stagnation. This composite strategy enhances diversity and achieves a better balance between global exploration and local exploitation.
Gaussian Mutation
Gaussian mutation perturbs each component x d of a particle’s position vector x by adding Gaussian noise:
$$x_d^{\text{new}} = x_d + N(0, \sigma^2(t)),$$
where $N(0, \sigma^2(t))$ denotes a normal distribution with mean zero and time-dependent variance $\sigma^2(t)$. Following [11], the standard deviation $\sigma(t)$ is defined as:
$$\sigma(t) = \sigma_0 \left( 1 - \frac{t}{T} \right),$$
where $\sigma_0$ is the initial standard deviation, $t$ is the current iteration number, and $T$ is the maximum number of iterations. This schedule progressively reduces the mutation strength, allowing for finer local search in the later stages of the optimization process.
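A minimal sketch of the Gaussian operator with this linearly decaying standard deviation (the default $\sigma_0 = 1.0$ is an illustrative assumption, not a value taken from the paper):

```python
import random

def gaussian_mutation(x, t, T, sigma0=1.0):
    """Perturb each component with N(0, sigma(t)^2), where
    sigma(t) = sigma0 * (1 - t/T) shrinks linearly over iterations,
    so perturbations fade out as the search approaches iteration T."""
    sigma = sigma0 * (1.0 - t / T)
    return [xd + random.gauss(0.0, sigma) for xd in x]
```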
Cauchy Mutation
Cauchy mutation introduces perturbations drawn from the Cauchy distribution, which has heavier tails than the Gaussian distribution. This property enables occasional large jumps, thereby increasing the likelihood of escaping local optima:
$$x_d^{\text{new}} = x_d + C(0, \gamma(t)),$$
where $C(0, \gamma(t))$ denotes a Cauchy distribution with a time-dependent scale parameter $\gamma(t)$. Following [11], the scale parameter is defined as:
$$\gamma(t) = \gamma_0 \left( 1 - \frac{t}{T} \right),$$
where $\gamma_0$ is the initial scale, and $T$ is the total number of iterations.
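A sketch of the Cauchy operator under the same decaying schedule. Since Python's standard library has no Cauchy sampler, the standard inverse-CDF construction $\gamma \tan(\pi(u - 0.5))$ with $u \sim U(0,1)$ is used here; the default $\gamma_0 = 1.0$ is an illustrative assumption:

```python
import math
import random

def cauchy_mutation(x, t, T, gamma0=1.0):
    """Perturb each component with Cauchy noise C(0, gamma(t)),
    gamma(t) = gamma0 * (1 - t/T). The heavy tails of the Cauchy
    distribution allow occasional large jumps out of local optima."""
    gamma = gamma0 * (1.0 - t / T)
    return [xd + gamma * math.tan(math.pi * (random.random() - 0.5))
            for xd in x]
```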
Dimension-Wise Opposition-Based Mutation
In the ELPSO framework, opposition-based mutation generates an opposite solution based on the current position [11]. For each component $x_d$ of the position vector within bounds $[a_d, b_d]$, the opposite value is computed as:
$$x_d^{\text{opp}} = a_d + b_d - x_d.$$
In this study, we introduce a variant in which the opposition operation is selectively applied to a subset of dimensions $D' \subseteq \{1, \ldots, D\}$:
$$x_d^{\text{new}} = \begin{cases} a_d + b_d - x_d & \text{if } d \in D', \\ x_d & \text{otherwise}. \end{cases}$$
We refer to this variant as dimension-wise opposition-based mutation. This operator is newly introduced in the present study and was not part of the original ELPSO framework proposed in [11]. While inspired by the general concept of opposition-based learning, it applies the transformation selectively to a subset of dimensions, thereby providing more targeted diversity enhancement.
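The selective transform can be written compactly. In this sketch the argument `dims` stands for the subset $D'$ and is supplied by the caller; the function name is illustrative:

```python
def dimwise_opposition(x, a, b, dims):
    """Apply the opposition transform x_d -> a_d + b_d - x_d only on the
    selected subset of dimensions `dims`; other components pass through
    unchanged. a and b are the per-dimension lower/upper bounds."""
    return [a[d] + b[d] - x[d] if d in dims else x[d]
            for d in range(len(x))]
```

Passing `dims = set(range(len(x)))` recovers the global opposition-based mutation described next.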
Global Opposition-Based Mutation
This mutation applies the opposition-based transformation to all dimensions simultaneously:
$$x_d^{\text{new}} = a_d + b_d - x_d, \quad \forall d \in \{1, \ldots, D\}.$$
It aggressively relocates a particle to its mirrored position and is particularly useful when the current search region is deemed unpromising.
Differential Evolution-Based Mutation
Inspired by the differential evolution (DE) algorithm, this mutation creates new candidate positions using vector differentials between randomly selected particles:
$$x^{\text{new}} = x_r + F \cdot (x_s - x_t),$$
where $x_r$, $x_s$, and $x_t$ are randomly selected individuals from the population, and $F$ is a scaling factor. This operator introduces global exploration capability into the PSO framework.
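A sketch of this DE-style operator (the default $F = 0.5$ is a common choice in the DE literature, assumed here rather than taken from the paper):

```python
import random

def de_mutation(population, F=0.5):
    """DE/rand/1-style mutation: x_new = x_r + F * (x_s - x_t), with
    three distinct individuals r, s, t drawn at random from the
    population (a list of position vectors)."""
    r, s, t = random.sample(range(len(population)), 3)
    xr, xs, xt = population[r], population[s], population[t]
    return [xr[j] + F * (xs[j] - xt[j]) for j in range(len(xr))]
```

When the population has collapsed to identical vectors, the differential $x_s - x_t$ vanishes and the operator produces no perturbation, which is why it is paired with the opposition-based operators rather than used alone.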
Each mutation type is triggered based on a stagnation counter or predefined performance criteria. The strength and frequency of application are adaptively controlled to maintain a balance between convergence and exploration.
To systematically integrate the multi-stage mutation strategy into the ELPSO framework, a staged and conditional logic is adopted. Specifically, each leader particle maintains a stagnation counter that tracks the number of consecutive iterations without fitness improvement. When this counter exceeds a predefined threshold, mutation operators are invoked in a sequential manner. Initially, a mild operator, such as Gaussian or Cauchy mutation, is applied. If stagnation persists, more aggressive strategies—such as dimension-wise or global opposition-based mutation—are introduced. For prolonged stagnation, the differential evolution-based mutation is employed to enhance global exploration.
This adaptive staging is implemented using conditional branching and loop structures within the algorithm. The appropriate mutation stage is selected based on the degree of stagnation, enabling the strategy to effectively balance intensification and diversification.
From an implementation perspective, each mutation operation must be followed by a boundary correction to ensure feasibility:
$$x_d^{\text{new}} = \min\{\max\{x_d^{\text{new}}, a_d\}, b_d\}, \quad d \in \{1, \ldots, D\},$$
where $[a_d, b_d]$ denotes the valid search range for each dimension. Furthermore, the stagnation counter should be reset upon improvement in fitness, and redundant application of the same mutation stage should be avoided. These considerations contribute to both algorithmic stability and computational efficiency.
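The boundary correction amounts to a component-wise clip, sketched here with an illustrative function name:

```python
def clip_to_bounds(x, a, b):
    """Project each component back into its feasible interval [a_d, b_d]
    after a velocity or mutation update, via min/max clipping."""
    return [min(max(xd, ad), bd) for xd, ad, bd in zip(x, a, b)]
```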
Each mutation operator serves a distinct purpose in the search process. Gaussian mutation is well suited for early-stage global exploration due to its moderate perturbation range. Cauchy mutation, with its heavy-tailed distribution, enables occasional large jumps that help escape boundary traps. Dimension-wise opposition-based mutation facilitates local refinement in selected dimensions, while global opposition-based mutation aggressively repositions particles across the entire search space. DE-based mutation introduces directional variation and promotes long-range exploration. These operators are selectively applied based on the degree of stagnation and their expected impact on swarm diversity.
Other strategies for maintaining diversity in swarm-based optimization include techniques such as random immigrants [25], adaptive parameter control [26], and hybridization with differential evolution [18]. The random immigrants approach introduces new particles with random positions into the swarm to refresh diversity, although it may disrupt convergence if not carefully managed. Adaptive parameter schemes adjust coefficients such as inertia weight and acceleration constants during the optimization process, promoting a better balance between exploration and exploitation. Compared to these methods, ELPSO explicitly targets stagnation in leader particles and leverages conditional multi-stage mutation for focused perturbation. While ELPSO enhances flexibility and resilience to local optima, it does not directly control swarm-wide diversity at the dimension level—an issue addressed by our proposed ELPSO-C method.

2.1.3. Algorithmic Structure

The ELPSO algorithm extends the canonical PSO by incorporating mutation mechanisms that are triggered based on stagnation detection. The procedure of the algorithm is as follows:
Step 1: 
Initialization: Initialize the positions $x_i(0)$ and velocities $v_i(0)$ of all particles uniformly at random within the search space. Set the personal best $p_i^{\text{best}} = x_i(0)$, and determine the initial global best $g^{\text{best}}$. Set the inertia weight $\omega = \omega_{\max}$, and initialize stagnation counters to zero.
Step 2: 
Fitness Evaluation: Evaluate the fitness of each particle. If a particle’s current fitness is better than its $p_i^{\text{best}}$, update $p_i^{\text{best}}$. Update $g^{\text{best}}$ if any particle improves upon it.
Step 3: 
Velocity and Position Update: Update each particle’s velocity and position using Equations (1) and (2), respectively.
Step 4: 
Inertia Weight Update: Linearly decrease the inertia weight over time using:
$$\omega(t) = \omega_{\max} - \frac{\omega_{\max} - \omega_{\min}}{T} \cdot t,$$
where $t$ is the current iteration, and $T$ is the maximum number of iterations. Here, $\omega_{\max}$ and $\omega_{\min}$ denote the upper and lower bounds of the inertia weight, respectively. A larger $\omega$ in early iterations promotes global exploration, while a smaller $\omega$ in later stages encourages local exploitation near promising regions.
Step 5: 
Stagnation Detection: For each leader particle, track the number of consecutive iterations without fitness improvement. If this count exceeds a predefined threshold, mark the particle as stagnant.
Step 6: 
Mutation Strategy Application: Apply the appropriate mutation strategy to stagnant leader particles. Strategies are selected progressively based on the stagnation duration.
Step 7: 
Boundary Constraint Handling: If any position component after velocity or mutation updates falls outside its feasible range, project it back using a constraint-handling rule such as clipping or reflection.
Step 8: 
Termination Check: If the iteration count t reaches T or a stopping criterion is satisfied (e.g., convergence threshold), return g best as the solution. Otherwise, increment t and return to Step 2.
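The eight steps above can be condensed into a runnable sketch. For brevity, Step 6 applies only the first (Gaussian) mutation stage to the global best, whereas the full algorithm escalates through Cauchy, opposition-based, and DE-based stages; all parameter defaults and the function name are illustrative assumptions:

```python
import random

def elpso_sketch(f, D, n=20, T=200, bounds=(-5.0, 5.0),
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0,
                 stall_limit=5, sigma0=1.0):
    """Condensed ELPSO loop (Steps 1-8) minimizing f over [a, b]^D."""
    a, b = bounds
    # Step 1: random positions, zero velocities, personal/global bests.
    X = [[random.uniform(a, b) for _ in range(D)] for _ in range(n)]
    V = [[0.0] * D for _ in range(n)]
    pbest = [row[:] for row in X]
    pf = [f(x) for x in X]
    g = min(range(n), key=lambda i: pf[i])
    gbest, gf, stall = pbest[g][:], pf[g], 0
    for t in range(T):
        w = w_max - (w_max - w_min) * t / T        # Step 4: inertia decay
        improved = False
        for i in range(n):
            for j in range(D):                     # Step 3: Eqs. (1)-(2)
                r1, r2 = random.random(), random.random()
                V[i][j] = (w * V[i][j]
                           + c1 * r1 * (pbest[i][j] - X[i][j])
                           + c2 * r2 * (gbest[j] - X[i][j]))
                # Step 7: clip back into the feasible range
                X[i][j] = min(max(X[i][j] + V[i][j], a), b)
            fi = f(X[i])                           # Step 2: evaluation
            if fi < pf[i]:
                pf[i], pbest[i] = fi, X[i][:]
                if fi < gf:
                    gf, gbest, improved = fi, X[i][:], True
        stall = 0 if improved else stall + 1       # Step 5: stagnation
        if stall > stall_limit:                    # Step 6: first stage only
            sigma = sigma0 * (1.0 - t / T)
            cand = [min(max(xd + random.gauss(0.0, sigma), a), b)
                    for xd in gbest]
            fc = f(cand)
            if fc < gf:
                gf, gbest = fc, cand
            stall = 0
    return gbest, gf                               # Step 8: return g_best
```

For example, `elpso_sketch(lambda x: sum(v * v for v in x), D=2)` minimizes the 2-D sphere function and returns a point near the origin.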

2.2. Clustering Techniques in Optimization

Clustering techniques, widely used in data mining and pattern recognition, have also been employed in evolutionary computation to enhance both adaptive behavior and search efficiency. In swarm-based optimization, clustering serves as a tool for analyzing structural properties of particles, such as spatial distribution and behavioral similarity. By leveraging clustering-based insights, one can design more effective mechanisms for diversity maintenance, convergence acceleration, or solution space decomposition.
This section provides an overview of hierarchical agglomerative clustering, explains the complete-linkage method as a representative distance criterion, and discusses its application to dimension-wise analysis within the ELPSO-C framework.
Clustering techniques have become powerful tools in metaheuristic optimization, used to analyze and control the distribution of particles or solutions in the search space. Among these, hierarchical clustering is especially valuable because it reveals structured relationships in the data, providing interpretable insights into convergence behavior and diversity patterns. In this study, we adopted hierarchical clustering as a core mechanism for diversity control in particle-based optimization, using its structure to guide adaptive exploration.
Hierarchical clustering can be broadly categorized into two approaches: agglomerative and divisive. In this study, we focused on the agglomerative strategy, which constructs a hierarchy in a bottom-up manner. Initially, each data point is treated as an individual cluster. Then, in an iterative process, the two closest clusters are merged at each step based on a defined linkage criterion. This process continues until all points are grouped into a single cluster, thereby forming a nested hierarchy of clusters.
The algorithmic procedure of hierarchical agglomerative clustering (HAC) consists of the following iterative steps [27]: initializing each data point as an individual cluster, computing the pairwise distance matrix, merging the two closest clusters, and updating the distance matrix accordingly. This process is repeated until all data points are grouped into a single cluster. The entire clustering progression can be represented as a dendrogram, a tree-like diagram that visually illustrates the nested relationships among clusters and reveals the structure of the data at various levels of granularity.
The choice of linkage criterion, which defines the distance between clusters, has a significant impact on the outcome of hierarchical clustering. Common linkage methods include single linkage, complete linkage, and average linkage, each influencing the clustering structure differently. In this study, we adopted the complete-linkage method, which defines the distance between clusters as the maximum distance between their elements. This approach is particularly effective in suppressing the formation of elongated or chained clusters, thereby mitigating the tendency toward local concentration in the search space.

Agglomerative Clustering Procedure

Step 1: 
Initialize each data point as an individual cluster $C_1, C_2, \ldots, C_n$, and compute the initial distance matrix $DIS = [dis(C_i, C_j)]$ between all clusters. For clusters $C_i$ and $C_j$, represented by their respective centroid vectors $x^{(i)}$ and $x^{(j)}$, the normalized Euclidean distance is defined as:
$$dis(C_i, C_j) = \sqrt{\sum_{d=1}^{D} \left( \frac{x_d^{(i)} - x_d^{(j)}}{s_d} \right)^2},$$
where $D$ is the dimensionality of the data, and $s_d$ denotes the standard deviation along the $d$th dimension.
Step 2: 
According to the selected linkage criterion—such as complete linkage—identify the pair of clusters $(C_i, C_j)$ with the smallest distance $dis(C_i, C_j)$ in the matrix $DIS$.
Step 3: 
Merge clusters $C_i$ and $C_j$ to form a new cluster $C_{ij}$.
Step 4: 
Recalculate the distances between the newly formed cluster $C_{ij}$ and all remaining clusters, and update the distance matrix $DIS$.
Step 5: 
Repeat Steps 2–4 until all data points are grouped into a single cluster or the desired number of clusters is achieved.
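Under the complete-linkage criterion and the normalized distance of Step 1, the procedure can be sketched in plain Python as follows (an illustration only; a production implementation would typically call a library routine such as SciPy's hierarchical clustering):

```python
def complete_linkage_cluster(points, n_clusters):
    """Agglomerative clustering (Steps 1-5) with complete linkage on the
    normalized Euclidean distance. `points` is a list of feature
    vectors; merging stops once `n_clusters` clusters remain.
    Returns clusters as lists of point indices."""
    D = len(points[0])
    # Per-dimension standard deviation s_d for distance normalization
    # (floored to avoid division by zero on constant dimensions).
    means = [sum(p[d] for p in points) / len(points) for d in range(D)]
    scale = [max((sum((p[d] - means[d]) ** 2 for p in points)
                  / len(points)) ** 0.5, 1e-12) for d in range(D)]

    def dist(i, j):
        return sum(((points[i][d] - points[j][d]) / scale[d]) ** 2
                   for d in range(D)) ** 0.5

    clusters = [[i] for i in range(len(points))]  # Step 1: singletons
    while len(clusters) > n_clusters:
        # Step 2: complete linkage = maximum pairwise member distance
        ci, cj = min(((p, q) for p in range(len(clusters))
                      for q in range(p + 1, len(clusters))),
                     key=lambda pq: max(dist(u, v)
                                        for u in clusters[pq[0]]
                                        for v in clusters[pq[1]]))
        clusters[ci] += clusters.pop(cj)  # Steps 3-4: merge, recompute
    return clusters                       # Step 5: stop at n_clusters
```

This brute-force version recomputes linkage distances on demand, which is adequate for the small feature sets used here (one point per dimension of the search space).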

3. Proposed Method: ELPSO-C

This section presents an improved algorithm, named ELPSO-C (ELPSO with Clustering), which introduces a clustering-based mechanism into the original ELPSO framework [11]. The key idea behind the proposed method is to focus on dimension-wise diversity during the search process and to perform clustering on the dimensions based on their respective search characteristics. By identifying dimensions that have prematurely converged or are approaching stagnation, the algorithm adaptively applies specialized update operations to maintain diversity and facilitate more effective exploration. Through this mechanism, ELPSO-C aims to prevent suboptimal convergence and improve global optimization performance. The proposed method combines the leader enhancement strategy of ELPSO with a clustering operation applied to the entire swarm, thereby achieving both wider coverage of the search space and improved solution accuracy.

3.1. Dimension-Wise Clustering Process

In particle swarm optimization (PSO), one of the primary causes of premature convergence is the loss of diversity in certain dimensions of the solution space. Since PSO updates each particle’s position independently across dimensions, the convergence behavior can vary from one dimension to another. Particularly in high-dimensional optimization problems, some dimensions may stagnate earlier than others, leading to reduced exploration capacity and eventual search stagnation. As the dimensionality increases, the influence of each individual dimension on the overall objective value diminishes, which in turn increases the susceptibility of the algorithm to premature convergence in a subset of dimensions. To address this issue, it is essential to monitor and manage the search dynamics on a per-dimension basis. This motivates the incorporation of dimension-wise analysis and control mechanisms into the PSO framework.
To address the above issue, the proposed method introduces a dimension-wise clustering strategy based on the diversity characteristics of each dimension. Specifically, diversity metrics such as the mean and variance of particle velocities are computed for each dimension and used to characterize its search status. By applying agglomerative clustering to the constructed feature vectors, dimensions exhibiting similar search behavior are grouped together. Among these, clusters containing dimensions with low diversity are identified as being close to stagnation. The algorithm then selectively applies specialized update operations to those dimensions in order to reinvigorate the search activity. This selective intervention enables adaptive control of the search process without uniformly perturbing all dimensions.
Unlike conventional PSO variants that primarily rely on global swarm-level indicators, the proposed method emphasizes localized, dimension-wise analysis and adaptation. This clustering-based mechanism enables the algorithm to detect and mitigate premature convergence at the level of individual dimensions, thereby maintaining diversity more effectively. By restricting intervention to only those dimensions exhibiting signs of stagnation, ELPSO-C avoids unnecessary perturbations and promotes targeted exploration. This promotes a more effective balance between global exploration and local exploitation, particularly in high-dimensional search spaces. Ultimately, the integration of clustering and dimension-wise adaptation enhances the algorithm’s robustness and search efficiency.

3.1.1. Data Construction and Metrics

To evaluate the convergence status of each dimension, the proposed method transforms the velocity vectors of all particles into a dimension-wise representation. Let $\mathbf{v}_i^k = [v_{i1}^k, v_{i2}^k, \ldots, v_{id}^k]$ denote the $d$-dimensional velocity vector of the $i$th particle at iteration $k$. The velocity matrix $V^k \in \mathbb{R}^{n \times d}$ at iteration $k$, consisting of all particles' velocities, is expressed as:
$$V^k = \begin{pmatrix} v_{11}^k & v_{12}^k & \cdots & v_{1d}^k \\ v_{21}^k & v_{22}^k & \cdots & v_{2d}^k \\ \vdots & \vdots & \ddots & \vdots \\ v_{n1}^k & v_{n2}^k & \cdots & v_{nd}^k \end{pmatrix}$$
For each dimension $j \in \{1, \ldots, d\}$, we extract a vector $\mathbf{v}_j^k \in \mathbb{R}^n$ by collecting the absolute values of the velocity components across all $n$ particles at iteration $k$. Specifically,
$$\mathbf{v}_j^k = \left[\, |v_{1j}^k|,\; |v_{2j}^k|,\; \ldots,\; |v_{nj}^k| \,\right]^{T}, \quad j = 1, 2, \ldots, d.$$
Each of these vectors is then summarized by its mean and variance, resulting in a two-dimensional feature vector for each dimension. Specifically, for each $j \in \{1, \ldots, d\}$, we define:
$$\mathrm{data}_j^k = \left( \mathrm{ave}(\mathbf{v}_j^k),\; \mathrm{var}(\mathbf{v}_j^k) \right)$$
These feature vectors $\mathrm{data}_j^k \in \mathbb{R}^2$ form the basis for clustering dimensions according to their diversity characteristics.
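As a concrete illustration, the feature construction above can be sketched in a few lines of NumPy. This is a hypothetical helper written for this exposition, not the authors' code, and the toy velocity matrix is invented for illustration:

```python
import numpy as np

def dimension_features(V):
    """Per-dimension feature vectors data_j^k = (mean, variance) of the
    absolute velocity components, following Section 3.1.1.
    V: (n, d) velocity matrix of the swarm at iteration k."""
    A = np.abs(V)                                             # |v_ij^k|
    return np.column_stack([A.mean(axis=0), A.var(axis=0)])   # shape (d, 2)

# Toy example: n = 4 particles, d = 3 dimensions.
# Dimension 2 is nearly stagnant; dimension 3 is still exploring.
V = np.array([[ 0.5, -0.01, 2.0],
              [-0.4,  0.02, 1.5],
              [ 0.6, -0.02, 2.5],
              [-0.5,  0.01, 1.0]])
feats = dimension_features(V)   # one (mean, variance) pair per dimension
```

Note that the absolute value is taken before averaging, so opposing velocities in a still-active dimension do not cancel to a spuriously small mean.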

3.1.2. Agglomerative Clustering Procedure

This subsection describes the first type of clustering applied in the proposed method, referred to as clustering (i). It groups the dimensions of the solution space based on their diversity characteristics, using agglomerative clustering. This allows the algorithm to capture and utilize similarities in convergence behavior among dimensions.
To detect dimensions that are close to convergence or exhibiting stagnation, the proposed method performs agglomerative clustering on the search behavior of each dimension. Each $\mathrm{data}_j^k \in \mathbb{R}^2$ serves as a feature vector representing the behavior of dimension $j$ at iteration $k$. To group dimensions with similar diversity characteristics, agglomerative clustering is applied to these feature vectors using the standardized Euclidean distance. The clustering process adopts the complete-linkage criterion, in which the distance between two clusters is defined as the maximum pairwise distance between their constituent elements. This approach yields a hierarchical structure of clusters, from which a predefined number of clusters, $n_c$, are extracted for subsequent analysis.
By transforming particle-level velocity data into dimension-wise feature vectors, this clustering mechanism enables the identification of stagnating dimensions and supports adaptive diversity control in the proposed ELPSO-C framework.
In the proposed method, two types of clustering are used in tandem. Clustering (i) groups dimensions based on their diversity features, enabling the identification of structurally similar dimensions. These clusters are then ranked and utilized in clustering (ii), which guides adaptive diversity control by determining which clusters should be targeted for intervention. This separation of analysis and control allows the method to selectively enhance diversity in stagnating dimensions while avoiding unnecessary modifications in well-explored areas.
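Clustering (i) can be sketched as follows, under the assumption that SciPy's hierarchical clustering utilities are used (the paper does not name a library); the feature values below are hypothetical:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_dimensions(feats, n_c):
    """Clustering (i): group dimensions by their (mean, variance) features
    with complete-linkage agglomerative clustering under the standardized
    Euclidean distance, then cut the hierarchy into n_c clusters."""
    Z = linkage(feats, method="complete", metric="seuclidean")
    return fcluster(Z, t=n_c, criterion="maxclust")   # labels in 1..n_c

# Hypothetical features: dimensions 0-2 nearly stagnant, 3-5 still active.
feats = np.array([[0.010, 1e-4], [0.020, 2e-4], [0.015, 1e-4],
                  [1.500, 3e-1], [1.800, 4e-1], [1.600, 3.5e-1]])
labels = cluster_dimensions(feats, n_c=2)
```

The standardized Euclidean metric rescales each feature by its variance, so the mean and variance components contribute comparably even though their raw magnitudes differ.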

3.1.3. Cluster Ranking Strategy

To identify which clusters are likely to represent stagnating dimensions, a ranking score is assigned to each cluster based on statistical summaries of the diversity metrics. For each cluster, we compute the average of the means and the average of the variances across all dimensions contained within the cluster. Then, each cluster is ranked separately in ascending order according to these two criteria: lower average mean and lower average variance indicate a higher likelihood of convergence.
Each cluster receives a rank score for its average mean and another for its average variance. These two rank scores are summed to obtain a final composite score. Clusters with lower total scores are considered more prone to premature convergence and are assigned higher priority in subsequent diversity control operations. Table 1 illustrates an example of the ranking procedure when the number of clusters is $n_c = 4$.
By using this simple yet effective ranking mechanism, the proposed method can prioritize intervention in the most critical dimensions—those that are most likely to have stagnated—while avoiding unnecessary operations in well-explored dimensions. This forms the basis for the subsequent adaptation strategies described in the following subsections.
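The composite ranking can be expressed compactly; the helper below is an illustrative sketch (function and variable names are ours, not from the paper):

```python
import numpy as np

def rank_clusters(feats, labels, n_c):
    """Section 3.1.3: rank each cluster by the sum of two ascending ranks,
    one on its average of per-dimension means and one on its average of
    variances; return cluster labels ordered most-stagnation-prone first."""
    avg_mean = np.array([feats[labels == c, 0].mean() for c in range(1, n_c + 1)])
    avg_var  = np.array([feats[labels == c, 1].mean() for c in range(1, n_c + 1)])
    rank = lambda a: np.argsort(np.argsort(a))   # 0-based ascending ranks
    score = rank(avg_mean) + rank(avg_var)       # composite score per cluster
    return np.argsort(score) + 1                 # low score = high priority

feats = np.array([[0.01, 1e-4], [0.02, 2e-4], [1.5, 0.3], [1.8, 0.4]])
labels = np.array([1, 1, 2, 2])
priority = rank_clusters(feats, labels, n_c=2)   # cluster 1 should come first
```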

3.1.4. Cluster-Based Strategy Application

This subsection describes clustering (ii), which utilizes the clusters obtained through clustering (i) and the subsequent ranking process. Based on the ranked clusters, diversity control operations such as velocity reinitialization are selectively applied to the most stagnated clusters. In contrast to clustering (i), which focuses on dimension-wise grouping, clustering (ii) guides the application of control strategies at the cluster level.
Based on the ranking described in the previous section, the proposed method selectively applies diversity control mechanisms to the top-ranked clusters that exhibit the lowest diversity. Let $n_c$ denote the total number of clusters, and suppose that the top $n_c/2$ clusters are identified as convergence-prone. For each such cluster $l$, a relative convergence coefficient $\alpha_l$ is computed to quantify its level of stagnation relative to the overall swarm. This coefficient is defined as:
$$\alpha_l = \frac{\mathrm{ave}\left( \left\{ \mathrm{ave}(\mathbf{v}_j^k) \mid j \in \text{cluster } l \right\} \right)}{\mathrm{ave}\left( \left\{ \mathrm{ave}(\mathbf{v}_j^k) \mid j = 1, \ldots, d \right\} \right)}$$
Here, $\mathbf{v}_j^k$ denotes the absolute velocity vector of dimension $j$ (with $j$ ranging over the dimensions assigned to cluster $l$ in the numerator), and $\mathrm{ave}(\cdot)$ denotes the arithmetic mean. The numerator is the average of the mean absolute velocities within the cluster, while the denominator is the corresponding mean over all dimensions.
This coefficient provides an intuitive measure of convergence within each cluster. A low value of α l suggests that the particles’ movement along the corresponding dimensions has significantly slowed, indicating that the search in these directions is stagnating. In contrast, high α l values imply ongoing exploration. Thus, α l effectively highlights which clusters are in danger of premature convergence and should be targeted by diversity-enhancing operations.
When $\alpha_l$ is sufficiently small, indicating that the corresponding cluster has significantly lower diversity than the overall swarm, a reinitialization operation is performed to restore exploration capability. Specifically, the velocity components of all particles in the dimensions belonging to the selected clusters are randomly reset within the defined search bounds:
$$v_{ij}^k = \mathrm{rand}\left[ x_{\min,j},\; x_{\max,j} \right], \quad i = 1, \ldots, n, \quad j \in \text{cluster } l, \quad l = 1, \ldots, n_c/2$$
This reinitialization process injects diversity into dimensions deemed to be stagnating, thereby preventing the swarm from becoming trapped in local optima and enhancing the robustness of the search process.
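A sketch of the coefficient and the reinitialization step follows. The names are ours, the feature values are invented, and the stagnation threshold used in the test is illustrative, since the paper only requires $\alpha_l$ to be "sufficiently small":

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha(feats, labels, c):
    """Relative convergence coefficient alpha_l of Section 3.1.4: the mean of
    the cluster's average absolute velocities over the mean across all dims."""
    return feats[labels == c, 0].mean() / feats[:, 0].mean()

def reinitialize(V, labels, targets, x_min, x_max):
    """Reset the velocity of every particle in each dimension that belongs
    to a targeted cluster, uniformly within that dimension's search bounds."""
    for j in np.where(np.isin(labels, targets))[0]:
        V[:, j] = rng.uniform(x_min[j], x_max[j], size=V.shape[0])
    return V

feats = np.array([[0.01, 1e-4], [0.02, 2e-4], [1.5, 0.3], [1.8, 0.4]])
labels = np.array([1, 1, 2, 2])
a1 = alpha(feats, labels, 1)        # well below 1: cluster 1 is stagnating
V = np.zeros((5, 4))                # 5 particles, 4 dimensions
V = reinitialize(V, labels, targets=[1], x_min=-np.ones(4), x_max=np.ones(4))
```

Only the columns of the targeted cluster are perturbed; dimensions outside it keep their velocities untouched, which is the selectivity the method aims for.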

3.1.5. Clustering Application Conditions

To prevent unnecessary computational overhead and ensure that clustering is only performed when beneficial, the proposed method incorporates specific conditions for triggering the clustering process. These conditions are designed to detect stagnation and loss of diversity in the search process.
The first condition evaluates the relative change in the global best fitness value between two consecutive iterations:
$$\frac{\left| f(\mathrm{gbest}^{k-1}) - f(\mathrm{gbest}^{k}) \right|}{\left| f(\mathrm{gbest}^{k-1}) \right|} < 0.001 \qquad (19)$$
This condition, referred to as Condition (A), is satisfied when the improvement in the objective function is below a specified threshold, indicating stagnation in the global search.
The second condition, Condition (B), ensures that the velocity-based diversity across dimensions is sufficiently high before initiating clustering:
$$\mathrm{var}\left( \mathrm{ave}(\mathbf{v}_1^k),\; \ldots,\; \mathrm{ave}(\mathbf{v}_d^k) \right) > 0.0001 \qquad (20)$$
This condition verifies that there is still meaningful variability in the average absolute velocity values across dimensions, suggesting that clustering could effectively differentiate between stagnated and active dimensions.
Conversely, when the variability in average velocities is low, Condition (C) is used to determine if velocity reinitialization should be prioritized instead:
$$\mathrm{var}\left( \mathrm{ave}(\mathbf{v}_1^k),\; \ldots,\; \mathrm{ave}(\mathbf{v}_d^k) \right) \leq 0.0001 \qquad (21)$$
This condition implies that most dimensions have converged to similar velocity profiles, and injecting diversity through reinitialization is more effective than clustering.
By using these conditions adaptively, the proposed method balances computational efficiency with effective diversity maintenance throughout the optimization process.
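The three conditions combine into a simple per-iteration dispatch; the sketch below uses the thresholds stated above, with a function name of our own choosing:

```python
from statistics import pvariance

def choose_action(f_prev, f_curr, dim_avg_vels, eps_f=0.001, eps_v=0.0001):
    """Per-iteration dispatch based on Conditions (A)-(C) of Section 3.1.5.
    f_prev, f_curr: global-best fitness at iterations k-1 and k.
    dim_avg_vels: average absolute velocity of each dimension."""
    # Condition (A): relative improvement of the global best below threshold.
    if abs(f_prev - f_curr) / abs(f_prev) >= eps_f:
        return "standard_update"        # search still progressing
    # Condition (B): enough cross-dimension variability for clustering to help.
    if pvariance(dim_avg_vels) > eps_v:
        return "clustering"
    # Condition (C): dimensions look uniformly converged; reinitialize instead.
    return "reinitialization"
```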

3.1.6. Summary of Clustering-Based Adaptation

The preceding subsections have described a clustering-based mechanism to enhance the performance of PSO by detecting and mitigating premature convergence in specific dimensions. The process consists of six key steps:
  • Compute the absolute velocity vectors $\mathbf{v}_j^k$ for each dimension $j$.
  • Construct feature vectors $\mathrm{data}_j^k$ consisting of the mean and variance of $\mathbf{v}_j^k$.
  • Perform agglomerative clustering on $\mathrm{data}_j^k$ using the standardized Euclidean distance and complete linkage to form $n_c$ clusters.
  • Rank the clusters based on their average mean and variance values.
  • Apply control strategies (e.g., inertia adjustment or velocity reinitialization) to the top $n_c/2$ clusters based on the ranking.
  • Use predefined conditions to determine whether clustering or reinitialization should be executed in each iteration.
This clustering-based adaptation strategy enables dimension-wise control that is both targeted and efficient. It balances the need for diversity preservation with computational cost and allows ELPSO-C to adaptively regulate its search behavior depending on the convergence state of individual dimensions.

3.2. ELPSO-C Algorithm Summary

The entire procedure of the proposed ELPSO-C algorithm can be summarized as follows:
Step 1 
Initialization: Randomly initialize each particle's position vector $\mathbf{x}_i^0$ and velocity vector $\mathbf{v}_i^0$ within the search bounds $[x_{\min}, x_{\max}]$.
Step 2 
Evaluation: Evaluate the fitness value of each particle.
Step 3 
Update personal best: For each particle, if the current position yields a better fitness than the personal best $pbest_i^k$, update $pbest_i^k$.
Step 4 
Update global best: Identify the global best $gbest^k$ among all $pbest_i^k$.
Step 5 
Apply mutation strategies: Apply mutation operators (e.g., Gaussian, Cauchy, dimension-wise opposition, full opposition, differential mutation) to generate new candidates. Replace a particle's position if the mutated candidate improves upon $gbest^k$.
Step 6 
Perform clustering (i) and cluster ranking: Generate data based on the velocity vectors and apply agglomerative clustering. Rank the resulting $n_c$ clusters based on diversity metrics.
Step 7 
Apply clustering-based control (ii):
Step 7-1 
If both Conditions (19) and (20) are satisfied, apply clustering (i) and proceed with Step 7-3.
Step 7-2 
If Conditions (19) and (21) are satisfied, apply clustering (ii).
Step 7-3 
If Condition (19) is not satisfied, update the velocity using Equation (1).
Step 8 
Update position: Update each particle’s position using Equation (2).
Step 9 
Termination check: If the termination criterion is met, stop. Otherwise, increment k and return to Step 2.
This stepwise structure integrates both conventional PSO operations and the proposed clustering-based adaptive strategies to effectively balance exploration and exploitation throughout the search process.
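To make the control flow concrete, the following is a deliberately simplified, self-contained sketch of Steps 1 through 9 on the sphere function. It keeps only the standard PSO updates, the saturation boundary handling, and a crude stand-in for the clustering-based diversity injection; the five mutation operators of Step 5 and clustering (i)/(ii) proper are omitted. Parameter values ($\omega$ decreasing from 0.9 to 0.5, $c_1 = c_2 = 2.0$) follow Section 4.1; everything else is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x * x))

def elpso_c_sketch(d=5, n=20, iters=200, x_min=-100.0, x_max=100.0):
    X = rng.uniform(x_min, x_max, (n, d))            # Step 1: positions
    V = rng.uniform(x_min, x_max, (n, d))            #         velocities
    pbest, pf = X.copy(), np.array([sphere(x) for x in X])
    g, gf = pbest[pf.argmin()].copy(), pf.min()      # Step 4: global best
    gf0 = gf
    for k in range(iters):
        prev_gf = gf
        w = 0.9 - (0.9 - 0.5) * k / iters            # linearly decreasing inertia
        r1, r2 = rng.random((n, d)), rng.random((n, d))
        V = w * V + 2.0 * r1 * (pbest - X) + 2.0 * r2 * (g - X)
        X = np.clip(X + V, x_min, x_max)             # Step 8 + saturation bounds
        f = np.array([sphere(x) for x in X])         # Step 2: evaluation
        better = f < pf                              # Step 3: personal bests
        pbest[better], pf[better] = X[better], f[better]
        if pf.min() < gf:                            # Step 4: global best
            gf, g = pf.min(), pbest[pf.argmin()].copy()
        # Step 7, Condition (A): stagnation -> inject dimension-wise diversity.
        if abs(prev_gf - gf) / max(abs(prev_gf), 1e-30) < 0.001:
            avg_abs = np.abs(V).mean(axis=0)
            stagnant = avg_abs < 0.5 * avg_abs.mean()  # stand-in for clustering
            for j in np.where(stagnant)[0]:
                V[:, j] = rng.uniform(x_min, x_max, n)
    return gf0, gf

gf0, gf = elpso_c_sketch()
```

Because the global best is replaced only on improvement, the returned final value can never be worse than the initial best, regardless of how aggressively the diversity injection perturbs the velocities.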

4. Numerical Experiments

This section presents numerical experiments to evaluate the effectiveness of the proposed method, ELPSO-C, in solving optimization problems with various characteristics. Specifically, the performance of ELPSO-C is compared against conventional PSO and ELPSO as well as other evolutionary algorithms, using a set of benchmark functions [28] that include both unimodal and multimodal problems.
To investigate the performance across different levels of problem complexity, the experiments are conducted using benchmark functions with dimensionalities of 10, 30, 50, and 100. The performance is evaluated in terms of convergence accuracy and robustness over multiple independent runs.
The remainder of this section is organized as follows. Section 4.1 describes the experimental settings, including benchmark functions, parameter values, and evaluation metrics. Section 4.2 presents the results obtained on each benchmark function and compares the performance of different algorithms. Section 4.3 provides a discussion based on the experimental results, followed by concluding remarks.

4.1. Experimental Settings

The results of our comparative experiments involve multiple benchmark functions, various dimensional settings, and multiple algorithmic variants, with detailed statistical measures such as the mean and standard deviation. To ensure transparency and allow readers to fairly assess the performance of the proposed method, we present the results in detailed form. While this leads to longer tables, it preserves the information necessary for rigorous comparison. Although other swarm intelligence algorithms such as the Artificial Bee Colony (ABC), Grey Wolf Optimizer (GWO), and Tree-Seed Algorithm (TSA) are well established and have demonstrated strong performance in continuous optimization, we did not include them in our numerical comparisons, owing to practical limitations in implementation and parameter calibration, both of which are necessary for fair and reproducible evaluation. Instead, we focused on comparing with baseline methods (PSO, GA) and structurally related methods (ELPSO, ACOR), which share similar search mechanisms and allow for a more direct assessment of the proposed dimension-wise diversity strategies.
To evaluate the performance of the proposed ELPSO-C algorithm, we conducted numerical experiments using a set of 18 benchmark functions widely employed in the field of evolutionary computation. These benchmark functions are selected to cover a diverse range of characteristics, including unimodal and multimodal landscapes, separable and non-separable variables, and different levels of difficulty. A complete list of the benchmark functions is provided in Table 2.
The detailed definitions, search domains, and global minima of all benchmark functions listed in Table 2 are provided in Appendix A. Each function was tested under four different dimensional settings: 10, 30, 50, and 100. For each combination of function and dimensionality, the optimization process was independently repeated 30 times to account for the stochastic nature of the algorithms. The performance was evaluated in terms of the mean and standard deviation of the final objective values obtained after a fixed number of iterations.
The parameters of PSO, ELPSO, and ELPSO-C were uniformly configured unless otherwise noted. The swarm size was set to 40, and the maximum number of iterations was fixed at 5000. The inertia weight $\omega$ was linearly decreased from 0.9 to 0.5 over the iterations. The acceleration coefficients $c_1$ and $c_2$ were both set to 2.0. For ELPSO and ELPSO-C, the stagnation threshold for triggering mutation operations was set to 50 consecutive iterations without improvement.
For ELPSO-C, the number of clusters was fixed at five unless otherwise stated. Agglomerative clustering with complete linkage was applied based on normalized metrics of velocity diversity in each dimension. The top $n_c/2$ clusters ranked by convergence tendency were selected for diversity-enhancing operations.
All algorithms were implemented in Python 3.11 and executed on a machine equipped with an Intel Core i5 processor and 32 GB of RAM. Each function evaluation strictly adhered to the defined search domain, and boundary handling was performed using the saturation method. To ensure a fair comparison across all methods, we adopted standardized parameter settings for each algorithm. These settings are summarized in Table 3.

4.2. Results and Performance Comparison

Numerical experiments were conducted on the benchmark functions described above with dimensionalities D = 10 , 30 , 50 , and 100. In the following sections, the results for each benchmark function are presented in terms of the average, maximum, minimum, standard deviation, and average computation time of the best solution found in each of the 30 trials. The experimental results for each of the 18 benchmark functions are shown in Appendix B using both tables and figures.
To ensure fair and reproducible comparisons, each algorithm was executed independently 30 times per benchmark function, and both mean and standard deviation are reported. Although statistical significance testing such as the Wilcoxon signed-rank test is not included in the current analysis, the presented descriptive statistics offer sufficient resolution to compare the relative performance of algorithms. Statistical testing will be considered as a valuable enhancement in future work. Each figure in Appendix B presents a box-plot-like visualization for each function, where the box represents the range between the mean and the mean ± standard deviation, and the whiskers indicate the maximum and minimum values of the best solutions obtained. If the value of mean minus standard deviation becomes negative, it is replaced with the smallest objective value among all solutions found by any method for the corresponding benchmark function, to ensure proper visualization on logarithmic scales.
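The clamping rule used for the log-scale figures can be stated as a one-line helper (illustrative naming, not the authors' plotting code):

```python
def lower_whisker(mean, std, global_min):
    """Lower box edge (mean - std) for the Appendix B plots: when it is not
    positive, substitute the smallest objective value found by any method on
    that benchmark so the value remains drawable on a logarithmic axis."""
    low = mean - std
    return low if low > 0 else global_min
```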

4.3. Discussion on the Experimental Results

The experimental results obtained from the 18 benchmark functions—including unimodal and multimodal functions—clearly demonstrate the effectiveness of the proposed ELPSO-C algorithm, especially in higher-dimensional optimization problems. However, ELPSO-C did not always achieve the best performance on all test functions. In particular, for low-dimensional or unimodal problems, the advantage of dimension-wise clustering was less pronounced. In such cases, the additional computational overhead introduced by clustering may outweigh its benefit, leading to comparable or slightly inferior results compared to ELPSO or standard PSO. These results are consistent with the design intention of ELPSO-C, which primarily targets high-dimensional and complex search spaces.
Below is a summarized discussion categorized into low-dimensional (10 dimensions), medium-dimensional (30 and 50 dimensions), and high-dimensional (100 dimensions) scenarios.

4.3.1. Low-Dimensional Problems (10D)

For most of the low-dimensional functions, ELPSO-C achieved competitive or superior performance compared to PSO, PSO-C, ELPSO, GA, and ACOR. In particular, ELPSO and ELPSO-C consistently outperformed other methods on unimodal functions (e.g., shifted sphere and weighted sphere functions). This superior performance can be attributed to the effective integration of diversity control via dimension-wise clustering and the leader-enhancement strategies introduced in ELPSO. However, for multimodal and rotated functions (e.g., Rastrigin, Griewank), ELPSO-C showed mixed results, reflecting its moderate capability in escaping local optima in lower dimensions, where the search space is relatively confined.

4.3.2. Medium-Dimensional Problems (30D and 50D)

In medium-dimensional scenarios, ELPSO-C demonstrated substantial improvements over other approaches, particularly in multimodal and complex rotated functions such as the shifted and rotated Zakharov, Rosenbrock, and Rastrigin functions. The introduction of dimension-wise clustering significantly enhanced the exploration capacity by identifying stagnating dimensions and adaptively reinitializing particle velocities, thus maintaining higher diversity. While ELPSO showed good results, its performance was further improved by incorporating clustering operations, emphasizing the complementary nature of leader-enhanced mutation strategies and dimension-wise diversity control.

4.3.3. High-Dimensional Problems (100D)

The effectiveness of ELPSO-C was most evident in high-dimensional problems, where it consistently achieved the best performance across nearly all benchmark functions tested. This is due to the intensified importance of diversity control strategies in higher-dimensional spaces, where premature convergence and stagnation become increasingly problematic. Specifically, dimension-wise clustering proved effective in preventing premature convergence by identifying and reactivating dimensions with reduced diversity, thus significantly enhancing global search capabilities. This result underscores the strength of the proposed adaptive clustering-based diversity enhancement strategies combined with ELPSO’s leader mutation mechanisms.

4.3.4. Summary

In summary, the experimental outcomes validate the efficacy of the proposed ELPSO-C algorithm. Its integrated approach of dimension-wise clustering for diversity preservation and leader enhancement through multiple mutation strategies significantly improves the optimization performance, particularly in complex and high-dimensional optimization problems.

5. Conclusions

This study proposed an enhanced swarm-based optimization algorithm, ELPSO-C, to address premature convergence in high-dimensional problems. The key idea was to apply agglomerative clustering to dimension-wise velocity-based diversity features, enabling selective intervention in stagnating dimensions. This mechanism maintained diversity without disrupting well-explored dimensions.
The effectiveness of ELPSO-C was validated through experiments on 18 benchmark functions, including rotated and shifted multimodal problems. ELPSO-C consistently outperformed baseline algorithms, especially in high-dimensional landscapes, by achieving lower average and variance of objective values with reasonable computational cost.
While the proposed method shows promising performance, some limitations remain. The choice of clustering parameters (e.g., number of clusters) may require tuning depending on problem characteristics. The diversity metric does not directly reflect sensitivity to the objective function, which may limit its effectiveness in ill-conditioned problems. Moreover, the clustering operation introduces additional computational overhead. Future work will focus on developing adaptive mechanisms for determining clustering parameters and incorporating objective-function sensitivity or gradient information into the feature vectors. We also plan to explore lightweight or incremental clustering approaches for real-time applications. Additional directions include evaluating the impact of particle count on convergence and performance, integrating gradient-based strategies (e.g., Nesterov acceleration), and conducting statistical significance tests (e.g., t-test or Wilcoxon test) for performance validation. Lastly, extending the comparative study to include recent metaheuristics such as the Special Relativity Search Algorithm (SRSA) [31] or the Whale Optimization Algorithm (WOA) would further assess the robustness of ELPSO-C.

Author Contributions

Conceptualization, T.H. and K.S.; methodology, T.H. and K.S.; software, K.S.; validation, T.H., S.S. and I.N.; formal analysis, T.H.; investigation, T.H. and I.N.; resources, K.S.; data curation, K.S.; writing—original draft preparation, T.H.; writing—review and editing, T.H. and S.S.; visualization, T.H.; supervision, I.N.; project administration, T.H.; funding acquisition, T.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Benchmark Function Definitions

Table A1 summarizes the mathematical expressions, search domains, and global minima of the 18 benchmark functions [28] used in our experiments.
Table A1. Benchmark function definitions, search ranges, and global minima.
ID | Function Name | Formula | Search Range | Global Minimum
F1 | Sphere | $f_1(\mathbf{x}) = \sum_{i=1}^{D} x_i^2$ | $[-100, 100]^D$ | 0
F2 | Weighted sphere | $f_2(\mathbf{x}) = \sum_{i=1}^{D} i\, x_i^2$ | $[-100, 100]^D$ | 0
F3 | Rosenbrock | $f_3(\mathbf{x}) = \sum_{i=1}^{D-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | $[-30, 30]^D$ | 0
F4 | Schwefel 2.22 | $f_4(\mathbf{x}) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$ | $[-100, 100]^D$ | 0
F5 | Rastrigin | $f_5(\mathbf{x}) = \sum_{i=1}^{D} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | $[-5.12, 5.12]^D$ | 0
F6 | Ackley | $f_6(\mathbf{x}) = -20 \exp\left(-0.2 \sqrt{\tfrac{1}{D} \sum_{i=1}^{D} x_i^2}\right) - \exp\left(\tfrac{1}{D} \sum_{i=1}^{D} \cos(2\pi x_i)\right) + 20 + e$ | $[-32, 32]^D$ | 0
F7 | Griewank | $f_7(\mathbf{x}) = \tfrac{1}{4000} \sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left(\tfrac{x_i}{\sqrt{i}}\right) + 1$ | $[-600, 600]^D$ | 0
F8 | Schwefel | $f_8(\mathbf{x}) = 418.9829\, D - \sum_{i=1}^{D} x_i \sin\left(\sqrt{|x_i|}\right)$ | $[-500, 500]^D$ | 0
F9 | Shifted sphere | $f_9(\mathbf{x}) = \sum_{i=1}^{D} (x_i - o_i)^2$ | $[-100, 100]^D$ | 0
F10 | Shifted weighted sphere | $f_{10}(\mathbf{x}) = \sum_{i=1}^{D} i (x_i - o_i)^2$ | $[-100, 100]^D$ | 0
F11 | Shifted Rastrigin | $f_{11}(\mathbf{x}) = \sum_{i=1}^{D} \left[ (x_i - o_i)^2 - 10 \cos(2\pi (x_i - o_i)) + 10 \right]$ | $[-5.12, 5.12]^D$ | 0
F12 | Shifted Griewank | $f_{12}(\mathbf{x}) = \tfrac{1}{4000} \sum_{i=1}^{D} (x_i - o_i)^2 - \prod_{i=1}^{D} \cos\left(\tfrac{x_i - o_i}{\sqrt{i}}\right) + 1$ | $[-600, 600]^D$ | 0
F13 | Shifted rotated Bent Cigar | $f_{13}(\mathbf{x}) = (x_1 - o_1)^2 + 10^6 \sum_{i=2}^{D} (x_i - o_i)^2$ | $[-100, 100]^D$ | 0
F14 | Shifted rotated Zakharov | $f_{14}(\mathbf{x}) = \sum_{i=1}^{D} (x_i - o_i)^2 + \left( \sum_{i=1}^{D} 0.5\, i (x_i - o_i) \right)^2 + \left( \sum_{i=1}^{D} 0.5\, i (x_i - o_i) \right)^4$ | $[-100, 100]^D$ | 0
F15 | Shifted rotated Rosenbrock | $f_{15}(\mathbf{x}) = \sum_{i=1}^{D-1} \left[ 100 \left( (x_{i+1} - o_{i+1}) - (x_i - o_i)^2 \right)^2 + \left( (x_i - o_i) - 1 \right)^2 \right]$ | $[-100, 100]^D$ | 0
F16 | Shifted rotated Rastrigin | $f_{16}(\mathbf{x}) = \sum_{i=1}^{D} \left[ (x_i - o_i)^2 - 10 \cos(2\pi (x_i - o_i)) + 10 \right]$ | $[-5.12, 5.12]^D$ | 0
F17 | Shifted rotated Schaffer F7 | $f_{17}(\mathbf{x}) = \left( \tfrac{1}{D-1} \sum_{i=1}^{D-1} \sqrt{s_i} \left( \sin(50\, s_i^{1/5}) + 1 \right) \right)^2$, with $s_i = \sqrt{x_i^2 + x_{i+1}^2}$ | $[-100, 100]^D$ | 0
F18 | Shifted rotated Griewank | $f_{18}(\mathbf{x}) = \tfrac{1}{4000} \sum_{i=1}^{D} (x_i - o_i)^2 - \prod_{i=1}^{D} \cos\left(\tfrac{x_i - o_i}{\sqrt{i}}\right) + 1$ | $[-600, 600]^D$ | 0

Appendix B. Experimental Results

This appendix presents the detailed experimental results for the 18 benchmark functions used in this study. For each benchmark problem, the performance of all compared algorithms is summarized in a dedicated table. Each table reports the average, maximum, and minimum of the best objective values found in 30 trials, as well as the standard deviation and the average computation time. In all tables, the performance values of all algorithms are compared, and the best value in each row is highlighted in bold.
Tables in this appendix are labeled Table A2 through Table A19, corresponding to the benchmark functions in the order listed in Appendix A.
Table A2. Experimental results on F1, the sphere function, for D = 10 , 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10, Mean | 3.16e-17 | 1.90e-12 | 1.05e-62 | 6.74e-11 | 7.59e+02 | 9.73e-04
D = 10, Max | 2.35e-15 | 4.61e-11 | 1.03e-60 | 6.30e-10 | 2.55e+03 | 9.59e-02
D = 10, Min | 1.86e-178 | 1.93e-16 | 2.63e-98 | 2.67e-14 | 1.01e-03 | 4.05e-306
D = 10, Variance | 6.08e-32 | 3.65e-23 | 1.05e-122 | 1.26e-20 | 4.35e+05 | 9.20e-05
D = 10, Time [s] | 49.28 | 148.16 | 23.61 | 70.57 | 14.23 | 638.44
D = 30, Mean | 3.05e+02 | 1.41e-06 | 6.63e-03 | 1.17e-05 | 1.55e+04 | 1.38e+03
D = 30, Max | 1.53e+03 | 1.65e-05 | 4.23e-02 | 5.87e-05 | 3.10e+04 | 4.11e+03
D = 30, Min | 3.95e+01 | 7.67e-08 | 3.00e-04 | 4.18e-07 | 7.54e+03 | 3.22e+02
D = 30, Variance | 6.48e+04 | 3.72e-12 | 4.63e-05 | 1.46e-10 | 2.15e+07 | 6.08e+05
D = 30, Time [s] | 67.17 | 315.60 | 36.16 | 155.27 | 21.18 | 1766.12
D = 50, Mean | 1.85e+03 | 3.00e+02 | 5.01e-01 | 5.11e-04 | 4.21e+04 | 1.20e+04
D = 50, Max | 8.78e+03 | 1.00e+04 | 3.18e+00 | 1.80e-03 | 6.07e+04 | 2.10e+04
D = 50, Min | 3.35e+02 | 1.22e-05 | 2.37e-02 | 7.09e-05 | 2.32e+04 | 5.71e+03
D = 50, Variance | 1.33e+06 | 2.91e+06 | 1.82e-01 | 9.71e-08 | 6.56e+07 | 7.44e+06
D = 50, Time [s] | 90.78 | 484.99 | 53.79 | 233.88 | 27.85 | 2779.22
D = 100, Mean | 1.04e+04 | 2.30e+03 | 7.98e+00 | 3.63e-02 | 1.51e+05 | 7.34e+04
D = 100, Max | 2.80e+04 | 3.00e+04 | 2.51e+01 | 8.03e-02 | 1.98e+05 | 9.69e+04
D = 100, Min | 2.52e+03 | 2.20e-03 | 4.27e-01 | 8.22e-03 | 9.60e+04 | 4.90e+04
D = 100, Variance | 2.21e+07 | 2.37e+07 | 2.16e+01 | 2.09e-04 | 4.55e+08 | 9.87e+07
D = 100, Time [s] | 125.35 | 891.54 | 102.50 | 427.83 | 39.80 | 5894.17
Figure A1. Experimental results on F1, the sphere function, for D = 10 , 30, 50, and 100 (30 trials).
Table A3. Experimental results on F2, the weighted sphere function, for D = 10 , 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10, Mean | 1.14e-02 | 2.35e-14 | 1.27e-67 | 9.80e-13 | 1.04e+01 | 1.49e-05
D = 10, Max | 9.69e-01 | 2.82e-13 | 6.11e-66 | 1.49e-11 | 4.80e+01 | 1.05e-03
D = 10, Min | 2.48e-205 | 3.37e-19 | 3.73e-108 | 1.90e-15 | 1.56e-05 | 1.98e-277
D = 10, Variance | 9.53e-03 | 2.68e-27 | 5.19e-133 | 4.00e-24 | 8.71e+01 | 1.23e-08
D = 10, Time [s] | 54.88 | 149.40 | 27.41 | 74.66 | 15.60 | 596.94
D = 30, Mean | 1.53e+01 | 1.07e+01 | 8.99e-04 | 4.77e-07 | 5.95e+02 | 5.13e+01
D = 30, Max | 8.44e+01 | 1.84e+02 | 7.36e-03 | 3.72e-06 | 1.38e+03 | 1.41e+02
D = 30, Min | 1.31e+00 | 1.82e-09 | 8.66e-05 | 4.00e-08 | 2.92e+02 | 6.60e+00
D = 30, Variance | 2.00e+02 | 1.03e+03 | 9.85e-07 | 2.79e-13 | 3.32e+04 | 6.62e+02
D = 30, Time [s] | 73.97 | 315.08 | 42.25 | 155.37 | 24.37 | 1718.49
D = 50, Mean | 1.50e+01 | 5.27e+01 | 6.12e-02 | 4.29e-05 | 2.64e+03 | 6.93e+02
D = 50, Max | 8.49e+01 | 5.24e+02 | 2.73e-01 | 3.54e-04 | 4.80e+03 | 1.09e+03
D = 50, Min | 1.78e+00 | 1.14e-06 | 5.67e-04 | 7.45e-06 | 1.39e+03 | 3.46e+02
D = 50, Variance | 1.71e+02 | 1.21e+04 | 2.09e-03 | 2.57e-09 | 3.04e+05 | 2.63e+04
D = 50, Time [s] | 72.15 | 468.87 | 68.32 | 237.27 | 31.69 | 3038.35
D = 100, Mean | 1.13e+03 | 3.71e+02 | 1.82e+00 | 5.40e-01 | 1.90e+04 | 8.74e+03
D = 100, Max | 4.45e+03 | 3.07e+03 | 2.89e+01 | 2.62e+01 | 2.51e+04 | 1.26e+04
D = 100, Min | 3.53e+02 | 4.78e-04 | 1.12e-02 | 2.96e-03 | 1.41e+04 | 6.08e+03
D = 100, Variance | 2.73e+05 | 4.46e+05 | 8.22e+00 | 1.35e+01 | 5.79e+06 | 1.41e+06
D = 100, Time [s] | 137.34 | 887.10 | 122.08 | 418.39 | 53.65 | 5498.25
Figure A2. Experimental results on F2, the weighted sphere function, for D = 10 , 30, 50, and 100 (30 trials).
Table A4. Experimental results on F3, the Rosenbrock function, for D = 10 , 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10, Mean | 7.46e+00 | 3.67e-02 | 1.08e-01 | 9.96e-02 | 4.26e+02 | 1.83e+01
D = 10, Max | 1.85e+02 | 1.84e+00 | 1.84e+00 | 1.84e+00 | 3.89e+03 | 2.35e+02
D = 10, Min | 1.56e-01 | 2.94e-13 | 0.00e+00 | 2.15e-10 | 1.87e+00 | 1.28e+00
D = 10, Variance | 3.29e+02 | 6.61e-02 | 1.82e-01 | 1.61e-01 | 2.27e+05 | 1.08e+03
D = 10, Time [s] | 62.29 | 146.49 | 30.70 | 72.18 | 20.98 | 601.12
D = 30, Mean | 2.01e+02 | 3.93e+00 | 5.49e+00 | 8.35e+00 | 1.82e+04 | 1.09e+03
D = 30, Max | 9.25e+02 | 1.07e+02 | 2.57e+01 | 3.66e+01 | 5.69e+04 | 7.43e+03
D = 30, Min | 3.87e+01 | 3.52e-03 | 1.55e-05 | 2.30e-05 | 4.33e+03 | 1.48e+02
D = 30, Variance | 1.73e+04 | 1.70e+02 | 1.01e+02 | 5.62e+01 | 8.21e+07 | 8.23e+05
D = 30, Time [s] | 97.76 | 298.33 | 60.19 | 169.77 | 45.57 | 1732.89
D = 50, Mean | 1.46e+03 | 6.74e+01 | 3.48e+00 | 1.46e+01 | 6.34e+04 | 1.39e+04
D = 50, Max | 2.35e+03 | 2.54e+03 | 1.08e+02 | 9.97e+01 | 1.01e+05 | 3.10e+04
D = 50, Min | 8.19e+02 | 2.87e+01 | 2.37e-03 | 8.64e-04 | 2.44e+04 | 4.47e+03
D = 50, Variance | 1.09e+05 | 6.22e+04 | 2.13e+02 | 5.12e+02 | 3.98e+08 | 3.20e+07
D = 50, Time [s] | 130.64 | 465.47 | 101.32 | 271.14 | 69.62 | 2767.56
D = 100, Mean | 3.99e+03 | 1.00e+02 | 1.18e+01 | 9.59e+00 | 3.07e+05 | 4.07e+05
D = 100, Max | 1.00e+04 | 1.89e+02 | 9.70e+01 | 9.64e+01 | 5.18e+05 | 1.26e+06
D = 100, Min | 1.42e+03 | 9.09e+01 | 1.19e-03 | 4.12e-02 | 1.63e+05 | 8.77e+04
D = 100, Variance | 2.31e+06 | 2.89e+02 | 9.80e+02 | 8.07e+02 | 4.06e+09 | 1.87e+11
D = 100, Time [s] | 215.43 | 837.74 | 236.69 | 565.32 | 132.11 | 5555.77
Figure A3. Experimental results on F3, the Rosenbrock function, for D = 10, 30, 50, and 100 (30 trials).
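The per-table statistics (mean, maximum, minimum, and variance of the best objective value over 30 independent trials) can be reproduced with a short routine. The sketch below is illustrative only: the trial data are placeholders, and the population variance estimator is an assumption, since the tables do not state which estimator was used.

```python
import statistics

def summarize(trial_results):
    """Summarize a list of best-of-run objective values, one per trial."""
    n = len(trial_results)
    return {
        "Mean": sum(trial_results) / n,
        "Max": max(trial_results),
        "Min": min(trial_results),
        # Population variance; use statistics.variance for the sample estimator.
        "Variance": statistics.pvariance(trial_results),
    }

# Placeholder values standing in for 30 best-of-run results.
print(summarize([0.12, 0.08, 0.95, 0.33, 0.41]))
```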
Table A5. Results for F4, Schwefel’s problem 2.22, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 1.01 × 10^−1 | 1.01 × 10^−7 | 1.00 × 10^−1 | 9.16 × 10^−7 | 7.60 × 10^0 | 8.00 × 10^−3
Max | 1.01 × 10^1 | 7.57 × 10^−7 | 1.00 × 10^1 | 7.45 × 10^−6 | 2.67 × 10^1 | 3.01 × 10^−1
Min | 8.60 × 10^−12 | 5.21 × 10^−9 | 1.12 × 10^−11 | 3.76 × 10^−8 | 5.00 × 10^−3 | 6.27 × 10^−163
Var | 1.01 × 10^0 | 1.79 × 10^−14 | 9.90 × 10^−1 | 1.11 × 10^−12 | 2.24 × 10^1 | 1.64 × 10^−3
Time [s] | 54.27 | 153.45 | 28.00 | 71.97 | 15.54 | 637.79
D = 30 − Mean | 3.59 × 10^0 | 1.50 × 10^1 | 4.00 × 10^0 | 4.20 × 10^0 | 1.81 × 10^2 | 1.52 × 10^1
Max | 4.75 × 10^1 | 6.00 × 10^1 | 3.02 × 10^1 | 4.00 × 10^1 | 2.51 × 10^3 | 5.73 × 10^1
Min | 2.46 × 10^−3 | 1.54 × 10^−3 | 1.22 × 10^−2 | 1.50 × 10^−4 | 5.56 × 10^1 | 3.57 × 10^0
Var | 6.17 × 10^1 | 1.85 × 10^2 | 4.84 × 10^1 | 5.44 × 10^1 | 1.20 × 10^5 | 9.88 × 10^1
Time [s] | 78.66 | 464.12 | 45.79 | 159.96 | 27.99 | 1773.50
D = 50 − Mean | 1.59 × 10^1 | 1.50 × 10^1 | 9.44 × 10^0 | 1.29 × 10^1 | 6.85 × 10^12 | 7.55 × 10^1
Max | 9.79 × 10^1 | 6.00 × 10^1 | 6.11 × 10^1 | 6.00 × 10^1 | 3.72 × 10^14 | 2.01 × 10^2
Min | 5.02 × 10^−2 | 1.54 × 10^−3 | 1.21 × 10^−1 | 6.13 × 10^−3 | 1.52 × 10^2 | 2.42 × 10^1
Var | 3.13 × 10^2 | 1.85 × 10^2 | 1.83 × 10^2 | 2.43 × 10^2 | 1.71 × 10^27 | 1.03 × 10^3
Time [s] | 96.81 | 464.12 | 77.04 | 247.27 | 42.48 | 2791.94
D = 100 − Mean | 9.91 × 10^1 | 3.45 × 10^1 | 1.83 × 10^1 | 3.50 × 10^1 | 7.62 × 10^40 | 3.56 × 10^2
Max | 2.35 × 10^2 | 1.20 × 10^2 | 8.08 × 10^1 | 1.00 × 10^2 | 7.26 × 10^42 | 6.08 × 10^2
Min | 2.17 × 10^1 | 6.43 × 10^−2 | 4.76 × 10^−1 | 2.03 × 10^−1 | 1.42 × 10^17 | 1.93 × 10^2
Var | 2.14 × 10^3 | 6.58 × 10^2 | 2.90 × 10^2 | 6.90 × 10^2 | 5.22 × 10^83 | 5.90 × 10^3
Time [s] | 159.22 | 845.76 | 152.73 | 515.90 | 78.26 | 5932.17
Figure A4. Results for F4, Schwefel’s problem 2.22, for D = 10, 30, 50, and 100 (30 trials).
Table A6. Results for F5, Rastrigin’s function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 1.91 × 10^1 | 7.14 × 10^−13 | 5.06 × 10^0 | 4.97 × 10^−2 | 1.45 × 10^1 | 1.34 × 10^1
Max | 4.48 × 10^1 | 4.91 × 10^−11 | 1.99 × 10^1 | 3.98 × 10^0 | 3.05 × 10^1 | 3.58 × 10^1
Min | 3.98 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 0.00 × 10^0 | 4.01 × 10^0 | 2.98 × 10^0
Var | 7.55 × 10^1 | 2.43 × 10^−23 | 1.21 × 10^1 | 1.66 × 10^−1 | 2.88 × 10^1 | 2.91 × 10^1
Time [s] | 59.28 | 157.94 | 31.26 | 73.01 | 19.91 | 648.48
D = 30 − Mean | 9.64 × 10^1 | 1.98 × 10^1 | 6.28 × 10^1 | 3.46 × 10^1 | 1.40 × 10^2 | 9.06 × 10^1
Max | 1.76 × 10^2 | 1.18 × 10^2 | 1.23 × 10^2 | 9.45 × 10^1 | 3.07 × 10^2 | 1.57 × 10^2
Min | 4.52 × 10^1 | 1.00 × 10^0 | 1.60 × 10^1 | 1.31 × 10^1 | 5.84 × 10^1 | 4.37 × 10^1
Var | 6.76 × 10^2 | 2.95 × 10^2 | 5.94 × 10^2 | 3.40 × 10^2 | 1.51 × 10^3 | 4.65 × 10^2
Time [s] | 91.92 | 315.01 | 55.06 | 165.97 | 40.15 | 1790.37
D = 50 − Mean | 2.31 × 10^2 | 8.78 × 10^1 | 1.26 × 10^2 | 1.02 × 10^2 | 4.45 × 10^2 | 2.13 × 10^2
Max | 3.38 × 10^2 | 1.96 × 10^2 | 2.29 × 10^2 | 2.06 × 10^2 | 5.89 × 10^2 | 2.89 × 10^2
Min | 1.46 × 10^2 | 2.59 × 10^1 | 4.56 × 10^1 | 3.38 × 10^1 | 2.66 × 10^2 | 1.22 × 10^2
Var | 1.56 × 10^3 | 1.44 × 10^3 | 1.57 × 10^3 | 1.32 × 10^3 | 3.67 × 10^3 | 1.24 × 10^3
Time [s] | 119.01 | 465.44 | 87.80 | 266.02 | 63.56 | 2810.82
D = 100 − Mean | 6.66 × 10^2 | 2.42 × 10^2 | 3.09 × 10^2 | 2.75 × 10^2 | 1.32 × 10^3 | 7.28 × 10^2
Max | 8.26 × 10^2 | 4.96 × 10^2 | 5.03 × 10^2 | 4.21 × 10^2 | 1.52 × 10^3 | 1.52 × 10^3
Min | 5.16 × 10^2 | 1.06 × 10^2 | 7.03 × 10^1 | 1.59 × 10^2 | 1.09 × 10^3 | 5.21 × 10^2
Var | 4.61 × 10^3 | 5.24 × 10^3 | 7.01 × 10^3 | 3.62 × 10^3 | 8.20 × 10^3 | 4.19 × 10^4
Time [s] | 204.40 | 871.91 | 212.40 | 522.02 | 103.06 | 5959.35
Figure A5. Results for F5, Rastrigin’s function, for D = 10, 30, 50, and 100 (30 trials).
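F5 is Rastrigin’s function. Assuming the standard definition f(x) = 10D + Σ_i (x_i² − 10 cos(2πx_i)), a minimal evaluation sketch is:

```python
import math

def rastrigin(x):
    """Standard Rastrigin function; global minimum f(0, ..., 0) = 0."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

print(rastrigin([0.0] * 10))  # → 0.0 at the global optimum
```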
Table A7. Results for F6, Ackley’s function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 4.38 × 10^0 | 1.21 × 10^0 | 6.44 × 10^−1 | 3.07 × 10^−6 | 2.13 × 10^1 | 6.86 × 10^−1
Max | 1.36 × 10^1 | 1.38 × 10^1 | 3.05 × 10^0 | 1.44 × 10^−5 | 2.19 × 10^1 | 2.03 × 10^1
Min | 1.02 × 10^−12 | 1.08 × 10^−8 | 1.02 × 10^−12 | 1.60 × 10^−7 | 2.04 × 10^1 | 1.02 × 10^−12
Var | 1.08 × 10^1 | 1.10 × 10^1 | 7.53 × 10^−1 | 6.83 × 10^−12 | 9.67 × 10^−2 | 4.45 × 10^0
Time [s] | 67.07 | 164.67 | 37.01 | 79.68 | 26.27 | 648.61
D = 30 − Mean | 1.64 × 10^1 | 1.09 × 10^1 | 6.29 × 10^0 | 8.42 × 10^−1 | 2.14 × 10^1 | 1.36 × 10^1
Max | 1.95 × 10^1 | 1.89 × 10^1 | 1.39 × 10^1 | 9.59 × 10^0 | 2.19 × 10^1 | 2.10 × 10^1
Min | 1.16 × 10^1 | 2.51 × 10^−2 | 2.82 × 10^0 | 6.11 × 10^−4 | 2.09 × 10^1 | 5.99 × 10^0
Var | 2.86 × 10^0 | 1.54 × 10^1 | 4.78 × 10^0 | 4.65 × 10^0 | 3.43 × 10^−2 | 2.48 × 10^1
Time [s] | 106.58 | 468.00 | 67.54 | 172.27 | 60.45 | 1798.76
D = 50 − Mean | 1.86 × 10^1 | 1.09 × 10^1 | 1.06 × 10^1 | 5.09 × 10^0 | 2.15 × 10^1 | 1.70 × 10^1
Max | 1.95 × 10^1 | 1.89 × 10^1 | 1.55 × 10^1 | 1.31 × 10^1 | 2.18 × 10^1 | 2.11 × 10^1
Min | 1.62 × 10^1 | 2.51 × 10^−2 | 5.07 × 10^0 | 1.29 × 10^0 | 2.11 × 10^1 | 1.19 × 10^1
Var | 3.80 × 10^−1 | 1.54 × 10^1 | 4.99 × 10^0 | 5.51 × 10^0 | 1.87 × 10^−2 | 8.28 × 10^0
Time [s] | 139.82 | 468.00 | 110.00 | 288.28 | 79.18 | 2834.69
D = 100 − Mean | 1.98 × 10^1 | 1.83 × 10^1 | 1.46 × 10^1 | 1.36 × 10^1 | 2.15 × 10^1 | 2.11 × 10^1
Max | 2.03 × 10^1 | 1.93 × 10^1 | 1.77 × 10^1 | 1.78 × 10^1 | 2.17 × 10^1 | 2.14 × 10^1
Min | 1.93 × 10^1 | 1.14 × 10^1 | 1.10 × 10^1 | 6.81 × 10^0 | 2.12 × 10^0 | 1.71 × 10^1
Var | 4.37 × 10^−2 | 1.18 × 10^0 | 1.39 × 10^0 | 5.46 × 10^0 | 6.94 × 10^−3 | 4.19 × 10^−1
Time [s] | 234.20 | 894.19 | 263.98 | 607.14 | 145.99 | 6000.24
Figure A6. Results for F6, Ackley’s function, for D = 10, 30, 50, and 100 (30 trials).
Table A8. Results for F7, Griewank’s function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 1.99 × 10^−1 | 3.24 × 10^−2 | 1.63 × 10^−1 | 4.83 × 10^−2 | 8.30 × 10^0 | 1.37 × 10^−1
Max | 5.51 × 10^−1 | 8.85 × 10^−2 | 4.85 × 10^−1 | 1.23 × 10^−1 | 4.17 × 10^1 | 3.82 × 10^−1
Min | 4.68 × 10^−2 | 0.00 × 10^0 | 3.45 × 10^−2 | 7.40 × 10^−3 | 1.11 × 10^0 | 9.86 × 10^−3
Var | 9.78 × 10^−3 | 3.30 × 10^−4 | 8.75 × 10^−3 | 5.12 × 10^−4 | 3.58 × 10^1 | 4.15 × 10^−3
Time [s] | 61.76 | 156.22 | 31.98 | 74.63 | 21.89 | 598.93
D = 30 − Mean | 3.49 × 10^0 | 1.70 × 10^−2 | 4.40 × 10^−2 | 1.41 × 10^−2 | 1.34 × 10^2 | 1.36 × 10^1
Max | 1.11 × 10^1 | 8.57 × 10^−2 | 1.20 × 10^−1 | 9.02 × 10^−2 | 2.49 × 10^2 | 3.96 × 10^1
Min | 1.24 × 10^0 | 2.48 × 10^−7 | 1.67 × 10^−3 | 1.28 × 10^−6 | 6.17 × 10^1 | 3.09 × 10^0
Var | 4.16 × 10^0 | 3.77 × 10^−4 | 6.51 × 10^−4 | 2.31 × 10^−4 | 1.64 × 10^3 | 4.79 × 10^1
Time [s] | 92.32 | 343.09 | 57.25 | 175.41 | 40.82 | 1723.48
D = 50 − Mean | 1.92 × 10^1 | 1.82 × 10^0 | 3.87 × 10^−1 | 1.04 × 10^−2 | 3.78 × 10^2 | 1.05 × 10^2
Max | 1.25 × 10^2 | 9.05 × 10^1 | 7.61 × 10^−1 | 5.45 × 10^−2 | 5.89 × 10^2 | 1.62 × 10^2
Min | 4.50 × 10^0 | 1.57 × 10^−5 | 1.52 × 10^−2 | 1.29 × 10^−4 | 1.85 × 10^2 | 5.45 × 10^1
Var | 2.14 × 10^2 | 1.60 × 10^2 | 2.61 × 10^−2 | 1.40 × 10^−4 | 6.09 × 10^3 | 5.90 × 10^2
Time [s] | 126.57 | 521.18 | 94.94 | 278.49 | 64.13 | 3069.09
D = 100 − Mean | 9.87 × 10^1 | 2.62 × 10^1 | 9.84 × 10^−1 | 3.27 × 10^−2 | 1.41 × 10^3 | 6.46 × 10^2
Max | 2.88 × 10^2 | 2.71 × 10^2 | 1.31 × 10^0 | 8.29 × 10^−2 | 1.82 × 10^3 | 8.11 × 10^2
Min | 2.57 × 10^1 | 1.29 × 10^−3 | 7.46 × 10^−2 | 5.03 × 10^−3 | 8.72 × 10^2 | 4.60 × 10^2
Var | 2.12 × 10^3 | 2.66 × 10^3 | 4.42 × 10^−2 | 2.70 × 10^−4 | 3.34 × 10^4 | 6.74 × 10^3
Time [s] | 198.05 | 954.31 | 210.52 | 556.68 | 106.50 | 5522.85
Table A9. Results for F8, Schwefel’s function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 1.18 × 10^3 | 5.85 × 10^2 | 5.81 × 10^2 | 4.09 × 10^2 | 4.80 × 10^2 | 8.56 × 10^2
Max | 2.03 × 10^3 | 1.31 × 10^3 | 1.79 × 10^3 | 1.31 × 10^3 | 1.05 × 10^3 | 1.77 × 10^3
Min | 3.57 × 10^2 | 1.27 × 10^−4 | 1.27 × 10^−4 | 1.27 × 10^−4 | 4.44 × 10^−3 | 1.18 × 10^2
Var | 9.87 × 10^4 | 7.35 × 10^4 | 1.91 × 10^5 | 1.10 × 10^5 | 7.17 × 10^4 | 8.43 × 10^4
Time [s] | 59.94 | 141.47 | 31.18 | 72.24 | 19.60 | 548.77
D = 30 − Mean | 5.13 × 10^3 | 9.27 × 10^3 | 4.56 × 10^2 | 3.67 × 10^2 | 5.82 × 10^3 | 3.49 × 10^3
Max | 7.28 × 10^3 | 1.25 × 10^4 | 2.03 × 10^3 | 4.72 × 10^3 | 8.08 × 10^3 | 4.85 × 10^3
Min | 2.87 × 10^3 | 6.25 × 10^3 | 6.16 × 10^−4 | 3.82 × 10^−4 | 3.83 × 10^3 | 2.02 × 10^3
Var | 9.33 × 10^5 | 1.93 × 10^6 | 1.74 × 10^5 | 5.35 × 10^5 | 8.25 × 10^5 | 3.33 × 10^5
Time [s] | 88.01 | 423.91 | 52.58 | 169.98 | 35.97 | 1867.13
D = 50 − Mean | 1.02 × 10^4 | 9.27 × 10^3 | 6.27 × 10^2 | 2.85 × 10^2 | 1.31 × 10^4 | 7.15 × 10^3
Max | 1.25 × 10^4 | 1.25 × 10^4 | 5.48 × 10^3 | 3.17 × 10^3 | 1.62 × 10^4 | 8.87 × 10^3
Min | 6.76 × 10^3 | 6.25 × 10^3 | 2.64 × 10^−3 | 6.54 × 10^−4 | 7.72 × 10^3 | 5.27 × 10^3
Var | 1.45 × 10^6 | 1.93 × 10^6 | 8.81 × 10^5 | 2.15 × 10^5 | 1.91 × 10^6 | 6.57 × 10^5
Time [s] | 110.39 | 423.91 | 83.14 | 271.26 | 51.31 | 3045.01
D = 100 − Mean | 2.41 × 10^4 | 2.15 × 10^4 | 1.23 × 10^3 | 5.67 × 10^2 | 3.35 × 10^4 | 1.84 × 10^4
Max | 3.01 × 10^4 | 2.81 × 10^4 | 1.63 × 10^4 | 8.64 × 10^3 | 3.88 × 10^4 | 2.17 × 10^4
Min | 1.85 × 10^4 | 1.56 × 10^4 | 1.98 × 10^−3 | 2.19 × 10^−3 | 2.91 × 10^4 | 1.44 × 10^4
Var | 3.28 × 10^6 | 1.01 × 10^7 | 6.91 × 10^6 | 1.66 × 10^6 | 3.58 × 10^6 | 1.58 × 10^6
Time [s] | 186.88 | 804.47 | 188.78 | 554.45 | 98.63 | 5709.09
Figure A7. Results for F7, Griewank’s function, for D = 10, 30, 50, and 100 (30 trials).
Table A10. Results for F9, the shifted sphere function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 5.49 × 10^1 | 9.16 × 10^1 | 0.00 × 10^0 | 8.48 × 10^−11 | 7.37 × 10^−1 | 3.30 × 10^−1
Max | 9.38 × 10^2 | 9.38 × 10^2 | 0.00 × 10^0 | 1.21 × 10^−9 | 1.99 × 10^0 | 3.29 × 10^1
Min | 0.00 × 10^0 | 2.68 × 10^−16 | 0.00 × 10^0 | 1.33 × 10^−14 | 9.27 × 10^−2 | 0.00 × 10^0
Var | 4.62 × 10^4 | 7.55 × 10^4 | 0.00 × 10^0 | 3.32 × 10^−20 | 2.43 × 10^−1 | 1.07 × 10^1
Time [s] | 47.65 | 146.66 | 24.03 | 70.87 | 22.32 | 589.62
D = 30 − Mean | 4.92 × 10^3 | 2.81 × 10^3 | 2.64 × 10^0 | 8.96 × 10^−6 | 1.84 × 10^4 | 2.58 × 10^3
Max | 2.06 × 10^4 | 9.67 × 10^3 | 2.31 × 10^2 | 4.23 × 10^−5 | 4.12 × 10^4 | 1.15 × 10^4
Min | 1.03 × 10^3 | 1.76 × 10^−8 | 1.71 × 10^−2 | 5.28 × 10^−7 | 9.19 × 10^3 | 5.99 × 10^2
Var | 1.07 × 10^7 | 6.91 × 10^6 | 5.27 × 10^2 | 6.08 × 10^−11 | 3.62 × 10^7 | 2.61 × 10^6
Time [s] | 68.76 | 304.05 | 37.00 | 152.84 | 20.79 | 1693.08
D = 50 − Mean | 1.64 × 10^4 | 4.22 × 10^3 | 8.51 × 10^1 | 5.22 × 10^−4 | 5.05 × 10^4 | 1.79 × 10^4
Max | 3.07 × 10^4 | 1.92 × 10^4 | 1.22 × 10^3 | 3.11 × 10^−3 | 8.08 × 10^4 | 3.22 × 10^4
Min | 8.25 × 10^3 | 1.98 × 10^−5 | 7.66 × 10^0 | 9.57 × 10^−5 | 2.81 × 10^4 | 7.40 × 10^3
Var | 1.81 × 10^7 | 1.87 × 10^7 | 4.46 × 10^4 | 1.54 × 10^−7 | 1.34 × 10^8 | 2.00 × 10^7
Time [s] | 90.73 | 483.26 | 55.83 | 234.25 | 26.60 | 3024.95
D = 100 − Mean | 1.05 × 10^5 | 1.10 × 10^4 | 1.16 × 10^3 | 4.02 × 10^−2 | 1.91 × 10^5 | 1.15 × 10^5
Max | 1.38 × 10^5 | 5.78 × 10^4 | 3.52 × 10^3 | 1.88 × 10^−1 | 2.84 × 10^5 | 1.56 × 10^5
Min | 7.26 × 10^4 | 7.38 × 10^−3 | 4.84 × 10^2 | 1.04 × 10^−2 | 1.07 × 10^5 | 7.34 × 10^4
Var | 2.05 × 10^8 | 1.25 × 10^8 | 3.35 × 10^5 | 6.09 × 10^−4 | 9.02 × 10^8 | 2.44 × 10^8
Time [s] | 126.95 | 852.62 | 104.16 | 405.53 | 46.72 | 5889.74
Figure A8. Results for F8, Schwefel’s function, for D = 10, 30, 50, and 100 (30 trials).
Figure A9. Results for F9, the shifted sphere function, for D = 10, 30, 50, and 100 (30 trials).
Table A11. Results for F10, the shifted weighted sphere function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 1.30 × 10^0 | 6.87 × 10^−1 | 0.00 × 10^0 | 9.02 × 10^−13 | 1.33 × 10^1 | 1.49 × 10^−5
Max | 3.90 × 10^1 | 2.15 × 10^1 | 0.00 × 10^0 | 1.37 × 10^−11 | 3.86 × 10^1 | 1.05 × 10^−3
Min | 0.00 × 10^0 | 3.95 × 10^−18 | 0.00 × 10^0 | 8.16 × 10^−16 | 4.49 × 10^−5 | 1.98 × 10^−277
Var | 2.52 × 10^1 | 1.23 × 10^1 | 0.00 × 10^0 | 3.79 × 10^−24 | 9.06 × 10^1 | 1.23 × 10^−8
Time [s] | 53.40 | 148.42 | 27.40 | 73.84 | 16.23 | 596.94
D = 30 − Mean | 1.95 × 10^2 | 1.16 × 10^2 | 1.27 × 10^0 | 4.62 × 10^−7 | 7.44 × 10^2 | 1.08 × 10^2
Max | 5.05 × 10^2 | 5.25 × 10^2 | 6.60 × 10^1 | 1.96 × 10^−6 | 1.54 × 10^3 | 2.48 × 10^2
Min | 2.99 × 10^1 | 4.00 × 10^−9 | 8.16 × 10^−4 | 2.58 × 10^−8 | 3.14 × 10^2 | 2.08 × 10^1
Var | 7.93 × 10^3 | 1.48 × 10^4 | 4.95 × 10^1 | 1.94 × 10^−13 | 6.59 × 10^4 | 2.40 × 10^3
Time [s] | 75.54 | 320.95 | 42.63 | 154.29 | 24.04 | 1770.73
D = 50 − Mean | 9.77 × 10^2 | 2.24 × 10^2 | 5.98 × 10^0 | 1.01 × 10^−4 | 3.18 × 10^3 | 1.10 × 10^3
Max | 1.96 × 10^3 | 1.38 × 10^3 | 4.05 × 10^1 | 2.95 × 10^−3 | 5.43 × 10^3 | 1.79 × 10^3
Min | 5.28 × 10^2 | 1.75 × 10^−6 | 7.68 × 10^−1 | 7.51 × 10^−6 | 1.67 × 10^3 | 6.12 × 10^2
Var | 6.88 × 10^4 | 7.52 × 10^4 | 4.59 × 10^1 | 9.46 × 10^−8 | 6.20 × 10^5 | 8.10 × 10^4
Time [s] | 89.85 | 453.22 | 68.16 | 232.65 | 32.03 | 2786.18
D = 100 − Mean | 1.13 × 10^4 | 1.48 × 10^3 | 1.80 × 10^2 | 1.15 × 10^0 | 2.43 × 10^4 | 1.37 × 10^4
Max | 1.69 × 10^4 | 5.25 × 10^3 | 4.58 × 10^2 | 3.46 × 10^1 | 3.29 × 10^4 | 1.83 × 10^4
Min | 6.03 × 10^3 | 1.78 × 10^−3 | 8.16 × 10^1 | 5.20 × 10^−3 | 1.59 × 10^4 | 9.21 × 10^3
Var | 3.54 × 10^6 | 1.53 × 10^6 | 5.17 × 10^3 | 2.00 × 10^1 | 1.19 × 10^7 | 2.95 × 10^6
Time [s] | 141.04 | 819.14 | 124.75 | 409.77 | 54.44 | 5478.88
Figure A10. Results for F10, the shifted weighted sphere function, for D = 10, 30, 50, and 100 (30 trials).
Table A12. Results for F11, the shifted Rastrigin function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 2.41 × 10^1 | 1.25 × 10^0 | 1.22 × 10^1 | 7.67 × 10^−1 | 1.41 × 10^1 | 1.44 × 10^1
Max | 5.87 × 10^1 | 2.16 × 10^1 | 3.39 × 10^1 | 2.59 × 10^1 | 3.33 × 10^1 | 3.78 × 10^1
Min | 4.97 × 10^0 | 0.00 × 10^0 | 9.95 × 10^−1 | 0.00 × 10^0 | 3.13 × 10^0 | 4.97 × 10^0
Var | 1.13 × 10^2 | 1.87 × 10^1 | 6.35 × 10^1 | 1.05 × 10^1 | 3.69 × 10^1 | 4.15 × 10^1
Time [s] | 60.53 | 155.79 | 31.81 | 73.11 | 23.51 | 598.44
D = 30 − Mean | 1.68 × 10^2 | 8.90 × 10^1 | 8.15 × 10^1 | 7.67 × 10^1 | 1.67 × 10^2 | 1.05 × 10^2
Max | 2.45 × 10^2 | 1.97 × 10^2 | 1.50 × 10^2 | 1.71 × 10^2 | 3.02 × 10^2 | 1.81 × 10^2
Min | 9.34 × 10^1 | 3.02 × 10^1 | 2.90 × 10^1 | 3.18 × 10^1 | 7.98 × 10^1 | 5.69 × 10^1
Var | 1.24 × 10^3 | 1.38 × 10^3 | 4.84 × 10^2 | 7.68 × 10^2 | 2.05 × 10^3 | 7.11 × 10^2
Time [s] | 93.10 | 298.06 | 55.94 | 162.71 | 39.01 | 1777.60
D = 50 − Mean | 3.55 × 10^2 | 2.03 × 10^2 | 1.51 × 10^2 | 1.82 × 10^2 | 4.84 × 10^2 | 2.45 × 10^2
Max | 4.79 × 10^2 | 3.50 × 10^2 | 2.42 × 10^2 | 3.96 × 10^2 | 7.02 × 10^2 | 3.72 × 10^2
Min | 2.65 × 10^2 | 9.39 × 10^1 | 8.64 × 10^1 | 7.90 × 10^1 | 3.19 × 10^2 | 1.49 × 10^2
Var | 2.04 × 10^3 | 2.48 × 10^3 | 9.84 × 10^2 | 3.35 × 10^3 | 6.00 × 10^3 | 1.89 × 10^3
Time [s] | 121.00 | 455.80 | 89.63 | 259.88 | 59.30 | 2812.31
D = 100 − Mean | 9.94 × 10^2 | 5.94 × 10^2 | 4.31 × 10^2 | 5.36 × 10^2 | 1.46 × 10^3 | 8.03 × 10^2
Max | 1.17 × 10^3 | 8.81 × 10^2 | 5.59 × 10^2 | 9.34 × 10^2 | 1.80 × 10^3 | 1.06 × 10^3
Min | 8.33 × 10^2 | 4.46 × 10^2 | 3.05 × 10^2 | 2.70 × 10^2 | 1.13 × 10^3 | 5.92 × 10^2
Var | 4.94 × 10^3 | 6.94 × 10^3 | 1.86 × 10^3 | 3.30 × 10^4 | 2.05 × 10^4 | 1.09 × 10^4
Time [s] | 203.08 | 815.72 | 211.25 | 517.98 | 108.75 | 5490.48
Figure A11. Results for F11, the shifted Rastrigin function, for D = 10, 30, 50, and 100 (30 trials).
Table A13. Results for F12, the shifted Griewank function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 2.16 × 10^0 | 9.97 × 10^−1 | 2.04 × 10^−1 | 4.79 × 10^−2 | 9.70 × 10^0 | 1.38 × 10^−1
Max | 2.00 × 10^1 | 1.99 × 10^1 | 5.14 × 10^−1 | 1.52 × 10^−1 | 3.02 × 10^1 | 5.15 × 10^−1
Min | 4.18 × 10^−2 | 0.00 × 10^0 | 4.42 × 10^−2 | 2.21 × 10^−14 | 1.54 × 10^0 | 2.46 × 10^−2
Var | 1.69 × 10^1 | 1.10 × 10^1 | 1.07 × 10^−2 | 7.00 × 10^−4 | 4.36 × 10^1 | 5.96 × 10^−3
Time [s] | 63.34 | 158.73 | 32.34 | 76.54 | 23.78 | 599.34
D = 30 − Mean | 4.43 × 10^1 | 2.46 × 10^1 | 3.22 × 10^−1 | 3.08 × 10^−1 | 1.71 × 10^2 | 2.74 × 10^1
Max | 1.51 × 10^2 | 1.00 × 10^2 | 2.24 × 10^0 | 1.13 × 10^1 | 4.11 × 10^2 | 6.85 × 10^1
Min | 1.40 × 10^1 | 1.47 × 10^−7 | 5.37 × 10^−2 | 3.54 × 10^−6 | 7.22 × 10^1 | 6.54 × 10^0
Var | 6.40 × 10^2 | 7.95 × 10^2 | 9.28 × 10^−2 | 2.78 × 10^0 | 2.94 × 10^3 | 1.73 × 10^2
Time [s] | 94.91 | 343.29 | 57.25 | 173.29 | 40.05 | 1726.06
D = 50 − Mean | 1.48 × 10^2 | 3.95 × 10^1 | 1.30 × 10^0 | 1.09 × 10^−2 | 4.43 × 10^2 | 1.60 × 10^2
Max | 2.61 × 10^2 | 2.37 × 10^2 | 3.99 × 10^0 | 8.46 × 10^−2 | 8.19 × 10^2 | 2.48 × 10^2
Min | 6.75 × 10^1 | 1.39 × 10^−5 | 9.43 × 10^−1 | 4.65 × 10^−5 | 1.92 × 10^2 | 6.60 × 10^1
Var | 1.94 × 10^3 | 1.94 × 10^3 | 1.88 × 10^−1 | 1.97 × 10^−4 | 1.12 × 10^4 | 1.28 × 10^3
Time [s] | 121.34 | 514.46 | 92.05 | 277.63 | 59.77 | 3069.85
D = 100 − Mean | 9.35 × 10^2 | 1.15 × 10^2 | 1.12 × 10^1 | 2.51 × 10^−1 | 1.73 × 10^3 | 1.04 × 10^3
Max | 1.32 × 10^3 | 4.33 × 10^2 | 3.96 × 10^1 | 8.41 × 10^0 | 2.41 × 10^3 | 1.40 × 10^3
Min | 6.32 × 10^2 | 1.39 × 10^−2 | 4.98 × 10^0 | 1.10 × 10^−2 | 1.07 × 10^3 | 7.34 × 10^2
Var | 1.99 × 10^4 | 1.01 × 10^4 | 3.70 × 10^1 | 1.10 × 10^0 | 6.00 × 10^4 | 1.44 × 10^4
Time [s] | 208.24 | 899.05 | 221.05 | 544.99 | 111.00 | 5942.49
Figure A12. Results for F12, the shifted Griewank function, for D = 10, 30, 50, and 100 (30 trials).
Table A14. Results for F13, the shifted and rotated Bent Cigar function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 3.44 × 10^7 | 2.85 × 10^7 | 2.79 × 10^3 | 3.66 × 10^3 | 5.70 × 10^8 | 1.03 × 10^4
Max | 1.16 × 10^9 | 1.34 × 10^9 | 8.18 × 10^3 | 8.18 × 10^3 | 3.20 × 10^9 | 6.58 × 10^4
Min | 1.00 × 10^2 | 1.03 × 10^2 | 1.00 × 10^2 | 1.03 × 10^2 | 1.40 × 10^3 | 1.00 × 10^2
Var | 2.93 × 10^16 | 2.85 × 10^16 | 1.15 × 10^7 | 9.85 × 10^6 | 3.75 × 10^17 | 1.97 × 10^8
Time [s] | 52.40 | 143.46 | 27.58 | 68.97 | 18.43 | 636.90
D = 30 − Mean | 4.15 × 10^9 | 2.63 × 10^9 | 6.41 × 10^7 | 3.02 × 10^6 | 1.70 × 10^10 | 2.05 × 10^9
Max | 1.41 × 10^10 | 1.33 × 10^10 | 2.25 × 10^9 | 2.91 × 10^8 | 4.19 × 10^10 | 5.54 × 10^9
Min | 7.35 × 10^8 | 1.22 × 10^2 | 1.27 × 10^4 | 1.03 × 10^2 | 4.77 × 10^9 | 1.96 × 10^8
Var | 7.01 × 10^18 | 1.06 × 10^19 | 8.45 × 10^16 | 8.39 × 10^14 | 3.33 × 10^19 | 1.30 × 10^18
Time [s] | 72.41 | 425.27 | 40.19 | 149.01 | 22.49 | 1699.99
D = 50 − Mean | 1.56 × 10^10 | 2.63 × 10^9 | 5.90 × 10^7 | 2.51 × 10^6 | 5.01 × 10^10 | 1.72 × 10^10
Max | 2.59 × 10^10 | 1.33 × 10^10 | 1.60 × 10^9 | 2.47 × 10^8 | 8.09 × 10^10 | 3.14 × 10^10
Min | 6.06 × 10^9 | 1.22 × 10^2 | 2.09 × 10^6 | 1.93 × 10^2 | 2.74 × 10^10 | 7.53 × 10^9
Var | 1.80 × 10^19 | 1.06 × 10^19 | 3.23 × 10^16 | 6.05 × 10^14 | 1.41 × 10^20 | 2.30 × 10^19
Time [s] | 87.18 | 425.27 | 57.36 | 252.92 | 30.69 | 2990.90
D = 100 − Mean | 1.04 × 10^11 | 1.04 × 10^10 | 1.25 × 10^9 | 3.99 × 10^5 | 1.84 × 10^11 | 1.15 × 10^11
Max | 1.48 × 10^11 | 4.33 × 10^10 | 4.19 × 10^9 | 1.65 × 10^7 | 2.40 × 10^11 | 1.51 × 10^11
Min | 7.22 × 10^10 | 6.32 × 10^3 | 3.99 × 10^8 | 1.07 × 10^4 | 1.30 × 10^11 | 8.41 × 10^10
Var | 1.88 × 10^20 | 1.05 × 10^20 | 3.23 × 10^17 | 3.48 × 10^12 | 5.59 × 10^20 | 2.08 × 10^20
Time [s] | 133.83 | 760.31 | 116.30 | 450.34 | 49.87 | 5925.77
Figure A13. Results for F13, the shifted and rotated Bent Cigar function, for D = 10, 30, 50, and 100 (30 trials).
Figure A14. Results for F14, the shifted and rotated Zakharov function, for D = 10, 30, 50, and 100 (30 trials).
Table A15. Results for F14, the shifted and rotated Zakharov function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 3.11 × 10^2 | 3.30 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 1.21 × 10^4 | 3.00 × 10^2
Max | 1.44 × 10^3 | 1.44 × 10^3 | 3.00 × 10^2 | 3.00 × 10^2 | 2.99 × 10^4 | 3.20 × 10^2
Min | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 2.25 × 10^3 | 3.00 × 10^2
Var | 1.28 × 10^4 | 2.87 × 10^4 | 0.00 × 10^0 | 9.41 × 10^−20 | 3.19 × 10^7 | 4.12 × 10^0
Time [s] | 57.79 | 150.52 | 31.68 | 72.00 | 22.49 | 644.34
D = 30 − Mean | 8.82 × 10^3 | 1.95 × 10^4 | 3.48 × 10^2 | 3.00 × 10^2 | 1.02 × 10^5 | 1.57 × 10^3
Max | 2.46 × 10^4 | 6.88 × 10^4 | 5.02 × 10^3 | 3.00 × 10^2 | 1.53 × 10^5 | 7.20 × 10^3
Min | 2.72 × 10^3 | 8.89 × 10^2 | 3.00 × 10^2 | 3.00 × 10^2 | 4.93 × 10^4 | 3.62 × 10^2
Var | 1.55 × 10^7 | 2.81 × 10^8 | 2.20 × 10^5 | 9.12 × 10^−10 | 5.81 × 10^8 | 1.83 × 10^6
Time [s] | 87.23 | 456.02 | 54.99 | 164.45 | 48.28 | 1796.73
D = 50 − Mean | 4.71 × 10^4 | 1.95 × 10^4 | 1.48 × 10^3 | 3.00 × 10^2 | 2.06 × 10^5 | 1.09 × 10^4
Max | 1.24 × 10^5 | 6.88 × 10^4 | 1.38 × 10^4 | 3.00 × 10^2 | 3.16 × 10^5 | 5.02 × 10^4
Min | 1.85 × 10^4 | 8.89 × 10^2 | 3.33 × 10^2 | 3.00 × 10^2 | 9.53 × 10^4 | 2.25 × 10^3
Var | 2.83 × 10^8 | 2.81 × 10^8 | 4.54 × 10^6 | 8.81 × 10^−5 | 1.73 × 10^9 | 4.57 × 10^7
Time [s] | 115.87 | 456.02 | 87.41 | 256.94 | 65.09 | 2920.45
D = 100 − Mean | 1.52 × 10^5 | 1.68 × 10^4 | 6.65 × 10^3 | 3.00 × 10^2 | 4.72 × 10^5 | 6.63 × 10^4
Max | 2.09 × 10^5 | 7.02 × 10^4 | 2.59 × 10^4 | 3.01 × 10^2 | 5.87 × 10^5 | 1.48 × 10^5
Min | 9.50 × 10^4 | 3.00 × 10^2 | 2.52 × 10^3 | 3.00 × 10^2 | 3.55 × 10^5 | 2.29 × 10^4
Var | 5.08 × 10^8 | 2.15 × 10^8 | 1.50 × 10^7 | 8.56 × 10^−3 | 2.91 × 10^9 | 6.05 × 10^8
Time [s] | 194.50 | 837.93 | 207.22 | 543.21 | 112.59 | 5519.70
Figure A15. Results for F15, the shifted and rotated Rosenbrock function, for D = 10, 30, 50, and 100 (30 trials).
Table A16. Results for F15, the shifted and rotated Rosenbrock function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 4.37 × 10^2 | 4.30 × 10^2 | 4.25 × 10^2 | 4.12 × 10^2 | 5.53 × 10^2 | 4.22 × 10^2
Max | 5.88 × 10^2 | 5.41 × 10^2 | 5.40 × 10^2 | 5.04 × 10^2 | 9.19 × 10^2 | 5.08 × 10^2
Min | 4.00 × 10^2 | 4.00 × 10^2 | 4.00 × 10^2 | 4.00 × 10^2 | 4.22 × 10^2 | 4.00 × 10^2
Var | 1.78 × 10^3 | 1.45 × 10^3 | 1.12 × 10^3 | 2.23 × 10^2 | 6.14 × 10^3 | 7.47 × 10^2
Time [s] | 64.79 | 151.67 | 36.08 | 73.35 | 24.99 | 605.82
D = 30 − Mean | 8.25 × 10^2 | 6.26 × 10^2 | 4.80 × 10^2 | 4.47 × 10^2 | 2.52 × 10^3 | 6.39 × 10^2
Max | 1.79 × 10^3 | 1.62 × 10^3 | 6.77 × 10^2 | 5.44 × 10^2 | 5.92 × 10^3 | 9.71 × 10^2
Min | 5.21 × 10^2 | 4.23 × 10^2 | 4.11 × 10^2 | 4.00 × 10^2 | 1.04 × 10^3 | 4.93 × 10^2
Var | 6.52 × 10^4 | 3.25 × 10^4 | 3.08 × 10^3 | 8.39 × 10^2 | 9.38 × 10^5 | 8.01 × 10^3
Time [s] | 100.30 | 453.68 | 64.51 | 166.33 | 49.11 | 1729.61
D = 50 − Mean | 1.46 × 10^3 | 6.26 × 10^2 | 4.95 × 10^2 | 4.58 × 10^2 | 6.34 × 10^4 | 1.47 × 10^3
Max | 2.35 × 10^3 | 1.62 × 10^3 | 6.54 × 10^2 | 5.57 × 10^2 | 1.01 × 10^5 | 2.85 × 10^3
Min | 8.19 × 10^2 | 4.23 × 10^2 | 4.45 × 10^2 | 4.21 × 10^2 | 2.44 × 10^4 | 8.15 × 10^2
Var | 1.09 × 10^5 | 3.25 × 10^4 | 2.26 × 10^3 | 8.11 × 10^2 | 3.98 × 10^8 | 1.38 × 10^5
Time [s] | 130.64 | 453.68 | 102.77 | 264.74 | 69.62 | 2934.19
D = 100 − Mean | 1.58 × 10^4 | 1.33 × 10^3 | 1.12 × 10^3 | 6.94 × 10^2 | 3.32 × 10^4 | 1.68 × 10^4
Max | 2.45 × 10^4 | 4.21 × 10^3 | 1.49 × 10^3 | 8.77 × 10^2 | 5.98 × 10^4 | 2.66 × 10^4
Min | 8.70 × 10^3 | 6.47 × 10^2 | 8.74 × 10^2 | 5.44 × 10^2 | 1.94 × 10^4 | 8.46 × 10^3
Var | 1.33 × 10^7 | 5.58 × 10^5 | 1.69 × 10^4 | 4.14 × 10^3 | 5.75 × 10^7 | 1.54 × 10^7
Time [s] | 221.13 | 827.10 | 248.35 | 573.31 | 131.72 | 5967.33
Figure A16. Results for F16, the shifted and rotated Rastrigin function, for D = 10, 30, 50, and 100 (30 trials).
Table A17. Results for F16, the shifted and rotated Rastrigin function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 7.09 × 10^2 | 5.68 × 10^2 | 5.43 × 10^2 | 5.33 × 10^2 | 5.86 × 10^2 | 5.17 × 10^2
Max | 3.17 × 10^3 | 1.46 × 10^3 | 1.44 × 10^3 | 5.68 × 10^2 | 6.50 × 10^2 | 5.40 × 10^2
Min | 5.17 × 10^2 | 5.09 × 10^2 | 5.07 × 10^2 | 5.07 × 10^2 | 5.42 × 10^2 | 5.04 × 10^2
Var | 1.51 × 10^5 | 3.27 × 10^4 | 8.42 × 10^3 | 2.01 × 10^2 | 3.63 × 10^2 | 5.24 × 10^1
Time [s] | 63.56 | 162.50 | 35.73 | 77.76 | 28.44 | 651.85
D = 30 − Mean | 7.18 × 10^3 | 4.72 × 10^3 | 1.10 × 10^3 | 1.09 × 10^3 | 8.39 × 10^2 | 6.16 × 10^2
Max | 2.16 × 10^4 | 2.55 × 10^4 | 4.10 × 10^3 | 3.63 × 10^3 | 9.47 × 10^2 | 8.20 × 10^2
Min | 2.54 × 10^3 | 9.04 × 10^2 | 6.44 × 10^2 | 6.29 × 10^2 | 7.52 × 10^2 | 5.48 × 10^2
Var | 1.15 × 10^7 | 1.95 × 10^7 | 3.11 × 10^5 | 2.88 × 10^5 | 1.63 × 10^3 | 1.72 × 10^3
Time [s] | 107.38 | 495.15 | 69.49 | 191.50 | 58.66 | 1798.20
D = 50 − Mean | 2.18 × 10^4 | 4.72 × 10^3 | 2.29 × 10^3 | 1.72 × 10^3 | 1.15 × 10^3 | 7.98 × 10^2
Max | 3.46 × 10^4 | 2.55 × 10^4 | 9.15 × 10^3 | 9.20 × 10^3 | 1.38 × 10^3 | 1.08 × 10^3
Min | 1.25 × 10^4 | 9.04 × 10^2 | 1.18 × 10^3 | 8.92 × 10^2 | 1.02 × 10^3 | 6.87 × 10^2
Var | 2.03 × 10^7 | 1.95 × 10^7 | 1.64 × 10^6 | 1.28 × 10^6 | 4.44 × 10^3 | 6.70 × 10^3
Time [s] | 145.30 | 495.15 | 118.90 | 293.03 | 89.51 | 2841.95
D = 100 − Mean | 1.07 × 10^5 | 1.17 × 10^4 | 8.84 × 10^3 | 1.15 × 10^3 | 2.12 × 10^3 | 1.51 × 10^3
Max | 1.47 × 10^5 | 5.06 × 10^4 | 2.17 × 10^4 | 1.40 × 10^3 | 2.61 × 10^3 | 2.13 × 10^3
Min | 7.34 × 10^4 | 2.56 × 10^3 | 4.48 × 10^3 | 9.25 × 10^2 | 1.84 × 10^3 | 1.20 × 10^3
Var | 1.99 × 10^8 | 1.01 × 10^8 | 1.34 × 10^7 | 9.87 × 10^3 | 1.60 × 10^4 | 4.37 × 10^4
Time [s] | 254.33 | 898.74 | 290.62 | 686.85 | 168.89 | 5565.35
Figure A17. Results for F17, the shifted and rotated Schaffer’s F7 function, for D = 10, 30, 50, and 100 (30 trials).
Table A18. Results for F17, the shifted and rotated Schaffer’s F7 function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 1.55 × 10^3 | 1.02 × 10^3 | 1.41 × 10^3 | 8.53 × 10^2 | 3.94 × 10^3 | 6.30 × 10^2
Max | 4.63 × 10^3 | 8.96 × 10^3 | 6.84 × 10^3 | 5.30 × 10^3 | 9.08 × 10^3 | 1.19 × 10^3
Min | 6.01 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 6.00 × 10^2 | 1.28 × 10^3 | 6.00 × 10^2
Var | 1.06 × 10^6 | 9.96 × 10^5 | 1.67 × 10^6 | 4.22 × 10^5 | 2.92 × 10^6 | 8.25 × 10^3
Time [s] | 73.97 | 161.48 | 41.28 | 79.86 | 36.80 | 610.86
D = 30 − Mean | 5.06 × 10^4 | 1.43 × 10^4 | 2.03 × 10^4 | 1.41 × 10^4 | 3.82 × 10^4 | 1.17 × 10^4
Max | 1.05 × 10^5 | 6.97 × 10^4 | 4.92 × 10^4 | 5.11 × 10^4 | 1.52 × 10^5 | 4.95 × 10^4
Min | 8.03 × 10^3 | 6.19 × 10^2 | 2.17 × 10^3 | 8.97 × 10^2 | 1.34 × 10^4 | 9.80 × 10^2
Var | 3.69 × 10^8 | 1.52 × 10^8 | 1.31 × 10^8 | 1.21 × 10^8 | 2.62 × 10^8 | 8.13 × 10^7
Time [s] | 143.55 | 362.95 | 101.75 | 212.46 | 86.96 | 1758.90
D = 50 − Mean | 1.22 × 10^5 | 5.17 × 10^4 | 5.41 × 10^4 | 4.99 × 10^4 | 1.01 × 10^5 | 7.87 × 10^4
Max | 1.85 × 10^5 | 1.29 × 10^5 | 1.28 × 10^5 | 1.31 × 10^5 | 1.70 × 10^5 | 2.36 × 10^5
Min | 6.83 × 10^4 | 8.50 × 10^3 | 2.05 × 10^4 | 9.33 × 10^3 | 5.86 × 10^4 | 1.36 × 10^4
Var | 6.62 × 10^8 | 8.53 × 10^8 | 3.76 × 10^8 | 7.27 × 10^8 | 5.77 × 10^8 | 1.67 × 10^9
Time [s] | 201.67 | 543.62 | 175.18 | 343.40 | 138.55 | 3139.37
D = 100 − Mean | 3.99 × 10^5 | 2.93 × 10^5 | 2.36 × 10^5 | 2.29 × 10^5 | 4.29 × 10^5 | 4.65 × 10^5
Max | 5.52 × 10^5 | 4.52 × 10^5 | 4.45 × 10^5 | 3.73 × 10^5 | 7.01 × 10^5 | 7.34 × 10^5
Min | 2.88 × 10^5 | 1.33 × 10^5 | 9.72 × 10^4 | 7.24 × 10^4 | 2.55 × 10^5 | 2.54 × 10^5
Var | 2.64 × 10^9 | 3.60 × 10^9 | 3.85 × 10^9 | 3.51 × 10^9 | 5.16 × 10^9 | 1.22 × 10^10
Time [s] | 351.62 | 1024.56 | 437.53 | 759.67 | 274.57 | 6109.19
Table A19. Results for F18, the shifted and rotated Griewank function, for D = 10, 30, 50, and 100 (30 trials).
Metric | PSO | PSO-C | ELPSO | ELPSO-C | GA | ACOR
D = 10 − Mean | 9.35 × 10^2 | 2.43 × 10^−1 | 1.12 × 10^1 | 2.51 × 10^−1 | 7.07 × 10^−1 | 1.58 × 10^−1
Max | 1.32 × 10^3 | 8.91 × 10^−1 | 3.96 × 10^1 | 8.41 × 10^0 | 1.78 × 10^0 | 6.48 × 10^−1
Min | 6.32 × 10^2 | 1.23 × 10^−2 | 4.98 × 10^0 | 1.10 × 10^−2 | 6.23 × 10^−2 | 1.72 × 10^−2
Var | 1.99 × 10^4 | 3.05 × 10^−2 | 3.70 × 10^1 | 1.10 × 10^0 | 1.99 × 10^−1 | 1.09 × 10^−2
Time [s] | 208.24 | 154.07 | 221.05 | 544.99 | 22.60 | 592.90
D = 30 − Mean | 2.11 × 10^0 | 1.73 × 10^0 | 6.33 × 10^−2 | 1.36 × 10^−2 | 5.61 × 10^0 | 1.71 × 10^0
Max | 4.42 × 10^0 | 6.56 × 10^0 | 1.13 × 10^0 | 7.09 × 10^−2 | 1.13 × 10^1 | 3.24 × 10^0
Min | 1.38 × 10^0 | 1.59 × 10^−5 | 5.74 × 10^−3 | 6.66 × 10^−7 | 2.98 × 10^0 | 1.07 × 10^0
Var | 3.05 × 10^−1 | 2.00 × 10^0 | 2.14 × 10^−2 | 2.79 × 10^−4 | 2.23 × 10^0 | 1.49 × 10^−1
Time [s] | 93.50 | 465.66 | 55.61 | 180.77 | 40.77 | 1713.71
D = 50 − Mean | 5.29 × 10^0 | 1.73 × 10^0 | 5.61 × 10^−1 | 6.52 × 10^−3 | 1.38 × 10^1 | 5.61 × 10^0
Max | 1.05 × 10^1 | 6.56 × 10^0 | 1.33 × 10^0 | 3.70 × 10^−2 | 2.20 × 10^1 | 8.83 × 10^0
Min | 2.38 × 10^0 | 1.59 × 10^−5 | 1.65 × 10^−1 | 4.48 × 10^−5 | 7.08 × 10^0 | 3.24 × 10^0
Var | 2.14 × 10^0 | 2.00 × 10^0 | 4.16 × 10^−2 | 6.88 × 10^−5 | 7.85 × 10^0 | 1.27 × 10^0
Time [s] | 115.48 | 465.66 | 87.72 | 247.32 | 58.49 | 3055.10
D = 100 − Mean | 2.68 × 10^1 | 3.82 × 10^0 | 1.27 × 10^0 | 9.70 × 10^−3 | 4.88 × 10^1 | 3.01 × 10^1
Max | 3.82 × 10^1 | 1.48 × 10^1 | 1.83 × 10^0 | 3.40 × 10^−2 | 7.79 × 10^1 | 3.71 × 10^1
Min | 1.84 × 10^1 | 1.11 × 10^0 | 1.10 × 10^0 | 1.89 × 10^−3 | 3.48 × 10^1 | 2.12 × 10^1
Var | 1.39 × 10^1 | 5.81 × 10^0 | 1.56 × 10^−2 | 4.80 × 10^−5 | 5.98 × 10^1 | 1.18 × 10^1
Time [s] | 184.81 | 844.44 | 194.72 | 500.24 | 117.43 | 5940.40
Figure A18. Results for F18, the shifted and rotated Griewank function, for D = 10, 30, 50, and 100 (30 trials).

References

  1. Eberhart, R.C.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micromachine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43.
  2. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
  3. Shi, Y. Particle Swarm Optimization. IEEE Connect 2004, 2, 8–13.
  4. Laskari, E.C.; Parsopoulos, K.E.; Vrahatis, M.N. Particle Swarm Optimization for Integer Programming. In Proceedings of the IEEE Congress on Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002.
  5. Gaing, Z.L. A particle swarm optimization approach for optimum design of PID controller in AVR system. IEEE Trans. Energy Convers. 2004, 19, 384–391.
  6. Fukuyama, Y.; Yoshida, H. A particle swarm optimization for reactive power and voltage control in electric power systems. In Proceedings of the 2001 Congress on Evolutionary Computation, Seoul, Republic of Korea, 27–30 May 2001; pp. 1232–1239.
  7. Parsopoulos, K.E.; Vrahatis, M.N. On the computation of all global minimizers through particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 211–224.
  8. Clerc, M.; Kennedy, J. The particle swarm—Explosion, stability, and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73.
  9. Mendes, R. Population Topologies and Their Influence in Particle Swarm Performance. Ph.D. Thesis, Escola de Engenharia, Universidade do Minho, Braga, Portugal, 2004.
  10. Hu, X.; Eberhart, R.; Shi, Y. Recent advances in particle swarm. In Proceedings of the IEEE Congress on Evolutionary Computation, Portland, OR, USA, 19–23 June 2004.
  11. Jordehi, A.R. Enhanced leader PSO (ELPSO): A new PSO variant for solving global optimisation problems. Appl. Soft Comput. 2015, 26, 401–417.
  12. Li, X.L.; Serra, R.; Olivier, J. A multi-component particle swarm optimization with leader learning for structural damage detection. Appl. Soft Comput. 2022, 116, 108315.
  13. Chen, W.N.; Zhang, J.; Lin, Y.; Chen, N.; Zhan, Z.H.; Chung, H.S.H.; Li, Y.; Shi, Y.H. Particle swarm optimization with an aging leader and challengers. IEEE Trans. Evol. Comput. 2013, 17, 241–258.
  14. Wang, X.; Yang, D.; Chen, S. Particle swarm optimization based leader-follower cooperative control in multi-agent systems. Appl. Soft Comput. 2024, 151, 111130.
  15. Chen, B.; Zhang, R.; Chen, L.; Long, S. Adaptive Particle Swarm Optimization with Gaussian Perturbation and Mutation. Sci. Program. 2021, 2021, 6676449.
  16. Wang, C.; Shi, T.; Han, D. Adaptive Dimensional Gaussian Mutation of PSO-Optimized Convolutional Neural Network Hyperparameters. Appl. Sci. 2023, 13, 4254.
  17. Han, J.; Chen, Y.; Huang, X. An Advanced Adaptive Group Learning Particle Swarm Optimization Algorithm. Symmetry 2025, 17, 667.
  18. Fu, W.Y. Accelerated high-dimensional global optimization: A particle swarm optimizer incorporating homogeneous learning and autophagy mechanisms. Inf. Sci. 2023, 648, 119573.
  19. Hao, G.-S.; Lim, M.-H.; Ong, Y.-S.; Huang, H.; Wang, G.-G. Domination landscape in evolutionary algorithms and its applications. Soft Comput. 2019, 23, 3563–3570.
  20. Li, K.; Elsayed, S.; Sarker, R.; Essam, D. Landscape-based similarity check strategy for dynamic optimization problems. IEEE Access 2020, 8, 178570–178581.
  21. Hou, Y.; Hao, G.-S.; Zhang, Y.; Gu, F.; Wang, X.; Zhang, T.-T. A molecular interactions-based social learning particle swarm optimization algorithm. IEEE Access 2020, 8, 135661–135674.
  22. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial Bee Colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
  23. Liu, J.; Hou, Y.; Li, Y.; Zhou, H. A multi-strategy improved tree-seed algorithm for numerical optimization and engineering optimization problems. Sci. Rep. 2023, 13, 10768.
  24. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
  25. Suganthan, P.N. Particle swarm optimiser with neighborhood operator. In Proceedings of the Congress on Evolutionary Computation, Washington, DC, USA, 6–9 July 1999.
  26. Yasuda, K.; Iwasaki, N. Velocity feedback adaptive particle swarm optimization. In Proceedings of the 6th Metaheuristics International Conference, Lisbon, Portugal, 16–18 June 2005; pp. 947–952.
  27. Murtagh, F. A survey of recent advances in hierarchical clustering algorithms. Comput. J. 1983, 26, 354–359.
  28. Hzambran. CEC2013 Benchmark Functions. 2013. Available online: https://github.com/hzambran/cec2013/tree/master/inst/extdata (accessed on 19 February 2024).
  29. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975.
  30. Socha, K.; Dorigo, M. Ant colony optimization for continuous domains. Eur. J. Oper. Res. 2008, 185, 1155–1173.
  31. Goodarzimehr, V.; Shojaee, S.; Hamzehei-Javaran, S.; Talatahari, S. Special Relativity Search: A novel metaheuristic method based on special relativity physics. Knowl. Based Syst. 2022, 257, 109484.
Table 1. Cluster ranking based on mean and variance.

| Cluster | Rank by Avg. Mean | Rank by Avg. Variance | Total Score | Rank |
|---|---|---|---|---|
| Cluster 1 | 1 | 1 | 2 | 1st |
| Cluster 2 | 4 | 4 | 8 | 4th |
| Cluster 3 | 2 | 3 | 5 | 2nd |
| Cluster 4 | 3 | 2 | 5 | 3rd |
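The ranking step behind Table 1 can be sketched in a few lines: each cluster's rank by average mean and rank by average velocity variance are summed into a total score, and clusters are ordered by that score. The tie-breaking rule (ties resolved by the mean rank, as Clusters 3 and 4 both score 5) is our assumption, not stated by the paper.

```python
def rank_clusters(mean_ranks, var_ranks):
    """Combine per-criterion ranks into a final cluster ordering.

    mean_ranks / var_ranks: rank of each cluster by average mean and by
    average variance (1 = best). Ties in the total score are broken by
    the mean rank (an assumption for illustration).
    """
    totals = [m + v for m, v in zip(mean_ranks, var_ranks)]
    order = sorted(range(len(totals)), key=lambda i: (totals[i], mean_ranks[i]))
    ranks = [0] * len(totals)
    for place, i in enumerate(order, start=1):
        ranks[i] = place
    return totals, ranks

# Reproduces Table 1: totals [2, 8, 5, 5], final ranks 1st, 4th, 2nd, 3rd.
totals, ranks = rank_clusters([1, 4, 2, 3], [1, 4, 3, 2])
```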
Table 2. List of benchmark functions used in the experiments.

| ID | Function Name | Modality | Properties |
|---|---|---|---|
| F1 | Sphere | Unimodal | Separable |
| F2 | Weighted sphere | Unimodal | Separable |
| F3 | Rosenbrock | Unimodal | Non-separable |
| F4 | Schwefel’s Problem 2.22 | Unimodal | Separable |
| F5 | Rastrigin | Multimodal | Separable |
| F6 | Ackley | Multimodal | Separable |
| F7 | Griewank | Multimodal | Separable |
| F8 | Schwefel | Multimodal | Non-separable |
| F9 | Shifted sphere | Unimodal | Separable, Shifted |
| F10 | Shifted weighted sphere | Unimodal | Separable, Shifted |
| F11 | Shifted Rastrigin | Multimodal | Separable, Shifted |
| F12 | Shifted Griewank | Multimodal | Separable, Shifted |
| F13 | Shifted and rotated Bent Cigar | Unimodal | Non-separable, Shifted, Rotated |
| F14 | Shifted and rotated Zakharov | Unimodal | Non-separable, Shifted, Rotated |
| F15 | Shifted and rotated Rosenbrock | Unimodal | Non-separable, Shifted, Rotated |
| F16 | Shifted and rotated Rastrigin | Multimodal | Non-separable, Shifted, Rotated |
| F17 | Shifted and rotated Schaffer’s F7 | Multimodal | Non-separable, Shifted, Rotated |
| F18 | Shifted and rotated Griewank | Multimodal | Non-separable, Shifted, Rotated |
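As an illustration of the shifted-and-rotated construction used for F13–F18, the sketch below composes the standard Griewank function (F18's base) with a shift vector and rotation matrix. The specific shift and rotation here are placeholders (the experiments use the CEC2013 data files [28]); the identity matrix stands in for a random orthogonal rotation.

```python
import numpy as np

def griewank(z):
    """Standard Griewank: f(z) = 1 + sum(z_i^2)/4000 - prod(cos(z_i / sqrt(i))).

    Global minimum is 0 at z = 0; the function is nonnegative everywhere.
    """
    i = np.arange(1, z.size + 1)
    return 1.0 + np.sum(z ** 2) / 4000.0 - np.prod(np.cos(z / np.sqrt(i)))

def shifted_rotated_griewank(x, shift, rotation):
    """F18-style composition: rotate the shifted input, then evaluate the base function."""
    return griewank(rotation @ (np.asarray(x, dtype=float) - shift))

D = 5
shift = np.full(D, 10.0)   # placeholder shift vector
R = np.eye(D)              # placeholder for a random orthogonal matrix
val = shifted_rotated_griewank(shift, shift, R)  # evaluating at the shifted optimum
```

At x = shift the rotated argument is the zero vector, so the composed function attains its global minimum of 0, which is how the shifted benchmarks relocate the optimum away from the origin without changing the landscape's shape.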
Table 3. Parameter settings used in the experiments.

| Method | ω | c1, c2 | n | k_max | n_c | h_1 | s_1 | Other |
|---|---|---|---|---|---|---|---|---|
| PSO [2] | 0.7 | 1.5, 1.5 | 100 | 10,000 | – | – | – | – |
| PSO-C (proposed) | 0.7 | 1.5, 1.5 | 100 | 10,000 | 4 | – | – | – |
| ELPSO [11] | 0.7 | 1.5, 1.5 | 100 | 4900 | – | 1 | 2 | F = 1.2 |
| ELPSO-C (proposed) | 0.7 | 1.5, 1.5 | 100 | 4900 | 4 | 1 | 2 | F = 1.2 |
| GA [29] | – | – | 100 | 10,000 | – | – | – | P_c = 0.8, P_m = 0.02 |
| ACOR [30] | – | – | 100 | 10,000 | – | – | – | t = 50, ξ = 0.85 |

Note: n is the population size; k_max is the maximum number of iterations; n_c is the number of clusters used in cluster-based variants; h_1 and s_1 are stagnation thresholds used to trigger mutation stage switching in ELPSO; F is the scaling factor for DE-based mutation. In GA, P_c is the crossover probability and P_m is the mutation probability. In ACOR, t is the archive size, and ξ is the scaling factor for the standard deviation of the Gaussian kernel.
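For readers reimplementing the comparison, the settings of Table 3 can be captured as a configuration mapping. The key names below are our own shorthand, not identifiers from the authors' code; values are taken directly from the table.

```python
# Illustrative encoding of Table 3 (key names are ours, values from the table).
COMMON_PSO = {"omega": 0.7, "c1": 1.5, "c2": 1.5, "n": 100}

CONFIGS = {
    "PSO":     {**COMMON_PSO, "k_max": 10_000},
    "PSO-C":   {**COMMON_PSO, "k_max": 10_000, "n_c": 4},
    "ELPSO":   {**COMMON_PSO, "k_max": 4_900, "h1": 1, "s1": 2, "F": 1.2},
    "ELPSO-C": {**COMMON_PSO, "k_max": 4_900, "n_c": 4, "h1": 1, "s1": 2, "F": 1.2},
    "GA":      {"n": 100, "k_max": 10_000, "P_c": 0.8, "P_m": 0.02},
    "ACOR":    {"n": 100, "k_max": 10_000, "t": 50, "xi": 0.85},
}
```

Note that the ELPSO variants use a smaller iteration budget (4900 vs. 10,000), so per-method wall-clock times in the appendix tables are not directly comparable to iteration counts alone.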
