Article

Hybrid Sine Cosine and Particle Swarm Optimization Algorithm for High-Dimensional Global Optimization Problem and Its Application

1 School of Computer Science and Engineering, North Minzu University, Yinchuan 750021, China
2 School of Mathematics and Information Science, North Minzu University, Yinchuan 750021, China
3 Ningxia Collaborative Innovation Center for Scientific Computing and Intelligent Information Processing, North Minzu University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(7), 965; https://doi.org/10.3390/math12070965
Submission received: 5 March 2024 / Revised: 18 March 2024 / Accepted: 22 March 2024 / Published: 24 March 2024
(This article belongs to the Section Engineering Mathematics)

Abstract

Particle Swarm Optimization (PSO) faces growing challenges in solving high-dimensional global optimization problems. To overcome this difficulty, this paper proposes a novel PSO variant hybridized with a Sine Cosine Algorithm (SCA) strategy, named Velocity Four Sine Cosine Particle Swarm Optimization (VFSCPSO). Introducing the SCA strategy into the velocity formulation helps the algorithm find the global optimal solution accurately and increases the flexibility of PSO. A series of experiments is conducted on the CEC2005 test suite against the component algorithms, algorithmic variants, and well-performing intelligent algorithms. The experimental results show that the algorithm effectively improves on the overall performance of its component algorithms, and the Friedman test confirms that the algorithm is highly competitive. The algorithm also performs well in PID parameter tuning. Therefore, VFSCPSO can solve high-dimensional global optimization problems effectively.

1. Introduction

Over the past half century, researchers have drawn inspiration from biological behavior and natural phenomena to propose a variety of intelligent algorithms, which have since been applied in depth across many fields. Intelligent algorithms offer strong global search capability and adaptability: they adapt successfully to a wide variety of optimization problems and overcome many limitations of traditional optimization algorithms. They therefore outperform traditional optimization algorithms on many problems, making them a more effective and practical choice.
Intelligent algorithms can be roughly divided into the following four types according to different classification standards. First, evolutionary algorithms model the essential mechanisms of evolution; these include the Genetic Algorithm (GA) [1], Differential Evolution (DE) [2], the Dandelion Optimizer (DO) [3], and Biogeography-Based Optimization (BBO) [4]. Second, swarm intelligence optimization algorithms incorporate the collective behavior of biological groups into the solution of optimization problems; they include Particle Swarm Optimization (PSO) [5], the Shuffled Frog Leaping Algorithm (SFLA) [6], the Firefly Algorithm (FA) [7], the Mayfly Algorithm (MA) [8], and the Peafowl Optimization Algorithm (POA) [9]. Third, physics-based algorithms introduce the laws of nature into optimization; these include Simulated Annealing (SA) [10] and the Sine Cosine Algorithm (SCA) [11]. Fourth, human-based algorithms draw on characteristics of human behavior, such as Student Psychology-Based Optimization (SPBO) [12] and the Optical Microscope Algorithm (OMA) [13].
PSO [14] is an optimization technique based on evolution and iteration, and it is used in various fields to solve real optimization problems, such as path planning [15], parameter identification [16], image processing [17], biomedical technologies [18], and other optimization problems. The advantages of PSO lie mainly in the following four aspects. First, PSO emulates the characteristics of biological behavior; its theory and structure are relatively simple, the number of control parameters is small so that few parameters need to be adjusted, and it is relatively easy to tune the parameters to the optimization problem at hand [19]. Second, PSO has a strong global search ability: through cooperation and information sharing, it can quickly search for potential optimal solutions over a wide scope, helping the algorithm approach the global optimal solution. Third, PSO does not rely on gradient information of the objective function, so it has distinct advantages in solving complex optimization problems; it is more flexible than traditional optimization algorithms and can solve large-scale problems. Fourth, PSO performs well on continuous optimization problems and usually finds high-quality solutions. Based on the above discussion, PSO has more obvious advantages than other intelligent algorithms in solving high-dimensional optimization problems. Therefore, based on these unique advantages, we employ PSO to tackle high-dimensional optimization problems effectively.
Like other intelligent algorithms, PSO has some disadvantages, such as a slow convergence velocity, premature convergence, and falling into local optima, and researchers have proposed many improvements. Among current inertia weight selection strategies, the most popular is still that of Shi et al. [20], who proposed a linearly decreasing dynamic inertia weight, which increases exploration and exploitation capability over the course of evolution. Liu et al. [21] proposed Adaptive Weighted Particle Swarm Optimization (AWPSO) to improve the convergence velocity of the algorithm. Ahmed et al. [22] proposed a novel tracking technique based on Self-Tuning Particle Swarm Optimization (ST-PSO); this method maximizes the power output generated by a proton-exchange membrane fuel cell. Zhang et al. [23] proposed a corrective strategy and, combining the characteristics of two strategies, developed Particle Swarm Optimization with Self-Corrective and Dimension-by-Dimension Learning Capabilities (SCDLPSO), which is more competitive on optimization problems of different dimensions. Xue et al. [24] designed a strategy pool consisting of four strategies for solving high-dimensional feature selection problems. In addition, introducing features of other intelligent algorithms can produce hybrid algorithms with better performance: Kaseb et al. [25] added the crossover and mutation operators of GA to PSO, which further balanced the exploration and exploitation capability of the algorithm and improved its ability to deal with urban wind conditions. Shams et al. [26] proposed a hybrid Dipper-Throated Optimization and Particle Swarm Optimization (DTPSO), combining the advantages of Dipper-Throated Optimization and PSO for hepatocellular carcinoma prediction. Despite the large number of existing improvement studies, there is still room for PSO [27] to enhance its overall performance, and remedying the shortcomings of PSO [28] while maintaining its advantages still requires further work. Although PSO performs well on many problems, it retains some limitations, such as easily falling into local optimal solutions and a slow convergence velocity. Meanwhile, different optimization problems have different characteristics and requirements: a PSO variant applicable to one problem may not be applicable to others and cannot satisfy a broader range of application scenarios. Improving PSO can not only increase the overall performance of the algorithm, but also adapt it to new research trends and requirements. Therefore, although the classical PSO has many variants, many aspects can still be improved and optimized. The improvement of PSO remains an important research direction, one that can promote algorithmic innovation and development and bring new ideas to the field of optimization algorithms.
SCA is an optimization technique based on an objective natural law: the algorithm performs its search based on the cyclic properties of the sine–cosine function and can be applied to engineering design optimization [29], neural networks [30], path planning [31], and other problems. SCA has three clear advantages. First, SCA makes full use of the cyclic property of the sine–cosine function to balance the exploration and exploitation capability of the algorithm, giving it better overall performance. Second, SCA does not rely on gradient information [32] and has a wide range of applicability. Third, SCA has few adjustable parameters, which reduces tedious tuning work and makes the algorithm more convenient to use. SCA performs well on many optimization problems, although as problem complexity increases, it may take more time to find the optimal solution. Researchers have improved SCA in various ways. Zhou et al. [33] used Gaussian mutation to increase the diversity of SCA, and the proposed FGSCA became a powerful tool. Ma et al. [34] used a nonlinear strategy and added a hill-climbing strategy to the local search; the resulting ISCA optimized the learning rate and the number of hidden-layer neurons of a GRU to predict more accurately the storage and utilization of solar energy in photovoltaic power generation. Hamad et al. [35] proposed QLESCA, an algorithm that obtains superior fitness values. Although SCA has shown excellent performance on many optimization problems, more computing resources may be required to find the best strategy as problem complexity increases. Based on the aforementioned advantages, we selected SCA as our hybrid algorithmic strategy.
Most current intelligent algorithms cannot effectively solve large-scale problems [36]. Hence, enhancing the capability of intelligent algorithms to tackle high-dimensional global optimization problems [37] is a highly valuable research area. Problems with more than 100 dimensions are considered high-dimensional optimization problems; these include large-scale problems, which are abstracted from complex real-world optimization problems and have gradually become a hot research topic. As the number of decision variables increases, the computational complexity of evaluating the objective function and constraints increases dramatically, making the solution more difficult. The solution space of such a problem is very large and contains a massive number of potential solutions, but only a small portion of them are feasible and meaningful. Under these conditions it is easy to fall into local optimal solutions, and efficient algorithms are required for global search. Xu et al. [38] constructed a pool of selection strategies from PSO variants and derived new PSO strategy sets; the algorithm ultimately improved the overall performance of PSO in solving large-scale problems. Wang et al. [39] proposed a PSO based on a reinforcement learning layer, which adopts a hierarchically controlled reinforcement learning strategy and introduces a level competition mechanism, effectively improving PSO convergence on large-scale problems. Yi et al. [40] improved a GA variant to achieve good performance on large-scale problems based on human electroencephalography signal processing. The complexity of the problems faced by human beings keeps increasing. The algorithms in the above studies show some improvement in solving large-scale problems, but can only handle problems with at most a thousand dimensions. Solving large-scale problems is a challenging task that requires a combination of knowledge, a suitable algorithm, and computational resources to obtain a feasible solution. In this context, it is therefore still necessary to improve existing intelligent algorithms to solve high-dimensional global optimization problems.
With the increasing degree of automation, real-life requirements for motor control are also increasing. Servo motor systems usually employ a PID controller as the actuator, adjusting the behavior of the entire system by tuning its parameters. Since Ziegler and Nichols proposed the Ziegler–Nichols tuning method [41] in 1942, many classical parameter-tuning methods have emerged. Selecting appropriate PID parameters is a critical step in the control process, and PID controllers based on intelligent algorithms can play an important role after parameter tuning. Most intelligent algorithms are suitable for optimizing the PID controller parameters of servo motors, helping the PID controller determine optimal parameters and significantly improving the performance of the whole control system. GA finds the best parameter combinations by selection and mutation among different PID parameter sets [42] in order to minimize performance metrics such as system errors or vibrations, so that the system exhibits a good response characteristic. SA [43] discovers more appropriate PID parameters by progressively decreasing the probability of accepting a worse solution while searching the solution space. The improved FA [44] self-tunes the PID parameters by introducing inertia weights and tuning factors, which ultimately improves the capability of the PID controller. While the application of intelligent algorithms to PID parameter tuning is relatively mature, the problem has not been solved perfectly; there is still room to enhance this application with improved intelligent algorithms.
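To make the tuning loop concrete, the following minimal Python sketch shows the kind of objective function an intelligent algorithm minimizes when tuning PID gains. The first-order plant, its time constant, and the integral-of-squared-error (ISE) index are illustrative assumptions for this sketch, not the simulation model used later in this paper.

def pid_cost(gains, dt=0.01, t_end=5.0):
    """ISE cost of a PID controller on an assumed first-order plant
    dy/dt = (-y + u) / tau, tracking a unit step reference."""
    kp, ki, kd = gains
    tau = 0.5                      # assumed plant time constant
    y, integral, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y              # unit step reference
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv   # PID control law
        y += dt * (-y + u) / tau   # Euler step of the plant
        cost += err ** 2 * dt      # accumulate the ISE index
        prev_err = err
    return cost

An optimizer such as the VFSCPSO proposed below would then search the three-dimensional gain space (kp, ki, kd) for the combination that minimizes this cost.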
Over the past three decades, the theory of PSO has been continuously enriched. Although PSO has been widely used in different fields, most PSO variants can only solve moderate global optimization problems; relatively little research addresses high-dimensional problems, especially large-scale ones, even though a large number of complex optimization problems urgently need to be solved. Existing PSO variants cannot solve them efficiently. SCA is a novel intelligent algorithm, and introducing its advantages into PSO opens up more possibilities in the field of optimization: the resulting hybrid algorithm has better performance and solves optimization problems more efficiently. PSO suffers from a weak local search ability, while SCA suffers from lower stability and lower adaptability in high-dimensional environments. Although a great deal of research exists, there is no best algorithm, only better ones. To cope with this challenge, this paper proposes a new method, the Velocity Four Sine Cosine Particle Swarm Optimization (VFSCPSO) algorithm. VFSCPSO introduces the influence of an SCA strategy on the basis of PSO. The new algorithm has high optimization accuracy and stability, effectively solving optimization problems. The primary contributions of this paper are summarized as follows:
  • A new SCA strategy is proposed; by regulating the search range through the periodic variation of the sine–cosine function, the strategy achieves high flexibility.
  • A novel framework of PSO is proposed, which is more efficient than the original PSO algorithm. Meanwhile, VFSCPSO does not increase the computational complexity compared to PSO.
  • The scalability of VFSCPSO in solving large-scale problems is verified through 10,000-dimensional numerical experiments on the CEC2005 test suite, and the robustness of VFSCPSO on the PID parameter-tuning problem is verified through simulation experiments.
The rest of this paper is summarized as follows. Section 2 provides preliminary knowledge of the standard PSO and SCA. Section 3 presents the VFSCPSO and analyzes its complexity. Section 4 comprises the experiments and analysis. Finally, the conclusions are presented in Section 5.

2. Standard PSO and SCA

2.1. Particle Swarm Optimization (PSO)

In 1995, Eberhart and Kennedy jointly proposed PSO, a swarm intelligence algorithm that abstracts the foraging behavior of bird flocks into a mathematical model. In the standard PSO [45], a set of random solutions is first generated by assuming a total of $N$ particles, and the information of particle $i$ is represented as a $D$-dimensional vector: the position of particle $i$ is denoted $x_i = (x_{i1}, x_{i2}, \ldots, x_{iD})^T$, its velocity is denoted $v_i = (v_{i1}, v_{i2}, \ldots, v_{iD})^T$, the individual best positions are denoted $pbest = (pbest_1, pbest_2, \ldots, pbest_N)^T$, and the global best position of the population is denoted $gbest$. The formula for updating the velocity of particle $i$ in the $(k+1)$th iteration is
$$v_{id}^{k+1} = w\,v_{id}^{k} + c_1 r_1 \left( pbest_i^{k} - x_{id}^{k} \right) + c_2 r_2 \left( gbest^{k} - x_{id}^{k} \right) \quad (1)$$
The velocity update formula, Equation (1), is one of the core formulas of the standard PSO, where $w$ is the inertia weight; this parameter maintains the motion inertia of the particle and reflects the degree to which the previous velocity is inherited. At the $k$th iteration, $v_{id}$ denotes the velocity of particle $i$ in the $d$th dimension of the search space, $x_{id}$ denotes the position of particle $i$ in the $d$th dimension, $pbest_i$ denotes the individual best position of particle $i$, and $gbest$ denotes the global best position of the whole population. $c_1$ and $c_2$ are the learning factors, and $r_1$ and $r_2$ are random numbers in $[0, 1]$.
$$x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1} \quad (2)$$
From the other core formula of PSO, Equation (2), the position equation consists of two key components. The first part is the position $x_{id}$ of particle $i$, corresponding to the candidate solution at the $k$th iteration; the second part determines the adjustment of the position of particle $i$ at the $(k+1)$th iteration, i.e., the direction of motion $v_{id}$. The two parts together determine the position of the particle, and hence the solution selected at the $(k+1)$th iteration. The velocity and position boundary conditions of particle $i$ at the $(k+1)$th iteration are denoted as
$$v_i^{k} = \begin{cases} v_i^{k}, & v_{\min} \le v_i^{k} \le v_{\max} \\ v_{\max}, & v_i^{k} > v_{\max} \\ v_{\min}, & v_i^{k} < v_{\min} \end{cases} \quad (3)$$
$$x_i^{k} = \begin{cases} x_i^{k}, & x_{\min} \le x_i^{k} \le x_{\max} \\ x_{\max}, & x_i^{k} > x_{\max} \\ x_{\min}, & x_i^{k} < x_{\min} \end{cases} \quad (4)$$
In Equations (3) and (4), before the position of particle $i$ is updated at the $k$th iteration, the current velocity is checked against the given velocity range; if it exceeds the range, it is replaced by the corresponding boundary value. The same check is applied to the current position. The boundary conditions improve the particles' search ability and efficiency and prevent the population dispersion that would result from particles leaving the search range. The individual and global best position update formulas are denoted as
$$pbest_i^{k+1} = \begin{cases} pbest_i^{k}, & f(x_i^{k+1}) \ge f(pbest_i^{k}) \\ x_i^{k+1}, & f(x_i^{k+1}) < f(pbest_i^{k}) \end{cases} \quad (5)$$
$$gbest^{k+1} = \begin{cases} gbest^{k}, & f(pbest_i^{k+1}) \ge f(gbest^{k}) \\ pbest_i^{k+1}, & f(pbest_i^{k+1}) < f(gbest^{k}) \end{cases} \quad (6)$$
In Equations (5) and (6), updating the individual best $pbest_i$ and the global best $gbest$ of the population according to the objective function $f$ guides the population to search iteratively towards the global optimal solution. In comparison experiments, the termination condition is usually reaching the maximum number of iterations. The pseudocode of PSO is shown in Algorithm 1.
Algorithm 1 Particle Swarm Optimization
Input: Initialize population $N$, dimension $D$, iteration $K$, learning factors $c_1$, $c_2$, inertia weight $w$, velocity boundaries $v_{max}$, $v_{min}$, position boundaries $x_{max}$, $x_{min}$
Output: Optimal fitness value
1: Randomly generate the initial population via Equations (1)–(4), initializing the velocity $v_{id}$ and position $x_{id}$ of each particle
2: Calculate the fitness values
3: Generate the individual and global best positions $pbest_i$ and $gbest$ by Equations (5) and (6)
4: while $k < K$ do
5:     for $i = 1$ to $N$ do
6:         Calculate $v_{id}$ by Equations (1) and (3)
7:         Calculate $x_{id}$ by Equations (2) and (4)
8:         Update $pbest_i$ and $gbest$ by Equations (5) and (6)
9:     end for
10:    $k = k + 1$
11: end while
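As a concrete reference, the following minimal Python/NumPy sketch implements Algorithm 1 for minimization (the convention of the CEC2005 benchmarks); the bounds and parameter values are placeholders, not the settings used in the experiments below.

import numpy as np

def pso(f, dim, n=30, iters=1000, w=0.7, c1=2.0, c2=2.0,
        x_bound=(-100.0, 100.0), v_bound=(-1.0, 1.0)):
    """Standard PSO following Equations (1)-(6)."""
    rng = np.random.default_rng()
    x = rng.uniform(*x_bound, (n, dim))               # initial positions
    v = rng.uniform(*v_bound, (n, dim))               # initial velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (1)
        v = np.clip(v, *v_bound)                                    # Eq. (3)
        x = np.clip(x + v, *x_bound)                                # Eqs. (2), (4)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f                                       # Eq. (5)
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest_f.argmin()                                        # Eq. (6)
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f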

2.2. Sine Cosine Algorithm (SCA)

Mirjalili proposed SCA in 2016, inspired by the properties of the sine–cosine function. Its central idea is to find the optimal solution by exploiting the periodic variation of the sine–cosine function in the search space, combined with a random search strategy. First, a set of candidate solutions is randomly initialized in the search space, the fitness of each candidate is calculated, and the current optimal solution is recorded. In the exploration phase, owing to the high random ratio, SCA makes the changes in the solution set more pronounced, imposing large fluctuations on the current solutions so as to search unknown regions of the solution space rapidly. In the exploitation phase, the degree of random variation is smaller, and weak random perturbations are applied to the obtained solution set to search the neighborhood of the current solution thoroughly. The optimization process is thus divided into two phases, exploration and exploitation, and the position update formula of SCA is denoted as
$$x_i^{k+1} = \begin{cases} x_i^{k} + s_1 \sin(s_2) \left| s_3 P_i^{k} - x_i^{k} \right|, & s_4 < 0.5 \\ x_i^{k} + s_1 \cos(s_2) \left| s_3 P_i^{k} - x_i^{k} \right|, & s_4 \ge 0.5 \end{cases} \quad (7)$$
The position of a solution is updated using the mathematical expression defined in Equation (7), the core formula of SCA, where $x_i$ is the position of the current solution $i$ at the $k$th iteration; $P_i$ is the historical individual optimal solution; $s_1$ is an adaptive parameter; $s_2$, $s_3$, and $s_4$ are random numbers; and $|\cdot|$ denotes the absolute value.
The role of $s_1$ is to balance the amplitude of the sine and cosine terms so as to shift the algorithm gradually from global to local search: a larger $s_1$ improves the global search capability of the algorithm, whereas a smaller $s_1$ facilitates local exploitation. $s_2$ is a random number in the range $[0, 2\pi]$ that determines whether the updated solution lies inside or outside the region between the current and optimal solutions, defining the direction of movement and the extreme value of the achievable iterative step size. $s_3$ is a random number in the range $[0, 2]$ that assigns a random weight to the current optimal solution, randomly emphasizing or weakening the influence of the target point and defining the level of effect of the candidate solution as it moves away. $s_4$ is a random number in the range $[0, 1]$ that selects between the sine and cosine update formulas, balancing the randomness of switching between them and decoupling the iterative step size from the direction.
$$s_1 = a - a\,\frac{k}{K} \quad (8)$$
Equation (8) defines the adaptive parameter $s_1$, which adjusts the change in the position of the solution by making the amplitude of the sine–cosine function decrease gradually over the iterations, thereby indicating the direction of the next solution's movement. The value of $a$ determines the distance a solution can move in the search space and controls the amplitude of the sine–cosine function; $a$ is usually taken as the constant 2. The value of $k/K$ also affects the amplitude: as the number of iterations increases, the amplitude decreases gradually. A reasonable choice of $s_1$ is therefore crucial for the algorithm's performance and convergence. The SCA pseudocode is shown in Algorithm 2.
Algorithm 2 Sine Cosine Algorithm
Input: Initialize dimension $D$, iteration $K$, constant parameter $a$
Output: Optimal fitness value
1: Initialize a set of randomly generated solutions
2: while $k < K$ do
3:     Evaluate the fitness values
4:     Update the current optimal solution $P_i$
5:     Update parameter $s_1$ by Equation (8) and update parameters $s_2$–$s_4$
6:     Update the position of the current solution by Equation (7)
7:     $k = k + 1$
8: end while
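A corresponding Python sketch of Algorithm 2 follows, again written for minimization. Drawing $s_2$, $s_3$, and $s_4$ independently per dimension is an implementation assumption; the description above does not fix the sampling granularity.

import numpy as np

def sca(f, dim, n=30, iters=1000, a=2.0, x_bound=(-100.0, 100.0)):
    """Sine Cosine Algorithm following Equations (7) and (8)."""
    rng = np.random.default_rng()
    x = rng.uniform(*x_bound, (n, dim))
    fx = np.apply_along_axis(f, 1, x)
    best, best_f = x[fx.argmin()].copy(), fx.min()    # destination point P
    for k in range(iters):
        s1 = a - a * k / iters                        # Eq. (8)
        for i in range(n):
            s2 = rng.uniform(0.0, 2.0 * np.pi, dim)
            s3 = rng.uniform(0.0, 2.0, dim)
            s4 = rng.random(dim)
            trig = np.where(s4 < 0.5, np.sin(s2), np.cos(s2))
            x[i] = x[i] + s1 * trig * np.abs(s3 * best - x[i])   # Eq. (7)
        x = np.clip(x, *x_bound)
        fx = np.apply_along_axis(f, 1, x)
        if fx.min() < best_f:                         # keep the best-so-far solution
            best, best_f = x[fx.argmin()].copy(), fx.min()
    return best, best_f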

3. Proposed Method

3.1. Proposed SCA Strategy

SCA inherently benefits from high exploration, which helps it avoid local optima, and the adaptive range of the sine–cosine function allows a smooth transition from exploration to exploitation as the algorithm uses the sine–cosine terms to explore different regions of the search space. At the same time, the current solution always updates its position around the current best solution, and during optimization the solutions move towards the most promising region of the search space. The current optimal solution is stored in a variable as a target point and retained during optimization, ultimately yielding a global optimal value. Here, we replace the current optimal solution with the global optimal solution. This focuses more particles on searching around the global optimal solution, enabling faster convergence to it. First, the particles are guided by collective intelligence: the global optimal solution represents the best position of the whole population, so introducing it into the SCA strategy brings in more collective wisdom and makes better use of global information. Second, the global optimal solution corresponds to a better position in the solution space; using it as a guide helps SCA avoid falling into local optima, which improves the global search ability. Third, integrating the global optimal solution into the SCA strategy enhances information exchange: particles are more likely to search along the direction of the global optimal solution, which strengthens information exchange within the population and helps it find the global optimal solution more rapidly. Finally, introducing the global optimal solution into SCA accelerates the convergence of the SCA strategy, increasing the efficiency of the algorithm and speeding up the search. To sum up, changing the target of SCA from the current optimal solution to the global optimal solution not only enhances the global search capability and information exchange, but also accelerates convergence and avoids local optima, improving the performance and efficiency of the algorithm. The proposed SCA strategy is denoted as
$$S = \begin{cases} c_3 r_3 s_1 \sin(s_2) \left| s_3\,gbest^{k} - x_{id}^{k} \right|, & s_4 < 0.5 \\ c_3 r_3 s_1 \cos(s_2) \left| s_3\,gbest^{k} - x_{id}^{k} \right|, & s_4 \ge 0.5 \end{cases} \quad (9)$$
In Equation (9), $c_3$ is the learning factor and $r_3$ is a random number in the range $[0, 1]$. $s_1$ and $s_2$ control the direction in which the particles move, with $s_1 \sin(s_2)$ and $s_1 \cos(s_2)$ determining the region where the next solution will lie; $s_3$ controls the degree of influence of the global optimal solution $gbest$; and $s_4$ ensures that the velocity update switches evenly between the sine and cosine forms. The constant $a$ of Equation (8) is set to 2. The terms $c_3 r_3 s_1 \sin(s_2) | s_3\,gbest^{k} - x_{id}^{k} |$ and $c_3 r_3 s_1 \cos(s_2) | s_3\,gbest^{k} - x_{id}^{k} |$ constitute the SCA influence term.
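For clarity, the strategy term $S$ of Equation (9) can be computed for one particle as in the following sketch; as with the SCA sketch above, per-dimension draws of $r_3$ and $s_2$–$s_4$ are an assumption rather than something the formulation prescribes.

import numpy as np

def sca_influence(x_i, gbest, s1, c3, rng):
    """SCA influence term S of Equation (9) for one particle."""
    d = x_i.shape[0]
    r3 = rng.random(d)
    s2 = rng.uniform(0.0, 2.0 * np.pi, d)
    s3 = rng.uniform(0.0, 2.0, d)
    s4 = rng.random(d)
    trig = np.where(s4 < 0.5, np.sin(s2), np.cos(s2))   # sine/cosine switch
    return c3 * r3 * s1 * trig * np.abs(s3 * gbest - x_i)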

3.2. Proposed Velocity Four Sine Cosine Particle Swarm Optimization (VFSCPSO)

The velocity formula, Equation (1), is the core of PSO and guides the particle towards the global optimal solution. When updating the particle velocity, the particle is affected by two components, the individual cognitive part and the social cognitive part; balancing the individual-best and population-best terms balances exploration and exploitation during optimization and ultimately yields rapid convergence to the global optimal solution. PSO has a strong global search ability, but its ability to escape local optima in the later stage of the search is poor, because other factors that might influence the particles are not taken into account. To improve the diversity of the algorithm and avoid local optima, other reasonable influence factors can be integrated into the velocity update. We introduce the SCA influence factors during the velocity update to increase the diversity of PSO throughout the search, which makes the particles more flexible in searching for the global optimum, increases their search capability, and further guides them towards the global optimal solution. This paper therefore proposes the Velocity Four Sine Cosine Particle Swarm Optimization (VFSCPSO) algorithm. Accordingly, the velocity update formula for the $(k+1)$th iteration in VFSCPSO is denoted as
$$v_{id}^{k+1} = w\,v_{id}^{k} + c_1 r_1 \left( pbest_i^{k} - x_{id}^{k} \right) + c_2 r_2 \left( gbest^{k} - x_{id}^{k} \right) + S \quad (10)$$
Equation (10) is the velocity update formula of the VFSCPSO model: $w\,v_{id}^{k} + c_1 r_1 ( pbest_i^{k} - x_{id}^{k} ) + c_2 r_2 ( gbest^{k} - x_{id}^{k} )$ is the PSO velocity formula, Equation (1), and $S$ is the SCA strategy. The other steps and formulas of VFSCPSO remain unchanged and are therefore not repeated here. VFSCPSO builds a new algorithmic framework on the PSO framework in combination with the SCA strategy, which ultimately improves the flexibility of PSO. The main steps of VFSCPSO are as follows:
Step 1: Input the required parameters, such as the particle population size $N$, the search space dimension $D$, the iteration number $K$, the inertia weight $w$, the learning factors $c_1$, $c_2$, $c_3$, the velocity boundaries $v_{max}$, $v_{min}$, the position boundaries $x_{max}$, $x_{min}$, and the SCA constant parameter $a$;
Step 2: Randomly initialize the velocities and positions of the particles (cf. Equation (10) and Equations (2)–(4)), and update the initial individual and global best positions and fitness values via Equations (5) and (6);
Step 3: Update the SCA parameter $s_1$, the inertia weight $w$, and other parameters after entering the iterative loop;
Step 4: Update the velocity of each particle through Equation (10) and check it against the boundary conditions of Equation (3), replacing any out-of-range velocity with the boundary value; update the position via Equation (2) and apply the same check using Equation (4);
Step 5: Calculate the fitness values;
Step 6: Update the individual and global best fitness values and positions according to the conditions in Equations (5) and (6);
Step 7: Update the iteration counter $k$ and related parameters;
Step 8: Check whether the termination condition is reached; if not, return to Step 3;
Step 9: Output the result when the termination condition is reached.
Based on the main steps, the flowchart of VFSCPSO shown in Figure 1 can be obtained.
Finally, the pseudocode of the VFSCPSO algorithm is shown in Algorithm 3.
Algorithm 3 Velocity Four Sine Cosine Particle Swarm Optimization
Input: Initialize population $N$, dimension $D$, iteration $K$, learning factors $c_1$, $c_2$, $c_3$, inertia weight $w$, velocity boundaries $v_{max}$, $v_{min}$, position boundaries $x_{max}$, $x_{min}$, constant parameter $a$
Output: Optimal fitness value
1: Randomly generate the initial population via Equations (4)–(10), initializing the velocity $v_{id}$ and position $x_{id}$ of each particle
2: Calculate the fitness values
3: Generate the individual and global best positions $pbest_i$ and $gbest$ by Equations (5) and (6)
4: while $k < K$ do
5:     Update parameter $s_1$ and inertia weight $w$
6:     for $i = 1$ to $N$ do
7:         Update random numbers $r_1$, $r_2$, $r_3$
8:         Calculate $v_{id}$ by Equations (3) and (10)
9:         Calculate $x_{id}$ by Equations (2) and (4)
10:        Update $pbest_i$ and $gbest$ by Equations (5) and (6)
11:    end for
12:    $k = k + 1$
13: end while
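Putting the pieces together, the following Python sketch mirrors Algorithm 3: the PSO velocity update of Equation (1) is augmented with the SCA strategy $S$ of Equation (9), yielding Equation (10). The linearly decreasing inertia weight and all parameter values are placeholder assumptions, not the exact experimental settings.

import numpy as np

def vfscpso(f, dim, n=30, iters=1000, w_max=0.9, w_min=0.4,
            c1=2.0, c2=2.0, c3=1.0, a=2.0,
            x_bound=(-100.0, 100.0), v_bound=(-1.0, 1.0)):
    """VFSCPSO sketch: Eq. (10) velocity update with the SCA term of Eq. (9)."""
    rng = np.random.default_rng()
    x = rng.uniform(*x_bound, (n, dim))
    v = rng.uniform(*v_bound, (n, dim))
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest_f.argmin()
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for k in range(iters):
        s1 = a - a * k / iters                          # Eq. (8)
        w = w_max - (w_max - w_min) * k / iters         # decreasing inertia weight
        r1, r2, r3 = (rng.random((n, dim)) for _ in range(3))
        s2 = rng.uniform(0.0, 2.0 * np.pi, (n, dim))
        s3 = rng.uniform(0.0, 2.0, (n, dim))
        s4 = rng.random((n, dim))
        trig = np.where(s4 < 0.5, np.sin(s2), np.cos(s2))
        S = c3 * r3 * s1 * trig * np.abs(s3 * gbest - x)                # Eq. (9)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x) + S   # Eq. (10)
        v = np.clip(v, *v_bound)                                        # Eq. (3)
        x = np.clip(x + v, *x_bound)                                    # Eqs. (2), (4)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f                                           # Eq. (5)
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest_f.argmin()                                            # Eq. (6)
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f

Note that, consistent with the complexity analysis in Section 3.3, the extra work per particle and iteration is a constant number of O(D) vector operations, so the sketch retains the O(T·N·D) cost of standard PSO.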

3.3. Complexity Discussion

In this subsection, we delve into the computational complexity of VFSCPSO and compare it with those of the standard PSO and SCA. We use $O$ to denote an asymptotic upper bound on the time complexity. The population size of VFSCPSO is set to $N$, the dimension of the optimization problem is $D$, the maximum number of iterations is $T$, and the time required to compute the fitness function is $f(D)$. The complexity of VFSCPSO is calculated as follows:
Step 1: The initialization phase takes time to set the parameters of the algorithm; the time complexity of this phase is $O(1)$.
Step 2: It takes time $O(N \cdot D)$ to randomly generate the positions and velocities of the population's particles, time $O(N \cdot f(D))$ to compute the fitness function values, and time $O(N)$ to find and record the best individual, so the time complexity of this phase is $O(N \cdot D + N \cdot f(D) + N)$.
Step 3: It takes a constant time $t_2$ to record the initialization values and initialize the SCA adaptive parameter $s_1$, the inertia weight $w$, and other parameters; the time complexity of this phase is $O(1)$.
Step 4: The optimization takes the same total time $O(N \cdot D)$ as PSO, so the time complexity of this phase is $O(N \cdot D)$.
Steps 5–9: The computation, update, judgment, and output take a constant time $t_3$, so the time complexity is $O(1)$.
To sum up, the computational complexity of VFSCPSO can be expressed as
$$\begin{cases} T \cdot O\left( 1 + (D + f(D) + 1) \cdot N + 1 + D \cdot N + 1 \right), & s_4 < 0.5 \\ T \cdot O\left( 1 + (D + f(D) + 1) \cdot N + 1 + D \cdot N + 1 \right), & s_4 \ge 0.5 \end{cases} \quad (11)$$
The random number $s_4$ ensures that the sine and cosine branches are selected with approximately equal probability, and the upper and lower cases of Equation (11) have the same complexity, so the time complexity of VFSCPSO is $O(T \cdot N \cdot D)$. For an objective comparison, the time complexity of PSO and SCA is likewise $O(T \cdot N \cdot D)$; the complexity of VFSCPSO is therefore of the same order of magnitude as that of PSO and SCA. In this paper, we verify the practicability of VFSCPSO through experiments, and VFSCPSO converges to higher-quality values faster, which gives it greater application value in practice.

4. Experiment and Analysis

4.1. Experimental Settings

The experimental environment was an Intel(R) Core(TM) i7-10700 CPU @ 2.90 GHz, and the software environment was MATLAB 2021b. To ensure the comparative effectiveness and fairness of the experiments, we selected the IEEE Congress on Evolutionary Computation 2005 (CEC2005) [46] test suite for comprehensive validation. The CEC2005 test suite was developed for the optimization-algorithm competition of that congress; it provides 23 benchmark functions that compare and evaluate the performance and efficiency of different algorithms, revealing their strengths and weaknesses on different optimization problems. In summary, the CEC2005 test suite is an important resource for evaluating the performance of optimization algorithms, and by testing and comparing algorithms on these functions, researchers can gain a more comprehensive understanding of the strengths and limitations of different algorithms. Details of the CEC2005 test suite are presented in Table 1 and Appendix A.
To make the evaluation of VFSCPSO more convincing, the comparison experiments cover a total of 12 intelligent algorithms: the classical algorithms GA [1] and DE [2]; the component algorithms PSO [5] and SCA [11], included to verify the validity of the hybrid algorithm VFSCPSO; and the novel, well-performing PSO variants AWPSO [21] and SCDLPSO [23], selected to verify the superiority of the hybrid algorithm. AWPSO was published in IEEE TCYB, and AWPSO and SCDLPSO are representative PSO variants; these algorithms combine high influence, convergence accuracy, overall performance, and breadth of application with low reproduction error. In addition, the newer, well-performing algorithms SFLA [6] and BBO [4], and the advanced algorithms MA [8], POA [9], DO [3], and OMA [13], proposed between 2020 and 2023, are included to verify the advancement of VFSCPSO; their specific parameter settings are taken from the original studies. The parameter values of VFSCPSO follow the settings in Section 3. Meanwhile, the evaluation indexes are the best, mean, worst, and standard deviation (std), with the best results shown in bold. Finally, the performance of the proposed VFSCPSO is further demonstrated by the Friedman test.
The Friedman test is a simple, flexible, and widely applicable statistical method, especially suited to situations where three or more related samples need to be compared. It is also effective for data that do not follow a normal distribution or are skewed. Compared with parametric tests, the Friedman test requires relatively few samples and can produce reliable results even with small sample sizes. It is one of the most commonly used methods in experimental design and data analysis. A detailed tutorial on using this nonparametric statistical test to compare the strengths and weaknesses of intelligent algorithms can be found in [47]. The Friedman test proceeds as follows:
Step 1: Gather observed results for each algorithm/problem pair.
Step 2: For each problem i, rank values from 1 (best result) to k (worst result). Denote these ranks as r i j ( 1 j k ) .
Step 3: For each algorithm $j$, average the ranks obtained over all problems to obtain the final rank $R_j = \frac{1}{n} \sum_i r_i^j$; a minimal computational sketch of these steps follows.
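The following sketch computes Steps 1–3, assuming SciPy's rankdata for the per-problem ranking; the result matrix is hypothetical illustrative data, not values from this paper.

import numpy as np
from scipy.stats import rankdata

def friedman_ranks(results):
    """Average Friedman ranks. results[i, j] is the outcome of algorithm j
    on problem i (smaller is better, as in minimization benchmarks)."""
    ranks = np.apply_along_axis(rankdata, 1, results)  # rank 1 = best per problem
    return ranks.mean(axis=0)                          # R_j = (1/n) * sum_i r_i^j

# Hypothetical results: 4 problems x 3 algorithms
res = np.array([[1e-3, 2e-2, 5e-1],
                [4e-5, 3e-4, 2e-3],
                [7e-2, 1e-2, 9e-2],
                [2e-6, 8e-6, 1e-5]])
print(friedman_ranks(res))   # lower average rank = better algorithm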
Reasonable experimental design, cross-validation, and evaluation methods are important for verifying the overall performance of the hybrid algorithm, so a series of numerical experiments is carried out, along with a detailed analysis of the results. The maximum number of function evaluations (MNFEs) is used as the termination condition for a more accurate comparison of algorithmic performance. Compared with the 12 comparison algorithms, VFSCPSO requires no additional function evaluations, so the same number of iterations implies the same MNFEs; the maximum number of iterations is therefore set to 1000 as the termination condition. To avoid the influence of chance on the results, each algorithm was run independently 30 times on each benchmark function. A short harness expressing this protocol follows.
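The sketch below assumes the vfscpso function given earlier and a hypothetical sphere benchmark, and records the best, mean, worst, and std of the final fitness over 30 independent runs.

import numpy as np

def evaluate(algorithm, benchmarks, dim, runs=30, iters=1000):
    """Run `algorithm` independently `runs` times per benchmark and report
    best / mean / worst / std of the final fitness values."""
    report = {}
    for name, f in benchmarks.items():
        finals = np.array([algorithm(f, dim, iters=iters)[1] for _ in range(runs)])
        report[name] = dict(best=finals.min(), mean=finals.mean(),
                            worst=finals.max(), std=finals.std())
    return report

# Hypothetical usage with the vfscpso sketch and a sphere function
sphere = lambda x: float(np.sum(x ** 2))
print(evaluate(vfscpso, {"f1_sphere": sphere}, dim=30))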

4.2. Experimental Results and Analysis

4.2.1. Performance of Algorithms in Standard Environment

This series of experiments aims to assess the efficacy of VFSCPSO by conducting a comparative analysis with 11 other algorithms. The evaluation includes the determination of best, mean, worst, and std values. The corresponding experimental and test results are presented in Table 2, Table 3, Table 4 and Table 5.
Based on the findings presented in Table 2 and Table 3, it is evident that GA demonstrates suboptimal performance across the benchmark function tests when compared to the other algorithms. GA exhibits a notably high best value and a relatively low worst value, especially on functions such as $f_1$ and $f_2$, showing a significant performance gap relative to the alternative algorithms; it also displays a relatively large std, with substantial fluctuations in performance. DE exhibits commendable performance on some functions, with low fitness values on functions such as $f_6$, $f_8$, and $f_{15}$–$f_{20}$; however, its performance is average on the remaining functions, suggesting that DE is better suited to optimization problems with fewer dimensions. PSO performs moderately well across most functions, with its best, mean, and worst values falling in the middle range compared to the other algorithms, and performs relatively consistently, with a small std. SCA also performs moderately well on optimization problems with few dimensions, particularly excelling on the single-peak function $f_1$ and the multi-peak function $f_6$; however, the larger std of SCA indicates lower stability. Considering the convergence accuracy and stability of PSO and SCA, their effective combination could yield a hybrid algorithm with enhanced performance. AWPSO performs well across most function tests, significantly improving on the performance of PSO and inheriting its advantage of higher stability, with a smaller std value. SCDLPSO achieves relatively good results on functions such as $f_1$ and $f_2$, displaying low fitness values; however, SCDLPSO exhibits a larger std value than AWPSO on functions such as $f_3$ and $f_8$, indicating higher performance fluctuation. While SCDLPSO enhances the accuracy of PSO, it compromises stability. VFSCPSO attains superior best, mean, and worst values on functions such as $f_1$, $f_2$, $f_6$, and $f_{14}$; it outperforms the other compared algorithms, with a relatively small std value and high stability. In summary, each comparison algorithm performs differently on different functions, and no single algorithm excels in all scenarios. Across this comprehensive series of experiments, VFSCPSO emerges as a better solution than the other algorithms: it excels on both single-peak and multi-peak functions and shows superior global search ability and stability. While it achieves favorable results on $f_{21}$–$f_{23}$, its performance on the other fixed-dimensional functions is relatively poor, possibly because of their low dimensionality; VFSCPSO may be better suited to high-dimensional optimization problems. The Friedman test results in Table 3 show that VFSCPSO has an average rank of 2.60 on the 23 tested functions and ranks first overall, indicating the best overall performance. GA has an average rank of 5.69 and ranks seventh overall, performing worst in this group. DE has an average rank of 2.65, indicating that DE also performs well but is not the best algorithm. PSO is ranked sixth overall, with its average ranking near the back of the pack, which indicates that VFSCPSO improves substantially on the performance of the underlying PSO. SCA and SCDLPSO are tied for fourth overall, with average rankings in the middle of the pack, indicating that they perform between VFSCPSO and GA. AWPSO ranks third overall, in the middle of the average rankings, suggesting a relatively good overall performance.
The mean convergence curve is produced based on the results of 30 optimizations of the seven algorithms on the 23 benchmark functions in the CEC2005 test suite, and is used to compare the convergence performance of the algorithms, as shown in Figure 2.
From Figure 2 it is evident that each algorithm displays substantial differences on $f_1$–$f_{13}$, showing their distinct advantages. In contrast, for $f_{14}$–$f_{23}$, each comparison algorithm experiences rapid declines during the initial phase and converges similarly under the iterative termination conditions. Notably, exceptions include $f_{15}$, $f_{21}$, and $f_{23}$, where the algorithms exhibit minimal differences. This can be attributed to $f_{14}$–$f_{23}$ being fixed-dimensional functions, characterized by relatively lower difficulty. The majority of the compared algorithms effectively solve simpler optimization problems. GA and PSO show a slower decreasing trend and lack competitive superiority. DE performs relatively well on $f_6$, $f_8$, and $f_{10}$–$f_{13}$, as corroborated by Table 2 and Table 3, where it excels on fixed-dimensional functions. However, Figure 2 reveals that the performance of DE fails to distinguish itself significantly from the other algorithms, and its advantages are less pronounced compared to VFSCPSO. SCA exhibits diverse performance across different functions; for instance, it performs well on $f_2$ but comparatively worse on $f_3$. This suggests that the effectiveness of SCA varies, with a relatively slower descent velocity observed on the fixed-dimensional functions $f_{20}$–$f_{23}$, resulting in poorer performance. The variants of PSO, namely, AWPSO and SCDLPSO, surpass classical algorithms such as GA and DE, as well as their component algorithms PSO and SCA, in the majority of optimization search processes. Notably, both algorithm variants demonstrate faster descent velocities on $f_3$ and $f_4$, with relatively small performance differences on $f_5$ and $f_{12}$. This indicates that both improved algorithms exhibit enhanced competitiveness.
Based on the insights derived from the experimental results detailed in Table 4 and Table 5, diverse algorithmic performances are observed. SFLA demonstrates moderate performance across all metrics, displaying an average performance on functions $f_1$–$f_5$ without a discernible advantage. BBO exhibits suboptimal performance on most functions, except for $f_8$, where it outperforms the other algorithms with notably low values for best, mean, and worst, coupled with a relatively low std, indicating enhanced stability. BBO appears more suitable for addressing complex optimization problems. MA performs poorly across all metrics, particularly struggling on $f_1$ and $f_6$. It exhibits weaker performance, a relatively larger std, and poorer stability, suggesting that MA may be less versatile and better suited to a specific class of optimization problems. POA and DO display comparable performance, with results on $f_2$ featuring relatively low best, mean, and worst values. Additionally, both algorithms perform well on $f_6$ and $f_7$, with low mean values. Normal performance is observed on $f_1$, $f_3$, and $f_4$, with higher mean values. POA exhibits relatively higher stability and proves applicable to various optimization problems, while DO is slightly less stable. OMA performs moderately well on most functions, with relatively high best and mean values along with low worst values. Although its overall performance is suboptimal on $f_1$ and $f_2$, it excels on $f_{14}$–$f_{23}$, showing its suitability for lower-dimensional optimization problems. VFSCPSO stands out with excellent performance on most functions, featuring low values for best, mean, worst, and std. Notably, it excels on multi-peak functions such as $f_9$–$f_{11}$, demonstrating superior convergence accuracy and stability compared to the other algorithms. The Friedman test results in Table 5 show that VFSCPSO has an average rank of 2.47 and ranks first in the overall ranking; the overall performance of the VFSCPSO algorithm remains optimal. BBO has an average rank of 4.65 and ranks sixth in the overall ranking, which is poor in this group of environments. MA has an average rank of 7, which suggests that MA has poor overall performance and is likely to be less efficient in practical applications. POA, DO, and OMA are ranked in the middle of the overall ranking, and their average ranks are also in the middle, indicating that the performance of the novel algorithms is more prominent. In summary, VFSCPSO excels, especially on multi-peak functions. The performance of the other algorithms varies significantly across different objective functions, emphasizing the problem-specific nature of optimization algorithms. The selection of algorithms should consider the characteristics and applicability of each algorithm to specific problem types.
Mean convergence diagrams were produced based on the results of 30 optimizations of the seven algorithms on the 23 benchmark functions on the CEC2005 test suite. These were used to compare the convergence performance of the algorithms, and are shown in Figure 3.
In Figure 3, the convergence curves of VFSCPSO on the single-peak functions $f_1$, $f_2$, $f_4$, $f_5$, and $f_7$ and the multi-peak functions $f_9$–$f_{11}$ demonstrate a notably rapid descent, highlighting a distinct advantage over other comparative algorithms. Moreover, the convergence velocity of DO surpasses VFSCPSO on functions $f_3$, $f_6$, $f_{12}$, and $f_{13}$, implying its competitiveness in these specific contexts. Remarkably, POA exhibits swift convergence on $f_6$. In addressing the intricate optimization challenge posed by $f_8$, both VFSCPSO and BBO manifest robust competitiveness, indicating their suitability for solving complex optimization problems. Nevertheless, it is recognized that no algorithm can universally dominate all problem instances. Notably, VFSCPSO exhibits relative weakness on $f_{15}$ and $f_{20}$. Conversely, on $f_{22}$ and $f_{23}$, VFSCPSO demonstrates conspicuous advantages, suggesting its applicability not only to demanding optimization scenarios like $f_8$ but also to simpler cases. The overall performance of VFSCPSO is superior across various problem complexities. In contrast, MA consistently grapples with falling into local optima across all convergence curves, revealing its inferior competitiveness compared to other algorithms.

4.2.2. Performance of Algorithms in High-Dimensional Environment

This series of experiments verifies the performance of VFSCPSO in a high-dimensional environment. In real life, addressing real optimization problems places heightened demands on algorithm performance, necessitating swift convergence to tackle high-dimensional challenges efficiently within predefined time constraints. The ensuing analysis includes the computation and documentation of the mean and std derived from this experimental set. The detailed experimental outcomes are encapsulated in Table 6, Table 7, Table 8, Table 9 and Table 10, while the corresponding Friedman test results are presented in Table 8 and Table 11.
Based on the experimental results from Table 6 and Table 7, the evaluation across 50, 100, and 500 dimensions reveals distinctive algorithmic performances. GA demonstrates relatively good performance on $f_8$ but exhibits poorer efficacy on other functions. SCA displays superior performance on $f_2$, $f_6$, $f_7$, $f_9$, and $f_{11}$. Particularly noteworthy is its excellence on $f_6$, which is inherited by VFSCPSO. DE holds a clear advantage on $f_6$, $f_8$, and $f_{12}$; however, its overall performance is slightly lower compared to SCA. The introduction of the sine–cosine strategy in SCA contributes significantly to this advantage. AWPSO performs well on $f_3$, $f_4$, $f_7$, and $f_{13}$, demonstrating better overall performance and remarkable algorithmic stability. SCDLPSO excels on $f_3$ and performs relatively well on $f_1$, indicating its suitability for addressing single-peak problems. In terms of generality, AWPSO outperforms SCDLPSO. VFSCPSO exhibits a lower mean across different dimensions and relatively poorer performance on $f_{13}$. However, it outperforms the other comparison algorithms, especially on $f_1$, $f_4$, $f_9$, and $f_{11}$, with significantly better performance and a lower std, indicating superior global search ability and stability. Despite a higher average ranking in higher dimensions, the removal of fixed-dimensional functions from the experiment affirms VFSCPSO's suitability for complex optimization problems in higher dimensions. AWPSO and SCDLPSO exhibit better performance on functions such as $f_3$ and $f_4$ compared to GA and DE, thereby demonstrating specific advantages and the efficacy of the algorithm variants. From the Friedman test results in Table 8, GA and PSO still receive lower rankings. The competitiveness of DE diminishes compared to SCA after the dimensionality increase, suggesting that SCA is more suitable for solving high-dimensional optimization problems. The two algorithm variants, AWPSO and SCDLPSO, exhibit comparable performances with competitive outcomes. In summary, VFSCPSO proves effective, particularly on complex optimization problems of higher dimensions, demonstrating improved global search ability and stability.
Mean convergence diagrams were produced based on the results of 30 optimizations of the seven algorithms on the 12 benchmark functions on the CEC2005 test suite in high-dimensional environments, which are used to compare the algorithms’ convergence performance. They are shown in Figure 4.
As can be seen from Figure 4, compared to the standard CEC2005 test environment, the convergence rates of each algorithm remain relatively stable when elevated to 100 dimensions. Specific observations can be made for individual functions. On $f_1$, the search accuracy of SCA diminishes, falling short of the performance achieved by SCDLPSO after the dimensionality increase. For $f_2$ and $f_7$, VFSCPSO demonstrates improved search accuracy despite the higher dimensionality. On $f_3$, while the search accuracy of the variant AWPSO surpasses that of SCDLPSO, the overall convergence accuracy decreases compared to the experiment with 30 dimensions. On $f_4$, all algorithms, excluding VFSCPSO, experience a decline in search accuracy. For $f_5$, SCDLPSO exhibits reduced capability to escape local optima, indicating its unsuitability for high-dimensional optimization problems. Regarding $f_8$, DE exhibits a degree of competitiveness, but a notable decrease in optimal search on $f_{11}$ is observed. SCA achieves slightly higher search accuracy on $f_9$, indicating that algorithmic search performance is influenced to varying degrees by problem characteristics and changes in dimensionality. There is no significant change in the convergence velocity of each comparison algorithm on $f_{10}$ and $f_{12}$. VFSCPSO, notably, does not show significant accuracy degradation; instead, an improvement in the optimization search of individual functions is observed. This suggests that VFSCPSO is well suited for solving optimization problems with higher dimensionality.
According to the experimental results presented in Table 9 and Table 10, the overall performance of SFLA is generally satisfactory, demonstrating relative stability, particularly on $f_4$. MA exhibits slightly inferior performance compared to SFLA across various functions. BBO performs suboptimally on single-peak functions such as $f_1$, $f_3$, and $f_5$, but excels on multi-peak functions, particularly achieving the best performance on $f_8$. DO outperforms the others on $f_3$, $f_6$, $f_{12}$, and $f_{13}$, demonstrating competitiveness but still falling short compared to VFSCPSO. OMA displays strong competitiveness on $f_4$, but its overall performance is subpar in the other tests. POA stands out, with strong competitiveness on $f_6$ and commendable performance on $f_1$. In contrast, VFSCPSO consistently attains optimal performance across all test functions and dimensions, characterized by a small std, indicating remarkable stability. In summary, algorithmic performance varies significantly across different test functions and dimensions, demonstrating both successes and challenges that necessitate problem-specific consideration. The Friedman test results in Table 11 show that MA continues to exhibit a lack of overall advantage, suggesting its potential suitability for specific problem types rather than high-dimensional optimization. The advantage of POA becomes more pronounced, elevating its ranking from third to second. DO and BBO struggle to adapt to the high-dimensional optimization environment, dropping from second to fifth and from third to sixth in the ranking, respectively. The change in OMA's ranking is not substantial. VFSCPSO consistently maintains its first rank throughout the entire series of experiments, clearly distinguishing itself from the other algorithms and proving highly suitable for high-dimensional optimization problems.
Mean convergence curves were produced from the results of 30 independent runs of the seven algorithms on the 12 benchmark functions of the CEC2005 test suite in a high-dimensional environment; these curves, shown in Figure 5, are used to compare the algorithms’ convergence performance.
As depicted in Figure 5, the comparison of this series of experiments with 30 dimensions reveals notable alterations in the search velocity and accuracy of SFLA across various functions, such as f 1 , f 5 , f 6 , f 7 , f 8 , and f 11 . Particularly on f 5 , f 6 , and f 12 , the decrease in performance is conspicuous. In contrast, VFSCPSO exhibits a substantial improvement in convergence accuracy on f 2 , whereas other algorithms show minimal variations. While slight performance declines are observed on f 12 , the search velocity remains relatively stable for each algorithm, and no significant changes occur on individual functions, such as f 3 . DO, however, undergoes significant performance changes on f 4 . In summary, SFLA experiences more pronounced alterations in convergence velocity and accuracy with rising dimensions. In contrast, for other algorithms, convergence rates do not exhibit significant changes when progressing to 100 dimensions, showing only marginal decreases. VFSCPSO consistently outperforms in all search aspects, highlighting its superior convergence performance in more complex optimization problems.

4.2.3. Performance of Algorithms in Large-Scale Environment

This series of experiments verifies the performance of VFSCPSO in a large-scale environment. The mean and std of this set of experiments are calculated and recorded. Table 12, Table 13, Table 15, and Table 16 show the experimental results, while Table 14 and Table 17 show the Friedman test results.
The experimental results presented in Table 12 and Table 13 show substantial variations in algorithmic performance across functions and dimensions, spanning f 1 to f 13 . The algorithm ranking on f 1 shows no significant change as the number of dimensions changes. At 1000 dimensions, VFSCPSO emerges as the best performer, with the smallest mean and std, indicative of enhanced stability; GA excels in mean performance, while both GA and PSO otherwise exhibit relatively poorer results. The results for 2000 dimensions are similar to those for 1000 dimensions, but the differences are more pronounced at higher numbers of dimensions. At 5000 dimensions, VFSCPSO performs well on the mean, alongside competitive performances by SCDLPSO and SCA, and all algorithms show a relatively small std on f 1 at this number of dimensions. While the compared algorithms struggle with dimensionality, VFSCPSO maintains its robust performance, as is particularly evident on f 2 at 5000 dimensions. On both f 1 and f 2 , SCA and GA perform relatively poorly and show instability. On f 3 , VFSCPSO has no competitive advantage, and the best performer is AWPSO. VFSCPSO pulls away from the other comparison algorithms on f 4 , with AWPSO performing better on the mean and VFSCPSO performing best at 1000 and 2000 dimensions. On f 5 , DE and AWPSO consistently perform admirably on the mean, with relatively minor stds across all dimensions. Three algorithms (SCA, DE, and VFSCPSO) can still converge to the global optimum on f 6 . Additionally, DE and AWPSO lead on f 7 in mean performance, while PSO and SCA consistently deliver commendable mean values on f 8 . Across the functions f 1 – f 2 , f 4 – f 7 , and f 9 – f 11 , VFSCPSO consistently attains optimal results, showing stability across all dimensions. DE performs well on the mean, with a relatively small std on some functions. On f 10 and f 11 , DE performs well on the mean at 1000 and 2000 dimensions, while AWPSO has a relatively small mean at 5000 dimensions. SCA and AWPSO are more competitive on f 12 and f 13 . AWPSO, SCDLPSO, and VFSCPSO perform well and remain relatively stable across several functions and dimensions, and can serve as alternative algorithms for large-scale optimization problems. PSO is affected by dimensionality in some cases. The Friedman test results in Table 14 show that VFSCPSO consistently achieves optimal outcomes, similar to the results produced in the 50-, 100-, and 500-dimensional settings.
Mean convergence curves were produced from the results of 30 independent runs of the seven algorithms on the 12 benchmark functions of the CEC2005 test suite in a large-scale environment; these curves, shown in Figure 6, are used to compare the algorithms’ convergence performance.
As can be seen from Figure 6, the impact of the increasing number of dimensions on convergence accuracy is evident. Notably, as the dimensionality rises, the convergence accuracy of SCA on f 1 lags markedly behind that of AWPSO, suggesting that AWPSO may be better suited to large-scale optimization problems. The convergence accuracy on f 3 decreases collectively for all algorithms, with SCA and PSO showing the most significant declines. Certain differences are observable between the algorithmic variants, with VFSCPSO showing relatively superior results, although the differences are not obvious. While the convergence accuracy on f 4 decreases for all algorithms, the convergence velocity remains relatively stable. Noteworthy is the pronounced decrease in the search rate of PSO on f 5 , while f 6 and f 9 – f 13 exhibit minimal changes. On f 7 , the performance of PSO drops directly to the lowest rank, with diminished convergence velocity and accuracy compared to low-dimensional searches; a similarly significant decline is observed for SCDLPSO. On f 8 , VFSCPSO demonstrates heightened competitiveness with increasing dimensionality. In summary, VFSCPSO excels in overall performance and maintains significant competitiveness in addressing large-scale problems.
The experimental results in Table 15 and Table 16 demonstrate the better performance of VFSCPSO across all dimensions on f 1 , with an advantage in the mean and std metrics and good stability. POA demonstrates an aptitude for handling f 1 , owing to its consistent performance even as the dimensionality rises. In contrast, the other algorithms decline in performance as the number of dimensions increases, reaffirming VFSCPSO’s sustained leadership. On f 2 , all algorithms except VFSCPSO fail to reduce the objective value, highlighting the unique effectiveness of VFSCPSO on this function. On f 3 , DO emerges as the top performer on the mean, indicating its suitability for addressing f 3 across diverse dimensions; VFSCPSO not only remains competitive but also maintains superior performance stability, whereas MA lags in mean performance. The results on f 5 , f 8 , f 9 , f 10 , f 11 , and f 12 underscore VFSCPSO’s consistent superiority in the mean and standard deviation metrics across varying dimensions, and VFSCPSO exhibits particular aptitude for handling f 5 . On f 6 and f 13 , DO excels in both metrics, showing its adaptability to these functions, while VFSCPSO maintains competitive performance with enhanced stability. The algorithms perform differently in different environments: SFLA, BBO, MA, DO, OMA, and POA perform better in some specific contexts, but VFSCPSO is relatively more competitive, especially in terms of mean and std. The selection of the most suitable algorithm still needs to consider the characteristics of the specific problem and the practical application requirements. The Friedman test results in Table 17 show that VFSCPSO has the first overall ranking under all dimensions and performs best. DO has the next best overall ranking, coming in second under 2000 and 5000 dimensions. BBO and POA also perform well in the overall rankings, coming in third and fourth. VFSCPSO has the highest counts under all dimensions, indicating that it achieves the best performance in every dimension. DO does not achieve the best performance at 1000 dimensions but performs well at all other dimensions. The other algorithms fail to achieve the best performance in some dimensions; overall, VFSCPSO and DO perform relatively well across all dimensions.
Mean convergence curves were produced from the results of 30 independent runs of the seven algorithms on the 12 benchmark functions of the CEC2005 test suite in a large-scale environment to compare the algorithms’ convergence performance, as shown in Figure 7. Since no algorithm other than VFSCPSO achieves any decrease on f 2 , the convergence curve for f 2 is not shown.
From Figure 7, it can be seen that, as the dimensionality increases, the convergence trends of the algorithms on f 1 – f 5 are almost the same as in 100 dimensions. On f 6 , although some algorithms are still able to reach the global optimum, DO and POA require more iterations to find it, whereas the number of iterations required by VFSCPSO is almost unchanged. SFLA’s convergence trend changes on some functions: its accuracy on f 3 , f 5 – f 7 , and f 10 – f 12 shows a decreasing trend, while the other algorithms do not change significantly, and the decline in its convergence accuracy on f 13 is more obvious. DO also shows some competitiveness: its convergence on f 6 is fast and requires the fewest iterations to reach the global optimum, and although it does not converge to the global optimum on f 13 , it achieves the best result among the comparison algorithms. The convergence trend of VFSCPSO is almost the same as in 100 dimensions, and it takes first place in the overall rankings.

4.2.4. Performance of Algorithm in Ultra-Large-Scale Environment

In order to further compare and analyze the performance of hybrid algorithms in solving large-scale optimization problems and to demonstrate the superiority of the proposed hybrid algorithm VFSCPSO, this series of experiments verifies the performance of VFSCPSO in an ultra-large-scale environment with D = 10,000. The best, mean, worst, and std of this series of experiments are calculated and recorded. Table 18 and Table 19 show the experimental results and Friedman test results.
According to the experimental results in Table 18 and Table 19, the twelve comparative algorithms show significant differences in the 10,000-dimensional environment. VFSCPSO achieves the best results on every metric for f 1 , f 2 , f 4 , f 6 , f 7 , f 9 , f 10 , f 11 , and f 12 , which means that VFSCPSO performs well on most of the test functions. It is highly competitive in most cases, can be applied to different environments, shows diversity and adaptability, and has high convergence and stability. On f 2 , only VFSCPSO is able to reduce the objective value, while all comparative algorithms are severely unable to do so; the hybrid model retains good global search capability even when optimization is difficult, so a relatively good solution can still be found. On f 6 , the DE, SCA, DO, POA, and VFSCPSO algorithms are still able to converge to the global optimal solution. The different algorithms show their advantages under different benchmark functions and metrics. PSO and SCA perform well on some functions, but as constituent algorithms they do not perform as well as VFSCPSO in most cases. Even in 10,000 dimensions, VFSCPSO does not show significant changes on the 12 benchmark functions, with small fluctuations in solution accuracy, high stability, and good performance consistency across runs. This further proves that VFSCPSO has excellent comprehensive performance in handling ultra-large-scale optimization problems and can effectively cope with the challenges of large-scale environments. The Friedman test results show that VFSCPSO performs best and can be used to solve large-scale optimization problems. In the Friedman test results in Table 18, GA and PSO are relatively balanced but at a disadvantage, so they may not be applicable to ultra-large-scale problems. DE ranks second overall, similar to the previous experiments, indicating that DE is relatively stable. SCA and SCDLPSO rank in the middle, a reasonable overall performance but not as good as VFSCPSO. AWPSO ranks fifth in both overall rank and average rank, and its overall performance declines faster, indicating that it is less stable. VFSCPSO ranks first in both the overall ranking and the average ranking, and its performance remains excellent. In the Friedman test results in Table 19, SFLA, MA, and OMA have high average ranks and low overall ranks; these algorithms may not be applicable to ultra-large-scale problems. BBO, POA, and OMA show moderate overall performance in the ranking, with no particularly outstanding results, and are relatively evenly matched. DO has an average rank of 2.15 and an overall rank of 2, performing well in the overall ranking. VFSCPSO has an average rank of 1.38 and an overall rank of 1, the highest ranking, indicating excellent performance on all indicators; it is the best-performing algorithm in the overall ranking and the most suitable for solving ultra-large-scale optimization problems. This is attributed to the SCA strategy, which increases the flexibility of the algorithm and keeps it from falling into local optima; the 10,000-dimensional runs reflect the advantages of VFSCPSO even more clearly.
In order to demonstrate the performance of VFSCPSO, the convergence curves of VFSCPSO on the benchmark functions in 100, 500, 1000, 2000, 5000, and 10,000 dimensions were produced. They are shown in Figure 8.
As can be seen from Figure 8, the convergence curves of the hybrid algorithm VFSCPSO remain essentially the same as the number of dimensions increases; the curves do not separate noticeably as the dimensionality continues to rise. Therefore, VFSCPSO has strong search capability and scalability, its performance is largely unaffected by the number of dimensions, and it can effectively solve ultra-large-scale problems.

4.2.5. Application in PID Parameter Tuning

An optimization problem comprises four elements, which are described below using PID parameter tuning as an example.
  • Variables: In optimization problems, variables are the quantities that can be adjusted or changed; they represent the decision variables or parameters of the optimal solution sought by the optimization method. In engineering, science, or economics, the variables can be any adjustable parameters, such as size, velocity, or temperature. In the PID parameter-tuning problem, the variables are usually the parameters of the PID controller, i.e., the proportionality coefficient, the integration time, and the differentiation time. The response characteristics of the PID controller are controlled by tuning these parameters.
  • Objective function: The objective function is the quantity to be maximized or minimized in an optimization problem. It represents the objective of the optimization. The form of the objective function depends on the specific problem; it can represent cost, benefit, efficiency, distance, or any other metric to be optimized. The objective is to bring these performance metrics to their optimal values or to satisfy certain requirements. In PID parameter-tuning problems, the objective function is usually the performance metric of the control system.
  • Constraints: Constraints are conditions or restrictions that must be satisfied for a solution to be considered feasible or acceptable. Constraints define the space of feasible solutions, within which the optimal solution is sought. They usually come from physical limitations, technical limitations, regulations, or other restrictions. In PID parameter-tuning problems, constraints may come from the stability requirements of the control system; there may also be engineering constraints from practical applications, such as the range of PID parameter values or the performance requirements of the control system. The constraints are handled using a penalty-function approach in this paper: by introducing a penalty function, the constraints are incorporated into the objective function, thus transforming the original problem into an unconstrained optimization problem (a minimal code sketch of this transformation is given after this list).
  • Optimization method: An optimization method is a technique used to search for optimal solutions that minimize or maximize an objective function. The selection of an appropriate optimization method depends on the nature of the problem, constraints, objective function, and the availability of computational resources. The optimization method described in Section 1 can be used in the PID parameter-tuning problem, and therefore, will not be repeated here.
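To make the penalty-function idea in the constraints item concrete, the following minimal Python sketch shows how a constrained problem becomes an unconstrained one. The two-variable objective and the box constraints here are purely hypothetical illustrations, not quantities taken from this paper:

```python
# A minimal sketch of the penalty-function transformation: a hypothetical
# constrained problem becomes unconstrained by penalizing violations.

def objective(x):
    # hypothetical performance metric to minimize
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def constraint_violation(x):
    # hypothetical box constraint: both parameters must stay in [0, 10]
    return sum(max(0.0, lo - xi) + max(0.0, xi - hi)
               for xi, (lo, hi) in zip(x, [(0, 10), (0, 10)]))

def penalized_objective(x, penalty_weight=1e6):
    # the optimizer now minimizes this single unconstrained function
    return objective(x) + penalty_weight * constraint_violation(x)

print(penalized_objective([1.0, 0.0]))   # feasible: penalty term is zero
print(penalized_objective([-1.0, 0.0]))  # infeasible: heavily penalized
```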
A clear grasp of these elements is essential for understanding and solving optimization problems. Together they shape the nature of the optimization problem and the possibilities for its solution, so the impact of each element needs to be carefully considered when studying and solving an optimization problem. The structural diagram of PID tuning based on intelligent algorithms is shown in Figure 9.
The main goal of the optimal design of PID controller parameters is to optimize certain performance indicators of the control system. A single error performance index can hardly satisfy the control system’s needs for speed, stability, and robustness simultaneously; therefore, the integral of absolute error (IAE) is used as the fitness function. The IAE criterion takes into account the dynamic nature of the iterative process: it measures the integral over time of the absolute value of the error in the system response, quantifying how well the system tracks a setpoint or target value. The parameters are then optimized using an intelligent algorithm. The IAE is used as the objective function f to satisfy the dynamic-characteristic requirements of the transition process, and the squared term of the control input is added to f to avoid excessive control effort; the final objective function is chosen as Equation (12):
f = \int_{0}^{\infty} \left( \omega_1 \left| e(t) \right| + \omega_2 u^2(t) \right) \mathrm{d}t
In Equation (12), e(t) is the system deviation, i.e., the error between the input and output values, and u^2(t) is the squared input of the PID controller, i.e., the control value. The weights ω1 and ω2 take values in [0, 1]; normally, ω1 = 0.999 and ω2 = 0.001. In addition, a penalty function is necessary to avoid overshooting: once overshooting occurs, the system deviation is added as a term of the optimality index, and Equation (13) is chosen as the objective f that introduces the overshooting term:
f = \int_{0}^{\infty} \left( \omega_1 \left| e(t) \right| + \omega_2 u^2(t) + \omega_3 \left| e(t) \right| \right) \mathrm{d}t, \quad e(t) < 0
In Equation (13), ω3 is also a weight with ω3 ≫ ω1, usually taken as ω3 = 100, and ω3 |e(t)| is the penalty term. By combining the penalty function with the IAE, the control performance of the system can be considered more comprehensively, and PID parameter tuning can be carried out to achieve more stable and accurate control.
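A minimal discrete-time sketch of this objective is given below, assuming the deviation e(t) and control input u(t) have already been sampled from a simulation of the closed loop; the toy signals at the bottom are hypothetical placeholders, not the servo motor model used in this paper:

```python
import numpy as np

def pid_fitness(e, u, dt, w1=0.999, w2=0.001, w3=100.0):
    """Discrete approximation of Equations (12)/(13): integral of weighted
    absolute error plus squared control effort, with an extra w3*|e| penalty
    wherever e(t) < 0 (i.e., where the response overshoots the setpoint)."""
    integrand = w1 * np.abs(e) + w2 * u ** 2
    integrand = integrand + np.where(e < 0, w3 * np.abs(e), 0.0)
    return float(np.sum(integrand) * dt)

# Hypothetical usage with stand-in signals sampled at dt = 0.01 s:
t = np.arange(0, 5, 0.01)
e = np.exp(-t) * np.cos(3 * t)   # stand-in deviation signal
u = 2.0 * np.exp(-t)             # stand-in control signal
print(pid_fitness(e, u, dt=0.01))
```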
This paper introduces the basic concepts of servo motor systems and PID controllers and describes the basic model of a servo motor system. System performance can be significantly improved by selecting appropriate PID controller parameters. A PID controller is a feedback control algorithm commonly used in control systems, and its parameters need to be tuned in order to optimize system performance. VFSCPSO has good adaptability in optimization problems and can be used to optimize the PID control parameters of a servo motor. In this experiment, the maximum number of iterations is set to 20, the population size to 30, and the variable dimension to 3. The other parameters of VFSCPSO are set in accordance with the numerical experiments, and the parameters of the other algorithms follow their original studies.
The parameter tuning of the PID controller is a key part of control system design, which usually requires determining the values of K p , K i , and K d according to the characteristics of the controlled process. Since the PID controller was proposed, many parameter-tuning methods with excellent performance have emerged, but the pursuit of higher effectiveness continues. VFSCPSO is able to address the shortcomings of poor adaptivity, complex control processes, and insufficient accuracy in PID control parameter optimization more effectively than traditional methods. In addition to the fitness value f min , the following metrics are selected:
The overshoot σ is the difference between the peak of the output step response y m and the steady-state value y s , expressed as a percentage of the steady-state value: σ% = (y m − y s ) / y s × 100%. The adjustment time t s is the time at which the output step response enters the steady state and thereafter remains within the specified error band around y s , i.e., within ±Δ of the steady-state value in relative terms, where Δ = 0.02 or Δ = 0.05 is generally used.
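These two metrics can be computed directly from a sampled step response. A minimal sketch, assuming a uniformly sampled response y(t) and using the definitions above (the second-order-like test signal is a hypothetical stand-in):

```python
import numpy as np

def step_metrics(t, y, y_s, delta=0.02):
    """Overshoot sigma (%) and settling time t_s from a sampled step response:
    sigma% = (y_m - y_s) / y_s * 100, and t_s is the first time after which y
    stays inside the relative error band y_s * (1 +/- delta)."""
    sigma = (np.max(y) - y_s) / y_s * 100.0
    inside = np.abs(y - y_s) <= delta * y_s
    outside_idx = np.where(~inside)[0]          # samples still outside the band
    if outside_idx.size == 0:
        return sigma, t[0]                      # always inside the band
    return sigma, t[min(outside_idx[-1] + 1, len(t) - 1)]

# Hypothetical response settling to y_s = 1:
t = np.linspace(0, 10, 2001)
y = 1 - np.exp(-t) * np.cos(2 * t)
print(step_metrics(t, y, y_s=1.0))
```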
In order to objectively evaluate the effect of using these five intelligent algorithms in optimizing the PID control system of direct current servo motors, the following three evaluation metrics are selected based on the fitness function: best fitness value f m i n , overshoot σ , and tuning time t s ( 2 % ) . The experimental results are shown in Table 20.
From the simulation results in Table 20 and Figure 10, it can be seen that the PID controller optimized by VFSCPSO fully integrates the advantages and characteristics of proportional, integral, and differential control. The intelligent algorithms improve the performance of the direct current servo motor system considerably, making the step response of the whole system optimal in terms of stability, velocity, and accuracy. Adjusting K d helps accelerate the response of the system and increase its stability, and adjusting K p can accelerate the response of the system. On the f min metric, the quality of the optimal solution found by the VFSCPSO search is the best, followed by PSO and SCA; FA and GA have relatively high values and perform relatively poorly in finding the optimal parameters. On the σ metric, GA has a smaller overshoot, indicating better control performance, VFSCPSO is second, and FA has a larger overshoot and requires more time to reach stability. On the t s metric, VFSCPSO takes less time to move from the initial state to the steady state, so the system responds faster, while FA has a longer adjustment time. In the comparison experiments of the five algorithms, the response curve of FA is the most unstable, and the response curve of GA is smoother but slower. Meanwhile, the overall performance of VFSCPSO is better than that of its two constituent algorithms, achieving the best overall performance.
As can be seen from the convergence curves in Figure 11, the convergence curve of VFSCPSO decreases faster than those of GA and FA, indicating that VFSCPSO achieves better results than the other intelligent algorithms on this type of problem. SCA starts from worse initial results but performs better in the later stage, which is exactly the opposite of PSO. In summary, VFSCPSO combines the advantages of its two constituent algorithms, with higher accuracy and faster convergence in the PID parameter-tuning problem, indicating that VFSCPSO is highly feasible for practical engineering optimization problems.

4.2.6. Analysis of VFSCPSO’s Exploration and Exploitation

Exploration is the global search capability, which surveys the entire search space; exploitation is the local search capability, which adjusts particle positions within a local neighborhood. In intelligent algorithms, global search and local search each contribute their respective advantages, and algorithms that integrate exploration and exploitation effectively are better at finding excellent solutions. In this paper, exploration and exploitation are quantified using the dimension-wise diversity measurements presented in [48]. This subsection tests the exploration and exploitation capabilities of VFSCPSO. The method uses the following formulas:
\mathrm{Div}_j = \frac{1}{NP} \sum_{i=1}^{NP} \left| \mathrm{median}(j) - x_i(j) \right|, \qquad \mathrm{Div} = \frac{1}{D} \sum_{j=1}^{D} \mathrm{Div}_j
In Equation (14), median(j) denotes the median of dimension j over the population, x i (j) denotes dimension j of individual i, NP is the population size, and D is the number of variables of the optimization problem. A rise in the average per-dimension distance within the population in Equation (14) signifies exploration, whereas a decline denotes exploitation. The percentages of exploration and exploitation can be calculated with the following formulas:
\mathrm{Exploration}\% = \frac{\mathrm{Div}}{\mathrm{Div}_{\max}} \times 100\%, \qquad \mathrm{Exploitation}\% = \frac{\left| \mathrm{Div} - \mathrm{Div}_{\max} \right|}{\mathrm{Div}_{\max}} \times 100\%
In Equation (15), Div max denotes the maximum population diversity reached during the iterative process, and Div is calculated from Equation (14). The exploration and exploitation values obtained from Equation (15) are complementary: they sum to 100%. Figure 12 shows the exploration and exploitation proportions of VFSCPSO on the CEC2005 test suite.
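As a minimal sketch of how Equations (14) and (15) could be evaluated from a recorded run, assuming the swarm positions of every iteration have been stored (the shrinking random swarm below is only a stand-in for real VFSCPSO position histories):

```python
import numpy as np

def exploration_exploitation(history):
    """Dimension-wise diversity of Equations (14)-(15) [48].
    history: iterations x NP x D array of particle positions over one run."""
    medians = np.median(history, axis=1, keepdims=True)   # per-dimension medians
    div = np.abs(medians - history).mean(axis=(1, 2))     # Eq. (14), per iteration
    div_max = div.max()
    exploration = div / div_max * 100.0                   # Eq. (15)
    exploitation = np.abs(div - div_max) / div_max * 100.0
    return exploration, exploitation

# Hypothetical swarm whose diversity decays over 100 iterations:
rng = np.random.default_rng(1)
hist = np.array([rng.normal(0, 1.0 / (k + 1), size=(30, 10)) for k in range(100)])
expl, expt = exploration_exploitation(hist)
print(expl[0], expt[0], expl[-1], expt[-1])
```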
From Figure 12, it can be seen that the exploitation and exploration tendency curves of VFSCPSO have their own characteristics on the different types of CEC2005 benchmark functions, and the iteration at which exploration equals exploitation differs between functions. In the later evolutionary period, the proportion of exploitation eventually reaches about 0.9 and the proportion of exploration about 0.1, indicating that exploitation dominates in the later iterations. This is because the hybrid sine cosine strategy of VFSCPSO allows exploration and exploitation to reach a balanced state in the end. Therefore, VFSCPSO can maintain a balance between exploitation and exploration.

4.2.7. Analysis of VFSCPSO’s Disadvantages

In the realm of algorithms, perfection remains elusive: every algorithm has its disadvantages. Although VFSCPSO exhibits remarkable efficacy in solving high-dimensional optimization problems, it also has some evident shortcomings, which we must scrutinize and acknowledge.
The main limitation of VFSCPSO in this paper is parameter tuning. In Section 3.2, the parameter a is fixed at the constant value 2, following the standard SCA setting. This choice suits most cases, but not all: with different values of a, VFSCPSO may attain different convergence accuracy, which means the best a may differ between problems. Moreover, we did not perform a grid search over a, which may have an exact optimal value.
In addition, VFSCPSO is not uniformly superior to the other strong algorithms. For example, as shown in Table 8, VFSCPSO obtains worse results than AWPSO on individual benchmark functions. The competitiveness of VFSCPSO might therefore be increased by adding other effective strategies.
Finally, the comparisons with other strong algorithms show that VFSCPSO still does not solve all complex problems effectively, which indicates that it cannot search the solution spaces of these problems efficiently. Its convergence performance may therefore be further improvable through other techniques.

5. Conclusions

In this paper, after deeply analyzing the advantages and drawbacks of PSO and SCA, the hybrid algorithm VFSCPSO is proposed. The framework of VFSCPSO integrates the SCA strategy into the original PSO framework, enabling the algorithm to switch flexibly between local and global search and to adapt to both high-dimensional global optimization problems and engineering optimization problems. To demonstrate the advantages of VFSCPSO in large-scale environments, performance comparisons were conducted on the CEC2005 test suite in 50, 100, 500, 1000, 2000, 5000, and 10,000 dimensions. VFSCPSO shows efficient performance in solving large-scale problems, with the capability to solve global optimization problems of up to 10,000 dimensions. To verify the advantages of VFSCPSO in engineering optimization, VFSCPSO and the comparative algorithms were applied to the PID parameter-tuning problem, where the experimental results show that the overall performance of VFSCPSO is the best. With the rapid development of modern society, everyday problems place increasingly high demands on algorithms, and a good algorithm needs to serve human life, which is also our pursuit. Therefore, the next step will be to further study the defects of VFSCPSO, improve its optimization ability, and fully enhance its search performance. In the future, VFSCPSO will be applied to more complex real-world optimization problems, such as the vehicle routing problem, data mining, and environmental monitoring.

Author Contributions

H.W.: investigation, methodology, experiment, writing—original draft. Y.G.: supervision, funding acquisition, writing—review and editing. Y.H.: data curation, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Key Project of Ningxia Natural Science Foundation, Several Swarm Intelligence Algorithms and Their Application, under Grant 2022AAC02043; in part by the 2022 Graduate Innovation Project of North Minzu University, under Grant YCX22192; in part by the National Natural Science Foundation of China, under Grant 11961001 and Grant 61561001; in part by the Construction Project of First-Class Subjects in Ningxia Higher Education, under Grant NXYLXK2017B09; in part by the Major Proprietary Funded Project of North Minzu University, under Grant ZDZX201901; and in part by the Basic Discipline Research Projects supported by Nanjing Securities, under Grant NJZQJCXK202201.

Data Availability Statement

All the data were obtained under the same experimental environment, and all source programs of the compared algorithms were coded according to their original references. We solemnly declare that all data in this paper are true and valid.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. All authors guarantee that the paper is legitimate and belongs to their own scientific research results.

Appendix A. CEC2005 Test Suite of 23 Benchmark Functions

These benchmark functions have different characteristics, covering single-peak functions ( f 1 f 7 ), multi-peak functions ( f 8 f 13 ), and fixed-dimension functions ( f 14 f 23 ).

Appendix A.1. Sphere Function

f_1(x) = \sum_{i=1}^{n} x_i^2
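For concreteness, a minimal NumPy implementation of f1 is shown below as an illustration; the same pattern extends directly to the other benchmark functions in this appendix:

```python
import numpy as np

def sphere(x):
    """f1: sum of squared components; global minimum f(x*) = 0 at x* = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

print(sphere(np.zeros(30)))   # 0.0
print(sphere(np.ones(30)))    # 30.0
```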

Appendix A.2. Schwefel’s Problem 2.22

f_2(x) = \sum_{i=1}^{n} \left| x_i \right| + \prod_{i=1}^{n} \left| x_i \right|

Appendix A.3. Schwefel’s Problem 1.2

f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2

Appendix A.4. Schwefel’s Problem 2.21

f_4(x) = \max_{i} \left\{ \left| x_i \right|,\ 1 \le i \le n \right\}

Appendix A.5. Generalized Rosenbrock’s Function

f_5(x) = \sum_{i=1}^{n-1} \left[ 100 \left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right]

Appendix A.6. Step Function

f_6(x) = \sum_{i=1}^{n} \left( \lfloor x_i + 0.5 \rfloor \right)^2

Appendix A.7. Quartic Function i.e., Noise

f_7(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)

Appendix A.8. Generalized Schwefel’s Problem 2.26

f_8(x) = \sum_{i=1}^{n} -x_i \sin\left( \sqrt{\left| x_i \right|} \right)

Appendix A.9. Generalized Rastrigin’s Function

f_9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos\left( 2\pi x_i \right) + 10 \right]

Appendix A.10. Ackley’s Function

f_{10}(x) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos\left( 2\pi x_i \right) \right) + 20 + e

Appendix A.11. Generalized Griewank’s Function

f_{11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1

Appendix A.12. Generalized Penalized Function 1

f_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin^2\left( \pi y_1 \right) + \sum_{i=1}^{n-1} \left( y_i - 1 \right)^2 \left[ 1 + 10 \sin^2\left( \pi y_{i+1} \right) \right] + \left( y_n - 1 \right)^2 \right\} + \sum_{i=1}^{n} u\left( x_i, 10, 100, 4 \right), \quad y_i = 1 + \frac{x_i + 1}{4}, \quad u\left( x_i, a, k, m \right) = \begin{cases} k \left( x_i - a \right)^m, & x_i > a, \\ 0, & -a \le x_i \le a, \\ k \left( -x_i - a \right)^m, & x_i < -a. \end{cases}

Appendix A.13. Generalized Penalized Function 2

f_{13}(x) = 0.1 \left\{ \sin^2\left( 3\pi x_1 \right) + \sum_{i=1}^{n-1} \left( x_i - 1 \right)^2 \left[ 1 + \sin^2\left( 3\pi x_{i+1} \right) \right] + \left( x_n - 1 \right)^2 \left[ 1 + \sin^2\left( 2\pi x_n \right) \right] \right\} + \sum_{i=1}^{n} u\left( x_i, 5, 100, 4 \right)

Appendix A.14. Shekel’s Foxholes Function

f_{14}(x) = \left[ \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} \left( x_i - a_{ij} \right)^6} \right]^{-1}

Appendix A.15. Kowalik’s Function

f_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 \left( b_i^2 + b_i x_2 \right)}{b_i^2 + b_i x_3 + x_4} \right]^2

Appendix A.16. Six-Hump Camel-Back Function

f_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4

Appendix A.17. Branin Function

f_{17}(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10

Appendix A.18. Goldstein Price Function

f_{18}(x) = \left[ 1 + \left( x_1 + x_2 + 1 \right)^2 \left( 19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2 \right) \right] \left[ 30 + \left( 2 x_1 - 3 x_2 \right)^2 \left( 18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2 \right) \right]

Appendix A.19. Hartman’s Family 1

f_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} \left( x_j - p_{ij} \right)^2 \right)

Appendix A.20. Hartman’s Family 2

f_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} \left( x_j - p_{ij} \right)^2 \right)

Appendix A.21. Shekel’s Family 1

f_{21}(x) = -\sum_{i=1}^{5} \left[ \left( x - a_i \right) \left( x - a_i \right)^T + c_i \right]^{-1}

Appendix A.22. Shekel’s Family 2

f_{22}(x) = -\sum_{i=1}^{7} \left[ \left( x - a_i \right) \left( x - a_i \right)^T + c_i \right]^{-1}

Appendix A.23. Shekel’s Family 3

f_{23}(x) = -\sum_{i=1}^{10} \left[ \left( x - a_i \right) \left( x - a_i \right)^T + c_i \right]^{-1}

References

1. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology; University of Michigan Press: Ann Arbor, MI, USA, 1975.
2. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
3. Zhao, S.; Zhang, T.; Ma, S.; Chen, M. Dandelion Optimizer: A nature-inspired metaheuristic algorithm for engineering applications. Eng. Appl. Artif. Intell. 2022, 114, 105075.
4. Simon, D. Biogeography-based optimization. IEEE Trans. Evol. Comput. 2008, 12, 702–713.
5. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95-International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
6. Eusuff, M.M.; Lansey, K.E. Optimization of water distribution network design using the shuffled frog leaping algorithm. J. Water Resour. Plan. Manag. 2003, 129, 210–225.
7. Yang, X.S. Firefly algorithms for multimodal optimization. In Proceedings of the International Symposium on Stochastic Algorithms, Sapporo, Japan, 26–28 October 2009; pp. 169–178.
8. Zervoudakis, K.; Tsafarakis, S. A mayfly optimization algorithm. Comput. Ind. Eng. 2020, 145, 106559.
9. Wang, J.; Yang, B.; Chen, Y.; Zeng, K.; Zhang, H.; Shu, H.; Chen, Y. Novel phasianidae inspired peafowl (Pavo muticus/cristatus) optimization algorithm: Design, evaluation, and SOFC models parameter estimation. Sustain. Energy Technol. Assess. 2022, 50, 101825.
10. Bertsimas, D.; Tsitsiklis, J. Simulated annealing. Stat. Sci. 1993, 8, 10–15.
11. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
12. Das, B.; Mukherjee, V.; Das, D. Student psychology based optimization algorithm: A new population based optimization algorithm for solving optimization problems. Adv. Eng. Softw. 2020, 146, 102804.
13. Cheng, M.Y.; Sholeh, M.N. Optical microscope algorithm: A new metaheuristic inspired by microscope magnification for solving engineering optimization problems. Knowl.-Based Syst. 2023, 279, 110939.
14. Nayak, J.; Swapnarekha, H.; Naik, B.; Dhiman, G.; Vimal, S. 25 Years of Particle Swarm Optimization: Flourishing Voyage of Two Decades. Arch. Comput. Methods Eng. 2023, 30, 1663–1725.
15. Yu, Z.; Si, Z.; Li, X.; Wang, D.; Song, H. A novel hybrid particle swarm optimization algorithm for path planning of UAVs. IEEE Internet Things J. 2022, 9, 22547–22558.
16. Wang, C.; Wang, Z.; Han, F.; Dong, H.; Liu, H. A novel PID-like particle swarm optimizer: On terminal convergence analysis. Complex Intell. Syst. 2022, 8, 1217–1228.
17. Tijjani, S.; Ab Wahab, M.N.; Mohd Noor, M.H. An enhanced particle swarm optimization with position update for optimal feature selection. Expert Syst. Appl. 2024, 247, 123337.
18. Suriyan, K.; Nagarajan, R. Particle Swarm Optimization in Biomedical Technologies: Innovations, Challenges, and Opportunities. In Emerging Technologies for Health Literacy and Medical Practice; IGI Global: Hershey, PA, USA, 2024; pp. 220–238.
19. Li, F.; Yue, Q.; Liu, Y.; Ouyang, H.; Gu, F. A fast density peak clustering based particle swarm optimizer for dynamic optimization. Expert Syst. Appl. 2024, 236, 121254.
20. Shi, Y.; Eberhart, R.C. Empirical study of particle swarm optimization. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; Volume 3, pp. 1945–1950.
21. Liu, W.; Wang, Z.; Yuan, Y.; Zeng, N.; Hone, K.; Liu, X. A novel sigmoid-function-based adaptive weighted particle swarm optimizer. IEEE Trans. Cybern. 2019, 51, 1085–1093.
22. Refaat, A.; Elbaz, A.; Khalifa, A.E.; Mohamed Elsakka, M.; Kalas, A.; Hegazy Elfar, M. Performance evaluation of a novel self-tuning particle swarm optimization algorithm-based maximum power point tracker for proton exchange membrane fuel cells under different operating conditions. Energy Convers. Manag. 2024, 301, 118014.
23. Zhang, J.; Zhang, J.; Ji, W. Particle swarm optimization algorithm with self-correcting and dimension-by-dimension learning capabilities. J. Chin. Comput. Syst. 2021, 42, 919–926.
24. Xue, Y.; Tang, T.; Pang, W.; Liu, A.X. Self-adaptive parameter and strategy based particle swarm optimization for large-scale feature selection problems with multiple classifiers. Appl. Soft Comput. 2020, 88, 106031.
25. Kaseb, Z.; Rahbar, M. Towards CFD-based optimization of urban wind conditions: Comparison of Genetic algorithm, Particle Swarm Optimization, and a hybrid algorithm. Sustain. Cities Soc. 2022, 77, 103565.
26. Shams, M.Y.; El-kenawy, E.S.M.; Ibrahim, A.; Elshewey, A.M. A hybrid dipper throated optimization algorithm and particle swarm optimization (DTPSO) model for hepatocellular carcinoma (HCC) prediction. Biomed. Signal Process. Control. 2023, 85, 104908.
27. Houssein, E.H.; Gad, A.G.; Hussain, K.; Suganthan, P.N. Major Advances in Particle Swarm Optimization: Theory, Analysis, and Application. Swarm Evol. Comput. 2021, 63, 100868.
28. Shami, T.M.; El-Saleh, A.A.; Alswaitti, M.; Al-Tashi, Q.; Summakieh, M.A.; Mirjalili, S. Particle Swarm Optimization: A Comprehensive Survey. IEEE Access 2022, 10, 10031–10061.
29. Yang, X.; Wang, R.; Zhao, D.; Yu, F.; Huang, C.; Heidari, A.A.; Cai, Z.; Bourouis, S.; Algarni, A.D.; Chen, H. An adaptive quadratic interpolation and rounding mechanism sine cosine algorithm with application to constrained engineering optimization problems. Expert Syst. Appl. 2023, 213, 119041.
30. Gul, H.H.; Egrioglu, E.; Bas, E. Statistical learning algorithms for dendritic neuron model artificial neural network based on sine cosine algorithm. Inf. Sci. 2023, 629, 398–412.
31. Akay, R.; Yildirim, M.Y. Multi-strategy and self-adaptive differential sine-cosine algorithm for multi-robot path planning. Expert Syst. Appl. 2023, 232, 120849.
32. Rizk-Allah, R.M. An improved sine–cosine algorithm based on orthogonal parallel information for global optimization. Soft Comput. 2019, 23, 7135–7161.
33. Zhou, W.; Wang, P.; Heidari, A.A.; Zhao, X.; Chen, H. Spiral Gaussian mutation sine cosine algorithm: Framework and comprehensive performance optimization. Expert Syst. Appl. 2022, 209, 118372.
34. Ma, H.; Zhang, C.; Peng, T.; Nazir, M.S.; Li, Y. An integrated framework of gated recurrent unit based on improved sine cosine algorithm for photovoltaic power forecasting. Energy 2022, 256, 124650.
35. Hamad, Q.S.; Samma, H.; Suandi, S.A.; Mohamad-Saleh, J. Q-learning embedded sine cosine algorithm (QLESCA). Expert Syst. Appl. 2022, 193, 116417.
36. Cheng, R.; Jin, Y. A social learning particle swarm optimization algorithm for scalable optimization. Inf. Sci. 2015, 291, 43–60.
37. Chakraborty, S.; Saha, A.K.; Chakraborty, R.; Saha, M. An enhanced whale optimization algorithm for large scale optimization problems. Knowl.-Based Syst. 2021, 233, 107543.
38. Xu, H.Q.; Gu, S.; Fan, Y.C.; Li, X.S.; Zhao, Y.F.; Zhao, J.; Wang, J.J. A strategy learning framework for particle swarm optimization algorithm. Inf. Sci. 2023, 619, 126–152.
39. Wang, F.; Wang, X.; Sun, S. A reinforcement learning level-based particle swarm optimization algorithm for large-scale optimization. Inf. Sci. 2022, 602, 298–312.
40. Yi, J.H.; Xing, L.N.; Wang, G.G.; Dong, J.; Vasilakos, A.V.; Alavi, A.H.; Wang, L. Behavior of crossover operators in NSGA-III for large-scale optimization problems. Inf. Sci. 2020, 509, 470–487.
41. Ziegler, J.G.; Nichols, N.B. Optimum settings for automatic controllers. Trans. Am. Soc. Mech. Eng. 1942, 64, 759–765.
42. Cao, F. PID controller optimized by genetic algorithm for direct-drive servo system. Neural Comput. Appl. 2020, 32, 23–30.
43. Zhu, Y.; Jiao, J. Automatic Control System Design for Industrial Robots Based on Simulated Annealing and PID Algorithms. Adv. Multimed. 2022, 2022, 9226576.
44. Huang, M.; Tian, M.; Liu, Y.; Zhang, Y.; Zhou, J. Parameter optimization of PID controller for water and fertilizer control system based on partial attraction adaptive firefly algorithm. Sci. Rep. 2022, 12, 12182.
45. Shi, Y.; Eberhart, R.C. Parameter selection in particle swarm optimization. In Proceedings of the Evolutionary Programming VII: 7th International Conference, EP98, San Diego, CA, USA, 25–27 March 1998; pp. 591–600.
46. Suganthan, P.N.; Hansen, N.; Liang, J.J.; Deb, K.; Chen, Y.P.; Auger, A.; Tiwari, S. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Kangal Rep. 2005, 2005005, 2005.
47. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
48. Hussain, K.; Salleh, M.N.M.; Cheng, S.; Shi, Y. On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 2019, 31, 7665–7683.
Figure 1. Flowchart of VFSCPSO.
Figure 2. Convergence curves of VFSCPSO, PSO, SCA, GA, DE, SCDLPSO, and AWPSO on the CEC2005 test suite.
Figure 3. Convergence curves of VFSCPSO, SFLA, BBO, MA, POA, DO, and OMA on the CEC2005 test suite.
Figure 4. Convergence curves of VFSCPSO, PSO, SCA, GA, DE, SCDLPSO, and AWPSO on the CEC2005 test suite (D = 100).
Figure 5. Convergence curves of VFSCPSO, SFLA, BBO, MA, POA, DO, and OMA on the CEC2005 test suite (D = 100).
Figure 6. Convergence curves of VFSCPSO, PSO, SCA, GA, DE, SCDLPSO, and AWPSO on the CEC2005 test suite (D = 2000).
Figure 7. Convergence curves of VFSCPSO, SFLA, BBO, MA, POA, DO, and OMA on the CEC2005 test suite (D = 2000).
Figure 8. Convergence curves of VFSCPSO in different dimensions of the CEC2005 test suite.
Figure 9. Structural diagram of PID tuning.
Figure 10. PID simulation result diagram.
Figure 11. Convergence curves of five algorithms in PID parameter tuning of direct current servo motor.
Figure 12. VFSCPSO tendency in exploration and exploitation on CEC2005 test suite.
Table 1. CEC2005 test suite information.
Problem | Name | Dimensions | Search Space | f(x*)
f1 | Sphere Function | 30 | [−100, 100]^D | 0
f2 | Schwefel’s Problem 2.22 | 30 | [−10, 10]^D | 0
f3 | Schwefel’s Problem 1.2 | 30 | [−100, 100]^D | 0
f4 | Schwefel’s Problem 2.21 | 30 | [−100, 100]^D | 0
f5 | Generalized Rosenbrock’s Function | 30 | [−30, 30]^D | 0
f6 | Step Function | 30 | [−100, 100]^D | 0
f7 | Quartic Function, i.e., Noise | 30 | [−1.28, 1.28]^D | 0
f8 | Generalized Schwefel’s Problem 2.26 | 30 | [−500, 500]^D | −12,569.5
f9 | Generalized Rastrigin’s Function | 30 | [−5.12, 5.12]^D | 0
f10 | Ackley’s Function | 30 | [−32, 32]^D | 0
f11 | Generalized Griewank’s Function | 30 | [−600, 600]^D | 0
f12 | Generalized Penalized Function 1 | 30 | [−50, 50]^D | 0
f13 | Generalized Penalized Function 2 | 30 | [−50, 50]^D | 0
f14 | Shekel’s Foxholes Function | 2 | [−65.536, 65.536]^D | 0.998003838
f15 | Kowalik’s Function | 4 | [−5, 5]^D | 0.0003075
f16 | Six-Hump Camel-Back Function | 2 | [−5, 5]^D | −1.03162845
f17 | Branin Function | 2 | [−5, 10] × [0, 15] | 0.397887358
f18 | Goldstein Price Function | 2 | [−2, 2]^D | 3
f19 | Hartman’s Family 1 | 3 | [0, 1]^D | −3.86278215
f20 | Hartman’s Family 2 | 6 | [0, 1]^D | −3.32199517
f21 | Shekel’s Family 1 | 4 | [0, 10]^D | −10.1531997
f22 | Shekel’s Family 2 | 4 | [0, 10]^D | −10.4029406
f23 | Shekel’s Family 3 | 4 | [0, 10]^D | −10.5364
Table 2. Experimental results and Friedman test results of VFSCPSO on CEC2005 test suite f1–f15.
Problem | Metric | GA | DE | PSO | SCA | AWPSO | SCDLPSO | VFSCPSO
f1 | best | 3.24E+00 | 1.82E−10 | 4.15E−02 | 1.37E−48 | 6.77E−23 | 4.40E−46 | 3.63E−74
f1 | mean | 6.63E+00 | 3.23E−10 | 6.12E−01 | 6.32E−33 | 9.66E−18 | 1.59E−38 | 1.69E−69
f1 | worst | 1.19E+01 | 4.69E−10 | 2.00E+00 | 1.88E−31 | 2.65E−16 | 4.57E−37 | 4.54E−68
f1 | std | 2.27E+00 | 7.16E−11 | 4.67E−01 | 3.43E−32 | 4.83E−17 | 8.35E−38 | 8.27E−69
f2 | best | 6.38E−01 | 5.19E−07 | 9.08E−01 | 1.32E−32 | 1.47E−11 | 3.33E−21 | 6.24E−39
f2 | mean | 8.37E−01 | 9.29E−07 | 1.08E+01 | 9.33E−28 | 5.28E−08 | 3.33E−01 | 7.44E−37
f2 | worst | 1.04E+00 | 1.28E−06 | 3.12E+01 | 1.85E−26 | 5.11E−07 | 1.00E+01 | 6.17E−36
f2 | std | 1.20E−01 | 1.79E−07 | 8.84E+00 | 3.38E−27 | 1.25E−07 | 1.83E+00 | 1.25E−36
f3 | best | 6.62E+03 | 1.63E+04 | 1.44E+03 | 8.50E+03 | 3.76E+02 | 8.09E+01 | 8.66E−02
f3 | mean | 1.38E+04 | 2.87E+04 | 5.54E+03 | 1.72E+04 | 1.09E+03 | 1.43E+03 | 3.06E+03
f3 | worst | 3.31E+04 | 3.93E+04 | 3.66E+04 | 2.46E+04 | 2.71E+03 | 6.31E+03 | 5.24E+04
f3 | std | 5.82E+03 | 6.31E+03 | 6.92E+03 | 4.04E+03 | 5.10E+02 | 1.81E+03 | 1.38E+04
f4 | best | 3.31E+00 | 3.89E+00 | 8.15E−01 | 6.70E−11 | 4.68E−04 | 5.69E−01 | 1.89E−32
f4 | mean | 4.98E+00 | 4.86E+00 | 1.93E+00 | 2.97E+00 | 2.10E−03 | 1.42E+00 | 6.45E−30
f4 | worst | 6.55E+00 | 5.76E+00 | 3.24E+00 | 1.95E+01 | 6.04E−03 | 3.67E+00 | 6.49E−29
f4 | std | 6.92E−01 | 4.81E−01 | 7.09E−01 | 5.52E+00 | 1.33E−03 | 6.40E−01 | 1.42E−29
f5 | best | 1.61E+02 | 3.68E+01 | 3.01E+01 | 2.60E+01 | 7.40E+00 | 9.90E−04 | 6.34E−02
f5 | mean | 4.39E+02 | 5.31E+01 | 6.94E+01 | 2.65E+01 | 2.84E+01 | 3.56E+01 | 6.15E−01
f5 | worst | 2.35E+03 | 8.46E+01 | 2.97E+02 | 2.69E+01 | 7.86E+01 | 8.14E+01 | 4.07E+00
f5 | std | 4.35E+02 | 1.31E+01 | 5.97E+01 | 2.28E−01 | 1.67E+01 | 2.84E+01 | 8.61E−01
f6 | best | 1.00E+00 | 0.00E+00 | 1.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
f6 | mean | 7.97E+00 | 0.00E+00 | 1.71E+01 | 0.00E+00 | 1.77E+00 | 6.33E−01 | 0.00E+00
f6 | worst | 1.60E+01 | 0.00E+00 | 3.40E+01 | 0.00E+00 | 1.00E+01 | 7.00E+00 | 0.00E+00
f6 | std | 3.22E+00 | 0.00E+00 | 6.96E+00 | 0.00E+00 | 2.60E+00 | 1.56E+00 | 0.00E+00
f7 | best | 2.00E−02 | 1.68E−02 | 2.27E−03 | 3.46E−04 | 3.09E−03 | 9.75E−03 | 1.45E−05
f7 | mean | 6.53E−02 | 2.85E−02 | 8.35E−03 | 3.24E−03 | 6.40E−03 | 1.25E−01 | 9.41E−05
f7 | worst | 1.34E−01 | 4.34E−02 | 1.79E−02 | 1.37E−02 | 1.44E−02 | 2.70E+00 | 5.07E−04
f7 | std | 2.67E−02 | 6.71E−03 | 4.77E−03 | 2.78E−03 | 2.71E−03 | 4.88E−01 | 1.08E−04
f8 | best | −1.26E+04 | −1.26E+04 | −1.00E+04 | −6.93E+03 | −4.11E+03 | −1.06E+04 | −1.26E+04
f8 | mean | −1.26E+04 | −1.26E+04 | −8.49E+03 | −5.76E+03 | −3.33E+03 | −8.58E+03 | −1.25E+04
f8 | worst | −1.25E+04 | −1.26E+04 | −5.17E+03 | −5.26E+03 | −2.79E+03 | −7.28E+03 | −1.22E+04
f8 | std | 5.39E+00 | 3.82E−05 | 1.01E+03 | 3.19E+02 | 3.41E+02 | 8.80E+02 | 7.89E+01
f9 | best | 1.38E+00 | 1.84E+01 | 3.25E+01 | 0.00E+00 | 1.69E+01 | 1.09E+01 | 0.00E+00
f9 | mean | 2.63E+00 | 2.39E+01 | 9.38E+01 | 1.78E−16 | 3.13E+01 | 3.15E+01 | 0.00E+00
f9 | worst | 4.13E+00 | 3.00E+01 | 1.54E+02 | 5.33E−15 | 5.87E+01 | 7.16E+01 | 0.00E+00
f9 | std | 7.44E−01 | 2.63E+00 | 3.41E+01 | 9.73E−16 | 9.35E+00 | 1.71E+01 | 0.00E+00
f10 | best | 9.94E−01 | 5.95E−06 | 4.47E−01 | 4.44E−15 | 1.74E−12 | 3.24E−14 | 8.88E−16
f10 | mean | 1.39E+00 | 9.18E−06 | 2.34E+00 | 8.80E+00 | 3.87E−01 | 5.19E−01 | 3.14E−15
f10 | worst | 1.87E+00 | 2.25E−05 | 3.94E+00 | 2.01E+01 | 2.32E+00 | 1.90E+00 | 4.44E−15
f10 | std | 2.13E−01 | 3.12E−06 | 9.03E−01 | 1.00E+01 | 7.97E−01 | 6.90E−01 | 1.74E−15
f11 | best | 1.02E+00 | 1.78E−09 | 2.26E−01 | 0.00E+00 | 1.51E+01 | 7.40E−03 | 0.00E+00
f11 | mean | 1.06E+00 | 1.61E−08 | 5.83E−01 | 1.29E−09 | 2.20E+01 | 1.20E−01 | 0.00E+00
f11 | worst | 1.10E+00 | 1.02E−07 | 9.60E−01 | 3.87E−08 | 2.57E+01 | 4.07E−01 | 0.00E+00
f11 | std | 1.98E−02 | 2.44E−08 | 2.13E−01 | 7.06E−09 | 2.48E+00 | 9.44E−02 | 0.00E+00
f12 | best | 7.48E−03 | 7.22E−12 | 5.97E−02 | 2.20E−03 | 2.04E−24 | 8.40E−23 | 8.56E−06
f12 | mean | 2.80E−02 | 1.34E−11 | 9.14E−01 | 4.18E−03 | 1.38E−01 | 2.02E−01 | 2.21E−04
f12 | worst | 8.77E−02 | 2.88E−11 | 4.15E+00 | 7.35E−03 | 8.30E−01 | 2.98E+00 | 5.71E−04
f12 | std | 1.81E−02 | 4.71E−12 | 9.53E−01 | 1.50E−03 | 2.18E−01 | 5.59E−01 | 1.42E−04
f13 | best | 1.38E−01 | 2.11E−11 | 8.58E−01 | 3.34E−02 | 1.82E−25 | 1.35E−32 | 2.89E−04
f13 | mean | 3.84E−01 | 4.44E−11 | 2.57E+00 | 1.91E−01 | 3.20E−21 | 6.95E−03 | 2.49E−03
f13 | worst | 7.05E−01 | 8.25E−11 | 5.09E+00 | 4.91E−01 | 3.83E−20 | 5.48E−02 | 7.32E−03
f13 | std | 1.47E−01 | 1.48E−11 | 1.32E+00 | 1.24E−01 | 8.89E−21 | 1.27E−02 | 1.90E−03
f14 | best | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01
f14 | mean | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01
f14 | worst | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01
f14 | std | 2.23E−10 | 3.39E−16 | 3.39E−16 | 5.14E−12 | 3.39E−16 | 3.34E−16 | 1.08E−13
f15 | best | 6.01E−04 | 5.02E−04 | 3.07E−04 | 3.24E−04 | 3.07E−04 | 3.07E−04 | 3.07E−04
f15 | mean | 2.06E−03 | 6.70E−04 | 2.55E−03 | 5.82E−04 | 1.05E−03 | 8.44E−03 | 1.26E−03
f15 | worst | 2.04E−02 | 7.78E−04 | 2.04E−02 | 1.22E−03 | 2.04E−02 | 2.04E−02 | 2.25E−03
f15 | std | 3.52E−03 | 8.38E−05 | 5.05E−03 | 1.94E−04 | 3.66E−03 | 9.91E−03 | 8.35E−04
Table 3. Experimental results and Friedman test results of VFSCPSO on CEC2005 test suite f16–f23.
Problem | Metric | GA | DE | PSO | SCA | AWPSO | SCDLPSO | VFSCPSO
f16 | best | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00
f16 | mean | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00
f16 | worst | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00
f16 | std | 7.62E−06 | 6.78E−16 | 6.78E−16 | 9.74E−09 | 6.78E−16 | 6.78E−16 | 1.83E−09
f17 | best | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01
f17 | mean | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01
f17 | worst | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01
f17 | std | 4.11E−05 | 0.00E+00 | 0.00E+00 | 1.36E−06 | 0.00E+00 | 0.00E+00 | 1.09E−07
f18 | best | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00
f18 | mean | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00
f18 | worst | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00
f18 | std | 5.31E−05 | 1.73E−15 | 1.32E−15 | 2.32E−07 | 9.69E−16 | 1.32E−15 | 1.78E−07
f19 | best | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00
f19 | mean | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00
f19 | worst | −3.86E+00 | −3.86E+00 | −3.85E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.85E+00
f19 | std | 1.07E−06 | 2.71E−15 | 2.73E−03 | 7.53E−05 | 2.71E−15 | 2.71E−15 | 1.45E−03
f20 | best | −3.32E+00 | −3.32E+00 | −3.32E+00 | −3.32E+00 | −3.32E+00 | −3.32E+00 | −3.32E+00
f20 | mean | −3.27E+00 | −3.32E+00 | −3.18E+00 | −3.30E+00 | −3.25E+00 | −3.22E+00 | −3.20E+00
f20 | worst | −3.20E+00 | −3.32E+00 | −2.95E+00 | −3.20E+00 | −3.14E+00 | −3.13E+00 | −3.02E+00
f20 | std | 6.03E−02 | 1.36E−15 | 7.85E−02 | 4.60E−02 | 6.37E−02 | 5.14E−02 | 9.50E−02
f21 | best | −1.03E+01 | −1.03E+01 | −1.03E+01 | −1.03E+01 | −1.03E+01 | −1.03E+01 | −1.03E+01
f21 | mean | −5.25E+00 | −1.03E+01 | −9.29E+00 | −9.94E+00 | −7.58E+00 | −9.63E+00 | −1.03E+01
f21 | worst | −2.66E+00 | −1.03E+01 | −5.10E+00 | −5.10E+00 | −2.66E+00 | −5.09E+00 | −1.03E+01
f21 | std | 3.46E+00 | 3.39E−06 | 2.13E+00 | 1.32E+00 | 3.30E+00 | 1.81E+00 | 5.81E−05
f22 | best | −1.06E+01 | −1.06E+01 | −1.06E+01 | −1.06E+01 | −1.06E+01 | −1.06E+01 | −1.06E+01
f22 | mean | −6.19E+00 | −1.06E+01 | −1.04E+01 | −1.06E+01 | −6.74E+00 | −9.74E+00 | −1.06E+01
f22 | worst | −2.24E+00 | −1.06E+01 | −5.34E+00 | −1.04E+01 | −2.91E+00 | −5.34E+00 | −1.06E+01
f22 | std | 3.55E+00 | 4.51E−05 | 9.65E−01 | 3.58E−02 | 3.50E+00 | 2.00E+00 | 5.62E−05
f23 | best | −1.07E+01 | −1.07E+01 | −1.07E+01 | −1.07E+01 | −1.07E+01 | −1.07E+01 | −1.07E+01
f23 | mean | −5.28E+00 | −1.07E+01 | −1.07E+01 | −1.07E+01 | −6.57E+00 | −9.58E+00 | −1.07E+01
f23 | worst | −2.28E+00 | −1.07E+01 | −1.07E+01 | −1.07E+01 | −3.02E+00 | −5.19E+00 | −1.07E+01
f23 | std | 2.34E+00 | 1.44E−03 | 4.21E−04 | 1.68E−02 | 2.16E+00 | 1.97E+00 | 1.65E−15
Count | | 0 | 9 | 4 | 2 | 6 | 4 | 11
Avg. rank | | 5.695652174 | 2.652173913 | 4.782608696 | 3.782608696 | 3.565217391 | 3.782608696 | 2.608695652
Overall rank | | 7 | 2 | 6 | 4 | 3 | 4 | 1
Table 4. Experimental results and Friedman test results of VFSCPSO on CEC2005 test suite f1–f15.
Problem | Metric | SFLA | BBO | MA | POA | DO | OMA | VFSCPSO
f1 | best | 2.38E−14 | 9.56E−01 | 4.22E+04 | 4.58E−16 | 2.28E−12 | 2.35E−02 | 3.63E−74
f1 | mean | 3.79E−13 | 2.84E+00 | 5.93E+04 | 9.08E−15 | 1.23E−11 | 7.37E−02 | 1.69E−69
f1 | worst | 3.33E−12 | 6.33E+00 | 7.05E+04 | 5.39E−14 | 4.81E−11 | 2.56E−01 | 4.54E−68
f1 | std | 5.97E−13 | 1.35E+00 | 6.16E+03 | 1.17E−14 | 9.27E−12 | 4.95E−02 | 8.27E−69
f2 | best | 1.75E−07 | 3.81E−01 | 2.16E+02 | 3.05E−10 | 8.65E−07 | 2.13E−03 | 6.24E−39
f2 | mean | 1.82E−03 | 5.41E−01 | 9.61E+09 | 2.41E−09 | 2.01E−06 | 4.31E−03 | 7.44E−37
f2 | worst | 5.09E−02 | 7.88E−01 | 1.33E+11 | 1.00E−08 | 6.50E−06 | 2.44E−02 | 6.17E−36
f2 | std | 9.28E−03 | 9.60E−02 | 2.91E+10 | 2.07E−09 | 1.05E−06 | 4.19E−03 | 1.25E−36
f3 | best | 8.30E+03 | 8.47E+03 | 9.27E+04 | 4.06E+03 | 4.89E+02 | 3.93E+03 | 8.66E−02
f3 | mean | 2.43E+04 | 1.76E+04 | 5.91E+05 | 9.68E+03 | 2.38E+03 | 7.39E+03 | 3.06E+03
f3 | worst | 4.89E+04 | 3.88E+04 | 1.74E+06 | 1.54E+04 | 4.66E+03 | 1.35E+04 | 5.24E+04
f3 | std | 1.01E+04 | 6.87E+03 | 3.48E+05 | 3.12E+03 | 1.10E+03 | 2.63E+03 | 1.38E+04
f4 | best | 5.88E−01 | 4.44E+00 | 5.40E+01 | 1.28E−01 | 1.45E−03 | 7.37E+00 | 1.89E−32
f4 | mean | 5.47E+00 | 5.61E+00 | 7.83E+01 | 1.47E+00 | 4.25E−03 | 1.02E+01 | 6.45E−30
f4 | worst | 1.20E+01 | 7.64E+00 | 9.06E+01 | 5.90E+00 | 1.08E−02 | 1.36E+01 | 6.49E−29
f4 | std | 3.08E+00 | 8.21E−01 | 8.13E+00 | 1.40E+00 | 2.08E−03 | 1.37E+00 | 1.42E−29
f5 | best | 1.07E+01 | 1.09E+02 | 1.27E+08 | 1.70E+01 | 2.40E+01 | 8.61E+01 | 6.34E−02
f5 | mean | 7.93E+01 | 3.60E+02 | 2.12E+08 | 4.22E+01 | 2.65E+01 | 1.67E+02 | 6.15E−01
f5 | worst | 3.03E+02 | 3.07E+03 | 2.72E+08 | 1.59E+02 | 9.04E+01 | 2.78E+02 | 4.07E+00
f5 | std | 7.24E+01 | 5.41E+02 | 3.58E+07 | 3.32E+01 | 1.21E+01 | 5.68E+01 | 8.61E−01
f6 | best | 0.00E+00 | 1.00E+00 | 4.45E+04 | 0.00E+00 | 0.00E+00 | 4.00E+00 | 0.00E+00
f6 | mean | 6.00E−01 | 2.57E+00 | 5.89E+04 | 0.00E+00 | 0.00E+00 | 2.59E+01 | 0.00E+00
f6 | worst | 3.00E+00 | 7.00E+00 | 7.23E+04 | 0.00E+00 | 0.00E+00 | 1.26E+02 | 0.00E+00
f6 | std | 8.14E−01 | 1.50E+00 | 5.87E+03 | 0.00E+00 | 0.00E+00 | 2.28E+01 | 0.00E+00
f7 | best | 1.47E−02 | 8.58E−03 | 3.55E+01 | 7.55E−03 | 6.16E−04 | 1.39E−02 | 1.45E−05
f7 | mean | 3.33E−02 | 2.79E−02 | 9.33E+01 | 1.69E−02 | 2.82E−03 | 3.26E−02 | 9.41E−05
f7 | worst | 4.84E−02 | 5.41E−02 | 1.37E+02 | 2.97E−02 | 5.52E−03 | 6.42E−02 | 5.07E−04
f7 | std | 9.29E−03 | 9.86E−03 | 2.48E+01 | 6.16E−03 | 1.13E−03 | 1.22E−02 | 1.08E−04
f8 | best | −6.50E+03 | −1.26E+04 | −3.43E+03 | −1.11E+04 | −1.10E+04 | −8.99E+03 | −1.26E+04
f8 | mean | −4.84E+03 | −1.26E+04 | −2.72E+03 | −9.12E+03 | −1.00E+04 | −7.16E+03 | −1.25E+04
f8 | worst | −3.59E+03 | −1.26E+04 | −2.05E+03 | −7.09E+03 | −9.08E+03 | −5.48E+03 | −1.22E+04
f8 | std | 8.04E+02 | 2.77E+00 | 3.48E+02 | 1.10E+03 | 4.17E+02 | 9.65E+02 | 7.89E+01
f9 | best | 7.07E+01 | 5.75E−01 | 3.34E+02 | 1.89E+01 | 6.72E−12 | 2.39E+01 | 0.00E+00
f9 | mean | 1.11E+02 | 1.23E+00 | 3.80E+02 | 4.40E+01 | 7.27E+00 | 5.45E+01 | 0.00E+00
f9 | worst | 2.05E+02 | 2.68E+00 | 4.34E+02 | 8.36E+01 | 1.70E+01 | 1.00E+02 | 0.00E+00
f9 | std | 3.18E+01 | 4.58E−01 | 3.06E+01 | 1.38E+01 | 5.52E+00 | 2.29E+01 | 0.00E+00
f10 | best | 3.59E−05 | 5.32E−01 | 1.88E+01 | 5.59E−07 | 2.73E−07 | 2.09E+00 | 8.88E−16
f10 | mean | 2.55E+00 | 7.24E−01 | 2.01E+01 | 1.66E+01 | 7.54E−07 | 3.76E+00 | 3.14E−15
f10 | worst | 4.34E+00 | 1.14E+00 | 2.07E+01 | 2.00E+01 | 1.18E−06 | 6.07E+00 | 4.44E−15
f10 | std | 1.09E+00 | 1.69E−01 | 4.04E−01 | 7.55E+00 | 2.01E−07 | 9.32E−01 | 1.74E−15
f11 | best | 6.93E−13 | 7.84E−01 | 2.72E+02 | 3.33E−16 | 3.60E−11 | 3.34E−02 | 0.00E+00
f11 | mean | 1.49E−02 | 9.84E−01 | 5.12E+02 | 1.36E−02 | 1.29E−02 | 9.86E−02 | 0.00E+00
f11 | worst | 4.41E−02 | 1.05E+00 | 6.37E+02 | 3.44E−02 | 4.89E−02 | 2.36E−01 | 0.00E+00
f11 | std | 1.29E−02 | 6.46E−02 | 7.69E+01 | 1.19E−02 | 1.41E−02 | 5.16E−02 | 0.00E+00
f12 | best | 2.05E−09 | 2.43E−03 | 2.93E+08 | 2.51E−15 | 1.51E−09 | 1.00E−05 | 8.56E−06
f12 | mean | 1.66E+00 | 1.50E−02 | 4.60E+08 | 5.53E−02 | 5.49E−09 | 4.31E−01 | 2.21E−04
f12 | worst | 6.51E+00 | 1.15E−01 | 6.26E+08 | 5.19E−01 | 1.69E−08 | 1.42E+00 | 5.71E−04
f12 | std | 1.82E+00 | 2.17E−02 | 9.20E+07 | 1.08E−01 | 3.27E−09 | 4.37E−01 | 1.42E−04
f13 | best | 6.84E−13 | 4.72E−02 | 4.63E+08 | 6.99E−16 | 2.54E−08 | 7.97E−03 | 2.89E−04
f13 | mean | 2.95E+00 | 1.35E−01 | 9.03E+08 | 4.39E−03 | 7.69E−08 | 7.71E+00 | 2.49E−03
f13 | worst | 3.69E+01 | 2.24E−01 | 1.25E+09 | 1.10E−02 | 1.39E−07 | 3.09E+01 | 7.32E−03
f13 | std | 8.46E+00 | 4.33E−02 | 1.87E+08 | 5.47E−03 | 2.67E−08 | 9.65E+00 | 1.90E−03
f14 | best | 9.98E−01 | 9.98E−01 | 3.04E+01 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01
f14 | mean | 9.98E−01 | 9.98E−01 | 3.75E+02 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01
f14 | worst | 9.98E−01 | 9.98E−01 | 5.00E+02 | 9.98E−01 | 9.98E−01 | 9.98E−01 | 9.98E−01
f14 | std | 3.71E−16 | 5.01E−07 | 1.75E+02 | 3.39E−16 | 2.44E−16 | 3.39E−16 | 1.08E−13
f15 | best | 3.08E−04 | 5.54E−04 | 4.63E−03 | 3.07E−04 | 3.07E−04 | 3.07E−04 | 3.07E−04
f15 | mean | 5.72E−04 | 1.58E−03 | 4.09E−02 | 7.65E−04 | 3.38E−04 | 4.33E−04 | 1.26E−03
f15 | worst | 1.38E−03 | 3.28E−03 | 1.47E−01 | 1.22E−03 | 1.22E−03 | 2.50E−03 | 2.25E−03
f15 | std | 2.70E−04 | 6.48E−04 | 3.31E−02 | 4.66E−04 | 1.67E−04 | 4.29E−04 | 8.35E−04
Table 5. Experimental results and Friedman test results of VFSCPSO on CEC2005 test suite f16–f23.
Problem | Metric | SFLA | BBO | MA | POA | DO | OMA | VFSCPSO
f16 | best | −1.03E+00 | −1.03E+00 | −1.00E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00
f16 | mean | −1.03E+00 | −1.03E+00 | −5.05E−01 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00
f16 | worst | −1.03E+00 | −1.03E+00 | 3.80E−01 | −1.03E+00 | −1.03E+00 | −1.03E+00 | −1.03E+00
f16 | std | 5.98E−16 | 5.66E−04 | 3.99E−01 | 6.78E−16 | 6.65E−15 | 6.78E−16 | 1.83E−09
f17 | best | 3.98E−01 | 3.98E−01 | 4.23E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01
f17 | mean | 3.98E−01 | 3.99E−01 | 1.21E+00 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01
f17 | worst | 3.98E−01 | 4.05E−01 | 4.85E+00 | 3.98E−01 | 3.98E−01 | 3.98E−01 | 3.98E−01
f17 | std | 0.00E+00 | 1.62E−03 | 9.24E−01 | 0.00E+00 | 3.27E−13 | 0.00E+00 | 1.09E−07
f18 | best | 3.00E+00 | 3.00E+00 | 3.74E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00
f18 | mean | 3.00E+00 | 3.00E+00 | 2.60E+01 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00
f18 | worst | 3.00E+00 | 3.04E+00 | 9.87E+01 | 3.00E+00 | 3.00E+00 | 3.00E+00 | 3.00E+00
f18 | std | 2.06E−15 | 7.03E−03 | 1.98E+01 | 1.34E−15 | 2.70E−10 | 1.98E−15 | 1.78E−07
f19 | best | −3.86E+00 | −3.86E+00 | −3.84E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00
f19 | mean | −3.86E+00 | −3.86E+00 | −3.57E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00
f19 | worst | −3.86E+00 | −3.86E+00 | −2.95E+00 | −3.86E+00 | −3.86E+00 | −3.86E+00 | −3.85E+00
f19 | std | 2.51E−15 | 1.21E−04 | 2.49E−01 | 2.71E−15 | 5.15E−10 | 2.70E−15 | 1.45E−03
f20 | best | −3.32E+00 | −3.32E+00 | −2.87E+00 | −3.32E+00 | −3.32E+00 | −3.32E+00 | −3.32E+00
f20 | mean | −3.25E+00 | −3.28E+00 | −2.13E+00 | −3.24E+00 | −3.26E+00 | −3.32E+00 | −3.20E+00
f20 | worst | −3.20E+00 | −3.20E+00 | −1.37E+00 | −3.20E+00 | −3.20E+00 | −3.32E+00 | −3.02E+00
f20 | std | 5.92E−02 | 5.83E−02 | 4.45E−01 | 5.54E−02 | 6.03E−02 | 1.79E−15 | 9.50E−02
f21 | best | −1.03E+01 | −1.03E+01 | −1.61E+00 | −1.03E+01 | −1.03E+01 | −1.03E+01 | −1.03E+01
f21 | mean | −7.36E+00 | −8.74E+00 | −8.48E−01 | −8.68E+00 | −7.90E+00 | −9.41E+00 | −1.03E+01
f21 | worst | −2.66E+00 | −2.66E+00 | −3.60E−01 | −2.66E+00 | −2.66E+00 | −5.09E+00 | −1.03E+01
f21 | std | 3.54E+00 | 2.87E+00 | 3.31E−01 | 2.60E+00 | 2.89E+00 | 1.64E+00 | 5.81E−05
f22 | best | −1.06E+01 | −1.06E+01 | −2.35E+00 | −1.06E+01 | −1.06E+01 | −1.06E+01 | −1.06E+01
f22 | mean | −6.79E+00 | −8.00E+00 | −1.09E+00 | −9.51E+00 | −8.85E+00 | −1.06E+01 | −1.06E+01
f22 | worst | −2.91E+00 | −2.91E+00 | −5.59E−01 | −3.94E+00 | −5.13E+00 | −1.06E+01 | −1.06E+01
f22 | std | 3.28E+00 | 3.31E+00 | 3.92E−01 | 2.27E+00 | 2.55E+00 | 5.12E−03 | 5.62E−05
f23 | best | −1.07E+01 | −1.07E+01 | −3.79E+00 | −1.07E+01 | −1.07E+01 | −1.07E+01 | −1.07E+01
f23 | mean | −7.05E+00 | −6.44E+00 | −1.50E+00 | −7.43E+00 | −7.55E+00 | −7.62E+00 | −1.07E+01
f23 | worst | −3.17E+00 | −4.24E+00 | −8.86E−01 | −5.19E+00 | −5.19E+00 | −6.84E+00 | −1.07E+01
f23 | std | 2.80E+00 | 2.34E+00 | 6.40E−01 | 2.46E+00 | 2.54E+00 | 1.28E+00 | 1.65E−15
Count | 3 | 1 | 0 | 5 | 5 | 6 | 12
Avg. rank | 4.130434783 | 4.652173913 | 7 | 2.956521739 | 2.695652174 | 3.47826087 | 2.47826087
Overall rank | 5 | 6 | 7 | 3 | 2 | 4 | 1
Table 6. Experimental results of VFSCPSO on CEC2005 test suite f1–f10 (D = 50, 100, and 500).
Problem | Dimensions | Metric | GA | DE | PSO | SCA | AWPSO | SCDLPSO | VFSCPSO
f1 | 50 | mean | 7.88E+00 | 7.88E−10 | 5.25E−01 | 3.15E−32 | 4.52E−17 | 9.37E−43 | 1.19E−69
f1 | 50 | std | 2.66E+00 | 2.01E−10 | 3.41E−01 | 1.72E−31 | 2.43E−16 | 3.26E−42 | 6.30E−69
f1 | 100 | mean | 5.74E+00 | 1.48E−09 | 6.74E−01 | 1.86E−38 | 1.29E−18 | 8.49E−41 | 5.24E−70
f1 | 100 | std | 1.59E+00 | 4.28E−10 | 4.65E−01 | 7.36E−38 | 4.29E−18 | 4.53E−40 | 1.48E−69
f1 | 500 | mean | 7.41E+00 | 2.56E−09 | 6.55E−01 | 1.36E−37 | 7.96E−19 | 4.76E−41 | 7.48E−70
f1 | 500 | std | 3.04E+00 | 6.65E−10 | 6.17E−01 | 5.09E−37 | 3.45E−18 | 2.05E−40 | 1.76E−69
f2 | 50 | mean | 8.68E−01 | 3.70E−06 | 1.24E+01 | 2.25E−26 | 4.83E−08 | 3.33E−01 | 1.04E−36
f2 | 50 | std | 1.33E−01 | 6.52E−07 | 1.30E+01 | 7.76E−26 | 1.86E−07 | 1.83E+00 | 1.73E−36
f2 | 100 | mean | 9.06E−01 | 1.82E−04 | 1.31E+01 | 1.06E−26 | 1.24E−07 | 3.34E−01 | 7.17E−37
f2 | 100 | std | 1.74E−01 | 3.59E−05 | 9.41E+00 | 3.11E−26 | 4.85E−07 | 1.83E+00 | 9.86E−37
f2 | 500 | mean | 1.73E+35 | 8.60E+81 | 1.24E+01 | 5.32E−26 | 3.58E−08 | 9.99E−01 | 7.91E−37
f2 | 500 | std | 9.45E+35 | 4.70E+82 | 1.28E+01 | 1.66E−25 | 1.01E−07 | 3.05E+00 | 1.19E−36
f3 | 50 | mean | 1.59E+04 | 2.65E+04 | 4.70E+03 | 1.69E+04 | 1.05E+03 | 1.56E+03 | 2.81E+03
f3 | 50 | std | 6.69E+03 | 4.08E+03 | 4.49E+03 | 3.39E+03 | 4.69E+02 | 3.52E+03 | 1.43E+04
f3 | 100 | mean | 1.21E+04 | 2.61E+04 | 7.95E+03 | 1.67E+04 | 9.58E+02 | 8.89E+02 | 2.39E+03
f3 | 100 | std | 4.27E+03 | 5.77E+03 | 9.77E+03 | 3.77E+03 | 4.53E+02 | 7.42E+02 | 1.19E+04
f3 | 500 | mean | 1.44E+04 | 2.62E+04 | 8.23E+03 | 1.71E+04 | 1.32E+03 | 8.70E+02 | 2.53E+03
f3 | 500 | std | 6.25E+03 | 6.32E+03 | 1.54E+04 | 4.96E+03 | 7.04E+02 | 1.03E+03 | 1.23E+04
f4 | 50 | mean | 2.05E+01 | 2.51E+01 | 3.50E+00 | 4.91E+01 | 3.86E−01 | 4.16E+00 | 1.14E−28
f4 | 50 | std | 1.45E+00 | 1.21E+00 | 1.05E+00 | 1.16E+01 | 2.52E−01 | 1.28E+00 | 2.17E−28
f4 | 100 | mean | 5.01E+01 | 6.92E+01 | 5.68E+00 | 8.16E+01 | 3.63E+00 | 8.99E+00 | 4.24E−28
f4 | 100 | std | 1.23E+00 | 2.38E+00 | 1.50E+00 | 4.64E+00 | 5.33E−01 | 1.50E+00 | 1.48E−27
f4 | 500 | mean | 8.85E+01 | 9.89E+01 | 1.26E+01 | 9.84E+01 | 8.10E+00 | 1.75E+01 | 9.66E−27
f4 | 500 | std | 4.30E−01 | 2.58E−01 | 2.99E+00 | 3.67E−01 | 4.61E−01 | 1.95E+00 | 2.45E−26
f5 | 50 | mean | 3.43E+02 | 6.84E+01 | 1.79E+02 | 2.65E+01 | 2.52E+01 | 3.34E+01 | 2.09E+00
f5 | 50 | std | 1.63E+02 | 1.95E+01 | 5.58E+02 | 2.40E−01 | 9.52E+00 | 2.58E+01 | 6.72E+00
f5 | 100 | mean | 3.19E+02 | 6.19E+01 | 1.04E+02 | 2.64E+01 | 2.46E+01 | 3.03E+03 | 3.43E+00
f5 | 100 | std | 1.21E+02 | 1.87E+01 | 1.95E+02 | 2.19E−01 | 1.03E+01 | 1.64E+04 | 8.08E+00
f5 | 500 | mean | 3.27E+02 | 7.67E+01 | 3.14E+03 | 2.66E+01 | 2.85E+01 | 2.74E+01 | 5.40E−01
f5 | 500 | std | 1.28E+02 | 1.98E+01 | 1.64E+04 | 2.37E−01 | 1.66E+01 | 2.44E+01 | 5.34E−01
f6 | 50 | mean | 7.33E+00 | 0.00E+00 | 1.75E+01 | 0.00E+00 | 1.37E+00 | 4.67E−01 | 0.00E+00
f6 | 50 | std | 2.01E+00 | 0.00E+00 | 8.14E+00 | 0.00E+00 | 2.06E+00 | 6.81E−01 | 0.00E+00
f6 | 100 | mean | 8.63E+00 | 0.00E+00 | 1.94E+01 | 0.00E+00 | 1.33E+00 | 5.33E−01 | 0.00E+00
f6 | 100 | std | 3.58E+00 | 0.00E+00 | 1.12E+01 | 0.00E+00 | 1.42E+00 | 1.20E+00 | 0.00E+00
f6 | 500 | mean | 7.13E+00 | 0.00E+00 | 7.00E+01 | 0.00E+00 | 2.23E+00 | 1.37E+00 | 0.00E+00
f6 | 500 | std | 2.37E+00 | 0.00E+00 | 9.18E+01 | 0.00E+00 | 3.68E+00 | 4.63E+00 | 0.00E+00
f7 | 50 | mean | 6.03E−02 | 3.05E−02 | 1.88E−01 | 3.85E−03 | 7.10E−03 | 3.62E−02 | 9.79E−05
f7 | 50 | std | 2.06E−02 | 4.84E−03 | 6.80E−01 | 3.33E−03 | 2.56E−03 | 2.24E−02 | 1.11E−04
f7 | 100 | mean | 6.76E−02 | 3.18E−02 | 9.43E−03 | 3.05E−03 | 6.50E−03 | 2.10E−01 | 1.32E−04
f7 | 100 | std | 1.94E−02 | 6.36E−03 | 6.02E−03 | 1.62E−03 | 2.35E−03 | 6.86E−01 | 1.04E−04
f7 | 500 | mean | 6.27E−02 | 3.32E−02 | 3.67E−01 | 3.63E−03 | 6.72E−03 | 1.26E−01 | 1.22E−04
f7 | 500 | std | 2.39E−02 | 6.19E−03 | 1.54E+00 | 2.61E−03 | 2.40E−03 | 5.00E−01 | 1.19E−04
f8 | 50 | mean | −1.26E+04 | −1.26E+04 | −8.74E+03 | −5.66E+03 | −3.41E+03 | −8.83E+03 | −1.24E+04
f8 | 50 | std | 3.98E+00 | 3.09E−05 | 1.07E+03 | 3.00E+02 | 3.90E+02 | 9.55E+02 | 3.56E+02
f8 | 100 | mean | −1.26E+04 | −1.26E+04 | −8.54E+03 | −5.84E+03 | −3.35E+03 | −9.01E+03 | −1.22E+04
f8 | 100 | std | 5.61E+00 | 2.47E−05 | 1.29E+03 | 3.74E+02 | 3.15E+02 | 1.38E+03 | 4.93E+02
f8 | 500 | mean | −1.26E+04 | −1.26E+04 | −8.55E+03 | −5.71E+03 | −3.31E+03 | −8.90E+03 | −1.22E+04
f8 | 500 | std | 5.11E+00 | 1.84E−05 | 1.05E+03 | 2.53E+02 | 4.19E+02 | 9.99E+02 | 4.80E+02
f9 | 50 | mean | 2.51E+00 | 1.61E+01 | 9.22E+01 | 2.25E−15 | 3.48E+01 | 2.76E+01 | 0.00E+00
f9 | 50 | std | 8.36E−01 | 2.39E+00 | 3.05E+01 | 7.85E−15 | 1.26E+01 | 1.21E+01 | 0.00E+00
f9 | 100 | mean | 2.45E+00 | 8.82E+00 | 8.81E+01 | 6.65E−10 | 3.09E+01 | 3.01E+01 | 0.00E+00
f9 | 100 | std | 7.64E−01 | 2.52E+00 | 3.56E+01 | 2.68E−09 | 8.79E+00 | 1.39E+01 | 0.00E+00
f9 | 500 | mean | 2.42E+00 | 4.27E+00 | 8.79E+01 | 3.64E−10 | 3.24E+01 | 3.38E+01 | 0.00E+00
f9 | 500 | std | 9.77E−01 | 2.39E+00 | 3.14E+01 | 1.99E−09 | 6.16E+00 | 2.03E+01 | 0.00E+00
f10 | 50 | mean | 1.31E+00 | 1.41E−05 | 2.42E+00 | 9.02E+00 | 7.10E−01 | 4.55E−01 | 3.61E−15
f10 | 50 | std | 2.74E−01 | 4.29E−06 | 7.49E−01 | 9.90E+00 | 1.12E+00 | 7.47E−01 | 1.53E−15
f10 | 100 | mean | 1.27E+00 | 1.78E−05 | 2.33E+00 | 8.70E+00 | 5.23E−01 | 7.72E−01 | 2.55E−15
f10 | 100 | std | 2.70E−01 | 4.20E−06 | 9.26E−01 | 1.01E+01 | 1.03E+00 | 8.77E−01 | 1.80E−15
f10 | 500 | mean | 1.27E+00 | 2.14E−05 | 1.91E+00 | 8.02E+00 | 2.94E−01 | 4.73E−01 | 3.14E−15
f10 | 500 | std | 2.86E−01 | 4.53E−06 | 8.21E−01 | 9.99E+00 | 7.77E−01 | 7.60E−01 | 1.74E−15
Table 7. Experimental results of VFSCPSO on CEC2005 test suite f11–f13 (D = 50, 100, and 500).
Problem | Dimensions | Metric | GA | DE | PSO | SCA | AWPSO | SCDLPSO | VFSCPSO
f11 | 50 | mean | 1.06E+00 | 4.14E−08 | 5.11E−01 | 3.29E−12 | 2.09E+01 | 1.19E−01 | 0.00E+00
f11 | 50 | std | 2.37E−02 | 4.37E−08 | 1.98E−01 | 1.55E−11 | 3.72E+00 | 8.47E−02 | 0.00E+00
f11 | 100 | mean | 1.06E+00 | 7.20E−08 | 5.47E−01 | 1.72E−10 | 2.09E+01 | 1.31E−01 | 0.00E+00
f11 | 100 | std | 2.45E−02 | 1.02E−07 | 1.82E−01 | 9.21E−10 | 2.65E+00 | 1.24E−01 | 0.00E+00
f11 | 500 | mean | 1.07E+00 | 1.81E−07 | 5.19E−01 | 3.11E−15 | 2.12E+01 | 1.22E−01 | 0.00E+00
f11 | 500 | std | 2.27E−02 | 2.83E−07 | 2.33E−01 | 1.70E−14 | 2.22E+00 | 1.06E−01 | 0.00E+00
f12 | 50 | mean | 2.89E−02 | 2.75E−11 | 6.58E−01 | 4.04E−03 | 1.14E−01 | 1.27E−01 | 2.87E−04
f12 | 50 | std | 1.89E−02 | 7.92E−12 | 6.50E−01 | 1.58E−03 | 2.79E−01 | 2.12E−01 | 2.07E−04
f12 | 100 | mean | 3.82E−02 | 4.71E−11 | 7.31E−01 | 4.52E−03 | 1.45E−01 | 2.78E−01 | 2.24E−04
f12 | 100 | std | 3.35E−02 | 1.26E−11 | 7.72E−01 | 2.22E−03 | 1.92E−01 | 4.70E−01 | 1.41E−04
f12 | 500 | mean | 3.89E−02 | 7.76E−11 | 6.81E−01 | 3.91E−03 | 1.73E−01 | 2.08E−01 | 2.29E−04
f12 | 500 | std | 3.89E−02 | 2.62E−11 | 8.50E−01 | 1.61E−03 | 2.44E−01 | 3.06E−01 | 1.33E−04
f13 | 50 | mean | 3.70E−01 | 9.58E−11 | 2.56E+00 | 1.85E−01 | 4.64E−20 | 7.66E−03 | 3.86E−03
f13 | 50 | std | 1.41E−01 | 2.68E−11 | 1.55E+00 | 1.07E−01 | 1.66E−19 | 1.96E−02 | 3.90E−03
f13 | 100 | mean | 3.86E−01 | 1.83E−10 | 2.94E+00 | 1.74E−01 | 1.72E−20 | 4.03E−03 | 4.38E−03
f13 | 100 | std | 1.31E−01 | 6.34E−11 | 1.67E+00 | 1.18E−01 | 4.08E−20 | 8.89E−03 | 5.55E−03
f13 | 500 | mean | 4.00E−01 | 3.20E−10 | 2.74E+00 | 2.13E−01 | 3.04E−20 | 4.76E−03 | 2.94E−03
f13 | 500 | std | 1.36E−01 | 8.15E−11 | 1.32E+00 | 1.49E−01 | 1.07E−19 | 1.14E−02 | 1.67E−03
Table 8. Results of the Friedman test for Table 6 and Table 7.
Dimensions | Metric | GA | DE | PSO | SCA | AWPSO | SCDLPSO | VFSCPSO
50 | Count | 0 | 1 | 0 | 3 | 2 | 0 | 9
50 | Avg. rank | 5.923076 | 3.76923 | 5.230769 | 3.461538 | 3.846153 | 4 | 1.538461
50 | Overall rank | 7 | 3 | 6 | 2 | 4 | 5 | 1
100 | Count | 0 | 1 | 0 | 3 | 1 | 1 | 9
100 | Avg. rank | 5.615384 | 3.76923 | 5.153846 | 3.461538 | 3.846153 | 4.307692 | 1.615384
100 | Overall rank | 7 | 3 | 6 | 2 | 4 | 5 | 1
500 | Count | 0 | 1 | 0 | 3 | 1 | 1 | 9
500 | Avg. rank | 5.846153 | 3.615384 | 5.076923 | 3.76923 | 3.923076 | 4 | 1.538461
500 | Overall rank | 7 | 2 | 6 | 3 | 4 | 5 | 1
Table 9. Experimental results and Friedman test results of VFSCPSO on CEC2005 test suite f1–f10 (D = 50, 100, and 500).
Problem | Dimensions | Metric | SFLA | BBO | MA | DO | OMA | POA | VFSCPSO
f1 | 50 | mean | 3.42E+02 | 2.40E+00 | 5.64E+04 | 1.23E−11 | 7.40E−02 | 9.60E−15 | 1.19E−69
f1 | 50 | std | 8.12E+02 | 6.76E−01 | 8.83E+03 | 7.78E−12 | 3.47E−02 | 1.27E−14 | 6.30E−69
f1 | 100 | mean | 6.12E+03 | 2.96E+00 | 5.92E+04 | 1.29E−11 | 1.49E−01 | 5.95E−15 | 5.24E−70
f1 | 100 | std | 2.89E+03 | 1.24E+00 | 7.71E+03 | 8.22E−12 | 3.40E−01 | 7.08E−15 | 1.48E−69
f1 | 500 | mean | 3.93E+04 | 2.90E+00 | 5.81E+04 | 1.12E−11 | 7.27E−02 | 9.73E−15 | 7.48E−70
f1 | 500 | std | 2.74E+03 | 1.19E+00 | 6.44E+03 | 7.88E−12 | 4.21E−02 | 2.16E−14 | 1.76E−69
f2 | 50 | mean | 1.61E+03 | 5.46E−01 | 1.02E+19 | 2.14E−06 | 2.94E−03 | 1.39E−08 | 1.04E−36
f2 | 50 | std | 8.73E+03 | 8.74E−02 | 5.56E+19 | 9.23E−07 | 7.20E−04 | 1.31E−08 | 1.73E−36
f2 | 100 | mean | 1.43E+13 | 5.72E−01 | 2.19E+38 | 1.94E−06 | 3.27E−03 | 2.27E+00 | 7.17E−37
f2 | 100 | std | 7.55E+13 | 1.10E−01 | 1.20E+39 | 1.48E−06 | 3.94E−03 | 6.85E+00 | 9.86E−37
f2 | 500 | mean | 2.98E+239 | 7.91E−01 | 6.02E+190 | 1.73E−06 | 1.44E−03 | 4.54E+83 | 7.91E−37
f2 | 500 | std | 6.55E+04 | 1.31E−01 | 6.55E+04 | 6.93E−07 | 4.62E−04 | 2.48E+84 | 1.19E−36
f3 | 50 | mean | 2.47E+04 | 2.48E+04 | 5.86E+05 | 2.35E+03 | 7.91E+03 | 9.04E+03 | 2.81E+03
f3 | 50 | std | 1.28E+04 | 1.10E+04 | 4.28E+05 | 1.23E+03 | 2.77E+03 | 4.12E+03 | 1.43E+04
f3 | 100 | mean | 2.54E+04 | 1.93E+04 | 5.70E+05 | 2.08E+03 | 7.86E+03 | 1.13E+04 | 2.39E+03
f3 | 100 | std | 1.05E+04 | 1.16E+04 | 4.04E+05 | 1.06E+03 | 2.46E+03 | 3.90E+03 | 1.19E+04
f3 | 500 | mean | 7.77E+04 | 2.23E+04 | 6.00E+05 | 2.06E+03 | 7.58E+03 | 1.23E+04 | 2.53E+03
f3 | 500 | std | 2.37E+04 | 8.37E+03 | 4.06E+05 | 1.09E+03 | 2.44E+03 | 3.94E+03 | 1.23E+04
f4 | 50 | mean | 3.48E+01 | 1.33E+01 | 8.37E+01 | 5.87E−01 | 1.21E+01 | 7.44E+01 | 1.14E−28
f4 | 50 | std | 5.03E+00 | 1.79E+00 | 6.81E+00 | 3.87E−01 | 1.43E+00 | 7.86E+00 | 2.17E−28
f4 | 100 | mean | 4.26E+01 | 3.17E+01 | 8.97E+01 | 2.86E+01 | 1.35E+01 | 8.85E+01 | 4.24E−28
f4 | 100 | std | 9.77E−01 | 2.38E+00 | 3.89E+00 | 7.00E+00 | 9.86E−01 | 1.64E+00 | 1.48E−27
f4 | 500 | mean | 9.46E+01 | 7.81E+01 | 9.51E+01 | 9.03E+01 | 1.76E+01 | 9.70E+01 | 9.66E−27
f4 | 500 | std | 2.62E−01 | 1.27E+00 | 2.49E+00 | 2.00E+00 | 1.14E+00 | 3.82E−01 | 2.45E−26
f5 | 50 | mean | 3.41E+04 | 2.72E+02 | 1.94E+08 | 2.42E+01 | 1.54E+02 | 4.23E+01 | 2.09E+00
f5 | 50 | std | 8.02E+04 | 3.26E+02 | 4.29E+07 | 1.60E−01 | 6.21E+01 | 3.54E+01 | 6.72E+00
f5 | 100 | mean | 2.40E+06 | 2.13E+02 | 2.02E+08 | 2.43E+01 | 1.90E+02 | 5.09E+01 | 3.43E+00
f5 | 100 | std | 1.58E+06 | 9.18E+01 | 3.42E+07 | 2.06E−01 | 8.84E+01 | 3.71E+01 | 8.08E+00
f5 | 500 | mean | 8.25E+07 | 3.49E+02 | 1.99E+08 | 2.42E+01 | 1.61E+02 | 3.90E+01 | 5.40E−01
f5 | 500 | std | 1.80E+07 | 4.54E+02 | 3.49E+07 | 3.30E−01 | 7.71E+01 | 2.83E+01 | 5.34E−01
f6 | 50 | mean | 4.48E+01 | 2.63E+00 | 5.90E+04 | 0.00E+00 | 3.34E+01 | 0.00E+00 | 0.00E+00
f6 | 50 | std | 8.31E+01 | 1.73E+00 | 6.14E+03 | 0.00E+00 | 2.97E+01 | 0.00E+00 | 0.00E+00
f6 | 100 | mean | 6.63E+03 | 2.97E+00 | 5.76E+04 | 0.00E+00 | 3.84E+01 | 0.00E+00 | 0.00E+00
f6 | 100 | std | 2.98E+03 | 2.31E+00 | 7.27E+03 | 0.00E+00 | 3.32E+01 | 0.00E+00 | 0.00E+00
f6 | 500 | mean | 3.81E+04 | 2.33E+00 | 5.82E+04 | 0.00E+00 | 3.70E+01 | 0.00E+00 | 0.00E+00
f6 | 500 | std | 3.21E+03 | 1.52E+00 | 8.55E+03 | 0.00E+00 | 3.17E+01 | 0.00E+00 | 0.00E+00
f7 | 50 | mean | 3.03E−01 | 2.59E−02 | 8.72E+01 | 2.89E−03 | 3.75E−02 | 1.44E−02 | 9.79E−05
f7 | 50 | std | 2.78E−01 | 9.10E−03 | 2.74E+01 | 2.16E−03 | 1.18E−02 | 5.16E−03 | 1.11E−04
f7 | 100 | mean | 1.75E+00 | 2.71E−02 | 9.31E+01 | 3.46E−03 | 3.47E−02 | 1.49E−02 | 1.32E−04
f7 | 100 | std | 7.25E−01 | 9.06E−03 | 2.13E+01 | 1.48E−03 | 1.11E−02 | 5.74E−03 | 1.04E−04
f7 | 500 | mean | 4.07E+01 | 2.88E−02 | 8.56E+01 | 2.89E−03 | 3.33E−02 | 1.28E−02 | 1.22E−04
f7 | 500 | std | 5.36E+00 | 9.54E−03 | 1.72E+01 | 1.00E−03 | 1.21E−02 | 4.12E−03 | 1.19E−04
f8 | 50 | mean | −3.96E+03 | −1.26E+04 | −2.69E+03 | −1.00E+04 | −7.11E+03 | −8.98E+03 | −1.24E+04
f8 | 50 | std | 5.45E+02 | 2.31E+00 | 4.33E+02 | 6.20E+02 | 8.19E+02 | 8.15E+02 | 3.56E+02
f8 | 100 | mean | −3.88E+03 | −1.26E+04 | −2.79E+03 | −1.02E+04 | −7.18E+03 | −8.62E+03 | −1.22E+04
f8 | 100 | std | 3.04E+02 | 3.12E+00 | 5.64E+02 | 3.79E+02 | 9.82E+02 | 7.28E+02 | 4.93E+02
f8 | 500 | mean | −4.47E+03 | −1.26E+04 | −2.61E+03 | −1.01E+04 | −7.21E+03 | −8.78E+03 | −1.22E+04
f8 | 500 | std | 3.17E+02 | 2.12E+00 | 3.99E+02 | 5.94E+02 | 8.83E+02 | 8.40E+02 | 4.80E+02
f9 | 50 | mean | 1.16E+02 | 1.26E+00 | 3.81E+02 | 8.50E+00 | 4.67E+01 | 3.86E+01 | 0.00E+00
f9 | 50 | std | 4.11E+01 | 4.43E−01 | 3.69E+01 | 7.26E+00 | 1.88E+01 | 1.02E+01 | 0.00E+00
f9 | 100 | mean | 1.21E+02 | 1.14E+00 | 3.85E+02 | 7.54E+00 | 5.84E+01 | 3.94E+01 | 0.00E+00
f9 | 100 | std | 5.30E+01 | 3.73E−01 | 2.08E+01 | 6.75E+00 | 2.60E+01 | 1.12E+01 | 0.00E+00
f9 | 500 | mean | 3.29E+02 | 1.04E+00 | 3.94E+02 | 7.14E+00 | 4.90E+01 | 4.11E+01 | 0.00E+00
f9 | 500 | std | 1.68E+01 | 3.49E−01 | 3.08E+01 | 5.35E+00 | 2.20E+01 | 1.05E+01 | 0.00E+00
f10 | 50 | mean | 4.87E+00 | 7.11E−01 | 2.01E+01 | 7.39E−07 | 4.14E+00 | 1.66E+01 | 3.61E−15
f10 | 50 | std | 3.10E+00 | 1.75E−01 | 3.15E−01 | 2.57E−07 | 1.14E+00 | 7.54E+00 | 1.53E−15
f10 | 100 | mean | 1.31E+01 | 7.56E−01 | 2.01E+01 | 7.97E−07 | 3.84E+00 | 1.79E+01 | 2.55E−15
f10 | 100 | std | 2.25E+00 | 1.86E−01 | 3.45E−01 | 2.86E−07 | 7.40E−01 | 6.08E+00 | 1.80E−15
f10 | 500 | mean | 1.96E+01 | 7.03E−01 | 2.00E+01 | 8.11E−07 | 4.04E+00 | 1.85E+01 | 3.14E−15
f10 | 500 | std | 1.97E−01 | 2.09E−01 | 5.60E−01 | 2.36E−07 | 8.67E−01 | 5.04E+00 | 1.74E−15
Table 10. Experimental results and Friedman test results of VFSCPSO on CEC2005 test suite f11–f13 (D = 50, 100, and 500).
Problem | Dimensions | Metric | SFLA | BBO | MA | DO | OMA | POA | VFSCPSO
f11 | 50 | mean | 2.87E+00 | 9.95E−01 | 5.26E+02 | 1.41E−02 | 1.32E−01 | 9.92E−03 | 0.00E+00
f11 | 50 | std | 7.45E+00 | 4.80E−02 | 4.13E+01 | 1.94E−02 | 7.10E−02 | 1.32E−02 | 0.00E+00
f11 | 100 | mean | 6.07E+01 | 9.99E−01 | 5.23E+02 | 1.75E−02 | 1.33E−01 | 9.27E−03 | 0.00E+00
f11 | 100 | std | 2.60E+01 | 3.95E−02 | 7.46E+01 | 2.26E−02 | 9.25E−02 | 1.05E−02 | 0.00E+00
f11 | 500 | mean | 3.46E+02 | 1.01E+00 | 5.37E+02 | 9.10E−03 | 1.59E−01 | 1.39E−02 | 0.00E+00
f11 | 500 | std | 3.67E+01 | 2.88E−02 | 5.04E+01 | 1.33E−02 | 1.05E−01 | 1.64E−02 | 0.00E+00
f12 | 50 | mean | 1.25E+03 | 1.59E−02 | 4.36E+08 | 5.88E−09 | 4.13E−01 | 8.31E−02 | 2.87E−04
f12 | 50 | std | 4.76E+03 | 1.76E−02 | 1.12E+08 | 3.06E−09 | 6.26E−01 | 2.31E−01 | 2.07E−04
f12 | 100 | mean | 1.76E+05 | 2.03E−02 | 4.26E+08 | 5.87E−09 | 4.82E−01 | 6.22E−02 | 2.24E−04
f12 | 100 | std | 2.01E+05 | 2.77E−02 | 8.97E+07 | 2.09E−09 | 6.15E−01 | 1.00E−01 | 1.41E−04
f12 | 500 | mean | 1.49E+08 | 1.81E−02 | 4.36E+08 | 5.63E−09 | 2.64E−01 | 6.58E−02 | 2.29E−04
f12 | 500 | std | 3.45E+07 | 1.94E−02 | 8.58E+07 | 2.27E−09 | 3.96E−01 | 1.65E−01 | 1.33E−04
f13 | 50 | mean | 2.50E+04 | 1.45E−01 | 8.70E+08 | 7.43E−08 | 6.73E+00 | 3.66E−03 | 3.86E−03
f13 | 50 | std | 1.03E+05 | 7.32E−02 | 2.11E+08 | 2.24E−08 | 8.44E+00 | 5.27E−03 | 3.90E−03
f13 | 100 | mean | 2.30E+06 | 1.20E−01 | 8.24E+08 | 6.92E−08 | 1.10E+01 | 3.66E−03 | 4.38E−03
f13 | 100 | std | 1.84E+06 | 3.26E−02 | 1.61E+08 | 3.56E−08 | 1.18E+01 | 5.27E−03 | 5.55E−03
f13 | 500 | mean | 3.47E+08 | 1.29E−01 | 9.08E+08 | 6.74E−08 | 6.40E+00 | 3.66E−03 | 2.94E−03
f13 | 500 | std | 5.39E+07 | 4.61E−02 | 1.99E+08 | 3.12E−08 | 9.00E+00 | 5.27E−03 | 1.67E−03
Table 11. Results of the Friedman test for Table 9 and Table 10.
Dimensions | Metric | SFLA | BBO | MA | DO | OMA | POA | VFSCPSO
50 | Count | 0 | 1 | 0 | 4 | 0 | 1 | 9
50 | Avg. rank | 5.76923 | 3.923076 | 7 | 2.076923 | 4.307692 | 3.307692 | 1.384615
50 | Overall rank | 6 | 4 | 7 | 2 | 5 | 3 | 1
100 | Count | 0 | 1 | 0 | 4 | 0 | 1 | 9
100 | Avg. rank | 5.846153 | 3.76923 | 7 | 2.076923 | 4.153846 | 3.538461 | 1.384615
100 | Overall rank | 6 | 4 | 7 | 2 | 5 | 3 | 1
500 | Count | 0 | 1 | 0 | 4 | 0 | 1 | 9
500 | Avg. rank | 6 | 3.692307 | 6.846153 | 2.076923 | 4.153846 | 3.692307 | 1.307692
500 | Overall rank | 6 | 3 | 7 | 2 | 5 | 3 | 1
Table 12. Experimental results and Friedman test results of VFSCPSO on CEC2005 test suite f1–f10 (D = 1000, 2000, and 5000).
Problem | Dimensions | Metric | GA | DE | PSO | SCA | AWPSO | SCDLPSO | VFSCPSO
f1 | 1000 | mean | 7.22E+00 | 2.66E−09 | 5.71E−01 | 1.30E−33 | 7.45E−19 | 2.05E−39 | 1.24E−70
f1 | 1000 | std | 2.36E+00 | 7.86E−10 | 5.03E−01 | 4.99E−33 | 1.68E−18 | 1.12E−38 | 3.04E−70
f1 | 2000 | mean | 6.94E+00 | 2.93E−09 | 5.61E−01 | 3.86E−34 | 1.35E−18 | 1.42E−38 | 6.89E−70
f1 | 2000 | std | 2.13E+00 | 6.40E−10 | 4.12E−01 | 1.35E−33 | 3.81E−18 | 4.67E−38 | 1.72E−69
f1 | 5000 | mean | 7.13E+00 | 3.30E−09 | 4.40E−01 | 8.30E−38 | 3.10E−18 | 4.46E−42 | 5.15E−70
f1 | 5000 | std | 2.15E+00 | 9.43E−10 | 2.61E−01 | 2.61E−37 | 1.16E−17 | 1.09E−41 | 1.16E−69
f2 | 1000 | mean | 6.55E+04 | 6.55E+04 | 2.82E+00 | 6.55E+04 | 4.50E+01 | 9.60E−05 | 1.39E−36
f2 | 1000 | std | NaN | NaN | 2.98E+00 | NaN | 4.39E+01 | 3.30E−04 | 2.12E−36
f2 | 2000 | mean | 6.55E+04 | 6.55E+04 | 4.62E+00 | 6.55E+04 | 6.55E+04 | 4.59E−13 | 4.50E−36
f2 | 2000 | std | NaN | NaN | 1.98E+00 | NaN | NaN | 2.05E−12 | 7.65E−36
f2 | 5000 | mean | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 4.77E−30
f2 | 5000 | std | NaN | NaN | NaN | NaN | NaN | NaN | 1.49E−29
f3 | 1000 | mean | 1.73E+04 | 2.65E+04 | 9.31E+03 | 1.88E+04 | 1.27E+03 | 1.03E+03 | 2.80E+03
f3 | 1000 | std | 8.51E+03 | 4.88E+03 | 1.87E+04 | 4.60E+03 | 6.18E+02 | 1.07E+03 | 1.72E+03
f3 | 2000 | mean | 1.58E+04 | 2.74E+04 | 4.79E+03 | 1.70E+04 | 1.03E+03 | 1.66E+03 | 2.59E+03
f3 | 2000 | std | 8.06E+03 | 4.56E+03 | 6.65E+03 | 4.33E+03 | 4.08E+02 | 1.47E+03 | 1.86E+03
f3 | 5000 | mean | 1.42E+04 | 2.64E+04 | 3.75E+03 | 1.61E+04 | 8.68E+02 | 7.41E+02 | 2.67E+03
f3 | 5000 | std | 6.21E+03 | 6.09E+03 | 4.00E+03 | 4.56E+03 | 4.42E+02 | 6.15E+02 | 1.97E+03
f4 | 1000 | mean | 9.41E+01 | 9.95E+01 | 1.47E+01 | 9.93E+01 | 9.37E+00 | 2.15E+01 | 1.65E−25
f4 | 1000 | std | 1.97E−01 | 1.22E−01 | 2.43E+00 | 1.44E−01 | 3.21E−01 | 2.18E+00 | 5.94E−25
f4 | 2000 | mean | 9.70E+01 | 9.97E+01 | 1.89E+01 | 9.97E+01 | 1.07E+01 | 2.35E+01 | 1.15E−24
f4 | 2000 | std | 1.22E−01 | 5.85E−02 | 3.00E+00 | 8.35E−02 | 3.72E−01 | 1.34E+00 | 3.25E−24
f4 | 5000 | mean | 9.88E+01 | 9.99E+01 | 2.42E+01 | 9.99E+01 | 1.26E+01 | 2.79E+01 | 8.03E−24
f4 | 5000 | std | 5.01E−02 | 1.60E−02 | 3.74E+00 | 4.93E−02 | 2.62E−01 | 2.25E+00 | 1.41E−23
f5 | 1000 | mean | 3.46E+02 | 7.32E+01 | 1.01E+02 | 2.64E+01 | 2.44E+01 | 3.18E+01 | 3.92E+00
f5 | 1000 | std | 1.44E+02 | 1.94E+01 | 1.38E+02 | 1.86E−01 | 1.11E+01 | 2.92E+01 | 9.17E+00
f5 | 2000 | mean | 3.33E+02 | 6.68E+01 | 3.09E+03 | 2.65E+01 | 2.97E+01 | 2.09E+02 | 1.33E+00
f5 | 2000 | std | 1.46E+02 | 1.73E+01 | 1.64E+04 | 2.35E−01 | 1.97E+01 | 6.77E+02 | 4.84E+00
f5 | 5000 | mean | 3.75E+02 | 6.47E+01 | 1.36E+02 | 2.65E+01 | 2.24E+01 | 1.83E+01 | 4.04E−01
f5 | 5000 | std | 1.57E+02 | 1.61E+01 | 1.91E+02 | 2.60E−01 | 3.05E+00 | 2.46E+01 | 2.99E−01
f6 | 1000 | mean | 8.33E+00 | 0.00E+00 | 1.97E+01 | 0.00E+00 | 1.70E+00 | 7.67E−01 | 0.00E+00
f6 | 1000 | std | 3.21E+00 | 0.00E+00 | 1.16E+01 | 0.00E+00 | 1.93E+00 | 1.38E+00 | 0.00E+00
f6 | 2000 | mean | 7.50E+00 | 0.00E+00 | 2.02E+01 | 0.00E+00 | 1.50E+00 | 5.00E−01 | 0.00E+00
f6 | 2000 | std | 2.61E+00 | 0.00E+00 | 7.96E+00 | 0.00E+00 | 1.36E+00 | 6.88E−01 | 0.00E+00
f6 | 5000 | mean | 7.80E+00 | 0.00E+00 | 2.12E+01 | 0.00E+00 | 1.93E+00 | 1.33E−01 | 0.00E+00
f6 | 5000 | std | 2.96E+00 | 0.00E+00 | 8.35E+00 | 0.00E+00 | 1.87E+00 | 3.52E−01 | 0.00E+00
f7 | 1000 | mean | 6.13E−02 | 3.34E−02 | 3.70E−01 | 3.91E−03 | 7.36E−03 | 3.09E−02 | 1.10E−04
f7 | 1000 | std | 1.84E−02 | 6.56E−03 | 1.16E+00 | 2.86E−03 | 1.94E−03 | 1.36E−02 | 8.58E−05
f7 | 2000 | mean | 6.51E−02 | 3.36E−02 | 4.57E−01 | 3.40E−03 | 7.00E−03 | 3.31E−02 | 7.92E−05
f7 | 2000 | std | 2.53E−02 | 7.33E−03 | 1.42E+00 | 1.92E−03 | 2.35E−03 | 1.51E−02 | 5.14E−05
f7 | 5000 | mean | 5.05E−02 | 3.53E−02 | 1.48E−02 | 3.82E−03 | 7.60E−03 | 3.89E−01 | 1.20E−04
f7 | 5000 | std | 2.00E−02 | 4.80E−03 | 8.59E−03 | 2.40E−03 | 2.69E−03 | 1.38E+00 | 9.93E−05
f8 | 1000 | mean | −1.26E+04 | −1.26E+04 | −8.32E+03 | −5.70E+03 | −3.28E+03 | −9.15E+03 | −1.22E+04
f8 | 1000 | std | 4.68E+00 | 2.87E−05 | 1.05E+03 | 3.54E+02 | 3.97E+02 | 1.01E+03 | 5.32E+02
f8 | 2000 | mean | −1.26E+04 | −1.26E+04 | −8.62E+03 | −5.79E+03 | −3.28E+03 | −9.23E+03 | −1.24E+04
f8 | 2000 | std | 8.24E+00 | 1.66E−05 | 1.02E+03 | 3.58E+02 | 3.32E+02 | 1.12E+03 | 3.20E+02
f8 | 5000 | mean | −1.26E+04 | −1.26E+04 | −8.71E+03 | −5.62E+03 | −3.16E+03 | −8.87E+03 | −1.24E+04
f8 | 5000 | std | 6.55E+00 | 2.06E−05 | 1.10E+03 | 2.72E+02 | 3.88E+02 | 1.22E+03 | 1.62E+02
f9 | 1000 | mean | 2.48E+00 | 3.95E+00 | 9.09E+01 | 1.14E−05 | 2.92E+01 | 2.92E+01 | 0.00E+00
f9 | 1000 | std | 7.17E−01 | 2.32E+00 | 3.37E+01 | 6.26E−05 | 1.00E+01 | 1.28E+01 | 0.00E+00
f9 | 2000 | mean | 2.59E+00 | 3.65E+00 | 8.10E+01 | 7.65E−11 | 2.61E+01 | 2.98E+01 | 0.00E+00
f9 | 2000 | std | 8.62E−01 | 2.65E+00 | 3.23E+01 | 3.66E−10 | 8.00E+00 | 1.28E+01 | 0.00E+00
f9 | 5000 | mean | 2.37E+00 | 3.11E+00 | 8.17E+01 | 0.00E+00 | 3.36E+01 | 3.96E+01 | 0.00E+00
f9 | 5000 | std | 8.56E−01 | 2.23E+00 | 3.84E+01 | 0.00E+00 | 1.04E+01 | 2.08E+01 | 0.00E+00
f10 | 1000 | mean | 1.21E+00 | 2.31E−05 | 2.23E+00 | 1.00E+01 | 4.28E−01 | 6.18E−01 | 3.02E−15
f10 | 1000 | std | 1.85E−01 | 4.94E−06 | 8.08E−01 | 1.02E+01 | 8.77E−01 | 8.19E−01 | 1.77E−15
f10 | 2000 | mean | 1.15E+00 | 2.08E−05 | 2.38E+00 | 1.14E+01 | 8.72E−01 | 4.86E−01 | 2.90E−15
f10 | 2000 | std | 2.33E−01 | 3.85E−06 | 6.31E−01 | 1.01E+01 | 1.44E+00 | 6.56E−01 | 1.79E−15
f10 | 5000 | mean | 1.32E+00 | 2.54E−05 | 1.98E+00 | 1.00E+01 | 1.27E−01 | 7.07E−01 | 3.73E−15
f10 | 5000 | std | 2.92E−01 | 5.67E−06 | 1.06E+00 | 1.06E+01 | 4.91E−01 | 7.60E−01 | 1.50E−15
Table 13. Experimental results and Friedman test results of VFSCPSO on CEC2005 test suite f11–f13 (D = 1000, 2000, and 5000).
Problem | Dimensions | Metric | GA | DE | PSO | SCA | AWPSO | SCDLPSO | VFSCPSO
f11 | 1000 | mean | 1.07E+00 | 1.25E−07 | 4.78E−01 | 9.86E−11 | 2.16E+01 | 1.46E−01 | 0.00E+00
f11 | 1000 | std | 1.62E−02 | 1.45E−07 | 2.24E−01 | 5.40E−10 | 3.00E+00 | 1.06E−01 | 0.00E+00
f11 | 2000 | mean | 1.06E+00 | 8.15E−08 | 5.19E−01 | 5.92E−17 | 2.09E+01 | 1.27E−01 | 0.00E+00
f11 | 2000 | std | 2.55E−02 | 6.74E−08 | 1.82E−01 | 2.23E−16 | 3.17E+00 | 1.25E−01 | 0.00E+00
f11 | 5000 | mean | 1.08E+00 | 1.73E−07 | 5.33E−01 | 7.77E−17 | 2.02E+01 | 1.43E−01 | 0.00E+00
f11 | 5000 | std | 2.77E−02 | 1.80E−07 | 2.25E−01 | 2.46E−16 | 2.46E+00 | 1.06E−01 | 0.00E+00
f12 | 1000 | mean | 3.82E−02 | 7.68E−11 | 9.59E−01 | 3.74E−03 | 1.35E−01 | 2.80E−01 | 2.07E−04
f12 | 1000 | std | 3.52E−02 | 2.89E−11 | 1.12E+00 | 1.79E−03 | 1.83E−01 | 4.57E−01 | 1.50E−04
f12 | 2000 | mean | 4.07E−02 | 6.60E−11 | 7.61E−01 | 4.06E−03 | 1.14E−01 | 1.34E−01 | 2.55E−04
f12 | 2000 | std | 3.70E−02 | 1.80E−11 | 1.05E+00 | 2.04E−03 | 1.50E−01 | 2.91E−01 | 2.07E−04
f12 | 5000 | mean | 2.88E−02 | 8.07E−11 | 7.17E−01 | 4.58E−03 | 2.07E−01 | 3.04E−01 | 3.38E−04
f12 | 5000 | std | 1.36E−02 | 1.82E−11 | 5.85E−01 | 1.82E−03 | 3.82E−01 | 4.16E−01 | 1.47E−04
f13 | 1000 | mean | 3.57E−01 | 3.09E−10 | 3.67E+00 | 2.44E−01 | 3.66E−04 | 5.12E−03 | 2.59E−03
f13 | 1000 | std | 1.03E−01 | 1.06E−10 | 2.35E+00 | 1.69E−01 | 2.01E−03 | 1.07E−02 | 1.70E−03
f13 | 2000 | mean | 3.87E−01 | 2.87E−10 | 2.90E+00 | 2.42E−01 | 3.71E−19 | 2.75E−03 | 3.72E−03
f13 | 2000 | std | 1.30E−01 | 7.50E−11 | 1.98E+00 | 1.59E−01 | 1.52E−18 | 4.88E−03 | 3.25E−03
f13 | 5000 | mean | 4.12E−01 | 3.42E−10 | 2.38E+00 | 2.55E−01 | 2.49E−22 | 2.20E−03 | 3.93E−03
f13 | 5000 | std | 1.27E−01 | 1.42E−10 | 2.27E+00 | 1.37E−01 | 3.97E−22 | 4.55E−03 | 5.45E−03
Table 14. Results of the Friedman test for Table 12 and Table 13.
Dimensions | Metric | GA | DE | PSO | SCA | AWPSO | SCDLPSO | VFSCPSO
1000 | Count | 0 | 1 | 0 | 4 | 0 | 1 | 9
1000 | Avg. rank | 5.615385 | 3.923077 | 5.153846 | 3.615385 | 4 | 3.692308 | 1.538462
1000 | Overall rank | 7 | 4 | 6 | 2 | 5 | 3 | 1
2000 | Count | 0 | 1 | 0 | 3 | 2 | 0 | 9
2000 | Avg. rank | 5.692308 | 3.769231 | 5 | 3.538462 | 3.923077 | 3.769231 | 1.615385
2000 | Overall rank | 7 | 3 | 6 | 2 | 5 | 3 | 1
5000 | Count | 0 | 2 | 0 | 3 | 1 | 1 | 9
5000 | Avg. rank | 5.307692 | 3.692308 | 4.923077 | 3.461538 | 3.769231 | 3.769231 | 1.615385
5000 | Overall rank | 7 | 3 | 6 | 2 | 4 | 4 | 1
Table 15. Experimental results and Friedman test results of VFSCPSO on CEC2005 test suite f1–f10 (D = 1000, 2000, and 5000).
Problem | Dimensions | Metric | SFLA | BBO | MA | DO | OMA | POA | VFSCPSO
f1 | 1000 | mean | 3.86E+04 | 2.48E+00 | 5.88E+04 | 1.46E−11 | 8.33E−02 | 5.98E−15 | 1.24E−70
f1 | 1000 | std | 3.33E+03 | 1.10E+00 | 8.76E+03 | 9.50E−12 | 4.69E−02 | 7.73E−15 | 3.04E−70
f1 | 2000 | mean | 4.00E+04 | 2.79E+00 | 5.77E+04 | 1.37E−11 | 8.54E−02 | 5.95E−15 | 6.89E−70
f1 | 2000 | std | 2.97E+03 | 1.36E+00 | 6.89E+03 | 6.85E−12 | 4.83E−02 | 9.53E−15 | 1.72E−69
f1 | 5000 | mean | 3.82E+04 | 2.05E+00 | 5.64E+04 | 1.40E−11 | 6.77E−02 | 7.34E−15 | 5.15E−70
f1 | 5000 | std | 3.71E+03 | 7.78E−01 | 8.85E+03 | 6.97E−12 | 3.75E−02 | 1.20E−14 | 1.16E−69
f2 | 1000 | mean | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 1.39E−36
f2 | 1000 | std | NaN | NaN | NaN | NaN | NaN | NaN | 2.12E−36
f2 | 2000 | mean | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 4.50E−36
f2 | 2000 | std | NaN | NaN | NaN | NaN | NaN | NaN | 7.65E−36
f2 | 5000 | mean | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 4.77E−30
f2 | 5000 | std | NaN | NaN | NaN | NaN | NaN | NaN | 1.49E−29
f3 | 1000 | mean | 7.38E+04 | 2.26E+04 | 5.85E+05 | 2.44E+03 | 8.25E+03 | 1.21E+04 | 2.80E+03
f3 | 1000 | std | 2.24E+04 | 7.97E+03 | 3.20E+05 | 1.28E+03 | 3.21E+03 | 6.56E+03 | 1.72E+03
f3 | 2000 | mean | 7.33E+04 | 2.01E+04 | 6.12E+05 | 1.99E+03 | 8.00E+03 | 1.31E+04 | 2.59E+03
f3 | 2000 | std | 1.93E+04 | 1.24E+04 | 3.88E+05 | 9.75E+02 | 2.81E+03 | 4.65E+03 | 1.86E+03
f3 | 5000 | mean | 5.72E+04 | 2.62E+04 | 6.65E+05 | 1.80E+03 | 7.96E+03 | 1.60E+04 | 2.67E+03
f3 | 5000 | std | 2.10E+04 | 8.79E+03 | 5.38E+05 | 1.10E+03 | 2.65E+03 | 4.21E+03 | 1.97E+03
f4 | 1000 | mean | 9.80E+01 | 8.83E+01 | 9.65E+01 | 9.57E+01 | 1.90E+01 | 9.85E+01 | 1.65E−25
f4 | 1000 | std | 1.34E−01 | 7.79E−01 | 1.39E+00 | 7.60E−01 | 1.10E+00 | 1.57E−01 | 5.94E−25
f4 | 2000 | mean | 9.92E+01 | 9.40E+01 | 9.76E+01 | 9.82E+01 | 2.05E+01 | 9.91E+01 | 1.15E−24
f4 | 2000 | std | 6.99E−02 | 3.40E−01 | 9.93E−01 | 3.68E−01 | 9.22E−01 | 1.08E−01 | 3.25E−24
f4 | 5000 | mean | 9.97E+01 | 9.76E+01 | 9.83E+01 | 9.94E+01 | 2.25E+01 | 9.96E+01 | 8.03E−24
f4 | 5000 | std | 2.54E−02 | 2.24E−01 | 5.56E−01 | 1.23E−01 | 6.30E−01 | 3.45E−02 | 1.41E−23
f5 | 1000 | mean | 9.78E+07 | 2.24E+02 | 1.97E+08 | 2.42E+01 | 1.71E+02 | 5.96E+01 | 3.92E+00
f5 | 1000 | std | 1.31E+07 | 1.01E+02 | 3.42E+07 | 2.46E−01 | 8.53E+01 | 5.08E+01 | 9.17E+00
f5 | 2000 | mean | 8.91E+07 | 3.20E+02 | 1.99E+08 | 2.43E+01 | 1.54E+02 | 4.84E+01 | 1.33E+00
f5 | 2000 | std | 1.43E+07 | 3.38E+02 | 3.68E+07 | 2.28E−01 | 4.60E+01 | 2.93E+01 | 4.84E+00
f5 | 5000 | mean | 8.54E+07 | 2.70E+02 | 1.82E+08 | 2.43E+01 | 1.95E+02 | 4.96E+01 | 4.04E−01
f5 | 5000 | std | 1.60E+07 | 1.41E+02 | 3.56E+07 | 2.80E−01 | 7.38E+01 | 4.18E+01 | 2.99E−01
f6 | 1000 | mean | 3.94E+04 | 3.13E+00 | 5.65E+04 | 0.00E+00 | 4.43E+01 | 0.00E+00 | 0.00E+00
f6 | 1000 | std | 4.51E+03 | 1.78E+00 | 5.35E+03 | 0.00E+00 | 3.14E+01 | 0.00E+00 | 0.00E+00
f6 | 2000 | mean | 4.00E+04 | 2.85E+00 | 6.16E+04 | 0.00E+00 | 3.48E+01 | 0.00E+00 | 0.00E+00
f6 | 2000 | std | 3.60E+03 | 1.95E+00 | 5.63E+03 | 0.00E+00 | 2.50E+01 | 0.00E+00 | 0.00E+00
f6 | 5000 | mean | 3.86E+04 | 3.60E+00 | 5.80E+04 | 0.00E+00 | 3.07E+01 | 0.00E+00 | 0.00E+00
f6 | 5000 | std | 2.74E+03 | 1.84E+00 | 6.89E+03 | 0.00E+00 | 2.14E+01 | 0.00E+00 | 0.00E+00
f7 | 1000 | mean | 4.38E+01 | 2.87E−02 | 8.73E+01 | 3.03E−03 | 3.31E−02 | 1.32E−02 | 1.10E−04
f7 | 1000 | std | 6.81E+00 | 9.19E−03 | 2.11E+01 | 1.52E−03 | 1.15E−02 | 4.26E−03 | 8.58E−05
f7 | 2000 | mean | 4.08E+01 | 2.65E−02 | 9.02E+01 | 2.75E−03 | 3.35E−02 | 1.43E−02 | 7.92E−05
f7 | 2000 | std | 8.05E+00 | 8.82E−03 | 2.13E+01 | 1.24E−03 | 9.93E−03 | 6.04E−03 | 5.14E−05
f7 | 5000 | mean | 3.85E+01 | 2.52E−02 | 9.44E+01 | 4.22E−03 | 3.21E−02 | 1.40E−02 | 1.20E−04
f7 | 5000 | std | 8.03E+00 | 7.27E−03 | 2.98E+01 | 3.08E−03 | 7.21E−03 | 4.00E−03 | 9.93E−05
f8 | 1000 | mean | −4.41E+03 | −1.26E+04 | −2.74E+03 | −9.84E+03 | −7.39E+03 | −8.69E+03 | −1.22E+04
f8 | 1000 | std | 3.61E+02 | 3.39E+00 | 4.13E+02 | 5.43E+02 | 1.19E+03 | 7.09E+02 | 5.32E+02
f8 | 2000 | mean | −4.32E+03 | −1.26E+04 | −2.66E+03 | −1.01E+04 | −7.49E+03 | −8.35E+03 | −1.24E+04
f8 | 2000 | std | 3.02E+02 | 3.32E+00 | 3.01E+02 | 6.63E+02 | 1.14E+03 | 9.00E+02 | 3.20E+02
f8 | 5000 | mean | −4.50E+03 | −1.26E+04 | −2.79E+03 | −1.01E+04 | −7.27E+03 | −8.62E+03 | −1.24E+04
f8 | 5000 | std | 3.25E+02 | 3.94E+00 | 4.65E+02 | 6.18E+02 | 1.05E+03 | 5.29E+02 | 1.62E+02
f9 | 1000 | mean | 3.34E+02 | 1.19E+00 | 3.88E+02 | 8.87E+00 | 5.18E+01 | 3.90E+01 | 0.00E+00
f9 | 1000 | std | 1.38E+01 | 5.13E−01 | 3.05E+01 | 6.49E+00 | 2.19E+01 | 9.08E+00 | 0.00E+00
f9 | 2000 | mean | 3.33E+02 | 1.28E+00 | 3.88E+02 | 1.16E+01 | 5.03E+01 | 3.75E+01 | 0.00E+00
f9 | 2000 | std | 1.46E+01 | 4.90E−01 | 3.77E+01 | 8.83E+00 | 2.02E+01 | 1.05E+01 | 0.00E+00
f9 | 5000 | mean | 3.27E+02 | 1.25E+00 | 4.03E+02 | 9.30E+00 | 4.59E+01 | 3.48E+01 | 0.00E+00
f9 | 5000 | std | 1.19E+01 | 3.97E−01 | 4.29E+01 | 6.10E+00 | 1.58E+01 | 1.03E+01 | 0.00E+00
f10 | 1000 | mean | 1.96E+01 | 7.09E−01 | 2.01E+01 | 8.32E−07 | 4.20E+00 | 1.86E+01 | 3.02E−15
f10 | 1000 | std | 2.39E−01 | 1.85E−01 | 3.38E−01 | 2.63E−07 | 9.80E−01 | 5.05E+00 | 1.77E−15
f10 | 2000 | mean | 1.96E+01 | 7.10E−01 | 2.00E+01 | 7.40E−07 | 3.81E+00 | 1.69E+01 | 2.90E−15
f10 | 2000 | std | 1.94E−01 | 1.73E−01 | 2.71E−01 | 2.77E−07 | 1.01E+00 | 7.30E+00 | 1.79E−15
f10 | 5000 | mean | 1.95E+01 | 6.40E−01 | 2.01E+01 | 7.51E−07 | 3.96E+00 | 1.86E+01 | 3.73E−15
f10 | 5000 | std | 3.44E−01 | 1.58E−01 | 3.84E−01 | 2.71E−07 | 1.13E+00 | 5.15E+00 | 1.50E−15
Table 16. Experimental results and Friedman test results of VFSCPSO on CEC2005 test suite f11–f13 (D = 1000, 2000, and 5000).
Problem | Dimensions | Metric | SFLA | BBO | MA | DO | OMA | POA | VFSCPSO
f11 | 1000 | mean | 3.51E+02 | 1.00E+00 | 5.33E+02 | 1.13E−02 | 1.59E−01 | 1.97E−02 | 0.00E+00
f11 | 1000 | std | 3.00E+01 | 3.03E−02 | 4.56E+01 | 1.81E−02 | 8.94E−02 | 2.26E−02 | 0.00E+00
f11 | 2000 | mean | 3.64E+02 | 1.00E+00 | 5.24E+02 | 2.59E−02 | 1.13E−01 | 1.10E−02 | 0.00E+00
f11 | 2000 | std | 2.67E+01 | 4.19E−02 | 5.96E+01 | 2.94E−02 | 4.27E−02 | 1.07E−02 | 0.00E+00
f11 | 5000 | mean | 3.50E+02 | 1.00E+00 | 5.41E+02 | 1.28E−02 | 1.27E−01 | 8.86E−03 | 0.00E+00
f11 | 5000 | std | 3.40E+01 | 6.12E−02 | 7.51E+01 | 1.96E−02 | 6.31E−02 | 8.91E−03 | 0.00E+00
f12 | 1000 | mean | 1.53E+08 | 1.75E−02 | 3.86E+08 | 6.22E−09 | 4.13E−01 | 7.26E−02 | 2.07E−04
f12 | 1000 | std | 3.19E+07 | 1.90E−02 | 1.07E+08 | 3.29E−09 | 5.70E−01 | 1.34E−01 | 1.50E−04
f12 | 2000 | mean | 1.57E+08 | 1.18E−02 | 4.40E+08 | 4.65E−09 | 5.00E−01 | 3.63E−02 | 2.55E−04
f12 | 2000 | std | 3.49E+07 | 1.01E−02 | 9.30E+07 | 1.75E−09 | 4.83E−01 | 6.95E−02 | 2.07E−04
f12 | 5000 | mean | 1.30E+08 | 1.67E−02 | 4.82E+08 | 4.48E−09 | 2.98E−01 | 1.38E−02 | 3.38E−04
f12 | 5000 | std | 1.87E+07 | 1.25E−02 | 1.15E+08 | 2.37E−09 | 4.02E−01 | 5.35E−02 | 1.47E−04
f13 | 1000 | mean | 3.62E+08 | 1.21E−01 | 9.07E+08 | 7.28E−08 | 1.45E+01 | 3.30E−03 | 2.59E−03
f13 | 1000 | std | 8.35E+07 | 3.59E−02 | 1.65E+08 | 3.78E−08 | 1.45E+01 | 5.12E−03 | 1.70E−03
f13 | 2000 | mean | 3.68E+08 | 1.33E−01 | 9.23E+08 | 8.00E−08 | 5.49E+00 | 3.30E−03 | 3.72E−03
f13 | 2000 | std | 7.66E+07 | 4.09E−02 | 1.28E+08 | 3.21E−08 | 8.12E+00 | 5.17E−03 | 3.25E−03
f13 | 5000 | mean | 3.28E+08 | 1.47E−01 | 9.12E+08 | 6.51E−08 | 5.22E+00 | 3.66E−03 | 3.93E−03
f13 | 5000 | std | 5.95E+07 | 3.90E−02 | 1.75E+08 | 2.20E−08 | 7.12E+00 | 5.36E−03 | 5.45E−03
Table 17. Results of the Friedman test for Table 15 and Table 16.
Dimensions | Metric | SFLA | BBO | MA | DO | OMA | POA | VFSCPSO
1000 | Count | 0 | 1 | 0 | 4 | 0 | 1 | 9
1000 | Avg. rank | 5.692308 | 3.538462 | 6.461538 | 2.076923 | 4.076923 | 3.461538 | 1.307692
1000 | Overall rank | 6 | 4 | 7 | 2 | 5 | 3 | 1
2000 | Count | 0 | 1 | 0 | 4 | 0 | 1 | 9
2000 | Avg. rank | 5.769231 | 3.538462 | 6.384615 | 2.230769 | 4.076923 | 3.230769 | 1.384615
2000 | Overall rank | 6 | 4 | 7 | 2 | 5 | 3 | 1
5000 | Count | 0 | 1 | 0 | 4 | 0 | 1 | 9
5000 | Avg. rank | 5.769231 | 3.615385 | 6.384615 | 2.230769 | 4.076923 | 3.153846 | 1.384615
5000 | Overall rank | 6 | 4 | 7 | 2 | 5 | 3 | 1
Table 18. Experimental results and Friedman test results of VFSCPSO on CEC2005 test suite f1–f13 (D = 10,000).
Problem | Metric | GA | DE | PSO | SCA | AWPSO | SCDLPSO | VFSCPSO
f1 | best | 5.03E+00 | 1.89E−09 | 7.25E−02 | 1.87E−45 | 1.76E−21 | 2.79E−47 | 5.73E−73
f1 | mean | 7.10E+00 | 2.83E−09 | 7.90E−01 | 1.20E−36 | 5.80E−19 | 3.42E−38 | 6.16E−70
f1 | worst | 9.79E+00 | 3.91E−09 | 1.84E+00 | 6.97E−36 | 3.65E−18 | 3.40E−37 | 5.44E−69
f1 | std | 1.52E+00 | 6.70E−10 | 5.94E−01 | 2.51E−36 | 1.14E−18 | 1.07E−37 | 1.7E−69
f2 | best | 6.55E+04 | 6.55E+04 | 5.11E+00 | 6.55E+04 | 8.20E+01 | 3.35E−12 | 2.05E−36
f2 | mean | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 1.66E−30
f2 | worst | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 1.64E−29
f2 | std | NaN | NaN | NaN | NaN | NaN | NaN | 5.17E−30
f3 | best | 8.77E+03 | 1.51E+04 | 2.95E+02 | 1.04E+04 | 4.28E+02 | 1.67E+02 | 1.07E−01
f3 | mean | 1.33E+04 | 2.91E+04 | 5.57E+03 | 1.76E+04 | 1.18E+03 | 9.88E+02 | 1.04E+04
f3 | worst | 1.86E+04 | 4.00E+04 | 2.13E+04 | 2.55E+04 | 2.16E+03 | 3.55E+03 | 6.99E+04
f3 | std | 4.22E+03 | 8.11E+03 | 5.94E+03 | 3.86E+03 | 5.25E+02 | 1.10E+03 | 1.44E+04
f4 | best | 9.94E+01 | 9.99E+01 | 2.11E+01 | 9.99E+01 | 1.36E+01 | 2.85E+01 | 8.02E−27
f4 | mean | 9.94E+01 | 9.99E+01 | 2.84E+01 | 1.00E+02 | 1.40E+01 | 3.18E+01 | 1.32E−21
f4 | worst | 9.94E+01 | 1.00E+02 | 3.43E+01 | 1.00E+02 | 1.44E+01 | 3.44E+01 | 1.20E−20
f4 | std | 2.02E−02 | 1.78E−02 | 4.10E+00 | 7.58E−03 | 2.54E−01 | 2.12E+00 | 3.77E−21
f5 | best | 1.87E+02 | 3.86E+01 | 2.98E+01 | 2.62E+01 | 7.21E+00 | 1.27E+01 | 5.06E−03
f5 | mean | 3.22E+02 | 7.23E+01 | 5.75E+01 | 2.64E+01 | 2.11E+01 | 4.95E+01 | 5.65E−01
f5 | worst | 4.81E+02 | 1.03E+02 | 1.83E+02 | 2.70E+01 | 2.74E+01 | 1.05E+02 | 1.67E+00
f5 | std | 8.36E+01 | 1.98E+01 | 4.64E+01 | 2.18E−01 | 5.22E+00 | 3.41E+01 | 5.63E−01
f6 | best | 2.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00
f6 | mean | 6.30E+00 | 0.00E+00 | 1.57E+01 | 0.00E+00 | 1.30E+00 | 2.50E+00 | 0.00E+00
f6 | worst | 1.00E+01 | 0.00E+00 | 3.80E+01 | 0.00E+00 | 3.00E+00 | 1.00E+01 | 0.00E+00
f6 | std | 2.54E+00 | 0.00E+00 | 1.18E+01 | 0.00E+00 | 1.06E+00 | 3.78E+00 | 0.00E+00
f7 | best | 4.61E−02 | 2.82E−02 | 4.09E−03 | 1.46E−03 | 4.08E−03 | 2.50E−02 | 1.98E−05
f7 | mean | 6.59E−02 | 3.44E−02 | 8.23E−03 | 3.56E−03 | 2.76E−01 | 3.63E−02 | 6.67E−05
f7 | worst | 7.95E−02 | 4.39E−02 | 2.06E−02 | 6.13E−03 | 2.69E+00 | 4.84E−02 | 1.92E−04
f7 | std | 1.17E−02 | 4.83E−03 | 5.36E−03 | 1.53E−03 | 8.49E−01 | 7.52E−03 | 5.50E−05
f8 | best | −1.26E+04 | −1.26E+04 | −1.01E+04 | −6.25E+03 | −3.44E+03 | −1.05E+04 | −1.26E+04
f8 | mean | −1.26E+04 | −1.26E+04 | −8.37E+03 | −5.70E+03 | −3.13E+03 | −8.97E+03 | −1.23E+04
f8 | worst | −1.25E+04 | −1.26E+04 | −7.32E+03 | −5.18E+03 | −2.81E+03 | −6.27E+03 | −1.15E+04
f8 | std | 5.29E+00 | 1.12E−05 | 9.60E+02 | 3.48E+02 | 1.97E+02 | 1.29E+03 | 3.66E+02
f9 | best | 1.04E+00 | 4.06E−01 | 0.00E+00 | 0.00E+00 | 1.79E+01 | 1.19E+01 | 0.00E+00
f9 | mean | 2.49E+00 | 2.41E+00 | 3.36E+01 | 0.00E+00 | 2.99E+01 | 2.48E+01 | 0.00E+00
f9 | worst | 4.33E+00 | 7.38E+00 | 1.35E+02 | 0.00E+00 | 4.18E+01 | 4.97E+01 | 0.00E+00
f9 | std | 9.98E−01 | 2.02E+00 | 4.78E+01 | 0.00E+00 | 7.91E+00 | 1.07E+01 | 0.00E+00
f10 | best | 7.96E−01 | 1.78E−05 | 2.13E+00 | 4.44E−15 | 3.29E−12 | 4.31E−14 | 8.88E−16
f10 | mean | 1.36E+00 | 2.27E−05 | 2.56E+00 | 8.03E+00 | 1.13E+00 | 4.00E−01 | 2.31E−15
f10 | worst | 1.77E+00 | 2.96E−05 | 3.19E+00 | 2.01E+01 | 3.09E+00 | 1.50E+00 | 4.44E−15
f10 | std | 3.27E−01 | 3.41E−06 | 3.47E−01 | 1.04E+01 | 1.24E+00 | 6.49E−01 | 1.83E−15
f11 | best | 1.04E+00 | 2.64E−08 | 3.98E−01 | 0.00E+00 | 1.72E+01 | 2.70E−02 | 0.00E+00
f11 | mean | 1.06E+00 | 9.74E−08 | 6.45E−01 | 5.55E−17 | 2.01E+01 | 1.21E−01 | 0.00E+00
f11 | worst | 1.10E+00 | 3.26E−07 | 8.51E−01 | 5.55E−16 | 2.26E+01 | 3.32E−01 | 0.00E+00
f11 | std | 1.55E−02 | 8.96E−08 | 1.50E−01 | 1.76E−16 | 1.85E+00 | 8.98E−02 | 0.00E+00
f12 | best | 1.15E−02 | 5.50E−11 | 0.00E+00 | 0.00E+00 | 5.90E−24 | 1.57E−32 | 0.00E+00
f12 | mean | 3.67E−02 | 8.46E−11 | 2.70E−01 | 1.25E−03 | 2.28E−01 | 2.95E−01 | 7.03E−05
f12 | worst | 1.02E−01 | 1.22E−10 | 2.87E+00 | 5.84E−03 | 8.32E−01 | 2.54E+00 | 4.58E−04
f12 | std | 2.83E−02 | 1.89E−11 | 6.17E−01 | 1.76E−03 | 2.76E−01 | 7.90E−01 | 1.37E−04
f13 | best | 2.17E−01 | 1.82E−10 | 1.74E+00 | 7.85E−02 | 1.68E−24 | 1.35E−32 | 8.58E−04
f13 | mean | 3.81E−01 | 3.54E−10 | 3.68E+00 | 2.23E−01 | 4.57E−20 | 6.50E−03 | 3.35E−03
f13 | worst | 6.29E−01 | 5.96E−10 | 7.57E+00 | 3.93E−01 | 2.58E−19 | 2.10E−02 | 6.47E−03
f13 | std | 1.21E−01 | 1.16E−10 | 1.70E+00 | 1.20E−01 | 8.15E−20 | 7.47E−03 | 1.70E−03
Count | 0 | 3 | 0 | 2 | 1 | 1 | 9
Avg. rank | 5 | 3.307692 | 5 | 3.692308 | 4.076923 | 3.846154 | 1.615385
Overall rank | 6 | 2 | 6 | 3 | 5 | 4 | 1
Table 19. Experimental results and Friedman test results of VFSCPSO on CEC2005 test suite f1–f13 (D = 10,000).
Problem | Metric | SFLA | BBO | MA | DO | OMA | POA | VFSCPSO
f1 | best | 3.23E+04 | 1.49E+00 | 5.26E+04 | 2.18E−12 | 2.97E−02 | 7.88E−16 | 5.73E−73
f1 | mean | 3.72E+04 | 2.83E+00 | 6.07E+04 | 1.85E−11 | 7.90E−02 | 3.50E−15 | 6.16E−70
f1 | worst | 4.07E+04 | 4.47E+00 | 6.43E+04 | 6.82E−11 | 2.34E−01 | 1.48E−14 | 5.44E−69
f1 | std | 3.12E+03 | 8.98E−01 | 3.80E+03 | 1.88E−11 | 6.04E−02 | 4.35E−15 | 1.7E−69
f2 | best | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 2.05E−36
f2 | mean | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 1.66E−30
f2 | worst | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 6.55E+04 | 1.64E−29
f2 | std | NaN | NaN | NaN | NaN | NaN | NaN | 5.17E−30
f3 | best | 4.29E+04 | 8.03E+03 | 9.37E+04 | 4.31E+02 | 4.74E+03 | 6.91E+03 | 1.07E−01
f3 | mean | 6.07E+04 | 1.84E+04 | 5.89E+05 | 2.46E+03 | 8.21E+03 | 1.40E+04 | 1.04E+04
f3 | worst | 7.77E+04 | 2.84E+04 | 1.88E+06 | 3.94E+03 | 1.14E+04 | 1.91E+04 | 6.99E+04
f3 | std | 1.27E+04 | 7.01E+03 | 5.55E+05 | 1.16E+03 | 2.25E+03 | 4.11E+03 | 1.44E+04
f4 | best | 9.98E+01 | 9.86E+01 | 9.80E+01 | 9.97E+01 | 2.26E+01 | 9.97E+01 | 8.02E−27
f4 | mean | 9.99E+01 | 9.87E+01 | 9.91E+01 | 9.97E+01 | 2.37E+01 | 9.98E+01 | 1.32E−21
f4 | worst | 9.99E+01 | 9.88E+01 | 9.97E+01 | 9.99E+01 | 2.58E+01 | 9.98E+01 | 1.20E−20
f4 | std | 1.10E−02 | 5.57E−02 | 6.02E−01 | 5.16E−02 | 9.51E−01 | 2.38E−02 | 3.77E−21
f5 | best | 7.18E+07 | 1.31E+02 | 1.47E+08 | 2.41E+01 | 1.02E+02 | 2.50E+01 | 5.06E−03
f5 | mean | 7.67E+07 | 2.11E+02 | 2.02E+08 | 2.43E+01 | 1.63E+02 | 6.20E+01 | 5.65E−01
f5 | worst | 8.15E+07 | 2.89E+02 | 2.58E+08 | 2.45E+01 | 2.56E+02 | 9.57E+01 | 5.65E−01
f5 | std | 6.83E+06 | 5.77E+01 | 3.21E+07 | 1.40E−01 | 5.22E+01 | 3.18E+01 | 5.63E−01
f6 | best | 2.93E+04 | 0.00E+00 | 5.25E+04 | 0.00E+00 | 6.00E+00 | 0.00E+00 | 0.00E+00
f6 | mean | 3.17E+04 | 1.80E+00 | 6.07E+04 | 0.00E+00 | 3.02E+01 | 0.00E+00 | 0.00E+00
f6 | worst | 3.40E+04 | 3.00E+00 | 7.12E+04 | 0.00E+00 | 7.20E+01 | 0.00E+00 | 0.00E+00
f6 | std | 3.31E+03 | 9.19E−01 | 6.40E+03 | 0.00E+00 | 2.09E+01 | 0.00E+00 | 0.00E+00
f7 | best | 3.12E+01 | 1.42E−02 | 8.83E+01 | 7.93E−04 | 1.77E−02 | 5.36E−03 | 1.98E−05
f7 | mean | 3.41E+01 | 2.75E−02 | 1.04E+02 | 2.32E−03 | 3.31E−02 | 1.42E−02 | 6.67E−05
f7 | worst | 3.70E+01 | 3.86E−02 | 1.24E+02 | 4.19E−03 | 5.11E−02 | 2.76E−02 | 1.92E−04
f7 | std | 4.06E+00 | 7.74E−03 | 1.06E+01 | 1.11E−03 | 1.10E−02 | 6.80E−03 | 5.50E−05
f8 | best | −4.49E+03 | −1.26E+04 | −4.25E+03 | −1.10E+04 | −7.82E+03 | −9.36E+03 | −1.26E+04
f8 | mean | −4.32E+03 | −1.26E+04 | −2.76E+03 | −1.03E+04 | −7.03E+03 | −8.15E+03 | −1.23E+04
f8 | worst | −4.14E+03 | −1.26E+04 | −1.95E+03 | −9.33E+03 | −6.13E+03 | −6.73E+03 | −1.15E+04
f8 | std | 2.50E+02 | 1.47E+00 | 6.52E+02 | 4.82E+02 | 5.99E+02 | 8.31E+02 | 3.66E+02
f9 | best | 3.30E+02 | 5.83E−01 | 3.37E+02 | 4.99E−12 | 2.49E+01 | 2.19E+01 | 0.00E+00
f9 | mean | 3.32E+02 | 1.20E+00 | 3.77E+02 | 4.98E+00 | 4.04E+01 | 3.40E+01 | 0.00E+00
f9 | worst | 3.33E+02 | 1.69E+00 | 4.11E+02 | 1.99E+01 | 6.47E+01 | 4.68E+01 | 0.00E+00
f9 | std | 2.37E+00 | 3.92E−01 | 2.30E+01 | 7.30E+00 | 1.30E+01 | 8.45E+00 | 0.00E+00
f10 | best | 1.93E+01 | 4.77E−01 | 1.95E+01 | 4.04E−07 | 3.37E+00 | 3.99E−07 | 8.88E−16
f10 | mean | 1.95E+01 | 6.44E−01 | 2.02E+01 | 7.28E−07 | 4.30E+00 | 1.80E+01 | 2.31E−15
f10 | worst | 1.97E+01 | 8.68E−01 | 2.05E+01 | 1.03E−06 | 5.89E+00 | 2.00E+01 | 4.44E−15
f10 | std | 2.80E−01 | 1.43E−01 | 2.96E−01 | 2.03E−07 | 8.45E−01 | 6.31E+00 | 1.83E−15
f11 | best | 2.93E+02 | 9.53E−01 | 4.74E+02 | 7.42E−11 | 5.07E−02 | 1.55E−15 | 0.00E+00
f11 | mean | 3.12E+02 | 1.01E+00 | 5.48E+02 | 1.74E−02 | 1.37E−01 | 1.82E−02 | 0.00E+00
f11 | worst | 3.31E+02 | 1.03E+00 | 6.42E+02 | 7.30E−02 | 3.21E−01 | 6.61E−02 | 0.00E+00
f11 | std | 2.71E+01 | 2.67E−02 | 5.74E+01 | 2.66E−02 | 8.52E−02 | 2.22E−02 | 0.00E+00
f12 | best | 1.37E+08 | 5.14E−03 | 2.63E+08 | 3.08E−09 | 2.32E−04 | 3.36E−16 | 0.00E+00
f12 | mean | 1.54E+08 | 9.97E−03 | 4.29E+08 | 5.34E−09 | 5.24E−01 | 5.18E−02 | 7.03E−05
f12 | worst | 1.72E+08 | 2.19E−02 | 6.61E+08 | 9.00E−09 | 1.46E+00 | 2.07E−01 | 4.58E−04
f12 | std | 2.46E+07 | 4.69E−03 | 1.17E+08 | 1.92E−09 | 5.48E−01 | 7.33E−02 | 1.37E−04
f13 | best | 2.36E+08 | 4.93E−02 | 6.26E+08 | 3.03E−08 | 1.19E−02 | 2.51E−15 | 8.58E−04
f13 | mean | 3.04E+08 | 1.36E−01 | 8.71E+08 | 5.75E−08 | 2.60E+00 | 4.39E−03 | 3.35E−03
f13 | worst | 3.72E+08 | 2.12E−01 | 1.13E+09 | 1.16E−07 | 1.14E+01 | 1.10E−02 | 6.47E−03
f13 | std | 9.59E+07 | 4.63E−02 | 1.64E+08 | 2.38E−08 | 4.00E+00 | 5.67E−03 | 1.70E−03
Count | 0 | 1 | 0 | 4 | 0 | 1 | 9
Avg. rank | 5.76923 | 3.538461 | 6.384615 | 2.153846 | 4 | 3.384615 | 1.384615
Overall rank | 6 | 4 | 7 | 2 | 5 | 3 | 1
Table 20. System performance parameters tuned by the five algorithms.
Metric | VFSCPSO | PSO | SCA | FA | GA
f_min | 39.26745 | 39.57185 | 39.83905 | 40.26736 | 48.93632
σ | 0.012408 | 0.016805 | 0.023350 | 0.02546 | 0.007273
t_s | 1.012408 | 1.016805 | 1.023350 | 1.02546 | 1.007273
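Table 20 compares the best fitness f_min and the resulting performance indices obtained when each algorithm tunes the PID parameters. As an illustrative sketch of the kind of fitness function such a tuner minimizes (the first-order unit-feedback plant, the ITAE-style cost, and every name below are assumptions for demonstration only, not the control system used in the paper):

```python
# Illustrative PID-tuning fitness of the kind an optimizer such as VFSCPSO
# can minimize; the plant and cost here are assumptions, not the paper's setup.
def pid_fitness(kp: float, ki: float, kd: float,
                setpoint: float = 1.0, t_end: float = 5.0, dt: float = 1e-3) -> float:
    y = 0.0                 # plant output, starting at rest
    integ = 0.0             # running integral of the error
    prev_err = setpoint     # error at t = 0, so the first derivative term is 0
    cost, t = 0.0, 0.0
    while t < t_end:
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv  # PID control law
        y += dt * (-y + u)                      # assumed plant: dy/dt = -y + u
        cost += t * abs(err) * dt               # ITAE: integral of t * |e(t)|
        prev_err = err
        t += dt
    return cost

# Evaluate one candidate gain vector; a swarm optimizer would search over
# (kp, ki, kd) and report the best cost found, analogous to f_min above.
print(pid_fitness(2.0, 1.0, 0.1))
```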