Competitive Coevolution-Based Improved Phasor Particle Swarm Optimization Algorithm for Solving Continuous Problems

Abstract: Particle swarm optimization (PSO) is a population-based heuristic algorithm that is widely used for optimization problems. Phasor PSO (PPSO), an extension of PSO, uses the phase angle θ to create a more balanced PSO, owing to its increased ability to adapt to the environment without parameters like the inertia weight w. The PPSO algorithm performs well for small-sized populations but needs improvements for large populations in the case of rapidly growing complex problems and dimensions. This study introduces a competitive coevolution process to enhance the capability of PPSO for global optimization problems. Competitive coevolution decomposes the problem into multiple sub-problems, and these sub-swarms coevolve toward a better solution. The best solution is selected and replaces the current sub-swarm for the next competition. This process increases population diversity, reduces premature convergence, and increases the memory efficiency of PPSO. Simulation results using PPSO, fuzzy-dominance-based many-objective particle swarm optimization (FMPSO), and the improved competitive multi-swarm PPSO (ICPPSO) are generated to assess the convergence power of the proposed algorithm. The experimental results show that ICPPSO achieves a dominating performance. The ICPPSO results for the average fitness show average improvements of 15%, 20%, 30%, and 35% over PPSO and FMPSO. The Wilcoxon statistical significance test also confirms a significant difference in the performance of the ICPPSO, PPSO, and FMPSO algorithms at a 0.05 significance level.


Introduction
Particle swarm optimization (PSO) was initially introduced by Kennedy and Eberhart [1]. PSO is a simple stochastic search technique for optimization that is motivated by the natural swarming behavior of bird flocking and fish schooling. PSO performs very well in finding good solutions for optimization problems. The PSO algorithm has the advantages of fast convergence, easy code implementation, low computational complexity, and few parameter adjustments [2,3]. Instead of using only the fittest particle, PSO uses all particles of the population for computation due to its social behavior. All particles update their positions as determined by their individual best position (pbest) and overall best position (gbest).
The productivity of PSO is assessed by the minimum number of iterations needed to find an optimal solution with the required accuracy and minimal computation. The performance of the PSO algorithm depends strongly on the selection of parameters. In PSO parametric modification, the PSO parameters are adjusted to improve the convergence and exploration capabilities [4]. Parameter adjustments include modification of the inertia weight, cognitive factor, and social factor, techniques for defining the personal best pbest and overall best gbest, and different prototypes for the velocity update. The inertia weight w was introduced by Bansal et al. [5] to maintain and stabilize a broader scope of the search. To improve PSO using the inertia weight, different strategies like the increasing inertia weight [5], decreasing inertia weight, and adaptive inertia weight are used.
Numerical optimization problems are important in computing and are commonly solved using evolutionary computing algorithms such as PSO and differential evolution [6]. When considering optimization problems, PSO has been successfully applied in both the continuous and discrete domains [7]. Among several discrete PSO variations, binary PSO [7] is possibly the most well-known model, and it has been applied to many problems, e.g., job-shop scheduling [8]. PSO is a heuristic algorithm like a genetic algorithm; however, it is computationally less expensive [9]. PSO parameters depend on specific applications and are adapted according to the application. In multi-objective optimization problems, the PSO algorithm with multiple sub-populations has been used to achieve prominent results [10].
PSO has been efficiently applied to many real-world problems. Most real-world problems have increasing complexity, so the proficiency and effectiveness of PSO need to be continuously improved. Despite the many advantages of PSO, there are still certain research gaps that require attention. These research gaps include early convergence, memory efficiency, slow convergence toward the global optimum, PSO without parameters, and slow computational speed for large populations. Many variants of PSO have been successfully developed to handle large populations, where the population is divided into multiple sub-swarms, and each particle maintains the information of the local best. In the cooperative approach, the whole population is divided into many sub-swarms. Each sub-swarm coevolves with the others to form a complete solution. In competitive coevolution, the population is divided into many sub-swarms, and two sub-swarms are selected to compete for coevolution, with the swarm with the best fitness earning the right to represent itself.
Efficient and effective information sharing between sub-swarms is also an important research area, where each sub-swarm shares its individual best fitness with the others. To enhance performance and population diversity, a competitive coevolution process is applied to the sub-swarms. Phasor PSO (PPSO) provides efficient results compared to other PSO variants in many multidimensional optimization problems. Introducing the phase angle in PPSO enhances the effectiveness and adaptability of the algorithm. However, there is still a need to modify PPSO to solve global optimization problems. For a small population, PPSO achieves effective results, but for a large population, PPSO needs to handle a large number of particles efficiently.
PPSO lacks population diversity due to a lack of competitive processes during evolution. This reduced diversity results in the degradation of the performance of PPSO due to stagnation in local optima. Diversity is incorporated by employing the competitive coevolution process, where the fitness of individuals is estimated following the exchange of information with individuals from other sub-populations.

Research Motivation
PPSO lacks population diversity due to a lack of competitive processes during the evolution of large-sized populations. Consequently, the results become trapped in local optima. This diversity issue reduces the convergence speed of the PPSO algorithm and ultimately reduces its performance. The inclusion of a competitive process for exchanging information during the evolutionary process of PPSO helps achieve convergence more quickly without compromising performance.

Problem Statement
In the case of a large population, the PPSO algorithm suffers from a slow convergence speed due to the large number of particles. The selection of a suitable technique for dividing the population into multiple sub-swarms is crucial for PPSO. A large population size also requires a large amount of memory to manage operations. PPSO spends more time and computational load on managing memory for a large population, leading to premature convergence within the population and increased time complexity. PPSO requires interaction among all individuals to reduce both the space and time complexity of the algorithm. The best individuals sharing their values with the population help increase the performance of the algorithm.

Research Significance
The incorporation of a competitive coevolution process in PPSO helps improve the performance of the PPSO algorithm. The competitive coevolution process improves population diversity and the performance of the algorithm. The coevolutionary process makes PPSO more memory-efficient and adaptable, resulting in a higher efficacy rate. The increased complexity in terms of dimensions, number of particles, and population range shows that the improved competitive multi-swarm phasor PSO (ICPPSO) performs better.

Research Contributions
This study makes the following contributions:

•
A competitive coevolution concept is incorporated into PPSO, which helps enhance diversity in the population and avoid local optima issues.

•
The hybridization of PPSO with the competitive coevolution method helps enhance the performance of the PPSO algorithm.

•
The experimental results for the average fitness performance metric of the proposed algorithm are presented using six standard benchmark functions.
The rest of this study is divided into six sections. The operation of PSO and its parameters are described in Section 2. Section 3 discusses the existing literature and PSO variants. The operation of PPSO and its related challenges are presented in Section 4. Section 5 elaborates on the proposed approach. The experimental results and discussions are provided in Section 6. Finally, the conclusions are presented in Section 7.

Operation of PSO and Its Parameters
PSO is regarded as a well-organized, population-based, control-parameter-driven procedure for the global optimization of different problems. PSO algorithms and genetic algorithms (GAs) share several common aspects. The initialization of the population with a random solution and the subsequent generation updates to search for optima are almost the same. Genetic operators, like crossover and mutation, are not used in PSO because of its social behavior. Instead, in PSO, each particle progresses by cooperating and competing with other individuals [11]. Each particle modifies its movement according to its own best-reached position and the overall best position among all particles.
In PSO, every probable solution is characterized as a particle. Each particle has certain parameters that are continuously adjusted to update its position. These parameters include the current position, the speed of the particle, and the best position found by the particle thus far. At the beginning of PSO, all particles in the population are initialized with a random position, and their velocity is set to 0. In PSO, every particle in the D-dimensional space is treated as a point. X_i represents the position of the i-th particle and determines the current quality of each particle. Each particle in the population updates its velocity V to track its movement over t iterations, where X_i = (x_i1, x_i2, x_i3, . . ., x_iD) signifies the position of the particle and V_i = (v_i1, v_i2, v_i3, . . ., v_iD) represents the velocity of the particle for the d-th dimension (d = 1, 2, . . ., D). These parameters are updated using Equations (1) and (2):

V_i^(k+1) = V_i^k + c_1 r_1 (Pbest_i − X_i^k) + c_2 r_2 (Gbest − X_i^k)    (1)

X_i^(k+1) = X_i^k + V_i^(k+1)    (2)
where c_1 and c_2 are acceleration controller coefficients, r_1 and r_2 are randomly generated coefficients within the range (0, 1), X is the position of a particle, and k represents the current iteration. The range [−v_max, v_max] is defined for the velocity V_i to stop the particle from moving outside the problem exploration area. Pbest is computed through a cognitive learning approach, in which the best solution thus far of the current particle is stored.
Gbest is computed through a social learning approach, in which the best solution thus far among all particles in the population is stored. Pbest and Gbest are calculated using Equations (3) and (4):

Pbest_i^(k+1) = X_i^(k+1) if f(X_i^(k+1)) is better than f(Pbest_i^k); otherwise Pbest_i^k    (3)

Gbest = the Pbest_i with the best fitness among all particles    (4)
In Equation (3), Pbest is selected using the greedy criterion based on the fitness of the new position and the current position of the particle. Equation (3) is used to update the Pbest of each population member in each iteration. Gbest in Equation (4) represents the best particle overall.
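As an illustration, the per-particle update described by Equations (1)-(3) can be sketched in Python. This is a minimal sketch, not the paper's implementation: the function names are ours, minimization is assumed, and c1, c2, and vmax are example settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_update(x, v, pbest, gbest, c1=2.0, c2=2.0, vmax=1.0):
    """One PSO step for a single particle (Equations (1)-(2)): pull the
    velocity toward pbest and gbest, clamp it to [-vmax, vmax], then move."""
    r1 = rng.random(x.shape)  # random coefficients in (0, 1), fresh each call
    r2 = rng.random(x.shape)
    v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -vmax, vmax)  # keep the particle inside the exploration area
    return x + v, v

def greedy_pbest(x_new, f_new, pbest, f_pbest):
    """Greedy Pbest selection (Equation (3)): keep whichever of the old and
    new positions has the better (lower) fitness, assuming minimization."""
    return (x_new, f_new) if f_new < f_pbest else (pbest, f_pbest)
```

The velocity clamp implements the [−v_max, v_max] restriction mentioned above; without it, early velocities can grow large enough to throw particles out of the search area.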

Inertia Weight
The success of the optimization algorithm depends critically on an accurate balance between local and global searches during the progress of the iterations. To maintain this balance between local and global searches, Shi et al. [12] proposed a new parameter w for the velocity update equation, called the inertia weight. The velocity update Equation (5) is expressed as

V_id^(k+1) = ω V_id^k + c_1 r_1 (Pbest_id − X_id^k) + c_2 r_2 (Gbest_d − X_id^k)    (5)

where V_id is the velocity and ω is the inertia weight. Equation (5) is used to update the velocity when updating the particle positions of the swarm or sub-swarm. If the value of the inertia weight is high, particles have a higher probability of traveling to new search areas. However, if the value of the inertia weight is small, the probability of exploring the search area is reduced, resulting in smaller updates to the particles' velocity. In the first type of strategy, the inertia weight w remains constant (e.g., at 0.4) throughout the search process, but later, researchers employed different strategies and made changes to the inertia weight. These strategies can be classified into three types. To determine the optimum value in dynamic locations, Eberhart and Shi [13] effectively used a random value for the inertia weight. The second type is the time-varying inertia weight approach, in which the inertia weight varies with the number of iterations over time. A linearly decreasing inertia approach was introduced by Lei et al. [14], which effectively contributed to the positive modification of PSO characteristics. Similarly, other linear methods [15] and nonlinear approaches [16] have proven to be effective inertia weight approaches. Arumugam et al. [17] introduced a method where the inertia weight is estimated from the ratio of the Gbest fitness to the mean of the best fitnesses in every iteration. The selection of the inertia weight strategy is always problem-dependent.
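For instance, a linearly decreasing schedule of the second (time-varying) type can be written as follows. This is a sketch; the 0.9 → 0.4 range is a commonly used setting, not one prescribed by the text.

```python
def linear_inertia(k, k_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight: starts high to favor global
    exploration and decays toward w_end to favor local exploitation.
    The 0.9 -> 0.4 endpoints are illustrative defaults."""
    return w_start - (w_start - w_end) * k / k_max
```

At iteration k = 0 this returns w_start, and at k = k_max it returns w_end, so the swarm gradually shifts from exploration to exploitation.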

Cognitive and Social Learning Coefficients
The cognitive and social learning coefficients also play an important role in bringing stability to local and global searches. In Equation (5), c_1 is the cognitive coefficient and c_2 is the social learning coefficient. Equation (6) represents the cognitive part, and Equation (7) represents the social learning part. Initially, the values of c_1 and c_2 were equal and set to 2.0 by Shi et al. [4]. The cognitive coefficient c_1 controls the step size of particles taken toward the personal best solution, and the social coefficient c_2 controls the step size taken toward the global best solution. The social coefficient c_2 has a greater impact on improving convergence than c_1. Time-varying cognitive coefficients were implemented by Cai et al. [18], which focus on convergence speed in the first phase, while the second phase focuses on global search competency. Large values of the cognitive coefficient c_1 and social coefficient c_2 correspond to a strong local search ability, and small values of c_1 and c_2 correspond to a broad global search space. In Equation (6), r_1 and r_2 are randomly generated numbers, CP is the cognitive part, and SP is the social part of the velocity.
Equation (6) is used to calculate the relative contribution of the cognitive part by utilizing the pbest position of the current individual:

CP = c_1 r_1 (Pbest_i − X_i)    (6)

The social part of the velocity is calculated using Equation (7), which focuses on the contribution of the gbest individual:

SP = c_2 r_2 (Gbest − X_i)    (7)

Pseudocode of PSO Algorithm
In Algorithm 1 for PSO, all particles X of the population are initialized with random positions, and the fitness of each particle is computed. Then, the velocity V is updated using the velocity update equation, followed by the position update of particle X. Pbest and Gbest are updated for each particle in the population. This process is repeated until the termination criteria are reached.

Algorithm 1 Pseudocode for the PSO algorithm.
Require: X is an individual, NP is the population size, V is the velocity, Pbest is the personal best position, Gbest is the global best position, c1 and c2 are the cognitive and social amplification factors, r1 and r2 are two random numbers, and k is the iteration number.
Ensure: Evolved population with Gbest at the optimal position, achieving the optimal fitness value of the problem.
1: Initialize each particle X with a random position and zero velocity
2: Evaluate the fitness of each particle; initialize Pbest and Gbest
3: while the termination criteria are not met do
4:    for each particle do
5:        Calculate the particle velocity according to the velocity update equation
6:        Update the particle position according to Equation (2)
7:        Update Pbest and Gbest
8:    end for
9: end while
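The loop of Algorithm 1 can be realized as a compact, runnable sketch. The sphere objective, the search bounds, and the parameter values (w, c1, c2, swarm size, iteration count) are our illustrative choices, not settings from the paper.

```python
import numpy as np

def sphere(x):
    """A standard benchmark: minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def pso(f, dim=5, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO following the structure of Algorithm 1."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # random initial positions
    v = np.zeros((n_particles, dim))             # zero initial velocities
    pbest = x.copy()
    f_pbest = np.array([f(p) for p in x])
    g = int(np.argmin(f_pbest))
    gbest, f_gbest = pbest[g].copy(), float(f_pbest[g])
    for _ in range(iters):                       # termination: fixed budget
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < f_pbest                  # greedy Pbest update
        pbest[improved], f_pbest[improved] = x[improved], fx[improved]
        g = int(np.argmin(f_pbest))              # Gbest update
        if f_pbest[g] < f_gbest:
            gbest, f_gbest = pbest[g].copy(), float(f_pbest[g])
    return gbest, f_gbest
```

With these convergent parameter settings the swarm reliably reaches the neighborhood of the sphere optimum within the iteration budget.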

Particle Swarm Optimization Variants and Literature Review
The particle swarm optimization algorithm is a well-known evolutionary computing optimization algorithm that is very effective for solving real-world optimization problems [19]. Many variations of PSO algorithms have been presented, including various swarms, new effective learning policies, diversity-retaining strategies, and hybrid algorithms, to resolve several optimization complications. These enhancements of PSO include the population initialization process, the adjustment of parameters (inertia weight, coefficients, pbest, gbest), sub-swarm techniques for large-scale optimization, and hybrid methods with other algorithms. For population enhancement, different techniques can be used.
PSO-sono was introduced by Meng et al. [20] for numerical optimization problems. They presented a hybrid paradigm based on a sorted swarm, adaptation schemes for the constriction coefficient and the paradigm ratio, and fully informed variations of the PSO algorithm. The experimental results showed the competitiveness of the presented enhancement of the existing PSO for standard benchmark functions. In terms of modified PSO, the concept of distance-based enhancement was presented by Lazzus et al. in [21]. A random, guided new direction helped improve the search capability of the PSO algorithm. The presented method showed significant improvements compared to standard PSO on benchmark optimization functions.
Nabi et al. [22] introduced task-scheduling-based adaptive PSO in their research work to balance loads in cloud computing. They adaptively updated the inertia weight to update the velocity of the PSO algorithm using an adaptive linearly decreasing approach. The research results demonstrated significant improvements compared to five other inertia weight techniques used in the PSO algorithm. The concept of an optimal control parameter in PSO was introduced by Eltamaly in his research work on energy systems [23]. The author used two nested PSOs to optimize the control parameters, where the inner PSO was used as a fitness function for the outer PSO. The experimental results showed that the presented approach helped optimize parameters for standard benchmark problems.
Quantum PSO is a new discrete PSO algorithm that utilizes the concept of a quantum individual. Quantum PSO utilizes the concept of a quantum bit within the quantum particle. The quantum bit can probabilistically take a value of 0 or 1 upon random observation [24]. The concept of soliton-based quantum-behaved PSO was presented by Fallahi and Taghadosi to solve optimization problems in their research work [25]. In non-linear situations, solitons can rearrange and reproduce themselves stably without becoming trapped. The experimental results of soliton quantum-behaved PSO showed significant improvements when considering probability density function-based motion scenarios.

Phasor PSO
PPSO is a new, improved, simple, and adjustable PSO model proposed by Ghasemi et al. [31]. PPSO is built by adding the phase angle θ to the particle update equations, aiming to achieve optimal results in high-dimensional optimization problems. In PPSO, each control parameter generated by the algorithm is merged with the phase angle. The increase in optimization efficiency is the most significant advantage of PPSO. Because of the phase angle θ, PSO becomes a non-parametric algorithm with simpler calculations. In PPSO, periodic trigonometric functions, e.g., sin and cos, are used as control parameters.
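The phasor idea can be sketched as follows. The exact exponent and angle-update forms below follow one commonly cited formulation and are our assumption; they should be verified against Ghasemi et al. [31] before being treated as the algorithm's definition.

```python
import numpy as np

def ppso_step(x, v, pbest, gbest, theta):
    """One PPSO-style update: the control coefficients are derived from the
    phase angle theta instead of fixed parameters (sketch only; the exact
    coefficient forms should be checked against the PPSO paper [31])."""
    c_p = np.abs(np.cos(theta)) ** (2 * np.sin(theta))  # weight toward pbest
    c_g = np.abs(np.sin(theta)) ** (2 * np.cos(theta))  # weight toward gbest
    v = c_p * (pbest - x) + c_g * (gbest - x)
    x = x + v
    # The phase angle itself evolves, so the coefficients change each step.
    theta = theta + np.abs(np.cos(theta) + np.sin(theta)) * 2 * np.pi
    return x, v, theta
```

The key point the sketch illustrates is that no inertia weight or acceleration coefficients need to be tuned: everything is driven by the evolving angle θ.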

Single Population Approach
Initially, PSO works best with small population sizes. However, modifications and upgrades by researchers have made PSO efficient for large populations and multimodal problems. PSO works better with different population sizes depending on the given problem [32]. In PSO, the social interaction within the population is one of the key factors of the algorithm. The population has a communication structure among particles that enables them to collectively share search space experiences, aiming to solve complex problems and improve population diversity. This social network for information exchange is called a topology. PSO has three commonly used topologies, as shown in Figure 1 [33]. The global topology shown in Figure 1 is also known as the star topology, in which all the particles in the swarm are interconnected. Each particle in the population can share information directly. The local topology, also known as the ring topology, is one where each particle in the swarm has only two neighbor particles to share information with. The von Neumann topology is a local best topology in which each particle in the swarm has only four neighbor particles to share information with. In the von Neumann topology, particles are arranged in a grid-like structure. All three topologies increase PSO performance depending on the given problem.
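The neighbor sets implied by the ring and von Neumann topologies can be computed as below. The index conventions are ours: particles are numbered 0..n−1, and the grid wraps at its edges.

```python
def ring_neighbors(i, n):
    """Local (ring) topology: each particle shares information with only
    its two adjacent neighbors on the ring."""
    return [(i - 1) % n, (i + 1) % n]

def von_neumann_neighbors(i, rows, cols):
    """Von Neumann topology: particles sit on a rows x cols grid and share
    information with the four neighbors above, below, left, and right
    (wrapping around at the grid edges)."""
    r, c = divmod(i, cols)
    return [((r - 1) % rows) * cols + c,   # up
            ((r + 1) % rows) * cols + c,   # down
            r * cols + (c - 1) % cols,     # left
            r * cols + (c + 1) % cols]     # right
```

The global (star) topology needs no neighbor function: every particle's neighborhood is the entire swarm, so Gbest is shared directly.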

Synchronous and Asynchronous Approaches to PSO
The original PSO uses a synchronous approach, in which the positions and velocities of the particles are updated after the entire swarm completes the current iteration. This kind of PSO is also known as S-PSO. S-PSO offers good information to all particles of the population. In S-PSO, each particle of the swarm has the benefit of picking a better neighbor and exploiting the material delivered by this neighbor. However, premature convergence is a common shortcoming of S-PSO. In asynchronous PSO (A-PSO) [34], once the performance of a particle has been calculated, the pbest is updated immediately. In A-PSO, particle information is updated in the current iteration instead of using information from the previous iteration. A-PSO has a shorter execution time but less adequate information due to the updating of information during current iterations. Due to its reliable solution quality and robust exploitation, S-PSO often performs better than A-PSO. So, the selection between S-PSO and A-PSO is problem-dependent. In some problems, S-PSO performs well, whereas in others, A-PSO shows good performance.
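The difference between the two schemes lies in where the global best is refreshed, as the following sketch shows (helper names and parameter values are ours; minimization is assumed):

```python
import numpy as np

def _move(i, x, v, pbest, gbest, rng, w=0.7, c=1.5):
    """Velocity and position update for particle i (shared by both schemes)."""
    r1, r2 = rng.random(x.shape[1]), rng.random(x.shape[1])
    v[i] = w * v[i] + c * r1 * (pbest[i] - x[i]) + c * r2 * (gbest - x[i])
    x[i] = x[i] + v[i]

def step_sync(x, v, pbest, fp, gbest, fg, f, rng):
    """S-PSO: all particles move using the gbest of the previous iteration;
    the global best is refreshed once, after the whole sweep."""
    for i in range(len(x)):
        _move(i, x, v, pbest, gbest, rng)
        fx = f(x[i])
        if fx < fp[i]:
            pbest[i], fp[i] = x[i].copy(), fx
    g = int(np.argmin(fp))                  # gbest refreshed once per iteration
    return (pbest[g].copy(), float(fp[g])) if fp[g] < fg else (gbest, fg)

def step_async(x, v, pbest, fp, gbest, fg, f, rng):
    """A-PSO: the global best is refreshed immediately after each particle
    moves, so later particles in the same sweep already see the new value."""
    for i in range(len(x)):
        _move(i, x, v, pbest, gbest, rng)
        fx = f(x[i])
        if fx < fp[i]:
            pbest[i], fp[i] = x[i].copy(), fx
        if fp[i] < fg:                       # gbest refreshed inside the sweep
            gbest, fg = pbest[i].copy(), float(fp[i])
    return gbest, fg
```

In `step_async`, a particle late in the sweep can already exploit an improvement found moments earlier, which is exactly the trade-off described above: fresher information at the cost of each particle seeing a partially updated swarm.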

Multi-Population Approach
In PSO, population diversity is increased by dividing the entire population into multiple sub-swarms [35]. At the start of the algorithm, the entire population is divided into multiple sub-swarms. Then, the PSO algorithm is applied to each sub-swarm, and each sub-swarm computes the best results in its region. In the last step, all sub-swarms share their best results and compute a single result for the whole population. Different evolutionary and coevolutionary methods are used to exchange information among sub-swarms.

Modified PSO for Multimodal Function
In the original PSO, the whole population is used for computation, but in the modified PSO (MPSO), the whole population is divided into multiple sub-swarms according to the arrangement of the particles [36]. The best particle in each sub-swarm is stored as the local best Lbest of the current sub-swarm. Instead of using the global best in the velocity update, the Lbest is used as the best particle of each sub-swarm. The velocity update equation is thus modified, as shown in Equation (8):

V_id^(k+1) = ω V_id^k + c_1 r_1 (Pbest_id − X_id^k) + c_2 r_2 (Lbest_d − X_id^k)    (8)

Instead of using the Gbest in the velocity update equation, each sub-swarm uses its best particle thus far within its range. Because of the multiple sub-swarms, several optimal solutions, like the Lbest and Pbest, can be found, which can be useful in multimodal optimization problems. For a population p, with M as the total number of sub-populations and N as the total number of particles in the population, the number of particles in each sub-population is calculated as N/M.
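The sub-swarm partition and Lbest bookkeeping can be sketched as follows (function names are ours; N is assumed to be divisible by M, as the N/M formula implies):

```python
def split_subswarms(population, M):
    """Divide a population of N particles into M sub-swarms of N/M
    particles each, keeping the particles' original arrangement."""
    size = len(population) // M
    return [population[m * size:(m + 1) * size] for m in range(M)]

def local_bests(subswarms, f):
    """The Lbest of each sub-swarm is its fittest particle so far
    (minimization of f is assumed)."""
    return [min(s, key=f) for s in subswarms]
```

Each entry returned by `local_bests` then plays the role of Lbest in Equation (8) for its own sub-swarm, in place of a single shared Gbest.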

Multi-Adaptive Strategy-Based PSO
In multi-adaptive-strategy-based PSO (MAPSO), the entire population is divided into several small sub-swarms [37]. Two adaptive strategies, i.e., adaptive learning exemplars and adaptive population size, are introduced into the sub-swarm mechanism to improve the comprehensive performance of MAPSO. According to the fitness value, a search particle in a sub-swarm can adaptively select its own learning particles. Throughout the entire optimization process, computational resources are rationally distributed based on the adaptation of the population. The adaptive learning strategy facilitates a favorable search behavior. In the adaptive population size strategy, the particle deletion process can speed up the convergence of MAPSO. Introducing more particles, generated with the help of the differential evolution algorithm, into the current population in the adaptive population size strategy can provide more helpful information. MAPSO is a relatively time-consuming variant of PSO, especially on simple unimodal functions. It is more appropriate for complex problems than for simple unimodal problems.

PSO Based on Multi-Exemplar and Forgetting Capabilities
The multi-exemplar and forgetting capabilities of PSO are employed in a new version of PSO, called expanded PSO (XPSO) [11]. Initially, XPSO enhances the social learning aspect of each particle by using certain exemplars, learning from both the best particle in the local neighborhood and the best experience of the entire population, referred to as the global best. Then, diverse forgetting abilities are assigned to different particles by XPSO. In addition, the acceleration coefficients of each particle can be adjusted during the evolution process. It is very important to select a good neighbor for each particle to extract more useful information from the exemplar and provide positive guidance for the particles. In XPSO, a random order of numbers is assigned to particles, and neighbors are determined for the current particle based on this assigned random order. Although XPSO demonstrates reasonable performance compared to other PSO variants, two areas have room for further improvement and research. One is the efficiency of the newly introduced parameter in XPSO. The other involves finding ways to extract more valuable knowledge from the collective experiences of the entire population and then applying this information to the parametric adjustment and learning models of PSO.

Multiple Archive-Based PSO
The idea of triple-archive-based PSO was introduced by Xia et al. in their research work [38]. They effectively addressed the issues related to learning models and handled proper exemplar selection in their paper. They stored the proficient individuals in the archive and then reused the archive.

Coevolution Algorithm in Multi-Population Problems
Cooperative coevolution and competitive coevolution are two categories of coevolution algorithms. These approaches have been used by numerous researchers in their research on various versions of the PSO algorithm. The coevolution algorithm is a technique that extends the evolutionary algorithm to multi-objective and large-population problems [10]. Coevolution improves diversity and reduces the risk of premature convergence of the whole population. It also aids in decomposing the problem into sub-parts [39]. In coevolution, the whole population is divided into two or more sub-populations, and the progress of all sub-populations is accelerated simultaneously [40]. In each sub-population, the fitness of particles is evaluated based on collaboration with individuals from other populations. In an evolutionary algorithm, each sub-population advances by manipulating its own values. So, the performance of a coevolutionary algorithm depends on direct communication between two or more particles from different sub-populations. In evolutionary algorithms, the fitness of particles is determined empirically and is independent of population circumstances. In contrast, the coevolutionary algorithm uses a context-dependent approach: the fitness of an individual is estimated after an exchange of information with individuals from other sub-populations [38].
Niu et al. [41] introduced collaborative and competitive versions of multi-swarm cooperative PSO using the concept of master and slave swarms. Diversity in multiple slave swarms was maintained by independently running PSO on each slave swarm, which then communicated with the master swarm to evolve the knowledge of the master swarm. Wang et al. [42] presented a two-step cooperative PSO, where a two-swarm strategy is used in the first step to perform dimension partition and integration using a cooperative strategy. The velocity is controlled adaptively in the second step using an amplification factor of the velocity with a value of 0.9 to control the landscape of the considered problem. Hu et al. [43] tackled the problem of path failure in reconstructing the network topology by introducing an immune cooperative particle swarm optimization algorithm in the domain of heterogeneous wireless sensor networks. They considered macro-nodes and source sensors, creating sub-swarms in the form of k disjoint communication paths to provide alternative paths in case of broken paths. They used the immune cooperative mechanism to enhance the global search capability. The concept of adaptive cooperative PSO was introduced by Wang [44] to tackle the curse of dimensionality in cooperative PSO by dividing the swarm into sub-swarms of smaller dimensions. They controlled the exploration and exploitation capabilities of sub-swarms by exchanging information cooperatively using the adaptive inertia weight.
Li et al. [45] introduced a mixed-mutation-strategy-based multi-population cooperation PSO for higher-dimensional optimization problems. Multi-population cooperation PSO utilized the mean learning strategy based on dynamic segments in the coevolution process for information sharing. Covariance-guidance-based multi-population cooperative PSO [35] divides the population into inferior, exploratory, and elite groups based on the Euclidean distance from the globally leading particle. Cooperation between the inferior, exploratory, and elite groups is used in the cooperative process to maintain the balance between exploration and exploitation through information exchange. The concept of multiple populations for multiple objectives was introduced in multi-population coevolutionary particle swarm optimization [46]. The presented algorithm was used for financial management in selecting specific values of stocks while adding cardinality constraints to balance return and risk to obtain a feasible solution. Health monitoring systems normally exchange data and information in closely cooperating medical applications. Tang et al. [47] introduced coevolution-based quantum-behaved PSO to optimize the allocation of resources in the cognitive radio sensor network domain. The experimental results showed excellent performance in the considered domain. Madni et al. [48] introduced the concept of cooperative coevolution-based multi-guide PSO by applying cooperative coevolution to each objective of the sub-swarm, aiming to reduce computational costs. The cooperative coevolution-based multi-guide PSO algorithm exhibited excellent performance in high-dimensional optimization problems.

Cooperative Coevolution Mechanism
Cooperative coevolution was introduced by van den Bergh [49]. In cooperative coevolution, the initial population is divided into multiple sub-swarms. Then, the particle under assessment from the current sub-swarm is evaluated by gathering the best particles from the other sub-populations to form a complete solution. After the evaluation of each particle, the archive is updated. The archive stores the leading individuals throughout the evolution. A completely appropriate solution formed by the sub-swarms will be placed into the archive if no other leading solutions are found. If any leading solution is found during the iterations, the stored solution is replaced with the leading solution.
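The evaluation step, in which a particle is completed with the best particles of the other sub-swarms before being scored, can be sketched as follows (names are ours):

```python
def evaluate_in_context(candidate, sub_index, context, f):
    """Cooperative coevolution scoring: the particle under assessment from
    sub-swarm `sub_index` is spliced into a complete solution built from the
    best-known particles of the other sub-swarms, then scored as a whole."""
    full = list(context)           # best particle contributed by each sub-swarm
    full[sub_index] = candidate    # substitute the particle under assessment
    return f(full)
```

Because each sub-swarm optimizes only its own component while borrowing the others from the context, a particle's fitness depends on the rest of the population, which is exactly the cooperative coupling described above.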
The sub-swarms are assessed in a repetitive mode in cooperative coevolution. The parameters and arrangements of the current sub-swarm are updated before advancing to the next sub-swarm. This method of updating is based on a reflexive, anti-symmetric, and transitive order, such as using Pareto ranks and niche counts to resolve rank ties. Pareto ranks are computed using Equation (9):

rank_i = 1 + n_i    (9)

where n_i is the number of archive members dominating the i-th particle. The lower-ranked particle is selected. In the event of a tie between two particles, the one with the lower niche count is chosen. The selection of a dominant particle helps increase the diversity of the overall solution. Cooperative coevolution has significantly improved the performance of team objectives.
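Equation (9) and the rank/niche tie-break can be sketched for a minimization problem as below; the niche count is passed in as a function, since its exact definition is not given here.

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(particle, archive):
    """Equation (9): rank_i = 1 + n_i, where n_i counts the archive
    members that dominate the particle."""
    return 1 + sum(dominates(a, particle) for a in archive)

def select(p1, p2, archive, niche):
    """Prefer the lower Pareto rank; break ties with the lower niche count."""
    r1, r2 = pareto_rank(p1, archive), pareto_rank(p2, archive)
    if r1 != r2:
        return p1 if r1 < r2 else p2
    return p1 if niche(p1) <= niche(p2) else p2
```

A non-dominated particle gets rank 1; every archive member that dominates it pushes the rank higher, so lower ranks identify particles closer to the Pareto front.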
At the start of the competitive coevolution mechanism, the whole population is divided into multiple sub-swarms, and the sub-swarm selection probability variable is initialized using a uniform distribution. In a uniform distribution, the chance of selection of each sub-swarm is equal, so variable1 is initialized to 1/D for uniform probability selection. After the first iteration, the probability variable (variable1) is updated depending on the outcome of the competition between sub-swarms. In each cycle, the probability variable is allocated to the i-th sub-swarm, and the competitor sub-swarm is designated using roulette wheel selection. After the selection of the two sub-swarms, the solutions of the competitor and current sub-swarms coevolve with all the other sub-swarms to form two new sub-swarms. The sub-swarm with the improved solution is selected and included in the population, with its selection probability carried into the next iteration. The probability variable (variable1), which is first initialized with a uniform probability, is then updated using Equation (10),
where α denotes the learning level. The value of p for the i-th sub-swarm is thus increased if the i-th sub-swarm is better adapted with respect to decision variable1. This value is used in the cooperative coevolution process in the next iteration of MOPSO.
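Since the body of Equation (10) is not reproduced here, the sketch below assumes a standard reinforcement-style update with learning level α (the winner's probability increases, and the vector is renormalized); treat the exact form as an assumption:

```python
def update_probability(p, winner, alpha=0.1):
    """Reinforce the winning sub-swarm's selection probability.
    Assumed form of the Equation (10) update: p_w <- p_w + alpha * (1 - p_w),
    followed by renormalization so the values remain a distribution."""
    p = p[:]                                  # work on a copy
    p[winner] += alpha * (1.0 - p[winner])    # winner becomes more likely
    total = sum(p)
    return [x / total for x in p]             # renormalize

D = 4
p = [1.0 / D] * D                 # uniform initialization, as described above
p = update_probability(p, winner=2)
```

After one update, sub-swarm 2 is more likely to be selected by the roulette wheel in the next iteration, while the untouched sub-swarms remain equally likely.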

Competitive Coevolution Mechanism
In competitive coevolution, a variable p is allocated to each sub-swarm, indicating the probability of representing certain decision variables. Competitive coevolution uses a different strategy from cooperative coevolution: the population is still divided into multiple swarms, but only two sub-swarms, the current sub-swarm and the contestant sub-swarm, compete to be the representative of variable1. Only one sub-swarm (current or contestant) can be the representative at a time. In competitive coevolution, fitness reflects only the robustness of the sub-swarm; an improvement in one sub-swarm decreases the relative performance of the other. The continuous competition between two solutions, each defeating the other in turn, results in increased solution quality. The evolution of the whole population in the form of sub-swarms helps prevent the solution from becoming trapped in local optima.

Phasor PSO Challenges and Limitations
PPSO is an improved version of PSO that uses periodic trigonometric functions, namely sin and cos, as control parameters. The periodic nature of sin and cos is harnessed to characterize all control parameters of PSO. To meet this goal, each particle is linked with a one-dimensional phase angle θ. The inertia weight is set to 0, so the velocity update Equation (11) of PSO is now expressed with the phase angle θ, and the position is updated using Equation (12).
The phasor coefficient for the cognitive aspect of PPSO is represented by p(θ_i^iter) in Equation (11), and the coefficient for the social aspect is represented by g(θ_i^iter). The position of PPSO is updated using the phasor velocity V_i in Equation (12).
In the velocity update of PPSO, p(θ_i^iter) and g(θ_i^iter) are calculated from the sine and cosine of the angle θ in the following equations: the cognitive aspect of PPSO uses a phasor-based amplification factor, denoted p(θ_i^iter) in Equation (13), and the social aspect uses a phasor-based amplification factor, denoted g(θ_i^iter) in Equation (14). In PPSO, the population is initially generated randomly in the D-dimensional space, as in the original PSO, but with the addition of a phase angle for each particle, drawn from a uniform distribution θ_i^(iter=1) = U(0, 2π), and an initial velocity limit V_max,i^(iter=1). The velocities of all particles are then recalculated using Equation (11), and the positions of the particles are updated using Equation (12), as in the original PSO. For the next iteration, the angle θ and the maximum velocity of each particle are calculated using Equations (15) and (16), and the iterations are repeated until the maximum number of iterations is reached.
The phase angle θ of the i-th particle for the next iteration is calculated using an amplified summation of trigonometric functions, as shown in Equation (15). The maximum velocity is updated using the non-parametric Equation (16). One of the main benefits of PPSO over other PSO variants is its ability to enhance optimization efficiency even for higher-dimensional problems. For shaping the control parameters of PSO, adopting the phasor angle is an effective, flexible, and trustworthy strategy. Despite these advantages over PSO, PPSO has several challenges and limitations.
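The PPSO update can be sketched for a single particle in one dimension as follows. The phasor coefficients |cos θ|^(2 sin θ) and |sin θ|^(2 cos θ) follow the published PPSO formulation; treat this as a hedged sketch of Equations (11)-(16), with all variable names ours:

```python
import math

def ppso_step(x, v, pbest, gbest, theta, vmax, xmin, xmax):
    """One PPSO update for one particle in one dimension (sketch)."""
    p_coef = abs(math.cos(theta)) ** (2 * math.sin(theta))  # cognitive factor, Eq. (13)
    g_coef = abs(math.sin(theta)) ** (2 * math.cos(theta))  # social factor, Eq. (14)
    v = p_coef * (pbest - x) + g_coef * (gbest - x)         # Eq. (11): no inertia weight
    v = max(-vmax, min(vmax, v))                            # clamp to the velocity limit
    x = max(xmin, min(xmax, x + v))                         # Eq. (12): position update
    theta += abs(math.cos(theta) + math.sin(theta)) * 2 * math.pi  # Eq. (15): angle update
    vmax = (abs(math.cos(theta)) ** 2) * (xmax - xmin)      # Eq. (16): new velocity limit
    return x, v, theta, vmax

x, v, theta, vmax = ppso_step(x=0.5, v=0.0, pbest=0.2, gbest=0.0,
                              theta=1.0, vmax=2.0, xmin=-5.0, xmax=5.0)
```

Note that no inertia weight or acceleration constants appear: the phase angle alone drives both amplification factors, which is the key design choice of PPSO.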

Slow Convergence Speed
PPSO is an efficient solution, particularly for small population sizes. For large populations, however, it suffers from slow convergence due to the vast number of particles. A suitable technique for dividing the population into multiple sub-swarms is therefore required for phasor PSO.

Memory Efficiency
A large population requires a large amount of memory to manage its operation. PPSO therefore needs efficient techniques to manage memory for large populations; otherwise, it consumes more time and computational resources on memory management.

Multi-Swarm Search Ability
PPSO is effective for a single population, but for large-scale optimization, its population needs to be divided into multiple swarms. A suitable strategy is required for dividing the whole population into multiple swarms.

Less Diversity
PPSO suffers from premature convergence in large populations, resulting in increased time complexity in adjusting the population. Insufficient population diversity is another cause of premature convergence. Population diversity therefore needs to be improved to enhance the exploration and exploitation abilities of PPSO.

Interaction Among Individuals
PPSO requires interaction among all individuals in the algorithm to reduce its space and time complexity. The best individuals sharing their values with the population helps enhance the performance of the algorithm.

Improved Competitive Coevolution-Based Multi-Swarm Phasor Particle Swarm Optimization
Different variants of PSO yield efficient results for large-scale optimization problems. PSO is adaptable, and a variant's suitability depends on the problem: some variants suitable for large populations may not be as suitable for small populations, and conversely, variants suited to small-scale problems may not be suitable for large-scale optimization. In recent years, researchers have focused on developing PSO algorithms for large-scale and multidimensional problems.
Premature convergence of PSO on global optimization problems is reduced by splitting the whole population into multiple sub-swarms. This practice also widens the exploration range of the population. Coevolutionary algorithms have been successfully applied to different PSO variants to enhance diversity and efficiency. Although enormous progress has been made in evolutionary and coevolutionary optimization, such as MPSO, no effort has so far been made to enhance PPSO for multidimensional optimization problems using coevolutionary algorithms. There are two types of coevolution algorithms: competitive coevolution and cooperative coevolution. In cooperative coevolution, each individual coevolves with other individuals. The method proposed in the current research enhances PPSO using a competitive coevolution technique. This technique enables PPSO to produce effective and efficient results for large populations, makes PPSO more memory efficient, and reduces the risk of premature convergence.
In ICPPSO, a large population is distributed into multiple sub-swarms. Initially, the current sub-swarm is selected through a uniform distribution, and the competitor sub-swarm is nominated using the roulette wheel selection method. Then, in the competition procedure, the characteristics of both selected sub-swarms merge with the characteristics of all the other sub-swarms, producing two results. The sub-swarm that offers the better result is the winner and represents the j-th decision variable. The winning sub-swarm replaces the values of the current sub-swarm. The selected sub-swarm thus becomes more adaptable because of coevolution, increasing its probability of representation in the next iteration. After the coevolution process, the sub-swarms are combined into a single population. The pseudocode for ICPPSO is provided below as Algorithm 2.
Algorithm 2 Pseudocode for the ICPPSO algorithm.
Input: NP is the population size, Vmin is the minimum velocity, Vmax is the maximum velocity, Xmin and Xmax are the minimum and maximum values of the search space of a given problem, θ is the particle angle, and f is the fitness function.
Output: Evolved population with gbest at the optimal position to achieve the optimal fitness values of the problem.
1: for whole population do
2:   Initialize position, velocity, and phase angle θ of each particle.
3: end for
4: Select the particle with the best fitness value of all the particles as the gbest.
5: Divide the whole population into sub-swarms.
6: Select the current sub-swarm using a uniform distribution.
7: Select the competitor sub-swarm using roulette wheel selection.
8: Evaluate the fitness of each particle in the selected sub-swarms.
9: Update pbest of each evaluated particle.
10: Identify the best sub-swarm particle.
11: if the best sub-swarm particle improves the current sub-swarm then
12:   Current sub-swarm = sub-swarm particle.
13: end if
14: Competition of current and competitor sub-swarms.
15: Select best (current sub-swarm, competitor sub-swarm).
16: Adapt relevant sub-swarm according to competition results.
17: Merge all sub-swarms into a single population.
18: Update velocity and position of particles.
19: Update pbest and gbest of the whole population.
20: Update value of θ and Vmax.
21: Jump to step 5 until termination criteria reached.
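The loop in Algorithm 2 can be sketched compactly in code. This is a simplified illustration, not the authors' implementation: the fitness function (sphere), all parameter values, and the competition rule (keep the sub-swarm whose best member is fitter, then reinforce its selection probability) are our assumptions:

```python
import math, random

def sphere(x):                       # toy fitness function (minimization)
    return sum(xi * xi for xi in x)

def icppso(f, dim=5, np_total=20, n_swarms=4, iters=50,
           xmin=-5.0, xmax=5.0, seed=1):
    """Compact sketch of Algorithm 2 (assumed details noted above)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(xmin, xmax) for _ in range(dim)] for _ in range(np_total)]
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(np_total)]
    pbest = [p[:] for p in pop]
    gbest = min(pop, key=f)[:]
    prob = [1.0 / n_swarms] * n_swarms          # uniform selection probabilities
    for _ in range(iters):
        idx = list(range(np_total)); rng.shuffle(idx)
        swarms = [idx[k::n_swarms] for k in range(n_swarms)]   # random split
        cur = rng.randrange(n_swarms)                          # uniform choice
        comp = rng.choices(range(n_swarms), weights=prob)[0]   # roulette wheel
        best_of = lambda s: min(f(pop[i]) for i in swarms[s])
        winner = cur if best_of(cur) <= best_of(comp) else comp
        prob[winner] += 0.1 * (1 - prob[winner])               # reinforce winner
        s = sum(prob); prob = [q / s for q in prob]
        for i in range(np_total):          # merged population: PPSO-style update
            t = theta[i]
            pc = abs(math.cos(t)) ** (2 * math.sin(t))
            gc = abs(math.sin(t)) ** (2 * math.cos(t))
            pop[i] = [min(xmax, max(xmin,
                          pop[i][d] + pc * (pbest[i][d] - pop[i][d])
                                    + gc * (gbest[d] - pop[i][d])))
                      for d in range(dim)]
            theta[i] = t + abs(math.cos(t) + math.sin(t)) * 2 * math.pi
            if f(pop[i]) < f(pbest[i]):
                pbest[i] = pop[i][:]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i][:]
    return gbest

best = icppso(sphere)
```

The sketch keeps the structure of Algorithm 2: split, uniform/roulette selection, competition, probability reinforcement, merge, and a phasor-driven position update with no inertia weight.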

Summary of Contributions and Implications of Improved Competitive Multi-Swarm Phasor Particle Swarm Optimization
The contributions and implications of the improved competitive multi-swarm phasor particle swarm optimization are as follows:
• PPSO was initially developed for small population sizes; for large populations, it suffers from slow convergence due to the vast number of particles. The incorporation of competitive coevolution into PPSO helps divide the population into multiple sub-swarms, which improves convergence speed.

• Simple PPSO is effective for a single population, but for large-scale optimization, its population needs to be divided into multiple swarms, which requires a suitable division strategy. The incorporation of competitive coevolution into PPSO improves its multi-swarm search capability.

Benefits of Improved Competitive Multi-Swarm Phasor Particle Swarm Optimization
• Simple PPSO suffers from premature convergence in large populations, resulting in greater time complexity in adjusting the population. Low population diversity is another cause of premature convergence, so diversity needs to be improved to enhance the exploration and exploitation abilities of PPSO. The incorporation of competitive coevolution into PPSO helps improve population diversity.
• PPSO requires interaction among all individuals of the algorithm to reduce its space and time complexity; the best individuals sharing their values with the population helps increase performance. The incorporation of competitive coevolution into PPSO improves the interaction among individuals.
• A large population requires a large amount of memory to manage its operation, and phasor PSO spends more time and computational load on managing memory for a large population, which also slows convergence due to the vast number of particles. The proposed ICPPSO divides the main swarm into multiple sub-swarms so that the algorithm processes one sub-swarm at a time, which helps save memory.

Parameter Settings
In the proposed ICPPSO algorithm, the parameter settings differ from the original PSO. In ICPPSO, the inertia weight is set to 0, as in PPSO. The control parameters are initially set as in PPSO, and some parameters are then adjusted for the coevolution process. A population size of NP = 100 and a dimension size of D = 10 are selected. Six fitness functions (f1, f2, f3, f4, f5, f6) are used, along with 1000 iterations.
The whole population is divided into five sub-swarms, and the individuals in each sub-swarm are assigned randomly from the main population. All sub-swarms are evolved one by one. The selection pool contains two probabilistically selected sub-swarms: a current sub-swarm and a competitor sub-swarm. These two sub-swarms then coevolve with all the other sub-swarms to form two new sub-swarms. The results generated using these parameters are reported in Table 1.
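The random assignment of individuals to sub-swarms can be sketched as follows (an illustrative helper; the function name and the strided-split scheme are ours):

```python
import random

def split_into_subswarms(pop_indices, n_swarms, rng):
    """Randomly assign individuals from the main population to n_swarms
    sub-swarms, as described above. Returns lists of population indices."""
    idx = list(pop_indices)
    rng.shuffle(idx)                              # random assignment
    return [idx[k::n_swarms] for k in range(n_swarms)]

# 100 individuals split into 5 sub-swarms of 20, matching the setup above.
swarms = split_into_subswarms(range(100), 5, random.Random(0))
```

Each index appears in exactly one sub-swarm, so the sub-swarms partition the population and can later be merged back without loss.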

Phasor Angle and Initial Velocity
In the proposed ICPPSO, the inertia weight is set to zero, and the phasor angle θ is used as in PPSO. Initially, θ is drawn from a uniform distribution, θ_i = U(0, 2π). In later iterations, θ is adjusted according to Equation (17), and the maximum velocity limit is updated according to Equation (18).
The phase angle θ of the i-th particle for the subsequent iteration is calculated using the amplified summation of trigonometric functions, as shown in Equation (17). The maximum velocity is updated using the non-parametric Equation (18).

Uniform Distribution
In the proposed ICPPSO, a uniform distribution is used for the initial values of θ, as in the original PPSO. ICPPSO also uses a uniform distribution in the initial selection of the competitor sub-swarm. In a uniform distribution, each particle has an equal probability of selection. The probability density function of the uniform distribution is expressed in Equation (19):

f(x) = 1/(b − a) for a ≤ x ≤ b, and f(x) = 0 otherwise, (19)

where a is the lower limit and b is the upper limit.
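Equation (19) is direct to express in code; the function name is ours:

```python
def uniform_pdf(x, a, b):
    """Probability density of U(a, b), Equation (19):
    1/(b - a) on [a, b], and 0 outside that range."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

# The initial phase angle theta ~ U(0, 2*pi) has this constant density
# everywhere inside [0, 2*pi], so every angle is equally likely.
density = uniform_pdf(3.14, 0.0, 6.2832)
```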

Competition and Coevolution
PPSO lacks population diversity because there is no competitive process during evolution, which can result in the search becoming trapped in local optima. This diversity issue reduces the convergence speed of the PPSO algorithm, ultimately reducing performance. Including a competitive process in the evolutionary loop of PPSO helps achieve convergence more quickly.
The competitive coevolution process increases diversity in the whole population, which helps avoid premature convergence. Diversity is introduced because the fitness of an individual is estimated after exchanging information with individuals from other sub-populations during the competitive coevolution process.
ICPPSO uses competition between the competitor and current sub-swarms while coevolving with the best particle. The competitor sub-swarm is initially selected using a uniform distribution and, in subsequent iterations, using roulette wheel selection. The selection relies on the probability of each sub-swarm, as expressed in Equation (20):

p_i = f_i / ∑_{j=1}^{N} f_j, (20)

where f_i is the selection weight of the i-th sub-swarm and ∑_{j=1}^{N} f_j is the summation over all sub-swarms. After competitor sub-swarm selection, the coevolution process begins. Both selected sub-swarms coevolve with all the other sub-swarms, and the sub-swarm with the best solution emerges as the winner, representing itself in the subsequent iterations. Solutions with higher probabilities occupy a larger area of the selection wheel, making them more likely to be selected.
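The roulette wheel rule of Equation (20) can be sketched as follows (the function name is ours; the weights here are illustrative):

```python
import random

def roulette_wheel(weights, rng):
    """Select index i with probability w_i / sum_j w_j (Equation (20)).
    A random point is drawn on [0, total] and the cumulative sums are
    scanned until the point falls inside segment i."""
    total = sum(weights)
    r = rng.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1        # numerical safety for r == total edge cases

# Empirically, larger weights win proportionally more often.
rng = random.Random(42)
counts = [0, 0, 0]
for _ in range(3000):
    counts[roulette_wheel([1.0, 2.0, 3.0], rng)] += 1
```

With weights 1:2:3, the three segments occupy 1/6, 2/6, and 3/6 of the wheel, so the counts over many draws approach that ratio, matching the "larger area of the selection wheel" intuition above.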

Benchmark Functions
Details of the benchmark functions used to generate the experimental results are provided in Table 2, including the name of the function, search space, and equation.
Although a large number of benchmark functions can be found in the existing literature, the choice of functions depends on the nature of the evaluation, the modality of the model, and so on. Many existing studies have used benchmark functions f6, f7, and f8 to reduce the time complexity of performance evaluation. For example, [50-54] employed functions f6, f7, and f8 to evaluate the performance of their proposed models. These studies suggest that functions such as f5, f6, or f7 suffice for the evaluation and comparison of evolutionary computing algorithms [50,54-56]. While several fixed-dimension functions are available in the literature, our algorithm mainly addresses multi-dimensional problems. We have therefore considered six standard multimodal benchmark functions, as suggested in [50-53]. The experimental results show that these functions can effectively assess performance and facilitate significance tests of the proposed algorithm in comparison with existing algorithms.

Experimental Results and Discussion
In this section, we implement the proposed ICPPSO method alongside the PPSO variant, incorporating the competitive coevolution process into PPSO to demonstrate the effect of coevolution and sub-swarms on the PPSO algorithm. The experimental results of the PPSO, ICPPSO, and FMPSO algorithms are generated using populations of 100, 150, and 200 with dimensions of 10, 30, and 50, respectively. The number of training iterations is 1000 for all algorithms. The experimental results for the fitness values of the PPSO, ICPPSO, and FMPSO algorithms are reported in Tables 3-8. Tables 3 and 4 show the results for a population size of 100 and a dimension size of 10. Although this cannot be deemed the all-time best implementation, the results are more optimized than those of the original PPSO, especially for large population diversity and size. The results generally show that the effectiveness of the proposed method was initially almost the same, but as the population size and dimension size increased, so did the effectiveness of the proposed method. The results columns represent the same six objective functions used for PPSO, FMPSO [46], and ICPPSO (the proposed algorithm) to assess the effectiveness of the proposed solution. For all objective functions, the proposed solution performed well and became more effective with increasing population size and diversity.
Convergence graphs for all the employed models are shown in Figure 2a-f.The competitive coevolution process within the sub-swarms played a major role in improving the performance of the proposed solution.If any sub-swarm encountered a premature convergence issue, the coevolution process provided assistance to escape from it.It was observed that PPSO achieved a good convergence speed for smaller population sizes, but for large population sizes and diversity, the chance of premature convergence increased.In the proposed solution, premature convergence was reduced because of the coevolution process.Based on the results obtained, it can be generally concluded that for all six objective functions, the proposed method is more effective and successful in cases of larger population sizes and diversity.
Figure 2 shows the logarithmic convergence graphs for PPSO, ICPPSO, and FMPSO for functions f 1 to f 6 , respectively.The x-axis shows the number of training iterations in multiples of 50, and the y-axis shows the fitness values of Gbest for the considered algorithms.These convergence graphs were generated using a population size of 100 and a dimension of 10. Figure 2a shows that the performance of all three algorithms was similar in the initial iterations, but as the number of iterations increased, the performance of ICPPSO and FMPSO improved compared to the PPSO algorithm until the final iteration, at which point the performance of ICPPSO and FMPSO became similar.Figure 2b-d clearly show that as the number of iterations increased, the performance of the proposed algorithm gradually improved until the final iteration.Moreover, the performance of FMPSO surpassed that of the PPSO algorithm, as indicated by the results.
Tables 5 and 6 present the fitness values of the PPSO, ICPPSO, and FMPSO algorithms for the test suite of six functions. A population size of 150 and a dimension size of 30 were used to generate these results. For all three algorithms, Table 5 shows the results for functions f1 to f3, whereas Table 6 shows the results for functions f4 to f6. The results show that the performance of ICPPSO was approximately similar for function f1, but the proposed algorithm outperformed PPSO and FMPSO for functions f2, f3, f4, f5, and f6. We can therefore say that incorporating competitive coevolution significantly enhances the performance of the PPSO algorithm. On average, ICPPSO achieved improvements of 15%, 20%, 30%, and 35% compared to PPSO and FMPSO in terms of fitness results. Figure 3 shows the convergence graphs of all the algorithms for functions f1 to f6. These graphs were generated using a population size of 150 and a dimension of 30.
Figure 3a shows that for the initial iterations, the performance of all the algorithms was similar; however, the performance of FMPSO and the proposed ICPPSO improved as the number of iterations increased. The performance of the proposed ICPPSO was better for functions f1, f2, f5, and f6 compared with both FMPSO and PPSO. Moreover, the performance of FMPSO was superior to that of PPSO, as indicated by the results. Tables 7 and 8 present the results for a population size of 200 and a dimension size of 50, with the convergence graphs shown in Figure 4a-f. The experimental results indicate the superior performance of the proposed ICPPSO compared with both PPSO and FMPSO. A population size of NP = 200 and a dimension size of D = 50 were selected, six fitness functions (f1, f2, f3, f4, f5, f6) were used, and the experiments were run for 1000 iterations with the PPSO, ICPPSO, and FMPSO algorithms. The convergence results in Figure 4 demonstrate the superior performance of the proposed ICPPSO across all functions.
The experimental results show the fitness values of the PPSO, FMPSO, and ICPPSO algorithms for functions f1 to f6. These results were generated with population sizes of 100, 150, and 200 and dimensions of 10, 30, and 50, using the same parameter settings for all three algorithms. It is evident from the results that the performance of ICPPSO surpassed that of PPSO and FMPSO for most of the functions. In the case of D = 30, the performance of the proposed algorithm was superior for functions f1, f2, f5, and f6, whereas FMPSO exhibited better performance for functions f3 and f4; ICPPSO nevertheless achieved competitive performance for functions f3 and f4. The convergence graphs also show that ICPPSO exhibited the overall best performance on the considered suite of benchmark functions. Besides improving convergence, the proposed ICPPSO algorithm also increases memory efficiency. Memory usage is discussed here in the context of sub-populations rather than the whole population: since the proposed algorithm divides the whole population into sub-populations, it increases memory efficiency. The average fitness results indicated average improvements of 15%, 20%, 30%, and 35% over the PPSO and FMPSO algorithms. The Wilcoxon statistical significance test for the ICPPSO algorithm rejected the null hypothesis, showing a significant difference in performance compared with the PPSO and FMPSO algorithms at the 0.05 significance level. Our future work will involve incorporating a dynamic and efficient self-adaptive selection process into ICPPSO for large-scale problems.


Table 2 .
List of benchmark functions.

Table 3 .
Average fitness values for f 1 to f 3 , with NP = 100 and D = 10.

Table 4 .
Average fitness values for f 4 to f 6 , with NP = 100 and D = 10.

Table 5 .
Average fitness values for f 1 to f 3 , with NP = 150 and D = 30.

Table 6 .
Average fitness values for f 4 to f 6 , with NP = 150 and D = 30.

Table 7 .
Average fitness values for f 1 to f 3 , with NP = 200 and D = 50.