Improved Particle Swarm Optimization Algorithms for Optimal Designs with Various Decision Criteria

Abstract: Particle swarm optimization (PSO) is an attractive, easily implemented method that has been used successfully across a wide range of applications. In this paper, utilizing the core ideology of the genetic algorithm and dynamic parameters, an improved particle swarm optimization algorithm is proposed. Then, based on the improved algorithm, combining the PSO algorithm with decision making, nested PSO algorithms with two useful decision making criteria (the optimistic coefficient criterion and the minimax regret criterion) are proposed. The improved PSO algorithm is implemented on two unimodal functions and two multimodal functions, and the results are much better than those of the traditional PSO algorithm. The nested algorithms are applied to the Michaelis-Menten model and the two parameter logistic regression model as examples. For the Michaelis-Menten model, the particles converge to the best solution after 50 iterations. For the two parameter logistic regression model, the optimality of the algorithms is verified by the equivalence theorem. More results for other models applying our algorithms are available upon request.


Introduction

Overview of the PSO Algorithm
Particle swarm optimization (PSO) is a meta-heuristic, population-based swarm intelligence algorithm first proposed by [1]. Recently, the PSO algorithm has attracted a great deal of attention from researchers for its powerful ability to solve a wide range of complicated optimization problems without requiring any assumption on the objective function. The applications of the PSO algorithm are summarized by [2,3], which divide the applications into 26 different categories, including but not limited to: image and video analysis, control, design, power generation and power systems, and combinatorial optimization problems.
The PSO algorithm is a bionic algorithm that simulates the preying behavior of a bird flock. In the PSO algorithm, each solution of the optimization problem is considered to be a "bird" in the search space, and it is called a "particle". The whole population of solutions is termed a "swarm", and all of the particles search by following the current best particle in the swarm. Each particle i has two characteristics: one is its position (denoted by x_i), which determines the particle's fitness value; the other is its velocity (denoted by v_i), which determines the direction and distance of the search. In iteration t (t a positive integer), to avoid confusion, the position and velocity of particle i are denoted by x_i(t) and v_i(t). Each particle tracks two "best" positions: the first is the best position found by the particle itself so far, denoted by p_ibest(t); the second is the best position found by the whole swarm so far, denoted by g_best(t). When the algorithm terminates, g_best(t) is declared to be the solution to our problem.
The velocity and position of each particle are updated by the equations:

v_i(t + 1) = ω v_i(t) + c_1 rand_1 (p_ibest(t) − x_i(t)) + c_2 rand_2 (g_best(t) − x_i(t))    (1)

x_i(t + 1) = x_i(t) + v_i(t + 1)    (2)

Here, v_i(t) is the velocity of particle i, x_i(t) is the position of particle i, and ω is the inertia weight. p_ibest(t) and g_best(t) are the local best position of particle i and the global best position found by all of the particles in iteration t, respectively. rand_1 and rand_2 are two random numbers in [0, 1], while c_1 and c_2 are "learning factors", with c_1 termed the "cognitive learning factor" and c_2 the "social learning factor" [1].
From the formulas, the update of each v i is composed of three parts: the first part is the inertia velocity before the change; the second part is the cognitive learning part, which represents the learning process of the particle from its own experience; the third part is the social learning part, which represents the learning process of the particle from the experience of other particles.
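As a concrete sketch, the update in Formulas (1) and (2) for a single particle can be written in a few lines of Python (the values of ω, c_1 and c_2 below are generic placeholders, not the tuned settings used later in this paper):

```python
import random

def pso_step(x, v, p_best, g_best, w=0.7, c1=2.0, c2=2.0):
    """One velocity-and-position update for a single particle.

    x, v, p_best, g_best are equal-length lists (one entry per dimension).
    """
    r1, r2 = random.random(), random.random()
    # Formula (1): inertia part + cognitive learning part + social learning part
    new_v = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, p_best, g_best)]
    # Formula (2): move the particle along its new velocity
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

A particle that already sits at both its personal best and the global best with zero velocity stays put, as the formulas require.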

Related Works and Main Improvements of This Manuscript
Recent related studies of particle swarm optimization methods include the following. Ref. [4] proposed a multi-hierarchical hybrid particle swarm optimization algorithm. Ref. [5] proposed a combination of the genetic algorithm and particle swarm optimization for global optimization. Ref. [6] proposed an improved PSO algorithm for power system network reconfiguration. Ref. [7] introduced a multiobjective PSO algorithm based on multistrategy. Ref. [8] proposed a vector angles-based many-objective PSO algorithm using an archive. Ref. [9] proposed a novel hybrid gravitational search particle swarm optimization algorithm. Ref. [10] proposed the application of particle swarm optimization to portfolio construction.
Previous works provide a significant foundation for particle swarm optimization algorithms in optimal designs. However, there are some shortcomings in previous work: (i) In order to facilitate the operation, previous works use the same dynamic parameters for the whole particle swarm in each iteration. However, in some situations, different kinds of particles may need different dynamic parameters. (ii) Previous works mainly focus on the pessimistic criterion. The combination of the PSO algorithm with other useful decision making criteria, including the optimistic coefficient criterion and the minimax regret criterion, is seldom considered in previous research.
To solve these problems, in this paper, utilizing the core ideology of the genetic algorithm and dynamic parameters, an improved particle swarm optimization algorithm is proposed in Section 3. Then, based on the improved algorithm, combining the PSO algorithm with decision making, nested PSO algorithms are proposed in Section 4. The main improvements of this manuscript are as follows. In the improved PSO algorithm in Section 3, the top 10 percent of particles are chosen as "superior particles". Different cognitive and social learning factors are then used for superior particles and normal particles. In each iteration, the superior particles split, and the inferior particles are eliminated. In the nested PSO algorithms in Section 4, two useful decision making criteria, the optimistic coefficient criterion and the minimax regret criterion, are combined with the PSO algorithm. These studies have not yet been conducted by other researchers. The details of the improvements are in Sections 3 and 4.
The proposed algorithms are implemented on various mathematical functions, which are shown in Section 5.

Experimental Design and the Fisher Information Matrix
An experimental design ξ which has n support points can be written in the form:

ξ = { x_1, ..., x_n ; ξ_1, ..., ξ_n }

Here, x_i, i = 1, ..., n, are the values of the support points within the allowed design region, and the ξ_i are the weights, which sum to 1 and represent the relative frequency of observations at the corresponding design points.
For several nonlinear models, the design criterion to be optimized contains unknown parameters. The general form of the regression model can be written as y = f(ξ, θ) + ε. Here, f(ξ, θ) can be either a linear or nonlinear function, θ is the vector of unknown parameters, ε is the error term, and ξ is the design vector (which includes the information for both the weights and the values of the support points). The range of θ is Θ, and the range of ξ is Ξ. The value of a design is measured by its Fisher information matrix, which is defined to be the negative of the expectation of the matrix of second derivatives (with respect to θ) of the log-likelihood function.
The Fisher information matrix for a given design ξ is

I(θ, ξ) = −E[ ∂² log p(ξ, θ) / ∂θ ∂θᵀ ].

Here, p(ξ, θ) is the probability function of ξ. To estimate the parameters accurately, the objective function log|I^{−1}(θ, ξ)| should be minimized over all designs ξ on Ξ.
For example, consider the popular Michaelis-Menten model in the biological sciences, presented by [11]:

y = a x / (b + x) + ε,

with its Fisher information matrix for a given design ξ defined as above. Chen et al. [12] proposed the equivalence theorem for experimental design with the Fisher information matrix: for a heteroscedastic linear model with mean function g(x), where λ(x) is the assumed reciprocal variance of the response at x, the variance of the fitted response at a point z is proportional to v(z, ξ) = gᵀ(z) I(ξ)^{−1} g(z), where I(ξ) = ∫ λ(x) g(x) gᵀ(x) ξ(dx). A design ξ* is optimal if there exists a probability measure µ* on A(ξ*) such that c(x, ξ*, µ*) ≤ 0 for all x in the design region, with equality at the support points of ξ*. Here, r(x, u, ξ*) = (gᵀ(x) I(ξ)^{−1} g(x))².
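To make the objective concrete, the following Python sketch evaluates the 2×2 information matrix and the D-criterion loss log|I^{−1}(θ, ξ)| for a discrete Michaelis-Menten design, assuming independent homoscedastic errors (the function names are ours, not from the paper):

```python
import math

def mm_gradient(x, a, b):
    # Gradient of the Michaelis-Menten mean a*x/(b+x) with respect to (a, b)
    return (x / (b + x), -a * x / (b + x) ** 2)

def mm_d_loss(points, weights, a, b):
    # I(theta, xi) = sum_i w_i g(x_i) g(x_i)^T; return log|I^-1| = -log det I
    i11 = i12 = i22 = 0.0
    for x, w in zip(points, weights):
        g1, g2 = mm_gradient(x, a, b)
        i11 += w * g1 * g1
        i12 += w * g1 * g2
        i22 += w * g2 * g2
    return -math.log(i11 * i22 - i12 * i12)
```

A design with a single support point makes the 2×2 information matrix singular, so at least two support points are needed to estimate both parameters, consistent with the two-point optimal designs discussed later.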

Essential Elements of Decision Making
A decision making problem is composed of four elements [13,14]: (i) a number of actions to be taken; (ii) a number of states which cannot be controlled by the decision maker; (iii) an objective function: a payoff function or loss function which depends on both an action and a state (the objective is to maximize the payoff function or minimize the loss function); (iv) a criterion: by a certain criterion, the decision maker decides which action to take.
In the objective function log|I^{−1}(θ, ξ)|, θ belongs to the set of states, which are out of our control, and the design ξ is an action to be taken.

Optimization Criterion for Decision Making
Decision making with loss functions is proposed in several papers, such as [15]. Clearly, our objective is to minimize the loss function L(θ, ξ). Based on the loss function, there are several popular criteria for decision making:

(i) Pessimistic criterion: the pessimistic decision maker always considers the worst case, that is, supposes θ will maximize the loss function. The decision maker takes the action that minimizes the loss function in the worst case. This criterion is also known as the minimax criterion. The formula for this criterion is:

min_ξ max_θ L(θ, ξ).

(ii) Optimistic coefficient criterion: usually the decision maker trades off between optimism and pessimism in decision making. This yields the optimistic coefficient criterion, which takes a weighted average of the minimum and maximum of the loss function. The weight α is called the optimistic coefficient; it lies between 0 and 1 and reflects the degree of optimism of the decision maker. The formula for this criterion is:

min_ξ [ α min_θ L(θ, ξ) + (1 − α) max_θ L(θ, ξ) ].

When α = 0, the optimistic coefficient criterion shrinks to the pessimistic criterion.

(iii) Minimax regret criterion: with this criterion, the objective is to minimize the maximum possible regret value. The regret value is defined as the difference between the loss under a certain action and the minimum loss possible under the same state. The formula for this criterion is:

min_ξ max_θ RV(θ, ξ), where RV(θ, ξ) = L(θ, ξ) − min_ξ′ L(θ, ξ′).
Sometimes, after the decision maker makes a decision, he or she may feel regret when certain states appear. In this case, the decision maker wants to minimize the maximum regret value, which is the distance between the loss value of the action taken and the minimum loss value possible in the relevant state. The regret value is also called the opportunity cost, which represents regret in the sense of lost opportunities.
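These three criteria can be made concrete with a small Python sketch over a finite loss table (the table used in the test below is a toy example of ours, not data from the paper):

```python
def pessimistic(loss, actions, states):
    # (i) minimize the worst-case loss over states
    return min(actions, key=lambda a: max(loss[a][s] for s in states))

def optimistic_coefficient(loss, actions, states, alpha):
    # (ii) minimize alpha*best-case + (1-alpha)*worst-case; alpha = 0 gives (i)
    def score(a):
        vals = [loss[a][s] for s in states]
        return alpha * min(vals) + (1 - alpha) * max(vals)
    return min(actions, key=score)

def minimax_regret(loss, actions, states):
    # (iii) regret = loss minus the best achievable loss in the same state
    best = {s: min(loss[a][s] for a in actions) for s in states}
    return min(actions,
               key=lambda a: max(loss[a][s] - best[s] for s in states))
```

Note that the three criteria can pick different actions from the same loss table, which is exactly why the nested algorithms in Section 4 treat them separately.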

Improved Particle Swarm Optimization for Experimental Design
In this section, combining the core ideology of the genetic algorithm and dynamic parameters, an improved PSO algorithm is proposed as the foundation of the nested PSO algorithm.

Main Improvement of Our Algorithm
(i) The superior particles are split and the inferior particles are eliminated in each iteration. That is, in each iteration, the fitness values of the particles are ranked from high to low, and the particles with the top 10 percent of fitness values are taken as "superior particles". Then, each superior particle is split into two particles with the same velocity and position, and the particles with the bottom 10 percent of fitness values are deleted to keep the swarm at a constant size. This improvement adopts the core ideology of the genetic algorithm: individuals with higher fitness values reproduce more offspring. The splitting procedure in optimization methods is usually called individual cloning.

(ii) Dynamic parameters c_1 and c_2 are utilized in the algorithm:

c_1 = c_upper − (c_upper − c_low) · iter/maxiter    (9)

c_2 = c_low + (c_upper − c_low) · iter/maxiter    (10)

Here, iter is the current number of iterations and maxiter is the maximum number of iterations. c_upper and c_low are the upper and lower bounds of the learning factors, respectively.
According to the ideology of the genetic algorithm and common sense, the superior particles have better learning abilities; thus, the learning factors for superior particles have higher upper and lower bounds than those for normal particles. After repeated attempts and comparisons, c_upper = 2.5, c_low = 1.2 are taken for superior particles, and c_upper = 2, c_low = 0.75 are taken for normal particles. Consequently, in the running process of the algorithm, the cognitive learning factor is linearly decreased and the social learning factor is linearly increased.

(iii) Dynamic parameter ω is utilized in the algorithm:

ω = ω_1 − (ω_1 − ω_2) · iter/maxiter    (11)

Here, ω_1 and ω_2 are the upper and lower bounds of ω, respectively. According to common sense, the superior particles are more active; thus, they have lower inertia. In our algorithm, ω_1 = 0.75, ω_2 = 0.25 are set for superior particles, and ω_1 = 0.9, ω_2 = 0.4 are set for normal particles.
Improvements (ii) and (iii) are utilized in the algorithm because these approaches are in accordance with the idea of particle swarm optimization: at the beginning, each bird has a large cognitive learning factor and a small social learning factor, and each bird searches mainly by its own experience. After a period of time, as each bird gets more and more knowledge from the bird population, it relies increasingly on social knowledge for its search. In addition, the effect of the inertia velocity decreases over time, since the particles obtain more and more information from cognitive and social learning in the process of searching, so they rely increasingly on their learning instead of their inertia.
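The dynamic schedules and the split-and-eliminate step can be sketched as follows (the linear schedules mirror the description above; `split_and_cull` and the other names are our own, not from the paper):

```python
def learning_factors(it, maxiter, c_upper, c_low):
    # c1 decreases linearly from c_upper to c_low; c2 increases symmetrically
    frac = it / maxiter
    c1 = c_upper - (c_upper - c_low) * frac
    c2 = c_low + (c_upper - c_low) * frac
    return c1, c2

def inertia_weight(it, maxiter, w1, w2):
    # omega decreases linearly from its upper bound w1 to its lower bound w2
    return w1 - (w1 - w2) * it / maxiter

def split_and_cull(swarm, fitness, frac=0.1):
    # Clone the top `frac` of particles and drop the bottom `frac`,
    # keeping the swarm size constant (improvement (i) above).
    n = len(swarm)
    k = max(1, int(frac * n))
    order = sorted(range(n), key=lambda i: fitness(swarm[i]), reverse=True)
    survivors = [swarm[i] for i in order[:n - k]]   # bottom k eliminated
    clones = [swarm[i] for i in order[:k]]          # top k duplicated
    return survivors + clones
```

In a full iteration, `learning_factors` and `inertia_weight` would be called twice, once with the superior-particle bounds and once with the normal-particle bounds.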
After applying these improvements, the improved PSO algorithm is as follows. Following the common notation in related research, in each iteration, the x_i(t), v_i(t), p_ibest(t) and g_best(t) in Formulas (1) and (2) are written as x_i, v_i, p_ibest and g_best, without causing confusion.

Improved PSO Algorithm for Minimization/Maximization Problem
The stopping criterion is satisfied when a maximum number of iterations is reached or when the change of the fitness value in successive searches is negligible. In Algorithm 1, we set the maximum number of iterations to 500. When the difference of the fitness values between two adjacent iterations is less than 0.002, the change is considered negligible.
If any value of x_i or v_i exceeds the upper or lower bound, then the corresponding upper or lower bound is taken instead of that value.
Clearly, this improved algorithm can be used to solve either a minimization or a maximization problem. For the minimization problem, in step 1.3, the local best is the x_i with the minimum f(x_i). The update process of p_ibest and g_best in step 2.4 is: for each particle i, if the updated fitness value is less than the fitness value of the current p_ibest, then p_ibest is updated to the new solution; otherwise, p_ibest remains unchanged. g_best is the particle which attains the minimum fitness value among the p_ibest.
For the maximization problem, in step 1.3, the local best is the x_i with the maximum fitness value. The update process of p_ibest and g_best in step 2.4 is: for each particle i, if the updated fitness value is greater than the fitness value of the current p_ibest, then p_ibest is updated to the new solution; otherwise, p_ibest remains unchanged. g_best is the particle which attains the maximum fitness value among the p_ibest (see Figure A1, the flowchart of Algorithm 1, in Appendix A).
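For either direction, step 2.4 amounts to the following small routine (a sketch; the names are ours):

```python
def update_bests(positions, fits, p_best, p_best_f, minimize=True):
    # Step 2.4: refresh each particle's personal best, then pick the
    # swarm-wide best among the personal bests. p_best / p_best_f are
    # mutated in place; the global best and its fitness are returned.
    better = (lambda a, b: a < b) if minimize else (lambda a, b: a > b)
    for i, (x, f) in enumerate(zip(positions, fits)):
        if better(f, p_best_f[i]):
            p_best[i], p_best_f[i] = x, f
    pick = min if minimize else max
    g = pick(range(len(p_best_f)), key=lambda i: p_best_f[i])
    return p_best[g], p_best_f[g]
```

The `minimize` flag is the only difference between the two variants of the algorithm described above.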

Nested Particle Swarm Optimization for Experimental Design
The application of the particle swarm optimization algorithm to maximization and minimization problems and a nested PSO algorithm for the pessimistic criterion are presented by Chen et al. [12]. Chen's paper is a milestone in the research of applying particle swarm optimization to experimental design. The combination of decision making and particle swarm optimization has been studied in several previous papers, such as [16,17], Yang et al. [18] and Yang and Shu [19].
However, the combination of the PSO algorithm with other decision making criteria, including the optimistic coefficient criterion and the minimax regret criterion, is seldom considered in previous research. These combination problems are more interesting and challenging, and are worthy of in-depth study.
To solve these problems, nested PSO algorithms with multiple decision making criteria are proposed in this section. The implementations of these algorithms are presented in Section 5.

Introduction of Nested PSO Algorithms
For regression with a Fisher information matrix involving unknown parameters, we need two "swarms" of particles (one for ξ, the other for θ) to solve the problem with a nested PSO algorithm. These two swarms of particles are used in different layers of iterations. In each layer, the fitness value is determined by one of the two swarms of particles. For convenience of expression, we denote the two swarms corresponding to ξ and θ by swarm 1 and swarm 2, their positions by x_i and y_i, and their velocities by xv_i and yv_i, respectively. Each swarm consists of 50 particles.

PSO Algorithm for Optimistic Coefficient Criterion
The stopping criterion is satisfied when a maximum number of iterations is reached or when the change of the fitness value in successive searches is negligible. In Algorithm 2, we set the maximum number of iterations to 100. When the difference of the fitness values between two adjacent iterations is less than 0.2 percent of the current fitness value, the change is considered negligible.
If any value of x_i, y_i or xv_i, yv_i exceeds the upper or lower bound, then the corresponding upper or lower bound is taken instead of that value.
In this algorithm, the process of evaluating f_actions(θ, ξ) is the inner circulation, and the process of evaluating min_ξ f_actions(θ, ξ) is the outer circulation. The pessimistic criterion is the special case of this algorithm with α = 0.
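The inner/outer structure can be illustrated with a brute-force stand-in, where an exhaustive scan over a grid of θ values plays the role of the inner PSO run (a toy sketch; in the actual algorithm both layers are PSO searches):

```python
def f_action(loss, xi, thetas, alpha):
    # Inner circulation: optimistic-coefficient fitness of one design xi,
    # scanning a theta grid instead of running the inner PSO.
    vals = [loss(theta, xi) for theta in thetas]
    return alpha * min(vals) + (1 - alpha) * max(vals)

def best_design(loss, designs, thetas, alpha):
    # Outer circulation: minimize the criterion over candidate designs.
    return min(designs, key=lambda xi: f_action(loss, xi, thetas, alpha))
```

Setting alpha to 0 makes `f_action` return the worst-case loss, recovering the pessimistic (minimax) criterion.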
In this algorithm, the process of evaluating max_θ RV(θ, ξ) is the inner circulation, and the process of evaluating min_ξ max_θ RV(θ, ξ) is the outer circulation. The stopping criterion is satisfied when a maximum number of iterations is reached or when the change of the fitness value in successive searches is negligible. In Algorithm 3, we set the maximum number of iterations to 100. When the difference of the fitness values between two adjacent iterations is less than 0.2 percent of the current fitness value, the change is considered negligible.

Results and Comparisons
In this section, the improved PSO algorithm of Section 3 is first compared with the traditional PSO algorithm on two unimodal functions and two multimodal functions (Section 5.1) to show the ability of the algorithm. In Section 5.2, we compare our improved PSO algorithm of Section 3 with a typical combinatorial algorithm on one unimodal function and two multimodal functions. The test functions of Sections 5.1 and 5.2 are mainly chosen from [4]. After that, the nested PSO algorithms of Section 4 are applied to two representative models with unknown parameters as examples. The algorithms are programmed in Matlab 2020a and run on an Intel Core i7 under Windows 7.

Comparisons of Traditional PSO Algorithm and Improved PSO Algorithm
In this subsection, the improved PSO algorithm of Section 3 is compared with the traditional PSO algorithm on two unimodal functions and two multimodal functions. Table 1 gives the information of the test functions. In order to facilitate the calculation, the original minimization problems are transformed into maximization problems with maximum value 100; that is why the fitness values are defined as 1/(F + 0.01).
To eliminate randomness, for each function, each algorithm is run 50 times independently, and the statistical results are analyzed. In each run, when the fitness value is more than 99.9, the computation is considered successful. In the table, the success rate indicates the percentage of successes (not the number of successes), and the average indicates the average of the fitness values.
Table 2 compares the performance of the traditional PSO and the improved PSO. From Table 2, the results of our improved PSO algorithm are much better than those of the traditional PSO for all four test functions, which confirms the ability of the improved PSO algorithm.

Comparisons of Our Improved PSO Algorithm with a Typical Combination of Genetic and PSO Algorithm
Ref. [5] proposed a typical combination of the genetic algorithm and particle swarm optimization for global optimization, which incorporates the crossover and mutation operations of the genetic algorithm into the PSO algorithm. This combination method is a common PSO variant. In this subsection, we compare our improved PSO algorithm with that typical combinatorial algorithm on one unimodal function and two multimodal functions to show the ability of our improved algorithm. We use one test function from Table 1 and two new test functions, listed in Table 3. The number of runs of each algorithm is the same as in Section 5.1. In each run, when the fitness value is more than 99.9, the computation is considered successful. In the table, the success rate indicates the percentage of successes (not the number of successes), and the average indicates the average of the fitness values.
Table 4 compares the performance of our improved PSO algorithm with the typical combination of the genetic and PSO algorithms in [5]. From Table 4, the results of our improved PSO algorithm are much better than those of the typical combinatorial algorithm for all three test functions, which confirms the ability of the improved PSO algorithm.

Implementation of Nested PSO Algorithm
In this subsection, the nested PSO algorithms of Section 4 are applied to two representative models with unknown parameters as examples. Since the most often used values of the optimistic coefficient are 0.3, 0.5 and 0.7, these values are mainly used in the computations in Tables 5 and 6. Extensions to other models are immediate, with a simple change of the objective function. More results for other models applying our algorithms are available upon request.
Example 1. Michaelis-Menten model. This model and its Fisher information matrix were introduced in Section 2. For the Michaelis-Menten model on the design space X = [0, x], Ref. [11] showed that an optimal design is supported at two support points, one of which is x. The nested algorithms of Section 4 are applied to this model.

Example 2. Two parameter logistic regression model [20]. The probability of response is assumed to be p(x; θ) = 1/(1 + exp(−b(x − a))). Here, θ = (a, b)ᵀ is the unknown parameter vector.
The information matrix of this model at a point x is

I(x; θ) = p(x; θ)(1 − p(x; θ)) · [ b², −b(x − a); −b(x − a), (x − a)² ].

The nested algorithms of Section 4 are applied to two parameter logistic regression models with parameter a ∈ [0, 2]. From Tables 5 and 6, the f(g_best) of the optimistic coefficient criterion is better than that of the pessimistic criterion, and f(g_best) decreases as α increases. That is because the pessimistic criterion always considers the worst case, while the optimistic coefficient criterion takes a trade-off between the optimistic and pessimistic cases. When α increases, the extent of optimism gets larger, so the loss function gets smaller (and therefore better). Figure 2 plots the equivalence theorem check c(x, ξ, µ*) versus x for the two parameter logistic regression models. The vertical coordinate represents the value of c(x, ξ, µ*), and the horizontal coordinate represents the value of x. In all four plots, c(x, ξ, µ*) ≤ 0 for all x ∈ X, which confirms that all of the results obtained by our series of algorithms are optimal.
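For reference, the design information matrix built from the pointwise information above, and the loss log|I^{−1}(θ, ξ)| that the nested algorithms minimize, can be evaluated numerically as follows (a sketch under the standard Bernoulli-response assumption; the function name is ours):

```python
import math

def logistic_d_loss(points, weights, a, b):
    # I(theta, xi) = sum_i w_i p_i (1 - p_i) g_i g_i^T with g_i = (-b, x_i - a)
    # for p(x) = 1 / (1 + exp(-b (x - a))); returns log|I^-1| = -log det I.
    i11 = i12 = i22 = 0.0
    for x, w in zip(points, weights):
        p = 1.0 / (1.0 + math.exp(-b * (x - a)))
        s = w * p * (1.0 - p)
        g1, g2 = -b, x - a
        i11 += s * g1 * g1
        i12 += s * g1 * g2
        i22 += s * g2 * g2
    return -math.log(i11 * i22 - i12 * i12)
```

A two-point design placed symmetrically about a makes the off-diagonal entry of the information matrix vanish, a convenient sanity check for the implementation.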

Conclusions and Future Works
In this paper, an improved particle swarm optimization (PSO) algorithm is proposed and implemented on two unimodal functions and two multimodal functions. Then, combining the PSO algorithm with the theory of decision making under uncertainty, nested PSO algorithms with two decision making criteria are proposed and implemented on the Michaelis-Menten model and the two parameter logistic regression model. For the Michaelis-Menten model, the particles converge to the best solution after 50 iterations. For the two parameter logistic regression model, the optimality of the algorithms is verified by the equivalence theorem. In the nested PSO algorithms, f(g_best) decreases as the optimistic coefficient increases.
The PSO algorithm is a powerful algorithm that needs only a well-defined objective function to minimize or maximize under different optimality criteria; here, the function is log|I^{−1}(θ, ξ)|. Thus, extensions to other models are immediate, with a simple change of the objective function. More results for other models applying our algorithms are available upon request. The limitation of our PSO method is that it does not work very efficiently on problems with a complicated matrix, which are more suitable for the simulated annealing algorithm.
Future work includes, but is not limited to, the comprehensive comparison of this series of PSO algorithms with other metaheuristic algorithms, such as the genetic algorithm and simulated annealing algorithm.This is interesting and challenging work, which will be researched in the near future.

Figure 1 plots the value and weight of support point 1 for the Michaelis-Menten model with 50 particles. The vertical coordinate represents the value of support point 1, and the horizontal coordinate represents the weight of support point 1. Figure 1 shows how the particles converge to the best solution after 50 iterations (the best solution is indicated by a red star).

Figure 1. The convergence of particles in the nested PSO algorithm for the Michaelis-Menten model under various decision criteria. The vertical coordinate represents the value of support point 1, and the horizontal coordinate represents the weight of support point 1. (a) Michaelis-Menten model with the pessimistic criterion; (b) Michaelis-Menten model with the optimistic coefficient criterion, α = 0.7; (c) Michaelis-Menten model with the optimistic coefficient criterion, α = 0.3; (d) Michaelis-Menten model with the minimax regret criterion.

Figure 2. The equivalence theorem plot of c(x, ξ, µ*) versus x for the two parameter logistic regression model under various decision criteria.

Algorithm 1: Improved PSO algorithm.

Initialization process:
1.1 For each of the n particles, initialize the particle position x_i and velocity v_i with random values in the corresponding search space.
1.2 Evaluate the fitness value f(x_i) of each particle according to the objective function.
1.3 Determine the local best and global best positions p_ibest and g_best.

Update process:
2.1 Rank the fitness values of the particles from high to low, and take the particles with the top 10 percent of fitness values as "superior particles". Then split each superior particle into two particles with the same velocity and position, and update the velocities of the particles with Formula (1).
2.2 Based on the velocity, update the positions of the particles with Formula (2).
2.3 Update the fitness value f(x_i).
2.4 Update the local and global best positions p_ibest and g_best. Then update the fitness values of p_ibest and g_best.
2.5 Eliminate the particles with the bottom 10 percent of fitness values. If the stopping criterion is satisfied, output g_best and its fitness value (denoted by f(g_best)). If not, update c_1, c_2 and ω by Formulas (9)-(11), and repeat the update process.

Algorithm 2: PSO algorithm for the optimistic coefficient criterion.

Initialization process:
1.1 For each of the n particles in each of the two swarms, ξ and θ, initialize the particle position x_i, y_i and velocity xv_i, yv_i with random vectors in the corresponding search space.
1.2 Evaluate max_θ∈Θ log|I^{−1}(θ, ξ)| and min_θ∈Θ log|I^{−1}(θ, ξ)| by the improved PSO algorithm. Then initialize f_actions(θ, ξ) and the local and global best positions.
1.3 Determine the local best and global best positions p_ibest and g_best.

Update process:
2.1 Rank the fitness values of the particles from high to low, and take the particles with the top 10 percent of fitness values as "superior particles". Then split each superior particle into two particles with the same velocity and position, and update the velocities of the particles by Formula (1).
2.2 Based on the velocity, update the positions of the particles by Formula (2).
2.3 Based on the new positions, calculate the fitness value f_actions(θ, ξ) by Algorithm 1.
2.4 Update the local and global best positions p_ibest and g_best. Then update the fitness values of p_ibest and g_best.
2.5 Eliminate the particles with the bottom 10 percent of fitness values. If the stopping criterion is satisfied, output g_best and its fitness value (denoted by f(g_best)). If not, update c_1, c_2 and ω by Formulas (9)-(11), and repeat the update process.

Algorithm 3: PSO algorithm for the minimax regret criterion.

Initialization process:
1.1 For each of the n particles in each of the two swarms, ξ and θ, initialize the particle position x_i, y_i and velocity xv_i, yv_i with random vectors.
1.2 Compute the fitness value min_ξ log|I^{−1}(θ, ξ)| by the improved algorithm. Based on that, compute RV(θ, ξ).
1.3 Determine the local best and global best positions p_ibest and g_best.

Update process:
2.1 Rank the fitness values of the particles from high to low, and take the particles with the top 10 percent of fitness values as "superior particles". Then split each superior particle into two particles with the same velocity and position, and update the velocities yv_i of the particles in swarm 2 by Formula (1).
2.2 Based on the velocity, update the positions of the particles in swarm 2 by Formula (2).
2.3 Update the fitness value max_θ∈Θ RV(θ, ξ) with Algorithm 1.
2.4 Update the velocities xv_i of the particles in swarm 1 by Formula (1).
2.5 Based on the velocity, update the positions of the particles in swarm 1 by Formula (2).
2.6 Update the fitness value (the loss function) min_ξ max_θ∈Θ RV(θ, ξ) by Algorithm 1. Then update p_ibest and g_best, and update the fitness values of p_ibest and g_best.
2.7 Eliminate the particles with the bottom 10 percent of fitness values. If the stopping criterion is satisfied, output g_best and its fitness value (denoted by f(g_best)). If not, update c_1, c_2 and ω by Formulas (9)-(11), and repeat the update process.

Table 1 .
Information of test functions.

Table 2 .
Comparisons of the performance of traditional PSO and improved PSO.

Table 3 .
Information of new test functions.

Table 4 .
Comparisons of our improved PSO algorithm with a typical combination of genetic and PSO algorithm.

Table 5 .
Different criteria with the Michaelis-Menten model.

Table 6 .
Different criteria with the two parameter logistic regression model.