An Improved Artificial Electric Field Algorithm for Multi-Objective Optimization

Abstract: Real-world problems in fields such as science, engineering, and mechanics are often multi-objective optimization problems. To achieve an optimum solution to such problems, multi-objective optimization algorithms are used. Solving a multi-objective problem means exploring a set of candidate solutions, each of which satisfies the required objectives without being dominated by any other solution. In this paper, a population-based metaheuristic called the artificial electric field algorithm (AEFA) is extended to deal with multi-objective optimization problems. The proposed algorithm utilizes the concept of strength Pareto for fitness assignment and a fine-grained elitism selection mechanism to maintain population diversity. Furthermore, it integrates the shift-based density estimation approach with strength Pareto for density estimation, and it employs the bounded exponential crossover (BEX) and polynomial mutation operator (PMO) to keep solutions from getting trapped in local optima and to enhance convergence. The proposed algorithm is validated on several standard benchmark functions, and its performance is compared with existing multi-objective algorithms. The experimental results reveal that the proposed algorithm is highly competitive and maintains the desired balance between exploration and exploitation to speed up convergence towards the Pareto-optimal front.


Introduction
Most realistic problems in science and engineering involve diverse competing objectives that must be optimized simultaneously. Such problems often admit several distinct alternative solutions [1,2] rather than a single optimal one. Solutions that meet the target objectives are considered best, as they are superior to the other solutions in the decision space. Generally, realistic multi-objective optimization problems (MOOPs) require a long time to evaluate each objective function and constraint. In solving such realistic MOOPs, stochastic methods show more competence and suitability than conventional methods. Evolutionary algorithms (EAs), which mimic natural biological selection and evolution, have proven to be an efficient approach to solving MOOPs: they evolve many candidate solutions simultaneously and explore an enormous search space in adequate time. The genetic algorithm (GA) is an acclaimed approach among bio-inspired EAs. In recent years, GA has been extended to solve MOOPs as the non-dominated sorting genetic algorithm (NSGA-II) [3], recognized as a recent advancement in multi-objective evolutionary algorithms. Besides, particle swarm optimization is another biologically motivated approach, which has been extended to multi-objective particle swarm optimization (MOPSO) [4,5], bare-bones multi-objective particle swarm optimization (BBMOPSO) [6], and non-dominated sorting particle swarm optimization (NSPSO) [7] to solve MOOPs.

Related Work
Since a single solution cannot optimize multiple objectives at once, such problems are formulated as multi-objective optimization. Nobahari et al. [1] proposed a non-dominated sorting based gravitational search algorithm (NSGSA). Several multi-objective evolutionary algorithms (MOEAs) [15][16][17][18] have been proposed in past years. These algorithms have proven their ability to find various Pareto-optimal solutions in a single run. Srinivas and Deb [19] proposed the non-dominated sorting genetic algorithm (NSGA), where elitism was used to achieve better convergence. Zitzler and Thiele [20] introduced the strength Pareto evolutionary algorithm (SPEA) with elitism selection. They proposed maintaining an external population to keep all the non-dominated solutions obtained in each generation. In each generation, the external and internal populations were combined into a set, and based on the number of dominated solutions, all the non-dominated solutions in the combined set were assigned a fitness value. The fitness value decided the rank of the solutions, which directed the exploration process towards the non-dominated solutions. Rudolph [21] proposed a simple elitist [22] multi-objective EA based on a comparison between the parent and child populations. At each iteration, the candidate parent solutions were compared with the child non-dominated solutions, and a final non-dominated solution set was formed to serve as the parent population for the next iteration. Although this algorithm ensured convergence to the Pareto-optimal front, it suffered from loss of population diversity. All population-based evolutionary algorithms help in maintaining population diversity and convergence for multi-objective optimization [23]. In recent years, many significant contributions have been made centered around the development of hybrid [24] algorithms and many-objective optimization algorithms [25,26].
Zhao and Cao [27] proposed an external memory-based particle swarm optimization (MOPSO) algorithm in which, besides the initial search population, two external memories were used to store the global and local best individuals. Also, a geographic-based technique was utilized to maintain population diversity. Zhang and Li [28] combined traditional mathematical approaches with EAs and introduced a decomposition-based MOEA, MOEA/D [29]. Martinez and Coello [30] extended MOEA/D and proposed a constraint-based selection mechanism to solve constrained MOPs. Gu et al. [31] proposed a dynamic weight design method based on the projections of non-dominated solutions to produce evenly distributed non-dominated solutions. Zhang [32] introduced a novel and efficient approach to non-dominated sorting for MOEAs. Chong and Qiu [33] proposed a novel opposition-based self-adaptive hybridized differential evolution algorithm to handle continuous MOPs. Cheng et al. [34] proposed a many-objective optimization method for high-dimensional objective spaces in which a reference-vector-guided evolutionary algorithm was introduced to maintain a balance between diversity and convergence of the solutions. Hassanzadeh and Rouhani [35] proposed a multi-objective gravitational search algorithm (MOGSA). Yuan et al. [36] extended the gravitational search algorithm (GSA) to the strength Pareto based multi-objective gravitational search algorithm (SPGSA). A review of the existing literature is presented in Table 1.

Our Contribution
In this paper, an improved artificial electric field algorithm is proposed to solve multi-objective optimization problems. The concept of strength Pareto is used in the optimization process of the proposed algorithm to refine fitness assignment. The shift-based density estimation (SDE) technique with fine-grained elitism selection approach and SDE with strength Pareto are applied to improve the population diversity and density estimation, respectively. The bounded exponential crossover (BEX) operator and polynomial mutation operator (PMO) are used to reduce the possibility of the solution trapping in local optima. The proposed algorithm is validated using different benchmark functions and compared with existing multi-objective optimization techniques.

Preliminaries and Background
This section briefly discusses the basic concepts of multi-objective optimization, artificial electric field algorithm, shift-based density estimation technique, and recombination and mutation operators.

Multi-Objective Optimization
In general, a multi-objective optimization problem (MOOP) involving m diverse objectives is defined over decision vectors P = {P_1, P_2, ..., P_d}, where P represents a candidate solution to the MOOP and d represents the dimension of the decision space. A MOOP can be a minimization problem, a maximization problem, or a combination of both. The target objectives in a MOOP are defined as

Minimize/Maximize: F(P) = (F_1(P), F_2(P), ..., F_m(P)),

subject to the constraints

G_a(P) ≤ 0, a = 1, 2, ..., A, and H_b(P) = 0, b = 1, 2, ..., B,

where F_i(P) represents the ith objective function of solution P. A MOOP can have zero or more constraints; G_a(P) and H_b(P) are the ath optional inequality and bth optional equality constraints, and A and B represent the total numbers of inequality and equality constraints, respectively. The MOOP is then processed to find the solution P for which F(P) satisfies the desired optimization. Unlike single-objective optimization (SOO), a MOOP considers multiple conflicting objectives and produces a set of solutions in the search space. These solutions are compared using Pareto dominance theory [36], defined as follows: given a MOOP, a vector x ∈ X (the universe) is called Pareto optimal if and only if no other solution y ∈ X dominates it. Such solutions x are said to be non-dominated, and together they compose the Pareto-optimal set.
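The dominance test underlying this definition can be sketched in Python; the function names here are illustrative, not from the paper.

```python
import numpy as np

def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (minimization):
    fx is no worse in every objective and strictly better in at least one."""
    fx, fy = np.asarray(fx), np.asarray(fy)
    return bool(np.all(fx <= fy) and np.any(fx < fy))

def pareto_front(F):
    """Indices of the non-dominated vectors in a list of objective vectors."""
    return [i for i, fi in enumerate(F)
            if not any(dominates(fj, fi) for j, fj in enumerate(F) if j != i)]
```

For example, among the objective vectors (1, 2), (2, 1), and (2, 2), the first two are mutually non-dominated while the third is dominated by (1, 2).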

Artificial Electric Field Algorithm (AEFA)
The artificial electric field algorithm (AEFA) is a population-based meta-heuristic that mimics Coulomb's law of electrostatic attraction and Newton's law of motion. In AEFA, the possible candidate solutions of the given problem are represented as a collection of charged particles, and the charge associated with each particle determines the performance of the corresponding candidate solution. The electrostatic attraction force causes the particles to attract one another, resulting in a global movement towards particles with heavier charges. A candidate solution corresponds to the position of a charged particle, and the fitness function determines its charge and unit mass. The steps of AEFA are as follows: Step 1. Initialization of Population. A population of P candidate solutions (charged particles) is initialized randomly within the search bounds, where CP_i^k represents the position of the ith charged particle in the kth dimension, k = 1, 2, ..., D, and D is the dimension of the search space.
Step 2. Fitness Evaluation. A fitness function takes a candidate solution as input and produces an output showing how well that solution fits the considered problem. In AEFA, the performance of each charged particle depends on its fitness value during each iteration. For a minimization problem, the best and worst fitness are computed as Best(T) = min_i Fitness_i(T) and Worst(T) = max_i Fitness_i(T), where Fitness_i(T) and n represent the fitness value of the ith charged particle and the total number of charged particles in the population, respectively, and Best(T) and Worst(T) represent the best and worst fitness of all charged particles at time T.
Step 3. Computation of Coulomb's Constant. At time T, Coulomb's constant K(T) decays exponentially with the iteration count, K(T) = K_0 exp(−α · iter/maxiter). Here, K_0 represents the initial value and is set to 100, α is a parameter set to 30, and iter and maxiter represent the current iteration and the maximum number of iterations, respectively.
Step 4. Compute the Charge of the Charged Particles. At time T, the charge of the ith charged particle, Q_i(T), is computed from the current population's fitness as q_i(T) = exp((Fitness_i(T) − Worst(T)) / (Best(T) − Worst(T))) and Q_i(T) = q_i(T) / Σ_j q_j(T), where Fitness_i(T) is the fitness of the ith charged particle. The small charge q_i(T) helps determine the total charge Q_i(T) acting on the ith charged particle at time T.
Step 5. Compute the Electrostatic Force and Acceleration of the Charged Particles.
1. The electrostatic force exerted by the jth charged particle on the ith charged particle in the Dth dimension at time T is computed as F_ij^D(T) = K(T) · Q_i(T) Q_j(T) (P_j^D(T) − X_i^D(T)) / (R_ij(T) + ε), where Q_i(T) and Q_j(T) are the charges of the ith and jth charged particles at time T, ε is a small positive constant, and R_ij(T) is the distance between the two charged particles i and j. P_j^D(T) and X_j^D(T) are the best and current positions of the jth charged particle at time T. The net force exerted on the ith charged particle by all other charged particles at time T is F_i^D(T) = Σ_j rand() · F_ij^D(T), where rand() is a uniform random number generated in the [0, 1] interval.

2. The acceleration a_i^D(T) of the ith charged particle at time T in the Dth dimension is computed using Newton's law of motion as a_i^D(T) = Q_i(T) E_i^D(T) / M_i(T), where E_i^D(T) = F_i^D(T) / Q_i(T) and M_i(T) represent the electric field and unit mass of the ith charged particle at time T in the Dth dimension, respectively.
Step 6. Update the Velocity and Position of the Charged Particles. At time T, the velocity and position of the ith charged particle in the Dth dimension are updated as vel_i^D(T + 1) = rand() · vel_i^D(T) + a_i^D(T) and CP_i^D(T + 1) = CP_i^D(T) + vel_i^D(T + 1), where vel_i^D(T) and CP_i^D(T) represent the velocity and position of the ith charged particle in the Dth dimension at time T, respectively, and rand() is a uniform random number in the interval [0, 1].
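One AEFA iteration covering Steps 2-6 can be sketched as follows; the array layout, the unit-mass simplification, and the minimization convention for Best/Worst are illustrative assumptions rather than the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def aefa_step(pos, vel, pbest, fit, it, max_it, K0=100.0, alpha=30.0, eps=1e-12):
    """One AEFA iteration for a minimization problem, following Steps 2-6.
    pos, vel, pbest are (n, D) arrays; fit holds each particle's fitness."""
    n, D = pos.shape
    best, worst = fit.min(), fit.max()
    # Step 4: normalized charge -- the best particle gets the largest charge
    q = np.exp((fit - worst) / (best - worst - eps))
    Q = q / q.sum()
    # Step 3: Coulomb's constant decays exponentially over the iterations
    K = K0 * np.exp(-alpha * it / max_it)
    # Step 5: net force on particle i, pulled towards the best positions
    force = np.zeros((n, D))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            R = np.linalg.norm(pos[i] - pos[j])
            F_ij = K * Q[i] * Q[j] * (pbest[j] - pos[i]) / (R + eps)
            force[i] += rng.random(D) * F_ij
    # a = Q_i * E_i / M_i with E_i = F_i / Q_i and unit mass, so a_i = F_i
    acc = force
    # Step 6: stochastic inertia on the velocity, then move the particles
    vel = rng.random((n, D)) * vel + acc
    return pos + vel, vel
```

The O(n²) force loop is kept explicit to mirror the equations; a vectorized version would broadcast the pairwise differences instead.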

Shift-Based Density Estimation
In order to maintain good convergence and population diversity, shift-based density estimation (SDE) [37] was introduced. While determining the density of an individual P_i, unlike traditional density estimation approaches, SDE shifts the other individuals in the population P to new positions based on a convergence comparison between the current individual and the others on each objective. For example, if P_j ∈ P outperforms P_i on some objective, the corresponding objective value of P_j is shifted to P_i's position on that objective; otherwise, the objective value remains unchanged. The process is described by Equation (12), where F_i^sd(P_j) represents the shifted value of the ith objective F_i(P_j), and (F_1^sd(P_j), ..., F_m^sd(P_j)) represents the shifted objective vector of F(P_j). The steps followed in density estimation are explained in Algorithm 1.

Algorithm 1 Density Estimation for Non-Dominated Solutions
Input: non-dominated solutions P_1^{I_c}, P_2^{I_c}, ..., P_n^{I_c}
Output: density of each solution
1. Shift the positions of the non-dominated solutions using Equation (12).
2. Calculate the distance between each pair of adjacent non-dominated solutions.
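Algorithm 1's shift-and-measure idea can be sketched as follows. Since the exact distance formula of step 2 is not reproduced above, a SPEA2-style 1/(σ_k + 2) density on the shifted distances is assumed here as an illustration.

```python
import numpy as np

def sde_density(F, k=1):
    """Shift-based density estimation (SDE) sketch for a non-dominated set.
    F: (n, m) objective matrix (minimization).  For individual i, every
    other individual is shifted so that any objective where it is better
    than i is moved up to i's value; density is then 1/(sigma_k + 2) on
    the distances between i and the shifted individuals."""
    F = np.asarray(F, dtype=float)
    n = len(F)
    dens = np.empty(n)
    for i in range(n):
        shifted = np.maximum(F, F[i])            # shift better objectives up to F[i]
        d = np.linalg.norm(shifted - F[i], axis=1)
        d[i] = np.inf                            # ignore the self-distance
        sigma_k = np.sort(d)[k - 1]              # k-th nearest shifted neighbour
        dens[i] = 1.0 / (sigma_k + 2.0)
    return dens
```

Because the shift collapses any convergence advantage of the neighbours, a poorly converged individual ends up in a "crowded" shifted neighbourhood and receives a high density, which is exactly what makes SDE convergence-aware.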

Recombination and Mutation Operators
In AEFA, movement (acceleration) of one charged particle towards another is determined by the total electrostatic force exerted by other particles on it. In this paper, bounded exponential crossover (BEX) [12] and polynomial mutation operator (PMO) [13,14] are used with AEFA to enhance the acceleration in order to increase convergence speed further.

Bounded Exponential Crossover (BEX)
The performance on a MOOP depends strongly on the solutions generated by the crossover operator: a good crossover operator produces solutions that reduce the possibility of being trapped in local optima. Bounded exponential crossover (BEX) is used to generate offspring solutions in the interval [P_i^l, P_i^u]. BEX involves an additional factor α that depends on the interval [P_i^l, P_i^u] and the positions of the parent solutions. The steps to generate offspring solutions in the interval [P_i^l, P_i^u] from each pair of parents x_i and y_i are given in Algorithm 2.

Polynomial Mutation Operator (PMO)
In order to avoid premature convergence in MOOPs, a mutation operator is used; in this paper, the polynomial mutation operator (PMO) is applied. PMO includes two control parameters: the mutation probability of a parent solution (p_m) and the mutation distribution index (η_m), which controls the magnitude of the expected variation. For an individual P_i, PMO is computed as P_i^(t+1) = P_i^(t) + δ · (y_u − y_d), where P_i^(t+1) represents the decision variable after the mutation, P_i^(t) represents the decision variable before the mutation, and y_d and y_u represent the lower and upper bounds of the decision variable, respectively. δ represents a small variation obtained by

δ = (2 r_2)^(1/(η_m + 1)) − 1, if r_2 < 0.5,
δ = 1 − (2 (1 − r_2))^(1/(η_m + 1)), otherwise. (14)

Here, r_2 is a random number in the [0, 1] interval, which is compared with a predefined threshold value (0.5), and η_m represents the mutation distribution index.

Algorithm 2 Bounded Exponential Crossover (BEX)
1. Generate a uniformly distributed random number u_i ∈ U(0, 1).
2. Compute β_{x_i} and β_{y_i}, derived by inverting the bounded exponential distribution, where r_i ∈ U(0, 1) represents a uniformly distributed random variable and x_i^l, x_i^u are the lower and upper bounds of the ith decision variable.
3. Generate the offspring solutions from the parents x_i and y_i using β_{x_i} and β_{y_i}.
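The two operators can be sketched as follows. The `pmo` function follows the standard polynomial-mutation form of Equation (14); since the exact BEX inversion of [12] is not reproduced above, the `bex` function uses a plain two-sided exponential perturbation truncated to the bounds as an illustrative stand-in, not the paper's operator.

```python
import numpy as np

rng = np.random.default_rng(1)

def pmo(x, xl, xu, eta_m=20.0, pm=None):
    """Polynomial mutation, the simple non-boundary form of Equation (14).
    x, xl, xu: arrays of decision variables and their lower/upper bounds."""
    x = np.asarray(x, dtype=float).copy()
    if pm is None:
        pm = 1.0 / len(x)                 # common default: mutate ~1 variable
    for i in range(len(x)):
        if rng.random() < pm:
            r2 = rng.random()
            if r2 < 0.5:
                delta = (2.0 * r2) ** (1.0 / (eta_m + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - r2)) ** (1.0 / (eta_m + 1.0))
            x[i] = np.clip(x[i] + delta * (xu[i] - xl[i]), xl[i], xu[i])
    return x

def bex(x, y, xl, xu, alpha=0.5):
    """Illustrative stand-in for bounded exponential crossover: each child
    is a parent perturbed by a two-sided exponential step scaled by alpha
    and the parent spread, truncated to [xl, xu]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    span = np.abs(x - y)
    sign = np.where(rng.random(len(x)) < 0.5, -1.0, 1.0)
    beta = sign * rng.exponential(alpha, len(x))
    c1 = np.clip(x + beta * span, xl, xu)
    c2 = np.clip(y + beta * span, xl, xu)
    return c1, c2
```

A larger η_m concentrates δ near 0 (small mutations), while a small η_m allows larger jumps, which is the exploration/exploitation knob the sensitivity analysis in Section 5.3 varies.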

Proposed Algorithm
In this section, the proposed multi-objective optimization method is described in detail. The algorithm starts with parameter initialization. Then, a population of candidate solutions is generated. Further, by computing fitness values for each candidate solution, the best solution is selected. The population is iteratively updated until the termination conditions are satisfied and the optimal solution is returned. The proposed algorithm is presented in Algorithm 3. The explanation of the symbols used in the proposed algorithm is presented in Table 2. The steps followed in the proposed algorithm are as follows.

Population Generation
In the proposed algorithm, two populations are maintained: the searching population (P_n) and the external population (P*_{I_c}). The searching population contains the candidate solutions; the non-dominated solutions computed from it are stored in the external population. This process is performed iteratively.

Algorithm 3 Proposed Multi-Objective Optimization Algorithm
Input: searching population of size n, external population of size m
Output: non-dominated set of charged particles ND_CP
Begin
1. Initialize the searching population (P_n) of charged particles CP.
2. Initialize the external population P* = ∅ and set the iteration counter I_c = 1.
While the termination condition is not met (I_c ≤ I_cmax) do
   For each CP ∈ P_{I_c} ∪ P* do
      a. Compute the fitness F(CP) using Equation (15).
      b. Compute the density using shift-based density estimation (Section 3.3).
   end for
   c. Select charged particles (CP) into the mating pool P_{I_c+1} from P_{I_c} ∪ P*.
   d. Evaluate the charge, update the velocity and position of P_{I_c+1} as defined in Section 3.2, and obtain the new positions of the CPs.
   e. Apply the crossover and mutation operators (Section 3.4) on population P_{I_c+1}.
end while
For each CP ∈ P_{I_c+1} do
   If CP is a non-dominated solution then ND_CP = ND_CP ∪ {CP}; end if
end for
End

Fitness Evaluation
The traditional AEFA is not suitable for MOOPs due to its definition of charge: according to Equations (7) and (8), the charge of a particle is derived from a single fitness value. Thus, the multi-objective fitness assignment used in SPEA2 [38] is introduced to evaluate the charge in the AEFA algorithm for two or more objectives. The multi-objective fitness of the proposed algorithm is calculated as F(i) = R(i) + D(i), (15) where F(i) is the fitness value of charged particle i, R(i) is the raw fitness value of charged particle i, and D(i) represents the additional density information of charged particle i. The raw fitness exhibits the strength of each charged particle by assigning a rank to it.
In a MOOP, to handle cases where more than one solution (charged particle) shares the same rank, an additional density measure is used to distinguish between them. The SDE technique is used to compute this additional density (crowding distance) of particle i.
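The fitness assignment of Equation (15) can be sketched as follows, using the standard SPEA2 strength and raw-fitness definitions; a plain Euclidean k-th-nearest-neighbour density stands in here for the SDE distance the paper uses.

```python
import numpy as np

def spea2_fitness(F, k=1):
    """SPEA2-style fitness F(i) = R(i) + D(i) for objective matrix F (n, m).
    R(i) sums the strengths of everything that dominates i, and D(i) is a
    k-th-nearest-neighbour density term: non-dominated particles get
    fitness < 1, dominated particles get fitness >= 1."""
    F = np.asarray(F, dtype=float)
    n = len(F)
    # dom[i][j] is True when particle i dominates particle j (minimization)
    dom = np.array([[np.all(F[i] <= F[j]) and np.any(F[i] < F[j])
                     for j in range(n)] for i in range(n)])
    S = dom.sum(axis=1)                                   # strength of each particle
    R = np.array([S[dom[:, i]].sum() for i in range(n)])  # raw fitness
    dist = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    sigma_k = np.sort(dist, axis=1)[:, k - 1]
    D = 1.0 / (sigma_k + 2.0)                             # density in (0, 0.5)
    return R + D
```

Because D(i) is strictly below 1, the density term only breaks ties among particles with the same raw fitness, which is exactly the role the text assigns to the crowding distance.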

A Fine-Grained Elitism Selection Mechanism
Elitism selection is a critical process in multi-objective optimization: it determines the best-fit solutions for the next generation, and the selection process affects the output Pareto front. If a better solution is lost in selection, it cannot be recovered. Selecting a better solution that dominates others helps in removing worse solutions. In this study, a modified fine-grained elitism selection method that utilizes the shift-based density estimation (SDE) approach is proposed to improve population diversity. Similar to the standard non-dominated solution selection mechanism, the proposed method selects the non-dominated solutions as follows: as the external population size (m) is fixed, the non-dominated solutions are copied to the external population set (P*_{I_c+1}) while the current size of the external population is less than m. Otherwise, the excess non-dominated solutions (|P*_{I_c+1}| − m of them) are discarded from the external population set. In order to discard solutions while maintaining diversity, the proposed method deletes solutions from the most crowded region, computed using the SDE approach described in Section 3.3. The steps used to compute the shared crowding distance are as follows.

1. The distance between two adjacent particles in P_{I_c} and P*_{I_c+1} is computed as in Algorithm 1.
2. For each particle, an additional density estimate (the shared crowding distance) is computed. When the number of non-dominated solutions exceeds the external population size (i.e., |P*_{I_c}| > m), the proposed method selects the kth charged particle from the external population (P*_{I_c}) using Equation (17) and deletes it. This process is performed iteratively until |P*_{I_c}| = m. Here, the solution (charged particle) whose density is greater than the average shared crowding distance of all non-dominated solutions in the external population is considered the worst solution and is deleted from the external population set. Then, the average crowding distance is recomputed and applied in the same manner until |P*_{I_c}| = m.
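The truncation loop above can be sketched as follows; a plain nearest-neighbour density stands in for the SDE-based shared crowding distance of Equation (17), so this is an illustrative approximation rather than the paper's exact procedure.

```python
import numpy as np

def truncate_archive(F, m):
    """Fine-grained elitism truncation sketch: while the archive holds more
    than m members, delete the most crowded one and recompute the densities
    after every deletion.  F: list of objective vectors; returns the indices
    of the members that are kept."""
    keep = list(range(len(F)))
    F = np.asarray(F, dtype=float)
    while len(keep) > m:
        sub = F[keep]
        dist = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=2)
        np.fill_diagonal(dist, np.inf)
        density = 1.0 / (dist.min(axis=1) + 2.0)   # most crowded -> highest value
        keep.pop(int(np.argmax(density)))          # drop the most crowded member
    return keep
```

Recomputing the density after each deletion is what makes the method "fine-grained": removing one member of a tight cluster immediately lowers the density of its neighbours, so the whole cluster is not wiped out at once.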
Table 2. Explanation of the symbols used in the proposed algorithm.

ND_CP: non-dominated set of charged particles (candidate solutions)
Fitness(T): objective fitness function
Best(T): charged particle with the best fitness at time T
Worst(T): charged particle with the worst fitness at time T
Q_i(T): total charge on the ith charged particle at time T
q_i(T): small charge of the ith charged particle used to determine the total charge acting on it
F_ij: force exerted by the jth charged particle on the ith charged particle
SDE: shift-based density estimation
I_cmax: maximum number of iterations
d(P_i^{I_c}, P_{i+1}^{I_c}): distance between two adjacent non-dominated solutions
σ_crowd: shared crowding distance for each CP
σ_kcrowd: shared crowding distance of the kth CP chosen for deletion from the external population set

Experimental Results and Discussion
This section is further divided into three sub-sections. Section 5.1 discusses the performance comparison of the AEFA with existing evolutionary approaches. Section 5.2 gives a performance comparison of the proposed algorithm with existing multi-objective optimization algorithms, and Section 5.3 presents a sensitivity analysis of the proposed algorithm.

Performance Comparison of the AEFA With Existing Evolutionary Approaches
At first, the performance of AEFA is evaluated on 10 benchmark functions, comprising three unimodal (UM), three low-dimensional multimodal (LDMM), and four high-dimensional multimodal (HDMM) functions. All of these functions are minimization problems; they are described in Table 3. The performance of AEFA is then compared with existing evolutionary approaches using four statistical performance indices: the average best-so-far solution, the mean best-so-far solution, the variance of the best-so-far solution, and the average run-time. During each iteration, the algorithm updates the best-so-far solution and records the running time; the performance indices are computed from the recorded best-so-far solutions and running times.

Parameter Setting
For experimental analysis, the parameters, i.e., population size (P n , P m ), the maximum Coulomb's constant (K 0 ), and the maximum number of iterations (I cmax ), are initialized in Table 4.

Results and Discussion
The performance of AEFA is compared with existing evolutionary optimization algorithms: the backtracking search algorithm (BSA) [36], cuckoo search algorithm (CK) [36], artificial bee colony (ABC) algorithm [36], and gravitational search algorithm (GSA) [36]. The results are demonstrated in Table 5 and Figure 1. They show that AEFA obtained the minimum values for all UM and LDMM functions; the obtained values competed with those of GSA and outperformed those of ABC, CK, and BSA, which reveals that AEFA maintains a balance between exploration and exploitation and performs better than the existing approaches. However, for the HDMM functions (except benchmark F10), AEFA performs worse than ABC and BSA, which implies that AEFA suffers from loss of diversity, resulting in premature convergence on complex HDMM problems. It is concluded from the results that the original AEFA faces challenges in solving higher-dimensional multimodal problems.

Table 4. Parameters used to evaluate AEFA.

Population size (P_n, P_m): 50
Initial value used in Coulomb's constant (K_0): 100
Maximum number of iterations (I_cmax): 300 for F4-F6 and 1000 for the remaining benchmark functions

Performance Comparison of the Proposed Algorithm with Existing Multi-Objective Optimization Algorithms
As shown in Table 5, AEFA faces challenges on MOOPs; these challenges are addressed by the proposed algorithm. The proposed algorithm is evaluated on six benchmark functions, described in Table 6. Its performance is compared with existing MOOP algorithms based on four performance measures: the generational distance metric (GD) [2], diversity metric (DM) [3], convergence metric (CM) [3], and spacing metric (SM) [39]. DM measures the extent of spread attained in the obtained optimal solutions. CM measures the convergence to the obtained optimal Pareto front. GD measures the proximity of the obtained solutions to the Pareto-optimal front. SM demonstrates how evenly the optimal solutions are distributed among themselves. Further, the performance of the proposed algorithm is compared with existing MOOP algorithms in terms of recombination and mutation operators.
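The GD and SM metrics can be sketched as follows; exact normalizations vary across papers, so these follow the common mean-distance form of GD and Schott's spacing form of SM as an assumption.

```python
import numpy as np

def gd(obtained, true_front):
    """Generational distance: average Euclidean gap from each obtained
    point to its nearest point on the reference Pareto front (0 = exact)."""
    A = np.asarray(obtained, dtype=float)
    T = np.asarray(true_front, dtype=float)
    d = np.linalg.norm(A[:, None, :] - T[None, :, :], axis=2).min(axis=1)
    return d.mean()

def spacing(obtained):
    """Schott's spacing metric: spread of each point's city-block distance
    to its nearest neighbour (0 = perfectly even distribution)."""
    A = np.asarray(obtained, dtype=float)
    d = np.abs(A[:, None, :] - A[None, :, :]).sum(axis=2)
    np.fill_diagonal(d, np.inf)
    di = d.min(axis=1)
    return np.sqrt(((di - di.mean()) ** 2).sum() / (len(A) - 1))
```

A front that coincides with the reference set gives GD = 0, and evenly spaced points give SM = 0, matching the "lower is better" reading of Tables 8-12.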

Parameter Setting
For experimental analysis, the parameters, i.e., initial population size (P n ), external population size (P m ), the maximum Coulomb's constant (K 0 ), the maximum number of iterations (I cmax ), initial crossover probability (P co ), final crossover probability (P c1 ), initial mutation probability (P m0 ), and final mutation probability (P m1 ), are initialized and presented in Table 7.

Results and Discussion
The performance of the proposed algorithm is analyzed in three steps: (1) the proposed algorithm is evaluated on the SCH, FON, and ZDT benchmarks and compared with NSGA II [3], NSPSO [7], BCMOA [40], and SPGSA [36] based on the CM, DM, and GD metrics; (2) the proposed algorithm is evaluated on the MOP5 and MOP6 benchmarks and compared with NSGSA [36], MOGSA [36], SMOPSO [36], MOGA II [36], and SPGSA [36] based on the GD and SM metrics; and (3) the efficacy of using the recombination operator is validated by evaluating the proposed algorithm on the six benchmarks and comparing it with the existing MOOP algorithms. Experiments are performed 10 times on each benchmark function, and results are recorded in terms of mean and variance, which reflect the robustness of the algorithm. The results are presented in Tables 8-12 and Figures 2-6. For better representation, all the results in Figures 2-6 are plotted on a logarithmic scale, where a high logarithmic value corresponds to a small output value (mean or variance). The results in Table 8 and Figure 2 demonstrate that the proposed algorithm obtained the minimum values of the CM metric compared with NSGA II, NSPSO, BCMOA, and SPGSA, which shows that the proposed algorithm ensures better convergence and keeps a good balance between exploration and exploitation on low-dimensional as well as high-dimensional problems. Figure 3 and Table 9 show that the proposed algorithm obtained the minimum variance (in the DM metric) for all the benchmark functions compared with NSGA II, NSPSO, BCMOA, and SPGSA, which validates the efficacy and robustness of the proposed algorithm. The results in Table 10 (GD metric) and Figure 4 demonstrate that the proposed algorithm performed better than the existing NSGA II, NSPSO, BCMOA, and SPGSA; since the GD metric validates the proximity of the obtained optimal solutions to the optimal Pareto front, this ensures the effectiveness of the proposed algorithm.
Table 11 and Figure 5 demonstrate that the proposed algorithm achieves better values for the GD and SM metrics than the compared algorithms. The GD values of the proposed algorithm show more accurate results, and the SM values show that the proposed algorithm attains better convergence accuracy for all benchmark functions, meaning that it can produce uniformly distributed non-dominated optimal solutions. In Table 12 and Figure 6a-c, the proposed algorithm is compared with SPEA2, SPGSA, SPGSA_NRM, and a variant of the proposed algorithm without the BEX and PMO operators. The results indicate that involving the BEX and PMO operators significantly improves the optimization performance in comparison with the compared algorithms; on the contrary, in the absence of these operators, the proposed algorithm suffers from premature convergence. It is therefore concluded from all four metrics (Tables 8-11) and Table 12 that the proposed algorithm shows better efficacy, accuracy, and robustness in searching for true Pareto fronts across the global search space than the existing MOOP algorithms. Further, with the BEX and PMO operators, the proposed algorithm maintains the desired diversity while searching for the true Pareto front.


Table 7. Parameters used in the proposed algorithm.

Table 7. Parameters used in the proposed algorithm.

Description                              Parameter   Value
Population (charged particles) size      P_n         100
External population size                 P_m         100 for SCH, FON, ZDT; 800 for MOP functions
Initial value of Coulomb's constant      K_0         100
Maximum number of iterations             I_cmax      100 for SCH and FON; 250 for ZDT and MOP functions
Initial crossover probability            P_c0        1.0
Final crossover probability              P_c1        0.0
Initial mutation probability             P_m0        0.01
Final mutation probability               P_m1        0.001

Table 11. Performance comparison of the proposed algorithm with existing MOOPs based on the MOP5 and MOP6 functions using the GD/SM metrics.
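Table 7 gives both initial and final values for the crossover and mutation probabilities, which suggests a probability that decays over the run. A minimal sketch, assuming a linear decay from the initial to the final value over the iterations (the actual schedule used in the paper may differ):

```python
def scheduled_probability(p_initial, p_final, iteration, max_iterations):
    """Linearly interpolate an operator probability from its initial
    to its final value over the run.  Illustrative only: the paper may
    use a different decay law (e.g. exponential)."""
    frac = iteration / max_iterations
    return p_initial + (p_final - p_initial) * frac
```

With the Table 7 settings, the crossover probability would fall from 1.0 at the first iteration to 0.0 at the last, shifting the search from exploration towards exploitation.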

Sensitivity Analysis of the Proposed Algorithm
Sensitivity analysis is the process of determining the influence of changes in the input variables on the robustness of the outcome. In this paper, the robustness of the proposed algorithm is evaluated in two steps: (1) sensitivity analysis 1 and (2) sensitivity analysis 2. Sensitivity analysis 1 is performed for different values of the crossover operator (BEX), the mutation operator (PMO), and the initial value of Coulomb's constant (K_0). Sensitivity analysis 2 is performed for different values of the initial population size and the maximum number of iterations. A detailed description of the sensitivity analyses and the related parameter settings is given in Table 13. Further, a combined performance score, described in Table 14, is used to measure the performance of the proposed algorithm.
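Operationally, this two-step protocol amounts to sweeping each parameter over its candidate values and recording the GD/SM metrics for every combination. A minimal harness sketch; `run_algorithm` is a hypothetical stand-in for one run of the optimizer (returning dummy metrics here), not the paper's implementation:

```python
import itertools

def run_algorithm(k0, pop_size):
    """Hypothetical stand-in for one run of the proposed algorithm,
    returning a (GD, SM) pair.  Replace with the real optimizer."""
    # Placeholder: deterministic dummy metrics, for illustration only.
    return 1.0 / (k0 * pop_size), 1.0 / pop_size

def sensitivity_sweep(k0_values, pop_values):
    """Evaluate every parameter combination and collect the GD/SM
    metrics, mirroring the tabulated sensitivity protocol."""
    results = {}
    for k0, n in itertools.product(k0_values, pop_values):
        results[(k0, n)] = run_algorithm(k0, n)
    return results
```

Each cell of the resulting grid corresponds to one row of the sensitivity tables, from which a combined performance score can then be assigned.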

Results and Discussion
The sensitivity of the mutation operator (PMO), the crossover operator (BEX), the initial population size, the maximum number of iterations, and the initial value of Coulomb's constant (K_0) is analyzed using the GD and SM metrics, and the results are presented in Tables 15-19, respectively. Table 15 shows that the performance of the PMO varies from "Average" to "Good", which means that suitable mutation values must be selected to achieve optimum results. Table 16 shows that the performance of the BEX varies from "Average" to "Very Good", which means that the sensitivity of BEX is low for the proposed algorithm. Table 17 shows that the performance across population sizes varies from "Average" to "Very Good", which means that the sensitivity to population size is moderate. Table 18 shows that the performance is "Good" for all settings of the maximum number of iterations, which means that the proposed algorithm is less sensitive to this parameter. Table 19 shows that the performance for the initial value of Coulomb's constant (K_0) varies from "Good" to "Very Good", which means that the proposed algorithm is less sensitive to K_0. It is concluded from Tables 15-19 that the proposed algorithm works efficiently in all given scenarios, which demonstrates its robustness.

Conclusions and Future Work
In this paper, an improved population-based meta-heuristic algorithm based on AEFA is proposed to handle multi-objective optimization problems. The proposed algorithm uses strength Pareto dominance theory to refine fitness assignment and an improved fine-grained elitism selection based on the SDE mechanism to maintain population diversity. Further, it uses the SDE technique together with strength Pareto dominance theory for density estimation, and it implements the bounded exponential crossover (BEX) and polynomial mutation operator (PMO) to prevent solutions from becoming trapped in local optima and to enhance convergence. The experiments were performed in two steps. In the first step, AEFA was evaluated on low-dimensional and high-dimensional benchmark functions, and its performance was compared with existing evolutionary optimization algorithms (EOAs) based on four parameters: the average best-so-far solution, the mean best-so-far solution, the variance of the best-so-far solution, and the average run-time. Experimental results show that AEFA performed better than the existing EOAs on low-dimensional benchmark functions, but on complex high-dimensional benchmark functions it suffered from loss of diversity and performed worse. In the second step, the proposed algorithm was evaluated on different benchmark functions, and its performance was compared with existing evolutionary multi-objective optimization algorithms (EMOOPs) based on the diversity, convergence, generational distance, and spacing metrics. Experimental results show that the proposed algorithm not only finds the optimal Pareto front (non-dominated solutions), but also outperforms the existing multi-objective optimization techniques in terms of accuracy, robustness, and efficacy. In the future, the proposed work can be explored in several directions.
One direction is to extend the proposed algorithm to solve various real-world multi-objective optimization problems. The proposed algorithm can also be hybridized with other existing algorithms to solve more complex problems.