Processes
  • Article
  • Open Access

14 May 2020

An Improved Artificial Electric Field Algorithm for Multi-Objective Optimization

Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Patiala 147004, Punjab, India
Author to whom correspondence should be addressed.
This article belongs to the Collection Multi-Objective Optimization of Processes

Abstract

Many real-world problems in science, engineering, and mechanics are multi-objective optimization problems, and multi-objective optimization algorithms are used to obtain optimum solutions to them. Solving a multi-objective problem means finding a set of candidate solutions, each of which satisfies the required objectives without being dominated by any other solution. In this paper, a population-based metaheuristic called the artificial electric field algorithm (AEFA) is extended to deal with multi-objective optimization problems. The proposed algorithm utilizes the strength-Pareto concept for fitness assignment and a fine-grained elitism selection mechanism to maintain population diversity. Furthermore, it integrates the shift-based density estimation approach with strength Pareto for density estimation, and it employs the bounded exponential crossover (BEX) and the polynomial mutation operator (PMO) to avoid trapping solutions in local optima and to enhance convergence. The proposed algorithm is validated on several standard benchmark functions, and its performance is compared with existing multi-objective algorithms. The experimental results reveal that the proposed algorithm is highly competitive and maintains the desired balance between exploration and exploitation, speeding up convergence towards the Pareto-optimal front.

1. Introduction

Most realistic problems in science and engineering involve several competing objectives that must be optimized simultaneously. Such problems often admit several distinct alternative solutions [1,2] rather than a single optimal one; the solutions that best meet the target objectives are considered superior to the other solutions in the decision space. Realistic multi-objective optimization problems (MOOPs) generally require a long time to evaluate each objective function and constraint. For such problems, stochastic methods have proven more competent and suitable than conventional methods. Evolutionary algorithms (EAs), which mimic natural selection and biological evolution, are an efficient approach to solving MOOPs: they evolve many solutions simultaneously and can explore an enormous search space in adequate time. The genetic algorithm (GA) is an acclaimed bio-inspired EA. In recent years, GA has been extended to solve MOOPs in the form of the non-dominated sorting genetic algorithm (NSGA-II) [3], which is recognized as a notable advancement in multi-objective evolutionary algorithms. Particle swarm optimization is another biologically motivated approach, extended to multi-objective particle swarm optimization (MOPSO) [4,5], bare-bones multi-objective particle swarm optimization (BBMOPSO) [6], and non-dominated sorting particle swarm optimization (NSPSO) [7] to solve multi-objective optimization problems. In a single run, these multi-objective approaches can produce several evenly distributed candidate solutions rather than a single solution.
More recently, a new algorithm called the artificial electric field algorithm (AEFA) [8] was proposed, based on Coulomb's law of electrostatic attraction and Newton's law of motion. AEFA mimics the interaction of charged particles under the control of the electrostatic attraction force and follows an iterative process: in a multi-dimensional search space, charged particles move towards heavier charges and converge to the heaviest charged particle. Owing to its strong exploration and exploitation capability, AEFA proved more robust and effective than the artificial bee colony (ABC) [8], ant colony optimization (ACO) [8], particle swarm optimization (PSO) [8], and biogeography-based optimization (BBO) [8]. AEFA has received growing attention from researchers and has spawned a series of algorithms in several areas, such as parameter optimization [9], capacitor bank replacement [10], and scheduling [11]. However, no recent research contribution has extended AEFA to solve MOOPs. In this paper, a modest contribution towards improving AEFA to solve MOOPs is made. The proposed algorithm (1) adopts the concept of "strength Pareto" for the fitness refinement of the charged particles, (2) introduces a fine-grained elitism selection mechanism based on the shift-based density estimation (SDE) approach to improve population diversity and manage the external population set, (3) utilizes SDE with "strength Pareto" for density estimation, and (4) utilizes the bounded exponential crossover (BEX) [12] and polynomial mutation operator (PMO) [13,14] to improve global exploration, local exploitation, and the convergence rate.
This paper is organized in the following way: Section 2 covers an overview of the existing literature on multi-objective optimization, Section 3 describes the preliminaries and background algorithms, Section 4 describes the proposed multi-objective method in detail, Section 5 presents results and performance of the proposed algorithm in comparison to existing optimization techniques, and Section 6 sums up the findings of this research in concluding remarks.

3. Preliminaries and Background

This section briefly discusses the basic concepts of multi-objective optimization, artificial electric field algorithm, shift-based density estimation technique, and recombination and mutation operators.

3.1. Multi-Objective Optimization

In general, a multiple-objective problem (MOOP) involving m diverse objectives is mathematically defined as
$$P = \{P_1, P_2, \ldots, P_d\}$$
where $P$ represents a solution to the MOOP, and $d$ represents the dimension of the decision space. A MOOP can be a minimization problem, a maximization problem, or a combination of both. The target objectives in a MOOP are defined as
Minimize/Maximize:
$$F(P) = [\,F_i(P),\ i = 1, 2, \ldots, m\,]$$
Subject to the constraints:
$$G_a(P) \le 0,\quad a = 1, 2, \ldots, A \qquad \text{and} \qquad H_b(P) = 0,\quad b = 1, 2, \ldots, B$$
where $F_i(P)$ represents the $i$th objective function of solution $P$. A MOOP may have no constraints or several; $G_a(P)$ and $H_b(P)$ denote the $a$th optional inequality constraint and the $b$th optional equality constraint, respectively, and $A$ and $B$ represent the total numbers of inequality and equality constraints. The MOOP is then solved to find the solution(s) $P$ for which $F(P)$ satisfies the desired optimization. Unlike single-objective optimization (SOO), a MOOP considers multiple conflicting objectives and produces a set of solutions in the search space. These solutions are compared using Pareto dominance theory [36], defined as follows:
Considering a minimization MOOP, a vector $x = (x_1, x_2, \ldots, x_m)$ dominates a vector $y = (y_1, y_2, \ldots, y_m)$ if and only if
$$\forall i \in \{1, \ldots, m\}:\ x_i \le y_i \quad \wedge \quad \exists i \in \{1, \ldots, m\}:\ x_i < y_i$$
where $m$ represents the dimension of the objective space. A solution $x \in X$ (the universe) is called Pareto-optimal if and only if no other solution $y \in X$ dominates it; such solutions $x$ are said to be non-dominated. All the non-dominated solutions together form the Pareto-optimal set.
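For a minimization problem, the dominance test and the extraction of the non-dominated set can be sketched directly in Python; `dominates` and `pareto_set` are illustrative helper names, not from the paper:

```python
def dominates(x, y):
    """Return True if objective vector x Pareto-dominates y (minimization):
    x is no worse than y on every objective and strictly better on at least one."""
    return all(xi <= yi for xi, yi in zip(x, y)) and \
           any(xi < yi for xi, yi in zip(x, y))

def pareto_set(points):
    """Extract the non-dominated (Pareto-optimal) subset of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, `pareto_set([(1, 2), (2, 1), (2, 2)])` keeps `(1, 2)` and `(2, 1)` but drops `(2, 2)`, which `(1, 2)` dominates.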

3.2. Artificial Electric Field Algorithm (AEFA)

The artificial electric field algorithm (AEFA) is a population-based meta-heuristic algorithm that mimics Coulomb's law of electrostatic attraction and Newton's law of motion. In AEFA, the possible candidate solutions of the given problem are represented as a collection of charged particles, and the charge associated with each particle determines the quality of the corresponding candidate solution. The electrostatic attraction force causes the particles to attract one another, resulting in a global movement towards particles with heavier charges. The position of a charged particle corresponds to a candidate solution, and the fitness function determines its charge and unit mass. The steps of AEFA are as follows:
Step 1. Initialization of Population.
A population of P candidate solutions (charged particles) is initialized as follows:
$$CP_i = (CP_i^1, CP_i^2, \ldots, CP_i^k, \ldots, CP_i^D), \qquad i = 1, 2, 3, \ldots, p$$
where $CP_i^k$ represents the position of the $i$th charged particle in the $k$th dimension, and $D$ is the dimension of the search space.
Step 2. Fitness Evaluation.
A fitness function is defined as a function that takes a candidate solution as input and produces an output showing how well the candidate solution fits with respect to the considered problem. In AEFA, the performance of each charged particle depends on its fitness value during each iteration. For a minimization problem, the best and worst fitness are computed as follows:
$$Best(T) = \min_{i = 1, \ldots, n} Fitness_i(T)$$
$$Worst(T) = \max_{i = 1, \ldots, n} Fitness_i(T)$$
where F i t n e s s i   ( T ) and n represent the fitness value of i th charged particle and total number of charged particles in the population, respectively. B e s t   ( T ) , W o r s t   ( T ) represent the best fitness and the worst fitness respectively of all charged particles at time T.
Step 3. Computation of Coulomb’s Constant.
At time t, Coulomb’s constant is denoted by K ( t ) and computed as follows:
$$K(t) = K_0 \exp\!\left(-\alpha \frac{iter}{maxiter}\right)$$
Here, K 0 represents the initial value and is set to 100. α is a parameter and is set to 30. i t e r and m a x i t e r represent current iteration and maximum number of iterations, respectively.
Step 4. Compute the Charge of Charged Particles.
At time T , the charge of i th charged particle is represented by Q i ( T ) . It is computed based on the current population’s fitness as follows:
$$Q_i(T) = \frac{q_i(T)}{\sum_{i=1}^{n} q_i(T)}$$
where
$$q_i(T) = \exp\!\left(\frac{fitness_{cp_i}(T) - Worst(T)}{Best(T) - Worst(T)}\right)$$
Here, $fitness_{cp_i}(T)$ is the fitness of the $i$th charged particle. $q_i(T)$ helps determine the total charge $Q_i(T)$ acting on the $i$th charged particle at time $T$.
Step 5. Compute the Electrostatic Force and Acceleration of the Charged Particles.
  • The electrostatic force exerted by the j t h charged particle on the i th charged particle in the D t h dimension at time T is computed as:
    $$F_{ij}^D(T) = K(t)\, \frac{Q_i(T)\, Q_j(T)\, \left(P_j^D(T) - X_i^D(T)\right)}{R_{ij}(T) + \varepsilon}$$
    $$F_i^D(T) = \sum_{j = 1,\ j \ne i}^{N} rand()\; F_{ij}^D(T)$$
    where $Q_i(T)$ and $Q_j(T)$ are the charges of the $i$th and $j$th charged particles at time $T$, $\varepsilon$ is a small positive constant, and $R_{ij}(T)$ is the distance between the two charged particles $i$ and $j$. $P_j^D(T)$ is the best position found so far by the $j$th charged particle, and $X_i^D(T)$ is the current position of the $i$th charged particle at time $T$. $F_i^D(T)$ is the net force exerted on the $i$th charged particle by all other charged particles at time $T$, and $rand()$ is a uniform random number generated in the [0, 1] interval.
  • The acceleration a i D ( T ) of i th charged particle at time T in D t h dimension is computed using the Newton law of motion as follows:
    $$a_i^D(T) = \frac{Q_i(T)\, E_i^D(T)}{M_i(T)}, \qquad E_i^D(T) = \frac{F_i^D(T)}{Q_i(T)}$$
    where $E_i^D(T)$ and $M_i(T)$ represent the electric field and the unit mass of the $i$th charged particle at time $T$ in the $D$th dimension, respectively.
Step 6. Update velocity and position of charged particles.
At time $T$, the position and velocity of the $i$th charged particle in the $D$th dimension are updated as follows:
$$vel_i^D(T+1) = rand_i \cdot vel_i^D(T) + a_i^D(T)$$
$$CP_i^D(T+1) = CP_i^D(T) + vel_i^D(T+1)$$
where v e l i D ( T ) and C P i D ( T ) represent the velocity and position of the i th charged particle in D t h dimension at time T , respectively. r a n d ( ) is a uniform random number in the interval [0, 1].
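Steps 2–6 can be sketched as a single iteration in Python. This is a simplified, illustrative implementation (the function and variable names are not from the paper), assuming minimization and unit particle mass:

```python
import math
import random

def aefa_step(positions, best_positions, fitness, T, max_iter, velocities,
              K0=100.0, alpha=30.0):
    """One AEFA iteration (Steps 2-6) for a minimization problem.
    positions / best_positions / velocities: lists of D-dimensional lists;
    fitness: callable. K0 = 100 and alpha = 30 are the values from Step 3."""
    n, D = len(positions), len(positions[0])
    eps = 1e-12
    fit = [fitness(p) for p in positions]
    best, worst = min(fit), max(fit)                    # Step 2
    K = K0 * math.exp(-alpha * T / max_iter)            # Step 3
    q = [math.exp((f - worst) / (best - worst - eps)) for f in fit]
    Q = [qi / sum(q) for qi in q]                       # Step 4: normalized charges
    for i in range(n):
        for d in range(D):
            # Step 5: net force from all other particles, each pulling particle i
            # toward particle j's best-so-far position P_j
            force = 0.0
            for j in range(n):
                if j == i:
                    continue
                R = math.dist(positions[i], positions[j])
                force += random.random() * K * Q[i] * Q[j] * \
                    (best_positions[j][d] - positions[i][d]) / (R + eps)
            a = force                                   # a = Q*E/M = F for unit mass
            # Step 6: velocity and position update
            velocities[i][d] = random.random() * velocities[i][d] + a
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

In a full run, `best_positions` would be updated after each step whenever a particle improves on its previous best fitness.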

3.3. Shift-Based Density Estimation

In order to maintain good convergence and population diversity, shift-based density estimation (SDE) [37] was introduced. Unlike traditional density estimation approaches, when determining the density of an individual $P_k \in P$, SDE shifts the other individuals in the population to new positions based on an objective-wise convergence comparison. If $P_j \in P$ outperforms $P_k$ on some objective, the corresponding objective value of $P_j$ is shifted to the position of $P_k$ on that objective; otherwise, the objective value remains unchanged. The process is described as
$$F_i^{sd}(P_j) = \begin{cases} F_i(P_k), & \text{if } F_i(P_j) < F_i(P_k) \\ F_i(P_j), & \text{otherwise} \end{cases}$$
where $F_i^{sd}(P_j)$ represents the shifted value of $F_i(P_j)$, and $F^{sd}(P_j) = (F_1^{sd}(P_j), F_2^{sd}(P_j), \ldots, F_m^{sd}(P_j))$ represents the shifted objective vector of $F(P_j)$. The steps followed in density estimation are explained in Algorithm 1.
Algorithm 1 Density Estimation for Non-Domination Solutions
Input: Non-dominated solutions $(P_{I_c}^1, P_{I_c}^2, \ldots, P_{I_c}^n)$
Output: Density of each solution
  • Shift the position of non-dominated solutions using Equation (12)
  • Calculate the distance between two adjacent non-dominated solutions as follows:
        $$d\!\left(P_{I_c}^i, P_{I_c}^{i+1}\right) = \sqrt{\sum_{j=1}^{m} \left(Fit(P_{I_c}^i, j) - Fit(P_{I_c}^{i+1}, j)\right)^2}$$
  • Find the $k$th minimum value $\sigma_k$ among the pairwise distances $\{\, d(P_{I_c}^i, P_{I_c}^j) \mid P_{I_c}^i, P_{I_c}^j \in P,\ P_{I_c}^i \ne P_{I_c}^j \,\}$
  • Compute $SDE(P_{I_c}^i) = \dfrac{1}{\sigma_k + 2}$
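Algorithm 1 can be sketched as follows for a set of objective vectors. This simplified version measures, for each solution, the $k$th smallest shifted distance to the other solutions; `sde_density` is an illustrative name:

```python
import math

def sde_density(objs, k=1):
    """Shift-based density estimation (SDE) sketch: for each solution p, shift
    every other solution q objective-wise (Eq. 12: a better value of q is shifted
    up to p's value), take the k-th smallest Euclidean distance in the shifted
    objective space, and map it to a density 1 / (sigma_k + 2)."""
    densities = []
    for p in objs:
        dists = []
        for q in objs:
            if q is p:
                continue
            # shift q toward p wherever q is better (smaller, for minimization)
            shifted = [max(qi, pi) for qi, pi in zip(q, p)]
            dists.append(math.dist(p, shifted))
        dists.sort()
        sigma_k = dists[min(k, len(dists)) - 1]
        densities.append(1.0 / (sigma_k + 2.0))
    return densities
```

Because $\sigma_k \ge 0$, every density lies in $(0, 0.5]$; a larger value means a more crowded (or poorly converged) solution.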

3.4. Recombination and Mutation Operators

In AEFA, movement (acceleration) of one charged particle towards another is determined by the total electrostatic force exerted by other particles on it. In this paper, bounded exponential crossover (BEX) [12] and polynomial mutation operator (PMO) [13,14] are used with AEFA to enhance the acceleration in order to increase convergence speed further.

3.4.1. Bounded Exponential Crossover (BEX)

The performance of a multi-objective optimizer depends strongly upon the solutions generated by the crossover operator; a good crossover operator produces solutions that reduce the possibility of being trapped in local optima. Bounded exponential crossover (BEX) is used to generate offspring solutions in the interval $[P_i^l, P_i^u]$. BEX involves an additional spread factor that depends on the interval $[P_i^l, P_i^u]$ and the positions of the parent solutions. The steps to generate offspring solutions in the interval $[P_i^l, P_i^u]$ from each pair of parents $x_i$ and $y_i$ are given in Algorithm 2.

3.4.2. Polynomial Mutation Operator (PMO)

In order to avoid premature convergence in a MOOP, a mutation operator is used. In this paper, the polynomial mutation operator (PMO) is used with the proposed algorithm. PMO includes two control parameters: the mutation probability of a parent solution ($p_m$) and the mutation distribution index ($\eta_m$), which controls the magnitude of the expected variation. For an individual $P(t) = (P_1(t), P_2(t), \ldots, P_m(t))$, PMO is computed as follows:
$$P_i(t+1) = P_i(t) + \delta\,(y_u - y_d)$$
where $P_i(t+1)$ represents the decision variable after the mutation, and $P_i(t)$ represents the decision variable before the mutation. $y_u$ and $y_d$ represent the upper and lower bounds of the decision variable, respectively. $\delta$ represents a small variation obtained by
$$\delta = \begin{cases} \left[\,2 r_2 + (1 - 2 r_2)\left(\dfrac{\max\!\left(y_u - x_i^t,\ x_i^t - y_d\right)}{y_u - y_d}\right)^{\eta_m + 1}\right]^{\frac{1}{\eta_m + 1}} - 1, & \text{if } r_2 \le 0.5 \\[3mm] 1 - \left[\,2(1 - r_2) + 2(r_2 - 0.5)\left(\dfrac{\max\!\left(y_u - x_i^t,\ x_i^t - y_d\right)}{y_u - y_d}\right)^{\eta_m + 1}\right]^{\frac{1}{\eta_m + 1}}, & \text{otherwise} \end{cases}$$
Here, r 2 is a random number in [0, 1] interval, which is compared with a predefined threshold value ( 0.5 ) , and η m represents the mutation distribution index.
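A sketch of PMO for a real-coded vector, using the standard bounded polynomial-mutation formulation (with the normalized distance to the nearer bound in place of the max-based factor above, a common variant); the function name and default parameter values are illustrative:

```python
import random

def polynomial_mutation(x, y_d, y_u, eta_m=20.0, p_m=0.1):
    """Polynomial mutation (PMO) sketch for a real-coded vector x with
    per-variable bounds [y_d[i], y_u[i]]. eta_m is the mutation distribution
    index and p_m the per-variable mutation probability."""
    child = list(x)
    for i in range(len(x)):
        if random.random() > p_m:
            continue
        lo, hi = y_d[i], y_u[i]
        r2 = random.random()
        # normalized distance to the nearer bound
        d = min(child[i] - lo, hi - child[i]) / (hi - lo)
        if r2 <= 0.5:
            delta = (2 * r2 + (1 - 2 * r2) * (1 - d) ** (eta_m + 1)) \
                ** (1.0 / (eta_m + 1)) - 1
        else:
            delta = 1 - (2 * (1 - r2) + 2 * (r2 - 0.5) * (1 - d) ** (eta_m + 1)) \
                ** (1.0 / (eta_m + 1))
        # perturb and clamp back into the feasible interval
        child[i] = min(max(child[i] + delta * (hi - lo), lo), hi)
    return child
```

A larger $\eta_m$ concentrates $\delta$ near zero, producing smaller perturbations; this is why PMO is typically applied with a large distribution index late in a run.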
Algorithm 2 Bounded Exponential Crossover (BEX)
Input: Parent solutions $x = (x_1, x_2, \ldots, x_m)$ and $y = (y_1, y_2, \ldots, y_m)$, and scaling parameter $\lambda > 0$
Output: Offspring solutions
While $i \le m$ do
  • Generate uniformly distributed random number u i     U ( 0 , 1 )
  • Compute $\beta_i^x$ and $\beta_i^y$ by inverting the bounded exponential distribution:
    $$\beta_i^x = \begin{cases} \lambda \ln\!\left\{\exp\!\left(\dfrac{x_i^l - x_i}{\lambda (y_i - x_i)}\right) + u_i\left(1 - \exp\!\left(\dfrac{x_i^l - x_i}{\lambda (y_i - x_i)}\right)\right)\right\}, & \text{if } r_i \le 0.5 \\[3mm] -\lambda \ln\!\left\{1 - u_i\left(1 - \exp\!\left(-\dfrac{x_i^u - x_i}{\lambda (y_i - x_i)}\right)\right)\right\}, & \text{if } r_i > 0.5 \end{cases}$$
    $$\beta_i^y = \begin{cases} \lambda \ln\!\left\{\exp\!\left(\dfrac{x_i^l - y_i}{\lambda (y_i - x_i)}\right) + u_i\left(1 - \exp\!\left(\dfrac{x_i^l - y_i}{\lambda (y_i - x_i)}\right)\right)\right\}, & \text{if } r_i \le 0.5 \\[3mm] -\lambda \ln\!\left\{1 - u_i\left(1 - \exp\!\left(-\dfrac{x_i^u - y_i}{\lambda (y_i - x_i)}\right)\right)\right\}, & \text{if } r_i > 0.5 \end{cases}$$
    where $r_i \sim U(0, 1)$ is a uniformly distributed random variable, and $x_i^l$, $x_i^u$ are the lower and upper bounds of the $i$th decision variable.
  • Offspring solutions are generated using
    $$\varepsilon_i = x_i + \beta_i^x\,(y_i - x_i), \qquad \eta_i = y_i + \beta_i^y\,(y_i - x_i)$$
end while
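Algorithm 2 can be approximated with the following sketch, which draws the spread factor from an exponential distribution by inverse-transform sampling and, for simplicity, clips offspring to assumed bounds [0, 1] rather than truncating the distribution analytically as Algorithm 2 does; `bex_pair` and the default $\lambda = 0.35$ are illustrative choices:

```python
import math
import random

def bex_pair(x, y, lam=0.35, lo=0.0, hi=1.0):
    """Bounded exponential crossover (BEX) sketch for two parent vectors.
    Per variable: draw a spread factor beta from an Exp(lam) distribution via
    inverse-transform sampling, choose its sign at random so offspring can fall
    on either side of a parent, then form offspring along the parent-to-parent
    direction and clip them to the assumed bounds [lo, hi]."""
    off1, off2 = [], []
    for xi, yi in zip(x, y):
        u, r = random.random(), random.random()
        beta = -lam * math.log(1.0 - u)   # exponential magnitude, inverse CDF
        if r <= 0.5:
            beta = -beta                  # extend on the other side
        off1.append(min(max(xi + beta * (yi - xi), lo), hi))
        off2.append(min(max(yi + beta * (yi - xi), lo), hi))
    return off1, off2
```

Smaller $\lambda$ keeps offspring close to their parents (exploitation), while larger $\lambda$ spreads them out (exploration).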

4. Proposed Algorithm

In this section, the proposed multi-objective optimization method is described in detail. The algorithm starts with parameter initialization. Then, a population of candidate solutions is generated. Further, by computing fitness values for each candidate solution, the best solution is selected. The population is iteratively updated until the termination conditions are satisfied and the optimal solution is returned. The proposed algorithm is presented in Algorithm 3. The explanation of the symbols used in the proposed algorithm is presented in Table 2. The steps followed in the proposed algorithm are as follows.
Table 2. Symbols used in the proposed algorithm.

4.1. Population Generation

In the proposed algorithm, there are two populations: the searching population ($P_n$) and the external population ($P_{I_c}^*$). The searching population contains the candidate solutions; in each iteration, the non-dominated solutions found in it are copied to the external population. The process is performed iteratively.
Algorithm 3 Proposed Multi-Objective Optimization Algorithm
Input: Searching population of size ( n ) , External Population of size ( m )
Output: Non-dominated set of charged particles ( N D C p )
Begin
  • Initialize the searching population ($P_n$) of charged particles $CP$
  • Initialize the external population $P_1^* = \emptyset$ and set the iteration counter $I_c = 1$
  While ($I_c < I_{c\,max}$) do
        For each $CP \in P_{I_c} \cup P_{I_c}^*$ do
  • Compute the fitness $F(CP)$ using Equation (15)
  • Compute the density using shift-based density estimation (Section 3.3)
        end for
        For each $CP \in P_{I_c} \cup P_{I_c}^*$ do
      If $F(CP) < 1$ then
       $P_{I_c+1}^* = P_{I_c+1}^* \cup \{CP\}$
      end if
        end for
        If $|P_{I_c+1}^*| < m$ then
      $P_{I_c+1}^* = P_{I_c+1}^* \cup \big( (P_{I_c} \cup P_{I_c}^*)[\,1 : m - |P_{I_c+1}^*|\,] \big)$, i.e., fill up with the best remaining solutions
        else
            delete non-dominated solutions from $P_{I_c+1}^*$ using Equation (17), as described in Section 4.3
          end if
  • Select charged particles ($CP$) into the mating pool $P_{I_c+1}$ from $P_{I_c} \cup P_{I_c+1}^*$
  • Evaluate the charge and update the velocity and position of $P_{I_c+1}$ as defined in Section 3.2 to obtain the new positions of the $CP$
  • Apply the crossover and mutation operators (Section 3.4) on the population $P_{I_c+1}$
  • $I_c = I_c + 1$
        end while
        For each $CP \in P_{I_c}^*$ do
           If $CP$ is a non-dominated solution then
      $NDCp = NDCp \cup \{CP\}$
           end if
         end for
End

4.2. Fitness Evaluation

The traditional AEFA is not suitable for MOOPs because of how the charge is defined: according to Equations (7) and (8), the charge of a particle is derived from a single fitness value. Thus, the multi-objective fitness assignment used in SPEA-II [38] is introduced to evaluate the charge in AEFA for two or more objectives. The multi-objective fitness of the proposed algorithm is calculated as follows:
$$F(i) = R(i) + D(i), \qquad R(i) = \sum_{j \in P_t \cup P_t^*,\ j \succ i} S(j), \qquad S(j) = \left|\{\, i \mid i \in P_t \cup P_t^* \ \wedge\ j \succ i \,\}\right|$$
where $F(i)$ is the fitness value of charged particle $i$, $R(i)$ is its raw fitness, and $D(i)$ represents additional density information. The raw fitness exhibits the strength of each charged particle by assigning it a rank. In a MOOP, to handle cases where more than one solution (charged particle) receives the same rank, an additional density measure is used to distinguish them; the SDE technique is used to provide this additional density (crowding distance) of $i$.
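Assuming the dominance-based strength $S(j)$ is computed as in SPEA-II, Equation (15) can be sketched as follows; `spea2_fitness` is an illustrative name, and the densities are assumed precomputed (e.g., by Algorithm 1):

```python
def spea2_fitness(objs, densities):
    """SPEA-II-style fitness sketch (Eq. 15): S(j) counts how many solutions j
    dominates, R(i) sums the strengths of all solutions that dominate i, and
    the final fitness adds the precomputed density D(i). Non-dominated
    solutions get R(i) = 0, hence F(i) < 1 whenever D(i) < 1."""
    def dom(a, b):  # Pareto dominance for minimization
        return all(u <= v for u, v in zip(a, b)) and \
               any(u < v for u, v in zip(a, b))
    n = len(objs)
    S = [sum(dom(objs[j], objs[k]) for k in range(n)) for j in range(n)]
    R = [sum(S[j] for j in range(n) if dom(objs[j], objs[i])) for i in range(n)]
    return [R[i] + densities[i] for i in range(n)]
```

With $D(i) < 1$, the condition $F(CP) < 1$ used in Algorithm 3 exactly picks out the non-dominated particles, since any dominated particle has $R(i) \ge 1$.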

4.3. A Fine-Grained Elitism Selection Mechanism

Elitism selection is a critical process in multi-objective optimization: it determines the best-fit solutions for the next generation and thereby affects the output Pareto front. If a better solution is lost during selection, it cannot be recovered; selecting solutions that dominate others also helps in removing worse solutions. In this study, a modified fine-grained elitism selection method that utilizes the shift-based density estimation (SDE) approach is proposed to improve population diversity. Similar to the usual non-dominated solution selection mechanism, the proposed method selects the non-dominated solutions as follows:
$$P_{I_c+1}^* = \{\, CP \mid CP \in P_n \cup P_{I_c+1}^* \ \wedge\ F(CP) < 1 \,\}$$
As the external population size ($m$) is fixed, non-dominated solutions are copied to the external population set ($P_{I_c+1}^*$) as long as its current size is less than $m$; otherwise, surplus non-dominated solutions are discarded from the external population set. In order to discard solutions while maintaining diversity, the proposed method deletes solutions in the most crowded regions, computed using the SDE approach described in Section 3.3. The steps used to compute the shared crowding distance are as follows.
  • The distance between two adjacent particles is computed as in Algorithm 1.
  • For each particle, an additional density estimate (shared crowding distance) is computed as follows:
    $$\sigma^{crowd} = \frac{SDE(P_{I_c}^*)}{m}$$
When the number of non-dominated solutions exceeds the external population size (i.e., $|P_{I_c}^*| > m$), the proposed method selects the $k$th charged particle from the external population ($P_{I_c}^*$) using Equation (17) and deletes it. This process is repeated until $|P_{I_c}^*| = m$.
$$\sigma_k^{crowd} > \frac{\sum_{i=1}^{|P_{I_c}^*|} \sigma^{crowd}\!\left((P_{I_c}^*)_i\right)}{m}$$
Here, a solution (charged particle) whose density is greater than the average shared crowding distance of all non-dominated solutions in the external population is considered the worst solution and is deleted from the external population set. The average crowding distance is then recomputed, and the procedure repeats until $|P_{I_c}^*| = m$.
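The truncation loop above can be sketched as follows. `truncate_archive` and `density_fn` are illustrative names, and the fallback when all densities are equal is an assumption not specified in the text:

```python
def truncate_archive(archive, m, density_fn):
    """Fine-grained elitism truncation sketch (Eqs. 16-17): while the external
    population exceeds m, delete the most crowded solution (the one with the
    highest SDE density, provided it exceeds the average), then recompute.
    density_fn maps a list of objective vectors to per-solution densities
    (e.g., Algorithm 1)."""
    archive = list(archive)
    while len(archive) > m:
        dens = density_fn(archive)
        avg = sum(dens) / len(dens)
        # the most crowded solution has the highest SDE density
        worst = max(range(len(archive)), key=lambda i: dens[i])
        if dens[worst] <= avg:      # degenerate case: all densities equal
            worst = len(archive) - 1
        del archive[worst]
    return archive
```

Because the SDE density is $1/(\sigma_k + 2)$, a high value corresponds to a small distance to the $k$th neighbor, i.e., a crowded region, which is exactly what the deletion rule in Equation (17) targets.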

5. Experimental Results and Discussion

This section is further divided into three sub-sections. Section 5.1 discusses the performance comparison of the AEFA with existing evolutionary approaches. Section 5.2 gives a performance comparison of the proposed algorithm with existing multi-objective optimization algorithms, and Section 5.3 presents a sensitivity analysis of the proposed algorithm.

5.1. Performance Comparison of the AEFA With Existing Evolutionary Approaches

At first, the performance of AEFA is evaluated on 10 benchmark functions: three unimodal (UM), three low-dimensional multimodal (LDMM), and four high-dimensional multimodal (HDMM) functions, all of which are minimization problems. The description of the benchmark functions is given in Table 3. The performance of AEFA is then compared with existing evolutionary approaches using four statistical performance indices: the average best-so-far solution, the median best-so-far solution, the variance of the best-so-far solution, and the average run-time. During each iteration, the algorithm updates the best-so-far solution and records the running time; the performance indices are computed from the recorded best-so-far solutions and running times.
Table 3. Benchmark functions used for performance comparison of evolutionary algorithms.

5.1.1. Parameter Setting

For experimental analysis, the parameters, i.e., population size ( P n , P m ), the maximum Coulomb’s constant ( K 0 ) , and the maximum number of iterations ( I c m a x ) , are initialized in Table 4.
Table 4. Parameters used to evaluate AEFA.

5.1.2. Results and Discussion

The performance of AEFA is compared with existing evolutionary optimization algorithms: the backtracking search algorithm (BSA) [36], cuckoo search algorithm (CK) [36], artificial bee colony (ABC) algorithm [36], and gravitational search algorithm (GSA) [36]. The results are presented in Table 5 and Figure 1. Figure 1 shows the differences between AEFA and the other algorithms based on the performance measures obtained on the benchmark functions. For better representation, all the results (except Figure 1d–h) are plotted on a logarithmic scale (base 10), where a larger logarithmic value represents a smaller value of the performance measure. The results in Table 5 and Figure 1 demonstrate that AEFA obtained the minimum values for all UM and LDMM functions. The obtained values are competitive with those of GSA and outperform those of ABC, CK, and BSA, which reveals that AEFA maintains a balance between exploration and exploitation and performs better than the existing approaches. However, for the HDMM functions (except benchmark F10), AEFA performs worse than ABC and BSA, which implies that AEFA suffers from a loss of diversity resulting in premature convergence on complex HDMM problems. It is concluded from the results that the original AEFA faces challenges in solving higher-dimensional multimodal problems.
Table 5. Performance comparison between AEFA and existing evolutionary optimization algorithms.
Figure 1. Difference among algorithms based on the values obtained by performance measures on benchmark functions. (a) Performance measurement for F1 benchmark function. (b) Performance measurement for F2 benchmark function. (c) Performance measurement for F3 benchmark function. (d) Performance measurement for F4 benchmark function. (e) Performance measurement for F5 benchmark function. (f) Performance measurement for F6 benchmark function. (g) Performance measurement for F7 benchmark function. (h) Performance measurement for F8 benchmark function. (i) Performance measurement for F9 benchmark function. (j) Performance measurement for F10 benchmark function.

5.2. Performance Comparison of the Proposed Algorithm with Existing Multi-Objective Optimization Algorithms

As shown in Table 5, the AEFA faces challenges in solving MOOP. These challenges are addressed through the proposed algorithm. The proposed algorithm is evaluated with six benchmark functions. The description of benchmark functions is presented in Table 6. The performance of the proposed algorithm is compared with the existing MOOP based on four performance measures: generational distance metric (GD) [2], diversity metric (DM) [3], converge metric (CM) [3], and spacing metric (SM) [39]. DM measures the extent of spread attained in the obtained optimal solution. CM measures the convergence to the obtained optimal Pareto front. GD measures the proximity of the optimal solution to the Pareto optimal front. SM demonstrates how evenly the optimal solutions are distributed among themselves. Further, the performance of the proposed algorithm is compared with existing MOOPs in terms of recombination and mutation operators.
Table 6. Benchmark functions used for MOOP performance comparison.

5.2.1. Parameter Setting

For experimental analysis, the parameters, i.e., the initial population size ($P_n$), external population size ($P_m$), maximum Coulomb's constant ($K_0$), maximum number of iterations ($I_{c\,max}$), initial crossover probability ($P_{c0}$), final crossover probability ($P_{c1}$), initial mutation probability ($P_{m0}$), and final mutation probability ($P_{m1}$), are initialized as presented in Table 7.
Table 7. Parameters used in the proposed algorithm.

5.2.2. Results and Discussion

The performance of the proposed algorithm is analyzed in three steps: (1) the proposed algorithm is evaluated on the SCH, FON, and ZDT benchmarks and compared with NSGA II [3], NSPSO [7], BCMOA [40], and SPGSA [36] based on the CM, DM, and GD metrics; (2) the proposed algorithm is evaluated on the MOP5 and MOP6 benchmarks and compared with NSGSA [36], MOGSA [36], SMOPSO [36], MOGA II [36], and SPGSA [36] based on the GD and SM metrics; and (3) the efficacy of the recombination operator is validated by evaluating the proposed algorithm on all six benchmarks and comparing it with the existing multi-objective algorithms. Experiments are performed 10 times on each benchmark function, and results are recorded in terms of mean and variance to assess the robustness of the algorithm. The results are presented in Table 8, Table 9, Table 10, Table 11 and Table 12 and Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6. For better representation, all the results in Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 are plotted on a logarithmic scale, where a high logarithmic value corresponds to a small output value (mean or variance). Results in Table 8 and Figure 2 demonstrate that the proposed algorithm obtained the minimum values of the CM metric as compared to NSGA II, NSPSO, BCMOA, and SPGSA, which indicates that the proposed algorithm ensures better convergence and keeps a good balance between exploration and exploitation on low-dimensional as well as high-dimensional problems. Figure 3 and Table 9 show that the proposed algorithm obtained the minimum variance of the DM metric for all benchmark functions as compared to NSGA II, NSPSO, BCMOA, and SPGSA, which validates the efficacy and robustness of the proposed algorithm. Results in Table 10 (GD metric) and Figure 4 demonstrate that the proposed algorithm performed better than the existing NSGA II, NSPSO, BCMOA, and SPGSA.
The GD metric validates the proximity of the obtained optimal solutions to the optimal Pareto front, thereby confirming the effectiveness of the proposed algorithm. Table 11 and Figure 5 demonstrate that the proposed algorithm achieves better GD and SM values than the compared algorithms: its GD values show more accurate results, and its SM values show that it attains better convergence accuracy on all benchmark functions, meaning that the proposed algorithm can produce uniformly distributed non-dominated optimal solutions. In Table 12 and Figure 6a–c, the proposed algorithm is compared with SPEA2, SPGSA, SPGSA_NRM, and a variant of itself that omits the BEX and PMO operators. The results indicate that including the BEX and PMO operators significantly improves the optimization performance, whereas in their absence the proposed algorithm suffers from premature convergence. It is therefore concluded from all four metrics (Table 8, Table 9, Table 10 and Table 11) and Table 12 that the proposed algorithm shows better efficacy, accuracy, and robustness in searching for true Pareto fronts across the global search space than the existing multi-objective algorithms. Further, with the BEX and PMO operators, the proposed algorithm maintains desirable diversity while searching for the true Pareto front.
Table 8. Performance comparison of the proposed algorithm with existing MOOPs based on SCH, FON, and ZDT functions using the CM metric.
Table 9. Performance comparison of the proposed algorithm with existing MOOPs based on SCH, FON, and ZDT functions using the DM metric.
Table 10. Performance comparison of the proposed algorithm with existing MOOPs based on SCH, FON, and ZDT functions using the GD metric.
Table 11. Performance comparison of the proposed algorithm with existing MOOPs based on MOP5 and MOP6 functions using the GD/SM metric.
Table 12. Performance comparison of the proposed algorithm with existing MOOPs based on recombination and mutation operators.
Figure 2. Performance comparison of the proposed algorithm with existing MOOPs based on SCH, FON, and ZDT functions using the CM metric. (a) Performance measurement based on mean value; (b) performance measurement based on variance.
Figure 3. Performance comparison of the proposed algorithm with existing MOOPs based on SCH, FON, and ZDT functions using the DM metric. (a) Performance measurement based on mean value; (b) performance measurement based on variance.
Figure 4. Performance comparison of the proposed algorithm with existing MOOPs based on SCH, FON, and ZDT functions using the GD metric. (a) Performance measurement based on mean value; (b) performance measurement based on variance.
Figure 5. Performance comparison of the proposed algorithm with existing MOOPs based on MOP5 and MOP6 functions using the GD/SM metric. (a) Performance measurement for MOP5 benchmark function; (b) performance measurement for MOP6 benchmark function.
Figure 6. Performance comparison of the proposed algorithm with existing MOOPs based on recombination and mutation operators. (a) Performance comparison of the proposed algorithm with existing MOOPs based on recombination and mutation operators on the CM metric. (b) Performance comparison of the proposed algorithm with existing MOOPs based on recombination and mutation operators on the GD metric. (c) Performance comparison of the proposed algorithm with existing MOOPs based on recombination and mutation operators on the SM metric.
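The SM metric reported in Table 11 and Figure 5 can be sketched along the lines of Schott's spacing measure [39]. The following Python sketch assumes that formulation (standard deviation of nearest-neighbour Manhattan distances) and may differ in detail from the variant used in this paper:

```python
import math

def spacing(front):
    """Schott's spacing (SM) metric, assumed here as the standard
    deviation of each solution's Manhattan distance to its nearest
    neighbour within the obtained front. Smaller values indicate a
    more uniformly distributed set of solutions; requires n >= 2."""
    n = len(front)
    d = []
    for i, a in enumerate(front):
        # Manhattan distance to the nearest other solution
        d.append(min(
            sum(abs(x - y) for x, y in zip(a, b))
            for j, b in enumerate(front) if j != i
        ))
    d_bar = sum(d) / n
    return math.sqrt(sum((d_bar - di) ** 2 for di in d) / (n - 1))
```

For a perfectly evenly spaced front, all nearest-neighbour distances are equal and the metric is zero.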

5.3. Sensitivity Analysis of the Proposed Algorithm

Sensitivity analysis is the process of determining the influence of changes in the input variables on the robustness of the outcome. In this paper, the robustness of the proposed algorithm is evaluated in two steps: (1) sensitivity analysis 1 and (2) sensitivity analysis 2. Sensitivity analysis 1 is performed for different values of the crossover operator (BEX), the mutation operator (PMO), and the initial value of Coulomb's constant (K0). Sensitivity analysis 2 is performed for different values of the initial population size and the maximum number of iterations. The detailed description of the sensitivity analysis and the related parameter settings is given in Table 13. Further, a combined performance score, described in Table 14, is used to measure the performance of the proposed algorithm.
Table 13. Parameter settings for sensitivity analysis.
Table 14. Performance measure for sensitivity analysis.
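The sensitivity study described above can be organized as a simple grid sweep with repeated runs per setting. The sketch below is illustrative only: the parameter names, value grids, and the stub `run_algorithm` are assumptions for demonstration, not the actual settings of Table 13:

```python
import itertools
import random

# Hypothetical parameter grid for sensitivity analysis 1; the names
# and values here are illustrative, not the settings of Table 13.
grid = {
    "pmo_rate": [0.01, 0.05, 0.10],   # polynomial mutation probability
    "bex_rate": [0.60, 0.80, 0.90],   # bounded exponential crossover probability
    "K0":       [100, 500, 1000],     # initial value of Coulomb's constant
}

def run_algorithm(pmo_rate, bex_rate, K0, seed):
    """Placeholder for one run of the proposed algorithm; a real run
    would return the (GD, SM) metrics of the obtained front."""
    rng = random.Random(seed)
    return rng.random(), rng.random()

# 10 independent runs per setting, recording the mean of each metric
results = {}
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    runs = [run_algorithm(**params, seed=s) for s in range(10)]
    gd_mean = sum(gd for gd, _ in runs) / len(runs)
    sm_mean = sum(sm for _, sm in runs) / len(runs)
    results[combo] = (gd_mean, sm_mean)
```

Each entry of `results` would then be mapped to a qualitative rating ("Average", "Good", "Very Good") via the combined performance score of Table 14.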

Results and Discussion

The sensitivities of the mutation operator (PMO), the crossover operator (BEX), the initial population size, the maximum number of iterations, and the initial value of Coulomb's constant (K0) are analyzed using the GD and SM metrics, and the results are presented in Table 15, Table 16, Table 17, Table 18 and Table 19, respectively. Table 15 shows that the performance of the PMO varied from "Average" to "Good", which means that suitable mutation values must be selected to achieve optimum results. Table 16 demonstrates that the performance of the BEX varied from "Average" to "Very Good", which means that the sensitivity of BEX is low for the proposed algorithm. Table 17 demonstrates that the performance of the population size varied from "Average" to "Very Good", which means that the sensitivity of the population size is moderate for the proposed algorithm. Table 18 demonstrates that the performance for the maximum number of iterations is "Good" in all experiments, which means that the proposed algorithm is less sensitive to the maximum number of iterations. Table 19 demonstrates that the performance for the initial value of Coulomb's constant (K0) varies from "Good" to "Very Good", which means that the proposed algorithm is less sensitive to K0. It is concluded from all the tables (Table 15, Table 16, Table 17, Table 18 and Table 19) that the proposed algorithm works efficiently in all given scenarios, which demonstrates its robustness.
Table 15. Sensitivity analysis of mutation parameter (PMO).
Table 16. Sensitivity analysis of crossover parameter (BEX).
Table 17. Sensitivity analysis of initial population size.
Table 18. Sensitivity analysis of maximum no. of iterations.
Table 19. Sensitivity analysis of Coulomb’s constant initial value.
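For reference, the polynomial mutation operator whose sensitivity Table 15 reports is commonly implemented in the basic form introduced by Deb. The sketch below assumes that basic form; the distribution index `eta_m` and the per-variable mutation probability `p_m` defaults are illustrative, not the paper's settings:

```python
import random

def polynomial_mutation(x, lower, upper, eta_m=20.0, p_m=0.1, rng=random):
    """Basic form of Deb's polynomial mutation operator (PMO).
    Each decision variable is perturbed with probability p_m; a larger
    distribution index eta_m keeps offspring closer to the parent.
    Mutated values are clipped to the variable bounds."""
    child = list(x)
    for i, (xl, xu) in enumerate(zip(lower, upper)):
        if rng.random() >= p_m:
            continue  # this variable is left unchanged
        u = rng.random()
        if u < 0.5:
            delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
        else:
            delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
        child[i] = min(max(child[i] + delta * (xu - xl), xl), xu)
    return child
```

The polynomial shape of `delta` concentrates most perturbations near the parent while still allowing occasional large jumps, which is what lets the operator help escape local optima without destroying convergence.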

6. Conclusions and Future Work

In this paper, an improved population-based meta-heuristic algorithm, AEFA, is proposed to handle multi-objective optimization problems. The proposed algorithm uses strength Pareto dominance theory to refine fitness assignment and an improved fine-grained elitism selection based on the SDE mechanism to maintain population diversity. Further, the proposed algorithm uses the SDE technique with strength Pareto dominance theory for density estimation, and it implements the bounded exponential crossover (BEX) and polynomial mutation operator (PMO) to avoid solutions becoming trapped in local optima and to enhance convergence. The experiments were performed in two steps. In the first step, AEFA was evaluated on different low-dimensional and high-dimensional benchmark functions, and its performance was compared with the existing evolutionary optimization algorithms (EOAs) based on four parameters: average best-so-far solution, mean best-so-far solution, variance of the best-so-far solution, and average run-time. Experimental results show that for low-dimensional benchmark functions, AEFA performed better than the existing EOAs, but for complex high-dimensional benchmark functions, AEFA suffered from loss of diversity and performed worse. In the second step, the proposed algorithm was evaluated on different benchmark functions, and its performance was compared with the existing evolutionary multi-objective optimization algorithms (EMOOPs) based on the diversity, convergence, generational distance, and spacing metrics. Experimental results prove that the proposed algorithm is not only able to find the optimal Pareto front (non-dominated solutions), but also shows better performance than the existing multi-objective optimization techniques in terms of accuracy, robustness, and efficacy. In the future, the proposed work can be explored in various directions. One direction is to extend the proposed algorithm to solve various real-world multi-objective optimization problems. The proposed work can also be applied to complex problems by hybridizing the proposed algorithm with other existing algorithms.

Author Contributions

Conceptualization, Methodology, Formal analysis, Validation, Writing—Original Draft Preparation, H.P.; Supervision, R.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Nobahari, H.; Nikusokhan, M.; Siarry, P. Non-dominated sorting gravitational search algorithm. In Proceedings of the 2011 International Conference on Swarm Intelligence, Cergy, France, 14–15 June 2011; pp. 1–10. [Google Scholar]
  2. Van Veldhuizen, D.A.; Lamont, G.B. On measuring multiobjective evolutionary algorithm performance. In Proceedings of the 2000 Congress on Evolutionary Computation, La Jolla, CA, USA, 16–19 July 2000; Volume 1, pp. 204–211. [Google Scholar]
  3. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  4. Zhao, B.; Cao, Y.J. Multiple Objective Particle Swarm Optimization Technique for Economic Load Dispatch. J. Zhejiang Univ. Sci. A 2005, 6, 420–427. [Google Scholar]
  5. Cagnina, L. A Particle Swarm Optimizer for Multi-Objective Optimization. J. Comput. Sci. Technol. 2005, 5, 204–210. [Google Scholar]
  6. Zhang, Y.; Gong, D.W.; Ding, Z. A bare-bones multi-objective particle swarm optimization algorithm for environmental/economic dispatch. Inf. Sci. 2012, 192, 213–227. [Google Scholar] [CrossRef]
  7. Li, X. A non-dominated sorting particle swarm optimizer for multiobjective optimization. In Genetic and Evolutionary Computation Conference; Springer: Berlin/Heidelberg, Germany, 2003; pp. 37–48. [Google Scholar]
  8. Yadav, A. AEFA: Artificial electric field algorithm for global optimization. Swarm Evol. Comput. 2019, 48, 93–108. [Google Scholar]
  9. Demirören, A.; Hekimoğlu, B.; Ekinci, S.; Kaya, S. Artificial Electric Field Algorithm for Determining Controller Parameters in AVR system. In Proceedings of the 2019 International Artificial Intelligence and Data Processing Symposium (IDAP), Malatya, Turkey, 21–22 September 2019; pp. 1–7. [Google Scholar]
  10. Abdelsalam, A.A.; Gabbar, H.A. Shunt Capacitors Optimal Placement in Distribution Networks Using Artificial Electric Field Algorithm. In Proceedings of the 2019 the 7th International Conference on Smart Energy Grid Engineering (SEGE), Oshawa, ON, Canada, 12–14 August 2019; pp. 77–85. [Google Scholar]
  11. Shafik, M.B.; Rashed, G.I.; Chen, H. Optimizing Energy Savings and Operation of Active Distribution Networks Utilizing Hybrid Energy Resources and Soft Open points: Case Study in Sohag, Egypt. IEEE Access 2020, 8, 28704–28717. [Google Scholar] [CrossRef]
  12. Thakur, M.; Meghwani, S.S.; Jalota, H. A modified real coded genetic algorithm for constrained optimization. Appl. Math. Comput. 2014, 235, 292–317. [Google Scholar] [CrossRef]
  13. Gong, M.; Jiao, L.; Du, H.; Bo, L. Multiobjective immune algorithm with nondominated neighbor-based selection. Evol. Comput. 2008, 16, 225–255. [Google Scholar] [CrossRef]
  14. Liu, Y.; Niu, B.; Luo, Y. Hybrid learning particle swarm optimizer with genetic disturbance. Neurocomputing 2015, 151, 1237–1247. [Google Scholar] [CrossRef]
  15. Fonseca, C.M.; Fleming, P.J. Genetic algorithms for multi-objective optimization: Formulation, discussion and generalization. In Proceedings of the 5th International Conference on Genetic Algorithms, San Mateo, CA, USA, 1 June 1993; pp. 416–423. [Google Scholar]
  16. Horn, J.; Nafploitis, N.; Goldberg, D.E. A niched Pareto genetic algorithm for multi-objective optimization. In Proceedings of the 1st IEEE Conference on Evolutionary Computation, Orlando, FL, USA, 27–29 June 1994; pp. 82–87. [Google Scholar]
  17. Tan, K.C.; Goh, C.K.; Mamun, A.A.; Ei, E.Z. An evolutionary artificial immune system for multi-objective optimization. Eur. J. Oper. Res. 2008, 187, 371–392. [Google Scholar] [CrossRef]
  18. Zitzler, E. Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications; Ithaca: Shaker, OH, USA, 1999; Volume 63. [Google Scholar]
  19. Srinivas, N.; Deb, K. Multi-Objective function optimization using non-dominated sorting genetic algorithms. Evol. Comput. 1995, 2, 221–248. [Google Scholar] [CrossRef]
  20. Zitzler, E.; Thiele, L. Multiobjective optimization using evolutionary algorithms—A comparative case study. In International Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 1998; pp. 292–301. [Google Scholar]
  21. Rudolph, G. Technical Report No. CI-67/99: Evolutionary Search under Partially Ordered Sets; Dortmund Department of Computer Science/LS11, University of Dortmund: Dortmund, Germany, 1999. [Google Scholar]
  22. Rughooputh, H.C.; King, R.A. Environmental/economic dispatch of thermal units using an elitist multiobjective evolutionary algorithm. In Proceedings of the IEEE International Conference on Industrial Technology, Maribor, Slovenia, 10–12 December 2003; Volume 1, pp. 48–53. [Google Scholar]
  23. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms; John Wiley & Sons: New York, NY, USA, 2001; Volume 16. [Google Scholar]
  24. Vrugt, J.A.; Robinson, B.A. Improved evolutionary optimization from genetically adaptive multimethod search. Proc. Natl. Acad. Sci. USA 2007, 104, 708–711. [Google Scholar] [CrossRef] [PubMed]
  25. Deb, K.; Jain, H. An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints. IEEE Trans. Evol. Comput. 2013, 18, 577–601. [Google Scholar] [CrossRef]
  26. Jain, H.; Deb, K. An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: Handling constraints and extending to an adaptive approach. IEEE Trans. Evol. Comput. 2013, 18, 602–622. [Google Scholar] [CrossRef]
  27. Zhao, B.; Guo, C.X.; Cao, Y.J. A multiagent-based particle swarm optimization approach for optimal reactive power dispatch. IEEE Trans. Power Syst. 2005, 20, 1070–1078. [Google Scholar] [CrossRef]
  28. Zhang, Q.; Li, H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  29. Ma, X.; Liu, F.; Qi, Y.; Li, L.; Jiao, L.; Liu, M.; Wu, J. MOEA/D with Baldwinian learning inspired by the regularity property of continuous multiobjective problem. Neurocomputing 2014, 145, 336–352. [Google Scholar] [CrossRef]
  30. Martinez, S.Z.; Coello, C.A.C. A multi-objective evolutionary algorithm based on decomposition for constrained multi-objective optimization. In Proceedings of the 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, China, 6–11 July 2014; pp. 429–436. [Google Scholar]
  31. Gu, F.; Liu, H.L.; Tan, K.C. A multiobjective evolutionary algorithm using dynamic weight design method. Int. J. Innov. Comput. Inf. Control 2012, 8, 3677–3688. [Google Scholar]
  32. Zhang, X.; Tian, Y.; Cheng, R.; Jin, Y. An efficient approach to nondominated sorting for evolutionary multiobjective optimization. IEEE Trans. Evol. Comput. 2014, 19, 201–213. [Google Scholar] [CrossRef]
  33. Chong, J.K.; Qiu, X. An Opposition-Based Self-Adaptive Differential Evolution with Decomposition for Solving the Multiobjective Multiple Salesman Problem. In Proceedings of the 2016 IEEE Congress on Evolutionary Computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 4096–4103. [Google Scholar]
  34. Cheng, R.; Jin, Y.; Olhofer, M.; Sendhoff, B. A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans. Evol. Comput. 2016, 20, 773–791. [Google Scholar] [CrossRef]
  35. Hassanzadeh, H.R.; Rouhani, M. A multi-objective gravitational search algorithm. In Proceedings of the 2010 2nd International Conference on Computational Intelligence, Communication Systems and Networks, Liverpool, UK, 28–30 July 2010; pp. 7–12. [Google Scholar]
  36. Yuan, X.; Chen, Z.; Yuan, Y.; Huang, Y.; Zhang, X. A strength pareto gravitational search algorithm for multi-objective optimization problems. Int. J. Pattern Recognit. Artif. Intell. 2015, 29, 1559010. [Google Scholar] [CrossRef]
  37. Li, M.; Yang, S.; Liu, X. Shift-based density estimation for Pareto-based algorithms in many-objective optimization. IEEE Trans. Evol. Comput. 2013, 18, 348–365. [Google Scholar] [CrossRef]
  38. Zitzler, E.; Laumanns, M.; Thiele, L. TIK report 103: SPEA2: Improving the Strength Pareto Evolutionary Algorithm; Computer Engineering and Networks Laboratory (TIK), ETH Zurich: Zurich, Switzerland, 2001. [Google Scholar]
  39. Schott, J.R. Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization; Air Force Institute of Technology, Wright-Patterson AFB: Cambridge, MA, USA, 1995. [Google Scholar]
  40. Guzmán, M.A.; Delgado, A.; De Carvalho, J. A novel multiobjective optimization algorithm based on bacterial chemotaxis. Eng. Appl. Artif. Intell. 2010, 23, 292–301. [Google Scholar] [CrossRef]
