3.2.1. A New Optimal Individual Selection Method
As shown in Algorithm 2, MOIAOA uses a decomposition mechanism for multi-objective processing. The decomposition-based multi-objective technique relies on weight vectors to break down a multi-objective problem into a series of single-objective subproblems, each associated with a weight vector. Unlike single-objective optimization, which has only one optimal solution, each single-objective subproblem in the decomposition mechanism corresponds to its own optimal value. Evidently, under the pull of the optimal position of each single-objective subproblem, the evolutionary direction and evolutionary information corresponding to each individual are relatively biased. If we adopt the AOA method, which relies solely on a globally optimal individual to drive the evolution of other individuals, then each individual can only be driven closer to that optimal individual, and the optimal solution of each single-objective subproblem cannot be obtained rapidly. In general, the non-dominated front does not change abruptly: the optimal solutions of subproblems associated with neighboring weights exhibit greater similarity, and the evolutionary directions of individuals in the corresponding weight vector neighborhoods align more closely. Therefore, in MOIAOA, instead of selecting one optimal individual to guide the evolution of all other individuals as in AOA, each single-objective subproblem selects the optimal individual within the neighborhood of its corresponding weight.
The performance of individuals on single-objective subproblems primarily relies on Chebyshev aggregation function values and PBI values [26]. However, the Chebyshev aggregation function ignores the requirement of distribution in the overall evolution process and favors the pursuit of convergence. On the other hand, the PBI estimation method utilizes the projection distance to evaluate the convergence and the vertical distance to evaluate the distribution; in addition, it balances them with the value of the penalty parameter $\theta$. However, $\theta$ is often a fixed value and cannot correspond to the performance of an individual on the corresponding subproblem in real time. As a result, the requirements of the algorithm for convergence and distribution cannot be effectively balanced across the different evolution stages.
In summary, the computation of a combined incentive value, derived from the Chebyshev aggregation function, the vertical distance, and the projection distance, is introduced to better balance algorithmic convergence and distribution, as depicted in Equation (11). The individual with the lowest combined incentive value is then selected from the neighborhood corresponding to each weight and is considered the optimal individual for that weight.
Here, $j$ represents an individual within the neighborhood $BB_i$ corresponding to the current weight $\lambda^i$, and the number of individuals within $BB_i$ is indicated in Equation (12); $norm(\cdot)$ represents the min–max linear normalization operation; the calculation of $TF$ is demonstrated in Equation (13); and $d_1^j$, $g^{tch}_j$, and $d_2^j$ respectively represent the projection distance, aggregation function value, and vertical distance of individual $j$ with respect to the current weight $\lambda^i$, with the specific calculation methods shown in Equations (14)–(16). Finally, the two penalty factors balancing these terms are detailed in Equations (17) and (18).
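To make this selection concrete, the following minimal Python sketch computes the Chebyshev value and the PBI-style projection and vertical distances for each neighbor and picks the one with the lowest combined incentive value. The simple additive combination with penalty factors theta1 and theta2, and all function names, are assumptions standing in for Equations (11) and (14)–(16), which are not reproduced here.

```python
import numpy as np

def pbi_distances(f, lam, z_star):
    """Projection distance d1 and vertical distance d2 of objective
    vector f to weight vector lam, measured from the ideal point z_star."""
    diff = f - z_star
    lam_norm = np.linalg.norm(lam)
    d1 = np.abs(diff @ lam) / lam_norm               # projection distance
    d2 = np.linalg.norm(diff - d1 * lam / lam_norm)  # vertical distance
    return d1, d2

def tchebycheff(f, lam, z_star):
    """Chebyshev aggregation value of f for weight lam."""
    return np.max(lam * np.abs(f - z_star))

def best_in_neighborhood(F, BB_i, lam, z_star, theta1, theta2):
    """Select the neighbor with the lowest combined incentive value;
    the additive form norm(g) + theta1*norm(d1) + theta2*norm(d2)
    is an assumed stand-in for Equation (11)."""
    g = np.array([tchebycheff(F[j], lam, z_star) for j in BB_i])
    d1, d2 = np.array([pbi_distances(F[j], lam, z_star) for j in BB_i]).T
    def norm(v):  # min-max linear normalization
        rng = v.max() - v.min()
        return (v - v.min()) / rng if rng > 0 else np.zeros_like(v)
    incentive = norm(g) + theta1 * norm(d1) + theta2 * norm(d2)
    return BB_i[int(np.argmin(incentive))]
```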
Here, $\lceil \cdot \rceil$ represents rounding up and $LL_{min}$ is the minimum neighborhood size, the value of which is generally set to 3 to achieve better results.
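A minimal sketch of this adaptive sizing is shown below, assuming a linear shrink schedule; only the ceiling operation and the floor of 3 come from the text, while the schedule itself is an assumption standing in for Equation (12).

```python
import math

def neighborhood_size(L0, t, T, LL_min=3):
    """Adaptive neighborhood size: rounds up and never drops below
    LL_min (generally 3). The linear shrink from the initial size L0
    over iterations t of budget T is an assumed schedule."""
    return max(math.ceil(L0 * (1 - t / T)), LL_min)
```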
Here, $M$ indicates the number of objectives and $z^*$ indicates the ideal point consisting of the optimal values on each objective.
Here, the four constants in Equations (17) and (18) are artificially set, typically to 0.45, 1, 0.2, and 0.3, respectively, although these values can also be adjusted.
In summary, the proposed method, which relies on the combined incentive values of individuals to determine the optimal individual corresponding to each subproblem, offers the following advantages. First, Equation (12) replaces the traditional constant neighborhood size with one that changes adaptively with the number of iterations. In the early phase of algorithmic evolution, there is a significant discrepancy in the extent of evolution among individuals, many of whom are far from the optimal frontier. A larger number of neighboring individuals provides considerable options, rapidly shortening the evolutionary gap between a subproblem and the others by exploiting the evolutionary information of better individuals in neighboring subproblems. As a result, convergence to the optimal frontier is accelerated. In the later phase of the algorithm, all individuals approach the optimal frontier, and the neighborhood size is reduced. The optimal individual can thus be selected more accurately, precisely guiding the evolution of the neighborhood. This is conducive to forming an excellent distribution and effectively saves computational resources as well. Second, in Equation (11), when TF is less than 0.55, signifying the early phase of algorithmic evolution, the composite incentive value of an individual is primarily affected by the normalized Chebyshev function. The influence of the projection distance is larger than that of the vertical distance; one penalty factor increases gradually with the number of iterations, increasing the influence of the vertical distance to an extent, although it remains inferior to that of the projection distance. This is conducive to ensuring the convergence of the algorithm and avoids the drawback of individuals crowding together, which can result in poor distribution. When TF is more than 0.55, signifying the later phase of algorithmic evolution, the composite incentive value of an individual is again primarily affected by the normalized Chebyshev function, but the influence of the vertical distance is larger than that of the projection distance; the other penalty factor increases gradually with the number of iterations, reducing the influence of the projection distance to a certain extent. Evidently, in the late evolutionary stage, when the population is already closer to the theoretically optimal non-dominated frontier, a uniform distribution of non-dominated solutions is achieved while ensuring that the frontier surface does not recede.
3.2.2. New Exploration Phase
As shown in Equation (
6), each individual in the exploration strategy proposed by the AOA algorithm has an opportunity to learn from any individual within the population. In this way, the diversity of the population is maintained and global search is realized. However, the local search capability is insufficient, decreasing the convergence speed of the algorithm. Thus, the novel exploration strategy shown in Algorithm 3 is proposed to further balance the global and local search capabilities of the algorithm. The algorithm shown below involves an oppositional search mechanism, a simulated binary cross search mechanism, and a mutation search mechanism.
Algorithm 3: New exploration phase framework
- 1: if rand > 5/14 then
- 2:  X_i^new ← Perform an oppositional search for X_i
- 3: else if rand <= 5/14 ∩ rand >= 1/14 then
- 4:  Select two individuals X_r1 and X_r2 randomly from BB_i
- 5:  if … then
- 6:   …;
- 7:  end if
- 8:  X_i^new ← Perform simulated binary cross-learning for X_r1 and X_r2
- 9:  X_i^new ← Perform a mutation-learning for X_i^new
- 10: else
- 11:  X_i^new ← Perform a mutation-learning for X_i
- 12: end if
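The dispatch logic of Algorithm 3 can be sketched as follows. The branch probabilities (9/14, 4/14, 1/14) come from the pseudocode; the three operators are injected as callables to keep the sketch self-contained, and drawing the two neighbors without replacement is an assumption for the elided steps 5–7.

```python
import random

def explore(X_i, BB_i, oppose, sbx_cross, mutate):
    """Exploration dispatch sketch for Algorithm 3."""
    r = random.random()
    if r > 5 / 14:                             # oppositional search (prob. 9/14)
        return oppose(X_i)
    elif r >= 1 / 14:                          # crossover then mutation (prob. 4/14)
        x1, x2 = random.sample(list(BB_i), 2)  # two distinct neighbors (assumed)
        return mutate(sbx_cross(x1, x2))
    else:                                      # mutation only (prob. 1/14)
        return mutate(X_i)
```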
Oppositional Search Mechanism
This section describes the oppositional search mechanism, as shown in Equation (
19):
where $\bar{X}_i$ denotes the opposing individual of individual $X_i$ obtained according to Equation (20), $X_{r1}$ is an individual selected randomly from the neighboring population $BB_i$, $X_{r2}$ is a randomly selected individual from the population different from $X_{r1}$, $C_1$ and $acc$ are as originally defined in the AOA algorithm, representing the learning factor and the acceleration of the individual, respectively, and $F$ denotes the updating direction of each dimension, which is calculated according to Equation (21).
Here, $ub$ and $lb$ denote the upper and lower bounds of the decision variables, respectively. The opposing individual $\bar{X}_i$ of individual $X_i$ only performs the transformation shown in Equation (20) on $SD$ randomly selected dimensions of $X_i$; in general, $SD$ is a preset value smaller than the total dimension Dim.
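A sketch of this mechanism is given below, assuming Equation (20) takes the standard opposition form ub + lb − x and that SD defaults to half the dimension; both are assumptions, as the equation and the typical SD value are not reproduced here.

```python
import numpy as np

def oppositional_search(X, lb, ub, SD=None, rng=np.random.default_rng()):
    """Opposition sketch for Equation (20), applied to SD randomly
    chosen dimensions only; SD = Dim // 2 is an assumed default."""
    dim = len(X)
    if SD is None:
        SD = max(1, dim // 2)
    idx = rng.choice(dim, size=SD, replace=False)  # dimensions to oppose
    X_opp = X.copy()
    X_opp[idx] = ub[idx] + lb[idx] - X[idx]        # standard opposition rule
    return X_opp
```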
Simulated Binary Cross Search Mechanism
This section employs a simulated binary cross search mechanism similar to the one used in [27], shown in Equation (22). To further improve population diversity, the spread factor $\beta$ from [27] is modified in this section, as shown in Equation (23).
Here, $X_{r1}$ and $X_{r2}$ are two distinct individuals randomly selected from $BB_i$, and $F$ determines the update direction of each dimension, which is calculated according to Equation (21).
Here, $N(1, \sigma)$ denotes the generation of a random number with mean 1 and variance $\sigma$, while $\beta$ is the original value obtained from [27], as shown in Equation (24), where $mu = rand(1, Dim)$ denotes a vector of length Dim with component magnitudes between 0 and 1.
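A sketch of the crossover follows, assuming Equation (24) is the standard SBX spread factor with distribution index eta and that the improvement of Equation (23) multiplies it by a normal random number with mean 1 and variance sigma; the values of eta and sigma, and the exact combination, are assumptions.

```python
import numpy as np

def simulated_binary_cross(x1, x2, eta=20.0, sigma=0.1,
                           rng=np.random.default_rng()):
    """SBX sketch for Equation (22) with the modified beta of Eq. (23)."""
    mu = rng.random(len(x1))                                # mu = rand(1, Dim)
    beta = np.where(mu <= 0.5,
                    (2 * mu) ** (1 / (eta + 1)),
                    (1 / (2 * (1 - mu))) ** (1 / (eta + 1)))  # standard SBX (Eq. 24)
    beta = beta * rng.normal(1.0, np.sqrt(sigma), len(x1))  # assumed improved beta (Eq. 23)
    F = rng.choice([-1.0, 1.0], size=len(x1))               # per-dimension direction
    return 0.5 * ((x1 + x2) + F * beta * (x2 - x1))         # SBX child
```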
Mutation Search Mechanism
This section proposes a mutation search mechanism, shown in Equation (25), where each dimension of an individual $X_i$ is mutated with a probability of 1/Dim:
where $\delta$ denotes the polynomial mutation factor [27], which is calculated as shown in Equation (26), and the two regulating factors weight the polynomial, Gaussian, and Cauchy mutations and are calculated according to Equations (27) and (28), respectively. Here, $u$ is a random number in $[0, 1]$, and the remaining constants are artificially set numbers that are generally set to 0.5.
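A sketch of the hybrid mutation is shown below. The per-dimension probability 1/Dim and the polynomial factor follow the text and [27], while the linear schedules for the Gaussian and Cauchy weights are assumptions standing in for Equations (27) and (28): Cauchy dominates early and Gaussian late, as described.

```python
import numpy as np

def mutation_search(X, lb, ub, t, T, eta_m=20.0, rng=np.random.default_rng()):
    """Hybrid mutation sketch for Equation (25)."""
    dim = len(X)
    w_gauss = t / T            # assumed to grow over the run
    w_cauchy = 1.0 - t / T     # assumed to shrink over the run
    Y = X.copy()
    for d in range(dim):
        if rng.random() < 1.0 / dim:   # per-dimension mutation probability
            u = rng.random()           # u in [0, 1] for the polynomial factor
            if u < 0.5:                # polynomial mutation factor (Eq. 26 form)
                delta = (2 * u) ** (1 / (eta_m + 1)) - 1
            else:
                delta = 1 - (2 * (1 - u)) ** (1 / (eta_m + 1))
            step = delta + w_gauss * rng.normal() + w_cauchy * rng.standard_cauchy()
            Y[d] = np.clip(X[d] + step * (ub[d] - lb[d]), lb[d], ub[d])
    return Y
```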
In summary, the new exploration strategy proposed in this section has the following advantages. First, a comparison between Equations (23) and (24) indicates that the improved $\beta$ adds a normal random perturbation to the original random exponent, which increases the perturbation range of the difference vector in Equation (22). However, because the individuals $X_{r1}$ and $X_{r2}$ in Equation (22) belong to the same neighborhood and the gap in evolutionary information between them is small, the improved $\beta$ can further expand population diversity without destroying the evolutionary direction, satisfying the algorithm's requirements for both population diversity and convergence speed. Second, Equation (25) integrates polynomial mutation, Gaussian mutation, and Cauchy mutation. The proposed mutation approach provides more ways of generating individuals than an approach that applies only a single strategy, further increasing the ability of the algorithm to escape local optima. Throughout the early evolution phase, the algorithm focuses on finding as many good individuals as possible in a wider search space; thus, larger mutation perturbations are required. The regulating factors are set such that the algorithm's mutation power in the early stage originates mainly from polynomial and Cauchy mutations: polynomial mutation prevents excessive individual deviations caused by the large perturbation range of Cauchy mutation, whereas Cauchy mutation enables a globally robust search of the solution space. As t increases, the algorithm enters the late evolution phase, in which it focuses on fine search in the vicinity of excellent individuals, as it is at this point that excellent individuals can be obtained. At the same time, the two adjustment factors gradually increase and decrease, respectively; therefore, the fine mutation of the algorithm in the later phase is mainly due to polynomial and Gaussian mutations, and the effect of Cauchy mutation gradually becomes negligible. The algorithm uses polynomial mutation to ensure that the evolutionary information of the original individual is not destroyed, while Gaussian mutation is used for fine search of the neighboring range. Third, the new exploration strategy incorporates an oppositional search mechanism, a simulated binary cross-search mechanism, and a mutation search mechanism. Among these, the oppositional search mechanism utilizes the ability of oppositional learning to produce individuals with greater randomness and breadth, thereby expanding the search space of individuals and greatly improving the global search capability. The simulated binary cross-search mechanism operates on two different individuals in the neighborhood, making full use of the good genes carried by neighboring individuals and improving the algorithm's local search capability to a certain extent. The mutation search mechanism, in contrast, allows the dimensions of an individual to mutate with relatively low probability, further increasing population diversity without significantly disrupting the evolutionary direction of the population. In addition, the probability of conducting the oppositional search is the highest, significantly higher than those of the simulated binary cross search and the mutation search. This new exploration strategy is thus able to support local search while better realizing global search. Consequently, the possibility of the algorithm converging to the optimal frontier is greatly improved, and its convergence speed is improved to a certain extent.
3.2.3. New Development Phase
The AOA algorithm enters the development phase during the later evolution stage. According to Equation (
9), all individuals utilize the optimal individual as a reference vector and learn from the difference vector between the optimal individual and the current individual. This approach helps speed up the evolution of the algorithm toward the theoretical optimum. However, unlike single-objective optimization, in decomposition-based multi-objective optimization the subproblem corresponding to each weight vector is most likely associated with only one individual; thus, the optimal individual is most likely the individual itself, and directly applying the evolutionary strategy of the original development phase to the multi-objective problem is impossible. Therefore, a new development strategy, shown in Algorithm 4, is proposed to address multi-objective problems. The specific mechanisms of search mode 1 and search mode 2 are detailed below.
Algorithm 4: New development phase framework
- 1: Determine the intimate population B_i and the neighboring population BB_i of each individual X_i, with sizes L_i and LL_i, respectively
- 2: if rand > 5/14 then
- 3:  if TF > 0.45 ∩ TF < 0.8 then
- 4:   X_i^new ← Perform search mode 1 on X_i
- 5:  else
- 6:   X_i^new ← Perform search mode 2 on X_i
- 7:  end if
- 8: else if rand <= 5/14 ∩ rand >= 1/14 then
- 9:  Select one individual each from BB_i and B_i at random
- 10:  if … then
- 11:   …;
- 12:  end if
- 13:  X_i^new ← Perform simulated binary cross-learning for the two selected individuals
- 14:  X_i^new ← Perform a mutation-learning for X_i^new
- 15: else
- 16:  X_i^new ← Perform a mutation-learning for X_i
- 17: end if
Search Mode 1
Search mode 1 is shown in Equation (
29):
where $X_r$ denotes an individual selected randomly from the intimate population $B_i$ with a probability of 0.8 or from the neighboring population $BB_i$ with a probability of 0.2. The intimate population $B_i$ is the part of the neighboring population closest to the individual, and the number of individuals it contains is shown in Equation (30); the basis vectors are determined by Equation (31) in accordance with the probability.
Here, $X^{best}$ denotes the optimal individual corresponding to the current weight vector. When rand > 0.6, if $X^{best}$ is the current individual $X_i$ itself, then $X^{best}$ is subjected to the opposition operation of Equation (20); at this time, the midpoint of $X_i$ and the opposed optimal individual serves as the basis vector.
Search Mode 2
Search mode 2 is shown in Equation (
32):
where $X_{r1}$ and $X_{r2}$ denote different individuals randomly selected from the neighboring population $BB_i$ and the intimate population $B_i$, respectively.
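Only the difference-vector selection probabilities of search mode 1 are stated explicitly in the text, so the sketch below is limited to that step; the full update rules of Equations (29) and (32) are not reproduced here.

```python
import random

def pick_difference_individual(B_i, BB_i):
    """Search mode 1: the difference-vector partner is drawn from the
    intimate population B_i with probability 0.8, otherwise from the
    neighboring population BB_i, as stated in the text."""
    pool = B_i if random.random() < 0.8 else BB_i
    return random.choice(pool)
```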
In summary, the new development mechanism proposed in this section has the following advantages. First, relative to the exploration strategy of Section 3.2.2, the new development mechanism only replaces the oppositional search mechanism with a new search mechanism that combines search modes 1 and 2; in the simulated binary cross search, one individual is now drawn from the intimate population, and the other mechanisms remain unchanged. Following the early exploration phase, the population is already relatively close to the optimal frontier. The development phase therefore no longer uses the oppositional search mechanism of the exploration phase, which focuses on global search; instead, it uses search modes 1 and 2, which focus on exploring the vicinity of the optimal individual corresponding to each subproblem and its neighborhood. The binary cross and mutation searches ensure the diversity of the population, thereby improving the algorithm's convergence power while providing more candidate space for search modes 1 and 2. Evidently, the new development mechanism can greatly enhance the exploration of the algorithm near the optimal frontier as well as the speed of convergence to the optimal frontier. Second, search modes 1 and 2 are applied in the early and late development phases, respectively. A comparison between them shows that the difference is mainly reflected in the selection of the basis vectors and difference vectors. In selecting the basis vectors, search mode 1 retains the original AOA approach of using the optimal individual, while additionally introducing the individual itself and the center of the individual and the optimal individual as basis vectors. Search mode 1 thus enhances the direct search in the vicinity of the optimal individual and further improves the speed of exploring the optimal frontier. Conversely, in search mode 2 the basis vector is selected as either the individual itself or the midpoint between the optimal individual and its oppositional counterpart. This preference amplifies the search within the individual's immediate vicinity. Evidently, the shift from search mode 1 to search mode 2 aligns with the algorithm's evolving behavior as the obtained front gradually approaches the optimal front: the transition first facilitates swift convergence toward the optimal front, after which the algorithm progressively refines the fine search towards the theoretical optimal frontier. When selecting difference vectors, in search mode 1 individuals communicate primarily with themselves; the selection process favors individuals from the intimate population with high probability, while individuals from the neighboring population are selected randomly with low probability. A large number of full exchanges between individuals and intimate individuals helps guide them closer to the current optimal frontier, considering that individuals within intimate populations typically possess superior evolutionary information. Furthermore, limited interaction with individuals from neighboring populations extends the scope of the search, contributing to an increased exploration range; this aids in generating new evolutionary traits, thereby mitigating evolutionary stagnation to a certain extent. By the period corresponding to search mode 2, the obtained optimal frontier has largely formed; at this point, two individuals are randomly selected directly from the intimate population to communicate with each other. Communication and learning between them makes for a finer search in their own neighborhood, considering that the evolutionary information of an individual and the individuals in its intimate neighborhood is already similar. This condition better facilitates the achievement of a balanced distribution.
3.2.4. A New Approach to Environmental Selection
Numerous experiments and sources in the literature confirm that in terms of convergence the performance of MOEA/D is generally good. However, there is room for improvement in terms of its distributivity, especially in the boundary part of the frontier surface. The algorithm should focus on improving convergence, considering that the frontier obtained by the multi-objective algorithm in the early phase of evolution is generally far away from the theoretical frontier. In the subsequent stage of evolution, it is relatively close to the theoretical frontier, and the algorithm should focus on pursuing distributivity. Accordingly, this section describes improvements to the environment selection method of the MOEA/D mechanism [
26]. The new environment selection approach is proposed according to the different requirements of the various evolutionary phases, as follows.
An environmental selection approach is applied to the early evolutionary phase.
In addition to the Chebyshev aggregation function value, which reflects the convergence of an individual, the projection distance from the individual to the weight vector allows the convergence of the current frontier to be considered directly; a smaller projection distance indicates that the current individual is closer to the theoretical frontier. In view of this, to further improve the convergence speed of the algorithm in the early phase of evolution, when TF < 0.5 the projection distance is added to the Chebyshev aggregation function, as shown in Equation (33):
where $a$ is an artificially set constant; a value of 10 generally achieves better results, though this setting can be modified according to the specific problem at hand.
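A sketch of this early-phase criterion follows, assuming a simple additive form for Equation (33) in which the constant a weights the projection distance; the exact combination is an assumption.

```python
def early_selection_value(g_tch, d1, a=10.0):
    """Early-phase (TF < 0.5) criterion: Chebyshev value plus a-weighted
    projection distance. The additive form is an assumed stand-in for
    Equation (33); a = 10 follows the text."""
    return g_tch + a * d1
```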
An environmental selection approach is applied to the subsequent evolutionary phase.
An intensive study revealed the following main reasons for the poor distributivity of the MOEA/D mechanism. The iterative individuals to which the weight vectors belong are selected by comparing the Chebyshev aggregation function values of individuals within the neighborhood, so multiple neighboring weight vectors can select the same individual for the next generation of evolution. This inevitably causes the number of intersections between the weight vectors and the theoretical frontier to be less than the number of weight vectors, leading to discontinuities at certain locations and hence poor distributivity. However, if individuals are instead associated with weight vectors by angle, the associated individuals of neighboring weight vectors differ from each other, unlike individuals within a shared neighborhood. If each weight vector selects the individual that participates in the next evolution from among its own associated individuals, then the probability of neighboring weight vectors selecting the same individual is much lower than under the MOEA/D mechanism. Selecting suitable individuals from among the associated individuals of each weight vector can therefore enhance distributivity more effectively than the MOEA/D decomposition mechanism. However, a large number of experiments have confirmed that, when superior individuals are selected from among the associated individuals simply using the Chebyshev aggregation function, the boundary weight vectors generally exhibit worse distributivity than the non-boundary weight vectors. A more comprehensive analysis uncovers that this phenomenon stems from the proximity of boundary weights to 0 along at least one dimension. For the optimal individual associated with such a weight vector, the corresponding fitness value is also inevitably close to 0 in the dimension where the weight vector approaches 0. As a result, computing the TCH value according to Equation (14) inevitably yields an exceedingly large result. The best individual that most closely matches the boundary weights cannot be selected due to this non-adaptability of the boundary individual during TCH computation, resulting in extremely poor distributivity in the vicinity of the boundary weight vectors.
In summary, in order to effectively improve the distributivity, different environment selection mechanisms should be adopted for boundary weight vectors and non-boundary weight vectors. In the later evolutionary phase, when TF ⩾ 0.5, a new environment selection method based on boundary separation is proposed with the following steps.
Step 1: The old and new populations are merged, the angle between each individual and each weight vector is calculated, each individual is associated with the weight vector with the smallest angle, and the weight vectors with associated individuals and those without are identified.
Step 2: For each weight vector with associated individuals, the corresponding set of associated individuals is determined, and whether the weight vector is a boundary weight is identified according to Equation (34). If it is a boundary weight, then the incentive values of all its associated individuals are calculated according to Equation (35); otherwise, Equation (36) is used. Then, the individual with the smallest incentive value is selected from among the individuals associated with each weight vector to belong to that weight vector and participate in the next evolution.
Here, the count on the left-hand side of Equation (34) denotes the number of dimensions of the weight vector whose values are less than a small threshold, and $M$ represents the number of objectives. If Equation (34) is satisfied, then the weight vector is considered a boundary weight.
Here, $c$ is an artificially set constant and the angle term denotes the angle between the weight vector and its corresponding associated individual.
Here, b is an artificially set constant.
Step 3: For the weight vectors without associated individuals, the incentive values of all individuals are calculated according to Equation (
35) and the individual with the smallest incentive value is selected to participate in the next evolution.
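A sketch of Steps 1 and 2 is given below, assuming a boundary weight is one with at least k components below a small threshold eps (the exact test of Equation (34) is not reproduced here); the angle-based association of Step 1 follows the text directly.

```python
import numpy as np

def is_boundary_weight(lam, eps=1e-3, k=1):
    """Assumed boundary test for Equation (34): the weight vector is a
    boundary weight when at least k of its components fall below eps."""
    return int(np.sum(lam < eps)) >= k

def associate_by_angle(F_norm, W):
    """Step 1: associate each individual with the weight vector forming
    the smallest angle with it (largest cosine). F_norm is the N x M
    normalized objective matrix; W is the K x M weight matrix."""
    cos = (F_norm @ W.T) / (np.linalg.norm(F_norm, axis=1, keepdims=True)
                            * np.linalg.norm(W, axis=1) + 1e-12)
    return np.argmax(cos, axis=1)  # index of the associated weight per individual
```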
In summary, the environment selection method based on boundary separation applied to the later evolution phase possesses the following advantages. First, for the non-boundary weight vectors, the original Chebyshev aggregation function of the decomposition mechanism is retained in the computation of the incentive function, as shown in Equation (36). This ensures the convergence of the algorithm and avoids the acquired frontier regressing due to the pursuit of distributivity. On this basis, the vertical distance, which reflects the quality of individual distributivity, is added, and the influence of distributivity on individual assessment increases with the number of iterations. This is conducive to the realization of a uniform distribution. Second, as shown in Equation (35), the incentive function for boundary weight vectors contains the angle, vertical distance, and projection distance, avoiding the drawback that boundary weight vectors cannot account for distributivity under the Chebyshev aggregation function. The vertical distance and angle are the most intuitive measures of the distributivity of the algorithm; thus, as the number of iterations increases, their influence on the incentive function is gradually increased, which is likewise conducive to the realization of a uniform distribution. In addition, the projection distance, unlike the angle and vertical distance, reflects how closely the algorithm approaches the PF surface; its use ensures that no loss of convergence occurs as a result of the measures taken to maintain distributivity.
3.2.5. Weight Update Method Based on the Shape of the Frontier Surface
Currently, multi-objective algorithms based on decomposition commonly use the two-layer generation scheme proposed in NSGAIII to generate the weight vectors [
27]. If the theoretical front is in the form of a hyperplane, then this approach ensures that the weight vectors are evenly distributed on it. However, when the frontier surface shows a convex shape, the distribution state of the weight vectors is more dispersed in the convex region. Meanwhile, the distribution state of the weight vectors becomes concentrated when the frontier surface shows a concave shape. The best practical optimal frontier obtained by multi-objective methods based on decomposition is the intersection of the theoretical frontier and the weight vectors. Thus, when the front surface is a non-hyperplane, the solution set distribution of the multi-objective methods based on decomposition is not uniform.
As shown in Figure 1, taking the hyperplane where the black point is located as a criterion, the sum of the normalized fitness values of a point on the hyperplane is equal to 1. For points within the convex region, where the red dot is located, the sums of the normalized fitness values are larger closer to the central region and smaller toward the boundary areas; these values fall within the range of 1 to 2. For points within the concave region, where the blue dot is located, the sums of the normalized fitness values are smaller closer to the central region and larger toward the boundary areas; these values fall within the range of 0 to 1. Therefore, the concave–convex character of the frontier surface can be discriminated by comparing the individuals' normalized fitness sums with 1. Considerable experimental studies have found that, when a uniformly distributed set of weight vectors is transformed by an exponent $a$, the transformed weight vectors form a uniformly distributed convex weight set when $a < 1$; conversely, when $a > 1$, they form a uniformly distributed concave weight set. When generating uniformly distributed concave–convex weight vectors in this way, if the current surface exhibits a convex shape, then the dispersion of weight vectors within the convex region is mitigated; similarly, if the current surface takes on a concave shape, then the excessive concentration of weight vectors within the concave region is relieved.
On this basis, the following strategy for adjusting the weight vector using the shape of the frontier surface is introduced:
where $a$ is an artificially set constant, usually set to 0.05 for convex fronts and −0.4 for concave fronts; the scale term denotes the scale size of the $i$th individual; and the ratio term denotes the normalized ratio between the dimensions of each individual in the currently obtained frontier, which is calculated according to Equation (38), in which the ideal point represents the ideal fitness value over all individuals. The raw fitness values are not used in Equation (37); rather, the normalized fitness values are used in order to avoid bias due to large differences between the magnitudes of the objective functions. Such bias can cause the overall front surface to lean toward the side with relatively small objective function values, distorting its shape; consequently, a front surface with a hyperplane-like shape cannot be achieved, which impacts the overall distribution.
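A sketch of the update follows, assuming Equation (38) converts each individual's normalized objectives to per-dimension ratios and Equation (37) raises them to the power 1 + a before renormalizing; only the values a = 0.05 (convex) and a = −0.4 (concave) come from the text, so the functional form is an assumption.

```python
import numpy as np

def update_weights(F, a):
    """Assumed form of Equations (37)-(38): normalize the N x M
    objective matrix F, convert each row to ratios, raise them to the
    power (1 + a), and renormalize to obtain updated weight vectors."""
    z_star = F.min(axis=0)                       # ideal point
    z_nad = F.max(axis=0)                        # nadir estimate for normalization
    F_norm = (F - z_star) / (z_nad - z_star + 1e-12)
    ratios = F_norm / (F_norm.sum(axis=1, keepdims=True) + 1e-12)  # Eq. (38), assumed
    new_W = ratios ** (1.0 + a)                  # Eq. (37), assumed
    return new_W / new_W.sum(axis=1, keepdims=True)
```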
In summary, this section presents a weight update method based on the shape of frontier surface, which is outlined in Algorithm 5.
Algorithm 5: Weight update method based on the shape of the frontier surface
- 1: if … then
- 2:  … % Normalization of the fitness values of each individual
- 3:  … % Sum of the normalized fitness values of each individual
- 4:  if … then
- 5:   … % Select the maximum value of each column of …
- 6:   …
- 7:   …
- 8:  else
- 9:   … % Select the maximum value of each column of …
- 10:   …
- 11:  end if
- 12:  if … then % the frontier surface is predicted to show a convex shape
- 13:   a ← 0.05
- 14:   ← The weights of each individual are updated according to Equations (37) and (38)
- 15:  else if … then % the frontier surface is predicted to show a concave shape
- 16:   a ← −0.4
- 17:   ← The weights of each individual are updated according to Equations (37) and (38)
- 18:  end if
- 19:  Re-identify the neighborhoods of the updated weight vectors
- 20: end if
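The shape test of Algorithm 5 can be sketched as follows; comparing the mean of the per-individual normalized fitness sums against 1 with a small tolerance band is an assumption, as the exact thresholds are elided in the pseudocode.

```python
import numpy as np

def predict_front_shape(F):
    """Predict the front shape from the N x M objective matrix F by
    normalizing the objectives and summing them per individual: sums
    above 1 suggest a convex region, sums below 1 a concave one."""
    z_star, z_nad = F.min(axis=0), F.max(axis=0)
    F_norm = (F - z_star) / (z_nad - z_star + 1e-12)  # per-dimension normalization
    sums = F_norm.sum(axis=1)                         # sum of normalized fitness values
    mean_sum = sums.mean()
    if mean_sum > 1.05:        # tolerance band of 0.05 is an assumption
        return "convex"
    elif mean_sum < 0.95:
        return "concave"
    return "hyperplane"
```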
3.2.6. Algorithm Complexity Analysis
In this analysis, the population size is N, the maximum number of iterations is T, the problem dimension is D, and the neighborhood size is NI. The MOIAOA algorithm mainly consists of the new exploration phase, the new development phase, and the new approach to environmental selection. The worst-case time complexity of each phase in a single run is analyzed as follows. In the new exploration phase, Equation (19) is computed at most N times, or Equations (22) or (25) at most N × D times; thus, the worst-case time complexity of this phase is O(N × D). In the new development phase, Equation (29) is computed at most N times, or Equations (22) or (25) at most N × D times; thus, the worst-case time complexity of this phase is O(N × D). In the new environmental selection approach, Equation (33) is computed at most N × NI times; thus, the worst-case time complexity of this phase is O(N × NI).
Therefore, the worst-case time complexity of a single run of MOIAOA is O(N × D) + O(N × D) + O(N × NI) = O(N × (D + NI)).