Article

A New Method for Dynamic Multi-Objective Optimization Based on Segment and Cloud Prediction

by Peng Ni, Jiale Gao, Yafei Song, Wen Quan and Qinghua Xing
1 Air and Missile Defense College, Air Force Engineering University, Xi’an 710051, China
2 No. 95806 of PLA, Beijing 100021, China
3 Air Traffic Control and Navigation College, Air Force Engineering University, Xi’an 710051, China
* Authors to whom correspondence should be addressed.
Symmetry 2020, 12(3), 465; https://doi.org/10.3390/sym12030465
Submission received: 27 January 2020 / Revised: 7 March 2020 / Accepted: 11 March 2020 / Published: 16 March 2020

Abstract

In the real world, the multi-objective optimization problems arising in most projects change over time. Once the environment changes, the distribution of the optimal solutions in decision space also changes. Sometimes such a change obeys the law of symmetry, i.e., the minimum of the objective function in one environment is its maximum in another environment; in such cases, the optimal solutions remain unchanged or vary only within a small range. In most cases, however, the change does not obey the law of symmetry. In order to continue the search in the changed environment while preserving the advantages of the previous search, a prediction strategy can be used to predict the new position of the Pareto set. For this purpose, a segment and multi-directional prediction is proposed in this paper, consisting of three mechanisms. First, by segmenting the optimal solution set, changes in the distribution of the Pareto front can be predicted. Second, by introducing cloud theory, the distance error of the direction prediction can be offset effectively. Third, by using an extra angle search, the angle error of the prediction caused by nonlinear variation of the Pareto set can also be offset effectively. Finally, eight benchmark problems were used to verify the performance of the proposed algorithm and the compared algorithms. The results indicate that the algorithm proposed in this paper has good convergence and distribution, as well as a quick response to the changed environment.

1. Introduction

Dynamic multi-objective optimization problems (DMOOPs) are a class of time-varying multi-objective optimization problems frequently encountered in science and engineering. These problems not only exhibit conflicting optimization objectives and high-dimensional solution spaces, but also have time-varying objectives, constraints, and decision spaces [1,2,3,4,5,6,7]. As a result, the restrictive assumptions of traditional algorithms make it difficult for them to meet the requirements of large-scale, time-critical, and complex non-deterministic polynomial (NP) hard problems. Compared with traditional algorithms, evolutionary methods place few requirements on the problem model, are efficient, and are embarrassingly parallel, self-organizing, and self-adaptive, so they have been widely used in modern industry and scientific research [8,9,10].
The simplest way to solve DMOOPs with evolutionary methods is to maintain the diversity of the population. For example, DNSGA-II (dynamic multi-objective optimization and decision-making using the modified NSGA-II) proposed by Deb [11] uses partial random initialization or random variation to enhance the flexibility of the population in the changed environment. The three immigration schemes proposed by Azevedo [12] and the immune clonal algorithm proposed by Shang Rong-hua [13] likewise re-initialize the population when the environment changes. Because the variation trend of the dynamic environment is unknown, these methods can be regarded as exploratory attempts that ensure superior individuals are preserved in the changed environment. Their advantage is that they adapt to a wide variety of environmental changes; their drawback is that the search advantages accumulated before an environmental change are difficult to retain afterwards. The characteristics of this kind of method are therefore strong environmental adaptability but poor prediction accuracy.
In contrast with these early diversity-maintaining methods, later approaches use historical information collected over multiple environmental changes to predict the position of the population after the next change; that is, they study the non-dominated solution set statistically. There are two such approaches, namely the memory strategy and the prediction strategy.
The memory strategy predicts the position of the population after the next environmental change by rules, storing a large amount of information about the non-dominated solution set. It is suited to dynamic problems with “short periodicity”, where “periodicity” means the behaviour can be described by the historical information and “short” implies that little storage is required [14,15,16].
However, it has several drawbacks, namely large memory consumption and incorrect predictions when solving time-varying nonlinear problems or problems whose environment changes frequently.
The prediction strategy, in contrast to the memory strategy, analyses historical information statistically and predicts the likely direction of population migration using prediction theory (for example, linear regression or autoregressive models). The FPS (feed-forward prediction strategy) proposed by Hatzakis [17] predicts the migration locations of the non-dominated solution set based on an autoregressive model. Because its statistical analysis of historical information is incomplete, the prediction ability of FPS is weak. The PPS (population prediction strategy) proposed by Zhou [18] uses an autoregressive model to predict the migration of the population center and a shape estimation to migrate the whole population. Compared with FPS, PPS has two main characteristics: (1) further processing of historical information, which is used to predict the center location of the population; and (2) shape estimation, which is better than estimating individuals one by one. All of these methods based on autoregressive models share a fundamental disadvantage: a large amount of statistical information is needed to construct the model. Moreover, if the prediction error increases repeatedly, it is amplified, so such methods are difficult to adjust when the change rule of the environment itself changes.
In short, the autoregressive model is widely used to solve problems with regular changes, but it responds slowly and requires large amounts of statistical data. To compensate, a diversity-keeping strategy is often combined with it. The DSS (directed search strategy) proposed by Wu [19] solves dynamic problems using both population-center prediction and a crossed search; its center prediction is linear and, compared with MLR (multiple linear regression), it needs less historical information and adapts better to the environment. For DMOOPs, Rong [20] proposed a method that predicts the changes with a linear model based on segmentation of the optimal front. The linear model used by Li [21] builds on autoregressive prediction and predicts key points of the optimal front, such as boundary points and inflection points. A linear model is also used by Ruan [22] to predict the position of the optimal front, and to maintain population diversity the changes in the population are predicted from the extreme points of the previous generation [20]. However, it is worth noting that a prediction error always remains when a linear model is used to solve dynamic problems (with progressively increasing, progressively decreasing, or more complex changes). Therefore, all the literature above employs a diversity-keeping scheme to redress prediction errors.
To remedy these shortcomings of the linear model, the segmentation cloud prediction strategy (SCPS) is proposed in this paper to improve the precision of prediction. First, the searched optimal front is segmented so as to divide the population. Second, a new linear prediction is applied to each part of the population, drawing on the ideas of the cloud model: following the notion of uncertainty in the cloud model, entropy (and super entropy) is used to redress the prediction errors. Moreover, for abrupt nonlinear changes in dynamic problems, an extra angle search strategy is used to ensure the diversity of the population during the prediction process.
The rest of this paper is organized as follows. The dynamic multi-objective optimization problem is described in Section 2. The segmentation cloud prediction strategy is proposed in Section 3. Benchmark problems are applied in Section 4 to show the performance of the proposed method, together with the related experimental analysis. Conclusions are provided in the last section.

2. Description of Dynamic Multi-Objective Problem

Considering the symmetry between minimization and maximization under symmetric environments, the optimization problem can be normalized to a unified minimization form. In general, a DMOP can be described as follows,
\min F(x,t) = \left( f_1(x,t), \ldots, f_m(x,t) \right)^T, \quad \text{s.t.}\ \ g(x,t) \le 0, \ \ h(x,t) = 0
where x = (x_1, \ldots, x_n)^T \in \Omega is the vector of decision variables, \Omega is the n-dimensional decision space, and t is the time variable. g(x,t) \le 0 is the inequality constraint and h(x,t) = 0 is the equality constraint. The objective vector is y = (f_1, \ldots, f_m)^T \in \Lambda, where \Lambda is the m-dimensional objective space. The evaluation function F(x,t): \Omega \times T \to \Lambda defines the mapping from the decision space to the objective space [23].
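To make the time-varying formulation above concrete, the following is a minimal Python sketch of a bi-objective DMOP. The particular functions and the sinusoidal time shift are assumptions chosen only for illustration; they are not one of the benchmark problems used in Section 4.

```python
import numpy as np

def dmop_example(x, t):
    """Illustrative time-varying bi-objective problem in the form of Eq. (1).

    x : decision vector in [0, 1]^n, t : time variable.
    The concrete form below (a sinusoidally shifted distance function) is an
    assumption for illustration, not one of the JY benchmarks of Section 4.
    """
    G = np.sin(0.5 * np.pi * t)                # time-dependent shift of the Pareto set
    g = 1.0 + np.sum((x[1:] - G) ** 2)         # minimal (g = 1) when x[1:] == G
    f1 = x[0]
    f2 = g * (1.0 - np.sqrt(x[0] / g))
    return np.array([f1, f2])                  # F(x, t) = (f1(x, t), f2(x, t))^T, both minimized

# As t changes, the minimizing decision vectors move with G(t), while the
# Pareto front in objective space keeps the shape f2 = 1 - sqrt(f1).
```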
Definition 1.
(Pareto dominance). At time t, individual p is said to dominate individual q, denoted p \succ q, if and only if \forall i \in \{1, 2, \ldots, m\}: f_i(p,t) \le f_i(q,t) and \exists j \in \{1, 2, \ldots, m\}: f_j(p,t) < f_j(q,t).
Definition 2.
(Pareto optimal solution set, PS). Let x \in \Omega be a decision vector. The PS is defined as
\mathrm{PS} := \{ x \in \Omega \mid \neg \exists\, y \in \Omega : y \succ x \}
Definition 3.
(Pareto optimal front, PF). Let x \in \Omega be a decision vector. The PF is defined as
\mathrm{PF} := \{ y = F(x,t) \mid x \in \mathrm{PS} \}

3. Segmentation Cloud Prediction Strategy

First, the optimal front is segmented into multiple fragments, and the segmentation result is used to divide the population in the decision space. Second, selected individuals are used to predict the change direction of the optimal front with the directional cloud prediction. Finally, an additional search is conducted with the extra angle search to cover possible offset positions of the optimal front.

3.1. Population Segmentation

For DMOOPs, the optimal front may be deflected or deformed over time. In order to predict the variation of the PF more accurately, the PF is segmented into multiple fragments, and each fragment is used to make its own prediction. Compared with a single linear prediction of the population center, this method predicts PF deformation and nonlinear migration more accurately.
First, a boundary point of the PF is chosen as the first key point, and the distances between this point and all the others are calculated; the point with the maximum distance becomes the second key point. In the same manner, the third key point is the point with the maximum sum of distances to the two existing key points, and so on, until m + 1 key points are obtained for the m objective functions (y = (f_1, \ldots, f_m)^T). Each individual is then assigned to the key point at minimum distance, which yields m + 1 fragments; thus the population is segmented according to the m + 1 key points.
This method is computationally simple and has low complexity. The time complexity of choosing each key point is O(N), where N is the population size, so the overall complexity of the segmentation is O(N(m+2)/2) = O(Nm), whereas the complexity of a clustering method is O(N^2). Therefore, the proposed segmentation is simpler and computationally cheaper.
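As an illustration of the segmentation described above, here is a minimal Python sketch. The choice of the minimizer of the first objective as the initial boundary point and the use of Euclidean distances in objective space are assumptions; the text does not fix these details.

```python
import numpy as np

def segment_population(F, m):
    """Sketch of the key-point segmentation of Section 3.1.

    F : (N, m) array of objective values of the current non-dominated set.
    m : number of objectives, so m + 1 key points / fragments are produced.
    Returns the indices of the key points and, for every individual, the
    index (0..m) of the fragment it is assigned to.
    """
    N = F.shape[0]
    key_idx = [int(np.argmin(F[:, 0]))]            # a boundary point as the first key point (assumed)
    for _ in range(m):                             # pick the remaining m key points
        summed = np.zeros(N)
        for k in key_idx:
            summed += np.linalg.norm(F - F[k], axis=1)
        summed[key_idx] = -np.inf                  # never re-pick an existing key point
        key_idx.append(int(np.argmax(summed)))     # maximal summed distance to the key points
    # every individual joins the fragment of its nearest key point
    dist = np.linalg.norm(F[:, None, :] - F[key_idx][None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    return np.array(key_idx), labels
```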

3.2. Directional Cloud Prediction Strategy

Suppose the population size is N and the ith sub-population Pop_i contains K_i individuals. The center of sub-population Pop_i is defined as
C_i(t) = \frac{1}{|Pop_i|} \sum_{x \in Pop_i} x
where Pop_i = \{ x_1^i, \ldots, x_{K_i}^i \} contains all the individuals of the sub-population, the kth individual is x_k^i = (x_{k1}^i, \ldots, x_{kn}^i), C_i(t) is the center position of the sub-population at time t, and |\cdot| denotes the cardinality of a set.
The migration vector d_i(t) of the sub-population and the error \Delta(t) between two consecutive migrations, which are used for the prediction at time t + 1, are calculated from the center positions at time t and time t − 1 as follows,
d_i(t) = C_i(t) - C_i(t-1), \quad \Delta(t) = d_i(t) - d_i(t-1)
The prediction of the migration position of the sub-population is based on the normal cloud model: the moving direction of the population is taken as the expectation, the Euclidean length of the motion vector d_i(t) as the entropy, and the deviation \Delta(t) between two consecutive moving directions as the super entropy. A normal cloud generator is built from the expectation, entropy, and super entropy. First, a normal random vector E_n \sim N(\Delta(t), \Delta(t)^2 / n^2) is generated, whose expectation is \Delta(t) and standard deviation is \Delta(t)/n, where E_n = (E_{n1}, \ldots, E_{nn}). Then a normal random vector v_1 \sim N(d_i(t), E_n^2) is generated, whose expectation is d_i(t) and standard deviation is E_n. The predicted optimal solution position of the sub-population is then
y = x + v 1
where x is the position of an individual before the change, v_1 is the predicted direction, and y is the predicted position of the individual after the change.
Figure 1 shows the schematic diagram of the directional cloud search, where d(t) is the direction given by the directional prediction and d'(t) is a possible direction generated by the cloud prediction. That is, the main search direction of the directional cloud prediction follows d(t), while with a certain probability positions along the previous offset direction \Delta(t) are also explored.
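A minimal Python sketch of the directional cloud prediction of Equations (4)–(6) is given below. Applying the entropy and super entropy component-wise and sampling one predicted direction per individual are assumptions made for the sketch; all names are illustrative.

```python
import numpy as np

def directional_cloud_predict(pop, center_prev, d_prev, rng=None):
    """Sketch of the directional cloud prediction of Section 3.2 (Eqs. (4)-(6)).

    pop         : (K, n) array, the i-th sub-population at time t
    center_prev : C_i(t-1), center of the same sub-population at time t-1
    d_prev      : d_i(t-1), previous migration vector
    Returns the predicted positions y, the new center C_i(t) and d_i(t).
    """
    rng = rng or np.random.default_rng()
    n = pop.shape[1]
    center = pop.mean(axis=0)                                  # C_i(t), Eq. (4)
    d = center - center_prev                                   # d_i(t), Eq. (5)
    delta = d - d_prev                                         # prediction error Delta(t), Eq. (5)
    # normal cloud generator: entropy around Delta(t), then a direction around d_i(t)
    En = rng.normal(loc=delta, scale=np.abs(delta) / n)        # En ~ N(Delta(t), (Delta(t)/n)^2)
    v1 = rng.normal(loc=d, scale=np.abs(En), size=pop.shape)   # v1 ~ N(d_i(t), En^2), one sample per individual
    y = pop + v1                                               # Eq. (6): predicted positions
    return y, center, d
```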

3.3. Extra Angle Search

The directional cloud prediction includes error compensation for the linear prediction, but it is effective only for gradual and regular changes. If the dynamic problem exhibits reversing (back-and-forth) or non-gradual changes, the direction of the directional cloud prediction may be opposite or perpendicular to the actual change direction of the dynamic problem. To deal with this problem, an extra angle search strategy is proposed: based on some randomly chosen individuals, it searches all possible directions within a stated angle. Figure 2 shows the schematic diagram of the extra angle search strategy.
First, a random vector perpendicular to d_i(t) is constructed. Starting from a random vector r = (r_1, \ldots, r_n) with components in [−1, 1], the jth component of r is reassigned according to Equation (7) so that r becomes perpendicular to d_i(t) (here the subscripts denote vector components),
r_j = -\frac{1}{d_j(t)} \sum_{i \ne j}^{n} d_i(t)\, r_i(t)
The direction vector of extra angle search would be calculated as follows,
e_i(t) = d_i(t) + \frac{r}{\| r \|}\, \| d_i(t) \| \cot(\theta)
where \theta is the deflection angle, which defines the possible range of the search and can be calculated as
\theta_i = \arccos \frac{d_i(t-1) \cdot d_i(t-2)}{\| d_i(t-1) \| \, \| d_i(t-2) \|}
Once the direction vector of the extra angle search is determined, the normal cloud model is used to search the angular deviation for all selected individuals. A normal random vector E_n^* \sim N(d_i(t), \Delta(t)^2) is generated, whose expectation is d_i(t) and standard deviation is \Delta(t), where E_n^* = (E_{n1}^*, \ldots, E_{nn}^*). Then a normal random vector v_2 \sim N(e_i(t), E_n^{*2}) is generated, whose expectation is e_i(t) and standard deviation is E_n^*. The predicted position from the angular deviation search is calculated as follows,
y * = x + v 2
where x is the position of an individual before the change, v_2 is the predicted direction, and y^* is the predicted position of the individual after the change.
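The following Python sketch constructs the extra angle search direction of Equations (7)–(9). Solving for the last component to obtain a vector perpendicular to d, recovering the angle through arccos, and the small guards against division by zero are assumptions of this sketch.

```python
import numpy as np

def extra_angle_direction(d, d_prev, rng=None):
    """Sketch of the extra angle search of Section 3.3 (Eqs. (7)-(9)).

    d, d_prev : the last two migration vectors, d_i(t-1) and d_i(t-2).
    Returns the deflected search direction e_i(t).
    """
    rng = rng or np.random.default_rng()
    n = d.shape[0]
    r = rng.uniform(-1.0, 1.0, size=n)
    # Eq. (7): fix one component (here the last one, an assumption) so that r . d = 0
    j = n - 1
    r[j] = -np.dot(np.delete(d, j), np.delete(r, j)) / (d[j] + 1e-12)
    # Eq. (9): deflection angle between the two previous migration directions
    cos_theta = np.dot(d, d_prev) / (np.linalg.norm(d) * np.linalg.norm(d_prev) + 1e-12)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    # Eq. (8): deflect d along the perpendicular direction r by cot(theta)
    e = d + r / (np.linalg.norm(r) + 1e-12) * np.linalg.norm(d) / np.tan(theta + 1e-12)
    return e
```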

3.4. Environmental Detection

Sensitivity to environmental change is very important for the algorithm: the computational efficiency on a DMOP decreases if the sensitivity is either excessive or insufficient. Moreover, the objectives of a DMOP may not all change at the same time, and different objective functions may change with different amplitudes. Therefore, the changes in the values of all objective functions are considered together and normalized. The environmental sensitivity is calculated as follows,
\varepsilon(t) = \frac{ \sum_{j=1}^{H} \left\| F(x_j, t) - F(x_j, t-1) \right\| }{ H \max_{1 \le i \le H} \left\| F(x_i, t) - F(x_i, t-1) \right\| }
where H is the number of individuals randomly selected from the population; random selection mainly reduces the computational cost of environment detection. F(x_j, t) is the objective vector of individual x_j at time t. In general, the proportion of the population selected for environment detection is 5%.
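A minimal Python sketch of the detection rule in Equation (11) is shown below; the detection threshold is an assumption, since the text only defines the sensitivity value itself.

```python
import numpy as np

def environment_changed(F_now, F_prev, threshold=1e-3):
    """Sketch of the environment detection of Eq. (11).

    F_now, F_prev : (H, m) objective values of the same H randomly chosen
    individuals (about 5% of the population) re-evaluated at t and t-1.
    The threshold used to declare a change is an assumption.
    """
    diff = np.linalg.norm(F_now - F_prev, axis=1)         # per-individual objective change
    eps = diff.sum() / (len(diff) * diff.max() + 1e-12)   # normalized sensitivity, Eq. (11)
    return eps > threshold, eps
```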

3.5. SCPS Framework

The SCPS iterates within the basic framework of a dynamic multi-objective evolutionary algorithm (MOEA). The basic framework of a dynamic MOEA (DMOEA) mainly includes two parts: dynamic prediction and static MOEA search. This paper mainly focuses on the prediction performance of the DMOEA, so the classical NSGA-II [24] is selected as the static MOEA search. The SCPS is described in detail below.
Input: the proportion \zeta of the population that is randomly initialized when enough historical information has not yet been collected, the proportion L1 of the population used for directional prediction, the population size N, and the final time of environmental change Tmax; t := 0, d(t − 1) := 0.
Output: PS.
Step 1: Randomly initialize the population.
Step 2: According to Equation (11), detect whether the environment has changed. If it has changed, go to Step 3; otherwise, go to Step 8.
Step 3: If the value of d(t − 1) is 0, turn to Step 4; otherwise turn to Step 5.
Step 4: Randomly select ζ × N individuals to evolve, let d(t − 1) = d(t).
Step 5: According to Section 3.1, segment the population into m + 1 parts. Calculate the center of each sub-population by Equation (4) and the moving direction of the population at time t. Then select L1 × N individuals with the binary tournament selection model and predict their positions with the cloud model.
Step 6: Select N × (1 − L1) individuals with the binary tournament selection model and calculate the search vector of extra angle search for these individuals using Equation (7) to Equation (9).
Step 7: Perform boundary detection for the new individuals.
Step 8: Perform non-dominated sorting and calculate the crowding distance. Keep the first N individuals.
Step 9: Check the termination condition. If it is met, go to Step 10; otherwise, go to Step 2.
Step 10: End, output the population PS.
The one-step prediction (Step 5) and the deflection-angle search (Step 6) randomly select L1 × N and N × (1 − L1) individuals to predict, respectively. As the predicted individuals might fall outside the decision space, Step 7 performs boundary detection on the new individuals and repairs those beyond the bounds,
y_i = \begin{cases} y_i, & a_i \le y_i \le b_i \\ 0.5 (a_i + x_i), & y_i < a_i \\ 0.5 (b_i + x_i), & y_i > b_i \end{cases}
where x_i is the ith dimension of individual x before prediction, y_i is the ith dimension of the predicted position, i = \{1, \ldots, n\}, and a_i and b_i are the lower and upper bounds of the ith dimension of the decision variable, respectively.
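A minimal Python sketch of the boundary repair of Equation (12) used in Step 7 is given below (vectorized over the decision dimensions); the function name is illustrative.

```python
import numpy as np

def boundary_repair(y, x, lower, upper):
    """Sketch of the boundary repair of Eq. (12) (Step 7).

    y            : predicted individual (may violate the bounds)
    x            : the same individual before prediction
    lower, upper : per-dimension bounds a_i and b_i of the decision space
    """
    y = np.where(y < lower, 0.5 * (lower + x), y)   # pulled back between the lower bound and x
    y = np.where(y > upper, 0.5 * (upper + x), y)   # pulled back between the upper bound and x
    return y
```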

4. Experimental Analysis

4.1. Benchmark Problems

In [25], a new dynamic benchmark problem generator was constructed. Many dynamic benchmark problems derived from this generator are related to practical engineering problems such as greenhouse systems, hydraulic and hydroelectric engineering, and line scheduling.

4.2. Parameter Setting

In order to test the search performance of the proposed algorithm on dynamic problems more effectively, three combinations of environmental change severity and frequency (n, \tau) were set for the benchmark problems, namely (5, 10), (10, 10), and (10, 20). The decomposition-based multi-objective evolutionary algorithm (MOEA/D) [26], PPS [18], the multi-directional prediction (MDP) [20], and SCPS were chosen as the compared algorithms in the tests. The population size of all compared algorithms was 200 (N = 200), and the evolution termination time was Tmax = 10.
(1) MOEA/D: T = 20.
(2) PPS: the number of retained population centers is M = 23, and a linear (autoregressive) regression model of order p = 3 is used.
(3) SCPS: \zeta = 0.2, and the number of individuals chosen for the prediction model is K1 = 0.5N (i.e., L1 = 0.5).

4.3. Metrics

There are numerous metrics for dynamic MOEAs. The inverted generational distance (IGD) is chosen as the evaluation criterion at each iteration, and the modified IGD (MIGD) is used to evaluate each algorithm over several runs on each benchmark problem [18].
IGD(P(t), P^*(t)) = \frac{ \sum_{x \in P^*(t)} d(x, P(t)) }{ | P^*(t) | }
where P^*(t) is a set of uniformly distributed points on the ideal PF at time t and P(t) is the result obtained by the algorithm. The distance between an individual x and the solution set P(t) is d(x, P(t)) = \min_{a \in P(t)} \| F(x,t) - F(a,t) \|, and |P^*(t)| is the cardinality of P^*(t). The MIGD evaluates the mean IGD of an algorithm over a period of time on a benchmark problem and is defined as follows,
MIGD = \frac{1}{|T|} \sum_{t \in T} IGD(P(t), P^*(t))
where T is a set of discrete time points and |T| is the cardinality of T.
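For reference, a short Python sketch of the IGD and MIGD defined above, assuming the ideal and obtained fronts are given as arrays of objective vectors:

```python
import numpy as np

def igd(P_star, P):
    """IGD of a found front P against the ideal front P*(t): the mean distance
    from each point of P*(t) to its closest point in P (both given as arrays
    of objective vectors)."""
    dists = np.linalg.norm(P_star[:, None, :] - P[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def migd(ideal_fronts, found_fronts):
    """MIGD: the mean IGD over a set of discrete time points T."""
    return float(np.mean([igd(ps, p) for ps, p in zip(ideal_fronts, found_fronts)]))
```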

4.4. Test and Analysis

Table 1 shows the mean MIGD results of SCPS and the compared methods over 20 runs on eight benchmark problems. The smaller the MIGD value, the higher the average prediction accuracy. The optimal values are highlighted in bold.
SCPS shows strong search ability on the eight dynamic test problems; only on JY4 with (n, \tau) = (5, 10) is PPS slightly better than SCPS. Over all the test problems, the four algorithms show strong search ability on JY1–JY5 as well as JY8, with MIGD values reaching the 10−2 order of magnitude; that is, all four algorithms can track the dynamic front well.
JY6 is a Type III dynamic problem as defined in [25], in which the multimodal PS and PF change over time, so the distribution of the optimal solutions is in a constant state of change. Although SCPS is better than the other three algorithms on this problem, its MIGD is still unsatisfactory.
JY7 is also a multimodal problem, and its PF shape changes constantly. As the number of local optima is fixed, JY7 is relatively simpler than JY6, so all four algorithms perform better on JY7 than on JY6. According to the standard deviation of MIGD, SCPS is more stable than the other three algorithms and obtains higher-quality solutions.
In order to analyse the dynamic search ability of the four algorithms, the eight benchmark problems were examined under (n, \tau) = (10, 20), and the time-varying behaviour of the IGD was analysed. Figure 3 shows the IGD curves of the four algorithms over time on the benchmark problems. At each environmental change, the smaller the IGD value, the more accurate the prediction of the algorithm.
The results indicate that MOEA/D has good search performance on static problems but cannot predict dynamically, so its capability to solve dynamic problems within bounded time is inadequate. For JY2 and JY5, PPS is better than MDP, whereas for JY7 and JY8, MDP is better than PPS; in particular, on JY7 large prediction errors may arise and keep increasing in PPS. On all eight benchmark problems, SCPS is better than the other three algorithms.
In order to further analyse the experimental results, the distributions of the predicted results at five moments were compared for JY4, JY5, JY7, and JY8. Figure 4 shows these distributions for the four algorithms on the four benchmark problems.
The discrete regions of JY4 are time-varying, and the concavity of the PF of JY5 changes over time. Although the PF changes of these two benchmark problems are different, both are unimodal problems, so the results show good distribution and convergence. The PF shapes of JY7 and JY8 change all the time, so optimization is rather difficult. The points shown in Figure 4 are the non-dominated solution sets of the populations, and their number reflects the performance of the algorithms. MDP and PPS show good distribution and convergence at the first moment, but at the next four moments the distribution of their non-dominated solution sets is not satisfactory; that is, some points are far from the true PF. Compared with the other three algorithms, SCPS performs particularly well in distribution and convergence. Moreover, on JY8, all four algorithms achieve good distribution, but SCPS converges better than the others.
According to the scatter distributions, SCPS has better search performance: the population segmentation provides an accurate prediction of shape changes, the cloud prediction serves as the basic prediction strategy, and the extra angle search mainly compensates for the angular prediction error.
Based on the IGD means, the change of IGD with iterations, and the scatter plots of the predicted results, the proposed algorithm is stable and has good predictive ability; its prediction is more reasonable and closer to the population at the most recent moment.

5. Conclusions

The segment cloud prediction proposed in this paper divides the population on the basis of distances along the PF. The center position of each sub-population is determined from the result of a linear prediction of its center, and the population distribution is predicted using cloud theory. The angle error introduced by the linear prediction is reduced considerably by the extra angle search. The simulation results show that the algorithm has better convergence and distribution, as well as good environmental adaptability. It cannot be denied, however, that PF segmentation and cloud theory may not perform as well on more difficult dynamic prediction problems, such as those with an abruptly changing mapping between PF and PS. High-dimensional dynamic multi-objective optimization is also an important research target, for which the effectiveness of the Euclidean distance is worth discussing. Future work can focus on these aspects.

Author Contributions

Conceptualization, P.N. and J.G.; methodology, Y.S.; writing—original draft preparation, W.Q.; writing—review and editing, Q.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 61703426) and by the Young Talent Fund of University Association for Science and Technology in Shaanxi, China (Grant No. 20190108).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xia, X.; Gui, L.; Zhan, Z. A multi-swarm particle swarm optimization algorithm based on dynamical topology and purposeful detecting. Appl. Soft Comput. 2018, 67, 126–140.
  2. Song, Y.; Fu, Q.; Wang, Y.; Wang, X. Divergence-based cross entropy and uncertainty measures of Atanassov’s intuitionistic fuzzy sets with their application in decision making. Appl. Soft Comput. 2019, 84, 105703.
  3. Zhang, B.; Zhang, M.; Song, Y.; Zhang, L. Combining evidence sources in time domain with decision maker’s preference on time sequence. IEEE Access 2019, 7, 174210–174218.
  4. Song, Y.; Wang, X.; Quan, W.; Huang, W. A new approach to construct similarity measure for intuitionistic fuzzy sets. Soft Comput. 2019, 23, 1985–1998.
  5. Song, Y.; Wang, X.; Zhu, J.; Lei, L. Sensor dynamic reliability evaluation based on evidence and intuitionistic fuzzy sets. Appl. Intell. 2018, 48, 3950–3962.
  6. Song, Y.; Wang, X.; Lei, L.; Xue, A. A novel similarity measure on intuitionistic fuzzy sets with its applications. Appl. Intell. 2015, 42, 252–261.
  7. Lei, L.; Song, Y.; Luo, X. A new re-encoding ECOC using a reject option. Appl. Intell.
  8. Soh, H.; Ong, Y.S.; Nguyen, Q.C.; Nguyen, Q.H.; Habibullah, M.S.; Hung, T.; Kuo, J.L. Discovering unique, low-energy pure water isomers: Memetic exploration, optimization and landscape analysis. IEEE Trans. Evol. Comput. 2010, 14, 419–437.
  9. Thammawichai, M.; Kerrigan, E.C. Energy-efficient real-time scheduling for two-type heterogeneous multiprocessors. Real Time Syst. 2018, 54, 132–165.
  10. Shen, M.; Zhan, Z.H.; Chen, W.N.; Gong, Y.J.; Zhang, J.; Li, Y. Bi-velocity discrete particle swarm optimization and its application to multicast routing problem in communication networks. IEEE Trans. Ind. Electron. 2014, 61, 7141–7151.
  11. Deb, K.; Karthik, S. Dynamic multiobjective optimization and decision-making using modified NSGA-II: A case study on hydro-thermal power scheduling. In Proceedings of the 4th International Conference on Evolutionary Multi-Criterion Optimization, Matsushima, Japan, 5–8 March 2007; Springer: Berlin/Heidelberg, Germany, 2007; pp. 803–817.
  12. Azevedo, C.R.B.; Araujo, A.F.R. Generalized immigration schemes for dynamic evolutionary multiobjective optimization. In Proceedings of the IEEE Congress on Evolutionary Computation, New Orleans, LA, USA, 5–8 June 2011; pp. 2033–2040.
  13. Shang, R.H.; Jiao, L.C.; Gong, M.G.; Ma, W.P. An immune clonal algorithm for dynamic multi-objective optimization. J. Softw. 2007, 18, 2700–2711.
  14. Kominami, M.; Hamagami, T. A new genetic algorithm with diploid chromosomes by using probability decoding for adaptation to various environments. Electron. Commun. Jpn. 2010, 93, 38–46.
  15. Yang, S. On the design of diploid genetic algorithms for problem optimization in dynamic environments. In Proceedings of the 2006 IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, 16–21 July 2006; pp. 1362–1369.
  16. Liu, M.; Zeng, W.H. Memory enhanced dynamic multi-objective evolutionary algorithm based on decomposition. J. Softw. 2013, 24, 1571–1588.
  17. Hatzakis, I.; Wallace, D. Dynamic multi-objective optimization with evolutionary algorithms: A forward-looking approach. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, Washington, DC, USA, 8–12 July 2006; pp. 1201–1208.
  18. Zhou, A.M.; Jin, Y.C.; Zhang, Q.F. A population prediction strategy for evolutionary dynamic multiobjective optimization. IEEE Trans. Cybern. 2014, 44, 40–53.
  19. Wu, Y.; Jin, Y.; Liu, X. A directed search strategy for evolutionary dynamic multiobjective optimization. Soft Comput. 2015, 19, 3221–3235.
  20. Rong, M.; Gong, D.; Zhang, Y.; Jin, Y.; Pedrycz, W. Multidirectional prediction approach for dynamic multiobjective optimization problems. In Intelligent Computing Methodologies, ICIC 2016; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 9773.
  21. Li, Q.; Zou, J.; Yang, S.; Zheng, J.; Ruan, G. A predictive strategy based on special points for evolutionary dynamic multi-objective optimization. Soft Comput. 2018, 1–17.
  22. Ruan, G.; Yu, G.; Zheng, J.; Zou, J.; Yang, S. The effect of diversity maintenance on prediction in dynamic multi-objective optimization. Appl. Soft Comput. 2017, 56, 631–647.
  23. Gee, S.B.; Tan, K.C.; Abbass, H.A. A benchmark test suite for dynamic evolutionary multiobjective optimization. IEEE Trans. Cybern. 2017, 47, 461–472.
  24. Deb, K.; Agrawal, S.; Pratap, A.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
  25. Jiang, S.; Yang, S. Evolutionary dynamic multiobjective optimization: Benchmarks and algorithm comparisons. IEEE Trans. Cybern. 2017, 47, 198–211.
  26. Zhang, Q.; Li, H. MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731.
Figure 1. Directional cloud search.
Figure 2. Extra angle search.
Figure 3. IGD indicator change of 4 algorithms running in benchmark problem.
Figure 4. Scatter distribution of prediction population.
Table 1. Results of all the compared algorithms regarding MIGD.
Function | (n, τ) | SCPS Mean | SCPS Variance | MDP Mean | MDP Variance | PPS Mean | PPS Variance | MOEA/D Mean | MOEA/D Variance
JY1 | 5,10 | 1.75 × 10−2 | 3.99 × 10−4 | 4.87 × 10−2 | 1.18 × 10−3 | 2.03 × 10−2 | 4.79 × 10−3 | 5.16 × 10−2 | 2.06 × 10−3
JY1 | 10,10 | 7.55 × 10−3 | 1.31 × 10−4 | 1.64 × 10−2 | 3.86 × 10−4 | 3.17 × 10−2 | 9.98 × 10−3 | 3.43 × 10−2 | 1.16 × 10−3
JY1 | 10,20 | 4.28 × 10−3 | 6.01 × 10−5 | 8.50 × 10−3 | 1.30 × 10−4 | 8.73 × 10−3 | 1.53 × 10−3 | 2.34 × 10−2 | 1.77 × 10−3
JY2 | 5,10 | 4.76 × 10−2 | 2.15 × 10−4 | 6.17 × 10−2 | 7.78 × 10−4 | 5.09 × 10−2 | 9.68 × 10−4 | 7.15 × 10−2 | 1.27 × 10−3
JY2 | 10,10 | 7.39 × 10−3 | 5.10 × 10−5 | 1.66 × 10−2 | 3.06 × 10−4 | 3.09 × 10−2 | 1.38 × 10−2 | 3.30 × 10−2 | 6.96 × 10−4
JY2 | 10,20 | 4.13 × 10−3 | 3.05 × 10−5 | 8.30 × 10−3 | 1.28 × 10−4 | 8.47 × 10−3 | 1.27 × 10−3 | 2.48 × 10−2 | 1.83 × 10−3
JY3 | 5,10 | 9.17 × 10−3 | 4.36 × 10−4 | 1.75 × 10−2 | 3.46 × 10−4 | 1.52 × 10−1 | 1.33 × 10−1 | 2.12 × 10−1 | 1.86 × 10−1
JY3 | 10,10 | 9.11 × 10−3 | 6.18 × 10−4 | 1.74 × 10−2 | 3.07 × 10−4 | 2.91 × 10−1 | 2.38 × 10−1 | 1.87 × 10−1 | 1.78 × 10−1
JY3 | 10,20 | 5.09 × 10−3 | 7.72 × 10−5 | 1.34 × 10−2 | 1.74 × 10−4 | 1.26 × 10−2 | 1.60 × 10−3 | 1.02 × 10−1 | 1.30 × 10−1
JY4 | 5,10 | 4.07 × 10−2 | 1.05 × 10−3 | 8.55 × 10−2 | 1.57 × 10−3 | 3.85 × 10−2 | 5.97 × 10−3 | 5.66 × 10−2 | 2.03 × 10−3
JY4 | 10,10 | 2.52 × 10−2 | 5.52 × 10−4 | 5.36 × 10−2 | 5.52 × 10−4 | 5.25 × 10−2 | 7.97 × 10−3 | 3.72 × 10−2 | 1.15 × 10−3
JY4 | 10,20 | 3.41 × 10−3 | 6.64 × 10−5 | 5.43 × 10−3 | 6.43 × 10−5 | 7.47 × 10−3 | 1.01 × 10−3 | 1.07 × 10−2 | 4.64 × 10−4
JY5 | 5,10 | 2.42 × 10−3 | 5.83 × 10−6 | 1.13 × 10−2 | 2.38 × 10−4 | 1.82 × 10−2 | 9.69 × 10−3 | 3.48 × 10−2 | 6.36 × 10−4
JY5 | 10,10 | 2.42 × 10−3 | 3.29 × 10−6 | 1.10 × 10−2 | 2.06 × 10−4 | 2.11 × 10−2 | 1.31 × 10−2 | 2.46 × 10−2 | 1.03 × 10−3
JY5 | 10,20 | 2.41 × 10−3 | 7.04 × 10−6 | 6.03 × 10−3 | 5.48 × 10−5 | 7.41 × 10−3 | 2.17 × 10−3 | 2.06 × 10−2 | 1.52 × 10−4
JY6 | 5,10 | 2.11 | 6.81 × 10−2 | 5.90 | 1.65 × 10−1 | 2.97 | 1.96 × 10−1 | 3.35 | 1.18 × 10−1
JY6 | 10,10 | 1.61 | 6.26 × 10−2 | 3.34 | 6.99 × 10−2 | 3.23 | 3.76 × 10−1 | 2.04 | 1.25 × 10−1
JY6 | 10,20 | 3.88 × 10−1 | 1.34 × 10−2 | 1.18 | 3.45 × 10−2 | 7.97 × 10−1 | 8.59 × 10−2 | 5.36 × 10−1 | 1.49 × 10−2
JY7 | 5,10 | 1.42 | 1.26 × 10−1 | 16.6 | 1.68 | 89.8 | 64.7 | 20.5 | 13.76
JY7 | 10,10 | 2.06 × 10−1 | 9.54 × 10−2 | 4.33 × 10−1 | 5.27 × 10−2 | 83.9 | 67.6 | 8.38 × 10−1 | 5.71 × 10−1
JY7 | 10,20 | 8.55 × 10−2 | 4.93 × 10−3 | 2.40 × 10−1 | 2.10 × 10−2 | 5.80 | 7.09 | 3.35 × 10−1 | 8.29 × 10−2
JY8 | 5,10 | 3.34 × 10−3 | 4.25 × 10−5 | 1.12 × 10−2 | 2.12 × 10−4 | 1.93 × 10−2 | 1.38 × 10−2 | 6.31 × 10−2 | 3.93 × 10−3
JY8 | 10,10 | 3.17 × 10−3 | 3.45 × 10−5 | 1.13 × 10−2 | 2.23 × 10−4 | 2.86 × 10−2 | 1.44 × 10−2 | 6.34 × 10−2 | 1.92 × 10−2
JY8 | 10,20 | 2.77 × 10−3 | 2.06 × 10−5 | 6.72 × 10−3 | 1.19 × 10−4 | 7.57 × 10−3 | 1.93 × 10−3 | 4.21 × 10−2 | 1.47 × 10−3
