Article

A Linear Regression Prediction-Based Dynamic Multi-Objective Evolutionary Algorithm with Correlations of Pareto Front Points

College of Software Engineering, Zhengzhou University of Light Industry, Zhengzhou 450001, China
* Authors to whom correspondence should be addressed.
Algorithms 2025, 18(6), 372; https://doi.org/10.3390/a18060372
Submission received: 14 May 2025 / Revised: 16 June 2025 / Accepted: 18 June 2025 / Published: 19 June 2025

Abstract

The Dynamic Multi-objective Optimization Problem (DMOP) is a common problem type in both academia and industry, and the Dynamic Multi-Objective Evolutionary Algorithm (DMOEA) is an effective means of solving DMOPs. Although many research works have proposed a variety of DMOEAs, the demand for efficiently solving DMOPs in drastically changing scenarios is still not well met. To this end, this paper innovatively proposes to exploit the correlations between different points of the Pareto optimal front (PF) to improve the accuracy of predicting the new PF in a new environment, which is, to the best of our knowledge, the first such attempt. Specifically, when the DMOP environment changes, this paper first constructs a spatio-temporal correlation model between key points of the PF based on linear regression; then, based on the constructed model, it predicts a new location for each key point in the new environment; subsequently, it constructs a sub-population by adding Gaussian noise to the predicted locations to improve generalization; it then builds another sub-population following the idea of DNSGA-II-B to further improve population diversity; finally, combining the two sub-populations, it re-initializes the population through a random replacement strategy to adapt to the new environment. The proposed method was evaluated on the CEC 2018 test suite, and the experimental results show that, compared with six recent approaches, it obtains the best MIGD value on six DMOPs and the best MHVD value on five DMOPs.

1. Introduction

Dynamic Multi-objective Optimization Problems (DMOPs) have attracted the attention of scientists and developers for decades due to their frequent appearance in various fields, e.g., logistics [1], disaster management [2], nuclear physics [3], and chemical reaction equipment management [4]. Solving DMOPs is challenging because real environments change in uncertain, sometimes chaotic ways. The Dynamic Multi-Objective Evolutionary Algorithm (DMOEA) is one of the most efficient ways to overcome this challenge. A DMOEA employs the global search strategy of an evolutionary algorithm (EA), inspired by a natural or social law, to solve each DMOP instance (a static multi-objective optimization problem) in every steady environmental state, and tunes its search direction to adapt to changed environments through reaction/prediction approaches.
Designing a DMOEA for solving DMOPs is challenging. This is mainly because the real world can bring rapid changes to a DMOP, which requires the DMOEA not only to capture each change quickly and accurately but also to respond to it in a reasonable way [5]. Therefore, several works have studied responsive approaches that allow a DMOEA to update the search area (the population) based on the predicted movement of the Pareto optimal solution set (PS) or the Pareto optimal front (PF) when the environment (the current DMOP instance) changes. However, there are still issues that existing works have ignored or left unaddressed. First, the majority of DMOEAs assume the PS/PF moves linearly, which falls short of reality. Second, most DMOEAs predict PS/PF movements by mining the relationship of each independent variable separately. These methods infer the new value of a variable from its own historical values and thus ignore correlations between different variables, even though such correlations are common in the real world. Last but not least, to the best of our knowledge, no existing DMOEA considers the interconnected movement of different solutions in the PS. Due to these issues, existing works can produce inaccurate predictions, which may lead to poor population updates and degrade solution quality.
To overcome the above issues, in this paper, we design a prediction approach based on linear regression (LR) that captures correlations between variables and between optimal solutions to accurately learn the movement pattern of the PS/PF. Specifically, using the historical values of all variables jointly, we establish an LR model in which the variables in the current environment are the regressand and the variables in the last environment are the regressors. With this LR-based prediction model, we can calculate the variable values in the next environment from the current ones and update the population accordingly to adapt to the new environment. In brief, the contributions of this paper include the following three aspects:
  • We propose an LR-based PS/PF movement prediction model that learns correlations between variables and between optimal solutions, and predicts the key points of the PS in the new environment from those of the current PS.
  • We design a DMOEA based on the above LR-based prediction model, which updates the population based on the predicted key points of PS when the environment is changed and employs the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to evolve the population when the environment is stable.
  • We conduct experiments on the CEC 2018 benchmark DMOPs. The results show that our DMOEA has the best overall performance in solving DMOPs, compared with six other up-to-date DMOEAs.
The remaining content is organized as follows. The second section introduces the terminology related to DMOPs that is helpful for understanding this paper. The third section presents our proposed DMOEA based on the LR-based prediction model. The fourth section describes the experimental setup for the performance evaluation of the proposed DMOEA. The fifth section shows and analyzes the experimental results. The sixth section discusses existing works related to DMOEAs. The last section concludes the paper.

2. Prerequisite Knowledge

Definition 1 (DMOP (Dynamic Multi-objective Optimization Problem)).
A DMOP is an optimization problem consisting of a series of Multi-objective Optimization Problems (MOPs), or DMOP instances, each of which has multiple objectives. The DMOP instance changes with the time/environment.
The DMOP can be formulated as Equation (1). $F(t) = \{f_1(t), f_2(t), \ldots, f_{m(t)}(t)\}$ is the set of objectives to be optimized at time $t$, and $m(t)$ represents the number of optimization objectives at $t$. Both the objective functions and their number can change with the time or environment. Equations (2) and (3) are the equality and inequality constraints, respectively. The constraint functions, $H(t)$ and $G(t)$, may also vary with time. $X(t) = \{x_1(t), x_2(t), \ldots, x_{n(t)}(t)\}$ contains the independent variables/arguments of the DMOP (of the objective and constraint functions), with domain $D(t)$. The domain and the number of arguments ($n(t)$) also change with time. $t$ represents a time at which the environment changes; at that moment, the DMOP changes from one MOP into another. $T$ is the number of environments the DMOP is concerned with. If the DMOP concerns long-term optimization, $T = +\infty$.
$$\text{minimize } F(t) = \{f_1(t), f_2(t), \ldots, f_{m(t)}(t)\} \quad (1)$$
subject to
$$H(t) = 0, \quad (2)$$
$$G(t) \le 0, \quad (3)$$
$$X(t) = \{x_1, x_2, \ldots, x_{n(t)}\} \in D(t), \quad (4)$$
$$t = t_1, t_2, \ldots, t_T. \quad (5)$$
Definition 2 (Dominating).
For an MOP or a DMOP instance, an available solution dominates another available solution if and only if the first solution is not worse in any objective and is strictly better in at least one objective.
Equation (6) formalizes that the available solution $X_1$ dominates the available solution $X_2$, where $F$ represents the set of objective functions of the MOP.
$$X_1 \succ X_2 := \forall f \in F\, (f(X_1) \le f(X_2)) \;\wedge\; \exists f \in F\, (f(X_1) < f(X_2)) \quad (6)$$
Definition 3 (Pareto optimal solution set (PS) and Pareto optimal fronts (PFs)).
For an MOP or a DMOP instance, its PS is the set of all available solutions (Pareto optimal solutions) that are not dominated by any other solution. The PF is the set of the objective function values of all Pareto optimal solutions.
PS and PF can be formulated as Equations (7) and (8), respectively, for the MOP with the domain D and the objective function set F.
$$PS := \{X \in D \mid \neg\exists X' \in D\, (X' \succ X)\} \quad (7)$$
$$PF := \{F(X) \mid X \in PS\} \quad (8)$$
The goal of a DMOEA is to find the $PS$ and $PF$ of every DMOP instance of the DMOP.
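Definitions 2 and 3 can be illustrated with a small sketch. The function names (`dominates`, `pareto_set`) and the brute-force pairwise check are our illustrative assumptions, not part of the paper's algorithm:

```python
# Sketch of Definitions 2-3: Pareto dominance and PS/PF extraction
# for a minimization MOP. Names and the O(N^2) scan are illustrative.
import numpy as np

def dominates(f1, f2):
    """True iff objective vector f1 dominates f2 (minimization):
    no objective is worse and at least one is strictly better."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def pareto_set(X, F):
    """Return indices of non-dominated solutions.
    X: (N, n) decision vectors, F: (N, m) objective vectors.
    PS = X[indices], PF = F[indices]."""
    N = len(F)
    keep = []
    for i in range(N):
        if not any(dominates(F[j], F[i]) for j in range(N) if j != i):
            keep.append(i)
    return keep
```

For large populations, a real implementation would use the fast non-dominated sorting of NSGA-II instead of this quadratic scan.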

3. Linear Regression Prediction-Based DMOEA

In this section, to capture various environment change patterns and exploit the inter-correlations of $PF$ points (optimal solutions) of the DMOP, we design an LR-based method that predicts the positions of selected $PS$ points in the new environment based on their historical positions. We consider that the position of each $PF$ point at the next time step is determined by the positions of all $PF$ points in history, as exhibited in Figure 1. Because predicting all $PF$ points (the entire $PS$) is highly complex, we concentrate on a few key points of the $PF$. Specifically, in this paper we set the key points to 11 statistical points (the $10k$-th percentiles, $k = 0, 1, \ldots, 10$) and one center point of the $PF$. Our proposed method is also compatible with other kinds of key points, e.g., knee points, which we will consider in future work for better prediction. Below, we detail our proposed LR-based method.
At the beginning of time $t+1$, we have the historical $PS$ and $PF$ at all times up to $t$, and need to predict the new $PS$ and $PF$ at $t+1$. Because the manifold of the $PF$ is ever-changing for real-world DMOPs, we focus on predicting the key points of the $PF$, which provides powerful guidance for updating the population so that the DMOEA adapts to environmental changes; this is the basic idea employed by almost all existing prediction approaches of DMOEAs. We use $X_k(t)$ ($k = 0, 1, \ldots, 10$) to represent the $10k$-th percentile of the $PS$ at $t$, and $X_{11}(t)$ to represent the center of the $PS$ at $t$, i.e., $X_{11}(t) = \sum_{X \in PS(t)} X / |PS(t)|$. With LR, we can establish the relational model of key points between two consecutive environments, as in Equation (9). $A_{k'}$ ($k' \in [0, 11]$) and $B$ are the parameters that will be learned by LR, where each $A_{k'}$ is an $n \times n$ matrix and $B$ is an $n$-dimensional vector; $n$ represents the dimension of the search space of the concerned DMOP. $\widehat{X_k(t+1)}$ represents the estimated/predicted position of $X_k(t+1)$.
$$\widehat{X_k(t+1)} = \sum_{k' \in [0,11]} A_{k'} \cdot X_{k'}(t) + B, \quad k \in [0,11]. \quad (9)$$
The samples used for learning the LR model at time $t+1$ are the pairs $(X_k(t'), X_k(t'+1))$, $t' = 0, \ldots, t-1$, which can easily be obtained from the historical $PS$s and $PF$s, and which correspond to $X_{k'}(t)$ and $\widehat{X_k(t+1)}$ ($k, k' \in [0,11]$) in Equation (9), respectively.
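The prediction step of Equation (9) can be sketched as follows: each predicted key point is a linear map of all 12 current key points plus a bias. The shapes, dimensions, and the name `predict_key_point` are our assumptions for illustration:

```python
# Minimal sketch of Equation (9): the new position of key point k is
# a linear combination of ALL 12 current key points plus a bias term.
import numpy as np

n = 10   # decision-space dimension (assumed for illustration)
K = 12   # 11 percentile points + 1 center point

def predict_key_point(A, B, X_t):
    """A: (K, n, n) parameter matrices, B: (n,) bias,
    X_t: (K, n) current key points.
    Returns the predicted (n,) position of one key point at t+1."""
    return sum(A[kp] @ X_t[kp] for kp in range(K)) + B
```

Note that a separate parameter set (A, B) would be learned for each target key point k, so the full model consists of 12 such maps.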
Based on the least-squares method, LR learns the parameters $A_{k'}$ and $B$ that minimize the square error (SE) between predicted and real values over the samples. Thus, learning the LR model amounts to solving the minimization problem (10) for the $k$-th key point. This optimization problem can be solved by setting the partial derivatives with respect to the parameters to 0, but the derived prediction model then easily overfits. Therefore, L2 regularization is applied to improve the generalization of the model, and the optimization objective (loss function) is transformed into Equation (12). $\lambda$ is the regularization coefficient, which is generally set to a number less than 0.01 [6,7].
$$\operatorname*{minimize}_{A_{k'},\, B}\; SE_k = \sum_{t'=1}^{t} \left( \widehat{X_k(t')} - X_k(t') \right)^2 \quad (10)$$
$$\phantom{SE_k} = \sum_{t'=1}^{t} \left( \sum_{k' \in [0,11]} A_{k'} \cdot X_{k'}(t'-1) + B - X_k(t') \right)^2 \quad (11)$$
$$L_k(A_{k'}, B) = SE_k + \lambda \cdot \left( \sum_{k' \in [0,11]} \|A_{k'}\|^2 + \|B\|^2 \right) \quad (12)$$
Based on the loss function (12), we obtain the parameters ($A_{k'}$, $k' \in [0,11]$, and $B$) by the gradient descent method, which updates the parameters step by step via the iterative Formulas (13) and (14). $\alpha$ is the learning rate, which is tuned by experiments and is usually set between 0.1 and 0.001 [8]. $\partial L_k / \partial A_{k'}$ and $\partial L_k / \partial B$ are the partial derivatives of the loss function with respect to the parameters and can be calculated by Equations (15) and (16).
$$A_{k'} = A_{k'} - \alpha \cdot \frac{\partial L_k}{\partial A_{k'}}, \quad k' \in [0,11], \quad (13)$$
$$B = B - \alpha \cdot \frac{\partial L_k}{\partial B}. \quad (14)$$
$$\frac{\partial L_k}{\partial A_{k'}} = \frac{\partial}{\partial A_{k'}} \left[ \sum_{t'=1}^{t} \left( \sum_{j \in [0,11]} A_j \cdot X_j(t'-1) + B - X_k(t') \right)^2 + \lambda \cdot \left( \sum_{j \in [0,11]} \|A_j\|^2 + \|B\|^2 \right) \right]$$
$$= 2 \cdot \sum_{t'=1}^{t} \left( \sum_{j \in [0,11]} A_j \cdot X_j(t'-1) + B - X_k(t') \right) \cdot X_{k'}(t'-1)^{T} + 2 \lambda \cdot A_{k'}, \quad (15)$$
$$\frac{\partial L_k}{\partial B} = 2 \cdot \sum_{t'=1}^{t} \left( \sum_{j \in [0,11]} A_j \cdot X_j(t'-1) + B - X_k(t') \right) + 2 \lambda \cdot B. \quad (16)$$
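One gradient descent step of Equations (13)-(16) for a single key point k can be sketched as below. The history layout (`X_hist[t]` holding all K key points at time t), the function name `train_step`, and the default hyperparameters are our assumptions:

```python
# Hedged sketch of one gradient descent step (Equations (13)-(16)) for
# key point k, with L2 regularization. The gradient of the squared
# residual w.r.t. each matrix A[j] is the outer product of the residual
# with the corresponding input key point.
import numpy as np

def train_step(A, B, X_hist, k, lr=0.005, lam=0.001):
    """X_hist: list of (K, n) arrays, one per historical time step.
    A: (K, n, n) matrices, B: (n,) bias. Returns updated (A, B)."""
    K, n = X_hist[0].shape
    gA = 2.0 * lam * A            # regularization part of Eq. (15)
    gB = 2.0 * lam * B            # regularization part of Eq. (16)
    for t in range(1, len(X_hist)):
        pred = sum(A[j] @ X_hist[t - 1][j] for j in range(K)) + B
        r = pred - X_hist[t][k]   # residual for key point k
        for j in range(K):        # data part of Eq. (15): outer products
            gA[j] += 2.0 * np.outer(r, X_hist[t - 1][j])
        gB += 2.0 * r             # data part of Eq. (16)
    return A - lr * gA, B - lr * gB   # Eqs. (13)-(14)
```

Repeating this step until the parameters stabilize corresponds to the training loop in Algorithm 2.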
Based on the above formulas, our proposed LR-based DMOEA can be illustrated by Algorithm 1. First, the algorithm initializes a population and the environment (lines 1-2 in Algorithm 1). Then, it evaluates all objective function values for each individual (line 3 in Algorithm 1) and finds the $PS$ and $PF$, which consist of the individuals not dominated by any other individual and their objective function values, respectively (line 4 in Algorithm 1). After this, the algorithm randomly initializes the parameters of the LR model, which will be updated during the DMOP solving process (line 5 in Algorithm 1).
After the above initialization, the algorithm searches for solutions by iteratively updating/evolving the population (lines 6-20 in Algorithm 1). There are two main kinds of population updates in our proposed DMOEA. When the environment is stable, the population is updated by an evolutionary strategy; since this paper focuses on the prediction method for adapting to environmental changes, we simply employ the evolutionary strategy of NSGA-II, which is widely used for solving multi-objective problems (line 15 in Algorithm 1). In the future, we will explore other evolutionary strategies that cooperate well with our prediction method to further improve efficiency. When the environment is changed (lines 8-13 in Algorithm 1), the DMOEA re-initializes the population with the help of the LR-based prediction method to respond to the change, as described below.
Algorithm 1 LR-based DMOEA
Input: Objective functions F; domain/solution space D;
Output: PS and PF;
1: Randomly initialize a population;
2: Initialize the time or environment;
3: Evaluate the objective function values;
4: Find the current PS and PF;
5: Randomly initialize the parameters of the LR model in Equation (9), A_k, B;
6: while not reach termination condition do
7:   Store the last key points, X_k(t), k = 0, ..., 11;
8:   if environment is changed then
9:     Update the LR model with the stored key points by the gradient descent algorithm (Algorithm 2);
10:    Predict the new positions of the key points with the LR model, X̂_k(t+1);
11:    Build a sub-population with X̂_k(t+1);
12:    Generate a sub-population with the idea of DNSGA-II-B;
13:    Update the population with random replacements;
14:  else
15:    Evolve the population and update PS and PF by NSGA-II;
16:  end if
17:  Re-evaluate the objective function values;
18:  Update the current PS and PF;
19:  Go to the next time tick;
20: end while
21: return PS and PF;
When the environment of the DMOP changes, the DMOEA first updates the parameters of the LR-based prediction model based on the stored locations of the $PF$ key points (line 9 in Algorithm 1). The parameter update method for the LR model is outlined in Algorithm 2 and is discussed later. Then, with the updated LR model, the DMOEA predicts the new locations of the key points ($\widehat{X_k(t+1)}$) from their last locations ($X_k(t)$) by Equation (9) (line 10 in Algorithm 1). Given the predicted new locations, we generate a set of new individuals, amounting to a certain proportion of the population size, which constitutes a sub-population (line 11 in Algorithm 1). These new individuals are generated by adding Gaussian noise to the predicted key points in every dimension, as formulated by Equation (17). $P$ represents the location of a newly generated individual, and $N(0, \sigma^2)$ is the Gaussian distribution with zero mean and standard deviation $\sigma$.
$$P = \widehat{X_k(t+1)} + N(0, \sigma^2) \quad (17)$$
The introduction of Gaussian noise improves the generalization of the prediction method: when the predicted key points lie near the new $PF$ rather than exactly on it, the generated individuals are still likely to be close to the new $PF$. This situation occurs frequently because no prediction method can precisely predict $PF$ movements in the real world.
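The sub-population construction of Equation (17) can be sketched as follows. The sub-population size ratio, the value of sigma, and the name `noisy_subpop` are illustrative assumptions, since the paper does not fix them here:

```python
# Sketch of Equation (17): build a sub-population by perturbing the
# predicted key points with zero-mean Gaussian noise in every dimension.
import numpy as np

def noisy_subpop(X_hat, size, sigma=0.1, rng=None):
    """X_hat: (K, n) predicted key points. Returns (size, n) individuals,
    each a Gaussian perturbation of a randomly chosen key point."""
    rng = rng or np.random.default_rng()
    K, n = X_hat.shape
    picks = rng.integers(0, K, size=size)          # key point per individual
    return X_hat[picks] + rng.normal(0.0, sigma, size=(size, n))
```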
Next, to improve population diversity, we employ the population re-initialization method of DNSGA-II-B to generate another sub-population (line 12 in Algorithm 1). When the environment changes, DNSGA-II-B randomly selects a certain percentage of individuals from the current population and applies the mutation operator to them to generate new individuals.
Given the above two generated sub-populations, our proposed DMOEA re-initializes the population by randomly replacing part of its individuals with generated ones (line 13 in Algorithm 1).
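The random-replacement re-initialization (line 13 of Algorithm 1) can be sketched as below; the function name and the exact replacement policy are our illustrative assumptions:

```python
# Sketch of the random-replacement step: overwrite randomly chosen
# population members with individuals from the generated sub-populations.
import numpy as np

def random_replace(pop, newcomers, rng=None):
    """pop: (N, n), newcomers: (M, n) with M <= N. Returns a copy of pop
    with M randomly chosen rows replaced by the newcomers."""
    rng = rng or np.random.default_rng()
    pop = pop.copy()
    idx = rng.choice(len(pop), size=len(newcomers), replace=False)
    pop[idx] = newcomers
    return pop
```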
For the new individuals generated above by the evolutionary strategy, the prediction method, and the mutation operator of DNSGA-II-B, the proposed DMOEA evaluates their fitness values (line 17 in Algorithm 1) and updates the current $PS$ and $PF$ (line 18 in Algorithm 1), since some members of the $PS$ may be dominated by new individuals and some new individuals may not be dominated by any current $PS$ member. After this, the proposed DMOEA enters the next tick for the next population re-initialization or evolution (line 19 in Algorithm 1).
The parameters of the LR model used for predicting key point locations are updated at every environment change (line 9 in Algorithm 1). In this paper, we employ the gradient descent algorithm to learn the LR model from the historical key points stored in line 7 of Algorithm 1. As illustrated in Algorithm 2, training the LR model consists of repeatedly updating its parameters via the iterative Formulas (13)-(16) (line 3 in Algorithm 2). This process is repeated until the predefined maximum number of iterations is reached or the parameters change negligibly over several consecutive iterations (line 1 in Algorithm 2).
Algorithm 2 LR model training by gradient descent
Input: Historical key points, X_k(t′), t′ = 0, 1, ..., T, k = 0, ..., 11;
       initial parameters, A_k, k = 0, ..., 11, and B;
Output: Updated parameters, A_k, B;
1: while not reach termination condition do
2:   for t′ = 1, ..., T; k = 0, ..., 11 do
3:     Update A_k and B by Equations (13)-(16) with (X_k(t′−1), X_k(t′));
4:   end for
5: end while
6: return A_k, k = 0, ..., 11, and B;

4. Experiment Environment

For evaluating the effectiveness and efficiency of our proposed DMOEA, we conducted extensive experiments to compare the performance with six other DMOEAs on the CEC 2018 benchmark test suite [9]. The six benchmark DMOEAs are as follows:
  • FL (Forward-Looking) [10] is one of the simplest and most widely used methods for the environmental response of DMOPs. FL assumes that the $PF$ moves with a uniform velocity and predicts that the new $PF$ moves from the current $PF$ with the same step as the last movement (from the last $PF$ to the current $PF$). For the $PF$ center point and every dimension, the moving velocity is $v_k(t) = x_k(t) - x_k(t-1)$, and the new position of each $PF$ point is $x_k(t+1) = x_k(t) + v_k(t)$.
  • FLV (Velocity Forward-Looking) assumes that the $PF$ moves with a changeable velocity that varies with a constant acceleration. Thus, the new position of a key point in a dimension is $x_k(t+1) = x_k(t) + v_k(t) + \frac{1}{2} a_k(t)$, where $v_k(t)$ is the last moving velocity of the center point and $a_k(t)$ is its last moving acceleration, calculated by $a_k(t) = v_k(t) - v_k(t-1)$.
  • KF (Kalman Filtering-based DMOEA) [11,12] employs the Kalman filtering technology to model the movement of the P F center point.
  • SVR (Support Vector Regression-based DMOEA) [13,14] learns a regression model by support vector regression for the P F center point in every dimension based on its historical locations, and predicts the new location by the learned regression model.
  • XGB (eXtreme Gradient Boosting-based DMOEA) [15] is similar to SVR except that the regression model is learned by XGB, which is a classical and widely used ensemble learning with boosting technologies.
  • RF (Random Forest Regression-based DMOEA) is similar to the above two methods except that the regression model is learned by RF, which is an ensemble learning exploiting the bagging technology.
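The two simplest baselines above, FL and FLV, can be sketched per key point and per dimension (with a unit time step); the function names are our illustrative assumptions:

```python
# Hedged sketch of the FL and FLV baseline predictors, per dimension.

def fl_predict(x_t, x_tm1):
    """Forward-Looking: constant velocity,
    x(t+1) = x(t) + (x(t) - x(t-1))."""
    return x_t + (x_t - x_tm1)

def flv_predict(x_t, x_tm1, x_tm2):
    """Velocity Forward-Looking: constant acceleration,
    x(t+1) = x(t) + v(t) + a(t)/2 with unit time step."""
    v_t = x_t - x_tm1
    a_t = v_t - (x_tm1 - x_tm2)
    return x_t + v_t + 0.5 * a_t
```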
Due to our focus on the prediction method to adapt to the environmental changes for DMOPs, all of the above DMOEAs exploit NSGA-II as the evolutionary strategy when the environment is stable.
The CEC 2018 benchmark test suite includes 14 DMOPs with varied $PS$ and $PF$ movement patterns. The details of these 14 DMOPs are shown in Table 1, Table 2 and Table 3. In our experiments, we made the environment change severe by setting both the frequency of change and the severity of change to 10 for each DMOP. The number of changes was 30, and the number of variables was 10. All of these DMOP parameters were set following the report that proposed the CEC 2018 test suite [9].
The performance metrics used for evaluating the DMOEAs include the Mean Inverted Generational Distance (MIGD) [16,17], the Mean Hypervolume Difference (MHVD) [18], and the Mean Average Crowding Distance (MACD). MIGD is the average of the Inverted Generational Distance (IGD) over all environments encountered while solving the DMOP. IGD is calculated by first uniformly sampling points from the ideal $PF$ ($PF^*$); then, for each sampled point, the closest point on the solved $PF$ (obtained by the algorithm) is identified, and the resulting minimum distances are averaged. A smaller IGD value reflects better alignment between the algorithm's results and $PF^*$. In summary, MIGD can be calculated by Equations (18) and (19).
$$IGD_t = \frac{\sum_{p^* \in PF^*_t} \min_{p \in PF_t} \| p^* - p \|}{|PF^*_t|} \quad (18)$$
$$MIGD = \frac{1}{T} \cdot \sum_{t \le T} IGD_t \quad (19)$$
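Equations (18) and (19) can be sketched directly with NumPy; the function names and the list-of-fronts interface are our illustrative assumptions:

```python
# Sketch of Equations (18)-(19): IGD averages, over the sampled ideal
# front PF*, the distance from each ideal point to its nearest solved
# point; MIGD averages IGD over all environments.
import numpy as np

def igd(pf_ideal, pf_solved):
    """pf_ideal: (I, m) sampled ideal front; pf_solved: (S, m) solved front."""
    # pairwise Euclidean distances via broadcasting: shape (I, S)
    d = np.linalg.norm(pf_ideal[:, None, :] - pf_solved[None, :, :], axis=2)
    return d.min(axis=1).mean()

def migd(ideal_fronts, solved_fronts):
    """Average IGD over all environments (one front pair per environment)."""
    return float(np.mean([igd(p, q) for p, q in zip(ideal_fronts, solved_fronts)]))
```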
Because the ideal $PF$ is commonly unknown for real DMOPs, some works use MHVD as a quality metric to quantify the performance of DMOEAs. MHVD is the average of the HVD between $PF^*$ and the solved $PF$ over all environments. HVD is the difference between the hypervolumes of $PF^*$ and the solved $PF$, where the hypervolume is the Lebesgue measure of the region dominated by a set of points with respect to a reference point. The reference point can be set to the point whose value in every dimension is the upper bound of the corresponding objective function. Thus, the calculation of MHVD for a solved $PF$ is formulated as Equations (20) and (21), where $HV(S)$ represents the hypervolume of a point set $S$.
$$HVD_t = HV(PF^*_t) - HV(PF_t) \quad (20)$$
$$MHVD = \frac{1}{T} \cdot \sum_{t \le T} HVD_t \quad (21)$$
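For the two-objective case, the hypervolume and HVD of Equation (20) can be sketched with a simple sweep over the front sorted by the first objective. The function names and the restriction to two objectives are our assumptions; general m-objective hypervolume computation requires more involved algorithms:

```python
# Hedged 2-objective sketch of HV and HVD (Equation (20)): HV is the
# area dominated by a minimization front w.r.t. a reference point.
import numpy as np

def hv_2d(front, ref):
    """front: (S, 2) objective vectors, ref: reference point (r1, r2)
    dominated by no front point. Returns the dominated area."""
    pts = front[np.argsort(front[:, 0])]   # sweep left to right in f1
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                   # skip dominated points
            area += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area

def hvd(pf_ideal, pf_solved, ref):
    """Hypervolume difference between the ideal and solved fronts."""
    return hv_2d(pf_ideal, ref) - hv_2d(pf_solved, ref)
```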
MACD is the time-average of the Average Crowding Distance (ACD) of the $PF$; the calculation of ACD can be found in [19].
For each of the three performance metrics above, a smaller value indicates better overall algorithm performance in terms of both convergence and distribution.
In this paper, all experiments were implemented with Python 3.12.2 and NumPy 1.26.4, and run on a personal computer with the Windows 11 Home operating system, a 14th Gen Intel Core i7-14700(F) CPU, and 16 GB of DDR5 memory. Each experiment was repeated 20 times, and the average result is reported below.

5. Experiment Results

Table 4 and Table 5 present the values of the performance metrics achieved by the different DMOEAs in solving the 14 DMOPs of the CEC 2018 benchmark suite. As shown in the tables, our proposed DMOEA (LR) achieves the best MIGD on six DMOPs, the best MHVD on five DMOPs, and the best MACD on seven DMOPs, and each of these three counts is larger than that of any other DMOEA. This confirms the superior performance of our proposed method for solving DMOPs. The main advantage of our proposed DMOEA is twofold. First, it predicts the positions of multiple key points instead of only the center point of the $PF$, which allows it to adapt to more general $PF$ shape changes. Second, it learns the relationships among different key points of the $PF$ by LR, instead of exploiting only the correlation within one single point (e.g., the center point). Thus, our proposed method achieves the best overall performance.
There are mainly two situations in which our proposed method performs relatively poorly. The first is when the $PF$ undergoes an almost purely translational motion overall, as in DF1. In such a case, DMOEAs that predict the $PF$ based only on its center point may outperform our method, which exploits the correlations of multiple key points: correlation learning can represent more complex patterns of environmental change, which helps for DMOPs with irregular changes but may fail to accurately capture simple rules.
Another situation in which our proposed method does not achieve the best performance is when the environmental changes are too complex to predict. In this case, no prediction method achieves good performance, and the simplest method may perform best. For example, FL achieves the best MIGD for DF11 and the best MHVD for DF14.
Next, we illustrate the stability of our proposed method with box plots of the performance metric values achieved by the various algorithms. To avoid overly lengthy content, we take the MIGD metric and DF1-DF3 as case studies, as shown in Figure 2, for two reasons. First, the results are similar across the different performance metrics. Second, these three cases cover three scenarios in which the rank of our algorithm is low, the best, and high, respectively. As Figure 2 shows, the interquartile range achieved by our algorithm has a rank similar to that of its average value in each case. Additionally, the box plots of our algorithm contain no outliers. Therefore, our algorithm performs stably in solving DMOPs.

6. Related Works

The basic idea of a DMOEA is predicting the change patterns of DMOP environments ( P S and P F ) and generating new individuals for population re-initialization to adapt to the new environment when it is changed. Therefore, there are several related works aiming at designing accurate prediction methods for DMOEAs.
Zhang et al. [20] first divided the population into several sub-populations and calculated the acceleration of the center point of each sub-population. Then, they calculated the velocity of each sub-population from its center point's acceleration and re-initialized the population by adding the velocity to each individual of every sub-population. Wang et al. [21] first detected the intensity of the environmental change based on the difference between the objective function values of a small proportion of the population in two consecutive environments. Then, based on the detection result, they predicted new individuals by a multi-objective particle swarm optimization algorithm [22] with adapted particle velocity updates and the strength Pareto evolutionary algorithm [23]. Different from most works that employ a first-order difference strategy, where the trajectory of the moving solutions is described by the centroid of the $PS$, Zhang et al. [24] proposed a second-order difference method that considers both the $PF$ and the $PS$: the new $PS$ center is determined by the joint trajectories of the $PF$ and the $PS$ after mapping them into a new two-dimensional space.
To overcome the limitation that most DMOEAs rely only on predictions from adjacent environments, so that the new population may not adapt well to the new environment, Zhang et al. [25] used the Fuzzy C-Means clustering algorithm to cluster all historical PSs, and then divided the clusters into high-quality and low-quality sets based on their center points. With the clustering result, a Support Vector Machine (SVM) classifier was trained to predict whether a new solution is of high quality. The re-initialized population consists of randomly generated individuals that the SVM classifier predicts to be of high quality, together with Gaussian-mutated initial individuals. Ge et al. [26] employed correlation alignment and a probabilistic annotation classifier to obtain high-quality solutions from the current population and from randomly generated individuals in the new environment. Based on the dynamic change pattern of elite individuals, learned by correlation analysis and a denoising autoencoder, their work identifies the parents of the current elite solutions and then uses the denoising autoencoder to transfer the change pattern to new environments, predicting a set of individuals from the current elite solutions and the last $PS$. The individuals produced by these two steps form the re-initialized population for the new environment.
In this paper, to make better predictions by capturing the different moving styles of the $PF$ of a DMOP, we establish a relational model of different points on the $PF$. To the best of our knowledge, this is the first attempt to introduce the inter-correlations between the movements of different points into a DMOEA.

7. Conclusions

In this paper, we focus on the prediction method that allows a DMOEA to adapt to the environmental changes of DMOPs. We propose to exploit the correlations of different points of the $PF$. For this purpose, we use LR to learn a relationship model from the historical locations of all key points to the current location of each key point, and predict the next location of each key point with the learned model. Then, by introducing Gaussian noise, a certain proportion of individuals is generated for re-initializing the population when the environment changes. Extensive experiments on the CEC 2018 benchmark suite verify the efficiency and effectiveness of our proposed method, and the results show that it provides solutions with better accuracy and diversity compared with six state-of-the-art related works.
Building on this work, there are two directions in which we will seek to improve the performance of DMOEAs. The first is to exploit more powerful machine learning technologies to train more accurate relationship models, so that DMOEAs can adapt to new environments better and faster. The second is to seek evolutionary strategies with a more balanced exploration-exploitation ability to help DMOEAs find high-quality solutions quickly, referring to newly proposed improvements such as clustering-based selection [27] and surrogate assistance [28].

Author Contributions

Conceptualization, J.M. and B.W.; methodology, J.M.; software, Y.S.; validation, J.M.; formal analysis, Y.S.; investigation, Y.X.; resources, Y.X.; data curation, Y.S.; writing—original draft preparation, Y.S.; writing—review and editing, Y.X.; visualization, J.M.; supervision, B.W.; project administration, B.W.; funding acquisition, Y.X. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the key scientific and technological projects of Henan Province (Grant No. 252102211072 and 232102210078), National Natural Science Foundation of China (Grant No. 62102372 and 62072414), Anhui Provincial Key Research and Development Project (Grant No. 2024AH053415), Doctor Scientific Research Fund of Zhengzhou University of Light Industry (Grant No. 2021BSJJ029), Anhui Province University Collaborative Innovation Project (Grant No. GXXT-2023-050), Excellent Innovative Research Team of universities in Anhui Province (Grant No. 2023AH010056), Talent Research Fund of Tongling University (Grant No. 2024tlxyrc019), and School-Level Young Backbone Teacher Training Program of Zhengzhou University of Light Industry.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The research data are available from the authors upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript.
DMOEA — Dynamic Multi-objective Evolutionary Algorithm
DMOP — Dynamic Multi-objective Optimization Problem
EA — Evolutionary Algorithm
FL — Forward-Looking
FLV — Velocity Forward-Looking
KF — Kalman Filtering
LR — Linear Regression
MIGD — Mean Inverted Generational Distance
MHVD — Mean Hypervolume Difference
MOP — Multi-objective Optimization Problem
NSGA-II — Non-dominated Sorting Genetic Algorithm II
PF — Pareto optimal Front
PS — Pareto optimal solution Set
RF — Random Forest Regression
SE — Square Error
SVM — Support Vector Machine
SVR — Support Vector Regression
XGB — eXtreme Gradient Boosting

References

1. Zhao, Y.; Shen, X.; Ge, Z. A Knowledge-Guided Multi-Objective Shuffled Frog Leaping Algorithm for Dynamic Multi-Depot Multi-Trip Vehicle Routing Problem. Symmetry 2024, 16, 697.
2. Geng, S.; Hou, H.; Zhou, Z. A dynamic multi-objective model for emergency shelter relief system design integrating the supply and demand sides. Nat. Hazards 2024, 120, 2379–2402.
3. Zhang, Z.; Guo, Y.; Tao, Q. Dynamic multi-objective path-order planning research in nuclear power plant decommissioning based on NSGA-II. Ann. Nucl. Energy 2024, 199, 110369.
4. Zhang, X.; Zhang, G.; Zhang, D.; Zhang, L. Dynamic Multi-Objective Optimization in Brazier-Type Gasification and Carbonization Furnace. Materials 2023, 16, 1164.
5. Jiang, S.; Zou, J.; Yang, S.; Yao, X. Evolutionary Dynamic Multi-objective Optimisation: A Survey. ACM Comput. Surv. 2023, 55, 47.
6. Qiu, X.; Chen, Y.; Lin, Y.; Huang, B. Enhancing Stability in Real Nor-Flash Compute-In-Memory Chips: A Narrowing Output Range Approach Using Elastic Net Regularization. In Proceedings of the 2024 International Symposium on Integrated Circuit Design and Integrated Systems (ICDIS'24), Xiamen, China, 22–24 November 2024; pp. 31–40.
7. Vultureanu-Albişi, A.; Bădică, C. The Model of Regularization Coefficient in Polynomial Regression for Modelling the Spread of COVID-19 in Romania. In Proceedings of the 2022 23rd International Carpathian Control Conference (ICCC), Sinaia, Romania, 29 May–1 June 2022; pp. 94–100.
8. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. Available online: http://www.deeplearningbook.org (accessed on 17 June 2025).
9. Jiang, S.; Yang, S.; Yao, X.; Tan, K.; Kaiser, M.; Krasnogor, N. Benchmark Problems for CEC'2018 Competition on Dynamic Multiobjective Optimisation. 2017. Available online: http://homepages.cs.ncl.ac.uk/shouyong.jiang/cec2018/CEC2018_Tech_Rep_DMOP.pdf (accessed on 17 June 2025).
10. Li, Q.; Liu, X.; Wang, F.; Wang, S.; Zhang, P.; Wu, X. A framework based on generational and environmental response strategies for dynamic multi-objective optimization. Appl. Soft Comput. 2024, 152, 111114.
11. Muruganantham, A.; Tan, K.C.; Vadakkepat, P. Evolutionary Dynamic Multiobjective Optimization Via Kalman Filter Prediction. IEEE Trans. Cybern. 2016, 46, 2862–2873.
12. Chen, M.; Ma, Y. Dynamic multi-objective evolutionary algorithm with center point prediction strategy using ensemble Kalman filter. Soft Comput. 2021, 25, 5003–5019.
13. Fang, Z.; Li, H.; Hu, L.; Zeng, N. A learnable population filter for dynamic multi-objective optimization. Neurocomputing 2024, 574, 127241.
14. Sun, H.; Ma, X.; Hu, Z.; Yang, J.; Cui, H. A two stages prediction strategy for evolutionary dynamic multi-objective optimization. Appl. Intell. 2023, 53, 1115–1131.
15. Gao, K.; Xu, L. Novel strategies based on a gradient boosting regression tree predictor for dynamic multi-objective optimization. Expert Syst. Appl. 2024, 237, 121532.
16. Ishibuchi, H.; Masuda, H.; Tanigaki, Y.; Nojima, Y. Modified Distance Calculation in Generational Distance and Inverted Generational Distance. In Proceedings of the Evolutionary Multi-Criterion Optimization, Guimarães, Portugal, 29 March–1 April 2015; pp. 110–125.
17. Yang, Y.; Ma, Y.; Wang, P.; Xu, Y.; Wang, M. A dynamic multi-objective evolutionary algorithm based on two-stage dimensionality reduction and a region Gauss adaptation prediction strategy. Appl. Soft Comput. 2023, 142, 110333.
18. Zheng, J.; Zhang, B.; Zou, J.; Yang, S.; Hu, Y. A dynamic multi-objective evolutionary algorithm based on Niche prediction strategy. Appl. Soft Comput. 2023, 142, 110359.
19. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197.
20. Zhang, J.; Qu, S.; Zhang, Z.; Cheng, S.; Li, M.; Bi, Y. An acceleration-based prediction strategy for dynamic multi-objective optimization. Soft Comput. 2024, 28, 1215–1228.
21. Wang, Y.; Ma, Y.; Li, Q.; Zhao, Y. A dynamic multi-objective optimization evolutionary algorithm based on classification of environmental change intensity and collaborative prediction strategy. J. Supercomput. 2025, 81, 54.
22. Coello, C.; Pulido, G.; Lechuga, M. Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput. 2004, 8, 256–279.
23. Zitzler, E.; Laumanns, M.; Thiele, L. SPEA2: Improving the Strength Pareto Evolutionary Algorithm; ETH Zurich: Zurich, Switzerland, 2001.
24. Zhang, H.; Wang, G.G.; Dong, J.; Gandomi, A.H. Improved NSGA-III with Second-Order Difference Random Strategy for Dynamic Multi-Objective Optimization. Processes 2021, 9, 911.
25. Zhang, T.; Tao, Q.; Yu, L.; Yi, H.; Chen, J. A new prediction strategy for dynamic multi-objective optimization using hybrid Fuzzy C-Means and support vector machine. Neurocomputing 2025, 621, 129291.
26. Ge, F.; Zhao, X.; Chen, D.; Shen, L.; Liu, H. A dynamic multi-objective optimization algorithm based on probability-driven prediction and correlation-guided individual transfer. J. Supercomput. 2025, 81, 348.
27. Akopov, A.S. An Improved Parallel Biobjective Hybrid Real-Coded Genetic Algorithm with Clustering-Based Selection. Cybern. Inf. Technol. 2024, 24, 32–49.
28. Díaz-Manríquez, A.; Toscano, G.; Barron-Zambrano, J.H.; Tello-Leal, E. A Review of Surrogate Assisted Multiobjective Evolutionary Algorithms. Comput. Intell. Neurosci. 2016, 2016, 9420460.
Figure 1. The diagram of exploiting the inter-correlations of key points by the LR-based prediction method for the DMOP. The solid curve is the PS at the time labelled below it. A circle represents a key point. A dotted line shows the motion trajectory of a key point. A solid line indicates an inter-correlation.
Figure 2. The box plots of MIGD metric values achieved by various algorithms in solving DF1–DF3. (a) DF1. (b) DF2. (c) DF3.
Table 1. The details of DF1-DF6 in the CEC 2018 benchmark suite [9].
DF1
Objectives: min f1(x) = x1, f2(x) = g(x)·(1 − (x1/g(x))^H(t)); g(x) = 1 + Σ_{i=2}^{n} (x_i − G(t))²; H(t) = 0.75·sin(0.5πt) + 1.25; G(t) = |sin(0.5πt)|
PS: 0 ≤ x1 ≤ 1; x_i = G(t), i = 2, …, n
PF: 0 ≤ f1 ≤ 1; f2 = 1 − f1^H(t)

DF2
Objectives: min f1(x) = x_r, f2(x) = g(x)·(1 − √(f1/g)); g(x) = 1 + Σ_{i∈{1,…,n}∖{r}} (x_i − G(t))²; G(t) = |sin(0.5πt)|; r = 1 + (n − 1)·G(t)
PS: 0 ≤ x_r ≤ 1; x_i = G(t) for i ≠ r, i = 1, …, n
PF: 0 ≤ f1 ≤ 1; f2 = 1 − √f1

DF3
Objectives: min f1(x) = x1, f2(x) = g(x)·(1 − (x1/g(x))^H(t)); g(x) = 1 + Σ_{i=2}^{n} (x_i − G(t) − x1^H(t))²; G(t) = sin(0.5πt); H(t) = 1.5 + G(t)
PS: 0 ≤ x1 ≤ 1; x_i = G(t) + x1^H(t), i = 2, …, n
PF: 0 ≤ f1 ≤ 1; f2 = 1 − f1^H(t)

DF4
Objectives: min f1(x) = g(x)·|x1 − a|^H(t), f2(x) = g(x)·|x1 − a − b|^H(t); g(x) = 1 + Σ_{i=2}^{n} (x_i − a·x1²/(i·c²))²; a = sin(0.5πt); b = 1 + |cos(0.5πt)|; c = max(|a|, a + b); H(t) = 1.5 + a
PS: a ≤ x1 ≤ a + b; x_i = a·x1²/(i·c²), i = 2, …, n
PF: 0 ≤ f1 ≤ b^H(t); f2 = (b − f1^{1/H(t)})^H(t)

DF5
Objectives: min f1(x) = g(x)·(x1 + 0.02·sin(w_t·π·x1)), f2(x) = g(x)·(1 − x1 + 0.02·sin(w_t·π·x1)); g(x) = 1 + Σ_{i=2}^{n} (x_i − G(t))²; G(t) = sin(0.5πt); w_t = ⌊10·G(t)⌋
PS: 0 ≤ x1 ≤ 1; x_i = G(t), i = 2, …, n
PF: 0 ≤ f1 ≤ 1; f1 + f2 = 1 + 0.04·sin(w_t·π·(f1 − f2 + 1)/2)

DF6
Objectives: min f1(x) = g(x)·(x1 + 0.1·sin(3πx1))^{α_t}, f2(x) = g(x)·(1 − x1 + 0.1·sin(3πx1))^{α_t}; g(x) = 1 + Σ_{i=2}^{n} (|G(t)|·y_i² − 10·cos(2πy_i) + 10); G(t) = sin(0.5πt); y_i = x_i − G(t); α_t = 0.2 + 2.8·|G(t)|
PS: 0 ≤ x1 ≤ 1; x_i = G(t), i = 2, …, n
PF: 0 ≤ f1 ≤ 1; f1^{1/α_t} + f2^{1/α_t} = 1 + 0.2·sin(3π·(f1^{1/α_t} − f2^{1/α_t} + 1)/2)
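As a concrete illustration of how these time-varying benchmarks are evaluated, a minimal sketch of DF1 from Table 1 follows (the function name `df1` and the vectorized form are our own; on the PS, where x_i = G(t) for i ≥ 2, g(x) = 1 and the objectives land exactly on the PF f2 = 1 − f1^H(t)):

```python
import numpy as np

def df1(x, t):
    """Evaluate the DF1 problem (CEC 2018 suite, Table 1) at time t.

    x: decision vector; t: continuous time parameter.
    Returns the two objective values [f1, f2].
    """
    x = np.asarray(x, dtype=float)
    G = abs(np.sin(0.5 * np.pi * t))             # PS location drifts with t
    H = 0.75 * np.sin(0.5 * np.pi * t) + 1.25    # PF curvature varies with t
    g = 1.0 + np.sum((x[1:] - G) ** 2)           # distance-to-PS penalty
    f1 = x[0]
    f2 = g * (1.0 - (x[0] / g) ** H)
    return np.array([f1, f2])
```

For example, at t = 1 (so G = 1, H = 2), the Pareto-optimal point x = (0.5, 1, 1) evaluates to f1 = 0.5 and f2 = 1 − 0.5² = 0.75.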
Table 2. The details of DF7-DF11 in the CEC 2018 benchmark suite [9].
DF7
Objectives: min f1(x) = g(x)·(1 + t)/x1, f2(x) = g(x)·x1/(1 + t); g(x) = 1 + Σ_{i=2}^{n} (x_i − 1/(1 + e^{α_t·(x1 − 2.5)}))²; α_t = 5·cos(0.5πt)
PS: 1 ≤ x1 ≤ 4; x_i = 1/(1 + e^{α_t·(x1 − 2.5)}), i = 2, …, n
PF: (1 + t)/4 ≤ f1 ≤ 1 + t; f2 = 1/f1

DF8
Objectives: min f1(x) = g(x)·(x1 + 0.1·sin(3πx1)), f2(x) = g(x)·(1 − x1 + 0.1·sin(3πx1))^{α_t}; g(x) = 1 + Σ_{i=2}^{n} (x_i − G(t)·sin(4π·x1^{β_t})/(1 + |G(t)|))²; α_t = 2.25 + 2·cos(2πt); β_t = 1; G(t) = sin(0.5πt)
PS: 0 ≤ x1 ≤ 1; x_i = G(t)·sin(4π·x1^{β_t})/(1 + |G(t)|), i = 2, …, n
PF: 0 ≤ f1 ≤ 1; f1 + f2^{1/α_t} = 1 + 0.2·sin(3π·(f1 − f2^{1/α_t} + 1)/2)

DF9
Objectives: min f1(x) = g(x)·(x1 + max{0, (1/(2N_t) + 0.1)·sin(2N_t·π·x1)}), f2(x) = g(x)·(1 − x1 + max{0, (1/(2N_t) + 0.1)·sin(2N_t·π·x1)}); g(x) = 1 + Σ_{i=2}^{n} (x_i − cos(4t + x1 + x_{i−1}))²; N_t = 1 + ⌊10·|sin(0.5πt)|⌋
PS: x1 ∈ ⋃_{i=1}^{N_t} [(2i − 1)/(2N_t), i/N_t] ∪ {0}; x_i = cos(4t + x1 + x_{i−1}), i = 2, …, n
PF: f1 ∈ ⋃_{i=1}^{N_t} [(2i − 1)/(2N_t), i/N_t] ∪ {0}; f2 = 1 − f1

DF10
Objectives: min f1(x) = g(x)·[sin(0.5πx1)]^H(t), f2(x) = g(x)·[sin(0.5πx2)·cos(0.5πx1)]^H(t), f3(x) = g(x)·[cos(0.5πx2)·cos(0.5πx1)]^H(t); g(x) = 1 + Σ_{i=3}^{n} (x_i − sin(2π(x1 + x2))/(1 + |G(t)|))²; H(t) = 2.25 + 2·cos(0.5πt); G(t) = sin(0.5πt)
PS: 0 ≤ x1, x2 ≤ 1; x_i = sin(2π(x1 + x2))/(1 + |G(t)|), i = 3, …, n
PF: Σ_{i=1}^{3} f_i^{2/H(t)} = 1; 0 ≤ f_{1:3} ≤ 1

DF11
Objectives: min f1(x) = g(x)·sin(y1), f2(x) = g(x)·sin(y2)·cos(y1), f3(x) = g(x)·cos(y2)·cos(y1); y_{1:2} = (π/6)·G(t) + (π/2 − (π/3)·G(t))·x_{1:2}; g(x) = 1 + G(t) + Σ_{i=3}^{n} (x_i − 0.5·G(t)·x1)²; G(t) = |sin(0.5πt)|
PS: 0 ≤ x1, x2 ≤ 1; x_i = 0.5·G(t)·x1, i = 3, …, n
PF: a part of Σ_{i=1}^{3} f_i² = (1 + G(t))²; 0 ≤ f_{1:3} ≤ 1
Table 3. The details of DF12-DF14 in the CEC 2018 benchmark suite [9].
DF12
Objectives: min f1(x) = g(x)·cos(0.5πx1)·cos(0.5πx2), f2(x) = g(x)·cos(0.5πx1)·sin(0.5πx2), f3(x) = g(x)·sin(0.5πx1); g(x) = 1 + Σ_{i=3}^{n} (x_i − sin(t·x1))² + |Π_{j=1}^{2} sin(⌊k_t·(2x_j − r)⌋·π/2)|; k_t = 10·sin(πt); r = 1 − mod(⌊k_t⌋, 2)
PS: 0 ≤ x1, x2 ≤ 1 with Π_{j=1}^{2} mod(⌊k_t·(2x_j − r)⌋, 2) = 0; x_i = sin(t·x1), i = 3, …, n
PF: Σ_{i=1}^{3} f_i² = 1; 0 ≤ f_{1:3} ≤ 1

DF13
Objectives: min f1(x) = g(x)·cos²(0.5πx1), f2(x) = g(x)·cos²(0.5πx2), f3(x) = g(x)·Σ_{j=1}^{2} [sin²(0.5πx_j) + sin(0.5πx_j)·cos²(p_t·π·x_j)]; g(x) = 1 + Σ_{i=3}^{n} (x_i − G(t))²; p_t = ⌊6·G(t)⌋; G(t) = sin(0.5πt)
PS: 0 ≤ x1, x2 ≤ 1; x_i = G(t), i = 3, …, n
PF: DF13 generates both continuous and disconnected PF geometries; the number of disconnected PF segments varies over time.

DF14
Objectives: min f1(x) = g(x)·(1 − y1 + 0.05·sin(6πy1)), f2(x) = g(x)·(1 − x2 + 0.05·sin(6πx2))·(y1 + 0.05·sin(6πy1)), f3(x) = g(x)·(x2 + 0.05·sin(6πx2))·(y1 + 0.05·sin(6πy1)); y1 = 0.5 + G(t)·(x1 − 0.5); g(x) = 1 + Σ_{i=3}^{n} (x_i − G(t))²; G(t) = sin(0.5πt)
PS: 0 ≤ x1, x2 ≤ 1; x_i = G(t), i = 3, …, n
PF: The dynamics of DF14 are the changing size and dimension of the PF. The PF can degenerate into a 1-D manifold; when it is not degenerate, the size of the 2-D PF manifold changes over time, and the number of knee regions changes accordingly.
Table 4. The performance of various DMOEAs in the CEC 2018 benchmark suite (DF1–DF7).
DMOP | Metric | FL | FLS | KF | SVR | XGB | RF | LR-K
DF1 | MIGD avg | 0.003358 | 0.003420 | 0.005266 | 0.003673 | 0.003346 | 0.003438 | 0.003625
DF1 | MIGD rank | 2 | 3 | 7 | 6 | 1 | 4 | 5
DF1 | MHVD avg | 0.572101 | 0.572292 | 0.577866 | 0.572755 | 0.571903 | 0.572468 | 0.572770
DF1 | MHVD rank | 2 | 3 | 7 | 5 | 1 | 4 | 6
DF1 | MACD avg | 0.007299 | 0.007390 | 0.007317 | 0.007334 | 0.007325 | 0.007367 | 0.007164
DF1 | MACD rank | 2 | 7 | 3 | 5 | 4 | 6 | 1
DF2 | MIGD avg | 0.275659 | 0.279651 | 0.365038 | 0.262029 | 0.273419 | 0.262997 | 0.225870
DF2 | MIGD rank | 5 | 6 | 7 | 2 | 4 | 3 | 1
DF2 | MHVD avg | 0.500763 | 0.499969 | 0.674543 | 0.498708 | 0.494868 | 0.504346 | 0.225870
DF2 | MHVD rank | 5 | 4 | 7 | 3 | 2 | 6 | 1
DF2 | MACD avg | 0.064193 | 0.060147 | 0.019380 | 0.044739 | 0.052240 | 0.051317 | 0.055945
DF2 | MACD rank | 7 | 6 | 1 | 2 | 4 | 3 | 5
DF3 | MIGD avg | 0.687966 | 0.683465 | 0.462974 | 0.645055 | 0.693014 | 0.672512 | 0.676729
DF3 | MIGD rank | 6 | 5 | 1 | 2 | 7 | 3 | 4
DF3 | MHVD avg | 0.933143 | 0.940332 | 0.688212 | 0.841667 | 0.920513 | 0.923931 | 0.914939
DF3 | MHVD rank | 6 | 7 | 1 | 2 | 4 | 5 | 3
DF3 | MACD avg | 0.370989 | 0.407731 | 0.004276 | 0.525344 | 0.293931 | 0.222961 | 0.363633
DF3 | MACD rank | 5 | 6 | 1 | 7 | 3 | 2 | 4
DF4 | MIGD avg | 0.406392 | 0.437870 | 0.335678 | 0.375508 | 0.380170 | 0.392109 | 0.362976
DF4 | MIGD rank | 6 | 7 | 1 | 3 | 4 | 5 | 2
DF4 | MHVD avg | 0.252013 | 0.245072 | 0.270142 | 0.252262 | 0.237715 | 0.240964 | 0.240490
DF4 | MHVD rank | 5 | 4 | 7 | 6 | 1 | 3 | 2
DF4 | MACD avg | 0.022482 | 0.023898 | 0.022069 | 0.025366 | 0.024168 | 0.023206 | 0.027455
DF4 | MACD rank | 2 | 4 | 1 | 6 | 5 | 3 | 7
DF5 | MIGD avg | 0.23000 | 0.25843 | 0.38152 | 0.43196 | 0.36711 | 0.39712 | 0.21538
DF5 | MIGD rank | 2 | 3 | 5 | 7 | 4 | 6 | 1
DF5 | MHVD avg | 0.283683 | 0.294900 | 0.167764 | 0.293118 | 0.288968 | 0.294741 | 0.290883
DF5 | MHVD rank | 2 | 7 | 1 | 5 | 3 | 6 | 4
DF5 | MACD avg | 0.054019 | 0.043026 | 0.043663 | 0.039721 | 0.058067 | 0.050178 | 0.052056
DF5 | MACD rank | 6 | 2 | 3 | 1 | 7 | 4 | 5
DF6 | MIGD avg | 9.558898 | 9.017478 | 10.229062 | 10.218425 | 10.380135 | 8.760323 | 8.828903
DF6 | MIGD rank | 4 | 3 | 6 | 5 | 7 | 1 | 2
DF6 | MHVD avg | 0.168647 | 0.171978 | 0.252734 | 0.216913 | 0.244878 | 0.268537 | 0.157827
DF6 | MHVD rank | 2 | 3 | 6 | 4 | 5 | 7 | 1
DF6 | MACD avg | 0.107531 | 0.174019 | 0.155406 | 0.114493 | 0.112718 | 0.164290 | 0.106967
DF6 | MACD rank | 2 | 7 | 5 | 4 | 3 | 6 | 1
DF7 | MIGD avg | 0.409614 | 0.413110 | 0.299303 | 0.437491 | 0.406960 | 0.388607 | 0.365055
DF7 | MIGD rank | 5 | 6 | 1 | 7 | 4 | 3 | 2
DF7 | MHVD avg | 0.818215 | 0.820505 | 0.735557 | 0.819941 | 0.823691 | 0.796808 | 0.800441
DF7 | MHVD rank | 4 | 6 | 1 | 5 | 7 | 2 | 3
DF7 | MACD avg | 0.043981 | 0.049198 | 0.051162 | 0.051297 | 0.044731 | 0.049661 | 0.043827
DF7 | MACD rank | 2 | 4 | 6 | 7 | 3 | 5 | 1
Table 5. The performance of various DMOEAs in the CEC 2018 benchmark suite (DF8–DF14).
DMOP | Metric | FL | FLS | KF | SVR | XGB | RF | LR-K
DF8 | MIGD avg | 0.219790 | 0.196833 | 0.259083 | 0.200351 | 0.198518 | 0.198284 | 0.205540
DF8 | MIGD rank | 6 | 1 | 7 | 4 | 3 | 2 | 5
DF8 | MHVD avg | 0.477825 | 0.436577 | 0.636385 | 0.478566 | 0.474214 | 0.460388 | 0.460648
DF8 | MHVD rank | 5 | 1 | 7 | 6 | 4 | 2 | 3
DF8 | MACD avg | 0.057704 | 0.040442 | 0.035920 | 0.055057 | 0.064762 | 0.030323 | 0.036103
DF8 | MACD rank | 6 | 4 | 2 | 5 | 7 | 1 | 3
DF9 | MIGD avg | 1.204431 | 1.178037 | 1.170227 | 1.289633 | 1.382307 | 1.314454 | 0.939267
DF9 | MIGD rank | 4 | 3 | 2 | 5 | 7 | 6 | 1
DF9 | MHVD avg | 0.308318 | 0.312206 | 0.311153 | 0.324605 | 0.330248 | 0.341792 | 0.275004
DF9 | MHVD rank | 2 | 4 | 3 | 5 | 6 | 7 | 1
DF9 | MACD avg | 0.079905 | 0.092809 | 0.074488 | 0.080954 | 0.089617 | 0.083600 | 0.073592
DF9 | MACD rank | 3 | 7 | 2 | 4 | 6 | 5 | 1
DF10 | MIGD avg | 0.355890 | 0.388526 | 0.426804 | 0.394865 | 0.387404 | 0.353523 | 0.311121
DF10 | MIGD rank | 3 | 5 | 7 | 6 | 4 | 2 | 1
DF10 | MHVD avg | 0.239955 | 0.235306 | 0.308834 | 0.249693 | 0.228401 | 0.212611 | 0.202771
DF10 | MHVD rank | 5 | 4 | 7 | 6 | 3 | 2 | 1
DF10 | MACD avg | 0.021100 | 0.015084 | 0.023202 | 0.018040 | 0.018209 | 0.018609 | 0.016982
DF10 | MACD rank | 6 | 1 | 7 | 3 | 4 | 5 | 2
DF11 | MIGD avg | 11.79304 | 11.82225 | 11.85004 | 11.83362 | 11.80916 | 11.80096 | 11.84234
DF11 | MIGD rank | 1 | 4 | 7 | 5 | 3 | 2 | 6
DF11 | MHVD avg | 0.721769 | 0.716260 | 0.747332 | 0.721126 | 0.720429 | 0.715592 | 0.718209
DF11 | MHVD rank | 6 | 2 | 7 | 5 | 4 | 1 | 3
DF11 | MACD avg | 0.010971 | 0.012517 | 0.010849 | 0.011759 | 0.012525 | 0.011470 | 0.012211
DF11 | MACD rank | 2 | 6 | 1 | 4 | 7 | 3 | 5
DF12 | MIGD avg | 0.529298 | 0.519937 | 0.554105 | 0.524483 | 0.510533 | 0.585771 | 0.549777
DF12 | MIGD rank | 4 | 2 | 6 | 3 | 1 | 7 | 5
DF12 | MHVD avg | 0.204682 | 0.195807 | 0.374341 | 0.207880 | 0.191020 | 0.195776 | 0.205893
DF12 | MHVD rank | 4 | 3 | 7 | 6 | 1 | 2 | 5
DF12 | MACD avg | 0.015886 | 0.017159 | 0.016816 | 0.018120 | 0.017619 | 0.015821 | 0.015708
DF12 | MACD rank | 3 | 5 | 4 | 7 | 6 | 2 | 1
DF13 | MIGD avg | 0.162425 | 0.146274 | 0.148225 | 0.172216 | 0.162629 | 0.156787 | 0.143541
DF13 | MIGD rank | 5 | 2 | 3 | 7 | 6 | 4 | 1
DF13 | MHVD avg | 1.143186 | 1.096158 | 1.341601 | 1.262261 | 1.286514 | 1.262658 | 0.905825
DF13 | MHVD rank | 3 | 2 | 7 | 4 | 6 | 5 | 1
DF13 | MACD avg | 0.015787 | 0.015404 | 0.016102 | 0.016372 | 0.015820 | 0.015770 | 0.014676
DF13 | MACD rank | 4 | 2 | 6 | 7 | 5 | 3 | 1
DF14 | MIGD avg | 0.324704 | 0.368483 | 0.425004 | 0.506701 | 0.432308 | 0.480623 | 0.286369
DF14 | MIGD rank | 2 | 3 | 4 | 7 | 5 | 6 | 1
DF14 | MHVD avg | 0.276849 | 0.294728 | 0.277859 | 0.309170 | 0.310194 | 0.317478 | 0.301353
DF14 | MHVD rank | 1 | 3 | 2 | 5 | 6 | 7 | 4
DF14 | MACD avg | 0.028267 | 0.029866 | 0.025451 | 0.029066 | 0.027354 | 0.028219 | 0.024529
DF14 | MACD rank | 5 | 7 | 2 | 6 | 3 | 4 | 1
Count (rank = 1) | MIGD | 1 | 1 | 3 | 0 | 2 | 1 | 6
Count (rank = 1) | MHVD | 1 | 1 | 3 | 0 | 3 | 1 | 5
Count (rank = 1) | MACD | 0 | 1 | 4 | 1 | 0 | 1 | 7
Count (rank = 1) | Total | 2 | 3 | 10 | 1 | 5 | 3 | 18
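The MIGD values reported in Tables 4 and 5 average the IGD indicator over all environmental changes. A minimal sketch of how such values are typically computed follows (the function names `igd` and `migd` are our own; the paper follows the modified distance calculation of [16], which this sketch simplifies to plain Euclidean IGD):

```python
import numpy as np

def igd(true_pf, approx_pf):
    """Inverted Generational Distance: mean distance from each reference
    point sampled on the true PF to its nearest obtained solution.
    Lower is better."""
    true_pf = np.asarray(true_pf, dtype=float)
    approx_pf = np.asarray(approx_pf, dtype=float)
    # Pairwise distances: (n_true, n_approx)
    d = np.linalg.norm(true_pf[:, None, :] - approx_pf[None, :, :], axis=2)
    return d.min(axis=1).mean()

def migd(true_pfs, approx_pfs):
    """MIGD: IGD averaged over the sequence of environments (time steps)."""
    return float(np.mean([igd(tp, ap) for tp, ap in zip(true_pfs, approx_pfs)]))
```

MHVD is computed analogously by averaging, over the environments, the difference between the hypervolume of the true PF and that of the obtained front.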
