Article

Decision-Maker’s Preference-Driven Dynamic Multi-Objective Optimization

by Adekunle Rotimi Adekoya 1,2 and Mardé Helbig 2,3,*
1 Computer Science Division, Stellenbosch University, Stellenbosch 7600, South Africa
2 Department of Computer Science, University of Pretoria, Hatfield 0002, South Africa
3 School of ICT, Griffith University, Southport 4215, Australia
* Author to whom correspondence should be addressed.
Algorithms 2023, 16(11), 504; https://doi.org/10.3390/a16110504
Submission received: 5 September 2023 / Revised: 22 October 2023 / Accepted: 27 October 2023 / Published: 30 October 2023
(This article belongs to the Special Issue Optimization Algorithms for Decision Support Systems)

Abstract: Dynamic multi-objective optimization problems (DMOPs) are optimization problems where elements of the problems, such as the objective functions and/or constraints, change with time. These problems are characterized by two or more objective functions, where at least two objective functions are in conflict with one another. When solving real-world problems, incorporating human decision-makers (DMs)' preferences or expert knowledge into the optimization process, and thereby restricting the search to a specific region of the Pareto-optimal Front (POF), may result in more preferred or suitable solutions. This study proposes approaches that enable DMs to influence the search process with their preferences by reformulating the optimization problems as constrained problems. The resulting constrained problems are solved using various constraint handling approaches, such as the penalization of infeasible solutions and the restriction of the search to the feasible region of the search space. The proposed constraint handling approaches are compared by incorporating them into a differential evolution (DE) algorithm and measuring the algorithm's performance using both standard performance measures for dynamic multi-objective optimization (DMOO) and newly proposed measures for constrained DMOPs. The new measures indicate how well an algorithm was able to find solutions in the objective space that best reflect the DM's preferences and the Pareto-optimality goal of dynamic multi-objective optimization algorithms (DMOAs). The results indicate that the constraint handling approaches are effective in finding Pareto-optimal solutions that satisfy the preference constraints of a DM.

1. Introduction

Dynamic multi-objective optimization problems (DMOPs) have multiple goals or objectives, and the objectives and/or constraints change over time [1,2,3,4]. These objectives are usually in conflict with one another, making the process of finding a single optimal solution very difficult [5]. Finding a set of optimal trade-off solutions is therefore the norm, with the Pareto-dominance relation [6] being used to compare the quality of the trade-off solutions. The set of optimal trade-off solutions in the decision space is called the Pareto-optimal Set (POS), while in the objective space, the set is referred to as the Pareto-optimal Front (POF) or Pareto Frontier [7].
DMOPs occur frequently in the real world in a diverse range of domains, such as structural engineering [8]; plant control and scheduling [9,10,11,12,13]; and process optimization in manufacturing, for example, material carbonization [14], copper removal in hydrometallurgy [15], and balancing disassembly lines [16].
However, the set of trade-off solutions may be overwhelming in number; a subset that better reflects the decision-maker (DM)'s preferences may be required [17,18]. Some research has been conducted on incorporating a DM's preferences for static multi-objective optimization problems (MOPs) [9,19,20,21,22,23,24,25,26,27]. Most of these studies used a priori, interactive, and a posteriori approaches. To the best of the authors' knowledge, however, a priori and interactive preference incorporation methods have not been applied to DMOPs. A posteriori approaches could have been applied when real-world problems were solved and a set of solutions was provided to the real-world DM.
Introducing DM preferences, however, leads to a reformulation of DMOPs as constrained problems, which are then solved by dynamic multi-objective optimization algorithms (DMOAs) using a variation of a penalty function [28,29,30,31,32]. The constraints imposed on DMOPs as a result of DM preferences are defined in the objective space; thus, the constraints partition the objective space into feasible and infeasible regions.
The contributions of this study are
  • A preference incorporation method adapted for DMOPs that is partly a priori and partly interactive and enables a DM to specify their preferences. The a priori incorporation of DM preferences occurs through a procedure, named bootstrap. The interactive incorporation of preferences is employed whenever a change occurs in the dynamic environment such that the DM preference set may be significantly affected.
  • A bounding box approach (refer to Equation (2) in Section 2) to specify a DM’s preferences in the dynamic multi-objective optimization (DMOO) search process. The proposed bounding box, unlike the proposal in [33], is employed in the context of DMOPs, thus making it the first of its kind.
  • New approaches that can drive a DMOA’s search constrained by the DM’s preferences, as well as a comparative analysis of the constraint managing approaches incorporated into a DMOA. The proposed constraint managing approaches are fundamentally different from one another in terms of how they penalize solutions that violate a DM’s preferences.
  • New performance measures that measure how well a found solution adheres to the preferences of a DM. In this article, a solution will henceforth be referred to as a decision.
The base DMOA used in this study is a hybrid form of differential evolution (DE) [34] that combines non-dominated sorting [35] with vector-evaluation schemes for selecting target vectors and the vectors that survive to the next generation during the optimization process, since this hybrid has been shown to perform well in solving DMOPs [36]. The proposed constraint managing approaches are incorporated into the same DMOA (the hybrid DE) to ensure a fair comparison of their performance. Their performance is measured using current (traditional) DMOO measures [1,37,38] and the new measures proposed in this article. It should be noted, however, that the constraint managing approaches and the preference incorporation approaches can be incorporated into any DMOA.
The rest of the article is organized as follows: Section 2 presents background concepts and theories required to understand the rest of the article. The experimental setup, including the algorithmic setup, benchmark functions, performance measures, and statistical analysis employed in the study, is discussed in Section 3. Section 4 presents and discusses the results of the experiments. Finally, conclusions based on the results obtained from the experiments are presented in Section 5.

2. Background

This section discusses the key concepts that underlie the proposals in this study. Section 2.1 presents the mathematical formulation of the DMOPs addressed in this study, the mathematics of the proposed bounding box approach, and the limiting behaviours of the penalty function employed in this study. Section 2.2 discusses the mathematics required for the new performance measures proposed in this article. Lastly, Section 2.3 discusses the core DMOA on which the DMOA used in this study is based.

2.1. Bounding Box Mathematics

Let a composite function $F$ be defined as follows:
$$F : \Omega_x \times \Omega_t \to O \qquad (1)$$
where $\Omega_x = \mathbb{R}^n$, with $n \geq 2$, refers to the decision space, $\Omega_t \subseteq \mathbb{R}$ refers to the time space, $t \in \Omega_t$ is a real-valued time instance and $t = \frac{1}{n_t}\left\lfloor \frac{\tau}{\tau_t} \right\rfloor$, with $n_t$ referring to the severity of change, $\tau$ referring to the iteration counter, and $\tau_t$ referring to the frequency of change.
Let the objective space, $O$, be defined as [39]
$$O = \begin{cases} \mathbb{R}^2 & \text{(e.g., FDA1 [10], dMOP2 [39])} \\ \mathbb{R}^3 & \text{(e.g., FDA5 [10])} \end{cases}$$
Then, a decomposition of $F$ follows:
$$F(\mathbf{x}, t) = \begin{cases} (f_1, f_2) & \text{(e.g., FDA1 [10], dMOP2 [39])} \\ (f_1, f_2, f_3) & \text{(e.g., FDA5 [10])} \end{cases}$$
Each objective function $f_i$ is defined as
$$f_i : \Omega_x \times \Omega_t \to \mathbb{R}, \quad i = 1, 2, 3$$
Let a DM's preference set be defined as
$$\mathrm{Box}(\mathbf{z}, \mathbf{p}) = \{ \mathbf{z} \in O \mid d(\mathbf{z}, \mathbf{p}) \leq r \}, \quad \mathbf{p} \in O, \; r \in \mathbb{R} \qquad (2)$$
where $d$ is the Euclidean distance measure, $\mathbf{p}$ is the center of the box formed by the points in this set, $\mathrm{Box}(\mathbf{z}, \mathbf{p})$, $r$ is the radius of the box, $O$ is the objective space as defined in Equation (1), and the values of $\mathbf{p}$ and $r$ are interactively selected by the DM.
Let a penalty function and its limiting behaviours be defined as
$$\mathrm{penalty}(\mathbf{z}_k \in O, \lambda) = \begin{cases} 0, & \text{if } d(\mathbf{z}_k, \mathbf{p}) \leq r \\ \lambda\,(d(\mathbf{z}_k, \mathbf{p}) - r), & \text{if } d(\mathbf{z}_k, \mathbf{p}) > r \end{cases} \qquad (3)$$
$$\lim_{\lambda \to c} \mathrm{penalty}(\mathbf{z}_k \in O, \lambda) = \begin{cases} 0, & \text{if } d(\mathbf{z}_k, \mathbf{p}) \leq r \\ c\,(d(\mathbf{z}_k, \mathbf{p}) - r), & \text{if } d(\mathbf{z}_k, \mathbf{p}) > r \end{cases}$$
$$\lim_{\lambda \to \mathrm{realmax}} \mathrm{penalty}(\mathbf{z}_k \in O, \lambda) = \begin{cases} 0, & \text{if } d(\mathbf{z}_k, \mathbf{p}) \leq r \\ \mathrm{realmax}, & \text{if } d(\mathbf{z}_k, \mathbf{p}) > r \end{cases}$$
$$\lim_{\lambda \to \mathrm{realmax}} \mathbf{z}_k + \mathrm{penalty}(\mathbf{z}_k \in O, \lambda) = \begin{cases} \mathbf{z}_k, & \text{if } d(\mathbf{z}_k, \mathbf{p}) \leq r \\ \mathbf{I}_1 \cdot \mathrm{realmax}, & \text{if } d(\mathbf{z}_k, \mathbf{p}) > r \end{cases}$$
$$\lim_{\lambda \to c} \mathbf{z}_k + \mathrm{penalty}(\mathbf{z}_k \in O, \lambda) = \begin{cases} \mathbf{z}_k, & \text{if } d(\mathbf{z}_k, \mathbf{p}) \leq r \\ \mathbf{z}_k + \mathbf{I}_1 \cdot c\,(d(\mathbf{z}_k, \mathbf{p}) - r), & \text{if } d(\mathbf{z}_k, \mathbf{p}) > r \end{cases}$$
where $\lambda$ ($\geq 0$) is a penalty control parameter whose value is determined by each algorithm, and $\mathbf{p}$, $r$, and $d$ are defined as in Equation (2).
Then, a penalized outcome, $\mathbf{z}_k^* \in O$, is defined as $\mathbf{z}_k^* = \mathbf{z}_k + \mathbf{I}_1 \cdot \mathrm{penalty}(\mathbf{z}_k, \lambda)$, where $\mathbf{z}_k$ is a non-penalized outcome in the objective space, $\mathbf{z}_k = F(\mathbf{x}_k, t)$, $\mathbf{x}_k \in \Omega_x$, $F$ is as defined in Equation (1), and $\mathbf{I}_1$ is an all-ones vector in the objective space (e.g., $(1, 1) \in \mathbb{R}^2$).
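To make the bounding box and penalty definitions concrete, the following Python sketch (not part of the original study; all names are illustrative) implements the membership test of Equation (2) and the penalized outcome $\mathbf{z}_k^* = \mathbf{z}_k + \mathbf{I}_1 \cdot \mathrm{penalty}(\mathbf{z}_k, \lambda)$ that follows from Equation (3):
```python
import numpy as np

def in_box(z, p, r):
    """Membership test of Equation (2): Euclidean distance from z to the center p is at most r."""
    return np.linalg.norm(np.asarray(z, dtype=float) - np.asarray(p, dtype=float)) <= r

def penalized_outcome(z, p, r, lam):
    """Penalized outcome z* = z + I_1 * penalty(z, lambda), with the penalty of Equation (3)."""
    z = np.asarray(z, dtype=float)
    d = np.linalg.norm(z - np.asarray(p, dtype=float)) - r  # violation of the preference set
    if d <= 0:
        return z                                  # non-violating outcome: no penalty
    return z + np.ones_like(z) * lam * d          # proportionate penalty added to every objective

# Usage: a preference sphere centered at (0.5, 0.5) with radius 0.1 and lambda = 100.
print(in_box([0.52, 0.50], p=[0.5, 0.5], r=0.1))            # True
print(penalized_outcome([0.9, 0.3], p=[0.5, 0.5], r=0.1, lam=100.0))
```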

2.2. Mathematics for Newly Proposed Performance Measures

This section discusses the mathematics required for two newly proposed performance measures. Section 2.2.1 discusses a measure that calculates the deviation of the violating decisions. The calculation of the spread of non-violating decisions that are found in the bounding box is discussed in Section 2.2.2.

2.2.1. Deviation of Violating Decisions

Solution space vectors whose objective values fall outside the preference set are referred to as violating decisions, since they violate the DM's preferences. Depending on the control parameters used in the implementation of the penalty function of the proposed algorithms, violating decisions may occasionally find their way into the archive, especially in situations where all the non-dominated solutions violate the DM's preferences and no non-violating decisions are found. This is, however, a rare scenario: non-violating decisions, if they are present in the archive, are very likely to dominate the violating (and therefore penalized) decisions in the Pareto-dominance sense. When violating decisions do find their way into the archive, a measure of the proximity of these violating decisions to the preferred bounding box is required; the smaller the total proximity, the better the violating decisions are. This section presents the mathematics underlying the calculation of the total proximity/deviation of the violating decisions.
Let $\mathbf{p}$, $r$, and the distance measure $d(\mathbf{z}, \mathbf{p})$ be as defined in Section 2.1, and let a set of violating decisions, $Z$, be defined as follows:
$$Z = \{ \mathbf{z}_k \in O \mid d(\mathbf{z}_k, \mathbf{p}) > r \}, \quad k = 1, \ldots, |Z|$$
Let the cardinality, $N$, of $Z$ be defined as
$$N = |Z|$$
Let the deviation of $\mathbf{z}_k \in Z$ be defined as
$$d_k = d(\mathbf{z}_k, \mathbf{p}) - r \quad (d_k > 0)$$
Then, the total deviation of all elements in $Z$ is
$$\mathrm{dVD} = \frac{\sum_{k=1}^{N} (1 + d_k)^2}{N}$$
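A minimal Python sketch of the dVD calculation, assuming the reconstructed formula above (averaging $(1 + d_k)^2$ over the $N$ violating decisions); the function name and inputs are illustrative only:
```python
import numpy as np

def total_deviation_dVD(archive_outcomes, p, r):
    """dVD: total deviation of archive outcomes that lie outside the DM's preference sphere."""
    p = np.asarray(p, dtype=float)
    deviations = [np.linalg.norm(np.asarray(z, dtype=float) - p) - r for z in archive_outcomes]
    deviations = [d for d in deviations if d > 0]        # keep violating decisions only
    if not deviations:
        return 0.0                                        # no violating decisions in the archive
    return sum((1.0 + d) ** 2 for d in deviations) / len(deviations)

# Usage: one outcome inside and one outside the sphere Box(p = (0.5, 0.5), r = 0.1).
print(total_deviation_dVD([[0.52, 0.50], [0.9, 0.3]], p=[0.5, 0.5], r=0.1))
```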

2.2.2. Spread of Non-Violating Decisions

The spread of non-violating decisions is one of the four measures proposed in this article. This measure estimates how well spread out the preferred decisions are within the bounding box located in the objective space. The greater the value of this measure, the better the performance of an algorithm. The calculation of this measure is presented in Algorithm 1.
Algorithm 1 Spread of Non-violating decisions
1: procedure SpreadEstimator(outcomes)
    ▹ outcomes: objective vectors preferred by the DM
2:     N ← count(outcomes) ▹ get the count of outcomes
3:     if N ≤ 1 then
4:         return 0 ▹ one or zero outcomes, spread is zero
5:     if N == 2 then ▹ two outcomes
6:         return norm2(outcomes(2) − outcomes(1)) ▹ return spread between the two values
7:     dtot ← 0 ▹ more than two outcomes, calculation required: initialize total spread, dtot
8:     firstNode ← outcomes(1) ▹ get a node
9:     currentNode ← firstNode ▹ set current node
10:    while unProcessedNodes(outcomes) > 1 do ▹ process each outcome
11:        MarkNodeAsProcessed(currentNode) ▹ mark outcome as processed
12:        nearestNode ← getNearestNode(currentNode, outcomes) ▹ find nearest node to the outcome being processed
13:        dist ← norm2(currentNode − nearestNode) ▹ calculate distance between these two solutions
14:        dtot ← dtot + dist ▹ add their distance to the total distance
15:        currentNode ← nearestNode
16:    dist ← norm2(currentNode − firstNode) ▹ finally, calculate distance to the first solution
17:    dtot ← dtot + dist ▹ add the last distance to the total distance
18:    return dtot ▹ return total distance
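A Python sketch in the spirit of Algorithm 1, chaining each outcome to its nearest unvisited neighbour and closing the chain back to the first outcome (a simplified reading of the pseudocode; names are illustrative):
```python
import numpy as np

def spread_of_non_violating(outcomes):
    """sNVD estimate: length of a nearest-neighbour chain through the preferred outcomes."""
    pts = [np.asarray(o, dtype=float) for o in outcomes]
    n = len(pts)
    if n <= 1:
        return 0.0                                    # one or zero outcomes: spread is zero
    if n == 2:
        return float(np.linalg.norm(pts[1] - pts[0]))
    unvisited = list(range(1, n))                     # the first outcome starts the chain
    current, first = pts[0], pts[0]
    dtot = 0.0
    while unvisited:
        j = min(unvisited, key=lambda i: np.linalg.norm(current - pts[i]))  # nearest unvisited outcome
        dtot += float(np.linalg.norm(current - pts[j]))
        current = pts[j]
        unvisited.remove(j)
    return dtot + float(np.linalg.norm(current - first))  # close the chain back to the first outcome

# Usage: three preferred outcomes inside a bounding box.
print(spread_of_non_violating([[0.50, 0.50], [0.52, 0.49], [0.48, 0.53]]))
```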

2.3. Core Dynamic Multi-Objective Optimization Algorithm

The core DMOA used in this study is presented in Algorithm 2. The algorithm starts with a set of randomly generated solutions, after initializing its run-time parameters, such as the population size, maximum archive size, maximum number of iterations, etc. The non-dominated solutions are added to the archive. A loop is performed until the number of iterations exceeds the maximum number of iterations. The non-dominated solutions, which are found at the end of the loop’s execution, constitute the final solutions to the associated optimization problem. The algorithm uses sentry solutions to check whether a change in the environment has occurred.
Algorithm 2 Dynamic Multi-objective Optimization Algorithm
1: procedure DMOA(freq, severity, maxiteration, dMOP)
2:     Set population size, N
3:     Set archive max size, SizeArchive
4:     Initialize the iteration counter, iteration ← 0
5:     Initialize time, t ← 0
6:     Initialize( P t , freq , severity , dMOP , t )▹ initialize population of solutions, P t
7:     AssignNonDominatedToArchive(P t , dMOP, t)▹ initialize archive
8:     while iteration ≤ maxiteration do▹ check if stopping condition has been reached
9:         t ← (1/severity) · ⌊iteration/freq⌋ ▹ calculate the current time
10:         Optimizer(P t , dMOP, t)▹ perform the search optimization
11:         Pick sentry solutions▹ select sentry solutions to check for change
12:         if ENV changes(P t , dMOP, t) then▹ check for change in environment
13:            ProcessChange(P t , freq, severity, dMOP, t)▹ respond to change
14:         iteration ← iteration + 1 ▹ increase iteration count

3. Experimental Setup

This section discusses the experimental setup used for this study. Section 3.1 discusses the algorithm setup. The DM preferences are discussed in Section 3.2. Section 3.3 discusses the benchmark functions, and the performance measures are highlighted in Section 3.4. The statistical analysis approach is discussed in Section 3.5.

3.1. Algorithm Setup

The approach that was followed to incorporate the DM’s preference into the search process of the DMOA is discussed in Section 3.1.1.

3.1.1. Decision-Maker’s Preference Incorporation

The different procedures and how they are used with a DMOA for the preference-driven search process are presented in Figure 1. Before the normal run of the DMOA starts, the a priori preference incorporation procedure is used to define the DM's preference. During the run of the DMOA, any of the constraint managing approaches can be used. If a small environment change occurs, the DMOA's change response approach is executed during the normal DMOA run. However, if the change is large, requiring a change in the bounding box placement, the interactive preference incorporation procedure is first completed before the DMOA's change response approach is executed during the normal DMOA run.
A Priori Preference Incorporation: A single run of the algorithm is executed and a series of POFs is presented to the DM. The DM selects one of these POFs and then selects one of the points on the POF, which becomes their x_p and p, i.e., the preferred decision/solution vector and the preferred outcome, respectively. This preference, together with the radius of the bounding box specified by the DM during the bootstrap procedure (refer to Algorithm 3), is used to drive the DMOA's search to optimize the DMOPs under the constraints of the DM's preferences. The time complexity of the bootstrap procedure is similar to the time complexity of the DMOA that is used to produce the POSs.
Algorithm 3 Bootstrap Procedure
1: procedure BootStrap(freq, severity, iteration, F)
2:     Call DMOA(freq, severity, iteration, F)
   ▹ DMOA returns { POS t k } , k = 1 , , n
   ▹ where k is the kth environment change
3:      i DMChooseIn ( 1 , , n ) ▹ DM indicates their preferred POS
4:      x p DMChooseIn ( P O S t i ) ▹ DM indicates their preferred solution
5:      p F ( x p , t ) ▹ DM’s preference is formulated
6:     DM Choose box radius, r ← random()▹ DM indicates their preferred boundary box size
7:     return (x p : p : r)
Interactive Preference Incorporation: A significant change in the environment may occur where the resulting POF shifts in such a way that the DM's preference, p, is no longer part of the new POF. In this scenario, the DM interactively redefines the position of the bounding box, ensuring that their preference lies on the new POF. A few scenarios may emerge when this shift of the POF occurs. The initial preferred outcome, p, may no longer lie on the new POF, but the functional value of the corresponding decision vector, x_p, may still lie on the new POF. The second possibility is that neither p nor the functional value of x_p lies on the new POF. In both cases, a new bounding box position needs to be defined. The interactive redefinition of the bounding box position is presented in Algorithm 4. The time complexity of the interactive preference incorporation procedure is a constant value, i.e., a low time complexity.
Algorithm 4 Interactive Incorporation of Preferences
1: procedure RepositionBoundingBox(F, x p , p, r, t)
    ▹ F: multi-objective function to be evaluated
    ▹ x p : DM preferred decision vector
    ▹ p: DM preferred outcome as defined in Equation (2), page 3
    ▹ r: DM preferred box radius
    ▹ Archive: POS t
2:      POF t F ( POS t , t ) ▹ POF is corresponding objective values of POS
3:     if p POF t  then▹ DM preferred outcome still lies on POF
4:          return (x p :p:r)
5:     if F(x p ,t) ∈ POF t  then▹ DM’s preferred decision lies on POF, preferred outcome does not
6:         Reposition center of box, p ← F(x p , t)▹ Automatically reposition center of box
7:          return (x p :p:r)
8:      x p DMChooseIn ( P O S t ) ▹ DM selects a new position for x p and p
9:     Reposition center of box, p F ( x p , t ) ▹ Reposition center of box based on DM’s input
10:      return ( x p : p : r )
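A hedged Python sketch of the repositioning logic in Algorithm 4: keep the box, re-center it on F(x_p, t), or fall back to a new DM choice; dm_choose_in is a stand-in for the interactive step, and all names are illustrative:
```python
import numpy as np

def dm_choose_in(pos_t):
    """Stand-in for the interactive DM choice; here simply the first POS member."""
    return pos_t[0]

def reposition_bounding_box(F, pos_t, x_p, p, r, t, tol=1e-9):
    """Sketch of Algorithm 4: decide how the DM's bounding box is repositioned after a change."""
    pof_t = [np.asarray(F(x, t), dtype=float) for x in pos_t]     # objective values of the current POS
    if any(np.allclose(p, z, atol=tol) for z in pof_t):
        return x_p, np.asarray(p, dtype=float), r                 # preferred outcome still lies on the POF
    z_xp = np.asarray(F(x_p, t), dtype=float)
    if any(np.allclose(z_xp, z, atol=tol) for z in pof_t):
        return x_p, z_xp, r                                       # re-center the box on F(x_p, t)
    x_p = dm_choose_in(pos_t)                                     # DM selects a new preferred decision
    return x_p, np.asarray(F(x_p, t), dtype=float), r             # re-center the box on the new choice

# Usage on a toy problem F(x, t) = (x, 1 - x + 0.1 t): p no longer on the POF, but F(x_p, t) is.
F = lambda x, t: (x, 1.0 - x + 0.1 * t)
print(reposition_bounding_box(F, pos_t=[0.2, 0.5, 0.8], x_p=0.5, p=[0.5, 0.5], r=0.1, t=1.0))
```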

3.1.2. Algorithms

The following three approaches employ a penalty function (refer to Equation (3), Section 2) to penalize violating decisions that do not satisfy the DM's preferences. Each of these approaches was incorporated into the hybrid DE, and they are the three constraint managing approaches evaluated in this study:
  • Proportionate Penalty: With this approach, the penalty is proportional to the violation, and violating decisions are penalized during function evaluation. Algorithm 5 presents this approach, referred to as PPA for the rest of the article.
    Algorithm 5 Proportionate Penalty Algorithm
    1: procedure FuncEvaluate(F, x, t)
        ▹ F: multi-objective function to be evaluated
        ▹ x: decision vector
        ▹ p: as defined in Equation (2), page 3
        ▹ r: as defined in Equation (2), page 3
        ▹ λ : as defined in Equation (3); a random number between 100 and 1000
        ▹ I 1 : as defined in Equation (2), page 3
    2:     z ← F(x,t)▹ calculate objective value of x
    3:     d ← norm2(z − p) − r ▹ calculate violation of z
    4:     if d ≤ 0 then
    5:         return z ▹ x is a non-violating decision, no penalty applied
        ▹ x is a preference violating decision, proceed to penalize it for violation
    6:     penalty ← λ · d ▹ calculate penalty
    7:     penalty ← I 1 · penalty▹ vectorize penalty
    8:     z ← z + penalty▹ impose proportionate penalty to objective value of x
    9:     return z▹ return new penalized objective value of x
  • Death Penalty: The maximum (death) penalty is imposed on violating decisions during function evaluation. The same penalty, death, is administered to a decision irrespective of the magnitude of that decision's violation. With the maximum penalty, it becomes very unlikely that violating decisions will find their way into the archive, because they will be dominated by non-violating decisions. Violating decisions are computationally eliminated during the search process, and the optimization process is driven towards a region of the search space dominated by non-violating decisions. The Death Penalty Algorithm, referred to as DPA in the rest of the article, is presented in Algorithm 6.
    Algorithm 6 Death Penalty
    1: procedure FuncEvaluate(F, x, t)
        ▹ F: multi-objective function to be evaluated
        ▹ x: decision vector
        ▹ p: as defined in Equation (2), page 3
        ▹ r: as defined in Equation (2), page 3
        ▹ I 1 : as defined in Equation (2), page 3
        ▹ realmax: maximum real value on a machine
    2:     z ← F(x,t)▹ calculate objective value of x
    3:     d ← norm2(z − p) − r ▹ calculate violation of z
    4:     if d ≤ 0 then
    5:         return z ▹ x is a non-violating decision, no penalty applied
    ▹ x is a preference violating decision, proceed to penalize it for violation
    6:     penalty ← realmax ▹ calculate penalty
    7:     penalty ← I 1 · penalty ▹ vectorize penalty
    8:     z ← penalty ▹ impose death/max penalty
    9:     return z ▹ return new penalized objective value of x
  • Restrict Search To Feasible Region: Feasibility is preserved by starting the search within the preferred bounding box and employing the death penalty to prevent preference violating decisions from entering the archive. This approach restricts the search to the feasible region, unlike [40], and it improves the exploration capability of the algorithm. The search starts from preferred decisions during the initialization of the population of decisions: a pool of preferred decisions is aggregated from the DM preference and the current decisions in the archive. Then, a loop is performed in which nearly identical clones of the pooled preferred decisions are created using polynomial mutation [41]. These clones constitute the new population from which the search starts, and some of the non-dominated decisions in the new population are added to the archive. Algorithm 7 presents the Restrict Search To Feasible Region Algorithm, referred to as RSTFRA in the rest of the article; a sketch of this initialization is given after the algorithm listing.
    Algorithm 7 Restrict Search to Feasible Region
    1: procedure Initialize( P t , f r e q , s e v e r i t y , F , t )
        ▹ x p : DM preferred decision vector
        ▹ archive: POS
        ▹ F: multi-objective function to be evaluated
        ▹ N: population size, fixed for this study
    2:     pool ← [x p ; archive] ▹ pooled preferences
    3:     i ← 1 ▹ initialize counter
    4:     while i ≤ N do
    5:         iNumberAttempts ← 1
    6:         while (iNumberAttempts ≤ 100) && (!isPreferredDecision(solution, F, t)) do ▹ searching for a preferred decision
    7:            solution ← randomlyChooseIn(pool) ▹ randomly select solution from pool
    8:            solution ← polynomial_mutate(solution) ▹ apply mutation to solution
    9:            iNumberAttempts ← iNumberAttempts + 1 ▹ increment number of attempts
    10:         addSolutionToPopulation(P t , solution) ▹ add mutated solution to the population
    11:         i ← i + 1
    12:     AssignNonDominatedToArchive(P t , F, t ) ▹ add non-dominated decisions to archive
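As referenced above, a minimal Python sketch of the feasible-region initialization of Algorithm 7; a simple bounded perturbation stands in for polynomial mutation [41], and the pool members, preference test, and parameter values are illustrative assumptions:
```python
import random

def is_preferred(x, F, t, p, r):
    """Check whether the objective vector of x lies inside the DM's preference sphere."""
    z = F(x, t)
    return sum((zi - pi) ** 2 for zi, pi in zip(z, p)) ** 0.5 <= r

def initialize_feasible(x_p, archive, F, t, p, r, pop_size=20, max_attempts=100, step=0.05):
    """Sketch of Algorithm 7: build a population of near-clones of pooled preferred decisions."""
    pool = [list(x_p)] + [list(a) for a in archive]         # pooled preferred decisions
    population = []
    for _ in range(pop_size):
        solution = None
        for _ in range(max_attempts):
            candidate = list(random.choice(pool))            # randomly select a pooled decision
            # bounded perturbation as a stand-in for polynomial mutation
            candidate = [xi + random.uniform(-step, step) for xi in candidate]
            if is_preferred(candidate, F, t, p, r):
                solution = candidate
                break
        population.append(solution if solution is not None else list(random.choice(pool)))
    return population

# Usage on a toy problem F(x, t) = (x_1, 1 - x_1) with a preference sphere around (0.5, 0.5).
F = lambda x, t: (x[0], 1.0 - x[0])
print(len(initialize_feasible([0.5, 0.2], [[0.48, 0.3]], F, t=0.0, p=[0.5, 0.5], r=0.1)))
```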
The time complexity of the constraint managing approaches PPA and DPA is a constant value. The time complexity of RSTFRA is O(m), due to adding the non-dominated solutions to the archive of size m.

3.1.3. Differential Evolution Algorithm Control Parameters

The following settings were used for the DE algorithm in this study:
  • The base algorithm (refer to Algorithm 8) is characterized as DE/best/1/bin.
  • To generate a trial vector from a parent vector during the mutation phase of the algorithm, the best vector in the adjacent hypercube or sub-population of the parent vector is selected as the target vector. The number of hypercubes employed by the algorithm is the same as the number of objective functions in the underlying DMOP.
  • Two randomly selected vectors from the parent vector’s hypercube are used to form a difference vector.
  • Binary crossover [42] is used due to its viability as a crossover method in DE algorithms.
  • The scaling factor, β, amplifies the effects of the difference vector. It has been shown that a larger β increases the probability of escaping local minima, but can lead to premature convergence. On the other hand, a smaller value results in smaller mutation step sizes, slowing down convergence but facilitating better exploitation of the search space [43,44]. This leads to a typical choice for β in the range (0.4, 0.95) [43,44]. Therefore, in this study, the algorithm randomly chooses β ∈ (0.4, 1). The recombination probability is p_r = 0.8, since DE convergence is insensitive to the control parameters [42,45] and a large value of p_r speeds up convergence [43,45,46].
Algorithm 8 Differential Evolution Algorithm
1: procedure Optimizer(P, F, t)
    ▹ β : scaling factor set per algorithm
    ▹ p r : recombination prob set per algorithm
    ▹ maxgen (≥ 1): number of function evaluations set per algorithm
    ▹ P: current population of vectors
    ▹ F: multi-objective function to be optimized
    ▹ t: current time
2:     gen ← 1 ▹ set the generation counter
3:     P_gen ← P ▹ initialize current population
4:     V ← ∅ ▹ initialize set of vectors
5:     while gen ≤ maxgen do ▹ check if stopping condition has been reached
6:         while moreUnprocessed(v ∈ P_gen) do ▹ process all individuals of the population
7:             v′ ← getTrialVector(β, v, P_gen, F, t) ▹ calculate trial vector
8:             v″ ← getChildVector(p_r, v′, v, F, t) ▹ produce child vector
9:             V ← V ∪ {v′, v″} ▹ add trial and child vectors to set of vectors
10:            markAsProcessed(v ∈ P_gen)
11:        P_gen ← getNextGenerationVectors(V) ▹ produce next generation
12:        gen ← gen + 1 ▹ increment counter
13:        V ← ∅ ▹ reset set of vectors
14:    AssignNonDominatedToArchive(P_maxgen, F, t) ▹ add non-dominated solutions to archive
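A hedged Python sketch of one DE/best/1/bin variation-and-crossover step as described above; the per-hypercube selection is simplified to the best vector of the given sub-population under a single-objective fitness stand-in, and all names are illustrative:
```python
import random

def de_best_1_bin_child(parent, subpop, fitness, beta=0.7, p_r=0.8):
    """One DE/best/1/bin step: best vector plus a scaled difference of two random vectors,
    followed by binomial (binary) crossover with the parent vector."""
    best = min(subpop, key=fitness)                       # best vector in the parent's sub-population
    r1, r2 = random.sample(subpop, 2)                     # two randomly selected vectors form the difference
    trial = [b + beta * (a - c) for b, a, c in zip(best, r1, r2)]
    j_rand = random.randrange(len(parent))                # ensure at least one component comes from the trial vector
    return [t if (random.random() < p_r or j == j_rand) else x
            for j, (t, x) in enumerate(zip(trial, parent))]

# Usage on 5-dimensional vectors with a sum-of-squares fitness as a stand-in.
pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(10)]
print(de_best_1_bin_child(pop[0], pop, fitness=lambda x: sum(v * v for v in x)))
```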
The time complexity of the static non-dominated sorting genetic algorithm II (NSGA-II) is O(iN²), where i is the number of objective functions and N is the population size [35]. The non-dominated sorting has a time complexity of O(iN²), the crowding distance calculation has a time complexity of O(iN log N), and elitist sorting has a time complexity of O(iN²) [35].
The DE algorithm used in this study uses the same non-dominated sorting and elitist sorting as NSGA-II. In addition, the time complexity of adding a solution to the archive is O(im), where m is the size of the archive. When a change in the environment occurs, the re-evaluation of the archive has a time complexity of O(im²). However, it should be noted that all DMOAs that incorporate a change response would typically re-evaluate the solutions. Therefore, the time complexity of the DE is O(im²). Furthermore, since the base algorithm used in this study is only for demonstration purposes, if these approaches are incorporated into another DMOA, the time complexity will depend on that of the chosen DMOA.

3.2. Decision-Maker’s Preferences

The DM preferences are associated, in order, with each of the eighteen experimental configurations in Section 3.3. For instance, the first experimental preference in Table 1 is associated with the first experimental preference in Table 2, and both are associated with the first experimental configuration in Table 3.

3.3. Benchmark Functions

Three DMOPs with various τ t - n t combinations were used in this study. The experimental configurations used for these benchmarks are presented in Table 3.
The following symbols were used in Table 3: τ_t: frequency of change; n_t: severity of change; c(f(x)): count of function evaluations per iteration; σ(runs): number of runs per configuration; FDA1: type I DMOP (POS is dynamic, POF is static), POF is $f_2 = 1 - \sqrt{f_1}$ and is convex, POS is $x_i = G(t)$ [10,38]; FDA5: type II DMOP (POS and POF are dynamic), for 3 objectives, POF is $f_1^2 + f_2^2 + f_3^2 = (1 + G(t))^2$ and is non-convex, POS is $x_i = G(t)$ [10,38]; dMOP2: type II DMOP, POF changes from convex to concave, POF is $f_2 = 1 - f_1^{H(t)}$, POS is $x_i = G(t)$ [38,39].
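For concreteness, a Python sketch of the FDA1 benchmark as commonly defined in the literature [10]; the exact form below is an assumption, since the article does not restate it, but it matches the properties listed above (type I: the POS, $x_i = G(t)$, moves with time while the POF stays $f_2 = 1 - \sqrt{f_1}$):
```python
import math

def fda1(x, t):
    """FDA1 benchmark (common formulation): x[0] in [0, 1], the remaining variables in [-1, 1]."""
    G = math.sin(0.5 * math.pi * t)                  # optimal value of the tail variables at time t
    f1 = x[0]
    g = 1.0 + sum((xi - G) ** 2 for xi in x[1:])     # distance to the moving POS (x_i = G(t))
    h = 1.0 - math.sqrt(f1 / g)
    return f1, g * h

# Usage: on the POS (x_i = G(t)), the outcome lies on the static POF f2 = 1 - sqrt(f1).
t = (1.0 / 10) * (50 // 10)                          # t = (1/n_t) * floor(tau / tau_t)
G = math.sin(0.5 * math.pi * t)
print(fda1([0.25] + [G] * 9, t))                     # -> (0.25, 0.5)
```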

3.4. Performance Measures

Each of the performance measures was calculated immediately before a change in the environment occurred. This was performed for thirty runs. An average of the values over the thirty runs was then calculated for each measure in each environment.
The following traditional DMOO performance measures were used in this study:
  • Accuracy (acc) measures how accurately a DMOA is able to approximate the true POF of a DMOP [1,37,38]. The lower the value of acc, the better the performance of the DMOA.
  • Stability (stab) quantifies the effect of environment changes on the accuracy measure value [1,37,38,47]. The lower the value of this measure, the better the DMOA’s performance.
  • Hypervolume Ratio (hvr) [48] measures the proportion of the objective space that is covered by a non-dominated set without suffering from the bias of a convex region as seen with the hypervolume measure [49]. The higher the value of this measure, the better the DMOA’s performance.
  • Reactivity (react) [50] measures how long it takes a DMOA to recover after a change in environment occurred, i.e., the length of time it takes to reach a specified accuracy threshold after the change occurred [38]. The lower the value of this measure, the better the DMOA’s performance.
The following newly proposed measures were used in this study:
  • Number of Non-Violating Decisions (nNVD) measures the number of decisions that fall within the DM’s preference set. The higher the value of this measure, the better a DMOA’s performance.
  • Spread of Non-Violating Decisions (sNVD) measures the spread of decisions within the preference set. A high value indicates a good DMOA performance.
  • Number of Violating Decisions (nVD) measures the number of violating decisions in the archive. These are decisions that do not lie within the preference set. The lower the value of this measure, the better the performance of the DMOA.
  • Total Deviation of Violating Decisions (dVD) measures the total deviation from the preference set for all violating decisions in the archive. It is calculated based on the steps that are highlighted in Section 2.2.1. The lower the value of this measure, the better the DMOA’s performance.
The four new performance measures proposed in this article specifically measure the performance of a DMOA with regards to DM preference constraints, and thus facilitate the comparative analysis of DMOAs in the context of a DM’s preferences.
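A small Python sketch of how the preference-based counts can be computed from an archive of objective vectors (nNVD and nVD; sNVD and dVD follow from the sketches in Sections 2.2.1 and 2.2.2); names are illustrative:
```python
import numpy as np

def preference_counts(archive_outcomes, p, r):
    """Count archive outcomes inside (nNVD) and outside (nVD) the DM's preference sphere."""
    p = np.asarray(p, dtype=float)
    inside = sum(1 for z in archive_outcomes
                 if np.linalg.norm(np.asarray(z, dtype=float) - p) <= r)
    return {"nNVD": inside, "nVD": len(archive_outcomes) - inside}

# Usage: three archived outcomes against the preference sphere Box(p = (0.5, 0.5), r = 0.1).
print(preference_counts([[0.52, 0.50], [0.48, 0.55], [0.9, 0.3]], p=[0.5, 0.5], r=0.1))
```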

3.5. Statistical Analysis

A statistical analysis of the performance measure values was conducted in accordance with the wins-losses B algorithm proposed in [1]. The wins-losses B algorithm was implemented in R [51], and the Kruskal–Wallis and Mann–Whitney U statistical functions in R were used as stipulated in [1]. The calculation of wins and losses by the wins-losses B algorithm is presented in Algorithm 9 [52].
A win or loss is only recorded if there is a statistically significant difference in the performance of the two algorithms being compared with the pair-wise Mann–Whitney U test. Therefore, Diff > 0 indicates a good performance, since the DMOA obtained more wins than losses. On the other hand, Diff < 0 indicates a poor performance, since the DMOA was awarded more losses than wins.
Algorithm 9 wins-losses B algorithm for wins and losses calculation [52]
1: for each benchmark do
2:     for each n_t-τ_t combination do
3:         for each performance measure, pm do
4:             for each algorithm alg do
5:                 Calculate the average value pm_avg for each of the 30 runs
6:             Perform Kruskal–Wallis test on the average values, pm_avg
7:             if statistically significant difference then
8:                 for each pair of algorithms do
9:                     Perform Mann–Whitney U test
10:                    if statistically significant difference then
11:                        for each environment env do
12:                            Assign a win to the algorithm with the best average over all pm_avg for env
13:                            Assign a loss to the algorithm with the worst average over all pm_avg for env
14: Calculate Diff = #wins − #losses
▹ calculate Diff for each parameter (benchmark, n_t-τ_t combination, performance measure, algorithm) as required for analysis
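For illustration, a compact Python sketch of the statistical comparison step (the study's implementation used R [51]); the scipy.stats functions are standard, but the win/loss bookkeeping is simplified to per-measure averages and assumes that a larger measure value is better:
```python
import random
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

def wins_losses(per_alg_values, alpha=0.05):
    """Simplified wins-losses comparison for one benchmark, n_t-tau_t combination, and measure:
    per_alg_values maps an algorithm name to its 30 per-run average measure values."""
    score = {alg: 0 for alg in per_alg_values}                 # wins minus losses per algorithm
    if kruskal(*per_alg_values.values()).pvalue >= alpha:
        return score                                            # no significant difference anywhere
    for a, b in combinations(per_alg_values, 2):
        if mannwhitneyu(per_alg_values[a], per_alg_values[b]).pvalue < alpha:
            mean_a = sum(per_alg_values[a]) / len(per_alg_values[a])
            mean_b = sum(per_alg_values[b]) / len(per_alg_values[b])
            better, worse = (a, b) if mean_a > mean_b else (b, a)   # assumes larger is better
            score[better] += 1
            score[worse] -= 1
    return score

# Usage with illustrative data for the three algorithms over 30 runs.
data = {alg: [random.gauss(mu, 0.1) for _ in range(30)]
        for alg, mu in [("PPA", 0.6), ("DPA", 0.7), ("RSTFRA", 0.5)]}
print(wins_losses(data))
```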

4. Results

This section presents a summary of the results obtained from this study. Detailed results are, however, presented in Appendix A.
The summarized results are presented in Table 4, Table 5 and Table 6. For all of these tables, any column with a bold entry signifies the winning algorithm for the particular measure of performance, or for the experimental configuration, in the corresponding row.
Figure 2 and Figure 3 present the objective space for a selected DMOP which is constrained by a bounding box representing a DM’s preferences for two selected experimental configurations. The bounding box in these specific instances is a sphere. The two figures present the results for a randomly chosen run and a randomly chosen environment among many environments (changes) that are typical of a single run of a DMOA solving a DMOP.
Table 4 presents the results for six experimental configurations, i.e., various n_t-τ_t combinations representing different types of environment changes. It highlights the total number of wins and losses obtained by each constraint managing approach (algorithm) over all benchmarks and measures for each of the environment types. The death penalty algorithm (DPA) performed the best for four types of environments (n_t-τ_t combinations), while the proportionate penalty algorithm (PPA) outperformed the other DMOAs in the other two environment types (n_t = 10, τ_t = 2 and n_t = 10, τ_t = 4). In those six n_t-τ_t combinations, the restrict-search-to-feasible-region algorithm (RSTFRA) never outperformed DPA, but it performed better than PPA in two types of environments (n_t = 1, τ_t = 4 and n_t = 1, τ_t = 2). DPA was the only DMOA that obtained more wins than losses for all environment types. RSTFRA obtained more losses than wins for all environment types, except for n_t = 10, τ_t = 2. On the other hand, PPA obtained more wins than losses for all environments, except n_t = 1, τ_t = 4 and n_t = 1, τ_t = 2.
Table 5 presents the performance of the proposed algorithms with respect to the performance measures discussed in Section 3.4. It highlights the total number of wins and losses obtained by each algorithm for all the benchmarks and environment types for each of the performance measures. DPA performed the best for five of the eight measures and second-best for the rest. Two (react and dVD) of its five wins were ties with RSTFRA. Results for the first four measures in Section 3.4 indicated that DPA performed the best for three (acc, hvr, react) out of the four measures. It won with the least number of losses for the accuracy measure, acc, making it the most accurate of the proposed algorithms. For those four measures, RSTFRA won once (stab), but obtained the same number of losses as PPA for the win. RSTFRA also obtained the highest number of worst rankings. None of the algorithms obtained more wins than losses for all of the measures, with all algorithms obtaining more losses than wins for at least three measures.
DPA had the highest number of wins for the measures proposed in this study, i.e., it performed the best for two out of the four measures, making DPA the best performing algorithm over all the performance measures discussed in Section 3.4. PPA recorded the highest number of wins for the nNVD measure, while DPA ranked first for the sNVD measure. Thus, PPA and DPA performed better than RSTFRA in finding decisions that do not violate a DM's preferences. Although RSTFRA ranked best for nVD and dVD, the magnitude of the wins recorded by RSTFRA for those two measures was negligible. Despite RSTFRA ranking first for nVD and dVD, PPA never lost to any of the other algorithms on those measures, and DPA tied with RSTFRA on the dVD measure.
Table 6 presents the overall results, presenting the total number of wins and losses obtained by each algorithm over all performance measures and all environment types for all benchmarks. PPA ranked first with 403 wins, DPA recorded 346 wins, while RSTFRA ranked last with 274 wins. In addition, RSTFRA recorded the highest number of overall losses (409), resulting in the most negative D i f f value. DPA recorded the least number of overall losses and the most positive D i f f value. These overall results are consistent with the earlier results, which indicate that DPA performed the best on most of the performance measures and n t τ t combinations, while RSTFRA consistently lagged behind the other two proposed algorithms.
Figure 2 and Figure 3 present the objective space where the preferred objective vectors, or preferred outcomes, are contained in a DM’s preference set. The preference set, or the bounding box, in these instances is a sphere whose defining properties are specified by a DM in the bootstrap procedure described in Algorithm 3. For the sphere specifications in Figure 2 and Figure 3, the first three numbers represent the location of the center of the sphere, while the last number represents the radius of the sphere.
Figure 2 and Figure 3 are simply snapshots and are thus incapable of showing the dynamics of the preference set. They are, however, presented in this section to provide a one-time view into the state of the objective space during the optimization process.
In all the snapshots presented by Figure 2 and Figure 3, all the decisions in the archive are preferred by the DM, since all the objective vectors lie within the spheres representing the DM’s preferences. This is a testament to the fact that the proposed algorithms are effective in finding optimal trade-off solutions/decisions that reflect a DM’s preferences within the search space.
DPA in Figure 2 had the highest number of preferred vectors/outcomes within its spheres, which is consistent with earlier results in this section, indicating its overall superiority over the other algorithms proposed in this study. As a matter of fact, it is ranked best for the experimental configuration represented by Figure 2, and RSTFRA is ranked the worst performing algorithm.
In the experimental configuration represented by Figure 3, PPA ranked best, though only marginally better than DPA. Both algorithms effectively found the DM’s preferred decisions, as none of the algorithms produced violating decisions.

5. Conclusions

This article investigated the incorporation of a DM's preferences when solving DMOPs. The following contributions were made: an approach that is partly a priori and partly interactive that enables a decision-maker to indicate their preferences for dynamic problems; a bounding box approach to incorporate the preferences in the DMOA's search; constraint managing approaches to drive the search of a DMOA constrained by the preferences; and new performance measures measuring how well a DMOA's found solutions adhere to the preferences of a DM.
The results show that a DM's preferences can effectively be specified using the proposed approach, which is partly a priori and partly interactive. The results further indicate that the proposed bounding box specification is an effective mathematical abstraction of a DM's preferences. The three proposed constraint managing approaches showed varying degrees of performance: DPA performed the best, while RSTFRA lagged behind the other proposed approaches. Furthermore, the four new performance measures proposed in this article, which specifically evaluate the performance of DMOAs in the context of a DM's preferences, proved to evaluate the performance of the DMOAs effectively.
Future work will include experimenting with some of the geometric properties of the bounding box and the impact that these properties have on being able to specify the preferences of the DM in various ways.
It will not be trivial to compare the performance of different approaches that define a decision-maker’s preferences in the traditional way that DMOAs’ performance is evaluated. The way in which a specific approach defines the decision-maker’s preferences will directly influence the solutions that a DMOA finds. This article took the first step towards this, by proposing new performance measures for measuring the performance of DMOAs based on how well their found solutions adhere to the DM’s preferences. However, the question remains: if you compare two approaches that incorporate decision-maker preferences, how will you be able to determine whether one will be better than the other? As long as both approaches find solutions that do adhere to the DM preferences, in the end, the best (or most preferred) approach will be dependent on the application and the preference of the real decision-maker. Future work will investigate this further, i.e., in which ways can the performance of DMOAs incorporating DM preferences be efficiently compared.
In the future, the proposed bounding box approach and constraint managing approaches will be incorporated into various state-of-the-art DMOAs, evaluating the DMOAs’ performance on a range of DMOPs with varying characteristics [38] and measuring their performance with the newly proposed measures. Lastly, approaches to incorporate uncertainty in a DM’s preferences and the performance of the proposed approaches in this article in the presence of uncertainty will also be investigated.

Author Contributions

Conceptualization, A.R.A. and M.H.; methodology, A.R.A. and M.H.; software, A.R.A.; validation, A.R.A. and M.H.; formal analysis, A.R.A. and M.H.; investigation, A.R.A.; resources, A.R.A.; data curation, A.R.A.; writing—original draft preparation, A.R.A.; writing—review and editing, M.H.; visualization, A.R.A.; supervision, M.H.; project administration, A.R.A.; funding acquisition, A.R.A. and M.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is based on the research supported by the National Research Foundation (NRF) of South Africa (Grant Numbers 46712 and 105743). The opinions, findings, and conclusions or recommendations expressed in this article are those of the authors alone, and not those of the NRF. The NRF accepts no liability whatsoever in this regard.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data of this study can be provided upon request.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Detailed Results

Table A1. acc and stab for each DMOA with various frequency and severity of change in different environments.
DMOOP | n_t | τ_t | PM | PPA | DPA | RSTFRA | PM | PPA | DPA | RSTFRA
fda1104acc0.71000.73600.6633stab0.00000.00000.0000
fda1104acc0.59590.65330.7089stab0.19130.23820.2055
fda1104acc0.62360.65810.6701stab0.15830.18450.1725
fda1104acc0.63550.57560.6087stab0.20440.16750.1932
fda1105acc0.66450.67210.7355stab0.00000.00000.0000
fda1105acc0.64430.63030.6217stab0.18260.18010.1188
fda1105acc0.62040.66300.6485stab0.20730.22420.1906
fda1105acc0.56340.52950.5401stab0.20730.22420.1906
fda1102acc0.69930.56430.6647stab0.00000.00000.0000
fda1102acc0.59520.67730.6496stab0.13470.16970.1500
fda1102acc0.63110.67960.6332stab0.10910.16210.1531
fda1102acc0.56690.59710.5843stab0.21050.19190.1923
fda114acc0.99750.99360.9823stab0.00000.00000.0000
fda114acc0.72610.66840.7465stab0.00000.00020.0000
fda114acc0.96710.99610.9836stab0.03290.00390.0164
fda114acc0.66060.66450.7412stab0.00040.00000.0000
fda115acc0.96440.71280.7965stab0.00000.00000.0000
fda115acc0.79280.88840.8250stab0.00000.00000.0000
fda115acc0.91390.67440.8290stab0.08450.2780.1030
fda115acc0.77000.86120.7521stab0.00000.00000.0012
fda112acc0.95800.93700.8967stab0.00000.00000.0000
fda112acc0.88320.90170.8840stab0.00000.00000.0000
fda112acc0.95870.95190.8972stab0.04100.02520.0882
fda112acc0.88340.79150.8299stab0.00000.00000.0000
Table A2. hvr and react for each DMOA for FDA1 with various frequency and severity of change.
DMOOP | n_t | τ_t | PM | PPA | DPA | RSTFRA | PM | PPA | DPA | RSTFRA
fda1104hvr1.74001.78651.8379react13.000013.000013.0000
fda1104hvr1.56091.70011.9762react9.00009.00009.0000
fda1104hvr1.70161.66611.5562react5.00005.00005.0000
fda1104hvr1.79772.00631.5598react1.00001.00001.0000
fda1105hvr1.65881.5921.9064react16.000016.000016.0000
fda1105hvr1.73411.70481.5894react11.000011.000011.0000
fda1105hvr1.61161.78461.7651react6.00006.00006.0000
fda1105hvr1.43831.7331.624react1.00001.00001.0000
fda1102hvr1.74931.53631.7156react7.00007.00007.0000
fda1102hvr1.62431.55221.5465react5.00005.00005.0000
fda1102hvr1.59601.54371.4597react3.00003.00003.0000
fda1102hvr1.63731.51621.4294react1.00001.00001.0000
fda114hvr1.49551.58511.7424react13.000013.000013.0000
fda114hvr2.96062.72862.9838react8.30008.06678.4000
fda114hvr1.47131.67971.7775react5.00005.00005.0000
fda114hvr2.79612.71682.8980react1.00001.00001.0000
fda115hvr2.00551.40141.4275react12.50007.50009.5000
fda115hvr3.70404.35783.6041react10.400010.633310.3667
fda115hvr1.80951.28731.2637react4.83333.00003.1667
fda115hvr3.60454.20513.5012react1.00001.00001.0000
fda112hvr2.03331.78472.0553react4.20003.50002.8000
fda112hvr4.26804.24094.0464react4.63334.56674.5667
fda112hvr1.94151.70692.4759react2.80002.60002.3333
fda112hvr3.99933.57543.7061react1.00001.00001.0000
Table A3. nNVD and sNVD for each DMOA for FDA1 with various frequency and severity of change.
DMOOP | n_t | τ_t | PM | PPA | DPA | RSTFRA | PM | PPA | DPA | RSTFRA
fda1104nNVD99.433398.133399.8000sNVD0.02970.03010.0296
fda1104nNVD99.500096.166796.0667sNVD0.02980.03080.0308
fda1104nNVD99.500094.366795.6000sNVD0.02980.03140.0310
fda1104nNVD99.033396.300096.8333sNVD0.02990.03080.0306
fda1105nNVD100.0000100.0000100.0000sNVD0.02950.02950.0295
fda1105nNVD100.0000100.0000100.0000sNVD0.02950.02950.0295
fda1105nNVD100.000099.9667100.0000sNVD0.02950.02950.0296
fda1105nNVD100.0000100.0000100.0000sNVD0.02950.02950.0295
fda1102nNVD64.966764.900067.1333sNVD0.04570.04580.0441
fda1102nNVD67.433358.666757.6667sNVD0.04420.05060.0514
fda1102nNVD67.566758.700056.0000sNVD0.0440.05050.0532
fda1102nNVD67.700057.466757.7333sNVD0.04380.05180.0512
fda114nNVD26.266727.633325.6333sNVD0.06710.06480.0686
fda114nNVD26.233326.566721.6333sNVD0.07190.07770.0853
fda114nNVD26.166723.133322.7000sNVD0.06750.07730.0784
fda114nNVD25.000025.366725.6000sNVD0.07350.07560.0769
fda115nNVD100.0000100.0000100.0000sNVD0.02960.02950.0295
fda115nNVD71.433399.0333100.0000sNVD0.04220.03010.0296
fda115nNVD100.000098.933399.3000sNVD0.02960.02990.0298
fda115nNVD76.266798.166795.7667sNVD0.03940.03030.0291
fda112nNVD66.033365.700066.2000sNVD0.04480.04530.0449
fda112nNVD31.700030.233340.3000sNVD0.09730.10290.0726
fda112nNVD62.600043.233343.4333sNVD0.047080.06900.0670
fda112nNVD31.466731.366728.8667sNVD0.09810.10010.1089
Table A4. nVD and dVD for each DMOA for FDA1 with various frequency and severity of change.
DMOOP | n_t | τ_t | PM | PPA | DPA | RSTFRA | PM | PPA | DPA | RSTFRA
fda1104nVD0.00000.00000.0000dVD0.00000.00000.0000
fda1104nVD0.00000.00000.0000dVD0.00000.00000.0000
fda1104nVD0.00000.00000.0000dVD0.00000.00000.0000
fda1104nVD0.00000.00000.0000dVD0.00000.00000.0000
fda1105nVD0.00000.00000.0000dVD0.00000.00000.0000
fda1105nVD0.00000.00000.0000dVD0.00000.00000.0000
fda1105nVD0.00000.00000.0000dVD0.00000.00000.0000
fda1105nVD0.00000.00000.0000dVD0.00000.00000.0000
fda1102nVD0.00000.00000.0000dVD0.00000.00000.0000
fda1102nVD0.00000.00000.0000dVD0.00000.00000.0000
fda1102nVD0.00000.00000.0000dVD0.00000.00000.0000
fda1102nVD0.00000.00000.0000dVD0.00000.00000.0000
fda114nVD0.00000.00000.0000dVD0.00000.00000.0000
fda114nVD0.00000.00000.0000dVD0.00000.00000.0000
fda114nVD0.00000.00000.0000dVD0.00000.00000.0000
fda114nVD0.06670.00000.0000dVD0.06670.00000.0000
fda115nVD0.00000.00000.0000dVD0.00000.00000.0000
fda115nVD0.00000.00000.0000dVD0.00000.00000.0000
fda115nVD0.00000.00000.0000dVD0.00000.00000.0000
fda115nVD0.00000.00000.0000dVD0.00000.00000.0000
fda112nVD0.00000.00000.0000dVD0.00000.00000.0000
fda112nVD0.00000.00000.0000dVD0.00000.00000.0000
fda112nVD0.00000.00000.0000dVD0.00000.00000.0000
fda112nVD0.00000.00000.0000dVD0.00000.00000.0000
Table A5. acc and stab for each DMOA for FDA5 with various frequency and severity of change.
DMOOP | n_t | τ_t | PM | PPA | DPA | RSTFRA | PM | PPA | DPA | RSTFRA
fda5104acc0.64040.59620.5380stab0.00000.00000.0000
fda5104acc0.67510.77310.7557stab0.17930.14570.0884
fda5104acc0.44450.46190.6266stab0.42510.42610.2747
fda5104acc0.22820.22700.1946stab0.47430.53000.4098
fda5105acc0.26830.40800.8316stab0.00000.00000.0000
fda5105acc0.42980.68030.8570stab0.19920.15430.0168
fda5105acc0.51400.69980.8183stab0.22660.11000.0883
fda5105acc0.53820.69130.7904stab0.24610.15990.1331
fda5102acc0.62390.67270.9835stab0.00000.00000.0000
fda5102acc0.87990.89250.9526stab0.09850.08600.0000
fda5102acc0.62850.65490.9234stab0.33670.29230.0277
fda5102acc0.42850.43200.9186stab0.44020.42910.0455
fda514acc0.99841.00001.0000stab0.00000.00000.0000
fda514acc0.99860.99550.9970stab0.00140.00450.0030
fda514acc1.00000.99910.9991stab0.00000.00090.0009
fda514acc0.99960.99450.9956stab0.00040.00550.0044
fda515acc1.00000.99930.9958stab0.00000.00000.0000
fda515acc0.99530.99150.9746stab0.00470.00850.0254
fda515acc1.00000.99710.9976stab0.00000.00290.0024
fda515acc0.99630.99620.9793stab0.00370.00380.0207
fda512acc1.00000.99930.9948stab0.00000.00000.0000
fda512acc0.99630.99900.9781stab0.00370.00100.0219
fda512acc1.00000.99930.9980stab0.00000.00070.0020
fda512acc1.00000.99870.9853stab0.00000.00130.0147
Table A6. hvr and react for each DMOA for FDA5 with various frequency and severity of change.
DMOOP | n_t | τ_t | PM | PPA | DPA | RSTFRA | PM | PPA | DPA | RSTFRA
fda5104hvr1.66541.38141.2592react12.966712.966712.5000
fda5104hvr2.46931.71101.5734react8.93338.83338.8333
fda5104hvr3.23021.89111.6400react5.00004.96675.0000
fda5104hvr2.02062.00171.7762react1.00001.00001.0000
fda5105hvr2.15372.73152.4955react16.000016.000015.9333
fda5105hvr2.20182.60231.9044react11.000010.800010.9333
fda5105hvr2.2672.36232.0773react6.00006.00006.0000
fda5105hvr2.16362.42521.8647react1.00001.00001.0000
fda5102hvr1.54141.68261.2248react6.90007.00007.0000
fda5102hvr3.30343.26741.4156react5.00004.96675.0000
fda5102hvr4.92335.14331.1358react3.00003.00003.0000
fda5102hvr3.51454.03141.2700react1.00001.00001.0000
fda514hvr4.45654.00323.8261react12.600013.000013.0000
fda514hvr2.43132.57192.2178react7.76677.30007.5333
fda514hvr4.74463.43473.5272react5.00004.86674.8667
fda514hvr2.19792.78972.2656react1.00001.00001.0000
fda515hvr2.99872.64551.9198react16.000015.000011.5000
fda515hvr2.64752.13081.0979react8.20007.30002.8000
fda515hvr3.0882.86652.5523react6.00005.33335.5000
fda515hvr2.53492.37171.3460react1.00001.00001.0000
fda512hvr3.07853.51841.8674react4.00003.90003.0000
fda512hvr2.26793.15411.0209react3.30003.70001.2000
fda512hvr2.94133.31063.2396react3.00002.93332.6667
fda512hvr3.21663.27042.2137react1.00001.00001.0000
Table A7. nNVD and sNVD for each DMOA for FDA5 with various frequency and severity of change.
DMOOP | n_t | τ_t | PM | PPA | DPA | RSTFRA | PM | PPA | DPA | RSTFRA
fda5104nNVD39.500050.033345.3000sNVD0.11750.09660.1008
fda5104nNVD57.166767.333368.4333sNVD0.09060.07910.0726
fda5104nNVD63.500074.633363.1333sNVD0.11000.09150.0962
fda5104nNVD71.900072.933358.8667sNVD0.12370.11220.1245
fda5105nNVD42.000049.966727.4000sNVD0.16520.13270.1010
fda5105nNVD72.066770.200035.0667sNVD0.11050.10150.0915
fda5105nNVD74.433365.466736.4000sNVD0.10890.10050.0940
fda5105nNVD79.066760.166734.0333sNVD0.10170.10790.1059
fda5102nNVD12.266711.83332.9333sNVD0.27120.25460.0040
fda5102nNVD13.16679.10001.5667sNVD0.19280.29100.0032
fda5102nNVD19.30009.80002.6333sNVD0.22000.37290.0005
fda5102nNVD41.566733.80002.6000sNVD0.14130.17160.0062
fda514nNVD12.966712.70009.6333sNVD0.19670.21170.0279
fda514nNVD32.466718.666712.3667sNVD0.22580.10870.0253
fda514nNVD13.666712.96679.3667sNVD0.26830.26540.0122
fda514nNVD29.233328.900016.5333sNVD0.22380.08530.0124
fda515nNVD19.400023.200021.7333sNVD0.08970.05920.0568
fda515nNVD35.833334.200067.6333sNVD0.09460.06870.0368
fda515nNVD21.033323.433321.0333sNVD0.13740.10110.0651
fda515nNVD39.133.633362.8333sNVD0.07570.06930.0407
fda512nNVD22.166723.633326.4333sNVD0.16000.13650.0837
fda512nNVD20.866717.86676.6000sNVD0.07260.09640.0577
fda512nNVD18.800017.500017.4333sNVD0.20130.23570.1891
fda512nNVD20.266718.033311.4667sNVD0.06170.07470.0391
Table A8. nVD and dVD for each DMOA for FDA5 with various frequency and severity of change.
DMOOP | n_t | τ_t | PM | PPA | DPA | RSTFRA | PM | PPA | DPA | RSTFRA
fda5104nVD0.03330.00000.0000dVD0.03330.00000.0000
fda5104nVD0.00000.00000.0000dVD0.00000.00000.0000
fda5104nVD0.00000.00000.0000dVD0.00000.00000.0000
fda5104nVD0.00000.00000.0000dVD0.00000.00000.0000
fda5105nVD0.00000.00000.0000dVD0.00000.00000.0000
fda5105nVD0.00000.00000.0000dVD0.00000.00000.0000
fda5105nVD0.00000.00000.0000dVD0.00000.00000.0000
fda5105nVD0.00000.00000.0000dVD0.00000.00000.0000
fda5102nVD0.00000.00000.0000dVD0.00000.00000.0000
fda5102nVD0.00000.00000.0000dVD0.00000.00000.0000
fda5102nVD0.03330.00000.0000dVD0.03330.00000.0000
fda5102nVD0.03330.00000.0000dVD0.03330.00000.0000
fda514nVD0.06670.00000.0000dVD0.06670.00000.0000
fda514nVD0.00000.00000.0000dVD0.00000.00000.0000
fda514nVD0.00000.00000.0000dVD0.00000.00000.0000
fda514nVD0.00000.00000.0000dVD0.00000.00000.0000
fda515nVD0.00000.00000.0000dVD0.00000.00000.0000
fda515nVD0.00000.00000.0000dVD0.00000.00000.0000
fda515nVD0.00000.00000.0000dVD0.00000.00000.0000
fda515nVD0.00000.00000.0000dVD0.00000.00000.0000
fda512nVD0.03330.00000.0000dVD0.03330.00000.0000
fda512nVD0.00000.00000.0000dVD0.00000.00000.0000
fda512nVD0.00000.00000.0000dVD0.00000.00000.0000
fda512nVD0.00000.00000.0000dVD0.00000.00000.0000
Table A9. acc and stab for each DMOA for dMOP2 with various frequency and severity of change.
DMOOP | n_t | τ_t | PM | PPA | DPA | RSTFRA | PM | PPA | DPA | RSTFRA
dmop2104acc0.87000.78110.8707stab0.00000.00000.0000
dmop2104acc0.82840.82870.8715stab0.00000.01640.0000
dmop2104acc0.81240.88050.8840stab0.00000.01100.0002
dmop2104acc0.99800.97170.9995stab0.00060.01650.0001
dmop2105acc0.78580.76800.9103stab0.00000.00000.0000
dmop2105acc0.68570.71870.9217stab0.00990.02610.0000
dmop2105acc0.83740.77310.9080stab0.00000.01590.0000
dmop2105acc0.97730.97550.9951stab0.02090.02260.0036
dmop2102acc0.81270.84900.8298stab0.00000.00000.0000
dmop2102acc0.82320.85810.7979stab0.00000.00000.0000
dmop2102acc0.94430.82770.8558stab0.00000.00000.0000
dmop2102acc0.99800.99750.9988stab0.00090.00030.0002
dmop214acc0.92660.34490.3097stab0.00000.00000.0000
dmop214acc0.93090.88330.5509stab0.02240.11670.4491
dmop214acc1.00000.38020.6802stab0.00000.61980.3198
dmop214acc0.52360.43890.4764stab0.00730.00450.0396
dmop215acc0.94780.28820.9615stab0.00000.00000.0000
dmop215acc0.93930.95080.9574stab0.01770.04920.0426
dmop215acc0.99910.37370.9778stab0.00090.62630.0222
dmop215acc0.48610.38840.7584stab0.01030.07100.2271
dmop212acc0.95480.75870.9164stab0.00000.00000.0000
dmop212acc0.98330.89490.9059stab0.00590.10510.0941
dmop212acc0.99420.82780.9069stab0.00580.17220.0931
dmop212acc0.47350.53910.7011stab0.01690.06250.2519
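For context on how the stab values in Table A9 relate to the acc values, the sketch below shows the stability formulation commonly used in the DMOO literature [47,49], in which only a drop in accuracy from one time step to the next counts against an algorithm. The function name and example values are illustrative; this is not necessarily the exact implementation used in this study.

```python
# A minimal sketch (not the authors' exact implementation) of the stability
# measure commonly used in the DMOO literature [47,49]:
#   stab(t) = max{0, acc(t-1) - acc(t)}
# i.e., only a drop in accuracy from one time step to the next is penalized.

def stability(acc_per_timestep):
    """Return one stab value per time step, given accuracy per time step."""
    stab = [0.0]  # no previous time step exists for the first entry
    for prev, curr in zip(acc_per_timestep, acc_per_timestep[1:]):
        stab.append(max(0.0, prev - curr))
    return stab

# A drop in accuracy right after an environment change yields a positive value:
print(stability([0.95, 0.97, 0.82, 0.90]))  # [0.0, 0.0, 0.15..., 0.0]
```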
Table A10. hvr and react for each DMOA for dMOP2 with various frequency and severity of change.
DMOOP | n_t | τ_t | PM | PPA | DPA | RSTFRA | PM | PPA | DPA | RSTFRA
dmop2 | 10 | 4 | hvr | 1.2872 | 1.2594 | 1.2577 | react | 12.8667 | 12.9667 | 12.3667
dmop2 | 10 | 4 | hvr | 1.1982 | 1.2284 | 1.3295 | react | 8.0333 | 8.6333 | 8.6000
dmop2 | 10 | 4 | hvr | 1.5950 | 1.5624 | 1.4643 | react | 4.9667 | 4.8333 | 4.8333
dmop2 | 10 | 4 | hvr | 1.3038 | 1.4336 | 1.4014 | react | 1.0000 | 1.0000 | 1.0000
dmop2 | 10 | 5 | hvr | 1.9143 | 1.8882 | 1.1107 | react | 16.0000 | 16.0000 | 16.0000
dmop2 | 10 | 5 | hvr | 1.6730 | 1.7919 | 1.1055 | react | 11.0000 | 11.0000 | 11.0000
dmop2 | 10 | 5 | hvr | 1.7423 | 1.6500 | 1.0242 | react | 6.0000 | 5.9333 | 5.9667
dmop2 | 10 | 5 | hvr | 1.8847 | 2.0522 | 1.1175 | react | 1.0000 | 1.0000 | 1.0000
dmop2 | 10 | 2 | hvr | 1.3575 | 1.5524 | 1.4858 | react | 6.8000 | 6.9000 | 6.8000
dmop2 | 10 | 2 | hvr | 1.3303 | 1.4589 | 1.4937 | react | 4.8000 | 4.9333 | 4.8333
dmop2 | 10 | 2 | hvr | 1.7886 | 1.3657 | 1.3977 | react | 2.9333 | 2.8000 | 2.9000
dmop2 | 10 | 2 | hvr | 1.5989 | 1.5930 | 1.4986 | react | 1.0000 | 1.0000 | 1.0000
dmop2 | 1 | 4 | hvr | 1.5361 | 2.0163 | 1.6181 | react | 8.6000 | 2.6000 | 1.4000
dmop2 | 1 | 4 | hvr | 2.4648 | 9.2045 | 4.3700 | react | 7.8000 | 5.2667 | 2.3333
dmop2 | 1 | 4 | hvr | 0.9826 | 6.7302 | 10.8249 | react | 5.0000 | 2.4667 | 4.6000
dmop2 | 1 | 4 | hvr | 2.0165 | 5.4407 | 2.7435 | react | 1.0000 | 1.0000 | 1.0000
dmop2 | 1 | 5 | hvr | 1.651 | 4.8469 | 0.9488 | react | 12.0000 | 2.5000 | 12.5000
dmop2 | 1 | 5 | hvr | 2.1529 | 14.7819 | 0.9485 | react | 9.6333 | 8.3333 | 8.6667
dmop2 | 1 | 5 | hvr | 0.9861 | 6.5701 | 0.9669 | react | 6.0000 | 2.8333 | 6.0000
dmop2 | 1 | 5 | hvr | 1.7927 | 6.4797 | 0.7462 | react | 1.0000 | 1.0000 | 1.0000
dmop2 | 1 | 2 | hvr | 1.9387 | 7.6131 | 1.4606 | react | 3.3000 | 1.5000 | 2.9000
dmop2 | 1 | 2 | hvr | 2.3369 | 5.3866 | 1.5312 | react | 4.7667 | 3.1333 | 2.8667
dmop2 | 1 | 2 | hvr | 0.9523 | 10.4355 | 2.5348 | react | 3.0000 | 2.9333 | 3.0000
dmop2 | 1 | 2 | hvr | 1.7974 | 1.3771 | 0.7159 | react | 1.0000 | 1.0000 | 1.0000
Table A11. nNVD and sNVD for each DMOA for dMOP2 with various frequency and severity of change.
DMOOP | n_t | τ_t | PM | PPA | DPA | RSTFRA | PM | PPA | DPA | RSTFRA
dmop2 | 10 | 4 | nNVD | 5.3667 | 5.8333 | 6.4333 | sNVD | 0.0597 | 0.0511 | 0.0466
dmop2 | 10 | 4 | nNVD | 4.0667 | 3.9333 | 3.6000 | sNVD | 0.0698 | 0.0835 | 0.0716
dmop2 | 10 | 4 | nNVD | 3.1333 | 3.0000 | 3.1000 | sNVD | 0.0775 | 0.1051 | 0.0987
dmop2 | 10 | 4 | nNVD | 4.3000 | 3.3333 | 3.0667 | sNVD | 0.0721 | 0.0806 | 0.0951
dmop2 | 10 | 5 | nNVD | 12.6000 | 13.3333 | 3.8333 | sNVD | 0.1300 | 0.1264 | 0.0291
dmop2 | 10 | 5 | nNVD | 10.4000 | 9.8667 | 3.0000 | sNVD | 0.1548 | 0.1585 | 0.0395
dmop2 | 10 | 5 | nNVD | 11.0000 | 9.3333 | 2.9667 | sNVD | 0.1465 | 0.1710 | 0.0382
dmop2 | 10 | 5 | nNVD | 9.6667 | 7.9333 | 2.5667 | sNVD | 0.1740 | 0.2079 | 0.0364
dmop2 | 10 | 2 | nNVD | 5.8333 | 5.8667 | 6.5667 | sNVD | 0.1150 | 0.1180 | 0.1052
dmop2 | 10 | 2 | nNVD | 3.9000 | 3.1333 | 2.8667 | sNVD | 0.1543 | 0.1751 | 0.2310
dmop2 | 10 | 2 | nNVD | 3.7333 | 2.9333 | 2.8333 | sNVD | 0.1752 | 0.1745 | 0.2025
dmop2 | 10 | 2 | nNVD | 4.6333 | 3.2000 | 3.0333 | sNVD | 0.1419 | 0.1825 | 0.2077
dmop2 | 1 | 4 | nNVD | 68.1667 | 91.5667 | 94.4667 | sNVD | 0.0441 | 0.0326 | 0.0319
dmop2 | 1 | 4 | nNVD | 49.3000 | 24.8000 | 60.7000 | sNVD | 0.0633 | 0.1534 | 0.1541
dmop2 | 1 | 4 | nNVD | 50.8000 | 63.1667 | 30.2333 | sNVD | 0.0585 | 0.0660 | 0.1982
dmop2 | 1 | 4 | nNVD | 0 | 0.0333 | 0 | sNVD | 0.0000 | 0.0000 | 0.0000
dmop2 | 1 | 5 | nNVD | 81.0667 | 75.4333 | 0.9333 | sNVD | 0.0366 | 0.0596 | 0
dmop2 | 1 | 5 | nNVD | 59.5333 | 3.4333 | 1.0000 | sNVD | 0.0519 | 0.0929 | 0
dmop2 | 1 | 5 | nNVD | 60.6333 | 66.3333 | 0.9 | sNVD | 0.0492 | 0.0532 | 0
dmop2 | 1 | 5 | nNVD | 0.0000 | 0.0000 | 0.0000 | sNVD | 0.0000 | 0.0000 | 0.0000
dmop2 | 1 | 2 | nNVD | 31.4333 | 32.0333 | 5.8000 | sNVD | 0.1038 | 0.1133 | 0.0060
dmop2 | 1 | 2 | nNVD | 13.4333 | 1.7000 | 2.1000 | sNVD | 0.2290 | 0.3570 | 0.0454
dmop2 | 1 | 2 | nNVD | 19.9333 | 12.5333 | 0.6333 | sNVD | 0.1599 | 0.5381 | 0.0247
dmop2 | 1 | 2 | nNVD | 0.0000 | 0.0000 | 0.0000 | sNVD | 0.0000 | 0.0000 | 0.0000
Table A12. nVD and dVD for each DMOA for dMOP2 with various frequency and severity of change.
DMOOP | n_t | τ_t | PM | PPA | DPA | RSTFRA | PM | PPA | DPA | RSTFRA
dmop2 | 10 | 4 | nVD | 0.0667 | 0.0000 | 0.0000 | dVD | 0.0667 | 0.0000 | 0.0000
dmop2 | 10 | 4 | nVD | 0.0333 | 0.0000 | 0.0000 | dVD | 0.0333 | 0.0000 | 0.0000
dmop2 | 10 | 4 | nVD | 0.0000 | 0.0000 | 0.0000 | dVD | 0.0000 | 0.0000 | 0.0000
dmop2 | 10 | 4 | nVD | 0.0000 | 0.0000 | 0.0000 | dVD | 0.0000 | 0.0000 | 0.0000
dmop2 | 10 | 5 | nVD | 0.0333 | 0.0000 | 0.0000 | dVD | 0.0333 | 0.0000 | 0.0000
dmop2 | 10 | 5 | nVD | 0.0000 | 0.0000 | 0.0000 | dVD | 0.0000 | 0.0000 | 0.0000
dmop2 | 10 | 5 | nVD | 0.1333 | 0.0000 | 0.0000 | dVD | 0.1333 | 0.0000 | 0.0000
dmop2 | 10 | 5 | nVD | 0.0667 | 0.0000 | 0.0000 | dVD | 0.0667 | 0.0000 | 0.0000
dmop2 | 10 | 2 | nVD | 0.0000 | 0.0000 | 0.0000 | dVD | 0.0000 | 0.0000 | 0.0000
dmop2 | 10 | 2 | nVD | 0.0000 | 0.0000 | 0.0000 | dVD | 0.0000 | 0.0000 | 0.0000
dmop2 | 10 | 2 | nVD | 0.0333 | 0.0000 | 0.0000 | dVD | 0.0333 | 0.0000 | 0.0000
dmop2 | 10 | 2 | nVD | 0.0333 | 0.0000 | 0.0000 | dVD | 0.0333 | 0.0000 | 0.0000
dmop2 | 1 | 4 | nVD | 0.0000 | 0.0000 | 0.0000 | dVD | 0.0000 | 0.0000 | 0.0000
dmop2 | 1 | 4 | nVD | 0.0000 | 23.3333 | 0.0000 | dVD | 0.0000 | 8.0838 | 0.0000
dmop2 | 1 | 4 | nVD | 0.0000 | 0.0000 | 3.3333 | dVD | 0.0000 | 0.0000 | 1.1525
dmop2 | 1 | 4 | nVD | 1.0000 | 96.6667 | 100.0000 | dVD | 78.3002 | 64.2158 | 5.5742
dmop2 | 1 | 5 | nVD | 0.0000 | 6.6667 | 6.6667 | dVD | 0.0000 | 1.0863 | 1.0156
dmop2 | 1 | 5 | nVD | 0.0000 | 66.6667 | 0.0000 | dVD | 0.0000 | 25.4565 | 0.0000
dmop2 | 1 | 5 | nVD | 0.0000 | 13.3333 | 10.0000 | dVD | 0.0000 | 2.5970 | 1.7496
dmop2 | 1 | 5 | nVD | 1.0000 | 100.0000 | 100.0000 | dVD | 83.0633 | 65.1873 | 93.9007
dmop2 | 1 | 2 | nVD | 0.0000 | 0.0000 | 6.6667 | dVD | 0.0000 | 0.0000 | 1.1100
dmop2 | 1 | 2 | nVD | 0.0000 | 26.7333 | 1.7000 | dVD | 0.0000 | 9.0173 | 0.4143
dmop2 | 1 | 2 | nVD | 0.0000 | 0.0000 | 24.1667 | dVD | 0.0000 | 0.0000 | 7.6244
dmop2 | 1 | 2 | nVD | 1.0000 | 62.4667 | 57.6333 | dVD | 85.1725 | 90.4603 | 93.0091

References

  1. Helbig, M.; Engelbrecht, A.P. Analysing the performance of dynamic multi-objective optimisation algorithms. In Proceedings of the IEEE Congress on Evolutionary Computation, Cancun, Mexico, 20–23 June 2013; pp. 1531–1539. [Google Scholar] [CrossRef]
  2. Jiang, S.; Yang, S. A benchmark generator for dynamic multi-objective optimization problems. In Proceedings of the UK Workshop on Computational Intelligence (UKCI), Bradford, UK, 8–10 September 2014; pp. 1–8. [Google Scholar] [CrossRef]
  3. Azzouz, R.; Bechikh, S.; Said, L.B. Recent Advances in Evolutionary Multi-objective Optimization; Springer: Berlin/Heidelberg, Germany, 2017; Volume 20. [Google Scholar] [CrossRef]
  4. Nguyen, T.T.; Yang, S.; Branke, J. Evolutionary dynamic optimization: A survey of the state of the art. Swarm Evol. Comput. 2012, 6, 1–24. [Google Scholar] [CrossRef]
  5. Coello Coello, C.A.; Reyes-Sierra, M. Multi-Objective Particle Swarm Optimizers: A Survey of the State-of-the-Art. Int. J. Comput. Intell. Res. 2006, 2, 287–308. [Google Scholar] [CrossRef]
  6. Pareto, V. Cours D’Economie Politique; Librairie Droz: Geneva, Switzerland, 1964. [Google Scholar]
  7. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms; John Wiley & Sons, Inc.: New York, NY, USA, 2001. [Google Scholar]
  8. Bianco, N.; Fragnito, A.; Iasiello, M.; Mauro, G.M. A CFD multi-objective optimization framework to design a wall-type heat recovery and ventilation unit with phase change material. Appl. Energy 2023, 347, 121368. [Google Scholar] [CrossRef]
  9. Deb, K.; Bhaskara Rao, N.U.; Karthik, S. Dynamic Multi-objective Optimization and Decision-making Using Modified NSGA-II: A Case Study on Hydro-thermal Power Scheduling. In Proceedings of the International Conference on Evolutionary Multi-Criterion Optimization, EMO’07, Matsushima, Japan, 5–8 March 2007; pp. 803–817. [Google Scholar]
  10. Farina, M.; Deb, K.; Amato, P. Dynamic multiobjective optimization problem: Test cases, approximation, and applications. IEEE Trans. Evol. Comput. 2004, 8, 425–442. [Google Scholar]
  11. Hämäläinen, R.P.; Mäntysaari, J. A dynamic interval goal programming approach to the regulation of a lake–river system. J. Multi-Criteria Decis. Anal. 2001, 10, 75–86. [Google Scholar] [CrossRef]
  12. Hämäläinen, R.P.; Mäntysaari, J. Dynamic multi-objective heating optimization. Eur. J. Oper. Res. 2002, 142, 1–15. [Google Scholar] [CrossRef]
  13. Huang, L.; Suh, I.H.; Abraham, A. Dynamic multi-objective optimization based on membrane computing for control of time-varying unstable plants. Inf. Sci. 2011, 181, 2370–2391. [Google Scholar] [CrossRef]
  14. Zhang, X.; Zhang, G.; Zhang, D.; Zhang, L.; Qian, F. Dynamic Multi-Objective Optimization in Brazier-Type Gasification and Carbonization Furnace. Materials 2023, 16, 1164. [Google Scholar] [CrossRef]
  15. Zhou, X.; Sun, Y.; Huang, Z.; Yang, C.; Yen, G.G. Dynamic multi-objective optimization and fuzzy AHP for copper removal process of zinc hydrometallurgy. Appl. Soft Comput. 2022, 129, 109613. [Google Scholar] [CrossRef]
  16. Fang, Y.; Liu, F.; Li, M.; Cui, H. Domain Generalization-Based Dynamic Multiobjective Optimization: A Case Study on Disassembly Line Balancing. IEEE Trans. Evol. Comput. 2022, 1. [Google Scholar] [CrossRef]
  17. Iris, C.; Asan, S.S. Computational Intelligence Systems in Industrial Engineering. Comput. Intell. Syst. Ind. Eng. 2012, 6, 203–230. [Google Scholar] [CrossRef]
  18. Helbig, M. Challenges Applying Dynamic Multi-objective Optimisation Algorithms to Real-World Problems. In Women in Computational Intelligence: Key Advances and Perspectives on Emerging Topics; Smith, A.E., Ed.; Springer International Publishing: Cham, Switzerland, 2022; pp. 353–375. [Google Scholar] [CrossRef]
  19. Jaimes, A.L.; Montaño, A.A.; Coello Coello, C.A. Preference incorporation to solve many-objective airfoil design problems. In Proceedings of the IEEE Congress of Evolutionary Computation (CEC), New Orleans, LA, USA, 5–8 June 2011; pp. 1605–1612. [Google Scholar] [CrossRef]
  20. Coello, C.A.C.; Lamont, G.B.; Veldhuizen, D.A.V. Evolutionary Algorithms for Solving Multi-Objective Problems; Springer: New York, NY, USA, 2007. [Google Scholar] [CrossRef]
  21. Cruz-Reyes, L.; Fernandez, E.; Gomez, C.; Sanchez, P. Preference Incorporation into Evolutionary Multiobjective Optimization Using a Multi-Criteria Evaluation Method. In Recent Advances on Hybrid Approaches for Designing Intelligent Systems; Castillo, O., Melin, P., Pedrycz, W., Kacprzyk, J., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 533–542. [Google Scholar] [CrossRef]
  22. Cruz-Reyes, L.; Fernandez, E.; Sanchez, P.; Coello Coello, C.A.; Gomez, C. Incorporation of implicit decision-maker preferences in multi-objective evolutionary optimization using a multi-criteria classification method. Appl. Soft Comput. J. 2017, 50, 48–57. [Google Scholar] [CrossRef]
  23. Ferreira, T.N.; Vergilio, S.R.; de Souza, J.T. Incorporating user preferences in search-based software engineering: A systematic mapping study. Inf. Softw. Technol. 2017, 90, 55–69. [Google Scholar] [CrossRef]
  24. Rostami, S.; O’Reilly, D.; Shenfield, A.; Bowring, N. A novel preference articulation operator for the Evolutionary Multi-Objective Optimisation of classifiers in concealed weapons detection. Inf. Sci. 2015, 295, 494–520. [Google Scholar] [CrossRef]
  25. Goulart, F.; Campelo, F. Preference-guided evolutionary algorithms for many-objective optimization. Inf. Sci. 2016, 329, 236–255. [Google Scholar] [CrossRef]
  26. Sudeng, S.; Wattanapongsakorn, N. Incorporating decision maker preference in multiobjective evolutionary algorithm. In Proceedings of the IEEE Symposium on Computational Intelligence for Engineering Solutions (CIES), Orlando, FL, USA, 9–12 December 2014; pp. 22–29. [Google Scholar] [CrossRef]
  27. Thiele, L.; Miettinen, K.; Korhonen, P.J.; Molina, J. A Preference-Based Evolutionary Algorithm for Multi-Objective Optimization. Evol. Comput. 2009, 17, 411–436. [Google Scholar] [CrossRef]
  28. Mezura-Montes, E.; Coello Coello, C.A. Constraint-handling in nature-inspired numerical optimization: Past, present and future. Swarm Evol. Comput. 2011, 1, 173–194. [Google Scholar] [CrossRef]
  29. Kennedy, J.; Eberhart, R. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; Volume 5, pp. 4104–4108. [Google Scholar] [CrossRef]
  30. Jensen, P.A.; Bard, J.F. Operations Research Models and Methods; John Wiley & Sons: New York, NY, USA, 2003. [Google Scholar]
  31. Michalewicz, Z. A Survey of Constraint Handling Techniques in Evolutionary Computation Methods. Evol. Program. 1995, 4, 135–155. [Google Scholar]
  32. Zhang, W.; Yen, G.G.; He, Z. Constrained Optimization Via Artificial Immune System. IEEE Trans. Cybern. 2014, 44, 185–198. [Google Scholar] [CrossRef]
  33. Azzouz, R.; Bechikh, S.; Said, L.B. Articulating Decision Maker’s Preference Information within Multiobjective Artificial Immune Systems. In Proceedings of the 2012 IEEE 24th International Conference on Tools with Artificial Intelligence, Athens, Greece, 7–9 November 2012; Volume 1, pp. 327–334. [Google Scholar] [CrossRef]
  34. Das, S.; Suganthan, P.N. Differential Evolution: A Survey of the State-of-the-Art. IEEE Trans. Evol. Comput. 2011, 15, 4–31. [Google Scholar] [CrossRef]
  35. Deb, K.; Agrawal, S.; Pratap, A.; Meyarivan, T. A Fast Elitist Non-dominated Sorting Genetic Algorithm for Multi-objective Optimization: NSGA-II. In Proceedings of the Parallel Problem Solving from Nature PPSN VI, Paris, France, 18–20 September 2000; Schoenauer, M., Deb, K., Rudolph, G., Yao, X., Lutton, E., Merelo, J.J., Schwefel, H.P., Eds.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 849–858. [Google Scholar]
  36. Adekunle, R.A.; Helbig, M. A differential evolution algorithm for dynamic multi-objective optimization. In Proceedings of the IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, USA, 27 November–1 December 2017; pp. 1–10. [Google Scholar] [CrossRef]
  37. Helbig, M.; Engelbrecht, A.P. Issues with performance measures for dynamic multi-objective optimisation. In Proceedings of the IEEE Symposium on Computational Intelligence in Dynamic and Uncertain Environments (CIDUE), Singapore, 16–19 April 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 17–24. [Google Scholar]
  38. Helbig, M. Solving Dynamic Multi-Objective Optimisation Problems Using Vector Evaluated Particle Swarm Optimisation. Ph.D. Thesis, University of Pretoria, Pretoria, South Africa, 2012. [Google Scholar]
  39. Goh, C.K.; Tan, K.C. A competitive-cooperative coevolutionary paradigm for dynamic multiobjective optimization. IEEE Trans. Evol. Comput. 2009, 13, 103–127. [Google Scholar] [CrossRef]
  40. Padhye, N.; Mittal, P.; Deb, K. Feasibility preserving constraint-handling strategies for real parameter evolutionary optimization. Comput. Optim. Appl. 2015, 62, 851–890. [Google Scholar] [CrossRef]
  41. Hamdan, M. On the disruption-level of polynomial mutation for evolutionary multi-objective optimisation algorithms. Comput. Inform. 2010, 29, 783–800. [Google Scholar]
  42. Engelbrecht, A.P. Computational Intelligence: An Introduction; John Wiley & Sons: New York, NY, USA, 2007. [Google Scholar]
  43. Gaemperle, R.; Mueller, S.D.; Koumoutsakos, P. A Parameter Study for Differential Evolution. Adv. Intell. Syst. Fuzzy Syst. Evol. Comput. 2002, 10, 293–298. [Google Scholar]
  44. Ronkkonen, J.; Kukkonen, S.; Price, K. Real-parameter optimization with differential evolution. In Proceedings of the IEEE Congress on Evolutionary Computation, Edinburgh, Scotland, UK, 2–5 September 2005; Volume 1, pp. 506–513. [Google Scholar] [CrossRef]
  45. Price, K.V. Differential Evolution. In Handbook of Optimization: From Classical to Modern Approach; Zelinka, I., Snášel, V., Abraham, A., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 187–214. [Google Scholar] [CrossRef]
  46. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  47. Cámara, M.; Ortega, J.; de Toro, F. A Single Front Genetic Algorithm for Parallel Multi-objective Optimization in Dynamic Environments. Neurocomputing 2009, 72, 3570–3579. [Google Scholar] [CrossRef]
  48. Van Veldhuizen, D. Multiobjective Evolutionary Algorithms: Classification, Analyses, and New Innovations. Ph.D. Thesis, Faculty of the Graduate School of Engineering, Air Force Institute of Technology, Air University, Wright-Patterson Air Force Base, OH, USA, 1999. [Google Scholar]
  49. Helbig, M.; Engelbrecht, A.P. Performance measures for dynamic multi-objective optimisation algorithms. Inf. Sci. 2013, 250, 61–81. [Google Scholar] [CrossRef]
  50. Sola, M.C. Parallel Processing for Dynamic Multi-Objective Optimization. Ph.D. Thesis, Universidad de Granada, Granada, Spain, 2010. [Google Scholar]
  51. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2017. [Google Scholar]
  52. Helbig, M. Dynamic Multi-objective Optimization Using Computational Intelligence Algorithms. In Proceedings of the Computational Intelligence and Data Analytics; Buyya, R., Hernandez, S.M., Kovvur, R.M.R., Sarma, T.H., Eds.; Springer Nature: Singapore, 2023; pp. 41–62. [Google Scholar]
Figure 1. Preference-driven search process of a DMOA.
Figure 2. Decisions found by DPA (above) and RSTFRA (below) for FDA5 with Sphere Spec = (1.7808, 2.9185 × 10^−59, 0.0, 1.5), n_t = 10 and τ_t = 5.
Figure 3. Decisions found by DPA (above) and PPA (below) for FDA5 with Sphere Spec = (1.0276 × 10^−16, 4.4955 × 10^−62, 1.68781, 1.5), n_t = 10 and τ_t = 2.
Table 1. Experimental preferences for decision variables.
S/N | x_1 | x_2 | x_3 | x_4 | x_5 | x_6
1 | 0.47600 | 0.53000 | 0.5877 | 0.59104 | 0.5324 | 0.4989
2 | 0.47700 | 0.33000 | 0.1630 | 0.42817 | 0.3250 | 0.2654
3 | 0.47600 | 0.19000 | 0.0912 | 0.17928 | 0.1616 | 0.2331
4 | 0.81600 | 0.95000 | 0.8768 | 0.86623 | 0.4892 | 0.7924
5 | 0.76100 | −0.10000 | 0.1513 | −0.17267 | 0.0387 | 0.1229
6 | 0.16700 | 0.18000 | 0.0032 | −0.00311 | 0.1462 | −0.0284
7 | 0.96400 | 3.5 × 10^−6 | 0.5378 | 0.51880 | 0.3188 | 0.5433
8 | 0.00000 | 2.8 × 10^−5 | 0.3660 | 0.28965 | 0.4592 | 0.3918
9 | 1.00000 | 0.00030 | 0.3992 | 0.39925 | 0.5292 | 0.4621
10 | 0.35700 | 1.00000 | 0.5083 | 0.74667 | 0.8383 | 0.7472
11 | 0.00000 | 0.00091 | 0.7330 | 0.49810 | 0.8207 | 0.6616
12 | 0.00000 | 0.08000 | 0.6419 | 0.87802 | 0.8485 | 0.8020
13 | 0.14600 | 0.31000 | 0.2970 | 0.30878 | 0.3052 | 0.3065
14 | 0.73400 | 0.16000 | 0.1657 | 0.13508 | 0.1101 | 0.1729
15 | 0.31700 | 0.31000 | 0.3230 | 0.32197 | 0.3290 | 0.2881
16 | 0.06100 | 0.00460 | 0.0027 | 0.00225 | 0.0032 | 0.0079
17 | 1.00000 | 0.04400 | 0.0391 | 0.10593 | 0.0204 | 0.0658
18 | 0.00000 | 0.00280 | 0.0015 | 0.00042 | 0.0016 | 0.0045
Table 2. Experimental preferences for objective values.
S/N | f_1 | f_2 | f_3
1 | 0.4800 | 0.3200 | N/A
2 | 0.4800 | 0.6200 | N/A
3 | 0.4800 | 0.9300 | N/A
4 | 0.8200 | 2.4000 | N/A
5 | 0.7600 | 0.1700 | N/A
6 | 0.1700 | 0.6300 | N/A
7 | 0.9300 | 4.5 × 10^−71 | 1.3843
8 | 1.8000 | 2.9 × 10^−59 | 0
9 | 1.0 × 10^−16 | 4.5 × 10^−62 | 1.6781
10 | 2.8 × 10^−8 | 2.8 × 10^−8 | 1.6354
11 | 2.9000 | 0.0041 | 0
12 | 3.5000 | 0.4400 | 0
13 | 0.1500 | 4.6000 | N/A
14 | 0.7300 | 9.5000 | N/A
15 | 0.3200 | 4.3000 | N/A
16 | 0.0610 | 0.9700 | N/A
17 | 1.0000 | 0.2100 | N/A
18 | 0.0000 | 1.0000 | N/A
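Tables 1 and 2 list the decision-space and objective-space preferences used in the experiments; together with the Sphere Spec values reported in Figures 2 and 3, each objective-space preference can be read as a reference point with an allowed radius. As a rough, generic illustration of how such a preference region can be rewritten as an inequality constraint for the constraint handling approaches to enforce, consider the sketch below; the function name and example values are hypothetical, and the exact constraint formulation used in this study may differ.

```python
import math

# Illustrative only: one generic way to express an objective-space preference
# (a reference point plus an allowed radius, cf. the "Sphere Spec" values in
# Figures 2 and 3) as an inequality constraint g(x) <= 0. The exact constraint
# formulation used in this study may differ; names and values are hypothetical.

def sphere_preference_constraint(objectives, centre, radius):
    """Return g = ||f(x) - centre|| - radius; g <= 0 means the candidate
    solution lies inside the decision-maker's preferred region."""
    dist = math.sqrt(sum((f - c) ** 2 for f, c in zip(objectives, centre)))
    return dist - radius

# Example with the objective-space preference of row 8 in Table 2,
# which is essentially the point (1.8, 0, 0), and a radius of 1.5:
g = sphere_preference_constraint(objectives=(1.7, 0.1, 0.2),
                                 centre=(1.8, 2.9e-59, 0.0),
                                 radius=1.5)
print(g)  # ≈ -1.26 (negative, so this candidate satisfies the preference)
```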
Table 3. Benchmark function configurations.
S/N | DMOP | τ_t | n_t | Iterations | c(f(x)) | Runs
1 | FDA1 | 4 | 10 | 16 | 20 | 30
2 | FDA1 | 5 | 10 | 20 | 20 | 30
3 | FDA1 | 2 | 10 | 8 | 20 | 30
4 | FDA1 | 4 | 1 | 16 | 20 | 30
5 | FDA1 | 5 | 1 | 20 | 20 | 30
6 | FDA1 | 2 | 1 | 8 | 20 | 30
7 | FDA5 | 4 | 10 | 16 | 20 | 30
8 | FDA5 | 5 | 10 | 20 | 20 | 30
9 | FDA5 | 2 | 10 | 8 | 20 | 30
10 | FDA5 | 4 | 1 | 16 | 20 | 30
11 | FDA5 | 5 | 1 | 20 | 20 | 30
12 | FDA5 | 2 | 1 | 8 | 20 | 30
13 | dMOP2 | 4 | 10 | 16 | 20 | 30
14 | dMOP2 | 5 | 10 | 20 | 20 | 30
15 | dMOP2 | 2 | 10 | 8 | 20 | 30
16 | dMOP2 | 4 | 1 | 16 | 20 | 30
17 | dMOP2 | 5 | 1 | 20 | 20 | 30
18 | dMOP2 | 2 | 1 | 8 | 20 | 30
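Table 3 lists the frequency (τ_t) and severity (n_t) of change used for each benchmark instance. For readers less familiar with the FDA and dMOP benchmark suites, the sketch below shows the standard time variable from [10], t = (1/n_t)⌊τ/τ_t⌋ with τ the iteration counter, through which these two parameters control the dynamics: a smaller τ_t causes more frequent changes and a smaller n_t causes more severe changes. The helper name is illustrative only.

```python
import math

# A minimal sketch of the standard FDA time variable [10] that the tau_t and
# n_t values in Table 3 plug into:  t = (1/n_t) * floor(tau / tau_t),
# where tau is the iteration counter. A smaller tau_t changes the landscape
# more often (higher frequency); a smaller n_t makes each change larger
# (higher severity). The function name is illustrative only.

def fda_time(iteration, n_t, tau_t):
    return (1.0 / n_t) * math.floor(iteration / tau_t)

# tau_t = 4, n_t = 10: t increases by 0.1 every 4 iterations.
# tau_t = 2, n_t = 1:  t jumps by a full unit every 2 iterations.
for it in range(9):
    print(it, fda_time(it, n_t=10, tau_t=4), fda_time(it, n_t=1, tau_t=2))
```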
Table 4. Overall wins and losses for various frequency and severity of change combinations using wins-losses B [1].
n_t | τ_t | RESULTS | PPA | DPA | RSTFRA
10 | 4 | Wins | 64 | 59 | 53
10 | 4 | Losses | 54 | 58 | 64
10 | 4 | Diff | 10 | 1 | −11
10 | 4 | Rank | 1 | 2 | 3
10 | 5 | Wins | 61 | 67 | 32
10 | 5 | Losses | 45 | 40 | 75
10 | 5 | Diff | 16 | 27 | −43
10 | 5 | Rank | 2 | 1 | 3
10 | 2 | Wins | 73 | 60 | 35
10 | 2 | Losses | 40 | 54 | 74
10 | 2 | Diff | 33 | 6 | −39
10 | 2 | Rank | 1 | 2 | 3
1 | 4 | Wins | 41 | 67 | 56
1 | 4 | Losses | 66 | 44 | 54
1 | 4 | Diff | −25 | 23 | 2
1 | 4 | Rank | 3 | 1 | 2
1 | 5 | Wins | 56 | 73 | 43
1 | 5 | Losses | 53 | 47 | 72
1 | 5 | Diff | 3 | 26 | −29
1 | 5 | Rank | 2 | 1 | 3
1 | 2 | Wins | 51 | 77 | 55
1 | 2 | Losses | 64 | 49 | 70
1 | 2 | Diff | −13 | 28 | −15
1 | 2 | Rank | 3 | 1 | 2
Table 5. Overall wins and losses for various performance measures, and frequency and severity of change combinations, using wins-losses B [1].
PM | RESULTS | PPA | DPA | RSTFRA
acc | Wins | 69 | 83 | 63
acc | Losses | 75 | 60 | 80
acc | Diff | −6 | 23 | −17
acc | Rank | 2 | 1 | 3
stab | Wins | 23 | 23 | 32
stab | Losses | 22 | 34 | 22
stab | Diff | 1 | −11 | 10
stab | Rank | 2 | 2 | 1
hvr | Wins | 82 | 94 | 40
hvr | Losses | 62 | 50 | 104
hvr | Diff | 20 | 44 | −64
hvr | Rank | 2 | 1 | 3
react | Wins | 14 | 45 | 45
react | Losses | 57 | 25 | 22
react | Diff | −43 | 20 | 23
react | Rank | 3 | 1 | 1
nNVD | Wins | 91 | 65 | 39
nNVD | Losses | 38 | 67 | 90
nNVD | Diff | 53 | −2 | −51
nNVD | Rank | 1 | 2 | 3
sNVD | Wins | 67 | 87 | 47
sNVD | Losses | 68 | 48 | 85
sNVD | Diff | −1 | 39 | −38
sNVD | Rank | 2 | 1 | 3
nVD | Wins | 0 | 2 | 4
nVD | Losses | 0 | 4 | 2
nVD | Diff | 0 | −2 | 2
nVD | Rank | 3 | 2 | 1
dVD | Wins | 0 | 4 | 4
dVD | Losses | 0 | 4 | 4
dVD | Diff | 0 | 0 | 0
dVD | Rank | 3 | 1 | 1
Table 6. Overall wins and losses for each DMOA using wins-losses B [1].
RESULTS | PPA | DPA | RSTFRA
Wins | 346 | 403 | 274
Losses | 322 | 292 | 409
Diff | 24 | 111 | −135
Rank | 2 | 1 | 3
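Tables 4–6 report wins, losses, Diff and Rank obtained with the wins-losses approach of [1]. Awarding the individual wins and losses requires the statistical comparison procedure described in [1], but the Diff and Rank rows follow mechanically from the win and loss counts, as the following sketch (using the Table 6 totals) illustrates.

```python
# Illustrative only: the Diff and Rank rows in Tables 4-6 follow directly from
# the win and loss counts of the wins-losses approach [1]; awarding the wins
# and losses themselves requires the underlying statistical comparisons and is
# not reproduced here. The counts below are taken from Table 6.

wins   = {"PPA": 346, "DPA": 403, "RSTFRA": 274}
losses = {"PPA": 322, "DPA": 292, "RSTFRA": 409}

diff = {alg: wins[alg] - losses[alg] for alg in wins}
# Rank 1 is assigned to the algorithm with the largest Diff value.
ranking = {alg: rank for rank, alg in
           enumerate(sorted(diff, key=diff.get, reverse=True), start=1)}

print(diff)     # {'PPA': 24, 'DPA': 111, 'RSTFRA': -135}
print(ranking)  # {'DPA': 1, 'PPA': 2, 'RSTFRA': 3}
```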