Article

Assessment of Water Resources Management Strategy Under Different Evolutionary Optimization Techniques

Department of Civil & Environmental Engineering, University of Strathclyde, 75 Montrose St, Glasgow G1 1XJ, UK
*
Author to whom correspondence should be addressed.
Water 2019, 11(10), 2021; https://doi.org/10.3390/w11102021
Received: 6 September 2019 / Revised: 26 September 2019 / Accepted: 27 September 2019 / Published: 28 September 2019
(This article belongs to the Special Issue Water Resources Management Strategy Under Global Change)

Abstract

Competitive optimization techniques have been developed to address the complexity of integrated water resources management (IWRM) modelling; however, model adaptation under changing environments is still a challenge. In this paper we employ multi-variable techniques to increase confidence in model-driven decision-making scenarios. Here, water reservoir management was assessed using two evolutionary algorithm (EA) techniques, the epsilon-dominance-driven self-adaptive evolutionary algorithm (ε-DSEA) and the Borg multi-objective evolutionary algorithm (MOEA). Many-objective scenarios were evaluated to manage flood risk, hydropower generation, water supply, and release sequences over three decades. Computationally, the ε-DSEA's results are generally reliable, robust, effective and efficient when compared directly with the Borg MOEA, but both provide decision support model outputs of value.
Keywords: self-adaptive technique; many-objective; multi-variable; decision makers; reservoir operation strategy

1. Introduction

Water resource management problems (i.e., surface water and groundwater) are complex due to their non-linear, dynamic, multimodal properties, and require robust solution methods such as optimization algorithms [1] based on evolutionary algorithms (EAs), which are inspired by evolution and the natural selection of species [2,3]. Many such algorithms have been proposed using different techniques, including: the non-dominated sorting genetic algorithm (NSGA-II) [4], the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [5], the indicator-based evolutionary algorithm (IBEA) [6] and differential evolution (DE) [7]. Furthermore, approaches based on swarm intelligence include particle swarm optimization (PSO) [8] and ant colony optimization (ACO) [9], while the annealing process in metallurgy inspired simulated annealing (SA) [10]. A review of EAs and other metaheuristic algorithms and their applications can be found in [11,12]. Examples of these techniques applied to water resources management problems include Hurford et al. [13] and others [14,15,16,17], who used ε-NSGA-II, MOEA/D, Borg MOEA and NSGA-II, respectively, to optimize reservoir management strategies based on multidisciplinary objectives such as flood control, hydropower generation, and water supply.
Benchmark functions such as the DTLZ and WFG series are often used in comparative studies to assess algorithms' performance, as in [18,19,20,21]; however, they are considered straightforward and easy to solve compared with real-world problems [22]. These algorithms often have many parameters that require calibration, which has a major impact on computational performance and the optimality achieved [23,24]. Karafotias et al. [25] presented a review of approaches for the calibration and control of parameters. EA parameter-setting problems fall into two categories: (a) parameter tuning and (b) parameter control. Parameter tuning relates to the initial values of parameters that are set before executing the algorithm; parameter control involves adjusting values during the run [26].
Parameter tuning is applied to parameters such as population size and mutation and crossover rates, and has been discussed in the literature with recommended values proposed [25]. However, some parameters can vary widely and generally need extensive trials to find suitable values for a particular problem. For example, the distribution index for the simulated binary crossover (SBX) operator may vary between 0 and 500 [27]. Similarly, Reynoso-Meza et al. [28] concluded from experimental studies on multi-objective optimization problems that the value of the step size for the differential evolution (DE) operator is case sensitive; it is difficult to set default values for all problems. Parameter control is more important than parameter tuning in genetic algorithms, as the controlled parameters directly affect the algorithms' performance [29]; however, these parameters have received less attention in the literature [25]. These issues reduce decision makers' confidence in using modelled EA results [1]. For example, Ishibuchi et al. [21] demonstrated, based on experimental studies on benchmark test functions, that an algorithm's optimality behaviour may change under different problem environments. Hence, the need for EA models capable of adapting to such conditions is evident.
In this article, a novel approach, the epsilon-dominance-driven self-adaptive evolutionary algorithm (ε-DSEA), is presented using a range of novel techniques, including: (i) diversity expansion; (ii) self-adaptation of the control parameters of the recombination operators; (iii) exploration extension; and (iv) a virtual dominance archive. The algorithm's performance was investigated using a constrained real-world regional water resources management problem. A comparative analysis with the Borg MOEA [30] was carried out, as the Borg MOEA has superior reliability when compared with a range of robust published algorithms [18,31].
The key properties that define the comparison are: (i) reliability, which refers to the replication and consistency of the best solutions achieved [32]; (ii) robustness, which relates to the algorithm's dependable performance in different problem environments [22]; (iii) computational efficiency, i.e., the algorithm's speed of convergence to the non-dominated solutions [33]; and (iv) effectiveness, which refers to the closeness of the achieved solutions to the true Pareto-front, their distribution, and the extent of the dominance front in objective space [34]. The outcome demonstrates the robustness of the proposed ε-DSEA technique in maintaining optimality under different problem environments, which may increase integrated water resources management (IWRM) decision makers' confidence in adopting EA results.

2. Materials and Methods

2.1. Adopted Multi-Objective Optimization Approach

Commonly, real-world optimization problems have multiple objectives. A brief explanation of some of the key concepts associated with multi-objective optimization is provided here. A constrained multi-objective optimization problem may be described briefly as follows [35]:
Minimize: F(x) = [f_1(x), f_2(x), …, f_M(x)]^T    (1)
subject to: g_i(x) ≤ 0, i = 1, …, n_g;  h_j(x) = 0, j = 1, …, n_h;  x ∈ X
where X ⊆ R^n is the decision space, i.e., X = [x^L, x^U], where x = [x_1, x_2, …, x_n]^T is the decision variable vector of dimension n, and x^L and x^U are the vectors of the lower and upper bounds on x, respectively. F(x) consists of M objective functions f_i : X → Z ⊆ R^M, where i = 1, …, M, and Z is the objective space; the feasible region contains all decision vectors in X that satisfy all constraints. g_i(x) and h_j(x) represent the ith of n_g inequality constraints and the jth of n_h equality constraints, respectively. For unconstrained problems, n_g = n_h = 0 and Z = X [30].
The concept of Pareto-dominance [35,36,37] is used widely to characterise the solutions of multi-objective optimization problems, and superior solutions are said to dominate inferior solutions.
Thus:
  • In a minimization problem, a vector u = (u_1, …, u_M)^T is said to dominate another vector v = (v_1, …, v_M)^T if u_i ≤ v_i for i = 1, …, M and u ≠ v. This property may be denoted as u ≼ v.
  • A feasible solution x ∈ X is called a Pareto-optimal solution if there is no alternative solution y ∈ X such that F(y) ≼ F(x).
  • The Pareto-optimal set, PS, is the union of all Pareto-optimal solutions, and may be defined as PS = {x ∈ X : ∄ y ∈ X, F(y) ≼ F(x)}.
  • The Pareto-optimal front, PF, is the set comprising the Pareto-optimal solutions in the objective space. It may be expressed as PF = {F(x) | x ∈ PS}.
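The dominance definitions above translate directly into code. The following is a minimal illustrative sketch (Python is used here purely for illustration; the paper's model was implemented in C, and the function names are ours):

```python
def dominates(u, v):
    """Pareto dominance for minimization: u dominates v if u is no
    worse in every objective and differs in at least one."""
    return all(ui <= vi for ui, vi in zip(u, v)) and tuple(u) != tuple(v)

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points)]
```

For example, among the objective vectors (1, 2), (2, 1) and (2, 2), the first two are mutually non-dominated while the third is dominated by (1, 2).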

2.2. Details of Epsilon-Dominance-Driven Self-Adaptive Evolutionary Algorithm (ε-DSEA) Optimization Algorithm

The algorithm is based on the main principles of multi-objective evolutionary algorithms (MOEAs) e.g. recombination, mutation and dominance sorting. However, novel techniques are included to enhance the algorithm’s ability to handle the complexities of different problem environments. These techniques are:
  • Diversity expansion to increase exploration of the decision variables' search space
  • Self-adaptation of the operators' control parameters for in-process tuning
  • Exploration extension for algorithm revival and coping with stagnation
  • Virtual dominance archive to improve diversity and convergence.
The algorithm employs six recombination operators with different evolution mechanisms inspired by biological genetics (e.g., crossover), which depend on parent chromosomes to generate new chromosomes. These operators are: simulated binary crossover (SBX) [27]; differential evolution (DE) [7]; parent-centric crossover (PCX) [38]; unimodal normal distribution crossover (UNDX) [39]; simplex crossover (SPX) [40]; and uniform mutation (UM) [41]. The resulting offspring from these operators, excluding UM, are mutated by the polynomial mutation (PM) operator [42] to produce the new generation. Geetha and Kumaran [43] reviewed several crossover operators used in evolutionary algorithms. Here, the hyper-box sorting technique (with box dimensions equal to ε) of the objective search space [44] and a non-dominated archive were employed, as in [30,45,46].

2.2.1. Diversity Expansion

The search procedure in an optimization algorithm has two main components, exploration and exploitation. Evidence in the literature indicates that the best results are achieved if exploration and exploitation are deployed preferentially in the early and latter stages of the search, respectively [47,48]. Accordingly, a procedure that safeguards diversity in the population at the start is incorporated in the proposed algorithm, which employs all the available recombination operators at the initial stage. After the initial random seeding, the algorithm uses each recombination operator to generate new offspring, selecting parents from the entire population. If more parents are needed (e.g., in the case of an odd number of parents), they are selected from the population using binary tournament selection. Figure 1 illustrates the procedure by which the parents are selected.
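Binary tournament selection, as used above for choosing additional parents, can be sketched as follows (an illustrative snippet assuming a scalar fitness to be minimized; names are ours, not from the paper):

```python
import random

def binary_tournament(population, fitness):
    """Binary tournament selection: sample two distinct candidates at
    random and return the fitter one (minimization, so lower wins)."""
    a, b = random.sample(range(len(population)), 2)
    return population[a] if fitness[a] <= fitness[b] else population[b]
```

In the multi-objective setting the pairwise comparison would use dominance or archive membership rather than a single fitness value; the scalar version is shown only to illustrate the selection pressure.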

2.2.2. Self-Adaptive Mechanism and Formulae

In each generation, the recombination operators are selected on a competitive basis, according to the proportion of the non-dominated solutions (NDS) in the archive contributed by each operator. Thus, the selection probabilities for the recombination operators are obtained as follows [30,49]:
P_i^NDS = (NDS_i + 1) / Σ_{j=1}^{NRO} (NDS_j + 1),   i = 1, 2, …, NRO    (2)
where P_i^NDS is the probability of selecting the ith recombination operator, NDS_i is the number of solutions in the archive contributed by the ith recombination operator, and NRO is the number of recombination operators. The constant 1.0 is added to avoid probability values of zero.
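Equation (2) can be sketched in a few lines (an illustrative snippet; the per-operator archive counts are the assumed input):

```python
def operator_probabilities(nds_counts):
    """Equation (2): selection probability of each recombination
    operator from its count of archive contributions; the +1 keeps
    every probability strictly positive."""
    total = sum(n + 1 for n in nds_counts)
    return [(n + 1) / total for n in nds_counts]
```

With six operators and an empty archive this yields the uniform 1/NRO starting probabilities described below; as one operator accumulates archive contributions, its share grows proportionally.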
However, an operator's dominance achievement is sensitive to the relevant parameter setting. This parameter control problem was classified into three categories depending on the way the parameter variation is accomplished [26]: (a) deterministic, (b) adaptive, and (c) self-adaptive. Deterministic control is based on rules that are specified a priori [50,51]. In self-adaptive control, the parameters may be encoded to evolve in the genotype such that, for example, mutation and recombination are applied to the parameters as well as the decision variables [52,53], which extends the search space to cover the parameter values and consumes more time during the optimization process [26]. The adaptive method (b) is considered more effective in solving complex problems, as feedback from the optimization process is used to adjust parameter values as the optimization progresses [26,54]. This technique has been adopted by many researchers to improve algorithms' performance, as in [30,49,55,56,57]; however, none of these (and others) developed a self-adaptive technique that is sensitive to optimality achievement during the evaluation process [31].
The success of any operator depends on the chosen values of the parameters that directly affect its performance. Any operator may lead an algorithm to suboptimal solutions because of unsatisfactory parameter calibration. However, parameter calibration is extremely challenging. This difficulty provided the motivation for establishing a dynamic relationship between the values of the control parameters of the recombination operators and their relative effectiveness, to obviate the need for fine tuning. The efficiency of the optimization algorithm is thus improved by continuously seeking to improve the collective effectiveness of the recombination operators. In other words, the formulation developed herein allows the values of the control parameters of each recombination operator to improve adaptively based on the success of the recombination operator compared to the rest of the recombination operators.
Table 1 shows the lower and upper bounds of the operator control parameters. If an operator's ability to contribute offspring to the dominance archive decreases, its selection probability P_i^NDS will decrease according to Equation (2). In turn, the values of the relevant control parameters decrease, so that the recombination operator's ability to contribute new offspring to the archive can improve.
It is worth noting that, initially, all the recombination operators have an equal selection probability (P_i^NDS) of 1/NRO. During the evaluation process, the P_i^NDS value for each recombination operator changes along with its control parameters, according to its contribution to the dominance archive. If any recombination operator is relatively unsuccessful, its selection probability (P_i^NDS) and control parameters will decrease, while the selection probabilities of some or all of the other recombination operators will increase together with the values of their control parameters. In this way, a dynamic equilibrium is maintained among the operators' selection probabilities, which in turn regulates the operator control parameters. A set of formulae was developed to couple the parameter tuning with the relevant dominance attainment. The parameter tuning domains (i.e., tuning ranges) were set based on default or recommended values suggested in the literature and on experimental investigations carried out on common test functions, as illustrated in Table 1.
Figure 2a illustrates the relationships between the operators' dominance attainment and their control parameter values, and shows how the operators' parameters are auto-tuned according to each operator's success in producing non-dominated solutions for the dominance archive.

2.2.3. Exploration Extension Mechanism

This mechanism is based on re-initializing (resetting) all the operators' selection probabilities P_i^NDS uniformly to 1/NRO. It aims to provide an equal opportunity for all the operators by assessing their performance based on the most recent results. Otherwise, the previously successful operators, with more solutions in the archive, would continue to dominate based on past performance, as dictated by Equation (2).
The number of resets depends on a random integer N_r ∈ {1, 2, 3}. When the algorithm starts, an N_r value is selected at random, and the maximum permissible number of function evaluations NFE_max is divided by N_r + 1 to determine the reset interval E_r. For example, if NFE_max = 300,000 and N_r = 2, a reset occurs every E_r = 100,000 function evaluations; hence, in this case, two resets occur during the entire optimization. Formally,
E_r = NFE_max / (N_r + 1),   N_r ∈ {1, 2, 3}    (3)
where E_r is the reset interval.
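The reset schedule above can be sketched as follows (an illustrative snippet; the optional nr argument is ours, added only to make the example deterministic):

```python
import random

def reset_points(nfe_max, nr=None):
    """Evaluation counts at which the operator selection probabilities
    are reset. Nr is drawn uniformly from {1, 2, 3} if not supplied."""
    if nr is None:
        nr = random.randint(1, 3)
    er = nfe_max // (nr + 1)          # reset interval E_r
    return [er * k for k in range(1, nr + 1)]
```

For the worked example in the text, reset_points(300000, nr=2) gives resets at 100,000 and 200,000 evaluations.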
Figure 2b shows an example of the resetting process and its relation to the self-adaptive mechanism, which extends the algorithm's exploration and helps it escape from possible local optima.

2.2.4. Virtual Dominance Archive

In the early stages of the evaluation process for constrained problems with a large number of decision variables, the ε-dominance archiving technique (Section 2.2) tends to maintain only the non-dominated solutions in the dominance archive. Experimental tests on such problems show that only one non-dominated solution is maintained in the archive while the design space is explored for feasible solutions. Hence, with the proposed self-adaptive mechanism, the operators' parameters would remain at their minimum values during this stage of the evaluation process. To overcome this issue, a virtual dominance archive was developed by randomly generating a virtual number of dominance solutions for the selected operator, to preserve diversity and early exploration for feasible solutions using the entire parameter domain.

2.2.5. Constraint Handling Strategy

In their basic form, evolutionary algorithms handle only unconstrained optimization problems [35]; many techniques have been proposed for constrained problems, such as penalty functions, special representations and operators, and repair methods [58]. Here, the penalty function technique is adopted as follows [59]:
F̂(x) = f(x) + P(x)    (4)
where F̂(x) is the expanded objective function and P(x) is the constraint violation amount, which, based on Equation (1), can be expressed as:
P(x) = Σ_{i=1}^{I} A_i · max(0, g_i(x))² + Σ_{j=1}^{J} B_j · |h_j(x)|    (5)
where A_i and B_j ∈ R⁺ are penalty factors, and I and J are the total numbers of inequality and equality constraints, respectively.
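The penalty expansion above can be sketched as follows (an illustrative snippet; the penalty factor values are assumptions, not values from the paper):

```python
def penalized_objective(f, ineq, eq, x, A=1000.0, B=1000.0):
    """Expanded objective: f(x) plus quadratic penalties on inequality
    violations (g(x) > 0) and absolute penalties on equality
    violations. A and B are illustrative penalty factors."""
    p = sum(A * max(0.0, g(x)) ** 2 for g in ineq)
    p += sum(B * abs(h(x)) for h in eq)
    return f(x) + p
```

For example, minimizing f(x) = x² subject to x ≥ 1 (encoded as g(x) = 1 − x ≤ 0) leaves feasible points unpenalized while infeasible points pay the quadratic violation cost.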

2.3. Comparative Paradigms

Many types of MOEA paradigms have been introduced in the literature, including many-objective algorithms [12,60,61,62]; however, previous algorithms' design principles were often adopted in developing new algorithms [61], as in ε-MOEA [45] and ε-NSGA-II [63], which employed the ε-dominance sorting proposed by Laumanns et al. (2002) on the original versions of MOEA [64] and NSGA-II [4]; MOEA/D [5] also employed decomposition on the original MOEA.
MOEAs' effectiveness is commonly measured using quantitative metrics such as the hypervolume metric [65], which evaluates the hypervolume of the non-dominated solutions, and the generational distance metric [66], which measures the average distance between the dominance solutions and the closest Pareto-front set. However, these metrics (and others) may provide misleading results, and most of their design principles depend on the true Pareto-front, which is unknown in real-world problems [22].
Accordingly, the comparative assessment of ε-DSEA is based on a real-world engineering problem. Here, the state-of-the-art Borg MOEA [30] was adopted for comparative purposes, since it outperforms or matches other state-of-the-art algorithms, such as ε-MOEA, ε-NSGA-II, MOEA/D, GDE3, OMOPSO, IBEA, NSGA-II and AMALGAM [18,30,67,68,69,70]. Borg employs many MOEA design principles from previous works, such as recombination, mutation, and dominance sorting (e.g., ε-box). The authors presented novel techniques to improve the exploration and exploitation processes, including: an ε-progress indicator of stagnation and improvement, population expansion to preserve diversity in exploration, multiple recombination operators for search variation, and self-adaptive operator selection. A concise description of these techniques is presented below; more details are available in the aforementioned literature.
Borg MOEA uses an active population of solutions and an external archive that stores dominant solutions, and the population size is proportional to the archive size. Initially, the archive is empty; hence an initial population size is required. Subsequently, the population size changes as follows [30]
γ = N_P / N_A,   γ ≥ 1    (6)
where N_P and N_A are the population and archive sizes, respectively, and γ is the ratio of the population size to the archive size, set equal to 4 [30].
The ε-progress index measures the improvement while searching for new solutions. If the algorithm finds new dominant solutions in a previously unoccupied ε-box (i.e., the new dominant solutions have different ε-box indices), there is improvement; otherwise, a no-improvement flag marks a sign of stagnation. If stagnation persists for a number of evaluations, a revival process named "restart" is triggered to escape from possible local optima. The restart involves emptying the population and re-populating it based on the population-to-archive ratio (Equation (6)). The population is refilled using all the solutions in the archive; any remaining empty slots are filled with solutions created by uniform mutation of solutions selected randomly from the archive.
The trigger for the revival process depends on any of the following three conditions:
  • If there is no change in the archive size for a certain number of evaluations;
  • If there is no improvement indicated by the ε-progress indicator; and
  • If the current population to archive ratio exceeds 1.25×γ
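The three trigger conditions can be sketched as a simple predicate (an illustrative snippet; the stall window here is an assumed value, not Borg's actual default):

```python
def needs_restart(evals_since_archive_change, evals_since_eps_progress,
                  pop_size, archive_size, gamma=4.0, stall_window=1000):
    """Sketch of the three restart triggers listed above."""
    if evals_since_archive_change >= stall_window:
        return True      # (1) archive size unchanged for too long
    if evals_since_eps_progress >= stall_window:
        return True      # (2) no epsilon-progress for too long
    if pop_size > 1.25 * gamma * archive_size:
        return True      # (3) population-to-archive ratio exceeds 1.25*gamma
    return False
```

The third condition fires when the population has grown out of proportion to the archive, which is the same ratio that governs the re-population step (Equation (6)).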
Borg follows the same crossover and mutation techniques mentioned in Section 2.2, and employs Equation (2) for self-adaptive operator selection. In Borg, however, the operators' parameters keep fixed, pre-execution values during the evaluation process.

2.4. Identification of a Real-World Experimental Test Problem

A case study in Iraq's Diyala river basin was adopted as a real-world IWRM problem, which is more complex than common benchmark test functions [22]. The Global Water Partnership (GWP) defines IWRM as "a process which promotes the co-ordinated development and management of water, land and related resources, in order to maximize the resultant economic and social welfare in an equitable manner without compromising the sustainability of vital ecosystems" [71]. Authors and institutes adopt different water management concepts (about 41 possible variant explanations of the term "integrated") due to the generality of the IWRM definition. Some examples are: water supply and water demands; surface water and groundwater; water quantity and water quality; urban and rural water issues; and government and NGOs (non-governmental organizations) [72,73]. The river basin has two multipurpose dams: Derbendikhan, just inside the northern international border in the Sulaymaniya governorate, and Himren, in the middle part of the basin in the Diyala governorate (Figure 3). Here, the Derbendikhan dam's operation strategy for the next three decades was selected as a benchmark problem. Based on a monthly dataset from 1981 to 2012 (33 years), a total of 396 decision variables (reservoir releases) need to be managed over the time-scale. Generating hydropower is the main current operation target; hence, the power penstocks (tunnels) are the main reservoir outlet in the proposed management model.

2.4.1. Objective Function Formulae

The reservoir water budget is governed by the water balance equation:
S_{t+1}^D = S_t^D + I_t^D − R_t^D − E_t^D + P_t^D − SE_t^D + GR_t^D,   t = 1, 2, …, T    (7)
where S_t^D and S_{t+1}^D are the reservoir storage at times t and t+1; I_t^D and R_t^D are the reservoir inflows and releases, respectively; E_t^D is the evaporation loss from the reservoir surface; P_t^D is the direct rainfall on the reservoir; and SE_t^D and GR_t^D are the seepage losses and groundwater recharge from the reservoir, respectively.
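The water-balance recursion above can be stepped forward in time as follows (an illustrative sketch; all terms are volumes per period, e.g., million m³/month, and the names are ours):

```python
def simulate_storage(s0, inflow, release, evap, rain, seepage, recharge):
    """Step the reservoir water balance forward: storage at t+1 equals
    storage + inflow - release - evaporation + rainfall - seepage
    + groundwater recharge."""
    storage = [s0]
    for t in range(len(inflow)):
        storage.append(storage[t] + inflow[t] - release[t] - evap[t]
                       + rain[t] - seepage[t] + recharge[t])
    return storage
```

In the case study, such a simulation would run over T = 396 monthly steps, with the 396 releases as the decision variables.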
The reservoir operation strategy (F^D) is represented by the following multi-objective (or many-objective, for more than three objectives) formulation:
min F^D = (f_winter^D, f_summer^D, f_power^D, f_DelSW, f_reg^D)    (8)
where f_winter^D maximizes winter storage to fulfil summer demands; f_summer^D minimizes summer storage to absorb an expected flood wave in the following season; f_power^D maximizes hydropower generation; f_DelSW minimizes the agricultural projects' water deficit; and f_reg^D minimizes release fluctuations. These targets represent the following aspects: social (f_winter^D and f_summer^D); economic (f_power^D and f_DelSW); and environmental (f_reg^D).
The details of these objective functions are as follows:
min f_winter^D = Σ_{t=1}^{T_W} ((S_max^D − S_t^D) / S_max^D)² + CP,   t = 1, 2, …, T_W    (9)
min f_summer^D = Σ_{t=1}^{T_S} ((S_t^D − S_minp^D) / S_max^D)² + CP,   t = 1, 2, …, T_S    (10)
min f_power^D = Σ_{t=1}^{T} ((Pw_max^D − Pw_t^D) / Pw_max^D)² + CP,   t = 1, 2, …, T    (11)
min f_DelSW = Σ_{t=1}^{T} ((PD_t − Del_t^M) / PD_max)² + CP,   t = 1, 2, …, T    (12)
min f_reg^D = Σ_{t=1}^{T−1} ((R_t^D − R_{t+1}^D) / R_max^D)² + CP,   t = 1, 2, …, T−1    (13)
CP = A × Σ_{i=1}^{NC} g_i,   A ≥ 1,   i = 1, 2, …, NC    (14)
where:
S_max^D = maximum allowable reservoir storage
S_minp^D = minimum allowable reservoir storage for hydropower generation
T_W, T_S and T = winter, summer and total operation periods, respectively
Pw_t^D = hydropower generation at time t
Pw_max^D = maximum hydropower generation
PD_t = projects' water demands at time t
PD_max = maximum projects' water demands
Del_t^M = delivered water at time t
R_max^D = maximum reservoir release
CP = penalty term comprising all constraint violations of the model
NC = number of constraints
g_i = penalty function for the ith constraint
A = a positive real number
The hydropower generation can be expressed as:
Pw_t^D = η_e^D · γ_w · Q_t^{tu,D} · H_t^{n,D}    (15)
where Q_t^{tu,D} is the turbine discharge, H_t^{n,D} is the net head between the reservoir level and the tailwater level downstream of the power plant, η_e^D is the power plant efficiency, and γ_w is the specific weight of water.
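The hydropower relation can be evaluated as follows (an illustrative sketch; the units and the efficiency value are our assumptions, not figures from the paper):

```python
def hydropower_kw(q_turbine_m3s, net_head_m, efficiency):
    """P = eta * gamma_w * Q * H, with the specific weight of water
    gamma_w = 9.81 kN/m^3, so Q in m^3/s and H in m give power in kW."""
    gamma_w = 9.81  # kN/m^3
    return efficiency * gamma_w * q_turbine_m3s * net_head_m
```

For instance, a turbine discharge of 100 m³/s under a 50 m net head at unit efficiency corresponds to 9.81 × 100 × 50 ≈ 49,050 kW, i.e., within the plant's stated operating range.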

2.4.2. Reservoir System Constraints

The reservoir storage is limited between the minimum and maximum allowable storage, 283.48 ≤ S_t^D ≤ 2572.0 (million cubic meters); the water level head (H_t^D) should be ≥ 434.0 m.a.s.l.; the power generation must be less than 249,000 kW and greater than 16,000 kW; and the release is limited to 51.84 ≤ R_t^D ≤ 878.6 million cubic meters/month. Hence, the penalty functions (g_i) can be expressed as:
g_1 = Σ_{t=1}^{T} max[0, −(S_t^D − 283.48)]    (16)
g_2 = Σ_{t=1}^{T} max[0, −(2572.0 − S_t^D)]    (17)
g_3 = Σ_{t=1}^{T} max[0, −(H_t^D − 434.0)]    (18)
g_4 = Σ_{t=1}^{T} max[0, −(Pw_t^D − 16000)]    (19)
g_5 = Σ_{t=1}^{T} max[0, −(249000 − Pw_t^D)]    (20)
g_6 = Σ_{t=1}^{T} max[0, −(R_t^D − 51.84)]    (21)
g_7 = Σ_{t=1}^{T} max[0, −(878.6 − R_t^D)]    (22)
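The paired lower/upper-bound penalties on a time series such as storage or release can be accumulated as follows (an illustrative sketch consistent with the stated bounds; names are ours):

```python
def bound_penalties(series, lower, upper):
    """Accumulated violation of a double-bounded quantity over the
    operating horizon: total shortfall below the lower bound and total
    excess above the upper bound."""
    below = sum(max(0.0, lower - v) for v in series)
    above = sum(max(0.0, v - upper) for v in series)
    return below, above
```

Applied to the storage bounds, for example, a trajectory dipping to 280 and peaking at 2600 million m³ would accumulate a 3.48 shortfall and a 28.0 excess, which then enter the penalty term CP.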

2.5. Computational Properties

The computational parameters of the problem were 2.0 × 10⁶ function evaluations, with ε = 0.1 for three objectives and ε = 0.5 for five objectives, and 20 and 10 runs, respectively, for both algorithms. The minimum population size was 100 and the maximum was 1000. A Dell OptiPlex 780 computer was used (Core 2 Duo E8400, 2 × 3.0 GHz, 8.0 GB RAM, Ubuntu 16.04 operating system). Table 2 shows the parameter values used for both algorithms. The model was implemented as a program written in the C language.

3. Results

3.1. Performance Achievement

3.1.1. Algorithms’ Reliability

Figure 4 illustrates the Pareto-fronts for 20 replicated random runs of the case study benchmark problem for both algorithms using three objectives. Although both algorithms converged to possible optima, ε-DSEA shows better reliability. Notably, Borg MOEA faced challenges in six trials: it stagnated in four and had delayed convergence in two. Some of these runs contain only one solution, as highlighted with dotted lines, and some others have a discontinuous Pareto-front (with gaps) in the objective search space. This behaviour reduces an algorithm's reliability in producing near-optimal solutions over replicated random executions (for example, when testing the confidence of the model output), and is a key factor when solving more complex problems using high-performance computing resources (e.g., parallel processing with multiple cores). Conversely, ε-DSEA shows reliability over the 20 runs in converging to the possible optimal solutions, with a generally continuous Pareto-front. Hence, less randomness creeps into the runs, increasing the confidence that optimal solutions can be achieved.
A high-dimension problem (5 objectives) was employed for further assessment of the algorithms, as shown in Figure 5, which presents the median of the optimum solutions for 10 trials of both algorithms. Notably, both algorithms produce possible optimum solutions, but ε-DSEA has slightly more reliable trends over repeated executions, supported by the self-adaptive parameter technique used in this EA, which was also confirmed by [74] for 20 runs. Closer investigation shows that Borg MOEA was stuck in local optima twice (runs 3 and 7), and adapted with the PCX operator for all trials, as in the three-objective scenario; while ε-DSEA adapted at the initial stage with the SBX operator, then with the PCX and SPX operators in parallel for the rest of the evaluation process (Figure S1 in the supplementary data). The effectiveness of the resetting technique is evident in changing the trend of the operators' adaptation to escape from the pitfall of local optima. Execution trial No. 4 shows competitive achievement by both algorithms, and may be considered for comparative investigation.
Hence, the proposed mechanism provides improved diversity and a balance between the exploration and exploitation processes toward the possible Pareto-front set.

3.1.2. Algorithms’ Robustness and Efficiency

In ε-DSEA, non-dominated feedback loops control the operators' adaptation and their parameters. Figure 6a illustrates the self-adaptive operator parameter-tuning behaviour of ε-DSEA during the evaluation process. The most effective operators adopted to generate dominance solutions for the best trial were SBX, PCX, and SPX. Initially, the virtual dominance archive mechanism tuned the operators' parameters while only one solution was kept in the dominance archive. The SBX operator was then adopted until the first resetting trigger at 5.0 × 10⁵ function evaluations. The PCX operator then became involved, with its variation parameters (σ_η and σ_ζ) increasing to about 0.15. The SPX operator also became involved at the same time, when its parameter (λ) changed to about 2.7. Both the PCX and SPX operators competed to explore dominance solutions until the third resetting trigger, after which the SPX operator began to generate more dominance solutions in the dominance archive. Increasing the PCX and SPX parameters generates new offspring farther away from their parents, which increases the algorithm's exploration of the design search space.
The algorithms' convergence (efficiency) was also investigated using the development of the decision variable vector magnitude (X_dv) in the dominance archive during the evaluation process, where X_dv = √(x_1² + x_2² + x_3² + ⋯ + x_n²) and x_1 to x_n are the decision variables. Based on the best solution achieved, Figure 6b shows the X_dv convergence of both algorithms. Both achieved early convergence, but ε-DSEA converged faster; hence, ε-DSEA's efficiency was endorsed in the proposed test problem.
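Reading X_dv as the Euclidean norm of the decision-variable vector, the indicator is a one-liner (an illustrative sketch):

```python
import math

def decision_vector_magnitude(x):
    """Euclidean norm of the decision-variable vector, used as the
    scalar X_dv convergence indicator."""
    return math.sqrt(sum(v * v for v in x))
```

Tracking this single scalar across function evaluations allows the 396-variable release vectors in the archive to be compared on one convergence plot.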
The progress of the objectives' convergence of both algorithms over 10 iterations of the high-dimension problem is presented in Figures S2 and S3 in the supplementary data. Early convergence was achieved by both algorithms: ε-DSEA converged at 1.25 × 10⁴ function evaluations for all iterations, while Borg MOEA converged at 25 × 10⁴. The ε-DSEA thus needs less execution time to achieve solutions, an achievement that is significant where computational resources (e.g., CPU, RAM, etc.) are limited. Furthermore, Borg MOEA suffered significant and interim stagnation in seven trials (2, 3, 7, 9, and 4, 6, 10, respectively) in the early stage of the evaluations; only three of the 10 trials maintained improvement of the dominance solutions over the entire evaluation. The adaptation of the PCX operator with fixed parameters, and the repetitive recycling of the archive's dominance solutions, may restrict the extent of the algorithm's exploration of the design search space. Conversely, only one ε-DSEA trial (no. 9) suffered significant stagnation, and the diversity expansion and resetting techniques succeeded in reviving the algorithm's exploration to find new dominance solutions for the dominance archive. The robustness of ε-DSEA in escaping from local optima is evident. Figure 7 shows trial no. 4 as a sample of the convergence progress, since both algorithms achieved competitive solutions (based on Figure 5).

3.1.3. Algorithms’ Effectiveness

For real-world multi-objective problems, and especially water resources management problems, the true Pareto-front (i.e., the optimum solution set) is unknown [22], so it is difficult to measure an algorithm's effectiveness; other relevant factors should also be evaluated, such as the coverage of the Pareto-front and its extent in the objective space [34]. Hence, the qualitative comparison here was based on the best solution achieved over several replicated trials (e.g., 20 or more runs). The results show that both EA models are reliable, with better computational performance from ε-DSEA.

3.2. Strategic Achievement

Table 3 analyses both algorithms' achievement based on the best optimum solution for maximising hydropower generation, one of the dam's main operational targets. The gross sums of the reservoir's hydropower, storage, and releases are presented to contrast the two algorithms' achievements under their respective optimization techniques. The results of the two are in close agreement, with a slight advantage to ε-DSEA.
Figure 8a,b depict the algorithms' attainment over 10 multi-objective, multi-variable trials and show that the consistency of ε-DSEA is marginally better than Borg's, notably over months 1 to 216. The same behaviour was achieved in the following period, reflecting ε-DSEA's ability to generate competitive optimal solutions with fewer replicated trials.
Consistency with the relevant historical (actual) dataset (i.e., solution quality) should also be reviewed during decision-making trials, as in Figure 8c,d. Both algorithms achieved competitive results, but ε-DSEA's result reacts better to flood waves and agrees better with the historical data. Spillway discharge was not included in the models or the developed management model, since flood-wave events in the investigated region usually last hours to a couple of days; accordingly, no sensitive reaction to them was observed, as the model adopted monthly average management.
As a source of renewable energy, hydropower generation is one of the key operational targets of the tested real-world problem, and of any multipurpose dam project, and it needs to be carefully managed under different operation scenarios, such as flood risk management. Two objectives were adopted, f_winter^D and f_summer^D; the latter was selected for detailed investigation, as it is the most critical operation scenario and may affect other operational targets. Figure 9a,b illustrate the solution distribution density achieved by both algorithms over the power generation domain. In general, ε-DSEA produced a high frequency of solutions between 30 and 50 MW with a gentle decline thereafter, while Borg MOEA's frequency declines more steeply from 30 MW onwards. The ε-DSEA achievement offers insight for investment decision making, as a minimum power generation of 30 MW could be guaranteed for the next three decades.
Hydropower generation depends on two variables, turbine discharge and net water head, as in Equation (15) (turbine efficiency and the specific weight of water are assumed constant). Figure 9c,d illustrate the hydropower generation solution space achieved by both algorithms. Both achieved a competitive distribution over the solution space for turbine discharges ≤600 MCM, while ε-DSEA achieved better exploitation of the space for greater values (>600 MCM). Borg tends to maximise power generation by releasing more water, but its solutions are irregularly deployed (clustered), possibly due to a local-optima pitfall.
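Equation (15) is not reproduced in this excerpt, but the standard hydropower relation P = η·γ·Q·H_n, with efficiency η and specific weight γ held constant as the authors state, can be sketched as follows. The 0.9 efficiency default and the numeric example are illustrative assumptions, not values from the study.

```python
def hydropower_kw(q_m3s: float, net_head_m: float,
                  efficiency: float = 0.9, gamma_kn_m3: float = 9.81) -> float:
    """Standard hydropower relation P = eta * gamma * Q * Hn.

    q_m3s: turbine discharge (m^3/s); net_head_m: net head (m);
    gamma_kn_m3: specific weight of water (kN/m^3), so P is in kW.
    Efficiency and specific weight are treated as constants, per the paper;
    the 0.9 default is an assumed, illustrative value.
    """
    return efficiency * gamma_kn_m3 * q_m3s * net_head_m

# A monthly release of 600 MCM corresponds to an average discharge of
# roughly 231.5 m^3/s over a 30-day month:
q_avg = 600e6 / (30 * 24 * 3600)
print(hydropower_kw(q_avg, 80.0) / 1000.0)  # power in MW
```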
For example, in the region of <80 m net head (H_n^D) and >600 MCM releases (Figure 9c,d), Borg achieved only five solutions, formed as two groups (green colour), which is not the case for ε-DSEA.
Recreation is another target to optimize for revenue. Figure 10 shows the reservoir surface area achieved by both algorithms over the considered time-scale of the f_summer^D scenario. The mean and median surface areas were about 52 and 49 km² for ε-DSEA and 51 and 44 km² for Borg, respectively. The small difference between these values indicates competitive result distributions; moreover, ε-DSEA achieved more solutions above these values. Hence, the project's revenue could be improved even in such a critical scenario.

4. Discussion

4.1. Algorithms’ Optimization Techniques

The Borg MOEA technique tends to adapt based on operator experience after finding feasible solutions. The SBX operator was adopted in the early stage of the evaluation process, then the PCX operator was adopted to the end. Zheng et al. [48] observed that, for two-objective problems, Borg MOEA tended to converge prematurely and population diversity decreased relatively rapidly. This is because Borg MOEA does not maintain a separate transient sub-population of offspring as NSGA-II does. An offspring that dominates its parents immediately replaces one of them, and the parent to be replaced is chosen at random. As new solutions are introduced, the selection pressure on less competitive solutions increases, due to the binary tournament selection used for crossover. Fitter solutions have a higher probability of selection for crossover, leading to more exploitation, less exploration, and thus less diversity. Secondly, the injection trigger, which depends mainly on ε-progress indicators, did not always succeed in reflecting stagnation during the evaluation process. Thirdly, the PCX operator produces offspring in the vicinity of the parents. If the PCX operator creates solutions around the best solutions, the PCX-generated solutions quickly dominate the archive, leading to more exploitation, less exploration, and consequently a relatively rapid loss of diversity. As stated previously, the recombination operators are deployed in proportion to the number of offspring they have contributed to the archive.
In Borg MOEA, the operator that produces more successful (i.e., non-dominated) offspring is deployed more frequently. However, as the search progresses and the balance between exploration and exploitation gradually shifts, it is desirable that operators be deployed based on the current status of the search rather than their previous performance or cumulative successes. In other words, operator selection should also recognize current performance. Hence, the proposed performance assessment of the operators relies on results from the current phase of the search rather than the cumulative performance to date.
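The contrast between cumulative credit assignment and the phase-based assessment advocated here can be sketched as follows. The operator counts and the small selection floor are illustrative assumptions, not values taken from either algorithm.

```python
def selection_probs(credits, floor=1):
    """Deploy operators in proportion to their credited archive offspring.

    A small additive floor keeps every operator selectable even when it
    has contributed no archive members recently.
    """
    weights = [c + floor for c in credits]
    total = sum(weights)
    return [w / total for w in weights]

# Cumulative credit favours the historically dominant operator, while a
# recent-window credit reflects the current phase of the search:
cumulative = [120, 15, 5]   # archive offspring since the start of the run
recent = [4, 10, 6]         # archive offspring in the current search phase
print(selection_probs(cumulative))
print(selection_probs(recent))
```

With cumulative credit the first operator dominates selection; with recent credit the second operator, currently the most productive, is deployed most often.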
The novel virtual dominance archive technique used in ε-DSEA removes operator bias and extends the algorithm's exploration by tuning the operators' parameter controls within the specified domains, resulting in robust convergence progress at the initial stage of the evaluation process. Similarly, resetting these parameter values during the evaluation process helps the algorithm escape the local-optima pitfall and improves its ability to find new non-dominated solutions.

4.2. Water Resources Management Case Study

Although both algorithms found possible optimum solutions, ε-DSEA generates more robust results. Based on the gross sums of the five-objective problem (Table 3), an extra 520 MW of hydropower generation and an extra 46.31 MCM of reservoir storage were achieved. The mean and median water heads achieved by ε-DSEA were about 1.5 m higher than those of Borg MOEA in all cases. This provides added security against possible dam failure in the future, as the water level should be above 455.0 m.a.s.l. [75]. Although this issue was not considered in the current model as an objective or a constraint, all the relevant mean and median values achieved by both algorithms satisfy the restriction. This area should be considered in future work.
Figure 11a,b illustrate the quality of the reservoir water levels (m.a.s.l.) produced by both algorithms over 10 runs, based on maximizing hydropower generation. Notably, ε-DSEA generated more results ≥460.0 and fewer ≤440.0. The same achievement was observed in the critical scenario of minimizing storage in summer, represented by f_summer^D, as in Figure 11c,d; here too, ε-DSEA produced better results than Borg MOEA with respect to the potential risk. Moreover, the harmony of the result distributions over the 10 trials is evident, endorsing the reliability of ε-DSEA.
Within this framework, a good exploration-exploitation balance was attained in the slave (secondary) solution search spaces of hydropower generation and reservoir surface area (Figure 9). Although the master decision variables relate to reservoir releases, with the other dependent variables (e.g., power generation, storage, area) calculated accordingly, the complete optimality achievement maps onto those sub-variables. This is not the case in other optimization studies, where the comparative investigation often covers only the master decision variables and/or the objective search space, whether in test functions or real-world benchmarks. In-depth diagnostic assessment considering indirect variables should be used in such comparative studies.
The results suggest that water resources decision makers should implement different optimization algorithms, especially when solving multi- or many-objective problems, to explore possible new optimal results of higher quality and to reinforce the reliability of the results. The ε-DSEA was previously assessed on more complex water resources problems [74,76]; however, further assessment on different problem environments is recommended.

5. Conclusions

In this research, strategic planning of water resources under competing optimization techniques was investigated. Decision makers often adopt optimization techniques to evaluate a wide range of competing water resources management decisions. However, past studies demonstrate that an algorithm's optimal output varies with the problem environment. Furthermore, the quality of slave (dependent) variables is often not analysed in depth, although it can play an important role in improving a project's economic success. Here, a comparative assessment of two optimization techniques was performed on a real-world water resources strategy. The performance of the Borg MOEA and the ε-DSEA was contrasted on the relevant strategic plans using the objective functions (i.e., the Pareto-front) and the master and slave variables.
Both models found possible optima, with greater reliability and robustness from ε-DSEA, which consistently provided results closer to near-optimal solutions. Both algorithms have auto-adaptive operator techniques, but Borg settles on a single operator after a certain number of evaluations (e.g., PCX), while ε-DSEA sequences multiple operators through the evaluation stages (e.g., starting with the SBX operator and ending with the PCX and SPX operators in parallel). Borg MOEA's revival techniques should address this drawback, as early-stage stagnation is evident, especially in many-objective problems. The ε-DSEA escapes local optima by employing stage-wise operator parameters, which may be useful in real-world problem solving.
The complete achievement of ε-DSEA was mapped onto the relevant water resources strategic plan. In all adopted scenarios of the real-world case study, the results show that extra hydropower, reservoir storage, and surface area can be achieved. The releases are more consistent with, and more sensitive to, the historical dataset and flood waves. The model outputs can be used to manage power generation to support, for example, an investment opportunity, while still promising recreation investment opportunities by maintaining a larger reservoir surface area over the adopted time-scale. The results demonstrate the importance of in-depth analysis of the relevant objectives and variables when using EA models.
The ε-DSEA and the related approach could be evolved for similar or more complex real-world problems, such as groundwater management, water supply systems, and water allocation, by adding and/or modifying objective functions, decision variables, and constraints.

Supplementary Materials

The following are available online at https://www.mdpi.com/2073-4441/11/10/2021/s1: Figure S1: Active ε-DSEA operators’ selection probability achievement over 10 runs using five-objectives engineering problem, Figure S2: Convergence progress of dominance solutions during evaluation process of engineering problem for 10 trials using Borg MOEA, Figure S3: Convergence progress of dominance solutions during evaluation process of engineering problem for 10 trials using ε-DSEA.

Author Contributions

Conceptualization, J.Y.A.-J.; Methodology, J.Y.A.-J.; Writing—original draft preparation, J.Y.A.-J.; Writing—review and editing, R.M.K.; Validation, R.M.K.

Funding

This research was funded by the Iraqi Ministry of Higher Education and Scientific Research (MHESR)/University of Baghdad, grant number 2012-2013.

Acknowledgments

The authors thank Professor Patrick Reed and David Hadka for providing the source code for Borg MOEA.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Maier, H.; Kapelan, Z.; Kasprzyk, J.; Matott, L. Thematic issue on Evolutionary Algorithms in Water Resources. Environ. Model. Softw. 2015, 69, 222–225. [Google Scholar] [CrossRef]
  2. Holland, J.H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence; University of Michigan Press: Ann Arbor, MI, USA, 1975. [Google Scholar]
  3. Schaffer, J.D. Multiple objective optimization with vector evaluated genetic algorithms. In Proceedings of the 1st International Conference on Genetic Algorithms; 1985; pp. 93–100. [Google Scholar]
  4. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  5. Zhang, Q.; Li, H. MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE Trans. Evol. Comput. 2007, 11, 712–731. [Google Scholar] [CrossRef]
  6. Zitzler, E.; Künzli, S. Indicator-Based Selection in Multiobjective Search. In Parallel Problem Solving from Nature—PPSN VIII; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3242, pp. 832–842. [Google Scholar]
  7. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359. [Google Scholar] [CrossRef]
  8. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar]
  9. Dorigo, M.; Stützle, T. Ant Colony Optimization; Bradford Company: Scituate, MA, USA, 2004. [Google Scholar]
  10. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  11. Coello, C.A.C.; Lamont, G.L.; van Veldhuizen, D.A. Evolutionary Algorithms for Solving Multi-Objective Problems, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  12. Zhou, A.; Qu, B.-Y.; Li, H.; Zhao, S.-Z.; Suganthan, P.N.; Zhang, Q. Multiobjective evolutionary algorithms: A survey of the state of the art. Swarm Evol. Comput. 2011, 1, 32–49. [Google Scholar] [CrossRef]
  13. Hurford, A.P.; Huskova, I.; Harou, J.J. Using many-objective trade-off analysis to help dams promote economic development, protect the poor and enhance ecological health. Environ. Sci. Policy 2014, 38, 72–86. [Google Scholar] [CrossRef]
  14. Qi, Y.; Bao, L.; Ma, X.; Miao, Q.; Li, X. Self-adaptive Multi-objective Evolutionary Algorithm based on Decomposition for Large-scale problems: A Case Study on Reservoir Flood Control Operation. Inf. Sci. 2016, 367–368, 529–549. [Google Scholar] [CrossRef]
  15. Salazar, J.Z.; Reed, P.M.; Quinn, J.D.; Giuliani, M.; Castelletti, A. Balancing exploration, uncertainty and computational demands in many objective reservoir optimization. Adv. Water Resour. 2017, 109, 196–210. [Google Scholar] [CrossRef]
  16. Al-Jawad, J.Y.S.; Tanyimboh, T.T. Reservoir operation using a robust evolutionary optimization algorithm. J. Environ. Manag. 2017, 197, 275–286. [Google Scholar] [CrossRef] [PubMed]
  17. Dai, L.; Zhang, P.; Wang, Y.; Jiang, D.; Dai, H.; Mao, J.; Wang, M. Multi-objective optimization of cascade reservoirs using NSGA-II: A case study of the Three Gorges-Gezhouba cascade reservoirs in the middle Yangtze River, China. Hum. Ecol. Risk Assess. Int. J. 2017, 23, 1–22. [Google Scholar] [CrossRef]
  18. Hadka, D.; Reed, P. Diagnostic Assessment of Search Controls and Failure Modes in Many-Objective Evolutionary Optimization. Evol. Comput. 2012, 20, 423–452. [Google Scholar] [CrossRef] [PubMed]
  19. Li, K.; Deb, K.; Zhang, Q.; Kwong, S. An Evolutionary Many-Objective Optimization Algorithm Based on Dominance and Decomposition. IEEE Trans. Evol. Comput. 2015, 19, 694–716. [Google Scholar] [CrossRef]
  20. Liu, Z.-Z.; Wang, Y.; Huang, P.-Q. AnD: A many-objective evolutionary algorithm with angle-based selection and shift-based density estimation. Inf. Sci. 2018, 1–20. [Google Scholar] [CrossRef]
  21. Ishibuchi, H.; Setoguchi, Y.; Masuda, H.; Nojima, Y. Performance of Decomposition-Based Many-Objective Algorithms Strongly Depends on Pareto Front Shapes. IEEE Trans. Evol. Comput. 2017, 21, 1. [Google Scholar] [CrossRef]
  22. Maier, H.; Kapelan, Z.; Kasprzyk, J.; Kollat, J.; Matott, L.; Cunha, M.; Dandy, G.; Gibbs, M.; Keedwell, E.; Marchi, A.; et al. Evolutionary algorithms and other metaheuristics in water resources: Current status, research challenges and future directions. Environ. Model. Softw. 2014, 62, 271–299. [Google Scholar] [CrossRef]
  23. Goldberg, D.E. Sizing Populations for Serial and Parallel Genetic Algorithms. In Proceedings of the 3rd International Conference on Genetic Algorithms, San Francisco, CA, USA, 4–7 June 1989; pp. 70–79. [Google Scholar]
  24. De Jong, K. Parameter Setting in EAs: A 30 Year Perspective. Informatik im Fokus 2007, 54, 1–18. [Google Scholar]
  25. Karafotias, G.; Hoogendoorn, M.; Eiben, A.E. Parameter Control in Evolutionary Algorithms: Trends and Challenges. IEEE Trans. Evol. Comput. 2015, 19, 167–187. [Google Scholar] [CrossRef]
  26. Eiben, A.; Hinterding, R.; Michalewicz, Z. Parameter control in evolutionary algorithms. IEEE Trans. Evol. Comput. 1999, 3, 124–141. [Google Scholar] [CrossRef]
  27. Deb, K.; Agrawal, R.B. Simulated Binary Crossover for Continuous Search Space. Complex Syst. 1995, 9, 115–148. [Google Scholar]
  28. Reynoso-Meza, G.; Sanchis, J.; Blasco, X.; Martínez, M. An empirical study on parameter selection for multiobjective optimization algorithms using Differential Evolution. In Proceedings of the 2011 IEEE Symposium on Differential Evolution (SDE), Paris, France, 11–15 April 2011; pp. 1–7. [Google Scholar]
  29. Stephens, C.R.; Olmedo, I.G.; Mora-Vargas, J.; Waelbroeck, H. Self-Adaptation in Evolving Systems. Artif. Life 1998, 4, 183–201. [Google Scholar] [CrossRef] [PubMed]
  30. Hadka, D.; Reed, P. Borg: An Auto-Adaptive Many-Objective Evolutionary Computing Framework. Evol. Comput. 2013, 21, 231–259. [Google Scholar] [CrossRef] [PubMed]
  31. Reed, P.; Hadka, D.; Herman, J.; Kasprzyk, J.; Kollat, J.; Reed, P.; Herman, J.; Kasprzyk, J. Evolutionary multiobjective optimization in water resources: The past, present, and future. Adv. Water Resour. 2013, 51, 438–456. [Google Scholar] [CrossRef]
  32. Marchi, A.; Dandy, G.; Wilkins, A.; Rohrlach, H. Methodology for Comparing Evolutionary Algorithms for Optimization of Water Distribution Systems. J. Water Resour. Plan. Manag. 2014, 140, 22–31. [Google Scholar] [CrossRef]
  33. Silver, E.A. An overview of heuristic solution methods. J. Oper. Res. Soc. 2004, 55, 936–956. [Google Scholar] [CrossRef]
  34. Zitzler, E.; Deb, K.; Thiele, L. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evol. Comput. 2000, 8, 173–195. [Google Scholar] [CrossRef] [PubMed]
  35. Deb, K. Multi-Objective Optimization using Evolutionary Algorithms, 1st ed.; John Wiley & Sons: Chichester, UK, 2001. [Google Scholar]
  36. Stadler, W. A survey of multicriteria optimization or the vector maximum problem, part I: 1776–1960. J. Optim. Theory Appl. 1979, 29, 1–52. [Google Scholar] [CrossRef]
  37. Miettinen, K. Nonlinear Multiobjective Optimization; Kluwer Academic Publishers: Boston, MA, USA, 1999. [Google Scholar]
  38. Deb, K.; Joshi, D.; Anand, A. Real-coded evolutionary algorithms with parent-centric recombination. In Proceedings of the 2002 Congress on Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002; Volume 1, pp. 61–66. [Google Scholar]
  39. Kita, H.; Ono, I.; Kobayashi, S. Multi-parental extension of the unimodal normal distribution crossover for real-coded genetic algorithms. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99, Washington, DC, USA, 6–9 July 1999; pp. 1581–1587. [Google Scholar]
  40. Tsutsui, S.; Yamamura, M.; Higuchi, T. Multi-parent recombination with simplex crossover in real coded genetic algorithms. In Proceedings of the 1999 Genetic and Evolutionary Computation Conference, San Francisco, CA, USA, 13–17 July 1999; pp. 657–664. [Google Scholar]
  41. Michalewicz, Z.; Logan, T.; Swaminathan, S. Evolutionary operators for continuous convex parameter spaces. In Proceedings of the 3rd Annual Conference on Evolutionary Programming; 1994; pp. 84–97. [Google Scholar]
  42. Deb, K.; Agrawal, S. A Niched-Penalty Approach for Constraint Handling in Genetic Algorithms. Artif. Neural Nets Genet. Algorithms 1999, 4, 235–243. [Google Scholar]
  43. Geetha, T.; Mahalakshmi, K.; UmmuSalma, I.; Kumaran, K.M. An Observational Analysis of Genetic Operators. Int. J. Comput. Appl. 2013, 63, 24–34. [Google Scholar] [CrossRef]
  44. Laumanns, M.; Thiele, L.; Deb, K.; Zitzler, E. Combining Convergence and Diversity in Evolutionary Multiobjective Optimization. Evol. Comput. 2002, 10, 263–282. [Google Scholar] [CrossRef] [PubMed]
  45. Deb, K.; Manikanth, M.; Shikhar, M. A Fast Multi-Objective Evolutionary Algorithm for Finding Well-Spread Pareto-Optimal Solutions; KanGAL Technical Report No.2003002; IIT: Kanpur, India, 2003. [Google Scholar]
  46. Kollat, J.; Reed, P. A computational scaling analysis of multiobjective evolutionary algorithms in long-term groundwater monitoring applications. Adv. Water Resour. 2007, 30, 408–419. [Google Scholar] [CrossRef]
  47. Zecchin, A.C.; Simpson, A.R.; Maier, H.R.; Marchi, A.; Nixon, J.B.; Maier, H. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem. Water Resour. Res. 2012, 48. [Google Scholar] [CrossRef]
  48. Zheng, F.; Simpson, A.R.; Zecchin, A.C.; Maier, H.R.; Feifei, Z. Comparison of the Searching Behavior of NSGA-II, SAMODE, and Borg MOEAs Applied to Water Distribution System Design Problems. J. Water Resour. Plan. Manag. 2016, 142. [Google Scholar] [CrossRef]
  49. Vrugt, J.A.; Robinson, B.A. Improved evolutionary optimization from genetically adaptive multimethod search. Proc. Natl. Acad. Sci. USA 2007, 104, 708–711. [Google Scholar] [CrossRef]
  50. Hesser, J.; Männer, R. Towards an optimal mutation probability for genetic algorithms. In Parallel Problem Solving from Nature, Proceedings of 1st Workshop PPSN I Dortmund, FRG, 1–3 October 1990; Schwefel, H.-P., Männer, R., Eds.; Springer: Berlin/Heidelberg, Germany, 1991; pp. 23–32. [Google Scholar]
  51. Aleti, A. An Adaptive Approach to Controlling Parameters of Evolutionary Algorithms. PhD. Thesis, Swinburne University of Technology, Melbourne, Australia, 2012. [Google Scholar]
  52. Deb, K.; Beyer, H.-G. Self-Adaptive Genetic Algorithms with Simulated Binary Crossover. Evol. Comput. 2001, 9, 197–221. [Google Scholar] [CrossRef] [PubMed]
  53. Farmani, R.; Wright, J.A. Self-adaptive fitness formulation for constrained optimization. IEEE Trans. Evol. Comput. 2003, 7, 445–455. [Google Scholar] [CrossRef]
  54. Giger, M.; Keller, D.; Ermanni, P. AORCEA—An adaptive operator rate controlled evolutionary algorithm. Comput. Struct. 2007, 85, 1547–1561. [Google Scholar] [CrossRef]
  55. Kaveh, A.; Shahrouzi, M. Dynamic selective pressure using hybrid evolutionary and ant system strategies for structural optimization. Int. J. Numer. Methods Eng. 2008, 73, 544–563. [Google Scholar] [CrossRef]
  56. Vafaee, F.; Nelson, P.C. An explorative and exploitative mutation scheme. In Proceedings of the IEEE Congress on Evolutionary Computation, Barcelona, Spain, 18–23 July 2010; pp. 1–8. [Google Scholar]
  57. Vrugt, J.A.; Robinson, B.A.; Hyman, J.M. Self-adaptive multimethod search for global optimization in real-parameter spaces. IEEE Trans. Evol. Comput. 2009, 13, 243–259. [Google Scholar] [CrossRef]
  58. Lwin, K.; Qu, R.; Kendall, G. A learning-guided multi-objective evolutionary algorithm for constrained portfolio optimization. Appl. Soft Comput. 2014, 24, 757–772. [Google Scholar] [CrossRef]
  59. Bechikh, S.; Datta, R.; Gupta, A. Recent Advances in Evolutionary Multi-objective Optimization, Adaptation, Learning, and Optimization; Springer International Publishing: Cham, Switzerland, 2017; Volume 20. [Google Scholar]
  60. Li, B.; Li, J.; Tang, K.; Yao, X. Many-Objective Evolutionary Algorithms: A Survey. ACM Comput. Surv. 2015, 48, 13:1–13:35. [Google Scholar] [CrossRef]
  61. Mane, S.U.; Rao, M.R.N. Many-Objective Optimization: Problems and Evolutionary Algorithms—A Short Review. Int. J. Appl. Eng. Res. 2017, 12, 973–4562. [Google Scholar]
  62. Bechikh, S.; Elarbi, M.; Said, L.B. Many-objective Optimization Using Evolutionary Algorithms: A Survey. In Recent Advances in Evolutionary Multi-objective Optimization; Bechikh, S., Datta, R., Gupta, A., Eds.; Springer: Cham, Switzerland, 2017; pp. 105–137. [Google Scholar]
  63. Kollat, J.; Reed, P. Comparing state-of-the-art evolutionary multi-objective algorithms for long-term groundwater monitoring design. Adv. Water Resour. 2006, 29, 792–807. [Google Scholar] [CrossRef]
  64. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning, 1st ed.; Addison-Wesley Longman Publishing Co., Inc.: Boston, MA, USA, 1989. [Google Scholar]
  65. Zitzler, E. Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications. Ph.D. Thesis, Swiss Federal Institute of Technology, Zurich, Switzerland, 1999. [Google Scholar]
  66. Van Veldhuizen, D.A.; Lamont, G.B. Evolutionary Computation and Convergence to a Pareto Front. In Proceedings of the Late Breaking Papers at the Genetic Programming 1998 Conference, Madison, WI, USA, 22–25 July 1998; pp. 221–228. [Google Scholar]
  67. Hadka, D.; Reed, P.M.; Simpson, T.W. Diagnostic assessment of the borg MOEA for many-objective product family design problems. In Proceedings of the 2012 IEEE Congress on Evolutionary Computation, Brisbane, Australia, 10–15 June 2012; pp. 1–10. [Google Scholar]
  68. Woodruff, M.J.; Reed, P.M.; Simpson, T.W. Many objective visual analytics: Rethinking the design of complex engineered systems. Struct. Multidiscip. Optim. 2013, 48, 201–219. [Google Scholar] [CrossRef]
  69. Salazar, J.Z.; Reed, P.M.; Herman, J.D.; Giuliani, M.; Castelletti, A. A diagnostic assessment of evolutionary algorithms for multi-objective surface water reservoir control. Adv. Water Resour. 2016, 92, 172–185. [Google Scholar] [CrossRef]
  70. Yan, D.; Ludwig, F.; Huang, H.Q.; Werners, S.E. Many-objective robust decision making for water allocation under climate change. Sci. Total Environ. 2017, 607, 294–303. [Google Scholar] [CrossRef]
  71. GWP (Global Water Partnership). Sharing Knowledge for Equitable, Efficient and Sustainable Water Resources Management; Version 2; Global Water Partnership (GWP): Stockholm, Sweden, 2003. [Google Scholar]
  72. Cardwell, H.E.; Cole, R.A.; Cartwright, L.A.; Martin, L.A. Integrated Water Resources Management: Definitions and Conceptual Musings. J. Contemp. Water Res. Educ. 2006, 135, 8–18. [Google Scholar] [CrossRef]
  73. Biswas, A.K. Integrated Water Resources Management: Is It Working? Int. J. Water Resour. Dev. 2008, 24, 5–22. [Google Scholar] [CrossRef]
  74. Al-Jawad, J.Y.; Alsaffar, H.M.; Bertram, D.; Kalin, R.M. A Comprehensive Optimum Integrated Water Resources Management Approach for Multidisciplinary Water Resources Management Problems. J. Environ. Manag. 2018, in press. [Google Scholar] [CrossRef]
  75. World Bank. Dokan and Derbendikhan Dam Inspections Report; Consultant Services by SMEC International Pty. Ltd.: Melbourne, Australia, 2006. [Google Scholar]
  76. Al-Jawad, J.Y.; Alsaffar, H.M.; Bertram, D.; Kalin, R.M. Optimum socio-environmental flows approach for reservoir operation strategy using many-objectives evolutionary optimization algorithm. Sci. Total Environ. 2018, 651, 1877–1891. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Operator parent selection from the entire population of candidates after the initial random seeding at the beginning of the evaluation process.
Figure 2. The self-adaptive mechanism (a) and exploration extension (b) used by ε-DSEA.
Figure 3. Catchment area of the transboundary Diyala river basin in Iraq.
Figure 4. Pareto-fronts for twenty random runs achieved by both algorithms.
Figure 5. Five objectives’ median optimum solutions of 10 runs of both algorithms.
Figure 6. (a) Parameters self-adaptation of the most effective operators for the best solution achieved, and (b) algorithm convergence to generate dominance solutions during the evaluation process.
Figure 7. Convergence progress during evaluation process of both algorithms for trial no. 4.
Figure 8. Comparison of the releases achieved by both algorithms based on the maximize-hydropower-generation objective. (a) and (b) show detailed results for 10 runs, while (c) and (d) compare historical and model releases.
Figure 9. Insight analysis of the hydropower generation accomplished by both algorithms under the flood risk management scenario (f_summer^D). (a) and (b) show solution repetition density, while (c) and (d) show the head-discharge-hydropower solution space.
Figure 10. Reservoir surface area distribution achieved by both algorithms under the f_summer^D scenario.
Figure 11. Reservoir water-level distribution achieved by both algorithms over 10 runs. (a,b) show the best solution for maximizing hydropower generation; (c,d) the best solution for minimizing summer storage.
Table 1. Parameter control formulae in the epsilon-dominance-driven self-adaptive evolutionary algorithm (ε-DSEA).
| Operator | Parameter | Domain | Adaptation Function | Comments |
| --- | --- | --- | --- | --- |
| SBX ¹ | η | [0, 100] | 𝒫ᵢ^NDS × 100 | Distribution index |
| DE ² | CR | [0.1, 1.0] | max(0.1, 𝒫ᵢ^NDS) | Crossover probability |
| DE ² | F | [0.5, 1.0] | 0.5 + (𝒫ᵢ^NDS/2) | Step size |
| SPX ³ | λ | [2.5, 3.5] | 2.5 + 𝒫ᵢ^NDS | Expansion rate |
| PCX ⁴ | ση | [0.1, 0.3] | 0.1 + (𝒫ᵢ^NDS/5) | These parameters (standard deviations) control the spatial distribution of the offspring for PCX and UNDX |
| PCX ⁴ | σζ | [0.1, 0.3] | 0.1 + (𝒫ᵢ^NDS/5) | |
| UNDX ⁵ | σζ | [0.4, 0.6] | 0.4 + (𝒫ᵢ^NDS/5) | |
| UNDX ⁵ | ση | [0.1, 0.35]/√L | [0.1 + (𝒫ᵢ^NDS/3)]/√L | |

¹ Simulated Binary Crossover; ² Differential Evolution; ³ Simplex Crossover; ⁴ Parent-Centric Crossover; ⁵ Unimodal Normal Distribution Crossover.
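The adaptation functions in Table 1 can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation: it assumes 𝒫ᵢ^NDS is the proportion of non-dominated solutions in the current population (a value in [0, 1]), that the √L divisor for UNDX ση follows the convention in Table 2, and the function and key names are ours.

```python
import math

def adapt_parameters(p_nds: float, n_vars: int) -> dict:
    """Recompute the dynamic operator parameters of Table 1.

    p_nds  : assumed proportion of non-dominated solutions, in [0, 1]
    n_vars : L, the number of decision variables
    """
    return {
        "sbx_eta":         p_nds * 100,          # SBX distribution index, domain [0, 100]
        "de_cr":           max(0.1, p_nds),      # DE crossover probability, [0.1, 1.0]
        "de_f":            0.5 + p_nds / 2,      # DE step size, [0.5, 1.0]
        "spx_lambda":      2.5 + p_nds,          # SPX expansion rate, [2.5, 3.5]
        "pcx_sigma_eta":   0.1 + p_nds / 5,      # PCX std devs, [0.1, 0.3]
        "pcx_sigma_zeta":  0.1 + p_nds / 5,
        "undx_sigma_zeta": 0.4 + p_nds / 5,      # UNDX, [0.4, 0.6]
        # √L divisor assumed from the UNDX entries in Table 2
        "undx_sigma_eta":  (0.1 + p_nds / 3) / math.sqrt(n_vars),
    }

# Example: 30% of the population is non-dominated, 360 decision variables
params = adapt_parameters(0.3, 360)
```

As p_nds grows, every operator widens its search (larger DE step size, SPX expansion, and PCX/UNDX spreads), which matches the table's intent of exploring more aggressively when many solutions are mutually non-dominated.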
Table 2. Parameter values used in the optimization algorithms.
| Parameter | Borg | ε-DSEA ᵃ | Parameter | Borg | ε-DSEA |
| --- | --- | --- | --- | --- | --- |
| Initial population size | 100 | 100 | SPX parents | 10 | 3 |
| Tournament selection size | 2 | 2 | SPX offspring | 2 | 2 |
| SBX crossover rate | 1.0 | 1.0 | SPX expansion rate λ | 3 | [2.5, 3.5] |
| SBX distribution index η | 15.0 | [0, 100] | UNDX parents | 10 | 10 |
| DE crossover rate CR | 0.1 | [0.1, 1.0] | UNDX offspring | 2 | 2 |
| DE step size F | 0.5 | [0.5, 1.0] | UNDX σζ | 0.5 | [0.4, 0.6] |
| PCX parents | 10 | 10 | UNDX ση | 0.35/√L | [0.1, 0.35]/√L |
| PCX offspring | 2 | 2 | UM mutation rate | 1/L | 1/L |
| PCX ση | 0.1 | [0.1, 0.3] | PM mutation rate | 1/L | 1/L |
| PCX σζ | 0.1 | [0.1, 0.3] | PM distribution index ηm | 20 | 20 |

L is the number of decision variables. The permissible range of each dynamic parameter is shown in brackets. The parameters ση and σζ are defined in Table 1. ᵃ The initial values of the dynamic parameters in ε-DSEA are those shown for the Borg MOEA.
Table 3. Analysis of the water resources management strategy of the Derbendikhan dam achieved by both algorithms based on maximising hydropower generation.
| | Borg: Area ¹ | Borg: Head ² | Borg: Power ³ | Borg: Storage ⁴ | Borg: Releases ⁵ | ε-DSEA: Area | ε-DSEA: Head | ε-DSEA: Power | ε-DSEA: Storage | ε-DSEA: Releases |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3-objective problem | | | | | | | | | | |
| Min. | 19.18 | 437.79 | 24.50 | 433.55 | 129.94 | 17.32 | 434.73 | 24.83 | 373.82 | 129.75 |
| Max. | 122.79 | 485.97 | 249.00 | 2565.84 | 866.25 | 121.80 | 485.86 | 246.37 | 2551.05 | 877.20 |
| Mean | 74.37 | 474.71 | 94.76 | 1732.33 | 336.06 | 76.39 | 475.53 | 94.77 | 1775.22 | 336.17 |
| Median | 72.03 | 477.12 | 83.00 | 1743.17 | 297.19 | 78.92 | 478.95 | 84.37 | 1867.41 | 316.10 |
| St. ⁷ | 27.19 | 10.11 | 50.88 | 523.86 | 174.84 | 24.99 | 10.42 | 51.88 | 496.30 | 183.22 |
| Gross ⁶ | | | 37.52 | 686.00 | 133.08 | | | 37.53 | 702.99 | 133.12 |
| 5-objective problem | | | | | | | | | | |
| Min. | 16.94 | 434.09 | 23.44 | 361.60 | 130.72 | 19.10 | 437.67 | 24.09 | 431.16 | 130.45 |
| Max. | 122.97 | 485.98 | 249.00 | 2568.50 | 866.08 | 123.14 | 486.00 | 249.00 | 2570.98 | 797.97 |
| Mean | 66.09 | 470.67 | 90.46 | 1555.39 | 337.72 | 71.90 | 472.95 | 91.77 | 1672.36 | 334.80 |
| Median | 61.55 | 473.63 | 82.14 | 1540.19 | 316.81 | 71.78 | 477.05 | 82.00 | 1738.53 | 298.79 |
| St. | 29.89 | 12.83 | 45.64 | 597.36 | 162.74 | 28.94 | 12.58 | 47.18 | 583.53 | 169.11 |
| Gross | | | 35.82 | 615.94 | 133.74 | | | 36.34 | 662.25 | 132.58 |

¹ Surface area of reservoir in km²; ² head of water in m.a.s.l.; ³ hydropower generation in MW; ⁴ reservoir storage in m³ × 10⁶; ⁵ reservoir releases in m³/month × 10⁶; ⁶ gross sum units: power in GW, storage in m³ × 10⁹, releases in m³/month × 10⁹; ⁷ St. = standard deviation.