Article

Self-Adaptive Models for Water Distribution System Design Using Single-/Multi-Objective Optimization Approaches

1 Research Institute for Mega Construction, Korea University, Anamdong, Seongbukgu, Seoul 136-713, Korea
2 School of Civil, Environmental and Architectural Engineering, Korea University, Anamdong, Seongbukgu, Seoul 136-713, Korea
* Author to whom correspondence should be addressed.
Water 2019, 11(6), 1293; https://doi.org/10.3390/w11061293
Received: 22 April 2019 / Revised: 7 June 2019 / Accepted: 16 June 2019 / Published: 20 June 2019

Abstract

This study compares the performance of self-adaptive optimization approaches for efficient water distribution system (WDS) design and presents a guide for selecting an appropriate method based on the characteristics of each technique's formulation. To this end, three types of analyses are performed. First, a sensitivity analysis of each self-adaptive approach is conducted on single- and multi-objective mathematical benchmark problems of various types (e.g., differing solution-space shapes or many local optima). Second, based on the results for the mathematical problems, the performance of each algorithm is verified on WDS design problems that consider minimum cost and maximum system resilience under single- and multi-objective optimization frameworks. Third, the characteristics of the search operators in each self-adaptive approach are compared according to the presence or absence of additional parameters and operators. Moreover, various performance indices are employed for a quantitative comparison of the algorithms. Each algorithm is found to exhibit different characteristics depending on the problem scale and solution type. These results are expected to benefit future research in the formulation of new approaches, since the performance of newly proposed algorithms can be rigorously tested in a highly simplified manner.

1. Introduction

A water distribution system (WDS) is a system of engineered hydrologic and hydraulic components that enables the collection, transmission, treatment, storage, and distribution of water from the source to demand locations while satisfying nodal pressure and pipe velocity requirements. WDS design determines the size and capacity of the system components. In practice, achieving an optimal design is very difficult given the highly complex hydraulic conditions. Therefore, various design techniques have been developed and applied for effective WDS design.
Traditional WDS design relies on the engineer's experience-based judgment. Consequently, traditional approaches varied in solution quality: they did not provide stable designs and could not guarantee optimality. To overcome these drawbacks, calculation-based approaches (i.e., linear programming, non-linear programming, and dynamic programming) were applied to the optimal design of WDSs [1,2,3,4,5,6,7,8,9,10]. However, in real-world applications the resulting WDS models become too complex to be handled by these conventional methods. Therefore, more efficient techniques for this kind of design problem must be investigated.
In recent years, simulation-based meta-heuristic algorithms have been proposed and applied to WDS design [11,12,13,14,15,16]. Meta-heuristic optimization algorithms are inspired by natural events (e.g., the genetic algorithm [17], particle swarm optimization [18], and ant colony optimization [19]) and artificial phenomena (e.g., harmony search [14] and simulated annealing [20]). Meta-heuristic approaches have proven useful for optimizing WDSs with respect to system construction cost [11,21]. However, a WDS design that considers only construction cost is very sensitive to uncertain future conditions. Therefore, recent studies have emphasized the additional design factors needed for realistic WDS design and applied multi-objective optimization, typically treating the reliability and resilience of the WDS as design factors. Even with multi-objective techniques, however, design performance depends greatly on the optimization algorithm's parameters, whose values must be set appropriately. To address this drawback, parameter control techniques such as the self-adaptive approach have been proposed and applied in various fields, including WDS design.
Among the various optimization algorithms, harmony search (HS) algorithms have been repeatedly improved to increase solution quality. The HS algorithm, proposed by Geem et al. [14] and Kim et al. [22], finds solutions by mimicking musical improvisation. Compared with other optimization algorithms in previous studies, HS involves fewer mathematical operations and can be easily adapted to various kinds of optimization problems [23,24]. However, its main drawback is that its three parameters assume constant values, and appropriate values are difficult to determine because they depend on the problem. In efforts to improve solution performance, a variety of parameter-setting-free (PSF) and self-adaptive HS methods have been proposed. These methods mainly improve the search operation and use three dynamic parameters (i.e., the harmony memory consideration rate (HMCR), pitch adjustment rate (PAR), and bandwidth (Bw)) in various fields (e.g., mathematics and multidisciplinary engineering problems).
Setting optimal parameters greatly affects the solution performance of HS; however, manually setting the parameters is quite time consuming, and it is difficult to find suitable parameters for every application. Therefore, some studies use a dynamic approach to parameter control. These approaches can be sorted into three categories:
  • Addition of new operators to generate a new solution;
  • Application of dynamic parameters (i.e., HMCR, PAR, and Bw) depending on number of iterations, harmony memory size, and decision variable (DV) size; and
  • Consideration of the operation type in the optimization process.
Pan et al. [25] developed a self-adaptive global-best HS (SGHS) algorithm inspired by the global HS (GHS) algorithm [24]. The SGHS employs a new improvisation scheme with an adaptive bandwidth that depends on the ratio of the current generation to the total number of iterations: early in the run, the bandwidth decreases dynamically from large to small, and after more than half of the iterations, the minimum value is applied. Wang and Huang [26] and Degertekin [27] replaced the PAR process and removed the bandwidth during the generation of a new solution. Their method, called the self-adaptive HS (SAHS), generates the new solution according to the minimum and maximum values in the harmony memory (HM). Luo [28] developed the novel self-adaptive HS (NSHS), which adapts HMCR, PAR, and Bw for suitable parameter settings. HMCR is set to a constant value according to the number of decision variables, the PAR procedure is replaced by one that considers the variance of the fitness, and new solutions are generated within the boundary conditions of the decision variables. Furthermore, NSHS applies a dynamic bandwidth whose value gradually decreases as the iteration number increases and increases as the range of the boundary conditions widens.
The parameter-setting-free approaches [29,30,31,32] automatically update the HS parameters (i.e., HMCR and PAR) at every iteration. These studies introduced a new operating-type memory (OTM), with which HMCR and PAR are updated according to the operation type, such as harmony memory consideration or pitch adjustment. However, Geem [29] and Geem and Sim [31] use slightly different methods to calculate HMCR and PAR; the detailed differences are described in Section 3.2. The earlier PSF approaches considered only HMCR and PAR, whereas the almost-parameter-setting-free (APSF) method proposed by Jiang et al. [33] also includes Bw, which is controlled by the minimum/maximum decision variable values.
Recent studies have also worked within the multi-objective framework. Their authors considered all three parameters (i.e., HMCR, PAR, and Bw) and calculated them using mathematical operations. These approaches use dynamic parameter adjustment, but they cannot eliminate the hassle of parameter setting, because the boundary conditions of each parameter depend on the problem size and formulation [34,35].
According to the literature survey, approximately 20 studies related to PSF or self-adaptive HS (Table 1) have been published; although similar techniques are often reused, they can be summarized into seven kinds of techniques. However, previous studies of self-adaptive approaches did not reflect system characteristics, and their performance differs depending on the type of WDS (i.e., looped or branched network) and network size.
Therefore, this study compares the performance of self-adaptive optimization approaches for efficient WDS design and presents a guide for selecting the appropriate approach based on the characteristics of each technique's formulation. To find the most appropriate self-adaptive algorithm for various types of WDS design, the basic performance of each approach is compared on single- and multi-objective optimization problems. First, a sensitivity analysis of each self-adaptive approach is performed on single- and multi-objective mathematical benchmark problems of various types (e.g., differing solution-space shapes, many local optima). Second, based on these results, the performance of each algorithm is verified on WDS design problems considering minimum cost and maximum system resilience under single- and multi-objective optimization frameworks. Third, the characteristics of the search operators of each self-adaptive approach are compared according to the presence or absence of additional parameters and operators. Moreover, various performance indices are used for a quantitative comparison of the algorithms. This study can thus serve as a guide for finding the appropriate self-adaptive approach in WDS design, for the benefit of future research, since the performance of newly proposed algorithms can be rigorously tested in a highly simplified way.

2. Optimal Design of Water Distribution Systems

Early WDS designs using optimization approaches considered only the network construction cost, varied by changing pipe diameters. However, designs that considered only minimum cost were vulnerable to uncertain future conditions. Moreover, since current water users desire a stable and reliable water supply, multi-objective optimization approaches that consider design factors beyond minimum cost, such as system performance indicators (e.g., reliability, resilience, robustness, and redundancy) [48,49,50,51], were proposed in the WDS design field. Therefore, this study applies multi-objective optimization to WDS design, and the following subsections present the formulation of the objective functions and the hydraulic constraints.

2.1. Minimizing Construction Cost

The first objective function in WDS design is to minimize the construction cost (CC). The cost-estimation equation for network design proposed by Shamir and Howard [52] is applied: the cost is estimated by multiplying the unit cost of each commercial pipe diameter by the length of each pipe. The sum of the costs of all pipes in the network is given by Equation (1).
$$CC = \sum_{i=1}^{N} C(D_i)\, L_i$$
where C(Di) is the cost per unit length (m) for pipe diameter Di, Li is the length (m) of the ith pipe, Di is the diameter (mm) of the ith pipe, and N is the total number of pipes.
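As a concrete illustration, Equation (1) can be computed directly once a unit-cost table is available; the cost values below are hypothetical, not taken from the paper.

```python
# Illustrative unit costs ($/m) per commercial diameter (mm); not from the paper.
UNIT_COST = {100: 16.0, 150: 23.0, 200: 32.0, 250: 50.0}

def construction_cost(diameters_mm, lengths_m):
    """Equation (1): CC = sum_i C(D_i) * L_i over all pipes."""
    return sum(UNIT_COST[d] * l for d, l in zip(diameters_mm, lengths_m))
```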

2.2. Maximizing System Resilience

System resilience (SR) can be defined as the system's capability to create foresight, to recognize, anticipate, and defend against changing risks before adverse consequences occur, and to degrade gracefully and recover promptly from failure [53]. Various resilience indicators have been proposed since the first was developed by Todini [54] as a surrogate measure for hydraulic benefit. The index is based on the concept that the total input power into a network consists of the power dissipated in the network and the power delivered at demand nodes: the less power consumed internally to overcome friction, the more surplus power is available at demand nodes, and hence the greater the ability to withstand failure conditions. The system resilience index, calculated by Equation (2), ranges from 0 to 1, with higher values indicating better resilience.
$$\mathrm{SR} = \frac{\sum_{j=1}^{N} q_j \left( h_j - h_j^{*} \right)}{\sum_{K=1}^{N_R} Q_K H_K + \sum_{i=1}^{N_P} P_i - \sum_{j=1}^{N} q_j h_j^{*}}$$
where N, NR, and NP are the numbers of nodes, reservoirs, and pumps, respectively; qj is the demand at node j; hj is the head at node j; hj* is the minimum required head at node j; HK is the water level of reservoir K; QK is the water flow of reservoir K; and Pi is the power of pump i.
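A minimal sketch of Equation (2), assuming consistent units so that each product of flow and head is a hydraulic power term:

```python
def system_resilience(q, h, h_req, Q_res, H_res, P_pump):
    """Todini-style resilience index of Equation (2).

    q, h, h_req : per-node demand, actual head, and minimum required head
    Q_res, H_res: per-reservoir inflow and water level
    P_pump      : per-pump power (same units as the other power terms)
    """
    surplus = sum(qj * (hj - hr) for qj, hj, hr in zip(q, h, h_req))
    total_in = sum(Qk * Hk for Qk, Hk in zip(Q_res, H_res)) + sum(P_pump)
    required = sum(qj * hr for qj, hr in zip(q, h_req))
    return surplus / (total_in - required)
```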

2.3. Hydraulic Constraints and the Penalty Function

In this study, EPANET 2.0 [55] is applied as the hydraulic solver to verify the hydraulic constraints [56] when designing WDSs with self-adaptive optimization. EPANET 2.0 performs hydraulic analysis of WDSs using a flow-conservation method with various head loss formulations (i.e., the Hazen–Williams, Darcy–Weisbach, and Chezy–Manning equations) [57]. By combining this program with the proposed optimization model, the hydraulic analysis results are checked. The model considers nodal pressure and pipe velocity as design constraints. If the pressure at a node does not meet the pressure constraint, or if the water velocity in a pipe does not meet the velocity constraint, a large penalty value, as in Equation (3), is added to the objective function values. The role of α is to distinguish feasible from infeasible solutions by adding a large penalty cost. β matters when the nodal pressure only slightly violates the hydraulic constraints, in which case the α term alone would be too small to affect the selection of the optimal solution; to prevent this, a large constant β is added.
$$P = \begin{cases} \alpha \left( \left| h_i - h_{\min} \right| \ \text{or} \ \left| h_{\max} - h_i \right| \right) + \beta, & \text{if the pressure constraint is violated} \\ \alpha \left( \left| v_j - v_{\min} \right| \ \text{or} \ \left| v_{\max} - v_j \right| \right) + \beta, & \text{if the velocity constraint is violated} \end{cases}$$
where hi is the pressure head at node i (m); vj is the water velocity at pipe j (m/s); hmin and hmax are the minimum and maximum pressure heads (m), respectively; vmin and vmax are the minimum and maximum water velocity (m/s), respectively, and α and β are the penalty constants.
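The penalty of Equation (3) can be sketched as follows; the magnitudes of α and β are illustrative, since the paper only requires them to be large:

```python
ALPHA, BETA = 1e6, 1e3  # penalty constants; magnitudes are illustrative

def penalty(h, v, h_min, h_max, v_min, v_max):
    """Equation (3): add a large cost per violated pressure/velocity bound."""
    p = 0.0
    for hi in h:  # nodal pressure heads (m)
        if hi < h_min:
            p += ALPHA * abs(hi - h_min) + BETA
        elif hi > h_max:
            p += ALPHA * abs(h_max - hi) + BETA
    for vj in v:  # pipe velocities (m/s)
        if vj < v_min:
            p += ALPHA * abs(vj - v_min) + BETA
        elif vj > v_max:
            p += ALPHA * abs(v_max - vj) + BETA
    return p
```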

3. Optimization Algorithms

To achieve effective WDS design through appropriate guidance on search operators, this study applies HS as the optimization approach. HS has several characteristics suitable for engineering applications, including WDSs. In the past decade, several studies have applied HS to WDS design, and the results were significantly better than those of other optimization techniques [58,59,60]. Moreover, HS offers a good balance between exploration and exploitation, constraint-handling for addressing constrained problems efficiently, and support for real-coded representations. For these reasons, various improved and self-adaptive versions of HS have been developed for engineering optimization problems. The subsections below describe the self-adaptive HS variants considered here; they represent the seven kinds of techniques summarized from the 23 studies on self-adaptive HS in Table 1.

3.1. Harmony Search

HS can be explained in terms of a musician's improvisation process, in which the search for the optimum harmony in music corresponds to the search for the optimum solution. When many different musicians play their instruments, the various sounds form a single harmony; each musician may gradually change to another suitable pitch until an aesthetically pleasing harmony is found. In HS, four parameters are used to search for the optimum solution (i.e., HM, HMCR, PAR, and Bw), and these parameters are assigned constant values. The search space is stored in a memory space called the harmony memory (HM), where the HM size (HMS) is the maximum number of harmonies that can be saved. The main operators of HS are random selection (RS), memory consideration (MC), and pitch adjustment (PA), which serve to extract better solutions from the harmony memory. The main operators are described by Equations (4) and (5).
$$x_i^{New} = \begin{cases} x_i \in \left[ x_i^{Lower},\, x_i^{Upper} \right], & \text{if } Rnd > HMCR \ (\text{random search}) \\ x_i \in \mathrm{HM} = \left[ x_i^{1}, x_i^{2}, \ldots, x_i^{HMS} \right], & \text{if } Rnd \le HMCR \ (\text{memory consideration}) \end{cases}$$

$$\text{After memory consideration,} \quad x_i^{New} = \begin{cases} x_i^{New}, & \text{if } Rnd > PAR \\ x_i^{New} \pm Bw, & \text{if } Rnd \le PAR \ (\text{pitch adjustment}) \end{cases}$$
where xiNew denotes a new decision variable, and xiLower, xiUpper are the boundary conditions of the decision variables. Rnd is the uniform random value, and Bw is the bandwidth.
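For continuous decision variables, one improvisation step of Equations (4) and (5) can be sketched as follows; the parameter defaults are typical HS values, not prescribed by this paper.

```python
import random

def improvise(hm, lower, upper, hmcr=0.9, par=0.3, bw=0.05):
    """One HS improvisation per Equations (4)-(5).

    hm: harmony memory, a list of solution vectors of equal length.
    lower, upper: per-variable boundary conditions.
    """
    n = len(hm[0])
    new = []
    for i in range(n):
        if random.random() <= hmcr:                     # memory consideration
            xi = hm[random.randrange(len(hm))][i]
            if random.random() <= par:                  # pitch adjustment
                xi += random.uniform(-1.0, 1.0) * bw
                xi = min(max(xi, lower[i]), upper[i])   # keep within bounds
        else:                                           # random search
            xi = random.uniform(lower[i], upper[i])
        new.append(xi)
    return new
```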

3.2. Parameter-Setting-Free Harmony Search

The first parameter-setting-free HS (PSF-HS) was proposed by Geem [29] to reduce the burden of suitable parameter setting. PSF-HS modifies the improvisation step of HS by updating HMCR and PAR at every iteration for each decision variable. To update the parameters, the study introduced the operation-type memory (OTM) in Equation (6), which records which HS operator (i.e., RS, MC, or PA) generated each value; the individual HMCR and PAR are then calculated by Equations (7) and (8). The first PSF-HS, proposed by Geem [29], calculated these parameters according to Equation (7); Geem and Sim [31] later modified this to Equation (8). This study therefore considers both calculation methods.
$$\mathrm{OTM} = \begin{bmatrix} y_1^{1} = RS & y_2^{1} = PA & \cdots & y_n^{1} = MC \\ y_1^{2} = RS & y_2^{2} = MC & \cdots & y_n^{2} = PA \\ \vdots & \vdots & \ddots & \vdots \\ y_1^{HMS} = RS & y_2^{HMS} = RS & \cdots & y_n^{HMS} = MC \end{bmatrix}$$
$$HMCR_i = \frac{n\left( y_i^{j} = MC \ \text{or} \ PA \right)}{HMS}, \quad PAR_i = \frac{n\left( y_i^{j} = PA \right)}{n\left( y_i^{j} = MC \ \text{or} \ PA \right)} \quad (\text{by Geem, 2010})$$

$$HMCR_i = \frac{n\left( y_i^{j} = MC \right)}{HMS}, \quad PAR_i = \frac{n\left( y_i^{j} = PA \right)}{HMS} \quad (\text{by Geem and Sim, 2010})$$
where yn denotes an operation type for the decision variable. Each yn represents one of three cases: (i) RS, (ii) MC, and (iii) PA. n is the count function of the number of operations (e.g., MC or PA) in HM. HMCRi and PARi represent the HMCR and PAR of the ith decision variable, respectively.
As the number of iterations increases, HMCR generally increases, whereas PAR decreases. This trend can cause HMCR to exceed 1 and PAR to fall below 0. To prevent this, a noise value Φ is used to keep HMCRi and PARi between 0 and 1, as indicated by Equation (9).
$$HMCR_i = \begin{cases} HMCR_i + \Phi \times U(-1,1), & \text{if } HMCR_i \notin [0,1] \\ HMCR_i, & \text{if } HMCR_i \in [0,1] \end{cases}, \quad PAR_i = \begin{cases} PAR_i + \Phi \times U(-1,1), & \text{if } PAR_i \notin [0,1] \\ PAR_i, & \text{if } PAR_i \in [0,1] \end{cases}$$
where U(−1, 1) is a random value within [−1, 1].
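Both counting rules, Equations (7) and (8), can be sketched from a single OTM column; the string tags are illustrative.

```python
def psf_parameters(otm_column, hms, rule="geem_sim"):
    """Per-variable HMCR/PAR from one OTM column (Equations (7)-(8)).

    otm_column: list of operation tags 'RS', 'MC', or 'PA', one per harmony.
    """
    n_mc = otm_column.count("MC")
    n_pa = otm_column.count("PA")
    if rule == "geem":           # Equation (7), Geem (2010)
        hmcr = (n_mc + n_pa) / hms
        par = n_pa / (n_mc + n_pa) if (n_mc + n_pa) else 0.0
    else:                        # Equation (8), Geem and Sim (2010)
        hmcr = n_mc / hms
        par = n_pa / hms
    return hmcr, par
```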

3.3. Almost-Parameter-Free Harmony Search

The almost-parameter-free HS (APF-HS) was first proposed by Jiang et al. [33]. This approach is a modified version of the original PSF-HS [31] that additionally considers a dynamic Bw alongside the automatic HMCR and PAR settings. The OTM is applied to calculate HMCR and PAR using the same formulations as in Equations (6), (8), and (9). In APF-HS, Bw is dynamically updated according to the maximum and minimum values in the HM, as in Equation (10).
$$x_i^{j+1} = \begin{cases} x_i^{j} - \left[ x_i^{j} - \min\left( x_i^{j} \right) \right] \times U(0,1), & \text{if } Rnd > 0.5 \\ x_i^{j} + \left[ \max\left( x_i^{j} \right) - x_i^{j} \right] \times U(0,1), & \text{otherwise} \end{cases}$$
where xij is the ith decision variable in the jth iteration, and min(xij) and max(xij) are its minimum and maximum values in the HM.
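A sketch of the adjustment in Equation (10), where the min and max are taken over the variable's values currently stored in the HM:

```python
import random

def apf_pitch_adjust(xi, hm_values):
    """Equation (10): dynamic-bandwidth step toward the current min or max
    of this variable's values in harmony memory."""
    if random.random() > 0.5:
        return xi - (xi - min(hm_values)) * random.random()
    return xi + (max(hm_values) - xi) * random.random()
```

By construction the adjusted value stays within the range currently spanned by the harmony memory, so no extra bound check is needed.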

3.4. Novel Self-Adaptive Harmony Search

The novel self-adaptive HS (NSHS) was developed by Luo [28]; it modifies the processes for determining HMCR, PAR, and Bw so that they are no longer constants. HMCR is set according to the dimension of the problem: a more complex (higher-dimensional) problem receives a larger HMCR, as shown by Equation (11).
In the original HS, Bw is important for tuning the solution; therefore, NSHS uses a dynamic Bw for fine-tuning, such that the tuning range is wider at the beginning and narrower at the end of the simulation, as shown in Equation (12). The PAR procedure is replaced by one that considers the variance of the fitness, and new solutions are generated within the boundary conditions of the decision variables per Equation (13).
$$HMCR = 1 - \frac{1}{n(DV) + 1}$$

$$Bw_i^{j} = \begin{cases} \dfrac{Bw_i^{Upper} - Bw_i^{Lower}}{100} \times \left( 1 - \dfrac{j}{NI} \right), & \text{if } f_{std} \ge 0.0001 \\ 0.0001, & \text{otherwise} \end{cases}$$

$$x_i^{New} = \begin{cases} x_i + U(-1,1) \times Bw_i^{j}, & \text{if } U(0,1) \le HMCR \\ x_i^{Lower} + U(0,1) \times \left( x_i^{Upper} - x_i^{Lower} \right) + U(-1,1) \times Bw_i^{j}, & \text{if } U(0,1) > HMCR \ \text{and} \ f_{std} > 0.0001 \\ \min(x_i) + U(0,1) \times \left( \max(x_i) - \min(x_i) \right) + U(-1,1) \times Bw_i^{j}, & \text{if } U(0,1) > HMCR \ \text{and} \ f_{std} \le 0.0001 \end{cases}$$
where n(DV) is the number of decision variables; fstd is the standard deviation of the fitness values, fstd = std(f(x1), f(x2), …, f(xHMS)); BwiLower and BwiUpper are the bandwidth bounds for the ith decision variable in the jth iteration; and NI is the total number of iterations.
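Equations (11) and (12) can be sketched as follows; the 0.0001 threshold is the one stated in the text, and the branch ordering follows the description above.

```python
def nshs_hmcr(n_dv):
    """Equation (11): HMCR grows with the number of decision variables."""
    return 1.0 - 1.0 / (n_dv + 1)

def nshs_bandwidth(bw_lower, bw_upper, j, ni, f_std):
    """Equation (12): bandwidth is wide early and narrow late, collapsing
    to 0.0001 once the fitness spread f_std becomes tiny."""
    if f_std >= 1e-4:
        return (bw_upper - bw_lower) / 100.0 * (1.0 - j / ni)
    return 1e-4
```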

3.5. Self-Adaptive Global-Based Harmony Search Algorithm

Shivaie et al. [43] developed the self-adaptive global-based HS (SGHSA) in 2015 to find better solutions and provide more effective parameter tuning. In the new improvisation method of SGHSA, an altered pitch adjustment rule is used to avoid falling into a local optimum, as in Equation (14). The value of Bw is dynamically reduced over successive generations according to Equation (15).
$$\text{After memory consideration,} \quad x_i^{New} = \begin{cases} x_i^{r} - Bw_i^{j} \times U(0,1), & \text{if } Rnd > 0.5 \\ x_i^{r} + Bw_i^{j} \times U(0,1), & \text{otherwise} \end{cases} \quad (r = 1, 2, \ldots, HMS)$$

$$Bw_i^{j} = \begin{cases} Bw_i^{Upper} - \dfrac{Bw_i^{Upper} - Bw_i^{Lower}}{NI} \times 2 \times j, & \text{if } j < \dfrac{NI}{2} \\ Bw_i^{Lower}, & \text{otherwise} \end{cases}$$
where j is the present iteration number (j = 1, 2, …, NI).
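The bandwidth schedule of Equation (15) can be sketched as:

```python
def sghsa_bandwidth(bw_lower, bw_upper, j, ni):
    """Equation (15): Bw decreases linearly over the first half of the run,
    then stays at its lower bound."""
    if j < ni / 2:
        return bw_upper - (bw_upper - bw_lower) / ni * 2 * j
    return bw_lower
```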

3.6. Parameter Adaptive Harmony Search

Kumar et al. [47] proposed dynamically changing the values of HMCR and PAR, producing a modified version of HS referred to as the parameter-adaptive HS (PAHS). PAHS varies its parameters linearly or nonlinearly so that the algorithm explores the solution space. The value of HMCR increases linearly as the optimization proceeds. In contrast, PAR starts high in the early generations and decreases nonlinearly. This parameter control improves search efficiency, as the balance between global and local search depends on the generation. The best solutions obtained are stored in the HM as the algorithm proceeds through the generations. The forms of HMCR, PAR, and Bw are shown in Equations (16)–(18).
$$HMCR^{j} = HMCR_{min} + \left( \frac{HMCR_{max} - HMCR_{min}}{NI} \right) \times j$$

$$PAR^{j} = PAR_{max} \times \exp\left( \frac{\ln\left( PAR_{min} / PAR_{max} \right)}{NI} \times j \right)$$

$$Bw^{j} = Bw_{max} \times \exp\left( \frac{\ln\left( Bw_{min} / Bw_{max} \right)}{NI} \times j \right)$$
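The three schedules of Equations (16)–(18) can be sketched together; the (min, max) bounds used as defaults here are illustrative, not values from the paper.

```python
import math

def pahs_parameters(j, ni, hmcr=(0.7, 0.99), par=(0.05, 0.5), bw=(0.001, 1.0)):
    """Equations (16)-(18): HMCR rises linearly with iteration j, while PAR
    and Bw decay exponentially from their maxima toward their minima."""
    hmcr_j = hmcr[0] + (hmcr[1] - hmcr[0]) / ni * j
    par_j = par[1] * math.exp(math.log(par[0] / par[1]) / ni * j)
    bw_j = bw[1] * math.exp(math.log(bw[0] / bw[1]) / ni * j)
    return hmcr_j, par_j, bw_j
```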

4. Multi-Objective Optimization Formulation

In the present study, a multi-objective HS (MOHS) [59,60] is used to consider multiple objective functions. The MOHS combines the three approaches (i.e., HS, the non-dominated sorting method, and the crowding distance approach) to improve the Pareto optimal solution diversity and convergence, as shown in Figure 1.
The search operators of MOHS are the same as those of HS [14], namely random search, memory consideration, and pitch adjustment. However, because MOHS considers multiple objective functions, it uses non-dominated ranks [61] to evaluate solution performance. An individual solution's rank is determined by counting the solutions that dominate it; for example, if no other solution is better in both objective functions, the considered solution is non-dominated and is ranked first. Solutions with better ranks are preserved in the next-generation harmony memory. After non-dominated sorting, the crowding distance approach [62] is used to widen the Pareto optimal set and improve diversity within the same rank. This approach calculates the sum of the Euclidean distances between the two neighboring solutions in the Pareto solution space. In Figure 1, the Euclidean distance is calculated from the extreme fitness values of each objective function (i.e., f1max,min and f2max,min) and the distances to the two neighboring Pareto optimal solutions (i.e., d1 and d2).
The crowding distance of two extreme solutions among the Pareto optimal solutions is infinite; therefore, these two solutions are always preserved in the harmony memory. In this study, the concepts of multi-objective optimization (i.e., the non-dominated sorting and the crowding distance approach) are combined with respective self-adaptive algorithms to perform multi-objective optimization.
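The two multi-objective ingredients described above, non-dominated ranking and crowding distance, can be sketched for a minimization problem as follows:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization assumed)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_rank(points):
    """Rank 1 = non-dominated; higher ranks are found by peeling fronts."""
    remaining = set(range(len(points)))
    ranks, rank = {}, 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return [ranks[i] for i in range(len(points))]

def crowding_distance(front):
    """Sum of normalized neighbor gaps per objective; extreme solutions
    receive infinite distance and are therefore always preserved."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        fmin, fmax = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if fmax > fmin:
            for a in range(1, n - 1):
                dist[order[a]] += (front[order[a + 1]][k]
                                   - front[order[a - 1]][k]) / (fmax - fmin)
    return dist
```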

5. Application and Results

This section presents comparisons of the performance of the self-adaptive optimization approaches for efficient WDS design. To compare the quantitative performance of each algorithm, various performance indexes for single/multi-objective optimization are used. Figure 2 shows the model formulation of this study. First, to compare the performance of each self-adaptive approach, the various types of mathematical benchmark problems are applied in the single/multi-objective optimization framework.
Second, according to the basic performance of each self-adaptive algorithm, the optimal design of WDSs is performed from the single- and multi-objective perspectives. The applied design factors are minimum cost and maximum system resilience, and three different scales of WDSs are used for verification. In the third analysis, the search operators of the self-adaptive approaches are categorized, and the types of automatic parameter control are compared.

5.1. Performance Indices

To quantitatively evaluate the single-objective mathematical benchmark problems, each simulation was repeated 50 times, with each run limited to 50,000 function evaluations (NFEs) per problem.
The performance of each algorithm was compared using statistical measures (e.g., best, mean, and worst error and standard deviation), the success ratio (SR) of reaching a feasible solution, the number of improved optimal solutions during optimization (NIS), and the number of function evaluations needed to enter the feasible solution range (NFEs-Fs). In this study, the feasible solution ranges were 10−10 (DV = 2, 5, 10) and 10−5 (DV = 30, 50). The parameter settings of all approaches used in this study are tabulated in Table 2.
For the multi-objective problems, various performance measures have been proposed in previous studies to evaluate different aspects (e.g., convergence and diversity) of Pareto-optimal solution (POS) sets. The indices described below capture both solution convergence and diversity and were therefore employed to compare the different POS sets.
The coverage of sets (CS) criterion, proposed by Zitzler et al. [63], is an index for comparison of the non-dominated degrees of optimal solutions from different iterations.
CS(X′, X″) takes values in the range 0–1; a value of 1 means that every solution in X″ is dominated by (or equal to) some solution in X′. Because CS is not symmetric, both CS(X′, X″) and CS(X″, X′) must be calculated to understand the exact dominance relationship between the two sets. The expression for CS is given in Equation (19).
$$\mathrm{CS}\left( X', X'' \right) = \frac{\left| \left\{ a'' \in X'' ;\ \exists\, a' \in X' :\ a' \succeq a'' \right\} \right|}{\left| X'' \right|}$$
where X′ and X″ denote the optimal solutions found in different iterations, and a′ and a″ represent each optimal solution.
The diversity (DI), proposed by Zitzler [64], is designed to evaluate the diversity of Pareto optimal solutions. It is presented in Equation (20) and uses the minimum and maximum values of the objective function.
$$\mathrm{DI} = \sqrt{ \frac{1}{M} \sum_{m=1}^{M} \left( \frac{\max f_m - \min f_m}{F_m^{Max} - F_m^{Min}} \right)^2 }$$
where FmMax and FmMin are the maximum and minimum values of the Pareto front (PF), fm is the mth objective function value, and M is the number of objective functions.
The generational distance (GD) metric was first presented by Veldhuizen and Lamont [65]. The main objective of this criterion is to clarify the capability of the different algorithms to find a set of non-dominated solutions with the lowest Pareto optimal front distance. This evaluation factor is defined in mathematical form in Equation (21).
$$GD = \left( \frac{1}{n_{pf}} \sum_{i=1}^{n_{pf}} d_i^2 \right)^{1/2}$$
where npf is the number of members in the generated PF, di is the Euclidean distance between member i in PF and the nearest member in PFoptimal, and the Euclidean distance (d) is calculated based on Equation (22).
$$d(p, q) = d(q, p) = \left( \sum_{i=1}^{n} \left( f_q^{i} - f_p^{i} \right)^2 \right)^{1/2}$$
where q = (fq1, fq2, …, fqn) is a point on the generated PF, and p = (fp1, fp2, …, fpn) is the member of PFoptimal nearest to q. The best possible value of the GD metric is 0, which corresponds to the generated front exactly covering PFoptimal.
Spacing (SP) is an evaluation index proposed by Schott [66] for measuring the homogeneity of the spatial distribution of the optimal solutions, i.e., how evenly consecutive solutions are distributed. SP is calculated as shown in Equation (23).
$$\mathrm{SP} = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} \left( \bar{d} - d_i \right)^2 }$$
where $d_i = \min_j \left( \left| f_1^i(x) - f_1^j(x) \right| + \left| f_2^i(x) - f_2^j(x) \right| \right)$, i, j = 1, …, n; $\bar{d}$ is the mean of all di; and n is the number of non-dominated solutions. SP is non-negative; values closer to 0 indicate that the optimal solutions are distributed more homogeneously.
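The GD and SP metrics of Equations (21) and (23) can be sketched for a bi-objective front as follows (SP uses the Manhattan nearest-neighbor gap, as in the definition above):

```python
import math

def generational_distance(pf, pf_opt):
    """Equation (21): RMS of each generated point's Euclidean distance to
    the nearest point on the true Pareto-optimal front."""
    def nearest(p):
        return min(math.dist(p, q) for q in pf_opt)
    return math.sqrt(sum(nearest(p) ** 2 for p in pf) / len(pf))

def spacing(pf):
    """Equation (23): spread of the Manhattan nearest-neighbor gaps;
    0 means a perfectly uniform front (two objectives assumed)."""
    d = [min(abs(p[0] - q[0]) + abs(p[1] - q[1])
             for j, q in enumerate(pf) if j != i)
         for i, p in enumerate(pf)]
    mean = sum(d) / len(d)
    return math.sqrt(sum((mean - di) ** 2 for di in d) / (len(d) - 1))
```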

5.2. Comparison of Algorithm Performance in Mathematical Benchmark Problems

This section compares the selected self-adaptive HS algorithm based on single- and multi-objective optimization aspects. The experiments were implemented on an Intel 3.40 GHz Core (TM) i7 with 8 GB RAM, where these algorithms were programmed with Visual Basic 6.0 under Windows 7 (64-bit). To eliminate the effect of the initial solution quality, all initial solutions employed the same solution set by random generation.
Because the compared algorithms were developed for single-objective optimization, they had to be extended with non-dominated solution sorting and solution-diversity methods in order to solve multi-objective problems. Therefore, the self-adaptive HS algorithms are supplemented with the non-dominated sorting approach [61] and the crowding distance method [62].

5.2.1. Single-Objective Optimization Problems

In the application to single-objective optimization problems, 25 unconstrained benchmark problems (5 function types × 5 decision-variable dimensions: 2, 5, 10, 30, and 50) proposed by Dixon [67], Rastrigin [68], and Molga and Smutnicki [69], together with two constrained benchmark problems [70,71], are employed to compare the performance of these approaches (Table 3). According to the characteristics of the algorithm operators, four analyses were performed to compare performance.
First, the optimization stability of each algorithm was analyzed using statistical analysis (the best, worst, and mean error) of the optimization results. Second, the optimization results were compared according to the size of the problem's search space: the number of decision variables was varied from two to fifty to compare optimization ability across search-space sizes. Third, the initial convergence ability of each algorithm was evaluated using the success ratio and NFEs-Fs. To calculate these two performance indices, a successful-solution threshold was set according to the number of decision variables (10−10 if the number of decision variables is less than or equal to 10, and 10−5 otherwise), and the initial convergence ability of each algorithm was examined.
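The dimension-dependent threshold rule and the resulting success ratio can be expressed compactly; the sketch below assumes one best-error value is recorded per independent run:

```python
def success_threshold(n_dv):
    """Dimension-dependent success tolerance used in the paper:
    1e-10 for up to 10 decision variables, 1e-5 otherwise."""
    return 1e-10 if n_dv <= 10 else 1e-5

def success_ratio(errors, n_dv):
    """Fraction of independent runs whose final error beats the
    tolerance; 'errors' holds one best-error value per run."""
    tol = success_threshold(n_dv)
    return sum(e <= tol for e in errors) / len(errors)

# e.g., 3 of 4 runs succeed for a 2-variable problem
print(success_ratio([0.0, 1e-12, 1e-8, 1e-11], 2))  # 0.75
```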
The initial convergence of an optimization algorithm describes how quickly it finds a feasible solution in the early stage; this reflects the exploration ability and greatly influences the quality of the final optimal solution. Fourth, the exploitation ability was evaluated using the NIS, i.e., the number of improvements of the optimal solution during optimization. The descriptive statistical results are shown in Figure 3, which presents the best, worst, and mean error of each algorithm for various numbers of decision variables.
The results show that SGHSA performs well compared to the other self-adaptive HS variants on the single-objective optimization benchmark functions. Especially in low-dimensional cases (DV = 2, 5), the statistical results of SHS, PSF-HS, and APF-HS exhibit high variance among the 50 independent runs. These three algorithms, which follow the standard HS operators (i.e., harmony memory consideration and pitch adjustment), produce unstable optimum solutions even when there are no issues with the parameter settings. In addition, NSHS does not show satisfactory performance on the Rosenbrock function, which has a valley-shaped solution space, because NSHS's search operators follow the standard deviation of the optimal solutions. For a valley-shaped function with a shallow and wide search space, this standard deviation is larger than for a bowl-shaped function.
Moreover, although the above results concern unconstrained problems, this study also applies constrained benchmark problems to compare across problem types. In Table 4, the constrained benchmark results are compared with those of more well-known optimization algorithms (i.e., the genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE)) to evaluate the performance of the presented self-adaptive techniques. The results show trends similar to the unconstrained problems. From a statistical perspective, SGHSA obtains stable optimal solutions in terms of the mean solution and standard deviation (SD), and it also finds the best optimal solution in both problems.
The next experiment observes the effect of dimensionality on the test functions, which is varied from 2 to 50. The results indicate that for most algorithms the solution error grows, degrading the quality of the optimal solution, as the number of decision variables increases; this is the normal behavior of optimization problems. However, PAHS shows the greatest degradation among these self-adaptive algorithms because it controls its own parameters based on the iteration count (i.e., the HMCR increases linearly, while the PAR and Bw decrease exponentially). This makes the earlier iterations perform exploration (global search), while the later iterations perform exploitation (local search). This parameter-control approach improves the optimal solution; however, there is no guarantee of a consistently high-quality solution, as the solution worsens if it is trapped at a local optimum. Table 5 lists the success ratios and NFEs-Fs to illustrate the initial convergence ability for the Rastrigin function. The feasible-solution thresholds are set to 10−10 (DV = 2, 5, 10) and 10−5 (DV = 30, 50), depending on the number of decision variables. In all cases, SGHSA has the best success ratio, and NSHS shows the lowest NFEs-Fs. However, as the number of decision variables increases, the success ratio of NSHS decreases drastically.
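The iteration-dependent parameter schedules attributed to PAHS above (linear HMCR growth, exponential PAR and Bw decay) can be sketched as follows; the endpoint values and exact decay forms are assumptions chosen for illustration, not the published settings:

```python
import math

def pahs_parameters(t, t_max,
                    hmcr_rng=(0.9, 1.0),
                    par_rng=(0.05, 0.5),
                    bw_rng=(1e-4, 1.0)):
    """Illustrative PAHS-style schedules: HMCR increases linearly while
    PAR and Bw decay exponentially with iteration t. The range endpoints
    are hypothetical placeholders."""
    frac = t / t_max
    # linear increase from hmcr_rng[0] to hmcr_rng[1]
    hmcr = hmcr_rng[0] + (hmcr_rng[1] - hmcr_rng[0]) * frac
    # exponential decay from the upper to the lower bound
    par = par_rng[1] * math.exp(math.log(par_rng[0] / par_rng[1]) * frac)
    bw = bw_rng[1] * math.exp(math.log(bw_rng[0] / bw_rng[1]) * frac)
    return hmcr, par, bw
```

Early iterations thus favor exploration (low HMCR, large PAR and Bw) and late iterations favor exploitation, which is exactly why the schedule cannot adapt if the search is stuck at a local optimum.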
The difference between the operators of these two algorithms lies in the new-solution generation approach. NSHS applies a method that uses a random value and a dynamic Bw governed by the standard deviation of the optimal solutions. Although this approach emphasizes exploration, it carries large uncertainty in finding an optimal solution, so the quality of the final solution depends strongly on the initial stage. SGHSA, in contrast, varies the PAR and Bw dynamically and linearly, which is a basic but effective way to improve the performance of an optimization algorithm: it harmonizes global and local searches by focusing on the global search in the initial stage and on the local search in the final stage. With respect to the NIS, the results are similar to those of NFEs-Fs. The NIS is the number of improvements of the optimal solution during optimization, and hence indicates how many times a new optimal solution is found. Because SGHSA has the largest average NIS, its exploration and exploitation abilities are balanced, and it yields a satisfactory performance.

5.2.2. Multi-Objective Optimization Problems

In this section, five multi-objective benchmark problems are used to evaluate the performance of the presented algorithms. Table 6 shows these classical test problems frequently used in mathematical multi-objective optimization; they are all unconstrained: ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 [63]. For a quantitative comparison, the spacing (SP), coverage set (CS), generational distance (GD), and diversity (DI) indicator values obtained by MOHS, PSF-HS, APF-HS, SGHSA, NSHS, and PAHS in 30 independent runs, together with the statistical analysis results, are shown in Table 7.
Four performance indices were used to quantitatively compare the approaches. The first index, CS, which is the most important for multi-objective optimization, was used to calculate the ratio of the number of non-dominated solutions obtained by a method to the number of Pareto solutions in the total set of solutions. In all of these problems, SGHSA had the highest CS and outperformed the other five approaches, indicating that SGHSA performed best in terms of non-domination strength. Furthermore, Figure 4 shows the Pareto optimal solutions of each self-adaptive HS for the ZDT1 problem. In Figure 4, the Pareto optimal solution of SGHSA shows the best convergence ability, and thus most of its members belong to the global Pareto optimal set, i.e., the non-dominated solutions among all integrated optimal solutions. The GD indicator also represents the convergence of the multi-objective optimal solutions, as it calculates the distance between the Pareto optimal solution obtained by each algorithm and the global Pareto optimal solution. As with CS, SGHSA was outstanding; however, the differences from the other five approaches were not large, with the exception of ZDT1. The GD result carries a similar meaning to CS: the approximated Pareto optimal solution obtained by SGHSA is closer to the global Pareto optimal solution than those obtained by the others. It can be clearly seen from Table 7 and Figure 4 that SGHSA and PAHS obtained better values of the DI indicator than the other algorithms in most of the test problems, meaning that their Pareto optimal solutions have high diversity on average. However, DI, which measures algorithm diversity in the multi-objective framework, is only appropriate for evaluating a situation where convergence is secured.
This is because DI calculates the distance between the extreme solutions on each side, and its formulation has little correlation with the convergence ability of the approach. Therefore, NSHS and PSF-HS, which have low convergence, cannot be fairly evaluated by DI. Furthermore, because SP is an indicator that evaluates diversity in multi-objective optimization, it showed similar results among the algorithms that guarantee convergence (i.e., APF-HS, SGHSA, and PAHS). In summary, SGHSA generated the best results across the performance indices for these mathematical benchmark problems. Notably, APF-HS and PAHS, which show good convergence for four of the five test problems, exhibited weakened convergence on ZDT3. The main reason is that the Pareto optimal solutions of ZDT3 are discontinuous, while the search operators of APF-HS and PAHS (memory consideration and pitch adjustment) balance global and local search. Discontinuous problems such as ZDT3 require more local search, which SGHSA provides by performing pitch adjustment or random searches at every iteration, unlike the general HS.
Therefore, given sufficient iterations, the effective local search generates better optimal solutions in multi-objective optimization. In addition, NSHS exhibited the lowest convergence ability in multi-objective optimization, although it solved most of the single-objective optimization problems (20 out of 25) successfully, as did SGHSA. This is because NSHS applies a search approach in which the global search is reinforced in the initial stage and the local search is emphasized in later iterations using the standard deviation of the optimal solutions (this standard deviation is high in the initial stage, and a small value indicates convergence to the solution). This search operator improves diversity in single-objective optimization. However, in multi-objective optimization, the standard deviation of the optimal solutions does not reflect solution quality, because the Pareto optimal solutions are determined by the non-domination relationship between the two objective functions. Moreover, in the multi-objective framework, the fitness values are deliberately widely distributed in order to provide various solutions to the decision maker. Based on these analyses, NSHS is therefore of limited use for solving multi-objective optimization problems.

5.3. Comparison of Self-Adaptive Technique Performance in WDS Design Problems

In the previous section, the characteristics of each algorithm's search operators were demonstrated on mathematical benchmark problems. This section applies the design of WDSs as an engineering problem to verify and compare the self-adaptive techniques. Three benchmark WDSs (i.e., the Hanoi network, the Saemangeum network, and the P-city network) are considered under single- and multi-objective frameworks. In the single-objective problem, the minimum cost is the objective function, while the multi-objective problem considers minimum cost and maximum system resilience simultaneously. Both design problems apply hydraulic constraints such as nodal pressure and pipe velocity. The following paragraphs describe the benchmark networks used in this study.
The Hanoi (HAN) network was introduced by Fujiwara and Khang [7]. It consists of three closed loops, one reservoir, 32 demand nodes, and 34 pipes. The Hazen–Williams (HW) roughness coefficient is 130 for all pipes. The minimum required pressure is 30 m. A total of six commercial pipe diameters ranging from 304.8 mm to 1016 mm are available. The minimum cost of the Hanoi network based on a single-objective optimal design (SOOD) has been reported to be USD 6.081 million [49], and the cost of that based on a multi-objective optimal design (MOOD) using the lowest cost and system resilience has been reported to be USD 6.195 million [40].
The Saemangeum network was first proposed by Yoo et al. [72]. It is composed of nine closed loops, one reservoir, 334 nodes, and 356 pipes. The total length of the network is 41.38 km. Table 8 presents the pipe cost data. A total of 18 commercial pipe diameters were used, and the cost estimation is the same as the construction cost for each pipe diameter according to the K-water report [73] in the cost estimation of the water supply facilities. The water pressure at each node and the water velocity in the pipeline denote the hydraulic constraints of the Saemangeum network. The constraints are 10 m and 35 m for the minimum and maximum head at each node, respectively, and 0.01 m/s and 2.5 m/s for the minimum and maximum water speed, respectively.
The P-city network [60] is a real-world network in Korea with a total of 1297 demand nodes supplied by one source. There are 1339 pipelines, 5 pump stations, and 5 valves. The total length of the network is 111.827 km. There are a total of nine commercial pipe diameters, ranging from 25 mm to 500 mm [73]. The HW roughness coefficient is in the range of 90–130. The minimum required pressure head is 15 m for each node. The original construction cost of the P-city network based on the SOOD was KRW 34.946 billion.

5.3.1. Single-Objective Optimal Design of WDSs

Table 9 shows the performance comparison of the three different sizes of WDSs under the self-adaptive techniques. For a fair comparison, all techniques are run with the same number of function evaluations (NFEs) for each WDS. In addition, since the optimization performance varies with the quality of the initial solutions [51], the same initial solutions, extracted by random sampling, are supplied to each self-adaptive technique.
Table 9 shows the statistical optimization results, including the worst, average, and best costs, the average NFEs-Fs, and the reduction rate between the KS and the average cost. The NFEs-Fs measures the number of function evaluations required for the solution to reach the known global optimal solution; it can therefore be applied to the Hanoi network, whose global optimal solution was revealed in a previous study [49]. In terms of the average cost, NSHS and SGHSA are superior to all other methods in the Hanoi network. The currently known global optimal solution for the Hanoi network is USD 6.081 million; all of the approaches can reach this solution, but NSHS needs less computational effort in terms of the average NFEs-Fs. Nevertheless, in large networks such as the Saemangeum and P-city networks, SGHSA provides better results than the others. In the real-world networks, the reduction rates between the KS and the average cost are greater than 10% (Saemangeum) and 17% (P-city) while satisfying the hydraulic constraints (i.e., the bounds on nodal pressure and pipe velocity). Among the self-adaptive approaches, SGHSA and NSHS obtain the best statistical solutions. This confirms the results obtained in Section 5.2, where the single-objective results on the mathematical benchmark problems exhibited trends similar to the WDS design application. Unlike in multi-objective optimization, NSHS shows relatively satisfactory performance in single-objective optimization, which carries over to the least-cost design problem.
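A highly simplified sketch of the least-cost design objective evaluated here is shown below. The cost table, penalty weight, and the nodal pressures (which in practice come from an external hydraulic solver such as EPANET) are all hypothetical placeholders, not the study's actual formulation:

```python
def design_cost(diameters, unit_cost, lengths, pressures,
                p_min=30.0, penalty=1e7):
    """Least-cost WDS design objective with a pressure-deficit penalty.
    diameters: chosen commercial diameter per pipe (keys of unit_cost)
    unit_cost: hypothetical cost per metre for each diameter
    lengths:   pipe lengths in metres
    pressures: nodal pressure heads from a hydraulic simulation (m)"""
    cost = sum(unit_cost[d] * l for d, l in zip(diameters, lengths))
    # penalise any node falling below the minimum required head
    violation = sum(max(0.0, p_min - p) for p in pressures)
    return cost + penalty * violation
```

Infeasible designs are pushed out of the harmony memory by the penalty term, so the search converges toward the cheapest design that satisfies the pressure constraint.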

5.3.2. Multi-Objective Optimal Design of WDSs

This section presents the multi-objective optimization results for the WDSs. Three analyses are reported: the Pareto optimal solutions of each network, the performance evaluation using performance measures, and the distribution of the non-dominated solutions. Figure 5 shows the Pareto optimal solutions and the performance comparison of each network using performance indices covering the convergence and diversity aspects.
Four performance measures are used to quantitatively compare each approach. Among the applied indices, CS and GD evaluate the convergence aspect, while DI and SP cover the diversity aspect. Figure 5b,d,f show the values of the performance indices for each approach. The first index, CS, is the most important for comparing the convergence of different multi-objective optimization solutions; it is the ratio of the number of non-dominated solutions obtained by each method to the number of Pareto solutions in the total set of solutions. In the Hanoi network, SGHSA has the highest CS of 0.37 and outperforms the other five approaches, meaning that SGHSA performed best in terms of non-domination strength.
The second index, DI, which shows how well a method finds a widespread Pareto optimal solution, is calculated from the given objective functions of the least-cost and maximum-resilience designs. With respect to diversity, four approaches (i.e., PSF-HS, APF-HS, PAHS, and SGHSA) generate similar DI values, meaning that most of the self-adaptive approaches except NSHS have good diversity. The third index, GD, also reflects the convergence of the Pareto optimal solution, similarly to CS, and is calculated by summing the distances between the best-known PF and the Pareto solutions of each approach. The GD results show a trend similar to CS, confirming that the convergence of SGHSA is better than that of the other approaches. For the Saemangeum network, the performance evaluation is analogous to that of the Hanoi network in terms of convergence and diversity. However, in the P-city network the results differ markedly for PAHS: its convergence deteriorates considerably compared with the previous optimization results. PAHS adjusts its three parameter-setting formulations according to the iteration number. A large-scale optimization problem requires its parameters to be fine-tuned, and the local search ability must be enhanced within a shorter time than in a small-scale problem; however, PAHS changes its search behavior on a fixed linear schedule regardless of the problem scale, making it difficult to control the exploitation ability precisely. PAHS is therefore not appropriate, in this formulation, for large-scale multi-objective optimization problems.
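The CS ratio described above can be sketched by pooling the fronts of all methods, extracting the non-dominated solutions of the pooled set, and reporting each method's share; this generic sketch assumes minimisation of both objectives:

```python
def dominates(a, b):
    """a dominates b for minimisation objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage_set_ratio(fronts_by_method):
    """CS as described in the text: each method's share of the combined
    Pareto set built from the pooled fronts of all methods."""
    pooled = [(m, p) for m, front in fronts_by_method.items() for p in front]
    best = [(m, p) for m, p in pooled
            if not any(dominates(q, p) for _, q in pooled if q != p)]
    return {m: sum(1 for mm, _ in best if mm == m) / len(best)
            for m in fronts_by_method}
```

A method whose solutions survive in the combined Pareto set has high non-domination strength, which is why CS is treated as the primary convergence index here.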
Figure 6a–c illustrate the relative distribution of solutions contributed by each self-adaptive approach in the Pareto optimal solution of three WDS problems (i.e., Hanoi network, Saemangeum network, P-city network).
The distribution plot makes it much easier to compare their performance (both convergence and diversity) intuitively. Since the solutions in the best-known PF are evenly mapped, the gaps appearing in the graph for each self-adaptive approach denote the absence of solutions at the corresponding positions within the best-known PF. The projection of the Pareto optimal solutions of the various self-adaptive approaches shows that the distributions depend on the network size. In the Hanoi and Saemangeum networks, the Pareto optimal solutions of PAHS are evenly distributed; however, in the P-city network, the largest of the application networks, PAHS makes a lower contribution than in the other networks. PAHS has weak convergence in large-scale optimization problems and is an unsuitable approach for large WDS designs. In contrast, NSHS has a reasonably good distribution in the large-scale problem but contributes little in the Hanoi and Saemangeum networks; it is therefore less reliable for obtaining the Pareto optimal solution. In summary, regardless of the problem scale, SGHSA exhibits the greatest convergence and diversity abilities, including on large problems, and is concluded to perform the best among the compared approaches.

5.4. Comparison of Algorithm Characteristics (i.e., Operator and Parameter)

This section compares the characteristics of the search operators of the self-adaptive HS variants (Table 10). The objective of a self-adaptive method is to eliminate the process of parameter setting and to set the appropriate parameters automatically for any optimization problem. By this criterion, the best self-adaptive approach would require no additional new parameters, and all of the algorithm's parameters should be controlled appropriately while maintaining good performance.
All of the self-adaptive HS algorithms dealt with in this study add new parameters in order to set the appropriate parameters, except for the PSF-HS. The parameter-setting-free approach preserves the operators of the existing HS algorithm and sets the parameters automatically. Although PSF-HS might be the closest to an ideal self-adaptive technique in that it uses only its own parameters, its performance is not good. By contrast, the algorithms with satisfactory performance, such as SGHSA, add new parameters or search operators. Although these additional parameters do not require optimal values, they still affect solution quality and therefore need proper values. Hence, a next-generation self-adaptive approach in the HS framework should add no new parameters while still delivering excellent solution performance. Among the algorithms considered in this study, a combination of the second PSF-HS and SGHSA can be expected to provide reliable results, because SGHSA's dynamic Bw can enhance the insufficient search ability of the second PSF-HS.

6. Conclusions

This study compares the performance of self-adaptive optimization approaches for efficient WDS design and presents a guide for selecting the appropriate approach in WDS design, utilizing the characteristics of each technique's formulation. The compared self-adaptive approaches were selected from previous studies developed in various fields (i.e., water resources engineering, mathematics, economics, structural engineering, traffic engineering, electrical engineering, and mechanical engineering). Among the 23 studies, there are seven kinds of techniques with characteristic approaches for automatic parameter setting, including the parameter-setting-free harmony search, almost-parameter-free harmony search, novel self-adaptive harmony search, self-adaptive global-based harmony search algorithm, and parameter-adaptive harmony search.
To this end, this study performed three kinds of analyses. First, a sensitivity analysis of each self-adaptive approach was performed on single-/multi-objective mathematical benchmark problems of various types (e.g., different solution shapes, many local optima). In the single-objective optimization, the compared algorithms were assessed with performance indices evaluating search stability (statistical analysis), the effect of dimensionality (sensitivity analysis over various numbers of decision variables), and initial convergence ability (success ratio and NFEs-Fs). In the multi-objective optimization, performance indices (spacing, coverage set, generational distance, and diversity) were applied to quantitatively evaluate the diversity and convergence of the Pareto optimal solutions. Second, based on the results for the mathematical problems, the algorithms were applied to WDS design as an engineering problem, considering minimum cost and maximum system resilience under the single-/multi-objective optimization framework, to verify and compare the self-adaptive approaches.
The objective functions are considered to minimize the total cost with maximum system resilience while satisfying system hydraulic constraints. Finally, the characteristics of search operators of the self-adaptive HS are compared and analyzed according to the presence or absence of additional parameters and operators.
According to the various analyses in this study, SGHSA outperformed the other approaches on both the single-/multi-objective benchmark and WDS design problems. NSHS and PAHS also showed quite good performance, but both approaches have low versatility: although they show good results on certain problems, their performance drops depending on the characteristics of the algorithm. For example, NSHS showed the lowest solution convergence in multi-objective optimization because it drives its local and global search by the standard deviation of the fitness. Moreover, with respect to its search operators, SGHSA adds new parameters (e.g., the maximum and minimum Bw) and search operators. These boundary values do not need to be optimal, yet they still affect solution quality, so the additional parameters likewise need proper values. Hence, a next-generation self-adaptive approach should be an algorithm that adds no new parameters while deriving excellent solution performance. In future studies, these results are expected to benefit the development of new self-adaptive optimization algorithms through the effective evaluation of each algorithm. In particular, the comparison approaches (i.e., single-/multi-objective optimization, application to an engineering problem, and the characteristics of the search operators) allow newly proposed algorithms to be rigorously tested in a significantly simpler way.

Author Contributions

Y.H.C. surveyed the previous studies. Y.H.C. wrote the original manuscript. Y.H.C. conducted simulations. Y.H.C. and J.H.K. conceived the original idea of the proposed method.

Funding

This work was supported by the Korea Ministry of Environment as part of the "Global Top Project (2016002120004)".

Acknowledgments

This work was supported by the Korea Ministry of Environment as part of the "Global Top Project".

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alperovits, E.; Shamir, U. Design of optimal water distribution systems. Water Resour. Res. 1977, 13, 885–900.
  2. Quindry, G.E.; Liebman, J.C.; Brill, E.D. Optimization of looped water distribution systems. J. Environ. Eng. 1981, 107, 665–679.
  3. Goulter, I.C.; Coals, A.V. Quantitative approaches to reliability assessment in pipe networks. J. Transp. Eng. 1986, 112, 287–301.
  4. Su, Y.C.; Mays, L.W.; Duan, N.; Lansey, K.E. Reliability-based optimization model for water distribution systems. J. Hydraul. Eng. 1987, 113, 1539–1556.
  5. Kessler, A.; Shamir, U. Analysis of the linear programming gradient method for optimal design of water supply networks. Water Resour. Res. 1989, 25, 1469–1480.
  6. Lansey, K.E.; Mays, L.W. Optimization model for water distribution system design. J. Hydraul. Eng. 1989, 115, 1401–1418.
  7. Fujiwara, O.; Khang, D.B. A two-phase decomposition method for optimal design of looped water distribution networks. Water Resour. Res. 1990, 26, 539–549.
  8. Schaake, J.C.; Lai, F.H. Linear Programming and Dynamic Programming Application to Water Distribution Network Design; MIT Hydrodynamics Laboratory: Cambridge, MA, USA, 1969.
  9. Wood, D. KYPipe Reference Manual. Civil Engineering Software Center; University of Kentucky: Lexington, KY, USA, 1995.
  10. Sherali, H.D. Global optimization of nonconvex polynomial programming problems having rational exponents. J. Glob. Optim. 1998, 12, 267–283.
  11. Simpson, A.R.; Dandy, G.C.; Murphy, L.J. Genetic algorithms compared to other techniques for pipe optimization. J. Water Res. Plan. Manag. 1994, 120, 423–443.
  12. Maier, H.R.; Simpson, A.R.; Zecchin, A.C.; Foong, W.K.; Phang, K.Y.; Seah, H.Y.; Tan, C.L. Ant colony optimization for design of water distribution systems. J. Water Res. Plan. Manag. 2003, 129, 200–209.
  13. Eusuff, M.M.; Lansey, K.E. Optimization of water distribution network design using the shuffled frog leaping algorithm. J. Water Res. Plan. Manag. 2003, 129, 210–225.
  14. Geem, Z.W.; Kim, J.H.; Loganathan, G.V. A new heuristic optimization algorithm: Harmony search. Simulation 2001, 76, 60–68.
  15. Bolognesi, A.; Bragalli, C.; Marchi, A.; Artina, S. Genetic heritage evolution by stochastic transmission in the optimal design of water distribution networks. Adv. Eng. Softw. 2010, 41, 792–801.
  16. Vasan, A.; Simonovic, S.P. Optimization of water distribution network design using differential evolution. J. Water Res. Plan. Manag. 2010, 136, 279–287.
  17. Goldberg, D.E.; Holland, J.H. Genetic algorithms and machine learning. Mach. Learn. 1988, 3, 95–99.
  18. Eberhart, R.; Kennedy, J. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
  19. Dorigo, M.; Di Caro, G. Ant colony optimization: A new meta-heuristic. In Proceedings of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; pp. 1470–1477.
  20. Cunha, M.D.C.; Sousa, J. Water distribution network design optimization: Simulated annealing approach. J. Water Res. Plan. Manag. 1999, 125, 215–221.
  21. Lippai, I.; Heaney, J.P.; Laguna, M. Robust water system design with commercial intelligent search optimizers. J. Comput. Civ. Eng. 1999, 13, 135–143.
  22. Kim, J.H.; Geem, Z.W.; Kim, E.S. Parameter estimation of the nonlinear Muskingum model using harmony search. J. Am. Water Resour. Assoc. 2001, 37, 1131–1138.
  23. Mahdavi, M.; Fesanghary, M.; Damangir, E. An improved harmony search algorithm for solving optimization problems. Appl. Math. Comput. 2007, 188, 1567–1579.
  24. Omran, M.G.; Mahdavi, M. Global-best harmony search. Appl. Math. Comput. 2008, 198, 643–656.
  25. Pan, Q.K.; Suganthan, P.N.; Tasgetiren, M.F.; Liang, J.J. A self-adaptive global best harmony search algorithm for continuous optimization problems. Appl. Math. Comput. 2010, 216, 830–848.
  26. Wang, C.M.; Huang, Y.F. Self-adaptive harmony search algorithm for optimization. Expert Syst. Appl. 2010, 37, 2826–2837.
  27. Degertekin, S.O. Improved harmony search algorithms for sizing optimization of truss structures. Comput. Struct. 2012, 92, 229–241.
  28. Luo, K. A novel self-adaptive harmony search algorithm. J. Appl. Math. 2013.
  29. Geem, Z.W. Parameter estimation of the nonlinear Muskingum model using parameter-setting-free harmony search. J. Hydrol. Eng. 2010, 16, 684–688.
  30. Geem, Z.W.; Cho, Y.H. Optimal design of water distribution networks using parameter-setting-free harmony search for two major parameters. J. Water Res. Plan. Manag. 2010, 137, 377–380.
  31. Geem, Z.W.; Sim, K.B. Parameter-setting-free harmony search algorithm. Appl. Math. Comput. 2010, 217, 3881–3889.
  32. Geem, Z.W. Economic dispatch using parameter-setting-free harmony search. J. Appl. Math. 2013, 5–9.
  33. Jiang, S.; Zhang, Y.; Wang, P.; Zheng, M. An almost-parameter-free harmony search algorithm for groundwater pollution source identification. Water Sci. Technol. 2013, 68, 2359–2366.
  34. Diao, K.; Zhou, Y.; Rauch, W. Automated creation of district metered area boundaries in water distribution systems. J. Water Res. Plan. Manag. 2012, 139, 184–190.
  35. Sabarinath, P.; Thansekhar, M.R.; Saravanan, R. Multiobjective optimization method based on adaptive parameter harmony search algorithm. J. Appl. Math. 2015.
  36. Jing, C.; Man, H.F.; Wang, Y.M. Novel Self-adaptive Harmony Search Algorithm for continuous optimization problems. In Proceedings of the 30th IEEE Chinese Control Conference, Yantai, China, 22–24 July 2011. [Google Scholar]
37. Kulluk, S.; Ozbakir, L.; Baykasoglu, A. Self-adaptive global best harmony search algorithm for training neural networks. Procedia Comput. Sci. 2011, 3, 282–286. [Google Scholar] [CrossRef]
  38. Kattan, A.; Abdullah, R. A dynamic self-adaptive harmony search algorithm for continuous optimization problems. Appl. Math. Comput. 2013, 219, 8542–8567. [Google Scholar] [CrossRef]
  39. Rani, D.S.; Subrahmanyam, N.; Sydulu, M. Self adaptive harmony search algorithm for optimal capacitor placement on radial distribution systems. In Proceedings of the 2013 IEEE International Conference on Energy Efficient Technologies for Sustainability (ICEETS), Nagercoil, India, 10–12 April 2013. [Google Scholar]
  40. Wang, Q.; Guidolin, M.; Savic, D.; Kapelan, Z. Two-objective design of benchmark problems of a water distribution system via MOEAs: Towards the best-known approximation of the true Pareto front. J. Water Res. Plan. Manag. 2014, 141, 04014060. [Google Scholar] [CrossRef]
  41. Naik, B.; Nayak, J.; Behera, H.S.; Abraham, A. A self adaptive harmony search based functional link higher order ANN for non-linear data classification. Neurocomputing 2016, 179, 69–87. [Google Scholar] [CrossRef]
  42. Rajagopalan, A.; Sengoden, V.; Govindasamy, R. Solving economic load dispatch problems using chaotic self-adaptive differential harmony search algorithm. Int. Trans. Electr. Energy Syst. 2015, 25, 845–858. [Google Scholar] [CrossRef]
  43. Shivaie, M.; Ameli, M.T.; Sepasian, M.S.; Weinsier, P.D.; Vahidinasab, V. A multistage framework for reliability-based distribution expansion planning considering distributed generations by a self-adaptive global-based harmony search algorithm. Reliab. Eng. Syst. Saf. 2015, 139, 68–81. [Google Scholar] [CrossRef]
  44. Vahid, M.Z.; Sadegh, M.O. A new method to reduce losses in distribution networks using system reconfiguration with distributed generations using self-adaptive harmony search algorithm. In Proceedings of the 4th IEEE Iranian Joint Congress on Fuzzy and Intelligent Systems (CFIS), Zahedan, Iran, 9–11 September 2015. [Google Scholar]
  45. Yan, H.H.; Duan, J.H.; Zhang, B.; Pan, Q.K. Harmony search algorithm with self-adaptive dynamic parameters. In Proceedings of the 27th IEEE Chinese Control and Decision Conference (CCDC), Qingdao, China, 23–25 May 2015. [Google Scholar]
  46. Zhao, F.; Liu, Y.; Zhang, C.; Wang, J. A self-adaptive harmony PSO search algorithm and its performance analysis. Expert Syst. Appl. 2015, 42, 7436–7455. [Google Scholar] [CrossRef]
  47. Kumar, V.; Chhabra, J.K.; Kumar, D. Parameter adaptive harmony search algorithm for unimodal and multimodal optimization problems. J. Comput. Sci. 2014, 5, 144–155. [Google Scholar] [CrossRef]
  48. Prasad, T.D.; Park, N.S. Multiobjective genetic algorithms for design of water distribution networks. J. Water Res. Plan. Manag. 2004, 130, 73–82. [Google Scholar] [CrossRef]
  49. Reca, J.; Martínez, J. Genetic algorithms for the design of looped irrigation water distribution networks. Water Resour. Res. 2006, 42. [Google Scholar] [CrossRef]
  50. Jung, D.; Kang, D.; Kim, J.H.; Lansey, K. Robustness-based design of water distribution systems. J. Water Res. Plan. Manag. 2013, 140, 04014033. [Google Scholar] [CrossRef]
  51. Choi, Y.; Jung, D.; Jun, H.; Kim, J. Improving Water Distribution Systems Robustness through Optimal Valve Installation. Water 2018, 10, 1223. [Google Scholar] [CrossRef]
  52. Shamir, U.Y.; Howard, C.D. Water distribution systems analysis. J. Hydraul. Eng. 1968, 94, 219–234. [Google Scholar]
  53. Lansey, K. Sustainable, robust, resilient, water distribution systems. In Proceedings of the WDSA 2012: 14th Water Distribution Systems Analysis Conference, Adelaide, Australia, 24–27 September 2012. [Google Scholar]
  54. Todini, E. Looped water distribution networks design using a resilience index based heuristic approach. Urban Water 2000, 2, 115–122. [Google Scholar] [CrossRef]
  55. Rossman, L.A. EPANET 2: Users Manual; EPA: Washington, DC, USA, 2010. [Google Scholar]
  56. Bragalli, C.; D’Ambrosio, C.; Lee, J.; Lodi, A.; Toth, P. On the optimal design of water distribution networks: A practical MINLP approach. Optim. Eng. 2012, 13, 219–246. [Google Scholar] [CrossRef]
  57. Walski, T.M.; Chase, D.V.; Savic, D.A. Water Distribution Modeling; Civil and Environmental Engineering and Engineering Mechanics Faculty Publications: Waterbury, CT, USA, 2001. [Google Scholar]
  58. Geem, Z.W.; Hwangbo, H. Application of harmony search to multi-objective optimization for satellite heat pipe design. In Proceedings of the UKC 2006 AST-1.1, Teaneck, NJ, USA, 10–13 August 2006; pp. 1–3. [Google Scholar]
  59. Choi, Y.H.; Lee, H.M.; Yoo, D.G.; Kim, J.H. Self-adaptive multi-objective harmony search for optimal design of water distribution networks. Eng. Optimiz. 2017, 49, 1957–1977. [Google Scholar] [CrossRef]
  60. Choi, Y.H.; Jung, D.; Lee, H.M.; Yoo, D.G.; Kim, J.H. Improving the quality of pareto optimal solutions in water distribution network design. J. Water Res. Plan. Manag. 2017, 143, 04017036. [Google Scholar] [CrossRef]
61. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T.A.M.T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evolut. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef]
  62. Fonseca, C.M.; Fleming, P.J. Genetic Algorithms for Multiobjective Optimization: Formulation Discussion and Generalization. ICGA 1993, 93, 416–423. [Google Scholar]
  63. Zitzler, E.; Deb, K.; Thiele, L. Comparison of multiobjective evolutionary algorithms: Empirical results. Evolut. Comput. 2000, 8, 173–195. [Google Scholar] [CrossRef] [PubMed]
64. Zitzler, E. Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications; Shaker: Ithaca, NY, USA, 1999. [Google Scholar]
  65. van Veldhuizen, D.A.; Lamont, G.B. Evolutionary computation and convergence to a Pareto front. In Proceedings of the Late Breaking Papers at the 1998 Genetic Programming Conference, Madison, WI, USA, 28–31 July 1998. [Google Scholar]
  66. Schott, J.R. Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization (No. AFIT/CI/CIA-95-039); Air Force Institute of Technology: Wright-Patterson AFB, OH, USA, 1995. [Google Scholar]
  67. Dixon, L.C.W. The Global Optimization Problem. An Introd. Towar. Glob. Optim. 1978, 2, 1–15. [Google Scholar]
  68. Rastrigin, L.A. Extremal control systems. Theor. Found. Eng. Cybern. Ser. 1974, 19, 3. [Google Scholar]
  69. Molga, M.; Smutnicki, C. Test functions for optimization needs. Test Funct. Optim. Needs 2005, 3, 101. [Google Scholar]
  70. Hedar, A.R.; Fukushima, M. Derivative-free filter simulated annealing method for constrained continuous global optimization. J. Glob. Optim. 2006, 35, 521–549. [Google Scholar] [CrossRef]
  71. Chootinan, P.; Chen, A. Constraint handling in genetic algorithms using a gradient-based repair method. Comput. Oper. Res. 2006, 33, 2263–2281. [Google Scholar] [CrossRef]
  72. Yoo, D.G.; Lee, H.M.; Sadollah, A.; Kim, J.H. Optimal pipe size design for looped irrigation water supply system using harmony search: Saemangeum project area. Sci. World J. 2015. [Google Scholar] [CrossRef]
73. K-Water. Water Facilities Construction Cost Estimation Report; K-Water: Daejeon, Korea, 2010. [Google Scholar]
Figure 1. Concept of non-dominated sorting and crowding distance approaches.
Figure 2. Description of entire model formulation. WDS = water distribution system.
Figure 3. Statistical analysis results of unconstrained single-objective optimization problems.
Figure 4. Pareto optimal solution of each self-adaptive HS (ZDT1).
Figure 5. Comparison of results in the multi-objective optimal design of water distribution systems: (a,c,e) Pareto optimal solutions of each network; (b,d,f) Performance measures of the best solution of each network.
Figure 6. Distribution of non-dominated solutions from each self-adaptive approach in the Pareto optimal solutions: (a) Hanoi network, (b) Saemangeum network, (c) P-city network.
Table 1. Category of self-adaptive harmony search (HS). PAR = pitch adjustment rate; HMCR = harmony memory consideration rate; Bw = bandwidth.
| Algorithm Category | Application Field | Improvement, [Reference] |
| --- | --- | --- |
| Single-objective self-adaptive HS | Water resource engineering | HMCR, PAR, [29] |
| | Water resource engineering | HMCR, PAR, [30] |
| | Water resource engineering, mathematics | HMCR, PAR, [31] |
| | Economic dispatch | HMCR, PAR, [32] |
| | Water quality engineering | HMCR, PAR, Bw, [33] |
| | Mathematics | Bw, [25] |
| | Mathematics | Bw, [26] |
| | Mathematics | HMCR, Bw, [36] |
| | Mathematics | HMCR, PAR, [37] |
| | Structural engineering | Bw, [27] |
| | Mathematics | PAR, Bw, [38] |
| | Mathematics | HMCR, PAR, Bw, [28] |
| | Traffic engineering | PAR, Bw, [39] |
| | Mathematics | HMCR, [40] |
| | Data mining | PAR, [41] |
| | Economic dispatch | PAR, [42] |
| | Electricity system | PAR, Bw, [43] |
| | Electricity system | HMCR, PAR, [44] |
| | Mathematics | HMCR, PAR, [45] |
| | Mathematics | PAR, Bw, [46] |
| | Mathematics | HMCR, PAR, Bw, [47] |
| Multi-objective self-adaptive HS | Mathematics | HMCR, PAR, Bw, [34] |
| | Mechanical engineering | HMCR, PAR, Bw, [35] |
Table 2. Applied parameters for self-adaptive HS algorithms.
AlgorithmsHMSHMCRPARBwNoise Value
LBUBLBUBLBUB
HS5/10
(if DV ≤10,
HMS = 5
otherwise, HMS = 10)
0.950.110−4-
PSF-HS first0.050.110−510−410−410−3
PSF-HS second
APF-HS10−510−4
NSHS-
SGHSA0.950.050.110−510−4-
PAHS0.50.95
Note: DV = decision variables; LB, UB = lower boundary and upper boundary value; PSF = parameter-setting-free HS; APF = almost parameter-setting-free HS; NSHS = novel self-adaptive HS; SGHSA = self-adaptive global-based HS; PAHS = parameter-adaptive HS.
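The parameters in Table 2 govern the three basic HS operators: harmony memory consideration (HMCR), pitch adjustment (PAR with bandwidth Bw), and random selection. A minimal single-objective HS loop, written as a generic illustration rather than any of the compared variants, might look like the following sketch:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.95, par=0.1, bw=1e-4, iters=5000):
    # Initialize the harmony memory (HM) with HMS random solution vectors.
    hm = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    cost = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:            # harmony memory consideration
                x = hm[random.randrange(hms)][j]
                if random.random() < par:         # pitch adjustment within Bw
                    x += random.uniform(-bw, bw)
            else:                                 # random selection
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))       # keep within the search domain
        worst = max(range(hms), key=cost.__getitem__)
        c = f(new)
        if c < cost[worst]:                       # replace the worst harmony
            hm[worst], cost[worst] = new, c
    best = min(range(hms), key=cost.__getitem__)
    return hm[best], cost[best]
```

The self-adaptive variants in Table 2 differ only in how HMCR, PAR, and Bw are updated during this loop rather than being fixed.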
Table 3. Test problems for single-objective optimization.
| Name (Function Shape) | Formulation | Search Domain |
| --- | --- | --- |
| Sphere function (bowl-shaped, unconstrained) | Min f(x) = Σ_{i=1}^{n} x_i² | [−∞, ∞]^n |
| Rosenbrock function (valley-shaped, unconstrained) | Min f(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²] | [−30, 30]^n |
| Rastrigin function (many local optima, unconstrained) | Min f(x) = 10n + Σ_{i=1}^{n} [x_i² − 10 cos(2π x_i)] | [−5.12, 5.12]^n |
| Griewank function (many local optima, unconstrained) | Min f(x) = Σ_{i=1}^{n} x_i²/4000 − Π_{i=1}^{n} cos(x_i/√i) + 1 | [−600, 600]^n |
| Ackley function (many local optima, unconstrained) | Min f(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2π x_i)) + 20 + exp(1) | [−32.768, 32.768]^n |
| Constrained problem 1 | Min f(x) = (x_1 − 10)³ + (x_2 − 20)³, subject to g_1(x) = −(x_1 − 5)² − (x_2 − 5)² + 100 ≤ 0 and g_2(x) = (x_1 − 6)² + (x_2 − 5)² − 82.81 ≤ 0 | 13 ≤ x_1 ≤ 100, 0 ≤ x_2 ≤ 100 |
| Constrained problem 2 | Min f(x) = (x_1 − 10)² + 5(x_2 − 12)² + x_3⁴ + 3(x_4 − 11)² + 10x_5⁶ + 7x_6² + x_7⁴ − 4x_6x_7 − 10x_6 − 8x_7, subject to g_1(x) = 127 − 2x_1² − 3x_2⁴ − x_3 − 4x_4² − 5x_5 ≥ 0, g_2(x) = 282 − 7x_1 − 3x_2 − 10x_3² − x_4 + x_5 ≥ 0, g_3(x) = 196 − 23x_1 − x_2² − 6x_6² + 8x_7 ≥ 0, and g_4(x) = −4x_1² − x_2² + 3x_1x_2 − 2x_3² − 5x_6 + 11x_7 ≥ 0 | [−10, 10]^n, i = 1, 2, …, 7 |
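The unconstrained benchmarks in Table 3 are standard closed-form test functions and can be transcribed directly from their formulations (a generic transcription, not the authors' code):

```python
import math

def sphere(x):
    # Bowl-shaped, global minimum 0 at the origin.
    return sum(v * v for v in x)

def rosenbrock(x):
    # Valley-shaped, global minimum 0 at (1, ..., 1).
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):
    # Many local optima, global minimum 0 at the origin.
    n = len(x)
    return 10 * n + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def griewank(x):
    # Many local optima, global minimum 0 at the origin.
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1

def ackley(x):
    # Many local optima, global minimum 0 at the origin.
    n = len(x)
    a = -20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
    b = -math.exp(sum(math.cos(2 * math.pi * v) for v in x) / n)
    return a + b + 20 + math.e
```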
Table 4. Statistical analysis results of constrained single-objective optimization problems.
| Problem Name | Algorithm | Worst Solution | Mean Solution | Best Solution | SD |
| --- | --- | --- | --- | --- | --- |
| Constrained problem 1 | SHS | −6473.900 | −6642.600 | −6952.100 | 2.43 × 10² |
| | PSF-HS first | −6577.123 | −6740.288 | −6956.251 | 2.70 × 10² |
| | PSF-HS second | −6901.721 | −6961.814 | −6961.814 | 7.30 × 10⁻⁴ |
| | APF-HS | −6960.415 | −6961.814 | −6961.814 | 1.70 × 10⁻² |
| | NSHS | −6961.284 | −6961.813 | −6961.814 | 6.00 × 10⁻³ |
| | SGHSA | −6961.813 | −6961.813 | −6961.814 | 5.77 × 10⁻⁴ |
| | PAHS | −6350.262 | −6875.940 | −6961.814 | 1.60 × 10⁻¹ |
| | GA | −6821.511 | −6861.196 | −6961.813 | 7.23 × 10⁻⁴ |
| | PSO | −6791.813 | −6921.133 | −6961.813 | 8.88 × 10⁻⁴ |
| | DE | −6960.813 | −6961.412 | −6961.814 | 5.04 × 10⁻⁵ |
| Constrained problem 2 | SHS | 683.181 | 681.160 | 680.911 | 4.11 × 10⁻² |
| | PSF-HS first | 680.721 | 680.681 | 680.631 | 1.00 × 10⁻⁵ |
| | PSF-HS second | 682.965 | 681.347 | 680.631 | 5.70 × 10⁻¹ |
| | APF-HS | 682.651 | 681.642 | 680.426 | 2.70 × 10⁻² |
| | NSHS | 682.081 | 681.246 | 680.426 | 3.22 × 10⁻³ |
| | SGHSA | 680.763 | 680.656 | 680.426 | 3.40 × 10⁻⁴ |
| | PAHS | 680.719 | 680.643 | 680.632 | 1.55 × 10⁻² |
| | GA | 680.653 | 680.638 | 680.631 | 6.61 × 10⁻³ |
| | PSO | 684.528 | 680.971 | 680.634 | 5.10 × 10⁻¹ |
| | DE | 681.144 | 680.503 | 680.426 | 6.71 × 10⁻¹ |
Table 5. Optimization results using performance indexes (Rastrigin function).
| Number of Decision Variables | Algorithm | Success Ratio (%) | Best NFEs-Fs | Average NFEs-Fs | Average NIS |
| --- | --- | --- | --- | --- | --- |
| 2 | SHS | 14 | 3453 | 36,727.3 | 53.4 |
| | PSF-HS first | 74 | 1291 | 9770.6 | 75.7 |
| | PSF-HS second | 64 | 1258 | 10,235.3 | 76.4 |
| | APF-HS | 96 | 104 | 7709.4 | 89.8 |
| | NSHS | 100 | 51 | 190.1 | 168.3 |
| | SGHSA | 100 | 176 | 529.0 | 173.7 |
| | PAHS | 96 | 200 | 3000.4 | 111.3 |
| 5 | SHS | 0 | - | - | 19.2 |
| | PSF-HS first | 46 | 12,218 | 21,572.1 | 38.1 |
| | PSF-HS second | 6 | 38,247 | 39,332.3 | 37.1 |
| | APF-HS | 94 | 7086 | 15,585.0 | 48.3 |
| | NSHS | 36 | 5762 | 9863.4 | 41.2 |
| | SGHSA | 100 | 417 | 886.7 | 162.5 |
| | PAHS | 56 | 18,588 | 26,523.3 | 35.3 |
| 10 | SHS | 0 | - | - | 22.2 |
| | PSF-HS first | 32 | 32,522 | 44,321.3 | 70.7 |
| | PSF-HS second | 4 | 42,776 | 48,721.1 | 62.9 |
| | APF-HS | 96 | 9177 | 21,627.1 | 88.8 |
| | NSHS | 32 | 36,251 | 45,126.2 | 69.2 |
| | SGHSA | 100 | 1263 | 1974.3 | 270.2 |
| | PAHS | 24 | 39,853 | 47,953.3 | 36.8 |
| 30 | SHS | 15 | - | - | 28.0 |
| | PSF-HS first | 100 | 94 | 233.4 | 117.6 |
| | PSF-HS second | 100 | 101 | 214.2 | 58.7 |
| | APF-HS | 100 | 63 | 112.3 | 281.9 |
| | NSHS | 100 | 18 | 30.1 | 399.3 |
| | SGHSA | 100 | 57 | 116.9 | 425.1 |
| | PAHS | 100 | 5062 | 9677.9 | 209.3 |
| 50 | SHS | 0 | - | - | 33.3 |
| | PSF-HS first | 50 | 369 | 929.3 | 49.1 |
| | PSF-HS second | 100 | 447 | 808.2 | 56.9 |
| | APF-HS | 100 | 97 | 223.8 | 381.1 |
| | NSHS | 100 | 30 | 39.8 | 682.1 |
| | SGHSA | 100 | 148 | 302.9 | 752.5 |
| | PAHS | 80 | 37,905 | 42,695.4 | 390.3 |

Note: NFEs-Fs = number of function evaluations to enter the feasible solution range; NIS = number of improved optimal solutions during optimization.
Table 6. Test problem for multi-objective optimization.
| Name | Formulation | Search Domain |
| --- | --- | --- |
| Zitzler Deb Thiele's function No. 1 (ZDT1) | Min f_1(x) = x_1, f_2(x) = g(x)·h(f_1, g); g(x) = 1 + (9/(n − 1)) Σ_{i=2}^{n} x_i; h(f_1, g) = 1 − √(f_1/g) | 0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30 |
| Zitzler Deb Thiele's function No. 2 (ZDT2) | Min f_1(x) = x_1, f_2(x) = g(x)·h(f_1, g); g(x) = 1 + (9/(n − 1)) Σ_{i=2}^{n} x_i; h(f_1, g) = 1 − (f_1/g)² | 0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30 |
| Zitzler Deb Thiele's function No. 3 (ZDT3) | Min f_1(x) = x_1, f_2(x) = g(x)·h(f_1, g); g(x) = 1 + (9/(n − 1)) Σ_{i=2}^{n} x_i; h(f_1, g) = 1 − √(f_1/g) − (f_1/g) sin(10π f_1) | 0 ≤ x_i ≤ 1, 1 ≤ i ≤ 30 |
| Zitzler Deb Thiele's function No. 4 (ZDT4) | Min f_1(x) = x_1, f_2(x) = g(x)·h(f_1, g); g(x) = 1 + 10(n − 1) + Σ_{i=2}^{n} [x_i² − 10 cos(4π x_i)]; h(f_1, g) = 1 − √(f_1/g) | 0 ≤ x_1 ≤ 1; −5 ≤ x_i ≤ 5, 2 ≤ i ≤ 10 |
| Zitzler Deb Thiele's function No. 6 (ZDT6) | Min f_1(x) = 1 − exp(−4x_1) sin⁶(6π x_1), f_2(x) = g(x)·h(f_1, g); g(x) = 1 + 9[(Σ_{i=2}^{n} x_i)/(n − 1)]^0.25; h(f_1, g) = 1 − (f_1/g)² | 0 ≤ x_i ≤ 1, 1 ≤ i ≤ 10 |
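The ZDT functions in Table 6 evaluate two objectives from one decision vector; on the true Pareto front the auxiliary term g(x) equals 1 (i.e., x_i = 0 for i ≥ 2). Two of them can be sketched directly from the formulations (a generic transcription, not the authors' code):

```python
import math

def zdt1(x):
    # ZDT1: convex front; f2 = 1 - sqrt(f1) when x2..xn are all zero.
    f1 = x[0]
    g = 1 + 9 * sum(x[1:]) / (len(x) - 1)
    return f1, g * (1 - math.sqrt(f1 / g))

def zdt6(x):
    # ZDT6: non-convex front with a non-uniform solution density.
    f1 = 1 - math.exp(-4 * x[0]) * math.sin(6 * math.pi * x[0]) ** 6
    g = 1 + 9 * (sum(x[1:]) / (len(x) - 1)) ** 0.25
    return f1, g * (1 - (f1 / g) ** 2)
```

For example, the front point at f1 = 0.25 on ZDT1 has f2 = 1 − √0.25 = 0.5.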
Table 7. Multi-objective optimization results using performance indexes.
| Problem | Algorithm | CS (Convergence) | GD (Convergence) | DI (Diversity) | SP (Diversity) |
| --- | --- | --- | --- | --- | --- |
| ZDT1 | SHS | 0.112 | 8.41 × 10⁻¹⁶ | 1.314 | 4.21 × 10⁻³ |
| | PSF-HS first | 0.129 | 2.34 × 10⁻¹⁷ | 1.414 | 2.53 × 10⁻³ |
| | PSF-HS second | 0.133 | 8.09 × 10⁻¹⁸ | 1.247 | 2.05 × 10⁻² |
| | APF-HS | 0.135 | 2.66 × 10⁻¹⁸ | 1.288 | 4.02 × 10⁻² |
| | SGHSA | 0.145 | 0 | 1.207 | 3.71 × 10⁻² |
| | NSHS | 0.098 | 0 | 0.786 | 6.73 × 10⁻⁵ |
| | PAHS | 0.139 | 0 | 1.414 | 3.16 × 10⁻³ |
| ZDT2 | SHS | 0.143 | 1.96 × 10⁻⁹ | 1.236 | 4.87 × 10⁻³ |
| | PSF-HS first | 0.151 | 1.93 × 10⁻⁹ | 1.401 | 4.15 × 10⁻³ |
| | PSF-HS second | 0.159 | 1.92 × 10⁻⁹ | 0.915 | 7.38 × 10⁻³ |
| | APF-HS | 0.161 | 1.85 × 10⁻⁹ | 0.889 | 6.19 × 10⁻³ |
| | SGHSA | 0.168 | 1.88 × 10⁻⁹ | 1.414 | 9.91 × 10⁻³ |
| | NSHS | 0.062 | 1.82 × 10⁻⁹ | 0.781 | 1.23 × 10⁻⁶ |
| | PAHS | 0.136 | 1.81 × 10⁻⁹ | 1.414 | 4.06 × 10⁻³ |
| ZDT3 | SHS | 0.144 | 1.53 × 10⁻⁹ | 1.106 | 6.13 × 10⁻³ |
| | PSF-HS first | 0.120 | 3.72 × 10⁻¹⁷ | 1.108 | 6.01 × 10⁻³ |
| | PSF-HS second | 0.131 | 1.41 × 10⁻¹⁷ | 0.856 | 3.55 × 10⁻² |
| | APF-HS | 0.135 | 7.85 × 10⁻¹⁸ | 1.236 | 9.36 × 10⁻³ |
| | SGHSA | 0.149 | 3.97 × 10⁻¹⁸ | 1.399 | 4.69 × 10⁻² |
| | NSHS | 0.104 | 3.38 × 10⁻¹⁸ | 1.108 | 8.28 × 10⁻³ |
| | PAHS | 0.127 | 2.49 × 10⁻¹⁸ | 1.088 | 5.75 × 10⁻³ |
| ZDT4 | SHS | 0.151 | 2.75 × 10⁻⁹ | 1.414 | 3.73 × 10⁻³ |
| | PSF-HS first | 0.151 | 2.74 × 10⁻⁹ | 1.314 | 3.12 × 10⁻³ |
| | PSF-HS second | 0.159 | 2.76 × 10⁻⁹ | 1.160 | 1.21 × 10⁻² |
| | APF-HS | 0.161 | 2.73 × 10⁻⁹ | 1.265 | 5.44 × 10⁻³ |
| | SGHSA | 0.168 | 2.72 × 10⁻⁹ | 1.414 | 5.57 × 10⁻³ |
| | NSHS | 0 | 2.71 × 10⁻⁹ | 1.003 | 1.18 × 10⁻⁶ |
| | PAHS | 0.147 | 2.75 × 10⁻⁹ | 1.414 | 3.39 × 10⁻³ |
| ZDT6 | SHS | 0.126 | 3.18 × 10⁻⁹ | 1.216 | 4.12 × 10⁻³ |
| | PSF-HS first | 0.127 | 3.17 × 10⁻⁹ | 1.361 | 2.93 × 10⁻³ |
| | PSF-HS second | 0.134 | 3.15 × 10⁻⁹ | 0.848 | 2.71 × 10⁻² |
| | APF-HS | 0.141 | 3.16 × 10⁻⁹ | 1.361 | 6.64 × 10⁻³ |
| | SGHSA | 0.148 | 3.13 × 10⁻⁹ | 1.361 | 6.68 × 10⁻³ |
| | NSHS | 0.121 | 3.12 × 10⁻⁹ | 1.361 | 7.55 × 10⁻³ |
| | PAHS | 0.141 | 3.11 × 10⁻⁹ | 1.361 | 3.06 × 10⁻³ |
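Two of the indices in Table 7, generational distance (GD) [65] and spacing (SP) [66], follow standard definitions: GD measures closeness of the obtained front to the true front, while SP measures how evenly the solutions are spread. A generic sketch of these definitions (not the authors' implementation) is:

```python
import math

def generational_distance(front, true_front):
    # Van Veldhuizen's GD: (sum of squared nearest distances)^(1/2) / N.
    d = [min(math.dist(p, q) for q in true_front) for p in front]
    return math.sqrt(sum(x * x for x in d)) / len(front)

def spacing(front):
    # Schott's SP: standard deviation of the nearest-neighbour (L1) distances;
    # zero means perfectly evenly spaced solutions.
    d = [min(sum(abs(a - b) for a, b in zip(p, q))
             for j, q in enumerate(front) if j != i)
         for i, p in enumerate(front)]
    mean = sum(d) / len(d)
    return math.sqrt(sum((mean - x) ** 2 for x in d) / (len(d) - 1))
```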
Table 8. The specification of the applied networks.
| Problem | NP | NN | PD (mm) | PCD (unit cost per m) | SS | NFEs (SOOD) | NFEs (MOOD) | KS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Hanoi network (HAN) | 34 | 32 | 304.8, 406.4, 508.0, 609.6, 762.0, 1016 | 45.72, 70.40, 98.37, 129.33, 180.74, 278.28 | 2.87 × 10²⁶ | 50,000 | 100,000 | USD 6.081 million |
| Saemangeum network (SAN) | 356 | 334 | 80, 100, 150, 200, 250, 300, 350, 400, 450, 500, 600, 700, 800 | 86,500; 100,182; 124,737; 153,347; 186,909; 219,089; 250,307; 288,313; 305,397; 344,394; 400,586; 506,082; 678,144 | 7.53 × 10⁴⁴⁶ | 100,000 | 500,000 | KRW 11.200 billion |
| P-city network (PCN) | 1339 | 1297 | 25, 50, 80, 100, 150, 200, 250, 300, 500 | 43.80, 56.85, 72.51, 82.95, 109.05, 135.15, 161.25, 187.35, 291.70 | 5.37 × 10¹²⁷⁷ | 100,000 | 500,000 | KRW 34.946 billion |

Note: NP = number of pipes; NN = number of nodes; PD = pipe diameter (mm); PCD = pipe cost data (unit cost per m); SS = search space size; NFEs = number of function evaluations; SOOD = single-objective optimization design; MOOD = multi-objective optimization design; KS = known minimum solution. For the benchmark network (i.e., the Hanoi network), the KS is the known global optimal solution; for the real-world networks (i.e., the Saemangeum and P-city networks), the original design cost is used as the KS. KRW = Korean won, the currency of South Korea.
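The SS column appears to equal the number of candidate diameters raised to the number of pipes; assuming that interpretation, the Hanoi and P-city entries can be reproduced (the helper name is illustrative):

```python
import math

def search_space_size(n_diameters, n_pipes):
    # Mantissa/exponent of n_diameters ** n_pipes, computed in log10 form
    # because the raw value overflows a float for the larger networks.
    e = n_pipes * math.log10(n_diameters)
    return 10 ** (e - math.floor(e)), math.floor(e)

m, e = search_space_size(6, 34)   # Hanoi: 6 candidate diameters, 34 pipes
print(f"{m:.2f} x 10^{e}")        # prints 2.87 x 10^26, matching Table 8
```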
Table 9. Results of the optimal design of the applied networks.

| Network | Method | KS | Best Cost | Average Cost | Worst Cost | Average NFEs-Fs | Reduction Rate between KS and Average Cost (%) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| HAN | Simple HS | 6.081 million (Unit: USD) | 6.081 | 6.319 | 6.632 | 43,149 | - |
| | PSF-HS first | | 6.081 | 6.252 | 6.508 | 40,200 | - |
| | PSF-HS second | | 6.081 | 6.213 | 6.623 | 38,721 | - |
| | APF-HS | | 6.081 | 6.223 | 6.782 | 39,842 | - |
| | SGHSA | | 6.081 | 6.150 | 6.423 | 27,980 | - |
| | NSHS | | 6.081 | 6.145 | 6.531 | 28,400 | - |
| | PAHS | | 6.081 | 6.152 | 6.592 | 32,450 | - |
| SAN | Simple HS | 11.200 billion (Unit: KRW) | 10.016 | 10.072 | 11.006 | - | 10.07 |
| | PSF-HS first | | 10.017 | 10.068 | 10.982 | - | 10.11 |
| | PSF-HS second | | 9.982 | 10.066 | 10.832 | - | 10.12 |
| | APF-HS | | 9.991 | 10.006 | 10.320 | - | 10.65 |
| | SGHSA | | 9.943 | 9.970 | 10.218 | - | 10.97 |
| | NSHS | | 9.956 | 9.972 | 10.466 | - | 10.96 |
| | PAHS | | 9.970 | 9.986 | 10.312 | - | 10.84 |
| PCN | Simple HS | 34.946 billion (Unit: KRW) | 25.494 | 28.981 | 33.494 | - | 17.07 |
| | PSF-HS first | | 25.545 | 28.321 | 32.545 | - | 18.96 |
| | PSF-HS second | | 25.601 | 27.651 | 32.601 | - | 20.88 |
| | APF-HS | | 25.512 | 27.654 | 31.512 | - | 20.87 |
| | SGHSA | | 25.194 | 27.416 | 30.187 | - | 21.55 |
| | NSHS | | 25.240 | 27.553 | 30.100 | - | 21.16 |
| | PAHS | | 25.287 | 27.953 | 31.024 | - | 20.01 |
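The reduction rate in the last column is the percentage saving of the average optimized cost relative to the known solution (KS) from Table 8, which can be checked directly against the tabulated values (the helper name is illustrative):

```python
def reduction_rate(known_solution, average_cost):
    # Percentage reduction of the average optimized cost relative to the KS.
    return (known_solution - average_cost) / known_solution * 100.0

print(round(reduction_rate(11.200, 10.072), 2))  # 10.07 (SAN, Simple HS)
print(round(reduction_rate(34.946, 28.981), 2))  # 17.07 (PCN, Simple HS)
```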
Table 10. Comparison of the operator and parameters of self-adaptive HS.
| Algorithm | Additional Operators | Additional Parameters | Improvement |
| --- | --- | --- | --- |
| PSF-HS first | - | - | HMCR, PAR |
| PSF-HS second | - | - | HMCR, PAR |
| APF-HS | - | Max/Min Bw | HMCR, PAR, Bw |
| NSHS | PA | Max/Min Bw, f_std, Adj_min | HMCR, Bw |
| SGHSA | PA | Max/Min Bw | Bw |
| PAHS | - | Max/Min HMCR, PAR, Bw | HMCR, PAR, Bw |

Share and Cite

Choi, Y.H.; Kim, J.H. Self-Adaptive Models for Water Distribution System Design Using Single-/Multi-Objective Optimization Approaches. Water 2019, 11, 1293. https://doi.org/10.3390/w11061293