Self-Adaptive Models for Water Distribution System Design Using Single- / Multi-Objective Optimization Approaches

This study compares the performance of self-adaptive optimization approaches for efficient water distribution system (WDS) design and presents a guide for selecting the appropriate method based on the characteristics of each technique's formulation. To this end, this study performs three types of analyses. First, a sensitivity analysis of each self-adaptive approach is conducted on single-/multi-objective mathematical benchmark problems of various types (e.g., differing solution-space shapes or many local optima). Second, based on the applications and results of the mathematical problems, the performance of each algorithm is verified on the WDS design problem, considering minimum cost and maximum system resilience under single-/multi-objective optimization frameworks. Third, the characteristics of the search operators in each self-adaptive approach are compared according to the presence or absence of additional parameters and operators. Moreover, various performance indices are employed for quantitative comparison of the algorithms. Each algorithm is found to exhibit different characteristics depending on the problem scale and solution type. These results are expected to benefit future research in the formulation of new approaches and developments, as this study provides rigorous testing of the performance of newly proposed algorithms in a highly simplified manner.


Introduction
The water distribution system (WDS) is a system of engineered hydrologic and hydraulic components that enables the collection, transmission, treatment, storage, and distribution of water from the source to demand locations while satisfying nodal pressure and pipe velocity requirements. WDS design determines the size and capacity of the system components. In the WDS industry, achieving an optimal design is very difficult given the highly complex hydraulic conditions. Therefore, various design techniques have been developed and applied for effective WDS design in the past.
Traditional WDS design relies on the engineer's ability and experience. Consequently, traditional approaches differed in the quality of their solutions: they did not provide stable designs and could not guarantee optimality. To overcome these drawbacks, calculation-based approaches (i.e., linear programming, non-linear programming, and dynamic programming) were applied to the optimal design of WDSs [1][2][3][4][5][6][7][8][9][10]. However, real-world WDS models are too complex to be handled by these conventional methods, so more efficient techniques for solving this kind of design problem must be investigated. Therefore, this study compares the performance of self-adaptive optimization approaches for efficient WDS design and presents a guide for selecting the appropriate approach based on the characteristics of each technique's formulation. To find the most appropriate self-adaptive optimization algorithm for various types of WDS design, the basic performance of each approach is first compared on single- and multi-objective optimization problems. Firstly, a sensitivity analysis of each self-adaptive approach is performed on single-/multi-objective mathematical benchmark problems of various types (e.g., differing solution-space shapes, many local optima). Secondly, based on the applications and results of the mathematical problems, the performance of each algorithm is verified on the WDS design problem, considering minimum cost and maximum system resilience under single-/multi-objective optimization frameworks. Thirdly, the characteristics of the search operators of each self-adaptive approach are compared according to the presence or absence of additional parameters and operators. Moreover, various performance indices are used to compare the quantitative performance of each algorithm.
This study can provide a guide for finding the appropriate self-adaptive approach in WDS design for the benefit of future research, considering that the performance of newly proposed algorithms can be rigorously tested in a highly simplified way.

Optimal Design of Water Distribution Systems
The initial WDS designs using optimization approaches considered only the network construction cost, varied by changing pipe diameters. However, designs that considered only minimum cost were ill-equipped to cope with uncertain future conditions. Moreover, since current water users desire a stable and reliable water supply, multi-objective optimization approaches that can consider design factors aside from minimum cost, such as system performance indicators (e.g., reliability, resilience, robustness, and redundancy) [48][49][50][51], were proposed in the WDS design field. Therefore, this study applies the multi-objective optimization technique to WDS design, and the following subsections present the formulation of the objective functions and the hydraulic constraints.

Minimizing Construction Cost
The first objective function in WDS design is to minimize the construction cost (CC). The cost-estimation equation for network design proposed by Shamir and Howard [52] is applied. The cost is estimated by multiplying the unit cost of each commercial pipe diameter by the length of each pipe; the sum over all pipes in the network is given by Equation (1).
where C(D i ) is the cost function of the ith pipe per unit length (m) of each pipe diameter, L i is the length (m) of the ith pipe, D i is the pipe diameter (mm) of the ith pipe, and N is the total number of pipes.
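For concreteness, Equation (1) amounts to summing, over all N pipes, the diameter-dependent unit cost times the pipe length. The sketch below assumes a hypothetical unit-cost table; the actual Shamir and Howard cost function is not reproduced here.

```python
# Sketch of Equation (1): total construction cost as the sum over all pipes
# of the unit cost (per metre, by diameter) times the pipe length.
# The unit-cost table below is illustrative, not taken from the study.

UNIT_COST = {  # pipe diameter (mm) -> cost per unit length (illustrative)
    100: 16.0,
    150: 23.0,
    200: 32.0,
    250: 50.0,
}

def construction_cost(diameters, lengths):
    """CC = sum_i C(D_i) * L_i over the N pipes of the network."""
    return sum(UNIT_COST[d] * l for d, l in zip(diameters, lengths))

cost = construction_cost([100, 150, 200], [1000.0, 500.0, 250.0])
print(cost)  # 16*1000 + 23*500 + 32*250 = 35500.0
```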

Maximizing System Resilience
System resilience (SR) can be defined as the system's capability to create foresight, to recognize, anticipate, and defend against changing risks before adverse consequences occur, and to degrade gracefully and promptly recover from failure [53]. Various resilience indicators have been proposed since such an indicator was first developed by Todini [54] as a surrogate measure for hydraulic benefits. The index is based on the concept that the total input power into a network consists of the power dissipated in the network and the power delivered at demand nodes: the less power consumed internally to overcome friction, the more surplus power is available at demand nodes, and thus the greater the ability to withstand failure conditions. The system resilience index calculated by Equation (2) lies in the range from 0 to 1, where a higher value indicates better resilience.
where N, NR, and NP are the number of nodes, reservoirs, and pumps, respectively; h j is a head at node j, h j * is the minimum required head at node j, H K is the water level of reservoir K, Q K is water flow of reservoir K, and P i is the power of pump i.
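The Todini index of Equation (2) is the ratio of the surplus power delivered at demand nodes to the maximum surplus power the sources could deliver. A minimal Python sketch, using nodal demands q_j alongside the symbols defined above (the specific weight gamma and the demand vector are assumptions of this illustration):

```python
def todini_resilience(q, h, h_req, Q_res, H_res, P_pump=(), gamma=9.81):
    """Sketch of the Todini resilience index (Equation (2)): surplus power
    delivered at demand nodes divided by the maximum surplus power that the
    reservoirs and pumps could supply. q: nodal demands; h: nodal heads h_j;
    h_req: minimum required heads h_j*; Q_res, H_res: reservoir flows Q_K
    and levels H_K; P_pump: pump powers P_i; gamma: specific weight of water."""
    surplus = sum(qj * (hj - hr) for qj, hj, hr in zip(q, h, h_req))
    available = (sum(Qk * Hk for Qk, Hk in zip(Q_res, H_res))
                 + sum(p / gamma for p in P_pump))
    required = sum(qj * hr for qj, hr in zip(q, h_req))
    return surplus / (available - required)
```

With two demand nodes (q = 1, heads of 40 m against 30 m required) fed by one reservoir (Q = 2, H = 50 m), the index evaluates to 20 / (100 − 60) = 0.5.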

Hydraulic Constraints and the Penalty Function
In this study, to design WDSs using self-adaptive optimization, EPANET 2.0 [55] is applied as the hydraulic solver to verify that the hydraulic constraints are satisfied [56]. EPANET 2.0 performs hydraulic analysis of WDSs using a flow conservation method with various head-loss formulations (i.e., the Hazen-Williams, Darcy-Weisbach, and Chezy-Manning equations) [57]. By combining this program with the proposed optimization model, the hydraulic analysis results are checked. The model considers the nodal pressure and pipe velocity as design constraints. If the pressure at any node does not meet the pressure constraint, or if the water velocity in any pipe does not meet the velocity constraint, then a large penalty value, as in Equation (3), is added to the objective function values. The role of α is to distinguish feasible from infeasible solutions by adding a large penalty cost. β scales the penalty as the nodal pressure approaches the hydraulic constraints; if β is too small, the resulting penalty cannot affect the selection of the optimal solution, so a sufficiently large value of β is used to prevent this.
where h i is the pressure head at node i (m); v j is the water velocity at pipe j (m/s); h min and h max are the minimum and maximum pressure heads (m), respectively; v min and v max are the minimum and maximum water velocity (m/s), respectively, and α and β are the penalty constants.
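A hedged sketch of how the penalty of Equation (3) can be assembled: α is a large constant separating infeasible from feasible designs, and β scales each constraint violation so that the optimizer can still rank infeasible solutions. The constants below are illustrative.

```python
def penalty_value(h, v, h_min, h_max, v_min, v_max, alpha=1e6, beta=1e3):
    """Illustrative penalty in the spirit of Equation (3): alpha is a large
    constant separating infeasible from feasible designs, and beta scales
    the summed magnitude of all pressure and velocity violations."""
    violation = 0.0
    for hi in h:  # nodal pressure-head constraints
        if hi < h_min:
            violation += h_min - hi
        elif hi > h_max:
            violation += hi - h_max
    for vj in v:  # pipe velocity constraints
        if vj < v_min:
            violation += v_min - vj
        elif vj > v_max:
            violation += vj - v_max
    return 0.0 if violation == 0.0 else alpha + beta * violation
```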

Optimization Algorithms
To achieve effective WDS design through an appropriate search-operator guide for the optimization technique, this study applies HS as the optimization approach. HS has several characteristics suitable for various engineering applications, including WDSs. In the past decade, several studies have applied HS to WDS design, with results significantly better than those of other optimization techniques [58][59][60]. Moreover, HS maintains a good balance between exploration and exploitation, offers a constraint-handling technique for addressing constrained problems efficiently, and supports real-coded representations. For these reasons, various improved and self-adaptive versions of HS have been developed for engineering optimization problems. In the subsections below, the various self-adaptive HS approaches are described. The presented approaches comprise seven techniques selected from the 23 studies on self-adaptive HS summarized in Table 1.

Harmony Search
The HS can be explained in terms of the improvisation process of a musician: the search for the optimum harmony in music is equivalent to the search for the optimum solution. When many different musicians play their instruments, the various sounds generate a single harmony, and the musicians gradually change to other suitable notes until they find an aesthetically pleasing harmony. In the HS, four parameters are used to search for the optimum solution (i.e., HMS, HMCR, PAR, and Bw), and these parameters are assigned constant values. The search space is represented by a memory space called the harmony memory (HM), where the HM size (HMS) is the maximum number of harmonies that can be stored. The main operators of HS are random selection (RS), memory consideration (MC), and pitch adjustment (PA), which serve to derive better solutions from the harmony memory. The main operator formulation is described by Equations (4) and (5).
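The improvisation step summarized by Equations (4) and (5) can be sketched as follows: each decision variable is drawn from the harmony memory with probability HMCR (memory consideration), perturbed within ±Bw with probability PAR (pitch adjustment), and otherwise drawn at random from its domain (random selection).

```python
import random

def improvise(hm, lower, upper, hmcr=0.9, par=0.3, bw=0.1):
    """One HS improvisation step: for each decision variable, memory
    consideration (MC) with probability HMCR, otherwise random selection
    (RS); an MC value is pitch-adjusted (PA) with probability PAR by a
    random step within +/-Bw. Results are clipped to the variable bounds."""
    n_vars = len(hm[0])
    new = []
    for i in range(n_vars):
        if random.random() < hmcr:                      # MC
            x = random.choice(hm)[i]
            if random.random() < par:                   # PA
                x += random.uniform(-1.0, 1.0) * bw
        else:                                           # RS
            x = random.uniform(lower[i], upper[i])
        new.append(min(max(x, lower[i]), upper[i]))
    return new
```

In the full algorithm, the new harmony replaces the worst member of the HM whenever it has a better objective value.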

Parameter-Setting-Free Harmony Search
The first parameter-setting-free HS (PSF-HS) was proposed by Geem [29] to reduce the burden of suitable parameter setting. PSF-HS modifies the improvisation step of HS by updating HMCR and PAR at every iteration for each decision variable. To update the parameters, the study introduced the operation-type memory (OTM) in Equation (6). The operator uses this memory to record which HS operator (i.e., RS, MC, or PA) generated each solution, and the individual HMCR and PAR are calculated by Equations (7) and (8). The first PSF-HS proposed by Geem [29] calculated these parameters according to Equation (7); however, Geem and Shim [31] later modified this to Equation (8). Therefore, this study considers both calculation methods.
where y_n denotes the operation type for the decision variable; each y_n represents one of three cases: (i) RS, (ii) MC, or (iii) PA. n(·) is the count function for the number of operations (e.g., MC or PA) in the HM. HMCR_i and PAR_i represent the HMCR and PAR of the ith decision variable, respectively. As the number of iterations increases, the HMCR generally increases, whereas the PAR decreases. This trend can drive HMCR and PAR outside the interval between 0 and 1. To prevent this problem, the noise value Φ is used to keep HMCR_i and PAR_i between 0 and 1, as indicated by Equation (9).
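One plausible reading of the rate updates in Equations (7)-(9), sketched in Python: the OTM column for variable i records which operator produced each stored value, HMCR_i is the fraction produced by memory consideration (including pitch-adjusted values), PAR_i is the fraction of those that were pitch adjusted, and the noise term Φ keeps both rates strictly inside (0, 1). The exact formulas differ between Geem [29] and Geem and Shim [31]; this is an illustration, not either paper's exact update.

```python
import random

def adapt_rates(otm_column, hms, phi=0.01):
    """Illustrative PSF-HS rate update: HMCR_i is the fraction of harmonies
    whose ith variable came from memory consideration (MC, or MC followed
    by pitch adjustment, PA); PAR_i is the fraction of those that were
    pitch adjusted; the noise phi keeps both rates strictly inside (0, 1)."""
    n_mc = sum(1 for op in otm_column if op in ("MC", "PA"))
    n_pa = sum(1 for op in otm_column if op == "PA")
    hmcr = n_mc / hms
    par = n_pa / n_mc if n_mc else 0.0
    # Perturb by a small noise and clamp away from the 0/1 boundaries.
    hmcr = min(max(hmcr + random.uniform(-phi, phi), phi), 1.0 - phi)
    par = min(max(par + random.uniform(-phi, phi), phi), 1.0 - phi)
    return hmcr, par
```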

Almost-Parameter-Free Harmony Search
The almost-parameter-free HS (APF-HS) was first proposed by Jiang et al. [33]. This approach is a modified version of the original PSF-HS [31] that additionally considers a dynamic Bw alongside the automatic HMCR and PAR settings. The OTM is applied to calculate the adapted HMCR and PAR using the same formulations as in Equations (6), (8), and (9). In the APF-HS, Bw is dynamically updated according to the maximum and minimum values in the HM, as in Equation (10).
where x_i^j is the ith decision variable in the jth iteration.
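Equation (10) ties the bandwidth to the current spread of each variable in the harmony memory. The exact form is not reproduced above; the sketch below illustrates one plausible variant in which Bw_i is proportional to the range of the ith column of the HM (the scale factor is an assumption):

```python
def dynamic_bw(hm, i, scale=0.5):
    """Illustrative dynamic bandwidth: proportional to the spread of the
    ith decision variable currently stored in harmony memory, so tuning is
    wide while the HM is diverse and narrows as the memory converges. The
    scale factor 0.5 is an assumption, not the value from Equation (10)."""
    column = [harmony[i] for harmony in hm]
    return scale * (max(column) - min(column))
```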

Novel Self-Adaptive Harmony Search
The novel self-adaptive HS (NSHS) was developed by Luo [28]; it replaces the constant HMCR, PAR, and Bw values with adaptive processes. HMCR is set according to the dimension of the problem, such that a higher-dimensional (more complex) problem has a larger HMCR, as shown by Equation (11).
In the original HS, Bw is important for tuning the solution; therefore, NSHS uses a dynamic Bw for fine-tuning, such that the tuning range is wider at the beginning and narrower at the end of the simulation. This behavior motivates the dynamic fine-tuning bandwidth shown in Equation (12). The PAR procedure is replaced by one that considers the variance of the fitness, and new solutions are generated within the boundary condition of the decision variable in Equation (13), where n(DV) is the number of decision variables, f_std is the standard deviation of the fitness values, Bw_i^Lower and Bw_i^Upper are the lower and upper boundaries of the bandwidth for the ith decision variable in the jth iteration, and NI is the total number of iterations.

Self-Adaptive Global-Based Harmony Search Algorithm
Shivaie et al. [43] developed the self-adaptive global-based HS (SGHSA) in 2015 to find better solutions and provide more effective parameter tuning. In the new improvisation method of the SGHSA, an altered pitch adjustment rule is used to avoid falling into a local optimum, as in Equation (14). The value of the Bw parameter is dynamically reduced over subsequent generations, according to Equation (15).
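Equation (15) is not reproduced above; the sketch below shows a bandwidth schedule of the kind used in the self-adaptive global-best HS family, where Bw decreases linearly over the first half of the run and is then held at its minimum (the constants are assumptions of this illustration):

```python
def sghsa_bw(t, ni, bw_max=0.5, bw_min=0.01):
    """Illustrative bandwidth schedule for a self-adaptive global-best HS:
    Bw decreases linearly over the first half of the NI iterations and is
    then held at bw_min (constants are assumptions, not Equation (15))."""
    if t < ni / 2:
        return bw_max - (bw_max - bw_min) * (2.0 * t / ni)
    return bw_min
```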

Parameter Adaptive Harmony Search
Kumar et al. [47] proposed dynamically changing the values of HMCR and PAR, thereby modifying the improved version of HS, referred to as the parameter-adaptive HS (PAHS). PAHS sets its parameters linearly or nonlinearly to make the algorithm explore the solution space. The value of HMCR increases linearly as the optimization process continues. In contrast, PAR starts high during earlier generations and decreases nonlinearly. This parameter-control approach improves search efficiency, as the balance of global and local search depends on the generation.
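The PAHS schedules can be sketched as follows, with HMCR rising linearly and PAR decaying nonlinearly over the run; the endpoint values and decay constant are illustrative assumptions, not the values used by Kumar et al. [47]:

```python
import math

def pahs_params(t, ni, hmcr0=0.7, hmcr1=0.99, par0=0.45, par1=0.05):
    """Illustrative PAHS schedules: HMCR rises linearly from hmcr0 to
    hmcr1 over NI iterations, while PAR decays exponentially from par0
    towards par1, shifting the search from exploration to exploitation.
    All endpoint values and the decay constant are assumptions."""
    hmcr = hmcr0 + (hmcr1 - hmcr0) * t / ni
    par = par1 + (par0 - par1) * math.exp(-5.0 * t / ni)
    return hmcr, par
```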


Multi-Objective Optimization Formulation
In the present study, a multi-objective HS (MOHS) [59,60] is used to consider multiple objective functions. The MOHS combines three approaches (i.e., HS, the non-dominated sorting method, and the crowding distance approach) to improve the diversity and convergence of the Pareto optimal solutions, as shown in Figure 1. The search operators of the MOHS are the same as those in the HS [14], namely random selection, memory consideration, and pitch adjustment. However, because the MOHS considers multiple objective functions, it uses non-dominated ranks [61] to evaluate solution performance. The rank of an individual solution is determined by counting the number of solutions that dominate it. For example, if no other solution is better in both objective functions, the considered solution is defined as a non-dominated solution and is ranked first. Solutions with higher ranks are preserved in the next-generation harmony memory. After non-dominated sorting, the crowding distance approach [62] is used to widen the Pareto optimal solutions for diversity improvement within the same rank. This approach calculates the sum of the Euclidean distances between the two neighboring solutions in the Pareto solution space. In Figure 1, the Euclidean distance is calculated from the extreme fitness values of each objective function (i.e., f_1 max,min and f_2 max,min) and the distances to the two neighboring Pareto optimal solutions (i.e., d_1 and d_2). The crowding distance of the two extreme solutions among the Pareto optimal solutions is infinite; therefore, these two solutions are always preserved in the harmony memory. In this study, the concepts of multi-objective optimization (i.e., non-dominated sorting and the crowding distance approach) are combined with the respective self-adaptive algorithms to perform multi-objective optimization.
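The crowding distance computation described above can be sketched directly, following the standard formulation of the crowding distance approach [62]: for each objective the front is sorted, the two extreme solutions receive infinite distance, and each interior solution accumulates the normalized gap between its two neighbors.

```python
def crowding_distance(front):
    """Crowding distance [62]: for each objective, sort the front, give
    the two extreme solutions infinite distance, and add to each interior
    solution the normalized gap between its two neighbors. `front` is a
    list of objective-value tuples."""
    n = len(front)
    dist = [0.0] * n
    for m in range(len(front[0])):
        order = sorted(range(n), key=lambda k: front[k][m])
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if f_max == f_min:
            continue  # degenerate objective: no spread to normalize
        for pos in range(1, n - 1):
            k = order[pos]
            dist[k] += (front[order[pos + 1]][m]
                        - front[order[pos - 1]][m]) / (f_max - f_min)
    return dist
```

Because the extreme solutions receive infinite distance, they always survive the selection step, as noted above.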

Application and Results
This section presents comparisons of the performance of the self-adaptive optimization approaches for efficient WDS design. To compare the quantitative performance of each algorithm, various performance indexes for single/multi-objective optimization are used. Figure 2 shows the model formulation of this study. First, to compare the performance of each self-adaptive approach, the various types of mathematical benchmark problems are applied in the single/multi-objective optimization framework.
Secondly, according to the basic performance of each self-adaptive algorithm, the optimal design of WDSs is performed from the single-/multi-objective perspective. The applied design factors are minimum cost and maximum system resilience, and three different scales of WDSs are used for verification. In the third analysis, the search operators of the self-adaptive approaches are categorized and compared according to the type of automatic parameter control.


Performance Indices
To quantitatively evaluate single-objective optimization mathematical benchmark problems, the individual simulation was repeated 50 times, with each simulation using fewer than 50,000 numbers of function evaluations (NFEs) for each problem.
The performance comparison of each algorithm used statistical measures (e.g., best error, mean error, worst error, and standard deviation), the success ratio (SR) of reaching a feasible solution, the number of improved optimal solutions during optimization (NIS), and the number of function evaluations needed to enter the feasible solution range (NFEs-Fs). In this study, the feasible solution ranges used were 10^−10 (DV = 2, 5, 10) and 10^−5 (DV = 30, 50). The parameter settings of all the approaches used in this study are tabulated in Table 2. In the evaluation of multi-objective problems, various performance measures have been proposed in previous studies to evaluate different aspects (e.g., convergence and diversity) of Pareto-optimal solution (POS) sets. The performance indices described below were chosen because together they reflect both solution convergence and diversity, and they were employed to compare the different POS sets.
The coverage of sets (CS) criterion, proposed by Zitzler et al. [63], is an index for comparison of the non-dominated degrees of optimal solutions from different iterations.
The values taken by CS(X′, X″) are in the range of 0-1. If CS(X′, X″) = 1, every solution in X″ is dominated by (or equal to) some solution in X′. To understand the exact non-domination relationship between two different iterations, it is necessary to calculate both CS(X′, X″) and CS(X″, X′) and analyze the resulting values. An expression for CS is displayed in Equation (19).
where X′ and X″ denote the optimal solution sets found in different iterations, and a′ and a″ represent individual optimal solutions. The diversity (DI), proposed by Zitzler [64], is designed to evaluate the diversity of the Pareto optimal solutions. It is presented in Equation (20) and uses the minimum and maximum values of the objective functions.
where F m Max and F m Min are the maximum and minimum values of the Pareto front (PF), f m is the mth objective function value, and M is the number of objective functions. The generational distance (GD) metric was first presented by Veldhuizen and Lamont [65]. The main objective of this criterion is to clarify the capability of the different algorithms to find a set of non-dominated solutions with the lowest Pareto optimal front distance. This evaluation factor is defined in mathematical form in Equation (21).
where n_pf is the number of members in the generated PF, d_i is the Euclidean distance between member i in the PF and the nearest member in PF_optimal, and the Euclidean distance (d) is calculated based on Equation (22), in which (f_p1, . . . , f_pn) is the member of PF_optimal nearest to q. The best possible value of the GD metric is 0, which corresponds to PF_g exactly covering PF_optimal. Spacing (SP) is an evaluation index proposed by Schott [66] for measuring the homogeneity of the spatial distribution of the optimal solutions, i.e., how evenly consecutive solutions are distributed. SP can be calculated as shown in Equation (23).
where d_i is the distance between solution i and its nearest neighbor, i = 1, . . . , n; d̄ is the mean value of all d_i; and n is the number of non-dominated solutions. SP takes non-negative values, and values closer to 0 indicate that the optimal solutions are distributed more homogeneously.
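These indices can be made concrete with short sketches; minimization is assumed throughout, the GD form below uses the common root-mean-square variant, and SP uses nearest-neighbor distances within the front (DI is omitted, as Equation (20) is not reproduced above). These are illustrations consistent with the definitions, not the paper's exact formulas.

```python
import math

def dominates(a, b):
    """a dominates b (minimization): no worse in every objective and
    strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def coverage(x1, x2):
    """CS(X', X'') (Equation (19)): fraction of solutions in X'' weakly
    dominated by at least one solution in X'."""
    covered = sum(1 for b in x2 if any(dominates(a, b) or a == b for a in x1))
    return covered / len(x2)

def euclid(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def generational_distance(pf_g, pf_opt):
    """GD (Equation (21)), root-mean form: distances from each member of
    the generated front PF_g to its nearest member of PF_optimal; 0 means
    PF_g lies exactly on PF_optimal."""
    d = [min(euclid(p, q) for q in pf_opt) for p in pf_g]
    return math.sqrt(sum(di ** 2 for di in d)) / len(pf_g)

def spacing(pf):
    """SP (Equation (23)): sample standard deviation of each solution's
    distance to its nearest neighbor in the front; values near 0 indicate
    an evenly distributed front."""
    d = [min(euclid(p, pf[j]) for j in range(len(pf)) if j != i)
         for i, p in enumerate(pf)]
    d_bar = sum(d) / len(d)
    return math.sqrt(sum((di - d_bar) ** 2 for di in d) / (len(d) - 1))
```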

Comparison of Algorithm Performance in Mathematical Benchmark Problems
This section compares the selected self-adaptive HS algorithms from single- and multi-objective optimization aspects. The experiments were implemented on an Intel 3.40 GHz Core(TM) i7 with 8 GB RAM, and the algorithms were programmed in Visual Basic 6.0 under Windows 7 (64-bit).
To eliminate the effect of the initial solution quality, all initial solutions employed the same solution set by random generation.
Because the compared algorithms were developed to solve single-objective optimization problems, they had to be modified with additional approaches, namely non-dominated solution sorting and solution diversity improvement methods, to solve multi-objective optimization problems. Therefore, these self-adaptive HS algorithms are supplemented with the non-dominated sorting approach [61] and the crowding distance method [62].

Single-Objective Optimization Problems
In the application to single-objective optimization problems, 25 unconstrained benchmark problems (5 test functions × 5 decision-variable dimensions: 2, 5, 10, 30, 50) proposed by Dixon [67], Rastrigin [68], and Molga and Smutnicki [69], and two constrained benchmark problems [70,71], are employed to compare the performance of these approaches, as listed in Table 3. Four analyses were performed to compare the performance according to the characteristics of the algorithm operators. Table 3. Test problems for single-objective optimization.

[Table 3 lists, for each test problem, the function name and shape, its formulation, and its search domain (e.g., [−32.768, 32.768]^n for the unconstrained problems and [−10, 10]^n for constrained problem 1).]
First, the optimization stability was analyzed for each algorithm. This analysis uses statistical analysis (the best, worst, and mean errors) of the optimization results. Second, the optimization results are compared according to the size of the problem's search space.
In this analysis, the number of decision variables was varied from 2 to 50 to compare the optimization ability according to the search-space size. Third, the initial convergence ability of each algorithm was evaluated using the success ratio and NFEs-Fs. To calculate these two performance indices, the threshold for a successful solution was determined according to the number of decision variables (10^−10 if the number of decision variables is less than or equal to 10, and 10^−5 otherwise), and the initial convergence ability of each algorithm was examined.
The initial convergence of an optimization algorithm, i.e., finding a feasible solution in the early stage, reflects the exploration ability and greatly influences the performance of the optimal solution. Fourth, the exploitation ability was evaluated using the NIS, which is the number of improved optimal solutions during optimization. The descriptive statistical results are shown in Figure 3, representing the best, worst, and mean errors of each algorithm for various numbers of decision variables. The results show that SGHSA performs well compared to the other self-adaptive HS variants on the single-objective optimization benchmark functions. Especially in the low-dimensional cases (DV = 2, 5), the statistical results of SHS, PSF-HS, and APF-HS exhibit high variance among the 50 independent runs. These three algorithms, which follow the original HS operators (i.e., harmony memory consideration and pitch adjustment), show unstable optimum-solution characteristics even when there are no issues with the parameter settings.
In addition, NSHS does not show a satisfactory performance in the Rosenbrock function, which has a valley-shaped solution space, because the NSHS's search operators follow the standard deviation of the optimal solution. In the case of the valley-shaped function with a shallow and wide search space, the standard deviation is larger than that of a bowl-shaped function.
Moreover, although the above analysis results concern unconstrained problems, this study also applies the constrained benchmark problems to compare across problem types. In Table 4, the results on the constrained benchmark problems are compared with those of more well-known optimization algorithms (i.e., the genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE)) to evaluate the performance of the presented self-adaptive techniques. The results show trends similar to those for the unconstrained problems. From a statistical perspective, SGHSA obtains stable optimal solutions based on the mean solution and standard deviation (SD), and it also finds the best optimal solution in both problems. The next experiment observes the effect of dimensionality on the test functions, varied from 2 to 50. The results indicate that most solution errors grow, degrading the performance of the optimal solution, as the number of decision variables increases; this commonly occurs when solving optimization problems. However, PAHS shows the greatest degradation among these self-adaptive algorithms because it controls its own parameters based on the number of iterations (i.e., the HMCR increases linearly, while the PAR and Bw decrease exponentially). This makes the earlier iterations perform exploration (global search), while exploitation (local search) is performed at the end. This parameter-control approach can improve the optimal solution; however, it does not guarantee a consistently high-quality solution, as the solution worsens if it is trapped at a local optimum. Table 5 lists the success ratios and NFEs-Fs to illustrate the initial convergence ability for the Rastrigin function. The feasible solution ranges are set to 10^−10 (DV = 2, 5, 10) and 10^−5 (DV = 30, 50), depending on the number of decision variables.
In all cases, SGHSA has the best success ratio, and NSHS shows the lowest NFEs-Fs. However, as the number of decision variables increases, the success ratio of NSHS decreases drastically. The difference between the operators of these two algorithms lies in their new-solution generation approaches. NSHS applies a generation method that uses a random value and a dynamic Bw based on the standard deviation of the optimal solutions. Although this approach focuses on the exploration ability, it carries large uncertainty in finding an optimal solution; consequently, the quality of the optimal solution depends strongly on its value at the initial stage. In contrast, SGHSA varies the PAR and Bw linearly, which is a basic method to improve the performance of an optimization algorithm: harmonized global and local searches focus on global search in the initial stage and local search in the final stage. With respect to the NIS, the results are similar to those of NFEs-Fs. The NIS is the number of improved optimal solutions during optimization, and hence this indicator represents how many times a new optimal solution was found. Because SGHSA has the largest average NIS, its exploration and exploitation abilities are balanced, and it yields a satisfactory performance.

Multi-Objective Optimization Problems
In this section, the multi-objective benchmark problems in Table 6 are used to evaluate the performance of the presented algorithms. Table 6 shows classical test problems frequently used in mathematical multi-objective optimization. They are all unconstrained, comprising the five ZDT problems: ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 [63]. For a quantitative comparison, the spacing (SP), coverage set (CS), generational distance (GD), and diversity (DI) indicator values obtained by the MOHS, PSF-HS, APF-HS, SGHSA, NSHS, and PAHS in 30 independent runs, as well as the statistical analysis results, are shown in Table 7.
The four performance indexes were used to quantitatively compare the approaches. The first index, CS, the most important in multi-objective optimization, was used to calculate the ratio of the number of non-dominated solutions obtained by a method to the number of Pareto solutions in the total set of solutions. In all of these problems, SGHSA had the highest CS and outperformed the other five approaches, indicating that SGHSA performed the best in terms of non-domination strength. Furthermore, Figure 4 shows the Pareto optimal solutions of each self-adaptive HS for the ZDT1 problem. In Figure 4, the Pareto optimal solution of SGHSA shows the best convergence ability, and thus it can be included in the global Pareto optimal solutions, which are the non-dominated solutions among all integrated optimal solutions. The GD indicator also represents the convergence of the multi-objective optimal solution by calculating the distance between the Pareto optimal solution obtained by each algorithm and the global Pareto optimal solution. Similar to the CS, the SGHSA was outstanding; however, the difference from the other five approaches was not large, with the exception of ZDT1. The GD result carries a similar meaning to the CS: the approximated Pareto optimal solution obtained by the SGHSA is closer to the global Pareto optimal solution than those obtained by the others. It can be clearly seen from Table 7 and Figure 4 that the SGHSA and PAHS obtained better DI values than the other algorithms in most of the test problems, meaning that their Pareto optimal solutions have high diversity on average. However, DI, which measures algorithm diversity in the multi-objective framework, is only appropriate for evaluating a situation where convergence is secured.
This is because DI calculates the distance between the extreme solutions on each side, and its formulation has little correlation with the convergence ability of the approach. Therefore, the NSHS and PSF-HS, which have low convergence, cannot be fairly evaluated by DI. Furthermore, because the SP is also a diversity indicator in multi-objective optimization, it showed similar results among the algorithms that guarantee convergence (i.e., the APF-HS, SGHSA, and PAHS). In summary, the SGHSA generated the best results across the performance indexes on these mathematical benchmark problems. Notably, the APF-HS and PAHS, which show strong convergence on four of the five test problems, exhibited weakened convergence on ZDT3. The main reason is that the Pareto optimal solutions of ZDT3 are discontinuous: the memory consideration and pitch adjustment operators applied in APF-HS and PAHS balance global and local search, but discontinuous problems such as ZDT3 require stronger local search, which SGHSA provides by performing pitch adjustment or random search at every iteration, unlike the general HS.
Therefore, during the optimization, given sufficient iterations, an effective local search generates better optimal solutions in multi-objective optimization. In addition, the NSHS exhibited the lowest convergence ability in multi-objective optimization, although it solved most of the single-objective problems (20 out of 25) successfully, as the SGHSA did. This is because the NSHS applies a search approach in which global search is reinforced in the initial stage and local search is emphasized in later iterations using the standard deviation of the solutions (the standard deviation is high in the initial stage, and a small standard deviation indicates convergence). This search operator improves diversity in single-objective optimization. However, in multi-objective optimization, the standard deviation of the solutions does not reflect solution quality, because the Pareto optimal solutions are determined by the non-domination relationship of the two objective functions. Moreover, in the multi-objective framework, the fitness values are deliberately widely distributed in order to provide various solutions to the decision maker. Based on these analyses, the NSHS is therefore shown to have limitations in solving multi-objective optimization problems.
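To make the convergence indicators concrete, they can be computed directly from raw objective vectors. The following is a minimal sketch under our own simplifications; the function names and the per-algorithm contribution variant of CS are assumptions, not the paper's exact formulations.

```python
import math

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage_contribution(fronts):
    """CS-style ratio: the share of the combined non-dominated set
    contributed by each algorithm's front ({name: list of points})."""
    tagged = [(pt, name) for name, front in fronts.items() for pt in front]
    nondom = [(pt, name) for pt, name in tagged
              if not any(dominates(q, pt) for q, _ in tagged)]
    return {name: sum(1 for _, n in nondom if n == name) / len(nondom)
            for name in fronts}

def generational_distance(front, reference):
    """GD: mean distance from each obtained point to its nearest
    point on the best-known (reference) Pareto front."""
    d = [min(math.dist(p, r) for r in reference) for p in front]
    return sum(d) / len(d)
```

An algorithm with a high contribution ratio and a low GD converges well, which is the pattern reported for SGHSA above.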

Comparison of Self-Adaptive Technique Performance in WDS Design Problems
In the previous section, the characteristics of each algorithm's search operators were demonstrated through application to mathematical benchmark problems. This section applies the self-adaptive techniques to the design of WDSs as an engineering problem in order to verify and compare them. Three benchmark WDSs (i.e., the Hanoi, Saemangeum, and P-city networks) are considered under single-/multi-objective frameworks. In the case of the single-objective problem, the minimum cost is used as the objective function, while the multi-objective optimization problems consider minimum cost and maximum system resilience simultaneously. Both design problems apply system hydraulic constraints such as nodal pressure and pipe velocity. The following paragraphs describe the benchmark networks used in this study.
The Hanoi (HAN) network was introduced by Fujiwara and Khang [7]. It consists of three closed loops, one reservoir, 32 demand nodes, and 34 pipes. The Hazen-Williams (HW) roughness coefficient is 130 for all pipes. The minimum required pressure is 30 m. A total of six commercial pipe diameters ranging from 304.8 mm to 1016 mm are available. The minimum cost of the Hanoi network based on a single-objective optimal design (SOOD) has been reported to be USD 6.081 million [49], and the cost of that based on a multi-objective optimal design (MOOD) using the lowest cost and system resilience has been reported to be USD 6.195 million [40].
The Saemangeum network was first proposed by Yoo et al. [72]. It is composed of nine closed loops, one reservoir, 334 nodes, and 356 pipes. The total length of the network is 41.38 km. Table 8 presents the pipe cost data. A total of 18 commercial pipe diameters were used, and the cost estimation follows the construction cost for each pipe diameter according to the K-water report [73] on the cost estimation of water supply facilities. The hydraulic constraints of the Saemangeum network are the water pressure at each node and the water velocity in the pipes: 10 m and 35 m for the minimum and maximum head at each node, respectively, and 0.01 m/s and 2.5 m/s for the minimum and maximum velocity, respectively.

The P-city network [60] is a real-world network in Korea with a total of 1297 demand nodes supplied by one source. There are 1339 pipelines, 5 pump stations, and 5 valves. The total length of the network is 111.827 km. There are a total of nine commercial pipe diameters, ranging from 25 mm to 500 mm [73]. The HW roughness coefficient is in the range of 90-130. The minimum required pressure head is 15 m for each node. The original construction cost of the P-city network based on the SOOD was KRW 34.946 billion.

Table 9 shows the performance comparison results for the three different sizes of WDSs obtained by applying the self-adaptive techniques. The applied self-adaptive techniques are run with the same number of function evaluations (NFEs) for each WDS for a fair comparison. In addition, since the performance of the optimization varies with the quality of the initial solutions [51], we supply the same initial solutions, extracted by random sampling, to each self-adaptive technique. Table 9 shows the statistical optimization results, including the worst, average, and best costs, the average NFEs-Fs, and the reduction rate between the KS and the average cost.
The NFEs-Fs is the measure that counts the number of function evaluations required for the solution to reach the known global optimal solution. Therefore, it can be applied to the Hanoi network, for which the global optimal solution was revealed in a previous study [49]. In terms of the average cost, the NSHS and SGHSA are superior to all the other methods in the Hanoi network. The currently known global optimal solution for the Hanoi network is USD 6.081 million. This solution can be obtained by all of the approaches; however, the NSHS requires less computational effort in terms of the average NFEs-Fs. Nevertheless, in the larger networks, such as the Saemangeum and P-city networks, the SGHSA provides better optimization results than the others. In the real-world networks, the reduction rates between the KS and the average cost are greater than 10% (Saemangeum) and 17% (P-city) while satisfying the hydraulic constraints (i.e., the boundary conditions on nodal pressure and pipe velocity). Among the self-adaptive approaches, the SGHSA and NSHS obtain the best statistical solutions. This verifies the results obtained in Section 5.2, where the single-objective optimization results on the mathematical benchmark problems exhibited trends similar to the WDS design application. Unlike in the multi-objective optimization, NSHS shows relatively satisfactory performance in single-objective optimization, which carries over to the least-cost design problem.
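A least-cost WDS fitness evaluation of the kind used here can be sketched with a penalty on pressure deficits. The nodal pressures would normally come from a hydraulic solver such as EPANET; in this illustration they are passed in directly, and the function names, penalty weight, and cost figures are assumptions.

```python
def design_cost(diameters, unit_costs, lengths):
    """Pipe construction cost: unit cost (per metre, keyed by diameter)
    multiplied by pipe length, summed over all pipes."""
    return sum(unit_costs[d] * length for d, length in zip(diameters, lengths))

def penalized_fitness(diameters, unit_costs, lengths, pressures,
                      p_min=30.0, penalty=1e7):
    """Least-cost WDS fitness with a pressure-deficit penalty.
    `pressures` would normally be computed by a hydraulic solver
    (e.g., EPANET); here it is supplied directly for illustration."""
    deficit = sum(max(0.0, p_min - p) for p in pressures)
    return design_cost(diameters, unit_costs, lengths) + penalty * deficit
```

Any design violating the minimum-pressure constraint receives a fitness far above any feasible cost, so the optimizer is steered back toward hydraulically feasible solutions.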

Multi-Objective Optimal Design of WDSs
This chapter presents the multi-objective optimization results for the WDSs. The results comprise three analyses, namely the Pareto optimal solutions of each network, the performance evaluation using performance measures, and the distribution of the non-dominated solutions. Figure 5 shows the Pareto optimal solutions and the performance comparison of each network using performance indices from the convergence and diversity aspects. The four performance measures are used to quantitatively compare each approach. Among the applied indices, CS and GD evaluate the convergence aspect, while DI and SP are responsible for the diversity aspect. Figure 5b,d,f represent the values of the performance indices for each approach. The first index, CS, is the most important index for the comparison of convergence among different multi-objective optimization solutions. It is used to calculate the ratio of the number of non-dominated solutions obtained by each method to the number of Pareto solutions in the total set of solutions. In the Hanoi network, the SGHSA has the highest CS of 0.37 and outperforms the other five approaches. This means that the SGHSA performed the best in terms of non-domination strength.
The second index, DI, which shows how well a method finds a widespread Pareto optimal solution, is calculated from the given objective functions of the least-cost and maximum-resilience designs. With respect to diversity, four approaches (i.e., PSF-HS, APF-HS, PAHS, and SGHSA) generate similar DI values, meaning that most of the self-adaptive approaches, except NSHS, have good diversity ability. The third index, GD, also exhibits the convergence aspect of the Pareto optimal solution, similarly to CS, and it is calculated by summing the distances between the best-known PF and the given Pareto solution of each approach. The GD results show a trend similar to CS, which means that the convergence of SGHSA is better than that of the other approaches. In the case of the Saemangeum network, the performance evaluation results are analogous to those of the Hanoi network in terms of convergence and diversity. However, the results in the P-city network differ for PAHS: its convergence departs considerably from the previous optimization results. PAHS controls its three parameters with setting formulations that depend on the iteration number. Although large-scale optimization problems require fine-tuned parameters, the parameters of PAHS change on a fixed schedule. This means that the local search ability should be enhanced within a shorter time than in small-scale problems; however, PAHS changes the search behavior linearly regardless of the scale of the problem, making it difficult to precisely control the exploitation ability. Therefore, owing to its formulation, PAHS is not appropriate for large-scale multi-objective optimization problems.
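The SP indicator used alongside DI can be sketched as Schott's spacing metric. This variant uses Euclidean nearest-neighbour distances, which is an assumption, since implementations differ in the distance measure.

```python
import math

def spacing(front):
    """Schott's spacing metric (Euclidean variant): the standard
    deviation of each point's nearest-neighbour distance; lower
    values indicate a more evenly spread Pareto front."""
    d = [min(math.dist(p, q) for q in front if q is not p) for p in front]
    mean = sum(d) / len(d)
    return math.sqrt(sum((x - mean) ** 2 for x in d) / (len(d) - 1))
```

A perfectly even front gives SP = 0, so the metric rewards spread quality but, like DI, says nothing about convergence on its own.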
Figure 6a-c illustrate the relative distribution of solutions contributed by each self-adaptive approach in the Pareto optimal solutions of the three WDS problems (i.e., the Hanoi, Saemangeum, and P-city networks). The distribution plot makes it much easier to compare their performance (both convergence and diversity) in an intuitive sense.
Since the solutions in the best-known PF are evenly mapped, the gaps appearing in the graph for each self-adaptive approach denote the absence of solutions at the corresponding positions within the best-known PF. The projection of the Pareto optimal solutions of the various self-adaptive approaches shows that the distributions depend on the network size. In the Hanoi and Saemangeum networks, the Pareto optimal solutions of PAHS are evenly distributed. However, in the P-city network, the largest of the application networks, PAHS has a lower contribution than in the other networks. PAHS has weak convergence ability in large-scale optimization problems and is thus unsuitable for large WDS design. In contrast, the NSHS has a reasonably good distribution in the large-scale problem but a low contribution in the Hanoi and Saemangeum networks; it can therefore be considered less reliable for obtaining the Pareto optimal solution. In summary, regardless of problem scale, the SGHSA exhibits the greatest convergence and diversity ability and remains the best approach even for large problems. Therefore, the SGHSA is concluded to perform the best among the compared approaches.

Comparison of Algorithm Characteristics (i.e., Operator and Parameter)
This section compares the characteristics of the search operators of the self-adaptive HS variants (Table 10). The objective of a self-adaptive method is to eliminate the process of parameter setting and automatically set the appropriate parameters for any optimization problem. By this criterion, the best self-adaptive approach requires no additional new parameters, and all of the algorithm's parameters should be controlled appropriately while maintaining good performance. In this study, all of the self-adaptive HS algorithms examined, except the PSF-HS, add new parameters in order to set the appropriate parameter values. The parameter-setting-free approach preserves the operators of the existing HS algorithm and sets the parameters automatically. Although PSF-HS is closest to the ideal self-adaptive technique in that it uses only its own parameters, its performance is not good. Conversely, the algorithms with satisfactory performance, such as the SGHSA, add new parameters or search operators. Although the additional new parameters do not need to be tuned to optimal values, they still affect the solution quality; for this reason, they require proper values. Therefore, for a next-generation self-adaptive approach in the HS framework, it is necessary to develop a method that adds no new parameters yet delivers excellent solution performance. Among the algorithms considered in this study, the combination of the second PSF-HS and SGHSA can be expected to provide reliable results, because the SGHSA is able to enhance the insufficient search ability of the second PSF-HS using the dynamic Bw.
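The parameter-setting-free idea can be illustrated with a rough sketch: each harmony records which operator produced each of its variables, and the HMCR/PAR are re-estimated from those records so that no externally tuned values are needed. The operator tags and update rule below are simplified assumptions, not the exact PSF-HS formulation.

```python
def adapt_parameters(op_memory):
    """PSF-HS-style rehearsal: re-estimate HMCR and PAR per decision
    variable from how the surviving harmonies were generated.
    op_memory[j][i] is the operator ('memory', 'pitch', or 'random')
    that produced variable i of harmony j."""
    hms = len(op_memory)                       # harmony memory size
    n_vars = len(op_memory[0])
    hmcr, par = [], []
    for i in range(n_vars):
        ops = [row[i] for row in op_memory]
        mem = sum(op in ("memory", "pitch") for op in ops)  # memory-based picks
        pit = sum(op == "pitch" for op in ops)              # pitch-adjusted picks
        hmcr.append(mem / hms)
        par.append(pit / mem if mem else 0.0)
    return hmcr, par
```

Because the rates are derived from the harmonies that survived selection, operators that produce good solutions are automatically applied more often, which is the self-adaptive behaviour this section discusses.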

Conclusions
This study compares the performance of self-adaptive optimization approaches for efficient WDS design and presents a guide for selecting the appropriate approach in optimization-based WDS design, utilizing the characteristics of each technique. The compared self-adaptive approaches are selected from previous studies developed in various fields (i.e., water resources engineering, mathematics, economics, structural engineering, traffic engineering, electrical engineering, and mechanical engineering). Among the 23 studies, five kinds of techniques with characteristic approaches for automatic parameter setting are identified (i.e., the parameter-setting-free harmony search, almost-parameter-free harmony search, novel self-adaptive harmony search, self-adaptive global-based harmony search algorithm, and parameter adaptive harmony search).
To this end, this study performs three kinds of analyses. First, a sensitivity analysis of each self-adaptive approach is performed on single-/multi-objective mathematical benchmark problems of various types (e.g., solution shape, many local optimal solutions). In the single-objective optimization, the compared algorithms are evaluated with various performance indices covering search stability (i.e., statistical analysis), the effect of dimensionality (i.e., sensitivity analysis over various numbers of decision variables), and the initial convergence ability (i.e., the success ratio and NFEs-Fs). In the multi-objective optimization, various performance indices (i.e., the spacing, coverage set, generational distance, and diversity) are applied to quantitatively evaluate the diversity and convergence of the Pareto optimal solutions. Second, based on the results of the single-/multi-objective optimization on the mathematical problems, these algorithms are applied to WDS design as an engineering problem to verify and compare the self-adaptive approaches.
The objective functions minimize the total cost while maximizing system resilience, subject to the system hydraulic constraints. Finally, the characteristics of the search operators of the self-adaptive HS are compared and analyzed according to the presence or absence of additional parameters and operators.
According to the various analyses performed in this study, the SGHSA outperformed the other approaches on both the single-/multi-objective mathematical and WDS design problems. The NSHS and PAHS also showed quite good performance, but both approaches have low versatility: although they show good results on certain problems, they exhibit low performance on others, depending on the characteristics of the algorithm. For example, in the case of NSHS, the multi-objective optimization showed the lowest solution convergence because NSHS performs the local and global search considering the standard deviation of the fitness. With respect to its search operator, however, the SGHSA adds new parameters (e.g., the maximum and minimum Bw) and search operators. Although these boundary values do not require optimal tuning, they still affect the solution quality; therefore, the additional new parameters likewise need proper values. Hence, for a next-generation self-adaptive approach, it is necessary to develop an algorithm that adds no new parameters and can derive excellent solution performance. In a future study, these results are expected to benefit the development of new self-adaptive optimization algorithms by enabling the effective evaluation of each algorithm. In particular, the comparison approaches (i.e., single-/multi-objective optimization, application to an engineering problem, and the characteristics of the search operators) can be rigorously tested in a significantly simpler way.