Adaptive-Hybrid Harmony Search Algorithm for Multi-Constrained Optimum Eco-Design of Reinforced Concrete Retaining Walls

Abstract: In the optimum design of reinforced concrete (RC) structural members, the robustness of the employed method is as important as solving the optimization problem itself. In some cases where the algorithm parameters are defined with non-effective values, local-optimum solutions may prevail over the existing global optimum results. Any metaheuristic algorithm can be effective in solving the optimization problem, but it must give the same results over several runs. Due to the randomized nature of these algorithms, performance may vary from run to run. The essential and novel work done in this study is the comparative investigation of 10 different metaheuristic algorithms and two modifications of the harmony search (HS) algorithm on the optimum cost design of RC retaining walls constrained by geotechnical and structural limit states. The employed algorithms include classical ones (genetic algorithm (GA), differential evolution (DE), and particle swarm optimization (PSO)), algorithms proven in structural engineering applications (harmony search, artificial bee colony, firefly algorithm), and recent algorithms (teaching-learning-based optimization (TLBO), flower pollination algorithm (FPA), grey wolf optimization, Jaya algorithm (JA)). The modifications of HS include adaptive HS (AHS), concerning the automatic change of algorithm parameters, and a hybridization of AHS with JA developed for the investigated problem. According to the numerical investigations, recent algorithms such as TLBO, FPA, and JA are generally the best at finding the optimum values with less deviation than the others. The adaptive-hybrid HS proposed in this study is also competitive with these algorithms, while it can reach the best solution using a lower population number, which can lead to time savings in the optimization process. By minimizing the material used in construction through effective optimization, sustainable structures that satisfy multiple types of constraints are provided.


Introduction
Retaining walls are civil engineering structures designed to restrain the lateral movement of a soil mass and provide free space in front of them. The material of retaining walls may vary, and the required specifications and properties of the design project are used in the design according to the type of retaining wall. These properties include the soil profile of the project site, the magnitude of the lateral loads, construction time, equipment mobilization, construction area boundaries, the immediate environment, neighboring structures, and drainage conditions. Common wall types include gravity walls, reinforced concrete walls, cantilever soldier pile walls, sheet piles, bulkheads, and piles. Previous optimization studies have addressed, among other topics, the toe projection restriction [28] and the optimum design of dynamically loaded RC retaining walls [29].
In the present study, a hybrid algorithm combining adaptive HS and JA is proposed to improve the global search part of HS with the single phase of JA, which uses both the best and worst solutions in one equation. Via this modification, the convergence and robustness of HS are improved. At the same time, the single-phase JA is improved by adding a second phase, namely the local search part of HS. To present and evaluate the performance of this algorithm, several cases of RC retaining walls are investigated. Ten different metaheuristic algorithms are used in the comparison, including the most classical ones such as GA, differential evolution (DE), and PSO; proven metaheuristic algorithms such as HS, artificial bee colony (ABC), and FA; and recently proposed new-generation algorithms such as TLBO, FPA, GWO, and JA. The research cases include 30 multiple cycles of the optimization methodology for the evaluation of the algorithms based on minimum cost, average cost, and standard deviation.
After this introduction section, the paper continues with short descriptions of the employed metaheuristic algorithms and the newly proposed hybrid algorithm in Section 2. Then, Section 3 covers the application of the methods to RC retaining wall cases. In the first case, which involves a shorter wall than Case 2, the optimum value of the toe slab/front encasement width of the retaining wall turns out to be zero, and the optimum RC retaining wall is L-shaped. Second, an optimum design with a T-shaped wall is presented by using 30 multiple cycles of optimization. Also, both the best population number and iteration number are evaluated. Finally, a multiple-cycle evaluation is performed for different wall parameters defined as design constants by using HS and its modified versions. In Section 4, the conclusion is given by considering the results of all cases separately.

Employed Metaheuristic Algorithms
In this section, 10 metaheuristic algorithms are briefly summarized based on the modifications adapted for the optimized problem. Also, the adaptive version of HS and the hybridized version of it with JA are given.

Genetic Algorithm (GA)
Genetic algorithm (GA) is one of the metaheuristic algorithms; it was developed at the beginning of the 1970s by J. Holland, who was inspired by biological systems that can evolve into well-adapted organisms by adapting to their environment. The algorithm is arranged around five biological processes, called mating, reproduction, cloning, crossover, and mutation, intended for optimization applications [4,30]. New candidate solutions (child members) are defined through the mentioned biological processes in the evolution of the population (optimization of the design variables) and grown from appropriate candidates (ancestor members) selected from the initial population. Next, crossover is applied between these members. The determinant value in the crossing process is the crossover probability, and whether crossover occurs or not is determined by this value [31]. The mutation process is necessary for members that have similar features, for which crossover may remain incapable (Equation (1)). Finally, better solutions are selected by comparing new members with old ones and are transferred to the new generation. For this, the fitness values of the new solutions are considered.
In Equation (1), mr is the mutation rate, q is a gene (design variable) randomly selected from all design variables, and X_q,new, X_q,min, and X_q,max are the new value, lower limit, and upper limit of the q-th variable, respectively. A random number between 0 and 1 is denoted rand():

X_q,new = X_q,min + rand() (X_q,max − X_q,min), if mr > rand()    (1)
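The mutation rule of Equation (1) can be sketched in a few lines of Python; this is a minimal illustration rather than the authors' implementation, and the function name and default mutation rate are assumptions:

```python
import random

def mutate(x, x_min, x_max, mr=0.05):
    """Equation (1) sketch: each gene is reset to a random value within
    its bounds with probability mr; otherwise it is kept unchanged."""
    new = list(x)
    for q in range(len(x)):
        if mr > random.random():
            new[q] = x_min[q] + random.random() * (x_max[q] - x_min[q])
    return new
```

With mr = 1.0 every gene is resampled, which is a convenient way to check that the operator respects the variable bounds.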

Differential Evolution (DE)
Differential evolution (DE) is an evolutionary algorithm developed by R. Storn and K. Price [32], and it can be considered an advanced version of GA. Like GA, DE applies mutation, recombination (crossover), and selection operations. During the process, two parameters are employed: the crossover probability (CR) and the weighting factor (F). CR is a probability value that determines whether crossover will occur between the new solution and the solution handled at the beginning [33,34].
In the optimization process, a new solution is generated for each candidate solution through the random selection of three different solutions. In mutation, all variables of a chromosome (candidate solution) are changed by using these three chromosomes (Equation (2)) [34,35].
After the mutation process, crossover (assigning the value of another randomly selected existing solution, rand_cs) is applied according to the CR parameter; otherwise, the current candidate solution (cs) keeps the value of the randomly selected chromosome (Equation (3)). Finally, the optimization of the design variables is completed by considering the fitness value (objective function), as in GA [34].
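The mutation and crossover steps of Equations (2)-(3) can be combined into one generation of the common DE/rand/1/bin scheme; this is a hedged sketch (bounds, parameter defaults, and the clamping rule are assumptions, not taken from the paper):

```python
import random

def de_step(pop, F=0.8, CR=0.9, lo=0.0, hi=1.0):
    """One DE generation sketch: mutate each target vector with three
    randomly chosen distinct vectors (Equation (2)), then apply binomial
    crossover controlled by CR (Equation (3))."""
    dim = len(pop[0])
    trial_pop = []
    for i, target in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = random.randrange(dim)          # forces at least one mutated gene
        trial = []
        for j in range(dim):
            if random.random() < CR or j == jrand:
                v = a[j] + F * (b[j] - c[j])   # differential mutation
            else:
                v = target[j]                  # inherit from the target vector
            trial.append(min(max(v, lo), hi))  # clamp to the variable bounds
        trial_pop.append(trial)
    return trial_pop
```

In a full DE run, each trial vector would then replace its target only if its fitness is better, exactly as the greedy selection described above.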

Particle Swarm Optimization (PSO)
Kennedy and Eberhart [5] presented the particle swarm optimization (PSO) algorithm in 1995, inspired by the natural behaviors of insect swarms, flocks of birds, or schools of fish [36]. Each member of the swarm is a particle that has a specific velocity and position in the search area. Additionally, a value called the inertia weight parameter (w) is used to combine local and global search and to balance them against each other (as a difference from classic PSO). The new velocity and position are formalized in Equations (4) and (5), respectively. In the equations, V_i,new and X_i,new are the new velocity and position for the i-th design variable; X_i,gbest and X_i,pbest are the best global position (among all particles) and the best local position (obtained by each particle in the iterations), respectively, in terms of the objective function; X_i,j and V_i,j are the current position and velocity of the corresponding j-th particle; and c_1 and c_2 are positive constants that control the flying velocities [37,38].
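The velocity and position updates of Equations (4)-(5) can be sketched as below; the random factors r1 and r2 are passed in explicitly here for clarity, and the default coefficient values are assumptions:

```python
def pso_update(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, r1=0.5, r2=0.5):
    """PSO update sketch with inertia weight (Equations (4)-(5)):
    the new velocity blends the old velocity with pulls toward the
    particle's own best (pbest) and the swarm best (gbest)."""
    v_new = [w * vi + c1 * r1 * (pb - xi) + c2 * r2 * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    x_new = [xi + vn for xi, vn in zip(x, v_new)]
    return x_new, v_new
```

For a particle at the origin with both best positions at 1.0, the pull terms add up and move the particle toward 1.0 in one step.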

Artificial Bee Colony (ABC) Algorithm
Karaboga [39] developed an algorithm known as the artificial bee colony (ABC) by drawing on the natural food-searching behavior of bee colonies. Such a colony contains different bee groups, namely employee/worker, onlooker, and scout bees, which aim at the improvement of nectar quality. In the development of the algorithm, some rules are adopted: the numbers of employee and onlooker bees are equal to the number of food source locations, and an employee bee becomes a scout when the nectar of its source is exhausted. First comes the employee bee stage of the optimization, in which a random food source (k) and a design variable (p) are selected to define the new value X_p,new of the j-th food source for the p-th variable, with the factor φ_i,j, as formalized via Equations (6) and (7), respectively. The second stage is the onlooker bee stage, carried out via Equation (8).
A comparison of food quality (P_j) is needed for this stage. Equation (8) is applied over all of the fn food sources, where X_p,j and X_p,k are the positions of the j-th and k-th food sources for the p-th parameter, respectively [39-41]:

X_p,new = X_p,j + φ_i,j (X_p,j − X_p,k), if rand < P_j    (8)

Finally, employee bees may turn into scout bees. To find new food sources, the condition indicated in Equation (9) is applied. ip_j is a parameter that takes a value according to the improvement of the design variables, and SIL controls this value as a limit. Also, X_i,max is the upper and X_i,min is the lower limit of the i-th design variable [40,42].
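The neighbourhood move shared by the employee and onlooker stages (Equations (6)/(8)) can be sketched as follows, together with the greedy selection that keeps the better source; the function name and the use of a sum-of-squares fitness in the test are assumptions:

```python
import random

def bee_update(foods, j, fitness):
    """ABC neighbourhood move sketch: perturb one random parameter p of
    food source j toward/away from a random other source k with a factor
    phi in [-1, 1], then keep the better of the old and new sources."""
    dim = len(foods[j])
    k = random.choice([i for i in range(len(foods)) if i != j])
    p = random.randrange(dim)
    phi = random.uniform(-1.0, 1.0)
    candidate = list(foods[j])
    candidate[p] = foods[j][p] + phi * (foods[j][p] - foods[k][p])
    # greedy selection, as in the nectar-quality comparison
    return candidate if fitness(candidate) < fitness(foods[j]) else list(foods[j])
```

Because of the greedy selection, the returned source is never worse than the original one for a minimization problem.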

Firefly Algorithm (FA)
The firefly algorithm (FA), developed by Yang [8], is a population-based method based on the flashing ability of fireflies. Flashing is a natural feature of fireflies and is used for activities such as foraging and communicating with other fireflies to find them. For the use of this algorithm in optimization problems, the following assumptions are made [21,34,43]:

1. Fireflies are hermaphrodites and attract one another under every condition.

2. The brighter k-th firefly (I(x_k)) attracts the less bright j-th firefly (I(x_j)), because attractiveness (β) increases as brightness (I) increases. If no firefly less bright than itself is found (for minimization problems), the k-th firefly continues to fly randomly (Equation (10)).

3. The brightness of a firefly is determined by the objective function. Therefore, the brightness I(x_k) of the k-th firefly at position x is proportional to the objective function value f(x_k).
In Equations (11) and (12), β_jk and β_0 are the attractiveness of the k-th firefly as seen by the j-th one and the minimum attractiveness (ranging between 0 and 1), respectively; X_k and X_j are the positions; X_p,k and X_p,j are the values of any p-th design variable belonging to the k-th and j-th fireflies, respectively; r_jk expresses the distance between fireflies j and k; and ts is the total number of design variables.
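The attractiveness-based move of Equations (11)-(12) can be sketched as below; the exponential attractiveness decay and the default values of γ and the randomization weight α are common FA choices assumed here, not values stated in the paper:

```python
import math, random

def firefly_move(xj, xk, beta0=1.0, gamma=1.0, alpha=0.2):
    """FA move sketch: firefly j steps toward the brighter firefly k with
    attractiveness beta0 * exp(-gamma * r^2), plus a small random walk."""
    r2 = sum((a - b) ** 2 for a, b in zip(xj, xk))  # squared distance r_jk^2
    beta = beta0 * math.exp(-gamma * r2)            # attractiveness decays with distance
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xj, xk)]
```

Setting α = 0 makes the move deterministic, so the step size toward the brighter firefly can be checked directly.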
Teaching-Learning-Based Optimization (TLBO)
Rao et al. developed an algorithm called teaching-learning-based optimization (TLBO) in 2011 [44]. The basic idea comes from the principle of students being taught by a teacher and learning by themselves in a class.
The task of the teacher is to improve the knowledge level of the students, influence them, and produce higher grades; in doing so, he/she optimizes the class average. Moreover, students improve not only their knowledge but also their grades through communication, sharing knowledge, and investigation.
That is why optimization is performed through both teachers and students. In the teacher phase, the teacher is chosen as the student with the best grades, and he/she improves the knowledge of the other students using the teaching factor (TF) (Equation (13)). This operation is formalized in Equation (14). In the learner phase, two random solutions, "a" and "b", are selected among all students whose grades were updated in the teacher phase, and the grades are updated again depending on which solution is better in terms of the objective function, as expressed in Equation (15).
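The teacher phase (Equations (13)-(14)) can be sketched as below, with TF randomly taken as 1 or 2 and a greedy acceptance of improved students; the function name and the minimization convention are assumptions:

```python
import random

def teacher_phase(students, fitness):
    """TLBO teacher phase sketch: each student moves toward the teacher
    (best student) and away from TF times the class mean (Equation (14));
    a candidate is accepted only if it improves the student's fitness."""
    dim = len(students[0])
    teacher = min(students, key=fitness)
    mean = [sum(s[j] for s in students) / len(students) for j in range(dim)]
    new_pop = []
    for s in students:
        TF = random.randint(1, 2)   # teaching factor (Equation (13)): 1 or 2
        cand = [s[j] + random.random() * (teacher[j] - TF * mean[j])
                for j in range(dim)]
        new_pop.append(cand if fitness(cand) < fitness(s) else list(s))
    return new_pop
```

The learner phase would repeat the same greedy pattern with pairs of students, which is why TLBO performs two fitness evaluations per member per iteration.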

Grey Wolf Optimization (GWO)
Grey wolf optimization (GWO) was developed with inspiration from the leadership hierarchy and hunting behavior of grey wolves in nature [45]. In the hierarchy, three wolves are defined as alpha (α) (first leader), beta (β) (helper), and delta (δ) (transferring orders coming from the beta), and these control the progression of the pack. The remaining wolves are called omega (ω) wolves and are the weakest ones in the pack. To characterize the algorithm for optimization, the group hunt performed by the wolves becomes prominent. First, the group leaders are determined. After the wolves identify their prey, they follow and finally encircle it. Meanwhile, the distance D between the prey and an encircling wolf is given via Equation (16). Additionally, each wolf can randomly change its position around the prey, as in Equation (17). Here, X_i,new, X_i,p, and X_i,j are the new value, the initial matrix value of the p-th (prey) solution, and the j-th (ω) candidate solution for the i-th design variable, respectively; C is a coefficient factor; A is a vector that defines the case in which the wolf attacks the prey; a is a vector that affects the distance between the prey and the grey wolf; and t is the current iteration number [46,47].
After the encirclement, the attack is realized, but the knowledge of the leader wolves is consulted for this stage to be successful. The distances of the α, β, and δ wolves to the prey and the new positions of each one can be defined via the corresponding equations. In the end, the grey wolf that found the new position attacks the prey. a is decreased so that the wolves approach the prey; correspondingly, A also changes, and the rule expressing the realization of the attack is given as Equation (28) [45].
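The leader-guided position update summarized above can be sketched as follows; averaging the three moves guided by the alpha, beta, and delta wolves is the standard GWO rule, and the scalar treatment of the A, C, and a quantities per dimension is an assumption of this sketch:

```python
import random

def gwo_update(x, alpha, beta, delta, a):
    """GWO position update sketch: for each dimension, compute one guided
    move per leader wolf using D = |C*leader - x| (Equation (16)) and
    leader - A*D (Equation (17)), then average the three moves."""
    new = []
    for j in range(len(x)):
        guided = []
        for leader in (alpha, beta, delta):
            A = 2 * a * random.random() - a   # attack vector; |A|<1 -> converge
            C = 2 * random.random()           # coefficient factor
            D = abs(C * leader[j] - x[j])     # distance to the leader
            guided.append(leader[j] - A * D)
        new.append(sum(guided) / 3.0)
    return new
```

As a decreases toward 0 over the iterations, A shrinks and the new position collapses onto the mean of the three leaders, which models the final attack.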

Flower Pollination Algorithm (FPA)
The flower pollination algorithm (FPA) is a metaheuristic algorithm suggested by Yang [48]. Pollination, an important mechanism providing the continuity of flowering plant species, underlies the working structure of this algorithm.
In the optimization process, the algorithm performs two stages, known as self-pollination and cross-pollination, applied as local (Equation (29)) and global search (Equation (30)), respectively. In the global search, a Lévy flight function is used, under the assumption that pollinators continue searching by flying randomly; the Lévy function is expressed in Equation (31) [43]. Here, sp is the switch probability (which switches between the two searches). For the i-th design variable, X_i,new is the new value; X_i,j is the initial matrix value corresponding to the j-th candidate solution; X_i,gbest is the best solution in terms of the objective function; and X_i,k and X_i,m are the randomly selected k-th and m-th solutions.
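The two pollination stages can be sketched as below. The Mantegna construction used for the Lévy step is one common realization of Equation (31), not necessarily the exact form used by the authors, and the default values of sp and the Lévy exponent are assumptions:

```python
import math, random

def levy_step(beta=1.5):
    """Mantegna-style Lévy step, a common realization of Equation (31)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def fpa_move(xj, gbest, xk, xm, sp=0.8):
    """FPA move sketch: global pollination via a Lévy flight toward gbest
    (Equation (30)) with probability sp, otherwise local pollination
    between two random solutions (Equation (29))."""
    if random.random() < sp:
        return [x + levy_step() * (g - x) for x, g in zip(xj, gbest)]
    eps = random.random()
    return [x + eps * (a - b) for x, a, b in zip(xj, xk, xm)]
```

Note that when the current solution already equals gbest and the two random solutions coincide, both branches leave the solution unchanged, which is a quick sanity check for the implementation.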

Jaya Algorithm (JA)
In 2016, Rao proposed a method called the Jaya algorithm (JA). The name comes from the Sanskrit word Jaya, which means victory [49]. The method searches for the best solutions by moving toward the best existing value and diverging from the worst one, and it requires only common parameters such as the population size for the optimization process. All of the solutions are generated for each variable via Equation (32), where X_i,best and X_i,worst are the best and worst values in terms of the objective function, and X_i,j and X_i,new are the current and new values of the i-th design variable [50,51].
When the current best solution and the other solutions are close to each other, JA may become trapped in a local solution, although the algorithm has good convergence ability. This is a drawback of the single-phase formulation.
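The single Jaya move of Equation (32) can be sketched as below; the use of the absolute value of the current solution follows the standard Jaya formulation, which is assumed here to match Equation (32):

```python
import random

def jaya_update(x, best, worst):
    """Jaya update sketch (Equation (32)): move each variable toward the
    best solution and away from the worst one, using |x_i| as in Rao's
    standard formulation."""
    return [xi + random.random() * (b - abs(xi))
               - random.random() * (w - abs(xi))
            for xi, b, w in zip(x, best, worst)]
```

When a solution coincides with both the best and worst values, both pull terms vanish and the solution stays put, which illustrates the stagnation risk described above.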

Harmony Search (HS)
Harmony search (HS) is a metaheuristic algorithm inspired by the way a musician enhances a musical performance to reach better harmonies. In the working principle of this method, musical performances are improved by increasing the quality of the harmonies so as to gain the approval of the audience. Geem et al. [7] formulated HS as an optimization tool. To mimic this natural musical process, HS operates via two different choices: the possibility of memory usage and the generation of random notes, formalized in Equations (33) and (34), respectively. The classical equations of HS, together with its evolution history, were presented by Zhang and Geem [52] and Geem [53]. Modified equations of HS are used in this study.
HMCR is the harmony memory consideration rate and FW is the fret width. Also, X_i,new, X_i,min, X_i,max, and X_i,k express the new value, the lower limit, the upper limit of the i-th design variable, and the k-th randomly selected candidate vector, respectively. A random number between −1/2 and 1/2 is denoted rand(−1/2, 1/2).
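One improvisation step combining Equations (33) and (34) can be sketched as below; the clamping of the adjusted value to the variable bounds is an assumption of this sketch:

```python
import random

def improvise(memory, x_min, x_max, HMCR=0.5, FW=0.1):
    """HS improvisation sketch: with probability HMCR, take a note from a
    random memory vector and adjust it by FW * rand(-1/2, 1/2)
    (Equation (33)); otherwise generate a random note within the bounds
    (Equation (34))."""
    new = []
    for i in range(len(x_min)):
        if random.random() < HMCR:
            k = random.randrange(len(memory))      # memory consideration
            xi = memory[k][i] + FW * random.uniform(-0.5, 0.5)
        else:                                      # random generation
            xi = x_min[i] + random.random() * (x_max[i] - x_min[i])
        new.append(min(max(xi, x_min[i]), x_max[i]))
    return new
```

The new harmony would then replace the worst memory vector if it is better, completing one HS iteration.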

Adaptive Harmony Search (AHS)
In this modification of HS, the algorithm is arranged so that the HMCR and FW values change during the iteration process. Modified versions of the HS parameters, expressed as Modified HMCR (harmony memory consideration rate) and Modified FW, are utilized and vary with the iterations. These parameters are formalized in Equations (35) and (36).
HMCR and FW are the classical parameters, whose values are initially chosen by the user; CI and TI express the current iteration step and the total iteration number, respectively. In the numerical evaluations, the initial values of HMCR and FW are also taken randomly to present an algorithm that does not depend on specific parameter choices. Via this modification, the convergence ability is increased, as can be seen in the numerical examples. Besides, it is not required to calibrate the algorithm parameters, since different combinations of parameters are used during the iterations.
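Since Equations (35) and (36) are not reproduced in this text, the sketch below uses a hypothetical linear schedule that merely illustrates the idea of iteration-dependent parameters: HMCR grows toward 1 (more exploitation) and FW shrinks toward 0 (finer adjustment) as the run progresses. The exact schedules of AHS may differ:

```python
def adaptive_parameters(HMCR0, FW0, CI, TI):
    """Hypothetical adaptive schedule in the spirit of Equations (35)-(36):
    linearly raise HMCR toward 1 and shrink FW toward 0 with the iteration
    ratio CI/TI (CI = current iteration, TI = total iterations)."""
    frac = CI / TI
    modified_hmcr = min(1.0, HMCR0 + (1.0 - HMCR0) * frac)
    modified_fw = FW0 * (1.0 - frac)
    return modified_hmcr, modified_fw
```

At the first iteration the user-chosen (or random) initial values are used unchanged, and at the final iteration the search becomes almost purely memory-based with a vanishing adjustment width.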

Adaptive-Hybrid Harmony Search (AHHS)
HS does not use the best or worst solutions in its formulations, while JA uses both of them in a single equation. Moreover, JA has only a single phase and always uses the best and worst solutions without considering the other ones. This difference between the two algorithms led to the idea of hybridizing them.
For the hybrid modification, HS is combined with JA by using the general optimization equation of the JA method, shown in Equation (32), instead of the global search phase of HS (expressed via Equation (33)), together with the modified versions of the HMCR and FW parameters. Through this modification, the effective feature of JA, using both the best and worst solutions in a single equation, is exploited, while the disadvantages of a single-phase algorithm are avoided by the hybridization. Also, the user-defined initial values of HMCR and FW are replaced by random values at the start of the optimization to propose an algorithm that is free of user-defined parameters, like JA. By using a two-phase algorithm, trapping in a local optimum is also prevented.
As shown in the numerical examples, this algorithm offers good convergence, fast computational time, and robustness.
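The hybrid new-solution rule described above can be sketched as follows: the memory-based branch of HS is kept, while the random-generation branch is replaced by the Jaya move of Equation (32). Applying the Jaya move to a randomly chosen memory vector and clamping to the bounds are assumptions of this sketch:

```python
import random

def ahhs_candidate(memory, best, worst, x_min, x_max, HMCR, FW):
    """AHHS candidate sketch: local search keeps the HS memory
    consideration with FW pitch adjustment (Equation (33) replaced only
    in its global branch); global search uses the Jaya move of
    Equation (32) on a random memory vector."""
    new = []
    for i in range(len(x_min)):
        if random.random() < HMCR:                 # HS local search
            k = random.randrange(len(memory))
            xi = memory[k][i] + FW * random.uniform(-0.5, 0.5)
        else:                                      # Jaya move as global search
            k = random.randrange(len(memory))
            xi = (memory[k][i]
                  + random.random() * (best[i] - abs(memory[k][i]))
                  - random.random() * (worst[i] - abs(memory[k][i])))
        new.append(min(max(xi, x_min[i]), x_max[i]))
    return new
```

In a full AHHS run, HMCR and FW would be the iteration-dependent modified parameters, initialized randomly as described for AHS.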

Investigation and Optimization of Reinforced Concrete (RC) Retaining Walls
The cross-section of a T-shaped retaining wall is shown in Figure 1, including the design variables and design constants. The definitions of the symbols are listed in Table 1. The active and passive stresses that occur due to the earth pressures of the soil and the external loads are calculated using Rankine earth pressure theory [54]. The objective function is the total material cost of the RC retaining wall, and the aim is to minimize it subject to the design constraints listed in Table 2. Also, some coefficients that are handled to provide the safety of the retaining wall can be seen in Table 3. These coefficients and constraints are calculated according to ACI 318: Building Code Requirements for Structural Concrete [55]. If one of the design constraints is violated, the objective function is penalized with a big value. In the design of the RC retaining wall, a tension-controlled design is performed. For that reason, all sections must be designed for a minimum net tensile strain of 0.005 in the extreme tension steel at nominal strength.
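The penalized objective described above can be sketched as follows; the penalty magnitude and the convention that a positive constraint value (g > 0) denotes a violation are assumptions of this illustration:

```python
def penalized_cost(cost, constraint_values, penalty=1.0e6):
    """Penalty sketch: any violated geotechnical or structural constraint
    (g > 0 under the assumed convention) adds a big penalty so the
    selection step of any metaheuristic rejects the infeasible design."""
    violations = sum(1 for g in constraint_values if g > 0.0)
    return cost + penalty * violations
```

A feasible design keeps its material cost unchanged, while a single violated constraint makes the design effectively uncompetitive.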
In this section, the optimization applications of GA, DE, PSO, HS (classic (HS), adaptive (AHS), and adaptive-hybridized (AHHS) versions), FA, ABC, TLBO, FPA, GWO, and JA are explained for the minimization of the total material cost (concrete and steel reinforcement) of cantilever retaining wall designs. Four different cases are investigated for the wall models individually:
Case 1: Thirty multiple cycles of optimization of the design data in Table 1.
Case 2: Thirty multiple cycles of optimization of the design data of Case 1, but for a different wall height H.
Case 3: Determination of the best iteration number and population number combination by using different values concerning data in Table 1.
Case 4: Optimization with 20 multiple cycles for the different wall parameter combinations given in Table 2.

Table 3. ACI 318-14 regulation values utilized in the optimization process.

Load Coefficients in ACI Regulation | Symbol | Value
The coefficient for load increment | C_l | 1.

Optimization for T-Shaped Wall Designs via Multiple Cycles (Case 1)
In Case 1, the optimization processes were carried out for 30 cycles with a population of 20 and 5000 iterations to generate optimum wall designs by considering the design values stated in Table 1. The termination criterion is reaching the defined iteration number. Via that criterion, it is also possible to check the number of iterations needed to reach the optimum result. The optimum values of the design variables and the objective function, with statistical measurements obtained from the 30 cycles using the 10 metaheuristic algorithms, are shown in Table 4. Also, the optimization results of HS, AHS, and AHHS, where the HMCR and FW values are handled as 0.5-0.1 and 0.1-0.1 together with randomly determined values (rand()), can be seen in Table 5. From the results, the optimum design has a zero X_2 variable; the front encasement is not required in the optimum design using the design constants given in Table 1. Only the GA and GWO results have a small non-zero value for X_2, but these algorithms are not effective in finding the best optimum result. On the other hand, according to Table 5, among all of the HMCR-FW combinations, the best results were provided by the random-valued arrangements for each modification of HS. Therefore, only the random-valued arrangements will be evaluated in the following cases. Additionally, the most effective modification is observed to be the random-valued combination of AHHS, since it can reach the minimum cost as the objective function with an extremely small deviation compared with all of the HS and AHS arrangements.

Optimization for Wall Designs via Multiple Cycles for H = 10 m (Case 2)
The optimum design of the RC retaining wall is conducted in Case 2 to find an optimum design with a front encasement. The H value of the wall is taken as 10 m, while all other design properties of the wall and parameters of the optimization process are the same as in Case 1, except for the iteration number, which is 40,000. The optimum results are given in Tables 6 and 7. It can be understood from Table 6 that DE, TLBO, FPA, and JA could reach the minimum cost, but DE deviated from this value at a very high rate over the cycles. Also, TLBO and especially JA converge to this value with an extremely small standard deviation; both algorithms can be considered the best options for determining the optimum design. On the other hand, HS and AHS are not so effective in terms of reaching the minimum cost. However, AHHS can converge to the minimum cost with a small deviation, almost similar to the best algorithms (TLBO and JA).
Also, Figure 2 shows the convergence behavior with respect to the minimum total cost in the cycle where the best results are obtained with HS and both modified versions of HS. As can be seen from this figure, AHHS, which can find the optimum results and the best cost, reaches the minimum cost level earlier than the other two methods.

Best Population and Iteration Numbers for Optimization Processes (Case 3)
In Case 3, the best iteration and population numbers were found for determining the optimum design parameters and minimum cost values of the retaining wall structures. Accordingly, for all of the mentioned metaheuristics, the population number was set to 3, 5, 10, 15, 20, 25, or 30, and the iteration number was increased from 1 to 5000 in increments of 499, for a specific design composed of the values given in Table 1. In this way, the optimum design parameters providing the minimum material cost were found together with the most convenient population and iteration numbers for each algorithm. These results can be seen in Tables 8 and 9. As shown in the tables, the minimum cost can be obtained via DE, TLBO, FPA, JA, and AHHS. However, the standard deviation of the AHHS result is extremely low when this cost value is determined, and its population number is smaller than that of the other most effective algorithm, JA. AHHS can reach the optimum result with a smaller number of analyses, which is calculated by multiplying the population number by the iteration number for all algorithms other than TLBO; for TLBO this value is doubled, since it employs two phases per iteration.

Optimum Analysis for Different Wall Structure Variations with Multiple Cycles (Case 4)
Finally, numerous retaining wall models are generated employing different ranges for three design constants: the stem height (H), the unit weight of the backfill soil (γ_z), and the surcharge load on the top elevation of the soil (q_a). These constants and their properties are summarized in Table 10.

Table 10. Range values of design constants used in the optimization process.

On the other hand, for AHS, the minimum costs provided in consecutive cycles for γ_z = 16 kN/m3 with a 7 m wall height and q_a of 0 and 10 kN/m2 are unstable/wavy. Also, in the AHHS analysis results, there is only a very small change in the minimum cost for a single design parameter combination, namely γ_z = 16 kN/m3, H = 10 m, and q_a = 0 kN/m2. In general, classical HS and its modified versions are effective in this evaluation.

Case 1
In Case 1, the optimum design variables and the minimum cost were obtained for a specific design by performing 30 cycles. The best result, i.e., the minimum cost value, can be provided via DE, PSO, FPA, TLBO, and JA. However, the most effective ones are JA (first) and TLBO (second), because the standard deviations of their best results are the smallest. These two methods are the most effective and useful ones for the stability of the optimization results. The standard deviation values of DE and especially PSO were quite large and could cause the optimum values to change in every analysis. Thus, DE and PSO are not consistent, and the two mentioned new-generation algorithms can be accepted as effective and stable for this structural model. The optimum design has a zero toe slab/front encasement width, and all algorithms other than GA and GWO are effective in finding this solution. Additionally, AHS and AHHS succeed in reaching the minimum cost through the usage of randomly determined parameters and all of the parameter combinations, respectively. In particular, AHHS performs considerably well in this regard, showing relatively small deviations.

Case 2
Case 2 was carried out like Case 1, considering a wall height (H) of 10 m. The optimization results were obtained with the same parameters (except for the iteration number, which is 40,000) expressed in Table 1. Differently from Case 1, the optimum result of the front encasement (X_2) is not zero. As seen in Table 6, although DE, TLBO, FPA, and JA are effective in finding the best objective function value, i.e., the minimum total cost, DE has a large error/deviation rate. This may cause the optimum costs to occur at varying values. The most effective methods are TLBO and JA in terms of finding the best cost with a very small deviation. Moreover, FPA is also successful in this respect but approaches the minimum cost with a slightly higher deviation. As for the HS modifications, the best result, $1365.2365, can be observed only in the operation performed via AHHS, which combines HS and JA. A performance similar to TLBO and JA can be seen in terms of the smallness of the standard deviation. Also, the proposed hybrid algorithm has a better convergence ability than HS and AHS; regarding the difference between the best and average results, the hybrid method shows a very small difference compared to the others.

Case 3
In Case 3, the optimization process was performed by using different iteration and population numbers to find the best parameter combinations. According to the obtained results (Table 8), DE and FPA have equal performance in terms of minimizing the cost, and JA and TLBO are the best algorithms for the aim of this process. However, a significant point regarding the best iteration number is that DE is better than FPA, with 1997 iterations. The required iteration and population numbers are almost the same for TLBO and JA; when they are compared, JA's deviation is smaller than TLBO's. In summary, DE is very effective at providing optimum values and prevents loss of time. As a second alternative, FPA can be preferred due to its equivalently small deviation and lower iteration number. This proves that the best choice of population and iteration numbers plays an important role in the performance increase of DE and FPA. According to the results of HS and its modifications, only AHHS succeeds in detecting the minimum cost of $428.1139 with a very small standard deviation. From the results, AHHS is the best among all of the metaheuristics for this case because of its extremely low population number of 10, compared with the other values such as 25 and 30. That is why AHHS provides a speed increase in reaching the optimum values compared to the other algorithms, while its other properties are roughly equal to those of the best ones.

Case 4
In Case 4, the minimum costs and optimum design variables were determined for retaining wall designs modeled by using different value ranges for the design constants. The results were evaluated with respect to two different values of γ_z (minimum 16 kN/m3, maximum 22 kN/m3) and the minimum, average, and maximum values of H. Only HS and its modified versions, which are developed in the current study, were evaluated, and Figures 3-8 show the fluctuation of the minimum costs for specific designs at several values of q_a, for HS, AHS, and AHHS, respectively. It can be seen in the analysis results for the modified versions that the deviations occur only for the γ_z value of 16 kN/m3, for all wall height combinations. This means that, as γ_z increases, minimum cost deviations for the different designs generated with various wall heights do not appear for any q_a. Also, according to the results of HS, AHS, and AHHS for 16 kN/m3, the deviations of the costs over the 20 cycles are most apparent in classical HS and AHS with q_a of 0 and 10 and a 7 m wall height (H), respectively. Consequently, in Case 4, the most effective and usable algorithm for the wall designs is AHHS, in terms of both the stability of the objective function values and the true minimum cost values, with almost no fluctuation of the costs.

General Advantages of AHHS
It is noticeable that AHHS is quite effective and can provide the optimum design variables with the minimization of the total cost in all of Cases 1, 2, and 3. According to this performance, AHHS is a competitive method. The robustness of the algorithm and its success against the local optima problem were demonstrated by the very small standard deviations obtained over multiple runs, while most of the classical methods have large standard deviations. The essential performance of the algorithm concerns the computational effort, since it needs a smaller number of iterations than the others to reach the final optimum results. Furthermore, AHHS can be used as an algorithm free of user-defined parameters by assigning the parameters random numbers. The analyses performed with randomly assigned initial HMCR and FW parameters showed that AHHS was not affected by the choice of these parameters; moreover, the best performance is obtained by using random parameters. In that case, being an adaptive algorithm plays an important role in this performance.