1. Introduction
Dams are structures built on river branches with sufficient water flow to store water by creating a barrier. The dam body serves the functions of flood control and electricity generation, as well as storing water for domestic, agricultural, and industrial use by accumulating water behind it. The dam body is typically filled with soil, rock, or concrete fill materials. To ensure the dam structure remains statically balanced, horizontal and vertical forces must be safely transferred through the dam body. The primary obstacle to the operational sustainability of dams is sediment accumulation, which reduces reservoir volume by an average of 0.8% annually. The data presented in ICOLD (International Commission on Large Dams) Bulletin 147 [1] clearly demonstrate the severity of this volumetric loss, condemning traditional designs that lack sustainable management strategies to a short economic lifespan of 50 to 70 years. In the existing literature, this value is treated not merely as a parameter that reduces water storage or energy capacity, but as a critical technical risk factor that directly undermines project payback periods and operational management. In this study, a concrete-filled dam was designed upstream of the Hasanlar Dam, which was put into operation in 1972 in the Yığılca district of Düzce Province. The Küçük Melen River, which feeds the dam lake, carries large amounts of sediment, reducing the dam’s operational capacity. The primary criterion necessitating this study is that the Hasanlar Dam is nearing the end of its economic lifespan due to the accumulated sediment. Abu-Afifeh et al. [
2] conducted a study of the Huangfuchuan River basin with the objective of demonstrating how sediment accumulation and climatic factors reduce the lifespan of dams. In their study, Shahraki et al. [
3] analysed the importance of reservoir water volume. To this end, the Ant Colony Optimization (ACO) algorithm was employed to optimise water resource allocation for the Pishin Dam. The various management scenarios contemplated by the authors were predicated on the demands for agricultural, drinking water, and environmental use. Carvajal González et al. [
4] conducted an examination of the areas that would be affected by flood risk in the event of dam failure under various scenarios.
Literature on dam structures has mostly focused on minimizing the cost of the dam by calculating the most economical cross-section of the body under the effects of horizontal and vertical loads. The studies on the minimization of the dam cross-section have been carried out as follows: Aldemir [
5] performed a two-dimensional numerical analysis of the effects of seismic effects; Saplıoğlu et al. [
6] used the Symbiotic Search Algorithm; Khatibinia et al. [
7] calculated it with a hybrid IGSA-OC based on the Gravitational Search Algorithm (IGSA) and orthogonal crossing (OC) and compared the results with Particle Swarm Optimization (PSO); Khatibinia et al. [
8] used the IGSA-OC algorithm with the finite element method; Seifollahi et al. [
9] calculated it with the Grasshopper Optimization Algorithm and compared it with different optimization methods; Kaveh et al. [
10] used the Charged System Search (CSS), Colliding Bodies Optimization (CBO), and Enhanced Colliding Bodies Optimization (ECBO) variants; Ferdowsi et al. [
11] used Invasive Weed Optimization (IWO); Salmasi used a Genetic Algorithm (GA) [
12] for the minimization of the dam cross-section; and Rezaeeian et al. [
13] used Ant Colony Optimization (ACO). Another study presented a model to predict dam displacement using Gaussian Process Regression (GPR). The study includes a practical application on a prototype gravity dam, where various simple and combined covariance functions are evaluated to determine the most effective choice, with the aim of optimizing the performance of the proposed method [
14]. The efficiency of hydroelectric power plants is affected by several complex factors. Factors such as river/reservoir flow, turbine efficiency, water levels, and reservoir capacity, as well as environmental impacts, affect the efficiency of hydroelectric power plants. In the study, Artificial Neural Network (ANN) models were developed and applied to predict Ecuador’s hydropower production in the short and medium term [
15]. Pumped-storage hydroelectric plants (PHPs) generate electricity using only pumped water, since their upper reservoirs receive no natural inflow. The authors used different heuristic methods to reduce the production cost of PHPs and to find the optimal operating parameters of power systems [
16]. Tounsi et al. [
17] noted that reservoir management and flow forecasting are important for improving the efficiency of hydroelectric power plants and managing water resources sustainably. The development of a Machine Learning (ML)-based technique has enabled the comprehension of reservoir management rules and the prediction of downstream discharge values. Allawi et al. [
18] explored the application of the Shark Machine Learning Algorithm (SMLA) in formulating an optimal reservoir release strategy and assessed the model’s efficacy with respect to the physical attributes of the dam and reservoir. Estimation of the physical parameters of dams is of increasing importance today. River water flow rate (RWFR) prediction plays a critical role in the planning and construction of new water dams, as well as in the effective operation of previously built ones. In their work, Ilhan [19] proposed machine learning algorithms for predicting future short-term RWFR. Li et al. [
20] emphasized that in terms of dam construction and upstream design, stream flow, hydrology, climatology, flood management, drought risks, and water resources planning and management are of vital importance. Ekhtiari et al. [
21] successfully modeled the dam site selection problem (DSSP) using binary programming, encompassing determinism, uncertainty, and hybrid conditions. The proposed new model, by integrating nadir compromise programming (NCP) and stochastic programming, effectively addresses uncertainties, emphasizing its contribution to making more reliable decisions in dam site selection. To support environmentally friendly hydroelectric planning in developing regions, O’Hanley et al. [
22] proposed a spatial optimization model to balance the trade-off between hydroelectric power generation and the richness of migratory fish species. The model, specifically tailored to the life cycles of tropical migratory fish, aims to minimize environmental impacts by integrating decisions on dam placement and removal. Kaygusuz [
23] emphasized that the construction of each hydroelectric power plant affects the watercourse and disrupts the environmental balance at the site, while dams and reservoirs significantly transform the surrounding landscape. Hydroelectric recovery is asserted to be a significant measure for enhancing the efficiency of water supply systems, providing benefits to the environment, economy, and society. Hydropower is a well-established technology and currently represents the most substantial contribution among renewable energy sources. Utilizing the hydroelectric potential inherent in water supply dams has been shown to significantly mitigate adverse environmental impacts, underscoring the efficacy and necessity of such practices. Studies on increasing the efficiency of water supply systems (WSSs) reveal that these structures are useful not only for water supply operations but also as a source of clean energy production. In this context, Vilanova and Balestieri [
24] demonstrated that recovering hydraulic energy at various stages of the supply process can increase energy output and reduce operating costs. Similarly, Küçükali [
25] found that utilizing existing water supply dams in Türkiye can reduce investment costs by about 50% and that these systems operate with a much higher capacity factor than run-of-river power plants. By examining the global trajectory of renewable energy research, Manzano-Agugliaro et al. [
26] noted that hydroelectric technology has reached the most advanced stage among other alternative energy sources and that scientific production in this field is particularly concentrated in areas where resources are abundant. The authors emphasize that, because of this technological development, current research focuses on the potential and growth of small hydroelectric power plants rather than large ones. Furthermore, Hunt et al. [
27] highlighted that the adoption of a modular approach in dam construction has brought about reduced construction requirements and an optimized structure for movable turbines. Modularity in the construction sector is highlighted for its advantages, including shorter construction periods, increased workforce efficiency and safety, enhanced production quality, reduced delays due to weather conditions, decreased environmental and social impacts, minimized construction site congestion, reduced uncertainties, and improved overall efficiency. Quaranta et al. [
28] stated that the modernization of Hydroelectric Power Plants (HPPs) can provide various benefits in terms of energy production, flexibility, safety, and operation, while also emphasizing positive effects on the environment. Santos et al. [
29] emphasized the potential of increasing the hydroelectric reserve capacity by raising the structure height. Therefore, they indicate that it could contribute to higher efficiency and more effective operation of hydroelectric power plants. Laks et al. [
30], using fuzzy analytic hierarchy process methods, conducted a case study exploring the potential for increasing energy production by altering the dam level of a small hydropower plant. Ghasempour et al. [
31] conducted a study to implement a hybrid renewable energy system with the aim of increasing energy production and reducing water evaporation in the reservoir.
Recent research reveals that advanced numerical and probabilistic methods are gaining increasing importance in stability analysis and structural optimization problems. Probabilistic stability analyses based on the random finite element method are used as an effective approach in evaluating the reliability of layered slopes by considering the spatial variability of soil properties [
32]. In addition, comprehensive review studies on civil engineering materials show that comparative examination of different optimization strategies contributes to a better understanding of performance improvement processes. Such studies reveal more clearly the advantages and limitations of the methods used, forming an important basis for future research [
33].
This research focuses on the strategic replacement of the Hasanlar Dam, located in the Yığılca district of Düzce, Turkey. Sediment accumulation poses a critical threat to dam functionality worldwide, frequently limiting the economic viability of dams to a window of just 50 to 70 years. This study proposes a preventive renovation approach that focuses on constructing a new reinforced concrete structure in the upper basin, instead of relying solely on reactive repairs. The approach aims to improve structural durability while also ensuring long-term economic efficiency.
As part of the process, a new dam axis was initially determined, and the reservoir volume was calculated using remote sensing methodologies to establish the height criteria for the new structure. Accordingly, the new dam height was determined based on the existing capacity of the Hasanlar Dam, which is 55 million m3 [34]. To determine the initial geometry, a comprehensive stability analysis was conducted, considering self-weight, hydrostatic loads resulting from upstream and downstream elevations, and other relevant forces. Subsequently, the cross-sectional area was minimized using heuristic optimization techniques, with the slope ratios constrained within the ranges 0 < n < 0.20 and 0.60 < m < 0.80 [11]. To refine this geometry, seven current algorithms were applied: Genetic Algorithm (GA), Arithmetic Optimization Algorithm (AOA), Grey Wolf Optimization (GWO), Dragonfly Algorithm (DA), Particle Swarm Optimization (PSO), Crayfish Optimization Algorithm (COA), and Cheetah Optimization (CO). While these methods have extensive utilization across diverse domains, their integrated application here addresses a specific gap in hydraulic engineering. For instance, Abualigah et al. [
35] provided a comprehensive compilation of metaheuristics for engineering design, while Turgut et al. [
36] investigated chaotic and oppositional variations in the AOA. Further studies by Agushaka et al. [
37] and Yiğit et al. [
38] explored the synergy between AOA, GWO, DA, and PSO for structural problems, and Lin et al. [
39] utilized IGWO and WOA to determine long-term deformation characteristics.
Metaheuristic algorithms such as the Genetic Algorithm, Particle Swarm Optimization, and Grey Wolf Optimization are commonly used to address engineering optimization problems and have proven to be effective techniques. While these computational approaches excel at tackling non-linear engineering challenges, they are far from interchangeable. Their core differences lie in how they navigate the search landscape and manage the balance between convergence speed and global accuracy. The Genetic Algorithm (GA) is particularly notable for its robust exploration of diverse solution spaces, which prevents it from settling for mediocre results. However, this thoroughness is not without its drawbacks: the iterative nature of GA demands significant computational resources, often leading to overheads that require careful management in time-sensitive projects.
In contrast, Particle Swarm Optimization (PSO) is widely used in engineering design due to its simple parameter structure and fast convergence. More recent algorithms, such as the Crayfish Optimization Algorithm (COA), aim to improve the search process by balancing exploration and exploitation more effectively. In this study, PSO and COA are evaluated and compared in terms of their performance, with the comparison focusing on their application to the optimization of dam cross-sections.
The main contribution of this research is the integration of high-resolution spatial datasets with advanced computational intelligence methods. The outcomes of this approach can be summarized as follows:
Strategic modernization: Instead of relying on conventional repair strategies, the study proposes a proactive framework for infrastructure that is approaching the end of its economic service life.
Methodological integration: The study connects remote sensing data with the practical conditions and engineering constraints of the study area.
Improved efficiency: A 29.36% reduction in the dam’s cross-sectional area was achieved, indicating a more efficient use of both financial and technical resources.
The remainder of this paper is organized as follows.
Section 2 presents the mathematical background of the metaheuristic methods used in the study.
Section 3 describes the application at Hasanlar Dam, including volume calculations and the formulation of the optimization problem.
Section 4 presents the optimization results and compares the algorithm performances. Finally,
Section 5 provides conclusions and discusses potential directions for future work.
2. Metaheuristic Algorithms
The increasing complexity of structural design in hydraulic engineering has led to the use of more advanced computational methods. This section examines the performance of metaheuristic optimization algorithms in the design and optimization of dam geometries. Unlike conventional deterministic methods, which may have difficulty converging in nonlinear search spaces, metaheuristic algorithms provide flexible and efficient search mechanisms for identifying near-optimal solutions.
To reduce the dam’s cross-sectional area while maintaining structural safety, seven optimization algorithms were applied: PSO, DA, CO, COA, GA, GWO, and AOA. By comparing the search performance of these algorithms, the study aims to identify a resource-efficient geometric configuration for the dam structure. The results contribute to improving both structural performance and water management efficiency in hydraulic infrastructure.
2.1. Methodological Framework
Integrating geospatial data with hydraulic engineering analysis requires the effective use of both spatial analysis and computational optimization methods. In this study, a structured four-stage methodology is adopted to combine high-resolution spatial datasets with metaheuristic optimization techniques. The workflow presented in
Figure 1 outlines the main steps of the research process, starting from the acquisition of spatial data and ending with the development of an optimized dam geometry.
This approach ensures that the optimization process is carried out with a consistent mathematical framework while also considering the site-specific conditions of the study area. As a result, the proposed framework provides a practical solution that can be directly applied to the engineering requirements of the project site.
Phase 1 (Data Acquisition): The determination of the current 55 million m3 volume of the Hasanlar Dam and the selection of a new dam axis are carried out at this stage.
Phase 2 (Initial Design): The initial geometry is created using stability analysis and load definitions (intrinsic weight, hydrostatic loads, etc.).
Phase 3 (Optimization Engine): Using the seven algorithms, including GA, AOA, and GWO, the cross-sectional area is minimized under the slope constraints (0 < n < 0.20 and 0.60 < m < 0.80).
Phase 4 (Validation and Selection): The most efficient section is selected according to convergence analysis and performance criteria. At this stage, a 29.36% reduction in cross-sectional area was obtained.
2.2. Genetic Algorithm
Genetic algorithms are a subset of evolutionary algorithms proposed by John Holland that aim to achieve the best result in the solution space [
40]. It is an optimization method that uses evolutionary operators (selection, crossover, mutation) to evaluate possible individuals in the solution space to achieve better solutions. The primary goal of GA is to produce individuals that maximize or minimize the fitness function [
41].
2.2.1. Initial Population
In GA, candidate solutions are called chromosomes, and the initial population is usually generated using Equation (1). Here, xi represents the chromosomes and n represents the population size. The chromosomes created are encoded using binary, real-valued, or permutation representations, depending on the type of problem [
42].
2.2.2. Fitness Function
The success of a genetic algorithm depends on the correct definition of the fitness function, which is usually calculated using Equation (2) [
43].
If the objective is minimization, the fitness function can be normalized using Equation (3).
This conversion prevents negative values or division by zero in the solution [
44].
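One standard normalization consistent with this description (reconstructed here as an assumption, using the widely adopted reciprocal form) is

```latex
F(x_i) = \frac{1}{1 + f(x_i)}, \qquad f(x_i) \ge 0,
```

where \(f\) is the objective to be minimized and \(F\) the resulting fitness; the constant 1 in the denominator is what prevents division by zero.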
2.2.3. Selection
The selection process ensures that individuals with higher suitability values for the solution are more likely to be included in the new generation. The roulette wheel method is one of the most commonly used methods in the selection process and is presented in Equation (4) [
43].
This ensures that individuals’ chances of being selected are proportional to their suitability scores [
45].
2.2.4. Crossover
Crossover produces new individuals through the transfer of genetic information from two parent individuals. Single-point crossover is one of the classic and simplest crossover methods. In this method, two parent individuals are selected, and a random cut-off point is determined along their chromosomes [
46]. From this point on, the genetic information of the parents is exchanged to create two new offspring [
47]. The two parental chromosomes given are presented in Equation (5).
When point k is randomly selected, the offspring produced are presented in Equation (6) [
48].
2.2.5. Mutation
In genetic algorithms, mutation is a critical operator used to increase solution diversity and avoid local optima. It generates new solutions by adding random noise generated from a normal distribution (Gaussian) to an existing solution, and its formulation is presented in Equation (7) [
43].
Here, xi represents the parameter that will undergo mutation, N(0, σ2) represents the normal distribution with mean zero and variance σ2, and x′i represents the new solution after mutation.
2.2.6. Stopping Criteria
Genetic algorithms are terminated when a certain number of iterations is reached or when the fitness function falls below a certain threshold value [
43]. This criterion is presented in Equation (8).
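The operators described in Sections 2.2.1 to 2.2.6 can be combined into a compact real-coded sketch. The test function, parameter values, and the helper name `ga_minimize` are illustrative assumptions, not the configuration used in the study.

```python
import random

def ga_minimize(f, dim=2, pop_size=30, generations=100, sigma=0.1,
                p_mut=0.2, lo=-5.0, hi=5.0, seed=0):
    """Compact real-coded GA sketch: roulette-wheel selection, single-point
    crossover, and Gaussian mutation (x' = x + N(0, sigma^2)) as outlined above."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

    def fitness(x):                       # Eq. (3)-style normalization for minimization
        return 1.0 / (1.0 + f(x))

    def roulette(fits):                   # Eq. (4): selection proportional to fitness
        r = rng.uniform(0, sum(fits))
        acc = 0.0
        for i, fi in enumerate(fits):
            acc += fi
            if acc >= r:
                return i
        return len(fits) - 1

    best = min(pop, key=f)
    for _ in range(generations):          # stopping criterion: fixed iteration budget
        fits = [fitness(x) for x in pop]
        children = []
        while len(children) < pop_size:
            p1, p2 = pop[roulette(fits)], pop[roulette(fits)]
            k = rng.randrange(1, dim) if dim > 1 else 0
            c = p1[:k] + p2[k:]           # Eqs. (5)-(6): single-point crossover
            c = [xi + rng.gauss(0.0, sigma) if rng.random() < p_mut else xi
                 for xi in c]             # Eq. (7): Gaussian mutation
            c = [min(max(xi, lo), hi) for xi in c]
            children.append(c)
        pop = children
        best = min(pop + [best], key=f)   # keep the best solution found so far
    return best
```

Minimizing a simple sphere function, `ga_minimize(lambda x: sum(v * v for v in x))` drives the best solution close to the origin within the iteration budget.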
2.3. Grey Wolf Optimizer
The Grey Wolf Optimizer (GWO) is a metaheuristic approach inspired by the sophisticated leadership hierarchy and collective hunting mechanisms of grey wolves in nature. Specifically, GWO models the social governance and predatory patterns inherent in wolf packs [
49]. GWO mathematically models the social hierarchy within a pack and collective hunting strategies, dividing individuals into four basic functional layers: alpha (α), beta (β), delta (δ), and omega (ω).
Within this hierarchical structure, the α, β, and δ wolves, who assume leadership roles, function as the primary decision-making mechanisms in determining the search direction during the optimization process, representing the most promising candidate solutions. In addition to this leadership mechanism, which forms the basis of the system, the ω wolves also play an indispensable role in the large-scale exploration of the solution space.
In this approach, the search agents do not follow a fixed path. Instead, their positions are continuously updated according to the guidance of the α, β, and δ leaders. This mechanism helps maintain diversity within the population and supports the exploration capability of the algorithm. By preserving this diversity, the method reduces the risk of premature convergence. As a result, the algorithm is less likely to become trapped in local optima and can explore the search space more effectively. Ultimately, this multi-layered interaction structure functions as a self-balancing feedback mechanism that guides the entire population toward a more resilient and sensitive global convergence [
38].
To evaluate the effectiveness of GWO on optimization problems, the following factors can be considered [
50]:
The social hierarchy helps GWO preserve the best solutions obtained over the course of the iterations.
The proposed encircling mechanism defines a circle-shaped neighborhood around the solutions, which can be extended to higher dimensions as a hypersphere.
The random parameters A and C assist candidate solutions in generating hyperspheres with different random radii.
As outlined in the proposed hunting strategy, candidate solutions may identify the probable position of the prey.
The adaptive values of the parameters a and A allow GWO to transition smoothly between exploration and exploitation.
As A decreases, half of the iterations are devoted to exploration (wherein |A| ≥ 1) and the other half to exploitation (wherein |A| < 1).
Only two main parameters need to be configured: a and C.
The findings derived from semi-real and real-world problems underscore the potential of GWO to demonstrate high performance, both in unconstrained and constrained scenarios. This finding underscores the potential of GWO as an effective optimization tool in a broad spectrum of applications [
50].
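The hierarchy and the role of the decreasing parameter a can be sketched with the standard GWO position update. The function name, bounds, and parameter values are illustrative assumptions rather than the study's settings.

```python
import random

def gwo_minimize(f, dim=2, wolves=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Sketch of the standard GWO update: each wolf is pulled toward the
    alpha, beta, and delta leaders while a decreases linearly from 2 to 0."""
    rng = random.Random(seed)
    pack = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        pack.sort(key=f)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        a = 2.0 * (1.0 - t / iters)        # drives exploration -> exploitation
        new_pack = []
        for x in pack:
            pos = []
            for j in range(dim):
                guided = []
                for leader in (alpha, beta, delta):
                    A = 2.0 * a * rng.random() - a   # |A| >= 1: explore, |A| < 1: exploit
                    C = 2.0 * rng.random()
                    D = abs(C * leader[j] - x[j])    # distance to the leader
                    guided.append(leader[j] - A * D)
                xj = sum(guided) / 3.0               # average of the three guides
                pos.append(min(max(xj, lo), hi))
            new_pack.append(pos)
        pack = new_pack
    return min(pack, key=f)
```

On a simple sphere function the pack contracts around the origin as a approaches zero.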
2.4. Arithmetic Optimization Algorithm
The Arithmetic Optimization Algorithm represents a heuristic algorithm capable of efficiently optimizing constrained problems and yielding competitive results [
51]. The fundamental elements of number theory are at the core of the AOA design. The Arithmetic Optimization Algorithm (AOA) uses basic mathematical operators to guide the search process in optimization problems. The algorithm divides the search procedure into two main phases to balance global exploration and local exploitation. The Multiply (M) and Divide (D) operators are mainly used during the exploration phase to maintain diversity in the population and search different regions of the solution space. In contrast, the Addition (A) and Subtraction (S) operators are applied in the exploitation phase to refine promising solutions. This mechanism enables the algorithm to gradually improve candidate solutions and move toward the global optimum, making AOA suitable for complex engineering optimization problems. A notable advantage of AOA is its simplicity in configuration, as it operates effectively by defining only the population size and the maximum number of iterations [
52].
In the context of AOA, the optimization process formally begins with the formulation of a set of candidate solutions (X), as shown in the matrix in Equation (9). In each iteration, the best candidate solution obtained so far is regarded as the optimal or near-optimal solution [
52,
53].
The AOA commences the solution process with candidate solutions that incorporate the MOA (Math Optimizer Accelerated) coefficient, as specified in Equation (10).
In this calculation, MOA(Citer) is the function value at the current iteration, Citer is the current iteration number between 1 and the maximum iteration, and Min and Max are the lower and upper limit values of the accelerated function.
In the exploration phase of the AOA, the search space is limited by the division operator in the MOA (Citer) calculation with r1 > MOA and r2 ≤ 0.5 (r1 and r2 are random numbers). The algorithm updates the solution in the exploration phase according to Equation (11).
Here, xi(Citer + 1) is the i-th solution at the next iteration, xi,j(Citer + 1) is the j-th position of the i-th solution at the next iteration, best(Xj) is the j-th position of the best solution obtained so far, µ is the control parameter adjusting the search process, ϵ is a small integer number, and UBj and LBj are the upper and lower bounds of the j-th position. The MOP (Math Optimizer Probability) value is calculated by Equation (12).
Here, MOP(Citer) denotes the function value at the current iteration, with Citer denoting the current iteration and Miter the maximum number of iterations. The parameter α defines the exploitation accuracy over the iterations and is set to 5.
In the exploitation phase, the AOA utilizes the addition or subtraction operator. The exploitation process is governed by the MOP subtraction function with r1 ≤ MOA and the addition function with r3 ≤ 0.5 (r1 and r3 are random numbers), as articulated in Equation (13) [
52].
The AOA can adapt to new problems in optimization with ease and clarity; this quality is due to its underlying mathematical expression. This characteristic is instrumental in addressing novel optimization challenges by employing a scientific methodology [
52].
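The four-operator scheme scheduled by MOA and MOP can be sketched as follows. The MOA range (0.2 to 1.0), µ = 0.499, and α = 5 follow commonly cited AOA defaults and are assumptions here, as are the function name and bounds.

```python
import random

def aoa_minimize(f, dim=2, pop=30, iters=300, lo=-5.0, hi=5.0,
                 alpha=5.0, mu=0.499, eps=1e-9, seed=0):
    """Sketch of the AOA update rules: Division/Multiplication for exploration,
    Subtraction/Addition for exploitation, scheduled by MOA and MOP."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(xs, key=f)
    moa_min, moa_max = 0.2, 1.0
    for t in range(1, iters + 1):
        moa = moa_min + t * (moa_max - moa_min) / iters   # Eq. (10)
        mop = 1.0 - (t / iters) ** (1.0 / alpha)          # Eq. (12)
        scale = (hi - lo) * mu + lo
        for i in range(pop):
            cand = []
            for j in range(dim):
                r1, r2, r3 = rng.random(), rng.random(), rng.random()
                if r1 > moa:                              # exploration, Eq. (11)
                    if r2 < 0.5:
                        v = best[j] / (mop + eps) * scale   # Division (D)
                    else:
                        v = best[j] * mop * scale           # Multiplication (M)
                else:                                     # exploitation, Eq. (13)
                    if r3 < 0.5:
                        v = best[j] - mop * scale           # Subtraction (S)
                    else:
                        v = best[j] + mop * scale           # Addition (A)
                cand.append(min(max(v, lo), hi))
            if f(cand) < f(xs[i]):                        # greedy acceptance
                xs[i] = cand
        best = min(xs + [best], key=f)
    return best
```

The greedy acceptance step keeps the population improving monotonically while the operators gradually shift from coarse exploration to fine exploitation.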
2.5. Dragonfly Algorithm
Dragonflies moving in large groups and in one direction in a static swarm support effective communication and information sharing, increasing the overall performance of the DA and providing faster convergence. DA offers an effective metaheuristic approach to solving complex optimization problems using these nature-inspired features, and it was preferred in this study because it successfully integrates behaviors learned from nature into the model. DA uses an exploration and exploitation strategy: it mimics the social interactions of dragonflies as they forage, explore food sources, and avoid enemies while moving dynamically or statically [
38]. Velocity and position vectors are used to update the position of dragonflies in a search space. The main goal is to obtain the optimal solution by determining the best position in the search space. Movement positioning uses a representation that includes five basic components: separation, cohesion, alignment, attraction to food, and diversion from opponents [
54]. Mathematical models of the five main factors explaining the relationship between individuals within the swarm are as follows [
55,
56]. Separation is calculated by the following mathematical formula (Equation (14)):
Alignment is calculated by the following mathematical formula (Equation (15)):
Cohesion is calculated with the following mathematical equation (Equation (16)):
Attraction towards food sources is calculated as follows (Equation (17)):
Distraction away from an enemy is calculated in Equation (18), where X− indicates the position of the threatening enemy.
The step vector ΔX and the position vector X are used to update the positions of artificial dragonflies and to simulate their movements (Equation (19)).
The position vector is then updated as follows (Equation (20)):
The Lévy flight method is used to increase the stochastic behavior, randomness, and exploration ability of artificial dragonflies, whose positions are updated accordingly (Equation (21)).
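The five swarming factors can be combined into the step-vector update of Equations (19) and (20) as a small sketch. The helper name `dragonfly_step` and the weighting coefficients (s, a, c, fw, e, w) are illustrative assumptions, not the study's values, and the Lévy-flight branch of Equation (21) is omitted for brevity.

```python
def dragonfly_step(x, dx, neighbours, neigh_dx, food, enemy,
                   s=0.1, a=0.1, c=0.7, fw=1.0, e=1.0, w=0.9):
    """One DA update for a single dragonfly: the five factors of
    Eqs. (14)-(18) are weighted into the step vector of Eq. (19),
    then the position is advanced per Eq. (20)."""
    dim, n = len(x), len(neighbours)
    new_dx, new_x = [], []
    for j in range(dim):
        S = -sum(x[j] - nb[j] for nb in neighbours)        # separation, Eq. (14)
        A = sum(nd[j] for nd in neigh_dx) / n              # alignment, Eq. (15)
        C = sum(nb[j] for nb in neighbours) / n - x[j]     # cohesion, Eq. (16)
        F = food[j] - x[j]                                 # attraction to food, Eq. (17)
        E = enemy[j] + x[j]                                # distraction from enemy, Eq. (18)
        step = s * S + a * A + c * C + fw * F + e * E + w * dx[j]   # Eq. (19)
        new_dx.append(step)
        new_x.append(x[j] + step)                          # Eq. (20)
    return new_x, new_dx
```

In a full optimizer this step is applied to every dragonfly per iteration, with the Lévy-flight update used whenever a dragonfly has no neighbours.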
2.6. Particle Swarm Optimization
Through the systematic application of PSO, this study aims to demonstrate not only the feasibility of heuristic methods in dam structure optimization but also the practical advantages of employing PSO. The results obtained from the application of PSO will be analyzed and compared with other optimization methods, providing insights into the effectiveness and potential improvements that heuristic algorithms can bring to engineering projects, especially those related to dam structures. Lin et al. analyzed the time-dependent operating behavior of a roller-compacted concrete dam (RCCD) using PSO [
39]. In PSO, the main reason why animals move in flocks is that they can reach their basic needs, such as food and security, more effectively [
38]. In PSO, the velocity values of each particle are calculated through sigmoid function values, and the positions of the particles are updated using these values [
57].
In the PSO algorithm, the velocity update is performed by the mathematical process expressed in Equation (22) below.
The coefficients c1, c2, r1, and r2 and their meanings are explained in the nomenclature table. In Equation (22), w represents the inertia weight. To improve the performance of PSO at its various stages, the value of w should be set at an appropriate level [57,58]. The calculation of w, as outlined in Equation (23), is as follows:
Each particle continuously updates its position in PSO applications through the mechanism given in Equation (24).
The velocity vector of the particle swarm is computed in Equation (22), utilizing both the individual best positions of the particles and the best position of the swarm as a whole [58]. This equation describes the velocity update mechanism that controls the movement of the particles. In PSO, Equation (25) is employed to update the position vector based on the velocity vector of each particle in each iteration; this update is achieved by means of the sigmoid function.
The velocity update equation is presented in a modified form in Equation (26).
Here, X represents the constriction factor and is defined by Equation (27) [59]:
The PSO (Particle Swarm Optimization) method is a noteworthy optimization algorithm distinguished by its simplicity of implementation and ease of use. The algorithm emulates the foraging behavior of bird flocks and has been applied extensively to a wide range of engineering problems [38]. A distinguishing feature of PSO is its reliance on social interaction rather than the evolutionary operators observed in other algorithms, marking a significant departure from the biological underpinnings typical of many evolutionary methods.
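To make the mechanics above concrete, the following minimal sketch implements a linearly decreasing inertia weight (one common form of Equation (23)), the Clerc–Kennedy constriction factor (Equation (27)), and a single velocity/position update in the spirit of Equations (22) and (24). The coefficient values and the linear w schedule are illustrative assumptions, not the paper's exact settings.

```python
import math
import random

def inertia_weight(it: int, max_it: int, w_max: float = 0.9, w_min: float = 0.4) -> float:
    """Linearly decreasing inertia weight, a common form of Equation (23)."""
    return w_max - (w_max - w_min) * it / max_it

def constriction_factor(c1: float = 2.05, c2: float = 2.05) -> float:
    """Clerc-Kennedy constriction factor X (Equation (27)); requires c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def pso_step(x, v, pbest, gbest, it, max_it, c1=2.0, c2=2.0):
    """One velocity update (Equation (22)) and position update (Equation (24))."""
    w = inertia_weight(it, max_it)
    new_v, new_x = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        vj = w * v[j] + c1 * r1 * (pbest[j] - x[j]) + c2 * r2 * (gbest[j] - x[j])
        new_v.append(vj)
        new_x.append(x[j] + vj)  # move the particle along its new velocity
    return new_x, new_v
```

With c_1 = c_2 = 2.05, the constriction factor evaluates to roughly 0.73, the value most often used in the literature.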
2.7. Crayfish Optimization Algorithm
COA is a metaheuristic optimization algorithm with three key stages that mimic the summer resort behavior, competition behavior, and foraging behavior of crayfish. These phases aim to create a balance between exploration and exploitation. COA explores solutions in the “summer resort” phase, while the “competition” and “foraging” phases represent the exploitation phase. Temperature control directs the transitions between these stages; high temperatures drive crayfish to seek shelter or compete for caves, while favorable temperatures dictate search strategies based on food size. Temperature regulation is intended to increase the randomness level and global optimization capabilities of COA [
60].
COA initializes the population randomly (Equation (20)), like many swarm intelligence optimization algorithms. Simultaneously, a temperature between 15 and 35 °C is randomly generated (Equation (28)). If the temperature is greater than 30 °C, the swarm enters the summer resort (heat avoidance) or competition stage; otherwise, it enters the foraging stage [61].
A random number Q is generated; if its value is less than 0.5, the crayfish enters the summer resort phase, as shown in Equation (29), where X_shade is the location of the cave [61].
As shown in Equation (30), if the random number Q is greater than or equal to 0.5, it enters the competition phase [
61].
where Z represents a randomly selected crayfish, as given by Equation (31) [61].
The size of the food is defined as Q, as shown in Equation (32), below [61].
where C_3 is the food factor, representing the largest food size, with a constant value of 3; fitness_i is the fitness value of the ith crayfish; and fitness_food is the fitness value of the food location [61].
Q > (C_3 + 1)/2 indicates that the food is too large. In this case, the crayfish uses its first claw to crush the food, as expressed in Equation (33) [61].
Once the food has been shredded into small pieces, the second and third claws alternately pick it up and transfer it to the mouth. This alternation process is simulated using a combination of sine and cosine functions, as demonstrated in Equation (34) [61].
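A compact sketch of the temperature-driven phase selection (Equations (28)–(30)) and the food-size definition (Equation (32)) might look as follows. The function names, the uniform temperature sampling, and the phase labels are illustrative assumptions standing in for the source's exact formulas.

```python
import random

def random_temperature() -> float:
    """Randomly generate a temperature between 15 and 35 degrees C (Equation (28))."""
    return random.uniform(15.0, 35.0)

def coa_phase(temp: float, q: float) -> str:
    """Select the COA stage: above 30 C the swarm avoids heat (q < 0.5) or
    competes for caves (Equations (29)-(30)); otherwise it forages."""
    if temp > 30.0:
        return "summer_resort" if q < 0.5 else "competition"
    return "foraging"

def food_size(fitness_i: float, fitness_food: float, c3: float = 3.0) -> float:
    """Food size Q (Equation (32)); c3 = 3 is the largest-food factor in the source."""
    return c3 * random.random() * (fitness_i / fitness_food)
```

The temperature threshold is what balances exploration (summer resort/competition) against exploitation (foraging) over the course of the run.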
2.8. Cheetah Optimizer
In 2022, Akbari and colleagues proposed the Cheetah Optimizer, a nature-inspired algorithm that utilizes the hunting and feeding strategies of carnivorous cheetahs in nature. The algorithm draws on the cheetah's behaviors of searching for prey, waiting to attack, and attacking [62]. In addition to these strategies, the process of abandoning the prey and returning home was added to the algorithm to improve and reinforce population diversity and the solution search [62].
The cheetah’s hunting strategy is mathematically modeled in the algorithm according to four different situations. These are searching for prey, sitting and waiting, attacking (quickly attacking and capturing the prey), and giving up on the prey and returning to their own territory [
63].
2.8.1. Prey Search Strategy
A cheetah observes its environment while sitting or standing in order to locate prey. To model this strategy mathematically, the cheetah's current position is defined by the vector X_{i,j}^t. The position of the cheetah in the next iteration, obtained by a random search with an optional step size, is given by Equation (35) [62].
where X_{i,j}^t and X_{i,j}^{t+1} are the current and next-iteration positions of the ith cheetah in arrangement j, t is the current hunting time, T is the maximum hunting time, r̂_{i,j} is the parameter that randomizes the search direction, and α_{i,j}^t is the step length of the cheetah for hunting. Since cheetahs generally search for prey slowly in nature, the step length α_{i,j}^t can be set to 0.001 × t/T, with a value greater than zero [62].
2.8.2. Sit and Wait Strategy
Cheetahs lie in ambush while searching for prey. This situation is given mathematically in Equation (36) in the algorithm [
62].
where X_{i,j}^{t+1} and X_{i,j}^t are the updated and current positions of the ith cheetah in arrangement j; in this strategy, the position remains unchanged. This prevents premature convergence and ensures that each cheetah searches for a solution independently [62].
2.8.3. Attack Strategy
Cheetahs use their speed and flexibility when they decide to attack a prey animal. As soon as the potential prey recognizes the cheetah, it runs away, prompting the cheetah to adjust its position to capture it, as illustrated in Equation (37) [
62].
Here, X_{B,j}^t is the current position of the prey (the best position found so far), and β_{i,j}^t is the displacement of the cheetah as it rapidly approaches the prey. The position of the ith cheetah is calculated according to the prey's current position. The interaction factor β_{i,j}^t reflects the interaction between the leader and the cheetahs, enabling the positions of the other cheetahs to be calculated. The turning factor ř_{i,j} is randomly generated from a normal distribution [62].
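The hunting strategies above can be summarized in a schematic sketch. This is a simplification: the Gaussian step directions and the `beta_scale` parameter are assumptions standing in for the randomization, step-length, and turning/interaction factors defined in Equations (35)–(37).

```python
import random

def search_step(x, t, T):
    """Prey search (Equation (35)): a slow random walk whose step length
    0.001 * t / T follows the source's suggestion."""
    alpha = 0.001 * t / T
    return [xj + random.gauss(0.0, 1.0) * alpha for xj in x]

def sit_and_wait(x):
    """Sit-and-wait (Equation (36)): the cheetah keeps its current position."""
    return list(x)

def attack_step(prey, beta_scale=1.0):
    """Attack (Equation (37)): move around the prey (best-so-far) position
    with a normally distributed turning factor."""
    return [pj + random.gauss(0.0, 1.0) * beta_scale for pj in prey]
```

In a full implementation, each cheetah would choose among these strategies (and the return-home step) based on random draws and the elapsed hunting time t/T.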
4. Results and Discussion
The initial geometry is designed to carry hydraulic loads in a balanced manner and maintain structural stability. As theoretically detailed in the previous sections based on Creager’s recommendation [
64], the crest width was explicitly calculated as 10.92 m (14% of the 78 m total structural height).
The structural profile includes a 2 m crest allowance, which, together with a maximum hydraulic depth of 76 m, provides a safety margin against overflow. In addition to the height requirement, the crest width was designed to meet operational safety standards and allow access for maintenance activities. The stability of the dam is defined by the relationship between the optimized slope gradients (n and m) and the geometric variables (x_1, x_2, x_3, and x_4). These parameters determine the structural configuration and contribute to the resistance of the dam against shear and erosion forces. Although the current design establishes the basic structural framework, incorporating seismic resistance parameters and detailed hydraulic performance data in future analyses could further improve the accuracy of the model. Under the specified constraints and boundary conditions, this reference geometry produces a foundation cross-sectional area of 3420.16 m²; detailed attributes of this geometry are presented in Table 3.
The loads on the dam are listed in
Table 4. The table below shows the various forces acting on the dam structure, including vertical (downward) and horizontal (water pressure, etc.) forces and their moment effects. Analysis of these forces is essential to ensure the stability of the dam, its structural integrity, and its resistance to hydraulic and geotechnical stresses.
A thorough examination of the calculated forces and moments reveals that the present design of the dam is resistant to overturning and sliding. Nevertheless, the considerable effect of the uplift force necessitates the implementation of adequate drainage measures to ensure the structure's integrity. It is therefore imperative that drainage systems, foundation stability, and optimal load distribution be given due consideration during the final design phase. The variables X_1, X_2, L, n, and m were minimized using the PSO, DA, CO, COA, GA, GWO, and AOA algorithms with 1000 iterations.
A performance comparison was carried out to evaluate the effectiveness of the metaheuristic algorithms used for the optimization of the concrete dam section in the upper basin of Hasanlar Dam. The relative performance of the algorithms was analyzed using convergence curves and fitness histories, as shown in
Figure 9. These results illustrate how each algorithm performs during the optimization process and how stable the search process remains within the complex design space of dam structures. This evaluation helps identify the algorithms that handle the nonlinear constraints of the Hasanlar Dam problem effectively and can approach the optimal solution without premature convergence.
The optimized geometric configurations obtained from the multi-algorithm approach are summarized in
Table 5. The developed computational framework aims to balance structural stability with efficient material use. By adjusting the load distribution and reducing unnecessary structural volume, a more efficient dam geometry was obtained. As a result, both safety requirements and cost considerations were addressed in the final design.
As shown in
Table 5, both PSO and COA achieved a 29.36% reduction in the dam cross-sectional area. The similar results obtained from these two different algorithms support the reliability of the developed optimization model. These results suggest that the obtained configuration is close to the global optimum under the defined constraints. The strong performance of PSO and COA can be attributed to their search strategies. As illustrated in the convergence curves in
Figure 9, PSO reaches the optimal region rapidly due to its strong exploitation capability.
This mechanism ensures that particles focus on the best coordinates while adhering to hydraulic stability criteria. Similarly, the hierarchical behavior logic of COA has facilitated the algorithm’s ability to overcome local minimum traps; it has particularly enabled the high-precision refinement of the slope parameters n and m. On the other hand, the Genetic Algorithm (GA) has shown limited adaptation in this design domain with dense constraints and has encountered performance bottlenecks. Although GA operators such as crossover and mutation offer extensive exploration, they have been insufficient in meeting strict threshold values, such as overturning and slip safety. This tension between the wide exploration area and the strict safety constraints has prolonged the convergence time of GA and ultimately resulted in a less efficient cross-sectional area.
As delineated in
Table 6, a comparison is provided of optimized moments in the dam.
Table 7 presents the moment reduction amounts that were determined to be optimal.
Table 8 presents the load amounts that were determined to be optimal.
Table 9 presents the load reduction amounts that were determined to be optimal.
The cross-sectional area of the dam was calculated to be 2416.17 m² by the PSO solution. This corresponds to a reduction in concrete requirements of 1003.99 m³ per unit length of dam width, in comparison to the initial solution. For a dam with a width of 100 m, the algorithmic approach can therefore reduce the concrete requirement by a total of 100,399.2 m³. According to the 2025 State Hydraulic Works (DSI) unit prices, the unit price of compacted concrete is 3099.56 TRY/m³ [65]. Using the Central Bank of the Republic of Turkey exchange rate, the economic gain from the saved concrete volume is TRY 311,192,724, equivalent to $7,263,628 (USD/TRY = 42.8426) [66].
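These savings can be cross-checked with a short calculation; all input values below are quoted directly from the text, and the 100 m body width is the width assumed in the text.

```python
# Cross-check of the reported savings using the figures quoted in the text.
initial_area = 3420.16       # m^2, reference cross-section (Table 3)
optimized_area = 2416.17     # m^2, PSO-optimized cross-section (Table 5)
dam_width = 100.0            # m, body width assumed in the text
unit_price_try = 3099.56     # TRY per m^3 of compacted concrete (DSI 2025)
usd_try = 42.8426            # CBRT exchange rate quoted in the text

reduction_pct = 100.0 * (initial_area - optimized_area) / initial_area
saved_volume = (initial_area - optimized_area) * dam_width   # m^3 of concrete
cost_try = saved_volume * unit_price_try                     # TRY saved
cost_usd = cost_try / usd_try                                # USD saved

print(f"reduction: {reduction_pct:.2f}%")        # about 29.36%
print(f"saved volume: {saved_volume:,.1f} m^3")  # about 100,399 m^3
print(f"gain: TRY {cost_try:,.0f} / USD {cost_usd:,.0f}")
```

The result reproduces the reported TRY 311,192,724 (about USD 7.26 million) to within rounding, confirming the internal consistency of the quoted figures.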
The obtained 29.36% cross-section reduction is consistent with comparative values in the existing literature and provides a comprehensive perspective for dam optimization studies. Although pioneering studies such as Turgut et al. [
36] and Yiğit et al. [
38] have previously demonstrated the effectiveness of GA and PSO in hydraulic design, this research offers a broader validation layer by evaluating seven different current metaheuristic solvers in the same context.
As illustrated in Figure 10, there is a clear contrast between the initial dimensions of the concrete dam and the optimum dimensions obtained with the PSO method.
5. Conclusions and Suggestions
The primary motivation for this study is to demonstrate a proactive approach that envisions the construction of a new concrete dam in the upper basin of the existing Hasanlar Dam; in this regard, it aims to perform both water volume calculations and structural section optimizations.
The process is structured around a comprehensive Geographic Information Systems (GIS) analysis conducted with ArcGIS software. Along the existing riverbed, two different scenarios were examined in light of parameters such as land slope, accessibility, distance to settlements, and environmental constraints. Field investigations revealed that the second scenario, located in the upper basin, is the most suitable axis location for eliminating the negative effects that existing infrastructure elements, such as the Düzce Solid Waste Facility, may create. The main strategy in this selection is to maximize the reservoir storage capacity while keeping geographical disruption to a minimum.
In the selected axis region, reservoir storage volumes at different elevations were derived using remote sensing techniques. The data obtained show that storage equivalent to the capacity of the existing Hasanlar Dam can be provided at an elevation of 466 m, reaching a volume of approximately 55.5 million m³. This height represents a critical threshold at which the water surface area expands rapidly due to the local topography. Therefore, the 466 m level has been identified as the 'optimum design point' that strikes the delicate balance between construction costs, environmental footprint, and storage efficiency.
The basic cross-sectional design of the dam has been developed to fully meet the moment balance requirements by integrating hydrostatic pressure, self-weight, and uplift forces. In the structural optimization phase, seven different heuristic algorithms (PSO, DA, CO, COA, GA, GWO, and AOA) were compared on performance. The analysis results show that the PSO algorithm achieved the highest efficiency, with a 29.36% reduction in the cross-sectional area. Although GA converged rapidly in the early iterations, it lagged behind COA and GWO in final efficiency and failed to achieve full compliance with the targeted minimum threshold values.
This reduction in the cross-sectional area allows for a substantial direct saving in concrete volume and, accordingly, in construction costs. The optimized cross-section obtained by the PSO method reduces the concrete usage per unit meter by 1003.99 m³. For a body width of 100 m, this corresponds to a concrete saving of 100,399.2 m³. Calculations based on DSI's 2025 unit prices confirm that this volume decrease created a financial gain of approximately TRY 311 million (USD 7.26 million). This research primarily focuses on geometric and volumetric optimization based on static stability criteria such as overturning and sliding; it adopts a pseudo-static approach during the preliminary evaluation phase. However, the current scope of the study does not include the full simulation of extreme dynamic conditions such as nonlinear seismic responses, overflow scenarios caused by excessive flooding, or internal thermal stresses within the concrete mass. The inclusion of such complex variables in the model can be considered a separate area of research that will improve accuracy in later stages of design.
In addition, the use of high-resolution geospatial datasets for volumetric estimation reduces data uncertainties and improves the accuracy of the reservoir optimization analysis. Grounding the mathematical framework in raster analysis at 2 m intervals ensures that theoretical material savings are validated against topographic realities in the field. Here, the preventive renewal paradigm represents a shift from traditional repair cycles in infrastructure management to an optimization-focused strategic approach. The integration of computational intelligence and high-resolution digital mapping offers an effective approach for water resource management, particularly in countries where sustainable water use is of economic and environmental importance.