Reservoir Sediment Management Using Artificial Neural Networks: A Case Study of the Lower Section of the Alpine Saalach River

Abstract: Reservoir sedimentation is a critical issue worldwide, resulting in reduced storage volumes and, thus, reservoir efficiency. Moreover, sedimentation can also increase the flood risk at related facilities. In some cases, drawdown flushing of the reservoir is an appropriate management tool. However, there are various options as to how and when to perform such flushing, which should be optimized in order to maximize its efficiency and effectiveness. This paper proposes an innovative concept, based on an artificial neural network (ANN), to predict the volume of sediment flushed from the reservoir given distinct input parameters. The results obtained from a real-world study area indicate that there is a close correlation between the inputs—including peak discharge and duration of flushing—and the output (i.e., the volume of sediment). The developed ANN can readily be applied at the real-world study site, as a decision-support system for hydropower operators.


Introduction
Sediment management is an important part of integrative river basin management [1,2]. To achieve or maintain a good ecological status of a river (e.g., according to the EC Water Framework Directive) and to obtain a balanced, natural river system, where erosion and sedimentation occur in alternating patterns, the sediment must be considered [3][4][5]. In particular, at barrages and dams (e.g., at a hydropower plant, HPP) or at intake/diversion structures, the natural river course and continuity are interrupted. Sediments are trapped at the structure, causing sedimentation and a rise of the riverbed level while, at the same time, a sediment deficit occurs downstream of the structure, leading to erosion and riverbed deepening. To overcome this imbalance, sediment management strategies are required. Researchers have proposed various feasible concepts, which can be classified into four groups: (i) reduction of sediment supply from upstream sections; (ii) routing of sediment through the dam; (iii) removal of sediment deposits in the reservoir; and (iv) adaptive strategies [5].
Research and field observations have shown that drawdown flushing by (partial) lowering of the reservoir water level is an effective and, at the same time, cost-efficient option for run-of-river HPPs to remove sediment deposits from the reservoir. This can help to achieve a more balanced sediment regime [6][7][8][9][10]. However, multiple factors influencing the effectiveness of the flushing operation have to be considered, in order to arrive at a solution which balances energy production and ecology. A suitable tool for this is, for instance, numerical hydromorphological modeling, which provides detailed information on the processes during simulated flushing events [5,11]. Such models, though, have different limitations for their practical use: simple, one-dimensional (1D) models, which are easily applicable, are limited in the complexity they can represent, as the flow quantities are only available as cross-sectionally averaged values and the geometry of the dam and reservoir is only approximately considered [7,8,12]. While more complex 2D or 3D models can represent reality better, the computational effort increases. For very large domains and long simulation times, high-performance computing (HPC) systems are required, a resource that is not available everywhere [13][14][15]. Furthermore, specific pre- and post-processing of the models is unavoidable. Moreover, a fundamental requirement of any numerical hydromorphological model is careful and accurate model calibration and validation against measured data. This is especially important for morphological models, as these include several empirical factors (e.g., a threshold of motion [16] or a specific transport formula [17,18]), which must be reliably estimated. In addition, physical modeling in laboratories and research facilities can provide information on reservoir flushing, in order to optimize sediment management [19][20][21].
Although these physical models provide a high level of process reliability, they have to deal with scaling issues, lower flexibility, and high specific costs/manpower.
There is, therefore, a need for alternative and innovative tools for operators, helping them in the decision-making process. Recent research has shown that data-driven modeling in river and sediment management offers a powerful alternative [22][23][24][25]. These models, such as artificial neural networks (ANNs), learn the patterns and connections between data. Studies at a dam in Pakistan have shown that an ANN can predict the daily load of suspended sediment better than conventional approaches using a sediment rating curve (SRC) [26,27]. Moreover, it has been shown that a simple ANN can determine the maximum scour depth in a channel better than empirical formulas [28]. Furthermore, promising results have been obtained using an ANN developed for the Three Gorges Dam, in terms of estimating the sedimentation rate [29]. Therefore, it is worth studying whether an ANN can directly predict the total volume of sediment flushed during a specific operation, based on simple discrete parameters. The advantage of such an ANN, after successful training, would be to serve as a tool with very low computational effort, which can be wrapped in a simple and clear graphical user interface and is executable on different operating systems.
This paper thus proposes a methodology for an innovative approach based on an artificial neural network (ANN) for the prediction of the total volume of sediment removed by reservoir flushing. The paper is organized as follows: (I) Materials and Methods, highlighting the ANN methodology and the conditions of the real-world study area; (II) the obtained Results; (III) their Discussion; and (IV) our Concluding remarks.

Artificial Neural Networks
An artificial neural network is a general term encompassing many different architectures. Based on the literature mentioned in the Introduction, a feed-forward neural network (FNN) might offer a suitable architecture to attain the stated objective of predicting the total volume of sediment flushed. An FNN is the simplest ANN structure, as the connections between the nodes of the network do not form a cycle and information passes straight (i.e., forward) through the network. The nodes (or neurons) of the ANN are the processing units of this information and can be formulated mathematically as follows:

Y = Θ( Σ_i (w_i · X_i) + b ), (1)

where X_i is the input information of the neuron, w_i is the corresponding synaptic weight, b is a bias, Θ is the response characteristic (or transfer function) of the neuron, and Y is the output. Figure 1a shows such an artificial neuron schematically. In theory, a neuron can have multiple inputs and a single output. In addition to the external inputs X_i, the bias b is added in order to improve the convergence of the network. An ANN consists of three main components: the input layer, one or more hidden layers, and the output layer, each consisting of a certain number of neurons. Figure 1b shows an example of an ANN consisting of three layers: the input layer, a hidden layer with four neurons, and an output layer with one neuron. Depending on the problem, more complex architectures might be necessary. A comprehensive overview of existing networks can be found in [30,31]. It is generally not clear how many layers are necessary for an ANN to capture a problem well. However, several studies have concluded that one hidden layer is optimal for most river-engineering problems, which is therefore applied in the following [32][33][34].
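As an illustration, the neuron of Equation (1) and the forward pass of a one-hidden-layer FNN can be sketched in a few lines. This is a minimal Python/NumPy sketch (the study itself used MATLAB); all weights here are random placeholders, not trained values.

```python
import numpy as np

def neuron(x, w, b, theta):
    """Single artificial neuron: Y = theta(sum_i(w_i * X_i) + b), cf. Equation (1)."""
    return theta(np.dot(w, x) + b)

def fnn_forward(x, W_h, b_h, W_o, b_o, theta_hidden, theta_out):
    """Forward pass of a one-hidden-layer feed-forward network (cf. Figure 1b)."""
    hidden = theta_hidden(W_h @ x + b_h)   # hidden-layer activations
    return theta_out(W_o @ hidden + b_o)   # output layer (here: one neuron)

# Illustrative network: 2 inputs -> 4 hidden neurons (tanh) -> 1 linear output.
rng = np.random.default_rng(1)
x = np.array([0.3, 0.7])
W_h, b_h = rng.normal(size=(4, 2)), rng.normal(size=4)
W_o, b_o = rng.normal(size=(1, 4)), rng.normal(size=1)
y = fnn_forward(x, W_h, b_h, W_o, b_o, np.tanh, lambda n: n)
```

Note how the linear (unbounded) output activation leaves the predicted value unrestricted, which matches the function-fitting setup described below.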
The neurons of the hidden and output layers are mapped using a transfer function (TF). This function transfers an input from an unlimited range to an output within a certain range. Different transfer functions have been described in the literature [27,30–32,35]. The most suitable TF for the hidden layer was estimated by an iterative trial-and-error approach. However, based on the evidence from previous research [14], it was decided to apply the unbounded linear TF for the output layer, in order not to limit the output to a certain range, which is recommended for function-fitting or non-linear regression problems. Table 1 shows the investigated combinations, using the abbreviations according to MATLAB conventions. Here, "purelin" stands for the linear transfer function, which transfers an input (n) to an output (a) by a = n. Furthermore, "logsig" is the log-sigmoid transfer function, which squashes the input into [0, 1] by a = 1/(1 + exp(−n)). "Tansig" represents the hyperbolic tangent sigmoid transfer function, whose output lies in the range [−1, 1], using a = 2/(1 + exp(−2·n)) − 1. The abbreviation for the radial basis transfer function is "radbas", with a = exp(−n²). The transfer function "poslin" returns the output a = n if n is greater than or equal to zero and a = 0 if n is less than zero. "Satlin" is the saturating linear transfer function, using a = 0 if n ≤ 0, a = n if 0 < n ≤ 1, and a = 1 if n > 1, such that the output is in the range [0, 1]. Similar to satlin is "satlins", a symmetric saturating linear transfer function, using a = −1 if n ≤ −1, a = n if −1 < n ≤ 1, and a = 1 if n > 1. An extension of radbas is "radbasn", the normalized radial basis transfer function, using a = exp(−n²)/Σ(exp(−n²)). Finally, "tribas" represents a triangular basis transfer function, using a = 1 − abs(n) if −1 ≤ n ≤ 1, and a = 0 otherwise. In addition, the most suitable number of neurons in the hidden layer of the ANN cannot be pre-defined.
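The transfer functions listed above are straightforward to reproduce. The following sketch implements them with NumPy (for illustration only; the study used the corresponding MATLAB built-ins):

```python
import numpy as np

# Transfer functions of Table 1 (MATLAB naming), vectorized with NumPy.
def purelin(n):  return n                                    # linear
def logsig(n):   return 1.0 / (1.0 + np.exp(-n))             # log-sigmoid, output in (0, 1)
def tansig(n):   return 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0 # tanh sigmoid, output in (-1, 1)
def radbas(n):   return np.exp(-n**2)                        # radial basis
def poslin(n):   return np.maximum(n, 0.0)                   # positive linear (ReLU)
def satlin(n):   return np.clip(n, 0.0, 1.0)                 # saturating linear, [0, 1]
def satlins(n):  return np.clip(n, -1.0, 1.0)                # symmetric saturating linear, [-1, 1]
def tribas(n):   return np.maximum(1.0 - np.abs(n), 0.0)     # triangular basis

def radbasn(n):
    """Normalized radial basis: exp(-n^2) divided by its sum over the layer."""
    a = np.exp(-np.asarray(n, dtype=float) ** 2)
    return a / a.sum()
```

Note that tansig is mathematically identical to the hyperbolic tangent; the exponential form above is simply the numerically convenient variant used in MATLAB's documentation.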
However, we can estimate a feasible range for the number of hidden neurons, N_H, using the following equation [14,36]:

N_O + 1 ≤ N_H ≤ 2·N_I + 1, (2)

where N_I and N_O represent the number of input and output nodes, respectively. The number of neurons is very important, as too small a number can lead to underfitting of the network, while too large a number can cause overfitting and unnecessarily high computational effort. Initially, the weights of the network are randomly defined. In the training procedure, the weights and biases of the network are adjusted such that the output of the network matches a given target. For this procedure, the Levenberg-Marquardt learning algorithm was used, which has shown good performance in other river-engineering applications [28,32,35]. Note that, in this study, MATLAB (Version 2020a) was used to design and train the network.

Study Site
The study site for this research was the lower section of the alpine Saalach River at the German-Austrian border. Figure 2a provides an overview of the study site, highlighting important points along the river. The run-of-river HPP Rott (coordinates: lat. 47.835047, long. 12.994155) dams the river to produce energy and, as originally intended, to stabilize the riverbed. Figure 2b,c show schematic sketches of the HPP. The lower Saalach River transports a huge amount of coarse sediment (on average, around 40,000 m³/year is supplied into the river downstream of the Kibling dam), which would accumulate in the reservoir of the HPP Rott if no sediment management were performed. However, the operator has to guarantee that the riverbed in the reservoir does not exceed a defined riverbed level. Therefore, drawdown flushing of the reservoir is performed irregularly, induced by opening the weirs (three in total), thus lowering the water level. During normal operation of the HPP, the water level in the reservoir is equal to the storage level of Z_S = 415.80 m (Figure 2c). It deviates from this level only during higher flood events or when flushing [9].
The applicability and accuracy of the ANN developed herein depends heavily on the data available to train the network. The data have to reflect a wide range of possible situations, as ANNs are good for interpolation, but less suited for extrapolation to data outside of the training range. At this HPP, only sparse documentation on flushing, consisting of six real flushing events, was available for this study. This database was too small for our purpose.
Therefore, an existing two-dimensional hydromorphological numerical model of this section, based on the TELEMAC-SISYPHE modeling system, was used as a surrogate to generate a larger data set for training the ANN. In the numerical model, the two-dimensional flow module TELEMAC2D was coupled with the sediment module SISYPHE. The model was initially designed to simulate the long-term morphological development of this river section under different hydropower operational schemes, and it covers the reach from the gauge at Siezenheim to the HPP Rott (Figure 2a). The details of this numerical model have been reported in other publications [9,37]; Table 2 therefore summarizes just the main model parameters. As mentioned above, this hydromorphological model was used to synthetically generate a broader, but still realistic, database. For the numerical simulations, the discharge time-series from 1 January 2006 to 31 December 2013, measured at the gauge Siezenheim, was applied as the inflow boundary condition. In order to obtain a sufficiently large and heterogeneous database for training the network, different flushing scenarios (numbered with the index i) were generated. For each scenario, a different threshold discharge Q_f,i was defined, meaning that flushing is performed when this value is exceeded. In total, nine scenarios with the corresponding threshold discharges were developed: Q_f,i = [100, 125, 150, 175, 200, 225, 250, 275, 300] m³/s. Applying these values to the eight-year discharge time-series, the corresponding water surface level at the HPP was defined for each scenario, denoted as Z_f,i [m] in the following. Figure 3 shows one exemplary flushing operation from the eight years. Here, the differences between the generated scenarios are shown for a specific flow situation, which is around an HQ_2 discharge (i.e., a flood with a return period of two years).
It can be seen that the lowering of the water level from the storage level Z_S = 415.80 m started at a different time in each scenario, depending on the specific threshold discharge Q_f,i. The lowering continued until free-flowing conditions were reached. When the discharge decreased and fell below the threshold, the water level was raised again until Z_S was reached. The gradient for lowering and raising the water level was defined, for this HPP, as 0.5 m/h; this value depends mainly on the stability of the embankments of the reservoir. Furthermore, we expected that the volume of sediment in the domain (or the riverbed elevation) prior to a specific flushing operation makes a difference to the flushing efficiency. Thus, three different initial bathymetries for the 2D numerical model were also defined: the first representing the highest observed riverbed (B_max), the second the design riverbed for flood protection (B_flood), and the third the riverbed measured in 2005/6 (B_2005). Figure 4 shows the differences by means of a longitudinal plot of the mean riverbed of the 2D model along the domain. It can be seen that B_max had the highest elevation, especially in the reservoir section downstream of km 4.6. B_flood was, in comparison, clearly lower, as it was designed to reduce inundation in the case of flood events [37]. In the free-flowing section, both bathymetries were the same, as inundation was not critical there. The third bathymetry, B_2005, was similar to B_max, but was lower in the reservoir section and had some alterations in the free-flowing section. With these defined initial and boundary conditions, a total of 27 (9 flushing scenarios × 3 initial bathymetries) numerical 2D simulations for the time period of eight years were carried out (Table 3).
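The threshold-based drawdown and refilling rule described above can be sketched schematically. The following Python snippet is a simplified illustration only: it ignores the actual reservoir hydraulics, assumes an hourly time step, and the free-flow level `z_free` and the discharge series are hypothetical values, not data from the study.

```python
import numpy as np

def water_level_series(q, z_storage, z_free, q_threshold, dt_h=1.0, rate=0.5):
    """Schematic drawdown trajectory: lower the level at `rate` m/h while the
    discharge exceeds q_threshold, and raise it again at the same rate once
    the discharge falls below the threshold."""
    z = np.empty(len(q))
    level = z_storage
    for i, qi in enumerate(q):
        target = z_free if qi > q_threshold else z_storage
        step = np.clip(target - level, -rate * dt_h, rate * dt_h)  # limit gradient
        level += step
        z[i] = level
    return z

# Hypothetical hourly discharge [m3/s] around a threshold of 150 m3/s.
q = np.array([80, 120, 180, 220, 180, 120, 80], dtype=float)
z = water_level_series(q, z_storage=415.80, z_free=410.0, q_threshold=150.0)
```

The 0.5 m/h limit is the embankment-stability constraint mentioned in the text; everything else in the snippet is illustrative.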

Data Preparation
From these 27 numerical simulations, each covering a period of eight years, around 600 different single flushing events were extracted. Figure 5 shows one exemplary flushing event, including the time-series of the discharge and the water elevation at the HPP. It should be noted that a single flushing event was defined from the time when the inflow discharge exceeded the threshold discharge until the discharge fell below this value again, which happened frequently over the simulated time. However, such a flushing event consists of continuous temporal data (e.g., Q(t) or Z_f(t)), which means that the data needed to be further processed and prepared in a suitable way, as the ANN applied requires discrete input and target data. According to the literature and our expectations, the amount of sediment flushed depends on the prevailing flow, the operation of the weir, and the initial riverbed before flushing [9]. However, it was not obvious how to derive these input parameters from the continuous time-series data of the numerical simulation. Therefore, we defined different parameters to describe a single flushing event, which served later as inputs to the ANN. The first parameter was the peak discharge during the flushing event, Q_max. This parameter alone might not be sufficient to distinguish different flow situations (e.g., a narrow but steep flow curve and a broad wave shape with the same peak discharge). Furthermore, for a case like that shown in Figure 5, where two peaks are present, the peak discharge by itself might not be enough. Therefore, a second parameter, the total volume of water passing the weir during flushing, V_water, was defined. Furthermore, the operation of the weir must be considered; here, we defined the duration of flushing, t_f, and the minimum water level reached during flushing, Z_f,min. The fifth parameter was the initial volume of sediment in the domain before flushing, V_b,initial.
However, as V_b,initial is hard to estimate in reality, a normalized variable was introduced instead. The new parameter was defined as LoA = (V_b,initial − V_b,min)/(V_b,max − V_b,min), denoting the level of aggradation (LoA) before a flushing event. At this study site, a maximum acceptable riverbed in the reservoir has been defined by the authorities. The volume of this riverbed was calculated as V_b,max, corresponding to the volume of the initial bathymetry B_max. The lowest riverbed of all simulations was used to estimate the minimum volume, V_b,min. This enables the practical use of such a network, as the LoA ratio can also be estimated in reality.
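Reading LoA as the min-max ratio of the initial sediment volume between the lowest simulated bed (V_b,min) and the maximum acceptable bed (V_b,max), the parameter can be computed directly; the volumes below are hypothetical placeholders, not values from the study.

```python
def level_of_aggradation(v_initial, v_min, v_max):
    """LoA = (V_b,initial - V_b,min) / (V_b,max - V_b,min):
    0 at the lowest simulated riverbed, 1 at the maximum acceptable riverbed."""
    return (v_initial - v_min) / (v_max - v_min)

# Hypothetical sediment volumes [m3].
loa = level_of_aggradation(v_initial=750_000.0, v_min=500_000.0, v_max=1_000_000.0)
```

Because LoA only requires the current bed volume relative to two fixed reference beds, it can be estimated from a bathymetric survey, which is what makes the network usable in practice.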
Finally, the target (and, thus, output) value of the ANN was the total volume of sediment, V_f, leaving the domain during flushing. This information was obtained by temporal integration of the sediment flux leaving the domain at the outlet, i.e., at the weirs of the HPP.
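The temporal integration of the outlet sediment flux can be illustrated with a simple trapezoidal rule (a sketch; the flux values below are hypothetical):

```python
import numpy as np

def flushed_volume(t_s, q_sediment):
    """Total flushed sediment volume V_f [m3]: trapezoidal integration of the
    sediment flux [m3/s] at the outlet over the duration of the event."""
    t = np.asarray(t_s, dtype=float)
    qs = np.asarray(q_sediment, dtype=float)
    return float(np.sum(0.5 * (qs[1:] + qs[:-1]) * np.diff(t)))

# Hypothetical 3 h event, hourly sediment-flux samples [m3/s].
t = np.array([0.0, 3600.0, 7200.0, 10800.0])
qs = np.array([0.0, 0.4, 0.6, 0.0])
v_f = flushed_volume(t, qs)
```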

Training Methodology
In the section above, we described possible input parameters for the ANN, which were proposed according to our expertise. However, in order to optimize the network architecture and to see their influence on the output, combinations of the described parameters were tested through a training procedure, as highlighted in Table 4. Furthermore, for training the network, the input and target values were each normalized into the range [0, 1] using their min-max values, according to the following equation:

X_norm = (X − X_min)/(X_max − X_min), (3)

where X_min and X_max are the minimum and maximum of the respective quantity. The corresponding min and max values are provided in Table 5. The variability and heterogeneity of the input and target values are shown in Figure 6 using boxplots, in which the median and the quartiles are highlighted. As mentioned above, a high variability is necessary to obtain a general and applicable ANN. During the training step, the ANN learns the relationship between the input and target data. Before training, the prepared data were first separated randomly into three subsets, training (70%), testing (15%), and validation (15%), to avoid overfitting and to obtain a generally applicable ANN. Moreover, the number of neurons was tested iteratively, from 2 to 11, following Equation (2). It is important to mention that the initial weights can also have a major impact on the achievable accuracy of a network. In order to account for this effect, training was conducted several times with new initial weights: in this study, 500 runs were performed for each combination of the number of neurons in the hidden layer (10 options), IP combination (5), and TF combination (9), i.e., 10 × 5 × 9 × 500 = 225,000 trained networks in total.
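The min-max normalization and the random 70/15/15 split can be sketched as follows (Python for illustration; the study performed these steps in MATLAB, and the seed and sample values here are arbitrary):

```python
import numpy as np

def minmax_normalize(x, x_min, x_max):
    """Map values into [0, 1] using their min-max range, as in the text."""
    return (x - x_min) / (x_max - x_min)

def split_dataset(n_samples, seed=0, f_train=0.70, f_test=0.15):
    """Random split into training (70%), testing (15%), and validation (15%) indices."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(round(f_train * n_samples))
    n_test = int(round(f_test * n_samples))
    return idx[:n_train], idx[n_train:n_train + n_test], idx[n_train + n_test:]

# Example: normalize peak discharges with an assumed range of 100-300 m3/s,
# then split the ~600 extracted events.
q_max = np.array([120.0, 210.0, 300.0])
q_norm = minmax_normalize(q_max, x_min=100.0, x_max=300.0)
train, test, val = split_dataset(600)
```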
Furthermore, the reliability and performance of the best developed network was investigated further through a sensitivity analysis. In this analysis, a selected input parameter was varied within a certain bandwidth by multiplication with a factor f_s, following the equation:

X_s = f_s · X, (4)

where X is the initial input and X_s is the modified input parameter. This indicates whether the ANN can also deal with data outside of the training range.

Performance Criteria
The correlation coefficient (R, Equation (5)), as a relative error measure, and the Root Mean Square Error (RMSE, Equation (6)) and the Mean Absolute Error (MAE, Equation (7)), as absolute error measures, were selected to investigate and evaluate the performance of the different trained networks:

R = Σ_i (x_i − x̄)(y_i − ȳ) / √( Σ_i (x_i − x̄)² · Σ_i (y_i − ȳ)² ), (5)

RMSE = √( (1/n) · Σ_i (x_i − y_i)² ), (6)

MAE = (1/n) · Σ_i |x_i − y_i|, (7)

where n is the number of data pairs, x_i and y_i are the ith predicted and measured values, respectively, and x̄ and ȳ are their means.
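These three criteria are standard and compact to implement; the following sketch (Python/NumPy, with made-up sample values) mirrors Equations (5)-(7):

```python
import numpy as np

def correlation(x, y):
    """Correlation coefficient R between predicted x and measured y (Equation (5))."""
    xd, yd = x - x.mean(), y - y.mean()
    return float(np.sum(xd * yd) / np.sqrt(np.sum(xd**2) * np.sum(yd**2)))

def rmse(x, y):
    """Root Mean Square Error (Equation (6))."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def mae(x, y):
    """Mean Absolute Error (Equation (7))."""
    return float(np.mean(np.abs(x - y)))

# Illustrative predicted/measured pairs (arbitrary values).
pred = np.array([1.0, 2.0, 3.0, 4.0])
meas = np.array([1.1, 1.9, 3.2, 3.8])
```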

Training
The previously explained training procedure was carried out by testing the number of neurons and the different transfer functions. In total, 225,000 different networks were obtained, which were then analyzed, as detailed in the following. Note that the error criteria to evaluate the performance of the ANNs were calculated on the test subset. Figure 7 shows a categorized overview of the performance of the different ANNs using boxplots: (a) the transfer function combination; (b) the number of neurons; and (c) the input parameter combination. It is evident that TF Combinations 9 (purelin-purelin) and 4 (poslin-purelin) clearly had the worst performance, indicating that a TF based on a linear function in the hidden layer is not optimal. Moreover, other TFs with linear functions (5, 6, and 8) in the hidden layer were only slightly better. In contrast, the non-linear TFs 1-3 and 7 showed similar accuracy and the lowest RMSEs; the differences in their performance were negligible (Figure 7a). Furthermore, it is evident that, with an increasing number of neurons, the accuracy of all networks increased independently of the TF, but this improvement stopped at nine neurons (Figure 7b). This means that a higher number of neurons would lead to results of the same accuracy, but would require additional computational power with no further advantage. Figure 7c shows that the accuracy of the networks increased with additional inputs and, at the same time, the bandwidth of the error became smaller. This means that the network architecture (TF, number of neurons) becomes less important and, in more cases, an accurate ANN could be obtained. It is clear that IPs 1 and 2 were the worst, which means that the other parameters are significant and must be considered. ANNs with IP options 3 and 5 obtained similar accuracy. However, it is evident that input combination 4 was clearly the most accurate.
This confirms that the LoA parameter is important for the ANN to understand the data. Moreover, adding the V water parameter clearly improved the performance.
The network which showed the best accuracy, that is, the network with the smallest difference (RMSE) between the measured data (target) and the predicted ANN output, was used for further tests. Of all 225,000 networks, the best ANN had TF combination 2 (tansig-purelin), nine neurons in the hidden layer, and IP combination 4 (using all five parameters). Figure 8 shows the regression of the target data (x-axis) and the corresponding output of the ANN (y-axis) for all data (overall), and for the training, validation, and testing subsets, respectively. This illustrates very well that the ANN could learn the relationship between the inputs and the output, as the relation is almost linear. Moreover, the normalized RMSE of the ANN was very low and close to zero. In order to give the absolute error of the network a physical meaning, the values were de-normalized and the MAE was calculated. The best network had an MAE of 960 m³ for the test subset. This means that, on average, the ANN over- or under-predicts the flushed volume, V_f, by this value, which is very low considering possible flushing volumes of up to 100,000 m³.

Sensitivity
To validate whether the ANN works well as a general predictor of reservoir flushing, a sensitivity analysis on the input parameter LoA (the level of aggradation) was performed.
This parameter represents the initial volume of sediment in the domain. It was expected that the higher this value, the higher the amount of sediment flushed, and vice versa. Furthermore, this parameter is clearly independent of the others and could be analyzed alone, as it does not affect the prevailing flow or the weir operation. For the sensitivity analysis, we multiplied this parameter by a factor f_s = [0.7, 0.8, 0.9, 1.1, 1.2, 1.3], such that LoA_s = f_s · LoA (Equation (4)), which is basically a decrease (f_s < 1) or an increase (f_s > 1) of the original LoA. These modified values, LoA_s, were used in the sensitivity test as input for the previously determined ANN; the other parameters were kept unchanged. Figure 9 shows the results of this analysis by means of scatter plots of the original results (f_s = 1) against the output of the test. It is clear that increases of LoA by 10, 20, and 30%, respectively (filled scatter points), led to an overall increase of the predicted output, meaning a higher amount of sediment flushed, V_f. Correspondingly, the ANN predicted a lower output with the LoA decreased to 90, 80, and 70% (empty scatter points), respectively. This closely followed our expectations and confirmed that the designed ANN has a basic understanding of the governing process: from a reservoir with a very low LoA, i.e., a very low riverbed, less sediment is mobilized during flushing than from a reservoir with a high riverbed closer to the water surface.
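The mechanics of this one-at-a-time sensitivity test are simple to sketch: scale one input column by each factor f_s while keeping the others fixed, then re-evaluate the network on each modified matrix. The input values below are hypothetical placeholders.

```python
import numpy as np

def sensitivity_inputs(X, column, factors):
    """One-at-a-time sensitivity per Equation (4): scale a single input column
    by each factor f_s, leaving all other inputs unchanged. Returns one
    (f_s, modified inputs) pair per factor, ready to feed to the network."""
    runs = []
    for f_s in factors:
        Xs = X.copy()
        Xs[:, column] *= f_s
        runs.append((f_s, Xs))
    return runs

# Hypothetical normalized inputs; columns: [Q_max, V_water, t_f, Z_f_min, LoA].
X = np.array([[0.4, 0.3, 0.5, 0.2, 0.6],
              [0.8, 0.7, 0.6, 0.1, 0.4]])
runs = sensitivity_inputs(X, column=4, factors=[0.7, 0.8, 0.9, 1.1, 1.2, 1.3])
```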

Discussion
The proposed tool, using data-driven methods for the estimation of reservoir flushing, showed promising results. The developed ANN identified a clear correlation between the distinct input data and the desired mobilized sediment volume (i.e., the volume of sediment flushed). Although this approach is very simple, requiring only five discrete inputs, the mean absolute error was so small that it can be neglected when the total uncertainty is considered. This is especially true in the case of the Saalach river, where we have, on average, high annual sediment loads of around 40,000 m³/year and, during one single flushing event, volumes as high as 20,000 or even 100,000 m³ are frequently mobilized.
One important point to discuss here is the origin of the training data: In order to develop an applicable and generally valid ANN, it is necessary to use a sufficiently large and heterogeneous database for training; that is, larger than the available data at this study site. Limited data was also one problem of a study using an ANN at the Three Gorges Dam; thus, the predictive ability of the model was not that good (e.g., R = 0.477) [29].
To overcome this problem in our study, a sufficiently large amount of training data was obtained by using a numerical model as a surrogate for real measured data. The idea of using surrogate models in the absence of real data has also been applied in other studies, such as [33,34]. The numerical simulations in this study each covered a period of eight years and included several different flow conditions (e.g., flood events ranging from a return period of 1 year up to 100 years), such that the ANN could learn a variety of possible situations. However, it must be highlighted that the network developed here is, therefore, only applicable to this particular river section. If the river system of the Saalach fundamentally changes in the future (e.g., due to the extension of the existing HPP by an additional gate, or water extraction in the section between the discharge gauge and the HPP), the predictive ability of the ANN will decrease. In contrast, other changes in the system that only affect the discharge regime will have no effect on its validity. Such changes could arise, for example, from climate change, which could cause more frequent extreme discharge events and longer periods of low water [39]. Here, the applicability of the ANN is not affected, as the frequency of flushing events and other temporal factors are not inputs of the designed ANN, and each event is independent of the previous ones.
The overall concept underpinning this research is transferable to other study sites and extendable with additional parameters and inputs. If the described input data were gathered from different HPPs, a sufficiently large database consisting of real measurements could be obtained. If the data came from different HPPs, it might be necessary to include additional parameters representing the geometry (e.g., length and width) and granulometry (e.g., mean diameter d m ) of the different reservoirs. Such a parameter was not necessary for this test, as the data were only obtained from the one site.
Moreover, in further research, the reservoir might be divided into several parts (e.g., lower and upper sections), in order to spatially locate where the volume is eroded. However, it might then be necessary to couple two ANNs together, each of which is responsible for its own section.
Furthermore, it should be pointed out that it took around 87 h per simulation (covering 8 years in reality) and, thus, 2349 h in total to generate the database for training. This is a considerable computational burden, even though the CoolMUC-2 HPC system of the Leibniz Supercomputing Centre was used and the 2D numerical modeling software TELEMAC-SISYPHE can run efficiently in parallel mode using domain decomposition techniques [40]. This clearly highlights the huge computational effort which is necessary to investigate sediment flushing by numerical methods. Of course, this effort was also necessary to obtain the database used in the present study; however, the trained ANN needed only about one second on a standard computer to evaluate the flushed volume for the 600 different scenarios.
The performed sensitivity analysis of our study case confirmed that the ANN determined a general relationship between the input and target data. By changing the initial volume of sediment in the domain before the flushing (LoA), the predicted output behaved according to our expectations. In further work, the sensitivity of other parameters could also be investigated. However, it should then be remembered that these parameters can influence each other. An increasing peak discharge affects the volume of the water, as well as the free-flowing water level during flushing. Moreover, with a change of peak discharge, the duration of the flushing will also change. Therefore, sensitivity analyses were only performed on the LoA herein, as this value is independent of the others.
It must be mentioned that an ANN is just one data-driven approach for the detection of non-linear correlations. Many other different methods are available. In particular, support vector machine (SVM) models may provide a suitable alternative. This approach has been successfully applied to reconstruct streamflows from different data types [41] or for the prediction of suspended sediment loads (SSL) [42]. Some studies have compared the performance of both approaches (i.e., ANN and SVM) and obtained similar accuracy with both models, for example, to model soil pollution [43] or to predict the monthly river flow [44]. Moreover, stochastic modeling (SM) by the multivariate regression method could also be an option worth mentioning [45,46]. In a further study, one could evaluate how SVM and SM models perform for the purpose of reservoir flushing, compared to the ANN concept presented here.
Finally, the developed ANN can be wrapped into a simple executable GUI, which can run on virtually any computer system, and can easily be integrated into the operation and management procedures of any reservoir. Figure 10 shows a sketch of a possible workflow. Given the near-future hydrograph (e.g., from a hydrological model), the input Q_max, the peak discharge of the expected flood wave, can be defined. The further required inputs can then be defined or calculated, such as the volume of the hydrograph, V_water, the duration of the flushing, t_f, and the minimum water level reached, Z_f,min, such that the ANN can predict the volume of sediment flushed from the reservoir for this specific operation. If the volume is considered effective in lowering the riverbed (decision = yes), the flushing can be initiated. If not (decision = no), another operation mode (e.g., an increased t_f, which means a longer flushing operation) can be tested; or, if no operation leads to the desired volumetric output, the flushing can be postponed until a larger discharge is forecast.
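The decision loop of this workflow can be sketched as follows. Everything here is illustrative: `ann_predict` stands in for the trained network (any callable mapping the inputs to a flushed volume V_f), the toy lambda is a stand-in with no physical meaning, and all numeric values are hypothetical.

```python
def decide_flushing(ann_predict, inputs, v_required, t_f_options):
    """Sketch of the workflow in Figure 10: predict the flushed volume for the
    planned operation; if it is insufficient, test longer flushing durations;
    postpone flushing if no tested option reaches the required volume."""
    for t_f in t_f_options:
        trial = dict(inputs, t_f=t_f)          # vary only the flushing duration
        v_f = ann_predict(trial)               # ANN prediction of V_f [m3]
        if v_f >= v_required:
            return "flush", t_f, v_f
    return "postpone", None, None

# Toy stand-in for the trained ANN: V_f grows with duration and peak discharge.
toy_ann = lambda p: 40.0 * p["t_f"] * p["Q_max"]

inputs = {"Q_max": 250.0, "V_water": 2.9e6, "Z_f_min": 410.0, "LoA": 0.8, "t_f": 12.0}
decision = decide_flushing(toy_ann, inputs, v_required=150_000.0,
                           t_f_options=[12.0, 18.0, 24.0])
```

In practice the operator would feed the forecast hydrograph into this loop before each potential flushing window, exactly as outlined in the text.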

Conclusions
The main conclusions of this research can be summarized as follows:
• In general, artificial neural networks (ANNs) are powerful additional tools for river management.
• A feed-forward neural network can accurately predict the volume of sediment flushed during a single event, based on discrete input parameters.
• The developed ANN can be embedded into existing management structures, providing a decision-support system for reservoir operators.
• Extension of the approach to other study areas and, thus, the inclusion of more data are highly recommended.

Informed Consent Statement: Not applicable.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author.