Special Issue "Machine Learning Applied to Hydraulic and Hydrological Modelling"

A special issue of Water (ISSN 2073-4441). This special issue belongs to the section "Hydraulics and Hydrodynamics".

Deadline for manuscript submissions: closed (30 September 2020).

Special Issue Editors

Dr. Vasilis Bellos
Guest Editor
School of Rural and Surveying Engineering, National Technical University of Athens, 9, Iroon Polytechniou str, 15780 Zografou, Athens, Greece
Interests: hydraulics; hydrology; flood modelling; fluvial and pluvial floods; urban flooding; flood risk; uncertainty analysis; machine learning
Dr. Juan Pablo Carbajal
Guest Editor
Eawag, Urban Water Management, Überlandstrasse 133, CH-8600 Dübendorf, Switzerland

Special Issue Information

Dear Colleagues,

The computational power available nowadays allows us to tackle simulation challenges in hydraulic and hydrological modelling, at scales that were impossible a few decades ago. However, even so, the time needed for these simulations remains prohibitive for many scientific and engineering applications, such as decision support systems, flood warning systems, the design or optimization of hydraulic structures, the calibration of model parameters, uncertainty quantification, and real-time model-based control.

To address these issues, the development of fast surrogate models to increase simulation speed seems to be a promising strategy: it does not require a huge investment in new hardware and software, and the same tools can be used to solve very different problems. The field of Machine Learning offers a huge library of methods for building surrogate models, many of which have been successfully used in hydraulic and hydrological modelling.

In this Special Issue we would like to invite research works which incorporate Machine Learning techniques in hydraulic and hydrological modelling, such as (but not restricted to):

-   Artificial Science, in which a relation between input and output is learned using only data, also known as data-driven methods.

-   Scientific Numerical Modelling, such as simplified numerical models, model calibration (system identification) or optimization, renormalized models, up (down)scaled models, coarse models, etc.

-   Emulation, where a fast emulator is developed based on training data derived from a slow simulator.
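As a deliberately minimal sketch of the emulation idea above (the simulator, sampling grid, and rating-curve formula are hypothetical stand-ins, not any specific hydraulic solver): sample a slow simulator offline, then answer queries with a fast interpolating surrogate.

```python
import bisect

def slow_simulator(x):
    # Stand-in for an expensive solver (hypothetical power-law rating curve).
    return 1.7 * x ** 1.5

# Offline: sample the slow simulator on a coarse grid (the "training data").
grid = [i / 10 for i in range(0, 51)]
table = [slow_simulator(x) for x in grid]

def emulator(x):
    """Fast surrogate: piecewise-linear interpolation of the sampled runs."""
    i = bisect.bisect_left(grid, x)
    if i == 0:
        return table[0]
    if i >= len(grid):
        return table[-1]
    x0, x1 = grid[i - 1], grid[i]
    w = (x - x0) / (x1 - x0)
    return (1 - w) * table[i - 1] + w * table[i]

# Online queries hit the cheap table lookup instead of the solver.
approx = emulator(2.34)
```

Each emulator query costs a binary search plus one interpolation, regardless of how expensive the original simulator is; accuracy is controlled by the grid density.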

Dr. Vasilis Bellos
Dr. Juan Pablo Carbajal
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Water is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (17 papers)


Research

Open Access Article
Monthly Rainfall Anomalies Forecasting for Southwestern Colombia Using Artificial Neural Networks Approaches
Water 2020, 12(9), 2628; https://doi.org/10.3390/w12092628 - 20 Sep 2020
Cited by 1 | Viewed by 758
Abstract
Improving the accuracy of rainfall forecasting is relevant for adequate water resources planning and management. This research project evaluated the performance of the combination of three Artificial Neural Networks (ANN) approaches in the forecasting of the monthly rainfall anomalies for Southwestern Colombia. For this purpose, we applied the Non-linear Principal Component Analysis (NLPCA) approach to get the main modes, a Neural Network Autoregressive Moving Average with eXogenous variables (NNARMAX) as a model, and an Inverse NLPCA approach for reconstructing the monthly rainfall anomalies forecasting in the Andean Region (AR) and the Pacific Region (PR) of Southwestern Colombia, respectively. For the model, we used monthly rainfall lagged values of the eight large-scale climate indices linked to the El Niño Southern Oscillation (ENSO) phenomenon as exogenous variables. They were cross-correlated with the main modes of the rainfall variability of AR and PR obtained using NLPCA. Subsequently, both NNARMAX models were trained from 1983 to 2014 and tested for two years (2015–2016). Finally, the reconstructed outputs from the NNARMAX models were used as inputs for the Inverse NLPCA approach. The performance of the ANN approaches was measured using three different performance metrics: Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and Pearson’s correlation (r). The results showed suitable forecasting performance for AR and PR, and the combination of these ANN approaches demonstrated the possibility of rainfall forecasting in these sub-regions five months in advance and provided useful information for the decision-makers in Southwestern Colombia. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)
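The three performance metrics named in the abstract above (RMSE, MAE, and Pearson's r) have standard definitions; a minimal pure-Python sketch with illustrative values (not the paper's data):

```python
import math

def rmse(obs, sim):
    """Root Mean Square Error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):
    """Mean Absolute Error."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

def pearson_r(obs, sim):
    """Pearson's correlation coefficient."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim))
    return cov / (so * ss)

obs = [1.0, 2.0, 3.0, 4.0]   # hypothetical rainfall anomalies
sim = [1.1, 1.9, 3.2, 3.8]   # hypothetical forecasts
```

RMSE penalizes large errors more heavily than MAE, while r measures only linear co-variation, which is why papers typically report all three together.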

Open Access Article
Pipeline Scour Rates Prediction-Based Model Utilizing a Multilayer Perceptron-Colliding Body Algorithm
Water 2020, 12(3), 902; https://doi.org/10.3390/w12030902 - 23 Mar 2020
Cited by 4 | Viewed by 1075
Abstract
In this research, the advanced multilayer perceptron (MLP) models are utilized to predict the free rate of expansion that usually occurs around the pipeline (PL) because of waves. The MLP model was structured by integrating it with three optimization algorithms: particle swarm optimization (PSO), whale algorithm (WA), and colliding bodies’ optimization (CBO). The sediment size, wave characteristics, and PL geometry were used as the inputs for the applied models. Moreover, the scour rate, vertical scour rate along the pipeline, and scour rate at both right and left sides of the pipeline were predicted as the model outputs. Results of the three suggested models, MLP-CBO, MLP-WA, and MLP-PSO, for both testing and training sessions were assessed based on different statistical indices. The results indicated that the MLP-CBO model performed better in comparison to the MLP-PSO, MLP-WA, regression, and empirical models. The MLP-CBO can be used as a powerful soft-computing model for predictions. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Open Access Article
Efficient Double-Tee Junction Mixing Assessment by Machine Learning
Water 2020, 12(1), 238; https://doi.org/10.3390/w12010238 - 15 Jan 2020
Cited by 3 | Viewed by 798
Abstract
A new approach in modeling of mixing phenomena in double-Tee pipe junctions based on machine learning is presented in this paper. Machine learning represents a paradigm shift that can be efficiently used to calculate needed mixing parameters. Usually, these parameters are obtained either by experiment or by computational fluid dynamics (CFD) numerical modeling. A machine learning approach is used together with a CFD model. The CFD model was calibrated with experimental data from a previous study and it served as a generator of input data for the machine learning metamodels—Artificial Neural Network (ANN) and Support Vector Regression (SVR). Metamodel input variables are defined as inlet pipe flow ratio, outlet pipe flow ratio, and the distance between the pipe junctions, with the output parameter being the branch pipe outlet to main inlet pipe mixing ratio. A comparison of ANN and SVR models showed that ANN outperforms SVR in accuracy for a given problem. Consequently, ANN proved to be a viable way to model mixing phenomena in double-Tee junctions also because its mixing prediction time is extremely efficient (compared to CFD time). Because of its high computational efficiency, the machine learning metamodel can be directly incorporated into pipe network numerical models in future studies. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Open Access Article
Convolutional Neural Network Coupled with a Transfer-Learning Approach for Time-Series Flood Predictions
Water 2020, 12(1), 96; https://doi.org/10.3390/w12010096 - 26 Dec 2019
Cited by 6 | Viewed by 1890
Abstract
East Asian regions in the North Pacific have recently experienced severe riverine flood disasters. State-of-the-art neural networks are currently utilized as quick-response flood models. Neural networks typically require ample training time because of the use of numerous datasets. To reduce the computational costs, we introduced a transfer-learning approach to a neural-network-based flood model. Under the transfer-learning concept, once the model is pretrained in a source domain with large datasets, it can be reused in other target domains. After retraining parts of the model with the target-domain datasets, the training time can be reduced owing to this reuse. A convolutional neural network (CNN) was employed because CNNs with transfer learning have numerous successful applications in two-dimensional image classification. However, our flood model predicts time-series variables (e.g., water level), so the CNN with transfer learning requires a tool for converting time-series datasets into image datasets in preprocessing. First, the CNN time-series classification was verified in the source domain with less than 10% error for the variation in water level. Second, the CNN with transfer learning in the target domain reduced the training time to 1/5 and the mean error difference by 15% compared with the CNN without transfer learning. Our method can provide another novel flood model in addition to physics-based models. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)
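The transfer-learning workflow described in the abstract above (pretrain on a large source domain, then retrain only part of the model on the target domain) can be illustrated with a deliberately tiny stand-in: this sketch uses a 1-D linear model rather than the paper's CNN, and all data are hypothetical.

```python
def fit(xs, ys, w, b, lr=0.01, epochs=2000, freeze_w=False):
    """1-D linear model y = w*x + b trained by gradient descent on MSE.
    With freeze_w=True the pretrained slope is kept and only the bias
    (the 'retrained part' of the model) is updated on the new domain."""
    n = len(xs)
    for _ in range(epochs):
        dw = db = 0.0
        for x, y in zip(xs, ys):
            err = (w * x + b) - y
            dw += 2 * err * x / n
            db += 2 * err / n
        if not freeze_w:
            w -= lr * dw
        b -= lr * db
    return w, b

# Source domain (large dataset): y = 2x + 1, full training from scratch.
src_x = [i / 10 for i in range(50)]
src_y = [2 * x + 1 for x in src_x]
w, b = fit(src_x, src_y, w=0.0, b=0.0)

# Target domain (few samples): same dynamics, shifted offset y = 2x + 3.
# Reuse the pretrained slope and briefly retrain only the bias.
tgt_x = [0.0, 1.0, 2.0]
tgt_y = [2 * x + 3 for x in tgt_x]
w2, b2 = fit(tgt_x, tgt_y, w=w, b=b, lr=0.1, epochs=200, freeze_w=True)
```

Because only one parameter is retrained, the target-domain training is much shorter than the source-domain training, which is the computational saving the paper exploits at CNN scale.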

Open Access Article
Development of Water Level Prediction Models Using Machine Learning in Wetlands: A Case Study of Upo Wetland in South Korea
Water 2020, 12(1), 93; https://doi.org/10.3390/w12010093 - 26 Dec 2019
Cited by 9 | Viewed by 1491
Abstract
Wetlands play a vital role in hydrologic and ecologic communities. Since there are few studies conducted for wetland water level prediction due to the unavailability of data, this study developed a water level prediction model using various machine learning models such as artificial neural network (ANN), decision tree (DT), random forest (RF), and support vector machine (SVM). The Upo wetland, which is the largest inland wetland in South Korea, was selected as the study area. The daily water level gauge data from 2009 to 2015 were used as dependent variables, while the meteorological data and upstream water level gauge data were used as independent variables. Predictive performance evaluation using RF as the final model revealed values of 0.96 for the correlation coefficient (CC), 0.92 for the Nash–Sutcliffe efficiency (NSE), 0.09 for the root mean square error (RMSE), and 0.19 for the persistence index (PI). The results indicate that the water level of the Upo wetland was well predicted, showing superior results compared to those of the ANN used in a previous study. The results are intended to provide basic data for the development of wetland management methods using water levels of previously ungauged areas. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Open Access Article
A Data-Driven Probabilistic Rainfall-Inundation Model for Flash-Flood Warnings
Water 2019, 11(12), 2534; https://doi.org/10.3390/w11122534 - 30 Nov 2019
Cited by 3 | Viewed by 1210
Abstract
Owing to their short duration and high intensity, flash floods are among the most devastating natural disasters in metropolises. The existing warning tools—flood potential maps and two-dimensional numerical models—are disadvantaged by time-consuming computation and complex model calibration. This study develops a data-driven, probabilistic rainfall-inundation model for flash-flood warnings. Applying a modified support vector machine (SVM) to limited flood information, the model provides probabilistic outputs, which are superior to the Boolean functions of the traditional rainfall-flood threshold method. The probabilistic SVM-based model is based on a data preprocessing framework that identifies the expected durations of hazardous rainfalls via rainfall pattern analysis, ensuring satisfactory training data, and optimal rainfall thresholds for validating the input/output data. The proposed model was implemented in 12 flash-flooded districts of the Xindian River. It was found that (1) hydrological rainfall pattern analysis improves the hazardous event identification (used for configuring the input layer of the SVM); (2) brief hazardous events are more critical than longer-lasting events; and (3) the SVM model exports the probability of flash flooding 1 to 3 h in advance. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)
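The contrast the abstract above draws between Boolean threshold warnings and probabilistic outputs can be sketched as follows. The logistic curve is an illustrative stand-in for the paper's modified SVM, and the threshold and scale values are hypothetical:

```python
import math

def boolean_warning(rain_mm, threshold=80.0):
    """Traditional rainfall-flood threshold: a hard yes/no flag."""
    return rain_mm >= threshold

def probabilistic_warning(rain_mm, threshold=80.0, scale=10.0):
    """Smooth flood probability around the same threshold (logistic curve),
    so near-threshold rainfall yields an intermediate warning level."""
    return 1.0 / (1.0 + math.exp(-(rain_mm - threshold) / scale))
```

A Boolean flag jumps from 0 to 1 at 80 mm, whereas the probabilistic output rises gradually (0.5 exactly at the threshold), giving decision-makers a graded warning rather than an all-or-nothing one.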

Open Access Article
Random Forest Ability in Regionalizing Hourly Hydrological Model Parameters
Water 2019, 11(8), 1540; https://doi.org/10.3390/w11081540 - 25 Jul 2019
Cited by 10 | Viewed by 2021
Abstract
This study investigated the potential of random forest (RF) algorithms for regionalizing the parameters of an hourly hydrological model. The relationships between model parameters and climate/landscape catchment descriptors were multidimensional and exhibited nonlinear features. In this case, machine-learning tools offered the option of efficiently handling such relationships using a large sample of data. The performance of the regionalized model using RF was assessed in comparison with local calibration and two benchmark regionalization approaches. Two catchment sets were considered: (1) A target pseudo-ungauged catchment set was composed of 120 urban ungauged catchments and (2) 2105 gauged American and French catchments were used for constructing the RF. By using pseudo-ungauged urban catchments, we aimed at assessing the potential of the RF to detect the specificities of the urban catchments. Results showed that RF-regionalized models allowed for slightly better streamflow simulations on ungauged sites compared with benchmark regionalization approaches. Yet, constructed RFs were weakly sensitive to the urbanization features of the catchments, which prevents their use in straightforward scenarios of the hydrological impacts of urbanization. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Open Access Article
Seepage Comprehensive Evaluation of Concrete Dam Based on Grey Cluster Analysis
Water 2019, 11(7), 1499; https://doi.org/10.3390/w11071499 - 19 Jul 2019
Cited by 3 | Viewed by 1416
Abstract
Most concrete dams have seepage problems to some degree, so it is a common strategy to maintain ongoing monitoring and take timely repair measures. In order to grasp the real operation state of dam seepage, it is vital to analyze the measured data of each monitoring indicator and establish an appropriate prediction equation. However, dam seepage states under the load and environmental influences are very complicated, involving various monitoring indicators and multiple monitoring points of each indicator. For the purpose of maintaining the temporal continuity and spatial correlation of monitoring objects, this paper used a multi-indicator grey clustering analysis model to explore the grey correlation among various indicators, and realized a comprehensive evaluation of a dam seepage state by computation of the clustering coefficient. The case study shows that the proposed method can be successfully applied to the health monitoring of concrete dam seepage. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Open Access Article
Application of Long Short-Term Memory (LSTM) Neural Network for Flood Forecasting
Water 2019, 11(7), 1387; https://doi.org/10.3390/w11071387 - 05 Jul 2019
Cited by 76 | Viewed by 7728
Abstract
Flood forecasting is an essential requirement in integrated water resource management. This paper suggests a Long Short-Term Memory (LSTM) neural network model for flood forecasting, where the daily discharge and rainfall were used as input data. Moreover, characteristics of the data sets which may influence the model performance were also of interest. As a result, the Da River basin in Vietnam was chosen and two different combinations of input data sets from before 1985 (when the Hoa Binh dam was built) were used for one-day, two-day, and three-day flowrate forecasting ahead at Hoa Binh Station. The predictive ability of the model is quite impressive: The Nash–Sutcliffe efficiency (NSE) reached 99%, 95%, and 87% corresponding to three forecasting cases, respectively. The findings of this study suggest a viable option for flood forecasting on the Da River in Vietnam, where the river basin stretches between many countries and downstream flows (Vietnam) may fluctuate suddenly due to flood discharge from upstream hydroelectric reservoirs. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)
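The Nash–Sutcliffe efficiency (NSE) reported in the abstract above is a standard goodness-of-fit measure for discharge forecasts; a minimal sketch with hypothetical discharge values:

```python
def nse(obs, sim):
    """Nash–Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean)^2).
    1.0 is a perfect fit; 0.0 means no better than predicting the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [120.0, 340.0, 510.0, 280.0, 150.0]  # hypothetical observed flows
sim = [130.0, 320.0, 495.0, 290.0, 160.0]  # hypothetical forecasts
```

An NSE of 99% for one-day-ahead forecasts, as the paper reports, therefore means the squared forecast errors are only about 1% of the variance of the observed flows.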

Open Access Article
Concrete Dam Displacement Prediction Based on an ISODATA-GMM Clustering and Random Coefficient Model
Water 2019, 11(4), 714; https://doi.org/10.3390/w11040714 - 06 Apr 2019
Cited by 8 | Viewed by 1227
Abstract
Displacement data modelling is of great importance for the safety control of concrete dams. Commonly used artificial intelligence methods model the displacement data at each monitoring point individually, i.e., the data correlations between the monitoring points are overlooked, which leads to over-fitting and limits the generalization of the model. A novel model combining iterative self-organizing data analysis (ISODATA) and Gaussian mixture model (GMM) clustering with the random coefficient method is proposed in this article, which takes the temporal-spatial correlation among the monitoring points into account. By building models for all the points simultaneously, the random coefficient model improves the generalization ability of the model through reducing the number of free model variables. Since the random coefficient model assumes that the data follow normal distributions, an ISODATA-GMM clustering algorithm is used to classify the measuring points into several groups according to their temporal and spatial characteristics, so that each group follows one distribution. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Open Access Article
A Performance Comparison of Machine Learning Algorithms for Arced Labyrinth Spillways
Water 2019, 11(3), 544; https://doi.org/10.3390/w11030544 - 16 Mar 2019
Cited by 9 | Viewed by 1725
Abstract
Labyrinth weirs provide an economic option for flow control structures in a variety of applications, including as spillways at dams. The cycles of labyrinth weirs are typically placed in a linear configuration. However, numerous projects place labyrinth cycles along an arc to take advantage of reservoir conditions and dam alignment, and to reduce construction costs such as narrowing the spillway chute. Practitioners must optimize more than 10 geometric variables when developing a head–discharge relationship. This is typically done using the following tools: empirical relationships, numerical modeling, and physical modeling. This study applied a new tool, machine learning, to the analysis of the geometrically complex arced labyrinth weirs. In this work, both neural networks (NN) and random forests (RF) were employed to estimate the discharge coefficient for this specific type of weir with the results of physical modeling experiments used for training. Machine learning results are critiqued in terms of accuracy, robustness, interpolation, applicability, and new insights into the hydraulic performance of arced labyrinth weirs. Results demonstrate that NN and RF algorithms can be used as a unique expression for curve fitting, although neural networks outperformed random forest when interpolating among the tested geometries. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Open Access Article
Subdaily Rainfall Estimation through Daily Rainfall Downscaling Using Random Forests in Spain
Water 2019, 11(1), 125; https://doi.org/10.3390/w11010125 - 11 Jan 2019
Cited by 11 | Viewed by 2133
Abstract
Subdaily rainfall data, though essential for applications in many fields, is not as readily available as daily rainfall data. In this work, regression approaches that use atmospheric data and daily rainfall statistics as predictors are evaluated to downscale daily-to-subdaily rainfall statistics on more than 700 hourly rain gauges in Spain. We propose a new approach based on machine learning techniques that improves the downscaling skill of previous methodologies. Results are grouped by climate types (following the Köppen–Geiger classification) to investigate possible missing explanatory variables in the analysis. The methodology is then used to improve the ability of Poisson cluster models to simulate hourly rainfall series that mimic the statistical behavior of the observed ones. This approach can be applied for the study of extreme events and for daily-to-subdaily precipitation disaggregation in any location of Spain where daily rainfall data are available. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Open Access Article
Using Adjacent Buoy Information to Predict Wave Heights of Typhoons Offshore of Northeastern Taiwan
Water 2018, 10(12), 1800; https://doi.org/10.3390/w10121800 - 07 Dec 2018
Cited by 4 | Viewed by 1016
Abstract
In the northeastern sea area of Taiwan, typhoon-induced long waves often cause rogue waves that endanger human lives. Therefore, having the ability to predict wave height during the typhoon period is critical. The Central Weather Bureau maintains the Longdong and Guishandao buoys in the northeastern sea area of Taiwan to conduct long-term monitoring and collect oceanographic data. However, records have often become lost and the buoys have suffered other malfunctions, causing a lack of complete information concerning wind-generated waves. The goal of the present study was to determine the feasibility of using information collected from the adjacent buoy to predict waves. In addition, the effects of various factors such as the path of a typhoon on the prediction accuracy of data from both buoys are discussed herein. This study established a prediction model, and two scenarios were used to assess the performance: Scenario 1 included information from the adjacent buoy and Scenario 2 did not. An artificial neural network was used to establish the wave height prediction model. The research results demonstrated that (1) Scenario 1 achieved superior performance with respect to absolute errors, relative errors, and efficiency coefficient (CE) compared with Scenario 2; (2) the CE of Longdong (0.802) was higher than that of Guishandao (0.565); and (3) various types of typhoon paths were observed by examining each typhoon. The present study successfully determined the feasibility of using information from the adjacent buoy to predict waves. In addition, the effects of various factors such as the path of a typhoon on the prediction accuracy of both buoys were also discussed. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Open Access Article
A Machine Learning Approach to Evaluating the Damage Level of Tooth-Shape Spur Dikes
Water 2018, 10(11), 1680; https://doi.org/10.3390/w10111680 - 17 Nov 2018
Cited by 3 | Viewed by 1473
Abstract
Little research has been done on the application of machine learning approaches to evaluating the damage level of river training structures on the Yangtze River. In this paper, two machine learning approaches to evaluating the damage level of spur dikes with tooth-shaped structures are proposed: a supervised support vector machine (SVM) model and an unsupervised model combining a Kohonen neural network with an SVM model (KNN-SVM). It was found that the supervised SVM model predicted the damage level of the validation samples with high accuracy, and the unsupervised data-mining KNN-SVM model agreed well with the empirical evaluation result. It is shown that both machine learning approaches could become effective tools to evaluate the damage level of spur dikes and other river training structures. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Open Access Article
Assessment of Machine Learning Techniques for Monthly Flow Prediction
Water 2018, 10(11), 1676; https://doi.org/10.3390/w10111676 - 17 Nov 2018
Cited by 11 | Viewed by 2251
Abstract
Monthly flow predictions provide an essential basis for efficient decision-making regarding water resource allocation. In this paper, the performance of different popular data-driven models for monthly flow prediction is assessed to detect the appropriate model. The considered methods include feedforward neural networks (FFNNs), time delay neural networks (TDNNs), radial basis neural networks (RBFNNs), recurrent neural network (RNN), a grasshopper optimization algorithm (GOA)-based support vector machine (SVM) and K-nearest neighbors (KNN) model. For this purpose, the performance of each model is evaluated in terms of several residual metrics using a monthly flow time series for two real case studies with different flow regimes. The results show that the KNN outperforms the different neural network configurations for the first case study, whereas RBFNN model has better performance for the second case study in terms of the correlation coefficient. According to the accuracy of the results, in the first case study with more input features, the KNN model is recommended for short-term predictions and for the second case with a smaller number of input features, but more training observations, the RBFNN model is suitable. Full article
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Open Access Article
Integration of a Parsimonious Hydrological Model with Recurrent Neural Networks for Improved Streamflow Forecasting
Water 2018, 10(11), 1655; https://doi.org/10.3390/w10111655 - 14 Nov 2018
Cited by 19 | Viewed by 1809
Abstract
This study applied the GR4J model in the Xiangjiang and Qujiang River basins for rainfall-runoff simulation. Four recurrent neural networks (RNNs)—the Elman recurrent neural network (ERNN), echo state network (ESN), nonlinear autoregressive exogenous inputs neural network (NARX), and long short-term memory (LSTM) network—were applied to predict discharges. The performances of the models were compared and assessed, and the best two RNNs were selected and integrated with the lumped hydrological model GR4J to forecast discharges; meanwhile, the uncertainties of the simulated discharges were estimated using the generalized likelihood uncertainty estimation method. The results show that the LSTM and NARX captured the time-series dynamics better than the other RNNs. The hybrid models improved the prediction of high, median, and low flows, particularly by reducing the underestimation bias for high flows in the Xiangjiang River basin. The hybrid models reduced the uncertainty intervals by more than 50% for median and low flows and increased the coverage ratios of the observations. Integrating a hydrological model with a recurrent neural network that captures long-term dependencies is recommended for discharge forecasting. Full article
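The hybrid scheme pairs a conceptual rainfall–runoff model with a data-driven component that learns the structured part of its errors. A toy illustration of that idea — with a linear reservoir as a stand-in for GR4J and a one-lag autoregressive correction in place of the RNN, all on synthetic data:

```python
import numpy as np

def linear_reservoir(rain, k=0.3):
    """Toy conceptual model: storage fills with rain, drains at rate k."""
    storage, q = 0.0, []
    for r in rain:
        storage += r
        out = k * storage
        storage -= out
        q.append(out)
    return np.array(q)

rng = np.random.default_rng(2)
rain = rng.gamma(2.0, 2.0, 400)
q_model = linear_reservoir(rain)

# "Observed" flow: the model output plus a persistent (autocorrelated) error
# that the conceptual model cannot capture -- a stand-in for structural error.
err = np.zeros(400)
for t in range(1, 400):
    err[t] = 0.8 * err[t - 1] + rng.normal(0, 0.3)
q_obs = q_model + err

# Data-driven correction: fit AR(1) on training residuals, then correct
# one-step-ahead forecasts over the test period.
split = 300
res_train = q_obs[:split] - q_model[:split]
phi = (res_train[1:] @ res_train[:-1]) / (res_train[:-1] @ res_train[:-1])

res_prev = q_obs[split - 1] - q_model[split - 1]
q_hybrid = []
for t in range(split, 400):
    q_hybrid.append(q_model[t] + phi * res_prev)
    res_prev = q_obs[t] - q_model[t]       # update with the observed residual
q_hybrid = np.array(q_hybrid)

rmse_base = np.sqrt(np.mean((q_obs[split:] - q_model[split:]) ** 2))
rmse_hyb = np.sqrt(np.mean((q_obs[split:] - q_hybrid) ** 2))
```

Because the model error is autocorrelated, even this one-lag correction cuts the forecast RMSE; an RNN plays the same role for richer, longer-memory error structure.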
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)

Open Access Article
Least Squares Support Vector Mechanics to Predict the Stability Number of Rubble-Mound Breakwaters
Water 2018, 10(10), 1452; https://doi.org/10.3390/w10101452 - 15 Oct 2018
Cited by 6 | Viewed by 1177
Abstract
In coastal engineering, empirical formulas grounded in experimental work on the stability of breakwaters have been developed. In recent years, soft computing tools such as artificial neural networks and fuzzy models have started to be employed to reduce the time and cost spent on such experimental work. To predict the stability number of rubble-mound breakwaters, the least squares version of support vector machines (LSSVM) is used here as an alternative to other soft computing techniques. The LSSVM models are built on seven input parameters selected by Mallows' Cp approach: breakwater permeability, damage level, wave number, slope angle, water depth, significant wave height in front of the structure, and peak wave period. The LSSVM models achieved higher accuracy (correlation coefficient (CC) of 0.997) than the artificial neural network (ANN), fuzzy logic (FL), and genetic programming (GP) models implemented in the related literature. This study is therefore expected to provide readers with a practical and more accurate way to estimate the stability number of rubble-mound breakwaters. Full article
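A distinctive feature of LSSVM regression is that training reduces to a single linear system rather than a quadratic program. A minimal numpy sketch of that system, fitted here to a synthetic 1-D function — in the paper the inputs would be the seven selected breakwater parameters and the target the stability number:

```python
import numpy as np

def rbf_kernel(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    """LSSVM regression: the dual collapses to one linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma      # gamma: regularization strength
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                 # bias b, dual weights alpha

def lssvm_predict(Xq, X, b, alpha, sigma=0.5):
    return rbf_kernel(Xq, X, sigma) @ alpha + b

# Illustrative 1-D regression problem (synthetic, not the breakwater data).
rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, (80, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 80)

b, alpha = lssvm_fit(X, y)
y_hat = lssvm_predict(X, X, b, alpha)
cc = np.corrcoef(y, y_hat)[0, 1]           # correlation coefficient, as in the paper
```

Unlike the standard SVM, every training point contributes a nonzero alpha here; the trade-off for the simple linear solve is the loss of sparsity.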
(This article belongs to the Special Issue Machine Learning Applied to Hydraulic and Hydrological Modelling)
