# Development of a Revised Multi-Layer Perceptron Model for Dam Inflow Prediction


## Abstract


## 1. Introduction

## 2. Methodologies

#### 2.1. Data Preprocessing: Seasonal Division

#### 2.2. Data Preprocessing: Normalization

#### 2.3. Model Composition

#### 2.4. Random Search Algorithm

#### 2.5. Model Training Process

## 3. Application and Results

#### 3.1. Target Area

The basin circumference was 383.6 km, the average width of the basin was 16.5 km, and the average watershed slope was 46.0% [37]. The flow discharge into the Soyang Dam was generated from the Inbukcheon and Soyang rivers. The Soyang Dam was built at the exit of the basin with a storage capacity of 2.9 billion tons. Daily average water level data from 2004 to 2021 were acquired from two water gauges (Wontong and Wondae) installed on the Inbukcheon and Soyang rivers, respectively. The daily average dam inflow data for the same period were collected to construct the water level-inflow discharge time series.
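As an illustration, the gauge and inflow records can be aligned into a single daily time series as follows. This is a minimal sketch: the field names and the join logic are assumptions for illustration, not the WAMIS file layout.

```python
# Sketch: joining the two gauge-level records with the dam inflow record
# on dates present in all three series (assumed {date: value} inputs).
import datetime

def build_dataset(wondae, wontong, inflow):
    """Return (date, wondae_level, wontong_level, dam_inflow) rows."""
    dates = sorted(set(wondae) & set(wontong) & set(inflow))
    return [(d, wondae[d], wontong[d], inflow[d]) for d in dates]

# Tiny synthetic example (two days of records)
d1, d2 = datetime.date(2004, 1, 1), datetime.date(2004, 1, 2)
rows = build_dataset({d1: 2.7, d2: 2.8},
                     {d1: 0.7, d2: 0.8},
                     {d1: 20.0, d2: 25.0})
print(len(rows))  # 2
```

Days missing any one of the three records are dropped, so the resulting series is consistent across inputs and target.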

#### 3.2. Preparation of Input Data

#### 3.3. Model Parameter Estimations for MLP and RMLP

#### 3.4. Data Preprocessing Results

#### 3.5. Model Comparison

The largest observed peak inflow, on 5 August 2020, was 3373.1 m^{3}/s. The MLP model predicted 2862.9 m^{3}/s, an underestimate of 510.2 m^{3}/s (15.1%), and the RMLP model predicted 2752.0 m^{3}/s, an underestimate of 620.7 m^{3}/s (18.4%). MLP was therefore more accurate when considering only the largest peak. However, when considering all nine peaks, RMLP showed the smaller error: MLP showed a mean absolute deviation of 237.6 m^{3}/s (26.3%), whereas RMLP showed 187.4 m^{3}/s (12.1%). The MLP also tended to overestimate the smaller peaks. In conclusion, the RMLP model predicted the dam inflow more accurately in most cases. Table 9 shows the forecast results for major peak events by year.

## 4. Discussion

The predictions agreed well with the observations at medium to high flows. There were some inconsistencies at low discharge values of 10 m^{3}/s or less; however, the corresponding MSE was also small because the discharge values and their differences were small. Although an underestimation error remained, the off-season MSE of Case 8 was 80, which was 16 smaller than that of Case 6. When applying seasonal separation, the improvement in prediction accuracy at medium to high flows is the important gain; at low flow rates there was no significant difference in accuracy, despite some errors. Therefore, to improve the accuracy of high-flow prediction, a further study separating the flow into two or more stages, such as high, medium-high, and low, should be conducted.

Seven evaluation indices, namely the mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), sum of absolute deviations (SAD), mean absolute percentage error (MAPE), coefficient of determination (R^{2}), and coefficient of efficiency (E), were analyzed. Each equation is given in Appendix A. In this study, MSE was used as the error metric for training; MAE showed a similar tendency to MSE. However, MAE was less suitable as an evaluation index for model learning because its error values were smaller than those of MSE and, since it applies no extra weight to high flows, it readily produced results biased toward the abundant low-flow data. Therefore, for discharge prediction, MSE is more appropriate than MAE. The results for all cases are shown in Table 11.
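The weighting argument can be seen directly with hypothetical residuals: a single large peak miss dominates the MSE but barely moves the MAE, which is why MSE steers training toward fitting the floods.

```python
# MSE vs. MAE on hypothetical residuals: one 500 m^3/s peak miss among
# ten 5 m^3/s low-flow misses.
def mse(errs):
    return sum(e * e for e in errs) / len(errs)

def mae(errs):
    return sum(abs(e) for e in errs) / len(errs)

errors = [500.0] + [5.0] * 10
print(mae(errors))  # 50.0     -> dominated by the many small errors
print(mse(errors))  # 22750.0  -> dominated by the single peak error
```

Minimizing MAE here would reward shaving the small errors; minimizing MSE forces the model to attack the peak error first.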

In the cases without normalization (Cases 1, 2, 5, and 6), R^{2} was approximately 0.7, and the high discharge was underestimated. In the cases where only normalization preprocessing was performed, such as Cases 3 and 7, R^{2} improved significantly. In MLP (Case 3), the medium-low discharge was overestimated, but in RMLP (Case 7) this was largely corrected, and R^{2} increased from 0.811 to 0.867. Finally, when all preprocessing steps were applied, both MLP (Case 4) and RMLP (Case 8) showed an accuracy close to 90%. The R^{2} values for the measured and predicted discharges of each model are shown in Figure 12.

## 5. Conclusions

For model evaluation, error and R^{2} analyses were performed. The input data for model learning were the water level and dam inflow data provided by WAMIS. The study area was the Soyang Dam Basin, and the time series comprised 6575 daily data points from 2004 to 2021. Training was repeated 20 times at 10,000 epochs each to determine the final model weights.
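The repeated-training scheme can be sketched as a best-of-N restart loop. This is a toy stand-in: `train_once` here abstracts one full 10,000-epoch MLP/RMLP run with randomly initialized weights, and its internals are illustrative assumptions only.

```python
# Best-of-N training: run N independent trainings from random initial
# weights and keep the model with the lowest validation MSE.
import random

def train_once(seed):
    """Stand-in for one 10,000-epoch run; returns (val_mse, weights)."""
    rng = random.Random(seed)
    weights = {"w0": rng.uniform(-0.8, 0.8)}  # W0 range as in the tables
    val_mse = (weights["w0"] - 0.3) ** 2      # toy validation error
    return val_mse, weights

def best_of(n_trials=20):
    runs = [train_once(seed) for seed in range(n_trials)]
    return min(runs, key=lambda r: r[0])      # keep lowest-MSE weights

best_mse, best_w = best_of(20)
```

Keeping only the best of 20 restarts reduces the sensitivity of the final model to the random initial-weight draw.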

The best result was obtained by the RMLP model with both preprocessing steps (Case 8), with MSE = 4368 (m^{3}/s)^{2} and R^{2} = 0.894.

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## Appendix A

For the quantitative evaluation, seven indices, namely the mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), sum of absolute deviations (SAD), mean absolute percentage error (MAPE), coefficient of determination (R^{2}), and coefficient of efficiency (E), were applied. The MSE equation is given by Equation (A1), and the R^{2} equation is shown in Equation (A6).
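The standard forms of these two indices, with $Q_i$ the observed and $\hat{Q}_i$ the predicted discharge over $n$ samples (notation assumed; the paper's exact symbols may differ), are:

```latex
% Standard definitions (assumed notation): Q_i observed, \hat{Q}_i predicted
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(Q_i - \hat{Q}_i\right)^{2}
\qquad \text{(A1)}

R^{2} = \left[
  \frac{\sum_{i=1}^{n}\left(Q_i - \bar{Q}\right)\left(\hat{Q}_i - \bar{\hat{Q}}\right)}
       {\sqrt{\sum_{i=1}^{n}\left(Q_i - \bar{Q}\right)^{2}}\,
        \sqrt{\sum_{i=1}^{n}\left(\hat{Q}_i - \bar{\hat{Q}}\right)^{2}}}
\right]^{2}
\qquad \text{(A6)}
```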

## References

- McCulloch, W.S.; Pitts, W. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bull. Math. Biophys. **1943**, 5, 115–133.
- Rosenblatt, F. The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychol. Rev. **1958**, 65, 386–408.
- Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Representations by Back-Propagating Errors. Nature **1986**, 323, 533–536.
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE **1998**, 86, 2278–2324.
- Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. **1997**, 9, 1735–1780.
- Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. arXiv **2014**, arXiv:1412.3555.
- Hinton, G.E.; Srivastava, N.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R.R. Improving Neural Networks by Preventing Co-Adaptation of Feature Detectors. arXiv **2012**, arXiv:1207.0580.
- Hahnloser, R.H.; Seung, H.S.; Slotine, J.J. Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks. Neural Comput. **2003**, 15, 621–638.
- Nair, V.; Hinton, G.E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; pp. 807–814.
- Maas, A.L.; Hannun, A.Y.; Ng, A.Y. Rectifier Nonlinearities Improve Neural Network Acoustic Models. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 21–24 June 2013.
- Riedmiller, M.; Braun, H. Rprop: A Fast Adaptive Learning Algorithm. In Proceedings of the ISCIS VII, Antalya, Turkey, 2 November 1992.
- Duchi, J.; Hazan, E.; Singer, Y. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. J. Mach. Learn. Res. **2011**, 12, 2121–2159.
- Hinton, G.; Srivastava, N.; Swersky, K. Neural Networks for Machine Learning, Lecture 6a: Overview of Mini-Batch Gradient Descent; Lecture Notes, **2012**.
- Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv **2014**, arXiv:1412.6980.
- Sit, M.; Demiray, B.Z.; Xiang, Z.; Ewing, G.J.; Sermet, Y.; Demir, I. A Comprehensive Review of Deep Learning Applications in Hydrology and Water Resources. Water Sci. Technol. **2020**, 82, 2635–2670.
- Tran, Q.-K.; Song, S.-K. Water Level Forecasting Based on Deep Learning: A Use Case of Trinity River-Texas-The United States. J. KIISE **2017**, 44, 607–612.
- Hu, C.; Wu, Q.; Li, H.; Jian, S.; Li, N.; Lou, Z. Deep Learning with a Long Short-Term Memory Networks Approach for Rainfall-Runoff Simulation. Water **2018**, 10, 1543.
- Kratzert, F.; Klotz, D.; Brenner, C.; Schulz, K.; Herrnegger, M. Rainfall–Runoff Modelling Using Long Short-Term Memory (LSTM) Networks. Hydrol. Earth Syst. Sci. **2018**, 22, 6005–6022.
- Jung, S.; Cho, H.; Kim, J.; Lee, G. Prediction of Water Level in a Tidal River Using a Deep-Learning Based LSTM Model. J. Korea Water Resour. Assoc. **2018**, 51, 1207–1216.
- Yuan, X.; Chen, C.; Lei, X.; Yuan, Y.; Muhammad Adnan, R. Monthly Runoff Forecasting Based on LSTM–ALO Model. Stoch. Environ. Res. Risk Assess. **2018**, 32, 2199–2212.
- Zhang, D.; Lin, J.; Peng, Q.; Wang, D.; Yang, T.; Sorooshian, S.; Liu, X.; Zhuang, J. Modeling and Simulating of Reservoir Operation Using the Artificial Neural Network, Support Vector Regression, Deep Learning Algorithm. J. Hydrol. **2018**, 565, 720–736.
- Hu, R.; Fang, F.; Pain, C.C.; Navon, I.M. Rapid Spatio-temporal Flood Prediction and Uncertainty Quantification Using a Deep Learning Method. J. Hydrol. **2019**, 575, 911–920.
- Yang, S.; Yang, D.; Chen, J.; Zhao, B. Real-Time Reservoir Operation Using Recurrent Neural Networks and Inflow Forecast from a Distributed Hydrological Model. J. Hydrol. **2019**, 579, 124229.
- Yang, T.; Sun, F.; Gentine, P.; Liu, W.; Wang, H.; Yin, J.; Du, M.; Liu, C. Evaluation and Machine Learning Improvement of Global Hydrological Model-Based Flood Simulations. Environ. Res. Lett. **2019**, 14, 114027.
- Damavandi, H.G.; Shah, R.; Stampoulis, D.; Wei, Y.; Boscovic, D.; Sabo, J. Accurate Prediction of Streamflow Using Long Short-Term Memory Network: A Case Study in the Brazos River Basin in Texas. Int. J. Environ. Sci. Dev. **2019**, 10, 294–300.
- Kratzert, F.; Klotz, D.; Herrnegger, M.; Sampson, A.K.; Hochreiter, S.; Nearing, G.S. Toward Improved Predictions in Ungauged Basins: Exploiting the Power of Machine Learning. Water Resour. Res. **2019**, 55, 11344–11354.
- Kumar, D.; Singh, A.; Samui, P.; Jha, R.K. Forecasting Monthly Precipitation Using Sequential Modelling. Hydrol. Sci. J. **2019**, 64, 690–700.
- Srinivasulu, S.; Jain, A. A Comparative Analysis of Training Methods for Artificial Neural Network Rainfall–Runoff Models. Appl. Soft Comput. **2006**, 6, 295–306.
- Nasseri, M.; Asghari, K.; Abedini, M.J. Optimized Scenario for Rainfall Forecasting Using Genetic Algorithm Coupled with Artificial Neural Network. Expert Syst. Appl. **2008**, 35, 1415–1421.
- Sedki, A.; Ouazar, D.; El Mazoudi, E. Evolving Neural Network Using Real Coded Genetic Algorithm for Daily Rainfall–Runoff Forecasting. Expert Syst. Appl. **2009**, 36, 4523–4527.
- Yeo, W.-K.; Seo, Y.-M.; Lee, S.-Y.; Jee, H.-K. Study on Water Stage Prediction Using Hybrid Model of Artificial Neural Network and Genetic Algorithm. J. Korea Water Resour. Assoc. **2010**, 43, 721–731.
- Barati, R.; Neyshabouri, S.A.A.S.; Ahmadi, G. Development of Empirical Models with High Accuracy for Estimation of Drag Coefficient of Flow around a Smooth Sphere: An Evolutionary Approach. Powder Technol. **2014**, 257, 11–19.
- Hosseini, K.; Nodoushan, E.J.; Barati, R.; Shahheydari, H. Optimal Design of Labyrinth Spillways Using Meta-Heuristic Algorithms. KSCE J. Civil. Eng. **2016**, 20, 468–477.
- Alizadeh, M.J.; Shahheydari, H.; Kavianpour, M.R.; Shamloo, H.; Barati, R. Prediction of Longitudinal Dispersion Coefficient in Natural Rivers Using a Cluster-Based Bayesian Network. Environ. Earth Sci. **2017**, 76, 86.
- Badfar, M.; Barati, R.; Dogan, E.; Tayfur, G. Reverse Flood Routing in Rivers Using Linear and Nonlinear Muskingum Models. J. Hydrol. Eng. **2021**, 26, 04021018.
- Kazemi, M.; Barati, R. Application of Dimensional Analysis and Multi-Gene Genetic Programming to Predict the Performance of Tunnel Boring Machines. Appl. Soft Comput. **2022**, 124, 108997.
- Lee, J.; Cho, H.; Choi, M.; Kim, D. Development of Land Surface Model for Soyang River Basin. J. Korea Water Resour. Assoc. **2017**, 50, 837–847.
- Water Resource Management Information System (WAMIS). Available online: http://www.wamis.go.kr/ (accessed on 5 March 2022).

**Figure 9.** Inflow prediction results of the MLP (Cases 1–4) and RMLP (Cases 5–8) models. The black line indicates the observed discharge; the blue triangles show predictions with the original data, the purple circles with seasonal division, the red diamonds with normalization, and the green squares with all preprocessing applied.

**Figure 11.** Results of applying both normalization and seasonal division (Case 8) and seasonal division only (Case 6).

Index | Training | Validation | Test | Total
---|---|---|---|---
Peak season | 1183 | 240 | 294 | 1717
Off season | 3200 | 856 | 802 | 4858
Total | 4383 | 1096 | 1096 | 6575
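The seasonal-division preprocessing that produces these peak/off-season subsets can be sketched as a simple month-based router. The July to September boundary used here is an assumption for illustration; the paper's actual split is defined by the counts tabulated above.

```python
# Sketch of seasonal division: route each daily record into the peak
# (flood) season or the off season before training separate models.
import datetime

PEAK_MONTHS = {7, 8, 9}  # assumed flood-season months (illustrative)

def season_of(date):
    """Classify a date as 'peak' or 'off' season."""
    return "peak" if date.month in PEAK_MONTHS else "off"

print(season_of(datetime.date(2020, 8, 5)))   # peak (the 2020 flood event)
print(season_of(datetime.date(2021, 1, 15)))  # off
```

Training one model per season lets each network specialize: the peak-season model sees the flood dynamics, while the off-season model fits the low-flow regime.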

Index | Location | Training Max. | Training Min. | Validation Max. | Validation Min. | Test Max. | Test Min.
---|---|---|---|---|---|---|---
Peak season | Wondae (El.m.) | 7.33 | 2.61 | 6.55 | 2.66 | 6.11 | 2.61
 | Wontong (El.m.) | 4.13 | 0.67 | 3.75 | 0.67 | 4.43 | 0.67
 | Soyang Dam (m^{3}/s) | 4208.2 | 18.9 | 3918.5 | 26.5 | 3373.1 | 27.1
Off season | Wondae (El.m.) | 5.24 | 0.84 | 4.14 | 1.23 | 3.51 | 1.43
 | Wontong (El.m.) | 1.15 | 0.34 | 0.69 | 0.34 | 0.91 | 0.26
 | Soyang Dam (m^{3}/s) | 223.8 | 0.0 | 198.7 | 0.0 | 98.6 | 0.0
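These bounds drive the min-max normalization preprocessing. A minimal sketch, assuming standard min-max scaling against the training-set bounds (here the peak-season Soyang Dam inflow, max 4208.2 and min 18.9 m^3/s):

```python
# Min-max normalization against training-set bounds, with the inverse
# transform used to recover discharge from model output.
def normalize(x, lo, hi):
    """Scale x into [0, 1] using training-set min/max."""
    return (x - lo) / (hi - lo)

def denormalize(y, lo, hi):
    """Invert the scaling back to physical units."""
    return y * (hi - lo) + lo

# The 2020 test peak (3373.1 m^3/s) expressed on the training scale
q = normalize(3373.1, 18.9, 4208.2)
assert abs(denormalize(q, 18.9, 4208.2) - 3373.1) < 1e-9  # round trip
```

Using the training bounds (not the test bounds) for both transforms avoids leaking test-set information into the model inputs.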

Type | Input Data | MLP | RMLP
---|---|---|---
A | Original data | Case 1 | Case 5
B | Seasonal division | Case 2 | Case 6
C | Normalization | Case 3 | Case 7
D | Seasonal division & normalization | Case 4 | Case 8

Model | Parameter | Value | Epochs | Number of Trials | Avg. MSE (m^{3}/s)^{2} | Min. MSE (m^{3}/s)^{2}
---|---|---|---|---|---|---
MLP | Range of initial weight ($W_0$) | 0.2 | 10,000 | 10 | 26,650 | 23,557
 | | 0.4 | 10,000 | 10 | 24,435 | 15,339
 | | 0.6 | 10,000 | 10 | 23,589 | 13,844
 | | 0.8 | 10,000 | 10 | 19,533 | 11,168

Model | Parameter | Value | Epochs | Number of Trials | Avg. MSE (m^{3}/s)^{2} | Min. MSE (m^{3}/s)^{2}
---|---|---|---|---|---|---
RMLP | Boundary random ($BR$) | 0.0 | 10,000 | 10 | 23,046 | 12,185
 | | 0.01 | 10,000 | 10 | 23,463 | 20,213
 | | 0.05 | 10,000 | 10 | 17,844 | 11,167
 | | 0.1 | 10,000 | 10 | 21,857 | 12,617
 | Proportional random ($PR$) | 0.0 | 10,000 | 10 | 22,645 | 12,454
 | | 0.01 | 10,000 | 10 | 17,844 | 11,167
 | | 0.05 | 10,000 | 10 | 23,806 | 18,173
 | | 0.1 | 10,000 | 10 | 23,716 | 13,165
 | Learning rate ($\alpha$) | 0.01 | 10,000 | 10 | 17,844 | 11,167
 | | 0.04 | 10,000 | 10 | 23,059 | 12,798
 | | 0.07 | 10,000 | 10 | 19,058 | 12,443
 | | 0.1 | 10,000 | 10 | 21,796 | 11,274
 | Range of initial weight ($W_0$) | 0.2 | 10,000 | 10 | 25,028 | 14,444
 | | 0.4 | 10,000 | 10 | 24,538 | 14,203
 | | 0.6 | 10,000 | 10 | 23,680 | 12,909
 | | 0.8 | 10,000 | 10 | 17,844 | 11,167

Model | Parameter | Value
---|---|---
MLP | Range of initial weight ($W_0$) | 0.8
RMLP | Boundary random ($BR$) | 0.05
 | Proportional random ($PR$) | 0.01
 | Learning rate ($\alpha$) | 0.01
 | Range of initial weight ($W_0$) | 0.8
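The random search that selected these values can be sketched as follows. This is a minimal illustration, not the paper's implementation: the candidate grids follow the parameter tables above, while `evaluate` is a hypothetical stand-in for a full 10,000-epoch training run returning validation MSE.

```python
# Minimal random search over the RMLP hyperparameters tuned above.
import random

GRID = {
    "BR": [0.0, 0.01, 0.05, 0.1],      # boundary random
    "PR": [0.0, 0.01, 0.05, 0.1],      # proportional random
    "alpha": [0.01, 0.04, 0.07, 0.1],  # learning rate
    "W0": [0.2, 0.4, 0.6, 0.8],        # initial-weight range
}

def random_search(evaluate, n_trials=10, seed=0):
    """Sample n_trials parameter sets and return (best_mse, best_params)."""
    rng = random.Random(seed)
    best = (float("inf"), None)
    for _ in range(n_trials):
        params = {k: rng.choice(v) for k, v in GRID.items()}
        best = min(best, (evaluate(params), params), key=lambda t: t[0])
    return best

def toy(p):
    """Toy objective minimized at the selected values in the table above."""
    return (abs(p["BR"] - 0.05) + abs(p["PR"] - 0.01)
            + abs(p["alpha"] - 0.01) + abs(p["W0"] - 0.8))

best_mse, best_params = random_search(toy, n_trials=50)
```

Random search trades exhaustiveness for cost: each trial draws an independent combination, so expensive training runs are spent on a spread of the grid rather than on every cell.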

Model | Index | Input Data | Epochs | Number of Trials | Test Error (MSE) (m^{3}/s)^{2} | Error Difference (m^{3}/s)^{2}
---|---|---|---|---|---|---
MLP | Case 1 | (a) Original data | 10,000 | 20 | 11,006 | -
 | Case 2 | (b) Seasonal division | 10,000 | 20 | 14,370 | +3364 (+30.6%)
 | Case 3 | (c) Normalization | 10,000 | 20 | 8985 | −2021 (−18.4%)
 | Case 4 | (d) Seasonal division and normalization | 10,000 | 20 | 4511 | −6495 (−59.0%)
RMLP | Case 5 | (a) Original data | 10,000 | 20 | 11,344 | -
 | Case 6 | (b) Seasonal division | 10,000 | 20 | 12,251 | +907 (+8.0%)
 | Case 7 | (c) Normalization | 10,000 | 20 | 5366 | −5978 (−52.7%)
 | Case 8 | (d) Seasonal division and normalization | 10,000 | 20 | 4368 | −6976 (−61.5%)

Input Data | (1) MLP Test Error (MSE) (m^{3}/s)^{2} | (2) RMLP Test Error (MSE) (m^{3}/s)^{2} | Improvement of Error (1)−(2)
---|---|---|---
(a) Original data | (Case 1) 11,006 | (Case 5) 11,344 | −338 (−3.1%)
(b) Seasonal division | (Case 2) 14,370 | (Case 6) 12,251 | 2119 (14.7%)
(c) Normalization | (Case 3) 8985 | (Case 7) 5366 | 3619 (40.3%)
(d) Seasonal division and normalization | (Case 4) 4511 | (Case 8) 4368 | 142 (3.2%)

Date | Observed Inflow (m^{3}/s) | MLP (Case 4) Predicted Inflow (m^{3}/s) | MLP Error (m^{3}/s) | RMLP (Case 8) Predicted Inflow (m^{3}/s) | RMLP Error (m^{3}/s)
---|---|---|---|---|---
27 Jul. 2019 | 350.1 | 635.8 | 285.7 (+81.6%) | 419.9 | 69.8 (+19.9%)
7 Aug. 2019 | 696.1 | 840.1 | 144.0 (+20.7%) | 620.2 | −75.9 (−10.9%)
11 Sep. 2019 | 581.7 | 464.1 | −117.6 (−20.2%) | 313.7 | −268.0 (−46.1%)
5 Aug. 2020 | 3373.1 | 2862.9 | −510.2 (−15.1%) | 2752.0 | −620.7 (−18.4%)
3 Sep. 2020 | 2660.6 | 2278.1 | −382.5 (−14.4%) | 2243.0 | −417.3 (−15.7%)
7 Sep. 2020 | 1436.2 | 1151.1 | −285.1 (−19.8%) | 1044.0 | −392.3 (−27.3%)
4 Apr. 2021 | 388.8 | 459.8 | 71.0 (+18.3%) | 282.8 | −106.0 (−27.3%)
17 May 2021 | 522.4 | 701.4 | 179.0 (+34.3%) | 492.1 | −30.3 (−5.8%)
4 Jul. 2021 | 437.7 | 544.3 | 106.6 (+24.4%) | 354.6 | −83.1 (−19.0%)
Absolute deviation | - | - | 237.6 (+26.3%) | - | 187.4 (+12.1%)

Index | Input Data | Season | Number of Data | MSE (m^{3}/s)^{2} | MAE (m^{3}/s)
---|---|---|---|---|---
Case 5 | (a) Original data | - | 1096 | 11,344 | 43.0
Case 6 | (b) Seasonal division | Peak season | 294 | 45,417 | 139.4
 | | Off season | 802 | 96 | 7.2
 | | Total | 1096 | 12,251 | 42.5
Case 7 | (c) Normalization | - | 1096 | 5366 | 26.1
Case 8 | (d) Seasonal division and normalization | Peak season | 294 | 16,066 | 64.1
 | | Off season | 802 | 80 | 6.4
 | | Total | 1096 | 4368 | 21.9

Index | Case 1 | Case 2 | Case 3 | Case 4 | Case 5 | Case 6 | Case 7 | Case 8
---|---|---|---|---|---|---|---|---
MSE (m^{3}/s)^{2} | 11,006 | 14,370 | 8985 | 4511 | 11,344 | 12,251 | 5366 | 4368
RMSE (m^{3}/s) | 104.9 | 119.9 | 94.8 | 67.2 | 106.5 | 110.7 | 73.3 | 66.1
MAE (m^{3}/s) | 40.1 | 49.1 | 45.5 | 29.7 | 43.0 | 42.5 | 26.1 | 21.9
SAD (m^{3}/s) | 43,966 | 53,759 | 49,838 | 32,519 | 47,093 | 46,553 | 28,561 | 24,011
MAPE | 0.887 | 1.106 | 3.044 | 0.771 | 1.023 | 0.967 | 1.512 | 0.789
R^{2} | 0.732 | 0.673 | 0.811 | 0.902 | 0.720 | 0.729 | 0.867 | 0.894
E | 0.704 | 0.613 | 0.758 | 0.879 | 0.695 | 0.670 | 0.856 | 0.882

Cases 1-4: MLP; Cases 5-8: RMLP.
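As a quick consistency check on the table above, the RMSE row should be the square root of the MSE row (both in m^3/s units); the tabulated values agree to within rounding.

```python
# Verify RMSE = sqrt(MSE) for all eight cases in the table above.
import math

mse_row  = [11006, 14370, 8985, 4511, 11344, 12251, 5366, 4368]
rmse_row = [104.9, 119.9, 94.8, 67.2, 106.5, 110.7, 73.3, 66.1]

for m, r in zip(mse_row, rmse_row):
    assert abs(math.sqrt(m) - r) < 0.06, (m, r)  # within rounding
```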

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Choi, H.S.; Kim, J.H.; Lee, E.H.; Yoon, S.-K.
Development of a Revised Multi-Layer Perceptron Model for Dam Inflow Prediction. *Water* **2022**, *14*, 1878.
https://doi.org/10.3390/w14121878
