Article

Deep Reinforcement Learning for Optimized Reservoir Operation and Flood Risk Mitigation

Fred Sseguya and Kyung Soo Jun *
1 Department of Civil, Architectural and Environmental System Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
2 Graduate School of Water Resources, Sungkyunkwan University, Suwon 16419, Republic of Korea
* Author to whom correspondence should be addressed.
Water 2025, 17(22), 3226; https://doi.org/10.3390/w17223226
Submission received: 19 October 2025 / Revised: 10 November 2025 / Accepted: 10 November 2025 / Published: 11 November 2025
(This article belongs to the Special Issue Machine Learning Applications in the Water Domain)

Abstract

Effective reservoir operation demands a careful balance among flood risk mitigation, water supply reliability, and operational stability, particularly under evolving hydrological conditions. This study applies deep reinforcement learning (DRL) models—Deep Q-Network (DQN), Proximal Policy Optimization (PPO), and Deep Deterministic Policy Gradient (DDPG)—to optimize reservoir operations at the Soyang River Dam, South Korea, using 30 years of daily hydrometeorological data (1993–2022). The DRL framework integrates observed and remotely sensed variables such as precipitation, temperature, and soil moisture to guide adaptive storage decisions. Discharge is computed via mass balance, preserving inflow while optimizing system responses. Performance is evaluated using cumulative reward, action stability, and counts of total-capacity and flood-control violations. PPO achieved the highest cumulative reward and the most stable actions but incurred six flood-control violations; DQN recorded one flood-control violation, reflecting larger buffers and strong compliance; DDPG provided smooth, intermediate responses with one violation. No model exceeded the total storage capacity. Analyses show a consistent pattern: retain on the rise, moderate the crest, and release on the recession to keep Flood Risk (FR) < 0. During high-inflow days, DRL optimization outperformed observed operation by increasing storage buffers and typically reducing peak discharge, thereby mitigating flood risk.
Keywords: deep reinforcement learning; optimized reservoir operation; flood risk mitigation; the Soyang River Dam
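
The operational loop summarized in the abstract (observe the hydrometeorological state, choose a storage adjustment, derive discharge by mass balance so inflow is preserved, and penalize flood-control violations and unstable actions) can be sketched as a simple daily-step environment. The snippet below is a minimal illustration in Python, not the authors' implementation: the class ReservoirEnv, the constants TOTAL_CAPACITY and FLOOD_CONTROL_LEVEL, and the reward shape are hypothetical assumptions chosen only to mirror the stated evaluation criteria (flood-control compliance, capped total storage, and action stability).

```python
# Minimal sketch of a mass-balance reservoir environment for DRL training.
# All names, constants, and the reward shape are illustrative assumptions;
# the paper's actual state/action/reward design is not reproduced here.
import numpy as np

TOTAL_CAPACITY = 2900.0        # hypothetical total storage (10^6 m^3)
FLOOD_CONTROL_LEVEL = 2500.0   # hypothetical flood-control limit (10^6 m^3)

class ReservoirEnv:
    """One step per day: the agent requests a storage change, and discharge
    follows from mass balance, so the day's inflow is fully accounted for."""

    def __init__(self, inflow, precip, temp, soil_moisture):
        self.inflow = np.asarray(inflow, dtype=float)
        self.forcings = np.column_stack([precip, temp, soil_moisture])
        self.reset()

    def reset(self):
        self.t = 0
        self.storage = 0.6 * TOTAL_CAPACITY  # arbitrary initial storage
        return self._obs()

    def _obs(self):
        # State: current storage, today's inflow, and the forcing variables.
        return np.concatenate(([self.storage, self.inflow[self.t]],
                               self.forcings[self.t]))

    def step(self, delta_storage):
        inflow = self.inflow[self.t]
        target = np.clip(self.storage + delta_storage, 0.0, TOTAL_CAPACITY)
        # Mass balance: discharge = inflow - (storage change). Discharge cannot
        # be negative, so storage can rise by at most the day's inflow, and the
        # clip above means total capacity can never be exceeded.
        discharge = max(inflow - (target - self.storage), 0.0)
        self.storage += inflow - discharge

        # Penalize flood-control breaches and large, unstable actions.
        violation = self.storage > FLOOD_CONTROL_LEVEL
        reward = -float(violation) - 1e-3 * float(delta_storage) ** 2

        self.t += 1
        done = self.t >= len(self.inflow)
        return (self._obs() if not done else None), reward, done, {}

# Hypothetical usage with synthetic daily data:
rng = np.random.default_rng(0)
n = 365
env = ReservoirEnv(inflow=rng.gamma(2.0, 5.0, n),
                   precip=rng.gamma(1.5, 4.0, n),
                   temp=15 + 10 * np.sin(np.linspace(0, 2 * np.pi, n)),
                   soil_moisture=rng.uniform(0.1, 0.4, n))
obs = env.reset()
obs, r, done, _ = env.step(delta_storage=-5.0)  # release more than retained
```

Under this framing, continuous-action agents such as PPO or DDPG would be trained on episodes drawn from the 1993–2022 daily record, while the discrete-action DQN would select delta_storage from a fixed set of retain/release levels.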

