# Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing


## Abstract


## 1. Introduction

In that study, 337,084 quadrilateral unstructured meshes with an average grid length of 80 m were used. The results showed that the OpenACC parallel method on a Tesla K20c GPU card achieved a higher speedup ratio than the OpenMP and MPI parallel methods on a 32-core computer. Liang et al. [14] developed a GPU-accelerated hydrodynamic model in OpenCL for rainfall-runoff simulation, running on a Tesla K80 GPU (Nvidia, CA, USA) with 4992 CUDA cores. Simulating a 12-h flood event on 5 million computational cells at 5 m resolution took 2.5 h on a single K80 card. It can thus be concluded that a GPU-equipped personal computer enables catchment-scale simulations at very high spatial grid resolution while substantially improving computational efficiency. GPU-accelerated models have also been applied to coastal ocean tides [15], waves [16], and dam-break floods [17], showing noticeable speedups. Néelz and Pender [18] presented benchmarks for the latest generation of 2D hydraulic modelling packages, including the MIKE21 FM model and TUFLOW; their results show that the MIKE21 FM model is suitable for flood simulation.

## 2. Methodologies

#### 2.1. Hydrodynamic Model

#### 2.1.1. Governing Equations

#### 2.1.2. Numerical Method

If the water depth at a node exceeds a threshold of 10^{−3} m [4], the node is classified as a wet-node; otherwise, it is classified as a dry-node. Based on this nodal wet/dry classification, a wet-cell is defined as a cell whose three nodes are all wet; if one or more nodes of a cell are dry, the cell is classified as a dry-cell. The wet/dry classification subroutine is therefore a computational loop with full data independence.
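A minimal sketch of this data-independent loop follows; the function and variable names are illustrative and not taken from the paper's code, and only the 10^{−3} m threshold comes from the text:

```python
import numpy as np

H_MIN = 1e-3  # wet/dry depth threshold in metres [4]

def classify_wet_dry(node_depth, cell_nodes):
    """node_depth: (n_nodes,) water depths at the nodes.
    cell_nodes: (n_cells, 3) node indices of each triangular cell.
    Returns boolean wet/dry flags per node and per cell."""
    wet_node = node_depth > H_MIN              # each node is checked independently
    # A cell is a wet-cell only if all three of its nodes are wet.
    wet_cell = wet_node[cell_nodes].all(axis=1)
    return wet_node, wet_cell
```

Because every node and every cell is evaluated independently of the others, this loop maps directly onto one GPU thread per node or per cell.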

When the threshold is set to either 10^{−3} m or 10^{−6} m, the final computational results, including the maximum water depth, the flow velocity, and the flood arrival time, are insensitive to the chosen value.

$\mathbf{n} = (n_x, n_y)^{\mathrm{T}}$ is the unit outward normal vector; $F({U}_{\mathrm{L}})\cdot n$, ${F}_{*\mathrm{L}}$, ${F}_{*\mathrm{R}}$, and $F({U}_{\mathrm{R}})\cdot n$ are computed by
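The paper's star-region flux formulas are not reproduced here. Purely as an illustration of how such interface fluxes are typically assembled from left, right, and star states, the following is a generic HLL flux sketch for the 1D shallow-water equations; the wave-speed estimates (Davis) and all names are assumptions, not the paper's formulation:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(h, hu):
    """Physical flux of the 1D shallow-water equations for state (h, hu)."""
    u = hu / h if h > 0 else 0.0
    return np.array([hu, hu * u + 0.5 * G * h * h])

def hll_flux(hL, huL, hR, huR):
    """Generic HLL interface flux between a left and right state (a sketch)."""
    uL = huL / hL if hL > 0 else 0.0
    uR = huR / hR if hR > 0 else 0.0
    cL, cR = (G * hL) ** 0.5, (G * hR) ** 0.5
    sL = min(uL - cL, uR - cR)   # leftmost wave-speed estimate (Davis)
    sR = max(uL + cL, uR + cR)   # rightmost wave-speed estimate
    FL, FR = swe_flux(hL, huL), swe_flux(hR, huR)
    if sL >= 0:
        return FL                # all waves move right: use F(U_L)
    if sR <= 0:
        return FR                # all waves move left: use F(U_R)
    UL, UR = np.array([hL, huL]), np.array([hR, huR])
    # Star-region flux when the interface lies between the two waves.
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)
```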

#### 2.2. GPU Parallel Computing

## 3. Model Validations

#### 3.1. Flooding a Disconnected Water Body

#### 3.2. The Toce River Dam-Break Case with Overtopping

## 4. Case Study: A Real Flood Simulation in the Wei River

#### 4.1. Study Area

The flood control protection zone of the Wei River right bank involves Shandong, Hebei, and Henan provinces in China (Figure 7); in Figure 7, the hatched blocks with different colors belong to different provinces. There are about 180 townships in the flood control protection zone. The study area is bounded by the left embankment of the Tuhaimajia River, the left embankment of the Majia River, the Bohai Estuary, the right embankment of the Zhangweixin River, the right embankment of the Weiyunhe Canal, and the right embankment of the Wei River. The total area is about 9503 km^{2}, a large-scale domain for which flood simulation is time-consuming.

#### 4.2. Computational Mesh

The coarsest division contains 179,296 triangular grids with an average grid area of about 53,000 m^{2}. The other two divisions are obtained by successively refining this mesh, evenly dividing each triangle into four through the midpoints of its edges, giving totals of 717,184 and 2,868,736 triangular grids, respectively. For brevity, the three grid divisions are denoted Mesh-1, Mesh-2, and Mesh-3. Figure 8 shows the topography of the study area in the Gauss-Kruger projection.
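The quadrupling of grid counts under this midpoint refinement can be sketched as follows (a generic subdivision of one triangle; function names are illustrative, not the paper's):

```python
def subdivide(tri):
    """Split one triangle (three (x, y) vertices) into four congruent
    sub-triangles through the midpoints of its edges, as in the mesh
    refinement described above."""
    a, b, c = tri
    mid = lambda p, q: ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    # Three corner triangles plus the central one.
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def refined_count(n_triangles, levels):
    """Each refinement level replaces every triangle with four."""
    return n_triangles * 4 ** levels
```

With the 179,296 triangles of Mesh-1, one and two refinement levels give exactly the 717,184 and 2,868,736 triangles of Mesh-2 and Mesh-3.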

#### 4.3. Dyke-Break Flood Simulation

The discharge hydrograph across the hypothetical dyke breach is shown in Figure 9, and the flooding duration is about 263 h. To simulate the whole process of flooding and recession, the computation time is set to 878 h, and the discharge across the breach is set to 0 during t = 264–878 h.

#### 4.4. Parallel Performance Analysis

## 5. Conclusions

The study area covers about 9503 km^{2}, and three grid division schemes with different resolutions were considered to test the performance of the parallel model, which was executed on the NVIDIA Kepler K20 platform (Nvidia, CA, USA). The results show that all three cases attained a dramatic reduction in running time with the GPU-based parallel model. For Mesh-1, with 179,296 grids, the 878 h (36.6 days) flood inundation process took approximately 1.70 h to compute with the serial model, whereas only 0.15 h was required with the parallel model on the GPU K20 card. For Mesh-3, with 2,868,736 grids, the running times of the serial and parallel models are 86.72 h and 2.79 h, respectively. The running time, speedup ratio, and time-saving ratio all increased with the number of grids: speedup ratios of 11.3, 17.2, and 31.1 were achieved with 179,296, 717,184, and 2,868,736 grids, respectively. Since the accuracy of flood simulations depends largely on the topographical resolution, both the precision and the speed of the flood model can be improved by combining mesh refinement with parallel computing, and the proposed GPU-based parallel model will contribute to large-scale flood simulation and real-time response for disaster prevention and mitigation.

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References

- Song, L.; Zhou, J.; Guo, J.; Zou, Q.; Liu, Y. A robust well-balanced finite volume model for shallow water flows with wetting and drying over irregular terrain. Adv. Water Resour.
**2011**, 34, 915–932. [Google Scholar] [CrossRef] - Bi, S.; Zhou, J.; Liu, Y.; Song, L. A finite volume method for modeling shallow flows with wet-dry fronts on adaptive Cartesian grids. Math. Probl. Eng.
**2014**, 2014, 805–808. [Google Scholar] [CrossRef] - Wu, G.; He, Z.; Liu, G. Development of a cell-centered Godunov-type finite volume model for shallow water flow based on unstructured mesh. Math. Probl. Eng.
**2014**, 2014, 1–15. [Google Scholar] [CrossRef] - Liu, Q.; Qin, Y.; Zhang, Y.; Li, Z. A coupled 1D–2D hydrodynamic model for flood simulation in flood detention basin. Nat. Hazards
**2015**, 75, 1303–1325. [Google Scholar] [CrossRef] - Rehman, K.; Cho, Y.S. Novel slope source term treatment for preservation of quiescent steady states in shallow water flows. Water
**2016**, 8, 488. [Google Scholar] [CrossRef] - Kvočka, D.; Ahmadian, R.; Falconer, R.A. Flood inundation modelling of flash floods in steep river basins and catchments. Water
**2017**, 9, 705. [Google Scholar] [CrossRef] - Chen, J.; Zhong, P.-A.; Wang, M.-L.; Zhu, F.-L.; Wan, X.-Y.; Zhang, Y. A risk-based model for real-time flood control operation of a cascade reservoir system under emergency conditions. Water
**2018**, 10, 167. [Google Scholar] [CrossRef] - Sanders, B.F.; Schubert, J.E.; Detwiler, R.L. ParBreZo: A parallel, unstructured grid, Godunov-type, shallow-water code for high-resolution flood inundation modeling at the regional scale. Adv. Water Resour.
**2010**, 33, 1456–1467. [Google Scholar] [CrossRef] - Lai, W.; Khan, A.A. A parallel two-dimensional discontinuous Galerkin method for shallow-water flows using high-resolution unstructured meshes. J. Comput. Civ. Eng.
**2016**, 31, 04016073. [Google Scholar] [CrossRef] - Wang, X.; Shangguan, Y.; Onodera, N.; Kobayashi, H.; Aoki, T. Direct numerical simulation and large eddy simulation on a turbulent wall-bounded flow using lattice Boltzmann method and multiple GPUs. Math. Probl. Eng.
**2014**, 2014, 1–10. [Google Scholar] [CrossRef] - Wang, Y.; Yang, X. Sensitivity analysis of the surface runoff coefficient of HiPIMS in simulating flood processes in a Large Basin. Water
**2018**, 10, 253. [Google Scholar] [CrossRef] - Zhang, S.; Yuan, R.; Wu, Y.; Yi, Y. Parallel computation of a dam-break flow model using OpenACC applications. J. Hydraul. Eng.
**2016**, 143, 04016070. [Google Scholar] [CrossRef] - Zhang, S.; Li, W.; Jing, Z.; Yi, Y.; Zhao, Y. Comparison of three different parallel computation methods for a two-dimensional dam-break model. Math. Probl. Eng.
**2017**, 2017, 1–12. [Google Scholar] [CrossRef] - Liang, Q.; Xia, X.; Hou, J. Catchment-scale high-resolution flash flood simulation using the GPU-based technology. Procedia Eng.
**2016**, 154, 975–981. [Google Scholar] [CrossRef] - Zhao, X.D.; Liang, S.X.; Sun, Z.C.; Zhao, X.Z.; Sun, J.W.; Liu, Z.B. A GPU accelerated finite volume coastal ocean model. J. Hydrodyn.
**2017**, 29, 679–690. [Google Scholar] [CrossRef] - Chen, T.Q.; Zhang, Q.H. GPU acceleration of a nonhydrostatic model for the internal solitary waves simulation. J. Hydrodyn.
**2013**, 25, 362–369. [Google Scholar] [CrossRef] - Wu, J.; Zhang, H.; Yang, R.; Dalrymple, R.A.; Hérault, A. Numerical modeling of dam-break flood through intricate city layouts including underground spaces using GPU-based SPH method. J. Hydrodyn.
**2013**, 25, 818–828. [Google Scholar] [CrossRef] - Néelz, S.; Pender, G. Benchmarking the Latest Generation of 2D Hydraulic Modelling Packages; Environment Agency: Bristol, UK, 2013. [Google Scholar]
- Ying, X.; Khan, A.A.; Wang, S.S.Y. Upwind conservative scheme for the Saint Venant equations. J. Hydraul. Eng.
**2004**, 130, 977–987. [Google Scholar] [CrossRef]

**Figure 2.** Plan and profile of the DEM used in the test case of flooding a disconnected water body [18].

**Figure 3.** The computed water levels at points P1 and P2 in the test case of flooding a disconnected water body: (**a**) computed results of the present model at P1 and P2; (**b**) computed results of different models at P1; (**c**) computed results of different models at P2.

**Figure 7.** The flood control protection zone of the Wei River right bank (the hatched blocks in blue, yellow, and green belong to Henan Province, Hebei Province, and Shandong Province, respectively).

**Figure 9.** The test case of real flood simulation in the Wei River: the discharge across the hypothetical dyke breach (the dyke breach occurred at time t = 0).

**Figure 10.** The test case of real flood simulation in the Wei River: the distribution of the computed water depth at different times using Mesh-1.

**Figure 11.** The test case of real flood simulation in the Wei River: the water volume of inflow and the cells’ storage.

**Figure 12.** The test case of real flood simulation in the Wei River: the run-time distribution of replicate runs using Mesh-1 and GPU computing.

**Table 1.** The test case of real flood simulation in the Wei River: the comparison of the final flooded area at different water depth intervals (km^{2}).

Model | In All | 0.05–0.5 m | 0.5–1 m | 1–2 m | 2–3 m | ≥3 m
---|---|---|---|---|---|---
MIKE21 FM | 864.26 | 370.35 | 234.50 | 200.34 | 50.14 | 8.93
The proposed model | 856.28 | 364.62 | 241.19 | 193.46 | 47.03 | 9.98
Relative error (%) | −0.92 | −1.55 | 2.85 | −3.43 | −6.20 | 11.76

**Table 2.** The test case of real flood simulation in the Wei River: results of different calculation schemes (T = running time; S = speedup ratio; TS = time-saving ratio).

Mesh Type | Grid Number | Average Grid Area (m^{2}) | T_{CPU} (h) | T_{GPU} (h) | S | TS (%)
---|---|---|---|---|---|---
Mesh-1 | 179,296 | 53,000 | 1.70 | 0.15 | 11.3 | 91
Mesh-2 | 717,184 | 13,250 | 10.69 | 0.62 | 17.2 | 94
Mesh-3 | 2,868,736 | 3313 | 86.72 | 2.79 | 31.1 | 97
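The S and TS columns follow directly from the measured run times; a quick check, using only values copied from Table 2:

```python
# Speedup S = T_CPU / T_GPU and time-saving ratio TS = 1 - T_GPU / T_CPU,
# computed from the run times reported in Table 2.
runs = {
    "Mesh-1": (1.70, 0.15),
    "Mesh-2": (10.69, 0.62),
    "Mesh-3": (86.72, 2.79),
}
for mesh, (t_cpu, t_gpu) in runs.items():
    s = t_cpu / t_gpu
    ts = 100.0 * (1.0 - t_gpu / t_cpu)
    print(f"{mesh}: S = {s:.1f}, TS = {ts:.0f}%")
# → Mesh-1: S = 11.3, TS = 91%
# → Mesh-2: S = 17.2, TS = 94%
# → Mesh-3: S = 31.1, TS = 97%
```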

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Liu, Q.; Qin, Y.; Li, G.
Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing. *Water* **2018**, *10*, 589.
https://doi.org/10.3390/w10050589

**AMA Style**

Liu Q, Qin Y, Li G.
Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing. *Water*. 2018; 10(5):589.
https://doi.org/10.3390/w10050589

**Chicago/Turabian Style**

Liu, Qiang, Yi Qin, and Guodong Li.
2018. "Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing" *Water* 10, no. 5: 589.
https://doi.org/10.3390/w10050589