An Efficient Decision Support System for Flood Inundation Management Using Intermittent Remote-Sensing Data

Abstract: Timely acquisition of the spatial distribution of flooding is an essential basis for flood-disaster monitoring and management. Remote-sensing data have been widely used in water-body surveys. However, due to cloudy weather and complex geomorphic environments, remote-sensing images cannot be acquired continuously, so some data are missing and a dynamic, continuous record of the flood inundation process cannot be provided. To use remote-sensing data fully and effectively, we developed a new decision support system for integrated flood inundation management based on limited and intermittent remote-sensing data. Firstly, we established a new multi-scale water-extraction convolutional neural network named DEU-Net to extract water from remote-sensing images automatically. A specific-dataset training method was created for typical region types to separate water bodies from confusing surface features more accurately. Secondly, we built a waterfront contour active tracking model to implicitly describe the flood movement interface. In this way, the flooding process was converted into the numerical solution of a partial differential equation for the boundary function. A spatial upwind difference scheme and a temporal Euler difference scheme were used to perform the numerical solution. Finally, we established seven indicators that consider regional characteristics and flood-inundation attributes to evaluate flood-disaster losses. A cloud model using the entropy weight method was introduced to account for uncertainties in the various parameters. In the end, a decision support system that visualizes flood-loss risk was developed using the ArcGIS application programming interface (API). To verify the effectiveness of the model constructed in this paper, we conducted numerical experiments on the model's performance through comparative experiments at both laboratory and real-world scales.
The results were as follows: (1) The DEU-Net method had a better capability than the other comparison methods to accurately extract various water bodies, such as urban water bodies, open-air ponds, and plateau lakes. (2) Compared with the synthetic observation data, the simulation results of the active tracking model showed good temporal and spatial consistency with the image-extraction results and actual statistical data. (3) The application results showed that the system has high computational efficiency and noticeable visualization effects. The research results may provide a scientific basis for emergency-response decision-making for flood disasters, especially in data-sparse regions.


Introduction
From many historical flood events, it can be observed that flooding is one of the most frequent and destructive natural disasters [1]. Due to time and resource limitations and the rapid changes in the flooding process, it is always a huge challenge to collect information for hazard mitigation promptly, as proper actions must be performed within a limited amount of time. Today, the application of remote sensing to flood studies is receiving considerable attention. The development in this field has evolved from optical to radar remote sensing [3]. Though radar data can provide frequent day-and-night observations of the surface under almost any weather condition, they have relatively low resolution and high noise, and radar imaging cannot provide an intuitive visual impression. Besides, satellite synthetic aperture radar (SAR) imagery of some urban areas is difficult to interpret because of the off-nadir viewing configuration, for example, the confusion of floodwater with the specular reflection of smooth land surfaces [4]. Optical remote-sensing data are used broadly for regional monitoring and mapping. They have a cost advantage over radar data for flood-extent mapping, especially under cloud-free conditions. Optical images are more suitable for studying flood inundation with a relatively long time span, rapid water rise, and slow retreat, such as "coastal floods" and "fluvial floods". Meanwhile, bad weather is primarily concentrated in the pre-flood period, and not all of the weather is severe during flooding, so a portion of the optical remote-sensing images of relatively good quality is available. Thus, the main objective of the research described in this paper was to develop a decision support system to evaluate the impact of flood hazards based on these limited remote-sensing data in a widely used Geographic Information System (GIS) environment.
State agencies need reliable decision support systems to assess flood losses, to plan and design flood-management strategies and mitigation systems, and to prepare emergency management plans.
To carry out a flood-hazard assessment from remote-sensing imagery, we first need to identify the water bodies and their distribution. The current water-information-extraction methods mainly include threshold methods, machine-learning methods, and deep-learning methods. A threshold method constructs a model by selecting appropriate bands from the information transmitted by the satellite and uses the different spectral characteristics of water bodies and non-water bodies to extract water [5]. This family is divided into the single-band threshold method [6], the inter-spectral relationship method, and the water-body index method [7][8][9][10], among which the water-index method is the most popular. The earliest water index, the Normalized Difference Water Index (NDWI), was proposed in 1996 [7]; it eliminates the interference of some vegetation and soil information to extract water. However, threshold methods have trouble separating water bodies from objects with similar spectral features, such as shadows and dark roads. With the development of machine learning, several popular machine-learning algorithms, such as Decision Tree (DT) [11,12], Support Vector Machine (SVM) [13,14], and Random Forest (RF) [15,16], have been widely used in water-body extraction. However, these methods need manually marked features, and different feature vectors are required for different images; the quality of the marking has a significant impact on the results [17]. Traditional methods mainly rely on manually designed extractors, requiring professional knowledge and complicated parameter-adjustment processes. Thus, the generalization ability and robustness of these methods need to be improved.
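As a minimal sketch of the water-index approach, NDWI can be computed directly from the green and near-infrared bands; the threshold value below is an illustrative assumption, since in practice it is tuned per scene:

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    # NDWI = (Green - NIR) / (Green + NIR); water reflects strongly in the
    # green band and absorbs in the near-infrared, so high NDWI marks water.
    ndwi = (green - nir) / (green + nir + 1e-12)  # epsilon avoids div-by-zero
    return ndwi > threshold

# Toy reflectance values: first pixel is water-like, second is vegetation-like.
mask = ndwi_water_mask(np.array([0.3, 0.1]), np.array([0.1, 0.3]))
print(mask)  # [ True False]
```

In practice the threshold is chosen per image (e.g., by Otsu's method or manual tuning), which is precisely the fragility that motivates the learning-based methods discussed next.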
Today, deep learning has become popular in image processing, including the processing of remote-sensing images. The advantage of the convolutional neural network (CNN) is that features can be captured directly from the original images through multiple convolutional layers [18], which avoids complex feature engineering. Some models, such as U-Net [19], LinkNet [20], and DeeplabV3+ [21], are popular in the field of image recognition and achieve relatively good accuracy. U-Net is widely used for its simple and straightforward encoding-decoding structure [22]. However, U-Net is poor at edge-information extraction and easily misses part of the targets. In addition, water extraction is different from general target extraction and shows pronounced regional differences. Some surface features with spectral reflectance close to that of water, such as shadows, roads, and dark roofs, can easily interfere with the extraction results, leading to false extractions and omissions by deep-learning models. To date, most published water datasets have lacked samples from complex geological environments.
How to generate a continuous and dynamic flood process from the intermittent flood inundation ranges extracted by the deep-learning model still faces significant challenges. To obtain a dynamic and continuous flood inundation process, flood evolution models that rely on a priori data are often used in hydrology to describe the flood movement process. Those methods can accurately simulate the water level, flow rate, and their changes over time by solving hydrodynamic equations. Still, the uncertainty of the parameters due to regional differences reduces the accuracy of the results, especially in areas with insufficient data [23]. Moreover, flood disasters are characterized by suddenness, and many disaster-stricken sites cannot provide sufficient real-time observational data. Furthermore, remote-sensing data can be used directly to extract information about the extent of flood inundation, and hydrological model parameters can be inferred by extracting underlying surface features, such as land cover and the impervious-area ratio. The data-assimilation method can integrate the observation data with the constraints of the hydraulic model and use multi-source information to minimize the uncertainty in the flood evolution process [24]. There are two main ways to use hydraulic models to assimilate remote-sensing data. One is to assimilate water-level data extracted from remote-sensing data into hydraulic models [25]. The assimilation effect of this method depends on the accuracy of the extracted water-level data. The accuracy of the data obtained by the current approach is still low, at the meter level, which is not very compatible with the hydraulic model. On the other hand, remote-sensing data currently have a high spatial resolution, from which a high-resolution flood inundation range can be obtained.
Thus, it is more direct and practical to use the flood-inundation-area data rather than the water-level data to assimilate the flood evolution process. Lai et al. [26] researched the fusion of flood-inundation-range data with a flood dynamics model to dynamically correct the dynamics model. Zhang et al. [27] transformed the flood inundation process into the topological deformation between the curves of the inundation area and performed numerical solutions in the space and time dimensions. However, these models still do not fully satisfy the needs of city emergency management, due to model complexity, setup data requirements, and computing times. Besides, there are still few related studies.
Since the submerged-area data obtained from remote-sensing images contain rich hydraulic spatial information, this paper aimed to develop a robust decision support system for integrated flood inundation management based on limited remote-sensing images. Firstly, we established a water-extraction convolutional neural network to cost-effectively and accurately extract the water body, addressing the first challenge. Specific datasets were used in training the model to separate water bodies from confusing surface features. In addition, we built a waterfront contour active tracking model to implicitly describe the flood movement interface, addressing the second challenge. The spatial upwind difference and temporal Euler difference methods were used to obtain the numerical solution of the implicit function and interpolate the submerged range in time and space. Finally, we developed a decision support system using the ArcGIS API. The decision support system combined fast raster-layer operations in the GIS platform with vulnerability models to generate flood-hazard maps for decision-makers.
The structure of the rest of this article is as follows. First, we introduce the basic principles of the proposed models. Secondly, we report how we tested and evaluated the performance of the models on a laboratory scale. Simultaneously, we took the Chaohu Lake basin, Anhui Province, as the study area to verify the actual capabilities of the models. The daily inundation ranges from June 15 to September 30 during the flooding process were simulated from the limited raw imagery. Finally, we packaged the models above to develop an information system for loss assessment. The validation of the evaluation results was carried out by cross-comparison.

Methodology
The model proposed in this paper was divided into three parts: the multi-scale flood-information-extraction model, the water-boundary tracking model, and the loss-assessment decision support system, as shown in Figure 2.

Multi-Scale Flood-Information-Extraction Model
To fully use the cost-effective optical remote-sensing data, we expected to establish a model to extract visible floodwater by using RGB band digital numbers to obtain an accurately distributed water extent with a relatively high spatial resolution.

Model Design
The multi-scale flood-information-extraction model proposed in this paper, called DEU-Net, combines the advantages of U-Net and DenseNet. It replaces the ordinary convolution modules used for feature extraction in the original U-Net structure with the densely connected blocks of DenseNet. U-Net is an encoding-decoding network based on a fully convolutional neural network and is composed of a symmetrical down-sampling process and an up-sampling process [22]. U-Net combines the information obtained from the down-sampling process with the information input from the up-sampling process to restore the details of images [28]. DenseNet [29] is a convolutional neural network with dense connections: there is a direct connection between any two layers. The input of each layer in the network structure is the union of the outputs of all previous layers, and the feature maps produced by this layer are also passed directly to all subsequent layers as input information [30]. The structure of the dense connection block is shown in Figure 3. This structure can fuse information from multiple scales to obtain richer and more adequate information, thus enhancing the network's expressive capability. In addition, since the sources of our datasets were public datasets, Landsat-8 and GF-1, the water bodies in our dataset had multi-scale features due to the gradual improvement in resolution [31]. After training on the datasets, the multi-scale flood-information-extraction model could fully extract small areas of water and keep the integrity of slender rivers, which makes it perform well in extracting water from remote-sensing images. In the down-sampling feature-extraction part of the structure, the images were resampled to a pixel resolution of 512 × 512 as input. First, the input passed through 64 filters of size 3 × 3 to obtain an initial feature map of 256 × 256 × 64 and then entered the Dense Block to extract features.
Since the characteristics of water bodies in remote-sensing images are apparent, each Dense Block set the number of layers to 3 and the growth rate to 12 to reduce graphics processing unit (GPU) consumption during model training. Each dense layer contained a convolutional layer of size 3 × 3, a batch normalization (BN) layer, and a rectified linear unit (ReLU) layer. A transition layer connected every two Dense Blocks. Each transition layer had the same structure and consisted of a bottleneck layer with a filter of size 1 × 1, an average pooling layer of size 2 × 2, and a dropout layer. The input image thus passed through 5 Dense Blocks and 4 down-sampling transition layers during the down-sampling process on the left of the model structure. The feature map output by the fifth Dense Block was of size 16 × 16 and then entered the up-sampling expansion stage on the right. The structure on the right was similar to that on the left, composed of 4 transition layers and 5 Dense Blocks. The role of these transition layers was to deconvolve and expand the abstract feature maps obtained by feature extraction, using a 3 × 3 filter. While performing up-sampling, feature maps of the same size on the left and right sides were merged through skip connections. This design improved the utilization of feature information and provided more gradient-flow information, thus enhancing the network structure's training performance and training speed.
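The dense connectivity described above can be illustrated with a minimal NumPy sketch. The toy 1 × 1 channel-mixing layers below stand in for the model's 3 × 3 convolution + BN + ReLU layers and are illustrative assumptions only; the point is that every layer consumes the concatenation of all earlier feature maps, so the channel count grows by the growth rate per layer:

```python
import numpy as np

def dense_block(x, num_layers=3, growth_rate=12, rng=None):
    # x: feature map of shape (H, W, C). Each toy layer mixes channels with a
    # random 1x1 projection followed by ReLU; a real Dense Block would use a
    # 3x3 convolution, batch normalization, and ReLU instead.
    rng = rng or np.random.default_rng(0)
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)    # dense connection: all prior outputs
        w = rng.standard_normal((inp.shape[-1], growth_rate)) * 0.01
        features.append(np.maximum(inp @ w, 0.0))  # channel mixing + ReLU
    return np.concatenate(features, axis=-1)

out = dense_block(np.ones((16, 16, 64)))
print(out.shape)  # (16, 16, 100): 64 input channels + 3 layers x growth rate 12
```

With 3 layers and a growth rate of 12, as in the configuration above, a 64-channel input yields 64 + 3 × 12 = 100 output channels, which is why a small growth rate keeps GPU memory consumption modest.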

Sample Generation
Due to distinct discrepancies in water features and the surrounding environment in different regions, the accuracy of the extraction results obtained by the same model may be uneven. If the model were trained only on standard datasets, it would be tough to get good extraction results where the surrounding environment is relatively complicated. Therefore, in addition to making universal water datasets based on Landsat-8 OLI and Gaofen-1 (GF-1) satellite data, we collected images of water bodies in various regions of China based on Landsat-8 OLI and performed detailed analysis and classification. Regarding the nature of the water body and the confusability of surface objects, the research fields with complex conditions were divided into four typical types. We first trained the model on the common datasets and then trained it further on the corresponding specific datasets to obtain region-specific models. Before starting water-extraction work, the type of research area involved should be roughly judged. If the terrain conditions are simple, the model trained on the standard datasets can be used. If the circumstances are complex, we should first determine which feature area is closest and select the corresponding model for water-body information extraction. The generation process of the datasets is shown in Figure 4, and some samples from the specific datasets are shown in Figure 5.

• Built-up areas: The surface environment in built-up areas is somewhat complex, and there are several confusing features, such as building shadows, roads, dark lawns, and dark roofs [17].

• Mountainous areas: The water bodies in mountainous areas are primarily rivers. Mountain rivers have many branches, and it is hard to accurately extract their edges. Moreover, they are easily confused with mountain shadows.
• Plateau areas: The chief water bodies in the plateau areas are plateau lakes and plateau rivers. Because of their rich mineral ions, the colors of the water bodies are different from the common ones, such as turquoise and light blue. Confusing features are mountain shadows and cloud shadows left in the image due to the shooting angle.
• Multi-water areas: These areas contain rich water resources, mainly in farming regions such as paddy fields and fish ponds. The water bodies in this area are compactly distributed with many types and different scales. They may include lakes, rivers, and ponds, as well as small puddles. Ground objects that are easy to confuse include farmland and masking nets. In the low resolution of remote-sensing images, water bodies may be indistinguishable from dark farmland.

Water Boundary Tracking Model
The results obtained in the previous section have a high spatial resolution but a low temporal resolution. To obtain a flooding process with high resolution in both time and space, we developed a model to derive the dynamic and continuous process of flood-extent change from the obtained flood-extraction results.

Curve Evolution
The active boundary tracking method is an effective tool for processing the topological changes of a moving curve over time, based on techniques of curve evolution. The problem becomes a "mean-curvature flow"-like evolving active contour that stops on the expected boundary. We present a numerical algorithm using finite differences. This method describes a continuous function $\phi(x, y, t): \Omega \times [0, \infty) \to \mathbb{R}$ as the implicit expression of a closed evolving curve $C(s, t)$, $0 \leq s \leq 1$, at time $t$. That is, the curve $C(s, t)$ at time $t$ corresponds to the zero level set of $\phi(x, y, t)$. In this paper, the flood inundation range at time $t_0$ was the source surface, and the flood inundation range at time $t_1$ was the target surface. Thus, the evolution of the flood inundation range was transformed into the process of the source surface infinitely approaching the target surface under the control of partial differential equations (see Figure 6). Let us now consider the following "fitting" term:

$$F(c_1, c_2, C) = \int_{\mathrm{inside}(C)} |u_0(x, y) - c_1|^2 \, dx \, dy + \int_{\mathrm{outside}(C)} |u_0(x, y) - c_2|^2 \, dx \, dy,$$

where $C$ is any closed curve, $u_0$ is the observed image, and the constants $c_1$ and $c_2$, depending on $C$, are the averages of $u_0$ inside and outside $C$, respectively. It is obvious that $C_0$, the boundary of the object, is the minimizer of the fitting term, as shown in Figure 7. In addition, we added some regularizing terms, such as the length of the curve $C$ and the area of the region inside it. We introduced the energy functional $F(c_1, c_2, \phi)$, defined as follows:

$$F(c_1, c_2, \phi) = \mu \cdot \mathrm{Length}(C) + \nu \cdot \mathrm{Area}(\mathrm{inside}(C)) + \lambda_1 \int_{\mathrm{inside}(C)} |u_0 - c_1|^2 \, dx \, dy + \lambda_2 \int_{\mathrm{outside}(C)} |u_0 - c_2|^2 \, dx \, dy,$$

where $\mu \geq 0$, $\nu \geq 0$, and $\lambda_1, \lambda_2 > 0$ are fixed parameters; $\mathrm{Length}(C)$ is the length of the closed contour line $C$, $\mathrm{Area}(\mathrm{inside}(C))$ is the internal area of $C$, and $\lambda_1$ and $\lambda_2$ are the weight coefficients of the fitting energy terms. In almost all of our numerical calculations (see further), we fix $\lambda_1 = \lambda_2$. Therefore, we considered the minimization problem $\inf_{c_1, c_2, \phi} F(c_1, c_2, \phi)$.
We sought the best approximation $u$ of $u_0$ as a function taking only two values, adopting the convention $\phi < 0$ inside $C$ and $\phi > 0$ outside $C$. The initialization of the function $\phi$ is simple for a regular initial contour; the curve $C$ we selected is a circle with center $(a, b)$ and radius $r$, so the initial function is

$$\phi_0(x, y) = \sqrt{(x - a)^2 + (y - b)^2} - r.$$

Keeping $\phi$ fixed and minimizing the energy with respect to the constants $c_1$ and $c_2$, it is easy to express these constants as functions of $\phi$:

$$c_1(\phi) = \frac{\int_\Omega u_0(x, y)\,(1 - H(\phi(x, y)))\, dx\, dy}{\int_\Omega (1 - H(\phi(x, y)))\, dx\, dy}, \qquad c_2(\phi) = \frac{\int_\Omega u_0(x, y)\, H(\phi(x, y))\, dx\, dy}{\int_\Omega H(\phi(x, y))\, dx\, dy},$$

where $H$ is the Heaviside function. By the previous formulas, the energy $F(c_1, c_2, \phi)$ can be written as a function of $\phi$ alone. Keeping $c_1$ and $c_2$ fixed and minimizing $F(c_1, c_2, \phi)$ with respect to $\phi$, we deduced the associated Euler-Lagrange equation for $\phi$. Parameterizing the descent direction by an artificial time $t \geq 0$, we realized the dynamic evolution of the level-set function:

$$\frac{\partial \phi}{\partial t} = \delta_\varepsilon(\phi) \left[ \mu\, \mathrm{div}\!\left( \frac{\nabla \phi}{|\nabla \phi|} \right) + \nu + \lambda_1 (u_0 - c_1)^2 - \lambda_2 (u_0 - c_2)^2 \right],$$

where $\delta_\varepsilon$ is a regularized Dirac delta function.
To discretize the equation in $\phi$, we used a finite-difference implicit scheme. First, we recall the usual notations: let $h$ be the space step, $\Delta t$ be the time step, $(x_i, y_j) = (ih, jh)$ be the grid points, and $\phi^n_{i,j} = \phi(n \Delta t, x_i, y_j)$ be an approximation of $\phi(t, x, y)$. To ensure the accuracy of the solution and avoid numerical dissipation, the spatial upwind difference scheme and the temporal Euler difference scheme were selected for the discretization. The finite differences are

$$\Delta_-^x \phi_{i,j} = \phi_{i,j} - \phi_{i-1,j}, \qquad \Delta_+^x \phi_{i,j} = \phi_{i+1,j} - \phi_{i,j},$$

where $\Delta_+^x \phi_{i,j}$ and $\Delta_-^x \phi_{i,j}$ represent the forward and backward differences in the x-direction, respectively; $\Delta_+^y \phi_{i,j}$ and $\Delta_-^y \phi_{i,j}$, defined analogously, represent the forward and backward differences in the y-direction. The divergence operator was discretized in the same spirit, and the iterative algorithm is as follows: knowing $\phi^n$, we first computed $c_1(\phi^n)$ and $c_2(\phi^n)$; then we computed $\phi^{n+1}$ by the corresponding discretization and linearization of the evolution equation.
When the implicitly expressed function $\phi^n_{i,j}$ was used for surface evolution, the zero isosurface $C(t) = \{(x, y) \in \Omega : \phi(x, y, t) = 0\}$ of the function was the evolving shape of the flood inundation range at time $t$.
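A simplified sketch of one evolution step is given below, assuming the sign convention φ < 0 inside the contour. For brevity it uses an explicit update with central differences for the curvature term rather than the semi-implicit upwind scheme described above, so it illustrates the mechanics rather than reproducing the exact solver; all parameter values are illustrative:

```python
import numpy as np

def chan_vese_step(phi, u0, dt=0.5, mu=0.2, lam1=1.0, lam2=1.0, eps=1.0, h=1.0):
    # One explicit gradient-descent step of the level-set evolution on image u0.
    # c1, c2: mean intensities inside (phi < 0) and outside (phi >= 0) the contour.
    inside = phi < 0
    c1 = u0[inside].mean() if inside.any() else 0.0
    c2 = u0[~inside].mean() if (~inside).any() else 0.0
    fy, fx = np.gradient(phi, h)                        # spatial gradients
    norm = np.sqrt(fx**2 + fy**2) + 1e-8
    curv = np.gradient(fx / norm, h, axis=1) + np.gradient(fy / norm, h, axis=0)
    delta = eps / (np.pi * (eps**2 + phi**2))           # smoothed Dirac delta
    force = mu * curv + lam1 * (u0 - c1)**2 - lam2 * (u0 - c2)**2
    return phi + dt * delta * force
```

Iterating this step drives the zero level set toward the boundary between the two intensity regions; in the present setting, u0 would encode the target inundation map at the next observation time.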

Decision Support System for Integrated Flood-Loss Assessment
The final task of this research was to develop an innovative decision support system for integrated flood-loss assessment that allows for a detailed evaluation of the consequences of a flood event. A collection of GIS-based decision-support modules constitutes the core of the proposed system [32]. The previous model provides information about the extent of the flooded area, spatial distributions of flood depth, the arrival time of the flood, and its duration at each point of the computational domain. To carry out these tasks, we need to complement the GIS decision support system with various geospatial information, such as land use, census data, building density, and road network [33]. The system mainly includes the following aspects: remote-sensing image, vector-element resources, flood-inundation data, loss-calculation model, and client implementation, as shown in Figure 8. We used the high-performance platform to obtain the flood inundation data of the computing area and utilized the data conversion tool to convert the data into the corresponding time-series layer. All data were provided to the client in representational state transfer (REST) transmission format. The client used the integrated application server to aggregate data from different sources and realized functions such as browsing, query, analysis, and calculation. To accurately and rapidly evaluate the risk of flood inundation, we selected the cloud model to assess the flood disaster loss and used the entropy weight method to calculate the weight of each indicator. The cloud model [34] is a fuzzy mathematics method based on the uncertainty of concepts in natural language, which starts from the connection between ambiguity and randomness, and realizes the uncertain conversion between qualitative concepts and their quantitative values. The cloud is composed of disordered cloud drops, and a cloud drop means the quantitative realization of a qualitative concept. 
The more cloud drops there are, the better they characterize the qualitative concept. The main parameters of the cloud model are the expectation (Ex), the entropy (En), and the hyper-entropy (He). The Forward Cloud Generator is the most popular of the cloud-model algorithms, and it was used in this study. After obtaining the certainty degrees of the different indicators through the Forward Cloud Generator, we used the entropy weight method [35] to calculate the weight of each indicator in the flood-disaster loss assessment. The detailed computational flow of the cloud model is presented in Table A1 in Appendix A.
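As an illustrative sketch (not the exact computational flow of Table A1), the Forward Cloud Generator and the entropy-weight calculation can be written as follows; the drop count, random seed, and the sample indicator matrix are assumptions for demonstration:

```python
import numpy as np

def forward_cloud(Ex, En, He, n_drops=1000, rng=None):
    # Forward (normal) cloud generator: each drop is a quantitative realization
    # of the qualitative concept (Ex, En, He), with certainty degree mu in (0, 1].
    rng = rng or np.random.default_rng(0)
    En_prime = rng.normal(En, He, n_drops)            # entropy perturbed by He
    x = rng.normal(Ex, np.abs(En_prime))              # drop positions
    mu = np.exp(-(x - Ex)**2 / (2 * En_prime**2))     # certainty degrees
    return x, mu

def entropy_weights(X):
    # X: (n_samples, n_indicators) matrix of positive indicator values.
    # Indicators with more dispersion (lower entropy) receive larger weights.
    P = X / X.sum(axis=0)
    e = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
    return (1 - e) / (1 - e).sum()

w = entropy_weights(np.array([[1.0, 2.0, 9.0],
                              [2.0, 2.0, 1.0],
                              [3.0, 2.0, 5.0]]))
print(w.round(3))  # the constant second indicator gets (almost) zero weight
```

Note how the second indicator, being identical across samples, carries no discriminating information and therefore receives an almost-zero weight, which is the rationale for combining the cloud model with entropy weighting.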

Model Performance Testing and Discussion
A laboratory environment was constructed to benchmark the performance of the model and the accuracy of the simulations. The reason for this is that the laboratory environment is controllable and the observational data obtained are more accurate.

Performance Comparison of the Water Extraction Model Technologies
To evaluate the performance of DEU-Net, we made qualitative and quantitative comparisons with the traditional NDWI method based on land-surface reflectance, the machine-learning method SVM, and two widely used deep-learning models, U-Net and DeeplabV3+. The qualitative evaluation visualized the water-extraction results produced by the five methods and compared their performance through visual interpretation. The overall accuracy (OA), the false water rate (FWR), the missing water rate (MWR), and the mean intersection over union (MIoU) were used in the quantitative assessment. The formulas are listed in Table 1.

Table 1. Four evaluation metrics for accuracy assessment of the identified outcomes.
TP, TN, FN, and FP represent the numbers of pixels of true water, true background, false background, and false water, respectively.
The results of the water-body extraction using different methods on the test images are shown in Figure 9, and the consequences of accuracy analysis are shown in Table 2.

Evaluation Index | Definition | Formula
OA | The ratio quantifying the degree of match between the predicted values and the actual values | OA = (TP + TN) / (TP + TN + FP + FN) × 100%
FWR | The ratio of the number of pixels misclassified as water to the number of predicted water pixels | FWR = FP / (TP + FP) × 100%
MWR | The ratio of the number of water pixels not recognized as water to the number of actual water pixels | MWR = FN / (TP + FN) × 100%
MIoU | The average of the intersection over union of each class between the predicted and actual values | MIoU = (1/2) [TP / (TP + FP + FN) + TN / (TN + FP + FN)]

As can be seen, NDWI based on land-surface reflectance had a limited ability to distinguish between shadows and water bodies, and it easily produced false extractions (Figure 9a-c), especially in the built-up area with dense buildings (Figure 9b). In the mountainous area (Figure 9c), the NDWI method confused mountain shadows with water bodies, and the extraction results for tributaries were poor. In the water-rich area (Figure 9d), large water bodies were extracted well, but tiny rivers were difficult to extract accurately. The results extracted by the SVM method had severe noise problems, especially in the built-up area (Figure 9b) and the multi-water area (Figure 9d). DeeplabV3+, U-Net, and DEU-Net are all deep-learning methods, and their performances were generally better than those of the water spectral index and SVM methods. In the plateau area, except for a small part of snow that DeeplabV3+ mistakenly extracted, the three methods showed little difference (Figure 9a). Moreover, they were good at distinguishing building shadows from water bodies (Figure 9b). However, as the water body in the lower-left corner was rich in aquatic plants, all three deep-learning methods showed a certain degree of erroneous extraction, among which DEU-Net had the lowest. In addition, a small part of the water bodies adjacent to the bridge was omitted by U-Net. DeeplabV3+ produced many false extractions in the mountainous area, confusing mountain shadows with water bodies (Figure 9c). In the multi-water area, several dark land patches were confused with water bodies by DeeplabV3+, and the over-extraction was serious. U-Net had two apparent omissions in the lower-left corner, and the results of DEU-Net were the best (Figure 9d).
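Given the pixel counts TP, TN, FP, and FN defined above, the four metrics can be computed directly; the sketch below assumes the standard forms of these metrics, and the example counts are illustrative:

```python
def accuracy_metrics(tp, tn, fp, fn):
    # OA, FWR, MWR in percent; MIoU averages the intersection-over-union
    # of the water class and the background class.
    oa = 100.0 * (tp + tn) / (tp + tn + fp + fn)
    fwr = 100.0 * fp / (tp + fp)
    mwr = 100.0 * fn / (tp + fn)
    miou = 0.5 * (tp / (tp + fp + fn) + tn / (tn + fp + fn))
    return oa, fwr, mwr, miou

# Illustrative counts: 80 true water, 100 true background, 20 false water, 0 missed.
print(accuracy_metrics(80, 100, 20, 0))
```

Note that FWR and MWR are complementary error views: FWR penalizes over-extraction (commission), while MWR penalizes omission, which is why both are reported alongside OA and MIoU.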
According to the results of the accuracy analysis shown in Table 2, DEU-Net performed better than the others in all four indicators, and NDWI performed worst. As one of the best models for semantic segmentation, DeeplabV3+ performed poorly in FWR in this study but slightly better in MWR. Compared with U-Net, it appeared that DeeplabV3+ was prone to overfitting when trained for water extraction. The accuracy analysis confirmed the performance of the DEU-Net proposed in this paper.

Laboratory-Scale Experiment of Dike Flood Boundary Simulation
This experiment simulated a laboratory-scale dike-breach process. The calculation area was a rectangular flat-bottomed beach of 10.0 m × 8.0 m, shown in Figure 10a. Five different land-use types were evenly distributed in the study area from left to right, with roughness values of 0.03, 0.04, 0.05, 0.06, and 0.07, respectively. Assuming that the water flow entered the beach through a fixed breach 0.4 m wide at the left-center, the flow process at the breach boundary is shown in Figure 10b. The duration of the breach was 5 s. We used the observational data and the results calculated by a 2D hydrodynamic model with the above-assumed parameters to carry out model-verification experiments, as shown in Figure 11. The advantage of this setup was that it excluded interference from factors such as observation and experimental errors, making it suitable for verifying the proposed model. Based on the observation data obtained from the simulation, we carried out numerical experiments on the model. We recorded the actual submerged range at t = 1, 2, 3, 4, and 5 s. Furthermore, we designed three sets of experimental programs based on the data from the submerged area. The specific experimental programs are shown in Table 3.

Table 3. Experimental protocol description.

Test protocol | Description
Test A | Only the submerged range at t = 5 s
Test B | The submerged ranges at t = 1 and 5 s
Test C | The submerged ranges at t = 1, 3, and 5 s

In those cases, we first established the initial continuous function $\phi_0(x, y) = \sqrt{(x - a)^2 + (y - b)^2} - r$, whose zero isosurface $C(0) = \{(x, y) \in \Omega : \phi_0(x, y) = 0\}$ was the initial evolution shape of the flood inundation range at time 0. Then we used Formula (20) to perform the iterative evolution of the surface function. The intersection of the surface with the zero plane gave the implicit contour of the flood inundation range, as shown in Figure 12. Figure 13 shows the comparison between the boundary lines of the submerged area calculated using different sets of observation data and the actual values. When only the final-state data (Test A) were used as input, the simulated values differed significantly from the observed data, especially at t = 1 s. When the initial state was added to the input (Test B), although the intermediate outcomes at t = 2 and 3 s were still unsatisfactory, the agreement improved enormously. When the submerged ranges at t = 1, 3, and 5 s were all used as inputs (Test C), the simulation was close to the actual values. It can be seen that the quality of the simulation was closely related to the constraints imposed during the simulation: more constraints better controlled the deformation of the curve and made it evolve in the right direction. The shorter the time interval, the more constraints and the more accurate the simulation results. Figure 13. Comparison of the simulation performance of flooding processes in the case of dike breach with different initial conditions. From left to right, the outcomes of Test A, Test B, and Test C are shown. The time unit in the graphs is seconds.

Case Study
To further validate the model's effectiveness, we chose a real-life environment that requires a solution to the proposed challenge.

Study Site
We selected Chaohu Lake and its surrounding areas as the study area, aiming to assess the impact of the 2020 flood in the Chaohu Lake Basin. The Chaohu Lake Basin, located in the middle of Anhui Province, includes 13 county-level units: Chaohu City, Feidong County, Feixi County, Lujiang County, Changfeng County, Shushan District, Luyang District, Yaohai District, Baohe District, Hanshan County, He County, Wuwei City, and Shucheng County. Its latitude ranges from 30°57′05″N to 32°32′20″N, and its longitude ranges from 116°25′20″E to 118°30′00″E. The geographic location is shown in Figure 14.

Data Collection and Preprocessing
The data sources used in this paper are shown in Table 4. The standard datasets for water-body information extraction produced in this paper were built from two open-source datasets, the Gaofen Image Dataset (GID) [36] and the Aerial Image Dataset (AID) [37], together with GF-1 and Landsat-8 OLI imagery, since these are relatively cost-effective and highly accessible remote-sensing image data. The datasets included images with multiple river scales and various representative interferences, such as mountain shadows, cloud occlusion, road interference, rivers with different sediment content, dry riverbed interference, and mosaic changes of images. The data source of the specific datasets was Landsat-8 OLI, which covered four region types: plateau areas, mountainous areas, built-up areas, and multi-water areas. The source of flood data in the study area was also Landsat-8 OLI. Due to the continuous rainy weather during the flood period, the remote-sensing data were insufficient. Thus, we selected only five phases of Landsat-8 OLI remote-sensing images of relatively good quality across the life cycle of the flood event in the Chaohu Lake Basin, acquired on 17 May, 20 July, 5 August, 6 September, and 24 October 2020. The geographic data were used to establish the flood-loss assessment model of the Chaohu Lake Basin. In addition, official statistics of Anhui Province and publicly released data were used to verify the correctness of the experimental results.

Results and Discussion
The whole simulation comprised three steps. First, we extracted the distribution of water bodies at different points in time from remote-sensing imagery. Second, we modeled this discrete water-distribution information to obtain a dynamic, continuous flood process. Finally, we systematically evaluated and analyzed the probable impacts of this flooding process.

Flood Extraction Results on the Study Site
The proposed DEU-Net model was used to extract the flood, and a comparison of the satellite images and the extracted areas is shown in Figure 15. The overall outcomes of flood extraction based on the date-specific remote-sensing images are shown in Figure 16. From these results, we deduced that the regions affected by the flood in the Chaohu Lake Basin were primarily located in the southwest, followed by the northeast. 20 July was almost the peak time of this flood event, and the areas around Chaohu Lake were more or less submerged. By 5 August, the floods had not receded significantly. By 6 September, the floods in the northern area had subsided, and only the southwestern areas were still severely affected. On 24 October, the flooding situation was similar to the no-flood situation on 17 May. According to the flooding-related information assembled, flood peaks in the various regions were concentrated between 18 and 24 July, and the floods had receded before 30 September, which is in good concordance with our extraction results. The water-body ranges extracted from the remote-sensing image of 17 May were selected as the original input data for the start time of the flood, the inundation range of 20 July served as the peak-time data, and the range of 24 October was selected as the input data for the recession period. The flood-spreading process was then simulated with the water-boundary tracking model, while the extraction results of 5 August and 6 September were used as a validation dataset to verify the accuracy of the simulation.
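The step from network output to a mapped flood extent can be illustrated with a short post-processing sketch: it binarizes a per-pixel water-probability map (as a trained segmentation model such as DEU-Net would produce) and removes isolated speckle. The threshold and minimum-region size are hypothetical parameters for illustration, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def probability_to_water_mask(prob, threshold=0.5, min_pixels=20):
    """Binarize a per-pixel water-probability map and drop
    connected components smaller than min_pixels pixels."""
    mask = prob >= threshold
    labels, n = ndimage.label(mask)          # 4-connected components
    if n:
        sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        keep_labels = 1 + np.flatnonzero(sizes >= min_pixels)
        mask = np.isin(labels, keep_labels)  # keep only large regions
    return mask.astype(np.uint8)

# Toy probability map: one real water body plus one noisy pixel
prob = np.zeros((32, 32))
prob[5:15, 5:15] = 0.9   # 100-pixel water body (kept)
prob[25, 25] = 0.9       # isolated speck (removed as noise)
mask = probability_to_water_mask(prob)
```

The speckle filter mimics the common practice of discarding tiny detections that are more likely sensor noise or shadow artifacts than genuine water bodies.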

Overland Flow Routing Simulation Result
We finally gained the daily flood inundation simulation results from 15 June to 30 September through the active boundary tracking model. Considering the different characteristics of the floodwater increase and decrease, the simulation process was divided into two stages: the rising water and falling water stage. The results are shown in Figure 17 and Figure 18, respectively.
Ⅰ. Simulation of the water-rising process

Figure 17 shows that the inundation area of the Chaohu Lake Basin did not increase significantly until 22 June. The spread of flooding became apparent on 16 July, when a sizeable flooded area appeared in the southwest. On 18 July, the low-lying areas around Chaohu Lake were flooded rapidly. The southwestern regions saw the fastest water rise, followed by the northeastern areas, while the southeastern areas had the lightest water accumulation. The flooding in Lujiang County and Chaohu City was especially serious. On 20 July, the flooded area reached its maximum.
Ⅱ. Simulation of the water-receding process

Figure 18 shows that after the flood peaked on 20 July, the flooded area began to decrease. The floods in the northwest and southeast receded promptly, while those in the southwest receded gradually. By 20 August, the northwestern regions had dried up, and by 25 August, the floods in the northeastern part had also retreated. Lujiang County, however, remained flooded until the end of September.
To demonstrate the effectiveness of the model in detail, we chose a small region on the western side of the study area as an area of interest (AOI). Figure 19 displays the results as binary maps of flood extents over the selected AOI in panel (a), together with an example of a water-rise course map for the same day in panel (b). More details of the spatial inundation pattern were visible in the selected region. The results showed that the tracking model based on remote-sensing observation data could swiftly generate high-resolution, accurate flood-disaster maps, so the evolution process could be seen more intuitively.

To verify the accuracy of this method, we established a confusion matrix for accuracy evaluation by comparing the simulation results of 5 August and 6 September with the remote-sensing image extraction results, as shown in Figure 20. The results are given in Table 5. The Kappa values of the two tests were 0.9078 and 0.9211, which showed that the simulation results and the remote-sensing extraction results were in good agreement. In addition, we picked certain videos, reports, and bulletins released by authoritative agencies to provide supplementary validation, as shown in Table 6. The simulation outcomes matched well with the actual situation described in the validation information.

To offer decision support to policymakers, we needed to estimate flood risk adequately. Thus, seven indicators were selected from the characteristics of the study area and the flood-inundation attributes to form a flood-disaster loss-evaluation system; these indicators are shown in Table 7. The indicators U1, U2, U3, and U4 came from basic geographic data on the Internet. The maximum-submerged-area indicator came from the extraction results of the remote-sensing images, and the average maximum-submerged-depth indicator was obtained by kriging interpolation based on the elevation points of the submerged boundary in ArcGIS [42].
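The agreement statistic reported in Table 5 can be computed directly from a confusion matrix. The sketch below shows Cohen's kappa for a hypothetical pixel-count matrix; the counts are illustrative, not those of the study.

```python
import numpy as np

def cohen_kappa(confusion):
    """Cohen's kappa from a k x k confusion matrix of
    simulated vs. remote-sensing water / non-water labels."""
    cm = np.asarray(confusion, dtype=float)
    total = cm.sum()
    po = np.trace(cm) / total                        # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2  # chance agreement
    return (po - pe) / (1.0 - pe)

# Hypothetical pixel counts: rows = simulated class, cols = extracted class
cm = [[900, 40],    # water:     correctly / wrongly simulated
      [30, 1030]]   # non-water: wrongly / correctly simulated
kappa = cohen_kappa(cm)
```

Values above roughly 0.8 are conventionally read as near-perfect agreement, which is why the reported Kappa values of 0.9078 and 0.9211 indicate a close match between simulation and extraction.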
The average flooding-duration indicator was obtained by creating random sample points on the continuous simulation results and calculating the time difference between the beginning and the end of flooding at each point. We divided the degree of flooding loss into five grades and delimited the interval of each indicator. The parameters Ex and En of the cloud model were calculated by Ex = (Bmin + Bmax)/2 and En = (Bmax − Bmin)/6, and the value of the parameter He was manually adjusted through multiple trials. In these formulas, Bmin and Bmax represented the lower and upper boundary values of a specific interval of an indicator, respectively. The calculated parameter matrix of the cloud model is shown in Table 8. For each indicator, according to this parameter matrix, we generated the certainty-degree matrix under different loss levels through the forward cloud generator. Considering the randomness of the calculation, we repeated the computation 1000 times to obtain higher accuracy. After obtaining the certainty-degree matrixes of all indicators, we combined them with the weight coefficients calculated through the entropy weight method to perform the comprehensive flooding-loss evaluation of the 13 county-level regions in the Chaohu Lake Basin. The results of the flooding-loss evaluation are shown in Table 9, and the distribution of disaster losses and the two-dimensional flood-impact assessment visualization platform are shown in Figure 21. Lujiang County and Chaohu City, which were closest to Chaohu Lake, were the worst affected, while Yaohai District and Shushan District, which were farthest away, were the least affected. The more severely affected areas were primarily distributed in the southwest and northeast. According to the Emergency Management Department of Anhui Province, Lujiang County, Feixi County, Chaohu City, Shucheng County, Hanshan County, He County, and Wuwei County in the Chaohu Lake Basin were considered the hard-hit counties, and these statistics were in good agreement with our evaluation results (http://yjt.ah.gov.cn/public/9377745/145229191.html).
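The cloud-model step above can be sketched as follows, assuming the standard forward normal cloud generator: Ex and En are derived from a grade interval, a randomized entropy En′ ~ N(En, He²) is drawn for each of the 1000 runs, and the certainty degree of a measured indicator value x is the averaged Gaussian membership. The grade interval and He value below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def cloud_parameters(b_min, b_max, he):
    """Ex and En from a grade interval [b_min, b_max] of an indicator,
    as in Ex = (Bmin + Bmax)/2 and En = (Bmax - Bmin)/6; He is tuned
    manually by trial."""
    ex = (b_min + b_max) / 2.0
    en = (b_max - b_min) / 6.0
    return ex, en, he

def certainty_degree(x, ex, en, he, n_runs=1000):
    """Forward normal cloud generator: average the membership of
    value x over n_runs draws of the randomized entropy En'."""
    en_prime = rng.normal(en, he, size=n_runs)
    en_prime = np.where(np.abs(en_prime) < 1e-12, 1e-12, en_prime)
    return float(np.mean(np.exp(-(x - ex)**2 / (2.0 * en_prime**2))))

# Hypothetical grade interval for one indicator, He chosen by trial
ex, en, he = cloud_parameters(0.2, 0.4, 0.01)
mu_center = certainty_degree(0.3, ex, en, he)   # value at the interval center
mu_edge = certainty_degree(0.45, ex, en, he)    # value outside the interval
```

A value at the interval center belongs to that loss grade with certainty 1, while a value outside the interval receives a small certainty, which is how overlapping grade boundaries are handled softly.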

Conclusions
Rapid-response mapping of floodwater extents in flood events, such as coastal and fluvial floods, is essential for early damage assessment and rescue operations, but it also presents significant image-interpretation challenges. Images from visible-band (red-green-blue, RGB) remote sensors are the most common and cost-effective for real-time applications. Despite the poor quality of optical remote-sensing images caused by the weather conditions experienced during a flood event, this study developed a robust decision support system that fully and effectively uses limited and intermittent optical remote-sensing data. The system was constructed from two primary modules. First, an automated multi-scale water-extraction model was established to extract visible floodwater using RGB-band digital numbers. Visible floodwater denotes the floodwater on the ground surface that can be observed by remote sensing and is a crucial information source for analyzing real-time floodwater extent; floodwater under tree canopies and in shadows was excluded from the visible-floodwater class. The methodology was applied to delineate the visible-floodwater distribution from selected Landsat-8 optical image data acquired during the 2020 Chaohu flood event. The spatial resolution of the identified outcomes was high, but the temporal resolution was low. Second, we developed a waterfront active tracking model to simulate the dynamic and continuous change of the flood range based on the obtained flood-extraction results, converting the flooding process into the numerical solution of the partial differential equation of the boundary function and ultimately obtaining a flooding process with high resolution in both time and space.
An essential conclusion is that relatively high-resolution optical RGB imagery can provide the source information for rapid-response mapping of visible-floodwater distributions over the life cycle of a flood event, despite its limited temporal resolution. The decision support system developed in this study can serve as a primary tool for rapidly extracting visible floodwater from RGB image data and for estimating flood risk through temporal interpolation. The application results showed that the system had high computational efficiency and clear visualization effects, providing a quick overview of conditions and comprehensive insights into the affected area so that decision-makers and relief organizations can distribute their resources with maximum efficiency.
In future work, we expect to compare and consider selecting multi-source satellite data with relatively higher spatial and temporal resolution and use data-fusion techniques to further improve the accuracy and real-time performance of the system.

Appendix A
In this section, we present material that we believe is essential to the reproducibility of our study, although not indispensable to the main part of the paper. The calculation principle of the forward cloud generator is described below. Supposing there are m evaluation objects and n evaluation indicators for each object, the normalized flood-loss evaluation matrix R can be constructed. The weight matrix W, obtained by the entropy weight method, is then combined with the certainty degrees of the indicators calculated by the cloud model to obtain the final flood-disaster loss degree of each region. The whole calculation process embedded in the system is shown in Table A1. Table A1. The risk calculation process of the cloud models embedded in the system.

Input | Steps | Output
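The input-steps-output process summarized in Table A1 might be sketched as follows, assuming the standard entropy-weight formulation and a simple weighted aggregation of the cloud-model certainty degrees; the matrices below are illustrative, not the study's data.

```python
import numpy as np

def entropy_weights(R):
    """Entropy-weight coefficients from a normalized m x n
    evaluation matrix R (m regions, n indicators)."""
    m, n = R.shape
    p = R / R.sum(axis=0, keepdims=True)     # proportion per indicator column
    p_safe = np.where(p > 0, p, 1.0)         # log(1) = 0 for empty cells
    e = -(p * np.log(p_safe)).sum(axis=0) / np.log(m)  # entropy per indicator
    d = 1.0 - e                              # divergence degree
    return d / d.sum()                       # normalized weights

def loss_degree(certainty, weights):
    """Weighted aggregation: certainty is an n x g matrix of cloud-model
    certainty degrees (n indicators, g loss grades); returns the
    comprehensive certainty for each grade."""
    return weights @ certainty

# Toy data: 2 regions x 3 indicators, then 3 indicators x 2 loss grades
R = np.array([[0.9, 0.2, 0.4],
              [0.1, 0.8, 0.6]])
w = entropy_weights(R)
grade = loss_degree(np.array([[0.7, 0.3],
                              [0.2, 0.8],
                              [0.5, 0.5]]), w)
```

Indicators whose values vary more across regions carry more information and therefore receive larger entropy weights; the grade with the highest aggregated certainty is taken as the region's loss level.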