Micro-scale Flood Hazard Assessment Based on Catastrophe Theory and an Integrated 2-D Hydraulic Model: A Case Study of Gongshuangcha Detention Basin in Dongting Lake Area, China

Assessments of urban flood hazards are crucial for planning and for the design of flood early-warning systems. Moreover, hazard risk assessment is useful for emergency planning and insurance. There are two common methods for conducting flood hazard risk assessments (FHRA): those based on physical models and those based on parameters. Although physical models can simulate flood propagation processes accurately, they also have obvious shortcomings. Parameter-based FHRAs are more comprehensive because they emphasize the analysis of hazard factors. However, this approach also has various flaws, including its qualitative, macro-scale and highly subjective nature. In this study, the strengths of both methods were combined to develop a new micro-scale FHRA. Taking the FHRA of a flood storage and detention area of Dongting Lake as an example, this study used high-precision digital elevation model (DEM) data generated from an airborne light detection and ranging (LiDAR) point cloud to construct a two-dimensional (2-D) flood propagation model. Micro-scale FHRAs were then performed using eight selected FHR indicators based on catastrophe theory. By automatically calculating the FHR value of each assessment unit through hierarchical recursion, the catastrophe progression method effectively avoided the uncertainty in weight assignment that commonly affects parameter-based methods. The FHRA results obtained under 144 different orderings of the assessment indicators also show that the proposed method has low sensitivity to the ranking of FHR indicators, as well as high tolerance for differences in assessment results arising from subjective rankings.


Introduction
Flooding can represent a severe natural threat, with large flood events causing substantial casualties and property losses every year [1]. With increased urbanization along riverbanks, alluvial areas are increasingly affected by worsening flooding events in terms of frequency and intensity [2]. Decision makers have used various technical means and assessment methods to formulate flood policies, reduce losses caused by flooding and improve the level of flood hazard management. In the early period of flood hazard research, the focus was on the analysis of flood threats to people; however, more recently, the focus has gradually moved to accurate qualitative and quantitative analyses of disasters, environmental vulnerability and human/material exposures [3].
Three flood hazard risk (FHR) factors (the risk of disaster-causing factors, environmental vulnerability and human/material exposures) have become the cornerstones of current FHR analysis and assessment studies [3]. Many methods for FHR assessment (FHRA) have been proposed. These can be divided into two main categories: (i) traditional FHRA methods based on physical models (PMs) and (ii) parameter-based flood vulnerability assessment methods [4]. The main difference between the two categories is that the former employs PMs to perform FHRAs for specific flood events [5][6][7][8] and can be combined with flood loss assessment models to determine the economic losses in affected areas [9]. The latter uses collectable data to construct assessment indicators on a spatiotemporal scale, and then explores useful information to conduct qualitative assessments of regions' flood vulnerability. Examples of parameter-based flood vulnerability assessment include the flood vulnerability index (FVI) [10,11], the set pair analysis-variable fuzzy sets (SPA-VFS) model [12], spatial multi-criteria analysis (SMCA) [13], the block maxima model (BMM) [14] and the decision tree (DT) approach [15].
PM-based FHRAs can accurately simulate the propagation process of a flood event to obtain information including the scope of flood inundation, submerged depth and duration, which are the focal points of FHR research. Digital topographic data are one of the most important data sources for flood simulation research [16]. Multiple studies have shown that high-resolution digital topographies play an important role in improving the accuracy of flood simulations [17,18]. In recent years, the rapid development of light detection and ranging (LiDAR) technologies has greatly facilitated the acquisition and application of high-resolution digital elevation model (DEM) data. Macro-scale, high-resolution DEM data products have become common, which has popularized the application of such data to flood simulation research [19][20][21]. This has provided new opportunities for FHR research, such as micro-scale FHR mapping [22], flood damage assessment for individual buildings [23,24] and micro-scale FHR analysis [25]. The computational complexity of the models used in these studies increases dramatically as their computational grids become more refined. More complex model designs require longer computation times and increase computational instability. Balancing model complexity against actual application requirements has therefore become a critical research focus [26,27].
Despite the ability of PM-based FHRAs to accurately simulate the propagation process of individual flood events, they are somewhat limited. On one hand, extra emphasis is placed on studying the risk of disaster-causing factors without sufficiently considering the environmental vulnerability and human/material exposures. For FHR parameters, only a few key factors (such as flood submerged depth, flow velocity and duration) are considered for partitioning areas into risk levels to prepare FHR maps. On the other hand, although the characteristic parameters of PM-based FHRAs and the information on disaster-related economic losses can be easily conveyed to and comprehended by the public, the method depends on detailed topographic, hydrological and economic data. If insufficient data are available, the results of the FHR analysis cannot be verified [4].
In contrast, the parameter-based FHRA method is more comprehensive. In addition to considering disaster-causing factors, it concurrently analyzes the environmental vulnerability and human/material exposures, such as population distribution, building vulnerability, DEMs, slope, river and waterway systems and road traffic networks [13,28,29]. The parameter-based FHRA method uses only a small number of attainable flood impact parameters to conduct flood vulnerability assessments of the entire complex system. Unlike PM-based FHRAs, parameter-based FHRAs focus on qualitative evaluation, with many of the applied parameters lacking an explicit spatial distribution or temporal scale. Consequently, parameter-based FHRAs are usually macro-scale. This is often problematic for policy makers in light of current trends toward refined FHR management. Another issue with parameter-based FHRA methods is that each of the multiple parameters in the assessment system must be assigned a weight to determine the relative importance of the various parameter types. Although many studies have introduced ways to assign weights to parameters, such as the analytic hierarchy process [30] and multi-criteria analysis [13], the subjectivity involved cannot be ignored. Weight assignments vary between users owing to their dissimilar knowledge backgrounds and industry experiences, resulting in different FHRA results.
To address the aforementioned problems, this study fully considered the advantages and disadvantages of both the PM- and parameter-based FHRA methods before proposing a micro-scale FHRA method based on catastrophe theory that integrates airborne LiDAR point cloud data and a two-dimensional (2-D) hydraulic model. The Gongshuangcha flood storage and detention area (FSDA), located in China's Dongting Lake region, was selected to test the method. First, a high-resolution DEM was generated from the airborne LiDAR point cloud data. Next, a 2-D hydraulic model was used to construct a refined flood propagation model to obtain high-resolution flood simulation results. Spatial discretization of the FHR indicators was conducted after selecting the FHR factors, and a comprehensive FHR assessment was then performed using the catastrophe progression method.

Study Area
Flood storage and detention areas are an important component of river flood control systems that improve safety and mitigate impacts in key areas. These areas require realistic and economical flood control planning, which can become a regional or global consideration requiring compromise and cooperation between many interests. At present, there are 98 major flood storage and detention areas in China, mainly distributed along the middle and lower reaches of the Yangtze, Yellow, Huaihe and Haihe rivers. Although flood peaks and flows are high along the Yangtze River, its channels have insufficient discharge capacity resulting in frequent flooding events throughout recorded history. The Dongting Lake flood storage and detention project is an important component of the basin's overall flood management.
The Dongting Lake region, located in the middle reaches of the Yangtze River (111°40′-113°10′ E and 28°30′-30°20′ N) and with a total area of 18,780 km², consists of basin-shaped lakes and plains surrounded by mountains. Dongting Lake has an intricate system of feeder drainages distributed in a fan shape, but only one outlet from which the Yangtze River continues downstream. Owing to its special geographical location and complex morphology, this area has always been prone to flooding and requires intense seasonal management. Extensive human, material and financial resources are dedicated to dike management (during winter) and flood control (during summer). There are 266 dikes of different sizes stretching a total of 5812 km; the primary and secondary dikes are 3471 and 1509 km long, respectively. However, flood damage in the Dongting Lake area remains severe: direct economic losses in 1996 and 1998 reached 2.14 billion and 1.27 billion US$, respectively. These impacts, along with the heavy repair and protection burden, have significantly limited the area's socio-economic development.
The Gongshuangcha detention basin is one of 24 large flood storage and detention areas in the Dongting Lake area. Located north of Yanjiang City, it is surrounded by water (Figure 1), bordering the Caowei River to the north, the Huangtubao River to the south, Dongting Lake to the east and Chishan to the west. The flood detention area covers 293.0 km² (of which cultivated land accounts for 157.6 km²) and the total dike length is 121.74 km. There are three towns in the flood detention area, with a registered population of nearly 164,200. Owing to this large population, a flood disaster would inevitably cause major losses to the local economy, so flood hazard evaluation in this area is of great significance.

LiDAR Data
A HARRIER 68i airborne laser measurement system from TopoSys (TopoSys GmbH, Biberach, Germany) was used for aerial photography and measurement in the study area. The digital camera in the airborne laser measurement system was a Rollei Metric AIC Pro (60 megapixel) from Trimble (Sunnyvale, CA, USA). The inertial navigation system was an Applanix POS/AV series unit with a sampling frequency of 200 Hz. The laser scanner was a Riegl LMS-Q680i with a pulse repetition frequency of 80-400 kHz and a scanning angle of 45/60°.
High-resolution DEM data were generated by processing the collected airborne LiDAR point cloud (Figure 2 and Figure 3). The DEM data had a spatial resolution of 1 m and the DEM grid contained 22,000 rows × 51,000 columns. The coordinate system was the Beijing Geodetic Coordinate System 1954 and the height datum was the 1985 National Elevation Benchmark. Seventy control points were surveyed in the field; the average elevation error of the DEM was 0.04 m. The overall terrain was relatively flat, with elevations ranging from 4.55 to 35.09 m. Many microtopographic features, such as dikes, ditches and fields, were captured by the DEM.

Two-dimensional (2-D) Hydraulic Model
The flood routing model used the 2-D unsteady shallow water equations to describe the water flow. First, based on the natural topography and existing hydraulic structures, the computational area was discretized with an unstructured mesh. Next, the water mass, momentum and concentration balances were established for each mesh unit using the finite volume method over the time series to ensure conservation. Approximate Riemann solutions were used to calculate the normal numerical flux of water mass and momentum across unit boundaries to ensure calculation accuracy. The model offers three approximate Riemann solvers: Osher, flux vector splitting (FVS) and flux-difference splitting (FDS).
The model transformed the 2-D problem into a series of local one-dimensional (1-D) problems by integral discretization of the finite volume method and rotation invariance of flux coordinates using the following calculation principles.

Basic Control Equation
The vector expression of the conservative 2-D shallow water equations is [31,32]:

∂q/∂t + ∂f(q)/∂x + ∂g(q)/∂y = b(q)    (1)

where q = [h, hu, hv]^T is the conserved physical vector, f(q) = [hu, hu² + gh²/2, huv]^T is the flux vector in the x direction and g(q) = [hv, huv, hv² + gh²/2]^T is the flux vector in the y direction. Here, h is the water depth; u and v are the depth-averaged flow velocity components in the x and y directions, respectively; and g is the acceleration of gravity. The source-sink term b(q) is:

b(q) = [qw, gh(s0x − sfx), gh(s0y − sfy)]^T    (2)

where s0x and sfx are the river slope and friction slope in the x direction, respectively; s0y and sfy are the river slope and friction slope in the y direction, respectively; and qw is the net rain depth per unit time. The friction slopes in the model are estimated by Manning's formula.
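To make the terms above concrete, the flux vectors f(q), g(q) and the source-sink term b(q) can be sketched in Python. This is a minimal illustration under the stated definitions, not the authors' implementation; the Manning roughness coefficient `n_manning` is an assumed input.

```python
import numpy as np

G = 9.81  # acceleration of gravity (m/s^2)

def flux_x(q):
    """x-direction flux f(q) = [hu, hu^2 + g h^2/2, huv], with q = [h, hu, hv]."""
    h, hu, hv = q
    return np.array([hu, hu**2 / h + 0.5 * G * h**2, hu * hv / h])

def flux_y(q):
    """y-direction flux g(q) = [hv, huv, hv^2 + g h^2/2]."""
    h, hu, hv = q
    return np.array([hv, hu * hv / h, hv**2 / h + 0.5 * G * h**2])

def source(q, s0x, s0y, n_manning, qw=0.0):
    """Source-sink term b(q); friction slopes follow Manning's formula,
    sf = n^2 * u * sqrt(u^2 + v^2) / h^(4/3) in each direction."""
    h, hu, hv = q
    u, v = hu / h, hv / h
    speed = np.hypot(u, v)
    sfx = n_manning**2 * u * speed / h**(4.0 / 3.0)
    sfy = n_manning**2 * v * speed / h**(4.0 / 3.0)
    return np.array([qw, G * h * (s0x - sfx), G * h * (s0y - sfy)])
```

For still water (u = v = 0) the friction slopes vanish and the momentum source reduces to the bed-slope term gh·s0.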

Discrete Equations
The divergence theorem was applied to Equation (1) over any unit Ω for integral discretization, giving the basic equation of the finite volume method (FVM):

∂/∂t ∫∫_Ω q dω + ∮_∂Ω F(q)·n dL = ∫∫_Ω b(q) dω    (3)

where n is the outward unit normal vector on the unit boundary ∂Ω; dω and dL are the area and line integral infinitesimal elements, respectively; and F(q)·n is the normal numerical flux, with F(q) = [f(q), g(q)]. The form of the normal flux shows that its solution transforms a 2-D problem into a series of local 1-D problems. The finite volume Ω is shown in Figure 4. The vector q is taken as the unit-averaged value and, to first order, is assumed constant within the unit. Discretizing Equation (3) gives:

dq/dt = −(1/A) Σ_{j=1}^{m} (F(q)·n)j Lj + b*(q)    (4)

where A is the area of unit Ω, m is the number of unit sides, Lj is the length of the jth side of the unit and b*(q) is the source term. The flux in the normal direction can be written as:

F(q)·n = f(q)cosΦ + g(q)sinΦ    (5)

The fluxes f(q) and g(q) are invariant under rotation and satisfy:

f(q)cosΦ + g(q)sinΦ = T(Φ)⁻¹ f(T(Φ)q)    (6)

where Φ is the angle between the normal vector n and the x axis, and T(Φ) and T(Φ)⁻¹ are the geometric transformation matrix and its inverse, which are given by:

T(Φ) = [[1, 0, 0], [0, cosΦ, sinΦ], [0, −sinΦ, cosΦ]],  T(Φ)⁻¹ = [[1, 0, 0], [0, cosΦ, −sinΦ], [0, sinΦ, cosΦ]]    (7), (8)

In turn, this gives:

F(q)·n = T(Φ)⁻¹ f(q̄), with q̄ = T(Φ)q    (9)

where q̄ is the transform of vector q, whose flow components are in the normal and tangential directions. Therefore, the key to solving Equation (9) is evaluating f(q̄). Applying the divergence theorem and flux rotation invariance thus transforms the 2-D problem into a set of 1-D problems in the normal direction. Since q may differ between adjacent units, its value can be discontinuous across unit boundaries, so the model evaluates f(q̄) by solving a local 1-D Riemann problem, i.e., the initial-value problem:

∂q̄/∂t + ∂f(q̄)/∂x̄ = 0    (10)

which meets the initial condition:

q̄(x̄, 0) = q̄L for x̄ < 0; q̄(x̄, 0) = q̄R for x̄ > 0    (11)

The origin of x̄ is located at the midpoint of the unit side and its positive direction is the exterior normal direction. Therefore, f(q̄) is the exterior normal flux at the origin, and q̄L and q̄R are the left and right state vectors, respectively. Supposing the initial state at t = 0 is known, solving the Riemann problem yields the flux at the origin x̄ = 0.
There are three common ways to estimate the normal flux f(q̄). The first is the arithmetic average, using either the mean of the unit fluxes on both sides of a common edge, f(q̄) = [f(q̄L) + f(q̄R)]/2, or the flux of the averaged conserved quantities, f(q̄) = f([q̄L + q̄R]/2); the remaining approaches follow the approximate Riemann solutions introduced above.
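The arithmetic-average flux estimate across a unit edge, combined with the flux rotation invariance of Equations (5)-(9), can be sketched as follows. This is an illustrative fragment with assumed names, not the study's simulation program.

```python
import numpy as np

G = 9.81  # acceleration of gravity (m/s^2)

def flux_x(q):
    """x-direction flux f(q) of the shallow water equations, q = [h, hu, hv]."""
    h, hu, hv = q
    return np.array([hu, hu**2 / h + 0.5 * G * h**2, hu * hv / h])

def rotate(q, phi):
    """T(phi) q: express the flow components in normal/tangential directions."""
    h, hu, hv = q
    c, s = np.cos(phi), np.sin(phi)
    return np.array([h, c * hu + s * hv, -s * hu + c * hv])

def edge_flux_arithmetic(qL, qR, phi):
    """Normal numerical flux across an edge whose outward normal makes angle
    phi with the x axis, using the arithmetic average of the rotated fluxes,
    then mapping back with T(phi)^-1 = T(-phi)."""
    fbar = 0.5 * (flux_x(rotate(qL, phi)) + flux_x(rotate(qR, phi)))
    return rotate(fbar, -phi)
```

For an edge aligned with the y axis (phi = 0) and identical left/right states, this reduces to f(q) evaluated once, as expected for a smooth solution.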
A flood process simulation program was written based on the above model principles. All three flux estimation approaches were implemented in the model for the coupled flood solution, and the program can switch between them as needed.

Boundary Conditions
The model determined five types of water flow boundaries: the land boundary, the open boundary of slow and rapid flow, the inner boundary, the dynamic boundary of water exchange and the wetland tributary boundary.

Equation Solutions
The equations are discretized in an explicit scheme and solved by time-interval iteration. According to the 'Feasibility Study Report on the Flood Diversion and Storage Project of Qianliang Lake, Gongshuangcha and East Dyke of Datonghu in the Dongting Lake Area' approved by the China Ministry of Water Resources in 2009, during the typical flooding events of 1954, 1966 and 1998, when the incoming flood flow in the three flood storage areas was 8000-12,000 m³/s, the water-level impact on Chenglingji was minimal. For the 30-year flood of 1954, the study determined that the maximum flood discharge of the Gongshuangcha flood diversion sluice was 3630 m³/s and the designed flood discharge water level was 33.10 m.
We simulated the flood routing process by using this sluice to control the flood discharge, setting its discharge hydrograph as the input condition for the model calculation. The conditions were set as follows: when the water level (H) was < 31.63 m, the flow was at the free outflow stage with a rate of 3630 m³/s; when 31.63 < H < 32.60 m, the flow was at the submerged outflow stage with a rate of 3050 m³/s; and when 32.60 < H < 33.65 m, a temporary flood diversion sluice was required.
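The staged rule above amounts to a piecewise rating of discharge against water level; a minimal sketch follows (the function name is hypothetical, and the discharge during the temporary-sluice stage is not specified in the text, so it is left undefined here).

```python
def sluice_discharge(H):
    """Discharge (m^3/s) of the flood diversion sluice as a piecewise
    function of the water level H (m), following the staged rule above."""
    if H < 31.63:        # free outflow stage
        return 3630.0
    elif H < 32.60:      # submerged outflow stage
        return 3050.0
    elif H < 33.65:      # a temporary flood diversion sluice is required;
        return None      # its discharge is not specified in the source text
    raise ValueError("water level above the design range")
```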
The 2-D hydraulic model mesh was an unstructured triangulation with a total of 83,378 triangular units. Using the 1-m resolution DEM data, nearest-neighbor interpolation was used to obtain the elevation of each triangle apex and center point as the initial condition for the model calculation. A flood submergence process of 4 days and 3 hours was simulated, by the end of which the flood storage and detention area was completely submerged (Figure 5). The final completely submerged water depth data (Figure 6) were used for the flood risk analysis. The flooding depth in the eastern part of Gongshuangcha was generally deep, while in the west (farther from the inflow sluice) it was relatively shallow.
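The nearest-neighbor elevation lookup used to initialize the mesh can be sketched as follows, assuming a regular, north-up 1-m DEM raster; the function name and grid conventions are assumptions for illustration.

```python
import numpy as np

def sample_dem_nearest(dem, x0, y0, cell, xs, ys):
    """Nearest-neighbour sampling of a regular DEM raster.

    dem[i, j] holds the elevation at x = x0 + j*cell, y = y0 - i*cell
    (north-up raster with origin at the top-left cell centre); xs, ys are
    the coordinates of the mesh vertices/centres to sample."""
    cols = np.clip(np.rint((np.asarray(xs) - x0) / cell).astype(int),
                   0, dem.shape[1] - 1)
    rows = np.clip(np.rint((y0 - np.asarray(ys)) / cell).astype(int),
                   0, dem.shape[0] - 1)
    return dem[rows, cols]
```

Each query point is simply snapped to the closest 1-m cell centre, which is exact for mesh nodes lying inside the raster extent.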

Catastrophe Progression Method
Catastrophe theory, first proposed by Thom [33] and subsequently refined by Zeeman et al. [34], is a mathematical theory to describe the behavior of various non-continuous systems in nature and society. It is mainly used to study the process by which complex systems develop from gradual and quantitative changes to catastrophic and qualitative changes, such as volcanic eruptions, earthquakes and building collapses. Flood events are typical catastrophic processes in the natural environment. The theory assumes that a complex system is controlled by a series of parameters. When the system is in a stable state, the state function used to describe that system should return a unique extreme value, meaning that the state of the system is not affected when the aforementioned parameters change within their respective intervals. If a parameter changes within a certain interval and causes the state function to return more than one extreme value, the system is considered to be unstable. This means that the stability of a system (or the lack thereof) can be determined by observing changes in the extreme value returned by the state function.
The extreme points of the state function are the points where its derivative is zero. Catastrophe theory refers to a point where the derivative of the state function is zero as a critical point, and the surface formed by the critical points is known as the equilibrium surface M. Suppose the state of the system is described by a state function F of n mutually independent state variables x1, …, xn together with a set of control variables. The set of critical points of this state function is obtained by solving the partial differential equations:

∂F/∂xi = 0, i = 1, …, n    (12)

When a control variable changes within a certain region, it can cause sudden changes in the state variables. A set of singularity points S must then be identified from the set of critical points, where S is the subset of M formed by all the degenerate critical points of F. The singularity points satisfy not only Equation (12) but also Equation (13):

det(H(F)) = 0    (13)

where det represents the determinant and H(F) is the Hessian matrix of F (i.e., the second-order partial derivative matrix of F). Thom [33] pointed out that many catastrophes in nature can be expressed using particular geometric shapes. A potential function generally contains no more than four control variables: the natural world is three-dimensional and time is added as the fourth variable. Thom [33] proved that when a potential function contains no more than four control variables, it has at most seven forms, collectively referred to as the elementary catastrophe types (Table 1). The first four are the most widely applied in practice because they involve only one state variable. A multi-objective assessment system often involves more than a dozen (or even dozens of) assessment indicators.
With that many indicators, the elementary catastrophe function cannot be applied directly because a maximum of only four control variables are allowed when calculating the state function. At the same time, it is difficult to perfectly rank the importance of the assessment indicators when too many are involved in the calculation. A compromise solution is to place all assessment indicators in a multilevel top-down classification by grouping those indicators that satisfy a particular common characteristic into a small class. Eventually, an inverted hierarchical tree of assessment indicators is constructed. The initial assessment indicators are located at the leaf nodes of the tree structure, with each parent node having a maximum of four child nodes. The relationship between the parent and child nodes can be regarded as that between the state and control variables under catastrophe theory. The different values of the control variables affect the value of the state variables; the effect of the former on the latter can be either large or small. It is for this reason that catastrophe theory is suitable for multi-objective assessments. During specific applications, the actual meanings of the assessment indicators differ, as do their dimensions and value ranges. To facilitate the calculation of the elementary catastrophe model, it is necessary to first normalize the values of the various indicators [35].

(1) Fold catastrophe
There is only one control variable in a fold catastrophe and so it can be directly normalized to the range of [0,1] according to the value range of the control variable.

(2) Cusp catastrophe
In a cusp catastrophe, the value range of x is limited to [0,1]. The normalization formulas for the control variables u and v are then:

x_u = u^(1/2), x_v = v^(1/3)    (14)

(3) Swallowtail catastrophe
By the same principle, a swallowtail catastrophe is normalized to obtain:

x_u = u^(1/2), x_v = v^(1/3), x_w = w^(1/4)    (15)

(4) Butterfly catastrophe
Normalization of the butterfly catastrophe gives:

x_t = t^(1/2), x_u = u^(1/3), x_v = v^(1/4), x_w = w^(1/5)    (16)

The normalization equations are also known as fuzzy membership functions. The increasing root orders applied to the control variables indicate that their impacts on the state variable differ: in a cusp catastrophe, the control variable u is more important than v; in a swallowtail catastrophe, u > v > w; and in a butterfly catastrophe, t > u > v > w. The elementary catastrophe type of each node is determined by the relationship between the parent and child nodes in the tree structure. Starting from the leaf nodes at the bottommost layer, the catastrophe level is calculated step by step until the value of the comprehensive assessment indicator at the topmost level is reached. When the child nodes of the same parent mutually affect one another, the average of their catastrophe levels is passed up to the next level. If such a mutually constraining relationship does not exist, then, according to catastrophe theory, the minimum catastrophe level is passed to the parent node. Catastrophe theory has been widely applied in the field of ecological environments, including assessing island ecological environments [36], the sustainable use of urban water resources [37] and the risks of water pollution [38].
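One level of the catastrophe progression described above can be sketched as a small function: the i-th most important control variable takes the (i+1)-th root, and the results are either averaged (complementary indicators) or reduced to their minimum. This is a schematic illustration, not the authors' code.

```python
def catastrophe_value(controls, complementary=True):
    """One step of the catastrophe progression method.

    controls: normalized control-variable values in [0, 1], ordered from
    most to least important.  1 variable -> fold, 2 -> cusp,
    3 -> swallowtail, 4 -> butterfly (the elementary catastrophe types)."""
    if not 1 <= len(controls) <= 4:
        raise ValueError("an elementary catastrophe has 1-4 control variables")
    if len(controls) == 1:            # fold: value passed through directly
        return controls[0]
    # the i-th most important variable takes the (i+2)-th root: 2, 3, 4, 5
    x = [c ** (1.0 / (i + 2)) for i, c in enumerate(controls)]
    # complementary (mutually affecting) indicators are averaged; otherwise
    # the minimum is passed up to the parent node
    return sum(x) / len(x) if complementary else min(x)
```

Calling this bottom-up along the indicator tree yields the comprehensive assessment value at the root.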

Selection and Normalization of FHR Indicators
It is critical to carefully consider the FHR factors and their interrelationships [39]. There are definite differences in the FHR factors affecting different regions, such that a factor with significant impacts on a certain region may not be as important elsewhere [40]. For the Dongting Lake region, FHR-related factors representative of the three aspects (the risk of disaster-causing factors, environmental vulnerability and human/material exposures) were selected according to the region's characteristics and by making full use of previously accumulated remotely sensed geographic information and socioeconomic data. The three key indicators for the dangers of the disaster-causing factors were the maximum submerged depth, maximum flow velocity and flood submergence duration. The four indicators for environmental vulnerability were the DEMs, ditches, river systems and road networks. For the human/material exposures, the main indicator considered was the flood vulnerability of buildings. In this study, the 1-m DEM grid was used as the basic assessment unit for conducting micro-scale FHRAs. The selection of comprehensive FHR indicators is shown in Figure 7; the indicators are described in detail below.

(1) Maximum Submerged Depth
Flood submerged depth, flow velocity and duration are the three most important FHR indicators. Some PM-based FHRA methods directly generate FHR maps using submerged depth values [6]. In this study, the calculation results of the flood propagation model were interpolated to the DEM grid according to the inverse distance weighted algorithm [41]. After interpolation, the maximum submerged depth of each assessment unit was calculated to generate the overall maximum submerged depth map of the study area, as shown in Figure 8a. The flood submerged depth was generally 4-6 m.
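The inverse distance weighted interpolation used to map model results onto the DEM grid can be sketched as follows. This is a brute-force illustration with assumed names; production code would use a spatial index or a library routine.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation: each query point receives a
    weighted mean of the known values, with weights 1 / distance**power."""
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    out = np.empty(len(xy_query))
    for k, p in enumerate(np.asarray(xy_query, float)):
        d = np.linalg.norm(xy_known - p, axis=1)
        if d.min() < eps:                    # query coincides with a sample
            out[k] = values[d.argmin()]
        else:
            w = 1.0 / d**power
            out[k] = np.sum(w * values) / np.sum(w)
    return out
```

A query point midway between two samples receives their unweighted mean, and a query point on a sample returns that sample's value exactly.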

(2) Maximum Flow Velocity
The results of the flood propagation model were interpolated to obtain the maximum flow velocity of each assessment unit. The generated maximum flow velocity map of the study area is shown in Figure 8b, with the values being 0-2.51 m/s.

(3) Flood Submergence Duration
The flood submergence duration map of the study area is shown in Figure 8c and the values were 0-98.8 h. The map was generated by interpolating the results of the flood propagation model to obtain the flood submergence duration of each assessment unit.

(4) DEM
Ground elevation, which is sensitive to FHR and is an important factor for FHR analysis [42], was directly acquired through DEM data. The DEM generated from the LiDAR cloud data of the study area was directly used as the assessment indicator to generate the ground elevation map, as shown in Figure 8d. The elevations of the study area varied from 4.55 to 35.09 m.

(5) Ditches
Ditches refer to furrows, irrigation channels and other wading structures set up between pieces of farmland. Floodwaters entering the FSDAs will flow through these ditches. When ditches overflow, flooding will occur on farmlands and residential lands, posing a risk to surrounding areas. The method proposed by Qian et al. [43] was used to directly generate data on ditches in the study area from the high-resolution DEM. The raster distance map of ditches, as shown in Figure 8e, was generated according to the principle that the closer an area is to a ditch, the higher the risk in that area. The raster distances of ditches were 0-262 m.

(6) River Systems
To a certain extent, river and waterway systems reflect the precipitation and underlying surface conditions of a region, and are strongly related to FHR. Places with dense river networks, high precipitation and weak permeability face relatively higher FHRs [44]. Tehrany et al. [45] generated buffer zones at various distances from rivers to characterize FHR levels, while Xiao et al. [30] and Chen et al. [46] used raster distances to characterize the FHRs of rivers. Two rivers flow through the study area. After entering the FSDA, floodwaters advance along the river channels and overflow the banks on both sides, posing a risk to the surrounding areas. Proximity to the river channels was deemed to be directly proportional to FHR. This principle was used to generate the raster distance map of river systems for the entire study area, as shown in Figure 8f. The raster distances for the river systems were 0-6471.19 m.
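A raster distance map of the kind used for the ditch, river, road and building indicators is a Euclidean distance transform of the feature mask. A brute-force NumPy sketch is shown below; in practice a dedicated routine such as scipy.ndimage.distance_transform_edt does this far more efficiently.

```python
import numpy as np

def raster_distance(feature_mask, cell_size=1.0):
    """Euclidean distance from every grid cell to the nearest feature cell
    (e.g. a rasterized ditch, river, road or building footprint).

    feature_mask: 2-D boolean array, True where the feature is present."""
    feat = np.argwhere(feature_mask)                    # (k, 2) feature cells
    ii, jj = np.indices(feature_mask.shape)
    cells = np.stack([ii.ravel(), jj.ravel()], axis=1)  # (n, 2) all cells
    # pairwise distances, minimum over feature cells; O(n*k), fine for a demo
    d = np.sqrt(((cells[:, None, :] - feat[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(feature_mask.shape) * cell_size
```

Feature cells map to distance 0 and every other cell to its distance (in metres, via `cell_size`) from the closest feature cell.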

(7) Road Networks
There is a road traffic network within the FSDA, as well as diversion roads built on the surrounding dikes. When floods occur, an effective way to avoid the hazard is to make use of roads for diversion and evacuation. The principle of proximity to roads being inversely proportional to FHR level was used to generate the raster distance map of road networks, as shown in Figure 8g. The raster distances from the diversion roads were 0-3204.61 m.

(8) Buildings
A total of 33,930 buildings were extracted from the study area's airborne LiDAR point cloud using the method for analyzing 3-D fractal dimensions [47]. Of these, 3379 buildings were misclassified and manually eliminated, leaving 30,551 buildings in the study area; the accuracy rate was 90.04% [24]. Buildings are vulnerable to flood hazards and their risk levels are related to the building type, replacement cost and submerged depth. Considering the generality of comprehensive FHRAs, the FHR of buildings was simplified to take only their locations into account. Assuming that proximity to a building implies greater FHR, the raster distance map of buildings was generated, as shown in Figure 8h. The raster distances were 0-657.86 m. For a comprehensive assessment of the FHR level, it is first necessary to measure (i.e., grade) the individual FHR indicators. This study set five FHR levels: mini, light, medium, heavy and extreme, represented by the Roman numerals I, II, III, IV and V, respectively, with grading determined with reference to [15]. Considering the characteristics of the indicators, adjustments were made to the value range of each indicator specified under the various FHR levels. At the same time, each FHR indicator has different dimensions and needs to be normalized [30]. The eight normalized FHR indicators are shown in Table 2.
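Grading an indicator into the five FHR levels and min-max normalizing it can be sketched as follows. The threshold values passed in are hypothetical examples; the study's actual class boundaries are those listed in Table 2, and some indicators (such as distance to roads) are inverted because larger raw values mean lower risk.

```python
import numpy as np

LEVELS = ["I", "II", "III", "IV", "V"]   # mini, light, medium, heavy, extreme

def grade(values, thresholds):
    """Assign each indicator value one of the five FHR levels, given four
    ascending class-boundary thresholds (hypothetical example boundaries)."""
    return [LEVELS[i] for i in np.digitize(values, thresholds)]

def normalize(values, vmin, vmax, inverse=False):
    """Min-max normalization of an indicator to [0, 1].  Set inverse=True
    for indicators where a larger raw value means lower risk
    (e.g. distance to a river, ditch or building)."""
    x = np.clip((np.asarray(values, float) - vmin) / (vmax - vmin), 0.0, 1.0)
    return 1.0 - x if inverse else x
```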

Calculating Catastrophe Level Based on FHR Indicators
(1) Disaster-Causing Factors

Figure 7 shows that the main indicators constituting the disaster-causing factors were the maximum submerged depth, maximum flow velocity and flood submergence duration. These three control variables affected the state variable of the disaster-causing factors, forming a typical swallowtail catastrophe that satisfies the normalized form of Equation (15).
The normalized Equation (15) for swallowtail catastrophes was used to take the square root of the normalized value of the maximum submerged depth, giving its swallowtail catastrophe value, as shown in Figure 9a. The cube root of the normalized value of the maximum flow velocity gave the swallowtail catastrophe value of that indicator, as shown in Figure 9b. The swallowtail catastrophe value for the flood submergence duration was obtained by taking the fourth root of its normalized value, as shown in Figure 9c. Since there are strong correlations among these three FHR indicators, the averaging principle of catastrophe theory was applied: the mean of the swallowtail catastrophe values of the three indicators was used as the risk value of the disaster-causing factors, i.e., x1 = (I1^(1/2) + I2^(1/3) + I3^(1/4))/3, where I1, I2 and I3 are the normalized values of the three indicators. A map of the risk of disaster-causing factors is shown in Figure 9d.

2) Environmental Vulnerability
Similarly, Figure 7 shows that the main indicators that constituted environmental vulnerability were ground elevation, ditches, river systems and road networks. These four control variables affected the state variable of environmental vulnerability. They satisfied the normalized form of Equation (16) for butterfly catastrophes; Equation (16) was used to obtain the square root of the normalized value of the ground elevation to obtain its butterfly catastrophe value as shown in Figure  10a. The cube root of the normalized value of the ditches gave its butterfly catastrophe value as shown in Figure 10b. The butterfly catastrophe values for the river systems and road networks were obtained by taking the roots four and five of their respective normalized values. The results are shown in Figure 10c and 10d, respectively. The catastrophe theory's principle of determining value was applied again because of the strong correlations among the four FHR indicators. Specifically, the mean of the butterfly catastrophe values of the four indicators was used as the risk value of the environmental vulnerability; that is, 2 = (√ 4 + √ 5 + √ 6 + √ 7 )/4. A map showing the risk of environmental vulnerability is shown in Figure 10e.

(3) Human/Material Exposures
The main indicator for the human/material exposures was buildings (Figure 7). The latter is the only control variable affecting the former as a state variable and is typical of a fold catastrophe. According to the catastrophe progression method, the value of the control variable of the fold catastrophe can be directly assigned to its state variable. Therefore, the FHR map for the vulnerability of human/material exposures was the same as that for the normalized indicator value of buildings ( Figure 11).

Generation of Comprehensive FHR Maps
For each assessment unit in the study area, the risk values of the disaster-causing factors, environmental vulnerability and human/material exposures were obtained based on the rules for assessing the catastrophe level and according to the bottom-up progression approach. These three types of indicator values constitute a swallowtail catastrophe. This meant that the disaster-causing factors, environmental vulnerability and human/material exposures were the control variables, whereas the comprehensive FHR was the state variable. This satisfies the calculation conditions of Equation (15) for swallowtail catastrophes.
The normalized Equation (15) for swallowtail catastrophes was used to obtain the square root of the normalized value of the disaster-causing factors, which gave the latter's swallowtail catastrophe value, as shown in Figure 12a. The cube root of the normalized value of the environmental vulnerability gives its swallowtail catastrophe value, as shown in Figure 12b. Taking root four of the normalized value of the human/material exposures gives its swallowtail catastrophe value, as shown in Figure 12c. Considering the correlations among the three control variables, the catastrophe theory principle of determining value was applied; that is, the mean of the swallowtail catastrophe values of the three indicators was used as the risk value of the comprehensive FHR. As such, = (√ 1 + √ 2 + √ 3 )/3. A generated comprehensive FHR map is shown in Figure 13.

Discussions
The study area was categorized under level IV and V risks based on the 1954 flood type of the Yangtze River. Areas with higher FHRs were mainly distributed in the east, representing the first region to be flooded. In terms of the flood submerged depth, flow velocity and submergence duration, values in the eastern region were all higher than those in the western region. According to the catastrophe progression method, this study ranked disaster-causing factors ahead of environmental vulnerability and human/material exposures because it was significantly more important. Overall, the three types of disaster-causing factors, namely flood submerged depth, flow velocity and submergence duration, had a greater proportion of impact on the indicator design.
A comparison of Figure 12c and Figure 13 reveals that the risk of human/material exposures represented by buildings is also well reflected in the FHR map. In Figure 9d, which is the risk map considering disaster-causing factors only, some areas in the western region are still at mild and moderate risks (levels II and III). However, the FHR of areas with buildings greatly increased after superpositioning the risk of human/material exposures, as shown on Figure 9d. After environmental vulnerability was integrated into the map, the study area became devoid of areas with mild and moderate risks.
In commonly used parameter-based FHRAs, the assignment of weights to the FHR indicators must be carefully considered to avoid human influence. Taking the analytic hierarchy process (AHP) as an example [30], accurate evaluations must be made of the consistency index (CI), random consistency index (RI) and coefficient of reliability (CR). It is generally believed that CR < 0.1 for passing the consistency test, which indicates that the weight design is ideal. Otherwise, the weights of the indicators need to be re-adjusted. The catastrophe progression method avoids the complicated issue of weight design. When classifying the risk indicators from top to bottom, the evaluators are only required to sort the various indicators under the same node according to their relative importance. Once the hierarchy tree for the FHR indicators has been established, the comprehensive FHR can be automatically calculated. This way, the impact of human factors in the catastrophe progression method is mainly reflected in the ranking of the importance of the various FHR indicators by different evaluators.
To this end, this study analyzed the impact of the comprehensive FHR based on the ranking of the FHR indicators. There were eight indicator types (C1, C2, C3, C4, C5, C6, C7 and C8), of which C1, C2 and C3 belong to the same category, C4, C5, C6 and C7 belonged to another category and C8 is an independent category. C1-C3 and C4-C7 were separately rearranged and recombined, meaning that the former had  The comprehensive FHR results based on the indicators ranked according to C1, C2, C3, C4, C5, C6, C7 and C8 was used as the baseline; the other 143 FHR results were compared with the baseline results. Assuming that the baseline FHR value of each assessment unit was Vcell, then the maximum and minimum assessment values of the other 143 FHR results would be Vcellmax and Vcellmin, respectively. This means that the maximum percentage increase and reduction of an assessment unit would be (Vcellmax − Vcell)/Vcell0 and |(Vcell0 − Vcellmin)|/Vcell0, respectively. Next, the comprehensive FHR value of each assessment unit was compared with the maximum percentage increase and reduction of the baseline value. The statistical results are shown in Figure 14. The comparative charts indicate that the deviations from the baseline value were minor regardless of the maximum percentage increase or reduction. Among the 143 calculation results for the entire study area, deviations of the maximum percentage increase and reduction from the baseline value were 0.524%-2.89% and 2.12%-6.45%, respectively. For the map of the maximum percentage increase, areas with greater variations were mostly located in the western region. As this region's FHR values were relatively low in the baseline assessment results, the magnitude of change under the different permutations and combinations was greater. For the map of the maximum percentage reduction, areas with greater variations were mostly located along the diversion roads. 
This means that under the different permutations and combinations, the roads indicator had a greater impact.
The study area had a total of 6,008,678 assessment grids. For each assessment grid, the number of grid under the maximum percentage increase and reduction was calculated to generate the histograms shown in Figure 15. The histograms for the maximum percentage increase were mainly distributed in the 0.8%-1.3% region. The mean was 1.14% and the standard deviation was 0.12%. The histograms for the maximum percentage reduction were mainly distributed in the 3.15%-5% region, with the mean and standard deviation being 4.19% and 0.39%, respectively. The above analysis indicates that the catastrophe progression method is not sensitive to changes in ranking of the importance of FHR indicators arising from human factors and that the fault tolerance of the final FHRA results are high. The normalization and grading of individual indicators and factors were the two processes with the greatest impacts on the FHRA results. This being the case, the expert scoring method can be applied to both processes to ensure the soundness and universal acceptance of the FHR value of the individual factors.
Only vulnerability of buildings was selected as one index of the human/material exposures, a lot of further research can be done. Different building structures have different impact on flood resistance. The buildings in the study area are mainly divided into two types: brick-concrete structure and brick-wood structure, which have different flood loss rate curve [24]. Building indicator then can be further divided into two categories to express different risk levels, for example, the risk of brickwood structure is greater than that of brick-concrete structure. In addition, economic and industrial factors can also be considered as human/material exposures. For different villages, their economical and industrial development level can be normalized into the assessment grid.

Conclusions
FHR maps are usually calculated using administrative areas (such as townships and villages) as the basic FHRA units. The limitations of the assessment unit mean that the various calculation indicators for the same assessment unit must be assigned a similar value, which simplifies the FHR indicator values to some extent. However, during practical application, many assessment indicators are spatially related. The most representative examples are river and waterway systems and diversion roads. Although the network density of these two features within an administrative area can be used to characterize the respective indicator values for the assessment unit, changes in the network form are still not being considered. For example, when a river traverses multiple assessment units, it will be artificially segmented. It is unrealistic for the indicator value (such as the flood submerged depth) to be evenly distributed among the assessment units because there are spatial variations to that value. Typically, the mean flood submerged depth of the entire assessment unit is adopted to match the scale of the unit.
This study attempted to place the assessment unit of comprehensive FHRAs on a micro-scale by using a 1-m grid as the basic assessment unit. This was equivalent to performing independent calculation processes for the catastrophe level of every 1-m grid unit. In doing so, the spatial changes of different assessment indicators (such as the flood submerged depth) are better reflected. Those indicators related to spatial location (such as buildings) are also better expressed and analyzed. When these indicators are used in studies with administrative areas as the assessment units, they often have to be simplified values that do not consider spatial locations and are dimensionless, such as building density [48]. The proposed micro-scale FHRA method can provide more detailed results than the classical flood hazard map.
The catastrophe progression method automatically executes hierarchical recursive calculations. Once the level that an FHR indicator is determined and the individual indicators are normalized, the calculation results of the catastrophe progression method are always unique. Both the normalization of individual indicators and the calculation of comprehensive FHR values follow the basic idea of catastrophe theory: a state variable has two stable extreme values (the maximum and minimum values), with all values in between being unstable states. The calculation process shows that the catastrophe progression method effectively avoids the subjective issues faced by common FHRAs, specifically, weight assignment to the various FHR indicators. In contrast, the catastrophe progression method only needs to consider the level of importance of each type of FHR indicator. This results shows that the important indicators should be placed ahead of the secondary indicators. This approach is more intuitive and easier to grasp than assigning weights to the various indicators.
The research shows that micro-scale FHRAs can be realized by integrating airborne LiDAR point cloud data and a 2-D hydraulic model. Catastrophe theory and the catastrophe progression method automatically calculated the FHR value of each assessment grid based on hierarchical recursion, thereby effectively avoiding the problem of uncertainty in weight assignment faced by the common parameter-based methods. The FHRA results for 144 different sequences of assessment indicators show that the proposed method has low sensitivity to the ranking of FHR indicators and high fault tolerance of different assessment results arising from subjective rankings by humans.
Author Contributions: Dingtao Shen conducted the primary experiments, cartography, and analyzed the results. Jiechen Wang provided the original idea for this paper. Tianlu Qian, Yu Xia, and Yu Zhang actively participated throughout the research process and offered data support for this work. All authors have read and agreed to the published version of the manuscript.