Integrate Weather Radar and Monitoring Devices for Urban Flooding Surveillance

With the increase in extreme weather events, urban floods around the world are growing in both frequency and severity. This study therefore develops ARMT (automatic combined ground weather radar and closed-circuit television (CCTV) images for real-time flood monitoring), which integrates real-time ground radar echo images and automatically estimates rainfall hotspots according to cloud intensity. ARMT further combines CCTV image capture, analysis, Fourier processing, identification, water level estimation, and data transmission to provide real-time warning information. The resulting hydrograph data can serve as references for disaster prevention, and response personnel can base their judgements on them. When tested against historical data, ARMT showed a reliability of 83% to 92%; when applied to real-time monitoring and analysis (e.g., during a typhoon), its reliability ranged from 79% to 93%. By providing both images and quantified water levels in flood monitoring, the technology lets decision makers quickly grasp the on-site situation, make evacuation decisions before a flood disaster occurs, and discuss appropriate mitigation measures after the disaster to reduce the adverse effects of flooding on urban areas.


The Urban Flood Disaster
As the climate changes, global rainfall patterns are changing with it. According to the Global Climate Observing System (GCOS), rainfall in many cities around the world is becoming more severe and more variable [1]. Taiwan is located in a region with frequent natural disasters, where short-duration heavy rainfall often floods urban areas because the excess rainfall cannot be drained within a short period of time, usually causing considerable economic losses. The increasing incidence and severity of flooding threatens most cities around the world [2,3]. When floods threaten people's lives in low-lying areas, the manpower and physical resources that must be mobilized for evacuation and withdrawal make disaster relief operations more difficult. Early warning systems are now widely used in flood disaster and water resources monitoring and forecasting, collecting fixed-point measurements. This study aims to integrate radar echo technology into the automated activation of a monitoring-image water level/water depth analysis system. When the weather radar predicts likely precipitation in a region, the system automatically activates the monitoring-image analysis module for that region, and it switches monitoring regions immediately as cloud and precipitation conditions change. In this way, the technology achieves a more adaptive way of monitoring, which improves the monitoring system.
Before carrying out the monitoring and processing analysis, we first integrated the data from all CCTV image stations, established a database of image parameters, and set all relevant parameters, including the latitude and longitude of the surveillance stations, the image scales, and the warning thresholds. We then defined each identification region under its appropriate surveillance station. With this work completed, the system can carry out monitoring and processing automatically whenever there is a precipitation event. If the identification meets the warning threshold condition, the system immediately sends out the analytic result to notify the relevant disaster prevention personnel.
In the monitoring and processing analysis, the weather radar first identifies the rainfall hotspot and calculates its longitude and latitude coordinates. The surveillance cameras within a set range of those coordinates are then located, and at the same time the image identification and analysis system is activated for monitoring. Once activated, the system captures surveillance camera images every 10 min and processes them with the algorithm developed in this study to estimate flood depth. To do the estimation, suitable images are first selected and then intensified, and finally the water level/water depth is estimated from each image. During continuous calculation, the estimate is corrected if successive water level/water depth estimates differ too greatly. If the interpretation indicates a flood event reaching the warning water level, the system immediately sends out a warning notice with the analytic result and continues monitoring until the rainfall event ends, as shown in Figure 1.
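The monitoring cycle just described can be sketched as follows. This is a minimal illustration, not the paper's implementation: the capture interval, the warning level, and the maximum plausible jump between successive 10-min estimates are assumed values, and `correct_level`/`monitor` are hypothetical helper names.

```python
WARNING_LEVEL_CM = 50.0   # assumed warning threshold for illustration
MAX_JUMP_CM = 30.0        # assumed max plausible change between two frames

def correct_level(prev_cm, est_cm, max_jump_cm=MAX_JUMP_CM):
    """If successive estimates differ too much, keep the previous value."""
    if prev_cm is not None and abs(est_cm - prev_cm) > max_jump_cm:
        return prev_cm
    return est_cm

def monitor(estimates_cm):
    """Run one monitoring cycle over a sequence of per-frame depth estimates,
    returning the (frame index, level) pairs that would trigger a warning."""
    prev = None
    warnings = []
    for t, est in enumerate(estimates_cm):
        level = correct_level(prev, est)
        if level >= WARNING_LEVEL_CM:
            warnings.append((t, level))   # would send a notification here
        prev = level
    return warnings
```

For example, a spurious spike from a raindrop on the lens would be held at the previous value rather than raising a false alarm.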

Cloud Thickness Estimation with Respect to Weather Radar
After the weather radar image is acquired, the region range is defined with respect to the latitude and longitude of Taiwan, from 21 to 26 degrees north and 119 to 123 degrees east. The part of the image within this range is quantified so that all pixel intensity values in the region are obtained. Each pixel intensity value is then checked against the set threshold, a reflectivity of 15 dBZ, which corresponds to the green color in the image and is considered possible rainfall. The centroid position is then estimated; it represents the cloud cover position within the region and marks where the cloud cover is thickest, i.e., the region with the highest probability of rainfall. The surveillance cameras within a certain range (such as 5 km, 10 km, or 50 km) around the centroid position are then accessed, their images are obtained automatically, and the water level in the images of the region is identified and analyzed. Each activation cycle is preset to 24 h; if no cloud cover is detected for 2 h, monitoring stops automatically until cloud cover next appears.
(I) An Algorithm for Automated Monitoring Activation with Respect to Ground Weather Radar
Step 1: Link to the Central Weather Bureau website to acquire terrain-free weather radar images.
Step 2: Process images, including grayscale conversion processing, echo map matrix scaling to overlap the Taiwan administrative region map, and conversion of pixel position into latitude and longitude.
Step 3: For the cloud cover appearing on the weather radar within latitude 21 to 26 degrees north and longitude 119 to 123 degrees east, estimate the cloud thickness, calculate the centroid, and compare the CCTV images within 5, 10, and 50 km radii of the centroid, searching the database for CCTV locations to obtain the images.
Step 4: Monitor unceasingly through CCTV cameras for 24 h. If waterlogging or flooding occurs, a notice is issued.
Step 5: The stop mechanism: the monitoring stops when there is no cloud cover detected for 2 consecutive hours within the field of latitude ranging from 21 to 26 degrees north and longitude ranging from 119 to 123 degrees east.
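Step 3's radius search can be sketched as a great-circle distance lookup against the station database. The station names, coordinates, and function names below are illustrative assumptions, not the paper's code:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def stations_within(centroid, stations, radius_km):
    """Return the names of CCTV stations within radius_km of the echo centroid.

    centroid: (lat, lon); stations: {name: (lat, lon)}.
    """
    lat0, lon0 = centroid
    return [name for name, (lat, lon) in stations.items()
            if haversine_km(lat0, lon0, lat, lon) <= radius_km]
```

The same lookup would be run three times with radii of 5, 10, and 50 km to widen the search progressively.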
To run the centroid analysis algorithm on ground radar echoes, we begin by defining and dividing the administrative regions. To quantify and calculate the thunderstorm cell centroid, the image of the region is first converted to a grayscale image, and the gray level values are taken as the intensity values of the pixels in the region. Once the intensity values are obtained, the centroid position can be calculated with the following equation:

$$C_x = \frac{\sum_{x=1}^{n}\sum_{y=1}^{n} x \, m_{xy}}{\sum_{x=1}^{n}\sum_{y=1}^{n} m_{xy}}, \qquad C_y = \frac{\sum_{x=1}^{n}\sum_{y=1}^{n} y \, m_{xy}}{\sum_{x=1}^{n}\sum_{y=1}^{n} m_{xy}} \quad (1)$$

where $C_x$ represents the position of the centroid along the X-axis of the coordinate plane and $C_y$ its position along the Y-axis; $n$ denotes the maximum position of the operation points along the X- and Y-axes, with $n$ an integer; $x$ and $y$ are the coordinate values on the X- and Y-axes; and $m_{xy}$ represents the intensity value of the operation point at coordinates $(x, y)$. The centroid position determined through this calculation is shown in the black box in Figure 2. Once the centroid position is determined, the coordinate values representing the cloud cover position are converted into latitude and longitude coordinates and provided to the monitoring system to automatically start the subsequent work.
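The intensity-weighted centroid of Equation (1) can be sketched directly; pixels at or below the reflectivity threshold are excluded so that only echo regions contribute (function name and grid representation are illustrative):

```python
def echo_centroid(intensity, threshold=0):
    """Intensity-weighted centroid of a 2-D grid indexed [y][x].

    Pixels with intensity <= threshold are ignored; returns (C_x, C_y)
    in pixel coordinates, or None if no echo exceeds the threshold.
    """
    total = sx = sy = 0.0
    for y, row in enumerate(intensity):
        for x, m in enumerate(row):
            if m <= threshold:      # keep only echoes above the threshold
                continue
            total += m
            sx += x * m
            sy += y * m
    if total == 0:
        return None                 # no cloud cover detected
    return sx / total, sy / total
```

The resulting pixel coordinates would then be mapped to latitude and longitude using the image's georeferencing, as described above.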

Image Water Level Identification Technology
For image identification, the CCTV camera images are first acquired, and an identification area containing an obvious target object (as shown in the box in Figure 3) is selected for water level/water depth determination and analysis. After the target object is selected, four image features-entropy, edge value, discrete cosine transform (DCT), and signal-to-noise ratio (SNR)-are used to classify and select images and obtain the estimated water level data, which are then converted into real water depth through the pixel scale and recorded in the database.

(II) An Algorithm for Water Level Identification with Respect to Monitoring Image
Step 1: Select appropriate images according to the image features.
Step 2: Select images with obvious target objects (e.g., a white wall, a column).
Step 3: Design the virtual water gauge-also called the digital water gauge, image water depth identification region, or region of interest (ROI)-and obtain the image scale to estimate the flood water depth by setting the conversion between image pixel values and actual scale. The image scale is expressed in centimeters per pixel (cm/pixel). The conversion is performed as follows: (1) target objects identifiable in the image, such as a white wall, a telecommunications cabinet, or an electric pole, are used to convert the image scale; (2) if there is no fixed, obvious target object in the image, we scouted the site for information to obtain the conversion between image pixels and actual on-site size.
Step 4: Use the technology of image processing and image quantification to estimate the water level.
Step 5: Correct error in the estimated water level and store it in the database for future warning issuing (issued by the implementation unit) and water level situation notifying (decided by the implementation unit).
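The scale conversion in Step 3 amounts to dividing a reference object's known height by the number of pixels it spans; a depth in pixels is then multiplied by that scale. A minimal sketch, with hypothetical function names:

```python
def image_scale_cm_per_px(known_height_cm, pixel_span):
    """cm/pixel from a target object of known real-world height that spans
    a measured number of image pixels."""
    return known_height_cm / pixel_span

def depth_cm(reference_row, water_row, scale_cm_per_px):
    """Flood depth from the pixel distance between a reference line
    (e.g. the ground) and the detected water-surface row."""
    return abs(reference_row - water_row) * scale_cm_per_px
```

Using the example given later in the text, a 130 cm reference spanning 120 pixels yields a scale of about 1.08 cm/pixel.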
To improve the accuracy of the estimated water level, the CCTV images must be screened and filtered. Following the literature, entropy, edge value, DCT, and SNR were used as image features to decide whether a CCTV image can be used to estimate the water level. Entropy mainly measures the "indetermination" of the image pixel values: the more complex the image, the higher its entropy. For example, if a CCTV image includes raindrops, the entropy decreases because the texture is smoother; a blurred image likewise yields a low entropy value. The edge value is calculated from brightness gradient values: a clearer image yields more edge lines, whereas a lower-quality image (e.g., with raindrops or blurring) yields fewer, and its edge lines are discontinuous (Figure 4).
The DCT captures the rate of change between pixels in an image, arranged from lower to higher frequency (blurry to clear): the larger the low-frequency part, the more blurred the image; the higher the high-frequency part, the clearer the image. Figure 5 shows how the DCT differs between CCTV images that can and cannot be used to estimate the water level.
Finally, the screening standard for CCTV images was built from the four extracted features using a J48 decision tree. An image can be used to estimate the water level when entropy, edge, DCT, and SNR are >7.1, >0.04, >0.1, and <3.5, respectively.
For water level estimation, the CCTV camera images are first subjected to grayscale conversion and image intensification. Since the water surface under analysis is likely not in a horizontal plane, horizontal correction must be performed and the image scale of the actual water depth defined before the analysis starts. Because the camera does not always face front horizontally, the detected object must be corrected to horizontal to facilitate flood depth detection. This is done by selecting four points on the image: the first two are placed on the water surface and used to calculate the slope for correcting to horizontal by image rotation; the latter two are placed on target objects of known actual height. For example, suppose a window and the ground are known to be 130 cm apart, and selecting these two image positions gives a difference of 120 pixels between them; it follows that one pixel corresponds to about 1.08 cm. During the analysis, the ROI function is used to select the water depth identification region (Figure 6). This method not only increases computing efficiency but also reduces foreign-object interference in the image region and improves the accuracy of the flood depth calculation [9-11].
Afterwards, the selected ROI is processed to remove noise using a median filter mask of size 1 × w, where w is half the width of the ROI. Median filtering the ROI makes flood depth estimation more efficient and removes noise, and the median filter does not interfere with the accuracy of the result.
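The four-feature screening rule with the stated cut-offs can be sketched as a single predicate. Note this is a simplification: the paper's J48 decision tree may combine the features in a more nuanced order than a plain conjunction.

```python
def image_usable(entropy, edge, dct, snr):
    """True if a CCTV frame passes the feature screen for water level
    estimation, using the stated thresholds:
    entropy > 7.1, edge > 0.04, DCT > 0.1, SNR < 3.5."""
    return entropy > 7.1 and edge > 0.04 and dct > 0.1 and snr < 3.5
```

Frames failing the screen (e.g., blurred or raindrop-covered images with low entropy) are skipped rather than fed to the level estimator.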
The main principle of the method is that the filter mask performs detection and comparison over the ROI, and each pixel in the ROI is replaced by the median within the mask [12-14]. The ROI is then processed through edge detection [15-18], mainly by accumulating the grayscale values of the pixels in the ROI and calculating a threshold value [19,20]. When the accumulated grayscale values of the horizontal pixels exceed the threshold value, which is set to half the width of the ROI, that position is determined to be the water surface. Then, by selecting a pixel position of known height and inputting the corresponding actual height in centimeters, the actual water level height can be calculated (Figure 7). If there are several measurements of the water surface, the current water level is determined with reference to the previous water level.
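The row-wise detection just described can be sketched on a binarized edge map of the ROI: rows are scanned and the first row whose accumulated edge count exceeds half the ROI width is taken as the water surface. This is a simplified reading of the thresholding rule, with a hypothetical function name:

```python
def detect_water_row(edge_roi):
    """edge_roi: 2-D list of 0/1 edge pixels indexed [row][col].

    Returns the index of the first row whose accumulated edge count
    exceeds half the ROI width, or None if no row qualifies.
    """
    if not edge_roi:
        return None
    threshold = len(edge_roi[0]) / 2.0   # half the ROI width
    for row_idx, row in enumerate(edge_roi):
        if sum(row) > threshold:
            return row_idx               # read as the water-surface row
    return None
```

The returned row index would then be converted to centimeters via the image scale, as in the 130 cm / 120 pixel example above.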
During monitoring, if the system detects that the water level/water depth has reached the alert value (e.g., a water level of 112.5 m at Lansheng Bridge is alert level 2), it sends an e-mail report to notify ARMT users. After the e-mail is sent, a log message is recorded in the process database. The warning message includes the supplier of the image, the image capture time, the system report time, a description of the message content, and the current picture (Figure 8).

Performance of Image Water Level Identification
Considering that foreign objects may interfere with the monitoring images and hence introduce errors during the system's automated flood estimation, the water level/water depth estimation error must be corrected. This is done by correcting the error point with reference to the water level/water depth at the previous time point: if the difference between the estimated water level/water depth $W_t$ at the current time $t$ and the previous estimate is greater than the threshold $thr$ set in the system, the estimate is corrected to give the corrected water level/water depth $\hat{W}_t$ [21] (Equation (2)):

$$\hat{W}_t = \begin{cases} W_{t-1}, & \text{if } |W_t - W_{t-1}| > thr \\ W_t, & \text{otherwise} \end{cases} \quad (2)$$

To determine the reliability of the calculated water level/water depth after error correction, the analytic result is examined through reliability estimation. The estimation error is defined as the absolute difference $Error_i$ between the value observed by human eyes, $W_{eyes,i}$, and the value estimated by the image identification technology, $W_{computer,i}$, for the water level of the $i$th image (Equation (3)):

$$Error_i = \left| W_{eyes,i} - W_{computer,i} \right| \quad (3)$$

The smaller the estimation error $Error_i$, the more reliable the image identification water level/water depth estimation. ARMT adopts a non-contact method, using the already-installed CCTV to estimate the water level/water depth, while human observers measure the current water level through a hydrograph and record the actual value locally at the same time; these observations are then compared with the results calculated by ARMT. The reliability of the analytic results is defined as Equation (4), the probability that the absolute difference $Error_i$ between the observed value $W_{eyes,i}$ and the estimated value $W_{computer,i}$ is less than 50 cm:

$$Reliability = \frac{1}{N} \sum_{i=1}^{N} I\left( Error_i < 50\ \text{cm} \right) \quad (4)$$
If the absolute difference is less than 50 cm, the image identification water level/water depth estimation is considered reliable; otherwise, the estimation error is too large and the estimate is unreliable. $N$ denotes the number of images from a monitoring station. Meanwhile, the root mean square error (RMSE) and mean absolute error (MAE) were applied to measure the performance of the algorithm, as shown in Equations (5) and (6):

$$RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( W_{eyes,i} - W_{computer,i} \right)^2} \quad (5)$$

$$MAE = \frac{1}{N} \sum_{i=1}^{N} \left| W_{eyes,i} - W_{computer,i} \right| \quad (6)$$
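Equations (3)-(6) can be computed together in a few lines; a minimal sketch (the function name and 50 cm default tolerance follow the text, everything else is illustrative):

```python
import math

def evaluate(eyes_cm, computer_cm, tolerance_cm=50.0):
    """Per-image absolute errors against the eye-observed levels, then the
    <50 cm reliability ratio (Eq. 4), RMSE (Eq. 5), and MAE (Eq. 6)."""
    errors = [abs(e - c) for e, c in zip(eyes_cm, computer_cm)]
    n = len(errors)
    reliability = sum(err < tolerance_cm for err in errors) / n
    rmse = math.sqrt(sum(err ** 2 for err in errors) / n)
    mae = sum(errors) / n
    return reliability, rmse, mae
```

For two images with errors of 40 cm and 90 cm, for instance, only the first counts as reliable, giving a reliability of 0.5.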
Using the above analysis method, we evaluated the image identification estimation error on historical images from four different surveillance cameras (Baoqiao, Xinan Bridge, Shimen Reservoir, and the Chiayi County Donggang Activity Center, located in Taipei, Taichung, Taoyuan, and Chiayi, respectively), all of which showed water level fluctuations. The evaluated reliability of the four surveillance stations is 0.83, 0.87, 0.90, and 0.92, respectively, as shown in Table 1. From these results, the reliability of the image identification estimation is about 88%. The causes of estimation error include water droplet interference in the identification region and abnormalities in the identification image (the results include the estimation error correction).

Experimental Results and Discussion
In this study, both historical image data from four monitoring stations and real-event data from six monitoring stations during a typhoon in Taiwan (Typhoon Megi) were tested for water level identification and reliability. The reliability was examined by comparing the image identification result with the actual water level. In the historical event analysis, 180 to 200 images were collected from each station, in sizes of 352 × 288 and 352 × 240 pixels and with image scales ranging from 0.60 to 8.55 cm/pixel. Under the condition that the identification estimation error was less than 50 cm, the average calculated reliability was 88% (Table 1). In the real application of the developed technology to all-weather flood monitoring and analysis during a typhoon in Taiwan, data from six monitoring stations-Nangang Bridge, Xinan Bridge, Jiquan Bridge, Baoqiao Bridge, Shanggueishan Bridge, and Lansheng Bridge-were collected at a frequency of one image per 5 min, with an image size of 704 × 480 pixels and image scales ranging from 2.86 to 14.44 cm/pixel. Under the condition that the identification estimation error was less than 50 cm, the reliability of the water level estimation was between 79% and 93% (Table 2). The hit, miss, and false alarm statistics of the six stations are listed in Table 3. The hit rates of all six stations were higher than 92%; the maximum false negative rate (miss) and false positive rate (false alarm) were 6% and 3%, respectively. During Typhoon Megi in September 2016, this technology was used to monitor river water level fluctuations, and a warning was issued twice at the Lansheng Bridge monitoring station. The water level around the station even reached full water level, which caused the local police and firefighters to urgently block roads and ban people from entering to avoid danger.
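The hit/miss/false-alarm statistics of the kind reported in Table 3 can be sketched from paired per-image flood flags. The definitions below are illustrative assumptions (miss = flooding observed but not estimated, false = estimated but not observed), since the paper does not spell out the exact denominators:

```python
def event_stats(observed, estimated):
    """Hit, miss, and false-alarm rates over paired boolean flood flags,
    each expressed as a fraction of the total number of images."""
    hit = sum(o and e for o, e in zip(observed, estimated))
    miss = sum(o and not e for o, e in zip(observed, estimated))
    false = sum(e and not o for o, e in zip(observed, estimated))
    total = len(observed)
    return hit / total, miss / total, false / total
```

With such a function, each station's image sequence yields one (hit, miss, false) triple for the summary table.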
In this event, monitoring image identification technology was applied to obtain real-time water level information and fluctuation data. In the course of monitoring water level changes (Figure 9), the identification error points were mainly caused by the interference of water droplets on camera lenses, which resulted in identification errors (Figure 10). Figure 9 shows the progress of the water level analysis over a monitoring cycle, with a solid line marking the level of alert 2; the RMSE (root mean square error) and MAE (mean absolute error) were 41. and 21.86 cm, respectively. The water level is defined in reference to sea level (i.e., height above sea level, or elevation).

The monitoring image identification technology enables us to gain real-time water level information and fluctuation data. Although limited by the effects of the lens condition and the lighting environment, it can be applied to early automatic warning. Once the analytic result reaches the warning threshold, the system automatically sends an advance warning notice to the disaster prevention and response unit in order to take protective measures as a means of early prevention in the event of a flood, including the evacuation of people, road closures, and flood control.
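The threshold-and-notify behaviour described above can be sketched as follows. The alert thresholds and the `notify()` stub are hypothetical; the paper does not specify the actual messaging channel or alert levels.

```python
# Minimal sketch of the automatic warning logic: when an identified water
# level crosses an alert threshold, a notice is dispatched to the response unit.

ALERT_LEVELS_CM = {"alert_2": 300.0, "alert_1": 350.0}  # assumed thresholds (cm)

def notify(unit, message):
    # Stand-in for the ICT channel (e.g., SMS or e-mail) that reaches
    # the disaster prevention and response unit.
    print(f"[{unit}] {message}")

def check_and_warn(station, level_cm):
    """Return the highest alert reached, sending a notice when a threshold is crossed."""
    triggered = None
    for name, threshold in sorted(ALERT_LEVELS_CM.items(), key=lambda kv: kv[1]):
        if level_cm >= threshold:
            triggered = name
    if triggered is not None:
        notify("response-unit", f"{station}: water level {level_cm} cm reached {triggered}")
    return triggered

check_and_warn("Lansheng Bridge", 320.0)   # crosses the lower (alert_2) threshold
```

In deployment, this check would run once per analysed image, i.e., every 5 min per station.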

Conclusions
In this study, ground weather radar is employed to identify rainfall hotspots through the automated determination of the cloud cover position and the interpretation of the centroid of the cloud cover in the image, which together define the probable extent of the precipitation region. In addition, by incorporating existing CCTV footage for real-time flood water level monitoring, the developed technology achieves all-weather automated surveillance. Finally, the quantified analytic results from the image identification (informative images and quantified data) are immediately transmitted to the relevant personnel via information and communications technology. At present, early flood warning mainly depends on on-site measurement or related sensor devices to obtain water level information, which lacks imagery and often leaves uncertainty in the results. Therefore, this research applied surveillance camera images to the automated monitoring and identification of flood events and the further acquisition of flood water level and image data. This method can not only reduce the risk to personnel during on-site surveying but also measure the water level, as a sensor would.
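The centroid-of-cloud-cover step can be sketched as locating the mean position of strong-echo pixels in a radar image. The gridded reflectivity representation and the 40 dBZ cutoff below are illustrative assumptions, not values taken from the paper.

```python
# Sketch: rainfall hotspot as the centroid of intense-echo pixels
# in a ground weather radar reflectivity grid.
import numpy as np

def hotspot_centroid(reflectivity_dbz, threshold_dbz=40.0):
    """Return the (row, col) centroid of pixels at or above the echo
    threshold, or None when no pixel qualifies."""
    rows, cols = np.nonzero(reflectivity_dbz >= threshold_dbz)
    if rows.size == 0:
        return None
    return float(rows.mean()), float(cols.mean())

radar = np.zeros((5, 5))
radar[1:3, 2:4] = 45.0   # a small patch of intense echo (dBZ)
print(hotspot_centroid(radar))  # → (1.5, 2.5)
```

The resulting centroid would then select which CCTV stations' image-analysis modules to activate.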
During the early stages of flood control and river water level management, hydraulic engineering measures, such as the construction of embankments, were the main and most widely adopted approach. However, owing to the extreme climate change of the past decade, such engineering works can no longer keep pace with the speed at which disasters occur. As a result, in addition to hydraulic engineering, concepts such as disaster mitigation and risk avoidance have been incorporated into river management and control in recent years [22,23], for example, opening reservoir gates in advance to discharge water, closing floodgates, and evacuating people from low-lying areas ahead of time. Existing early warning systems mainly depend on water level information provided by monitoring stations, together with remote sensing images and numerical prediction systems, and they can provide large-scale flood warnings several hours, or even 24 h, in advance [24,25]. Nevertheless, they offer low spatial specificity and non-real-time information, and a large-scale flood warning is not suitable for mitigating disaster in a small, localized region because it is difficult to cross-verify the predictions against the actual flood conditions. Consequently, decision makers cannot be given sufficient information to manage the disaster and plan mitigation measures quickly. Therefore, ARMT aims to provide an automated flood monitoring system that exploits existing CCTV cameras to deliver current river images and water level identification data. Through this system, decision makers can understand the on-site situation more quickly in order to evacuate people before a flood disaster occurs, or devise appropriate disaster mitigation measures after the disaster to reduce the impact of floods on urban areas.