Near Real-Time Flood Mapping with Weakly Supervised Machine Learning
Abstract
1. Introduction
- (1) The proposed framework for producing weakly labeled data allows models to be trained without relying on human-annotated labels. It enables supervised models to be trained on weak labels that can be generated automatically in near real time (under 2 min), demonstrating its potential for deployment in operational workflows that assist emergency disaster response.
- (2) This paper develops a bi-temporal U-Net model that enhances feature extraction by utilizing both pre- and post-flood images for flood mapping. By adapting a traditionally uni-temporal deep learning model to accept bi-temporal input (see the sketch following this list), the proposed model also paves the way for implementations that accommodate multi-temporal input. Combined with the weak-label generation framework, the bi-temporal model provides a robust approach to near-real-time flood mapping across study sites in cities with different degrees of urbanization, demonstrating its transferability (i.e., generalizability) to future flooding events in emergency response.
- (3) The experiments conducted in this study validated the effectiveness of flood mapping across different types of input, as well as against the baseline models used for flood detection.
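As a minimal illustration of contribution (2), the sketch below shows one common way to adapt a uni-temporal U-Net to bi-temporal input: concatenating the pre- and post-flood images along the channel axis before the encoder. The band count, layer widths, and network depth here are illustrative assumptions, not the architecture detailed in Section 3.2.2.

```python
# Minimal sketch: adapting a uni-temporal U-Net to bi-temporal input by
# channel-wise concatenation of pre- and post-flood images. The band count
# (4 per image) and layer widths are illustrative assumptions, not the
# exact configuration described in the paper.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class BiTemporalUNet(nn.Module):
    def __init__(self, bands_per_image=4, base=32):
        super().__init__()
        # The only change from a uni-temporal U-Net: the first convolution
        # accepts 2 * bands_per_image channels (pre + post stacked).
        self.enc1 = conv_block(2 * bands_per_image, base)
        self.enc2 = conv_block(base, 2 * base)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(2 * base, 4 * base)
        self.up2 = nn.ConvTranspose2d(4 * base, 2 * base, 2, stride=2)
        self.dec2 = conv_block(4 * base, 2 * base)
        self.up1 = nn.ConvTranspose2d(2 * base, base, 2, stride=2)
        self.dec1 = conv_block(2 * base, base)
        self.head = nn.Conv2d(base, 1, kernel_size=1)  # binary flood-mask logits

    def forward(self, pre, post):
        x = torch.cat([pre, post], dim=1)  # (B, 2 * bands, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Usage: logits for a batch of 256 x 256 tiles
model = BiTemporalUNet()
pre = torch.randn(2, 4, 256, 256)
post = torch.randn(2, 4, 256, 256)
print(model(pre, post).shape)  # torch.Size([2, 1, 256, 256])
```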
2. Model Background and Considerations
2.1. Manual Feature Extraction
2.2. Image Segmentation
2.2.1. Discontinuity Detection through Edge Detection
2.2.2. Similarity Detection through Histogram Thresholding and K-Means Clustering
2.3. Machine/Deep Learning Models
2.3.1. Supervised Learning Algorithm
2.3.2. Weakly Supervised Learning Algorithm
3. Materials and Methods
3.1. Case Study
3.2. Methodology
3.2.1. Weakly Labeled Flood Map Generation for Model Training
Problem Formulation
Change Detection
Spectral and Spatial Filtering
3.2.2. Bi-Temporal UNet Architecture
3.2.3. Post-Processing
3.3. Performance Evaluation
4. Results
4.1. Flood Event 1: Florence
4.2. Flood Event 2: Harvey
5. Discussion
5.1. Model Input
5.2. Classification Methods for Binarizing Flood Maps
5.3. Weakly Labeled Flood Map for Model Training Performance
5.4. Bi-Temporal UNet Performance
5.5. Baseline Machine Learning Model Performances
6. Conclusions and Future Work
- ML explainability examination: The effectiveness of machine learning (ML) models in flood mapping is expected to be largely influenced by the spatial and spectral heterogeneity of RS data, which captures the urban structure and level of urbanization in a study area. By quantifying the heterogeneity of RS data and examining its influence on ML-based flood mapping, we can gain insights into the performance of ML models and improve their explainability.
- Uncertainties in weak labels and ML: The performance of ML models is highly dependent on the accuracy of the labels used for training. While the proposed weak label generation framework can generalize to different geographic environments to produce training data for each flood event, noisy labels resulting from the framework can misguide the learning process. Uncertainty analysis can help us understand the impact of various factors (e.g., noisy labels and model errors) on the performance of ML models. Quantifying and managing uncertainty can aid in selecting ML models that generalize well across locations and also provide valuable insights into ML explainability. In addition, different techniques, such as bootstrapping [45] and modifications of model architecture [46], can be explored to address the impact of weak labels and to mitigate their effects in ML models. Furthermore, formal methods, which use mathematical models to specify, build, and verify software and hardware systems, could also be leveraged for the verification and validation of the ML system [47,48].
- Weak label generation improvement: RS data contain abundant spatial, temporal, and spectral information, which has great potential for automatic weak label generation. This work enables weak labeling based on spectral and temporal information through traditional RS (NDWI and change detection) and image processing (Canny edge detection) techniques, as illustrated in the sketch following this list. Meanwhile, different auxiliary spatial data can aid in sampling weak labels, such as the FEMA national flood hazard layer (NFHL) and building footprints. In particular, the FEMA NFHL is a geospatial dataset of U.S. national flood hazard data indicating the annual probability of flooding, the floodway, and the flood zone. Building footprints can also be used to narrow down the geographic extent for sampling flooded and non-flooded areas as weak labels, since most buildings are not likely to be submerged by floods. As such, we may further investigate the fusion of spatial, temporal, and spectral information for ground truth label generation and large-scale weakly supervised pixel-wise flood mapping in the future.
- Flood mapping model enhancement: Although our proposed framework yielded the best results compared with the other baseline models, many false positives were still present in our output. With the f1 score around 87%, we plan to experiment with pixel-wise data augmentation techniques, such as adding random shadows, to reduce the number of false positives. Additionally, we intend to investigate and experiment with other image segmentation architectures that utilize bi-temporal input. Furthermore, we will examine the inclusion of additional input information, such as characteristics of other ground objects and elevation data, to enhance the accuracy of water extraction.
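To make the weak label generation point above concrete, the sketch below derives a weak flood mask via bi-temporal NDWI change detection followed by Canny-based spatial filtering, mirroring the ingredients named in this paper (NDWI, change detection, Canny edge detection). The threshold values, band order, and use of scikit-image are illustrative assumptions rather than the exact parameters of the proposed framework.

```python
# Illustrative sketch of weak flood-label generation from bi-temporal
# imagery using NDWI change detection and Canny edge filtering.
# Thresholds, band order (green = band 1, NIR = band 3), and the use of
# scikit-image are assumptions for illustration, not the paper's recipe.
import numpy as np
from skimage.feature import canny

def ndwi(img, green=1, nir=3, eps=1e-6):
    """NDWI = (green - NIR) / (green + NIR), per McFeeters (1996) [25]."""
    g, n = img[green].astype(float), img[nir].astype(float)
    return (g - n) / (g + n + eps)

def weak_flood_labels(pre, post, water_thresh=0.0, change_thresh=0.2):
    """Return a weak binary flood mask from (bands, H, W) pre/post images."""
    ndwi_pre, ndwi_post = ndwi(pre), ndwi(post)
    # Change detection: pixels that became water-like after the event.
    newly_wet = (ndwi_post > water_thresh) & ((ndwi_post - ndwi_pre) > change_thresh)
    # Spatial filtering: suppress thin edge responses (e.g., building
    # outlines) that can masquerade as water in urban scenes.
    edges = canny(ndwi_post, sigma=2.0)
    return newly_wet & ~edges

# Usage with synthetic 4-band tiles
rng = np.random.default_rng(0)
pre = rng.random((4, 128, 128))
post = rng.random((4, 128, 128))
mask = weak_flood_labels(pre, post)
print(mask.shape, mask.dtype)  # (128, 128) bool
```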
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Tabari, H. Climate change impact on flood and extreme precipitation increases with water availability. Sci. Rep. 2020, 10, 13768.
2. National Research Council. Mapping the Zone: Improving Flood Map Accuracy; National Academies Press: Washington, DC, USA, 2009.
3. Lorenzo-Alonso, A.; Utanda, Á.; Aulló-Maestro, M.E.; Palacios, M. Earth observation actionable information supporting disaster risk reduction efforts in a sustainable development framework. Remote Sens. 2018, 11, 49.
4. Olthof, I.; Tolszczuk-Leclerc, S. Comparing Landsat and RADARSAT for current and historical dynamic flood mapping. Remote Sens. 2018, 10, 780.
5. Ireland, G.; Volpi, M.; Petropoulos, G.P. Examining the capability of supervised machine learning classifiers in extracting flooded areas from Landsat TM imagery: A case study from a Mediterranean flood. Remote Sens. 2015, 7, 3372–3399.
6. Malinowski, R.; Groom, G.; Schwanghart, W.; Heckrath, G. Detection and delineation of localized flooding from WorldView-2 multispectral data. Remote Sens. 2015, 7, 14853–14875.
7. Feng, Q.; Liu, J.; Gong, J. Urban flood mapping based on unmanned aerial vehicle remote sensing and random forest classifier—A case of Yuyao, China. Water 2015, 7, 1437–1455.
8. Xie, M.; Jiang, Z.; Sainju, A.M. Geographical hidden Markov tree for flood extent mapping. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, London, UK, 19–23 August 2018; ACM: New York, NY, USA, 2018.
9. Lim, J.; Lee, K.-S. Flood mapping using multi-source remotely sensed data and logistic regression in the heterogeneous mountainous regions in North Korea. Remote Sens. 2018, 10, 1036.
10. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
11. Gebrehiwot, A.; Hashemi-Beni, L.; Thompson, G.; Kordjamshidi, P.; Langan, T.E. Deep convolutional neural network for flood extent mapping using unmanned aerial vehicles data. Sensors 2019, 19, 1486.
12. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015.
13. Liu, X.; Deng, Z.; Yang, Y. Recent progress in semantic image segmentation. Artif. Intell. Rev. 2019, 52, 1089–1106.
14. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual Event, 13–18 July 2020.
15. He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020.
16. Peng, B.; Huang, Q.; Rao, J. Spatiotemporal Contrastive Representation Learning for Building Damage Classification. In Proceedings of the 2021 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Brussels, Belgium, 11–16 July 2021.
17. Peng, B.; Huang, Q.; Vongkusolkit, J.; Gao, S.; Wright, D.B.; Fang, Z.N.; Qiang, Y. Urban Flood Mapping With Bitemporal Multispectral Imagery Via a Self-Supervised Learning Framework. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 2001–2016.
18. Liu, X.; Sahli, H.; Meng, Y.; Huang, Q.; Lin, L. Flood inundation mapping from optical satellite images using spatiotemporal context learning and modest AdaBoost. Remote Sens. 2017, 9, 617.
19. Longbotham, N.; Pacifici, F.; Glenn, T.; Zare, A.; Volpi, M.; Tuia, D.; Christophe, E.; Michel, J.; Inglada, J.; Chanussot, J. Multi-modal change detection, application to the detection of flooded areas: Outcome of the 2009–2010 data fusion contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 331–342.
20. Wieland, M.; Li, Y.; Martinis, S. Multi-sensor cloud and cloud shadow segmentation with a convolutional neural network. Remote Sens. Environ. 2019, 230, 111203.
21. Kwak, Y.-J. Nationwide flood monitoring for disaster risk reduction using multiple satellite data. ISPRS Int. J. Geo-Inf. 2017, 6, 203.
22. Ouma, Y.O.; Tateishi, R. A water index for rapid mapping of shoreline changes of five East African Rift Valley lakes: An empirical analysis using Landsat TM and ETM+ data. Int. J. Remote Sens. 2006, 27, 3153–3181.
23. Rosser, J.F.; Leibovici, D.; Jackson, M. Rapid flood inundation mapping using social media, remote sensing and topographic data. Nat. Hazards 2017, 87, 103–120.
24. Sivanpillai, R.; Jacobs, K.M.; Mattilio, C.M.; Piskorski, E.V. Rapid flood inundation mapping by differencing water indices from pre- and post-flood Landsat images. Front. Earth Sci. 2021, 15, 1–11.
25. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432.
26. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033.
27. Feyisa, G.L.; Meilby, H.; Fensholt, R.; Proud, S.R. Automated Water Extraction Index: A new technique for surface water mapping using Landsat imagery. Remote Sens. Environ. 2014, 140, 23–35.
28. Zhang, Q.; Jindapetch, N.; Buranapanichkit, D. Investigation of image edge detection techniques based flood monitoring in real-time. In Proceedings of the 2019 16th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), Pattaya, Thailand, 10–13 July 2019; IEEE: Piscataway, NJ, USA, 2019.
29. Ghandorh, H.; Boulila, W.; Masood, S.; Koubaa, A.; Ahmed, F.; Ahmad, J. Semantic segmentation and edge detection—Approach to road detection in very high resolution satellite images. Remote Sens. 2022, 14, 613.
30. Billa, L.; Pradhan, B. Semi-automated procedures for shoreline extraction using single RADARSAT-1 SAR image. Estuar. Coast. Shelf Sci. 2011, 95, 395–400.
31. Dhanachandra, N.; Manglem, K.; Chanu, Y.J. Image segmentation using K-means clustering algorithm and subtractive clustering algorithm. Procedia Comput. Sci. 2015, 54, 764–771.
32. Huang, M.; Yu, W.; Zhu, D. An improved image segmentation algorithm based on the Otsu method. In Proceedings of the 2012 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, Washington, DC, USA, 8–10 August 2012; IEEE: Piscataway, NJ, USA, 2012.
33. Jain, S.K.; Singh, R.; Jain, M.; Lohani, A. Delineation of flood-prone areas using remote sensing techniques. Water Resour. Manag. 2005, 19, 333–347.
34. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31.
35. Nandi, I.; Srivastava, P.K.; Shah, K. Floodplain mapping through support vector machine and optical/infrared images from Landsat 8 OLI/TIRS sensors: Case study from Varanasi. Water Resour. Manag. 2017, 31, 1157–1171.
36. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015.
37. Li, Z.; Demir, I. U-net-based semantic classification for flood extent extraction using SAR imagery and GEE platform: A case study for 2019 central US flooding. Sci. Total Environ. 2023, 869, 161757.
38. Zhao, G.; Pang, B.; Xu, Z.; Peng, D.; Xu, L. Assessment of urban flood susceptibility using semi-supervised machine learning model. Sci. Total Environ. 2019, 659, 940–949.
39. Bonafilia, D.; Tellman, B.; Anderson, T.; Issenberg, E. Sen1Floods11: A georeferenced dataset to train and test deep learning flood algorithms for Sentinel-1. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020.
40. Powers, D.M. Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv 2020, arXiv:2010.16061.
41. Tehrany, M.S.; Pradhan, B.; Mansor, S.; Ahmad, N. Flood susceptibility assessment using GIS-based support vector machine model with different kernel types. Catena 2015, 125, 91–101.
42. Najafzadeh, M.; Basirian, S. Evaluation of River Water Quality Index Using Remote Sensing and Artificial Intelligence Models. Remote Sens. 2023, 15, 2359.
43. Liu, X.; Zhu, A.-X.; Yang, L.; Pei, T.; Qi, F.; Liu, J.; Wang, D.; Zeng, C.; Ma, T. Influence of legacy soil map accuracy on soil map updating with data mining methods. Geoderma 2022, 416, 115802.
44. Jadon, S. A survey of loss functions for semantic segmentation. In Proceedings of the 2020 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB), Vina del Mar, Chile, 27–29 October 2020; IEEE: Piscataway, NJ, USA, 2020.
45. Reed, S.; Lee, H.; Anguelov, D.; Szegedy, C.; Erhan, D.; Rabinovich, A. Training deep neural networks on noisy labels with bootstrapping. arXiv 2014, arXiv:1412.6596.
46. Sukhbaatar, S.; Bruna, J.; Paluri, M.; Bourdev, L.; Fergus, R. Training convolutional networks with noisy labels. arXiv 2014, arXiv:1406.2080.
47. Krichen, M.; Mihoub, A.; Alzahrani, M.Y.; Adoni, W.Y.H.; Nahhal, T. Are Formal Methods Applicable to Machine Learning and Artificial Intelligence? In Proceedings of the 2022 2nd International Conference of Smart Systems and Emerging Technologies (SMARTTECH), Riyadh, Saudi Arabia, 9–11 May 2022; IEEE: Piscataway, NJ, USA, 2022.
48. Raman, R.; Gupta, N.; Jeppu, Y. Framework for Formal Verification of Machine Learning Based Complex System-of-Systems. Insight 2023, 26, 91–102.
| Metric | Full Size | Site F1 | Site F2 |
| --- | --- | --- | --- |
| FP | 0.0287 | 0.0564 | 0.1153 |
| FN | 0.0274 | 0.0187 | 0.0180 |
| Precision | 0.7638 | 0.8167 | 0.6527 |
| Recall | 0.7718 | 0.9306 | 0.9234 |
| f1 score | 0.7678 | 0.8700 | 0.7648 |
| OA | 0.9439 | 0.9249 | 0.8668 |
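For reference, the precision, recall, f1 score, IoU, and overall accuracy (OA) values reported in the tables of this section follow the standard pixel-wise definitions, which can be computed from confusion counts as in the sketch below. The counts shown are made-up values for illustration only.

```python
# Standard pixel-wise metrics assumed in the tables above; the counts
# passed in at the bottom are made-up values for illustration only.
def flood_metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)             # intersection over union
    oa = (tp + tn) / (tp + fp + fn + tn)  # overall accuracy
    return dict(precision=precision, recall=recall, f1=f1, iou=iou, oa=oa)

print(flood_metrics(tp=850, fp=120, fn=90, tn=8940))
```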
| Model | Test Area | Precision | Recall | f1 Score | IoU | OA |
| --- | --- | --- | --- | --- | --- | --- |
| Bi-temporal UNet (pre + post only) | F1 (k-means) | 0.8877 | 0.8980 | 0.8928 | 0.8064 | 0.9403 |
| Bi-temporal UNet (pre + post only) | F2 (k-means) | 0.6598 | 0.9538 | 0.7800 | 0.6393 | 0.8737 |
| UNet (post only) | F1 | 0.7764 | 0.5922 | 0.6719 | 0.5059 | 0.8400 |
| UNet (post only) | F2 | 0.5266 | 0.4300 | 0.4734 | 0.3101 | 0.7755 |
| Decision Tree (DT) | F1 (post) | 0.7859 | 0.6473 | 0.7099 | 0.5502 | 0.8535 |
| Decision Tree (DT) | F1 (pre + post) | 0.8452 | 0.7619 | 0.8014 | 0.6686 | 0.8954 |
| Decision Tree (DT) | F2 (post) | 0.5823 | 0.6003 | 0.5911 | 0.4196 | 0.8051 |
| Decision Tree (DT) | F2 (pre + post) | 0.6911 | 0.7500 | 0.7193 | 0.5617 | 0.8626 |
| Random Forest Classifier (RF) | F1 (post) | 0.8363 | 0.7047 | 0.7649 | 0.6193 | 0.8800 |
| Random Forest Classifier (RF) | F1 (pre + post) | 0.8932 | 0.7839 | 0.8350 | 0.7167 | 0.9142 |
| Random Forest Classifier (RF) | F2 (post) | 0.6048 | 0.6298 | 0.6170 | 0.4462 | 0.8165 |
| Random Forest Classifier (RF) | F2 (pre + post) | 0.7352 | 0.7699 | 0.7521 | 0.6027 | 0.8809 |
| Gradient Boosting Classifier (GBC) | F1 (post) | 0.8464 | 0.7054 | 0.7695 | 0.6254 | 0.8830 |
| Gradient Boosting Classifier (GBC) | F1 (pre + post) | 0.8941 | 0.7939 | 0.8410 | 0.7257 | 0.9169 |
| Gradient Boosting Classifier (GBC) | F2 (post) | 0.6064 | 0.6272 | 0.6166 | 0.4457 | 0.8169 |
| Gradient Boosting Classifier (GBC) | F2 (pre + post) | 0.7875 | 0.7793 | 0.7834 | 0.6440 | 0.8990 |
| Adaptive Boosting Classifier (AdaBoost) | F1 (post) | 0.8380 | 0.6802 | 0.7509 | 0.6011 | 0.8750 |
| Adaptive Boosting Classifier (AdaBoost) | F1 (pre + post) | 0.8734 | 0.7694 | 0.8181 | 0.6922 | 0.9053 |
| Adaptive Boosting Classifier (AdaBoost) | F2 (post) | 0.6015 | 0.6184 | 0.6098 | 0.4387 | 0.8142 |
| Adaptive Boosting Classifier (AdaBoost) | F2 (pre + post) | 0.7037 | 0.7572 | 0.7295 | 0.5741 | 0.8682 |
| Model | Test Area | Precision | Recall | f1 Score | IoU | OA |
| --- | --- | --- | --- | --- | --- | --- |
| Bi-temporal UNet (pre + post only) | H1 (Otsu’s) | 0.9015 | 0.9207 | 0.9110 | 0.8365 | 0.9131 |
| Bi-temporal UNet (pre + post only) | H2 (mean) | 0.9059 | 0.8987 | 0.9023 | 0.8220 | 0.9491 |
| UNet (post only) | H1 | 0.8693 | 0.9147 | 0.8914 | 0.8041 | 0.8925 |
| UNet (post only) | H2 | 0.8109 | 0.9551 | 0.8771 | 0.7811 | 0.9300 |
| Decision Tree (DT) | H1 (post) | 0.8507 | 0.8152 | 0.8326 | 0.7132 | 0.8418 |
| Decision Tree (DT) | H1 (pre + post) | 0.8842 | 0.8977 | 0.8909 | 0.8033 | 0.8939 |
| Decision Tree (DT) | H2 (post) | 0.7689 | 0.8407 | 0.8032 | 0.6711 | 0.8922 |
| Decision Tree (DT) | H2 (pre + post) | 0.8565 | 0.8814 | 0.8687 | 0.7680 | 0.9303 |
| Random Forest Classifier (RF) | H1 (post) | 0.8716 | 0.8613 | 0.8664 | 0.7643 | 0.8718 |
| Random Forest Classifier (RF) | H1 (pre + post) | 0.8946 | 0.9071 | 0.9008 | 0.8195 | 0.9036 |
| Random Forest Classifier (RF) | H2 (post) | 0.8540 | 0.8968 | 0.8749 | 0.7776 | 0.9329 |
| Random Forest Classifier (RF) | H2 (pre + post) | 0.8839 | 0.8909 | 0.8874 | 0.7975 | 0.9408 |
| Gradient Boosting Classifier (GBC) | H1 (post) | 0.9208 | 0.8561 | 0.8873 | 0.7974 | 0.8950 |
| Gradient Boosting Classifier (GBC) | H1 (pre + post) | 0.9264 | 0.8782 | 0.9017 | 0.8209 | 0.9076 |
| Gradient Boosting Classifier (GBC) | H2 (post) | 0.8736 | 0.8633 | 0.8684 | 0.7674 | 0.9315 |
| Gradient Boosting Classifier (GBC) | H2 (pre + post) | 0.9260 | 0.8633 | 0.8936 | 0.8076 | 0.9462 |
| Adaptive Boosting Classifier (AdaBoost) | H1 (post) | 0.9285 | 0.8229 | 0.8725 | 0.7739 | 0.8840 |
| Adaptive Boosting Classifier (AdaBoost) | H1 (pre + post) | 0.9299 | 0.8529 | 0.8897 | 0.8013 | 0.8980 |
| Adaptive Boosting Classifier (AdaBoost) | H2 (post) | 0.8696 | 0.8493 | 0.8593 | 0.7534 | 0.9272 |
| Adaptive Boosting Classifier (AdaBoost) | H2 (pre + post) | 0.9188 | 0.8357 | 0.8753 | 0.7782 | 0.9377 |
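The DT, RF, GBC, and AdaBoost baselines in the two tables above classify each pixel from its spectral values. The sketch below shows such a pixel-wise baseline; the band count, synthetic data, and default hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal sketch of a pixel-wise baseline classifier like those compared
# above (DT/RF/GBC/AdaBoost). Band count, synthetic data, and default
# hyperparameters are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
bands, h, w = 8, 64, 64                    # e.g., 4 pre- + 4 post-flood bands
image = rng.random((bands, h, w))
weak_labels = rng.integers(0, 2, (h, w))   # stand-in for weak flood labels

# Each pixel becomes one sample whose features are its band values.
X = image.reshape(bands, -1).T             # (h * w, bands)
y = weak_labels.ravel()

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
flood_map = clf.predict(X).reshape(h, w)   # pixel-wise flood mask
print(flood_map.mean())                    # fraction predicted as flooded
```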
| Method | Test Area | Smoothing | Precision | Recall | f1 Score | IoU | OA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mean histogram | F1 | No | 0.8638 | 0.9228 | 0.8923 | 0.8055 | 0.9383 |
| Mean histogram | F1 | Yes | 0.7694 | 0.9572 | 0.8531 | 0.7438 | 0.9087 |
| Mean histogram | F2 | No | 0.6070 | 0.9726 | 0.7474 | 0.5967 | 0.8457 |
| Mean histogram | F2 | Yes | 0.5471 | 0.9944 | 0.7059 | 0.5454 | 0.8055 |
| Bimodal histogram | F1 | No | 0.8680 | 0.9189 | 0.8927 | 0.8062 | 0.9390 |
| Bimodal histogram | F1 | Yes | 0.7739 | 0.9554 | 0.8551 | 0.7469 | 0.9104 |
| Bimodal histogram | F2 | No | 0.6081 | 0.9720 | 0.7482 | 0.5976 | 0.8464 |
| Bimodal histogram | F2 | Yes | 0.5482 | 0.9943 | 0.7067 | 0.5465 | 0.8063 |
| Otsu’s method | F1 | No | 0.8890 | 0.8963 | 0.8927 | 0.8061 | 0.9403 |
| Otsu’s method | F1 | Yes | 0.7989 | 0.9452 | 0.8659 | 0.7636 | 0.9190 |
| Otsu’s method | F2 | No | 0.6455 | 0.9587 | 0.7715 | 0.6281 | 0.8667 |
| Otsu’s method | F2 | Yes | 0.5796 | 0.9896 | 0.7310 | 0.5761 | 0.8291 |
| K-means (k = 2) | F1 | No | 0.8877 | 0.8980 | 0.8928 | 0.8064 | 0.9403 |
| K-means (k = 2) | F1 | Yes | 0.7973 | 0.9460 | 0.8653 | 0.7625 | 0.9184 |
| K-means (k = 2) | F2 | No | 0.6598 | 0.9538 | 0.7800 | 0.6393 | 0.8737 |
| K-means (k = 2) | F2 | Yes | 0.5915 | 0.9873 | 0.7398 | 0.5870 | 0.8370 |
| Method | Test Area | Smoothing | Precision | Recall | f1 Score | IoU | OA |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Mean histogram | H1 | No | 0.9012 | 0.9209 | 0.9109 | 0.8364 | 0.9131 |
| Mean histogram | H1 | Yes | 0.8647 | 0.9376 | 0.8997 | 0.8177 | 0.8991 |
| Mean histogram | H2 | No | 0.9059 | 0.8987 | 0.9023 | 0.8220 | 0.9491 |
| Mean histogram | H2 | Yes | 0.8523 | 0.9377 | 0.8930 | 0.8067 | 0.9412 |
| Bimodal histogram | H1 | No | 0.9006 | 0.9211 | 0.9107 | 0.8361 | 0.9128 |
| Bimodal histogram | H1 | Yes | 0.8642 | 0.9378 | 0.8995 | 0.8173 | 0.8988 |
| Bimodal histogram | H2 | No | 0.9177 | 0.8809 | 0.8989 | 0.8164 | 0.9482 |
| Bimodal histogram | H2 | Yes | 0.8667 | 0.9254 | 0.8951 | 0.8101 | 0.9432 |
| Otsu’s method | H1 | No | 0.9015 | 0.9207 | 0.9110 | 0.8365 | 0.9131 |
| Otsu’s method | H1 | Yes | 0.8651 | 0.9375 | 0.8998 | 0.8179 | 0.8993 |
| Otsu’s method | H2 | No | 0.9162 | 0.8832 | 0.8994 | 0.8171 | 0.9483 |
| Otsu’s method | H2 | Yes | 0.8648 | 0.9269 | 0.8948 | 0.8096 | 0.9430 |
| K-means (k = 2) | H1 | No | 0.8994 | 0.9206 | 0.9099 | 0.8347 | 0.9120 |
| K-means (k = 2) | H1 | Yes | 0.8627 | 0.9375 | 0.8986 | 0.8158 | 0.8978 |
| K-means (k = 2) | H2 | No | 0.9165 | 0.8827 | 0.8993 | 0.8170 | 0.9483 |
| K-means (k = 2) | H2 | Yes | 0.8652 | 0.9266 | 0.8948 | 0.8097 | 0.9430 |
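The two tables above compare four methods for binarizing continuous flood maps: a mean histogram threshold, a bimodal histogram threshold, Otsu’s method, and k-means with k = 2. The sketch below illustrates three of these on a synthetic probability map; the library choices (scikit-image, scikit-learn) and the synthetic input are assumptions for illustration.

```python
# Sketch of three of the binarization methods compared above: a mean
# histogram threshold, Otsu's threshold, and k-means (k = 2) on a
# continuous flood probability map. Library choices are assumptions here.
import numpy as np
from skimage.filters import threshold_otsu
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
prob_map = rng.random((64, 64))  # stand-in for a model's flood probabilities

# Mean histogram threshold: simply the global mean of the map.
mean_mask = prob_map > prob_map.mean()

# Otsu: picks the threshold that maximizes between-class variance.
otsu_mask = prob_map > threshold_otsu(prob_map)

# K-means with k = 2: cluster pixel values; the higher-mean cluster is water.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(prob_map.reshape(-1, 1))
water_cluster = int(np.argmax(km.cluster_centers_.ravel()))
kmeans_mask = (km.labels_ == water_cluster).reshape(prob_map.shape)

# Fraction of pixels each method classifies as flooded
print(mean_mask.mean(), otsu_mask.mean(), kmeans_mask.mean())
```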
[Table: visual comparison of flood maps produced by the Bi-Temporal UNet, UNet, DT, RF, GBC, and AdaBoost models for post-only and pre + post inputs over Florence sites F1 and F2 and Harvey sites H1 and H2; map images are not reproduced here.]
| Evaluation | Site F1 (K-Means): Weakly Labeled | Site F1 (K-Means): Hand Labeled | Site F2 (K-Means): Weakly Labeled | Site F2 (K-Means): Hand Labeled |
| --- | --- | --- | --- | --- |
| Precision | 0.8877 | 0.7645 | 0.6598 | 0.7325 |
| Recall | 0.8980 | 0.9263 | 0.9538 | 0.9260 |
| f1 score | 0.8928 | 0.8377 | 0.7800 | 0.8179 |
| IoU | 0.8064 | 0.7207 | 0.6393 | 0.6919 |
| OA | 0.9403 | 0.9006 | 0.8737 | 0.9032 |