Optimal Fusion of Multispectral Optical and SAR Images for Flood Inundation Mapping through Explainable Deep Learning
Abstract
1. Introduction
1.1. Previous Work
1.1.1. Traditional Flood Inundation Mapping
1.1.2. Artificial Intelligence for Flood Inundation Mapping
- Presenting a novel deep learning semantic-segmentation model capable of producing high-quality flood inundation maps from both SAR and optical images, demonstrated through comparative analysis against state-of-the-art models.
- Investigating various fusion combinations of SAR and optical spectral bands and indices, providing insight into the optimal combinations for accurate flood inundation mapping under both clear and cloud-covered conditions.
- Integrating XAI to interpret the behavior of the deep learning models, providing a more comprehensive understanding of their capacity to learn effectively. This not only enhances the trustworthiness of the models but also provides deeper insight into the influence of each input-data type on the models’ decision-making process (a minimal illustrative sketch of this explanation step follows this list).
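To make the XAI contribution concrete, the following is a minimal Grad-CAM sketch for a binary flood-segmentation network, in the spirit of Selvaraju et al. It is illustrative only: `model`, `target_layer`, and the single-channel flood-logit output are assumptions, not the exact architecture or explanation pipeline used in this work.

```python
# A minimal Grad-CAM sketch for a binary flood-segmentation network. `model` and
# `target_layer` are placeholders (assumptions), not the paper's actual modules.
import torch
import torch.nn.functional as F

def grad_cam(model: torch.nn.Module, target_layer: torch.nn.Module,
             image: torch.Tensor) -> torch.Tensor:
    """Return a [0, 1] heatmap of the pixels that drove the flood prediction."""
    activations, gradients = {}, {}

    def fwd_hook(_module, _inputs, output):
        activations["value"] = output                    # feature maps of target_layer

    def bwd_hook(_module, _grad_in, grad_out):
        gradients["value"] = grad_out[0]                 # gradients w.r.t. those maps

    h_fwd = target_layer.register_forward_hook(fwd_hook)
    h_bwd = target_layer.register_full_backward_hook(bwd_hook)

    model.eval()
    logits = model(image.unsqueeze(0))                   # (1, 1, H, W) flood logits
    score = logits.sigmoid().sum()                       # scalar "flood" score
    model.zero_grad()
    score.backward()

    h_fwd.remove()
    h_bwd.remove()

    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)  # channel importance
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8)).squeeze().detach()
```

Channel-averaged gradients weight the captured activations, and the ReLU-ed, upsampled result highlights the image regions most responsible for the flood prediction.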
2. Materials and Methods
2.1. Dataset
2.1.1. Dataset Preparation
2.1.2. Spectral Bands and Indices
2.2. Proposed Model Architecture
2.3. Preprocessing
2.4. Experimental Settings
3. Results
3.1. Comparison of Models
3.1.1. Quantitative Evaluation
3.1.2. Computational Complexity
3.1.3. Qualitative Evaluation
3.1.4. Ablation Study
3.2. Comparison of Sentinel-1 and Sentinel-2 Combinations
3.2.1. Quantitative Evaluation
3.2.2. Qualitative Evaluation
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Markus, M.; Angel, J.; Byard, G.; McConkey, S.; Zhang, C.; Cai, X.; Notaro, M.; Ashfaq, M. Communicating the impacts of projected climate change on heavy rainfall using a weighted ensemble approach. J. Hydrol. Eng. 2018, 23, 4018004. [Google Scholar] [CrossRef]
- Mosavi, A.; Ozturk, P.; Chau, K.-W. Flood prediction using machine learning models: Literature review. Water 2018, 10, 1536. [Google Scholar] [CrossRef]
- Leandro, J.; Chen, K.-F.; Wood, R.R.; Ludwig, R. A scalable flood-resilience-index for measuring climate change adaptation: Munich city. Water Res. 2020, 173, 115502. [Google Scholar] [CrossRef] [PubMed]
- Sahana, M.; Patel, P.P. A comparison of frequency ratio and fuzzy logic models for flood susceptibility assessment of the lower Kosi River Basin in India. Environ. Earth Sci. 2019, 78, 289. [Google Scholar] [CrossRef]
- Tavus, B.; Can, R.; Kocaman, S. A Cnn-based flood mapping approach using sentinel-1 data. In ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Copernicus GmbH: Göttingen, Germany, 2022; pp. 549–556. [Google Scholar] [CrossRef]
- Li, L.; Chen, Y.; Xu, T.; Meng, L.; Huang, C.; Shi, K. Spatial attraction models coupled with Elman neural networks for enhancing sub-pixel urban inundation mapping. Remote Sens. 2020, 12, 2068. [Google Scholar] [CrossRef]
- Wang, P.; Wang, L.; Leung, H.; Zhang, G. Super-Resolution Mapping Based on Spatial–Spectral Correlation for Spectral Imagery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2256–2268. [Google Scholar] [CrossRef]
- Costache, R.; Arabameri, A.; Elkhrachy, I.; Ghorbanzadeh, O.; Pham, Q.B. Detection of areas prone to flood risk using state-of-the-art machine learning models. Geomatics Nat. Hazards Risk 2021, 12, 1488–1507. [Google Scholar] [CrossRef]
- Kadiyala, S.P.; Woo, W.L. Flood Prediction and Analysis on the Relevance of Features using Explainable Artificial Intelligence. In Proceedings of the 2021 2nd Artificial Intelligence and Complex Systems Conference, Bangkok, Thailand, 21–22 October 2021; pp. 1–6. [Google Scholar] [CrossRef]
- Pradhan, B.; Lee, S.; Dikshit, A.; Kim, H. Spatial Flood Susceptibility Mapping Using an Explainable Artificial Intelligence (XAI) Model. Geosci. Front. 2023, 14, 101625. [Google Scholar] [CrossRef]
- Islam, S.R.; Eberle, W.; Ghafoor, S.K.; Ahmed, M. Explainable Artificial Intelligence Approaches: A Survey. arXiv 2021, arXiv:2101.09429.
- Liang, J.; Liu, D. A local thresholding approach to flood water delineation using Sentinel-1 SAR imagery. ISPRS J. Photogramm. Remote Sens. 2020, 159, 53–62. [Google Scholar] [CrossRef]
- McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
- Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033. [Google Scholar] [CrossRef]
- Kriegler, F.J.; Malila, W.A.; Nalepka, R.F.; Richardson, W. Preprocessing Transformations and Their Effects on Multispectral Recognition. In Proceedings of the 6th International Symposium on Remote Sensing and Environment, Ann Arbor, MI, USA, 13 October 1969; pp. 97–131. [Google Scholar]
- Bonafilia, D.; Tellman, B.; Anderson, T.; Issenberg, E. Sen1Floods11: A georeferenced dataset to train and test deep learning flood algorithms for sentinel-1. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 13–19 June 2020; pp. 210–211. [Google Scholar] [CrossRef]
- Konapala, G.; Kumar, S.V.; Ahmad, S.K. Exploring Sentinel-1 and Sentinel-2 diversity for flood inundation mapping using deep learning. ISPRS J. Photogramm. Remote Sens. 2021, 180, 163–173. [Google Scholar] [CrossRef]
- Tanim, A.H.; McRae, C.B.; Tavakol-Davani, H.; Goharian, E. Flood Detection in Urban Areas Using Satellite Imagery and Machine Learning. Water 2022, 14, 1140. [Google Scholar] [CrossRef]
- Chakma, P.; Akter, A. Flood Mapping in the Coastal Region of Bangladesh Using Sentinel-1 SAR Images: A Case Study of Super Cyclone Amphan. J. Civ. Eng. Forum 2021, 7, 267–278. [Google Scholar] [CrossRef]
- Dutsenwai, H.S.; Bin Ahmad, B.; Mijinyawa, A.; Tanko, A.I. Fusion of SAR images for flood extent mapping in northern peninsula Malaysia. Int. J. Adv. Appl. Sci. 2016, 3, 37–48. [Google Scholar] [CrossRef]
- Panahi, M.; Rahmati, O.; Kalantari, Z.; Darabi, H.; Rezaie, F.; Moghaddam, D.D.; Ferreira, C.S.S.; Foody, G.; Aliramaee, R.; Bateni, S.M.; et al. Large-scale dynamic flood monitoring in an arid-zone floodplain using SAR data and hybrid machine-learning models. J. Hydrol. 2022, 611, 128001. [Google Scholar] [CrossRef]
- Sundaram, S.; Yarrakula, K. Multi-Temporal Analysis of Sentinel-1 SAR Data for Urban Flood Inundation Mapping: Case Study of Chennai Metropolitan City. 2017. Available online: https://www.researchgate.net/publication/322977903 (accessed on 23 July 2023).
- Gebrehiwot, A.; Hashemi-Beni, L. Automated Inundation Mapping: Comparison of Methods. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Waikoloa, HI, USA, 26 September–2 October 2020; pp. 3265–3268. [Google Scholar] [CrossRef]
- Fraccaro, P.; Stoyanov, N.; Gaffoor, Z.; La Rosa, L.E.C.; Singh, J.; Ishikawa, T.; Edwards, B.; Jones, A.; Weldermariam, K. Deploying an Artificial Intelligence Application to Detect Flood from Sentinel 1 Data. 2022. Available online: www.aaai.org (accessed on 14 October 2023).
- Ghosh, B.; Garg, S.; Motagh, M. Automatic flood detection from sentinel-1 data using deep learning architectures. In ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences; Copernicus GmbH: Göttingen, Germany, 2022; pp. 201–208. [Google Scholar] [CrossRef]
- Katiyar, V.; Tamkuan, N.; Nagai, M. Near-real-time flood mapping using off-the-shelf models with SAR imagery and deep learning. Remote Sens. 2021, 13, 2334. [Google Scholar] [CrossRef]
- Bereczky, M.; Wieland, M.; Krullikowski, C.; Martinis, S.; Plank, S. Sentinel-1-Based Water and Flood Mapping: Benchmarking Convolutional Neural Networks Against an Operational Rule-Based Processing Chain. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 2023–2036. [Google Scholar] [CrossRef]
- Li, Z.; Demir, I. U-net-based semantic classification for flood extent extraction using SAR imagery and GEE platform: A case study for 2019 central US flooding. Sci. Total. Environ. 2023, 869, 161757. [Google Scholar] [CrossRef]
- Sanderson, J.; Tengtrairat, N.; Woo, W.L.; Mao, H.; Al-Nima, R.R. XFIMNet: An Explainable Deep Learning Architecture for Versatile Flood Inundation Mapping with Synthetic Aperture Radar and Multi-Spectral Optical Images. Int. J. Remote Sens. 2023. [Google Scholar]
- Paul, S.; Ganju, S. Flood Segmentation on Sentinel-1 SAR Imagery with Semi-Supervised Learning. arXiv 2021, arXiv:2107.08369.
- Yadav, R.; Nascetti, A.; Ban, Y. Attentive Dual Stream Siamese U-net for Flood Detection on Multi-temporal Sentinel-1 Data. In Proceedings of the IGARSS 2022–2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022. [Google Scholar] [CrossRef]
- Jiang, C.; Zhang, H.; Wang, C.; Ge, J.; Wu, F. Water Surface Mapping from Sentinel-1 Imagery Based on Attention-Unet3+: A Case Study of Poyang Lake Region. Remote Sens. 2022, 14, 4708. [Google Scholar] [CrossRef]
- Wang, J.; Wang, S.; Wang, F.; Zhou, Y.; Wang, Z.; Ji, J.; Xiong, Y.; Zhao, Q. FWENet: A deep convolutional neural network for flood water body extraction based on SAR images. Int. J. Digit. Earth 2022, 15, 345–361. [Google Scholar] [CrossRef]
- Zhao, B.; Sui, H.; Liu, J. Siam-DWENet: Flood inundation detection for SAR imagery using a cross-task transfer Siamese network. Int. J. Appl. Earth Obs. Geoinf. 2023, 116, 103132. [Google Scholar] [CrossRef]
- Akiva, P.; Purri, M.; Dana, K.; Tellman, B.; Anderson, T. H2O-Net: Self-Supervised Flood Segmentation via Adversarial Domain Adaptation and Label Refinement. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 5 January 2021. [Google Scholar]
- Sediqi, K.M.; Lee, H.J. A novel upsampling and context convolution for image semantic segmentation. Sensors 2021, 21, 2170. [Google Scholar] [CrossRef] [PubMed]
- Lee, C.-Y.; Xie, S.; Gallagher, P.; Zhang, Z.; Tu, Z. Deeply-Supervised Nets. arXiv 2014, arXiv:1409.5185.
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Berlin/Heidelberg, Germany, 2018; pp. 3–11. [Google Scholar] [CrossRef]
- Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef] [PubMed]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef] [PubMed]
- Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar] [CrossRef]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. Int. J. Comput. Vision 2020, 128, 336–359. [Google Scholar] [CrossRef]
- Loshchilov, I.; Hutter, F. Decoupled Weight Decay Regularization. arXiv 2017, arXiv:1711.05101.
- Loshchilov, I.; Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983. [Google Scholar] [CrossRef]
- Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. arXiv 2018, arXiv:1802.02611.
- Fan, T.; Wang, G.; Li, Y.; Wang, H. Ma-net: A multi-scale attention network for liver and tumor segmentation. IEEE Access 2020, 8, 179656–179665. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar] [CrossRef]
- He, X.; Zhang, S.; Xue, B.; Zhao, T.; Wu, T. Cross-modal change detection flood extraction based on convolutional neural network. Int. J. Appl. Earth Obs. Geoinf. 2023, 117, 103197. [Google Scholar] [CrossRef]
- Garg, S.; Feinstein, B.; Timnat, S.; Batchu, V.; Dror, G.; Rosenthal, A.G.; Gulshan, V. Cross Modal Distillation for Flood Extent Mapping. arXiv 2023, arXiv:2302.08180.
- Gašparović, M.; Klobučar, D. Mapping floods in lowland forest using sentinel-1 and sentinel-2 data and an object-based approach. Forests 2021, 12, 553. [Google Scholar] [CrossRef]
- Manocha, A.; Afaq, Y.; Bhatia, M. Mapping of water bodies from sentinel-2 images using deep learning-based feature fusion approach. Neural Comput. Appl. 2023, 35, 9167–9179. [Google Scholar] [CrossRef]
- Hosseiny, B.; Mahdianpari, M.; Brisco, B.; Mohammadimanesh, F.; Salehi, B. WetNet: A Spatial–Temporal Ensemble Deep Learning Model for Wetland Classification Using Sentinel-1 and Sentinel-2. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
Flood Event | S1 Date | S2 Date | Orbit Direction | Relative Orbit |
---|---|---|---|---|
Bolivia | 15 February 2018 | 15 February 2018 | Descending | 156 |
Ghana | 18 September 2018 | 19 September 2018 | Ascending | 147 |
India | 12 August 2016 | 12 August 2016 | Descending | 77 |
Cambodia | 5 August 2018 | 4 August 2018 | Ascending | 26 |
Nigeria | 21 September 2018 | 20 September 2018 | Ascending | 103 |
Pakistan | 28 June 2017 | 28 June 2017 | Descending | 5 |
Paraguay | 31 October 2018 | 31 October 2018 | Ascending | 68 |
Somalia | 7 May 2018 | 5 May 2018 | Ascending | 116 |
Spain | 17 September 2019 | 18 September 2019 | Descending | 110 |
Sri Lanka | 30 May 2017 | 28 May 2017 | Descending | 19 |
USA | 22 May 2019 | 22 May 2019 | Ascending | 136 |
Band | Resolution | Central Wavelength | Description |
---|---|---|---|
Band 1—Coastal | 60 m | 443 nm | Band 1 captures the aerosol properties in coastal zones, which aids in assessing water quality. |
Band 2—Blue (B) | 10 m | 490 nm | Band 2 captures the blue light in the visible spectrum and is useful for soil and vegetation discrimination and identifying land-cover types. |
Band 3—Green (G) | 10 m | 560 nm | Band 3 captures the green light in the visible spectrum, which provides good contrast between muddy and clear water, and is useful for detecting oil on water surfaces and vegetation. |
Band 4—Red (R) | 10 m | 665 nm | Band 4 captures the red light in the visible spectrum, which is useful for identifying vegetation and soil types, and differentiating between land-cover types. |
Band 5—RedEdge-1, Band 6—RedEdge-2, Band 7—RedEdge-3 | 20 m | 705 nm, 740 nm, 783 nm | Bands 5, 6, and 7 capture the red-edge spectral region, where vegetation reflectance increases sharply, and are useful for classifying vegetation.
Band 8—Near-Infrared (NIR) | 10 m | 842 nm | Band 8 captures light in the near-infrared spectrum, which is strongly absorbed by water, making it useful for discriminating between land and water bodies.
Band 8a—Narrow Near-Infrared | 20 m | 865 nm | Band 8a captures near-infrared light at a slightly longer wavelength over a narrower bandwidth, providing additional sensitivity to vegetation reflectance and making it useful for vegetation classification.
Band 9—Water Vapor | 60 m | 945 nm | Band 9 captures light in the near-infrared spectrum at a water-vapor absorption feature and is useful for detecting atmospheric water vapor.
Band 10—Cirrus | 60 m | 1375 nm | Band 10 captures light in the shortwave-infrared spectrum and is sensitive to cirrus clouds, so it can be useful for cloud screening and removal.
Band 11—Shortwave-Infrared-1 (SWIR-1), Band 12—Shortwave-Infrared-2 (SWIR-2) | 20 m | 1610 nm, 2190 nm | Bands 11 and 12 capture light in the shortwave-infrared spectrum and are sensitive to surface moisture, so they are useful for measuring the moisture content of soil and vegetation, as well as discriminating between water bodies and other land-cover types.
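The spectral indices evaluated alongside these bands (NDWI, MNDWI, and NDVI) are simple normalized differences of band pairs. The sketch below shows one way to compute them from co-registered Sentinel-2 reflectance arrays; the function names and the division guard are illustrative assumptions, not the paper's preprocessing code.

```python
# Illustrative computation of the indices referenced in this work (NDWI, MNDWI, NDVI)
# from Sentinel-2 reflectance arrays; names and the epsilon guard are assumptions.
import numpy as np

def normalized_difference(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Generic (a - b) / (a + b) index with protection against division by zero."""
    return (a - b) / np.clip(a + b, 1e-6, None)

def spectral_indices(green: np.ndarray, red: np.ndarray,
                     nir: np.ndarray, swir1: np.ndarray) -> dict:
    """Indices used in the band-combination experiments, from 2-D reflectance arrays."""
    return {
        "NDWI":  normalized_difference(green, nir),    # McFeeters: open water > 0
        "MNDWI": normalized_difference(green, swir1),  # Xu: water vs. built-up/soil
        "NDVI":  normalized_difference(nir, red),      # vegetation density
    }
```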
Model | Accuracy | Loss | F1 Score | IOU |
---|---|---|---|---|
U-Net++ ResNet50 | 0.9387 | 0.1763 | 0.7098 | 0.5509 |
U-Net++ MobileNet_V2 | 0.9242 | 0.2108 | 0.6623 | 0.5345 |
DeepLabV3+ ResNet50 | 0.9237 | 0.1882 | 0.6502 | 0.5272 |
DeepLabV3+ MobileNet_V2 | 0.9389 | 0.1784 | 0.6962 | 0.5491 |
MA-Net ResNet50 | 0.9206 | 0.2349 | 0.6565 | 0.5201 |
MA-Net MobileNet_V2 | 0.9398 | 0.2249 | 0.6739 | 0.5425 |
Proposed | 0.9478 | 0.1342 | 0.7425 | 0.5997 |
Model | Accuracy | Loss | F1 Score | IOU |
---|---|---|---|---|
U-Net++ ResNet50 | 0.9301 | 0.2086 | 0.6921 | 0.5422 |
U-Net++ MobileNet_V2 | 0.9185 | 0.2213 | 0.6537 | 0.5234 |
DeepLabV3+ ResNet50 | 0.9196 | 0.2146 | 0.6448 | 0.5207 |
DeepLabV3+ MobileNet_V2 | 0.9205 | 0.2093 | 0.6829 | 0.5401 |
MA-Net ResNet50 | 0.9201 | 0.2431 | 0.6351 | 0.4819 |
MA-Net MobileNet_V2 | 0.9101 | 0.2733 | 0.6056 | 0.4723 |
Proposed | 0.9436 | 0.1724 | 0.7376 | 0.5908 |
Model | Accuracy | Loss | F1 Score | IOU |
---|---|---|---|---|
U-Net++ ResNet50 | 0.9298 | 0.2743 | 0.6764 | 0.5404 |
U-Net++ MobileNet_V2 | 0.9127 | 0.2884 | 0.6509 | 0.5218 |
DeepLabV3+ ResNet50 | 0.9124 | 0.2789 | 0.6249 | 0.5197 |
DeepLabV3+ MobileNet_V2 | 0.9196 | 0.2752 | 0.6789 | 0.5388 |
MA-Net ResNet50 | 0.9078 | 0.2825 | 0.6037 | 0.4727 |
MA-Net MobileNet_V2 | 0.9008 | 0.2852 | 0.5956 | 0.4590 |
Proposed | 0.9432 | 0.2237 | 0.7176 | 0.5862 |
Model | Accuracy | Loss | F1 Score | IOU |
---|---|---|---|---|
U-Net++ ResNet50 | 0.9643 | 0.1123 | 0.7921 | 0.6827 |
U-Net++ MobileNet_V2 | 0.9756 | 0.0934 | 0.8237 | 0.7246 |
DeepLabV3+ ResNet50 | 0.9689 | 0.0952 | 0.8109 | 0.6983 |
DeepLabV3+ MobileNet_V2 | 0.9462 | 0.1028 | 0.7827 | 0.6828 |
MA-Net ResNet50 | 0.9622 | 0.1137 | 0.7202 | 0.6331 |
MA-Net MobileNet_V2 | 0.9635 | 0.1167 | 0.7536 | 0.6402 |
Proposed | 0.9789 | 0.0883 | 0.8396 | 0.7307 |
Model | Accuracy | Loss | F1 Score | IOU |
---|---|---|---|---|
U-Net++ ResNet50 | 0.9638 | 0.1197 | 0.7892 | 0.6643 |
U-Net++ MobileNet_V2 | 0.9721 | 0.0968 | 0.8074 | 0.7121 |
DeepLabV3+ ResNet50 | 0.9642 | 0.1007 | 0.7986 | 0.6912 |
DeepLabV3+ MobileNet_V2 | 0.9409 | 0.1095 | 0.7774 | 0.6784 |
MA-Net ResNet50 | 0.9448 | 0.1557 | 0.7043 | 0.5901 |
MA-Net MobileNet_V2 | 0.9552 | 0.1242 | 0.7553 | 0.6273 |
Proposed | 0.9763 | 0.0906 | 0.8238 | 0.7241 |
Model | Accuracy | Loss | F1 Score | IOU |
---|---|---|---|---|
U-Net++ ResNet50 | 0.9583 | 0.1346 | 0.7762 | 0.6529 |
U-Net++ MobileNet_V2 | 0.9702 | 0.1209 | 0.7980 | 0.6832 |
DeepLabV3+ ResNet50 | 0.9573 | 0.1254 | 0.7923 | 0.6804 |
DeepLabV3+ MobileNet_V2 | 0.9364 | 0.1317 | 0.7705 | 0.6715 |
MA-Net ResNet50 | 0.9218 | 0.2147 | 0.6889 | 0.5696 |
MA-Net MobileNet_V2 | 0.9307 | 0.2277 | 0.7048 | 0.5810 |
Proposed | 0.9718 | 0.1143 | 0.8054 | 0.7031 |
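The tables above report pixel-wise Accuracy, F1 score, and IoU. As a reference for how such figures are commonly derived, the sketch below computes them from binary predicted and ground-truth masks; it is a generic implementation, not the authors' evaluation code, and the loss column (which depends on the training objective) is not reproduced.

```python
# Generic pixel-wise segmentation metrics for binary flood masks (0/1 arrays).
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Accuracy, F1 score, and IoU computed from the pixel-level confusion matrix."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 0.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return {"Accuracy": accuracy, "F1": f1, "IOU": iou}
```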
Model | Sentinel-1 Training Time per Epoch (s) | Sentinel-1 Inference Time per Image (ms) | Sentinel-2 Training Time per Epoch (s) | Sentinel-2 Inference Time per Image (ms) | Number of Parameters |
---|---|---|---|---|---|
U-Net++ ResNet50 | 23 | 318 | 27 | 346 | 48,982,754 |
U-Net++ MobileNet_V2 | 8 | 104 | 9 | 124 | 6,824,578 |
DeepLabV3+ ResNet50 | 11 | 136 | 16 | 192 | 26,674,706 |
DeepLabV3+ MobileNet_V2 | 3 | 72 | 8 | 108 | 4,378,482 |
MA-Net ResNet50 | 31 | 352 | 34 | 374 | 147,471,498 |
MA-Net MobileNet_V2 | 19 | 304 | 25 | 339 | 48,891,766 |
Proposed | 17 | 287 | 22 | 312 | 42,745,693 |
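Parameter counts and per-image inference times such as those in the table above are typically obtained by summing trainable tensors and timing a forward pass. The following PyTorch sketch illustrates one way to produce such figures; the 512 × 512 tile size and single-image, CPU-style timing are assumptions rather than the authors' benchmarking setup.

```python
# Rough complexity report for a segmentation model: parameter count and the time
# for one forward pass on a dummy tile (tile size is an assumption, not the paper's).
import time
import torch

def complexity_report(model: torch.nn.Module, in_channels: int, size: int = 512) -> dict:
    """Count trainable parameters and time a single inference on a random input."""
    n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)

    model.eval()
    dummy = torch.randn(1, in_channels, size, size)
    with torch.no_grad():
        model(dummy)                                   # warm-up pass
        start = time.perf_counter()
        model(dummy)
        elapsed_ms = (time.perf_counter() - start) * 1000.0

    return {"parameters": n_params, "inference_ms": elapsed_ms}
```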
Module Removed | Accuracy | Loss | F1 Score | IOU |
---|---|---|---|---|
Dense Skip Connections | 0.9219 | 0.2269 | 0.6565 | 0.5361 |
Deep Supervision | 0.9406 | 0.2731 | 0.7053 | 0.5714 |
Atrous Convolution | 0.9323 | 0.2255 | 0.6882 | 0.5480 |
Spatial Pyramid Pooling | 0.9226 | 0.2348 | 0.6632 | 0.5291 |
Weighting | 0.9348 | 0.2317 | 0.6924 | 0.5706 |
Proposed Model | 0.9432 | 0.2237 | 0.7176 | 0.5862 |
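Two of the ablated components, atrous convolution and spatial pyramid pooling, are often combined into an ASPP-style block. The sketch below is a generic PyTorch implementation of such a block; the dilation rates and layer layout are illustrative assumptions and are not claimed to match the proposed model's exact module.

```python
# Generic ASPP-style block: parallel atrous (dilated) convolutions at several rates,
# concatenated and fused with a 1x1 convolution. Rates and layout are assumptions.
import torch
import torch.nn as nn

class AtrousSpatialPyramidPooling(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch keeps the spatial size (padding == dilation for 3x3 kernels).
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
```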
Input Combination | Accuracy | Loss | F1 Score | IOU |
---|---|---|---|---|
Sentinel-1 (VV + VH) | 0.9478 | 0.1342 | 0.7425 | 0.5997 |
Sentinel-2 (All Bands) | 0.9789 | 0.0883 | 0.8396 | 0.7307 |
VV + VH + Sentinel-2 | 0.9607 | 0.0953 | 0.7882 | 0.6651 |
VV + VH + NIR | 0.9551 | 0.1146 | 0.7722 | 0.6395 |
VV + VH + SWIR | 0.9531 | 0.1157 | 0.7602 | 0.6253 |
VV + VH + NIR + SWIR | 0.9614 | 0.0934 | 0.7925 | 0.6662 |
VV + VH + RGB | 0.9373 | 0.1687 | 0.6965 | 0.5473 |
VV + VH + RGB + NIR | 0.9611 | 0.0955 | 0.7898 | 0.6640 |
VV + VH + RGB + SWIR | 0.9569 | 0.1072 | 0.7701 | 0.6421 |
VV + VH + RGB + NIR + SWIR | 0.9684 | 0.0764 | 0.8232 | 0.7064 |
VV + VH + NDVI | 0.9509 | 0.1275 | 0.7444 | 0.6068 |
VV + VH + MNDWI | 0.9536 | 0.1224 | 0.7585 | 0.6215 |
Input Combination | Accuracy | Loss | F1 Score | IOU |
---|---|---|---|---|
Sentinel-1 (VV + VH) | 0.9436 | 0.1724 | 0.7376 | 0.5908 |
Sentinel-2 (All Bands) | 0.9763 | 0.0906 | 0.8238 | 0.7241 |
VV + VH + Sentinel-2 | 0.9662 | 0.1019 | 0.7673 | 0.6620 |
VV + VH + NIR | 0.9610 | 0.1776 | 0.7510 | 0.6417 |
VV + VH + SWIR | 0.9537 | 0.1629 | 0.6581 | 0.5963 |
VV + VH + NIR + SWIR | 0.9564 | 0.1184 | 0.7153 | 0.5991 |
VV + VH + RGB | 0.9311 | 0.2201 | 0.6521 | 0.5416 |
VV + VH + RGB + NIR | 0.9590 | 0.1103 | 0.7179 | 0.6496 |
VV + VH + RGB + SWIR | 0.9672 | 0.1019 | 0.8012 | 0.6943 |
VV + VH + RGB + NIR + SWIR | 0.9708 | 0.0915 | 0.8216 | 0.7225 |
VV + VH + NDVI | 0.9368 | 0.1588 | 0.6901 | 0.5704 |
VV + VH + MNDWI | 0.9470 | 0.1875 | 0.6872 | 0.5676 |
Input Combination | Accuracy | Loss | F1 Score | IOU |
---|---|---|---|---|
Sentinel-1 (VV + VH) | 0.9432 | 0.2237 | 0.7176 | 0.5862
Sentinel-2 (All Bands) | 0.9718 | 0.1143 | 0.8054 | 0.7031
VV + VH + Sentinel-2 | 0.9632 | 0.2442 | 0.7582 | 0.6425 |
VV + VH + NIR | 0.9550 | 0.2980 | 0.7155 | 0.6125 |
VV + VH + SWIR | 0.9620 | 0.2808 | 0.7509 | 0.6310 |
VV + VH + NIR + SWIR | 0.9588 | 0.2793 | 0.7495 | 0.6385 |
VV + VH + RGB | 0.9353 | 0.3849 | 0.6011 | 0.4769 |
VV + VH + RGB + NIR | 0.9513 | 0.1821 | 0.7584 | 0.6318 |
VV + VH + RGB + SWIR | 0.9636 | 0.2180 | 0.7854 | 0.6760 |
VV + VH + RGB + NIR + SWIR | 0.9641 | 0.1833 | 0.8019 | 0.7053 |
VV + VH + NDVI | 0.9464 | 0.2612 | 0.6983 | 0.5734 |
VV + VH + MNDWI | 0.9464 | 0.3270 | 0.6950 | 0.5638 |
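The input combinations above correspond to early fusion: SAR backscatter channels and selected optical bands or indices are stacked into a single multi-channel tensor before being fed to the segmentation network. The sketch below illustrates this stacking with per-channel min-max scaling; the normalization choice and band naming are assumptions, not the paper's documented preprocessing.

```python
# Illustrative early fusion of SAR and optical channels into one (C, H, W) input.
# Band naming and min-max scaling are assumptions, not the paper's exact pipeline.
import numpy as np

def fuse_inputs(vv: np.ndarray, vh: np.ndarray, optical: dict,
                keys=("NIR", "SWIR")) -> np.ndarray:
    """Stack SAR backscatter with selected co-registered optical bands.

    `optical` maps band names (e.g. "R", "G", "B", "NIR", "SWIR") to 2-D arrays
    aligned with the SAR tile; `keys` selects which bands to append.
    """
    channels = [vv, vh] + [optical[k] for k in keys]
    stacked = np.stack(channels, axis=0).astype(np.float32)
    # Per-channel min-max scaling so SAR backscatter and reflectance share one range.
    mins = stacked.min(axis=(1, 2), keepdims=True)
    maxs = stacked.max(axis=(1, 2), keepdims=True)
    return (stacked - mins) / np.clip(maxs - mins, 1e-6, None)
```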