Correction

Correction: Munawar et al. UAVs in Disaster Management: Application of Integrated Aerial Imagery and Convolutional Neural Network for Flood Detection. Sustainability 2021, 13, 7547

by Hafiz Suliman Munawar, Fahim Ullah, Siddra Qayyum, Sara Imran Khan and Mohammad Mojtahedi

1 School of Built Environment, University of New South Wales, Kensington, Sydney, NSW 2052, Australia
2 School of Civil Engineering and Surveying, University of Southern Queensland, Springfield, QLD 4300, Australia
3 Independent Researcher, Sydney, NSW 2150, Australia
* Author to whom correspondence should be addressed.
Sustainability 2025, 17(12), 5444; https://doi.org/10.3390/su17125444
Submission received: 6 May 2025 / Revised: 12 May 2025 / Accepted: 16 May 2025 / Published: 13 June 2025
(This article belongs to the Special Issue Disaster Risk Reduction and Resilient Built Environment)
To further clarify the figures and data sources used in the paper, the authors would like to make the following corrections to the published paper [1]. The changes are as follows:
1.
Updating the institutional affiliation of one of the authors to reflect their status at the time of paper publication:
Faculty of Chemical Energy, University of New South Wales, Kensington, Sydney, NSW 2052, Australia
with
Independent Researcher, Sydney, NSW 2150, Australia
2.
Replacing the sentence in “Abstract”:
The study area is based on a flood-prone region of the Indus River in Pakistan, where both pre-and post-disaster images are collected through UAVs.
with
The study area includes both global flood-prone regions and a specific flood-prone region of the Indus River in Pakistan, where both pre- and post-disaster images are collected from publicly available datasets.
3.
To avoid readers’ confusion, the authors wish to add the following text to the “Abstract” before the sentence “For the training phase, 2150 image patches are created by resizing and cropping the source images.”:
The overarching goal of this study is to develop and validate a model using a global dataset, rather than restricting the analysis solely to the Indus River region in Pakistan. While the study discusses Pakistan as a reference case, the model was trained and tested using diverse images to ensure broader applicability.
4.
Replacing the sentence in Section 3 “Materials and Methods”, paragraph 2:
In the current study, CNN and some other techniques have been used to detect floods from multispectral aerial images captured from the Indus River located in Pakistan.
with
In the current study, CNN and other techniques have been employed to detect floods from multispectral aerial images captured from the Indus River in Pakistan, as well as from global disaster zones where publicly available images are accessible on the Internet.
5.
Replacing the sentence in Section 3 “Materials and Methods”, paragraph 2:
Images can also be retrieved and used from online sources such as Google Earth or social media.
with
Images were also retrieved and used from online sources such as Google Earth, social media, and other Internet platforms.
6.
Replacing the sentence in Section 3.1 “Data Collection and Target Area”, paragraph 1:
The current study’s target area is the Indus Basin in Punjab, Pakistan, as shown in Figure 3.
with
The current study’s target area is the Indus Basin in Punjab, Pakistan, as shown in Figure 3. However, the images used in this study represent publicly available disaster images of different locations from the Internet.
7.
Replacing the sentence in Section 3.1 “Data Collection and Target Area”, paragraph 2:
Image data for the dataset is collected from UAV-based images and online sources such as Google Earth.
with
Image data for the dataset are collected from UAV-based images of different disaster locations obtained through online sources such as Google Earth and other publicly available Internet sources.
8.
Replacing the caption of Figure 5 in Section 3.3 “Training Phase”:
Figure 5. The training phase of the proposed method.
with
Figure 5. The training phase of the proposed method (sub-images used in this figure are obtained from the Internet for illustration purposes only and do not reflect the case study area).
9.
Replacing the caption of Figure 6 in Section 3.4 “Testing Phase”:
Figure 6. The testing phase of the proposed method.
with
Figure 6. The testing phase of the proposed method (sub-images used in this figure are obtained from the Internet for illustration purposes only and do not reflect the case study area).
10.
Replacing the sentence in Section 4 “Results and Discussions”, paragraph 2:
The two sets of flood detection results represented in Figures 7 and 8 show images captured at the Indus River Region in Pakistan.
with
The two sets of flood detection results represented in Figures 7 and 8 show images captured in two global disaster regions.
11.
Replacing the sentence in Section 4 “Results and Discussions”, paragraph 3:
Figure 8 shows the results obtained from an image captured from the Indus River region II.
with
Figure 8 shows the results obtained from an image captured from another global disaster zone, i.e., Region II.
12.
Replacing the caption of Figure 7 in Section 4 “Results and Discussions”:
Figure 7. Flood Detection Results at Indus River Region I. (a) Input Image (b) Ground Reality (c) Segmentation Results (d) Output Image, True-Positive (Red), True-Negative (Blue), False-Positive (Green), False-Negative (Yellow).
with
Figure 7. Flood detection results in Region I: (a) input image, (b) ground reality, (c) segmentation results, (d) output image. True positive: red; true negative: blue; false positive: green; false negative: yellow. (Please note: These images are obtained from the Internet for illustration purposes only and selected by the proposed CNN based on defined patterns and characteristics. They do not reflect the specific case study area).
13.
Replacing the caption of Figure 8 in Section 4 “Results and Discussions”:
Figure 8. Flood Detection Results at Indus River Region II. (a) Input Image (b) Ground Reality (c) Segmentation Results (d) Output Image, True Positive (Red), True Negative (Blue), False Positive (Green), False Negative (Yellow).
with
Figure 8. Flood detection results in Region II: (a) input image, (b) ground reality, (c) segmentation results, (d) output image. True positive: red; true negative: blue; false positive: green; false negative: yellow. (Note: These images are obtained from the Internet for illustration purposes only and selected by the proposed CNN based on defined patterns and characteristics. They do not reflect the specific case study area.)
14.
Replacing the content “Indus River I” and “Indus River II” with “Region I” and “Region II” in Table 3 in Section 4 “Results and Discussions”:

Table 3. Flood detection results.

Region                     Precision (P)   Recall (R)   F1-Score (F)
Region I (Figure 7)        0.84            0.91         0.87
Region II (Figure 8)       0.93            0.75         0.83
15.
Replacing the caption of Figure 11 in Section 4 “Results and Discussions”:
Figure 11. Test on a non-disaster region (a) Input Image (b) Output.
with
Figure 11. Test on a sample global non-disaster region: (a) input image; (b) output.
16.
Changing the “Author Contributions” to reflect the authors’ skillset and contributions:
software, H.S.M., F.U.
with
software, S.I.K.
The authors state that the scientific conclusions are unaffected. This correction was approved by the Academic Editor. The original publication has also been updated.

Reference

  1. Munawar, H.S.; Ullah, F.; Qayyum, S.; Khan, S.I.; Mojtahedi, M. UAVs in Disaster Management: Application of Integrated Aerial Imagery and Convolutional Neural Network for Flood Detection. Sustainability 2021, 13, 7547. [Google Scholar] [CrossRef]
Original Table 3 (before correction):

Region                      Precision (P)   Recall (R)   F1-Score (F)
Indus River I (Figure 7)    0.84            0.91         0.87
Indus River II (Figure 8)   0.93            0.75         0.83
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
