UAV Aided Forest Fire Risk Prediction Based on Remote Sensing, Machine Learning and Cloud Computing

A special issue of Forests (ISSN 1999-4907). This special issue belongs to the section "Forest Inventory, Modeling and Remote Sensing".

Deadline for manuscript submissions: closed (31 July 2023) | Viewed by 5860

Special Issue Editors


Dr. Zakria Qadir
Guest Editor
School of Engineering, Design and Built Environment, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia
Interests: AI; ML; IoT; disaster management; computer vision; wireless communication; cloud computing

Dr. Nasir Saeed
Guest Editor
Department of Electrical and Communication Engineering, United Arab Emirates University (UAEU), Al Ain, United Arab Emirates
Interests: digital signal processing; wireless sensor networks; wireless communications; cognitive radio; statistical signal processing; mobile communications; satellite communication

Special Issue Information

Dear Colleagues,

Forest fires are a critical concern for many countries, including Australia, the USA, Brazil, Ukraine, Spain, Greece, Japan, Egypt, Algeria, and Italy. These fires not only have an adverse impact on a country’s economy but also pose a serious threat to livestock, human beings, and the whole ecosystem. Despite technological advances around the globe, predicting forest fires from remote sensing data remains challenging. Because forest fires can break out abruptly, there is a need to develop state-of-the-art, cutting-edge technology that can help predict these fires from remote sensing before it is too late. This Special Issue invites authors to contribute their research findings in the fields of artificial intelligence, machine learning, the Internet of Things, wireless communication, cloud computing, etc. All kinds of research articles using cutting-edge technologies are welcome. We highly appreciate your efforts and quality contributions toward achieving this goal.

Dr. Zakria Qadir
Dr. Nasir Saeed
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Forests is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • forest fire
  • disaster management
  • UAV
  • smart cities
  • IoT
  • ML
  • AI
  • computer vision

Published Papers (3 papers)


Research

27 pages, 28279 KiB  
Article
Accuracy Assessment of Drone Real-Time Open Burning Imagery Detection for Early Wildfire Surveillance
by Sarun Duangsuwan and Katanyoo Klubsuwan
Forests 2023, 14(9), 1852; https://doi.org/10.3390/f14091852 - 12 Sep 2023
Cited by 3 | Viewed by 1202
Abstract
Open burning is the main factor contributing to the occurrence of wildfires in Thailand, resulting every year in forest fires and air pollution. Open burning has become the natural disaster that most threatens wildlands and forest resources. Traditional firefighting systems, based on ground-crew inspection, have several limitations and pose dangerous risks. Aerial imagery technologies have become one of the most important tools for preventing wildfires, especially real-time drone monitoring for wildfire surveillance. This paper presents an accuracy assessment of drone real-time open burning imagery detection (Dr-TOBID) to detect smoke and burning, a framework for deep learning-based object detection combining the YOLOv5 detector with a lightweight long short-term memory (LSTM) classifier. The Dr-TOBID framework was built using OpenCV, YOLOv5, TensorFlow, LabelImg, and PyCharm, and connected wirelessly via live stream on Open Broadcaster Software (OBS). The datasets were split into 80% for training and 20% for testing. The assessment considered the drone’s altitude, range, and red-green-blue (RGB) mode in daytime and nighttime. Accuracy, precision, recall, and F1-score were used as the evaluation metrics. The quantitative results show that Dr-TOBID successfully detected open burning, smoke, and burning characteristics, with an average F1-score of 80.6% for smoke detection in the daytime, 82.5% for burning detection in the daytime, 77.9% for smoke detection at nighttime, and 81.9% for burning detection at nighttime.
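As a point of reference, the detection stage of such a pipeline can be prototyped with off-the-shelf components. The sketch below runs a pretrained YOLOv5 model on an OpenCV video stream; it is illustrative only: the stream URL and confidence threshold are hypothetical, and it omits Dr-TOBID's LSTM classifier and fire-specific training.

```python
# Minimal sketch: YOLOv5 inference on a live drone stream via OpenCV.
# Not the authors' implementation -- the stream URL and threshold are
# assumptions, and a real system would use weights fine-tuned on
# smoke/burning imagery rather than the generic pretrained model.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.4  # confidence threshold (assumed value)

cap = cv2.VideoCapture("rtmp://example.local/live/drone")  # hypothetical OBS stream
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv5 expects RGB input; OpenCV delivers BGR frames.
    results = model(frame[:, :, ::-1])
    # Draw one box per detection: results.xyxy[0] holds (x1, y1, x2, y2, conf, cls).
    for *xyxy, conf, cls in results.xyxy[0].tolist():
        x1, y1, x2, y2 = map(int, xyxy)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```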

22 pages, 3507 KiB  
Article
FL-YOLOv7: A Lightweight Small Object Detection Algorithm in Forest Fire Detection
by Zhuo Xiao, Fang Wan, Guangbo Lei, Ying Xiong, Li Xu, Zhiwei Ye, Wei Liu, Wen Zhou and Chengzhi Xu
Forests 2023, 14(9), 1812; https://doi.org/10.3390/f14091812 - 5 Sep 2023
Cited by 6 | Viewed by 1599
Abstract
Given the limited computing capability of UAV terminal equipment, balancing accuracy and computational cost is a challenge when deploying a target detection model for forest fire detection on a UAV. Additionally, the fire targets photographed by the UAV are small and prone to misdetection and omission. This paper proposes a lightweight small-target detection model, FL-YOLOv7, based on YOLOv7. First, we designed a light module, C3GhostV2, to replace the feature extraction module in YOLOv7, and used the Ghost module to replace some of the standard convolution layers in the backbone network, accelerating inference and reducing model parameters. Secondly, we introduced the parameter-free SimAM attention mechanism to highlight the features of smoke and fire targets and suppress background interference, improving the model’s representation and generalization performance without adding network parameters. Finally, we incorporated the Adaptive Spatial Feature Fusion (ASFF) module to address the model’s weak small-target detection capability, and used the WIoU loss function with dynamically adjustable sample weights to weaken the impact of low-quality or complex samples and improve overall performance. Experimental results show that FL-YOLOv7 reduces the parameter count by 27% compared with YOLOv7 while improving mAP50 on small targets by 2.9% and inference speed by 24.4 frames per second (FPS), demonstrating the effectiveness and superiority of our model for small target detection, as well as its real-time performance and reliability in forest fire scenarios.
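For context, SimAM assigns each activation an attention weight derived from a closed-form energy function, so it adds no learnable parameters. Below is a minimal PyTorch sketch of the published SimAM formulation; it illustrates the mechanism only and is not the authors' FL-YOLOv7 integration (the e_lambda value is the regularizer suggested in the SimAM paper).

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free attention: gates each activation by a closed-form
    energy term, adding no learnable parameters to the network."""

    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda  # regularizer from the SimAM paper

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        b, c, h, w = x.shape
        n = h * w - 1
        # Squared deviation of each activation from its per-channel mean.
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        # Per-channel variance, then the inverse-energy term per activation.
        v = d.sum(dim=[2, 3], keepdim=True) / n
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        # Sigmoid gating: more distinctive neurons receive higher weights.
        return x * torch.sigmoid(e_inv)

# Usage: attn = SimAM(); y = attn(torch.randn(1, 64, 32, 32))
```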

23 pages, 11786 KiB  
Article
Forest Fire Segmentation via Temporal Transformer from Aerial Images
by Mohammad Shahid, Shang-Fu Chen, Yu-Ling Hsu, Yung-Yao Chen, Yi-Ling Chen and Kai-Lung Hua
Forests 2023, 14(3), 563; https://doi.org/10.3390/f14030563 - 13 Mar 2023
Cited by 7 | Viewed by 2169
Abstract
Forest fires are among the most critical natural disasters threatening forest lands and resources. The accurate and early detection of forest fires is essential to reduce losses and improve firefighting. Conventional firefighting techniques, based on ground inspection and limited by the field of view, lead to insufficient monitoring capabilities for large areas. Recently, due to their excellent flexibility and ability to cover large regions, unmanned aerial vehicles (UAVs) have been used to combat forest fire incidents. An essential step for an autonomous system that monitors fire situations is first to locate the fire in a video. State-of-the-art forest-fire segmentation methods based on vision transformers (ViTs) and convolutional neural networks (CNNs) use a single aerial image. Nevertheless, fire has an inconsistent scale and form, and small fires from long-distance cameras lack salient features, so accurate fire segmentation from a single image has been challenging. In addition, techniques based on CNNs treat all image pixels equally and overlook global information, limiting their performance, while ViT-based methods suffer from high computational overhead. To address these issues, we proposed a spatiotemporal architecture called FFS-UNet, which exploits temporal information for forest-fire segmentation by incorporating a transformer into a modified lightweight UNet model. First, we extracted a keyframe and two reference frames using three different encoder paths in parallel to obtain shallow features and perform feature fusion. Then, we used a transformer to perform deep temporal-feature extraction, which enhanced the feature learning of the fire pixels and made the feature extraction more robust. Finally, we combined the shallow features of the keyframe for deconvolution in the decoder path via skip-connections to segment the fire. We evaluated FFS-UNet on UAV-collected video and the Corsican Fire dataset. The proposed FFS-UNet demonstrated enhanced performance with fewer parameters, achieving an F1-score of 95.1% and an IoU of 86.8% on the UAV-collected video, and an F1-score of 91.4% and an IoU of 84.8% on the Corsican Fire dataset, higher than previous forest fire techniques. Therefore, the suggested FFS-UNet model effectively addresses fire-monitoring issues with UAVs.
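For readers reproducing such evaluations, the reported F1-score and IoU can be computed directly from binary segmentation masks. A minimal sketch follows (assuming NumPy arrays where 1 marks a fire pixel; this is not the authors' evaluation code).

```python
import numpy as np

def seg_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """F1-score and IoU for binary segmentation masks (1 = fire pixel)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # fire pixels correctly predicted
    fp = np.logical_and(pred, ~gt).sum()   # background predicted as fire
    fn = np.logical_and(~pred, gt).sum()   # fire pixels missed
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    iou = tp / (tp + fp + fn + eps)
    return f1, iou

# Usage: f1, iou = seg_metrics(pred_mask, gt_mask)
```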
