Article

Accuracy Assessment of Drone Real-Time Open Burning Imagery Detection for Early Wildfire Surveillance

by Sarun Duangsuwan 1,* and Katanyoo Klubsuwan 2
1 Electrical Engineering, Department of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Prince of Chumphon Campus, Chumphon 86160, Thailand
2 E-Idea Company Ltd., Bangkok 10230, Thailand
* Author to whom correspondence should be addressed.
Forests 2023, 14(9), 1852; https://doi.org/10.3390/f14091852
Submission received: 1 August 2023 / Revised: 4 September 2023 / Accepted: 6 September 2023 / Published: 12 September 2023

Abstract

Open burning is the main factor contributing to the occurrence of wildfires in Thailand, which every year result in forest fires and air pollution. Open burning has become the natural disaster that threatens wildlands and forest resources the most. Traditional firefighting systems, based on ground-crew inspection, have several limitations and dangerous risks. Aerial imagery technologies have become one of the most important tools for preventing wildfires, especially drone real-time monitoring for wildfire surveillance. This paper presents an accuracy assessment of drone real-time open burning imagery detection (Dr-TOBID) for detecting smoke and burning, a framework for a deep learning-based object detection method combining the YOLOv5 detector with a lightweight version of the long short-term memory (LSTM) classifier. The Dr-TOBID framework was designed using OpenCV, YOLOv5, TensorFlow, LabelImg, and PyCharm and wirelessly connected via a live stream on Open Broadcaster Software (OBS). The datasets were split 80% for training and 20% for testing. The assessment considered the drone’s altitudes, ranges, and red-green-blue (RGB) mode in the daytime and at nighttime. Accuracy, precision, recall, and F1-score were used as the evaluation metrics. The quantitative results show that Dr-TOBID successfully detected smoke and burning characteristics at open burning locations, with average F1-scores of 80.6% for smoke detection in the daytime, 82.5% for burning detection in the daytime, 77.9% for smoke detection at nighttime, and 81.9% for burning detection at nighttime.

1. Introduction

Open burning is the main factor contributing to wildfires and aggravates the problem of air pollution in Thailand [1]. Open burning produces smoke, resulting in haze, which puts the public’s health at risk. Most open burning is caused by activities such as agricultural material burning, forest foraging, waste burning, and burning for agricultural preparation. From January to April, the Air Quality Index (AQI) often exceeds 300, especially in the northern zones of Thailand; an AQI over 300 is reported every year and is very dangerous for human health [2]. Meanwhile, open burning has become the most critical factor contributing to wildfires, accounting for damage to almost 60% of forest resources, as shown in Figure 1. The problem with open burning is that it is difficult to control, destroys forest resources, and results in greenhouse gas emissions [3].
To minimize the loss of forest resources and control open burning, early open burning detection is a crucial way to reduce risk and losses, and it can help firefighters extinguish a fire at its earliest stages. Some early wildfire surveillance systems have been proposed, such as watchtowers equipped with various sensors and satellite systems [4]. However, these systems have limitations that reduce fire detection performance. The watchtower suffers from a limited range of view and high construction costs, while satellites have a large field of view [5]. Satellite-based images are not the best solution for early forest fire detection due to their cost, limited flexibility, and low spatial/temporal image resolution, which make real-time fire detection difficult [6]. Advanced, high-resolution fixed ground cameras are one answer to these limitations for watching for forest fires. Early terrestrial wildfire surveillance systems are mostly based on optical/thermal cameras on watchtowers [7]. Watchtowers combining an optical sensor camera with other types of sensors, such as humidity, smoke, and temperature sensors, have been applied [8]. Unfortunately, these sensors may work efficiently in a closed environment but are unsuitable in forests because they are only effective in close proximity to fire and smoke. Similarly, on-ground thermal cameras mounted on a watchtower are expensive [9].
An autonomous unmanned aerial vehicle (UAV)-based wildfire detection and surveillance system is one remote sensing solution [10,11]. It can cover larger areas than terrestrial watchtowers and provide images with higher spatial and temporal resolutions than satellites. For instance, UAVs equipped with optical cameras are considered the best choice for wildfire surveillance. UAV-based computer vision, using a deep learning algorithm, can detect fire and send information to cloud networking over efficient communication links, such as 5G technology [12]. Notably, various works achieve deep learning for wildfire detection using the convolutional neural network (CNN) [13], the faster region-based convolutional neural network (R-CNN) [14], U-Net [15], DeepLab [16], long short-term memory (LSTM) [17], and the generative adversarial network (GAN) [18].
Deep learning methods can help UAVs detect fire and smoke precisely. They fall into three categories: image-classification-based methods, object-detection-based methods, and semantic-segmentation-based methods.
Image-classification-based methods with deep CNN architectures are well suited to image classification targets due to their ability to extract highly representative features from 2D images. Several studies have used deep CNNs to classify fire and non-fire images for UAV-based fire detection [19,20,21].
Unlike image-classification-based methods, object detection algorithms find and localize the object of interest in an input image or video by drawing a rectangular bounding box around the targeted object. Several studies have presented faster R-CNNs as a strong choice for object-detection-based methods [22,23]. Furthermore, the faster R-CNN algorithm showed great results in detecting both smoke and flames in UAV imagery [24], achieving F1-scores of 72.7%, 70.6%, and 71.5% for flame, smoke, and both flame and smoke, respectively.
A semantic-segmentation-based method is a deep learning technique that classifies each pixel in the image, extending object detection for forest fire identification. Semantic segmentation algorithms, such as Light-FireNet [25], DeepLabV3+ [26], and SegNet [27], are applications of deep learning that go beyond image classification and object detection and can be used for semantic and instance segmentation.
It is well known that drones can use object-detection-based methods to detect open burning before wildfires spread. However, drone images have different characteristics from traditional watchtower imagery. For instance, drone altitudes, ranges, and motion affect the accuracy of real-time monitoring. The aim of this paper is to assess the accuracy of Dr-TOBID for real-time open burning detection by considering drone altitudes, ranges, and detection in the daytime and at nighttime.
The main contributions of this paper are as follows:
  • A model combining the YOLOv5 detector and LSTM classifier is proposed for training and testing the drone’s real-time open burning imagery detection, called Dr-TOBID, as a wildfire surveillance platform.
  • The structure of the deep learning framework is designed on the Anaconda platform using OpenCV, YOLOv5, TensorFlow, LabelImg, PyCharm, and OBS.
  • The detection of smoke and burning at the open burning location is assessed under varying conditions: drone altitudes, ranges, and RGB mode in the daytime and at nighttime.
  • Finally, evaluation metrics such as accuracy, precision, recall, and F1-score are used to assess the accuracy of Dr-TOBID.
The rest of this paper is organized as follows: Section 2 presents the related work. Section 3 describes the methodologies, namely the proposed frameworks, YOLOv5, and LSTM. Section 4 describes the experimental setup. Section 5 presents the accuracy results and discussion. Finally, Section 6 concludes the paper.

2. Related Works

Early Wildfire Surveillance

Early wildfire detection technologies have been described in various works [28]. Several studies focus on four categories: sensor nodes, UAVs, camera networks, and satellite surveillance. Among the main existing works, sensor nodes can communicate using long-range (LoRa) devices as the wireless link to locate fire sources. Meanwhile, UAV trends indicate that multiple UAVs (MUAVs) can be applied over dense forest using agent-based modeling [29], which provides a platform for early wildfire surveillance. Camera networks for early wildfire surveillance can be split into aerial cameras and watchtower cameras for terrestrial imagery. The watchtower camera is suitable for large forest areas, while the aerial camera can provide a precise point of view. Likewise, a satellite surveillance system can cover locations larger than watchtower and aerial cameras, and a new solution based on the distributed satellite system (DSS) was presented in [30].
A comprehensive survey of forest fire detection was presented in [31], where the authors recently highlighted the strengths, limitations, and future developments of fire detection systems. More than 100 papers are described in [31], directing attention to existing works and open problems. Advancements in fire detection systems are the result of the Internet of Things (IoT), deep learning, and big data analytics; leveraging these advancements can enable more efficient processing and analysis of large volumes of data. In particular, the potential of UAV-based wildfire detection and surveillance systems can be harnessed for data collection. Meanwhile, communication aspects remain significantly vital, such as reliable communication networks, data streaming, and false alarms.
An IoT and machine learning model was presented in [32] for modern fire detection and warning systems. The authors presented a prototype video surveillance unit (VSU) based on low-power IoT devices communicating over a low-power wide-area network (LPWAN). The LPWAN is the communication network for sending data, while fire detection is performed by machine learning models. The results showed that the warning alarm system via LPWAN responds with a latency of approximately 37.1667 s. Additionally, the VSU achieved an F1-score of 96%. The LPWAN alarm is sent remotely to notify the personnel in charge. This method can cover limited forest areas and provide data about the whereabouts of fires.
A drone-based surveillance system was proposed by Jemmali, M. et al. [33], who designed intelligent scheduling algorithms for warning systems. The forest area monitored by drones is divided into sub-regions. The authors proposed six algorithms, of which the best is the randomly iterative drone (RID) algorithm, which obtained a percentage rate of up to 90.3% with a running time of 88 ms. This contribution seeks to decrease the number of flight missions over the forests within a given time slot of 24 h.
Deep learning models have achieved various successes in fire detection. A smart fire detection system (SFDS) platform was presented in [34]. The SFDS leverages the strength of deep learning to detect fire characteristics in real time. The authors proposed four layers for fire detection: an Application layer, a Fog layer, a Cloud layer, and an IoT layer. YOLOv8 is used for training and testing the backbone model. The performance comparison for fire detection shows that YOLOv8 can detect fire and smoke with a high precision rate of 97.1%.
A modified deep learning model combining EfficientNet-B5 and DenseNet-201 was presented in [35] to address the limitations of small object size, background complexity, and image degradation in drone imagery. The authors combined two vision transformers and a deep convolution model to segment wildfire regions and determine the precise fire region. The proposed model achieved an accuracy of 85.12% for detecting fire based on the semantic segmentation method.
In Ref. [36], Khan, S. et al. presented the FFireNet deep learning model, using the pre-trained convolutional MobileNetV2 model and adding fully connected layers to solve the new task segments. The FFireNet model can classify and detect fire and non-fire images, achieving 98.42% accuracy, a 1.58% error rate, 99.47% recall, a 98.43% F1-score, and 97.42% precision. FFireNet consists of a 15-layer self-learning fire feature extractor and classifier and was developed from Fine_Net in Ref. [37].
In Ref. [38], Jeong, M. et al. proposed the combination of a YOLOv3 detector and a lightweight version of the LSTM classifier for wildfire smoke detection in real-time from videos acquired through cameras. The teacher-student LSTM classifier can solve the vanishing gradient problem by reducing the number of layers.
Likewise, mixed YOLOv3-Lite for lightweight real-time object detection was presented in Ref. [39], which uses a shallow-layer, narrow-channel, and multi-scale feature-image parallel fusion structure. Mixed YOLOv3-Lite is suitable for mobile smart devices that must detect images without requiring high-performance hardware.
The key challenges of early wildfire surveillance by real-time UAV-based deep learning methods are lightweight object detection and tracking of forest fire locations. A YOLOv5 detector [40] is suitable for real-time UAVs because it is lighter than YOLOv8 [34], while an LSTM classifier can solve the vanishing gradient problem [41]. YOLOv5 is very small, at only 25 MB, and has a faster processing time than YOLOv8. To this end, we develop the YOLOv5 and LSTM algorithms for lightweight object detection to detect smoke and fire at open burning locations and assess the accuracy of Dr-TOBID.

3. Methodology

3.1. Frameworks

In this subsection, we propose the framework for real-time open burning detection using the deep learning model shown in Figure 2. Typically, the drone is equipped with cameras that send data to a ground control station, where the deep learning model processes them to detect the characteristics of smoke and fire. The computer vision-based deep learning model runs on a high-speed computer at the ground control station to execute quick real-time image processing.
The proposed framework works in five steps:
  • Flight control of the drone is handled by the remote controller or by path planning to search for open burning locations.
  • Real-time video is streamed over the wireless transmission channel.
  • The collected and acquired data are sent to the ground control station.
  • The deep learning procedures follow: feature extraction, training/testing, classification, and validation of the results.
  • The monitoring data are displayed in the OBS software.
The deep learning model is developed on the Anaconda platform with Python 3.7. The first step is to install OpenCV as the video input streaming function. The next step is the installation of YOLO version 5 and LabelImg for the training and testing procedures. During training preprocessing, it is necessary to isolate the pixels that describe the object of interest in the dataset. Next, TensorFlow is installed; TensorFlow is the deep learning library used for smoke and fire feature extraction via the LSTM algorithm, a lightweight imagery processing tool for classifying the smoke and fire characteristics. The final step uses PyCharm to validate the model and evaluate the performance metrics before the results are displayed on OBS for real-time monitoring.
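As a minimal illustration of this pipeline, the sketch below opens a video stream with OpenCV and runs a custom-trained YOLOv5 model on each frame; the checkpoint name and stream URL are hypothetical placeholders, not the exact configuration used in this work.

```python
import cv2
import torch

# Load custom-trained YOLOv5 weights through the official hub entry point.
# "best.pt" is a placeholder for a checkpoint produced by YOLOv5 training.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

# Open the drone's live video stream (placeholder URL).
cap = cv2.VideoCapture("rtmp://ground-station.local/live/drone")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv5 expects RGB channel order; OpenCV captures frames in BGR.
    results = model(frame[..., ::-1])
    # One row per detection: xmin, ymin, xmax, ymax, confidence, class, name.
    detections = results.pandas().xyxy[0]
    print(detections[["name", "confidence"]])

cap.release()
```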

3.1.1. YOLOv5 Model

The state-of-the-art real-time object detector YOLOv5 was illustrated in [41,42]. YOLOv5 consists of four main stages: input, Backbone, Neck, and output. We simplify the YOLOv5 architecture as shown in Figure 3. The YOLOv5 model feeds the input video/image dataset into the Backbone stage. First, the Focus structure consists of four slice operations and one convolution with 32 kernels. At the Backbone layer, the feature-fusion CBS unit consists of a convolution layer, batch normalization, and the Sigmoid Linear Unit (SiLU) activation function. CSP1 and CSP2 are cross-stage partial modules, built on the cross-stage partial network, which reduce the amount of duplicated information flow. The SPPF combines information from multiple layers; it consists of three 5 × 5 max pooling layers, two CBS units, and one Concat operation. YOLOv5 utilizes a path aggregation network (PANet) at the Neck to transfer information from the lowest levels efficiently, improving object positioning accuracy. The last stage of YOLOv5 is the prediction output for the training and testing datasets.
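To make the CBS unit concrete, the following PyTorch sketch implements the convolution, batch normalization, and SiLU activation block described above; the channel sizes and kernel settings are illustrative only, not the exact YOLOv5 configuration.

```python
import torch
import torch.nn as nn

class CBS(nn.Module):
    """Conv2d + BatchNorm2d + SiLU: the basic YOLOv5 feature-fusion unit."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()  # Sigmoid Linear Unit activation

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

# Example: apply one CBS stage to a dummy 640 x 640 RGB frame tensor.
x = torch.randn(1, 3, 640, 640)
print(CBS(3, 32)(x).shape)  # -> torch.Size([1, 32, 640, 640])
```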

3.1.2. Extension of YOLOv5 with LSTM Layers

The LSTM is an extension of the RNN algorithm [21] that is used to handle the continuous time series output over sliced frame images. We place the LSTM layers after the YOLOv5 processing, taking its output as shown in Figure 4. Initially, the input of the RNN is a sequence of data together with the hidden state $h_t$ at each time step. Equations (1) and (2) relate the input, hidden layer, and output of the RNN algorithm [7,30]:

$$h_t = \sigma\left(w_{xh} x_t + w_{hh} h_{t-1} + b_h\right) \tag{1}$$

$$y_t = g\left(\sigma\left(w_{ho} h_t + b_o\right)\right) \tag{2}$$

where $\sigma$ is a sigmoid function, $g$ denotes the fully connected layer operation, $b_h$ and $b_o$ are bias vectors, and $w_{xh}$, $w_{hh}$, and $w_{ho}$ represent the weight vectors from the input layer to the hidden layer, from the previous hidden layer to the next hidden layer, and from the current hidden layer to the output layer, respectively. $h_t$ and $h_{t-1}$ denote the current and the previous hidden layers.
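For concreteness, the NumPy sketch below steps Equations (1) and (2) once; the dimensions and weights are illustrative placeholders, and the fully connected operation $g$ is folded into the output weights here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_in, d_h, d_out = 4, 8, 2                    # illustrative sizes
rng = np.random.default_rng(0)
w_xh = rng.standard_normal((d_h, d_in))       # input -> hidden
w_hh = rng.standard_normal((d_h, d_h))        # hidden -> hidden
w_ho = rng.standard_normal((d_out, d_h))      # hidden -> output (g folded in)
b_h, b_o = np.zeros(d_h), np.zeros(d_out)

def rnn_step(x_t, h_prev):
    h_t = sigmoid(w_xh @ x_t + w_hh @ h_prev + b_h)  # Equation (1)
    y_t = sigmoid(w_ho @ h_t + b_o)                  # Equation (2)
    return h_t, y_t

h, y = rnn_step(rng.standard_normal(d_in), np.zeros(d_h))
```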
The RNN is a sequential deep learning model. However, the RNN still has problems with long-term memory and gradient errors during the training process. The LSTM extends the RNN: it can learn long-term dependencies and avoid the gradient errors of the backpropagation neural network (BPNN). As illustrated in Figure 4, the LSTM layer is supplied to track the uncertainty of long-term YOLOv5 training. At each time step, three hidden LSTM layers are used, where the sequence of time series outputs is $s = (s_{11}, s_{12}, s_{13})$ and the memory layers are $c = (c_{n1}, c_{n2}, c_{n3})$, with $n$ denoting the index of the memory cell. The LSTM layers have four states: the input state $i_t$, forget state $f_t$, memory cell state $c_n$, and output state $o_t$, as shown in Equations (3)–(7).

$$i_t = \sigma\left(w_{si} s + w_{hi} h_{t,n} + b_i\right) \tag{3}$$

$$f_t = \sigma\left(w_{sf} s + w_{hf} h_{t,n} + b_f\right) \tag{4}$$

$$o_t = \sigma\left(w_{so} s + w_{ho} h_{t,n} + b_o\right) \tag{5}$$

$$c_n = \tanh\left(w_{sc} s + w_{hc} h_{t,n} + b_c\right) \tag{6}$$

$$h_{t,n} = o_t \odot \tanh\left(c_n\right) \tag{7}$$

where $\tanh$ denotes the hyperbolic tangent function, $\odot$ denotes the element-wise product with a gate value, and $b_i$, $b_f$, $b_o$, and $b_c$ are bias vectors, respectively. The weight vectors are the hidden input state $w_{si}$, the hidden input-output state $w_{so}$, and the hidden-output state $w_{ho}$, respectively.
The memory cell of the LSTM uses the output time series $y_t$ from YOLOv5 to predict target detection. By predicting the object location from previously learned information, the LSTM provides efficient tracking.
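The NumPy sketch below steps one LSTM cell according to Equations (3)–(7); the memory update blends the candidate cell with the previous cell state through the forget and input gates, which is the standard formulation the gates in Equations (3) and (4) imply, and all weights are random placeholders rather than trained values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_s, d_h = 6, 8                                 # illustrative sizes
rng = np.random.default_rng(1)
w = {k: rng.standard_normal((d_h, d_s)) for k in ("si", "sf", "so", "sc")}
u = {k: rng.standard_normal((d_h, d_h)) for k in ("hi", "hf", "ho", "hc")}
b = {k: np.zeros(d_h) for k in ("i", "f", "o", "c")}

def lstm_step(s, h_prev, c_prev):
    i_t = sigmoid(w["si"] @ s + u["hi"] @ h_prev + b["i"])     # input gate, Eq. (3)
    f_t = sigmoid(w["sf"] @ s + u["hf"] @ h_prev + b["f"])     # forget gate, Eq. (4)
    o_t = sigmoid(w["so"] @ s + u["ho"] @ h_prev + b["o"])     # output gate, Eq. (5)
    c_cand = np.tanh(w["sc"] @ s + u["hc"] @ h_prev + b["c"])  # candidate cell, Eq. (6)
    c_t = f_t * c_prev + i_t * c_cand                          # gated memory update
    h_t = o_t * np.tanh(c_t)                                   # hidden state, Eq. (7)
    return h_t, c_t

# Run three time steps (the sequence s11, s12, s13 in the text).
h, c = np.zeros(d_h), np.zeros(d_h)
for s in rng.standard_normal((3, d_s)):
    h, c = lstm_step(s, h, c)
```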

4. Experimental Setup

4.1. Dr-TOBID

To develop the prototype of Dr-TOBID, a DJI Matrice 200 was used, as shown in Figure 5. A full HD camera was mounted on the DJI Matrice 200 and powered by a 20,000 mAh power bank, while real-time video streaming is transmitted and received by a 5.8 GHz WiFi module powered by a 3300 mAh LiPo battery. The flight time of Dr-TOBID is 25 min per battery.
Figure 6 shows the flowchart of the Dr-TOBID operation for detecting the open burning location. In the first step, the drone flies under flight control to the open burning location, and the WiFi channel is connected to stream the video back to the ground control station. In the second step, the screen mode is selected between RGB and HSV, where RGB is the normal screen mode and HSV is similar to a thermal screen mode; however, the HSV mode is not considered in the experimental results of this work. The third step performs the YOLOv5 and LSTM processing to train and learn the input data for smoke and burning characteristics. Finally, the detection status is displayed via OBS live streaming at the ground control station.
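As a brief illustration of the mode selection, the OpenCV calls below convert a captured BGR frame to the RGB and HSV representations; the frame here is a synthetic placeholder standing in for a frame read from the video stream.

```python
import cv2
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a captured frame
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)     # normal RGB screen mode
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)     # thermal-like HSV screen mode
```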

4.2. Dataset

The accuracy of a deep learning model is highly dependent on the datasets used during training and testing. The prepared datasets are classified into two classes: smoke and burning. Figure 7a shows sample smoke datasets, and Figure 7b shows sample burning datasets. The datasets were collected from Google and from drone imagery. The images collected had a resolution of 1920 × 1280 pixels, and the videos used were 2048 × 3072 pixels. The datasets consisted of 1489 smoke images, 1462 burning images, 89 smoke videos, and 72 burning videos, as listed in Table 1. The datasets were split 80% for training and 20% for testing, with a total of 200 training epochs.
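A minimal sketch of the 80/20 split follows, assuming the image paths have been gathered into lists; the file names are placeholders, and scikit-learn's train_test_split is one convenient option for a stratified split.

```python
from sklearn.model_selection import train_test_split

# Placeholder file names standing in for the collected dataset.
smoke = [f"smoke_{i:04d}.jpg" for i in range(1489)]
burning = [f"burning_{i:04d}.jpg" for i in range(1462)]

paths = smoke + burning
labels = ["smoke"] * len(smoke) + ["burning"] * len(burning)

# A stratified 80/20 split keeps the class balance in both subsets.
train_x, test_x, train_y, test_y = train_test_split(
    paths, labels, test_size=0.2, stratify=labels, random_state=42
)
print(len(train_x), len(test_x))  # 2360 591
```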

4.3. Experimental Model

Sample sources of open burning characteristics are shown in Figure 8: the smoke model in Figure 8a and the burning model in Figure 8b. In the experiment, we burned various agricultural wastes, such as palm leaves, coconut meals, grass clippings, trees, and foliage. Note that the experimental location was at King Mongkut’s Institute of Technology Ladkrabang, Prince of Chumphon Campus, and the experiments ran from daytime into nighttime, from 4 p.m. to 8 p.m.
The ground control station is a laptop computer with an Intel Core i7-7700K CPU, 16 GB of random-access memory (RAM), and an NVIDIA GeForce GTX 1080 Ti GPU running Microsoft Windows 11, as shown in Table 2. The Dr-TOBID operation in the field is shown in Figure 9; Dr-TOBID can fly for 25 min per battery and is controlled remotely on the 2.4 GHz frequency. To assess the accuracy of Dr-TOBID, we varied its range and altitude while observing the open burning location. The camera view was adjusted to 60 degrees as an optimized angle. The experimental models are shown in Figure 10a–f: range 5 m and altitude 5 m; range 5 m and altitude 7 m; range 5 m and altitude 10 m; range 10 m and altitude 5 m; range 10 m and altitude 7 m; and range 10 m and altitude 10 m, respectively. We performed the experiment from 4 p.m. to 6 p.m. during the daytime and from 6 p.m. until 8 p.m. at night.

5. Results

5.1. Experimental Results

This section presents the experimental results of detecting smoke and burning with Dr-TOBID in both the daytime and at nighttime. The characteristics of real-time video object detection in each experimental model, and the accuracy evaluated between training and testing, are shown in Figures 11–18, respectively. For instance, Figure 11 shows the daytime experimental test for detecting open burning smoke at a 5 m range and 5 m altitude of Dr-TOBID. The YOLOv5+LSTM processing detects the boundary of the smoke after training and testing. Comparing the training and testing accuracy in Figure 12a–f, the model obtains high accuracy by epoch 24; beyond epoch 30, the curve rises roughly exponentially, reaching an accuracy of 0.92 at epoch 200. The testing curve closely follows the training curve in this sample case.
In addition, the assessment of Dr-TOBID at 7 m and 10 m altitudes, shown in Figure 11b,c, confirms that detection still works in the real-time video stream. The training and testing accuracies of smoke detection in the daytime are shown in Figure 12b–f, respectively. For burning detection in the daytime, Dr-TOBID reliably detects the burning characteristics at the open burning location, as shown in Figure 13a–f. The accuracy versus epoch iterations is examined in Figure 14a–f, where the training and testing accuracies agree closely. These results show that Dr-TOBID’s accuracy is sufficient for real-time video monitoring.
To confirm the accuracy of Dr-TOBID at nighttime, we performed experiments in nighttime open burning situations with both smoke and burning detection. The detected smoke is shown in Figure 15a–f, with the corresponding accuracies in Figure 16a–f. The resolution of the real-time video degrades at night because of the limits of the FPV camera, whose frames are transmitted at 22 frames per second. Although the video resolution dropped, Dr-TOBID still detected smoke characteristics as well as burning characteristics, the latter shown in Figure 17a–f. The accuracy results of burning detection at nighttime are shown in Figure 18a–f, respectively.
To summarize the accuracy of Dr-TOBID, the evaluation metrics, namely accuracy, precision, recall, and F1-score, were obtained as follows:

$$\text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$$

$$\text{Precision} = \frac{TP}{TP + FP}$$

$$\text{Recall} = \frac{TP}{TP + FN}$$

$$\text{F1-score} = \frac{2 \times \text{Recall} \times \text{Precision}}{\text{Recall} + \text{Precision}}$$

where $TP$ and $TN$ represent the true positives and true negatives, and $FP$ and $FN$ represent the false positives and false negatives, respectively.
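These four metrics follow directly from the confusion counts, as in the sketch below; the final check recomputes the F1-score from the precision and recall reported in Table 3 for daytime smoke detection at a 5 m range and 5 m altitude.

```python
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * recall * precision / (recall + precision)
    return accuracy, precision, recall, f1

# Cross-check against Table 3 (daytime smoke, range 5 m, altitude 5 m).
precision, recall = 0.727, 0.888
f1 = 2 * recall * precision / (recall + precision)
print(round(f1, 3))  # 0.799, matching the reported 0.798 up to rounding
```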
The summaries of the accuracy assessment of the experimental tests, based on these evaluation metrics, are shown in Table 3. Furthermore, we plot charts of the summarized accuracy results: Figure 19 shows the accuracy of smoke detection in the daytime, Figure 20 the accuracy of burning detection in the daytime, Figure 21 the accuracy of smoke detection at nighttime, and Figure 22 the accuracy of burning detection at nighttime.

5.2. Discussion

For the proposed YOLOv5 detector and LSTM classifier model, the accuracies of Dr-TOBID correspond to average F1-scores of 80.6% for smoke detection in the daytime, 82.5% for burning detection in the daytime, 77.9% for smoke detection at nighttime, and 81.9% for burning detection at nighttime.
The advantage of YOLOv5 is well known: it is very small and suitable for real-time video streaming. In this work, the ground control station required a processing time of 32 s for training and testing from 0 to 200 epochs. Thus, it can be used for real-time detection of smoke and fire conditions.
Considering the LSTM classifier, the LSTM process can control the error rate of false positives in fire and smoke image detection. The most important role of the LSTM classifier is to reduce false positives while missing as few candidate regions as possible. However, the proposed results still contain errors in the accuracy of smoke and burning detection. Thus, we discuss some limitations of Dr-TOBID:
  • Communication network: Dr-TOBID uses a WiFi link with a maximum data rate of 12 Mbps. In practice, only about 2 Mbps is usable due to the attenuation of the wireless link, which reduces the resolution of the video stream. An alternative solution is to employ a mobile network [12]. In addition, HSV mode can help guarantee resolution.
  • Warning system: the detection of fire in forests must happen early; the scheduling proposal in [33] can be applied in further work.
The extension framework of Dr-TOBID is planned to develop toward a cyber-physical system (CPS), a new digital framework for early wildfire surveillance [43]. Our ground control station can be redesigned to employ cloud computing tools such as Node-RED and Grafana dashboards to report the fire status, detect the position, provide early warning, and alert on the drone’s energy criteria. It is crucial to emphasize that our research is currently at an initial CPS stage. In the proposed study, we have assessed the accuracy of Dr-TOBID in detecting smoke and burning at the early stages of open burning, rather than after fires have already escalated to destroy the forest.

6. Conclusions

This paper presents the accuracy assessment of drone real-time open burning imagery detection, called the Dr-TOBID platform, for early wildfire surveillance. The proposed deep learning model combines YOLOv5 with the LSTM algorithm as a new remote sensing method for real-time smoke and fire detection. The lightweight LSTM refines the training and learning stages of the batch processing, keeping the time series of the YOLOv5 output and confirming the final detection. Dr-TOBID was investigated under varying setups, with experiments on smoke and burning detection in both the daytime and at nighttime confirming the proposed deep learning model. The evaluation metrics show that the best detection accuracy was achieved for burning detection in the daytime. The accuracies of Dr-TOBID correspond to average F1-scores of 80.6% for smoke detection in the daytime, 82.5% for burning detection in the daytime, 77.9% for smoke detection at nighttime, and 81.9% for burning detection at nighttime.
Overall, this paper achieved its aim of open burning monitoring with the proposed deep learning model. In future work, a 5G communication module will be considered for large-area open burning detection, and the localization of fire positions will be addressed, as well as the latency and drone scheduling required for early warning.

Author Contributions

Conceptualization, S.D. and K.K.; methodology, S.D.; software, S.D.; validation, S.D. and K.K.; investigation, S.D.; writing—original draft preparation, S.D.; writing—review and editing, S.D. and K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by King Mongkut’s Institute of Technology Ladkrabang [grant number KREF046410].

Data Availability Statement

The data presented in this study are available on request to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Phairuang, W.; Hata, M.; Furuuchi, M. Influence of agricultural activities, forest fires and agro-industries on air quality in Thailand. J. Environ. Sci. 2017, 52, 85–97. [Google Scholar]
  2. Pansuk, J.; Junpen, A.; Garivait, S. Assessment of air pollution from household solid waste open burning in Thailand. Sustainability 2018, 10, 2553. [Google Scholar]
  3. Phairuang, W.; Suwattiga, P.; Chetiyanukornkul, T.; Hongteiab, S.; Limpaseni, W.; Ikemori, F.; Hata, M.; Furuuchi, M. The influence of the open burning of agricultural biomass and forest fires in Thailand on the carbonaceous components in size-fractionated particles. Environ. Pollut. 2019, 247, 238–247. [Google Scholar] [CrossRef] [PubMed]
  4. Thangavel, K.; Spiller, D.; Sabatini, R.; Amici, S.; Sasidharan, S.T.; Fayek, H.; Marzocca, P. Autonomous satellite wildfire detection using hyperspectral imagery and neural network: A case study on Australian wildfire. Remote Sens. 2023, 15, 720. [Google Scholar] [CrossRef]
  5. Dash, P.J.; Pearse, D.G.; Watt, S.M. UAV multispectral imagery can complement satellite data for monitoring forest health. Remote Sens. 2018, 10, 1216. [Google Scholar] [CrossRef]
  6. Priya, S.R.; Vani, K. Deep learning based forest fire classification and detection in satellite images. In Proceedings of the 11th International Conference on Advanced Computing (ICoAC), Chennai, India, 18–20 December 2019. [Google Scholar]
  7. Cao, Y.; Yang, F.; Tang, Q.; Lu, X. An attention enhanced bidirectional LSTM for early forest fire smoke recognition. IEEE Access 2019, 7, 154732–154742. [Google Scholar] [CrossRef]
  8. Barmpoutis, P.; Papaioannou, P.; Dimitropoulos, K.; Grammalidis, N. A review on early forest fire detection systems using optical remote sensing. Sensors 2020, 20, 6442. [Google Scholar] [CrossRef]
  9. Barmpoutis, P.; Kastridis, A.; Stathaki, T.; Yuan, J.; Shi, M.; Grammalidis, N. Suburban forest risk assessment and forest surveillance using 360-degree cameras and a multiscale deformable transformer. Remote Sens. 2023, 15, 1995. [Google Scholar] [CrossRef]
  10. Yuan, C.; Liu, Z.; Zhang, Y. Learning-based smoke detection for unmanned aerial vehicles applied to forest fire surveillance. J. Intell. Robot. Syst. 2019, 93, 337–349. [Google Scholar] [CrossRef]
  11. Partheepan, S.; Sanati, F.; Hassan, J. Autonomous unmanned aerial vehicles in bushfire management: Challenges and opportunities. Drones 2023, 7, 47. [Google Scholar] [CrossRef]
  12. Pandey, S.; Singh, R.; Kathuria, S.; Negi, P.; Chhabra, G.; Joshi, K. Emerging technologies for prevention and monitoring of forest fire. In Proceedings of the International Conference on Innovative Data Communication Technologies and Application (ICIDCA), Uttarakhand, India, 14–16 March 2023; pp. 1115–1121. [Google Scholar]
  13. Geetha, S.; Abhishek, C.S.; Akshayanat, C.S. Machine vision based fire detection techniques: A survey. Fire Technol. 2021, 57, 591–623. [Google Scholar] [CrossRef]
  14. Kim, B.; Lee, J. A video-based fire detection using deep learning models. Appl. Sci. 2019, 9, 2862. [Google Scholar] [CrossRef]
  15. Reder, S.; Mund, J.P.; Albert, N.; Wabermann, L.; Miranda, L. Detection of windthrown tree stems on UAV-orthomosaics using U-Net convolutional networks. Remote Sens. 2022, 14, 75. [Google Scholar] [CrossRef]
  16. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848. [Google Scholar] [CrossRef] [PubMed]
  17. Natekar, S.; Patil, S.; Nair, A.; Roychowdhury, S. Forest fire prediction using LSTM. In Proceedings of the 2nd International Conference for Emerging Technology (INCET), Belagavi, India, 21–23 May 2021; pp. 1–5. [Google Scholar]
  18. Park, M.; Tran, D.Q.; Bak, J.; Park, S. Advanced wildfire detection using generative adversarial network-based augmented datasets and weakly supervised object localization. Int. J. Appl. Earth Obs. Geoinf. 2022, 114, 103052. [Google Scholar] [CrossRef]
  19. Haq, M.A.; Rahaman, G.; Baral, P.; Ghosh, A. Deep learning based supervised image classification using UAV images for forest areas classification. J. Indian Soc. Remote Sens. 2021, 43, 601–606. [Google Scholar] [CrossRef]
  20. Rahman Rasel, A.K.Z.; Sakif Nabil, S.M.; Sikder, N.; Masud, M.; Aljuaid, H.; Bairagi, A.K. Unmanned aerial vehicle assisted forest fire detection using deep convolutional neural network. Intell. Autom. Soft Comput. 2023, 35, 3259–3277. [Google Scholar] [CrossRef]
  21. Novac, I.; Geipel, K.G.; Domingo Gil, E.; Paula, L.G.; Hyttel, K.; Chrysostomou, D. A framework for wildfire inspection using deep convolutional neural networks. In Proceedings of the IEEE/SICE International Symposium on System Integration (SII), Honolulu, HI, USA, 12–15 January 2020; pp. 867–872. [Google Scholar]
  22. Chandana, V.S.; Vasavi, S. Autonomous drones based forest surveillance using faster R-CNN. In Proceedings of the International Conference on Electronics and Renewable Systems (ICEARS), Tuticorin, India, 16–18 March 2022; pp. 1718–1723. [Google Scholar]
  23. Guede-Fernandaz, F.; Martins, L.; Almeida, R.V.; Gamboa, H.; Vieira, P. A deep learning based object identification system for forest fire detection. Fire 2021, 4, 75. [Google Scholar] [CrossRef]
  24. Barmpoutis, P.; Stathaki, T.; Dimitropoulos, K.; Grammalidis, N. Early fire detection based on aerial 360-degree sensors, deep convolution neural networks and exploitation of fire dynamic textures. Remote Sens. 2020, 12, 3177. [Google Scholar] [CrossRef]
  25. Khudayberdiev, O.; Zhang, J.; Abdullahi, S.M.; Zhang, S. Light-FireNet: An efficient lightweight network for fire detection in diverse environments. Multimed. Tools Appl. 2022, 81, 24553–24572. [Google Scholar] [CrossRef]
  26. Harkat, H.; Nascimento, J.M.P.; Bernardino, A.; Thariq Ahmed, H.F. Assessing the impact of the loss function and encoder architecture for fire aerial images segmentation using Deeplabv3+. Remote Sens. 2022, 14, 2023. [Google Scholar] [CrossRef]
  27. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
  28. Mohapatra, A.; Trinh, T. Early wildfire detection technologies in practice—A Review. Sustainability 2022, 14, 12270. [Google Scholar] [CrossRef]
  29. Maqbool, A.; Mirza, A.; Afzal, F.; Shah, T.; Khan, W.Z.; Zikria, Y.B.; Kim, S.W. System-level performance analysis of cooperative multiple unmanned aerial vehicles for wildfire surveillance using agent-based modeling. Sustainability 2022, 14, 5927. [Google Scholar] [CrossRef]
  30. Thangavel, K.; Spiller, D.; Sabatini, R.; Marzocca, P.; Esposito, M. Near real-time wildfire management using distributed satellite system. IEEE Geosci. Remote Sens. Lett. 2023, 20, 550070. [Google Scholar] [CrossRef]
  31. Carta, F.; Zidda, C.; Putzu, M.; Loru, D.; Anedda, M. Advancements in forest fire prevention: A comprehensive survey. Sensors 2023, 23, 6635. [Google Scholar] [CrossRef] [PubMed]
  32. Peruzzi, G.; Pozzebon, A.; Van Der Meer, M. Fight fire with fire: Detecting forest fires with embedded machine learning models dealing with audio and images on low power IoT devices. Sensors 2023, 23, 783. [Google Scholar] [CrossRef]
  33. Jemmali, M.; B. Melhim, L.K.; Boulila, W.; Amdouni, H.; Alharbi, M.T. Optimizing forest fire prevention: Intelligent scheduling algorithms for drone-based surveillance system. arXiv 2023, arXiv:2305.10444. [Google Scholar]
  34. Talaat, F.M.; ZainEldin, H. An improved fire detection approach based on YOLO-v8 for smart cities. Neural Comput. Appl. 2023, 35, 20939. [Google Scholar] [CrossRef]
  35. Ghali, R.; Akhloufi, M.A.; Mseddi, W.S. Deep learning and transformer approaches for UAV-based wildfire detection and segmentation. Sensors 2022, 22, 1977. [Google Scholar] [CrossRef]
  36. Khan, S.; Khan, A. FFireNet: Deep learning based forest fire classification and detection in smart cities. Symmetry 2022, 14, 2155. [Google Scholar] [CrossRef]
  37. Micheal, A.A.; Vani, K.; Sanjeevi, S.; Lin, C.-H. Object detection and tracking with UAV data using deep learning. J. Indian Soc. Remote Sens. 2021, 49, 463–469. [Google Scholar] [CrossRef]
  38. Jeong, M.; Park, M.; Nam, J.; Chul Ko, B. Light-weight student LSTM for real-time wildfire smoke detection. Sensors 2020, 20, 5508. [Google Scholar] [CrossRef] [PubMed]
  39. Zhao, H.; Zhou, Y.; Zhang, L.; Peng, Y.; Hu, X.; Peng, H.; Cai, X. Mixed YOLOv3-Lite: A lightweight real-time object detection method. Sensors 2020, 20, 1861. [Google Scholar] [CrossRef]
  40. Bouguettaya, A.; Zarzour, H.; Taberkit, A.M. A review on early wildfire detection from unmanned aerial vehicles using deep learning-based computer vision algorithms. Signal Process. 2022, 190, 108309. [Google Scholar] [CrossRef]
  41. Mukhiddinov, M.; Abdusalomov, A.B.; Cho, J. A wildfire smoke detection system using unmanned aerial vehicles images based on the optimized YOLOv5. Sensors 2022, 22, 9384. [Google Scholar] [CrossRef]
  42. Bahhar, C.; Ksibi, A.; Ayadi, M.; Jamjoom, M.M.; Ullah, Z.; Soufiene, B.O. Wildfire and smoke detection using staged YOLO model and ensemble CNN. Electronics 2023, 12, 228. [Google Scholar] [CrossRef]
  43. Battistoni, P.; Cantone, A.A.; Martino, G.; Passamano, V.; Romano, M.; Sebillo, M.; Vitiello, G. A cyber-physical system for wildfire detection and firefighting. Future Internet 2023, 15, 237. [Google Scholar] [CrossRef]
Figure 1. Percentage of wildfire occurrence cases in Thailand.
Figure 2. A framework for drone real-time open burning detection using a deep learning model.
Figure 3. The network architecture of the YOLOv5 model.
Figure 4. The network of a classifier using the LSTM algorithm.
Figure 5. The prototype of Dr-TOBID.
Figure 6. The flowchart of Dr-TOBID’s operation in the field to detect the open burning location.
Figure 7. Samples of various dataset characteristics for training: (a) Smoke; (b) Burning.
Figure 8. The samples of open burning locations: (a) Smoke; (b) Burning.
Figure 9. Dr-TOBID operation in the field.
Figure 10. Experimental models: (a) range 5 m and altitude 5 m; (b) range 5 m and altitude 7 m; (c) range 5 m and altitude 10 m; (d) range 10 m and altitude 5 m; (e) range 10 m and altitude 7 m; (f) range 10 m and altitude 10 m.
Figure 11. Experimental test of smoke detection in the daytime: (a) range 5 m and altitude 5 m; (b) range 5 m and altitude 7 m; (c) range 5 m and altitude 10 m; (d) range 10 m and altitude 5 m; (e) range 10 m and altitude 7 m; (f) range 10 m and altitude 10 m.
Figure 12. Accuracy of smoke detection in the daytime: (a) range 5 m and altitude 5 m; (b) range 5 m and altitude 7 m; (c) range 5 m and altitude 10 m; (d) range 10 m and altitude 5 m; (e) range 10 m and altitude 7 m; (f) range 10 m and altitude 10 m.
Figure 13. Experimental test of burning detection in the daytime: (a) range 5 m and altitude 5 m; (b) range 5 m and altitude 7 m; (c) range 5 m and altitude 10 m; (d) range 10 m and altitude 5 m; (e) range 10 m and altitude 7 m; (f) range 10 m and altitude 10 m.
Figure 14. Accuracy of burning detection in the daytime: (a) range 5 m and altitude 5 m; (b) range 5 m and altitude 7 m; (c) range 5 m and altitude 10 m; (d) range 10 m and altitude 5 m; (e) range 10 m and altitude 7 m; (f) range 10 m and altitude 10 m.
Figure 15. Experimental test of smoke detection at nighttime: (a) range 5 m and altitude 5 m; (b) range 5 m and altitude 7 m; (c) range 5 m and altitude 10 m; (d) range 10 m and altitude 5 m; (e) range 10 m and altitude 7 m; (f) range 10 m and altitude 10 m.
Figure 16. Accuracy of smoke detection at nighttime: (a) range 5 m and altitude 5 m; (b) range 5 m and altitude 7 m; (c) range 5 m and altitude 10 m; (d) range 10 m and altitude 5 m; (e) range 10 m and altitude 7 m; (f) range 10 m and altitude 10 m.
Figure 17. Experimental test of burning detection at nighttime: (a) range 5 m and altitude 5 m; (b) range 5 m and altitude 7 m; (c) range 5 m and altitude 10 m; (d) range 10 m and altitude 5 m; (e) range 10 m and altitude 7 m; (f) range 10 m and altitude 10 m.
Figure 18. Accuracy of burning detection at nighttime: (a) range 5 m and altitude 5 m; (b) range 5 m and altitude 7 m; (c) range 5 m and altitude 10 m; (d) range 10 m and altitude 5 m; (e) range 10 m and altitude 7 m; (f) range 10 m and altitude 10 m.
Figure 19. The accuracy of Dr-TOBID for smoke detection in the daytime.
Figure 20. The accuracy of Dr-TOBID for burning detection in the daytime.
Figure 21. The accuracy of Dr-TOBID for smoke detection at nighttime.
Figure 22. The accuracy of Dr-TOBID for burning detection at nighttime.
Table 1. Number of dataset categories.

Types     Images    Videos    Total
Smoke     1489      89        1578
Burning   1462      72        1534
Table 2. The specifications of the ground control station and Dr-TOBID accessories.

Description                             Specifications
FPV camera 1                            Full HD 1080p / 30 frames per second
WiFi module                             5.8 GHz frequency wireless link
Flight time                             25 min
Storage                                 SSD: 512 GB
CPU                                     Intel Core i7-7700K
GPU                                     NVIDIA GeForce GTX 1080 Ti
RAM                                     DDR4 16 GB
Power transmitted by the WiFi module    26 dBm
Operating system                        Windows 11
Software installations                  Anaconda Navigator 3, OpenCV, YOLOv5, TensorFlow, LabelImg, PyCharm, and OBS
1 FPV denotes the first-person view.
Table 3. The evaluation metrics of the experimental tests.

Duration     Detection    Range (m), Altitude (m)    Accuracy    Precision    Recall    F1-Score
Daytime      Smoke        5, 5                       0.689       0.727        0.888     0.798
                          5, 7                       0.742       0.739        0.895     0.807
                          5, 10                      0.694       0.725        0.888     0.798
                          10, 5                      0.667       0.772        0.881     0.821
                          10, 7                      0.732       0.737        0.893     0.807
                          10, 10                     0.736       0.738        0.894     0.808
             Burning      5, 5                       0.826       0.764        0.906     0.828
                          5, 7                       0.818       0.762        0.905     0.825
                          5, 10                      0.780       0.751        0.900     0.819
                          10, 5                      0.710       0.730        0.890     0.802
                          10, 7                      0.765       0.747        0.898     0.815
                          10, 10                     0.696       0.840        0.885     0.861
Nighttime    Smoke        5, 5                       0.602       0.689        0.869     0.768
                          5, 7                       0.583       0.683        0.866     0.761
                          5, 10                      0.644       0.744        0.878     0.805
                          10, 5                      0.667       0.715        0.882     0.788
                          10, 7                      0.635       0.703        0.877     0.779
                          10, 10                     0.634       0.702        0.875     0.778
             Burning      5, 5                       0.758       0.745        0.897     0.813
                          5, 7                       0.718       0.733        0.892     0.804
                          5, 10                      0.770       0.748        0.899     0.816
                          10, 5                      0.681       0.720        0.885     0.793
                          10, 7                      0.706       0.827        0.888     0.857
                          10, 10                     0.841       0.767        0.907     0.831