Article

An Accurate Forest Fire Recognition Method Based on Improved BPNN and IoT

1
Guangdong Eco-Engineering Polytechnic, Guangzhou 510520, China
2
College of Electronic Engineering, South China Agricultural University, Guangzhou 510642, China
3
Guangdong Academy of Forestry Sciences, Guangzhou 510520, China
4
Zhujiang College, South China Agricultural University, Guangzhou 510642, China
5
Foshan-Zhongke Innovation Research Institute of Intelligent Agriculture and Robotics, Foshan 528000, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(9), 2365; https://doi.org/10.3390/rs15092365
Submission received: 1 March 2023 / Revised: 24 April 2023 / Accepted: 27 April 2023 / Published: 29 April 2023

Abstract

Monitoring and early warning technology for forest fires is crucial. An early warning/monitoring system for forest fires was constructed based on deep learning and the internet of things. Forest fire recognition was improved by combining the size, color, and shape characteristics of the flame, smoke, and fire area. Complex upper-layer fire-image features were extracted efficiently, and the robustness of input conversion was improved, by building a forest fire risk prediction model based on an improved dynamic convolutional neural network. The proposed back propagation neural network fire (BPNNFire) algorithm calculated the image processing speed and delay rate, and the data were preprocessed to remove noise. The model recognized forest fire images, and the classifier distinguished images with fire from those without. Fire images were classified locally for feature extraction, and forest fire images were stored on a remote server. Compared with existing algorithms, BPNNFire provided real-time, accurate forest fire recognition at a low frame rate, with 84.37% accuracy, indicating superior recognition performance. The maximum relative error between the measured and actual values for real-time online monitoring of forest environment indicators, such as air temperature and humidity, was 5.75%. The packet loss rate of the forest fire monitoring network was 5.99% at Longshan Forest Farm and 2.22% at Longyandong Forest Farm.


1. Introduction

Forest fires destroy large numbers of trees and cause the death or displacement of wild animals. They destroy and modify forest vegetation and structure, along with the ecological environment, climate, and soil properties, reducing the forest's ability to prevent water and soil loss and to regulate the climate. They also reduce the vegetated area, exposing the ground to direct solar radiation, which raises the surface temperature too quickly, further harming ground-dwelling creatures and causing desertification and other hazards [1]. Traditional forest fire monitoring is time-consuming and labor-intensive. Forest fire prevention has gradually shifted from manual patrols, lookout monitoring, aerial forest protection, satellite remote sensing, and other monitoring methods to early monitoring and warning systems for forest fires. Early warning systems and the timely detection of forest fires are crucial measures for reducing fire losses. Wireless sensor networks (WSNs) are a relatively stable technical means of obtaining environmental information on combustibles under the forest canopy, monitoring changes in the environmental micrometeorology before forest fires, and providing early warnings of imminent fires [2,3]. The WSN, as an early monitoring tool, has therefore become an effective means of real-time forest fire monitoring. Deep learning methods that analyze forest canopy image information from images collected above the forest have also been applied in dynamic forest fire monitoring/early warning systems. Both techniques are described in detail below.
Before a forest fire, environmental information under the forest canopy can be collected using WSN technology, and changes in the micrometeorological data of the forest environment can be recorded and used to perceive signs of forest fires at an early stage. Flame and smoke above the forest can be identified by applying deep learning to forest canopy images, realizing fire-monitoring sampling and early warning, reducing firefighting time, and preventing major fires [4,5,6]. The WSN monitors the meteorological factors of the forest environment and the moisture content of undergrowth combustibles, and these data are used to simulate the spread and propagation of forest fires. A forest fire early warning model was built, verified, and modified in an actual forest environment by analyzing these factors [7,8,9,10].
Moussa et al. [11] built a forest fire-monitoring WSN system using temperature, humidity, and smoke concentration sensors to collect real-time forest environmental information. This system improved the DV-Hop location algorithm and the accuracy of the traditional iteration algorithm; optimized the distance estimation, node location, and model performance; and conducted field verification to meet the needs for early monitoring and positioning.
Saeed [12] employed flame, smoke, temperature, and humidity sensors to monitor forest fires. Combining the technical characteristics of ZigBee and long-range radio (LoRa), ZigBee technology was used for data transmission in the sensing area of the forest. By improving the LoRa relay group-network communication protocol and conducting field tests, Saeed collected and transmitted data for an intelligent forest fire early warning system.
Sinha [13] applied WSN nodes to monitor the characteristic signals of forest fires and fused temperature, carbon monoxide concentration, and smoke information to obtain the trust function generated by forest fires. The field test analyzed the probability of a forest fire. The results demonstrated that the system can reliably set off an alarm when the collected information exceeds a set threshold, quickly and accurately identifying the probability of a forest fire.
WSNs can collect real-time information, but using them to promptly perceive the current fire situation in the forest canopy is difficult. Humans have difficulty reaching certain forest environments, such as steep terrain, to deploy WSN monitoring nodes, and when a tree crown catches fire, nodes under the trees struggle to monitor the state of the fire. Therefore, many scholars have used deep learning methods to recognize forest canopy information in fire images collected above the forest and identify fire danger as early as possible [14]. Uncrewed aerial vehicles (UAVs) and video surveillance can obtain forest canopy image information for fire monitoring using deep learning methods. The complex features of forest fire images, such as smoke and flame, are analyzed to build a forest fire monitoring/early warning model [15,16,17,18].
Furthermore, forest fire risk prediction models have been created to predict the time of fire occurrence, fire-spread direction, and fire intensity by using deep learning to analyze the various factors affecting forest fires [19,20,21,22,23]. Park et al. [24] proposed a semi-supervised classification model using support vector machine technology to divide forest areas into high, medium, and low fire-risk levels based on the likelihood of a forest fire in each area. The model's forest fire identification is satisfactory, with a prediction accuracy of 90%.
Sousa et al. [25] determined a forest fire risk index based on the forest area and vegetation at Mount Tai, and divided the monitoring area into high-, medium-, and low-risk areas on a fire-risk rating map. Based on the sensitive characteristics of chaotic systems, the theory of differential flatness was adopted to track and control UAVs, realizing random path planning.
Elshewey et al. [26] employed data enhancement technology to enhance the data features of forest fire images and mark candidate boxes on target detection datasets. Three convolutional neural networks (VGG16, Inception V3, and ResNet50) were applied to build a forest fire early warning model using transfer learning. The model parameters were gradually optimized, and their accuracy was verified. This method can accurately detect the fire area in an image and provide fire location information.
Shreya et al. [27] combined cellular automata with existing forest fire models to build an improved forest fire spread model. The model calculates a speed-change-rate index according to the meteorological factors affecting forest fire spread and adaptively adjusts the time steps of the cellular automata through this index. The model can be used to simulate and predict forest fire spread. The experimental data reveal that the model's fire-prediction accuracy meets the accuracy requirements of real-time fire detection.
Thus, deep learning methods are highly suitable for forest fire identification [28,29,30,31,32,33]. However, in practical applications, most deep learning methods suffer from low accuracy in identifying forest fires: the inefficient extraction of complex upper-layer features from fire images results in low robustness of input conversion [34,35,36,37,38]. Considering these limitations of WSN- and deep learning-based methods, this study investigates forest fire monitoring using the WSN and deep learning, building on existing research in this field and providing critical technical support for enriching and innovating current forest fire prevention and control technologies and management methods.
The main contributions of this study are the construction of a forest fire monitoring and early warning system based on deep learning and the internet of things (IoT). A forest fire recognition method based on image segmentation processes the collected forest fire images. Forest fire recognition is improved by combining the size, color, and shape characteristics of the flame, smoke, and area. Complex upper-layer features of fire images are efficiently extracted, improving the robustness of input conversion by building an improved dynamic convolutional neural network (DCNN) forest fire risk prediction model. The proposed back propagation neural network forest fire identification (BPNNFire) algorithm calculates the processing speed and delay rate of the video images. The data are collected and preprocessed to remove noise. The model recognizes the forest fire images, and the classifier classifies them to distinguish between images with fire and those without fire. A fire image is classified locally for feature extraction of forest fire images and is transmitted to a remote server for storage.

2. Materials and Methods

2.1. Dataset Description

A total of 7690 forest fire images of diverse types were collected from Guangdong Province and the internet to build a dataset. The dataset contains top-view image information for trees, lakes, roads, and other forest environments captured using a UAV from the air, collected under sufficient light and low-illumination conditions.
The dataset was divided into training and testing sets in an 8:2 ratio (i.e., 6152 image samples for training and 1538 for testing, collected against the same backgrounds). The rotation-angle method was used to increase the number of training samples: each training image was rotated to six angles (30°, 75°, 180°, 210°, 255°, and 300°). This approach increased training sample diversity, prevented overfitting during training, and improved model detection accuracy, because differently rotated forest fire images are treated as distinct images by the training network. Rotation thus adds six copies of each image, expanding the original dataset seven-fold, from 7690 samples before expansion to 53,830 after (7690 × 7). The expanded sample data were divided into training and validation sets. Table 1 presents the data distribution of the training, verification, and testing sets.
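As an illustration of this expansion step, the following is a minimal Pillow sketch, assuming JPEG files in a flat directory and rotation with expand=True (black fill outside the original frame); the paths and file layout are hypothetical, not the authors' exact pipeline.

```python
from pathlib import Path

from PIL import Image

ANGLES = [30, 75, 180, 210, 255, 300]  # the six rotation angles used above

def augment_dataset(src_dir: str, dst_dir: str) -> None:
    """Write the original plus six rotated copies of every training image,
    so each source image yields 7 samples (7690 * 7 = 53,830)."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for img_path in Path(src_dir).glob("*.jpg"):
        img = Image.open(img_path).convert("RGB")
        img.save(dst / img_path.name)  # keep the original sample
        for angle in ANGLES:
            # expand=True keeps the whole rotated frame instead of clipping it
            img.rotate(angle, expand=True).save(
                dst / f"{img_path.stem}_rot{angle}{img_path.suffix}")

augment_dataset("data/train", "data/train_augmented")  # hypothetical paths
```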
This study also constructed a dataset of 3840 images from the internet and forest fire images collected at Longshan Forest Farm (23°12′N, 113°22′E), located in Lechang City, Shaoguan, Guangdong Province, China. The forest farm mainly grows Chinese fir, as well as Lechang Michelia, camphor, and various local broad-leaved trees. This ensured the generalizability of model training and the effectiveness and efficiency of extracting relevant features from the images. The dataset was divided into training and testing sets. The training set consisted of 3072 images, including 320 with fire and 2752 without fire. The testing set had 768 images: 384 with fire and 384 without. The training set distribution was unbalanced, whereas the testing set distribution was balanced, allowing assessment of whether the model could be used in real-world scenarios. The dataset contained images of various scenes, such as forest fires of various sizes under varying lighting intensity at night and in the morning. Non-fire images were captured in bright light, during early morning, and at sunset. Figure 1 depicts representative images of the forest fire datasets in diverse environments.

2.2. Network Structure

This section describes the improved BPNNFire algorithm. Figure 2 illustrates the early warning structure of the forest fire monitoring network. The upper part represents the image information regarding fire over the forest monitored by a UAV, and the image is recognized using deep learning technology. Finally, the forest fire image is recognized and predicted at the output layer. The lower part represents identifying forest environmental information through the WSN to provide a timely warning of forest fires. The collected data are stored in the database.
The forest fire monitoring/early warning system is depicted in Figure 3, comprising forest environment information collection nodes, a UAV, cameras, sensors, a solar power supply module, the BPNNFire algorithm and DCNN model deployment platform, and the forest fire online monitoring system. The forest environmental information collection nodes deployed in the field were powered by solar panels. The nodes include various sensors, such as air temperature, humidity, soil moisture, illumination, and wind speed sensors. Each forest environmental information monitoring node collects forest environmental data within its area and transmits the data in multi-hop form to the gateway node, which forwards it to a remote server through the network. The data in the server are connected to the forest fire monitoring/early warning system through an interface, which can display the monitored forest environment data in real time. The UAV collects forest fire image data over the forest and transmits the monitored image signal to the BPNNFire algorithm and DCNN model deployment platform. The forest fire online monitoring/early warning system can display and process the monitored forest fire images online and promptly detect a forest fire.

2.3. BPNNFire Algorithm

We propose the improved BPNNFire algorithm; Figure 4 presents its steps and the framework of the forest fire recognition algorithm. First, data are collected and preprocessed to remove noise. The model recognizes the forest fire image after input, and the classifier distinguishes between images with and without fire. An image with fire is classified locally for feature extraction and transmitted to a remote server for storage.
The following steps are performed in the improved BPNNFire algorithm:
  • The detection window is divided into a 16 × 16 grid of cells;
  • Each pixel is compared with each of its eight neighboring pixels;
  • If the neighbor's value is smaller than the focal pixel's value, the corresponding bit is set to 0; otherwise, it is set to 1, generating an 8-bit binary number;
  • The frequency histogram of the resulting codes is calculated over each cell;
  • The histogram of each cell is normalized according to the use case, and the normalized histograms are concatenated to obtain the feature vector (a minimal sketch of these steps is given below);
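These steps describe a local-binary-pattern-style descriptor. The following is a minimal NumPy sketch, assuming an 8-bit grayscale input, a 16 × 16 grid of cells, and 256-bin per-cell histograms (the binning is not specified above):

```python
import numpy as np

def lbp_feature_vector(gray: np.ndarray, grid: int = 16) -> np.ndarray:
    """8-neighbor binary codes, histogrammed per cell and concatenated."""
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # neighbor >= center -> bit is 1, otherwise 0 (as in the steps above)
        codes |= (neighbor >= center).astype(np.uint8) << bit
    ch, cw = codes.shape[0] // grid, codes.shape[1] // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = codes[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))  # per-cell normalization
    return np.concatenate(feats)
```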
When the DCNN model provides an early warning, the forest fire risk prediction model generates region of interest (ROI) particles as rectangles that constantly move in various directions and change position; the size of the ROI is updated over time. Step 1 explains each point of ROI formation in detail, and the similarity of fire or smoke can be identified. The model parameters are the object width $W_i$, object height $H_i$, central coordinates $C_i(x, y)$, and the four ROI corner coordinates $R_{C1}(x, y)$, $R_{C2}(x, y)$, $R_{C3}(x, y)$, and $R_{C4}(x, y)$. The derivation process is as follows:

$C_{ROI} = C_{i-1}(x, y)$ (1)

$W_{ROI} = W_{i-1} \times 1.2$ (2)

$H_{ROI} = H_{i-1} \times 1.2$ (3)

$R_{C1}(x, y) = C_{ROI}(x - W_{ROI} \div 1.5,\ y - H_{ROI} \div 1.5)$ (4)

$R_{C2}(x, y) = C_{ROI}(x + W_{ROI} \div 1.5,\ y + H_{ROI} \div 1.5)$ (5)

$R_{C3}(x, y) = C_{ROI}(x - W_{ROI} \div 1.5,\ y + H_{ROI} \div 1.5)$ (6)

$R_{C4}(x, y) = C_{ROI}(x + W_{ROI} \div 1.5,\ y - H_{ROI} \div 1.5)$ (7)

$Image = \mathrm{Crop}(R_{C1}, R_{C2}, R_{C3}, R_{C4})$ (8)

where $H_{i-1}$, $W_{i-1}$, and $C_{i-1}(x, y)$ are the inputs.
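As a sketch of how Equations (1)-(8) translate into an image crop, the following function enlarges the previous detection by 1.2× and crops the resulting rectangle; the Pillow crop call and the boundary clamping are illustrative assumptions:

```python
from PIL import Image

def roi_crop(img: Image.Image, cx: float, cy: float,
             w_prev: float, h_prev: float) -> Image.Image:
    """Crop the ROI around the previous detection center C_{i-1} = (cx, cy)."""
    w_roi = w_prev * 1.2                      # Eq. (2)
    h_roi = h_prev * 1.2                      # Eq. (3)
    dx, dy = w_roi / 1.5, h_roi / 1.5
    # RC1..RC4 in Eqs. (4)-(7) are the four corners of one axis-aligned
    # rectangle, so the crop in Eq. (8) only needs top-left and bottom-right.
    box = (max(0, int(cx - dx)), max(0, int(cy - dy)),
           min(img.width, int(cx + dx)), min(img.height, int(cy + dy)))
    return img.crop(box)
```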

2.4. Construction of the DCNN Prediction Model

Next, we build a forest fire risk prediction model based on the improved DCNN and the BPNN algorithm to extract complex upper-layer features of fire images and improve the robustness of input conversion. Figure 5 details the network architecture of the DCNN prediction model. The last layer of the DCNN performs high-level inference for classification.
(1)
Convolutional layer
The convolutional layer applies convolution to the input image, and the result is transferred to the next layer. Each unit in the convolutional layer has a receptive field, a collection of units in the previous layer. Neurons acquire basic visual features through local perception, and the layer can produce multiple feature maps. The form of the convolutional layer is expressed by Equation (9):

$X_j^l = f\left( \sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l \right)$ (9)

where the output $X_j^l$ denotes a feature map of the convolutional layer, $M_j$ represents the set of input maps, $k_{ij}^l$ indicates the convolution kernel, whose size determines the contribution of the model to the image, and $b_j^l$ is a bias term;
(2)
Sampling layer
The subsampling (pooling) layer performs subsampling and local averaging to reduce the resolution of the feature map and its sensitivity to shifts in the output. The form of the subsampling layer can be expressed as Equation (10):

$X_j^l = f\left( \beta_j^l \, \mathrm{down}(x_j^{l-1}) + b_j^l \right)$ (10)

where $\mathrm{down}(\cdot)$ denotes the subsampling function, which typically operates on $n \times n$ blocks of the input map to calculate the final output, yielding a normalized output $n$ times smaller in each dimension. In addition, $\beta$ represents the multiplicative bias, and $b$ is the additive bias. The proposed DCNN model follows;
(3)
DCNN model
The model has five convolutional layers, five max-pooling layers, and two fully connected layers, each of which has 4096 neurons. The softmax classifier is implemented for the last level of high-level inference classification, and a (leaky) rectified linear unit activation function is employed, as presented in the following formula:

$f(x) = \begin{cases} x, & x > 0 \\ ax, & x \le 0 \end{cases}$
An image is input into the model and resized to 256 × 256 × 3. The first convolutional layer filters the input image with 96 kernels of size 11 × 11 × 3 at a stride of 4. The second convolutional layer has 256 kernels of size 5 × 5 × 64, and its output is transferred to a pooling layer with a stride of 2. The third convolutional layer reapplies the convolution operation with 384 kernels of size 3 × 3 × 256 but no pooling. The remaining two convolutional layers use 256 and 384 kernels at a stride of 1. After the fifth convolutional layer, the pooling layer is reused with a single 3 × 3 filter. Finally, two fully connected layers, each with 4096 neurons, perform the final classification. Figure 6 displays the schematic diagram of flame-area extraction using the neural network.
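For concreteness, the following is a minimal PyTorch sketch consistent with this walkthrough. Because the stated layer counts are not fully self-consistent (five pooling layers are listed above, while the walkthrough describes three), the pooling placement, padding, the order of the last two convolutional layers, the leaky-ReLU slope a, and the two-class output head are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DCNNFire(nn.Module):
    """Hedged sketch of the DCNN described above (assumptions noted)."""

    def __init__(self, num_classes: int = 2, a: float = 0.01):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4),     # 96 kernels, 11x11x3, stride 4
            nn.LeakyReLU(a),                                # f(x) = x if x > 0 else a*x
            nn.MaxPool2d(3, stride=2),                      # pooling, stride 2
            nn.Conv2d(96, 256, kernel_size=5, padding=2),   # 256 kernels, 5x5
            nn.LeakyReLU(a),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1),  # 384 kernels, no pooling
            nn.LeakyReLU(a),
            nn.Conv2d(384, 384, kernel_size=3, padding=1),  # order of the last two
            nn.LeakyReLU(a),                                # conv layers is assumed
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.LeakyReLU(a),
            nn.MaxPool2d(3, stride=2),                      # reused 3x3 pooling layer
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 4096), nn.LeakyReLU(a),  # two fully connected
            nn.Linear(4096, 4096), nn.LeakyReLU(a),         # layers, 4096 neurons each
            nn.Linear(4096, num_classes),                   # softmax applied at inference
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A 256x256x3 input yields 256x6x6 features before the classifier.
probs = torch.softmax(DCNNFire()(torch.randn(1, 3, 256, 256)), dim=1)
```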

3. Results

3.1. Performance Comparison Test and Analysis of the BPNN Algorithm

3.1.1. Model Performance Test

Table 2 lists the specific training environment parameters, including the processor parameters, graphics card parameters, memory, development environment, and other specific information used in this experiment.
The convolutional layers realize a highly effective DCNN by training the convolutional filters; stacking more convolutions yields more complex features. The convolutional kernels are trained to form better combination patterns, optimal parameters are obtained through the proposed model, and the test samples are classified effectively by optimizing the parameters. The training and testing loss function curves of the algorithm are presented in Figure 7: the value of the loss function gradually decreases, and the curve converges as the number of training and testing iterations increases. Figure 7 also depicts the test accuracy iteration curve, revealing that testing accuracy increases with the number of training iterations and reaches its maximum at 500 iterations.

3.1.2. Forest Fire Image Recognition Effect Test

Various algorithms were compared in terms of forest fire image data processing speed by processing a video with a duration of 4 min and 16 s. The video has 29 frames/s, and the size of each frame is 960 × 540, for a total of 7424 frames. The processing speed and delay rate were calculated for the BPNNFire, BPNN, interframe difference, background elimination, and Vibe algorithms, as presented in Figure 8.
Figure 8 indicates that the processing speed is 7.54 fps for the BPNNFire algorithm, 7.26 fps for the BPNN algorithm, 7.13 fps for the interframe difference algorithm, 6.79 fps for the background elimination algorithm, and 0.87 fps for the Vibe algorithm. When the flight speed of the UAV is constant, the change-of-scene information recorded in the video is limited, so preprocessing during video frame capture can reduce the number of frames and achieve real-time data processing. When the video frame rate is set to 5 fps, the interframe difference and background elimination methods are used to increase the processing speed and verify the accuracy of the forest fire recognition algorithm at a low frame rate; a sketch of this frame-subsampling measurement is given below. The test data in Figure 8 reveal that, under the same conditions, the BPNNFire algorithm has the lowest delay rate, and its processing speed meets real-time usage and accuracy requirements. Figure 9 presents the recognition results of the various algorithms on the same images.
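A minimal OpenCV sketch of such a measurement follows: the video is subsampled to a target frame rate and a detector callback is timed. The detector interface and the subsampling scheme are placeholder assumptions, not the authors' exact procedure.

```python
import time

import cv2

def measure_fps(video_path: str, detect, target_fps: float = 5.0) -> float:
    """Return processed frames per second at roughly target_fps input."""
    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 29.0  # the test video is 29 fps
    step = max(1, round(native_fps / target_fps))   # keep every step-th frame
    processed, idx = 0, 0
    start = time.perf_counter()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:        # subsample to ~5 fps before detection
            detect(frame)          # e.g., BPNNFire inference on one frame
            processed += 1
        idx += 1
    cap.release()
    return processed / (time.perf_counter() - start)
```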
Figure 10 displays the accuracy of fire-image determination using various algorithms, revealing that the BPNNFire algorithm can meet real-time accurate forest fire recognition requirements under a low frame rate. The recognition accuracy rate reaches 84.37%, indicating that the BPNNFire algorithm is superior to other algorithms in recognition accuracy.

3.2. Internet of Things Monitoring System Test and Analysis

By monitoring forest environmental data, hidden dangers that may lead to a forest fire can be perceived early, providing advanced warning to minimize the occurrence of disasters. When a forest fire is about to occur, the micrometeorological information of the forest environment changes; by setting a threshold in the online forest fire monitoring system, an early warning can be issued to prevent a fire. When a forest fire occurs, the forest environmental information around the fire also changes in its initial stage, and when the monitored forest environment parameters exceed a certain threshold, the online forest fire monitoring and warning system initiates an alarm. The UAV collects forest fire image data over the forest and transmits the monitored image signal to the BPNNFire algorithm and DCNN model deployment platform in real time. From May to August 2021, WSNs for online forest fire monitoring were deployed in Guangdong Shaoguan Longshan and Guangdong Longyandong Forest Farms to verify the function of the forest fire monitoring/early warning system and conduct real-time online monitoring of environmental indicators, such as air temperature, humidity, soil moisture, and illumination. The online forest fire monitoring/early warning system processed the monitored forest fire images online to detect a fire promptly.

3.2.1. System Packet Loss Rate Test

The forest fire monitoring data from the two forest farms from 1 July 2021 to 30 September 2021 were downloaded onto the monitoring data management platform for statistical analysis. With a 30 min collection cycle, the number of monitoring data packets sent was 3572, and the number of data packets received by the platform was 3358 for Longshan and 2725 for Longyandong. Because the SIM card of the Longyandong Forest Farm gateway was not recharged in time, 785 data packets were lost between 0:00 on 15 July 2021 and 10:00 on 5 August 2021. Table 3 presents the packet loss rate statistics. The packet loss rate of the forest environment monitoring network was 5.99% for Longshan Forest Farm and 23.7% for Longyandong Forest Farm. Excluding the data loss period, the Longyandong Forest Farm gateway would have sent 2787 packets, giving a packet loss rate of 2.22%.
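These rates follow directly from the packet counts; a short check in Python reproduces them:

```python
def loss_rate(sent: int, received: int) -> float:
    """Packet loss rate in percent."""
    return (sent - received) / sent * 100

print(f"Longshan: {loss_rate(3572, 3358):.2f}%")            # 5.99%
print(f"Longyandong: {loss_rate(3572, 2725):.2f}%")         # 23.71%
print(f"Longyandong*: {loss_rate(3572 - 785, 2725):.2f}%")  # 2.22% excl. outage
```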
After excluding this outage period, the packet loss rate for Longshan Forest Farm (5.99%) was still higher than that for Longyandong Forest Farm (2.22%). The environmental data monitoring nodes at Longshan Forest Farm were deployed close to the gateway node; however, owing to dense tree growth in the deployment environment, differences in geographical location and altitude between the nodes, and weak SIM card signal strength, the gateway node was often disconnected and lost several data packets. At Longyandong Forest Farm, although there was a height drop and building obstruction between the environmental information monitoring nodes and the gateway node, many nodes were deployed, improving the data transmission link, overcoming the effect of geographical factors, and effectively reducing the packet loss rate. The results indicate that the hardware devices and embedded routing protocol can be applied to unattended forest environment monitoring in the field to ensure long-term stable system operation.

3.2.2. Fire-Monitoring Network Deployment of Longyandong Forest Farm

Several sets of WSN nodes were deployed on the Maofeng Mountain forested land at Longyandong Forest Farm during the experiment, and the network deployment method was consistent with that for Shaoguan Longshan Forest Farm. The air temperature, humidity, soil moisture, light intensity, and other indicators of the forest environment were collected.
The relative error test on the forest environmental data was conducted from 1 to 30 August 2021. The maximum temperature and humidity data measured in the field during this period and the corresponding maximum values recorded on the platform were used for the relative error statistics. The error is calculated as follows:
$\text{relative error} = \dfrac{\text{platform data} - \text{measured data}}{\text{measured data}} \times 100\%$
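A one-line Python check of this formula; taking the absolute value is an assumption implied by the nonnegative errors reported below, and the example values are illustrative only:

```python
def relative_error(platform: float, measured: float) -> float:
    """Relative error in percent between platform and field measurements."""
    return abs(platform - measured) / measured * 100

print(f"{relative_error(36.9, 34.9):.2f}%")  # illustrative values, ~5.73%
```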
The data measurement tool recorded the maximum temperature and humidity data daily; the monthly monitoring data were fitted and analyzed to generate the curves in Figure 11 and Figure 12.
The results indicate that the designed forest fire monitoring/early warning system is stable. Figure 11 displays the actual and platform maximum temperature data measured in the period, and it also depicts the actual and platform maximum humidity data measured in the period. Figure 12 demonstrates that the maximum relative error of temperature and humidity is 5.75%, whereas the minimum relative error is zero.

4. Conclusions

This study conducted comprehensive tests and analyses in Longshan and Longyandong Forest Farms to verify system stability and functionality. The research results are as follows:
  • The system can normally transmit the forest environment data monitored by the ground WSN, and the UAV can normally return fire images from above the forest and provide a prompt early warning, meeting the needs of forest fire monitoring;
  • Multiple algorithms were compared on the processing speed of a video with a duration of 4 min and 16 s. The video had 29 frames/s, and the size of each frame was 960 × 540, for a total of 7424 frames. The processing speed and delay rate of the video images were calculated using the BPNNFire algorithm and other algorithms. The test results revealed that the BPNNFire algorithm's judgment accuracy rate was 84.37%, indicating that it was superior to the other algorithms in recognition accuracy;
  • Three months of real-time online monitoring of forest environmental indicators showed that the packet loss rate of the forest fire monitoring network was 5.99% for Longshan Forest Farm and 2.22% for Longyandong Forest Farm. The constructed hardware equipment and embedded routing protocol could be applied to unattended forest fire monitoring in the field to ensure long-term stable system operation. The maximum relative error between the measured temperature and humidity values and the actual values was 5.75%, indicating that the forest fire monitoring/early warning system could stably receive and transmit forest environmental data and that the system connectivity was good.
Although the improved BPNNFire algorithm achieved good results when applied to forest fire detection, the detection accuracy and performance of the forest fire recognition model must still be improved. Future research could explore how to reduce model complexity, simplify the model structure, and optimize the model parameters, ensuring a lightweight model and improving calculation efficiency.

Author Contributions

Conceptualization, S.Z., X.Z. and S.C.; methodology, S.Z., X.Z., S.C. and Y.Z.; software, S.Z. and P.G.; validation, S.Z., X.Z., P.G. and Y.Z.; formal analysis, S.Z., P.G. and Z.W.; investigation, S.Z., X.Z., S.C., L.W., F.H. and P.G.; resources, S.Z., S.C. and Y.Z.; data curation, S.Z., P.G. and Y.Z.; writing—original draft preparation, S.Z., P.G. and X.Z.; writing—review and editing, S.Z., S.C., P.G. and X.Z.; visualization, S.Z., P.G., F.H. and X.Z.; supervision, S.C., W.W. and Y.Z.; project administration, S.Z., S.C., X.Z., Y.Z. and Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific Research Project of Guangdong Eco-Engineering Polytechnic, grant number 2020kykt-xj-zk10; the Special Fund Project of Guangdong Science and Technology Innovation Strategy, grant number pdjh2021b0850; the Characteristic Innovation Projects of the Department of Education of Guangdong Province, grant number 2018KJCX003; the Guangdong Basic and Applied Basic Research Foundation, grant numbers 2022A1515140162 and 2022A1515140013; and the Guangdong Forestry Science and Technology Innovation Project, grant numbers 2018KJCX003 and 2020KJCX003.

Data Availability Statement

The data can be requested from the corresponding authors.

Acknowledgments

We would like to thank the anonymous reviewers for their critical comments and suggestions for improving the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hanewinkel, M.; Hummel, S.; Albrecht, A. Assessing natural hazards in forestry for risk management: A review. Eur. J. For. Res. 2011, 130, 329–351. [Google Scholar] [CrossRef]
  2. Díaz-Ramírez, A.; Tafoya, L.A.; Atempa, J.A.; Mejía-Alvarez, P. Wireless sensor networks and fusion information methods for forest fire detection. Procedia Technol. 2012, 3, 69–79. [Google Scholar] [CrossRef]
  3. Bouabdellah, K.; Noureddine, H.; Larbi, S. Using wireless sensor networks for reliable forest fires detection. Procedia Comput. Sci. 2013, 19, 794–801. [Google Scholar] [CrossRef]
  4. Vikram, R.; Sinha, D.; De, D.; Das, A.K. EEFFL: Energy efficient data forwarding for forest fire detection using localization technique in wireless sensor network. Wirel. Netw. 2020, 26, 5177–5205. [Google Scholar] [CrossRef]
  5. Nebot, À.; Mugica, F. Forest Fire Forecasting Using Fuzzy Logic Models. Forests 2021, 12, 1005. [Google Scholar] [CrossRef]
  6. Nikhil, S.; Danumah, J.H.; Saha, S.; Prasad, M.K.; Rajaneesh, A.; Mammen, P.C.; Ajin, R.S.; Kuriakose, S.L. Application of GIS and AHP Method in Forest Fire Risk Zone Mapping: A Study of the Parambikulam Tiger Reserve, Kerala, India. J. Geovisualization Spat. Anal. 2021, 5, 1–14. [Google Scholar]
  7. Faroudja, A. A survey of machine learning algorithms based forest fires prediction and detection systems. Fire Technol. 2021, 57, 559–590. [Google Scholar]
  8. Barmpoutis, P.; Papaioannou, P.; Dimitropoulos, K.; Grammalidis, N. A review on early forest fire detection systems using optical remote sensing. Sensors 2020, 20, 6442. [Google Scholar] [CrossRef] [PubMed]
  9. Gaur, A.; Singh, A.; Kumar, A.; Kumar, A.; Kapoor, K. Video flame and smoke based fire detection algorithms: A literature review. Fire Technol. 2020, 56, 1943–1980. [Google Scholar] [CrossRef]
  10. Moumgiakmas, S.S.; Samatas, G.G.; Papakostas, G.A. Computer Vision for Fire Detection on UAVs—From Software to Hardware. Future Internet 2021, 13, 200. [Google Scholar] [CrossRef]
  11. Moussa, N.; El Belrhiti El Alaoui, A.; Chaudet, C. A novel approach of WSN routing protocols comparison for forest fire detection. Wireless Netw. 2020, 26, 1857–1867. [Google Scholar] [CrossRef]
  12. Saeed, F.; Paul, A.; Karthigaikumar, P.; Nayyar, A. Convolutional neural network based early fire detection. Multimed. Tools Appl. 2020, 79, 9083–9099. [Google Scholar] [CrossRef]
  13. Sinha, D.; Kumari, R.; Tripathi, S. Semisupervised classification based clustering approach in WSN for forest fire detection. Wirel. Pers. Commun. 2019, 109, 2561–2605. [Google Scholar] [CrossRef]
  14. Varela, N.; Ospino, A.; Zelaya, N.A.L. Wireless sensor network for forest fire detection. Procedia Comput. Sci. 2020, 175, 435–440. [Google Scholar] [CrossRef]
  15. Yebra, M.; Quan, X.; Riaño, D.; Larraondo, P.R.; van Dijk, A.I.; Cary, G.J. A fuel moisture content and flammability monitoring methodology for continental Australia based on optical remote sensing. Remote Sens. Environ. 2018, 212, 260–272. [Google Scholar] [CrossRef]
  16. Majid, S.; Alenezi, F.; Masood, S.; Ahmad, M.; Gündüz, E.S.; Polat, K. Attention based CNN model for fire detection and localization in real-world images. Expert Syst. Appl. 2022, 189, 116114. [Google Scholar] [CrossRef]
  17. Kukuk, S.B.; Kilimci, Z.H. Comprehensive analysis of forest fire detection using deep learning models and conventional machine learning algorithms. Int. J. Comput. Exp. Sci. Eng. 2021, 7, 84–94. [Google Scholar] [CrossRef]
  18. Avazov, K.; Hyun, A.E.; Sami, S.A.A.; Khaitov, A.; Abdusalomov, A.B.; Cho, Y.I. Forest Fire Detection and Notification Method Based on AI and IoT Approaches. Future Internet 2023, 15, 61. [Google Scholar] [CrossRef]
  19. Tang, Y.; Huang, Z.; Chen, Z.; Chen, M.; Zhou, H.; Zhang, H.; Sun, J. Novel visual crack width measurement based on backbone double-scale features for improved detection automation. Eng. Struct. 2023, 274, 115158. [Google Scholar] [CrossRef]
  20. Wardihani, E.; Ramdhani, M.; Suharjono, A.; Setyawan, T.A.; Hidayat, S.S.; Helmy, S.W.; Triyono, E.; Saifullah, F. Real-time forest fire monitoring system using unmanned aerial vehicle. J. Eng. Sci. Technol. 2018, 13, 1587–1594. [Google Scholar]
  21. Tang, Y.; Qiu, J.; Zhang, Y.; Wu, D.; Cao, Y.; Zhao, K.; Zhu, L. Optimization strategies of fruit detection to overcome the challenge of unstructured background in field orchard environment: A review. Precis. Agric. 2023, 274, 1–37. [Google Scholar] [CrossRef]
  22. Sevinc, V.; Kucuk, O.; Goltas, M. A Bayesian network model for prediction and analysis of possible forest fire causes. For. Ecol. Manag. 2020, 457, 117723. [Google Scholar] [CrossRef]
  23. Apriani, Y.; Oktaviani, W.A.; Sofian, I.M. Design and Implementation of LoRa-Based Forest Fire Monitoring System. J. Robot. Control 2022, 3, 236–243. [Google Scholar] [CrossRef]
  24. Park, M.; Tran, D.Q.; Lee, S.; Park, S. Multilabel Image Classification with Deep Transfer Learning for Decision Support on Wildfire Response. Remote Sens. 2021, 13, 3985. [Google Scholar] [CrossRef]
  25. Dampage, U.; Bandaranayake, L.; Wanasinghe, R.; Kottahachchi, K.; Jayasanka, B. Forest fire detection system using wireless sensor networks and machine learning. Sci. Rep. 2022, 12, 46. [Google Scholar] [CrossRef] [PubMed]
  26. Elshewey, A.M. Machine learning regression techniques to predict burned area of forest fires. Int. J. Soft Comput. 2021, 16, 1–8. [Google Scholar]
  27. Tang, Y.; Chen, C.; Leite, A.C.; Xiong, Y. Precision control technology and application in agricultural pest and disease control. Front. Plant Sci. 2023, 14, 1163839. [Google Scholar] [CrossRef]
  28. Guede-Fernández, F.; Martins, L.; de Almeida, R.V.; Gamboa, H.; Vieira, P. A deep learning based object identification system for forest fire detection. Fire 2021, 4, 75. [Google Scholar] [CrossRef]
  29. Li, C.; Tang, Y.; Zou, X.; Zhang, P.; Lin, J.; Lian, G.; Pan, Y. A Novel Agricultural Machinery Intelligent Design System Based on Integrating Image Processing and Knowledge Reasoning. Appl. Sci. 2022, 12, 7900. [Google Scholar] [CrossRef]
  30. Naderpour, M.; Rizeei, H.M.; Ramezani, F. Forest fire risk prediction: A spatial deep neural network-based framework. Remote Sens. 2021, 13, 2513. [Google Scholar] [CrossRef]
  31. Tang, Y.; Zhou, H.; Wang, H.; Zhang, Y. Fruit detection and positioning technology for a Camellia oleifera C. Abel orchard based on improved YOLOv4-tiny model and binocular stereo vision. Expert Syst. Appl. 2023, 211, 118573. [Google Scholar] [CrossRef]
  32. Bajracharya, B.; Thapa, R.B.; Matin, M.A. Forest fire detection and monitoring. In Earth Observation Science and Applications for Risk Reduction and Enhanced Resilience in Hindu Kush Himalaya Region; Springer: Berlin/Heidelberg, Germany, 2021; pp. 147–167. [Google Scholar]
  33. Miller, E.A. A Conceptual Interpretation of the Drought Code of the Canadian Forest Fire Weather Index System. Fire 2020, 3, 23. [Google Scholar] [CrossRef]
  34. Zhou, Y.; Tang, Y.; Zou, X.; Wu, M.; Tang, W.; Meng, F.; Zhang, Y.; Kang, H. Adaptive Active Positioning of Camellia oleifera Fruit Picking Points: Classical Image Processing and YOLOv7 Fusion Algorithm. Appl. Sci. 2022, 12, 12959. [Google Scholar] [CrossRef]
  35. Narita, D.; Gavrilyeva, T.; Isaev, A. Impacts and management of forest fires in the Republic of Sakha, Russia: A local perspective for a global problem. Polar Sci. 2021, 27, 100573. [Google Scholar] [CrossRef]
  36. Bui, D.T.; Hoang, N.D.; Samui, P. Spatial pattern analysis and prediction of forest fire using new machine learning approach of Multivariate Adaptive Regression Splines and Differential Flower Pollination optimization: A case study at Lao Cai province (Viet Nam). J. Environ. Manag. 2019, 237, 476–487. [Google Scholar]
  37. Peng, B.; Zhang, J.; Xing, J.; Liu, J. Distribution prediction of moisture content of dead fuel on the forest floor of Maoershan national forest, China using a LoRa wireless network. J. For. Res. 2022, 33, 899–909. [Google Scholar] [CrossRef]
  38. Michael, Y.; Helman, D.; Glickman, O.; Gabay, D.; Brenner, S.; Lensky, I.M. Forecasting fire risk with machine learning and dynamic information derived from satellite vegetation index time-series. Sci. Total Environ. 2021, 764, 142844. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Images in one forest fire dataset.
Figure 2. Structure of a forest fire monitoring network.
Figure 3. Measurement equipment and processes in the experiment.
Figure 4. Framework of the forest fire recognition algorithm.
Figure 5. Structure of the deep convolutional neural network prediction model.
Figure 6. Schematic diagram of flame-area extraction using the neural network.
Figure 7. Training and testing loss function curves.
Figure 8. Processing speed and delay rate of the algorithms.
Figure 9. Recognition results of various processing algorithms. (a) Original image; (b) BPNNFire algorithm; (c) BPNN algorithm; (d) Interframe difference algorithm; (e) Background elimination algorithm; (f) Vibe algorithm.
Figure 10. Fire determination accuracy of various algorithms.
Figure 11. Maximum temperature and humidity monitoring data.
Figure 12. Relative error of temperature and humidity data.
Table 1. Distribution of forest fire image datasets.

Category | Number of Dataset Samples | Training Set Samples | Verification Set Samples | Testing Set Samples
Fire image | 53,830 | 43,064 | 5237 | 5529
Table 2. Training environment parameters.

Name | Training Environment
CPU | Intel® Xeon® Gold [email protected] GHz
GPU | NVIDIA GTX 3090 @ 24 GB
RAM | 128 GB
PyCharm version | 2020.3.2
Python version | 3.7.10
PyTorch version | 1.6.0
CUDA version | 11.1
cuDNN version | 8.0.5
Table 3. Packet loss rate statistics.

Forest Farm | Longshan | Longyandong
Number of local packets sent | 3572 | 3572
Number of packets received by the platform | 3358 | 2725
Packet loss rate | 5.99% | 23.7%

Share and Cite

Zheng, S.; Gao, P.; Zhou, Y.; Wu, Z.; Wan, L.; Hu, F.; Wang, W.; Zou, X.; Chen, S. An Accurate Forest Fire Recognition Method Based on Improved BPNN and IoT. Remote Sens. 2023, 15, 2365. https://doi.org/10.3390/rs15092365
