Article

A Forest Fire Recognition Method Based on Modified Deep CNN Model

1 College of Information Engineering, Guangdong Eco-Engineering Polytechnic, Guangzhou 510520, China
2 Foshan-Zhongke Innovation Research Institute of Intelligent Agriculture and Robotics, Foshan 528231, China
3 College of Electronic Engineering, South China Agricultural University, Guangzhou 510642, China
4 Guangdong Academy of Forestry Sciences, Guangzhou 510520, China
5 Zhujiang College, South China Agricultural University, Guangzhou 510642, China
* Author to whom correspondence should be addressed.
Forests 2024, 15(1), 111; https://doi.org/10.3390/f15010111
Submission received: 29 November 2023 / Revised: 24 December 2023 / Accepted: 4 January 2024 / Published: 5 January 2024
(This article belongs to the Section Natural Hazards and Risk Management)

Abstract

Controlling and extinguishing spreading forest fires is a challenging task that often leads to irreversible losses; moreover, large-scale forest fires generate smoke and dust, causing environmental pollution and posing potential threats to human life. In this study, we introduce a modified deep convolutional neural network model (MDCNN) designed for the recognition and localization of fire in video imagery, employing a deep learning-based recognition approach. We apply transfer learning to refine the model and adapt it to the specific task of fire image recognition. To combat the imprecise detection of flame characteristics, which are prone to misidentification, we integrate the deep CNN with an original feature fusion algorithm. We compile a diverse set of fire and non-fire scenarios to construct a training dataset of flame images, which is then used to calibrate the model for enhanced flame detection accuracy. The proposed MDCNN model achieves a low false alarm rate of 0.563%, a false positive rate of 12.7%, a false negative rate of 5.3%, a recall rate of 95.4%, and an overall accuracy of 95.8%. The experimental results demonstrate that this method significantly improves the accuracy of flame recognition, and the recognition results indicate the model's strong generalization ability.

1. Introduction

China has relatively scarce forestry resources, with a domestic forest cover of 23.04%. The per capita forest area is less than 0.16 hectares, and the per capita forest stock is 12.35 m³; both are lower than the global average. Advanced technologies such as computers, remote sensing, laser monitoring, radar communication, and satellite image monitoring have improved the ability to monitor wildfires [1], and they have been combined with advanced management concepts to greatly reduce the hidden dangers of forest fires. Nonetheless, forest fires remain a huge challenge for forestry development because of their very high threat and cost [2]. Once a forest fire spreads, it is difficult to control and extinguish, and it causes irreversible losses. Large-scale forest fires produce smoke and dust that cause considerable environmental pollution and may threaten human life. Protecting forest vegetation from damage and ensuring the balance of the forest's ecological environment help to safeguard the human living environment and promote economic and social development.

While forest fire monitoring and warning must be performed, reducing the number of forest fires itself is equally important. On the one hand, forest fire monitoring and warning is a form of early prediction of forest disasters that involves mathematical modeling of historical forest fire data, forest environmental parameters, meteorological data, and other inputs to predict the likelihood of forest fires occurring. On the other hand, the prediction, prevention, and management of forest fires are key tasks in forest fire prevention, and the forest fire risk prediction system is an important tool for monitoring forest fires, assisting in fire planning, and allocating fire extinguishing resources [3,4]. At present, using deep learning to predict the risk of forest fires shows clear promise. Deep learning is popular in neural network modeling because of its strong self-learning and adaptive abilities and the advantages offered by various convolutional neural networks (CNNs) and candidate-region algorithms [5,6]. Forest fire risk prediction includes identifying risks and measuring their scale and frequency. Such prediction involves four stages: identification of hot areas, assessment of forest fire sensitivity, classification of areas vulnerable to forest fire, and assessment of possible forest fire risk [7,8]. Environmental factors such as terrain features, human infrastructure, and meteorological conditions have been identified as influencing parameters that play an important role in constructing susceptibility models for forest fire risk [9,10,11].
Researchers worldwide have made significant advances in forest fire risk prediction technologies. For example, remote sensing and geographic information system (GIS) learning models are used to assess the probability of forest fire risk and to monitor susceptibility to forest fires [12,13,14]. Knowledge-based methods, including fuzzy logic [15], the analytic hierarchy process (AHP) [16], and network analysis methods, are also used for this purpose. Further, machine learning approaches such as random forest models and logistic regression have been applied to forest fire risk prediction [17]. Deep learning methods and artificial neural networks (ANNs) show potential for handling complex nonlinear problems. To obtain repeatable and reliable results with deep learning algorithms, sufficient training samples and well-chosen optimization parameters are required to address robustness issues in the effective extraction of complex high-level features and input conversion in fire images [18]. At present, the management of forest fire prevention in small- and medium-sized forest farms is not standardized. Further, automated monitoring and warning for forest fires remain at the small-scale experimental stage, and large-scale application will take time to achieve [19,20]. The difficulty of achieving real-time monitoring and early warning of forest fires is mainly reflected in the low accuracy of forest fire identification.
Mao et al. developed a system that uses ANNs to automatically identify fire smoke, in which a high-resolution scanning radiometer is used to determine where to cut off the fire line in time to reduce the damage caused by forest fires. Another study developed a fire spread simulator in which the neural network structure is optimized and calibrated for different terrains to help firefighters develop firefighting strategies [21,22]. Thach et al. [23] established a GIS database and trained and verified a forest fire model using a combination of a support vector machine (SVM), receptive field, and multilayer perceptron neural network algorithms; classification accuracy and kappa statistics were used to evaluate the model, and the experimental results showed good performance. Vikram et al. [24] applied an SVM to propose a semisupervised classification model that divides forest regions into high-, medium-, and low-activity regions; the model showed good recognition performance for forest fires, with an accuracy of 90%. Moayedi et al. [25] used a hybrid evolutionary algorithm to build a forest fire prediction model: three fuzzy element initiation methods based on the combination of an adaptive neuro-fuzzy inference system, a genetic algorithm, particle swarm optimization, and differential evolution were used to generate a forest fire sensitivity map of fire-prone areas, and the results showed that the model can effectively predict the occurrence of forest fires. Peruzzi [26] constructed a forest fire smoke recognition model for fire prediction using a backpropagation neural network; the experimental results showed that this method overcomes the large delay in forest fire risk prediction and effectively predicts forest fire risk. Grari [27] proposed a smoke generative adversarial network framework to expand the training sample data and used a Gaussian–Bernoulli deep belief network to preprocess the sample data and remove noise from the images; the classification test loss rate of the model was 0.25%, and a deep belief network–convolutional neural network model achieved a forest fire smoke recognition accuracy of 98.52%. Nikhil et al. [28] proposed a global forest fire risk estimation method and developed a monitoring system based on fuzzy logic and fuzzy algebra, applying the k-means clustering algorithm and a density-based spatial clustering algorithm; spatiotemporal data mining (STDM) was used in field experiments in Greece, and good predictions were obtained for forest fire risk areas. Abdusalomov [29] used deep learning to establish a convolutional transfer learning feature extraction network to improve the accuracy of forest fire classification and designed a deep convolution and domain-adaptive sample classification algorithm to verify its effectiveness, with good experimental results. Overall, compared with traditional machine learning algorithms, deep learning methods recognize images with complex backgrounds, for which feature extraction is difficult, more effectively.
The application of the You Only Look Once version 8 (YOLOv8) algorithm to forest fire monitoring represents a significant advance in the field [30]. As an object detection algorithm grounded in deep learning, YOLOv8 excels at swiftly and accurately identifying objects within images; in the context of forest fire surveillance, it is adept at detecting critical fire indicators such as flames and smoke. Integrating YOLOv8 with aerial drones or video surveillance infrastructure enables real-time monitoring of forest fires, facilitating prompt alerts and strategic responses. However, its efficacy lies predominantly in close-range object recognition. Given the extensive detection range required for forest fires, an enhanced convolutional neural network can augment recognition efficiency and better analyze a vast array of distant images, improving overall detection performance.
In light of these challenges, this study adopted the AlexNet deep convolutional neural network model, recognizing its robust performance on small-scale datasets and its reduced parameter count, which are particularly advantageous given the scarcity of extensive datasets in forest fire monitoring tasks. The evaluation was performed using a dedicated fire database. By leveraging an advanced deep convolutional neural network model (the MDCNN) and collecting forest canopy imagery via unmanned aerial vehicles and video surveillance systems, we processed forest fire images and dissected complex features such as smoke and flames to develop a fire recognition model aimed at forest fire surveillance and early warning. The insights garnered from this research offer vital theoretical and practical benchmarks for the future development of large-scale forest fire monitoring and alert frameworks.

2. Materials and Methods

In actual forest fire monitoring situations, many sources of interference are difficult to identify effectively and are easily mistaken for flames. These include flame-like objects that are not flames at all (e.g., people wearing red clothing in forest environments) and harmless flames that do not cause fires (e.g., lighter flames). To effectively eliminate such interference, more suitable features must be selected for judgment.

2.1. Image Flame Features and Model Selection

A flame candidate identified through a color model falls into one of two types of objects. The first is an actual flame, which can be either a flame that can cause a fire or a stably burning flame such as that of a lighter, match, or candle; color models cannot distinguish between these. The second is an object similar to a flame, whose common feature is a red or yellow color resembling that of a flame; such objects are difficult to distinguish through color space models. Therefore, identifying flames solely through color models is insufficient, and fire flames must be further identified through the extraction of other features.
Image feature recognition is essentially the extraction of image features, and recognition accuracy is determined by how effectively appropriate features are extracted. Image features are the basic attributes of an image, and images of different objects have their own unique features; differences in these attributes can be used to distinguish different objects. This article investigates flames, whose characteristics are summarized in Figure 1.
As shown in Figure 1, image flame features can be divided into two categories, static and dynamic, which are further subdivided into five major types. Accurately identifying flames depends on extracting flame features, but more is not always better: extracting too few features makes it difficult to exclude objects similar to flames, whereas extracting too many complicates feature fusion and degrades real-time performance. Therefore, how to select flame features is an important issue addressed in this study.
Numerous researchers have investigated the selection of image features following image segmentation [31]. Early studies mostly extracted one or two features for recognition, for example analyzing flame-tip features, comparing the differences between several interfering objects and flame-tip features, and using them as a single recognition feature [32]. However, this approach is too simple, and the selected interfering objects are not representative. Color and flame frequency have also been selected as recognition features; although accuracy improved greatly, the selected interferences are clearly too few to cover all flame-like interference. Many studies have found that identifying fire flames through a single feature often yields unsatisfactory results and is prone to misjudgment [33].
Researchers have therefore fused multiple features to enhance accuracy, and selecting multiple features requires a feature fusion method. Current feature fusion algorithms include neural networks, Bayesian classifiers, and SVMs. One study extracted five commonly used features and developed an AHP-based feature fusion method; however, the AHP relied too heavily on manual experience [34]. Another study used a deep CNN for identification and achieved improved accuracy; however, the input picture size needed to be fixed [35]. A further study used a deep CNN for recognition and improved accuracy over traditional methods; however, identifying an entire video input required considerable time in practical applications [36]. Yet another study used an SVM classifier to identify flames; however, different features must be selected as classifier inputs, and the selected features are subjective and cannot be guaranteed to be optimal [37].
These feature fusion algorithms have their own advantages and disadvantages. For flame recognition, Bayesian methods need parameters to be determined in advance; SVMs are suitable for training on small samples; and deep CNNs have strong learning ability and high robustness to interference, giving them clear advantages for visual flames with many interference sources and complex analysis [38]. Therefore, this study proposes a fire recognition method based on a deep CNN model, in which complex preprocessing steps are reduced and the whole fire identification process is integrated into a single deep neural network that is convenient to train and optimize. To address the interference of fire-like scenes during identification, a new scheme based on the motion characteristics of flames is proposed: fire-like interference caused by lighting is eliminated based on changes in the bounding-box coordinates between consecutive video frames. After comparing numerous open-source deep learning frameworks, the Caffe framework was chosen for training and testing in this study.

2.2. Modified Deep CNN Model for Forest Fire Recognition

A CNN is a feedforward neural network with convolutional computation and a deep structure. Its use here is divided into two parts: model training and model evaluation. For fire video recognition, a large number of fire images is first collected for model training, and a deep CNN is used to learn a deeper representation of fire characteristics, yielding a set of candidate fire recognition models. The test dataset is then used to evaluate these models and find the optimal one. Finally, the optimal model is used to determine whether a newly input photo contains flames. The fire video recognition flowchart is shown in Figure 2.
This study proposes a modified deep convolutional neural network model (MDCNN) for recognizing and locating forest fires in video images. In the experiments, the softmax classification function is replaced with a sigmoid function suitable for binary classification to construct the fire recognition model. A single deep neural network is used for image recognition, and its localization method differs from the sliding window approach: the model generates a series of default boxes over the pixels of intermediate-layer feature maps at different scales and aspect ratios. During operation, the network produces scores for the existing target categories and generates localization boxes according to the localization weights, which matches object shapes more accurately. At the same time, the recognition network combines feature maps of different resolutions to handle objects of different sizes. The advantage of this network is that it retains a high recognition speed while improving fire recognition accuracy, providing favorable conditions for forest fire recognition, and it achieves high accuracy even for low-resolution inputs. The improved MDCNN model is more lightweight, effectively recognizes forest fire images, and thus detects forest fires efficiently.
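As an illustration of the sigmoid substitution described above, the following PyTorch-style sketch shows a binary fire/non-fire output head; the class name FireHead and the 4096-dimensional input are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class FireHead(nn.Module):
    """Binary classification head: a single logit passed through a sigmoid,
    replacing a two-way softmax as described in the text (sketch only)."""
    def __init__(self, in_features: int):
        super().__init__()
        self.fc = nn.Linear(in_features, 1)  # one fire/non-fire logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.fc(x))     # probability that the input contains fire

head = FireHead(in_features=4096)            # 4096 matches the Fc width discussed below
probs = head(torch.randn(8, 4096))           # batch of 8 feature vectors
print(probs.shape)                           # torch.Size([8, 1])
```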
Common deep convolutional neural network models include LeNet, AlexNet, VGGNet, GoogLeNet, and ResNet [39]. Because this study focuses on image processing and the initial flame database contains five thousand images, we chose AlexNet for model training and final testing on the established flame database. AlexNet is a classic deep convolutional neural network with a relatively small number of parameters that performs well, particularly on small-scale databases, making it suitable for the requirements of this study.
The AlexNet network model consists of five convolutional layers, three pooling layers, three fully connected layers, and a dropout layer added to prevent overfitting [40]. The first and second convolutional layers have kernel sizes of 11 × 11 and 5 × 5, respectively, and the remaining three convolutional layers all have kernel sizes of 3 × 3. The specific parameters of the AlexNet model are shown in Table 1, where Conv1 through Conv5 denote the first to fifth convolutional layers, Max-pool denotes a max pooling layer, and Fc1, Fc2, and Fc3 denote the first, second, and third fully connected layers. A convolutional layer is denoted as, for example, (11 × 11, 1, 4, stride = 4), meaning a kernel size of 11 × 11, an input channel count of 1, an output channel count of 4, and a stride of 4.
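For concreteness, the following is a minimal PyTorch sketch of the AlexNet layout of Table 1; the padding values and the 4096-unit fully connected widths follow the standard AlexNet configuration and are assumptions where Table 1 leaves them implicit.

```python
import torch
import torch.nn as nn

# Sketch of the AlexNet backbone of Table 1: five convolutional layers,
# three max-pooling layers, three fully connected layers, and dropout.
alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),    # Conv1
    nn.MaxPool2d(kernel_size=3, stride=2),                                 # Max-pool1
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),   # Conv2
    nn.MaxPool2d(kernel_size=3, stride=2),                                 # Max-pool2
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),  # Conv3
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),  # Conv4
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),  # Conv5
    nn.MaxPool2d(kernel_size=3, stride=2),                                 # Max-pool3
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),  # Fc1
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(inplace=True),         # Fc2
    nn.Linear(4096, 2),                                                    # Fc3: fire / non-fire
)

x = torch.randn(1, 3, 227, 227)  # the 227 x 227 input discussed below yields a 55 x 55 Conv1 map
print(alexnet(x).shape)          # torch.Size([1, 2])
```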
(1) Input layer: The main task of the input layer is to preprocess the original image. AlexNet requires an input size of 227 × 227; however, because the sample set in this article was collected through different channels, the sizes of the sample images are not consistent. Therefore, to reduce computational complexity, all images were resized to match the input size.
(2) Convolutional layer: Five convolutional layers are used in this study. The convolutional layer is the most important part of the entire network, and its core is the convolutional kernel (or filter). A convolution has two manually set attributes, size and depth. As the sample set in this article is self-built and small, a high depth is not suitable, to prevent overfitting. Convolution reduces dimensionality while extracting image features, and convolutional layers extract image features at a deeper level. After convolution, activation functions are used to rectify the results; commonly used functions include sigmoid, the rectified linear unit (ReLU), softplus, and tanh. Their function curves are shown in Figure 3.
In Figure 3, the gradients of the sigmoid and tanh functions are relatively gentle in the saturation zone, approaching 0; this easily causes the gradient to vanish and slows convergence. When a network has many layers, gradient vanishing is one of the main problems in rectifying convolution results. As the figure shows, the ReLU function has a constant gradient in the positive region and does not suffer from gradient vanishing; it also converges quickly, and its gradient is easy to compute. Therefore, ReLU is used for rectification in this study.
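A quick numerical check (illustrative only) of this vanishing-gradient behavior: for inputs deep in the saturation zone, the sigmoid and tanh gradients are close to 0, while the ReLU gradient stays at 1 for positive inputs.

```python
import torch

x = torch.tensor([-6.0, 0.5, 6.0], requires_grad=True)
for name, fn in [("sigmoid", torch.sigmoid), ("tanh", torch.tanh), ("relu", torch.relu)]:
    (grad,) = torch.autograd.grad(fn(x).sum(), x)
    print(name, [round(g, 4) for g in grad.tolist()])
# sigmoid [0.0025, 0.235, 0.0025]   <- near zero in the saturation zone
# tanh    [0.0,    0.7864, 0.0]
# relu    [0.0,    1.0,    1.0]     <- constant gradient for positive inputs
```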
(3) Pooling layer: The pooling layer usually follows the convolutional layer; it reduces the size of the matrix, preserves the main features while reducing the parameters of the next layer, and lowers computational complexity to prevent overfitting. Max pooling and average pooling are often used. For image recognition, max pooling reduces the mean shift caused by convolutional-layer parameter errors and retains more texture information, which is also important in image processing. The principle is shown in Figure 4.
For each 2 × 2 window in the figure above, the largest number is selected as the value of the output matrix. For example, the first window has a maximum value of 6; therefore, the first value of the output matrix is 6.
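The following snippet reproduces this principle on a hypothetical 4 × 4 input (the values are illustrative, not those of Figure 4): each 2 × 2 window contributes its maximum to the output.

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[1., 6., 2., 3.],
                  [4., 5., 0., 1.],
                  [7., 2., 9., 4.],
                  [3., 8., 5., 6.]]).reshape(1, 1, 4, 4)  # (batch, channel, H, W)
print(F.max_pool2d(x, kernel_size=2, stride=2).reshape(2, 2))
# tensor([[6., 3.],
#         [8., 9.]])  <- the first 2 x 2 window [[1, 6], [4, 5]] yields 6
```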
(4) Fully connected layer: The fully connected layer classifies the images. To identify whether the target is a flame, the output is divided into two categories, 0 and 1, representing non-fire and fire source images, respectively. The number of neurons entering the fully connected layer is greatly reduced by the convolutional and pooling layers; for example, in AlexNet, after processing an image of size 227 × 227 with 3 color channels, 4096 neurons are input into the fully connected layer. Finally, the softmax output is determined by the actual number of classification labels. In the fully connected layers, a dropout mechanism randomly deactivates some neurons, which helps prevent overfitting of the network and saves training time.
As shown in Figure 5, the input image has a width and height of 300 × 300 with three channels. The backbone is VGG-16, in which two fully connected layers are converted into convolutional layers and four convolutional layers are added to obtain feature maps better suited to localization. The network recognizes fires in two parts: a classification part that predicts whether the input image is a fire or non-fire image and outputs a category score, and a localization part that applies small convolutional kernels to the feature response maps to predict the offsets of default boxes on the different feature maps. The input feature map for detection and classification layer 1 is 38 × 38; each cell of this feature map is associated with 4 default boxes, giving 38 × 38 × 4 default boxes. The other feature maps are handled similarly, and the final recognition position is obtained by removing redundant detections through non-maximum suppression. The red box in the image marks the identified flames in a forest fire image.
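The default-box count can be made concrete with a short calculation. The 38 × 38 map with 4 boxes per cell is stated above; the remaining feature-map sizes and box counts below follow the standard SSD300 configuration and should be read as assumptions.

```python
# (feature map size, default boxes per cell), SSD300-style configuration (assumed)
feature_maps = [(38, 4), (19, 6), (10, 6), (5, 6), (3, 4), (1, 4)]

print(38 * 38 * 4)                              # 5776 default boxes on the 38 x 38 map alone
print(sum(s * s * b for s, b in feature_maps))  # 8732 default boxes in total
```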
During training, different hyperparameters must be set, models trained under each setting, and the best-performing configuration selected for the target problem. For this purpose, we trained a large number of models with different parameters, analyzed the training results, improved the model by adjusting the learning rate, threshold, and other hyperparameters, and finally retained the model with the highest accuracy. We adopted a transfer learning strategy: because the pretrained model was trained on a large dataset and the weights of each layer reflect learned image features, we initialized from the pretrained model and fine-tuned it to obtain better results. After 100,000 fine-tuning iterations, the final model was obtained, and it showed high accuracy in recognizing forest fires.
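A hedged sketch of this fine-tuning strategy: initialize from a pretrained backbone, replace the classification head for the binary fire task, and train with a small learning rate. The optimizer values mirror the solver settings given in Section 2.3; the use of torchvision's pretrained AlexNet and the scheduler step size are assumptions for illustration.

```python
import torch
import torchvision

model = torchvision.models.alexnet(weights=torchvision.models.AlexNet_Weights.DEFAULT)
model.classifier[6] = torch.nn.Linear(4096, 2)  # replace the 1000-way head with fire / non-fire

optimizer = torch.optim.SGD(model.parameters(),
                            lr=1e-4,             # initial learning rate 0.0001 (Section 2.3)
                            momentum=0.9,        # momentum 0.9
                            weight_decay=5e-4)   # weight decay 0.0005
scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                            step_size=1000,  # step size assumed
                                            gamma=0.1)       # decay ratio 0.1
```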

2.3. Parameter Selection

Table 2 provides detailed information about the training environment parameters utilized in this experiment, encompassing processor specifications, graphics card specifications, memory capacity, development environment, and other pertinent details.
The forest fire image dataset comprised images captured from fire videos published online. The frame-processing software Blender 4.0.1 (Blender, Amsterdam, The Netherlands) was then used to process the frames and generate XML files. The training set was formed by randomly selecting 90% of the labeled images; the remaining 10% served as the test set. Both sets were converted to LMDB format, images were resized to 300 × 300, data augmentation (mirroring and flipping) was applied, and the data were preprocessed and normalized. The solver parameters were set as follows: weight decay, 0.0005; initial learning rate, 0.0001; learning rate decay ratio, 0.1; and momentum, 0.9.
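A sketch of this preprocessing pipeline, assuming torchvision-style transforms; the normalization statistics (ImageNet means/stds) and the FireDataset class are assumptions for illustration.

```python
import torchvision.transforms as T

transform = T.Compose([
    T.Resize((300, 300)),            # match the 300 x 300 network input
    T.RandomHorizontalFlip(p=0.5),   # mirroring
    T.RandomVerticalFlip(p=0.5),     # flipping
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics (assumed)
                std=[0.229, 0.224, 0.225]),
])

# Hypothetical 90/10 split of a labeled dataset:
# from torch.utils.data import random_split
# dataset = FireDataset(root="data/", transform=transform)  # hypothetical dataset class
# n_train = int(0.9 * len(dataset))
# train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
```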
(1) Selection of the number of iterations
The dataset was trained, and the loss value of the sample function was recorded. As the number of iterations increases, the total training loss (train_loss) and the localization loss (mbox_loss) gradually converge: they decrease continuously and approach a stable state, stabilizing after 3000 iterations. The loss function curves of training are shown in Figure 6, and the loss function is defined as
$$ L(x, c, l, g) = \frac{1}{N} \left( L_{conf}(x, c) + \alpha L_{loc}(x, l, g) \right) $$
Here, N is the number of matched real boxes; x indicates whether a matched box belongs to a given category, taking values in {0, 1}; l is the predicted box; g is the ground-truth box; c is the confidence that the selected target belongs to a given category; and α adjusts the weight between the classification and localization terms.
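The following is a minimal sketch of this objective; the choice of binary cross-entropy for the confidence term (matching the sigmoid head) and smooth L1 for the localization term follows common SSD-style practice and is an assumption, not the authors' verified implementation.

```python
import torch
import torch.nn.functional as F

def detection_loss(conf_pred, conf_target, loc_pred, loc_target,
                   num_matched: int, alpha: float = 1.0) -> torch.Tensor:
    """L = (1/N) * (L_conf + alpha * L_loc), as in the equation above."""
    l_conf = F.binary_cross_entropy(conf_pred, conf_target, reduction="sum")
    l_loc = F.smooth_l1_loss(loc_pred, loc_target, reduction="sum")
    return (l_conf + alpha * l_loc) / max(num_matched, 1)  # guard against N = 0
```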
In this paper, fitting performance was assessed through multiple iterative experiments, and iteration counts of 1000, 2000, and 3000 were examined. In Figure 6, the left- and right-hand sides show the loss curve and accuracy curve, respectively.
When the number of iterations is 1000 and 2000, as shown in Figure 6a and Figure 6b, respectively, the accuracy curve does not tend to stabilize owing to the insufficient number of iterations. When the number of iterations is increased to 3000, as shown in Figure 6c, the accuracy of both curves approaches 1, and the training accuracy tends to stabilize. Figure 6a,b show that the loss curve does not converge owing to the insufficient number of iterations; therefore, the number of iterations needs to be increased. However, Figure 6c shows that the loss curve fluctuates only slightly when the number of iterations is 3000, and the rate of convergence is faster than that when the number of iterations is 2000. The proposed model generated with 3000 iterations was used in this study.
(2) Comparative experiments on different learning parameters
Among all parameter settings, the learning rate is one of the most important for performance. It governs how the network weights are adjusted according to the gradient of the loss function. A lower learning rate can settle into any local minimum but slows convergence, so more training time is required; selecting an appropriate learning rate therefore shortens the time needed to train the model. In this study, learning rates of 0.001, 0.005, and 0.01 were tested over 2000 iterations.
The accuracy curves in Figure 7a show that the yellow curve (learning rate 0.01) converges much faster than the blue curve (0.005) and the red curve (0.001); from the perspective of accuracy, 0.01 is therefore the best choice. Figure 7b shows that the red curve (0.001) does not converge, whereas the yellow (0.01) and blue (0.005) curves do, with the yellow curve converging faster. From the loss perspective as well, the final learning rate is 0.01.
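This comparison amounts to repeating the same training run while varying only the learning rate; build_model and train below are hypothetical stand-ins for the training pipeline of Section 2.2, shown only to make the protocol explicit.

```python
import torch

for lr in (0.001, 0.005, 0.01):
    model = build_model()                                       # hypothetical helper
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    history = train(model, optimizer, iterations=2000)          # hypothetical helper
    print(lr, history["loss"][-1], history["accuracy"][-1])     # compare the three runs
```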

3. Results and Discussion

3.1. Experimental Calculation and Result Analysis

To verify the accuracy of the proposed flame recognition model, the improved AlexNet was trained with a depth of five layers, 2500 iterations, and a learning rate of 0.01. The training samples comprised 1000 fire source images and 1500 non-fire source images, and the test set comprised 1000 images, including 400 fire source images and 600 interference images. The main training steps are as follows: establish a training image set with the samples described above; select the main AlexNet parameters, including the number of iterations and learning rate; conduct training to obtain the training model; and save the trained model for test sample recognition. The training images are divided into the categories shown in Table 3.
In the flame decision part, a model based on color saturation alone cannot accurately recognize objects with similar color saturation. Therefore, images with complex backgrounds and colors similar to the test images were extracted from the dataset, and more interference items were added for recognition. The experimental results are shown in Table 4, which compares the proposed model with three other approaches: an improved RGB model method that uses only color for recognition; a method that combines sharp-corner features and color features for joint recognition; and a method that uses SVMs for feature extraction.
The verification results show that the proposed model has the following advantages. First, feature values do not need to be selected manually, thus reducing the impact of human factors on accuracy. Second, this model can process a large amount of image data at once, and its data processing speed is much higher than that of other models when the model is trained in advance. Third, among interference items, some images similar to flames (e.g., people wearing red clothes, fire extinguishers, etc.) cannot be effectively excluded through color space models. Table 4 shows that these interference items are prone to false positives. However, this model can effectively eliminate these interference items while showing higher accuracy than those of other models. For images with flames that do not cause a fire, such as candle and lighter flame images, the proposed model has a false alarm rate of only 0.563%, which is significantly better than those of the other two models.
In response to false alarms caused by lighter flames, a large number of lighter-flame images was added to the non-flame training samples to reduce false alarms. Outdoor smoke and other factors can also cause a high false alarm rate, mainly owing to the similarity between ambient smoke in forest environments and smoke from forest fires. Images with high false alarm rates are shown in Figure 8.
Fire and non-fire images were selected from different scenarios to test the network recognition effect. The recognition results are shown in Figure 9. For fire images, the proposed model successfully achieved recognition and localization.
In summary, the model can accurately identify flames in ordinary situations; the recognition accuracy decreases somewhat in complex situations and improves greatly when the video quality is good. Overall, the model offers advantages over traditional methods, although its processing time is three to five times longer.
The probability values and recognized states output by the proposed network for fire and non-fire images during recognition and localization are listed in Table 5.
To evaluate the model’s performance, a test set comprising 3885 images was employed, including 2500 fire images and 1385 non-fire images. The assessment of the model’s effectiveness in recognition was conducted using key metrics such as accuracy, precision, recall, and F1-Score. These measures provide a comprehensive understanding of the model’s performance in distinguishing between fire and non-fire instances.
➀ Accuracy: This represents the proportion of correct samples out of the total number of samples. The calculation formula is as follows:
$$ ACC = \frac{TP + TN}{TP + FP + TN + FN} $$
➁ Precision: This represents the accuracy of positive predictions, i.e., the proportion of correct positive predictions out of all positive predictions. The calculation formula is as follows:
$$ P = \frac{TP}{TP + FP} $$
➂ Recall: This represents the proportion of correct positive predictions out of all actual positive instances. The calculation formula is as follows:
$$ R = \frac{TP}{TP + FN} $$
where TP represents true positives, TN represents true negatives, FP represents false positives, and FN represents false negatives.
➃ F1-Score: This measure combines precision and recall as their harmonic mean, and it is also commonly applied in image segmentation. The calculation formula is as follows:
$$ F1 = \frac{(1 + a^2) \, P \, R}{a^2 (P + R)} $$
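In code, metrics ➀ through ➃ reduce to a few lines; with a = 1, the F1 expression above reduces to the familiar 2PR/(P + R). The counts in the example call are hypothetical placeholders, not the paper's results.

```python
def metrics(tp: int, tn: int, fp: int, fn: int, a: float = 1.0):
    acc = (tp + tn) / (tp + fp + tn + fn)       # accuracy
    p = tp / (tp + fp)                          # precision
    r = tp / (tp + fn)                          # recall
    f1 = (1 + a**2) * p * r / (a**2 * (p + r))  # F1 (a = 1 gives 2PR / (P + R))
    return acc, p, r, f1

print(metrics(tp=2385, tn=1337, fp=48, fn=115))  # hypothetical counts
```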
The results outlined in Table 6 reflect the performance of various models, including the MDCNN model, which exhibits the following evaluation metrics after computation:
- False Negative Rate (FNR): 5.3%
- False Positive Rate (FPR): 12.7%
- Recall Rate: 95.4%
- Accuracy Rate: 95.8%
These metrics indicate that the MDCNN model has a high ability to correctly identify fire instances (high recall rate) and a high overall rate of correct predictions (high accuracy rate). However, there is still room for improvement in reducing the rates of both false negatives (missed fire detection) and false positives (incorrectly identified fires).
Table 6. Test results of the MDCNN model and other models.

| Model | FPR | FNR | Recall | Accuracy | F1 |
|---|---|---|---|---|---|
| MDCNN | 12.7% | 5.3% | 95.4% | 95.8% | 92.5 |
| CNN | 9.7% | 5.4% | 92.7% | 91.5% | 89.3 |
| AlexNet | 10.4% | 5.4% | 93.4% | 92.7% | 90.4 |
| VGGNet | 7.9% | 5.4% | 85.9% | 83.2% | 81.2 |
| GoogLeNet | 8.4% | 5.4% | 87.5% | 85.5% | 82.7 |
| ResNet | 9.3% | 5.4% | 90.6% | 87.4% | 84.5 |
The comparison of various deep convolutional neural network (CNN) models in Figure 10 reveals the performance of each model in terms of accuracy for the task of forest fire risk monitoring. The models assessed include CNN, AlexNet, VGGNet, GoogLeNet, and ResNet, each with its own unique architectural features and complexity.
The MDCNN model showcases superior performance with an accuracy of 95.8%, which is higher than the other models compared in the study. This high accuracy is complemented by a recall rate of 95.4%, indicating the model’s effectiveness in correctly identifying the positive cases (fire images). The false positive rate stands at 12.7%, and the false negative rate at 5.3%, which are the instances where the model incorrectly identified non-fire as fire and fire as non-fire, respectively.
When compared to the AlexNet model, the MDCNN model's accuracy exceeds it by 3.1 percentage points, demonstrating the improvements made by the MDCNN model in terms of hierarchical structure and possibly other optimizations that contribute to its enhanced performance in forest fire detection tasks. This indicates that the MDCNN model is more reliable and could be considered a more suitable option for real-world applications in forest fire risk monitoring systems.

3.2. Anti-Interference Experiment

During testing, some images have bright illumination that resembles flames. Based on the motion characteristics of fire, the distance between the positioning boxes of consecutive frames in the video is calculated; a fire is determined only when the position coordinates are nonzero and the inter-frame distance is nonzero. This elegantly eliminates the influence of static fire-like scenes on fire recognition. Two different video scenes were selected for testing. The distances between the position coordinates of the test images and the previous frames are shown in Table 7, and the distance d is calculated as
$$ d = \sqrt{(x_{2\min} - x_{1\min})^2 + (y_{2\min} - y_{1\min})^2 + (x_{2\max} - x_{1\max})^2 + (y_{2\max} - y_{1\max})^2} $$
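This distance is straightforward to compute from the (xmin, ymin, xmax, ymax) boxes of two consecutive frames; the boxes in the example below are hypothetical.

```python
import math

def frame_distance(box1, box2) -> float:
    """Distance d between the bounding boxes of consecutive frames (equation above)."""
    x1min, y1min, x1max, y1max = box1
    x2min, y2min, x2max, y2max = box2
    return math.sqrt((x2min - x1min) ** 2 + (y2min - y1min) ** 2
                     + (x2max - x1max) ** 2 + (y2max - y1max) ** 2)

print(frame_distance((0, 0, 10, 10), (3, 4, 10, 10)))  # 5.0 (hypothetical boxes)
```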
In Table 7, images a, b, and c are three consecutive frames containing fire, and the inter-frame distance is calculated from their position coordinates. Interference photo f outputs position coordinates, but its inter-frame distance is 0, indicating a non-fire image. Images d and e contain no fire, generate no positioning box, and have no coordinate values, so their distance defaults to 0.
In experiments with the proposed method, the model correctly recognizes static fire-like scenes as non-fire scenes and thus eliminates the interference they cause. To evaluate the accuracy and generalization ability of the model, different fire and non-fire scenarios were selected for recognition; the model showed good recognition performance, with an accuracy of 95.8% on the self-built dataset.
Finally, the constructed forest fire monitoring and early warning model achieves real-time monitoring of forest fires through a cloud platform. Figure 11 shows the interface for real-time forest fire monitoring. When a forest fire occurs, corresponding alerts are triggered to remind staff to take timely action.

4. Conclusions

In this study, suspected flames undetected during forest fire surveillance were classified and their image features were extracted for improved recognition. Several feature extraction methods were systematically analyzed and compared, with feature types being manually set. Following the construction of the MDCNN network model, the optimal learning rate and iteration number for precise flame detection were meticulously selected. Training was conducted with an established set of flame image samples, leading to the development of a robust training model. The model’s accuracy was then evaluated against other models, with its superior performance underscoring the efficacy of the proposed approach. The primary conclusions of this research are as follows:
(1) A forest fire recognition model was developed using a modified CNN, resulting in a highly accurate fire video image recognition model after extensive training. The model's accuracy and generalization capabilities were assessed using a diverse set of fire and non-fire scenarios.
(2) To address recognition disruptions caused by scenes resembling fire, a method that compares the coordinates of bounding boxes between consecutive frames was implemented. This approach effectively reduces static-scene interference and enhances the recognition capability of the model.
(3) The model demonstrated commendable performance in flame detection. It achieved a false alarm rate of only 0.563% for non-flame instances such as candle and lighter flames, a false positive rate of 12.7%, and a false negative rate of 5.3%, indicating that it both minimizes false detections and reliably identifies flame instances. Its recall rate of 95.4% shows high sensitivity, with the vast majority of actual flame instances detected, and its overall accuracy of 95.8% confirms reliable classification of both flame and non-flame instances. These results validate the effectiveness of the proposed method in improving the precision of flame detection, a task that is typically error-prone.
Although the MDCNN model exhibits high accuracy in identifying forest fires, opportunities to optimize its recognition performance remain. Future research will focus on refining model parameters, minimizing model complexity, and developing a more streamlined and effective model for forest fire recognition. We will continue to improve the algorithm and utilize better hardware to achieve faster forest fire detection, enhancing the real-time accuracy of forest fire monitoring and identification.

Author Contributions

Conceptualization, S.Z., X.Z. and S.C.; methodology, S.Z., X.Z., S.C. and Y.Z.; software, S.Z. and P.G.; validation, S.Z., Q.Z., F.H. and Y.Z.; formal analysis, S.Z., P.G. and Z.W.; investigation, S.Z., X.Z., F.H., Q.Z. and S.C.; resources, S.Z., Z.W. and S.C.; data curation, S.Z. and F.H.; writing—original draft preparation, S.Z., X.Z. and S.C.; writing—review and editing, S.Z., X.Z., P.G., F.H. and S.C.; visualization, S.Z., X.Z., F.H. and S.C.; supervision, Z.W., W.W. and S.C.; project administration, S.Z., X.Z., Y.Z., Z.W. and S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Guangdong Basic and Applied Basic Research Foundation (2022A1515140162 and 2022A1515140013), Characteristic Innovation Projects of Department of Education of Guangdong Province (2020KTSCX271), the Guangdong Forestry Science and Technology Innovation Project (2020KJCX003), the Guangdong Provincial Forestry Association Science and Technology Plan Project (2020-GDFS-KJ-01), the Guangdong Eco-Engineering Polytechnic textbook construction Project (2022-xj-jcjs006), and the Guangdong Eco-Engineering Polytechnic Double Leader Teacher Party Branch Studio Project.

Data Availability Statement

The data can be requested from the corresponding authors.

Acknowledgments

We would like to thank the anonymous reviewers for their critical comments and suggestions for improving the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wahyono; Harjoko, A.; Dharmawan, A.; Adhinata, F.D.; Kosala, G.; Jo, K.-H. Real-Time Forest Fire Detection Framework Based on Artificial Intelligence Using Color Probability Model and Motion Feature Analysis. Fire 2022, 5, 23. [Google Scholar] [CrossRef]
  2. Alkhatib, A.A.A.; Abdelal, Q.; Kanan, T. Wireless Sensor Network for Forest Fire Detection and behavior Analysis. Int. J. Adv. Soft Comput. Its Appl. 2021, 13, 82–104. [Google Scholar]
  3. Apriani, Y.; Oktaviani, W.A.; Sofian, I.M. Design and Implementation of LoRa-Based Forest Fire Monitoring System. J. Robot. Control 2022, 3, 236–243. [Google Scholar] [CrossRef]
  4. Guede-Fernández, F.; Martins, L.; de Almeida, R.V.; Gamboa, H.; Vieira, P. A Deep Learning Based Object Identification System for Forest Fire Detection. Fire 2021, 4, 75. [Google Scholar] [CrossRef]
  5. Majid, S.; Alenezi, F.; Masood, S.; Ahmad, M.; Gündüz, E.S.; Polat, K. Attention based CNN model for fire detection and localization in real-world images. Expert Syst. Appl. 2021, 189, 116114. [Google Scholar] [CrossRef]
  6. Abid, F. A Survey of Machine Learning Algorithms Based Forest Fires Prediction and Detection Systems. Fire Technol. 2020, 57, 559–590. [Google Scholar] [CrossRef]
  7. Avazov, K.; Hyun, A.E.; Sami, S.A.A.; Khaitov, A.; Abdusalomov, A.B.; Cho, Y.I. Forest Fire Detection and Notification Method Based on AI and IoT Approaches. Futur. Internet 2023, 15, 61. [Google Scholar] [CrossRef]
  8. Azevedo, B.F.; Brito, T.; Lima, J.; Pereira, A.I. Optimum Sensors Allocation for a Forest Fires Monitoring System. Forests 2021, 12, 453. [Google Scholar] [CrossRef]
  9. Parajuli, A.; Manzoor, S.A.; Lukac, M. Areas of the Terai Arc landscape in Nepal at risk of forest fire identified by fuzzy analytic hierarchy process. Environ. Dev. 2023, 45, 100810. [Google Scholar] [CrossRef]
  10. Dutta, S.; Vaishali, A.; Khan, S.; Das, S. Forest Fire Risk Modeling Using GIS and Remote Sensing in Major Landscapes of Himachal Pradesh. In Ecological Footprints of Climate Change: Adaptive Approaches and Sustainability; Springer International Publishing: Cham, Switzerland, 2023; pp. 421–442. [Google Scholar]
  11. Singo, M.V.; Chikoore, H.; Engelbrecht, F.A.; Ndarana, T.; Muofhe, T.P.; Mbokodo, I.L.; Murungweni, F.M.; Bopape, M.-J.M. Projections of future fire risk under climate change over the South African savanna. Stoch. Environ. Res. Risk Assess. 2023, 37, 2677–2691. [Google Scholar] [CrossRef]
  12. Yandouzi, M.; Grari, M.; Idrissi, I.; Boukabous, M.; Moussaoui, O.; Azizi, M.; Ghoumid, K.; Elmiad, A.K. Forest Fires Detection using Deep Transfer Learning. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 0130832. [Google Scholar] [CrossRef]
  13. Feizizadeh, B.; Omarzadeh, D.; Mohammadnejad, V.; Khallaghi, H.; Sharifi, A.; Karkarg, B.G. An integrated approach of artificial intelligence and geoinformation techniques applied to forest fire risk modeling in Gachsaran, Iran. J. Environ. Plan. Manag. 2022, 66, 1369–1391. [Google Scholar] [CrossRef]
  14. Alkhatib, R.; Sahwan, W.; Alkhatieb, A.; Schütt, B. A Brief Review of Machine Learning Algorithms in Forest Fires Science. Appl. Sci. 2023, 13, 8275. [Google Scholar] [CrossRef]
  15. Arteaga, B.; Díaz, M.; Jojoa, M. Deep Learning Applied to Forest Fire Detection. In Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, Louisville, KY, USA, 9–11 December 2020. [Google Scholar]
  16. Khan, A.; Hassan, B.; Khan, S.; Ahmed, R.; Abuassba, A. DeepFire: A Novel Dataset and Deep Transfer Learning Benchmark for Forest Fire Detection. Mob. Inf. Syst. 2022, 2022, 5358359. [Google Scholar] [CrossRef]
  17. Sathishkumar, V.E.; Cho, J.; Subramanian, M.; Naren, O.S. Forest fire and smoke detection using deep learning-based learning without forgetting. Fire Ecol. 2023, 19, 1–17. [Google Scholar] [CrossRef]
  18. Crowley, M.A.; Stockdale, C.A.; Johnston, J.M.; Wulder, M.A.; Liu, T.; McCarty, J.L.; Rieb, J.T.; Cardille, J.A.; White, J.C. Towards a whole-system framework for wildfire monitoring using Earth observations. Glob. Chang. Biol. 2022, 29, 1423–1436. [Google Scholar] [CrossRef] [PubMed]
  19. Michael, Y.; Helman, D.; Glickman, O.; Gabay, D.; Brenner, S.; Lensky, I.M. Forecasting fire risk with machine learning and dynamic information derived from satellite vegetation index time-series. Sci. Total. Environ. 2020, 764, 142844. [Google Scholar] [CrossRef] [PubMed]
  20. Mao, W.; Wang, W.; Dou, Z.; Li, Y. Fire Recognition Based on Multi-Channel Convolutional Neural Network. Fire Technol. 2018, 54, 531–554. [Google Scholar] [CrossRef]
  21. Thach, N.N.; Ngo, D.B.-T.; Xuan-Canh, P.; Hong-Thi, N.; Thi, B.H.; Nhat-Duc, H.; Dieu, T.B. Spatial pattern assessment of tropical forest fire danger at Thuan Chau area (Vietnam) using GIS-based advanced machine learning algorithms: A comparative study. Ecol. Inform. 2018, 46, 74–85. [Google Scholar] [CrossRef]
  22. Vikram, R.; Sinha, D.; De, D.; Das, A.K. EEFFL: Energy efficient data forwarding for forest fire detection using localization technique in wireless sensor network. Wirel. Networks 2020, 26, 5177–5205. [Google Scholar] [CrossRef]
  23. Moayedi, H.; Mehrabi, M.; Bui, D.T.; Pradhan, B.; Foong, L.K. Fuzzy-metaheuristic ensembles for spatial assessment of forest fire susceptibility. J. Environ. Manag. 2020, 260, 109867. [Google Scholar] [CrossRef]
  24. Peruzzi, G.; Pozzebon, A.; Van Der Meer, M. Fight Fire with Fire: Detecting Forest Fires with Embedded Machine Learning Models Dealing with Audio and Images on Low Power IoT Devices. Sensors 2023, 23, 783. [Google Scholar] [CrossRef] [PubMed]
  25. Grari, M.; Yandouzi, M.; Idrissi, I.; Boukabous, M. Using IoT and ML for Forest Fire Detection, Monitoring, and Prediction: A Literature Review. J. Theor. Appl. Inf. Technol. 2022, 100, 5445–5461. [Google Scholar]
  26. Nikhil, S.; Danumah, J.H.; Saha, S.; Prasad, M.K.; Rajaneesh, A.; Mammen, P.C.; Ajin, R.S.; Kuriakose, S.L. Application of GIS and AHP Method in Forest Fire Risk Zone Mapping: A Study of the Parambikulam Tiger Reserve, Kerala, India. J. Geovis. Spat. Anal. 2021, 5, 1–14. [Google Scholar] [CrossRef]
  27. Abdusalomov, A.B.; Islam, B.M.S.; Nasimov, R.; Mukhiddinov, M.; Whangbo, T.K. An Improved Forest Fire Detection Method Based on the Detectron2 Model and a Deep Learning Approach. Sensors 2023, 23, 1512. [Google Scholar] [CrossRef] [PubMed]
  28. Saydirasulovich, S.N.; Mukhiddinov, M.; Djuraev, O.; Abdusalomov, A.; Cho, Y.-I. An Improved Wildfire Smoke Detection Based on YOLOv8 and UAV Images. Sensors 2023, 23, 8374. [Google Scholar] [CrossRef] [PubMed]
  29. Ntinopoulos, N.; Sakellariou, S.; Christopoulou, O.; Sfougaris, A. Fusion of Remotely-Sensed Fire-Related Indices for Wildfire Prediction through the Contribution of Artificial Intelligence. Sustainability 2023, 15, 11527. [Google Scholar] [CrossRef]
  30. Nguyen, H.D. Hybrid models based on deep learning neural network and optimization algorithms for the spatial prediction of tropical forest fire susceptibility in Nghe an province, Vietnam. Geocarto Int. 2022, 37, 11281–11305. [Google Scholar] [CrossRef]
  31. Garcia, T.; Ribeiro, R.; Bernardino, A. Wildfire aerial thermal image segmentation using unsupervised methods: A multilayer level set approach. Int. J. Wildland Fire 2023, 32, 435–447. [Google Scholar] [CrossRef]
  32. Deshmukh, A.A.; Sonar, S.D.B.; Ingole, R.V.; Agrawal, R.; Dhule, C.; Morris, N.C. Satellite Image Segmentation for Forest Fire Risk Detection using Gaussian Mixture Models. In Proceedings of the 2023 2nd International Conference on Applied Artificial Intelligence and Computing (ICAAIC), Salem, India, 4–6 May 2023; pp. 806–811. [Google Scholar]
  33. Dinh, C.T.; Nguyen, T.H.; Do, T.H.; Bui, N.A. Research and Evaluate some Deep Learning Methods to Detect Forest Fire based on Images from Camera. In Proceedings of the 12th Conference on Information Technology and its Applications (CITA 2023), Danang, Vietnam, 28–29 July 2023; pp. 181–191. Available online: https://elib-vku-udn-vn.translate.goog/handle/123456789/2683?mode=full&_x_tr_sch=http&_x_tr_sl=vi&_x_tr_tl=sr&_x_tr_hl=sr-Latn&_x_tr_pto=sc (accessed on 20 November 2023).
  34. Tupenaite, L.; Zilenaite, V.; Kanapeckiene, L.; Gecys, T.; Geipele, I. Sustainability assessment of modern high-rise timber buildings. Sustainability 2021, 13, 8719. [Google Scholar] [CrossRef]
  35. Reder, S.; Mund, J.P.; Albert, N.; Waßermann, L.; Miranda, L. Detection of Windthrown Tree Stems on UAV-Orthomosaics Using U-Net Convolutional Networks. Remote Sens. 2021, 14, 75. [Google Scholar] [CrossRef]
  36. Šerić, L.; Ivanda, A.; Bugarić, M.; Braović, M. Semantic Conceptual Framework for Environmental Monitoring and Surveillance—A Case Study on Forest Fire Video Monitoring and Surveillance. Electronics 2022, 11, 275. [Google Scholar] [CrossRef]
  37. Tran, D.Q.; Park, M.; Jeon, Y.; Bak, J.; Park, S. Forest-Fire Response System Using Deep-Learning-Based Approaches with CCTV Images and Weather Data. IEEE Access 2022, 10, 66061–66071. [Google Scholar] [CrossRef]
  38. Casal-Guisande, M.; Bouza-Rodríguez, J.-B.; Cerqueiro-Pequeño, J.; Comesaña-Campos, A. Design and Conceptual Development of a Novel Hybrid Intelligent Decision Support System Applied towards the Prevention and Early Detection of Forest Fires. Forests 2023, 14, 172. [Google Scholar] [CrossRef]
  39. Saha, S.; Bera, B.; Shit, P.K.; Bhattacharjee, S.; Sengupta, N. Prediction of forest fire susceptibility applying machine and deep learning algorithms for conservation priorities of forest resources. Remote. Sens. Appl. Soc. Environ. 2023, 29, 100917. [Google Scholar] [CrossRef]
  40. Alsheikhy, A.A. A Fire Detection Algorithm Using Convolutional Neural Network. J. King Abdulaziz Univ. Eng. Sci. 2022, 32, 39. [Google Scholar] [CrossRef]
  41. Casallas, A.; Jiménez-Saenz, C.; Torres, V.; Quirama-Aguilar, M.; Lizcano, A.; Lopez-Barrera, E.A.; Ferro, C.; Celis, N.; Arenas, R. Design of a Forest Fire Early Alert System through a Deep 3D-CNN Structure and a WRF-CNN Bias Correction. Sensors 2022, 22, 8790. [Google Scholar] [CrossRef] [PubMed]
  42. Ryu, J.; Kwak, D. A Study on a Complex Flame and Smoke Detection Method Using Computer Vision Detection and Convolutional Neural Network. Fire 2022, 5, 108. [Google Scholar] [CrossRef]
  43. Chaturvedi, S.; Khanna, P.; Ojha, A. A survey on vision-based outdoor smoke detection techniques for environmental safety. ISPRS J. Photogramm. Remote Sens. 2022, 185, 158–187. [Google Scholar] [CrossRef]
Figure 1. Summary of flame characteristics.
Figure 2. Fire video recognition flowchart.
Figure 3. Performance of four activation functions.
Figure 4. Schematic diagram of max pooling.
Figure 5. Fire recognition network based on MDCNN.
Figure 6. (left) Loss curve and (right) accuracy curve for different numbers of iterations.
Figure 7. Accuracy and loss curves for different learning rates. (a) Accuracy curve. (b) Loss curve.
Figure 8. Video images with high false alarm rates.
Figure 9. Fire identification results for different scenarios.
Figure 10. Accuracy comparison of the MDCNN model and other models.
Figure 11. The forest fire early warning cloud platform.
Table 1. AlexNet network model parameters.

| Layer Name | Kernel Size | Stride | Input Size |
|---|---|---|---|
| Conv1 | 11 × 11 | 4 | 224 × 224 × 3 |
| Max-pool1 | 3 × 3 | 2 | 55 × 55 × 96 |
| Conv2 | 5 × 5 | 1 | 27 × 27 × 96 |
| Max-pool2 | 3 × 3 | 2 | 27 × 27 × 256 |
| Conv3 | 3 × 3 | 1 | 13 × 13 × 256 |
| Conv4 | 3 × 3 | 1 | 13 × 13 × 384 |
| Conv5 | 3 × 3 | 1 | 13 × 13 × 384 |
| Max-pool3 | 3 × 3 | 2 | 13 × 13 × 256 |
| Fc1 | 2048 | / | 4096 |
| Fc2 | 2048 | / | 4096 |
| Fc3 | 1000 | / | 4096 |
Table 2. Training environment parameters.

| Name | Training Environment |
|---|---|
| CPU | Intel® Xeon® Gold [email protected] GHz |
| GPU | NVIDIA RTX 3090 (24 GB) |
| RAM | 128 GB |
| PyCharm version | 2020.3.2 |
| Python version | 3.7.10 |
| PyTorch version | 1.6.0 |
| CUDA version | 11.1 |
| cuDNN version | 8.0.5 |
Table 3. Training image categories.

| Image Category | Description |
|---|---|
| Category 1 | Outdoor fire source with large flames and thick smoke |
| Category 2 | Fire source with large flames and less background interference |
| Category 5 | Outdoor burning image (interference) |
| Category 6 | Outdoor lighter image (interference) |
| Category 7 | Evening sunset (interference) |
| Category 8 | Picture of car lights (interference) |
Table 4. Error rate of four methods for verification. The four rightmost columns give the false alarm rate of each method.

| Image Source | Total Frames | Flame Frames | Literature [41] | Literature [42] | Literature [43] | Proposed Model |
|---|---|---|---|---|---|---|
| Interference term | 315 | 0 | 2.64% | 5.28% | 15.9% | 0% |
| Flame image | 350 | 200 | 5.17% | 17.3% | 31.23% | 0.563% |
| Video 1 | 800 | 364 | 17.6% | 35.6% | 46.53% | 11.87% |
Table 5. Probability values of partial fire/non-fire images.

| Picture | Probability Value | State |
|---|---|---|
| a | 0.653 | Fire |
| b | 0.765 | Fire |
| c | 0.779 | Fire |
| d | 0.875 | Fire |
| e | 0.231 | Non-fire |
| f | 0.187 | Non-fire |
| g | 0.138 | Non-fire |
| h | 0.327 | Non-fire |
Table 7. Position coordinates and distance from the previous test image.

| Figure | a | b | c | d | e | f |
|---|---|---|---|---|---|---|
| (xmin, ymin) | 93, 142 | 94, 142 | 86, 145 | 0 | 0 | 206.5 |
| (xmax, ymax) | 367, 198 | 374, 196 | 385, 204 | 0 | 0 | 473.92 |
| d (px) | 38.2 | 3.35 | 13.36 | 0 | 0 | 0 |