Article

A Forest Fire Identification System Based on Weighted Fusion Algorithm

College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
Forests 2022, 13(8), 1301; https://doi.org/10.3390/f13081301
Submission received: 2 July 2022 / Revised: 28 July 2022 / Accepted: 12 August 2022 / Published: 16 August 2022
(This article belongs to the Section Natural Hazards and Risk Management)

Abstract:
The occurrence of forest fires causes serious damage to ecological diversity and to the safety of people's property and lives. However, the complex forest environment, the changeable shape of forest fires, and the uncertainty of flame color and texture make forest fire detection very difficult. Traditional image processing methods rely heavily on hand-crafted features and do not generalize across different forest fire scenes. To solve the inaccurate forest fire recognition caused by manual feature extraction, some scholars have used deep learning to learn and extract forest fire features adaptively, but they typically rely on a single target detection model, whose limited learning and perception capacity makes it difficult to accurately identify forest fires in complex environments. Therefore, in order to overcome the shortcomings of manual feature extraction and achieve higher forest fire recognition accuracy, this paper proposes an algorithm based on weighted fusion to identify forest fire sources in different scenarios. It fuses two independent weakly supervised models, Yolov5 and EfficientDet, completes training and prediction on the data sets in parallel, and applies the weighted boxes fusion (WBF) algorithm to the prediction results to obtain the fused boxes. Finally, the model is evaluated by the Microsoft COCO standard. Experimental results show that, compared with Yolov5 and EfficientDet, the proposed Y4SED improves detection performance by 2.5% to 4.5%. The fused algorithm proposed in this paper has stronger feature extraction ability, extracts more forest fire feature information, and better balances recognition accuracy against model complexity, providing a reference for forest fire target detection in real environments.

1. Introduction

Around the world, forest fires occur frequently every year; they not only cause serious economic losses and destroy the ecological environment but also pose a threat to human life and safety. Forest fires usually spread quickly and are difficult to control in a short time. Therefore, real-time warning of forest fire sources can help people extinguish fires at an early stage, greatly reducing the cost and losses of firefighting. However, traditional forest fire source identification methods have obvious shortcomings. Detection systems based on smoke sensors [1,2,3] perform well in indoor spaces and are suitable only for places where burning produces a lot of smoke; they are difficult to deploy outdoors. Infrared or ultraviolet detectors [4] are susceptible to environmental interference and, given their short detection distance, are unsuitable for covering large areas. Satellite remote sensing [5,6] is good at detecting large-area forest fires but cannot detect small regional fires at an early stage.
With the development of computer technology, more and more scholars use image processing technology to monitor forest fire sources. Chen et al. [7] combined RGB and HSI color criteria to segment fire candidate areas and identified whether there was a fire by studying the area change and centroid stability of the candidate areas. Horng et al. [8] used the interframe difference method and color masking technology to remove false fire areas and obtain the suspected fire region in a color space model; on this basis, they constructed a simple method to estimate the burning degree of the flame so as to issue an appropriate early warning. Celik et al. [9] studied diverse video sequences and images and proposed a fuzzy color model using statistical analysis; combined with motion analysis, this model can discriminate fire-like objects well. In short, most traditional fire detection methods based on image processing focus on hand-crafted features, such as flame color and texture, to detect fires [10,11].
The arrival of the era of artificial intelligence has made everything more intelligent and in-depth. Models based on convolutional neural networks have greater advantages in feature learning than traditional manual recognition, and the features they extract contain deeper semantic information. Studies on deep-learning-based forest fire identification have been in progress at home and abroad. Zheng et al. [12] studied the feasibility of using the Faster R-CNN [13], YOLOv3 [14], SSD [15], and EfficientDet deep convolutional neural networks to detect forest fire smoke and found that YOLOv3 reaches a detection speed of up to 27 FPS with better smoke detection accuracy. For fire detection tasks, most researchers habitually rely on a single target detector, such as an improved single model, and few use the idea of ensemble learning to solve the missed detections that occur in actual fire detection. However, forest fire detection is a complex and demanding task, and it is impractical to use a single individual learner to detect fires across different scenes. Each individual learner has its own expertise and can extract different features from images. Yolov5 has the best detection performance in the Yolo series [16], with a small depth and high image inference speed. EfficientDet is a target detection model proposed by the Google Brain team in 2020; under a wide range of resource constraints, it is consistently more efficient than prior technologies [17]. Across the eight models of the EfficientDet series, D0~D7, as network accuracy gradually improves, the computational time complexity and space complexity increase accordingly.
Therefore, this paper proposes a new forest fire detection method based on the weighted fusion algorithm, which integrates Yolov5s and EfficientDet-D2, two single-stage models with higher real-time performance, which can significantly improve the robustness of the model and improve the detection performance, so as to effectively solve the problem of missing forest fire detection.

2. Materials and Methods

2.1. Experimental Environment

Details of the experimental environment configuration are given in Table 1. Before training, the experimental data set was divided into a training set and a test set at a ratio of 9:1. Only the training set participates in the actual model training process; the test set is used solely to evaluate the accuracy of the model.
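The 9:1 split can be sketched as follows; the shuffling, seed, and file names are illustrative assumptions, not details taken from the paper:

```python
import random

def split_dataset(image_paths, train_ratio=0.9, seed=42):
    """Shuffle the image list and split it into training and test sets 9:1."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # fixed seed for reproducibility
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]

# With the paper's 2976-image data set, this yields 2678 training images
# and 298 test images, matching the counts reported in Table 6.
train, test = split_dataset([f"img_{i}.jpg" for i in range(2976)])
```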

2.2. Data Set

The data set established in this paper includes various forest fire images and images containing fire-like distractors (the sun). To ensure that our model can handle complex forest fire sources (ground fires, trunk fires, and canopy fires), in addition to obtaining data from the open-source fire data set website BoWFire [18] and others, we also used web crawler technology to collect forest fire images from the Internet. The collected images were then manually filtered, producing a forest fire data set of 2976 images. Representative samples from the forest fire source image data set are shown in Figure 1.

2.3. Integrated Learning

In deep learning, our goal is to train a model with good performance and strong robustness, but in practice different individual learners show their own “preferences” in feature learning. Ensemble Learning [19] integrates multiple weakly supervised models with such “preferences” to obtain a more efficient, strongly supervised model. Table 2, Table 3 and Table 4 illustrate the principle, where m i denotes the ith model.
Therefore, ensemble learning generally generates several individual learners first and then adopts some strategy to combine them effectively [20]. In order to maximize the integration effect, the integrated individual learners should be both accurate and diverse: the greater the accuracy of the individual learners and the greater the differences between them, the better the integration effect. In other words, the integration in Table 2 plays a “positive role”.
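The “positive role” case can be sketched as a simple majority vote over per-model correctness flags; the flag values below are illustrative, not values taken from the paper's tables:

```python
def majority_vote(predictions):
    """Combine per-model correctness flags (True = the model is correct
    on that test case) by simple majority, one vote per learner."""
    return [sum(votes) * 2 > len(votes) for votes in zip(*predictions)]

# Three learners, each right on 2 of 3 cases but wrong on different ones:
# the ensemble is right on all 3 cases (the "positive role" of Table 2).
m1 = [True, True, False]
m2 = [False, True, True]
m3 = [True, False, True]
combined = majority_vote([m1, m2, m3])  # [True, True, True]
```

When the learners are identical, the vote changes nothing; when each is right on a different single case, the majority is wrong everywhere, mirroring the “no effect” and “negative effect” cases of Tables 3 and 4.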
In view of the reality of forest fire detection, the requirements on algorithm accuracy and real-time performance are high, and forest fire targets vary in size. Therefore, we chose single-stage models with higher real-time performance, whose typical representatives are Yolo, SSD, and EfficientDet.
  • According to network depth and width, Yolov5 can be divided into Yolov5s, Yolov5m, Yolov5l, and Yolov5x. The Yolov5s network was used in this paper. Its depth is the smallest in the Yolov5 series [21], and its image inference speed reaches 455 FPS, an advantage that has made it widely used by a large number of scholars.
  • SSD is another single-stage target recognition model after Yolo. It adopts Yolo's direct regression of bounding boxes and class probabilities while, following Faster R-CNN, making extensive use of anchors to improve recognition accuracy. It offers high precision and real-time performance, but its recognition of small targets is mediocre.
  • Limited by hardware computing resources, EfficientDet-D2 was used for the experiments in this paper. For forest fire detection, the advantage of the EfficientDet model is that its backbone network, feature fusion network, and output network differ across variants [22], so a detection framework can be selected according to the cost performance of the software and hardware and the actual accuracy and efficiency requirements of the real environment, allowing a more efficient forest fire detector to be designed.

2.4. Fusion Model Y4SED

The traditional method of filtering prediction boxes is Non-Maximum Suppression (NMS), whose filtering process depends on a single IOU threshold [23]. However, different thresholds may affect the final results of the model, and if multiple objects lie side by side, one of them will be deleted. Because NMS simply discards redundant boxes, it cannot effectively generate averaged local predictions from different models. As Figure 2 shows, unlike NMS, the WBF [24] algorithm uses the confidence (score) of all prediction boxes to construct the fused box.
Taking two prediction boxes as an example, the weighted box generated by their fusion is calculated as follows. Assume the two prediction boxes are $bbox_A: [A_{x1}, A_{y1}, A_{x2}, A_{y2}, A_s]$ and $bbox_B: [B_{x1}, B_{y1}, B_{x2}, B_{y2}, B_s]$, where $(A_{x1}, A_{y1})$ are the coordinates of the upper-left corner of the $bbox_A$ frame, $(A_{x2}, A_{y2})$ are the coordinates of its lower-right corner, and $A_s$ is its confidence; the same holds for $bbox_B$. $bbox_C$ is obtained by fusing $bbox_A$ and $bbox_B$, as shown in Figure 3:
The values calculated by Formulas (1) and (2) give the upper-left coordinate of the $bbox_C$ fusion box; the values calculated by Formulas (3) and (4) give its lower-right coordinate, and the confidence of the $bbox_C$ box is calculated by Formula (5).
$$C_{x1} = \frac{A_{x1} \times A_s + B_{x1} \times B_s}{A_s + B_s} \quad (1)$$
$$C_{y1} = \frac{A_{y1} \times A_s + B_{y1} \times B_s}{A_s + B_s} \quad (2)$$
$$C_{x2} = \frac{A_{x2} \times A_s + B_{x2} \times B_s}{A_s + B_s} \quad (3)$$
$$C_{y2} = \frac{A_{y2} \times A_s + B_{y2} \times B_s}{A_s + B_s} \quad (4)$$
$$C_s = \frac{A_s + B_s}{2} \quad (5)$$
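Formulas (1)–(5) translate directly into code; the two example boxes below are invented for illustration:

```python
def fuse_two_boxes(box_a, box_b):
    """Fuse two prediction boxes [x1, y1, x2, y2, score] into one weighted
    box, following Formulas (1)-(5): each coordinate is the
    confidence-weighted average, and the score is the plain average."""
    ax1, ay1, ax2, ay2, a_s = box_a
    bx1, by1, bx2, by2, b_s = box_b
    w = a_s + b_s
    return [
        (ax1 * a_s + bx1 * b_s) / w,
        (ay1 * a_s + by1 * b_s) / w,
        (ax2 * a_s + bx2 * b_s) / w,
        (ay2 * a_s + by2 * b_s) / w,
        w / 2,
    ]

# The more confident box pulls the fused coordinates toward itself.
fused = fuse_two_boxes([0, 0, 10, 10, 0.9], [2, 2, 12, 12, 0.3])
```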
This paper proposes Y4SED, a new integration of the Yolov5-S and EfficientDet-D2 models, to solve the missed detections caused by a single weakly supervised learning model and to ensure that our model is more robust across scenarios. As Figure 4 shows, the two separate weakly supervised models, Yolov5-S and EfficientDet-D2, are trained on the forest fire data sets, the predicted results are saved in JSON files, and the weighted boxes fusion (WBF) algorithm is then used to fuse the prediction boxes of the two models. Experiments show that this method can greatly improve the detection accuracy of the model.

2.5. Evaluation Indicators

In order to verify the model proposed in this paper, Microsoft COCO [25], the most widely recognized standard in the field of target recognition, was adopted to evaluate the model (as shown in Table 5).
The Average Precision (AP) in Table 5 is the area enclosed by the P–R curve, where P is the precision rate and R is the recall rate, with an IOU threshold of 0.5. Generally, the larger the value, the better the model has learned, and vice versa.
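The area under the P–R curve can be approximated as follows; this sketch uses all-point interpolation over a handful of made-up (recall, precision) points rather than COCO's 101-point implementation:

```python
def average_precision(recalls, precisions):
    """Approximate the area under the P-R curve. Inputs are matched lists
    sorted by increasing recall; precision is first made monotonically
    non-increasing (the interpolation envelope), then the area is summed
    rectangle by rectangle."""
    prec = list(precisions)
    # Envelope: precision at recall r becomes the max precision at any recall >= r.
    for i in range(len(prec) - 2, -1, -1):
        prec[i] = max(prec[i], prec[i + 1])
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recalls, prec):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# Three illustrative operating points on the P-R curve:
ap = average_precision([0.2, 0.5, 1.0], [1.0, 0.8, 0.4])  # 0.64
```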
Precision (P) measures the proportion of true positives among all samples predicted as positive. In the forest fire identification task, precision is the ratio of the number of correctly predicted forest fire images ( T P ) to the total number of images predicted as forest fire ( T P + F P ), as shown in Formula (6).
$$P = \frac{TP}{TP + FP} \quad (6)$$
The recall rate (R) measures the proportion of actual positive samples that are correctly predicted as positive. In the forest fire identification task, recall is the ratio of the number of forest fire images correctly predicted by the model ( T P ) to the number of all actual forest fire images ( T P + F N ), as shown in Formula (7).
$$R = \frac{TP}{TP + FN} \quad (7)$$
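Formulas (6) and (7) can be computed directly; the counts in the example are invented for illustration:

```python
def precision_recall(tp, fp, fn):
    """Formulas (6) and (7): P = TP/(TP+FP), R = TP/(TP+FN).
    Returns 0.0 for an undefined ratio (empty denominator)."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return p, r

# e.g. 87 fires correctly detected, 13 false alarms, 13 missed fires:
p, r = precision_recall(tp=87, fp=13, fn=13)  # p = r = 0.87
```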

3. Experimental Results and Analysis

3.1. Parameter Setting

For details of the Yolov5 model and EfficientDet model training in the experiment, please refer to Table 6.
In the WBF algorithm, the IOU threshold is set to 0.5, and the results of the two models are given equal weight; that is, both model weights are set to 1.
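Under these settings (IOU threshold 0.5, equal model weights), fusing the two models' outputs can be sketched as below. This is a simplified, self-contained illustration: each incoming box is matched only against the highest-scoring box of an existing cluster, which approximates the full WBF procedure of [24], and the example boxes are invented:

```python
def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse_predictions(boxes, iou_thr=0.5):
    """Cluster boxes [x1, y1, x2, y2, score] whose IoU exceeds the
    threshold and replace each cluster with its confidence-weighted
    average box, with score = mean of the cluster's scores."""
    clusters = []
    for box in sorted(boxes, key=lambda b: -b[4]):  # high score first
        for cl in clusters:
            if iou(cl[0][:4], box[:4]) > iou_thr:
                cl.append(box)
                break
        else:
            clusters.append([box])
    fused = []
    for cl in clusters:
        w = sum(b[4] for b in cl)
        fused.append([sum(b[i] * b[4] for b in cl) / w for i in range(4)]
                     + [w / len(cl)])
    return fused

# One overlapping detection from each model plus one EfficientDet-only box:
yolo = [[0.0, 0.0, 10.0, 10.0, 0.9]]
eff = [[1.0, 1.0, 11.0, 11.0, 0.6], [20.0, 20.0, 30.0, 30.0, 0.8]]
fused = fuse_predictions(yolo + eff)  # 2 boxes: one fused, one kept as-is
```

Note how the EfficientDet-only box survives fusion instead of being suppressed, which is exactly how WBF addresses missed detections.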

3.2. Experimental Analysis

Through experimental observation, we found that Yolov5 is better at detecting large-target fires (Figure 5a,b) but sometimes misses objects (Figure 6). Meanwhile, although EfficientDet is less sensitive to large-target fires (Figure 5c,d), it is more “careful” than Yolov5 and can identify more fires. Therefore, we believe that combining these two weakly supervised models with different specialties can effectively solve the problem of missed forest fire detections.
Therefore, this paper adopts the Yolov5-S and EfficientDet-D2 models to complete forest fire identification. The Yolov5-S model is extremely fast, which reduces the time complexity of the algorithm, but experiments show that its learning is not as thorough as that of EfficientDet-D2. Yolov5 is better at detecting large targets, and EfficientDet is more careful than Yolov5; therefore, combining the two trains a strongly supervised model with better robustness that can effectively solve missed detections in forest fire identification.

3.3. Comparison of Experimental Results

According to Table 7, compared with the traditional non-maximum suppression (NMS) method of filtering prediction boxes, the integrated model (Y4SED) built with the weighted boxes fusion (WBF) algorithm obtains higher average precision in forest fire identification and effectively solves the problem of missed detections. In addition, the experimental results in Table 8 show that, compared with the original individual models (Yolov5 and EfficientDet), the proposed integrated model (Y4SED) improves significantly on all indexes. When the IOU threshold is 0.5, the AP of forest fire recognition by the integrated model reaches 87%, an improvement of 4.5% and 2.5% over the single weakly supervised learners Yolov5-S and EfficientDet-D2, respectively. At the same time, the integrated model has clear advantages in AP0.5, APS, APM, APL, AR0.5, ARS, ARM, and ARL. Considering both detection accuracy and the space complexity of the algorithm, the proposed Y4SED model has stronger anti-interference ability and better distinguishes complex backgrounds and different kinds of forest fires (small, medium, large, strip-type, surface, trunk, crown, and night fires). It also produces no false detections when facing fire-like distractors such as the sun (as shown in Figure 7), so it can complete forest fire identification more effectively. Moreover, the model can not only detect in real time whether there is a forest fire in video (as shown in Figure 8), but also accurately locate the specific position of the fire, which plays a vital role in judging whether there is a forest fire and in forest protection.

4. Conclusions

The identification of forest fires is of great significance for forest protection, but the actual forest environment is complex and changeable: the background is cluttered, and the shape and color of flames change constantly with no fixed form, so forest fire identification often suffers from missed detections. To address these problems, this paper proposes a forest fire recognition system based on a weighted fusion algorithm, which solves the missed detections caused by complex backgrounds and the diversity of flame types under real conditions. A new method that runs the Yolov5 and EfficientDet models in parallel is proposed; instead of the traditional non-maximum suppression algorithm, the weighted boxes fusion (WBF) algorithm is used to fuse the prediction boxes of the two models and obtain the fused prediction boxes. Since Yolov5 is suited to identifying large-target forest fires and EfficientDet is more “careful” than Yolov5, the two complement each other and effectively improve the performance of the model. Experimental results show that the average precision of the proposed model for forest fire identification reaches 87%. The ensemble learning strategy significantly alleviates the widespread missed-detection problem in forest fire recognition. Compared with a single target detector and with models using the traditional non-maximum suppression algorithm, our model achieves a better compromise between average precision and the space complexity of the algorithm. These improvements make the model perform well in practical forest fire identification and play an important role in the timely detection of forest fires and in forest protection.

Author Contributions

J.Q. devised the programs and drafted the initial manuscript. H.L. designed the project and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Key Research and Development plan of Jiangsu Province (Grant No.BE2021716) and Jiangsu Modern Agricultural Machinery Equipment and Technology Demonstration and Promotion Project (NJ2021-19).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chowdhury, N.; Mushfiq, D.R.; Chowdhury, A.E. Computer Vision and Smoke Sensor Based Fire Detection System. In Proceedings of the 2019 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Dhaka, Bangladesh, 3–5 May 2019; pp. 1–5. [Google Scholar]
  2. Varela, N.; Ospino, A.; Zelaya, N.A.L. Wireless sensor network for forest fire detection. Procedia Comput. Sci. 2020, 175, 435–440. [Google Scholar] [CrossRef]
  3. Lin, H.; Liu, X.; Wang, X.; Liu, Y. A fuzzy inference and big data analysis algorithm for the prediction of forest fire based on rechargeable wireless sensor networks. Sustain. Comput. Inform. Syst. 2018, 18, 101–111. [Google Scholar] [CrossRef]
  4. Sun, F.; Yang, Y.; Lin, C.; Liu, Z.; Chi, L. Forest Fire Compound Feature Monitoring Technology Based on Infrared and Visible Binocular Vision. J. Phys. Conf. Ser. 2021, 1792, 012022. [Google Scholar] [CrossRef]
  5. Barmpoutis, P.; Papaioannou, P.; Dimitropoulos, K.; Grammalidis, N. A Review on Early Forest Fire Detection Systems Using Optical Remote Sensing. Sensors 2020, 20, 6442. [Google Scholar] [CrossRef]
  6. Zhan, J.; Hu, Y.; Cai, W.; Zhou, G.; Li, L. PDAM–STPNNet: A Small Target Detection Approach for Wildland Fire Smoke through Remote Sensing Images. Symmetry 2021, 13, 2260. [Google Scholar] [CrossRef]
  7. Chen, T.H.; Wu, P.H.; Chiou, Y.C. An early fire-detection method based on image processing. In Proceedings of the 2004 International Conference on Image Processing, Singapore, 24–27 October 2004; Volume 3, pp. 1707–1710. [Google Scholar]
  8. Horng, W.B.; Peng, J.W.; Chen, C.Y. A new image-based real-time flame detection method using color analysis. In Proceedings of the 2005 IEEE Networking, Sensing and Control, Tucson, AZ, USA, 19–22 March 2005; pp. 100–105. [Google Scholar]
  9. Çelik, T.; Özkaramanlı, H.; Demirel, H. Fire and smoke detection without sensors: Image processing based approach. In Proceedings of the 2007 15th European Signal Processing Conference, Poznan, Poland, 3–7 September 2007; pp. 1794–1798. [Google Scholar]
  10. Khan, M.N.A.; Tanveer, T.; Khurshid, K.; Zaki, H.; Zaidi, S.S.I. Fire Detection System using Raspberry Pi. In Proceedings of the 2019 International Conference on Information Science and Communication Technology (ICISCT), Karachi, Pakistan, 9 March 2019; pp. 1–6. [Google Scholar]
  11. Priya, R.S.; Vani, K. Deep Learning Based Forest Fire Classification and Detection in Satellite Images. In Proceedings of the 2019 11th International Conference on Advanced Computing (ICoAC), Chennai, India, 18–20 December 2019; pp. 61–65. [Google Scholar]
  12. Zheng, X.; Chen, F.; Lou, L.; Cheng, P.; Huang, Y. Real-Time Detection of Full-Scale Forest Fire Smoke Based on Deep Convolution Neural Network. Remote Sens. 2022, 14, 536. [Google Scholar] [CrossRef]
  13. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Processing Syst. 2015, 28, 91–99. [Google Scholar] [CrossRef] [PubMed]
  14. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 779–788. [Google Scholar]
  15. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Cham, Switzerland, 2016; pp. 21–37. [Google Scholar]
  16. Ultralytics. Yolov5. Available online: https://github.com/ultralytics/yolov5 (accessed on 1 May 2022).
  17. Tan, M.; Pang, R.; Le, Q.V. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 10781–10790. [Google Scholar]
  18. BoWFire Dataset. Available online: https://bitbucket.org/gbdi/bowfire-dataset/downloads/ (accessed on 1 May 2022).
  19. Xie, Y.; Peng, M. Forest fire forecasting using ensemble learning approaches. Neural Comput. Appl. 2019, 31, 4541–4550. [Google Scholar] [CrossRef]
  20. Dong, X.; Yu, Z.; Cao, W.; Shi, Y.; Ma, Q. A survey on ensemble learning. Front. Comput. Sci. 2020, 14, 241–258. [Google Scholar] [CrossRef]
  21. Yang, G.; Feng, W.; Jin, J.; Lei, Q.; Li, X.; Gui, G.; Wang, W. Face mask recognition system with YOLOV5 based on image recognition. In Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, 11–14 December 2020; pp. 1398–1404. [Google Scholar]
  22. Song, S.; Jing, J.; Huang, Y.; Shi, M. EfficientDet for fabric defect detection based on edge computing. J. Eng. Fibers Fabr. 2021, 16, 15589250211008346. [Google Scholar] [CrossRef]
  23. Zhou, D.; Fang, J.; Song, X.; Guan, C.; Yin, J.; Dai, Y.; Yang, R. Iou loss for 2d/3d object detection. In Proceedings of the 2019 International Conference on 3D Vision (3DV), Quebec City, QC, Canada, 16–19 September 2019; pp. 85–94. [Google Scholar]
  24. Solovyev, R.; Wang, W.; Gabruseva, T. Weighted boxes fusion: Ensembling boxes from different object detection models. Image Vis. Comput. 2021, 107, 104117. [Google Scholar] [CrossRef]
  25. Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized intersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 658–666. [Google Scholar]
  26. Lydia, A.; Francis, S. Adagrad—An optimizer for stochastic gradient descent. Int. J. Inf. Comput. Sci. 2019, 5, 566–568. [Google Scholar]
  27. Yao, Z.; Gholami, A.; Shen, S.; Mustafa, M.; Keutzer, K.; Mahoney, M. Adahessian: An adaptive second order optimizer for machine learning. Proc. AAAI Conf. Artif. Intell. 2021, 35, 10665–10673. [Google Scholar]
Figure 1. Representative images from the forest fire data set including (a) ground fires, (b) ground fires, (c) trunk fires, and (d) sun distractor.
Figure 2. Schematic diagram of WBF and NMS processing multiple prediction results. The red boxes represent the ground truth, and the blue boxes represent predictions made by multiple models.
Figure 3. Process of two prediction boxes fusing into one box through the fusion box formula.
Figure 4. Integration model architecture diagram.
Figure 5. (a) True positive predictions generated by Yolov5; (b) True positive predictions generated by Yolov5; (c) EfficientDet false positive prediction; (d) EfficientDet-generated false positive prediction.
Figure 6. EfficientDet is a more “careful” target detector than Yolov5. (a) Yolov5 missed a flame target; (b) Yolov5 missed two flame targets; (c) EfficientDet detection of all flame areas; (d) EfficientDet detected five flame targets.
Figure 7. Recognition effect of Y4SED model on pictures containing interferences (sun). (a) The model has no false detection and strong anti-interference ability; (b) The model has no false detection and strong anti-interference ability.
Figure 8. Real-time detection of forest fire video.
Table 1. Experimental environment configuration.

| Experimental Environment | Configuration Parameters |
|---|---|
| Programming language | Python 3.8 |
| Deep learning framework | PyTorch 1.7.1 |
| GPU | NVIDIA GeForce RTX 3060 |
| GPU accelerating package | CUDA 11.0 |
| Operating system | Windows 10 |
| CPU processor | AMD Ryzen 7 5800H |
Table 2. Positive effects of integration.

| Model | Test Case 1 | Test Case 2 | Test Case 3 |
|---|---|---|---|
| m1 | √ | √ | × |
| m2 | × | √ | √ |
| m3 | √ | × | √ |
| Integration | √ | √ | √ |
Table 3. Integration does not work.

| Model | Test Case 1 | Test Case 2 | Test Case 3 |
|---|---|---|---|
| m1 | √ | √ | × |
| m2 | √ | √ | × |
| m3 | √ | √ | × |
| Integration | √ | √ | × |
Table 4. Integration has a “negative effect”.

| Model | Test Case 1 | Test Case 2 | Test Case 3 |
|---|---|---|---|
| m1 | √ | × | × |
| m2 | × | √ | × |
| m3 | × | × | √ |
| Integration | × | × | × |
Table 5. Accuracy and recall under the Microsoft COCO standard.

| Metric | Meaning |
|---|---|
| AP0.5 | Average precision (AP) when IOU = 0.5 |
| APS | AP0.5 of small targets (size < 32²) |
| APM | AP0.5 of medium targets (32² < size < 96²) |
| APL | AP0.5 of large targets (size > 96²) |
| AR0.5 | Average recall (AR) when IOU = 0.5 |
| ARS | AR0.5 of small targets (size < 32²) |
| ARM | AR0.5 of medium targets (32² < size < 96²) |
| ARL | AR0.5 of large targets (size > 96²) |
Table 6. Details of model training.

| Model | Training | Test | Optimizer | Learning Rate | Batch Size | Number of Iterations |
|---|---|---|---|---|---|---|
| Yolov5-S | 2678 | 298 | SGD [26] | 1 × 10⁻² | 12 | 300 |
| EfficientDet-D2 | 2678 | 298 | AdamW [27] | 1 × 10⁻⁴ | 12 | 300 |
Table 7. Comparison of the average precision of forest fire identification using the WBF algorithm and the NMS algorithm.

| Algorithm Used in the Integration Model | AP0.5 |
|---|---|
| NMS | 79 |
| WBF | 87 |
Table 8. Experimental results. Models were evaluated using the Microsoft COCO standard.

| Model | AP0.5 | APS | APM | APL | AR0.5 | ARS | ARM | ARL |
|---|---|---|---|---|---|---|---|---|
| Yolov5-S | 82.5 | 36.0 | 48.7 | 66.0 | 69.2 | 48 | 59 | 76 |
| EfficientDet-D2 | 84.5 | 36.3 | 50.2 | 64.1 | 68.6 | 49.8 | 64.2 | 73 |
| Y4SED | 87 | 36 | 50.1 | 68.6 | 71.5 | 52.4 | 63.4 | 77.3 |

AP0.5, APS, APM, APL, AR0.5, ARS, ARM, and ARL are all percentages. The best value for each indicator is shown in bold.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Qian, J.; Lin, H. A Forest Fire Identification System Based on Weighted Fusion Algorithm. Forests 2022, 13, 1301. https://doi.org/10.3390/f13081301
