Search Results (114)

Search Parameters:
Keywords = early fire and smoke detection

32 pages, 2698 KiB  
Article
Design and Validation of an Edge-AI Fire Safety System with SmartThings Integration for Accelerated Detection and Targeted Suppression
by Seung-Jun Lee, Hong-Sik Yun, Yang-Bae Sim and Sang-Hoon Lee
Appl. Sci. 2025, 15(14), 8118; https://doi.org/10.3390/app15148118 - 21 Jul 2025
Viewed by 280
Abstract
This study presents the design and validation of an integrated fire safety system that leverages edge AI, hybrid sensing, and precision suppression to overcome the latency and collateral limitations of conventional smoke detection and sprinkler systems. The proposed platform features a dual-mode sensor array for early fire recognition, motorized ventilation units for rapid smoke extraction, and a 360° directional nozzle for targeted agent discharge using a residue-free clean extinguishing agent. Experimental trials demonstrated an average fire detection time of 5.8 s and complete flame suppression within 13.2 s, with 90% smoke clearance achieved in under 95 s. No false positives were recorded during non-fire simulations, and the system remained fully functional under simulated cloud communication failure, confirming its edge-resilient architecture. A probabilistic risk analysis based on ISO 31000 and NFPA 551 frameworks showed risk reductions of 75.6% in life safety, 58.0% in property damage, and 67.1% in business disruption. The system achieved a composite risk reduction of approximately 73%, shifting the operational risk level into the ALARP region. These findings demonstrate the system’s capacity to provide proactive, energy-efficient, and spatially targeted fire response suitable for high-value infrastructure. The modular design and SmartThings Edge integration further support scalable deployment and real-time system intelligence, establishing a strong foundation for future adaptive fire protection frameworks. Full article
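Requiring agreement between the two sensing modes before raising an alarm is one common way such systems suppress false positives; a deliberately simple sketch, where the channel names and thresholds are illustrative assumptions, not the paper's parameters:

```python
def dual_mode_alarm(smoke_ppm, temp_c, smoke_thresh=150.0, temp_thresh=57.0):
    """Trigger only when both channels agree, suppressing single-channel
    false positives. Thresholds are illustrative, not the paper's values."""
    return smoke_ppm > smoke_thresh and temp_c > temp_thresh

print(dual_mode_alarm(smoke_ppm=400.0, temp_c=25.0))  # False: dust/steam, no heat
print(dual_mode_alarm(smoke_ppm=400.0, temp_c=80.0))  # True: both channels agree
```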

18 pages, 2545 KiB  
Article
Reliable Indoor Fire Detection Using Attention-Based 3D CNNs: A Fire Safety Engineering Perspective
by Mostafa M. E. H. Ali and Maryam Ghodrat
Fire 2025, 8(7), 285; https://doi.org/10.3390/fire8070285 - 21 Jul 2025
Viewed by 193
Abstract
Despite recent advances in deep learning for fire detection, much of the current research prioritizes model-centric metrics over dataset fidelity, particularly from a fire safety engineering perspective. Commonly used datasets are often dominated by fully developed flames, mislabel smoke-only frames as non-fire, or lack intra-video diversity due to redundant frames from limited sources. Some works treat smoke detection alone as early-stage detection, even though many fires (e.g., electrical or chemical) begin with visible flames and no smoke. Additionally, attempts to improve model applicability through mixed-context datasets—combining indoor, outdoor, and wildland scenes—often overlook the unique false alarm sources and detection challenges specific to each environment. To address these limitations, we curated a new video dataset comprising 1108 annotated fire and non-fire clips captured via indoor surveillance cameras. Unlike existing datasets, ours emphasizes early-stage fire dynamics (pre-flashover) and includes varied fire sources (e.g., sofa, cupboard, and attic fires), realistic false alarm triggers (e.g., flame-colored objects, artificial lighting), and a wide range of spatial layouts and illumination conditions. This collection enables robust training and benchmarking for early indoor fire detection. Using this dataset, we developed a spatiotemporal fire detection model based on the mixed convolutions ResNets (MC3_18) architecture, augmented with Convolutional Block Attention Modules (CBAM). The proposed model achieved 86.11% accuracy, 88.76% precision, and 84.04% recall, along with low false positive (11.63%) and false negative (15.96%) rates. Compared to its CBAM-free baseline, the model exhibits notable improvements in F1-score and interpretability, as confirmed by Grad-CAM++ visualizations highlighting attention to semantically meaningful fire features. These results demonstrate that effective early fire detection is inseparable from high-quality, context-specific datasets. Our work introduces a scalable, safety-driven approach that advances the development of reliable, interpretable, and deployment-ready fire detection systems for residential environments. Full article
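CBAM's channel-attention branch passes average- and max-pooled channel descriptors through a shared two-layer MLP and gates each channel with a sigmoid; a minimal NumPy sketch of that branch (random weights, reduction ratio 2, purely illustrative rather than the paper's configuration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention.
    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) form
    the shared MLP applied to both pooled descriptors."""
    avg = feat.mean(axis=(1, 2))   # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))     # (C,) max-pooled descriptor
    # Shared MLP on both descriptors, summed before the sigmoid (as in CBAM)
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return feat * att[:, None, None]  # rescale each channel by its gate

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C)) * 0.1
w2 = rng.standard_normal((C, C // 2)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

The spatial-attention branch (pooling across channels, then a convolution) follows the same gating pattern along the other axis.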

23 pages, 2463 KiB  
Article
MCDet: Target-Aware Fusion for RGB-T Fire Detection
by Yuezhu Xu, He Wang, Yuan Bi, Guohao Nie and Xingmei Wang
Forests 2025, 16(7), 1088; https://doi.org/10.3390/f16071088 - 30 Jun 2025
Viewed by 269
Abstract
Forest fire detection is vital for ecological conservation and disaster management. Existing visual detection methods exhibit instability in smoke-obscured or illumination-variable environments. Although multimodal fusion has demonstrated potential, effectively resolving inconsistencies in smoke features across diverse modalities remains a significant challenge. This issue stems from the inherent ambiguity between regions characterized by high temperatures in infrared imagery and those with elevated brightness levels in visible-light imaging systems. In this paper, we propose MCDet, an RGB-T forest fire detection framework incorporating target-aware fusion. To alleviate cross-modal feature ambiguity, we design a Multidimensional Representation Collaborative Fusion module (MRCF), which constructs global feature interactions via a state-space model and enhances local detail perception through deformable convolution. Then, a content-guided attention network (CGAN) is introduced to aggregate multidimensional features via a dynamic gating mechanism. Building upon this foundation, the integration of WIoU further suppresses vegetation occlusion and illumination interference at a holistic level, thereby reducing the false detection rate. Evaluated on three forest fire datasets and one pedestrian dataset, MCDet achieves a mean detection accuracy of 77.5%, surpassing advanced methods. This performance makes MCDet a practical solution for enhancing early warning system reliability. Full article
(This article belongs to the Special Issue Advanced Technologies for Forest Fire Detection and Monitoring)
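The dynamic gating aggregation is described only at a high level; a generic gated RGB-T fusion sketch (a 1×1-style gate that blends the two modalities per pixel; the weights are random placeholders, not MCDet's actual design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(rgb, thermal, w_gate):
    """Fuse RGB and thermal feature maps with a per-pixel dynamic gate.
    rgb, thermal: (C, H, W); w_gate: (1, 2*C) gate projection."""
    stacked = np.concatenate([rgb, thermal], axis=0)  # (2C, H, W)
    C2, H, W = stacked.shape
    # 1x1-conv-style gate: one scalar in (0, 1) per spatial location
    g = sigmoid((w_gate @ stacked.reshape(C2, H * W)).reshape(H, W))
    return g * rgb + (1.0 - g) * thermal              # convex combination

rng = np.random.default_rng(1)
rgb = rng.standard_normal((4, 8, 8))
thermal = rng.standard_normal((4, 8, 8))
w_gate = rng.standard_normal((1, 8)) * 0.1
fused = gated_fusion(rgb, thermal, w_gate)
print(fused.shape)  # (4, 8, 8)
```

Because the gate is a convex weight, every fused value stays between the corresponding RGB and thermal responses, which is what makes gating robust to one modality being unreliable.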

23 pages, 5036 KiB  
Article
Data-Driven Health Status Assessment of Fire Protection IoT Devices in Converter Stations
by Yubiao Huang, Tao Sun, Yifeng Cheng, Jiaqing Zhang, Zhibing Yang and Tan Yang
Fire 2025, 8(7), 251; https://doi.org/10.3390/fire8070251 - 27 Jun 2025
Viewed by 248
Abstract
To enhance fire safety in converter stations, this study focuses on detecting abnormal data and potential faults in fire protection Internet of Things (IoT) devices, which are networked sensors monitoring parameters such as temperature, smoke, and water tank levels. A data quality evaluation model is proposed, covering both validity and timeliness. For validity assessment, a transformer-based time series reconstruction method is used, and anomaly thresholds are determined using the peaks over threshold (POT) approach from extreme value theory. The experimental results show that this method identifies anomalies in fire telemetry data more accurately than traditional models. Based on the objective evaluation method and clustering, an interpretable health assessment model is developed. Compared with conventional distance-based approaches, the proposed method better captures differences between features and more effectively evaluates the reliability of fire protection systems. This work contributes to improving early fire risk detection and building more reliable fire monitoring and emergency response systems. Full article
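The peaks-over-threshold step fits a generalized Pareto distribution to the exceedances over an initial high quantile and derives the anomaly threshold from its tail quantile; a sketch with SciPy following the standard EVT derivation (the initial quantile and risk level here are illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.stats import genpareto

def pot_threshold(scores, init_q=0.98, risk=1e-3):
    """Anomaly threshold via peaks-over-threshold.
    scores: 1-D reconstruction-error series; init_q: initial quantile;
    risk: target probability of a score exceeding the returned threshold."""
    t = np.quantile(scores, init_q)      # initial high threshold
    excess = scores[scores > t] - t      # exceedances over t
    # Fit a generalized Pareto distribution to the exceedances (location 0)
    shape, _, scale = genpareto.fit(excess, floc=0.0)
    n, k = len(scores), len(excess)
    # Tail quantile of the fitted GPD, mapped back to the score scale
    if abs(shape) < 1e-12:
        return t + scale * np.log(k / (risk * n))
    return t + (scale / shape) * ((risk * n / k) ** (-shape) - 1.0)

rng = np.random.default_rng(2)
scores = rng.exponential(scale=1.0, size=5000)
z = pot_threshold(scores)
print(z > np.quantile(scores, 0.98))  # True: threshold lies above the initial quantile
```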

24 pages, 3008 KiB  
Article
Hybrid Backbone-Based Deep Learning Model for Early Detection of Forest Fire Smoke
by Gökalp Çınarer
Appl. Sci. 2025, 15(13), 7178; https://doi.org/10.3390/app15137178 - 26 Jun 2025
Viewed by 250
Abstract
Accurate forest fire detection is critical for the timely intervention and mitigation of environmental disasters. Using smoke data to intervene in forest fires before major damage occurs is especially important. This study proposes a novel deep learning-based approach that significantly enhances the accuracy of fire detection by incorporating advanced feature extraction techniques. Through rigorous experiments and comprehensive evaluations, our method outperforms existing approaches, demonstrating its effectiveness in detecting fires at an early stage. The proposed approach leverages convolutional neural networks to automatically identify fire signatures from remote sensing images, offering a reliable and efficient solution for forest fire monitoring. A total of 30 different object detection models, including the proposed model, were run with the extended Wildfire Smoke dataset, and the results were compared. As a result of extensive experiments, it was observed that the proposed model gave the best result among all models, with a test mAP value of 96.9%. Our findings not only contribute to the advancement of fire detection technologies, but also underscore the importance of deep learning in addressing real-world environmental challenges. Full article
(This article belongs to the Special Issue Innovative Applications of Artificial Intelligence in Engineering)

51 pages, 5828 KiB  
Review
A Comprehensive Review of Advanced Sensor Technologies for Fire Detection with a Focus on Gasistor-Based Sensors
by Mohsin Ali, Ibtisam Ahmad, Ik Geun, Syed Ameer Hamza, Umar Ijaz, Yuseong Jang, Jahoon Koo, Young-Gab Kim and Hee-Dong Kim
Chemosensors 2025, 13(7), 230; https://doi.org/10.3390/chemosensors13070230 - 23 Jun 2025
Viewed by 1007
Abstract
Early fire detection plays a crucial role in minimizing harm to human life, buildings, and the environment. Traditional fire detection systems struggle with detection in dynamic or complex situations due to slow response and false alarms. Conventional systems are based on smoke, heat, and gas sensors, which often trigger alarms only when a fire is already in full swing. To overcome this, a promising approach is the development of memristor-based gas sensors, known as gasistors, which offer a lightweight design, fast response/recovery, and efficient miniaturization. Recent studies on gasistor-based sensors have demonstrated ultrafast response times as low as 1–2 s, with detection limits reaching sub-ppm levels for gases such as CO, NH3, and NO2. Enhanced designs incorporating memristive switching and 2D materials have achieved a sensitivity exceeding 90% and stable operation across a wide temperature range (room temperature to 250 °C). This review highlights key factors in early fire detection, focusing on advanced sensors and their integration with IoT for faster and more reliable alerts. Here, we introduce gasistor technology, which shows high sensitivity to fire-related gases and operates through conduction filament (CF) mechanisms, enabling low power consumption, compact size, and rapid recovery. When integrated with machine learning and artificial intelligence, this technology offers a promising direction for future advancements in next-generation early fire detection systems. Full article
(This article belongs to the Special Issue Recent Progress in Nano Material-Based Gas Sensors)

17 pages, 4666 KiB  
Article
Lightweight YOLOv5s Model for Early Detection of Agricultural Fires
by Saydirasulov Norkobil Saydirasulovich, Sabina Umirzakova, Abduazizov Nabijon Azamatovich, Sanjar Mukhamadiev, Zavqiddin Temirov, Akmalbek Abdusalomov and Young Im Cho
Fire 2025, 8(5), 187; https://doi.org/10.3390/fire8050187 - 8 May 2025
Viewed by 742
Abstract
Agricultural fires significantly threaten global food systems, ecosystems, and rural economies, necessitating timely detection to prevent widespread damage. This study presents a lightweight and enhanced version of the YOLOv5s model, optimized for early-stage agricultural fire detection. The core innovation involves deepening the C3 block and integrating DarknetBottleneck modules to extract finer visual features from subtle fire indicators such as light smoke and small flames. Experimental evaluations were conducted on a custom dataset of 3200 annotated agricultural fire images. The proposed model achieved a precision of 88.9%, a recall of 85.7%, and a mean Average Precision (mAP) of 87.3%, outperforming baseline YOLOv5s and several state-of-the-art (SOTA) detectors such as YOLOv7-tiny and YOLOv8n. The model maintains a compact size (7.5 M parameters) and real-time capability (74 FPS), making it suitable for resource-constrained deployment. Our findings demonstrate that focused architectural refinement can significantly improve early fire detection accuracy, enabling more effective response strategies and reducing agricultural losses. Full article

22 pages, 25979 KiB  
Article
Advancing Early Wildfire Detection: Integration of Vision Language Models with Unmanned Aerial Vehicle Remote Sensing for Enhanced Situational Awareness
by Leon Seidel, Simon Gehringer, Tobias Raczok, Sven-Nicolas Ivens, Bernd Eckardt and Martin Maerz
Drones 2025, 9(5), 347; https://doi.org/10.3390/drones9050347 - 3 May 2025
Viewed by 1427
Abstract
Early wildfire detection is critical for effective suppression efforts, necessitating rapid alerts and precise localization. While computer vision techniques offer reliable fire detection, they often lack contextual understanding. This paper addresses this limitation by utilizing Vision Language Models (VLMs) to generate structured scene descriptions from Unmanned Aerial Vehicle (UAV) imagery. UAV-based remote sensing provides diverse perspectives for potential wildfires, and state-of-the-art VLMs enable rapid and detailed scene characterization. We evaluated both cloud-based (OpenAI, Google DeepMind) and open-weight, locally deployed VLMs on a novel evaluation dataset specifically curated for understanding forest fire scenes. Our results demonstrate that relatively compact, fine-tuned VLMs can provide rich contextual information, including forest type, fire state, and fire type. Specifically, our best-performing model, ForestFireVLM-7B (fine-tuned from Qwen2.5-VL-7B), achieved a 76.6% average accuracy across all categories, surpassing the strongest closed-weight baseline (Gemini 2.0 Pro at 65.5%). Furthermore, zero-shot evaluation on the publicly available FIgLib dataset demonstrated state-of-the-art smoke detection accuracy using VLMs. Our findings highlight the potential of fine-tuned, open-weight VLMs for enhanced wildfire situational awareness via detailed scene interpretation. Full article
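Structured scene descriptions are easiest to consume downstream when the VLM is asked to fill a fixed schema; a sketch of validating such output, where the field names are hypothetical stand-ins rather than the actual ForestFireVLM schema:

```python
import json

# Hypothetical schema a VLM-based wildfire scene report might fill;
# these field names are illustrative, not the paper's.
SCHEMA = {"forest_type", "fire_state", "fire_type"}

def parse_scene_report(raw):
    """Parse and validate a VLM's JSON scene description against the schema."""
    report = json.loads(raw)
    missing = SCHEMA - report.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return {k: report[k] for k in SCHEMA}

raw = '{"forest_type": "conifer", "fire_state": "smoldering", "fire_type": "ground"}'
report = parse_scene_report(raw)
print(sorted(report))  # ['fire_state', 'fire_type', 'forest_type']
```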

19 pages, 6509 KiB  
Article
Optimized Faster R-CNN with Swintransformer for Robust Multi-Class Wildfire Detection
by Sugi Choi, Sunghwan Kim and Haiyoung Jung
Fire 2025, 8(5), 180; https://doi.org/10.3390/fire8050180 - 30 Apr 2025
Cited by 1 | Viewed by 600
Abstract
Wildfires are a critical global threat, emphasizing the need for efficient detection systems capable of identifying fires and distinguishing fire-related from non-fire events in their early stages. This study integrates the Swin Transformer into the Faster R-CNN backbone to overcome challenges in detecting small flames and smoke and distinguishing complex scenarios like fog/haze and chimney smoke. The proposed model was evaluated using a dataset comprising five classes: flames, smoke, clouds, fog/haze, and chimney smoke. Experimental results demonstrate that Swin Transformer-based models outperform ResNet-based Faster R-CNN models, achieving a maximum mAP50 of 0.841. The model exhibited superior performance in detecting small and dynamic objects while reducing misclassification rates between similar classes, such as smoke and chimney smoke. Precision–recall analysis further validated the model’s robustness across diverse scenarios. However, slightly lower recall for specific classes and a lower FPS compared to ResNet models suggest a need for further optimization for real-time applications. This study highlights the Swin Transformer’s potential to enhance wildfire detection systems by addressing fire and non-fire events effectively. Future research will focus on optimizing its real-time performance and improving its recall for challenging scenarios, thereby contributing to the development of robust and reliable wildfire detection systems. Full article

23 pages, 2831 KiB  
Article
RT-DETR-Smoke: A Real-Time Transformer for Forest Smoke Detection
by Zhong Wang, Lanfang Lei, Tong Li, Xian Zu and Peibei Shi
Fire 2025, 8(5), 170; https://doi.org/10.3390/fire8050170 - 27 Apr 2025
Cited by 2 | Viewed by 1236
Abstract
Smoke detection is crucial for early fire prevention and the protection of lives and property. Unlike generic object detection, smoke detection faces unique challenges due to smoke’s semitransparent, fluid nature, which often leads to false positives in complex backgrounds and missed detections—particularly around smoke edges and small targets. Moreover, high computational overhead further restricts real-world deployment. To tackle these issues, we propose RT-DETR-Smoke, a specialized real-time transformer-based smoke-detection framework. First, we designed a high-efficiency hybrid encoder that combines convolutional and Transformer features, thus reducing computational cost while preserving crucial smoke details. We then incorporated an uncertainty-minimization strategy to dynamically select the most confident detection queries, further improving detection accuracy in challenging scenarios. Next, to alleviate the common issue of blurred or incomplete smoke boundaries, we introduced a coordinate attention mechanism, which enhances spatial-feature fusion and refines smoke-edge localization. Finally, we propose the WShapeIoU loss function to accelerate model convergence and boost the precision of the bounding-box regression for multiscale smoke targets under diverse environmental conditions. As evaluated on our custom smoke dataset, RT-DETR-Smoke achieves a remarkable 87.75% mAP@0.5 and processes images at 445.50 FPS, significantly outperforming existing methods in both accuracy and speed. These results underscore the potential of RT-DETR-Smoke for practical deployment in early fire-warning and smoke-monitoring systems. Full article
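The coordinate attention mechanism pools features along each spatial axis separately so the resulting gates retain positional information, which helps localize diffuse smoke edges; a heavily simplified NumPy sketch that omits the learned transforms between pooling and gating:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coordinate_attention(feat):
    """Simplified coordinate attention: direction-aware pooling.
    feat: (C, H, W). The learned 1x1 transforms between pooling and the
    sigmoid gates are omitted here for brevity."""
    pool_h = feat.mean(axis=2, keepdims=True)  # (C, H, 1): pool along width
    pool_w = feat.mean(axis=1, keepdims=True)  # (C, 1, W): pool along height
    a_h = sigmoid(pool_h)                      # per-row gate
    a_w = sigmoid(pool_w)                      # per-column gate
    return feat * a_h * a_w                    # position-aware rescaling

rng = np.random.default_rng(3)
feat = rng.standard_normal((4, 6, 5))
out = coordinate_attention(feat)
print(out.shape)  # (4, 6, 5)
```

Unlike global average pooling, the two directional gates can emphasize the specific rows and columns where a thin smoke plume sits.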

15 pages, 10730 KiB  
Article
An Efficient Forest Smoke Detection Approach Using Convolutional Neural Networks and Attention Mechanisms
by Quy-Quyen Hoang, Quy-Lam Hoang and Hoon Oh
J. Imaging 2025, 11(2), 67; https://doi.org/10.3390/jimaging11020067 - 19 Feb 2025
Cited by 1 | Viewed by 820
Abstract
This study explores a method of detecting smoke plumes effectively as the early sign of a forest fire. Convolutional neural networks (CNNs) have been widely used for forest fire detection; however, they have not been customized or optimized for smoke characteristics. This paper proposes a CNN-based forest smoke detection model featuring novel backbone architecture that can increase detection accuracy and reduce computational load. Since the proposed backbone detects the plume of smoke through different views using kernels of varying sizes, it can better detect smoke plumes of different sizes. By decomposing the traditional square kernel convolution into a depth-wise convolution of the coordinate kernel, it can not only better extract the features of the smoke plume spreading along the vertical dimension but also reduce the computational load. An attention mechanism was applied to allow the model to focus on important information while suppressing less relevant information. The experimental results show that our model outperforms other popular ones by achieving detection accuracy of up to 52.9 average precision (AP) and significantly reduces the number of parameters and giga floating-point operations (GFLOPs) compared to the popular models. Full article
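Decomposing a square k×k depthwise kernel into a 1×k plus k×1 coordinate-kernel pair, as described above, reduces per-channel weights from k² to 2k; a quick parameter-count check (generic arithmetic, not the paper's actual layer sizes):

```python
def depthwise_params(channels, k):
    """Per-layer weight counts for two depthwise convolution variants."""
    square = channels * k * k      # one k x k depthwise kernel per channel
    factorized = channels * 2 * k  # a 1 x k plus a k x 1 kernel per channel
    return square, factorized

square, factorized = depthwise_params(channels=64, k=7)
print(square, factorized)             # 3136 896
print(round(square / factorized, 2))  # 3.5: the factorized form is 3.5x smaller
```

The saving grows with k, which is why the factorization pays off most for the tall, vertically spreading receptive fields used to follow rising smoke.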

16 pages, 7520 KiB  
Article
LSKA-YOLOv8n-WIoU: An Enhanced YOLOv8n Method for Early Fire Detection in Airplane Hangars
by Li Deng, Siqi Wu, Jin Zhou, Shuang Zou and Quanyi Liu
Fire 2025, 8(2), 67; https://doi.org/10.3390/fire8020067 - 7 Feb 2025
Cited by 1 | Viewed by 1325
Abstract
An aircraft hangar is a special large-space environment containing a lot of combustible materials and high-value equipment. It is essential to quickly and accurately detect early-stage fires when they occur. In this study, experiments were conducted in a real aircraft hangar to simulate the occurrence of early-stage fires, and the collected images were classified, labeled, and organized to form the dataset used in this paper. The fire data in the dataset were categorized into two target classes: fire and smoke. This study proposes an aircraft hangar fire detection method that integrates an attention mechanism, based on the You Only Look Once Version 8 Nano (YOLOv8n) framework and further improved. Technically, the optimization of YOLOv8n was carried out in two stages: first, at the network structure level, the neck network of YOLOv8n was reconstructed using a large separable kernel attention (LSKA) module; second, in terms of loss function design, the original CIoU loss function was replaced with a dynamic focus-based Wise-IoU to enhance the detection performance of the model. This new algorithm is named LSKA-YOLOv8n+WIoU. Experimental results show that the LSKA-YOLOv8n+WIoU algorithm has superior fire detection performance compared to related state-of-the-art algorithms. Compared to the YOLOv8n model, the precision increased by 10% to 86.7%, the recall increased by 8.8% to 67.2%, and the mean average precision (mAP) increased by 5.9% to 69.5%. The parameter size was reduced by 0.5 MB, to 5.7 MB. Through these improvements, the accuracy of flame and smoke detection was enhanced while reducing computational complexity, increasing computational efficiency, and effectively mitigating missed and false detections. This study contributes to enhancing the accuracy and speed of fire detection systems used in aircraft hangar environments, providing reliable support for early-stage aircraft hangar fire alarms. Full article
(This article belongs to the Special Issue Aircraft Fire Safety)

16 pages, 4586 KiB  
Article
Real-Time Detection of Smoke and Fire in the Wild Using Unmanned Aerial Vehicle Remote Sensing Imagery
by Xijian Fan, Fan Lei and Kun Yang
Forests 2025, 16(2), 201; https://doi.org/10.3390/f16020201 - 22 Jan 2025
Cited by 1 | Viewed by 1221
Abstract
Detecting wildfires and smoke is essential for safeguarding forest ecosystems and offers critical information for the early evaluation and prevention of such incidents. The advancement of unmanned aerial vehicle (UAV) remote sensing has further enhanced the detection of wildfires and smoke, which enables rapid and accurate identification. This paper presents an integrated one-stage object detection framework designed for the simultaneous identification of wildfires and smoke in UAV imagery. By leveraging mixed data augmentation techniques, the framework enriches the dataset with small targets to enhance its detection performance for small wildfires and smoke targets. A novel backbone enhancement strategy, integrating region convolution and feature refinement modules, is developed to facilitate the ability to localize smoke features with high transparency within complex backgrounds. By integrating the shape aware loss function, the proposed framework enables the effective capture of irregularly shaped smoke and fire targets with complex edges, facilitating the accurate identification and localization of wildfires and smoke. Experiments conducted on a UAV remote sensing dataset demonstrate that the proposed framework achieves a promising detection performance in terms of both accuracy and speed. The proposed framework attains a mean Average Precision (mAP) of 79.28%, an F1 score of 76.14%, and a processing speed of 8.98 frames per second (FPS). These results reflect increases of 4.27%, 1.96%, and 0.16 FPS compared to the YOLOv10 model. Ablation studies further validate that the incorporation of mixed data augmentation, feature refinement models, and shape aware loss results in substantial improvements over the YOLOv10 model. The findings highlight the framework’s capability to rapidly and effectively identify wildfires and smoke using UAV imagery, thereby providing a valuable foundation for proactive forest fire prevention measures. Full article

28 pages, 16688 KiB  
Article
A Comparative Analysis of YOLOv9, YOLOv10, YOLOv11 for Smoke and Fire Detection
by Eman H. Alkhammash
Fire 2025, 8(1), 26; https://doi.org/10.3390/fire8010026 - 13 Jan 2025
Cited by 6 | Viewed by 5709
Abstract
Forest fires cause extensive environmental damage, making early detection crucial for protecting both nature and communities. Advanced computer vision techniques can be used to detect smoke and fire. However, accurate detection of smoke and fire in forests is challenging due to different factors such as different smoke shapes, changing light, and similarity of smoke with other smoke-like elements such as clouds. This study explores recent YOLO (You Only Look Once) deep-learning object detection models YOLOv9, YOLOv10, and YOLOv11 for detecting smoke and fire in forest environments. The evaluation focuses on key performance metrics, including precision, recall, F1-score, and mean average precision (mAP), and utilizes two benchmark datasets featuring diverse instances of fire and smoke across different environments. The findings highlight the effectiveness of the small version models of YOLO (YOLOv9t, YOLOv10n, and YOLOv11n) in fire and smoke detection tasks. Among these, YOLOv11n demonstrated the highest performance, achieving a precision of 0.845, a recall of 0.801, a mAP@50 of 0.859, and a mAP@50-95 of 0.558. YOLOv11 versions (YOLOv11n and YOLOv11x) were evaluated and compared against several studies that employed the same datasets. The results show that YOLOv11x delivers promising performance compared to other YOLO variants and models. Full article
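The F1-score follows from the quoted precision and recall as their harmonic mean; checking the YOLOv11n figures reported above:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# YOLOv11n figures reported in the abstract above
f1 = f1_score(precision=0.845, recall=0.801)
print(round(f1, 3))  # 0.822
```

The harmonic mean penalizes imbalance, so a detector cannot buy a high F1 by trading recall away for precision or vice versa.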

25 pages, 6720 KiB  
Article
Forest Fire Discrimination Based on Angle Slope Index and Himawari-8
by Pingbo Liu and Gui Zhang
Remote Sens. 2025, 17(1), 142; https://doi.org/10.3390/rs17010142 - 3 Jan 2025
Viewed by 1075
Abstract
In the background of high frequency and intensity forest fires driven by future warming and a drying climate, early detection and effective control of fires are extremely important to reduce losses. Meteorological satellite imagery is commonly used for near-real-time forest fire monitoring, thanks to its high temporal resolution. To address the misjudgments and omissions caused by solely relying on changes in infrared band brightness values and a single image in forest fire early discrimination, this paper constructs the angle slope indexes ANIR, AMIR, AMNIR, ∆ANIR, and ∆AMIR based on the reflectance of the red band and near-infrared band, the brightness temperature of the mid-infrared and far-infrared band, the difference between the AMIR and ANIR, and the index difference between time-series images. These indexes integrate the strong inter-band correlations and the reflectance characteristics of visible and short-wave infrared bands to simultaneously monitor smoke and fuel biomass changes in forest fires. We also used the decomposed three-dimensional OTSU (maximum inter-class variance method) algorithm to calculate the segmentation threshold of the sub-regions constructed from the AMNIR data to address the different discrimination thresholds caused by different time and space backgrounds. In this paper, the Himawari-8 satellite imagery was used to detect forest fires based on the angle slope indices thresholds algorithm (ASITR), and the fusion of the decomposed three-dimensional OTSU and ASITR algorithm (FDOA). Results show that, compared with ASITR, the accuracy of FDOA decreased by 3.41% (0.88 vs. 0.85), the omission error decreased by 52.94% (0.17 vs. 0.08), and the overall evaluation increased by 3.53% (0.85 vs. 0.88). The ASITR has higher accuracy, and the fusion of decomposed three-dimensional OTSU and angle slope indexes can reduce forest fire omission error and improve the overall evaluation. Full article
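Classic one-dimensional OTSU picks the threshold that maximizes inter-class variance of the histogram; the paper's decomposed three-dimensional variant generalizes this to sub-regions, but the core criterion can be sketched in NumPy as:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """1-D Otsu: pick the bin center maximizing inter-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()       # bin probabilities
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                         # class-0 weight at each cut
    w1 = 1.0 - w0                             # class-1 weight
    mu = np.cumsum(p * centers)               # cumulative mean
    mu_t = mu[-1]                             # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between = np.nan_to_num(var_between)  # ends of the histogram are degenerate
    return centers[np.argmax(var_between)]

# Bimodal toy data: "background" around 0.2, "fire-like" responses around 0.8
rng = np.random.default_rng(4)
values = np.concatenate([rng.normal(0.2, 0.05, 1000),
                         rng.normal(0.8, 0.05, 200)])
t = otsu_threshold(values)
print(0.3 < t < 0.7)  # True: the threshold falls between the two modes
```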
