Early Fire and Smoke Detection Using Deep Learning: A Comprehensive Review of Models, Datasets, and Challenges
Abstract
1. Introduction
1.1. Critical Review of Existing Surveys
1.2. Main Contributions
- The paper provides a comprehensive compendium of deep learning architectures (CNNs, YOLO, Faster R-CNN, and spatiotemporal models based on LSTM) for fire and smoke detection, covering classification, object detection, segmentation, and hybrid systems. It further reports the effectiveness of these systems in terms of accuracy, speed, and suitability for real-time deployment, especially under resource-constrained conditions.
- It provides a rich comparative assessment of public datasets grouped by detection focus (fire, smoke, and combined) and discusses the challenges of data imbalance, variability, and the lack of diverse annotation formats. This contribution clearly lays out the data bottleneck that constrains model training and evaluation.
- It examines real-world deployment contexts, e.g., urban, forest, tunnel, marine, and drone-based scenarios, highlights the importance of edge AI and lightweight models, and shows how environmental, contextual, and infrastructure constraints shape detection performance and model design.
- It identifies key open challenges, e.g., high false alarm rates, the lack of interpretability of model decisions, and limited generalization to unseen or changing environments, and proposes research avenues as ways forward, including multimodal sensor fusion, federated learning, synthetic data generation, and explainable AI. This roadmap can guide the design of current and next-generation fire detection systems.
2. Review Methodology
2.1. Systematic Literature Review Methodology
2.2. Inclusion/Exclusion Criteria
2.3. Data Analysis
3. Traditional Fire and Smoke Detection Methods
4. Deep Learning Techniques for Fire and Smoke Detection
5. Datasets Analysis for Fire and Smoke Detection
5.1. Fire-Only Detection Datasets
5.2. Smoke-Only Detection Datasets
5.3. Fire and Smoke Detection Datasets
5.4. Flame and Smoke Detection Datasets
Dataset Class | Name | Year | Data Type | Samples | Environment |
---|---|---|---|---|---|
Fire Dataset | MIVIA Fire Detection [99] | 2015 | Video | 31 videos | Indoor/Outdoor |
 | FURG Fire [101] | 2015 | Video | Multiple annotated videos | Wildfire/Indoor/Controlled |
 | Corsican Fire [102] | 2016 | Video | Multimodal dataset | Forest/NIR Visible |
 | BoWFire [108] | 2017 | Images | 466 images | Urban/Industrial |
 | FireNet [100] | 2019 | Video | 62 videos, 160 images | Indoor/Outdoor |
 | FLAME [103] | 2022 | Images | 39,375 (train), 8617 (test), 2003 masks | Wildfire/Outdoor |
 | Forest Fire Images [104] | 2022 | Images | 5050 images | Forest |
 | Forest Fire Dataset [105,106] | 2023 | Images | 2974 (classification) + 1690 (detection) | Forest |
 | FlameVision [107] | 2023 | Video | 8600 images | Wildfire/Outdoor |
 | DBA-Fire [109] | 2023 | Images | 3905 images (YOLO format) | Indoor/Outdoor |
 | FLAME 2 [110] | 2023 | Video | 7 RGB-IR video pairs + metadata | Wildfire/Outdoor |
Smoke Dataset | Video Smoke Detection [111] | 2004 | Video | ∼80K images | Outdoor |
 | MIVIA Smoke [99] | 2015 | Video | 149 videos (∼35 h) | Outdoor |
 | DSDF [114] | 2020 | Images | 18,413 images | Outdoor/Lab Simulations |
 | FIgLib [112] | 2020 | Images | 24,800 high-res images | Outdoor/Wildfire |
 | SKLFS [113] | 2022 | Images | 36,104 images | Synthetic/Real |
 | Nemo dataset [115] | 2022 | Images | 2859 images | Wildfire/Outdoor |
Fire and Smoke Dataset | FireSense [119] | 2010 | Video | 49 videos | Urban/Heritage sites |
 | VisiFire [117] | 2015 | Video | 40 video clips | Outdoor/Forest/Urban |
 | FiSmo [120] | 2017 | Images | ∼9K images + videos | Emergency/Social Media |
 | Fire Flame [116] | 2019 | Images | 3000 images | Mixed |
 | Fire & Smoke [125] | 2020 | Images | 100 images | Indoor/Outdoor/Urban |
 | D-Fire [118] | 2021 | Images | 21,000+ images | Outdoor/Mixed |
 | Domestic-Fire-and-Smoke-Dataset [124] | 2021 | Images | 5000+ images | Indoor/Outdoor |
 | DFS [121] | 2022 | Images | 9462 images | Mixed |
 | ForestFireSmoke [122] | 2023 | Images | 14,300 images | Forest |
 | FASDD [123] | 2023 | Images | 10,000 images | Various |
 | ONFIRE Dataset [126] | 2023 | Video | 322 videos | Urban/Wildfire |
 | Wildfire Dataset [129,130] | 2023 | Images | 2701 images | Forest |
 | WIT-UAS [131] | 2023 | Images | 1691 IR images (2062 labeled) | Wildfire |
 | PYRONEAR2024 [127] | 2024 | Images | 150,000 images, 150,000 annotations | Wildfire/Outdoor |
 | MS-FSDB [128] | 2024 | Images | 12,518 high-res images | Various |
Flame and Smoke Dataset | KMU [132] | 2011 | Video | 308.1 MB video | Indoor/Outdoor/Wildfire |
 | RISE [133] | 2020 | Video | 12,567 video clips | Industrial/Environmental |
6. Detection Scenarios and Taxonomy
6.1. Scenario Classification
6.2. Taxonomy by Detection Method
6.3. Multi-Label and Multi-Class Scenarios
7. Real-World Deployment and Edge AI
8. Open Challenges
- Dataset Limitations: A critical challenge in developing robust fire and smoke detection models lies in the scarcity of large-scale, high-quality, and diverse datasets with comprehensive annotations. Existing repositories are often limited in size and fail to encompass essential conditions such as nighttime visibility, dense urban environments, or extreme weather, limitations that restrict model generalization and hinder consistent benchmarking across architectures [165,167]. Advances in this area increasingly rely on collaborative efforts to construct standardized, multimodal datasets—including RGB, infrared, and thermal imagery—that capture diverse environmental conditions and support comprehensive evaluation.
- High False Alarm Rates: Deep learning-based detection systems remain prone to misclassifying confounding factors—such as sunlight glare, vehicle headlights, fog, or steam—as fire, resulting in elevated false alarm rates that compromise trust and emergency response effectiveness [30,167,169,170]. Addressing this issue has driven research toward the integration of multimodal sensor data and the development of sophisticated preprocessing and anomaly detection techniques, enabling models to better distinguish between true fire events and visual distractors.
- Computational and Resource Constraints: High-performing architectures, including Faster R-CNN and 3D CNN+LSTM models, require substantial computational resources, limiting their deployment on UAVs, IoT devices, and other edge platforms [165,168,172]. The design of lightweight neural networks, combined with hardware acceleration through TPUs, FPGAs, or other specialized processors, has become a central strategy for maintaining detection accuracy while achieving real-time performance in resource-constrained environments.
- Interpretability and Trustworthiness: The black-box nature of deep learning models presents a significant obstacle in safety-critical applications such as fire detection, where errors can have severe consequences [30,172]. Efforts to incorporate interpretability directly into model architectures and to develop standardized evaluation protocols for explainable AI have emerged as key avenues for enhancing trustworthiness and facilitating regulatory compliance.
- Integration, Standards, and Ethics: Many current systems lack evaluation within operational emergency-response pipelines and do not fully comply with international safety standards such as EN 54. Ethical concerns—including privacy, data security, and potential misuse of surveillance technologies—further complicate deployment [13,168,172]. Progress in this domain depends on coordinated efforts among AI researchers, emergency-response authorities, and policymakers to establish regulatory frameworks, operational standards, and ethical guidelines that support safe and responsible implementation.
9. Future Directions
- Multimodal Sensor Fusion: Integrating heterogeneous data sources such as RGB, infrared, thermal, LiDAR, and environmental sensors can reduce false alarms and increase resilience across diverse conditions. Future work should prioritize lightweight, synchronized fusion frameworks optimized for edge deployment [173]. Popular frameworks like TensorFlow (v2.12.0), PyTorch (v2.0.1), and PyTorch Lightning (v2.0.2) can be combined with specialized toolkits (e.g., OpenVINO or NVIDIA TensorRT) to deploy multimodal fusion models on resource-constrained hardware. Research should also investigate transformer-based fusion architectures that natively combine different modalities (a minimal two-stream fusion sketch is given after this list).
- Adaptive and Lifelong Learning: Because fire dynamics and environmental conditions evolve, models must adapt continuously to new scenarios. Lifelong and continual learning methods can prevent catastrophic forgetting while improving generalization. Tools such as Avalanche (a PyTorch-based continual learning framework) or Elastic Weight Consolidation (EWC) can be leveraged to implement adaptive systems [174]. Integrating online learning into UAV- and IoT-based monitoring pipelines would allow models to evolve with changing environments (an EWC penalty sketch follows this list).
- Federated and Distributed Learning: Federated learning enables collaborative training across distributed devices without centralizing sensitive data, thereby improving dataset diversity, ensuring privacy, and reducing communication overhead [175]. Frameworks such as TensorFlow Federated, PySyft, or NVIDIA Clara FL can be applied to large-scale surveillance networks for real-time wildfire monitoring. Future systems should investigate hybrid approaches that combine federated learning with edge-cloud orchestration to achieve scalability and robustness (a federated averaging sketch follows this list).
- Synthetic Data and Data Augmentation: Generating synthetic data through physics-based simulators (e.g., FARSITE, Fire Dynamics Simulator) or deep generative models such as generative adversarial networks (GANs) and variational autoencoders (VAEs) can mitigate dataset scarcity [176]. Combining this with advanced augmentation strategies in libraries like Albumentations or AugLy allows the modeling of rare and hazardous fire scenarios. Such synthetic pipelines enhance robustness under conditions that are unsafe or difficult to capture in practice, and can serve as benchmarks for stress-testing models (an augmentation pipeline sketch follows this list).
- Lightweight Architectures and Edge AI: Future detection systems should prioritize efficiency-focused architectures such as MobileNetV3, YOLO-tiny variants, ShuffleNet, and lightweight Vision Transformers (e.g., MobileViT) [139]. When combined with hardware accelerators such as NVIDIA Jetson, Google Coral TPU, or Intel Movidius, these designs can achieve real-time inference on UAVs, IoT devices, and embedded systems. Tools like TensorRT, ONNX Runtime, and OpenVINO can be employed for model compression, quantization, and hardware-aware optimization (a quantization and export sketch follows this list).
- Explainable and Transparent AI: Developing inherently interpretable models and standardized explanation methods is essential for trust and regulatory compliance. Beyond visualization techniques such as Grad-CAM, SHAP, and LIME, research should focus on architectures with built-in interpretability (e.g., attention visualization in transformers, prototype-based models) [177]. Toolkits such as Captum (PyTorch) or InterpretML can provide explainability features during both training and deployment phases. Such methods can support certification and accountability in safety-critical applications (a Grad-CAM sketch follows this list).
- Context-Aware and Attention-Based Models: Embedding semantic reasoning and advanced attention mechanisms (e.g., CBAM, SE-blocks, or transformer-based attention) enables better discrimination between real fire events and distractors such as fog, reflections, or smoke-like patterns [178]. Context-aware architectures should integrate spatiotemporal modeling and scene understanding, potentially using frameworks like Detectron2 or MMDetection for modular experimentation. These methods promise higher robustness in cluttered and dynamic real-world environments (a CBAM attention sketch follows this list).
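To make the multimodal fusion direction concrete, the following minimal PyTorch sketch shows a late-fusion baseline that encodes co-registered RGB and thermal frames separately and concatenates the resulting features; the module name, tensor shapes, and binary fire/no-fire head are illustrative assumptions rather than an architecture taken from any surveyed work.

```python
import torch
import torch.nn as nn

class TwoStreamFusionNet(nn.Module):
    """Late-fusion baseline: one encoder per modality, concatenated features, shared head."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        def encoder(in_ch: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.rgb_enc = encoder(3)   # visible-spectrum stream
        self.ir_enc = encoder(1)    # thermal/IR stream
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.rgb_enc(rgb), self.ir_enc(ir)], dim=1)
        return self.head(fused)

if __name__ == "__main__":
    model = TwoStreamFusionNet()
    rgb = torch.randn(4, 3, 224, 224)   # batch of RGB frames (placeholder data)
    ir = torch.randn(4, 1, 224, 224)    # co-registered thermal frames (placeholder data)
    print(model(rgb, ir).shape)         # torch.Size([4, 2])
```

In practice, each stream would be replaced by a pretrained lightweight backbone, and the concatenation could be swapped for cross-modal attention.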
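As a sketch of how continual learning could be wired into an existing PyTorch training loop, the snippet below implements the Elastic Weight Consolidation penalty from its standard formulation (a diagonal Fisher information estimate plus a quadratic anchor on previously learned weights); the function names and the λ value are illustrative assumptions, not code from the cited works.

```python
import torch
import torch.nn.functional as F

def fisher_diagonal(model, loader, device="cpu"):
    """Diagonal Fisher information, estimated from squared gradients on the old task."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    model.eval()
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x.to(device)), y.to(device)).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, old_params, fisher, lam=100.0):
    """Quadratic penalty anchoring parameters that were important for the previous task."""
    loss = 0.0
    for n, p in model.named_parameters():
        if n in fisher:
            loss = loss + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return lam / 2.0 * loss

# After finishing task A: fisher = fisher_diagonal(model, loader_A)
#                         old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# While training task B:  total_loss = task_loss + ewc_penalty(model, old_params, fisher)
```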
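The federated direction can be illustrated framework-independently with a plain federated averaging (FedAvg) step; the helper below is a sketch that assumes floating-point parameters and uses client dataset sizes as aggregation weights, rather than a production pipeline built on TensorFlow Federated or PySyft.

```python
import torch

def fedavg(client_states, client_sizes):
    """Weighted average of client model state_dicts (FedAvg), weights proportional to local data size.

    Assumes floating-point tensors; integer buffers (e.g., BatchNorm counters) would need
    special handling in a real system.
    """
    total = float(sum(client_sizes))
    return {
        key: sum(state[key].float() * (n / total) for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }

# One communication round (sketch):
# 1. the server broadcasts global_model.state_dict() to each edge camera/UAV node;
# 2. each node trains locally on its private fire/smoke footage for a few epochs;
# 3. nodes return their updated state_dicts and local sample counts;
# 4. the server aggregates: global_model.load_state_dict(fedavg(states, sizes))
```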
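A possible augmentation pipeline using the Albumentations library mentioned above is sketched below; the specific transforms, probabilities, and YOLO-format boxes are illustrative choices intended to mimic fog, blur, and low-light confounders, not settings reported by any surveyed study.

```python
import numpy as np
import albumentations as A

# Photometric jitter plus fog/blur/noise to mimic common fire-detection confounders,
# while keeping YOLO-format bounding boxes synchronized with the image.
train_aug = A.Compose(
    [
        A.HorizontalFlip(p=0.5),
        A.RandomBrightnessContrast(p=0.5),
        A.HueSaturationValue(p=0.3),
        A.MotionBlur(p=0.2),
        A.RandomFog(p=0.3),    # haze/smoke-like distractor
        A.GaussNoise(p=0.2),   # low-light sensor noise
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = np.random.randint(0, 255, (640, 640, 3), dtype=np.uint8)  # stand-in frame
bboxes = [(0.5, 0.5, 0.2, 0.3)]                                   # one fire box (cx, cy, w, h), normalized
out = train_aug(image=image, bboxes=bboxes, class_labels=["fire"])
print(out["image"].shape, out["bboxes"])
```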
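The following sketch shows one way to prepare a lightweight classifier for edge deployment with the tools named above: a MobileNetV3-Small backbone, post-training dynamic quantization of its linear layers, and ONNX export for downstream optimization with ONNX Runtime, TensorRT, or OpenVINO. The two-class head, input size, and file name are assumptions for illustration.

```python
import torch
import torchvision

# Lightweight backbone with a 2-class head (fire / no-fire); untrained here for brevity.
model = torchvision.models.mobilenet_v3_small(weights=None, num_classes=2).eval()

# Post-training dynamic quantization of the linear layers (CPU-oriented size/speed gain).
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

# Export the float model to ONNX so it can be optimized further on the target edge device.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "fire_classifier.onnx", opset_version=13,
    input_names=["frame"], output_names=["logits"],
)
print("Exported fire_classifier.onnx; quantized variant ready for CPU inference.")
```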
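As an example of the visualization techniques discussed above, the snippet below applies Captum's LayerGradCam to the last residual block of a ResNet-18 classifier; the untrained binary model and random input frame are placeholders for a fine-tuned fire/smoke detector and a real camera frame.

```python
import torch
import torchvision
from captum.attr import LayerGradCam, LayerAttribution

# Grad-CAM over the last convolutional block of a ResNet-18 classifier.
model = torchvision.models.resnet18(weights=None, num_classes=2).eval()
frame = torch.randn(1, 3, 224, 224)                 # stand-in camera frame

pred_class = model(frame).argmax(dim=1).item()      # class to explain
gradcam = LayerGradCam(model, model.layer4)
attr = gradcam.attribute(frame, target=pred_class)  # coarse relevance map
heatmap = LayerAttribution.interpolate(attr, (224, 224))  # upsample to frame size
print(heatmap.shape)  # torch.Size([1, 1, 224, 224]); overlay on the frame for inspection
```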
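Finally, the attention mechanisms cited above can be added to a detector backbone as a drop-in module; the sketch below implements CBAM (channel attention followed by spatial attention) following its standard formulation, with the reduction ratio and kernel size set to common default values as assumptions.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over global average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: convolution over channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 64, 80, 80)   # feature map from a detector backbone (placeholder)
print(CBAM(64)(feat).shape)         # torch.Size([2, 64, 80, 80])
```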
10. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
AI | Artificial Intelligence |
BiFPN | Bi-directional Feature Pyramid Network |
CBAM | Convolutional Block Attention Module |
CNN | Convolutional Neural Network |
DL | Deep Learning |
DNN | Deep Neural Network |
Faster R-CNN | Faster Region-based Convolutional Neural Network |
GPU | Graphics Processing Unit |
Grad-CAM | Gradient-weighted Class Activation Mapping |
IR | Infrared |
IoT | Internet of Things |
KD | Knowledge Distillation |
LSTM | Long Short-Term Memory |
mAP | mean Average Precision |
ML | Machine Learning |
NIR | Near-Infrared |
R-CNN | Region-based Convolutional Neural Network |
RGB | Red, Green, Blue |
RNN | Recurrent Neural Network |
TPU | Tensor Processing Unit |
UAV | Unmanned Aerial Vehicle |
ViT | Vision Transformer |
WSOL | Weakly Supervised Object Localization |
YOLO | You Only Look Once |
References
- Bowman, D.M.J.S.; Balch, J.K.; Artaxo, P.; Bond, W.J.; Carlson, J.M.; Cochrane, M.A.; D’Antonio, C.M.; DeFries, R.S.; Doyle, J.C.; Harrison, S.P.; et al. Fire in the earth system. Science 2009, 324, 481–484. [Google Scholar] [CrossRef]
- World Health Organization. Burns. Available online: https://www.who.int/news-room/fact-sheets/detail/burns (accessed on 18 September 2025).
- Geetha, S.; Abhishek, C.S.; Akshayanat, C.S. Machine vision based fire detection techniques: A survey. Fire Technol. 2021, 57, 591–623. [Google Scholar] [CrossRef]
- Brenner, M.; Siqueira, H.V.; Gonçalves, L.M.G.; Pereira, A.S. RGB-D and thermal sensor fusion: A systematic literature review. IEEE Access 2023, 11, 82410–82442. [Google Scholar] [CrossRef]
- Roy, D.P.; Boschetti, L.; Justice, C.O.; Ju, J. The collection 5 MODIS burned area product—Global evaluation by comparison with the MODIS active fire product. Remote Sens. Environ. 2008, 112, 3690–3707. [Google Scholar] [CrossRef]
- Wang, H.; Zhou, L.; Wang, L. Miss detection vs. false alarm: Adversarial learning for small object segmentation in infrared images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
- Wang, L.; Zheng, Y.; Yu, Y.; Liu, L.; Lu, H. A deep learning-based experiment on forest wildfire detection in machine vision course. IEEE Access 2023, 11, 32671–32681. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, Q.; Zhao, R. Progressive dual-attention residual network for salient object detection. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 5902–5915. [Google Scholar] [CrossRef]
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), Savannah, GA, USA, 2–4 November 2016. [Google Scholar]
- Xu, H.; Yu, Y.; Su, Y.; Zhang, Y.; Tang, Y.; Zhang, C. Light-YOLOv5 in complex fire scenarios. Remote Sens. 2022, 14, 1330. [Google Scholar]
- Muhammad, K.; Ahmad, J.; Lv, Z.; Bellavista, P.; Yang, P.; Baik, S.W. Efficient deep CNN-based fire detection and localization in video surveillance applications. IEEE Trans. Syst. Man Cybern. Syst. 2018, 49, 1419–1434. [Google Scholar] [CrossRef]
- Sucuoğlu, H.S.; Böğrekci, İ.; Demircioğlu, P. Real time fire detection using faster R-CNN model. Int. J. 3D Print. Technol. Digit. Ind. 2019, 3, 220–226. [Google Scholar]
- Zhao, M.; Barati, M. A real-time fault localization in power distribution grid for wildfire detection through deep convolutional neural networks. IEEE Trans. Ind. Appl. 2021, 57, 4316–4326. [Google Scholar] [CrossRef]
- Dewangan, A.; Krishna, K.M.; Krishna, P.V.; Tripathi, R.; Krishna, C.M. FIgLib & SmokeyNet: Dataset and deep learning model for real-time wildland fire smoke detection. Remote Sens. 2022, 14, 1007. [Google Scholar]
- Torabian, M.; Pourghassem, H.; Mahdavi-Nasab, H. Fire detection based on fractal analysis and spatio-temporal features. Fire Technol. 2021, 57, 2583–2614. [Google Scholar] [CrossRef]
- Nosseir, A.E.; Kashani, A.; Ghelfi, P.; Bogoni, A. OFS-embedded smart composites: OFDR distributed sensing for structural condition and operation monitoring in spacecraft propellant tank. In Proceedings of the 29th International Conference on Optical Fiber Sensors, Alexandria, VA, USA, 13–17 May 2025; Volume 13639, pp. 1573–1576. [Google Scholar]
- Zhou, Z.; Liu, X.; Li, Y.; Zheng, Y.; Xu, H. High-performance fire detection framework based on feature enhancement and multimodal fusion. J. Saf. Sci. Resil. 2025, 7, 100212. [Google Scholar] [CrossRef]
- MIVIA. Fire Detection Dataset. Available online: https://mivia.unisa.it/datasets/video-analysis-datasets/fire-detection-dataset/ (accessed on 18 September 2025).
- Ansmann, A.; Baars, H.; Engelmann, R.; Althausen, D.; Haarig, M.; Seifert, P.; Ohneiser, K.; Düsing, S.; Wandinger, U. CALIPSO aerosol-typing scheme misclassified stratospheric fire smoke: Case study from the 2019 Siberian wildfire season. Front. Environ. Sci. 2021, 9, 769852. [Google Scholar] [CrossRef]
- Li, Y.; Liu, Q.; Gao, X.; Zhang, Y.; Zhang, X. Real-time early indoor fire detection and localization on embedded platforms with fully convolutional one-stage object detection. Sustainability 2023, 15, 1794. [Google Scholar] [CrossRef]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
- Nosseir, A.E.; Ghelfi, P.; Bogoni, A.; Hassan, M. Design and prototyping of a smart propellant tank for spacecraft. In Proceedings of the 75th International Astronautical Congress (IAC), Baku, Azerbaijan, 14–18 October 2024. [Google Scholar]
- Gaur, A.; Singh, A.; Kumar, A.; Kumar, A.; Kapoor, K. Video flame and smoke based fire detection algorithms: A literature review. Fire Technol. 2020, 56, 1943–1980. [Google Scholar] [CrossRef]
- Jin, C.; Wang, T.; Alhusaini, N.; Zhao, S.; Liu, H.; Xu, K.; Zhang, J. Video fire detection methods based on deep learning: Datasets, methods, and future directions. Fire 2023, 6, 315. [Google Scholar] [CrossRef]
- Gragnaniello, D.; Greco, A.; Sansone, C.; Vento, B. Fire and smoke detection from videos: A literature review under a novel taxonomy. Expert Syst. Appl. 2024, 255, 124783. [Google Scholar] [CrossRef]
- Alkhatib, R.; Sahwan, W.; Alkhatieb, A.; Schütt, B. A brief review of machine learning algorithms in forest fires science. Appl. Sci. 2023, 13, 8275. [Google Scholar] [CrossRef]
- Ghali, R.; Akhloufi, M.A. Deep learning approaches for wildland fires remote sensing: Classification, detection, and segmentation. Remote Sens. 2023, 15, 1821. [Google Scholar] [CrossRef]
- Vasconcelos, R.N.; Franca Rocha, W.J.S.; Costa, D.P.; Duverger, S.G.; Santana, M.M.M.; Cambui, E.C.B.; Ferreira-Ferreira, J.; Oliveira, M.; Barbosa, L.S.; Cordeiro, C.L. Fire detection with deep learning: A comprehensive review. Land 2024, 13, 1696. [Google Scholar] [CrossRef]
- Yang, S.; Huang, Q.; Yu, M. Advancements in remote sensing for active fire detection: A review of datasets and methods. Sci. Total Environ. 2024, 943, 173273. [Google Scholar] [CrossRef] [PubMed]
- Sulthana, S.F.; Wise, C.T.A.; Ravikumar, C.V.; Anbazhagan, R.; Idayachandran, G.; Pau, G. Review study on recent developments in fire sensing methods. IEEE Access 2023, 11, 90269–90282. [Google Scholar] [CrossRef]
- Özel, B.; Alam, M.S.; Khan, M.U. Review of modern forest fire detection techniques: Innovations in image processing and deep learning. Information 2024, 15, 538. [Google Scholar] [CrossRef]
- Khan, R.A.; Bajwa, U.I.; Raza, R.H.; Anwar, M.W. Beyond boundaries: Advancements in fire and smoke detection for indoor and outdoor surveillance feeds. Eng. Appl. Artif. Intell. 2025, 142, 109855. [Google Scholar] [CrossRef]
- Diaconu, B.M. Recent advances and emerging directions in fire detection systems based on machine learning algorithms. Fire 2023, 6, 441. [Google Scholar] [CrossRef]
- Lee, J.-G.; Jun, S.; Cho, Y.-W.; Lee, H.; Kim, G.B.; Seo, J.B.; Kim, N. Deep learning in medical imaging: General overview. Korean J. Radiol. 2017, 18, 570–584. [Google Scholar] [CrossRef]
- Goldammer, J.G.; Kashparov, V.; Zibtsev, S.; Robinson, S. Best Practices and Recommendations for Wildfire Suppression in Contaminated Areas, with Focus on Radioactive Terrain. Global Fire Monitoring Center (GFMC): Freiburg, Basel, Kyiv, INIS-UA–21O0045; 2014. [Google Scholar]
- Fonollosa, J.; Solórzano, A.; Marco, S. Chemical sensor systems and associated algorithms for fire detection: A review. Sensors 2018, 18, 553. [Google Scholar] [CrossRef]
- Wikipedia. Aspirating Smoke Detector. Available online: https://en.wikipedia.org/wiki/Aspirating_smoke_detector (accessed on 18 September 2025).
- Wikipedia. Flame Detector. Available online: https://en.wikipedia.org/wiki/Flame_detector (accessed on 18 September 2025).
- Wiśnios, M.; Paciorek, M.; Szulim, R.; Poźniak, K. Identifying characteristic fire properties with stationary and non-stationary fire alarm systems. Sensors 2024, 24, 2772. [Google Scholar] [CrossRef]
- EN 54; Fire Detection and Fire Alarm Systems. European Committee for Standardization: Brussels, Belgium, 2011.
- Kiwa. EN 14604 certification smoke alarm devices. Kiwa.com April 11, 2023. Available online: https://www.kiwa.com/en/services/certification/en-14604-certification-smoke-alarm-devices/ (accessed on 18 September 2025).
- Giglio, L.; Randerson, J.T.; van der Werf, G.R. Analysis of daily, monthly, and annual burned area using the fourth-generation global fire emissions database (GFED4). J. Geophys. Res. Biogeosci. 2013, 118, 317–328. [Google Scholar] [CrossRef]
- Honary, R.; Shelton, J.; Kavehpour, P. A review of technologies for the early detection of wildfires. ASME Open J. Eng. 2025, 4, 040803. [Google Scholar] [CrossRef]
- Lee, Y.; Shim, J. False positive decremented research for fire and smoke detection in surveillance camera using spatial and temporal features based on deep learning. Electronics 2019, 8, 1167. [Google Scholar] [CrossRef]
- Zhang, B.; Li, Y. Research of deep learning-based fire and smoke detection using adaptive attention. In Proceedings of the SPIE, 6th International Conference on Optical and Photonic Engineering (icOPEN), Chongqing, China, 23 January 2024; Volume 13575, p. 135752B. [Google Scholar] [CrossRef]
- Frizzi, S.; Kaabi, R.; Bouchouicha, M.; Ginoux, J.M.; Rovetta, A. Convolutional neural network for video fire and smoke detection. In Proceedings of the 14th International Conference on Control, Automation, Robotics and Vision (ICARCV), Phuket, Thailand, 13–15 November 2016; pp. 1057–1062. [Google Scholar]
- Sathishkumar, V.E.; Cho, J.; Subramanian, M.; Naren, O.S. Forest fire and smoke detection using deep learning-based learning without forgetting. Fire Ecol. 2023, 19, 9. [Google Scholar] [CrossRef]
- Khan, A.; Hassan, B.; Khan, S.; Ahmed, R.; Abuassba, A. DeepFire: A novel dataset and deep transfer learning benchmark for forest fire detection. Mobile Inf. Syst. 2022, 45358359. [Google Scholar] [CrossRef]
- Sousa, M.J.; Moutinho, A.; Almeida, M. Wildfire detection using transfer learning on augmented datasets. Expert Syst. Appl. 2020, 142, 112975. [Google Scholar] [CrossRef]
- Ghani, R.F. Robust real-time fire detector using CNN and LSTM. In Proceedings of the IEEE Student Conference on Research and Development (SCOReD), Selangor, Malaysia, 2–4 December 2019. [Google Scholar]
- Chaoxia, C.; Shang, W.; Zhang, F. Information-guided flame detection based on Faster R-CNN. IEEE Access 2020, 8, 58923–58932. [Google Scholar] [CrossRef]
- Li, L.; Liu, F.; Ding, Y. Real-time smoke detection with Faster R-CNN. In Proceedings of the 2021 2nd International Conference on Artificial Intelligence and Information Systems, Chongqing, China, 28–30 May 2021; pp. 1–5. [Google Scholar]
- Kim, B.; Lee, J. A Bayesian network-based information fusion combined with DNNs for robust video fire detection. Appl. Sci. 2021, 11, 7624. [Google Scholar] [CrossRef]
- Khan, S.; Muhammad, K.; Hussain, T.; Del Ser, J.; Cuzzolin, F.; Bhattacharyya, S.; Akhtar, Z.; de Albuquerque, V.H.C. Deepsmoke: Deep learning model for smoke detection and segmentation in outdoor environments. Expert Syst. Appl. 2021, 182, 115125. [Google Scholar] [CrossRef]
- Jia, Y.; Chen, W.; Yang, M.; Wang, L.; Liu, D.; Zhang, Q. Video smoke detection with domain knowledge and transfer learning from deep convolutional neural networks. Optik 2021, 240, 166947. [Google Scholar] [CrossRef]
- Shahriar Sozol, M.S.; Mondal, M.R.H.; Thamrin, A.H. Indoor fire and smoke detection based on optimized YOLOv5. PLoS ONE 2025, 20, e0322052. [Google Scholar]
- Hu, M.; Ren, Y.; Chai, H. Forest fire detection based on improved YOLOv5. In Proceedings of the 2021 4th International Conference on Artificial Intelligence and Pattern Recognition, Xiamen, China, 18–20 June 2021; pp. 172–178. [Google Scholar]
- Mseddi, W.S.; Ghali, R.; Jmal, M.; Attia, R. Fire detection and segmentation using YOLOv5 and U-NET. In Proceedings of the European Signal Processing Conference (EUSIPCO), Dublin, Ireland, 23–27 August 2021; pp. 741–745. [Google Scholar] [CrossRef]
- Casas, E.; Ramos, L.; Bendek, E.; Rivas-Echeverría, F. Assessing the effectiveness of YOLO architectures for smoke and wildfire detection. IEEE Access 2023, 11, 96554–96583. [Google Scholar] [CrossRef]
- Bahhar, C.; Ksibi, A.; Ayadi, M.; Jamjoom, M.M.; Ullah, Z.; Ben Othman, S.; Sakli, H. Wildfire and smoke detection using staged YOLO model and ensemble CNN. Electronics 2023, 12, 228. [Google Scholar] [CrossRef]
- Kim, S.-Y.; Muminov, A. Forest fire smoke detection based on deep learning approaches and unmanned aerial vehicle images. Sensors 2023, 23, 5702. [Google Scholar] [CrossRef] [PubMed]
- Fernandes, A.M.; Utkin, A.B.; Chaves, P. Automatic early detection of wildfire smoke with visible light cameras using deep learning and visual explanation. IEEE Access 2022, 10, 12814–12828. [Google Scholar] [CrossRef]
- Li, Y.; Zhang, W.; Liu, Y.; Jing, R.; Liu, C. An efficient fire and smoke detection algorithm based on an end-to-end structured network. Eng. Appl. Artif. Intell. 2022, 116, 105492. [Google Scholar] [CrossRef]
- Li, R.; Hu, Y.; Li, L.; Guan, R.; Yang, R.; Zhan, J.; Cai, W.; Wang, Y.; Xu, H.; Li, L. SMWE-GFPNNet: A high-precision and robust method for forest fire smoke detection. Knowl.-Based Syst. 2024, 289, 111528. [Google Scholar] [CrossRef]
- Cheng, G.; Xian, B.; Liu, Y.; Chen, X.; Hu, L.; Song, Z. A hierarchical Transformer network for smoke video recognition. Digit. Signal Process. 2025, 158, 104959. [Google Scholar] [CrossRef]
- Choi, S.; Song, Y.; Jung, H. Study on improving detection performance of wildfire and non-fire events early using Swin Transformer. IEEE Access 2025, in press. [Google Scholar] [CrossRef]
- Chen, X.; Hopkins, B.; Wang, H.; O’Neill, L.; Afghah, F.; Razi, A.; Fulé, P.; Coen, J.; Rowell, E.; Watts, A. Wildland fire detection and monitoring using a drone-collected RGB/IR image dataset. IEEE Access 2022, 10, 121301–121317. [Google Scholar] [CrossRef]
- Ryu, J.; Kwak, D. A study on a complex flame and smoke detection method using computer vision detection and convolutional neural network. Fire 2022, 5, 108. [Google Scholar] [CrossRef]
- Hu, M.; Chai, H.; Ren, Y. Forest fire video detection based on multi-scale feature fusion with data enhancement. In Proceedings of the 2021 4th International Conference on Artificial Intelligence and Pattern Recognition, Xiamen, China, 18–20 June 2021; pp. 218–224. [Google Scholar]
- Wang, C.; Li, Q.; Liu, S.; Cheng, P.; Huang, Y. Transformer-based fusion of infrared and visible imagery for smoke recognition in commercial areas. Comput. Mater. Continua 2025, 84, 3. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, Y.; Khan, Z.A.; Huang, A.; Sang, J. Multi-level feature fusion networks for smoke recognition in remote sensing imagery. Neural Netw. 2025, 184, 107112. [Google Scholar] [CrossRef] [PubMed]
- Ramos, L.T.; Casas, E.; Romero, C.; Rivas-Echeverría, F.; Bendek, E. A study of YOLO architectures for wildfire and smoke detection in ground and aerial imagery. Results Eng. 2025, 26, 104869. [Google Scholar] [CrossRef]
- El-Madafri, I.; Peña, M.; Olmedo-Torre, N. Real-time forest fire detection with lightweight CNN using hierarchical multi-task knowledge distillation. Fire 2024, 7, 392. [Google Scholar] [CrossRef]
- Alkhammash, E.H. A comparative analysis of YOLOv9, YOLOv10, YOLOv11 for smoke and fire detection. Fire 2025, 8, 1. [Google Scholar] [CrossRef]
- Xue, Z.; Kong, L.; Wu, H.; Chen, J. Fire and smoke detection based on improved YOLOV11. IEEE Access 2025, 13, 73022–73040. [Google Scholar] [CrossRef]
- He, L.; Zhou, Y.; Liu, L.; Zhang, Y.; Ma, J. Research and application of deep learning object detection methods for forest fire smoke recognition. Sci. Rep. 2025, 15, 16328. [Google Scholar] [CrossRef]
- Zhu, W.; Niu, S.; Yue, J.; Zhou, Y. Multiscale wildfire and smoke detection in complex drone forest environments based on YOLOv8. Sci. Rep. 2025, 15, 2399. [Google Scholar] [CrossRef]
- Niu, K.; Wang, C.; Xu, J.; Liang, J.; Zhou, X.; Wen, K.; Lu, M.; Yang, C. Early forest fire detection with UAV image fusion: A novel deep learning method using visible and infrared sensors. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025, in press. [Google Scholar] [CrossRef]
- Shirwaikar, R.; Narvekar, A.; Hosamani, A.; Fernandes, K.; Tak, K.; Parab, V. Real-time semi-occluded fire detection and evacuation route generation: Leveraging instance segmentation for damage estimation. Fire Saf. J. 2025, 152, 104338. [Google Scholar] [CrossRef]
- Wang, X.; Wang, J.; Chen, L.; Zhang, Y. Improving computer vision-based wildfire smoke detection by combining SE-ResNet with SVM. Processes 2024, 12, 747. [Google Scholar] [CrossRef]
- Shin, D.H.; Kang, J.M.; Cheong, T. FDN: A real-time ensemble fire detection network. IEEE Access 2025, in press. [Google Scholar]
- Liu, L.; Chen, L.; Asadi, M. Capsule neural network and adapted golden search optimizer based forest fire and smoke detection. Sci. Rep. 2025, 15, 4187. [Google Scholar] [CrossRef]
- Jin, P.; Cheng, P.; Liu, X.; Huang, Y. From smoke to fire: A forest fire early warning and risk assessment model fusing multimodal data. Eng. Appl. Artif. Intell. 2025, 152, 110848. [Google Scholar] [CrossRef]
- Suh, Y. Vision-based detection algorithm for monitoring dynamic change of fire progression. J. Big Data 2025, 12, 134. [Google Scholar] [CrossRef]
- Gonçalves, A.M.; Brandão, T.; Ferreira, J.C. Wildfire detection with deep learning—A case study for the CICLOPE project. IEEE Access 2024, 12, 82095–82110. [Google Scholar] [CrossRef]
- Shang, L.; Hu, X.; Huang, Z.; Zhang, Q.; Zhang, Z.; Li, X.; Chang, Y. YOLO-DKM: A flame and spark detection algorithm based on deep learning. IEEE Access 2025, in press. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
- Shi, P.; Zhao, Y.; Qian, Y.; Li, X.; Chen, X. An efficient forest fire detection algorithm using improved YOLOv5. Forests 2023, 14, 2440. [Google Scholar] [CrossRef]
- Zhao, L.; Liu, J.; Peters, S.; Li, J.; Mueller, N.; Oliver, S. Learning class-specific spectral patterns to improve deep learning-based scene-level fire smoke detection from multi-spectral satellite imagery. Remote Sens. Appl. Soc. Environ. 2024, 34, 101152. [Google Scholar] [CrossRef]
- Wang, S.; Wu, M.; Wei, X.; Song, X.; Wang, Q.; Jiang, Y.; Gao, J.; Meng, L.; Chen, Z.; Zhang, Q.; et al. An advanced multi-source data fusion method utilizing deep learning techniques for fire detection. Eng. Appl. Artif. Intell. 2025, 142, 109902. [Google Scholar] [CrossRef]
- Panindre, P.; Acharya, S.; Kalidindi, N.; Kumar, S. Artificial intelligence-integrated autonomous IoT alert system for real-time remote fire and smoke detection in live video streams. IEEE Internet Things J. 2025, in press. [Google Scholar] [CrossRef]
- Fan, X.; Lei, F.; Yang, K. Real-time detection of smoke and fire in the wild using unmanned aerial vehicle remote sensing imagery. Forests 2025, 16, 201. [Google Scholar] [CrossRef]
- Yan, C.; Wang, J. MAG-FSNet: A high-precision robust forest fire smoke detection model integrating local features and global information. Measurement 2025, 247, 116813. [Google Scholar] [CrossRef]
- Ishtiaq, M.; Won, J.-U. YOLO-SIFD: YOLO with sliced inference and fractal dimension analysis for improved fire and smoke detection. Comput. Mater. Continua 2025, 82, 3. [Google Scholar] [CrossRef]
- Hoang, V.-H.; Lee, J.W.; Park, C.-S. Enhancing fire detection with YOLO models: A Bayesian hyperparameter tuning approach. Comput. Mater. Continua 2025, 83, 3. [Google Scholar] [CrossRef]
- Dewi, C.; Santoso, M.V.V.; Chernovita, H.P.; Mailoa, E.; Philemon, S.A.; Chen, A.P.S. Integration of YOLOv11 and histogram equalization for fire and smoke-based detection of forest and land fires. Deakin Univ. 2025. Preprint/Institutional Report. [Google Scholar] [CrossRef]
- Foggia, P.; Saggese, A.; Vento, M. Real-time fire detection for video-surveillance applications using a combination of experts based on color, shape, and motion. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1545–1556. [Google Scholar] [CrossRef]
- Jadon, A.; Omama, M.; Varshney, A.; Ansari, M.S.; Sharma, R. Firenet: A specialized lightweight fire & smoke detection model for real-time IoT applications. arXiv 2019, arXiv:1905.11922. [Google Scholar]
- Steffens, C.R.; Rodrigues, R.N.; da Costa Botelho, S.S. An unconstrained dataset for non-stationary video-based fire detection. In Proceedings of the Latin American Robotics Symposium (LARS)/Brazilian Symposium on Robotics (SBR), Uberlandia, Brazil, 29–31 October 2015; pp. 25–30. [Google Scholar] [CrossRef]
- Toulouse, T.; Rossi, L.; Steffenel, L.A.; Sébillot, P. Computer vision for wildfire research: An evolving image dataset for processing and analysis. Fire Saf. J. 2017, 92, 188–194. [Google Scholar] [CrossRef]
- IEEE Dataport. Flame Dataset: Aerial Imagery of Pile Burn Detection Using Drones (UAVs). Available online: https://ieee-dataport.org/open-access/flame-dataset-aerial-imagery-pile-burn-detection-using-drones-uavs (accessed on 18 September 2025).
- Kaggle. Forest Fire Images Dataset. Available online: https://www.kaggle.com/datasets/mohnishsaiprasad/forest-fire-images (accessed on 18 September 2025).
- Khan, A.; Hassan, B. Dataset for forest fire detection. Mendeley Data 2020, 1. [Google Scholar] [CrossRef]
- Kaggle. Fire Dataset. Available online: https://www.kaggle.com/datasets/phylake1337/fire-dataset (accessed on 18 September 2025).
- Kaggle. FlameVision Dataset. Available online: https://www.kaggle.com/datasets/anamibnjafar0/flamevision (accessed on 18 September 2025).
- Bitbucket. BoWFire Dataset. Available online: https://bitbucket.org/gbdi/bowfire-dataset/src/master/ (accessed on 18 September 2025).
- GitHub. DBA-YOLO Dataset. Available online: https://github.com/ScryAbu/DBA-YOLO-Dataset (accessed on 18 September 2025).
- Hopkins, B.; Chernogorov, A.; Ahmed, M.; Farooq, W. FLAME 2: Fire detection and modeling—Aerial multi-spectral image dataset. IEEE Dataport 2022. [CrossRef]
- Yuan, F. Video Smoke Detection Dataset. State Key Lab of Fire Science, USTC. Available online: http://staff.ustc.edu.cn/~yfn/vsd.html (accessed on 18 September 2025).
- University of California San Diego. HPWREN Fire Ignition Images Library for Neural Network Training. Available online: https://www.hpwren.ucsd.edu/FIgLib/ (accessed on 18 September 2025).
- Zhou, Q. SKLFS dataset. Fire Detection Research Group, 2018. Available online: http://smoke.ustc.edu.cn/datasets.htm (accessed on 18 September 2025).
- Gong, X.; Li, Y.; Zhang, X.; Zhou, J. Dark-channel based attention and classifier retraining for smoke detection in foggy environments. Digit. Signal Process. 2022, 123, 103454. [Google Scholar] [CrossRef]
- Yazdi, A.; Chen, C.; Dai, W.; Wang, D. Nemo: An open-source transformer-supercharged benchmark for fine-grained wildfire smoke detection. Remote Sens. 2022, 14, 3979. [Google Scholar] [CrossRef]
- DeepQuestAI. FireFlame Dataset. Available online: https://github.com/DeepQuestAI/Fire-Smoke-Dataset (accessed on 18 September 2025).
- Cetin, A.E. VisiFire Dataset. Bilkent EE Signal Processing Group. Available online: http://signal.ee.bilkent.edu.tr/VisiFire/ (accessed on 18 September 2025).
- de Venâncio, P.V.A.; Rodrigues, H.C.P.; de Andrade, M.T. Fire detection based on a two-dimensional convolutional neural network and temporal analysis. In Proceedings of the IEEE Latin American Conference on Computational Intelligence (LA-CCI), Arequipa, Peru, 22–24 November 2021; pp. 1–6. [Google Scholar]
- Kose, K.; Tsalakanidou, F.; Besbes, H.; Tlili, F.; Governeur, B.; Pauwels, E. FireSense: Fire detection and management through a multi-sensor network for protection of cultural heritage areas from the risk of fire and extreme weather conditions. In Proceedings of the 7th Framework Programme for Research and Technological Development, Thessaloniki, Greece; 2010. [Google Scholar]
- Cazzolato, M.T.; Avalhais, L.; Chino, D.; Ramos, J.S.; de Souza, J.A.; Rodrigues, J.F., Jr.; Traina, A. Fismo: A compilation of datasets from emergency situations for fire and smoke analysis. In Proceedings of the Brazilian Symposium on Databases (SBBD), Uberlândia, Brazil, 2–6 October 2017; pp. 213–223. [Google Scholar]
- Wu, S.; Zhang, X.; Liu, R.; Li, B. A dataset for fire and smoke object detection. Multimed. Tools Appl. 2022, 1–20. [Google Scholar] [CrossRef]
- Kaggle. Forest Fire, Smoke and Non-Fire Image Dataset. Available online: https://www.kaggle.com/datasets/amerzishminha/forest-fire-smoke-and-non-fire-image-dataset (accessed on 18 September 2025).
- Wang, M.; Jiang, L.; Yue, P.; Yu, D.; Tuo, T. FASDD: An open-access 100,000-level flame and smoke detection dataset for deep learning in fire detection. Earth Syst. Sci. Data Discuss. 2023, preprint. [Google Scholar] [CrossRef]
- GitHub. Domestic-Fire-and-Smoke-Dataset. Available online: https://github.com/datacluster-labs/Domestic-Fire-and-Smoke-Dataset (accessed on 18 September 2025).
- Kaggle. Fire and Smoke Dataset. Available online: https://www.kaggle.com/datasets/dataclusterlabs/fire-and-smoke-dataset (accessed on 18 September 2025).
- Omdena. OnFire Dataset. Available online: https://datasets.omdena.com/dataset/onfire-dataset (accessed on 18 September 2025).
- Lostanlen, M.; Isla, N.; Guillen, J.; Veith, F.; Buc, C.; Barriere, V. Scrapping the web for early wildfire detection: A new annotated dataset of images and videos of smoke plumes in-the-wild. arXiv 2024, arXiv:2402.05349. [Google Scholar]
- Han, X.; Wu, Y.; Pu, N.; Feng, Z.; Zhang, Q.; Bei, Y.; Cheng, L. Fire and smoke detection with burning intensity representation. In Proceedings of the 6th ACM Multimedia Asia Conference, Tokyo, Japan, 3–5 January 2024. [Google Scholar]
- Kaggle. The Wildfire Dataset. Available online: https://www.kaggle.com/datasets/elmadafri/the-wildfire-dataset (accessed on 18 September 2025).
- El-Madafri, I.; Peña, M.; Olmedo-Torre, N. The Wildfire Dataset: Enhancing deep learning-based forest fire detection with a diverse evolving open-source dataset focused on data representativeness and a novel multi-task learning approach. Forests 2023, 14, 1697. [Google Scholar] [CrossRef]
- Jong, A.; Shoaib, M.; Baek, J.; Lee, C. WIT-UAS: A wildland-fire infrared thermal dataset to detect crew assets from aerial views. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Detroit, MI, USA, 1–5 October 2023; pp. 11464–11471. [Google Scholar]
- KMU-CVPR. KMU Fire & Smoke Database. KMU CVPR Lab, 2012. Available online: https://cvpr.kmu.ac.kr/Dataset/Dataset.htm (accessed on 18 September 2025).
- Hsu, Y.-C.; Lin, C.-Y.; Chiu, S.-W.; Huang, S.-J.; Hsieh, Y.-S. Project RISE: Recognizing industrial smoke emissions. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Event, 2–9 February 2021; 35, pp. 14813–14821. [Google Scholar]
- Wang, H.; Zhao, Y.; Zhang, H.; Xu, X. Deep learning based fire detection system for surveillance videos. In Proceedings of the International Conference on Intelligent Robotics and Applications (ICIRA), Shenyang, China, 8–11 August 2019; Part II. pp. 318–328. [Google Scholar]
- Chaturvedi, S.; Khanna, P.; Ojha, A. A survey on vision-based outdoor smoke detection techniques for environmental safety. ISPRS J. Photogramm. Remote Sens. 2022, 185, 158–187. [Google Scholar] [CrossRef]
- Yang, Z.; Liu, Y.; Liu, W.; Ma, H. Indoor video flame detection based on lightweight convolutional neural network. Pattern Recognit. Image Anal. 2020, 30, 551–564. [Google Scholar] [CrossRef]
- Bouguettaya, A.; Zarzour, H.; Taberkit, A.M.; Kechida, A. A review on early wildfire detection from unmanned aerial vehicles using deep learning-based computer vision algorithms. Signal Process. 2022, 190, 108309. [Google Scholar] [CrossRef]
- Dampage, U.; Bandaranayake, L.; Wanasinghe, R.; Kottahachchi, K.; Jayasanka, B. Forest fire detection system using wireless sensor networks and machine learning. Sci. Rep. 2022, 12, 46. [Google Scholar] [CrossRef]
- Fouda, M.M.; Abdel-Basset, M.; Rezgui, A.; Elhoseny, M.; Hamam, H. A lightweight hierarchical AI model for UAV-enabled edge computing with forest-fire detection use-case. IEEE Netw. 2022, 36, 38–45. [Google Scholar] [CrossRef]
- Pesonen, J.; Hakala, T.; Karjalainen, V.; Koivumäki, N.; Markelin, L.; Raita-Hakola, A.-M.; Suomalainen, J.; Pölönen, I.; Honkavaara, E. Detecting wildfires on UAVs with real-time segmentation trained by larger teacher models. In Proceedings of the 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 4–8 January 2025; pp. 5166–5176. [Google Scholar]
- Yun, B.; Zheng, Y.; Lin, Z.; Li, T. FFYOLO: A lightweight forest fire detection model based on YOLOv8. Fire 2024, 7, 93. [Google Scholar] [CrossRef]
- Zheng, Y.; Tao, F.; Gao, Z.; Li, J. FGYOLO: An integrated feature enhancement lightweight unmanned aerial vehicle forest fire detection framework based on YOLOv8n. Forests 2024, 15, 1823. [Google Scholar] [CrossRef]
- Kumar, A.; Perrusquía, A.; Al-Rubaye, S.; Guo, W. Wildfire and smoke early detection for drone applications: A light-weight deep learning approach. Eng. Appl. Artif. Intell. 2024, 136, 108977. [Google Scholar] [CrossRef]
- Muksimova, S.; Mardieva, S.; Cho, Y.-I. Deep encoder–decoder network-based wildfire segmentation using drone images in real-time. Remote Sens. 2022, 14, 6302. [Google Scholar] [CrossRef]
- Muksimova, S.; Cho, Y.-I.; Mardieva, S. Lightweight fire detection in tunnel environments. Fire 2025, 8, 134. [Google Scholar] [CrossRef]
- Tao, Y.; Li, B.; Li, P.; Qian, J.; Qi, L. Improved lightweight YOLOv11 algorithm for real-time forest fire detection. Electronics 2025, 14, 1508. [Google Scholar] [CrossRef]
- Chen, G.; Xu, H.; Wang, X.; Tang, Y.; Zhang, Y. LMDFS: A lightweight model for detecting forest fire smoke in UAV images based on YOLOv7. Remote Sens. 2023, 15, 3790. [Google Scholar] [CrossRef]
- Wei, C.; Xu, J.; Li, Q.; Jiang, S. An intelligent wildfire detection approach through cameras based on deep learning. Sustainability 2022, 14, 15690. [Google Scholar] [CrossRef]
- Li, C.; Zhu, B.; Chen, G.; Li, Q.; Xu, Z. Intelligent monitoring of tunnel fire smoke based on improved YOLOX and edge computing. Appl. Sci. 2025, 15, 2127. [Google Scholar] [CrossRef]
- Muksimova, S.; Umirzakova, S.; Baltayev, J.; Cho, Y.-I. Lightweight deep learning model for fire classification in tunnels. Fire 2025, 8, 85. [Google Scholar] [CrossRef]
- Zhao, C.; Zhao, L.; Zhang, K.; Ren, Y.; Chen, H.; Sheng, Y. Smoke and Fire-You Only Look Once: A lightweight deep learning model for video smoke and flame detection in natural scenes. Fire 2025, 8, 104. [Google Scholar] [CrossRef]
- Gagliardi, A.; de Gioia, F.; Saponara, S. A real-time video smoke detection algorithm based on Kalman filter and CNN. J. Real-Time Image Process. 2021, 18, 2085–2095. [Google Scholar] [CrossRef]
- Zeng, L.; Xu, G.; Dong, L.; Hu, P. Fast smoke and flame detection based on lightweight deep neural network. In Proceedings of the International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 26–27 August 2020; Volume 1, pp. 232–235. [Google Scholar] [CrossRef]
- Li, H.; Liu, L.; Du, J.; Jiang, F.; Guo, F.; Hu, Q.; Fan, L. An improved YOLOv3 for object detection of transmission lines. IEEE Access 2022, 10, 45620–45628. [Google Scholar] [CrossRef]
- Wang, J.; Zhang, X.; Zhang, C. A lightweight smoke detection network incorporated with the edge cue. Expert Syst. Appl. 2024, 241, 122583. [Google Scholar] [CrossRef]
- Abdusalomov, A.; Umirzakov, S.; Nurmatov, A.; Cho, Y.-I. AI-driven UAV surveillance for agricultural fire safety. Fire 2025, 8, 142. [Google Scholar] [CrossRef]
- Duangsuwan, S.; Klubsuwan, K. Accuracy assessment of drone real-time open burning imagery detection for early wildfire surveillance. Forests 2023, 14, 1852. [Google Scholar] [CrossRef]
- Zhang, Z.; Tan, L.; Tiong, R.L.K. Ship-Fire Net: An improved YOLOv8 algorithm for ship fire detection. Sensors 2024, 24, 727. [Google Scholar] [CrossRef]
- Wang, S.; Zhang, Y.; Hsieh, T.H.; Liu, W.; Yin, F.; Liu, B. Fire situation detection method for unmanned fire-fighting vessel based on coordinate attention structure-based deep learning network. Ocean Eng. 2022, 266, 113208. [Google Scholar] [CrossRef]
- Zhao, L.; Liu, J.; Peters, S.; Li, J.; Oliver, S.; Mueller, N. Investigating the impact of using IR bands on early fire smoke detection from Landsat imagery with a lightweight CNN model. Remote Sens. 2022, 14, 3047. [Google Scholar] [CrossRef]
- Park, M.; Bak, J.; Park, S. Advanced wildfire detection using generative adversarial network-based augmented datasets and weakly supervised object localization. Int. J. Appl. Earth Obs. Geoinf. 2022, 114, 103052. [Google Scholar] [CrossRef]
- Gao, P. A fire and smoke detection model based on YOLOv8 improvement. Int. J. Adv. Comput. Sci. Appl. 2024, 15, 3. [Google Scholar] [CrossRef]
- Mahmoudi, S.A.; Gloesener, M.; Benkedadra, M.; Lerat, J.S. Edge AI system for real-time and explainable forest fire detection using compressed deep learning models. In Proceedings of the 2025 ACM Conference on Copyright, New York, NY, USA, 18–20 March 2025; pp. 847–854. [Google Scholar] [CrossRef]
- Chaturvedi, S.; Arun, C.S.; Thakur, P.S.; Khanna, P.; Ojha, A. Ultra-lightweight convolution-transformer network for early fire smoke detection. Fire Ecol. 2024, 20, 83. [Google Scholar] [CrossRef]
- Boroujeni, S.P.H.; Mehrabi, N.; Afghah, F.; McGrath, C.P.; Bhatkar, D.; Biradar, M.A.; Razi, A. Fire and smoke datasets in 20 years: An in-depth review. arXiv 2025, arXiv:2503.14552. [Google Scholar] [CrossRef]
- Bi, W.; Li, B.; Lei, B. Lightweight fire detection algorithm based on deep learning. In Proceedings of the Advanced Fiber Laser Conference (AFL), Online, 25–28 March 2024; Volume 13104, pp. 580–585. [Google Scholar] [CrossRef]
- Goswami, T.; Kaushik, B.K. Forest fire detection using multimodal deep learning on IR-RGB datasets. Int. J. Emerg. Trends Eng. Technol. 2024, 72, 175–183. [Google Scholar] [CrossRef]
- Altaf, M.; Yasir, M.; Dilshad, N.; Kim, W. An optimized deep-learning-based network with an attention module for efficient fire detection. Fire 2025, 8, 15. [Google Scholar] [CrossRef]
- Sharma, A.; Kumar, R.; Kansal, I.; Popli, R.; Khullar, V.; Verma, J.; Kumar, S. Fire detection in urban areas using multimodal data and federated learning. Fire 2024, 7, 104. [Google Scholar] [CrossRef]
- Gutmacher, D.; Hoefer, U.; Wöllenstein, J. Gas sensor technologies for fire detection. Sens. Actuators B Chem. 2012, 175, 40–45. [Google Scholar] [CrossRef]
- Ashiquzzaman, A.; Oh, S.M.; Lee, D.; Lee, J.; Kim, J. Context-aware deep convolutional neural network application for fire and smoke detection in virtual environment for surveillance video analysis. In Proceedings of the SmartCom 2020, Bangkok, Thailand, 26–28 February 2020; pp. 459–467. [Google Scholar]
- Mahgoub, A.; Khan, A.; Elshafee, A. Fire alarm system for smart cities using edge computing. In Proceedings of the IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2–5 February 2020. [Google Scholar]
- Xiang, C.; Jin, W.; Huang, D.; Tran, M.A.; Guo, J.; Wan, Y.; Xie, W.; Kurczveil, G.; Netherton, A.M.; Liang, D.; et al. High-performance silicon photonics using heterogeneous integration. IEEE J. Sel. Top. Quantum Electron. 2021, 28, 1–15. [Google Scholar] [CrossRef]
- Li, X.; Tang, B.; Li, H. AdaER: An adaptive experience replay approach for continual lifelong learning. Neurocomputing 2024, 572, 127204. [Google Scholar] [CrossRef]
- Ma, C.; Li, J.; Shi, L.; Ding, M.; Wang, T.; Han, Z.; Poor, H.V. When federated learning meets blockchain: A new distributed learning paradigm. IEEE Comput. Intell. Mag. 2022, 17, 26–33. [Google Scholar] [CrossRef]
- Kim, J.-H.; Hwang, Y. GAN-based synthetic data augmentation for infrared small target detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
- Balasubramaniam, N.; Kauppinen, M.; Rannisto, A.; Hiekkanen, K.; Kujala, S. Transparency and explainability of AI systems: From ethical guidelines to requirements. Inf. Softw. Technol. 2023, 159, 107197. [Google Scholar] [CrossRef]
- Dong, W.; Qu, J.; Zhang, T.; Li, Y.; Du, Q. Context-aware guided attention based cross-feedback dense network for hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
Ref. | Year | Type | Modalities | Techniques | Tasks | Datasets | Domains | Taxonomy | Edge AI | Challenges | Future Directions | Key Contributions |
---|---|---|---|---|---|---|---|---|---|---|---|---|
[25] | 2020 | Comp. | ● | ● | ● | ✓ | ● | ✓ | ✗ | ✓ | ✗ | Shift from handcrafted to DL in video fire/smoke detection; speed–accuracy trade-offs. |
[28] | 2023 | Brief | ● | ● | ● | ✓ | ● | ✓ | ✗ | ✓ | ✓ | ML (sup./unsup./RL) in forest fire science; sensors, UAVs, RS data. |
[29] | 2023 | Comp. | ● | ● | ● | ✓ | ● | ✓ | ✗ | ✓ | ✓ | DL (CNNs, ViTs, lightweight) for wildland fire RS; benchmark datasets. |
[35] | 2023 | Comp. | ● | ● | ● | ✓ | ● | ✓ | ✗ | ✓ | ✓ | Scientometric review (2300+ works); synthesis of ML/DL methods, datasets, and domains. |
[32] | 2023 | System. | ● | ● | ● | ✗ | ● | ✓ | ✓ | ✓ | ✓ | Sensor-based fire detection (smoke, heat, flame, gas) and IoT/WSN systems. |
[26] | 2023 | Focused | ● | ● | ● | ✓ | ● | ✓ | ✗ | ✓ | ✓ | DL video fire detection (CNNs, RNNs, hybrids); dataset and metric synthesis. |
[31] | 2024 | System. | ● | ● | ● | ✓ | ● | ✓ | ✗ | ✓ | ✓ | RS datasets (MODIS, VIIRS, Sentinel) linked to thresholding, contextual, and DL methods. |
[30] | 2024 | Comp. | ● | ● | ● | ✓ | ● | ✓ | ✗ | ✓ | ✓ | DL in remote sensing (1990–2023); CNN, YOLO, U-Net; bibliometric + systematic. |
[33] | 2024 | System. | ● | ● | ● | ✓ | ● | ✓ | ✗ | ✓ | ✓ | Categorizes forest fire detection into five areas (fire, smoke, joint, robotics). |
[27] | 2024 | Literature | ● | ● | ● | ✓ | ● | ✓ | ✗ | ✓ | ✓ | Literature review with novel video taxonomy (size × activity); dataset re-annotation. |
[34] | 2025 | System. | ● | ● | ● | ✓ | ● | ✓ | ✗ | ✓ | ✓ | 33 surveillance studies (indoor/outdoor); ML/DL/Hybrids + multimodal inputs. |
Ours | 2025 | Comp. | ● | ● | ● | ✓ | ● | ✓ | ✓ | ✓ | ✓ | Cross-domain synthesis; unified dataset taxonomy; edge deployment; challenges (dataset bias, false alarms, generalization, interpretability) + roadmap (fusion, federated/synthetic data, lightweight DL, explainable AI). |
Selection Criteria | Description |
---|---|
Inclusion Criteria: | |
• Studies that apply AI, ML, or DL techniques to fire and/or smoke detection. | • To ensure methodological relevance to fire/smoke detection using intelligent systems. |
• Peer-reviewed journal articles, top-tier conference papers, or authoritative technical reports. | • To maintain academic quality and credibility. |
• Publications from 2020 to 2025. | • To reflect recent developments and state-of-the-art techniques. |
• Studies involving real-world deployment, empirical evaluations, or system-level testing. | • To prioritize practical relevance and real-life applicability. |
• Research targeting diverse environments (e.g., forest, urban, industrial, residential). | • To ensure broad coverage of fire and smoke detection contexts. |
Exclusion Criteria: | |
• Studies unrelated to fire, smoke, or flame detection tasks. | • To exclude irrelevant research outside the review scope. |
• Papers lacking empirical validation or meaningful technical depth. | • To remove speculative or unsubstantiated contributions. |
• Non-English publications. | • To maintain linguistic consistency and accessibility. |
• Duplicate or derivative works lacking novel contributions. | • To avoid redundancy and ensure original content. |
• Outdated surveys or works focused only on pre-2020 methods without updates. | • To focus on recent advancements and evolving technologies. |
Ref. | Year | Model/Architecture | Parameters/Speed | Dataset Used | Environment | Performance |
---|---|---|---|---|---|---|
[48] | 2016 | Basic 6-layer CNN | ∼1M params | MIVIA dataset | Indoor/Outdoor videos | 84% accuracy |
[50] | 2019 | VGG19, ResNet50 | ∼60M params | DeepFire, custom datasets | Forest UAV, general | 95% accuracy |
[52] | 2019 | CNN and LSTM | High complexity; 120 fps | Extended MIVIA (252 videos) | Video surveillance | 95.39% accuracy |
[53] | 2020 | Faster R-CNN | Slower inference | BoWFire, Corsican | General vision | 93.36% mAP |
[55] | 2021 | Bayesian Fusion, Faster R-CNN and LSTM | 60M params | Public fire dataset | Video-based detection | High accuracy |
[56] | 2021 | EfficientNet, DeepLabv3+ | ∼5.3M params | Outdoor smoke datasets | Surveillance | +3% accuracy; lower FAR |
[59] | 2021 | YOLOv5 | 7.5M params | Forest fire dataset (5k images) | Forest monitoring | 92.3% mAP |
[54] | 2021 | Faster R-CNN | inference ≈ 120 fps | Factory dataset | Indoor smoke detection | 99.0% accuracy |
[57] | 2021 | AlexNet, ResNet and Inception V3 | AlexNet (60M), Inception V3 (24M), ResNet50 (25.6M) backbones | Bilkent, VOC2012, SKLFS | Large buildings, labs | F1 = 0.99 |
[58] | 2021 | YOLOv5 | 6.2 ms inference | Wildfire datasets | Forest/UAV | 92.1% mAP |
[60] | 2021 | Hybrid (YOLO+U-Net) | ∼10M+ params | Mixed wildfire dataset | Forest monitoring | Reduced false alarms |
[64] | 2022 | Transformer, CNN backbone | 26 fps | Custom, public datasets | Outdoor | Excellent performance on small objects |
[65] | 2022 | ResNet, EfficientNet and GradCAM | ResNet18 (12M), ResNet34 (21M), ResNet50 (23M), EfficientNet-B0 (5.3M, best, ≈ 88 fps), EfficientNet-B5 (30M+) | Large wildfire dataset (35k images) | Surveillance towers | AUROC = 0.949 |
[70] | 2022 | CNN, InceptionV3 | Preprocessing + CNN | Custom dataset | Indoor/Outdoor | +5–6% accuracy improvement |
[71] | 2022 | Modified ResNet | (∼25M params) | Custom fire video dataset | Forest fire monitoring | +14.9% mAP vs baseline |
[69] | 2022 | multi-modal CNNs | (∼3.4M params, 2.5K trainable) | FLAME2 UAV dataset | UAV wildfire monitoring | High pixel-level accuracy |
[90] | 2023 | YOLOv5 | 43 fps real-time | Wildfire dataset | Forest/Outdoor | 79% mAP |
[62] | 2023 | YOLO, Ensemble CNN | (∼7.5M params) for smoke; YOLOv5l (∼46.5M params) for fire | UAV + public datasets | UAV wildfire/smoke detection | Acc. = 99%; mAP = 0.85 (smoke) |
[61] | 2023 | YOLOv5/6/7/8 + YOLO-NAS | 300-epoch trained YOLOs | Foggia dataset | Wildfire smoke detection | F1 = 0.95; Recall = 0.98 |
[49] | 2023 | VGG16, Inception, Xception | VGG16 (138M), InceptionV3 (24M), Xception (22.9M) | BowFire + custom datasets | Forest fire images | Xception 98.7% acc.; LwF 91.4% |
[63] | 2023 | YOLOv7, CBAM, BiFPN | ≈37M params | UAV smoke dataset (6500 imgs) | UAV wildfire smoke | AP50 = 86.4% |
[91] | 2024 | CNN | ResNet50 (23.6M), InceptionResNetV2 (54M), MobileNetV2 (2.3M), VIB_SD (1.7M) | USTC_SmokeRS, Landsat_Smk | Satellite remote sensing | Improved scene-level accuracy |
[75] | 2025 | YOLOv5 | GA-optimized YOLOv5 | Indoor FS dataset (5k images) | Indoor environments | 92.1% mAP |
[82] | 2024 | SE-ResNet, SVM Hybrid | ∼3.28 s/image on RTX 2080 Ti | Public wildfire smoke datasets | Outdoor (wildfire/cloud mist distinction) | Acc. = 98.99%; F1 = 99% |
[66] | 2024 | Swin Transformer, GFPN | ∼261M params | Forest Fire Smoke Complex Background Dataset | Forest monitoring | mAP = 80.92%; mAP50 = 90.01%; mAP75 = 83.38%
[74] | 2025 | YOLOv8/9/10/11 | 100 epochs | Ground + aerial imagery | Forest wildfire | YOLOv8 best balance |
[92] | 2025 | YOLO | YOLOv5s ≈7.5M params, ≈140 FPS; YOLOv8 ≈11M params, ≈90 FPS | Custom fire dataset | Multi-source incl. satellite RS | mAP50 = 81.4%; mAP50–95 = 59% |
[93] | 2025 | EfficientDet, Scaled-YOLOv4, Faster R-CNN | Scaled-YOLOv4: ≈64M params, detection time ≈0.016 s/frame (≈62 FPS) | Custom curated NFPA dataset | CCTV/IoT real-time | Scaled-YOLOv4: mAP@0.5 = 80.6%; 0.016 s/frame
[84] | 2025 | Capsule Neural Network, Adapted Golden Search Optimizer (AGSO) | ≈11 layers | BoWFire + wildfire smoke dataset | Forest wildfire detection | Improved accuracy vs CNN; >14% mAP gain over YOLOv5n
[80] | 2025 | YOLOv5s | 7.5M params | New UAV Fusion Dataset (2752 pairs) | UAV wildfire early detection | +10% precision on small fires; reduced false positives |
[83] | 2025 | ResNeXt-50, EfficientNet-B4, YOLOv5 | ResNeXt-50 (25M), EffNet-B4 (19M), YOLOv5s (7.5M) | AI-Hub fire dataset (8 classes: fire, smoke colors, fog, clouds, light) | Multiclass wild/urban fires | F1 +34.4%; mAP@50 +1.3% vs YOLOv5 |
[77] | 2025 | YOLOv11-DH3 | YOLOv11n ≈ 7.7M params; YOLOv11x ≈ 258M params | Baidu Paddle fire dataset + Wildfire dataset | Indoor + outdoor smoke/fire | Precision 91.6%; Recall 90% |
[76] | 2025 | YOLOv9/10/11 | YOLOv11n (2.6M), YOLOv10n (2.3M), YOLOv9t (2.0M) | Smoke and Fire dataset + D-Fire dataset | Forest monitoring | YOLOv11n best: Precision = 0.845, Recall = 0.801 |
[94] | 2025 | YOLOv10 | Speed: 8.98 FPS | SmokeFireUAV dataset (1489 UAV images) | UAV wildfire monitoring | mAP = 79.28%; F1 = 76.14%; 8.98 FPS |
[85] | 2025 | YOLOv8n, MSDBlock, Repulsion Loss | YOLOv8n backbone ≈ 3.2M params | Multimodal dataset (3352 pairs, images + risk data) | Forest fire risk monitoring | Accuracy = 93.06%; +18.75% gain vs unimodal model |
[58] | 2025 | HPO-YOLOv5, DeepSORT | ≈ 7.5M params | Indoor FireSmoke dataset (5000 imgs) | Indoor fire/smoke monitoring | mAP@0.5 = 92.1%
[95] | 2025 | YOLOv8, CNN-Transformer fusion | YOLOv8n baseline 3.2M params | VIGP-FS dataset | Forest smoke detection (ground stations) | Precision = 88.4%; Recall = 83.4%; mAP@0.5 = 89.3% |
[73] | 2025 | ConvNeXt backbone + AFEM + BFFM + contrastive learning | - | USTC_SmokeRS, E_SmokeRS, Aerial dataset | Remote sensing smoke recognition | Acc. = 98.87%; Detection rate = 94.54%; FAR = 3.30% |
[79] | 2025 | YOLOv8 | - | D-Fire dataset (drone-based) | UAV wildfire smoke/fire | Precision = 93.57%; Recall = 88.51%
[81] | 2025 | YOLOv8 | Reduced from 3.2M to 2.6M params | Indoor semi-occluded fire dataset | Indoor disaster management | Precision = 0.73; F1 = 0.81; 81% detection rate
[78] | 2025 | YOLOv11x | 501 epochs, optimized loss functions | WD + FFS datasets | Forest wildfire detection | Precision = 0.949; Recall = 0.850 |
[67] | 2025 | Transformer Network | - | SVRD (8544 videos) | Smoke video recognition | High accuracy with reduced complexity. |
[68] | 2025 | Swin Transformer, Mask R-CNN | - | Custom fire/non-fire dataset | Wildfire + non-fire scenes | Segmentation mAP50 = 0.842
[96] | 2025 | YOLOv8, SAHI, fractal analysis | 11M (YOLOv8n) | Public + augmented datasets | Fire/smoke detection (urban + natural) | mAP50–95 improved by +25% vs YOLOv8 baseline |
[97] | 2025 | YOLOv8/YOLOv10 + Bayesian Tuning | YOLOv8s ≈11M params; YOLOv8l ≈43M; YOLOv10s ≈12M; YOLOv10l ≈60M | D-Fire dataset (21,527 images) | Outdoor wildfire monitoring | YOLOv8l mAP50 +0.84 vs default; YOLOv8 best trade-off |
[72] | 2025 | Transformer Fusion IR + Visible | - | Infrared + visible dataset (urban parks) | Urban/business park smoke | Acc. = 90.88%; Prec. = 98.38%; Rec. = 92.41%; FP/FN < 5% |
[98] | 2025 | YOLOv11 | YOLOv11 baseline; small ≈9M params, large ≈258M params | D-Fire dataset (21,527 images) | Forest + land fire detection | HE: mAP50 = 0.771; CLAHE: better local contrast; DBST-LCM: Prec. = 79%
[86] | 2025 | Multi-object CNN | - | Industrial/construction datasets | Fire progression monitoring | Improved fire-severity tracking accuracy
[87] | 2024 | DenseNet, detail-selective CNN | - | CICLOPE alarm images (Portugal) | Wide-area surveillance | Acc. = 99.7% |
[88] | 2025 | Improved YOLOv8 | FPS = 131.36 | Custom dataset (20,044 imgs: flames, sparks, smoke) | Industrial, forest, indoor/outdoor | Precision = 82.1%; Recall = 71.8% |
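Most classification-oriented entries in the table above follow the same transfer-learning recipe: an ImageNet-pretrained backbone is kept, the classification head is replaced, and the network is fine-tuned on a fire/smoke dataset. The sketch below illustrates that common pattern with a ResNet-50 backbone in PyTorch; the dataset path, class count, and hyperparameters are placeholders and do not reproduce any specific cited work.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Minimal transfer-learning sketch: ImageNet-pretrained backbone + new head.
# Paths, class count, and hyperparameters are illustrative placeholders.
NUM_CLASSES = 3  # e.g., fire, smoke, neutral background

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder expects <root>/<class_name>/<image>; "data/train" is hypothetical.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the head

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Detection-oriented entries (the YOLO family, Faster R-CNN, EfficientDet) differ mainly in that localization heads and box-regression losses replace the single classification head, while the backbone fine-tuning step remains analogous.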
Condition Type | Description |
---|---|
Fire-only | Scenes containing flames without visible smoke (e.g., indoor fires, open flames). |
Smoke-only | Scenes containing isolated smoke emissions without visible flame (e.g., early wildfire stages, controlled burns). |
Fire + Smoke | Scenarios where both smoke and flames are clearly present and interact (e.g., active wildfires). |
Fire-like distractors | Non-fire elements that mimic flame or smoke, such as car headlights, sun glare, steam, fog, or clouds. |
Smoke-like fog | Atmospheric conditions where fog appears visually similar to smoke, increasing detection ambiguity. |
Neutral/Background | Clean scenes or frames without any fire or smoke activity, used for negative class training or model calibration. |
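Because false alarms are largely driven by the distractor conditions above, it is informative to report metrics per condition rather than as a single aggregate. The sketch below encodes the taxonomy as evaluation labels and computes a per-condition false alarm rate; the data layout and field names are illustrative assumptions rather than a protocol defined by any cited dataset.

```python
from collections import defaultdict
from enum import Enum

# The condition taxonomy above encoded as evaluation labels.
class SceneCondition(Enum):
    FIRE_ONLY = "fire_only"
    SMOKE_ONLY = "smoke_only"
    FIRE_AND_SMOKE = "fire_and_smoke"
    FIRE_LIKE_DISTRACTOR = "fire_like_distractor"  # headlights, glare, steam
    SMOKE_LIKE_FOG = "smoke_like_fog"
    NEUTRAL_BACKGROUND = "neutral_background"

def per_condition_false_alarm_rate(samples):
    """samples: iterable of (condition, has_fire_or_smoke, model_raised_alarm)."""
    counts = defaultdict(lambda: [0, 0])  # condition -> [false alarms, negatives]
    for condition, is_positive, alarm in samples:
        if not is_positive:               # only negative scenes can yield false alarms
            counts[condition][1] += 1
            if alarm:
                counts[condition][0] += 1
    return {c: fa / n for c, (fa, n) in counts.items() if n}

# Toy example: distractor-heavy scenes typically dominate the false alarm budget.
eval_samples = [
    (SceneCondition.FIRE_LIKE_DISTRACTOR, False, True),
    (SceneCondition.FIRE_LIKE_DISTRACTOR, False, False),
    (SceneCondition.NEUTRAL_BACKGROUND, False, False),
]
print(per_condition_false_alarm_rate(eval_samples))
# e.g. {SceneCondition.FIRE_LIKE_DISTRACTOR: 0.5, SceneCondition.NEUTRAL_BACKGROUND: 0.0}
```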
Ref | Aim | Framework | Model Size | Platform | Environment |
---|---|---|---|---|---|
[159] | Dehaze-CA based YOLOv5 for offshore fire detection | YOLOv5 + Coordinate Attention | – | Real-time Embedded | Maritime |
[160] | GAN-based augmentation and WSOL for wildfire detection | GAN + WSOL + YOLOv5 | Lightweight | Drone and CCTV-based systems | Forest |
[161] | MM-SERNet: smoke and fire risk fusion for early warning | YOLOv5n + MSDBlock + Repulsion Loss | Lightweight | Edge Devices | Forest surveillance |
[85] | YOLOv5-based lightweight fire detection | YOLOv5 + GhostNet + BiFPN | Reduced | Embedded Deployment | Generic fire scenes |
[153] | Kalman Filter + CNN for embedded smoke detection | Kalman filter + shallow CNN | Small memory requirements | Jetson Nano, Raspberry Pi 3 | Urban/Industrial |
[154] | YOLOv3 + MobileNetv2 + FCOS | YOLOv3 + MobileNetV2 + FCOS | 5.42 MB | Edge device for power lines | Transmission lines |
[155] | ESmokeNet: edge-aware smoke detection | Edge-guided CNN + MCE + EdgeNet | – | Edge Devices | General outdoor scenes |
[156] | YOLOv8n-ERF: enhanced for real-time fire/smoke detection | YOLOv8n + EMA + BiFNet | Compressed | Scaled Embedded Deployment | Fire-Smoke Dataset |
[157] | Deeplabv3+ with MobileViT for drone-based wildfire detection | Deeplabv3+ + MobileViT | Lightweight | Drones and UAVs systems | Forest |
[163] | Edge AI with compressed DNNs (CNN + ViT) | CNN + ViT + Model Compression | – | Jetson Nano, Jetson Xavier, Jetson Orin, Orin Nano | Wildfire/forest
[164] | Ultra-light Conv-ViT for early fire smoke detection | ConvNet + Vision Transformer | 0.6M params | Edge Devices | Forest |
[146] | FireSmoke-YOLO with FSPPF and DSC | YOLO + FSPPF + DSC | Reduced (4.4 MB) | UAV, satellite | Forests, urban
[150] | YOLOX + Edge Computing for Tunnel Fire Smoke | YOLOX + Wavelet + Quantization | Optimized (edge-friendly) | Edge devices in tunnels | Tunnel |
[165] | YOLOv11 for real-time forest fire detection | C3lxbMV2 + SC-Down | 53.2% smaller than YOLOv11 | Edge Devices | Forest
[166] | Lightweight Refridnet with pruning | Refridnet + MobileNetV2 + pruning | Reduced by 87.8% | Smart home, power lines | General
[167] | FPFYOLO: YOLOv8 variant for forest fires | YOLOv8 + CPDA + MCDH | 25.3% fewer params | Edge Deployment | Drone forest |
[30] | Multi-task KD with MobileNetV3 for wildfire | MobileNetV3 + KD + CES metric | Lightweight | IoT and Edge Devices | Forest |
[168] | Lightweight CNN+LSTM with attention for tunnel | CNN + LSTM + CBAM + SE | Quantized, efficient | Embedded Deployment | Tunnel |
[13] | Hybrid CNN-LSTM for tunnel fire classification | CNN + LSTM + Transfer Learning | Lightweight | Real-time environmental surveillance systems | Tunnel |
[169] | SE-YOLOv7-tiny + WSLO-YOLOv1 with attention + W-Slo9 | YOLOv7-tiny + attention + W-Slo9 | Lightweight | Resource-constrained | Natural/forest |
[170] | AI-Driven UAV Surveillance for Agricultural Fire Safety | SSD + MobileNetV2 | 5.0 GFLOPs | UAV-based fire detection | Agriculture |
[171] | DSFOD: Drone Real-Time Open Burning Detection | YOLOv5 + LSTM | – | – | Wildfire surveillance |
[172] | Forest Fire Smoke Monitoring for Edge Systems | LCNet + Transformer + SSLD | Params reduced by ∼3/4 | Jetson NX embedded | Forest
[142] | FGFYOLO: UAV-Based Forest Fire Detection with YOLOv8 | YOLOv8 + GSConv + GBP + BiFormer | 26.4% fewer params | UAV | UAV
[160] | Baseline 1 CNN for Fire Smoke Detection from Landsat | VSD + BiLSTM | Lightweight | – | Satellite Detection |
[139] | Real-Time Wildfire Segmentation via Encoder-Decoder | EffNetV2 + AG | – | UAV-based | UAV |
[141] | LMDES: YOLOv7-Based Forest Fire Detection | YOLOv7 + GSConv + CARAFE + CBAM | 14% fewer params | UAV | UAV |
[158] | Ship-fire Net: YOLOv8n-Based detection system | YOLOv8n + GhostNetV2 + CA | – | Real-time Embedded | Maritime |
[148] | YOLOv5 + MobileNetV3 for Intelligent Wildfire Detection | YOLOv5n + MobileNetV3 + CA | Low model size | Edge or Embedded | Forest/wildlands |
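A recurring pattern across the table above is that a compact backbone is trained offline and then exported to an edge runtime (e.g., TensorRT on Jetson boards, ONNX Runtime or TFLite on single-board computers), often with some form of quantization. The sketch below shows one generic export path in PyTorch; the MobileNetV3 backbone, class count, and file names are assumptions for illustration and are not taken from any of the cited deployments.

```python
import torch
from torchvision import models

# Generic edge-export sketch; backbone, input size, and file names are
# illustrative assumptions, not values from any cited deployment.
model = models.mobilenet_v3_small(
    weights=models.MobileNet_V3_Small_Weights.IMAGENET1K_V1
)
# New 3-class head (fire, smoke, background); its weights would normally come
# from fine-tuning, which is omitted here.
model.classifier[-1] = torch.nn.Linear(model.classifier[-1].in_features, 3)
model.eval()

dummy = torch.randn(1, 3, 224, 224)  # NCHW input expected on-device

# Export to ONNX so the graph can be consumed by edge runtimes
# (e.g., TensorRT on Jetson boards or ONNX Runtime on a Raspberry Pi).
torch.onnx.export(
    model, dummy, "fire_smoke_mobilenetv3.onnx",
    input_names=["image"], output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}},
    opset_version=17,
)

# Optional: dynamic quantization of the linear head as a quick size win;
# convolutional layers usually need static or quantization-aware training.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
torch.save(quantized.state_dict(), "fire_smoke_mobilenetv3_int8_head.pt")
```

On Jetson-class hardware, the exported ONNX graph would typically be compiled with TensorRT and run in FP16 or INT8; the dynamic-quantization step shown here is only a coarse size-reduction illustration.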