Visual Intelligence in Smart Cities: A Lightweight Deep Learning Model for Fire Detection in an IoT Environment
Abstract
1. Introduction
- Considering the limited computing power of real-world IoT devices, we present a lightweight deep model that works effectively when compared to well-known lightweight models such as NASNetMobile and EfficientNet; the proposed FlameNet model achieves higher accuracy, more frames per second (FPS), and a smaller on-disk footprint, while having fewer trainable parameters.
- To refine the intermediate features, we propose a modified spatial attention (MSA) module that sharpens the backbone-extracted features, leading to superior performance. The empirical findings show that our suggested system outperforms state-of-the-art (SOTA) models in accuracy, has 24.34% fewer parameters than NASNetMobile, and, in terms of time complexity, obtains 8.96 and 10.64 FPS in a real-time environment when tested on a Raspberry Pi (RPi) and a central processing unit (CPU), respectively.
- Different benchmark datasets for fire detection in specific environments can be found in the literature, but they do not generalize to a wide range of situations. To address this issue, we developed a new composite dataset that includes challenging images of various fire and non-fire categories, collected from popular public datasets, so that our model is ultimately trained on diverse data. Furthermore, to evaluate the proposed dataset, we re-implemented SOTA studies and tested them on it for performance and diversity. As a result, we were able to compare different approaches and assess how well they handle the challenges posed by our dataset.
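The MSA module described above builds on CBAM-style spatial attention [75]; the paper's exact design is given in Section 3.3. As a rough illustration of the underlying idea — pooling across channels and gating the feature map with a sigmoid mask — a minimal NumPy sketch might look like this (the names, shapes, and the 1×1 mixing weights here are illustrative assumptions, not the paper's implementation, which would use a learned convolution):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feature_map, mix_weight=None):
    """CBAM-style spatial attention over a (C, H, W) feature map.

    Average- and max-pool across the channel axis, mix the two pooled
    maps (a learned convolution in practice; a fixed 2-element weight
    vector here for brevity), and gate the input with the sigmoid mask.
    """
    avg_pool = feature_map.mean(axis=0, keepdims=True)      # (1, H, W)
    max_pool = feature_map.max(axis=0, keepdims=True)       # (1, H, W)
    stacked = np.concatenate([avg_pool, max_pool], axis=0)  # (2, H, W)
    if mix_weight is None:
        mix_weight = np.array([0.5, 0.5])                   # illustrative only
    mask = sigmoid(np.tensordot(mix_weight, stacked, axes=1))  # (H, W)
    return feature_map * mask[None, :, :]                   # refined features

features = np.random.rand(8, 16, 16)   # stand-in for a backbone feature map
refined = spatial_attention(features)
print(refined.shape)  # same shape as the input: (8, 16, 16)
```

The mask is shared across channels, so attention re-weights spatial locations rather than feature channels — the property that lets it refine backbone features at negligible parameter cost.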
2. Related Work
3. Proposed Methodology
3.1. Dataset Collection
3.2. Deep Features Extraction
3.3. Modified Spatial Attention
4. Results and Discussions
4.1. Evaluation Metrics
4.2. Performance Analysis with State-of-the-Art Networks
4.3. Time Complexity Analysis
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Dilshad, N.; Khan, T.; Song, J. Efficient Deep Learning Framework for Fire Detection in Complex Surveillance Environment. Comput. Syst. Sci. Eng. 2023, 46, 749–764. [Google Scholar] [CrossRef]
- Shah, S.A.; Seker, D.Z.; Rathore, M.M.; Hameed, S.; Yahia, S.B.; Draheim, D. Towards disaster resilient smart cities: Can internet of things and big data analytics be the game changers? IEEE Access 2019, 7, 91885–91903. [Google Scholar] [CrossRef]
- Rathnayake, R.; Sridarran, P.; Abeynayake, M. Fire risk of apparel manufacturing buildings in Sri Lanka. J. Facil. Manag. 2021, 20, 59–78. [Google Scholar] [CrossRef]
- Nordenfjeldske Development Services (NFDS), Fire Statistics. 2021. Available online: https://www.nfds.go.kr/stat/general.do (accessed on 20 June 2023).
- Insurance Information Institute. 2021. Available online: https://www.iii.org/fact-statistic/facts-statistics-wildfires (accessed on 20 June 2023).
- Dubey, V.; Kumar, P.; Chauhan, N. Forest fire detection system using IoT and artificial neural network. In Proceedings of the International Conference on Innovative Computing and Communications; Springer: Berlin/Heidelberg, Germany, 2019; pp. 323–337. [Google Scholar]
- Wolters, C. California Fires Are Raging: Get the Facts on Wildfires; National Geographic: Washington, DC, USA, 2019. [Google Scholar]
- Guha-Sapir, D.; Hoyois, P.; Wallemacq, P.; Below, R. Annual Disaster Statistical Review 2016: The Numbers and Trends; Centre for Research on the Epidemiology of Disasters: Brussels, Belgium, 2018. [Google Scholar]
- Muhammad, K.; Ahmad, J.; Baik, S.W. Early fire detection using convolutional neural networks during surveillance for effective disaster management. Neurocomputing 2018, 288, 30–42. [Google Scholar] [CrossRef]
- El-Hosseini, M.; ZainEldin, H.; Arafat, H.; Badawy, M. A fire detection model based on power-aware scheduling for IoT-sensors in smart cities with partial coverage. J. Ambient Intell. Humaniz. Comput. 2021, 12, 2629–2648. [Google Scholar] [CrossRef]
- Khan, S.; Muhammad, K.; Mumtaz, S.; Baik, S.W.; de Albuquerque, V.H.C. Energy-efficient deep CNN for smoke detection in foggy IoT environment. IEEE Internet Things J. 2019, 6, 9237–9245. [Google Scholar] [CrossRef]
- Yin, Z.; Wan, B.; Yuan, F.; Xia, X.; Shi, J. A deep normalization and convolutional neural network for image smoke detection. IEEE Access 2017, 5, 18429–18438. [Google Scholar] [CrossRef]
- Sharma, J.; Granmo, O.C.; Goodwin, M.; Fidje, J.T. Deep convolutional neural networks for fire detection in images. In Engineering Applications of Neural Networks, Proceedings of the 18th International Conference, EANN 2017, Athens, Greece, 25–27 August 2017; Springer: Berlin/Heidelberg, Germany, 2017; pp. 183–193. [Google Scholar]
- Shah, S.A.; Seker, D.Z.; Hameed, S.; Draheim, D. The rising role of big data analytics and IoT in disaster management: Recent advances, taxonomy and prospects. IEEE Access 2019, 7, 54595–54614. [Google Scholar] [CrossRef]
- Minh, D.; Wang, H.X.; Li, Y.F.; Nguyen, T.N. Explainable artificial intelligence: A comprehensive review. Artif. Intell. Rev. 2022, 55, 3503–3568. [Google Scholar] [CrossRef]
- Nguyen, T.N.; Lee, S.; Nguyen, P.C.; Nguyen-Xuan, H.; Lee, J. Geometrically nonlinear postbuckling behavior of imperfect FG-CNTRC shells under axial compression using isogeometric analysis. Eur. J. Mech.-A/Solids 2020, 84, 104066. [Google Scholar] [CrossRef]
- Nguyen, T.N.; Nguyen-Xuan, H.; Lee, J. A novel data-driven nonlinear solver for solid mechanics using time series forecasting. Finite Elem. Anal. Des. 2020, 171, 103377. [Google Scholar] [CrossRef]
- Dang, M.; Nguyen, T.N. Digital Face Manipulation Creation and Detection: A Systematic Review. Electronics 2023, 12, 3407. [Google Scholar] [CrossRef]
- Yu, L.; Wang, N.; Meng, X. Real-time forest fire detection with wireless sensor networks. In Proceedings of the 2005 International Conference on Wireless Communications, Networking and Mobile Computing, Wuhan, China, 26 September 2005; Volume 2, pp. 1214–1217. [Google Scholar]
- Podržaj, P.; Hashimoto, H. Intelligent space as a framework for fire detection and evacuation. Fire Technol. 2008, 44, 65–76. [Google Scholar] [CrossRef]
- Jan, H.; Yar, H.; Iqbal, J.; Farman, H.; Khan, Z.; Koubaa, A. Raspberry pi assisted safety system for elderly people: An application of smart home. In Proceedings of the 2020 First International Conference of Smart Systems and Emerging Technologies (SMARTTECH), Riyadh, Saudi Arabia, 3–5 November 2020; pp. 155–160. [Google Scholar]
- Roque, G.; Padilla, V.S. LPWAN based IoT surveillance system for outdoor fire detection. IEEE Access 2020, 8, 114900–114909. [Google Scholar] [CrossRef]
- Malbog, M.A.F.; Lacatan, L.L.; Dellosa, R.M.; Austria, Y.D.; Cunanan, C.F. Edge detection comparison of hybrid feature extraction for combustible fire segmentation: A Canny vs Sobel performance analysis. In Proceedings of the 2020 11th IEEE Control and System Graduate Research Colloquium (ICSGRC), Shah Alam, Malaysia, 8 August 2020; pp. 318–322. [Google Scholar]
- Khan, R.A.; Uddin, J.; Corraya, S.; Kim, J. Machine vision based indoor fire detection using static and dynamic features. Int. J. Control Autom. 2018, 11, 87–98. [Google Scholar]
- Liu, C.B.; Ahuja, N. Vision based fire detection. In Proceedings of the 17th International Conference on Pattern Recognition, ICPR 2004, Cambridge, UK, 23–26 August 2004; Volume 4, pp. 134–137. [Google Scholar]
- Zhang, Z.; Zhao, J.; Zhang, D.; Qu, C.; Ke, Y.; Cai, B. Contour based forest fire detection using FFT and wavelet. In Proceedings of the 2008 International Conference on Computer Science and Software Engineering, Wuhan, China, 12–14 December 2008; Volume 1, pp. 760–763. [Google Scholar]
- Foggia, P.; Saggese, A.; Vento, M. Real-time fire detection for video-surveillance applications using a combination of experts based on color, shape, and motion. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1545–1556. [Google Scholar] [CrossRef]
- Jayashree, D.; Pavithra, S.; Vaishali, G.; Vidhya, J. System to detect fire under surveillanced area. In Proceedings of the 2017 Third International Conference on Science Technology Engineering & Management (ICONSTEM), Chennai, India, 23–24 March 2017; pp. 214–219. [Google Scholar]
- Frizzi, S.; Kaabi, R.; Bouchouicha, M.; Ginoux, J.M.; Moreau, E.; Fnaiech, F. Convolutional neural network for video fire and smoke detection. In Proceedings of the IECON 2016—42nd Annual Conference of the IEEE Industrial Electronics Society, Florence, Italy, 24–27 October 2016; pp. 877–882. [Google Scholar]
- Muhammad, K.; Ahmad, J.; Mehmood, I.; Rho, S.; Baik, S.W. Convolutional neural networks based fire detection in surveillance videos. IEEE Access 2018, 6, 18174–18183. [Google Scholar] [CrossRef]
- Muhammad, K.; Ahmad, J.; Lv, Z.; Bellavista, P.; Yang, P.; Baik, S.W. Efficient deep CNN-based fire detection and localization in video surveillance applications. IEEE Trans. Syst. Man Cybern. Syst. 2018, 49, 1419–1434. [Google Scholar] [CrossRef]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Muhammad, K.; Khan, S.; Elhoseny, M.; Ahmed, S.H.; Baik, S.W. Efficient fire detection for uncertain surveillance environment. IEEE Trans. Ind. Inform. 2019, 15, 3113–3122. [Google Scholar] [CrossRef]
- Aslan, S.; Güdükbay, U.; Töreyin, B.U.; Çetin, A.E. Early wildfire smoke detection based on motion-based geometric image transformation and deep convolutional generative adversarial networks. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 8315–8319. [Google Scholar]
- Hashemzadeh, M.; Zademehdi, A. Fire detection for video surveillance applications using ICA K-medoids-based color model and efficient spatio-temporal visual features. Expert Syst. Appl. 2019, 130, 60–78. [Google Scholar] [CrossRef]
- Xu, G.; Zhang, Y.; Zhang, Q.; Lin, G.; Wang, Z.; Jia, Y.; Wang, J. Video smoke detection based on deep saliency network. Fire Saf. J. 2019, 105, 277–285. [Google Scholar] [CrossRef]
- Majid, S.; Alenezi, F.; Masood, S.; Ahmad, M.; Gündüz, E.S.; Polat, K. Attention based CNN model for fire detection and localization in real-world images. Expert Syst. Appl. 2022, 189, 116114. [Google Scholar] [CrossRef]
- Reddy, G.; Avula, S.; Badri, S. A novel forest fire detection system using fuzzy entropy optimized thresholding and STN-based CNN. In Proceedings of the 2020 International Conference on COMmunication Systems & NETworkS (COMSNETS), Bengaluru, India, 7–11 January 2020. [Google Scholar]
- Kim, B.; Lee, J. A video-based fire detection using deep learning models. Appl. Sci. 2019, 9, 2862. [Google Scholar] [CrossRef]
- Peng, Y.; Wang, Y. Real-time forest smoke detection using hand-designed features and deep learning. Comput. Electron. Agric. 2019, 167, 105029. [Google Scholar] [CrossRef]
- Chino, D.Y.; Avalhais, L.P.; Rodrigues, J.F.; Traina, A.J. Bowfire: Detection of fire in still images by integrating pixel color and texture analysis. In Proceedings of the 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images, Salvador, Brazil, 26–29 August 2015; pp. 95–102. [Google Scholar]
- Saied al Saied Fire Dataset. 2021. Available online: https://www.kaggle.com/datasets/phylake1337/fire-dataset?select=fire_datase (accessed on 20 June 2023).
- Parez, S.; Dilshad, N.; Alanazi, T.M.; Lee, J.W. Towards Sustainable Agricultural Systems: A Lightweight Deep Learning Model for Plant Disease Detection. Comput. Syst. Sci. Eng. 2023, 47, 515–536. [Google Scholar] [CrossRef]
- Parez, S.; Dilshad, N.; Alghamdi, N.S.; Alanazi, T.M.; Lee, J.W. Visual Intelligence in Precision Agriculture: Exploring Plant Disease Detection via Efficient Vision Transformers. Sensors 2023, 23, 6949. [Google Scholar] [CrossRef]
- Khan, H.; Haq, I.U.; Munsif, M.; Mustaqeem; Khan, S.U.; Lee, M.Y. Automated wheat diseases classification framework using advanced machine learning technique. Agriculture 2022, 12, 1226. [Google Scholar] [CrossRef]
- Dilshad, N.; Hwang, J.; Song, J.; Sung, N. Applications and challenges in video surveillance via drone: A brief survey. In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Republic of Korea, 21–23 October 2020; pp. 728–732. [Google Scholar]
- Zahir, S.; Khan, R.U.; Ullah, M.; Ishaq, M.; Dilshad, N.; Ullah, A.; Lee, M.Y. Robust Counting in Overcrowded Scenes Using Batch-Free Normalized Deep ConvNet. Comput. Syst. Sci. Eng. 2023, 46, 2741–2754. [Google Scholar] [CrossRef]
- Dilshad, N.; Ullah, A.; Kim, J.; Seo, J. Locateuav: Unmanned aerial vehicle location estimation via contextual analysis in an iot environment. IEEE Internet Things J. 2022, 10, 4021–4033. [Google Scholar] [CrossRef]
- Dilshad, N.; Song, J. Dual-Stream Siamese Network for Vehicle Re-Identification via Dilated Convolutional layers. In Proceedings of the 2021 IEEE International Conference on Smart Internet of Things (SmartIoT), Jeju, Republic of Korea, 13–15 August 2021; pp. 350–352. [Google Scholar]
- Ullah, M.; Amin, S.U.; Munsif, M.; Safaev, U.; Khan, H.; Khan, S.; Ullah, H. Serious games in science education. A systematic literature review. Virtual Real. Intell. Hardw. 2022, 4, 189–209. [Google Scholar] [CrossRef]
- Khan, H.; Ullah, M.; Al-Machot, F.; Cheikh, F.A.; Sajjad, M. Deep learning based speech emotion recognition for Parkinson patient. Image 2023, 298, 2. [Google Scholar] [CrossRef]
- Munsif, M.; Ullah, M.; Ahmad, B.; Sajjad, M.; Cheikh, F.A. Monitoring neurological disorder patients via deep learning based facial expressions analysis. In IFIP International Conference on Artificial Intelligence Applications and Innovations; Springer: Berlin/Heidelberg, Germany, 2022; pp. 412–423. [Google Scholar]
- Munsif, M.; Afridi, H.; Ullah, M.; Khan, S.D.; Cheikh, F.A.; Sajjad, M. A lightweight convolution neural network for automatic disasters recognition. In Proceedings of the 2022 10th European Workshop on Visual Information Processing (EUVIP), Lisbon, Portugal, 11–14 September 2022; pp. 1–6. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
- Wightman, R.; Touvron, H.; Jégou, H. Resnet strikes back: An improved training procedure in timm. arXiv 2021, arXiv:2110.00476. [Google Scholar]
- Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; pp. 6105–6114. [Google Scholar]
- Zoph, B.; Vasudevan, V.; Shlens, J.; Le, Q.V. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8697–8710. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Yar, H.; Hussain, T.; Khan, Z.A.; Koundal, D.; Lee, M.Y.; Baik, S.W. Vision sensor-based real-time fire detection in resource-constrained IoT environments. Comput. Intell. Neurosci. 2021, 2021, 5195508. [Google Scholar] [CrossRef] [PubMed]
- Khan, A.; Hassan, B.; Khan, S.; Ahmed, R.; Abuassba, A. DeepFire: A Novel Dataset and Deep Transfer Learning Benchmark for Forest Fire Detection. Mob. Inf. Syst. 2022, 2022, 5358359. [Google Scholar] [CrossRef]
Ref. | Overview | Dataset Used | Dataset Availability | Method | Condition |
---|---|---|---|---|---|
[27] | Detects fire by analyzing videos acquired by surveillance cameras | video dataset | ✕ | ML-based method | Indoor, outdoor |
[28] | System to detect fire in a surveillance area | image dataset | ✕ | SVM | Indoor |
[29] | CNN for video fire and smoke detection | video dataset | ✕ | CNN | Outdoor |
[31] | CNN architecture, inspired by SqueezeNet, for fire detection, localization, and semantic understanding | Foggia, Chino datasets | ✓ | Custom CNN | Indoor, outdoor |
[33] | Efficient CNN-based system for fire detection in videos captured in uncertain surveillance scenarios | image dataset | ✓ | CNN | Indoor |
[34] | Vision-based smoke detection using deep convolutional generative adversarial networks (DC-GANs) | video-clip dataset | ✓ | DCGAN | Outdoor |
[35] | Robust ICA K-medoids-based color model to reliably detect all candidate fire regions in a scene | VisiFire, AzarFire datasets | ✓ | ICA K-medoids-based | Indoor, outdoor |
[36] | Video smoke detection based on a deep saliency network | video dataset | ✕ | Deep saliency network | Outdoor |
[37] | Custom framework for fire detection using transfer learning with SOTA CNNs trained on real-world fire-breakout images | combined dataset | ✓ | Attention-based CNN | Indoor |
[38] | STN-based CNN with fuzzy-entropy-optimized thresholding for forest fire detection | image dataset | ✕ | STN-based CNN | Outdoor |
[39] | Neural networks to swiftly identify fire and smoke in both indoor and outdoor settings from video footage | fire, smoke images | ✓ | RCNN, LSTM | Outdoor |
[40] | Combines hand-crafted features and DL features for rapid and precise forest smoke detection | smoke images | ✕ | DL | Outdoor |
Ignited-Flames Dataset | Training | Testing | Validation | Total |
---|---|---|---|---|
Fire | 6456 | 1811 | 737 | 9004 |
Non-Fire | 6076 | 1671 | 655 | 8402 |
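The split in the table above can be sanity-checked directly: the per-class counts sum to the stated totals and correspond to roughly 72/20/8 train/test/validation proportions:

```python
# Per-class image counts from the Ignited-Flames split table.
splits = {
    "Fire":     {"train": 6456, "test": 1811, "val": 737},
    "Non-Fire": {"train": 6076, "test": 1671, "val": 655},
}

for name, s in splits.items():
    total = sum(s.values())
    ratios = {k: round(100 * v / total, 1) for k, v in s.items()}
    print(name, total, ratios)
# Fire:     total 9004, roughly 71.7 / 20.1 / 8.2
# Non-Fire: total 8402, roughly 72.3 / 19.9 / 7.8
```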
Model | FPR ↓ | FNR ↓ | Accuracy (%) ↑ |
---|---|---|---|
Xception | 0.0994 | 0.0195 | 93.69 |
ResNet50 | 0.0733 | 0.0464 | 93.98 |
EfficientNetB0 | 0.0199 | 0.0188 | 95.98 |
NASNetMobile | 0.0122 | 0.0168 | 96.04 |
VGG16 | 0.0017 | 0.0251 | 98.63 |
FlameNet | 0.0022 | 0.0168 | 99.40 |
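The FPR, FNR, and accuracy values reported above follow from the standard confusion-matrix definitions; a minimal sketch (the counts below are illustrative placeholders, not the paper's confusion matrix):

```python
def detection_metrics(tp, fp, tn, fn):
    """False positive rate, false negative rate, and accuracy
    from binary confusion-matrix counts (fire = positive class)."""
    fpr = fp / (fp + tn)                  # non-fire frames flagged as fire
    fnr = fn / (fn + tp)                  # fire frames missed
    acc = (tp + tn) / (tp + fp + tn + fn)
    return fpr, fnr, acc

# Illustrative counts only.
fpr, fnr, acc = detection_metrics(tp=1780, fp=4, tn=1667, fn=31)
print(f"FPR={fpr:.4f} FNR={fnr:.4f} Acc={100 * acc:.2f}%")
```

Note that FPR and FNR pull in opposite directions: VGG16's very low FPR in the table comes with a higher FNR than FlameNet's, which is why accuracy alone is not a sufficient criterion for a safety-critical detector.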
Model | Class | Precision | Recall | F1-Score | Accuracy (%) ↑ | Parameters (M) ↓ |
---|---|---|---|---|---|---|
Xception [55] | Fire | 0.99 | 0.89 | 0.93 | 93.69 | 20.87 |
 | Non-Fire | 0.86 | 0.98 | 0.92 | | |
ResNet50 [56] | Fire | 0.96 | 0.92 | 0.94 | 93.98 | 23.59 |
 | Non-Fire | 0.92 | 0.96 | 0.93 | | |
EfficientNetB0 [57] | Fire | 0.94 | 0.98 | 0.96 | 95.98 | 40.52 |
 | Non-Fire | 0.98 | 0.94 | 0.96 | | |
NASNetMobile [58] | Fire | 0.99 | 0.93 | 0.96 | 96.04 | 4.27 |
 | Non-Fire | 0.92 | 0.99 | 0.95 | | |
VGG16 [59] | Fire | 0.98 | 1.00 | 0.99 | 98.63 | 14.27 |
 | Non-Fire | 1.00 | 0.97 | 0.99 | | |
FlameNet | Fire | 0.98 | 1.00 | 0.99 | 99.40 | 3.23 |
 | Non-Fire | 1.00 | 0.98 | 0.99 | | |
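The parameter reduction quoted in the introduction is consistent with the table above — FlameNet's 3.23 M parameters against NASNetMobile's 4.27 M (the small difference from the stated 24.34% comes from the rounding of the parameter counts in the table):

```python
nasnet_params = 4.27e6    # NASNetMobile, from the table
flamenet_params = 3.23e6  # FlameNet, from the table

reduction = 100 * (nasnet_params - flamenet_params) / nasnet_params
print(f"FlameNet has {reduction:.2f}% fewer parameters than NASNetMobile")
# prints a value of about 24.36%
```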
Method | Class | Precision | Recall | F1-Score | Accuracy (%) ↑ | Parameters (M) ↓ |
---|---|---|---|---|---|---|
Dilshad et al. [1] | Fire | 0.95 | 0.77 | 0.85 | 87.38 | 9.99 |
 | Non-Fire | 0.83 | 0.96 | 0.89 | | |
Yar et al. [60] | Fire | 0.94 | 0.93 | 0.93 | 93.11 | 11.17 |
 | Non-Fire | 0.93 | 0.93 | 0.93 | | |
Sharma et al. [13] | Fire | 0.93 | 1.00 | 0.96 | 96.18 | 23.59 |
 | Non-Fire | 1.00 | 0.93 | 0.96 | | |
Khan et al. [61] | Fire | 0.98 | 0.98 | 0.98 | 98.28 | 20.02 |
 | Non-Fire | 0.98 | 0.98 | 0.98 | | |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Nadeem, M.; Dilshad, N.; Alghamdi, N.S.; Dang, L.M.; Song, H.-K.; Nam, J.; Moon, H. Visual Intelligence in Smart Cities: A Lightweight Deep Learning Model for Fire Detection in an IoT Environment. Smart Cities 2023, 6, 2245-2259. https://doi.org/10.3390/smartcities6050103