Fire Detection in Urban Areas Using Multimodal Data and Federated Learning
Abstract
1. Introduction
- Identification of Fire Using Sensors Dataset with Deep Learning Models: This contribution involves utilizing a dataset collected from gas-detecting sensors for the identification of fire. The use of fundamental deep learning models indicates that the research employs established and widely used techniques in the field of artificial intelligence. The significance lies in the exploration of how sensor data alone, which are traditionally used for fire detection, can be effectively processed and classified using deep learning models. This could contribute to improving the accuracy and speed of fire detection systems in indoor environments.
- Identification of Fire Using Thermal Image Dataset with Deep Learning Models: This contribution focuses on the use of thermal imaging data for fire identification, employing fundamental deep learning models. Thermal imaging can capture temperature changes associated with fires before smoke particles become visible, offering an additional dimension to fire detection. The research highlights the potential of thermal imaging in combination with deep learning models, showcasing how visual information can be a valuable source for fire detection, especially in scenarios where traditional sensors might have limitations.
- Multimodal Fire Identification Using Both Sensors and Image Datasets: This contribution represents the integration of data from both sensors and thermal imaging cameras for fire identification. By combining these modalities, the research aims to create a more robust and comprehensive fire detection system. The multimodal approach addresses the limitations of individual data sources, potentially improving the accuracy and reliability of fire detection by considering multiple aspects such as gas presence and temperature changes.
- Fire Identification Mechanism Based on Federated Learning: The incorporation of federated learning (FL) into fire identification is a significant contribution. FL allows models to be trained across multiple devices without centralizing data, enhancing privacy and security. Safeguarding consumers’ private information is crucial in scenarios such as fire detection, where sensitive data may be involved; FL addresses this by training models collaboratively without exposing raw data to a central server.

Indoor fire detection is vital and demands techniques that surpass conventional detection methods. This research uses federated learning to integrate multimodal sensor data with distributed information, improving the accuracy, efficiency, and privacy of urban fire detection systems. Conventional sensor datasets, thermal imagery, and a privacy-centric federated learning approach are combined to advance the field and meet the growing need for faster and more precise indoor fire detection.
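The multimodal contribution can be illustrated with a minimal late-fusion sketch. The paper does not publish its fusion code, so the weighting rule, function names, and toy probabilities below are illustrative assumptions: each unimodal model (thermal image, gas sensors) emits a fire probability per sample, and a weighted average is thresholded into a fire/non-fire label.

```python
import numpy as np

def late_fusion(p_image: np.ndarray, p_sensor: np.ndarray,
                w_image: float = 0.5) -> np.ndarray:
    """Combine per-sample fire probabilities from two modalities.

    p_image  -- fire probabilities from the thermal-image model
    p_sensor -- fire probabilities from the gas-sensor model
    w_image  -- weight given to the image modality (remainder to sensors)
    """
    fused = w_image * p_image + (1.0 - w_image) * p_sensor
    return (fused >= 0.5).astype(int)  # 1 = fire, 0 = non-fire

# Toy probabilities for three samples from each unimodal model
p_img = np.array([0.9, 0.4, 0.2])
p_sen = np.array([0.8, 0.7, 0.1])
labels = late_fusion(p_img, p_sen)
print(labels)  # [1 1 0]
```

A sample that one modality is unsure about (0.4 from the image branch) can still be flagged when the other modality is confident, which is the robustness argument the contribution makes.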
2. Related Work
2.1. Fire Detection
2.2. Federated Learning
3. Proposed Work
3.1. Dataset
3.2. Multimodal Fire Detection Dataset
3.2.1. Fire Sensors Integration
3.2.2. Thermal Camera
3.3. Preprocessing of Multimodal Data
3.4. Data Classification for the Multimodal System Using DL Models
3.5. Multimodal Data Classification in the Federated Ecosystem
- A group of devices transmits an availability message indicating that they are ready to perform an FL task.
- At time ti, the FL server selects a portion of these available devices and distributes the deep learning (DL) model to them.
- Following that, each device runs a training procedure using the local data to create a new local ML model.
- Based on the aforementioned training procedure, each device communicates the updated parameters of its machine learning model.
- The updated global DL model for time ti is then calculated by the FL server by combining the local models.
- All devices receive the updated global DL model from the FL server.
- This process is repeated every round, with the FL server deciding the update frequency.
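The round structure above corresponds to the FedAvg scheme of McMahan et al. (see the reference list). A minimal sketch of the server loop, under simplifying assumptions not taken from the paper (a linear model whose parameters are a plain vector, full participation of five clients, noiseless synthetic data, size-weighted averaging):

```python
import numpy as np

def local_update(global_params: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One client's local training: a few gradient steps on a linear model."""
    w = global_params.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fedavg_round(global_params: np.ndarray, clients) -> np.ndarray:
    """One FL round: clients train locally on private data, then the
    server averages their parameters weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in clients:  # raw (X, y) never leaves the client in real FL
        updates.append(local_update(global_params, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):  # five clients, matching the experimental setup
    X = rng.normal(size=(40, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(50):  # repeated rounds converge toward true_w
    w = fedavg_round(w, clients)
print(np.round(w, 2))  # [ 2. -1.]
```

Only model parameters cross the network; each client's raw data stays local, which is the privacy property the section relies on.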
4. Experimental Results, Analysis, and Discussion
- Accuracy indicates how well the model can identify the correct label for each sample, and it is calculated using Equation (3) [4,5,9,13,14]: Accuracy = (TP + TN)/(TP + TN + FP + FN)
- Precision is a performance metric that calculates the proportion of correctly identified positive samples to the total number of positive samples predicted by the model. It measures how accurate the model is in identifying the relevant samples. The formula for precision is shown in Equation (4) [4,5,9,13,14]: Precision = TP/(TP + FP)
- Recall, also known as sensitivity or true positive rate, is a metric that measures the proportion of actual positive samples that are correctly identified by the model. It is calculated by dividing the number of true positive predictions by the total number of actual positive samples in the dataset, as shown in Equation (5) [4,5,9,13,14]: Recall = TP/(TP + FN)
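The three metrics follow directly from the confusion-matrix counts; a small self-contained helper (the function name and example counts are illustrative, not from the paper):

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Accuracy, precision, and recall from confusion-matrix counts,
    per Equations (3)-(5)."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),   # Eq. (3)
        "precision": tp / (tp + fp),                   # Eq. (4)
        "recall": tp / (tp + fn),                      # Eq. (5)
    }

# Example: 90 fires detected, 10 missed, 5 false alarms, 95 true negatives
m = classification_metrics(tp=90, tn=95, fp=5, fn=10)
print(m)  # accuracy 0.925, precision ≈ 0.947, recall 0.9
```

For a fire alarm, recall is usually the critical figure (a missed fire is costlier than a false alarm), while precision governs how often crews respond needlessly.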
4.1. Analysis on Unimodal Data
4.1.1. Image Data Analysis
4.1.2. Sensors Data Analysis
4.2. Multimodal (Image and Sensors) Data Analysis
4.3. Analysis of Multimodal Data on Federated Learning Ecosystem
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Jain, A.; Srivastava, A. Privacy-preserving efficient fire detection system for indoor surveillance. IEEE Trans. Ind. Inform. 2021, 18, 3043–3054. [Google Scholar] [CrossRef]
- Foggia, P.; Saggese, A.; Vento, M. Real-time fire detection for video-surveillance applications using a combination of experts based on color, shape, and motion. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1545–1556. [Google Scholar] [CrossRef]
- Mothukuri, V.; Parizi, R.M.; Pouriyeh, S.; Huang, Y.; Dehghantanha, A.; Srivastava, G. A survey on security and privacy of federated learning. Future Gener. Comput. Syst. 2021, 115, 619–640. [Google Scholar] [CrossRef]
- KhoKhar, F.A.; Shah, J.H.; Khan, M.A.; Sharif, M.; Tariq, U.; Kadry, S. A review on federated learning towards image processing. Comput. Electr. Eng. 2022, 99, 107818. [Google Scholar] [CrossRef]
- Caldas, S.; Konečny, J.; McMahan, H.B.; Talwalkar, A. Expanding the reach of federated learning by reducing client resource requirements. arXiv 2018, arXiv:1812.07210. [Google Scholar]
- Fleming, J.M. Photoelectric and Ionization Detectors—A Review of The Literature Re–Visited. Retrieved Dec. 2004, 31, 2010. [Google Scholar]
- Keller, A.; Rüegg, M.; Forster, M.; Loepfe, M.; Pleisch, R.; Nebiker, P.; Burtscher, H. Open photoacoustic sensor as smoke detector. Sens. Actuators B Chem. 2005, 104, 1–7. [Google Scholar] [CrossRef]
- Yar, H.; Ullah, W.; Khan, Z.A.; Baik, S.W. An Effective Attention-based CNN Model for Fire Detection in Adverse Weather Conditions. ISPRS J. Photogramm. Remote Sens. 2023, 206, 335–346. [Google Scholar] [CrossRef]
- Dilshad, N.; Khan, T.; Song, J. Efficient deep learning framework for fire detection in complex surveillance environment. Comput. Syst. Sci. Eng. 2023, 46, 749–764. [Google Scholar] [CrossRef]
- Yar, H.; Khan, Z.A.; Ullah FU, M.; Ullah, W.; Baik, S.W. A modified YOLOv5 architecture for efficient fire detection in smart cities. Expert Syst. Appl. 2023, 231, 120465. [Google Scholar] [CrossRef]
- Dilshad, N.; Khan, S.U.; Alghamdi, N.S.; Taleb, T.; Song, J. Towards Efficient Fire Detection in IoT Environment: A Modified Attention Network and Large-Scale Dataset. IEEE Internet Things J. 2023. [Google Scholar] [CrossRef]
- Yar, H.; Hussain, T.; Agarwal, M.; Khan, Z.A.; Gupta, S.K.; Baik, S.W. Optimized dual fire attention network and medium-scale fire classification benchmark. IEEE Trans. Image Process. 2022, 31, 6331–6343. [Google Scholar] [CrossRef] [PubMed]
- Nadeem, M.; Dilshad, N.; Alghamdi, N.S.; Dang, L.M.; Song, H.K.; Nam, J.; Moon, H. Visual Intelligence in Smart Cities: A Lightweight Deep Learning Model for Fire Detection in an IoT Environment. Smart Cities 2023, 6, 2245–2259. [Google Scholar] [CrossRef]
- Hu, Y.; Fu, X.; Zeng, W. Distributed Fire Detection and Localization Model Using Federated Learning. Mathematics 2023, 11, 1647. [Google Scholar] [CrossRef]
- Wang, M.; Jiang, L.; Yue, P.; Yu, D.; Tuo, T. FASDD: An Open-access 100,000-level Flame and Smoke Detection Dataset for Deep Learning in Fire Detection. Earth Syst. Sci. Data Discuss. 2023, 1–26. [Google Scholar] [CrossRef]
- Tamilselvi, M.; Ramkumar, G.; Prabu, R.T.; Anitha, G.; Mohanavel, V. A Real-time Fire Recognition Technique Using an Improved Convolutional Neural Network Method. In Proceedings of the 2023 Eighth International Conference on Science Technology Engineering and Mathematics (ICONSTEM), Chennai, India, 6–7 April 2023; pp. 1–8. [Google Scholar]
- Bhamra, J.K.; Anantha Ramaprasad, S.; Baldota, S.; Luna, S.; Zen, E.; Ramachandra, R.; Kim, H.; Baldota, C.; Arends, C.; et al. Multimodal Wildland Fire Smoke Detection. Remote Sens. 2023, 15, 2790. [Google Scholar] [CrossRef]
- Nakıp, M.; Güzeliş, C. Development of a multi-sensor fire detector based on machine learning models. In Proceedings of the 2019 Innovations in Intelligent Systems and Applications Conference (ASYU), Izmir, Turkey, 31 October–2 November 2019; pp. 1–6. [Google Scholar]
- Majid, S.; Alenezi, F.; Masood, S.; Ahmad, M.; Gunduz, E.S.; Polat, K. Attention-based CNN model for fire detection and localization in real-world images. Expert Syst. Appl. 2022, 189, 116114. [Google Scholar] [CrossRef]
- Yang, Z.; Bu, L.; Wang, T.; Yuan, P.; Jineng, O. Indoor video flame detection based on lightweight convolutional neural network. Pattern Recognit. Image Anal. 2020, 30, 551–564. [Google Scholar] [CrossRef]
- Li, Y.; Su, Y.; Zeng, X.; Wang, J. Research on multi-sensor fusion indoor fire perception algorithm based on improved TCN. Sensors 2022, 22, 4550. [Google Scholar] [CrossRef]
- Chen, S.; Ren, J.; Yan, Y.; Sun, M.; Hu, F.; Zhao, H. Multi-sourced sensing and support vector machine classification for effective detection of fire hazard in early stage. Comput. Electr. Eng. 2022, 101, 108046. [Google Scholar] [CrossRef]
- Hussain, T.; Dai, H.; Gueaieb, W.; Sicklinger, M.; De Masi, G. UAV-based Multi-scale Features Fusion Attention for Fire Detection in Smart City Ecosystems. In Proceedings of the 2022 IEEE International Smart Cities Conference (ISC2), Pafos, Cyprus, 26–29 September 2022; pp. 1–4. [Google Scholar]
- Tao, J.; Gao, Z.; Guo, Z. Training Vision Transformers in Federated Learning with Limited Edge-Device Resources. Electronics 2022, 11, 2638. [Google Scholar] [CrossRef]
- Sridhar, P.; Thangavel, S.K.; Parameswaran, L.; Oruganti VR, M. Fire Sensor and Surveillance Camera-Based GTCNN for Fire Detection System. IEEE Sens. J. 2023, 23, 7626–7633. [Google Scholar] [CrossRef]
- McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. Available online: https://proceedings.mlr.press/v54/mcmahan17a?ref=https://githubhelp.com (accessed on 15 December 2023).
- Govil, K.; Welch, M.L.; Ball, J.T.; Pennypacker, C.R. Preliminary results from a wildfire detection system using deep learning on remote camera images. Remote Sens. 2020, 12, 166. [Google Scholar] [CrossRef]
- Cao, Y.; Yang, F.; Tang, Q.; Lu, X. An attention-enhanced bidirectional LSTM for early forest fire smoke recognition. IEEE Access 2019, 7, 154732–154742. [Google Scholar] [CrossRef]
- Shi, N.; Lai, F.; Kontar, R.A.; Chowdhury, M. Fed-ensemble: Improving generalization through model ensembling in federated learning. arXiv 2021, arXiv:2107.10663. [Google Scholar]
- Sousa, M.J.; Moutinho, A.; Almeida, M. Wildfire detection using transfer learning on augmented datasets. Expert Syst. Appl. 2020, 142, 112975. [Google Scholar] [CrossRef]
- Wang, L.; Wang, W.; Li, B. CMFL: Mitigating communication overhead for federated learning. In Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), Dallas, TX, USA, 7–10 July 2019; pp. 954–964. [Google Scholar]
- Available online: https://www.kaggle.com/datasets/phylake1337/fire-dataset (accessed on 10 January 2024).
- Available online: https://www.kaggle.com/datasets/deepcontractor/smoke-detection-dataset/discussion (accessed on 10 January 2024).
- Available online: https://data.mendeley.com/datasets/f3mjnbm9b3/1 (accessed on 10 January 2024).
- Havens, K.J.; Sharp, E.J. Thermal Imaging Techniques to Survey and Monitor Animals in the Wild: A Methodology; Academic Press: Cambridge, MA, USA, 2015. [Google Scholar]
- Konečný, J.; McMahan, H.B.; Yu, F.X.; Richtárik, P.; Suresh, A.T.; Bacon, D. Federated learning: Strategies for improving communication efficiency. arXiv 2016, arXiv:1610.05492. [Google Scholar]
- Bonawitz, K.; Eichner, H.; Grieskamp, W.; Huba, D.; Ingerman, A.; Ivanov, V.; Kiddon, C.; Konečný, J.; Mazzocchi, S.; McMahan, B.; et al. Towards federated learning at scale: System design. Proc. Mach. Learn. Syst. 2019, 1, 374–388. [Google Scholar]
- Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol. (TIST) 2019, 10, 1–19. [Google Scholar] [CrossRef]
- Zhao, Y.; Li, M.; Lai, L.; Suda, N.; Civin, D.; Chandra, V. Federated learning with non-iid data. arXiv 2018, arXiv:1806.00582. [Google Scholar] [CrossRef]
- Kukreja, V.; Kumar, D.; Kaur, A. GAN-based synthetic data augmentation for increased CNN performance in Vehicle Number Plate Recognition. In Proceedings of the 2020 4th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 5–7 November 2020; pp. 1190–1195. [Google Scholar]
- Dhiman, P.; Kukreja, V.; Manoharan, P.; Kaur, A.; Kamruzzaman, M.M.; Dhaou, I.B.; Iwendi, C. A novel deep learning model for detection of severity level of the disease in citrus fruits. Electronics 2022, 11, 495. [Google Scholar] [CrossRef]
Ref. | Contributions | Methods Used | Results |
---|---|---|---|
[8] | Multiple fire detection methods utilizing scalar and vision sensors have been discussed. | Scalar sensor-based approaches analyze data from flame, smoke, temperature, and particle sensors to detect fires. | Scalar sensor-based approaches are cost-effective and simple to implement but are only suitable for indoor scenarios and require human interaction for alarm confirmation |
[9] | Proposed an efficient VGG-based model (E-FireNet) for fire detection. Conducted comprehensive experiments and compared performance with state-of-the-art models. Utilized an efficient CNN model for fire detection and classification. | Preprocessing of collected fire images to increase the number of samples. | E-FireNet achieves 0.98 accuracy, 1 precision, 0.99 recall, and 0.99 F1-score. The proposed model shows convincing performance in terms of accuracy, model size, and execution time. |
[10] | Proposes a modified YOLOv5s model for efficient fire detection in smart cities. Re-implemented 12 different state-of-the-art object detection models for comparison. | Modified YOLOv5s model with integrated Stem module, smaller kernels, and P6 module. | The proposed modified YOLOv5s model achieves promising results with lower complexity and smaller model size, and better detection performance than the other state-of-the-art object detection models. |
[11] | The optimized fire attention network (OFAN) is proposed as a lightweight and efficient convolutional neural network (CNN) for real-time fire detection. - It uses dilated variants of convolution layers and additional dense layers to capture global context and optimize weight. | The OFAN is calibrated for real-time processing using a lightweight feature extractor backbone model. | The OFAN outperforms state-of-the-art fire detection models, achieving high accuracies on three widely used fire detection datasets. It achieves accuracies of 96.23, 96.54, and 94.63 on BoWFire, FD, and the newly proposed DiverseFire dataset, respectively. |
[12] | The paper introduces the optimized dual fire attention network (DFAN) for efficient fire detection and provides a medium-scale fire classification benchmark dataset. | Dual fire attention network (DFAN) for effective and efficient fire detection. - Modified spatial attention mechanism to enhance discrimination potential of fire and non-fire objects. | The DFAN provides the best results compared to 21 state-of-the-art methods. The proposed dataset advances traditional fire detection datasets by considering multiple classes. |
[13] | The authors propose a novel efficient lightweight network called FlameNet for fire detection in smart city environments. | FlameNet works in two steps: first, it detects the fire using the FlameNet network, and then an alert is sent to the fire, medical, and rescue departments. | The newly developed Ignited-Flames dataset is used for analysis, and the proposed FlameNet achieves 99.40% accuracy for fire detection. The empirical findings and analysis of factors such as model accuracy, size, and processing time support the suitability of the FlameNet model for fire detection. |
[14] | Proposed an improved federated learning algorithm (FedVIS) for fire detection and localization. | Improved federated learning algorithm incorporating computer vision: FedVIS - Federated dropout and gradient selection algorithm to reduce communication overhead. | The proposed FedVIS outperforms other federated learning methods in terms of detection effect and communication costs. The model’s robustness and generalization to heterogeneous data are improved. |
[15] | The paper is about the construction of a large-scale Flame and Smoke Detection Dataset (FASDD) for deep learning in fire detection. | Construction of a 100,000-level Flame and Smoke Detection Dataset (FASDD). Formulation of a unified workflow for preprocessing, annotation, and quality control of fire samples. | Most object detection models trained on FASDD achieve satisfactory fire detection results. YOLOv5x achieves nearly 80% [email protected] accuracy on heterogenous images. |
[16] | The paper focuses on using an Improved convolutional neural network (ICNN) and LGBM Classifier for real-time fire recognition. | Improved convolutional neural network (ICNN). -LGBM Classifier. | The suggested technique effectively recognized and alerted the public to the occurrence of devastating fires. The suggested technology proved to be effective in protecting smart cities and detecting fires in the urban environment. |
[17] | The paper is about the development of a deep learning model called SmokeyNet for detecting smoke from wildland fires using multiple data sources. | SmokeyNet: Baseline model for smoke detection using image sequences. - SmokeyNet Ensemble: Combines baseline model with GOES-based fire predictions and weather data. | The paper presents the results of experiments on the SmokeyNet model. The results show that incorporating weather data improves performance in terms of accuracy and time-to-detect. |
[18] | Proposed a method to reduce false positive fire alarms and designed an electronic circuit with 6 sensors to detect 7 physical sensory inputs. | Implementation of fusing and classifying sensor data using machine learning models. - Comparison of multilayer perceptron, support vector machine, and radial basis function network. | Multilayer perceptron is the best model with 96.875% classification accuracy. |
[19] | A vision-based fire detection framework for private spaces is proposed. - The framework preserves the privacy of occupants using a near infra-red camera. | Vision-based monitoring with convolutional neural network and other machine learning algorithms - Near infra-red camera for image capture while preserving privacy. | Developed a novel system incorporating spatial and temporal properties of fire. Validated the lightweight nature of the system through a real-world implementation. |
[20] | The paper proposes an indoor fire video recognition method based on a multichannel convolutional neural network. | Designing a convolutional neural network (CNN) model. Recognition training on image features of each channel. Fire identification using flame color feature, circularity feature, and area change feature. | Solves the problem of low recognition accuracy in existing fire video recognition technology. Can be applied to indoor fire video recognition. |
[21] | Proposed a multisensor fusion indoor fire perception algorithm named TCN-AAP-SVM. Considered time dimension information through trend extraction and sliding window. Addressed shortcomings of existing fire classification algorithms. | Improved temporal convolutional network (TCN). Adaptive average pooling (AAP)—support vector machine (SVM) classifier. | Proposed algorithm improves fire classification accuracy by more than 2.5%. Proposed algorithm improves fire detection speed by more than 15%. Outperforms TCN, BP neural network, and LSTM in accuracy and speed. |
[22] | Proposed system achieves high precision, recall, and F1 scores for fire detection. System reduces false alarms and improves early fire detection. | Multimodal sensors are integrated to acquire data of carbon monoxide, smoke, temperature, and humidity. Support Vector Machine (SVM) is used for data analysis and classification. | Precision: 99.8%—Recall: 99.6%—F1 score: 99.7%. |
[23] | Effective fire detection using deep learning techniques in smart cities. Use of unmanned aerial vehicles (UAVs) for wide area coverage. Highlighting the most important fire regions using multiheaded self-attention. | Deep multiscale features from a backbone model are employed. Attention mechanism is applied for accurate fire detection. Features fusion is used to represent the image effectively. Multiheaded self-attention enhances the fused features. | Preliminary experimental results demonstrate effective performance of the proposed model. The proposed model outperforms rivals in fire detection accuracy. |
[24] | Upgraded the classic model by adding LGBM in the final layer. Developed a real-time fire catastrophe monitoring system. Altered the network structure for effective fire recognition under different weather conditions. | Improved convolutional neural network (ICNN)—LGBM Classifier—Data augmentation methods. Automated color enhancement—Parameter reductions. | The technique effectively detects fire areas and provides early warnings. The suggested technology is effective in protecting smart cities and detecting fires in urban environments. Tested the system against previously published fire detection methods. |
[25] | Proposed a novel optimized Gaussian probability-based threshold convolutional neural network (GTCNN) model for fire detection. | Sensor-based methods for fire detection. Computer vision-based approaches using surveillance camera-based video (SV). | The proposed optimized GTCNN achieves a detection accuracy of 98.23%. The optimized GTCNN outperforms other deep learning networks in terms of accuracy. |
S.No. | Component | Configurations |
---|---|---|
1 | 1 Server Computer | Core i7, 32 GB RAM, NVIDIA 3070 8 GB Graphics Memory |
2 | 5 Client Computers | Core i5, 16 GB RAM, NVIDIA 1650 4 GB Graphics Memory |
3 | Python Programming | Version 3.7 |
4 | Keras | Version 3.0 |
5 | TensorFlow | Version 2.14 |
6 | TensorFlow Federated | Version 1.0 |
7 | Camera | 5 MP HD |
Category | Number of Image Entries | Number of Sensor Data Entries |
---|---|---|
Fire | 5000 | 5000 |
Non-Fire | 5000 | 5000 |
Used Sensor | Sensitive To |
---|---|
Sensor1/MQ-2 | Liquefied petroleum gas, methane, butane, smoke |
Sensor2/MQ-3 | Smoke, ethanol, alcohol |
Sensor3/MQ-4 | Methane, CNG |
Sensor5/MQ-5 | Liquefied petroleum gas, natural gas |
Sensor6/MQ-6 | Liquefied petroleum gas, butane |
Sensor7/MQ-7 | Carbon monoxide |
Sensor8/MQ-8 | Hydrogen |
Sensor9/MQ-9 | Carbon monoxide, flammable gases |
Sensor10/MQ-135 | Air quality (CO, ammonia, benzene, alcohol, smoke) |
Sensor11/MQ-138 | Benzene, toluene, alcohol, acetone, propane, formaldehyde, hydrogen |
Sensor12/MQ-139 | Infra-red flame |
Model | Accuracy | Val Accuracy | Loss | Val Loss | Precision | Val Precision | Recall | Val Recall |
---|---|---|---|---|---|---|---|---|
Convolutional Neural Network | 94.95 | 90.6 | 0.396 | 0.3878 | 94.95 | 90.6 | 94.95 | 90.6 |
DenseNet201 | 99.66 | 100 | 0.0843 | 0.0641 | 99.66 | 100 | 99.66 | 100 |
MobileNetV2 | 100 | 99.33 | 0.1018 | 0.1113 | 100 | 99.33 | 100 | 99.33 |
XceptionNet | 99.92 | 97.99 | 0.09 | 0.1148 | 99.92 | 97.99 | 99.92 | 97.99 |
Model | Accuracy | Val Accuracy | Loss | Val Loss | Precision | Val Precision | Recall | Val Recall |
---|---|---|---|---|---|---|---|---|
BiLSTM_Dense | 94.71 | 95.58 | 0.13 | 0.22 | 94.71 | 95.58 | 94.71 | 95.58 |
Dense | 95.15 | 94.46 | 0.14 | 0.22 | 95.15 | 94.46 | 95.15 | 94.46 |
LSTM_Dense | 94.19 | 95.84 | 0.14 | 0.21 | 94.19 | 95.84 | 94.19 | 95.84 |
Model | Accuracy | Val Accuracy | Loss | Val Loss | Precision | Val Precision | Recall | Val Recall |
---|---|---|---|---|---|---|---|---|
Multimodal | 100 | 92 | 0.06 | 0.2 | 100 | 92 | 100 | 92 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sharma, A.; Kumar, R.; Kansal, I.; Popli, R.; Khullar, V.; Verma, J.; Kumar, S. Fire Detection in Urban Areas Using Multimodal Data and Federated Learning. Fire 2024, 7, 104. https://doi.org/10.3390/fire7040104