A Lightweight Fire Detection Framework for Edge Visual Sensors Using Small-Sample Domain Adaptation
Abstract
1. Introduction
- A physically interpretable multi-feature fusion framework for fire detection is proposed, which jointly exploits color statistics (HSI), dynamic texture descriptors (LBP), and frequency-domain energy features (2D-DWT). Unlike purely data-driven approaches, the proposed feature design is explicitly motivated by the non-rigid fluid characteristics and flickering dynamics of flames, improving robustness against color-similar and illumination-induced distractors.
- A small-sample supervised domain adaptation strategy based on Adaptive SVM (A-SVM) is developed to address severe domain shift between laboratory environments and real-world surveillance scenes. By introducing a regularization constraint that preserves source-domain knowledge while adapting to sparse labeled target samples, the proposed method achieves effective cross-domain transfer with minimal annotation cost.
- Extensive experiments under challenging daytime interference and nighttime low-illumination scenarios demonstrate the effectiveness of the proposed approach, showing significant improvements in precision, recall, and F1-score over traditional rule-based methods and non-adaptive SVM baselines. The results verify that reliable cross-domain fire detection can be achieved without relying on large-scale retraining or deep neural networks.
2. Multi-Dimensional Feature Engineering
2.1. Image Preprocessing and Candidate Extraction
2.1.1. Motion Foreground Extraction Based on Adaptive Gaussian Mixture Model
2.1.2. Color Saliency Screening Based on HSI Space
- Physical Criteria: Through the analysis of spectral characteristics from a large number of fire samples, we discovered that the core region of a real flame exhibits “high brightness and high saturation.” In contrast, common strong light interferences (such as streetlights or glass reflections), while possessing high brightness, often exhibit lower saturation due to a significant white light component in their spectrum.
- Threshold Filtering: Based on these physical differences, we establish an adaptive saturation threshold to perform secondary filtering on the motion regions extracted by GMM. This step effectively filters out the majority of moving objects that do not possess fire-like colors (e.g., red vehicles), and the resulting ROI serves as the input for the subsequent multi-feature fusion module.
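To make the two-stage screening concrete, the following is a minimal sketch, assuming OpenCV's MOG2 background subtractor as the adaptive GMM and the HSV space as a stand-in for HSI; the threshold values are illustrative assumptions, not the tuned settings used in our experiments.

```python
import cv2
import numpy as np

# Adaptive GMM background subtractor (OpenCV's MOG2 variant).
bg_model = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                              detectShadows=False)

def extract_candidates(frame_bgr, sat_thresh=60, val_thresh=150):
    """Binary mask of moving pixels that are both bright and saturated."""
    motion_mask = bg_model.apply(frame_bgr)              # GMM foreground
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)     # HSV as HSI proxy
    sat, val = hsv[:, :, 1], hsv[:, :, 2]
    color_mask = ((sat > sat_thresh) & (val > val_thresh)).astype(np.uint8) * 255
    roi_mask = cv2.bitwise_and(motion_mask, color_mask)  # secondary filtering
    # Morphological opening suppresses isolated noise pixels.
    return cv2.morphologyEx(roi_mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```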
2.2. Multi-Feature Fusion
- HSI-based Color Statistical Features (6-dimensional): Although color is susceptible to interference, it remains the most intuitive characteristic of fire. We convert the image to the HSI (Hue–Saturation–Intensity) space. Unlike the RGB space, where color and brightness are coupled, the HSI space decouples chromatic information from intensity. Experiments indicate that the core region of a flame exhibits extremely high saturation (S values often reach as high as 202 on a 0–255 scale), whereas high-intensity distractors such as white road signs and metallic reflections, despite their high brightness, typically have low saturation (S < 50) due to the significant white-light component in their spectrum. We extract the first moment (mean) and second moment (standard deviation) of each of the three HSI channels, totaling 6 dimensions, serving as the fundamental criteria for distinguishing “real fire” from “strong light.”
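As an illustration of the 6-dimensional descriptor, the sketch below computes the two moments per channel; it uses OpenCV's HSV conversion as an approximation of HSI (with V standing in for I), which is an assumption rather than the paper's exact conversion.

```python
import cv2
import numpy as np

def hsi_color_moments(roi_bgr):
    """6-D color feature: per-channel mean and standard deviation in HSV,
    used here as an approximation of HSI."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    pixels = hsv.reshape(-1, 3)
    means = pixels.mean(axis=0)   # first moments of H, S, I(~V)
    stds = pixels.std(axis=0)     # second moments
    return np.concatenate([means, stds])  # shape (6,)
```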
- LBP-based Dynamic Texture Features (10-dimensional): To quantify the “disorder” and “random turbulence” characteristics of the flame surface, this paper introduces the Local Binary Pattern (LBP) operator. LBP is a texture descriptor based on grayscale invariance, with its core advantage being the ability to capture local micro-structures of an image at a very low computational cost.
- Algorithm Mechanism: For each pixel in the image, a 3×3 neighborhood window is defined. The grayscale value of the central pixel is used as a threshold and compared with its 8 surrounding neighbors: if a neighbor’s value is greater than or equal to the central value, it is marked as 1; otherwise, it is marked as 0. This encodes the grayscale differences in the neighborhood as an 8-bit binary number (the LBP code), reflecting the micro-texture pattern (flat area, edge, corner, etc.) around that point.
- Statistical Histogram: For the entire ROI (Region of Interest), we compile the distribution of LBP values for all pixels to construct an LBP histogram. For flames, their internal violent turbulent motion and non-rigid deformation lead to a high degree of randomness in texture patterns across time and space, manifesting as a high-entropy distribution across multiple specific patterns in the histogram. In contrast, neon lights, street lamps, or static red objects typically possess regular artificial textures, and their LBP histograms often exhibit peaks at a few “regular patterns.” By extracting the LBP histogram (reduced to 10-dimensional features in this study), the model can effectively capture this difference in “texture entropy,” thereby eliminating interference from regular textures.
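One common way to obtain exactly 10 histogram bins, shown in the sketch below, is the “uniform” LBP with 8 neighbors, which maps all codes to P + 2 = 10 patterns; the paper does not specify its reduction scheme, so this choice is an assumption.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_roi, P=8, R=1):
    """10-D texture feature: normalized histogram of uniform LBP codes.
    With P = 8 the 'uniform' method yields P + 2 = 10 pattern bins."""
    codes = local_binary_pattern(gray_roi, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist  # high-entropy for flames, peaked for regular textures
```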
- Frequency-Domain Energy Features Based on 2D Discrete Wavelet Transform (2D-DWT): Static features often struggle to accurately describe the “high-frequency flickering” of flame edges caused by airflow disturbances. To address this, this study applies the 2D Discrete Wavelet Transform to convert images from the spatial domain to the frequency domain for multi-resolution analysis.
- Multi-Band Decomposition: By employing Haar or Daubechies wavelet basis functions, the image undergoes single-level decomposition via high-pass and low-pass filter banks to obtain four sub-bands: the low-frequency approximation component (LL), and the high-frequency detail components in three directions: horizontal (LH), vertical (HL), and diagonal (HH).
- Low-Frequency Energy (LL): Reflects the general profile and overall brightness distribution of the image, representing the macroscopic morphology of the flame.
- High-Frequency Energy (LH, HL, HH): Captures abrupt changes in image grayscale values, corresponding to the edges and details of the object.
- Dynamic Energy Criteria: Because a flame is a partially ionized gas (plasma), its edge is perturbed by thermal convection, generating irregular flickering at a frequency of approximately 10–15 Hz. This physical phenomenon manifests in the frequency domain as substantial energy in the high-frequency components (LH, HL, HH), with this energy fluctuating drastically over time. The high-frequency energy is typically calculated by summing the squares of the coefficients of each detail sub-band: $E_{\text{high}} = \sum_{i,j} \left( LH_{i,j}^{2} + HL_{i,j}^{2} + HH_{i,j}^{2} \right)$.
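A minimal sketch of the sub-band energy computation with PyWavelets is given below; the paper's full 10-dimensional wavelet vector is not spelled out (it plausibly includes temporal fluctuation statistics), so only the per-frame spatial energies are shown.

```python
import numpy as np
import pywt

def wavelet_energies(gray_roi):
    """Single-level 2D-DWT (Haar basis); energy = sum of squared coefficients.
    The last entry aggregates the flicker-sensitive detail bands LH, HL, HH."""
    LL, (LH, HL, HH) = pywt.dwt2(gray_roi.astype(np.float32), "haar")
    e = lambda band: float(np.sum(band ** 2))
    e_ll, e_lh, e_hl, e_hh = e(LL), e(LH), e(HL), e(HH)
    return np.array([e_ll, e_lh, e_hl, e_hh, e_lh + e_hl + e_hh])
```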
- Construction of the Feature Fusion Layer: To integrate the aforementioned heterogeneous features, as illustrated by the “Fusion Layer” in the center of Figure 2 and Figure 3, we adopted an “early fusion” strategy. Specifically, prior to inputting into the classifier, the 6-dimensional color vector, the 10-dimensional LBP texture vector, and the 10-dimensional wavelet energy vector are concatenated to form a dense 26-dimensional super-feature vector. This fusion approach maximizes the preservation of original physical information from multiple perspectives, enabling the subsequent A-SVM classifier to simultaneously seek the optimal decision boundary within the color, texture, and frequency domain spaces, thereby achieving a multi-dimensional perception of the fire target.
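Under the hypothetical helper functions sketched above, early fusion reduces to a single concatenation (the 26-D layout assumes the paper's 10-D wavelet block; the wavelet sketch above returns only 5 spatial energies):

```python
import cv2
import numpy as np

def fused_feature_vector(roi_bgr):
    """Early fusion: concatenate color, texture, and wavelet-energy features
    into one dense vector for the downstream (A-)SVM classifier."""
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    return np.concatenate([
        hsi_color_moments(roi_bgr),  # 6-D color moments
        lbp_histogram(gray),         # 10-D LBP histogram
        wavelet_energies(gray),      # wavelet energies (paper: 10-D)
    ])
```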
2.3. Baseline SVM Classifier
3. Small-Sample Domain Adaptation Strategy (A-SVM)
3.1. Theoretical Framework
- Regularization term ($\tfrac{1}{2}\lVert \Delta w \rVert^{2}$): Acts as a constraint (like a spring) that prevents the new model from deviating too far from the source knowledge $f^{s}(x)$, avoiding overfitting on sparse target samples.
- Data adaptation term ($C \sum_{i} \xi_i$): Drives the decision boundary to adjust to the specific distribution of the target-domain samples.
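For reference, the classical A-SVM formulation from which these two terms derive (Yang et al., cited in the references) can be written as below; we assume this corresponds to the paper's Equation (2).

```latex
\min_{\Delta w,\,\xi}\;\; \frac{1}{2}\,\lVert \Delta w \rVert^{2}
  \;+\; C \sum_{i=1}^{N_t} \xi_i
\qquad \text{s.t.}\quad
y_i \left( f^{s}(x_i) + \Delta w^{\top} \phi(x_i) \right) \ge 1 - \xi_i,
\quad \xi_i \ge 0,
```

where $f^{s}$ is the frozen source decision function, $\Delta w$ the target-domain perturbation, and $(x_i, y_i)$ the sparse labeled target samples.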
3.2. Adaptation Workflow
- Source Pre-training: Train the baseline source model $f^{s}(x)$ on standard lab datasets.
- Low-Resource Sampling: Collect ~90 representative images from the target scene.
- Logic Consistency Cleaning: To ensure data quality, we introduce a cleaning step (Figure 5) to remove “dirty data” where labels conflict with features (e.g., labeling a static streetlamp as fire).
- Adaptive Fine-tuning: Solve Equation (2) to obtain the adapted model $f(x)$.
- Inference: Deploy for real-time detection.
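A toy stand-in for the fine-tuning step is sketched below: the source SVM is trained and frozen, and a small linear perturbation is learned on the target set by subgradient descent on the hinge loss of the objective above. This sketch assumes linear features and ±1 labels; it is not the authors' actual solver.

```python
import numpy as np
from sklearn.svm import LinearSVC

def asvm_adapt(Xs, ys, Xt, yt, C=1.0, lr=0.01, epochs=200):
    """Learn f(x) = f_s(x) + dw.x on target data (yt in {-1, +1})."""
    src = LinearSVC().fit(Xs, ys)                    # source pre-training
    w_s, b_s = src.coef_.ravel(), src.intercept_[0]
    f_src = lambda X: X @ w_s + b_s                  # frozen source scores
    dw = np.zeros(Xt.shape[1])                       # perturbation Delta w
    for _ in range(epochs):
        margins = yt * (f_src(Xt) + Xt @ dw)
        viol = margins < 1                           # violated hinge constraints
        # Subgradient of 0.5*||dw||^2 + C * sum(hinge losses):
        grad = dw - C * (yt[viol, None] * Xt[viol]).sum(axis=0)
        dw -= lr * grad                              # regularizer pulls dw -> 0
    return lambda X: f_src(X) + X @ dw               # adapted decision function
```

Deployment (step 5) then reduces to thresholding the returned decision function at zero on each incoming feature vector.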
3.3. Algorithmic Workflow
| Algorithm 1: Lightweight A-SVM Fire Detection Framework |
| Input: Video stream $V$, source-domain data $D_s$ (labeled), target-domain data $D_t$ (small-batch), parameters $C$, $\lambda$. |
| Output: Fire Detection Result (Alert/Normal). |
| 1: /* Phase 1: Lightweight Feature Extraction */ |
| 2: for each frame $F_i$ in video stream do |
| 3: Perform SLIC superpixel segmentation to obtain region set $R = \{r_1, \dots, r_n\}$; |
| 4: for each region $r_j \in R$ do |
| 5: Apply 2D-DWT to decompose $r_j$ and extract the low-frequency sub-band $LL_j$; |
| 6: Compute LBP histogram features from $LL_j$; |
| 7: Construct feature vector $v_j$; |
| 8: end for |
| 9: end for |
| 10: /* Phase 2: Adaptive Model Optimization */ |
| 11: Construct the composite kernel matrix using both $D_s$ and $D_t$; |
| 12: Solve the dual optimization problem to obtain the optimal coefficients $\alpha^{*}$; |
| 13: Generate the adaptive decision function $f(x)$ with the optimized support vectors; |
| 14: /* Phase 3: Real-Time Inference */ |
| 15: for each new incoming sample $x$ do |
| 16: if $f(x) > 0$ then trigger FIRE ALARM; |
| 17: else status ← NORMAL; |
| 18: end for |
4. Experimental Results and Analysis
4.1. Experimental Setup
4.1.1. Dataset Preparation and Testing Protocol
- Source Data: We selected 90 representative frames from the target-scene video stream as the raw source.
- Feature Extraction: Through superpixel segmentation, these frames were processed to generate a total of 1836 region-level feature samples (approx. 20 regions per frame).
- Data Partition: The generated samples were randomly partitioned with a ratio of 80% for adaptation and 20% for testing:
- Adaptation Set (Training): 1468 samples (80%), used strictly for training the A-SVM decision boundary.
- Testing Set (Daytime): 368 samples (20%), used as the held-out validation set to evaluate Precision.
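The 80/20 region-level split can be reproduced with a standard utility, as in the sketch below; `features`/`labels` stand for the 1836 region vectors and their annotations, and the stratification and seed are assumptions for reproducibility rather than reported settings.

```python
from sklearn.model_selection import train_test_split

# 1836 regions -> 1468 adaptation / 368 held-out testing (80/20 split).
X_adapt, X_test, y_adapt, y_test = train_test_split(
    features, labels, test_size=0.20, stratify=labels, random_state=42
)
```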
4.1.2. Implementation Details
4.1.3. Dataset Construction and Distribution Shift Analysis
- Source Domain Data (Benchmark Training Set): A standard laboratory fire dataset containing 458 high-quality annotated images was selected. This dataset was collected in a controlled environment with uniform lighting and simple backgrounds, primarily used to train the baseline SVM classifier to learn the basic visual features of fire.
- Target Domain Data (Cross-Domain Challenge Set): To test generalization capabilities, this project constructed a self-built cross-domain dataset containing two extreme challenge scenarios:
- Scenario A (Daytime Strong Interference): Simulates urban street and parking lot environments filled with red cars, red clothing, and specular reflections from strong sunlight. The main challenge lies in the confusion of color features.
- Scenario B (Nighttime Low Illumination): Simulates dark surveillance blind spots with interference from streetlights and headlights, where the flame core appears white due to visual sensor overexposure. The main challenge lies in the absence of key color features.
4.1.4. Evaluation Metrics
- Precision: $P = \frac{TP}{TP + FP}$
- Recall: $R = \frac{TP}{TP + FN}$
- F1-Score: $F_1 = \frac{2 \times P \times R}{P + R}$
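These are the standard definitions and can be computed directly, e.g. with scikit-learn (assuming `y_test`/`y_pred` arrays from the held-out split, with fire as the positive class):

```python
from sklearn.metrics import precision_score, recall_score, f1_score

precision = precision_score(y_test, y_pred)  # TP / (TP + FP)
recall = recall_score(y_test, y_pred)        # TP / (TP + FN)
f1 = f1_score(y_test, y_pred)                # harmonic mean of P and R
```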
4.2. Scenario I: Daytime Anti-Interference
4.3. Scenario II: Nighttime Robustness
4.4. Comprehensive Comparison
4.5. Comparison with State-of-the-Art Lightweight Models
- In Daytime Scenarios: The deep-learning baseline achieved an mAP@0.5 of only 0.485, failing to distinguish red interference objects from real fire.
- In Nighttime Scenarios: Performance dropped even further to 0.354, indicating that the model struggles to extract reliable features under low-light conditions.
- Model Size: The storage footprint of our trained model is only 13.5 KB, which is three orders of magnitude smaller than the lightest deep learning model (YOLOv10n, 5.8 MB). This ultra-compact size allows the algorithm to reside entirely in the L1/L2 cache of low-power processors, minimizing memory access latency.
- End-to-End Latency: To ensure a fair comparison with the end-to-end inference of YOLO models, we evaluated the total system latency, including the preprocessing stage (GMM background modeling and ROI extraction) and the classification stage. As shown in Table 6, the preprocessing stage takes approximately 7.44 ms and the classification stage takes 4.70 ms, so the total end-to-end latency of our proposed framework is 12.14 ms, corresponding to a system speed of 82.4 FPS. Although this includes the overhead of background modeling, our method is still 5.2 times faster than the lightweight YOLOv11n (15.7 FPS) on the same CPU platform. A toy timing harness illustrating this breakdown is sketched after this list.
- Edge Feasibility: Although direct testing on Raspberry Pi was not conducted in this study, the massive performance margin (over 5.2× faster than YOLO) theoretically guarantees that the proposed framework can maintain real-time capabilities on resource-constrained edge devices (e.g., embedded MCUs) where deep learning models would suffer from severe latency.
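For completeness, a toy timing harness in the spirit of this measurement is sketched below; it reuses the hypothetical helper functions from Section 2 (`extract_candidates`, `fused_feature_vector`) and is illustrative only, not the benchmarking code behind Table 6.

```python
import time

def measure_latency(frame_bgr, decision_fn):
    """Report preprocessing vs. classification latency for one frame."""
    t0 = time.perf_counter()
    roi_mask = extract_candidates(frame_bgr)          # GMM + ROI extraction
    t1 = time.perf_counter()
    score = decision_fn(fused_feature_vector(frame_bgr).reshape(1, -1))
    t2 = time.perf_counter()
    print(f"preproc {1e3*(t1-t0):.2f} ms, classify {1e3*(t2-t1):.2f} ms, "
          f"total {1e3*(t2-t0):.2f} ms -> {1.0/(t2-t0):.1f} FPS")
    return score
```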
5. Conclusions
6. Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Vasconcelos, R.N.; Franca Rocha, W.J.S.; Costa, D.P.; Duverger, S.G.; Santana, M.M.M.d.; Cambui, E.C.B.; Ferreira-Ferreira, J.; Oliveira, M.; Barbosa, L.d.S.; Cordeiro, C.L. Fire Detection with Deep Learning: A Comprehensive Review. Land 2024, 13, 1696.
- Cheng, G.; Chen, X.; Wang, C.; Li, X.; Xian, B.; Yu, H. Visual Fire Detection Using Deep Learning: A Survey. Neurocomputing 2024, 569, 127975.
- Jadon, A.; Omama, M.; Varshney, A.; Ansari, M.S.; Sharma, R. FireNet: A specialized lightweight fire & smoke detection model for real-time IoT applications. arXiv 2019, arXiv:1905.11922.
- Choi, S.; Song, Y.; Jung, H. Study on Improving Detection Performance of Wildfire and Non-Fire Events Early Using Swin Transformer. IEEE Access 2025, 13, 46824–46837.
- Ko, B.C.; Lee, K.H.; Nam, J.Y. Vision-based fire detection using dynamic texture features. Pattern Recognit. Lett. 2009, 30, 1262–1269.
- Celik, T.; Demirel, H.; Ozkaramanli, H. Fire detection using statistical color model in video sequences. J. Vis. Commun. Image Represent. 2007, 18, 176–185.
- Muhammad, K.; Ahmad, J.; Baik, S.W. Early fire detection using convolutional neural networks during surveillance for effective disaster management. Neurocomputing 2018, 288, 30–42.
- Li, X.; Zhao, W.; Liu, Y. Fire detection using convolutional neural networks and visual features. Case Stud. Therm. Eng. 2020, 19, 100625.
- Abdusalomov, A.; Mukhiddinov, M.; Whangbo, T.K. Fully automatic real-time fire detection based on deep CNN. IEEE Access 2020, 8, 164133–164143.
- Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
- Yang, J.; Yan, R.; Hauptmann, A. Cross-domain video concept detection using adaptive SVMs. In Proceedings of the 15th ACM International Conference on Multimedia, Augsburg, Germany, 25–29 September 2007; pp. 188–197.
- Al-Khowarizmi, A.; Nasution, I.R.; Lubis, M. Optimization of support vector machine with cubic kernel function to detect cyberbullying in social networks. Telkomnika 2024, 22, 329–339.
- Kim, B.; Lee, J. Fire Detection Based on Image Learning by Collaborating CNN-SVM with Enhanced Recall. J. Sens. Sci. Technol. 2024, 33, 119–124.
- Gong, F.; Li, C.; Gong, W.; Li, X.; Yuan, X.; Ma, Y.; Song, T. A Real-Time Fire Detection Method from Video with Multifeature Fusion. Comput. Intell. Neurosci. 2019, 2019, 1939171.
- Guo, F.; Kan, J. A Review of Wildfire Detection Technologies based on Computer Vision. Appl. Sci. 2024, 14, 2453.
- El-Madafri, I.; Peña, M.; Olmedo-Torre, N. Dual-Dataset Deep Learning for Improved Forest Fire Detection: A Novel Hierarchical Domain-Adaptive Learning Approach. Mathematics 2024, 12, 534.
- Yan, Z.; Wang, L.; Qin, K.; Zhou, F.; Ouyang, J.; Wang, T.; Hou, X.; Bu, L. Unsupervised Domain Adaptation for Forest Fire Recognition Using Transferable Knowledge from Public Datasets. Forests 2023, 14, 52.
- Xing, D.; Li, J.; Zhang, G.; Zhao, Y. Smoke segmentation based on multi-color space feature fusion. J. Unmanned Veh. Syst. 2020, 3, 65–73.
- Jamali, K.; Karimi, H. Saliency Based Fire Detection Using Texture and Color Features. Int. J. Image Graph. Signal Process. 2019, 11, 11–20.
- Wang, Y.; Tian, Y.; Liu, J.; Xu, Y. Multi-Stage Multi-Scale Local Feature Fusion for Infrared Small Target Detection. Remote Sens. 2023, 15, 4506.
- Jocher, G. YOLOv5 by Ultralytics. GitHub Repository. 2020. Available online: https://github.com/ultralytics/yolov5 (accessed on 5 February 2026).
- Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLO. GitHub Repository. 2023. Available online: https://github.com/ultralytics/ultralytics (accessed on 5 February 2026).
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024, arXiv:2405.14458.
- Brigato, L.; Iocchi, L. A Close Look at Deep Learning with Small Data. In Proceedings of the 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 2490–2497.
- Xu, B.; Shirani, A.; Lo, D.; Alipour, M.A. Prediction of Relatedness in Stack Overflow: Deep Learning vs. SVM. In Proceedings of the 12th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), Oulu, Finland, 11–12 October 2018; pp. 1–10.
- He, H.; Garcia, E.A. Learning from Imbalanced Data. IEEE Trans. Knowl. Data Eng. 2009, 21, 1263–1284.
- Schwartz, R.; Dodge, J.; Smith, N.A.; Etzioni, O. Green AI. Commun. ACM 2020, 63, 54–63.
- Warden, P.; Situnayake, D. TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers; O’Reilly Media: Sebastopol, CA, USA, 2019.
- Yuan, H.; Yang, H.; Li, R.; Xu, P.; Du, S.; Dong, R. A lightweight fire detection for edge computing based on Mobilenet. In Proceedings of the International Conference on Computer, Artificial Intelligence, and Control Engineering (CAICE 2023), Hangzhou, China, 23 May 2023; Volume 12645, p. 126453G.
- Mahmoudi, S.A.; Gloesener, M.; Benkedadra, M.; Lerat, J.S. Edge AI System for Real-Time and Explainable Forest Fire Detection Using Compressed Deep Learning Models. In Proceedings of the 14th International Conference on Pattern Recognition Applications and Methods (ICPRAM), Porto, Portugal, 23–25 February 2025; Volume 1, pp. 456–463.
- El-Madafri, I.; Peña, M.; Olmedo-Torre, N. Real-Time Forest Fire Detection with Lightweight CNN Using Hierarchical Multi-Task Knowledge Distillation. Fire 2024, 7, 392.
- Sharma, A.; Kumar, R.; Kansal, I.; Popli, R.; Khullar, V.; Verma, J.; Kumar, S. Fire Detection in Urban Areas Using Multimodal Data and Federated Learning. Fire 2024, 7, 104.
- Xu, Y.; Wang, H.; Bi, Y.; Nie, G.; Wang, X. MCDet: Target-Aware Fusion for RGB-T Fire Detection. Forests 2025, 16, 1088.
- Lee, S.-J.; Yun, H.-S.; Sim, Y.-B.; Lee, S.-H. Design and Validation of an Edge-AI Fire Safety System with SmartThings Integration for Accelerated Detection and Targeted Suppression. Appl. Sci. 2025, 15, 8118.
- Lu, J.; Zheng, Y.; Guan, L.; Lin, B.; Shi, W.; Zhang, J.; Wu, Y. FCMI-YOLO: An efficient deep learning-based algorithm for real-time fire detection on edge devices. PLoS ONE 2025, 20, e0329555.
- Polenakis, I.; Sarantidis, C.; Karydis, I.; Avlonitis, M. Smoke Detection on the Edge: A Comparative Study of YOLO Algorithm Variants. Signals 2025, 6, 60.
| Dataset Split | Source | Total Samples | Role |
|---|---|---|---|
| Adaptation Set | 90 Frames | 1468 (Regions) | Training (80% split) |
| Testing Set (Daytime) | Same 90 Frames | 368 (Regions) | Testing (20% held-out split) |
| Testing Set (Nighttime) | 90 New Frames | 90 (Frames) | Testing (Independent scenario) |
| Method | Mechanism | Experimental Purpose |
|---|---|---|
| Traditional (HSV) | Rule-based Judgment: Relies on fixed color thresholds (e.g., fixed hue and saturation ranges) and simple geometric rules | Serves as the baseline lower bound to evaluate the limitations of purely physical features without machine learning in complex scenarios |
| Baseline (SVM) | Direct Transfer: Standard SVM trained only on source domain (lab) data and directly applied to the target domain without adaptation | Verifies the severity of “Domain Shift.” Demonstrates that source domain knowledge alone is insufficient for environmental changes |
| Ours (A-SVM) | Small-Sample Domain Adaptation: Uses source model as a prior and fine-tunes parameters via regularization using minimal target samples (e.g., ~90 images) | Verifies the core contribution. Evaluates the ability to achieve high-precision adaptation to new distributions with low cost |
| Method | Performance Metrics | Reasoning Analysis |
|---|---|---|
| Traditional (HSV) | Precision: 75.0% | High False Positive Rate: relies solely on color thresholds and fails to distinguish “Fire Red” from “Object Red” (e.g., cars, clothes) in the feature space, leading to semantic confusion. |
| Ours (A-SVM) | Precision: 93.0%; F1-Score: 96.0% | Successfully introduces Texture (LBP) and Shape (Wavelet) features for orthogonal verification; leverages multi-feature complementarity to filter out static distractors. |
| Method | Performance Metrics | Reasoning Analysis |
|---|---|---|
| Traditional (HSV) | Recall drops to 71.0% | Rigid reliance on color rules: nighttime fire cores often appear white due to overexposure, failing to meet the “Red” spectral definition and causing ~30% missed detections. |
| Baseline (SVM) | Limited improvement (Recall: 78.0%) | Although texture features are present, the model weights are biased toward daytime/lab data (heavy reliance on color), failing to adapt to the drastic feature drift at night. |
| Ours (A-SVM) | Recall: 99.7%; Precision: 98.0% | Victory of domain adaptation: through fine-tuning with ~90 nighttime samples, A-SVM adaptively shifts focus from color to high-frequency energy and edge morphology, compensating for the loss of color cues. |
| Scenario | Method | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|---|
| Daytime (Strong Interference) | Traditional (HSV) | 75.0% | 75.0% | 100.0% | 86.0% |
| Daytime (Strong Interference) | Baseline (SVM) | 82.5% | 81.0% | 91.0% | 85.7% |
| Daytime (Strong Interference) | Ours (A-SVM) | 94.0% | 93.0% | 98.0% | 96.0% |
| Nighttime (Low Illumination) | Traditional (HSV) | 68.0% | 92.0% | 71.0% | 80.0% |
| Nighttime (Low Illumination) | Baseline (SVM) | 74.5% | 88.0% | 78.0% | 82.7% |
| Nighttime (Low Illumination) | Ours (A-SVM) | 90.0% | 98.0% | 99.7% | 98.8% |
| Method | Recall | Precision | F1-score | Model Size | Training Time (s) | End-to-End Latency (ms) | System FPS |
|---|---|---|---|---|---|---|---|
| YOLOv5nu | 1.000 | 0.354 | 0.523 | 6.2 MB | 1169.1 | 60.2 | 16.6 |
| YOLOv8n | 1.000 | 0.332 | 0.498 | 6.5 MB | 1175.2 | 61.2 | 16.3 |
| YOLOv10n | 1.000 | 0.305 | 0.467 | 5.8 MB | 1597.7 | 76.2 | 13.1 |
| YOLOv11n | 1.000 | 0.353 | 0.522 | 10.1 MB | 1306.8 | 63.7 | 15.7 |
| Ours (A-SVM) | 0.903 | 0.982 | 0.941 | 13.5 KB | <10.0 | 12.1 | 82.4 |
