Article

Lightweight UAV-Based System for Early Fire-Risk Identification in Wild Forests

1 Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 13120, Gyeonggi-do, Republic of Korea
2 Department of Applied Informatics, Kimyo International University in Tashkent, Tashkent 100121, Uzbekistan
3 Department of Information Systems and Technologies, Tashkent State University of Economics, Tashkent 100066, Uzbekistan
4 Department of Digital Technologies, Alfraganus University, Yukori Karakamish Street 2a, Tashkent 100190, Uzbekistan
5 Department of Automation and Control, Navoi State Mining and Technological University, Navoi City 210100, Uzbekistan
6 Department of Computer Systems, Information and Education Technologies, Tashkent University of Information Technologies Named After Muhammad Al-Khwarizmi, Tashkent 100200, Uzbekistan
* Author to whom correspondence should be addressed.
Fire 2025, 8(8), 288; https://doi.org/10.3390/fire8080288
Submission received: 12 June 2025 / Revised: 16 July 2025 / Accepted: 22 July 2025 / Published: 23 July 2025

Abstract

The escalating occurrence and impacts of wildfires threaten the public, economies, and global ecosystems. Physiologically declining or dead trees account for a large share of ignitions because they are more flammable and have lower moisture content. Preventing wildfires therefore requires identifying such hazardous vegetation early so that it can be removed. This work proposes a real-time fire-risk tree detection framework using UAV images, based on lightweight object detection. The model uses the MobileNetV3-Small backbone, optimized for edge deployment, combined with an SSD head. This configuration results in a highly optimized and fast UAV-based inference pipeline. The dataset used in this study comprises over 3000 annotated RGB UAV images of trees in healthy, partially dead, and fully dead conditions, collected from mixed real-world forest scenes and public drone imagery repositories. Thorough evaluation shows that the proposed model outperforms the conventional SSD and recent YOLO variants on Precision (94.1%), Recall (93.7%), mAP (90.7%), and F1-score (91.0%) while remaining lightweight (8.7 MB) and fast (62.5 FPS on a Jetson Xavier NX). These findings strongly support the model’s effectiveness for large-scale, continuous forest monitoring to detect health degradation and mitigate wildfire risks proactively. The framework differentiates itself from existing UAV-based environmental monitoring systems by treating the balance between detection accuracy, speed, and resource efficiency as a fundamental design principle.

1. Introduction

The escalating frequency and severity of wildfires represent a growing global environmental crisis, with profound ecological, economic, and societal consequences [1]. A significant proportion of these wildfires are initiated by vegetation in advanced stages of physiological decline [2], particularly dead or partially dead trees that serve as potent ignition sources due to their high flammability and reduced moisture content [3]. The early identification of dead and dying trees in poor physiological condition is therefore an essential aspect of wildfire prevention. Such trees are dry, carry desiccated branches, and add to the accumulated fuel load; as a result, they act as natural fire accelerants that promote ignition and rapid fire spread. Their presence in a forest ecosystem not only raises the local fire danger but also reduces the effectiveness of firebreaks and leaves surrounding healthy vegetation more vulnerable to damage [4]. Automated detection systems, by facilitating the timely removal or reduction of these high-risk specimens, can serve as a proactive instrument to diminish both the frequency and the extent of wildfires, thereby enabling more efficient forest management and greater ecological resilience. As climate change exacerbates drought conditions and pest infestations [5], the early identification and removal of such hazardous vegetation have become imperative for proactive forest fire management [6]. Despite considerable progress in remote sensing and environmental monitoring [7], the automated detection of at-risk trees remains a challenging task due to the subtle visual distinctions between healthy and declining vegetation [8], varying canopy structures, and fluctuating environmental conditions [9].
Conventional forest health monitoring systems often rely on satellite imagery or ground surveys, which are limited by temporal resolution, spatial granularity, and scalability [10]. Satellite platforms, while valuable for large-scale observations, frequently lack the resolution necessary to detect individual tree anomalies [11]. Ground-based surveys, although precise, are labor-intensive, time-consuming, and infeasible for monitoring vast and remote areas [12]. In contrast, unmanned aerial vehicles (UAVs) equipped with high-resolution RGB cameras offer an agile, cost-effective, and scalable alternative, capable of capturing fine-grained imagery over extensive forested regions [13]. This paradigm shift has opened new avenues for employing computer vision and deep learning techniques to automate the identification of hazardous trees from aerial perspectives [14]. In this context, object detection models have emerged as powerful tools for extracting semantic information from visual data. However, most state-of-the-art models such as Faster R-CNN, YOLOv7, and DETR prioritize detection accuracy over computational efficiency, rendering them suboptimal for edge deployment scenarios involving drones or low-power embedded systems [15]. To bridge this gap, there is growing interest in lightweight architectures that balance accuracy, speed, and memory consumption, enabling real-time inference without sacrificing detection quality. Although UAV-based environmental monitoring and deep learning-based object detection have progressed rapidly, several crucial issues remain unresolved. First, most state-of-the-art detectors, such as YOLOv7 and DETR, are too computationally demanding for real-time edge deployment on lightweight UAVs. Second, while earlier research has touched on forest health assessment, few works have focused specifically on fire-prone tree detection, which requires fine-grained discrimination of dead and partially dead trees that differ only subtly in color and form [16]. Moreover, no lightweight architectures tailored to this domain currently combine the accuracy and efficiency required for practical deployment of detection systems over aerial, densely vegetated areas. The present study addresses these requirements by designing a compact yet accurate UAV-compatible detection framework tuned for real-time onboard operation and focused specifically on fire-prone tree detection.
This study presents a novel object detection model tailored for the early identification of fire-prone trees in drone imagery. The proposed framework synergistically integrates the MobileNetV3-Small backbone—an architecture optimized for mobile and edge devices—with a Single Shot MultiBox Detector (SSD) head to create a compact yet high-performing detection pipeline. MobileNetV3-Small leverages efficient operations such as depthwise separable convolutions, inverted residual blocks, and squeeze-and-excitation (SE) attention mechanisms, coupled with the hardware-friendly h-swish activation function, to achieve superior trade-offs between latency and accuracy. When combined with the SSD detection head, which enables dense multi-scale predictions in a single forward pass, the architecture becomes particularly suited for deployment on aerial platforms where power and processing resources are constrained.
To evaluate the efficacy of the proposed method, we curated a dedicated UAV-acquired dataset encompassing diverse tree health conditions under varying environmental scenarios. Through extensive experimentation, our model demonstrated superior performance across key metrics—Precision, Recall, mean Average Precision (mAP), and F1-score—outperforming standard SSD implementations as well as several recent YOLO variants. These results underscore the practical utility of our approach for real-time forest health surveillance and early wildfire risk mitigation. Although both MobileNetV3-Small and SSD are widely used, our model introduces task-specific modifications that adapt these components to the unique challenges of fire-prone tree detection from UAV imagery. These include customized feature map scaling, anchor box tuning based on tree crown size distribution, and UAV-optimized input resolution (320 × 320), all integrated into a lightweight pipeline suitable for real-time environmental monitoring. This work makes the following key contributions:
  • We introduce a customized lightweight object detection architecture for UAV-based forest monitoring, which differs functionally from conventional MobileNetV3 + SSD pairings by integrating squeeze-and-excitation (SE) blocks in the mid-level feature layers, fine-tuning feature map resolution scales, and optimizing for small-object detection in cluttered, foliage-dense aerial scenes.
  • We construct a domain-specific dataset of over 3000 annotated UAV images encompassing healthy, partially dead, and fully dead trees, collected across different seasons and environmental contexts to improve generalization under natural variability.
  • We design a UAV-compatible pipeline that includes resolution-standardized image preprocessing, tree health-aware augmentation, and lightweight model tuning for Jetson Xavier NX deployment, ensuring a balance between high detection performance and real-time inference.
  • We conduct a comprehensive comparison with 20 state-of-the-art detection models, demonstrating that our framework achieves superior precision and recall while maintaining a minimal model size and low latency suitable for edge deployment.
This paper is organized as follows: Section 2 reviews related work in UAV-based forest monitoring and lightweight object detection. Section 3 details the proposed model architecture and implementation. Section 4 presents the dataset construction, training protocols, and experimental results. Finally, Section 5 concludes with future directions for real-time wildfire prevention systems.

2. Related Works

The early identification of hazardous trees, particularly those in dead or declining health, is essential for proactive wildfire prevention and sustainable forest ecosystem management [17]. As vegetation in advanced stages of physiological decline exhibits heightened flammability and reduced moisture content, the timely detection of such fire-prone trees has become a critical component of risk mitigation strategies [18]. Recent advancements in remote sensing technologies, unmanned aerial vehicles (UAVs), and deep learning-based computer vision have significantly enhanced our capacity to monitor forest health at scale [19]. However, achieving reliable, real-time detection of hazardous trees in heterogeneous and resource-constrained environments remains a major challenge.
Over recent decades, remote sensing has emerged as a powerful tool for assessing vegetation condition and forest health, utilizing platforms ranging from coarse-resolution satellites to high-resolution UAV systems [20]. Traditional approaches have employed multispectral and hyperspectral imaging to detect vegetation stress through indices such as the Normalized Difference Vegetation Index (NDVI) [21] and Red Edge Position (REP) [22], both of which correlate with chlorophyll degradation and canopy reflectance variations [23]. While such techniques have proven effective for large-scale monitoring, they often lack the spatial resolution necessary to localize early signs of tree deterioration at the individual level, particularly in heterogeneous and densely forested areas [23].
To address these limitations, researchers have explored integrating structural data from LiDAR with spectral imagery to improve canopy-level health diagnostics. For instance, LiDAR-derived crown features combined with near-infrared imagery have been used to detect decay in coniferous trees [24]. Although these multi-modal systems enhance analytical power, their deployment remains cost-prohibitive and technically demanding, reducing their applicability for frequent, large-area monitoring operations. In parallel, UAV platforms equipped with RGB and multispectral sensors have revolutionized data acquisition at low altitudes, enabling ultra-high-resolution imagery capable of capturing detailed phenotypic cues at the individual tree level [25]. UAV imagery has supported diverse forestry applications, including species classification, biomass estimation, and disease or pest detection [26]. For example, CNNs trained on drone imagery have effectively classified species and identified ash dieback symptoms in European woodlands [27], while other studies have employed aerial color analysis to detect bark beetle infestations [28]. Despite such progress, relatively few efforts have focused explicitly on detecting dead or partially dead trees that pose elevated wildfire risks. The task is complicated by subtle visual symptoms, occlusions caused by overlapping canopies, and environmental variability across lighting and seasons. Moreover, many existing approaches rely on computationally intensive deep learning models that require powerful GPU hardware, thus impeding real-time inference on drones and other mobile platforms.
To enable forest monitoring under operational constraints, the research community has turned to lightweight deep learning models that are optimized for low-latency inference on edge devices [29]. Architectures such as MobileNetV2, ShuffleNet, and EfficientNet-Lite reduce model complexity through efficient convolutional operations and channel pruning techniques [30]. Among these, MobileNetV3 represents a state-of-the-art solution incorporating neural architecture search (NAS), squeeze-and-excitation attention mechanisms, and the hardware-efficient hard-swish activation function [31]. These design enhancements allow MobileNetV3 to deliver competitive accuracy with reduced computational cost, making it ideal for deployment in UAV-based field applications.
Within object detection, single-stage detectors like SSD and YOLO have gained traction due to their ability to perform localization and classification in a single forward pass, offering significant speed advantages over two-stage models such as Faster R-CNN [32]. While recent YOLO versions demonstrate high detection accuracy, their increasing architectural complexity and parameter counts limit their suitability for embedded or mobile systems. Although lightweight detection models have matured significantly, few have been tailored to the specific task of identifying fire-prone vegetation from UAV imagery. This gap underscores the need for domain-specific architectures that balance detection performance with efficiency for real-time field deployment. Our work addresses this need by proposing a novel framework based on the MobileNetV3-Small backbone and SSD detection head. The resulting architecture delivers robust classification of hazardous trees while maintaining minimal resource demands, enabling practical use in wildfire risk monitoring and autonomous forest health assessment.

3. Materials and Methods

In this study, we present a novel object detection model specifically designed to identify dead or partially dead trees that pose a high risk of ignition, using drone imagery. Given the increasing frequency of forest fires triggered by such trees and their profound environmental consequences, early detection is critical. Our proposed model integrates a MobileNetV3-Small backbone with an SSD-style detection head, offering a lightweight and efficient architecture optimized for deployment on edge devices (Figure 1).
Our proposed architecture integrates the lightweight MobileNetV3-Small network as the feature extraction backbone with an SSD-style object detection head, as illustrated in Figure 1. The pipeline begins with input RGB UAV images fed into MobileNetV3-Small, which extracts multi-scale hierarchical features through its early, middle, and deep layers. These features are passed to the SSD detection head, which applies multi-resolution convolutional layers for simultaneous bounding box regression and class prediction. Figure 1 summarizes this architecture, showing the feature flow from input through backbone layers to the detection outputs, emphasizing the lightweight and real-time design optimized for UAV deployment.

3.1. Baseline Models

MobileNetV3 is a CNN architecture designed specifically for low-latency workloads on mobile and embedded devices. It utilizes efficient depthwise separable convolutions, squeeze-and-excitation attention mechanisms, and the hard-swish activation function to achieve high accuracy at a lower computational cost. Its smaller variant is tailored specifically for ultra-lightweight tasks, making it well suited to UAV deployment. MobileNetV3-Small’s inverted residual blocks keep the parameter count low while maintaining a strong trade-off between model performance and runtime efficiency.
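To make the building blocks concrete, the following PyTorch sketch shows a simplified inverted residual unit with squeeze-and-excitation and h-swish. The channel sizes, reduction ratio, and kernel choices are illustrative assumptions rather than the exact configuration of our backbone.

```python
import torch
from torch import nn
import torch.nn.functional as F

class SqueezeExcite(nn.Module):
    """Channel-attention block used inside MobileNetV3 units (simplified sketch)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        squeezed = max(1, channels // reduction)
        self.fc1 = nn.Conv2d(channels, squeezed, kernel_size=1)
        self.fc2 = nn.Conv2d(squeezed, channels, kernel_size=1)

    def forward(self, x):
        s = x.mean(dim=(2, 3), keepdim=True)      # squeeze: global average pooling
        s = F.relu(self.fc1(s))
        s = F.hardsigmoid(self.fc2(s))            # excitation gate in [0, 1]
        return x * s                              # recalibrate channel responses

class InvertedResidualSE(nn.Module):
    """MobileNetV3-style inverted residual block with SE and h-swish.
    Channel sizes are illustrative, not the exact backbone configuration."""
    def __init__(self, in_ch: int, expand_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.use_skip = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, expand_ch, 1, bias=False),            # 1x1 expansion
            nn.BatchNorm2d(expand_ch), nn.Hardswish(),
            nn.Conv2d(expand_ch, expand_ch, 3, stride=stride,
                      padding=1, groups=expand_ch, bias=False),     # depthwise convolution
            nn.BatchNorm2d(expand_ch), nn.Hardswish(),
            SqueezeExcite(expand_ch),
            nn.Conv2d(expand_ch, out_ch, 1, bias=False),            # 1x1 linear projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_skip else y      # skip connection preserves gradient flow

# Quick shape check on a dummy 40-channel feature map
block = InvertedResidualSE(in_ch=40, expand_ch=120, out_ch=40)
print(block(torch.rand(1, 40, 20, 20)).shape)     # torch.Size([1, 40, 20, 20])
```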
We use the SSD framework for object detection, which performs object localization and classification in a single pass. SSD differs from multi-stage detectors in that it attaches detection heads to multi-scale feature maps at different resolutions, enabling efficient, low-latency multi-scale object recognition. Predefined anchor boxes are positioned at a set of spatial locations on the feature maps, and for each anchor the network simultaneously predicts bounding box offsets and class probabilities. During post-processing, predictions are filtered by confidence thresholds and redundant detections are eliminated using non-maximum suppression (NMS). Pairing SSD with MobileNetV3 enables real-time, highly accurate detection on resource-constrained devices, making the combination suitable for UAV-mounted systems used to monitor fire-prone vegetation.
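As a minimal sketch of how such a detector can be assembled, the torchvision code below wraps MobileNetV3-Small features into a two-scale backbone and attaches a standard SSD head. The layer split points, channel counts, anchor aspect ratios, and class count are assumptions for illustration; they do not reproduce the task-specific customizations described in Section 3.2.

```python
from collections import OrderedDict

import torch
from torch import nn
from torchvision.models import mobilenet_v3_small
from torchvision.models.detection.ssd import SSD
from torchvision.models.detection.anchor_utils import DefaultBoxGenerator

class MobileNetV3SmallFeatures(nn.Module):
    """Exposes two feature maps (stride 16 and 32) of MobileNetV3-Small for an SSD head.
    The split point and channel counts are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        feats = mobilenet_v3_small(weights=None).features
        self.stage1 = feats[:9]        # up to the 48-channel, stride-16 stage
        self.stage2 = feats[9:]        # remaining layers up to the 576-channel stage
        self.out_channels = [48, 576]  # required by torchvision's SSD head builder

    def forward(self, x):
        c1 = self.stage1(x)
        c2 = self.stage2(c1)
        return OrderedDict([("0", c1), ("1", c2)])

# Anchor aspect ratios per feature map; values tuned to tree-crown sizes would replace these.
anchor_gen = DefaultBoxGenerator(aspect_ratios=[[2], [2, 3]])

model = SSD(
    backbone=MobileNetV3SmallFeatures(),
    anchor_generator=anchor_gen,
    size=(320, 320),                   # UAV-friendly input resolution
    num_classes=4,                     # healthy, partially dead, fully dead + background
)

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 320, 320)])   # list of images -> list of detection dicts
print(detections[0]["boxes"].shape, detections[0]["labels"].shape)
```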

3.2. The Proposed Model

Our proposed architecture integrates the lightweight MobileNetV3-Small network as the feature extraction backbone with an SSD-style object detection head, as illustrated in Figure 1. The input image x_{input} ∈ ℝ^{w×h×c} is first processed by the backbone, which generates hierarchical feature representations across three distinct stages. In the early layers, the network captures low-level visual cues such as edges, textures, and basic color patterns critical for distinguishing the outline and structure of potentially at-risk trees. The intermediate layers abstract more complex features, including local geometric patterns and partial object structures. Finally, the deeper layers extract high-level semantic features that are essential for robust object discrimination and contextual understanding. These multi-scale features are then fed into the SSD detection head, which performs both bounding box regression and class prediction in a single forward pass:
F_1 = F_{DW}(F_{1×1}(F_{DW}(x_{input})))   (1)
Equation (1) illustrates the initial blocks of the backbone network, which correspond to the early stages of feature extraction. This segment comprises two depthwise separable convolution layers and one standard convolutional layer. These layers form the foundational part of the model, responsible for capturing low-level features such as edges, color gradients, and simple textures that are essential for subsequent hierarchical representation:
F_2 = F_{IRB}(F_1)   × 2 times   (2)
Equation (2) initiates the sequence of Inverted Residual Blocks (IRBs), each incorporating a skip connection to preserve spatial information and gradient flow. Furthermore, after each equation, we specify the number of times each block configuration is repeated within the corresponding stage of the network. This repetition count reflects the depth and complexity of feature extraction at different hierarchical levels. The internal structure of the IRB is depicted in Equation (3) and Figure 1. Each block begins with the input passing through a series of operations consisting of two depthwise separable convolutions and a standard convolution. The resulting feature maps are then processed by a Squeeze-and-Excitation Block (SEB), which adaptively recalibrates the channel-wise responses. The output of this stage is concatenated and forwarded to the projection layer, implemented as a 1 × 1 convolution, to reduce dimensionality and integrate the transformed features into the next stage:
F_{IRB} = (F_{prl_1×1}(F_{SEB}(F_{DW}(F_{1×1}(F_{previous_layer})))), F_{DW}(F_{1×1}(F_{previous_layer})))   (3)
F_3 = F_{IRB}(F_2)   × 3 times   (4)
F_4 = F_{IRB}(F_3)   × 6 times   (5)
F_5 = F_{IRB}(F_4)   × 3 times   (6)
Following this, Equations (4)–(6) represent the subsequent IRB blocks, each designed to capture increasingly complex and abstract features from the input data:
F_{Decoder} = F_{decode}(F_{prior}(F_{class}, F_{box}))   (7)
Equation (7) introduces the SSD-style object detection head, where the feature map output from the MobileNetV3 backbone serves as input to the detection module. At this stage, two parallel 1 × 1 convolutional layers are applied to the feature map. In the first branch, the bounding box regression head predicts offset coordinates for each predefined anchor box, while in the second branch, the classification head predicts the class scores associated with each anchor. In the subsequent decoding stage, the predicted offsets are applied to the anchor boxes to compute the final bounding box coordinates in the original image space. The raw classification logits are then passed through a sigmoid activation function to obtain confidence scores. To improve precision and minimize false positives, detections with low confidence scores are discarded through threshold-based filtering:
F_{Final} = F_{NMS}(σ(F_{Decoder}), F_{threshold}(F_{Decoder}))   (8)
In the final stage, represented by Equation (8), NMS is applied to address overlapping predictions. NMS systematically eliminates redundant bounding boxes that refer to the same object by calculating the Intersection over Union (IoU) among them and retaining only the highest-confidence box within each group. This postprocessing step ensures that the final output is both accurate and non-redundant. The result of this layer comprises a set of refined bounding boxes, their associated class labels, and the corresponding confidence scores, which collectively represent the final detection output of the model.
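A minimal sketch of the decoding and filtering steps in Equations (7) and (8) is given below, assuming the anchor offsets have already been applied to obtain image-space boxes. The confidence and IoU thresholds are illustrative values, not the settings used in our experiments.

```python
import torch
from torchvision.ops import nms

def postprocess(boxes: torch.Tensor, logits: torch.Tensor,
                score_thresh: float = 0.5, iou_thresh: float = 0.45):
    """Sketch of Equations (7)-(8): sigmoid scores, confidence filtering, then NMS.

    boxes  : (N, 4) decoded anchor boxes in image coordinates (x1, y1, x2, y2)
    logits : (N, C) raw class logits for N anchors and C classes
    """
    scores, labels = torch.sigmoid(logits).max(dim=1)   # best class score per anchor
    keep = scores > score_thresh                        # threshold-based filtering
    boxes, scores, labels = boxes[keep], scores[keep], labels[keep]
    kept = nms(boxes, scores, iou_thresh)               # class-agnostic NMS for brevity
    return boxes[kept], labels[kept], scores[kept]
```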

4. The Experiment and Results

To validate the objectives of the lightweight object detection model, we conducted rigorous experiments on a custom UAV-acquired dataset comprising trees in varying health conditions. The evaluation focused on the balance between detection accuracy, computational efficiency, and real-time applicability. All experiments were designed and conducted in settings that simulate practical field conditions with diverse lighting, seasonal, and vegetation characteristics. The model was compared with several state-of-the-art benchmark detectors using Precision, Recall, mean Average Precision (mAP), and F1-score as evaluation metrics. In addition to these benchmarks, model size and inference latency were measured to determine the suitability of the framework for edge deployment on UAV platforms. The proposed method outperformed the alternatives in accurately detecting targets while maintaining low processing demands.

4.1. Dataset

This study gives special attention to drone-based sensing for tree health management. We built a dataset of high-resolution color (RGB) images captured by a drone-mounted camera. The dataset contains 3128 images annotated with bounding boxes and class labels across three categories: healthy trees (1243 samples), partially dead trees (964 samples), and fully dead trees (921 samples). In total, the dataset includes 6905 annotated instances distributed across various lighting, seasonal, and environmental conditions. The images come from real-world forested areas surveyed through low-altitude UAV flights, supplemented by selected public drone footage from environmental monitoring repositories. Annotations were performed manually by experts using the LabelImg tool (version 1.8.6), ensuring high-quality class labels and bounding boxes. To standardize input dimensions and reduce computational overhead, all images were resized to 320 × 320 pixels. To promote generalization, the dataset follows a stratified sampling scheme across the three health categories and captures phenotypic changes such as leaf discoloration, canopy thinning, structural dryness, and complete defoliation. Class balance and sample diversity were carefully maintained to ensure a robust training distribution suitable for wildfire risk modeling (Figure 2).
All input images are standardized to a resolution of 320 × 320 pixels, a dimension chosen to strike a balance between preserving sufficient detail for reliable feature extraction and maintaining computational efficiency during model training and inference. This uniform resizing plays a critical role in minimizing inconsistencies caused by heterogeneous source formats and ensures that the model receives consistent input across all training samples. This resolution normalization, along with stratified sampling of different tree conditions, contributes to improved model robustness and accuracy. It enables effective generalization to diverse forest and orchard environments, thereby supporting the practical deployment of the proposed detection system for proactive monitoring of tree health and early intervention in wildfire-prone zones (Figure 3).
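The resizing step described above can be expressed as a simple torchvision pipeline. The photometric jitter shown is an assumption intended to mimic lighting and seasonal variability; geometric augmentations (e.g., flips) are omitted because they would also require transforming the bounding-box annotations.

```python
from torchvision import transforms

# Resize every UAV frame to the standardized 320 x 320 input and apply mild
# photometric jitter; the jitter values are illustrative assumptions.
preprocess = transforms.Compose([
    transforms.Resize((320, 320)),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),   # PIL image (H, W, C) -> float tensor (C, H, W) in [0, 1]
])
```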
To improve the model’s robustness and generalizability, data were gathered from images taken in different seasons, covering varying lighting conditions, foliage colors, and canopy densities. As a result, the model can learn the phenotypic variations in tree appearance that naturally arise from seasonal changes, such as leaf discoloration in autumn or canopy thinning in early spring. These seasonally diverse samples help the model distinguish genuine health-related decline from normal seasonal phenological change, leading to classification with less seasonal bias.

4.2. The Experimental Results

To ensure fair and reproducible evaluation, we adopted the standard COCO evaluation protocol using an Intersection over Union (IoU) threshold of 0.5. The output of the model undergoes confidence thresholding followed by non-maximum suppression (NMS) to eliminate redundant detections. Precision, Recall, and F1-score were computed based on the counts of true positives, false positives, and false negatives. Mean Average Precision (mAP@0.5) was calculated by averaging the precision across recall levels for each class, using a fixed IoU threshold. All reported metrics were obtained by evaluating the model on a dedicated validation subset of the UAV-acquired dataset, ensuring that the testing environment reflects diverse tree health conditions and varying environmental scenarios. This process guarantees a consistent and robust assessment of the model’s detection performance. Table 1 presents a direct comparison between the baseline SSD-style model and our proposed architecture after 150 training epochs. The results clearly demonstrate the superiority of the proposed method across all evaluation metrics. Specifically, our model achieved a Precision of 93.9%, Recall of 88.7%, mean Average Precision (mAP) of 86.3%, and an F1-score of 89.3%, substantially outperforming the baseline SSD model, which attained 88.5%, 85.2%, 84.6%, and 86.3% on the respective metrics. This improvement highlights the enhanced capability of our lightweight backbone and SSD-style head to detect and classify trees with high reliability and reduced false positives.
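For clarity, the sketch below shows how a matched detection (IoU ≥ 0.5) contributes to the Precision, Recall, and F1 computations. It is a simplified single-class illustration of the protocol, not the full COCO evaluation code.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall_f1(tp, fp, fn):
    """Detection metrics from counts; a prediction is a TP when IoU >= 0.5 with a ground truth."""
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    return precision, recall, f1

# Example: 94 true positives, 6 false positives, 6 false negatives
print(precision_recall_f1(94, 6, 6))   # approximately (0.94, 0.94, 0.94)
```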
Further comparative analysis with state-of-the-art object detection models is provided in Table 2, where our model is benchmarked against multiple YOLO variants (YOLOv5s through YOLOv9s) and the standard SSD model.
As shown, the proposed model consistently outperformed all other configurations. It achieved the highest Precision of 94.07%, Recall of 93.74%, mAP of 90.73%, and F1-score of 91.03%, surpassing even the latest YOLOv9s architecture. These results demonstrate not only the competitive accuracy of the proposed approach but also its robustness and generalization capability across various detection scenarios (Figure 4).
The notable improvement over both the SSD baseline and multiple YOLO versions confirms the effectiveness of integrating the MobileNetV3-Small backbone with a custom SSD-style detection head, offering a lightweight yet powerful solution suitable for deployment on edge devices and UAV platforms (Figure 5).

4.3. Comparing with SOTA Models

To evaluate the effectiveness and competitiveness of the proposed lightweight object detection framework, we conducted a comprehensive comparison against 20 state-of-the-art baseline models widely used in real-time object detection tasks, particularly in edge-AI and UAV-based environmental monitoring applications. The selected models span both single-stage and two-stage detectors, incorporating various architectural designs from heavy backbones to lightweight mobile configurations. These baselines were chosen to reflect a diverse range of trade-offs between accuracy, speed, and computational efficiency. All models were trained and evaluated on the same curated drone-based dataset for dead, partially dead, and healthy tree classification, with identical image preprocessing (320 × 320 resolution) and data augmentation strategies to ensure consistency and fairness. The primary evaluation metrics included Precision, Recall, mean Average Precision (mAP@0.5), and F1-score. Inference speed and model size were also recorded for real-world deployment considerations. Table 3 presents the comparative results of all 20 baseline models alongside our proposed MobileNetV3-Small + SSD framework.
As observed from Table 3, the proposed model outperforms all 20 baseline detectors across all four metrics. Notably, while YOLOv9s achieved strong results (Precision: 92.9%, Recall: 91.1%), our architecture surpassed it by a noticeable margin (Precision: 94.1%, Recall: 93.7%, F1: 91.0%), demonstrating its superior capability in distinguishing between tree health conditions. Although the proposed model yielded the best results on every evaluation metric, it is worth noting that other models, such as YOLOv9s, achieved comparable performance (mAP: 90.0% vs. 90.7%). YOLOv9s relies on a more sophisticated backbone and may provide extra flexibility when computational resources are less constrained. Nevertheless, the practical benefit of our model is that it was designed with efficiency in mind, specifically for edge deployment: it uses less memory (8.7 MB vs. ~25 MB), achieves a higher frame rate on the Jetson Xavier NX, and retains competitive accuracy even when resources are scarce. These trade-offs show that although YOLOv9s is a strong alternative, our solution is particularly suitable for real-time UAV-based environmental monitoring, where low-latency inference and a small model size are essential.
Latency and model size benchmarks were conducted on the Jetson Xavier NX to represent realistic embedded conditions; however, physical UAV field deployment tests were not performed in this study. Therefore, factors such as energy consumption per inference, battery discharge rate, and the impact on total flight time were not directly measured. Such parameters are essential for evaluating end-to-end system feasibility in extended surveillance tasks. We recognize this gap and intend to conduct UAV flight experiments in future work to determine the operational trade-offs between model efficiency and mission duration in changing forest environments (Figure 6).
Although heavier models like EfficientDet-D1 and YOLOv8s exhibited competitive accuracy, they come at the cost of larger model sizes and higher latency, making them less suitable for onboard UAV applications. In contrast, our model achieves a strong balance of lightweight design and high detection precision, thanks to the synergy between the MobileNetV3-Small backbone and SSD’s efficient one-pass detection mechanism. To further highlight the suitability for real-time field use, Table 4 presents a comparison of model size and inference latency.
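As an illustration of how the latency figures in Table 4 can be reproduced, the following sketch times repeated forward passes on 320 × 320 inputs. The device, warm-up, and run counts are assumptions, and absolute numbers depend on the hardware (a Jetson Xavier NX in our case).

```python
import time
import torch

def measure_fps(model, device: str = "cpu", n_warmup: int = 10, n_runs: int = 100):
    """Rough frames-per-second benchmark for a single 320 x 320 image.
    On a Jetson device, device='cuda' would typically be used."""
    model.eval().to(device)
    dummy = [torch.rand(3, 320, 320, device=device)]
    with torch.no_grad():
        for _ in range(n_warmup):                  # warm up caches / CUDA kernels
            model(dummy)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_runs):
            model(dummy)
        if device == "cuda":
            torch.cuda.synchronize()
    return n_runs / (time.perf_counter() - start)
```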
The comparison results validate the superiority of the proposed object detection model not only in terms of detection accuracy but also in real-time performance on resource-constrained edge devices. Its compact architecture, built on MobileNetV3-Small and SSD, achieves an optimal balance between detection precision and computational efficiency, making it a viable solution for forest fire prevention systems deployed via UAVs. The model’s robustness in classifying subtle phenotypic cues from aerial tree imagery, combined with its lightweight deployment profile, supports practical and scalable wildfire risk monitoring.
The ablation study in Figure 7 illustrates the incremental impact of architectural components on the object detection accuracy (measured by mean Average Precision, mAP %) of the proposed wildfire risk detection model. Each horizontal gray bar represents a model variant, starting from the basic SSD baseline and gradually integrating advanced enhancements.
We conducted a stratified evaluation, categorizing the test set by visual conditions, to measure the robustness of the proposed model under environmental conditions similar to those in the real world. The images were classified into three main lighting conditions: bright sunlight with high contrast and shadow interference, overcast scenes with balanced illumination, and hazy or low-light settings characterized by reduced contrast and occasional fog. Benchmarking results showed that the method achieved 90.7% mAP@0.5 under overcast illumination, which was established as the neutral baseline. Performance decreased slightly to 89.3% mAP in bright sunlight and fell further to 88.1% in hazy or low-light scenes. These results indicate that the method generalizes efficiently across different illumination conditions, although poor lighting reduces performance by roughly 2–2.5 percentage points. This suggests that future versions of the system could benefit from preprocessing methods such as illumination normalization, contrast enhancement, or adaptive learning strategies to improve resilience under difficult field conditions (Table 5).

5. Discussion

The MobileNetV3-Small + SSD framework, while accurate, low-latency, and well suited for edge deployment, still has several limitations. The model has not been validated across tree species or broader vegetation types: it cannot identify the species that make up a forest, nor distinguish deciduous from evergreen trees or other vegetation types, which may impair its performance in different ecosystems. In addition, tree classification is limited to three health categories; while these capture the practical concerns of forest health, they oversimplify the ecological variability found in the real world. It should also be mentioned that the data cover only four seasons. The model is sometimes confused when a healthy tree changes color naturally and may then classify it as partially dead. Similarly, overlapping canopies, shadows, and low-contrast lighting can cause false alarms or missed detections, especially for trees that are small or partially occluded. The model uses a fixed 320 × 320 input resolution to prioritize real-time performance; consequently, high-frequency details of smaller crowns may be lost, reducing sensitivity. A higher resolution would improve accuracy but would also increase memory usage and inference time, which is undesirable on UAV platforms. Although inference speed was measured on a Jetson Xavier NX, no real-world UAV field deployment was performed, so battery usage, mission-duration effects, and environmental robustness remain untested. These figures are important in an operational context, and the research team plans to include them in future work. Furthermore, the dataset is currently limited to 3128 images; this may cover the main distribution of species but is unlikely to capture rare ecological edge cases. Tree health was assessed through expert opinion, and no additional physiological or thermal verification has yet been carried out. Future work will therefore expand the dataset in terms of species and spectral complexity with richer annotations, introduce an adaptive resolution-based detection mechanism, and deploy the system onboard a UAV for real-time validation of anomaly detection in the field.

6. Conclusions

This work proposes a novel, lightweight, and efficient object detection framework for the early detection of fire-threatened trees using RGB imagery obtained from UAVs. By combining the MobileNetV3-Small backbone with an SSD-style detection head, the proposed model was shown to meet the requirements of a forest monitoring system, achieving a compelling trade-off among detection accuracy, efficiency, and real-time performance. Comprehensive experiments demonstrate that the model consistently outperformed the conventional SSD and several recent YOLO variants on Precision, Recall, mAP, F1-score, and other evaluation metrics. Moreover, the framework is characterized by a small model size and low inference latency, making it suitable for resource-constrained UAV platforms operating in dynamic forest environments. The model also shows robustness to changes in environmental factors and to subtle phenotypic differences between tree health classes, supporting its applicability for proactively mitigating wildfire risks. The framework will be expanded further by integrating thermal or multispectral imaging and by extending the system toward real-time anomaly detection frameworks that can self-initiate proactive measures. Overall, the framework proposed in this study marks a step toward transforming autonomous aerial devices into proactive assets for data-driven surveillance and preventive monitoring of forest health and fire risks.

Author Contributions

Methodology, A.A., S.U., A.K., D.M. and Y.I.C.; software, A.A., A.K., A.D., T.B. and S.U.; validation, M.Z., M.M., D.M. and Y.I.C.; formal analysis, A.A., A.K., S.U., A.D., T.B. and M.Z.; resources, D.M., A.D., T.B., M.Z. and M.M.; data curation, D.M., A.D., T.B., M.Z. and M.M.; writing—original draft, A.A., A.K. and S.U.; writing—review and editing, M.Z., M.M. and Y.I.C.; supervision, S.U., A.A. and Y.I.C.; project administration, S.U. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by the Development of Pyeongtaek City Cloud Data Service and Urban Forest Growth AI System from Daejo P&I.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All datasets used in this study are openly available online.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bolan, S.; Sharma, S.; Mukherjee, S.; Isaza, D.F.G.; Rodgers, E.M.; Zhou, P.; Hou, D.; Scordo, F.; Chandra, S.; Siddique, K.H.; et al. Wildfires under changing climate, and their environmental and health impacts. J. Soils Sediments 2025, 25, 1057–1073. [Google Scholar] [CrossRef]
  2. Calderisi, G.; Rossetti, I.; Cogoni, D.; Fenu, G. Delayed Vegetation Mortality After Wildfire: Insights from a Mediterranean Ecosystem. Plants 2025, 14, 730. [Google Scholar] [CrossRef] [PubMed]
  3. Dai, D.; Yu, D.; Gao, W.; Perry, G.L.; Paterson, A.M.; You, C.; Zhou, S.; Xu, Z.; Huang, C.; Cao, D.; et al. Leaf Dry Matter Content Is Phylogenetically Conserved and Related to Environmental Conditions, Especially Wildfire Activity. Ecol. Lett. 2025, 28, e70056. [Google Scholar] [CrossRef] [PubMed]
  4. Aibin, M.; Li, Y.; Sharma, R.; Ling, J.; Ye, J.; Lu, J.; Zhang, J.; Coria, L.; Huang, X.; Yang, Z.; et al. Advancing forest fire risk evaluation: An integrated framework for visualizing area-specific forest fire risks using uav imagery, object detection and color mapping techniques. Drones 2024, 8, 39. [Google Scholar] [CrossRef]
  5. Liu, K.; Qin, B.; Hao, R.; Chen, X.; Zhou, Y.; Zhang, W.; Fu, Y.; Yu, K. Genetic analyses reveal wildfire particulates as environmental pollutants rather than nutrient sources for corals. J. Hazard. Mater. 2025, 485, 136840. [Google Scholar] [CrossRef] [PubMed]
  6. Tezcan, B.; Eren, T. Forest fire management and fire suppression strategies: A systematic literature review. Nat. Hazards 2025, 121, 10485–10515. [Google Scholar] [CrossRef]
  7. Prados, A.I.; Allen, M. Key Governance Practices That Facilitate the Use of Remote Sensing Information for Wildfire Management: A Case Study in Spain. Remote Sens. 2025, 17, 649. [Google Scholar] [CrossRef]
  8. Honary, R.; Shelton, J.; Kavehpour, P. A Review of Technologies for the Early Detection of Wildfires. ASME Open J. Eng. 2025, 4, 040803. [Google Scholar] [CrossRef]
  9. Watt, M.S.; Holdaway, A.; Camarretta, N.; Locatelli, T.; Jayathunga, S.; Watt, P.; Tao, K.; Suárez, J.C. Mapping Windthrow Risk in Pinus radiata Plantations Using Multi-Temporal LiDAR and Machine Learning: A Case Study of Cyclone Gabrielle, New Zealand. Remote Sens. 2025, 17, 1777. [Google Scholar] [CrossRef]
  10. Junttila, S. Remote sensing approaches for assessing and monitoring forest health. In Forest Microbiology; Academic Press: Cambridge, MA, USA, 2025; pp. 419–431. [Google Scholar]
  11. Liu, H.; Zhang, F.; Xu, Y.; Wang, J.; Lu, H.; Wei, W.; Zhu, J. Tfnet: Transformer-based multi-scale feature fusion forest fire image detection network. Fire 2025, 8, 59. [Google Scholar] [CrossRef]
  12. Abdusalomov, A.; Umirzakova, S.; Bakhtiyor Shukhratovich, M.; Mukhiddinov, M.; Kakhorov, A.; Buriboev, A.; Jeon, H.S. Drone-Based Wildfire Detection with Multi-Sensor Integration. Remote Sens. 2024, 16, 4651. [Google Scholar] [CrossRef]
  13. Seidel, L.; Gehringer, S.; Raczok, T.; Ivens, S.N.; Eckardt, B.; Maerz, M. Advancing Early Wildfire Detection: Integration of Vision Language Models with Unmanned Aerial Vehicle Remote Sensing for Enhanced Situational Awareness. Drones 2025, 9, 347. [Google Scholar] [CrossRef]
  14. Congress, S.S.C.; Puppala, A.J.; Banerjee, A.; Patil, U.D. Identifying hazardous obstructions within an intersection using unmanned aerial data analysis. Int. J. Transp. Sci. Technol. 2021, 10, 34–48. [Google Scholar] [CrossRef]
  15. Vasilakos, C.; Verykios, V.S. Burned Olive Trees Identification with a Deep Learning Approach in Unmanned Aerial Vehicle Images. Remote Sens. 2024, 16, 4531. [Google Scholar] [CrossRef]
  16. Boroujeni, S.P.H.; Razi, A.; Khoshdel, S.; Afghah, F.; Coen, J.L.; O’Neill, L.; Fule, P.; Watts, A.; Kokolakis, N.M.T.; Vamvoudakis, K.G. A comprehensive survey of research towards AI-enabled unmanned aerial systems in pre-, active-, and post-wildfire management. Inf. Fusion 2024, 108, 102369. [Google Scholar] [CrossRef]
  17. Wang, D.; Sui, W.; Ranville, J.F. Hazard identification and risk assessment of groundwater inrush from a coal mine: A review. Bull. Eng. Geol. Environ. 2022, 81, 421. [Google Scholar] [CrossRef]
  18. Kureel, N.; Sarup, J.; Matin, S.; Goswami, S.; Kureel, K. Modelling vegetation health and stress using hypersepctral remote sensing data. Model. Earth Syst. Environ. 2022, 8, 733–748. [Google Scholar] [CrossRef]
  19. Felter, S.P.; Bhat, V.S.; Botham, P.A.; Bussard, D.A.; Casey, W.; Hayes, A.W.; Hilton, G.M.; Magurany, K.A.; Sauer, U.G.; Ohanian, E.V. Assessing chemical carcinogenicity: Hazard identification, classification, and risk assessment. Insight from a Toxicology Forum state-of-the-science workshop. Crit. Rev. Toxicol. 2021, 51, 653–694. [Google Scholar] [CrossRef] [PubMed]
  20. Drechsel, J.; Forkel, M. Remote sensing forest health assessment—A comprehensive literature review on a European level. Cent. Eur. For. J. 2025, 71, 14–39. [Google Scholar] [CrossRef]
  21. Zhao, Q.; Qu, Y. The retrieval of ground ndvi (normalized difference vegetation index) data consistent with remote-sensing observations. Remote Sens. 2024, 16, 1212. [Google Scholar] [CrossRef]
  22. Li, N.; Huo, L.; Zhang, X. Using only the red-edge bands is sufficient to detect tree stress: A case study on the early detection of PWD using hyperspectral drone images. Comput. Electron. Agric. 2024, 217, 108665. [Google Scholar] [CrossRef]
  23. Gao, S.; Yan, K.; Liu, J.; Pu, J.; Zou, D.; Qi, J.; Mu, X.; Yan, G. Assessment of remote-sensed vegetation indices for estimating forest chlorophyll concentration. Ecol. Indic. 2024, 162, 112001. [Google Scholar] [CrossRef]
  24. Korpela, I.; Polvivaara, A.; Hovi, A.; Junttila, S.; Holopainen, M. Influence of phenology on waveform features in deciduous and coniferous trees in airborne LiDAR. Remote Sens. Environ. 2023, 293, 113618. [Google Scholar] [CrossRef]
  25. Hamzah, H.; Zainal, M.H.; Zakaria, M.A. Unmanned Aerial Vehicle (UAV) in Tree Risk Assessment (TRA): A Systematic Review. Arboric. Urban For. 2025, 51. [Google Scholar] [CrossRef]
  26. Isa, M.M.; Zainal, M.H.; Zakaria, M.A.; Tahar, K.N.; Zhuang, Q. Utilizing Tree Risk Assessment (TRA) and Unmanned Aerial Vehicle (UAV) as a pre-determine tree hazard identification. Environ. Behav. Proc. J. 2025, 10, 359–366. [Google Scholar]
  27. Ecke, S.; Stehr, F.; Frey, J.; Tiede, D.; Dempewolf, J.; Klemmt, H.J.; Endres, E.; Seifert, T. Towards operational UAV-based forest health monitoring: Species identification and crown condition assessment by means of deep learning. Comput. Electron. Agric. 2024, 219, 108785. [Google Scholar] [CrossRef]
  28. Godinez-Garrido, G.; Gonzalez-Islas, J.C.; Gonzalez-Rosas, A.; Flores, M.U.; Miranda-Gomez, J.M.; Gutierrez-Sanchez, M.D.J. Estimation of Damaged Regions by the Bark Beetle in a Mexican Forest Using UAV Images and Deep Learning. Sustainability 2024, 16, 10731. [Google Scholar] [CrossRef]
  29. Mittal, P. A comprehensive survey of deep learning-based lightweight object detection models for edge devices. Artif. Intell. Rev. 2024, 57, 242. [Google Scholar] [CrossRef]
  30. Liu, H.I.; Galindo, M.; Xie, H.; Wong, L.K.; Shuai, H.H.; Li, Y.H.; Cheng, W.H. Lightweight deep learning for resource-constrained environments: A survey. ACM Comput. Surv. 2024, 56, 1–42. [Google Scholar] [CrossRef]
  31. Oliveira, F.; Costa, D.G.; Assis, F.; Silva, I. Internet of Intelligent Things: A convergence of embedded systems, edge computing and machine learning. Internet Things 2024, 26, 101153. [Google Scholar] [CrossRef]
  32. Grzesik, P.; Mrozek, D. Combining machine learning and edge computing: Opportunities, challenges, platforms, frameworks, and use cases. Electronics 2024, 13, 640. [Google Scholar] [CrossRef]
Figure 1. Architecture of the proposed lightweight object detection model integrating MobileNetV3-Small as the feature extraction backbone and an SSD-style head for multi-scale tree health classification. The backbone extracts hierarchical visual features, which are processed by the SSD detection head to generate bounding box predictions and confidence scores in a single forward pass.
Figure 2. Visual examples from the UAV-acquired dataset used for model training. (a) A healthy tree, characterized by dense foliage and structural integrity; (b) a partially dead tree, exhibiting crown thinning and minor defoliation; (c) a fully dead tree, with no leaves and brittle branch structures. Color-coded annotations were manually applied to highlight class-specific features used during labeling.
Figure 3. Illustration of preprocessing and feature enhancement steps applied to input UAV imagery. The raw RGB image (top-left) undergoes edge detection (top-right), texture filtering (bottom-left), and gradient enhancement (bottom-right). These steps amplify fine morphological details—such as branch thinning, leaf density, and silhouette deformation—that improve the model’s ability to distinguish between healthy and fire-prone trees.
Figure 4. Precision–Recall (PR) curve for the proposed MobileNetV3-Small + SSD model on the UAV-acquired dataset. The area under the curve (AP) indicates strong overall classification performance, balancing both high precision and recall in identifying fire-prone trees from aerial imagery.
Figure 5. Example detection results of the proposed MobileNetV3-Small + SSD model on aerial UAV imagery. Red bounding boxes with “at_risk” labels denote trees classified as fire-prone (dead or partially dead), while blue boxes with “und_risk” labels indicate trees not currently identified as high risk. Confidence scores are shown for each prediction. The model demonstrates robust localization and classification of hazardous trees across varying canopy densities and environmental conditions.
Figure 6. Trade-off analysis between model size and detection accuracy (mAP %) for selected lightweight object detection models. The color gradient represents inference speed (FPS) on Jetson Xavier NX, highlighting the proposed model’s superior efficiency and precision for real-time UAV-based deployment.
Figure 7. Ablation study showing the contribution of key architectural components to the proposed model’s performance. Each enhancement—MobileNetV3 backbone, SE blocks, and h-swish activation—leads to incremental gains in detection accuracy, validating their importance in the final architecture.
Table 1. The results of comparison between the SSD-style model and the proposed model after 150 epochs.
Model            Precision (%)   Recall (%)   mAP (%)   F1-Score (%)
SSD              88.5            85.2         84.6      86.3
Proposed model   93.9            88.7         86.3      89.3
Table 2. The results of comparison among the proposed and SOTA models.
Modification     Precision (%)   Recall (%)   mAP (%)   F1-Score (%)
SSD              88.87           85.2         83.16     81.13
YOLOv5s          86.04           84.19        84.19     86.95
YOLOv6s          89.69           87.78        85.17     85.22
YOLOv7s          91.09           87.98        88.65     88.11
YOLOv8s          90.78           90.9         87.62     90.18
YOLOv9s          92.96           91.12        90.01     91.00
Proposed model   94.07           93.74        90.73     91.03
Table 3. Performance comparison of proposed model with 20 baseline object detectors.
Model             Backbone            Precision (%)   Recall (%)   mAP@0.5 (%)   F1-Score (%)
SSD               VGG16               88.5            85.2         84.6          86.3
SSD-Lite          MobileNetV2         89.0            85.6         85.1          87.2
YOLOv5s           CSPDarknet          86.0            84.2         84.2          86.9
YOLOv5m           CSPDarknet          87.9            86.5         86.0          87.2
YOLOv6s           EfficientRepV1      89.7            87.8         85.2          85.2
YOLOv6n           EfficientRepLite    88.3            85.9         83.9          86.1
YOLOv7s           E-ELAN              91.1            88.0         88.6          88.1
YOLOv7-tiny       E-ELAN-lite         89.8            87.0         86.4          87.3
YOLOv8s           Efficient YOLO      90.8            90.9         87.6          90.2
YOLOv8n           Efficient YOLO      89.3            87.1         85.3          87.9
YOLOv9s           RT-DETR Backbone    92.9            91.1         90.0          91.0
Faster R-CNN      ResNet50            86.2            84.7         82.8          85.4
RetinaNet         ResNet50            87.5            85.9         84.7          86.7
EfficientDet-D0   EfficientNetB0      90.1            88.6         86.5          89.3
EfficientDet-D1   EfficientNetB1      90.8            89.4         88.2          90.1
NanoDet           ShuffleNetV2        87.2            83.9         82.5          85.5
PP-YOLOE-Lite     MobileNetV3         91.5            89.8         88.3          90.6
PicoDet           MobileNetV2         89.9            87.2         85.4          88.5
CenterNet-Tiny    Hourglass-lite      88.7            86.4         84.1          87.5
Tiny-YOLOv4       CSPDarknet-Tiny     88.4            85.7         83.7          86.9
Proposed Model    MobileNetV3-Small   94.1            93.7         90.7          91.0
Table 4. Inference speed and model size comparison.
Model             Model Size (MB)   Inference Time (ms/Image)   FPS (on Jetson Xavier NX)
YOLOv5s           14.4              18                          55.6
YOLOv7s           23.1              25                          40.0
YOLOv8s           21.0              22                          45.4
EfficientDet-D0   12.1              27                          37.0
PP-YOLOE-Lite     10.6              20                          50.0
Proposed Model    8.7               16                          62.5
Table 5. Evaluation of the proposed model’s mAP@0.5 across lighting-condition subsets.
Condition             mAP@0.5 (%)
Overcast/Neutral      90.7
Bright/Direct Light   89.3
Hazy/Low-Light        88.1
