1. Introduction
Forests, as a vital component of the Earth’s ecosystem, not only provide habitats for numerous species but also play a pivotal role in maintaining ecological balance, regulating climate, and conserving soil and water [
1]. However, with the intensification of global warming trends, the frequency of forest fires has increased significantly, and the scale of fires continues to expand, posing substantial threats to forest resources and ecological environments [
2]. Statistical data reveal that the 2018 California wildfires [
3] resulted in total economic losses of USD 148.5 billion, causing severe financial damage, while the 2020 Australian bushfires [
4] led to the death of approximately 3 billion animals, inflicting catastrophic ecological destruction. Forest fires in their initial stages often exhibit specific flame and smoke characteristics that form distinct contrasts with the natural environmental background. If these early signals can be accurately identified and the ignition points precisely located through advanced monitoring techniques, it will significantly reduce the difficulty, risks, and costs of subsequent fire suppression efforts. Therefore, establishing a timely and accurate forest fire monitoring system is of paramount importance for protecting forest resources and the ecological environment.
Traditional forest fire monitoring systems primarily rely on watchtower observations, manual patrols, and satellite remote sensing. Bao et al. [
5] proposed a modeling method to optimize the spatial coverage of forest fire watchtowers, providing a scientific basis for watchtower placement. In reference [
6], the authors developed a wireless sensor system for wildfire monitoring, advancing the development of fire-related sensor systems. Although these studies have contributed practical foundations and reference directions for wildfire monitoring technology, these conventional methods still exhibit significant limitations. Watchtower observations depend on manual monitoring and are highly susceptible to weather conditions, terrain, and visibility constraints, often leading to missed detections and false alarms [
7]. Manual patrols are limited by terrain complexity and labor costs, resulting in restricted coverage and inefficient time consumption [
8]. Satellite remote sensing faces challenges from cloud interference, hindering timely fire detection [
9]. While infrared sensors offer real-time capabilities, they are prone to environmental interference and entail high deployment costs [
10].
With the continuous advancement of machine learning techniques, the application of machine learning methods for extracting and analyzing various physical characteristics of flames and smoke (e.g., color, motion, spectral, spatial, temporal, and texture features) has become increasingly prevalent in forest fire detection. Khatami et al. [
11] integrated the Particle Swarm Optimization (PSO) algorithm with the K-medoids clustering algorithm to detect flame color features, achieving effective fire detection. Zhang et al. [
12] proposed an improved probabilistic fire detection method that combines flame color and motion features, enhancing detection performance while reducing false alarms and missed detection rates. Chen et al. [
13] developed a multi-feature fusion-based video flame detection approach, incorporating motion, color, and flicker characteristics of flames, which significantly improved both the speed and accuracy of fire detection. Toereyin et al. [
14] utilized wavelet transform for processing infrared videos, thereby increasing the accuracy and efficiency of flame detection. Benjamin et al. [
15] implemented a low-complexity yet efficient fire detection method by employing the Gray-Level Co-occurrence Matrix (GLCM) for texture analysis combined with color-based detection. Compared to traditional methods, these machine learning-based fire detection approaches offer advantages such as higher automation, faster detection speed, and broader coverage, effectively improving both the efficiency and accuracy of fire detection. However, conventional machine learning methods often require extensive feature engineering and parameter tuning when dealing with complex backgrounds and dynamic fire scenarios, which limits their generalization capability and consequently affects their practical performance [
16].
In recent years, the development of deep learning has provided new approaches for fire detection, attracting increasing attention due to its capability for automatic feature extraction and complex pattern learning. Convolutional Neural Network (CNN)-based object detection models have been widely applied in fire detection through end-to-end feature learning. Muhammad et al. [
17] proposed a fast and efficient CNN-based fire detection and localization method that balances detection accuracy and efficiency by using small convolutional kernels and a deeper architecture without fully connected layers. Wong et al. [
18] significantly reduced model size while maintaining detection performance by combining SSD with optimized network structures, enabling effective fire detection. Zheng et al. [
19] further validated the effectiveness of deep learning in fire detection using Fast R-CNN, with experiments demonstrating its strong performance in small object detection scenarios. The YOLO series models have become a research focus due to their speed and efficiency. An et al. [
20] developed a dynamic convolution-based YOLOv5 fire detection model that improves both accuracy and speed through optimized anchor box clustering, dynamic convolution, and pruning techniques. Talaat et al. [
21] proposed an improved YOLOv8-based fire detection method for smart city environments that achieves a balance between high accuracy and low false alarm rates, further verifying the effectiveness of YOLO models. However, the inherent local receptive field of CNNs presents challenges when dealing with the dynamic diffusion and semi-transparent characteristics of flames and smoke. The thin and widely distributed nature of flames and smoke leads to feature extraction difficulties, resulting in higher rates of missed detections and false alarms in practical scenarios [
22]. To address these limitations, researchers have introduced attention mechanisms to enhance global context awareness and improve both holistic perception of fire/smoke and local feature details. Majid et al. [
23] developed an attention mechanism and transfer learning-based CNN model for fire detection and localization in real-world images, demonstrating improved detection accuracy. Yar et al. [
24] proposed an optimized dual flame attention network for efficient and accurate flame detection, enhancing accuracy on edge devices. However, while attention mechanism-based improvement schemes can enhance global contextual awareness, their quadratic computational complexity substantially increases the computational and parameter overhead. This poses deployment challenges for resource-constrained edge devices such as drones, making such models difficult to adapt to edge computing platforms [
25,
26].
In recent years, an increasing number of researchers have focused on lightweight algorithm research to better adapt to resource-constrained UAV edge devices, enabling real-time forest fire detection through drone-based systems. Almeida et al. [
27] proposed a lightweight CNN model named EdgeFireSmoke for real-time video-based fire and smoke detection using RGB images. This model can perform image processing on edge devices, demonstrating the feasibility of deploying fire detection models on UAVs. Building upon the original EdgeFireSmoke approach, Almeida et al. [
28] further developed an improved fire detection model called EdgeFireSmoke++ by integrating artificial neural networks with deep learning techniques, achieving enhanced detection accuracy and efficiency. Guan et al. [
29] introduced a modified instance segmentation model for forest fire segmentation in UAV aerial images, which improves drone-based fire detection performance by optimizing the MaskIoU branch in the U-Net architecture to reduce segmentation errors. Huang et al. [
30] proposed an ultra-lightweight network for real-time forest fire detection on embedded sensing devices, improving both detection speed and accuracy through optimized lightweight network design and model compression techniques. Lin et al. [
31] developed a lightweight dynamic model that enhances the accuracy and efficiency of forest fire and smoke detection while improving its applicability on UAV platforms through lightweight technology integration. Although these works have significantly advanced the development of lightweight fire detection technology, their detection accuracy still requires further improvement.
In summary, breakthroughs in deep learning technology have facilitated advancements in forest fire object detection, while the widespread adoption of edge devices like UAVs has provided new approaches for fire detection tasks. However, existing methods still face challenges in addressing missed detections and false alarms caused by difficulties in feature extraction when dealing with the dynamic spread and semi-transparent characteristics of flames and smoke. Although introducing attention mechanisms into models can alleviate these issues, the excessive model parameters and computational demands create deployment challenges and reduced efficiency when applied to resource-constrained edge devices such as drones. To address these challenges, this paper proposes a lightweight hybrid receptive field model (LHRF-YOLO), aiming to further improve forest fire detection accuracy while reducing false alarms and missed detections. The model achieves an efficient balance between lightweight design and detection precision to better adapt to the computational resource limitations of edge devices like UAVs. The main contributions of this work are as follows:
- (1)
Multi-Receptive Field Extraction Module: By integrating the 2D Selective Scan Mechanism (SS2D) into Residual Multi-Branch Efficient Layer Aggregation Networks (RMELANs), we achieve hybrid extraction of both global and local features, enabling precise flame localization while maintaining linear computational complexity.
- (2)
Optimized Downsampling Approach: The proposed Dynamic Enhanced Patch Merge Downsampling (DEPMD) module employs feature reorganization and channel-wise dynamic enhancement strategies. This design effectively reduces spatial resolution while strengthening semantic representation, preserving fine-grained features with minimal computational overhead.
- (3)
Enhanced Multi-Scale Fusion: The introduced Scaling Weighted Fusion (SWF) module optimizes feature contribution allocation through dynamic scaling factors, effectively addressing the issues of feature dilution and fusion difficulties in traditional multi-scale approaches.
- (4)
Improved Texture Feature Extraction: Replacing SiLU with the Mish activation function significantly enhances the model’s capability to capture flame edges and sparse smoke texture features (a brief illustrative sketch of this activation swap is given after this list).
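As a point of reference for contribution (4), the following minimal sketch shows the Mish activation in the form commonly used with PyTorch, together with a hypothetical helper that swaps SiLU modules for Mish; the helper name and the swapping strategy are illustrative assumptions, not the exact implementation used in LHRF-YOLO.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    """Mish activation: x * tanh(softplus(x)).
    Smooth and non-monotonic, which is the property relied on here for
    capturing flame edges and thin smoke textures."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.tanh(F.softplus(x))

# Hypothetical helper: recursively replace every SiLU in a model with Mish.
def swap_silu_for_mish(module: nn.Module) -> None:
    for name, child in module.named_children():
        if isinstance(child, nn.SiLU):
            setattr(module, name, Mish())
        else:
            swap_silu_for_mish(child)

if __name__ == "__main__":
    x = torch.linspace(-3, 3, 7)
    print(Mish()(x))   # Mish response
    print(F.silu(x))   # SiLU response for comparison
```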
3. Results
3.1. Experimental Environment
The detailed specifications of the experimental setup are provided in
Table 2, while the hyperparameter configuration of the model is presented in
Table 3.
3.2. Evaluation Metrics
In forest fire object detection tasks, to comprehensively and effectively evaluate the model, this paper adopts multi-dimensional metrics, including Precision (P), Recall (R), F1-score (F1), mean Average Precision (mAP), Frames Per Second (FPS), Parameters, and GFLOPs.
Precision (P) reflects the proportion of true fire samples among those detected as fires, calculated as follows:

$$P = \frac{TP}{TP + FP} \quad (19)$$

In Equation (19), TP (true positive) indicates the number of correctly detected fire samples, and FP (false positive) indicates the number of samples misclassified as fire.
Recall (R) reflects the proportion of actual fire samples that are correctly detected, calculated as follows:

$$R = \frac{TP}{TP + FN} \quad (20)$$

In Equation (20), FN (false negative) indicates the number of undetected fire samples.
F1-score (F1) is the harmonic mean of Precision (P) and Recall (R), reflecting the model’s overall performance when balancing false alarms and missed detections. Its formula is as follows:

$$F1 = \frac{2 \times P \times R}{P + R} \quad (21)$$
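For concreteness, a minimal sketch of Equations (19)–(21) in Python, assuming detections have already been matched to ground truth so that TP, FP, and FN counts are available; the counts in the usage example are arbitrary.

```python
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Compute Precision, Recall, and F1 from matched detection counts
    (Equations (19)-(21))."""
    p = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

# Example with arbitrary counts: 90 correct detections, 10 false alarms, 20 misses.
print(precision_recall_f1(90, 10, 20))
```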
The mean Average Precision (mAP) reflects the model’s Average Precision (AP) under different Intersection over Union (IoU) thresholds, evaluating its comprehensive performance in both localization and classification. The formulas for IoU, AP, and mAP are as follows:

$$IoU = \frac{Area_{overlap}}{Area_{union}} \quad (22)$$

$$AP = \int_{0}^{1} P(R)\, dR \quad (23)$$

$$mAP = \frac{1}{N} \sum_{i=1}^{N} AP_i \quad (24)$$

In Equation (22), $Area_{overlap}$ represents the overlapping area between the predicted and ground truth bounding boxes, while $Area_{union}$ denotes their combined total area. In Equation (23), AP is calculated as the area under the Precision–Recall Curve (PR curve), where $P(R)$ indicates the Precision value at each Recall level. In Equation (24), $N$ represents the number of categories. For the mAP metrics, mAP50 uses an IoU threshold of 0.5; mAP75 uses 0.75; mAP95 uses 0.95; and mAP50–95 averages precision across IoU thresholds from 0.5 to 0.95 with 0.05 increments.
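A minimal sketch of the IoU computation in Equation (22), assuming axis-aligned boxes in (x1, y1, x2, y2) format; the box format is an illustrative assumption.

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two boxes sharing half of their area each; IoU is about 0.33.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```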
Frames Per Second (FPS) measures the number of images the model can process per second, calculated as follows:

$$FPS = \frac{1}{Preprocess\ Time + Inference\ Time + Postprocess\ Time} \quad (25)$$

In Equation (25), Preprocess Time represents the preprocessing time, Inference Time denotes the model inference time, and Postprocess Time indicates the post-processing time, all measured per image.
Parameters reflect the model’s spatial complexity, representing the total number of trainable parameters in the model, which directly affects storage requirements and memory consumption.
GFLOPs reflect the model’s temporal complexity, representing the number of floating-point operations required for one forward pass, which directly impacts computational resource consumption.
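As an illustration of how the Parameters and FPS metrics are typically obtained in practice, the sketch below assumes a PyTorch model; only inference time is timed here, whereas Equation (25) also includes preprocessing and post-processing, and GFLOPs would normally come from a separate profiler.

```python
import time
import torch

def count_parameters(model: torch.nn.Module) -> float:
    """Total trainable parameters, reported in millions (the 'Parameters' metric)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

@torch.no_grad()
def measure_fps(model: torch.nn.Module, img_size: int = 640, runs: int = 100) -> float:
    """Approximate FPS in the spirit of Equation (25): only inference time is
    measured; preprocessing and post-processing would be added in a full pipeline."""
    model.eval()
    x = torch.randn(1, 3, img_size, img_size)
    for _ in range(10):            # warm-up iterations
        model(x)
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    per_image = (time.perf_counter() - start) / runs
    return 1.0 / per_image
```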
3.3. Ablation Experiment
To validate the effectiveness of the proposed improvement modules, we designed systematic ablation experiments conducted on the FSDataset. Using YOLOv11n as the baseline model, this experiment progressively introduces Residual Multi-Branch Efficient Layer Aggregation Networks (RMELANs), Dynamic Enhanced Patch Merge Downsampling (DEPMD), Scaling Weighted Fusion (SWF), and the detail texture enhancement strategy of replacing SiLU with the Mish activation function, analyzing the impact of each component on model performance. Key evaluation metrics include Precision (P), Recall (R), F1-score (F1), mAP50, and mAP50–95, as well as the model complexity indicators Parameters and GFLOPs. Detailed experimental results are presented in
Table 4.
As shown in
Table 4, the baseline model (YOLOv11n) demonstrates fundamental detection capability in forest fire scenarios but exhibits certain performance bottlenecks, with mAP50 of 87.3% and mAP50–95 of 56.7%. The model’s parameter count of 2.58M and GFLOPs of 6.3 indicate significant optimization potential. Replacing the SiLU activation function with Mish maintains the parameter count while improving mAP50 to 87.4%, with mAP50–95 stabilizing around 56.6%. This demonstrates that Mish’s smoother gradient characteristics enhance edge feature extraction and mitigate gradient vanishing in fire detection. Introducing RMELAN increases mAP50 to 87.4% and mAP50–95 to 56.8% without parameter growth, as its hybrid SS2D global modeling and residual connections strengthen spatial context modeling of fire regions. DEPMD alone reduces parameters to 2.32M but decreases mAP50 to 87.0%, indicating that dynamic downsampling trades some accuracy for efficiency when used independently, though subsequent experiments reveal its synergistic benefits. SWF integration maintains parameter stability while boosting mAP50 to 87.4%, as adaptive weight scaling optimizes multi-scale feature fusion. Combining Mish and RMELAN achieves 87.5% mAP50 and 56.7% mAP50–95, showing their synergy in enhancing flame edge and structural perception.
Ablation studies demonstrate that the LHRF-YOLO model integrating RMELAN, DEPMD, SWF, and Mish activation achieves an optimal balance between performance and efficiency. The model attains 87.6% mAP50 and 57.0% mAP50–95, representing improvements of 0.3% and 0.3%, respectively, over the baseline model. The F1-score reaches 81.9%, showing a 0.6% enhancement compared to the baseline, indicating superior Precision–Recall balance. The parameter count is compressed to 2.25M (12.8% reduction), and computational complexity is reduced to 5.4 GFLOPs (14.3% decrease), effectively validating the lightweight design approach.
To comprehensively validate the effectiveness of the Mish activation function in enhancing texture representation, we specifically designed comparative experiments to evaluate its performance against commonly used activation functions, including SiLU, ReLU, Swish, and GELU. As illustrated in
Figure 10 and
Figure 11, the experimental results demonstrate that the Mish activation function exhibits significantly superior performance over other candidate solutions across all key evaluation metrics examined in this study. Consequently, adopting Mish as the core activation function plays a pivotal role in improving both the model’s texture perception capability and its recognition accuracy in forest fire detection.
3.4. Comparative Experiment
To verify the effectiveness of the proposed LHRF-YOLO model in forest fire scenarios, we conducted comparative experiments with multiple baseline models from the YOLO series as well as traditional models on the FSDataset. The evaluation metrics encompass detection accuracy (Precision, Recall, F1-score), multi-threshold average precision (mAP50, mAP75, mAP95, mAP50–95), inference speed (FPS), and model complexity (parameters and GFLOPs). Experimental results demonstrate that LHRF-YOLO achieves significant performance improvements while maintaining a lightweight architecture. Detailed experimental results are presented in
Table 5. Additionally, for a more intuitive performance comparison between LHRF-YOLO and different models, we plotted comparative curves, as shown in
Figure 12 and
Figure 13.
As shown in
Table 5, LHRF-YOLO demonstrates significantly superior comprehensive performance compared to traditional models, confirming the advantages of YOLO-series models in fire object detection scenarios. LHRF-YOLO achieves superior Precision (84.2%) and F1-score (81.9%) compared to all YOLO baseline models, while its Recall (79.8%) is only slightly lower than YOLOv9t (80.0%). Compared to the latest version YOLOv11n (Precision 83.5%, Recall 79.3%), LHRF-YOLO demonstrates improvements of 0.7% in Precision, 0.5% in Recall, and 0.6% in F1-score, indicating a better balance between flame localization accuracy and target coverage. LHRF-YOLO outperforms YOLOv11n in key metrics, including mAP50 (87.6%), mAP75 (62.2%), and mAP95 (2.7%), particularly showing a 0.3% improvement under a high threshold (mAP95). While its mAP50–95 reaches 57.0%, slightly lower than YOLOv9t (57.1%), these results demonstrate LHRF-YOLO’s stronger robustness across varying detection difficulties, especially in identifying small fire spots and occluded flames. With 2.25M parameters and 5.4 GFLOPs, LHRF-YOLO achieves a 12.8% parameter reduction and a 14.3% computation reduction compared to YOLOv11n (2.58M and 6.3). This lightweight design makes it more suitable for edge device deployment without compromising core detection performance.
It is worth noting that YOLOv9t slightly outperforms LHRF-YOLO on certain metrics, such as mAP50–95. This performance difference directly results from our model’s design philosophy of reducing computational complexity. Compared to YOLOv9t’s computation-intensive architecture, LHRF-YOLO maintains fundamental detection capabilities while significantly decreasing computational requirements; this deliberate design trade-off accounts for the minor performance gap in mAP metrics. LHRF-YOLO achieves a superior F1-score (81.9% vs. 81.7%), demonstrating a better balance between reducing missed detections and false alarms in practical applications. Moreover, LHRF-YOLO’s mAP95 (2.7%) outperforms YOLOv9t (2.5%) by 0.2%, confirming its more precise localization of high-confidence targets. Although LHRF-YOLO has slightly more parameters than YOLOv9t, its DEPMD-based dynamic downsampling optimizes computational pathways, resulting in significantly lower GFLOPs. Combined with SS2D’s global enhancement capability, this architecture achieves a higher F1-score and mAP95. Comprehensively, experiments demonstrate that LHRF-YOLO delivers optimal overall performance for forest fire detection, particularly suitable for real-world scenarios requiring both real-time operation and high accuracy.
To better illustrate how LHRF-YOLO compares with other advanced models,
Figure 14 and
Figure 15 present the fire detection performance of LHRF-YOLO and the baseline models under different environmental scenarios. As shown in
Figure 14, in low-concentration, highly-diffused smoke scenarios where comparative models exhibit missed detections for both widespread and locally concentrated smoke, LHRF-YOLO successfully detects both types simultaneously with significantly improved accuracy, benefiting from its innovative global context modeling module and multi-scale texture perception mechanism.
Figure 15 illustrates that in low-light complex backgrounds with coexisting flames and smoke, compared to other models suffering from localization deviations and low accuracy, LHRF-YOLO achieves more precise bounding box annotations with higher detection confidence than baseline models. Experimental results confirm the algorithm’s superior feature perception capability and anti-interference performance in complex fire scenarios compared to existing methods.
To more intuitively validate LHRF-YOLO’s perception capability of critical fire features and its advantages over other advanced models, we generated heatmaps based on Grad-CAM [
53] and conducted comparisons with other models in typical diffused fire scenarios. These heatmaps visually demonstrate the relative intensity of the model’s attention across spatial locations in the input image when performing fire detection tasks. The activation intensity in the heatmaps is normalized to the range [0, 1], where red indicates high-attention regions and blue represents low-attention areas (
Figure 16). As demonstrated in
Figure 16, under identical scenarios, LHRF-YOLO’s heatmaps exhibit superior feature-focusing capability: high-activation regions tightly conform to flame contours with more precise edge detail capture, while core areas show stronger activation intensity. These visualizations further substantiate LHRF-YOLO’s advantages—its enhanced spatial-context modeling effectively captures subtle flame edge variations while maintaining lightweight characteristics, ultimately improving fire detection reliability and precision.
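For readers unfamiliar with how such heatmaps are produced, the sketch below outlines a bare-bones Grad-CAM [53] computation using forward and backward hooks on a chosen convolutional layer; the choice of layer and the scalar score used for backpropagation are assumptions, since a detector exposes many candidate outputs.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, score_fn):
    """Minimal Grad-CAM: weight each feature map by the mean gradient of the
    target score, sum over channels, apply ReLU, and normalize to [0, 1]."""
    feats, grads = {}, {}

    def fwd_hook(_, __, output):
        feats["value"] = output

    def bwd_hook(_, __, grad_output):
        grads["value"] = grad_output[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        model.zero_grad()
        output = model(image)                 # image: (1, 3, H, W)
        score = score_fn(output)              # scalar, e.g. top fire confidence
        score.backward()
        weights = grads["value"].mean(dim=(2, 3), keepdim=True)  # GAP over H, W
        cam = F.relu((weights * feats["value"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                            align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
        return cam.squeeze().detach()
    finally:
        h1.remove()
        h2.remove()
```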
3.5. Generalization Experiment
To validate the cross-scenario adaptability of LHRF-YOLO and its generalization capability in understanding fire-related features, we conducted generalization experiments comparing LHRF-YOLO with multiple YOLO baseline models on the test sets of both the M4SFWD dataset and the original Fire and Smoke Dataset. The experimental results demonstrate that LHRF-YOLO achieves significantly superior generalization performance compared to baseline models while maintaining its lightweight architecture. Detailed experimental results are presented in
Table 6 and
Table 7.
As shown in
Table 6, on the test set of the Fire and Smoke Dataset, LHRF-YOLO demonstrates significant advantages in key metrics, highlighting its superiority as an improved version of YOLOv11n. Its F1-score reaches 82.1%, tying with the strongest baseline model, YOLOv9t, for the top position. Meanwhile, its Precision (84.4%) is notably higher than that of YOLOv9t (83.7%), showcasing its outstanding capability in reducing false detections—a critical feature for minimizing false alarms in real-world scenarios. The mAP50 of LHRF-YOLO (88.6%) is only 0.2% lower than that of YOLOv9t (88.8%), and in the more challenging mAP95 metric (4.0%), it trails YOLOv9t (4.2%) by merely 0.2%, indicating highly competitive performance in detecting difficult samples. Compared to baseline models such as YOLOv5n and YOLOv8n, LHRF-YOLO achieves F1-score improvements of 1.4%–2.1% and mAP50 gains of 1.5%–2.6%, validating the effectiveness of its enhanced modules. Although LHRF-YOLO’s composite metric mAP50–95 (61.3%) is slightly lower than that of YOLOv9t (62.1%) by 0.8%, it excels in reducing false detections and Precision, better aligning with the practical requirements of high accuracy and low error rates in real-world fire detection tasks.
As shown in
Table 7, on the M4SFWD dataset, LHRF-YOLO achieves comprehensive improvements over the original YOLOv11n model. With an F1-score of 38.4%, it ties with YOLOv9t for the top position among baseline models, while its Precision (52.9%) significantly surpasses that of YOLOv9t (51.8%), demonstrating superior performance in reducing false alarms. LHRF-YOLO attains the highest values in both mAP50 (30.2%) and mAP75 (4.9%), with particularly notable improvement in the high-confidence threshold mAP95—a 200% increase over YOLOv9t (from 0.02% to 0.06%). These results indicate enhanced capability in identifying challenging samples (e.g., small fire spots, low-light flames). Compared to YOLOv5n/v6n/v8n, LHRF-YOLO shows 2.1%–2.4% F1-score improvement and 2.3%–2.7% mAP50 enhancement, validating the effectiveness of its improved modules. While leading in most metrics, LHRF-YOLO’s mAP50–95 (10.3%) slightly trails YOLOv9t (10.8%). However, its superior false alarm reduction better aligns with practical fire detection requirements.
In summary, through generalization experiments conducted on the M4SFWD dataset and the original Fire and Smoke Dataset test set, we have validated the effectiveness of our proposed LHRF-YOLO model, which incorporates targeted improvements based on YOLOv11. The model demonstrates superior generalization capability on unseen complex data, outperforming baseline models and confirming its feasibility as a practical solution for UAV-based forest fire detection.
4. Discussion
The proposed LHRF-YOLO model in this study achieves an optimal balance between accuracy and efficiency for forest fire detection through multi-module collaborative optimization, with its innovative improvements providing new technical pathways for lightweight object detection model design.
The experimental results demonstrate that the Residual Multi-Branch Efficient Layer Aggregation Network (RMELAN) effectively addresses the insufficient global perception capability of traditional CNN models by incorporating SS2D’s global scanning mechanism. When dealing with the dynamic diffusion characteristics of flames and smoke, RMELAN exhibits superior feature representation capability. Compared with attention mechanism improvements, SS2D achieves equivalent global context modeling while maintaining linear computational complexity, validating its feasibility for visual tasks. The Dynamic Enhanced Patch Merge Downsampling module (DEPMD) preserves relatively complete feature information transmission while reducing computational cost and parameter count through PMD’s spatial reorganization and SE’s channel weighting. The innovative Scaling Weighted Fusion (SWF) module introduces a scaling factor mechanism that adaptively adjusts the contribution of multi-scale features, effectively mitigating feature dilution. This enables simultaneous detection of initial micro-fire sources and large-scale smoke diffusion, verifying the effectiveness of the multi-scale feature contribution allocation mechanism. Regarding model performance comparison, LHRF-YOLO demonstrates a remarkable balance between lightweight design and accuracy. With only 5.4 GFLOPs and 2.25M parameters, it shows superior suitability for edge computing deployment scenarios. In the high-threshold detection scenario (mAP95), it achieves a 12.5% relative improvement over YOLOv11n, confirming enhanced capability for high-precision localization requirements. Furthermore, in cross-scenario testing on M4SFWD, its mAP50 surpasses YOLOv11n by 4.5%, demonstrating superior generalization capability in complex environments.
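To make the structural ideas discussed above concrete, the following sketch gives one plausible reading of the DEPMD description (patch-merge spatial reorganization followed by SE-style channel weighting) and of SWF (learnable scaling factors governing multi-scale fusion); the exact layer configuration in LHRF-YOLO may differ.

```python
import torch
import torch.nn as nn

class DEPMDSketch(nn.Module):
    """Patch-merge downsampling (space-to-depth) plus SE-style channel
    re-weighting; one plausible reading of the DEPMD description."""
    def __init__(self, c_in: int, c_out: int, reduction: int = 4):
        super().__init__()
        self.merge = nn.PixelUnshuffle(2)            # (C, H, W) -> (4C, H/2, W/2)
        self.proj = nn.Conv2d(4 * c_in, c_out, kernel_size=1)
        self.se = nn.Sequential(                     # channel-wise dynamic enhancement
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c_out, c_out // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out // reduction, c_out, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        x = self.proj(self.merge(x))
        return x * self.se(x)

class SWFSketch(nn.Module):
    """Scaling Weighted Fusion: learnable scaling factors decide how much each
    scale contributes before summation."""
    def __init__(self, n_inputs: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(n_inputs))

    def forward(self, features):                     # list of same-shape feature maps
        w = torch.softmax(self.scale, dim=0)
        return sum(w[i] * f for i, f in enumerate(features))
```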
Despite the significant progress achieved by LHRF-YOLO, several limitations and shortcomings remain. First, the model’s generalization capability still requires further enhancement. While it demonstrates superior performance compared to baseline models in generalization experiments, its effectiveness remains susceptible to fluctuations when encountering interference factors such as climate variations, geographical changes, visibility shifts, motion blur, atmospheric scattering, smoke occlusion, and intense illumination. To address these challenges, we must intensify research on extreme environmental factors while simultaneously constructing high-quality datasets encompassing more complex scenarios to strengthen the model’s feature learning capacity. Second, while the model shows significant improvements over the original YOLOv11n, comparative analysis with YOLOv9t reveals that certain performance metrics have not yet been comprehensively surpassed, indicating room for further optimization. Although the current real-time performance (FPS) meets basic requirements for real-time detection, there remains optimization potential compared to YOLOv11n. Finally, the model’s consideration of complex environmental factors remains incomplete. Practical forest conditions (e.g., soil characteristics) and inherent sensor detection limits directly affect the visual representation consistency of fire regions, particularly threatening the detection reliability of small targets (small fire spots, thin smoke) and challenging the model’s generalization capability. Enhanced adaptability to representation variations caused by such image uncertainties is required.
Future research will focus on optimizing model pruning techniques [
54,
55] and innovating efficient computation methods to enhance computational efficiency while maintaining accuracy. Concurrently, we will explore multimodal learning frameworks [
56] that integrate multi-source environmental parameters (meteorological, vegetation, terrain data, and soil moisture) with infrared data. Additionally, we will develop robust algorithms [
57] specifically designed for low-quality images (high noise, blur, occlusion) and small target detection while conducting in-depth investigations into how sensor detection limits affect model performance boundaries.