Article

Enhancing UAV Object Detection in Low-Light Conditions with ELS-YOLO: A Lightweight Model Based on Improved YOLOv11

School of Computer Science and Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(14), 4463; https://doi.org/10.3390/s25144463
Submission received: 22 June 2025 / Revised: 11 July 2025 / Accepted: 16 July 2025 / Published: 17 July 2025
(This article belongs to the Special Issue Vision Sensors for Object Detection and Tracking)

Abstract

Drone-view object detection models operating under low-light conditions face several challenges, such as object scale variations, high image noise, and limited computational resources. Existing models often struggle to balance accuracy and lightweight architecture. This paper introduces ELS-YOLO, a lightweight object detection model tailored for low-light environments and built upon the YOLOv11s framework. ELS-YOLO features a re-parameterized backbone (ER-HGNetV2) with integrated Re-parameterized Convolution and Efficient Channel Attention mechanisms, a Lightweight Feature Selection Pyramid Network (LFSPN) for multi-scale object detection, and a Shared Convolution Separate Batch Normalization Head (SCSHead) to reduce computational complexity. Layer-Adaptive Magnitude-Based Pruning (LAMP) is employed to compress the model size. Experiments show that ELS-YOLO attains mAP@0.5 values of 74.3% on ExDark and 68.7% on DroneVehicle while maintaining real-time inference with a compact model.

1. Introduction

Drone-view object detection (DVOD) aims to locate and classify objects in images or videos captured by unmanned aerial vehicles (UAVs) [1]. With the rapid advancements in computer vision and UAV technologies, DVOD has become prevalent in diverse applications, including security surveillance, intelligent transportation systems, and environmental monitoring. For instance, Wu et al. [2] proposed CCR-Net, a multimodal feature fusion network that improves operational efficiency in disaster response and emergency relief missions. Huang et al. [3] developed UFPMP-Det, which accurately identifies crop diseases and pests from UAV imagery. Zhan et al. [4] introduced ARGNet, which effectively detects forest fire smoke. Hu et al. [5] proposed CM-YOLO, which enhances the detection performance of aircraft and ships under cloud and mist conditions. Peng et al. [6] designed the MMDLTH framework to achieve robust detection of small infrared targets against heavy cloud clutter. By integrating deep learning algorithms, UAV systems can monitor critical events such as traffic violations and accidents in real time, providing essential support for informed decision making.
Nighttime security patrols and search-and-rescue missions require real-time and precise monitoring over large regions. However, traditional manual inspections are inefficient and prone to missed detections. Conventional surveillance equipment struggles with capturing detailed imagery under low-light conditions and lacks automated data analysis capabilities. UAV systems equipped with object detection algorithms offer rapid deployment, mobility, and automated recognition, enabling efficient wide-area monitoring in a short time [7]. Therefore, developing robust and efficient object detection techniques for low-light conditions has become a critical research focus [8].
Currently, most object detection methods for low-light conditions primarily rely on image preprocessing techniques, such as brightness adjustment and noise suppression, to enhance image quality and thereby improve detection performance [9]. For example, Guo et al. [10] proposed an illumination map estimation method that initializes low-light images based on maximum RGB channel values. Hu et al. [11] mitigated color distortion in low-light images through saturation adjustment. Jeon et al. [12] combined atmospheric scattering models with pixel-adaptive gamma correction for image enhancement. However, these enhancement-based methods exhibit inherent limitations. First, the enhancement process can introduce artifacts that obscure essential image details. Second, reliance on fixed prior knowledge restricts adaptability to dynamically changing lighting conditions and limits the model’s ability to learn deeper, high-level semantic features. Moreover, the computational overhead associated with image enhancement is significant for resource-constrained edge devices, severely limiting their real-time performance and practical deployment.
The YOLO (You Only Look Once) [13] series models represent single-stage object detection frameworks capable of performing localization and classification simultaneously in a single forward pass. These models, characterized by simple architectures, effectively balance detection accuracy and real-time performance, making them suitable for resource-constrained edge devices. Lightweight variants such as YOLOv8-nano, YOLOv9-tiny [14], and YOLOv10-nano [15] demonstrate robust performance on natural image datasets like Pascal VOC [16] and MS COCO [17], but they are not optimized specifically for low-light or UAV-captured imagery. Consequently, their performance significantly deteriorates under complex backgrounds and weak object features.
To address these issues, we propose a lightweight object detection model tailored specifically for low-light conditions called ELS-YOLO. This model builds upon the YOLOv11s framework and aims to balance detection accuracy with architectural efficiency.
The main contributions of this work are summarized as follows:
  • We design the re-parameterized backbone ER-HGNetV2, which integrates Re-parameterized Convolution (RepConv) [18] and Efficient Channel Attention (ECA) [19] mechanisms to effectively capture high-quality features, suppress noise, and enhance feature representation in low-light environments.
  • We develop LFSPN, which enables efficient cross-scale feature fusion and improves both model generalization and detection capability across diverse object scales.
  • We propose SCSHead, a lightweight detection head that leverages shared convolutions with separate batch normalization layers to minimize computational complexity and enhance inference efficiency. Furthermore, we incorporate Layer-Adaptive Magnitude-Based Pruning (LAMP) [20] to precisely prune redundant parameters, thereby reducing computational costs without compromising detection performance.
  • Extensive experiments conducted on the ExDark and DroneVehicle datasets demonstrate that ELS-YOLO achieves an optimal balance between detection accuracy and inference speed, validating its practical deployment potential.

2. Background

2.1. DVOD: Drone-View Object Detection

As an emerging research direction in remote sensing, DVOD faces unique challenges compared with conventional ground-view detection. Images captured by drones often contain numerous targets with significant scale variations, complicating detection accuracy and robustness. Existing DVOD approaches can be broadly classified into three categories: super-resolution-based, context-based, and representation fusion-based methods.
Super-resolution-based methods [21,22] enhance small object detectability by reconstructing low-resolution regions into high-resolution representations, typically through a three-stage pipeline: candidate region proposal, super-resolution reconstruction, and detection. Although these methods significantly improve object perception, their multi-stage structures often introduce redundant computation and complicate the training process, limiting their practicality and end-to-end optimization capability.
Context-based methods [23,24] leverage local and global contextual information to build spatial relationships and semantic dependencies, enhancing semantic representation and scene understanding. However, drone imagery often presents complex backgrounds and ambiguous semantic boundaries, hindering effective context modeling and reducing overall detection accuracy.
Representation fusion-based methods [25,26] integrate fine-grained spatial details from shallow features with high-level semantic features from deeper layers, primarily using architectures such as the Feature Pyramid Network (FPN) and its variants. Nonetheless, under low-light conditions, the representational gap between scales is pronounced, and direct fusion may introduce noise, degrading the discriminative ability of the model.
In addition, object detection from the UAV perspective faces multiple challenges arising from environmental factors. First, high-speed flight often results in image blur and motion trails, which significantly increase the difficulty of real-time detection. Second, weather variations such as fog, rain, and wind can degrade image quality and affect flight stability, thereby impairing object perception. Third, UAVs are susceptible to both intentional and unintentional electromagnetic interference, which may cause motor stalling, sensor drift, or even communication link failures [27,28]. All these factors can adversely impact the real-time performance and accuracy of object detection systems.

2.2. LLOD: Low-Light Object Detection

Existing research on object detection in low-light environments mainly focuses on improving image quality through low-light image enhancement (LLIE) and enhancing detection performance through architectural optimization.
LLIE techniques aim to restore critical information in dark regions by enhancing image brightness, contrast, and overall visual quality. Early LLIE methods relied on pixel intensity mapping and local statistical modeling. Techniques such as exposure correction [29] adjust global brightness distributions to enhance visibility, whereas histogram equalization [30] redistributes pixel intensity histograms to increase contrast. The Retinex theory [31] offers a physically interpretable enhancement framework by decomposing an image into illumination and reflectance components, thereby modeling contributions from lighting and surface texture. In recent years, deep neural networks have achieved significant advancements in LLIE tasks. LLNet [32] was the first deep autoencoder-based network designed for simultaneous low-light enhancement and denoising. Wei et al. [33] combined the Retinex theory with convolutional neural networks, incorporating Gaussian filtering and logarithmic transformation to perform adaptive brightness correction. Guo et al. proposed Zero-DCE [34,35], a method that achieves fast, reference-free image enhancement by learning pixel-wise luminance adjustment curves. Xu et al. [36] developed an SNR-aware network that adaptively enhances images through global attention mechanisms and local structure modeling.
Architectural optimization aims to improve feature extraction and object recognition under low-light conditions by refining network structures and incorporating attention mechanisms. Long et al. [37] proposed a multi-level illumination learning framework, SCINet, which enhances feature extraction under complex backgrounds. Qiu et al. [38] introduced Efficient Attention Pyramid Transformer (EAPT), which integrates deformable attention and a global encoder–decoder structure to improve multi-scale feature modeling. Hu et al. [39] proposed an occlusion-aware attention module, MPCM, to alleviate detection difficulties caused by occlusion. Peng et al. [40] enhanced detection performance in low-light scenarios by optimizing attention mechanisms and the loss function. Wu et al. [41] developed the progressive enhancement network AENet, combining Yeo–Johnson transformation with a Transformer architecture to improve dynamic feature representation. Wang et al. [42] and Li et al. [43] mitigated the challenges posed by complex environments by employing geometry-aware learning and advanced signal reconstruction techniques.

3. Baseline Algorithm

YOLOv11 [44] is the latest version of the YOLO series, and provides five model variants—n, s, m, l, and x—to support deployment across a spectrum of platforms, ranging from edge devices to high-performance servers. As illustrated in Figure 1, YOLOv11 adopts a classic three-stage architecture consisting of a backbone, neck, and head, which are responsible for feature extraction, feature fusion, and object detection, respectively.
As a significant advancement in real-time object detection, YOLOv11 inherits the high efficiency and end-to-end detection capabilities of previous YOLO models, while incorporating multiple architectural innovations and optimizations. An overview of YOLOv11's primary modules is presented in Figure 2. The C3k2 module serves as the core structural unit of YOLOv11 and adjusts the kernel size by modifying the C3k parameter to meet the feature extraction requirements of different scenarios. The C2PSA module uses spatial attention to guide the model to focus on key regions and improve detection accuracy for small or occluded objects. In the detection head, YOLOv11 employs depthwise separable convolutions (DWConv) to replace standard convolutions, thereby further reducing the parameter count and accelerating inference.
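To make the efficiency gain concrete, a minimal PyTorch sketch of a depthwise separable convolution is given below; the channel sizes are illustrative placeholders rather than the exact YOLOv11 head configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A 3x3 depthwise convolution followed by a 1x1 pointwise convolution,
    replacing a dense 3x3 convolution to cut parameters and FLOPs."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups = in_ch).
        self.dw = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes information across channels.
        self.pw = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pw(self.dw(x))))

# Parameter comparison against a dense 3x3 convolution (256 -> 256 channels).
dense = nn.Conv2d(256, 256, 3, padding=1, bias=False)
light = DepthwiseSeparableConv(256, 256)
print(sum(p.numel() for p in dense.parameters()))  # 589824
print(sum(p.numel() for p in light.parameters()))  # 68352
```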
We selected YOLOv11s as the baseline due to its superior performance under identical experimental conditions. As demonstrated in Table 1 and Table 2, YOLOv11s achieved an optimal balance between detection accuracy, parameter count, and computational complexity on the ExDark dataset relative to other YOLO variants. Moreover, compared with the “n” variants, the “s” versions provided higher detection precision with only a slight increase in computational overhead, making them particularly suitable for lightweight UAV deployments.

4. Methodology

4.1. ER-HGNetV2: Re-Parameterized Backbone

The backbone network of YOLOv11 primarily consists of alternating stacks of standard convolutional layers and C3k2 modules. Although this design demonstrates strong feature extraction capabilities, its increasing depth and channel width lead to parameter redundancy and substantial computational overhead. Moreover, this structure struggles to effectively capture complex global semantic features in low-light UAV imagery. To address these issues, inspired by the HGNetV2 framework [45], we design a re-parameterized backbone, ER-HGNetV2, which is illustrated in Figure 3.
ER-HGNetV2 begins with a Stem module that consists of standard convolution and max-pooling operations, performing initial spatial downsampling and extracting fundamental feature representations. ERBlock forms the core of the network and applies a multi-scale feature extraction strategy based on stacked convolution layers to refine features and enhance representational accuracy. The LDS layer integrates Depthwise Convolution (DWConv), batch normalization, and SiLU activation, enabling independent channel-wise convolution that enhances feature expressiveness while maintaining minimal computational overhead.
We construct the ERBlock using the RepConv and ECA mechanisms. As illustrated in Figure 4, RepConv adopts a multi-branch training structure that incorporates 3 × 3 convolutions, 1 × 1 convolutions, and identity mappings to capture diverse feature patterns. During inference, RepConv applies re-parameterization to fuse the convolutional layers and batch normalization into a single standard convolution, thereby achieving a balance between representational capacity and inference efficiency. As shown in Figure 3, the ECA mechanism leverages a lightweight 1D convolution to adaptively compute channel-wise attention weights. This compact design effectively emphasizes critical feature channels with minimal computational cost, thereby improving the model's ability to distinguish targets under low-light conditions.
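The following sketch illustrates the re-parameterization principle behind RepConv under simplified assumptions (it is not the exact implementation used in ELS-YOLO): each conv-BN branch is folded into an equivalent convolution, the 1 × 1 kernel is zero-padded to 3 × 3, and the branches are summed into one inference-time convolution; the identity branch, omitted here, folds in the same way as a fixed identity kernel.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d):
    """Fold a BatchNorm layer into the preceding convolution and return the
    (weight, bias) of the equivalent single convolution."""
    std = torch.sqrt(bn.running_var + bn.eps)
    scale = bn.weight / std                               # per-channel gamma / sigma
    fused_w = conv.weight * scale.reshape(-1, 1, 1, 1)    # rescale each output filter
    conv_b = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused_b = bn.bias + (conv_b - bn.running_mean) * scale
    return fused_w, fused_b

# Training-time branches: 3x3 conv + BN and 1x1 conv + BN.
conv3, bn3 = nn.Conv2d(64, 64, 3, padding=1, bias=False), nn.BatchNorm2d(64).eval()
conv1, bn1 = nn.Conv2d(64, 64, 1, bias=False), nn.BatchNorm2d(64).eval()

# Inference-time re-parameterization: sum the fused kernels into a single 3x3 conv.
w3, b3 = fuse_conv_bn(conv3, bn3)
w1, b1 = fuse_conv_bn(conv1, bn1)
fused = nn.Conv2d(64, 64, 3, padding=1, bias=True)
fused.weight.data = w3 + F.pad(w1, [1, 1, 1, 1])   # pad the 1x1 kernel to 3x3
fused.bias.data = b3 + b1
```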

4.2. LFSPN: Lightweight Feature Selection Pyramid Network

The Path Aggregation Feature Pyramid Network (PAFPN) used in YOLOv11 suffers from unselective aggregation and cross-level semantic inconsistency during feature fusion, making it difficult to obtain discriminative multi-scale representations. To address this issue, we design the LFSPN, which selectively strengthens semantically relevant features and suppresses redundant background information to significantly improve model robustness under challenging lighting conditions. As shown in Figure 5, LFSPN consists of two stages: an attention-weighted stage and a dynamic cross-scale feature fusion stage.
The attention-weighted stage performs initial feature selection on the multi-scale feature maps extracted by the backbone and enhances spatial position awareness via the Coordinate Attention (CA) [46] mechanism. As shown in Figure 6, the CA mechanism first applies global average pooling along the horizontal and vertical directions to capture spatial dependencies. It then concatenates the pooled features along the channel dimension and applies a 1 × 1 convolution to model cross-directional spatial relationships and generate intermediate attention weights. Finally, pixel-wise weighting is applied to the original feature map using the generated attention weights to emphasize informative regions and suppress redundant features, improving the quality of representations for the subsequent fusion stage.
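The sketch below reproduces the CA computation described above (directional pooling, a shared 1 × 1 convolution, and per-direction gating); it follows the structure of [46], but the reduction ratio is an illustrative assumption.

```python
import torch
import torch.nn as nn

class CoordAtt(nn.Module):
    """Coordinate Attention: pool along H and W separately, model cross-directional
    relationships with a shared 1x1 conv, then reweight the input pixel-wise."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.SiLU()
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        x_h = self.pool_h(x)                           # horizontal-direction statistics
        x_w = self.pool_w(x).permute(0, 1, 3, 2)       # align the W axis for concatenation
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (B, C, 1, W)
        return x * a_h * a_w                           # pixel-wise reweighting

# attn = CoordAtt(256); out = attn(torch.randn(1, 256, 40, 40))  # out keeps the input shape
```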
To further improve the effectiveness of feature fusion, we design a Dynamic Feature Selection (DFS) module, as shown in Figure 7. Taking the high-level feature map F4 and the low-level feature map F3 as an example, we first apply a 1 × 1 convolution to F4 for nonlinear dimensionality reduction, lowering the computational cost. We then upsample the reduced F4 with a transposed convolution to match the spatial resolution of F3. The CA mechanism is applied to the upsampled feature map to generate attention weights for dynamic feature selection. These weights are element-wise multiplied with F3 to selectively enhance spatially informative regions in the low-level feature map. The refined F3 is then fused with the upsampled F4 through element-wise addition, producing a representation that preserves both fine-grained spatial details and high-level semantic information. The fused features are passed through a C3k2 module to further enhance the representation, yielding the final output feature map:
$$f_{\mathrm{tmp}} = \mathrm{Trans}\big(\mathrm{Conv}_{1\times 1}(f_{\mathrm{high}})\big)$$
$$f_{\mathrm{out}} = \mathrm{C3k2}\big(f_{\mathrm{low}} \cdot \mathrm{CA}(f_{\mathrm{tmp}}) + f_{\mathrm{tmp}}\big)$$
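A minimal sketch of the DFS fusion step defined by the two equations above is shown below; CoordAtt refers to the coordinate-attention sketch earlier in this section, and the attention and C3k2 blocks are passed in as modules (identity placeholders in the usage line) because their exact configurations in ELS-YOLO are not reproduced here.

```python
import torch
import torch.nn as nn

class DFS(nn.Module):
    """Dynamic Feature Selection: reduce and upsample the high-level map, gate the
    low-level map with attention derived from it, then fuse by addition."""
    def __init__(self, high_ch: int, low_ch: int, attention: nn.Module, c3k2: nn.Module):
        super().__init__()
        self.reduce = nn.Conv2d(high_ch, low_ch, 1)                 # 1x1 channel reduction
        self.up = nn.ConvTranspose2d(low_ch, low_ch, 2, stride=2)   # transposed-conv upsampling
        self.attention = attention    # e.g., CoordAtt(low_ch) from the sketch above
        self.c3k2 = c3k2              # stand-in for the YOLOv11 C3k2 block

    def forward(self, f_high: torch.Tensor, f_low: torch.Tensor) -> torch.Tensor:
        f_tmp = self.up(self.reduce(f_high))      # f_tmp = Trans(Conv_1x1(f_high))
        gated = f_low * self.attention(f_tmp)     # select informative low-level regions
        return self.c3k2(gated + f_tmp)           # f_out = C3k2(f_low * CA(f_tmp) + f_tmp)

# Usage with identity placeholders for the attention and C3k2 blocks:
dfs = DFS(high_ch=512, low_ch=256, attention=nn.Identity(), c3k2=nn.Identity())
out = dfs(torch.randn(1, 512, 20, 20), torch.randn(1, 256, 40, 40))  # -> (1, 256, 40, 40)
```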

4.3. SCSHead: Shared Convolution and Separate Batch Normalization Head

On UAVs and other edge devices, the detection head must balance high accuracy against low computational cost. However, conventional designs typically use separate convolutional branches for each prediction task, which leads to parameter redundancy. To address this issue, we propose the SCSHead, as shown in Figure 8.
SCSHead adopts a shared-weight convolutional architecture. Feature maps from different scales are first processed by individual 3 × 3 convolutional layers to align their channel dimensions, thereby ensuring consistent cross-scale feature representation. The aligned feature maps are then passed through a shared-weight convolutional module consisting of two successive convolutional stages. First, a 3 × 3 convolutional layer is applied to extract cross-scale features and enhance representational capacity. Subsequently, a 1 × 1 convolutional layer performs channel-wise compression and reorganization, thereby reducing the number of parameters and computational cost. Finally, the processed features are fed into two parallel convolutional layers, Conv_Reg for predicting bounding box coordinates and Conv_Cls for classification scores.
To accommodate statistical discrepancies across feature maps of different scales, we introduce a scale-specific normalization strategy. Specifically, each input branch applies an independent BN layer. This design allows each scale to adaptively normalize its feature statistics, thereby preserving and emphasizing the discriminative properties of individual feature maps.
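A simplified sketch of this weight-sharing pattern is given below; the hidden width, kernel layout, and output channel counts are illustrative assumptions rather than the exact ELS-YOLO configuration.

```python
import torch
import torch.nn as nn

class SCSHeadSketch(nn.Module):
    """Shared convolution weights across pyramid levels, with a separate BN per level."""
    def __init__(self, in_channels: list[int], hidden: int = 128,
                 num_classes: int = 12, reg_ch: int = 64):
        super().__init__()
        # Per-scale 3x3 convs align channel dimensions across pyramid levels.
        self.align = nn.ModuleList(nn.Conv2d(c, hidden, 3, padding=1) for c in in_channels)
        # Shared weights: one 3x3 conv and one 1x1 conv reused for every scale.
        self.shared3 = nn.Conv2d(hidden, hidden, 3, padding=1, bias=False)
        self.shared1 = nn.Conv2d(hidden, hidden, 1, bias=False)
        # Separate BN per scale preserves scale-specific feature statistics.
        self.bn3 = nn.ModuleList(nn.BatchNorm2d(hidden) for _ in in_channels)
        self.bn1 = nn.ModuleList(nn.BatchNorm2d(hidden) for _ in in_channels)
        self.act = nn.SiLU()
        # Parallel prediction branches (also shared across scales).
        self.conv_reg = nn.Conv2d(hidden, reg_ch, 1)        # bounding-box regression
        self.conv_cls = nn.Conv2d(hidden, num_classes, 1)   # classification scores

    def forward(self, feats: list[torch.Tensor]):
        outputs = []
        for i, f in enumerate(feats):
            x = self.act(self.align[i](f))
            x = self.act(self.bn3[i](self.shared3(x)))   # shared conv, scale-specific BN
            x = self.act(self.bn1[i](self.shared1(x)))
            outputs.append((self.conv_reg(x), self.conv_cls(x)))
        return outputs

# head = SCSHeadSketch([128, 256, 512])
# preds = head([torch.randn(1, c, s, s) for c, s in [(128, 80), (256, 40), (512, 20)]])
```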

4.4. Network Structure of ELS-YOLO

The overall architecture of the proposed ELS-YOLO is shown in Figure 9. The input image is first processed by the ER-HGNetV2 backbone to extract hierarchical features. These features are passed through LFSPN for dynamic cross-scale fusion to improve multi-scale representation. The fused features are then sent to SCSHead to produce the final detection results.

4.5. Channel Pruning

Deploying object detection systems on UAV platforms necessitates real-time processing of aerial imagery to accurately identify and localize traffic signs, vehicles, pedestrians, and other critical targets. However, UAVs are typically constrained by limited computational resources, memory, and battery capacity. Therefore, detection models must be carefully optimized to reduce both computational complexity and memory consumption. Model pruning has emerged as an effective structural optimization technique that significantly compresses model size while preserving near-original performance. To further improve the real-time inference capability of the proposed model on edge devices, we apply the LAMP method to eliminate redundant parameters and reduce model complexity.
Traditional magnitude-based pruning methods apply a global threshold to prune all layers of the network uniformly. However, such approaches fail to account for the varying importance of parameters across layers, often resulting in excessive pruning of critical layers and subsequent degradation in model accuracy. To address this issue, the LAMP method introduces a layer-wise adaptive importance scoring mechanism. By reweighting the importance of weights within each layer, LAMP determines an appropriate pruning ratio for each layer. As shown in Figure 10, for a given layer’s weight tensor W, its values are first flattened into a one-dimensional vector and sorted in ascending order based on magnitude. Based on this sorted vector, LAMP assigns each weight u a corresponding score that reflects its relative importance within the layer, as
$$\mathrm{score}(u, W) = \frac{\left(W[u]\right)^{2}}{\sum_{v \ge u} \left(W[v]\right)^{2}}$$
where the numerator represents the squared value of the current weight, and the denominator denotes the sum of squares of all weights whose magnitudes are greater than or equal to that of the current weight.
The LAMP score reflects the relative contribution of each weight among the remaining connections. During pruning, the model removes connections with lower scores on a per-layer basis until the desired global sparsity level is reached. This strategy accounts for the varying contributions of different layers to the overall network, thereby enabling substantial model compression while preserving detection accuracy to the greatest extent possible.
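The scoring and selection procedure can be reproduced in a few lines of tensor code; the sketch below assumes unstructured (per-weight) pruning and should be read as an illustration of the LAMP rule rather than the exact pruning pipeline used for ELS-YOLO.

```python
import torch

def lamp_scores(weight: torch.Tensor) -> torch.Tensor:
    """LAMP score for each weight: its squared value divided by the sum of squares
    of all weights in the same layer with magnitude greater than or equal to it."""
    flat_sq = weight.flatten() ** 2
    sorted_sq, order = torch.sort(flat_sq)                              # ascending magnitude
    # Suffix sums: for each sorted position, the sum of squares at or above it.
    suffix = torch.flip(torch.cumsum(torch.flip(sorted_sq, [0]), 0), [0])
    scores = torch.empty_like(flat_sq)
    scores[order] = sorted_sq / suffix                                  # undo the sort
    return scores.view_as(weight)

def global_prune_masks(weights: list, sparsity: float) -> list:
    """Apply one global threshold on LAMP scores so that layer-wise pruning ratios
    emerge adaptively while the overall sparsity target is met."""
    all_scores = torch.cat([lamp_scores(w).flatten() for w in weights])
    k = int(sparsity * all_scores.numel())          # number of weights to remove globally
    threshold = torch.kthvalue(all_scores, max(k, 1)).values
    return [(lamp_scores(w) > threshold).float() for w in weights]

# Example: two layers pruned to ~50% global sparsity; per-layer ratios may differ.
masks = global_prune_masks([torch.randn(64, 64, 3, 3), torch.randn(128, 64, 1, 1)], 0.5)
print([m.mean().item() for m in masks])             # fraction of weights kept per layer
```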

5. Experimental Results

5.1. Dataset

To evaluate the effectiveness and generalizability of the proposed ELS-YOLO model for object detection in low-light conditions, we conducted extensive experiments on two publicly available datasets: ExDark [47] and DroneVehicle [26]. Each dataset is randomly split into training, validation, and test sets in an 8:1:1 ratio.
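A minimal sketch of this 8:1:1 split is shown below; the directory layout, file extension, and random seed are illustrative assumptions.

```python
import random
from pathlib import Path

def split_dataset(image_dir: str, seed: int = 0):
    """Shuffle image paths with a fixed seed and split them 8:1:1 into train/val/test."""
    paths = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(paths)
    n_train, n_val = int(0.8 * len(paths)), int(0.1 * len(paths))
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

# train, val, test = split_dataset("ExDark/images")   # hypothetical directory layout
```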

5.1.1. ExDark

The ExDark dataset is designed for object detection under low-light conditions. It contains 7363 real-world images captured in various challenging lighting environments and provides annotations for 12 object categories. Figure 11 shows typical image examples and a statistical overview of the annotations.

5.1.2. DroneVehicle

The DroneVehicle dataset contains aerial images captured by UAVs across various urban scenes such as city roads, residential areas, parking lots, and highways. The images are categorized into four lighting conditions: Day, Hazy, Night, and Dark Night. Figure 12 shows typical examples under each condition. Since this study focuses on object detection in low-light environments, we retain only the images labeled as Night and Dark Night.

5.2. Experimental Environment

The experiments are conducted using Python 3.10.16 and PyTorch 2.3.1 on an NVIDIA GeForce RTX 4090 GPU. We use SGD as the optimizer with an initial learning rate of 0.01, momentum of 0.937, and weight decay of 0.0005. Each model is trained for 200 epochs with a batch size of 16 and no pretrained weights. All comparison experiments use the same settings.
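For reference, these hyperparameters map onto an Ultralytics training call as sketched below; the model and dataset configuration file names (els-yolo.yaml, exdark.yaml) are hypothetical, since the actual files are not part of this paper.

```python
from ultralytics import YOLO

# Hypothetical ELS-YOLO model definition; hyperparameters mirror Section 5.2.
model = YOLO("els-yolo.yaml")
model.train(
    data="exdark.yaml",        # hypothetical dataset configuration
    epochs=200,
    batch=16,
    optimizer="SGD",
    lr0=0.01,                  # initial learning rate
    momentum=0.937,
    weight_decay=0.0005,
    pretrained=False,          # train from scratch, as in the paper
    device=0,                  # single RTX 4090
)
```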

5.3. Evaluation Indicators

The performance of the proposed model is evaluated using commonly adopted object detection metrics, including precision, recall, mean average precision (mAP), frames per second (FPS), and GFLOPs. The definitions of these metrics are presented as follows:
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
$$AP = \int_{0}^{1} \mathrm{Precision}(\mathrm{Recall}) \, d(\mathrm{Recall})$$
$$\mathrm{mAP} = \frac{1}{K} \sum_{i=1}^{K} AP_{i}$$
where TP denotes the number of correctly detected positive samples, FP denotes the number of falsely detected positive samples, FN denotes the number of ground-truth objects missed by the detector, K denotes the total number of object categories in the dataset, and AP_i represents the average precision for the i-th category.
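As a small numerical illustration of these formulas (with made-up counts and per-class AP values, not results from this paper), precision, recall, and mAP can be computed as follows.

```python
# Illustrative counts for a single class at a fixed confidence threshold.
tp, fp, fn = 80, 20, 25

precision = tp / (tp + fp)      # 80 / 100 = 0.80
recall = tp / (tp + fn)         # 80 / 105 = 0.762 (rounded)

# AP is the area under the precision-recall curve; mAP averages AP over K classes.
ap_per_class = [0.72, 0.65, 0.81, 0.58]                  # hypothetical APs, K = 4
map_value = sum(ap_per_class) / len(ap_per_class)        # 0.69
print(round(precision, 3), round(recall, 3), round(map_value, 3))
```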

5.4. Experimental Analysis on the ExDark Dataset

5.4.1. ER-HGNetV2 Experiment

To validate the effectiveness of the proposed ER-HGNetV2 backbone for low-light object detection, we conducted comparative experiments using YOLOv11s as the baseline, replacing its backbone with ER-HGNetV2 and several existing lightweight alternatives. Table 3 presents the experimental results on the ExDark dataset.
The proposed ER-HGNetV2 backbone demonstrates clear advantages across all evaluation metrics, achieving the highest scores in both mAP@0.5 and mAP. Compared with the original HGNetV2, ER-HGNetV2 yields better detection performance while incurring lower computational complexity.

5.4.2. Comparison with YOLOv11

To evaluate the performance advantages of ELS-YOLO over the YOLOv11 series models, we conducted a quantitative analysis on the ExDark dataset. The experimental results are presented in Table 1.
In terms of detection performance, ELS-YOLO achieves a mAP@0.5 of 74.3% and mAP of 48.5%. Compared with the baseline model YOLOv11s, ELS-YOLO improves mAP@0.5 by 2.9% and mAP by 2.8%, demonstrating stronger robustness in low-light conditions. Regarding model complexity, ELS-YOLO contains only 48.9% of the parameters and 70.4% of the computational cost of YOLOv11s, demonstrating excellent lightweight characteristics. Figure 13 illustrates the training performance curves of four key metrics for both ELS-YOLO and YOLOv11s. The proposed ELS-YOLO outperforms YOLOv11s across all four metrics throughout the training process.

5.4.3. LAMP Experiment

We conducted a detailed analysis of the ELS-YOLO model’s performance under different pruning ratios. The pruning ratio is defined as the ratio of computational cost before pruning to that after pruning. For example, a pruning ratio of 1.33 indicates a 25% reduction in computation, a ratio of 2 indicates a 50% reduction, and a ratio of 4 indicates a 75% reduction.
As shown in Table 4, when the pruning ratio is set to 1.33, the number of parameters is reduced from 4.6 M to 2.4 M, corresponding to a compression rate of 47.8%. In terms of detection performance, mAP@0.5 remains at 74.3%, consistent with the unpruned model, while mAP decreases by only 0.1%. When the pruning ratio is increased to 2.0, the model achieves an optimal balance between compression and performance. Specifically, the parameter count further decreases to 1.3 M, and mAP@0.5 is maintained at 74.2%. However, when the pruning ratio increases to 4.0, although the number of parameters continues to decline, detection performance drops sharply, with mAP@0.5 decreasing to 62.4% and mAP falling to 37.5%.
Based on the above experimental results, the pruning ratio is set to 2.0, which provides the best trade-off between model compactness and detection accuracy.

5.4.4. Ablation Experiments

To comprehensively assess the contribution of each key component in the proposed ELS-YOLO, we conducted a series of ablation experiments. The corresponding experimental results are summarized in Table 5.
First, we replace the original backbone with the proposed ER-HGNetV2 to enhance the model's ability to represent and generalize target features under low-light conditions. As shown in Table 5, this substitution increases mAP@0.5 by 1.2%, while reducing the computational complexity by 3 GFLOPs. These results indicate that ER-HGNetV2 effectively eliminates redundant computation and improves the network's capacity to extract complex features in dark environments.
Next, we apply the proposed SCSHead, which further increases mAP@0.5 to 73.8%. This result indicates that SCSHead, through its shared convolutional structure and scale-adaptive normalization, enhances the network’s robustness and discriminative capacity for multi-scale object detection in low-light conditions.
Finally, we introduce LFSPN to enhance multi-scale feature fusion. The addition of LFSPN improves mAP@0.5 and mAP by 0.5% and 0.6%, respectively, while significantly reducing the model's parameter count and computational complexity. These results confirm that LFSPN effectively integrates multi-scale information while suppressing redundant features, thereby improving both the network's generalization capability and computational efficiency.

5.4.5. Comparison Experiments with Other Baseline Methods

To further evaluate the effectiveness and performance advantages of the proposed ELS-YOLO, we conduct comparison experiments under the same settings with several mainstream detection models.
As presented in Table 2, ELS-YOLO demonstrates clear advantages across all key performance metrics, outperforming lightweight YOLO variants and DETR-based models. In terms of model complexity, ELS-YOLO maintains a compact architecture with only 4.6 M parameters and 15.0 GFLOPs, which is significantly lower than that of larger models such as Faster R-CNN and DETR. These results further support the feasibility of deploying ELS-YOLO for real-time inference on resource-constrained edge devices.

5.4.6. Visualization Analysis

To intuitively evaluate the detection performance of the proposed ELS-YOLO under low-light conditions, four representative scenes from the ExDark dataset were selected. These scenes encompass various challenges, including small object detection, multi-scale targets, severe occlusion, and complex lighting environments.
From the visualization results in Figure 14, it is evident that ELS-YOLO demonstrates superior object detection and localization capabilities compared with YOLOv11s. Specifically, in the first row, under extremely poor illumination, ELS-YOLO accurately detects partially occluded targets in dim lighting conditions. In the second row, which depicts a distant and weakly illuminated scene, ELS-YOLO effectively detects small-scale pedestrians and vehicles, whereas YOLOv11s exhibits clear omissions. These results suggest that the ER-HGNetV2 backbone and the LFSPN structure enhance ELS-YOLO’s ability to capture fine-grained features and small object information in low-light environments. In the third row, representing a multi-scale detection scenario, YOLOv11s shows noticeable false detections. In contrast, ELS-YOLO successfully identifies multiple object instances with higher precision and better localization, highlighting the benefits of the proposed SCSHead in cross-scale feature sharing and adaptive normalization. Additionally, in the fourth row, an urban night scene, ELS-YOLO provides clear and accurate detections across objects of various scales. In comparison, YOLOv11s performs poorly on distant targets, leading to both false negatives and false positives.
We use Gradient-weighted Class Activation Mapping (Grad-CAM) [52] to visualize the focus regions of both ELS-YOLO and the baseline YOLOv11s during prediction. As shown in Figure 15, ELS-YOLO exhibits more accurate attention to critical object regions compared with YOLOv11s. In the first row, under a dimly lit scene, YOLOv11s displays weak activation for distant pedestrians and bicycles. In contrast, ELS-YOLO significantly enhances attention to these small objects. In the second row, which depicts a complex nighttime environment, YOLOv11s either overlooks or diffusely attends to vehicles and distant pedestrians. However, ELS-YOLO produces more concentrated attention centered on the key regions of the targets. The third row further confirms the robustness and superiority of ELS-YOLO. Under partial occlusion, YOLOv11s generates vague and dispersed attention, whereas ELS-YOLO accurately highlights the partially occluded small objects.

5.5. Experimental Analysis on the DroneVehicle Dataset

To further assess the performance of the proposed ELS-YOLO model in DVOD tasks, we conducted experiments on the DroneVehicle dataset. This dataset comprises aerial images captured by UAVs in various urban scenarios, including city roads, residential areas, parking lots, and highways. Representative inference results are presented in Figure 16, and the corresponding performance comparison is summarized in Table 6.
As shown in Table 6, ELS-YOLO achieves a precision of 68.3%, recall of 67.5%, mAP@0.5 of 68.7%, and mAP of 44.5%. Compared with the baseline model YOLOv11s, ELS-YOLO improves mAP@0.5 by 1.5% and mAP by 1.6%, demonstrating superior target localization capabilities.

6. Discussion

We conducted extensive experiments on the representative ExDark and DroneVehicle datasets to evaluate the object detection performance of ELS-YOLO under complex low-light environments. These datasets include diverse low-light scenarios and drone perspectives characterized by substantial scale variations and complex backgrounds. The experimental results indicate that ELS-YOLO achieves a mAP@0.5 of 74.3% and 68.7% on ExDark and DroneVehicle, respectively, significantly surpassing YOLOv11s and other mainstream detection models. This performance enhancement is primarily attributed to structural innovations, resulting in superior capabilities in feature extraction and fusion strategies. Compared with other lightweight models, ELS-YOLO exhibits advantages in parameter scale, computational complexity, and inference efficiency, highlighting its potential for deployment in edge computing environments.
Although ELS-YOLO demonstrates strong overall performance, there remains room for improvement under certain conditions. For instance, its detection accuracy still needs enhancement in scenarios involving extreme low-light conditions or severe occlusion. In addition, although this study has significantly reduced the model’s parameter size, further improving inference speed without sacrificing accuracy remains an important direction for future research.

7. Conclusions

In this paper, we propose the lightweight object detection model ELS-YOLO to address the challenges of limited detection performance in complex low-light conditions and constrained computational resources on edge devices. We design the re-parameterized backbone ER-HGNetV2 to enhance the ability to extract and represent critical features under low-light environments. To overcome the limitations of conventional fusion strategies, we introduce LFSPN, which enables efficient multi-scale feature integration. We also develop the lightweight detection head SCSHead to reduce computational cost and parameter count. Furthermore, we apply the LAMP pruning strategy to compress model size without sacrificing accuracy. Extensive experiments on the ExDark and DroneVehicle datasets demonstrate that ELS-YOLO achieves superior detection accuracy and real-time inference efficiency compared with existing lightweight models.
In future work, we will further explore optimization strategies for the model under extreme conditions, such as incorporating advanced attention mechanisms or dynamic feature fusion methods, to enhance the model’s adaptability in complex environments. Additionally, we will investigate integrating techniques like knowledge distillation and quantization training to further improve the real-time inference capability of the model, thereby facilitating the deployment and application of object detection technologies in a broader range of practical scenarios.

Author Contributions

Conceptualization, T.W. and X.N.; methodology, T.W.; software, T.W.; validation, T.W.; formal analysis, T.W.; investigation, T.W.; resources, T.W.; data curation, T.W.; writing—original draft preparation, T.W.; writing—review and editing, X.N.; visualization, T.W.; supervision, X.N.; project administration, X.N.; funding acquisition, X.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62472010.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhu, P.; Wen, L.; Du, D.; Bian, X.; Fan, H.; Hu, Q.; Ling, H. Detection and Tracking Meet Drones Challenge. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 7380–7399. [Google Scholar] [CrossRef] [PubMed]
  2. Wu, X.; Hong, D.; Chanussot, J. Convolutional Neural Networks for Multimodal Remote Sensing Data Classification. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5517010. [Google Scholar] [CrossRef]
  3. Huang, Y.; Chen, J.; Huang, D. UFPMP-Det: Toward Accurate and Efficient Object Detection on Drone Imagery. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 22 February–1 March 2022. [Google Scholar]
  4. Zhan, J.; Hu, Y.; Zhou, G.; Wang, Y.; Cai, W.; Li, L. A high-precision forest fire smoke detection approach based on ARGNet. Comput. Electron. Agric. 2022, 196, 106874. [Google Scholar] [CrossRef]
  5. Hu, J.; Wei, Y.; Chen, W.; Zhi, X.; Zhang, W. CM-YOLO: Typical Object Detection Method in Remote Sensing Cloud and Mist Scene Images. Remote Sens. 2025, 17, 125. [Google Scholar] [CrossRef]
  6. Peng, L.; Lu, Z.; Lei, T.; Jiang, P. Dual-Structure Elements Morphological Filtering and Local Z-Score Normalization for Infrared Small Target Detection against Heavy Clouds. Remote Sens. 2024, 16, 2343. [Google Scholar] [CrossRef]
  7. Zhou, L.; Dong, Y.; Ma, B.; Yin, Z.; Lu, F. Object detection in low-light conditions based on DBS-YOLOv8. Clust. Comput. 2025, 28, 55. [Google Scholar] [CrossRef]
  8. Kaur, R.; Singh, S. A comprehensive review of object detection with deep learning. Digit. Signal Process. 2023, 132, 103812. [Google Scholar] [CrossRef]
  9. Liu, X.; Wu, Z.; Li, A.; Vasluianu, F.A.; Zhang, Y.; Gu, S.; Zhang, L.; Zhu, C.; Timofte, R.; Jin, Z.; et al. NTIRE 2024 Challenge on Low Light Image Enhancement: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 17–18 June 2024; pp. 6571–6594. [Google Scholar] [CrossRef]
  10. Guo, X.; Li, Y.; Ling, H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2017, 26, 982–993. [Google Scholar] [CrossRef] [PubMed]
  11. Hu, C.; Yi, W.; Hu, K.; Guo, Y.; Jing, X.; Liu, P. FHSI and QRCPE-Based Low-Light Enhancement with Application to Night Traffic Monitoring Images. IEEE Trans. Intell. Transp. Syst. 2024, 25, 6978–6993. [Google Scholar] [CrossRef]
  12. Jeon, J.J.; Park, J.Y.; Eom, I.K. Low-light image enhancement using gamma correction prior in mixed color spaces. Pattern Recognit. 2024, 146, 110001. [Google Scholar] [CrossRef]
  13. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar] [CrossRef]
  14. Wang, C.-Y.; Yeh, I.-H.; Liao, H.-Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. In Proceedings of the European Conference on Computer Vision (ECCV), Milan, Italy, 29 September–4 October 2024; pp. 1–21. [Google Scholar] [CrossRef]
  15. Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. Adv. Neural Inf. Process. Syst. 2024, 37, 107984–108011. [Google Scholar]
  16. Everingham, M.; Van Gool, L.; Williams, C.K.; Winn, J.; Zisserman, A. The PASCAL Visual Object Classes (VOC) Challenge. Int. J. Comput. Vis. 2010, 88, 303–338. [Google Scholar] [CrossRef]
  17. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Lecture Notes in Computer Science, Proceedings of the Computer Vision—ECCV 2014, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; Volume 8693. [Google Scholar] [CrossRef]
  18. Ding, X.; Zhang, X.; Ma, N.; Han, J.; Ding, G.; Sun, J. RepVGG: Making VGG-style ConvNets Great Again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13728–13737. [Google Scholar] [CrossRef]
  19. Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11531–11539. [Google Scholar] [CrossRef]
  20. Lee, J.; Park, S.; Mo, S.; Ahn, S.; Shin, J. Layer-Adaptive Sparsity for the Magnitude-Based Pruning. In Proceedings of the 9th International Conference on Learning Representations (ICLR), Virtually, 3–7 May 2021. [Google Scholar]
  21. Deng, S.; Li, S.; Xie, K.; Song, W.; Liao, X.; Hao, A.; Qin, H. A Global-Local Self-Adaptive Network for Drone-View Object Detection. IEEE Trans. Image Process. 2021, 30, 1556–1569. [Google Scholar] [CrossRef] [PubMed]
  22. Bai, Y.; Zhang, Y.; Ding, M.; Ghanem, B. SOD-MTGAN: Small Object Detection via Multi-Task Generative Adversarial Network. In Lecture Notes in Computer Science, Proceedings of the Computer Vision—ECCV 2018, Munich, Germany, 8–14 September 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Springer: Cham, Switzerland, 2018; Volume 11217. [Google Scholar] [CrossRef]
  23. Xi, Y.; Zheng, J.; He, X.; Jia, W.; Li, H.; Xie, Y.; Feng, M.; Li, X. Beyond context: Exploring semantic similarity for small object detection in crowded scenes. Pattern Recognit. Lett. 2020, 137, 53–60. [Google Scholar] [CrossRef]
  24. Li, G.; Liu, Z.; Zeng, D.; Lin, W.; Ling, H. Adjacent context coordination network for salient object detection in optical remote sensing images. IEEE Trans. Cybern. 2023, 53, 526–538. [Google Scholar] [CrossRef] [PubMed]
  25. Zhao, W.; Kang, Y.; Chen, H.; Zhao, Z.; Zhao, Z.; Zhai, Y. Adaptively attentional feature fusion oriented to multiscale object detection in remote sensing images. IEEE Trans. Instrum. Meas. 2023, 72, 5008111. [Google Scholar] [CrossRef]
  26. Sun, Y.; Cao, B.; Zhu, P.; Hu, Q. Drone-based RGB-Infrared cross-modality vehicle detection via uncertainty-aware learning. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 6700–6713. [Google Scholar] [CrossRef]
  27. Jie, H.; Zhao, Z.; Zeng, Y.; Chang, Y.; Fan, F.; Wang, C.; See, K.Y. A review of intentional electromagnetic interference in power electronics: Conducted and radiated susceptibility. IET Power Electron. 2024, 17, 1487–1506. [Google Scholar] [CrossRef]
  28. Jie, H.; Zhao, Z.; Li, H.; Gan, T.H.; See, K.Y. A Systematic Three-Stage Safety Enhancement Approach for Motor Drive and Gimbal Systems in Unmanned Aerial Vehicles. IEEE Trans. Power Electron. 2025, 40, 9329–9342. [Google Scholar] [CrossRef]
  29. Farid, H. Blind inverse gamma correction. IEEE Trans. Image Process. 2001, 10, 1428–1433. [Google Scholar] [CrossRef] [PubMed]
  30. Zuiderveld, K. Contrast limited adaptive histogram equalization. In Graphics Gems IV; Academic Press Professional, Inc.: Williston, VT, USA, 1994; pp. 474–485. [Google Scholar]
  31. Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841. [Google Scholar] [CrossRef] [PubMed]
  32. Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar] [CrossRef]
  33. Li, X.; Wang, W.; Feng, X.; Li, M. Deep parametric Retinex decomposition model for low-light image enhancement. Comput. Vis. Image Underst. 2024, 241, 103948. [Google Scholar] [CrossRef]
  34. Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1777–1786. [Google Scholar] [CrossRef]
  35. Li, C.; Guo, C.; Loy, C.C. Learning to enhance low-light image via zero-reference deep curve estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4225–4238. [Google Scholar] [CrossRef] [PubMed]
  36. Xu, X.; Wang, R.; Fu, C.-W.; Jia, J. SNR-aware low-light image enhancement. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 17693–17703. [Google Scholar] [CrossRef]
  37. Ma, L.; Ma, T.; Liu, R.; Fan, X.; Luo, Z. Toward fast, flexible, and robust low-light image enhancement. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5627–5636. [Google Scholar] [CrossRef]
  38. Qiu, H.; Li, H.; Wu, Q.; Meng, F.; Xu, L.; Ngan, K.N.; Shi, H. Hierarchical context features embedding for object detection. IEEE Trans. Multimed. 2020, 22, 3039–3050. [Google Scholar] [CrossRef]
  39. Hu, J.; Cui, Z. YOLO-Owl: An occlusion aware detector for low illuminance environment. In Proceedings of the 2023 3rd International Conference on Neural Networks, Information and Communication Engineering (NNICE), Guangzhou, China, 24–26 February 2023; pp. 167–170. [Google Scholar] [CrossRef]
  40. Zhang, Y.; Wu, C.; Zhang, T.; Liu, Y.; Zheng, Y. Self-attention guidance and multiscale feature fusion-based UAV image object detection. IEEE Geosci. Remote Sens. Lett. 2023, 20, 6004305. [Google Scholar] [CrossRef]
  41. Wu, R.; Huang, W.; Xu, X. AE-YOLO: Asymptotic enhancement for low-light object detection. In Proceedings of the 2024 17th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), Shanghai, China, 26–28 October 2024; pp. 1–6. [Google Scholar] [CrossRef]
  42. Wang, H.; Zhang, G.; Cao, H.; Hu, K.; Wang, Q.; Deng, Y.; Gao, J.; Tang, Y. Geometry-Aware 3D Point Cloud Learning for Precise Cutting-Point Detection in Unstructured Field Environments. J. Field Robot. 2025; Early View. [Google Scholar] [CrossRef]
  43. Li, X.; Hu, Y.; Jie, Y.; Zhao, C.; Zhang, Z. Dual-Frequency Lidar for Compressed Sensing 3D Imaging Based on All-Phase Fourier Transform. J. Opt. Photon. Res. 2023, 1, 74–81. [Google Scholar] [CrossRef]
  44. Khanam, R.; Hussain, M. YOLOv11: An Overview of the Key Architectural Enhancements. arXiv 2024, arXiv:2410.17725. [Google Scholar]
  45. Zhao, Y.; Lv, W.; Xu, S.; Wei, J.; Wang, G.; Dang, Q.; Liu, Y.; Chen, J. DETRs Beat YOLOs on Real-time Object Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 16965–16974. [Google Scholar] [CrossRef]
  46. Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13708–13717. [Google Scholar] [CrossRef]
  47. Loh, Y.P.; Chan, C.S. Getting to Know Low-light Images with The Exclusively Dark Dataset. Comput. Vis. Image Underst. 2019, 178, 30–42. [Google Scholar] [CrossRef]
  48. Cai, H.; Li, J.; Hu, M.; Gan, C.; Han, S. EfficientViT: Lightweight Multi-Scale Attention for High-Resolution Dense Prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 17256–17267. [Google Scholar] [CrossRef]
  49. Wang, A.; Chen, H.; Lin, Z.; Han, J.; Ding, G. RepViT: Revisiting Mobile CNN From ViT Perspective. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 15909–15920. [Google Scholar] [CrossRef]
  50. Qin, D.; Leichner, C.; Delakis, M.; Fornoni, M.; Luo, S.; Yang, F.; Wang, W.; Banbury, C.; Ye, C.; Howard, A.; et al. MobileNetV4: Universal Models for the Mobile Ecosystem. In Lecture Notes in Computer Science, Proceedings of the Computer Vision—ECCV 2024, Milan, Italy, 29 September–4 October 2024; Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G., Eds.; Springer: Cham, Switzerland, 2025; Volume 15098. [Google Scholar] [CrossRef]
  51. Dai, X.; Bai, Y.; Wang, Y.; Fu, Y. Rewrite the Stars. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 5694–5703. [Google Scholar] [CrossRef]
  52. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar] [CrossRef]
Figure 1. Overall architecture of the YOLOv11 model.
Figure 2. Detailed architectural design of core modules in YOLOv11.
Figure 3. Network architecture of the proposed ER-HGNetV2 backbone.
Figure 4. Re-parameterization process of the RepConv module.
Figure 5. Architecture of the proposed Lightweight Feature Selection Pyramid Network.
Figure 6. Illustration of the CA mechanism.
Figure 7. Framework of the Dynamic Feature Selection module.
Figure 8. Architecture of the proposed SCSHead.
Figure 9. Overall architecture of the proposed ELS-YOLO model.
Figure 10. The LAMP pruning process.
Figure 11. Illustration and statistical analysis of the ExDark dataset. (a) Class distribution of annotated objects, (b) distribution of normalized bounding box sizes, and (c) spatial distribution of bounding box center points across the image plane.
Figure 12. Images under varying illumination conditions.
Figure 13. Training curves of ELS-YOLO and YOLOv11s.
Figure 14. Detection results of different models on the ExDark dataset. From left to right: ground truth, YOLOv11s, and ELS-YOLO.
Figure 15. Heatmap results. From left to right: original images, YOLOv11s, and ELS-YOLO.
Figure 16. Detection results of ELS-YOLO on the DroneVehicle dataset.
Table 1. Comparison with the YOLOv11 series on the ExDark dataset.
Models | mAP@0.5/% | mAP/% | Params/M | GFLOPs/G | FPS
YOLO11n | 67.6 | 42.2 | 2.6 | 6.3 | 282
YOLO11s | 71.4 | 45.7 | 9.4 | 21.3 | 251
YOLO11m | 73.2 | 47.7 | 20.0 | 67.7 | 218
YOLO11l | 74.6 | 48.9 | 25.2 | 86.6 | 183
YOLO11x | 75.7 | 49.7 | 56.8 | 194.5 | 169
ELS-YOLO | 74.3 | 48.5 | 4.6 | 15.0 | 274
The best value for each metric is shown in bold, and the second-best value is underlined.
Table 2. Comparison with other models on the ExDark dataset.
Models | P/% | R/% | mAP@0.5/% | mAP/% | Params/M | GFLOPs/G
YOLOv8n | 70.2 | 59.6 | 65.7 | 41.1 | 3.0 | 8.1
YOLOv8s | 73.9 | 62.7 | 70.4 | 44.3 | 11.1 | 28.5
YOLOv9t | 74.0 | 56.7 | 65.2 | 40.8 | 2.0 | 7.6
YOLOv9s | 74.1 | 62.1 | 69.8 | 44.8 | 7.2 | 26.8
YOLOv10n | 71.8 | 58.1 | 65.0 | 40.5 | 2.7 | 8.2
YOLOv10s | 77.2 | 60.2 | 69.0 | 43.8 | 8.1 | 24.5
Faster R-CNN | 67.4 | 52.6 | 58.9 | 35.2 | 41.2 | 208
RetinaNet | 66.3 | 50.7 | 57.6 | 33.9 | 36.5 | 210
DETR | 71.9 | 57.3 | 63.8 | 39.7 | 40.8 | 86.2
RT-DETR-r50 | 75.4 | 61.5 | 67.1 | 42.2 | 41.9 | 125.7
RT-DETR-L | 73.1 | 58.1 | 64.6 | 39.9 | 32.0 | 103.5
ELS-YOLO | 79.2 | 65.8 | 74.3 | 48.5 | 4.6 | 15.0
The best value for each metric is shown in bold, and the second-best value is underlined.
Table 3. Comparative experiments with different backbone networks.
Backbone | mAP@0.5/% | mAP/% | Params/M | GFLOPs/G | FPS
baseline | 71.4 | 45.7 | 9.4 | 21.3 | 251
EfficientViT [48] | 68.5 | 43.1 | 7.98 | 19.0 | 214
RepViT [49] | 69.3 | 43.9 | 10.14 | 23.5 | 201
HGNetV2 | 69.7 | 44.6 | 7.61 | 18.9 | 220
MobileNetV4 [50] | 66.3 | 41.9 | 9.53 | 27.8 | 267
StarNet [51] | 65.8 | 40.1 | 8.63 | 17.6 | 174
ER-HGNetV2 | 72.6 | 46.5 | 7.6 | 18.3 | 255
The best value for each metric is shown in bold, and the second-best value is underlined.
Table 4. Model performance under different pruning rates.
Models | mAP@0.5/% | mAP/% | Params/M | GFLOPs/G | FPS
ELS-YOLO | 74.3 | 48.5 | 4.6 | 15.0 | 274
ELS-YOLO (ratio = 1.33) | 74.3 | 48.4 | 2.4 | 11.2 | 283
ELS-YOLO (ratio = 2.0) | 74.2 | 48.1 | 1.3 | 7.4 | 298
ELS-YOLO (ratio = 4.0) | 62.4 | 37.5 | 0.5 | 3.7 | 359
The best value for each metric is shown in bold, and the second-best value is underlined.
Table 5. Ablation experiments performed with the proposed ELS-YOLO.
Models | mAP@0.5/% | mAP/% | Params/M | GFLOPs/G | FPS
baseline | 71.4 | 45.7 | 9.4 | 21.3 | 251
+A | 72.6 | 46.5 | 7.6 | 18.3 | 255
+B | 72.2 | 46.2 | 9.03 | 20.4 | 253
+C | 72.7 | 45.9 | 6.64 | 18.7 | 262
+A+B | 73.8 | 47.9 | 7.3 | 17.6 | 268
+A+C | 73.5 | 47.6 | 4.84 | 15.5 | 271
+A+B+C | 74.3 | 48.5 | 4.6 | 15.0 | 274
A denotes ER-HGNetV2, B denotes SCSHead, and C denotes LFSPN. The best value for each metric is shown in bold, and the second-best value is underlined.
Table 6. Comparative experiments of different object detection algorithms on the DroneVehicle dataset.
Models | P/% | R/% | mAP@0.5/% | mAP/%
YOLO11n | 58.4 | 58.9 | 61.7 | 38.6
YOLO11s | 68.2 | 63.7 | 67.2 | 42.9
RT-DETR-r50 | 65.7 | 63.2 | 66.7 | 41.2
RT-DETR-L | 67.9 | 66.4 | 68.1 | 43.3
ELS-YOLO | 68.3 | 67.5 | 68.7 | 44.5
ELS-YOLO (ratio = 2.0) | 68.2 | 67.3 | 68.5 | 44.2
The best value for each metric is shown in bold, and the second-best value is underlined.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
