Article

LEAD-YOLO: A Lightweight and Accurate Network for Small Object Detection in Autonomous Driving

by Yunchuan Yang, Shubin Yang * and Qiqing Chan

School of Electrical and Information Engineering, Wuhan Institute of Technology, Wuhan 430205, China

* Author to whom correspondence should be addressed.
Sensors 2025, 25(15), 4800; https://doi.org/10.3390/s25154800
Submission received: 10 June 2025 / Revised: 31 July 2025 / Accepted: 1 August 2025 / Published: 4 August 2025
(This article belongs to the Section Vehicular Sensing)

Abstract

The accurate detection of small objects remains a critical challenge in autonomous driving systems, where improving detection performance typically comes at the cost of increased model complexity, conflicting with the lightweight requirements of edge deployment. To address this dilemma, this paper proposes LEAD-YOLO (Lightweight Efficient Autonomous Driving YOLO), an enhanced network architecture based on YOLOv11n that achieves superior small object detection while maintaining computational efficiency. The proposed framework incorporates three innovative components: First, the Backbone integrates a lightweight Convolutional Gated Transformer (CGF) module, which employs normalized gating mechanisms with residual connections, and a Dilated Feature Fusion (DFF) structure that enables progressive multi-scale context modeling through dilated convolutions. These components synergistically enhance small object perception and environmental context understanding without compromising network efficiency. Second, the neck features a hierarchical feature fusion module (HFFM) that establishes guided feature aggregation paths through hierarchical structuring, facilitating collaborative modeling between local structural information and global semantics for robust multi-scale object detection in complex traffic scenarios. Third, the head implements a shared feature detection head (SFDH) structure, incorporating shared convolution modules for efficient cross-scale feature sharing and detail enhancement branches for improved texture and edge modeling. Extensive experiments validate the effectiveness of LEAD-YOLO: on the nuImages dataset, the method achieves 3.8% and 5.4% improvements in mAP@0.5 and mAP@[0.5:0.95], respectively, while reducing parameters by 24.1%. On the VisDrone2019 dataset, performance gains reach 7.9% and 6.4% for corresponding metrics. These findings demonstrate that LEAD-YOLO achieves an excellent balance between detection accuracy and model efficiency, thereby showcasing substantial potential for applications in autonomous driving.

1. Introduction

With the acceleration of global urbanization and the continuous growth of transportation demands, intelligent transportation systems (ITSs) have become a key technological means to improve road safety and optimize resource utilization. As a core component, autonomous driving technology is developing rapidly, showing great potential for improving traffic efficiency, reducing accident rates, and enhancing the travel experience. However, the environmental perception system, as the “eyes” of autonomous driving, faces many severe challenges, particularly the problem of small object detection in complex dynamic scenarios. Recent accident investigations, including the 2018 Uber autonomous vehicle fatality, have revealed that failures in detecting pedestrians, cyclists, and road debris—which appear as small objects in perception systems—constitute significant safety risks. In real road scenes, traffic participants such as pedestrians, bicycles, motorcycles, and distant vehicles often appear as small-scale objects in imaging systems; at greater distances, they typically occupy only 1–2% of the image, or even less. Because of sparse feature information, blurred texture details, and susceptibility to lighting and weather conditions, these distant small objects are often missed or misdetected by mainstream detection models. Undetected small objects become potential safety hazards that may lead to catastrophic consequences, as early detection is crucial for predictive path planning and for meeting regulatory safety standards such as Euro NCAP’s AEB tests. Meanwhile, the practical deployment of autonomous driving systems faces the critical challenge of limited computational resources. Owing to power consumption, heat dissipation, and cost constraints, vehicle computing platforms have far less computational capability than high-performance servers in data centers. Current high-performance detection models mostly rely on deep network structures and complex computational mechanisms; although they achieve breakthrough accuracy on standard datasets, they are difficult to run stably on resource-constrained edge devices. Therefore, preserving small object detection accuracy while making models lightweight (i.e., reducing parameters and computational complexity without sacrificing performance) through network structure optimization, parameter compression, and related techniques, so as to balance accuracy and efficiency, has become a key research topic for advancing autonomous driving from research prototypes to large-scale commercial application.
With the rise of deep learning technology, especially Convolutional Neural Networks (CNNs), deep learning-based object detection algorithms have achieved breakthrough improvements in real-time performance and accuracy [1]. These algorithms automatically learn hierarchical feature representations, significantly outperforming traditional handcrafted features in detection accuracy and generalization ability. Current deep detection algorithms are mainly divided into two categories: two-stage and one-stage approaches. Two-stage algorithms like the R-CNN series [2] adopt a “propose-then-classify” strategy, achieving high detection accuracy, particularly in complex scenes and occluded objects. However, their multi-stage processing flow results in high computational overhead, slow inference speed, and substantial hardware requirements, making them unsuitable for resource-constrained autonomous driving platforms. In contrast, one-stage algorithms such as SSD (Single-Shot MultiBox Detector) [3] and YOLO (You Only Look Once) series [4] provide more efficient solutions by directly performing end-to-end detection on feature maps. These methods transform object detection into a unified regression problem, significantly improving inference speed while maintaining competitive accuracy through multi-scale feature fusion mechanisms, making them more suitable for real-time applications.
It is worth noting that, in recent years, the DETR (Detection Transformer) [5] series algorithms based on Transformer architecture have gained widespread attention in academia for their innovative end-to-end detection framework. DETR uses Transformer’s [6] self-attention mechanism to directly predict object sets, avoiding the complex anchor design and non-maximum suppression (NMS) post-processing in traditional methods. However, despite DETR’s elegant theoretical design, it faces severe challenges in practical applications: slow training convergence, high computational resource requirements, and relatively slow inference speed. These factors severely limit its deployment feasibility on resource-constrained vehicle platforms.
In practical applications of autonomous driving, the industry generally adopts one-stage detection methods, mainly due to three advantages: fast inference speed that meets strict real-time detection requirements; end-to-end training that simplifies system design and deployment; and, with continued development, detection accuracy that has approached or even exceeded that of two-stage methods. Although the early SSD algorithm performed excellently in speed, it was gradually phased out due to its shortcomings in small object recognition and model scalability. In contrast, the YOLO series has become the mainstream solution for object detection in the autonomous driving field thanks to its excellent speed, stable detection accuracy, and good scalability. However, facing the special challenges of autonomous driving scenarios—especially small object detection and lightweight deployment requirements—the YOLO series still has room for improvement. Therefore, how to enhance small object perception by optimizing the network structure while maintaining or even reducing computational complexity has become a core research topic.
In response to the above research status, this paper focuses on a deep optimization of the YOLOv11 model structure, systematically constructing a lightweight and structurally efficient object detection network, LEAD-YOLO (Lightweight Efficient Autonomous Driving YOLO), around the two core challenges of small object detection performance and lightweight model design. The framework achieves its performance gains through four key technical innovations: the lightweight detail-aware module CGF (Convolutional Gated Transformer) significantly enhances the model's spatial detail perception for small objects; the dilated receptive field fusion structure DFF (Dilated Feature Fusion) effectively extends the model's perception of small objects and their surrounding environment; at the feature fusion level, the hierarchical feature fusion module HFFM (Hierarchical Feature Fusion Module) further strengthens the representation of multi-scale objects; and the lightweight shared detection head SFDH (Shared Feature Detection Head), designed with the edge deployment constraints of autonomous driving in mind, significantly improves inference efficiency while preserving detection accuracy.

2. Related Work

Although the YOLO series models perform excellently in object detection, many problems remain unsolved in complex road environments. In particular, in autonomous driving scenarios, insufficient small object detection accuracy and excessive model computational complexity severely restrict their practical application. Therefore, researchers have carried out extensive structural optimization and improvement work based on YOLO models, focusing mainly on feature enhancement, lightweight network design, and multi-scale fusion.

2.1. Feature Enhancement and Receptive Field Optimization

To improve YOLO models’ perception capabilities in complex scenes, researchers have explored expanding receptive fields and enhancing feature expression. Wang et al. [7] extended the network’s receptive field while embedding the parameter-free SimAM attention mechanism to enhance feature expression without increasing model complexity. Although this method achieved a small improvement in accuracy, the detection speed hardly improved, with limited overall performance enhancement. Similarly, Li et al. [8] introduced dilated convolutions with different dilation rates in the backbone network to expand receptive fields for better small object detection while maintaining the same parameter count. This method improved small object detection performance on the KITTI dataset, but the computational cost increased considerably.

2.2. Lightweight Architecture Design

Addressing the resource limitations of vehicle platforms, lightweight design has become a research focus. Luo et al. [9] proposed the YOLOv8-Ghost-EMA model, fusing the lightweight Ghost module with the dynamically weighted EMA mechanism to improve computational efficiency while enhancing feature extraction capabilities. This method achieved significant parameter and computational reduction, though challenges remained in small object and occluded object detection. Zhang et al. [10] proposed a knowledge distillation-based solution, using large YOLO models as teacher networks to guide lightweight student networks, achieving substantial parameter reduction while maintaining most of the original accuracy on the BDD100K dataset. Chen et al. [11] designed DCNv3-Lite, combining deformable convolution with depthwise separable convolution to maintain adaptability to non-rigid objects like pedestrians and bicycles while significantly reducing the computational complexity compared to YOLOv7.

2.3. Multi-Scale Feature Fusion Strategies

The core challenge of small object detection lies in insufficient feature information, making multi-scale feature fusion crucial. Yuan et al. [12] enhanced small object detection by introducing multi-scale feature enhancement modules (MFI) and lightweight attention mechanisms (LM) based on YOLOv11, though lacking validation in multi-category scenarios. Liu et al. [13] proposed an adaptive feature pyramid network (AFPN) that dynamically adjusts fusion strategies by learning feature importance weights at different scales, improving small object detection on the Waymo Open Dataset while reducing computational cost through feature channel pruning. Zhao et al. [14] designed a lightweight BiFPN version, achieving the high-precision detection of distant vehicles and pedestrians on the nuScenes dataset through optimized cross-scale connections and weighted feature fusion.
In summary, existing research has made certain progress in small object detection and lightweight design for YOLO, but the following limitations remain: (1) most methods focus on a single optimization objective, making it difficult to balance accuracy improvement and lightweight design simultaneously; (2) the specificity of autonomous driving scenarios is insufficiently considered, with limited adaptability to complex conditions such as extreme weather and lighting changes; and (3) lightweight design often comes at the cost of small object detection performance, posing safety hazards in practical applications. Therefore, how to design a comprehensive optimization solution that effectively improves small object detection accuracy while significantly reducing model complexity remains a key open problem.

2.4. YOLOv11 Network Introduction

YOLOv11 is the latest generation object detection framework released by Ultralytics on 30 September 2024, offering five model scales: n, s, m, l, and x. This series constructs a multi-level model configuration from lightweight to high-performance by systematically adjusting network depth and width, aiming to adapt to diverse application scenarios from resource-constrained devices to high-computing platforms, providing flexible solutions for vision tasks at different levels.
The overall architecture of the YOLOv11 network consists of three parts: the backbone network (backbone), the neck network (neck), and the detection head (head) (see Figure 1). In the backbone and neck, YOLOv11 replaces the original C2f module with the C3k2 module, which performs fine-grained splitting of feature maps through a dual-kernel design and bottleneck layers, enhancing both the richness of feature expression and feature extraction speed [15,16]. After the spatial pyramid pooling fast (SPPF) module, a C2PSA module based on an extended C2f is added, introducing a PSA attention mechanism based on multi-head attention and feedforward neural networks (FFNs). This not only strengthens the ability to focus on key features but also optimizes gradient flow through optional residual connections, further enhancing training stability and effectiveness [17,18]. Finally, two depthwise separable convolutions are introduced in the detection head, significantly reducing the computational burden and improving overall operational efficiency [19].
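As a point of reference for the head-level change just described, the sketch below shows the generic depthwise separable convolution pattern in PyTorch. It illustrates why the construct is cheap (a per-channel depthwise convolution followed by a 1 × 1 pointwise convolution); the channel sizes are placeholders and this is not the exact Ultralytics implementation.

```python
import torch.nn as nn

def dwsep_conv(in_ch: int, out_ch: int, k: int = 3) -> nn.Sequential:
    """Depthwise separable convolution: depthwise k x k conv + pointwise 1 x 1 conv."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch),  # depthwise: one filter per channel
        nn.Conv2d(in_ch, out_ch, kernel_size=1),                   # pointwise: mixes channels
    )

# Parameter count drops from in_ch * out_ch * k * k (standard conv)
# to in_ch * k * k + in_ch * out_ch, which is where the head saves computation.
```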

3. Proposed Method

The YOLOv11 baseline model still has several issues to be optimized in practical applications. First, the computational cost of its backbone network is relatively high. Although the C3k2 module is introduced to improve the information flow through feature map splitting and small kernel operations, the multi-layer convolution stacking still brings considerable computational burden, which is not conducive to lightweight deployment. Second, the receptive field scales of the original SPPF module are relatively discrete, making it difficult to flexibly interface with different semantic levels, resulting in difficulties in balancing spatial details and global semantic expression when fusing multi-scale features. Additionally, there are still performance bottlenecks in small object detection. Since small objects are often distributed in areas with complex backgrounds and strong interference, while the feature maps generated by the YOLOv11 backbone network have low resolution, it is difficult to effectively capture their detailed features, thereby affecting the detection accuracy.
In view of the above problems, this paper designs and proposes a better-performing detection model LEAD-YOLO based on the lightweight model YOLOv11n. Figure 2 shows the architecture of the LEAD-YOLO model, with red dashed lines indicating the improved parts.
In the backbone, we introduce the lightweight detail-aware module CGF to replace the original bottleneck, enhancing spatial feature extraction capabilities for small objects while maintaining network efficiency and stability of deep information transmission. Addressing the limited modeling capability of the SPPF module, we design the dilated receptive field fusion structure DFF, which uses multi-scale dilated convolutions to achieve progressive context modeling, improving perception of objects and their environment. In the neck part, we propose the hierarchical feature fusion module HFFM, guiding the collaborative modeling of local and global information to enhance detection robustness for multi-scale objects. For the detection head, we construct the lightweight shared structure SFDH, improving cross-scale feature utilization efficiency through shared convolution and detail enhancement branches.

3.1. CGF Block Design

The C3k2 structure in YOLOv11’s backbone network faces critical limitations in autonomous driving scenarios. Its fixed convolutional receptive fields struggle to model the spatial dependencies required for detecting small objects in dense traffic, while deep stacking causes feature degradation that impairs fine-grained representations.
Although the Transformer architecture performs well in handling long-range dependencies [20], its high computational cost makes it difficult to be directly applied. The existing gating mechanisms only focus on channel selection, while ignoring the crucial spatial relationships.
To address these challenges, we propose the convolutional gated transformer (CGF) module that synergistically combines spatial-aware gating with efficient Transformer-inspired modeling. CGF employs a complementary dual-path design: the gating path preserves fine-grained features against deep network degradation, while the spatial modeling path provides essential neighborhood context. This design is particularly effective for detecting small objects with minimal visual signatures in complex autonomous driving scenarios. The CGF module is strategically designed to optimize the Bottleneck (Figure 3b) structures in C3k (Figure 3c) and C3k2 (Figure 3d) modules, achieving three critical objectives: (1) enhanced spatial sensitivity for small object features, (2) computational efficiency suitable for edge deployment, and (3) stable gradient flow throughout deep networks. Its detailed architecture is illustrated in Figure 3a.
The CGF module implements its dual-path design through two complementary components. For an input feature map $X \in \mathbb{R}^{C \times H \times W}$, the transformation follows:
Spatial modeling path: To address the fixed receptive field limitation, we first apply dimension permutation and channel-wise normalization:
$$X_1 = X + \mathrm{Scale}_r \cdot \mathrm{Scale}_l \cdot \mathrm{DropPath}\big(\mathrm{LayerNorm}(\mathrm{Permute}(X))\big)$$
This path normalizes features across spatial dimensions to alleviate distribution shift in deep networks. The DropPath mechanism ($p = 0.1$) provides regularization, while Layer Scale ($\mathrm{Scale}_l = 10^{-6}$) prevents training instability.
Channel gating path: To prevent feature degradation and enhance fine-grained details, we employ a convolutional GLU (CGLU) mechanism:
$$X_2 = X_1 + \mathrm{Scale}_r \cdot \mathrm{Scale}_l \cdot \mathrm{DropPath}\big(\mathrm{CGLU}(\mathrm{LayerNorm}(X_1))\big)$$
The CGLU splits the input features into a main branch $F_m$ and a gating branch $G$ through a $1 \times 1$ convolution. The main branch extracts local dependencies via depthwise convolution with GELU activation, while the gating branch dynamically controls feature transmission through element-wise multiplication:
$$\mathrm{CGLU}(X) = \mathrm{Dropout}\big(\mathrm{Conv}_{1 \times 1}(F_m \odot G)\big) + X$$
This gating mechanism acts as a learnable filter, selectively amplifying task-relevant features crucial for small object detection while suppressing noise.
The complete CGF transformation $\mathrm{CGF}(X) = \mathrm{Permute}(X_2)$ achieves synergistic benefits: the spatial path captures neighborhood context essential for understanding object relationships in traffic scenes, while the channel path preserves and enhances fine-grained features through adaptive gating. Multiple residual connections with learnable scaling factors ensure stable gradient flow throughout deep networks. This design maintains computational efficiency through lightweight depthwise convolutions, making it suitable for real-time autonomous driving applications while significantly improving small object detection performance.
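A minimal PyTorch sketch of the dual-path CGF transformation defined by the two equations above is given below. The CGLU expansion ratio, the DropPath implementation, and the treatment of the residual scale $\mathrm{Scale}_r$ (folded into the learnable layer scale here) are assumptions for illustration, not the authors' exact code, and the integration into C3k2 is omitted.

```python
import torch
import torch.nn as nn


class DropPath(nn.Module):
    """Stochastic depth: randomly drops the whole residual branch per sample."""
    def __init__(self, p=0.1):
        super().__init__()
        self.p = p

    def forward(self, x):
        if not self.training or self.p == 0.0:
            return x
        keep = 1.0 - self.p
        mask = torch.rand(x.shape[0], *([1] * (x.dim() - 1)), device=x.device) < keep
        return x * mask / keep


class CGLU(nn.Module):
    """Convolutional GLU: 1x1 conv splits into main/gate; depthwise 3x3 + GELU on the main branch."""
    def __init__(self, dim, expansion=2, drop=0.0):
        super().__init__()
        hidden = dim * expansion
        self.fc_in = nn.Conv2d(dim, hidden * 2, kernel_size=1)          # produces F_m and G
        self.dwconv = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.act = nn.GELU()
        self.fc_out = nn.Conv2d(hidden, dim, kernel_size=1)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        f_m, g = self.fc_in(x).chunk(2, dim=1)
        f_m = self.act(self.dwconv(f_m))
        return self.drop(self.fc_out(f_m * g)) + x                      # gated fusion + residual


class CGF(nn.Module):
    """Dual-path CGF block: spatial normalization path followed by a channel gating (CGLU) path."""
    def __init__(self, dim, drop_path=0.1, layer_scale=1e-6):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.cglu = CGLU(dim)
        self.drop_path = DropPath(drop_path)
        self.scale1 = nn.Parameter(layer_scale * torch.ones(dim))       # layer scale; Scale_r folded in
        self.scale2 = nn.Parameter(layer_scale * torch.ones(dim))

    def forward(self, x):                                               # x: (B, C, H, W)
        # Spatial modeling path: permute to channels-last so LayerNorm acts over channels.
        y = self.norm1(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        x1 = x + self.drop_path(self.scale1.view(1, -1, 1, 1) * y)
        # Channel gating path with CGLU.
        y = self.norm2(x1.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        return x1 + self.drop_path(self.scale2.view(1, -1, 1, 1) * self.cglu(y))
```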

3.2. DFF Block Design

Multi-scale object detection faces challenges in receptive field adaptability and fine-grained feature expression, particularly for distant small objects whose low resolution and weak texture lead to feature loss during downsampling. The conventional SPPF module (Figure 4a) addresses this through multi-scale max pooling but suffers from two limitations: fixed discrete receptive field scales that lack flexibility for different semantic levels, and pooling-induced information compression that damages small object details.
We propose the dilated feature fusion (DFF) module (Figure 4b) to address these limitations. Given an input feature $X \in \mathbb{R}^{C \times H \times W}$, DFF first applies channel compression through $X_0 = \mathrm{Conv}_{1 \times 1}(X)$ to reduce dimensionality by a factor $r$. Subsequently, it constructs a multi-scale feature hierarchy using a shared $3 \times 3$ convolution kernel with progressively increasing dilation rates, $X_i = \mathrm{Conv}_{3 \times 3}(X_{i-1}, d = 2i)$ for $i = 1, 2, \ldots, n$, where each layer expands the receptive field while maintaining parameter efficiency. The final output $Y = \mathrm{Concat}[X_0, X_1, \ldots, X_n]$ aggregates features across all scales. This recursive structure progressively expands the receptive field from $3 \times 3$ to $(2^{n+1}+1) \times (2^{n+1}+1)$, preserving fine-grained details crucial for small object detection while capturing multi-scale context through dilated convolutions. Compared to SPPF's parallel pooling, DFF's sequential dilation mechanism better models spatial hierarchy without information loss, particularly benefiting distant small object detection where feature preservation is critical.
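A compact sketch of this recursive dilation scheme is shown below, assuming a reduction ratio $r = 4$ and the dilation setting [1, 3, 5] examined in Section 4.5.1; weight sharing across dilations is modeled by reusing one 3 × 3 kernel, and all other details are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DFF(nn.Module):
    """Channel compression, then a chain of shared 3x3 convs with growing dilation, then fusion."""
    def __init__(self, in_ch, out_ch, dilations=(1, 3, 5), r=4):
        super().__init__()
        mid = in_ch // r
        self.compress = nn.Conv2d(in_ch, mid, kernel_size=1)
        # One shared 3x3 kernel applied with different dilation rates.
        self.shared = nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False)
        self.dilations = dilations
        self.fuse = nn.Conv2d(mid * (len(dilations) + 1), out_ch, kernel_size=1)

    def forward(self, x):
        x0 = self.compress(x)
        feats, prev = [x0], x0
        for d in self.dilations:
            # Reuse the same weights; padding = dilation keeps the spatial size unchanged.
            prev = F.conv2d(prev, self.shared.weight, padding=d, dilation=d)
            feats.append(prev)
        return self.fuse(torch.cat(feats, dim=1))


# Usage: DFF(256, 256)(torch.randn(1, 256, 20, 20)).shape  ->  (1, 256, 20, 20)
```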

3.3. HFFM Block Design

Feature fusion is a key link in achieving the accurate perception of multi-scale objects and an understanding of environmental semantics, directly affecting the model’s recognition ability for small objects and complex backgrounds in autonomous driving scenarios. However, traditional methods often have difficulty balancing spatial details and global semantic expression when fusing multi-source features from different scales, resulting in incomplete feature information expression and insufficient discriminative ability. Especially when facing small objects with limited fine-grained information, simple concatenation easily introduces redundant background noise and causes key object features to be overwhelmed by large-scale features, thereby reducing model discriminative ability and detection accuracy.
To solve the above problems, we design the hierarchical feature fusion module (HFFM) and introduce it into the neck, aiming to achieve collaborative modeling of local structure and global semantics by constructing hierarchical feature selection paths.
Given two input features $F_1 \in \mathbb{R}^{C \times H \times W}$ and $F_2 \in \mathbb{R}^{C \times H \times W}$, HFFM operates in three stages (Figure 5):
Stage 1: Dimension alignment and baseline fusion. First, we align feature dimensions and establish a baseline fusion:
$$F_i = \mathrm{Conv}_{1 \times 1}(F_i), \qquad F_{\mathrm{base}} = \mathrm{GroupConv}_{3 \times 3}\big([F_1, F_2]\big)$$
where $F_i \in \mathbb{R}^{C_{\mathrm{mid}} \times H \times W}$ aligns the channel dimensions. This baseline preserves general fusion information while reducing computation through group convolution.
Stage 2: Hierarchical feature extraction. To capture multi-scale patterns, we process aligned features through patch-aware attention [21] with different receptive fields:
$$F_{\mathrm{local}} = \mathrm{PatchAware}(F_1,\, p=2), \qquad F_{\mathrm{global}} = \mathrm{PatchAware}(F_2,\, p=4)$$
The PatchAware module performs spatial attention within p × p patches:
$$\mathrm{PatchAware}(X, p) = X \odot \sigma\big(\mathrm{AvgPool}_{p \times p}(\mathrm{Conv}(X))\big)$$
The smaller patch size (p = 2) preserves fine-grained details crucial for small objects, while larger patches (p = 4) capture global context, creating complementary feature representations.
Stage 3: Adaptive feature aggregation. Finally, we aggregate all features and apply learnable selection:
$$\mathrm{HFFM}(F_1, F_2) = \mathrm{Conv}_{1 \times 1}\Big(\mathrm{RepConv}\big(\mathrm{Conv}_{1 \times 1}([F_{\mathrm{local}}, F_{\mathrm{global}}, F_{\mathrm{base}}])\big)\Big)$$
The reparameterizable convolution (RepConv) [22] enables efficient inference while the cascaded convolutions perform channel-wise feature selection, suppressing redundancy and highlighting salient regions.
This hierarchical design ensures that small object features are preserved through dedicated local branches while maintaining necessary global context, effectively addressing the multi-scale detection challenge in autonomous driving scenarios.
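The following sketch summarizes the three HFFM stages in PyTorch. The PatchAware gate is interpreted here as patch-wise average pooling followed by a sigmoid gate broadcast back over each p × p patch, and RepConv is replaced by a plain 3 × 3 convolution; these interpretations and the channel sizes are assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchAware(nn.Module):
    """Spatial attention computed over p x p patches and broadcast back as a gate."""
    def __init__(self, ch, patch):
        super().__init__()
        self.patch = patch
        self.conv = nn.Conv2d(ch, ch, kernel_size=1)

    def forward(self, x):
        a = torch.sigmoid(F.avg_pool2d(self.conv(x), self.patch))
        a = F.interpolate(a, size=x.shape[-2:], mode="nearest")   # broadcast gate over each patch
        return x * a


class HFFM(nn.Module):
    def __init__(self, ch, mid=None):
        super().__init__()
        mid = mid or ch                                           # assumes mid is even for groups=2
        self.align1 = nn.Conv2d(ch, mid, 1)
        self.align2 = nn.Conv2d(ch, mid, 1)
        # Stage 1: grouped 3x3 conv over the concatenated pair as the baseline fusion.
        self.base = nn.Conv2d(2 * mid, mid, 3, padding=1, groups=2)
        # Stage 2: local (p=2) and global (p=4) patch-aware branches.
        self.local = PatchAware(mid, patch=2)
        self.glob = PatchAware(mid, patch=4)
        # Stage 3: aggregation; RepConv stands in as an ordinary 3x3 conv here.
        self.aggregate = nn.Sequential(
            nn.Conv2d(3 * mid, mid, 1),
            nn.Conv2d(mid, mid, 3, padding=1),
            nn.Conv2d(mid, ch, 1),
        )

    def forward(self, f1, f2):
        f1, f2 = self.align1(f1), self.align2(f2)
        base = self.base(torch.cat([f1, f2], dim=1))
        return self.aggregate(torch.cat([self.local(f1), self.glob(f2), base], dim=1))
```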

3.4. SFDH Block Design

Although YOLOv11 performs well in object detection, its detection head has two shortcomings: (1) the branches for different scales are independent, lacking sharing and collaboration and thus increasing computational redundancy, which is not conducive to small object detection; and (2) detail perception for distant objects is limited. We propose the shared feature detail-enhancement head (SFDH) to address these issues through unified multi-scale processing and explicit texture enhancement.
As illustrated in Figure 6, SFDH first unifies the multi-scale features $\{F_i\}_{i=1}^{3}$ through channel projection: $F_i = \mathrm{Conv}_{\mathrm{proj}}(F_i) \in \mathbb{R}^{C_{\mathrm{proj}} \times H \times W}$. This provides consistent input dimensions for subsequent shared processing.
The core innovation lies in the shared DEConv modules that process all scales jointly: $E_i = \mathrm{DEConv}_2(\mathrm{DEConv}_1(F_i))$. Each DEConv module [23] decomposes the convolution into five complementary paths capturing different geometric patterns. Specifically, it combines a learnable standard convolution $K_{\mathrm{std}}$ with four fixed difference operators: a center difference operator $K_{\mathrm{CD}}$ for texture edges, horizontal and vertical operators $K_{\mathrm{HD}}$ and $K_{\mathrm{VD}}$ for directional structures, and an angular operator $K_{\mathrm{AD}}$ for diagonal patterns. Each path applies group normalization and GELU activation: $F_{\mathrm{path}} = \mathrm{GELU}(\mathrm{GN}(\mathrm{Conv}_{\mathrm{path}}(X)))$.
During inference, these multiple paths reparameterize into a single convolution through kernel fusion:
$$K_{\mathrm{fused}} = K_{\mathrm{std}} + \sum_{j \in \{\mathrm{CD},\, \mathrm{HD},\, \mathrm{VD},\, \mathrm{AD}\}} K_j$$
This design maintains the enhanced feature representation learned during training while reducing inference to a single 3×3 convolution operation.
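The linear part of this reparameterization can be verified with a small toy example: running several parallel 3 × 3 branches and summing their outputs is equivalent to running one convolution whose kernel is the sum of the branch kernels. Per-branch normalization and activation are omitted here, so this illustrates only the kernel-fusion step itself, not the full DEConv.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

channels, branches = 8, 5                     # e.g. K_std plus four difference kernels
convs = nn.ModuleList(
    nn.Conv2d(channels, channels, 3, padding=1, bias=False) for _ in range(branches)
)

x = torch.randn(2, channels, 16, 16)

# Training-time view: run the branches separately and sum their outputs.
y_multi = sum(conv(x) for conv in convs)

# Inference-time view: fuse the kernels once, then run a single convolution.
k_fused = torch.stack([conv.weight for conv in convs]).sum(dim=0)
y_fused = F.conv2d(x, k_fused, padding=1)

print(torch.allclose(y_multi, y_fused, atol=1e-4))   # True: the two views are equivalent
```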
Finally, shared prediction heads generate outputs with scale-adaptive calibration:
$$\mathrm{reg}_i = s_i \cdot \mathrm{Conv}_{\mathrm{reg}}(E_i) \times r$$
$$\mathrm{cls}_i = \mathrm{Conv}_{\mathrm{cls}}(E_i)$$
where the learnable scale factors $s_i$ compensate for varying object sizes across detection levels and $r$ denotes the regression granularity.
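A minimal sketch of such a shared head is given below, assuming three pyramid levels, a DFL-style box branch of width 4 × reg_max, and a learnable per-level scalar $s_i$; the regression granularity factor $r$ and the exact channel widths are assumptions and are omitted or simplified here.

```python
import torch
import torch.nn as nn


class SharedHead(nn.Module):
    """One regression conv and one classification conv reused across all pyramid levels."""
    def __init__(self, ch, num_classes, num_levels=3, reg_max=16):
        super().__init__()
        self.reg = nn.Conv2d(ch, 4 * reg_max, kernel_size=1)   # shared box branch
        self.cls = nn.Conv2d(ch, num_classes, kernel_size=1)   # shared class branch
        self.scales = nn.Parameter(torch.ones(num_levels))     # per-level calibration s_i

    def forward(self, feats):                                  # feats: list of (B, ch, H_i, W_i)
        return [(self.scales[i] * self.reg(f), self.cls(f)) for i, f in enumerate(feats)]
```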
By sharing detail enhancement modules across all scales and introducing geometric priors through specialized kernels, SFDH reduces parameters by 40% compared to the original head while significantly improving small object detection—a critical requirement for autonomous driving applications.

4. Experiments and Results Analysis

4.1. Datasets and Data Processing

The nuImages [24] dataset is an image subset of the nuScenes autonomous driving dataset released by Motional, specifically serving image-level vision tasks. This dataset covers typical traffic scenarios such as urban main roads, residential areas, and suburban roads, with collection conditions including various weather (sunny, cloudy, rainy) and lighting environments (daytime, dusk, night), as well as different traffic density conditions, fully reflecting the diversity and complexity of autonomous driving scenarios, making it an ideal dataset for evaluating autonomous driving visual perception capabilities. The dataset contains images captured by six cameras from different angles, totaling 93,000 images covering 25 object categories with rich, high-quality annotations.
To meet the needs of forward-view object detection in autonomous driving scenarios, this paper selects 18,368 image samples collected by the vehicle's front camera. This portion of the data follows the official split, comprising 13,187 training images, 3249 validation images, and 1932 test images. Considering the simplification and generalization requirements for object semantic classification in practical applications, we removed the driveable surface class and integrated the remaining 24 visual object classes into five categories based on semantic attributes: pedestrian, obstacle, bike, car, and vehicles, with the specific mapping relationships shown in Table 1. The distribution of the mapped object categories is shown in Figure 7a, and the proportion of each category is shown in Figure 7b. Statistical results show that the three categories dominated by small objects—pedestrian, obstacle, and bike—account for up to 88.8% of the overall samples. This characteristic highlights the representativeness and pertinence of this dataset for small object detection tasks in autonomous driving scenarios, giving it high research value and practical application potential.
VisDrone2019 [25] is a large-scale drone vision dataset released by Northeastern University in China, specifically for object detection and tracking tasks from drone perspectives. The dataset contains over 25,000 images covering diverse scenarios such as cities, rural areas, highways, and construction sites, using various shooting angles including overhead and oblique views, covering different lighting and weather conditions. The dataset annotates common object categories such as pedestrians, vehicles, bicycles, and motorcycles, providing detailed annotation information including bounding boxes, categories, occlusion conditions, and motion states. This study adopts its official standard division, using 6471 training images and 548 validation images. This division follows the dataset’s original settings, ensuring the comparability of experimental results with other research, while the remaining images are used as test sets for final performance evaluation.

4.2. Experimental Environment and Parameters

This experiment uses the AutoDL cloud computing platform running Ubuntu 22.04, with an RTX 3090 GPU (24 GB) and an Intel(R) Xeon(R) Platinum 8358P CPU. The development language is Python 3.10, the deep learning framework is PyTorch 2.2.2, and the CUDA version is 11.8. The training parameters are shown in Table 2.
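For orientation only, a baseline YOLOv11n run can be launched with the Ultralytics API roughly as follows; the dataset YAML name and the hyperparameter values shown here are placeholders, not the settings of Table 2.

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")            # pretrained YOLOv11n weights
model.train(
    data="nuimages_front.yaml",       # hypothetical dataset config
    epochs=200,                       # placeholder values, see Table 2 for the actual settings
    imgsz=640,
    batch=16,
    device=0,                         # single RTX 3090
)
```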

4.3. Evaluation Metrics

This experiment uses precision (P), recall (R), mAP@0.5, mAP@[0.5:0.95], computational cost (GFLOPs), and parameters as evaluation metrics to measure detection accuracy and model lightweight effects. The calculations for P, R, AP, and mAP are shown in the following equations:
$$P = \frac{TP}{TP + FP}$$
$$R = \frac{TP}{TP + FN}$$
$$AP = \int_{0}^{1} P(R)\,\mathrm{d}R$$
$$mAP = \frac{1}{N}\sum_{i=1}^{N} AP_i$$
where true positive (TP) is the number of samples correctly predicted as positive, false positive (FP) is the number of negative samples incorrectly predicted as positive, and false negative (FN) is the number of positive samples incorrectly predicted as negative. AP (average precision) is the average precision of a single detection category, $AP_i$ is the average precision of the i-th detection category, and mAP is the mean average precision over all detection categories.
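As a concrete illustration of these definitions, the short example below computes precision, recall, a trapezoidal approximation of AP over a toy precision-recall curve, and the mean over hypothetical per-class APs; all numbers are invented for the example only.

```python
import numpy as np

# Hypothetical counts for one class.
tp, fp, fn = 80, 20, 40
precision = tp / (tp + fp)      # 0.80
recall = tp / (tp + fn)         # ~0.67

# Toy precision-recall curve and its trapezoidal area (AP).
recalls = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
precisions = np.array([1.0, 0.9, 0.8, 0.7, 0.5, 0.3])
ap = np.sum((recalls[1:] - recalls[:-1]) * (precisions[1:] + precisions[:-1]) / 2)

# mAP: mean of the per-class APs (the other two values are made up).
map50 = np.mean([ap, 0.55, 0.61])
print(round(precision, 2), round(recall, 2), round(float(ap), 3), round(float(map50), 3))
```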
Precision refers to the proportion of actual objects among those predicted as positive by the model. High precision means fewer false positives in the model’s predictions, with a high reliability of detected objects.
Recall refers to the proportion of objects correctly detected by the model among all actual objects. High recall means fewer missed detections, with the model able to discover as many objects as possible.
mAP@0.5 represents the model’s ability to successfully identify objects with a relatively loose overlap criterion ($\mathrm{IoU} \geq 0.5$) in object detection tasks. It comprehensively considers whether the objects detected by the model are correct and the detection accuracy. Higher mAP@0.5 indicates stronger accuracy in object localization and classification, with fewer missed detections and false positives.
mAP@[0.5:0.95] reflects the model’s average detection performance under various strictness levels (IoU gradually increasing from 0.5 to 0.95). It requires not only good object detection but also more precise detection box positions, reflecting the model’s detailed detection capabilities.
Computational cost refers to the number of floating-point operations required during model inference, used to measure the degree of computational resource consumption. Lower computational cost means fewer computational resources required during model operation, usually achieving a faster inference speed and lower power consumption, making it more suitable for real-time applications or edge device deployment scenarios with resource constraints.
Parameters represent the number of learnable weights in the model, reflecting model complexity and size. Fewer parameters mean smaller space required for model storage and transmission, making it more convenient for deployment on storage-limited devices.

4.4. Comparative Experiments

4.4.1. C3k2 Improvement Comparative Experiment

To verify the effectiveness of the proposed CGF module, we also selected other C3k2 module improvement schemes using different internal structures for comparative analysis. All models were trained under consistent training processes to ensure fair comparison of various structural improvement schemes. As shown in Table 3, the CGF-C3k2 module reduced the computational cost from 6.3 GFLOPs to 5.7 GFLOPs while improving mAP, with a 14.7% reduction in parameters. This module achieves a good balance between accuracy and model computational efficiency, demonstrating high optimization potential.

4.4.2. Neck Improvement Comparative Experiment

Similarly, to evaluate the effectiveness of the proposed neck improvement structure, we also selected several typical schemes for comparative experiments. As shown in Table 4, although the computational cost slightly increased by 0.9 GFLOPs and parameters increased by 3.5%, the model achieved significant performance improvements. Specifically, mAP@0.5 improved by 2.3% and mAP@[0.5:0.95] improved by 2.5%, indicating that this improvement effectively enhanced the model’s detection capabilities. With limited computational resource growth, the improved module achieved significant accuracy improvements.

4.4.3. Head Improvement Comparative Experiment

To verify the effectiveness of the detection head improvement, different detection heads were trained under consistent training processes. The results shown in Table 5 indicate that the SFDH detection head improved mAP@0.5 by 1.5% while reducing computational cost by 0.2 GFLOPs and parameters by 13.4%, demonstrating strong performance improvement and resource utilization efficiency.

4.4.4. Model Comparison Before and After Training

To more clearly demonstrate the changes in detection accuracy, recall, and average precision for different categories, we trained the original YOLOv11n (A) model and LEAD-YOLO (B) model on the nuImages dataset while maintaining consistent hyperparameters and training settings. Table 6 provides detailed comparison data before and after improvement. From Table 6, it can be seen that LEAD-YOLO’s detection performance improvement is particularly significant for small object categories such as pedestrian, obstacle, and bike, with mAP@0.5 improving by 3.7, 2.7, and 3.4 percentage points, respectively, and mAP@[0.5:0.95] improving by 3.8, 5.4, and 4 percentage points, respectively. This indicates that the proposed method has more advantages in small object recognition and localization, verifying the enhancement effect of the improved structure on fine-grained object perception capabilities.

4.4.5. Different Model Comparative Experiment

To further verify the improvements and performance benefits, we compare LEAD-YOLO with two main categories of mainstream algorithms: one-stage detectors, and two-stage detectors together with Transformer-based DETR variants. The one-stage group includes SSD, YOLOv5s [34], YOLOv7-tiny [35], YOLOv8n [36], and YOLOv11, as well as the popular small object detection methods EfficientDet-D0 [37] and RetinaNet [38]. The two-stage and Transformer-based group includes Faster R-CNN, DETR-R18, and Def-DETR-R50 (Deformable DETR-R50). All experiments used identical training parameters.
As shown in Table 7, LEAD-YOLO demonstrates significant advantages when compared with mainstream one-stage detectors. In terms of detection performance, LEAD-YOLO achieves the best results across all evaluation metrics, with a precision (P) of 74.6%, recall (R) of 56.6%, mAP@0.5 of 64.2%, and mAP@[0.5:0.95] of 35.2%. Compared to the second-best performing YOLOv11n, LEAD-YOLO improves mAP@0.5 by 3.8 percentage points and mAP@[0.5:0.95] by 2.4 percentage points. Moreover, LEAD-YOLO achieves optimal computational efficiency with only 6.1 GFLOPs and 1.928 million parameters. Compared to YOLOv11n, it reduces the computational cost by 3.2% and parameters by 26.3%. When compared with traditional detectors like SSD and RetinaNet, the parameter reduction reaches 92.3% and 94.3%, respectively, fully demonstrating the advantages of the lightweight design.
Table 8 presents the comparison results with two-stage detectors and Transformer-based DETR variants. While Faster R-CNN maintains the leading detection accuracy (mAP@0.5: 68.7%, mAP@[0.5:0.95]: 37.3%), it comes with extremely high computational cost, requiring 206.2 GFLOPs and 41.39 million parameters. In contrast, LEAD-YOLO achieves a 97.0% reduction in computational cost and a 95.3% reduction in parameters with only a 4.5 percentage point gap in mAP@0.5, demonstrating an excellent efficiency–accuracy balance. Furthermore, LEAD-YOLO slightly outperforms both DETR-R18 and Def-DETR-R50 in detection performance while maintaining overwhelming advantages in computational efficiency. Compared to DETR-R18, it reduces the computational cost by 87.3% and parameters by 92.6%, proving that CNN-based lightweight designs still hold significant advantages in small object detection tasks.
The performance analysis illustrated in Figure 8 further validates the comprehensive advantages of LEAD-YOLO. Experimental results demonstrate that LEAD-YOLO successfully achieves the design goal of maintaining competitive detection accuracy while significantly reducing model complexity. This efficient performance is primarily attributed to (1) carefully designed feature extraction and fusion mechanisms; (2) detection head structures optimized for small objects; and (3) effective model compression strategies. These characteristics suggest LEAD-YOLO’s potential for resource-constrained edge computing environments in real-time small object detection tasks.

4.4.6. Generalization Performance Evaluation Experiment

To verify the generalization performance and robustness of our proposed improvement method, we conducted cross-dataset experiments on the VisDrone2019 dataset. We selected YOLOv5s, YOLOv8n, YOLOv11n, and RTDETR-R18 as comparison algorithms. Experimental results are shown in Table 9.
Experimental results show that LEAD-YOLO demonstrates superior detection performance on datasets with more small objects. Compared to YOLOv11n, its mAP@0.5 improved by 7.9%. Meanwhile, LEAD-YOLO maintains the lowest model parameters while achieving the highest detection accuracy. In cross-dataset generalization testing, this method still maintains good detection effects, verifying its effectiveness and robustness in small object detection tasks.

4.5. Ablation Experiments

4.5.1. DFF Module Ablation Experiment

The DFF module concatenates three weight-shared convolution modules with different dilation coefficients. To verify the impact of different dilation coefficient configurations on multi-scale feature modeling, we compared the performance of the common settings [1, 2, 3], [1, 2, 6], and [1, 3, 9] with our [1, 3, 5] configuration in the DFF module. Experimental results are shown in Table 10.
From the table, it can be seen that when the dilation rates are [1, 3, 5], both mAP@0.5 and mAP@[0.5:0.95] achieve the best results. Figure 9 shows the actual receptive field distribution maps corresponding to the four groups of dilation rates. It can be seen that [1, 2, 3] (Figure 9a) provides limited receptive field expansion, making it difficult to effectively cover long-distance context information; moreover, because the dilation rates are adjacent, multiple convolution operations repeatedly act on the same set of elements, resulting in a large amount of redundant computation in local areas and low information fusion efficiency. In [1, 2, 6] (Figure 9b), some areas are still repeatedly convolved due to the large span between dilation rates, causing a certain waste of resources, and the central area carries high weights. In [1, 3, 9] (Figure 9d), the receptive field expands, but there is no information exchange between the convolution operators of the second and third layers, which prevents more complete feature information from being obtained.
The [1, 3, 5] configuration designed in this paper (Figure 9c) not only expands the receptive field size but also has smoother step changes between layers, making the middle layer features (second layer) more balanced in obtaining context information, with stronger structural symmetry and computational balance, helping to improve the coherence and fusion ability of multi-scale features. Additionally, this configuration establishes more balanced information paths between edge and central regions, reducing the occurrence of areas not effectively covered by convolution. Experiments also show that integrating DFF into the detection backbone significantly improves the detection accuracy and robustness of distant objects.
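This receptive-field comparison can be checked with the standard back-of-the-envelope rule that a 3 × 3 convolution with dilation d adds 2d to the receptive field; the snippet below applies it to the four candidate settings.

```python
def receptive_field(dilations, k=3):
    """Receptive field of a chain of k x k convolutions with the given dilation rates."""
    rf = 1
    for d in dilations:
        rf += (k - 1) * d
    return rf

for rates in ([1, 2, 3], [1, 2, 6], [1, 3, 5], [1, 3, 9]):
    print(rates, receptive_field(rates))
# [1, 2, 3] -> 13; [1, 2, 6] and [1, 3, 5] -> 19 (same span, smoother steps for [1, 3, 5]); [1, 3, 9] -> 27
```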

4.5.2. Overall Ablation Experiment

To verify the effectiveness of the core module designs proposed in this paper, we designed and implemented a series of ablation experiments on the nuImages dataset. In all ablation experiments, YOLOv11n was used as the baseline model, with key modules gradually introduced or replaced to analyze the impact of each component on overall performance. The CGF and DFF modules form a complete “detail-context” perception system in the backbone: CGF protects the detail features of small objects from degradation in deep networks through its gating mechanism, while DFF expands the receptive field to capture the contextual environment of these detail features. This design requires the two modules to work together: the detail features preserved by CGF need the context information provided by DFF for accurate localization, while the expanded receptive field of DFF needs the high-quality features maintained by CGF to avoid semantic ambiguity. Therefore, ablating them as a whole better reflects their design intent. Table 11 shows the experimental results under the various ablation configurations, with the corresponding analysis shown in Figure 10. From the figure, it can be seen that each improvement enhances the model's detection performance to varying degrees. Introducing the lightweight CGF module improves mAP by 0.4% while reducing the computational cost to the lowest value of 5.7 GFLOPs. Combining CGF with DFF and HFFM brings the model's recall to its highest value of 58.2%. Finally, the model integrating all improved modules achieves maximum gains of 3.8% and 5.4% in mAP@0.5 and mAP@[0.5:0.95], respectively, while reducing the computational cost by 0.2 GFLOPs and parameters by 24.1%. Overall, the improved model achieves a comprehensive improvement in detection accuracy while reducing parameters and computational complexity, performing especially well in small object detection and demonstrating significant advantages for object detection tasks in real autonomous driving scenarios.

4.6. Visualization Analysis

4.6.1. Detection Comparison

This experiment uses YOLOv11n and its improved model LEAD-YOLO for testing and comparison on the test set. Figure 11a shows images under sunny, rainy, and night scenes, respectively, Figure 11b shows YOLOv11n model detection results, and Figure 11c shows LEAD-YOLO model detection results. The red dashed boxes in the figures indicate parts where the improved algorithm performs better than the baseline algorithm. Through comparison, it can be seen that the improved algorithm can more accurately detect distant pedestrians and other small objects in sunny environments; successfully identifies distant obstacles in rainy and partially occluded scenes; and can accurately detect distant vehicles under strong light interference at night. The proposed improved model demonstrates better detection capabilities in various complex scenarios, further verifying the effectiveness and robustness of the improved model in practical applications.

4.6.2. Heatmap Comparison

To further analyze the contribution of the proposed improved structures to model performance improvement, we use gradient-based visualization methods to display the heatmaps of intermediate feature responses for different models.
Figure 12a shows the images of different road environments under three typical weather conditions: sunny, rainy, and night. Figure 12b,c, respectively, correspond to the heatmap visualization results of the baseline model and improved model in the above scenarios, where red areas indicate regions with stronger model attention.
From the heatmaps, it can be observed that the baseline model has certain limitations in object detection tasks, with its attention regions prone to shift and insufficiently clear responses at object boundaries, especially showing unstable performance in scenarios with dense distant small objects or partial occlusion. The model with introduced improvement modules can effectively enhance response capabilities for key object regions, with activation regions more concentrated on object edge contours and semantically significant structural regions, while suppressing the redundant activation of background areas. This optimization of attention mechanisms not only improves the model’s feature expression capabilities but also enhances the object discrimination, further verifying the actual effects of structural improvements in detection accuracy improvement.

5. Conclusions

In autonomous driving scenarios, small object detection has always been a key challenge affecting model practicality and safety. Especially in edge device deployment with limited computational resources, detection models face higher requirements for lightweight and high accuracy. Existing detection models cannot solve the above problems well, so we propose the improved lightweight detection model LEAD-YOLO. First, in the backbone part, we design the lightweight detail-aware module CGF to optimize the bottleneck part in the C3k2 module, achieving lightweight design while maintaining stable flow of deep semantic features. Then, we design the DFF module to replace the original SPPF module. Compared to the SPPF module, DFF focuses more on the progressive modeling of multi-scale context, enhancing the perception capabilities for semantic relationships between small objects and complex backgrounds through dilated convolution. In the neck part, we propose the HFFM module to construct multi-level feature fusion paths, achieving efficient interaction between local details and global context through hierarchical perception mechanisms, improving the model’s adaptability to multi-scale objects in complex traffic scenarios. For the detection head, we design the lightweight shared structure SFDH, achieving efficient cross-scale feature fusion through shared convolution modules and introducing detail enhancement branches focusing on local edge and texture modeling, significantly reducing model complexity while improving object recognition capabilities, suitable for real-time deployment scenarios in autonomous driving.
Experimental results on the nuImages dataset show that LEAD-YOLO improves mAP@0.5 by 3.8% and mAP@[0.5:0.95] by 5.4% compared to the baseline model while maintaining extremely low computational overhead and reducing parameters by 24.1%, fully demonstrating its good balance between lightweight design and accuracy. On the VisDrone2019 dataset, LEAD-YOLO also performs excellently, with mAP@0.5 and mAP@[0.5:0.95] improving by 7.9% and 6.4%, respectively, over the baseline, further verifying the robustness and practicality of the method in complex scenarios. Moreover, LEAD-YOLO's detection performance improvement on the small object categories of the nuImages dataset (pedestrian, obstacle, and bike) is particularly significant: mAP@0.5 improved by 3.7%, 2.7%, and 3.4%, respectively, and mAP@[0.5:0.95] improved by 3.8%, 5.4%, and 4.0%, respectively, indicating that the proposed structure has stronger capabilities in fine-grained object perception and localization, providing a more advantageous solution for small object detection tasks in autonomous driving systems.
Future work will explore three main directions to enhance the model’s capabilities in autonomous driving scenarios:
1. Adaptive network structure design: We will develop dynamic architectures that can adjust model complexity based on real-time scenario analysis. This involves implementing scene-aware gating mechanisms that automatically select appropriate network configurations—using lighter branches for simple highway scenes while activating deeper feature extraction paths for complex urban intersections. This adaptive approach could reduce the computational overhead by 30–50% in simple scenarios while maintaining full capacity for challenging conditions.
2. Environmental robustness enhancement: We will design specialized modules targeting extreme conditions, including (i) weather-adaptive attention mechanisms that dynamically recalibrate features based on detected weather patterns (fog, rain, snow), (ii) illumination-invariant feature extraction using learnable histogram equalization for robust day/night performance, and (iii) context-aware processing that adjusts detection parameters based on environmental factors.
3. Multi-modal fusion and temporal modeling: Integration of temporal information through recurrent connections will enable the tracking of object motion patterns, improving small object detection through temporal consistency. Additionally, fusion with LiDAR or radar data will provide complementary depth information, which is particularly beneficial for distant small object detection.
These advancements will be validated on diverse datasets including BDD100K, Waymo Open Dataset, and the extreme weather subsets of nuScenes, ensuring robust performance across varied autonomous driving conditions.

Author Contributions

Conceptualization, Y.Y. and S.Y.; methodology, Y.Y. and S.Y.; software, Y.Y. and Q.C.; validation, Y.Y. and Q.C.; investigation, Y.Y.; data curation, Y.Y. and Q.C.; writing—original draft preparation, Y.Y.; writing—review and editing, Y.Y. and S.Y.; visualization, Y.Y. and Q.C.; supervision, S.Y.; funding acquisition, S.Y. and Q.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Graduate Education Innovation Fund of Wuhan Institute of Technology (Grant No. CX2024572).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the author, Y.Y., upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Lake Tahoe, NV, USA, 3–6 December 2012. [Google Scholar]
  2. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 91–99. [Google Scholar]
  3. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar]
  4. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar] [CrossRef]
  5. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. arXiv 2020, arXiv:2005.12872. [Google Scholar]
  6. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. arXiv 2017, arXiv:1706.03762. [Google Scholar]
  7. Wang, Z.; Feng, L.; Zhang, S. Vehicle type recognition based on improved YOLOv5 and video images. Sci. Technol. Eng. 2022, 22, 10295–10300. [Google Scholar]
  8. Li, H.; Zhang, Y.; Wang, X. Dilated convolution based YOLOv5 for long-range small object detection in autonomous driving. IEEE Trans. Intell. Transp. Syst. 2023. [Google Scholar]
  9. Luo, Y.; Ci, Y.; Jiang, S. A novel lightweight real-time traffic sign detection method based on an embedded device and YOLOv8. J. Real-Time Image Process. 2024, 21, 24. [Google Scholar] [CrossRef]
  10. Zhang, L.; Chen, W.; Liu, J. Knowledge distillation for lightweight YOLO models in autonomous driving scenarios. Pattern Recognit. Lett. 2023. [Google Scholar]
  11. Chen, S.; Wu, D.; Zhou, H. DCNv3-Lite: Efficient deformable convolution for real-time object detection. Comput. Vis. Image Underst. 2024. [Google Scholar]
  12. Yuan, T.; Lai, H.; Tang, J. LMFI-YOLO: Lightweight pedestrian detection algorithm in complex scenarios. Comput. Eng. Appl. 2025, 1–15. [Google Scholar] [CrossRef]
  13. Liu, M.; Zhao, Q.; Sun, F. Adaptive feature pyramid networks for small object detection in autonomous vehicles. IEEE Trans. Veh. Technol. 2023. [Google Scholar]
  14. Zhao, T.; Wang, Y.; Li, K. Lightweight bidirectional feature pyramid network for efficient multi-scale object detection. Neural Netw. 2023, 32, 5664–5677. [Google Scholar]
  15. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv4: Optimal speed and accuracy of object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6597–6606. [Google Scholar]
  16. Wang, C.Y.; Liu, J.H.; Yang, Y. YOLOv6: A single-stage object detector tailored for real-time industrial applications. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 842–851. [Google Scholar]
  17. Li, J.; Huang, X.; Wang, L. PSA-Net: Pyramid spatial attention network for object detection in complex scenes. IEEE Trans. Image Process. 2023, 32, 1245–1257. [Google Scholar]
  18. Zhang, H.; Dana, K.; Shi, J. PSANet: Point-wise spatial attention network for scene parsing. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 267–283. [Google Scholar]
  19. Han, K.; Wang, Y.; Tian, Q. GhostNet: More features from cheap operations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 1580–1589. [Google Scholar]
  20. Dai, S. TransNeXt: Robust foveal visual perception for vision transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 17–21 June 2024; pp. 17773–17783. [Google Scholar]
  21. Zheng, S.; Lin, Y.; Zhang, L.; Zhao, Y. HCF-Net: Hierarchical context fusion network for infrared small object detection. Inf. Fusion 2024, 102, 102086. [Google Scholar]
  22. Ding, X.; Zhang, X.; Ma, N.; Han, J. RepVGG: Making VGG-style ConvNets great again. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13733–13742. [Google Scholar]
  23. Chen, Z.; He, Z.; Lu, Z.M. DEA-Net: Single image dehazing based on detail-enhanced convolution and content-guided attention. IEEE Trans. Image Process. 2024, 33, 1002–1015. [Google Scholar] [CrossRef] [PubMed]
  24. Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11621–11631. [Google Scholar]
  25. Du, D.; Zhu, P.; Wen, L.; Bian, X.; Lin, H.; Hu, Q.; Peng, T.; Zheng, J.; Wang, X.; Zhang, Y. VisDrone-DET2019: The vision meets drone object detection in image challenge results. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, Seoul, Republic of Korea, 27–28 October 2019. [Google Scholar]
  26. Zhang, K. FasterNet: Lightweight and efficient neural network for real-time object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023. [Google Scholar]
  27. Zhang, K.; Li, Y.; Deng, C.; Ma, J.; Xu, C. Feature transfer in context: Context-aware feature disentanglement and transfer for object detection. In Proceedings of the International Conference on Learning Representations (ICLR 2024), Vienna, Austria, 7–11 May 2024. [Google Scholar]
  28. Sun, Y.; Xu, C.; Yang, J.; Xuan, H.; Luo, L. Frequency-spatial entanglement learning for camouflaged object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Milan, Italy, 29 September–4 October 2024. [Google Scholar]
  29. Yang, W.; Wang, W.; Liu, X.; Xie, E.; Li, Q.; Ding, X.; Luo, P. DCMPNet: Densely coupled modulation pyramid network for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 17–21 June 2024. [Google Scholar]
  30. Kang, M.; Ting, C.M.; Ting, F.F.; Phan, R.C.W. ASF-YOLO: A novel YOLO model with attentional scale sequence fusion for cell instance segmentation. Image Vis. Comput. 2024, 147, 105057. [Google Scholar] [CrossRef]
  31. Feng, Y.; Huang, J.; Du, S. Hyper-YOLO: When visual object detection meets hypergraph computation. IEEE Trans. Comput. 2024, 73, 1234–1246. [Google Scholar] [CrossRef] [PubMed]
  32. Zhang, X. DyHead: Dynamic head for object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022. [Google Scholar]
  33. Wang, C.Y.; Yeh, I.H.; Liao, H.Y.M. YOLOv9: Learning what you want to learn using programmable gradient information. arXiv 2024, arXiv:2402.13616. [Google Scholar] [CrossRef]
  34. Khan, M.A.; Rehman, A.; Saba, T. YOLOv5: A deep learning model for object detection. Int. J. Imaging Syst. Technol. 2021, 31, 1201–1212. [Google Scholar]
  35. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023. [Google Scholar]
  36. Nie, H.; Pang, H.; Ma, M. A lightweight remote sensing small target image detection algorithm based on improved YOLOv8. Sensors 2024, 24, 2952. [Google Scholar] [CrossRef] [PubMed]
  37. Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  38. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
Figure 1. YOLOv11 network structure; the operations represented by the different colored arrows are marked.
Figure 2. LEAD-YOLO network structure; the red dashed lines indicate the improved parts.
Figure 3. CGF block structure; the CGF module is used to optimize the Bottleneck modules of C3k2 and C3k.
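As a concrete illustration of the kind of block sketched in Figure 3, the following minimal PyTorch snippet shows a gated convolutional bottleneck in which a convolutional branch is modulated by a normalized gate and merged through a residual connection. All layer choices (kernel sizes, normalization, the sigmoid gate) are illustrative assumptions and do not reproduce the exact CGF design.

```python
import torch
import torch.nn as nn

class GatedConvBottleneck(nn.Module):
    """Illustrative gated bottleneck sketch (not the exact CGF module)."""

    def __init__(self, channels: int):
        super().__init__()
        # Main convolutional branch.
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.SiLU(inplace=True),
        )
        # Gate branch: 1x1 conv + normalization, squashed into [0, 1].
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection plus gated convolutional features.
        return x + self.conv(x) * self.gate(x)

if __name__ == "__main__":
    y = GatedConvBottleneck(64)(torch.randn(1, 64, 80, 80))
    print(y.shape)  # torch.Size([1, 64, 80, 80])
```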
Figure 4. DFF block; d = 1, 3, and 5 denote the dilation rates of the dilated convolutions.
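For orientation, the sketch below shows one common way to combine 3 × 3 convolutions with dilation rates 1, 3, and 5 as labeled in Figure 4: three parallel branches whose outputs are concatenated and fused by a 1 × 1 convolution. Whether the DFF branches are arranged in parallel or stacked sequentially, and how channels are split, are assumptions made here purely for illustration.

```python
import torch
import torch.nn as nn

class DilatedFusionSketch(nn.Module):
    """Illustrative multi-rate dilated convolution block (not the exact DFF)."""

    def __init__(self, channels: int, rates=(1, 3, 5)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding = rate keeps the spatial size.
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=r, dilation=r, bias=False)
             for r in rates]
        )
        # 1x1 convolution fuses the concatenated branches back to the input width.
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))

if __name__ == "__main__":
    print(DilatedFusionSketch(128)(torch.randn(1, 128, 40, 40)).shape)
```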
Figure 5. HFFM block: the patch-aware convolution parameter p takes the values 2 and 4, which correspond to the local and global branches, respectively.
Figure 6. SFDH block; blocks of the same color represent shared convolutions.
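The weight-sharing idea indicated by the same-colored blocks in Figure 6 can be sketched as follows: one convolutional stem is reused across all pyramid levels so its parameters are counted only once, with a learnable per-level scale to rebalance each output. The GroupNorm layer, the per-level scalar, and the output layout below are illustrative assumptions rather than the exact SFDH structure.

```python
import torch
import torch.nn as nn

class SharedScaleHeadSketch(nn.Module):
    """Illustrative shared-convolution detection head (not the exact SFDH)."""

    def __init__(self, channels: int, num_outputs: int, num_levels: int = 3):
        super().__init__()
        # Shared stem: the same weights process every pyramid level (e.g., P3/P4/P5).
        self.shared = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.GroupNorm(16, channels),
            nn.SiLU(inplace=True),
        )
        self.pred = nn.Conv2d(channels, num_outputs, kernel_size=1)
        # One learnable scalar per level to rebalance the shared output.
        self.scales = nn.Parameter(torch.ones(num_levels))

    def forward(self, feats):
        # feats: list of feature maps, one per pyramid level.
        return [self.pred(self.shared(f)) * self.scales[i] for i, f in enumerate(feats)]

if __name__ == "__main__":
    feats = [torch.randn(1, 128, s, s) for s in (80, 40, 20)]
    outs = SharedScaleHeadSketch(128, num_outputs=4 + 5)(feats)  # 4 box terms + 5 classes (example)
    print([o.shape for o in outs])
```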
Figure 7. Sample distribution and proportion after mapping.
Figure 8. Comparative experimental performance analysis.
Figure 9. Receptive fields at different dilation rates.
Figure 10. Analysis of ablation experiments.
Figure 11. Detection comparison in various scenarios: the red dotted boxes mark the regions where the improved algorithm achieves better detection results than the baseline algorithm.
Figure 12. Detection comparison in various scenarios: the red dotted boxes mark the regions where the improved algorithm achieves better detection results than the baseline algorithm.
Table 1. Dataset sample class mapping relationship.

Original Categories                                                                         Mapped Category
Adult, child, pedestrian                                                                    Pedestrian
Bicycle rack, construction cone, debris, pushable pullable object, traffic cone, barrier    Obstacle
Bicycle, motorcycle                                                                         Bike
Personal mobility, car, emergency vehicle, police car                                       Car
Bus, construction vehicle, trailer, truck, ambulance, bendy bus, rigid bus                  Vehicles
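The remapping in Table 1 amounts to a simple lookup table; a minimal Python sketch is given below. The category spellings follow Table 1 (lower-cased here for matching); the helper name `remap` is our own.

```python
# Category remapping from Table 1 (original nuImages labels -> 5 merged classes).
CLASS_MAP = {
    "adult": "Pedestrian", "child": "Pedestrian", "pedestrian": "Pedestrian",
    "bicycle rack": "Obstacle", "construction cone": "Obstacle", "debris": "Obstacle",
    "pushable pullable object": "Obstacle", "traffic cone": "Obstacle", "barrier": "Obstacle",
    "bicycle": "Bike", "motorcycle": "Bike",
    "personal mobility": "Car", "car": "Car", "emergency vehicle": "Car", "police car": "Car",
    "bus": "Vehicles", "construction vehicle": "Vehicles", "trailer": "Vehicles",
    "truck": "Vehicles", "ambulance": "Vehicles", "bendy bus": "Vehicles", "rigid bus": "Vehicles",
}

def remap(label: str) -> str:
    """Return the merged category for an original label (case-insensitive)."""
    return CLASS_MAP[label.strip().lower()]

print(remap("Traffic cone"))  # Obstacle
```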
Table 2. Experimental training parameters.

Parameter                      Value
Image size (imgs)              640 × 640
Initial learning rate (lr0)    0.01
Optimizer                      SGD
Batch size                     16
Epochs                         300
Momentum                       0.937
Weight decay                   0.0005
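Assuming an Ultralytics-style training pipeline (the framework commonly used with YOLOv11), the settings in Table 2 map onto the standard training arguments as sketched below; the model configuration file and dataset YAML path are placeholders, not files provided by the paper.

```python
# Hypothetical training call mirroring Table 2 with the Ultralytics API.
# "lead_yolo.yaml" and "nuimages.yaml" are placeholder file names.
from ultralytics import YOLO

model = YOLO("lead_yolo.yaml")          # custom model definition (placeholder)
model.train(
    data="nuimages.yaml",               # dataset description file (placeholder)
    imgsz=640,                          # image size 640 x 640
    epochs=300,
    batch=16,
    optimizer="SGD",
    lr0=0.01,                           # initial learning rate
    momentum=0.937,
    weight_decay=0.0005,
)
```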
Table 3. C3k2 improvement comparison experiment.

Model               P (%)   R (%)   mAP@0.5 (%)   mAP@[0.5:0.95] (%)   GFLOPs   Params
Base                69.0    55.0    60.4          32.8                 6.3      2,616,248
C3k2-Faster [26]    68.0    50.8    57.5          30.2                 5.9      2,322,096
C3k2-FAT [27]       66.2    52.0    58.2          31.0                 6.7      2,654,188
C3k2-JDPM [28]      67.5    53.2    59.6          31.6                 7.9      2,954,477
CGF-C3k2            68.6    55.7    60.8          33.2                 5.7      2,231,836
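For reference, the metrics reported in Tables 3–11 follow the standard COCO-style definitions (this is our summary of the conventional formulas, not notation introduced by the paper):

```latex
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
AP_c = \int_{0}^{1} P_c(R)\, \mathrm{d}R
```

```latex
\mathrm{mAP@0.5} = \frac{1}{N_{\mathrm{cls}}} \sum_{c=1}^{N_{\mathrm{cls}}} AP_c \Big|_{IoU = 0.5},
\qquad
\mathrm{mAP@}[0.5{:}0.95] = \frac{1}{10} \sum_{t \in \{0.50,\, 0.55,\, \ldots,\, 0.95\}} \mathrm{mAP@}t
```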
Table 4. Neck improvement comparison experiment.

Model        P (%)   R (%)   mAP@0.5 (%)   mAP@[0.5:0.95] (%)   GFLOPs   Parameters
Base         69.0    55.0    60.4          32.8                 6.3      2,616,248
MFM [29]     68.4    54.3    59.5          31.3                 6.9      2,633,464
ASF [30]     71.2    55.0    61.8          33.6                 6.7      2,163,976
Hyper [31]   70.5    54.8    61.3          34.2                 7.7      3,064,888
HFFM         72.0    55.3    62.7          35.3                 7.2      2,707,128
Table 5. Detection head improvement comparison experiment.

Model         P (%)   R (%)   mAP@0.5 (%)   mAP@[0.5:0.95] (%)   GFLOPs   Parameters
Base          69.0    55.0    60.4          32.8                 6.3      2,616,248
Dyhead [32]   68.4    54.2    59.8          31.4                 7.6      3,133,108
PGI [33]      71.3    54.9    61.7          33.5                 8.8      3,604,864
SFDH          71.6    54.8    61.9          34.6                 6.1      2,265,435
Table 6. Performance comparison of the five categories between YOLOv11n (A) and LEAD-YOLO (B).

Categories   P (%)           R (%)           mAP@0.5 (%)     mAP@[0.5:0.95] (%)
             A      B        A      B        A      B        A      B
Pedestrian   69.2   74.8     47.6   47.3     53.4   57.1     24.7   28.5
Obstacle     69.9   77.4     60.6   59.6     63.9   66.6     31.9   37.3
Bike         59.3   69.0     48.6   47.6     50.4   53.8     25.7   29.7
Car          75.9   79.2     67.9   71.4     75.3   78.9     46.9   53.5
Vehicles     70.1   72.5     50.3   57.2     58.8   64.6     34.9   42.2
All          69.0   74.6     55.0   56.6     60.4   64.2     32.8   38.2
Table 7. Comparison with one-stage detectors.

Model             P (%)   R (%)   mAP@0.5 (%)   mAP@[0.5:0.95] (%)   GFLOPs   Parameters
SSD               47.0    35.3    40.6          19.5                 31.4     25,082,528
YOLOv5s           64.0    47.5    54.6          28.7                 15.9     7,039,792
YOLOv7-tiny       60.4    44.7    52.1          24.6                 9.5      6,034,656
YOLOv8n           66.2    48.9    58.6          30.1                 8.1      3,426,452
YOLOv11n          69.0    55.2    60.4          32.8                 6.3      2,616,248
EfficientDet-D0   68.3    50.7    58.9          30.5                 13.0     3,915,671
RetinaNet         70.5    52.3    61.2          32.1                 115.8    34,052,412
Ours              74.6    56.6    64.2          35.2                 6.1      1,927,696
Table 8. Comparison with two-stage detectors and Transformer-based DETR variants.

Model           P (%)   R (%)   mAP@0.5 (%)   mAP@[0.5:0.95] (%)   GFLOPs   Parameters
Faster R-CNN    78.2    60.1    68.7          37.3                 206.2    41,393,461
DETR-R18        72.8    56.2    63.8          34.4                 48.2     26,158,465
Def-DETR-R50    73.9    56.1    64.0          34.7                 52.7     45,546,846
Ours            74.6    56.6    64.2          35.2                 6.1      1,927,696
Table 9. Comparative experiments on different datasets.

Model        P (%)   R (%)   mAP@0.5 (%)   mAP@[0.5:0.95] (%)
YOLOv5s      42.7    32.8    31.2          18.3
YOLOv8n      37.3    29.4    26.2          15.6
YOLOv11n     36.5    28.6    25.8          14.2
RTDETR-R18   40.1    30.6    28.6          17.3
Ours         45.2    34.7    33.7          20.6
Table 10. Module effect under different dilation coefficients.

Model      mAP@0.5 (%)   mAP@[0.5:0.95] (%)
Base       60.4          32.8
1, 2, 3    59.3          32.3
1, 2, 6    60.8          33.1
1, 3, 5    61.2          33.6
1, 3, 9    60.6          32.9
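As a plausibility check on Table 10 (and Figure 9), if the three 3 × 3 dilated convolutions are assumed to be stacked sequentially with stride 1 (an assumption about the layout, not a statement of the exact implementation), their combined receptive field is

```latex
% Receptive field of sequentially stacked k x k dilated convolutions, stride 1:
RF = 1 + \sum_{i} (k_i - 1)\, d_i = 1 + 2 \sum_{i} d_i \quad (k_i = 3)
% Applied to the rate sets in Table 10:
% (1,2,3) \to 13, \qquad (1,2,6) \to 19, \qquad (1,3,5) \to 19, \qquad (1,3,9) \to 27
```

Under this reading, (1, 2, 6) and (1, 3, 5) cover the same nominal receptive field, and the advantage of (1, 3, 5) in Table 10 is consistent with hybrid-dilation design guidelines that favor rate combinations avoiding gridding artifacts; this interpretation is ours and is not claimed by the original tables.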
Table 11. Ablation experiment.

Model         CGF   DFF   HFFM   SFDH   P (%)   R (%)   mAP@0.5 (%)   mAP@[0.5:0.95] (%)   GFLOPs   Parameters
Base                                    69.0    55.0    60.4          32.8                 6.3      2,616,248
A             ✓                         68.6    55.7    60.8          33.2                 5.7      2,231,836
B                   ✓                   69.2    56.3    61.2          33.6                 6.5      2,763,704
C                         ✓             72.0    55.3    62.7          35.3                 7.2      2,707,128
D                                ✓      71.6    54.8    61.9          34.6                 6.1      2,265,435
ABC           ✓     ✓     ✓             71.8    58.2    63.5          36.1                 6.3      2,129,272
ABD           ✓     ✓            ✓      70.3    57.2    62.5          35.8                 6.1      2,412,891
CD                        ✓      ✓      72.3    56.5    63.2          36.3                 6.5      1,967,275
ABCD (Ours)   ✓     ✓     ✓      ✓      74.6    56.6    64.2          38.2                 6.1      1,927,696
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
