Article

A Lightweight Traffic Sign Detection Model Based on Improved YOLOv8s for Edge Deployment in Autonomous Driving Systems Under Complex Environments

College of Mechanical and Electronic Engineering, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
World Electr. Veh. J. 2025, 16(8), 478; https://doi.org/10.3390/wevj16080478
Submission received: 15 July 2025 / Revised: 14 August 2025 / Accepted: 19 August 2025 / Published: 21 August 2025

Abstract

Traffic sign detection is a core function of autonomous driving systems, requiring real-time and accurate target recognition in complex road environments. Existing lightweight detection models struggle to balance accuracy, efficiency, and robustness under computational constraints of vehicle-mounted edge devices. To address this, we propose a lightweight model integrating FasterNet, Efficient Multi-scale Attention (EMA), Bidirectional Feature Pyramid Network (BiFPN), and Group Separable Convolution (GSConv) based on YOLOv8s (FEBG-YOLOv8s). Key innovations include reconstructing the Cross Stage Partial Network 2 with Focus (C2f) module using FasterNet blocks to minimize redundant computation; integrating an EMA mechanism to enhance robustness against small and occluded targets; refining the neck network based on BiFPN via channel compression, downsampling layers, and skip connections to optimize shallow–deep semantic fusion; and designing a GSConv-based hybrid serial–parallel detection head (GSP-Detect) to preserve cross-channel information while reducing computational load. Experiments on Tsinghua–Tencent 100K (TT100K) show FEBG-YOLOv8s improves mean Average Precision at Intersection over Union 0.5 (mAP50) by 3.1% compared to YOLOv8s, with 4 million fewer parameters and 22.5% lower Giga Floating-Point Operations (GFLOPs). Generalizability experiments on the CSUST Chinese Traffic Sign Detection Benchmark (CCTSDB) validate robustness, with 3.3% higher mAP50, demonstrating its potential for real-time traffic sign detection on edge platforms.

1. Introduction

Traffic sign detection is a critical technology for autonomous driving and driver assistance systems [1], providing real-time road condition information to support timely decision-making and enhance traffic safety [2]. However, practical deployment faces significant challenges, including large scale variations [3], susceptibility to environmental interference (e.g., adverse weather, low illumination, or occlusion) [4], and the computational constraints of vehicle-mounted edge devices [5]. These challenges impose dual requirements on the detection model. It must satisfy the efficiency demands of lightweight [6] deployment on edge devices, balancing inference speed and accuracy under limited computational power, memory, and real-time operation, while simultaneously maintaining high robustness [7] in complex scenarios with large scale variations and environmentally induced occlusion of target objects. Deep learning offers substantial advantages in object detection, significantly outperforming traditional machine learning algorithms in recognition accuracy [8]. Two-stage detectors such as Faster R-CNN [9] and Mask R-CNN [10] achieve high precision at the cost of slow inference, rendering them less suitable for real-time applications. Single-stage detectors, such as SSD [11] and the YOLO series [12], offer faster processing; however, SSD's inherent multi-level computational redundancy has led to its gradual replacement in real-time scenarios.
To meet these dual requirements, developing highly accurate, real-time, and robust traffic sign recognition systems is crucial for addressing challenges posed by distant, small, or feature-ambiguous signs in complex driving environments [13]. Shen et al. [14] proposed CSW-YOLO, which reconstructs the C2f module by integrating FasterNet blocks and combines it with the Convolutional GLU (CGLU) channel mixer from TransNeXt, significantly reducing model parameters and computational load. Cai et al. [15] proposed FLB-YOLOv8, a lightweight traffic sign detection model that enhances the feature extraction capability of the FasterNet block-modified C2f module by introducing the Large Selective Kernel (LSK) Network while directly adopting BiFPN for multi-scale feature fusion. Chen et al. [16] proposed YOLO-TS, a traffic sign detection network for advanced driver assistance systems (ADASs), which optimizes the receptive fields of multi-scale feature maps to align with traffic sign size distributions across datasets and leverages an anchor-free strategy for high-resolution multi-scale detection. Huang et al. [17] proposed RePCMA-YOLOv8n, a lightweight traffic sign detection model that integrates partial convolution (PConv), re-parameterization strategies, and EMA to reduce feature extraction redundancy and minimize parameters; by adopting the Normalized Wasserstein Distance (NWD) as the optimization loss function, the model significantly improves detection accuracy. Deng et al. [18] proposed HA-YOLO, a novel YOLOv8-based framework that integrates HSFPN for enhanced multi-scale feature fusion and embeds SCSA and Triplet Attention within the C2f module to improve focus on salient features; evaluations on datasets demonstrate superior robustness. Khalili et al. [19] proposed SOD-YOLOv8, which integrates a Generalized Feature Pyramid Network (GFPN) to enhance multi-path feature fusion. By introducing an additional detection layer, the model effectively leverages high-resolution spatial information while improving feature extraction capabilities through FasterNet block-modified C2f modules.
While existing methods demonstrate improved detection performance, they still exhibit missed detections and false positives in complex scenarios [20], and the trade-off between model complexity [21] and computational efficiency requires further optimization [22]. Current models show potential for further optimization regarding parameter size and computational efficiency on edge device deployment. To address persistent issues in traffic sign detection, including computational redundancy, small-target detection limitations, and elevated false-positive/missed-detection rates, this paper improves YOLOv8s and designs a lightweight and high-precision traffic sign detection model named FEBG-YOLOv8s.
The main contributions of this paper are as follows.
  • The bottleneck block in the C2f module is replaced with the FasterNet block [23], leveraging its PConv and pointwise convolution (PWConv) co-design to reduce computational redundancy. An EMA mechanism is integrated to enhance multi-scale feature modeling via its parallel multi-branch structure and cross-space interaction. This preserves feature extraction capability during lightweighting while improving robustness for small targets and in occluded scenes.
  • The neck network is enhanced using a BiFPN structure. A Conv module is incorporated to compress channel dimensions and enhance nonlinear representation. Additionally, a P4 layer downsampling module facilitates cross-layer connections, improving interaction efficiency between shallow detail and deep semantic features, thus enabling the refined model to adapt to the scale variation of traffic signs near and far.
  • A GSConv module is employed to construct a hybrid serial–parallel detection head. Its Channel Shuffle operation enhances cross-channel information exchange, optimizing computational efficiency.

2. Methodology

2.1. The YOLOv8 Model

YOLOv8 achieves an optimal balance between accuracy and processing speed, making it well suited for applications requiring fast detection while maintaining high precision. As shown in Figure 1, the YOLOv8 architecture comprises three key parts: the backbone, neck, and head.
The backbone primarily employs the Darknet-53 framework for hierarchical feature extraction. It integrates the C2f module for residual learning, which captures additional gradient information to enhance feature representation capabilities. The Spatial Pyramid Pooling Fast (SPPF) module merges local and global features to expand the receptive field. The neck receives multi-scale features from the backbone for cross-level fusion. Utilizing the C2f module and implementing the Path Aggregation Network Feature Pyramid Network (PAN+FPN) methodology, it constructs bidirectional feature pyramids to aggregate both shallow spatial details and deep semantic information. The head adopts an anchor-free decoupled architecture, separating object detection and classification tasks. The classification branch predicts category probabilities at each spatial position, while the regression branch optimizes bounding box coordinates without predefined anchor boxes. Both tasks undergo weighted optimization during prediction. To suit different application demands, YOLOv8 offers models of increasing size and complexity: YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x.

2.2. FEBG-YOLOv8s Model

To address prevalent challenges in traffic sign detection, including computational redundancy, small-target detection difficulty, and high false-positive/missed-detection rates, this study proposes FEBG-YOLOv8s, an improved model based on YOLOv8s (Figure 2). The model integrates three key innovations: Backbone Network Design, BiFPN-Enhanced Multi-Scale Fusion, and Detection Head Enhancement.

2.3. Backbone Network Design

2.3.1. C2f Lightweight Architecture

Traffic sign detection requires low-latency inference on vehicle-mounted edge devices. The bottleneck blocks in YOLOv8's original C2f are built by stacking 1 × 1 and 3 × 3 convolutions, whose dense cross-channel operations lead to high floating-point operation counts (FLOPs) and memory access counts (MACs). Traditional lightweight methods such as MobileNetV3 [24], built on depthwise separable convolution, and GhostNet [25], which prunes feature redundancy, reduce computation but tend to sacrifice cross-channel correlation or mistakenly discard key features. To address these limitations, we introduce the FasterNet block to reconstruct the C2f module, proposing the Faster-C2f module (Figure 3).
The FLOPs of PConv are given by

FLOPs = h · w · k² · c_p²

where the channel participation ratio is defined as

r = c_p / c

The memory access cost (MAC) of PConv is approximated by

MAC = 2 · h · w · c_p + k² · c_p² ≈ 2 · h · w · c_p
Here, h and w denote the height and width of the input feature map, c the input channel count, c_p the number of channels involved in convolution, k the kernel size, and r the channel participation ratio. The core innovation involves synergistic optimization between PConv and PWConv. PConv applies standard convolution to only 1/4 of the input channels while preserving the remainder, reducing FLOPs and MAC to 1/16 and 1/4 of standard convolution, respectively. PWConv compensates for potential feature degradation through cross-channel fusion, addressing the representational limitations inherent in depthwise separable convolutions. Therefore, we reconstruct the C2f module by replacing its bottleneck blocks with FasterNet blocks, forming the Faster-C2f module. Within the backbone network, Faster-C2f replaces the C2f modules at layers 6 and 8. By introducing FasterNet blocks, it focuses on local features of small targets rather than the bottleneck's full-channel computations. This reduces computational load while preventing degradation of feature extraction capability.
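To make the PConv and PWConv co-design concrete, the following PyTorch sketch shows a minimal FasterNet-style block with a channel participation ratio of r = 1/4. The class names, expansion ratio, and layer settings are illustrative assumptions for this paper's description, not the authors' released code.

```python
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """PConv: apply a 3x3 convolution to only the first c_p = C // 4 channels;
    the remaining channels are passed through untouched."""
    def __init__(self, channels: int, ratio: float = 0.25, kernel_size: int = 3):
        super().__init__()
        self.c_p = max(1, int(channels * ratio))       # channels that are convolved
        self.c_pass = channels - self.c_p              # channels copied as-is
        self.conv = nn.Conv2d(self.c_p, self.c_p, kernel_size,
                              padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_conv, x_pass = torch.split(x, [self.c_p, self.c_pass], dim=1)
        return torch.cat([self.conv(x_conv), x_pass], dim=1)

class FasterNetBlock(nn.Module):
    """PConv followed by two pointwise (1x1) convolutions and a residual add."""
    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.pconv = PartialConv(channels)
        self.pwconv = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),   # PWConv: cross-channel fusion
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.pwconv(self.pconv(x))             # residual connection

if __name__ == "__main__":
    feat = torch.randn(1, 128, 40, 40)                    # B x C x H x W feature map
    print(FasterNetBlock(128)(feat).shape)                # torch.Size([1, 128, 40, 40])
```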

2.3.2. Integration with EMA Mechanisms

In traffic scenarios, small or occluded targets inherently possess weak features. PConv fails to adequately capture their global contextual information, leading to detection failures. This necessitates the integration of attention mechanisms to enhance the feature responses for these targets. The conventional SE [26] module employs a serial channel compression design, where spatial information is compressed via Global Average Pooling before channel-wise recalibration through fully connected layers. This unidirectional information flow leads to spatial context loss, while dimensionality reduction may induce feature degradation. In challenging fog or night conditions, Global Average Pooling’s global homogenization blends noise with weak target features, causing channel weight misallocation. Although CBAM [27] enhances feature representation through a sequential combination of channel and spatial attention, its spatial attention relies on convolutional kernels with limited receptive fields. When processing distant traffic signs with dimensions smaller than the convolution kernel, spatial weights become ineffective. Furthermore, channel attention errors propagate through the sequential structure, triggering attention shifts toward occlusions in complex backgrounds. To address these issues, this paper integrates the EMA mechanism [28] within the FasterNet block and proposes the Faster-EMA block (Figure 4).
The input feature map x ∈ ℝ^(c×h×w) is processed by the PConv and PWConv of the FasterNet block and then fed into the EMA module in parallel through group convolution. EMA performs spatial pooling along the X/Y directions to separate noise from valid features while preserving multi-scale details. The global average pooling is formulated as

E_c = (1 / (h × w)) Σ_{i=1}^{h} Σ_{j=1}^{w} X_c(i, j)

Here, E_c ∈ ℝ^(c/g) represents the pooled feature vector, and X_c(i, j) denotes the feature value of the c-th channel at pixel (i, j).
Then, EMA dynamically generates channel attention weights and strengthens the responses of key regions. The cross-space learning module constructs spatial dependencies using Softmax and fuses global contextual information via matrix multiplication to compensate for the limitations of local feature extraction. The Softmax normalization is given by

a_c = exp(E_c) / Σ_{k=1}^{c/g} exp(E_k)

Here, a_c denotes the normalized attention weight of the c-th group, and E_k denotes the globally pooled feature value of the k-th channel. Finally, the enhanced features are fused with the original inputs through a residual connection to avoid the feature degradation caused by sparse convolution.
PConv reduces computation through local convolution, while EMA compensates for feature loss through group processing, forming a synergistic optimization of lightweighting and feature enhancement. EMA is highly compatible with the sparse activation characteristics of the FasterNet block. EMA’s parallel multi-branch architecture overcomes SE’s channel information loss and CBAM’s serial error accumulation, maintaining local details while enhancing noise robustness. The improved Faster-EMA C2f module (Figure 5) strengthens the multi-scale modeling capability on the basis of lightweighting, which significantly improves the robustness of small-target detection in complex scenes.
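The following PyTorch sketch illustrates, in simplified form, the EMA operations described above: grouped processing, directional (X/Y) average pooling, Softmax-normalized global context, and cross-spatial aggregation via matrix multiplication. It is a minimal illustration under assumed shapes and group counts, not a reproduction of the official EMA implementation [28].

```python
import torch
import torch.nn as nn

class EMASketch(nn.Module):
    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        assert channels % groups == 0
        self.g = groups
        c = channels // groups
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool along width
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool along height
        self.conv1x1 = nn.Conv2d(c, c, 1)
        self.conv3x3 = nn.Conv2d(c, c, 3, padding=1)
        self.gn = nn.GroupNorm(c, c)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, ch, h, w = x.shape
        c = ch // self.g
        xg = x.reshape(b * self.g, c, h, w)             # grouped processing
        # 1x1 branch: X/Y directional pooling, shared 1x1 conv, per-direction gates
        ph = self.pool_h(xg)                                  # (B*g, c, H, 1)
        pw = self.pool_w(xg).permute(0, 1, 3, 2)              # (B*g, c, W, 1)
        hw = self.conv1x1(torch.cat([ph, pw], dim=2))         # (B*g, c, H+W, 1)
        gate_h, gate_w = torch.split(hw, [h, w], dim=2)
        x1 = self.gn(xg * gate_h.sigmoid() * gate_w.permute(0, 1, 3, 2).sigmoid())
        # 3x3 branch preserves local multi-scale detail
        x2 = self.conv3x3(xg)
        # cross-spatial learning: Softmax of pooled context weights the other branch
        a1 = self.softmax(x1.mean(dim=(2, 3)).unsqueeze(1))   # (B*g, 1, c)
        a2 = self.softmax(x2.mean(dim=(2, 3)).unsqueeze(1))   # (B*g, 1, c)
        y1 = torch.bmm(a1, x2.reshape(b * self.g, c, h * w))  # (B*g, 1, H*W)
        y2 = torch.bmm(a2, x1.reshape(b * self.g, c, h * w))
        weights = (y1 + y2).reshape(b * self.g, 1, h, w).sigmoid()
        return (xg * weights).reshape(b, ch, h, w)

if __name__ == "__main__":
    print(EMASketch(64)(torch.randn(1, 64, 40, 40)).shape)    # torch.Size([1, 64, 40, 40])
```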

2.4. BiFPN-Enhanced Multi-Scale Fusion

The features extracted from the backbone network need to be fused at multiple scales through the neck network in order to adapt to the scale variation of traffic signs near and far. YOLOv8s utilizes an FPN+PAN neck structure (Figure 6) with a unidirectional feature fusion pathway, limiting interaction between shallow spatial details and deep semantic representations. This progressively degrades resolution during successive downsampling, causing substantial information loss that disproportionately compromises small-object detection.
This study introduces an augmented BiFPN architecture tailored for traffic sign detection (Figure 7), implementing three targeted enhancements. First, 1×1 convolutional modules are integrated across P5–P7 layers to compress channel dimensionality while amplifying nonlinear representational capacity, thereby reducing computational redundancy without compromising critical semantic information. Second, a dedicated downsampling module is introduced at the P4 layer to expand receptive fields and integrate global contextual cues, mitigating feature degradation for small traffic signs against complex road backgrounds caused by insufficient spatial resolution. Third, skip connections are established between P5 and P6 layers to shorten interaction pathways between shallow spatial details and deep semantic features, significantly improving detection precision for multi-scale targets while alleviating feature loss in small objects.
The BiFPN architecture achieves iterative feature fusion through bidirectional cross-level connections, substantially enhancing the complementarity between low-level edge details and high-level semantic representations compared to FPN+PAN’s unidirectional aggregation. Strategically placed Conv modules reduce BiFPN’s parameter footprint while resolving the inter-layer information attenuation inherent in traditional FPN+PAN structures. This design enables efficient multi-scale feature integration under stringent lightweight constraints.
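As an illustration of the bidirectional weighted fusion that BiFPN performs at each node, the PyTorch sketch below fuses several same-resolution feature maps with learnable, fast-normalized weights. The channel counts, input names, and layer settings are assumptions for demonstration, not the exact configuration of the improved neck.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fuse N same-shaped feature maps with learnable non-negative weights,
    normalized as w_i / (sum_j w_j + eps), followed by a 3x3 conv."""
    def __init__(self, num_inputs: int, channels: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.SiLU(inplace=True),
        )

    def forward(self, feats):
        w = torch.relu(self.weights)                  # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)                  # fast normalized fusion
        fused = sum(wi * f for wi, f in zip(w, feats))
        return self.conv(fused)

if __name__ == "__main__":
    # Example: fusing an upsampled deeper map, a lateral map, and a skip connection
    p_up = torch.randn(1, 256, 40, 40)     # upsampled deeper level
    p_lat = torch.randn(1, 256, 40, 40)    # lateral (same-level) feature
    p_skip = torch.randn(1, 256, 40, 40)   # skip connection from the backbone
    node = WeightedFusion(num_inputs=3, channels=256)
    print(node([p_up, p_lat, p_skip]).shape)   # torch.Size([1, 256, 40, 40])
```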

2.5. Detection Head Enhancement

The computational load of the detection head critically impedes the model’s real-time inference capability. The decoupled head of YOLOv8s realizes the channel separation of classification and regression tasks through two-branch 3 × 3 convolution, but its channel dimension inflation leads to exponential growth of computational complexity with the feature map resolution, making it difficult to meet the real-time demand of vehicle-mounted edge devices. Conventional lightweight approaches like GhostNet generate redundant features through inexpensive operations but introduce noise artifacts that compromise small-target localization precision. For this reason, this paper proposes GSP-Detect (Figure 8).
GSP-Detect employs GSConv [29] modules to replace standard convolutions, implementing a refined processing pipeline. GSConv first compresses the input channels to half of the original count through a 1 × 1 Conv while completing cross-channel feature mapping, then enhances spatial detail modeling through a 5 × 5 DWConv; a subsequent channel-shuffle operation facilitates cross-channel information interaction and effectively alleviates the obstructed information flow between channels caused by depthwise convolution. GSConv and a standard 3 × 3 Conv are combined in series: the former performs lightweight feature preprocessing, while the latter performs high-resolution spatial detail extraction. The two-branch standard convolutions are retained to handle the classification and regression tasks separately, avoiding feature interference in multi-task learning through parameter isolation. This architecture achieves an optimal balance between computational efficiency and detection accuracy. By reducing computational overhead while preserving precision, the design significantly enhances suitability for real-time traffic sign detection on resource-constrained edge devices.
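A minimal sketch of the GSConv operation as described above, assuming a 1 × 1 compression to half the output channels, a 5 × 5 depthwise convolution, and a channel shuffle; it follows this textual description rather than the official GSConv release [29], and the class name is illustrative.

```python
import torch
import torch.nn as nn

def channel_shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    """Interleave channels from the two branches so information can mix."""
    b, c, h, w = x.shape
    return x.reshape(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class GSConvSketch(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        half = out_ch // 2
        self.pw = nn.Sequential(                       # 1x1 conv: compress to out_ch / 2
            nn.Conv2d(in_ch, half, 1, bias=False),
            nn.BatchNorm2d(half),
            nn.SiLU(inplace=True),
        )
        self.dw = nn.Sequential(                       # 5x5 depthwise conv on the compressed half
            nn.Conv2d(half, half, 5, padding=2, groups=half, bias=False),
            nn.BatchNorm2d(half),
            nn.SiLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        dense = self.pw(x)                             # cross-channel feature mapping
        sparse = self.dw(dense)                        # spatial detail modeling
        return channel_shuffle(torch.cat([dense, sparse], dim=1))

if __name__ == "__main__":
    print(GSConvSketch(256, 256)(torch.randn(1, 256, 20, 20)).shape)  # [1, 256, 20, 20]
```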

3. Datasets and Experimental Settings

3.1. Datasets

The TT100K [30] dataset, jointly developed by Tsinghua University and Tencent, comprises 100,000 road images captured under diverse weather and lighting conditions, with 30,000 traffic sign instances annotated using bounding boxes, pixel-level segmentation masks, and class labels. To mitigate inter-class imbalance in the original dataset, we implemented copy–paste augmentation, establishing a balanced dataset encompassing 45 traffic sign categories. The training and test sets were partitioned in an 8:2 ratio, comprising 7236 and 1934 images, respectively. Representative samples are illustrated in Figure 9.
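As an illustration of the copy–paste augmentation used to balance rare categories, the sketch below pastes a sign crop onto a training image and appends a normalized YOLO-format box. The label format, placement policy, and function name are assumptions for illustration, not the authors' exact augmentation pipeline.

```python
import random
from PIL import Image

def paste_sign(dst_img: Image.Image, sign_crop: Image.Image, labels: list,
               class_id: int) -> None:
    """Paste a sign crop at a random location and append its bounding box
    in YOLO format (class, cx, cy, w, h), normalized to the image size."""
    W, H = dst_img.size
    w, h = sign_crop.size
    x0 = random.randint(0, W - w)
    y0 = random.randint(0, H - h)
    dst_img.paste(sign_crop, (x0, y0))
    labels.append((class_id, (x0 + w / 2) / W, (y0 + h / 2) / H, w / W, h / H))
```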
To further validate model generalizability, cross-dataset evaluation was conducted using CCTSDB [31]. Developed by researchers at Changsha University of Science and Technology, CCTSDB collects images from real-world Chinese driving scenarios. It annotates three sign categories—mandatory, warning, and prohibitory—within diverse road backgrounds encompassing complex environments, such as sunny, cloudy, rainy, snowy, foggy, and night conditions.

3.2. Experimental Settings

The experimental platform utilized a high-performance workstation running Windows 10 as the operating system. Computational power was provided by an Intel(R) i7-12700K CPU and an NVIDIA RTX 3070 Ti GPU with 8 GB of dedicated memory. System memory consisted of 64 GB of DDR RAM. For storage, a high-speed WD SN770 NVMe SSD with 1 TB capacity ensured rapid data access. The implementation was developed in Python 3.8, leveraging the PyTorch 1.12.1 deep learning framework. Model training was accelerated using the CUDA 11.3.1 and CuDNN 8.0.5.39 libraries. The specific configuration of the experimental environment is detailed in Table 1.
Table 2 summarizes the hyperparameters employed during training. Input images were resized to a resolution of 640 × 640. The learning rate was initialized at 0.01. Training utilized a batch size of 16. The SGD optimizer was employed with a momentum of 0.937 and a weight decay coefficient of 0.0005 to regularize the model and prevent overfitting. Notably, a dynamic epoch adjustment strategy was implemented to identify the optimal training duration. This approach empirically verified that 300 epochs achieved peak model accuracy while satisfying all convergence criteria.
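For reference, the hyperparameters in Table 2 correspond to a training call of the following form, assuming the standard Ultralytics YOLOv8 training interface; the dataset YAML path is a placeholder, and the actual runs would use the modified FEBG-YOLOv8s architecture definition rather than the stock yolov8s.yaml.

```python
from ultralytics import YOLO

# Stock YOLOv8s architecture shown here; the paper trains a modified variant.
model = YOLO("yolov8s.yaml")
model.train(
    data="tt100k.yaml",        # placeholder dataset description file
    imgsz=640,                 # input resolution 640 x 640
    epochs=300,
    batch=16,
    optimizer="SGD",
    lr0=0.01,                  # initial learning rate
    momentum=0.937,
    weight_decay=0.0005,
)
```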

4. Results and Discussion

4.1. Evaluation Metrics

For comprehensive evaluation of model performance, this study employed the following key metrics: precision (P), recall (R), parameter count (Params), mAP50, FPS, GFLOPs, and training time. Precision, recall, mAP50, and FPS are mathematically defined as follows:
P = TP / (TP + FP)

R = TP / (TP + FN)

mAP = (1/k) Σ_{i=1}^{k} AP_i

FPS = 1000 / (pr + inf + pos)
where TP denotes the number of correctly predicted positive samples, FP the number of negative samples incorrectly predicted as positive, FN the number of positive samples incorrectly predicted as negative, pr the image preprocessing time, inf the image inference time, and pos the image post-processing time, all measured in milliseconds.
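A small worked example of these definitions, with illustrative counts and per-image timings in milliseconds (not results from the paper):

```python
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def fps(pre_ms: float, inf_ms: float, post_ms: float) -> float:
    # 1000 ms per second divided by the total per-image processing time
    return 1000.0 / (pre_ms + inf_ms + post_ms)

print(precision(883, 117))    # 0.883
print(recall(785, 215))       # 0.785
print(fps(0.4, 4.8, 0.3))     # ~181.8 frames per second
```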

4.2. Ablation Experiments

To evaluate the independent contributions and synergistic effects of each module, we systematically incorporated the Faster-C2f module, EMA, Conv-BiFPN, and GSP-Detect into the baseline YOLOv8s model following the controlled variable principle. Ablation results are presented in Table 3.
Replacing the C2f module in layers 6 and 8 of the backbone with Faster-C2f reduces parameters by 33.3% and GFLOPs by 12.2% while improving mAP50 by 2.2% and FPS from 176 to 181. This demonstrates that the co-design of PConv and PWConv effectively compresses redundant computation in the backbone, significantly boosting real-time performance. However, recall decreases by 0.3%, indicating that computational reduction may sacrifice feature sensitivity for small targets. Embedding the EMA mechanism within Faster-C2f significantly improves recall by 3.5% and increases mAP50 to 86.3%. EMA’s parallel multi-branch architecture introduces low computational overhead, resulting in only a minor FPS drop from 181 to 180. However, the significant accuracy gains far outweigh this cost. The results confirm that the attention mechanism improves robustness for small and occluded objects while maintaining real-time inference essential for safety-critical traffic scenarios. Incorporating the Conv-BiFPN architecture improves recall to 81.0% and mAP50 to 87.3%. The added P4 downsampling layer and cross-layer skip connections mitigate feature loss during downsampling, but enhanced multi-scale interactions cause a modest FPS decrease to 178 while still maintaining a 1.1% advantage over the baseline. This balanced trade-off achieves significant accuracy gains, particularly for multi-scale targets, with minimal real-time performance impact. Following the lightweight GSP-Detect introduction, parameters and GFLOPs are minimized, which are 36.0% and 22.6% lower than the baseline model, while maintaining high mAP50 despite a 2.5% recall decrease. Notably, the channel mixing and sparse convolution design of GSP-Detect significantly enhances real-time performance, achieving an FPS of 183, making it highly suitable for resource-constrained edge devices.
The experimental results not only validate the effectiveness of each module’s design but also demonstrate that their synergistic advantages accelerate model iteration and deployment, with training time significantly reduced from the baseline’s 45.2 h to 35.3 h. This achieves an efficient balance between accuracy, lightweight design, and speed, which is particularly critical for in-vehicle edge device scenarios with stringent computational resource and real-time requirements.

4.3. Model Comparison

To validate FEBG-YOLOv8s' effectiveness, we compared per-category traffic sign detection accuracies on the TT100K dataset (Table 4). Comparison of key categories such as p6, ph4, and pl20 shows that the mAP50 of the improved model increases markedly, verifying its optimization effect on small-target detection.
To verify the performance of the FEBG-YOLOv8s network on the TT100K dataset, comparative experiments were conducted with the same hardware, software environment, and dataset for training and testing, comparing it against current mainstream detection models and widely used, well-performing members of the YOLO series. The experimental results are shown in Table 5.
The results of the comparison experiments in Table 5 show that FEBG-YOLOv8s excels in the balance of performance and efficiency. Compared to YOLOv8s, FEBG-YOLOv8s improves precision by 4.7% due to the EMA mechanism reducing false positives in complex scenes, recall by 2.0% due to BiFPN-enhanced feature fusion mitigating missed detections, and mAP50 by 3.1% while reducing parameters by 36.0% and GFLOPs by 22.5%. Compared with mainstream detection models, the mAP50 of FEBG-YOLOv8s is significantly higher than that of the single-stage detection model SSD and the two-stage model Faster R-CNN. It is also better than earlier YOLO series models such as YOLOv3 and YOLOv4, maintaining a significant precision advantage. Relative to lightweight models YOLOv5s and YOLOv7-tiny, FEBG-YOLOv8s maintains marginally higher parameters and GFLOPs but demonstrates substantially superior precision, recall, and accuracy owing to BiFPN’s multi-scale fusion and EMA’s contextual modeling. When compared with advanced YOLOv10s and YOLOv11s, FEBG-YOLOv8s exhibits synergistic precision–lightweight advantages; mAP50 slightly exceeds both models, recall matches YOLOv11s while exceeding YOLOv10s, and precision significantly surpasses both. Regarding lightweight design, FEBG-YOLOv8s reduces parameters by 12.3% versus YOLOv10s and 24.5% versus YOLOv11s through the Faster-C2f module, with substantially lower GFLOPs than both owing to PConv sparsity and GSP-Detect channel compression. Compared to other recent traffic-sign-specific YOLO variants, FEBG-YOLOv8s maintains an optimal accuracy–efficiency trade-off. Although ETSR-YOLO achieves slightly higher mAP50, it requires 68% higher computational cost. TSD-YOLO delivers the highest mAP50 but at prohibitive computational expense. CRFS-YOLOv8 prioritizes extreme lightweighting but suffers accuracy degradation. FEBG-YOLOv8s approaches the accuracy of ETSR-YOLO while reducing computation by 40.7% and significantly outperforms CRFS-YOLOv8 by 15.0% mAP50, with only a marginal resource increase. This optimized balance provides an enhanced solution for real-time traffic sign detection in intelligent transportation systems, demonstrating significant practical value.

4.4. Comparison of Attention Mechanisms

In order to verify the superiority of the EMA mechanism in the Faster-C2f module, a comparison experiment was conducted between it and the commonly used attention module in the YOLO improvement model. Parameters such as the Faster-C2f module, learning rate, and training epoch were fixed during the experiments, and differences were introduced only in the design of the attention module, as shown in Table 6.
When the CBAM attention mechanism is introduced, the number of parameters increases to 10.2 M and the GFLOPs rise to 27.2, but the mAP50 increases by only 0.3%, an insignificant accuracy gain for a significant increase in computational cost. The SE and ECA mechanisms show similar parameter and GFLOP increases but yield lower mAP50, indicating that their sequential spatial compression and channel modeling approach provides limited feature optimization. The CA mechanism has higher parameter and GFLOP increments yet achieves merely a 0.1% mAP50 improvement, demonstrating low cost-effectiveness. Conversely, incorporating the EMA mechanism increases parameters by only 0.8 M and GFLOPs by 1.3 while boosting mAP50 by 1.0%. The parallel multi-branch structure and cross-space interaction mechanism of EMA fuse global contextual information more efficiently, compensating for the limitations of the localized feature extraction of PConv in Faster-C2f and verifying the superiority of the Faster-EMA C2f module.

4.5. Visual Analysis

We evaluated four challenging TT100K scenarios (Figure 10 and Figure 11) to compare YOLOv8s and FEBG-YOLOv8s.
In the dense small-target scenario, the baseline loses low-level features due to the computational redundancy of the C2f module; it misses densely arranged signs such as "No Parking (pn)" and misclassifies "Prohibition of two kinds of vehicles (p13)" as "Prohibit non-motorized vehicles (p6)". FEBG-YOLOv8s improves detail extraction by combining the lightweight Faster-C2f module with the cross-space feature weighting of the EMA mechanism. In the shaded scene, the unidirectional feature fusion of FPN+PAN causes the baseline to mistake "left turn (p23)" for "right turn (p19)" and to miss detections in the darker shaded regions; FEBG-YOLOv8s reduces these false detections through the bidirectional cross-scale connections of the improved BiFPN. In the backlit scenario, the baseline misdetects "no left turn (p23)" and "motor vehicle (i4)" and misses small targets in the elevated low-light area; the computational redundancy of the original detection head limits its spatial modeling capability, whereas the GSP-Detect lightweight head adopted in FEBG-YOLOv8s strengthens the localization accuracy for low-contrast targets. In the distant small-target scene, the baseline misses "speed limit 30 (pl30)" and falsely detects "no trucks allowed to enter (p26)", while FEBG-YOLOv8s expands the receptive field through the downsampling module added to BiFPN and, through the sparse convolution strategy of Faster-C2f, retains key channel information to localize and detect these signs successfully.
In summary, FEBG-YOLOv8s solves the problems of feature loss, insufficient cross-layer interaction, and computational redundancy of the original model in complex scenarios by the synergistic optimization of a lightweight backbone network, multi-scale feature fusion, and high-efficiency detection head while ensuring real-time performance. The experiments verify the effectiveness of the improved scheme in terms of achieving a balance between accuracy and efficiency.

4.6. Generalizability Experiments

Adverse weather conditions (e.g., haze, rain, and snow) amplify background noise interference, complicating traffic sign localization and feature extraction, thus increasing misdetection and missed-detection rates. To evaluate efficiency and generalizability in complex scenarios, we compared FEBG-YOLOv8s against the baseline on CCTSDB. The experimental results are shown in Table 7.
As shown in Table 7, the FEBG-YOLOv8s model improves precision by 2.8%, recall by 3.9% (enabled by BiFPN’s preservation of small-target features in low-visibility scenarios), and mAP50 by 3.3% compared to the baseline, demonstrating its effectiveness in reducing false positives and missed detections within CCTSDB’s complex traffic environments. This significant performance enhancement improves reliability for intelligent driving systems. Substantially reduced parameters and GFLOPs, achieved through Faster-C2f’s sparse computation and GSP-Detect’s channel compression, coupled with marginally improved inference speed, indicate strong generalization capability in challenging conditions. While TSD-YOLO achieves higher mAP50, it exhibits significantly lower precision and recall with greater computational cost. Despite its lightweight design, CRFS-YOLOv8 exhibits a significant accuracy decline on this challenging dataset, underperforming the FEBG-YOLOv8s model. FEBG-YOLOv8s maintains an optimal accuracy–efficiency balance even under the extreme conditions of CCTSDB and is capable of adapting to the demands of real driving environments, such as low light, bad weather, and dense small targets.
Figure 12 and Figure 13 compare detection performance across four CCTSDB scenarios. In the dimly lit night environment, the lack of light leads to the loss of image details, and street-lamp reflections cause local overexposure that obscures sign information; YOLOv8s shows misdetections and omissions, while FEBG-YOLOv8s improves on this and accurately detects "p26" and "w13". In the haze environment, scattered light reduces image saturation; the baseline misses detections and misdetects lights as traffic signs, whereas FEBG-YOLOv8s retains key channel features and reduces the redundant noise interference caused by haze. In snowy scenes, snow reflections and large white backgrounds weaken the visibility of traffic signs and increase the difficulty of feature extraction; FEBG-YOLOv8s strengthens the feature response of low-contrast targets through cross-space interaction. In cloudy environments, the diffuse light causes color shifts of traffic signs and removes the contrast between shadowed and brightly lit regions, making it difficult for the baseline to separate speed limit signs from the background and leading to missed detections of small targets in the shade; the improved model accurately captures the detailed features of traffic signs and detects them correctly.

5. Conclusions

This paper proposes FEBG-YOLOv8s, a lightweight model for traffic sign detection in complex scenarios. Through synergistic architectural innovations, including the Faster-EMA C2f module, the Conv-BiFPN architecture, and the GSP-Detect reconstruction, the model achieves significant improvements in accuracy, speed, and efficiency on the TT100K dataset. Experimental results demonstrate that FEBG-YOLOv8s outperforms YOLOv8s on the TT100K dataset, improving mAP50 by 3.1%, precision by 4.7%, and recall by 2.0% while reducing parameters by 36.0% and GFLOPs by 22.5%. The model also exhibits strong generalization capability on the CCTSDB dataset, achieving 3.3% higher mAP50 and 3.9% higher recall. FEBG-YOLOv8s demonstrates robust performance across challenging conditions, including low illumination, adverse weather, and dense small targets, outperforming comparable models in the accuracy–speed balance. This positions it as a suitable solution for intelligent assisted driving perception systems. Room for optimization remains: detection confidence is still low in adverse traffic environments, and future work will further investigate traffic sign detection under extreme conditions to better meet the needs of the intelligent driving era. Future research could also integrate state-of-the-art dimensionality reduction techniques [32] to compress high-dimensional visual features in traffic sign detection into low-dimensional spaces, thereby enhancing localization robustness for small targets in adverse traffic environments. This approach would simultaneously improve the feature interpretability and computational efficiency of lightweight models, addressing the real-time requirements of autonomous driving systems.

Author Contributions

Conceptualization, C.X. and H.S.; methodology, H.S.; software, C.X.; validation, C.X., H.S. and J.Y.; formal analysis, C.X.; investigation, C.X.; resources, C.X.; data curation, C.X.; writing—original draft preparation, C.X.; writing—review and editing, H.S.; visualization, C.X.; supervision, C.X.; project administration, J.Y.; funding acquisition, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Postgraduate Research and Practice Innovation Program of Jiangsu Province, grant number SJCX22_0314.

Data Availability Statement

The datasets used and analyzed during the current study are publicly available and can be accessed from https://cg.cs.tsinghua.edu.cn/traffic-sign/ (accessed on 22 June 2025) and https://github.com/csust7zhangjm/CCTSDB2021 (accessed on 22 June 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Dewi, C.; Chen, R.-C.; Yu, H.; Jiang, X. Robust Detection Method for Improving Small Traffic Sign Recognition Based on Spatial Pyramid Pooling. J. Ambient Intell. Humaniz. Comput. 2021, 14, 8135–8152. [Google Scholar] [CrossRef]
  2. Barodi, A.; Bajit, A.; Zemmouri, A.; Benbrahim, M.; Tamtaoui, A. Improved Deep Learning Performance for Real-Time Traffic Sign Detection and Recognition Applicable to Intelligent Transportation Systems. SSRN Electron. J. 2025, 12, 713–723. [Google Scholar] [CrossRef]
  3. Wang, Y.; Wang, Q. Robust Stacking Ensemble Model for Traffic Sign Detection and Recognition. IEEE Access 2024, 12, 178941–178950. [Google Scholar] [CrossRef]
  4. Zheng, Z.; Cheng, Y.; Xin, Z.; Yu, Z.; Zheng, B. Robust Perception Under Adverse Conditions for Autonomous Driving Based on Data Augmentation. IEEE Trans. Intell. Transp. Syst. 2023, 24, 13916–13929. [Google Scholar] [CrossRef]
  5. Wang, C.; Zhou, W.; Wang, G. ORD-WM: A Two-Stage Loop Closure Detection Algorithm for Dense Scenes. J. King Saud Univ.-Comput. Inf. Sci. 2024, 36, 102115. [Google Scholar] [CrossRef]
  6. Zhai, H.; Du, J.; Ai, Y.; Hu, T. Edge Deployment of Deep Networks for Visual Detection: A Review. IEEE Sens. J. 2025, 25, 18662–18683. [Google Scholar] [CrossRef]
  7. Apostolidis, K.D.; Papakostas, G.A. Delving into YOLO Object Detection Models: Insights into Adversarial Robustness. Electronics 2025, 14, 1624. [Google Scholar] [CrossRef]
  8. Qin, Y.; Li, X.; He, D.; Zhou, Y.; Li, L. RLGS-YOLO: An Improved Algorithm for Metro Station Passenger Detection Based on YOLOv8. Eng. Res. Express 2024, 6, 045263. [Google Scholar] [CrossRef]
  9. Zeng, J.; Wu, H.; He, M. Image Classification Combined with Faster R–CNN for the Peak Detection of Complex Components and Their Metabolites in Untargeted LC-HRMS Data. Anal. Chim. Acta 2023, 1238, 340189. [Google Scholar] [CrossRef]
  10. Bi, X.; Hu, J.; Xiao, B.; Li, W.; Gao, X. IEMask R-CNN: Information-Enhanced Mask R-CNN. IEEE Trans. Big Data 2023, 9, 688–700. [Google Scholar] [CrossRef]
  11. Zhai, S.; Shang, D.; Wang, S.; Dong, S. DF-SSD: An Improved SSD Object Detection Algorithm Based on DenseNet and Feature Fusion. IEEE Access 2020, 8, 24344–24357. [Google Scholar] [CrossRef]
  12. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 779–788. [Google Scholar]
  13. Wei, J.; As’arry, A.; Anas Md Rezali, K.; Zuhri Mohamed Yusoff, M.; Ma, H.; Zhang, K. A Review of YOLO Algorithm and Its Applications in Autonomous Driving Object Detection. IEEE Access 2025, 13, 93688–93711. [Google Scholar] [CrossRef]
  14. Shen, Q.; Li, Y.; Zhang, Y.; Zhang, L.; Liu, S.; Wu, J. CSW-YOLO: A Traffic Sign Small Target Detection Algorithm Based on YOLOv8. PLoS ONE 2025, 20, e0315334. [Google Scholar] [CrossRef]
  15. Cai, Y.; Min, R.; Huang, J. Research on Traffic Sign Detection Method Based on FLB-YOLOv8. In Proceedings of the Fifth International Conference on Computer Communication and Network Security (CCNS 2024), Guangzhou, China, 22 August 2024; SPIE: Bellingham, WA, USA, 2024; p. 35. [Google Scholar]
  16. Chen, J.; Huang, H.; Zhang, R.; Lyu, N.; Guo, Y.; Dai, H.N.; Yan, H. YOLO-TS: Real-Time Traffic Sign Detection with Enhanced Accuracy Using Optimized Receptive Fields and Anchor-Free Fusion. arXiv 2024, arXiv:2410.17144. [Google Scholar] [CrossRef]
  17. Huang, L.; Cai, H.; Peng, Y.; Liao, J. RePCMA-YOLOv8n: A Lightweight Traffic Sign Detection Model. In Proceedings of the International Conference on Applied Intelligence, Zhengzhou, China, 22–25 November 2024; Springer Nature: Singapore, 2024; pp. 105–114. [Google Scholar]
  18. Deng, Y.; Huang, L.; Gan, X.; Lu, Y.; Shi, S. A Heterogeneous Attention YOLO Model for Traffic Sign Detection. J. Supercomput. 2025, 81, 765. [Google Scholar] [CrossRef]
  19. Khalili, B.; Smyth, A.W. SOD-YOLOv8—Enhancing YOLOv8 for Small Object Detection in Aerial Imagery and Traffic Scenes. Sensors 2024, 24, 6209. [Google Scholar] [CrossRef]
  20. Liu, H.; Zhou, K.; Zhang, Y.; Zhang, Y. ETSR-YOLO: An Improved Multi-Scale Traffic Sign Detection Algorithm Based on YOLOv5. PLoS ONE 2023, 18, e0295807. [Google Scholar] [CrossRef]
  21. Du, S.; Pan, W.; Li, N.; Dai, S.; Xu, B.; Liu, H.; Xu, C.; Li, X. TSD-YOLO: Small Traffic Sign Detection Based on Improved YOLO V8. IET Image Process. 2024, 18, 2884–2898. [Google Scholar] [CrossRef]
  22. Xie, G.; Xu, Z.; Lin, Z.; Liao, X.; Zhou, T. GRFS-YOLOv8: An Efficient Traffic Sign Detection Algorithm Based on Multiscale Features and Enhanced Path Aggregation. Signal Image Video Process. 2024, 18, 5519–5534. [Google Scholar] [CrossRef]
  23. Chen, J.; Kao, S.; He, H.; Zhuo, W.; Wen, S.; Lee, C.-H.; Chan, S.-H.G. Run, Don’t Walk: Chasing Higher FLOPS for Faster Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
  24. Howard, A.; Sandler, M.; Chu, G.; Chen, L.-C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019. [Google Scholar]
  25. Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C.; Xu, C. GhostNet: More Features from Cheap Operations. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  26. Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; IEEE: Piscataway, NJ, USA, 2018. [Google Scholar]
  27. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. arXiv 2018, arXiv:1807.06521. [Google Scholar] [CrossRef]
  28. Ouyang, D.; He, S.; Zhang, G.; Luo, M.; Guo, H.; Zhan, J.; Huang, Z. Efficient Multi-Scale Attention Module with Cross-Spatial Learning. In Proceedings of the ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar]
  29. Li, H.; Li, J.; Wei, H.; Liu, Z.; Zhan, Z.; Ren, Q. Slim-Neck by GSConv: A Lightweight-Design for Real-Time Detector Architectures. J. Real-Time Image Process. 2024, 21, 62. [Google Scholar] [CrossRef]
  30. Zhu, Z.; Liang, D.; Zhang, S.; Huang, X.; Li, B.; Hu, S. Traffic-Sign Detection and Classification in the Wild. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 2110–2118. [Google Scholar]
  31. Zhang, J.; Zou, X.; Kuang, L.D.; Wang, J.; Sherratt, R.S.; Yu, X. CCTSDB 2021: A More Comprehensive Traffic Sign Detection Benchmark. Hum.-Centric Comput. Inf. Sci. 2022, 12, 23. [Google Scholar]
  32. Geoffroy, H.; Berger, J.; Colange, B.; Lespinats, S.; Dutykh, D. The Use of Dimensionality Reduction Techniques for Fault Detection and Diagnosis in a AHU Unit: Critical Assessment of Its Reliability. J. Build. Perform. Simul. 2022, 16, 249–267. [Google Scholar] [CrossRef]
Figure 1. Architecture of YOLOv8.
Figure 2. Architecture of FEBG-YOLOv8s.
Figure 3. Module structure of the lightweight Faster-C2f based on the FasterNet block.
Figure 4. Module structure of the FasterNet block integrating the EMA mechanism.
Figure 5. Module structure of the lightweight Faster-EMA C2f based on the Faster-EMA block.
Figure 6. Feature pyramid network comparison: (a) FPN+PAN; (b) BiFPN.
Figure 7. Enhanced neck network architecture based on BiFPN.
Figure 8. Module structure of the lightweight detection head with GSConv-based hybrid serial–parallel architecture (GSP-Detect).
Figure 9. Representative traffic sign categories in the TT100K dataset.
Figure 10. Detection performance of YOLOv8s on TT100K scenarios: (a,b) dense small targets, (c,d) occlusion, (e,f) low illumination/backlight, and (g,h) distant small targets.
Figure 11. Detection performance of FEBG-YOLOv8s on TT100K scenarios: (a,b) dense small targets, (c,d) occlusion, (e,f) low illumination/backlight, and (g,h) distant small targets.
Figure 12. Detection performance of YOLOv8s on CCTSDB: (a,b) nighttime, (c,d) haze, (e,f) snow, and (g,h) overcast.
Figure 13. Detection performance of FEBG-YOLOv8s on CCTSDB: (a,b) nighttime, (c,d) haze, (e,f) snow, and (g,h) overcast.
Table 1. Experimental environment configuration.
Parameter | Specification
Operating System | Windows 10
CPU | Intel(R) i7-12700K
GPU | RTX 3070 Ti
GPU Memory | 8 GB
RAM | DDR 64 GB
Storage | WD SN770 NVMe SSD 1 TB
Programming Language | Python 3.8
Deep Learning Framework | PyTorch 1.12.1
CUDA Toolkit | CUDA 11.3.1
CuDNN Version | CuDNN 8.0.5.39
Table 2. Hyperparameter settings.
Hyperparameter | Setting
Input Size | 640 × 640
Learning Rate | 0.01
Batch Size | 16
Momentum | 0.937
Weight Decay | 0.0005
Optimizer | SGD
Epochs | 300
Table 3. Ablation study results on the TT100K dataset. "√" indicates the incorporation of this module.
Experiment | Faster-C2f | Faster-EMA C2f | Conv-BiFPN | GSP-Detect | P (%) | R (%) | mAP50 (%) | Param (M) | GFLOPs | FPS | Training Time (h)
Baseline | | | | | 83.6 | 76.5 | 83.1 | 11.1 | 28.8 | 176 | 45.2
1 | √ | | | | 84.8 | 76.2 | 85.3 | 7.4 | 25.3 | 181 | 40.1
2 | | √ | | | 85.3 | 79.7 | 86.3 | 8.2 | 26.6 | 180 | 42.4
3 | | √ | √ | | 85.0 | 81.0 | 87.3 | 9.3 | 27.2 | 178 | 43.1
4 | | √ | √ | √ | 88.3 | 78.5 | 86.2 | 7.1 | 22.3 | 183 | 35.3
Table 4. Detection accuracy comparison: YOLOv8s vs. enhanced model on TT100K.
Model (mAP50, %) | i2 | i4 | i5 | il100 | il60 | il80 | io | ip | p3 | p5 | p6 | p10
YOLOv8s | 90 | 91.1 | 92.1 | 97.2 | 95.4 | 95 | 90.2 | 87.3 | 85.5 | 93.7 | 68.3 | 79.9
FEBG-YOLOv8s | 90.2 | 91.5 | 94.2 | 98.0 | 95.5 | 96.4 | 92.1 | 87.4 | 86.5 | 94.3 | 70.2 | 79.9
Model (mAP50, %) | p11 | p12 | p19 | p23 | p26 | p27 | pg | ph4 | ph5 | pl100 | pl120 | pl20
YOLOv8s | 86.2 | 84.1 | 85.7 | 87.4 | 87.8 | 90.9 | 85.3 | 76 | 85.4 | 97.5 | 93.2 | 50.6
FEBG-YOLOv8s | 86.4 | 86.7 | 87.2 | 88.4 | 90.92 | 93.7 | 88.4 | 85.3 | 88.1 | 98.7 | 95.1 | 68.8
Table 5. Comparison experiment results between FEBG-YOLOv8s and other classical object detection algorithms on TT100K.
Model | P (%) | R (%) | mAP50 (%) | Param (M) | GFLOPs
SSD | 65.2 | 60.4 | 65.6 | 120 | 35.8
Faster R-CNN | 58.7 | 52.3 | 55.7 | 42.6 | 134.5
YOLOv3 | 62.2 | 59.7 | 81.5 | 63.0 | 185.3
YOLOv4 | 64.8 | 62.3 | 82.1 | 95.9 | 141.8
YOLOv5s | 71.2 | 69.2 | 82.5 | 6.8 | 16.5
YOLOv7-tiny | 70.8 | 67.6 | 81.3 | 6.0 | 13.2
YOLOv8s | 83.6 | 76.5 | 83.1 | 11.1 | 28.8
YOLOv10s | 86.0 | 76.5 | 85.1 | 8.1 | 24.3
YOLOv11s | 85.2 | 78.2 | 85.2 | 9.4 | 25.7
ETSR-YOLO [20] | 88.5 | 77.4 | 88.2 | 7.5 | 37.6
TSD-YOLO [21] | 90.8 | 83.8 | 90.6 | 8.8 | 65.7
CRFS-YOLOv8 [22] | - | 95.0 | 71.2 | 1.71 | 10.9
FEBG-YOLOv8s | 88.3 | 78.5 | 86.2 | 7.1 | 22.3
Table 6. Comparison of attention mechanisms.
Model | Param (M) | GFLOPs | mAP50 (%)
Baseline | 7.4 | 25.3 | 85.3
+CBAM | 10.2 | 27.2 | 85.6
+SE | 9.9 | 26.7 | 85.1
+ECA | 9.9 | 26.7 | 85.2
+CA | 10.1 | 27.0 | 85.4
+EMA | 8.2 | 26.6 | 86.3
Table 7. Comparative analysis before and after enhancement on CCTSDB.
Model | P (%) | R (%) | mAP50 (%) | Param (M) | GFLOPs | FPS
YOLOv8s | 85.4 | 74.3 | 81.2 | 10.8 | 28.5 | 173
TSD-YOLO [21] | 50.9 | 70.8 | 86.5 | 8.8 | 27.3 | -
CRFS-YOLOv8 [22] | - | 72.4 | 80.3 | 1.65 | 9.8 | -
FEBG-YOLOv8s | 88.2 | 78.2 | 84.5 | 7.3 | 22.7 | 179
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

