Open Access Article
A Lightweight Traffic Sign Small Target Detection Network Suitable for Complex Environments
by Zonghong Feng 1,*, Liangchang Li 1, Kai Xu 1 and Yong Wang 2,3,*
1 School of Mathematics and Physics, Lanzhou Jiaotong University, Lanzhou 730070, China
2 School of Sciences, Southwest Petroleum University, Chengdu 610500, China
3 Intelligent Perception and Control Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644000, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2026, 16(1), 326; https://doi.org/10.3390/app16010326
Submission received: 15 November 2025 / Revised: 5 December 2025 / Accepted: 17 December 2025 / Published: 28 December 2025
Abstract
With traffic safety issues becoming more frequent and autonomous driving technology developing rapidly, accurate recognition of traffic signs faces ever higher demands, yet detection remains highly susceptible to adverse conditions such as changes in light intensity, fog, rain, snow, and partial occlusion. This paper proposes DAYOLO, an improved model based on YOLOv8n that aims to balance detection accuracy and model complexity. First, the Bottleneck in the C2f module of the YOLOv8n backbone is replaced with a Bottleneck DAttention block; introducing DAttention enables more effective feature extraction and thereby improves model performance. Second, DySample, an ultra-lightweight and efficient upsampler, is introduced into the neck network to further improve performance while reducing computational overhead. Third, a Task-Aligned Dynamic Detection Head (TADDH) is introduced; TADDH enhances task interaction through a dynamic mechanism and uses shared convolutional modules to reduce parameters and improve efficiency. An additional Layer2 detection head is also added to strengthen the extraction and fusion of features at different scales, improving the detection accuracy of small traffic signs. Finally, replacing SlideLoss with NWDLoss better handles predictions with more complex distributions and more accurately measures the distance between predicted and ground-truth boxes in feature space. Experimental results show that DAYOLO achieves 97.2% mAP on the SDCCVP dataset, 5.3% higher than the baseline model YOLOv8n; its frame rate reaches 120 FPS, 37.8% higher than that of YOLOv8n; and its parameter count is reduced by 6.2%, outperforming models such as YOLOv3, YOLOv5, YOLOv6, and YOLOv7. On the TT100K dataset, DAYOLO achieves 80.8% mAP, 9.2% higher than YOLOv8n. The proposed method balances model size and detection accuracy, meets the needs of traffic sign detection, and offers new ideas and methods for future research in this field.
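As a point of reference for the loss modification mentioned above, the following minimal PyTorch sketch illustrates the Normalized Wasserstein Distance (NWD) as it is commonly formulated for tiny-object detection. The normalization constant c and the way the similarity would be folded into YOLOv8n's overall loss are assumptions here, and the function names are illustrative rather than taken from the authors' code.

import torch

def nwd_similarity(pred_boxes, gt_boxes, c=12.8):
    # Boxes are given as (cx, cy, w, h). Each box is modeled as a 2-D Gaussian
    # with mean (cx, cy) and covariance diag((w/2)^2, (h/2)^2); the squared
    # second-order Wasserstein distance between two such Gaussians reduces to
    # the squared L2 distance between the vectors [cx, cy, w/2, h/2].
    pa = torch.stack([pred_boxes[..., 0], pred_boxes[..., 1],
                      pred_boxes[..., 2] / 2, pred_boxes[..., 3] / 2], dim=-1)
    pb = torch.stack([gt_boxes[..., 0], gt_boxes[..., 1],
                      gt_boxes[..., 2] / 2, gt_boxes[..., 3] / 2], dim=-1)
    w2 = ((pa - pb) ** 2).sum(dim=-1)       # squared Wasserstein distance
    return torch.exp(-torch.sqrt(w2) / c)   # normalize to a (0, 1] similarity

def nwd_loss(pred_boxes, gt_boxes, c=12.8):
    # The loss shrinks as the predicted box approaches the ground-truth box,
    # and stays informative even when two small boxes do not overlap at all.
    return 1.0 - nwd_similarity(pred_boxes, gt_boxes, c)

if __name__ == "__main__":
    pred = torch.tensor([[50.0, 50.0, 12.0, 12.0]])  # small predicted sign box
    gt = torch.tensor([[52.0, 49.0, 10.0, 10.0]])    # slightly offset ground truth
    print(nwd_loss(pred, gt))

Because the similarity is computed from Gaussian representations of the boxes rather than from their overlap, it varies smoothly for very small, slightly misaligned signs, which is the motivation for the loss change stated in the abstract.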