Comparison of Attacks on Traffic Sign Detection Models for Autonomous Vehicles
Abstract
1. Introduction
2. Methodology
2.1. Dataset
2.2. Machine Learning (ML) Models
3. Experiments and Results
3.1. LED Strobe Attacks
- Under high-speed conditions (shutter speeds faster than 1/2000 s), the CNN achieves 65.00% accuracy with an average confidence of 97.30%, while YOLOv5 achieves 45.00% accuracy with an average confidence of 40.11%. The CNN is therefore noticeably more stable and reliable in high-speed environments.
- Under normal conditions (shutter speeds of 1/2000 s or slower), the two models achieve equivalent accuracy, both 70.00%; however, the CNN's average confidence is still higher than YOLOv5's. Even at equal accuracy, the CNN remains markedly more confident in its correct predictions.
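As a quick arithmetic check, each "average" row in the summary table for this attack is the simple mean of the higher-shutter and normal-shutter rows (values taken from that table; a trivial sketch):

```python
# Mean of the two shutter regimes (higher-speed, normal) for each metric,
# using the accuracy/confidence values from the paper's summary table.
regimes = {
    "CNN accuracy":    (65.0, 70.0),
    "YOLO accuracy":   (45.0, 45.0),
    "CNN confidence":  (97.3, 96.3),
    "YOLO confidence": (40.1, 37.4),
}
averages = {metric: sum(v) / len(v) for metric, v in regimes.items()}
for metric, avg in averages.items():
    print(f"{metric}: {avg:.1f}%")
```

This reproduces the reported averages, e.g. 67.5% accuracy and 96.8% confidence for the CNN.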
3.2. Light Attacks
- 24 V DC LED lights: the main light sources, consisting of two bulbs of different colors (white and green). Their on/off switching and flash frequency are controlled by the Arduino.
- B0302 boost module: steps the supply voltage up to the LED's operating voltage, ensuring that the LED runs stably at the required brightness.
- 1 kΩ resistor: connected in series to limit the current through the LED and prevent overload.
- Arduino Uno: switches the LED on and off by driving the S9013 NPN transistor. The Arduino receives preset control signals and adjusts the LED's strobe frequency and color coverage.
- S9013 NPN transistor: a switching element driven by the Arduino, used to switch the 24 V LED load. Using a transistor keeps the control circuit separated from the power circuit, protecting the Arduino.
- Camera (Samsung Galaxy S22 Ultra): used to capture traffic-sign images under the different LED light sources; these images were then used to test the recognition performance of the CNN and YOLOv5 models.
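The Arduino's role above reduces to toggling the transistor's base pin as a square wave at the target flash frequency. A minimal sketch of that timing arithmetic (written in Python for illustration rather than Arduino C; the 50% duty cycle is our assumption):

```python
# Half-period of a 50%-duty square-wave strobe, in microseconds -- the value an
# Arduino sketch would wait between pin toggles (e.g. via delayMicroseconds()).
def half_period_us(f_hz: float) -> int:
    return round(1_000_000 / (2 * f_hz))

for f_hz in (30, 60, 90, 120):  # flash frequencies used in the experiments
    hp = half_period_us(f_hz)
    print(f"{f_hz} Hz -> {hp} us on, {hp} us off")
```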
- Under higher shutter-speed conditions (faster than 1/2000 s), the CNN model's accuracy is 65% under white light but only 15% under green light; the YOLOv5 model reaches 60% under white light and 15% under green light. Although both models' green-light recognition degrades significantly at high shutter speeds, the CNN remains more stable under white light.
- Under normal shutter-speed conditions (1/2000 s or slower), the CNN model's accuracy reaches 100% under white light and 45% under green light, while the YOLOv5 model reaches 70% under white light and 0% under green light. At normal shutter speeds, white light is thus far more conducive to recognition, and the CNN outperforms YOLOv5.
- At LED flash frequencies of 30 Hz and 60 Hz, the CNN model's accuracy under white light is 80% in both cases, far above its 40% and 20% under green light. The YOLOv5 model reaches 60% and 40% under white light, and 0% under green light at both frequencies.
- At 90 Hz and 120 Hz, the CNN model's accuracy under white light remains at 80%, while under green light it is 0% and 60%, respectively. The YOLOv5 model's accuracy under white light is 80% at both frequencies, and under green light it is 0% and 30%, respectively.
| Frequency | CNN (green) | CNN (white) | YOLO (green) | YOLO (white) |
|---|---|---|---|---|
| 30 Hz | 40.0% | 80.0% | 0.0% | 60.0% |
| 60 Hz | 20.0% | 80.0% | 0.0% | 40.0% |
| 90 Hz | 0.0% | 80.0% | 0.0% | 80.0% |
| 120 Hz | 60.0% | 90.0% | 30.0% | 80.0% |
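A simplified timing model (our illustration, not an analysis from the paper) helps show why shutter speed and flash frequency interact: with a 50%-duty strobe, a 1/2000 s exposure is far shorter than even a 120 Hz half-period (about 4.2 ms), so each frame falls almost entirely inside either an on-phase or an off-phase of the flash, whereas an exposure spanning a full strobe period always captures roughly half the light:

```python
# Fraction of an exposure window during which a 50%-duty strobe is ON,
# for an exposure starting at time offset t0 within the strobe cycle.
def on_fraction(f_hz: float, exposure_s: float, t0: float = 0.0) -> float:
    period = 1.0 / f_hz
    step = period / 10_000            # numeric-integration step
    on_time, t, remaining = 0.0, t0, exposure_s
    while remaining > 0:
        dt = min(step, remaining)
        if (t % period) / period < 0.5:   # first half of each cycle: LED on
            on_time += dt
        t += dt
        remaining -= dt
    return on_time / exposure_s

print(round(on_fraction(30, 1 / 2000, t0=0.0), 3))   # frame inside an on-phase
print(round(on_fraction(30, 1 / 2000, t0=1 / 60), 3))  # frame inside an off-phase
print(round(on_fraction(30, 1 / 30, t0=0.0), 3))     # full-period exposure
```

Under this model, a fast-shutter frame sees the sign either fully lit or fully dark depending on its phase alignment, which is consistent with the erratic accuracy observed at high shutter speeds.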
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Table: LED strobe attack (Section 3.1) — accuracy and average confidence of each model by shutter condition.
| Metric | CNN | YOLO |
|---|---|---|
| Higher shutter accuracy | 65.0% | 45.0% |
| Normal shutter accuracy | 70.0% | 45.0% |
| Average accuracy | 67.5% | 45.0% |
| Higher shutter confidence | 97.3% | 40.1% |
| Normal shutter confidence | 96.3% | 37.4% |
| Average confidence | 96.8% | 38.8% |
Table: Light attack (Section 3.2) — accuracy and average confidence by light color (G = green, W = white) and shutter condition.
| Metric | CNN (G) | CNN (W) | YOLO (G) | YOLO (W) |
|---|---|---|---|---|
| Higher shutter accuracy | 15.0% | 65.0% | 15.0% | 60.0% |
| Normal shutter accuracy | 45.0% | 100.0% | 0.0% | 70.0% |
| Average accuracy | 30.0% | 82.5% | 7.5% | 65.0% |
| Higher shutter confidence | 64.9% | 92.1% | 38.7% | 57.4% |
| Normal shutter confidence | 79.6% | 99.9% | 0.0% | 45.2% |
| Average confidence | 75.9% | 96.8% | 38.7% | 50.9% |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lin, C.-H.; Chen, G.-W. Comparison of Attacks on Traffic Sign Detection Models for Autonomous Vehicles. Eng. Proc. 2025, 120, 7. https://doi.org/10.3390/engproc2025120007
