Search Results (2)

Search Parameters:
Keywords = flashing traffic light recognition

26 pages, 1495 KB  
Article
FlashLightNet: An End-to-End Deep Learning Framework for Real-Time Detection and Classification of Static and Flashing Traffic Light States
by Laith Bani Khaled, Mahfuzur Rahman, Iffat Ara Ebu and John E. Ball
Sensors 2025, 25(20), 6423; https://doi.org/10.3390/s25206423 - 17 Oct 2025
Viewed by 962
Abstract
Accurate traffic light detection and classification are fundamental for autonomous vehicle (AV) navigation and real-time traffic management in complex urban environments. Existing systems often fall short of reliably identifying and classifying traffic light states in real time, including their flashing modes. This study introduces FlashLightNet, a novel end-to-end deep learning framework that integrates the nano version of You Only Look Once version 10 (YOLOv10n) for traffic light detection, an 18-layer Residual Neural Network (ResNet-18) for feature extraction, and a Long Short-Term Memory (LSTM) network for temporal state classification. The proposed framework is designed to robustly detect and classify traffic light states, including conventional signals (red, green, and yellow) and flashing signals (flash red and flash yellow), under diverse and challenging conditions such as varying lighting, occlusions, and environmental noise. The framework was trained and evaluated on a comprehensive custom dataset of traffic light scenarios organized into temporal sequences to capture spatiotemporal dynamics. The dataset was prepared by recording videos of traffic lights (red, green, yellow, flash red, and flash yellow) at several intersections in Starkville, Mississippi, and at Mississippi State University. In addition, simulation-based video datasets with different flashing rates (2, 3, and 4 s) for traffic light states at several intersections were created using RoadRunner, further enhancing the diversity and robustness of the dataset. The YOLOv10n model achieved a mean average precision (mAP) of 99.2% in traffic light detection, while the ResNet-18 and LSTM combination classified the five traffic light states with an F1-score of 96%.
(This article belongs to the Special Issue Deep Learning Technology and Image Sensing: 2nd Edition)
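The pipeline described in the abstract chains a detector, a per-frame feature extractor, and a recurrent classifier: YOLOv10n localizes the traffic light in each frame, ResNet-18 embeds the cropped light, and an LSTM reads the sequence of embeddings to decide among the five states, which is what lets the system separate a flashing light from a steady one. The following is a minimal PyTorch sketch of the ResNet-18 + LSTM stage only; the class name, hidden size, crop size, and sequence length are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed hyperparameters, not the authors' code):
# ResNet-18 features per cropped frame -> LSTM over the clip -> 5 traffic light states.
import torch
import torch.nn as nn
import torchvision.models as models

class TrafficLightStateClassifier(nn.Module):
    def __init__(self, num_states: int = 5, hidden_size: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # per-frame feature extractor
        feat_dim = backbone.fc.in_features         # 512 for ResNet-18
        backbone.fc = nn.Identity()                # drop the ImageNet classification head
        self.backbone = backbone
        self.lstm = nn.LSTM(feat_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_states)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 3, H, W) crops taken from the detector's boxes
        b, t, c, h, w = clips.shape
        feats = self.backbone(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)             # last hidden state summarizes the clip
        return self.head(h_n[-1])                  # (batch, num_states) logits

if __name__ == "__main__":
    model = TrafficLightStateClassifier()
    clip = torch.randn(2, 8, 3, 64, 64)            # 2 clips of 8 cropped frames each
    print(model(clip).shape)                       # torch.Size([2, 5])
```

A single-frame classifier could not tell a flash-red light caught in its on phase from a steady red; the temporal window seen by the LSTM is what makes the flash states separable.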

9 pages, 3054 KB  
Proceeding Paper
Simulated Adversarial Attacks on Traffic Sign Recognition of Autonomous Vehicles
by Chu-Hsing Lin, Chao-Ting Yu, Yan-Ling Chen, Yo-Yu Lin and Hsin-Ta Chiao
Eng. Proc. 2025, 92(1), 15; https://doi.org/10.3390/engproc2025092015 - 25 Apr 2025
Cited by 1 | Viewed by 939
Abstract
With the development and application of artificial intelligence (AI) technology, autonomous driving systems are gradually being deployed on the road. However, the safety and reliability of unmanned vehicles remain a concern, and their autonomous driving systems must also withstand information security attacks; if they cannot, traffic accidents may result, exposing passengers to risk. Therefore, in this study we investigated adversarial attacks on the traffic sign recognition of autonomous vehicles. We used You Only Look Once (YOLO) to build a machine learning model for traffic sign recognition and simulated attacks on traffic signs. The simulated attacks included LED light strobes, color-light flashes, and Gaussian noise. For the LED strobe and color-light flash attacks, translucent images were overlaid on the original traffic sign images to simulate the corresponding attack scenarios. For the Gaussian noise attack, Python 3.11.10 was used to add noise to the original images. Each attack method interfered with the original machine learning model to some extent, hindering autonomous vehicles from detecting and recognizing traffic signs accurately.
(This article belongs to the Proceedings of 2024 IEEE 6th Eurasia Conference on IoT, Communication and Engineering)
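The attack simulations described above reduce to simple image operations: the LED strobe and color-light flash are modeled by alpha-blending a translucent layer over the sign, and the noise attack adds Gaussian noise to the pixels. A small illustrative Python/NumPy sketch follows; the function names, the 0.4 opacity, and the noise standard deviation are assumptions rather than the parameters used in the paper.

```python
# Illustrative sketch (assumed parameters, not the authors' code) of the two
# perturbation styles: a translucent strobe/color overlay and additive Gaussian noise.
import numpy as np

def overlay_flash(sign: np.ndarray, color=(0, 0, 255), alpha: float = 0.4) -> np.ndarray:
    """Blend a uniform translucent color layer over a sign crop (H, W, 3, uint8),
    mimicking an LED strobe or color-light flash frame."""
    layer = np.empty_like(sign)
    layer[...] = color                                            # broadcast the color to every pixel
    blended = (1.0 - alpha) * sign.astype(np.float64) + alpha * layer
    return blended.astype(np.uint8)

def add_gaussian_noise(sign: np.ndarray, sigma: float = 25.0) -> np.ndarray:
    """Add zero-mean Gaussian noise and clip back to the valid pixel range."""
    noisy = sign.astype(np.float64) + np.random.normal(0.0, sigma, sign.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    crop = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in for a sign crop
    attacked = add_gaussian_noise(overlay_flash(crop))
    print(attacked.shape, attacked.dtype)                          # (64, 64, 3) uint8
```

In this setup the attacked crops would simply be fed back to the trained YOLO model to measure how much each perturbation degrades detection and recognition.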