Search Results (15)

Search Parameters:
Keywords = German traffic sign recognition benchmark dataset

6 pages, 310 KB  
Proceeding Paper
Simulated Attacks and Defenses Using Traffic Sign Recognition Machine Learning Models
by Chu-Hsing Lin, Chao-Ting Yu and Yan-Ling Chen
Eng. Proc. 2025, 108(1), 11; https://doi.org/10.3390/engproc2025108011 - 1 Sep 2025
Viewed by 434
Abstract
Physically simulated attack experiments were conducted using LED lights of different colors, the You Only Look Once (YOLO) v5 model, and the German Traffic Sign Recognition Benchmark (GTSRB) dataset. We attacked and interfered with the traffic sign detection model and tested its recognition performance under LED-light interference, calculating the model's object-recognition accuracy with the interference present. A series of experiments tested the interference effects of colored lighting: attacks with different colored lights caused a measurable degree of interference to the machine learning model, degrading the self-driving vehicle's ability to recognize traffic signs, so that the self-driving system either failed to detect the traffic sign or committed recognition errors. To defend against this attack, we fed the attacked traffic sign images back into the training dataset and re-trained the machine learning model, enabling it to resist related attacks and avoid disturbance.

16 pages, 2108 KB  
Article
One Possible Path Towards a More Robust Task of Traffic Sign Classification in Autonomous Vehicles Using Autoencoders
by Ivan Martinović, Tomás de Jesús Mateo Sanguino, Jovana Jovanović, Mihailo Jovanović and Milena Djukanović
Electronics 2025, 14(12), 2382; https://doi.org/10.3390/electronics14122382 - 11 Jun 2025
Cited by 1 | Viewed by 939
Abstract
The increasing deployment of autonomous vehicles (AVs) has exposed critical vulnerabilities in traffic sign classification systems, particularly against adversarial attacks that can compromise safety. This study proposes a dual-purpose defense framework based on convolutional autoencoders to enhance robustness against two prominent white-box attacks: Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). Experiments on the German Traffic Sign Recognition Benchmark (GTSRB) dataset show that, although these attacks can significantly degrade system performance, the proposed models are capable of partially recovering lost accuracy. Notably, the defense demonstrates strong capabilities in both detecting and reconstructing manipulated traffic signs, even under low-perturbation scenarios. Additionally, a feature-based autoencoder is introduced, which—despite a high false positive rate—achieves perfect detection in critical conditions, a tradeoff considered acceptable in safety-critical contexts. These results highlight the potential of autoencoder-based architectures as a foundation for resilient AV perception while underscoring the need for hybrid models integrating visual-language frameworks for real-time, fail-safe operation.
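FGSM, the first of the two white-box attacks this defense targets, perturbs the input by a small step in the direction of the sign of the loss gradient. A minimal numpy sketch (not the paper's code) uses a logistic classifier, where the gradient has a closed form; against a CNN the same sign-of-gradient step is applied to the image:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step against a logistic classifier p = sigmoid(w.x + b).
    For binary cross-entropy loss, dL/dx = (p - y) * w, so the attack adds
    eps * sign(dL/dx) to maximally increase the loss under an L-inf budget."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)
```

PGD, the second attack, simply iterates this step several times, projecting back into the eps-ball around the original input after each step.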
(This article belongs to the Special Issue Autonomous and Connected Vehicles)

21 pages, 17670 KB  
Article
Advancing Traffic Sign Recognition: Explainable Deep CNN for Enhanced Robustness in Adverse Environments
by Ilyass Benfaress, Afaf Bouhoute and Ahmed Zinedine
Computers 2025, 14(3), 88; https://doi.org/10.3390/computers14030088 - 4 Mar 2025
Viewed by 3362
Abstract
This paper presents a traffic sign recognition (TSR) system based on a deep convolutional neural network (CNN) architecture, which proves to be extremely accurate in recognizing traffic signs under challenging conditions such as bad weather, low-resolution images, and various environmental factors. The proposed CNN is compared with other architectures, including GoogLeNet, AlexNet, DarkNet-53, ResNet-34, VGG-16, and MicronNet-BF. Experimental results confirm that the proposed CNN significantly improves recognition accuracy compared to existing models. To make the model interpretable, we utilize explainable AI (XAI) approaches, specifically Gradient-weighted Class Activation Mapping (Grad-CAM), which gives insight into how the system reaches its decisions. Evaluation on the Tsinghua-Tencent 100K (TT100K) traffic sign dataset showed that the proposed method significantly outperformed existing state-of-the-art methods. Additionally, we evaluated our model on the German Traffic Sign Recognition Benchmark (GTSRB) dataset to ensure generalization, demonstrating its ability to perform well in diverse traffic sign conditions. Degradations such as noise, contrast changes, blurring, and zoom effects were introduced to validate performance in real applications. These results indicate both the strength and reliability of the proposed CNN architecture for TSR tasks and make it a good option for integration into intelligent transportation systems (ITSs).

22 pages, 3158 KB  
Article
Sensitivity Analysis of Traffic Sign Recognition to Image Alteration and Training Data Size
by Arthur Rubio, Guillaume Demoor, Simon Chalmé, Nicolas Sutton-Charani and Baptiste Magnier
Information 2024, 15(10), 621; https://doi.org/10.3390/info15100621 - 10 Oct 2024
Cited by 1 | Viewed by 2471
Abstract
Accurately classifying road signs is crucial for autonomous driving due to the high stakes involved in ensuring safety and compliance. As Convolutional Neural Networks (CNNs) have largely replaced traditional Machine Learning models in this domain, the demand for substantial training data has increased. This study aims to compare the performance of classical Machine Learning (ML) models and Deep Learning (DL) models under varying amounts of training data, particularly focusing on altered signs to mimic real-world conditions. We evaluated three classical models: Support Vector Machine (SVM), Random Forest, and Linear Discriminant Analysis (LDA), and one Deep Learning model: Convolutional Neural Network (CNN). Using the German Traffic Sign Recognition Benchmark (GTSRB) dataset, which includes approximately 40,000 German traffic signs, we introduced digital alterations to simulate conditions such as environmental wear or vandalism. Additionally, the Histogram of Oriented Gradients (HOG) descriptor was used to assist classical models. Bayesian optimization and k-fold cross-validation were employed for model fine-tuning and performance assessment. Our findings reveal a threshold in training data beyond which accuracy plateaus. Classical models showed a linear performance decrease under increasing alteration, while CNNs, despite being more robust to alterations, did not significantly outperform classical models in overall accuracy. Ultimately, classical Machine Learning models demonstrated performance comparable to CNNs under certain conditions, suggesting that effective road sign classification can be achieved with less computationally intensive approaches.
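The HOG descriptor that assists the classical models can be sketched as per-cell histograms of gradient orientations weighted by gradient magnitude. This is a simplified, numpy-only version with assumed cell size and bin count (no block normalization), not the exact descriptor configuration used in the study:

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Simplified HOG: one L2-normalized orientation histogram per cell."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # horizontal central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # vertical central differences
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation in [0, 180)
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            idx = (a / (180 / bins)).astype(int) % bins
            hist = np.zeros(bins)
            np.add.at(hist, idx, m)              # magnitude-weighted votes
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)
```

On a 32 × 32 sign crop with 8 × 8 cells and 9 bins, this yields a 144-dimensional feature vector that an SVM, Random Forest, or LDA can consume directly.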
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence with Applications)

19 pages, 2620 KB  
Article
Research on the Application of Pruning Algorithm Based on Local Linear Embedding Method in Traffic Sign Recognition
by Wei Wang and Xiaorui Liu
Appl. Sci. 2024, 14(16), 7184; https://doi.org/10.3390/app14167184 - 15 Aug 2024
Cited by 1 | Viewed by 1199
Abstract
Efficient traffic sign recognition is crucial to facilitating the intelligent driving of new energy vehicles. However, current approaches like the Vision Transformer (ViT) model often impose high storage and computational demands, escalating hardware costs. This paper presents a similarity-based filter pruning method built on locally linear embedding (LLE). Using the alternating direction method of multipliers and an LLE loss in the model training function, the proposed method prunes the model mainly by evaluating the similarity of the filters within each network layer. According to a pre-set pruning threshold, similar filter pairs are identified, and the filter with the larger cross-entropy value is retained. Results on the Belgium Traffic Sign (BelgiumTS) and German Traffic Sign Recognition Benchmark (GTSRB) datasets indicate that the proposed similarity filter pruning based on locally linear embedding (SJ-LLE) algorithm can reduce the number of parameters in the multi-head self-attention and multi-layer perceptron (MLP) modules of the ViT model by more than 60% with an acceptable loss of accuracy. The scale of the ViT model is greatly reduced, which favors deploying the model in embedded traffic sign recognition equipment. The paper also experimentally supports the hypothesis that using the LLE algorithm as the loss function for model training before pruning helps reduce the loss of model performance during pruning.
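The similarity-based pruning step (independent of the LLE training loss) can be sketched as pairwise cosine similarity between flattened filters, dropping near-duplicates above a threshold. Note the paper retains the filter with the larger cross-entropy value from each similar pair; this sketch simply keeps the first:

```python
import numpy as np

def prune_similar_filters(filters, thresh=0.95):
    """Return indices of filters to keep, removing later filters whose cosine
    similarity to an already-kept filter exceeds thresh.

    filters: array of shape (n_filters, ...) holding conv weights.
    """
    flat = filters.reshape(len(filters), -1)
    flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
    sim = flat @ flat.T                      # pairwise cosine similarity matrix
    keep, removed = [], set()
    for i in range(len(filters)):
        if i in removed:
            continue
        keep.append(i)
        for j in range(i + 1, len(filters)):
            if sim[i, j] > thresh:           # near-duplicate of a kept filter
                removed.add(j)
    return keep
```

The kept indices can then be used to slice the corresponding weight tensors, shrinking the attention and MLP modules in place.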
(This article belongs to the Special Issue Optimization and Simulation Techniques for Transportation)

21 pages, 4576 KB  
Article
Exploring Explainable Artificial Intelligence Techniques for Interpretable Neural Networks in Traffic Sign Recognition Systems
by Muneeb A. Khan and Heemin Park
Electronics 2024, 13(2), 306; https://doi.org/10.3390/electronics13020306 - 10 Jan 2024
Cited by 9 | Viewed by 3663
Abstract
Traffic Sign Recognition (TSR) plays a vital role in intelligent transportation systems (ITS) to improve road safety and optimize traffic management. While existing TSR models perform well in challenging scenarios, their lack of transparency and interpretability hinders reliability, trustworthiness, validation, and bias identification. To address this issue, we propose a Convolutional Neural Network (CNN)-based model for TSR and evaluate its performance on three benchmark datasets: the German Traffic Sign Recognition Benchmark (GTSRB), the Indian Traffic Sign Dataset (ITSD), and the Belgian Traffic Sign Dataset (BTSD). The proposed model achieves an accuracy of 98.85% on GTSRB, 94.73% on ITSD, and 92.69% on BTSD, outperforming several state-of-the-art frameworks, such as VGG19, VGG16, ResNet50V2, MobileNetV2, DenseNet121, DenseNet201, NASNetMobile, and EfficientNet, while also providing faster training and response times. We further enhance our model by incorporating explainable AI (XAI) techniques, specifically Local Interpretable Model-Agnostic Explanations (LIME) and Gradient-weighted Class Activation Mapping (Grad-CAM), providing clear insights into the proposed model's decision-making process. This integration allows the extension of our TSR model to various engineering domains, including autonomous vehicles, advanced driver assistance systems (ADAS), and smart traffic control systems. The practical implementation of our model ensures real-time, accurate recognition of traffic signs, thus optimizing traffic flow and minimizing accident risks.
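Given the activations of the last convolutional layer and the gradients of the target class score with respect to them (both obtainable from any deep learning framework via hooks), the Grad-CAM computation itself reduces to a few array operations. A minimal sketch, with framework-specific feature extraction omitted:

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: (C, H, W) arrays from the last conv layer.
    Returns an (H, W) localization map normalized to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))             # global-average-pooled grads
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                          # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

Upsampled to the input resolution and overlaid on the sign image, the map highlights which regions drove the predicted class.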

18 pages, 10650 KB  
Article
Zero-Shot Traffic Sign Recognition Based on Midlevel Feature Matching
by Yaozong Gan, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa and Miki Haseyama
Sensors 2023, 23(23), 9607; https://doi.org/10.3390/s23239607 - 4 Dec 2023
Cited by 8 | Viewed by 3144
Abstract
Traffic sign recognition is a complex and challenging yet popular problem that can assist drivers on the road and reduce traffic accidents. Most existing methods for traffic sign recognition use convolutional neural networks (CNNs) and can achieve high recognition accuracy. However, these methods first require a large number of carefully crafted traffic sign datasets for the training process. Moreover, since traffic signs differ in each country and there is a variety of traffic signs, these methods need to be fine-tuned when recognizing new traffic sign categories. To address these issues, we propose a traffic sign matching method for zero-shot recognition. Our proposed method can perform traffic sign recognition without training data by directly matching the similarity of target and template traffic sign images. Our method uses the midlevel features of CNNs to obtain robust feature representations of traffic signs without additional training or fine-tuning. We discovered that midlevel features improve the accuracy of zero-shot traffic sign recognition. The proposed method achieves promising recognition results on the German Traffic Sign Recognition Benchmark open dataset and a real-world dataset taken from Sapporo City, Japan.
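The matching step of such a zero-shot approach can be sketched as nearest-template search under cosine similarity in the midlevel feature space; the features themselves come from a frozen, pre-trained CNN, which is omitted here:

```python
import numpy as np

def match_sign(target_feat, template_feats, labels):
    """Zero-shot recognition by nearest template in feature space.

    target_feat: (d,) midlevel CNN feature of the query sign image.
    template_feats: (n, d) features, one template image per class.
    labels: class names aligned with template_feats rows.
    """
    t = target_feat / (np.linalg.norm(target_feat) + 1e-12)
    T = template_feats / (np.linalg.norm(template_feats, axis=1, keepdims=True) + 1e-12)
    sims = T @ t                         # cosine similarity to every template
    return labels[int(np.argmax(sims))]  # label of the most similar template
```

Because no classifier head is trained, adding a new sign category only requires adding one template feature vector and its label.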
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)

18 pages, 5768 KB  
Article
A Lightweight Convolutional Neural Network (CNN) Architecture for Traffic Sign Recognition in Urban Road Networks
by Muneeb A. Khan, Heemin Park and Jinseok Chae
Electronics 2023, 12(8), 1802; https://doi.org/10.3390/electronics12081802 - 11 Apr 2023
Cited by 36 | Viewed by 9847
Abstract
Recognizing and classifying traffic signs is a challenging task that can significantly improve road safety. Deep neural networks have achieved impressive results in various applications, including object identification and automatic recognition of traffic signs. These deep neural network-based traffic sign recognition systems may have limitations in practical applications due to their computational requirements and resource consumption. To address this issue, this paper presents a lightweight neural network for traffic sign recognition that achieves high accuracy and precision with fewer trainable parameters. The proposed model is trained on the German Traffic Sign Recognition Benchmark (GTSRB) and Belgium Traffic Sign (BelgiumTS) datasets. Experimental results demonstrate that the proposed model achieved 98.41% and 92.06% accuracy on the GTSRB and BelgiumTS datasets, respectively, outperforming several state-of-the-art models such as GoogLeNet, AlexNet, VGG16, VGG19, MobileNetv2, and ResNetv2. Furthermore, the proposed model outperformed these methods by margins ranging from 0.1 to 4.20 percentage points on the GTSRB dataset and from 9.33 to 33.18 percentage points on the BelgiumTS dataset.
(This article belongs to the Section Computer Science & Engineering)

19 pages, 2082 KB  
Article
Enhanced Traffic Sign Recognition with Ensemble Learning
by Xin Roy Lim, Chin Poo Lee, Kian Ming Lim and Thian Song Ong
J. Sens. Actuator Netw. 2023, 12(2), 33; https://doi.org/10.3390/jsan12020033 - 7 Apr 2023
Cited by 13 | Viewed by 5484
Abstract
With the growing trend in autonomous vehicles, accurate recognition of traffic signs has become crucial. This research focuses on the use of convolutional neural networks for traffic sign classification, specifically utilizing pre-trained models of ResNet50, DenseNet121, and VGG16. To enhance the accuracy and robustness of the model, the authors implement an ensemble learning technique with majority voting, to combine the predictions of multiple CNNs. The proposed approach was evaluated on three different traffic sign datasets: the German Traffic Sign Recognition Benchmark (GTSRB), the Belgium Traffic Sign Dataset (BTSD), and the Chinese Traffic Sign Database (TSRD). The results demonstrate the efficacy of the ensemble approach, with recognition rates of 98.84% on the GTSRB dataset, 98.33% on the BTSD dataset, and 94.55% on the TSRD dataset.
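Majority voting over the three CNNs' predictions is straightforward to sketch in numpy, assuming integer class labels (the exact tie-breaking rule used in the paper is not stated; here ties go to the lowest label):

```python
import numpy as np

def majority_vote(predictions):
    """predictions: (n_models, n_samples) integer class predictions.
    Returns the per-sample majority class."""
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    # per-sample vote counts: shape (n_classes, n_samples)
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)
```

With ResNet50, DenseNet121, and VGG16 as the three voters, a sample is misclassified only when at least two models agree on a wrong label, which is what drives the accuracy gain over any single network.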
(This article belongs to the Special Issue Advances in Intelligent Transportation Systems (ITS))

11 pages, 3461 KB  
Communication
Accurate Image Multi-Class Classification Neural Network Model with Quantum Entanglement Approach
by Farina Riaz, Shahab Abdulla, Hajime Suzuki, Srinjoy Ganguly, Ravinesh C. Deo and Susan Hopkins
Sensors 2023, 23(5), 2753; https://doi.org/10.3390/s23052753 - 2 Mar 2023
Cited by 19 | Viewed by 5521
Abstract
Quantum machine learning (QML) has attracted significant research attention over the last decade. Multiple models have been developed to demonstrate the practical applications of the quantum properties. In this study, we first demonstrate that the previously proposed quanvolutional neural network (QuanvNN) using a randomly generated quantum circuit improves the image classification accuracy of a fully connected neural network against the Modified National Institute of Standards and Technology (MNIST) dataset and the Canadian Institute for Advanced Research 10 class (CIFAR-10) dataset from 92.0% to 93.0% and from 30.5% to 34.9%, respectively. We then propose a new model referred to as a Neural Network with Quantum Entanglement (NNQE) using a strongly entangled quantum circuit combined with Hadamard gates. The new model further improves the image classification accuracy of MNIST and CIFAR-10 to 93.8% and 36.0%, respectively. Unlike other QML methods, the proposed method does not require optimization of the parameters inside the quantum circuits; hence, it requires only limited use of the quantum circuit. Given the small number of qubits and relatively shallow depth of the proposed quantum circuit, the proposed method is well suited for implementation in noisy intermediate-scale quantum computers. While promising results were obtained by the proposed method when applied to the MNIST and CIFAR-10 datasets, a test against a more complicated German Traffic Sign Recognition Benchmark (GTSRB) dataset degraded the image classification accuracy from 82.2% to 73.4%. The exact causes of the performance improvement and degradation are currently an open question, prompting further research on the understanding and design of suitable quantum circuits for image classification neural networks for colored and complex data.
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)

14 pages, 1360 KB  
Article
An Evasion Attack against Stacked Capsule Autoencoder
by Jiazhu Dai and Siwei Xiong
Algorithms 2022, 15(2), 32; https://doi.org/10.3390/a15020032 - 19 Jan 2022
Viewed by 2828
Abstract
Capsule networks are a type of neural network that use the spatial relationships between features to classify images. By capturing the poses and relative positions of features, this network is better able to recognize affine transformations and surpasses traditional convolutional neural networks (CNNs) when handling translation, rotation, and scaling. The stacked capsule autoencoder (SCAE) is a state-of-the-art capsule network that encodes an image into capsules, each containing poses of features and their correlations. The encoded contents are then input into the downstream classifier to predict the image categories. Existing research has mainly focused on the security of capsule networks with dynamic routing or expectation maximization (EM) routing, while little attention has been given to the security and robustness of SCAEs. In this paper, we propose an evasion attack against SCAEs. A perturbation is generated based on the output of the object capsules in the model and added to an image to reduce the contribution of the object capsules related to the image's original category, so that the perturbed image is misclassified. We evaluate the attack in an image classification experiment on the Modified National Institute of Standards and Technology (MNIST), Fashion-MNIST, and German Traffic Sign Recognition Benchmark (GTSRB) datasets, and the average attack success rate reaches 98.6%. The experimental results indicate that the attack achieves high success rates and stealthiness, confirming that the SCAE has a security vulnerability that allows for the generation of adversarial samples. Our work seeks to highlight the threat of this attack and focus attention on SCAE's security.

17 pages, 2948 KB  
Article
Detection of Small Size Traffic Signs Using Regressive Anchor Box Selection and DBL Layer Tweaking in YOLOv3
by Yawar Rehman, Hafsa Amanullah, Dost Muhammad Saqib Bhatti, Waqas Tariq Toor, Muhammad Ahmad and Manuel Mazzara
Appl. Sci. 2021, 11(23), 11555; https://doi.org/10.3390/app112311555 - 6 Dec 2021
Cited by 4 | Viewed by 3320
Abstract
Traffic sign recognition is a key module of autonomous cars and driver assistance systems. Traffic sign detection accuracy and inference time are the two most important parameters. Current methods for traffic sign recognition are very accurate; however, they do not meet the requirement for real-time detection. Others are fast enough for real-time traffic sign detection but fall short in accuracy. This paper proposes an accuracy improvement in the YOLOv3 network, a very fast detection framework. The proposed method contributes to the accurate detection of traffic signs that are small in terms of image size and helps to reduce false positives and miss rates. In addition, we propose an anchor box selection algorithm that helps in achieving the optimal size and scale of the anchor boxes. The proposed method therefore supports real-time detection of small traffic signs, ultimately achieving an optimal balance between accuracy and inference time. The proposed network is evaluated on two publicly available datasets, namely the German Traffic Sign Detection Benchmark (GTSDB) and the Swedish Traffic Sign dataset (STS), and its performance shows that the proposed approach achieves a decent balance between mAP and inference time.
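The paper's regressive anchor selection algorithm is not reproduced here, but the standard baseline it refines, IoU-based k-means over ground-truth box widths and heights (as popularized by YOLOv2/v3), gives the flavor:

```python
import numpy as np

def iou_wh(boxes, anchors):
    # IoU between (w, h) pairs, with boxes and anchors aligned at one corner
    w = np.minimum(boxes[:, None, 0], anchors[None, :, 0])
    h = np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
        + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=50):
    """Cluster ground-truth (width, height) pairs into k anchor shapes.
    Naive init from the first k boxes; k-means++-style init is better in practice."""
    anchors = boxes[:k].astype(float)
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # best-matching anchor
        for i in range(k):
            if np.any(assign == i):
                anchors[i] = boxes[assign == i].mean(axis=0)
    return anchors
```

Clustering by IoU rather than Euclidean distance keeps small signs from being dominated by large boxes, which matters precisely for the small-object regime this paper targets.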
(This article belongs to the Special Issue Advanced Intelligent Imaging Technology Ⅲ)

19 pages, 26254 KB  
Article
Learning Region-Based Attention Network for Traffic Sign Recognition
by Ke Zhou, Yufei Zhan and Dongmei Fu
Sensors 2021, 21(3), 686; https://doi.org/10.3390/s21030686 - 20 Jan 2021
Cited by 40 | Viewed by 4212
Abstract
Traffic sign recognition in poor environments has always been a challenge in self-driving. Although a few works have achieved good results in the field of traffic sign recognition, there is currently a lack of traffic sign benchmarks containing many complex factors and of robust networks. In this paper, we first propose an ice-environment traffic sign recognition benchmark (ITSRB) and detection benchmark (ITSDB), annotated in the COCO2017 format. The benchmarks include 5806 images with 43,290 traffic sign instances under different climate, light, time, and occlusion conditions. Second, we tested the robustness of Libra-RCNN and HRNetv2p on the ITSDB compared with Faster-RCNN. Libra-RCNN performed well and proved that our ITSDB dataset does increase the challenge of this task. Third, we propose an attention network for high-resolution traffic sign classification (PFANet) and conduct ablation studies on the designed parallel fusion attention module. Experiments show that our network reaches 93.57% accuracy on the ITSRB and performs as well as the newest and most effective networks on the German Traffic Sign Recognition Benchmark (GTSRB).
(This article belongs to the Section Intelligent Sensors)

13 pages, 4637 KB  
Article
New Dark Area Sensitive Tone Mapping for Deep Learning Based Traffic Sign Recognition
by Jameel Ahmed Khan, Donghoon Yeo and Hyunchul Shin
Sensors 2018, 18(11), 3776; https://doi.org/10.3390/s18113776 - 5 Nov 2018
Cited by 17 | Viewed by 5863
Abstract
In this paper, we propose a new Intelligent Traffic Sign Recognition (ITSR) system with illumination preprocessing capability. Our proposed Dark Area Sensitive Tone Mapping (DASTM) technique enhances the illumination of only the dark regions of an image with little impact on bright regions. We used this technique as a pre-processing module for our new traffic sign recognition system, combining DASTM with a TS detector, an optimized version of YOLOv3, for the detection of three classes of traffic signs. We trained ITSR on a dataset of Korean traffic signs with prohibitory, mandatory, and danger classes. We achieved a mean average precision (mAP) of 90.07% (the previous best result was 86.61%) on the challenging Korean Traffic Sign Detection (KTSD) dataset and 100% on the German Traffic Sign Detection Benchmark (GTSDB). Comparisons of ITSR with the latest D-Patches, the TS detector, and YOLOv3 show that our new ITSR significantly outperforms them in recognition performance.
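The DASTM algorithm itself is not given in the abstract; as an illustration of the general idea only (brighten dark regions while leaving bright ones nearly untouched), here is a hypothetical weighted gamma lift, with `pivot` and `gamma` as assumed parameters rather than anything from the paper:

```python
import numpy as np

def dark_area_tone_map(img, gamma=0.5, pivot=0.5):
    """Dark-region-selective brightening sketch (not the paper's DASTM).
    img: float array with values in [0, 1]."""
    # weight is 1 in deep shadow, fades to 0 at the pivot intensity and above
    dark_weight = np.clip((pivot - img) / pivot, 0, 1)
    brightened = img ** gamma                   # gamma < 1 lifts dark values
    return dark_weight * brightened + (1 - dark_weight) * img
```

Applied before detection, such a map exposes sign detail in shadowed or night-time regions without blowing out already-bright areas, which is the preprocessing role DASTM plays ahead of the TS detector.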
(This article belongs to the Section Intelligent Sensors)

13 pages, 3218 KB  
Article
A Real-Time Chinese Traffic Sign Detection Algorithm Based on Modified YOLOv2
by Jianming Zhang, Manting Huang, Xiaokang Jin and Xudong Li
Algorithms 2017, 10(4), 127; https://doi.org/10.3390/a10040127 - 16 Nov 2017
Cited by 282 | Viewed by 27212
Abstract
Traffic sign detection is an important task in traffic sign recognition systems. Chinese traffic signs have unique features compared with the traffic signs of other countries. Convolutional neural networks (CNNs) have achieved a breakthrough in computer vision tasks and great success in traffic sign classification. In this paper, we present a Chinese traffic sign detection algorithm based on a deep convolutional network. To achieve real-time Chinese traffic sign detection, we propose an end-to-end convolutional network inspired by YOLOv2. In view of the characteristics of traffic signs, we use multiple 1 × 1 convolutional layers in the intermediate layers of the network and reduce the number of convolutional layers in the top layers to lower the computational complexity. To effectively detect small traffic signs, we divide the input images into dense grids to obtain finer feature maps. Moreover, we expand the Chinese traffic sign dataset (CTSD) and improve the annotation information, which is available online. All experimental results, evaluated on our expanded CTSD and the German Traffic Sign Detection Benchmark (GTSDB), indicate that the proposed method is faster and more robust. The fastest detection speed achieved was 0.017 s per image.
