# Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach


## Abstract


## 1. Introduction

## 2. Perturbation-Based Methods

## 3. Implementation of Traffic Sign and Traffic Light Classifier

## 4. CNN Layer Perturbation-Based Forward

## 5. Experiments and Results

## 6. Discussion

## 7. Conclusions

## Author Contributions

## Funding

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest


**Figure 1.** The architecture of the 5-layer CNN used for road sign and traffic light classification in the PRYSTINE project.

**Figure 3.** Examples of input images from the test dataset: (**a**–**c**) traffic signs without occlusion; (**d**) an occluded traffic sign.

**Figure 4.** The output of the 4th-layer feature map of the PRYSTINE CNN before max pooling: (**a**) without masking; (**b**) with a 3 × 3 mask applied to the region around the centroid.
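The perturbation illustrated in Figure 4 can be sketched as follows: zero out a small region of a 2-D feature map centred on its activation centroid, then feed the masked map forward to measure how much the classification depends on that region. This is a minimal NumPy sketch, not the authors' implementation; the function name and the centroid computation are assumptions for illustration.

```python
import numpy as np

def mask_around_centroid(feature_map, size=3):
    """Zero a size x size region centred on the activation centroid.

    feature_map: 2-D array of non-negative activations.
    Returns a masked copy; the original is left untouched.
    """
    h, w = feature_map.shape
    total = feature_map.sum()
    if total == 0:
        return feature_map.copy()  # no activation, nothing to mask
    # Activation-weighted centroid coordinates.
    ys, xs = np.mgrid[0:h, 0:w]
    cy = int(round((ys * feature_map).sum() / total))
    cx = int(round((xs * feature_map).sum() / total))
    # Zero the square region around the centroid, clipped to the map.
    r = size // 2
    masked = feature_map.copy()
    masked[max(0, cy - r):cy + r + 1, max(0, cx - r):cx + r + 1] = 0.0
    return masked
```

Comparing the classifier's output on the original and masked maps gives a per-region importance signal of the kind the perturbation-based analysis relies on.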

**Figure 5.** An example of the pooled output signals of the DNN from the 4th layer, representing the basic features in the DNN and the linear classifier: (**a**) the signal/weights of the class that should have been selected as correct; (**b**) the most highly correlated signal/weights, incorrectly selected as the correct class.
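The comparison shown in Figure 5 can be sketched as matching the pooled feature signal against each class's weight vector and selecting the best-correlated class. This is a hypothetical illustration of that selection step, assuming plain Pearson correlation as the matching score; the function name is an assumption, not the paper's API.

```python
import numpy as np

def classify_by_correlation(pooled_features, class_weights):
    """Return the index of the class whose weight vector correlates
    most strongly with the pooled feature signal.

    pooled_features: 1-D array of pooled activations.
    class_weights: 2-D array, one weight vector per class.
    """
    scores = [np.corrcoef(pooled_features, w)[0, 1] for w in class_weights]
    return int(np.argmax(scores))
```

A misclassification of the kind in Figure 5b occurs when a wrong class's weight vector happens to correlate more strongly with the pooled signal than the correct class's does.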

| Compression Rate, % | Precision (Training Set of 500 Images Used for Pruning), % | Precision (Test Set of 500 Images), % |
|---|---|---|
| Original PRYSTINE CNN | 94.4 | 94.4 |
| 3.125 | 94.4 | 93.0 |
| 5.47 | 93.4 | 92.2 |
| 17.58 | 89.2 | 88.0 |
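The table above relates a compression rate (the fraction of parameters removed) to the resulting precision. As a minimal sketch of how such a rate maps onto pruned weights, the snippet below zeroes the given fraction of smallest-magnitude weights; note this uses raw magnitude as a stand-in importance score, whereas the paper's pruning is guided by the perturbation-based analysis, and the function name is an assumption.

```python
import numpy as np

def prune_by_rate(weights, compression_rate):
    """Zero the given fraction of weights with the smallest magnitude.

    weights: array of model parameters.
    compression_rate: fraction in [0, 1] of parameters to remove.
    Returns a pruned copy; ties at the threshold may all be removed.
    """
    flat = np.abs(weights).ravel()
    k = int(round(compression_rate * flat.size))
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned
```

At a 3.125% rate, for example, roughly 1 in 32 parameters is zeroed, which matches the table's observation that small compression rates cost little test precision.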


© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Sudars, K.; Namatēvs, I.; Ozols, K.
Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach. *J. Imaging* **2022**, *8*, 30.
https://doi.org/10.3390/jimaging8020030
