Article

Neuroscope: An Explainable AI Toolbox for Semantic Segmentation and Image Classification of Convolutional Neural Nets

German Research Centre for Artificial Intelligence, 66123 Saarbrücken, Germany
*
Author to whom correspondence should be addressed.
Academic Editor: Antonio Fernández-Caballero
Appl. Sci. 2021, 11(5), 2199; https://doi.org/10.3390/app11052199
Received: 29 January 2021 / Revised: 24 February 2021 / Accepted: 26 February 2021 / Published: 3 March 2021
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))
Trust in artificial intelligence (AI) predictions is crucial for the widespread acceptance of new technologies, especially in sensitive areas like autonomous driving. The need for tools that explain deep learning on images is therefore pressing. Our proposed toolbox Neuroscope addresses this demand by offering state-of-the-art visualization algorithms for image classification and newly adapted methods for semantic segmentation with convolutional neural nets (CNNs). With its easy-to-use graphical user interface (GUI), it provides visualization on all layers of a CNN. Due to its open model-view-controller architecture, networks generated and trained with Keras and PyTorch are processable, with an interface allowing extension to additional frameworks. We demonstrate the explanation abilities provided by Neuroscope using the example of traffic scene analysis.
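The abstract does not show Neuroscope's API, but the class of attribution methods such toolboxes implement can be illustrated with a minimal occlusion-sensitivity sketch: slide a mask over the input and record how much the model's score drops for each masked region. The scoring function below is a hypothetical stand-in for a trained CNN, not part of Neuroscope.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4, baseline=0.0):
    """Occlusion sensitivity: mask one patch at a time and record
    how much the model's score drops when that region is hidden."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Stand-in "model": scores the mean brightness of the top-left quadrant,
# so only occlusions inside that quadrant should reduce the score.
score = lambda img: img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_map(img, score)
# heat is a 4x4 grid: positive where the masked patch mattered, zero elsewhere.
```

A real toolbox applies the same idea (and gradient-based variants) to every layer of a trained network and renders the resulting heatmap over the input image.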
Keywords: explainable AI; convolutional neural nets; semantic segmentation; image classification
  • Externally hosted supplementary file 1
    Link: https://github.com/c3di/neuroscope
    Description: Neuroscope is available online at https://github.com/c3di/neuroscope
  • Externally hosted supplementary file 2
    Link: https://www.nuscenes.org
    Description: The nuScenes data set can be downloaded at www.nuscenes.org.
MDPI and ACS Style

Schorr, C.; Goodarzi, P.; Chen, F.; Dahmen, T. Neuroscope: An Explainable AI Toolbox for Semantic Segmentation and Image Classification of Convolutional Neural Nets. Appl. Sci. 2021, 11, 2199. https://doi.org/10.3390/app11052199

AMA Style

Schorr C, Goodarzi P, Chen F, Dahmen T. Neuroscope: An Explainable AI Toolbox for Semantic Segmentation and Image Classification of Convolutional Neural Nets. Applied Sciences. 2021; 11(5):2199. https://doi.org/10.3390/app11052199

Chicago/Turabian Style

Schorr, Christian, Payman Goodarzi, Fei Chen, and Tim Dahmen. 2021. "Neuroscope: An Explainable AI Toolbox for Semantic Segmentation and Image Classification of Convolutional Neural Nets" Applied Sciences 11, no. 5: 2199. https://doi.org/10.3390/app11052199

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
