Open Access Article

Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities

Lister Hill National Center for Biomedical Communications, National Library of Medicine, 8600 Rockville Pike, Bethesda, MD 20894, USA
* Author to whom correspondence should be addressed.
Diagnostics 2019, 9(2), 38; https://doi.org/10.3390/diagnostics9020038
Received: 5 March 2019 / Revised: 29 March 2019 / Accepted: 1 April 2019 / Published: 3 April 2019
(This article belongs to the Section Medical Imaging)

Abstract

Deep learning (DL) methods are increasingly being applied for developing reliable computer-aided detection (CADe), diagnosis (CADx), and information retrieval algorithms. However, challenges in interpreting and explaining the learned behavior of DL models hinder their adoption and use in real-world systems. In this study, we propose a novel method called “Class-selective Relevance Mapping” (CRM) for localizing and visualizing discriminative regions of interest (ROI) within a medical image. Such visualizations offer an improved explanation of the predictions of convolutional neural network (CNN)-based DL models. We demonstrate the effectiveness of CRM in classifying medical imaging modalities toward automatically labeling them for visual information retrieval applications. CRM is based on a linear sum of incremental mean squared errors (MSE) calculated at the output layer of the CNN model. It measures both positive and negative contributions of each spatial element in the feature maps produced by the last convolution layer, leading to the correct classification of an input image. A series of experiments on a “multi-modality” CNN model designed for classifying seven different types of image modalities shows that the proposed method is significantly better at detecting and localizing discriminative ROIs than other state-of-the-art class-activation methods. Further, to visualize its effectiveness, we generate “class-specific” ROI maps by averaging the CRM scores of images in each modality class, and characterize the visual explanations through their differing size, shape, and location for our multi-modality CNN model, which achieved over 98% performance on a dataset constructed from publicly available images.
Keywords: class-selective relevance mapping; convolutional neural network; modality classification; visual localization; discriminative region of interest
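
The abstract describes the core computation behind CRM: score each spatial element of the last convolutional feature maps by the incremental MSE it induces at the output layer when that element is removed. As a minimal illustrative sketch only, and not the authors' reference implementation, the NumPy snippet below assumes a CNN whose last convolutional layer feeds a global average pooling (GAP) layer followed by a dense output layer; the function name and its arguments are hypothetical.

```python
import numpy as np

def class_selective_relevance_map(feature_maps, dense_weights, dense_bias):
    """Illustrative sketch of a CRM-style relevance map (not the paper's exact code).

    feature_maps  : (H, W, K) activations of the last convolutional layer
    dense_weights : (K, N) weights of the dense output layer fed by GAP
    dense_bias    : (N,) bias of the dense output layer
    """
    H, W, K = feature_maps.shape

    # Baseline class scores with every spatial element present.
    gap = feature_maps.mean(axis=(0, 1))          # (K,) GAP features
    baseline = gap @ dense_weights + dense_bias   # (N,) output-layer scores

    crm = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            # Remove spatial element (i, j) and recompute the GAP features.
            gap_ij = gap - feature_maps[i, j, :] / (H * W)
            scores = gap_ij @ dense_weights + dense_bias
            # Sum of squared score changes over all output nodes: elements
            # that either raise or lower the class scores (positive or
            # negative contributions) both yield a large incremental error.
            crm[i, j] = np.sum((baseline - scores) ** 2)
    return crm
```

Under these assumptions, the resulting H × W map would then be upsampled to the input resolution (e.g., bilinearly) and overlaid on the image as a heatmap to visualize the discriminative ROI, as is standard for class-activation-style methods.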

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Kim, I.; Rajaraman, S.; Antani, S. Visual Interpretation of Convolutional Neural Network Predictions in Classifying Medical Image Modalities. Diagnostics 2019, 9, 38.

