Article

An Interaction-Based Convolutional Neural Network (ICNN) Toward a Better Understanding of COVID-19 X-ray Images

Department of Statistics, Columbia University, New York, NY 10027, USA
* Author to whom correspondence should be addressed.
Academic Editor: Mircea-Bogdan Radac
Algorithms 2021, 14(11), 337; https://doi.org/10.3390/a14110337
Received: 2 November 2021 / Revised: 17 November 2021 / Accepted: 17 November 2021 / Published: 19 November 2021
(This article belongs to the Special Issue Interpretability, Accountability and Robustness in Machine Learning)
The field of explainable artificial intelligence (XAI) aims to build explainable and interpretable machine learning (or deep learning) methods without sacrificing prediction performance. Convolutional neural networks (CNNs) have been successful in making predictions, especially in image classification. These popular and well-documented successes use very deep CNNs such as VGG16, DenseNet121, and Xception. However, these well-known deep learning models use tens of millions of parameters based on a large number of pretrained filters that have been repurposed from previous data sets. Among these filters, a large portion contains no information yet remains in the models as input features. There is, thus far, no effective method for omitting these noisy features from a data set, and their presence degrades prediction performance. In this paper, a novel interaction-based convolutional neural network (ICNN) is introduced that does not make assumptions about the relevance of local information. Instead, a model-free influence score (I-score) is proposed to extract influential information directly from images to form important variable modules. This technique replaces all pretrained filters found by trial and error with explainable, influential, and predictive variable sets (modules) determined by the I-score. In other words, future researchers need not rely on pretrained filters; the proposed algorithm identifies only those variables or pixels with high I-score values, which are highly predictive and important. The proposed method and algorithm were tested on a real-world data set, and a state-of-the-art prediction performance of 99.8% was achieved without sacrificing the explanatory power of the model. The proposed design can efficiently screen patients infected by COVID-19 before human diagnosis and can serve as a benchmark for addressing future XAI problems in large-scale data sets.
Keywords: explainable artificial intelligence; convolutional neural networks; deep learning; chest X-ray image; I-score
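
To make the abstract's central quantity concrete, the following is a minimal Python sketch of the partition-based influence score (I-score) associated with the authors' earlier work on variable selection. The pixel binarization, the normalization by n, and the names i_score, X_bin, and labels are illustrative assumptions, not the paper's exact implementation.

    import numpy as np

    def i_score(X_subset, y):
        # Partition-based influence score (I-score) for a set of discrete
        # explanatory variables. X_subset: (n, m) array of discretized
        # features (e.g., binarized pixels); y: (n,) array of responses
        # (e.g., 0/1 COVID-19 labels). Observations are grouped into the
        # cells induced by the m variables; cells whose local mean of y
        # deviates from the grand mean contribute quadratically, weighted
        # by the squared cell size.
        X_subset = np.asarray(X_subset)
        y = np.asarray(y, dtype=float)
        n = len(y)
        y_bar = y.mean()
        # Each unique row of X_subset defines one partition cell.
        _, cell_ids = np.unique(X_subset, axis=0, return_inverse=True)
        score = 0.0
        for j in np.unique(cell_ids):
            in_cell = cell_ids == j
            n_j = int(in_cell.sum())
            score += n_j ** 2 * (y[in_cell].mean() - y_bar) ** 2
        return score / n  # normalizing by n is one common convention

    # Hypothetical usage: rank candidate pixel modules of an X-ray patch.
    # X_bin: (n_images, n_pixels) binarized pixel matrix; labels: 0/1 status.
    # best = max(modules, key=lambda cols: i_score(X_bin[:, cols], labels))

Because the score depends only on cell means of the response, it requires no fitted model, which is what makes the resulting variable modules directly interpretable, consistent with the "model-free" claim in the abstract.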

MDPI and ACS Style

Lo, S.-H.; Yin, Y. An Interaction-Based Convolutional Neural Network (ICNN) Toward a Better Understanding of COVID-19 X-ray Images. Algorithms 2021, 14, 337. https://doi.org/10.3390/a14110337

AMA Style

Lo S-H, Yin Y. An Interaction-Based Convolutional Neural Network (ICNN) Toward a Better Understanding of COVID-19 X-ray Images. Algorithms. 2021; 14(11):337. https://doi.org/10.3390/a14110337

Chicago/Turabian Style

Lo, Shaw-Hwa, and Yiqiao Yin. 2021. "An Interaction-Based Convolutional Neural Network (ICNN) Toward a Better Understanding of COVID-19 X-ray Images" Algorithms 14, no. 11: 337. https://doi.org/10.3390/a14110337

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
