# COVID-19 Symptoms Detection Based on NasNetMobile with Explainable AI Using Various Imaging Modalities


## Abstract


## 1. Introduction

## 2. Methodology

## 3. Results

#### 3.1. CT Scan

#### 3.2. X-ray Image

#### 3.3. Confusion Matrix

#### 3.4. Confidence Interval

## 4. Discussion

#### 4.1. Feature Territory Highlighted by the Model on Different Layer

#### 4.2. Models Interpretability with LIME

## 5. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

1. Stoecklin, S.B.; Rolland, P.; Silue, Y.; Mailles, A.; Campese, C.; Simondon, A.; Mechain, M.; Meurice, L.; Nguyen, M.; Bassi, C.; et al. First cases of coronavirus disease 2019 (COVID-19) in France: Surveillance, investigations and control measures, January 2020. *Eurosurveillance* **2020**, *25*, 2000094.
2. COVID-19 Dashboard. Worldometer, September 2020. Available online: https://www.worldometers.info/coronavirus/ (accessed on 9 September 2020).
3. McKeever, A. Here’s what coronavirus does to the body. *Natl. Geogr.* 2020. Available online: https://www.freedomsphoenix.com/Media/Media-Files/Heres-what-coronavirus-does-to-the-body.pdf (accessed on 12 March 2020).
4. Mahase, E. Coronavirus: Covid-19 Has Killed More People than SARS and MERS Combined, Despite Lower Case Fatality Rate; BMJ: London, UK, 2020.
5. Tanne, J.H.; Hayasaki, E.; Zastrow, M.; Pulla, P.; Smith, P.; Rada, A.G. Covid-19: How doctors and healthcare systems are tackling coronavirus worldwide. *BMJ* **2020**, *368*.
6. Ghoshal, B.; Tucker, A. Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) detection. *arXiv* **2020**, arXiv:2003.10769.
7. Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 image data collection. *arXiv* **2020**, arXiv:2003.11597.
8. Chest X-ray Images (Pneumonia). Available online: https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia (accessed on 10 March 2020).
9. Shi, F.; Wang, J.; Shi, J.; Wu, Z.; Wang, Q.; Tang, Z.; He, K.; Shi, Y.; Shen, D. Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for COVID-19. *IEEE Rev. Biomed. Eng.* **2020**.
10. Narin, A.; Kaya, C.; Pamuk, Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. *arXiv* **2020**, arXiv:2003.10849.
11. Zhang, J.; Xie, Y.; Li, Y.; Shen, C.; Xia, Y. COVID-19 screening on chest X-ray images using deep learning based anomaly detection. *arXiv* **2020**, arXiv:2003.12338.
12. Wang, L.; Wong, A. COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. *arXiv* **2020**, arXiv:2003.09871.
13. Chen, J.; Wu, L.; Zhang, J.; Zhang, L.; Gong, D.; Zhao, Y.; Hu, S.; Wang, Y.; Hu, X.; Zheng, B.; et al. Deep learning-based model for detecting 2019 novel coronavirus pneumonia on high-resolution computed tomography: A prospective study. *medRxiv* 2020. Available online: https://www.medrxiv.org/content/10.1101/2020.02.25.20021568v2.full.pdf (accessed on 12 March 2020).
14. Jin, C.; Chen, W.; Cao, Y.; Xu, Z.; Zhang, X.; Deng, L.; Zheng, C.; Zhou, J.; Shi, H.; Feng, J. Development and evaluation of an AI system for COVID-19 diagnosis. *medRxiv* **2020**.
15. Song, Y.; Zheng, S.; Li, L.; Zhang, X.; Zhang, X.; Huang, Z.; Chen, J.; Zhao, H.; Jie, Y.; Wang, R.; et al. Deep learning enables accurate diagnosis of novel coronavirus (COVID-19) with CT images. *medRxiv* **2020**.
16. Butt, C.; Gill, J.; Chun, D.; Babu, B.A. Deep learning system to screen coronavirus disease 2019 pneumonia. *Appl. Intell.* **2020**, 1.
17. Li, L.; Qin, L.; Xu, Z.; Yin, Y.; Wang, X.; Kong, B.; Bai, J.; Lu, Y.; Fang, Z.; Song, Q.; et al. Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT. *Radiology* **2020**, 200905.
18. Samek, W.; Wiegand, T.; Müller, K.R. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. *arXiv* **2017**, arXiv:1708.08296.
19. Gilpin, L.H.; Bau, D.; Yuan, B.Z.; Bajwa, A.; Specter, M.; Kagal, L. Explaining explanations: An overview of interpretability of machine learning. In Proceedings of the 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), Turin, Italy, 1–3 October 2018; pp. 80–89.
20. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13 August 2016; pp. 1135–1144.
21. Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A survey of methods for explaining black box models. *ACM Comput. Surv.* **2018**, *51*, 1–42.
22. Holzinger, A.; Biemann, C.; Pattichis, C.S.; Kell, D.B. What do we need to build explainable AI systems for the medical domain? *arXiv* **2017**, arXiv:1712.09923.
23. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. *arXiv* **2014**, arXiv:1409.1556.
24. Längkvist, M.; Karlsson, L.; Loutfi, A. A review of unsupervised feature learning and deep learning for time-series modeling. *Pattern Recognit. Lett.* **2014**, *42*, 11–24.
25. Akiba, T.; Suzuki, S.; Fukuda, K. Extremely large minibatch SGD: Training ResNet-50 on ImageNet in 15 minutes. *arXiv* **2017**, arXiv:1711.04325.
26. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
27. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
28. da Nóbrega, R.V.M.; Peixoto, S.A.; da Silva, S.P.P.; Rebouças Filho, P.P. Lung nodule classification via deep transfer learning in CT lung images. In Proceedings of the 2018 IEEE 31st International Symposium on Computer-Based Medical Systems (CBMS), Karlstad, Sweden, 18–21 June 2018; pp. 244–249.
29. Varatharasan, V.; Shin, H.S.; Tsourdos, A.; Colosimo, N. Improving learning effectiveness for object detection and classification in cluttered backgrounds. In Proceedings of the 2019 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED UAS), Cranfield, UK, 25–27 November 2019; pp. 78–85.
30. Pan, S.J.; Yang, Q. A survey on transfer learning. *IEEE Trans. Knowl. Data Eng.* **2009**, *22*, 1345–1359.
31. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
32. Yu, W.; Yang, K.; Bai, Y.; Xiao, T.; Yao, H.; Rui, Y. Visualizing and comparing AlexNet and VGG using deconvolutional layers. In Proceedings of the 33rd International Conference on Machine Learning, New York City, NY, USA, 19–24 June 2016.
33. Gupta, K.D.; Ahsan, M.; Andrei, S.; Alam, K.M.R. A robust approach of facial orientation recognition from facial features. *BRAIN Broad Res. Artif. Intell. Neurosci.* **2017**, *8*, 5–12.
34. Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Acharya, U.R. Automated detection of COVID-19 cases using deep neural networks with X-ray images. *Comput. Biol. Med.* **2020**, 103792.
35. Denil, M.; Shakibi, B.; Dinh, L.; Ranzato, M.; De Freitas, N. Predicting parameters in deep learning. In Proceedings of the 26th International Conference on Neural Information Processing Systems—Volume 2, Lake Tahoe, NV, USA, 5–10 December 2013; pp. 2148–2156.
36. Sutskever, I.; Martens, J.; Dahl, G.; Hinton, G. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 2013; pp. 1139–1147. Available online: http://proceedings.mlr.press/v28/sutskever13.pdf (accessed on 12 March 2020).
37. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. *arXiv* **2014**, arXiv:1412.6980.
38. Zhang, C.; Liao, Q.; Rakhlin, A.; Miranda, B.; Golowich, N.; Poggio, T. Theory of deep learning IIb: Optimization properties of SGD. *arXiv* **2018**, arXiv:1801.02254.
39. Bengio, Y. RMSProp and equilibrated adaptive learning rates for nonconvex optimization. *arXiv* **2015**, arXiv:1502.04390v1.
40. Perez, L.; Wang, J. The effectiveness of data augmentation in image classification using deep learning. *arXiv* **2017**, arXiv:1712.04621.
41. Filipczuk, P.; Fevens, T.; Krzyżak, A.; Monczak, R. Computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies. *IEEE Trans. Med. Imaging* **2013**, *32*, 2169–2178.
42. Ahsan, M.M. Real Time Face Recognition in Unconstrained Environment; Lamar University: Beaumont, TX, USA, 2018.
43. Wilson, E.B. Probable inference, the law of succession, and statistical inference. *J. Am. Stat. Assoc.* **1927**, *22*, 209–212.
44. Edwards, W.; Lindman, H.; Savage, L.J. Bayesian statistical inference for psychological research. *Psychol. Rev.* **1963**, *70*, 193.
45. Brownlee, J. Machine Learning Mastery. 2014. Available online: http://machinelearningmastery.com/discover-feature-engineering-howtoengineer-features-and-how-to-getgood-at-it (accessed on 12 March 2020).
46. Khan, A.; Gupta, K.D.; Kumar, N.; Venugopal, D. CIDMP: Completely interpretable detection of malaria parasite in red blood cells using lower-dimensional feature space. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020.
47. Sen, S.; Dasgupta, D.; Gupta, K.D. An empirical study on algorithmic bias. In Proceedings of the 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain, 13–17 July 2020.

**Figure 1.** VGG16 architecture implemented during this experiment [23].

**Figure 2.** An illustration of convolutional and maxpooling layer operations [34].

**Figure 7.** Class-activation heat maps of a chest X-ray image at different layers, acquired with ResNet50.

**Figure 10.** Top four features (**a**) on a COVID-19 patient’s CT scan image; (**b**) on another patient’s CT scan image.

Dataset | Label | Train Set | Test Set
---|---|---|---
CT scan | COVID-19 | 160 | 40
CT scan | Non-COVID-19 | 160 | 40
Chest X-ray | COVID-19 | 160 | 40
Chest X-ray | Non-COVID-19 | 160 | 40

Model | Accuracy | Precision | Recall | F1-Score
---|---|---|---|---
VGG16 | 0.85 | 0.85 | 0.85 | 0.85
InceptionResNetV2 | 0.81 | 0.82 | 0.81 | 0.81
ResNet50 | 0.56 | 0.71 | 0.56 | 0.47
DenseNet201 | 0.97 | 0.97 | 0.97 | 0.97
VGG19 | 0.78 | 0.82 | 0.78 | 0.77
MobileNetV2 | 0.99 | 0.99 | 0.99 | 0.99
NasNetMobile | 0.90 | 0.90 | 0.90 | 0.90
ResNet15V2 | 0.98 | 0.98 | 0.98 | 0.98
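The accuracy, precision, recall, and F1-score columns in these tables follow the standard confusion-matrix definitions. As a minimal sketch, the per-class metrics for a two-class test set of 80 images can be computed as follows; the counts below are hypothetical, not the paper's.

```python
# Hypothetical confusion-matrix counts for a COVID-19 / non-COVID-19 test set
# of 80 images (illustrative values only, not taken from the paper).
TP, FN = 38, 2   # COVID-19 images classified correctly / incorrectly
FP, TN = 6, 34   # non-COVID-19 images classified incorrectly / correctly

accuracy = (TP + TN) / (TP + TN + FP + FN)   # fraction of all correct predictions
precision = TP / (TP + FP)                   # correctness of positive predictions
recall = TP / (TP + FN)                      # coverage of actual positives
f1_score = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"acc={accuracy:.2f} prec={precision:.3f} rec={recall:.3f} f1={f1_score:.3f}")
```

The tables report a single value per model; for a balanced two-class test set like this one, the class-weighted averages coincide with computing these quantities per class and averaging.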

Model | Accuracy | Precision | Recall | F1-Score
---|---|---|---|---
VGG16 | 0.86 | 0.85 | 0.86 | 0.86
InceptionResNetV2 | 0.84 | 0.84 | 0.84 | 0.84
ResNet50 | 0.55 | 0.64 | 0.55 | 0.46
DenseNet201 | 0.79 | 0.79 | 0.79 | 0.79
VGG19 | 0.76 | 0.81 | 0.76 | 0.75
MobileNetV2 | 0.89 | 0.89 | 0.89 | 0.89
NasNetMobile | 0.90 | 0.90 | 0.90 | 0.90
ResNet15V2 | 0.84 | 0.84 | 0.84 | 0.84

Model | Accuracy | Precision | Recall | F1-Score
---|---|---|---|---
VGG16 | 1.0 | 1.0 | 1.0 | 1.0
InceptionResNetV2 | 0.99 | 0.99 | 0.99 | 0.99
ResNet50 | 0.64 | 0.79 | 0.64 | 0.58
DenseNet201 | 1.0 | 1.0 | 1.0 | 1.0
VGG19 | 0.98 | 0.98 | 0.98 | 0.98
MobileNetV2 | 1.0 | 1.0 | 1.0 | 1.0
NasNetMobile | 1.0 | 1.0 | 1.0 | 1.0
ResNet15V2 | 1.0 | 1.0 | 1.0 | 1.0

Model | Accuracy | Precision | Recall | F1-Score
---|---|---|---|---
VGG16 | 0.97 | 0.98 | 0.97 | 0.97
InceptionResNetV2 | 0.97 | 0.98 | 0.97 | 0.97
ResNet50 | 0.64 | 0.79 | 0.64 | 0.58
DenseNet201 | 0.97 | 0.98 | 0.97 | 0.97
VGG19 | 0.91 | 0.93 | 0.91 | 0.91
MobileNetV2 | 0.97 | 0.97 | 0.97 | 0.97
NasNetMobile | 1.0 | 1.0 | 1.0 | 1.0
ResNet15V2 | 0.99 | 0.99 | 0.99 | 0.99

**Table 6.** Confidence intervals ($\alpha = 0.05$) for test-set accuracy on the CT scan and chest X-ray studies. Sample size n = 80 for both studies.

Study | Model | Test Accuracy | Wilson Score | Bayesian Interval
---|---|---|---|---
CT scan | VGG16 | 0.86 | 0.756–0.912 | 0.76–0.915
CT scan | InceptionResNetV2 | 0.84 | 0.742–0.903 | 0.745–0.906
CT scan | ResNet50 | 0.55 | 0.441–0.654 | 0.441–0.656
CT scan | DenseNet201 | 0.79 | 0.686–0.863 | 0.689–0.866
CT scan | VGG19 | 0.76 | 0.659–0.842 | 0.661–0.845
CT scan | MobileNetV2 | 0.89 | 0.800–0.940 | 0.805–0.943
CT scan | NasNetMobile | 0.90 | 0.815–0.948 | 0.820–0.952
CT scan | ResNet15V2 | 0.84 | 0.742–0.903 | 0.745–0.906
Chest X-ray | VGG16 | 0.97 | 0.913–0.993 | 0.922–0.995
Chest X-ray | InceptionResNetV2 | 0.97 | 0.913–0.993 | 0.922–0.995
Chest X-ray | ResNet50 | 0.64 | 0.528–0.734 | 0.529–0.736
Chest X-ray | DenseNet201 | 0.97 | 0.913–0.993 | 0.922–0.995
Chest X-ray | MobileNetV2 | 0.97 | 0.913–0.993 | 0.922–0.995
Chest X-ray | NasNetMobile | 1.0 | 0.954–1.00 | 0.969–1.00
Chest X-ray | ResNet15V2 | 0.99 | 0.933–0.998 | 0.943–0.999

**Table 7.** Overall summary of the best models found considering various factors on the CT scan image dataset.

Model | Train Acc. | Test Acc. | Train Prec. | Test Prec. | Train Recall | Test Recall | Train F1 | Test F1 | Misclassified | Accuracy Curve | Loss Curve
---|---|---|---|---|---|---|---|---|---|---|---
VGG16 | 85% | 86% | 85% | 85% | 85% | 86% | 85% | 86% | 12 | Satisfactory | Satisfactory
InceptionResNetV2 | 81% | 84% | 82% | 84% | 81% | 84% | 81% | 84% | 13 | Satisfactory | Satisfactory
ResNet50 | 56% | 55% | 71% | 64% | 56% | 55% | 47% | 46% | 36 | Not satisfactory | Satisfactory
VGG19 | 78% | 76% | 82% | 81% | 78% | 76% | 77% | 75% | 19 | Satisfactory | Satisfactory
MobileNetV2 | 99% | 89% | 99% | 89% | 99% | 89% | 99% | 89% | 9 | Satisfactory | Satisfactory
NasNetMobile | 90% | 90% | 90% | 90% | 90% | 90% | 90% | 90% | 8 | Satisfactory | Satisfactory

**Table 8.** Overall summary of the best models found considering various factors on the chest X-ray image dataset.

Model | Train Acc. | Test Acc. | Train Prec. | Test Prec. | Train Recall | Test Recall | Train F1 | Test F1 | Misclassified | Accuracy Curve | Loss Curve
---|---|---|---|---|---|---|---|---|---|---|---
MobileNetV2 | 100% | 97% | 100% | 97% | 100% | 97% | 100% | 97% | 2 | Not satisfactory | Not satisfactory
ResNet15V2 | 100% | 99% | 100% | 99% | 100% | 99% | 100% | 99% | 1 | Not satisfactory | Not satisfactory
DenseNet201 | 100% | 97% | 100% | 98% | 100% | 97% | 100% | 97% | 2 | Not satisfactory | Not satisfactory
VGG16 | 98% | 97% | 98% | 98% | 98% | 97% | 98% | 97% | 2 | Satisfactory | Satisfactory
InceptionResNetV2 | 99% | 97% | 99% | 98% | 99% | 97% | 99% | 97% | 2 | Satisfactory | Satisfactory
NasNetMobile | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 100% | 0 | Satisfactory | Satisfactory
VGG19 | 98% | 91% | 98% | 93% | 98% | 91% | 98% | 91% | 7 | Not satisfactory | Satisfactory

Model | Accuracy | Precision | Recall | F1-Score | Error Rate (Test Set)
---|---|---|---|---|---
MobileNetV2 | 96.25% | 96.25% | 96.25% | 96.25% | 11.25%
NasNetMobile | 95% | 95% | 95% | 95% | 10%

Model | CT Scan | X-ray
---|---|---
VGG16 | $\frac{(85+86)}{2}=85.5\%$ | $\frac{(100+97)}{2}=98.5\%$
InceptionResNetV2 | $\frac{(81+84)}{2}=82.5\%$ | $\frac{(99+97)}{2}=98\%$
ResNet50 | $\frac{(56+55)}{2}=55.5\%$ | $\frac{(64+64)}{2}=64\%$
DenseNet201 | $\frac{(97+79)}{2}=88\%$ | $\frac{(100+97)}{2}=98.5\%$
VGG19 | $\frac{(78+76)}{2}=77\%$ | $\frac{(98+91)}{2}=94.5\%$
MobileNetV2 | $\frac{(99+89)}{2}=94\%$ | $\frac{(100+97)}{2}=98.5\%$
NasNetMobile | $\frac{(90+90)}{2}=90\%$ | $\frac{(100+100)}{2}=100\%$
ResNet15V2 | $\frac{(98+84)}{2}=91\%$ | $\frac{(100+99)}{2}=99.5\%$
Average | $82.94\%$ | $93.94\%$
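Each cell above averages a model's train and test accuracy; the column averages then average across the eight models. A short script checking the arithmetic, with the accuracies transcribed from the table (train, test):

```python
# Per-model (train %, test %) accuracies transcribed from the table above.
ct = {"VGG16": (85, 86), "InceptionResNetV2": (81, 84), "ResNet50": (56, 55),
      "DenseNet201": (97, 79), "VGG19": (78, 76), "MobileNetV2": (99, 89),
      "NasNetMobile": (90, 90), "ResNet15V2": (98, 84)}
xray = {"VGG16": (100, 97), "InceptionResNetV2": (99, 97), "ResNet50": (64, 64),
        "DenseNet201": (100, 97), "VGG19": (98, 91), "MobileNetV2": (100, 97),
        "NasNetMobile": (100, 100), "ResNet15V2": (100, 99)}

# Average each model's train/test pair, then average over the eight models.
mean_ct = sum((train + test) / 2 for train, test in ct.values()) / len(ct)
mean_xray = sum((train + test) / 2 for train, test in xray.values()) / len(xray)
print(f"CT: {mean_ct:.2f}%  X-ray: {mean_xray:.2f}%")  # CT: 82.94%  X-ray: 93.94%
```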

Parameter | Value
---|---
Kernel Size | 4
Maximum Distance | 200
Ratio | 0.2

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Ahsan, M.M.; Gupta, K.D.; Islam, M.M.; Sen, S.; Rahman, M.L.; Shakhawat Hossain, M.
COVID-19 Symptoms Detection Based on NasNetMobile with Explainable AI Using Various Imaging Modalities. *Mach. Learn. Knowl. Extr.* **2020**, *2*, 490-504.
https://doi.org/10.3390/make2040027
