COVID-19 Detection Using Deep Learning Algorithm on Chest X-ray Images
Simple Summary
Abstract
1. Introduction
2. Literature Review
3. Proposed Method
3.1. Dataset Preparation
3.2. Model Selection
1. VGG19 and VGG16
2. InceptionV3
3. ResNet50 and ResNet101
4. GoogLeNet
5. MobileNetV2
6. AlexNet
7. EfficientNet B7
8. DenseNet121
9. NFNet
10. Modified MobileNetV2 (Novel Approach)
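All of the listed backbones except the proposed modification are standard pre-trained classifiers. As a point of reference only (not the authors' implementation), the sketch below shows how ImageNet-pretrained backbones of this kind can be instantiated and frozen for transfer learning with Keras Applications; the input shape is an illustrative assumption, and AlexNet, GoogLeNet and NFNet are not shipped with Keras Applications, so they would require custom or third-party definitions.

```python
# Minimal sketch: loading frozen ImageNet-pretrained backbones from Keras Applications.
# Illustrative only; input size and the chosen subset of models are assumptions.
import tensorflow as tf

def load_backbone(name: str, input_shape=(224, 224, 3)):
    """Return a frozen, headless pre-trained backbone by name."""
    zoo = {
        "VGG16": tf.keras.applications.VGG16,
        "VGG19": tf.keras.applications.VGG19,
        "InceptionV3": tf.keras.applications.InceptionV3,
        "ResNet50": tf.keras.applications.ResNet50,
        "ResNet101": tf.keras.applications.ResNet101,
        "MobileNetV2": tf.keras.applications.MobileNetV2,
        "EfficientNetB7": tf.keras.applications.EfficientNetB7,
        "DenseNet121": tf.keras.applications.DenseNet121,
    }
    base = zoo[name](weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze pre-trained layers for transfer learning
    return base
```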
3.3. Modified MobileNetV2 Architecture
Algorithm 1: Proposed Modified MobileNetV2 Algorithm for COVID-19 Detection
Input: 52,000 chest X-ray images (80% training data, 20% test data)
Output: Result = COVID-19 positive or healthy
Step 1: Apply batch normalization, preprocessing, and augmentation.
Step 2: Freeze the base layers and add the proposed convolution layer with image size 224 × 224, kernel size (3, 3), optimizer = RMSprop, activation = ReLU.
Step 3: Feed the first residual convolution layer with kernel size (2, 2), activation = ReLU, followed by average pooling; optimizer = RMSprop.
Step 4: Feed into the second residual convolution layer with kernel size (1, 1), stride = 2, followed by average pooling and dropout; optimizer = none.
Step 5: Feed into the third residual convolution layer with kernel size (2, 2), stride = 1, followed by max pooling and dropout; optimizer = RMSprop.
Step 6: Feed into the fourth and fifth residual convolution layers with kernel size (1, 1), stride = 2, followed by max pooling and dropout; optimizer = none.
Step 7: Feed into the sixth residual convolution layer with kernel size (2, 2), stride = 2, no pooling, with dropout; optimizer = RMSprop.
Step 8: Feed into the seventh residual convolution layer with kernel size (2, 2), stride = 1, followed by average pooling and dropout; optimizer = RMSprop.
Step 9: Apply the proposed layer with image size 224 × 224, kernel size (3, 3), optimizer = RMSprop, activation = ReLU.
Step 10: Compute the accuracy, precision, F1 score, and recall.
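To help map Algorithm 1 onto code, the following is a condensed Keras sketch, not the authors' implementation: the filter count per added layer and the padding mode are assumptions, plain convolution layers stand in for the "residual convolution" blocks (whose exact skip connections the listing does not spell out), and the Step 1 preprocessing/augmentation is assumed to live in the data pipeline of Section 3.1. Kernel sizes, strides, pooling types, dropout = 0.1 and the RMSprop optimizer follow the listing.

```python
# Condensed sketch of Algorithm 1 in Keras. Filter counts (64) and padding="same"
# are illustrative assumptions; plain Conv2D layers stand in for the residual blocks.
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                                # Step 2: freeze the base

x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(base.output)  # Step 2
x = layers.Conv2D(64, (2, 2), padding="same", activation="relu")(x)            # Step 3
x = layers.AveragePooling2D((2, 2), padding="same")(x)
x = layers.Conv2D(64, (1, 1), strides=2, padding="same")(x)                    # Step 4
x = layers.AveragePooling2D((2, 2), padding="same")(x)
x = layers.Dropout(0.1)(x)
x = layers.Conv2D(64, (2, 2), strides=1, padding="same")(x)                    # Step 5
x = layers.MaxPooling2D((2, 2), padding="same")(x)
x = layers.Dropout(0.1)(x)
for _ in range(2):                                                             # Step 6: 4th and 5th layers
    x = layers.Conv2D(64, (1, 1), strides=2, padding="same")(x)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
x = layers.Dropout(0.1)(x)
x = layers.Conv2D(64, (2, 2), strides=2, padding="same")(x)                    # Step 7 (no pooling)
x = layers.Dropout(0.1)(x)
x = layers.Conv2D(64, (2, 2), strides=1, padding="same")(x)                    # Step 8
x = layers.AveragePooling2D((2, 2), padding="same")(x)
x = layers.Dropout(0.1)(x)
x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)            # Step 9
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(1, activation="sigmoid")(x)                                 # binary output

model = tf.keras.Model(base.input, out)
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-7),
              loss="binary_crossentropy",
              metrics=["accuracy",                                              # Step 10
                       tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
```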
3.4. Model Implementation
- Batch size = (50, 100, 150, 200, 250, 300)
- Number of epochs = (100, 150, 200, 250, 300, 350, 400)
- Learning rate = (0.0000001, 0.000001, 0.00001, 0.0001, 0.001, 0.01, 0.1)
- Optimizer = (‘SGD’, ‘Adadelta’, ‘RMSprop’, ‘Adagrad’, ‘Adam’, ‘Nadam’, ‘Adamax’)
- Hidden layer size = 8–96 neurons
- Dropout = 0.1
- Activation = (‘ReLU’ (hidden), ‘Sigmoid’ (final))
- Kernel Size = 2 × 2
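The search spaces above can be written directly as a grid; the snippet below is a hedged sketch of how one combination might be evaluated. Here `build_model` and the arrays `x_train`, `y_train`, `x_val`, `y_val` are hypothetical placeholders for a model builder following Section 3.3 and the prepared data of Section 3.1, not names from the study.

```python
# Sketch of the hyperparameter grid search. `build_model`, `x_train`, `y_train`,
# `x_val`, `y_val` are hypothetical placeholders.
import itertools
import tensorflow as tf

grid = {
    "batch_size": [50, 100, 150, 200, 250, 300],
    "epochs": [100, 150, 200, 250, 300, 350, 400],
    "learning_rate": [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1],
    "optimizer": ["SGD", "Adadelta", "RMSprop", "Adagrad", "Adam", "Nadam", "Adamax"],
}

def evaluate(batch_size, epochs, learning_rate, optimizer_name):
    """Train one configuration and return its best validation accuracy."""
    model = build_model(hidden_units=96, dropout=0.1, kernel_size=(2, 2))  # hypothetical helper
    optimizer = tf.keras.optimizers.get(
        {"class_name": optimizer_name, "config": {"learning_rate": learning_rate}})
    model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=["accuracy"])
    history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                        batch_size=batch_size, epochs=epochs, verbose=0)
    return max(history.history["val_accuracy"])

# Exhaustive search over all combinations (pruned in practice for cost):
# best = max(itertools.product(*grid.values()), key=lambda combo: evaluate(*combo))
```

The per-model configurations reported later in the paper correspond to the best combinations selected from these ranges.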
3.5. Performance Evaluation Matrix
- True-positive (TP): refers to instances of COVID-19 that have been correctly classified.
- False-positive (FP): refers to healthy instances that were erroneously classified as COVID-19.
- True-negative (TN): refers to instances that are correctly classified as healthy.
- False-negative (FN): refers to instances of COVID-19 that were erroneously categorized as healthy.
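The metrics reported in Section 4 are derived from these four counts in the usual way. The helper below restates the standard formulas behind the reported columns (accuracy, precision/PPV, recall/sensitivity, specificity, NPV and F1 score); it is a reference illustration, not code from the study.

```python
# Standard confusion-matrix metrics used in the results tables (reference implementation).
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    precision = tp / (tp + fp)                  # PPV: predicted COVID-19 that is truly COVID-19
    recall = tp / (tp + fn)                     # sensitivity: COVID-19 cases correctly detected
    specificity = tn / (tn + fp)                # healthy cases correctly rejected
    npv = tn / (tn + fn)                        # negative predictive value
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "npv": npv, "f1": f1}
```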
4. Results
4.1. Experiment Setup
4.2. Experiment Outcome
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Zhao, S.; Lin, Q.; Ran, J.; Musa, S.S.; Yang, G.; Wang, W.; Lou, Y.; Gao, D.; Yang, L.; He, D.; et al. Preliminary estimation of the basic reproduction number of novel coronavirus (2019-nCoV) in China, from 2019 to 2020: A data-driven analysis in the early phase of the outbreak. Int. J. Infect. Dis. 2020, 92, 214–217. [Google Scholar] [CrossRef] [Green Version]
- European Centre for Disease Prevention and Control. COVID-19 Situation Update Worldwide. 2020. Available online: https://www.ecdc.europa.eu/en/geographical-distribution-2019-ncov-cases (accessed on 12 September 2021).
- Coronavirus and COVID-19: What You Should Know. Available online: https://www.webmd.com/lung/coronavirus (accessed on 22 October 2021).
- Rothan, H.A.; Byrareddy, S.N. The epidemiology and pathogenesis of coronavirus disease (COVID-19) outbreak. J. Autoimmun. 2020, 109, 102433. [Google Scholar] [CrossRef]
- Chen, N.; Zhou, M.; Dong, X.; Qu, J.; Gong, F.; Han, Y.; Qiu, Y.; Wang, J.; Liu, Y.; Wei, Y.; et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: A descriptive study. Lancet 2020, 395, 507–513. [Google Scholar] [CrossRef] [Green Version]
- Wu, F.; Zhao, S.; Yu, B.; Chen, Y.-M.; Wang, W.; Song, Z.-G.; Hu, Y.; Tao, Z.-W.; Tian, J.-H.; Pei, Y.-Y.; et al. A new coronavirus associated with human respiratory disease in China. Nature 2020, 579, 265–269. [Google Scholar] [CrossRef] [Green Version]
- Guan, W.-J.; Ni, Z.-Y.; Hu, Y.; Liang, W.-H.; Ou, C.-Q.; He, J.-X.; Liu, L.; Shan, H.; Lei, C.-L.; Hui, D.S.; et al. Clinical Characteristics of Coronavirus Disease 2019 in China. N. Engl. J. Med. 2020, 382, 1708–1720. [Google Scholar] [CrossRef]
- Wang, D.; Hu, B.; Hu, C.; Zhu, F.; Liu, X.; Zhang, J.; Wang, B.; Xiang, H.; Cheng, Z.; Xiong, Y. Clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus–infected pneumonia in Wuhan, China. JAMA 2020, 323, 1061–1069. [Google Scholar] [CrossRef]
- Wang, C.; Horby, P.W.; Hayden, F.G.; Gao, G.F. A novel coronavirus outbreak of global health concern. Lancet 2020, 395, 470–473. [Google Scholar] [CrossRef] [Green Version]
- Zhu, N.; Zhang, D.; Wang, W.; Li, X.; Yang, B.; Song, J.; Zhao, X.; Huang, B.; Shi, W.; Lu, R.; et al. A Novel Coronavirus from Patients with Pneumonia in China, 2019. N. Engl. J. Med. 2020, 382, 727–733. [Google Scholar] [CrossRef]
- Li, Q.; Guan, X.; Wu, P.; Wang, X.; Zhou, L.; Tong, Y.; Ren, R.; Leung, K.S.; Lau, E.H.; Wong, J.Y.; et al. Early transmission dynamics in Wuhan, China, of novel coronavirus–infected pneumonia. N. Engl. J. Med. 2020, 382, 1199–1207. [Google Scholar] [CrossRef]
- Holshue, M.L.; DeBolt, C.; Lindquist, S.; Lofy, K.H.; Wiesman, J.; Bruce, H.; Spitters, C.; Ericson, K.; Wilkerson, S.; Tural, A.; et al. First Case of 2019 Novel Coronavirus in the United States. N. Engl. J. Med. 2020, 382, 929–936. [Google Scholar] [CrossRef]
- Musher, D.M.; Thorner, A.R. Community-acquired pneumonia. N. Engl. J. Med. 2014, 371, 1619–1628. [Google Scholar] [CrossRef]
- Emmanuella, N.; Michael, U.E.; Musa, S.S.; Marc, M.B.; David, S.; Edet, B.E.; Essar, M.Y.; Adebisi, Y.A.; Lucero-Prisno, D.E., III. COVID-19 and its impacts: The situation in Niger republic. Clin. Epidemiol. Glob. Health 2021, 11, 100797. [Google Scholar]
- World Health Organization (WHO) COVID-19 Vaccines. Available online: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/covid-19-vaccines (accessed on 11 August 2021).
- Ghosh, P.; Karim, A.; Atik, S.T.; Afrin, S.; Saifuzzaman, M. Expert cancer model using supervised algorithms with a LASSO selection approach. Int. J. Electr. Comput. Eng. 2021, 11, 2631–2639. [Google Scholar] [CrossRef]
- Karim, A.; Azam, S.; Shanmugam, B.; Kannoorpatti, K. Efficient Clustering of Emails into Spam and Ham: The Foundational Study of a Comprehensive Unsupervised Framework. IEEE Access 2020, 8, 154759–154788. [Google Scholar] [CrossRef]
- Shamrat, M.F.M.J.; Chakraborty, S.; Imran, M.M.; Muna, N.J.; Billah, M.M.; Das, P.; Rahman, O.M. Sentiment analysis on twitter tweets about COVID-19 vaccines using NLP and supervised KNN classification algorithm. Indones. J. Electr. Eng. Comput. Sci. 2021, 23, 463–470. [Google Scholar] [CrossRef]
- Al-Shargabi, A.; Alshobaili, J.; Alabdulatif, A.; Alrobah, N. COVID-CGAN: Efficient Deep Learning Approach for COVID-19 Detection Based on CXR Images Using Conditional GANs. Appl. Sci. 2021, 11, 7174. [Google Scholar] [CrossRef]
- Liew, Y.M.; McLaughlin, R.A.; Chan, B.T.; Aziz, Y.F.A.; Chee, K.H.; Ung, N.M.; Tan, L.K.; Lai, K.W.; Ng, S.; Lim, E. Motion corrected LV quantification based on 3D modelling for improved functional assessment in cardiac MRI. Phys. Med. Biol. 2015, 60, 2715–2733. [Google Scholar] [CrossRef] [Green Version]
- Jahanzad, Z.; Liew, Y.M.; Bilgen, M.; McLaughlin, R.A.; Leong, C.O.; Chee, K.H.; Aziz, Y.F.A.; Ung, N.M.; Lai, K.W.; Ng, S.-C.; et al. Regional assessment of LV wall in infarcted heart using tagged MRI and cardiac modelling. Phys. Med. Biol. 2015, 60, 4015–4031. [Google Scholar] [CrossRef]
- Da Silva, G.L.F.; Valente, T.L.A.; Silva, A.C.; Paiva, A.; Gattass, M. Convolutional neural network-based PSO for lung nodule false positive reduction on CT images. Comput. Methods Programs Biomed. 2018, 162, 109–118. [Google Scholar] [CrossRef]
- Diniz, P.H.B.; Valente, T.L.A.; Diniz, J.O.B.; Silva, A.C.; Gattass, M.; Ventura, N.; Muniz, B.C.; Gasparetto, E.L. Detection of white matter lesion regions in MRI using SLIC0 and convolutional neural network. Comput. Methods Programs Biomed. 2018, 167, 49–63. [Google Scholar] [CrossRef]
- Diniz, J.O.B.; Diniz, P.H.B.; Valente, T.L.A.; Silva, A.C.; Paiva, A.C. Spinal cord detection in planning CT for radio-therapy through adaptive template matching, IMSLIC and convolutional neural networks. Comput. Methods Programs Biomed. 2019, 170, 53–67. [Google Scholar] [CrossRef]
- Mittal, A.; Hooda, R.; Sofat, S. Lung field segmentation in chest radiographs: A historical review, current status, and expectations from deep learning. IET Image Process. 2017, 11, 937–952. [Google Scholar] [CrossRef]
- Liu, M.; Li, Q.; Zhou, J.; Ai, W.; Zheng, X.; Zeng, J.; Liu, Y.; Xiang, X.; Guo, R.; Li, X.; et al. Value of swab types and collection time on SARS-COV-2 detection using RT-PCR assay. J. Virol. Methods 2020, 286, 113974. [Google Scholar] [CrossRef]
- Huang, P.; Liu, T.; Huang, L.; Liu, H.; Lei, M.; Xu, W.; Hu, X.; Chen, J.; Liu, B. Use of Chest CT in Combination with Negative RT-PCR Assay for the 2019 Novel Coronavirus but High Clinical Suspicion. Radiology 2020, 295, 22–23. [Google Scholar] [CrossRef]
- Mahase, E. Coronavirus: Covid-19 has killed more people than SARS and MERS combined, despite lower case fatality rate. BMJ 2020, 368, m641. [Google Scholar] [CrossRef] [Green Version]
- Kanne, J.P.; Little, B.P.; Chung, J.H.; Elicker, B.M.; Ketai, L.H. Essentials for Radiologists on COVID-19: An Update—Radiology Scientific Expert Panel. Radiology 2020, 296, E113–E114. [Google Scholar] [CrossRef] [Green Version]
- Xie, X.; Zhong, Z.; Zhao, W.; Zheng, C.; Wang, F.; Liu, J. Chest CT for Typical Coronavirus Disease 2019 (COVID-19) Pneumonia: Relationship to Negative RT-PCR Testing. Radiology 2020, 296, E41–E45. [Google Scholar] [CrossRef] [Green Version]
- Gupta-Wright, A.; Macleod, C.K.; Barrett, J.; Filson, S.A.; Corrah, T.; Parris, V.; Sandhu, G.; Harris, M.; Tennant, R.; Vaid, N.; et al. False-negative RT-PCR for COVID-19 and a diagnostic risk score: A retrospective cohort study among patients admitted to hospital. BMJ Open 2021, 11, e047110. [Google Scholar] [CrossRef]
- Cherian, T.; Mulholland, E.K.; Carlin, J.; Ostensen, H.; Amin, R.; De Campo, M.; Greenberg, D.; Lagos, R.; Lucero, M.; Madhi, S.A.; et al. Standardized interpretation of paediatric chest radiographs for the diagnosis of pneumonia in epidemiological studies. Bull. World Health Organ. 2005, 83, 353–359. [Google Scholar]
- Franquet, T. Imaging of pneumonia: Trends and algorithms. Eur. Respir. J. 2001, 18, 196–208. [Google Scholar] [CrossRef] [Green Version]
- Ng, M.-Y.; Lee, E.Y.P.; Yang, J.; Yang, F.; Li, X.; Wang, H.; Lui, M.M.-S.; Lo, C.S.-Y.; Leung, B.; Khong, P.-L.; et al. Imaging Profile of the COVID-19 Infection: Radiologic Findings and Literature Review. Radiol. Cardiothorac. Imaging 2020, 2, e200034. [Google Scholar] [CrossRef] [Green Version]
- Apostolopoulos, I.D.; Mpesiana, T.A. Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys. Eng. Sci. Med. 2020, 43, 635–640. [Google Scholar] [CrossRef] [Green Version]
- Hall, L.O.; Paul, R.; Goldgof, D.B.; Goldgof, G.M. Finding Covid-19 From Chest X-Rays Using Deep Learning on a Small Dataset. arXiv 2020, arXiv:2004.02060. Available online: http://arxiv.org/abs/2004.02060 (accessed on 15 August 2021).
- Narin, A.; Kaya, C.; Pamuk, Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Anal. Appl. 2021, 24, 1207–1220. [Google Scholar] [CrossRef]
- Tang, Y.-X.; Tang, Y.-B.; Peng, Y.; Yan, K.; Bagheri, M.; Redd, B.A.; Brandon, C.J.; Lu, Z.; Han, M.; Xiao, J.; et al. Automated abnormality classification of chest radiographs using deep convolutional neural networks. NPJ Digit. Med. 2020, 3, 1–8. [Google Scholar] [CrossRef]
- Minaee, S.; Kafieh, R.; Sonka, M.; Yazdani, S.; Soufi, G.J. Deep-COVID: Predicting COVID-19 from chest X-ray images using deep transfer learning. Med. Image Anal. 2020, 65, 101794. [Google Scholar] [CrossRef]
- Afshar, P.; Heidarian, S.; Naderkhani, F.; Oikonomou, A.; Plataniotis, K.N.; Mohammadi, A. COVID-CAPS: A capsule network-based framework for identification of COVID-19 cases from X-ray images. Pattern Recognit. Lett. 2020, 138, 638–643. [Google Scholar] [CrossRef]
- Ahsan, M.; Gupta, K.; Islam, M.; Sen, S.; Rahman, L.; Hossain, M.S. COVID-19 Symptoms Detection Based on NasNetMobile with Explainable AI Using Various Imaging Modalities. Mach. Learn. Knowl. Extr. 2020, 2, 490–504. [Google Scholar] [CrossRef]
- Hemdan, E.E.; Shouman, M.A.; Karar, M.E. COVIDXnet: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv 2020, arXiv:2003.11055. Available online: http://arxiv.org/abs/2003.11055 (accessed on 19 August 2021).
- Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Acharya, U.R. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 2020, 121, 103792. [Google Scholar] [CrossRef]
- Khan, A.I.; Shah, J.L.; Bhat, M.M. CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest x-ray images. Comput. Methods Programs Biomed. 2020, 196, 105581. [Google Scholar] [CrossRef]
- Chandra, T.B.; Verma, K.; Singh, B.K.; Jain, D.; Netam, S.S. Coronavirus disease (COVID-19) detection in Chest X-Ray images using majority voting based classifier ensemble. Expert Syst. Appl. 2020, 165, 113909. [Google Scholar] [CrossRef]
- Sekeroglu, B.; Ozsahin, I. Detection of COVID-19 from Chest X-Ray Images Using Convolutional Neural Networks. SLAS Technol. Transl. Life Sci. Innov. 2020, 25, 553–565. [Google Scholar] [CrossRef]
- Pandit, M.; Banday, S.; Naaz, R.; Chishti, M. Automatic detection of COVID-19 from chest radiographs using deep learning. Radiography 2020, 27, 483–489. [Google Scholar] [CrossRef]
- Arias-Londono, J.D.; Gomez-Garcia, J.A.; Moro-Velazquez, L.; Godino-Llorente, J.I. Artificial Intelligence Applied to Chest X-Ray Images for the Automatic Detection of COVID-19. A Thoughtful Evaluation Approach. IEEE Access 2020, 8, 226811–226827. [Google Scholar] [CrossRef]
- Yamac, M.; Ahishali, M.; Degerli, A.; Kiranyaz, S.; Chowdhury, M.E.H.; Gabbouj, M. Convolutional Sparse Support Estimator-Based COVID-19 Recognition from X-Ray Images. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 1810–1820. [Google Scholar] [CrossRef]
- Wang, L.; Lin, Z.Q.; Wong, A. COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci. Rep. 2020, 10, 19549. [Google Scholar] [CrossRef]
- Abbas, A.; Abdelsamea, M.M.; Gaber, M.M. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 2020, 51, 854–864. [Google Scholar] [CrossRef]
- Toraman, S.; Alakus, T.B.; Turkoglu, I. Convolutional capsnet: A novel articial neural network approach to detect COVID-19 disease from X-ray images using capsule networks. Chaos Solitons Fractals 2020, 140, 110122. [Google Scholar] [CrossRef]
- Das, N.N.; Kumar, N.; Kaur, M.; Kumar, V.; Singh, D. Automated Deep Transfer Learning-Based Approach for Detection of COVID-19 Infection in Chest X-rays. IRBM 2020. [Google Scholar] [CrossRef]
- Hu, M.; Lin, H.; Fan, Z.; Gao, W.; Yang, L.; Liu, C.; Song, Q. Learning to Recognize Chest-Xray Images Faster and More Efficiently Based on Multi-Kernel Depthwise Convolution. IEEE Access 2020, 8, 37265–37274. [Google Scholar] [CrossRef]
- Ismael, A.M.; Şengür, A. Deep learning approaches for COVID-19 detection based on chest X-ray images. Expert Syst. Appl. 2020, 164, 114054. [Google Scholar] [CrossRef]
- Shankar, K.; Perumal, E.; Díaz, V.G.; Tiwari, P.; Gupta, D.; Saudagar, A.K.J.; Muhammad, K. An optimal cascaded recurrent neural network for intelligent COVID-19 detection using Chest X-ray images. Appl. Soft Comput. 2021, 113, 107878. [Google Scholar] [CrossRef]
- Kaggle. Available online: https://www.kaggle.com/tawsifurrahman/covid19-radiography-database?fbclid=IwAR0rw_prTvf9R0zInrJQkTFazeBaESxh3rB6otdrPdAWJDonEbIl2Nf6epk (accessed on 10 September 2021).
- EnsNet. Available online: https://github.com/HCIILAB/Scene-Text-Removal (accessed on 15 July 2021).
- Zhang, S.; Liu, Y.; Jin, L.; Huang, Y.; Lai, S. Ensnet: Ensconce text in the wild. Proc. AAAI Conf. Artif. Intell. 2019, 33, 801–808. [Google Scholar] [CrossRef]
- Keras Applications. Available online: https://keras.io/api/applications/ (accessed on 22 July 2021).
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Tasnim, Z.; Chakraborty, S.; Shamrat, F.M.J.M.; Chowdhury, A.N.; Alam Nuha, H.; Karim, A.; Zahir, S.B.; Billah, M. Deep Learning Predictive Model for Colon Cancer Patient using CNN-based Classification. Int. J. Adv. Comput. Sci. Appl. 2021, 12. [Google Scholar] [CrossRef]
- Rosner, B.; Glynn, R.J.; Lee, M.T. Incorporation of Clustering Effects for the Wilcoxon Rank Sum Test: A Large—Sample Approach. Biometrics 2003, 59, 1089–1098. [Google Scholar] [CrossRef]
Reference | Number of Image Samples | Classes | Architectures | Best Performing Architecture | Performance/Accuracy |
---|---|---|---|---|---|
[35] | 1428 | 3 | VGG19, MobileNetV2, Inception, Xception, InceptionResNetV2 | MobileNetV2 | Acc = 96.78% |
[36] | 204 | 2 | VGG16 + Resnet50 | VGG16 + Resnet50 + custom CNN | Acc = 89.2% |
[37] | 100 | 2 | ResNet50, InceptionV3 and InceptionRes-NetV2 | ResNet50 | Acc = 98% |
[38] | 21,152 | 2 | CNN | CNN | Acc = 94.64% |
[39] | 5184 | 2 | ResNet18, ResNet50, SqueezeNet, DenseNet-121 | SqueezeNet | Sensitivity = 98%, Specificity = 92.9% |
[40] | 13,975 | 2 | COVID-CAPS | COVID-CAPS | Acc = 95.7%
[41] | 400 | 2 | VGG16, InceptionResNetV2, ResNet50, DenseNet201, VGG19, MobilenetV2, NasNetMobile, and ResNet15V2 | NasNetMobile | Acc = 93.94% |
[42] | 75 | 2 | VGG19, Xception, ResNetV2, DenseNet201, InceptionV3, MobileNetV2, InceptionResNetV2 | VGG19, DenseNet | F1 scores = 0.91 |
[43] | 1127 | 2 | Modified Darknet | Modified Darknet | Acc = 98% |
[44] | 1257 | 3 | Xception | Xception | Acc = 94% |
[45] | 2356 | 3 | ACoS system | ACoS | Acc = 91.33% |
[46] | 6100 | 3 | SVM, LR, nB, DT, and kNN + VGG16, ResNet50, MobileNetV2, DenseNet121 | Mean result | Acc = 98.5% |
[47] | 1428 | 2 | VGG16 | VGG16 | Acc = 96% |
[48] | 79,500 | 3 | Grad-CAM | Grad-CAM | Acc = 91.5% |
[49] | 6200 | 4 | CSEN-based classifier | CSEN-based Classifier | Sensitivity = 98% Specificity = 95% |
[50] | 13,975 | 19 | COVID-Net | COVID-Net | Acc = 93.3% |
[51] | 196 | 3 | DeTraC | DeTraC | Acc = 93.1%
[52] | 3150 | 3 | CapsNet | CapsNet | Acc = 97% |
[53] | 1127 | 3 | Xception | Xception | Acc = 97% |
[54] | 7470 | 2 | MD-Conv | MD-Conv | Acc = 93.4% |
[55] | 380 | 2 | Novel CNN Model | Novel CNN Model | Acc = 91.6% |
[56] | 247 | 2 | BMO-CRNN | BMO-CRNN | Sensitivity = 97.01% Acc = 97.31% F-value = 97.53%
Augmentation Technique | Range |
---|---|
Horizontal flip | True |
Rotation range | 10 |
Width shift range | 0.1 |
Height shift range | 0.1 |
Vertical flip | False |
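These settings map one-to-one onto Keras' ImageDataGenerator. The sketch below shows that mapping; the dataset path, target size, batch size and pixel rescaling are illustrative assumptions, and the 80%/20% split from Section 3.1 is expressed here via validation_split for brevity.

```python
# Augmentation settings from the table above expressed with Keras' ImageDataGenerator.
# Path, target size, batch size and rescaling are assumptions, not from the paper.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rescale=1.0 / 255,          # assumption: pixel scaling as preprocessing
    horizontal_flip=True,       # Horizontal flip = True
    rotation_range=10,          # Rotation range = 10
    width_shift_range=0.1,      # Width shift range = 0.1
    height_shift_range=0.1,     # Height shift range = 0.1
    vertical_flip=False,        # Vertical flip = False
    validation_split=0.2,       # 80% train / 20% held-out split
)

train_gen = datagen.flow_from_directory(
    "chest_xray/", target_size=(224, 224), class_mode="binary",
    batch_size=300, subset="training")
test_gen = datagen.flow_from_directory(
    "chest_xray/", target_size=(224, 224), class_mode="binary",
    batch_size=300, subset="validation")
```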
Features | Values |
---|---|
Total Number of Images | 52,000 |
Disease Types | 2 |
Dimension (Size in Pixels) | Matches each classifier's input resolution (e.g., 256 × 256 pixels for AlexNet)
Color Grading | Grays, Cyan, Spectrum |
COVID-19 Images | 26,000 (After Augmentation) |
Healthy Images | 26,000 (After Augmentation) |
Training Images | 41,600 |
Testing Images | 10,400 |
Input | Operator | t (Expansion Factor) | c (Output Channels) | n (Repeats) | s (Stride) |
---|---|---|---|---|---|
224 × 224 × 3 | Conv2d | 1 | 32 | 1 | 2
224 × 224 × 3 Average pool (3, 3) | bottleneck | 3 | 16 | 1 | 1 |
128 × 128 × 3 Average pool (3, 3) | bottleneck | 3 | 24 | 1 | 2 |
128 × 128 × 3 Maxpool (2, 2) | bottleneck | 3 | 32 | 2 | 2 |
128 × 128 × 3 Maxpool (1, 1) | bottleneck | 3 | 64 | 2 | 1 |
64 × 64 × 3 Maxpool (1, 1) | bottleneck | 3 | 96 | 1 | 2 |
64 × 64 × 3 | bottleneck | 3 | 160 | 1 | 1 |
64 × 64 × 3 | bottleneck | 3 | 320 | 1 | 2 |
56 × 56 × 3 | Conv2d | - | k | - | -
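The t, c, n, s columns follow the MobileNetV2 convention: expansion factor, output channels, number of block repeats, and stride of the first block in each stack. For reference, a generic inverted-residual bottleneck of that form is sketched below; it reproduces the standard MobileNetV2 block rather than the pooling-augmented variant listed in the table, whose exact pooling placement the table alone does not determine.

```python
# Generic MobileNetV2-style inverted residual bottleneck (reference sketch, not the
# authors' modified block): 1x1 expansion by t, 3x3 depthwise conv with stride s,
# 1x1 linear projection to c channels, skip connection when shapes match.
import tensorflow as tf
from tensorflow.keras import layers

def bottleneck(x, t: int, c: int, s: int):
    in_channels = x.shape[-1]
    y = layers.Conv2D(t * in_channels, 1, padding="same", use_bias=False)(x)     # expand
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(6.0)(y)
    y = layers.DepthwiseConv2D(3, strides=s, padding="same", use_bias=False)(y)  # depthwise
    y = layers.BatchNormalization()(y)
    y = layers.ReLU(6.0)(y)
    y = layers.Conv2D(c, 1, padding="same", use_bias=False)(y)                   # linear projection
    y = layers.BatchNormalization()(y)
    if s == 1 and in_channels == c:
        y = layers.Add()([x, y])                                                 # residual connection
    return y

def stack(x, t, c, n, s):
    # n repeats per row of the table; only the first block in a stack uses stride s
    for i in range(n):
        x = bottleneck(x, t, c, s if i == 0 else 1)
    return x
```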
Model | Batch Size | Number of Epochs | Hidden Layer Size | Dropout | Learning Rate | Activation | Optimizer | Kernel Size |
---|---|---|---|---|---|---|---|---|
VGG16 | 150 | 200 | 8–96 neurons | 0.1 | 0.00001 | ReLU, Sigmoid | Adam | 2 × 2
VGG19 | 150 | 200 | 8–96 neurons | 0.1 | 0.0001 | ReLU, Sigmoid | RMSprop | 2 × 2
InceptionV3 | 200 | 300 | 8–96 neurons | 0.1 | 0.0001 | ReLU, Sigmoid | Nadam | 2 × 2
ResNet50 | 100 | 200 | 8–96 neurons | 0.1 | 0.001 | ReLU, Sigmoid | Adamax | 2 × 2
ResNet101 | 250 | 300 | 8–96 neurons | 0.1 | 0.0001 | ReLU, Sigmoid | Adam | 2 × 2
GoogLeNet | 50 | 150 | 8–96 neurons | 0.1 | 0.0001 | ReLU, Sigmoid | SGD | 2 × 2
MobileNetV2 | 250 | 300 | 8–96 neurons | 0.1 | 0.01 | ReLU, Sigmoid | RMSprop | 2 × 2
AlexNet | 100 | 150 | 8–96 neurons | 0.1 | 0.00001 | ReLU, Sigmoid | Adadelta | 2 × 2
EfficientNet B7 | 200 | 300 | 8–96 neurons | 0.1 | 0.000001 | ReLU, Sigmoid | Adamax | 2 × 2
DenseNet121 | 200 | 350 | 8–96 neurons | 0.1 | 0.00001 | ReLU, Sigmoid | Adagrad | 2 × 2
NFNet | 150 | 250 | 8–96 neurons | 0.1 | 0.0001 | ReLU, Sigmoid | Adadelta | 2 × 2
Modified MobileNetV2 (Proposed Method) | 300 | 400 | 8–96 neurons | 0.1 | 0.0000001 | ReLU | RMSprop | 2 × 2
Model | Image Mode & Training Records | Testing F1 Score |
---|---|---|
VGG16 | X-ray; no overfitting evident (dropout applied) | 0.87
VGG19 | X-ray; no overfitting evident (dropout applied) | 0.88
InceptionV3 | X-ray; no overfitting evident (dropout applied) | 0.91
ResNet50 | X-ray; no overfitting evident (dropout applied) | 0.82
ResNet101 | X-ray; no overfitting evident (dropout applied) | 0.93
GoogLeNet | X-ray; no overfitting evident (dropout applied) | 0.90
MobileNetV2 | X-ray; no overfitting evident (dropout applied) | 0.95
AlexNet | X-ray; no overfitting evident (dropout applied) | 0.87
EfficientNetB7 | X-ray; no overfitting evident (dropout applied) | 0.92
DenseNet121 | X-ray; no overfitting evident (dropout applied) | 0.94
NFNet | X-ray; no overfitting evident (dropout applied) | 0.92
Modified MobileNetV2 (Proposed Method) | X-ray; no overfitting evident (dropout applied) | 0.98
Model | Sensitivity/Recall | Precision/PPV | Accuracy | NPV | Specificity | F1 Score |
---|---|---|---|---|---|---|
Modified MobileNetV2 | 98% | 97% | 98% | 98% | 97% | 97% |
MobileNetV2 | 97% | 96% | 97% | 97% | 96% | 96% |
VGG19 | 95% | 94% | 95% | 95% | 93% | 94% |
ResNet101 | 96% | 93% | 95% | 96% | 93% | 94% |
InceptionV3 | 95% | 92% | 94% | 95% | 92% | 93% |
NFNET | 94% | 94% | 94% | 93% | 93% | 94% |
GoogLeNet | 94% | 93% | 94% | 94% | 93% | 93% |
DenseNet121 | 92% | 96% | 94% | 91% | 96% | 93% |
EfficientNetB7 | 93% | 90% | 92% | 93% | 90% | 91% |
AlexNet | 95% | 87% | 91% | 95% | 97% | 90% |
VGG16 | 93% | 89% | 91% | 93% | 98% | 90% |
ResNet50 | 87% | 81% | 84% | 86% | 80% | 83% |
Model | Compilation Time |
---|---|
VGG19 | 5 h 18 min 14 s |
VGG16 | 6 h 27 min 7 s |
MobileNetV2 | 5 h 42 min 34 s |
InceptionV3 | 7 h 6 min 41 s |
ResNet50 | 6 h 1 min 22 s |
AlexNet | 4 h 54 min 3 s |
GoogLeNet | 4 h 57 min 52 s
ResNet101 | 6 h 18 min 18 s |
EfficientNetB7 | 6 h 45 min 47 s |
DenseNet121 | 6 h 42 min 35 s
NFNet | 6 h 24 min 26 s
Modified MobileNetV2 (Proposed Method) | 2 h 50 min 21 s
Modified MobileNetV2 | Sensitivity/Recall | Precision/PPV | Accuracy | NPV | Specificity | F1 Score |
---|---|---|---|---|---|---|
Train | 97% | 96% | 97% | 97% | 96% | 96% |
Validation | 98% | 97% | 98% | 98% | 97% | 97% |
No. | Pairwise Model Comparison | p Value | Significance < 0.05 |
---|---|---|---|
1 | Modified MobileNetV2 versus VGG16 | 0.0001 | Yes |
2 | Modified MobileNetV2 versus VGG19 | 0.0117 | Yes |
3 | Modified MobileNetV2 versus InceptionV3 | 0.0001 | Yes |
4 | Modified MobileNetV2 versus ResNet50 | 0.0001 | Yes |
5 | Modified MobileNetV2 versus ResNet101 | 0.0030 | Yes |
6 | Modified MobileNetV2 versus GoogLeNet | 0.0001 | Yes |
7 | Modified MobileNetV2 versus MobileNetV2 | 0.0028 | Yes |
8 | Modified MobileNetV2 versus AlexNet | 0.0001 | Yes |
9 | Modified MobileNetV2 versus EfficientNetB7 | 0.0001 | Yes |
10 | Modified MobileNetV2 versus DenseNet121 | 0.0068 | Yes |
11 | Modified MobileNetV2 versus NFNet | 0.0023 | Yes |
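The pairwise significance values above come from the Wilcoxon rank sum test [reference: Rosner et al.]. A minimal example of such a comparison is shown below; the two accuracy lists are hypothetical placeholders for per-run (or per-fold) scores of the two models being compared, not values from this study.

```python
# Hedged example of a pairwise Wilcoxon rank sum comparison between two models.
# The accuracy lists are hypothetical placeholders, not results from this study.
from scipy.stats import ranksums

acc_modified_mobilenetv2 = [0.981, 0.978, 0.983, 0.979, 0.980]  # hypothetical per-run accuracies
acc_vgg16 = [0.912, 0.908, 0.915, 0.910, 0.909]                 # hypothetical per-run accuracies

stat, p_value = ranksums(acc_modified_mobilenetv2, acc_vgg16)
print(f"p = {p_value:.4f}, significant at 0.05: {p_value < 0.05}")
```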
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).