Proceeding Paper

A Hybrid Deep Learning Approach for COVID-19 Diagnosis via CT and X-ray Medical Images †

by Channabasava Chola 1,2, Pramodha Mallikarjuna 1, Abdullah Y. Muaad 1,3,*, J. V. Bibal Benifa 2, Jayappa Hanumanthappa 1 and Mugahed A. Al-antari 3,4,5,*

1 Department of Studies in Computer Science, University of Mysore, Manasagangothri, Mysore 570006, India
2 Department of Studies in Computer Science and Engineering, Indian Institute of Information Technology Kottayam, Valavoor 686635, India
3 IT Department, Sana’a Community College, Sana’a 5695, Yemen
4 Department of Biomedical Engineering, Sana’a Community College, Sana’a 5695, Yemen
5 Department of Artificial Intelligence, Sejong University, Sejong 30019, Korea
* Authors to whom correspondence should be addressed.
Presented at the 1st International Electronic Conference on Algorithms, 27 September–10 October 2021; Available online: https://ioca2021.sciforum.net/.
Comput. Sci. Math. Forum 2022, 2(1), 13; https://doi.org/10.3390/IOCA2021-10909
Published: 29 September 2021
(This article belongs to the Proceedings of The 1st International Electronic Conference on Algorithms)

Abstract:

The COVID-19 pandemic has been a global health problem since December 2019. To date, the total numbers of confirmed cases, recoveries, and deaths have increased exponentially on a daily basis worldwide. In this paper, a hybrid deep learning approach is proposed to classify the COVID-19 disease directly from both chest X-ray (CXR) and CT images. Two AI-based deep learning models, namely ResNet50 and EfficientNetB0, are adopted and trained using both chest X-ray and CT images. Public datasets consisting of 7863 chest X-ray images and 2613 CT images are used to deploy, train, and evaluate the proposed deep learning models. The EfficientNetB0 model consistently produced better classification results, achieving overall diagnosis accuracies of 99.36% and 99.23% using CXR and CT images, respectively. The hybrid AI-based model achieved an overall classification accuracy of 99.58%. The proposed hybrid deep learning system appears trustworthy and reliable for assisting health care systems, patients, and physicians.

1. Introduction

The outbreak of COVID-19 is considered a pandemic that has affected people around the world within a short period. The disease is rapidly transmitted among people in different local and global communities through travel [1]. To date, the numbers of confirmed cases and deaths have reached 226 million and 4 million worldwide, respectively. COVID-19 is caused by a novel coronavirus named Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), which targets the human respiratory system. The confirmed biological symptoms of COVID-19 are fever, shortness of breath, dizziness, cough, headache, sore throat, fatigue, and muscle pain. Accurate and rapid classification techniques have become necessary to automatically diagnose COVID-19, especially in a pandemic situation. Recently, AI techniques (deep learning and machine learning) have been employed to build robust decision-making systems against COVID-19 [2,3,4]. Traditionally, COVID-19 screening relies on RT-PCR (reverse transcription polymerase chain reaction) carried out in a pathogen laboratory. Due to its high time consumption and low sensitivity, medical imaging modalities such as computed tomography (CT) and chest X-ray (CXR) images are being used to fight and classify the COVID-19 respiratory disease [5,6,7]. The lungs are the major target of the COVID-19 virus. RT-PCR is useful for diagnosing the disease, while CT and CXR images are useful for assessing the damage caused to the lungs by COVID-19 at various stages of the disease. Inflammation of lung tissue can be identified from the size and shape of the affected tissue with the help of X-ray and CT images [3,6]. Deep convolutional networks are extensively utilized in the fields of hyperspectral imaging, microscopic imaging, and medical image analysis; recent coronavirus-related diagnostic studies have also used deep learning-based architectures, namely COVID-SDNet, DL-CRC, and EDL-COVID [1,6,8]. Machine learning-based techniques such as SOM-LWL, PB-OCSVM, and one-shot cluster-based approaches for COVID-19 CXR images have also been introduced for COVID-19 detection and classification [9,10,11]. In addition, transfer learning methods have been implemented using MobileNet, VGG, ResNet, AlexNet, and DenseNet architectures as base modules for the task of COVID-19 image classification [12,13]. Computer-aided diagnosis systems based on deep learning methods have also been proposed for several medical image analysis tasks, such as breast cancer, brain tumors, and kidney and lung disorders [5,12,14,15].
In this work, a hybrid deep learning system is deployed to perform the COVID-19 classification task using CXR and CT datasets. Deep convolutional networks provide a promising feature extraction mechanism that automatically derives a large number of deep features directly from the input images, thus improving overall classification accuracy. The objective of this study is to provide a unified deep learning model using both medical CXR and CT images. The main contributions of this hybrid system are summarized as follows: First, a novel hybrid deep learning model is built in a unified architecture to automatically and rapidly classify COVID-19 disease using both CXR and CT images. Second, deep learning regularization techniques, namely data balancing, transfer learning, and data augmentation, are used in order to improve overall diagnostic performance. Such experiments will help to improve understanding of COVID-19 disease and to diagnose it using different medical imaging modalities [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36].
The objective of this work is to deliver a robust and feasible AI-based system that provides practical solutions for COVID-19 diagnosis to medical institutions, health care service providers, physicians, and patients.
The rest of this paper is organized as follows: a review of the relevant literature is presented in Section 2; the technical aspects of the deep learning classification methods are detailed in Section 3; the experimental results for COVID-19 are reported and discussed in Section 4 and Section 5; finally, the most important findings of this work are summarized in the conclusion in Section 6.

2. Related Work

In early 2020, while the world was under pandemic conditions due to the COVID-19 outbreak, several computer-aided diagnosis systems based on deep learning were introduced to predict COVID-19 from digital X-ray and CT images. In [18], Yang et al. addressed the diagnosis of COVID-19 from CT images and proposed an AI-based diagnosis system built on DenseNet and ResNet pre-trained models with transfer learning, reporting an accuracy of 89%, an AUC of 98%, and an F1 score of 90%; the dataset was made open-source. Taresh et al. in [13] designed a model to detect COVID-19 from X-ray images using convolutional neural networks with transfer learning based on VGG16 and MobileNet modules, and reported the highest accuracy of 98.28% with VGG16 as the base model. In [19], Alruwaili et al. used an improved Inception-ResNetV2 to diagnose COVID-19 in X-ray images, achieving high accuracy on a radiography dataset; Xception, VGG16, InceptionV3, ResNet50V2, MobileNetV2, ResNet101V2, and DenseNet121 models were also evaluated on CXR images, with the Inception-ResNetV2 model achieving 99.8%. In [20], Ahmad et al. designed a deep learning model for detecting COVID-19 from chest X-ray images using different CNN models such as MobileNet, InceptionV3, and ResNet50. The best model was InceptionV3, which reported an accuracy of 95.75% and an F-score of 91.47%. In [5], Sekeroglu et al. used a CNN model to detect COVID-19 from chest X-ray images on an available dataset. Using a CNN without preprocessing and with a reduced number of layers, they were able to detect COVID-19 from a limited and imbalanced set of chest X-ray images with an accuracy of 98.50%. In [21], Chanda et al. implemented a new model to diagnose COVID-19 from chest X-rays. They used a CNN-based transfer learning framework for the classification task and reported an accuracy of 96.13%. In [22], Rehman et al. designed a contactless monitoring system to detect and diagnose COVID-19 using breathing-rate measurements. In [8], Tabik et al. contributed a new open-source dataset, called COVIDGR-1.0. In their experiments, they designed a new model to detect COVID-19 from X-ray images that also helps to measure severity, reporting classification results of 86.90% for moderate and 97.72% for severe cases on the CXR database. In [12], Zhao et al. presented a new deep learning model for the diagnosis of COVID-19 using CT images; the transfer learning technique achieved a good accuracy of 98%. In [6], Sakib et al. proposed a deep learning-based chest radiograph classification (DL-CRC) framework to distinguish COVID-19 cases with high accuracy from two classes, abnormal and normal. The DL-CRC framework consists of two parts, the DARI algorithm and generic data augmentation, and achieved an accuracy of 93.94%. In [23], Hosny et al. designed a hybrid model to detect COVID-19 using two types of images, CT scans and chest X-rays, combined so as to keep memory usage and computation time low. Their framework achieved 99.3% and 93.2% for CXR and CT images, respectively. In [7], transfer learning was presented to detect COVID-19 using X-ray and CT-scan images.
This is because initial screening of chest X-rays (CXR) may provide significant information for the detection of suspected COVID-19 cases. In [24], Ravi et al. presented a model to detect COVID-19 using both CT and CXR datasets. In [25], Benmalek et al. compared the performance of CT-scan and chest X-ray images for COVID-19 detection using a CNN, achieving accuracies of 98.5% and 98.6%, respectively. In [16], Chowdhury et al. presented a robust model for detecting COVID-19 pneumonia from chest X-ray images using pre-trained deep learning techniques. They created a database by merging data from previous works and obtained a classification accuracy of 99.77%.

3. Methods and Materials

The proposed hybrid deep learning system for COVID-19 diagnosis is illustrated in Figure 1. Two different deep learning models, namely ResNet50 and EfficientNetB0, were used for the CXR and CT images, respectively. Both deep learning models were trained for 100 epochs. The final layers of both models were concatenated to merge the derived deep features and generate a single, more robust deep-feature set. This set carries promising features generated from both CXR and CT images at the same time, which is key to improving the overall accuracy of the proposed deep learning system. The concatenated deep features were then reduced to 1D form using global average pooling (GAP), making the derived feature maps suitable for the following two fully connected layers. Finally, a softmax layer is used to make the final decision of whether the input is a positive COVID-19 case or a normal negative case. To reduce overfitting that may occur during the training phase, a dropout rate of 0.5 is used. For pre-training, the transfer learning strategy is used with the ImageNet database.
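To make the pipeline concrete, the following minimal Keras sketch assembles a hybrid model along the lines described above. It is an illustrative reconstruction rather than the authors' exact implementation: the 224 × 224 input size and the fully connected layer widths (256 and 128 units) are assumptions, while the two pretrained backbones, concatenation, GAP, 0.5 dropout, and softmax output follow the text.

# A minimal sketch of the hybrid model (input size and layer widths assumed).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50, EfficientNetB0

def build_hybrid_model(input_shape=(224, 224, 3), num_classes=2):
    # One ImageNet-pretrained backbone per imaging modality.
    cxr_input = layers.Input(shape=input_shape, name="cxr_input")
    ct_input = layers.Input(shape=input_shape, name="ct_input")
    resnet = ResNet50(include_top=False, weights="imagenet", input_tensor=cxr_input)
    effnet = EfficientNetB0(include_top=False, weights="imagenet", input_tensor=ct_input)

    # Concatenate the final feature maps of both backbones, then collapse the
    # merged maps to a 1D deep-feature vector with global average pooling.
    merged = layers.Concatenate(axis=-1)([resnet.output, effnet.output])
    x = layers.GlobalAveragePooling2D()(merged)

    # Two fully connected layers with 0.5 dropout in between to limit overfitting.
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(128, activation="relu")(x)

    # Softmax decision: positive COVID-19 case vs. normal negative case.
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs=[cxr_input, ct_input], outputs=outputs)

model = build_hybrid_model()
model.summary()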

3.1. Preprocessing

Preprocessing is a significant step of the pipeline. Here, the raw data were transformed into the required input format and dimensions [3]. We incorporated data augmentation and a class-balancing strategy to reduce overfitting, which also acted as a catalyst for the training process [15,31]. Each dataset was then divided into 70% for training, 20% for testing, and 10% for validation, with the images for each class selected in a randomized manner. For weight initialization, the transfer learning strategy was applied using the ImageNet dataset [3,15,31].
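A minimal sketch of this preprocessing step is given below, assuming scikit-learn and Keras utilities. The stratified 70/20/10 split follows the text, while the specific augmentation settings and variable names are illustrative assumptions.

# A minimal preprocessing sketch: stratified 70/20/10 split plus simple
# augmentation for the training stream (settings are assumptions).
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def split_dataset(image_paths, labels, seed=42):
    # Hold out 30% of the data, stratified per class (randomized selection).
    x_train, x_hold, y_train, y_hold = train_test_split(
        image_paths, labels, test_size=0.30, stratify=labels, random_state=seed)
    # Split the hold-out into 20% testing and 10% validation of the full set.
    x_test, x_val, y_test, y_val = train_test_split(
        x_hold, y_hold, test_size=1 / 3, stratify=y_hold, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)

# Augmentation applied only to the training images to reduce overfitting.
train_augmenter = ImageDataGenerator(
    rotation_range=10, width_shift_range=0.1, height_shift_range=0.1,
    zoom_range=0.1, horizontal_flip=True)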

3.2. Feature Extraction

Deep CNNs have shown improved performance across domains, particularly in medical imaging, and generalize well; transfer learning is being explored to provide an efficient solution [6]. In our experimental analysis, we employed the ResNet50 [27] and EfficientNetB0 [24,26] models for the task of feature generation, and the deep features were later passed to custom user-specific layers. In our work, the features were passed to a global average pooling layer, followed by fully connected layers [24]. We used two FC layers to improve efficiency and introduced dropout between them to improve generalization. These extracted features were passed to the classification layer to assign the appropriate class label to the given input data instance.
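For illustration, the sketch below shows the single-backbone counterpart of this feature pipeline (pretrained backbone, GAP, two FC layers with dropout, softmax); the layer widths and the dropout rate are assumptions rather than reported settings.

# A minimal sketch of the single-backbone classifier head (widths assumed).
from tensorflow.keras import layers, Model

def add_classification_head(backbone, num_classes=2):
    # Collapse the backbone's final feature maps to a 1D deep-feature vector.
    x = layers.GlobalAveragePooling2D()(backbone.output)
    # Two fully connected layers with dropout between them for generalization.
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(128, activation="relu")(x)
    # Classification layer assigning the class label (COVID-19 vs. normal).
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return Model(inputs=backbone.input, outputs=outputs)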

3.3. Classification

The deep features extracted by the feed-forward ResNet50 and EfficientNetB0 models were passed to the softmax layer for classification. Results were generated separately for the CT and CXR databases and are reported in the tables below; they are promising compared with existing research.

4. Experimental Analysis

4.1. Dataset

To quantify our work, two datasets of chest X-ray and CT images were used. These datasets are publicly available on Kaggle [16,17,18] and are described in Table 1.

4.2. Implementation Environment

To perform all experiments in this study, we used a PC with the following specifications: an Intel Core i7-6850K processor (3.60 GHz) with 32 GB RAM and an NVIDIA GeForce GTX 1050 Ti GPU. The deep learning algorithms were implemented in Python 3.8.0 with Anaconda (Jupyter Notebook). Python-based ML libraries such as Torch, TensorFlow, OpenCV, pandas, and scikit-learn were utilized to compute the performance metrics of the proposed methods; at the same time, TensorFlow and Keras in Colab were used to implement transfer learning. The results and discussions concerning the various techniques incorporated are highlighted in the subsequent sections. The source code is available at GitHub (https://github.com/IIITK-AI-LAB/Hybrid-covid-model (accessed on 25 September 2021)).

4.3. Evaluation Metrics

To assess our proposed system, we used the evaluation metrics of recall/sensitivity (Re), specificity (Sp), F1-measure (F-M), and overall accuracy (Az). The mathematical formulas for these evaluation metrics are defined as follows:
Recall/Sensitivity: \( \mathrm{Re} = \frac{TP}{TP + FN} \),
Specificity: \( \mathrm{Sp} = \frac{TN}{TN + FP} \),
F1-score: \( \mathrm{F\text{-}M} = \frac{2 \cdot TP}{2 \cdot TP + FP + FN} \),
Overall accuracy: \( \mathrm{Az} = \frac{TP + TN}{TP + FN + TN + FP} \),
where TP, TN, FP, and FN are defined to represent the number of true positive, true negative, false positive, and false negative detections, respectively. The confusion matrix is used to derive all of these parameters.
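These quantities can be computed directly from the binary confusion matrix; the short sketch below, using scikit-learn, mirrors the formulas above (the variable names are illustrative).

# A minimal sketch computing Re, Sp, F-M, and Az from the confusion matrix.
from sklearn.metrics import confusion_matrix

def evaluate(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    recall = tp / (tp + fn)                     # Re (sensitivity)
    specificity = tn / (tn + fp)                # Sp
    f_measure = 2 * tp / (2 * tp + fp + fn)     # F-M
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # Az
    return recall, specificity, f_measure, accuracy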

5. Results and Discussion

This section presents our experimental results in two different scenarios: a single straightforward scenario and a hybrid scenario. The former means that the two deep learning models (i.e., ResNet50 and EfficientNetB0) are used and tested separately to investigate which model provides the best overall classification accuracy for a single dataset (chest X-ray or CT images). The latter means that both deep learning models are concatenated to produce the proposed hybrid model, as shown in Figure 1, in order to check which hybrid combination achieves the best performance when both types of medical chest images are used.

5.1. Single Straightforward Scenario

For each medical chest dataset (i.e., chest X-ray or CT images), two different experiments were performed: one using the deep convolutional ResNet50 model and the other using the EfficientNetB0 deep learning model. In other words, each single deep learning model (i.e., ResNet50 or EfficientNetB0) was trained twice, once on the chest X-ray images and again on the CT images. In both training runs, the same deep learning architecture and the same training/testing settings were used.
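As an illustration, the sketch below trains the same EfficientNetB0-based classifier (reusing the head sketched in Section 3.2) separately on each modality for 100 epochs; the data generators (cxr_train, cxr_val, ct_train, ct_val), the optimizer, and the loss are hypothetical placeholders rather than settings reported in the paper.

# A minimal sketch of the single straightforward scenario: one backbone,
# trained twice with identical settings (generators and optimizer assumed).
import tensorflow as tf

results = {}
for name, (train_gen, val_gen) in {"cxr": (cxr_train, cxr_val),
                                   "ct": (ct_train, ct_val)}.items():
    backbone = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    model = add_classification_head(backbone)  # head from the Section 3.2 sketch
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_gen, validation_data=val_gen, epochs=100)
    results[name] = model.evaluate(val_gen)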

5.1.1. COVID-19 Classification Based on Chest X-ray Images

In this case, the input dataset consists only of X-ray images for the ResNet50 and EfficientNetB0 deep learning models. The overall classification evaluation results are summarized in Table 2. Although both deep learning models achieve almost the same results, the EfficientNetB0 model achieves a slightly better overall accuracy of 99.36%.

5.1.2. COVID-19 Classification Based on Chest CT Images

The chest CT dataset alone is used to separately train the ResNet50 and EfficientNetB0 deep learning models. The overall classification evaluation results are reported in Table 3. The EfficientNetB0 model achieves a slightly better overall accuracy of 99.23%, while the other evaluation metrics show a consistent and stable performance.

5.2. Hybrid Scenario: COVID-19 Classification Using Chest X-ray and CT Images

In the proposed hybrid deep learning model, both the chest X-ray and CT datasets are used as input, as shown in Figure 1. The classification evaluation results for the best hybrid combinations are presented in Table 4. Each row in Table 4 reports the classification assessment results obtained by using a single deep learning model in the hybrid style for both chest X-ray and CT images.

6. Conclusions

A hybrid deep learning model is proposed to automatically detect COVID-19 respiratory disease from both chest X-ray and CT images. The proposed hybrid model uses two deep convolutional networks, namely ResNet and EfficientNet, to generate promising deep hierarchical features. The proposed hybrid deep learning approach achieved a classification accuracy of 99.58% using chest X-ray and CT images. Further improvements could be achieved by including ultrasound images as well, which would help to build a more robust and reliable diagnosis system to fight COVID-19 in its early stages. The promising results could help to provide a better real-time diagnosis system for health care service providers, physicians, and patients.

Author Contributions

Conceptualization, P.M., A.Y.M., C.C. and M.A.A.-a.; methodology, A.Y.M., J.V.B.B., C.C. and M.A.A.-a.; software, P.M., A.Y.M. and C.C.; validation, A.Y.M., C.C. and M.A.A.-a.; formal analysis, A.Y.M., J.V.B.B. and C.C.; investigation, A.Y.M., J.V.B.B., J.H., C.C. and M.A.A.-a.; resources, A.Y.M., J.H., J.V.B.B. and C.C.; data curation, P.M., A.Y.M. and C.C.; writing—original draft preparation, P.M., A.Y.M., C.C. and M.A.A.-a.; writing—review and editing, P.M., A.Y.M., J.V.B.B., C.C. and M.A.A.-a.; visualization, J.V.B.B. and M.A.A.-a.; supervision, J.V.B.B., J.H. and M.A.A.-a.; project administration, J.V.B.B. and M.A.A.-a.; funding acquisition, J.V.B.B., J.H., C.C. and M.A.A.-a. All authors have read and agreed to the published version of the manuscript.

Funding

The experimental part of the work reported herein (Medical_Image_DL-PR) is fully supported by the National PARAM Supercomputing Facility (NPSF), Centre for Development of Advanced Computing (C-DAC), Savitribai Phule Pune University Campus, India. We sincerely thank them for providing such excellent computing resources.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

There are no conflicts of interest associated with publishing this paper.

References

  1. Tang, S.; Wang, C.; Nie, J.; Kumar, N.; Zhang, Y.; Xiong, Z.; Barnawi, A. EDL-COVID: Ensemble Deep Learning for COVID-19 Case Detection from Chest X-ray Images. IEEE Trans. Ind. Inform. 2021, 17, 6539–6549. [Google Scholar] [CrossRef]
  2. Islam, M.N.; Inan, T.T.; Rafi, S.; Akter, S.S.; Sarker, I.H.; Islam, A.K.M.N. A Systematic Review on the Use of AI and ML for Fighting the COVID-19 Pandemic. IEEE Trans. Artif. Intell. 2021, 1, 258–270. [Google Scholar] [CrossRef]
  3. Al-antari, M.A.; Hua, C.H.; Bang, J.; Lee, S. Fast deep learning computer-aided diagnosis of COVID-19 based on digital chest x-ray images. Appl. Intell. 2021, 51, 2890–2907. [Google Scholar] [CrossRef] [PubMed]
  4. Chola, C.; Heyat, M.B.B.; Akhtar, F.; Al Shorman, O.; Benifa, J.V.; Muaad, A.Y.M.; Masadeh, M.; Alkahatni, F. IoT Based Intelligent Computer-Aided Diagnosis and Decision Making System for Health Care. In Proceedings of the 2021 International Conference on Information Technology (ICIT), Amman, Jordan, 14–15 July 2021; pp. 184–189. [Google Scholar] [CrossRef]
  5. Sekeroglu, B.; Ozsahin, I. Detection of COVID-19 from Chest X-ray Images Using Convolutional Neural Networks. SLAS Technol. 2020, 25, 553–565. [Google Scholar] [CrossRef]
  6. Sakib, S.; Tazrin, T.; Fouda, M.M.; Fadlullah, Z.M.; Guizani, M. DL-CRC: Deep learning-based chest radiograph classification for covid-19 detection: A novel approach. IEEE Access 2020, 8, 171575–171589. [Google Scholar] [CrossRef]
  7. Panwar, H.; Gupta, P.K.; Siddiqui, M.K.; Morales-Menendez, R.; Bhardwaj, P.; Singh, V. A deep learning and grad-CAM based color visualization approach for fast detection of COVID-19 cases using chest X-ray and CT-Scan images. Chaos Solitons Fractals 2020, 140, 110190. [Google Scholar] [CrossRef]
  8. Tabik, S.; Gomez-Rios, A.; Martin-Rodriguez, J.L.; Sevillano-Garcia, I.; Rey-Area, M.; Charte, D.; Guirado, E.; Suarez, J.L.; Luengo, J.; Valero-Gonzalez, M.A.; et al. COVIDGR Dataset and COVID-SDNet Methodology for Predicting COVID-19 Based on Chest X-ray Images. IEEE J. Biomed. Health Inform. 2020, 24, 3595–3605. [Google Scholar] [CrossRef]
  9. Aradhya, V.N.M.; Mahmud, M.; Chowdhury, M.; Guru, D.S.; Kaiser, M.S.; Azad, S. Learning through One Shot: A Phase by Phase Approach for COVID-19 Chest X-ray Classification. In Proceedings of the 2020 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES), Langkawi Island, Malaysia, 1–3 March 2021; pp. 241–244. [Google Scholar] [CrossRef]
  10. Aradhya, V.N.M.; Mahmud, M.; Guru, D.S.; Agarwal, B.; Kaiser, M.S. One-shot Cluster-Based Approach for the Detection of COVID-19 from Chest X-ray Images. Cognit. Comput. 2021, 13, 873–881. [Google Scholar] [CrossRef]
  11. Sonbhadra, S.K.; Agarwal, S.; Nagabhushan, P. Pinball-OCSVM for early-stage COVID-19 diagnosis with limited posteroanterior chest X-ray images. arXiv 2020, arXiv:2010.08115. [Google Scholar]
  12. Zhao, W.; Jiang, W.; Qiu, X. Deep learning for COVID-19 detection based on CT images. Sci. Rep. 2021, 11, 14353. [Google Scholar] [CrossRef]
  13. Taresh, M.M.; Zhu, N.; Ali, T.A.A.; Hameed, A.S.; Mutar, M.L. Transfer Learning to Detect COVID-19 Automatically from X-ray Images Using Convolutional Neural Networks. Int. J. Biomed. Imaging 2021, 2021, 8828404. [Google Scholar] [CrossRef]
  14. Al-masni, M.A.; Al-antari, M.A.; Min, H.; Hyeon, N.; Kim, T. A deep learning model integrating FrCN and residual convolutional networks for skin lesion segmentation and classification. In Proceedings of the 2019 IEEE Eurasia Conference on Biomedical Engineering, Healthcare and Sustainability (ECBIOS), Okinawa, Japan, 31 May–3 June 2019; pp. 95–98. [Google Scholar]
  15. Al-antari, M.A.; Han, S.M.; Kim, T.S. Evaluation of deep learning detection and classification towards computer-aided diagnosis of breast lesions in digital X-ray mammograms. Comput. Methods Programs Biomed. 2020, 196, 105584. [Google Scholar] [CrossRef]
  16. Chowdhury, M.E.H.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Islam, K.R.; Khan, M.S.; Iqbal, A.; al Emadi, N. Can AI Help in Screening Viral and COVID-19 Pneumonia? IEEE Access 2020, 8, 132665–132676. [Google Scholar] [CrossRef]
  17. Afshar, P.; Heidarian, S.; Enshaei, N.; Naderkhani, F.; Rafiee, M.J.; Oikonomou, A.; Fard, F.B.; Samimi, K.; Plataniotis, K.N.; Mohammadi, A. COVID-CT-MD, COVID-19 computed tomography scan dataset applicable in machine learning and deep learning. Sci. Data 2021, 8, 121. [Google Scholar] [CrossRef]
  18. Yang, X.; He, X.; Zhao, J.; Zhang, Y.; Zhang, S.; Xie, P. COVID-CT-Dataset: A CT Scan Dataset about COVID-19. arXiv 2020, arXiv:2003.13865. [Google Scholar]
  19. Alruwaili, M.; Shehab, A.; Abd El-Ghany, S. COVID-19 Diagnosis Using an Enhanced Inception-ResNetV2 Deep Learning Model in CXR Images. J. Healthc. Eng. 2021, 2021, 6658058. [Google Scholar] [CrossRef]
  20. Ahmad, F.; Farooq, A.; Ghani, M.U. Deep Ensemble Model for Classification of Novel Coronavirus in Chest X-ray Images. Comput. Intell. Neurosci. 2021, 2021, 8890226. [Google Scholar] [CrossRef]
  21. Chanda, P.B.; Banerjee, S.; Dalai, V.; Ray, R. CNN based transfer learning framework for classification of COVID-19 disease from chest X-ray. In Proceedings of the 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 6–8 May 2021; pp. 1367–1373. [Google Scholar] [CrossRef]
  22. Rehman, M.; Shah, R.A.; Khan, M.B.; Ali, N.A.A.; Alotaibi, A.A.; Althobaiti, T.; Ramzan, N.; Shah, S.A.; Yang, X.; Alomainy, A.; et al. Contactless Small-Scale Movement Monitoring System Using Software Defined Radio for Early Diagnosis of COVID-19. IEEE Sens. J. 2021, 21, 17180–17188. [Google Scholar] [CrossRef]
  23. Hosny, K.M.; Darwish, M.M.; Li, K.; Salah, A. COVID-19 diagnosis from CT scans and chest X-ray images using low-cost Raspberry Pi. PLoS ONE 2021, 16, e0250688. [Google Scholar] [CrossRef]
  24. Ravi, V.; Narasimhan, H.; Chakraborty, C.; Pham, T.D. Deep learning-based meta-classifier approach for COVID-19 classification using CT scan and chest X-ray images. Multimedia Syst. 2021, 1–15. [Google Scholar] [CrossRef]
  25. Benmalek, E.; Elmhamdi, J.; Jilbab, A. Comparing CT scan and chest X-ray imaging for COVID-19 diagnosis. Biomed. Eng. Adv. 2021, 1, 100003. [Google Scholar] [CrossRef]
  26. Tan, M.; Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  27. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  28. Rajpal, S.; Lakhyani, N.; Kumar, A.; Kohli, R.; Kumar, N. Using handpicked features in conjunction with ResNet-50 for improved detection of COVID-19 from chest X-ray images. Chaos Solitons Fractals 2021, 145, 110749. [Google Scholar] [CrossRef]
  29. Muaad, A.; Jayappa, H.; Al-Antari, M.; Lee, S. ArCAR: A Novel Deep Learning Computer-Aided Recognition for Character-Level Arabic Text Representation and Recognition. Algorithms 2021, 14, 216. [Google Scholar] [CrossRef]
  30. Bai, H.X.; Wang, R.; Xiong, Z.; Hsieh, B.; Chang, K.; Halsey, K.; Tran, T.M.; Choi, J.W.; Wang, D.C.; Shi, L.B.; et al. Artificial intelligence augmentation of radiologist performance in distinguishing COVID-19 from pneumonia of other origin at chest CT. Radiology 2020, 296, E156–E165. [Google Scholar] [CrossRef]
  31. Chowdhury, N.K.; Kabir, M.A.; Rahman, M.; Rezoana, N. ECOVNet: An Ensemble of Deep Convolutional Neural Networks Based on EfficientNet to Detect COVID-19 From Chest X-rays. arXiv 2020, arXiv:2009.11850. [Google Scholar]
  32. Al-antari, M.A.; Hua, C.-H.; Bang, J.; Choi, D.-J.; Kang, S.M.; Lee, S. A Rapid Deep Learning Computer-Aided Diagnosis to Simultaneously Detect and Classify the Novel COVID-19 Pandemic. In Proceedings of the 2020 IEEE-EMBS Conference on Biomedical Engineering and Sciences (IECBES)—IECBES 2020, Langkawi Island, Malaysia, 1–3 March 2021. [Google Scholar]
  33. Al-antari, M.A.; Al-masni, M.A.; Choi, M.-T.; Han, S.-M.; Kim, T.-S. A Fully Integrated Computer-Aided Diagnosis System for Digital X-ray Mammograms via Deep Learning Detection, Segmentation, and Classification. Int. J. Med. Inform. 2018, 117, 44–54. [Google Scholar] [CrossRef]
  34. Al-antari, M.A.; Al-masni, M.A.; Park, S.-U.; Park, J.; Metwally, M.K.; Kadah, Y.M.; Han, S.-M.; Kim, T.-S. An Automatic Computer-Aided Diagnosis System for Breast Cancer in Digital Mammograms via Deep Belief Network. J. Med. Biol. Eng. 2017, 38, 443–456. [Google Scholar] [CrossRef]
  35. Al-masni, M.A.; Al-antari, M.A.; Park, J.-M.; Gi, G.; Kim, T.-Y.; Rivera, P.; Valarezo, E.; Choi, M.; Han, S.-M.; Kim, T.-S. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system. Comput. Methods Programs Biomed. 2018, 157, 85–94. [Google Scholar] [CrossRef] [PubMed]
  36. Al-masni, M.A.; Al-antari, M.A.; Choi, M.-T.; Han, S.-M.; Kim, T.-S. Skin Lesion Segmentation in Dermoscopy Images via Deep Full Resolution Convolutional Networks. Comput. Methods Programs Biomed. 2018, 162, 221–231. [Google Scholar] [CrossRef]
Figure 1. Schematic hybrid deep learning diagram of the COVID-19 classification system.
Table 1. Chest X-ray and CT dataset distribution per class.

Type of Images | COVID | Normal
CT             | 1323  | 1290
X-ray          | 3923  | 3960
Table 2. Classification evaluation results (%) using chest X-ray images.

Model          | Az    | Sp   | Re   | F-M
ResNet50       | 98.47 | 99.0 | 100  | 99.0
EfficientNetB0 | 99.36 | 98.0 | 99.0 | 99.0
Table 3. Classification evaluation results (%) using chest CT images.

Model          | Az    | Sp   | Re   | F-M
ResNet50       | 98.85 | 99.0 | 98.0 | 99.0
EfficientNetB0 | 99.23 | 99.0 | 99.0 | 99.0
Table 4. Classification evaluation results (%) for the proposed hybrid deep learning model using both chest X-ray and CT medical images.

Model          | Az    | Sp   | Re   | F-M
ResNet50       | 98.01 | 99.0 | 99.0 | 99.0
EfficientNetB0 | 99.58 | 99.0 | 99.0 | 99.0
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
