
Within the Lack of Chest COVID-19 X-ray Dataset: A Novel Detection Model Based on GAN and Deep Transfer Learning

1 Department of Computer Science, Faculty of Computers and Artificial Intelligence, Benha University, Benha 13511, Egypt
2 Department of Mathematics, University of New Mexico, Gallup Campus, NM 87301, USA
3 Department of Information Technology, Faculty of Computers and Artificial Intelligence, Cairo University, Cairo 12613, Egypt
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(4), 651; https://doi.org/10.3390/sym12040651
Received: 5 April 2020 / Revised: 11 April 2020 / Accepted: 16 April 2020 / Published: 20 April 2020

Abstract

The coronavirus (COVID-19) pandemic is putting healthcare systems across the world under unprecedented and increasing pressure, according to the World Health Organization (WHO). With advances in computer algorithms, and especially artificial intelligence, detecting this type of virus in its early stages will help patients recover quickly and help relieve the pressure on healthcare systems. In this paper, a GAN with deep transfer learning for coronavirus detection in chest X-ray images is presented. The lack of COVID-19 datasets, especially of chest X-ray images, is the main motivation of this study. The main idea is to collect all the COVID-19 images available at the time of writing and use a GAN to generate more images, in order to detect this virus from the available X-ray images with the highest possible accuracy. The dataset used in this research was collected from different sources and is available for researchers to download and use. The collected dataset contains 307 images across four classes: COVID-19, normal, pneumonia bacterial, and pneumonia virus. Three deep transfer models are selected in this research for investigation: Alexnet, Googlenet, and Resnet18. These models were chosen because their architectures contain a small number of layers, which reduces the complexity, memory consumption, and execution time of the proposed model. Three scenarios are tested in this paper: the first includes four classes from the dataset, the second includes three classes, and the third includes two classes. All scenarios include the COVID-19 class, as detecting it is the main target of this research.
In the first scenario, Googlenet is selected as the main deep transfer model, as it achieves 80.6% testing accuracy. In the second scenario, Alexnet is selected as the main deep transfer model, as it achieves 85.2% testing accuracy, while in the third scenario, which includes two classes (COVID-19 and normal), Googlenet is selected as the main deep transfer model, as it achieves 100% testing accuracy and 99.9% validation accuracy. All the performance measures strengthen the results obtained in this research.
Keywords: 2019 novel coronavirus; deep transfer learning; machine learning; COVID-19; SARS-CoV-2; convolutional neural network; GAN

1. Introduction

In 2019, Wuhan, a commercial center of Hubei province in China, faced an outbreak of a novel coronavirus that killed hundreds and infected thousands of individuals within the initial days of the epidemic. Chinese researchers named the novel virus the 2019 novel coronavirus (2019-nCov), or the Wuhan virus [1]. The International Committee on Taxonomy of Viruses titled the 2019 virus Severe Acute Respiratory Syndrome CoronaVirus-2 (SARS-CoV-2) and the disease Coronavirus disease 2019 (COVID-19) [2,3,4]. The subgroups of the coronavirus family are alpha-CoV (α), beta-CoV (β), gamma-CoV (γ), and delta-CoV (δ). SARS-CoV-2 was announced to be a member of the beta-CoV (β) group of coronaviruses. In 2003, people in Guangdong were infected with a virus that led to Severe Acute Respiratory Syndrome; it was confirmed as a member of the beta-CoV (β) subgroup and was titled SARS-CoV [5]. Historically, SARS-CoV infected more than 8000 individuals across 26 countries, with a death rate of 9%. SARS-CoV-2, by comparison, has infected more than 750,000 individuals across 150 countries, with a death rate of 4%, as of the time of writing. This demonstrates that the transmission rate of SARS-CoV-2 is higher than that of SARS-CoV. Its transmission ability is enhanced by a genuine recombination of the S protein in the RBD region [6].
Beta-coronaviruses have caused disease in people via wild animals, generally bats or rats [7,8]. SARS-CoV-1 and MERS-CoV (camel flu) were transmitted to people from wild cats and Arabian camels, respectively, as shown in Figure 1. The sale and purchase of unknown animals may be the source of coronavirus infection. The discovery of the various lineages of pangolin coronavirus and their proximity to SARS-CoV-2 suggests that pangolins should be considered possible hosts of the novel 2019 coronavirus. Wild animals must be removed from wild animal markets to stop animal coronavirus transmission [9]. Coronavirus transmission has been confirmed by the World Health Organization (WHO) and by the US Centers for Disease Control, with evidence of human-to-human transmission in five different cases outside China, namely in Italy [10], the US [11], Nepal [12], Germany [13], and Vietnam [14]. As of 31 March 2020, SARS-CoV-2 accounted for more than 750,000 confirmed cases, 150,000 recovered cases, and 35,000 deaths. Table 1 shows some statistics about SARS-CoV-2 [15].

1.1. Deep Learning

Deep Learning (DL) is a subfield of machine learning concerned with techniques inspired by the neurons of the brain [16]. Today, DL is quickly becoming a crucial technology for image/video classification and detection. DL relies on algorithms that simulate reasoning processes and mine data, or that develop abstractions [17]. The hidden deep layers of a DL model map input data to labels, analyzing hidden patterns in complicated data [18]. Besides their use in medical X-ray recognition, DL architectures are also used in other medical applications of image processing and computer vision. DL improves such medical systems to achieve better outcomes, widen illness coverage, and implement applicable real-time medical-image disease detection systems [19,20]. Table 2 shows a series of major contributions to deep learning in the field of neural networks [21].

1.2. Generative Adversarial Network

A Generative Adversarial Network (GAN) is a class of deep learning models invented by Ian Goodfellow in 2014 [23]. GAN models have two main networks, called the generative network and the discriminative network. The first neural network is the generator, responsible for generating new fake data instances that look like the training data. The discriminator tries to distinguish between real data and the fake (artificially generated) data produced by the generator, as shown in Figure 2. In a GAN, the generator network tries to fool the discriminator network, while the discriminator network tries to resist being fooled [24,25,26,27].

1.3. Convolution Neural Networks

Convolutional Neural Networks (ConvNets or CNNs) are a category of deep learning techniques used primarily to recognize and classify images. Convolutional Neural Networks have achieved extraordinary success in medical image/video classification and detection. In 2012, Ciregan et al. and Krizhevsky et al. [28,29] showed how CNNs based on Graphics Processing Units (GPUs) can improve many vision benchmark records, such as MNIST [30], Chinese characters [31], Arabic digit recognition [32], Arabic handwritten character recognition [33], NORB (jittered, cluttered) [34], traffic signs [35], and the large-scale ImageNet [36] benchmarks. In the following years, various advances in ConvNets further increased accuracy on image detection/classification competition tasks. Pre-trained ConvNet models introduced significant improvements in successive annual challenges of the ImageNet Large Scale Visual Recognition Competition (ILSVRC). Deep Transfer Learning (DTL) is a deep learning (DL) approach that focuses on storing weights gained while solving one image classification problem and applying them to a related problem. Many DTL models have been introduced, such as VGGNet [37], GoogleNet [38], ResNet [39], Xception [40], Inception-V3 [41], and DenseNet [42].
The novelty of this paper is as follows: (i) the introduced ConvNet models have an end-to-end structure without classical feature extraction and selection methods; (ii) we show that GAN is an effective technique for generating X-ray images; (iii) chest X-ray images are one of the best tools for the classification of SARS-CoV-2; and (iv) deep transfer learning models are shown to yield very high outcomes on the small COVID-19 dataset. The rest of the paper is organized as follows. Section 2 explores related work and determines the scope of this work. Section 3 discusses the dataset used in our paper. Section 4 presents the proposed models, while Section 5 illustrates and discusses the achieved outcomes. Finally, Section 6 provides conclusions and directions for further research.

2. Related Works

This section surveys recent scientific research on applying machine learning and deep learning to pneumonia and coronavirus X-ray classification. Classical image classification can be divided into three main stages: image preprocessing, feature extraction, and feature classification. Stephen et al. [43] proposed a new study for classifying and detecting the presence of pneumonia in a collection of chest X-ray image samples, based on a ConvNet model trained from scratch on the dataset in [44]. The outcomes obtained were a training loss of 12.88%, training accuracy of 95.31%, validation loss of 18.35%, and validation accuracy of 93.73%.
In [45], the authors introduced an early diagnosis system for pneumonia chest X-ray images based on Xception and VGG16. This study used a database of approximately 5800 frontal chest X-ray images introduced by Kermany et al. [44], with 1600 normal cases and 4200 abnormal pneumonia cases. The trial outcomes showed that the VGG-16 network outperformed the Xception network with a classification rate of 87%, whereas the Xception network outperformed VGG-16 in sensitivity (85%), precision (86%), and recall (94%); the Xception network is thus more suited to classifying X-ray images than VGG-16. Varshni et al. [46] proposed pre-trained ConvNet models (VGG-16, Xception, Res50, Dense-121, and Dense-169) as feature extractors, followed by different classifiers (SVM, Random Forest, k-nearest neighbors, Naïve Bayes), for detecting normal and abnormal pneumonia X-ray images. The authors used ChestX-ray14, introduced by Wang et al. [47].
Chouhan et al. [48] introduced an ensemble deep model that combines the outputs of all transfer deep models for the classification of pneumonia using deep learning. The Guangzhou Medical Center database [44] contains a total of approximately 5200 X-ray images, divided into 1300 normal and 3900 abnormal X-rays. The proposed model reached a misclassification error of 3.6% with a sensitivity of 99.6% on test data from the database. Ref. [49] proposed Compressed Sensing (CS) with a deep transfer learning model for automatic classification of pneumonia in X-ray images to assist medical physicians. The dataset used for this work contained approximately 5850 X-ray images in two categories (abnormal/normal) obtained from Kaggle. Comprehensive simulation outcomes showed that the proposed approach classifies pneumonia (abnormal/normal) with 2.66% misclassification.
In this research, we introduce deep transfer learning models to classify COVID-19 X-ray images. Before feeding chest X-ray images to the convolutional neural network, we use a GAN to generate additional X-ray images. After that, a classifier is used to ensemble the classification outputs. The proposed transfer model was evaluated on the proposed dataset.

3. Dataset

The COVID-19 dataset [50] utilized in this research [51] was created by Dr. Joseph Cohen, a postdoctoral fellow at the University of Montreal. The Pneumonia Chest X-ray Images dataset [44] was used to build the proposed dataset. The dataset [52] is organized into two folders (train, test) and contains sub-folders for each image category (COVID-19/normal/pneumonia bacterial/pneumonia virus). There are 306 X-ray images (JPEG) in four categories (COVID-19/normal/pneumonia bacterial/pneumonia virus). The number of images for each class is presented in Table 3. Figure 3 illustrates samples of the images used in this research. Figure 4 also illustrates that there is a lot of variation in image sizes and features, which may affect the accuracy of the proposed model, as presented in the next section.

4. The Proposed Model

The proposed model includes two main deep learning components: the first is the GAN and the second is the deep transfer model. Figure 4 illustrates the proposed GAN/deep transfer learning model. The GAN is used in the preprocessing phase, while the deep transfer model is used in the training, validation, and testing phases.
Algorithm 1 introduces the proposed transfer model in detail. Let D = {Alexnet, Googlenet, Resnet18} be the set of transfer models. Each deep transfer model is fine-tuned on the COVID-19 X-ray images dataset (X, Y), where X is the set of N input images, each of size 512 length × 512 width, and Y holds the corresponding class labels, Y = {y | y ∈ {COVID-19, normal, pneumonia bacterial, pneumonia virus}}. The dataset is divided into a 90% portion for training and validation, (X_train, Y_train), and a 10% portion for testing. The 90% portion is further divided into 80% for training and 20% for validation; this 80/20 split has proved efficient in many studies, such as [53,54,55,56,57]. The training data are then divided into mini-batches, each of size n = 64, such that (X_q, Y_q) ⊂ (X_train, Y_train), q = 1, 2, …, N/n, and each deep model d ∈ D is iteratively optimized to reduce the loss function illustrated in Equation (1).
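The split described above (90% train+validation vs. 10% test, with the 90% further split 80/20) and the mini-batching can be sketched as follows. This is an illustrative Python sketch, not the paper's MATLAB code; the function names and shuffling seed are assumptions, and it simplifies by splitting a single pool of samples, whereas the paper draws the test set from the original (non-generated) images only:

```python
import random

def split_dataset(samples, seed=0):
    """Split into 72% train, 18% validation, 10% test (90/10, then 80/20 of the 90%)."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_trainval = int(0.9 * n)          # 90% for training + validation
    n_train = int(0.8 * n_trainval)    # 80% of that portion for training
    train = items[:n_train]
    val = items[n_train:n_trainval]
    test = items[n_trainval:]
    return train, val, test

def mini_batches(data, n=64):
    """Yield mini-batches (X_q, Y_q) of size n = 64, as used in the optimization."""
    for q in range(0, len(data), n):
        yield data[q:q + n]
```

For example, 1000 samples would yield 720 training, 180 validation, and 100 test samples.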
C(w, X_i) = (1/n) Σ_{x ∈ X_i, y ∈ Y_i} c(d(x, w), y),    (1)
where d(x, w) is the ConvNet model that predicts the label y for input x given the weights w, and c(·) is the multi-class entropy loss function.
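Equation (1) averages a per-sample multi-class cross-entropy over a mini-batch. A minimal sketch, assuming `predict` stands in for the network d(x, w) and returns a probability per class:

```python
import math

def cross_entropy(probs, y):
    """c(d(x, w), y): negative log-probability of the true class y."""
    return -math.log(probs[y])

def batch_loss(predict, batch):
    """C(w, X_i): mean cross-entropy over a mini-batch of (x, y) pairs."""
    return sum(cross_entropy(predict(x), y) for x, y in batch) / len(batch)
```

With a predictor that outputs a uniform distribution over the four classes, the loss equals ln 4 ≈ 1.386, which is the expected starting point before training.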
This research relies on deep transfer learning CNN architectures to transfer learned weights, reducing the training time, the mathematical calculations, and the consumption of the available hardware resources. Several studies [53,58,59] tried to build their own architectures, but those architectures are problem-specific and cannot fit the data presented in this paper. The deep transfer learning CNN models investigated in this research are Alexnet [29], Resnet18 [39], and Googlenet [60]. These CNN models have few layers compared to large CNN models such as Xception [40], Densenet [42], and Inceptionresnet [61], which consist of 71, 201, and 164 layers, respectively. This choice reduces the training time and the complexity of the calculations.
Algorithm 1 Introduced algorithm.
1: Input data: COVID-19 Chest X-ray Images (X, Y), where Y = {y | y ∈ {COVID-19, normal, pneumonia bacterial, pneumonia virus}}
2: Output data: the transfer model that detects the COVID-19 chest X-ray image x ∈ X
3: Pre-processing steps:
4:   Resize each X-ray input to 512 height × 512 width
5:   Generate X-ray images using GAN
6:   Mean-normalize each X-ray input
7: Download and reuse the transfer models D = {Alexnet, Googlenet, Resnet18}
8: Replace the last layer of each transfer model with a (4 × 1) layer
9: foreach d ∈ D do
10:   μ = 0.01
11:   for epochs = 1 to 20 do
12:     foreach mini-batch (X_i, Y_i) ⊂ (X_train, Y_train) do
          Modify the coefficients of the transfer model d(·)
          if the error rate has increased for five epochs then
            μ = μ × 0.01
          end
        end
13:   end
14: end
15: foreach x ∈ X_test do
16:   Record the outcome of all transfer architectures d ∈ D
17: end
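The learning-rate rule in Algorithm 1 (μ multiplied by 0.01 once the error rate has increased for five epochs) can be sketched as a small scheduler. The class name and interface below are illustrative assumptions, not part of the paper:

```python
class StepDownScheduler:
    """Multiply the learning rate mu by 0.01 after five consecutive epochs of rising error."""
    def __init__(self, mu=0.01, patience=5, factor=0.01):
        self.mu = mu
        self.patience = patience
        self.factor = factor
        self.best_error = float("inf")
        self.bad_epochs = 0

    def step(self, error):
        if error > self.best_error:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.mu *= self.factor   # mu = mu x 0.01, as in Algorithm 1
                self.bad_epochs = 0
        else:
            self.best_error = error
            self.bad_epochs = 0
        return self.mu
```

Feeding it a run of five worsening epoch errors drops μ from 0.01 to 0.0001.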

4.1. Generative Adversarial Network

GANs consist of two different networks that are trained simultaneously: the first network is trained on image generation, while the other is used for discrimination. GANs are considered a special type of deep learning model. The first network is the generator, while the second network is the discriminator. The generator network in this research consists of five transposed convolutional layers, four ReLU layers, four batch normalization layers, and a Tanh layer at the end of the model, while the discriminator network consists of five convolutional layers, four leaky ReLU layers, and three batch normalization layers. All the convolutional and transposed convolutional layers use the same window size of 4 × 4 pixels with 64 filters per layer. Figure 5 presents the structure and the sequence of layers of the GAN proposed in this research.
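The paper specifies 4 × 4 kernels for the five transposed convolutions but not their strides or paddings. Assuming a common DCGAN-style configuration (a stride-1, padding-0 first layer followed by stride-2, padding-1 upsampling layers), the generator's spatial sizes can be checked with the standard transposed-convolution formula:

```python
def tconv_out(w, f=4, s=2, p=1):
    """Output width of a transposed convolution: (w - 1) * s - 2p + f."""
    return (w - 1) * s - 2 * p + f

# Hypothetical DCGAN-style generator: 1x1 latent -> 4 -> 8 -> 16 -> 32 -> 64
sizes = [tconv_out(1, s=1, p=0)]   # first layer: stride 1, no padding
for _ in range(4):                 # four stride-2 upsampling layers
    sizes.append(tconv_out(sizes[-1]))
print(sizes)  # [4, 8, 16, 32, 64]
```

Each stride-2 layer exactly doubles the spatial size under these assumptions, which is why five such layers suffice to grow a latent vector into an image.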
The GAN helped in overcoming the overfitting problem caused by the limited number of images in the dataset. Moreover, it increased the number of dataset images to roughly 30 times the original dataset, reaching 8100 images across the four classes. This helps in achieving remarkable testing accuracy and performance metrics. The achieved results are discussed in detail in the experimental results section. Figure 6 presents samples of the GAN output for the COVID-19 class.

4.2. Deep Transfer Learning

Convolutional Neural Networks (ConvNets) are the most successful type of model for image classification and detection. A single ConvNet model contains many layers of neurons: earlier layers label edges and simple features, while deeper layers capture more complex deep features. An image is convolved with filters (kernels) and then max pooling is applied; this process may continue for several layers until recognizable features are obtained. Consider a feature map of size W_{l-1} × H_{l-1} × C_{l-1} (where W × H is width × height) and a filter bank in layer l - 1 with C_l kernels of size f_l × f_l × C_{l-1}, together with a stride s_l and padding p_l; the output feature map in layer l is of size W_l × H_l × C_l, as shown in Equation (2):
(W_l, H_l) = ⌊((W_{l-1}, H_{l-1}) + 2 p_l - f_l) / s_l⌋ + 1,    (2)
where ⌊·⌋ denotes the floor function. The kernel depth must equal the number of channels of the input map. The output of each feature map is computed as in Equation (3):
x_j^l = σ( Σ_{i ∈ V_j} x_i^{l-1} * f_{ij}^l + b_j^l ),    (3)
where i and j index the input and output feature maps of sizes W_{l-1} × H_{l-1} and W_l × H_l, respectively, V_j indicates the receptive field of the kernel, and b_j^l is the bias term. In Equation (3), σ(·) is a non-linearity function applied to obtain non-linearity in deep transfer learning. In our transfer method, we use the ReLU of Equation (4) as the non-linearity function for a rapid training process:
σ(x_input) = max(0, x_input).    (4)
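The convolution output-size rule of Equation (2) is a one-line function; a direct transcription:

```python
def conv_out(w, f, s=1, p=0):
    """Equation (2): output width = floor((w + 2p - f) / s) + 1."""
    return (w + 2 * p - f) // s + 1
```

For example, for the 512 × 512 inputs used here, a 4 × 4 filter with stride 2 and padding 1 gives `conv_out(512, 4, 2, 1) = 256`, halving the spatial size.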
Our cost function is given in Equation (5):
L(s, t) = L_cls(s_{c*}) + λ[p* > 0] L_reg(g, g*),    (5)
where s_{c*} is the output score of the true label c*, while g and g* denote the [g_x, g_y, g_w, g_h] parameters of the predicted and ground-truth bounding boxes. The indicator λ[p* > 0] considers only non-background boxes (p* = 0 denotes background). This cost function combines a detection loss L_cls and a regression loss L_reg, given in Equations (6)–(8):
L_cls(s_{c*}) = -log(s_{c*}),    (6)
and
L_reg(g, g*) = Σ_{i ∈ (x, y, w, h)} R_{L1}(g_i - g_i*),    (7)
where R_{L1} is the smooth L1 function:
R_{L1}(x) = 0.5 x² if |x| < 1; |x| - 0.5 otherwise.    (8)
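The piecewise regression loss is the standard smooth L1 function (quadratic below a threshold of 1, linear above it), summed over the four box coordinates as in Equation (7); a direct transcription:

```python
def smooth_l1(x):
    """Equation (8): 0.5 * x^2 for |x| < 1, |x| - 0.5 otherwise."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

def reg_loss(g, g_star):
    """Equation (7): sum of smooth L1 over the (x, y, w, h) offsets."""
    return sum(smooth_l1(gi - gsi) for gi, gsi in zip(g, g_star))
```

The two branches meet at |x| = 1 (both give 0.5), so the loss is continuous while staying robust to large offsets.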
In terms of the optimizer, Stochastic Gradient Descent (SGD) [62] with momentum 0.9 is chosen, which updates the weight parameters by combining the gradient of the previous iteration with the current gradient for fine-tuning. To avoid overfitting of the deep network, we use the dropout technique [63] and the early-stopping technique [64] to select the best training steps. As for the learning rate policy, a step-size schedule is used in SGD. We set the learning rate (μ) to 0.01 and the number of iterations to 2000. The mini-batch size is set to 64, and early stopping triggers after five epochs without accuracy improvement.
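The momentum SGD update described above (velocity weighted by 0.9, learning rate 0.01) can be sketched for a single parameter. The toy quadratic objective below is illustrative only, not part of the paper:

```python
def sgd_momentum(grad, w, steps=300, mu=0.01, beta=0.9):
    """Momentum SGD: the velocity v blends the previous update with the current gradient."""
    v = 0.0
    for _ in range(steps):
        v = beta * v - mu * grad(w)   # momentum 0.9, learning rate 0.01
        w = w + v
    return w

# Minimize f(w) = w^2 (gradient 2w) starting from w = 5.0
w = sgd_momentum(lambda w: 2 * w, 5.0)
```

The momentum term smooths successive updates, which is why it typically converges faster than plain SGD on such curved objectives.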

5. Experimental Results

The introduced model was coded in MATLAB. The development was CPU-specific. All experiments were conducted on a server equipped with an Intel Xeon processor (2 GHz) and 96 GB of RAM. The proposed model was tested under three different scenarios: the first tests the proposed model on four classes, the second on three classes, and the third on two classes. All test scenarios included the COVID-19 class. Every scenario consists of a validation phase and a testing phase. The validation phase uses 20% of the total generated images, while the testing phase uses around 10% of the original dataset.
The main difference between validation-phase and testing-phase accuracy is that, in the validation phase, the data are used to validate the generalization ability of the model, or for early stopping, during the training process, whereas the testing data are never used for training or validation. The data used in training, validation, and testing never overlap with each other, which yields a solid result for the proposed model.
Before listing the major results of this research, Table 4 presents the validation and testing accuracy for four classes before using the GAN as an image augmenter. The results in Table 4 show that the validation and testing accuracies are quite low and not acceptable for a coronavirus detection model.

5.1. Verification and Testing Accuracy Measurement

Testing accuracy is one of the measures that demonstrates the precision and accuracy of any proposed model. The confusion matrix is also an accurate measurement that gives more insight into the achieved validation and testing accuracy. First, the four-class scenario is investigated with the three deep transfer learning models: Alexnet, Googlenet, and Resnet18. Figure 7, Figure 8 and Figure 9 illustrate the confusion matrices for the validation and testing phases for the four classes in the dataset.
Table 5 summarizes the validation and testing accuracy of the different deep transfer models for four classes. According to validation accuracy, Resnet18 achieved the highest accuracy with 99.6%; this is due to the large number of parameters in the Resnet18 architecture, which contains 11.7 million parameters. This is not larger than Alexnet's parameter count, but Alexnet includes only 8 layers while Resnet18 includes 18 layers. According to testing accuracy, Googlenet achieved the highest accuracy with 80.6%; this is due to its large number of layers compared to the other models, as it contains about 22 layers.
The second scenario tested in this research is when the dataset contains only three classes. Figure 10, Figure 11 and Figure 12 illustrate the confusion matrices for the validation and testing phases for three classes in the dataset, including the COVID-19 class.
Table 6 summarizes the validation and testing accuracy of the different deep transfer models for three classes. According to validation accuracy, Resnet18 achieved the highest accuracy with 99.6%. According to testing accuracy, Alexnet achieved the highest accuracy with 85.2%; this may be due to the large number of parameters in the Alexnet architecture, which includes 61 million parameters, and also to the elimination of the fourth class, the pneumonia virus, which has features similar to COVID-19, itself considered a type of pneumonia virus. Eliminating the pneumonia virus class helps all the deep transfer models achieve better testing accuracy than when trained over four classes, as mentioned before, because COVID-19 is a special type of pneumonia virus.
The third scenario is tested when the dataset includes only two classes: the COVID-19 class and the normal class. Figure 13 illustrates the confusion matrices of the three different transfer models for validation accuracy, while the confusion matrix for testing accuracy, which is the same for all the deep transfer models selected in this research, is presented in Figure 14.
Table 7 summarizes the validation and testing accuracy of the different deep transfer models for two classes. According to validation accuracy, Googlenet achieved the highest accuracy with 99.9%. According to testing accuracy, all the pre-trained models (Alexnet, Googlenet, and Resnet18) achieved 100%. This is due to the elimination of the third and fourth classes, pneumonia bacterial and pneumonia virus, which have features similar to COVID-19. This leads to a noteworthy enhancement in testing accuracy: whichever deep transfer model is used, the testing accuracy reaches 100%. The best model is therefore chosen according to validation accuracy, where Googlenet achieved 99.9%, so Googlenet is the selected deep transfer model in the third scenario.
To conclude this part, every scenario has its own deep transfer model: in the first scenario, Googlenet was selected; in the second, Alexnet; and in the third, Googlenet. To draw a full conclusion about which deep transfer model fits the dataset and all scenarios, the testing accuracy for every class is required for each deep transfer model. Table 8 presents the testing accuracy for every class across the three scenarios. Table 8 does not help much in determining the deep transfer model that fits all scenarios, but for distinguishing the COVID-19 class among the other classes, Alexnet and Resnet18 are selected as deep transfer models, as they achieved 100% testing accuracy for the COVID-19 class whether the number of classes is 2, 3, or 4.

5.2. Performance Evaluation and Discussion

To estimate the performance of the proposed model, extra performance metrics need to be explored in this study. The most widespread performance measures in the field of deep learning are precision, sensitivity (recall), and F1 score [65]; they are presented in Equations (9)–(11).
Precision = TrueP / (TrueP + FalseP),    (9)
Sensitivity = TrueP / (TrueP + FalseN),    (10)
F1 Score = 2 × (Precision × Sensitivity) / (Precision + Sensitivity),    (11)
where TrueP is the count of true positive samples, TrueN is the count of true negative samples, FalseP is the count of false positive samples, and FalseN is the count of false negative samples from a confusion matrix.
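Equations (9)–(11) computed from confusion-matrix counts; a direct transcription:

```python
def precision(tp, fp):
    """Equation (9): fraction of predicted positives that are correct."""
    return tp / (tp + fp)

def sensitivity(tp, fn):
    """Equation (10): fraction of actual positives that are detected (recall)."""
    return tp / (tp + fn)

def f1_score(p, r):
    """Equation (11): harmonic mean of precision and sensitivity."""
    return 2 * p * r / (p + r)
```

For example, with 85 true positives, 15 false positives, and 15 false negatives, precision, sensitivity, and F1 all equal 0.85; the counts here are illustrative, not taken from Table 9.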
Table 9 presents the performance metrics of the different scenarios and deep transfer models for testing. The table illustrates that in the first scenario, which contains four classes, Googlenet achieved the highest percentage for the precision, sensitivity, and F1 score metrics, which strengthens the decision to choose Googlenet as the deep transfer model. The table also illustrates that in the second scenario, which contains three classes, Alexnet achieved the highest percentage for the precision and recall metrics, while Resnet achieved the highest F1 score with 88.10%; overall, however, Alexnet had the highest testing accuracy, which also strengthens the decision to choose Alexnet as the deep transfer model.
Table 9 also illustrates that in the third scenario, which contains two classes, all deep transfer learning models achieved the same highest percentage for the precision, recall, and F1 score metrics, which strengthens the decision to choose Googlenet, as it achieved the highest validation accuracy with 99.9%, as illustrated in Table 7.

6. Conclusions and Future Works

The 2019 novel coronavirus (COVID-19) belongs to a family of viruses that cause illnesses ranging from the common cold to more severe diseases that may lead to death, according to the World Health Organization (WHO). With advances in computer algorithms, and especially artificial intelligence, detecting this type of virus in its early stages will help in fast recovery. In this paper, a GAN with deep transfer learning for COVID-19 detection with limited chest X-ray images was presented. The lack of benchmark datasets for COVID-19, especially of chest X-ray images, was the main motivation of this research. The main idea is to collect all the available COVID-19 images and use a GAN to generate more images to help detect the virus from the available X-ray images. The dataset in this research was collected from different sources. The collected dataset contained 307 images across four classes: COVID-19, normal, pneumonia bacterial, and pneumonia virus.
Three deep transfer models were selected in this research for investigation. These models were chosen because their architectures contain a small number of layers, which reduces the complexity, memory consumption, and execution time of the proposed model. Three scenarios were tested in this paper: the first included four classes from the dataset, the second included three classes, and the third included two classes. All scenarios included the COVID-19 class, as detecting it was the main target of this research. In the first scenario, Googlenet was selected as the main deep transfer model, as it achieved 80.6% testing accuracy. In the second scenario, Alexnet was selected as the main deep transfer model, as it achieved 85.2% testing accuracy, while in the third scenario, which included two classes (COVID-19 and normal), Googlenet was selected as the main deep transfer model, as it achieved 100% testing accuracy and 99.9% validation accuracy. One open door for future work is to apply the deep models on a larger benchmark dataset.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Singhal, T. A Review of Coronavirus Disease-2019 (COVID-19). Indian J. Pediatrics 2020, 87, 281–286.
  2. Lai, C.-C.; Shih, T.-P.; Ko, W.-C.; Tang, H.-J.; Hsueh, P.-R. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and coronavirus disease-2019 (COVID-19): The epidemic and the challenges. Int. J. Antimicrob. Agents 2020, 55, 105924.
  3. Li, J.; Li, J.J.; Xie, X.; Cai, X.; Huang, J.; Tian, X.; Zhu, H. Game consumption and the 2019 novel coronavirus. Lancet Infect. Dis. 2020, 20, 275–276.
  4. Sharfstein, J.M.; Becker, S.J.; Mello, M.M. Diagnostic Testing for the Novel Coronavirus. JAMA 2020.
  5. Chang, L.; Yan, Y.; Wang, L. Coronavirus Disease 2019: Coronaviruses and Blood Safety. Transfus. Med. Rev. 2020.
  6. Shereen, M.A.; Khan, S.; Kazmi, A.; Bashir, N.; Siddique, R. COVID-19 infection: Origin, transmission, and characteristics of human coronaviruses. J. Adv. Res. 2020, 24, 91–98.
  7. Rabi, F.A.; Al Zoubi, M.S.; Kasasbeh, G.A.; Salameh, D.M.; Al-Nasser, A.D. SARS-CoV-2 and Coronavirus Disease 2019: What We Know So Far. Pathogens 2020, 9, 231.
  8. York, A. Novel coronavirus takes flight from bats? Nat. Rev. Microbiol. 2020, 18, 191.
  9. Lam, T.T.-Y.; Shum, M.H.-H.; Zhu, H.-C.; Tong, Y.-G.; Ni, X.-B.; Liao, Y.-S.; Wei, W.; Cheung, W.Y.-M.; Li, W.-J.; Li, L.-F.; et al. Identifying SARS-CoV-2 related coronaviruses in Malayan pangolins. Nature 2020, 1–6.
  10. Giovanetti, M.; Benvenuto, D.; Angeletti, S.; Ciccozzi, M. The first two cases of 2019-nCoV in Italy: Where they come from? J. Med. Virol. 2020, 92, 518–521.
  11. Holshue, M.L.; DeBolt, C.; Lindquist, S.; Lofy, K.H.; Wiesman, J.; Bruce, H.; Spitters, C.; Ericson, K.; Wilkerson, S.; Tural, A.; et al. First Case of 2019 Novel Coronavirus in the United States. N. Engl. J. Med. 2020, 382, 929–936.
  12. Bastola, A.; Sah, R.; Rodriguez-Morales, A.J.; Lal, B.K.; Jha, R.; Ojha, H.C.; Shrestha, B.; Chu, D.K.W.; Poon, L.L.M.; Costello, A.; et al. The first 2019 novel coronavirus case in Nepal. Lancet Infect. Dis. 2020, 20, 279–280.
  13. Rothe, C.; Schunk, M.; Sothmann, P.; Bretzel, G.; Froeschl, G.; Wallrauch, C.; Zimmer, T.; Thiel, V.; Janke, C.; Guggemos, W.; et al. Transmission of 2019-nCoV Infection from an Asymptomatic Contact in Germany. N. Engl. J. Med. 2020, 382, 970–971.
  14. Phan, L.T.; Nguyen, T.V.; Luong, Q.C.; Nguyen, T.V.; Nguyen, H.T.; Le, H.Q.; Nguyen, T.T.; Cao, T.M.; Pham, Q.D. Importation and Human-to-Human Transmission of a Novel Coronavirus in Vietnam. N. Engl. J. Med. 2020, 382, 872–874. [Google Scholar] [CrossRef] [PubMed]
  15. Coronavirus (COVID-19) Map. Available online: https://www.google.com/COVID-19-map/ (accessed on 31 March 2020).
  16. Rong, D.; Xie, L.; Ying, Y. Computer vision detection of foreign objects in walnuts using deep learning. Comput. Electron. Agric. 2019, 162, 1001–1010. [Google Scholar] [CrossRef]
  17. Eraslan, G.; Avsec, Ž.; Gagneur, J.; Theis, F.J. Deep learning: new computational modelling techniques for genomics. Nat. Rev. Genet. 2019, 20, 389–403. [Google Scholar] [CrossRef]
  18. Riordon, J.; Sovilj, D.; Sanner, S.; Sinton, D.; Young, E.W.K. Deep Learning with Microfluidics for Biotechnology. Trends Biotechnol. 2019, 37, 310–324. [Google Scholar] [CrossRef]
  19. Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. für Med. Phys. 2019, 29, 102–127. [Google Scholar] [CrossRef]
  20. Maier, A.; Syben, C.; Lasser, T.; Riess, C. A gentle introduction to deep learning in medical image processing. Z. für Med. Phys. 2019, 29, 86–101. [Google Scholar] [CrossRef]
  21. Shrestha, A.; Mahmood, A. Review of Deep Learning Algorithms and Architectures. IEEE Access 2019, 7, 53040–53065. [Google Scholar] [CrossRef]
  22. Pouyanfar, S.; Sadiq, S.; Yan, Y.; Tian, H.; Tao, Y.; Reyes, M.P.; Shyu, M.-L.; Chen, S.-C.; Iyengar, S.S. A Survey on Deep Learning: Algorithms, Techniques, and Applications. ACM Comput. Surv. 2018, 51, 1–36. [Google Scholar] [CrossRef]
  23. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems—Volume 2, Montreal, QC, Canada, 8–13 December 2014; MIT Press: Cambridge, MA, USA, 2014; pp. 2672–2680. [Google Scholar]
  24. Cao, Y.; Jia, L.; Chen, Y.; Lin, N.; Yang, C.; Zhang, B.; Liu, Z.; Li, X.; Dai, H. Recent Advances of Generative Adversarial Networks in Computer Vision. IEEE Access 2019, 7, 14985–15006. [Google Scholar] [CrossRef]
  25. Gonog, L.; Zhou, Y. A Review: Generative Adversarial Networks. In Proceedings of the 2019 14th IEEE Conference on Industrial Electronics and Applications (ICIEA), Xi’an, China, 19–21 June 2019; pp. 505–510. [Google Scholar]
  26. Lee, M.; Seok, J. Controllable Generative Adversarial Network. IEEE Access 2019, 7, 28158–28169. [Google Scholar] [CrossRef]
  27. Caramihale, T.; Popescu, D.; Ichim, L. Emotion Classification Using a Tensorflow Generative Adversarial Network Implementation. Symmetry 2018, 10, 414. [Google Scholar] [CrossRef]
  28. Ciregan, D.; Meier, U.; Schmidhuber, J. Multi-column deep neural networks for image classification. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3642–3649. [Google Scholar]
  29. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 1097–1105. [Google Scholar] [CrossRef]
  30. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  31. Yin, F.; Wang, Q.; Zhang, X.; Liu, C. ICDAR 2013 Chinese Handwriting Recognition Competition. In Proceedings of the 2013 12th International Conference on Document Analysis and Recognition, Washington, DC, USA, 25–28 August 2013; pp. 1464–1470. [Google Scholar]
  32. El-Sawy, A.; EL-Bakry, H.; Loey, M. CNN for Handwritten Arabic Digits Recognition Based on LeNet-5 BT. In Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2016, Cairo, Egypt, 24–26 October 2016; Hassanien, A.E., Shaalan, K., Gaber, T., Azar, A.T., Tolba, M.F., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 566–575. [Google Scholar]
  33. El-Sawy, A.; Loey, M.; EL-Bakry, H. Arabic Handwritten Characters Recognition Using Convolutional Neural Network. WSEAS Trans. Comput. Res. 2017, 5, 11–19. [Google Scholar]
  34. LeCun, Y.; Huang, F.J.; Bottou, L. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2004, Washington, DC, USA, 27 June–2 July 2004; Volume 2, p. II-104. [Google Scholar]
  35. Stallkamp, J.; Schlipsing, M.; Salmen, J.; Igel, C. The German Traffic Sign Recognition Benchmark: A multi-class classification competition. In Proceedings of the The 2011 International Joint Conference on Neural Networks, San Jose, CA, USA, 31 July–5 August 2011; pp. 1453–1460. [Google Scholar]
  36. Deng, J.; Dong, W.; Socher, R.; Li, L.; Kai, L.; Li, F.-F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  37. Liu, S.; Deng, W. Very deep convolutional neural network based image classification using small training sample size. In Proceedings of the 2015 3rd IAPR Asian Conference on Pattern Recognition (ACPR), Kuala Lumpur, Malaysia, 3–6 November 2015; pp. 730–734. [Google Scholar]
  38. Szegedy, C.; Wei, L.; Yangqing, J.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar]
  39. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  40. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807. [Google Scholar]
  41. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 2818–2826. [Google Scholar]
  42. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar]
  43. Stephen, O.; Sain, M.; Maduh, U.J.; Jeong, D.-U. An Efficient Deep Learning Approach to Pneumonia Classification in Healthcare. J. Healthc. Eng. 2019, 2019, 4180949. [Google Scholar] [CrossRef] [PubMed]
  44. Kermany, D.S.; Goldbaum, M.; Cai, W.; Valentim, C.C.S.; Liang, H.; Baxter, S.L.; McKeown, A.; Yang, G.; Wu, X.; Yan, F.; et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. Cell 2018, 172, 1122–1131. [Google Scholar] [CrossRef]
  45. Ayan, E.; Ünver, H.M. Diagnosis of Pneumonia from Chest X-ray Images Using Deep Learning. In Proceedings of the 2019 Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT), Istanbul, Turkey, 24–26 April 2019; pp. 1–5. [Google Scholar]
  46. Varshni, D.; Thakral, K.; Agarwal, L.; Nijhawan, R.; Mittal, A. Pneumonia Detection Using CNN based Feature Extraction. In Proceedings of the 2019 IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT), Coimbatore, India, 20–22 February 2019; pp. 1–7. [Google Scholar]
  47. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. ChestX-ray8: Hospital-Scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3462–3471. [Google Scholar]
  48. Chouhan, V.; Singh, S.K.; Khamparia, A.; Gupta, D.; Tiwari, P.; Moreira, C.; Damaševičius, R.; de Albuquerque, V.H.C. A Novel Transfer Learning Based Approach for Pneumonia Detection in Chest X-ray Images. Appl. Sci. 2020, 10, 559. [Google Scholar] [CrossRef]
  49. Islam, S.R.; Maity, S.P.; Ray, A.K.; Mandal, M. Automatic Detection of Pneumonia on Compressed Sensing Images using Deep Learning. In Proceedings of the 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE), Edmonton, AB, Canada, 5–8 May 2019; pp. 1–4. [Google Scholar]
  50. Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 Image Data Collection. arXiv 2020, arXiv:2003.11597. [Google Scholar]
  51. Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 Image Data Collection. Available online: https://github.com/ieee8023/covid-chestxray-dataset (accessed on 31 March 2020).
  52. Dataset. Available online: https://drive.google.com/uc?id=1coM7x3378f-Ou2l6Pg2wldaOI7Dntu1a (accessed on 31 March 2020).
  53. Khalifa, N.; Taha, M.; Hassanien, A.; Mohamed, H. Deep Iris: Deep Learning for Gender Classification Through Iris Patterns. Acta Inform. Medica 2019, 27, 96. [Google Scholar] [CrossRef]
  54. Khalifa, N.E.M.; Taha, M.H.N.; Hassanien, A.E.; Hemedan, A.A. Deep bacteria: robust deep learning data augmentation design for limited bacterial colony dataset. Int. J. Reason.-Based Intell. Syst. 2019, 11, 256–264. [Google Scholar] [CrossRef]
  55. Lemley, J.; Bazrafkan, S.; Corcoran, P. Smart Augmentation Learning an Optimal Data Augmentation Strategy. IEEE Access 2017, 5, 5858–5869. [Google Scholar] [CrossRef]
  56. Khalifa, N.E.M.; Taha, M.H.N.; Ezzat Ali, D.; Slowik, A.; Hassanien, A.E. Artificial Intelligence Technique for Gene Expression by Tumor RNA-Seq Data: A Novel Optimized Deep Learning Approach. IEEE Access 2020, 8, 22874–22883. [Google Scholar] [CrossRef]
  57. Khalifa, N.; Loey, M.; Taha, M.; Mohamed, H. Deep Transfer Learning Models for Medical Diabetic Retinopathy Detection. Acta Inform. Medica 2019, 27, 327. [Google Scholar] [CrossRef] [PubMed]
  58. Khalifa, N.E.; Hamed Taha, M.; Hassanien, A.E.; Selim, I. Deep galaxy V2: Robust deep convolutional neural networks for galaxy morphology classifications. In Proceedings of the 2018 International Conference on Computing Sciences and Engineering, ICCSE 2018—Proceedings, Kuwait City, Kuwait, 11–13 March 2018; pp. 1–6. [Google Scholar]
  59. Khalifa, N.E.M.; Taha, M.H.N.; Hassanien, A.E. Aquarium Family Fish Species Identification System Using Deep Neural Networks. In International Conference on Advanced Intelligent Systems and Informatics; Springer: Cham, Switzerland, 2018; pp. 347–356. [Google Scholar]
  60. Aswathy, P.; Siddhartha; Mishra, D. Deep GoogLeNet Features for Visual Object Tracking. In Proceedings of the 2018 IEEE 13th International Conference on Industrial and Information Systems (ICIIS), Rupnagar, India, 1–2 December 2018; pp. 60–66. [Google Scholar]
  61. Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A.A. Inception-v4, inception-ResNet and the impact of residual connections on learning. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, AAAI 2017, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
  62. Bottou, L. Stochastic Gradient Descent Tricks. In Neural Networks: Tricks of the Trade: Second Edition; Montavon, G., Orr, G.B., Müller, K.-R., Eds.; Springer: Berlin/Heidelberg, Germany, 2012; pp. 421–436. ISBN 978-3-642-35289-8. [Google Scholar]
  63. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  64. Caruana, R.; Lawrence, S.; Giles, L. Overfitting in Neural Nets: Backpropagation, Conjugate Gradient, and Early Stopping. In Proceedings of the 13th International Conference on Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2000; pp. 381–387. [Google Scholar]
  65. Goutte, C.; Gaussier, E. A Probabilistic Interpretation of Precision, Recall and F-Score, with Implication for Evaluation. In European Conference on Information Retrieval; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
Figure 1. Coronavirus transmission from animals to humans.
Figure 2. Generative Adversarial Network model.
Figure 3. Samples of the used images in this research.
Figure 4. The proposed GAN/deep transfer learning model.
Figure 5. The structure and the sequence of layers for the proposed GAN network.
Figure 6. Samples of images generated using the proposed GAN structure.
Figure 7. Confusion matrices of Alexnet for 4 classes (a) validation accuracy, and (b) testing accuracy.
Figure 8. Confusion matrices of Googlenet for 4 classes (a) validation accuracy, and (b) testing accuracy.
Figure 9. Confusion matrices of Resnet18 for 4 classes (a) validation accuracy, and (b) testing accuracy.
Figure 10. Confusion matrices of Alexnet for 3 classes (a) validation accuracy, and (b) testing accuracy.
Figure 11. Confusion matrices of Googlenet for 3 classes (a) validation accuracy, and (b) testing accuracy.
Figure 12. Confusion matrices of Resnet18 for 3 classes (a) validation accuracy, and (b) testing accuracy.
Figure 13. Confusion matrices of the validation accuracy for (a) Alexnet, (b) Googlenet, and (c) Resnet18.
Figure 14. Confusion matrix for testing accuracy for Alexnet, Googlenet, and Resnet18.
Table 1. SARS-CoV-2 statistics in some countries.
Location          Confirmed    Recovered    Deaths
United States     164,345      5,945        3,171
Italy             101,739      14,620       11,591
Spain             94,417       19,259       8,269
China             81,518       76,052       3,305
Germany           67,051       7,635        682
Iran              44,606       14,656       2,898
France            43,973       7,202        3,018
United Kingdom    22,141       1,351        408
Table 2. Major contributions in the history of the neural network to deep learning [21,22].
Milestone/Contribution          Year
McCulloch-Pitts Neuron          1943
Perceptron                      1958
Backpropagation                 1974
Neocognitron                    1980
Boltzmann Machine               1985
Restricted Boltzmann Machine    1986
Recurrent Neural Networks       1986
Autoencoders                    1987
LeNet                           1990
LSTM                            1997
Deep Belief Networks            2006
Deep Boltzmann Machine          2009
Table 3. Number of images for each class in the COVID-19 dataset.
Dataset/Class    Covid    Normal    Pneumonia_bac    Pneumonia_vir    Total
Train            60       70        70               70               270
Test             9        9         9                9                36
Total            69       79        79               79               306
Table 4. Validation and testing accuracy for 4 classes according to 3 deep transfer learning models without using GAN.
Model                  Alexnet    Googlenet    Resnet18
Validation Accuracy    73.1%      76.9%        67.3%
Testing Accuracy       52.0%      52.8%        50.0%
Table 5. Validation and testing accuracy for 4 classes according to 3 deep transfer learning models.
Model                  Alexnet    Googlenet    Resnet18
Validation Accuracy    98.5%      98.9%        99.6%
Testing Accuracy       66.7%      80.6%        66.7%
Table 6. Validation and testing accuracy for 3 classes according to 3 deep transfer learning models.
Model                  Alexnet    Googlenet    Resnet18
Validation Accuracy    97.2%      98.3%        99.6%
Testing Accuracy       85.2%      81.5%        81.5%
Table 7. Validation and testing accuracy for 2 classes according to 3 deep transfer learning models.
Model                  Alexnet    Googlenet    Resnet18
Validation Accuracy    99.6%      99.9%        99.8%
Testing Accuracy       100%       100%         100%
Table 8. Testing accuracy for every class for the different 3 scenarios.
# of Classes    Class Name       Alexnet    Googlenet    Resnet18
4 classes       Covid            100%       100%         100%
                Normal           64.3%      100%         100%
                Pneumonia_bac    44.4%      70%          50%
                Pneumonia_vir    50%        66.7%        40%
3 classes       Covid            100%       81.8%        100%
                Normal           77.7%      75.0%        100%
                Pneumonia_bac    77.8%      87.5%        64.3%
2 classes       Covid            100%       100%         100%
                Normal           100%       100%         100%
Table 9. Performance measurements for different scenarios.
# of Classes    Measure             Alexnet    Googlenet    Resnet18
4 classes       Precision           64.68%     84.17%       72.50%
                Recall              66.67%     80.56%       66.67%
                F1 Score            65.66%     82.32%       69.46%
                Testing Accuracy    66.67%     80.56%       66.67%
3 classes       Precision           85.19%     81.44%       88.10%
                Recall              85.19%     81.48%       81.48%
                F1 Score            85.19%     81.46%       84.66%
                Testing Accuracy    85.19%     81.48%       81.48%
2 classes       Precision           100%       100%         100%
                Recall              100%       100%         100%
                F1 Score            100%       100%         100%
                Testing Accuracy    100%       100%         100%
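The precision, recall, and F1 score measurements in Table 9 follow the standard definitions [65]. A minimal sketch of computing them from a confusion matrix is given below; the 2×2 matrix in the test is illustrative and not taken from the paper, while the F1 example uses the paper's reported Googlenet precision and recall for the 4-class scenario.

```python
def per_class_metrics(cm):
    """Per-class precision and recall from a square confusion matrix
    (rows = actual class, columns = predicted class)."""
    n = len(cm)
    col_sums = [sum(cm[r][c] for r in range(n)) for c in range(n)]
    row_sums = [sum(row) for row in cm]
    # Precision: fraction of predictions for a class that are correct.
    precision = [cm[c][c] / col_sums[c] if col_sums[c] else 0.0 for c in range(n)]
    # Recall: fraction of actual members of a class that are found.
    recall = [cm[c][c] / row_sums[c] if row_sums[c] else 0.0 for c in range(n)]
    return precision, recall

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Googlenet, 4-class scenario: precision 84.17% and recall 80.56%
# combine into an F1 score of roughly 82.3%, consistent with Table 9.
f1 = f1_score(84.17, 80.56)
```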