Article

New Optimized Deep Learning Application for COVID-19 Detection in Chest X-ray Images

1
Department of Computer Engineering, Istanbul Gedik University, Istanbul 34876, Turkey
2
Department of Computer Engineering, Ankara Yildirim Beyazit University, Ankara 06010, Turkey
3
Department of Electrical and Electronic Engineering, Tarsus University, Mersin 33400, Turkey
4
Department of Computing Sciences, Texas A&M University, Corpus Christi, TX 78412, USA
*
Author to whom correspondence should be addressed.
Symmetry 2022, 14(5), 1003; https://doi.org/10.3390/sym14051003
Submission received: 10 April 2022 / Revised: 6 May 2022 / Accepted: 12 May 2022 / Published: 14 May 2022

Abstract

Due to the false negative results of the real-time Reverse Transcriptase-Polymerase Chain Reaction (RT-PCR) test, complementary practices such as computed tomography (CT) and X-ray imaging in combination with RT-PCR are discussed to achieve a more accurate diagnosis of COVID-19 in clinical practice. Since radiology involves visual understanding as well as decision making under limited conditions such as uncertainty, urgency, patient burden, and hospital facilities, mistakes are inevitable. Therefore, there is an immediate need for further investigation and for new, accurate detection and identification methods that automatically provide a quantitative evaluation of COVID-19. In this paper, we propose a new computer-aided diagnosis application for COVID-19 detection using deep learning techniques. A new technique, which receives symmetric X-ray data as input, is presented by combining Convolutional Neural Networks (CNN) with the Ant Lion Optimization Algorithm (ALO) and the Multiclass Naïve Bayes Classifier (NB). Moreover, several other classifiers, such as Softmax, Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Decision Tree (DT), are combined with CNN. The promising results of these classifiers are evaluated and presented in terms of accuracy, precision, and F1-score. The NB classifier with the Ant Lion Optimization Algorithm and CNN produced the best results, with 98.31% accuracy, 100% precision, and a 98.25% F1-score, with the lowest execution time.

1. Introduction

Novel Coronavirus disease (COVID-19) is a contagious disease caused by a newly identified coronavirus, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2); it was first evaluated as an outbreak but was declared a pandemic by the World Health Organization (WHO) on 11 March 2020 [1]. Most people infected with COVID-19 experience mild to moderate respiratory illness and recover without special treatment. In contrast, in the elderly and in those with underlying medical problems such as chronic respiratory disease, cancer, cardiovascular disease, and diabetes, the virus is more likely to cause serious, life-threatening effects.
For appropriate quarantine and treatment of the disease, it is a priority to screen large numbers of suspected cases to control the spread of COVID-19. Although the clinical symptoms of SARS, MERS, and COVID-19 seem similar, differential diagnoses have been recorded to date [2,3]. The diagnosis of COVID-19 relies on criteria such as tracking clinical symptoms, epidemiological history, and positive X-ray or Computed Tomography (CT) chest images, as well as positive pathogenic testing. The clinical characteristics of COVID-19 include respiratory symptoms, fever, cough, dyspnea, and pneumonia [4,5,6], which are nonspecific and may be confused with other diseases. The definitive test for COVID-19 is the Real-Time Reverse Transcription Polymerase Chain Reaction (RT-PCR) test, which is believed to be highly specific; however, false negative rates as high as 60–71% have been reported for detecting COVID-19, which is a real clinical problem [7,8,9]. Due to the false negative results of RT-PCR, complementary practices such as computed tomography (CT) and X-ray imaging in combination with RT-PCR are considered to achieve a more accurate diagnosis in clinical practice [10]. Thus, laboratory and imaging features in combination with clinical tests are required for a complete clinical characterization of this disease. Clinical findings, laboratory examination, and radiological imaging features of COVID-19-positive patients are also of great importance in improving reliable evaluation and diagnosis. In the Diagnosis and Treatment Protocol for Novel Coronavirus Pneumonia (Trial Version 6) published by the National Health Commission of the People’s Republic of China, definitive diagnosis based on chest radiological features has been reported to play an important role in the treatment of patients with suspected COVID-19 infection [11,12,13,14,15,16,17,18,19,20,21].
Makris et al. [22] conducted a study on nine common Convolutional Neural Networks (CNNs) for the classification of X-ray images recorded from patients with COVID-19, patients with pneumonia, and healthy individuals. The results emphasized that CNNs have the power to detect respiratory diseases with high accuracy (specifically, VGG16 and VGG19 achieved 95% accuracy), although they need a large number of sample images [22]. In another study, the authors proposed a deep neural network-based method, nCOVnet, as a fast-screening alternative to detect COVID-19 by analyzing patients’ X-rays [23]. Zebin et al. [24] experimented with convolutional network architectures using VGG-16, ResNet50, and EfficientNetB0 backbones pre-trained on the ImageNet dataset for detecting COVID-19 in chest X-ray images. These three backbones achieved accuracies of 90%, 94.3%, and 96.8%, respectively [24]. In another machine learning-based study, Fractional Multichannel Exponent Moments (FrMEMs) are used as a feature extractor. The process is parallelized with a multi-core computational framework, and a Modified Manta-Ray Foraging Optimization based on differential evolution is used to optimize the feature selection process. The proposed method is evaluated on two COVID-19 X-ray datasets and achieved accuracy rates of 96.09% and 98.09% for the first and second datasets, respectively [25].
In the work of Azemin et al. [26], the ResNet-101 CNN architecture, a prominent deep learning technique, is trained with millions of images to detect and classify abnormalities found in X-ray images. The outcomes of the presented model in terms of AUC, sensitivity, specificity, and accuracy were 82%, 77.3%, 71.8%, and 71.9%, respectively [26]. Rajaraman et al. proposed iteratively pruned deep learning model ensembles to detect COVID-19 on chest X-rays [27]. In the work of Sahlol et al. [28], an enhanced hybrid recognition approach is proposed that combines a CNN with the swarm-based Marine Predators Algorithm to select the most significant features and classify them. An automated Siamese neural network-based pulmonic disease score is introduced for COVID-19 prediction in a clinical study [29].
In the work of Sitaula and Hossain [30], a new deep learning framework based on an attention module with VGG-16 is proposed. The attention module extracts the spatial relationships between the regions of interest (ROIs) in CXR images, and four layers of VGG-16 are used in addition to the attention module. Sitaula and Aryal [31] propose a new Bag of Deep Visual Words (BoVW) technique over deep features. In this work, the feature map normalization step is removed, and a deep feature normalization step is instead applied to the raw feature maps; this step proves to be very significant for distinguishing between COVID-19 and pneumonia. Furthermore, in the work of Sitaula et al. [32], the workflow applies BoVW with VGG-16, and the extracted features are wired to an SVM, which provided suitable classification accuracy.
In the work of Shorfuzzaman et al. [33], a new CNN-based deep learning fusion method applying the transfer learning concept is presented. By providing a fusion model that can also efficiently identify the areas on X-ray images related to the disease, the study is expected to assist clinicians in automating the process of COVID-19 detection. The proposed method presents 95.49% accuracy with high sensitivity and specificity. In the work of Hasan et al. [34], machine learning tools are applied to perform one-hot encoding. Furthermore, several deep learning components such as CNN, VGG16, Average Pooling 2D, dropout, flatten, dense, and input layers are used to build a detection model. The proposed model presented 91.69% COVID-19 detection accuracy. Moreover, several other studies [35,36,37] utilize various machine learning, deep learning, and image processing techniques to detect COVID-19, which demonstrates the trend and usability of such approaches in assisting the medical community.
Considering the literature, the purpose of our study is to evaluate the diagnostic performance of a CNN-based ALO system including various classification methods (Softmax, Support Vector Machines (SVM), K-Nearest Neighbors (KNN), Naïve Bayes (NB), and Decision Tree (DT)) using X-ray images (the early-stage radiological imaging modality, used before CT imaging is needed) to detect COVID-19. Our motivation for comparing various classification methods is to find the classification algorithm that provides the best results and to combine its strength in classification with CNN features in the deep learning steps [38]. Our results show that this approach can be used to apply deep learning techniques to extract high-level features from X-ray images for COVID-19 diagnosis. To the best of our knowledge, this is the first study on the optimized classification of COVID-19 and healthy instances from chest X-ray images, and it will inform new studies on AI-based diagnosis systems. In this study, the Naïve Bayes classifier and the Ant Lion Optimization Algorithm (ALO) are combined with a CNN, an approach not utilized before, which presented remarkable results. This work brings a new optimized deep learning application for COVID-19 recognition, which is validated on two datasets and compared with related studies in the literature, showing remarkable results.

2. Materials and Methods

2.1. Convolutional Neural Network (CNN)

A CNN consists of convolutional layers, pooling layers, and fully connected layers between the input and output layers. The CNN design embodies three ideas: shared weights, local receptive fields, and spatial or temporal subsampling. In the first stage, using local receptive fields, the neurons extract initial visual features such as points and edges. The extracted features are connected with intermediate deeper layers to extract higher-level features such as corners and circles. Finally, the fully connected layer tries to predict the labels of the data using the high-level features extracted in the previous layers. The classifier is located between the last layer and the output result.

2.1.1. Convolutional Layers

Convolutional layers are used to extract features by convolving several filter masks with the input feature map, i.e., the output image of the previous layer [39]. The feature map consists of two-dimensional weights. $x_m^{L-1}$ represents the $m$th feature map of layer $L-1$, $W_{mn}^{L}$ represents the weight filter connecting it to the $n$th feature map of layer $L$, and $b_n^{L}$ is the bias added to the convolved input features to produce the output feature. The output feature map in layer $L$ is then formulated in Equation (1):

$$x_n^{L} = f\Big(\sum_{m} x_m^{L-1} \ast W_{mn}^{L}\Big) + b_n^{L}$$
where ∗ denotes the convolutional procedure and f represents the activation function which can be Hyperbolic Tangent (tanh), Sigmoid or Rectified Linear Unit (ReLU). The activation functions can be replaced according to the data type.
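In the spirit of Equation (1), the following sketch (with hypothetical example data, not from the paper) applies a single 2 × 2 filter, a bias, and a ReLU activation to a small image; deep learning libraries implement the same operation as cross-correlation over many filters and channels.

```python
import numpy as np

def conv2d_valid(x, w, b):
    """'Valid' 2-D cross-correlation of a single-channel input x with
    kernel w, plus a bias, followed by a ReLU activation."""
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # element-wise product of the local receptive field and the filter
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return np.maximum(out, 0.0)  # ReLU activation

# A vertical-edge filter applied to a dark-to-bright step image
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)
fmap = conv2d_valid(img, kernel, b=0.0)  # responds only at the edge
```

The feature map is nonzero exactly where the receptive field straddles the dark-to-bright edge, illustrating how local receptive fields extract edge features.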

2.1.2. Pooling Layer

The pooling operation is deployed to downsize the feature maps and make the output invariant to shifts and distortions. The pooling layer reduces the size of the feature map, which reduces the computational time of the whole network and is also important for extracting only the predominant features. Downsizing the feature map also helps prevent overfitting and lessens the number of parameters that must be trained. Mathematically, the pooling layer can be represented as shown in Equation (2):
$$x_n^{L} = \mathrm{down}\big(x_m^{L-1}\big)$$
where $\mathrm{down}(\cdot)$ is a type of pooling operation. In this paper, the pooling stage uses max pooling, which selects the largest value from each window of the map, whereas average pooling selects the average of the values in the window.
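A minimal sketch of the downsampling in Equation (2) with non-overlapping 2 × 2 max pooling (the feature-map values are illustrative only):

```python
import numpy as np

def max_pool(x, size=2):
    """Non-overlapping max pooling: keep the largest value of each
    size x size window, shrinking each spatial dimension by `size`."""
    h, w = x.shape
    out = x[:h - h % size, :w - w % size]          # crop to a multiple of size
    out = out.reshape(h // size, size, w // size, size)
    return out.max(axis=(1, 3))                    # max over each window

fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 1],
                 [0, 0, 5, 6],
                 [1, 2, 7, 8]], dtype=float)
pooled = max_pool(fmap)   # the 4x4 map is reduced to 2x2
```

Replacing `out.max(...)` with `out.mean(...)` would give the average-pooling variant mentioned in the text.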

2.1.3. Fully Connected (FC) Layer

In the FC layer, the neurons are fully connected to the activations in the previous layer. The output of the layer can be calculated as a matrix multiplication followed by a bias offset. The output of the FC layer is then a vector that represents the high-level features of the input data. The number of neurons in the last FC layer is equal to the number of classes (labels) in the classification problem. The mathematical model of this step is presented in Equation (3):
$$p(y = 1 \mid x; w) = \frac{1}{1 + \exp(-w^{T} x)}$$
where $y$ represents the label of the data, $x \in \mathbb{R}^{(K+1) \times 1}$ is the feature vector in $K$ dimensions (plus a bias term), and $w \in \mathbb{R}^{(K+1) \times 1}$ is the weight vector.
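Equation (3) (the standard logistic sigmoid) can be evaluated directly; the feature and weight values below are hypothetical:

```python
import numpy as np

def fc_probability(x, w):
    """Binary class probability: sigmoid of the inner product of the
    (bias-augmented) feature vector x and the weight vector w."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

x = np.array([1.0, 0.5, -0.2])   # feature vector with a leading bias term of 1
w = np.array([0.0, 2.0, 1.0])    # weight vector of the same K+1 dimension
p = fc_probability(x, w)         # probability that y = 1
```

Here $w^T x = 0.8$, so the predicted probability is the sigmoid of 0.8, about 0.69.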

2.2. Classifiers

2.2.1. Softmax Classifier

The Softmax function is a core element of deep learning classification tasks; it is an activation function that converts numbers (logits) into probabilities that sum to one. The Softmax function outputs a vector that represents the probability distribution over a list of potential outcomes. It is suited to multi-class classification rather than binary classification. The mathematical model of Softmax is given in Equation (4):
$$S(y_i) = \frac{e^{y_i}}{\sum_j e^{y_j}}$$
where, for example, the logits $y = [2.0, 1.0, 0.1]$ are converted into the probabilities $[0.7, 0.2, 0.1]$ (rounded), which add up to 1.0.
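The conversion in Equation (4) can be reproduced in a few lines; note that the exact probabilities for the logits [2.0, 1.0, 0.1] are approximately [0.659, 0.242, 0.099], which the text rounds to [0.7, 0.2, 0.1]:

```python
import numpy as np

def softmax(logits):
    """Exponentiate and normalize so the outputs sum to 1. Subtracting
    the max first is the usual numerical-stability trick."""
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))
```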

2.2.2. Support Vector Machines (SVM)

The SVM technique finds a hyperplane in an N-dimensional space, where N is the number of features, that clearly separates the data points. SVM finds the best hyperplane, the one with the largest margin between the two classes. The support vectors are the data points closest to the separating hyperplane; these points lie on the boundary, as represented in Figure 1, with + indicating data points of class 1 and − indicating data points of class −1.

2.2.3. K-Nearest Neighbors (KNN)

KNN is a technique that utilizes a distance function to find the nearest neighbors and the class most common among them; the case is then assigned to this common class. The distance function can be Euclidean, Manhattan, or Minkowski, as seen in Equations (5)–(7):
$$\text{Euclidean:} \quad \sqrt{\sum_{i=1}^{k} (x_i - y_i)^2}$$
$$\text{Manhattan:} \quad \sum_{i=1}^{k} |x_i - y_i|$$
$$\text{Minkowski:} \quad \Big(\sum_{i=1}^{k} |x_i - y_i|^q\Big)^{1/q}$$
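Equations (5)–(7) reduce to one function, since Euclidean and Manhattan are the q = 2 and q = 1 cases of Minkowski (the example points are hypothetical):

```python
import numpy as np

def minkowski(x, y, q):
    """Minkowski distance of order q (Equation (7)); q = 1 gives
    Manhattan (Equation (6)) and q = 2 gives Euclidean (Equation (5))."""
    return np.sum(np.abs(np.asarray(x) - np.asarray(y)) ** q) ** (1.0 / q)

a, b = [0.0, 0.0], [3.0, 4.0]
d1 = minkowski(a, b, 1)   # Manhattan distance
d2 = minkowski(a, b, 2)   # Euclidean distance
```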

2.2.4. Naïve Bayes (NB)

NB is a classification algorithm for binary and multi-class classification problems based on Bayes’ Theorem. Rather than calculating the joint probability of all attribute values, P(d1, d2, d3|h), the attributes are treated as conditionally independent, so the calculation factorizes as P(d1|h) ∗ P(d2|h) and so on, as seen in Equation (8):
$$P(A \mid B) = \frac{P(B \mid A) \, P(A)}{P(B)}$$
where $P(A \mid B)$ is the posterior probability (A is the class, Normal/Abnormal, and B is the predictor), $P(B \mid A)$ is the likelihood of the predictor given the class, $P(A)$ is the class prior probability, and $P(B)$ is the predictor prior probability.
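Equation (8) with made-up numbers (the prior and likelihood values below are purely illustrative, not from the paper's data):

```python
def posterior(prior_a, likelihood_b_given_a, prior_b):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood_b_given_a * prior_a / prior_b

# Suppose 30% of images belong to class A (P(A)), a predictor B appears
# in 90% of class-A images (P(B|A)) and in 40% of all images (P(B)):
p = posterior(prior_a=0.3, likelihood_b_given_a=0.9, prior_b=0.4)
# posterior probability of class A given predictor B
```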

2.2.5. Decision Trees (DT)

The DT approach sorts instances hierarchically from the root down to leaf nodes, where each internal node contains a test on an attribute of the instance and each branch descending from a node corresponds to one possible value of that attribute. The tree, as seen in Figure 2, is built using Information Theory.

2.3. Ant Lion Optimization (ALO) Algorithm

The ALO algorithm simulates the interaction between antlions and ants in the trap. The interaction is modelled as the movement of ants over the search space, while antlions are allowed to hunt them and become fitter using traps. During the search for food, ants move randomly in nature. The random walk of the ants is represented mathematically in Equation (9):
$$A = \big[\,0,\ \mathrm{cumsum}(2r(t_1) - 1),\ \mathrm{cumsum}(2r(t_2) - 1),\ \ldots,\ \mathrm{cumsum}(2r(t_n) - 1)\,\big]$$
where $\mathrm{cumsum}$ represents the cumulative sum, $n$ indicates the maximum number of iterations, $t$ represents the random walk step, and $r(t)$ is the stochastic function [41], formulated in Equation (10):
$$r(t) = \begin{cases} 1 & \text{if } \mathrm{rand} > 0.5 \\ 0 & \text{if } \mathrm{rand} \le 0.5 \end{cases}$$
where $t$ is the random walk step (iteration) and $\mathrm{rand}$ is a random number generated in the range [0, 1] with uniform distribution. During optimization, the positions of the ants are saved and utilized as shown in the matrix in Equation (11).
$$Ant_P = \begin{bmatrix} A_{1,1} & A_{1,2} & \cdots & A_{1,d} \\ A_{2,1} & A_{2,2} & \cdots & A_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n,1} & A_{n,2} & \cdots & A_{n,d} \end{bmatrix}$$
Each ant’s position is saved in the matrix $Ant_P$, where $A_{i,j}$ refers to the value of the $j$-th variable of the $i$-th ant, $n$ is the number of ants, and $d$ is the number of variables. The fitness function matrix for the ants is displayed in Equation (12):
$$Ant_{fitness} = \begin{bmatrix} f([A_{1,1}, A_{1,2}, \ldots, A_{1,d}]) \\ f([A_{2,1}, A_{2,2}, \ldots, A_{2,d}]) \\ \vdots \\ f([A_{n,1}, A_{n,2}, \ldots, A_{n,d}]) \end{bmatrix}$$
where each ant’s fitness is saved in the matrix $Ant_{fitness}$, $A_{i,j}$ indicates the value of the $j$-th dimension of the $i$-th ant, $n$ is the number of ants, and $f$ is the objective function.
In addition to the ants in the search space, the antlions are assumed to be hiding somewhere in it [42]. The following matrices in Equations (13) and (14) are employed to save their positions and fitness values:
$$Antlion_P = \begin{bmatrix} AL_{1,1} & AL_{1,2} & \cdots & AL_{1,d} \\ AL_{2,1} & AL_{2,2} & \cdots & AL_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ AL_{n,1} & AL_{n,2} & \cdots & AL_{n,d} \end{bmatrix}$$
Each antlion’s position is saved in the $Antlion_P$ matrix, where $AL_{i,j}$ indicates the value of the $j$-th dimension of the $i$-th antlion, $n$ is the number of antlions, and $d$ is the number of variables.
$$Antlion_{fitness} = \begin{bmatrix} f([AL_{1,1}, AL_{1,2}, \ldots, AL_{1,d}]) \\ f([AL_{2,1}, AL_{2,2}, \ldots, AL_{2,d}]) \\ \vdots \\ f([AL_{n,1}, AL_{n,2}, \ldots, AL_{n,d}]) \end{bmatrix}$$
All antlion fitness values are saved in the matrix $Antlion_{fitness}$, where $AL_{i,j}$ indicates the value of the $j$-th dimension of the $i$-th antlion, $n$ is the number of antlions, and $f$ is the objective function.
At each step of the optimization, the ants’ positions are updated using the random walk of Equation (9). This equation cannot be applied directly to update the ants’ positions because each variable of the search space has its own range. The min-max normalization in Equation (15) is applied to keep the random walks within the range of the search space [42].
$$X_i^t = \frac{(X_i^t - a_i)\,(d_i^t - c_i^t)}{(b_i - a_i)} + c_i^t$$
where $a_i$ is the minimum value of the random walk of the $i$-th variable, $b_i$ is the maximum value of the random walk of the $i$-th variable, $c_i^t$ is the minimum value of the $i$-th variable at the $t$-th iteration, and $d_i^t$ is the maximum value of the $i$-th variable at the $t$-th iteration.
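The random walk of Equations (9) and (10) and the min-max rescaling of Equation (15) can be sketched together; the seed and the bounds below are arbitrary illustrative choices:

```python
import numpy as np

def random_walk(n_iter, rng):
    """Equation (9): cumulative sum of +/-1 steps, with the step sign
    drawn from the stochastic function r(t) of Equation (10)."""
    r = (rng.random(n_iter) > 0.5).astype(float)    # r(t) in {0, 1}
    steps = 2.0 * r - 1.0                           # mapped to {-1, +1}
    return np.concatenate(([0.0], np.cumsum(steps)))

def normalize_walk(walk, c, d):
    """Equation (15): rescale a walk from its own range [a, b] into the
    current search-space bounds [c, d]."""
    a, b = walk.min(), walk.max()
    return (walk - a) * (d - c) / (b - a) + c

walk = random_walk(300, np.random.default_rng(seed=1))
scaled = normalize_walk(walk, c=-1.0, d=1.0)   # walk now spans [-1, 1]
```

In the full algorithm, the bounds c and d would come from the selected antlion's trap (Equations (16)–(19)) rather than being fixed.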
As presented above, the antlion traps affect the ants’ random walks. This hypothesis is represented mathematically in Equations (16) and (17):
$$c_i^t = antlion_j^t + c^t$$
$$d_i^t = antlion_j^t + d^t$$
where $c^t$ is the vector of minimum values of all variables at the $t$-th iteration, $d^t$ is the vector of maximum values of all variables at the $t$-th iteration, $c_i^t$ and $d_i^t$ are the minimum and maximum values for the $i$-th ant, and $antlion_j^t$ is the position of the selected $j$-th antlion at the $t$-th iteration.
A roulette wheel is applied to model the hunting capability of antlions. During the optimization, a roulette wheel operator is utilized by ALO algorithm for determining antlions based on their fitness. This technique promises higher opportunity to the fitter antlions for hunting the ants [43].
This behaviour is modelled mathematically by adaptively shrinking the radius of the ants’ random-walk hypersphere, as presented in Equations (18) and (19):
$$c^t = \frac{c^t}{I}$$
$$d^t = \frac{d^t}{I}$$
in which $I$ is a ratio, $c^t$ is the minimum of all variables at the $t$-th iteration, and $d^t$ is the vector of maximum values of all variables at the $t$-th iteration.
In the last stage, when the ant becomes fitter than the antlion, the prey is caught. The antlion then updates its position to the final position of the hunted ant to improve its chance of catching new prey. This action is represented mathematically in Equation (20).
$$Antlion_j^t = Ant_i^t \quad \text{if } f(Ant_i^t) > f(Antlion_j^t)$$
where $t$ is the current iteration, $Antlion_j^t$ is the position of the selected $j$-th antlion at iteration $t$, and $Ant_i^t$ is the position of the $i$-th ant at the $t$-th iteration.
Finally, one of the important properties of evolutionary algorithms is elitism, which allows them to keep the best solutions obtained at any stage of the optimization procedure. In ALO, the best (fittest) antlion is saved as the elite. The elitism update can be represented mathematically as in Equation (21):
$$Ant_i^t = \frac{R_A^t + R_E^t}{2}$$
where $R_A^t$ is the random walk around the antlion selected by the roulette wheel at the $t$-th iteration, $R_E^t$ is the random walk around the elite at the $t$-th iteration, and $Ant_i^t$ is the position of the $i$-th ant at the $t$-th iteration [44].
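The elitism step of Equation (21) averages two random walks per ant; a minimal sketch with hypothetical two-dimensional positions:

```python
import numpy as np

def ant_position(walk_roulette, walk_elite):
    """Equation (21): the ant's new position is the mean of a random walk
    around a roulette-selected antlion and a walk around the elite."""
    return (np.asarray(walk_roulette) + np.asarray(walk_elite)) / 2.0

# Final points of the two walks for one ant (illustrative values only)
pos = ant_position([0.2, 0.8], [0.4, 0.0])
```

Because every ant's update always includes a walk around the elite, the best solution found so far influences the whole population at every iteration.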

2.4. Proposed Method

In this study, the proposed method consists of three stages: data resizing and feature extraction using AlexNet, feature selection using ALO, and classification using the Naïve Bayes classifier.

2.4.1. Data Resizing and Feature Extraction

In the first stage, AlexNet, a pre-trained CNN, is used as a feature extractor. In the last few years, AlexNet has shown high performance [45], and many studies have reported very good results in image classification [46,47,48,49,50]. After AlexNet was introduced, the AI community working on image recognition-based classification focused much more on CNNs and improved AlexNet’s performance by altering parameters or the number of layers, making it deeper [46,48,49]. Compared with deeper models such as Inception-V4 and ResNet50, AlexNet can quickly initialize the network model while maintaining good generalization performance. Experiments also show certain disadvantages of VGGNet: it is slow to train, and the weights of the network architecture are quite large in terms of disk space and bandwidth. AlexNet is therefore better suited for most image classification problems [51], which is the reason AlexNet is utilized in this study.
Each X-ray image provided to the system is resized automatically to 227 × 227 × 3 to optimize the input and make it symmetrical. Symmetry is a vital concept for neural networks; although it is possible to pass the data as asymmetrical input, an adapter may be needed to convert it to symmetrical input for optimization. AlexNet is trained to classify images into 1000 classes (in its inner stages) and consists of several layers, such as convolutional, pooling, and fully connected layers, as shown in Figure 3. AlexNet consists of five convolutional layers followed by three fully connected layers, with the ReLU activation function applied in all layers of the network. The three fully connected layers consist of 4096, 4096, and 1000 neurons, respectively, and represent layers 6, 7, and 8. In this study, the features are extracted from layer 6, yielding 4096 features.

2.4.2. Feature Selection

In the second stage, ALO is applied as the feature selection method to reduce the size of the feature set produced by the fully connected layers. The feature selection function should maximize the classification accuracy while reducing the number of selected features to a minimum. We therefore formulate the feature selection problem with the objective function presented in Equation (22):
$$Objective = \alpha \cdot ErrorRate + \beta \cdot \frac{\#SF}{\#All\_F}$$
where Error Rate is the error rate of the classification model (NB, SVM, DT, Softmax, or KNN), $\#SF$ is the number of selected features, and $\#All\_F$ is the total number of features. The two parameters $\alpha$ and $\beta$ weight the importance of classification quality and subset length, with $\alpha \in [0, 1]$ and $\beta = 1 - \alpha$. Several advantages are obtained when feature selection is applied to a problem:
  • Decreased overfitting: fewer redundant features mean less chance of making decisions based on noise.
  • Enhanced accuracy: fewer misleading features mean an increase in model accuracy.
  • Decreased training time: fewer features mean that the classifiers train faster.
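Equation (22) can be written directly; note that the weighting α = 0.99 below is an assumed value (a common choice in wrapper feature-selection work, not stated in this paper):

```python
def alo_objective(error_rate, n_selected, n_total, alpha=0.99):
    """Fitness of Equation (22): weighted sum of the classification error
    rate and the fraction of features kept, with beta = 1 - alpha."""
    beta = 1.0 - alpha
    return alpha * error_rate + beta * (n_selected / n_total)

# A candidate subset keeping 900 of the 4096 AlexNet features at 2% error
# scores better (lower) than keeping all features at the same error rate:
small = alo_objective(error_rate=0.02, n_selected=900, n_total=4096)
full = alo_objective(error_rate=0.02, n_selected=4096, n_total=4096)
```

ALO minimizes this objective, so among subsets with equal error it always prefers the smaller one.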

2.4.3. Classifiers

In the last stage, the classifiers (Softmax, SVM, KNN, Multiclass Naïve Bayes, and DT) are trained in a supervised manner to classify the features extracted by AlexNet in the previous stage. The features wired from layer 7 are directed by the fully connected layer to the classifiers. The proposed architecture is visualized in Figure 4.
In order to measure the classification performance of the proposed application, a confusion matrix was used. A confusion matrix contains information regarding the estimated and actual classifications carried out by an algorithm, and the performance of such algorithms is generally measured using the data in the confusion matrix. Figure 5 shows the confusion matrix for a two-class classifier.
Taking all correctly classified samples into account, the accuracy, precision, F1-score, sensitivity, and specificity criteria can be calculated through Equations (23)–(27), respectively.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FN + FP}$$
$$\mathrm{Precision} = \frac{TP}{TP + FP}$$
$$\mathrm{F1\ Score} = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
$$\mathrm{Sensitivity} = \frac{TP}{TP + FN}$$
$$\mathrm{Specificity} = \frac{TN}{TN + FP}$$
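Equations (23)–(27) computed from confusion-matrix counts; the counts below are hypothetical, chosen only to exercise the formulas:

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, precision, F1, sensitivity (recall), and specificity
    from the four confusion-matrix counts of a two-class classifier."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                  # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    return accuracy, precision, f1, recall, specificity

# Hypothetical counts: 48 TP, 50 TN, 0 FP, 2 FN
acc, prec, f1, sens, spec = metrics(48, 50, 0, 2)
```

With zero false positives the precision is exactly 1.0, illustrating why precision alone is insufficient and the F1-score, which also penalizes false negatives, is reported as well.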

3. Results

In this study, MATLAB 2018 was used to implement our proposed methodology for COVID-19 detection. The analysis was executed on a PC with an Intel Core i7-670 @ 2.60 GHz CPU and 8 GB of RAM. A random sampling technique was applied to evaluate the proposed method; the goal of random sampling is to avoid overfitting. Furthermore, the experiment was repeated 5 times and the average values were measured for each case. T-test and p-value parameters were calculated to evaluate the results obtained before and after applying ALO. In our test environment, ALO uses 45 search agents and 300 iterations as parameters. Two well-known datasets were used to test the performance of the proposed method, as shown in Table 1. Several metrics, such as accuracy, precision, and F1-score, were calculated for each classifier. Comparative results are shown in Table 2.
Our framework then applies ALO to select features from the output of the CNN, reducing the size of the feature set to be classified. This phase assists in reducing the computational time and classification complexity. Our novel CNN-ALO-classifier framework presented the best results compared to the classifiers without ALO. The results are presented in Table 3.
A 2-tailed paired t-test was applied to the two matched groups with a diagnosis of COVID-19, and the p-value was calculated as 0.031011, which is less than the standard level of significance (p < 0.05). Therefore, a statistically significant difference between using and not using ALO is noted on the dataset from [53].
Furthermore, the variation of the mean squared error of the classifiers versus the number of epochs is presented in Figure 6.
In addition, the variation of accuracy versus the number of epochs is provided in Figure 7. Experimental results show that the NB classifier presented the best results compared to the other approaches (SVM, Softmax, KNN, DT). Furthermore, the execution times of these classifiers were also acquired and are visualized in Figure 8.
On the other hand, the public COVID-19 dataset in [54] is also used to validate the proposed method. This dataset consists of 460 COVID-19, 1266 normal, and 3418 pneumonia X-ray images for training, and 116 COVID-19, 314 normal, and 855 pneumonia images for testing. The main critical issue in this dataset is that it contains pneumonia caused by bacterial infection, not COVID-19. Figure 9 illustrates images randomly selected from the class samples.
In our first iteration, we applied our CNN + NB method directly to the unbalanced dataset. The results are shown in Table 4.
Furthermore, our CNN + ALO + NB method is applied to the same dataset. The results of our method are presented in Table 5. Our method presented an overall accuracy of 98.9% and was able to detect COVID-19 cases with 100% accuracy. This means that the proposed CNN + ALO + NB method can detect COVID-19 cases without any misclassified instances and is not affected negatively by the low number of COVID-19 instances.
A 2-tailed paired t-test was applied to the two matched groups with a diagnosis of COVID-19, and the p-value was calculated as 0.041011, which is less than the standard level of significance (p < 0.05). Consequently, a statistically significant difference between using and not using ALO is noted on the dataset from [54].

4. Discussion

RT-PCR tests and viral nucleic acid testing serve as the gold standard methods for the diagnosis of COVID-19. However, the false negative results reported in early studies may hinder the prevention and control of the outbreak, especially since these tests play an important reference role [55,56]. Therefore, clinical tests, laboratory results, imaging findings, and other epidemiological factors must be carefully examined for the full characterization and correct diagnosis of COVID-19.
In routine practice, for image analysis, radiologists, who may be more or less experienced in interpreting chest X-ray imaging, examine chest X-ray images and decide on positive or negative findings by consensus. The radiologists also classify the chest X-rays as positive or negative for COVID-19. Accurate assessment is often based on education and experience, but it can at times be subjective. Less experienced radiologists can produce results with adequate specificity but low sensitivity in differentiating COVID-19 from viral pneumonia on chest X-rays or CT. This is due to the difficulty of making reproducible radiology evaluations for accurate diagnosis and classification given the urgency, patient burden, and hospital facilities during the COVID-19 outbreak. Since radiology involves visual perception as well as decision making under uncertainty, mistakes are inevitable, especially under such limited conditions [57,58]. These facts underline the need for immediate and accurate detection and differentiation methods that can be used in the local hospitals and clinics responsible for the diagnosis and management of COVID-19 patients.
The deep learning approach has proven its potential in different classification tasks, achieving the best results on various image datasets. This data-driven approach allows for more abstract feature information [59,60,61,62]. While various deep learning architectures have been researched to address different tasks, the most common deep learning architectures in medical imaging today are CNNs. Thus, in this study, we proposed a CNN-based model to classify COVID-19 from chest X-ray images using transfer learning. Transfer learning, i.e., using networks pre-trained on other datasets, is often used when dealing with rare or scarce data, without the need for data augmentation [15]. We adopted the transfer learning approach and used the AlexNet architecture on the patient dataset of COVID-19 and healthy subjects to extract the features. These features are transferred to the classifiers of the respective models, and the results of the classifiers are compared. The promising results of these classifiers are evaluated and presented in terms of accuracy, precision, and F1-score. The NB classifier with the Ant Lion Optimization Algorithm and CNN produced the best results, with 98.31% accuracy, 100% precision, and a 98.25% F1-score, with the lowest execution time.
Table 6 and Table 7 compare our CNN + ALO + NB method with several significant works on COVID-19 detection using state-of-the-art approaches. Table 6 contains the works tested on the dataset in [54], and Table 7 the works tested on the dataset in [53].
As Table 6 and Table 7 show, the proposed CNN + ALO + NB method outperforms several studies in the literature. Furthermore, the proposed ALO algorithm is compared with three well-known algorithms: PSO, GA, and the Bat algorithm. Figure 10 shows this comparison, in which ALO produces better results than the other algorithms.
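For reference, the metrics reported in these comparisons and in Tables 2-5 follow the standard confusion-matrix definitions. The helper below is a generic sketch with hypothetical counts, not the authors' evaluation code:

```python
def metrics_from_confusion(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)            # positive predictive value
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity, "f1": f1}

# hypothetical counts for the COVID-19-positive class on a 180-image test split
m = metrics_from_confusion(tp=90, fp=0, fn=2, tn=88)
```

Note that with `fp = 0` the precision is exactly 1.0, which is how a reported 100% precision can coexist with accuracy and F1-score below 100%.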
The main reason for this superiority is the use of ALO as a feature selector between the CNN feature-extraction stage and the NB classifier. ALO contributes substantially to optimizing system performance for the following reasons:
  • Random choice of antlions and the use of a roulette wheel ensure exploration of the search space.
  • Random walks of ants around the antlions further accentuate exploration of the search range around the antlions.
  • Local optima are escaped through roulette wheel selection and random walks.
  • ALO approximates the global optimum by avoiding local optima across the population of search agents.
  • The ALO algorithm is flexible and suitable for solving various problems, as it has a small number of adaptive parameters to fine-tune.
  • PSO, in contrast, easily falls into local optima in high-dimensional spaces and has a low convergence rate in the iterative process, which causes problems for feature selection, especially on complex data such as COVID-19 X-ray images.
  • GA is computationally expensive, so a GA implementation requires substantial tuning. Moreover, designing an objective function and getting the representation and operators right can be difficult.
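The mechanisms listed above (roulette wheel selection, shrinking random walks around a chosen antlion and around the elite, and elitism) can be condensed into a minimal continuous ALO sketch. The function `alo_minimize` and its parameter choices are our own illustrative simplification of the algorithm in [41], not the feature-selection code used in this study:

```python
import numpy as np

def alo_minimize(f, dim, lb, ub, n_agents=20, max_iter=100, seed=0):
    """Minimal Ant Lion Optimizer sketch for minimizing f over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    antlions = rng.uniform(lb, ub, (n_agents, dim))
    fit = np.array([f(x) for x in antlions])
    elite, elite_fit = antlions[fit.argmin()].copy(), fit.min()

    def roulette(fitness):
        # roulette wheel: lower (better) fitness gets a higher selection chance
        w = fitness.max() - fitness + 1e-9
        return rng.choice(len(w), p=w / w.sum())

    def random_walk(center, t):
        # bounds shrink around the chosen antlion as iterations proceed,
        # modelling the ant sliding down into the trap
        shrink = 1.0 + 10.0 * t / max_iter
        lo = np.maximum(center - (ub - lb) / shrink, lb)
        hi = np.minimum(center + (ub - lb) / shrink, ub)
        steps = np.cumsum(2 * (rng.random((max_iter, dim)) > 0.5) - 1, axis=0)
        a, b = steps.min(axis=0), steps.max(axis=0)
        return (steps[t] - a) / (b - a + 1e-9) * (hi - lo) + lo  # min-max scale

    for t in range(max_iter):
        for i in range(n_agents):
            chosen = antlions[roulette(fit)]
            # each ant averages a walk around a roulette-selected antlion
            # and a walk around the elite
            ant = 0.5 * (random_walk(chosen, t) + random_walk(elite, t))
            ant_fit = f(ant)
            if ant_fit < fit[i]:            # the antlion "consumes" a fitter ant
                antlions[i], fit[i] = ant, ant_fit
        if fit.min() < elite_fit:           # elitism: keep the best antlion found
            elite, elite_fit = antlions[fit.argmin()].copy(), fit.min()
    return elite, elite_fit

# demo on a 2-D sphere function
best_x, best_f = alo_minimize(lambda v: float((v ** 2).sum()),
                              dim=2, lb=-5.0, ub=5.0)
```

For feature selection, the same loop would operate on binary masks over the CNN feature vector, with the objective being a validation score of the downstream NB classifier; only the search-space encoding changes, not the antlion mechanics.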
There are some limitations to our study. Successful deep learning models such as AlexNet must be trained with more image data; however, COVID-19 data were scarce in this study, limited by the shortage of laboratory records during the outbreak. In addition, X-ray data of MERS, SARS, and other relevant syndromes are not included. To acquire a more comprehensive understanding of COVID-19, it would be desirable to include a larger dataset from a wide geographic scope. Additionally, a deep learning approach that integrates radiology image features with RT-PCR results may enable more effective screening and treatment of COVID-19.

5. Conclusions

The results of this study show the value of the diagnostic performance of a CNN-based deep learning system for accurate detection, improving the evaluation and clinical management of patients with COVID-19. COVID-19 diagnoses are commonly confirmed with clinical and laboratory tests, which can suffer from low specificity; radiological image analysis can be combined with them to improve accuracy, and deep learning methods can minimize both human and computational error, enabling a simplified, quantitative diagnostic system for managing COVID-19. The CNN-ALO-Naïve Bayes method is a novel method that produced better results than the other techniques, and this combined method also produced the best results compared with related studies in the literature. As future work, this method can be applied to various problems such as video classification, object detection, and cybersecurity.

Author Contributions

Conceptualization, A.M.K.; data curation, A.M.K. and H.K.; investigation, H.K.; methodology, A.M.K.; resources, V.A., B.S. and I.A.H.; validation, V.A.; writing—original draft, A.M.K., B.S. and I.A.H.; writing—review and editing, I.A.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Health Organization (WHO). Available online: https://www.who.int/emergencies/diseases/novel-coronavirus-2019 (accessed on 24 February 2020).
  2. Chen, N.; Zhou, M.; Dong, X.; Qu, J.; Gong, F.; Han, Y.; Qiu, Y.; Wang, J.; Liu, Y.; Wei, Y.; et al. Epidemiological and Clinical Characteristics of 99 Cases of 2019 Novel Coronavirus Pneumonia in Wuhan, China: A Descriptive Study. Lancet 2020, 395, 507–513.
  3. Yin, Y.; Wunderink, R.G. MERS, SARS and Other Coronaviruses as Causes of Pneumonia. Respirology 2018, 23, 130–137.
  4. Wang, D.; Hu, B.; Hu, C.; Zhu, F.; Liu, X.; Zhang, J.; Wang, B.; Xiang, H.; Cheng, Z.; Xiong, Y.; et al. Clinical Characteristics of 138 Hospitalized Patients with 2019 Novel Coronavirus-Infected Pneumonia in Wuhan, China. JAMA 2020, 323, 1061–1069.
  5. Li, Q.; Guan, X.; Wu, P.; Wang, X.; Zhou, L.; Tong, Y.; Ren, R.; Leung, K.S.; Lau, E.H.; Wong, J.Y.; et al. Early Transmission Dynamics in Wuhan, China, of Novel Coronavirus-Infected Pneumonia. N. Engl. J. Med. 2020, 382, 1199–1207.
  6. Huang, C.; Wang, Y.; Li, X.; Ren, L.; Zhao, J.; Hu, Y.; Zhang, L.; Fan, G.; Xu, J.; Gu, X.; et al. Clinical Features of Patients Infected with 2019 Novel Coronavirus in Wuhan, China. Lancet 2020, 395, 497–506.
  7. General Office of National Health Committee. Notice on Printing and Distributing the Novel Coronavirus Pneumonia Diagnosis and Treatment Plan (Trial Version 6). 18 February 2020. Available online: http://www.nhc.gov.cn/yzygj/s7653p/202002/8334a8326dd94d329df351d7da8aefc2.shtml?from=timeline (accessed on 24 February 2020).
  8. Chung, M.; Bernheim, A.; Mei, X.; Zhang, N.; Huang, M.; Zeng, X.; Cui, J.; Xu, W.; Yang, Y.; Fayad, Z.A.; et al. CT Imaging Features of 2019 Novel Coronavirus (2019-nCoV). Radiology 2020, 295, 202–207.
  9. Huang, P.; Liu, T.; Huang, L.; Liu, H.; Lei, M.; Xu, W.; Hu, X.; Chen, J.; Liu, B. Use of Chest CT in Combination with Negative RT-PCR Assay for the 2019 Novel Coronavirus but High Clinical Suspicion. Radiology 2020, 295, 22–23.
  10. Li, D.; Wang, D.; Dong, J.; Wang, N.; Huang, H.; Xu, H.; Xia, C. False-Negative Results of Real-Time Reverse-Transcriptase Polymerase Chain Reaction for Severe Acute Respiratory Syndrome Coronavirus 2: Role of Deep-Learning-Based CT Diagnosis and Insights from Two Cases. Korean J. Radiol. 2020, 21, 505–508.
  11. National Health Commission of the People’s Republic of China. Diagnosis and Treatment Protocols of Pneumonia Caused by a Novel Coronavirus (Trial Version 5); National Health Commission of the People’s Republic of China: Beijing, China, 2020.
  12. Koo, H.J.; Lim, S.; Choe, J.; Choi, S.H.; Sung, H.; Do, K.H. Radiographic and CT Features of Viral Pneumonia. Radiographics 2018, 38, 719–739.
  13. Ucar, F.; Korkmaz, D. COVIDiagnosis-Net: Deep Bayes-SqueezeNet based Diagnosis of the Coronavirus Disease 2019 (COVID-19) from X-ray Images. Med. Hypotheses 2020, 140, 109761.
  14. Li, L.; Qin, L.; Xu, Z.; Yin, Y.; Wang, X.; Kong, B.; Bai, J.; Lu, Y.; Fang, Z.; Song, Q.; et al. Using Artificial Intelligence to Detect COVID-19 and Community-acquired Pneumonia Based on Pulmonary CT: Evaluation of the Diagnostic Accuracy. Radiology 2020, 296, E65–E71.
  15. Shin, H.C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298.
  16. Rajpurkar, P.; Irvin, J.; Ball, R.L.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.P.; et al. Deep Learning for Chest Radiograph Diagnosis: A Retrospective Comparison of the CheXNeXt Algorithm to Practicing Radiologists. PLoS Med. 2018, 15, e1002686.
  17. Rawat, W.; Wang, Z. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review. Neural Comput. 2017, 29, 2352–2449.
  18. Rajpurkar, P.; Irvin, J.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.; Shpanskaya, K.; et al. CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning. arXiv 2017, arXiv:1711.05225.
  19. Kermany, D.S.; Goldbaum, M.; Cai, W.; Valentim, C.C.; Liang, H.; Baxter, S.L.; McKeown, A.; Yang, G.; Wu, X.; Yan, F.; et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. Cell 2018, 172, 1122–1131.
  20. Rousan, L.A.; Elobeid, E.; Karrar, M.; Khader, Y. Chest X-ray findings and temporal lung changes in patients with COVID-19 pneumonia. BMC Pulm. Med. 2020, 20, 245.
  21. Cozzi, D.; Albanesi, M.; Cavigli, E.; Moroni, C.; Bindi, A.; Luvarà, S.; Lucarini, S.; Busoni, S.; Mazzoni, L.N.; Miele, V. Chest X-ray in new Coronavirus Disease 2019 (COVID-19) infection: Findings and correlation with clinical outcome. Radiol. Med. 2020, 125, 730–737.
  22. Chen, H.; Zheng, Y.; Park, J.H.; Heng, P.A.; Zhou, S.K. Medical Image Computing and Computer-Assisted Intervention; MICCAI: Athens, Greece, 2016; pp. 487–495.
  23. Makris, A.; Kontopoulos, I.; Tserpes, K. COVID-19 detection from chest X-Ray images using Deep Learning and Convolutional Neural Networks. medRxiv 2020.
  24. Zebin, T.; Rezvy, S. COVID-19 detection and disease progression visualization: Deep learning on chest X-rays for classification and coarse localization. Appl. Intell. 2021, 51, 1010–1021.
  25. Elaziz, M.A.; Hosny, K.M.; Salah, A.; Darwish, M.M.; Lu, S.; Sahlol, A.T. New machine learning method for image-based diagnosis of COVID-19. PLoS ONE 2020, 15, e0235187.
  26. Azemin, M.Z.C.; Hassan, R.; Tamrin, M.I.M.; Ali, M.A.M. COVID-19 Deep Learning Prediction Model Using Publicly Available Radiologist-Adjudicated Chest X-Ray Images as Training Data: Preliminary Findings. Int. J. Biomed. Imaging 2020, 2020, 8828855.
  27. Rajaraman, S.; Siegelman, J.; Alderson, P.O.; Folio, L.S.; Folio, L.R.; Antani, S.K. Iteratively Pruned Deep Learning Ensembles for COVID-19 Detection in Chest X-Rays. IEEE Access 2020, 8, 115041–115050.
  28. Sahlol, A.T.; Yousri, D.; Ewees, A.A.; Al-Qaness, M.A.A.; Damasevicius, R.; Elaziz, M.A. COVID-19 image classification using deep features and fractional-order marine predators algorithm. Sci. Rep. 2020, 10, 15364.
  29. Li, M.D.; Arun, N.T.; Gidwani, M.; Chang, K.; Deng, F.; Little, B.P.; Mendoza, D.P.; Lang, M.; Lee, S.I.; O’Shea, A.; et al. Automated Assessment and Tracking of COVID-19 Pulmonary Disease Severity on Chest Radiographs using Convolutional Siamese Neural Networks. Radiol. Artif. Intell. 2020, 2, e200079.
  30. Sitaula, C.; Hossain, M.B. Attention-based VGG-16 model for COVID-19 chest X-ray image classification. Appl. Intell. 2021, 51, 2850–2863.
  31. Sitaula, C.; Aryal, S. New bag of deep visual words based features to classify chest x-ray images for COVID-19 diagnosis. Health Inf. Sci. Syst. 2021, 9, 24.
  32. Sitaula, C.; Shahi, T.B.; Aryal, S.; Marzbanrad, F. Fusion of multi-scale bag of deep visual words features of chest X-ray images to detect COVID-19 infection. Sci. Rep. 2021, 11, 23914.
  33. Shorfuzzaman, M.; Masud, M.; Alhumyani, H.; Anand, D.; Singh, A. Artificial Neural Network-Based Deep Learning Model for COVID-19 Patient Detection Using X-Ray Chest Images. J. Healthc. Eng. 2021, 2021, 5513679.
  34. Hasan, M.; Ahmed, S.; Abdullah, Z.; Monirujjaman Khan, M.; Anand, D.; Singh, A.; AlZain, M.; Masud, M. Deep Learning Approaches for Detecting Pneumonia in COVID-19 Patients by Analyzing Chest X-Ray Images. Math. Probl. Eng. 2021, 2021, 9929274.
  35. Shorfuzzaman, M.; Hossain, M. MetaCOVID: A Siamese neural network framework with contrastive loss for n-shot diagnosis of COVID-19 patients. Pattern Recognit. 2021, 113, 107700.
  36. Tahsin Meem, A.; Monirujjaman Khan, M.; Masud, M.; Aljahdali, S. Prediction of Covid-19 Based on Chest X-Ray Images Using Deep Learning with CNN. Comput. Syst. Sci. Eng. 2022, 41, 1223–1240.
  37. Karim, A.M.; Mishra, A. Novel COVID-19 Recognition Framework Based on Conic Functions Classifier. In Healthcare Informatics for Fighting COVID-19 and Future Epidemics; Garg, L., Chakraborty, C., Mahmoudi, S., Sohmen, V.S., Eds.; EAI/Springer Innovations in Communication and Computing; Springer: Cham, Switzerland, 2022.
  38. Md Noor, S.S.; Ren, J.; Marshall, S.; Michael, K. Hyperspectral Image Enhancement and Mixture Deep Learning Classification of Corneal Epithelium Injuries. Sensors 2017, 17, 2644.
  39. Mary, L.; Sreenath, N. Live Detection of Text in the Natural Environment Using Convolutional Neural Network. Future Gener. Comput. Syst. 2019, 98, 444–455.
  40. Barros, R.; Basgalupp, M.; de Carvalho, A.; Freitas, A. Automatic Design of Decision-Tree Algorithms with Evolutionary Algorithms. Evol. Comput. 2013, 21, 659–684.
  41. Mirjalili, S. The Ant Lion Optimizer. Adv. Eng. Softw. 2015, 83, 80–98.
  42. Wang, M.; Wu, C.; Wang, L.; Xiang, D.; Huang, X. A Feature Selection Approach for Hyperspectral Image Based on Modified Ant Lion Optimizer. Knowl.-Based Syst. 2019, 168, 39–48.
  43. Kumar, S.; Kumar, A. A brief review on antlion optimization algorithm. In Proceedings of the 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN), Greater Noida, India, 12–13 October 2018; pp. 236–240.
  44. Assiri, A.S.; Hussien, A.G.; Amin, M. Ant Lion Optimization: Variants, Hybrids, and Applications. IEEE Access 2020, 8, 77746–77764.
  45. Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet Classification with Deep Convolutional Neural Networks. Neural Inf. Processing Syst. 2012, 25, 84–90.
  46. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. In Computer Vision–ECCV 2014; Springer: Zurich, Switzerland; Cham, Switzerland, 2014; Volume 8689, pp. 818–833.
  47. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
  48. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9.
  49. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
  50. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
  51. Chen, J.; Wan, Z.; Zhang, J.; Li, W.; Chen, Y.; Li, Y.; Duan, Y. Medical image segmentation and reconstruction of prostate tumor based on 3D AlexNet. Comput. Methods Programs Biomed. 2021, 200, 105878.
  52. Banga, A.; Bhatia, P.K. Optimized Component based Selection using LSTM Model by Integrating Hybrid MVO-PSO Soft Computing Technique. Adv. Sci. Technol. Eng. Syst. J. 2021, 6, 62–71.
  53. Rahman, T.; Chowdhury, M.; Khandakar, A. COVID-19 Radiography Database. COVID-19 Chest X-ray Images and Lung Masks Database. 2022. Available online: https://www.kaggle.com/datasets/tawsifurrahman/covid19-radiography-database (accessed on 1 February 2022).
  54. Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 Image Data Collection. arXiv 2020, arXiv:2003.11597.
  55. Li, Y.; Xia, L. Coronavirus Disease 2019 (COVID-19): Role of Chest CT in Diagnosis and Management. Am. J. Roentgenol. 2020, 214, 1280–1286.
  56. Zhu, N.; Zhang, D.; Wang, W.; Li, X.; Yang, B.; Song, J.; Zhao, X.; Huang, B.; Shi, W.; Lu, R.; et al. A Novel Coronavirus from Patients with Pneumonia in China, 2019. N. Engl. J. Med. 2020, 382, 727–733.
  57. McDonald, R.J.; Schwartz, K.M.; Eckel, L.J.; Diehn, F.E.; Hunt, C.H.; Bartholmai, B.J.; Erickson, B.J.; Kallmes, D.F. The Effects of Changes in Utilization and Technological Advancements of Cross-Sectional Imaging on Radiologist Workload. Acad. Radiol. 2015, 22, 1191–1198.
  58. Fitzgerald, R. Error in Radiology. Clin. Radiol. 2001, 56, 938–946.
  59. Paul, R.; Hawkins, S.H.; Balagurunathan, Y.; Schabath, M.B.; Gillies, R.J.; Hall, L.O.; Goldgof, D.B. Deep Feature Transfer Learning in Combination with Traditional Features Predicts Survival Among Patients with Lung Adenocarcinoma. Tomography 2016, 2, 388–395.
  60. Cheng, J.-Z.; Ni, D.; Chou, Y.-H.; Qin, J.; Tiu, C.-M.; Chang, Y.-C.; Huang, C.-S.; Shen, D.; Chen, C.-M. Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans. Sci. Rep. 2016, 6, 24454.
  61. Liang, B.; Zhai, Y.; Tong, C.; Zhao, J.; Li, J.; He, X.; Ma, Q. A Deep Automated Skeletal Bone Age Assessment Model via Region-Based Convolutional Neural Network. Future Gener. Comput. Syst. 2019, 98, 54–59.
  62. Panwar, H.; Gupta, P.K.; Siddiqui, M.K.; Morales-Menendez, R.; Singh, V. Application of deep learning for fast detection of COVID-19 in X-Rays using nCOVnet. Chaos Solitons Fractals 2020, 138, 109944.
  63. Wang, L.; Wong, A. COVID-Net: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest Radiography Images. arXiv 2020, arXiv:2003.09871.
  64. Li, X.; Zhu, D. COVID-Xpert: An AI Powered Population Screening of COVID-19 Cases Using Chest Radiography Images. arXiv 2020, arXiv:2004.03042.
  65. Afshar, P.; Heidarian, S.; Naderkhani, F.; Oikonomou, A.; Plataniotis, K.N.; Mohammadi, A. COVID-CAPS: A Capsule Network-based Framework for Identification of COVID-19 cases from X-ray Images. arXiv 2020, arXiv:2004.02696.
  66. Farooq, M.; Hafeez, A. COVID-ResNet: A Deep Learning Framework for Screening of COVID19 from Radiographs. arXiv 2020, arXiv:2003.14395.
  67. Chowdhury, M.E.H.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Bin Mahbub, Z.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Al Emadi, N.; et al. Can AI Help in Screening Viral and COVID-19 Pneumonia? IEEE Access 2020, 8, 132665–132676.
  68. Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Rajendra Acharya, U. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 2020, 121, 103792.
  69. Khan, A.I.; Shah, J.L.; Bhat, M.M. Coronet: A deep neural network for detection and diagnosis of COVID-19 from chest x-ray images. Comput. Methods Programs Biomed. 2020, 196, 105581.
  70. Apostolopoulos, I.D.; Mpesiana, T.A. Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys. Eng. Sci. Med. 2020, 43, 635–640.
  71. Hussain, E.; Hasan, M.; Rahman, M.A.; Lee, I.; Tamanna, T.; Parvez, M.Z. Corodet: A deep learning based classification for covid-19 detection using chest x-ray images. Chaos Solitons Fractals 2021, 142, 110495.
  72. Umer, M.; Ashraf, I.; Ullah, S.; Mehmood, A.; Choi, G.S. COVINet: A convolutional neural network approach for predicting COVID-19 from chest X-ray images. J. Ambient. Intell. Humaniz. Comput. 2022, 13, 535–547.
  73. Mukherjee, H.; Ghosh, S.; Dhar, A.; Obaidullah, S.M.; Santosh, K.; Roy, K. Shallow convolutional neural network for covid-19 outbreak screening using chest X-rays. Cogn. Comput. 2021, 5, 1–14.
  74. Mahmud, T.; Rahman, M.A.; Fattah, S.A. Covxnet: A multi-dilation convolutional neural network for automatic covid-19 and other pneumonia detection from chest x-ray images with transferable multi-receptive feature optimization. Comput. Biol. Med. 2020, 122, 103869.
Figure 1. SVM Based Largest Separating Hyperplane.
Figure 2. DT Tree Induction Using Information Theory [40].
Figure 3. AlexNet Structure.
Figure 4. Proposed Architecture.
Figure 5. Confusion Matrix [52].
Figure 6. MSE vs number of epochs of CNN-ALO-Classifiers.
Figure 7. Accuracy Epochs of CNN-ALO-Classifiers.
Figure 8. Comparison of the Execution Time of the Classifiers.
Figure 9. COVID-19 X-ray Samples: (a) Normal, (b) Pneumonia, (c) COVID-19.
Figure 10. Comparisons of optimization algorithms using data from [53] as Dataset 1 and data from [54] as Dataset 2.
Table 1. Datasets.

Classes         Dataset from [53]   Dataset from [54]
COVID-19        3616                576
Lung Opacity    6012                -
Normal          10,200              1583
Pneumonia       1345                4273
Table 2. Experimental Results of CNN + Classifiers. Data from [53].

Classifiers   Accuracy   Precision   F1 Score   Sensitivity   Specificity
NB            0.9636     0.9200      0.9583     0.9408        0.9310
SVM           0.9455     0.9200      0.9388     0.9377        0.9406
Soft Max      0.9091     0.9200      0.9020     0.9190        0.9019
KNN           0.8909     0.8800      0.8800     0.8703        0.8954
DT            0.8448     0.8148      0.8300     0.8033        0.8258
Table 3. Experimental Results of CNN + ALO + Classifiers. Data from [53].

Classifiers   Accuracy   Precision   F1 Score   Sensitivity   Specificity
NB            0.9801     0.9995      0.9804     0.9823        0.9856
SVM           0.9605     0.9550      0.9600     0.9605        0.9566
Soft Max      0.9173     0.9540      0.9231     0.9211        0.9412
KNN           0.9355     0.8700      0.9362     0.8907        0.9011
DT            0.8627     0.8600      0.8627     0.8720        0.8814
Table 4. Experimental Results of CNN + Classifiers. Data from [54].

Classifiers   Accuracy   Precision   F1 Score   Sensitivity   Specificity
NB            0.9776     0.9467      0.9656     0.9576        0.9409
SVM           0.9609     0.9398      0.9456     0.9569        0.9534
Soft Max      0.9378     0.9200      0.9020     0.9190        0.9019
KNN           0.9065     0.8901      0.8709     0.8809        0.8954
DT            0.8542     0.8148      0.8300     0.8033        0.8258
Table 5. Experimental Results of CNN + ALO + Classifiers. Data from [54].

Classifiers   Accuracy   Precision   F1 Score   Sensitivity   Specificity
NB            0.9801     0.9787      0.9745     0.9604        0.9594
SVM           0.9767     0.9567      0.9698     0.9677        0.9645
Soft Max      0.9498     0.9309      0.9295     0.9245        0.9324
KNN           0.9065     0.8993      0.8795     0.8886        0.8975
DT            0.8542     0.8175      0.8397     0.8095        0.8284
Table 6. Accuracy comparison of significant works with the proposed CNN + ALO + NB method. Data from [54].

Ref               Method                               Accuracy (%)
[13]              Bayes-SqueezeNet                     98.83
[63]              Tailored CNN                         92.30
[64]              DenseNet                             88.90
[65]              Capsule Networks                     95.70
[66]              ResNet50                             96.20
[67]              Sgdm-SqueezeNet                      98.30
[68]              DarkNet-19 based CNN                 87.02
[69]              Transfer learning with Xception      96.60
[70]              Transfer learning with MobileNetV2   96.80
[71]              CoroDet                              94.2
[72]              COVINet                              97
[73]              Shallow CNN                          95
[74]              CovXNet                              97.6
Proposed Method   CNN + ALO + NB                       99.63
Table 7. Accuracy comparison of significant works with the proposed CNN + ALO + NB method. Data from [53].

Ref               Method           Accuracy (%)
[71]              CoroDet          91.2
[72]              COVINet          85
[74]              CovXNet          90.3
Proposed Method   CNN + ALO + NB   98.01
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Karim, A.M.; Kaya, H.; Alcan, V.; Sen, B.; Hadimlioglu, I.A. New Optimized Deep Learning Application for COVID-19 Detection in Chest X-ray Images. Symmetry 2022, 14, 1003. https://doi.org/10.3390/sym14051003

