Article

Rapid and Accurate Diagnosis of COVID-19 Cases from Chest X-ray Images through an Optimized Features Extraction Approach

by K. G. Satheesh Kumar 1, Arunachalam Venkatesan 1,*, Deepika Selvaraj 1 and Alex Noel Joseph Raj 2

1 Department of Micro and Nano Electronics, School of Electronics Engineering, Vellore Institute of Technology, Vellore 621103, India
2 Key Laboratory of Digital Signal and Image Processing of Guangdong Province, Department of Electronic Engineering, College of Engineering, Shantou University, Shantou 515063, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(17), 2682; https://doi.org/10.3390/electronics11172682
Submission received: 28 July 2022 / Revised: 18 August 2022 / Accepted: 23 August 2022 / Published: 26 August 2022

Abstract

The mutants of the novel coronavirus (COVID-19, caused by SARS-CoV-2) are spreading across the globe in different variants, affecting human health and the economy. Rapidly detecting COVID-19 infection and providing timely treatment to those infected remain major challenges. For fast and cost-effective detection, artificial intelligence (AI) can play a key role in enhancing chest X-ray images and classifying them as infected or non-infected. However, AI needs huge datasets to train and detect the COVID-19 infection, which may impact the overall system speed. Therefore, a Deep Neural Network (DNN) is preferred over standard AI models to speed up the classification with a compact set of features from the datasets. Further, for accurate feature extraction, an algorithm that combines Zernike Moment Features (ZMF) and Gray-Level Co-occurrence Matrix Features (GF) is proposed and implemented. The proposed algorithm uses 36 Zernike Moment features together with variance and contrast textures, which helps to detect the COVID-19 infection accurately. Finally, a Region Blocking (RB) approach with an optimum sub-image size (32 × 32) is employed to improve the processing speed by up to 2.6 times per image. The implementation achieves an accuracy (A) of 93.4%, sensitivity (Se) of 72.4%, specificity (Sp) of 95%, precision (Pr) of 74.9% and F1-score (F1) of 72.3%. These metrics illustrate that the proposed model can identify the COVID-19 infection with a smaller dataset and with accuracy up to 1.3 times higher than existing state-of-the-art models.

1. Introduction

Coronavirus disease 2019 (COVID-19) is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It transmits easily from human to human and mainly affects the lungs. The first COVID-19 case was reported in Wuhan, China, in December 2019 [1]. As of June 2021, the World Health Organization (WHO) reported that 176 million people had been infected worldwide across all COVID-19 variants [2]. The disease continues to spread day by day through a combination of different variants; mortality and recovery rates are approximately 5% and 55%, respectively. Researchers and scientists from biomedical divisions have successfully developed vaccines, and people are now widely vaccinated as per their countries' regulations. However, fast detection remains an important factor in identifying, treating and controlling the disease. AI plays a key role in identification, automatic problem solving and public health observation. It also protects humans from the spread of the virus and helps analyze the severity of COVID-19 with different AI models [3,4,5,6,7,8,9,10].
Popular test methods for detecting COVID-19 include reverse transcription-polymerase chain reaction (RT-PCR), chest X-ray (CXR) and computed tomography (CT) imaging [11]. RT-PCR detects the infection from viral nucleic acid and requires throat and nasopharyngeal swabs collected from the affected person. In addition, the test duration is long, and the results may be influenced by sampling error. Medical imaging (X-ray and CT) methods are common techniques to record the lungs' information in the form of a visual representation [12]. As it is fast and cost-effective, an X-ray image is preferable. COVID-19 infection affects the lungs through pulmonary consolidation and ground-glass opacification (GGO) [13]. Pulmonary consolidation indicates the final stage of the infection, whereas GGO indicates the early stage. Both phases are detected clearly in the X-ray images of COVID-19 cases. Ref. [14] suggested that an X-ray image offers reliability and good capability at detecting the infected region; their model and experimental analysis were executed in a MATLAB environment, and identification of the infection still required experts for manual decisions. Because the number of cases is increasing rapidly, it is difficult for medical experts to provide manual diagnoses, which are influenced by various factors (fatigue, the experience of the specialist, etc.). Thus, automatic classification of X-ray images is required to detect the infection quickly. The key contributions framed from the above considerations are as follows:
  • To introduce the Deep Neural Network (DNN) model for fast detection and accurate classification of the COVID-19-infected region from the X-ray image database. Our implementation provides 93% accuracy with a small number of training images (a minimum of 30 to a maximum of 50).
  • Instead of training and testing on the whole X-ray image, a Region Blocking (RB) algorithm with a 32 × 32 sub-image size is applied to each image after integrating the feature extraction methods (ZMF shape descriptors and GF textures). RB improves the target classification performance with a quick processing time of 124 s/image.
  • The results of the proposed work are evaluated using the standard performance metrics for the X-ray test dataset, and the metrics are compared with the previous AI models.
The rest of the paper is organized as follows. Section 2 discusses COVID-19-related works using different AI models. Section 3 explains the methodology along with the model implementation. Sections 4 and 5 present the dataset collection, experimental design, results and discussion. Section 6 concludes the work.

2. COVID-19 Related Works Using AI Techniques

This section reviews comprehensive works on AI techniques for COVID-19 medical imaging. Developing neural network architectures provides a suitable route to automatic detection and classification of medical images [15,16]. Machine learning classification models have been proposed in [17,18,19], consisting of dimensionality reduction through principal component analysis, unsupervised machine learning using a self-organizing map, and classification (Adam deep learning). AI with radiology images can help detect the infection accurately. Ozturk et al. (2020) [20] developed the DarkCovidNet model for automatic detection, which provides an end-to-end structure without requiring manual feature extraction. The implementation was performed for multi-class and binary classification with 98% accuracy and was tested on a larger database. In 2020, Oh et al. [21] proposed a convolutional neural network using a patch-based approach that requires fewer training images for diagnosis of the COVID-19 infection; the patch-based CNN achieves 70% accuracy. In 2021, Abbas et al. [22] introduced a deep convolutional neural network model named Decompose, Transfer, and Compose (DeTraC). It is a classification model that diagnoses the irregularities present in the X-ray image using a class decomposition method; DeTraC provides 93% accuracy in detecting the infection. Zhang et al. (2020) [23] proposed a threshold-based deep learning model for reliable and fast screening of the COVID-19 dataset. In [23], 100 X-ray images from COVID-19 datasets are used to evaluate the performance at different threshold levels (T). For example, when T = 0.25, sensitivity and specificity were 90% and 87.8%; when T = 0.15, the reported sensitivity and specificity were 96% and 70.6%, respectively. COVID-19 infection detection through feature extraction and a classification model is discussed by Bardhan et al. (2021) [24].
In their feature extraction, a total of 55 texture features are gathered from the X-ray images used for training and testing. The features are classified into the COVID-19 region using four standard classification models; the popular random forest classifier achieved 98% accuracy and 99% AUC. A deep convolutional neural network model, COVID-Net, was proposed by Wang et al. (2020) [25], providing improvements in the transparency of the features while detecting positive cases. For this analysis, they employed the benchmarked open-source COVIDX dataset. The COVIDX-Net framework of Hemdan et al. (2020) [26] includes seven different deep neural network architectures, such as Google MobileNet version 2 and the modified visual geometry group network VGG-19.
In each model, the different stages of the COVID-19 cases were classified based on an analysis of intensity normalization. The COVIDX-Net model uses 50% of the whole data for training, 30% for validation and 20% for testing; with this, it achieved an F1-score of 91%, comparable to the VGG-19 and DenseNet models. Mahmud et al. (2020) [27] proposed a deep CNN architecture named Stacked CovXNet, which involves depth-wise convolution with different dilation rates to properly extract features from X-ray images. A stacking algorithm is employed for optimization, followed by the integration of gradient-based discriminative localization, used to discriminate different pneumonia-infected regions of chest X-rays and enabling near-perfect detection of the COVID-19 infection. Stacked CovXNet gives 90.2% accuracy for multiclass outputs (normal, COVID-19, viral and bacterial infection). The literature shows that researchers have effectively evaluated the COVID-19 infection using traditional machine learning and deep learning models, as presented in Table 1. In addition, previous work clarifies that infections are sometimes missed by the classifier due to a lack of feature information and training samples. To overcome these problems, the framework of the proposed work is explained in the following sections.

3. Methodology

The dataset for the proposed framework consists of 60 X-ray images of size 224 × 224 pixels. The classification of each X-ray image involves (a) pre-processing, (b) extracting the unique features, (c) training the region blocks and (d) classifying the COVID-19-infected region from the background. The flow of the proposed methodology is shown in Figure 1.

3.1. Pre-Processing: Gamma Correction Technique

A standard gamma correction enhancement approach [28] is applied to the X-ray images as pre-processing. The original X-ray image and the corresponding histogram plot are displayed in Figure 2a,c. The gamma enhancement technique extracts expressive information from the X-ray COVID-19 dataset. Visually, the enhanced COVID-19-infected pixels are brightened by the gamma factor, as shown in Figure 2b, and the corresponding image histogram in Figure 2d highlights the same. Gamma correction performs a non-linear operation on the original X-ray image: it alters the image values to improve the pixel information using the projection relation between the pixel value and the gamma value according to an internal map. In Equation (1), $GC(h,k)$ is the corrected pixel value in grayscale, $I(x,y)$ is the original image, $\gamma(I(x,y))$ is the gamma value, and $x, y, h, k$ are pixel coordinates.
$$GC(h,k) = 255 \left( \frac{I(x,y)}{255} \right)^{\frac{1}{\gamma(I(x,y))}} \quad (1)$$
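As a minimal illustration, Equation (1) can be sketched in Python/NumPy. This is a sketch, not the authors' released MATLAB code; in particular, a single scalar gamma is assumed here in place of the paper's per-pixel internal map $\gamma(I(x,y))$.

```python
import numpy as np

def gamma_correct(image: np.ndarray, gamma: float = 1.5) -> np.ndarray:
    """Sketch of Equation (1): GC = 255 * (I / 255) ** (1 / gamma).

    Assumption: a single scalar `gamma` stands in for the paper's
    per-pixel internal map gamma(I(x, y)).
    """
    normalized = image.astype(np.float64) / 255.0     # pixel values in [0, 1]
    corrected = 255.0 * normalized ** (1.0 / gamma)   # non-linear stretch of Equation (1)
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Usage: with this form, gamma > 1 brightens mid-tones, as in Figure 2b.
xray = (np.random.rand(224, 224) * 255).astype(np.uint8)  # stand-in for a real X-ray
enhanced = gamma_correct(xray, gamma=1.5)
```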
The gamma-corrected enhanced images carry significant information for feature extraction. Feature extraction is one of the important methods to obtain descriptive information from the X-ray images, and a stacked block of such values forms the feature vectors. The feature vectors help in identifying the infected regions from the background. The feature formation considered for this work is the integration of two feature types, texture features and shape descriptors, derived from the gray-level co-occurrence matrix (GF) and the Zernike moments (ZMF), respectively.

3.2. Zernike Moment Features (ZMF) for Shape Description and Gray-Level Co-Occurrence Matrix (GF) for Texture Feature Extraction

Dissimilar geometric structures, shapes and sizes are present in each COVID-19-infected image for different levels of infection. From previous works [29,30,31,32], we understand that shape descriptors such as the Zernike and Hu moments [24] are good enough to define detailed shape information, especially for medical images (CT or X-ray). A good shape descriptor should:
  • Have high discrimination ability with low redundancy;
  • Be invariant to scale, rotation and translation; and
  • Provide both coarse and fine detailed representation.
Both the Zernike and Hu moments share the above qualities, but when considering higher-order moments, the Hu moments need extensive computational power. Therefore, we chose the ZMF-based shape descriptor to represent the X-ray COVID-19 lung images. ZMF is estimated for each pixel of the gamma-enhanced image using a 5 × 5 sliding window moved along the rows and columns with a stride of 1. Different polynomial orders $n$ with corresponding repetition factors $m$ were used for the window image $H(x,y)$. Next, the Zernike moments are computed by projecting $H(x,y)$ onto the complex Zernike polynomial sets, as denoted in Equation (2).
$$F_{nm} = \frac{n+1}{\pi} \iint_{x^2+y^2 \le 1} H(x,y)\, Pl^{*}_{n,m}(x,y)\, dx\, dy \quad (2)$$
where $Pl_{n,m}(x,y)$ is a Zernike polynomial, $H(x,y)$ is the image within the 5 × 5 window, $n$ is a non-negative integer, and $m$ is an integer such that $n-|m|$ is even and $0 \le |m| \le n$. The function $Pl^{*}_{n,m}(x,y)$ is the complex conjugate of $Pl_{n,m}(x,y)$, an orthogonal basis function defined as:
$$Pl_{n,m}(x,y) = Pl_{n,m}(r,\theta) = Rp_{n,m}(r)\, e^{jm\theta} \quad (3)$$
where $r=\sqrt{x^2+y^2}$, $\theta=\tan^{-1}(y/x)$, $0 \le \theta \le 2\pi$, and $j=\sqrt{-1}$.
The radial polynomial $Rp_{n,m}(r)$ is defined as:
$$Rp_{n,m}(r) = \sum_{q=0}^{(n-|m|)/2} \frac{(-1)^q\,(n-q)!}{q!\left(\frac{n+|m|}{2}-q\right)!\left(\frac{n-|m|}{2}-q\right)!}\, r^{\,n-2q} \quad (4)$$
The ZMF obtained from Equation (2) is a complex number as indicated in Equation (5).
$$F_{nm} = Rp_{ZMF} + j\,H_{ZMF} \quad (5)$$
$$|F_{nm}| = \sqrt{Rp_{ZMF}^{2} + H_{ZMF}^{2}} \quad (6)$$
In Equation (6), $|F_{nm}|$ is the magnitude of the Zernike moment, which represents the ZMF shape feature for a particular $n$ and $m$. A total of 36 $|F_{nm}|$ values are obtained from the combinations of $(n,m)$, with $n$ ranging up to 10 and $m$ chosen for each pixel such that $n-|m|$ is even and $0 \le |m| \le n$. $|F_{nm}|$ is estimated for every pixel, and the observed ZMF maps are displayed in Figure 3. From the ZMF observation, the combination of 36 orders gives finer information for differentiating the infected and non-infected regions. To analyze the Zernike sensitive features $ZMF_l$ and the Zernike robust feature map $ZMF_h$, the 36 extracted feature points chosen from the X-ray image are plotted for the different combination orders $(n,m)$, as shown in Figure 4 (blue represents $ZMF_h$ and red represents $ZMF_l$). The plot indicates that background pixels (the non-infected region) and infected pixels can be separated through a proper choice of threshold level. Equations (7) and (8) for $ZMF_h$ and $ZMF_l$ are given below.
$$ZMF_h = \{|F_{0,0}|, |F_{2,0}|, |F_{4,0}|, |F_{6,0}|, |F_{8,0}|, |F_{8,8}|, |F_{10,0}|, |F_{10,8}|\} \quad (7)$$
$$ZMF_l = \{|F_{1,1}|, |F_{2,2}| \text{ to } |F_{5,5}|, |F_{6,2}| \text{ to } |F_{7,7}|, |F_{8,2}|, |F_{8,6}|, |F_{9,1}| \text{ to } |F_{9,9}|, |F_{10,10}|\} \quad (8)$$
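To make the ZMF computation concrete, the following Python sketch evaluates the radial polynomial of Equation (4) and the moment magnitude $|F_{nm}|$ of Equations (2)-(6) for one 5 × 5 window. It is a sketch under stated assumptions: the mapping of window pixels onto the unit disc and the normalization are chosen here for illustration and may differ from the authors' MATLAB implementation.

```python
import numpy as np
from math import factorial

def radial_poly(n: int, m: int, r: np.ndarray) -> np.ndarray:
    """Radial polynomial Rp_{n,m}(r) of Equation (4)."""
    m = abs(m)
    rp = np.zeros_like(r)
    for q in range((n - m) // 2 + 1):
        coeff = ((-1) ** q * factorial(n - q)
                 / (factorial(q)
                    * factorial((n + m) // 2 - q)
                    * factorial((n - m) // 2 - q)))
        rp += coeff * r ** (n - 2 * q)
    return rp

def zernike_magnitude(window: np.ndarray, n: int, m: int) -> float:
    """|F_{n,m}| of Equations (2)-(6) for one 5 x 5 window.

    Assumption: window pixel centres are mapped onto the unit disc; the
    paper does not spell out its exact coordinate normalisation.
    """
    size = window.shape[0]
    coords = (2.0 * np.arange(size) - size + 1) / (size - 1)  # values in [-1, 1]
    x, y = np.meshgrid(coords, coords)
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    mask = r <= 1.0                                          # restrict to the unit disc
    basis = radial_poly(n, m, r) * np.exp(-1j * m * theta)   # conjugate Pl*_{n,m}
    f_nm = (n + 1) / np.pi * np.sum(window * basis * mask)
    return float(np.abs(f_nm))

# Usage: slide a 5 x 5 window (stride 1) over the enhanced image and collect
# |F_{n,m}| for the 36 (n, m) pairs of Equations (7) and (8).
```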
To improve the accuracy of the classification model, we computed and merged two additional texture features (GF) [33]. Texture features are also crucial in medical image analysis for pointing out pulmonary consolidation and GGO. The GF captures the relationship between image pixels at a given offset and is denoted $Q = Q(u,v \mid d, \theta)$. Here, GF is used to estimate the co-occurrence of the $r$th pixel intensity with the $s$th pixel intensity for length $d = 1$ and direction $\theta = 0$. The texture features are extracted by employing the same procedure as for ZMF, i.e., using a 5 × 5 window with a stride of 1 over the gamma-enhanced image. The contrast $Co$ and variance $Va$ considered for our implementation are shown in Equations (9) and (10). We deliberately selected these two features, since they provide proper discrimination between the infected and non-infected pixels of the X-ray image, as shown in Figure 5.
$$Co = \sum_{u=0}^{J-1} \sum_{v=0}^{J-1} Q(u,v)\,(u-v)^2 \quad (9)$$
$$Va = \sum_{u=0}^{J-1} \sum_{v=0}^{J-1} Q(u,v)\,(u-\mu)^2 \quad (10)$$
$$GF = \{Co,\ Va\} \quad (11)$$
Here, $i$ indexes the training images, $\mu$ refers to the mean, $J$ refers to the size of the co-occurrence matrix $Q$, and $GF_i$ refers to the two gray-level co-occurrence matrix features, viz. contrast and variance. The texture features computed from the GF and the shape features obtained from the ZMF are then concatenated to present a feature vector of size 38, as shown in Equation (12). $F_i$ represents the features extracted from the X-ray dataset using ZMF and GF.
$$F_i = \{ZMF_{h,i},\ ZMF_{l,i},\ GF_i\} \quad (12)$$
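A hedged sketch of the GF computation for one window follows, using scikit-image's graycomatrix and graycoprops (scikit-image ≥ 0.19) for the co-occurrence matrix and contrast; the GLCM variance of Equation (10) is computed manually, since the exact mean definition used by the authors is not spelled out, and the weighted row-index mean is assumed here.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window: np.ndarray, levels: int = 256) -> np.ndarray:
    """Contrast Co and variance Va (Equations (9)-(11)) for one 5 x 5 window,
    with distance d = 1 and angle theta = 0 as stated above.
    `window` must be an integer (uint8) grayscale patch.
    """
    glcm = graycomatrix(window, distances=[1], angles=[0],
                        levels=levels, symmetric=False, normed=True)
    q = glcm[:, :, 0, 0]                        # normalised co-occurrence matrix Q(u, v)
    contrast = graycoprops(glcm, 'contrast')[0, 0]
    u = np.arange(levels, dtype=np.float64)
    mu = np.sum(u[:, None] * q)                 # assumed mean: weighted row index
    variance = np.sum(q * (u[:, None] - mu) ** 2)
    return np.array([contrast, variance])       # the two GF entries of Equation (11)
```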
The formation of the training dataset from $F_i$ is discussed in the following section.

4. Dataset and Experimental Design

4.1. Collection of COVID-19 Dataset

A publicly available COVID-19 dataset [34,35] was used to analyze the proposed implementation. There were 43 female and 82 male positive cases, collected from Cho Ray Hospital, Ho Chi Minh City, Vietnam, and Myongji Hospital, Goyang, South Korea. The approximate average age of the COVID-19-positive subjects was 55 years. Most of the patients had symptoms such as sore throat, dry cough, low-grade subjective fever and fatigue. In addition, a small consolidation in the right upper lobe, ground-glass opacities in both lower lobes and multifocal patchy opacities in both lungs were observed in the clinical statements. Comprehensive metadata on machine vendors, dose and peak kilovoltage (kVp) was not provided in the open-source datasets. In existing implementations [36], datasets of CT and X-ray images are used for the binary classification of infected and non-infected regions. For our analysis, the DNN model adopted 60 images (non-infected X-ray images: 17; infected X-ray images: 33) of resolution 224 × 224 pixels from [34,35]. The training network adopted 50 X-ray images, and the remaining 10 X-ray images (testing data) were used to test the performance of the trained model. The X-ray images were downscaled to 224 × 224 pixels; due to resizing, the quality of each image may be slightly decreased, but through pre-processing (gamma correction) and handcrafted feature extraction (Zernike moments and the gray-level co-occurrence matrix), any harmful effects of down-sampling are overcome. The final performance, with accuracy (A), specificity (Sp), sensitivity (Se), precision (Pr) and F1-score (F1) of 93.4%, 95%, 72.4%, 74.9% and 72.3%, respectively, illustrates the same. These two methods are adopted in our proposed DNN model and achieve the overall performance metrics discussed in Section 5.
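For illustration, loading and downscaling one image to 224 × 224 might look as follows; the function and its path argument are placeholders for files from the Kaggle datasets [34,35], not the authors' code.

```python
import numpy as np
from skimage import img_as_float
from skimage.io import imread
from skimage.transform import resize

def load_xray(path: str) -> np.ndarray:
    """Load one chest X-ray and downscale it to the 224 x 224 size used here.
    `path` is a placeholder for a file from the Kaggle datasets [34,35]."""
    img = img_as_float(imread(path, as_gray=True))     # grayscale in [0, 1]
    img = resize(img, (224, 224), anti_aliasing=True)  # downscale with smoothing
    return (img * 255).astype(np.uint8)                # back to 8-bit
```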

4.2. Training Method and Testing for X-ray Images

COVID-19 X-ray images contain pixels of infected and non-infected regions. Training on the whole X-ray image degrades accuracy and increases training time, which reduces the overall network performance [21,22,27]. The structure of the human lung system is captured during the chest radiography process (chest X-ray, CXR). The lung structure in the X-ray images is partitioned into 'Dx' and 'Sin', denoting the right and left sides of the lungs, as marked in Figure 6a. The top-to-bottom right and left sides of the lung region are denoted as the superior lobe and the inferior lobe, as shown in Figure 6a. The COVID-19 infection occurs between the superior lobe and the inferior lobe regions [37]. This motivates processing half of the pixels in the X-ray image instead of the whole image; the result is better accuracy with a smaller dataset and less processing time.
Based on the above analysis, a novel region blocking (RB) algorithm is introduced to process each training and testing image in the X-ray dataset. Instead of training and testing on the whole image, the proposed RB algorithm with a suitable sub-image (block) size employs only half the number of pixels during the training and testing process of the network. This enables the network to present accurate results with a lower computation time. The works [25,26] used the full image (complete resolution) during training, resulting in a high processing time. In addition, the sensitivity on the test images is lowered by not identifying the infected pixels (true positives), which is mainly due to the limited dataset during training. Hence, the proposed training and testing implementation uses the RB algorithm to achieve better performance in terms of accuracy and processing time. The model has been tested with different blocking sizes: 8 × 8, 16 × 16, 32 × 32 and 56 × 56. The results for the different RB sizes are reported in terms of processing time in Table 2.
In this table, the total number of RBs required for the 32 × 32 size is 49 blocks. Of these 49 blocks, 28 sub-blocks (N, 2N, 4N) are enough to process the complete lung structure (superior to inferior lobe), as shown in Figure 6a. The performance of the RB algorithm is compared to the image processed without the RB approach (baseline); the improvement is 2.6 times over the standard network. In addition, the analysis was done with different RB sub-image sizes, and a sub-image size of 32 × 32 was chosen based on the trade-off between accuracy and processing time (refer to the plot in Figure 6b, illustrating the trade-off between RB size and computation time).

4.3. Formation of Training Data Using RB

The training set was obtained by applying the RB approach to the 50 training images of size 224 × 224, pre-processed using gamma correction, with feature maps generated following the procedures explained in Section 3. For the 224 × 224 images, different RB sub-image sizes were examined: 8 × 8, 16 × 16, 32 × 32 and 56 × 56, as shown in Figure 6a. Based on Table 2, the 32 × 32 RB sub-image size was selected and applied to $F_{i,j}$. With the RB_32 size, each X-ray image produces 28 blocks, and the chosen RB_32 gives the best trade-off between accuracy and processing time, as plotted in Figure 6b. These block features help to identify the image as COVID-19-infected or non-COVID-19. Algorithm 1 for the RB approach is given below.
The variable C denotes the number of N × N RBs present in one row, D the total number of rows of N × N RBs to process, B the ensemble of N × N regions, and RB the set of N × N blocks from each $F_i$. The set of RB features gives enough information to distinguish infected points from non-infected points. The pixels are arranged in vector format, with the corresponding target, for training the network. The training set consists of 31,920 RBs, and each RB contains 1024 pixels. Each pixel within an RB has 38 features (ZMF and GF), which presents a total of 38,912 sets of feature information per block. The number of pixels taken from the infected region was kept larger to avoid misclassification during testing. During the training process, all the feature blocks are labeled with the corresponding binary target T (1's or 0's).
$$Y_i = \{RB_i,\ T_i\} \quad (13)$$
$$T_i = \{BC0_i,\ BC1_i\} \quad (14)$$
Here, $Y_i$ refers to the training dataset, RB to the region blocks formed from the input features, T to the target for the input, and $i$ to the index of the training image. In T, BC0 = 1 represents the presence of the COVID-19 infection, while BC0 = 0 represents a non-COVID-19 region. Furthermore, the feature set $Y_i$ was trained with different classifiers, Deep Neural Network (DNN), Support Vector Machine (SVM), Decision Tree (DT), Gaussian Naive Bayes (GNB) and Logistic Regression (LR), to find the most suitable classifier for the proposed implementation. The classifier performances are analyzed using the standard evaluation metrics Accuracy, Specificity, Sensitivity and Area Under the Curve (AUC), as illustrated in Table 3. From Table 3, the DNN classifier outperforms the other classifiers on the set of training features. The DNN classifier confusion matrix is presented in Table 4. The ROC curves of the different classifiers, used for the AUC estimation, are shown in Figure 7.
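The classifier comparison of Table 3 could be reproduced along these lines with scikit-learn; the hyperparameters shown are illustrative assumptions (the paper does not specify them), and random placeholder data stands in for the real 31,920-block training set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Placeholder data: one 38-feature row per sample with a binary target T,
# standing in for the real region-block training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 38))
y = rng.integers(0, 2, size=1000)

classifiers = {
    "SVM": SVC(probability=True),
    "DT": DecisionTreeClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "GNB": GaussianNB(),
    "DNN": MLPClassifier(hidden_layer_sizes=(58, 58, 58), activation="tanh",
                         max_iter=500),
}
for name, clf in classifiers.items():
    clf.fit(X, y)
    auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
    # Scores on random data are meaningless; this only exercises the pipeline.
    print(f"{name}: training AUC = {auc:.3f}")
```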
Algorithm 1. Region blocking of sub-image size N × N.
Input: F(i,j) = {ZMF(i,j), GF(i,j)}
    // Set of features; 'i' indexes the testing and training images,
    // 'j' indexes the features per image (F1 to F36: ZMF; F37, F38: GF).
Output: RB(i,j) = B(i,j)   // set of region blocks per image
Start
    Initialise N;                    // region block size, chosen based on the X-ray image size
    [row, col, z] = size(F(i,j));    // X-ray image size
    // Generalized way to select the RB size
    if N == 56                       // RB size 56 x 56
        C = 4;  D = 3;
    elseif N == 32                   // RB size 32 x 32
        C = 7;  D = 5;
    elseif N == 16                   // RB size 16 x 16
        C = 14; D = 8;
    elseif N == 8                    // RB size 8 x 8
        C = 28; D = 13;
    end
    m = col / C;
    n = row / C;
    for i = 1 to 50                  // training images
        for j = 1 to 38              // total number of features
            for b = 2 to D           // row-wise, from the 2nd to the 5th block row
                for g = 1 to C       // number of blocks present in one row
                    B(i,j) = F(i,j)((b-1)*m+1 : b*m, (g-1)*n+1 : g*n);  // 28 blocks per F(i,j)
                    RB(i,j) = B(i,j);  // feature blocks used to train or test the network
                end
            end
        end
    end
Stop
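For reference, a compact NumPy equivalent of Algorithm 1 for the N = 32 case is sketched below; the row range 2 to 5 mirrors the "2nd to 5th row" selection that yields 28 blocks per image.

```python
import numpy as np

def region_blocks(feature_map: np.ndarray, n_block: int = 32,
                  row_start: int = 2, row_end: int = 5) -> np.ndarray:
    """Extract the processed N x N blocks from one 224 x 224 x 38 feature map,
    mirroring Algorithm 1 for N = 32 (rows 2-5 of the 7 x 7 block grid)."""
    rows, cols, feats = feature_map.shape
    c = cols // n_block                      # blocks per row (C = 7 for N = 32)
    blocks = []
    for b in range(row_start, row_end + 1):  # '2nd to 5th row' of blocks
        for g in range(c):
            blocks.append(feature_map[(b - 1) * n_block:b * n_block,
                                      g * n_block:(g + 1) * n_block, :])
    return np.stack(blocks)                  # 4 rows x 7 cols = 28 blocks

blocks = region_blocks(np.zeros((224, 224, 38)))
print(blocks.shape)  # (28, 32, 32, 38)
```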

4.4. Target Classification Using DNN

The set of ZMF and GF feature blocks is formed using the extraction methods discussed in Section 3. The feature set is trained by supervised learning using a Deep Neural Network (DNN) model. The DNN architecture is a feed-forward neural network with multiple hidden layers: an input layer (38 neurons representing the 38 features), 3 hidden layers (58 neurons each), followed by an output layer (a binary classifier), as shown in Figure 8.
Considering the learnable parameters of the DNN model, the total number of weights and biases is 9224 (2262 for H1,L, 3422 for H2,L, 3422 for H3,L and 118 for the output layer). 'H' refers to the hidden layers and 'L' to the number of neurons per hidden layer, i.e., 58 neurons in each. The Scaled Conjugate Gradient (SCG) training function is used to converge the cross-entropy loss by modifying the weights and biases. Tan-sigmoid is the activation function for H1,L, H2,L and H3,L, and the binary output probability is produced by the softmax function at the output layer. The target outputs are '1' and '0' for the infected and non-infected (background) regions. Training on the dataset requires around 20 min over 100 epochs, with a good error performance of 0.1289.
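A hedged Keras sketch of the Figure 8 architecture follows. MATLAB's scaled conjugate gradient trainer has no direct Keras equivalent, so a standard optimizer is substituted as an assumption; everything else follows the stated layer sizes.

```python
import tensorflow as tf

# 38 inputs -> three hidden layers of 58 tanh units -> 2-way softmax,
# matching the layer sizes in Figure 8 (9224 learnable parameters in total).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(38,)),
    tf.keras.layers.Dense(58, activation="tanh"),    # H1: 38*58 + 58 = 2262
    tf.keras.layers.Dense(58, activation="tanh"),    # H2: 58*58 + 58 = 3422
    tf.keras.layers.Dense(58, activation="tanh"),    # H3: 3422
    tf.keras.layers.Dense(2, activation="softmax"),  # output: 58*2 + 2 = 118
])
# Assumption: Adam stands in for MATLAB's SCG trainer, which Keras lacks.
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # should report 9224 trainable parameters
```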

5. Results and Discussion

The trained DNN was evaluated with the test dataset containing 10 X-ray images. After the enhancement technique and feature extraction along with the RB algorithm, the testing blocks are collected for each 224 × 224 × 38 test volume with 32 × 32 RBs. All the experimental work was performed in a MATLAB R2020a environment run on a Core i7 processor at 2.80 GHz with 8 GB RAM. Generating the features takes around 20 min for the 10 test images, and the time required per test image is around 115 s. The challenges of automatic classification for X-ray images are as follows:
  • Each X-ray image varies depending on the texture, position and lung shape across different cases. A small consolidation can yield false negatives over the whole image.
  • The difficulty in identifying the ground-glass opacities (GGO) of the infection region within the X-ray image is due to the occurrence of blurriness and low contrast.
  • AI models need a large number of training images to train the network, and the collection of images and ground truth annotation requires more specialist human resources.
The feature blocks of 38 features, along with the RB algorithm, were used to train the DNN model, as explained in Section 3. The classifier outputs the probabilities of two class scores, as given in Equation (14); the probability score decides which class the output falls into, either 'BC0 = 1 and BC1 = 0' or 'BC0 = 0 and BC1 = 1'. The 38 feature blocks of the 10 test images were classified with the trained DNN, and the performance evaluation metrics are illustrated in Table 5, which displays the performance of the 10 test images. Apart from that, an ablation study was performed to analyze the feature formation. When the 36 ZMF features alone were trained and tested, the per-image sensitivity was reduced and the true positive information was not captured. Hence, adding the two texture features (GF) during training produced remarkable results compared to the previous configuration: the performance metrics achieved with the combination of 36 ZMF shape descriptors and 2 GF texture features exceed those of the 36 ZMF alone. We evaluated each test image with the standard metrics Sensitivity, Accuracy, Specificity, Precision and F1-score, computed from confusion matrices as presented in Table 6.
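For clarity, the standard metrics (formulas given under Table 5) can be computed from confusion-matrix counts as sketched below; the Test-1 counts from Table 6 serve as a worked example, where the accuracy reproduces the reported 93.2%.

```python
def metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Standard metrics from confusion-matrix counts (formulas under Table 5)."""
    a = (tp + tn) / (tp + fn + tn + fp)   # accuracy
    se = tp / (tp + fn)                   # sensitivity (recall)
    sp = tn / (tn + fp)                   # specificity
    pr = tp / (tp + fp)                   # precision
    f1 = 2 * pr * se / (pr + se)          # F1-score
    return {"A": a, "Se": se, "Sp": sp, "Pr": pr, "F1": f1}

# Worked example with the Test-1 counts of Table 6:
print(metrics(tp=5734, fn=1032, fp=918, tn=20988))  # A ~= 0.932, i.e. 93.2%
```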
The proposed work is compared with previous works [20,21,22,23,25,26,27] that employed AI for classifying the COVID-19 region from X-ray image datasets, as presented in Table 7. In [20], the DarkCovidNet model was analyzed using 125 COVID-19-positive images, with 80 percent used for training and the remainder for testing; its performance is lower in terms of A and Sp. The patch-wise CNN model is described in [21], whose A, Se and Sp are 1.2 times lower than our method. In our implementation, A and Sp are moderately higher, and Se is 13% lower, than Decompose, Transfer, and Compose (DeTraC) [22]. Likewise, the COVID-Net [25], COVIDX-Net [26] and Stacked CovXNet [27] models are compared in terms of accuracy (A), which is 0.86%, 3.7% and 6.9% lower, respectively, than our implementation. Overall, our test results are enhanced with a limited number of training images, as shown in Table 7. In addition, the improved performance metrics A, Se and Sp are related to the existing state-of-the-art methods in Table 7 (the 3rd, 5th and 7th columns).

6. Conclusions

In this paper, an improved DNN classifier model for fast and accurate COVID-19 detection from chest X-ray images is proposed. The model adopts combined feature extraction algorithms: 36 unique Zernike moments (shape features) and 2 GF textures (contrast and variance). In addition, region blocking with a 32 × 32 size is applied to the X-ray image to select the lung region for the training and testing datasets. Our implementation provides an effective and suitable tool for the radiographer to categorize an input image with a limited training dataset. The performance evaluation observed 93.4%, 95%, 72.4%, 74.9% and 72.3% in terms of accuracy (A), specificity (Sp), sensitivity (Se), precision (Pr) and F1-score (F1), respectively. The proposed DNN model enhanced A by 1× to 1.3×, Sp by 1.0× to 1.1× and Se by 1× to 1.2× compared to existing models such as DarkCovidNet, COVID-Net, COVIDX-Net, Stacked CovXNet, DeTraC deep CNN and the patch-based CNN. Furthermore, these standard existing models need a larger training dataset to sustain the required accuracy and computation speed. The proposed DNN implementation uses a binary COVID-19 classification model, classifying each X-ray image up to 2.6 times faster; as a consequence, the model does not grade the level of infection or classify other lung infections. Future work will automate the infection layout and the various levels of COVID-19 infection by segmentation of the chest X-ray images.

Author Contributions

Conceptualization and methodology framework by K.G.S.K.; Investigation and supervision by A.V.; Software validation and draft preparation by D.S.; Validation datasets and review by A.N.J.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Chest X-ray image datasets for our work were collected from the following publicly available databases: https://www.kaggle.com/datasets/praveengovi/coronahack-chest-xraydataset/metadata (accessed on 21 March 2021); https://www.kaggle.com/datasets/aysendegerli/qatacov19-dataset/metadata (accessed on 21 March 2021). MATLAB codes are publicly available at https://github.com/deepika-s517/Rapid-chest-X-ray-image-classification-for-COVID-19-detection.

Acknowledgments

The work was supported in part by the Council of Scientific and Industrial Research (CSIR), New Delhi, through the Senior Research Fellow award under grant 09/844(0104)/2020- EMR.I. The authors would like to thank Vellore Institute of Technology, Vellore for providing MATLAB licensed software and high-performance system facilities.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shi, F.; Wang, J.; Shi, J.; Wu, Z.; Wang, Q.; Tang, Z.; He, K.; Shi, Y.; Shen, D. Review of artificial intelligence techniques in imaging data acquisition, segmentation, and diagnosis for COVID-19. IEEE Rev. Biomed. Eng. 2020, 14, 4–15.
  2. World Health Organization. WHO Coronavirus (COVID-19) Dashboard with Vaccination Data. 2021. Available online: https://covid19.who.int/ (accessed on 7 June 2021).
  3. Sepúlveda, A.; Periñán-Pascual, C.; Muñoz, A.; Martínez-España, R.; Hernández-Orallo, E.; Cecilia, J.M. COVIDSensing: Social Sensing Strategy for the Management of the COVID-19 Crisis. Electronics 2021, 10, 3157.
  4. Alghamdi, H.S.; Amoudi, G.; Elhag, S.; Saeedi, K.; Nasser, J. Deep learning approaches for detecting COVID-19 from chest X-ray images: A survey. IEEE Access 2021, 9, 20235–20254.
  5. Serena Low, W.C.; Chuah, J.H.; Tee, C.A.T.; Anis, S.; Shoaib, M.A.; Faisal, A.; Khalil, A.; Lai, K.W. An overview of deep learning techniques on chest X-ray and CT scan identification of COVID-19. Comput. Math. Methods Med. 2021, 2021, 5528144.
  6. Jain, R.; Gupta, M.; Taneja, S.; Hemanth, D.J. Deep learning-based detection and analysis of COVID-19 on chest X-ray images. Appl. Intell. 2021, 51, 1690–1700.
  7. Negi, A.; Kumar, K. AI-based implementation of decisive technology for prevention and fight with COVID-19. Cyber-Phys. Syst. 2022, 1–14.
  8. Zhang, D.; Ren, F.; Li, Y.; Na, L.; Ma, Y. Pneumonia detection from chest X-ray images based on convolutional neural network. Electronics 2021, 10, 1512.
  9. Negi, A.; Kumar, K.; Chauhan, P.; Rajput, R.S. Deep Neural Architecture for Face Mask Detection on Simulated Masked Face Dataset against COVID-19 Pandemic. In 2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS); IEEE: Piscataway, NJ, USA, 2021; pp. 595–600.
  10. Zhang, W.; Wu, Y.; Yang, B.; Hu, S.; Wu, L.; Dhelim, S. Overview of multi-modal brain tumor MR image segmentation. Healthcare 2021, 9, 1051.
  11. Xie, X.; Zhong, Z.; Zhao, W.; Zheng, C.; Wang, F.; Liu, J. Chest CT for typical coronavirus disease 2019 (COVID-19) pneumonia: Relationship to negative RT-PCR testing. Radiology 2020, 296, E41–E45.
  12. Cao, M.; Zhang, D.; Wang, Y.; Lu, Y.; Zhu, X.; Li, Y.; Xue, H.; Lin, Y.; Zhang, M.; Sun, Y.; et al. Clinical features of patients infected with the 2019 novel coronavirus (COVID-19) in Shanghai, China. medRxiv 2020.
  13. Ahmed, I.; Chehri, A.; Jeon, G. A sustainable deep learning based framework for automated segmentation of COVID-19 infected regions: Using U-Net with an attention mechanism and boundary loss function. Electronics 2022, 11, 2296.
  14. Ahsan, M.; Based, M.; Haider, J.; Kowalski, M. COVID-19 detection from chest X-ray images using feature fusion and deep learning. Sensors 2021, 21, 1480.
  15. Mohammad-Rahimi, H.; Nadimi, M.; Ghalyanchi-Langeroudi, A.; Taheri, M.; Ghafouri-Fard, S. Application of machine learning in diagnosis of COVID-19 through X-ray and CT images: A scoping review. Front. Cardiovasc. Med. 2021, 8, 185.
  16. Bhargava, A.; Bansal, A. Novel coronavirus (COVID-19) diagnosis using computer vision and artificial intelligence techniques: A review. Multimed. Tools Appl. 2021, 80, 19931–19946.
  17. Ali, M.N.Y.; Sarowar, M.G.; Rahman, M.L.; Chaki, J.; Dey, N.; Tavares, J.M.R. Adam deep learning with SOM for human sentiment classification. Int. J. Ambient Comput. Intell. (IJACI) 2019, 10, 92–116.
  18. Kumari, S.; Singh, M.; Kumar, K. Prediction of Liver Disease Using Grouping of Machine Learning Classifiers. In International Conference on Deep Learning, Artificial Intelligence and Robotics; Springer: Cham, Switzerland, 2019; pp. 339–349.
  19. Dabral, I.; Singh, M.; Kumar, K. Cancer Detection Using Convolutional Neural Network. In International Conference on Deep Learning, Artificial Intelligence and Robotics; Springer: Cham, Switzerland, 2019; pp. 290–298.
  20. Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Acharya, U.R. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 2020, 121, 103792.
  21. Oh, Y.; Park, S.; Ye, J.C. Deep learning COVID-19 features on CXR using limited training data sets. IEEE Trans. Med. Imaging 2020, 39, 2688–2700.
  22. Abbas, A.; Abdelsamea, M.M.; Gaber, M.M. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 2021, 51, 854–864.
  23. Zhang, J.; Xie, Y.; Li, Y.; Shen, C.; Xia, Y. COVID-19 screening on chest X-ray images using deep learning-based anomaly detection. arXiv 2020, arXiv:2003.12338v1.
  24. Bardhan, S.; Roga, S. Feature Based Automated Detection of COVID-19 from Chest X-ray Images. In Emerging Technologies during the Era of COVID-19 Pandemic; Springer: Berlin/Heidelberg, Germany, 2021; Volume 348, p. 115.
  25. Wang, L.; Lin, Z.Q.; Wong, A. COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci. Rep. 2020, 10, 19549.
  26. Hemdan, E.E.D.; Shouman, M.A.; Karar, M.E. COVIDX-Net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv 2020, arXiv:2003.11055.
  27. Mahmud, T.; Rahman, M.A.; Fattah, S.A. CovXNet: A multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization. Comput. Biol. Med. 2020, 122, 103869.
  28. Rahman, T.; Khandakar, A.; Qiblawey, Y.; Tahir, A.; Kiranyaz, S.; Kashem, S.B.A.; Islam, M.T.; Al Maadeed, S.; Zughaier, S.M.; Khan, M.S.; et al. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Comput. Biol. Med. 2021, 132, 104319.
  29. Raj, A.N.J.; Mahesh, V.G. Zernike-moments-based shape descriptors for pattern recognition and classification applications. In Advanced Image Processing Techniques and Applications; IGI Global: Hershey, PA, USA, 2017; pp. 90–120.
  30. Selvaraj, D.; Venkatesan, A.; Mahesh, V.G.; Joseph Raj, A.N. An integrated feature frame work for automated segmentation of COVID-19 infection from lung CT images. Int. J. Imaging Syst. Technol. 2021, 31, 28–46.
  31. Elaziz, M.A.; Hosny, K.M.; Salah, A.; Darwish, M.M.; Lu, S.; Sahlol, A.T. New machine learning method for image-based diagnosis of COVID-19. PLoS ONE 2020, 15, e0235187.
  32. Thepade, S.D.; Bang, S.V.; Chaudhari, P.R.; Dindorkar, M.R. COVID-19 identification from chest X-ray images using machine learning classifiers with GLCM features. ELCVIA Electron. Lett. Comput. Vis. Image Anal. 2020, 19, 85–97.
  33. Garg, M.; Dhiman, G. A novel content-based image retrieval approach for classification using GLCM features and texture fused LBP variants. Neural Comput. Appl. 2021, 33, 1311–1328.
  34. Praveen. CoronaHack: Chest X-ray Dataset. 2021. Available online: https://www.kaggle.com/praveengovi/coronahack-chest-xraydataset (accessed on 21 March 2021).
  35. QaTa-COVID19 Dataset. 2020. Available online: https://www.kaggle.com/aysendegerli/qatacov19-dataset (accessed on 21 March 2021).
  36. Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 image data collection. arXiv 2020, arXiv:2003.11597. Available online: https://github.com/ieee8023/covid-chestxray-dataset (accessed on 4 April 2021).
  37. Jing, B.; Wang, Z.; Xing, E. Show, describe and conclude: On exploiting the structure information of chest X-ray reports. arXiv 2020, arXiv:2004.12274.
Figure 1. Block diagram of binary classification for the X-ray image.
Figure 2. Enhancement of X-ray image (a) original image (b) gamma correction image (c) histogram of original image (d) histogram of gamma correction.
Figure 3. Visual representation of Zernike moment features with a combination of a different order, |Fn,m|.
Figure 4. Experimental analysis for the extraction of 36 Zernike moment features (ZMF).
Figure 5. GF plot for a number of pixels versus intensity (a) Variance (b) Contrast (red dots represent non-infected pixels and blue dots represent the infected region).
Figure 6. (a) Visual representation of X-ray image with different RB sizes, (b) RB size versus processing time and accuracy.
Figure 7. ROC for different classifier models training dataset.
Figure 8. Architecture of deep neural network (DNN).
Table 1. Literature study of the related works for COVID-19 classification.

| Reference | Model | Dataset Formation | Findings |
| --- | --- | --- | --- |
| Ozturk et al., 2020 [20] | DarkCovidNet architecture for X-ray image classification | Dataset I: COVID-19 positive images: 125; no-finding data: 500. Dataset II: COVID-19 positive: 125; pneumonia: 500; no findings: 500 | DarkCovidNet achieved an accuracy of 87% and specificity of 92%. Compared to [15,16], accuracy was improved by 1.2 times and 1.0 times. |
| Oh et al., 2020 [21] | Patch-based convolutional neural network for global and local approaches | Combination of four datasets: COVID-19 positive images: 180; COVID-19 negative images: 186; other infection images: 131 | Accuracy, recall and sensitivity for the global approach: 70.7%, 60.1% and 89.7%, respectively. Accuracy, recall and sensitivity for the local approach: 88.9%, 85.9% and 84%, respectively. |
| Bardhan et al., 2021 [24] | 55 texture features extracted in seven different groups (G1 to G7); four classifiers used: RF, SVM, KNN and LDA | Dataset I: COVID-19 positive images: 219; normal data: 125. Dataset II: COVID-19 positive images: 125; no findings: 500 | Accuracy from the four classifiers: RF 98.6%, KNN 91.2%, SVM 87% and LDA 97.9%. |
| Abbas et al., 2021 [22] | Decompose, Transfer, and Compose (DeTraC) based deep convolutional neural network | COVID-19 positive images: 105; SARS-infected images: 11; normal images: 80 | DeTraC provides 84.2% accuracy, 89% specificity and 82% sensitivity; performance was improved by 1.2 times compared to the patch-based CNN [15]. |
| Zhang et al., 2020 [23] | Threshold-based convolutional neural network (ResNet) technique for COVID-19 classification | COVID-19 positive images: 70; other images: 1008 | When the threshold parameter was dropped from 0.50 to 0.15, sensitivity rose from 72% to 96% and specificity decreased from 97.9% to 70%. |
| Mahmud et al., 2020 [27] | Stacked multi-resolution CovXNet | Dataset I: COVID-19 positive images: 305; viral pneumonia data: 305. Dataset II: COVID-19 positive images: 305; normal images: 305; pneumonia images: 305 | Stacked CovXNet produces 87% accuracy and 85.5% specificity for dataset I; on dataset II it achieved 90.3% accuracy. The accuracy on dataset I was improved by 1.0 and 1.2 times compared to [15,16]. |
| Hemdan et al., 2020 [26] | COVIDX-Net architecture for COVID-19 classification | COVID-19 positive images: 25; normal images: 25 | COVIDX-Net achieved an accuracy of 90%; compared to [14], improved by 1.0 times. |
Table 2. Timing estimation of different region blocking sizes for a 224 × 224 X-ray image.

| Blocking Size | Total Number of Blocks (RBs) | Number of Processed Blocks (2nd to 5th Row-Wise) | Processing Time (s) | Improvement (Times) |
| --- | --- | --- | --- | --- |
| 56 × 56 | 16 | 8 | 123 | 2.6× |
| 32 × 32 | 49 | 28 | 124 | 2.6× |
| 16 × 16 | 196 | 98 | 167 | 1.9× |
| 8 × 8 | 784 | 392 | 220 | 1.4× |
| Without RB algorithm | - | - | 320 to 420 (baseline) | - |

Note: Improvement in times (×) is with respect to the baseline (without the RB approach).
Table 3. Test evaluation metrics of a dataset for different classifiers with image block size 32 × 32.

| Classifiers | Accuracy | Specificity | Sensitivity | AUC |
| --- | --- | --- | --- | --- |
| SVM | 0.899 | 0.844 | 0.631 | 0.849 |
| DT | 0.915 | 0.988 | 0.694 | 0.903 |
| LR | 0.853 | 0.900 | 0.539 | 0.825 |
| GNB | 0.876 | 0.925 | 0.612 | 0.874 |
| DNN | 0.950 | 0.977 | 0.779 | 0.950 |
Table 4. Confusion matrix for the training X-ray dataset with block size 32 × 32 (16-RB).

| Y = (B, T) | Predicted: Infected Blocks | Predicted: Background Blocks | Accuracy (%) |
| --- | --- | --- | --- |
| Training dataset: Infected blocks | 6671 (20.9%) | 734 (2.3%) | 95.00% |
| Training dataset: Background blocks | 862 (2.7%) | 23,653 (74.1%) | |
Table 5. Performance evaluation using 38 features for the 10 test images.

| Test Image | Accuracy (A) | Sensitivity (Se) | Specificity (Sp) | Precision (Pr) | F1-Score (F1) |
| --- | --- | --- | --- | --- | --- |
| Test-1 | 0.932 | 0.862 | 0.953 | 0.812 | 0.727 |
| Test-2 | 0.956 | 0.729 | 0.949 | 0.719 | 0.702 |
| Test-3 | 0.971 | 0.660 | 0.989 | 0.705 | 0.649 |
| Test-4 | 0.901 | 0.775 | 0.960 | 0.787 | 0.792 |
| Test-5 | 0.942 | 0.810 | 0.944 | 0.818 | 0.827 |
| Test-6 | 0.896 | 0.747 | 0.912 | 0.632 | 0.648 |
| Test-7 | 0.952 | 0.823 | 0.976 | 0.796 | 0.811 |
| Test-8 | 0.900 | 0.801 | 0.935 | 0.772 | 0.826 |
| Test-9 | 0.927 | 0.511 | 0.958 | 0.670 | 0.543 |
| Test-10 | 0.970 | 0.523 | 0.932 | 0.786 | 0.714 |
| Average | 0.934 | 0.724 | 0.950 | 0.749 | 0.723 |
$$A = \frac{TP+TN}{TP+FN+TN+FP}; \quad Se = \frac{TP}{TP+FN}; \quad Sp = \frac{TN}{TN+FP}; \quad Pr = \frac{TP}{TP+FP}; \quad F1 = \frac{2 \times Pr \times Se}{Pr+Se}.$$
Table 6. Confusion matrix for the test-1 X-ray image with block size 32 × 32 (16-RB).

| DNN Classifier | Predicted: Infected Points | Predicted: Background Points | Accuracy (%) |
| --- | --- | --- | --- |
| Test image 1: Infected points | 5734 (20%) | 1032 (3.6%) | 93.20% |
| Test image 1: Background points | 918 (3.2%) | 20,988 (73.2%) | |
Table 7. Comparison of the proposed model with existing results.

| Model | Accuracy (A) | Improve (Times) | Specificity (Sp) | Improve (Times) | Sensitivity (Se) | Improve (Times) |
| --- | --- | --- | --- | --- | --- | --- |
| DarkCovidNet [20] | 87.02 | 1.0× | 0.921 | 1.0× | - | - |
| Patch-based CNN [21] | 70.7 | 1.3× | 0.897 | 1.05× | 0.601 | 1.2× |
| DeTraC deep CNN [22] | 84.2 | 1.1× | 0.891 | 1.06× | 0.821 | - |
| ResNet [23] | 95.18 | - | 0.979 | - | 0.720 | 1.0× |
| COVID-Net [25] | 92.6 | 1.0× | - | - | - | - |
| COVIDX-Net [26] | 90.0 | 1.0× | - | - | - | - |
| Stacked CovXNet [27] | 87.3 | 1.0× | 0.855 | 1.1× | 0.881 | - |
| Proposed DNN (38 features) | 93.4 | | 0.950 | | 0.724 | |

'-' indicates data unavailability.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
