Rapid and Accurate Diagnosis of COVID-19 Cases from Chest X-ray Images through an Optimized Features Extraction Approach

Abstract: Mutants of the novel coronavirus (COVID-19, caused by SARS-CoV-2) are spreading across the globe as different variants, affecting human health and the economy. Rapid detection and timely treatment of the COVID-19-infected is a major challenge. For fast and cost-effective detection, artificial intelligence (AI) can play a key role in enhancing chest X-ray images and classifying them as infected/non-infected. However, AI needs huge datasets to train and detect the COVID-19 infection, which may affect the overall system speed. Therefore, a Deep Neural Network (DNN) is preferred over standard AI models to speed up the classification with a compact set of features from the datasets. Further, for accurate feature extraction, an algorithm that combines the Zernike Moment Feature (ZMF) and the Gray Level Co-occurrence Matrix Feature (GF) is proposed and implemented. The proposed algorithm uses 36 Zernike Moment features with variance and contrast textures, which helps to detect the COVID-19 infection accurately. Finally, the Region Blocking (RB) approach with an optimum sub-image size (32 × 32) is employed to improve the processing speed up to 2.6 times per image. This implementation achieves an accuracy (A) of 93.4%, sensitivity (S_e) of 72.4%, specificity (S_p) of 95%, precision (P_r) of 74.9% and F1-score (F_1) of 72.3%. These metrics illustrate that the proposed model can identify the COVID-19 infection with a smaller dataset and an accuracy improved up to 1.3 times over state-of-the-art existing models.


Introduction
The coronavirus disease 2019 (COVID-19) is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It transmits easily from human to human and mainly affects the lungs. The first COVID-19-infected human was identified in Wuhan, China, in December 2019 [1]. As of June 2021, the World Health Organization (WHO) reported that 176 million people had been affected across the world by all COVID-19 variants [2]. The disease continues to spread day by day through a combination of different variants; mortality and recovery rates are approximately 5% and 55%, respectively. Researchers and scientists from biomedical divisions have successfully developed vaccines, and people are now widely vaccinated as per their countries' regulations. However, fast detection remains an important factor in identifying, treating, and controlling the disease. AI plays a key role in automatic problem solving and public health observation. It also helps protect humans from the spread of the virus and analyzes the severity of COVID-19 with different AI models [3][4][5][6][7][8][9][10].
A list of popular test methods is involved to detect COVID-19, such as reverse transcription-polymerase chain reaction (RT-PCR), chest X-ray (CXR), and computed tomography (CT). The main contributions of this work are as follows:

1. To introduce a Deep Neural Network (DNN) model with fast detection and accurate classification of the infected (COVID-19) region from the X-ray image database. Our implementation provides 93% accuracy with a small number of training images (a minimum of 30 to a maximum of 50).

2. Instead of training and testing the whole X-ray image, a Region Blocking (RB) algorithm with a 32 × 32 sub-image size is applied to each image after integrating the feature extraction method (ZMF shape descriptors and GF texture features). RB improves the target classification performance with a quick processing time of 124 s/image.

3. The results of the proposed work are evaluated using standard performance metrics on the X-ray test dataset, and the metrics are compared with previous AI models.
The rest of the paper is organized as follows. Section 2 discusses the COVID-19-related works using different AI models. Section 3 explains the methodology along with the model implementation. Sections 4 and 5 describe the dataset collection, experimental design, results, and discussion. Section 6 concludes the work.

COVID-19 Related Works Using AI Techniques
A list of comprehensive works on COVID-19 medical-image-based AI techniques is presented here. Developing neural network architectures provides a more suitable method for the automatic detection and classification of medical images [15,16]. A machine learning classification model has been proposed in [17][18][19], consisting of dimensionality reduction through principal component analysis, unsupervised machine learning using a self-organizing map, and classification (Adam Deep Learning). AI with radiology images can be helpful to detect the infection accurately. Automatic detection is developed using the DarkCovidNet model [20] (Ozturk et al., 2020), which provides an end-to-end structure without requiring manual feature extraction. The implementation is performed for multiclass and binary classification with 98% accuracy and tested using a larger database. In 2020, Oh et al. [21] proposed a convolutional neural network using a patch-based approach that requires a smaller number of training images for the diagnosis of the COVID-19 infection. The patch-based CNN achieves 70% accuracy. In 2021, Abbas et al. [22] introduced a deep convolutional neural network model named Decompose, Transfer, and Compose (DeTraC). It is a classification model that diagnoses the irregularities present in the X-ray image using a class decomposition method. The DeTraC implementation provides 93% accuracy in the detection of infection. A threshold-based deep learning model [23] (Zhang et al., 2020) has been proposed for reliable and fast screening of the COVID-19 dataset. In the model mentioned in [23], 100 X-ray images from COVID-19 datasets are used to evaluate the performance at different threshold levels (T). For example, when T = 0.25, the sensitivity and specificity were 90% and 87.8%, respectively; when T = 0.15, the reported sensitivity and specificity were 96% and 70.6%, respectively.
COVID-19 infection detection through a feature extraction method and classification model is discussed in [24] (Bardhan et al., 2021). In feature extraction, a total of 55 texture features are gathered from the X-ray images used for training and testing. The features are classified into the COVID-19 region using four standard classification models. With the popular random forest classifier, 98% accuracy and 99% AUC were reported. A deep convolutional neural network model, COVID-Net, was proposed by Wang et al., 2020 [25], providing improvements in the transparency of the features while detecting positive cases. For this analysis, they employed the benchmarked open-source COVIDx dataset. The COVIDX-Net framework [26] (Hemdan et al., 2020) includes seven different deep neural network architectures, such as Google MobileNet version 2 and the modified Visual Geometry Group network VGG-19.
In each model, the different stages of the COVID-19 cases were classified based on the analysis of the intensity normalization. The COVIDX-Net model uses 50% of the whole data for training, 30% for validation, and 20% for testing. With this, it achieved an F1-score of 91%, which was comparable to the VGG-19 and DenseNet models. A deep CNN architecture named stacked CovXNet [27] (Mahmud et al., 2020) involves depth-wise convolution with different dilation rates for properly extracting the features from X-ray images. A stacking algorithm is employed for the optimization, followed by the integration of gradient-based discriminative localization. Here, it was used to discriminate different pneumonia-infected regions of chest X-rays, enabling near-perfect detection of the COVID-19 infection. Stacked CovXNet gives 90.2% accuracy for multiclass outputs such as normal, COVID-19, and viral and bacterial infection. The literature study shows that the COVID-19 infection is effectively evaluated using traditional machine learning and deep learning models by researchers, as presented in Table 1. In addition, the previous work clarifies that sometimes infections are not assessed by the classifier due to a lack of feature information and training samples. To overcome these problems, the framework of the proposed work is explained in the following sections.

Methodology
The proposed framework dataset consists of 60 X-ray images of size 224 × 224. The classification of each X-ray image involves (a) pre-processing, (b) extracting the unique features, (c) training the region blocks, and (d) classifying the COVID-19-infected region from the background. The flow of the proposed methodology is shown in Figure 1.


Pre-Processing Gamma Correction Technique
As pre-processing, the standard gamma correction enhancement approach [28] is applied to the X-ray images. The original X-ray image and the corresponding histogram plot are displayed in Figure 2a,c. The gamma enhancement technique extracts expressive information from the X-ray COVID-19 dataset. Visually, the enhanced COVID-19-infected pixels are brightened due to the gamma factor, as shown in Figure 2b, and the corresponding image histogram in Figure 2d highlights the same. Gamma correction performs a non-linear operation on the original X-ray image: it alters the image values to improve the pixel information using the projection relation between the pixel value and the gamma value according to the internal map. GC(h, k) refers to the corrected pixel value in grayscale, I(x, y) refers to the original image, γ(I(x, y)) is the gamma value, and x, y, h, k are pixel coordinates.
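The paper's pipeline runs in MATLAB and its exact correction equation is not reproduced above, so as a minimal sketch the standard power-law form of gamma correction can be written in Python as follows (the gamma value 0.8 is an illustrative assumption, not the paper's setting):

```python
def gamma_correct(image, gamma=0.8):
    """Apply standard power-law gamma correction to a grayscale image.

    `image` is a 2D list of pixel values in [0, 255]; each value is
    normalized, raised to the gamma power, and rescaled. A gamma below 1
    brightens dark regions, which is how low-intensity infected pixels
    become more visible in the enhanced X-ray.
    """
    return [[round(255.0 * (p / 255.0) ** gamma) for p in row]
            for row in image]

# Endpoints are preserved; a dark pixel (64) gains more than a bright one (200).
patch = [[64, 200]]
enhanced = gamma_correct(patch, gamma=0.8)
```

With gamma < 1 the mapping is concave, so the histogram shifts toward brighter values exactly as described for Figure 2d.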
The gamma-corrected enhanced images carry significant information for feature extraction. Feature extraction is one of the important methods for obtaining descriptive information from the X-ray images, and a stacked block of such values forms the feature vectors. The feature vectors help in identifying the infected regions from the background. The feature set considered for this work integrates two feature types, texture features and shape descriptors, derived from the gray level co-occurrence matrix (GF) and the Zernike moment (ZMF), respectively.


Zernike Moment Feature (ZMF) for Shape Descriptors and Gray-Level Co-Occurrence Matrix (GF) for Texture Feature Extraction
Dissimilar geometric structures, shapes, and sizes are present in each COVID-19-infected image for different levels of infection. From the previous works [29][30][31][32], we understand that shape descriptors such as Zernike and Hu moments [24] are good enough to define detailed shape information, especially for medical images (CT or X-ray images). A good shape descriptor should: have high discrimination ability with low redundancy; be invariant to scale, rotation, and translation; and provide both coarse and finer detailed representation.

Both Zernike and Hu moments share the above qualities, but for higher-order moments the Hu moment needs extensive computing power. Therefore, we chose the ZMF-based shape descriptor to represent the X-ray COVID-19 lung image. ZMF is estimated for each pixel of the gamma-enhanced image by using a 5 × 5 sliding window along the rows and columns with a stride of 1. Different polynomial orders n and the corresponding repetition factors m were used for H(x, y). Next, the Zernike moments are computed by projecting H(x, y) onto the complex Zernike polynomial sets, as denoted in Equation (2).
where Pl_nm(x, y) is a Zernike polynomial, H(x, y) is the image within the 5 × 5 window, n is a non-negative integer, and m is an integer such that n − |m| is even and 0 ≤ |m| ≤ n. The function Pl*_nm(x, y) is the complex conjugate of Pl_nm(x, y), an orthogonal basis function defined as in Equation (3), where r = √(x² + y²) and θ = tan⁻¹(y/x). The radial polynomial Rp_nm(r) is defined in Equation (4). The ZMF obtained from Equation (2) is a complex number, as indicated in Equation (5).
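As a minimal, pure-Python sketch of this computation (not the paper's MATLAB code), the radial polynomial and the moment magnitude over one 5 × 5 window can be written as below; the mapping of pixel centres onto the unit disk is an assumption, since the paper does not specify it:

```python
from math import factorial, atan2, sqrt, cos, sin, pi

def radial_poly(n, m, r):
    """Radial polynomial Rp_nm(r) of the Zernike basis (standard definition)."""
    m = abs(m)
    total = 0.0
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s) * factorial(n - s) / (
            factorial(s) * factorial((n + m) // 2 - s)
            * factorial((n - m) // 2 - s))
        total += c * r ** (n - 2 * s)
    return total

def zernike_magnitude(window, n, m):
    """|F_nm| of a square window whose pixel centres are mapped onto the unit disk.

    Pixels falling outside the unit disk are skipped; the moment is the
    projection of H(x, y) onto the conjugate basis function Pl*_nm.
    """
    k = len(window)
    re = im = 0.0
    for y in range(k):
        for x in range(k):
            xn = (2 * x - (k - 1)) / (k - 1)   # map to [-1, 1]
            yn = (2 * y - (k - 1)) / (k - 1)
            r = sqrt(xn * xn + yn * yn)
            if r > 1.0:
                continue
            theta = atan2(yn, xn)
            v = window[y][x] * radial_poly(n, m, r)
            re += v * cos(m * theta)   # real part of H * conj(Pl_nm)
            im -= v * sin(m * theta)   # imaginary part
    return (n + 1) / pi * sqrt(re * re + im * im)

# For a featureless (constant) 5x5 patch, the angular terms cancel,
# so |F_11| vanishes while |F_00| stays positive.
flat = [[1.0] * 5 for _ in range(5)]
```

This illustrates why the magnitudes |F_nm| respond to shape structure inside the window rather than to its mean intensity.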
In Equation (6), |F_nm| is the magnitude of the Zernike moment, which represents the ZMF shape features for particular n and m. A total of 36 |F_nm| values were found from the combinations of n and m, with n ranging from 1 to 10 and m such that n − |m| is even and 0 ≤ |m| ≤ n, for each pixel. |F_nm| is estimated for every pixel, and the observed ZMF maps are displayed in Figure 3. From the ZMF observation, the combination of 36 orders gives finer information for differentiating the infected and non-infected regions. To analyze the Zernike sensitive feature (ZMF_l) and the Zernike robust feature map (ZMF_h), the 36 extracted feature points are chosen from the X-ray image and plotted for the different combination orders n, m, as shown in Figure 4 (blue represents ZMF_h and red represents ZMF_l). The plot indicates that background pixels (the non-infected region) and infected pixels can be separated through a proper choice of threshold level. Equations (7) and (8) for ZMF_l and ZMF_h are given below.
ZMF_l = {|F_{1,1}|, |F_{2,2}| to |F_{5,5}|, |F_{6,2}| to |F_{7,7}|, |F_{8,2}|, |F_{8,6}|, |F_{9,1}| to |F_{9,9}|, |F_{10,10}|} (8)

To improve the accuracy of the classification model, we computed and merged two additional texture features (GF) [33]. Texture features are also crucial in medical image analysis to point out pulmonary consolidation and GGO features. GF captures the relationship between image pixels at different angles. The GF obtained from the image is denoted as Q = [Q(u, v | d, θ)]. Here, GF is used to relate the feature of the rth pixel frequency with that of the sth pixel frequency at length d = 1 and direction θ = 0. The texture features are extracted by employing the same procedure as for ZMF, i.e., using a 5 × 5 window with a stride of 1 over the gamma-enhanced image. The contrast (C_o) and variance (V_a) considered for our implementation are shown in Equations (9) and (10). We deliberately select these two features since they provide proper discrimination between the infected and non-infected pixels of the X-ray image, as shown in Figure 5.
Here, 'i' refers to the index over the training images, 'u' refers to the mean, and 'J' refers to the image size; GF_i refers to the two gray level co-occurrence matrix features, viz., contrast and variance. The texture features computed from the GF and the shape features obtained from the ZMF are concatenated to present a 38-element feature vector, as shown in Equation (12). F_i represents the features extracted from the X-ray dataset using ZMF and GF.
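Since Equations (9)-(11) are not reproduced above, the sketch below follows the standard Haralick-style definitions of contrast and variance from a normalized co-occurrence matrix with d = 1 and θ = 0 (a pure-Python illustration, not the paper's implementation):

```python
def glcm(window, levels):
    """Normalized gray-level co-occurrence matrix Q(u, v | d=1, theta=0)."""
    q = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for row in window:
        for u, v in zip(row, row[1:]):   # horizontal neighbours only
            q[u][v] += 1.0
            pairs += 1
    return [[c / pairs for c in row] for row in q]

def contrast(q):
    """C_o = sum (u - v)^2 * Q(u, v): high when neighbours differ sharply."""
    return sum((u - v) ** 2 * q[u][v]
               for u in range(len(q)) for v in range(len(q)))

def variance(q):
    """V_a = sum (u - mu)^2 * Q(u, v), with mu the GLCM marginal mean
    (an assumption about which mean Equation (10) uses)."""
    n = len(q)
    mu = sum(u * q[u][v] for u in range(n) for v in range(n))
    return sum((u - mu) ** 2 * q[u][v] for u in range(n) for v in range(n))

uniform = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # featureless patch
checker = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # alternating patch
```

A uniform patch gives zero contrast while an alternating patch gives high contrast, which is the discrimination behaviour exploited in Figure 5.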
The formation of training dataset from F i is discussed in the following section.


Collection of COVID-19 Dataset
A publicly available COVID-19 dataset was used [34,35] to analyze the proposed implementation. There were 43 female and 82 male positive cases, collected from Cho Ray Hospital, Ho Chi Minh City, Vietnam, and Myongji Hospital, Goyang, South Korea. The approximate average age of the COVID-19-positive subjects was 55 years. Most of the patients had symptoms such as sore throat, dry cough, low-grade subjective fever, and fatigue. In addition, a small consolidation in the right upper lobe, ground-glass opacities in both lower lobes, and multifocal patchy opacities in both lungs were observed in the clinical statement. Comprehensive metadata on machine vendors, dose, and peak kilovoltage (kVp) was not provided in the open-source datasets. In existing implementations [36], the datasets of CT images and X-ray images are used for the binary target classification of infected and non-infected regions. For our analysis, our DNN model adopted 60 images (non-infected X-ray images: 17, infected X-ray images: 33) of resolution 224 × 224 pixels from [34,35]. The training network adopted 50 X-ray images, and the remaining 10 X-ray images (testing data) in the dataset are used to test the performance of the trained model. From the dataset, the size of each X-ray image is downscaled to 224 × 224 pixels. Due to resizing, we agree that the quality of each image may be slightly decreased; however, through the use of pre-processing (gamma correction) and handcrafted feature extraction methods (Zernike moment and gray level co-occurrence matrix), any harmful effects of the down-sampling have been overcome. The final performance, with accuracy (A), specificity (S_p), sensitivity (S_e), precision (P_r), and F1-score (F_1) of 93.4%, 95%, 72.4%, 74.9%, and 72.3%, respectively, illustrates the same. These two methods are adopted in our proposed DNN model and achieve the overall performance metrics discussed in Section 5.

Training Method and Testing for X-ray Images
COVID-19 X-ray images contain pixels of infected and non-infected regions. Training on the whole X-ray image degrades accuracy and increases training time, which reduces the overall network performance [21,22,27]. The structure of the human lung system is captured during the chest radiography process, or chest X-ray (CXR). The lung structure in the X-ray images is partitioned into 'Dx' and 'Sin', denoting the right and left sides of the lungs, as marked in Figure 6a. The top-to-bottom right and left sides of the lung region are denoted as the superior lobe and the inferior lobe, as shown in Figure 6a. The COVID-19 infection occurs between the superior lobe and the inferior lobe region [37]. This motivates processing half of the pixels in the X-ray image instead of the whole image, which gives better accuracy with a smaller dataset and less processing time.

Formation of Training Data Using RB
The training set was obtained by applying the RB approach on the 50 training images of size 224 × 224. These are pre-processed using gamma correction, and feature maps are generated based on the procedures explained in Sections 3 and 4. Accordingly, for images of size 224 × 224, different RB sub-image sizes are considered, such as 8 × 8, 16 × 16, 32 × 32, and 56 × 56, as shown in Figure 6a. As listed in Table 2, the 32 × 32 RB sub-image size was selected and applied on F_{i,j}. With the RB_32 size, each X-ray image produces 28 blocks, and the chosen RB_32 gives the best trade-off between accuracy and processing time, as plotted in Figure 6b. These block features help to identify the image as COVID-19-infected or non-COVID-19. The algorithm for the RB approach is given below.
The variable C denotes the number of N × N RBs present in one row, D represents the total number of rows of N × N RBs to process, B denotes the ensemble of the N × N regions, and RB represents the set of N × N blocks from each F_i. The set of RB features gives enough information to distinguish infected points from non-infected points. The pixels are arranged in vector format, with the corresponding target, for training the network. The training set consists of 31,920 RBs, and each RB contains 1024 pixels. Each pixel within the RB has 38 features (ZMF and GF), giving a total of 38,912 feature values per block. The number of pixels taken from the infected region was larger, to avoid misclassification during testing. During the training process, all the feature blocks are labeled with the corresponding target (1s or 0s) T, where T represents the binary classification. Based on the above analysis, a novel region blocking (RB) algorithm is introduced on the X-ray image dataset to process each training and testing image. Instead of training and testing the whole image, the proposed RB algorithm with a suitable sub-image (block) size requires only half the number of pixels during the training and testing of the network. This enables the network to present accurate results with a lower computation time. The works [25,26] used the full image (complete resolution) during training, resulting in a high processing time. In addition, the sensitivity on the test images is lower because infected pixels (true positives) are missed, mainly due to the limited dataset during training. Hence, the proposed training and testing implementation uses the RB algorithm to achieve better performance in terms of accuracy and processing time. The model has been tested with different blocking regions: 8 × 8, 16 × 16, 32 × 32, and 56 × 56.
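The blocking step above can be sketched as follows. This is a Python illustration, not the paper's MATLAB code; in particular, the choice of which block-rows cover the lung span is an assumption, picked so that a 224 × 224 image with 32 × 32 blocks (7 × 7 = 49 candidates) yields the paper's 28 blocks, i.e., roughly half the pixels:

```python
def region_blocks(image, n=32, keep_rows=range(2, 6)):
    """Split a square feature map into n x n region blocks (RBs), keeping
    only the block-rows assumed to cover the lung region.

    For a 224x224 image and n=32 there are 7 block-rows of 7 blocks each;
    keeping 4 of the 7 rows (hypothetical lung span) gives 28 blocks.
    """
    size = len(image)
    blocks = []
    for br in range(size // n):          # block-row index
        if br not in keep_rows:
            continue                     # skip rows outside the lung span
        for bc in range(size // n):      # block-column index
            block = [row[bc * n:(bc + 1) * n]
                     for row in image[br * n:(br + 1) * n]]
            blocks.append(block)
    return blocks

image = [[0] * 224 for _ in range(224)]
blocks = region_blocks(image)            # 28 blocks of 32 x 32 pixels
```

Each returned block would then be flattened into the vector format described above and paired with its target label.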
The results for the different RBs are observed in terms of processing time, as shown in Table 2. The considered block sizes (N, 2N, 4N) are enough to process the complete lung structure (superior to inferior lobe), as shown in Figure 6a. The performance of the RB algorithm is compared against processing the image without the RB approach (baseline); the improvement is 2.6 times over the standard network. In addition, the analysis is done with different RB sub-image sizes, and a suitable size of 32 × 32 was chosen based on the trade-off between accuracy and processing time (refer to the plotted line in Figure 6b, illustrating the trade-off between RB size and computation time).

Here, Y_i refers to the training dataset, RB refers to the region blocks formed from the input features, T is the target for the input, and 'i' refers to the number of training images. In T, BC_0 = '1' represents the presence of the COVID-19 infection; likewise, BC_0 = '0' represents a non-COVID-19 region. Furthermore, the set of features Y_i has been trained with different classifiers, such as Deep Neural Network (DNN), Support Vector Machine (SVM), Decision Tree (DT), Gaussian Naive Bayes (GNB), and Logistic Regression (LR), to find the most suitable classifier for the proposed implementation. The classifier performances are analyzed using the standard evaluation metrics accuracy, specificity, sensitivity, and Area Under Curve (AUC), as illustrated in Table 3. From Table 3, the DNN classifier outperforms the other classifiers on the set of training features. The DNN classifier confusion matrix is presented in Table 4, and the ROC curves of the different classifiers used for AUC estimation are shown in Figure 7.

Target Classification Using DNN
The set of ZMF and GF feature blocks is extracted using the method discussed in Section 3. The feature set is trained by a supervised learning method, a Deep Neural Network (DNN) model. The DNN architecture is based on multiple hidden layers of a feed-forward neural network: an input layer (38 neurons representing the 38 features), 3 hidden layers (58 neurons each), followed by an output layer (a binary classifier), as shown in Figure 8.


Results and Discussion
The DNN-trained network was evaluated with the test dataset containing 10 X-ray images. After the enhancement technique and feature extraction, along with the RB algorithm, the testing blocks are collected for each test image of size 224 × 224 × 38 with 32 × 32 RB. All the experimental work is analyzed in a MATLAB R2020a environment. Considering the learnable parameters of the DNN model, the total number of biases and weights is 9224 (2262 in H_{1,L}, 3422 in H_{2,L}, 3422 in H_{3,L}, and 118 in the output layer). 'H' refers to the hidden layers, and 'L' refers to the number of neurons in each hidden layer, i.e., 58 neurons per layer. The Scaled Conjugate Gradient (SCG) training function is used to converge the cross-entropy loss by modifying the weights and biases. Tan-sigmoid is the activation function for H_{1,L}, H_{2,L}, and H_{3,L}, and the softmax function at the output layer gives the probability of the binary output. The target outputs are '1' and '0' for the infected and non-infected (background) regions, respectively. Training on the dataset requires around 20 min at 100 epochs, with a good error performance of 0.1289.
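The parameter count for the 38-58-58-58-2 architecture can be verified with a quick sketch (weights plus biases per fully connected layer):

```python
def layer_params(n_in, n_out):
    """Learnable parameters of one fully connected layer: weights + biases."""
    return n_in * n_out + n_out

# 38 input features -> three hidden layers of 58 neurons -> 2-class output
sizes = [38, 58, 58, 58, 2]
per_layer = [layer_params(a, b) for a, b in zip(sizes, sizes[1:])]
total = sum(per_layer)
# per_layer is [2262, 3422, 3422, 118] and total is 9224,
# matching the counts for H1, H2, H3, and the output layer.
```

The per-layer counts reproduce the 2262, 3422, 3422, and 118 figures quoted above, summing to 9224.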

Results and Discussion
The DNN-trained network was evaluated with the test dataset containing 10 X-ray images. After the enhancement technique and feature extraction along with the RB algorithm, the formation of the testing blocks is collected for each test image sized 224 × 224 × 38 with 32 × 32 RB. All the experimental work is analyzed in a MATLAB R2020a environment run on a processor corei7 at 2.80 Ghz, 8 GB RAM. The generation of the feature method takes around 20 min for 10 images (test dataset) and the time required for each testing image is around 115 s. The automatic classification for the X-ray image challenges is as follows:
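The stated parameter total can be checked directly from the layer sizes (a quick arithmetic sketch; the layer sizes are those stated in the text, the helper function name is ours):

```python
# Weights + biases for one fully connected layer.
def layer_params(n_in, n_out):
    return n_in * n_out + n_out

h1 = layer_params(38, 58)   # input -> first hidden layer
h2 = layer_params(58, 58)   # first -> second hidden layer
h3 = layer_params(58, 58)   # second -> third hidden layer
out = layer_params(58, 2)   # third hidden -> binary output layer
total = h1 + h2 + h3 + out
print(h1, h2, h3, out, total)  # 2262 3422 3422 118 9224
```

This reproduces the per-layer counts (2262, 3422, 3422, 118) and the total of 9224 learnable parameters.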

1.
Each X-ray image varies in texture, position, and lung shape across different cases. A small consolidation can yield a false-negative result for the whole image.

2.
Identifying the ground-glass opacities (GGO) of the infected region within the X-ray image is difficult due to blurriness and low contrast.

3.
AI models need a large number of training images to train the network, and collecting images and ground-truth annotations requires more specialist human resources.
The feature blocks of 38 features, along with the RB algorithm, were used to train the DNN model, as explained in Section 3. The classifier output gives the probability of two class scores, as given in Equation (14). The probability score decides which class the output falls into: either (BC0 = 1, BC1 = 0) or (BC0 = 0, BC1 = 1). The 38 feature blocks of the 10 test images were then classified with the DNN-trained network, and the per-image performance evaluation metrics are illustrated in Table 5. In addition, an ablation study was performed to analyze the feature formation. When only the 36 ZMF features were trained and tested, the per-image sensitivity was reduced and the network failed to capture true-positive information. Adding the two texture features (GF) during training produced remarkable results compared to the previous work: the combination of 36 ZMF shape descriptors and 2 GF texture features outperformed the 36 ZMF alone. Each test image was evaluated with the standard metrics of sensitivity, accuracy, specificity, precision, and F1-score, computed from the confusion matrices, as presented in Table 6. The proposed work has been compared with previous works [20][21][22][23][25][26][27] that employed AI for the classification of the COVID-19 region from X-ray image datasets, as presented in Table 7. In [20], the DarkCovidNet model is analyzed using 125 COVID-19-positive images, with 80% used for training and the remainder for testing; its performance is lower in terms of A and Sp. The patch-wise CNN model is described in [21], whose A, Se, and Sp are 1.2× (times) lower than those of our method. In our implementation, A and Sp are moderately higher, while Se is 13% lower than the Decompose, Transfer, and Compose (DeTraC) model [22].
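The standard metrics above follow directly from a binary confusion matrix. As an illustrative sketch (the counts tp, fp, fn, tn below are made-up example values, not results from the paper's test set):

```python
# Standard binary-classification metrics from confusion-matrix counts.
def metrics(tp, fp, fn, tn):
    a  = (tp + tn) / (tp + fp + fn + tn)  # accuracy
    se = tp / (tp + fn)                   # sensitivity (recall)
    sp = tn / (tn + fp)                   # specificity
    pr = tp / (tp + fp)                   # precision
    f1 = 2 * pr * se / (pr + se)          # F1-score
    return a, se, sp, pr, f1

# Illustrative counts only.
a, se, sp, pr, f1 = metrics(tp=42, fp=14, fn=16, tn=260)
print(round(a, 3), round(se, 3), round(sp, 3), round(pr, 3), round(f1, 3))
```

Applying these formulas to each test image's confusion matrix yields the per-image values reported in Table 6.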
Likewise, the COVID-Net [25], COVIDX-Net [26], and Stacked CovXNet [27] models were compared in terms of accuracy (A), showing 0.86%, 3.7%, and 6.9% lower performance than our implementation, respectively. However, the overall assessment of our test results is enhanced with a limited number of training images, as shown in Table 7. In addition, the improved performance metrics of A, Se, and Sp relative to the existing state-of-the-art methods are given in Table 7 (columns 3, 5, and 7).

Conclusions
In this paper, an improved DNN classifier model for fast and accurate COVID-19 detection using chest X-ray images is proposed. The model adopts a combined feature extraction algorithm: 36 unique Zernike moments (shape features) and 2 GF textures (contrast and variance). In addition, 32 × 32 region blocking of the X-ray image is applied to select the lung region for the training and testing datasets. Our implementation provides an effective and suitable tool for the radiographer to categorize the input image with a limited training dataset. The performance evaluation observed 93.4%, 95%, 72.4%, 74.9%, and 72.3% in terms of accuracy (A), specificity (Sp), sensitivity (Se), precision (Pr), and F1-score (F1), respectively. The proposed DNN model enhanced A by 1× to 1.3×, Sp by 1.0× to 1.1×, and Se by 1× to 1.2× compared to existing models such as DarkCovidNet, COVID-Net, COVIDX-Net, Stacked CovXNet, DeTraC deep CNN, and patch-based CNN. Furthermore, these standard existing models need a larger training dataset to sustain the required accuracy and computation speed. The proposed DNN implementation uses a binary COVID-19 classification model that classifies each X-ray image up to 2.4 times faster. As a consequence, the proposed model does not determine the level of infection or classify other lung infections. A future improvement is to automatically delineate the infection layout and grade the various levels of COVID-19 infection by segmenting the chest X-ray images.

Data Availability Statement: Chest X-ray image datasets for our work are collected from the publicly available databases as follows: https://www.kaggle.com/datasets/praveengovi/coronahack-chestxraydataset/metadata (accessed on 21 March 2021); https://www.kaggle.com/datasets/aysendegerli/qatacov19-dataset/metadata (accessed on 21 March 2021). MATLAB codes are publicly available at https://github.com/deepika-s517/Rapid-chest-X-ray-image-classification-for-COVID-19-detection.