Bioengineering
  • Article
  • Open Access

16 March 2023

Design and Analysis of a Deep Learning Ensemble Framework Model for the Detection of COVID-19 and Pneumonia Using Large-Scale CT Scan and X-ray Image Datasets

1. Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fujian University of Technology, Fuzhou 350011, China
2. Department of Computer Science and Engineering, Solamalai College of Engineering, Madurai 625020, Tamil Nadu, India
3. Department of Computer Engineering, University of Technology, Baghdad 10066, Iraq
4. Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amaravati Campus, Mangalagiri 522503, Andhra Pradesh, India
This article belongs to the Special Issue Recent Advances in the Application of Mathematical and Computational Models in Biomedical Science and Engineering

Abstract

Recently, various methods have been developed to identify COVID-19 cases, such as PCR testing and non-contact procedures such as chest X-rays and computed tomography (CT) scans. Deep learning (DL) and artificial intelligence (AI) are critical tools for early and accurate detection of COVID-19. This research explores different DL techniques for identifying COVID-19 and pneumonia in medical CT and radiography images using ResNet152, VGG16, ResNet50, and DenseNet121. The ResNet framework classifies CT scan images with high accuracy and precision. This research automates the selection of the optimum model architecture and training parameters. Transfer learning approaches are also employed to solve content gaps and shorten training duration. An upgraded VGG16 deep transfer learning architecture is applied to perform multi-class classification of X-ray images. The enhanced VGG16 is shown to recognize three classes of radiographic images (normal, COVID-19, and pneumonia) with 99% accuracy. The validity and performance metrics of the proposed model were validated using publicly available X-ray and CT scan datasets. The suggested model outperforms competing approaches in diagnosing COVID-19 and pneumonia, with an average F-score in the range of 95% to 97%. In both healthy and viral infection cases, this research is more efficient than existing methodologies for coronavirus detection. The developed model is appropriate for pre-training in recognition and classification tasks, and it outperforms traditional strategies for the multi-class categorization of various illnesses.

1. Introduction

CT scans, medical imaging, and chest X-rays have been proposed as viable methods for detecting COVID-19 quickly and early. CT scans have demonstrated high sensitivity in identifying COVID-19 at the initial assessment of patients, and in severe circumstances they can efficiently correct RT-PCR false negatives [1,2,3,4]. Nevertheless, this is a complex and time-consuming process, because a professional must interpret the X-rays and CT imaging to establish whether an individual is COVID-19 positive. The first diagnostic tool used to identify COVID-19 pathology is a chest X-ray, yet a single chest X-ray image is insufficient for reliable prediction and treatment of COVID-19 [5,6,7,8]. To address this limitation, multiple medical data findings are combined, and predictive classifications are generated, leading to improved accuracy compared to using only a single test image. However, extracting relevant features from COVID-19 images is challenging due to daily fluctuations in characteristics and differences between cases [9,10,11,12]. Many conventional machine learning (ML) techniques have already been applied to automatically classify digitized chest data. Using an SVM classification model, frequent patterns were generated from the pulmonary surface to distinguish between malignant and benign lung nodules. Backpropagation networks have been used to categorize imagery as normal or malignant using a gray-level co-occurrence matrix technique [13,14,15].
Convolutional neural networks (CNNs) can extract valuable features in image categorization tasks. This feature extraction is performed with transfer learning: pre-trained models capture the general properties of large-scale data such as ImageNet and then apply them to the task at hand. DL algorithms outperform traditional NN models if enough labeled images are available. The CNN model is one of the most common DL algorithms for diagnostic imaging, with excellent results. Unlike traditional ML algorithms, the effectiveness of CNNs depends on learning feature representations directly from the input imagery. As a result, the availability of pre-trained models such as DenseNet, ResNet, and VGG-16 is extremely helpful in this procedure and appears to be very promising for COVID-19 identification using chest CT and X-ray images. Transferring acquired knowledge from a pre-trained network that has completed one function to a new task is a systematic approach for training a CNN architecture. This technique is faster and more efficient because it does not require a large annotated training dataset; as a result, most researchers, particularly in the medical field, prefer it.
Transfer learning can be performed in three ways [16,17,18,19,20]: shallow tuning adapts only the final classification layer to the novel problem while leaving the parameters of the remaining layers frozen; deep tuning retrains the parameters of the pre-trained network from end to end; fine tuning retrains the network layer by layer, gradually adjusting the learning variables until a critical performance is obtained. In this work, knowledge transfer for X-ray image detection is achieved through a carefully executed fine-tuning procedure.
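For illustration, the following minimal Keras sketch (an assumed implementation, not the authors' published code) contrasts the three modes by controlling which layers of a pre-trained backbone remain trainable; the backbone choice and the number of unfrozen layers are illustrative only.

```python
# Hypothetical sketch of shallow, deep, and fine tuning with a pre-trained backbone.
import tensorflow as tf

def build_transfer_model(mode: str = "fine", num_classes: int = 3) -> tf.keras.Model:
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    if mode == "shallow":
        base.trainable = False              # only the new classifier head is trained
    elif mode == "deep":
        base.trainable = True               # retrain the whole network end to end
    elif mode == "fine":
        base.trainable = True
        for layer in base.layers[:-4]:      # unfreeze only the last few layers (illustrative)
            layer.trainable = False
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(base.input, out)
```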
The motivation for the proposed model, along with its significant contributions, is as follows. The suggested system uses transfer learning to train a model faster: the pre-trained weights are fine-tuned to the task at hand, namely identifying COVID-19. The tuned techniques are then stacked to predict the output class; the meta-model in this system is an individual neuron that forecasts the input category from the base models' outputs. The optimal model architecture and training parameters for COVID-19 and pneumonia diagnosis are automated in order to produce better results, and the transfer learning methodology addresses research gaps and reduces training time.
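The stacking step described above can be sketched as follows, assuming each base model emits class probabilities and a single-neuron (logistic regression) meta-model combines them; all names and shapes here are illustrative rather than the authors' exact configuration.

```python
# Hypothetical stacking sketch: base-model probabilities feed a single-neuron meta-model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_stacking_meta_model(base_probs, labels):
    """base_probs: one (n_samples, n_classes) probability array per base model."""
    meta_features = np.hstack(base_probs)     # concatenate the base models' outputs
    meta_model = LogisticRegression(max_iter=1000)
    meta_model.fit(meta_features, labels)     # the meta-learner weighs each base model
    return meta_model
```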
The research is organized as follows: Section 2 focuses on the related existing works for COVID-19 detection cases; Section 3 discusses the proposed work and the details of the algorithms presented in this study; Section 4 illustrates the graphical plots and tabular representation for the experimentation of the proposed and existing methods; Section 5 summarizes the proposed work and concludes with its enhancements.

3. Methods

This section focuses on the need for the proposed system model in detail.

3.1. Need for Proposed Model

The proposed model is required to fulfill the following requirements:
  • to obtain better performance measures and to automate the optimum model architecture and training parameters for COVID-19 and pneumonia diagnosis;
  • to apply transfer learning approaches to solve content gaps and shorten training duration;
  • to perform multi-class classification for X-ray imaging tasks by applying an upgraded VGG16 deep transfer learning architecture;
  • to minimize the complexity of the layered design for the multi-class categorization of various illnesses.

3.2. System Model

This section explores the proposed model, as sketched in Figure 1. The spatial domain procedure is applied directly to the image pixels, which are altered pixel by pixel during filtering: a filtering mask shifts from one pixel to the next, performing a set of operations at each position. This smooths the image and removes noise.
Figure 1. The architecture of the proposed model.
Spatial domain filtering is divided into linear and non-linear filters. The mean and Wiener filters are the most often used linear filtering techniques, whereas the median filter is a non-linear filtering approach. In the mean filter, the mean value is derived from the center pixel and its neighbors within an N × N window; the center pixel value is then replaced by this mean, computed as follows:
$$Y(M,N) = \mathrm{Mean}\{X(i,j)\}, \quad (i,j) \in w$$
where $w$ denotes the set of pixel positions in the neighborhood.
The Wiener filter is applied for filtering consistent pixel values that exhibit constant-power additive noise. This technique filters the image adaptively, pixel-wise. Two local statistics, the mean and the variance, are computed from the pixels in the neighborhood:
$$\mu = \frac{1}{N \times M} \sum_{(n_1, n_2) \in \eta} a(n_1, n_2)$$
$$\sigma^2 = \frac{1}{N \times M} \sum_{(n_1, n_2) \in \eta} a^2(n_1, n_2) - \mu^2$$
where $\eta$ represents the $N \times M$ neighborhood of the current pixel. With these estimates, the pixel-wise Wiener filter is applied to produce the denoised image, computed as follows:
$$b(n_1, n_2) = \mu + \frac{\sigma^2 - v^2}{\sigma^2} \big( a(n_1, n_2) - \mu \big)$$
where $v^2$ represents the noise variance.
The median filter sorts the pixels in the window in ascending order and replaces the central pixel of the $N \times M$ window with the median value:
$$Y(M,N) = \mathrm{Median}\{X(i,j)\}, \quad (i,j) \in w$$
where $w$ is the set of neighborhood pixels.
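As a concrete illustration of these three filters, the following Python sketch (an assumed preprocessing implementation using OpenCV and SciPy; the 3 × 3 window size and file name are illustrative) applies each to a grayscale scan:

```python
# Hypothetical preprocessing sketch for the mean, median, and Wiener filters.
import cv2
import numpy as np
from scipy.signal import wiener

img = cv2.imread("chest_scan.png", cv2.IMREAD_GRAYSCALE)  # illustrative file name

mean_filtered = cv2.blur(img, (3, 3))        # mean of each 3x3 neighborhood
median_filtered = cv2.medianBlur(img, 3)     # median of the sorted 3x3 window
wiener_filtered = wiener(img.astype(np.float64), (3, 3))  # local-statistics Wiener filter
```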
Image enhancement applies quality enhancement techniques to improve the image contents based on essential features, including intensity, edges, and corners. Histogram equalization redistributes the pixel values of an image to increase contrast, making it easier to distinguish between different image features. This technique can reduce the number of saturated or redundant pixels in certain areas of the image, but its primary goal is to enhance the overall visual quality. The process increases the contrast level of the image and the computation speed of content-based medical image retrieval, improving the visual quality of noisy and blurred images. The histogram represents the intensity distribution as a discrete function and is especially useful for dark images. The process of histogram equalization for contrast enhancement is defined as follows:
  • Read the input image.
  • Compute the histogram, probability of each pixel, and probability density function of the input image.
  • Equalize the histogram.
The histogram represents the image’s intensities using bars representing pixel frequency. For a given input image, the histogram is computed as follows:
$$H(r_k) = n_k$$
where $r_k$ is the $k^{\mathrm{th}}$ gray level, ranging from $0$ to $L-1$, and $n_k$ is the number of pixels in the image with gray level $r_k$. Then, the probability of each gray level is determined using
$$p(r_k) = \frac{n_k}{M \times N}$$
where $n_k$ is the number of pixels and $M \times N$ defines the image size with $M$ rows and $N$ columns. The cumulative distribution of the image is computed using
$$p_k = \sum_{i=0}^{k} \frac{n_i}{M \times N}$$
Then, histogram equalization is applied to enhance the quality of the image by correcting its intensity values; for the discrete case,
$$S_k = (L-1) \times p_k(r)$$
where $p_k(r)$ represents the cumulative distribution computed above.
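The equalization steps above map directly onto a few NumPy operations. The following sketch (an assumed implementation for 8-bit grayscale images, not the authors' code) mirrors the equations; OpenCV's cv2.equalizeHist offers the same behavior as a single call:

```python
# Hypothetical histogram equalization mirroring the equations above (L = 256 gray levels).
import numpy as np

def equalize_histogram(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """img: uint8 grayscale image with M x N pixels."""
    hist = np.bincount(img.ravel(), minlength=levels)        # H(r_k) = n_k
    prob = hist / img.size                                   # p(r_k) = n_k / (M x N)
    cdf = np.cumsum(prob)                                    # p_k, the cumulative distribution
    mapping = np.round((levels - 1) * cdf).astype(np.uint8)  # S_k = (L - 1) * p_k
    return mapping[img]                                      # remap each pixel intensity
```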

3.3. Fast Attention-Based ResNet

Attention-based ResNet is used to perform feature extraction and classification. The architecture of the proposed model is depicted in Figure 2. The learning rate of the proposed model is set to 0.0001, and the number of epochs performed for classification is 10. The extracted set of features contains many irrelevant and redundant features that need to be removed because they reduce the accuracy of the process; feature selection is therefore performed to remove these irrelevant and redundant characteristics [7]. The attention module is responsible for feature extraction by learning the weights of the corresponding features. The output of the attention layer is computed using
$$Atn_i(Z) = Q_i(Z) \cdot F_i(Z) + F_i(Z)$$
where $Q_i(Z)$ denotes the attention weight and $F_i(Z)$ denotes the features. In the attention layer, the relationship between the features is computed to obtain more relevant information from the features, which can be expressed as
$$R(F_1; F_2) = \int_{F_1} \int_{F_2} p(F_1, F_2) \log \frac{p(F_1, F_2)}{p(F_1)\, p(F_2)} \, dF_1 \, dF_2$$
where $p(F_1, F_2)$ denotes the joint probability density of the features $F_1$ and $F_2$, and $p(F_1)$ and $p(F_2)$ denote the individual marginal densities, respectively. The inception layer learns specific features deeply by reducing the size of the initial patch, which affects the original information required for the classification process. Based on the selected features, classification is carried out to ascertain the class of a given input image. The softmax layer performs this classification, in which the cross-entropy is computed to determine the output loss, calculated as
$$\mathrm{Loss}(x) = -\sum_{k} q_k \log\big(st(x)_k\big)$$
where $q$ denotes the target distribution.
Figure 2. Attention-based ResNet.
The softmax $st(x)$ of a vector is formulated using
$$st(x)_k = \frac{e^{x_k}}{\sum_{m} e^{x_m}}$$
The classification is performed by increasing the score corresponding to the input images.
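A channel-attention block consistent with the residual formulation $Atn_i(Z) = Q_i(Z) \cdot F_i(Z) + F_i(Z)$ can be sketched in Keras as follows; this is an assumed implementation, and the way $Q(Z)$ is produced (global pooling followed by a sigmoid dense layer) is illustrative:

```python
# Hypothetical attention block implementing Atn(Z) = Q(Z) . F(Z) + F(Z).
import tensorflow as tf

def attention_block(features: tf.Tensor) -> tf.Tensor:
    channels = features.shape[-1]
    # Q(Z): channel-wise attention weights learned from the features themselves
    q = tf.keras.layers.GlobalAveragePooling2D()(features)
    q = tf.keras.layers.Dense(channels, activation="sigmoid")(q)
    q = tf.keras.layers.Reshape((1, 1, channels))(q)
    attended = tf.keras.layers.Multiply()([features, q])  # Q(Z) . F(Z)
    return tf.keras.layers.Add()([attended, features])    # residual term: + F(Z)
```

Such a block can be inserted between ResNet stages and trained under the settings stated above (learning rate 0.0001, 10 epochs).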

3.4. Enhanced VGG-16 Architecture for X-ray Images

As shown in Figure 3, the VGG-16 architecture is implemented to extract and classify the features using DL [8]. Initially, the features of the segmented region are extracted. The proposed VGG-16 has three types of layers: convolution layers, a fully connected layer, and a softmax layer. VGG-16 extracts low-level features with small masks and has fewer layers than VGG-19. The max pooling and average pooling layers extract the features from the segmented region, as shown in Table 1. The convolution layer concatenates the pooling results using the sigmoid function $\delta$ with the measure
$$CF = \delta\big(f^{7 \times 7}([F_A; F_M])\big)$$
where $F_A \in \mathbb{R}^{1 \times h \times w}$ and $F_M \in \mathbb{R}^{1 \times h \times w}$ represent the average pooling and max pooling measures, respectively.
Figure 3. The architecture of VGG 16.
Table 1. Features and Aesthetics of the model.
The fully connected layer receives the extracted results through dense, flatten, and dropout layers to produce the final classification as normal or abnormal using
$$C(p = q \mid s) = \frac{e^{b_q}}{\sum_j e^{b_j}}$$
Algorithm 1 operates on the segmented region, where segmentation divides an image into multiple segments or regions based on criteria such as color, texture, or shape. VGG16 is a convolutional neural network that has been widely used for image classification tasks; however, it can also be used for feature extraction and segmentation tasks.
Algorithm 1: VGG 16
  • Input: Segmented Region (S.R.)
  • Output: Normal (N) or Abnormal (AN)
  • Begin
  • Initialize $f_c$, $f_s$, $f_t$
  • $f_c \leftarrow \{f_{c1}, f_{c2}, \ldots, f_{cn}\}$
  • $f_s \leftarrow \{f_{s1}, f_{s2}, \ldots, f_{sn}\}$
  • $f_t \leftarrow \{f_{t1}, f_{t2}, \ldots, f_{tn}\}$
  • Initialize the feature extraction data
  • for $i \leftarrow 0$ to $n$ do
  •   Extract $f_c$ from S.R.
  •   Extract $f_s$ from S.R.
  •   Extract $f_t$ from S.R.
  •   $F \leftarrow \{f_c, f_s, f_t\}$
  •   Extract the features with the average pooling layer ($F_A$)
  •   Extract the features with the max pooling layer ($F_M$)
  •   Concatenate the features
  •   Classify the image using the softmax layer
  •   Class $\leftarrow \{N, AN\}$
  • end for
  • return Class
  • End
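A minimal Keras sketch of this flow (an assumed implementation, not the authors' published code; the dense width of 256 and dropout rate of 0.5 are illustrative choices) is:

```python
# Hypothetical sketch of the Algorithm 1 flow: VGG16 backbone features, parallel
# average/max pooling (F_A, F_M), concatenation, and a softmax head for N vs. AN.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
f_avg = tf.keras.layers.GlobalAveragePooling2D()(base.output)  # F_A
f_max = tf.keras.layers.GlobalMaxPooling2D()(base.output)      # F_M
merged = tf.keras.layers.Concatenate()([f_avg, f_max])         # concatenate the features
x = tf.keras.layers.Dense(256, activation="relu")(merged)      # dense layer (assumed width)
x = tf.keras.layers.Dropout(0.5)(x)                            # dropout (assumed rate)
out = tf.keras.layers.Dense(2, activation="softmax")(x)        # Normal vs. Abnormal
model = tf.keras.Model(base.input, out)
```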

4. Results and Discussion

This section analyzes the experimental results of the proposed method.

4.1. Dataset

Five different databases are applied to evaluate the performance of the proposed method. Two databases provide chest X-ray imaging, while the remainder include chest CT scans [11,12]. Every database was divided into three parts: training, validation, and test sets. The test set must have between 200 and 400 images to evaluate the model's flexibility well. The size of the test set determines the size of the validation set: the larger the test set, the larger the validation set, and conversely. The remaining images were used for the training set. For the presented approach, the test and validation sets are organized so that they contain the same proportions of positive and negative image samples. The hyper-parameters are tuned based on the training and test sets of the coronavirus images.
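The stratified partitioning described above, which keeps the same positive/negative proportions in the validation and test sets, can be sketched with scikit-learn as follows (an assumed helper; the split sizes and seed are illustrative, since they differ per dataset below):

```python
# Hypothetical stratified train/validation/test split preserving class proportions.
from sklearn.model_selection import train_test_split

def split_dataset(images, labels, test_size=400, val_size=400, seed=42):
    x_rest, x_test, y_rest, y_test = train_test_split(
        images, labels, test_size=test_size, stratify=labels, random_state=seed)
    x_train, x_val, y_train, y_val = train_test_split(
        x_rest, y_rest, test_size=val_size, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```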
The descriptions of the datasets are given as follows:
The CT image dataset contains 349 CT images from 216 patients with coronavirus and 397 CT images from patients without coronavirus. The images were collected from hospitals that treat both coronavirus and non-coronavirus patients, but only cases confirmed positive or negative for coronavirus were included in the dataset [13,14]. Samples of coronavirus images in both positive and negative classes are given in Figure 4. The features of the dataset are as follows:
Figure 4. (a) COVID-19 Negative (COVID-19 images collection dataset) (b) COVID-19 Positive (COVID-19 images collection dataset).
  • Types of Images: CT Images
  • Size of Dataset: 746 CT Scans
  • Positive Case images in Total: 349
  • Negative Case images in Total: 397
  • Validation Size Set: 118 Scans
  • Training Size Set: 425 Scans
  • Test Size Set: 203 Scans
The COVID-19 images collection dataset consists of images gathered from the public community, such as physicians and hospitals. The features of this dataset are as follows:
  • Types of Imaging: Chest X-rays
  • Size of Dataset: 579 Images
  • Training Size Set: 309
  • Validation Size Set: 70
  • Testing Size Set: 200
  • COVID-19 Positive Cases: 342
  • COVID-19 Negative Cases: 237
The CT set COVID-19 dataset consists of original CT scans of 377 patients: a total of 15,589 CT scans covering 95 coronavirus patients and 282 normal patients. The features of this dataset are as follows:
  • Types of Images: CT Scan Images
  • Size of Dataset: 12,058 Scans
  • Positive Images: 2282
  • Negative Images: 9776
  • Training Set: 11,400
  • Validation Set: 258
  • Testing Set: 400
The COVID-19 radiography collection includes 1200 images of positive COVID-19 patients, 1341 images of healthy patients, and 1345 images of patients with viral pneumonia infection. This dataset has the following characteristics:
Figure 5 shows sample images. CT scans, or computed tomography scans, use X-rays to create detailed images of the inside of the body. CT scans can be helpful in diagnosing respiratory illnesses such as COVID-19, the disease caused by SARS-CoV-2. CT scans can show the presence of lung abnormalities such as ground-glass opacities and consolidation, which can be indicative of viral pneumonia.
Figure 5. (a) Healthy patient image, (b) patient with pneumonia, and (c) patient with COVID-19 (COVID-19 radiography collection).
  • Images Type: X-ray
  • Sum of Images: 2541
  • Negative Images: 2686
  • Positive Images: 1200
  • Training Size Set: 3086
  • Testing Size Set: 400
  • Validation Size Set: 400
The SARS-CoV-2 CT scan dataset consists of 1252 CT scan images that are positive for infection, whereas the remaining 1230 images are from patients not infected by the virus. It has the following features:
  • Images Type: CT Scan Images
  • Size of Dataset: 2482 Images
  • Negative Images: 1230
  • Positive Images: 1252
  • Training Size Set: 1800
  • Testing Size Set: 400
  • Validation Set Size: 400
To mitigate model overfitting during training, the training set size is increased from 1275 images by utilizing data augmentation based on random rotation, horizontal flipping, and color jittering.

4.2. Data Preprocessing and Augmentation Techniques

Transfer learning and DL techniques require enormous volumes of data for training and testing, which data augmentation methods help to provide [14,15]. Hence, this research model applies the following data augmentation methods to the datasets (a code sketch of such a pipeline is given after the list):
  • Random Resized Crop: the input image is cropped to a random size and aspect ratio.
  • Random Rotation: the sample is rotated by an angle selected at random.
  • Random Horizontal Flip: the input image is flipped horizontally at random.
  • Color Jittering: the contrast, saturation, and brightness of the input image are modified at random.
  • Training Settings: The overall work is implemented through a Python framework, in which the variables are fixed for all experiments. The simulation parameters are shown in Table 2.
    Table 2. Simulation parameters.
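The following torchvision sketch (an assumed pipeline; the parameter values are illustrative, not the exact settings of Table 2) composes the four augmentations listed above:

```python
# Hypothetical augmentation pipeline for the four operations listed above.
from torchvision import transforms

train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224),       # random resize/crop to a random size and ratio
    transforms.RandomRotation(degrees=15),   # random rotation by a random angle
    transforms.RandomHorizontalFlip(p=0.5),  # random horizontal flip
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),  # color jitter
    transforms.ToTensor(),
])
```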

4.3. Comparison Analysis

Classification accuracy is defined as the proportion of correctly classified COVID-19 and viral pneumonia samples, computed using
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \times 100\%$$
It is important to find the optimal number of iterations that maximizes the model's accuracy without overfitting. This can be done by monitoring the validation accuracy during training, which measures how well the model performs on a separate set of data that is not used for training. The validation accuracy typically starts to plateau after a certain number of iterations, indicating that the model is no longer improving significantly and that further training may lead to overfitting, as shown in Figure 6.
Figure 6. Iteration count vs. classification accuracy.
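In practice, this monitoring can be automated with an early-stopping callback. The following Keras sketch (an assumed setup; the patience value is illustrative) halts training once validation accuracy plateaus and restores the best weights:

```python
# Hypothetical early-stopping setup that halts training when validation accuracy plateaus.
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_accuracy",        # watch held-out accuracy, not training accuracy
    patience=3,                    # stop after 3 epochs without improvement (illustrative)
    restore_best_weights=True)     # roll back to the best validation checkpoint
# model.fit(train_ds, validation_data=val_ds, epochs=10, callbacks=[early_stop])
```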
The greater the classification accuracy, the greater the efficiency of the approach. Figure 6 and Figure 7 compare the classification accuracy of the proposed model with other models based on the iteration and sample counts. The classification accuracy of the proposed ensemble model is high due to the extensive feature extraction performed before classification.
Figure 7. Number of samples vs. classification accuracy.
The feature set is generated from many features using a pre-trained model to eliminate redundant features [16,17]. The selection of transfer learning in the model is based on the recent study and literature analysis, contributing to increased classification accuracy. The existing approaches considered both necessary and redundant features for classification, which degraded the classification accuracy.
The classification accuracy of the proposed method and other approaches, based on sample and iteration counts, is presented in Table 3. The classification accuracy of the proposed method is 95% to 96%, whereas the existing approaches achieve only about 62% to 72% accuracy. The proposed method accurately classifies COVID-19 and non-COVID-19 images.
Table 3. Classification accuracy.
Precision is the measure of relevancy among the classified images, computed using
$$\mathrm{Precision} = \frac{TP}{TP + FP} \times 100\%$$
The precision of the presented method and the existing approaches, with respect to iteration count and the number of samples, is compared in Figure 8 and Figure 9. The figures show that the precision of the proposed model is high due to the implementation of spatial domain filtering, which eliminates noise. Increasing the iteration count of a machine learning model can lead to improvements in recall, but this is not always the case, as shown in Figure 10.
Figure 8. Iteration count vs. precision.
Figure 9. Samples vs. precision.
Figure 10. Iteration count vs. recall.
Further, the classification of the infected and non-infected regions was performed by utilizing feature extraction, in which the optimum approach performed the feature selection. The existing approaches' lack of consideration of pixel-based features and of noise removal in the images resulted in reduced precision [17,18]. The superiority of the proposed pre-trained approach is illustrated in Table 4, which presents the numerical comparison of the presented method and existing approaches in terms of the iteration count. The presented method's precision is 95% to 96%, whereas the existing approaches possess a precision of about 61% to 71%. The proposed method produces better precision than other methods in detecting coronavirus.
Table 4. Precision.
Recall, also called sensitivity, is the measure of correctness among the classified images, calculated using
$$\mathrm{Recall} = \frac{TP}{TP + FN}$$
Figure 11 and Figure 12 sketch the comparison of the recall measure with other methods over the fixed iterations in all cases of test data. The recall of the presented method is high due to the implementation of effective pre-processing techniques to eliminate the noise and correct the illumination.
Figure 11. Large Iteration count vs. recall.
Figure 12. Iteration count vs. F-score.
Further, consideration of various features, such as low and high-level features, improved the recall results of the proposed model. The lack of noise elimination and consideration of integrated features restricted the recall of the existing approaches. The numerical analysis of recall with other methods is shown in Table 5. The presented method has a recall of 94% to 96%, whereas the existing approaches possess only about 62% to 72%. This leads to the conclusion that the presented method is more efficient than the existing approaches in performing coronavirus detection in healthy and viral infection cases [20].
Table 5. Recall.
The F-measure combines precision and recall as their harmonic mean, measured using
$$F\text{-}measure = \frac{2 \times TP}{2 \times TP + FP + FN}$$
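For reference, all four reported metrics follow directly from the confusion-matrix counts; a small helper (a sketch based on the definitions above) is:

```python
# Hypothetical computation of the four reported metrics from raw confusion-matrix counts.
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                  # sensitivity
    f_measure = 2 * tp / (2 * tp + fp + fn)  # harmonic mean of precision and recall
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f_measure": f_measure}
```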
The comparison of the F-measure with other methods based on the iteration count is illustrated in Figure 13. The increased F-measure of the proposed method is due to the increase in precision and recall value.
Figure 13. Large iteration count vs. F-score.
The detection and classification are carried out by pre-processing the input frames and extracting extensive features from the pre-processed images. Pixels are grouped to differentiate between dissimilar features. The classification of coronavirus and pneumonia thus achieves an increased F-measure over the reduced F-measure of the other methods.
The efficiency of the presented method is proved by the numerical analysis presented in Table 6, which compares the F-measure with the other methods. The F-measure of the presented method is 95% to 97%, whereas the existing methods range from 63% to 72%. The developed model has been shown to be efficient for pre-training purposes in detection and classification.
Table 6. F-measure.
The Receiver Operating Characteristic (ROC) curve depicts the relationship between specificity and sensitivity at all possible cut-off values and provides a graphical representation of an approach's efficiency. Better performance is indicated by a curve closer to the top-left corner of the ROC plot. The ROC curve of the proposed model was also analyzed against the other models for all the evaluation image sets. The proposed model yields a better ROC curve than the other models in classification and detection [36,37]. The existing approaches possess reduced ROC curve values due to the inefficiency caused by image noise and the lack of feature integration for classification. The computational time is the time required to compute specific tasks to obtain the desired result. Figure 14 compares the computational time of the proposed and other models based on the iteration factor. The computational time of the existing approaches is high due to the increased time consumed in training the models.
As the number of samples increases, the computation time required to process those samples also increases. This is because more samples typically mean more data to process and analyze, which can be computationally intensive. For example, when training a machine learning model on a dataset of images, the more images there are, the longer it will take to train the model, as shown in Figure 14.
Figure 14. Iteration count vs. computation time.
The benefits of the proposed model are as follows: the pre-trained model possesses fewer layers than the existing models and thereby possesses low complexity. This characteristic of the proposed model requires a short computational time to detect and recognize the coronavirus.
As shown in Figure 15, this method provides minimal computation time without compromising accuracy in detecting and recognizing the coronavirus [38,39]. The time complexity of the layered architecture is linear, that is, O(n). The proposed model is computationally efficient and highly accurate for the multi-class categorization of distinct illness types [40,41,42,43].
Figure 15. Number of samples vs. computation time.
Table 7 shows the comparison of the computation time with other models. From the table, it is clear that the computation time of our approach is low, at about 0.4 s, whereas the existing approaches require up to 0.9 s for both preprocessed and non-preprocessed images.
Table 7. Computation time (s).

4.4. Limitations and Analysis

The limitations of the model are as follows: the quality of outcomes depends on the noise levels in the images, which affects the performance measures; integrating the relevant features is also an important requirement for classification in the proposed model. The future research directions of the proposed model are as follows: enhancements using recent soft computing components and better recommender systems will be developed based on the features of the datasets [19,20,44,45,46,47,48]; large datasets will be considered for validation to further improve the performance measures using soft computing within minimal computing time [25,26,27,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63].

5. Conclusions

COVID-19 and pneumonia usually present with lung symptoms that can be identified through diagnostic studies. Early detection of coronavirus and effective management of its progression can be facilitated through the use of imaging tests; chest X-rays and CT are helpful imaging modalities for identifying coronavirus disease. The wide availability of large annotated image sets and transfer learning methods (VGG-16, ResNet, and DenseNet) has led to considerable progress in transfer learning models for medical image classification. The ensemble learning technique provides an accurate representation because it recovers a hierarchy of localized visual elements from the input. However, irregularities in the annotation of COVID-19 and pneumonia instances in X-ray imaging remain a problematic aspect of working with them. This research classified COVID-19 and pneumonia images in a vast chest X-ray and CT database using an ensemble learning system based on developments in feature extraction techniques. The suggested methodology provided fast and thorough COVID-19 and pneumonia classification results and the ability to deal with data inconsistencies and a limited number of class images. The classification accuracy of the proposed method is about 95% to 96%, the precision is 95% to 96%, the recall is 94% to 96%, and the F-measure ranges from 95% to 97%. The proposed method is more efficient than the existing approaches in performing coronavirus and pneumonia detection in healthy and viral infection cases. The devised model is adequate for pre-training in detection and classification. The time complexity of the layered architecture is linear, that is, O(n), and takes less than 0.5 s. The proposed model is computationally efficient and highly accurate for the multi-class categorization of distinct illness types.

Author Contributions

Conceptualization, X.X.; Methodology, R.M.; Software, S.C.; Validation, G.M.A.; Formal analysis, R.R.M.; Writing—original draft, S.K.R.; Writing—review & editing, O.I.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (No. 62172095), the Natural Science Foundation of Fujian Province (Nos. 2020J01875 and 2022J01644).

Institutional Review Board Statement

The study did not require ethical approval.

Data Availability Statement

Not applicable.

Acknowledgments

The authors sincerely acknowledge the funding support received through the National Natural Science Foundation of China (No. 62172095) and the Natural Science Foundation of Fujian Province (Nos. 2020J01875 and 2022J01644).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shah, F.M.; Joy, S.K.; Ahmed, F.; Hossain, T.; Humaira, M.; Ami, A.S.; Paul, S.; Jim, M.A.; Ahmed, S. A Comprehensive Survey of COVID-19 Detection Using Medical Images. SN Comput. Sci. 2021, 2, 434. [Google Scholar] [CrossRef] [PubMed]
  2. Riaz, M.; Bashir, M.M.; Younas, I. Metaheuristics based COVID-19 detection using medical images: A review. Comput. Biol. Med. 2022, 144, 105344. [Google Scholar] [CrossRef] [PubMed]
  3. Liang, S.; Liu, H.; Gu, Y.; Guo, X.; Li, H.; Li, L.; Wu, Z.; Liu, M.; Tao, L. Fast automated detection of COVID-19 from medical images using convolutional neural networks. Commun. Biol. 2021, 4, 35. [Google Scholar] [CrossRef] [PubMed]
  4. Singh, M.; Bansal, S.; Ahuja, S.; Dubey, R.K.; Panigrahi, B.K.; Dey, N. Transfer learning–based ensemble support vector machine model for automated COVID-19 detection using lung computerized tomography scan data. Med. Biol. Eng. Comput. 2021, 59, 825–839. [Google Scholar] [CrossRef]
  5. Das, A.; Ghosh, S.; Thunder, S.; Dutta, R.; Agarwal, S.; Chakrabarti, A. Automatic COVID-19 detection from X-ray images using ensemble learning with convolutional neural network. Pattern Anal. Appl. 2021, 24, 1111–1124. [Google Scholar] [CrossRef]
  6. Horry, M.J.; Chakraborty, S.; Paul, M.; Ulhaq, A.; Pradhan, B.; Saha, M.; Shukla, N. X-ray image based COVID-19 detection using pre-trained deep learning models. IEEE Access 2020, 8, 149808–149824. [Google Scholar] [CrossRef]
  7. Yang, D.; Martinez, C.; Visuña, L.; Khandhar, H.; Bhatt, C.; Carretero, J. Detection and Analysis of COVID-19 in Medical Images Using Deep Learning Techniques. Sci. Rep. 2021, 11, 19638. [Google Scholar] [CrossRef] [PubMed]
  8. Bansal, S.; Singh, M.; Dubey, R.K.; Panigrahi, B.K. Multi-Objective Genetic Algorithm Based deep learning Model for Automated COVID-19 Detection using Medical Image Data. J. Med. Biol. Eng. 2021, 41, 678–689. [Google Scholar] [CrossRef]
  9. Murugappan, M.; Bourisly, A.K.; Krishnan, P.T.; Maruthapillai, V.; Muthusamy, H. Artificial Intelligence Based COVID-19 Detection using Medical Imaging Methods: A Review. Comput. Model. Imaging SARS-CoV-2 COVID-19 2021, 2, 91–107. [Google Scholar] [CrossRef]
  10. Agrawal, T.; Choudhary, P. FocusCovid: Automated COVID-19 detection using deep learning with chest X-ray images. Evol. Syst. 2021, 13, 519–533. [Google Scholar] [CrossRef]
  11. Zhang, W.; Zhou, T.; Lu, Q.; Wang, X.; Zhu, C.; Sun, H.; Wang, Z.; Lo, S.K.; Wang, F. Dynamic-Fusion-Based Federated Learning for COVID-19 Detection. IEEE Internet Things J. 2021, 8, 15884–15891. [Google Scholar] [CrossRef] [PubMed]
  12. Abbas, A.; Abdelsamea, M.M.; Gaber, M.M. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 2021, 51, 854–864. [Google Scholar] [CrossRef] [PubMed]
  13. Ghoshal, B.; Tucker, A. Estimating Uncertainty and Interpretability in deep learning for Coronavirus (COVID-19) Detection. arXiv 2020, arXiv:2003.10769. [Google Scholar]
  14. Islam, M.Z.; Islam, M.M.; Asraf, A. A Combined Deep CNN-LSTM Network for the Detection of Novel Coronavirus (COVID-19) Using X-ray Images. Inform. Med. Unlocked 2020, 20, 100412. [Google Scholar] [CrossRef] [PubMed]
  15. Mahmoudi, S.A.; Stassin, S.; Daho, M.E.; Lessage, X.; Mahmoudi, S. Explainable deep learning for Covid-19 Detection Using Chest X-ray and CT-Scan Images. In Healthcare Informatics for Fighting COVID-19 and Future Epidemics; Springer: Berlin/Heidelberg, Germany, 2021; pp. 311–336. [Google Scholar]
  16. Zebin, T.; Rezvy, S. COVID-19 detection and disease progression visualization: Deep learning on chest X-rays for classification and coarse localization. Appl. Intell. (Dordr. Neth.) 2021, 51, 1010–1021. [Google Scholar] [CrossRef] [PubMed]
  17. Shorfuzzaman, M.; Masud, M.; Alhumyani, H.; Anand, D.; Singh, A. Artificial Neural Network-Based Deep Learning Model for COVID-19 Patient Detection Using X-Ray Chest Images. J. Healthc. Eng. 2021, 2021, 5513679. [Google Scholar] [CrossRef] [PubMed]
  18. Horry, M.J.; Chakraborty, S.; Paul, M.; Ulhaq, A.; Pradhan, B.; Saha, M.; Shukla, N. COVID-19 Detection Through Transfer Learning Using Multimodal Imaging Data. IEEE Access 2021, 8, 149808–149824. [Google Scholar] [CrossRef]
  19. Alqudah, A.M.; Qazan, S.; Alqudah, A. Automated Systems for Detection of COVID-19 Using Chest X-ray Images and Light-weight Convolutional Neural Networks. Sensors 2020, 20, 6985. [Google Scholar] [CrossRef]
  20. Khalifa, N.M.; Taha, M.H.; Hassanien, A.E.; Taha, S.H. The Detection of COVID-19 in CT Medical Images: A deep learning Approach. In Big Data Analytics and Artificial Intelligence Against COVID-19: Innovation Vision and Approach; Springer: Berlin/Heidelberg, Germany, 2020; Volume 78, pp. 73–90. [Google Scholar]
  21. Jangam, E.; Barreto, A.A.D.; Annavarapu, C.S.R. Automatic detection of COVID-19 from chest CT scan and chest X-Rays images using deep learning, transfer learning and stacking. Appl. Intell. 2021, 52, 2243–2259. [Google Scholar] [CrossRef]
  22. Padma, T.; Kumari, C.U. Deep learning Based Chest X-Ray Image as a Diagnostic Tool for COVID-19. In Proceedings of the 2020 International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, 10–12 September 2020; pp. 589–592. [Google Scholar]
  23. Nawshad, M.A.; Shami, U.A.; Sajid, S.; Fraz, M.M. Attention Based Residual Network for Effective Detection of COVID-19 and Viral Pneumonia. In Proceedings of the 2021 International Conference on Digital Futures and Transformative Technologies (ICoDT2), Islamabad, Pakistan, 20–21 May 2021; pp. 1–7. [Google Scholar]
  24. Ashraf, A.; Malik, A.U.; Khan, Z.H. POSTER: Diagnosis of COVID-19 through Transfer Learning Techniques on CT Scans: A Comparison of deep learning Models. In Proceedings of the 2022 2nd International Conference of Smart Systems and Emerging Technologies (SMARTTECH), Riyadh, Saudi Arabia, 9–11 May 2022. [Google Scholar]
  25. Kumar, N.; Hashmi, A.; Gupta, M.; Kundu, A. Automatic Diagnosis of Covid-19 Related Pneumonia from CXR and CT-Scan Images. Eng. Technol. Appl. Sci. Res. 2022, 12, 7993–7997. [Google Scholar] [CrossRef]
  26. Kamil, M.Y. A deep learning framework to detect Covid-19 disease via chest X-ray and CT scan images. Int. J. Electr. Comput. Eng. 2021, 11, 844–850. [Google Scholar] [CrossRef]
  27. Chouat, I.; Echtioui, A.; Khemakhem, R.; Zouch, W.; Ghorbel, M.; Ben Hamida, A. COVID-19 detection in CT and CXR images using deep learning models. Biogerontology 2022, 23, 65–84. [Google Scholar] [CrossRef] [PubMed]
  28. Patel, S. Classification of COVID-19 from chest X-ray images using a deep convolutional neural network. Turk. J. Comput. Math. Educ. 2021, 12, 2643–2651. [Google Scholar]
  29. Neha, K.; Joshi, K.P.; Jyothi, N.A.; Kumar, J.V. Preliminary Detection of COVID-19 Using deep learning and Machine Learning Techniques on Radiological Data. Indian J. Comput. Sci. Eng. 2021, 12, 79–88. [Google Scholar] [CrossRef]
  30. Olcer, D.; Erdaş, Ç.B. A deep learning approach fed by ct scans for diagnosis of COVID-19. Selcuk. Univ. J. Eng. Sci. 2020, 19, 110–116. [Google Scholar]
  31. Umair, M.; Khan, M.S.; Ahmed, F.; Baothman, F.; Alqahtani, F.; Alian, M.; Ahmad, J. Detection of COVID-19 Using Transfer Learning and Grad-CAM Visualization on Indigenously Collected X-ray Dataset. Sensors 2021, 21, 5813. [Google Scholar] [CrossRef]
  32. Dandotiya, H. Deep learning-based detection model for coronavirus (COVID-19) using CT and X-ray image data. Diagnostics 2021, 11, 340. [Google Scholar] [CrossRef]
  33. Shankar, K.; Perumal, E. Automated Detection and Classification of COVID-19 from Chest X-ray Images Using Deep Learning. J. Comput. Theor. Nanosci. 2020, 17, 5457–5463. [Google Scholar] [CrossRef]
  34. Jain, A.; Ratnoo, S.; Kumar, D.K. Convolutional Neural Network for Covid-19 detection from X-ray images. In Proceedings of the 2021 Fourth International Conference on Computational Intelligence and Communication Technologies (CCICT), Sonepat, India, 3 July 2021; pp. 100–104. [Google Scholar]
  35. Ahmed, D.S.; Allah, H.A.A.A.; Hussain, S.A.-K.; Abbas, I.K. Detection COVID-19 of CT-Scan Image for Hospitalized Iraqi Patients based on Deep Learning. Webology 2022, 19, 1028–1055. [Google Scholar] [CrossRef]
  36. Yadav, S.S.; Bendre, M.; Vikhe, P.S.; Jadhav, S.M. Analysis of deep machine learning algorithms in COVID-19 disease diagnosis. arXiv 2020, arXiv:PPR:PPR347356. [Google Scholar]
  37. Yadav, S.S.; Sandhu, J.K.; Bendre, M.; Vikhe, P.S.; Kaur, A. A comparison of deep machine learning algorithms in COVID-19 disease diagnosis. arXiv 2020, arXiv:2008.11639. [Google Scholar]
  38. Kumar, S.; Abhishek, K.; Singh, K. COVID-19 Detection from Chest X-Rays and CT Scans using Dilated Convolutional Neural Networks. In Proceedings of the 2021 10th IEEE International Conference on Communication Systems and Network Technologies (CSNT), Bhopal, India, 18–19 June 2021; pp. 369–374. [Google Scholar]
  39. Marappan, R.; Sethumadhavan, G. Solving graph coloring problem for large graphs. Glob. J. Pure Appl. Math. 2015, 11, 2487–2494. [Google Scholar]
  40. Alakus, T.B.; Turkoglu, I. Comparison of deep learning approaches to predict COVID-19 infection. Chaos Solitons Fractals 2020, 140, 110120. [Google Scholar] [CrossRef]
  41. Marappan, R.; Sethumadhavan, G. Solution to graph coloring problem using heuristics and recursive backtracking. Int. J. Appl. Eng. Res. 2015, 10, 25939–25944. [Google Scholar]
  42. Sitaula, C.; Hossain, M.B. Attention-based VGG-16 model for COVID-19 chest X-ray image classification. Appl. Intell. 2020, 51, 2850–2863. [Google Scholar] [CrossRef] [PubMed]
  43. Pathak, Y.; Shukla, P.; Tiwari, A.; Stalin, S.; Singh, S. Deep Transfer Learning Based Classification Model for COVID-19 Disease. Irbm 2022, 43, 87–92. [Google Scholar] [CrossRef]
  44. Bhaskaran, S.; Marappan, R. Design and Analysis of an Efficient Machine Learning Based Hybrid Recommendation System with Enhanced Density-Based Spatial Clustering for Digital E-Learning Applications. Complex Intell. Syst. 2021, 1, 1–17. [Google Scholar] [CrossRef]
  45. Marappan, R.; Sethumadhavan, G. A New Genetic Algorithm for Graph Coloring. In Proceedings of the 2013 Fifth International Conference on Computational Intelligence, Modelling and Simulation, Seoul, Republic of Korea, 24–25 September 2013; pp. 49–54. [Google Scholar] [CrossRef]
  46. Sethumadhavan, G.; Marappan, R. A genetic algorithm for graph coloring using single parent conflict gene crossover and mutation with conflict gene removal procedure. In Proceedings of the 2013 IEEE International Conference on Computational Intelligence and Computing Research, Enathi, India, 26–28 December 2013; pp. 1–6. [Google Scholar] [CrossRef]
  47. Marappan, R.; Sethumadhavan, G. Divide and conquer based genetic method for solving channel allocation. In Proceedings of the 2016 International Conference on Information Communication and Embedded Systems (ICICES), Chennai, India, 25–26 February 2016; pp. 1–5. [Google Scholar] [CrossRef]
  48. Veeramanickam, M.R.M.; Rodriguez, C.; Navarro Depaz, C.; Concha, U.R.; Pandey, B.; Kharat, R.S.; Marappan, R. Machine Learning Based Recommendation System for Web-Search Learning. Telecom 2023, 4, 118–134. [Google Scholar] [CrossRef]
  49. Marappan, R.; Sethumadhavan, G. Solution to Graph Coloring Using Genetic and Tabu Search Procedures. Arab. J. Sci. Eng. 2018, 43, 525–542. [Google Scholar] [CrossRef]
  50. Marappan, R.; Sethumadhavan, G. Complexity Analysis and Stochastic Convergence of Some Well-known Evolutionary Operators for Solving Graph Coloring Problem. Mathematics 2020, 8, 303. [Google Scholar] [CrossRef]
  51. Bhaskaran, S.; Marappan, R.; Santhi, B. Design and Comparative Analysis of New Personalized Recommender Algorithms with Specific Features for Large Scale Datasets. Mathematics 2020, 8, 1106. [Google Scholar] [CrossRef]
  52. Bhaskaran, S.; Marappan, R.; Santhi, B. Design and Analysis of a Cluster-Based Intelligent Hybrid Recommendation System for E-Learning Applications. Mathematics 2021, 9, 197. [Google Scholar] [CrossRef]
  53. Hussain Ali, Y.; Chinnaperumal, S.; Marappan, R.; Raju, S.K.; Sadiq, A.T.; Farhan, A.K.; Srinivasan, P. Multi-Layered Non-Local Bayes Model for Lung Cancer Early Diagnosis Prediction with the Internet of Medical Things. Bioengineering 2023, 10, 138. [Google Scholar] [CrossRef] [PubMed]
  54. Reegu, F.A.; Abas, H.; Jabbari, A.; Akmam, R.; Uddin, M.; Wu, C.-M.; Chen, C.-L.; Khalaf, O.I. Interoperability Requirements for Blockchain-Enabled Electronic Health Records in Healthcare: A Systematic Review and Open Research Challenges. Secur. Commun. Networks 2022, 2022, 1–11. [Google Scholar] [CrossRef]
  55. Banumathy, D.; Khalaf, O.I.; Romero, C.A.T.; Indra, J.; Sharma, D.K. CAD of BCD from Thermal Mammogram Images Using Machine Learning. Intell. Autom. Soft Comput. 2022, 34, 595–612. [Google Scholar] [CrossRef]
  56. Tavera, C.A.; Ortiz, J.H.; Khalaf, O.I.; Saavedra, D.F.; Aldhyani, T.H.H. Wearable Wireless Body Area Networks for Medical Applications. Comput. Math. Methods Med. 2021, 2021, 5574376. [Google Scholar] [CrossRef]
  57. Sudhakar, S.; Khalaf, O.I.; Priyadarsini, S.; Sharma, D.K.; Amarendra, K.; Hamad, A.A. Smart healthcare security device on medical IoT using raspberry pi. Int. J. Reliab. Qual. E-Healthc. 2022, 11, 1–11. [Google Scholar]
  58. Sengan, S.; Khalaf, O.I.; Rao, G.R.K.; Sharma, D.K.; Amarendra, K.; Hamad, A.A. Security-Aware Routing on Wireless Communication for E-Health Records Monitoring Using Machine Learning. Int. J. Reliab. Qual. E-Healthcare 2021, 11, 1–10. Available online: https://econpapers.repec.org/article/iggjrqeh0/v_3a11_3ay_3a2022_3ai_3a3_3ap_3a1-10.htm (accessed on 15 October 2021). [CrossRef]
  59. Marappan, R.; Sethumadhavan, G. Solution to Graph Coloring Problem using Evolutionary Optimization through Symmetry-Breaking Approach. Int. J. Appl. Eng. Res. 2015, 10, 26573–26580. [Google Scholar]
  60. Marappan, R.; Sethumadhavan, G. Solving Graph Coloring Problem Using Divide and Conquer-Based Turbulent Particle Swarm Optimization. Arab. J. Sci. Eng. 2021, 47, 9695–9712. [Google Scholar] [CrossRef]
  61. Marappan, R.; Sethumadhavan, G. Solution to graph coloring problem using divide and conquer based genetic method. In Proceedings of the 2016 International Conference on Information Communication and Embedded Systems (ICICES), Chennai, India, 25–26 February 2016; pp. 1–5. [Google Scholar] [CrossRef]
  62. Anand, N.S.; Marappan, R.; Sethumadhavan, G. Performance Analysis of SAR Image Speckle Filters and its Recent Challenges. In Proceedings of the 2018 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Madurai, India, 13–15 December 2018; pp. 1–4. [Google Scholar] [CrossRef]
  63. Bhaskaran, S.; Marappan, R. Enhanced personalized recommendation system for machine learning public datasets: Generalized modeling, simulation, significant results and analysis. Int. J. Inf. Technol. 2023, 1, 1–13. [Google Scholar] [CrossRef]
