Journal of Imaging
  • Article
  • Open Access

19 December 2024

X-Ray Image-Based Real-Time COVID-19 Diagnosis Using Deep Neural Networks (CXR-DNNs)

1 Telecommunications Engineering School, University of Malaga, 29010 Malaga, Spain
2 Institute of Oceanic Engineering Research, University of Malaga, 29010 Malaga, Spain
3 Department of Software Engineering, Sir Syed University of Engineering & Technology, Karachi 75300, Pakistan
* Author to whom correspondence should be addressed.
This article belongs to the Section Medical Imaging

Abstract

On 11 March 2020, the outbreak of COVID-19, a coronavirus illness, was declared a global pandemic by the World Health Organization. Since then, nearly seven million people have died and over 765 million confirmed cases of COVID-19 have been reported. The goal of this study is to develop a diagnostic tool for detecting COVID-19 infections more efficiently. Currently, the most widely used method is Reverse Transcription Polymerase Chain Reaction (RT-PCR), a clinical technique for infection identification. However, RT-PCR is expensive, has limited sensitivity, and requires specialized medical expertise. One of the major challenges in the rapid diagnosis of COVID-19 is the need for reliable imaging, particularly X-ray imaging. This work takes advantage of artificial intelligence (AI) techniques to enhance diagnostic accuracy by automating the detection of COVID-19 infections from chest X-ray (CXR) images. We obtained and analyzed CXR images from the Kaggle public database (4035 images in total), including cases of COVID-19, viral pneumonia, pulmonary opacity, and healthy controls. By integrating advanced techniques with transfer learning from pre-trained convolutional neural networks (CNNs), specifically InceptionV3, ResNet50, and Xception, we achieved an accuracy of 95%, significantly higher than the 85.5% achieved with ResNet50 alone. Additionally, our proposed method, CXR-DNNs, can accurately distinguish between three different types of chest X-ray images for the first time. This computer-assisted diagnostic tool has the potential to significantly enhance the speed and accuracy of COVID-19 diagnoses.

1. Introduction

In Wuhan (Hubei Province, China), officials reported an increase in cases of pneumonia with an unexplained cause on 31 December 2019. China’s Center for Disease Control and Prevention (China CDC) confirmed on 9 January 2020 that a newly discovered coronavirus (then designated 2019-nCoV) was the likely causal agent of these outbreaks. Chinese health officials confirmed that the virus can spread from person to person. On 11 February 2020, the World Health Organization (WHO) renamed the disease caused by the virus that had been circulating since 2019, previously referred to as 2019-nCoV, COVID-19 (Coronavirus Disease 2019). The coronavirus responsible for COVID-19 has been officially dubbed SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) by the Coronavirus Study Group (CSG) of the International Committee on Taxonomy of Viruses. The official classification of viruses was updated to place it within the Coronaviridae family because of the novel nature of the human disease and in conformity with phylogeny, taxonomy, and best practices. The worldwide spread and severity of SARS-CoV-2 led the World Health Organization to declare COVID-19 a pandemic [1]. Around two million people have been infected with COVID-19, making it a serious risk to health across the world. When it comes to diagnosing and monitoring COVID-19 pneumonia, imaging modalities, especially CT, play a crucial role [2].
Testing for SARS-CoV-2, the coronavirus that causes COVID-19, is performed on samples taken from the nose or mouth. Nucleic Acid Amplification Tests (NAATs) and antigen tests are the main viral tests, and one test type may be preferred in some cases. NAATs, such as PCR-based assays, are performed in laboratories and are the most trustworthy tests for both symptomatic and asymptomatic patients. However, viral genetic material may remain in the body for up to 90 days after a positive test, so NAATs are not recommended for people who tested positive within the previous 90 days. Antigen tests give results in 15–30 min but are less reliable when used to test asymptomatic people.
In this study, we address the problem of three-way classification and demonstrate that the Vision Transformer can discriminate between additional categories of lung disorders with extremely high accuracy and specificity. No other similar methods, to the best of our knowledge, have achieved such a high degree of accuracy across three distinct classes of CXR images when automatically detecting COVID-19. The major contributions of our manuscript are as follows:
  • The integration of advanced techniques with transfer learning from pre-trained convolutional neural networks (CNNs);
  • Our proposed CXR-DNN model’s ability to accurately distinguish between three types of chest X-ray images;
  • The potential of our computer-assisted diagnostic tool to significantly improve the speed and accuracy of COVID-19 diagnosis at scale.
The structure of this work is as follows: Section 2 reviews state-of-the-art techniques. Section 3 explains the details of the proposed CXR-DNN technique. Results obtained with real X-ray images are presented in Section 4. Finally, the last section presents the main conclusions and remarks derived from this work.

3. CXR-DNN Technique

The ML branch of AI is concerned with the study and creation of algorithms with the capacity for learning and adaptation. These algorithms can learn from their mistakes and improve at a given task without additional explicit instruction. ML can be applied in many different fields, from natural language processing to video games; it is already making a substantial difference in a wide variety of disciplines and has the potential to transform several established industries. Its many benefits include gains in effectiveness, precision, flexibility, responsiveness, automation, and economy.
The main blocks of the ML-based CXR-DNN algorithm developed here are presented in Figure 1. This figure illustrates the CXR-DNN model’s comprehensive architecture and methodology, in which chest X-ray images are processed through a series of phases for the purpose of screening and classifying COVID-19, viral pneumonia, and normal cases. The diagram highlights critical components, such as data input, preprocessing, feature extraction, and final classification. The main objective is to diagnose the illness from a CXR image. Specifically, the CXR-DNN analyzes CXR images by employing a DNN.
Figure 1. Block diagram of CXR-DNN used for screening COVID-19.
In Figure 2, it can be observed that EfficientNetB7 uses MBConv (Mobile Inverted Bottleneck Convolution) blocks with depth-wise separable convolutions, squeeze-and-excitation, and compound scaling to efficiently balance depth, width, and resolution for high-performance image classification. The figure details the EfficientNetB7 architecture employed in the CXR-DNN model, deconstructed into its fundamental layers and modules, with particular emphasis on the advanced convolutional and MBConv blocks that improve the precision of feature extraction and classification. EfficientNetB7’s unique compound scaling and MBConv blocks with squeeze-and-excitation make it more efficient and accurate than other CNN models. Figure 1 and Figure 2 are linked, reflecting different levels of detail within the CXR-DNN model: Figure 1 provides a macro-level perspective, presenting the whole model’s process from input to output, while Figure 2 offers a micro-level perspective, highlighting the EfficientNetB7 architecture that is fundamental to the feature-extraction step shown in Figure 1. Taken together, they offer a complete understanding of both the general framework and the detailed operations of the model. Figure 1 supports the study by visually summarizing the methodology used for COVID-19 screening, making it easier for readers to grasp the end-to-end process; Figure 2 complements this by detailing the core architecture that underpins the model’s robust performance, emphasizing the technical innovation and depth of the proposed solution. This combination helps to clearly convey the significance of the research and its approach to achieving reliable COVID-19 diagnostics through advanced deep learning techniques. To further illustrate the differences among CXR images belonging to classes 1 (COVID-19), 2 (normal), and 3 (pneumonia), Figure 3 shows three examples of lung X-rays (one for each class).
Figure 2. Proposed EfficientNetB7 architecture.
Figure 3. CXR images of lungs in patients: (a) healthy, (b) COVID-19, and (c) pneumonia.
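To make the transfer-learning setup of Figure 1 and Figure 2 concrete, the following is a minimal sketch, assuming TensorFlow/Keras (which ships EfficientNetB7 pre-trained on ImageNet; the paper does not state its framework), of attaching a three-class head to a frozen backbone. The input resolution and dropout rate are assumptions for illustration.

```python
# A minimal sketch (not the authors' exact code) of adapting an ImageNet
# pre-trained EfficientNetB7 backbone to the three-class CXR problem.
import tensorflow as tf
from tensorflow.keras import layers, models

backbone = tf.keras.applications.EfficientNetB7(
    include_top=False,          # drop the original 1000-class ImageNet head
    weights="imagenet",
    input_shape=(600, 600, 3))  # assumed input resolution
backbone.trainable = False      # freeze for the initial transfer-learning phase

x = layers.GlobalAveragePooling2D()(backbone.output)
x = layers.Dropout(0.3)(x)      # assumed regularization value
outputs = layers.Dense(3, activation="softmax")(x)  # COVID-19 / normal / pneumonia
model = models.Model(backbone.input, outputs)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # rate from Section 4.4
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"])
```

The functional form keeps the backbone’s internal layers addressable from the top-level model, which is convenient for the Grad-CAM visualization discussed in Section 4.2.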
Firstly, the CNN divides an image into zones so that it can estimate bounding boxes and probabilities for each location [26]. The model we have developed allows us to divide our data into three categories: COVID-19, viral pneumonia, and normal. X-ray images of patients are captured by our system and uploaded to an ML platform to train the model. The platform processes each X-ray image and classifies it as accurately as it can. Transfer learning, built-in evaluation metrics, and easy accessibility and integration make it simple and affordable to obtain precise results for patients.
Next, CXR images were used to further train numerous pre-trained CNN models for COVID-19 detection. Subsets of classifiers with a cardinality greater than one are then aggregated for chest X-ray screening [27], and their measures are calculated using the proposed advanced CNN technique. Finally, the results of three separate aggregations, computed with each of the three weighting methods (training, validation, and testing datasets), are evaluated in terms of confusion matrices, per-class accuracy, three efficiency parameters (precision, recall, and F1-score), and graphs of accuracy and loss per epoch. This evaluation is the final stage of the CXR-DNN presented in Figure 1. The three image sets (training, validation, and testing) were used to generate features. The images are then labeled and placed into three categories: COVID-19, normal, and viral pneumonia, and the weights of each set (training, validation, and testing) are determined.
Our proposed technique is built on convolution. The first component, the stem block, is an attention module that applies its attention mechanism several times in parallel. The second component is a series of MBConv blocks that provide efficient feature extraction, reduced computational complexity, and enhanced representational power through depth-wise separable convolutions, squeeze-and-excitation blocks, and residual connections. The third component is compound scaling [28], which enables balanced model scaling by simultaneously adjusting depth, width, and resolution, leading to improved performance, optimal resource utilization, and enhanced model generalization. The next step is to determine suitable parameter values with the help of the extracted features. Accuracy is then calculated according to several performance indicators (see Figure 1).
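For reference, compound scaling can be summarized as follows (this is the standard formulation from the EfficientNet literature, not a parameterization quoted from this paper): a single coefficient $\phi$ jointly increases the network depth $d$, width $w$, and input resolution $r$,

$$d = \alpha^{\phi}, \qquad w = \beta^{\phi}, \qquad r = \gamma^{\phi}, \qquad \text{subject to } \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2, \quad \alpha, \beta, \gamma \ge 1,$$

so that increasing $\phi$ by one roughly doubles the computational cost while keeping the three dimensions in balance; EfficientNetB7 corresponds to a large value of $\phi$ under fixed $\alpha$, $\beta$, and $\gamma$.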
To guarantee the dataset’s veracity, it was downloaded from the Kaggle community [29]. Researchers have assembled a database of CXR images, the COVID-19_Radiography_Dataset, covering three types of patients, i.e., COVID-19, normal, and pneumonia. All images are stored in Portable Network Graphics (PNG) format. Several other CXR image datasets are available besides the one used here; a selection of them is shown in Table 1.
Table 1. CXR image availability in openly available databases.
Figure 3a shows a normal chest X-ray, which reveals transparent lungs with no abnormalities. COVID-19 is a potentially fatal respiratory illness that causes severe breathing difficulties in both lungs (Figure 3b); in a CXR of a COVID-19 infection, the lungs may show patchy or consolidated opacities. When an infection strikes the lungs, it can lead to viral pneumonia (Figure 3c), which causes inflammation of the alveoli, the tiny air sacs in the lungs that are essential for exchanging oxygen and carbon dioxide, and which requires immediate medical advice or treatment. Coughing, fever, shortness of breath, chest pain, and exhaustion are all symptoms of pneumonia. Because of the inflammation, fluid may collect in the lungs, making it difficult to breathe. The best way to distinguish between COVID-19 and COVID-19 pneumonia is to consider them different stages of the same illness: COVID-19 is a respiratory sickness caused by SARS-CoV-2, and COVID-19 pneumonia is a complication of COVID-19 that causes the aforementioned inflammation and fluid in the lungs [33]. Influenza is also a cause for alarm, since it may lead to everything from a mild cold to pneumonia, acute respiratory distress syndrome, and even death.
Using labeled CXR images of COVID-19-positive, viral pneumonia, and normal cases, we trained a CNN that uses advanced techniques to enhance efficiency and performance. Table 2 presents the assigned class numbers together with the file names used from the dataset: 1 (COVID), 2 (Normal), or 3 (Pneumonia).
Table 2. Classes and models for classification.
Upon submitting a CXR image for evaluation, we anticipate receiving the expected result, i.e., COVID-19, normal, or pneumonia. If the model does not return the result expected for the given class, it is not perfectly accurate. We have found that, for the tested images mentioned in Table 1, our algorithm accurately returns the result for the specified class, as presented in the next section.

4. Results

4.1. Computational and Hardware Considerations

The proposed CXR-DNN has been run on a computer with the following specifications: Intel(R) Core(TM) i5-6300U CPU (2.40 GHz), 8.00 GB RAM, and integrated graphics (Intel(R) HD Graphics 520). This configuration represents a modest setup and supports the system’s basic functionality for diagnostic processing. However, given the computational intensity of deep learning algorithms, more powerful hardware, such as a dedicated GPU, can substantially reduce latency and processing time, enhancing real-time diagnostic capabilities. On this system, each diagnostic run completes in approximately 15–60 s. This demonstrates that, while higher-end hardware is ideal for optimal performance, the CXR-DNN system can still function on more accessible, lower-specification configurations.

4.2. Model Interpretability with Grad-CAM

We used Gradient-weighted Class Activation Mapping (Grad-CAM) to help comprehend the CXR-DNN model and provide insight into its decision-making process. This method is widely used with convolutional neural networks (CNNs) to create visual heatmaps highlighting the areas of an input image that most influence the model’s prediction [34]. For the COVID-19, viral pneumonia, and normal classes, Grad-CAM allowed us to pinpoint the regions of the chest X-ray image that the model emphasized during classification. Grad-CAM is a useful instrument for clinical application, as the visualizations it produces not only boost confidence in the model’s predictions but also improve the transparency of its decision-making process. Additionally, the integration of Grad-CAM contributes to model explainability in clinical practice, helping practitioners better understand the rationale behind each prediction. This step is essential for ensuring that the model can be used confidently in real-world medical applications.
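To make this concrete, below is a minimal Grad-CAM sketch in the spirit of [34], written for a Keras model such as the EfficientNetB7 sketch in Section 3. The layer name "top_conv" (the final convolutional layer of Keras’s EfficientNetB7) is an assumption, as the paper gives no implementation details.

```python
# Minimal Grad-CAM sketch [34]; assumes the functional EfficientNetB7 model
# from Section 3, whose last conv layer in Keras is named "top_conv".
import numpy as np
import tensorflow as tf

def grad_cam(model, image, class_idx, last_conv_name="top_conv"):
    """Heatmap of the input regions that most influence `class_idx`."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, class_idx]                  # score of the target class
    grads = tape.gradient(score, conv_out)           # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))     # global-average-pool the grads
    cam = tf.reduce_sum(weights[:, None, None, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0]                         # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalize to [0, 1]

# Usage: heatmap = grad_cam(model, preprocessed_image, class_idx=0)
# The heatmap is then resized and overlaid on the original CXR image.
```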

4.3. Steps of Methods on Preprocessing for Image Quality and Robustness

Preprocessing methods are central to improving CXR image quality and guaranteeing model resilience in classification problems. This work used standard preprocessing techniques, including image scaling, normalization, and selective augmentation. Images were rescaled to the input dimensions required by the EfficientNetB7 architecture, fostering uniformity for efficient feature extraction. Pixel intensity levels were standardized to the [0, 1] range to reduce variance and improve training stability. Additionally, rotation and flipping were judiciously employed to imitate real-world data variations, enhancing the generalization power of the model. These preprocessing techniques substantially helped optimize the data for training and enhanced the dependability and diagnostic accuracy of the model in clinical situations.
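A hedged sketch of this pipeline follows, again assuming TensorFlow/Keras; the augmentation factors below are illustrative values, not parameters reported in the paper.

```python
# Sketch of the preprocessing in Section 4.3 (assumed framework: TF/Keras).
import tensorflow as tf

IMG_SIZE = (600, 600)  # assumed EfficientNetB7 input resolution

def preprocess(image, label):
    """Resize to the network input size and normalize intensities to [0, 1]."""
    image = tf.image.resize(image, IMG_SIZE)
    image = tf.cast(image, tf.float32) / 255.0
    return image, label

# Selective augmentation: small rotations, horizontal flips, brightness jitter.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomBrightness(0.1),
])

# Example (applied to the training split only):
# train_ds = train_ds.map(preprocess).map(lambda x, y: (augment(x, training=True), y))
```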

4.4. Hyperparameter Tuning and Its Impact on Model Performance

This work aims to obtain the best performance of the proposed CXR-DNN model through hyperparameter tuning. The learning rate, batch size, and number of epochs were chosen based on iterative testing and deep learning domain expertise. Specifically, after evaluating a range of values (e.g., 0.001, 0.0005, 0.0001, and 0.00005), the learning rate was fixed at 0.0001 to provide sustained convergence without overshooting the loss minimum. Experiments balancing computational efficiency and model generalization led us to choose a batch size of 32; smaller batch sizes (e.g., 16) resulted in noisier gradients, while larger batch sizes (e.g., 64) raised memory demand without appreciable accuracy gains. Setting the number of epochs at 100 gave the model enough training time to acquire intricate features without overfitting, as tracked by validation accuracy. These choices matched the results of our pilot testing and literature benchmarks, confirming their relevance for robust CXR image categorization.
In this work, a CNN with advanced techniques (i.e., EfficientNetB7) has been implemented with 100 epochs, a 0.0001 learning rate, and a batch size of 32 for the three classes, i.e., COVID-19, normal, and pneumonia (see Table 2). To train our algorithm for optimal results, we used a total of 4035 CXR images, i.e., 1345 CXR images for each class (COVID-19, normal, and pneumonia). We used 70% of the dataset for training (941 images per class) and 15% each for validation and testing (202 images per class). Highly accurate and low-cost automated screening of CXR images using a CNN with a transfer learning algorithm may detect COVID-19-related lung disease. Our method used random rotations and brightness changes as data augmentation to improve dataset variability and model resilience. Moreover, a fine-tuning approach was adopted wherein a lower learning rate was used when training the unfrozen layers (as sketched below), ensuring that the model could efficiently adapt to the particular properties of CXR images while preserving the features already acquired by the pre-trained network. Outperforming previous CNN designs (e.g., ResNet and VGG variants) [35], this approach allowed the model to obtain a 95% average accuracy for the COVID-19 class on the training dataset. These focused approaches improve the model’s diagnostic capacity and dependability, facilitating real-time clinical use. Some of the data obtained are presented in Table 3.
Table 3. Performance evaluation of CNN architectures.
Results from Table 3 show that ResNet50 is the best-performing baseline CNN, with an 86% test accuracy, but it does not surpass the 95% mentioned above for our model on the training dataset.
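To make the two-stage training schedule described in this subsection explicit, the following sketch continues the EfficientNetB7 model from Section 3. Here train_ds and val_ds are assumed tf.data pipelines built with the preprocessing shown earlier, and the stage-2 learning rate and epoch count are assumptions, not values reported in the paper.

```python
# Continues the sketch from Section 3 (model, backbone already defined);
# train_ds and val_ds are assumed batched tf.data datasets.
import tensorflow as tf

# Stage 1: train the new head with the backbone frozen
# (100 epochs, lr = 1e-4, batch size 32, as reported in Section 4.4).
history = model.fit(train_ds, validation_data=val_ds, epochs=100)

# Stage 2: unfreeze and fine-tune at a lower learning rate (1e-5 is an assumed
# value) so the backbone adapts to CXR features while preserving the
# representations acquired during ImageNet pre-training.
backbone.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
history_ft = model.fit(train_ds, validation_data=val_ds, epochs=20)  # assumed
```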

4.5. Evaluation Metrics and Performance Analysis

In order to make a more precise evaluation, we used four typical efficiency parameters: precision, recall, F1-score, and accuracy, computed for the three datasets (i.e., training, validation, and testing) of each class: COVID, normal, and pneumonia. Their definitions are given by the following standard expressions:
$$P = \frac{TP}{TP + FP}$$

$$R = \frac{TP}{TP + FN}$$

$$F_1 = \frac{2\,P \cdot R}{P + R}$$

$$A = \frac{TP + TN}{TP + TN + FP + FN}$$

$$TN = N - (TP + FP + FN)$$
where P, R, F1, and A are the precision, recall, F1-score, and accuracy, respectively; TP, TN, FP, and FN are the numbers of true positives, true negatives, false positives, and false negatives, respectively; and N is the total number of samples used. The “macro” averaging method was employed for handling multi-class predictions in this study: precision, recall, and F1-score were calculated independently for each class and then averaged, treating each class with equal importance regardless of its frequency in the dataset, which ensures a balanced evaluation.
In this context, a true positive (TP) means that the returned result matches the labeled class among COVID, normal, and pneumonia. A false negative (FN) means that we upload an image, expect the result of its labeled class, and obtain a different output; for instance, if we expect the result COVID-19 but the model gives some other output, the image counts as a false negative (FN) for the COVID-19 class.
Precision (P) is the fraction (in percent) of images assigned by the system to a group that actually belong to that group. Recall (R), on the other hand, is the percentage of images truly belonging to a class that the neural network successfully assigned to that class. The F1-score combines a model’s precision and recall into a single measure of efficacy. Accuracy considers all predictions, positive and negative, measuring the overall fraction of correct classifications. Precision and recall can trade off against each other; in some cases, a low recall (R) is associated with a high precision (P), and vice versa. The support of a group is the total number of images that belong to that group.
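As a concrete illustration, the macro-averaged metrics defined above can be computed with scikit-learn (an assumed tool; the paper does not name its evaluation code). The label arrays below are placeholders.

```python
# Sketch of the macro-averaged evaluation described in Section 4.5.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder labels (1: COVID-19, 2: normal, 3: pneumonia); in practice these
# come from the testing dataset and from the model's predictions.
y_true = np.array([1, 1, 2, 2, 3, 3])
y_pred = np.array([1, 2, 2, 2, 3, 1])

# "macro" averaging: P, R, and F1 are computed per class, then averaged with
# equal weight, regardless of class frequency.
P, R, F1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
A = accuracy_score(y_true, y_pred)  # (TP + TN) / N over all predictions
print(f"precision={P:.3f}  recall={R:.3f}  F1={F1:.3f}  accuracy={A:.3f}")
```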
Table 4, Table 5 and Table 6 comprehensively show the values obtained for the three parameters P, R, and F1; the robustness of our proposed model can be observed in its consistency across datasets. The last column, titled “support”, shows the number of images per category used in the evaluation. For comparison with the sizes of other databases, recall that Table 1 contains an overview of the number of images in public repositories.
Table 4. Efficiency parameters for EfficientNetB7 on the training dataset.
Table 5. Efficiency parameters for EfficientNetB7 on the validation dataset.
Table 6. Efficiency parameters for EfficientNetB7 on the testing dataset.

4.6. Visualization and Analysis of Confusion Matrices

The confusion matrices are presented in Figure 4, where the predicted positive CXR images can be observed clearly. These matrices summarize the model’s classification performance across COVID-19, pneumonia, and normal chest X-ray images. Given the similarity in radiographic features between pneumonia and COVID-19, such as ground-glass opacities, there is considerable overlap between the two classes. This overlap might result in misclassifications, specifically false positives or false negatives, as the model may struggle to differentiate these conditions correctly. The accuracy and loss per epoch are shown in Figure 5 for every dataset used. In all of them, the accuracy per epoch is close to 1, whereas the loss per epoch is close to 0, indicating high reliability and robustness in our proposed CXR-DNN model.
Figure 4. 3 × 3 confusion matrices for the (a) Training, (b) Validation, and (c) Testing datasets representing the model’s performance in true positives, false positives, true negatives, and false negatives for each class (COVID-19, Normal, and Pneumonia).
Figure 5. Accuracy and loss per epoch for each dataset, illustrating the model’s performance and learning progression: (a) Training dataset, (b) Validation dataset, (c) Testing dataset. In the accuracy graphs, the blue curve represents accuracy, and the orange curve represents loss. In the loss graphs, the blue curve represents loss, and the orange curve represents accuracy, showing their variation over the epochs.
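For reproducibility, confusion matrices like those in Figure 4 can be generated directly from the label arrays; the following sketch assumes scikit-learn and matplotlib (tools not named in the paper), with placeholder labels as before.

```python
# Sketch of a 3x3 confusion-matrix plot in the style of Figure 4;
# class labels follow Table 2.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import ConfusionMatrixDisplay

y_true = np.array([1, 1, 2, 2, 3, 3])  # placeholder ground-truth labels
y_pred = np.array([1, 2, 2, 2, 3, 3])  # placeholder model predictions

ConfusionMatrixDisplay.from_predictions(
    y_true, y_pred,
    display_labels=["COVID-19", "Normal", "Pneumonia"],
    cmap="Blues")                      # one such matrix per split in Figure 4
plt.title("Testing dataset")
plt.show()
```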

4.7. Performance Metric Convergence Graphs

The convergence graphs of precision, recall, and F1-score are presented in Figure 6 for every class considered (1–3). In all the graphs (Figure 6a–c), the CXR-DNN achieves good prediction accuracy for each class. The best accuracy score (95%) obtained using our proposed CXR-DNN model is shown in Table 7. These results imply that the network benefits from transfer learning to extract characteristics useful for COVID-19 diagnosis. More than one previous work has used this concept to speed up the creation of a trustworthy instrument to aid medical professionals in the diagnosis of COVID-19.
Figure 6. Convergence of precision, recall, and F1-score for every dataset used: (a) training, (b) validation, (c) testing. Classes: 1—COVID-19, 2—normal, 3—pneumonia.
Table 7. Accuracy per class for the training dataset on EfficientNetB7 parameters.
Attention maps are used to assess the performance of DL systems on image categorization. As an example, Figure 7 presents one of the four layers of the ViT [36] in the proposed CXR-DNN.
Figure 7. Sample COVID-19 CXR image with Vision Transformer attention map (layer 1).

5. Discussion

The results reported so far show that the proposed architecture can outperform alternative network configurations for this unusual application. When the illness of interest is the root cause of the present pandemic, a rapid, accurate, and reliable approach to diagnosing lung infections takes on even greater relevance. While the EfficientNetB7 architecture has shown good performance for COVID-19 predictions using chest X-ray images, several limitations should be acknowledged. Its effectiveness is highly dependent on the quality of the dataset, as any bias or inadequacy can adversely affect the model’s generalization capabilities. In addition, EfficientNetB7 may require substantial computational resources that may not be available in clinical environments. The architecture has been designed for precise differentiation between COVID-19 and other respiratory conditions, which is very important for rapid and automated assessments in high-demand healthcare situations. Future work should address limitations related to dataset diversity and model generalizability. Furthermore, this approach shows potential for adaptation to related medical imaging tasks, such as detecting other thoracic diseases, thereby broadening its utility for clinical decision-making.
This model also operates as a black box, making it challenging to explain its decision-making process. While it performs well on the data used, scaling it to different chest X-ray imaging settings may require further tuning and refinement. Finally, the current architecture is tailored for COVID-19 diagnosis and may not be as effective for other thoracic pathologies without verification and validation. The opaque nature of DL and neural networks generally makes it challenging to comprehend how they arrive at their conclusions.
As a result, the network may perform well on a particular dataset yet struggle when faced with novel data or conditions. This becomes more important when the algorithm’s main function is to produce a rapid and dependable answer to aid clinicians in clinical diagnosis. Since deep neural networks are still largely opaque, it is important that future research in this area focus on elucidating the factors that may influence them to make a certain decision when presented with several options. Hassani et al. [37] provide a novel alternative to the conventional patching and embedding method by taking advantage of convolutional networks’ capacity to identify salient features in images and feeding those data into a Transformer. One approach to discovering what causes a network to classify a particular image in a specific manner is to examine how the data are first partitioned and then to compare that with the classification performed by the Transformer. We found that our automated COVID-19 identification approach achieved higher accuracy than existing methods across all three chest X-ray classes.

Author Contributions

Conceptualization, M.I.S. and M.-A.L.-N.; methodology, E.N.-B. and M.-A.L.-N.; software, M.I.S. and A.Y.K.; validation, M.I.S. and A.Y.K.; formal analysis, M.I.S., M.-A.L.-N. and E.N.-B.; investigation, A.Y.K.; resources, A.Y.K. and M.I.S.; data curation, A.Y.K., M.I.S. and M.-A.L.-N.; writing—original draft preparation, A.Y.K.; writing—review and editing, A.Y.K. and M.-A.L.-N.; visualization, A.Y.K. and M.I.S.; supervision, M.-A.L.-N., M.I.S. and E.N.-B.; project administration, M.-A.L.-N.; funding acquisition, M.-A.L.-N. and E.N.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by a grant (PCM-00006) from the Regional Government of Andalusia (Spain) through the project "CAMSUB3D: Advanced 3D camera for optimized underwater imaging and wireless charging" (Cod.25046, Complementary Plan for Marine Sciences and the Recovery, Transformation and Resilience Plan).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data supporting the outcomes of this study are publicly available on Kaggle and have been addressed in Section 3 (i.e., the CXR-DNN technique of the manuscript). Researchers can access the dataset directly through Kaggle to reproduce or build upon our work.

Acknowledgments

The authors express their gratitude to the Institute of Oceanic Engineering Research, University of Malaga (Malaga, Spain) for its technical support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Lefkowitz, E.J.; Dempsey, D.M.; Hendrickson, R.C.; Orton, R.J.; Siddell, S.G.; Smith, D.B. Virus taxonomy: The database of the international committee on taxonomy of viruses (ICTV). Nucleic Acids Res. 2018, 46, D708–D717. [Google Scholar] [CrossRef] [PubMed]
  2. Dong, D.; Tang, Z.; Wang, S.; Hui, H.; Gong, L.; Lu, Y.; Xue, Z.; Liao, H.; Chen, F.; Yang, F.; et al. The role of imaging in the detection and management of COVID-19: A review. IEEE Rev. Biomed. Eng. 2021, 14, 16–29. [Google Scholar] [CrossRef] [PubMed]
  3. Coronavirus Disease (COVID-19) Pandemic. Available online: https://www.who.int/emergencies/diseases/novel-coronavirus-2019 (accessed on 14 February 2023).
  4. COVID-19 PCR Test: How Does it Work? Are There Any Alternatives? Available online: https://www.auxologico.com/covid-19-pcr-test-how-does-it-work-are-there-any-alternatives (accessed on 14 February 2023).
  5. Kumar, A.; Gupta, P.K.; Srivastava, A. A review of modern technologies for tackling COVID-19 pandemic. Diabetes Metab. Syndr. Clin. Res. Rev. 2020, 14, 569–573. [Google Scholar] [CrossRef] [PubMed]
  6. Shi, F.; Wang, J.; Shi, J.; Wu, Z.; Wang, Q.; Tang, Z.; He, K.; Shi, Y.; Shen, D. Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for COVID-19. IEEE Rev. Biomed. Eng. 2021, 14, 4–15. [Google Scholar] [CrossRef] [PubMed]
  7. Wynants, L.; Van Calster, B.; Collins, G.S.; Riley, R.D.; Heinze, G.; Schuit, E.; Bonten, M.M.J.; Dahly, D.L.; Damen, J.A.; Debray, T.P.A.; et al. Prediction models for diagnosis and prognosis of COVID-19 infection: Systematic review and critical appraisal. Brit. Med. J. 2020, 369, m1328. [Google Scholar] [CrossRef]
  8. Abbas, A.; Abdelsamea, M.M.; Gaber, M.M. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 2021, 51, 854–864. [Google Scholar] [CrossRef]
  9. Tabik, S.; Gomez-Rios, A.; Martin-Rodriguez, J.L.; Sevillano-Garcia, I.; Rey-Area, M.; Charte, D.; Guirado, E.; Suarez, J.L.; Luengo, J.; Valero-Gonzalez, M.A.; et al. COVIDGR Dataset and COVID-SDNet Methodology for Predicting COVID-19 Based on Chest X-Ray Images. IEEE J. Biomed. Health Inform. 2020, 24, 3595–3605. [Google Scholar] [CrossRef]
  10. Dhere, A.; Sivaswamy, J. COVID Detection From Chest X-Ray Images Using Multi-Scale Attention. IEEE J. Biomed. Health Inform. 2022, 26, 1496–1505. [Google Scholar] [CrossRef]
  11. Schaefer-Prokop, C.; Prokop, M. Chest radiography in COVID-19: No role in asymptomatic and oligosymptomatic disease. Radiology 2021, 298, E156–E157. [Google Scholar] [CrossRef]
  12. Borakati, A.; Perera, A.; Johnson, J.; Sood, T. Diagnostic accuracy of X-ray versus CT in COVID-19: A propensity-matched database study. BMJ Open 2020, 10, e042946. [Google Scholar] [CrossRef]
  13. Chowdhury, M.E.H.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Bin Mahbub, Z.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Al Emadi, N.; et al. Can AI help in screening Viral and COVID-19 pneumonia? arXiv 2020, arXiv:2003.13145. [Google Scholar] [CrossRef]
  14. Zhao, H.; Fang, Z.; Ren, J.; MacLellan, C.; Xia, Y.; Li, S.; Sun, M.; Ren, K. SC2Net: A Novel Segmentation-Based Classification Network for Detection of COVID-19 in Chest X-Ray Images. IEEE J. Biomed. Health Inform. 2022, 26, 4032–4043. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, X.; Han, L.; Sobeih, T.; Han, L.; Dempsey, N.; Lechareas, S.; Tridente, A.; Chen, H.; White, S.; Zhang, D. CXR-Net: A Multitask Deep Learning Network for Explainable and Accurate Diagnosis of COVID-19 Pneumonia From Chest X-Ray Images. IEEE J. Biomed. Health Inform. 2023, 27, 980–991. [Google Scholar] [CrossRef] [PubMed]
  16. Ren, Q.; Zhou, B.; Tian, L.; Guo, W. Detection of COVID-19 With CT Images Using Hybrid Complex Shearlet Scattering Networks. IEEE J. Biomed. Health Inform. 2022, 26, 194–205. [Google Scholar] [CrossRef]
  17. Joshi, A.M.; Nayak, D.R. FL-Net: An Efficient Lightweight Multi-Scale Feature Learning CNN for COVID-19 Diagnosis From CT Images. IEEE J. Biomed. Health Inform. 2022, 26, 5355–5363. [Google Scholar] [CrossRef]
  18. Shazia, A.; Xuan, T.Z.; Chuah, J.H.; Usman, J.; Qian, P.; Lai, K.W. A comparative study of multiple neural network for detection of COVID-19 on chest X-ray. EURASIP J. Adv. Signal Process. 2021, 2021, 50. [Google Scholar] [CrossRef]
  19. Ismael, A.M.; Şengür, A. Deep learning approaches for COVID-19 detection based on chest X-ray images. Expert Syst. Appl. 2021, 164, 114054. [Google Scholar] [CrossRef]
  20. Katsamenis, I.; Protopapadakis, E.; Voulodimos, A.; Doulamis, A.; Doulamis, N. Transfer learning for COVID-19 pneumonia detection and classification in chest X-ray images. In Proceedings of the 24th Pan-Hellenic Conference on Informatics, Athens, Greece, 20–22 November 2020; pp. 170–174. [Google Scholar] [CrossRef]
  21. Krishnan, K.S. Vision transformer based COVID-19 detection using chest X-rays. In Proceedings of the 6th International Conference on Signal Processing, Computing and Control (ISPCC), Solan, India, 7–9 October 2021; pp. 644–648. [Google Scholar] [CrossRef]
  22. Shome, D.; Kar, T.; Mohanty, S.N.; Tiwari, P.; Muhammad, K.; AlTameem, A.; Zhang, Y.; Saudagar, A.K.J. COVID-transformer: Interpretable COVID19 detection using vision transformer for healthcare. Int. J. Environ. Res. Public Health 2021, 18, 11086. [Google Scholar] [CrossRef]
  23. Ligi, S.V.; Kundu, S.S.; Kumar, R.; Narayanamoorthi, R.; Lai, K.W.; Dhanalakshmi, S. Radiological analysis of COVID-19 using computational intelligence: A broad gauge study. J. Healthc. Eng. 2022, 2022, 5998042. [Google Scholar] [CrossRef]
  24. Almalki, Y.E.; Qayyum, A.; Irfan, M.; Haider, N.; Glowacz, A.; Alshehri, F.M.; Alduraibi, S.K.; Alshamrani, K.; Basha, M.A.A.; Alduraibi, A.; et al. A novel method for COVID19 diagnosis using artificial intelligence in chest X-ray images. Healthcare 2021, 9, 522. [Google Scholar] [CrossRef]
  25. Park, S.; Kim, G.; Oh, Y.; Seo, J.B.; Lee, S.M.; Kim, J.H.; Moon, S.; Lim, J.-K.; Ye, J.C. Multi-task vision transformer using low-level chest X-ray feature corpus for COVID-19 diagnosis and severity quantification. Med. Image Anal. 2022, 75, 102299. [Google Scholar] [CrossRef] [PubMed]
  26. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv 2015, arXiv:1506.01497. [Google Scholar] [CrossRef] [PubMed]
  27. Bhowal, P.; Sen, S.; Yoon, J.H.; Geem, Z.W.; Sarkar, R. Choquet Integral and Coalition Game-Based Ensemble of Deep Learning Models for COVID-19 Screening From Chest X-Ray Images. IEEE J. Biomed. Health 2021, 25, 4328–4339. [Google Scholar] [CrossRef] [PubMed]
  28. Lim, J.; Lee, S.; Ha, S. Resolution Based Incremental Scaling Methodology for CNNs. IEEE Access 2023, 11, 60462–60470. [Google Scholar] [CrossRef]
  29. Sen, S.; Bhowal, P.; Sarkar, R.; Yoon, J.H.; Geem, Z.W. Novel COVID-19 Chest Xray Repository. Available online: https://www.kaggle.com/subhankarsen/novel-COVID19-chestxray-repository (accessed on 14 February 2023).
  30. Akter, S.; Shamrat, F.M.J.M.; Chakraborty, S.; Karim, A.; Azam, S. COVID-19 Detection Using Deep Learning Algorithm on Chest X-ray Images. Biology 2021, 10, 1174. [Google Scholar] [CrossRef]
  31. Chowdhury, M.E.H.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Al-Emadi, N.; et al. COVID-19 Radiography Database. Available online: https://www.kaggle.com/tawsifurrahman/covid19-radiography-database (accessed on 26 February 2021).
  32. Wang, L.; Wong, A.; Chung, A. Actualmed COVID-19 Chest X-Ray Dataset Initiative. Available online: https://github.com/agchung/Actualmed-COVID-chestxray-dataset (accessed on 26 February 2021).
  33. Kanakaprabha, S.; Radha, D. Analysis of COVID-19 and Pneumonia Detection in Chest X-Ray Images using Deep Learning. In Proceedings of the 2021 International Conference on Communication, Control and Information Sciences (ICCISc), Idukki, India, 16–18 June 2021; pp. 1–6. [Google Scholar] [CrossRef]
  34. Selvaraju, R.R.; Cogswell, M.; Das, A.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
  35. Farooq, M.; Hafeez, A. COVID-ResNet: A deep learning framework for screening of COVID19 from radiographs. arXiv 2020, arXiv:2003.14395. [Google Scholar]
  36. Cohen, J.P.; Morrison, P.; Dao, L.; Roth, K.; Duong, T.Q.; Ghassemi, M. COVID-Chest Xray Set. Available online: https://github.com/ieee8023/covid-chestxray-dataset (accessed on 26 February 2021).
  37. Hassani, A.; Walton, S.; Shah, N.; Abuduweili, A.; Li, J.; Shi, H. Escaping the big data paradigm with compact transformers. arXiv 2021, arXiv:2104.05704. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
