Diagnostics
  • Article
  • Open Access

9 May 2023

COVID-ConvNet: A Convolutional Neural Network Classifier for Diagnosing COVID-19 Infection

Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh P.O. Box 11451, Saudi Arabia
Author to whom correspondence should be addressed.
This article belongs to the Special Issue Artificial Intelligence and Machine Learning for Infectious Diseases

Abstract

The novel coronavirus (COVID-19) pandemic still has a significant impact on the worldwide population’s health and well-being. Effective patient screening, including radiological examination employing chest radiography as one of the main screening modalities, is an important step in the battle against the disease. Indeed, the earliest studies on COVID-19 found that patients infected with COVID-19 present with characteristic anomalies in chest radiography. In this paper, we introduce COVID-ConvNet, a deep convolutional neural network (DCNN) design suitable for detecting COVID-19 symptoms from chest X-ray (CXR) scans. The proposed deep learning (DL) model was trained and evaluated using 21,165 CXR images from the COVID-19 Database, a publicly available dataset. The experimental results demonstrate that our COVID-ConvNet model has a high prediction accuracy at 97.43% and outperforms recent related works by up to 5.9% in terms of prediction accuracy.

1. Introduction

The most recent viral pandemic, COVID-19, arose in the Chinese city of Wuhan [1,2]. Because the outbreak surged across the globe and infected millions of individuals, the WHO declared it a worldwide pandemic, and the number of affected people continues to rise daily. As of 22 September 2022, over 610 million coronavirus cases worldwide, as well as 6.5 million deaths, had been reported [3]. The COVID-19 virus primarily spreads through respiratory droplets when an infected person coughs, sneezes, talks, or breathes [4]. Symptoms of COVID-19 can range from mild to severe and can include fever, cough, shortness of breath, and loss of taste or smell [5]. The COVID-19 pandemic has had a significant impact on public health, economies, and social systems around the world. Many countries have implemented measures such as lockdowns, travel restrictions, and vaccination campaigns to control the spread of the virus [6]. The pandemic has also highlighted the importance of public health infrastructure, scientific research, and global cooperation in responding to infectious diseases. Figure 1 shows how the COVID-19 virus spreads to human lungs and causes pneumonia, resulting in serious harm.
Figure 1. Illustration of how COVID-19 affects human lungs.
Currently, no treatment directly targets this new type of coronavirus. In response to the novel virus, certain firms have developed a variety of sanitizing products, composed mainly of varying concentrations of ethanol, hydrogen peroxide, and isopropyl alcohol. The WHO has confirmed and approved the use of these products worldwide [7].
The evolution of computer vision diagnostic tools for the treatment of COVID-19 would give medical professionals an automated “second reading”, helping in the critical diagnosis of COVID-19 infected patients and improving the decision-making process to cope with this widespread illness. Radiological examination, including chest X-rays and computed tomography (CT) scans, has played an important role in the screening and diagnosis of COVID-19 [8]. Chest X-rays are often used as a first-line imaging tool in the initial screening of suspected COVID-19 patients, while CT scans are typically reserved for more severe cases or for patients with inconclusive chest X-ray results [9]. Radiological examination can also help monitor the progression of the disease and assess the effectiveness of treatment [10]. Radiologists and other medical professionals may indeed find it challenging to differentiate between pneumonia caused by COVID-19 and other kinds of viral and bacterial pneumonia based only on diagnostic imaging. X-ray imaging is an easy and affordable technique for identifying lung and COVID-19 infections. In X-ray scans, opacities or patchy infiltrates, similar to other viral pneumonia symptoms, are frequently detected in COVID-19-infected patients. However, on X-ray images, earlier stages of COVID-19 do not seem to show any abnormalities. COVID-19 affects the mid and upper or lower areas of the lungs and develops patchy infiltrations, typically with evidence of consolidation, as the patient’s condition worsens.
All facets of modern life, including business, marketing, the military, communications, engineering, and health, rely on innovative technology applications. The medical industry requires the extensive use of new technologies, from precisely describing symptoms to accurately diagnosing conditions and conducting examinations of patients [2]. The ability of artificial intelligence (AI) and DL algorithms to accurately recognize COVID-19 might be viewed as a supporting factor to improve conventional diagnostic approaches, such as chest X-rays [11]. DL and CNN models have excelled in a large number of medical image categorization applications [12].
Deep learning research on the use of chest X-rays to detect COVID-19 symptoms has shown promising results. Several studies have used deep learning algorithms to develop models that can accurately identify COVID-19 cases based on chest X-ray images. These models typically use convolutional neural networks (CNNs) to extract features from chest X-ray images and classify them as positive or negative for COVID-19. Some studies have also used transfer learning, a technique that uses pre-trained CNNs to improve the accuracy and efficiency of the model.
One potential limitation of these models is the lack of large, diverse datasets for training and validation. This may limit the generalizability of the trained models and can lead to overfitting, where the model performs well on the training data but poorly on new, unseen data. In addition, some studies have reported high false-positive rates or difficulties in distinguishing COVID-19 from other respiratory illnesses. Moreover, some related works demand a large number of training parameters and complex computational resources, making them challenging to implement in real-world scenarios, especially in the healthcare industry. In our study, we overcome the limitations of previous studies by using a large database that includes a significant number of chest X-ray scans (21,165 images) to improve the accuracy and generalizability of the trained model by providing more diverse samples; this can help to reduce overfitting problems. Furthermore, complexity is avoided by reducing the number of parameters and computational resources required by the proposed CNN model without sacrificing accuracy.
In this article, our major contributions are the following:
  • We propose a deep learning approach, COVID-ConvNet, to help in the early diagnosis of COVID-19 cases.
  • We employ conventional chest X-rays for the identification and diagnosis of COVID-19 while empirically evaluating the proposed deep learning image classifiers. Three experimental classifications were performed with four, three, and two classes.
  • We compare the results of various DL models to show the COVID-19 classification results and to demonstrate the superiority of the proposed model.
The rest of the paper is organized as follows. Section 2 reviews related works on the detection of the COVID-19 virus based on machine learning (ML) methods. Section 3 describes our proposed method, COVID-ConvNet. Section 4 presents the experimental results obtained using this method. Finally, Section 5 presents a conclusion of the article.

3. The Proposed Deep Learning Model

This section details the dataset used for training and testing, as well as the deep learning model proposed.

3.1. Dataset

The COVID-19 Radiography dataset was used to train and assess the proposed technique. It was compiled by Rahman et al. and is freely available on Kaggle [37]. This dataset has been revised three times, and for this study, we obtained the most recent version. It contains 1345 images of viral pneumonia, 3616 chest X-ray images of COVID-19 infection, 10,192 chest X-ray images of normal cases, and 6012 scans of lung opacity, as shown in Figure 2. The dataset comprises several sub-datasets, falling into four distinct categories, i.e., COVID-19, lung opacity, normal, and viral pneumonia. Each class was drawn from a different sub-dataset, and the overall dataset was generated by integrating several of them. A total of 3616 images for the COVID-19 category were chosen from four different databases. With 2473 CXR images, the BIMCV-COVID19+ dataset [38] from the Valencian Region Medical Image Bank (BIMCV) contributes the largest share; it is one of the most comprehensive, publicly available independent databases. Other sources containing COVID-19 data include the German Medical School dataset [39] (183 chest X-ray scans) and the SIRM, Kaggle, GitHub, and Twitter collections [40,41,42,43], which together provide 560 chest X-ray images. In addition, another dataset with 400 combined chest X-ray scans is accessible on GitHub [44]. Table 2 describes the COVID-19 Radiography dataset in terms of classes, number of CXR scans, and sources. Figure 3 displays a sample from each category of the COVID-19 radiography classes.
Figure 2. Classes and structure of the dataset.
Table 2. Description of COVID-19 Radiography dataset.
Figure 3. Sample images from each class of the dataset.

3.2. The Structure of the COVID-ConvNet Model

The proposed COVID-ConvNet has the ability to predict the health condition of a patient’s lung based on the processed dataset (Figure 4). In this article, we perform three experimental classifications with four classes (i.e., COVID-19, lung opacity, normal, and viral pneumonia), three classes (i.e., COVID-19, normal, and viral pneumonia), and two classes (i.e., COVID-19 and normal). As illustrated in Figure 5, the COVID-ConvNet model is composed of convolution layers with max pooling layers, a flatten layer, and dense layers.
Figure 4. The proposed COVID-ConvNet model.
Figure 5. The structure of the proposed COVID-ConvNet model.
  • Image resizing: The chest X-ray scans in the dataset had a size of 256 by 256 pixels. An image resizing process was performed to reduce the image size to 100 by 100 pixels.
  • Convolution layers: All convolution layers were employed with a kernel size of (3, 3). In our study, the input shape of the CXR image was (100, 100, 3), where 100 denotes the width and height, while 3 indicates the input image’s three color channels (RGB). Rectified linear unit (ReLU), a piecewise linear function that returns a zero if the input is negative and returns the unchanged input value otherwise, served as the activation function of the convolution layers. ReLU is frequently employed as an activation function in convolution layers as it overcomes the vanishing gradient challenge, enabling the model to recognize characteristics more quickly and attain a high prediction performance. The filter size is 32 in the first convolution layer and gradually increases in the subsequent layers.
  • Max pooling layers: These layers were employed to compress features to minimize calculation time [46]. We selected (2, 2) as the kernel size and stride in all of the convolutional network’s max pooling layers.
  • Flatten layer: This layer generates a one-dimensional array vector from all pixels along the whole channels.
  • Dense layers: The dense layer is a densely linked layer, entailing that every neuron of the dense layer acquires data from all neurons in the preceding layer. The activation function and units, which define the layer’s output size and element-wise activation in the dense layer, respectively, were the parameters employed by the dense layer. There were two dense layers at the end of our COVID-ConvNet model. The first one had a ReLU activation function, whereas the second one had a softmax activation function. The softmax activation function was utilized to forecast a multinomial probability distribution at the output layer.
  • Selection unit: This unit was used to determine the index of the predicted class.
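The activation functions and selection unit described above can be illustrated with a minimal NumPy sketch (an illustration only, not the model's actual implementation):

```python
import numpy as np

def relu(x):
    # Piecewise linear: zero for negative inputs, the input itself otherwise.
    return np.maximum(0.0, x)

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating,
    # then normalize to obtain a probability distribution over classes.
    shifted = logits - np.max(logits)
    exp = np.exp(shifted)
    return exp / exp.sum()

def predict_class(logits):
    # The selection unit simply takes the index of the largest probability.
    return int(np.argmax(softmax(logits)))
```

For example, `predict_class(np.array([0.1, 3.0, 0.2]))` selects class index 1, the position of the largest output probability.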
Hyperparameters are the settings or configurations of a machine learning model that are set prior to the training process. These settings can have a significant impact on the performance of the model, and the choice of hyperparameters can be critical for achieving good results. In our proposed COVID-ConvNet model, the following hyperparameters were utilized:
  • Number of filters: The first convolutional layer employed a filter size of 32 to extract basic features from the input image. The subsequent convolutional layers had a filter size of 64 to capture more complex features and patterns from the output of the previous layer. This gradual increase in filter size allowed the network to learn increasingly complex representations of the input image, leading to better performance in classification tasks.
  • Kernel size: The selected kernel size was (3, 3) for all the convolutional layers. This is a common choice for image classification tasks, as it allows the network to capture a range of features of different sizes. Additionally, using the same kernel size throughout the network ensures that the learned features are consistent across all layers, which can improve the network’s ability to generalize to new images.
  • Stride: The stride was set to (2, 2) for all the max pooling layers. The stride determines the step size used when sliding the filter over the input image. A stride of (2, 2) means that the filter moves two pixels at a time in both the horizontal and vertical directions. Using a stride of (2, 2) reduces the size of the output feature maps, which helps to lower the computational cost of the network and prevent overfitting.
  • Learning rate: The default learning rate of 1/1000 or 0.001 was used. The learning rate is a hyperparameter that determines the step size used during gradient descent to update the weights of the neural network; this default value is a reasonable starting point for many image classification tasks.
  • Batch size: A batch size of 32 was used to determine the number of samples that are processed in each iteration of the training process. A batch size of 32 is a common choice for image classification tasks.
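The hyperparameters above determine how the feature maps shrink through the network. The following sketch traces the spatial dimensions, under assumptions the paper does not state explicitly ('valid' padding for the 3x3 convolutions and three conv/pool stages with the hypothetical filter progression 32, 64, 64):

```python
def conv_out(size, kernel=3):
    # A 'valid' convolution shrinks each spatial dimension by kernel - 1.
    return size - (kernel - 1)

def pool_out(size, stride=2):
    # (2, 2) max pooling with stride (2, 2) halves each dimension (floor).
    return size // stride

size, shapes = 100, []           # 100x100 input after resizing
for filters in (32, 64, 64):     # hypothetical filter progression
    size = conv_out(size)
    shapes.append((size, size, filters))
    size = pool_out(size)
    shapes.append((size, size, filters))

print(shapes)  # spatial size traces 100 -> 98 -> 49 -> 47 -> 23 -> 21 -> 10
```

Under these assumptions the flatten layer would receive a 10 x 10 x 64 tensor; with different padding or depth, the exact sizes change but the progressive reduction is the same.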

4. Experimental Analysis and Results

In this section, we present the experimental measures and results to demonstrate our COVID-ConvNet model’s ability to recognize COVID-19 instances from chest X-ray pictures.

4.1. Performance Metrics

A confusion matrix, sometimes referred to as a contingency table [47], was developed to evaluate the performance of the trained COVID-ConvNet model. A confusion matrix is an effective tool for determining the counts of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN) [48]. TP stands for “the number of samples predicted and found to be positive”, while TN corresponds to “the number of samples predicted and found to be negative”. An FP classification occurs when a machine learning model classifies a sample as positive although its true class is negative. When a sample is classified as negative but is actually positive, the classification is an FN [49]. Accuracy is defined as the ratio of the number of correctly categorized samples to the total number of testing samples [50,51], as described in Equation (1). The ratio of true positives to all predicted positives is defined as precision (Equation (2)) [52,53]. Finally, the ratio of true positives to the total number of true positives and false negatives is defined as recall (Equation (3)). As indicated in Equation (4), the F-score is a combination of precision and recall [54,55].
Accuracy = (TP + TN) / (TP + FP + TN + FN) (1)
Precision = TP / (TP + FP) (2)
Recall = TP / (TP + FN) (3)
F-score = 2TP / (2TP + FP + FN) (4)
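Equations (1)–(4) translate directly into a small helper; the counts below are illustrative only, not the experimental results:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and F-score from confusion-matrix
    counts, following Equations (1)-(4)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * tp / (2 * tp + fp + fn)
    return accuracy, precision, recall, f_score

# Illustrative counts for a hypothetical binary test set of 200 samples.
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, tn=85, fn=15)
```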

4.2. Performance Results

Data samples were separated into two sub-sets: a training set to develop the proposed CNN model, and a testing set to assess it. The proportion of data allocated to the testing set was 20%, and the remaining 80% of the data were allocated to the training set. More specifically, the training and testing sets comprised 16,932 and 4233 samples, respectively. To split the dataset into training and testing sets, a random shuffle method was used. This step is important to ensure that the data are not biased or ordered in a specific way that may affect the performance of the deep learning model. By randomly shuffling the data, we reduce the risk of overfitting and ensure that the model is trained on a representative sample of the data. We present in this section the results for four-, three-, and two-class classification of chest X-ray scans. To train and test the proposed COVID-ConvNet, the Google Colab platform was used, providing an NVidia Tesla K80 GPU with a single-core 2.3 GHz Xeon Processor, 320 GB of disk space, and 16 GB of RAM.
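The random-shuffle 80/20 split described above can be sketched in a few lines of standard-library Python (the seed here is arbitrary; the paper does not report one):

```python
import random

def shuffle_split(samples, test_fraction=0.2, seed=42):
    # Shuffle a copy so the split is not biased by any ordering in the
    # dataset, then hold out the requested fraction for testing.
    shuffled = samples[:]
    random.Random(seed).shuffle(shuffled)
    n_test = round(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]   # train, test

# With 21,165 sample indices, a 20% hold-out reproduces the split sizes
# reported in the paper.
train, test = shuffle_split(list(range(21165)))
print(len(train), len(test))  # 16932 4233
```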

4.2.1. Experiment 1: Four-Class Classification

In this section, we describe the results obtained for the four-class classification, i.e., COVID-19, lung opacity, normal, and viral pneumonia. Figure 6 illustrates the confusion matrix of the trained CNN model. After training for 50 epochs, our model reached an accuracy of 97.71%, 92.27%, 92.3%, and 99.57% on the testing subset for the COVID-19, lung opacity, normal, and viral pneumonia classes, respectively. Table 3 displays the evaluation values of the trained CNN with regard to accuracy, precision, recall, and F-score. As the table indicates, the evaluation metrics have been computed for each class separately according to Equations (1)–(4).
Figure 6. Confusion matrix (four-class classification).
Table 3. Evaluation values of the trained CNN model (four-class classification).
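The per-class evaluation values in Table 3 can be derived from the four-class confusion matrix by treating each class one-vs-rest. A minimal NumPy sketch follows; the matrix values here are hypothetical for illustration, not the counts in Figure 6:

```python
import numpy as np

def per_class_counts(cm, k):
    # One-vs-rest TP/FP/TN/FN for class k, given a confusion matrix whose
    # rows are true labels and columns are predicted labels.
    tp = cm[k, k]
    fp = cm[:, k].sum() - tp
    fn = cm[k, :].sum() - tp
    tn = cm.sum() - tp - fp - fn
    return tp, fp, tn, fn

# Hypothetical 3-class confusion matrix for illustration only.
cm = np.array([[50, 2, 3],
               [4, 45, 1],
               [0, 5, 40]])
tp, fp, tn, fn = per_class_counts(cm, 0)
recall = tp / (tp + fn)   # 50 / 55 for class 0
```

Feeding these counts into Equations (1)–(4) yields the per-class accuracy, precision, recall, and F-score.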

4.2.2. Experiment 2: Three-Class Classification

In this section, we present the results of the three-class classification, i.e., COVID-19, normal, and viral pneumonia lungs. The confusion matrix of the trained CNN model is illustrated in Figure 7. Table 4 depicts the performance evaluation values for this experiment.
Figure 7. Confusion matrix (three-class classification).
Table 4. Evaluation values of the trained CNN model (three-class classification).
Our COVID-ConvNet’s performance was further compared with Nikolaou et al.’s CNN models [21] in terms of the overall accuracy of the prediction. Figure 8 compares the COVID-19, normal, and viral pneumonia classification results. The results show that our proposed deep learning model achieved a higher accuracy than Nikolaou et al.’s CNN model with feature extraction by 3.66%, as well as higher results than their CNN model with fine-tuning by 1.88%.
Figure 8. Comparison results of three-class classification.

4.2.3. Experiment 3: Two-Class Classification

In this section, we present the results of the two-class classification problem (that is to say, COVID-19 vs. normal lungs). Figure 9 illustrates the confusion matrix of our CNN model, while Table 5 deals with the evaluation values.
Figure 9. Confusion matrix (two-class classification).
Table 5. Evaluation values of the trained CNN model (two-class classification).
Figure 10 represents a comparison of our results with Nikolaou et al.’s CNN models for the two classes of COVID-19 and normal. The results obtained show that our model outperformed Nikolaou et al.’s model with feature extraction by 5.9%, as well as their model with fine-tuning by 2.5%.
Figure 10. Comparison of two-class classification results.

4.3. Considerations and Limitations of the COVID-ConvNet Model

While the use of the COVID-ConvNet model for COVID-19 screening and diagnosis has the potential to improve the accuracy and speed of diagnoses, there are several practical considerations and limitations that need to be considered. One practical consideration is the cost and availability of the necessary technology and expertise required to implement such a DL model in clinical settings. The use of the COVID-ConvNet model for COVID-19 screening would require access to high-performance computing resources and specialized expertise in deep learning and medical imaging. Another limitation of using CXR scans for COVID-19 detection is their lower sensitivity and specificity compared to other imaging modalities, such as CT scans. CXR scans are often used as a first-line screening tool for COVID-19 due to their lower cost and wider availability, but they may not always detect early or mild cases of the disease.

5. Conclusions and Future Work

As the COVID-19 pandemic continues to impact the world, the use of chest X-rays for diagnosis has become increasingly important. Convolutional neural networks have shown promising results in detecting COVID-19 from chest X-rays. In this article, we proposed an automatic detection model for COVID-19 infection based on chest X-ray scans called COVID-ConvNet. The Kaggle COVID-19 radiography dataset was selected to test and train the proposed COVID-ConvNet model because it offers a large and diverse collection of chest X-ray images (21,165 CXR scans) sourced from various repositories. Three experimental classifications were performed using the Google Colab platform, with four, three, and two classes. Experimental results showed that our deep learning model is superior to recent related works, reaching an accuracy of 97.43%. Furthermore, it outperforms Nikolaou et al.’s model with feature extraction and with fine-tuning in terms of prediction accuracy by up to 5.9%. It encourages multidisciplinary researchers to develop powerful artificial intelligence frameworks to combat the COVID-19 worldwide pandemic. The use of our proposed COVID-ConvNet model in COVID-19 patient screening has several potential implications. It can be applied in clinical practice for computer-aided diagnosis (CAD) systems to assist radiologists in interpreting medical images. These systems can help to improve the accuracy and speed of diagnoses, particularly in cases where the radiologist may be inexperienced, where the diagnosis is difficult, or in areas with limited access to PCR testing. Additionally, it can aid in the triage of patients, identifying those who require immediate medical attention, and it can assist in the monitoring of disease progression and response to treatment. In addition, combining CXR scans with other diagnostic tools such as laboratory tests, clinical examinations, and medical history can help to improve the accuracy of diagnoses and guide treatment decisions.
For future work, we aim to assess the performance of the COVID-ConvNet model using larger and more diversified datasets with more COVID-19 examples. In addition, feature engineering can be applied to enhance the performance of the deep learning model. Another direction for future research could be exploring the use of transfer learning to improve the performance of CNN models. Furthermore, we aim to investigate advanced techniques that can further address the issue of data imbalance and enhance the performance of our model. Specifically, we will explore methods such as data augmentation, class weighting, and resampling to balance the dataset and mitigate the impact of class imbalance. Another area for exploration could be the development of explainable AI techniques to provide insights into the decision-making process of CNN models. Finally, it would be valuable to investigate the generalizability of CNN models across different populations and imaging equipment. Overall, continued research in this area will be crucial in improving the accuracy and reliability of CNN-based COVID-19 detection from chest X-rays.

Author Contributions

I.A.L.A. performed the experiments, analyzed the data, and wrote the paper. M.J.F.A. supervised the research and critically revised the paper. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to Researcher Supporting Project number (RSPD2023R582), King Saud University, Riyadh, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

In this work, we used a dataset called “COVID-19 Radiography Database”, which is available on Kaggle via the following URL: https://www.kaggle.com/tawsifurrahman/covid19-radiography-database (accessed on 1 January 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following table gives a list of abbreviations used in our work:
AD-COVID19: Automatic detection of COVID-19
CAD: Computer-aided diagnosis
COVID-19: Coronavirus disease 2019
CNN: Convolutional neural network
COVID-ConvNet: COVID-19 convolutional network
COVID-SDNet: COVID-19-smart-data-based network
CT: Computed tomography
CXR: Chest X-ray
DeTraC: Decompose, transfer, and compose
DNN: Deep neural network
FN: False negative
FP: False positive
ILD: Interstitial lung disease
JSRT: The Japanese Society of Radiological Technology
MLP: Multi-layer perceptron
PCR: Polymerase chain reaction
PARL: Prior-attention residual learning
ReLU: Rectified linear unit
ResNet: Residual network
RGB: Red, green, and blue
RSNA: Radiological Society of North America
SARS: Severe acute respiratory syndrome
SVM: Support vector machine
TN: True negative
TP: True positive
VGG: Visual geometry group
WHO: The World Health Organization

References

  1. Khan, E.; Rehman, M.Z.U.; Ahmed, F.; Alfouzan, F.A.; Alzahrani, N.M.; Ahmad, J. Chest X-ray Classification for the Detection of COVID-19 Using Deep Learning Techniques. Sensors 2022, 22, 1211. [Google Scholar] [CrossRef] [PubMed]
  2. Kwekha-Rashid, A.S.; Abduljabbar, H.N.; Alhayani, B. Coronavirus disease (COVID-19) cases analysis using machine-learning applications. Appl. Nanosci. 2021, 13, 2013–2025. [Google Scholar] [CrossRef] [PubMed]
  3. Yang, J.; Vaghela, S.; Yarnoff, B.; De Boisvilliers, S.; Di Fusco, M.; Wiemken, T.L.; Kyaw, M.H.; McLaughlin, J.M.; Nguyen, J.L. Estimated global public health and economic impact of COVID-19 vaccines in the pre-omicron era using real-world empirical data. Expert Rev. Vaccines 2023, 22, 54–65. [Google Scholar] [CrossRef] [PubMed]
  4. Anderson, E.L.; Turnham, P.; Griffin, J.R.; Clarke, C.C. Consideration of the aerosol transmission for COVID-19 and public health. Risk Anal. 2020, 40, 902–907. [Google Scholar] [CrossRef] [PubMed]
  5. Guo, J.W.; Radloff, C.L.; Wawrzynski, S.E.; Cloyes, K.G. Mining twitter to explore the emergence of COVID-19 symptoms. Public Health Nurs. 2020, 37, 934–940. [Google Scholar] [CrossRef]
  6. Ataguba, J.E. COVID-19 pandemic, a war to be won: Understanding its economic implications for Africa. Appl. Health Econ. Health Policy 2020, 18, 325–328. [Google Scholar] [CrossRef]
  7. Mahmood, A.; Eqan, M.; Pervez, S.; Alghamdi, H.A.; Tabinda, A.B.; Yasar, A.; Brindhadevi, K.; Pugazhendhi, A. COVID-19 and frequent use of hand sanitizers; human health and environmental hazards by exposure pathways. Sci. Total Environ. 2020, 742, 140561. [Google Scholar] [CrossRef]
  8. Irfan, M.; Iftikhar, M.A.; Yasin, S.; Draz, U.; Ali, T.; Hussain, S.; Bukhari, S.; Alwadie, A.S.; Rahman, S.; Glowacz, A.; et al. Role of hybrid deep neural networks (HDNNs), computed tomography, and chest X-rays for the detection of COVID-19. Int. J. Environ. Res. Public Health 2021, 18, 3056. [Google Scholar] [CrossRef]
  9. Stogiannos, N.; Fotopoulos, D.; Woznitza, N.; Malamateniou, C. COVID-19 in the radiology department: What radiographers need to know. Radiography 2020, 26, 254–263. [Google Scholar] [CrossRef]
  10. Ozturk, T.; Talo, M.; Yildirim, E.A.; Baloglu, U.B.; Yildirim, O.; Acharya, U.R. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput. Biol. Med. 2020, 121, 103792. [Google Scholar] [CrossRef]
  11. Manigandan, S.; Wu, M.T.; Ponnusamy, V.K.; Raghavendra, V.B.; Pugazhendhi, A.; Brindhadevi, K. A systematic review on recent trends in transmission, diagnosis, prevention and imaging features of COVID-19. Process. Biochem. 2020, 98, 233–240. [Google Scholar] [CrossRef] [PubMed]
  12. Horry, M.J.; Chakraborty, S.; Paul, M.; Ulhaq, A.; Pradhan, B.; Saha, M.; Shukla, N. COVID-19 Detection Through Transfer Learning Using Multimodal Imaging Data. IEEE Access 2020, 8, 149808–149824. [Google Scholar] [CrossRef] [PubMed]
  13. Ohata, E.F.; Bezerra, G.M.; das Chagas, J.V.S.; Neto, A.V.L.; Albuquerque, A.B.; de Albuquerque, V.H.C.; Reboucas Filho, P.P. Automatic detection of COVID-19 infection using chest X-ray images through transfer learning. IEEE/CAA J. Autom. Sin. 2020, 8, 239–248. [Google Scholar] [CrossRef]
  14. Tabik, S.; Gómez-Ríos, A.; Martín-Rodríguez, J.L.; Sevillano-García, I.; Rey-Area, M.; Charte, D.; Guirado, E.; Suárez, J.L.; Luengo, J.; Valero-González, M.; et al. COVIDGR dataset and COVID-SDNet methodology for predicting COVID-19 based on chest X-ray images. IEEE J. Biomed. Health Inform. 2020, 24, 3595–3605. [Google Scholar] [CrossRef]
  15. Karnati, M.; Seal, A.; Sahu, G.; Yazidi, A.; Krejcar, O. A novel multi-scale based deep convolutional neural network for detecting COVID-19 from X-rays. Appl. Soft Comput. 2022, 125, 109109. [Google Scholar] [CrossRef]
  16. Wang, L.; Lin, Z.Q.; Wong, A. COVID-net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci. Rep. 2020, 10, 19549. [Google Scholar] [CrossRef]
  17. Hemdan, E.E.D.; Shouman, M.A.; Karar, M.E. COVIDx-net: A framework of deep learning classifiers to diagnose COVID-19 in X-ray images. arXiv 2020, arXiv:2003.11055. [Google Scholar]
  18. Cohen, J.; Rosebrock, A. Covid Chest X-ray Dataset. Available online: https://github.com/ieee8023/covid-chestxray-dataset (accessed on 1 January 2023).
  19. Arias-Londoño, J.D.; Gomez-Garcia, J.A.; Moro-Velázquez, L.; Godino-Llorente, J.I. Artificial Intelligence applied to chest X-ray images for the automatic detection of COVID-19. A thoughtful evaluation approach. IEEE Access 2020, 8, 226811–226827.
  20. Wang, J.; Bao, Y.; Wen, Y.; Lu, H.; Luo, H.; Xiang, Y.; Li, X.; Liu, C.; Qian, D. Prior-attention residual learning for more discriminative COVID-19 screening in CT images. IEEE Trans. Med. Imaging 2020, 39, 2572–2583.
  21. Nikolaou, V.; Massaro, S.; Fakhimi, M.; Stergioulas, L.; Garn, W. COVID-19 diagnosis from chest X-rays: Developing a simple, fast, and accurate neural network. Health Inf. Sci. Syst. 2021, 9, 1–11.
  22. Ismael, A.M.; Şengür, A. Deep learning approaches for COVID-19 detection based on chest X-ray images. Expert Syst. Appl. 2021, 164, 114054.
  23. Narin, A.; Kaya, C.; Pamuk, Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Anal. Appl. 2021, 24, 1207–1220.
  24. Mooney, P. Chest X-ray Images (Pneumonia). Available online: https://www.kaggle.com/datasets/paultimothymooney/chest-xray-pneumonia (accessed on 26 January 2023).
  25. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. Chestx-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2097–2106.
  26. Abbas, A.; Abdelsamea, M.M.; Gaber, M.M. Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 2021, 51, 854–864.
  27. Candemir, S.; Jaeger, S.; Palaniappan, K.; Musco, J.P.; Singh, R.K.; Xue, Z.; Karargyris, A.; Antani, S.; Thoma, G.; McDonald, C.J. Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration. IEEE Trans. Med. Imaging 2013, 33, 577–590.
  28. Cohen, J.P.; Morrison, P.; Dao, L. COVID-19 image data collection. arXiv 2020, arXiv:2003.11597.
  29. Jain, R.; Gupta, M.; Taneja, S.; Hemanth, D.J. Deep learning based detection and analysis of COVID-19 on chest X-ray images. Appl. Intell. 2021, 51, 1690–1700.
  30. Patel, P. Chest X-ray (COVID-19 & Pneumonia). Available online: https://www.kaggle.com/datasets/prashant268/chest-xray-covid19-pneumonia (accessed on 26 January 2023).
  31. Zouch, W.; Sagga, D.; Echtioui, A.; Khemakhem, R.; Ghorbel, M.; Mhiri, C.; Hamida, A.B. Detection of COVID-19 from CT and chest X-ray images using deep learning models. Ann. Biomed. Eng. 2022, 50, 825–835.
  32. Jkooy. COVID-CT. Available online: https://github.com/UCSD-AI4H/COVID-CT/tree/master/Images-processed (accessed on 1 February 2023).
  33. Kong, L.; Cheng, J. Classification and detection of COVID-19 X-ray images based on DenseNet and VGG16 feature fusion. Biomed. Signal Process. Control 2022, 77, 103772.
  34. Li, H.; Zeng, N.; Wu, P.; Clawson, K. Cov-Net: A computer-aided diagnosis method for recognizing COVID-19 from chest X-ray images via machine vision. Expert Syst. Appl. 2022, 207, 118029.
  35. Chowdhury, M.E.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.A.; Mahbub, Z.B.; Islam, K.R.; Khan, M.S.; Iqbal, A.; Al Emadi, N.; et al. Can AI help in screening viral and COVID-19 pneumonia? IEEE Access 2020, 8, 132665–132676.
  36. Rahman, T.; Khandakar, A.; Qiblawey, Y.; Tahir, A.; Kiranyaz, S.; Kashem, S.B.A.; Islam, M.T.; Al Maadeed, S.; Zughaier, S.M.; Khan, M.S.; et al. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Comput. Biol. Med. 2021, 132, 104319.
  37. Kaggle COVID-19 Radiography Database. Available online: https://www.kaggle.com/datasets/tawsifurrahman/covid19-radiography-database (accessed on 1 January 2023).
  38. BIMCV-COVID19, Datasets Related to COVID19’s Pathology Course. 2020. Available online: https://bimcv.cipf.es/bimcv-projects/bimcv-covid19/#1590858128006-9e640421-6711 (accessed on 1 January 2023).
  39. COVID-19-Image-Repository. 2020. Available online: https://github.com/ml-workgroup/covid-19-image-repository/tree/master/png (accessed on 1 January 2023).
  40. Chen, R.; Liang, W.; Jiang, M.; Guan, W.; Zhan, C.; Wang, T.; Tang, C.; Sang, L.; Liu, J.; Ni, Z.; et al. Risk factors of fatal outcome in hospitalized subjects with coronavirus disease 2019 from a nationwide analysis in China. Chest 2020, 158, 97–105.
  41. Liu, J.; Liu, Y.; Xiang, P.; Pu, L.; Xiong, H.; Li, C.; Zhang, M.; Tan, J.; Xu, Y.; Song, R.; et al. Neutrophil-to-lymphocyte ratio predicts severe illness patients with 2019 novel coronavirus in the early stage. MedRxiv 2020.
  42. Weng, Z.; Chen, Q.; Li, S.; Li, H.; Zhang, Q.; Lu, S.; Wu, L.; Xiong, L.; Mi, B.; Liu, D.; et al. ANDC: An early warning score to predict mortality risk for patients with Coronavirus Disease 2019. J. Transl. Med. 2020, 18, 1–10.
  43. Huang, I.; Pranata, R. Lymphopenia in severe coronavirus disease-2019 (COVID-19): Systematic review and meta-analysis. J. Intensive Care 2020, 8, 1–10.
  44. Armiro. COVID-CXNet. Available online: https://github.com/armiro/COVID-CXNet (accessed on 1 February 2023).
  45. RSNA Pneumonia Detection Challenge. Available online: https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/data (accessed on 22 April 2023).
  46. Lin, C.J.; Yang, T.Y. A Fusion-Based Convolutional Fuzzy Neural Network for Lung Cancer Classification. Int. J. Fuzzy Syst. 2022, 25, 451–467.
  47. Chicco, D.; Starovoitov, V.; Jurman, G. The Benefits of the Matthews Correlation Coefficient (MCC) Over the Diagnostic Odds Ratio (DOR) in Binary Classification Assessment. IEEE Access 2021, 9, 47112–47124.
  48. Bhatnagar, A.; Srivastava, S. A Robust Model for Churn Prediction using Supervised Machine Learning. In Proceedings of the 2019 IEEE 9th International Conference on Advanced Computing (IACC), Tiruchirappalli, India, 13–14 December 2019; pp. 45–49.
  49. Hsu, C.Y.; Wang, S.; Qiao, Y. Intrusion detection by machine learning for multimedia platform. Multimed. Tools Appl. 2021, 80, 29643–29656.
  50. Rodrigues, J.d.C.; Rebouças Filho, P.P.; Peixoto, E., Jr.; Kumar, A.; de Albuquerque, V.H.C. Classification of EEG signals to detect alcoholism using machine learning techniques. Pattern Recognit. Lett. 2019, 125, 140–149.
  51. Alablani, I.A.; Arafah, M.A. An SDN/ML-Based Adaptive Cell Selection Approach for HetNets: A Real-World Case Study in London, UK. IEEE Access 2021, 9, 166932–166950.
  52. Porto, A.; Voje, K.L. ML-morph: A fast, accurate and general approach for automated detection and landmarking of biological structures in images. Methods Ecol. Evol. 2020, 11, 500–512.
  53. Alablani, I.A.; Arafah, M.A. A2T-Boost: An Adaptive Cell Selection Approach for 5G/SDN-Based Vehicular Networks. IEEE Access 2023, 11, 7085–7108.
  54. Lee, J.; Lee, U.; Kim, H. PASS: Reducing Redundant Notifications between a Smartphone and a Smartwatch for Energy Saving. IEEE Trans. Mob. Comput. 2020, 19, 2656–2669.
  55. Alablani, I.A.; Arafah, M.A. Enhancing 5G small cell selection: A neural network and IoV-based approach. Sensors 2021, 21, 6361.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
