Comprehensive Survey of Machine Learning Systems for COVID-19 Detection

The last two years are considered the most critical period of the COVID-19 pandemic, affecting most aspects of life worldwide. The virus spreads quickly, which has increased its associated fatality rate. From a clinical perspective, several diagnosis methods are carried out for early detection to limit virus propagation. However, the capabilities of these methods are limited and come with various challenges. Consequently, many studies have been performed on automated COVID-19 detection, which requires no manual intervention and allows an accurate and fast decision. As with other diseases and medical issues, Artificial Intelligence (AI) provides the medical community with potential technical solutions that help doctors and radiologists diagnose based on chest images. In this paper, a comprehensive review of these AI-based detection proposals is conducted. More than 200 papers were reviewed and analyzed, and 145 articles were extensively examined to specify the proposed AI mechanisms applied to chest medical images. The associated advantages and shortcomings are illustrated and summarized. Several findings are drawn from a deep analysis of all the previous works using machine learning for COVID-19 detection, segmentation, and classification.


Introduction
COVID-19 is a new disease that surfaced two years ago. It is caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) [1]. The COVID-19 pandemic has affected health resources and most people's lives socially, educationally, and economically [2][3][4]. With the vast and growing number of infected people due to the fast spread of this virus, all countries have applied multiple measures to control the disease propagation. Most airports were closed, and governments imposed country-wide lockdowns; some cities were entirely quarantined as infected areas.
In addition to all the above complications, the COVID-19 diagnosis has many limitations and challenges of its own. For example, PCR test kits incur a considerable likelihood of false negatives, a primary drawback for diagnostic correctness. This leads doctors to use chest images (X-ray or CT scan) for a more accurate diagnosis that better supports their decisions. However, chest imaging availability is limited, primarily due to the scarcity of imaging equipment.
Producing a drug for COVID-19 takes months or even years due to the clinical trials that need to be carried out on humans and that require approval for ethical and other reasons. Additionally, the various genetic mutations the virus shows cause further delay in producing a cure. From a technical perspective, many researchers have therefore proposed automated detection methods instead.
Collecting articles was carried out in March 2022 as the first step of the search strategy, where 203 studies were retrieved using the mentioned keywords. The second step was screening the most relevant papers and excluding any duplicates or out-of-scope studies. There were 33 excluded articles, which represent 16% of the total number of collected studies. According to the used keywords, several papers were filtered for eligibility in the third step of the proposed strategy. The papers had to combine COVID-19 detection with machine learning or deep learning using chest images. Some articles presented a different type of database, used different techniques for diagnosis, or did not include the required information, such as the performance or the used dataset.
Finally, 145 papers were extensively analyzed, illustrated, and summarized in this paper. The analysis covered the proposed systems including the performance and the limitations that can be improved by researchers. Figure 2 shows a summary of the selection strategy of the reviewed papers.

Materials
One leading limitation of the PCR test is the high rate of false-negative results, i.e., the diagnosis is negative whereas the patient is in reality a positive carrier of the virus. Moreover, in many regions of the world, PCR test accessibility is restricted. Consequently, Computed Tomography (CT) and X-ray images can be used as an alternative to identify this infection. CT or X-ray images are promptly accessible where there are no PCR test kits. Additionally, PCR kits are costly and take a long time to produce results, especially when the volume of tests is high. Furthermore, a professional clinician is needed to collect PCR samples, which may require additional training. In comparison, it is somewhat simpler to work with CT and X-ray images.
Artificial Intelligence is a rising field that could fulfill a significant role in COVID-19 detection. For this purpose, any trained model needs enough data (mostly chest images) for virus recognition. With promising results, researchers have used machine learning to detect COVID-19 using medical images such as CT and X-ray images.
A CT image of the chest is taken using the Computed Tomography (CT) scan procedure, also known as computed axial tomography, a medical imaging technique that provides the clinician with detailed images of the body for diagnostic purposes [35]. Figure 3 shows a CT scan image of a COVID-19 pneumonia patient's lung.

Despite its benefits, CT scans are generally considered expensive. Accordingly, clinicians use another type of chest image, namely X-ray images, instead of CT imaging. X-ray is a methodology that is traditionally used to produce chest images and is generally more available in most regions of the world, including developing countries. However, a CT scan generates images with further details, making COVID-19 diagnosis easier and more efficient.
Several studies use CT scan images for COVID-19 detection. Yasar and Ceylan proposed a CNN model for COVID-19 detection using 386 CT scan images, and their study achieved 94.7% accuracy [44]. This is considered a low number of images, especially when using a deep learning algorithm. At the time of the study in 2020, the data set availability was limited.
In the following year (2021), the datasets had grown because of the broader spread of the disease and the continuously growing number of patients. Since 2021, more studies have used CT scan images with artificial intelligence algorithms for disease detection. For example, in [45][46][47], the researchers proposed artificial intelligence models for COVID-19 detection using CT scan images. The number of images in these studies was larger than in the previous ones, and the reported accuracies were 95.0, 93.44, 94.73, and 98.78%, respectively.
Most studies have been conducted based on X-ray images. X-rays have much shorter wavelengths than visible light, which makes it possible to probe structures much smaller than can be seen using an ordinary microscope. This feature is utilized in X-ray microscopy to obtain high-resolution images and in crystallography to determine the location of atoms in crystals [48]. Figure 4 shows an example of X-ray images.
X-ray images focus on bones and give more details about the surrounding tissues. Medical X-ray imaging is a critical application of radiation. In 1987, X-rays represented 58% of human-made radiation sources in the United States. The larger part of total exposure came from natural sources (82%); clinical X-rays represented just 10% of Americans' radiation exposure [49].
By 2006, medical procedures in the United States were contributing significantly more ionizing radiation exposure than during the mid-1980s. In 2006, medical sources comprised almost 50% of the U.S.'s radiation exposure. This expansion reflects the advances in, and increased utilization of, clinical imaging methodology, specifically computed tomography (CT) and nuclear medicine [50].
Some studies used both CT and X-ray image types [62,63]. The first study used a deep learning CNN for detection and classification and achieved the same accuracy for both CT and X-ray images. The second study proposed a different algorithm for each of the two image types, which gave different performances: 93.44% for X-ray images and 87.98% for CT images.

Data Augmentation
Data augmentation is a useful process that can generate new, extra images while preserving the original information. However, it can likewise introduce noise that affects the training efficiency of the AI model. Data augmentation was particularly necessary at the beginning of the pandemic, when the number of available images was very small while the artificial intelligence algorithms required a more extensive dataset to be trained efficiently.
In the study [64], the authors used a dataset with 585 X-ray images. This number is considered small, and consequently the researchers augmented these images for a more efficient training experience. The images were produced randomly during the training process by flipping, translation, and rotation operations. Table 1 demonstrates the used augmentation algorithms. In another study [65], data augmentation was carried out by several transformation processes, as shown in Figure 5. Across the different studies, some datasets are private and require access permission to be used. However, the public datasets have been rapidly and continuously growing.
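As an illustration of the flip/translate/rotate operations described above, the following is a minimal sketch, not the code of [64] or [65]; it assumes grid-aligned 90° rotations and wrap-around translations for simplicity, and the `augment` helper and shift range are hypothetical choices:

```python
import numpy as np

def augment(image, rng):
    """Randomly flip, translate, and rotate a 2-D image array,
    mimicking on-the-fly augmentation during training."""
    out = image
    if rng.random() < 0.5:                       # horizontal flip
        out = np.fliplr(out)
    # translation: shift by up to +/-10 pixels on each axis (wraps around)
    dy, dx = rng.integers(-10, 11, size=2)
    out = np.roll(out, shift=(dy, dx), axis=(0, 1))
    # rotation: multiples of 90 degrees keep the pixel grid aligned
    out = np.rot90(out, k=rng.integers(0, 4))
    return out

rng = np.random.default_rng(0)
chest = np.arange(64 * 64, dtype=float).reshape(64, 64)  # stand-in for an X-ray
batch = [augment(chest, rng) for _ in range(8)]          # eight augmented variants
```

Because all three operations merely permute pixels, each augmented image keeps exactly the original intensity values, which is the "saving the original information" property noted above.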

Figure 5. Data augmentation process as proposed in [65].
As indicated earlier, the data augmentation process is a useful tool, but it can produce noise that affects the training model's efficiency. Nishio et al. evaluated three augmentation methods; the results of this work showed better accuracy using a combination of the three algorithms than using no augmentation, or any of the three methods separately [66]. Another study aimed to prevent the overfitting problem by monitoring the training loss; its data augmentation was based on rotation and flipping [67].
The mentioned data augmentation techniques, such as flipping, rotating, and color changing, are fast, reliable, and easy to apply. However, such structural changes are limited in their benefits and do not produce completely new data. For example, in [68], the authors proposed a new technique for data augmentation using a Generative Adversarial Network (GAN). This model can generate artificial images without supervision [69]. The main idea of this network is to pit two competing networks against each other: a generator that produces realistic images to deceive a discriminator that is trained to separate real images from generated ones [70]. Figure 6a,b shows the system performance with, and without, data augmentation [68].
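The adversarial objective behind this generator/discriminator game can be sketched as follows; this is a minimal illustration of the standard GAN losses, not the implementation of [68], and the score values are hypothetical:

```python
import numpy as np

def bce(p, label):
    """Binary cross-entropy of probabilities p against a constant label."""
    eps = 1e-12  # guard against log(0)
    return -np.mean(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

def gan_losses(d_real, d_fake):
    """Adversarial objectives: the discriminator tries to score real images
    as 1 and generated ones as 0; the generator tries to make the
    discriminator score its outputs as 1."""
    d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
    g_loss = bce(d_fake, 1.0)
    return d_loss, g_loss

# toy discriminator scores: fairly confident on real images, unsure on fakes
d_loss, g_loss = gan_losses(np.array([0.9, 0.8]), np.array([0.4, 0.6]))
```

As the generator improves, the fake scores rise, lowering the generator loss and raising the discriminator loss, which is the competition that eventually yields realistic synthetic images.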

Segmentation
In some studies, researchers have tried to segment the infected tissues of the lung to improve COVID-19 detection in CT scan images. The segmentation process can be considered a Region of Interest (ROI) extraction, which improves system performance by focusing only on the infected area and minimizing the time wasted reading all the image features.
Despite all the advantages of segmenting the infected lung tissues, several challenges face this process, such as the high variation between infected cells, the irregularity of these cells, the low contrast of the tissues, and differences in illumination. Irrespective of these challenges, some studies proposed algorithms for infected tissue segmentation. For example, in [71], the authors used a Support Vector Machine (SVM) and active contour modeling to segment the infected tissues. The detection rate for three types of nodules (solid, non-solid, and cavitary) was 89%, whereas the false positive rate was 7.3%, and the proposed model correctly specified the location of the nodules.

In the study [72], three benchmarks were built for lung and infection segmentation using 70 COVID-19 cases; these benchmarks cover currently active research directions, e.g., few-shot learning, domain generalization, and knowledge transfer. This work evaluated 40 pre-trained models. The average Dice Similarity Coefficient (DSC) score for infection segmentation was 67.3%, whereas the average Normalized Surface Dice (NSD) score was 70.0% for infection area segmentation.
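The Dice Similarity Coefficient reported above is straightforward to compute from two binary masks; the following is a minimal sketch (the `dice` helper and the toy masks are illustrative, not taken from [72]):

```python
import numpy as np

def dice(pred, gt):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2|P ∩ G| / (|P| + |G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:          # both masks empty: define DSC as 1
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom

gt = np.zeros((8, 8), dtype=int); gt[2:6, 2:6] = 1        # 16-pixel "infection"
pred = np.zeros((8, 8), dtype=int); pred[3:7, 3:7] = 1    # shifted prediction
score = dice(pred, gt)    # overlap is a 3x3 block -> 2*9/(16+16) = 0.5625
```

A DSC of 1.0 means a perfect overlap with the ground-truth infection mask, so the 67.3% average above indicates substantial but imperfect agreement.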
Another study to address the lung segmentation challenges was proposed by Fan et al. [73]. This study was conducted based on Infection Network (Inf-Net) for segmentation. This system started with extracting the low-level features. The performance of these features was increased by adding an edge attention model. The overall infection network is shown in Figure 7. The system performance was promising and considerably closer to the ground truth compared to other studies, as shown in Figure 8.
In another study, the authors used UNet++ for lung segmentation by inheriting the basic structure of UNet++ and composing it with SCOAT-Net based on the encoder and decoder semantic levels. This method uses a smaller number of parameters and reduces the calculation cost. Further, the performance was good for small-scale datasets. On the other hand, this method could not detect certain delicate opacity regions, as shown in case 5 in Figure 9 [74].
Based on the surveyed literature, infected cell segmentation provides radiologists with more informative results related to the volume, shape, and percentage of the infected area. Furthermore, this process generates the most important information for ML models, which has been validated in numerous studies [72,[75][76][77].
To minimize the time consumed by manual masking of the infected tissues of the lung, an enhanced segmentation framework was introduced in [78]. A multi-agent Deep Reinforcement Learning (DRL) approach, an improved version of the Deep Q-network based on CT images, was proposed for lung infection segmentation. The suggested mask detection was carried out as a tree over the 3D images to cover all agents and obtain the best segmentation and detection process, as shown in Figure 10. The outcome of this study was compared with other existing segmentation systems and the ground truth; the accuracy of the proposed model was 97.12%, and the sensitivity, specificity, precision, and F1 score were 79.97, 99.48, 85.21, and 83.01%, respectively. This model used multiple agents to obtain several masks for a 3D image to cover all areas, especially the unobservable regions of the lung.
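The sensitivity, specificity, precision, F1, and accuracy figures of the kind quoted above are all derived from the per-pixel confusion counts of a predicted mask against the ground truth; a small sketch with hypothetical counts (not the data of [78]):

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Standard evaluation metrics computed from confusion counts of a
    predicted infection mask versus the ground-truth mask."""
    sensitivity = tp / (tp + fn)            # recall of infected pixels
    specificity = tn / (tn + fp)            # recall of healthy pixels
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, f1=f1, accuracy=accuracy)

# hypothetical counts: most pixels are healthy, so accuracy can be high
# even when sensitivity to the (small) infected region is lower
m = segmentation_metrics(tp=80, fp=14, tn=900, fn=20)
```

This imbalance between the healthy and infected pixel populations is why papers report sensitivity and specificity alongside accuracy, as in the 97.12% accuracy versus 79.97% sensitivity pattern above.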

Classification
Radiologists recently discovered that the DL approach, which was able to detect tuberculosis in chest X-rays, could also be practical for recognizing COVID-19-related lung abnormalities, and assisting physicians in choosing the treatment order for high-risk COVID-19 patients. Others have proven that medical imaging is an essential source of information for a quick diagnosis of COVID-19 and that the combination of AI and chest imaging can assist in explaining COVID-19 problems [76].
In terms of COVID-19 image analysis, a chest X-ray is an imaging tool used by hospitals to diagnose COVID-19 infection, and it was the first image-based strategy utilized in Spain. If clinical suspicion of infection exists after examination, a sample of nasopharyngeal exudate is taken for a reverse transcription-polymerase chain reaction (RT-PCR) test, followed by the acquisition of a chest X-ray film. Because the results of the PCR test can take several hours to obtain, the information revealed by the chest X-ray is critical for a quick clinical diagnosis. For example, if the patient's clinical condition and chest X-ray are both normal, the patient is sent home while the etiological test results are awaited. A suspected patient will be admitted to hospital for close observation if the X-ray reveals abnormal results [77].
In general, the absence or presence of abnormal signs on a chest X-ray is used to determine whether the patient should be sent home or kept in the hospital for additional observation. While radiography in medical examinations can be performed quickly and widely due to the prevalence of chest radiology imaging systems in healthcare systems, radiologists' interpretation of radiography images is limited by the human capacity to detect the subtle visual features present in the images [79].
Many studies in this literature have reported new advances in DL models employing neural networks for separating COVID-19 from non-COVID-19 cases, since AI can uncover patterns in chest X-rays that radiologists would otherwise miss.
A deep neural network architecture called CovXNet has been proposed to quickly detect COVID-19 and other kinds of pneumonia with different localizations from chest X-rays. Instead of utilizing classic convolution, efficient depth-wise convolution with varying dilation rates is used to analyze anomalies in 5856 X-ray images from various perspectives by integrating data from multiple receptive fields. For the initial training of the deep network, a large database containing X-rays from normal and other typical pneumonia patients is used to supplement the modest number of COVID-19 X-rays. Due to overlapping solid characteristics between COVID-19 and other pneumonia, a very good result may be achieved with a smaller database including COVID-19 X-rays by transferring the first trained convolutional layers with some extra fine-tuning layers [80].
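The depth-wise convolution with dilation that CovXNet builds on can be illustrated as follows; this is a naive NumPy sketch of the generic operation (CovXNet's actual layers are defined in [80]), in which each channel is filtered independently and the dilation rate widens the receptive field without adding parameters:

```python
import numpy as np

def depthwise_dilated_conv(x, kernels, dilation):
    """'Valid' depth-wise convolution: each input channel is filtered by its
    own k x k kernel, with gaps of (dilation - 1) pixels between kernel taps,
    so a small kernel covers a wide receptive field."""
    C, H, W = x.shape
    k = kernels.shape[1]                     # kernels has shape (C, k, k)
    span = (k - 1) * dilation                # spatial extent of the dilated kernel
    out = np.zeros((C, H - span, W - span))
    for c in range(C):
        for i in range(H - span):
            for j in range(W - span):
                patch = x[c, i:i + span + 1:dilation, j:j + span + 1:dilation]
                out[c, i, j] = np.sum(patch * kernels[c])
    return out

x = np.random.default_rng(1).random((3, 16, 16))     # 3-channel feature map
kernels = np.ones((3, 3, 3)) / 9.0                   # per-channel mean filters
y1 = depthwise_dilated_conv(x, kernels, dilation=1)  # shape (3, 14, 14)
y2 = depthwise_dilated_conv(x, kernels, dilation=2)  # shape (3, 12, 12)
```

Running the same 3 x 3 kernels at several dilation rates, as CovXNet does, lets the network aggregate evidence from multiple receptive-field sizes at a fixed parameter cost.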
Furthermore, it has been discovered that stacking can further improve the predictions obtained from multiple CovXNet variants, which are principally optimized with varying input X-ray resolutions. In addition, a class activation map allows for the discriminative localization of aberrant zones, which can aid in the diagnosis of clinical pneumonia symptoms on X-rays. More sample X-rays of COVID-19 patients for training in the transfer learning phase should increase the performance of these schemes even more.
Extensive simulation findings indicate that it could be an effective solution for the faster diagnosis of COVID-19 and other pneumonia patients. Furthermore, the proposed CovXNet is extremely scalable and has a sizeable receptive capacity, making it suitable for various computer vision applications. Accordingly, a gradient-based discriminative localization is implemented to identify the aberrant regions of X-ray images relating to distinct forms of pneumonia [78].
Extensive testing utilizing two distinct datasets reveals accuracies for COVID-19/Normal, 96.9% for COVID-19/Viral pneumonia, and 94.7% for COVID-19/Bacterial pneumonia. On the other hand, the accuracy was 90.2% for the multiclass COVID-19/Normal/Viral/Bacterial pneumonia task, providing outstanding detection performance, as shown in Figure 11. As a result, at the current state of the COVID-19 pandemic, the proposed methods can be an effective tool.
Figure 11. COVID-19 cases detection [78].
In another attempt, the authors present a model that automatically extracts the radiological abnormalities consistent with COVID-19 from chest CT data and builds a robust ML model [79]. These findings show that a classic machine learning approach can predict the presence of the virus in a radiological test. To improve the model, an AI system was utilized to extract COVID-19-related abnormalities and feed them to the algorithm as supplementary data, reaching an accuracy of 89.15%. The best proposed method (SVM) is efficient, high quality, and cost-effective. As a result, radiologists can use this approach as a decision-support tool to detect suspected COVID-19 cases in real-world circumstances, attaining roughly 90% accuracy and improving the baseline findings by five points.
A multi-classification deep learning model for detecting COVID-19, pneumonia, and lung cancer has been developed and tested on X-rays and CT scans of the chest. To the best of the authors' knowledge, this is the first systematic attempt to classify these three chest disorders with a single accurate model [80]. A single design that diagnoses these conditions accurately and as soon as possible is critical, as COVID-19 patients should receive the appropriate treatment and be isolated to halt the spread of the virus. Four different architectures were evaluated: VGG19+CNN, ResNet152V2, ResNet152V2+GRU, and ResNet152V2+Bi-GRU.
Through extensive experiments on datasets collected from several sources containing chest X-ray and CT images, the VGG19+CNN model outperformed the other three proposed models, achieving 98.05% accuracy, 98.05% recall, 98.43% precision, 99.5% specificity, 99.3% negative predictive value, 98.24% F1 score, 97.7% MCC, and 99.66% AUC on X-ray and CT images [80].
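All of the metrics quoted in studies like this one derive from the four confusion-matrix counts. A quick sketch of the standard definitions (the counts below are hypothetical, chosen only to show the computation):

```python
import numpy as np

def binary_metrics(tp, fp, tn, fn):
    """Derive the usual binary-classification metrics from raw counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    recall      = tp / (tp + fn)            # sensitivity
    precision   = tp / (tp + fp)
    specificity = tn / (tn + fp)
    npv         = tn / (tn + fn)            # negative predictive value
    f1          = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return dict(accuracy=accuracy, recall=recall, precision=precision,
                specificity=specificity, npv=npv, f1=f1, mcc=mcc)

# Hypothetical counts for a COVID-19-vs-normal test set of 200 images.
m = binary_metrics(tp=95, fp=2, tn=98, fn=5)
```

Reporting specificity and NPV alongside accuracy matters here because COVID-19-positive cases are usually the minority class, and accuracy alone can hide poor sensitivity.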
A Computer-Aided Diagnosis (CAD) technique for identifying COVID-19 patients from 2300 CXR pictures is provided. A rich representation is built from an optimal set of GLCM-based texture features to precisely represent the segmented lung-tissue ROIs of each CXR image [81]. The extracted features are normalized for the final COVID-19 classification and fed into a discriminative LDCRF model. Using 5-fold cross-validation, the approach was thoroughly evaluated and validated on a large publicly available dataset of frontal CXR pictures, reaching an average accuracy of 95.88%, with precision, recall, and F1-score of 96.17%, 94.45%, and 95.79%, respectively. These findings show that the suggested CAD approach can assist radiologists and medical physicists in developing a reliable diagnosis model to differentiate COVID-19 from non-COVID-19, as shown in Figure 12.
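GLCM-based texture features can be computed with a few lines of NumPy. The sketch below builds a co-occurrence matrix for one pixel offset and derives two of the classic Haralick descriptors (a minimal illustration; the quantization level, offset, and random ROI stand-in are assumptions, and real pipelines typically average several offsets and angles):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one pixel offset (dx, dy).

    img is an integer image already quantized to `levels` gray levels;
    entry (i, j) counts pixel pairs with values i and j at that offset.
    """
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()                       # normalize to probabilities

def glcm_features(p):
    """Two of the classic Haralick texture descriptors."""
    i, j = np.indices(p.shape)
    contrast    = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    return contrast, homogeneity

roi = (np.random.rand(32, 32) * 8).astype(int)   # quantized lung-ROI stand-in
contrast, homogeneity = glcm_features(glcm(roi))
```

Infected lung tissue tends to show different co-occurrence statistics than healthy parenchyma, which is why such features feed the downstream classifier well.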
Three types of deep learning approaches are presented for classifying and segmenting X-ray pictures of the lungs of COVID-19 virus-infected patients. Two systems are presented for patient diagnosis: a deep neural network (DNN) method based on the fractal features of the input images, and a CNN method based on CT scan images [82]. The given CNN architecture, with greater accuracy (93.2%) and sensitivity (96.1%), outperforms the DNN technique, which has an accuracy of 83.4% and a sensitivity of 86%. With this impressive performance, the provided CNN architecture can be used to diagnose COVID-19 patients.
Reverse Transcription Polymerase Chain Reaction (RT-PCR) is currently the standard method for diagnosing patients. However, it cannot be applied to many patients, and the procedure is costly and time-consuming. As a result, it is important to propose an artificial intelligence strategy for better diagnosing COVID-19 patients. The resulting approach can be used in place of RT-PCR and demonstrates a reliable and successful route to non-contact testing of COVID-19 patients, which can aid in the early and cost-effective detection and screening of COVID-19 cases. A group of medical specialists will have to work together to verify whether the model extracts enough biomarkers for COVID-19-positive cases [82].
The regions of focus for verified COVID-19-positive cases, bacterial pneumonia, and healthy cases are shown in the CAM images of chest radiographs. The methods mentioned in this study could be used as an initial screening tool to assist healthcare providers in treating COVID-19 patients by recognizing and promptly screening the illness. In addition, the approach offers not only a low-cost but also a high-quality service, and medical practitioners can use automatic non-contact testing to reduce the risk of contracting COVID-19. Initial tests revealed that the model produced satisfactory results and may be utilized to speed up COVID-19 identification; the experiments revealed accuracies of 96% and 92.5% in the two- and three-class cases, respectively [82].
Based on chest X-ray pictures, the current study used three deep CNN techniques to detect COVID-19. Two transfer-learning methodologies were evaluated, deep feature extraction and fine-tuning, as well as an end-to-end trained new CNN model. In addition, SVM classifiers with several kernel functions were used to classify the deep features. Analyzing eight well-known local descriptors yielded the following conclusions: the deep learning approaches outperformed the local descriptors [83]; deep features with the SVM classifier exceeded the other approaches; deep feature extraction and local feature descriptor extraction take less time than fine-tuning and end-to-end training; and in deep feature classification, the Cubic kernel function surpassed all other kernels.
The ResNet50 model outperformed the other pre-trained CNN models in most cases, and in end-to-end training, the deep CNN models outperformed the other networks. The accuracy score for the deep features retrieved from the ResNet50 model with an SVM classifier using the Linear kernel function was 94.7%, the highest of all the findings. The fine-tuned ResNet50 model achieved 92.6%, whereas the constructed CNN model achieved 91.6% after end-to-end training. Various local texture descriptors and SVM classifications were also employed to compare performance with the deep approaches to identifying COVID-19 based on chest X-ray images. The findings indicated that the deep techniques are more effective than the local texture descriptors [83].
COVID-19 symptoms were diagnosed using Artificial Intelligence models based on human-generated respiratory sounds such as voice/speech, cough, and breath. The Convolutional Neural Network is a type of AI network applied to various real-world challenges. A Deep Convolutional Neural Network (DCNN) model was used in this study to diagnose COVID-19 disease using human respiratory sounds from the COVID-19 Sounds crowdsourced dataset [84].
Using multi-feature channels instead of standard techniques allowed the model to extract deep features of the acoustic respiratory sound signal with 7% higher accuracy on the COVID-19 crowdsourced benchmark dataset. Using a DCNN classifier, the model classified sounds as asthma, COVID-19, pertussis, bronchitis, or ordinary healthy sounds, with an accuracy of 95.45%. COVID-19 detection from respiratory sounds is thus suggested as an approach to improve the efficiency of detecting COVID-19-positive cases.
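One typical input channel for such a sound classifier is a time-frequency representation of the recording. The sketch below frames a 1-D signal and takes the log-magnitude FFT of each frame (a simplified stand-in for the paper's multi-feature channels; the frame length, hop size, and synthetic signal are assumptions):

```python
import numpy as np

def log_spectrogram(signal, frame_len=256, hop=128):
    """Frame a 1-D audio signal and take the log-magnitude FFT of each
    frame -- one of the time-frequency "channels" a sound classifier
    can consume as an image.
    """
    frames = []
    window = np.hanning(frame_len)
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame))
        frames.append(np.log1p(spectrum))
    return np.stack(frames)                  # (num_frames, frame_len//2 + 1)

# A synthetic "cough-like" burst: noise with a decaying envelope.
t = np.arange(4096)
sig = np.random.randn(4096) * np.exp(-t / 1024)
spec = log_spectrogram(sig)                  # feed to the DCNN as an image
```

Stacking several such representations (e.g., different window lengths or filter banks) as channels is what gives the multi-channel input its extra discriminative power.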
EMCNet, a system for automatic detection of COVID-19 from chest X-ray pictures, can help affected patients. EMCNet extracts high-level characteristics from X-ray pictures using a CNN, and its ensemble model accurately distinguishes COVID-19 from normal instances. Compared with a previous recent study, the dataset comprises a considerable number of COVID-19 images, and a thorough trial reveals improved performance: 98.91% accuracy, 100% precision, 97.82% recall, and a 98.89% F1-score. With these findings, EMCNet can serve as a valuable resource for doctors and may be used as an alternative to manual radiological analysis for the automatic detection of COVID-19. EMCNet has certain limitations (for example, it can misclassify some COVID-19-positive cases as negative), but it can be used as a backup tool [84].
For the detection of new COVID-19 cases from X-ray pictures, a deep CNN-LSTM network was introduced. For coronavirus detection, the CNN is employed as a feature extractor and the LSTM network as a classifier; combining the retrieved characteristics with an LSTM that distinguishes COVID-19 cases from others improves the performance of the proposed system. The created system achieved 99.4% accuracy, 99.9% AUC, 99.2% specificity, 99.3% sensitivity, and a 98.9% F1-score. The proposed CNN-LSTM and competitive CNN architectures were evaluated on the same dataset [85]. The results of the rigorous testing demonstrated that the proposed design outperforms a competing CNN network and that the suggested system could serve as a tool for COVID-19 patients and caregivers during this global pandemic.
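The LSTM half of such a pipeline carries state across the CNN's feature vectors before the final classifier. A minimal NumPy sketch of one LSTM step (generic equations, not the paper's trained network; all sizes and weights here are random placeholders):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step over a CNN feature vector x.

    In a CNN-LSTM detector, the CNN turns each image region (or each
    image in a sequence) into a feature vector; the LSTM then carries
    state (h, c) across those vectors before a final classifier.
    W: (4d, n) input weights, U: (4d, d) recurrent weights, b: (4d,).
    """
    d = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:d])            # input gate
    f = sigmoid(z[d:2 * d])        # forget gate
    o = sigmoid(z[2 * d:3 * d])    # output gate
    g = np.tanh(z[3 * d:4 * d])    # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n, d = 16, 8                       # feature size, hidden size
W = rng.normal(size=(4 * d, n))
U = rng.normal(size=(4 * d, d))
b = np.zeros(4 * d)
h = c = np.zeros(d)
for feat in rng.normal(size=(5, n)):   # 5 CNN feature vectors
    h, c = lstm_step(feat, h, c, W, U, b)
# h now summarizes the sequence and would feed a softmax classifier
```

The gating is what lets the classifier accumulate evidence across the CNN's features instead of judging each in isolation.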
To lessen the effort of medical diagnosis, the authors employ chest X-ray pictures collected from COVID-19-infected patients, as well as healthy people, to build a system using a deep transfer learning approach. Three CNN models, Inception-ResNetV2, InceptionV3, and ResNet50, are compared, with ResNet50 providing the best performance, with an accuracy of around 98% [86]. In the suggested framework, the class-imbalance problem is solved utilizing a modified loss function and multiple convolution and capsule layers. Following the adoption of this system, doctors and those in clinical practice would be able to make better decisions due to enhanced performance. In addition, the model's area under the curve, specificity, and accuracy can all be improved with pre-training, and the number of images in the datasets can be increased to improve quality.
Deep features and an SVM are used in an approach for detecting coronavirus (COVID-19) in X-ray pictures by extracting the deep features of 13 pre-trained CNN models and feeding them to the SVM classifier one by one. Each classification model is run 20 times, and the average value is recorded to improve the robustness of the classification model [87]. The ResNet50 plus SVM classification model performs better than the other 12 classification models. The proposed classification model for COVID-19 detection has a 95.33% accuracy rate, based on the average of 20 independent executions, with a maximum accuracy of 98.66%. A vaster dataset might be used to expand this study. The method's restriction is that it cannot be used if the patient is in pain.
Zhou et al. proposed a segmentation network based on U-Net with an attention mechanism [88]. Most contemporary segmentation networks are trained with dice loss, which penalizes false negative and false positive voxels equally and therefore yields high specificity but low sensitivity. They instead used the focal Tversky loss to train the model and improve segmentation performance on tiny ROIs. Furthermore, they enhanced the baseline U-Net by adding the attention mechanism to each layer to capture rich contextual interactions and improve feature representations.
The outcomes of the experiment show that the proposed strategy is effective, although the study is constrained by the tiny dataset; the authors believe that the proposed method could obtain more competitive outcomes with a more extensive training dataset. The experiment's findings, evaluated on a small dataset with only 100 available CT slices, show that the proposed strategy works and can produce accurate and fast segmentation of COVID-19 regions. The dice score, sensitivity, and specificity achieved are 69.1%, 81.1%, and 97.2%, respectively [88].
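The focal Tversky loss behind these results generalizes dice loss with asymmetric penalties. A NumPy sketch (a generic formulation of the loss; the default hyperparameters and the toy masks are illustrative assumptions):

```python
import numpy as np

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for a soft segmentation mask.

    alpha > beta penalizes false negatives more than false positives,
    which raises sensitivity on tiny ROIs; gamma < 1 focuses training
    on hard examples with a low Tversky index.
    """
    tp = np.sum(pred * target)
    fn = np.sum((1 - pred) * target)
    fp = np.sum(pred * (1 - target))
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma

# Toy masks: a small lesion, a perfect prediction, and an empty miss.
target = np.zeros((16, 16)); target[6:10, 6:10] = 1.0
hit  = target.copy()            # loss near 0
miss = np.zeros((16, 16))       # loss near 1
```

With alpha = beta = 0.5 and gamma = 1 the expression reduces to the ordinary dice loss, which is exactly the symmetric penalty the authors moved away from.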
A growing number of studies propose employing deep learning to enable quick and dependable COVID-19 assessment from chest CT using a multi-center dataset. The study offered the first systematic comparison of a wide range of deep learning approaches for CT segmentation, comparing seven in-house deep learning methods to four public ones [89]. All methods predicted overall lesion volume with an average volume difference lower than that of the human raters. In total, they compared 12 deep learning methods, including open-source and in-house built algorithms, on a multi-center dataset.
The results suggest that combining several approaches improves overall test-set performance for lung segmentation, binary lesion segmentation, and multiclass lesion segmentation, with mean dice scores of 0.982, 0.724, and 0.469, respectively. The binary lesions were segmented with a mean absolute volume error of 91.3 mL. Identifying different lesion types was more difficult, with a mean absolute volume difference of 152 mL and mean dice scores of 0.369 for consolidation and 0.523 for ground-glass opacity. All algorithms accomplish binary lesion segmentation with an average volume error lower than that of the human raters' visual assessment, implying that the methods have matured to the point where they can be evaluated on a wide scale and used in clinical practice [89].
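The two evaluation quantities used throughout these comparisons, the dice score and the lesion-volume error, are straightforward to compute from binary masks (a minimal sketch; the voxel volume and toy masks are illustrative assumptions):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary lesion masks."""
    inter = np.sum(pred & truth)
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

def volume_error_ml(pred, truth, voxel_ml=0.001):
    """Absolute lesion-volume difference in millilitres, given the
    volume of one voxel (the value here is an arbitrary illustration)."""
    return abs(int(pred.sum()) - int(truth.sum())) * voxel_ml

# Toy 3-D masks: same lesion size, shifted by one voxel along one axis.
truth = np.zeros((10, 10, 10), dtype=bool); truth[2:6, 2:6, 2:6] = True
pred  = np.zeros((10, 10, 10), dtype=bool); pred[3:7, 2:6, 2:6] = True
d = dice_score(pred, truth)        # overlap-based quality
v = volume_error_ml(pred, truth)   # volume-based quality
```

The toy example shows why both metrics are reported: the shifted mask has zero volume error yet an imperfect dice score, so volume agreement alone can overstate segmentation quality.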
A deep learning approach for COVID-19 lung-infection segmentation in chest CT scans was provided in this paper [90]. The FCN was designed utilizing a U-Net architecture as the backbone, with proposed ResDense blocks at each level along the encoding and decoding paths. Because of the concatenation skip connection in each ResDense block, the feature maps of the infection zones and lung background travel through the network with only minor changes in their values, improving network learning and segmentation performance. Furthermore, the method includes an EED step that enhances the appearance of infection regions in CT slices by increasing contrast and homogeneity.
The qualitative and quantitative evaluation results demonstrate the system's usefulness and ability to segment COVID-19 infection regions from CT images. The system is trained and verified on many 2D CT slices taken from a variety of datasets from various sources, demonstrating its generalizability, efficiency, and potential as a tool for automatic COVID-19 infection segmentation in clinical practice. The researchers used different metrics for lung- and infection-region segmentation and achieved dice overlap scores of 0.961 and 0.780, respectively [90].
COVID-19 diagnostic performance was improved using pre-trained knowledge, via a deep learning technique called returning transfer learning, to help clinicians diagnose COVID-19 [91]. Their approach distinguished COVID-19 from viral and bacterial pneumonia and can potentially increase the efficiency of diagnosis, isolation, and treatment of COVID-19 patients, relieve radiologists' workload, and help bring the pandemic under control. Using ResNet in binary classification, the suggested system can accurately separate COVID-19 from healthy patients, from bacterial pneumonia, and from viral pneumonia. In multi-class classification, the model provided 97.20% accuracy on three classes and 80.95% on four classes.
Automatic algorithms for classifying chest X-ray pictures into three separate categories, COVID-19, pneumonia, and healthy patients, are presented [92]. Given the similarity of the pathological impact on the lungs of COVID-19 and pneumonia, particularly during the early phases of both lung diseases, the authors conducted a comprehensive investigation of the differences under various pathological circumstances. To address these classification tasks, they evaluated six representative state-of-the-art deep network architectures on three public datasets: (I) the Radiological Society of North America (RSNA) Chest X-ray dataset; (II) the COVID-19 Image Data Collection; and (III) the SIRM dataset of the Italian Society of Medical Radiology.
Various typical tests were done to validate the designed methodologies on the classification of 6070 chest X-ray radiographs. In general, the outcomes of the experiments were favorable: the developed methods achieved accuracy values of 0.9706, assisting doctors in diagnosis and thereby allowing early treatment of this crucial pandemic pathology [92].
A deep convolutional neural network (DCNN) is proposed to detect COVID-19 pneumonia patients from digital chest X-ray pictures while maximizing detection accuracy [93]. The collection has 864 COVID-19, 1345 viral pneumonia, and 1341 normal chest X-ray images. In this study, a DCNN-based model, Inception V3 with transfer learning, was developed for detecting coronavirus-pneumonia-infected patients from chest X-ray radiographs, with a classification accuracy above 98% (training accuracy of 97% and validation accuracy of 93%). The findings reveal that transfer learning for COVID-19 detection is effective, has stable performance, and is simple to implement.
A Deep Convolutional Neural Network technique is presented for quickly and reliably identifying COVID-19 infection cases from patients' chest X-ray pictures [94]. Chest X-ray pictures of more than 150 verified COVID-19 patients from the Kaggle data pool were used in the experiments to validate the performance of the proposed method. The results reveal that the suggested method correctly detects the results 93% of the time.
Based on knowledge gained from CT scan images, a new COVID-19 diagnosis technique is developed as a binary classification challenge utilizing a sequential CNN. The findings indicate that the model is quite effective, with a maximum accuracy of 92.48% [95]. Other related parameters confirm the validity of the proposed method's outcome and establish its superiority over earlier methods. The main goal of this research is to aid the worldwide fight against COVID-19, which medical professionals are waging with zeal, by providing them with a simple and effective method of diagnosing the condition. The authors hope that this approach will be applied globally and contribute to putting this pandemic behind us.
An integrated deep learning architecture for COVID-19 classification is presented using two widely used classification networks, ResNet and Xception, running experiments to uncover obstacles and limitations [96]. The findings reveal that deep learning models can overestimate their performance due to experimental design flaws and overfitting on the training dataset. The study instead used an independent test set to compare the suggested architecture to state-of-the-art approaches and found that several of the highlighted bias and overfitting concerns were minimized.
Even though the proposed deep learning architecture provides the best performance with the best feasible setup, it highlights the difficulties of comparing and interpreting the results of various deep learning algorithms. While deep learning algorithms based on chest imaging data have shown promising results, the tests imply that a larger, more comprehensive database with less bias is required for building tools that can be used in real-world clinical situations.
A convolutional neural network is proposed to detect COVID-19-positive individuals from chest radiographs. Previous research has shown that the lung X-rays of COVID-19-positive patients have distinct characteristics, and X-ray inspection of suspected positive individuals is more accessible than PCR, making this a valid method for testing patients [97]. With a classification accuracy of 99.45% (training accuracy of 99.70%), a sensitivity of 99.30%, and a specificity of 99.40%, obtained from 820 chest radiographic images (excluding data augmentation) collected from three databases, this model has proven to be a reliable COVID-19 detector.
Another machine learning algorithm is demonstrated using an SVM classifier trained on a combination of deep convolutional and handcrafted features taken from X-ray chest scans [98]. This combination of characteristics is used to distinguish COVID-19 patients from healthy people and those with common pneumonia. A regular convolutional neural network and an SVM trained on handcrafted features alone are used as baselines for the combined-feature strategy. Combining the features in the proposed framework enhances classification performance compared to applying convolutional or handcrafted features separately: the combined technique attained 0.988 accuracy, compared to 0.963 for the SVM on handcrafted features and 0.983 for the CNN.
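Fusing two feature families before the SVM typically requires putting them on a common scale first. A generic sketch of that fusion step (the feature dimensions and random data are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def standardize(f, eps=1e-8):
    """Zero-mean, unit-variance scaling per feature column."""
    return (f - f.mean(axis=0)) / (f.std(axis=0) + eps)

def combine_features(deep, handcrafted):
    """Fuse CNN features with handcrafted descriptors by standardizing
    each family separately and concatenating, so neither scale
    dominates the downstream SVM."""
    return np.hstack([standardize(deep), standardize(handcrafted)])

deep = np.random.rand(100, 512) * 10.0     # e.g., pooled CNN activations
hand = np.random.rand(100, 64)             # e.g., texture/edge statistics
fused = combine_features(deep, hand)       # (100, 576), ready for an SVM
```

Without the per-family standardization, the larger-magnitude deep features would dominate the kernel distances and the handcrafted descriptors would contribute almost nothing.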
DenResCov-19 is a new deep learning network that can offer reliable classification results for multi-class lung disorders. The suggested model was tested on three distinct published datasets with four classes: COVID-19-positive, pneumonia, tuberculosis, and healthy individuals [99]. It also addressed the class-imbalance issue by appropriately combining the datasets (except for DXR4, where the dataset is imbalanced in COVID-19-positive cases due to the limited number of available images).
As a result of the experimental research, the proposed model shows good generalization and robust behavior. According to the analysis, the network has better classification accuracy than state-of-the-art networks such as ResNet-50, DenseNet-121, VGG-16, and Inception-V3, and its well-balanced AUC-ROC and F1 metrics have been confirmed. In most situations, the network's detection points on heatmaps match the expert radiologist's. In summary, the authors created a pre-screening fast-track decision network based on CXR images to detect COVID-19 and other lung diseases [99].
An approach based on pre-trained deep neural networks is shown, which achieved state-of-the-art performance for the task at hand, i.e., 99.60% accuracy, by utilizing a cyclic generative adversarial network (CycleGAN) model for data augmentation [100]. A dataset of 3163 images from 189 patients was also collected and labeled by clinicians to test the approach. Unlike previous datasets, data from persons with COVID-19 disease was collected as the standard, rather than data from other disorders, and this database was made publicly available.
The suggested study includes basic measures such as grouping and clustering in the classification report [101]. The proposed methods are evaluated using J48 and Simple K-Means analyses. The simulation tests the correctness of the diagnosis and confirms accuracy figures of 99.63% for classifying by date and 96.27% for classifying by state. Features are finely extracted for the arrangement, and parameters are generated to create a Weka class. In the structured decision tree, J48 is supplied nominal information and evaluates each node to pass the best findings onward.
Deep learning has proven to be an effective way of extracting high-dimensional information from medical images. In this paper, the state-of-the-art convolutional neural network MobileNet is employed and trained from scratch to evaluate the value of the extracted features for the classification task [102]. MobileNet v2, which has been shown to deliver impressive performance in related tasks, is trained on a large-scale dataset of 3905 X-ray images corresponding to six diseases.
Training the CNNs from scratch surpasses the other transfer learning strategies, both in discriminating the X-rays between the seven classes and between COVID-19 and non-COVID-19. A classification accuracy of 87.66% is attained between the seven classes, and for detecting the presence of COVID-19, the approach achieves 99.18% accuracy, 97.36% sensitivity, and 99.42% specificity. The findings show that training CNNs from scratch can reveal essential biomarkers associated with, but not limited to, COVID-19 disease, while the high classification accuracy suggests that the potential of X-ray imaging should be investigated further [102].
Using deep features, a support vector machine distinguishes corona-affected X-ray images from others; clinical practitioners can employ the approach to discover COVID-19-infected patients early. The proposed technique of multi-level thresholding plus SVM showed good accuracy in classifying lungs infected with COVID-19 [103]. All the images were the same size and saved in JPEG format at 512 × 512 pixels. The average sensitivity, specificity, and accuracy of the lung classification using the suggested model were 95.76%, 99.7%, and 97.48%, respectively.
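Multi-level thresholding of this kind is commonly done Otsu-style: pick the thresholds that maximize the between-class variance of the gray-level histogram. A brute-force two-threshold sketch (a generic formulation, not the paper's exact method; the three-population test image is an illustrative assumption):

```python
import numpy as np

def two_level_otsu(img, levels=256):
    """Exhaustive two-threshold Otsu: split gray values into three
    classes (e.g., background / lung tissue / infection candidate) by
    maximizing between-class variance. Brute force is fine for a
    256-bin histogram thanks to cumulative sums."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    cw = np.concatenate([[0.0], np.cumsum(p)])                       # weights
    cm = np.concatenate([[0.0], np.cumsum(np.arange(levels) * p)])   # moments
    mu_total = cm[-1]
    best, best_t = -1.0, (0, 0)
    for t1 in range(1, levels - 1):
        for t2 in range(t1 + 1, levels):
            var = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, levels)):
                w = cw[hi] - cw[lo]
                if w > 0:
                    mu = (cm[hi] - cm[lo]) / w
                    var += w * (mu - mu_total) ** 2
            if var > best:
                best, best_t = var, (t1, t2)
    return best_t

# Three well-separated intensity populations; the thresholds land
# between them, isolating each region.
img = np.concatenate([np.full(300, 30), np.full(300, 120), np.full(300, 220)])
t1, t2 = two_level_otsu(img)
```

The regions isolated this way can then be fed to the deep-feature extractor and SVM for the final classification.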
Using a convolutional neural network and the pre-trained DenseNet201 model, a novel deep transfer learning model for COVID-19 disease has been developed [104]. The proposed model classifies chest CT scans with 99.82%, 96.25%, and 97.4% training, testing, and validation accuracy, respectively. According to comparative evaluations, the DenseNet201-based CNN performs much better than several well-known deep transfer learning models.
The proposed model achieves 97% accuracy, whereas VGG-16 and ResNet152V2 achieve 96% and 95%, respectively, a 1% improvement over the competitive models. When applying the proposed method to an enormous population, however, this 1% performance advantage can save many individuals' lives. The proposed methodology can improve the COVID-19 testing procedure because CT scans are available in most medical institutes; as a result, the proposed model can be used instead of several COVID-19 testing kits [104].
For coronavirus identification utilizing deep features and the J48 method, the chest X-ray pictures used for simulation purposes were acquired from GitHub and Kaggle sources. The extraction is conducted with 11 pre-trained CNN models, each supplied separately for J48 classification [105]. In addition, a statistical study is conducted to choose the optimal classification pattern; the statistical performance of the ResNet101 plus J48 classification model outperforms the other 10 competing models.
As a result, the proposed classification model's accuracy for detecting COVID-19 disease is 98.54%. Separately, an optimized convolutional neural network model (ADECO-CNN) was presented to distinguish between infected and uninfected patients. The ADECO-CNN approach is compared to the VGG19, GoogLeNet, and ResNet models, which are based on convolutional neural networks (CNNs). According to extensive testing, the ADECO-CNN-optimized model can categorize CT images with 99.99% accuracy, 99.96% sensitivity, 99.92% precision, and 99.97% specificity [105].
Garg et al. proposed a different type of study [106], in which 20 varieties of convolutional neural networks for COVID-19 detection were designed, examined, and compared, including EfficientNet-B5, DenseNet169, InceptionV3, ResNet50, VGG16, and others. The examined models were applied to 4173 CT images of lung tissue. As a result of this study, EfficientNet-B5 delivered the best performance, with about 98% accuracy and sensitivity, while also having the smallest size among the compared models. Table A1 presents a summary of the existing ML and DL systems for COVID-19 detection and classification.

Conclusions
COVID-19 is a high-severity disease that has spread widely over the world. Artificial intelligence systems using medical images have played a significant role in COVID-19 diagnosis. This paper discussed the most efficient and accurate AI systems for COVID-19 diagnosis using X-ray or CT scan chest images and illustrated the range of machine learning and deep learning techniques used for the detection and classification of the COVID-19 virus. AI platforms, image augmentation, and image segmentation techniques have been briefly reviewed and discussed. Two types of images (X-ray and CT scan) were covered in this literature to demonstrate the effectiveness of AI models for COVID-19 diagnosis.
Two main criteria were used to filter the studies: the first is the use of chest images, either X-ray or CT scan, whereas the second is the use of machine learning, including deep learning algorithms. Some studies used data augmentation and segmentation, which can be recognized as a valuable addition that makes this survey more detailed and comprehensive. The machine learning algorithms include deep learning algorithms, whether pre-defined, improved, or optimized. Table A1 in Appendix A summarizes the studies related to classification, focusing on each study's methodology and performance.
In summary, many studies have been proposed for COVID-19 detection and classification, with promising results. Screening for COVID-19 can help in early detection and aid radiologists in their diagnosis efforts. In addition, further research can be developed to predict the severity of the disease, which is very important for estimating the need for ICU care and making clinical decisions regarding treatment, thereby reducing the load on hospitals and health care centers.