The Coronavirus (COVID-19) outbreak continues to grow nationally and internationally. In the global battle against COVID-19, medical imaging modalities such as X-ray and computed tomography (CT) play a key role, and the latest AI developments are improving the capability of imaging tools and supporting healthcare personnel.
Medical imaging is commonly used by clinicians to identify COVID-19, with chest X-ray and lung CT images being the samples most often used in COVID-19 clinical imaging trials. AI innovation plays a significant role in medical imaging analysis and has produced impressive results in image identification, organ recognition, infection region segmentation, and disease classification. It not only decreases the radiologist's image-reading time, but also increases the accuracy and consistency of diagnosis. AI can enhance work performance through accurate diagnosis in X-ray and CT imaging, making testing easier. Computer-aided networks also assist radiologists in making clinical decisions, i.e., in the identification, monitoring, and prognosis of diseases. Below, we address the application of AI techniques to chest X-ray and CT imaging in depth.
2.1. Chest CT Image Detection
Chest CT imaging is a valued component of the assessment of patients with suspected SARS-CoV-2 infection, and there is a growing body of research on the role of imaging in the diagnosis and treatment of COVID-19. The infection produces a broad spectrum of CT findings, most commonly ground-glass opacities and consolidations in the lung periphery. The sensitivity of chest CT for diagnosing COVID-19 has been found to be high, and CT findings can appear before a positive viral laboratory test. Therefore, hospitals with large volumes of admissions in epidemic regions, where the basic healthcare system is under pressure, use CT for the rapid triage of patients with possible COVID-19. Chest CT also plays a vital role in the evaluation of COVID-19 patients with severe and complex respiratory symptoms: scans make it possible to determine how badly the lungs are compromised and how the individual's illness is progressing, which supports medical decision-making. There is, moreover, growing awareness of incidental COVID-19-induced lung abnormalities in CT scans performed for other clinical indications, such as abdominal CT scans for bowel disorders or scans of patients without respiratory symptoms [3]. In this pandemic, AI-based evaluation may become a significant factor by reducing the strain on clinicians: whereas manually interpreting a CT scan can take up to 15 min, AI can analyse the images in about 10 s [3]. Advanced image processing with artificial neural networks therefore has the potential to significantly improve the role of CT in COVID-19 detection by allowing disease to be identified easily, rapidly, and accurately in a large proportion of patients. AI-based CT imaging analysis usually involves the following steps: segmentation of the Region of Interest (ROI), extraction of pulmonary tissue, identification of regional infection, and classification of COVID-19. Recognition of lung organs and ROIs is the basic foundation for AI-based image analysis; ROIs used for further testing and analysis in lung CT imaging include the lungs, lung lobes, bronchopulmonary segments, and regions with infection or lesions. For CT image segmentation and classification, different types of DL networks have been used, e.g., U-Net, V-Net, VB-Net, and VNET-IR-RPN. In one study, of 905 patients assessed with a real-time RT-PCR assay and next-generation sequencing, 419 (approximately 46.3%) were confirmed positive for SARS-CoV-2. The AI method applied a deep CNN to the initial CT scan to extract the image characteristics of individuals with SARS-CoV-2; SVM, RF, and MLP classifiers incorporating clinical knowledge were then used to identify SARS-CoV-2 patients. To predict COVID-19 status, the AI system thus operates on both radiological data and clinical factors. The deep CNN-based AI system obtained an AUC of 92% and showed sensitivity comparable to a senior thoracic radiologist on a test set of 279 patients. Furthermore, the AI system enhanced the detection of RT-PCR-confirmed COVID-19 patients who presented with normal-looking CT scans, correctly classifying 17 of 25 such patients (68%), all of whom had been graded as COVID-19 negative by radiologists [4]. The dataset also included 25 COVID-19-positive cases whose chest CTs were marked as negative by the two reading radiologists. The CNN-based model classified 13 of these 25 images (52%) as positive for COVID-19, the clinical model classified 16 of 25 (64%) as positive, and the joint model classified 17 of 25 (68%) as positive, whereas the senior radiologist and fellows classified 0 of 25 (0%) as positive [4].
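The hybrid design just described, a deep CNN extracting image features that are then combined with clinical factors and fed to SVM, RF, and MLP classifiers, can be sketched as follows. This is a minimal illustration only: the ResNet-18 backbone, the feature concatenation, and the majority vote are assumptions standing in for the study's actual components.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

def extract_features(roi_batch):
    """roi_batch: (N, 3, 224, 224) tensor of segmented lung ROIs."""
    backbone = models.resnet18(weights=None)  # stand-in for the study's CNN
    backbone.fc = torch.nn.Identity()         # drop the classification head
    backbone.eval()
    with torch.no_grad():
        return backbone(roi_batch).numpy()    # (N, 512) feature vectors

def fit_classifiers(image_features, clinical_features, labels):
    """Concatenate imaging and clinical inputs, then fit SVM, RF, and MLP."""
    X = np.hstack([image_features, clinical_features])
    clfs = [SVC(probability=True),
            RandomForestClassifier(),
            MLPClassifier(max_iter=500)]
    for clf in clfs:
        clf.fit(X, labels)
    return clfs

def predict_majority(clfs, image_features, clinical_features):
    """Majority vote across the three classifiers (an assumed fusion rule)."""
    X = np.hstack([image_features, clinical_features])
    votes = np.stack([clf.predict(X) for clf in clfs])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```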
In an effort to find solutions that can rapidly interpret images with deep learning and recognise COVID-19 features, researchers are developing several AI resources. A research team led by Bo Xu of the Tianjin Medical University Cancer Institute and Hospital used CT scans of 180 patients diagnosed with typical viral pneumonia before the COVID-19 epidemic and of 79 patients with confirmed COVID-19 to establish an AI method for classifying COVID-19 [5]. They randomly assigned the patients' images to training and test sets for a deep CNN-based algorithm. In findings released on medRxiv [6], the researchers stated that their model detected COVID-19 from CT images with 89.5% accuracy, whereas two radiologists who analysed the same images reported an accuracy of approximately 55%. The team asserts that these findings indicate that AI can provide an accurate diagnosis from a CT scan. Another algorithm, the RADLogics algorithm [7], was able to detect COVID-19 and assist in monitoring a patient's improvement.
Two studies, published in [4,8], advance this idea by using DL trained on CT scans as a fast diagnostic instrument to screen for COVID-19 infection in patients who were admitted to hospital and required medical image processing. In [8], researchers at Macau University of Science and Technology used 532,000 CT images from 3777 patients in China to train and test their AI-based models, concentrating on the tell-tale lesions observed in the lungs of COVID-19 patients. In a pilot study across several Chinese hospitals, the AI model correctly diagnosed coronavirus-induced pneumonia at least 85% of the time on a database of 417 patients in four separate groups.
Distinguishing whether signs detected in CT images indicate COVID-19 or another pneumonia is a major problem for radiologists. VIDA Diagnostics [9] has developed LungPrint, a device that uses AI to analyse CT scans to accurately identify respiratory disorders, including COVID-19 signs and symptoms. In [10], NIH and NVIDIA researchers developed a DL technique to detect COVID-19 from chest CT images using datasets from four hospitals across China, Italy, and Japan. In total, the study used 2724 scans from 2619 patients and two models (full 3D and hybrid 3D) applied in series to establish the final COVID-19 prediction. The two models work as follows: the first used a fixed input size covering the entire lung region (full 3D), while the second averaged scores over several regions of each lung at fixed image resolutions (hybrid 3D). The hybrid 3D model achieved 92.4% validation accuracy in detecting COVID-19 versus other pneumonia, whereas the full 3D model achieved 91.7%.
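A hedged sketch of how such a pair of scores might be combined is shown below. The averaging fusion rule is an assumption made purely for illustration; the exact serial combination used in [10] is not detailed here, and the model objects are assumed callables.

```python
# Sketch of the two-stage scoring idea: one score for the whole lung volume
# at a fixed input size (full 3D), and an averaged score over
# fixed-resolution sub-regions of each lung (hybrid 3D).
import numpy as np

def full_3d_score(model, lung_volume):
    """Score the entire (resampled) lung volume in one forward pass."""
    return float(model(lung_volume[None, None]))  # add batch/channel dims

def hybrid_3d_score(model, regions):
    """Average the model's scores over fixed-resolution lung sub-regions."""
    return float(np.mean([model(r[None, None]) for r in regions]))

def covid_probability(full_model, hybrid_model, lung_volume, regions):
    # Simple average of the two stages; the real fusion rule may differ.
    return 0.5 * (full_3d_score(full_model, lung_volume)
                  + hybrid_3d_score(hybrid_model, regions))
```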
To retrieve ROIs from each CT image and obtain candidate lesion regions, Chen et al. [11] constructed a U-Net++ deep learning model. For model construction and validation, 46,096 anonymised images were collected and processed from 106 admitted patients at Renmin Hospital of Wuhan University in China, comprising 51 patients with laboratory-confirmed COVID-19 pneumonia and 55 control patients with other diseases. To compare the model's effectiveness against that of radiologists, 27 consecutive patients undergoing CT scanning on 5 February 2020 at Renmin Hospital of Wuhan University were enrolled. On the retrospective dataset, the U-Net++ model achieved an overall sensitivity of 100%, specificity of 93.55%, and accuracy of 95.24%. Huang et al. [12] used the AI-based InferRead CT Pneumonia tool to quantitatively assess changes in the lung burden of COVID-19 patients. Three modules are incorporated into the tool: lung and lobe extraction, pneumonia classification, and quantitative analysis. The CT image characteristics of COVID-19 pneumonia were divided into four grades: mild, moderate, severe, and critical. Commercial deep learning software automatically calculated the extent of CT lung opacification for the whole lung and each of the five lobes, and compared CT scans over follow-up. A total of 126 COVID-19 patients were assessed, including 6 mild, 94 moderate, 20 severe, and 6 critical cases. The CT-based opacification percentage differed significantly among the initial clinical groups, increasing progressively from mild to severe (all p < 0.01).
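The quantitative step, scoring how much of each lung or lobe is opacified, reduces to a ratio of segmented volumes. A minimal sketch, assuming binary masks are already produced by the segmentation modules:

```python
import numpy as np

def opacification_percentage(lung_mask: np.ndarray,
                             opacity_mask: np.ndarray) -> float:
    """Percentage of lung (or lobe) volume that is opacified.

    Both masks are boolean 3D arrays on the same CT voxel grid:
    lung_mask marks the lung/lobe region, opacity_mask the affected tissue.
    """
    lung_voxels = lung_mask.sum()
    if lung_voxels == 0:
        return 0.0
    affected = np.logical_and(lung_mask, opacity_mask).sum()
    return 100.0 * affected / lung_voxels
```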
Qi et al. [13] obtained 71 CT scans from 52 confirmed COVID-19 patients in five hospitals. They used the Pyradiomics toolkit to extract 1218 features from each CT image. CT radiomics models based on logistic regression (LR) and random forest (RF) algorithms were built on features extracted from pneumonia lesions during training. Predictive efficacy was evaluated on the test dataset at both the lung-lobe and patient levels. The CT radiomics models, based on six second-order features, successfully distinguished short-term from long-term hospital stays in patients with SARS-CoV-2-related pneumonia, with AUCs of 97% and 92% for LR and RF, respectively. The LR model showed 100% sensitivity and 89% specificity, while the RF model showed 75% sensitivity and 100% specificity. A short-term hospital stay was defined as 10 days or less, and a long-term stay as more than 10 days.
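This workflow, Pyradiomics features from lesion masks feeding LR and RF classifiers, might look like the following sketch. The extractor settings and feature selection are illustrative assumptions, not the study's exact configuration.

```python
from radiomics import featureextractor
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

extractor = featureextractor.RadiomicsFeatureExtractor()  # default feature set

def lesion_features(ct_image_path, lesion_mask_path):
    """Extract radiomics features for one lesion ROI."""
    result = extractor.execute(ct_image_path, lesion_mask_path)
    # keep numeric radiomics values only, in a stable order
    return [v for k, v in sorted(result.items()) if k.startswith("original_")]

def fit_and_evaluate(X_train, y_train, X_test, y_test):
    """y = 1 for long-term stay (>10 days), 0 for short-term (<=10 days)."""
    for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
        model.fit(X_train, y_train)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(type(model).__name__, "AUC:", round(auc, 3))
```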
2.2. Chest X-ray Image Detection
Chest X-rays have been proposed as a highly useful method for evaluating and testing COVID-19 patients.
Figure 2 shows representative architectures of DL-based CT image classification and COVID-19 examination. Compared with CT images, chest X-ray (CXR) images are simpler to acquire in clinical radiology examinations. Many available studies [14,15] operate on CXR images for coronavirus detection. For the most part, AI-based CXR image analysis involves steps such as data preparation, model training, and COVID-19 segmentation and classification. Several deep learning methodologies (such as CNN, nCOVnet, and U-Net++) are used to achieve better and faster detection of COVID-19 in X-ray images.
In medical centers and hospitals, X-ray devices deliver less costly and quicker scans of different human organs. The interpretation of numerous X-ray images is typically performed manually by radiologists, who have been reported to detect only 69% of COVID-19 cases on X-ray [17]. Pre-trained models make it much easier and quicker to detect COVID-19. In [14], a dataset of reported positive COVID-19, typical bacterial pneumonia, and healthy (no infection) cases was used, comprising a total of 1428 X-ray scans. The authors used a pre-trained VGG-16 model for segmentation and classification, obtaining 96% and 92.5% accuracy in the two-class and three-class settings, respectively. Based on these results, X-ray imaging can be adopted by the medical community as a possible diagnostic method for immediate and faster COVID-19 identification, supplementing current diagnostic approaches. Several other creative algorithms apply CXR images for better results in the fight against SARS-CoV-2. By assessing essential characteristics of chest X-ray scans, Basu and Mitra [15] introduced domain transfer learning for detecting COVID-19. Gradient Class Activation Maps (Grad-CAM) were also applied to a collection of 20,000 chest X-rays to explain the COVID-19 detections. To assess the viability of using chest X-rays for classification among four groups (normal, other disease, pneumonia, and COVID-19), the data were evaluated with five-fold cross-validation. With 100% of the COVID-19 and normal cases accurately classified in each validation fold, the overall accuracy was 95.3%; misclassifications occurred between the pneumonia and other-disease classes.
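A minimal transfer-learning sketch in the spirit of [14] follows: a pre-trained VGG-16 backbone is frozen and its final layer replaced for the three classes. The training details (optimiser, learning rate, which layers are frozen) are assumptions for illustration, not the published configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Pre-trained VGG-16 with a new 3-way head:
# COVID-19 / bacterial pneumonia / normal.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                  # freeze the convolutional base
model.classifier[6] = nn.Linear(4096, 3)     # replace the 1000-class head

optimiser = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (N, 3, 224, 224) CXR batch; labels: (N,) in {0, 1, 2}."""
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
    return loss.item()
```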
Students have also been involved in producing ML algorithms to identify novel COVID-19 patients. With the assistance of AI, Cranfield University students developed computer models that can diagnose COVID-19 in CXR images [18,19]. The proposed models use ML and DL techniques to extract characteristics and classify CXR images; they can discern features that would not generally be apparent to the naked human eye and aid COVID-19 detection.
The first model examines abnormalities in an X-ray, distinguishing normal patients from those with pneumonia. The second model then operates on the patients with pneumonia to determine whether the pneumonia is caused by the COVID-19 virus [18,19].
Figure 3 shows a high-level representation of the intelligent computational models that were developed at Cranfield University.
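A hedged sketch of this two-stage triage logic follows, assuming both models are binary classifiers that return a probability; the threshold and labels are illustrative.

```python
def triage(cxr, abnormality_model, covid_model, threshold=0.5):
    """Two-stage cascade: normal vs. pneumonia, then COVID-19 vs. other.

    cxr: a pre-processed chest X-ray tensor/array.
    abnormality_model, covid_model: callables returning a probability.
    """
    p_pneumonia = abnormality_model(cxr)      # stage 1: any abnormality?
    if p_pneumonia < threshold:
        return "normal"
    p_covid = covid_model(cxr)                # stage 2: is it COVID-19?
    return "COVID-19 pneumonia" if p_covid >= threshold else "other pneumonia"
```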
Meanwhile, researchers at King's College London, Massachusetts General Hospital, and the health tech company Zoe have begun studies of a fully AI-based identification method that aims to predict COVID-19 by assessing symptoms together with the results of traditional COVID-19 tests [20]. Another model [21], supporting COVID-19 diagnosis from chest X-ray scans, was proposed to provide an end-to-end structure without a separate feature-extraction stage. This model, named DarkCovidNet [21], is built on the foundations of DarkNet-19, a design drawn from the real-time object detection method YOLO. The model achieved sensitivity, specificity, and F1-score values of 85.35%, 92.18%, and 87.37%, respectively.
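For illustration, a DarkNet-style building block of the kind such a network stacks (convolution, batch normalisation, LeakyReLU, interleaved with pooling) might look like the following sketch; the filter counts are illustrative assumptions, not the published DarkCovidNet layout.

```python
import torch.nn as nn

def dark_block(c_in, c_out):
    """One DarkNet-style unit: 3x3 conv -> batch norm -> LeakyReLU."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.1, inplace=True),
    )

# A small stack of such blocks, halving resolution between stages.
backbone = nn.Sequential(
    dark_block(1, 8), nn.MaxPool2d(2),
    dark_block(8, 16), nn.MaxPool2d(2),
    dark_block(16, 32), nn.MaxPool2d(2),
)
```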
Many research studies apply various methods to the COVID-19 detection problem. Mangal et al. [22] proposed an AI-based COVID-19 detector (CovidAID), a deep neural network model developed on the publicly available covid-chestxray-dataset to triage patients for proper examination; it achieved 90.5% accuracy with 100% sensitivity. To separate COVID-19-positive X-rays from negative ones, a deep learning-based CNN model called Truncated Inception Net was proposed in [23]; six distinct dataset configurations were assembled from the following X-ray types: COVID-19 positive, pneumonia positive, tuberculosis positive, and normal cases. In detecting COVID-19-positive cases against pneumonia and healthy patients combined, the model obtained 99.96% accuracy (AUC of 100%); comparably, in classifying COVID-19-positive patients against other types of X-ray scans, it obtained 99.92% accuracy (AUC of 99%). According to [23], this demonstrated the feasibility of the proposed Truncated Inception Net as a screening method, outperforming existing tools. Research on COVID-19 prediction in X-ray images using transfer learning was published by Minaee et al. [24].
Four common pre-trained deep convolutional neural networks (CNNs), ResNet18, ResNet50, SqueezeNet, and DenseNet-161, were compared on their predictions. The authors trained these four models on COVID-19 and non-COVID datasets, the latter including normal images from the ChexPert dataset across 14 subclasses. The models achieved an average specificity of around 90% at a sensitivity of 97.5%, which strongly supports the possibility that CXR imaging can distinguish COVID-19 from other infections and normal lung conditions. A decision tree (DT) classifier for COVID-19 assessment from chest X-ray imaging was applied in [25]. It comprised three binary DTs, each trained with a deep learning model built on the PyTorch framework with a CNN. The first DT graded chest X-ray images as normal or abnormal, the second identified abnormal X-ray images containing tuberculosis signs, and the third did the same for COVID-19. The accuracy of the first DT is 98% and of the second 80%, while the third DT's average accuracy is 95%. The authors argued that the proposed deep learning-based DT classifier can be applied for pre-screening before RT-PCR results are available, enabling triage and fast-track decision making. A Generative Adversarial Network (GAN) with deep transfer learning was introduced in [26] for COVID-19 identification in chest X-ray scans, addressing the shortage of chest X-ray image datasets. In total, 307 images were collected for four class groups: COVID-19, normal, bacterial pneumonia, and viral pneumonia. Three deep transfer models, AlexNet, GoogLeNet, and ResNet18, were chosen. Three scenarios were tested: the first used all four dataset classes, the second three classes, and the third only two classes, with the COVID-19 class always included. In the first scenario, GoogLeNet was chosen as the base deep transfer model, attaining 80.6% testing accuracy. In the second scenario, AlexNet obtained a testing accuracy of 85.62%. GoogLeNet was also selected as the base deep transfer model in the third scenario (two classes, COVID-19 and normal), where it achieved 100% testing accuracy and 99.9% validation accuracy.
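The augmentation idea in [26], using a GAN to synthesise extra CXR-like training images when real data are scarce, can be sketched as a small DCGAN-style generator; the architecture sizes below are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """DCGAN-style generator: noise vector -> 32x32 grayscale image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0),  # 1x1 -> 4x4
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),    # 4x4 -> 8x8
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),     # 8x8 -> 16x16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 1, 4, 2, 1),       # 16x16 -> 32x32
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

# Sample synthetic images to append to the real training set.
gen = Generator()
noise = torch.randn(16, 100, 1, 1)
synthetic_batch = gen(noise)   # (16, 1, 32, 32), values in [-1, 1]
```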
X-ray may play a significant role in COVID-19 triage, especially in low-resource settings. In [27], 24,678 X-ray images were used to train CAD4COVID-XRay, a deep learning-based AI framework for COVID-19 classification, implementing U-Net lung segmentation together with a CNN. A set of 454 images was analysed (223 patients tested positive for COVID-19, and the remaining 231 tested negative). The CAD4COVID-XRay AI system labelled chest X-ray images as COVID-19 pneumonia with an AUC of 81%. According to the authors [27], the system can be useful as part of a diagnostic workflow, especially in low-resource settings where diagnostic equipment is not accessible.
Table 1 provides a summary of the best-scoring AI-based ML and DL methods for diagnosing COVID-19 using radiology images.
2.3. COVID-19 Severity Classification Using Chest X-ray with Deep Learning
With the support of artificial intelligence, particularly ML and DL techniques, chest X-rays enable us to learn even more. Chest X-rays (CXRs) offer a non-invasive method for monitoring disease progression. A model that predicts a severity score of COVID-19 pneumonia from frontal chest X-ray images is studied in [48]. The extent of lung involvement and the degree of opacity are annotated in the CXR image database. A neural network model (DenseNet) pre-trained on large non-COVID-19 chest X-ray sets is used to construct features of COVID-19 images for predicting severity. As shown in
Figure 4, 94 images of confirmed COVID-19 patients were used to study COVID-19 severity prediction with DL. A score-based methodology is used, comprising two kinds of scores.
Table 2 provides examples of disease severity stages: the extent of lung involvement and the degree of opacity are assessed together in a single X-ray image.
For each lung, the extent of involvement by ground-glass opacity and the degree of consolidation were rated. The total extent score and opacity score over the right and left lungs ranged from 0 to 8 and from 0 to 6, respectively. These scores can inform escalation or de-escalation of care and can be used to track the effectiveness of patient treatment, especially in the ICU.
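A hedged sketch of this kind of severity predictor follows: pooled features from a pre-trained DenseNet are regressed onto the radiologist extent (0-8) and opacity (0-6) scores. Using ImageNet weights here is a stand-in assumption for the paper's CXR-specific pre-training.

```python
import torch
import torchvision.models as models
from sklearn.linear_model import LinearRegression

backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.classifier = torch.nn.Identity()   # expose the 1024-d pooled features
backbone.eval()

def cxr_features(batch):
    """batch: (N, 3, 224, 224) normalised frontal CXRs -> (N, 1024) features."""
    with torch.no_grad():
        return backbone(batch).numpy()

def fit_severity_regressor(features, scores):
    """scores: (N, 2) array of [extent 0-8, opacity 0-6] radiologist labels."""
    return LinearRegression().fit(features, scores)
```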
Mobile app development also makes use of AI to evaluate COVID-19 severity. In [49,50], researchers from the NYU College of Dentistry created a mobile app using a dataset of 160 images of reported COVID-19 patients from China. Using the app, numerous biomarkers contained in the blood are used to estimate COVID-19 severity levels from 0 (mild) to 100 (extreme).
Ridley [51], on the other hand, reported a special form of deep learning algorithm, a Convolutional Siamese Neural Network (CSNN), developed to generate a pulmonary X-ray severity (PXS) score for COVID-19 patients; the score correlated well with radiologist evaluations and could also help predict whether a patient will require intubation or die. The algorithm was evaluated on internal and external datasets. Internal testing was conducted on a dataset of 154 COVID-19 admission chest X-rays, of which 92 had additional chest X-ray follow-up and were used for longitudinal study. External testing was performed on 113 consecutive admission chest X-rays from COVID-19 cases at a community hospital, Newton-Wellesley Hospital in Newton, MA, in the United States. The researchers found that across the test sets, the median PXS score was higher in patients who were intubated or died (PXS score = 7.9) than in those who were not intubated (PXS score = 3.2), a statistically significant difference (p < 0.001).
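The Siamese idea behind such a PXS score can be sketched as follows: a shared encoder embeds the patient CXR and a pool of normal reference CXRs, and severity is read off as the mean embedding distance. The encoder and the Euclidean distance are assumptions for illustration, not the published configuration.

```python
import torch

def pxs_score(encoder, patient_cxr, normal_pool):
    """Severity as mean embedding distance to normal references.

    encoder: a trained network mapping images to (N, D) embeddings.
    patient_cxr: (1, 3, H, W) tensor; normal_pool: (K, 3, H, W) tensor.
    """
    encoder.eval()
    with torch.no_grad():
        z_patient = encoder(patient_cxr)     # (1, D)
        z_normals = encoder(normal_pool)     # (K, D)
    distances = torch.norm(z_normals - z_patient, dim=1)
    return distances.mean().item()           # higher = more severe
```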
As described in Section 2.1, the CT radiomics models of Qi et al. [13] are also relevant to severity assessment: their LR and RF models, built on features extracted with Pyradiomics, predicted short-term (10 days or less) versus long-term (more than 10 days) hospital stay in patients with SARS-CoV-2-related pneumonia, with test AUCs of 97% and 92%, respectively.
2.4. Observing COVID-19 Through AI-Based Cough Sound Analysis
Coughing is a symptom of more than 30 medical conditions unrelated to COVID-19, which makes identifying a COVID-19 infection from coughing alone a major challenge. Physicians have long used sounds made by the human body, such as moaning, breathing, heartbeat, digestion, and vibration sounds, as diagnostic indicators. Until now, such signals were usually collected via manual auscultation at scheduled visits. Research in cardiovascular and respiratory medicine is now using digital technologies to capture body sounds (e.g., with digital stethoscopes), which can also be exploited for automatic COVID-19 analysis. Recent work has begun to investigate how respiratory sounds (e.g., cough, breathing, and voice) recorded in hospital by devices from patients who tested positive for COVID-19 differ from the sounds of healthy people.
An analysis of COVID-19 identification based on coughs collected via a phone app is reported in [52], using a cohort of 48 COVID-19, 102 bronchitis, 131 pertussis, and 76 normal cough sounds to train and evaluate the diagnostic method. Equation (1) is used in [52] to transform the collected cough data onto the Mel scale m for pre-processing:

m = 2595 log10(1 + f/700), (1)

where f is the sound frequency in Hz and m is a pitch scale whose steps are judged by listeners to be equal in distance from one another [53]. Another mobile app, named kAs, was built by Zensark Technologies (Hyderabad, Telangana, India) [54], using machine learning to assess patient respiratory health and disease-specific cough signatures. The app asks 15 questions about the subject and records their cough, transmitting the sounds to Swaasa, the company's AI platform, where, on the basis of the questionnaire answers and the coughing tone, an audiometric analysis is performed and a COVID-19 Risk Score is generated. The app produces a rating on a scale of 1 to 10, where 10 is the highest risk level, and a person's risk score can be monitored over days or weeks.
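For concreteness, the Mel mapping of Equation (1) and a typical Mel-spectrogram front end for cough audio can be sketched as follows; the file name is hypothetical, and librosa's melspectrogram applies the same scale internally.

```python
import numpy as np
import librosa

def hz_to_mel(f_hz):
    """Equation (1): m = 2595 * log10(1 + f / 700)."""
    return 2595.0 * np.log10(1.0 + np.asarray(f_hz) / 700.0)

# Load a cough recording (hypothetical file) and compute a log-Mel
# spectrogram, a common input representation for cough classifiers.
signal, sr = librosa.load("cough.wav", sr=None)
mel_spec = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=64)
log_mel = librosa.power_to_db(mel_spec)   # (64, T) model input
```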
Schuller et al. explored the potential of Computer Audition (CA) and AI for analysing the cough sounds of COVID-19 patients [55]. They first examined the capacity of CA to automatically detect speech and cough under different conditions, such as breathing, dry and wet coughing or sneezing, flu-like speech, eating behaviour, sleepiness, or pain, and then recommended using CA technology in the diagnosis and care of COVID-19 patients. However, they noted that there was as yet no documented use of this technology in COVID-19 research, owing to the absence of usable and annotated data. Wang et al. [56] studied the breathing of patients with COVID-19 and of patients with other common cold and flu respiratory conditions. For COVID-19 screening, they proposed a respiratory simulation model called BI-AT-GRU, which combines a bidirectional GRU neural network with an attention mechanism and can distinguish six clinical respiratory patterns: Eupnea, Tachypnea, Bradypnea, Biots, Cheyne-Stokes, and Central-Apnea. Sharma et al. [57] aimed to develop sound-based COVID-19 diagnostic techniques to complement laboratory testing. To test acoustic biomarkers, their project, called Coswara, used cough, breath, and speech sounds. Data were collected for nine distinct sound categories, including vowel phonations and varying patterns of breathing, along with other features. Spectral and temporal features were extracted from the audio files, and the classification and data curation tasks are ongoing.
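A hedged sketch of a BI-AT-GRU-style classifier of the kind described for [56] follows: a bidirectional GRU over a respiratory signal with an attention layer that pools the sequence before classifying one of the six breathing patterns. Layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiAtGRU(nn.Module):
    """Bidirectional GRU with attention pooling for 6 respiratory patterns."""
    def __init__(self, in_dim=1, hidden=64, n_classes=6):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (N, T, in_dim)
        h, _ = self.gru(x)                      # (N, T, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        context = (w * h).sum(dim=1)            # (N, 2*hidden)
        return self.head(context)               # class logits

logits = BiAtGRU()(torch.randn(4, 200, 1))      # 4 signals, 200 time steps
```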
A portable device called FluSense [54,58] was developed at the University of Massachusetts Amherst.
Figure 5 shows the components of the FluSense device, which is driven by an AI-based neural network that can identify coughing and crowd size in real time and directly collect and evaluate data on flu-like illnesses such as COVID-19. FluSense uses a microphone array, a thermal camera, and a neural computing engine to passively and continuously characterise speech and cough sounds, together with real-time changes in crowd density, on the edge. These sources of knowledge help to determine the timing of flu vaccination campaigns, possible travel restrictions, the delivery of drugs, and more [59]. The FluSense developers claim that this new edge-computing platform, intended for use in medical centers, will extend the health monitoring instruments used to predict influenza and other viral outbreaks, including COVID-19 epidemics.
In [58], FluSense was shown to accurately predict the daily number of patients, with a Pearson correlation coefficient of 0.95. However, the FluSense platform considered neither other respiratory diseases nor additional health data; cough information is important and necessary, but not sufficient on its own to cover all respiratory infections, so data must be gathered from a number of sources. Ravelo [60] reported the launch of a cough-based AI initiative against COVID-19, built around an online website devoted to collecting cough sounds from COVID-19 patients and supported by the Bill and Melinda Gates Foundation. The mechanism is simple: an individual uploads a cough recording and is asked to provide information such as symptoms, other diseases, gender, and geographic location, in order to determine whether that person from a specific area has a COVID-19 infection. The procedure also asks individuals for a photo of their COVID-19 test results. Once enough data are available, the team will create an ML- or DL-based algorithm and check whether the cough sounds associated with COVID-19 infection can be accurately identified. Iqbal et al. [61] proposed an anonymous framework that uses a mobile app to record and evaluate suspicious cough sounds of individuals in order to assess whether a person is healthy or suffering from a respiratory disease.
Researchers at the University of Cambridge [62] created a system based on respiratory sounds (coughing and breathing) to recognise healthy and unhealthy individuals, applying these sounds to differentiate among COVID-19 patients, normal persons, and asthma patients. The three binary classification tasks are structured as follows:
Distinguishing COVID-19-positive users from negative users.
Distinguishing COVID-19 users with cough from healthy users with cough.
Distinguishing COVID-19 users with cough from asthma users who declared a cough.
More than 7000 unique users (approximately 10 K samples) participated in the community-driven data collection, of whom more than 200 reported testing positive for COVID-19. Standard audio augmentation methods were used to increase the dataset's sample size. For the classification tasks, three classifiers were used: LR, Gradient Boosting Trees (GBT), and SVM. To compare their efficiency, the analysis used the area under the ROC curve (AUC). AUCs above 70% were achieved in all three binary tasks. Using breathing sounds alone, the AUC was around 60%; however, when the cough and breathing inputs were combined, the AUC rose to around 80% per task, owing to the larger number of characteristics.
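A minimal sketch of this classifier comparison follows, assuming handcrafted cough/breath feature vectors have already been computed (the feature extraction itself is out of scope here).

```python
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def compare_classifiers(X_train, y_train, X_test, y_test):
    """Train LR, GBT, and SVM on audio features and compare by ROC AUC."""
    classifiers = {
        "LR": LogisticRegression(max_iter=1000),
        "GBT": GradientBoostingClassifier(),
        "SVM": SVC(probability=True),
    }
    for name, model in classifiers.items():
        model.fit(X_train, y_train)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"{name}: AUC = {auc:.2f}")
```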
Table 3 provides a description of AI-based ML and DL approaches to speech and audio analysis for COVID-19-related health problems.