Role of Artificial Intelligence in COVID-19 Detection

The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihoods of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is also crucial in stopping the spread of the SARS-CoV-2 virus. The prior success of artificial intelligence (AI) in various fields of science has encouraged researchers to further address this problem. Various medical imaging modalities, including X-ray, computed tomography (CT), and ultrasound (US), combined with AI techniques have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review of state-of-the-art AI techniques applied to X-ray, CT, and US images to detect COVID-19. In this paper, we discuss the approaches used by various authors, the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.


Introduction
COVID-19 was first reported by the Wuhan Municipal Health Commission, China, in December 2019. It is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and is considered one of the deadliest global pandemics in history [1]. The World Health Organization (WHO) declared the COVID-19 outbreak a pandemic in March 2020, and there have been 203,944,144 cases and 4,312,902 deaths globally according to the WHO statistics of 12 August 2021 (available online: https://covid19.who.int/ (accessed on 12 August 2021)). The pandemic situation has caused worldwide distress by affecting people socially, medically, and economically. This infectious disease in severe form often leads to acute respiratory syndrome and the development of pneumonia. The outbreak was thought to be initiated via zoonotic spread from the seafood markets in Wuhan, China. Later, it was believed that transmission between humans was responsible for community spread of the infection throughout the world, and approximately 200 countries have been affected by this pandemic [2][3][4][5]. Although individuals of all ages are at risk of being infected, severe COVID-19 symptoms are more likely in people aged 60 and above, and individuals with comorbidities.
Once the SARS-CoV-2 virus enters the body via respiratory aerosol, it acts on the respiratory system and affects patients with varying degrees of clinical severity. During the initial days of infection, the clinical presentation remains asymptomatic, although an immune response is mediated in the body. Those persons affected are infectious at this phase, and the disease can be diagnosed by nasal swab [6][7][8]. Further migration of the virus from nasal epithelial cells into the upper respiratory tract results in symptoms of fever, dry cough, malaise, etc. The majority of infected patients do not progress beyond this phase, as the immune response from the host is sufficient to contain the disease from spreading to the lower respiratory tract and lungs [9] (refer to Figure 1). Approximately one-fifth of infected cases develop lower respiratory tract infection, and these patients present with acute respiratory distress syndrome (ARDS). Histologically, this stage reveals lung sequestration along with host cell apoptosis. Persistent inflammation and diffuse alveolar damage are common histopathologic patterns observed among the infected patients exhibiting ARDS [5,10].
COVID-19 affects people in different ways. Asymptomatic patients will have positive nasal swab results and normal chest X-ray images. Patients with mild illness exhibit different commonly known symptoms such as fever, sore throat, dry cough, malaise and body aches or nausea, vomiting, abdominal pain, and loose stools. Patients with moderate illness show symptoms of pneumonia with no significant hypoxemia (persistent fever and cough). This group of infected patients also shows abnormal lesions on high-resolution chest computed tomography (CT). Severe illness is defined as patients who present with pneumonia and significant systemic hypoxemia (SpO2 < 92%). Cases of COVID-19 detection using clinical data, statistical analysis, and case studies with no data mining and deep learning techniques were excluded from the selection.
The relevance of a paper was based on its title, abstract, and materials and methods. An article was considered based on a voting scheme by the authors' group of the current study. The authors are well-versed in the field of deep learning and machine learning techniques using various imaging modalities. Low-quality and conference papers were removed from the database. A final total of 202 papers (184 articles with 18 review papers) were compiled and analyzed. The selection process is shown in Figure 2. To the best of our knowledge, we have considered the data mining and deep learning research publications reported to the present for the identification of COVID-19 using various image modalities.


AI Techniques for COVID-19 Detection
Based on the state-of-the-art AI techniques to automatically detect COVID-19 using medical imagery, we categorized the methodologies as: (i) the DNN-based approach, (ii) the HCFL-based approach, and (iii) the hybrid approach. The input data consisted mainly of X-ray, CT, and US medical images of patients. In the DNN-based approach, convolutional neural networks (CNNs) are employed to automatically characterize the COVID-19 imagery. The DNN approach groups the feature extraction and classification components into an integrated neural network. In the HCFL-based approach, knowledge of feature extraction techniques is required, followed by feature selection/ranking and classification stages. The hybrid approach fuses the methodologies from the DNN- and HCFL-based approaches to obtain promising results. Figure 3 illustrates the key components used in the COVID-19 detection system.

COVID-19 Dataset: Medical Image
RT-PCR is the gold standard to diagnose COVID-19 using a nasal/throat swab. Sometimes the test results may not be available immediately, and the quality of the sample may cause a false negative result [31]. In such situations, various chest imaging modalities such as X-ray, CT, and ultrasound (US) help to confirm suspected COVID-19 cases [32]. The combination of AI techniques with various imaging modalities can help increase the efficiency of COVID-19 detection worldwide [32].
The development of an automated COVID-19 detection system based on chest X-ray imagery requires labeled images of normal and COVID-19 cases so as to train the system to differentiate healthy persons from COVID-19 patients. To test the system with an independent test dataset and to enhance its efficacy, it is necessary for these datasets to be made available publicly. With large datasets, it is possible for researchers to cross verify existing AI models before installation in hospitals or testing centers. Hence, medical images such as chest X-ray, CT, and lung US images are essential for the development of an automated COVID-19 detection system. Many researchers have, of their own volition or in collaboration with hospitals, aggregated COVID-19 datasets with various imaging modalities and released them publicly to assist research communities. Figure 4 shows examples of several chest images from publicly available datasets.


The majority of the state-of-the-art AI techniques depend on publicly available datasets (refer to Table 1). The first dataset uses X-ray as the imaging modality, and is very popular due to the huge dataset collected from nine different sources and made available in a single source (refer to the given source in Table 1). It is noted that there are only a few public sources available for US images, compared to X-ray and CT images. In addition to the public datasets mentioned in Table 1, there are also other, not yet widely utilized sources: for X-ray images (available at: https://public.roboflow.ai/classification/covid-19-and-pneumoniascans (accessed on 19 July 2021)), for CT images [33] (https://www.kaggle.com/andrewmvd/covid19-ct-scans (accessed on 19 July 2021)), and for US images [34] (https://github.com/jannisborn/covid19_ultrasound (accessed on 19 July 2021)).
The X-ray images collected from various researchers in different parts of the world are available in portable network graphics format with a size of 299 × 299 pixels (https://www.kaggle.com/tawsifurrahman/covid19-radiography-database (accessed on 21 August 2021)). In [35], COVID-19 CT images were collected in various sizes from medRxiv (https://www.medrxiv.org/, latest accessed on 29 November 2021) and bioRxiv (https://www.biorxiv.org/, latest accessed on 29 November 2021), which were posted from 19 January to 25 March 2020. The minimum, average, and maximum widths are 124, 383, and 1485 pixels, respectively; the minimum, average, and maximum heights are 153, 491, and 1853 pixels, respectively [35]. In [36], CT scans were collected from real patients in Sao Paulo Hospital, Brazil. CT images were also collected from municipal hospitals in Moscow, Russia. These are segregated based on severity, i.e., CT0: no findings, and CT1-CT4: COVID-19-related findings. The number of cases for each category is: CT0-254; CT1-684; CT2-125; CT3-45; and CT4-2 [37]. The largest publicly available lung US dataset was released in [39]. In total, 261 recordings (202 videos and 59 images) were gathered from 216 patients using either convex or linear probes. In addition, the British Society of Thoracic Imaging has also released a COVID-19 image database for teaching purposes (available at: https://www.bsti.org.uk/training-and-education/covid-19-bsti-imaging-database/ (accessed on 19 July 2021)). Authors can use these underutilized datasets to enhance the heterogeneity of their own datasets. In addition, using the freely available datasets, researchers can initiate a community-oriented research effort to develop various models using AI techniques. Hence, it is also possible for researchers to generalize their systems using the various medical images.

Methodology
This section discusses the key processing stages covered by the different authors in the development of state-of-the-art COVID-19 detection systems.

Preprocessing/Segmentation
Preprocessing is the initial stage used to enhance image quality by improving contrast and standardizing image pixel intensity levels. This stage plays a major role in obtaining accurate results. Usually, image quality is greatly improved by employing the contrast limited adaptive histogram equalization (CLAHE) technique [40]. Denoising techniques such as the Kirsch filter [41], Wiener filter [42], and pixel intensity normalization are also implemented. Other preprocessing techniques such as edge detection using the Prewitt filter (PF) [42], histogram equalization (HE), and gamma correction (GC) [43] may be useful. The aforementioned techniques are used in several works and can significantly increase the accuracy of the results.
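As a minimal sketch of two of the techniques named above, histogram equalization (HE) and gamma correction (GC), the following NumPy-only example is our own illustration (the function names and toy image are hypothetical; real pipelines typically use library routines such as OpenCV's CLAHE):

```python
import numpy as np

def equalize_histogram(img: np.ndarray, levels: int = 256) -> np.ndarray:
    """Spread pixel intensities over the full range via the image's CDF."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()  # first nonzero CDF value
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1)), 0, 255)
    return lut.astype(np.uint8)[img]

def gamma_correct(img: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    """Nonlinear intensity remapping; gamma < 1 brightens dark regions."""
    norm = img.astype(np.float64) / 255.0
    return np.clip(np.round(255.0 * norm ** gamma), 0, 255).astype(np.uint8)

# Toy low-contrast "X-ray": all values squeezed into [100, 140]
rng = np.random.default_rng(0)
xray = rng.integers(100, 141, size=(64, 64), dtype=np.uint8)
enhanced = equalize_histogram(xray)  # contrast stretched toward 0..255
```

The same lookup-table idea underlies CLAHE, which simply applies equalization on local tiles with a clip limit to avoid over-amplifying noise.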
For the CNN-based method, a common set of preprocessing techniques is employed, including resizing and shuffling. Furthermore, images are converted to RGB and then input to a CNN. In order to visualize the image more distinctly, the image boundaries are smoothed by normalization using morphological filters and by applying different filtering and enhancement techniques. In addition, lung imagery is extracted using segmentation techniques such as region growing [44], watershed [45], UNet [46], LinkNet [47] (a variant of UNet), and the variational data imputation method (VDI) [48].
In the process of training a deep learning model, there may sometimes be a shortage of data. In such situations, data augmentation techniques may be used to create additional data by slightly altering the existing data, thereby creating different versions of the original data. This acts as a regularizer and reduces overfitting while training the model. Data augmentation techniques such as rotation, cropping, flipping, and translation [49], Gaussian blur, and contrast adjustment have been used [50]. For class imbalance, SMOTE [51] has been employed by several authors. Synthetic images can also be created using a generative adversarial network (GAN) [52], conditional GAN [53], auxiliary classifier generative adversarial network (ACGAN) [54], and Keras' ImageDataGenerator (https://keras.io/api/preprocessing/image/ (accessed on 16 September 2021)).
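The basic geometric augmentations listed above (rotation, flipping, translation, contrast adjustment) can be sketched with plain NumPy; this is our own simplified illustration, not code from any reviewed study, and frameworks such as Keras apply the same ideas on the fly during training:

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return one randomly transformed copy of a 2-D image array."""
    ops = [
        lambda x: np.rot90(x, k=rng.integers(1, 4)),               # rotation by 90/180/270 deg
        lambda x: np.fliplr(x),                                    # horizontal flip
        lambda x: np.flipud(x),                                    # vertical flip
        lambda x: np.roll(x, shift=rng.integers(-5, 6), axis=1),   # small translation
        lambda x: np.clip(x * 1.2, 0, 255),                        # contrast adjustment
    ]
    return ops[rng.integers(len(ops))](img)

rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(32, 32))
batch = [augment(image, rng) for _ in range(8)]  # 8 altered versions of one image
```

Each call draws one transform at random, so a small labeled set yields many distinct training samples without changing the diagnostic content of the image.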
Similarly, features have also been extracted using a CNN-based approach. In this approach, base architectures such as ResNet101 [66], AlexNet [67], DenseNet-201 [68], VGG16 [69], GoogLeNet [70], MobileNetv2 [71], Inceptionv3 [72], SqueezeNet [73], VGG19 [74], and Xception [75] have been adjusted for feature learning and extraction. Transfer learning (TL) has been employed to cope with the limitations that arise from the lack of freely accessible labeled medical images. In addition to TL, methods such as the multilayer perceptron convolutional neural network (MLP-CNN) have been developed to handle mixed data types consisting of numerical/categorical and image data [76]. Similarly, a high-resolution network (HRNet) has been used for extracting detailed features [77]. In addition, authors have also developed customized CNN models to improve system performance.

Feature Selection/Optimization
Feature selection is employed to reduce redundant content while preserving significant information. The sequential feature selector (SFS) algorithm [78], chaotic salp swarm algorithm (CSSA) [79], advanced squirrel search optimization algorithm (ASSOA) [80], and harmony search (HS) algorithm [81] are extensively utilized to reduce redundant information in feature representations. Similarly, ReliefF and neighborhood component analysis (NCA) are combined to select optimal features, i.e., RFINCA [82]. In addition, methods such as binary gray wolf optimization (GWO) [83] and hybrid social group optimization (HSGO) [84] have proven their efficacy in providing the best optimized features. Researchers have also applied the fractional-order marine predators algorithm (FO-MPA) [85], minimum redundancy and maximum relevance (mRMR) [86], and manta ray foraging optimization (MRFO) [63] in order to select the most significant features. Feature dimensionality reduction has been undertaken using the t-distributed stochastic neighbor embedding (t-SNE) technique [87] and principal component analysis (PCA) [88]. Apart from these methods, feature selection using mutual information (MI) [89], Relief-F [90], the dragonfly algorithm (DA) [91], and the guided whale optimization algorithm (Guided WOA) [92] has also been employed. In addition, feature selection has been performed using maximum entropy and the ANOVA test [93].
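To make one of these criteria concrete, here is a hedged NumPy sketch of mutual-information-based feature ranking (the MI criterion cited above [89]); the implementation, function names, and toy data are our own illustrative simplification, not the cited authors' code:

```python
import numpy as np

def mutual_information(feature: np.ndarray, labels: np.ndarray, bins: int = 8) -> float:
    """Estimate I(X;Y) between a continuous feature and integer class labels."""
    # Discretize the feature into `bins` levels, then build the joint histogram
    edges = np.histogram_bin_edges(feature, bins)
    x = np.digitize(feature, edges[1:-1])            # values in 0..bins-1
    joint = np.zeros((bins, len(np.unique(labels))))
    for xi, yi in zip(x, labels):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)            # marginal of X
    py = joint.sum(axis=0, keepdims=True)            # marginal of Y
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def select_top_k(X: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """Rank columns of X by mutual information with y; keep the k best."""
    scores = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)                  # binary labels (e.g., COVID-19 vs. normal)
X = rng.normal(size=(200, 5))                # 5 candidate features
X[:, 2] += 3 * y                             # make feature 2 informative about the label
top = select_top_k(X, y, 2)
```

Features that carry no information about the class score near zero bits and are discarded, which is exactly the redundancy reduction the selection stage aims for.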
Because optimizers are a crucial part of neural network training, the most commonly used algorithms for DNN approaches are stochastic gradient descent, the adaptive learning rate optimization algorithm [94], and root mean square propagation [95], which are used to update the network weights. CNN with GWO, and whale optimization with the BAT algorithm, have been employed to tune the hyperparameters [96,97]. Furthermore, biogeography-based optimization [98] and the multi-objective differential evolution (MODE) parameter tuning method have been used to optimize the parameters [99].
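The weight update performed by such optimizers can be illustrated with a minimal stochastic-gradient-descent step; the momentum term shown here is a common addition and the quadratic toy loss stands in for a real network loss (our own sketch, not any cited implementation):

```python
import numpy as np

def sgd_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD update with momentum: accumulate a velocity, then move the weights."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Minimize f(w) = ||w||^2 (gradient is 2w) as a stand-in for a network loss
w = np.array([3.0, -2.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = sgd_step(w, 2 * w, v, lr=0.05)
# After 200 steps the weights have converged close to the minimum at 0
```

Adaptive methods such as Adam and RMSProp extend this rule by scaling the learning rate per parameter using running estimates of the gradient moments.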

Classification
In the classification stage, a decision is made on test images by predicting their labels. In order to categorize COVID-19 infections, highly accurate classifier techniques play an important role. Classifiers such as random forest (RF) [100], the support vector machine (SVM) [101], and the bagging tree classifier [102] have proven their efficacy in multiclass classification. In addition, the k-nearest neighbor (k-NN) [103], decision tree (DT) [104], Naïve Bayes (NB) [105], artificial neural network (ANN) [106], generalized regression neural network (GRNN) [107], MLP neural network [108], probabilistic neural network (PNN) [109], and extreme learning machine (ELM) [110] classifiers are also used by the research community. Moreover, adaptive boosting (AdaBoost) [111], eXtreme Gradient Boosting (XGBoost) [112], and logistic regression (LR) [113] have also been incorporated by various investigators. In general, the authors selected the classifiers based on the best results achieved for the extracted features. Tables 2-5 summarize the state-of-the-art techniques used in the automated detection of COVID-19 with various image modalities.
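As a simple instance of this stage, the k-NN classifier mentioned above [103] can be written in a few lines of NumPy; the toy feature vectors and names below are hypothetical and stand in for features extracted from chest images:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Label each test sample by majority vote of its k nearest training neighbors."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to all samples
        nearest = y_train[np.argsort(dists)[:k]]      # labels of the k closest
        preds.append(np.bincount(nearest).argmax())   # majority vote
    return np.array(preds)

# Toy feature vectors for two well-separated classes (e.g., normal = 0, COVID-19 = 1)
rng = np.random.default_rng(7)
X_train = np.vstack([rng.normal(0.0, 0.5, size=(20, 4)),
                     rng.normal(3.0, 0.5, size=(20, 4))])
y_train = np.array([0] * 20 + [1] * 20)
print(knn_predict(X_train, y_train, np.array([[0, 0, 0, 0], [3, 3, 3, 3]])))  # prints [0 1]
```

The same fit-then-predict pattern applies to the other classifiers listed (SVM, RF, etc.), which differ only in how the decision boundary is formed from the extracted features.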


Results
From our extensive literature review, it was observed that many of the CAD tools in several medical fields have used accuracy, sensitivity or recall, specificity, positive predictive value (PPV) or precision, F-measure or F-score, and area under the curve (AUC) to evaluate the performance of the system [274][275][276]. Similarly, the performance of the CAD tool for the identification of COVID-19 was also evaluated using the same performance parameters as mentioned above. Let TP, TN, FP, and FN indicate true positive, true negative, false positive, and false negative, respectively. The measures are given by the following equations:

Accuracy = (TP + TN)/(TP + TN + FP + FN)
Sensitivity (Recall) = TP/(TP + FN)
Specificity = TN/(TN + FP)
PPV (Precision) = TP/(TP + FP)
F-score = (2 × PPV × Sensitivity)/(PPV + Sensitivity)

In all performance measures, the higher the value, the better the performance of the model. The developed AI models for COVID-19 detection using various medical images, such as X-ray, CT, and US, can be categorized into 2, 3, 4, and 5 classes per imaging modality, as shown in Figure 5. Two-class classification (COVID-19 vs. non-COVID-19) was the most frequently reported among the different imaging modalities. Combinations of different class categorizations were also observed in CADTs which used X-ray images. Table 6 conveys the average performance outcomes of the systems considered in the present review irrespective of the number of cases. Many of the studies used publicly available datasets and achieved comparable results.
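These measures follow directly from the four confusion-matrix counts; a small self-contained sketch with illustrative (made-up) counts:

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard performance measures computed from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)             # recall
    specificity = tn / (tn + fp)
    ppv         = tp / (tp + fp)             # precision
    f1          = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"Acc": accuracy, "Sen": sensitivity, "Spe": specificity,
            "PPV": ppv, "F1": f1}

# Hypothetical counts for a two-class COVID-19 vs. non-COVID-19 test set
m = classification_metrics(tp=90, tn=85, fp=15, fn=10)
print(round(m["Acc"], 3), round(m["Sen"], 2))  # prints 0.875 0.9
```

AUC is the exception: it is computed from the ranking of predicted scores across thresholds rather than from a single confusion matrix.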
It is observed from Table 6 that the systems developed with X-ray and CT images had five-class classification and achieved a Cvd.Acc (avg.) of 92.41% using X-ray images. It is also observed that the two-class models are no longer valid when other diseases with similar symptoms were presented [178]. It is noted from Tables 2-5 that few studies have performed four-class (normal vs. COVID-19 vs. viral pneumonia (VP) vs. bacterial pneumonia (BP)) classification [114,118,138,154,161,179,189,194,264]. They obtained a Cvd.Acc (avg.) of 89.91%. Hence, for further analysis of the system we considered the models which can categorize three or more classes. Box plot analysis was carried out to obtain the overall performance of the three-class classification system used in COVID-19 detection.
Figure 6 shows the box plots for Cvd.Acc, Cvd.Sen, Cvd.Spe, F1-Score, and AUC values of the reported AI methods in the three-class classification scenario. Box plots represent the distribution characteristics of performance measures based on the minimum, first quartile, median, third quartile, and maximum. Figure 6. Comparison of Cvd.Acc, Cvd.Sen, Cvd.Spe, F1-Score, and AUC of AI techniques to detect COVID-19 using box plots.
It is noted from Figure 6 that AI techniques using X-ray imagery had acceptable performance when compared to other medical images. For the three-class scenario, the methods achieved a Cvd.Acc (avg.) of 94.78%, 94.55%, and 94.99% using X-ray, CT, and both X-ray and CT, respectively, considering all state-of-the-art techniques. Further, we also analyzed the systems which can categorize three or more classes. It is observed from Table 2 that ResNet50 with DWT and GLCM [114], customized CNNs [118,154,179,189], GoogLeNet [138], InceptionNet [141], AlexNet [160], a combination of DenseNet103 and ResNet18 [148], an ensemble of various models such as InceptionResNetV2, ResNet152V2, VGG16, and DenseNet201 [153], and a grouping of MobileNet and InceptionV3 [161] were effectively used for four-class classification using X-ray images. Further, the authors also used CNN models for five-class classification using X-ray images [129,168,177]. From Table 2, it is also noted that only RF [114], SVM [179], and an ensemble of classifiers [194] have achieved comparable results for four-class categorization. Herein, the RF classifier shows its suitability for multiclass categorization by achieving a Cvd.Acc of 98.48%. From Table 3, it is observed that a grouping of ResNet152V2, DenseNet201, and VGG16 [212], a deep learning model [216], and PSSPNN [232] were used by the authors to categorize four-class CT images. The combination of various DNN models achieved a Cvd.Acc of 98.83% [212]. From Table 4 it is noted that minimal work has been reported using lung US imagery. In [258], an autoencoder and modified DenseNet201 are used for four-class classification, and achieved a result better by over 17% compared to traditional DenseNet. In [260,264], the systems were tested with X-ray and CT modalities, and achieved better classification for four classes. The usage of VGG19 [260] and VGG16 [264] has shown significance in four-class classification, as noted in Table 5.
In [265], a combination of DenseNet103 with Haralick textural features and the ResNet101 model also showed promising performance. It is furthermore observed that, across all modalities, only the VGG19 model is used for three-class categorization [273]. It achieved a better result for US images when compared to X-ray and CT.

Discussion
Investigators have developed many models to detect COVID-19 during the past two years and have shown that there is a role for AI in detecting COVID-19 [19,[21][22][23][24][25][26][27][28][29][277][278][279][280][281]. The 184 technical papers reviewed in this study provide up-to-date knowledge on the usage of AI techniques in detecting COVID-19. The developed models were categorized based on DNN, HCFL, and hybrid methodologies. The number of articles based on the three methodologies is highlighted in Figure 7. In short, it is very difficult to make a comprehensive comparison of methodologies in the present situation because the methods were evaluated using various datasets of different sizes. Hence, only limited general conclusions can be drawn about the algorithms. Few investigators performed k-fold cross validation, and in most cases the hold-out method was used. Therefore, it is difficult to observe consistency in the developed models.
It is observed from Figure 7 that 70% of the papers reported the use of a DNN-based approach, which included pre-trained networks and customized CNNs. Very few papers were developed to quantify the severity of COVID-19 [282][283][284][285][286]. It is also noted that the computational cost of various deep learning approaches is high [287,288]. From Figure 5, 40%, 78.26%, and 50% of the papers using X-ray, CT, and all modalities, respectively, reported only two-class classification. However, it is difficult to show its significance level in real time to categorize multiple classes with similar symptoms. It is also observed from Table 6 that, for four-class classification, the Cvd.Sen and Cvd.Spe of the methods increased 4.5% and 1.66%, respectively, using CT images, when compared to X-ray images. In most of the cases, CNNs were able to successfully extract significant information from lung tissue with pneumonia (i.e., BP and VP). Pre-trained networks such as ResNet, DenseNet, and VGG were successfully used in all of the modalities for greater than three-class categorization. However, the comparison of the pre-trained networks for binary classification may not be as useful, since it may fail to distinguish diseases which have symptoms similar to COVID-19.
Although several models have been developed to detect COVID-19, many factors are involved in the analysis of COVID-19 imagery, which are listed as follows: Implementation of multiclass categorization models: Many of the studies implemented two-class categorization; however, these are restricted to learning only the features of normal and COVID-19 images. For diseases with symptoms similar to COVID-19, algorithms are needed that can discriminate among various classes, such as normal, COVID-19, pneumonia, BP, VP, tuberculosis, and lung opacity. Hence, there is a need for models which can understand the inherent characteristics of various diseases and predict the severity level. Investigators should therefore concentrate on the generalization of the developed models by considering all image modalities.
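A multiclass model of the kind called for above typically ends in a softmax layer that converts per-class scores into probabilities, from which the most probable disease class is picked. A minimal sketch (the class names and logit values are illustrative only):

```python
import math
from typing import Dict

def softmax(scores: Dict[str, float]) -> Dict[str, float]:
    """Convert raw class scores (logits) into probabilities summing to 1."""
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {c: math.exp(s - m) for c, s in scores.items()}
    total = sum(exps.values())
    return {c: e / total for c, e in exps.items()}

def predict(scores: Dict[str, float]) -> str:
    """Pick the most probable class, e.g. among normal/COVID-19/BP/VP."""
    probs = softmax(scores)
    return max(probs, key=probs.get)

# Illustrative logits for one chest image:
logits = {"normal": 0.2, "COVID-19": 2.1, "BP": 1.0, "VP": 0.5}
print(predict(logits))  # "COVID-19"
```

A binary model collapses all non-COVID-19 classes into one score, which is exactly why it cannot separate BP or VP from COVID-19 when their image features overlap.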
Implementation aspects: State-of-the-art techniques have trained models using a transfer learning approach. Although the results are promising, the underlying architectures were developed to handle real-world color images. Hence, there is a need for DNNs trained from scratch on real medical images. In addition, the selection of appropriate hyperparameters to obtain improved accuracy will play a significant role in training future networks. The discriminative power of AI techniques can be improved by training the system with multiple views of medical images, which, however, requires extra time. Hence, compact feature representations of COVID-19 and similar diseases are needed to handle huge datasets.
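The hyperparameter selection mentioned above is most simply done by a grid search over candidate values, keeping whichever configuration scores best on a validation set. A minimal sketch, with a stand-in validation function in place of an actual training run (the grid values and the mock score are assumptions for illustration):

```python
from itertools import product
from typing import Callable, Dict, List, Tuple

def grid_search(grid: Dict[str, list],
                validate: Callable[[dict], float]) -> Tuple[dict, float]:
    """Evaluate every combination of hyperparameter values and
    return the configuration with the best validation score."""
    keys = list(grid)
    best_cfg, best_score = None, float("-inf")
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = validate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Stand-in for a real train/validate cycle; peaks at lr=0.01, batch=32.
def mock_validate(cfg: dict) -> float:
    return 1.0 - abs(cfg["lr"] - 0.01) - abs(cfg["batch"] - 32) / 1000

grid = {"lr": [0.1, 0.01, 0.001], "batch": [16, 32, 64]}
best, score = grid_search(grid, mock_validate)
print(best)  # {'lr': 0.01, 'batch': 32}
```

In practice each call to `validate` is a full training run, which is why the computational cost of deep learning approaches noted earlier multiplies quickly with grid size.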
CADTs to analyze the prognosis of COVID-19: Researchers should exploit hybrid methodologies to help medical doctors understand treatment outcomes for COVID-19. It is also important to develop models that assess the health condition of post-COVID-19 patients for better health management.

Future Trends
Since the onset of the COVID-19 pandemic, home isolation and quarantine have been implemented by governments across the world to control the spread of the disease [289,290]. In addition, risk factors such as fever, weakness, heart disease, and dry cough are among the most critical determinants of patient mortality [291]. A person who has tested positive for COVID-19, or who has been in close contact with a confirmed case, has to undergo a period of quarantine. In cases where home quarantine is required, especially in rural areas of developing countries, the hospital may require frequent health updates from the patient. This can be done via smartphone, where the patient monitors his/her own temperature and/or SpO2 level and reports the results to the medical doctor. In this way, the doctor is able to monitor patient health remotely and provide suitable prescriptions or medications when required. There is also a chance that the result obtained from an antigen rapid self-test kit may be negative despite the patient showing symptoms of COVID-19. In addition, there may be other concerns, such as people with disabilities and elderly people who depend on caregivers. Considering all of these issues, the best solution would be to monitor the patient remotely without the need for frequent hospital visits.
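The self-reported temperature and SpO2 readings described above could be screened automatically before they reach the doctor, so that only concerning reports interrupt the clinic. A minimal sketch of such a triage rule; the cut-off values below (38 °C fever, 94% oxygen saturation) are assumptions chosen for illustration, not clinical guidance:

```python
def triage(temp_c: float, spo2_pct: float) -> str:
    """Classify one self-reported reading for doctor review.
    Thresholds are illustrative, not clinical recommendations."""
    if spo2_pct < 94.0:
        return "urgent"    # low oxygen saturation needs prompt attention
    if temp_c >= 38.0:
        return "review"    # fever: queue for the doctor's next check-in
    return "routine"       # nothing flagged in this report

print(triage(37.0, 97.0))  # routine
print(triage(38.5, 96.0))  # review
print(triage(37.2, 91.0))  # urgent
```

A real deployment would let the attending doctor set these thresholds per patient, since baseline SpO2 varies with age and comorbidities.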
Recent advancements in the Internet of Things (IoT) have paved the way for improved healthcare support services [292]. In the future, a cloud-based wireless healthcare system can be used to control the observation of COVID-19 epidemiologically, as shown in Figure 8. X-ray images of the patient's chest can be taken at selected rural hospitals. X-ray imaging is a fast, inexpensive, and minimally invasive procedure, and X-ray units are available in most rural hospitals. Before collecting the data, the institute's ethical committee approval should be granted, and the imaging data should be collected after obtaining written consent from the patients. The collected data are stored on a secured cloud-based server with a unique identification number for each patient. X-ray images are then analyzed using a cloud-based system, and observations are sent to the medical doctors. On close examination of the imagery, the doctor provides suitable advice to the patient along with prescriptions and treatment instructions. Hence, medical doctors and their patients can interact remotely for any further treatment, even in rural communities.
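The unique identification number mentioned above can be generated at upload time, so the stored record never needs to carry the patient's name. A minimal sketch of such a pseudonymized record; the field names and structure are assumptions for illustration, not a specification of the proposed system:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class ImagingRecord:
    """Pseudonymized chest X-ray record as it might be stored
    on the cloud server; no directly identifying fields."""
    hospital: str
    modality: str = "X-ray"
    patient_id: str = field(default_factory=lambda: uuid.uuid4().hex)

rec_a = ImagingRecord(hospital="rural-clinic-07")
rec_b = ImagingRecord(hospital="rural-clinic-07")
print(rec_a.patient_id != rec_b.patient_id)  # True: each record gets a fresh ID
```

The mapping from `patient_id` back to the person would be kept only at the originating hospital, which keeps the cloud-side analysis consistent with the consent and ethics requirements noted above.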

The limitations of this review are as follows:
1. This review considered only manuscripts written in English.
2. Many databases were explored using different search queries; thus, a few relevant works may have been missed in the search. The review was based on technical papers on COVID-19 detection rather than on clinical studies.
3. The present work provides a systematic review of AI techniques, their analysis, and their advancement. However, the transformation before and after COVID-19 is not given great importance in this study.
The scope of this review was the comprehension of AI techniques using different imaging modalities. It is observed that the CT scan, a fast and feasible method, has proven to be a more sensitive tool in the diagnosis of COVID-19 than the RT-PCR test [293]. However, the technique involves a high dose of radiation and is not available in rural healthcare sectors in developing countries [294,295]. In contrast, the chest X-ray is a universally available technique with 30-70 times lower radiation exposure, and the test is performed during the initial investigational process for COVID-19 [296]. Lung US is an alternative modality that produces results similar to those of chest CT and is considered superior to the chest X-ray for diagnosing lung pathology in COVID-19 infection. Nonetheless, this modality is not useful when the pleura is spared from the pneumonic pathology during the early course of the disease [297]. Recent developments in the diagnosis of COVID-19 using signals such as respiratory sounds, speech signals, and coughing sounds have also attracted many researchers [298,299]. In the future, these signals could be combined with imaging modalities to enhance system performance using various deep learning approaches.

Conclusions
AI techniques do not substitute for medical doctors and expert radiologists; however, they can efficiently support the automated analysis of medical imagery. The development of CAD tools to detect COVID-19 has grown significantly in recent years, contributing to the body of clinical and medical research. The early detection of COVID-19 using AI techniques would help prevent the progression of the pandemic by enabling rapid decision-making. This study aimed to observe and analyze the growth and improvement of AI techniques for the detection of COVID-19. In this review, 184 papers were selected and summarized. The results showed that DNN, HCFL, and hybrid approaches all have a high potential to predict COVID-19 cases. The classification, segmentation, and quantification of the severity level of COVID-19 on heterogeneous datasets can be improved if medical experts play a significant role in building the framework for AI techniques, providing significant knowledge of image features and real-world requirements.

Conflicts of Interest:
The authors declare no conflict of interest.