A Survey on AI Techniques for Thoracic Diseases Diagnosis Using Medical Images
Abstract
1. Introduction
- Supervised learning: data scientists supply algorithms with labeled training data and define the variables they want the algorithm to assess for correlations. Both the input and the output of the algorithm are specified. Some of the most common algorithms in supervised learning include Support Vector Machines (SVM), Decision Trees, and Random Forest;
- Unsupervised learning: involves algorithms that train on unlabeled data. The algorithm scans through datasets looking for any meaningful connection. Neither the data the algorithms train on nor the predictions or recommendations they output are predetermined;
- Semi-supervised learning: occurs when part of the given input data has been labeled. Unsupervised and semi-supervised learning can be appealing alternatives, since relying on domain expertise to label data appropriately for supervised learning can be time-consuming and costly;
- Reinforcement learning: data scientists typically use reinforcement learning to teach a machine to complete a multi-step process for which there are clearly defined rules. Data scientists program an algorithm to complete a task and give it positive or negative cues as it works out how to complete a task. However, for the most part, the algorithm decides on its own what steps to take along the way.
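The paradigms above differ chiefly in whether labels guide training. As a purely illustrative sketch (toy 1-D data and hand-rolled algorithms, not drawn from any surveyed work), the contrast between supervised and unsupervised learning can be shown as follows:

```python
# Supervised vs. unsupervised learning on toy 1-D data (illustrative values).

def nearest_centroid_fit(xs, ys):
    """Supervised: labels are given, so we learn one centroid per class."""
    centroids = {}
    for label in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == label]
        centroids[label] = sum(pts) / len(pts)
    return centroids

def nearest_centroid_predict(centroids, x):
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def kmeans_1d(xs, k=2, iters=10):
    """Unsupervised: no labels; the algorithm looks for structure itself."""
    cents = [min(xs), max(xs)]  # simple initialization, valid for k=2 only
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            clusters[min(range(k), key=lambda i: abs(cents[i] - x))].append(x)
        cents = [sum(c) / len(c) if c else cents[i] for i, c in enumerate(clusters)]
    return cents

xs = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
ys = ["low", "low", "low", "high", "high", "high"]
model = nearest_centroid_fit(xs, ys)
print(nearest_centroid_predict(model, 4.9))        # high
print(sorted(round(c, 1) for c in kmeans_1d(xs)))  # [1.0, 5.0]
```

Both algorithms end up with the same two cluster centers here, but only the supervised model can attach the human-meaningful labels "low" and "high" to them.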
- Cost (analyzing data will be very costly both in terms of energy and hardware use);
- These technologies are still a group of very rapidly developing technologies, and therefore they are still under development. Thus, we need experts in this field to deal with it;
- These technologies are not widely available in the healthcare sector;
- Security needs to be integral in the AI process.
- It provides a comprehensive overview of the use of AI in detecting thoracic diseases, including COVID-19;
- It presents the different types of AI models used to detect thoracic diseases and the databases that include those diseases, as well as the progress of these works and the direction researchers have been moving in this domain in recent years;
- It shows that CNNs have penetrated the field of medical image understanding with high accuracy;
- It collects many different thoracic disease databases with descriptions;
- It also presents the issues of thoracic disease detection using deep learning found in the literature studies.
2. Methodology
3. The Taxonomy of State-of-the-Art Work on Thoracic Diseases Detection Using DL/ML
3.1. Imaging Thoracic Exams
- Chest X-ray (CXR): can be used to check for diseases such as pneumonia [32] and lung infections that cause fluid buildup [33]. It can also be used to detect cancer or pulmonary fibrosis, which is a scar tissue buildup in the lungs. CXR scans are commonly used in clinical practice since they are inexpensive, simple to perform, give a quick scan of the patient as two-dimensional (2D) images, and can be widely used for diagnosis and treatment of lung and cardiovascular diseases [34,35]. Although X-rays are frequently used, they have drawbacks: exposure to ionizing radiation is harmful to the human body, and they provide relatively little information compared with other imaging methods;
- Computerized Tomography (CT): is a more advanced imaging test that can detect disorders such as cancer that an X-ray could miss [36,37,38,39]. A CT scan is a series of X-rays taken from various angles that are patched together to create a complete image. While CT scans are more reliable in diagnosing COVID-19, they are less accurate in diagnosing non-viral pneumonia-like consolidation [40]. CT provides very accurate spatial information and is quick, but its disadvantages are a high risk of radiation exposure and the need for expensive equipment, so it is not always accessible to everyone;
- Histopathology: often known as histology, is the microscopic examination of organic tissues in order to observe the appearance of diseased cells [41]. A histopathology report describes the tissue that was sent for testing as well as the characteristics of the tumor under the microscope [42]. A biopsy report or a pathology report are both terms used to describe a histopathology report. It can identify features of what cancer looks like under the microscope, or detect cardiomegaly disease [43]. Histological examination is low cost and allows evaluation of infection distribution in various tissues. However, it needs 2–7 days of preparation time, might not detect low-level infection, and depends on the expertise of pathologists;
- Sputum Smear Microscopy: refers to the microscopic investigation of sputum [44]. This has been proven to be one of the most effective ways of detecting tuberculosis infection in patients so that treatment can begin [45]. Sometimes, a chest X-ray and a sputum sample are both needed to find out whether a person has tuberculosis [46]. In poor and middle-income countries, sputum smear microscopy has been the major approach for diagnosing pulmonary tuberculosis [47]. Sputum smear microscopy has a long track record, is inexpensive, and is used for the follow-up of patients on treatment. However, it is cumbersome for laboratory staff and patients and needs two samples;
- Magnetic Resonance Imaging (MRI): is a type of scan that uses powerful magnetic fields and radio waves to produce detailed images of the inside of the body. An MRI scanner is a large tube containing powerful magnets; during the scan, the patient lies inside the tube. MRI scans can be used to investigate practically any region of the body, including the brain, breast, and heart [48]. MRI has advantages as a 3D technique and is safer (no ionizing radiation) with excellent soft-tissue contrast. However, it has long total scan times (30–75 min), is not as readily accessible, and can be claustrophobic (enclosed space).
3.2. Dataset Description
Name of Dataset/Ref. & Download Link | Dataset Classes | Image Type | Dataset Description |
---|---|---|---|
ChestX-ray8 [49,70] | 8 thoracic diseases and a normal case. Disease labels are Atelectasis, Cardiomegaly, Effusion, Infiltration, Mass, Nodule, Pneumonia, and Pneumothorax. | X-ray | 108,948 frontal images in PNG format with a resolution of 1024 × 1024, from 30,805 patients. |
ChestX-ray14 [49,70] | 14 thoracic diseases and a normal case. Disease labels are Edema, Cardiomegaly, Effusion, Infiltration, Mass, Nodule, Pneumonia, Pneumothorax, Atelectasis, Hernia, Pleural thickening, Emphysema, Fibrosis, and Consolidation. | X-ray | 112,120 total images in PNG format from 32,717 patients. Image resolution is 1024 × 1024. |
ImageCLEF 2019 [39,71] | Tuberculosis | CT | 335 images in PNG format for 218 patients, with a set of clinically relevant metadata. Image size 512 × 512 pixels. |
ImageCLEF 2020 [62,72] | Tuberculosis | CT | 403 images in PNG format 512 × 512 pixels. |
JSRT dataset [57,73] | Normal and Lung Nodules | CT and X-ray | 93 normal and 154 nodule images in PNG format, with metadata. Image size 2048 × 2048 pixels. |
Montgomery dataset [63,74] | Tuberculosis and Normal | X-ray | 138 frontal images in PNG format with metadata: 58 TB images and 80 normal images. Image size is either 4020 × 4892 or 4892 × 4020 pixels. |
Autofocus database [65,75] | Tuberculosis | Sputum Smear Microscopy | 1200 images with resolution of 2816 × 2112 pixels. |
Andrew’s Kaggle Database [76] | COVID-19 | CT and X-ray | 16 CT images and 79 X-ray images in JPEG format with different image sizes. |
Chowdhury’s Kaggle dataset [50,77] | COVID-19, Pneumonia, and Normal | X-ray | 1341 Normal, 219 COVID-19, and 1345 Pneumonia in PNG format images. |
Optical Coherence Tomography (OCT) and Chest X-ray Images [58,78] | Normal and Pneumonia | X-ray and CT | 5856 images: 1583 normal and 4273 pneumonia images in JPEG format with different image sizes. Bacterial Pneumonia, Viral Pneumonia, and COVID-19 are all represented in the Pneumonia class. |
Shenzhen dataset [63,79] | Tuberculosis and Normal | X-ray | 662 frontal images; 326 Normal and 336 TB. Images are in PNG format with varying sizes of approximately 3000 × 3000 pixels. |
CheXpert [53,80] | 18 different disease labels: Atelectasis, Consolidation, Infiltration, Pneumothorax, Edema, Emphysema, Fibrosis, Effusion, Pneumonia, Pleural Thickening, Cardiomegaly, Nodule, Mass, Hernia, Lung Lesion, Fracture, Lung Opacity, and Enlarged Cardiomediastinum | X-ray | 224,316 images in PNG and JPG format from 65,240 patients, with both frontal and lateral views and different image sizes. |
RSNA Dataset [59] | Pneumonia and Normal | X-ray | 5863 images in JPEG format with different image sizes. |
PadChest [52,81] | 16 different disease labels: Pulmonary Fibrosis, COPD signs, Pulmonary Hypertension, Pneumonia, Heart Insufficiency, Pulmonary Edema, Emphysema, Tuberculosis, Tuberculosis Sequelae, Lung Metastasis, Post Radiotherapy Changes, Atypical Pneumonia, Respiratory Distress, Asbestosis Signs, Lymphangitis Carcinomatosa, and Lepidic Adenocarcinoma | X-ray | 160,868 images in PNG format with different image sizes, from 67,625 patients, with 206,222 reports. |
NCI Genomic Data Commons [61,82] | Lung Cancer | Histopathology | More than 575,000 images with size 512 × 512. |
Covid Chest X-ray database [83] | COVID-19 | X-ray | 231 COVID-19 images in JPEG format with different image sizes; also contains metadata. |
RIH-CXR [60] | Normal and Abnormal | X-ray | 17,202 frontal images; 9030 Normal and 8172 abnormal images from 14,471 patients. It also contains metadata. |
Sajid’s Kaggle database [84] | Normal and COVID-19 | X-ray | 28 Normal and 70 COVID-19 images in JPEG, JPG, and PNG format with different image sizes. |
Covid-19 Radiography Database [55,77] | Normal, COVID-19, Lung Opacity, and Viral Pneumonia | X-ray | 10,200 Normal, 3616 COVID-19, 6012 Lung Opacity, and 1345 Viral Pneumonia. 299 × 299 pixel images in PNG format. The dataset contains metadata. |
ChestX-ray images (Pneumonia) [58,85] | Normal and Pneumonia | X-ray | 5232 chest X-ray images from children: 3883 pneumonia (2538 bacterial and 1345 viral) and 1349 normal images used to train a model, which was then tested with 234 normal and 390 pneumonia images from 624 patients. The images are in JPEG format with different sizes. |
COVID-CT database [64,86] | Normal and COVID-19 | CT | 15,589 normal images and 48,260 COVID-19 images in DICOM format with 512 × 512 pixels. |
COVID-19 Image Data Collection [51,87] | 4 classes: COVID-19, Viral Pneumonia, Bacterial Pneumonia, and Normal | X-ray | It contains 306 images: 79 normal, 69 COVID-19, 79 Bacterial Pneumonia, and 79 Viral Pneumonia images in JPG format with different sizes. It also contains metadata. |
LIDC-IDRI [69,88] | Lung Cancer | CT | It contains 1018 images from 1010 patients. It also contains metadata. |
LDOCTCXR [66,78] | Normal and Pneumonia | X-ray | 3883 Pneumonia and 1349 Normal images. |
COVIDx Dataset [54,89] | Pneumonia, Normal, and COVID-19 | X-ray | 5559 Pneumonia, 8066 Normal, and 573 COVID-19 images |
CPTAC-LUAD Dataset [68,90] | Lung Cancer | MRI, CT, and X-ray | 43,420 images in DICOM format. |
Sunnybrook Cardiac MRI [67,91] | Heart Disease | MRI | The SCD contains 45 MRI images from patients in the following classes: healthy, hypertrophy, heart failure with infarction, and heart failure without infarction. The image resolution is 255 × 255. |
MIMIC-CXR Dataset [56,92] | Chest radiograph | X-ray | 377,110 chest radiographs with 227,835 radiology reports in DICOM format. The size of the chest radiographs varies and is around 3000 × 3000 pixels. |
3.3. Image Pre-Processing
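Two pre-processing steps recur throughout the surveyed works: resizing images to a fixed network input size and intensity normalization. A minimal sketch of both, using a small synthetic array in place of a real chest X-ray (real pipelines would use bilinear or bicubic interpolation rather than nearest-neighbour):

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize: pick source rows/columns for each output pixel."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def normalize(img):
    """Min-max intensity normalization to the [0, 1] range."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min())

cxr = np.arange(64, dtype=np.uint8).reshape(8, 8)  # synthetic stand-in image
small = resize_nearest(cxr, 4, 4)
scaled = normalize(small)
print(small.shape, float(scaled.min()), float(scaled.max()))  # (4, 4) 0.0 1.0
```

Histogram-based steps such as CLAHE, mentioned in several of the tables below, follow the same pattern but redistribute intensities locally rather than with one global min-max rescale.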
3.4. Deep Learning Models
3.4.1. Convolutional Neural Networks (CNNs)
- Convolutional layer has a set of filters (or kernels). A kernel or filter is a collection of weights; in the neural network, weights connect each neuron in one layer to neurons in the next layer. The layer performs a convolution operation, a linear operation in which the set of weights is multiplied (in a dot-product manner) with the input [108]; the values of the dot products are summed to obtain each output value;
- Pooling layer is applied to the feature maps produced by a convolutional layer. It downsamples feature maps by summarizing the presence of features in patches of the feature map, which reduces the number of parameters and calculations in the network and helps prevent overfitting while still recognizing complex objects in the image. Average pooling and max pooling are two common pooling algorithms, summarizing the average and maximum presence of a feature, respectively;
- Fully connected layer connects all of the neurons from the previous layer and assigns each connection a weight. Each output node in the output layer represents a class’s score. Multiple convolutional-pooling layers are merged to generate a deep architecture of nonlinear transformations, which helps to create a hierarchical representation of an image, facilitating the learning of complex relationships.
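The convolution and pooling operations described above can be sketched in a few lines of numpy (values are arbitrary illustrations; real CNN layers add channels, strides, padding, and learned kernels):

```python
import numpy as np

def conv2d_valid(x, k):
    """Valid 2-D convolution: dot product of the kernel with each input patch."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * k)  # weights * input, summed
    return out

def max_pool2x2(x):
    """2x2 max pooling: keep the maximum of each non-overlapping 2x2 patch."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
k = np.array([[1.0, 0.0], [0.0, -1.0]])  # simple diagonal-difference kernel
feat = conv2d_valid(x, k)    # 3x3 feature map
pooled = max_pool2x2(x)      # 2x2 summary of the 4x4 input
print(feat.shape, pooled.tolist())  # (3, 3) [[5.0, 7.0], [13.0, 15.0]]
```

Note how pooling halves each spatial dimension: this is the parameter and computation reduction the pooling-layer bullet refers to.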
3.4.2. Recurrent Neural Networks (RNNs)
3.4.3. Deep Belief Networks (DBNs)
3.4.4. Multilayer Perceptron (MLP)
3.5. Ways to Train Deep Learning Models
- Learning from scratch requires collecting a large labeled dataset and designing a network architecture to learn the features, which may then be used as input to a model (i.e., a feature extractor). Features may be extracted from images automatically, as in a CNN model, or manually using hand-crafted methods such as Histogram of Oriented Gradients (HOG), Intensity Histograms (IH), Scale Invariant Feature Transform (SIFT), Local Binary Patterns (LBP), and Edge Histogram Descriptor (EHD) [117]. This strategy is useful for applications with a large number of output classes, but it needs more time to train a model [118];
- Transfer learning is the process of transferring information from one model to the next, allowing for more accurate model creation with less training data as shown in Figure 6. Instead of starting the learning process from scratch, transfer learning begins with patterns learned while solving a previous problem, allowing for faster progress and improved performance while tackling the second problem [119]. Many studies use transfer learning to enhance their model performance, such as the ones in [94,101,120,121,122];
- Fine-tuning is a common technique for transfer learning: minor changes are made to obtain the desired result or performance, using the weights of a pre-trained neural network model as the initialization for a new model trained on data from the same domain. Except for the output layer, the target model duplicates all model designs and parameters from the source model and fine-tunes them on the target dataset; the output layer, on the other hand, must be trained from scratch. Fine-tuning involves using the weights of a previous deep learning model to initialize another, similar deep learning process, as in [32,123,124]. Because the model already carries crucial knowledge from the previous algorithm, this procedure dramatically reduces the time required to develop and train a new model. When the amount of data available for the new model is limited, fine-tuning can be used, but only if the datasets of the current model and the new model are similar [125].
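The fine-tuning recipe above (copy all layers from the source model except the output head, freeze the copies, and train only the new head) can be sketched with a toy fully connected model. Layer names, shapes, and data are invented for illustration, and the "pre-trained" weights are just random arrays:

```python
import numpy as np

rng = np.random.default_rng(0)
source_model = {"layer1": rng.normal(size=(4, 4)),
                "layer2": rng.normal(size=(4, 4)),
                "output": rng.normal(size=(4, 2))}  # head for the *source* task

def fine_tune_init(source, n_target_classes):
    """Copy every layer except the output head; the head starts from scratch."""
    target = {name: w.copy() for name, w in source.items() if name != "output"}
    target["output"] = np.zeros((4, n_target_classes))
    return target

def features(model, x):
    """The transferred layers act as a frozen feature extractor."""
    return np.tanh(np.tanh(x @ model["layer1"]) @ model["layer2"])

X = rng.normal(size=(20, 4))
y = (X[:, 0] > 0).astype(int)          # toy binary target task
model = fine_tune_init(source_model, 2)
for _ in range(200):                   # gradient descent on the output head ONLY
    h = features(model, X)
    p = np.exp(h @ model["output"])
    p /= p.sum(axis=1, keepdims=True)  # softmax probabilities
    model["output"] -= 0.5 * h.T @ (p - np.eye(2)[y]) / len(X)

acc = ((features(model, X) @ model["output"]).argmax(axis=1) == y).mean()
print(round(float(acc), 2))
```

The frozen layers never change during training, which is exactly why fine-tuning is cheap: only the small output head receives gradient updates.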
3.6. Ensemble Learning
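Ensemble learning combines the predictions of several models; a recurring scheme in the surveyed works is majority voting. A minimal sketch, with three stand-in threshold "classifiers" rather than trained networks:

```python
from collections import Counter

# Three hypothetical classifiers with different decision thresholds,
# standing in for independently trained models.
def clf_a(x): return "pneumonia" if x > 0.4 else "normal"
def clf_b(x): return "pneumonia" if x > 0.5 else "normal"
def clf_c(x): return "pneumonia" if x > 0.6 else "normal"

def majority_vote(classifiers, x):
    """Each model votes; the most common prediction wins."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

print(majority_vote([clf_a, clf_b, clf_c], 0.55))  # pneumonia (2 of 3 votes)
print(majority_vote([clf_a, clf_b, clf_c], 0.45))  # normal (2 of 3 votes)
```

Boosting and stacking, which also appear in the tables below, replace this simple vote with sequentially weighted learners or with a meta-model trained on the base models' outputs.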
3.7. Pre-Trained Models
- Visual Geometry Group (VGG) is the most familiar model for image classification. It is a standard CNN with multiple layers [134]. The VGG models are VGG-16 and VGG-19, with 16 and 19 weight layers, respectively, trained on ImageNet (a database with over 14 million images divided into 1000 categories). VGG-16 takes a long time to train compared with other models, which can be a disadvantage with large datasets. The main feature of this architecture is that it focuses on small 3 × 3 kernels rather than a large number of hyper-parameters (a kernel is a matrix of weights that is multiplied with the input to transform it in a preferred manner) in the convolutional layers, with max-pooling layers of size 2 × 2. Finally, it has fully connected (FC) layers for output, followed by a Softmax classifier. The VGG weight configuration is publicly available and has been utilized as a baseline feature extractor in a variety of other applications and challenges. VGG-19 differs from VGG-16 in that each of the last three convolutional blocks has an extra layer [135]. The work in [136] used VGG-16 for the classification of 14 different thoracic diseases and the work in [137] used the same model for COVID-19 detection; the work in [138] used VGG-19 for the detection of tuberculosis and the work in [139] used VGG-19 for the detection of pneumonia;
- Inception-V3: Szegedy et al. introduced the Inception type of CNN in 2014 [140]. Inception-V3 is an image recognition model that has been shown to attain greater than 78.1% accuracy on the ImageNet dataset [141]. Inception models differ from typical CNNs in that they are made up of inception blocks, which concatenate the results of many filters applied to the same input tensor. The model itself is made up of symmetric and asymmetric building blocks, including convolutions, average pooling, max pooling, concatenations, dropouts, and fully connected layers. Batch normalization is used extensively throughout the model and applied to activation inputs, and loss is computed using Softmax. Inception-V3, first released in 2015, is a newer version of the original Inception model. It has three different filter sizes (1 × 1, 3 × 3, and 5 × 5) in a block of parallel convolutional layers, alongside 3 × 3 max pooling; the outputs are concatenated and transmitted to the next unit. It accepts an input image size of 299 × 299 pixels [142]. In [119], the authors used this model for the detection of lung nodule disease;
- ResNet50 is a type of deep neural network, a subclass of CNNs, used to classify images. ResNet50 is a variant of the ResNet model with 48 convolutional layers along with one max-pool and one average-pool layer [143]. Its major innovation is the use of residual layers to create a new in-network architecture. ResNet50 comprises five convolution blocks, each having three layers of convolution. It is a 50-layer residual network that accepts images with a resolution of 224 × 224 pixels [144]. The work in [120,145] used this model in the classification of 14 different thoracic diseases;
- Inception-ResNet-V2 is an ImageNet-trained CNN. The network is 164 layers deep and can classify images into 1000 object categories [141]. It is a hybrid approach that combines the structure of inception with the residual connection. It accepts 299 × 299 pixel images and generates a list of estimated class probabilities. The conversion of inception modules into residual inception blocks, the addition of more inception modules, and the creation of a new type of inception module (Inception-A) following the Stem module are among the advantages of Inception-Resnet-V2 [146];
- DenseNet201 is a 201-layer CNN that receives a 224 × 224 pixel input image. DenseNet201 is a ResNet upgrade that adds dense layer connections, connecting layers in a feed-forward approach. Whereas a standard convolutional network with L layers has L connections, DenseNet has direct connections between layers: each layer obtains additional inputs from all preceding layers (by concatenation) and passes its own feature maps to all subsequent layers, so each layer receives the “collective knowledge” of all preceding layers. Since each layer receives all preceding feature maps as input, it has more diversified features and tends to learn richer patterns [147]. By reducing the amount of computation required, encouraging feature reuse, minimizing the number of parameters, and reinforcing feature propagation, DenseNet can enhance the model’s performance [148];
- MobileNet-V2 is an improved version of MobileNet-V1, trained on the ImageNet database. It contains only 54 layers and takes a 224 × 224 pixel input image. MobileNet-V2 contains an initial fully convolutional layer with 32 filters, followed by 19 residual bottleneck layers [149]. Its key distinctive feature is that it uses depthwise separable convolutions instead of a single standard 2D convolution; that is, the convolution is factored into a per-channel (depthwise) convolution followed by a 1 × 1 (pointwise) convolution. As a result, training takes up less memory and requires fewer parameters, resulting in a small and efficient model. There are two types of blocks: a residual block with a stride of 1 and a downsizing block with a stride of 2. Each block has three layers: a 1 × 1 convolution with ReLU6, a depthwise 3 × 3 convolution with ReLU6, and another 1 × 1 convolution without nonlinearity (a linear bottleneck). MobileNet-V2 is a mobile-oriented model that can be used to solve a variety of visual recognition tasks (classification, segmentation, or detection) [150]. The work in [151] used MobileNet-V2 in the classification of 14 different thoracic diseases, and the work in [101] used this model for the detection of tuberculosis disease;
- Xception is a 71-layer CNN presented by Chollet [152]. It features depthwise separable convolutions and is a more advanced version of the Inception architecture, replacing the traditional Inception modules with depthwise separable convolutions. It outperforms VGG-16, ResNet, and Inception on conventional classification problems. It uses a 299 × 299 pixel input image [152];
- NASNet is a type of convolutional neural network discovered through neural architecture search. It has been trained on over a million images from ImageNet and learned rich feature representations for a wide variety of images. Normal and reduction cells are its basic building blocks [153]. The network accepts 331 × 331 pixel images as input [154]. The work in [135] used this model in lung cancer detection;
- U-Net is used for semantic segmentation. It is a convolutional network architecture for fast and precise segmentation of images and is widely used for biomedical image segmentation [155]. In the U-Net model, the input image goes through several stages of convolution and pooling, which reduce the height and width of the image as the depth grows after each convolution in down-sampling, followed by fully convolutional layers and several stages of up-sampling to produce the image mask [156]. The segmentation image size is 512 × 512 pixels [157,158]. In [159], the authors used this model for segmentation of thoracic fracture disease, and in [100], the authors used U-Net for segmentation of cardiomegaly disease.
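The memory and parameter savings attributed above to depthwise separable convolutions (MobileNet-V2, Xception) are easy to verify with a quick count. The numbers below are generic example values (a 3 × 3 kernel mapping 32 input channels to 64 output channels; biases ignored):

```python
def standard_conv_params(k, c_in, c_out):
    """A standard k x k convolution mixes space and channels in one step."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Factored version: spatial filtering per channel, then channel mixing."""
    depthwise = k * k * c_in   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1 x 1 convolution mixes channels
    return depthwise + pointwise

std = standard_conv_params(3, 32, 64)        # 18432 parameters
sep = depthwise_separable_params(3, 32, 64)  # 2336 parameters
print(std, sep, round(std / sep, 1))         # 18432 2336 7.9
```

For 3 × 3 kernels the saving approaches a factor of 9 as the channel count grows, which is why these blocks suit mobile-oriented models.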
3.8. Evaluation Criteria
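The metrics reported throughout the tables in this section (accuracy, sensitivity, specificity, precision, and F1-score) all derive from the confusion matrix. A minimal sketch with arbitrary illustrative counts:

```python
def metrics(tp, fp, tn, fn):
    """Standard classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall / true positive rate
    specificity = tn / (tn + fp)      # true negative rate
    precision = tp / (tp + fp)        # positive predictive value (PPV)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

acc, sens, spec, prec, f1 = metrics(tp=90, fp=10, tn=85, fn=15)
print(round(acc, 3), round(sens, 3), round(spec, 3), round(prec, 3), round(f1, 3))
# 0.875 0.857 0.895 0.9 0.878
```

Sensitivity and specificity matter especially in screening settings: a model with high accuracy can still miss many diseased cases if the classes are imbalanced, which is why the surveyed works report several metrics together.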
3.9. Type of Disease
3.9.1. Lung Diseases
Lung Diseases That Affect Tissues
Ref. (Year) | Name of Disease | Input Image Type | Dataset Used | Data Preparation Type | Model Type | Ensemble Technique | Target | Results | Open Issues |
---|---|---|---|---|---|---|---|---|---|
[127] (2019) | Pneumonia | X-ray | RSNA Pneumonia Detection Challenge dataset | Data Augmentation including flipping, rotation, brightness, gamma transforms, random Gaussian noise, and blur. | RetinaNet and Mask R-CNN | Voting scheme | Localization and Classification | Recall 79.3% | A lateral chest X-ray or/and CT images should be augmented to the chest X-ray. Metadata such as age, gender, and view position can be useful in later investigations. |
[94] (2021) | Pneumonia | X-ray | Covid Chest X-ray and optical coherence tomography datasets. | Intensity normalization. Contrast Limited Adaptive Histogram Equalization (CLAHE). Data Augmentation. | CNN pre-trained on Inception-V3, VGG16, VGG19, DenseNet201, Inception-ResNet-V2, Resnet50, MobileNet-V2, and Xception | | Detection and Classification | Accuracy 96.61%, Sensitivity 94.92%, F1-Score 96.67%, Specificity 98.43%, Precision 98.49% | Create a complete system that can detect, segment, and classify pneumonia. Furthermore, performance could be improved by using larger datasets and more advanced feature extraction techniques including color, texture, and shape. |
[139] (2021) | Pneumonia | X-ray | ChestX-ray8 | Image resizing | VGG19 | Voting Classifier | Classification and Detection | Accuracy 97.94% | Using texture and shape feature extraction techniques to improve the handcrafted feature vector. Using a suitable classifier system to replace the SoftMax layer. To improve classification accuracy, modify the fully connected layer and the dropout layer. |
[166] (2020) | Edema | X-ray | MIMIC-CXR | Data Augmentation including translation and rotation | BERT model | | Classification and Prediction of the Edema severity level | Overall accuracy 89% | Suggest utilizing text to semantically explain the image model. |
[183] (2019) | Edema | X-ray | MIMIC-CXR | Data Augmentation including rotation, transformation, and cropping. | Bayesian | | Predicting pulmonary edema severity | RMS 0.66 | To improve the pulmonary edema severity prediction accuracy, researchers suggest using an alternative machine learning approach. |
[184] (2018) | Fibrosis | Histopathology | Cardiac histological images dataset | Data Augmentation including rotation, flipping, warping, and transformation. | CNN | | Segmentation and Detection | Mean DSC 0.947 | Learning data should include proportions of each class and color variations in particular structures, as well as an approximate representation of the attributes in the whole image collection. |
[172] (2019) | Fibrosis | CT | LTRC-DB, MD-ILD, INSEL-DB | | CNN | | Segmentation, Classification, and Diagnosis | Accuracy 81% and F1-score 80% | Use Histopathology or X-ray in the diagnosis. |
[168] (2020) | Consolidation | X-ray | Pediatric Chest X-ray | Data Augmentation including cropping, histogram matching transformation, and Contrast Limited Adaptive Histogram Equalization (CLAHE) | DCNN | | Detection and Perturbation visualization (Heatmap) | Accuracy 94.67% | Test the DCNN model in multi-classification. |
[185] (2021) | Consolidation and (Pneumonia, SARS-CoV-2) | X-ray | COVIDx Dataset | Data Augmentation including flipping, rotation, and scaling. | CNN pre-trained on VGG-19 | | Classification and Visualization (GradCam) | Accuracy 89.58% for binary classification and 64.58% for multi-classification | Enhance accuracy by using a large amount of data in multi-classification. |
[135] (2019) | Lung Lesion/Lung Cancer | CT | LIDC-IDRI | | DCNN pre-trained on VGG-19, VGG-16, ResNet50, DenseNet121, MobileNet, Xception, and NASNet | | Segmentation and Classification | DenseNet: Accuracy 87.88%, Sensitivity 80.93%, Specificity 92.38%, Precision 87.88%, and AUC 93.79%. Xception: Accuracy 87.03%, Sensitivity 82.73%, Specificity 89.92%, Precision 84.97%, and AUC 93.24%. | Focusing on the application of deep learning models to small datasets. Using CNNs to synthesize artificial datasets, such as generative adversarial networks. |
[186] (2020) | Lung Nodule | CT | Japanese Society of Radiological Technology database | Data Augmentation including horizontal flipping and angle rotation. | CNN | | Nodule enhancement, nodule segmentation, and nodule detection | Sensitivity 91.4% | To improve CAD performance, the ROI image can be transformed to an RGB image and combined with additional nodule enhancement images. |
[119] (2019) | Lung Nodules | CT | JSRT | Data Augmentation including rotation, flip, and shift. | DCNN pre-trained on Inception-v3 | | Classification | Sensitivity 95.41%, Specificity 80.09% | Using ensemble learning to overcome the deep learning model’s large gap between specificity and sensitivity. |
[167] (2021) | Asbestosis Sign | CT | Private dataset | | LRCN | | Classification and Visualization | Accuracy 83.3% | The LRCN model can be used to diagnose a wide range of lung diseases. |
[187] (2022) | Asbestosis | CT | Private dataset | Data Augmentation including zoom, flipping, rotation, and shift. Random sampling. | LRCN (CNN and RNN) | | Segmentation and Diagnosis | Sensitivity 96.2%, Specificity 97.5%, Accuracy 97%, AUROC 96.8%, and F1-score 96.1% | To supplement the limitations of a small dataset, more data should be obtained, and external validation should be done through a multicenter study involving additional hospitals. |
[171] (2018) | Pleural Thickening and another 13 different diseases | X-ray | ChestX-ray14 | Data Augmentation including cropping and flipping | AG-CNN | | Localization and Classification | AUC 86.8% | Look into a more accurate localization of the lesion areas. Take on the challenges of sample collection and annotation (with the help of a semi-supervised learning system). |
[145] (2021) | Pleural Thickening and another 13 different diseases | X-ray | ChestX-ray14 and CheXpert | | CNN pre-trained on ResNet50 | | Localization and Classification | AUC (Pleural Thickening) 79% on ChestX-ray14 and average AUC 83.5% | Invite a group of top radiologists to work on mask-level annotation for the NIH and CheXpert datasets. |
[98] (2021) | Lung Metastasis—Lung Cancer | CT | SPIE-AAPM Lung CT Challenge Data Set | Data Augmentation using GAN network | CNN pre-trained on AlexNet | | Classification | Accuracy 99.86% | Adjusting the parameters of each layer to obtain the best parameter combination, or implementing the optimizer in different network architectures. |
[121] (2021) | Lung Cancer | CT | LIDC-IDRI | | CNN pre-trained on GoogleNet | | Classification | Accuracy 94.53%, Specificity 99.06%, Sensitivity 65.67%, and AUC 86.84% | To increase the classification accuracy of lung lesions in CT images, more study of the GoogleNet network is required. |
Pneumonia
Fibrosis
Lesion
Pleural Thickening
Asbestosis Signs
Pulmonary Edema
Lung Metastasis
Consolidation
Lung Diseases That Affect Airways
Ref. (Year) | Name of Disease | Input Image Type | Dataset Used | Data Preparation Type | Model Type | Ensemble Technique | Target | Results | Open Issues |
---|---|---|---|---|---|---|---|---|---|
[188] (2021) | COPD | CT | KNUH and JNUH | | 3D-CNN | | Extraction, visualization, and classification | Accuracy 89.3% and Sensitivity 88.3% | Apply the 3D model using a wide range of datasets. |
[170] (2021) | COPD | CT | RFAI | Data augmentation: random rotation, random translation, random Gaussian blur, and subsampling | MV-DCNN | | Classification | Accuracy 97.7% | Apply the MV-DCNN to diagnose a variety of lung diseases. |
[175] (2019) | Emphysema | CT | Private dataset | | DCNN | | Classification and Detection | Accuracy 92.68% | Use transfer learning to achieve higher accuracy. |
[189] (2019) | Emphysema and 13 other diseases | X-ray | ChestX-ray14 | | CNN | | Classification | Overall Accuracy 89.77% | Ensemble approaches could be used to improve the model’s performance. |
[173] (2018) | Asthma | Reports only | Private dataset | | DNN | | Diagnosis | Accuracy 98% | Apply a different classifier to outperform the DNN algorithm in terms of accuracy. |
[190] (2018) | Asthma | Reports only | Private dataset | | Bayesian Logistic Regression | | Prediction of asthma | Accuracy 86.3673%, Sensitivity 87.25% | Check whether accuracy increases when more patients are included in the dataset, using the posteriors from this study as priors for the new dataset. |
[137] (2020) | COVID-19 | CT | Private dataset | Histogram equalization, feature extraction, and intensity transformation | CNN pre-trained on VGG-16 | | Classification | Precision 92%, Sensitivity 90%, Specificity 91%, F1-Score 91%, Accuracy 90% | Deep learning networks with more complex backbone architectures could be used. GANs could be developed to increase the number of suitable images for network training and hence improve the model’s performance. |
[131] (2020) | COVID-19 | CT | Private dataset | Volume features based on segmented infected lung regions, histogram distribution, radiomics features | CNN | Boosting | Segmentation and Classification | Accuracy 91.79%, Sensitivity 93.05%, Specificity 89.95%, AUC 96.35%, Precision 93.10%, and F1-score 93.07% | Plan to collect more data from patients with different diseases and apply the AFSDF approach to further COVID-19 classification tasks (e.g., COVID-19 vs. Influenza-A viral pneumonia and CAP, severe patients vs. non-severe patients). |
[122] (2021) | COVID-19, Pneumonia, Tuberculosis | X-ray | Pediatric CXRs, IEEE COVID-19 CXRs, and Shenzhen datasets | Data augmentation including rotation, shift, and adding noise | DenResCov-19 | | Classification | Precision 82.90%, AUC 95%, and F1-Score 75.75% | Increase the number of classes to address more lung illnesses; raise the number of COVID-19 patients. |
[191] (2021) | COVID-19 | X-ray | RSNA dataset | Data augmentation including zoom, flipping, rotation, translation, and shift | COVID-Net CXR-S | | Detection and Classification | Sensitivity (level 1) 92.3%, Sensitivity (level 2) 92.85%, PPV (level 1) 87.27%, PPV (level 2) 95.78%, Accuracy 92.66% | Build innovative clinical decision support technologies to aid clinicians throughout the world in dealing with the pandemic. |
[133] (2022) | COVID-19 | X-ray | COVID-19 dataset, chest X-ray dataset, COVID-19 pneumonia dataset, and a private dataset collected from MGM Medical College and Hospital | | DCNN | Stacking | Classification and Detection | Accuracy 88.98% for three-class and 98.58% for binary classification | Using more public datasets would improve the model’s accuracy. |
[192] (2022) | COVID-19 | CT | COVID-19 CT Images Segmentation | Segmentation | DRL | | Image segmentation | Precision 97.12%, Sensitivity 79.97%, and Specificity 99.48% | The mask extraction stage could be improved; more complex algorithms, approaches, and datasets appear promising for improving system performance. |
[193] (2019) | Tuberculosis | Sputum Smear Microscopy | ZNSM-iDB dataset | Data augmentation including rotation and translation | RCNN pre-trained on VGG-16 | | Localization and classification | Recall 98.3%, Precision 82.6%, F1-Score 89.7% | Plan to expand the amount of data used in the deep network. |
[101] (2020) | Tuberculosis | X-ray | NIAID TB dataset and RSNA dataset | Data augmentation including cropping | CNN pre-trained on Inception-V3, ResNet-18, DenseNet-201, ResNet-50, ResNet-101, ChexNet, SqueezeNet, VGG-19, and MobileNet-V2; UNet for segmentation | | Lung segmentation and TB classification | Without segmentation: Accuracy 96.47%, Precision 96.62%, and Recall 96.47%; with segmentation: Accuracy 98.6%, Precision 98.57%, and Recall 98.56% | Split lungs into patches that can be fed into a CNN model, perhaps improving performance even more. |
[176] (2021) | Tuberculosis | X-ray | Montgomery County (MC) CXR dataset, Shenzhen dataset, RSNA Pneumonia Detection Challenge dataset, Belarus dataset, and COVID-19 radiography database | | EfficientNet and Vision Transformer | Boosting | Classification and Detection | Accuracy 97.72%, AUC 100% | Plan to add new baselines to compare against the developed tool; plan to release a mobile app that can run on small devices such as smartphones and tablets. |
[138] (2021) | Tuberculosis | X-ray | Montgomery County (MC) dataset and Shenzhen (SZ) dataset | Histogram equalization and Contrast Limited Adaptive Histogram Equalization (CLAHE) | CNN pre-trained on VGG19 | Stacking | Segmentation and detection of TB | AUC 99.00 ± 0.28/98.00 ± 0.16 for MC/SZ; Accuracy 99.26 ± 0.40/99.22 ± 0.32 for MC/SZ | Propose scalability testing of the approach on large datasets; use big-data technologies such as distributed processing and/or MapReduce-based approaches for complex network building and feature extraction. |
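Several of the works above (e.g., [137,138]) list histogram equalization as a preprocessing step before CNN training. As a hedged illustration, a minimal NumPy sketch of global histogram equalization on an 8-bit image (not the CLAHE variant used in [138], which additionally tiles the image and clips the histogram):

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image.

    Each intensity is mapped through the normalized cumulative histogram,
    spreading the used intensities over the full [0, 255] range.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]  # count at the first non-empty bin
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]

# A synthetic low-contrast image confined to [100, 120] is stretched to [0, 255].
rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 121, size=(64, 64), dtype=np.uint8)
equalized = equalize_histogram(low_contrast)
```

Because the lookup table sends the darkest used level to 0 and the brightest to 255, the equalized image uses the full dynamic range regardless of how compressed the input histogram was.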
Tuberculosis
COVID-19
Asthma
COPD
Emphysema
Infiltration
Ref. (Year) | Name of Disease | Input Image Type | Dataset Used | Data Preparation Type | Model Type | Ensemble Technique | Target | Results | Open Issues |
---|---|---|---|---|---|---|---|---|---|
[169] (2018) | Atelectasis and 13 other diseases | X-ray | ChestX-ray14 | | ChestNet | | Classification and Visualization | Average AUC 0.7810 | Concentrate on understanding the relationships between those illness image descriptors and incorporating them into the computer-aided diagnosis procedure. |
[151] (2021) | Atelectasis and 13 other diseases | X-ray | ChestX-ray14 | Data augmentation including rotation, flipping, and transformation | MobileNet V2 | | Classification and prediction of 14 chest diseases | Average AUC 0.811 and Accuracy above 90% | A light neural network design can be used in the medical field; look into new architectures that exploit label dependencies and segmentation data. |
[174] (2019) | Pneumothorax | X-ray | ChestX-ray14 | Data augmentation including translating, scaling, rotating, horizontal flipping, windowing, and adding noise | CNN for classification, MIL for localization, FCN for segmentation | Linear combination (ensemble averaging) | Classification, localization, and segmentation | AUC (classification) 96%, AUC (localization) 93%, and AUC (segmentation) 92% | Use other techniques to combine the three approaches. |
[198] (2019) | Pneumothorax | CT | Private dataset | | CNN | | Detection and localization | Accuracy 87.3% | Use data from multiple sources to improve the model’s performance. |
[177] (2018) | Infiltration and 13 other diseases | X-ray | ChestX-ray14 | | CNN for classification; CPNN and BPNN for diagnosis of chest diseases | | Classification and Diagnosis | CNN Accuracy 92.4%, BPNN Accuracy 80.04%, CPNN Accuracy 89.57%, CNN with GIST Accuracy 92%, VGG16 Accuracy 86%, VGG19 Accuracy 92% | Propose employing several transfer learning strategies to improve model accuracy. |
[136] (2019) | Infiltration and 13 other diseases | X-ray | ChestX-ray14 | | CNN pre-trained on VGG-16 | | Classification and Visualization | Accuracy 83.671% (scratch CNN) and 97.81% (transfer learning) | Fine-tuning a model other than VGG-16 to analyze medical images can be a viable option. |
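The ensemble technique in [174] is a linear combination (ensemble averaging) of the outputs of separately trained models. A hedged NumPy sketch of the general idea, with made-up probabilities and weights; in practice, the per-model weights would be tuned on a validation set:

```python
import numpy as np

def ensemble_average(probs: list, weights: list) -> np.ndarray:
    """Linear combination of per-model class-probability arrays.

    `probs` holds one (n_samples, n_classes) array per model; the weighted
    average is renormalized so each row sums to 1 again.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize the combination weights
    stacked = np.stack(probs)             # (n_models, n_samples, n_classes)
    avg = np.tensordot(w, stacked, axes=1)
    return avg / avg.sum(axis=1, keepdims=True)

# Three hypothetical models scoring two chest X-rays on {normal, abnormal}.
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.8, 0.2], [0.3, 0.7]])
p3 = np.array([[0.7, 0.3], [0.5, 0.5]])
combined = ensemble_average([p1, p2, p3], weights=[0.5, 0.3, 0.2])
predictions = combined.argmax(axis=1)     # 0 = normal, 1 = abnormal
```

Averaging probabilities (soft voting) rather than hard labels lets a confident model outvote two uncertain ones, which is often what makes the combined detector more robust than its parts.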
Atelectasis
Pneumothorax
3.9.2. Heart Diseases
Cardiomegaly
Ref. (Year) | Name of Disease | Input Image Type | Dataset Used | Data Preparation Type | Model Type | Ensemble Technique | Target | Results | Open Issues |
---|---|---|---|---|---|---|---|---|---|
[128] (2021) | Heart disease | MRI | Automated Cardiac Diagnosis Challenge (ACDC-2017) | Handcrafted features, data augmentation, and ROI extraction | CNN for classification and UNet for segmentation | Voting technique | Classification and diagnosis of heart disease | Accuracy 92% | Plan to investigate further enhancement and improvement of the current result. |
[100] (2020) | Cardiomegaly | X-ray | ChestX-ray14 | Data augmentation | ResNet18, ResNet50, and DenseNet121 for classification and UNet for segmentation | | Segmentation and classification | AUC 0.977 for segmentation and 0.941 for classification | Investigate whether the segmentation-based approach may be used for other diagnostic tasks. |
[120] (2019) | Cardiomegaly and 13 other diseases | X-ray | ChestX-ray14 | Data augmentation | CNN pre-trained on ResNet50 | | Classification | Average AUC 0.822 | Investigate other model architectures, including new architectures for leveraging label dependencies and incorporating segmentation information. |
[201] (2018) | Cardiomegaly | X-ray | NIH-CXR and NLM-Indiana datasets | | CNN pre-trained on VGG-16, VGG-19, AlexNet, and InceptionV3 | | Detection | Accuracy 0.8986 | In addition to image-based clues, look into other factors such as lung size, rib-cage measurements, and diaphragm lengths. |
[178] (2018) | Heart Failure | Histopathology | Private dataset collected from the Cardiovascular Research Institute | Data augmentation including cropping, rotation, image mirroring, and stain color | CNN | | Classification and detection | Sensitivity 99% and Specificity 94% | The ability of CNNs to detect pre-clinical disease must be evaluated; focus on heart failure prognostic modeling, post-transplant rejection surveillance, and etiologic classification of cardiomyopathies. |
[202] (2021) | Heart Failure | Reports only | IBM Commercial and Medicare Supplement Databases | | LSTM-based sequential model architecture | Boosting | Detection of heart failure and severity classification | AUC 0.861 | Better regularization approaches, models pre-trained on other datasets, and larger datasets with more detailed clinical data are all possible options to increase the model’s performance. |
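The Data Preparation column throughout these tables repeatedly lists geometric augmentation (rotation, flipping, translation). A minimal NumPy sketch of such a pipeline, restricted to lossless transforms for illustration; arbitrary-angle rotation, as some of the surveyed works use, would additionally need interpolation (e.g., SciPy's `ndimage.rotate`):

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply one randomly chosen geometric augmentation to a 2D image."""
    choice = rng.integers(3)
    if choice == 0:                        # horizontal flip
        return img[:, ::-1]
    if choice == 1:                        # rotation by a multiple of 90 degrees
        return np.rot90(img, k=int(rng.integers(1, 4)))
    # Integer translation by (dy, dx) with zero padding at the borders.
    dy, dx = (int(v) for v in rng.integers(-5, 6, size=2))
    out = np.zeros_like(img)
    h, w = img.shape
    ys = slice(max(dy, 0), h + min(dy, 0))
    xs = slice(max(dx, 0), w + min(dx, 0))
    yd = slice(max(-dy, 0), h + min(-dy, 0))
    xd = slice(max(-dx, 0), w + min(-dx, 0))
    out[ys, xs] = img[yd, xd]
    return out

# Generate 8 augmented copies of one synthetic 32x32 "scan".
rng = np.random.default_rng(42)
image = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
batch = [augment(image, rng) for _ in range(8)]
```

The point of such augmentation, as the surveyed papers use it, is to enlarge small medical datasets with label-preserving variants so the network cannot simply memorize orientation or position.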
Heart Failure
3.9.3. Others
Ref. (Year) | Name of Disease | Input Image Type | Dataset Used | Data Preparation Type | Model Type | Ensemble Technique | Target | Results | Open Issues |
---|---|---|---|---|---|---|---|---|---|
[159] (2021) | Fracture | CT | Private | | R-CNN for classification and UNet for segmentation | Weighted average of probabilities | Segmentation and detection | Average Sensitivity 89.2% and Precision 88.4% | A 3D-CNN model can be used to further classify the observed rib fracture types; the performance of the rib segmentation and labelling algorithm must be improved. |
[203] (2020) | Fracture | CT | Private | | R-CNN pre-trained on ResNet101 | | Detection and classification | For three multicenter datasets: Precision 80.3% and Sensitivity 62.4%; for five multicenter datasets: Precision 91.1% and Sensitivity 86.3% | Identify the anatomical location using a three-dimensional deep learning and tracking approach. |
[179] (2019) | Hernia and 13 other diseases | X-ray | ChestX-ray14 | Data augmentation including flipping | CNN pre-trained on DenseNet121 | | Classification | AUC 84.3% overall and 96.37% for Hernia alone | Entropy weighting loss improved the binary classification of Hernia. |
[204] (2021) | Hernia and 13 other diseases | X-ray | ChestX-ray14 | | CNN pre-trained on DenseNet121 | Multiscale ensemble module | Classification | AUC 82.6% | Use pathologically abnormal region annotations to regularize attention learning; address the uncertainty in noisy labels. |
[180] (2020) | Mass | X-ray | ChestX-ray14 | | Quibim App Chest X-ray Classifier | | Detection | Sensitivity 76.6%, AUC 91.6%, Accuracy 83%, and Specificity 88.68% | Build the four algorithms mentioned in this paper into the diagnostic procedure to improve sensitivity and specificity. |
[205] (2017) | Mass | X-ray | JSRT dataset | | RCNN | | Detection and localization | Accuracy 53% | Compare the RCNN algorithm to other state-of-the-art mass detection algorithms. |
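The Results columns above mix accuracy, sensitivity, specificity, precision, and F1-score. For a binary screen, all of these derive from the four confusion-matrix counts; a small self-contained helper (with hypothetical counts) makes the relationships explicit:

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Common screening metrics from binary confusion-matrix counts.

    Sensitivity (recall) and specificity describe the two error rates
    separately; precision and F1 additionally depend on class balance,
    which is why papers on imbalanced medical datasets report both.
    """
    sensitivity = tp / (tp + fn)          # true-positive rate (recall)
    specificity = tn / (tn + fp)          # true-negative rate
    precision = tp / (tp + fp)            # positive predictive value (PPV)
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
    }

# Hypothetical test set: 90 diseased (80 caught), 110 healthy (100 cleared).
m = binary_metrics(tp=80, fp=10, tn=100, fn=10)
```

With these counts, accuracy is 180/200 = 0.9 while sensitivity is 80/90 ≈ 0.889, illustrating why a single headline accuracy figure can hide a weaker detection rate on the diseased class.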
Fracture
Hernia
Mass
4. Discussion
5. Critical Analysis
- Publicize datasets, so that researchers have access to more data and the classifiers developed become more accurate;
- Efforts can be focused on investigating additional features. When employing ensemble approaches, this helps address the issue of high error correlation: as more features are added, the number of contrasts increases and the model’s accuracy improves, and results are often better when merging multiple versions;
- Use ensemble learning, especially in multi-class settings, to improve detection accuracy and reduce training time;
- The majority of the models discussed in this analysis classify rather than localize or segment abnormalities, an area that can be explored further;
- Unsupervised learning approaches such as generative adversarial networks and variational autoencoders are being used by numerous researchers to investigate automated data curation.
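The ensemble point above can be made concrete: even simple hard (majority) voting over independently trained classifiers corrects errors when the base models err on different samples. A minimal pure-Python sketch with hypothetical per-model label predictions:

```python
from collections import Counter

def majority_vote(predictions: list) -> list:
    """Hard-voting ensemble: per sample, take the most common predicted label.

    `predictions` holds one label list per model, all over the same samples;
    ties resolve to the label seen first among the models.
    """
    n_samples = len(predictions[0])
    voted = []
    for i in range(n_samples):
        votes = Counter(model[i] for model in predictions)
        voted.append(votes.most_common(1)[0][0])
    return voted

# Three hypothetical classifiers over four chest X-rays; each errs once,
# but never on the same sample, so the ensemble is correct everywhere.
truth   = ["covid", "normal", "tb", "pneumonia"]
model_a = ["covid", "normal", "tb", "tb"]          # wrong on sample 4
model_b = ["covid", "tb", "tb", "pneumonia"]       # wrong on sample 2
model_c = ["normal", "normal", "tb", "pneumonia"]  # wrong on sample 1
ensemble = majority_vote([model_a, model_b, model_c])
```

This also shows why the high error correlation mentioned above matters: if the three models all failed on the same samples, voting would simply reproduce the shared mistakes.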
6. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Kumar, S.; Singh, P.; Ranjan, M. A review on deep learning based pneumonia detection systems. In Proceedings of the 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), Coimbatore, India, 25–27 March 2021; pp. 289–296. [Google Scholar] [CrossRef]
- Creek, J. Lung Disease: Medlineplus Medical Encyclopedia. Available online: https://medlineplus.gov/ency/article/000066.htm (accessed on 23 November 2022).
- Omar, S. Chest Diseases: Your Comprehensive Guide. Available online: www.webteb.com/articles/23328 (accessed on 23 November 2022).
- EBC. World Pneumonia Day. Available online: https://stoppneumonia.org/latest/world-pneumonia-day/ (accessed on 23 November 2022).
- WHO. Pneumonia. Available online: https://www.who.int/news-room/fact-sheets/detail/pneumonia (accessed on 23 November 2022).
- Team, I. Coronavirus Cases. Available online: https://www.worldometers.info/coronavirus/ (accessed on 23 November 2022).
- Rag, C. Global Future of Imaging. Available online: https://www.bir.org.uk/get-involved/world-partner-network/global-future-of-imaging.aspx (accessed on 23 November 2022).
- Çallı, E.; Sogancioglu, E.; van Ginneken, B.; van Leeuwen, K.G.; Murphy, K. Deep learning for chest X-ray analysis: A survey. Med. Image Anal. 2021, 72, 102125. [Google Scholar] [CrossRef]
- Haleem, A.; Javaid, M.; Khan, I.H. Current status and applications of Artificial Intelligence (AI) in medical field: An overview. Curr. Med. Res. Pract. 2019, 9, 231–237. [Google Scholar] [CrossRef]
- Davenport, T.; Kalakota, R. The Potential for Artificial Intelligence in Healthcare. Available online: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6616181/ (accessed on 23 November 2022).
- Pugliese, R.; Regondi, S.; Marini, R. Machine learning-based approach: Global trends, research directions, and regulatory standpoints. Data Sci. Manag. 2021, 4, 19–29. [Google Scholar] [CrossRef]
- Council of Europe. Ai and Control of COVID-19 Coronavirus. Available online: https://www.coe.int/en/web/artificial-intelligence/ai-and-control-of-covid-19-coronavirus (accessed on 23 November 2022).
- Watson, I.; Jeong, S.; Hollingsworth, J. How this South Korean Company Created Coronavirus Test Kits in Three Weeks. Available online: https://edition.cnn.com/2020/03/12/asia/coronavirus-south-korea-testing-intl-hnk/index.html (accessed on 23 November 2022).
- Baidu, B. How Baidu is Bringing AI to the Fight against Coronavirus. Available online: https://www.technologyreview.com/2020/03/11/905366/how-baidu-is-bringing-ai-to-the-fight-against-coronavirus/ (accessed on 23 November 2022).
- Mishra, R.K.; Reddy, G.Y.; Pathak, H. The understanding of Deep Learning: A Comprehensive Review. Math. Probl. Eng. 2021, 2021, 1–15. [Google Scholar] [CrossRef]
- Lundervold, A.S.; Lundervold, A. An overview of deep learning in medical imaging focusing on MRI. Z. Med. Phys. 2019, 29, 102–127. [Google Scholar] [CrossRef]
- Krawczyk, B.; Minku, L.L.; Gama, J.; Stefanowski, J.; Woźniak, M. Ensemble learning for data stream analysis: A survey. Inf. Fusion 2017, 37, 132–156. [Google Scholar] [CrossRef][Green Version]
- Najafabadi, M.M.; Villanustre, F.; Khoshgoftaar, T.M.; Seliya, N.; Wald, R.; Muharemagic, E. Deep learning applications and challenges in Big Data Analytics. J. Big Data 2015, 2, 1–21. [Google Scholar] [CrossRef][Green Version]
- Das, P.K.; Meher, S. An efficient deep Convolutional Neural Network based detection and classification of Acute Lymphoblastic Leukemia. Expert Syst. Appl. 2021, 183, 115311. [Google Scholar] [CrossRef]
- Das, P.; Meher, S.; Panda, R.; Abraham, A. A Review of Automated Methods for the Detection of Sickle Cell Disease. IEEE Rev. Biomed. Eng. 2019, 13, 309–324. [Google Scholar] [CrossRef]
- Das, P.; Pradhan, A.; Meher, S. Detection of Acute Lymphoblastic Leukemia Using Machine Learning Techniques. In Machine Learning, Deep Learning and Computational Intelligence for Wireless Communication; Springer: Singapore, 2021; pp. 425–437. [Google Scholar] [CrossRef]
- Das, P.; Meher, S. Transfer Learning-Based Automatic Detection of Acute Lymphocytic Leukemia. In Proceedings of the 2021 National Conference on Communications (NCC), Kanpur, India, 27–30 July 2021; pp. 1–6. [Google Scholar] [CrossRef]
- Das, P.; Diya, V.; Meher, S.; Panda, R.; Abraham, A. A Systematic Review on Recent Advancements in Deep and Machine Learning Based Detection and Classification of Acute Lymphoblastic Leukemia. IEEE Access 2022, 10, 81741–81763. [Google Scholar] [CrossRef]
- Das, P.K.; Nayak, B.; Meher, S. A lightweight deep learning system for automatic detection of blood cancer. Measurement 2022, 191, 110762. [Google Scholar] [CrossRef]
- Rajagopal, R.; Karthick, R.; Meenalochini, P.; Kalaichelvi, T. Deep Convolutional Spiking Neural Network optimized with Arithmetic optimization algorithm for lung disease detection using chest X-ray images. Biomed. Signal Process. Control 2023, 79, 104197. [Google Scholar] [CrossRef]
- Gao, S.; Lima, D. A review of the application of deep learning in the detection of Alzheimer’s disease. Int. J. Cogn. Comput. Eng. 2022, 3, 1–8. [Google Scholar] [CrossRef]
- EL-Geneedy, M.; Moustafa, H.E.D.; Khalifa, F.; Khater, H.; AbdElhalim, E. An MRI-based deep learning approach for accurate detection of Alzheimer’s disease. Alex. Eng. J. 2023, 63, 211–221. [Google Scholar] [CrossRef]
- Li, Q.; Zhang, Y.; Liang, H.; Gong, H.; Jiang, L.; Liu, Q.; Shen, L. Deep learning based neuronal soma detection and counting for Alzheimer’s disease analysis. Comput. Methods Programs Biomed. 2021, 203, 106023. [Google Scholar] [CrossRef]
- Sultan, S. Limitations of Artificial Intelligence. Available online: https://scholarworks.rit.edu/cgi/viewcontent.cgi?article=12113&context=theses (accessed on 23 November 2022).
- Kumar, Y.; Koul, A.; Singla, R.; Ijaz, M.F. Artificial intelligence in disease diagnosis: A systematic literature review, synthesizing framework and future research agenda. J. Ambient. Intell. Humaniz. Comput. 2022, 1–28. [Google Scholar] [CrossRef]
- Victory, L.R.; Ervin, K.M.; Ridge, C.A. Imaging in chest disease. Medicine 2020, 48, 249–256. [Google Scholar] [CrossRef]
- Ayan, E.; Ünver, H.M. Diagnosis of Pneumonia from Chest X-ray Images Using Deep Learning. In Proceedings of the 2019 Scientific Meeting on Electrical-Electronics Biomedical Engineering and Computer Science (EBBT), Istanbul, Turkey, 24–26 April 2019; pp. 1–5. [Google Scholar] [CrossRef]
- Ng, K.H.; Rehani, M.M. X ray imaging goes digital. BMJ 2006, 333, 765–766. [Google Scholar] [CrossRef][Green Version]
- Thompson, W.; Hudnut, H.; Russo, P.; Brown, F.; Mosley, K. A review and study of cardiovascular disease screening with the miniature chest X-ray. J. Chronic Dis. 1961, 13, 148–160. [Google Scholar] [CrossRef]
- Bharati, S.; Podder, P.; Mondal, M.R.H. Hybrid deep learning for detecting lung diseases from X-ray images. Inform. Med. Unlocked 2020, 20, 100391. [Google Scholar] [CrossRef]
- Saxena, S.; Jena, B.; Gupta, N.; Das, S.; Sarmah, D.; Bhattacharya, P.; Nath, T.; Paul, S.; Fouda, M.M.; Kalra, M.; et al. Role of Artificial Intelligence in Radiogenomics for Cancers in the Era of Precision Medicine. Cancers 2022, 14, 2860. [Google Scholar] [CrossRef]
- Jena, B.; Saxena, S.; Nayak, G.K.; Balestrieri, A.; Gupta, N.; Khanna, N.N.; Laird, J.R.; Kalra, M.K.; Fouda, M.M.; Saba, L.; et al. Brain Tumor Characterization Using Radiogenomics in Artificial Intelligence Framework. Cancers 2022, 14, 4052. [Google Scholar] [CrossRef] [PubMed]
- Soffer, S.; Morgenthau, A.S.; Shimon, O.; Barash, Y.; Konen, E.; Glicksberg, B.S.; Klang, E. Artificial Intelligence for Interstitial Lung Disease Analysis on Chest Computed Tomography: A Systematic Review. Acad. Radiol. 2022, 29, S226–S235. [Google Scholar] [CrossRef] [PubMed]
- Dicente Cid, Y.; Liauchuk, V.; Klimuk, D.; Tarasau, A.; Kovalev, V.; Müller, H. Overview of ImageCLEFtuberculosis 2019—Automatic CT-based Report Generation and Tuberculosis Severity Assessment. In CLEF (Working Notes); 2019. [Google Scholar]
- Kassem, M.N.; Masallat, D.T. Clinical application of chest computed tomography (CT) in detection and characterization of Coronavirus (COVID-19) pneumonia in adults. J. Digit. Imaging 2021, 34, 273–283. [Google Scholar] [CrossRef] [PubMed]
- Gurcan, M.; Boucheron, L.; Can, A.; Madabhushi, A.; Rajpoot, N.; Yener, B. Histopathological Image Analysis: A Review. IEEE Rev. Biomed. Eng. 2009, 2, 147–171. [Google Scholar] [CrossRef][Green Version]
- Coudray, N.; Ocampo, P.; Sakellaropoulos, T.; Narula, N.; Snuderl, M.; Fenyö, D.; Moreira, A.; Razavian, N.; Tsirigos, A. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat. Med. 2018, 24, 1559–1567. [Google Scholar] [CrossRef]
- He, L.; Long, L.R.; Antani, S.; Thoma, G.R. Histology image analysis for carcinoma detection and grading. Comput. Methods Programs Biomed. 2012, 107, 538–556. [Google Scholar] [CrossRef][Green Version]
- Shah, M.; Mishra, S.; Yadav, V.; Chauhan, A.; Sarkar, M.; Sharma, S.; Rout, C. Ziehl-Neelsen sputum smear microscopy image database: A resource to facilitate automated bacilli detection for tuberculosis diagnosis. J. Med. Imaging 2017, 4, 027503. [Google Scholar] [CrossRef]
- Kant, S.; Srivastava, M.M. Towards Automated Tuberculosis detection using Deep Learning. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; pp. 1250–1253. [Google Scholar] [CrossRef][Green Version]
- Das, P.; Ganguly, S.; Mandal, B. Sputum Smear Microscopy in tuberculosis: It is still relevant in the era of molecular diagnosis when seen from the Public Health Perspective. Biomed. Biotechnol. Res. J. (BBRJ) 2019, 3, 77. [Google Scholar] [CrossRef]
- ISTC. International Standards for Tuberculosis Care. Available online: https://theunion.org/technical-publications/international-standards-for-tuberculosis-care (accessed on 23 November 2022).
- Ishida, M.; Kato, S.; Sakuma, H. Cardiac MRI in ischemic heart disease. Circ. J. 2009, 73, 1577–1588. [Google Scholar] [CrossRef][Green Version]
- Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. ChestX-ray8: Hospital-Scale Chest X-ray Database and Benchmarks on Weakly-Supervised Classification and Localization of Common Thorax Diseases. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 3462–3471. [Google Scholar] [CrossRef][Green Version]
- Chowdhury, M.; Rahman, T.; Khandakar, A.; Mazhar, R.; Kadir, M.; Mahbub, Z.; Islam, K.; Khan, M.S.; Iqbal, A.; Al-Emadi, N.; et al. Can AI help in screening Viral and COVID-19 pneumonia? IEEE Access 2020, 8, 132665–132676. [Google Scholar] [CrossRef]
- Cohen, J.; Morrison, P.; Dao, L. COVID-19 Image Data Collection. arXiv 2020, arXiv:2003.11597. [Google Scholar]
- Bustos, A.; Pertusa, A.; Salinas, J.M.; de la Iglesia-Vayá, M. PadChest: A large chest X-ray image dataset with multi-label annotated reports. Med. Image Anal. 2020, 66, 101797. [Google Scholar] [CrossRef] [PubMed]
- Irvin, J.; Rajpurkar, P.; Ko, M.; Yu, Y.; Ciurea-Ilcus, S.; Chute, C.; Marklund, H.; Haghgoo, B.; Ball, R.; Shpanskaya, K.; et al. Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 590–597. [Google Scholar] [CrossRef][Green Version]
- Wang, L.; Wong, A. COVID-Net: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest Radiography Images. Sci. Rep. 2020, 10, 1–12. [Google Scholar] [CrossRef] [PubMed]
- Rahman, T.; Khandakar, A.; Qiblawey, Y.; Tahir, A.; Kiranyaz, S.; Abul Kashem, S.B.; Islam, M.T.; Al Maadeed, S.; Zughaier, S.M.; Khan, M.S.; et al. Exploring the effect of image enhancement techniques on COVID-19 detection using chest X-ray images. Comput. Biol. Med. 2021, 132, 104319. [Google Scholar] [CrossRef] [PubMed]
- Johnson, A.E.; Pollard, T.J.; Berkowitz, S.J.; Greenbaum, N.R.; Lungren, M.P.; Deng, C.Y.; Mark, R.G.; Horng, S. Mimic-CXR, a de-identified publicly available database of chest radiographs with free-text reports. Sci. Data 2019, 6, 1–8. [Google Scholar] [CrossRef] [PubMed][Green Version]
- Shiraishi, J.; Katsuragawa, S.; Ikezoe, J.; Matsumoto, T.; Kobayashi, T.; Komatsu, K.I.; Matsui, M.; Fujita, H.; Kodera, Y.; Doi, K. Development of a digital image database for chest radiographs with and without a lung nodule: Receiver operating characteristic analysis of radiologists’ detection of pulmonary nodules. Am. J. Roentgenol. 2000, 174, 71–74. [Google Scholar] [CrossRef]
- Kermany, D.S.; Zhang, K.; Goldbaum, M.H. Labeled Optical Coherence Tomography (OCT) and Chest X-ray Images for Classification. Mendeley Data 2018, 2, 2. [Google Scholar] [CrossRef]
- Rsna, P. RSNA Pneumonia Detection Challenge. Available online: https://www.kaggle.com/c/rsna-pneumonia-detection-challenge (accessed on 23 November 2022).
- Pan, I.; Agarwal, S.; Merck, D. Generalizable Inter-Institutional Classification of Abnormal Chest Radiographs Using Efficient Convolutional Neural Networks. J. Digit. Imaging 2019, 32, 888–896. [Google Scholar] [CrossRef]
- Grossman, R.; Heath, A.; Ferretti, V.; Varmus, H.; Lowy, D.; Kibbe, W.; Staudt, L. Toward a Shared Vision for Cancer Genomic Data. N. Engl. J. Med. 2016, 375, 1109–1112. [Google Scholar] [CrossRef]
- Kozlovski, S.; Liauchuk, V.; Dicente Cid, Y.; Tarasau, A.; Kovalev, V.; Müller, H. Overview of ImageCLEFtuberculosis 2020-Automatic CT-based Report Generation. In Proceedings of the CLEF 2020, Thessaloniki, Greece, 22–25 September 2020. [Google Scholar]
- Jaeger, S.; Candemir, S.; Antani, S.; Wáng, Y.X.J.; Lu, P.X.; Thoma, G. Two public chest X-ray datasets for computer-aided screening of pulmonary diseases. Quant. Imaging Med. Surg. 2014, 4, 475. [Google Scholar] [CrossRef]
- Shakouri, S.; Bakhshali, M.A.; Layegh, P.; Kiani, B.; Masoumi, F.; Nakhaei, S.; Mostafavi, S. COVID19-CT-dataset: An open-access chest CT image repository of 1000+ patients with confirmed COVID-19 diagnosis. BMC Res. Notes 2021, 14, 1–3. [Google Scholar] [CrossRef]
- Costa, M.G.F.; Filho, C.F.F.C.; Kimura, A.; Levy, P.C.; Xavier, C.M.; Fujimoto, L.B. A sputum smear microscopy image database for automatic bacilli detection in conventional microscopy. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 26–30 August 2014; pp. 2841–2844. [Google Scholar] [CrossRef]
- Kermany, D.S.; Goldbaum, M.; Cai, W.; Valentim, C.C.; Liang, H.; Baxter, S.L.; McKeown, A.; Yang, G.; Wu, X.; Yan, F.; et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. Cell 2018, 172, 1122–1131.e9. [Google Scholar] [CrossRef]
- Radau, P.; Lu, Y.; Connelly, K.; Paul, G.; Dick, A.; Wright, G. Evaluation Framework for Algorithms Segmenting Short Axis Cardiac MRI. Card. MR Left Ventricle Segment. Chall. 2009, 49, 2707–2713. [Google Scholar] [CrossRef]
- Edwards, N.; Oberti, M.; Thangudu, R.; Cai, S.; Mcgarvey, P.; Jacob, S.; Madhavan, S.; Ketchum, K. The CPTAC data portal: A resource for cancer proteomics research. J. Proteome Res. 2015, 14, 2707–2713. [Google Scholar] [CrossRef]
- Armato, S., III; Mclennan, G.; Bidaut, L.; McNitt-Gray, M.; Meyer, C.; Reeves, A.; Zhao, B.; Aberle, D.; Henschke, C.; Hoffman, E.; et al. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): A completed reference database of lung nodules on CT scans. Med. Phys. 2011, 38, 915–931. [Google Scholar] [CrossRef][Green Version]
- National Institutes of Health Chest X-ray Dataset, Kaggle. NIH Chest X-rays. Available online: https://www.kaggle.com/nih-chest-xrays/data (accessed on 23 November 2022).
- ImageCLEF. ImageCLEFmed Tuberculosis. Available online: https://www.imageclef.org/2019/medical/tuberculosis (accessed on 23 November 2022).
- ImageCLEF. ImageCLEFmed Tuberculosis. Available online: https://www.imageclef.org/2020/medical/tuberculosis (accessed on 23 November 2022).
- JSRT Database. JSRT Database: Japanese Society of Radiological Technology. Available online: http://db.jsrt.or.jp/eng.php (accessed on 23 November 2022).
- SK Tuberculosis, A. Tuberculosis Chest X-ray Image Data Sets.—LHNCBC Abstract. Available online: https://lhncbc.nlm.nih.gov/publication/pub9931 (accessed on 23 November 2022).
- Flavio, T.I. TBImages—An Image Database of Conventional Sputum Smear Microscopy for Tuberculosis. Available online: http://www.tbimages.ufam.edu.br/ (accessed on 23 November 2022).
- Larxel, C. COVID-19 X rays. Available online: https://www.kaggle.com/andrewmvd/convid19-x-rays (accessed on 23 November 2022).
- Rahman, T. COVID-19 Radiography Database. Available online: https://www.kaggle.com/tawsifurrahman/covid19-radiography-database (accessed on 23 November 2022).
- Kermany, D. Large Dataset of Labeled Optical Coherence Tomography (OCT) and chest X-ray Images. Available online: https://data.mendeley.com/datasets/rscbjbr9sj/3 (accessed on 23 November 2022).
- Raddar, T. Tuberculosis Chest X-rays (Shenzhen). Available online: https://www.kaggle.com/raddar/tuberculosis-chest-xrays-shenzhen (accessed on 23 November 2022).
- Stanford ML Group. Chexpert: A Large Dataset of Chest X-rays and Competition for Automated Chest X-ray Interpretation. Available online: https://stanfordmlgroup.github.io/competitions/chexpert/ (accessed on 23 November 2022).
- BIMCV. Available online: https://bimcv.cipf.es/bimcv-projects/padchest/ (accessed on 23 November 2022).
- Genomic Data Commons Data Portal. Available online: https://portal.gdc.cancer.gov/ (accessed on 23 November 2022).
- Cohen, J.P. IEEE8023/COVID-Chestxray-Dataset: We Are Building an Open Database of COVID-19 Cases with Chest X-ray or CT Images. Available online: https://github.com/ieee8023/covid-chestxray-dataset (accessed on 23 November 2022).
- Sajid, N. COVID-19 Patients Lungs X ray Images 10000. Available online: https://www.kaggle.com/nabeelsajid917/covid-19-x-ray-10000-images (accessed on 23 November 2022).
- Mooney, P. Chest X-ray Images (Pneumonia). Available online: https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia (accessed on 23 November 2022).
- UCSD-AI4H. COVID-CT/README.md. Available online: https://github.com/UCSD-AI4H/COVID-CT/blob/c224644822838e70b8f13b4ba90aa239ced992f7/README.md (accessed on 23 November 2022).
- Joinup, C. Open Data. Available online: https://joinup.ec.europa.eu/collection/digital-response-covid-19/open-data (accessed on 23 November 2022).
- Vendt, B. Data from LIDC-IDRI. Available online: https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI (accessed on 23 November 2022).
- Zhao, A. COVIDx CXR-2. Available online: https://www.kaggle.com/andyczhao/covidx-cxr2?select=competition_test (accessed on 23 November 2022).
- Berryman, S. CPTAC-LUAD. Available online: https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=33948253 (accessed on 23 November 2022).
- Hussaini, S. Sunnybrook Cardiac MRI. Available online: https://www.kaggle.com/salikhussaini49/sunnybrook-cardiac-mri (accessed on 23 November 2022).
- Johnson, A.; Pollard, T.; Mark, R.; Berkowitz, S.; Horng, S. Mimic-CXR Database. Available online: https://physionet.org/content/mimic-cxr/2.0.0/ (accessed on 23 November 2022).
- Domingos, P. A Few Useful Things to Know about Machine Learning. Commun. ACM 2012, 55, 78–87. [Google Scholar] [CrossRef][Green Version]
- El Asnaoui, K.; Chawki, Y.; Idri, A. Automated methods for detection and classification pneumonia based on X-ray images using Deep Learning. Stud. Big Data 2021, 90, 257–284. [Google Scholar] [CrossRef]
- Mikołajczyk, A.; Grochowski, M. Data augmentation for improving deep learning in image classification problem. In Proceedings of the 2018 International Interdisciplinary PhD Workshop (IIPhDW), Świnoujście, Poland, 9–12 May 2018; pp. 117–122. [Google Scholar] [CrossRef]
- Zheng, Z.; Cai, Y.; Li, Y. Oversampling method for imbalanced classification. Comput. Inform. 2015, 34, 1017–1037. [Google Scholar]
- Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved Techniques for Training GANs. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; Volume 29, pp. 2234–2242. [Google Scholar] [CrossRef]
- Lin, C.H.; Lin, C.J.; Li, Y.C.; Wang, S.H. Using Generative Adversarial Networks and Parameter Optimization of Convolutional Neural Networks for Lung Tumor Classification. Appl. Sci. 2021, 11, 480. [Google Scholar] [CrossRef]
- He, H.; Bai, Y.; Garcia, E.A.; Li, S. ADASYN: Adaptive synthetic sampling approach for imbalanced learning. In Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–8 June 2008; pp. 1322–1328. [Google Scholar] [CrossRef][Green Version]
- Sogancioglu, E.; Murphy, K.; Calli, E.; Scholten, E.T.; Schalekamp, S.; Van Ginneken, B. Cardiomegaly Detection on Chest Radiographs: Segmentation Versus Classification. IEEE Access 2020, 8, 94631–94642. [Google Scholar] [CrossRef]
- Rahman, T.; Khandakar, A.; Kadir, M.A.; Islam, K.R.; Islam, K.F.; Mazhar, R.; Hamid, T.; Islam, M.T.; Kashem, S.; Mahbub, Z.B.; et al. Reliable tuberculosis detection using chest X-ray with deep learning, segmentation and visualization. IEEE Access 2020, 8, 191586–191601. [Google Scholar] [CrossRef]
- van der Velden, B.H.; Kuijf, H.J.; Gilhuijs, K.G.; Viergever, M.A. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med. Image Anal. 2022, 79, 102470. [Google Scholar] [CrossRef] [PubMed]
- Yamashita, R.; Nishio, M.; Do, R.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Into Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef] [PubMed][Green Version]
- O’Shea, K.; Nash, R. An Introduction to Convolutional Neural Networks. arXiv 2015, arXiv:1511.08458. [Google Scholar] [CrossRef]
- Ker, J.; Wang, L.; Rao, J.; Lim, T. Deep Learning Applications in Medical Image Analysis. IEEE Access 2018, 6, 9375–9389. [Google Scholar] [CrossRef]
- Xu, Z.; Zhang, H.; Li, N.; Zhang, L. Building extraction from high resolution SAR imagery based on deep neural networks. Remote Sens. Lett. 2017, 8, 888–896. [Google Scholar] [CrossRef]
- Pandey, R.; Sahai, A.; Kashyap, H. Chapter 13—Implementing convolutional neural network model for prediction in medical imaging. In Artificial Intelligence and Machine Learning for EDGE Computing; Pandey, R., Khatri, S.K., kumar Singh, N., Verma, P., Eds.; Academic Press: Cambridge, MA, USA, 2022; pp. 189–206. [Google Scholar] [CrossRef]
- Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L.; et al. Review of Deep Learning: Concepts, CNN Architectures, challenges, applications, Future Directions. J. Big Data 2021, 8, 1–74. [Google Scholar] [CrossRef]
- Shokraei Fard, A.; Reutens, D.C.; Vegh, V. From CNNs to GANs for cross-modality medical image estimation. Comput. Biol. Med. 2022, 146, 105556. [Google Scholar] [CrossRef]
- DiPietro, R.; Hager, G.D. Chapter 21—Deep learning: RNNs and LSTM. In Handbook of Medical Image Computing and Computer Assisted Intervention; Zhou, S.K., Rueckert, D., Fichtinger, G., Eds.; The Elsevier and MICCAI Society Book Series; Academic Press: Cambridge, MA, USA, 2020; pp. 503–519. [Google Scholar] [CrossRef]
- Sharkawy, A.N. Principle of Neural Network and Its Main Types: Review. J. Adv. Appl. Comput. Math. 2020, 7, 8–19. [Google Scholar] [CrossRef]
- Mithra, K.S.; Emmanuel, W.R.S. Automated identification of mycobacterium bacillus from sputum images for tuberculosis diagnosis. Signal Image Video Process. 2019, 13, 1–8. [Google Scholar] [CrossRef]
- Hinton, G.; Osindero, S.; Teh, Y.W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef] [PubMed]
- Voulodimos, A.; Doulamis, N.; Doulamis, A.; Protopapadakis, E. Deep Learning for Computer Vision: A brief review. Comput. Intell. Neurosci. 2018, 2018, 1–13. [Google Scholar] [CrossRef]
- Koo, K.; Min, G.; Kim, J.; Park, J.; Kim, J.; Ahn, H.; Min, M.; Kim, J.; Chung, B. 166—A multilayer perceptron artificial neural network model for predicting survival of patients with prostate cancer according to initial treatment strategy: Development of a web-based clinical decision support system. Eur. Urol. Suppl. 2019, 18, e223–e224. [Google Scholar] [CrossRef]
- Akkaya, B.; Çolakoğlu, N. Comparison of Multi-class Classification Algorithms on Early Diagnosis of Heart Diseases. In Proceedings of the y-BIS Conference 2019: Recent Advances in Data Science and Business Analytics, İstanbul, Turkey, 25–28 September 2019; pp. 162–172. [Google Scholar]
- Lin, W.; Hasenstab, K.; Cunha, G.; Schwartzman, A. Comparison of handcrafted features and convolutional neural networks for liver MR image adequacy assessment. Sci. Rep. 2020, 10, 20336. [Google Scholar] [CrossRef]
- Nanni, L.; Ghidoni, S.; Brahnam, S. Handcrafted vs. non-handcrafted features for computer vision classification. Pattern Recognit. 2017, 71, 158–172. [Google Scholar] [CrossRef]
- Wang, C.; Chen, D.; Hao, L.; Liu, X.; Zeng, Y.; Chen, J.; Zhang, G. Pulmonary Image Classification Based on Inception-v3 Transfer Learning Model. IEEE Access 2019, 7, 146533–146541. [Google Scholar] [CrossRef]
- Baltruschat, I.; Nickisch, H.; Grass, M.; Knopp, T.; Saalbach, A. Comparison of Deep Learning Approaches for Multi-Label Chest X-ray Classification. Sci. Rep. 2019, 9, 6381. [Google Scholar] [CrossRef][Green Version]
- Ashhar, S.; Mokri, S.; Abd. Rahni, A.A.; Huddin, A.; Zulkarnain, N.; Azmi, N.; Mahaletchumy, T. Comparison of deep learning convolutional neural network (CNN) architectures for CT lung cancer classification. Int. J. Adv. Technol. Eng. Explor. 2021, 8, 126–134. [Google Scholar] [CrossRef]
- Mamalakis, M.; Swift, A.J.; Vorselaars, B.; Ray, S.; Weeks, S.; Ding, W.; Clayton, R.H.; Mackenzie, L.S.; Banerjee, A. DenResCov-19: A Deep Transfer Learning Network for robust automatic classification of COVID-19, pneumonia, and tuberculosis from X-rays. Comput. Med. Imaging Graph. 2021, 94, 102008. [Google Scholar] [CrossRef]
- Waheed, A.; Goyal, M.; Gupta, D.; Khanna, A.; Al-Turjman, F.; Pinheiro, P. CovidGAN: Data Augmentation using Auxiliary Classifier GAN for Improved Covid-19 Detection. IEEE Access 2020, 8, 91916–91923. [Google Scholar] [CrossRef]
- Mahmud, T.; Rahman, M.A.; Fattah, S.A. CovXNet: A multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization. Comput. Biol. Med. 2020, 122, 103869. [Google Scholar] [CrossRef]
- Wang, G.; Li, W.; Zuluaga, M.A.; Pratt, R.; Patel, P.A.; Aertsen, M.; Doel, T.; David, A.L.; Deprest, J.; Ourselin, S.; et al. Interactive Medical Image Segmentation Using Deep Learning With Image-Specific Fine Tuning. IEEE Trans. Med. Imaging 2018, 37, 1562–1573. [Google Scholar] [CrossRef] [PubMed]
- Dietterich, T.G. Ensemble Methods in Machine Learning. In Multiple Classifier Systems; Springer: Berlin/Heidelberg, Germany, 2000; pp. 1–15. [Google Scholar]
- Sirazitdinov, I.; Kholiavchenko, M.; Mustafaev, T.; Yuan, Y.; Kuleev, R.; Ibragimov, B. Deep neural network ensemble for pneumonia localization from a large-scale chest X-ray database. Comput. Electr. Eng. 2019, 78, 388–399. [Google Scholar] [CrossRef]
- Ammar, A.; Bouattane, O.; Youssfi, M. Automatic cardiac cine MRI segmentation and heart disease classification. Comput. Med. Imaging Graph. 2021, 88, 101864. [Google Scholar] [CrossRef]
- Subasi, A.; Kadasa, B.; Kremic, E. Classification of the Cardiotocogram Data for Anticipation of Fetal Risks using Bagging Ensemble Classifier. Procedia Comput. Sci. 2020, 168, 34–39. [Google Scholar] [CrossRef]
- Vo, D.M.; Nguyen, N.Q.; Lee, S.W. Classification of breast cancer histology images using incremental boosting convolution networks. Inf. Sci. 2019, 482, 123–138. [Google Scholar] [CrossRef]
- Sun, L.; Mo, Z.; Yan, F.; Xia, L.; Shan, F.; Ding, Z.; Song, B.; Gao, W.; Shao, W.; Shi, F.; et al. Adaptive feature selection guided Deep Forest for COVID-19 classification with chest CT. IEEE J. Biomed. Health Inform. 2020, 24, 2798–2805. [Google Scholar] [CrossRef]
- Rajaraman, S.; Candemir, S.; Xue, Z.; Alderson, P.; Kohli, M.; Abuya, J.; Thoma, G.; Antani, S. A novel stacked generalization of models for improved TB detection in chest radiographs. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; Volume 2018, pp. 718–721. [Google Scholar] [CrossRef]
- Deb, S.D.; Jha, R.K.; Jha, K.; Tripathi, P.S. A multi model ensemble based deep convolution neural network structure for detection of COVID19. Biomed. Signal Process. Control 2022, 71, 103126. [Google Scholar] [CrossRef] [PubMed]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2015, arXiv:1409.1556. [Google Scholar]
- Zhang, Q.; Wang, H.; Yoon, S.W.; Won, D.; Srihari, K. Lung Nodule Diagnosis on 3D Computed Tomography Images Using Deep Convolutional Neural Networks. Procedia Manuf. 2019, 39, 363–370. [Google Scholar] [CrossRef]
- Choudhary, P.; Hazra, A. Chest disease radiography in twofold: Using convolutional neural networks and transfer learning. Evol. Syst. 2019, 12, 567–579. [Google Scholar] [CrossRef]
- Abdar, A.K.; Sadjadi, S.M.; Soltanian-Zadeh, H.; Bashirgonbadi, A.; Naghibi, M. Automatic detection of coronavirus (COVID-19) from chest CT images using VGG16-based deep-learning. In Proceedings of the 2020 27th National and 5th International Iranian Conference on Biomedical Engineering (ICBME), Tehran, Iran, 26–27 November 2020. [Google Scholar] [CrossRef]
- Khatibi, T.; Shahsavari, A.; Farahani, A. Proposing a novel multi-instance learning model for tuberculosis recognition from chest X-ray images based on CNNs, complex networks and stacked ensemble. Phys. Eng. Sci. Med. 2021, 44, 291–311. [Google Scholar] [CrossRef]
- Dey, N.; Zhang, Y.D.; Rajinikanth, V.; Pugalenthi, R.; Raja, N.S. Customized VGG19 architecture for pneumonia detection in chest X-rays. Pattern Recognit. Lett. 2021, 143, 67–74. [Google Scholar] [CrossRef]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef][Green Version]
- Nguyen, L.; Lin, D.; Lin, Z.; Cao, J. Deep CNNs for microscopic image classification by exploiting transfer learning and feature concatenation. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5. [Google Scholar] [CrossRef]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar] [CrossRef][Green Version]
- Ali, L.; Alnajjar, F.; Jassmi, H.; Gochoo, M.; Khan, W.; Serhani, M. Performance Evaluation of Deep CNN-Based Crack Detection and Localization Techniques for Concrete Structures. Sensors 2021, 21, 1688. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef][Green Version]
- Ouyang, X.; Karanam, S.; Wu, Z.; Chen, T.; Huo, J.; Zhou, X.S.; Wang, Q.; Cheng, J.Z. Learning Hierarchical Attention for Weakly-Supervised Chest X-ray Abnormality Localization and Diagnosis. IEEE Trans. Med. Imaging 2021, 40, 2698–2710. [Google Scholar] [CrossRef]
- Szegedy, C.; Ioffe, S.; Vanhoucke, V.; Alemi, A. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 4278–4284. [Google Scholar] [CrossRef]
- Wang, S.; Zhang, Y.D. DenseNet-201-Based Deep Neural Network with Composite Learning Factor and Precomputation for Multiple Sclerosis Classification. ACM Trans. Multimed. Comput. Commun. Appl. 2020, 16, 1–19. [Google Scholar] [CrossRef]
- Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar] [CrossRef][Green Version]
- Chen, J.; Zhang, D.; Suzauddola, M.; Zeb, A. Identifying crop diseases using attention embedded MobileNet-V2 model. Appl. Soft Comput. 2021, 113, 107901. [Google Scholar] [CrossRef]
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar] [CrossRef][Green Version]
- Abdelbaki, S.; Sakli, N.; Sakli, H. Classification and Predictions of Lung Diseases from Chest X-rays Using MobileNet V2. Appl. Sci. 2021, 11, 2751. [Google Scholar] [CrossRef]
- Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807. [Google Scholar] [CrossRef][Green Version]
- Zoph, B.; Vasudevan, V.; Shlens, J.; Le, Q.V. Learning transferable architectures for Scalable Image Recognition. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8697–8710. [Google Scholar] [CrossRef]
- Yang, J.; Ren, P.; Zhang, D.; Chen, D.; Wen, F.; Li, H.; Hua, G. Neural Aggregation Network for Video Face Recognition. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4362–4371. [Google Scholar] [CrossRef][Green Version]
- Ronneberger, O. Invited talk: U-net convolutional networks for biomedical image segmentation. Inform. Aktuell 2017, 3, 3. [Google Scholar] [CrossRef]
- Lamba, H. Understanding Semantic Segmentation with UNET. Available online: https://towardsdatascience.com/understanding-semantic-segmentation-with-unet-6be4f42d4b47 (accessed on 23 November 2022).
- Cui, H.; Yuwen, C.; Jiang, L.; Xia, Y.; Zhang, Y. Multiscale attention guided U-Net architecture for cardiac segmentation in short-axis MRI images. Comput. Methods Programs Biomed. 2021, 206, 106142. [Google Scholar] [CrossRef]
- Dabass, M.; Vashisth, S.; Vig, R. Attention-Guided deep atrous-residual U-Net architecture for automated gland segmentation in colon histopathology images. Inform. Med. Unlocked 2021, 27, 100784. [Google Scholar] [CrossRef]
- Wu, M.; Chai, Z.; Qian, G.; Lin, H.; Wang, Q.; Wang, L.; Chen, H. Development and Evaluation of a Deep Learning Algorithm for Rib Segmentation and Fracture Detection from Multicenter Chest CT Images. Radiol. Artif. Intell. 2021, 3, e200248. [Google Scholar] [CrossRef]
- Singh, P.; Singh, N.; Singh, K.K.; Singh, A. Chapter 5—Diagnosing of disease using machine learning. In Machine Learning and the Internet of Medical Things in Healthcare; Singh, K.K., Elhoseny, M., Singh, A., Elngar, A.A., Eds.; Academic Press: Cambridge, MA, USA, 2021; pp. 89–111. [Google Scholar] [CrossRef]
- Sharma, N.; Saba, L.; Khanna, N.N.; Kalra, M.K.; Fouda, M.M.; Suri, J.S. Segmentation-Based Classification Deep Learning Model Embedded with Explainable AI for COVID-19 Detection in Chest X-ray Scans. Diagnostics 2022, 12, 2132. [Google Scholar] [CrossRef]
- Suri, J.S.; Agarwal, S.; Chabert, G.L.; Carriero, A.; Paschè, A.; Danna, P.S. COVLIAS 2.0-cXAI: Cloud-Based Explainable Deep Learning System for COVID-19 Lesion Localization in Computed Tomography Scans. Diagnostics 2022, 12, 1482. [Google Scholar] [CrossRef]
- Suri, J.; Agarwal, S.; Chabert, G.; Carriero, A.; Paschè, A.; Danna, P.; Saba, L.; Mehmedović, A.; Faa, G.; Singh, I.; et al. COVLIAS 1.0Lesion vs. MedSeg: An Artificial Intelligence Framework for Automated Lesion Segmentation in COVID-19 Lung Computed Tomography Scans. Diagnostics 2022, 12, 1283. [Google Scholar] [CrossRef]
- Sakib, S.; Tazrin, T.; Fouda, M.M.; Fadlullah, Z.M.; Guizani, M. DL-CRC: Deep Learning-Based Chest Radiograph Classification for COVID-19 Detection: A Novel Approach. IEEE Access 2020, 8, 171575–171589. [Google Scholar] [CrossRef]
- Sakib, S.; Fouda, M.M.; Md Fadlullah, Z.; Nasser, N. On COVID-19 Prediction Using Asynchronous Federated Learning-Based Agile Radiograph Screening Booths. In Proceedings of the ICC 2021—IEEE International Conference on Communications, Montreal, QC, Canada, 14–23 June 2021. [Google Scholar] [CrossRef]
- Chauhan, G.; Liao, R.; Wells, W.; Andreas, J.; Wang, X.; Berkowitz, S.; Horng, S.; Szolovits, P.; Golland, P. Joint modeling of chest radiographs and radiology reports for pulmonary edema assessment. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention—MICCAI, Lima, Peru, 29 September 2020; pp. 529–539. [Google Scholar] [CrossRef]
- Kim, D.; Myong, J.P.; Han, S.W. Classification of Asbestosis in CT Imaging Data Using Convolutional LSTM. Res. Sq. 2021. [Google Scholar] [CrossRef]
- Behzadi-khormouji, H.; Rostami, H.; Salehi, S.; Derakhshande-Rishehri, T.; Masoumi, M.; Salemi, S.; Keshavarz, A.; Gholamrezanezhad, A.; Assadi, M.; Batouli, A.; et al. Deep Learning, reusable and problem-based architectures for detection of consolidation on chest X-ray images. Comput. Methods Programs Biomed. 2020, 185, 105162. [Google Scholar] [CrossRef] [PubMed]
- Wang, H.; Xia, Y. ChestNet: A Deep Neural Network for Classification of Thoracic Diseases on Chest Radiography. arXiv 2018, arXiv:1807.03058. [Google Scholar]
- Bao, Y.; Makady, Y.H.A.; Mahmoodi, S. Automatic diagnosis of COPD in lung CT images based on multi-view DCNN. In Proceedings of the 10th International Conference on Pattern Recognition, Applications and Methods, Institute of Communication and University of Lisbon, Lisbon, Portugal, 4–6 February 2021; pp. 571–578. [Google Scholar] [CrossRef]
- Guan, Q.; Huang, Y.; Zhong, Z.; Zheng, Z.; Zheng, L.; Yang, Y. Diagnose like a Radiologist: Attention Guided Convolutional Neural Network for Thorax Disease Classification. arXiv 2018, arXiv:1801.09927. [Google Scholar]
- Christe, A.; Peters, A.; Drakopoulos, D.; Heverhagen, J.; Geiser, T.; Stathopoulou, T.; Christodoulidis, S.; Anthimopoulos, M.; Mougiakakou, S.; Ebner, L. Computer-Aided Diagnosis of Pulmonary Fibrosis Using Deep Learning and CT Images. Investig. Radiol. 2019, 54, 627–632. [Google Scholar] [CrossRef] [PubMed][Green Version]
- Tomita, K.; Touge, H.; Sakai, H.; Sano, H.; Tohda, Y. Deep learning facilitates the diagnosis of adult asthma. Allergol. Int. 2019, 68, 456–461. [Google Scholar] [CrossRef] [PubMed]
- Gooßen, A.; Deshpande, H.; Harder, T.; Schwab, E.; Baltruschat, I.; Mabotuwana, T.; Cross, N.; Saalbach, A. Deep Learning for Pneumothorax Detection and Localization in Chest Radiographs. arXiv 2019, arXiv:1907.07324. [Google Scholar]
- Peng, L.; Lin, L.; Hu, H.; Zhang, Q.; Li, H.; Chen, Q.; Wang, D.; Han, X.H.; Iwamoto, Y.; Chen, Y.W.; et al. Multi-scale deep convolutional neural networks for emphysema classification and quantification. Intell. Syst. Ref. Libr. 2019, 149–164. [Google Scholar] [CrossRef]
- Duong, L.T.; Le, N.H.; Tran, T.B.; Ngo, V.M.; Nguyen, P.T. Detection of tuberculosis from chest X-ray images: Boosting the performance with Vision Transformer and transfer learning. Expert Syst. Appl. 2021, 184, 115519. [Google Scholar] [CrossRef]
- Abiyev, R.H.; Ma’aitah, M.K. Deep convolutional neural networks for chest diseases detection. J. Healthc. Eng. 2018, 2018, 1–11. [Google Scholar] [CrossRef]
- Nirschl, J.J.; Janowczyk, A.; Peyster, E.G.; Frank, R.; Margulies, K.B.; Feldman, M.D.; Madabhushi, A. A deep-learning classifier identifies patients with clinical heart failure using whole-slide images of H&E Tissue. PLoS ONE 2018, 13, e0192726. [Google Scholar] [CrossRef][Green Version]
- Mo, S.; Cai, M. Deep Learning Based Multi-Label Chest X-ray Classification with Entropy Weighting Loss. In Proceedings of the 2019 12th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 14–15 December 2019; Volume 2, pp. 124–127. [Google Scholar] [CrossRef]
- Liang, C.H.; Liu, Y.C.; Wu, M.T.; Garcia-Castro, F.; Alberich-Bayarri, A.; Wu, F.Z. Identifying pulmonary nodules or masses on chest radiography using deep learning: External validation and strategies to improve clinical practice. Clin. Radiol. 2020, 75, 38–45. [Google Scholar] [CrossRef] [PubMed][Green Version]
- Zimlich, R. What You Need to Know about Lung Disease. Available online: https://www.verywellhealth.com/types-of-lung-disease-what-you-should-know-5207533 (accessed on 23 November 2022).
- Smart, J. Chapter VIII—Diseases of the Lungs. In A Synopsis of Respiratory Diseases; Smart, J., Ed.; Butterworth-Heinemann: Oxford, UK, 1964; pp. 99–132. [Google Scholar] [CrossRef]
- Liao, R.; Rubin, J.; Lam, G.; Berkowitz, S.J.; Dalal, S.; Wells, W.M.; Horng, S.; Golland, P. Semi-Supervised Learning for Quantification of Pulmonary Edema in Chest X-ray Images. arXiv 2019, arXiv:1902.10785. [Google Scholar]
- Fu, X.; Liu, T.; Xiong, Z.; Smaill, B.H.; Stiles, M.K.; Zhao, J. Segmentation of histological images and fibrosis identification with a convolutional neural network. Comput. Biol. Med. 2018, 98, 147–158. [Google Scholar] [CrossRef][Green Version]
- Bhatt, A.; Ganatra, A.; Kotecha, K. COVID-19 pulmonary consolidations detection in chest X-ray using progressive resizing and transfer learning techniques. Heliyon 2021, 7, e07211. [Google Scholar] [CrossRef] [PubMed]
- Chen, S.; Han, Y.; Lin, J.; Zhao, X.; Kong, P. Pulmonary nodule detection on chest radiographs using balanced convolutional neural network and classic candidate detection. Artif. Intell. Med. 2020, 107, 101881. [Google Scholar] [CrossRef] [PubMed]
- Min Kim, H.; Ko, T.; Young Choi, I.; Myong, J.P. Asbestosis diagnosis algorithm combining the lung segmentation method and deep learning model in computed tomography image. Int. J. Med. Inform. 2022, 158, 104667. [Google Scholar] [CrossRef]
- Ho, T.; Kim, T.; Kim, W.J.; Lee, C.H.; Chae, K.; Bak, S.; Kwon, S.; Jin, G.; Park, E.K.; Choi, S. A 3D-CNN model with CT-based parametric response mapping for classifying COPD subjects. Sci. Rep. 2021, 11, 34. [Google Scholar] [CrossRef]
- Chaudhary, A.; Hazra, A.; Prakash, C. Diagnosis of Chest Diseases in X-ray images using Deep Convolutional Neural Network. In Proceedings of the 2019 10th International Conference on Computing, Communication, and Networking Technologies (ICCCNT), Kanpur, India, 6–8 July 2019. [Google Scholar] [CrossRef]
- Spyroglou, I.; Spöck, G.; Chatzimichail, E.; Rigas, A.; Paraskakis, E. A Bayesian logistic regression approach in asthma persistence prediction. Epidemiol. Biostat. Public Health 2018, 15, e12777. [Google Scholar] [CrossRef]
- Aboutalebi, H.; Pavlova, M.; Shafiee, M.J.; Sabri, A.; Alaref, A.; Wong, A. COVID-Net CXR-S: Deep Convolutional Neural Network for Severity Assessment of COVID-19 Cases from Chest X-ray Images. Diagnostics 2021, 12, 25. [Google Scholar] [CrossRef]
- Allioui, H.; Mohammed, M.; Benameur, N.; Al-Khateeb, B.; Abdulkareem, K.; Zapirain, B.; Damaševičius, R.; Maskeliunas, R. A Multi-Agent Deep Reinforcement Learning Approach for Enhancement of COVID-19 CT Image Segmentation. J. Pers. Med. 2022, 12, 309. [Google Scholar] [CrossRef] [PubMed]
- El-Melegy, M.; Mohamed, D.; El Melegy, T. Automatic detection of tuberculosis bacilli from microscopic sputum smear images using faster R-CNN, Transfer Learning and Augmentation. In Proceedings of the Iberian Conference on Pattern Recognition and Image Analysis, Madrid, Spain, 1–4 July 2019; pp. 270–278. [Google Scholar] [CrossRef]
- HowStuffWorks Contributors. What Is COPD? Available online: https://health.howstuffworks.com/diseases-conditions/respiratory/what-is-copd.htm (accessed on 23 November 2022).
- Mohamed, I.; Fouda, M.M.; Hosny, K.M. Machine Learning Algorithms for COPD Patients Readmission Prediction: A Data Analytics Approach. IEEE Access 2022, 10, 15279–15287. [Google Scholar] [CrossRef]
- WHO. Chronic Obstructive Pulmonary Disease (COPD). Available online: https://www.who.int/news-room/fact-sheets/detail/chronic-obstructive-pulmonary-disease-(copd) (accessed on 23 November 2022).
- Wallace, G.; Winter, J.; Winter, J.; Taylor, A.; Taylor, T.; Cameron, R. Chest X-rays in COPD screening: Are they worthwhile? Respir. Med. 2009, 103, 1862–1865. [Google Scholar] [CrossRef] [PubMed][Green Version]
- Park, S.; Lee, S.M.; Kim, N.; Choe, J.; Cho, Y.; Do, K.H.; Seo, J.B. Application of deep learning-based computer-aided detection system: Detecting pneumothorax on chest radiograph after biopsy. Eur. Radiol. 2019, 29, 5341–5348. [Google Scholar] [CrossRef] [PubMed]
- Pouraliakbar, H. Chapter 6—Chest Radiography in Cardiovascular Disease. In Practical Cardiology, 2nd ed.; Maleki, M., Alizadehasl, A., Haghjoo, M., Eds.; Elsevier: Amsterdam, The Netherlands, 2022; pp. 111–129. [Google Scholar] [CrossRef]
- WHO. Cardiovascular Diseases (CVDs). Available online: https://www.who.int/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds) (accessed on 23 November 2022).
- Candemir, S.; Rajaraman, S.; Thoma, G.; Antani, S. Deep Learning for Grading Cardiomegaly Severity in Chest X-rays: An Investigation. In Proceedings of the 2018 IEEE Life Sciences Conference (LSC), Montreal, QC, Canada, 28–30 October 2018; pp. 109–113. [Google Scholar] [CrossRef]
- Wang, Z.; Chen, X.; Tan, X.; Yang, L.; Kannapur, K.; Vincent, J.L.; Kessler, G.N.; Ru, B.; Yang, M. Using deep learning to identify high-risk patients with heart failure with reduced ejection fraction. J. Health Econ. Outcomes Res. 2021, 8, 6–13. [Google Scholar] [CrossRef] [PubMed]
- Zhou, Q.Q.; Wang, J.; Tang, W.; Hu, Z.C.; Xia, Z.Y.; Li, X.S.; Zhang, R.; Yin, X.; Zhang, H. Automatic Detection and Classification of Rib Fractures on Thoracic CT Using Convolutional Neural Network: Accuracy and Feasibility. Korean J. Radiol. 2020, 21, 869. [Google Scholar] [CrossRef]
- Wang, H.; Wang, S.; Qin, Z.; Zhang, Y.; Li, R.; Xia, Y. Triple attention learning for classification of 14 thoracic diseases using chest radiography. Med. Image Anal. 2021, 67, 101846. [Google Scholar] [CrossRef]
- Li, Z.; Li, L. A novel method for lung masses detection and location based on deep learning. In Proceedings of the 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Kansas City, MO, USA, 13–16 November 2017; pp. 963–969. [Google Scholar] [CrossRef]
Method | Advantage | Disadvantage |
---|---|---|
Supervised Learning | Performs both classification and regression tasks; notions of the desired output guide the learning process. | Requires a labeled dataset. |
Unsupervised Learning | Does not require labeled training data; classification is fast. | No notion of the output guides the learning process. |
Semi-Supervised Learning | Builds a model from a mix of labeled and unlabeled data; requires a smaller labeled training set. | Computationally complex. |
Reinforcement Learning | Gains experience and feedback (rewards) from its actions, which helps improve its results. | Needs large datasets to make better benchmarks and decisions. |
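The contrast the table draws between supervised learning (labels guide training) and unsupervised learning (structure must be discovered) can be sketched on toy data; the nearest-centroid classifier, the 1-D k-means loop, and the synthetic two-cluster dataset below are illustrative assumptions, not methods from the surveyed works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D dataset: two groups of points around 0.0 and 5.0.
X = np.concatenate([rng.normal(0.0, 0.5, 50), rng.normal(5.0, 0.5, 50)])
y = np.array([0] * 50 + [1] * 50)  # labels, available only in the supervised case

# Supervised: nearest-centroid classifier (the labels y define the centroids).
centroids = np.array([X[y == c].mean() for c in (0, 1)])

def predict(x):
    """Assign x to the class whose labeled centroid is closest."""
    return int(np.argmin(np.abs(centroids - x)))

# Unsupervised: 1-D k-means (the labels y are never seen).
centers = np.array([X.min(), X.max()])  # crude initialization
for _ in range(10):
    # Assign each point to its nearest center, then recompute the centers.
    assign = np.argmin(np.abs(X[:, None] - centers[None, :]), axis=1)
    centers = np.array([X[assign == k].mean() for k in (0, 1)])

print(predict(0.2), predict(4.8))  # supervised predictions for two query points
print(sorted(centers))             # clusters recovered without any labels
```

Both procedures end up partitioning the data the same way here, but only the supervised model can attach a meaningful class name to each region, which is the practical difference the table summarizes.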
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Mostafa, F.A.; Elrefaei, L.A.; Fouda, M.M.; Hossam, A. A Survey on AI Techniques for Thoracic Diseases Diagnosis Using Medical Images. Diagnostics 2022, 12, 3034. https://doi.org/10.3390/diagnostics12123034