Article

How AI Can Help in the Diagnostic Dilemma of Pulmonary Nodules

1 Diagnostic Radiology Department, Mansoura University Hospital, Mansoura 35516, Egypt
2 Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
3 Information Technology Department, Faculty of Computers and Informatics, Mansoura University, Mansoura 35516, Egypt
4 Computer Science and Information Technology Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
5 Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Cancers 2022, 14(7), 1840; https://doi.org/10.3390/cancers14071840
Submission received: 3 March 2022 / Revised: 29 March 2022 / Accepted: 30 March 2022 / Published: 6 April 2022

Simple Summary

Pulmonary nodules are considered a sign of bronchogenic carcinoma; detecting them early can reduce disease progression and save lives. Lung cancer is the second most common type of cancer in both men and women. This manuscript discusses the current applications of artificial intelligence (AI) in lung segmentation as well as pulmonary nodule segmentation and classification using computed tomography (CT) scans published in the last two decades, in addition to the limitations and future prospects in the field of AI.

Abstract

Pulmonary nodules are the precursors of bronchogenic carcinoma, and their early detection facilitates early treatment, which saves many lives. Unfortunately, pulmonary nodule detection and classification are liable to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for the implementation of artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and provide more applicable and easier-to-use tools. In this review, we briefly discuss the current applications of AI in lung segmentation and in pulmonary nodule detection and classification.

1. Introduction

Lung cancer screening is a very important issue, as the disease is the second most common type of cancer in both males and females and is responsible for 25% of all cancer deaths in the USA [1]. Early detection is associated with a higher 5-year survival rate. Risk factors for developing lung cancer include all types of smoking (including electronic cigarettes and passive smoking) [2,3,4], a family history in one or more relatives, especially relatives who developed cancer at a young age [5], chronic obstructive lung disease [6], and human papilloma virus [7]. Recently, the United States Preventive Services Task Force recommended annual screening for lung cancer with low-dose computed tomography (LDCT) for asymptomatic individuals aged 55 to 80 years who have a 30 pack-year smoking history and currently smoke or have quit smoking within the past 15 years. Patients who have stopped smoking for more than 15 years, have a co-existing health problem limiting life expectancy, or are not candidates for surgical resection are excluded from annual screening. The screening algorithm considers the number, density, and size of the solid, part-solid, or non-solid components of the nodules, and a follow-up schedule is designed according to these parameters [8,9]. Artificial intelligence was developed to enhance the computational abilities of computers and teach them to think, solve problems, and perform tasks in the same way as human beings. Recently, medical image analysis and disease prediction and detection have become among the most exciting applications of artificial intelligence. Using artificial intelligence techniques, computer-aided diagnosis (CAD) systems have been developed for the analysis of medical imaging and have proved to be very helpful tools. AI techniques could be used to create a proper learning model for lung cancer screening in clinical practice. Such a model should consist of four main steps: lung segmentation, followed by nodule segmentation/detection, then feature analysis, and finally the exclusion of false positive nodules (see Figure 1). Classification of detected pulmonary nodules into benign and malignant is based on a set of characteristic features including shape analysis, estimation of growth rate, and appearance analysis [10,11,12]. In this review, we briefly discuss the current applications of AI in lung segmentation and in pulmonary nodule detection and classification, covering CT-based studies published over the last two decades.

2. Lung Segmentation

The first step in almost every CAD system dealing with lung disease is segmentation, in which a structure of interest is delineated from its surroundings prior to analysis. Lung segmentation is very challenging due to the presence of structures with near-similar densities such as the bronchi, bronchioles, and pulmonary artery and vein branches. Lung segmentation techniques can be grouped into four main categories based on: (1) Hounsfield unit (HU) thresholds, (2) deformable boundaries, (3) shape models, and (4) region/edge-based models, in addition to machine learning (ML)-based methods and hybrid techniques, which combine methods to overcome the drawbacks of using a single method (Figure 2). Details of the different categories are given below.
Hounsfield unit (HU) thresholding: Normal lung parenchyma displays low HU values and appears hypodense on thoracic CT images in contrast to other structures such as the heart, blood vessels, or bronchial walls. Researchers have tried to determine an HU threshold that defines lung parenchyma using different methods. Hu et al. [13] proposed a 3-step technique for lung segmentation. Their method starts by extracting the lung parenchyma with an appropriate grey-scale threshold; the right and left lungs are then separated using dynamic programming; finally, a series of morphological operations refines the pulmonary margins. This method was further used in the works of Ukil and Reinhardt [14], as well as Van Rikxoort et al. [15]. Armato et al. [16,17] used grey-scale thresholding once to extract the thorax from surrounding structures and again to extract the lungs from the remaining thoracic structures. A rolling-ball algorithm was applied to the lung periphery to avoid missing juxta-pleural nodules and to exclude partial-volume pixels. Pu et al. [18] designed an adaptive border marching (ABM) algorithm for the same purpose of refining the lung margins. Gao et al. [19] proposed a 4-step method that separates the pulmonary vessels and airways from the lung parenchyma and separates the right and left lungs based on a grey-scale threshold. Other researchers used more sophisticated methods to define the threshold used for lung extraction, such as histogram analysis [20] and 3D fuzzy adaptive thresholding [21]. The limitations of threshold-based lung segmentation are mainly related to its reliance on image resolution and the type of scanner used (e.g., GE, Philips). Another important issue is that the densities of different lung structures may overlap, making differentiation based on HU values difficult.
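To make the thresholding approach concrete, the following Python sketch (NumPy and SciPy) extracts a rough lung mask from a CT volume given in Hounsfield units; the -400 HU cut-off, the assumption that the two largest non-border air components are the lungs, and the 3×3×3 closing kernel are illustrative choices rather than parameters taken from the cited studies.

```python
import numpy as np
from scipy import ndimage

def threshold_lung_mask(ct_hu: np.ndarray, threshold_hu: float = -400.0) -> np.ndarray:
    """Illustrative HU-threshold lung segmentation of a 3D CT volume in Hounsfield units."""
    air_like = ct_hu < threshold_hu                     # lung parenchyma and air are hypodense
    labels, _ = ndimage.label(air_like)
    # labels of air connected to the volume border (air surrounding the patient)
    border = np.unique(np.concatenate([labels[0].ravel(), labels[-1].ravel(),
                                       labels[:, 0].ravel(), labels[:, -1].ravel(),
                                       labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
    mask = air_like & ~np.isin(labels, border)
    # keep the two largest remaining components (right and left lungs)
    labels, n = ndimage.label(mask)
    if n > 2:
        sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
        keep = np.argsort(sizes)[-2:] + 1
        mask = np.isin(labels, keep)
    # morphological closing refines the pulmonary margins (cf. Hu et al. [13])
    return ndimage.binary_closing(mask, structure=np.ones((3, 3, 3)))
```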
Deformable boundary models: The second approach to lung segmentation uses deformable boundary models, including snakes, active contours, and level sets. These models start from an initial contour and then evolve toward the shape of the desired structure under the influence of internal and external forces. Itai et al. [22] utilized a 2D parametric deformable model to extract the lungs from computed tomography (CT) images using the lung borders as an external guiding force. Silveira et al. [23,24] presented a technique based on active contours and level sets: segmentation begins with thresholding, then edge detection is initiated using a robust geometric active contour around the lung, which splits into two contours and evolves through multiple strokes that are categorized as valid or invalid according to confidence degrees. The major limitations of deformable boundary models are their high sensitivity to the selection of the initial point and the inhomogeneity of lung structures, which may lead to unsuccessful adaptation to the lung boundaries [25].
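As an illustration of the deformable-model idea, the sketch below evolves a morphological Chan–Vese level set (scikit-image's implementation of the CV model [36]) on a single axial slice starting from a user-supplied seed mask; the iteration count and smoothing weight are arbitrary assumptions, and this is not a reproduction of the specific methods of Itai et al. or Silveira et al.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def deformable_lung_contour(ct_slice_hu: np.ndarray, seed_mask: np.ndarray) -> np.ndarray:
    """Evolve an initial seed mask toward the lung boundary on one axial slice
    using the morphological Chan-Vese model (illustrative settings only)."""
    # normalize intensities to [0, 1] so the region energy terms are well scaled
    img = (ct_slice_hu - ct_slice_hu.min()) / (np.ptp(ct_slice_hu) + 1e-8)
    # 150 iterations and smoothing=2 are arbitrary choices for this sketch
    return morphological_chan_vese(img, 150,
                                   init_level_set=seed_mask.astype(np.int8),
                                   smoothing=2)
```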
Shape-based models: In this approach, data stored in the CAD system are used to improve the accuracy of lung segmentation, utilizing either a statistical shape model or a lung appearance model. Unlike the previously discussed methods, this approach is more effective in dealing with lungs with moderate to severe pathology and with variations in lung anatomy, as it benefits from trained models [26]. Sun et al. [27] proposed a 2-step lung segmentation technique that used a robust active shape model (RASM) matching method to segment the outline of the lungs guided by a rib cage detection method, followed by an optimal surface finding approach, originally created by Li et al. [28], to fit the initial segmentation result to the lung. The right and left lungs were segmented separately. Sofka et al. [29] designed a multistage learning model that uses predefined anatomical data to initialize a statistical shape model. Hua et al. [30] developed a graph-based search algorithm with a cost function that takes into account intensity, gradient, boundary smoothness, and rib anatomical information. Other researchers proposed a user-interface framework [31] or Bayesian classification refined by a Markov Gibbs Random Field (MGRF) method [32,33,34]. A similar approach was introduced by Chung et al. [35], who developed a Bayesian approach based on the Chan–Vese (CV) model [36], in which data obtained from the previous or upper frame image are used to predict the lung image. False positive juxta-pleural nodule candidates were excluded via concave-point detection and a circle/ellipse Hough transform, and the lung contour was modified as a final step by adding the final nodule candidates back to the area of the CV model. More recently, Sun et al. [37] presented a new active shape model (ASM) algorithm that detects outlier marker points by a distance method, aiming at a better assessment of the lung periphery and of juxta-pleural lung nodules. They also used robust principal component analysis (RPCA) of low-rank theory to remove noise from images in order to construct the ASM. Despite the many advantages of shape models over other lung segmentation methods, their main limitation is their dependence on the accuracy of the stored data [25].
Region-based methods: The main idea of region-based segmentation is that neighboring pixels in a certain region have similar values [38]. An example is the region growing method: if a pixel shows criteria similar to a predefined set, it is included in that region [38,39,40,41,42]. Other examples include watershed segmentation [43], random walks segmentation [44], graph cuts segmentation [45], and fuzzy connectedness [46]. This type of segmentation is suitable for homogeneous structures such as lungs with no or mild pathology, airways, and pathologic lesions with homogeneous density [25].
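A minimal seeded region growing routine can be written in a few lines; in the sketch below the inclusion criterion is simply that a voxel's intensity stays within a fixed tolerance of the seed value, which is an assumed rule for illustration only.

```python
from collections import deque
import numpy as np

def region_grow(volume: np.ndarray, seed: tuple, tolerance: float = 60.0) -> np.ndarray:
    """Grow a region from a seed voxel, adding neighbours whose intensity stays
    within `tolerance` of the seed value (illustrative criterion)."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_value = float(volume[seed])
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:           # 6-connected neighbours
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if abs(float(volume[n]) - seed_value) <= tolerance:
                    mask[n] = True
                    queue.append(n)
    return mask
```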
Machine learning-based methods: These methods use learning models built on predefined measurable characteristics (called features) to identify normal and abnormal lung regions as well as different anatomical structures and finally construct the proper lung segmentation. Small image patches are labelled as normal, abnormal, or neighboring soft tissue; the most common pathological patches used in clinical practice include consolidation, ground glass opacities, and fibrosis. A supervised training process extracts features from each pixel/voxel and classifies them to predict the lung field boundaries and reach the final segmentation. A proper lung segmentation should identify both normal and pathological lung regions in the same process, and this is performed by examining each voxel in the CT image [47,48,49,50,51]. Multiple sophisticated algorithms have been developed for this task; for example, Mansoor et al. [52] designed an ML algorithm that identifies a large spectrum of pulmonary pathologic lesions combined with region-based and neighboring-anatomy-guided correction of the segmentation. This approach is computationally expensive, but its remarkably high accuracy, along with the development of parallel computing and efficient workstations, makes it feasible in clinical practice. One limitation is that the use of small image patches makes it impossible to capture structural information such as the global shape of the lung, and it is practically impossible to build feature datasets that cover the anatomical and physiological lung variations of different subjects. Lastly, pixel-by-pixel assessment makes this the least efficient of the major classes of lung segmentation [51,53,54,55,56].
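The patch-labelling idea can be sketched as follows: simple first-order statistics are computed per patch and a random forest assigns each patch to one of the three classes. The patch size, the features, and the synthetic training data are placeholders for illustration, not choices made in the cited works.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(patch: np.ndarray) -> np.ndarray:
    """Simple first-order statistics used as illustrative patch descriptors."""
    return np.array([patch.mean(), patch.std(),
                     np.percentile(patch, 10), np.percentile(patch, 90)])

# Hypothetical training data: 3D patches labelled 0 = normal lung, 1 = pathological,
# 2 = neighbouring soft tissue (labels and 9x9x9 patch size are assumptions).
rng = np.random.default_rng(0)
patches = rng.normal(size=(300, 9, 9, 9))
labels = rng.integers(0, 3, size=300)

X = np.stack([patch_features(p) for p in patches])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# At inference, every patch (in principle, every voxel neighbourhood) is classified
# and the per-patch predictions are assembled into the final lung segmentation.
pred = clf.predict(patch_features(patches[0]).reshape(1, -1))
```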
Hybrid approaches to lung segmentation: No single lung segmentation method can handle all anatomical and pathological variants alone, which has encouraged the development of combined approaches, as in the works of Mansoor et al. [52] and Hua et al. [30].
In summary, the literature on lung segmentation systems across these categories is presented in Table 1.

3. Pulmonary Nodule Detection and Segmentation

Lung cancer screening programs rely mainly on the early detection of pulmonary nodules utilizing LDCT [71,72,73,74,75,76,77]. LDCT provides imaging of the thoracic region with high contrast, temporal, and spatial resolution in a very short acquisition time (a single breath hold). However, the detection of lung nodules is not as simple as it looks, as pulmonary nodules usually appear as white spherical structures that can mimic a nearby small blood vessel or a collapsed bronchiole. In addition, inter-reader variations in the detection and characterization of pulmonary nodules are largely subjective [10,78,79]. This opens the way for artificial intelligence and deep learning to overcome human errors and provide more effective procedures. The process of lung nodule detection proceeds in two stages: first, detection of pulmonary nodule candidates; second, exclusion of false positive nodules (FPNs) while keeping only the true positive nodules (TPNs). In other words, detection is followed by classification [10,78,79].
Computer-aided diagnosis (CAD) systems: A large public database, the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), was generated to provide data for assessing the performance of CAD detection and diagnostic systems and to support further development. The creation of this database required great effort, as CAD was not used in annotating the included images [80]. Other databases, such as data derived from the Dutch-Belgian NELSON lung cancer screening trial and the LUNA16, LIDC, DSB2017, NLST, TianChi, and ELCAP datasets, have been utilized by most current research on CAD and deep learning (DL) [81]. The first step in the nodule detection process is to enhance the CT images by adjusting the image threshold, which improves the discrimination of pulmonary nodules from the surrounding lung parenchyma. Series of 3D cylindrical and spherical filters and template matching have been used to detect small lung nodules [82,83,84,85,86,87,88,89]. However, the geometry of candidate nodules does not always fit these spherical, cylindrical, or circular assumptions, as a nodule may be spiculated by nature or due to attachment to a nearby pleural surface or blood vessel [90]. Other studies proposed methods to detect lung nodules using the k-means clustering technique [91,92,93], with further utilization of rule-based classifiers and linear discriminant analysis (LDA) to eliminate normal lung structures and reduce FPNs. One study tried to solve the problem of eliminating an overlapping or contacting blood vessel by choosing a proper region of interest (ROI) in a 3-step model [94]. Oda et al. [95] and Saita et al. [96] used 3D algorithms, a 3D filter based on an orientation map of gradient vectors and a 3D distance transformation, to overcome the same problem. Brown et al. [97] used prior patient images to create a patient-specific model, so that any change in the size and morphology of pulmonary nodules could easily be detected in follow-up images. Messay et al. [98] used a fully automated CAD system that utilizes intensity thresholding and morphological operations to detect pulmonary nodules with a sensitivity of 82.66% at 3 FPNs/scan; a set of 245 features was computed for each segmented lung nodule and a Fisher Linear Discriminant (FLD) classifier was utilized. Similarly, Setio et al. [99] designed a CAD system to detect pulmonary nodules larger than 10 mm. They also used a multi-stage process of thresholding and morphological operations; the extracted nodules were then segmented, a set of 24 features was computed, and finally the nodules were classified via a radial basis function support vector machine (SVM). A recent study aimed to solve the problem of using uncertain-class data through a CAD system based on semi-supervised extreme learning machines (SS-ELM), trained on both labelled feature sets of certain class and unlabeled feature sets [100].
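The false-positive reduction stage can be illustrated with a Fisher linear discriminant, in the spirit of the classifier stage described by Messay et al. [98]; the four candidate-level features and the synthetic data below are illustrative stand-ins, not values from the cited studies.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical per-candidate features (e.g., volume, compactness, mean HU, contrast);
# the distributions below are synthetic stand-ins for real candidate measurements.
rng = np.random.default_rng(1)
X_true = rng.normal(loc=[400.0, 0.8, -30.0, 120.0], scale=[150, 0.1, 40, 30], size=(40, 4))
X_false = rng.normal(loc=[150.0, 0.4, -300.0, 40.0], scale=[100, 0.1, 80, 20], size=(200, 4))
X = np.vstack([X_true, X_false])
y = np.concatenate([np.ones(40), np.zeros(200)])    # 1 = true nodule, 0 = false positive

# A Fisher linear discriminant separates true nodules from false-positive candidates.
fld = LinearDiscriminantAnalysis().fit(X, y)
keep = fld.predict(X) == 1                          # candidates retained after FP reduction
```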
Deep learning: Deep learning is an advanced type of machine learning that uses complicated algorithms to model high-level features and recognize characteristics. It is composed of statistical models that predict results based on previous training on annotated or unlabelled datasets [101]. The algorithm can predict the presence of a pulmonary nodule or predict whether its nature is benign or malignant [102]. The convolutional neural network (CNN) is one of the most commonly used DL algorithms in clinical practice. It was originally implemented in LeNet, which was designed by Yann LeCun et al. [103]. Since then, it has gained popularity and has outperformed the existing state-of-the-art texture analysis and support vector machine (SVM) methods. A CNN model can build itself from scratch even when dealing with new unlabelled features, without the need for a predefined set of features or complex human-designed pipelines, in contrast to tissue radiomics or feature analysis. Another advantage of CNNs over other methods is that all of their components are optimized jointly, whereas in the case of tissue radiomics, for instance, there is no guarantee that all components will reach a high level of performance; additionally, CNNs require limited human supervision [10,104,105]. In the last decade, several research works have emerged with different CNN algorithms and models designed for pulmonary nodule detection. Two studies showed exceptionally high accuracy (96.6–99%), sensitivity (96.9–97.5%), and specificity (96.3–97.5%). They proposed algorithms that either combined 2D and 3D artificial neural networks with intensity-based statistical features [106] or used a CAD system with different dimensions of angular histograms of surface normals (AHSN) features [107]. Other researchers used 2D and 3D subsets of features [108], local shape analysis and data-driven local contextual feature learning [109], geometric and intensity statistical features [110], or deep neural networks (DNN) [111]. Bergtholdt et al. [112] found that using a support vector machine classifier improved the accuracy, sensitivity, and specificity of pulmonary nodule detection. One study [113] used a deep belief network (DBN) to detect large nodules (>30 mm) with a high accuracy of about 90%. Jacobs et al. [114] compared the performance of two commercial and one academic state-of-the-art CAD systems and found that the updated commercial CAD system (Herakles) had the highest sensitivity, reaching 82% at 3.1 FPNs/scan. They found that about one third of the missed nodules were subsolid and recommended the addition of a CAD scheme designed for subsolid nodules to improve the sensitivity of nodule detection. Another recent study reviewed several research works and found high sensitivity of DL algorithms on the LUNA16 dataset (in the range of 94.4–97%) with an average of 4 FPNs/scan and on the LIDC-IDRI dataset (in the range of 80.06–94.1%) [115].
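For illustration, a minimal 3D CNN that classifies a candidate patch as nodule or non-nodule might look like the PyTorch sketch below; the architecture, 32×32×32 patch size, and channel counts are arbitrary assumptions and do not correspond to any of the published models discussed above.

```python
import torch
import torch.nn as nn

class Nodule3DCNN(nn.Module):
    """Minimal 3D CNN for nodule vs. non-nodule patches (illustrative sketch)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),                      # nodule vs. non-nodule logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# A 32x32x32 patch centred on a candidate; batch of one for illustration.
patch = torch.randn(1, 1, 32, 32, 32)
logits = Nodule3DCNN()(patch)
probability_nodule = torch.softmax(logits, dim=1)[0, 1]
```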
Pulmonary nodule segmentation: Nodule size is a strong predictor of neoplastic nature, along with progressive increase in size on follow-up [116]. One large study demonstrated that the risk of developing cancer for nodules smaller than 100 mm³ equals that of individuals with no nodules [117]. Nodule size is better assessed by volumetry than by diameter, as 2D measurements have been found to be unreliable and show wide inter- and intra-observer variations [118]. Automated 3D measurement of pulmonary nodules provides a better assessment of their morphology and growth rate [119]. Accurate nodule volumetry requires good nodule segmentation. Manual segmentation of lung nodules is time consuming and far less accurate than deep learning semi-automated methods [120]. Most of the available algorithms concerned with pulmonary nodule detection rely on a growing edge (seeded region growing) method in which a predefined threshold acts as a seed that connects all nearby voxels of higher density [121]. As mentioned before, solid pulmonary nodules display higher density than the surrounding lung parenchyma, allowing easy discrimination by the growing edge method, but difficulties occur when a vessel contacts or passes beside a pulmonary nodule or when the nodule approximates the pleura [121,122]. The detection of ground glass nodules with indistinct margins is also very problematic in manual segmentation. Tao et al. and Zhou et al. proposed novel methods via a multi-level statistically based method [123] and a classifier built by boosting the k-nearest neighbor (kNN) algorithm, whose distance measure is the Euclidean distance between the nonparametric density estimates of two regions [124]. Another more recent study segmented subsolid nodules through voxel classification that automatically eliminates blood vessels [125]. Other studies described more complex approaches to segment pulmonary nodules of different densities and those with vascular or pleural attachment via analysis of the core of the nodule [79,126,127]. Table 2 presents a summary of state-of-the-art pulmonary nodule detection and segmentation systems.
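Once a nodule mask is available, volumetry and growth assessment reduce to simple arithmetic, as in the sketch below; the exponential-growth volume doubling time formula is standard, while the example numbers are hypothetical.

```python
import numpy as np

def nodule_volume_mm3(nodule_mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a segmented nodule from its binary mask and the CT voxel spacing."""
    voxel_volume = float(np.prod(spacing_mm))          # mm^3 per voxel
    return float(nodule_mask.sum()) * voxel_volume

def volume_doubling_time(v1_mm3: float, v2_mm3: float, interval_days: float) -> float:
    """Standard exponential-growth volume doubling time between two scans."""
    return interval_days * np.log(2.0) / np.log(v2_mm3 / v1_mm3)

# Hypothetical follow-up: a nodule growing from 90 mm^3 to 140 mm^3 over 120 days.
vdt = volume_doubling_time(90.0, 140.0, 120.0)         # about 188 days in this example
```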

4. Nodule Classification

One of the major limitations of using CAD systems for lung nodule detection is the high false positive rate, which hinders accuracy and lowers efficacy as a screening framework that could be applied to a large-scale population. False positive nodules are associated with extra costs and hazards, as they lead to unnecessary biopsies, prolonged follow-up imaging, and extra worry for patients and their families. Accurate classification of detected pulmonary nodules is therefore of utmost importance. After nodule detection and segmentation comes nodule classification. TPNs are classified within two broad architectures: radiomics feature-based schemes or deep learning models [136,137,138,139] (Figure 3). The radiomics feature scheme uses different sets of features, which may be morphological/shape features (spherical disproportion, circularity, etc.), texture features, grey-scale/histogram features (average, standard deviation, skewness, etc.), gradient features (average, standard deviation, kurtosis, etc.), and spatial features (location of the nodule) [140,141]. The data extracted from image voxels are gathered and transformed into numeric form, called radiomic features [142]. A group of numeric features (radiomics) represents what is called a feature vector. A classifier (a machine learning model) then differentiates feature vectors according to training algorithms and labelled data [143]. Well-known classifiers include the support vector machine and random forest [144]. The advantage of the radiomics model is that it can build high-performance models from limited datasets, yet it requires manual tumor segmentation and hand-crafted feature extraction [145,146,147].
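A minimal example of assembling such a radiomic feature vector is sketched below; the specific shape, histogram, and gradient features chosen here are illustrative and do not constitute a validated radiomic signature.

```python
import numpy as np
from scipy import stats

def radiomic_feature_vector(ct_hu: np.ndarray, nodule_mask: np.ndarray,
                            spacing_mm: tuple) -> np.ndarray:
    """Illustrative hand-crafted feature vector mixing shape, histogram, and
    gradient radiomics for one segmented nodule."""
    voxels = ct_hu[nodule_mask]
    volume = nodule_mask.sum() * float(np.prod(spacing_mm))      # shape: volume in mm^3
    # grey-scale / histogram features
    mean_hu, std_hu = voxels.mean(), voxels.std()
    skewness, kurt = stats.skew(voxels), stats.kurtosis(voxels)
    # gradient feature: average gradient magnitude inside the nodule
    gz, gy, gx = np.gradient(ct_hu.astype(float))
    grad_mean = np.sqrt(gz**2 + gy**2 + gx**2)[nodule_mask].mean()
    return np.array([volume, mean_hu, std_hu, skewness, kurt, grad_mean])

# The resulting vectors would then be fed to a classifier such as a support vector
# machine or random forest to separate benign from malignant nodules.
```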
On the other hand, end-to-end convolutional neural networks, fully connected neural networks, or deep neural networks can be built to reach the final nodule classification through semantic feature analysis [12,147,148,149,150,151]. As mentioned earlier, these neural networks do not require segmentation or hand-crafted feature extraction [152,153]. A DNN can assess difficult cases that do not fit the predefined feature characteristics, still with satisfactory results. Deep architectures such as ResNet and DenseNet are usually used to train the DNN model [69,136,154,155].
The process of nodule classification requires analysis of data obtained from 3D images; however, most of the available models either use 2D data to build a 3D CNN model [156] or a multi-view 2D CNN model [157,158,159]. Uthoff et al. [156] developed an ML pipeline using k-medoids clustering and information theory to pick efficient predictor sets for different amounts of parenchyma; their method had a high sensitivity of 100% and a specificity of 96%. Shen et al. [157] used a multiscale 2-layered CNN to diagnose lung cancer in chest CT images, reaching an accuracy of 84.86%, while Jung et al. [160] used a 3D deep convolutional neural network (DCNN) with shortcut and dense connections to classify lung nodules. These connections allow gradients to pass directly and quickly, thus overcoming gradient vanishing problems, in addition to capturing three-dimensional features instead of two-dimensional ones. Their method had a higher competition performance metric (CPM) of about 0.9 compared with other state-of-the-art methods. Chen et al. [160] used a neural network ensemble (NNE) to evaluate lung nodules and differentiate between probably malignant, uncertain, and probably benign nodules with an accuracy of 78.7%. Another study using texture features and artificial neural networks found that feed-forward back-propagation networks gave more accurate nodule classification than feed-forward neural networks and that skewness was the most accurate parameter [161]. Kumar et al. [149] proposed another type of neural network for lung nodule classification, the stacked autoencoder (SAE), with an accuracy of 75.01%. Wilms et al. [78] presented a model-based 4D segmentation of lungs with large tumors in 4D CT datasets, in which a 4D statistical shape model is fitted to the 4D image sequence while respecting inter- and intra-patient variation. Ardila et al. proposed a DL model that extracts data from a patient's prior and current CT images to predict the risk of developing bronchogenic carcinoma [162]. This model had high accuracy when applied to lung cancer screening trial cases and to an independent validation group. They compared their results with a group of six radiologists; interestingly, their model was comparable to the radiologists in the evaluation of prior and recent CT images, but it outperformed the radiologists when evaluating the recent CT image only. Li et al. [163] evaluated the diagnostic performance of a commercial CAD software program called InferRead CT Lung Research (ICLR), which is based on a 3D CNN. They found that ICLR had high accuracy in risk prediction of bronchogenic carcinoma, unlike benign or metastatic lesions. One recent study [164] utilized a 2-level classification of pulmonary nodules into benign and malignant, with further subdivision of malignant nodules into serious and mild malignant nodules, using a CNN with transfer learning, attaining high accuracy similar to other published research.
Other studies have been more concerned with correlating the morphological features of pulmonary nodules with the fingerprint of genetic mutations of pathological types of lung cancer (radiogenomics). This is particularly important in assessing the success of gene-inhibiting therapy [164,165,166,167,168].
Regarding diagnostic performance, a number of studies have shown that deep learning is superior to classical ML models, owing to the self-learning capabilities of the former [78,149,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175]. Song et al. [176] compared three types of neural networks: a convolutional neural network, a deep neural network, and a stacked autoencoder (SAE). They found that the CNN had the highest accuracy (84.15%), while another more recent study showed high accuracy (AUC of 0.99) using a CNN-based DL approach called NoduleX [177]. Table 3 presents a summary of state-of-the-art pulmonary nodule classification systems.

5. Limitations and Future Prospects

The scale of the dataset used to train a CNN model is a crucial factor in determining whether it will be a good model [182]. Collecting a large number of annotated images can be a years-long process or even impossible owing to the nature of medical imaging. To overcome this problem, large public datasets have been introduced. Another solution is to artificially generate datasets similar to those used in CNN training; one example is the generative adversarial network (GAN) [133]. A further suggested solution is transfer learning: the transfer model and LeNet5 have been suggested for conditions where large datasets are not available. Transfer learning simply uses pre-existing data from a source task to analyze data obtained from a target task, which is useful when the target task has few datasets [183]. A recent study used a CNN and LeNet5 to classify pulmonary nodules into benign or malignant, with further sub-classification of various types of malignancy [184]. A limitation that comes along with data sharing and transfer is the legal aspect of patient privacy. Another limitation is the lack of uniform terminology among radiologists (for example, when to describe a nodule as subsolid or non-solid) or among pathologists (minimally invasive carcinoma versus carcinoma in situ), which leads to non-uniform labelling of data and may affect the trained model. The solution to this problem will be the creation of a data-reporting system to unify medical terms, as was done with BI-RADS and LI-RADS. In clinical practice, radiologists usually benefit from clinical data to direct the differential diagnosis and reach a proper decision; however, most of the available algorithms depend only on features derived from the images, with little or no consideration of clinical data such as age or the presence or absence of risk factors (e.g., smoking). Algorithms that combine clinical and imaging data are the solution to this limitation [185]. Finally, many algorithms and models have been proposed, but they lack generalizability and are used mainly in research.
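A transfer learning setup of the kind described here can be sketched as follows: a backbone pretrained on natural images is frozen and only a new classification head is trained on the small target dataset. The use of ResNet-18 and the binary benign/malignant head are assumptions for illustration, not the configuration used in the cited studies.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 and freeze its weights (feature extractor).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a new benign/malignant head; only this head is trained,
# which is useful when only a small number of labelled CT patches is available.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

# Illustrative forward pass on 2D nodule patches replicated to three channels.
patch = torch.randn(4, 3, 224, 224)
logits = backbone(patch)
```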

6. Conclusions

AI and its multiple arms, including CAD, ML, and DL, are used to design complex algorithms to detect and further characterize pulmonary nodules in order to predict malignancy risk. Over the last decade, a large number of radiomic features and artificial networks have been proposed, each with its own advantages and drawbacks; to date, no specific method has gained wide acceptance for application to the general population.

Author Contributions

Conceptualization, D.F., H.K., A.K., M.Y., M.G., A.S., A.M. and A.E.-B.; Project administration, A.E.-B.; Supervision, A.E.-B.; Writing—original draft, D.F., H.K., A.K., M.Y., M.G., A.S., A.M. and A.E.-B.; Writing—review & editing, D.F., H.K., A.K., M.Y., M.G., A.S., A.M. and A.E.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by Abu Dhabi’s Advanced Technology Research Council via the ASPIRE Award for Research Excellence program.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This research is supported by Abu Dhabi’s Advanced Technology Research Council via the ASPIRE Award for Research Excellence program.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HU: Hounsfield Unit
ABM: Adaptive Border Marching
A-CNN: Amalgamated Convolutional Neural Network
ASM: Active Shape Model
CAD: Computer-Aided Diagnosis
CADe: Computer-Aided Detection System
CADx: Computer-Aided Diagnosis System
DL: Deep Learning
CNN: Convolutional Neural Network
MV-CNN: Multi-View CNN
ML-CNN: Multi-Level CNN
AHSN: Angular Histograms of Surface Normals
CPM: Competition Performance Metric
CT: Computed Tomography
CV: Chan–Vese
DBN: Deep Belief Network
DCNN: Deep Convolutional Neural Network
DNN: Deep Neural Network
ELM: Extreme Learning Machine
FLD: Fisher Linear Discriminant
FPN: False Positive Nodule
GAN: Generative Adversarial Network
GGO: Ground Glass Opacity
GGN: Ground Glass Nodule
ICLR: InferRead CT Lung Research
KB: Knowledge Bank
k-NN: k-Nearest Neighbor
LDA: Linear Discriminant Analysis
LDCT: Low-Dose Computed Tomography
LIDC-IDRI: Lung Image Database Consortium and Image Database Resource Initiative
MGRF: Markov Gibbs Random Field
ML: Machine Learning
MPP: Multi Player Perception
NNE: Neural Network Ensemble
PNN: Probabilistic Neural Network
RASM: Robust Active Shape Model
ROI: Region of Interest
RPCA: Robust Principal Component Analysis
SAE: Stacked Autoencoder
SS-ELM: Semi-Supervised Extreme Learning Machine
SVM: Support Vector Machine
TPN: True Positive Nodule
AUC: Area Under the Curve
IA: Invasive Adenocarcinoma
MTANN: Massive Training Artificial Neural Network
NCI: National Cancer Institute
SVHN: Street View House Numbers Dataset
LASSO: Least Absolute Shrinkage and Selection Operator
AAH: Atypical Adenomatous Hyperplasia
MIA: Minimally Invasive Adenocarcinoma
AIS: Adenocarcinoma in Situ
GLCM: Gray-Level Co-occurrence Matrix
EM: Expectation–Maximization Method
DSC: Dice Similarity Coefficient
Inf-Net: COVID-19-Infected Lung Segmentation Convolutional Neural Network
Semi-Inf-Net: Semi-Supervised Inf-Net
ALVD: Absolute Lung Volume Difference
BHD: Bidirectional Hausdorff Distance
HCRF: Hidden Conditional Random Field
SCPM-Net: Sphere Center-Points Matching Detection Network
SD-U-Net: Squeeze and Attention, and Dense Atrous Spatial Pyramid Pooling U-Net

References

  1. American Cancer Society: Cancer Facts and Figures 2017. Available online: https://www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2017/cancer-facts-and-figures-2017.pdf (accessed on 13 November 2021).
  2. Centers for Disease Control and Prevention (CDC): Smoking and Tobacco Use: Secondhand Smoke (SHS) Facts. Available online: https://www.cdc.gov/tobacco/data_statistics/fact_sheets/secondhand_smoke/general_facts/index.htm (accessed on 11 November 2021).
  3. Madsen, L.R.; Krarup, N.H.V.; Bergmann, T.K.; Bærentzen, S.; Neghabat, S.; Duval, L.; Knudsen, S.T. A cancer that went up in smoke: Pulmonary reaction to e-cigarettes imitating metastatic cancer. Chest 2016, 149, e65–e67. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Jenks, S. Is Lung Cancer Incidence Increasing Among Never-Smokers? Jnci J. Natl. Cancer Inst. 2016, 108, djv418. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Coté, M.L.; Liu, M.; Bonassi, S.; Neri, M.; Schwartz, A.G.; Christiani, D.C.; Spitz, M.R.; Muscat, J.E.; Rennert, G.; Aben, K.K.; et al. Increased risk of lung cancer in individuals with a family history of the disease: A pooled analysis from the International Lung Cancer Consortium. Eur. J. Cancer 2012, 48, 1957–1968. [Google Scholar] [CrossRef] [PubMed]
  6. de Torres, J.P.; Wilson, D.O.; Sanchez-Salcedo, P.; Weissfeld, J.L.; Berto, J.; Campo, A.; Alcaide, A.B.; García-Granero, M.; Celli, B.R.; Zulueta, J.J. Lung cancer in patients with chronic obstructive pulmonary disease. Development and validation of the COPD Lung Cancer Screening Score. Am. J. Respir. Crit. Care Med. 2015, 191, 285–291. [Google Scholar] [CrossRef] [Green Version]
  7. Zhai, K.; Ding, J.; Shi, H.Z. Author’s Reply to “Comments on HPV and Lung Cancer Risk: A Meta-Analysis” [J. Clin. Virol. (In Press)]. J. Clin. Virol. Off. Publ. Pan Am. Soc. Clin. Virol. 2015, 63, 92–93. [Google Scholar] [CrossRef]
  8. National Lung Screening Trial Research Team. The National Lung Screening Trial: Overview and study design. Radiology 2011, 258, 243–253. [Google Scholar]
  9. Global Resource for Advancing Cancer Education: Lung Cancer Screening, Part I: The Arguments for CT Screening. Available online: http://cancergrace.org/lung/2007/01/23/ct-screening-for-lung-ca-advantages/ (accessed on 14 November 2021).
  10. Ather, S.; Kadir, T.; Gleeson, F. Artificial intelligence and radiomics in pulmonary nodule management: Current status and future applications. Clin. Radiol. 2020, 75, 13–19. [Google Scholar] [CrossRef] [Green Version]
  11. Prabhakar, B.; Shende, P.; Augustine, S. Current trends and emerging diagnostic techniques for lung cancer. Biomed. Pharmacother. 2018, 106, 1586–1599. [Google Scholar] [CrossRef] [PubMed]
  12. Firmino, M.; Morais, A.H.; Mendoça, R.M.; Dantas, M.R.; Hekis, H.R.; Valentim, R. Computer-aided detection system for lung cancer in computed tomography scans: Review and future prospects. Biomed. Eng. Online 2014, 13, 1–16. [Google Scholar] [CrossRef] [Green Version]
  13. Hu, S.; Hoffman, E.A.; Reinhardt, J.M. Automatic lung segmentation for accurate quantitation of volumetric X-ray CT images. IEEE Trans. Med. Imaging 2001, 20, 490–498. [Google Scholar] [CrossRef] [PubMed]
  14. Ukil, S.; Reinhardt, J.M. Anatomy-guided lung lobe segmentation in X-ray CT images. IEEE Trans. Med. Imaging 2008, 28, 202–214. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Van Rikxoort, E.M.; De Hoop, B.; Van De Vorst, S.; Prokop, M.; Van Ginneken, B. Automatic segmentation of pulmonary segments from volumetric chest CT scans. IEEE Trans. Med. Imaging 2009, 28, 621–630. [Google Scholar] [CrossRef] [PubMed]
  16. Armato, S.G.; Giger, M.L.; Moran, C.J.; Blackburn, J.T.; Doi, K.; MacMahon, H. Computerized detection of pulmonary nodules on CT scans. Radiographics 1999, 19, 1303–1311. [Google Scholar] [CrossRef] [PubMed]
  17. Armato III, S.G.; Sensakovic, W.F. Automated lung segmentation for thoracic CT: Impact on computer-aided diagnosis1. Acad. Radiol. 2004, 11, 1011–1021. [Google Scholar] [CrossRef] [PubMed]
  18. Pu, J.; Roos, J.; Chin, A.Y.; Napel, S.; Rubin, G.D.; Paik, D.S. Adaptive border marching algorithm: Automatic lung segmentation on chest CT images. Comput. Med. Imaging Graph. 2008, 32, 452–462. [Google Scholar] [CrossRef] [Green Version]
  19. Gao, Q.; Wang, S.; Zhao, D.; Liu, J. Accurate lung segmentation for X-ray CT images. In Proceedings of the Third International Conference on Natural Computation (ICNC 2007), Haikou, China, 24–27 August 2007; Volume 2, pp. 275–279. [Google Scholar]
  20. Wei, Q.; Hu, Y.; Gelfand, G.; MacGregor, J.H. Segmentation of lung lobes in high-resolution isotropic CT images. IEEE Trans. Biomed. Eng. 2009, 56, 1383–1393. [Google Scholar] [PubMed]
  21. Ye, X.; Lin, X.; Dehmeshki, J.; Slabaugh, G.; Beddoe, G. Shape-based computer-aided detection of lung nodules in thoracic CT images. IEEE Trans. Biomed. Eng. 2009, 56, 1810–1820. [Google Scholar] [PubMed] [Green Version]
  22. Itai, Y.; Kim, H.; Ishikawa, S.; Katsuragawa, S.; Ishida, T.; Nakamura, K.; Yamamoto, A. Automatic segmentation of lung areas based on SNAKES and extraction of abnormal areas. In Proceedings of the 17th IEEE International Conference on Tools with Artificial Intelligence (ICTAI’05), Hong Kong, China, 14–16 November 2005; p. 5. [Google Scholar]
  23. Silveira, M.; Marques, J. Automatic segmentation of the lungs using multiple active contours and outlier model. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 3122–3125. [Google Scholar]
  24. Silveira, M.; Nascimento, J.; Marques, J. Automatic segmentation of the lungs using robust level sets. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; pp. 4414–4417. [Google Scholar]
  25. Rani, K.V.; Jawhar, S. Emerging trends in lung cancer detection scheme—A review. Int. J. Res. Anal. Rev. 2018, 5, 530–542. [Google Scholar]
  26. Mansoor, A.; Bagci, U.; Foster, B.; Xu, Z.; Papadakis, G.Z.; Folio, L.R.; Udupa, J.K.; Mollura, D.J. Segmentation and image analysis of abnormal lungs at CT: Current approaches, challenges, and future trends. Radiographics 2015, 35, 1056–1076. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Sun, S.; Bauer, C.; Beichel, R. Automated 3-D segmentation of lungs with lung cancer in CT data using a novel robust active shape model approach. IEEE Trans. Med. Imaging 2011, 31, 449–460. [Google Scholar] [PubMed] [Green Version]
  28. Li, K.; Wu, X.; Chen, D.Z.; Sonka, M. Optimal surface segmentation in volumetric images-a graph-theoretic approach. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 28, 119–134. [Google Scholar]
  29. Sofka, M.; Wetzl, J.; Birkbeck, N.; Zhang, J.; Kohlberger, T.; Kaftan, J.; Declerck, J.; Zhou, S.K. Multi-stage learning for robust lung segmentation in challenging CT volumes. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Toronto, ON, Canada, 18–22 September 2011; pp. 667–674. [Google Scholar]
  30. Hua, P.; Song, Q.; Sonka, M.; Hoffman, E.A.; Reinhardt, J.M. Segmentation of pathological and diseased lung tissue in CT images using a graph-search algorithm. In Proceedings of the 2011 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011; pp. 2072–2075. [Google Scholar]
  31. Kockelkorn, T.T.; van Rikxoort, E.M.; Grutters, J.C.; van Ginneken, B. Interactive lung segmentation in CT scans with severe abnormalities. In Proceedings of the 2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Rotterdam, The Netherlands, 14–17 April 2010; pp. 564–567. [Google Scholar]
  32. El-Baz, A.; Gimel’farb, G.; Falk, R.; El-Ghar, M.A. A novel three-dimensional framework for automatic lung segmentation from low dose computed tompgraphy images. In Lung Imaging and Computer Aided Diagnosis; El-Baz, A., Suri, J., Eds.; CRC Press: Boca Raton, FL, USA, 2011; pp. 1–15. [Google Scholar]
  33. El-Ba, A.; Gimel’farb, G.; Falk, R.; Holland, T.; Shaffer, T. A new stochastic framework for accurate lung segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, New York, NY, USA, 6–10 September 2008; pp. 322–330. [Google Scholar]
  34. El-Baz, A.; Gimel’farb, G.L.; Falk, R.; Holland, T.; Shaffer, T. A Framework for Unsupervised Segmentation of Lung Tissues from Low Dose Computed Tomography Images. In Proceedings of the BMVC, Aberystwyth, UK, 31 August–3 September 2008; pp. 1–10. [Google Scholar]
  35. Chung, H.; Ko, H.; Jeon, S.J.; Yoon, K.H.; Lee, J. Automatic lung segmentation with juxta-pleural nodule identification using active contour model and bayesian approach. IEEE J. Transl. Eng. Health Med. 2018, 6, 1–13. [Google Scholar] [CrossRef] [PubMed]
  36. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Sun, S.; Ren, H.; Dan, T.; Wei, W. 3D segmentation of lungs with juxta-pleural tumor using the improved active shape model approach. Technol. Health Care 2021, 29, 385–398. [Google Scholar] [CrossRef]
  38. Adams, R.; Bischof, L. Seeded region growing. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 641–647. [Google Scholar] [CrossRef] [Green Version]
  39. Hojjatoleslami, S.; Kittler, J. Region growing: A new approach. IEEE Trans. Image Process. 1998, 7, 1079–1084. [Google Scholar] [CrossRef] [Green Version]
  40. Pavlidis, T.; Liow, Y.T. Integrating region growing and edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1990, 12, 225–233. [Google Scholar] [CrossRef]
  41. Tremeau, A.; Borel, N. A region growing and merging algorithm to color segmentation. Pattern Recognit. 1997, 30, 1191–1203. [Google Scholar] [CrossRef]
  42. Zhu, S.C.; Yuille, A. Region competition: Unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 884–900. [Google Scholar]
  43. Mangan, A.P.; Whitaker, R.T. Partitioning 3D surface meshes using watershed segmentation. IEEE Trans. Vis. Comput. Graph. 1999, 5, 308–321. [Google Scholar] [CrossRef] [Green Version]
  44. Grady, L. Random walks for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1768–1783. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  45. Boykov, Y.; Jolly, M.P. Interactive organ segmentation using graph cuts. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Pittsburgh, PA, USA, 11–14 October 2000; pp. 276–286. [Google Scholar]
  46. Udupa, J. Fuzzy connectedness and object definition: Theory, algorithms, and applications in image segmentation. Graph. Model. Image Process. 1999, 9, 85–90. [Google Scholar]
  47. Song, Y.; Cai, W.; Zhou, Y.; Feng, D.D. Feature-based image patch approximation for lung tissue classification. IEEE Trans. Med. Imaging 2013, 32, 797–808. [Google Scholar] [CrossRef] [PubMed]
  48. Xu, Y.; Sonka, M.; McLennan, G.; Guo, J.; Hoffman, E.A. MDCT-based 3-D texture classification of emphysema and early smoking related lung pathologies. IEEE Trans. Med. Imaging 2006, 25, 464–475. [Google Scholar] [CrossRef] [PubMed]
  49. Yao, J.; Dwyer, A.; Summers, R.M.; Mollura, D.J. Computer-aided diagnosis of pulmonary infections using texture analysis and support vector machine classification. Acad. Radiol. 2011, 18, 306–314. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Korfiatis, P.D.; Karahaliou, A.N.; Kazantzi, A.D.; Kalogeropoulou, C.; Costaridou, L.I. Texture-based identification and characterization of interstitial pneumonia patterns in lung multidetector CT. IEEE Trans. Inf. Technol. Biomed. 2009, 14, 675–680. [Google Scholar] [CrossRef]
  51. Bagci, U.; Yao, J.; Wu, A.; Caban, J.; Palmore, T.N.; Suffredini, A.F.; Aras, O.; Mollura, D.J. Automatic detection and quantification of tree-in-bud (TIB) opacities from CT scans. IEEE Trans. Biomed. Eng. 2012, 59, 1620–1632. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  52. Mansoor, A.; Bagci, U.; Xu, Z.; Foster, B.; Olivier, K.N.; Elinoff, J.M.; Suffredini, A.F.; Udupa, J.K.; Mollura, D.J. A generic approach to pathological lung segmentation. IEEE Trans. Med. Imaging 2014, 33, 2293–2310. [Google Scholar] [CrossRef] [Green Version]
  53. Van Rikxoort, E.M.; Van Ginneken, B. Automated segmentation of pulmonary structures in thoracic computed tomography scans: A review. Phys. Med. Biol. 2013, 58, R187. [Google Scholar] [CrossRef] [PubMed]
  54. Bağci, U.; Yao, J.; Caban, J.; Palmore, T.N.; Suffredini, A.F.; Mollura, D.J. Automatic detection of tree-in-bud patterns for computer assisted diagnosis of respiratory tract infections. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 5096–5099. [Google Scholar]
  55. Bagci, U.; Yao, J.; Caban, J.; Suffredini, A.F.; Palmore, T.N.; Mollura, D.J. Learning shape and texture characteristics of CT tree-in-bud opacities for CAD systems. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Toronto, ON, Canada, 18–22 September 2011; pp. 215–222. [Google Scholar]
  56. Caban, J.J.; Yao, J.; Bagci, U.; Mollura, D.J. Monitoring pulmonary fibrosis by fusing clinical, physiological, and computed tomography features. In Proceedings of the 2011 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Boston, MA, USA, 30 August–3 September 2011; pp. 6216–6219. [Google Scholar]
  57. Korfiatis, P.; Kalogeropoulou, C.; Karahaliou, A.; Kazantzi, A.; Skiadopoulos, S.; Costaridou, L. Texture classification-based segmentation of lung affected by interstitial pneumonia in high-resolution CT. Med. Phys. 2008, 35, 5290–5302. [Google Scholar] [CrossRef]
  58. Wang, J.; Li, F.; Li, Q. Automated segmentation of lungs with severe interstitial lung disease in CT. Med. Phys. 2009, 36, 4592–4599. [Google Scholar] [CrossRef] [Green Version]
  59. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef] [Green Version]
  60. Sharafeldeen, A.; Elsharkawy, M.; Khalifa, F.; Soliman, A.; Ghazal, M.; AlHalabi, M.; Yaghi, M.; Alrahmawy, M.; Elmougy, S.; Sandhu, H.S.; et al. Precise higher-order reflectivity and morphology models for early diagnosis of diabetic retinopathy using OCT images. Sci. Rep. 2021, 11. [Google Scholar] [CrossRef] [PubMed]
  61. Nakagomi, K.; Shimizu, A.; Kobatake, H.; Yakami, M.; Fujimoto, K.; Togashi, K. Multi-shape graph cuts with neighbor prior constraints and its application to lung segmentation from a chest CT volume. Med. Image Anal. 2013, 17, 62–77. [Google Scholar] [CrossRef]
  62. Yan, Q.; Wang, B.; Gong, D.; Luo, C.; Zhao, W.; Shen, J.; Shi, Q.; Jin, S.; Zhang, L.; You, Z. COVID-19 Chest CT Image Segmentation—A Deep Convolutional Neural Network Solution. arXiv 2020, arXiv:2004.10987. [Google Scholar] [CrossRef]
  63. Fan, D.P.; Zhou, T.; Ji, G.P.; Zhou, Y.; Chen, G.; Fu, H.; Shen, J.; Shao, L. Inf-Net: Automatic COVID-19 Lung Infection Segmentation From CT Images. IEEE Trans. Med. Imaging 2020, 39, 2626–2637. [Google Scholar] [CrossRef]
  64. Oulefki, A.; Agaian, S.; Trongtirakul, T.; Laouar, A.K. Automatic COVID-19 lung infected region segmentation and measurement using CT-scans images. Pattern Recognit. 2021, 114, 107747. [Google Scholar] [CrossRef] [PubMed]
  65. Sharafeldeen, A.; Elsharkawy, M.; Alghamdi, N.S.; Soliman, A.; El-Baz, A. Precise Segmentation of COVID-19 Infected Lung from CT Images Based on Adaptive First-Order Appearance Model with Morphological/Anatomical Constraints. Sensors 2021, 21, 5482. [Google Scholar] [CrossRef] [PubMed]
  66. Zhao, C.; Xu, Y.; He, Z.; Tang, J.; Zhang, Y.; Han, J.; Shi, Y.; Zhou, W. Lung segmentation and automatic detection of COVID-19 using radiomic features from chest CT images. Pattern Recognit. 2021, 119, 108071. [Google Scholar] [CrossRef] [PubMed]
  67. Sousa, J.; Pereira, T.; Silva, F.; Silva, M.C.; Vilares, A.T.; Cunha, A.; Oliveira, H.P. Lung Segmentation in CT Images: A Residual U-Net Approach on a Cross-Cohort Dataset. Appl. Sci. 2022, 12, 1959. [Google Scholar] [CrossRef]
  68. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Lecture Notes in Computer Science; Springer: Berlin, Germany, 2015; pp. 234–241. [Google Scholar] [CrossRef] [Green Version]
  69. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  70. Kim, H.M.; Ko, T.; Choi, I.Y.; Myong, J.P. Asbestosis diagnosis algorithm combining the lung segmentation method and deep learning model in computed tomography image. Int. J. Med. Inform. 2022, 158, 104667. [Google Scholar] [CrossRef] [PubMed]
  71. Miettinen, O.S.; Henschke, C.I. CT screening for lung cancer: Coping with nihilistic recommendations. Radiology 2001, 221, 592–596. [Google Scholar] [CrossRef] [PubMed]
  72. Henschke, C.I.; Naidich, D.P.; Yankelevitz, D.F.; McGuinness, G.; McCauley, D.I.; Smith, J.P.; Libby, D.; Pasmantier, M.; Vazquez, M.; Koizumi, J.; et al. Early Lung Cancer Action Project: Initial findings on repeat screening. Cancer 2001, 92, 153–159. [Google Scholar] [CrossRef]
  73. Swensen, S.J.; Jett, J.R.; Hartman, T.E.; Midthun, D.E.; Sloan, J.A.; Sykes, A.M.; Aughenbaugh, G.L.; Clemens, M.A. Lung cancer screening with CT: Mayo Clinic experience. Radiology 2003, 226, 756–761. [Google Scholar] [CrossRef]
  74. Rusinek, H.; Naidich, D.P.; McGuinness, G.; Leitman, B.S.; McCauley, D.I.; Krinsky, G.A.; Clayton, K.; Cohen, H. Pulmonary nodule detection: Low-dose versus conventional CT. Radiology 1998, 209, 243–249. [Google Scholar] [CrossRef]
  75. Garg, K.; Keith, R.L.; Byers, T.; Kelly, K.; Kerzner, A.L.; Lynch, D.A.; Miller, Y.E. Randomized controlled trial with low-dose spiral CT for lung cancer screening: Feasibility study and preliminary results. Radiology 2002, 225, 506–510. [Google Scholar] [CrossRef]
  76. Nawa, T.; Nakagawa, T.; Kusano, S.; Kawasaki, Y.; Sugawara, Y.; Nakata, H. Lung cancer screening using low-dose spiral CT: Results of baseline and 1-year follow-up studies. Chest 2002, 122, 15–20. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  77. Sone, S.; Li, F.; Yang, Z.; Honda, T.; Maruyama, Y.; Takashima, S.; Hasegawa, M.; Kawakami, S.; Kubo, K.; Haniuda, M.; et al. Results of three-year mass screening programme for lung cancer using mobile low-dose spiral computed tomography scanner. Br. J. Cancer 2001, 84, 25–32. [Google Scholar] [CrossRef] [PubMed]
  78. Way, T.W.; Hadjiiski, L.M.; Sahiner, B.; Chan, H.P.; Cascade, P.N.; Kazerooni, E.A.; Bogot, N.; Zhou, C. Computer-aided diagnosis of pulmonary nodules on CT scans: Segmentation and classification using 3D active contours. Med. Phys. 2006, 33, 2323–2337. [Google Scholar] [CrossRef] [Green Version]
  79. Tandon, Y.K.; Bartholmai, B.J.; Koo, C.W. Putting artificial intelligence (AI) on the spot: Machine learning evaluation of pulmonary nodules. J. Thorac. Dis. 2020, 12, 6954. [Google Scholar] [CrossRef]
  80. Armato, S.G., III; McLennan, G.; Bidaut, L.; McNitt-Gray, M.F.; Meyer, C.R.; Reeves, A.P.; Zhao, B.; Aberle, D.R.; Henschke, C.I.; Hoffman, E.A.; et al. The lung image database consortium (LIDC) and image database resource initiative (IDRI): A completed reference database of lung nodules on CT scans. Med. Phys. 2011, 38, 915–931. [Google Scholar] [CrossRef] [PubMed]
  81. Gu, Y.; Chi, J.; Liu, J.; Yang, L.; Zhang, B.; Yu, D.; Zhao, Y.; Lu, X. A survey of computer-aided diagnosis of lung nodules from CT scans using deep learning. Comput. Biol. Med. 2021, 137, 104806. [Google Scholar] [CrossRef]
  82. Chang, S.; Emoto, H.; Metaxas, D.N.; Axel, L. Pulmonary micronodule detection from 3D chest CT. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Saint-Malo, France, 26–29 September 2004; pp. 821–828. [Google Scholar]
  83. Takizawa, H.; Shigemoto, K.; Yamamoto, S.; Matsumoto, T.; Tateno, Y.; Iinuma, T.; Matsumoto, M. A recognition method of lung nodule shadows in X-Ray CT images using 3D object models. Int. J. Image Graph. 2003, 3, 533–545. [Google Scholar] [CrossRef]
  84. Li, Q.; Doi, K. New selective nodule enhancement filter and its application for significant improvement of nodule detection on computed tomography. In Proceedings of the Medical Imaging 2004: Image Processing. International Society for Optics and Photonics, San Diego, CA, USA, 16–19 February 2004; Volume 5370, pp. 1–9. [Google Scholar]
  85. Paik, D.S.; Beaulieu, C.F.; Rubin, G.D.; Acar, B.; Jeffrey, R.B.; Yee, J.; Dey, J.; Napel, S. Surface normal overlap: A computer-aided detection algorithm with application to colonic polyps and lung nodules in helical CT. IEEE Trans. Med. Imaging 2004, 23, 661–675. [Google Scholar] [CrossRef]
  86. Mendonça, P.R.; Bhotika, R.; Sirohey, S.A.; Turner, W.D.; Miller, J.V.; Avila, R.S. Model-based analysis of local shape for lesion detection in CT scans. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Palm Springs, CA, USA, 26–29 October 2005; pp. 688–695. [Google Scholar]
  87. Lee, Y.; Hara, T.; Fujita, H.; Itoh, S.; Ishigaki, T. Automated detection of pulmonary nodules in helical CT images based on an improved template-matching technique. IEEE Trans. Med. Imaging 2001, 20, 595–604. [Google Scholar] [PubMed]
  88. Wiemker, R.; Rogalla, P.; Zwartkruis, A.; Blaffert, T. Computer-aided lung nodule detection on high-resolution CT data. In Proceedings of the Medical Imaging 2002: Image Processing. International Society for Optics and Photonics, San Diego, CA, USA, 23–28 February 2002; Volume 4684, pp. 677–688. [Google Scholar]
  89. Kostis, W.J.; Reeves, A.P.; Yankelevitz, D.F.; Henschke, C.I. Three-dimensional segmentation and growth-rate estimation of small pulmonary nodules in helical CT images. IEEE Trans. Med. Imaging 2003, 22, 1259–1274. [Google Scholar] [CrossRef] [PubMed]
  90. Gurcan, M.N.; Sahiner, B.; Petrick, N.; Chan, H.P.; Kazerooni, E.A.; Cascade, P.N.; Hadjiiski, L. Lung nodule detection on thoracic computed tomography images: Preliminary evaluation of a computer-aided diagnosis system. Med. Phys. 2002, 29, 2552–2558. [Google Scholar] [CrossRef]
  91. Kanazawa, K.; Kawata, Y.; Niki, N.; Satoh, H.; Ohmatsu, H.; Kakinuma, R.; Kaneko, M.; Moriyama, N.; Eguchi, K. Computer-aided diagnosis for pulmonary nodules based on helical CT images. Comput. Med. Imaging Graph. 1998, 22, 157–167. [Google Scholar] [CrossRef]
  92. Kawata, Y.; Niki, N.; Ohmatsu, H.; Kusumoto, M.; Kakinuma, R.; Mori, K.; Nishiyama, H.; Eguchi, K.; Kaneko, M.; Moriyama, N. Computer-aided diagnosis of pulmonary nodules using three-dimensional thoracic CT images. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Utrecht, The Netherlands, 27 September–1 October 2001; pp. 1393–1394. [Google Scholar]
  93. Betke, M.; Ko, J.P. Detection of pulmonary nodules on CT and volumetric assessment of change over time. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Cambridge, UK, 19–22 September 1999; pp. 245–252. [Google Scholar]
  94. Kubo, M.; Kubota, K.; Yamada, N.; Kawata, Y.; Niki, N.; Eguchi, K.; Ohmatsu, H.; Kakinuma, R.; Kaneko, M.; Kusumoto, M.; et al. CAD system for lung cancer based on low-dose single-slice CT image. In Proceedings of the Medical Imaging 2002: Image Processing. International Society for Optics and Photonics, San Diego, CA, USA, 19–25 January 2002; Volume 4684, pp. 1262–1269. [Google Scholar]
  95. Oda, T.; Kubo, M.; Kawata, Y.; Niki, N.; Eguchi, K.; Ohmatsu, H.; Kakinuma, R.; Kaneko, M.; Kusumoto, M.; Moriyama, N.; et al. Detection algorithm of lung cancer candidate nodules on multislice CT images. In Proceedings of the Medical Imaging 2002: Image Processing. International Society for Optics and Photonics, San Diego, CA, USA, 19–25 January 2002; Volume 4684, pp. 1354–1361. [Google Scholar]
  96. Saita, S.; Oda, T.; Kubo, M.; Kawata, Y.; Niki, N.; Sasagawa, M.; Ohmatsu, H.; Kakinuma, R.; Kaneko, M.; Kusumoto, M.; et al. Nodule detection algorithm based on multislice CT images for lung cancer screening. In Proceedings of the Medical Imaging 2004: Image Processing. International Society for Optics and Photonics, San Diego, CA, USA, 16–19 February 2004; Volume 5370, pp. 1083–1090. [Google Scholar]
  97. Brown, M.S.; McNitt-Gray, M.F.; Goldin, J.G.; Suh, R.D.; Sayre, J.W.; Aberle, D.R. Patient-specific models for lung nodule detection and surveillance in CT images. IEEE Trans. Med. Imaging 2001, 20, 1242–1250. [Google Scholar] [CrossRef]
  98. Messay, T.; Hardie, R.C.; Rogers, S.K. A new computationally efficient CAD system for pulmonary nodule detection in CT imagery. Med. Image Anal. 2010, 14, 390–406. [Google Scholar] [CrossRef] [PubMed]
  99. Setio, A.A.; Jacobs, C.; Gelderblom, J.; van Ginneken, B. Automatic detection of large pulmonary solid nodules in thoracic CT images. Med. Phys. 2015, 42, 5642–5653. [Google Scholar] [CrossRef] [PubMed]
  100. Wang, Z.; Xin, J.; Sun, P.; Lin, Z.; Yao, Y.; Gao, X. Improved lung nodule diagnosis accuracy using lung CT images with uncertain class. Comput. Methods Programs Biomed. 2018, 162, 197–209. [Google Scholar] [CrossRef]
  101. Baralis, E.; Chiusano, S.; Garza, P. A lazy approach to associative classification. IEEE Trans. Knowl. Data Eng. 2007, 20, 156–171. [Google Scholar] [CrossRef]
  102. Pehrson, L.M.; Nielsen, M.B.; Ammitzbøl Lauridsen, C. Automatic pulmonary nodule detection applying deep learning or machine learning algorithms to the LIDC-IDRI database: A systematic review. Diagnostics 2019, 9, 29. [Google Scholar] [CrossRef] [Green Version]
  103. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  104. Kadir, T.; Gleeson, F. Lung cancer prediction using machine learning and advanced imaging techniques. Transl. Lung Cancer Res. 2018, 7, 304. [Google Scholar] [CrossRef] [PubMed]
  105. Lee, S.M.; Seo, J.B.; Yun, J.; Cho, Y.H.; Vogel-Claussen, J.; Schiebler, M.L.; Gefter, W.B.; Van Beek, E.J.; Goo, J.M.; Lee, K.S.; et al. Deep learning applications in chest radiography and computed tomography. J. Thorac. Imaging 2019, 34, 75–85. [Google Scholar] [CrossRef] [PubMed] [Green Version]
106. Akram, S.; Javed, M.Y.; Qamar, U.; Khanum, A.; Hassan, A. Artificial neural network based classification of lungs nodule using hybrid features from computerized tomographic images. Appl. Math. Inf. Sci. 2015, 9, 183–195. [Google Scholar] [CrossRef]
  107. Choi, W.J.; Choi, T.S. Automated pulmonary nodule detection based on three-dimensional shape-based feature descriptor. Comput. Methods Programs Biomed. 2014, 113, 37–54. [Google Scholar] [CrossRef] [PubMed]
  108. Alilou, M.; Kovalev, V.; Snezhko, E.; Taimouri, V. A comprehensive framework for automatic detection of pulmonary nodules in lung CT images. Image Anal. Stereol. 2014, 33, 13–27. [Google Scholar] [CrossRef] [Green Version]
  109. Bai, J.; Huang, X.; Liu, S.; Song, Q.; Bhagalia, R. Learning orientation invariant contextual features for nodule detection in lung CT scans. In Proceedings of the 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), Brooklyn, NY, USA, 16–19 April 2015; pp. 1135–1138. [Google Scholar]
110. El-Regaily, S.A.; Salem, M.A.M.; Aziz, M.H.A.; Roushdy, M.I. Lung nodule segmentation and detection in computed tomography. In Proceedings of the 2017 Eighth International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 5–7 December 2017; pp. 72–78. [Google Scholar]
  111. Golan, R.; Jacob, C.; Denzinger, J. Lung nodule detection in CT images using deep convolutional neural networks. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 243–250. [Google Scholar]
  112. Bergtholdt, M.; Wiemker, R.; Klinder, T. Pulmonary nodule detection using a cascaded SVM classifier. In Proceedings of the Medical Imaging 2016: Computer-Aided Diagnosis. International Society for Optics and Photonics, San Diego, CA, USA, 27 February–3 March 2016; Volume 9785, p. 978513. [Google Scholar]
  113. Zhang, T.; Zhao, J.; Luo, J.; Qiang, Y. Deep belief network for lung nodules diagnosed in CT imaging. Int. J. Perform. Eng. 2017, 13, 1358. [Google Scholar] [CrossRef]
  114. Jacobs, C.; van Rikxoort, E.M.; Murphy, K.; Prokop, M.; Schaefer-Prokop, C.M.; van Ginneken, B. Computer-aided detection of pulmonary nodules: A comparative study using the public LIDC/IDRI database. Eur. Radiol. 2016, 26, 2139–2147. [Google Scholar] [CrossRef] [PubMed]
  115. Wang, Y.; Wu, B.; Zhang, N.; Liu, J.; Ren, F.; Zhao, L. Research progress of computer aided diagnosis system for pulmonary nodules in CT images. J. X-ray Sci. Technol. 2020, 28, 1–16. [Google Scholar] [CrossRef] [PubMed]
  116. McWilliams, A.; Tammemagi, M.C.; Mayo, J.R.; Roberts, H.; Liu, G.; Soghrati, K.; Yasufuku, K.; Martel, S.; Laberge, F.; Gingras, M.; et al. Probability of cancer in pulmonary nodules detected on first screening CT. N. Engl. J. Med. 2013, 369, 910–919. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  117. Horeweg, N.; Scholten, E.T.; de Jong, P.A.; van der Aalst, C.M.; Weenink, C.; Lammers, J.W.J.; Nackaerts, K.; Vliegenthart, R.; ten Haaf, K.; Yousaf-Khan, U.A.; et al. Detection of lung cancer through low-dose CT screening (NELSON): A prespecified analysis of screening test performance and interval cancers. Lancet Oncol. 2014, 15, 1342–1350. [Google Scholar] [CrossRef]
  118. Revel, M.P.; Bissery, A.; Bienvenu, M.; Aycard, L.; Lefort, C.; Frija, G. Are two-dimensional CT measurements of small noncalcified pulmonary nodules reliable? Radiology 2004, 231, 453–458. [Google Scholar] [CrossRef] [PubMed]
  119. Korst, R.J.; Lee, B.E.; Krinsky, G.A.; Rutledge, J.R. The utility of automated volumetric growth analysis in a dedicated pulmonary nodule clinic. J. Thorac. Cardiovasc. Surg. 2011, 142, 372–377. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  120. Bianconi, F.; Fravolini, M.L.; Pizzoli, S.; Palumbo, I.; Minestrini, M.; Rondini, M.; Nuvoli, S.; Spanu, A.; Palumbo, B. Comparative evaluation of conventional and deep learning methods for semi-automated segmentation of pulmonary nodules on CT. Quant. Imaging Med. Surg. 2021, 11, 3286. [Google Scholar] [CrossRef] [PubMed]
  121. Kuhnigk, J.M.; Dicken, V.; Bornemann, L.; Bakai, A.; Wormanns, D.; Krass, S.; Peitgen, H.O. Morphological segmentation and partial volume analysis for volumetry of solid pulmonary lesions in thoracic CT scans. IEEE Trans. Med. Imaging 2006, 25, 417–434. [Google Scholar] [CrossRef]
122. Dehmeshki, J.; Amin, H.; Valdivieso, M.; Ye, X. Segmentation of pulmonary nodules in thoracic CT scans: A region growing approach. IEEE Trans. Med. Imaging 2008, 27, 467–480. [Google Scholar]
  123. Tao, Y.; Lu, L.; Dewan, M.; Chen, A.Y.; Corso, J.; Xuan, J.; Salganicoff, M.; Krishnan, A. Multi-level ground glass nodule detection and segmentation in CT lung images. In Proceedings of the International Conference on Medical Image Computing and Computer—Assisted Intervention, London, UK, 20–24 September 2009; pp. 715–723. [Google Scholar]
  124. Zhou, J.; Chang, S.; Metaxas, D.N.; Zhao, B.; Ginsberg, M.S.; Schwartz, L.H. An automatic method for ground glass opacity nodule detection and segmentation from CT studies. In Proceedings of the 2006 International Conference of the IEEE Engineering in Medicine and Biology Society, Virtual, 1–5 November 2006; pp. 3062–3065. [Google Scholar]
  125. Charbonnier, J.P.; Chung, K.; Scholten, E.T.; Van Rikxoort, E.M.; Jacobs, C.; Sverzellati, N.; Silva, M.; Pastorino, U.; Van Ginneken, B.; Ciompi, F. Automatic segmentation of the solid core and enclosed vessels in subsolid pulmonary nodules. Sci. Rep. 2018, 8, 646. [Google Scholar] [CrossRef] [Green Version]
  126. Kubota, T.; Jerebko, A.K.; Dewan, M.; Salganicoff, M.; Krishnan, A. Segmentation of pulmonary nodules of various densities with morphological approaches and convexity models. Med. Image Anal. 2011, 15, 133–154. [Google Scholar] [CrossRef]
  127. Mukhopadhyay, S. A segmentation framework of pulmonary nodules in lung CT images. J. Digit. Imaging 2016, 29, 86–103. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  128. Liu, Y.; Wang, Z.; Guo, M.; Li, P. Hidden conditional random field for lung nodule detection. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014. [Google Scholar] [CrossRef]
  129. Li, Q.; Sone, S.; Doi, K. Selective enhancement filters for nodules, vessels, and airway walls in two- and three-dimensional CT scans. Med. Phys. 2003, 30, 2040–2051. [Google Scholar] [CrossRef]
  130. Quattoni, A.; Wang, S.; Morency, L.P.; Collins, M.; Darrell, T. Hidden Conditional Random Fields. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1848–1852. [Google Scholar] [CrossRef] [PubMed]
  131. Zhao, C.; Han, J.; Jia, Y.; Gou, F. Lung Nodule Detection via 3D U-Net and Contextual Convolutional Neural Network. In Proceedings of the 2018 International Conference on Networking and Network Applications (NaNA), Xi’an, China, 12–15 October 2018. [Google Scholar] [CrossRef]
132. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016; Springer: Berlin, Germany, 2016; pp. 424–432. [Google Scholar] [CrossRef] [Green Version]
  133. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8 December 2014; Volume 2, pp. 2672–2680. [Google Scholar]
  134. Luo, X.; Song, T.; Wang, G.; Chen, J.; Chen, Y.; Li, K.; Metaxas, D.N.; Zhang, S. SCPM-Net: An anchor-free 3D lung nodule detection network using sphere representation and center points matching. Med. Image Anal. 2022, 75, 102287. [Google Scholar] [CrossRef] [PubMed]
  135. Yin, S.; Deng, H.; Xu, Z.; Zhu, Q.; Cheng, J. SD-UNet: A Novel Segmentation Framework for CT Images of Lung Infections. Electronics 2022, 11, 130. [Google Scholar] [CrossRef]
  136. Gong, J.; Liu, J.; Hao, W.; Nie, S.; Zheng, B.; Wang, S.; Peng, W. A deep residual learning network for predicting lung adenocarcinoma manifesting as ground-glass nodule on CT images. Eur. Radiol. 2020, 30, 1847–1855. [Google Scholar] [CrossRef] [PubMed]
  137. Sim, Y.; Chung, M.J.; Kotter, E.; Yune, S.; Kim, M.; Do, S.; Han, K.; Kim, H.; Yang, S.; Lee, D.J.; et al. Deep convolutional neural network–based software improves radiologist detection of malignant lung nodules on chest radiographs. Radiology 2020, 294, 199–209. [Google Scholar] [CrossRef] [PubMed]
  138. Tajbakhsh, N.; Suzuki, K. Comparing two classes of end-to-end machine-learning models in lung nodule detection and classification: MTANNs vs. CNNs. Pattern Recognit. 2017, 63, 476–486. [Google Scholar] [CrossRef]
  139. Hu, X.; Gong, J.; Zhou, W.; Li, H.; Wang, S.; Wei, M.; Peng, W.; Gu, Y. Computer-aided diagnosis of ground glass pulmonary nodule by fusing deep learning and radiomics features. Phys. Med. Biol. 2021, 66, 065015. [Google Scholar] [CrossRef] [PubMed]
  140. Zwanenburg, A.; Leger, S.; Vallières, M.; Löck, S. Image biomarker standardisation initiative. arXiv 2016, arXiv:1612.07003. [Google Scholar] [CrossRef] [Green Version]
141. Sharafeldeen, A.; Elsharkawy, M.; Khaled, R.; Shaffie, A.; Khalifa, F.; Soliman, A.; Abdel Razek, A.A.K.; Hussein, M.M.; Taman, S.; Naglah, A.; et al. Texture and shape analysis of diffusion-weighted imaging for thyroid nodules classification using machine learning. Med. Phys. 2021, 49, 988–999. [Google Scholar] [CrossRef] [PubMed]
  142. Lambin, P.; Rios-Velazquez, E.; Leijenaar, R.; Carvalho, S.; Van Stiphout, R.G.; Granton, P.; Zegers, C.M.; Gillies, R.; Boellard, R.; Dekker, A.; et al. Radiomics: Extracting more information from medical images using advanced feature analysis. Eur. J. Cancer 2012, 48, 441–446. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  143. Foley, F.; Rajagopalan, S.; Raghunath, S.M.; Boland, J.M.; Karwoski, R.A.; Maldonado, F.; Bartholmai, B.J.; Peikert, T. Computer-aided nodule assessment and risk yield risk management of adenocarcinoma: The future of imaging? In Seminars in Thoracic and Cardiovascular Surgery; Elsevier: Amsterdam, The Netherlands, 2016; Volume 28, pp. 120–126. [Google Scholar]
  144. Wang, X.; Mao, K.; Wang, L.; Yang, P.; Lu, D.; He, P. An appraisal of lung nodules automatic classification algorithms for CT images. Sensors 2019, 19, 194. [Google Scholar] [CrossRef] [Green Version]
  145. Li, M.; Narayan, V.; Gill, R.R.; Jagannathan, J.P.; Barile, M.F.; Gao, F.; Bueno, R.; Jayender, J. Computer-aided diagnosis of ground-glass opacity nodules using open-source software for quantifying tumor heterogeneity. Am. J. Roentgenol. 2017, 209, 1216. [Google Scholar] [CrossRef]
  146. Fan, L.; Fang, M.; Li, Z.; Tu, W.; Wang, S.; Chen, W.; Tian, J.; Dong, D.; Liu, S. Radiomics signature: A biomarker for the preoperative discrimination of lung invasive adenocarcinoma manifesting as a ground-glass nodule. Eur. Radiol. 2019, 29, 889–897. [Google Scholar] [CrossRef] [PubMed]
  147. Madero Orozco, H.; Vergara Villegas, O.O.; Cruz Sánchez, V.G.; Ochoa Domínguez, H.D.J.; Nandayapa Alfaro, M.D.J. Automated system for lung nodules classification based on wavelet feature descriptor and support vector machine. Biomed. Eng. Online 2015, 14, 9. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  148. Dehmeshki, J.; Ye, X.; Costello, J. Shape based region growing using derivatives of 3D medical images: Application to semiautomated detection of pulmonary nodules. In Proceedings of the 2003 International Conference on Image Processing, Barcelona, Spain, 14–17 September 2003; Volume 1, pp. I-1085–I-1088. [Google Scholar]
  149. Kumar, D.; Wong, A.; Clausi, D.A. Lung nodule classification using deep features in CT images. In Proceedings of the 2015 12th Conference on Computer and Robot Vision, Halifax, NS, Canada, 3–5 June 2015; pp. 133–138. [Google Scholar]
  150. Li, Q.; Balagurunathan, Y.; Liu, Y.; Qi, J.; Schabath, M.B.; Ye, Z.; Gillies, R.J. Comparison between radiological semantic features and lung-RADS in predicting malignancy of screen-detected lung nodules in the National Lung Screening Trial. Clin. Lung Cancer 2018, 19, 148–156. [Google Scholar] [CrossRef] [Green Version]
  151. Liu, A.; Wang, Z.; Yang, Y.; Wang, J.; Dai, X.; Wang, L.; Lu, Y.; Xue, F. Preoperative diagnosis of malignant pulmonary nodules in lung cancer screening with a radiomics nomogram. Cancer Commun. 2020, 40, 16–24. [Google Scholar] [CrossRef] [Green Version]
  152. Zhao, W.; Yang, J.; Sun, Y.; Li, C.; Wu, W.; Jin, L.; Yang, Z.; Ni, B.; Gao, P.; Wang, P.; et al. 3D deep learning from CT scans predicts tumor invasiveness of subcentimeter pulmonary adenocarcinomas. Cancer Res. 2018, 78, 6881–6889. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  153. Wang, J.; Chen, X.; Lu, H.; Zhang, L.; Pan, J.; Bao, Y.; Su, J.; Qian, D. Feature-shared adaptive-boost deep learning for invasiveness classification of pulmonary subsolid nodules in CT images. Med. Phys. 2020, 47, 1738–1749. [Google Scholar] [CrossRef] [PubMed]
  154. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  155. Xia, X.; Gong, J.; Hao, W.; Yang, T.; Lin, Y.; Wang, S.; Peng, W. Comparison and fusion of deep learning and radiomics features of ground-glass nodules to predict the invasiveness risk of stage-I lung adenocarcinomas in CT scan. Front. Oncol. 2020, 10, 418. [Google Scholar] [CrossRef]
  156. Uthoff, J.; Stephens, M.J.; Newell, J.D., Jr.; Hoffman, E.A.; Larson, J.; Koehn, N.; De Stefano, F.A.; Lusk, C.M.; Wenzlaff, A.S.; Watza, D.; et al. Machine learning approach for distinguishing malignant and benign lung nodules utilizing standardized perinodular parenchymal features from CT. Med. Phys. 2019, 46, 3207–3216. [Google Scholar] [CrossRef] [PubMed]
  157. Shen, W.; Zhou, M.; Yang, F.; Yang, C.; Tian, J. Multi-scale convolutional neural networks for lung nodule classification. In Proceedings of the International Conference on Information Processing in Medical Imaging; 2015; pp. 588–599. [Google Scholar]
  158. Nibali, A.; He, Z.; Wollersheim, D. Pulmonary nodule classification with deep residual networks. Int. J. Comput. Assist. Radiol. Surg. 2017, 12, 1799–1808. [Google Scholar] [CrossRef] [PubMed]
  159. Liu, X.; Hou, F.; Qin, H.; Hao, A. Multi-view multi-scale CNNs for lung nodule type classification from CT images. Pattern Recognit. 2018, 77, 262–275. [Google Scholar] [CrossRef]
  160. Chen, H.; Wu, W.; Xia, H.; Du, J.; Yang, M.; Ma, B. Classification of pulmonary nodules using neural network ensemble. In Proceedings of the International Symposium on Neural Networks, Guilin, China, 29 May–1 June 2011; pp. 460–466. [Google Scholar]
  161. Kuruvilla, J.; Gunavathi, K. Lung cancer classification using neural networks for CT images. Comput. Methods Programs Biomed. 2014, 113, 202–209. [Google Scholar] [CrossRef]
  162. Ardila, D.; Kiraly, A.P.; Bharadwaj, S.; Choi, B.; Reicher, J.J.; Peng, L.; Tse, D.; Etemadi, M.; Ye, W.; Corrado, G.; et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat. Med. 2019, 25, 954–961. [Google Scholar] [CrossRef] [PubMed]
  163. Li, K.; Liu, K.; Zhong, Y.; Liang, M.; Qin, P.; Li, H.; Zhang, R.; Li, S.; Liu, X. Assessing the predictive accuracy of lung cancer, metastases, and benign lesions using an artificial intelligence-driven computer aided diagnosis system. Quant. Imaging Med. Surg. 2021, 11, 3629. [Google Scholar] [CrossRef]
  164. Zhou, M.; Leung, A.; Echegaray, S.; Gentles, A.; Shrager, J.B.; Jensen, K.C.; Berry, G.J.; Plevritis, S.K.; Rubin, D.L.; Napel, S.; et al. Non–small cell lung cancer radiogenomics map identifies relationships between molecular and imaging phenotypes with prognostic implications. Radiology 2018, 286, 307–315. [Google Scholar] [CrossRef]
  165. Yamamoto, S.; Korn, R.L.; Oklu, R.; Migdal, C.; Gotway, M.B.; Weiss, G.J.; Iafrate, A.J.; Kim, D.W.; Kuo, M.D. ALK molecular phenotype in non–small cell lung cancer: CT radiogenomic characterization. Radiology 2014, 272, 568–576. [Google Scholar] [CrossRef] [PubMed]
  166. Aerts, H.J.; Grossmann, P.; Tan, Y.; Oxnard, G.R.; Rizvi, N.; Schwartz, L.H.; Zhao, B. Defining a radiomic response phenotype: A pilot study using targeted therapy in NSCLC. Sci. Rep. 2016, 6, 33860. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  167. Rizzo, S.; Petrella, F.; Buscarino, V.; De Maria, F.; Raimondi, S.; Barberis, M.; Fumagalli, C.; Spitaleri, G.; Rampinelli, C.; De Marinis, F.; et al. CT radiogenomic characterization of EGFR, K-RAS, and ALK mutations in non-small cell lung cancer. Eur. Radiol. 2016, 26, 32–42. [Google Scholar] [CrossRef] [PubMed]
  168. Velazquez, E.R.; Parmar, C.; Liu, Y.; Coroller, T.P.; Cruz, G.; Stringfield, O.; Ye, Z.; Makrigiorgos, M.; Fennessy, F.; Mak, R.H.; et al. Somatic mutations drive distinct imaging phenotypes in lung cancer. Cancer Res. 2017, 77, 3922–3930. [Google Scholar] [CrossRef] [Green Version]
  169. Lee, K.H.; Goo, J.M.; Park, C.M.; Lee, H.J.; Jin, K.N. Computer-aided detection of malignant lung nodules on chest radiographs: Effect on observers’ performance. Korean J. Radiol. 2012, 13, 564–571. [Google Scholar] [CrossRef]
  170. Liu, S.; Xie, Y.; Jirapatnakul, A.; Reeves, A.P. Pulmonary nodule classification in lung cancer screening with three-dimensional convolutional neural networks. J. Med. Imaging 2017, 4, 041308. [Google Scholar] [CrossRef]
  171. Kang, G.; Liu, K.; Hou, B.; Zhang, N. 3D multi-view convolutional neural networks for lung nodule classification. PLoS ONE 2017, 12, e0188290. [Google Scholar] [CrossRef] [Green Version]
  172. Lyu, J.; Ling, S.H. Using multi-level convolutional neural network for classification of lung nodules on CT images. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 686–689. [Google Scholar]
  173. Ciompi, F.; Chung, K.; Van Riel, S.J.; Setio, A.A.A.; Gerke, P.K.; Jacobs, C.; Scholten, E.T.; Schaefer-Prokop, C.; Wille, M.M.; Marchiano, A.; et al. Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Sci. Rep. 2017, 7, 46479. [Google Scholar] [CrossRef]
  174. Shaffie, A.; Soliman, A.; Fraiwan, L.; Ghazal, M.; Taher, F.; Dunlap, N.; Wang, B.; van Berkel, V.; Keynton, R.; Elmaghraby, A.; et al. A generalized deep learning-based diagnostic system for early diagnosis of various types of pulmonary nodules. Technol. Cancer Res. Treat. 2018, 17, 1533033818798800. [Google Scholar] [CrossRef]
  175. Hua, K.L.; Hsu, C.H.; Hidayati, S.C.; Cheng, W.H.; Chen, Y.J. Computer-aided classification of lung nodules on computed tomography images via deep learning technique. OncoTarg. Ther. 2015, 8, 2015–2022. [Google Scholar]
  176. Song, Q.; Zhao, L.; Luo, X.; Dou, X. Using deep learning for classification of lung nodules on computed tomography images. J. Healthc. Eng. 2017, 2017, 8314740. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  177. Causey, J.L.; Zhang, J.; Ma, S.; Jiang, B.; Qualls, J.A.; Politte, D.G.; Prior, F.; Zhang, S.; Huang, X. Highly accurate model for prediction of lung nodule malignancy with CT scans. Sci. Rep. 2018, 8, 9286. [Google Scholar] [CrossRef] [PubMed]
  178. El-Baz, A.S.; Gimel’farb, G.L.; Suri, J.S. Stochastic Modeling for Medical Image Analysis; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  179. Elsharkawy, M.; Sharafeldeen, A.; Soliman, A.; Khalifa, F.; Ghazal, M.; El-Daydamony, E.; Atwan, A.; Sandhu, H.S.; El-Baz, A. A Novel Computer-Aided Diagnostic System for Early Detection of Diabetic Retinopathy Using 3D-OCT Higher-Order Spatial Appearance Model. Diagnostics 2022, 12, 461. [Google Scholar] [CrossRef]
  180. Elsharkawy, M.; Sharafeldeen, A.; Taher, F.; Shalaby, A.; Soliman, A.; Mahmoud, A.; Ghazal, M.; Khalil, A.; Alghamdi, N.S.; Razek, A.A.K.A.; et al. Early assessment of lung function in coronavirus patients using invariant markers from chest X-rays images. Sci. Rep. 2021, 11, 12095. [Google Scholar] [CrossRef] [PubMed]
  181. Farahat, I.S.; Sharafeldeen, A.; Elsharkawy, M.; Soliman, A.; Mahmoud, A.; Ghazal, M.; Taher, F.; Bilal, M.; Razek, A.A.K.A.; Aladrousy, W.; et al. The Role of 3D CT Imaging in the Accurate Diagnosis of Lung Function in Coronavirus Patients. Diagnostics 2022, 12, 696. [Google Scholar] [CrossRef] [PubMed]
  182. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  183. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  184. Zhang, S.; Sun, F.; Wang, N.; Zhang, C.; Yu, Q.; Zhang, M.; Babyn, P.; Zhong, H. Computer-aided diagnosis (CAD) of pulmonary nodule of thoracic CT image using transfer learning. J. Digit. Imaging 2019, 32, 995–1007. [Google Scholar] [CrossRef]
  185. Suk, H.I.; Shen, D. Deep learning-based feature representation for AD/MCI classification. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Nagoya, Japan, 22–26 September 2013; pp. 583–590. [Google Scholar]
Figure 1. A Typical CAD System for Lung Cancer Diagnosis.
Figure 2. Main Categories of Lung Segmentation.
Figure 3. Main Categories of Lung Nodule Classification.
Table 1. Literature review of lung segmentation systems using Hounsfield unit (HU) thresholding, deformable boundaries, shape models, region/edge-based models, or machine learning (ML)-based methods.
Study | Method | # Subjects | System Evaluation
Amato et al. [16,17] | 1. Grey-scale thresholding; 2. Rolling ball algorithm. | 17 CT patients | Area under the ROC curve (AUC) of 93%.
Hu et al. [13] | 1. Grey-scale thresholding; 2. Dynamic programming; 3. Morphological operations. | 8 normal CT patients | Average intrasubject change of 2.75% ± 2.29%.
Itai et al. [22] | 1. Grey-scale thresholding; 2. Active contour model. | 9 CT patients | Qualitative evaluation only.
Silveria et al. [23,24] | 1. Grey-scale thresholding; 2. Geometric active contour; 3. Level sets; 4. Expectation-maximization (EM) algorithm. | Stack of chest CT slices | Qualitative evaluation only.
Gao et al. [19] | 1. Grey-scale thresholding; 2. Anisotropic diffusion; 3. 3D region growing; 4. Dynamic programming; 5. Rolling ball algorithm. | 8 CT scans | Average overlap coefficient of 99.46%.
Pu et al. [18] | 1. Grey-scale thresholding; 2. Geometric border marching. | 20 CT patients | Average over-segmentation and under-segmentation ratios of 0.43% and 1.63%, respectively.
Korfiatis et al. [57] | 1. k-means clustering; 2. Support vector machine (SVM). | 22 CT patients | Mean overlap coefficient above 89%.
Wang et al. [58] | 1. Grey-scale thresholding; 2. 3D grey-level co-occurrence matrix (GLCM) [59,60]. | 76 CT patients | Mean overlap coefficient of 96.7%.
Van Rikxoort et al. [15] | 1. Region growing; 2. Grey-scale thresholding; 3. Dynamic programming; 4. 3D hole filling; 5. Morphological closing. | 100 CT patients | Accuracy of 77%.
Wei et al. [20] | 1. Histogram analysis and connected-component labeling; 2. Wavelet transform; 3. Otsu's algorithm. | 9 CT patients | Accuracy range of 76.7–94.8%.
Ye et al. [21] | 1. 3D fuzzy adaptive thresholding; 2. Expectation-maximization (EM) algorithm; 3. Antigeometric diffusion; 4. Volumetric shape index map; 5. Gaussian filter; 6. Dot map; 7. Weighted support vector machine (SVM) classification. | 108 CT patients | Average detection rate of 90.2%.
Sun et al. [27] | 1. Active shape model matching; 2. Rib cage detection; 3. Surface finding approach. | 60 CT patients | Dice similarity coefficient (DSC) of 97.5% ± 0.6% and mean absolute surface distance of 0.84 ± 0.23.
Sofka et al. [29] | 1. Shape model; 2. Boundary detection. | 260 CT patients | Segmentation errors of 1.98 ± 0.62 (left lung) and 1.92 ± 0.73 (right lung).
Hua et al. [30] | Graph-based search algorithm. | 19 pathological lung CT patients | Sensitivity of 98.6% ± 1.1%, specificity of 99.5% ± 0.3%, and Hausdorff distance of 13.3 ± 4.7.
Nakagomi et al. [61] | Min-cut graph algorithm. | 97 CT patients | Sensitivity of 91.2% ± 13.3% and Jaccard index of 97.7% ± 1.1%.
Mansoor et al. [52] | 1. Fuzzy connectedness segmentation; 2. Texture-based random forest classification; 3. Region-based and neighboring-anatomy-guided correction segmentation. | More than 400 CT patients | DSC of 95.95% ± 0.34%, Hausdorff distance of 19.65 ± 12.84, sensitivity of 96.84% ± 1.63%, and specificity of 92.97% ± 0.68%.
Yan et al. [62] | Convolutional neural network (CNN). | 861 CT COVID-19 patients | DSC of 98.7% and 72.6%, sensitivity of 98.6% and 75.1%, and specificity of 99% and 72.6% for normal and COVID-19-infected lung, respectively.
Fan et al. [63] | 1. COVID-19-infected lung segmentation convolutional neural network (Inf-Net); 2. Semi-supervised Inf-Net (Semi-Inf-Net). | 100 CT images | DSC (sensitivity, specificity) of 68.2% (69.2%, 94.3%) for Inf-Net and 73.9% (72.5%, 96%) for Semi-Inf-Net.
Oulefki et al. [64] | Multi-level entropy-based thresholding. | 297 CT COVID-19 patients | DSC of 71.4%, sensitivity of 73.3%, specificity of 99.4%, and precision of 73.9%.
Sharafeldeen et al. [65] | 1. Linear combination of Gaussians; 2. Expectation-maximization (EM) algorithm; 3. Modified k-means clustering; 4. 3D MGRF-based morphological constraints. | 32 CT COVID-19 patients | Overlap coefficient of 91.76% ± 3.29%, DSC of 95.67% ± 1.83%, absolute lung volume difference (ALVD) of 2.93 ± 2.39, and 95th-percentile bidirectional Hausdorff distance (BHD) of 4.86 ± 5.01.
Zhao et al. [66] | 1. Grey-scale thresholding; 2. 3D V-Net; 3. Deformation module. | 112 CT patients | DSC of 97.96%, sensitivity of 98.4%, specificity of 99.54%, and mean surface distance error of 0.0318.
Sousa et al. [67] | Hybrid deep learning model combining the U-Net [68] and ResNet-34 [69] architectures. | 385 CT patients, collected from five different datasets | Mean DSC above 93% and average Hausdorff distance below 5.2.
Kim et al. [70] | Otsu's algorithm. | 447 CT patients | Sensitivity of 96.2%, specificity of 97.5%, accuracy of 97%, AUC of 96.8%, and F1-score of 96.1%.
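Many of the classical pipelines summarized in Table 1 share the same backbone: a Hounsfield-unit (HU) threshold to isolate air-filled tissue, connected-component analysis to separate the lungs from the background air, and morphological operations to smooth the mask. The Python sketch below illustrates that generic recipe only; it is not a reimplementation of any cited system, and the −400 HU cutoff, the two-largest-components assumption, and the structuring-element size are illustrative choices.
```python
# Minimal sketch (assumptions noted above): classical lung segmentation by
# HU thresholding followed by connected-component and morphological clean-up.
# `ct` is assumed to be a 3D numpy array of HU values.
import numpy as np
from scipy import ndimage

def segment_lungs(ct: np.ndarray, hu_threshold: float = -400.0) -> np.ndarray:
    # 1. Threshold: air-filled lung parenchyma lies well below ~-400 HU.
    air = ct < hu_threshold

    # 2. Remove air connected to the volume border (background outside the body).
    labels, _ = ndimage.label(air)
    border_labels = np.unique(np.concatenate([
        labels[0].ravel(), labels[-1].ravel(),
        labels[:, 0].ravel(), labels[:, -1].ravel(),
        labels[:, :, 0].ravel(), labels[:, :, -1].ravel()]))
    lungs = air & ~np.isin(labels, border_labels)

    # 3. Keep the two largest remaining components (left and right lung).
    labels, n = ndimage.label(lungs)
    if n > 2:
        sizes = ndimage.sum(lungs, labels, range(1, n + 1))
        keep = np.argsort(sizes)[-2:] + 1
        lungs = np.isin(labels, keep)

    # 4. Morphological closing and hole filling to recover vessels and
    #    juxta-pleural structures excluded by the threshold.
    lungs = ndimage.binary_closing(lungs, structure=np.ones((5, 5, 5)))
    return ndimage.binary_fill_holes(lungs)
```
Learning-based systems in Table 1 (shape models, random forests, CNNs) replace or refine these hand-crafted steps, but a threshold-derived mask of this kind is still a common starting point or baseline.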
Table 2. Literature review of pulmonary nodule detection and segmentation systems.
Study | Method | # Subjects | System Evaluation
Brown et al. [97] | 1. A priori model; 2. Region growing; 3. Mathematical morphology. | 31 CT patients | Accuracy of 86%.
Oda et al. [95] | 1. 3D filter based on an orientation map of gradient vectors; 2. 3D distance transformation. | 33 CT patients | Accuracy of 59%.
Chang et al. [82] | 1. Cylinder filter; 2. Spherical filter; 3. Sphericity test. | 8 CT patients | Detection rate of 90%.
Way et al. [78] | 1. k-means clustering; 2. 3D active contour model. | 96 CT patients | Qualitative evaluation only.
Kuhnigk et al. [121] | Automatic morphological and partial-volume analysis. | Low-dose data from 8 clinical metastasis patients. | The proposed method outperformed conventional methods: both systematic and absolute errors were substantially reduced, and the method successfully accounted for slice thickness and reconstruction-kernel variations.
Zhou et al. [124] | 1. Detection: boosted k-nearest neighbors (KNN) with a Euclidean distance measure between the non-parametric density estimates of two regions; 2. Segmentation: analysis of a 3D texture likelihood map of the nodule region. | 10 ground-glass opacity nodules. | All 10 nodules detected with only 1 false positive.
Dehmeshki et al. [122] | Adaptive sphericity-oriented contrast region growing on the fuzzy connectivity map of the object of interest. | Database 1: 608 pulmonary nodules from 343 scans; Database 2: 207 pulmonary nodules from 80 CT scans. | Visual inspection found that 84% of the segmented nodules were correct, while the remaining 16% required other segmentation solutions.
Tao et al. [123] | Multi-level statistical learning-based approach for segmentation and detection of ground-glass nodules. | 1100 subvolumes (100 containing ground-glass nodules) acquired from 200 subjects. | Classification accuracy of 92.28% (overall) and 89.87% (ground-glass nodules).
Messay et al. [98] | 1. Thresholding; 2. Morphological operations; 3. Fisher linear discriminant (FLD) classifier. | 84 CT patients | Sensitivity of 82.66%.
Kubota et al. [126] | Region growing. | LIDC 1: 23 nodules; LIDC 2: 82 nodules; a dataset of 820 nodules with manual diameter measurements. | Average overlap of 0.69 ± 0.18 (LIDC 1) and 0.59 ± 0.19 (LIDC 2).
Liu et al. [128] | 1. Selective enhancement filter [129]; 2. Hidden conditional random field (HCRF) [130]. | 24 CT patients | Sensitivity of 89.3% with 1.2 false positives/scan.
Choi et al. [107] | 1. Dot enhancement filter; 2. Angular histograms of surface normals (AHSN); 3. Iterative wall elimination; 4. Support vector machine (SVM) classifier. | 84 CT patients | Sensitivity of 97.5% with 6.76 false positives/scan.
Alilou et al. [108] | 1. Thresholding; 2. Morphological opening; 3. 3D region growing. | 60 CT patients | Sensitivity of 80% with 3.9 false positives/scan.
Bai et al. [109] | 1. Local shape analysis; 2. Data-driven local contextual feature learning; 3. Principal component analysis (PCA). | 99 CT patients | The number of false positives was reduced by more than 80%.
Setio et al. [99] | 1. Thresholding; 2. Morphological operations; 3. Support vector machine (SVM) classifier. | 888 CT patients | Sensitivity of 94.1% and 98.3% at an average of 1 and 4 false positives/scan, respectively.
Akram et al. [106] | 1. Artificial neural network (ANN); 2. Geometric and intensity-based features. | 84 CT patients | Accuracy of 96.68% and sensitivity of 96.95%.
Golan et al. [111] | Deep convolutional neural network (CNN). | 1018 CT patients | Sensitivity of 78.9% with 20 false positives/scan.
Bergtholdt et al. [112] | 1. Geometric features; 2. Grayscale features; 3. Location features; 4. Support vector machine (SVM) classifier. | 1018 CT patients | Sensitivity of 85.9% with 2.5 false positives/scan.
Mukhopadhyay [127] | Thresholding approach based on internal texture (solid/part-solid and non-solid) and external attachment (juxta-pleural and juxta-vascular). | 891 nodules from LIDC/IDRI. | Average segmentation accuracy of 99% ± 1% (solid/part-solid) and 98% ± 2% (non-solid).
El-Regaily et al. [110] | 1. Canny edge detector; 2. Thresholding; 3. Region growing; 4. Rule-based classifier. | 400 CT patients | Accuracy, sensitivity, and specificity of 70.53%, 77.77%, and 69.5%, respectively, with an average of 4.1 false positives/scan.
Zhang et al. [113] | Deep belief network (DBN). | 1018 CT patients | Accuracy of 90%.
Wang et al. [100] | Semi-supervised extreme learning machine (SS-ELM). | 1018 CT patients | Accuracy of 96.1%.
Zhao et al. [131] | 1. 3D U-Net [132]; 2. Generative adversarial network (GAN) [133]. | 800 CT scans | Qualitative evaluation only.
Charbonnier et al. [125] | Subsolid nodule segmentation using voxel classification to eliminate blood vessels. | 170 subsolid nodules from the Multicentric Italian Lung Disease (MILD) trial. | 92.4% of segmented vessels and 80.6% of segmented solid cores were accepted by observers.
Luo et al. [134] | 3D sphere center-points matching detection network (SCPM-Net). | 888 CT scans | Sensitivity of 89.2%.
Yin et al. [135] | Squeeze-and-attention and dense atrous spatial pyramid pooling U-Net (SD-UNet). | 2236 CT slices | Dice similarity coefficient (DSC), sensitivity, specificity, and accuracy of 86.96%, 89.88%, 99.32%, and 99.06%, respectively.
Bianconi et al. [120] | 1. Twelve conventional semi-automated methods: active contours (MorphACWE, MorphGAC), clustering (k-means, SLIC), graph-based (Felzenszwalb), region growing (flood fill), thresholding (Kapur, Kittler, Otsu, MultiOtsu), and others (MSER, Watershed); 2. Twelve deep learning semi-automated methods: CNNs built from four standard segmentation models (FPN, LinkNet, PSPNet, U-Net) and three well-known encoders (InceptionV3, MobileNet, ResNet34). | Dataset 1: 383 images from a cohort of 111 patients; Dataset 2: 259 images from a cohort of 100 patients. | The semi-automated deep learning methods outperformed the conventional methods, with DSCs of 0.853 and 0.763 on dataset 1 and dataset 2, respectively, versus 0.761 and 0.704 for the conventional methods.
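Most detection systems in Table 2 follow a two-stage design: a sensitive candidate generator (filtering, thresholding, or blob enhancement) followed by false-positive reduction (rule-based gates or a trained classifier). The sketch below shows only a minimal candidate generator built on a Laplacian-of-Gaussian blob detector with a crude size gate; it assumes a 3D HU volume `ct` and a binary `lung_mask` (for example, from the segmentation sketch after Table 1), and the intensity normalization, sigma range, and size limits are illustrative assumptions rather than settings from any cited study.
```python
# Minimal sketch of a nodule-candidate generator (not a cited system).
import numpy as np
from skimage.feature import blob_log

def nodule_candidates(ct: np.ndarray, lung_mask: np.ndarray,
                      min_sigma: float = 1.0, max_sigma: float = 6.0):
    # Normalize HU values to [0, 1] and suppress everything outside the lungs,
    # so soft-tissue nodules appear as bright blobs on a dark parenchyma.
    vol = np.clip((ct + 1000.0) / 1400.0, 0.0, 1.0) * lung_mask

    # Laplacian-of-Gaussian blob detection of bright, roughly spherical structures.
    blobs = blob_log(vol, min_sigma=min_sigma, max_sigma=max_sigma,
                     num_sigma=6, threshold=0.1)

    candidates = []
    for z, y, x, sigma in blobs:
        radius_vox = sigma * np.sqrt(3)      # approximate blob radius in voxels
        if 1.5 <= radius_vox <= 15.0:        # crude size gate as a first FP filter
            candidates.append((int(z), int(y), int(x), float(radius_vox)))
    return candidates
```
In a full CAD system, the surviving candidates would then be passed to a false-positive reduction stage such as the SVM, random forest, or CNN classifiers listed in the table.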
Table 3. Literature review of pulmonary nodule classification systems.
Study | Method | # Subjects | System Evaluation
Dehmeshki et al. [148] | Shape-based region growing. | 3D lung CT data in which nodules are attached to blood vessels or the lung wall. | Qualitative evaluation only.
Lee et al. [169] | Commercial CAD system (IQQA-Chest, EDDA Technology, Princeton Junction, NJ, USA). | 200 chest radiographs (100 normal, 100 with malignant solitary nodules). | Sensitivity of 87% with a false positive rate of 0.19.
Kuruvilla et al. [161] | Feed-forward and feed-forward back-propagation neural networks. | 155 patients from LIDC. | Classification accuracy of 93.3%.
Yamamoto et al. [165] | Random forest. | 172 patients with NSCLC. | Sensitivity of 83.3%, specificity of 77.9%, and accuracy of 78.8% in independent testing.
Orozco et al. [147] | 1. Wavelet feature descriptor; 2. SVM. | 45 CT scans from ELCAP and LIDC. | Overall correct classification of cancerous vs. non-cancerous nodules of 82%, with sensitivity of 90.90% and specificity of 73.91%.
Kumar et al. [149] | Deep features extracted with an autoencoder. | 4323 nodules from the NCI-LIDC dataset. | Overall accuracy of 75.01%, sensitivity of 83.35%, and 0.39 false positives/patient (10-fold cross-validation).
Hua et al. [175] | 1. Deep belief network (DBN); 2. CNN. | LIDC. | Sensitivity (DBN: 73.4%, CNN: 73.3%) and specificity (DBN: 82.2%, CNN: 78.7%).
Kang et al. [171] | 3D multi-view CNN (MV-CNN). | LIDC-IDRI. | Error rate of 4.59% for binary classification (benign vs. malignant) and 7.70% for ternary classification (benign, primary malignant, and metastatic malignant).
Ciompi et al. [173] | Multi-stream multi-scale convolutional networks. | 1. Italian MILD screening trial; 2. Danish DLCST screening trial. | Best accuracy of 79.5%.
Song et al. [176] | 1. CNN; 2. Deep neural network (DNN); 3. Stacked autoencoder (SAE). | LIDC-IDRI. | Accuracy of 84.15%, sensitivity of 83.96%, and specificity of 84.32%.
Tajbakhsh et al. [138] | 1. Massive-training artificial neural networks (MTANN); 2. CNN. | LDCT acquired from 31 patients. | AUC of 0.8806 (95% confidence interval (CI): 0.8389–0.9223).
Li et al. [145] | Support vector machine (SVM). | 248 GGNs. | Accuracy of 70.9% in classifying GGNs into atypical adenomatous hyperplasia (AAH), adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive adenocarcinoma (IA); 73.1% for AIS vs. MIA and 88.1% for indolent vs. invasive lesions.
Huang et al. [154] | Dense convolutional network (DenseNet). | 1. CIFAR; 2. SVHN; 3. ImageNet. | Error rates on CIFAR (C10: 5.19%, C10+: 3.46%, C100: 19.64%, C100+: 17.18%), SVHN (1.59%), and ImageNet (single-crop (10-crop) top-1: 25.02 (23.61), 23.80 (22.08), 22.58 (21.46), 22.33 (20.85); top-5: 7.71 (6.66), 6.85 (5.92), 6.34 (5.54), 6.15 (5.30)).
Nibali et al. [158] | ResNet. | LIDC/IDRI. | Sensitivity of 91.07%, specificity of 88.64%, precision of 89.35%, AUC of 0.9459, and accuracy of 89.90%.
Liu et al. [159] | Multi-view multi-scale CNNs. | LIDC-IDRI and ELCAP. | Classification rate of 92.1%.
Zhao et al. [152] | Deep learning system based on 3D CNNs and multi-task learning. | 651 nodules labeled AAH, AIS, MIA, or IA. | Three-class weighted average F1 score of 63.3%, compared with 55.6%, 56.6%, 54.3%, and 51.0% achieved by radiologists.
Li et al. [150] | Multivariable linear predictor model built on semantic features. | 100 patients from NLST-LDCT. | AUC of 0.74 at baseline screening, 0.88 at the first follow-up, and 0.96 at the second follow-up.
Lyu et al. [172] | Multi-level CNN (ML-CNN). | LIDC-IDRI (1018 cases from 1010 patients). | Accuracy of 84.81%.
Shaffie et al. [174] | 1. Seventh-order Markov–Gibbs random field (MGRF) model [178,179,180]; 2. Geometric features; 3. Deep autoencoder classifier. | 727 nodules from 467 patients (LIDC). | Classification accuracy of 91.20%.
Causey et al. [177] | Deep learning CNN. | LIDC-IDRI. | Malignancy classification with an AUC of approximately 0.99.
Uthoff et al. [156] | k-medoids clustering and information theory. | Training: 74 malignant, 289 benign; validation: 50 malignant, 50 benign. | AUC of 0.965, with 100% sensitivity and 96% specificity.
Ardila et al. [162] | Deep learning CNN. | 6716 National Lung Screening Trial cases and an independent clinical validation set of 1139 cases. | AUC of 94.4%.
Liu et al. [151] | 1. Multivariate logistic regression analysis; 2. Least absolute shrinkage and selection operator (LASSO). | Benign and malignant nodules from 875 patients. | Training AUC of 0.836 (95% CI: 0.793–0.879) and validation AUC of 0.809 (95% CI: 0.745–0.872).
Gong et al. [136] | Deep learning-based artificial intelligence system classifying ground-glass nodules (GGNs) as invasive adenocarcinoma (IA) or non-IA. | 828 GGNs from 644 patients (209 IA and 619 non-IA, including 409 adenocarcinomas in situ and 210 minimally invasive adenocarcinomas). | AUC of 0.92 ± 0.03.
Sim et al. [137] | Radiologists assisted by a deep learning-based CNN. | 600 lung cancer-containing chest radiographs and 200 normal chest radiographs. | Average sensitivity improved from 65.1% to 70.3%, and the number of false positives per radiograph declined from 0.2 to 0.18.
Wang et al. [153] | Two-stage deep learning strategy: prior-feature learning followed by adaptive-boost deep learning. | 1357 nodules (765 non-invasive (AAH and AIS) and 592 invasive (MIA and IA)). | Classification accuracy of 73.4% ± 1.4% compared with 69.1%, 69.3%, and 67.9% for specialists; AUC of 81.3% ± 2.2%.
Xia et al. [155] | 1. Recurrent residual CNN based on U-Net; 2. Information fusion method. | 373 GGNs from 323 patients. | AUC of 0.90 ± 0.03 and accuracy of 80.3%.
Li et al. [163] | CLR software based on a 3D CNN with a DenseNet backbone. | 486 consecutive resected lung lesions (320 adenocarcinomas, 40 other malignancies, 55 metastases, and 71 benign lesions). | Classification accuracy of 93.4%, 95.0%, 50.9%, and 40.8% for adenocarcinomas, other malignancies, metastases, and benign lesions, respectively.
Hu et al. [139] | 1. 3D U-Net; 2. Deep neural network. | 513 GGNs (100 benign, 413 malignant). | Accuracy of 75.6%, F1 score of 84.6%, weighted average F1 score of 70.3%, and Matthews correlation coefficient of 43.6%.
Farahat et al. [181] | 1. Three MGRF energies extracted from three different grades of COVID-19 patients; 2. Artificial neural network. | 76 CT COVID-19 patients. | 100% accuracy and 100% Cohen's kappa.
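The deep learning classifiers in Table 3 are typically patch-based: a fixed-size volume centred on the detected nodule is fed to a CNN that outputs a benign/malignant (or invasiveness) score. The sketch below is a deliberately small 3D CNN in PyTorch intended only to illustrate this pattern; the 32³ patch size, channel widths, and two-class output are assumptions and do not correspond to any specific published architecture.
```python
# Minimal sketch (illustrative only, not any network from Table 3): a small
# 3D CNN that scores a nodule-centred CT patch as benign vs. malignant.
import torch
import torch.nn as nn

class NodulePatchCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 32^3 -> 16^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 16^3 -> 8^3
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),              # global average pooling
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 32, 32, 32) nodule-centred HU patches, already normalized.
        h = self.features(x).flatten(1)
        return self.classifier(h)

# Example forward pass on random patches (shapes only; no trained weights).
model = NodulePatchCNN()
logits = model(torch.randn(4, 1, 32, 32, 32))     # -> (4, 2) class scores
```
The multi-view, multi-scale, and transfer-learning variants listed in the table differ mainly in how the input patches are sampled and how the feature extractor is built or initialized; the basic patch-in, score-out structure is largely shared.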
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
