Article

LuCa: A Novel Method for Lung Cancer Delineation

Department of Information Engineering, Università Politecnica delle Marche, 60131 Ancona, Italy
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(22), 12074; https://doi.org/10.3390/app152212074
Submission received: 15 September 2025 / Revised: 3 November 2025 / Accepted: 11 November 2025 / Published: 13 November 2025
(This article belongs to the Special Issue Deep Learning and Data Mining: Latest Advances and Applications)

Abstract

Lung cancer remains the leading cause of cancer-related deaths worldwide, with over 2.4 million new diagnoses in 2022. Early diagnosis remains challenging due to the non-specificity of symptoms, often resulting in late-stage detection. Although 2-D and 3-D medical imaging, particularly computed tomography (CT), is widely used for detecting lung cancer, it still relies on manual segmentation, which remains time-consuming and user-dependent. This study proposes LuCa, an innovative 2.5-D deep learning model for lung cancer delineation, which combines the benefits of 2-D segmentation with 3-D volume delineation. The main novelty of LuCa is focused on its pipeline, specifically designed for clinical use, in order to guarantee the usability of the method. LuCa employs a U-Net architecture for segmentation, followed by a post-processing step for 3-D tumor volume delineation and false-positive correction. The method was trained and evaluated using the “NSCLC-Radiomics” database, comprising CT images of 422 non-small cell lung cancer patients, with clinical manual tumor annotations as ground truth. The model achieved strong performance, with a high dice coefficient (87 ± 12%), intersection over union (81 ± 17%), sensitivity (84 ± 16%), and positive predictive value (94 ± 10%) on the test set. Performance was particularly high for larger tumors, reflecting the ability of the model to delineate more visible lesions accurately. Statistical analysis confirmed the high correlation and minimal error between predicted and ground truth tumor volumes. The results highlight the potential of the 2.5-D approach to improve clinical efficiency by enabling accurate tumor segmentation with reduced computational cost compared to traditional 3-D methods. Future research will focus on assessing the use of LuCa as real-time clinical decision support, particularly for assessing tumors during treatment.

1. Introduction

Nowadays, lung cancer is the leading cause of global cancer-related mortality [1]. According to the latest GLOBOCAN estimates, 2,480,675 new cases of lung cancer were diagnosed globally in 2022, making it the most frequently diagnosed cancer (12.4% of all cancers worldwide) [1]. Lung cancers arise through a multi-step process involving the accumulation of multiple genetic and epigenetic alterations [2]. Several factors may increase the risk of developing lung cancer; they can be divided into two macro-categories: non-modifiable risk factors (such as age, gender, ethnicity, and family history) and modifiable risk factors (such as tobacco and cannabis smoking, diet, ionizing radiation, and air pollution) [3,4]. According to cell histology, lung cancer is categorized into two classes: non-small cell lung cancer (NSCLC) and small cell lung cancer (SCLC). NSCLC is the most common type (around 85% of cases) and includes subtypes such as adenocarcinoma, squamous cell carcinoma, and large cell carcinoma. Diagnosis of lung cancer is still inefficient: over half of lung cancer cases are diagnosed at a late stage. Indeed, most of the associated symptoms are non-specific; patients may present systemic symptoms, such as fatigue, anorexia, and weight loss, or direct signs and symptoms caused by the primary tumor, such as chest discomfort, cough, dyspnea, and hemoptysis [5].
The most common diagnostic method for lung cancer is 2-D and 3-D medical imaging, which is the representation of an internal anatomical structure or its functional processes. Computed Tomography (CT) is the imaging technique with the highest sensitivity for the detection of pulmonary nodules [6]. CT assists in finding abnormalities, highlights signs of disease, monitors the response to treatment, and supports the planning of therapy [7]; indeed, increased use of CT has improved the identification of small peripheral nodules [8]. However, clinicians must deal with the reading and interpretation of complex imaging data, where factors such as anatomical structures overlaying the region of interest and poor image quality often obscure the identification of malignant lesions. Moreover, the segmentation of organs and anatomical structures is still performed manually, leading to low annotation quality and high inter-observer variability. This variability prevents objective clinical practice, resulting in a high rate of false positives and false negatives, especially in tumor volume characterization. In addition, manual segmentation is time-consuming [9].
Recently, deep learning has been introduced as a technique to perform lung cancer segmentation [9,10,11]. Among the many deep-learning tools, Convolutional Neural Networks (CNNs) have reached good results in image segmentation tasks [12]. Various architectures have been explored to address the complexities of image segmentation, each offering unique solutions while balancing computational costs and segmentation accuracy. CNN-based image segmentation can be performed through 2-D or 3-D analysis. Efficient 2-D architectures have been developed to maintain high segmentation quality with lower resource requirements. The Bi-FPN network integrated bidirectional feature pyramid fusion, achieving accurate results while minimizing computational load [13]. WEU-Net introduced a weight excitation mechanism, emphasizing nodule-relevant features and achieving robust contextual learning and better segmentation accuracy [14]. Then, the introduction of dense connections and new loss functions yielded better segmentation, providing a 1% improvement over previous 2-D approaches [15]. Attention and transformer-based architectures have further advanced segmentation by capturing long-range spatial dependencies. A novel framework combined CNNs with transformers, using deformable self-attention to improve tumor segmentation, achieving good results and facilitating real-time radiotherapy workflows [16]. Finally, a two-step approach adopted separate networks for large and small tumors, improving precision across different tumor sizes [17]. In contrast, other works focused on 3-D CNN architectures for accurate volume segmentation. The 3-D densely connected network reduced parameter load while enhancing segmentation detail, outperforming traditional 3-D U-Nets in accuracy [18]. The MF-3D U-Net adopted multiscale feature fusion with trainable downsampling, which allowed efficient nodule segmentation in complex cases [19].
Retina U-Net improved detection by segmenting based on anatomical region, reducing false positives in PET/CT scans [20]. The multiple attention U-Net further enhanced segmentation quality by incorporating spatial and channel attention mechanisms, achieving high Dice scores on clinical datasets [21]. Two-stage models have also shown potential in balancing segmentation precision and efficiency. A two-stage U-Net effectively refined tumor delineation by first isolating the global tumor area and then refining the region locally, significantly improving accuracy on PET/CT data [22]. Another approach used a segmented patch-based method to significantly reduce inference time by processing only the relevant patches, achieving efficient segmentation in a fraction of the time [23]. The 3-D U-Net combined with graph-cut co-segmentation applied dual PET-CT inputs to refine tumor boundaries in lung cancer patients [24]. While these methods demonstrate high accuracy by including 3-D volume features in the analysis, they face challenges with computational demand, limiting their practical application. Techniques like weighted sampling and customized loss functions were designed to improve learning in challenging nodule regions, such as nodules attached to the chest wall, but remain demanding in resource-limited environments [25]. To reduce computational effort while maintaining a high level of tumor delineation, two studies in the literature [26,27] presented 2.5-D approaches: these methods combine a 2-D deep-learning approach for tumor segmentation (aiming to reduce computational effort) with an additional module for 3-D volume reconstruction (aiming to reach good tumor delineation). Despite the innovative idea, the performance of these methods, although good, is still not competitive (dice scores around 80%).
Thus, the present paper aims to assess a new deep-learning method based on an innovative 2.5-D approach, LuCa, designed to be competitive with 2-D methods in terms of required resources and with 3-D methods in terms of delineation accuracy. The main novelty of LuCa lies in the definition of a pipeline specifically designed to be integrated into clinical practice, in order to guarantee the usability of the method.

2. Materials and Methods

2.1. Data Description

The data used for this project are collected in the “NSCLC-Radiomics dataset”, available online in “The Cancer Imaging Archive” database [28]. The database contains pretreatment CT scans from 422 NSCLC patients. Patients (290/132 male/female; age: 68 ± 10 years) presented different types of NSCLC at acquisition time; specifically, 114 patients presented large cell cancer, 152 squamous cell carcinoma, 51 adenocarcinoma, 63 “not otherwise specified” cancer and, finally, for 42 patients no histological investigation was recorded. Survival time (measured from the start of treatment) ranged between 10 and 4454 days (989 ± 1036 days). The database also contains the manual annotations of the lungs and of the tumor volume, which were used as ground truth in this study. This delineation was manually performed by a radiation oncologist and consists of the 3-D volume of the primary gross tumor volume (“GTV-1”) and selected anatomical structures (i.e., lung, heart, and esophagus).
Patients with missing files (such as segmentation files) were excluded. Thus, the final population of the study included scans from 391 subjects, with a total of 32,911 pairs of CT slices and masks, which were divided into training, validation, and test datasets. Firstly, the database was split into a training/validation set, which contains CT scans from 312 subjects (80%), and the test set, which contains CT scans from 79 subjects (20%). Then, the training/validation set was split into a training set, which included CT scans from 249 subjects (80% of the training/validation set), and a validation set, which included CT scans from 63 subjects (20% of the training/validation set). The training and validation datasets were used to train the proposed method (the validation dataset was used to implement the early stopping criterion [29]), and the test set was used to evaluate the generalization ability of the entire pipeline. Finally, a data generator was implemented with a batch size of 128, applying a random shuffle of the training and validation datasets.
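As an illustration, the patient-level 80/20 and 80/20 splits described above can be sketched as follows. The function name and the rounding convention are our own assumptions; the exact per-set counts depend on how fractions are rounded, so this sketch yields approximately the 249/63/79 division reported in the text. Splitting at the patient level (rather than the slice level) avoids leakage of slices from the same scan across sets.

```python
import random

def split_patients(patient_ids, test_frac=0.2, val_frac=0.2, seed=42):
    """Patient-level split: 20% test, then 20% of the remainder as
    validation, mirroring the division described in the text.
    (Hypothetical helper; not the authors' actual code.)"""
    ids = sorted(patient_ids)
    random.Random(seed).shuffle(ids)  # reproducible shuffle
    n_test = round(len(ids) * test_frac)
    test, trainval = ids[:n_test], ids[n_test:]
    n_val = round(len(trainval) * val_frac)
    val, train = trainval[:n_val], trainval[n_val:]
    return train, val, test

train, val, test = split_patients(range(391))
# 391 patients -> 250 train / 63 validation / 78 test with this rounding
```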

2.2. Proposed Method

The pipeline of LuCa, the proposed 2.5-D deep-learning method for lung cancer delineation, is represented in Figure 1 and described in the following sections. The entire pipeline was implemented in Python 3.12.12 in the Google Colab environment, using the GPU hardware acceleration and high-RAM setup of the PRO version.

2.2.1. Data Pre-Processing

Data were pre-processed to extract the 2-D slices from each 3-D scan. The pre-processing steps are slice rearrangement, greyscale pixel-intensity windowing (range = [−1000; +1000] HU), lung extraction using the lung segmentation masks, black-slice removal, cropping, image resizing to 256 × 256 pixels, JPEG conversion, and CT normalization. Augmentation was not applied.
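The windowing, lung-masking, and normalization steps can be sketched as below. This is a minimal numpy illustration under our own assumptions (function name included); the actual pipeline also performs slice rearrangement, cropping, resizing, and JPEG conversion, which are omitted here.

```python
import numpy as np

def window_and_normalize(slice_hu, lung_mask, lo=-1000.0, hi=1000.0):
    """Clip a CT slice to the [-1000, +1000] HU window, keep only
    lung pixels, and rescale intensities to [0, 1].
    (Illustrative sketch, not the authors' actual code.)"""
    windowed = np.clip(slice_hu, lo, hi)            # HU windowing
    windowed = np.where(lung_mask > 0, windowed, lo)  # blank non-lung pixels
    return (windowed - lo) / (hi - lo)              # normalize to [0, 1]
```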

2.2.2. U-Net

The proposed U-Net (Figure 1) consists of an encoder–decoder architecture, proven to be efficient in capturing information while maintaining good spatial accuracy in precise segmentation tasks [30]. The encoder path includes five blocks: each block includes two convolutional layers with an LReLU activation function, followed by a batch normalization block. Convolutional blocks employ a (3, 3) kernel size with “same” padding, spatial dropout set at 0.2 after each block (to help reduce overfitting), and max pooling layers to down-sample the feature maps from 256 × 256 to 16 × 16. Then, transposed convolutions mark the beginning of the expansive path, upsampling the feature maps and concatenating them with the corresponding encoder outputs through skip connections to preserve fine-grained spatial information. The final output layer is a 2-D convolution with a (1, 1) kernel size, followed by a sigmoid activation function. Regarding the training strategy, the Adam optimizer was used with a learning rate of 0.001, and a learning rate scheduler reduced the learning rate by a factor of 0.5 in case of stalled progress. Moreover, the early stopping criterion was applied by considering the best validation loss, and restoration of the best weights was implemented to enhance the training performance. The Dice Coefficient (DC) was chosen as the formulation of the loss function, considering its robustness to class imbalance in binary segmentation. It is defined by Equation (1):
DC(PRE, GT) = (2 · |PRE ∩ GT|) / (|PRE| + |GT|)        (1)
where PRE is the predicted set of pixels, and GT is the set of pixels in the ground-truth mask. The loss function was implemented as represented by Equation (2):
Loss = 1 − DC        (2)
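Equations (1) and (2) translate directly into code. The following is a minimal numpy sketch (function names are our own, and a small epsilon is added as an assumption for numerical stability on empty masks):

```python
import numpy as np

def dice_coefficient(pre, gt, eps=1e-7):
    """Eq. (1): DC = 2*|PRE intersect GT| / (|PRE| + |GT|) on binary masks.
    eps avoids division by zero when both masks are empty (our assumption)."""
    pre, gt = pre.astype(bool), gt.astype(bool)
    inter = np.logical_and(pre, gt).sum()
    return (2.0 * inter + eps) / (pre.sum() + gt.sum() + eps)

def dice_loss(pre, gt):
    """Eq. (2): Loss = 1 - DC."""
    return 1.0 - dice_coefficient(pre, gt)
```

In a training framework, the same formula is applied to soft (sigmoid) predictions so that the loss remains differentiable.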

2.2.3. Three-Dimensional Lung Cancer Delineation

The 3-D lung cancer delineation was obtained from the 2-D predictions. For each subject, the predictions related to a single CT scan were aligned. Then, a post-processing step was applied to identify and correct false-positive detections. Specifically, this step evaluated the cancer predictions as objects across sequential slices: if a prediction in one slice had no predictions in the adjacent slices, that prediction was removed; and, if sequential slices contained predictions located in different positions, these predictions were considered as not belonging to the same nodule, and the prediction with the smaller area was removed. Finally, all the corrected slices were stacked to construct the 3-D lung cancer volume, and the volume of the cancer was computed as the sum of the predicted areas multiplied by the slice thickness (3 mm in this study).
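A simplified version of the slice-consistency rule and volume computation might look as follows. This is a sketch under our own assumptions: it operates on whole slices rather than per connected component, the second rule (comparing nodule locations across slices) is omitted, and the pixel area and slice thickness are illustrative parameters.

```python
import numpy as np

def clean_and_measure(pred_stack, pixel_area_mm2=1.0, thickness_mm=3.0):
    """Sketch of the first slice-consistency rule: a slice whose prediction
    has no prediction in either adjacent slice is treated as a false
    positive and cleared; remaining areas are summed into a volume (cm^3).
    (Simplified illustration, not the authors' actual code.)"""
    stack = [s.astype(bool) for s in pred_stack]
    n = len(stack)
    cleaned = []
    for i, s in enumerate(stack):
        prev_has = i > 0 and stack[i - 1].any()
        next_has = i < n - 1 and stack[i + 1].any()
        if s.any() and not (prev_has or next_has):
            s = np.zeros_like(s)  # isolated detection -> removed
        cleaned.append(s)
    area_mm2 = sum(s.sum() for s in cleaned) * pixel_area_mm2
    volume_cm3 = area_mm2 * thickness_mm / 1000.0  # mm^3 -> cm^3
    return cleaned, volume_cm3
```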

2.3. Statistical Analysis

LuCa has been evaluated by performing a technical evaluation and a clinical assessment, focusing on pixel categorization and tumor volume characterization, respectively. The technical evaluation was performed by computing the DC (Equation (1)) and the intersection over union (IoU) between PRE and GT. Furthermore, sensitivity (SEN) and positive predictive value (PPV) were calculated according to
SEN = TP / (TP + FN)
PPV = TP / (TP + FP)
where TP are the tumor pixels that were correctly predicted as tumor pixels, FP are the non-tumor pixels that were erroneously predicted as tumor pixels, and FN are the tumor pixels that were erroneously predicted as non-tumor pixels.
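These pixel-wise definitions can be computed directly from binary masks, e.g. (the function name is our own):

```python
import numpy as np

def sensitivity_ppv(pre, gt):
    """Pixel-wise SEN = TP/(TP+FN) and PPV = TP/(TP+FP) on binary masks.
    Returns NaN when a denominator is zero (our own convention)."""
    pre, gt = pre.astype(bool), gt.astype(bool)
    tp = np.logical_and(pre, gt).sum()   # tumor pixels predicted as tumor
    fp = np.logical_and(pre, ~gt).sum()  # non-tumor predicted as tumor
    fn = np.logical_and(~pre, gt).sum()  # tumor predicted as non-tumor
    sen = tp / (tp + fn) if (tp + fn) else float("nan")
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    return sen, ppv
```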
Clinical assessment was performed by computing the tumor volumes of both GT (VGT; cm3) and PRE (VPRE; cm3). Distributions of VGT and VPRE were compared by paired t-test. Volume similarity was assessed by Bland–Altman analysis: δ (cm3) and μ (cm3) are the point-to-point differences and averages between VGT and VPRE, respectively. Pearson’s correlation analysis assessed volume association: ρ is the correlation coefficient, and m and q (cm3) are the slope and the bias of the linear regression line, respectively. Normal distributions were characterized in terms of mean value (MN) and standard deviation (SD). Statistical significance was set at 0.05.
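The volume-agreement statistics described above can be sketched with numpy (a minimal illustration under our own naming; the paired t-test and significance testing are omitted):

```python
import numpy as np

def volume_agreement(v_gt, v_pre):
    """Bland-Altman differences/averages and Pearson correlation with
    linear regression between ground-truth and predicted volumes (cm^3).
    (Minimal sketch, not the authors' actual analysis code.)"""
    v_gt = np.asarray(v_gt, dtype=float)
    v_pre = np.asarray(v_pre, dtype=float)
    delta = v_gt - v_pre             # point-to-point differences (delta)
    mu = (v_gt + v_pre) / 2.0        # point-to-point averages (mu)
    rho = np.corrcoef(v_gt, v_pre)[0, 1]   # Pearson correlation
    m, q = np.polyfit(v_gt, v_pre, 1)      # slope and bias of regression
    return delta, mu, rho, m, q
```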
Both technical evaluation and clinical assessment were performed by considering the entire dataset (overall analysis), the data division sets (training, validation, and test), and the tumor size. Specifically, a tumor was classified as small, medium, or large if its VGT was lower than 15 cm3, between 15 and 60 cm3, or higher than 60 cm3, respectively. Finally, data division and tumor size stratification were merged into subgroups in order to assess the performance of LuCa in depth.
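The size stratification reduces to a simple thresholding rule (the handling of volumes exactly at 15 and 60 cm3 is our own assumption, as the text leaves the boundaries unspecified):

```python
def tumor_size_class(v_gt_cm3):
    """Size stratification used for subgroup analysis: small < 15 cm^3,
    large > 60 cm^3, medium in between (boundary handling assumed)."""
    if v_gt_cm3 < 15.0:
        return "small"
    if v_gt_cm3 > 60.0:
        return "large"
    return "medium"
```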

3. Results

Results of technical evaluation and clinical assessment of LuCa are reported in Table 1.
Examples of subgroup-stratified 2-D results with ground truth are reported in Figure 2. Technically, the high test-set values of DC (87 ± 12%), IoU (81 ± 17%), SEN (84 ± 16%), and PPV (94 ± 10%) indicated a very good level of generalization of LuCa; delineation performance increased with tumor size, as expected. Distributions of VPRE were not statistically different from VGT distributions (p-value higher than 0.85), independently of dataset (training, validation, and test) and of tumor size (small, medium, and large). Results of the Bland–Altman and Pearson’s correlation analyses are reported in Figure 3 and Figure 4, showing low errors (δ < 5 cm3) and strong, statistically significant agreement (ρ > 0.82) between PRE and GT in all stratifications. Finally, Figure 5 depicts examples of the 3-D volume delineation.

4. Discussion

The study presents LuCa, a novel deep-learning approach for lung cancer segmentation and volume delineation. The main novelty of LuCa lies in the definition of a pipeline specifically designed to be integrated into clinical practice, in order to guarantee the usability of the method. The main concepts for its definition were reliability (i.e., good results in terms of cancer delineation) and usability (i.e., low computational effort, to allow integration into the real clinical scenario). The 2.5-D concept is designed to reduce computational effort by using a 2-D deep-learning model (U-Net) for image segmentation while preserving delineation accuracy through a 3-D image-processing tool for cancer volume delineation. U-Net was selected considering its reliability, transparency, and relatively low computational effort, offering a good tradeoff between computing and clinical use. Finally, the trained model can be exported to the Open Neural Network Exchange (ONNX) format and executed within secure hospital clouds using available runtime environments (e.g., ONNX Runtime), making it easily usable in clinical settings such as PACS systems.
As a common technical choice, data manipulation procedures (e.g., data augmentation) are used to increase the study population and improve the generalizability of a model. To ensure the population remained representative of the clinical problem, a static data division was applied without pre-processing the population with data augmentation procedures. Although augmentation is a well-established technique in computer science to improve generalization and robustness, in clinical settings it can lead to misinterpretation due to changes in the clinical realism of CT data and to biases in volumetric estimations. Preprocessing and postprocessing pipelines were designed to reduce storage and computational load, which are essential in the clinical scenario. Under these conditions, LuCa proved effective in lung tumor delineation, providing high statistical metrics (DC = 87%; IoU = 81%; SEN = 84%; PPV = 94%) on the testing dataset and thus confirming its ability to generalize to the clinical problem of interest.
Clinically, accurate tumor volume delineation is crucial at each stage of oncological patient management, from diagnosis to prognosis, as well as treatment planning. LuCa demonstrated a strong similarity/association between predicted and ground-truth tumor volumes, with no statistically significant differences across datasets (training, validation, and test) or tumor size groups (small, medium, and large). Despite this statistical equivalence, the performance of LuCa improves with cancer size. This result is expected according to two main factors. First, large tumors are more visible and easily detectable; in all image-processing fields, delineating large anatomical structures is an easier task than detecting small ones. Secondly, false positive/negative detections in small structures affect not only automatic analysis but also procedures based on visual inspection, such as the one used as ground truth in this study. Considering that deep-learning methods such as LuCa, being supervised techniques, are strongly affected by the gold standard, the low visibility of small tumors can be ascribed to issues related to both visual inspection and automatic tools. Finally, the literature states that AI-based tools in lung cancer screening may improve sensitivity but increase the number of false-positive results [31]. Thus, our method was designed to balance sensitivity and specificity, dealing with the risk of (not statistically significant) underestimation of small tumors.
In the literature, other algorithms have been proposed to perform lung cancer delineation. Five studies presented 2-D approaches [13,14,15,16,17], providing good results (dice coefficient range: 74–92%) in terms of technical metrics, but their 2-D nature makes them unsuitable for lung cancer volume delineation. On the other hand, eight studies presented 3-D end-to-end approaches [18,19,20,21,22,23,24,25] that reached very high technical results (dice coefficient range: 78–97%) and very good clinical performance in terms of cancer volume delineation; however, these models are not integrable into standard clinical practice, considering the high computational effort they require. Only two other studies [26,27] presented 2.5-D approaches similar to LuCa, the method presented in this paper. The results presented in [26,27] confirmed the effectiveness of the 2.5-D hypothesis but, despite their good technical results (dice coefficients of 75% and 82%, respectively), LuCa provided superior results (dice coefficient of 88%). Thus, the qualitative comparison of LuCa with already published studies in the field confirms that LuCa is a very good trade-off between clinical feature extraction and computational effort. Unfortunately, a quantitative comparison of LuCa with other methods in the literature is not possible, because most of the studies used private datasets, different pipelines, and different evaluation criteria, making the comparison unreliable.
Despite its promising performance, the study has several limitations that should be addressed in future work. The multicenter “NSCLC-Radiomics” database is a valid database for evaluating the performance of a deep-learning model for lung tumor segmentation, and its open access guarantees reproducibility. Nevertheless, it may not represent the full diversity of clinical cases. For example, most of the data (88%) belong to deceased subjects; thus, the utility of the method as a decision support system cannot be evaluated. Variability in data acquisition settings, image quality, and patient demographics should therefore be investigated to assess the generalizability of the model in every clinical scenario. In the future, merging diverse imaging modalities and datasets from multiple centers will help to evaluate the dependency of LuCa on the technical and demographic features of the oncological population. Moreover, LuCa needs ground truth to be trained, as other supervised procedures do. This remains an important technical issue, since it does not allow comparing the performance of the model with the clinical opinion, which is usually used as ground truth. Thus, future studies will aim to evaluate the clinical use of our deep-learning method by considering several clinical features and different tumor types and by varying its main parameters; moreover, unsupervised methodologies independent of manual annotation will be investigated. Finally, despite the superiority of the 2.5-D approach over the 3-D one in terms of computational effort, LuCa still needs advanced resources to be trained. Thus, exploring hybrid models (including attention mechanisms or transformer-based modules) could reduce the computational cost and, eventually, improve segmentation performance, guaranteeing the accessibility of LuCa in any kind of environment, especially in low-resource settings.

5. Conclusions

The main novelty of LuCa lies in the definition of a pipeline specifically designed to be integrated into clinical practice, in order to guarantee the usability of the method. LuCa proved effective in delineating lung cancer volumes using a 2.5-D approach, offering a computationally efficient and clinically reliable alternative to end-to-end 2-D and 3-D models. Future work will focus on multi-center testing, aiming to reproduce the quantitative results published in the literature.

Author Contributions

Conceptualization, M.C. and A.S.; methodology, M.C., M.J.M. and A.S.; software, M.C., G.B. and M.J.M.; validation, M.C., M.J.M. and A.S.; formal analysis, M.C. and G.B.; investigation, M.C. and G.B.; resources, L.B.; data curation, G.B.; writing—original draft preparation, M.C. and A.S.; writing—review and editing, G.B., M.J.M. and L.B.; visualization, A.S.; supervision, L.B. and A.S.; project administration, L.B.; funding acquisition, L.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are openly available in the “NSCLC-Radiomics dataset” on “The Cancer Imaging Archive” database [https://www.cancerimagingarchive.net/] (accessed on 10 November 2025).

Acknowledgments

The authors would like to thank Dr. Donatella Di Fabrizio (https://portale.ospedaliriuniti.marche.it/archivio10_personale-clinico-ed-amministrativo_0_2098.html, accessed on 10 November 2025) for her valuable contribution in reviewing the manuscript. She provided insightful clinical feedback that improved the quality of this work and confirmed the clinical use of LuCa.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bray, F.; Laversanne, M.; Sung, H.; Ferlay, J.; Siegel, R.L.; Soerjomataram, I.; Jemal, A. Global Cancer Statistics 2022: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2024, 74, 229–263.
  2. Cooper, W.A.; Lam, D.C.L.; O’Toole, S.A.; Minna, J.D. Molecular Biology of Lung Cancer. J. Thorac. Dis. 2013, 5, S479–S490.
  3. Thandra, K.C.; Barsouk, A.; Saginala, K.; Aluru, J.S.; Barsouk, A. Epidemiology of Lung Cancer. Contemp. Oncol. 2021, 25, 45–52.
  4. Malhotra, J.; Malvezzi, M.; Negri, E.; La Vecchia, C.; Boffetta, P. Risk Factors for Lung Cancer Worldwide. Eur. Respir. J. 2016, 48, 889–902.
  5. Collins, L.G.; Haines, C.; Perkel, R.; Enck, R.E. Lung Cancer: Diagnosis and Management. Am. Fam. Physician 2007, 75, 56–63.
  6. De Wever, W.; Coolen, J.; Verschakelen, J. Imaging Techniques in Lung Cancer. ERS J. 2011, 7, 338–346.
  7. Panunzio, A.; Sartori, P. Lung Cancer and Radiological Imaging. Curr. Radiopharm. 2020, 13, 238–242.
  8. The International Agency for Research on Cancer Pathology and Genetics of Tumours of the Lung, Pleura, Thymus and Heart (IARC WHO Classification of Tumours), 1st ed.; World Health Organization: Lyon, France, 2004.
  9. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444.
  10. Song, Y.; Liu, Y.; Lin, Z.; Zhou, J.; Li, D.; Zhou, T.; Leung, M.-F. Learning from AI-Generated Annotations for Medical Image Segmentation. IEEE Trans. Consum. Electron. 2024, 71, 1473–1481.
  11. Li, Y.; Hao, W.; Zeng, H.; Wang, L.; Xu, J.; Routray, S.; Jhaveri, R.H.; Gadekallu, T.R. Cross-Scale Texture Supplementation for Reference-Based Medical Image Super-Resolution. IEEE J. Biomed. Health Inform. 2025, 1–15.
  12. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional Neural Networks: An Overview and Application in Radiology. Insights Imaging 2018, 9, 611–629.
  13. Annavarapu, C.S.R.; Parisapogu, S.A.B.; Keetha, N.V.; Donta, P.K.; Rajita, G. A Bi-FPN-Based Encoder-Decoder Model for Lung Nodule Image Segmentation. Diagnostics 2023, 13, 1406.
  14. Banu, S.F.; Sarker, M.M.K.; Abdel Nasser, M.; Rashwan, H.; Puig, D. WEU-Net: A Weight Excitation U-Net for Lung Nodule Segmentation. Appl. Sci. 2021, 101, 349–356.
  15. Lu, D.; Chu, J.; Zhao, R.; Zhang, Y.; Tian, G. A Novel Deep Learning Network and Its Application for Pulmonary Nodule Segmentation. Comput. Intell. Neurosci. 2022, 2022, 1–6.
  16. Kunkyab, T.; Bahrami, Z.; Zhang, H.; Liu, Z.; Hyde, D. A Deep Learning-Based Framework (Co-Retr) for Auto-Segmentation of Non-Small Cell Lung Cancer in Computed Tomography Images. J. Appl. Clin. Med. Phys. 2024, 25, e14297.
  17. Zhang, F.; Wang, Q.; Fan, E.; Lu, N.; Chen, D.; Jiang, H.; Yu, Y. Enhancing Non-Small Cell Lung Cancer Tumor Segmentation with a Novel Two-Step Deep Learning Approach. J. Radiat. Res. Appl. Sci. 2024, 17, 100775.
  18. Zhao, L. 3D Densely Connected Convolution Neural Networks for Pulmonary Parenchyma Segmentation from CT Images. J. Phys. Conf. Ser. 2020, 1631, 12049.
  19. Agnes, S.A.; Anitha, J. Efficient Multiscale Fully Convolutional UNet Model for Segmentation of 3D Lung Nodule from CT Image. J. Med. Imaging 2022, 9, 052402.
  20. Weikert, T.; Jaeger, P.F.; Yang, S.; Baumgartner, M.; Breit, H.C.; Winkel, D.J.; Sommer, G.; Stieltjes, B.; Thaiss, W.; Bremerich, J.; et al. Automated Lung Cancer Assessment on 18F-PET/CT Using Retina U-Net and Anatomical Region Segmentation. Eur. Radiol. 2023, 33, 4270–4279.
  21. Chen, W.; Yang, F.; Zhang, X.; Xu, X.; Qiao, X. MAU-Net: Multiple Attention 3D U-Net for Lung Cancer Segmentation on CT Images. Procedia Comput. Sci. 2021, 192, 543–552.
  22. Park, J.; Kang, S.K.; Hwang, D.; Choi, H.; Ha, S.; Seo, J.M.; Eo, J.S.; Lee, J.S. Automatic Lung Cancer Segmentation in [18F]FDG PET/CT Using a Two-Stage Deep Learning Approach. Nucl. Med. Mol. Imaging 2023, 57, 86–93.
  23. Shirokikh, B.; Shevtsov, A.; Dalechina, A.; Krivov, E.; Kostjuchenko, V.; Golanov, A.; Gombolevskiy, V.; Morozov, S.; Belyaev, M. Accelerating 3D Medical Image Segmentation by Adaptive Small-Scale Target Localization. J. Imaging 2021, 7, 35.
  24. Zhong, Z.; Kim, Y.; Zhou, L.; Plichta, K.; Allen, B.; Buatti, J.; Wu, X. 3D Fully Convolutional Networks for Co-Segmentation of Tumors on PET-CT Images. In Proceedings of the IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 228–231.
  25. Kido, S.; Kidera, S.; Hirano, Y.; Mabu, S.; Kamiya, T.; Tanaka, N.; Suzuki, Y.; Yanagawa, M.; Tomiyama, N. Segmentation of Lung Nodules on CT Images Using a Nested Three-Dimensional Fully Connected Convolutional Network. Front. Artif. Intell. 2022, 5, 782225.
  26. Wang, Y.; Zhou, C.; Chan, H.-P.; Hadjiiski, L.M.; Chughtai, A.; Kazerooni, E.A. Hybrid U-Net-Based Deep Learning Model for Volume Segmentation of Lung Nodules in CT Images. Med. Phys. 2022, 49, 7287–7302.
  27. Wang, S.; Zhou, M.; Liu, Z.; Liu, Z.; Gu, D.; Zang, Y.; Dong, D.; Gevaert, O.; Tian, J. Central Focused Convolutional Neural Networks: Developing a Data-Driven Model for Lung Nodule Segmentation. Med. Image Anal. 2017, 40, 172–183.
  28. The Cancer Imaging Archive NSCLC-Radiomics. Available online: https://www.cancerimagingarchive.net/ (accessed on 10 November 2025).
  29. Aggarwal, C.C. Neural Networks and Deep Learning; Springer: Cham, Switzerland, 2023.
  30. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Springer: Cham, Switzerland, 2015; Volume 9351, pp. 234–241.
  31. Geppert, J.; Asgharzadeh, A.; Brown, A.; Stinton, C.; Helm, E.J.; Jayakody, S.; Todkill, D.; Gallacher, D.; Ghiasvand, H.; Patel, M.; et al. Software Using Artificial Intelligence for Nodule and Cancer Detection in CT Lung Cancer Screening: Systematic Review of Test Accuracy Studies. Thorax 2024, 79, 1040–1049.
Figure 1. Block diagram of LuCa, the proposed 2.5-D deep-learning method for lung cancer delineation. The input is the 3-D CT scan, which goes to pre-processing. The pre-processed 2-D CT slices are input to a U-Net, predicting the cancer areas. Finally, an image-processing-based model delineates lung cancer in 3-D, providing the volume prediction.
Figure 2. Examples of subgroup-stratified 2-D predictions (red areas) with ground truth (blue lines). Panels (A–C) show examples of small tumors obtained from the training, validation, and test sets, respectively. Panels (D–F) show examples of medium tumors obtained from the training, validation, and test sets, respectively. Panels (G–I) show examples of large tumors obtained from the training, validation, and test sets, respectively.
Figure 3. Bland–Altman analysis performed on the entire dataset (panel A) and by considering the data-division stratification (panel B), the tumor size (panel C), and the subgroups (panel D). The values of the mean error (MN) and the limits (+2 SD and −2 SD) are reported in the legend.
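The Bland–Altman statistics in the legend of Figure 3 (mean error MN and limits of agreement MN ± 2 SD) can be reproduced from the paired ground-truth and predicted volumes. The following sketch is illustrative only, not code from the paper; the function and variable names are assumptions:

```python
import numpy as np

def bland_altman(v_gt, v_pre):
    """Bland-Altman statistics for paired tumor-volume series (cm^3).

    Returns the mean error (MN) and the limits of agreement
    (MN - 2 SD, MN + 2 SD), as reported in the legend of Figure 3.
    """
    v_gt = np.asarray(v_gt, dtype=float)
    v_pre = np.asarray(v_pre, dtype=float)
    diff = v_pre - v_gt            # per-patient volume error
    mn = diff.mean()               # mean error (bias)
    sd = diff.std(ddof=1)          # sample standard deviation of the error
    return mn, mn - 2 * sd, mn + 2 * sd
```

Note that Bland–Altman plots conventionally plot the difference against the mean of the two measurements; only the summary statistics are computed here.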
Figure 4. Pearson’s correlation analysis performed on the entire dataset (panel A) and by considering the data-division stratification (panel B), the tumor size (panel C), and the subgroups (panel D). The correlation coefficient (ρ; * p-value lower than 0.05) and the coefficients of the regression line are reported (m: slope; q: bias).
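The quantities reported in Figure 4 (ρ, slope m, and bias q) can be computed from the same paired volume series. A minimal sketch, assuming least-squares regression of predicted on ground-truth volumes (the function name is illustrative, not from the paper):

```python
import numpy as np

def correlation_and_regression(v_gt, v_pre):
    """Pearson's correlation coefficient (rho) and the least-squares
    regression line v_pre = m * v_gt + q between ground-truth and
    predicted tumor volumes."""
    v_gt = np.asarray(v_gt, dtype=float)
    v_pre = np.asarray(v_pre, dtype=float)
    rho = np.corrcoef(v_gt, v_pre)[0, 1]   # Pearson's correlation
    m, q = np.polyfit(v_gt, v_pre, 1)      # slope and bias of the fit
    return rho, m, q
```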
Figure 5. Examples of the 3-D volume delineation by LuCa: GT 3-D volumes are depicted in red, while the PRE 3-D volumes are depicted in green. Panels (A–C) depict examples of small, medium, and large cancers, respectively.
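For reference, volumes such as VGT and VPRE in Table 1 can be obtained from a binary 3-D mask once the CT voxel spacing is known. This is a generic sketch under the assumption of isotropic-per-axis spacing taken from the scan metadata; the function name is illustrative:

```python
import numpy as np

def mask_volume_cm3(mask, spacing_mm):
    """Tumor volume (cm^3) of a binary 3-D mask, given the CT voxel
    spacing (dz, dy, dx) in millimetres."""
    voxel_mm3 = float(np.prod(spacing_mm))                # one voxel in mm^3
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0   # mm^3 -> cm^3
```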
Table 1. Results of the technical evaluation (DC, IoU, SEN, PPV) and the clinical assessment (VGT, VPRE) of LuCa. Normal distributions are reported in terms of mean value (MN) ± standard deviation (SD).

| Stratification | | Number | DC (%) | IoU (%) | SEN (%) | PPV (%) | VGT (cm3) | VPRE (cm3) |
|---|---|---|---|---|---|---|---|---|
| OVERALL | | 391 | 86.8 ± 14.9 | 79.3 ± 19.6 | 93.7 ± 9.7 | 83.6 ± 18.9 | 64.2 ± 79.5 | 61.5 ± 78.3 |
| Data division | TRAINING | 249 | 87.4 ± 14.2 | 80.0 ± 19.0 | 94.4 ± 8.0 | 83.7 ± 18.7 | 61.9 ± 79.4 | 59.2 ± 78.2 |
| | VALIDATION | 63 | 82.6 ± 19.4 | 74.2 ± 23.8 | 90.2 ± 14.2 | 81.2 ± 23.1 | 64.6 ± 93.5 | 64.1 ± 92.6 |
| | TEST | 79 | 88.5 ± 12.4 | 81.2 ± 17.2 | 94.1 ± 9.5 | 85.5 ± 15.7 | 71.3 ± 66.9 | 66.8 ± 65.3 |
| Tumor size | SMALL | 135 | 73.3 ± 17.5 | 60.7 ± 21.1 | 89.2 ± 12.0 | 66.4 ± 22.9 | 7.0 ± 3.7 | 5.6 ± 3.9 |
| | MEDIUM | 130 | 89.4 ± 10.9 | 82.1 ± 14.0 | 93.3 ± 9.6 | 87.6 ± 12.8 | 33.8 ± 13 | 32.3 ± 13.8 |
| | LARGE | 126 | 95.4 ± 5.7 | 91.8 ± 9.0 | 97.8 ± 4.6 | 93.7 ± 8.0 | 142.7 ± 89.9 | 137.9 ± 89.6 |
| Subgroups | TRAIN, SMALL | 89 | 73.5 ± 17.2 | 60.9 ± 20.9 | 90.3 ± 10.4 | 65.7 ± 22.5 | 6.6 ± 3.7 | 5.2 ± 3.8 |
| | TRAIN, MEDIUM | 85 | 91.5 ± 6.7 | 85.0 ± 10.3 | 94.6 ± 7.4 | 89.5 ± 9.2 | 34.1 ± 12.6 | 32.4 ± 12.4 |
| | TRAIN, LARGE | 75 | 95.8 ± 4.6 | 92.4 ± 7.6 | 98.2 ± 1.6 | 94.0 ± 7.4 | 145.0 ± 92.3 | 140.1 ± 92.3 |
| | VALIDATION, SMALL | 24 | 70.2 ± 18.8 | 57.1 ± 22.2 | 86.4 ± 15.8 | 64.8 ± 25.1 | 7.5 ± 4.1 | 5.9 ± 4.1 |
| | VALIDATION, MEDIUM | 20 | 83.8 ± 20.3 | 75.6 ± 21.7 | 88.9 ± 14.1 | 85.2 ± 21.6 | 33.1 ± 14.2 | 34.0 ± 20.1 |
| | VALIDATION, LARGE | 19 | 95.2 ± 8.1 | 91.6 ± 11.9 | 95.7 ± 11.1 | 95.2 ± 4.7 | 160.8 ± 117.1 | 160.2 ± 113.8 |
| | TEST, SMALL | 22 | 77.0 ± 17.4 | 65.3 ± 21 | 87.7 ± 13.0 | 73.0 ± 21.6 | 8.1 ± 3.2 | 7.0 ± 3.9 |
| | TEST, MEDIUM | 25 | 86.6 ± 10.3 | 77.7 ± 15.2 | 92.4 ± 11.2 | 83.3 ± 13.5 | 33.6 ± 13.9 | 30.3 ± 12.9 |
| | TEST, LARGE | 32 | 94.7 ± 6.5 | 90.5 ± 10.3 | 98.0 ± 2.3 | 92.2 ± 10.5 | 126.5 ± 61.9 | 119.7 ± 62.5 |

DC: dice coefficient; GT: ground truth; IoU: intersection over union; LARGE: volumes higher than 60 cm3; MEDIUM: volumes between 15 cm3 and 60 cm3; PPV: positive predictive value; SEN: sensitivity; SMALL: volumes lower than 15 cm3.
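The four technical-evaluation metrics in Table 1 are standard voxel-wise overlap measures. As a sketch (not the authors' implementation; names are illustrative), they can be computed from paired binary masks as follows:

```python
import numpy as np

def segmentation_metrics(gt, pre):
    """Voxel-wise overlap metrics between a ground-truth mask (gt) and a
    predicted mask (pre), both boolean arrays of the same shape."""
    gt = np.asarray(gt, dtype=bool)
    pre = np.asarray(pre, dtype=bool)
    tp = np.logical_and(gt, pre).sum()      # true-positive voxels
    fp = np.logical_and(~gt, pre).sum()     # false-positive voxels
    fn = np.logical_and(gt, ~pre).sum()     # false-negative voxels
    dc = 2 * tp / (2 * tp + fp + fn)        # dice coefficient
    iou = tp / (tp + fp + fn)               # intersection over union
    sen = tp / (tp + fn)                    # sensitivity (recall)
    ppv = tp / (tp + fp)                    # positive predictive value
    return dc, iou, sen, ppv
```

DC and IoU are monotonically related (DC = 2·IoU / (1 + IoU)), which is why the two columns in Table 1 rank the subgroups identically.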
Carletti, M.; Bruschi, G.; Mortada, M.J.; Burattini, L.; Sbrollini, A. LuCa: A Novel Method for Lung Cancer Delineation. Appl. Sci. 2025, 15, 12074. https://doi.org/10.3390/app152212074