Deep Learning for Adrenal Gland Segmentation: Comparing Accuracy and Efficiency Across Three Convolutional Neural Network Models
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Design and Population
2.2. Imaging Protocol
2.3. Data Preprocessing
- Resampling images to an isotropic resolution of 1 × 1 × 1 mm³;
- Cropping volumes around the adrenal regions;
- Normalizing pixel intensities using z-score normalization;
- Implementing data augmentation through random rotations (±15°), horizontal and vertical flips, contrast-limited adaptive histogram equalization (CLAHE), and brightness variation (±20%).
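To make the pipeline above concrete, the following Python sketch illustrates the resampling, z-score normalization, and augmentation steps. It is an illustrative reconstruction, not the code used in the study; the library choices (SciPy, scikit-image), the interpolation order, and the augmentation probabilities are assumptions, while the cropping step is assumed to happen upstream of these functions.

```python
import numpy as np
from scipy import ndimage
from skimage import exposure

def preprocess_volume(volume, spacing, target_spacing=(1.0, 1.0, 1.0)):
    """Resample a CT volume to 1 mm isotropic voxels and z-score normalize it.

    `volume` is a 3-D NumPy array; `spacing` is its voxel size in mm.
    Cropping around the adrenal region is assumed to be done beforehand.
    """
    zoom = np.asarray(spacing, dtype=float) / np.asarray(target_spacing, dtype=float)
    resampled = ndimage.zoom(volume, zoom, order=1)               # trilinear-style resampling
    normalized = (resampled - resampled.mean()) / (resampled.std() + 1e-8)
    return normalized

def augment_slice(img, rng):
    """Apply the listed augmentations to one 2-D slice (illustrative parameters)."""
    img = ndimage.rotate(img, rng.uniform(-15, 15), reshape=False, order=1)  # rotation ±15°
    if rng.random() < 0.5:
        img = np.fliplr(img)                                      # horizontal flip
    if rng.random() < 0.5:
        img = np.flipud(img)                                      # vertical flip
    lo, hi = img.min(), img.max()                                 # CLAHE expects values in [0, 1]
    img01 = (img - lo) / (hi - lo + 1e-8)
    img01 = exposure.equalize_adapthist(img01)
    img = img01 * (hi - lo) + lo
    return img * rng.uniform(0.8, 1.2)                            # brightness variation ±20%

# Usage: rng = np.random.default_rng(0); slice_aug = augment_slice(volume[:, :, k], rng)
```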
2.4. Neural Network Architectures
- U-Net: Featuring a symmetric encoder–decoder structure with skip connections that facilitate precise localization by combining high-resolution features from the contracting path with upsampled features.
- SegNet: An encoder–decoder design whose decoder reuses the max-pooling indices stored during encoding to perform non-parametric upsampling, which reduces memory use during decoding.
- NablaNet: A lightweight CNN architecture with fewer connections, designed to enhance processing speed, particularly suitable for edge computing applications.
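A minimal PyTorch sketch of the two decoding strategies contrasted above: U-Net-style upsampling with skip-connection concatenation versus SegNet-style unpooling with stored max-pooling indices. Channel widths, network depth, and class names are illustrative assumptions and do not reflect the exact configurations trained in this study.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic unit of both encoders."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """One-level U-Net: the skip connection concatenates encoder features into the decoder."""
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec = conv_block(64, 32)            # 64 = 32 (upsampled) + 32 (skip)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        skip = self.enc(x)
        x = self.bottleneck(self.pool(skip))
        x = self.up(x)
        x = self.dec(torch.cat([x, skip], dim=1))   # skip connection
        return self.head(x)

class TinySegNet(nn.Module):
    """One-level SegNet: the decoder reuses pooling indices instead of storing skip features."""
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, 32)
        self.pool = nn.MaxPool2d(2, return_indices=True)
        self.unpool = nn.MaxUnpool2d(2)
        self.dec = conv_block(32, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        x = self.enc(x)
        x, idx = self.pool(x)                    # keep only the pooling indices
        x = self.unpool(x, idx)                  # sparse, memory-light upsampling
        x = self.dec(x)
        return self.head(x)
```

The contrast shown here — concatenating full-resolution encoder features versus carrying only pooling indices — is the memory/accuracy trade-off that the two descriptions above refer to.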
2.5. Training Configuration
- Loss function: Weighted cross-entropy to address class imbalance between adrenal and non-adrenal tissue.
- Optimizer: Adam optimizer with an initial learning rate of 0.001 and scheduled decay.
- Batch size: 4.
- Epochs: 100.
- Hardware: NVIDIA RTX 2080 Ti (11 GB VRAM).
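The configuration above maps onto a standard PyTorch training loop such as the sketch below. Only the values listed above (Adam, initial learning rate 0.001, batch size 4, 100 epochs, weighted cross-entropy) come from the study; the class weights, the step-decay schedule, and the `train_dataset` placeholder are illustrative assumptions, and `TinyUNet` refers to the sketch in Section 2.4 above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = TinyUNet().to(device)                                   # one of the three networks
class_weights = torch.tensor([0.1, 0.9], device=device)         # illustrative: up-weight the small adrenal class
criterion = nn.CrossEntropyLoss(weight=class_weights)           # weighted cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)       # initial learning rate 0.001
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)  # assumed decay schedule
loader = DataLoader(train_dataset, batch_size=4, shuffle=True)  # train_dataset yields (image, mask) pairs

for epoch in range(100):
    model.train()
    for images, masks in loader:              # masks: integer labels (0 = background, 1 = adrenal)
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), masks)
        loss.backward()
        optimizer.step()
    scheduler.step()                          # scheduled learning-rate decay
```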
2.6. Evaluation Metrics
- Dice Similarity Coefficient (DSC): Primary metric for segmentation accuracy.
- Sensitivity (Recall): Measure of correctly identified adrenal tissue.
- Specificity: Measure of correctly excluded non-adrenal tissue.
- 95th Percentile Hausdorff Distance (HD95): Assessment of boundary delineation accuracy.
- Time-To-Model-Integration-in-Production (TTMIP): Time required to deploy the trained model and produce a segmentation for a new case, reported in seconds per gland.
- Time-Of-Training-Models (TOTM): Mean training time per epoch.
- Time-Of-Preprocessing-Input (TOPPI): Time required for image preparation per dataset.
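The overlap and boundary metrics listed above can be computed from binary prediction and ground-truth masks as in the NumPy/SciPy sketch below. The distance-transform construction of HD95 is one common implementation, assumed here rather than taken from the study; the default spacing matches the 1 mm isotropic resampling described in Section 2.3.

```python
import numpy as np
from scipy import ndimage

def dice(pred, gt):
    """Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def sensitivity(pred, gt):
    """TP / (TP + FN): fraction of true adrenal voxels that were detected."""
    tp = np.logical_and(pred, gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fn + 1e-8)

def specificity(pred, gt):
    """TN / (TN + FP): fraction of non-adrenal voxels correctly excluded."""
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    return tn / (tn + fp + 1e-8)

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric surface distance, in mm."""
    pred_surf = pred ^ ndimage.binary_erosion(pred)      # boundary voxels of the prediction
    gt_surf = gt ^ ndimage.binary_erosion(gt)            # boundary voxels of the ground truth
    if not pred_surf.any() or not gt_surf.any():
        return float("inf")                              # undefined when one mask is empty
    dt_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    dists = np.concatenate([dt_gt[pred_surf], dt_pred[gt_surf]])
    return np.percentile(dists, 95)

# Usage: dice(pred_mask.astype(bool), gt_mask.astype(bool)), hd95(pred_mask.astype(bool), gt_mask.astype(bool))
```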
3. Results
3.1. Segmentation Performance
3.2. Computational Efficiency Analysis
3.3. Statistical Analysis
3.4. Qualitative Analysis
3.5. Comparative Performance Context
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Kempná, P.; Flück, C.E. Adrenal development and disease. Endocrinol. Metab. Clin. N. Am. 2015, 44, 269–289. [Google Scholar]
- Miller, W.L.; Auchus, R.J. The molecular biology, biochemistry, and physiology of human steroidogenesis and its disorders. Endocr. Rev. 2011, 32, 81–151. [Google Scholar] [CrossRef]
- Mitani, F. Functional zonation of the rat adrenal cortex: The development and maintenance. Proc. Jpn. Acad. Ser. B Phys. Biol. Sci. 2014, 90, 163–183. [Google Scholar] [CrossRef]
- Blake, M.A.; Holalkere, N.S.; Boland, G.W. Imaging techniques for adrenal lesion characterization. Radiol. Clin. N. Am. 2008, 46, 65–78. [Google Scholar] [CrossRef]
- Lockhart, M.E.; Smith, J.K.; Kenney, P.J. Imaging of adrenal masses. Eur. J. Radiol. 2002, 41, 95–112. [Google Scholar] [CrossRef]
- Torresan, F.; Ceccato, F.; Barbot, M.; Scaroni, C. Adrenal venous sampling: Technique, indications, and limitations. Front. Endocrinol. 2021, 12, 775624. [Google Scholar]
- Kim, M.; Park, B.K. Adrenal imaging: Current status and future perspectives. World J. Radiol. 2023, 15, 1–13. [Google Scholar]
- Fassnacht, M.; Arlt, W.; Bancos, I.; Dralle, H.; Newell-Price, J.; Sahdev, A.; Tabarin, A.; Terzolo, M.; Tsagarakis, S.; Dekkers, O.M. Management of adrenal incidentalomas: European Society of Endocrinology Clinical Practice Guideline in collaboration with the European Network for the Study of Adrenal Tumors. Eur. J. Endocrinol. 2016, 175, G1–G34. [Google Scholar] [CrossRef]
- Bovio, S.; Cataldi, A.; Reimondo, G.; Sperone, P.; Novello, S.; Berruti, A.; Borasio, P.; Fava, C.; Dogliotti, L.; Scagliotti, G.V.; et al. Prevalence of adrenal incidentaloma in a contemporary computerized tomography series. J. Endocrinol. Investig. 2006, 29, 298–302. [Google Scholar] [CrossRef]
- Kloos, R.T.; Gross, M.D.; Francis, I.R.; Korobkin, M.; Shapiro, B. Incidentally discovered adrenal masses. Endocr. Rev. 1995, 16, 460–484. [Google Scholar]
- Benitah, N.; Yeh, B.M.; Qayyum, A.; Williams, G.; Breiman, R.S.; Coakley, F.V. Minor morphologic abnormalities of adrenal glands at CT: Prognostic importance in patients with lung cancer. Radiology 2005, 235, 517–522. [Google Scholar] [CrossRef]
- Boland, G.W.; Blake, M.A.; Hahn, P.F.; Mayo-Smith, W.W. Incidental adrenal lesions: Principles, techniques, and algorithms for imaging characterization. Radiology 2008, 249, 756–775. [Google Scholar] [CrossRef]
- Johnson, P.T.; Horton, K.M.; Fishman, E.K. Adrenal imaging with multidetector CT: Evidence-based protocol optimization and interpretative practice. Radiographics 2009, 29, 1319–1331. [Google Scholar] [CrossRef]
- Lee, M.J.; Mayo-Smith, W.W.; Hahn, P.F.; Goldberg, M.A.; Boland, G.W.; Saini, S.; Papanicolaou, N. State-of-the-art MR imaging of the adrenal gland. Radiographics 2002, 22, 1231–1246. [Google Scholar] [CrossRef]
- Blake, M.A.; Kalra, M.K.; Sweeney, A.T.; Lucey, B.C.; Maher, M.M.; Sahani, D.V.; Halpern, E.F.; Mueller, P.R.; Hahn, P.F.; Boland, G.W. Distinguishing benign from malignant adrenal masses: Multi-detector row CT protocol with 10-minute delay. Radiology 2006, 238, 578–585. [Google Scholar] [CrossRef]
- Pham, D.L.; Xu, C.; Prince, J.L. Current methods in medical image segmentation. Annu. Rev. Biomed. Eng. 2000, 2, 315–337. [Google Scholar] [CrossRef]
- Patel, J.; Davenport, M.S.; Cohan, R.H.; Caoili, E.M. Can established CT attenuation and washout criteria for adrenal adenoma accurately exclude pheochromocytoma? AJR Am. J. Roentgenol. 2013, 201, 122–127. [Google Scholar] [CrossRef]
- Dunnick, N.R.; Korobkin, M. Imaging of adrenal incidentalomas: Current status. AJR Am. J. Roentgenol. 2002, 179, 559–568. [Google Scholar] [CrossRef]
- Sohaib, S.A.; Reznek, R.H. Adrenal imaging. BJU Int. 2000, 86 (Suppl. S1), 95–110. [Google Scholar] [CrossRef]
- Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [Google Scholar] [CrossRef]
- Hesamian, M.H.; Jia, W.; He, X.; Kennedy, P. Deep learning techniques for medical image segmentation: Achievements and challenges. J. Digit. Imaging 2019, 32, 582–596. [Google Scholar] [CrossRef]
- Christ, P.F.; Elshaer, M.E.A.; Ettlinger, F.; Tatavarty, S.; Bickel, M.; Bilic, P.; Rempfler, M.; Armbruster, M.; Hofmann, F.; D’Anastasi, M.; et al. Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016; Ourselin, S., Joskowicz, L., Sabuncu, M., Unal, G., Wells, W., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 9901. [Google Scholar] [CrossRef]
- Heller, N.; Sathianathen, N.; Kalapara, A.; Walczak, E.; Moore, K.; Kaluzniak, H.; Rosenberg, J.; Blake, P.; Rengel, Z.; Oestreich, M.; et al. The KiTS19 challenge data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes. arXiv 2019, arXiv:1904.00445. [Google Scholar]
- Isensee, F.; Jaeger, P.F.; Kohl, S.A.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W., Frangi, A., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9351. [Google Scholar] [CrossRef]
- Schlemper, J.; Oktay, O.; Schaap, M.; Heinrich, M.; Kainz, B.; Glocker, B.; Rueckert, D. Attention gated networks: Learning to leverage salient regions in medical images. Med. Image Anal. 2019, 53, 197–207. [Google Scholar] [CrossRef]
- Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers make strong encoders for medical image segmentation. IEEE Trans. Med. Imaging 2021, 40, 3747–3759. [Google Scholar]
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-Like Pure Transformer for Medical Image Segmentation. In Computer Vision—ECCV 2022 Workshops. ECCV 2022; Karlinsky, L., Michaeli, T., Nishino, K., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2023; Volume 13803. [Google Scholar] [CrossRef]
- Wang, Y.; Zhou, Y.; Shen, W.; Park, S.; Fishman, E.K.; Yuille, A.L. Abdominal multi-organ segmentation with organ-attention networks and statistical fusion. Med. Image Anal. 2019, 55, 88–102. [Google Scholar] [CrossRef]
- Gibson, E.; Giganti, F.; Hu, Y.; Bonmati, E.; Bandula, S.; Gurusamy, K.; Davidson, B.; Pereira, S.P.; Clarkson, M.J.; Barratt, D.C. Automatic multi-organ segmentation on abdominal CT with dense v-networks. IEEE Trans. Med. Imaging 2018, 37, 1822–1834. [Google Scholar] [CrossRef]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef]
- Fu, H.; Cheng, J.; Xu, Y.; Wong, D.W.K.; Liu, J.; Cao, X. Joint optic disc and cup segmentation based on multi-label deep network and polar transformation. IEEE Trans. Med. Imaging 2018, 37, 1597–1605. [Google Scholar] [CrossRef]
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Wang, J.; Li, X.; Ma, Z. Multi-Scale Three-Path Network (MSTP-Net): A new architecture for retinal vessel segmentation. Measurement 2023, 210, 112575. [Google Scholar] [CrossRef]
- Zhao, Y.; Li, X.; Zhou, C.; Peng, H.; Zheng, Z.; Chen, J.; Ding, W. A review of cancer data fusion methods based on deep learning. Inf. Fusion 2022, 86, 57–80. [Google Scholar] [CrossRef]
- Isensee, F.; Maier-Hein, K.H. An attempt at beating the 3D U-Net. arXiv 2019, arXiv:1908.02182. [Google Scholar]
- Zhou, Y.; Xie, L.; Shen, W.; Wang, Y.; Fishman, E.K.; Yuille, A.L. A Fixed-Point Model for Pancreas Segmentation in Abdominal CT Scans. In Medical Image Computing and Computer Assisted Intervention−MICCAI 2017; Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D., Duchesne, S., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2017; Volume 10433. [Google Scholar] [CrossRef]
- Taghanaki, S.A.; Abhishek, K.; Cohen, J.P.; Cohen-Adad, J.; Ghassan, H. Deep semantic segmentation of natural and medical images: A review. Artif. Intell. Rev. 2021, 54, 137–178. [Google Scholar] [CrossRef]
- Humpire-Mamani, G.E.; Setio, A.A.A.; van Ginneken, B.; Jacobs, C. Efficient organ localization using multi-task convolutional neural networks in thorax-abdomen CT scans. Phys. Med. Biol. 2018, 63, 085003. [Google Scholar] [CrossRef]
- Fu, Y.; Mazur, T.R.; Wu, X.; Liu, S.; Chang, X.; Lu, Y.; Li, H.H.; Kim, H.; Roach, M.C.; Henke, L.; et al. A novel MRI segmentation method using CNN-based correction network for MRI-guided adaptive radiotherapy. Med. Phys. 2018, 45, 5129–5137. [Google Scholar] [CrossRef]
- Roth, H.R.; Lu, L.; Farag, A.; Shin, H.C.; Liu, J.; Turkbey, E.B.; Summers, R.M. DeepOrgan: Multi-level Deep Convolutional Networks for Automated Pancreas Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W., Frangi, A., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9349. [Google Scholar] [CrossRef]
- Mayo-Smith, W.W.; Lee, M.J.; McNicholas, M.M.; Hahn, P.F.; Boland, G.W.; Saini, S. Characterization of adrenal masses (<5 cm) by use of chemical shift MR imaging: Observer performance versus quantitative measures. AJR Am. J. Roentgenol. 1995, 165, 91–95. [Google Scholar]
- Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.; Kainz, B.; et al. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
- Chen, J.; Mei, J.; Li, X.; Lu, Y.; Yu, Q.; Wei, Q.; Luo, X.; Xie, Y.; Adeli, E.; Wang, Y.; et al. TransUNet: Rethinking the U-Net architecture design for medical image segmentation through the lens of transformers. Med. Image Anal. 2024, 97, 103280. [Google Scholar] [CrossRef] [PubMed]
- Kamnitsas, K.; Bai, W.; Ferrante, E.; McDonagh, S.; Sinclair, M.; Pawlowski, N.; Rajchl, M.; Lee, M.; Kainz, B.; Rueckert, D.; et al. Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. BrainLes 2017; Crimi, A., Bakas, S., Kuijf, H., Menze, B., Reyes, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 10670. [Google Scholar] [CrossRef]
- Greenspan, H.; San José Estépar, R.; Niessen, W.J.; Siegel, E.; Nielsen, M. Position paper on COVID-19 imaging and AI: From the clinical needs and technological challenges to initial AI solutions at the lab and national level towards a new era for AI in healthcare. Med. Image Anal. 2020, 66, 101800. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Langlotz, C.P.; Allen, B.; Erickson, B.J.; Kalpathy-Cramer, J.; Bigelow, K.; Cook, T.S.; Flanders, A.E.; Lungren, M.P.; Mendelson, D.S.; Rudie, J.D.; et al. A roadmap for foundational research on artificial intelligence in medical imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology 2019, 291, 781–791. [Google Scholar] [CrossRef]
- Krähenbühl, P.; Koltun, V. Efficient inference in fully connected CRFs with Gaussian edge potentials. In Proceedings of the Advances in Neural Information Processing Systems—NeurIPS 2011, Granada, Spain, 12–15 December 2011; pp. 109–117. [Google Scholar]
- Dmitriev, K.; Kaufman, A.E.; Javed, A.A.; Hruban, R.H.; Fishman, E.K.; Lennon, A.M.; Saltz, J.H. Classification of Pancreatic Cysts in Computed Tomography Images Using a Random Forest and Convolutional Neural Network Ensemble. In Medical Image Computing and Computer Assisted Intervention−MICCAI 2017; Springer: Cham, Switzerland, 2017; Volume 10435, pp. 150–158. [Google Scholar] [CrossRef] [PubMed] [PubMed Central]
- Li, W.; Lin, Z.; Zhou, K.; Qi, L.; Wang, Y.; Jia, J. MAT: Mask-aware Transformer for Large Hole Image Inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 10758–10768. [Google Scholar] [CrossRef]
- Azizi, S.; Culp, L.; Freyberg, J.; Mustafa, B.; Baur, S.; Kornblith, S.; Chen, T.; Tomasev, N.; Mitrović, J.; Strachan, P.; et al. Self-supervised learning for medical image analysis using image context restoration. Med. Image Anal. 2023, 83, 102662. [Google Scholar]
| Model | Gland | Dice (Mean ± SD) | Precision | Sensitivity | Specificity | HD95 (mm) |
|---|---|---|---|---|---|---|
| U-Net | Right | 0.630 ± 0.05 | 0.649 | 0.612 | 0.998 | 8.2 |
| U-Net | Left | 0.660 ± 0.06 | 0.678 | 0.645 | 0.999 | 7.5 |
| NablaNet | Right | 0.552 ± 0.08 | 0.567 | 0.538 | 0.996 | 10.4 |
| NablaNet | Left | 0.550 ± 0.07 | 0.559 | 0.532 | 0.997 | 9.8 |
| SegNet | Right | 0.320 ± 0.10 | 0.315 | 0.305 | 0.995 | 14.7 |
| SegNet | Left | 0.335 ± 0.09 | 0.329 | 0.318 | 0.996 | 13.9 |
| KPI | U-Net | NablaNet | SegNet |
|---|---|---|---|
| TOTM (min/epoch) | 6.63 (right) / 8.10 (left) | 4.45 (right) / 6.01 (left) | 3.50 (right) / 4.22 (left) |
| TTMIP (s/gland) | 58.3 (right) / 72.2 (left) | 51.7 (right) / 65.0 (left) | 46.2 (right) / 58.4 (left) |
| TOPPI (preprocessing) | 20 min + 12.5 h (YAML development) | 20 min + 12.5 h (YAML development) | 20 min + 12.5 h (YAML development) |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).