Artificial Intelligence in Pancreatic Image Analysis: A Review
Abstract
1. Introduction
1.1. Contribution of This Review
- A brief description of PC, including its characteristics, subtypes, risk factors, precursor lesions, and clinical challenges.
- A summary of the main AI tasks, representative models for each task, and the metrics used to evaluate model performance on each.
- An outline of publicly available pancreatic image datasets across different modalities, with comparisons of AI model performance on several of them.
- A description of the imaging features of CT, MRI, EUS, PET, and pathological images, together with a comprehensive discussion of how AI models are applied to different tasks on each modality and on their combinations.
- A summary of visualization tools, deep learning frameworks, and software for processing and analyzing pancreatic images.
- A discussion of current clinical challenges and future research directions for AI models to improve the outcomes of PC diagnosis and treatment.
1.2. Structure of This Review
2. Materials and Methods
2.1. Search Strategy and Literature Sources
2.2. Selection Criteria
2.3. Results
3. Pancreatic Cancer and Clinical Challenges
3.1. Introduction to Pancreatic Cancer
3.1.1. Pancreatic Ductal Adenocarcinoma
3.1.2. Pancreatic Neuroendocrine Tumors
3.2. Clinical Challenges of PC Diagnosis and Treatment
4. Public Data Sources
4.1. NIH (National Institutes of Health) [67]
4.2. AbdomenCT-1K [68]
4.3. BTCV (Beyond the Cranial Vault Multi-Organ Segmentation Challenge) [69]
4.4. WORD (Whole Abdominal Organ Dataset) [70]
4.5. MSD (Medical Segmentation Decathlon) [71]
4.6. Dataset of Manually Segmented Pancreatic Cystic Lesions in CT Images [73]
4.7. TCGA (The Cancer Genome Atlas) [75]
4.8. SEER (Surveillance, Epidemiology, and End Results Program) [41]
4.9. The PANORAMA Challenge (Pancreatic Cancer Diagnosis: Radiologists Meet AI) [76]
4.10. LEPset [77]
4.11. PAIP 2023 (Tumor Cellularity Prediction in Pancreatic Cancer) [78]
4.12. Dataset from the Study by Grizzi et al. [79]
5. AI Tasks, Models, and Evaluation Metrics
5.1. Classification
5.1.1. Introduction to Classification
5.1.2. Evaluation Metrics for Classification
5.2. Segmentation
5.2.1. Introduction to Segmentation
5.2.2. Evaluation Metrics for Segmentation
5.3. Object Detection
5.3.1. Introduction to Object Detection
5.3.2. Evaluation Metrics for Object Detection
5.4. Prognosis Prediction
5.4.1. Introduction to Prognosis Prediction
5.4.2. Evaluation Metrics for Prognosis Prediction
5.5. Other Tasks
6. Computed Tomography (CT)
6.1. Introduction to CT
6.2. Classification
6.3. Segmentation
6.4. Object Detection
6.5. Prognosis Prediction
6.6. Other Tasks
7. Magnetic Resonance Imaging (MRI)
7.1. Introduction to MRI
7.2. Classification
7.3. Segmentation
7.4. Object Detection
7.5. Prognosis Prediction
7.6. Other Tasks
8. Endoscopic Ultrasonography (EUS)
8.1. Introduction to EUS
8.2. Classification
8.3. Segmentation
8.4. Object Detection
8.5. Other Tasks
9. Positron Emission Tomography (PET)
9.1. Introduction to PET
9.2. Classification
9.3. Segmentation
9.4. Object Detection
9.5. Prognosis Prediction
10. Pathological Images
10.1. Introduction to Pathological Images
10.2. Classification
10.3. Segmentation
10.4. Other Tasks
11. Multiple Modalities Analysis
11.1. Traditional Machine Learning
11.2. Multi-Modal Fusion
11.3. Cross-Modality Transfer Learning
11.4. Deep Learning-Based Image Modality Conversion
11.5. Multi-Modality Multi-Task Models
12. Tools, Frameworks, and Software
12.1. Visualization and Annotation Tools
12.2. Platforms, Software, and Packages for Radiomics
12.3. Deep Learning Frameworks Designed for Medical Image Analysis
13. Special Topics and Future Directions
13.1. Efficient and Light Model Design
13.2. Domain Generalization
13.3. Multimodal Tasks
13.4. Large Model Empowered Solutions
13.5. Explainability
14. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Mizrahi, J.D.; Surana, R.; Valle, J.W.; Shroff, R.T. Pancreatic cancer. Lancet 2020, 395, 2008–2020. [Google Scholar]
- Kamisawa, T.; Wood, L.D.; Itoi, T.; Takaori, K. Pancreatic cancer. Lancet 2016, 388, 73–85. [Google Scholar] [CrossRef]
- Siegel, R.L.; Miller, K.D.; Wagle, N.S.; Jemal, A. Cancer statistics, 2023. CA A Cancer J. Clin. 2023, 73, 17–48. [Google Scholar] [CrossRef]
- Lee, E.S.; Lee, J.M. Imaging diagnosis of pancreatic cancer: A state-of-the-art review. World J. Gastroenterol. 2014, 20, 7864. [Google Scholar] [CrossRef] [PubMed]
- Udare, A.; Agarwal, M.; Alabousi, M.; McInnes, M.; Rubino, J.G.; Marcaccio, M.; van der Pol, C.B. Diagnostic Accuracy of MRI for Differentiation of Benign and Malignant Pancreatic Cystic Lesions Compared to CT and Endoscopic Ultrasound: Systematic Review and Meta-analysis. J. Magn. Reson. Imaging 2021, 54, 1126–1137. [Google Scholar] [CrossRef] [PubMed]
- Coleman, R.E. Single photon emission computed tomography and positron emission tomography in cancer imaging. Cancer 1991, 67, 1261–1270. [Google Scholar] [CrossRef]
- Hsieh, J.; Flohr, T. Computed tomography recent history and future perspectives. J. Med. Imaging 2021, 8, 052109. [Google Scholar] [CrossRef] [PubMed]
- Tonini, V.; Zanni, M. Pancreatic cancer in 2021: What you need to know to win. World J. Gastroenterol. 2021, 27, 5851. [Google Scholar] [CrossRef]
- Goyal, H.; Sherazi, S.A.A.; Gupta, S.; Perisetti, A.; Achebe, I.; Ali, A.; Tharian, B.; Thosani, N.; Sharma, N.R. Application of artificial intelligence in diagnosis of pancreatic malignancies by endoscopic ultrasound: A systemic review. Ther. Adv. Gastroenterol. 2022, 15, 17562848221093873. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Liu, K.L.; Wu, T.; Chen, P.T.; Tsai, Y.M.; Roth, H.; Wu, M.S.; Wang, W. Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: A retrospective study with cross-racial external validation. Lancet Digit. Health 2020, 2, e303–e313. [Google Scholar] [CrossRef]
- Chen, P.T.; Wu, T.; Wang, P.; Chang, D.; Liu, K.L.; Wu, M.S.; Wang, W. Pancreatic cancer detection on CT scans with deep learning: A nationwide population-based study. Radiology 2023, 306, 172–182. [Google Scholar] [CrossRef] [PubMed]
- Ahmed, T.M.; Kawamoto, S.; Hruban, R.H.; Fishman, E.K.; Soyer, P.; Chu, L.C. A primer on artificial intelligence in pancreatic imaging. Diagn. Interv. Imaging 2023, 104, 435–447. [Google Scholar] [CrossRef] [PubMed]
- Chu, L.C.; Fishman, E.K. Artificial intelligence outperforms radiologists for pancreatic cancer lymph node metastasis prediction at ct. Radiology 2023, 306, 170–171. [Google Scholar] [CrossRef]
- Bian, Y.; Zheng, Z.; Fang, X.; Jiang, H.; Zhu, M.; Yu, J.; Zhao, H.; Zhang, L.; Yao, J.; Lu, L.; et al. Artificial intelligence to predict lymph node metastasis at CT in pancreatic ductal adenocarcinoma. Radiology 2023, 306, 160–169. [Google Scholar] [CrossRef]
- Huang, B.; Huang, H.; Zhang, S.; Zhang, D.; Shi, Q.; Liu, J.; Guo, J. Artificial intelligence in pancreatic cancer. Theranostics 2022, 12, 6931. [Google Scholar] [CrossRef]
- Cazacu, I.; Udristoiu, A.; Gruionu, L.; Iacob, A.; Gruionu, G.; Saftoiu, A. Artificial intelligence in pancreatic cancer: Toward precision diagnosis. Endosc. Ultrasound 2019, 8, 357–359. [Google Scholar]
- Pereira, S.P.; Oldfield, L.; Ney, A.; Hart, P.A.; Keane, M.G.; Pandol, S.J.; Li, D.; Greenhalf, W.; Jeon, C.Y.; Koay, E.J.; et al. Early detection of pancreatic cancer. Lancet Gastroenterol. Hepatol. 2020, 5, 698–710. [Google Scholar]
- Kenner, B.; Chari, S.T.; Kelsen, D.; Klimstra, D.S.; Pandol, S.J.; Rosenthal, M.; Rustgi, A.K.; Taylor, J.A.; Yala, A.; Abul-Husn, N.; et al. Artificial intelligence and early detection of pancreatic cancer: 2020 summative review. Pancreas 2021, 50, 251–279. [Google Scholar] [CrossRef] [PubMed]
- Yang, J.; Xu, R.; Wang, C.; Qiu, J.; Ren, B.; You, L. Early screening and diagnosis strategies of pancreatic cancer: A comprehensive review. Cancer Commun. 2021, 41, 1257–1274. [Google Scholar] [CrossRef]
- Hameed, B.S.; Krishnan, U.M. Artificial Intelligence-Driven Diagnosis of Pancreatic Cancer. Cancers 2022, 14, 5382. [Google Scholar] [CrossRef] [PubMed]
- Schlanger, D.; Graur, F.; Popa, C.; Moiš, E.; Al Hajjar, N. The role of artificial intelligence in pancreatic surgery: A systematic review. Updat. Surg. 2022, 74, 417–429. [Google Scholar] [CrossRef] [PubMed]
- Mikdadi, D.; O’Connell, K.A.; Meacham, P.J.; Dugan, M.A.; Ojiere, M.O.; Carlson, T.B.; Klenk, J.A. Applications of artificial intelligence (AI) in ovarian cancer, pancreatic cancer, and image biomarker discovery. Cancer Biomarkers 2022, 33, 173–184. [Google Scholar] [CrossRef] [PubMed]
- Jan, Z.; El Assadi, F.; Abd-Alrazaq, A.; Jithesh, P. Artificial intelligence for the prediction and early diagnosis of pancreatic cancer: Scoping review. J. Med. Internet Res. 2023, 25, e44248. [Google Scholar] [CrossRef] [PubMed]
- Katta, M.; Kalluru, P.; Bavishi, D.; Hameed, M.; Valisekka, S. Artificial intelligence in pancreatic cancer: Diagnosis, limitations, and the future prospects—A narrative review. J. Cancer Res. Clin. Oncol. 2023, 149, 6743–6751. [Google Scholar] [CrossRef] [PubMed]
- Zhao, G.; Chen, X.; Zhu, M.; Liu, Y.; Wang, Y. Exploring the application and future outlook of Artificial intelligence in pancreatic cancer. Front. Oncol. 2024, 14, 1345810. [Google Scholar] [CrossRef] [PubMed]
- Daher, H.; Punchayil, S.A.; Ismail, A.A.E.; Fernandes, R.R.; Jacob, J.; Algazzar, M.H.; Mansour, M. Advancements in Pancreatic Cancer Detection: Integrating Biomarkers, Imaging Technologies, and Machine Learning for Early Diagnosis. Cureus 2024, 16, e56583. [Google Scholar] [CrossRef] [PubMed]
- Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Int. J. Surg. 2010, 8, 336–341. [Google Scholar] [CrossRef]
- Aier, I.; Semwal, R.; Sharma, A.; Varadwaj, P.K. A systematic assessment of statistics, risk factors, and underlying features involved in pancreatic cancer. Cancer Epidemiol. 2019, 58, 104–110. [Google Scholar] [CrossRef] [PubMed]
- Klein, A.P. Pancreatic cancer epidemiology: Understanding the role of lifestyle and inherited risk factors. Nat. Rev. Gastroenterol. Hepatol. 2021, 18, 493–502. [Google Scholar] [CrossRef]
- Poddighe, D. Autoimmune pancreatitis and pancreatic cancer: Epidemiological aspects and immunological considerations. World J. Gastroenterol. 2021, 27, 3825–3836. [Google Scholar] [CrossRef] [PubMed]
- Distler, M.; Aust, D.; Weitz, J.; Pilarsky, C.; Grützmann, R. Precursor lesions for sporadic pancreatic cancer: PanIN, IPMN, and MCN. BioMed Res. Int. 2014, 2014, 474905. [Google Scholar] [CrossRef] [PubMed]
- Rawla, P.; Sunkara, T.; Gaduputi, V. Epidemiology of pancreatic cancer: Global trends, etiology and risk factors. World J. Oncol. 2019, 10, 10–27. [Google Scholar] [CrossRef] [PubMed]
- Hidalgo, M.; Cascinu, S.; Kleeff, J.; Labianca, R.; Löhr, J.M.; Neoptolemos, J.; Real, F.X.; Van Laethem, J.L.; Heinemann, V. Addressing the challenges of pancreatic cancer: Future directions for improving outcomes. Pancreatology 2015, 15, 8–18. [Google Scholar] [CrossRef]
- Vassos, N.; Agaimy, A.; Klein, P.; Hohenberger, W.; Croner, R.S. Solid-pseudopapillary neoplasm (SPN) of the pancreas: Case series and literature review on an enigmatic entity. Int. J. Clin. Exp. Pathol. 2013, 6, 1051. [Google Scholar] [PubMed]
- Fang, Y.; Su, Z.; Xie, J.; Xue, R.; Ma, Q.; Li, Y.; Zhao, Y.; Song, Z.; Lu, X.; Li, H.; et al. Genomic signatures of pancreatic adenosquamous carcinoma (PASC). J. Pathol. 2017, 243, 155–159. [Google Scholar] [CrossRef]
- Kitagami, H.; Kondo, S.; Hirano, S.; Kawakami, H.; Egawa, S.; Tanaka, M. Acinar cell carcinoma of the pancreas: Clinical analysis of 115 patients from Pancreatic Cancer Registry of Japan Pancreas Society. Pancreas 2007, 35, 42–46. [Google Scholar] [CrossRef] [PubMed]
- Reid, M.D.; Choi, H.; Balci, S.; Akkas, G.; Adsay, V. Serous cystic neoplasms of the pancreas: Clinicopathologic and molecular characteristics. Semin. Diagn. Pathol. 2014, 31, 475–483. [Google Scholar] [CrossRef]
- Bochis, O.; Bota, M.; Mihut, E.; Buiga, R.; Hazbei, D.; Irimie, A. Solid pseudopapillary tumor of the pancreas: Clinical-pathological features and management of 13 cases. Clujul Med. 2017, 90, 171–178. [Google Scholar] [CrossRef]
- Backx, E.; Coolens, K.; Van den Bossche, J.L.; Houbracken, I.; Espinet, E.; Rooman, I. On the origin of pancreatic cancer: Molecular tumor subtypes in perspective of exocrine cell plasticity. Cell. Mol. Gastroenterol. Hepatol. 2022, 13, 1243–1253. [Google Scholar] [CrossRef]
- National Cancer Institute. SEER Cancer Statistics Review 1975–2017; National Cancer Institute: Bethesda, MD, USA, 2020. Available online: https://seer.cancer.gov/csr/1975_2017/ (accessed on 15 July 2024).
- Artinyan, A.; Soriano, P.A.; Prendergast, C.; Low, T.; Ellenhorn, J.D.; Kim, J. The anatomic location of pancreatic cancer is a prognostic factor for survival. Hpb 2008, 10, 371–376. [Google Scholar] [CrossRef] [PubMed]
- Mostafa, M.E.; Erbarut-Seven, I.; Pehlivanoglu, B.; Adsay, V. Pathologic classification of “pancreatic cancers”: Current concepts and challenges. Chin. Clin. Oncol. 2017, 6, 59. [Google Scholar] [CrossRef] [PubMed]
- Raphael, B.J.; Hruban, R.H.; Aguirre, A.J.; Moffitt, R.A.; Yeh, J.J.; Stewart, C.; Robertson, A.G.; Cherniack, A.D.; Gupta, M.; Getz, G.; et al. Integrated genomic characterization of pancreatic ductal adenocarcinoma. Cancer Cell 2017, 32, 185–203. [Google Scholar] [CrossRef] [PubMed]
- Espinet, E.; Klein, L.; Puré, E.; Singh, S.K. Mechanisms of PDAC subtype heterogeneity and therapy response. Trends Cancer 2022, 8, 1060–1071. [Google Scholar] [CrossRef] [PubMed]
- Flowers, B.M.; Xu, H.; Mulligan, A.S.; Hanson, K.J.; Seoane, J.A.; Vogel, H.; Curtis, C.; Wood, L.D.; Attardi, L.D. Cell of origin influences pancreatic cancer subtype. Cancer Discov. 2021, 11, 660–677. [Google Scholar] [CrossRef] [PubMed]
- Guo, W.; Zhang, Y.; Guo, S.; Mei, Z.; Liao, H.; Dong, H.; Wu, K.; Ye, H.; Zhang, Y.; Zhu, Y.; et al. Tumor microbiome contributes to an aggressive phenotype in the basal-like subtype of pancreatic cancer. Commun. Biol. 2021, 4, 1019. [Google Scholar] [CrossRef] [PubMed]
- Halfdanarson, T.R.; Rabe, K.; Rubin, J.; Petersen, G. Pancreatic neuroendocrine tumors (PNETs): Incidence, prognosis and recent trend toward improved survival. Ann. Oncol. 2008, 19, 1727–1733. [Google Scholar] [CrossRef] [PubMed]
- Ellison, T.A.; Wolfgang, C.L.; Shi, C.; Cameron, J.L.; Murakami, P.; Mun, L.J.; Singhi, A.D.; Cornish, T.C.; Olino, K.; Meriden, Z.; et al. A single institution’s 26-year experience with nonfunctional pancreatic neuroendocrine tumors: A validation of current staging systems and a new prognostic nomogram. Ann. Surg. 2014, 259, 204–212. [Google Scholar] [CrossRef]
- Mpilla, G.B.; Philip, P.A.; El-Rayes, B.; Azmi, A.S. Pancreatic neuroendocrine tumors: Therapeutic challenges and research limitations. World J. Gastroenterol. 2020, 26, 4036. [Google Scholar] [CrossRef]
- Perri, G.; Prakash, L.R.; Katz, M.H. Pancreatic neuroendocrine tumors. Curr. Opin. Gastroenterol. 2019, 35, 468–477. [Google Scholar] [CrossRef]
- Pea, A.; Hruban, R.H.; Wood, L.D. Genetics of pancreatic neuroendocrine tumors: Implications for the clinic. Expert Rev. Gastroenterol. Hepatol. 2015, 9, 1407–1419. [Google Scholar] [CrossRef] [PubMed]
- Luo, S.; Wang, J.; Wu, L.; Wang, C.; Yang, J.; Li, M.; Zhang, L.; Ge, J.; Sun, C.; Li, E.; et al. Epidemiological trends for functional pancreatic neuroendocrine tumors: A study combining multiple imputation with age adjustment. Front. Endocrinol. 2023, 14, 1123642. [Google Scholar] [CrossRef] [PubMed]
- Nieveen van Dijkum, E.J.; Engelsman, A.F. Diagnosis and Management of Functional Pancreatic Neuroendocrine Tumors. In Endocrine Surgery Comprehensive Board Exam Guide; Springer: Berlin/Heidelberg, Germany, 2022; pp. 681–693. [Google Scholar]
- Tsilimigras, D.; Pawlik, T. Pancreatic neuroendocrine tumours: Conservative versus surgical management. Br. J. Surg. 2021, 108, 1267–1269. [Google Scholar] [CrossRef]
- Kuo, J.H.; Lee, J.A.; Chabot, J.A. Nonfunctional pancreatic neuroendocrine tumors. Surg. Clin. 2014, 94, 689–708. [Google Scholar] [CrossRef] [PubMed]
- Dong, D.H.; Zhang, X.F.; Lopez-Aguiar, A.G.; Poultsides, G.; Makris, E.; Rocha, F.; Kanji, Z.; Weber, S.; Fisher, A.; Fields, R.; et al. Tumor burden score predicts tumor recurrence of non-functional pancreatic neuroendocrine tumors after curative resection. HPB 2020, 22, 1149–1157. [Google Scholar] [CrossRef] [PubMed]
- Zerbi, A.; Falconi, M.; Rindi, G.; Delle Fave, G.; Tomassetti, P.; Pasquali, C.; Capitanio, V.; Boninsegna, L.; Di Carlo, V.; Members of the AISP-Network Study Group; et al. Clinicopathological features of pancreatic endocrine tumors: A prospective multicenter study in Italy of 297 sporadic cases. Off. J. Am. Coll. Gastroenterol. ACG 2010, 105, 1421–1429. [Google Scholar] [CrossRef] [PubMed]
- Nigri, G.; Petrucciani, N.; Debs, T.; Mangogna, L.M.; Crovetto, A.; Moschetta, G.; Persechino, R.; Aurello, P.; Ramacciato, G. Treatment options for PNET liver metastases: A systematic review. World J. Surg. Oncol. 2018, 16, 142. [Google Scholar] [CrossRef]
- Srivastava, S.; Koay, E.J.; Borowsky, A.D.; De Marzo, A.M.; Ghosh, S.; Wagner, P.D.; Kramer, B.S. Cancer overdiagnosis: A biological challenge and clinical dilemma. Nat. Rev. Cancer 2019, 19, 349–358. [Google Scholar] [CrossRef] [PubMed]
- Macdonald, S.; Macleod, U.; Campbell, N.C.; Weller, D.; Mitchell, E. Systematic review of factors influencing patient and practitioner delay in diagnosis of upper gastrointestinal cancer. Br. J. Cancer 2006, 94, 1272–1280. [Google Scholar] [CrossRef]
- Zhang, L.; Sanagapalli, S.; Stoita, A. Challenges in diagnosis of pancreatic cancer. World J. Gastroenterol. 2018, 24, 2047. [Google Scholar] [CrossRef]
- Walter, F.M.; Mills, K.; Mendonça, S.C.; Abel, G.A.; Basu, B.; Carroll, N.; Ballard, S.; Lancaster, J.; Hamilton, W.; Rubin, G.P.; et al. Symptoms and patient factors associated with diagnostic intervals for pancreatic cancer (SYMPTOM pancreatic study): A prospective cohort study. Lancet Gastroenterol. Hepatol. 2016, 1, 298–306. [Google Scholar] [CrossRef] [PubMed]
- Jiang, S.; Fagman, J.B.; Ma, Y.; Liu, J.; Vihav, C.; Engstrom, C.; Liu, B.; Chen, C. A comprehensive review of pancreatic cancer and its therapeutic challenges. Aging 2022, 14, 7635. [Google Scholar] [CrossRef] [PubMed]
- Halbrook, C.J.; Lyssiotis, C.A.; di Magliano, M.P.; Maitra, A. Pancreatic cancer: Advances and challenges. Cell 2023, 186, 1729–1754. [Google Scholar] [CrossRef] [PubMed]
- Wood, L.D.; Canto, M.I.; Jaffee, E.M.; Simeone, D.M. Pancreatic cancer: Pathogenesis, screening, diagnosis, and treatment. Gastroenterology 2022, 163, 386–402. [Google Scholar] [CrossRef] [PubMed]
- Roth, H.R.; Lu, L.; Farag, A.; Shin, H.C.; Liu, J.; Turkbey, E.B.; Summers, R.M. Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation. In Proceedings, Part I 18, Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 556–564. [Google Scholar]
- Ma, J.; Zhang, Y.; Gu, S.; Zhu, C.; Ge, C.; Zhang, Y.; An, X.; Wang, C.; Wang, Q.; Liu, X.; et al. Abdomenct-1k: Is abdominal organ segmentation a solved problem? IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6695–6714. [Google Scholar] [CrossRef] [PubMed]
- Landman, B.; Xu, Z.; Igelsias, J.; Styner, M.; Langerak, T.; Klein, A. Multi-atlas labeling beyond the cranial vault–workshop and challenge. In Proceedings of the MICCAI Multi-Atlas Labeling Beyond Cranial Vault—Workshop Challenge, Munich, Germany, 5 October 2015; Available online: https://www.synapse.org (accessed on 15 July 2024).
- Luo, X.; Liao, W.; Xiao, J.; Chen, J.; Song, T.; Zhang, X.; Li, K.; Metaxas, D.N.; Wang, G.; Zhang, S. WORD: A large scale dataset, benchmark and clinical applicable study for abdominal organ segmentation from CT image. arXiv 2021, arXiv:2111.02403. [Google Scholar] [CrossRef] [PubMed]
- Simpson, A.L.; Antonelli, M.; Bakas, S.; Bilello, M.; Farahani, K.; Van Ginneken, B.; Kopp-Schneider, A.; Landman, B.A.; Litjens, G.; Menze, B.; et al. A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv 2019, arXiv:1902.09063. [Google Scholar]
- Yushkevich, P.A.; Gao, Y.; Gerig, G. ITK-SNAP: An interactive tool for semi-automatic segmentation of multi-modality biomedical images. In Proceedings of the 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Orlando, FL, USA, 16–20 August 2016; pp. 3342–3345. [Google Scholar]
- Abel, L.; Wasserthal, J.; Weikert, T.; Sauter, A.W.; Nesic, I.; Obradovic, M.; Yang, S.; Manneck, S.; Glessgen, C.; Ospel, J.M.; et al. Automated Detection of Pancreatic Cystic Lesions on CT Using Deep Learning. Diagnostics 2021, 11, 901. [Google Scholar] [CrossRef]
- Isensee, F.; Jaeger, P.F.; Kohl, S.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef]
- The Cancer Genome Atlas Research Network. The Cancer Genome Atlas. Nature 2014, 517, 547–555. [Google Scholar] [CrossRef]
- PANORAMA: Pancreatic Cancer Diagnosis—Radiologists Meet AI. Available online: https://panorama.grand-challenge.org/ (accessed on 15 July 2024).
- Li, J.; Zhang, P.; Wang, T.; Zhu, L.; Liu, R.; Yang, X.; Wang, K.; Shen, D.; Sheng, B. DSMT-Net: Dual Self-supervised Multi-operator Transformation for Multi-source Endoscopic Ultrasound Diagnosis. IEEE Trans. Med. Imaging 2023, 43, 64–75. [Google Scholar] [CrossRef] [PubMed]
- PAIP2023. 2023. Available online: https://2023paip.grand-challenge.org/ (accessed on 20 May 2024).
- Grizzi, F.; Fiorino, S.; Qehajaj, D. Computer-aided assessment of the extra-cellular matrix during pancreatic carcinogenesis: A pilot study. J. Transl. Med. 2019, 17, 61. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
- Altman, D.G.; Bland, J.M. Diagnostic tests. 1: Sensitivity and specificity. BMJ 1994, 308, 1552. [Google Scholar] [CrossRef]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings, Part III 18, Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
- Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. Unet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans. Med Imaging 2019, 39, 1856–1867. [Google Scholar] [CrossRef]
- Jha, D.; Smedsrud, P.H.; Riegler, M.A.; Johansen, D.; De Lange, T.; Halvorsen, P.; Johansen, H.D. Resunet++: An advanced architecture for medical image segmentation. In Proceedings of the 2019 IEEE International Symposium on Multimedia (ISM), San Diego, CA, USA, 9–11 December 2019; pp. 225–2255. [Google Scholar]
- Chen, Y.; Wang, K.; Liao, X.; Qian, Y.; Wang, Q.; Yuan, Z.; Heng, P.A. Channel-Unet: A spatial channel-wise convolutional neural network for liver and tumors segmentation. Front. Genet. 2019, 10, 1110. [Google Scholar] [CrossRef]
- Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.W.; Wu, J. Unet 3+: A full-scale connected unet for medical image segmentation. In Proceedings of the ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1055–1059. [Google Scholar]
- Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation from sparse annotation. In Proceedings, Part II 19, Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, 17–21 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 424–432. [Google Scholar]
- Chen, W.; Liu, B.; Peng, S.; Sun, J.; Qiao, X. S3D-UNet: Separable 3D U-Net for brain tumor segmentation. In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 4th International Workshop, BrainLes 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 16 September 2018, Revised Selected Papers, Part II 4; Springer: Berlin/Heidelberg, Germany, 2019; pp. 358–368. [Google Scholar]
- Abdollahi, A.; Pradhan, B.; Alamri, A. VNet: An end-to-end fully convolutional neural network for road extraction from high-resolution remote sensing data. IEEE Access 2020, 8, 179424–179436. [Google Scholar] [CrossRef]
- Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. Transunet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306. [Google Scholar]
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-unet: Unet-like pure transformer for medical image segmentation. In European Conference on Computer Vision; Springer: Cham, Switzerland, 2022; pp. 205–218. [Google Scholar]
- Sha, Y.; Zhang, Y.; Ji, X.; Hu, L. Transformer-unet: Raw image processing with unet. arXiv 2021, arXiv:2109.08417. [Google Scholar]
- Chen, B.; Liu, Y.; Zhang, Z.; Lu, G.; Kong, A.W.K. Transattunet: Multi-level attention-guided u-net with transformer for medical image segmentation. IEEE Trans. Emerg. Top. Comput. Intell. 2023, 8, 55–68. [Google Scholar] [CrossRef]
- Chen, J.; Mei, J.; Li, X.; Lu, Y.; Yu, Q.; Wei, Q.; Luo, X.; Xie, Y.; Adeli, E.; Wang, Y.; et al. 3d transunet: Advancing medical image segmentation through vision transformers. arXiv 2023, arXiv:2310.07781. [Google Scholar]
- Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
- Veličković, P.; Cucurull, G.; Casanova, A.; Romero, A.; Liò, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903. [Google Scholar]
- Xu, K.; Hu, W.; Leskovec, J.; Jegelka, S. How powerful are graph neural networks? arXiv 2018, arXiv:1810.00826. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28. [Google Scholar] [CrossRef] [PubMed]
- Cai, Z.; Vasconcelos, N. Cascade R-CNN: High Quality Object Detection and Instance Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 1483–1498. [Google Scholar] [CrossRef] [PubMed]
- Wang, X.; Kong, T.; Shen, C.; Jiang, Y.; Li, L. SOLO: Segmenting Objects by Locations. In European Conference on Computer Vision (ECCV); Springer: Cham, Switzerland, 2020. [Google Scholar]
- Wang, X.; Zhang, R.; Kong, T.; Li, L.; Shen, C. SOLOv2: Dynamic and Fast Instance Segmentation. Proc. Adv. Neural Inf. Process. Syst. (NeurIPS) 2020, 33, 17721–17732. [Google Scholar]
- Fang, Y.; Yang, S.; Wang, X.; Li, Y.; Fang, C.; Shan, Y.; Feng, B.; Liu, W. Instances as Queries. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 11–17 October 2021; pp. 6910–6919. [Google Scholar]
- Huttenlocher, D.P.; Klanderman, G.A.; Rucklidge, W.J. Comparing images using the Hausdorff distance. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 850–863. [Google Scholar] [CrossRef]
- Nikolov, S.; Blackwell, S.; Zverovitch, A.; Mendes, R.; Livne, M.; De Fauw, J.; Patel, Y.; Meyer, C.; Askham, H.; Romera-Paredes, B.; et al. Clinically applicable segmentation of head and neck anatomy for radiotherapy: Deep learning algorithm development and validation study. J. Med. Internet Res. 2021, 23, e26151. [Google Scholar] [CrossRef] [PubMed]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings, Part I 14, Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, faster, stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar] [CrossRef]
- Jocher, G.; Chaurasia, A.; Stoken, A.; Borovec, J.; Kwon, Y.; Michael, K.; Xie, T.; Fang, J.; Lorna; Zeng, Y.; et al. ultralytics/yolov5: v7.0—YOLOv5 SOTA Realtime Instance Segmentation. Zenodo 2022. [Google Scholar] [CrossRef]
- Li, C.; Li, L.; Geng, Y.; Jiang, H.; Cheng, M.; Zhang, B.; Ke, Z.; Xu, X.; Chu, X. YOLOv6 v3.0: A Full-Scale Reloading. arXiv 2023, arXiv:2301.05586. [Google Scholar] [CrossRef]
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
- Jocher, G.; Chaurasia, A.; Qiu, J. Ultralytics YOLOv8. Available online: https://zenodo.org/records/7347926 (accessed on 15 July 2024).
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-End Object Detection with Transformers. In Proceedings of the ECCV, Glasgow, UK, 23–28 August 2020. [Google Scholar]
- Kern, D.; Mastmeyer, A. 3D bounding box detection in volumetric medical image data: A systematic literature review. In Proceedings of the 2021 IEEE 8th International Conference on Industrial Engineering and Applications (ICIEA), Chengdu, China, 23–26 April 2021; pp. 509–516. [Google Scholar]
- De Vos, B.D.; Wolterink, J.M.; De Jong, P.A.; Viergever, M.A.; Išgum, I. 2D image classification for 3D anatomy localization: Employing deep convolutional neural networks. Med. Imaging 2016 Image Process. 2016, 9784, 517–523. [Google Scholar]
- Huang, R.; Xie, W.; Noble, J.A. VP-Nets: Efficient automatic localization of key brain structures in 3D fetal neurosonography. Med. Image Anal. 2018, 47, 127–139. [Google Scholar] [CrossRef]
- Blair, S.I.A.S.A.; White, C.; Moses, L.D.D. Localization of lumbar and thoracic vertebrae in 3D CT datasets by combining deep reinforcement learning with imitation learning. 2018. Available online: https://cgi.cse.unsw.edu.au/~reports/papers/201803.pdf (accessed on 15 July 2024).
- Xu, X.; Zhou, F.; Liu, B.; Fu, D.; Bai, X. Efficient multiple organ localization in CT image using 3D region proposal network. IEEE Trans. Med. Imaging 2019, 38, 1885–1898. [Google Scholar] [CrossRef]
- Buzug, T.M. Computed tomography. In Springer Handbook of Medical Technology; Springer: Berlin/Heidelberg, Germany, 2011; pp. 311–342. [Google Scholar]
- Hasebroock, K.M.; Serkova, N.J. Toxicity of MRI and CT contrast agents. Expert Opin. Drug Metab. Toxicol. 2009, 5, 403–416. [Google Scholar] [CrossRef]
- Li, M.; Nie, X.; Reheman, Y.; Huang, P.; Zhang, S.; Yuan, Y.; Chen, C.; Yan, Z.; Chen, C.; Lv, X.; et al. Computer-aided diagnosis and staging of pancreatic cancer based on CT images. IEEE Access 2020, 8, 141705–141718. [Google Scholar] [CrossRef]
- Chen, P.T.; Chang, D.; Yen, H.; Liu, K.L.; Huang, S.Y.; Roth, H.; Wu, M.S.; Liao, W.C.; Wang, W. Radiomic features at CT can distinguish pancreatic cancer from noncancerous pancreas. Radiol. Imaging Cancer 2021, 3, e210010. [Google Scholar] [CrossRef] [PubMed]
- Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
- Mukherjee, S.; Patra, A.; Khasawneh, H.; Korfiatis, P.; Rajamohan, N.; Suman, G.; Majumder, S.; Panda, A.; Johnson, M.P.; Larson, N.B.; et al. Radiomics-based machine learning models can detect pancreatic cancer on prediagnostic computed tomography scans at a substantial lead time before clinical diagnosis. Gastroenterology 2022, 163, 1435–1446. [Google Scholar] [CrossRef] [PubMed]
- Xia, Y.; Yao, J.; Lu, L.; Huang, L.; Xie, G.; Xiao, J.; Yuille, A.; Cao, K.; Zhang, L. Effective pancreatic cancer screening on non-contrast CT scans via anatomy-aware transformers. In Proceedings, Part V 24, Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 259–269. [Google Scholar]
- Cao, K.; Xia, Y.; Yao, J.; Han, X.; Lambert, L.; Zhang, T.; Tang, W.; Jin, G.; Jiang, H.; Fang, X.; et al. Large-scale pancreatic cancer detection via non-contrast CT and deep learning. Nat. Med. 2023, 29, 3033–3043. [Google Scholar] [CrossRef] [PubMed]
- Vaiyapuri, T.; Dutta, A.K.; Punithavathi, I.H.; Duraipandy, P.; Alotaibi, S.S.; Alsolai, H.; Mohamed, A.; Mahgoub, H. Intelligent deep-learning-enabled decision-making medical system for pancreatic tumor classification on CT images. Healthcare 2022, 10, 677. [Google Scholar] [CrossRef] [PubMed]
- Huy, H.Q.; Dat, N.T.; Hiep, D.N.; Tram, N.N.; Vu, T.A.; Huong, P.T.V. Pancreatic Cancer Detection Based on CT Images Using Deep Learning. In International Conference on Intelligent Systems & Networks; Springer: Singapore, 2023; pp. 66–72. [Google Scholar]
- Yang, R.; Chen, Y.; Sa, G.; Li, K.; Hu, H.; Zhou, J.; Guan, Q.; Chen, F. CT classification model of pancreatic serous cystic neoplasms and mucinous cystic neoplasms based on a deep neural network. Abdom. Radiol. 2022, 47, 232–241. [Google Scholar] [CrossRef]
- Bakasa, W.; Viriri, S. Stacked ensemble deep learning for pancreas cancer classification using extreme gradient boosting. Front. Artif. Intell. 2023, 6, 1232640. [Google Scholar] [CrossRef] [PubMed]
- Roth, H.R.; Farag, A.; Lu, L.; Turkbey, E.B.; Summers, R.M. Deep convolutional networks for pancreas segmentation in CT imaging. Med. Imaging 2015 Image Process. 2015, 9413, 378–385. [Google Scholar]
- Heinrich, M.P.; Oktay, O. BRIEFnet: Deep pancreas segmentation using binary sparse convolutions. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2017; pp. 329–337. [Google Scholar]
- Zhou, Y.; Xie, L.; Fishman, E.K.; Yuille, A.L. Deep supervision for pancreatic cyst segmentation in abdominal CT scans. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2017; pp. 222–230. [Google Scholar]
- Lee, C.Y.; Xie, S.; Gallagher, P.; Zhang, Z.; Tu, Z. Deeply-supervised nets. Artif. Intell. Stat. PMLR 2015, 38, 562–570. [Google Scholar]
- Lu, L.; Jian, L.; Luo, J.; Xiao, B. Pancreatic segmentation via ringed residual U-Net. IEEE Access 2019, 7, 172871–172878. [Google Scholar] [CrossRef]
- Boers, T.; Hu, Y.; Gibson, E.; Barratt, D.; Bonmati, E.; Krdzalic, J.; van der Heijden, F.; Hermans, J.; Huisman, H. Interactive 3D U-net for the segmentation of the pancreas in computed tomography scans. Phys. Med. Biol. 2020, 65, 065002. [Google Scholar] [CrossRef] [PubMed]
- Jiang, F.; Zhi, X.; Ding, X.; Tong, W.; Bian, Y. DLU-Net for pancreatic cancer segmentation. In Proceedings of the 2020 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Seoul, Republic of Korea, 16–19 December 2020; pp. 1024–1028. [Google Scholar]
- Li, F.; Li, W.; Shu, Y.; Qin, S.; Xiao, B.; Zhan, Z. Multiscale receptive field based on residual network for pancreas segmentation in CT images. Biomed. Signal Process. Control 2020, 57, 101828. [Google Scholar] [CrossRef]
- Li, Y.; Cai, W.; Gao, Y.; Li, C.; Hu, X. More than encoder: Introducing transformer decoder to upsample. In Proceedings of the 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Las Vegas, NV, USA, 6–8 December 2022; pp. 1597–1602. [Google Scholar]
- Paithane, P.; Kakarwal, S. LMNS-Net: Lightweight Multiscale Novel Semantic-Net deep learning approach used for automatic pancreas image segmentation in CT scan images. Expert Syst. Appl. 2023, 234, 121064. [Google Scholar] [CrossRef]
- Juwita, J.; Hassan, G.; Akhtar, N.; Datta, A. M3BUNet: Mobile Mean Max UNet for Pancreas Segmentation on CT-Scans. arXiv 2024, arXiv:2401.10419. [Google Scholar]
- Zhou, Z.; Bian, Y.; Pan, S.; Meng, Q.; Zhu, W.; Shi, F.; Chen, X.; Shao, C.; Xiang, D. A dual branch and fine-grained enhancement network for pancreatic tumor segmentation in contrast enhanced CT images. Biomed. Signal Process. Control 2023, 82, 104516. [Google Scholar] [CrossRef]
- Chen, X.; Chen, Z.; Li, J.; Zhang, Y.D.; Lin, X.; Qian, X. Model-driven deep learning method for pancreatic cancer segmentation based on spiral-transformation. IEEE Trans. Med. Imaging 2021, 41, 75–87. [Google Scholar] [CrossRef]
- Yu, L.; Yang, X.; Chen, H.; Qin, J.; Heng, P.A. Volumetric ConvNets with mixed residual connections for automated prostate segmentation from 3D MR images. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar]
- Roth, H.; Oda, M.; Shimizu, N.; Oda, H.; Hayashi, Y.; Kitasaka, T.; Fujiwara, M.; Misawa, K.; Mori, K. Towards dense volumetric pancreas segmentation in CT using 3D fully convolutional networks. Med. Imaging 2018 Image Process. 2018, 10574, 59–64. [Google Scholar]
- Chen, H.; Wang, X.; Huang, Y.; Wu, X.; Yu, Y.; Wang, L. Harnessing 2D networks and 3D features for automated pancreas segmentation from volumetric CT images. In Proceedings, Part VI 22, Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 339–347. [Google Scholar]
- Zhao, N.; Tong, N.; Ruan, D.; Sheng, K. Fully automated pancreas segmentation with two-stage 3D convolutional neural networks. In Proceedings, Part II 22, Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 201–209. [Google Scholar]
- Zhang, J.; Xie, Y.; Xia, Y.; Shen, C. DoDNet: Learning to segment multi-organ and tumors from multiple partially labeled datasets. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 1195–1204. [Google Scholar]
- Zhang, D.; Zhang, J.; Zhang, Q.; Han, J.; Zhang, S.; Han, J. Automatic pancreas segmentation based on lightweight DCNN modules and spatial prior propagation. Pattern Recognit. 2021, 114, 107762. [Google Scholar] [CrossRef]
- Isensee, F.; Petersen, J.; Klein, A.; Zimmerer, D.; Jaeger, P.F.; Kohl, S.; Wasserthal, J.; Koehler, G.; Norajitra, T.; Wirkert, S.; et al. nnU-Net: Self-adapting framework for U-Net-based medical image segmentation. arXiv 2018, arXiv:1809.10486. [Google Scholar]
- Yao, J.; Shi, Y.; Lu, L.; Xiao, J.; Zhang, L. DeepPrognosis: Preoperative Prediction of Pancreatic Cancer Survival and Surgical Margin via Contrast-Enhanced CT Imaging. arXiv 2020, arXiv:2008.11853. [Google Scholar] [CrossRef]
- Huang, X.; Deng, Z.; Li, D.; Yuan, X. MISSFormer: An effective medical image segmentation transformer. arXiv 2021, arXiv:2109.07162. [Google Scholar] [CrossRef]
- Dai, S.; Zhu, Y.; Jiang, X.; Yu, F.; Lin, J.; Yang, D. TD-Net: Trans-Deformer network for automatic pancreas segmentation. Neurocomputing 2023, 517, 279–293. [Google Scholar] [CrossRef]
- Rahman, M.M.; Shokouhmand, S.; Bhatt, S.; Faezipour, M. MIST: Medical Image Segmentation Transformer with Convolutional Attention Mixing (CAM) Decoder. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2024; pp. 404–413. [Google Scholar]
- Zhou, H.Y.; Guo, J.; Zhang, Y.; Yu, L.; Wang, L.; Yu, Y. nnFormer: Interleaved transformer for volumetric segmentation. arXiv 2021, arXiv:2109.03201. [Google Scholar]
- Hatamizadeh, A.; Tang, Y.; Nath, V.; Yang, D.; Myronenko, A.; Landman, B.; Roth, H.R.; Xu, D. UNETR: Transformers for 3D medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 574–584. [Google Scholar]
- Tang, Y.; Yang, D.; Li, W.; Roth, H.R.; Landman, B.; Xu, D.; Nath, V.; Hatamizadeh, A. Self-supervised pretraining of Swin transformers for 3D medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 20730–20740. [Google Scholar]
- Qu, T.; Li, X.; Wang, X.; Deng, W.; Mao, L.; He, M.; Li, X.; Wang, Y.; Liu, Z.; Zhang, L.; et al. Transformer guided progressive fusion network for 3D pancreas and pancreatic mass segmentation. Med. Image Anal. 2023, 86, 102801. [Google Scholar] [CrossRef] [PubMed]
- Guo, Z.; Zhang, L.; Lu, L.; Bagheri, M.; Summers, R.M.; Sonka, M.; Yao, J. Deep LOGISMOS: Deep learning graph-based 3D segmentation of pancreatic tumors on CT scans. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 1230–1233. [Google Scholar]
- Soberanis-Mukul, R.D.; Navab, N.; Albarqouni, S. Uncertainty-based graph convolutional networks for organ segmentation refinement. Med. Imaging Deep Learn. PMLR 2020, 121, 755–769. [Google Scholar]
- Hu, P.; Li, X.; Tian, Y.; Tang, T.; Zhou, T.; Bai, X.; Zhu, S.; Liang, T.; Li, J. Automatic pancreas segmentation in CT images with distance-based saliency-aware DenseASPP network. IEEE J. Biomed. Health Inform. 2020, 25, 1601–1611. [Google Scholar] [CrossRef] [PubMed]
- Zhao, T.; Cao, K.; Yao, J.; Nogues, I.; Lu, L.; Huang, L.; Xiao, J.; Yin, Z.; Zhang, L. 3D graph anatomy geometry-integrated network for pancreatic mass segmentation, diagnosis, and quantitative patient management. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13743–13752. [Google Scholar]
- Liu, S.; Liang, S.; Huang, X.; Yuan, X.; Zhong, T.; Zhang, Y. Graph-enhanced U-Net for semi-supervised segmentation of pancreas from abdomen CT scan. Phys. Med. Biol. 2022, 67, 155017. [Google Scholar] [CrossRef]
- Zhu, Z.; Liu, C.; Yang, D.; Yuille, A.; Xu, D. V-NAS: Neural architecture search for volumetric medical image segmentation. In Proceedings of the 2019 International Conference on 3D Vision (3DV), Québec City, QC, Canada, 16–19 September 2019; pp. 240–248. [Google Scholar]
- He, Y.; Yang, D.; Roth, H.; Zhao, C.; Xu, D. DiNTS: Differentiable neural network topology search for 3D medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 5841–5850. [Google Scholar]
- He, S.; Bao, R.; Li, J.; Grant, P.E.; Ou, Y. Accuracy of Segment Anything Model (SAM) in medical image segmentation tasks. arXiv 2023, arXiv:2304.09324. [Google Scholar]
- Mazurowski, M.A.; Dong, H.; Gu, H.; Yang, J.; Konz, N.; Zhang, Y. Segment anything model for medical image analysis: An experimental study. Med. Image Anal. 2023, 89, 102918. [Google Scholar] [CrossRef]
- Huang, Y.; Yang, X.; Liu, L.; Zhou, H.; Chang, A.; Zhou, X.; Chen, R.; Yu, J.; Chen, J.; Chen, C.; et al. Segment anything model for medical images? Med. Image Anal. 2024, 92, 103061. [Google Scholar] [CrossRef]
- Liu, J.; Zhang, Y.; Chen, J.N.; Xiao, J.; Lu, Y.; A Landman, B.; Yuan, Y.; Yuille, A.; Tang, Y.; Zhou, Z. CLIP-driven universal model for organ segmentation and tumor detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 21152–21164. [Google Scholar]
- Liu, J.; Zhang, Y.; Wang, K.; Yavuz, M.C.; Chen, X.; Yuan, Y.; Li, H.; Yang, Y.; Yuille, A.; Tang, Y.; et al. Universal and Extensible Language-Vision Models for Organ Segmentation and Tumor Detection from Abdominal Computed Tomography. arXiv 2024, arXiv:2405.18356. [Google Scholar] [CrossRef]
- Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 18–24 July 2021; pp. 8748–8763. [Google Scholar]
- Knolle, M.; Kaissis, G.; Jungmann, F.; Ziegelmayer, S.; Sasse, D.; Makowski, M.; Rueckert, D.; Braren, R. Efficient, high-performance semantic segmentation using multi-scale feature extraction. PLoS ONE 2021, 16, e0255397. [Google Scholar] [CrossRef] [PubMed]
- Wang, P.; Shen, C.; Wang, W.; Oda, M.; Fuh, C.S.; Mori, K.; Roth, H.R. ConDistFL: Conditional Distillation for Federated Learning from Partially Annotated Data. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2023; pp. 311–321. [Google Scholar]
- Man, Y.; Huang, Y.; Feng, J.; Li, X.; Wu, F. Deep Q learning driven CT pancreas segmentation with geometry-aware U-Net. IEEE Trans. Med. Imaging 2019, 38, 1971–1980. [Google Scholar] [CrossRef] [PubMed]
- Dogan, R.O.; Dogan, H.; Bayrak, C.; Kayikcioglu, T. A two-phase approach using Mask R-CNN and 3D U-Net for high-accuracy automatic segmentation of pancreas in CT imaging. Comput. Methods Programs Biomed. 2021, 207, 106141. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Z.; Li, S.; Wang, Z.; Lu, Y. A novel and efficient tumor detection framework for pancreatic cancer via CT images. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1160–1164. [Google Scholar]
- Baumgartner, M.; Jäger, P.F.; Isensee, F.; Maier-Hein, K.H. nnDetection: A self-configuring method for medical object detection. In Proceedings, Part V 24, Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 530–539. [Google Scholar]
- Jaeger, P.F.; Kohl, S.A.; Bickelhaupt, S.; Isensee, F.; Kuder, T.A.; Schlemmer, H.P.; Maier-Hein, K.H. Retina U-Net: Embarrassingly simple exploitation of segmentation supervision for medical object detection. In Proceedings of the Machine Learning for Health Workshop, PMLR, Virtual, 13–18 July 2020; pp. 171–183. [Google Scholar]
- Juneja, M.; Singh, G.; Chanana, C.; Verma, R.; Thakur, N.; Jindal, P. Region-based Convolutional Neural Network (R-CNN) architecture for auto-cropping of pancreatic computed tomography. Imaging Sci. J. 2023, 1–14. [Google Scholar] [CrossRef]
- Dinesh, M.; Bacanin, N.; Askar, S.; Abouhawwash, M. Diagnostic ability of deep learning in detection of pancreatic tumour. Sci. Rep. 2023, 13, 9725. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Y.; Lobo-Mueller, E.M.; Karanicolas, P.; Gallinger, S.; Haider, M.A.; Khalvati, F. Improving prognostic performance in resectable pancreatic ductal adenocarcinoma using radiomics and deep learning features fusion in CT images. Sci. Rep. 2021, 11, 1378. [Google Scholar] [CrossRef]
- Lee, W.; Park, H.J.; Lee, H.J.; Jun, E.; Song, K.B.; Hwang, D.W.; Lee, J.H.; Lim, K.; Kim, N.; Lee, S.S.; et al. Preoperative data-based deep learning model for predicting postoperative survival in pancreatic cancer patients. Int. J. Surg. 2022, 105, 106851. [Google Scholar] [CrossRef]
- Tran, D.; Wang, H.; Torresani, L.; Ray, J.; LeCun, Y.; Paluri, M. A closer look at spatiotemporal convolutions for action recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6450–6459. [Google Scholar]
- Hara, K.; Kataoka, H.; Satoh, Y. Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and ImageNet? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6546–6555. [Google Scholar]
- Chen, X.; Wang, W.; Jiang, Y.; Qian, X. A dual-transformation with contrastive learning framework for lymph node metastasis prediction in pancreatic cancer. Med. Image Anal. 2023, 85, 102753. [Google Scholar] [CrossRef]
- Gibson, E.; Giganti, F.; Hu, Y.; Bonmati, E.; Bandula, S.; Gurusamy, K.; Davidson, B.; Pereira, S.P.; Clarkson, M.J.; Barratt, D.C. Automatic multi-organ segmentation on abdominal CT with dense V-networks. IEEE Trans. Med. Imaging 2018, 37, 1822–1834. [Google Scholar] [CrossRef]
- Lyu, P.; Neely, B.; Solomon, J.; Rigiroli, F.; Ding, Y.; Schwartz, F.R.; Thomsen, B.; Lowry, C.; Samei, E.; Marin, D. Effect of deep learning image reconstruction in the prediction of resectability of pancreatic cancer: Diagnostic performance and reader confidence. Eur. J. Radiol. 2021, 141, 109825. [Google Scholar] [CrossRef]
- Noda, Y.; Iritani, Y.; Kawai, N.; Miyoshi, T.; Ishihara, T.; Hyodo, F.; Matsuo, M. Deep learning image reconstruction for pancreatic low-dose computed tomography: Comparison with hybrid iterative reconstruction. Abdom. Radiol. 2021, 46, 4238–4244. [Google Scholar] [CrossRef] [PubMed]
- Chi, J.; Sun, Z.; Zhao, T.; Wang, H.; Yu, X.; Wu, C. Low-dose CT image super-resolution network with dual-guidance feature distillation and dual-path content communication. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2023; pp. 98–108. [Google Scholar]
- Takai, Y.; Noda, Y.; Asano, M.; Kawai, N.; Kaga, T.; Tsuchida, Y.; Miyoshi, T.; Hyodo, F.; Kato, H.; Matsuo, M. Deep-learning image reconstruction for 80-kVp pancreatic CT protocol: Comparison of image quality and pancreatic ductal adenocarcinoma visibility with hybrid-iterative reconstruction. Eur. J. Radiol. 2023, 165, 110960. [Google Scholar] [CrossRef] [PubMed]
- Shi, J.; Pelt, D.M.; Batenburg, K.J. SR4ZCT: Self-supervised Through-Plane Resolution Enhancement for CT Images with Arbitrary Resolution and Overlap. In International Workshop on Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2023; pp. 52–61. [Google Scholar]
- Liu, Y.; Lei, Y.; Wang, T.; Fu, Y.; Tang, X.; Curran, W.J.; Liu, T.; Patel, P.; Yang, X. CBCT-based synthetic CT generation using deep-attention cycleGAN for pancreatic adaptive radiotherapy. Med. Phys. 2020, 47, 2472–2483. [Google Scholar] [CrossRef]
- Dai, X.; Lei, Y.; Wynne, J.; Janopaul-Naylor, J.; Wang, T.; Roper, J.; Curran, W.J.; Liu, T.; Patel, P.; Yang, X. Synthetic CT-aided multiorgan segmentation for CBCT-guided adaptive pancreatic radiotherapy. Med. Phys. 2021, 48, 7063–7073. [Google Scholar] [CrossRef] [PubMed]
- Shi, Y.; Tang, H.; Baine, M.J.; Hollingsworth, M.A.; Du, H.; Zheng, D.; Zhang, C.; Yu, H. 3DGAUnet: 3D generative adversarial networks with a 3D U-net based generator to achieve the accurate and effective synthesis of clinical tumor image data for pancreatic cancer. Cancers 2023, 15, 5496. [Google Scholar] [CrossRef] [PubMed]
- Hooshangnejad, H.; Chen, Q.; Feng, X.; Zhang, R.; Ding, K. deepPERFECT: Novel Deep Learning CT Synthesis Method for Expeditious Pancreatic Cancer Radiotherapy. Cancers 2023, 15, 3061. [Google Scholar] [CrossRef] [PubMed]
- Peng, J.; Liu, Y.; Jiang, D.; Wang, X.; Peng, P.; He, S.; Zhang, W.; Zhou, F. Deep Learning and GAN-Synthesis for Auto-Segmentation of Pancreatic Cancer by Non-Enhanced CT for Adaptive Radiotherapy. Int. J. Radiat. Oncol. Biol. Phys. 2023, 117, e499–e500. [Google Scholar] [CrossRef]
- Guan, Q.; Chen, Y.; Wei, Z.; Heidari, A.A.; Hu, H.; Yang, X.H.; Zheng, J.; Zhou, Q.; Chen, H.; Chen, F. Medical image augmentation for lesion detection using a texture-constrained multichannel progressive GAN. Comput. Biol. Med. 2022, 145, 105444. [Google Scholar] [CrossRef]
- Caverly, R.H. MRI fundamentals: RF aspects of magnetic resonance imaging (MRI). IEEE Microw. Mag. 2015, 16, 20–33. [Google Scholar] [CrossRef]
- Fatahi, M.; Speck, O. Magnetic resonance imaging (MRI): A review of genetic damage investigations. Mutat. Res. Rev. Mutat. Res. 2015, 764, 51–63. [Google Scholar]
- Eshed, I.; Hermann, K.G.A. MRI in imaging of rheumatic diseases: An overview for clinicians. Clin. Exp. Rheumatol. 2018, 36, 10–15. [Google Scholar]
- Smith, N.B.; Webb, A. Introduction to Medical Imaging: Physics, Engineering and Clinical Applications; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
- Cui, S.; Tang, T.; Su, Q.; Wang, Y.; Shu, Z.; Yang, W.; Gong, X. Radiomic nomogram based on MRI to predict grade of branching type intraductal papillary mucinous neoplasms of the pancreas: A multicenter study. Cancer Imaging 2021, 21, 26. [Google Scholar] [CrossRef]
- Chen, W.; Ji, H.; Feng, J.; Liu, R.; Yu, Y.; Zhou, R.; Zhou, J. Classification of pancreatic cystic neoplasms based on multimodality images. In International Workshop on Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2018; pp. 161–169. [Google Scholar]
- Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
- Chen, X.; Chen, Y.; Ma, C.; Liu, X.; Tang, X. Classification of pancreatic tumors based on MRI images using 3D convolutional neural networks. In Proceedings of the 2nd International Symposium on Image Computing and Digital Medicine, Chengdu, China, 13–14 October 2018; pp. 92–96. [Google Scholar]
- Corral, J.E.; Hussein, S.; Kandel, P.; Bolan, C.W.; Bagci, U.; Wallace, M.B. Deep learning to classify intraductal papillary mucinous neoplasms using magnetic resonance imaging. Pancreas 2019, 48, 805–810. [Google Scholar] [CrossRef] [PubMed]
- Chatfield, K.; Simonyan, K.; Vedaldi, A.; Zisserman, A. Return of the devil in the details: Delving deep into convolutional nets. arXiv 2014, arXiv:1405.3531. [Google Scholar]
- Hussein, S.; Kandel, P.; Bolan, C.W.; Wallace, M.B.; Bagci, U. Lung and pancreatic tumor characterization in the deep learning era: Novel supervised and unsupervised learning approaches. IEEE Trans. Med. Imaging 2019, 38, 1777–1787. [Google Scholar] [CrossRef] [PubMed]
- Asaturyan, H.; Thomas, E.L.; Fitzpatrick, J.; Bell, J.D.; Villarini, B. Advancing pancreas segmentation in multi-protocol MRI volumes using Hausdorff-Sine loss function. In Proceedings 10, Proceedings of the Machine Learning in Medical Imaging: 10th International Workshop, MLMI 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, 13 October 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 27–35. [Google Scholar]
- Liang, Y.; Schott, D.; Zhang, Y.; Wang, Z.; Nasief, H.; Paulson, E.; Hall, W.; Knechtges, P.; Erickson, B.; Li, X.A. Auto-segmentation of pancreatic tumor in multi-parametric MRI using deep convolutional neural networks. Radiother. Oncol. 2020, 145, 193–200. [Google Scholar] [CrossRef] [PubMed]
- Li, J.; Feng, C.; Shen, Q.; Lin, X.; Qian, X. Pancreatic cancer segmentation in unregistered multi-parametric MRI with adversarial learning and multi-scale supervision. Neurocomputing 2022, 467, 310–322. [Google Scholar] [CrossRef]
- Mazor, N.; Dar, G.; Lederman, R.; Lev-Cohain, N.; Sosna, J.; Joskowicz, L. MC3DU-Net: A multisequence cascaded pipeline for the detection and segmentation of pancreatic cysts in MRI. Int. J. Comput. Assist. Radiol. Surg. 2023, 19, 423–432. [Google Scholar] [CrossRef]
- Cai, J.; Lu, L.; Zhang, Z.; Xing, F.; Yang, L.; Yin, Q. Pancreas segmentation in MRI using graph-based decision fusion on convolutional neural networks. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 17–21 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 442–450. [Google Scholar]
- Li, J.; Feng, C.; Lin, X.; Qian, X. Utilizing GCN and meta-learning strategy in unsupervised domain adaptation for pancreatic cancer segmentation. IEEE J. Biomed. Health Inform. 2021, 26, 79–89. [Google Scholar] [CrossRef]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
- Han, S.; Kim, J.H.; Yoo, J.; Jang, S. Prediction of recurrence after surgery based on preoperative MRI features in patients with pancreatic neuroendocrine tumors. Eur. Radiol. 2022, 32, 2506–2517. [Google Scholar] [CrossRef]
- Xu, X.; Qu, J.; Zhang, Y.; Qian, X.; Chen, T.; Liu, Y. Development and validation of an MRI-radiomics nomogram for the prognosis of pancreatic ductal adenocarcinoma. Front. Oncol. 2023, 13, 1074445. [Google Scholar] [CrossRef] [PubMed]
- Van Roessel, S.; Kasumova, G.G.; Verheij, J.; Najarian, R.M.; Maggino, L.; De Pastena, M.; Malleo, G.; Marchegiani, G.; Salvia, R.; Ng, S.C.; et al. International validation of the eighth edition of the American Joint Committee on Cancer (AJCC) TNM staging system in patients with resected pancreatic cancer. JAMA Surg. 2018, 153, e183617. [Google Scholar] [CrossRef]
- Chaika, M.; Afat, S.; Wessling, D.; Afat, C.; Nickel, D.; Kannengiesser, S.; Herrmann, J.; Almansour, H.; Männlin, S.; Othman, A.E.; et al. Deep learning-based super-resolution gradient echo imaging of the pancreas: Improvement of image quality and reduction of acquisition time. Diagn. Interv. Imaging 2023, 104, 53–59. [Google Scholar] [CrossRef] [PubMed]
- Fusaroli, P.; Caletti, G. Endoscopic ultrasonography. Endoscopy 2003, 35, 127–135. [Google Scholar] [CrossRef] [PubMed]
- Dimagno, E.P.; Regan, P.T.; Clain, J.E.; James, E.; Buxton, J.L. Human endoscopic ultrasonography. Gastroenterology 1982, 83, 824–829. [Google Scholar] [CrossRef]
- Ruano, J.; Jaramillo, M.; Gómez, M.; Romero, E. Robust Descriptor of Pancreatic Tissue for Automatic Detection of Pancreatic Cancer in Endoscopic Ultrasonography. Ultrasound Med. Biol. 2022, 48, 1602–1614. [Google Scholar] [CrossRef] [PubMed]
- Kuwahara, T.; Hara, K.; Mizuno, N.; Okuno, N.; Matsumoto, S.; Obata, M.; Kurita, Y.; Koda, H.; Toriyama, K.; Onishi, S.; et al. Usefulness of deep learning analysis for the diagnosis of malignancy in intraductal papillary mucinous neoplasms of the pancreas. Clin. Transl. Gastroenterol. 2019, 10, e00045. [Google Scholar] [CrossRef] [PubMed]
- Zhang, J.; Zhu, L.; Yao, L.; Ding, X.; Chen, D.; Wu, H.; Lu, Z.; Zhou, W.; Zhang, L.; An, P.; et al. Deep learning–based pancreas segmentation and station recognition system in EUS: Development and validation of a useful training tool (with video). Gastrointest. Endosc. 2020, 92, 874–885. [Google Scholar] [CrossRef]
- Udriștoiu, A.L.; Cazacu, I.M.; Gruionu, L.G.; Gruionu, G.; Iacob, A.V.; Burtea, D.E.; Ungureanu, B.S.; Costache, M.I.; Constantin, A.; Popescu, C.F.; et al. Real-time computer-aided diagnosis of focal pancreatic masses from endoscopic ultrasound imaging based on a hybrid convolutional and long short-term memory neural network model. PLoS ONE 2021, 16, e0251701. [Google Scholar] [CrossRef]
- Nguon, L.S.; Seo, K.; Lim, J.H.; Song, T.J.; Cho, S.H.; Park, J.S.; Park, S. Deep learning-based differentiation between mucinous cystic neoplasm and serous cystic neoplasm in the pancreas using endoscopic ultrasonography. Diagnostics 2021, 11, 1052. [Google Scholar] [CrossRef]
- Bonmati, E.; Hu, Y.; Grimwood, A.; Johnson, G.J.; Goodchild, G.; Keane, M.G.; Gurusamy, K.; Davidson, B.; Clarkson, M.J.; Pereira, S.P.; et al. Voice-assisted image labeling for endoscopic ultrasound classification using neural networks. IEEE Trans. Med. Imaging 2021, 41, 1311–1319. [Google Scholar] [CrossRef] [PubMed]
- Vilas-Boas, F.; Ribeiro, T.; Afonso, J.; Cardoso, H.; Lopes, S.; Moutinho-Ribeiro, P.; Ferreira, J.; Mascarenhas-Saraiva, M.; Macedo, G. Deep Learning for Automatic Differentiation of Mucinous versus Non-Mucinous Pancreatic Cystic Lesions: A Pilot Study. Diagnostics 2022, 12, 2041. [Google Scholar] [CrossRef]
- Jaramillo, M.; Ruano, J.; Gómez, M.; Romero, E. Automatic detection of pancreatic tumors in endoscopic ultrasound videos using deep learning techniques. Med. Imaging 2022 Ultrason. Imaging Tomogr. SPIE 2022, 12038, 106–115. [Google Scholar]
- Ren, Y.; Zou, D.; Xu, W.; Zhao, X.; Lu, W.; He, X. Bimodal segmentation and classification of endoscopic ultrasonography images for solid pancreatic tumor. Biomed. Signal Process. Control 2023, 83, 104591. [Google Scholar] [CrossRef]
- Kuwahara, T.; Hara, K.; Mizuno, N.; Haba, S.; Okuno, N.; Kuraishi, Y.; Fumihara, D.; Yanaidani, T.; Ishikawa, S.; Yasuda, T.; et al. Artificial intelligence using deep learning analysis of endoscopic ultrasonography images for the differential diagnosis of pancreatic masses. Endoscopy 2023, 55, 140–149. [Google Scholar] [CrossRef]
- Tan, M.; Le, Q. Efficientnetv2: Smaller models and faster training. In Proceedings of the International Conference on Machine Learning, PMLR, Virtual, 18–24 July 2021; pp. 10096–10106. [Google Scholar]
- Fleurentin, A.; Mazellier, J.P.; Meyer, A.; Montanelli, J.; Swanstrom, L.; Gallix, B.; Sosa Valencia, L.; Padoy, N. Automatic pancreas anatomical part detection in endoscopic ultrasound videos. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2023, 11, 1136–1142. [Google Scholar] [CrossRef]
- Iwasa, Y.; Iwashita, T.; Takeuchi, Y.; Ichikawa, H.; Mita, N.; Uemura, S.; Shimizu, M.; Kuo, Y.T.; Wang, H.P.; Hara, T. Automatic segmentation of pancreatic tumors using deep learning on a video image of contrast-enhanced endoscopic ultrasound. J. Clin. Med. 2021, 10, 3589. [Google Scholar] [CrossRef]
- Oh, S.; Kim, Y.J.; Park, Y.T.; Kim, K.G. Automatic pancreatic cyst lesion segmentation on EUS images using a deep-learning approach. Sensors 2021, 22, 245. [Google Scholar] [CrossRef]
- Seo, K.; Lim, J.H.; Seo, J.; Nguon, L.S.; Yoon, H.; Park, J.S.; Park, S. Semantic Segmentation of Pancreatic Cancer in Endoscopic Ultrasound Images Using Deep Learning Approach. Cancers 2022, 14, 5111. [Google Scholar] [CrossRef]
- Tang, A.; Gong, P.; Fang, N.; Ye, M.; Hu, S.; Liu, J.; Wang, W.; Gao, K.; Wang, X.; Tian, L. Endoscopic ultrasound diagnosis system based on deep learning in images capture and segmentation training of solid pancreatic masses. Med. Phys. 2023, 50, 4197–4205. [Google Scholar] [CrossRef] [PubMed]
- Meyer, A.; Fleurentin, A.; Montanelli, J.; Mazellier, J.P.; Swanstrom, L.; Gallix, B.; Exarchakis, G.; Sosa Valencia, L.; Padoy, N. Spatio-Temporal Model for EUS Video Detection of Pancreatic Anatomy Structures. In International Workshop on Advances in Simplifying Medical Ultrasound; Springer: Cham, Switzerland, 2022; pp. 13–22. [Google Scholar]
- Wu, H.; Chen, Y.; Wang, N.; Zhang, Z. Sequence level semantics aggregation for video object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9217–9225. [Google Scholar]
- Gong, T.; Chen, K.; Wang, X.; Chu, Q.; Zhu, F.; Lin, D.; Yu, N.; Feng, H. Temporal ROI align for video object recognition. AAAI Conf. Artif. Intell. 2021, 35, 1442–1450. [Google Scholar] [CrossRef]
- Tian, G.; Xu, D.; He, Y.; Chai, W.; Deng, Z.; Cheng, C.; Jin, X.; Wei, G.; Zhao, Q.; Jiang, T. Deep learning for real-time auxiliary diagnosis of pancreatic cancer in endoscopic ultrasonography. Front. Oncol. 2022, 12, 973652. [Google Scholar] [CrossRef] [PubMed]
- Jaramillo, M.; Ruano, J.; Bravo, D.; Medina, S.; Gómez, M.; González, F.A.; Romero, E. Automatic Localization of Pancreatic Tumoral Regions in Whole Sequences of Echoendoscopy Procedures. In Proceedings of the 2023 19th International Symposium on Medical Information Processing and Analysis (SIPAIM), Mexico City, Mexico, 15–17 November 2023; pp. 1–5. [Google Scholar]
- Grimwood, A.; Ramalhinho, J.; Baum, Z.M.; Montaña-Brown, N.; Johnson, G.J.; Hu, Y.; Clarkson, M.J.; Pereira, S.P.; Barratt, D.C.; Bonmati, E. Endoscopic ultrasound image synthesis using a cycle-consistent adversarial network. In Proceedings of the Simplifying Medical Ultrasound: Second International Workshop, ASMUS 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, 27 September 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 169–178. [Google Scholar]
- Cherry, S.R.; Dahlbom, M. PET: Physics, Instrumentation, and Scanners; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
- Schlyer, D.J. PET tracers and radiochemistry. Ann. Acad. Med. Singap. 2004, 33, 146–154. [Google Scholar] [CrossRef]
- Kapoor, V.; McCook, B.M.; Torok, F.S. An introduction to PET-CT imaging. Radiographics 2004, 24, 523–543. [Google Scholar] [CrossRef]
- Vandenberghe, S.; Moskal, P.; Karp, J.S. State of the art in total body PET. EJNMMI Phys. 2020, 7, 35. [Google Scholar] [CrossRef]
- Townsend, D.W.; Carney, J.P.; Yap, J.T.; Hall, N.C. PET/CT today and tomorrow. J. Nucl. Med. 2004, 45, 4S–14S. [Google Scholar]
- Li, S.; Jiang, H.; Wang, Z.; Zhang, G.; Yao, Y.d. An effective computer aided diagnosis model for pancreas cancer on PET/CT images. Comput. Methods Programs Biomed. 2018, 165, 205–214. [Google Scholar] [CrossRef]
- Zhang, Y.; Cheng, C.; Liu, Z.; Wang, L.; Pan, G.; Sun, G.; Chang, Y.; Zuo, C.; Yang, X. Radiomics analysis for the differentiation of autoimmune pancreatitis and pancreatic ductal adenocarcinoma in 18F-FDG PET/CT. Med. Phys. 2019, 46, 4520–4530. [Google Scholar] [CrossRef]
- Xing, H.; Hao, Z.; Zhu, W.; Sun, D.; Ding, J.; Zhang, H.; Liu, Y.; Huo, L. Preoperative prediction of pathological grade in pancreatic ductal adenocarcinoma based on 18F-FDG PET/CT radiomics. EJNMMI Res. 2021, 11, 1–10. [Google Scholar] [CrossRef]
- Van Griethuysen, J.J.; Fedorov, A.; Parmar, C.; Hosny, A.; Aucoin, N.; Narayan, V.; Beets-Tan, R.G.; Fillion-Robin, J.C.; Pieper, S.; Aerts, H.J. Computational radiomics system to decode the radiographic phenotype. Cancer Res. 2017, 77, e104–e107. [Google Scholar] [CrossRef] [PubMed]
- Zhang, G.; Bao, C.; Liu, Y.; Wang, Z.; Du, L.; Zhang, Y.; Wang, F.; Xu, B.; Zhou, S.K.; Liu, R. 18F-FDG-PET/CT-based deep learning model for fully automated prediction of pathological grading for pancreatic ductal adenocarcinoma before surgery. EJNMMI Res. 2023, 13, 49. [Google Scholar] [CrossRef] [PubMed]
- Wei, W.; Jia, G.; Wu, Z.; Wang, T.; Wang, H.; Wei, K.; Cheng, C.; Liu, Z.; Zuo, C. A multidomain fusion model of radiomics and deep learning to discriminate between PDAC and AIP based on 18F-FDG PET/CT images. Jpn. J. Radiol. 2023, 41, 417–427. [Google Scholar] [CrossRef] [PubMed]
- Suganuma, Y.; Teramoto, A.; Saito, K.; Fujita, H.; Suzuki, Y.; Tomiyama, N.; Kido, S. Hybrid Multiple-Organ Segmentation Method Using Multiple U-Nets in PET/CT Images. Appl. Sci. 2023, 13, 10765. [Google Scholar] [CrossRef]
- Wang, F.; Cheng, C.; Cao, W.; Wu, Z.; Wang, H.; Wei, W.; Yan, Z.; Liu, Z. MFCNet: A multi-modal fusion and calibration networks for 3D pancreas tumor segmentation on PET-CT images. Comput. Biol. Med. 2023, 155, 106657. [Google Scholar] [CrossRef] [PubMed]
- Shao, M.; Cheng, C.; Hu, C.; Zheng, J.; Zhang, B.; Wang, T.; Jin, G.; Liu, Z.; Zuo, C. Semisupervised 3D segmentation of pancreatic tumors in positron emission tomography/computed tomography images using a mutual information minimization and cross-fusion strategy. Quant. Imaging Med. Surg. 2024, 14, 1747. [Google Scholar] [CrossRef] [PubMed]
- Wang, H.; Wu, Z.; Wang, F.; Wei, W.; Wei, K.; Liu, Z. MAFF: Multi-Scale and Self-Adaptive Attention Feature Fusion Network for Pancreatic Lesion Detection in PET/CT Images. In EITCE ’22, Proceedings of the 2022 6th International Conference on Electronic Information Technology and Computer Engineering, Xiamen, China, 21–23 October 2022; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1412–1419. [Google Scholar] [CrossRef]
- Park, Y.J.; Park, Y.S.; Kim, S.T.; Hyun, S.H. A machine learning approach using [18F] FDG PET-based radiomics for prediction of tumor grade and prognosis in pancreatic neuroendocrine tumor. Mol. Imaging Biol. 2023, 25, 897–910. [Google Scholar] [CrossRef] [PubMed]
- Mendez, A.J.; Tahoces, P.G.; Lado, M.J.; Souto, M.; Vidal, J.J. Computer-aided diagnosis: Automatic detection of malignant masses in digitized mammograms. Med. Phys. 1998, 25, 957–964. [Google Scholar] [CrossRef]
- Gurcan, M.N.; Boucheron, L.E.; Can, A.; Madabhushi, A.; Rajpoot, N.M.; Yener, B. Histopathological image analysis: A review. IEEE Rev. Biomed. Eng. 2009, 2, 147–171. [Google Scholar] [CrossRef]
- Farahani, N.; Parwani, A.V.; Pantanowitz, L. Whole slide imaging in pathology: Advantages, limitations, and emerging perspectives. Pathol. Lab. Med. Int. 2015, 7, 23–33. [Google Scholar]
- Michael, C.W.; Kameyama, K.; Kitagawa, W.; Azar, N. Rapid on-site evaluation (ROSE) for fine needle aspiration of thyroid: Benefits, challenges and innovative solutions. Gland Surg. 2020, 9, 1708. [Google Scholar] [CrossRef] [PubMed]
- da Cunha Santos, G.; Ko, H.M.; Saieg, M.A.; Geddie, W.R. “The petals and thorns” of ROSE (rapid on-site evaluation). Cancer Cytopathol. 2013, 121, 4–8. [Google Scholar] [CrossRef] [PubMed]
- Saillard, C.; Delecourt, F.; Schmauch, B.; Moindrot, O.; Svrcek, M.; Bardier-Dupas, A.; Emile, J.F.; Ayadi, M.; Rebours, V.; De Mestier, L.; et al. PACpAInt: A deep learning approach to identify molecular subtypes of pancreatic adenocarcinoma on histology slides. bioRxiv 2022, 2022-01. [Google Scholar] [CrossRef]
- Chang, Y.H.; Thibault, G.; Madin, O.; Azimi, V.; Meyers, C.; Johnson, B.; Link, J.; Margolin, A.; Gray, J.W. Deep learning based Nucleus Classification in pancreas histological images. In Proceedings of the 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Republic of Korea, 11–15 July 2017; pp. 672–675. [Google Scholar]
- Le, H.; Samaras, D.; Kurc, T.; Gupta, R.; Shroyer, K.; Saltz, J. Pancreatic cancer detection in whole slide images using noisy label annotations. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 541–549. [Google Scholar]
- Sehmi, M.N.M.; Fauzi, M.F.A.; Ahmad, W.S.H.M.W.; Chan, E.W.L. Pancreatic cancer grading in pathological images using deep learning convolutional neural networks. F1000Research 2021, 10, 1057. [Google Scholar] [CrossRef]
- Ono, N.; Iwamoto, C.; Ohuchida, K. Construction of Classifier of Tumor Cell Types of Pancreas Cancer Based on Pathological Images Using Deep Learning. In Multidisciplinary Computational Anatomy: Toward Integration of Artificial Intelligence with MCA-Based Medicine; Springer: Singapore, 2022; pp. 145–148. [Google Scholar]
- Zhang, T.; Feng, Y.; Feng, Y.; Zhao, Y.; Lei, Y.; Ying, N.; Yan, Z.; He, Y.; Zhang, G. Shuffle Instances-based Vision Transformer for Pancreatic Cancer ROSE Image Classification. arXiv 2022, arXiv:2208.06833. [Google Scholar] [CrossRef] [PubMed]
- Ghoshal, B.; Ghoshal, B.; Tucker, A. Leveraging Uncertainty in Deep Learning for Pancreatic Adenocarcinoma Grading. In Annual Conference on Medical Image Understanding and Analysis; Springer: Cham, Switzerland, 2022; pp. 565–577. [Google Scholar]
- Kou, Y.; Xia, C.; Jiao, Y.; Zhang, D.; Ge, R. DACTransNet: A Hybrid CNN-Transformer Network for Histopathological Image Classification of Pancreatic Cancer. In CAAI International Conference on Artificial Intelligence; Springer: Cham, Switzerland, 2023; pp. 422–434. [Google Scholar]
- Janssen, B.V.; Theijse, R.; van Roessel, S.; de Ruiter, R.; Berkel, A.; Huiskens, J.; Busch, O.R.; Wilmink, J.W.; Kazemier, G.; Valkema, P.; et al. Artificial intelligence-based segmentation of residual tumor in histopathology of pancreatic cancer after neoadjuvant treatment. Cancers 2021, 13, 5089. [Google Scholar] [CrossRef] [PubMed]
- Yang, C.; Xiang, D.; Bian, Y.; Lu, J.; Jiang, H.; Zheng, J. Gland segmentation in pancreas histopathology images based on selective multi-scale attention. In Medical Imaging 2021: Image Processing; SPIE: Bellingham, WA, USA, 2021; Volume 11596, pp. 699–705. [Google Scholar]
- Fu, H.; Mi, W.; Pan, B.; Guo, Y.; Li, J.; Xu, R.; Zheng, J.; Zou, C.; Zhang, T.; Liang, Z.; et al. Automatic pancreatic ductal adenocarcinoma detection in whole slide images using deep convolutional neural networks. Front. Oncol. 2021, 11, 665929. [Google Scholar] [CrossRef] [PubMed]
- Gao, E.; Jiang, H.; Zhou, Z.; Yang, C.; Chen, M.; Zhu, W.; Shi, F.; Chen, X.; Zheng, J.; Bian, Y.; et al. Automatic multi-tissue segmentation in pancreatic pathological images with selected multi-scale attention network. Comput. Biol. Med. 2022, 151, 106228. [Google Scholar] [CrossRef]
- Zhang, S.; Zhou, Y.; Tang, D.; Ni, M.; Zheng, J.; Xu, G.; Peng, C.; Shen, S.; Zhan, Q.; Wang, X.; et al. A deep learning-based segmentation system for rapid onsite cytologic pathology evaluation of pancreatic masses: A retrospective, multicenter, diagnostic study. EBioMedicine 2022, 80, 104022. [Google Scholar] [CrossRef]
- Liu, A.; Jiang, H.; Cao, W.; Cui, W.; Xiang, D.; Shao, C.; Liu, Z.; Bian, Y.; Zheng, J. MLAGG-Net: Multi-level aggregation and global guidance network for pancreatic lesion segmentation in histopathological images. Biomed. Signal Process. Control 2023, 86, 105303. [Google Scholar] [CrossRef]
- Gao, W.; Jiang, H.; Jiao, Y.; Wang, X.; Xu, J. Multi-tissue segmentation model of whole slide image of pancreatic cancer based on multi task and attention mechanism. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi = J. Biomed. Eng. 2023, 40, 70–78. [Google Scholar]
- Chen, Z.M.; Liao, Y.; Zhou, X.; Yu, W.; Zhang, G.; Ge, Y.; Ke, T.; Shi, K. Pancreatic cancer pathology image segmentation with channel and spatial long-range dependencies. Comput. Biol. Med. 2024, 169, 107844. [Google Scholar] [CrossRef] [PubMed]
- Li, B.; Keikhosravi, A.; Loeffler, A.G.; Eliceiri, K.W. Single image super-resolution for whole slide image using convolutional neural networks and self-supervised color normalization. Med. Image Anal. 2021, 68, 101938. [Google Scholar] [CrossRef] [PubMed]
- Kugler, M.; Goto, Y.; Kawamura, N.; Kobayashi, H.; Yokota, T.; Iwamoto, C.; Ohuchida, K.; Hashizume, M.; Hontani, H. Accurate 3D reconstruction of a whole pancreatic cancer tumor from pathology images with different stains. In Proceedings of the Computational Pathology and Ophthalmic Medical Image Analysis: First International Workshop, COMPAY 2018, and 5th International Workshop, OMIA 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, 16–20 September 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 35–43. [Google Scholar]
- Kugler, M.; Goto, Y.; Tamura, Y.; Kawamura, N.; Kobayashi, H.; Yokota, T.; Iwamoto, C.; Ohuchida, K.; Hashizume, M.; Shimizu, A.; et al. Robust 3D image reconstruction of pancreatic cancer tumors from histopathological images with different stains and its quantitative performance evaluation. Int. J. Comput. Assist. Radiol. Surg. 2019, 14, 2047–2055. [Google Scholar] [CrossRef] [PubMed]
- Panda, A.; Garg, I.; Truty, M.J.; Kline, T.L.; Johnson, M.P.; Ehman, E.C.; Suman, G.; Anaam, D.A.; Kemp, B.J.; Johnson, G.B.; et al. Borderline Resectable and Locally Advanced Pancreatic Cancer: FDG PET/MRI and CT Tumor Metrics for Assessment of Pathologic Response to Neoadjuvant Therapy and Prediction of Survival. Am. J. Roentgenol. 2021, 217, 730–740. [Google Scholar] [CrossRef] [PubMed]
- Koch, V.; Weitzer, N.; Dos Santos, D.P.; Gruenewald, L.D.; Mahmoudi, S.; Martin, S.S.; Eichler, K.; Bernatz, S.; Gruber-Rouh, T.; Booz, C.; et al. Multiparametric detection and outcome prediction of pancreatic cancer involving dual-energy CT, diffusion-weighted MRI, and radiomics. Cancer Imaging 2023, 23, 38. [Google Scholar] [CrossRef]
- Hussein, S.; Kandel, P.; Corral, J.E.; Bolan, C.W.; Wallace, M.B.; Bagci, U. Deep multi-modal classification of intraductal papillary mucinous neoplasms (IPMN) with canonical correlation analysis. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 800–804. [Google Scholar]
- Chen, X.; Lin, X.; Shen, Q.; Qian, X. Combined spiral transformation and model-driven multi-modal deep learning scheme for automatic prediction of TP53 mutation in pancreatic cancer. IEEE Trans. Med Imaging 2020, 40, 735–747. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Z.; Chen, E.; Zhang, X.; Yang, J.; Wang, X.; Chen, P.; Zeng, M.; Du, M.; Xu, S.; Yang, Z.; et al. Multi-Modal Fusion of Radiomics and Pathomics to Predict the Survival of Pancreatic Cancer Patients Based on Asymmetric Twinning Information Interaction Network. Available online: https://ssrn.com/abstract=4260135 (accessed on 15 July 2024).
- Yao, Y.; Chen, Y.; Gou, S.; Chen, S.; Zhang, X.; Tong, N. Auto-segmentation of pancreatic tumor in multi-modal image using transferred DSMask R-CNN network. Biomed. Signal Process. Control 2023, 83, 104583. [Google Scholar] [CrossRef]
- Li, J.; Qi, L.; Chen, Q.; Zhang, Y.D.; Qian, X. A dual meta-learning framework based on idle data for enhancing segmentation of pancreatic cancer. Med. Image Anal. 2022, 78, 102342. [Google Scholar] [CrossRef]
- Cai, J.; Zhang, Z.; Cui, L.; Zheng, Y.; Yang, L. Towards cross-modal organ translation and segmentation: A cycle-and shape-consistent generative adversarial network. Med. Image Anal. 2019, 52, 174–184. [Google Scholar] [CrossRef] [PubMed]
- Cai, J.; Lu, L.; Xing, F.; Yang, L. Pancreas segmentation in CT and MRI images via domain specific network designing and recurrent neural contextual learning. arXiv 2018, arXiv:1803.11303. [Google Scholar]
- Asaturyan, H.; Gligorievski, A.; Villarini, B. Morphological and multi-level geometrical descriptor analysis in CT and MRI volumes for automatic pancreas segmentation. Comput. Med. Imaging Graph. 2019, 75, 1–13. [Google Scholar] [CrossRef] [PubMed]
- Puech, P.A.; Boussel, L.; Belfkih, S.; Lemaitre, L.; Douek, P.; Beuscart, R. DicomWorks: Software for reviewing DICOM studies and promoting low-cost teleradiology. J. Digit. Imaging 2007, 20, 122–130. [Google Scholar] [CrossRef] [PubMed]
- Kikinis, R.; Pieper, S.D.; Vosburgh, K.G. 3D Slicer: A platform for subject-specific image analysis, visualization, and clinical support. In Intraoperative Imaging and Image-Guided Therapy; Springer: Berlin/Heidelberg, Germany, 2013; pp. 277–289. [Google Scholar]
- Philbrick, K.A.; Weston, A.D.; Akkus, Z.; Kline, T.L.; Korfiatis, P.; Sakinis, T.; Kostandy, P.; Boonrod, A.; Zeinoddini, A.; Takahashi, N.; et al. RIL-contour: A medical imaging dataset annotation tool for and with deep learning. J. Digit. Imaging 2019, 32, 571–581. [Google Scholar] [CrossRef] [PubMed]
- Liu, Y.; Chu, L.; Chen, G.; Wu, Z.; Chen, Z.; Lai, B.; Hao, Y. PaddleSeg: A High-Efficient Development Toolkit for Image Segmentation. arXiv 2021, arXiv:2101.06175. [Google Scholar] [CrossRef]
- An Easy-to-Use, Efficient, Smart 3D Medical Image Annotation Platform. 2022. Available online: https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.9/EISeg/med3d (accessed on 20 May 2024).
- Echegaray, S.; Bakr, S.; Rubin, D.L.; Napel, S. Quantitative Image Feature Engine (QIFE): An open-source, modular engine for 3D quantitative feature extraction from volumetric medical images. J. Digit. Imaging 2018, 31, 403–414. [Google Scholar] [CrossRef] [PubMed]
- Pawlowski, N.; Ktena, S.I.; Lee, M.C.; Kainz, B.; Rueckert, D.; Glocker, B.; Rajchl, M. DLTK: State of the Art Reference Implementations for Deep Learning on Medical Images. arXiv 2017, arXiv:1711.06853. [Google Scholar]
- Pérez-García, F.; Sparks, R.; Ourselin, S. TorchIO: A Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. Comput. Methods Programs Biomed. 2021, 208, 106236. [Google Scholar] [CrossRef]
- Cardoso, M.J.; Li, W.; Brown, R.; Ma, N.; Kerfoot, E.; Wang, Y.; Murrey, B.; Myronenko, A.; Zhao, C.; Yang, D.; et al. MONAI: An open-source framework for deep learning in healthcare. arXiv 2022, arXiv:2211.02701. [Google Scholar] [CrossRef]
- 3D Medical Image Segmentation Solution. 2022. Available online: https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.9/contrib/MedicalSeg (accessed on 20 May 2024).
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
- Zhang, X.; Zhou, X.; Lin, M.; Sun, J. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856. [Google Scholar]
- Valanarasu, J.M.J.; Patel, V.M. Unext: Mlp-based rapid medical image segmentation network. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2022; pp. 23–33. [Google Scholar]
- Ruan, J.; Xiang, S.; Xie, M.; Liu, T.; Fu, Y. MALUNet: A multi-attention and light-weight unet for skin lesion segmentation. In Proceedings of the 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Las Vegas, NV, USA, 8 December 2022; pp. 1150–1156. [Google Scholar]
- Ruan, J.; Xie, M.; Gao, J.; Liu, T.; Fu, Y. Ege-unet: An efficient group enhanced unet for skin lesion segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2023; pp. 481–490. [Google Scholar]
- Yoon, J.S.; Oh, K.; Shin, Y.; Mazurowski, M.A.; Suk, H.I. Domain generalization for medical image analysis: A survey. arXiv 2023, arXiv:2310.08598. [Google Scholar]
- Taleb, A.; Lippert, C.; Klein, T.; Nabi, M. Multimodal self-supervised learning for medical image analysis. In International Conference on Information Processing in Medical Imaging; Springer: Berlin/Heidelberg, Germany, 2021; pp. 661–673. [Google Scholar]
- Xu, Y.; Xie, S.; Reynolds, M.; Ragoza, M.; Gong, M.; Batmanghelich, K. Adversarial consistency for single domain generalization in medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2022; pp. 671–681. [Google Scholar]
- Su, Z.; Yao, K.; Yang, X.; Huang, K.; Wang, Q.; Sun, J. Rethinking data augmentation for single-source domain generalization in medical image segmentation. AAAI Conf. Artif. Intell. 2023, 37, 2366–2374. [Google Scholar] [CrossRef]
- Xu, C.; Wen, Z.; Liu, Z.; Ye, C. Improved domain generalization for cell detection in histopathology images via test-time stain augmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2022; pp. 150–159. [Google Scholar]
- Zhang, L.; Wang, X.; Yang, D.; Sanford, T.; Harmon, S.; Turkbey, B.; Wood, B.J.; Roth, H.; Myronenko, A.; Xu, D.; et al. Generalizing deep learning for medical image segmentation to unseen domains via deep stacked transformation. IEEE Trans. Med Imaging 2020, 39, 2531–2540. [Google Scholar] [CrossRef] [PubMed]
- Zhang, X.; Wu, C.; Zhao, Z.; Lin, W.; Zhang, Y.; Wang, Y.; Xie, W. Pmc-vqa: Visual instruction tuning for medical visual question answering. arXiv 2023, arXiv:2305.10415. [Google Scholar]
- He, X.; Zhang, Y.; Mou, L.; Xing, E.; Xie, P. Pathvqa: 30,000+ questions for medical visual question answering. arXiv 2020, arXiv:2003.10286. [Google Scholar]
- Lau, J.J.; Gayen, S.; Ben Abacha, A.; Demner-Fushman, D. A dataset of clinically generated visual questions and answers about radiology images. Sci. Data 2018, 5, 180251. [Google Scholar] [CrossRef] [PubMed]
- Gao, W.; Deng, Z.; Niu, Z.; Rong, F.; Chen, C.; Gong, Z.; Zhang, W.; Xiao, D.; Li, F.; Cao, Z.; et al. Ophglm: Training an ophthalmology large language-and-vision assistant based on instructions and dialogue. arXiv 2023, arXiv:2306.12174. [Google Scholar]
- Zhao, Z.; Liu, Y.; Wu, H.; Li, Y.; Wang, S.; Teng, L.; Liu, D.; Li, X.; Cui, Z.; Wang, Q.; et al. Clip in medical imaging: A comprehensive survey. arXiv 2023, arXiv:2312.07353. [Google Scholar]
- Tiu, E.; Talius, E.; Patel, P.; Langlotz, C.P.; Ng, A.Y.; Rajpurkar, P. Expert-level detection of pathologies from unannotated chest X-ray images via self-supervised learning. Nat. Biomed. Eng. 2022, 6, 1399–1406. [Google Scholar] [CrossRef]
- Wu, Y.; Zhou, Y.; Saiyin, J.; Wei, B.; Lai, M.; Shou, J.; Fan, Y.; Xu, Y. Zero-Shot Nuclei Detection via Visual-Language Pre-trained Models. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2023; pp. 693–703. [Google Scholar]
- Adhikari, R.; Dhakal, M.; Thapaliya, S.; Poudel, K.; Bhandari, P.; Khanal, B. Synthetic Boost: Leveraging Synthetic Data for Enhanced Vision-Language Segmentation in Echocardiography. In International Workshop on Advances in Simplifying Medical Ultrasound; Springer: Berlin/Heidelberg, Germany, 2023; pp. 89–99. [Google Scholar]
- Eslami, S.; Meinel, C.; De Melo, G. Pubmedclip: How much does clip benefit visual question answering in the medical domain? In Findings of the Association for Computational Linguistics: EACL 2023; Association for Computational Linguistics: Stroudsburg, PA, USA, 2023; pp. 1181–1193. [Google Scholar]
- Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment anything. arXiv 2023, arXiv:2304.02643. [Google Scholar]
- Zhang, K.; Liu, D. Customized segment anything model for medical image segmentation. arXiv 2023, arXiv:2304.13785. [Google Scholar]
- Wu, J.; Fu, R.; Fang, H.; Liu, Y.; Wang, Z.; Xu, Y.; Jin, Y.; Arbel, T. Medical sam adapter: Adapting segment anything model for medical image segmentation. arXiv 2023, arXiv:2304.12620. [Google Scholar]
- Ye, J.; Cheng, J.; Chen, J.; Deng, Z.; Li, T.; Wang, H.; Su, Y.; Huang, Z.; Chen, J.; Jiang, L.; et al. Sa-med2d-20m dataset: Segment anything in 2d medical imaging with 20 million masks. arXiv 2023, arXiv:2311.11969. [Google Scholar]
- Jia, X.; Ren, L.; Cai, J. Clinical implementation of AI technologies will require interpretable AI models. Med. Phys. 2020, 47, 1–4. [Google Scholar] [CrossRef]
- Van der Velden, B.H.; Kuijf, H.J.; Gilhuijs, K.G.; Viergever, M.A. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med. Image Anal. 2022, 79, 102470. [Google Scholar] [CrossRef]
- Medical Segmentation Decathlon. Available online: http://medicaldecathlon.com/ (accessed on 20 May 2024).
- GitHub. Available online: https://github.com (accessed on 20 May 2024).
- Grand Challenge. Available online: https://grand-challenge.org/ (accessed on 20 May 2024).
- Sage Bionetworks. Synapse. Available online: https://www.synapse.org/ (accessed on 20 May 2024).
- Zenodo. Zenodo Repository. Available online: https://zenodo.org/ (accessed on 20 May 2024).
Reference | Year | Brief Summary | AI Models in Pancreatic Imaging Processing | Multiple AI Tasks and Evaluation Metrics | Different Pancreatic Imaging Modalities | Future Directions for AI in PC Research |
---|---|---|---|---|---|---|
[17] | 2019 | A review on deep learning in the differential diagnosis of PC and CP | M | L | N | L |
[18] | 2020 | A review on early detection of PC | L | L | M | N |
[19] | 2021 | A summative review on PDAC early detection | H | L | M | N |
[20] | 2021 | A comprehensive review on PC screening and diagnosis strategies | L | L | H | N |
[16] | 2022 | A review on application of AI in PC diagnosis | H | H | M | M |
[21] | 2022 | A review on AI in PC diagnosis based on medical imaging and biomarkers | H | L | H | N |
[22] | 2022 | A systematic review on AI and machine learning in pancreatic surgery | M | L | M | H |
[23] | 2022 | A review on AI in PDAC diagnosis and prognosis from CT images | H | H | N | H |
[24] | 2023 | A scoping review on PC diagnosis and prediction using AI | M | M | N | M |
[25] | 2023 | A narrative review on AI in PC diagnosis, biomarkers detection, and prognosis | L | L | M | M |
[26] | 2024 | A review on AI in various aspects of PC | H | M | M | H |
[27] | 2024 | A review on AI in PC early diagnosis | H | M | M | N |
This paper | - | A comprehensive review on AI in pancreatic image processing | H | H | H | H |
Search Term | Set of Keywords |
---|---|
Pancreatic | pancreatic cancer, pancreatic lesion, pancreatic cancer diagnosis, pancreatic cancer detection, pancreatic ductal adenocarcinoma, pancreatic neuroendocrine tumors |
Cancer | cancer subtypes, precursor lesions, cancer diagnosis, cancer treatment |
AI task | classification, segmentation, object detection, prognosis prediction, image registration, image generation, super-resolution, denoising, reconstruction, medical visual question answering, natural language processing |
Image modality | CT, MRI, EUS, PET, pathological images, PET/CT, multimodal fusion, multiple modalities, cross-modality, modality conversion |
Machine learning | Cox proportional hazards regression, Logistic regression, least absolute shrinkage and selection operator regression, decision tree, support vector machine, random forest, ensemble learning, k-nearest neighbors, k-means clustering |
Deep learning | convolutional neural networks, fully convolutional neural networks, transformers, recurrent neural networks, long short-term memory, you only look once, graph neural networks, federated learning, reinforcement learning, neural architecture search |
Large model | contrastive language-image pretraining, segment anything model |
Methods | Feature Name |
---|---|
Shape | height, width, perimeter, area, complexity, rectangularity, elongation, equivalent area radius |
GLCM | mean and standard deviation of energy, entropy, moment of inertia, and correlation |
GLRLM | short run emphasis, long run emphasis, gray-level nonuniformity, run percentage, run-length nonuniformity, low gray-level run emphasis, high gray-level run emphasis |
GLGCM | small grads dominance, big grads dominance, gray asymmetry, grads asymmetry, energy, gray mean, grads mean, gray variance, grads variance, correlation, gray entropy, grads entropy, entropy inertia, differ moment |
GLDS | mean, contrast, angular second moment, entropy |
Wavelet transform |
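As a rough illustration of how texture features like those in the GLCM row above are computed, the following NumPy sketch builds a normalized gray-level co-occurrence matrix for a single pixel offset and derives its energy and entropy. The function names and the single-offset simplification are ours for illustration only; radiomics pipelines typically aggregate such matrices over many offsets and directions.

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    `img` is a 2D integer array already quantized to `levels` gray levels;
    `offset` = (row step, column step) defines the pixel-pair direction.
    """
    dr, dc = offset
    P = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    # Count co-occurring gray-level pairs (r, c) -> (r + dr, c + dc).
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            P[img[r, c], img[r + dr, c + dc]] += 1
    return P / P.sum()

def energy(P):
    # Angular second moment: sum of squared co-occurrence probabilities.
    return float((P ** 2).sum())

def entropy(P):
    # Shannon entropy (bits) over the nonzero co-occurrence probabilities.
    nz = P[P > 0]
    return float(-(nz * np.log2(nz)).sum())
```

For a 3 × 3 patch quantized to three gray levels, `glcm(img, 3)` counts the six horizontal pixel pairs and normalizes them to probabilities; homogeneous textures yield high energy and low entropy.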
Year | Reference | Model | Dataset | Sample Size | Performance |
---|---|---|---|---|---|
2020 | [130] | LASSO regression and EL-SVM learner | A private dataset | 168 | AUC = 0.7308 (normal–early stage), 0.6587 (normal–stage III), 0.7333 (normal–stage IV) |
2021 | [131] | XGBoost | A private dataset, MSD and NIH | 27,235, 5715, and 7054 | AUC = 0.97 (private test set), 0.83, and 0.89 (public test set) |
2022 | [133] | KNN, SVM, RF and XGBoost | A private dataset and NIH | 596 and 82 | AUC = 0.95, 0.98, 0.95, and 0.96 |
2020 | [11] | VGG | A private dataset, MSD and NIH | 14,780, 4849, and 1427 | Accuracy = 0.986, 0.989 (private test set), and 0.832 (MSD and NIH test set) |
2021 | [134] | UNet with Anatomy-aware Hybrid Transformers | A private dataset | 1627 | Recall = 0.952, Specificity = 0.958 |
2023 | [135] | PANDA | Five private datasets | 3208, 786, 5337, 18,654, and 4815 | Specificity = 0.999, Recall = 0.929, AUC = 0.986–0.996 |
2022 | [136] | IDLDMS-PTC | A private dataset | 500 | Accuracy = 0.9935, Specificity = 0.9884, Recall = 0.9935, F1-score = 0.9948 |
2023 | [137] | DenseNet | NIH and MSD | 18,942 and 15,000 | Accuracy = 0.974, Specificity = 0.966, Recall = 0.983 |
2022 | [138] | DNN-MMRF-ResNet | A private dataset | 110 | Precision = 0.9387, Recall = 0.9136, Specificity = 0.9380, Accuracy = 0.9269 |
2023 | [139] | Stacking ensemble | NIH | 80 | Accuracy = 0.988 |
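Most classification results above are reported as AUC, i.e., the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal pairwise implementation (our own sketch, not taken from any cited study):

```python
def roc_auc(scores, labels):
    """AUC as the fraction of positive/negative score pairs ranked
    correctly; tied scores count as half a correct ranking.

    `labels` are 1 for positive cases and 0 for negative cases.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This O(n²) form matches the rank-based (Mann–Whitney) definition used by standard libraries and is adequate at the cohort sizes in the tables.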
Year | Reference | Model | Dataset | Sample Size | Performance |
---|---|---|---|---|---|
2015 | [140] | SLIC | NIH | 82 | DSC = 0.81 |
2015 | [67] | Probabilistic bottom-up approach | NIH | 82 | DSC = 0.805 |
2017 | [141] | BRIEFnet | BTCV | 30 | DSC = 0.645 |
2017 | [142] | FCN-8s with DSN | A private dataset | 131 | DSC = |
2019 | [144] | Ringed Residual U-Net | NIH | 82 | DSC = |
2020 | [145] | iUNet | A combination of TCIA and BTCV, and a private dataset | 90 and 1905 | DSC = 0.87 |
2020 | [146] | DLU-Net | MSD and a private dataset | 281 and 126 | DSC = 0.9117 and 0.9094, Accuracy = 0.9725 and 0.9743 |
2020 | [147] | Custom segmentation network | NIH | 82 | DSC = |
2022 | [148] | WAU | BTCV | 30 | DSC = 0.6601 |
2023 | [149] | LMNS-net | NIH | 82 | DSC = 0.8868, IoU = 0.9882, Precision = 0.6822, Recall = 0.9866 |
2024 | [150] | M3BUNet | NIH and MSD | 82 and 281 | DSC = 0.8952 and 0.8860, IoU = 0.8116 and 0.7990 |
2023 | [151] | DBFE-Net | Two private datasets | 116 and 42 | Precision = 0.6573 (PCs), 0.8907 (abnormal) and 0.9147 (normal) |
2023 | [152] | Spiral-ResUNet | MSD | 281 | DSC = 0.6662 |
2018 | [154] | 3D UNet | A private dataset | 147 | DSC = |
2019 | [155] | CNN with Bias-dice loss function | NIH | 82 | DSC = 0.8522 |
2019 | [156] | 3D UNet-based two-stage framework | NIH | 82 | DSC = 0.8599 |
2021 | [157] | DoDNet | MSD | 281 | DSC = 0.7155, HD = 11.70 |
2021 | [158] | CNNs with STFFM and PPM modules | NIH and MSD | 82 and 281 | DSC = 0.8490 and 0.8556 |
2018 | [159] | nnUNet | MSD | 281 | DSC = 0.659 |
2020 | [160] | nnUNet | A private dataset | 61 | DSC = 0.73 |
2021 | [98] | Transformer-UNet | NIH | 82 | mIoU = 0.8301, DSC = 0.7966 |
2021 | [161] | MISSFormer | BTCV | 30 | DSC = 0.6567 |
2021 | [96] | TransUNet | BTCV | 30 | DSC = 0.5586 |
2022 | [97] | Swin-UNet | BTCV | 30 | DSC = 0.5658 |
2023 | [162] | TD-Net | NIH and MSD | 82 and 281 | DSC = 0.8989 and 0.9122 |
2024 | [163] | MIST | BTCV | 30 | DSC = 0.7243 |
2021 | [164] | nnFormer | BTCV | 30 | DSC = 0.8335 |
2022 | [165] | UNETR | BTCV | 30 | DSC = 0.799 |
2022 | [166] | Swin UNETR | BTCV and MSD | 30 and 281 | DSC = 0.897 and 0.7071 |
2023 | [100] | 3D TransUNet | BTCV | 30 | DSC = 0.8269 |
2023 | [167] | TGPFN | Three private datasets and MSD | 313, 53, 50, and 420 | DSC = 0.8051, 0.6717, 0.6925, and 0.4386 |
2018 | [168] | Deep LOGISMOS | A private dataset | 50 | DSC = |
2020 | [169] | Improved UNet based on uncertainty analysis and GCNs | NIH | 82 | DSC = |
2020 | [170] | DSD-ASPP-Net | NIH | 82 | DSC = |
2021 | [171] | SMCN with Graph-ResNet | A private dataset | 661 | DSC = 0.738 (PDAC) |
2022 | [172] | GEPS-Net | NIH | 82 | DSC = , IoU = , HD = |
2019 | [173] | V-NAS | NIH and MSD | 82 and 281 | DSC = 0.8515 and 0.5886 |
2021 | [174] | DiNTS | MSD | 281 | DSC = 0.6819, NSD = 0.8608 |
2023 | [175] | SAM | MSD | 281 | DSC = 0.0547 (box) |
2024 | [177] | SAM | AbdomenCT-1K | 1000 | DSC = 0.7686 (box) |
2024 | [179] | CLIP-Driven Universal Model | MSD | 281 | DSC = 0.7259, NSD = 0.8976 |
2021 | [181] | MoNet | MSD | 281 | DSC = 0.74 ± 0.11 |
2023 | [182] | ConDistFL | MSD | 281 | DSC = 0.5756 |
2019 | [183] | DQN | NIH | 82 | DSC = |
2021 | [184] | Mask-RCNN | NIH | 82 | DSC = , IoU = |
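The segmentation rows above are dominated by three overlap and boundary metrics: the Dice similarity coefficient (DSC), intersection-over-union (IoU), and the Hausdorff distance (HD). As a reference, a minimal sketch for binary masks, assuming masks are given as NumPy arrays (the toy example is illustrative):

```python
import numpy as np

def dice_and_iou(pred, gt):
    """DSC and IoU for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dsc = 2 * inter / (pred.sum() + gt.sum())
    iou = inter / np.logical_or(pred, gt).sum()
    return dsc, iou

def hausdorff(pred, gt):
    """Symmetric Hausdorff distance between the two foreground pixel sets
    (brute force; fine for toy masks, too slow for full CT volumes)."""
    a = np.argwhere(pred.astype(bool))
    b = np.argwhere(gt.astype(bool))
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy 2D masks: prediction covers 2 of the 3 ground-truth foreground pixels
pred = np.array([[1, 1, 0], [0, 0, 0]])
gt   = np.array([[1, 1, 1], [0, 0, 0]])
dsc, iou = dice_and_iou(pred, gt)  # DSC = 4/5 = 0.8, IoU = 2/3
```

The two overlap scores are monotonically related (DSC = 2·IoU/(1 + IoU)), so DSC is always at least as large as IoU for the same prediction; rows reporting both, such as [150], reflect this.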
Year | Reference | Model | Dataset | Sample Size | Performance |
---|---|---|---|---|---|
2020 | [185] | Custom pancreatic tumor detection network | A private dataset | 2890 | Recall = 0.8376, Specificity = 0.9179, Accuracy = 0.9018 |
2021 | [186] | nnDetection | MSD | 281 | [email protected] = 0.766 (cross validation) and 0.791 (test set) |
2023 | [188] | RCNN-Crop | NIH | 19,000 | [email protected] = 0.281 |
2023 | [189] | YCNN | A private dataset | 7245 | AUC = 1.00, F1-score = 0.99, Accuracy = 1.00 |
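The detection rows report [email protected], i.e., average precision with a prediction counted as correct when its box overlaps a ground-truth box with IoU ≥ 0.5. A minimal sketch of the box IoU and the greedy matching that yields the underlying TP/FP/FN counts (the full mAP additionally averages precision over the confidence-ranked precision-recall curve; function names are illustrative):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_at_iou(preds, gts, thr=0.5):
    """Greedily match predictions (assumed sorted by descending confidence)
    to ground-truth boxes; returns (TP, FP, FN) at the IoU threshold."""
    unmatched = list(range(len(gts)))
    tp = 0
    for p in preds:
        best, best_iou = None, thr
        for gi in unmatched:
            iou = box_iou(p, gts[gi])
            if iou >= best_iou:
                best, best_iou = gi, iou
        if best is not None:        # each ground truth can be matched once
            unmatched.remove(best)
            tp += 1
    return tp, len(preds) - tp, len(unmatched)
```

For example, one exact hit and one stray prediction against two ground truths gives TP = 1, FP = 1, FN = 1.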
Year | Reference | Model | Dataset | Sample Size | Performance |
---|---|---|---|---|---|
2020 | [160] | CE-ConvLSTM | Three private datasets, MSD and a combined dataset [195] | 296, 571, 61, 281, and 90 scans | C-index = 0.651 |
2021 | [190] | RF | A private dataset | 98 scans | AUC = 0.84 |
2022 | [191] | Ensemble learning | A private dataset | 282 scans | AUC = 0.76 (2-year OS) and 0.74 (1-year recurrence-free survival) |
2023 | [194] | Custom contrastive learning scheme | A private dataset | 157 scans | Accuracy = 0.744, AUC = 0.791, Recall = 0.740, Specificity = 0.750 |
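Prognosis-prediction rows report the concordance index (C-index): the fraction of comparable patient pairs in which the patient with the higher predicted risk experienced the event earlier. A minimal sketch of Harrell's C-index, assuming observed times, event indicators (1 = event, 0 = censored), and model risk scores (illustrative, quadratic-time implementation):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index; risk ties count as 0.5. A censored patient can
    serve only as the later member of a comparable pair."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i's event was observed before j's time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ranked risks give C-index = 1.0; random risks give ≈ 0.5
c = concordance_index([1, 2, 3], [1, 1, 1], [3, 2, 1])  # 1.0
```

A C-index of 0.5 corresponds to random ranking, which puts values such as 0.651 [160] and 0.78 [226] in context.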
Task | Year | Reference | Model | Dataset | Sample Size | Performance |
---|---|---|---|---|---|---|
Classification | 2021 | [211] | LASSO regression | A private dataset | 202 | AUC = 0.903 |
Classification | 2018 | [212] | PCN-Net | Two private datasets | 52 and 68 | Accuracy = 0.923 |
Classification | 2018 | [214] | ResNet-18 | A private dataset | 115 | Accuracy = 0.91, Precision = 0.86, Recall = 0.99, AUC = 0.90, F1-score = 0.92 |
Classification | 2019 | [215] | SVM | A private dataset | 139 | AUC = 0.78 |
Classification | 2019 | [217] | Proportion-SVM | A private dataset | 171 | Accuracy = 0.8422, Recall = 0.972, Specificity = 0.465 |
Segmentation | 2019 | [218] | CNN with Hausdorff-Sine loss function | Two private datasets | 180 and 120 | DSC = 0.841 and 0.857 |
Segmentation | 2021 | [152] | Spiral-ResUNet | Four private datasets | 65, 69, 68, and 70 | DSC = 0.656, 0.640, 0.645, and 0.653 |
Segmentation | 2020 | [219] | Square-window-based CNN | A private dataset | 56 | DSC = |
Segmentation | 2022 | [220] | MMSA-Net | Two private datasets | 67 and 67 | DSC = and |
Segmentation | 2023 | [221] | MC3DU-Net | A private dataset | 158 | Precision = 0.75, Recall = 0.80, DSC = 0.80 |
Segmentation | 2016 | [222] | CNN with CRF | A private dataset | 78 | DSC = 0.761 |
Segmentation | 2021 | [223] | UDA | Four private datasets | 67, 68, 68, and 64 | DSC = 0.6138, 0.6111, 0.6190, and 0.6007 |
Object Detection | 2018 | [212] | Modified Faster-RCNN | Two private datasets | 52 and 68 | Precision = 0.589 and 0.598, Recall = 0.873 and 0.889 |
Prognosis Prediction | 2021 | [225] | Logistic regression and Cox regression | A private dataset | 99 | - |
Prognosis Prediction | 2023 | [226] | Cox regression | A private dataset | 78 | C-index = 0.78 |
Year | Reference | Model | Dataset | Sample Size | Performance |
---|---|---|---|---|---|
2022 | [231] | SVM and AdaBoost | A private dataset | 55 | Accuracy = 0.921, Recall = 0.963, Specificity = 0.878 |
2019 | [232] | ResNet-50 | A private dataset | 3970 | Accuracy = 0.940, Recall = 0.957, Specificity = 0.926 |
2020 | [233] | ResNet | Two private datasets | 21,406 and 768 | DSC = 0.836 and 0.835 |
2021 | [234] | Combination of CNN and LSTM | A private dataset | 1350 | Accuracy = 0.9826, AUC = 0.98 |
2021 | [235] | ResNet-50 | A private dataset | 108 | Accuracy = 0.8275, AUC = 0.88 |
2021 | [236] | Multi-modal CNN | A private dataset | 3575 | Accuracy = 0.76, Precision = 0.74, Recall = 0.74, F1-score = 0.74 |
2022 | [237] | Xception | A private dataset | 5505 | Accuracy = 0.985, Specificity = 0.989, Recall = 0.983, AUC = 1.00 |
2022 | [238] | GoogLeNet, ResNet-18, and ResNet-50 | A private dataset | 66,249 | Accuracy = 0.932, Specificity = 0.950, Recall = 0.877, F1-score = 0.870 |
2023 | [239] | ResNet | A private dataset | 12,809 | Accuracy = 0.9180 |
2023 | [240] | EfficientNetV2-L | A private dataset | 22,000 | Accuracy = 0.91 |
2023 | [242] | CNNs and ViT models | A private dataset | 41 | Accuracy = 0.668 |
2023 | [77] | DSMT-Net | LEPset | 11,500 | Accuracy = 0.877, Precision = 0.842, Recall = 0.801, F1-score = 0.822 |
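Many rows above report AUC, the area under the ROC curve. It equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney U statistic); a minimal threshold-free sketch (function name and inputs are illustrative):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive outscores a random negative, counting score ties as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of positives and negatives yields AUC = 1.0,
# as reported (on a private dataset) for the Xception row [237]
a = auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2])  # 1.0
```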
Year | Reference | Model | Dataset | Sample Size | Performance |
---|---|---|---|---|---|
2020 | [233] | UNet++ | Three private datasets | 2115, 768, and 28 | Accuracy = 0.942, 0.824, and 0.862 |
2021 | [243] | UNet | A private dataset | 100 | IoU = 0.77 |
2021 | [244] | Attention U-Net | Two private datasets | 57 and 364 | DSC = 0.794, IoU = 0.741, Accuracy = 0.983, Specificity = 0.991, Recall = 0.797 |
2022 | [245] | DAF-Net | A private dataset | 330 | DSC = 0.828, IoU = 0.723, AUC = 0.927, Recall = 0.890, Specificity = 0.981, Precision = 0.851 |
2023 | [239] | Attention UNet | A private dataset | 1049 | DSC = 0.7552, mIoU = 0.6241, Precision = 0.7204, Recall = 0.8003 |
2023 | [246] | UNet++ | Two private datasets | 4530 and 270 | DSC = 0.763, Recall = 0.941, Precision = 0.642, Accuracy = 0.842, mIoU = 0.731 |
Year | Reference | Model | Dataset | Sample Size | Performance |
---|---|---|---|---|---|
2022 | [247] | SELSA-TROIA | A private dataset | 50 | [email protected] = 0.5836 |
2022 | [250] | YOLOv5m | A private dataset | 1213 | AUC = 0.85, Recall = 0.95, Specificity = 0.75 |
2023 | [251] | Combination of a classifier and YOLO | A private dataset | 66,249 | IoU = 0.42, Precision = 0.853 |
Task | Year | Reference | Model | Dataset | Sample Size | Performance |
---|---|---|---|---|---|---|
Classification | 2018 | [258] | HFB-SVM-RF | A private dataset | 1700 | Accuracy = 0.965, Recall = 0.952, Specificity = 0.975 |
Classification | 2019 | [259] | RBF SVM and Linear SVM | A private dataset | 111 | Accuracy = 0.85, Specificity = 0.84, Recall = 0.86, AUC = 0.93 |
Classification | 2021 | [260] | XGBoost | A private dataset | 149 | AUC = 0.921 |
Classification | 2023 | [262] | TMC | A private dataset | 370 | Accuracy = 0.75, Recall = 0.77, Specificity = 0.73 |
Classification | 2023 | [263] | RAD_model, DL_model, and MF_model | A private dataset | 159 | Accuracy = 0.901, Specificity = 0.930, Recall = 0.875, AUC = 0.964 |
Segmentation | 2018 | [258] | SLIC | A private dataset and NIH | 1700 and 82 | DSC = 0.789, IoU = 0.654 |
Segmentation | 2023 | [262] | UNet with OLP | A private dataset | 370 | DSC = 0.89 |
Segmentation | 2023 | [264] | DenseUNet | A private dataset | 48,092 | DSC = 0.751 |
Segmentation | 2023 | [265] | MFCNet | A private dataset | 93 | DSC = 0.7620 |
Segmentation | 2024 | [266] | CMF module and MIM strategy | A private dataset | 93 | DSC = 0.7314, IoU = 0.6056, HD = 6.30 |
Object Detection | 2023 | [267] | MAFF | A private dataset | 880 | [email protected] = 0.850 |
Prognosis Prediction | 2023 | [268] | NN | A private dataset | 58 | AUC = 0.830 |
Year | Reference | Model | Dataset | Sample Size | Performance |
---|---|---|---|---|---|
2022 | [274] | PACpAInt | Four private datasets and TCGA | 424, 304, 909, 25, and 100 | AUC = 0.86 (private test set) and 0.81 (TCGA test set) |
2017 | [275] | DeepNC | A private dataset | 60,036,000 | Accuracy = 0.913, Specificity = 0.928, Precision = 0.926, Recall = 0.899 |
2019 | [276] | NLC | TCGA and SEER | 190 and 64 | AUC = 0.860 and 0.944 |
2021 | [277] | CNN models | A private dataset | 138 | Accuracy = 0.9561 |
2022 | [278] | CNN with IMSAT | - | - | - |
2022 | [279] | SI-ViT | A private dataset | 5088 | Accuracy = 0.9400, Precision = 0.9198, Recall = 0.9068, F1-score = 0.9132 |
2022 | [280] | BCNN | A private dataset | 3201 | Accuracy = 0.7929, Precision = 0.7935, Recall = 0.7933, F1-score = 0.7915 |
2023 | [281] | DACTransNet | TCGA and three private datasets | 1336 patches from 190 WSIs, 35, 35, and 38 | Accuracy = 0.9634 (TCGA), 0.8973 (Center A), 0.8714 (Center B), and 0.9113 (Center C) |
Year | Reference | Model | Dataset | Sample Size | Performance |
---|---|---|---|---|---|
2021 | [282] | Modified UNet | A private dataset | 16,572 | F1-score = 0.86 |
2021 | [283] | SMA block | A private dataset | 24 | DSC = 0.8347, Precision = 0.8649, Recall = 0.8216 |
2021 | [284] | UNet | A private dataset | 231 | DSC = 0.8465 |
2022 | [285] | SMANet | A private dataset | 165 | mDSC = 0.769, mIoU = 0.665 |
2022 | [286] | UNet | A private dataset | 5345 | F1-score = 0.929 |
2023 | [287] | MLAGG-Net | A private dataset | 460 | DSC = 0.9002, IoU = 0.8207, Accuracy = 0.9439, Recall = 0.9136 |
2023 | [288] | Multi-task learning framework | A private dataset | 555,119 | F1-score = 0.97 |
2024 | [289] | Channel-spatial self-attention module | A private dataset | 329 | DSC = 0.7393, IoU = 0.5942, Accuracy = 0.7526, Precision = 0.8030, Recall = 0.7177 |
Year | Modalities | Task | Reference | Method | Dataset | Sample Size | Performance |
---|---|---|---|---|---|---|---|
2021 | PET-MRI and CT | Prognosis prediction | [293] | Cox regression | A private dataset | 44 | AUC = 0.87 |
2023 | CT and MRI | Prognosis prediction | [294] | Cox regression | A private dataset | 143 | AUC = 0.995, C-index = 0.778 |
2018 | MRI T1w and MRI T2w | Classification | [295] | CNN-based CAD system | A private dataset | 139 | Accuracy = 0.8280, Specificity = 0.8167, Recall = 0.8355 |
2018 | MRI T1w and MRI T2w | Classification | [214] | PCN-Net | Two private datasets | 52 and 68 | Accuracy = 0.800 |
2020 | MRI ADC, MRI DWI, and MRI T2w | Classification | [296] | Model-driven multimodal deep learning approach | A private dataset | 64 | Accuracy = 0.736, Specificity = 0.680, Precision = 0.810, Recall = 0.775, AUC = 0.740, F1-score = 0.783 |
2022 | CT and WSI | Prognosis prediction | [297] | ATIIN | A private dataset | 356 | C-index = 0.70 |
2023 | PET and MRI | Segmentation | [298] | TDSMask R-CNN | A private dataset | 71 | DSC = 0.7833, Recall = 0.7856, Specificity = 0.9972 |
2022 | CT and MRI | Segmentation | [299] | Improved Res-UNet | A private dataset and MSD | 163 and 281 | DSC = 0.6416 and 0.5753 |
2018 | CT and MRI | Segmentation | [300] | CNN | Two private datasets | 82 and 78 | DSC = 0.788 and 0.704 |
2018 | CT and MRI | Segmentation | [301] | CNN-RNN model | NIH and a private dataset | 82 and 79 | DSC = 0.833 and 0.807, IoU = 0.718 and 0.682, Precision = 0.845 and 0.843, Recall = 0.828 and 0.783 |
2019 | CT and MRI | Segmentation | [302] | Custom 2D/3D method | NIH and two private datasets | 82, 216, and 132 | DSC = 0.793, 0.796, and 0.816 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Liu, W.; Zhang, B.; Liu, T.; Jiang, J.; Liu, Y. Artificial Intelligence in Pancreatic Image Analysis: A Review. Sensors 2024, 24, 4749. https://doi.org/10.3390/s24144749