Recent Advances in Deep Learning and Medical Imaging for Head and Neck Cancer Treatment: MRI, CT, and PET Scans
Simple Summary
Abstract
1. Introduction
2. Limitations of Traditional Imaging Techniques in Head and Neck Cancer Imaging
2.1. Subjectivity and Interobserver Variability
2.2. Factors Affecting Human Expertise
2.3. Need for Improved Imaging Techniques
3. Deep Learning in Medical Imaging
3.1. Deep Learning: A Brief Overview
3.2. Deep Learning in Medical Image Analysis: Performance Improvements over Traditional Methods
4. Deep Learning in Head and Neck Cancer Imaging
4.1. MRI
4.1.1. Tumor Detection and Segmentation
4.1.2. Treatment Response Prediction and Prognosis Assessment
4.2. CT
4.2.1. Tumor Detection and Segmentation
4.2.2. Outcome Prediction and Treatment Planning
4.3. PET
4.3.1. Tumor Detection and Segmentation
4.3.2. Outcome Prediction and Treatment Monitoring
5. Comparison of Deep Learning and Traditional Imaging Techniques in Head and Neck Cancer Imaging
5.1. Advantages of Traditional Imaging Techniques
- Interpretability: Traditional imaging techniques provide interpretable results, as they are often based on well-established, handcrafted features and statistical methods [43]; a minimal sketch of one such handcrafted feature follows this list. This interpretability allows clinicians to better understand the rationale behind the decision-making process, which is essential for building trust in the results and ensuring appropriate clinical actions.
- Lower computational requirements: Traditional imaging methods typically have lower computational demands compared to deep learning approaches [16], making them more accessible and easier to implement on standard workstations without the need for high-performance computing resources.
- Robustness to variations: Traditional imaging techniques may be more robust to variations in imaging protocols and acquisition parameters [44] since they rely on well-established features that are less sensitive to changes in image quality and appearance.
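To make "handcrafted features" concrete, the sketch below computes a single gray-level co-occurrence matrix (GLCM) texture feature of the kind used in classical radiomics [43]. It is a minimal illustration, not a pipeline from any study reviewed here; the gray-level quantization, offset, and use of scikit-image's graycomatrix/graycoprops are assumptions chosen for brevity.

```python
# Minimal sketch of a traditional handcrafted-feature pipeline: one
# interpretable GLCM texture feature from a 2D tumor region of interest.
# All parameters (32 gray levels, distance 1, angle 0) are illustrative.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_contrast(roi: np.ndarray, levels: int = 32) -> float:
    """GLCM contrast of a 2D ROI after gray-level quantization."""
    roi = np.asarray(roi, dtype=np.float64)
    # Quantize to a small number of gray levels so the co-occurrence
    # statistics are stable and reproducible.
    edges = np.linspace(roi.min(), roi.max(), levels)
    quantized = (np.digitize(roi, edges) - 1).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    return float(graycoprops(glcm, "contrast")[0, 0])

# A synthetic 64x64 patch stands in for a segmented tumor ROI.
rng = np.random.default_rng(0)
print(glcm_contrast(rng.integers(0, 255, size=(64, 64))))
```

Because the feature has a closed-form definition, a clinician can trace exactly how image texture drives the score, which is the interpretability advantage noted above.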
5.2. Advantages of Deep Learning Techniques
- Automatic feature learning: Deep learning models can automatically learn relevant features from the data without the need for manual feature engineering [5], reducing the potential for human bias and enabling the discovery of novel imaging biomarkers that may not be apparent using traditional techniques.
- Integration of multi-modal and multi-parametric data: Deep learning models can efficiently manage and integrate information from various imaging modalities (e.g., MRI, CT, and PET) and different image sequences [16], potentially providing a more comprehensive assessment of tumor characteristics and treatment response; the sketch after this list illustrates both this fusion and the automatic feature learning above.
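A minimal sketch of both ideas, assuming co-registered MRI, CT, and PET volumes stacked as input channels of a small 3D convolutional network; the architecture, shapes, and class count are illustrative assumptions, not the model of any study cited in this review:

```python
# Minimal PyTorch sketch: convolutional layers learn features directly
# from the voxels (no manual feature engineering), and three imaging
# modalities are fused by early concatenation along the channel axis.
import torch
import torch.nn as nn

class MultiModalCNN(nn.Module):
    def __init__(self, n_modalities: int = 3, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(n_modalities, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> learned descriptor
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, modalities, depth, height, width), e.g., MRI/CT/PET
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of two co-registered MRI/CT/PET patches.
model = MultiModalCNN()
print(model(torch.randn(2, 3, 32, 64, 64)).shape)  # torch.Size([2, 2])
```

Early channel-level fusion is only one design choice; intermediate or late fusion of per-modality branches is equally common and better suited when the modalities are not perfectly co-registered.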
5.3. Guidance for Optimal Use
- Preliminary analysis and simpler tasks: When a quick, preliminary analysis is required, or for less complex tasks, traditional imaging techniques may be more suitable due to their lower computational demands and ease of implementation [16].
- Interpretability and trust: When interpretability and trust in the decision-making process are critical, traditional methods may be more appropriate, as they provide more transparent and explainable results [46].
- Resource-limited settings: In settings in which computational resources are limited, traditional imaging techniques may be more feasible since they generally have lower computational requirements [16].
- Complex tasks and improved performance: For more complex tasks, or when improved accuracy and performance are sought, deep learning techniques are more suitable [24,25,27,45]. This includes tasks such as tumor detection and segmentation, outcome prediction, and treatment planning in head and neck cancer imaging; segmentation quality in such comparisons is typically benchmarked with overlap metrics such as the Dice coefficient (a minimal implementation follows this list).
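As a concrete yardstick for the segmentation comparisons above, the following is a minimal implementation of the Dice similarity coefficient, the overlap metric most commonly reported for 3D medical image segmentation [11]; the toy masks are hypothetical stand-ins for tumor contours.

```python
# Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks,
# ranging from 0 (no overlap) to 1 (perfect agreement).
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy 2D masks standing in for a predicted and a reference tumor contour.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(round(dice_coefficient(a, b), 3))  # 0.562
```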
6. Gaps, Controversies, and Future Directions
6.1. Lack of Standardization and Benchmarking
6.2. Integration of Multimodal, Temporal, and Clinical Information
6.3. Explainability, Interpretability, and Trustworthiness
6.4. Clinical Implementation, Validation, and Impact Assessment
7. Challenges and Limitations of Deep Learning Models in Head and Neck Cancer Imaging
7.1. Data Quality and Quantity
7.2. Model Overfitting
7.3. Computation Requirements
7.4. Ethical Considerations
8. Radiogenomics: A Promising Avenue for Deep Learning in Head and Neck Cancer Imaging
8.1. Radiogenomics: An Overview
8.2. Radiogenomic Features in Deep Learning Models
8.3. Challenges and Future Directions
8.4. Clinical Implementation and Validation
9. Advanced Deep Learning Techniques in Head and Neck Cancer Imaging
9.1. Convolutional Autoencoders for Data Compression, Image Denoising, and Feature Extraction
9.2. Generative Adversarial Networks for Super-Resolution
9.3. Transformer Models: Vision Transformer (ViT)
10. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Chow, L.Q.M. Head and neck cancer. N. Engl. J. Med. 2020, 382, 60–72.
- Marur, S.; Forastiere, A.A. Head and neck squamous cell carcinoma: Update on epidemiology, diagnosis, and treatment. Mayo Clin. Proc. 2016, 91, 386–396.
- Chiesa-Estomba, C.M.; Lequerica-Fernández, P.; Fakhry, N. Imaging in head and neck cancer: United in diversity. Cancers 2019, 11, 1077.
- Wang, J.; Wu, C.J.; Bao, M.L.; Zhang, J.; Wang, X.N.; Zhang, Y.D. Machine learning-based analysis of MR radiomics can help to improve the diagnostic performance of PI-RADS v2 in clinically relevant prostate cancer. Eur. Radiol. 2017, 27, 4082–4090.
- Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
- Gillies, R.J.; Kinahan, P.E.; Hricak, H. Radiomics: Images are more than pictures, they are data. Radiology 2016, 278, 563–577.
- Pignon, J.P.; le Maître, A.; Maillard, E.; Bourhis, J. Meta-analysis of chemotherapy in head and neck cancer (MACH-NC): An update on 93 randomised trials and 17,346 patients. Radiother. Oncol. 2009, 92, 4–14.
- Alizadeh, M.; Safaei, A.; Dorri, G.; Zendehbad, S.A. Application of deep learning and machine learning methods to radiomics and texture analysis of medical images: A review. J. Biomed. Phys. Eng. 2018, 8, 405–424.
- Petrick, N.; Sahiner, B.; Armato, S.G., III; Bert, A.; Correale, L.; Delsanto, S.; Freedman, M.T.; Fryd, D.; Gur, D.; Hadjiiski, L.; et al. Evaluation of computer-aided detection and diagnosis systems. Med. Phys. 2013, 40, 087001.
- Lambin, P.; Rios-Velazquez, E.; Leijenaar, R.; Carvalho, S.; Van Stiphout, R.G.; Granton, P.; Zegers, C.M.; Gillies, R.; Boellard, R.; Dekker, A.; et al. Radiomics: Extracting more information from medical images using advanced feature analysis. Eur. J. Cancer 2012, 48, 441–446.
- Taha, A.A.; Hanbury, A. Metrics for evaluating 3D medical image segmentation: Analysis, selection, and tool. BMC Med. Imaging 2015, 15, 29.
- Krupinski, E.A. Current perspectives in medical image perception. Atten. Percept. Psychophys. 2010, 72, 1205–1217.
- Drew, T.; Vo, M.L.-H.; Olwal, A.; Jacobson, F.; Seltzer, S.E.; Wolfe, J.M. Scanners and drillers: Characterizing expert visual search through volumetric images. J. Vis. 2013, 13, 3.
- Berbaum, K.S.; Franken, E.A.; Dorfman, D.D.; Miller, E.M.; Caldwell, R.T.; Kuehn, D.M.; Berbaum, M.L. Role of faulty decision making in the satisfaction of search effect in chest radiography. Acad. Radiol. 2000, 7, 1098–1106.
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
- Shen, D.; Wu, G.; Suk, H.I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248.
- Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118.
- Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 2016, 316, 2402–2410.
- Ciompi, F.; Chung, K.; Van Riel, S.J.; Setio, A.A.; Gerke, P.K.; Jacobs, C.; Scholten, E.T.; Schaefer-Prokop, C.; Wille, M.M.; Marchiano, A.; et al. Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Sci. Rep. 2017, 7, 46479.
- Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R.; et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 2015, 34, 1993–2024.
- Lambin, P.; Leijenaar, R.T.H.; Deist, T.M.; Peerlings, J.; de Jong, E.E.C.; van Timmeren, J.; Sanduleanu, S.; Larue, R.T.H.M.; Even, A.J.G.; Jochems, A.; et al. Radiomics: The bridge between medical imaging and personalized medicine. Nat. Rev. Clin. Oncol. 2017, 14, 749–762.
- Czernin, J.; Allen-Auerbach, M.; Schelbert, H.R. Improvements in cancer staging with PET/CT: Literature-based evidence as of September 2006. J. Nucl. Med. 2007, 48, 78S–88S.
- Ibragimov, B.; Xing, L. Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Med. Phys. 2017, 44, 547–557.
- Zhong, Z.; Kim, Y.; Zhou, L.; Plichta, K. Deep-learning based, automated segmentation of head and neck cancer in PET/CT images. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 177–181.
- Men, K.; Dai, J.; Li, Y. Automatic segmentation of the clinical target volume and organs at risk in the planning CT for rectal cancer using deep dilated convolutional neural networks. Med. Phys. 2017, 44, 6377–6389.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2015; pp. 234–241.
- Lao, J.; Chen, Y.; Li, Z.C.; Li, Q.; Zhang, J. A deep learning-based radiomics model for prediction of survival in glioblastoma multiforme. Sci. Rep. 2017, 7, 10353.
- Wang, H.; Zhou, Z.; Li, Y.; Chen, Z.; Lu, P.; Wang, W.; Liu, W.; Yu, L. Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images. EJNMMI Res. 2019, 9, 15.
- Hötker, A.M.; Tarlinton, L.; Mazaheri, Y.; Woo, K.M.; Gönen, M.; Saltz, L.B.; Goodman, K.A.; Garcia-Aguilar, J.; Gollub, M.J. Multiparametric MRI in the assessment of response of rectal cancer to neoadjuvant chemoradiotherapy: A comparison of morphological, volumetric and functional MRI parameters. Eur. Radiol. 2016, 26, 4303–4312.
- Som, P.M.; Curtin, H.D. Head and Neck Imaging, 4th ed.; Mosby: St. Louis, MO, USA, 2003.
- Zhu, W.; Huang, Y.; Zeng, L.; Chen, X.; Liu, Y.; Qian, Z.; Du, N.; Fan, W.; Xie, X. AnatomyNet: Deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy. Med. Phys. 2019, 46, 576–589.
- Haarburger, C.; Valentinitsch, A.; Rienmüller, A.; Kuder, T.A.; Bickelhaupt, S. Automated detection and segmentation of head and neck carcinomas using a 3D U-Net on planning CT scans. Eur. Radiol. Exp. 2020, 4, 1–10.
- Wu, J.; Gensheimer, M.F.; Dong, X.; Rubin, D.L.; Napel, S.; Diehn, M. Robust intratumor partitioning to identify high-risk subregions in lung cancer: A pilot study. Int. J. Radiat. Oncol. Biol. Phys. 2017, 98, 647–656.
- Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional neural networks for medical image analysis: Full training or fine tuning? IEEE Trans. Med. Imaging 2016, 35, 1299–1312.
- Lu, L.; Ehmke, R.C.; Schwartz, L.H.; Zhao, B. Assessing agreement between radiomic features computed for multiple CT imaging settings. PLoS ONE 2015, 10, e0143848.
- Kearney, V.; Chan, J.W.; Haaf, S.; Descovich, M.; Solberg, T.D. DoseNet: A volumetric dose prediction algorithm using 3D fully convolutional networks. Med. Phys. 2018, 45, 6343–6353.
- Adams, M.C.; Turkington, T.G.; Wilson, J.M.; Wong, T.Z. A systematic review of the factors affecting accuracy of SUV measurements. Am. J. Roentgenol. 2010, 195, 310–320.
- Liu, S.; Liu, S.; Cai, W.; Che, H.; Pujol, S.; Kikinis, R.; Feng, D.; Fulham, M.J. Multimodal neuroimaging feature learning for multiclass diagnosis of Alzheimer’s disease. IEEE Trans. Biomed. Eng. 2015, 62, 1132–1140.
- Eiber, M.; Weirich, G.; Holzapfel, K.; Souvatzoglou, M.; Haller, B.; Rauscher, I.; Beer, A.J.; Wester, H.J.; Gschwend, J.; Schwaiger, M.; et al. Simultaneous 68Ga-PSMA HBED-CC PET/MRI improves the localization of primary prostate cancer. Eur. Urol. 2016, 70, 829–836.
- Zhou, M.; Scott, J.; Chaudhury, B.; Hall, L.; Goldgof, D.; Yeom, K.W.; Iv, M.; Ou, Y.; Kalpathy-Cramer, J.; Napel, S.; et al. Radiomics in brain tumor: Image assessment, quantitative feature descriptors, and machine-learning approaches. Am. J. Neuroradiol. 2018, 39, 208–216.
- Vallières, M.; Freeman, C.R.; Skamene, S.R.; El Naqa, I. A radiomics model from joint FDG-PET and MRI texture features for the prediction of lung metastases in soft-tissue sarcomas of the extremities. Phys. Med. Biol. 2015, 60, 5471–5496.
- Vallières, M.; Kay-Rivest, E.; Perrin, L.J.; Liem, X.; Furstoss, C.; Aerts, H.J.; Khaouam, N.; Nguyen-Tan, P.F.; Wang, C.S.; Sultanem, K.; et al. Radiomics strategies for risk assessment of tumour failure in head-and-neck cancer. Sci. Rep. 2017, 7, 10117.
- Bankier, A.A.; Levine, D.; Halpern, E.F.; Kressel, H.Y. Consensus interpretation in imaging research: Is there a better way? Radiology 2010, 257, 14–17.
- Fortin, J.P.; Parker, D.; Tunç, B.; Watanabe, T.; Elliott, M.A.; Ruparel, K.; Roalf, D.R.; Satterthwaite, T.D.; Gur, R.C.; Gur, R.E.; et al. Harmonization of multi-site diffusion tensor imaging data. Neuroimage 2017, 161, 149–170.
- Men, K.; Chen, X.; Zhang, Y.; Zhang, T.; Dai, J.; Yi, J.; Li, Y. Deep deconvolutional neural network for target segmentation of nasopharyngeal cancer in planning computed tomography images. Front. Oncol. 2019, 9, 1200.
- Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A survey of methods for explaining black box models. ACM Comput. Surv. 2018, 51, 1–42.
- Castelvecchi, D. Can we open the black box of AI? Nat. News 2016, 538, 20–23.
- Jha, S.; Topol, E.J. Adapting to artificial intelligence: Radiologists and pathologists as information specialists. JAMA 2016, 316, 2353–2354.
- Topol, E.J. High-performance medicine: The convergence of human and artificial intelligence. Nat. Med. 2019, 25, 44–56.
- Zwanenburg, A.; Vallières, M.; Abdalah, M.A.; Aerts, H.J.W.L.; Andrearczyk, V.; Apte, A.; Ashrafinia, S.; Bakas, S.; Beukinga, R.J.; Boellaard, R.; et al. The image biomarker standardization initiative: Standardized quantitative radiomics for high-throughput image-based phenotyping. Radiology 2020, 295, 328–338.
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- Char, D.S.; Shah, N.H.; Magnus, D. Implementing machine learning in health care—Addressing ethical challenges. N. Engl. J. Med. 2018, 378, 981–983.
- Rodrigues, J.J.; de la Torre, I.; Fernández, G.; López-Coronado, M. Analysis of the security and privacy requirements of cloud-based electronic health records systems. J. Med. Internet Res. 2013, 15, e186.
- Aerts, H.J.; Velazquez, E.R.; Leijenaar, R.T.; Parmar, C.; Grossmann, P.; Carvalho, S.; Bussink, J.; Monshouwer, R.; Haibe-Kains, B.; Rietveld, D.; et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat. Commun. 2014, 5, 4006.
- Leek, J.T.; Scharpf, R.B.; Bravo, H.C.; Simcha, D.; Langmead, B.; Johnson, W.E.; Geman, D.; Baggerly, K.; Irizarry, R.A. Tackling the widespread and critical impact of batch effects in high-throughput data. Nat. Rev. Genet. 2010, 11, 733–739.
- Siravegna, G.; Marsoni, S.; Siena, S.; Bardelli, A. Integrating liquid biopsies into the management of cancer. Nat. Rev. Clin. Oncol. 2017, 14, 531–548.
- Sollini, M.; Antunovic, L.; Chiti, A.; Kirienko, M. Towards clinical application of image mining: A systematic review on artificial intelligence and radiomics. Eur. J. Nucl. Med. Mol. Imaging 2018, 45, 2076–2089.
- Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. In Proceedings of the 2015 8th International Congress on Image and Signal Processing (CISP), Shenyang, China, 14–16 October 2015; pp. 393–398.
- Chen, H.; Zhang, Y.; Zhang, W.; Liao, P.; Li, K.; Zhou, J.; Wang, G. Low-dose CT via convolutional neural network. Biomed. Opt. Express 2018, 8, 679–694.
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: A nested U-Net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Cham, Switzerland, 2020; pp. 3–11.
- Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2014; pp. 2672–2680.
- Wolterink, J.M.; Leiner, T.; Viergever, M.A.; Isgum, I. Generative adversarial networks for noise reduction in low-dose CT. IEEE Trans. Med. Imaging 2017, 36, 2536–2545.
- Son, J.; Park, S.J.; Jung, K.H. Retinal vessel segmentation in fundoscopic images with generative adversarial networks. arXiv 2018, arXiv:1801.00863.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
- Chen, C.; Qin, C.; Qiu, H.; Tarroni, G.; Duan, J.; Bai, W.; Rueckert, D. Iterative residual refinement for joint optic-disc and cup segmentation with transformers. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Strasbourg, France, 27 September–1 October 2021; Springer: Cham, Switzerland, 2021; pp. 118–127.
- Sharma, A.; Gupta, A.; Kumar, A.; Ray, A.; Sharma, S. Transformer-based deep learning model for multi-class segmentation of head and neck cancer. In Proceedings of the International Conference on Image Analysis and Recognition, Lecce, Italy, 23–27 May 2022; Springer: Cham, Switzerland, 2022; pp. 202–214.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).