Updates, Applications and Future Directions of Deep Learning for the Images Processing in the Field of Cranio-Maxillo-Facial Surgery
Abstract
1. Introduction
1.1. Background
1.2. Machine Learning vs. Deep Learning
1.3. Objective of This Study
2. Methods
2.1. Literature Search
2.2. Inclusion and Exclusion Criteria
2.3. Data Collection
3. Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Eisenstein, M. AI Assistance for Planning Cancer Treatment. Nature 2024, 629, S14–S16.
2. Saha, A.; Bosma, J.S.; Twilt, J.J.; van Ginneken, B.; Bjartell, A.; Padhani, A.R.; Bonekamp, D.; Villeirs, G.; Salomon, G.; Giannarini, G.; et al. Artificial Intelligence and Radiologists in Prostate Cancer Detection on MRI (PI-CAI): An International, Paired, Non-Inferiority, Confirmatory Study. Lancet Oncol. 2024, 25, 879–887.
3. Katsoulakis, E.; Wang, Q.; Wu, H.; Shahriyari, L.; Fletcher, R.; Liu, J.; Achenie, L.; Liu, H.; Jackson, P.; Xiao, Y.; et al. Digital Twins for Health: A Scoping Review. NPJ Digit. Med. 2024, 7, 77.
4. Oerther, B.; Engel, H.; Nedelcu, A.; Strecker, R.; Benkert, T.; Nickel, D.; Weiland, E.; Mayrhofer, T.; Bamberg, F.; Benndorf, M.; et al. Performance of an Ultra-Fast Deep-Learning Accelerated MRI Screening Protocol for Prostate Cancer Compared to a Standard Multiparametric Protocol. Eur. Radiol. 2024, 34, 7053–7062.
5. Oh, N.; Kim, J.-H.; Rhu, J.; Jeong, W.K.; Choi, G.-S.; Kim, J.M.; Joh, J.-W. Automated 3D Liver Segmentation from Hepatobiliary Phase MRI for Enhanced Preoperative Planning. Sci. Rep. 2023, 13, 17605.
6. Bizzo, B.C.; Almeida, R.R.; Alkasab, T.K. Computer-Assisted Reporting and Decision Support in Standardized Radiology Reporting for Cancer Imaging. JCO Clin. Cancer Inform. 2021, 5, 426–434.
7. Bharadwaj, P.; Berger, M.; Blumer, S.L.; Lobig, F. Selecting the Best Radiology Workflow Efficiency Applications. J. Digit. Imaging Inform. Med. 2024, 37, 2740–2751.
8. Khera, R.; Oikonomou, E.K.; Nadkarni, G.N.; Morley, J.R.; Wiens, J.; Butte, A.J.; Topol, E.J. Transforming Cardiovascular Care With Artificial Intelligence: From Discovery to Practice. JACC 2024, 84, 97–114.
9. Ayers, J.W.; Poliak, A.; Dredze, M.; Leas, E.C.; Zhu, Z.; Kelley, J.B.; Faix, D.J.; Goodman, A.M.; Longhurst, C.A.; Hogarth, M.; et al. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern. Med. 2023, 183, 589–596.
10. Chi, W.N.; Reamer, C.; Gordon, R.; Sarswat, N.; Gupta, C.; White VanGompel, E.; Dayiantis, J.; Morton-Jost, M.; Ravichandran, U.; Larimer, K.; et al. Continuous Remote Patient Monitoring: Evaluation of the Heart Failure Cascade Soft Launch. Appl. Clin. Inform. 2021, 12, 1161–1173.
11. Moglia, A.; Georgiou, K.; Morelli, L.; Toutouzas, K.; Satava, R.M.; Cuschieri, A. Breaking down the Silos of Artificial Intelligence in Surgery: Glossary of Terms. Surg. Endosc. 2022, 36, 7986–7997.
12. Liyanage, V.; Tao, M.; Park, J.S.; Wang, K.N.; Azimi, S. Malignant and Non-Malignant Oral Lesions Classification and Diagnosis with Deep Neural Networks. J. Dent. 2023, 137, 104657.
13. Chen, R.; Wang, Q.; Huang, X. Intelligent Deep Learning Supports Biomedical Image Detection and Classification of Oral Cancer. Technol. Health Care 2024, 32, 465–475.
14. Kouketsu, A.; Doi, C.; Tanaka, H.; Araki, T.; Nakayama, R.; Toyooka, T.; Hiyama, S.; Iikubo, M.; Osaka, K.; Sasaki, K.; et al. Detection of Oral Cancer and Oral Potentially Malignant Disorders Using Artificial Intelligence-Based Image Analysis. Head Neck 2024, 46, 2253–2260.
15. Kusakunniran, W.; Imaromkul, T.; Mongkolluksamee, S.; Thongkanchorn, K.; Ritthipravat, P.; Tuakta, P.; Benjapornlert, P. Deep Upscale U-Net for Automatic Tongue Segmentation. Med. Biol. Eng. Comput. 2024, 62, 1751–1762.
16. Patel, A.; Besombes, C.; Dillibabu, T.; Sharma, M.; Tamimi, F.; Ducret, M.; Chauvin, P.; Madathil, S. Attention-Guided Convolutional Network for Bias-Mitigated and Interpretable Oral Lesion Classification. Sci. Rep. 2024, 14, 31700.
17. Thakuria, T.; Mahanta, L.B.; Khataniar, S.K.; Goswami, R.D.; Baruah, N.; Bharali, T. Smartphone-Based Oral Lesion Image Segmentation Using Deep Learning. J. Imaging Inform. Med. 2025.
18. Alzahrani, A.A.; Alsamri, J.; Maashi, M.; Negm, N.; Asklany, S.A.; Alkharashi, A.; Alkhiri, H.; Obayya, M. Deep Structured Learning with Vision Intelligence for Oral Carcinoma Lesion Segmentation and Classification Using Medical Imaging. Sci. Rep. 2025, 15, 6610.
19. Ahmad, M.; Irfan, M.A.; Sadique, U.; Haq, I.U.; Jan, A.; Khattak, M.I.; Ghadi, Y.Y.; Aljuaid, H. Multi-Method Analysis of Histopathological Image for Early Diagnosis of Oral Squamous Cell Carcinoma Using Deep Learning and Hybrid Techniques. Cancers 2023, 15, 5247.
20. Panigrahi, S.; Nanda, B.S.; Bhuyan, R.; Kumar, K.; Ghosh, S.; Swarnkar, T. Classifying Histopathological Images of Oral Squamous Cell Carcinoma Using Deep Transfer Learning. Heliyon 2023, 9, e13444.
21. Das, M.; Dash, R.; Mishra, S.K. Automatic Detection of Oral Squamous Cell Carcinoma from Histopathological Images of Oral Mucosa Using Deep Convolutional Neural Network. Int. J. Environ. Res. Public Health 2023, 20, 2131.
22. Confer, M.P.; Falahkheirkhah, K.; Surendran, S.; Sunny, S.P.; Yeh, K.; Liu, Y.-T.; Sharma, I.; Orr, A.C.; Lebovic, I.; Magner, W.J.; et al. Rapid and Label-Free Histopathology of Oral Lesions Using Deep Learning Applied to Optical and Infrared Spectroscopic Imaging Data. J. Pers. Med. 2024, 14, 304.
23. Ragab, M.; Asar, T.O. Deep Transfer Learning with Improved Crayfish Optimization Algorithm for Oral Squamous Cell Carcinoma Cancer Recognition Using Histopathological Images. Sci. Rep. 2024, 14, 25348.
24. Zafar, A.; Khalid, M.; Farrash, M.; Qadah, T.M.; Lahza, H.F.M.; Kim, S.-H. Enhancing Oral Squamous Cell Carcinoma Detection Using Histopathological Images: A Deep Feature Fusion and Improved Harris Hawks Optimization-Based Framework. Bioengineering 2024, 11, 913.
25. Albalawi, E.; Thakur, A.; Ramakrishna, M.T.; Bhatia Khan, S.; SankaraNarayanan, S.; Almarri, B.; Hadi, T.H. Oral Squamous Cell Carcinoma Detection Using EfficientNet on Histopathological Images. Front. Med. 2023, 10, 1349336.
26. Koriakina, N.; Sladoje, N.; Bašić, V.; Lindblad, J. Deep Multiple Instance Learning Versus Conventional Deep Single Instance Learning for Interpretable Oral Cancer Detection. PLoS ONE 2024, 19, e0302169.
27. Sukegawa, S.; Tanaka, F.; Nakano, K.; Hara, T.; Ochiai, T.; Shimada, K.; Inoue, Y.; Taki, Y.; Nakai, F.; Nakai, Y.; et al. Training High-Performance Deep Learning Classifier for Diagnosis in Oral Cytology Using Diverse Annotations. Sci. Rep. 2024, 14, 17591.
28. Yuan, W.; Yang, J.; Yin, B.; Fan, X.; Yang, J.; Sun, H.; Liu, Y.; Su, M.; Li, S.; Huang, X. Noninvasive Diagnosis of Oral Squamous Cell Carcinoma by Multi-Level Deep Residual Learning on Optical Coherence Tomography Images. Oral Dis. 2023, 29, 3223–3231.
29. Chen, Z.; Yu, Y.; Liu, S.; Du, W.; Hu, L.; Wang, C.; Li, J.; Liu, J.; Zhang, W.; Peng, X. A Deep Learning and Radiomics Fusion Model Based on Contrast-Enhanced Computed Tomography Improves Preoperative Identification of Cervical Lymph Node Metastasis of Oral Squamous Cell Carcinoma. Clin. Oral Investig. 2023, 28, 39.
30. Yang, L.; Zhang, S.; Li, J.; Feng, C.; Zhu, L.; Li, J.; Lin, L.; Lv, X.; Su, K.; Lao, X.; et al. Diagnosis of Lymph Node Metastasis in Oral Squamous Cell Carcinoma by an MRI-Based Deep Learning Model. Oral Oncol. 2025, 161, 107165.
Characteristics | Machine Learning | Deep Learning |
---|---|---|
Feature Extraction | Manual | Automatic |
Data required | Works well with smaller datasets | Needs large volumes of data |
Hardware | Sufficient CPU | GPU/TPU required |
Interpretability | More transparent and understandable models | Complex and less interpretable models |
Typical applications | Regression; simple classification | Computer vision; NLP; voice recognition |
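The "Feature Extraction" row above is the core distinction between the two paradigms, and it can be made concrete with a minimal sketch (pure Python, invented toy data): a handcrafted feature of the kind classical ML pipelines rely on, next to a 2D convolution, the building block whose kernel weights a deep network learns automatically. Here the kernel is hand-set to an edge detector purely for illustration.

```python
# Illustrative toy sketch: manual vs. learned feature extraction.
# All data and kernel values below are invented for demonstration.

def handcrafted_feature(image):
    """Manual ML-style feature: mean pixel intensity of the image."""
    return sum(sum(row) for row in image) / (len(image) * len(image[0]))

def conv2d(image, kernel):
    """Valid-mode 2D convolution, the basic operation a CNN stacks and learns."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# Toy 4x4 "image" with a vertical edge between columns 1 and 2.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]

# A Sobel-like vertical-edge kernel; in a CNN these weights are learned, not hand-set.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

print(handcrafted_feature(img))   # a single manual summary statistic
print(conv2d(img, kernel))        # a feature map responding to the edge
```

In a real deep network, many such kernels are learned jointly from data, which is exactly why deep learning needs the larger datasets and GPU/TPU hardware listed in the table.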
Study | Purpose | DL Models Used | Data Type | Results | Advantages | Disadvantages |
---|---|---|---|---|---|---|
Liyanage et al. (2023) [12] | Classification of oral lesions (non-neoplastic vs. benign vs. malignant neoplastic). | EfficientNetV2 and MobileNetV3. | 342 clinical photos. | Accuracy: 76% (MobileNetV3); AUC: 0.88; Precision/Recall/F1-score: ~0.64. | Usable on smartphones for remote screening; promising AUC; support in low-resource areas. | Low accuracy on non-neoplastic lesions; no texture filtering; overfitting possible. |
Chen et al. (2024) [13] | Binary classification (cancer vs. healthy). | CANet and Swin Transformer. | 131 clinical photos. | CANet: accuracy 97.00%; sensitivity 97.82%; specificity 97.82%; F1 96.61%. Swin: accuracy 94.95%; sensitivity 95.37%; specificity 95.52%; F1 94.66%. | High accuracy; robust to noise; suitable for small datasets; transferable. | Small dataset; sensitivity to low-quality images; risk of inconsistent labels. |
Kouketsu et al. (2024) [14] | Detection of the presence and location of OSCC and leukoplakia. | Single Shot MultiBox Detector (SSD). | 1043 clinical photos. | Sensitivity 93.9% (OSCC); 83.7% (OSCC + leukoplakia). Specificity 81.2%. | Non-invasive; inexpensive; high accuracy; suitable for remote screening or self-assessment. | Reduced performance in low light or extreme images; language only; limited to RGB images. |
Kusakunniran et al. (2024) [15] | Segmentation of tongue lesions. | Deep Upscale U-Net (DU-UNET). | 995 clinical photos. | Accuracy up to 99.97%; mean IoU: 93.10%; Dice: 87.43%. | High accuracy even on difficult images; outperforms U-Net and existing methods. | Performance drops on visually diverse domains; specific retraining needed. |
Patel et al. (2024) [16] | Multiclass classification of 16 lesions. | EfficientNet-B5 + GAIN (Guidance stream) + Anatomical Site Prediction (ASP). | 1888 clinical photos. | Balanced Accuracy: GAIN 78.7% (+7.2%); GAIN + ASP 75.5%. AUC 83.7–99.7%. | High accuracy; robustness against bias; high interpretability; generalizable. | Lower performance on rare classes; anatomical predictions sometimes increase bias (GAIN + ASP). |
Thakuria et al. (2025) [17] | Segmentation of oral lesions. | OralSegNet (EfficientNetV2L + ASPP + Residual Blocks + SE Block). | 538 clinical photos. | Test Dice: 0.8518; F1: 0.8518; ROC-AUC: 0.97. | High accuracy; robustness to light conditions/variations; advanced architecture; clinical validation. | More power required; future optimization needed. |
Alzahrani et al. (2025) [18] | Classification and segmentation of oral carcinoma lesions. | DSLVI-OCLSC (Wiener Filtering + ShuffleNetV2 + MA-CNN-BiLSTM + UNet3+ + Sine Cosine Algorithm). | 131 clinical photos. | Accuracy up to 98.47%; sensitivity: 97.73%; specificity: 97.73%; F1-score: 98.27%. | High accuracy; low computational time; advanced integration between segmentation and classification. | Limited dataset; high computational demand; vulnerable to low-quality images. |
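The headline figures in the tables above (accuracy, sensitivity, specificity, F1 for classification; Dice and IoU for segmentation) follow their standard definitions. As an illustrative sketch with invented counts, the snippet below computes them from a binary confusion matrix and from toy segmentation masks represented as sets of lesion-pixel coordinates.

```python
# Illustrative sketch: the standard metrics reported by the studies above.
# All counts and masks are invented toy values, not data from any study.

def classification_metrics(tp, fp, tn, fn):
    """Binary classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

def dice_iou(pred, truth):
    """Overlap metrics for segmentation; pred/truth are sets of lesion pixels."""
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))
    iou = inter / len(pred | truth)
    return dice, iou

acc, sens, spec, f1 = classification_metrics(tp=90, fp=5, tn=95, fn=10)
print(round(acc, 3), round(sens, 3), round(spec, 3), round(f1, 3))

pred = {(0, 0), (0, 1), (1, 1)}   # toy predicted lesion mask
truth = {(0, 1), (1, 1), (1, 0)}  # toy ground-truth mask
d, i = dice_iou(pred, truth)
print(round(d, 2), round(i, 2))
```

Note that Dice and IoU are monotonically related, which is why studies often report both without them disagreeing on rankings.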
Study | Purpose | DL Models Used | Data Type | Results | Advantages | Disadvantages |
---|---|---|---|---|---|---|
Ahmad et al. (2023) [19] | Identification of OSCC on histopathological images using a combination of DL and ML models and traditional features. | Three strategies: 1. Transfer learning (based on 5 models: Xception, InceptionV3, InceptionResNetV2, NASNetLarge, and DenseNet201); 2. Hybrid CNN + SVM (deep features extracted from each CNN and classified with SVM); 3. Fusion of CNN + traditional features (texture, shape, and local texture). | 5192 histopathological images. | The combined model obtained an accuracy of 97.00%, precision of 96.77%, sensitivity of 90.90%, specificity of 98.92%, F1-score of 93.74%, and AUC of 96.80%. | High accuracy; reduced burden on pathologists; early diagnosis; easily adaptable to images from different sources; fast turnaround time. | High computational cost; binary dataset (normal vs. OSCC); lack of multicenter external validation. |
Panigrahi et al. (2023) [20] | Automatic classification of histopathological images to detect the presence of OSCC with deep convolutional neural networks (DCNNs). | Two approaches: 1. Transfer learning on pre-trained models (ResNet50, InceptionV3, MobileNet, VGG16, and VGG19); 2. CNN baseline model (built from scratch with 10 convolutional layers). | 4000 histopathological images. | ResNet50 demonstrated higher accuracy, F-score, precision, and recall than the other models (96.6%, 0.97, 0.96, and 0.96, respectively). | High accuracy with few training epochs; superior results compared with the CNN model developed from scratch; excellent generalization without overfitting (due to normalization, dropout, and data augmentation techniques). | Complex models (such as VGG19) require long training times; histological images differ greatly from ImageNet images. |
Das et al. (2023) [21] | Automatic classification of histopathological images of OSCC. | Model developed from scratch: customized 10-layer CNN (8 convolutional layers, 6 max pooling layers, 6 batch normalization layers, 1 dropout layer, 2 fully connected layers, and an output layer) compared with other DL models. | 1224 histopathological images. | The presented model demonstrated higher accuracy, precision, recall, specificity, F1-score, and AUC than the other models, including ResNet50 (98%, 0.97, 0.98, 0.97, 0.97, and 0.97, respectively). | High accuracy; robust generalization; includes batch normalization and dropout; requires no manual feature extraction. | Dataset initially unbalanced and limited (need for augmentation); not yet validated on real-time clinical images; applicable only to binary classification (benign vs. malignant). |
Confer et al. (2024) [22] | Histopathological diagnosis without staining (label-free) using discrete IR spectroscopic imaging (DFIR). Segmentation into connective tissue, dysplastic epithelium, and non-dysplastic epithelium. | Fully convolutional network (FCN) on ResNet50 backbone. | 2561 histopathological images. | DFIR + darkfield: accuracy 94.5%; F1-score 0.823. | Rapid workflow; high precision; avoids staining; lower cost; high accuracy and F1-score; high clinical scalability. | Limited sample variability; no cancerous tissues included; multi-center validation needed. |
Ragab et al. (2024) [23] | Identification of OSCC from histopathological images. | Advanced hybrid pipeline: 1. Preprocessing (bilateral filtering, BF); 2. Feature extraction (SE-CapsNet); 3. Optimization (Improved Crayfish Optimization Algorithm, ICOA); 4. Final classification (CNN + BiLSTM). | 528 histopathological images. | Compared with other recent DL models, such as ResNet50 or VGG16, the SEHDL-OSCCR model proposed by the study performed as follows: accuracy 98.75%; precision 96.69%; recall 98.75%; F1 97.69%. | High accuracy; computational efficiency (2.70 s); advanced model combination; high generalization with minimal overfitting. | Possible scalability issues; need for more validation; sensitivity to subtle variations; clinical interpretability of complex networks (CapsNet, BiLSTM) still limited. |
Zafar et al. (2024) [24] | Automated framework for diagnosis of OSCC from histopathological images using a combination of DL models. | Combination of models: 1. Feature extractor (ResNet-101 + EfficientNet-b0 + 14 other models); 2. Feature fusion (Canonical Correlation Analysis, CCA); 3. Feature selection (Binary-Improved Harris Hawks Optimization, b-IHHO); 4. Classifier (K-Nearest Neighbors, KNN). | 4946 histopathological images. | The proposed model (b-IHHO + KNN) achieved an accuracy of 98.20%, higher than ResNet-101 and EfficientNet-b0 considered alone. | High accuracy; efficient dimensionality reduction; high generalizability. | Limited and binary dataset; non-end-to-end approach (requires multiple steps: extraction, fusion, selection, and classification); limited validation. |
Albalawi et al. (2024) [25] | Automated model for detecting OSCC on histopathological images. | EfficientNetB3. | 1224 histopathological images. | The model in the testing set demonstrated an accuracy of 98.73% with an F1-score of 0.99. | High accuracy; high performance on images with different resolutions (100× and 400×); robust generalization; potential clinical integration. | Potential biases related to dataset origin; uncertain generalization; poorly interpretable model (DL decisions are “black-box”); need for high computational resources. |
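Several of the studies above (notably Ahmad et al. [19] and Zafar et al. [24]) combine deep features from multiple backbones and hand a fused vector to a classical classifier such as SVM or KNN. The sketch below illustrates that fusion + KNN idea in miniature: the "features" are hard-coded toy numbers standing in for real CNN activations, and the tiny KNN is a deliberately simplified stand-in for a tuned implementation.

```python
# Illustrative sketch of deep-feature fusion + KNN classification.
# Feature values and labels are invented; no real CNN is involved.
from collections import Counter
import math

def fuse(feats_a, feats_b):
    """Concatenate feature vectors from two extractors (simple fusion)."""
    return feats_a + feats_b

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); majority vote over k nearest."""
    dists = sorted((math.dist(vec, query), label) for vec, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy fused features for "normal" (0) and "OSCC" (1) tiles.
train = [
    (fuse([0.1, 0.2], [0.0, 0.1]), 0),
    (fuse([0.2, 0.1], [0.1, 0.0]), 0),
    (fuse([0.9, 0.8], [0.7, 0.9]), 1),
    (fuse([0.8, 0.9], [0.9, 0.8]), 1),
]

query = fuse([0.85, 0.8], [0.8, 0.85])
print(knn_predict(train, query, k=3))  # votes among the three nearest tiles
```

The published pipelines additionally insert a feature-selection step (e.g., b-IHHO in [24]) between fusion and classification to shrink the concatenated vector, which this sketch omits.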
Study | Purpose | DL Models Used | Data Type | Results | Advantages | Disadvantages |
---|---|---|---|---|---|---|
Koriakina et al. (2024) [26] | Oral cancer detection on cytological images. | Two approaches: Single Instance Learning (SIL) and Attention-Based Multiple Instance Learning (ABMIL). | Cytological images (24 patients and 307,839 cells). | Accuracy 93.1% (SIL, ResNet18); precision SIL > ABMIL on all datasets. | SIL is simpler and more accurate than ABMIL; it is highly accurate, interpretable, robust, and less prone to overfitting on irrelevant features. | ABMIL is complex, memory-heavy, less effective on small datasets and sparse key instances, and assumes that all patient cells share a label (this introduces noise). |
Sukegawa et al. (2024) [27] | Classification of precancerous lesions, OSCC, and glossitis. | ResNet50 pretrained on ImageNet. Six variants of the DL model were built (pathologist A, B, C, ground truth (GT), majority voting, and probabilistic model). | 14,535 images. | The probabilistic model achieved the best AUC (0.91). | High accuracy; use of diversified labels; robust to single-annotator bias; generalizable. | Costly labeling; variability among pathologists; retrospective and single-center study; performance not statistically comparable across datasets. |
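The SIL-versus-ABMIL comparison in Koriakina et al. [26] comes down to how per-cell predictions are aggregated into one patient-level ("bag") score. A minimal sketch of the two aggregation ideas follows; the per-cell probabilities and attention scores are hard-coded stand-ins for values a trained network would produce.

```python
# Illustrative sketch: bag-level aggregation in multiple instance learning.
# Probabilities and attention scores below are invented toy values.
import math

def max_pool_bag(instance_probs):
    """SIL-style bag score: the most suspicious instance decides."""
    return max(instance_probs)

def attention_pool_bag(instance_probs, attention_scores):
    """ABMIL-style bag score: softmax(attention)-weighted mean of instances."""
    exps = [math.exp(a) for a in attention_scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return sum(w * p for w, p in zip(weights, instance_probs))

# Toy bag: per-cell malignancy probabilities for one patient.
cell_probs = [0.05, 0.10, 0.95, 0.08]
attn = [0.1, 0.1, 3.0, 0.1]  # attention concentrates on the third cell

print(max_pool_bag(cell_probs))
print(round(attention_pool_bag(cell_probs, attn), 3))
```

Both aggregators flag this bag as suspicious, but the attention variant also yields per-instance weights, which is the interpretability argument usually made for ABMIL; in [26] the simpler SIL route nonetheless proved more accurate on their data.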
Study | Purpose | DL Models Used | Data Type | Results | Advantages | Disadvantages |
---|---|---|---|---|---|---|
Yuan et al. (2023) [28] | Diagnosis of OSCC with the MDRL system compared with a radiologist and other CNN models. | Multi-level residual deep learning network (MDRL). | 468 OCT scans. | Sensitivity of 91.2%, specificity of 83.6%, accuracy of 87.5%, PPV of 85.3%, and NPV of 90.2% at the image level, with an AUC of 0.92, compared to the radiologist (100%, 69.3%, 86.2%, 80%, and 100%). | Non-invasive; integrates multi-level features (improving diagnostic accuracy). | Needs annotated data (semi-supervised or unsupervised learning would be preferable); model is based on 2D slicing of 3D images. |
Chen et al. (2023) [29] | DL and radiomics model to identify cervical lymph node metastasis in patients with OSCC through analysis of CT scans. | 3D DenseNet modified for volumetric analysis of lymph nodes on CECT images (adjunctive discriminative filter learning module, DFL). | CECT scans of 100 patients with 217 metastatic and 1973 nonmetastatic cervical lymph nodes. | The DL + radiomics model, compared with two expert clinicians, demonstrated higher sensitivity (92% vs. 72% and 60%) but lower specificity (88% vs. 97% and 99%). | Higher sensitivity than clinicians; complementary integration of radiomic features and DL; good generalizability. | Lower specificity than clinicians; single-center and retrospective study; model is still suboptimal for small lymph nodes or those with inconspicuous features. |
Yang et al. (2025) [30] | Diagnosis of cervical lymph node metastasis in patients with cN0 OSCC. | Three-stage model (image-based, sequence-based, and patient-based stages); modified AlexNet + RF classifier. | 45,664 MRI scans (723 patients). | The model showed better performance, with an AUC of 0.70 and an accuracy of 73.53%, compared to radiologists (accuracy of 61.76%). Model performance included AUC: 0.97 (train), 0.81 (external test); ACC: 93.3%; specificity: 98.6%; sensitivity: 82.5%; and F1 score: 0.88 (train). | Reduction in occult metastases; superior performance to radiologists; multicenter validation. | Complex model; requires high-quality MRI; retrospective study; only imaging as input. |
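Most of the imaging studies above summarize discrimination with an AUC. For readers unfamiliar with how that single number arises from raw model scores, the sketch below computes ROC AUC via its Mann-Whitney formulation: the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties counted as half). The scores and labels are invented toy data.

```python
# Illustrative sketch: ROC AUC from raw scores (Mann-Whitney formulation).
# Scores and labels below are invented, not taken from any cited study.

def roc_auc(scores, labels):
    """labels: 1 = positive (e.g., metastatic node), 0 = negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Fraction of positive-negative pairs correctly ordered by the model.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]  # toy model outputs
labels = [1,   1,   0,   1,   0,   0]    # toy ground truth
print(roc_auc(scores, labels))
```

Because AUC depends only on the ranking of scores, it is threshold-free, which is why studies report it alongside threshold-dependent metrics such as sensitivity and specificity.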
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Michelutti, L.; Tel, A.; Robiony, M.; Marini, L.; Tognetto, D.; Agosti, E.; Ius, T.; Gagliano, C.; Zeppieri, M. Updates, Applications and Future Directions of Deep Learning for the Images Processing in the Field of Cranio-Maxillo-Facial Surgery. Bioengineering 2025, 12, 585. https://doi.org/10.3390/bioengineering12060585