Deep Learning Applications for Dental-Disease Classification Using Intraoral Photographic Images: Current Status and Future Perspectives
Abstract
1. Introduction
- Which deep learning architectures and learning paradigms (e.g., CNNs, encoder–decoder networks, vision transformers, self-supervised learning, and federated learning) demonstrate robust performance for dental disease detection and segmentation using intraoral photographic images?
- How do preprocessing, data augmentation, and image enhancement strategies influence diagnostic accuracy, robustness, and generalizability across primary dental conditions?
- What methodological and translational challenges, including class imbalance, single-center dataset bias, domain shift, and limited explainability, most strongly constrain clinical adoption?
- What emerging research directions and implementation strategies are most promising for improving fairness, interpretability, privacy preservation, and real-world deployment of AI-assisted intraoral photographic diagnostics?
- (i) task- and disease-level framing (Section 4);
- (ii) synthesis of deep learning architectures (Section 5);
- (iii) review of preprocessing and augmentation strategies (Section 6);
- (iv) task-wise synthesis of diagnostic performance (Section 7);
- (v) evaluation metrics and generalizability considerations (Section 8);
- (vi) analysis of translational and clinical challenges (Section 9);
- (vii) identification of emerging research directions (Section 10).
2. Methodology
2.1. Search Strategy
2.2. Inclusion and Exclusion Criteria
2.3. Study Selection
2.4. Data Extraction and Synthesis
- Study information: author, publication year, and journal.
- Dataset characteristics: type of intraoral images, dataset source, and preprocessing methods.
- DL model details: architecture (e.g., CNN, ResNet, ViT), transfer-learning methods, and training/validation approaches.
- Performance metrics: accuracy, sensitivity, specificity, F1-score, and area under the receiver operating characteristic (ROC) curve (AUC).
- External validation: presence or absence of independent testing.
- Key findings and limitations.
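The classification metrics extracted during synthesis can all be derived from a confusion matrix and a set of predicted scores. The following is a minimal illustrative sketch in plain Python (the function name and inputs are our own, not drawn from any reviewed study); AUC is computed via its rank-statistic interpretation rather than trapezoidal integration of the ROC curve:

```python
def classification_metrics(y_true, y_pred, y_score):
    """Metrics extracted during synthesis, for binary labels.

    y_true/y_pred: 0/1 lists; y_score: predicted probabilities for class 1.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    # ROC AUC via the Mann-Whitney U statistic: the probability that a
    # randomly chosen positive case scores higher than a random negative.
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg)) if pos and neg else float("nan")
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f1": f1, "auc": auc}
```

Reporting this full set, rather than accuracy alone, is what quality criterion Q4 below assesses.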
2.5. Quality and Risk of Bias Assessment Criteria
- Q1. Is the dataset source of the dental photographic image dataset clearly described (e.g., public/private, single- or multi-center)?
- Q2. Is an appropriate strategy reported for splitting the dataset into training, validation, and test sets, with explicit measures to prevent data leakage (e.g., patient-level separation or avoidance of duplicate images across splits)?
- Q3. Was the developed model evaluated using external validation or an independent test dataset to assess its generalizability to unseen data?
- Q4. Are clinically meaningful performance metrics reported beyond overall accuracy, such as sensitivity, specificity, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC)?
- Q5. Are ground-truth labels derived from reliable clinical reference standards, such as expert dental annotation, consensus labeling, or validated diagnostic criteria?
- Q6. Does the study explicitly discuss key methodological limitations, potential sources of bias, and constraints affecting the interpretation or generalizability of the results?
- Q7. Is image acquisition standardized or quality-controlled, including reporting of camera type, lighting conditions, acquisition protocols, or exclusion of poor-quality images?
Risk of Bias (ROB) Scoring Criteria and Thresholds
A study was classified as having a Low Risk of Bias (High Quality) if 70% or more of the evaluated domains were rated as low risk, with no more than one high-risk rating in a specified critical domain. Conversely, a study was deemed to have a Moderate Risk of Bias (Lower Quality) if fewer than 70% of the domains were rated as low risk, or if there were high-risk ratings in two or more critical domains. A High Risk of Bias classification was assigned when 50% or more of the domains were rated as high risk, resulting in the study's exclusion from the qualitative synthesis. The use of explicit criteria and quantitative thresholds was intended to minimize subjective judgment and improve reproducibility.
3. Results
3.1. Summary of Included Studies
3.2. Distribution of Study Characteristics and Evidence
3.3. Quality and Risk of Bias Assessment
4. Discussion
4.1. Detectable Dental Diseases in Intraoral Photographs
4.1.1. Dental Caries
4.1.2. Gingivitis and Periodontal Diseases
4.1.3. Dental Plaque
4.1.4. Orthodontic Conditions
4.1.5. Soft-Tissue Lesions and Potentially Malignant Disorders
5. Deep Learning Architectures and Applications for Intraoral Photographic Image Analysis
5.1. Convolutional Neural Networks (CNNs)
- ResNet employs skip connections to mitigate vanishing gradients, enabling deeper and more discriminative models [22].
- DenseNet connects each layer to all subsequent layers, promoting feature reuse and improving gradient flow; it is particularly advantageous for small or imbalanced dental datasets.
- EfficientNet uses a compound scaling strategy that jointly optimizes network width, depth, and resolution, achieving high accuracy with reduced computational cost, making it suitable for smartphone-based inference.
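EfficientNet's compound scaling strategy can be illustrated numerically. The sketch below uses the base coefficients reported in the original EfficientNet work (α = 1.2, β = 1.1, γ = 1.15); it is an arithmetic illustration of the scaling rule, not an implementation of the network, and the derived resolutions only approximate those of the published B1–B3 variants:

```python
# EfficientNet-style compound scaling: depth, width, and input resolution
# grow jointly with a single compound coefficient phi, under the constraint
# alpha * beta**2 * gamma**2 ~= 2, so FLOPs roughly double per increment of phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # base coefficients from the EfficientNet paper

def compound_scale(phi, base_resolution=224):
    """Return (depth multiplier, width multiplier, input resolution) for phi."""
    depth = ALPHA ** phi
    width = BETA ** phi
    resolution = round(base_resolution * GAMMA ** phi)
    return depth, width, resolution

for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, input {r}px")
```

Jointly scaling all three dimensions, rather than depth alone, is what lets the family trade accuracy against the computational budget of a smartphone-class device.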
5.2. Encoder–Decoder Networks for Segmentation (U-Net and Derivatives)
5.3. Vision Transformers
5.4. Self-Supervised Learning
5.5. Federated Learning
5.6. Generative Adversarial Networks
6. Image Preprocessing and Augmentation
6.1. Color Constancy and Normalization
6.2. Contrast Enhancement
6.3. Region-of-Interest Extraction
6.4. Image Augmentation
6.5. Standardized Photography Protocols
7. Evaluation Metrics
7.1. Classification Metrics
7.1.1. Accuracy
7.1.2. Sensitivity and Specificity
7.1.3. Precision, Recall, and F1-Score
7.1.4. Area Under the Receiver Operating Characteristic Curve
7.2. Segmentation Metrics
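Several of the reviewed segmentation studies report Dice scores (e.g., [32,33]). A minimal sketch of the two most common overlap metrics, computed on flat binary masks in plain Python (the function name is illustrative):

```python
def dice_and_iou(pred_mask, true_mask):
    """Dice coefficient and IoU (Jaccard index) for flat 0/1 binary masks."""
    assert len(pred_mask) == len(true_mask)
    intersection = sum(p & t for p, t in zip(pred_mask, true_mask))
    pred_sum, true_sum = sum(pred_mask), sum(true_mask)
    union = pred_sum + true_sum - intersection
    # Both metrics are conventionally defined as 1.0 when both masks are empty.
    dice = 2 * intersection / (pred_sum + true_sum) if pred_sum + true_sum else 1.0
    iou = intersection / union if union else 1.0
    return dice, iou
```

Since Dice = 2·IoU / (1 + IoU), the two metrics rank models identically, but Dice yields numerically higher values for partial overlap, which is worth remembering when comparing scores across studies that report only one of the two.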
7.3. Calibration, Robustness, and Reliability Assessments
7.4. External Validation and Cross-Domain Generalization
7.5. Human–Artificial Intelligence Comparison and Clinical Utility
7.6. Interpretation of Evaluation Metrics and Acceptable Performance Ranges
8. Challenges and Limitations
8.1. Dataset Limitations and Class Imbalance
8.2. Domain Shift and Variability in Image Acquisition
8.3. Lack of Explainability and Clinical Transparency
8.4. Limited External and Prospective Validation
8.5. Privacy, Ethics, and Regulatory Constraints
8.6. Integration into Clinical Workflow
9. Future Research Directions
9.1. Development of Large, Multi-Center, Standardized Datasets
9.2. Advanced Learning Paradigms: Self-Supervised Learning, FL, and Multi-Task Models
9.3. Improved Explainability and Clinical Interpretability
9.4. Robustness, Calibration, and Continual Learning
9.5. Integration with Mobile and Tele-Dentistry Platforms
9.6. Regulatory Approval Pathways and Ethical Frameworks
10. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| IOPI | Intraoral Photographic Image |
| DL | Deep Learning |
| CNN | Convolutional Neural Network |
| DSLR | Digital Single-Lens Reflex |
| HIPAA | Health Insurance Portability and Accountability Act |
References
- Tonetti, M.S.; Jepsen, S.; Jin, L.; Otomo-Corgel, J. Impact of the global burden of periodontal diseases on health, nutrition and wellbeing of mankind: A call for global action. J. Clin. Periodontol. 2017, 44, 456–462. [Google Scholar] [CrossRef] [PubMed]
- Lang, N.P.; Bartold, P.M. Periodontal health. J. Periodontol. 2018, 89, S9–S16. [Google Scholar] [CrossRef] [PubMed]
- Pretty, I.A.; Ekstrand, K.R. Detection and monitoring of early caries lesions: A review. Eur. Arch. Paediatr. Dent. 2015, 17, 13–25. [Google Scholar] [CrossRef] [PubMed]
- Ismail, A.I.; Sohn, W.; Tellez, M.; Amaya, A.; Sen, A.; Hasson, H.; Pitts, N.B. The International Caries Detection and Assessment System (ICDAS): An integrated system for measuring dental caries. Community Dent. Oral Epidemiol. 2007, 35, 170–178. [Google Scholar] [CrossRef]
- Jeong, H.K.; Park, C.; Henao, R.; Kheterpal, M. Deep Learning in Dermatology: A Systematic Review of Current Approaches, Outcomes, and Limitations. JID Innov. 2022, 3, 100150. [Google Scholar] [CrossRef]
- Rajpurkar, P.; Irvin, J.; Ball, R.L.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.P.; et al. Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med. 2018, 15, e1002686. [Google Scholar] [CrossRef]
- Ruamviboonsuk, P.; Cheung, C.Y.; Zhang, X.; Raman, R.; Park, S.J.; Ting, D.S.W. Artificial Intelligence in Ophthalmology: Evolutions in Asia. Asia-Pac. J. Ophthalmol. 2020, 9, 78–84. [Google Scholar] [CrossRef]
- Estai, M.; Bunt, S.; Kanagasingam, Y.; Kruger, E.; Tennant, M. Diagnostic accuracy of teledentistry in the detection of dental caries: A systematic review. J. Evid. Based Dent. Pract. 2016, 16, 161–172. [Google Scholar] [CrossRef]
- Estai, M.; Kanagasingam, Y.; Mehdizadeh, M.; Vignarajan, J.; Norman, R.; Huang, B.; Spallek, H.; Irving, M.; Arora, A.; Kruger, E.; et al. Teledentistry as a novel pathway to improve dental health in school children: A research protocol for a randomised controlled trial. BMC Oral Health 2020, 20, 11. [Google Scholar] [CrossRef]
- Schwendicke, F.; Samek, W.; Krois, J. Artificial Intelligence in Dentistry: Chances and Challenges. J. Dent. Res. 2020, 99, 769–774. [Google Scholar] [CrossRef]
- Kumar, P.D.M.; Sivakumar, S.; Rajeshwari, S.; Lavanya, C.; Ranganathan, K. Diagnostic efficiency of digital photography and AI-assisted image interpretation in dental caries examination: An umbrella review. J. Oral Biol. Craniofacial Res. 2026, 16, 1–7. [Google Scholar] [CrossRef] [PubMed]
- Noor Uddin, A.; Ali, S.A.; Lal, A.; Adnan, N.; Ahmed, S.M.F.; Umer, F. Applications of AI-based deep learning models for detecting dental caries on intraoral images—A systematic review. Evid.-Based Dent. 2025, 26, 71–72. [Google Scholar] [CrossRef] [PubMed]
- Yu, A.C.; Mohajer, B.; Eng, J. External Validation of Deep Learning Algorithms for Radiologic Diagnosis: A Systematic Review. Radiol. Artif. Intell. 2022, 4, e210064. [Google Scholar] [CrossRef]
- Eke, C.I.; Shuib, L. The role of explainability and transparency in fostering trust in AI healthcare systems: A systematic literature review, open issues and potential solutions. Neural Comput. Appl. 2024, 37, 1999–2034. [Google Scholar] [CrossRef]
- Du, G.; Cao, X.; Liang, J.; Chen, X.; Zhan, Y. Medical Image Segmentation based on U-Net: A Review. J. Imaging Sci. Technol. 2020, 64, jist0710. [Google Scholar] [CrossRef]
- Khan, S.; Naseer, M.; Hayat, M.; Waqas, Z.; Shahbaz, K.; Shah, M. Transformers in Vision: A Survey. ACM Comput. Surv. (CSUR) 2022, 54, 200. [Google Scholar] [CrossRef]
- Singh, N.K.; Raza, K. Medical Image Generation Using Generative Adversarial Networks: A Review. In Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar] [CrossRef]
- Johnson, J.W. Generative adversarial networks in medical imaging. In State of the Art in Neural Networks and Their Applications; Elsevier: Amsterdam, The Netherlands, 2021. [Google Scholar] [CrossRef]
- Qayyum, A.; Tahir, A.; Butt, M.A.; Luke, A.; Abbas, H.T.; Qadir, J.; Arshad, K.; Assaleh, K.; Imran, M.A.; Abbasi, Q.H.; et al. Dental caries detection using a semi-supervised learning approach. Sci. Rep. 2023, 13, 749. [Google Scholar] [CrossRef]
- Kaissis, G.A.; Makowski, M.R.; Rückert, D.; Braren, R.F. Secure, privacy-preserving and federated machine learning in medical imaging. Nat. Mach. Intell. 2020, 2, 305–311. [Google Scholar] [CrossRef]
- Rischke, R.; Schneider, L.; Müller, K.; Samek, W.; Schwendicke, F.; Krois, J. Federated Learning in Dentistry: Chances and Challenges. J. Dent. Res. 2022, 101, 1269–1273. [Google Scholar] [CrossRef]
- Park, E.Y.; Cho, H.; Kang, S.; Jeong, S.; Kim, E.K. Caries detection with tooth surface segmentation on intraoral photographic images using deep learning. BMC Oral Health 2022, 22, 573. [Google Scholar] [CrossRef]
- Mehta, L.R.; Borse, M.S.; Tepan, M.; Shah, J. Identifying Suitable Deep Learning Approaches for Dental Caries Detection Using Smartphone Imaging. Int. J. Comput. Methods Exp. Meas. 2024, 12, 251–267. [Google Scholar] [CrossRef]
- Sabri, R.K.; Abdulkadir, L.Y.; Khidhir, A.M.; Saleh, H.A. Diagnosing Gingiva Disease Using Artificial Intelligence Techniques. Diyala J. Eng. Sci. 2025, 18, 179–190. [Google Scholar] [CrossRef]
- Park, S.; Erkinov, H.; Hasan, M.A.M.; Nam, S.-H.; Kim, Y.-R.; Shin, J.; Chang, W.-D. Periodontal Disease Classification with Color Teeth Images Using Convolutional Neural Networks. Electronics 2023, 12, 1518. [Google Scholar] [CrossRef]
- Wen, C.; Bai, X.; Yang, J.; Li, S.; Wang, X.; Yang, D. Deep learning based approach: Automated gingival inflammation grading model using gingival removal strategy. Sci. Rep. 2024, 14, 19780. [Google Scholar] [CrossRef]
- Garg, A.; Lu, J.; Maji, A. Towards Earlier Detection of Oral Diseases On Smartphones Using Oral and Dental RGB Images. arXiv 2023, arXiv:2308.15705. [Google Scholar] [CrossRef]
- Nantakeeratipat, T.; Apisaksirikul, N.; Boonrojsaree, B.; Boonkijkullatat, S.; Simaphichet, A. Automated machine learning for image-based detection of dental plaque on permanent teeth. Front. Dent. Med. 2024, 5, 1507705. [Google Scholar] [CrossRef]
- Zhang, R.; Zhang, L.; Zhang, D.; Wang, Y.; Huang, Y.; Wang, D.; Xu, L. Development and evaluation of a deep learning model for occlusion classification in intraoral photographs. PeerJ 2025, 13, e20140. [Google Scholar] [CrossRef]
- Ryu, J.; Lee, Y.-S.; Mo, S.-P.; Lim, K.; Jung, S.-K.; Kim, T.-W. Application of deep learning artificial intelligence technique to the classification of clinical orthodontic photos. BMC Oral Health 2022, 22, 454. [Google Scholar] [CrossRef]
- Su, A.-Y.; Wu, M.-L.; Wu, Y.-H. Deep learning system for the differential diagnosis of oral mucosal lesions through clinical photographic imaging. J. Dent. Sci. 2025, 20, 54–60. [Google Scholar] [CrossRef]
- Zhang, R.; Lu, M.; Zhang, J.; Chen, X.; Zhu, F.; Tian, X.; Chen, Y.; Cao, Y. Research and Application of Deep Learning Models with Multi-Scale Feature Fusion for Lesion Segmentation in Oral Mucosal Diseases. Bioengineering 2024, 11, 1107. [Google Scholar] [CrossRef]
- Tanriver, G.; Tekkesin, M.S.; Ergen, O. Automated Detection and Classification of Oral Lesions Using Deep Learning to Detect Oral Potentially Malignant Disorders. Cancers 2021, 13, 2766. [Google Scholar] [CrossRef]
- Vinayahalingam, S.; van Nistelrooij, N.; Rothweiler, R.; Tel, A.; Verhoeven, T.; Tröltzsch, D.; Kesting, M.; Bergé, S.; Xi, T.; Heiland, M.; et al. Advancements in diagnosing oral potentially malignant disorders: Leveraging Vision transformers for multi-class detection. Clin. Oral Investig. 2024, 28, 364. [Google Scholar] [CrossRef] [PubMed]
- Warin, K.; Limprasert, W.; Suebnukarn, S.; Jinaporntham, S.; Jantana, P. Performance of deep convolutional neural network for classification and detection of oral potentially malignant disorders in photographic images. Int. J. Oral Maxillofac. Surg. 2022, 51, 699–704. [Google Scholar] [CrossRef] [PubMed]
- Talwar, V.; Singh, P.; Mukhia, N.; Shetty, A.; Birur, P.; Desai, K.M.; Sunkavalli, C.; Varma, K.S.; Sethuraman, R.; Jawahar, C.V.; et al. AI-Assisted Screening of Oral Potentially Malignant Disorders Using Smartphone-Based Photographic Images. Cancers 2023, 15, 4120. [Google Scholar] [CrossRef] [PubMed]
- Rashid, U.; Javid, A.; Khan, A.R.; Liu, L.; Ahmed, A.; Khalid, O.; Saleem, K.; Meraj, S.; Iqbal, U.; Nawaz, R. A hybrid mask RCNN-based tool to localize dental cavities from real-time mixed photographic images. PeerJ Comput. Sci. 2022, 8, e888. [Google Scholar] [CrossRef]
- Ali, D.A.; Sadeeq, H.T. An Interpretable Deep Learning Framework for Multi-Class Dental Disease Classification from Intraoral RGB Images. Stat. Optim. Inf. Comput. 2025, 14, 3380–3397. [Google Scholar] [CrossRef]
- Boy, A.F.; Akhyar, A.; Arif, T.Y.; Syahrial, S. Development of an artificial intelligence model based on MobileNetV3 for early detection of dental caries using smartphone images: A preliminary study. Adv. Sci. Technol. Res. J. 2025, 19, 109–116. [Google Scholar] [CrossRef]
- Li, S.; Guo, Y.; Pang, Z.; Song, W.; Hao, A.; Xia, B. Automatic Dental Plaque Segmentation Based on Local-to-Global Features Fused Self-Attention Network. IEEE J. Biomed. Health Inform. 2022, 26, 2240–2251. [Google Scholar] [CrossRef]
- Patel, A.; Besombes, C.; Dillibabu, T.; Sharma, M.; Tamimi, F.; Ducret, M.; Madathil, S. Attention-guided convolutional network for bias-mitigated and interpretable oral lesion classification. Sci. Rep. 2024, 14, 31700. [Google Scholar] [CrossRef]
- Ryu, J.; Kim, Y.-H.; Kim, T.-W.; Jung, S.-K. Evaluation of artificial intelligence model for crowding categorization and extraction diagnosis using intraoral photographs. Sci. Rep. 2023, 13, 5177. [Google Scholar] [CrossRef]
- Liu, Y.; Cheng, Y.; Song, Y.; Cai, D.; Zhang, N. Oral screening of dental calculus, gingivitis and dental caries through segmentation on intraoral photographic images using deep learning. BMC Oral Health 2024, 24, 1287. [Google Scholar] [CrossRef]
- Jeong, J.-S.; Kim, K.-S.; Gu, Y.; Yoon, D.-H.; Zhang, M.; Wang, L.; Kim, J.-H. Deep learning for automated dental plaque index assessment: Validation against expert evaluations. BMC Oral Health 2025, 25, 1000. [Google Scholar] [CrossRef]
- Li, W.; Liang, Y.; Zhang, X.; Liu, C.; He, L.; Miao, L.; Sun, W. A deep learning approach to automatic gingivitis screening based on classification and localization in RGB photos. Sci. Rep. 2021, 11, 16831. [Google Scholar] [CrossRef]
- Neumayr, J.; Frenkel, E.; Schwarzmaier, J.; Ammar, N.; Kessler, A.; Schwendicke, F.; Kühnisch, J.; Dujic, H. External validation of an artificial intelligence-based method for the detection and classification of molar incisor hypomineralisation in dental photographs. J. Dent. 2024, 148, 105228. [Google Scholar] [CrossRef]
- Duong, D.L.; Kabir, M.H.; Kuo, R.F. Automated caries detection with smartphone color photography using machine learning. Health Inform. J. 2021, 27, 14604582211007530. [Google Scholar] [CrossRef] [PubMed]
- Rao, G.K.L.; Srinivasa, A.C.; Iskandar, Y.H.P.; Mokhtar, N. Identification and analysis of photometric points on 2D facial images: A machine learning approach in orthodontics. Health Technol. 2019, 9, 715–724. [Google Scholar] [CrossRef]
- Abdulwahhab, A.H.; Mahmood, N.T.; Mohammed, A.A.; Myderrizi, I.; Al-Jumaili, M.H. A Review on Medical Image Applications Based on Deep Learning Techniques. J. Image Graph. 2024, 12, 215–227. [Google Scholar] [CrossRef]
- Mienye, I.D.; Swart, T.G.; Obaido, G.; Jordan, M.; Ilono, P. Deep Convolutional Neural Networks in Medical Image Analysis: A Review. Information 2025, 16, 195. [Google Scholar] [CrossRef]
- Dai, L.; Zhou, M.; Liu, H. Recent Applications of Convolutional Neural Networks in Medical Data Analysis: Medicine & Healthcare Book Chapter. In Federated Learning and AI for Healthcare; IGI Global Scientific Publishing: Hershey, PA, USA, 2024. [Google Scholar]
- Kühnisch, J.; Meyer, O.; Hesenius, M.; Hickel, R.; Gruhn, V. Caries Detection on Intraoral Images Using Artificial Intelligence. J. Dent. Res. 2022, 101, 158–165. [Google Scholar] [CrossRef]
- Srinivasan, S.; Durairaju, K.; Deeba, K.; Mathivanan, S.K.; Karthikeyan, P.; Shah, M.A. Multimodal Biomedical Image Segmentation using Multi-Dimensional U-Convolutional Neural Network. BMC Med. Imaging 2024, 24, 38. [Google Scholar] [CrossRef]
- Zhou, Z.; Zhu, J.; Zhang, Y.; Guan, X.; Wang, P.; Li, T. Deep Learning in Dental Image Analysis: A Systematic Review of Datasets, Methodologies, and Emerging Challenges. arXiv 2025, arXiv:2510.20634. [Google Scholar] [CrossRef]
- He, K.; Gan, C.; Li, Z.; Rekik, I.; Yin, Z.; Ji, W.; Gao, Y.; Wang, Q.; Zhang, J.; Shen, D. Transformers in medical image analysis. Intell. Med. 2023, 3, 59–78. [Google Scholar] [CrossRef]
- Tran, Q.V.; Byeon, H. The Promise of Self-Supervised Learning for Dental Caries. Int. J. Adv. Comput. Sci. Appl. 2023, 14, 57–61. [Google Scholar] [CrossRef]
- Taleb, A.; Rohrer, C.; Bergner, B.; Leon, G.D.; Rodrigues, J.A.; Schwendicke, F.; Lippert, C.; Krois, J. Self-Supervised Learning Methods for Label-Efficient Dental Caries Classification. Diagnostics 2022, 12, 1237. [Google Scholar] [CrossRef] [PubMed]
- Badano, A.; Revie, C.; Casertano, A.; Cheng, W.-C.; Green, P.; Kimpe, T.; Krupinski, E.; Sisson, C.; Skrøvseth, S.; Treanor, D.; et al. Consistency and Standardization of Color in Medical Imaging: A Consensus Report. J. Digit. Imaging 2014, 28, 41–52. [Google Scholar] [CrossRef]
- Yoshimi, Y.; Mine, Y.; Ito, S.; Takeda, S.; Okazaki, S.; Nakamoto, T.; Nagasaki, T.; Kakimoto, N.; Murayama, T.; Tanimoto, K. Image preprocessing with contrast-limited adaptive histogram equalization improves the segmentation performance of deep learning for the articular disk of the temporomandibular joint on magnetic resonance images. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2024, 138, 128–141. [Google Scholar] [CrossRef]
- Xu, M.; Yoon, S.; Fuentes, A.; Park, D.S. A Comprehensive Survey of Image Augmentation Techniques for Deep Learning. Pattern Recognit. 2023, 137, 109347. [Google Scholar] [CrossRef]
- Saincher, R.; Kumar, S.; Gopalkrishna, P.; Maithri, M. Comparison of color accuracy and picture quality of digital SLR, point and shoot and mobile cameras used for dental intraoral photography—A pilot study. Heliyon 2022, 8, e09262. [Google Scholar] [CrossRef]
- Lamas-Lara, V.F.; Mattos-Vela, M.A.; Evaristo-Chiyong, T.A.; Guerrero, M.E.; Jiménez-Yano, J.F.; Gómez-Meza, D.N. Validity and reliability of a smartphone-based photographic method for detection of dental caries in adults for use in teledentistry. Front. Oral Health 2025, 6, 1470706. [Google Scholar] [CrossRef]
- Li, X.; Zhao, D.; Xie, J.; Wen, H.; Liu, C.; Li, Y.; Li, W.; Wang, S. Deep learning for classifying the stages of periodontitis on dental images: A systematic review and meta-analysis. BMC Oral Health 2023, 23, 1017. [Google Scholar] [CrossRef]
- Kocak, B.; Klontzas, M.E.; Stanzione, A.; Meddeb, A.; Demircioğlu, A.; Bluethgen, C.; Bressem, K.K.; Ugga, L.; Mercaldo, N.; Díaz, O.; et al. Evaluation metrics in medical imaging AI: Fundamentals, pitfalls, misapplications, and recommendations. Eur. J. Radiol. Artif. Intell. 2025, 3, 100030. [Google Scholar] [CrossRef]
- Adeniran, A.A.; Onebunne, A.P.; William, P. Explainable AI (XAI) in healthcare: Enhancing trust and transparency in critical decision-making. World J. Adv. Res. Rev. 2024, 23, 2647–2658. [Google Scholar] [CrossRef]
- Krois, J.; Garcia Cantu, A.; Chaurasia, A.; Patil, R.; Chaudhari, P.K.; Gaudin, R.; Gehrung, S.; Schwendicke, F. Generalizability of deep learning models for dental image analysis. Sci. Rep. 2021, 11, 6102. [Google Scholar] [CrossRef] [PubMed]
- Guan, H.; Liu, M. Domain Adaptation for Medical Image Analysis: A Survey. IEEE Trans. Biomed. Eng. 2022, 69, 1173–1185. [Google Scholar] [CrossRef] [PubMed]
- Das, A.; Rad, P. Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. arXiv 2020, arXiv:2006.11371. [Google Scholar] [CrossRef]
- Liu, T.-Y.; Lee, K.-H.; Mukundan, A.; Karmakar, R.; Dhiman, H.; Wang, H.-C. AI in Dentistry: Innovations, Ethical Considerations, and Integration Barriers. Bioengineering 2025, 12, 928. [Google Scholar] [CrossRef]
- Rajkumar, N.M.R.; Muzoora, M.R.; Thun, S. Dentistry and Interoperability. J. Dent. Res. 2022, 101, 1258–1262. [Google Scholar] [CrossRef]
- Haripriya, R.; Khare, N.; Pandey, M. Privacy-preserving federated learning for collaborative medical data mining in multi-institutional settings. Sci. Rep. 2025, 15, 12482. [Google Scholar] [CrossRef]
- Kumari, P.; Chauhan, J.; Bozorgpour, A.; Huang, B.; Azad, R.; Merhof, D. Continual learning in medical image analysis: A comprehensive review of recent advancements and future prospects. Med. Image Anal. 2025, 106, 103730. [Google Scholar] [CrossRef]
- Al-Zubaidy, D.; Innes, N.; Galloway, J.; Al-Yaseen, W. Evaluating user perceptions and usability of an AI-powered smartphone application for at-home dental plaque screening. Br. Dent. J. 2025, 239, 46–52. [Google Scholar] [CrossRef]
- Rokhshad, R.; Ducret, M.; Chaurasia, A.; Karteva, T.; Radenkovic, M.; Roganovic, J.; Hamdan, M.; Mohammad-Rahimi, H.; Krois, J.; Lahoud, P.; et al. Ethical considerations on artificial intelligence in dentistry: A framework and checklist. J. Dent. 2023, 135, 104593. [Google Scholar] [CrossRef]






| Author (Year) | Dental Study | Dataset Source & Size | Imaging Modality | Model Used | Outcomes | Limitations |
|---|---|---|---|---|---|---|
| Park et al., 2022 [22] | Caries Detection | KNUDH, 2348 images | Intraoral camera | ResNet18, Faster R-CNN | Accuracy: 0.813 AUC: 0.837 Sensitivity: 0.890 | Limited internal visibility and occult lesions |
| Mehta et al., 2024 [23] | Dental Caries | Bharati Vidyapeeth’s Dental College, Pune, 1164 images | Intraoral digital RGB images | DenseNet201 | Accuracy: 0.93 | Dataset scarcity, generalizability, and risk of overfitting. |
| Sabri et al., 2025 [24] | Gingival diseases | Multihospital Karnataka, 2270 images | X-ray and intraoral images | MobileNet | Accuracy: 92.7% | Data scarcity, poor interpretability, and clinical limits |
| Park et al., 2023 [25] | Periodontal diseases | GitHub public, 220 images | Optical camera images | YOLOv5s | F1 score: 99.9% | Dataset expansion, synthetic bias, and low applicability |
| Wen et al., 2024 [26] | Gingival inflammation grading | School and Hospital of Stomatology, Wuhan, 8214 images | Digital camera | U-Net with DenseNet encoder | Accuracy: 79.22% AUC: 0.837 Sensitivity: 83.75% Specificity: 69.33% Precision: 0.867 | Limited dataset, regional gap, and image bias |
| Garg et al., 2023 [27] | Dental Calculus | Public Dataset, 220 images | RGB Intraoral images | ResNet34 | Accuracy: 81.82% Recall: 75.00% F1-score: 81.82% Precision: 90.00% | Data demand, training cost, and manual processing |
| Nantakeeratipat et al., 2024 [28] | Dental plaque | Srinakharinwirot University, Bangkok, 299 images | Smartphone camera images | Google Cloud’s Vertex AI AutoML | Precision: 0.964 F1-score: 0.931 AUPRC: 0.964 | Data limitation, weak generalization, and manual cropping risk |
| Zhang et al., 2025 [29] | Dental occlusion classification | Private dataset, 7200 images | Digital camera IOPIs | Swin Transformer | F1-score: 0.90 (Molar Occlusal) F1-score: 0.87 (Canine Occlusal) | Quality flaw, Source dependent, and validation gap |
| Ryu et al., 2022 [30] | Orthodontic photo classification | Seoul National University Dental Hospital, 4448 images | IOPIs | Multi-domain CNN | Accuracy: 99.3% (Facial) Accuracy: 99.9% (Intraoral photos) | Single dataset, no flip-handling |
| Su et al., 2025 [31] | Oral mucosal lesions | National Cheng Kung University Hospital, 506 images | Clinical photographic imaging | CNN | Specificity: 97.0% Kappa: 0.851 AUC: 0.985 | Dataset scarcity, class imbalance, cross-validation |
| Zhang et al., 2024 [32] | Oral lesion segmentation | Private dataset, 838 images | Intraoral lesion images | SegFormer-B2 Transformer | Dice: 0.710 Precision: 0.886 | Data scarcity, low diversity, weak generalization |
| Tanriver et al., 2021 [33] | OPMD Disorders | Combined public dataset, 652 images | White-light photographic images | YOLOv5l U-Net | Dice: 0.929 (U-Net) AP: 0.855 (YOLOv5l) | Data scarcity, low diversity, and lesion challenge |
| Vinayahalingam et al., 2024 [34] | OPMD detection | Private dataset, 4161 images | Clinical photographs | Mask R-CNN + Swin | F1-score: 0.852 AUC: 0.974 F1-score: 0.796 AUC: 0.938 | Site limitation, low diversity, and label inconsistency |
| Warin et al., 2022 [35] | OPMD detection | Private dataset, 600 images | Digital dental camera | DenseNet-121 ResNet-50 Faster R-CNN | AUC: 95% (DenseNet-121) AUC: 95% (ResNet-50) F1 Score: 0.743 (Faster R-CNN) | Data scarcity and risk of overfitting |
| Talwar et al., 2023 [36] | OPMDs | Indian Dental Institute, 2178 images | Intraoral photographic images | DenseNet-201 | F1 Score: 0.84 | Inconsistent quality, focus, and angle variation |
| Rashid et al., 2022 [37] | Dental Caries | Public dataset | Mixed dental images | Hybrid Mask RCNN | Accuracy between 78% and 92% | No explicit study, annotated datasets |
| Ali & Sadeeq, 2025 [38] | Dental Classification | Kaggle Multi dataset | Clinically obtained RGB intraoral images | EfficientNet-B3 | Accuracy: 95.4% (Oral Diseases) Accuracy: 89.9% (Oral Infection) Accuracy: 99.3% (Teeth Dataset) | Class imbalance and low recall in Hypodontia |
| Boy et al., 2025 [39] | Dental caries | Private Indonesian clinical dataset, 1200 images | Smartphone images | MobileNetV3 | Accuracy: 90% Precision: 90% Sensitivity: 90% Specificity: 90% | Quality flaw, device variability, and low resolution |
| Pang et al., 2022 [40] | Dental Plaque | Private dataset, 2884 images | Raw oral endoscope RGB images | ResNet101 | Accuracy: 83.86% | Device variability, Imaging inconsistency, Equipment variation |
| Patel et al., 2024 [41] | Oral lesions | Private OCPP data, 2765 images | Intraoral images | GAIN + ASP | Accuracy: 75.45% AUC: 99.7% | No limitations stated |
| Ryu et al., 2023 [42] | Dental crowding severity | Seoul National University Dental Hospital, 2248 images | Intraoral photographs | VGG19 | (Maxilla) Accuracy: 0.922 (Mandible) Accuracy: 0.898 | Single-center data, weak generalization, quality flaw |
| Liu et al., 2024 [43] | Dental caries, calculus, gingivitis. | Private dataset, 3365 images | Intraoral photographic images | Oral-Mamba CNN | Accuracy: 0.83 (gingivitis) 0.83 (caries) 0.81 (calculus) | No explicit limitations stated |
| Jeong et al., 2025 [44] | Dental plaque accumulation | Private dataset, 1094 images | Camera IOPIs | U-Net | Precision: 76.34% Recall: 65.15% F1-score: 66.15% | Single-dataset, Imaging and Visualization limits |
| Li et al., 2021 [45] | Gingivitis | Private dataset, 10,000 images | RGB photos | ResNet-50 YOLOv3 | Accuracy: 92.1% Sensitivity: 91.3% Specificity: 92.9% | Single-center data, Subjective diagnosis, data scarcity |
| Neumayr et al., 2024 [46] | Molar incisor hypomineralisation | Open source web images, 455 images | IOPIs | AI-based model | Accuracy: 94.3% Sensitivity: 94.4% Specificity: 94.2% AUC: 0.89–0.94 | Heterogeneous images, Subjective quality rating, No standard criteria |
| Study ID | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Overall Bias |
|---|---|---|---|---|---|---|---|---|
| Park et al., 2022 [22] | Low | Moderate | Low | Low | Low | Low | Low | — |
| Mehta et al., 2024 [23] | Low | Low | High | Moderate | Low | Low | Low | — |
| Sabri et al., 2025 [24] | Low | Low | Moderate | Low | Moderate | Low | Moderate | — |
| Park et al., 2023 [25] | High | Moderate | Moderate | Moderate | High | Low | High | — |
| Wen et al., 2024 [26] | Moderate | Moderate | Low | Low | Low | Low | Low | — |
| Garg et al., 2023 [27] | Moderate | Moderate | Low | Low | Low | Low | Low | — |
| Nantakeeratipat et al., 2024 [28] | Low | Moderate | Low | Low | Low | Low | Low | — |
| Zhang et al., 2025 [29] | Moderate | Low | High | Low | Low | Low | Low | — |
| Ryu et al., 2022 [30] | Low | Moderate | Low | High | Low | Low | High | — |
| Su et al., 2025 [31] | Moderate | Moderate | Low | Low | Low | Low | High | — |
| Zhang et al., 2024 [32] | Low | Moderate | High | Low | Low | Low | Low | — |
| Tanriver et al., 2021 [33] | Moderate | Low | High | Low | Low | Low | Low | — |
| Vinayahalingam et al., 2024 [34] | Low | Moderate | High | Low | Low | Low | Moderate | — |
| Warin et al., 2022 [35] | Low | Moderate | High | Low | Low | Low | Low | — |
| Talwar et al., 2023 [36] | Low | Moderate | High | Low | Low | High | Low | — |
| Rashid et al., 2022 [37] | Low | Moderate | High | Low | Low | Low | Low | — |
| Ali & Sadeeq, 2025 [38] | Low | Low | Low | Low | Moderate | Low | Moderate | — |
| Boy et al., 2025 [39] | Low | Moderate | High | Low | Low | Low | Low | — |
| Pang et al., 2022 [40] | Moderate | Moderate | Low | Low | Low | Low | Low | — |
| Patel et al., 2024 [41] | Low | Moderate | High | Low | Low | Low | Low | — |
| Ryu et al., 2023 [42] | Moderate | Low | High | Low | Low | Low | Moderate | — |
| Liu et al., 2024 [43] | Low | Moderate | High | Low | Low | High | Moderate | — |
| Jeong et al., 2025 [44] | Low | Moderate | High | Low | Low | Low | Moderate | — |
| Li et al., 2021 [45] | Low | Moderate | High | Low | Moderate | Low | Low | — |
| Neumayr et al., 2024 [46] | Low | Low | Low | Low | Low | Low | High | — |
Low Risk Of Bias (High quality);
Moderate Risk Of Bias (Low quality);
High Risk of Bias (Poor quality).| Disease Category | Key Visual Indicators in IOPIs | Clinical Relevance | Typical DL Tasks | Representative Studies |
|---|---|---|---|---|
| Dental caries | White-spot lesions, cavitation, discoloration | Early prevention of progression | Classification, localization | [3] |
| Gingivitis/Periodontitis | Gingival redness, swelling, bleeding | Prevents progression and tooth loss | Classification, grading | [26] |
| Dental plaque | Yellowish biofilm at the gingival margin | Risk factors for caries and gingivitis | Segmentation, quantification | [40] |
| Orthodontic conditions | Crowding, spacing, occlusal imbalance | Treatment planning | Classification, landmark detection | [48] |
| Soft-tissue lesions/PMDs | White/red patches, ulcers | Early oral cancer screening | Classification, lesion segmentation | [33] |
| Study (Author, Year) | Imaging Task | DL Architecture | Key Methodological Focus | Ref |
|---|---|---|---|---|
| Park, 2022 | Tooth-surface caries detection and segmentation | ResNet-based segmentation pipeline | Tooth-surface segmentation before classification | [22] |
| Duong, 2021 | Caries screening | Classical ML/CNN prototype | Feasibility of smartphone photographic ML | [47] |
| Kühnisch, 2022 | Caries detection | CNN ensembles, Transfer learning | High-performance caries benchmarking | [27] |
| Shuai Li, 2022 | Plaque segmentation | Attention-based U-Net (local-to-global attention) | Improved plaque boundary delineation | [40] |
| Nantakeeratipat, 2024 | Plaque detection | Automated ML frameworks | AutoML-based model selection for plaque detection on permanent teeth | [28] |
| Ryu, 2022 | Orthodontic diagnosis | CNN (IOPI Classification) | Automated classification of intraoral photos | [30] |
| Vinayahalingam, 2024 | OPMD multi-class detection | Vision Transformer | Multi-class PMD detection using ViTs | [34] |
| Tanriver, 2021 | Oral lesion detection | CNN-based classifiers | Early automated PMD detection | [33] |
| Kaissis, 2020 Rischke, 2022 | Federated training context | FL frameworks (FedAvg/FedProx) | Privacy-preserving multi-center training in medical imaging/dentistry | [20,21] |
| Taleb, 2022 | Label-efficient learning | SSL paradigms (SimCLR, BYOL, MoCo) | SSL promises efficient detection and labeling of dental caries | [57] |
| Model/Architecture | Typical Dental Roles (IOPI) | Strengths | Limitations | Representative Benchmark Studies |
|---|---|---|---|---|
| ResNet | Caries and lesion classification | Strong feature extraction | Limited global context; needs augmentation | [22] |
| EfficientNet/MobileNet | Smartphone screening | Good accuracy–compute trade-off | Sensitive to IOPI variability | [39] |
| U-Net/DeepLabV3+/Attention U-Net | Plaque, gingiva, lesion segmentation | Accurate boundary localization | Limited global reasoning | [15,53] |
| Vision Transformers | Multi-disease and orthodontic analysis | Strong global context modeling | Data-hungry | [34,55] |
| GANs | Data augmentation, color correction | Mitigate class imbalance | Risk of unrealistic samples | [17,18] |
| Self-Supervised Learning | Label-efficient pretraining | Reduces annotation burden | Sensitive to augmentation choices | [56,57] |
| Federated Learning | Multi-center training | Privacy-preserving; robust across sites | Communication overhead | [20,21] |
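As a concrete illustration of the federated learning entry above, the core FedAvg aggregation step can be sketched in a few lines of NumPy. This is a minimal sketch under our own assumptions (flat parameter vectors per client, a hypothetical `fed_avg` helper), not an implementation taken from any reviewed study.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client model parameters,
    weighting each client by its local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = np.stack(client_weights)   # shape: (clients, params)
    coeffs = sizes / sizes.sum()         # proportional weighting per client
    return coeffs @ weights              # size-weighted parameter average
```

In a real multi-center setting, each clinic would train locally on its own intraoral images and only these parameter vectors (not patient data) would be exchanged, which is the privacy-preserving property noted in the table.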
| Task | Best Reported Model(s) | Dataset Source | Key Techniques | Reported Performance | Key Reviewed Studies |
|---|---|---|---|---|---|
| Caries detection | ResNet/CNN models | Smartphone IOPIs | Tooth-surface segmentation, CLAHE, ROI cropping | Accuracy ~85–93% | [22,47,52] |
| Gingivitis/periodontal grading | CNN classifiers | Clinical RGB photos | Gingival ROI, color normalization | Accuracy ~85–92% | [25,26,45] |
| Plaque segmentation | Attention U-Net variants | Masked datasets | Color normalization, morphological cleaning | Dice coefficients ~0.82–0.95 | [28,40,44] |
| Orthodontic classification | CNNs, ViT/Swin variants | Clinical IOPIs | Alignment/ROI extraction | Accuracy ~90–99% | [29,42] |
| OPMD detection | CNN ensembles, ViTs | Clinical photos | Contrast enhancement | AUC ~0.85–0.96 | [33,35] |
| Multi-center robustness | Federated learning | Multi-center datasets | FL training; domain adaptation | Improved external robustness | [20,21] |
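The Dice coefficients reported for plaque segmentation measure overlap between a predicted mask and a reference annotation, 2|A∩B|/(|A|+|B|). A minimal NumPy sketch (function name ours) is:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2*|A∩B| / (|A| + |B|).
    eps guards against division by zero for two empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A perfect match yields 1.0 and disjoint masks yield 0.0, so the ~0.82–0.95 range in the table indicates substantial but imperfect boundary agreement.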
| Challenge Area | Evidence from Literature | Future Research Directions |
|---|---|---|
| Dataset heterogeneity | Many studies rely on single-center datasets with limited demographic and clinical diversity [10,26], affecting generalizability [60]. | Development of multi-center IOPI datasets with standardized acquisition protocols. |
| Image acquisition variability | Variations in lighting, device type, viewing angle, and saliva artifacts influence model performance [26,47]. | Color normalization, illumination correction, and guided image-capture strategies. |
| Class imbalance and rare conditions | Rare conditions such as early caries and OPMDs are underrepresented in available datasets [26,29]. | Targeted data collection, synthetic augmentation, and self-supervised pretraining. |
| Domain shift and external validation | Performance degradation is commonly reported on external datasets due to domain shift [60]. | Domain adaptation techniques, external validation, and federated learning. |
| Limited explainability | Saliency-based explanations are not always aligned with clinical reasoning [64,71]. | Clinically interpretable explanation frameworks and human–AI studies. |
| Annotation burden | Manual labeling is time-consuming and subject to inter-observer variability [21,31]. | Self-supervised, weakly supervised, and consensus-based annotation methods. |
| Privacy and regulatory constraints | Data-sharing restrictions limit large-scale multi-institutional collaboration [74]. | Privacy-preserving learning and regulatory-aligned AI development. |
| Model reliability and calibration | Confidence estimates are often poorly calibrated for clinical decision support [68]. | Uncertainty-aware modeling and calibration strategies. |
| Clinical workflow integration | Limited deployment due to interoperability and hardware constraints [8,50]. | Lightweight models, interoperable deployment frameworks, and user-centered design. |
| Lack of prospective evaluation | Most studies rely on retrospective analysis. | Prospective and longitudinal clinical validation studies. |
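For the model reliability and calibration row, one widely used post-hoc remedy is temperature scaling, which softens overconfident softmax outputs by dividing validation logits by a fitted scalar T. The grid-search sketch below is a minimal illustration under our own naming (`fit_temperature`), not a method taken from the reviewed studies.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with a max-shift for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    """Negative log-likelihood of the true labels at temperature T."""
    p = softmax(logits / T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    """Pick the temperature minimizing held-out NLL via grid search."""
    return min(grid, key=lambda T: nll(logits, labels, T))
```

A fitted T > 1 flattens the predicted probabilities, so a classifier that is confidently wrong on part of the validation set reports more honest confidence scores for clinical decision support.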
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Mutawa, A.M.; Altarakemah, Y.Y.; Thirupathy, K. Deep Learning Applications for Dental-Disease Classification Using Intraoral Photographic Images: Current Status and Future Perspectives. AI 2026, 7, 85. https://doi.org/10.3390/ai7030085

