Prognostic Evaluation of Lower Third Molar Eruption Status from Panoramic Radiographs Using Artificial Intelligence-Supported Machine and Deep Learning Models
Abstract
1. Introduction
2. Materials and Methods
2.1. Ethical Approval, Consent and Guiding Standards
2.2. Dataset and Study Design
2.2.1. Radiographic Image Acquisition
2.2.2. Labeling Process
- Erupted: the occlusal surface of the third molar was at or above the occlusal plane of the adjacent second molar, with no overlying bone visible radiographically.
- Partially impacted: the occlusal surface was below the occlusal plane but partially exposed, with part of the crown still covered by alveolar bone or showing limited eruption space between the distal of the second molar and the anterior border of the ramus.
- Impacted: the third molar was entirely below the occlusal plane, with full bone coverage and/or evident spatial limitation or unfavorable angulation (e.g., mesioangular or horizontal position) in relation to the second molar.
2.2.3. Model Training Process and Reference Model Selection
- Data Augmentation Layer
- InceptionV3 or ResNet50 Preprocessing Layer
- Pretrained InceptionV3 or ResNet50
- Rescaling Layer (Optional)
- Global Average Pooling
- Flatten Layer
- Dropout Layer (Optional)
- Fully Connected Layer (Optional)
- Decision Layer (Softmax)
- Dropout Rate: 0, 0.15, 0.3
- Number of Nodes: 16, 32, 64, 128
- L2 Regularization: 0, 0.001, 0.01
- Learning Rate: 0.0005, 0.001, 0.005 (a minimal illustrative sketch of this architecture and grid follows the list)
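For orientation, the layer stack and hyperparameter grid above can be expressed as a minimal Keras-style sketch under assumed settings (input size, augmentation transforms, optimizer and loss); it is not the authors' implementation, and the `build_model` helper is hypothetical.

```python
# A minimal Keras-style sketch of the transfer-learning architecture and
# hyperparameters listed above. Input size, augmentation settings, optimizer
# and loss are assumptions for illustration; this is not the authors' code.
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 3           # impacted / partially impacted / erupted
IMG_SIZE = (299, 299)     # assumed InceptionV3 input size

def build_model(dropout_rate=0.15, num_nodes=64, l2_reg=0.001, learning_rate=0.001):
    inputs = keras.Input(shape=IMG_SIZE + (3,))

    # Data augmentation layer (illustrative transforms)
    x = layers.RandomFlip("horizontal")(inputs)
    x = layers.RandomRotation(0.05)(x)

    # Backbone-specific preprocessing + frozen pretrained InceptionV3
    x = keras.applications.inception_v3.preprocess_input(x)
    backbone = keras.applications.InceptionV3(include_top=False, weights="imagenet")
    backbone.trainable = False
    x = backbone(x, training=False)

    # Global average pooling already yields a flat feature vector,
    # so a separate Flatten layer is not strictly needed in this sketch.
    x = layers.GlobalAveragePooling2D()(x)

    # Optional dropout and fully connected layers (omitted when set to 0)
    if dropout_rate > 0:
        x = layers.Dropout(dropout_rate)(x)
    if num_nodes > 0:
        x = layers.Dense(num_nodes, activation="relu",
                         kernel_regularizer=keras.regularizers.l2(l2_reg))(x)

    # Decision layer: softmax over the three eruption classes
    outputs = layers.Dense(NUM_CLASSES, activation="softmax",
                           kernel_regularizer=keras.regularizers.l2(l2_reg))(x)

    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="sparse_categorical_crossentropy",  # integer labels assumed
                  metrics=["accuracy"])
    return model
```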
2.2.4. Preprocessing Methods Used in This Study
- Resizer: A method used to adjust image dimensions to match the model input size.
- Median Filter: A filtering technique that reduces noise in the image, resulting in a smoother appearance.
- CLAHE (Contrast Limited Adaptive Histogram Equalization): A technique that enhances image contrast, making details more visible.
- Gaussian Thresholding: A segmentation method that classifies pixels according to whether they fall above or below a given threshold (Figure 6); a brief illustrative sketch of these preprocessing steps follows this list.
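These steps could be chained, for illustration, with OpenCV as in the sketch below; the kernel size, CLAHE clip limit, tile grid and adaptive-threshold block size are placeholder assumptions, not values reported in the study.

```python
# An illustrative OpenCV chain for the four preprocessing steps above.
# Parameter values are placeholder assumptions, not the study's settings.
import cv2

def preprocess(path, target_size=(299, 299)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Resizer: match the model input size
    img = cv2.resize(img, target_size, interpolation=cv2.INTER_AREA)

    # Median filter: reduce noise for a smoother appearance
    img = cv2.medianBlur(img, 3)

    # CLAHE: contrast-limited adaptive histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)

    # Gaussian (adaptive) thresholding: label each pixel relative to a
    # Gaussian-weighted local threshold
    img = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                cv2.THRESH_BINARY, 11, 2)
    return img
```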
3. Results
3.1. Comparison of Model Performances
3.2. The Effect of Hyperparameter Optimization
3.3. Performance on Initial Diagnoses
3.4. Overall Evaluation of the Results
4. Discussion
4.1. Interpretation of Findings
4.2. Strengths of the Model and Comparison with Classical Machine Learning Methods
4.3. Clinical Relevance
4.4. Methodological Differences
4.5. Limitations
4.6. Suggested Directions for Future Research
- The use of higher-resolution imaging (e.g., CBCT instead of conventional radiographs)
- The incorporation of standardized yet diverse data inputs
- The involvement of multiple observers while minimizing inter-observer variability
- The integration of practical technologies such as mobile devices and cloud computing
- Testing across different demographic groups and clinical scenarios
- Model selection that accounts for dataset size and the intended task (e.g., detection, classification, prediction, treatment recommendation) in order to achieve generalizability
- Prospective studies for clinical validation
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References

| Label count | Impacted | Partially Impacted | Erupted | Total |
|---|---|---|---|---|
| Initial | 118 | 89 | 61 | 268 |
| Definitive | 147 | 119 | 568 | 834 |
| Parameter | Effect on the Model |
|---|---|
| Dropout Rate | The rate used in the dropout layer of the model architecture; higher rates are expected to reduce overfitting. When set to "0", the dropout layer is removed from the architecture. |
| Number of Nodes | The number of nodes in the fully connected layer of the model architecture. When set to "0", the fully connected layer is removed from the architecture. |
| L2 Regularization * | Sets the strength of the L1 and L2 regularization applied in the fully connected and decision layers. Unless otherwise specified, L1 and L2 regularization default to "0". |
| Learning Rate | Determines the learning speed (step size) of the optimization method chosen for model training. |
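As a sketch of how the grid defined above might be searched, the snippet below exhaustively combines the listed values and keeps the configuration with the best validation accuracy; it reuses the hypothetical `build_model` helper from the earlier architecture sketch, and selection by validation accuracy is an assumption rather than the authors' stated criterion.

```python
# Exhaustive search over the hyperparameter grid listed above (illustrative).
import itertools

GRID = {
    "dropout_rate": [0, 0.15, 0.3],
    "num_nodes": [16, 32, 64, 128],
    "l2_reg": [0, 0.001, 0.01],
    "learning_rate": [0.0005, 0.001, 0.005],
}

def grid_search(train_ds, val_ds, epochs=20):
    best_acc, best_params = -1.0, None
    keys = list(GRID)
    for values in itertools.product(*(GRID[k] for k in keys)):
        params = dict(zip(keys, values))
        model = build_model(**params)          # hypothetical helper from earlier sketch
        history = model.fit(train_ds, validation_data=val_ds,
                            epochs=epochs, verbose=0)
        val_acc = max(history.history["val_accuracy"])
        if val_acc > best_acc:
            best_acc, best_params = val_acc, params
    return best_params, best_acc
```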
| Cohort | Class | Train | Validation | Test |
|---|---|---|---|---|
| Initial | Impacted | 70 | 24 | 24 |
| Initial | Partially Impacted | 53 | 18 | 18 |
| Initial | Erupted | 18 | 6 | 12 |
| Definitive | Impacted | 340 | 144 | 114 |
| Definitive | Partially Impacted | 88 | 30 | 29 |
| Definitive | Erupted | 71 | 24 | 24 |
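The partition in the table above could be reproduced in spirit with a stratified split such as the following; the ratios, random seed and use of scikit-learn are illustrative assumptions rather than the authors' reported procedure.

```python
# Hypothetical stratified train/validation/test split by eruption class.
from sklearn.model_selection import train_test_split

def split_dataset(image_paths, labels, val_size=0.2, test_size=0.2, seed=42):
    # Hold out the test set first, stratified by class
    trainval_x, test_x, trainval_y, test_y = train_test_split(
        image_paths, labels, test_size=test_size, stratify=labels,
        random_state=seed)
    # Split the remainder into training and validation sets
    rel_val = val_size / (1.0 - test_size)
    train_x, val_x, train_y, val_y = train_test_split(
        trainval_x, trainval_y, test_size=rel_val, stratify=trainval_y,
        random_state=seed)
    return (train_x, train_y), (val_x, val_y), (test_x, test_y)
```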
| Model Performance Metrics | Phase 1 | Phase 2 | Phase 3 |
|---|---|---|---|
| Accuracy | 0.850 | 0.856 | 0.850 |
| Recall | 0.816 | 0.821 | 0.804 |
| Precision | 0.794 | 0.795 | 0.788 |
| F1 Score | 0.800 | 0.803 | 0.792 |
| Optimal Final Layer | Logistic Regression | Logistic Regression | Logistic Regression |
| Model Performance Metrics | Phase 1 | Phase 2 | Phase 3 |
|---|---|---|---|
| Accuracy | 0.892 | 0.880 | 0.892 |
| Recall | 0.784 | 0.789 | 0.814 |
| Precision | 0.843 | 0.833 | 0.846 |
| F1 Score | 0.808 | 0.809 | 0.829 |
| Optimal Final Layer | Logistic Regression | Neural Network | KNN |
| Model Performance Metrics | Phase 1 | Phase 2 | Phase 3 |
|---|---|---|---|
| Accuracy | 0.629 | 0.648 | 0.704 |
| Recall | 0.620 | 0.634 | 0.713 |
| Precision | 0.683 | 0.672 | 0.707 |
| F1 Score | 0.639 | 0.647 | 0.705 |
| Optimal Final Layer | KNN | Neural Network | KNN |
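The "Optimal Final Layer" rows in the tables above suggest that deep image features were compared across classical classifier heads. A hedged sketch of how such candidates could be evaluated on macro-averaged metrics is shown below; the classifier settings, feature inputs and selection criterion are assumptions, not the authors' exact pipeline.

```python
# Illustrative comparison of candidate final layers on deep features.
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate_final_layers(train_feats, y_train, test_feats, y_test):
    candidates = {
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "Neural Network": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
    }
    results = {}
    for name, clf in candidates.items():
        clf.fit(train_feats, y_train)
        y_pred = clf.predict(test_feats)
        precision, recall, f1, _ = precision_recall_fscore_support(
            y_test, y_pred, average="macro", zero_division=0)
        results[name] = {"Accuracy": accuracy_score(y_test, y_pred),
                         "Recall": recall, "Precision": precision,
                         "F1 Score": f1}
    # Report the candidate with the best F1 score as the "optimal final layer"
    best = max(results, key=lambda name: results[name]["F1 Score"])
    return best, results
```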
| True Class (Definitive) | Predicted Impacted (from Initial Image) | Predicted Partially Impacted (from Initial Image) | Predicted Erupted (from Initial Image) | Support |
|---|---|---|---|---|
| Impacted | 18 (75%) | 3 (12.50%) | 3 (12.50%) | 24 |
| Partially Impacted | 7 (38.88%) | 10 (55.55%) | 1 (5.55%) | 18 |
| Erupted | 1 (8.33%) | 1 (8.33%) | 10 (83.33%) | 12 |
| Class | Phase 1 | Phase 2 | Phase 3 |
|---|---|---|---|
| Impacted Definitive | 0.928 | 0.939 | 0.944 |
| Partially Impacted Definitive | 0.885 | 0.881 | 0.872 |
| Erupted Definitive | 0.930 | 0.932 | 0.929 |
| Class | Phase 1 | Phase 2 | Phase 3 |
|---|---|---|---|
| Impacted Definitive | 0.947 | 0.970 | 0.926 |
| Partially Impacted Definitive | 0.826 | 0.873 | 0.794 |
| Erupted Definitive | 0.929 | 0.948 | 0.901 |