Abstract
Background/Objectives: Early and accurate identification of impacted teeth in the maxilla is critical for effective dental treatment planning. Traditional diagnostic methods, which rely on manual interpretation of radiographic images, are often time-consuming and subject to variability. Methods: This study presents a deep-learning approach for the automated classification of impacted maxillary canines on panoramic radiographs. A comparative evaluation of four pre-trained convolutional neural network (CNN) architectures (ResNet50, Xception, InceptionV3, and VGG16) was conducted using transfer learning. In this retrospective single-center study, the dataset comprised 694 annotated panoramic radiographs sourced from the archives of a university dental hospital, with a mildly imbalanced distribution of impacted and non-impacted cases. Models were assessed using accuracy, precision, recall, specificity, and F1-score. Results: Among the tested architectures, VGG16 performed best, achieving an accuracy of 99.28% and an F1-score of 99.43%. In addition, a prototype diagnostic interface was developed to demonstrate the potential for clinical application. Conclusions: These findings underscore the potential of deep learning models, particularly VGG16, to enhance diagnostic workflows; however, further validation on diverse, multi-center datasets is required to confirm clinical generalizability.
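The abstract names five evaluation metrics. As an illustration only (not code or data from the study), the sketch below shows how each metric is derived from a binary confusion matrix; the counts are hypothetical, chosen purely to demonstrate the formulas for an impacted vs. non-impacted classifier.

```python
# Illustrative sketch: evaluation metrics for a binary classifier,
# computed from confusion-matrix counts. All numbers here are
# hypothetical and do not come from the study itself.

def binary_metrics(tp, fp, tn, fn):
    """Return accuracy, precision, recall, specificity, and F1-score
    from true/false positive and true/false negative counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1

# Hypothetical counts: "positive" = impacted canine
acc, prec, rec, spec, f1 = binary_metrics(tp=90, fp=5, tn=85, fn=10)
```

Note that recall (sensitivity) and specificity answer complementary clinical questions: the fraction of truly impacted cases the model catches versus the fraction of non-impacted cases it correctly clears.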