Article

Artificial Intelligence-Aided Tooth Detection and Segmentation on Pediatric Panoramic Radiographs in Mixed Dentition Using a Transfer Learning Approach

by Serena Incerti Parenti 1,*, Giorgio Tsiotas 2, Alessandro Maglioni 1, Giulia Lamberti 1, Andrea Fiordelli 1, Davide Rossi 2, Luciano Bononi 2 and Giulio Alessandri-Bonetti 1

1 Unit of Orthodontics and Sleep Dentistry, Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, Via San Vitale 59, 40125 Bologna, Italy
2 Department of Computer Science and Engineering (DISI), University of Bologna, Mura Anteo Zamboni 7, 40126 Bologna, Italy
* Author to whom correspondence should be addressed.
Diagnostics 2025, 15(20), 2615; https://doi.org/10.3390/diagnostics15202615
Submission received: 5 September 2025 / Revised: 13 October 2025 / Accepted: 14 October 2025 / Published: 16 October 2025
(This article belongs to the Special Issue Advances in Diagnosis and Treatment in Pediatric Dentistry)

Abstract

Background/Objectives: Accurate identification of deciduous and permanent teeth on panoramic radiographs (PRs) during mixed dentition is fundamental for early detection of eruption disturbances, yet relies heavily on clinician experience due to developmental variability. This study aimed to develop a deep learning model for automated tooth detection and segmentation in pediatric PRs during mixed dentition. Methods: A retrospective dataset of 250 panoramic radiographs from patients aged 6–13 years was analyzed. A customized YOLOv11-based model was developed using a novel hybrid pre-annotation strategy leveraging transfer learning from 650 publicly available adult radiographs, followed by expert manual refinement. Performance evaluation utilized mean average precision (mAP), F1-score, precision, and recall metrics. Results: The model demonstrated robust performance with mAP0.5 = 0.963 [95% CI: 0.944–0.983] and a macro-averaged F1-score = 0.953 [95% CI: 0.922–0.965] for detection. Segmentation achieved mAP0.5 = 0.890 [95% CI: 0.857–0.923]. Stratified analysis revealed excellent performance for permanent teeth (F1 = 0.977) and clinically acceptable accuracy for deciduous teeth (F1 = 0.884). Conclusions: The automated system achieved near-expert accuracy in detecting and segmenting teeth during mixed dentition using an innovative transfer learning approach. This framework establishes a reliable infrastructure for AI-assisted diagnostic applications targeting eruption and developmental anomalies, potentially facilitating earlier detection while reducing clinician-dependent variability in mixed dentition evaluation.

1. Introduction

Artificial intelligence (AI) has emerged as one of the most rapidly evolving innovations in healthcare, with increasingly relevant applications in dentistry. Advances in machine learning algorithms, deep learning (DL) architectures, and large language models (LLMs) have enabled AI to support multiple aspects of dental care, from diagnosis and treatment planning to patient communication. Recent studies have demonstrated the utility of AI across multiple domains, including caries detection, periodontal disease assessment, pediatric dentistry, and endodontic procedures, with encouraging accuracy and efficiency despite considerable variation in model performance across studies [1].
The integration of AI in radiographic interpretation represents one of the most promising applications in dentistry, particularly for enhancing image interpretation speed and diagnostic accuracy. Moreover, looking toward future perspectives, AI technologies hold considerable potential for transforming patient communication and engagement in clinical decision-making processes. The widespread adoption of LLM-based chatbots has reshaped healthcare information dissemination, with tools such as ChatGPT demonstrating promising performance across various dental fields [2,3,4,5,6], and potentially challenging traditional search engines like Google as primary resources for patient healthcare inquiries. However, a recent systematic review emphasizes the critical need for further research to assess LLM content reliability and its clinical impact, given the inherent risk of generating inaccurate information [7].
Monitoring the transition from the mixed to the permanent dentition is an essential component of pediatric dental care and presents unique challenges that are particularly well-suited for AI-assisted solutions. Tooth eruption is a dynamic, sequential developmental process, with the transition from primary to permanent dentition occurring during the mixed dentition period, generally between 6 and 12 years of age. This period is characterized by the interplay of primary tooth exfoliation and permanent tooth eruption. Accurate tooth numbering systems, such as that of the Fédération Dentaire Internationale (FDI), are essential for proper diagnosis, communication, and treatment planning.
Early identification of tooth developmental and eruption anomalies during the mixed dentition period is crucial for optimal patient care. Conditions including agenesis, supernumerary teeth, delayed or ectopic eruption can significantly disrupt the normal sequence of dental development, potentially leading to space discrepancies, malocclusions or tooth impactions. Timely detection enables the proactive management of developmental anomalies and the implementation of interceptive strategies when needed, such as orthodontic space management, or strategic extractions capable of redirecting aberrant eruption patterns toward physiological pathways and potentially reducing the likelihood of complex interventions later in development [8,9].
Panoramic radiographs (PRs) represent a widely used diagnostic tool for mixed dentition assessment, providing comprehensive visualization of dental development and identification of eruption disturbances in a single exposure [9]. Beyond their routine diagnostic applications, PRs play a fundamental role in dental age estimation, offering reliable information on tooth development chronology and eruption patterns that proves critical in pediatric dentistry, orthodontics, and forensic contexts. Recent evidence, including a systematic review and meta-analysis, has further demonstrated the potential of DL to enhance age estimation from PRs, highlighting the promise of AI-assisted approaches in this domain [10]. The ability of PRs to simultaneously evaluate tooth position, morphology, and developmental stage, combined with emerging DL applications, underscores their importance as both a first-line diagnostic tool and a valuable adjunct for guiding clinical decision-making and interceptive interventions during the mixed dentition period.
However, accurate interpretation of mixed dentition PRs remains highly dependent on clinician experience, particularly given the variable coexistence of deciduous and permanent teeth at different developmental stages and the frequent overlap of anatomical structures [11]. Implementing a fully automated approach could overcome these limitations while simultaneously enhancing both diagnostic accuracy and clinical efficiency.
While recent advances in DL have yielded promising results for automated tooth detection and numbering in PRs of permanent dentition [12,13,14], research focusing specifically on mixed dentition remains limited, and several critical gaps persist in the current literature [15,16,17,18,19]. Among the studies addressing mixed dentition, most have predominantly employed bounding-box detection approaches [15,16,17,18], with limited exploration of detailed polygonal segmentation [19]. Furthermore, none have reported performance stratification by dentition type or provided in-depth error characterization. Additionally, current investigations have not adequately explored the potential of state-of-the-art YOLO architectures, particularly YOLOv11 [20], nor have they evaluated innovative methodologies to optimize pediatric annotation workflows, thereby failing to address the inherent challenges associated with tooth recognition in mixed dentition scenarios.
This study addresses these limitations by introducing a novel YOLOv11-based framework that integrates simultaneous detection and segmentation capabilities specifically optimized for PRs in mixed dentition. The key methodological innovation lies in implementing a hybrid pre-annotation strategy that leverages transfer learning from a publicly available adult radiographic dataset to automatically generate provisional tooth labels for pediatric images, followed by systematic expert manual refinement. Our framework incorporates comprehensive performance assessment, including stratified analyses for deciduous and permanent dentition as well as detailed error characterization. This integrated methodology represents a substantial step forward in the development of high-precision AI systems specifically tailored for pediatric dental imaging applications.
The null hypothesis was that our novel YOLOv11-based approach could not achieve acceptable performance in automated tooth detection and segmentation in mixed dentition PRs. By addressing the current gaps in pediatric dental AI research, this study aims to provide a robust foundation for the clinical implementation of AI-assisted mixed dentition analysis, ultimately contributing to more accurate, efficient, and standardized pediatric dental care. This work contributes to the field in three relevant ways:
  • task integration: a YOLOv11-based framework that jointly performs detection, enumeration, and polygonal instance segmentation in a single pass, tailored to mixed dentition;
  • efficient pediatric labeling: a hybrid pre-annotation strategy that reduces manual burden while preserving accuracy;
  • dentition-aware reporting: comprehensive stratification of performance by dentition type and error profiling, highlighting clinically relevant failures that prior studies did not examine.

2. Materials and Methods

2.1. Study Design

This retrospective study was conducted at the Unit of Orthodontics and Sleep Dentistry, Department of Biomedical and Neuromotor Sciences (DIBINEM), in collaboration with the Department of Computer Science and Engineering (DISI), University of Bologna, Italy. The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Ethics Committee CE-AVEC (approval number: 293-2024-AUSLBO; 16 October 2024). Informed consent was obtained from all subjects involved in the study. Personal data were processed in accordance with the applicable data protection legislation (in particular Regulation (EU) 2016/679 and Legislative Decree No. 196 of 30 June 2003, as subsequently amended and supplemented). Study conduct and reporting adhered to the Checklist for Artificial Intelligence in Medical Imaging guidelines [21] and to the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) guidelines [22] to ensure transparency, reproducibility, and methodological rigor (Table S1).

2.2. Dataset Description

A total of 250 PRs were retrospectively selected from patients who underwent routine diagnostic imaging between 2017 and 2024 at the School of Dentistry, DIBINEM, University of Bologna, Italy. Eligible patients were aged 6–13 years presenting mixed dentition. The age range was specifically chosen to encompass the critical transitional period when both deciduous and permanent teeth coexist, allowing comprehensive evaluation of diverse developmental stages and tooth morphologies essential for robust model training. Exclusion criteria comprised: (1) poor image quality with blurred or distorted tooth contours that could compromise accurate annotation (image quality was defined as clear visualization of tooth contours, absence of motion artifacts, and sufficient contrast to discriminate teeth from surrounding bone); (2) presence of supernumerary teeth that might confound automated tooth identification algorithms; (3) cystic lesions or pathological conditions affecting normal dental anatomy; and (4) active fixed orthodontic appliances, which could introduce metallic artifacts and obscure tooth boundaries.
The sample size was determined based on data availability within the institutional archive and was deemed sufficient to allow robust model training and evaluation.
All images were acquired using a Planmeca ProMax panoramic system (Planmeca Oy, Helsinki, Finland) with standardized exposure parameters (60–64 kVp, 5 mA, 12 ms) and patient positioning using guide lights and chin stabilization. Images were exported in PNG format at 2454 × 1304 pixel resolution, providing optimal balance between image quality and computational efficiency for DL applications.

2.3. Labeling Protocol

As a preliminary step, 650 annotated PRs from adult patients were obtained from the publicly available Dental Enumeration and Diagnosis on Panoramic-X-rays (DENTEX) dataset, released as part of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023 [23,24]. This adult dataset served as the training foundation for developing a preliminary DL model specifically designed for permanent tooth identification and enumeration. The resulting pre-trained model was subsequently applied to automatically pre-annotate the 250 pediatric PRs, thereby establishing initial tooth boundaries and classifications to streamline the subsequent manual labeling task (Figure 1).
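To illustrate this pre-annotation step, the following is a minimal sketch using the public Ultralytics API: the adult-trained model is run over the pediatric images and its polygonal predictions are written out as provisional YOLO-format segmentation labels. The weights filename, directory names, and confidence threshold are hypothetical assumptions, not the study's exact configuration.

```python
from pathlib import Path
from ultralytics import YOLO

model = YOLO("dentex_adult_pretrained.pt")  # hypothetical DENTEX-trained weights
Path("pre_labels").mkdir(exist_ok=True)

for img in sorted(Path("pediatric_prs").glob("*.png")):
    result = model.predict(img, conf=0.25, verbose=False)[0]
    lines = []
    if result.masks is not None:
        # masks.xyn holds one polygon per detection, normalized to [0, 1]
        for cls, poly in zip(result.boxes.cls.tolist(), result.masks.xyn):
            coords = " ".join(f"{x:.6f} {y:.6f}" for x, y in poly)
            lines.append(f"{int(cls)} {coords}")
    # provisional YOLO-seg label file, to be refined by the expert annotator
    (Path("pre_labels") / (img.stem + ".txt")).write_text("\n".join(lines))
```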
Given the inherent limitations of adult-trained models in recognizing deciduous teeth and developing permanent teeth at various stages of formation, a comprehensive manual annotation phase was essential. A customized open-source labeling tool was employed to facilitate expert-driven manual annotation according to the FDI tooth numbering system. This phase specifically focused on accurately identifying and classifying deciduous teeth and developing permanent teeth that were either misclassified or unrecognized by the adult pre-trained model (Figure 2).
All manual annotations were performed by a specialist in orthodontics with more than fifteen years of clinical experience. The specialist systematically reviewed each pre-annotated image, correcting automated classifications where necessary and manually annotating previously undetected dental elements. All labeled images underwent independent verification by a senior orthodontist with more than thirty years of clinical experience. Any discrepancies between the two evaluators were discussed and resolved through consensus. Full agreement was achieved across all tooth classifications and segmentations.

2.4. AI Model Architecture and Training

In this study, we employed YOLOv11 (“You Only Look Once”—version 11), a state-of-the-art DL architecture widely adopted for object detection and image segmentation [20]. YOLO is specifically designed to concurrently perform three tasks on a single image:
  • detect the location of each object (in our case, each tooth),
  • classify the object (e.g., tooth 11, tooth 12, tooth 35), and
  • segment its precise contour (tooth boundary).
The model follows a YOLO-style single-stage design, consisting of three main components: a backbone for feature extraction, a neck (FPN/PAN-like) for multi-scale feature aggregation, and dedicated heads for detection and segmentation.
YOLOv11 is structured as follows:
  • backbone: extracts visual features from the input image (e.g., shapes, contours), leveraging convolutional blocks with batch normalization and SiLU activations;
  • neck: employs upsampling and concatenation operations to generate multi-resolution feature maps, enabling the combination and enhancement of features across different scales;
  • heads:
    ◦ detection head: outputs the position, class label, and confidence score for each object. For each grid cell, the detection head predicts bounding-box parameters (tx, ty, tw, th), an objectness score, and class logits. The detection loss combines three components: a bounding-box regression loss (IoU-based), an objectness loss (BCE), and a classification loss (cross-entropy or BCE, depending on the parametrization);
    ◦ segmentation head: produces pixel-level masks for each detected object using an instance-aware single-shot approach. The head generates a small set of prototype masks and, for each detection, a coefficient vector. The final mask for detection i is obtained by linearly combining the prototypes with the coefficients, followed by a sigmoid activation and cropping according to the bounding box (see the sketch after this list).
This design enables the generation of instance-level segmentation outputs (i.e., one mask per detected tooth), even in cases where teeth are spatially adjacent or overlapping.
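To make the mask-assembly step concrete, here is a minimal NumPy sketch of the prototype-based instance mask computation described above; array shapes and the binarization threshold are illustrative assumptions rather than the exact Ultralytics internals.

```python
import numpy as np

def assemble_instance_mask(prototypes, coeffs, box_xyxy):
    """Combine prototype masks with a per-detection coefficient vector."""
    # prototypes: (k, H, W) prototype masks; coeffs: (k,) coefficients
    logits = np.tensordot(coeffs, prototypes, axes=1)  # (H, W) linear combination
    mask = 1.0 / (1.0 + np.exp(-logits))               # sigmoid activation
    x1, y1, x2, y2 = box_xyxy
    cropped = np.zeros_like(mask)
    cropped[y1:y2, x1:x2] = mask[y1:y2, x1:x2]         # crop to the bounding box
    return (cropped > 0.5).astype(np.uint8)            # binary instance mask

# Example: 3 prototypes on a 64x64 grid, one detection
protos = np.random.rand(3, 64, 64).astype(np.float32)
coeffs = np.array([0.8, -0.3, 0.5], dtype=np.float32)
mask = assemble_instance_mask(protos, coeffs, (10, 10, 40, 40))
print(mask.shape, int(mask.sum()))
```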

2.5. Output Layer Configuration

In this study, the model was trained to detect and classify up to 52 tooth types (32 permanent and 20 deciduous teeth, corresponding to mixed dentition).
For each detected tooth, the model provides:
  • bounding box coordinates (x, y, width, height),
  • class label (e.g., 11, 12, …), and
  • an instance-specific segmentation mask.
Unlike architectures that produce separate segmentation maps, the final layer outputs one mask per detected tooth instance, directly associated with the predicted class.
The model architecture was derived from the pre-trained Ultralytics YOLOv11-seg-small backbone, which was subsequently fine-tuned on our mixed-dentition dataset to adapt the network for the specific task of accurate detection and segmentation of both deciduous and permanent teeth.
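For concreteness, a minimal sketch of inspecting these per-instance outputs through the public Ultralytics API is shown below; the weights and image filenames are hypothetical.

```python
from ultralytics import YOLO

model = YOLO("yolo11s_seg_mixed_dentition.pt")  # hypothetical fine-tuned weights
# low NMS IoU threshold to suppress duplicate predictions (see Section 2.5)
result = model.predict("example_pr.png", iou=0.1, verbose=False)[0]

for box, cls, conf, mask in zip(result.boxes.xywh,   # (x_center, y_center, w, h)
                                result.boxes.cls,    # predicted class index
                                result.boxes.conf,   # confidence score
                                result.masks.data):  # one mask per detected tooth
    fdi_label = result.names[int(cls)]               # class index -> FDI number
    print(f"tooth {fdi_label}: conf={float(conf):.2f}, "
          f"box={[round(v, 1) for v in box.tolist()]}, "
          f"mask_px={int(mask.sum())}")
```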
The dataset was randomly partitioned into training (80%), validation (10%), and test (10%) subsets. The training process utilized 225 PRs, comprising the training and validation portions of our pediatric dataset.
To enlarge the effective training dataset size and improve model generalizability, we incorporated the 650 annotated adult panoramic radiographs from the DENTEX dataset during the initial training phase. This strategic combination of adult and pediatric images substantially increased the anatomical variation encountered during optimization, with the empirical aim of improving detection performance for late-erupting permanent teeth that may exhibit adult-like morphological characteristics.
The model architecture was modified to include a custom loss function specifically designed to penalize the erroneous identification of multiple classes for the same anatomical tooth position during training. Data augmentation hyperparameters were carefully optimized for dental radiographic analysis: scale transformation (scale) was limited to 0.1 to preserve anatomical proportions, pixel erasure (erase) was set to 0.1 to simulate minor image artifacts, image rotation (degrees) was set to 0.0, while mosaic augmentation was disabled (mosaic = 0.0) to maintain spatial relationships between adjacent teeth. Horizontal flip augmentation was specifically disabled (fliplr = 0.0) to prevent left-right tooth misclassification, which is critical for accurate dental enumeration according to the FDI system.
The training protocol employed a two-stage approach. Initial training was performed on a single NVIDIA RTX 2080 GPU using an intersection over union (IoU) threshold of 0.7 and an initial learning rate (lr) of 0.01. This was followed by a specialized fine-tuning phase exclusively on the pediatric PRs (20 epochs, lr = 0.0005, freeze = 10), enabling more precise deciduous teeth segmentation boundaries while preserving adult tooth morphology capability.
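A hedged sketch of this two-stage recipe, expressed with Ultralytics training arguments matching the reported hyperparameters, is given below; the dataset YAML files are hypothetical, and "erasing" is assumed to be the configuration key corresponding to the pixel-erasure setting described above.

```python
from ultralytics import YOLO

# Stage 1: initial training on the combined adult (DENTEX) + pediatric set.
model = YOLO("yolo11s-seg.pt")  # pre-trained YOLOv11-seg-small weights
model.train(
    data="adult_plus_pediatric.yaml",  # hypothetical dataset config
    iou=0.7, lr0=0.01,
    scale=0.1, erasing=0.1, degrees=0.0, mosaic=0.0, fliplr=0.0,
)

# Stage 2: fine-tuning on pediatric PRs only, freezing the first 10 layers.
model.train(
    data="pediatric_only.yaml",        # hypothetical dataset config
    epochs=20, lr0=0.0005, freeze=10,
    scale=0.1, erasing=0.1, degrees=0.0, mosaic=0.0, fliplr=0.0,
)
```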
During inference, the IoU threshold was reduced to 0.1 to minimize prediction duplicates while maintaining detection sensitivity. Training was automatically terminated at epoch 54 through early stopping when peak validation performance (mean average precision, mAP0.5 = 0.949) was achieved. The model checkpoint demonstrating the highest validation mAP was selected and exported for final evaluation on the independent test set (10% of the total dataset) (Figure 3).

2.6. Evaluation Metrics and Statistical Analysis

Model performance was evaluated using standard metrics: precision, recall, F1-score, and mAP, offering a comprehensive assessment of detection and localization accuracy. Performance metrics were initially computed on a per-class basis for each individual tooth type, and, subsequently, macro-averaged values were calculated using unweighted averaging across all tooth classes to provide overall performance indicators without bias toward more frequently occurring tooth types. Additional subset analyses were performed by computing separate macro-averages exclusively for deciduous teeth and permanent teeth categories, facilitating targeted evaluation of model performance on each dentition type within the mixed dentition context.
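As a minimal illustration of this macro-averaging and dentition-level stratification, the following Python sketch uses a small excerpt of per-class F1 values; the dictionary contents are just a subset of Tables 1 and 2, and the helper function simply exploits the FDI convention that quadrants 5–8 denote deciduous teeth.

```python
from statistics import mean

# Illustrative excerpt of per-class F1 values keyed by FDI number
# (see Tables 1 and 2 for the full per-class results).
per_class_f1 = {"11": 0.991, "36": 0.984, "51": 0.654, "75": 0.897}

def is_deciduous(fdi: str) -> bool:
    return fdi[0] in "5678"  # FDI quadrants 5-8 denote deciduous teeth

macro_f1 = mean(per_class_f1.values())  # unweighted macro-average
deciduous_f1 = mean(v for k, v in per_class_f1.items() if is_deciduous(k))
permanent_f1 = mean(v for k, v in per_class_f1.items() if not is_deciduous(k))
print(f"macro={macro_f1:.3f}, deciduous={deciduous_f1:.3f}, "
      f"permanent={permanent_f1:.3f}")
```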
Error pattern analysis was conducted through the construction of a multiclass confusion matrix normalized to percentages, enabling visualization of systematic misclassification patterns and identification of challenging tooth discrimination scenarios.
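For concreteness, a toy example of constructing such a percentage-normalized confusion matrix with scikit-learn is shown below; the label lists are illustrative, not study data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy ground-truth/prediction lists over three FDI classes
y_true = ["11", "11", "51", "52", "51", "52"]
y_pred = ["11", "11", "51", "51", "51", "52"]
labels = ["11", "51", "52"]

# normalize="true" scales each row (true class) to sum to 1; x100 -> percentages
cm = confusion_matrix(y_true, y_pred, labels=labels, normalize="true") * 100
print(np.round(cm, 1))  # row i: % of true class i assigned to each predicted class
```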
In addition to the 80/10/10 dataset split, we performed a 5-fold cross-validation to evaluate model robustness and generalization. In each fold, the model was trained from scratch with newly defined training/validation partitions, while the independent pediatric test set remained untouched. Cross-validation results confirmed the stability of the YOLOv11 framework, showing consistently high precision, recall, and mAP values with low variance across folds.
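A hedged sketch of this cross-validation layout follows; the file names and fold seeding are hypothetical, and each fold would retrain the model from scratch as described above.

```python
import numpy as np
from sklearn.model_selection import KFold

# Pool of 225 training/validation PRs (file names are hypothetical);
# the independent pediatric test set is excluded and never re-partitioned.
pool = np.array([f"pr_{i:03d}.png" for i in range(225)])

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(kf.split(pool)):
    train_imgs, val_imgs = pool[train_idx], pool[val_idx]
    print(f"fold {fold}: {len(train_imgs)} train / {len(val_imgs)} val")
    # a fresh model would be trained on train_imgs, tuned on val_imgs,
    # and finally evaluated on the fixed held-out test set
```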
Statistical analyses were performed using the software Python for Windows (Python Software Foundation. Python Language Reference, version 3.1. Available at http://www.python.org, accessed on 5 June 2025).

3. Results

The test subset comprised 25 PRs from pediatric patients (mean age 8.4 ± 1.8 years; 14 females, 11 males) and yielded 921 annotated tooth instances, consisting of 810 permanent teeth and 111 deciduous teeth. Tooth classes #71 and #81 were excluded from the analysis due to their extremely low frequency in the dataset, which would have generated unreliable per-class estimates and potentially distorted macro-averaged performance metrics.
For the bounding box detection task, the YOLOv11 model demonstrated robust performance across all tooth categories (Table 1), achieving an overall mAP0.5 of 0.963 [95% CI: 0.944–0.983] and a macro-averaged F1-score of 0.953 [95% CI: 0.922–0.965]. This performance was driven by consistently high precision of 0.946 [95% CI: 0.922–0.969] and recall of 0.945 [95% CI: 0.921–0.968], indicating balanced detection accuracy with minimal false positive and false negative predictions.
As can be seen in Table 2, for the segmentation task, the model maintained clinically acceptable performance with mAP0.5 = 0.890 [95% CI: 0.857–0.923], precision = 0.893 [95% CI: 0.864–0.922], recall = 0.894 [95% CI: 0.862–0.926], and F1-score = 0.891 [95% CI: 0.862–0.921]. The slightly reduced performance in segmentation compared to detection reflects the increased complexity of precise boundary delineation for irregularly shaped dental structures.
Stratified analysis by dentition type revealed differential model performance between tooth categories. Permanent teeth demonstrated excellent recognition accuracy with an average F1-score of 0.977 [95% CI: 0.971–0.982], while deciduous teeth showed lower but clinically acceptable performance with an average F1-score of 0.884 [95% CI: 0.833–0.934]. This performance discrepancy can be attributed to the morphological variability of deciduous teeth, their smaller size, and the frequent anatomical overlap with adjacent developing permanent teeth.
The three least accurate classifications were identified as the maxillary right deciduous central incisor (#51, F1 = 0.654), the maxillary right deciduous lateral incisor (#52, F1 = 0.704), and the mandibular right deciduous canine (#82, F1 = 0.759). Detailed examination of misclassified cases revealed that these errors predominantly occurred in instances where deciduous teeth exhibited significant overlap with neighboring deciduous elements or adjacent unerupted permanent teeth, thereby compromising precise localization and boundary definition.
The normalized 52 × 52 confusion matrix (Figure 4) provided comprehensive visualization of model classification patterns, with ground truth annotations displayed on the x-axis and model predictions on the y-axis. Each cell represents the percentage of predictions for a given true versus predicted tooth class, with color intensity corresponding to classification accuracy. The matrix demonstrated predominantly diagonal activity with minimal off-diagonal misclassifications, indicating robust discriminative capability. The sparse off-diagonal elements and pronounced diagonal concentration confirmed effective model training with limited systematic errors.

4. Discussion

The present study demonstrates significant methodological and analytical advancements in AI-assisted tooth detection and segmentation specifically tailored for mixed dentition PRs. Our novel YOLOv11-based framework achieved robust performance while addressing critical gaps in the current literature through comprehensive dentition-type stratification and detailed error characterization. These findings support the rejection of our null hypothesis, confirming that AI-based DL models can achieve clinically acceptable accuracy levels for automated mixed dentition analysis.
AI in dentistry represents a paradigm shift from traditional diagnostic approaches to AI-augmented clinical decision-making. While conventional methods rely primarily on clinician expertise and subjective interpretation, AI enables data-driven analyses that enhance diagnostic accuracy and efficiency. This transition creates new opportunities, including earlier detection of developmental anomalies, reduction in inter-operator variability, and enhanced support for personalized treatment planning and monitoring of patient compliance [25]. However, it simultaneously raises ethical considerations related to data privacy, transparency of algorithmic processes, and the preservation of the dentist–patient relationship. Recent reports from the World Health Organization (2023) and the European Commission (2024) emphasize that AI in healthcare must be developed and implemented within a human-centered, transparent, and ethically robust framework [26,27]. Recommendations highlight the need for accountability mechanisms, continuous professional training, and regulatory oversight to ensure that AI systems enhance clinical decision-making without replacing the clinician–patient relationship [28].
Despite their potential, AI applications remain prone to errors. Misclassification, biased predictions, and opaque “black box” decision-making processes (i.e., situations in which an AI system generates outputs or recommendations without offering transparency regarding the internal reasoning or computational pathways that led to such results) can result in patient harm and loss of trust. The WHO (2023) recommendations suggest fostering “optimal trust,” where clinicians critically evaluate AI outputs rather than relying on them blindly [26,28]. Importantly, errors are not limited to algorithmic miscalculations but also derive from biased or incomplete training datasets, potentially exacerbating health inequalities. Moreover, echo chamber dynamics may amplify and reinforce a participant’s pre-existing beliefs through repetition and algorithmic biases. These risks highlight the need for careful design, monitoring, and human oversight in the deployment of conversational AI tools in clinical and health promotion contexts.
Future developments in AI should focus on explainability, interoperability, and integration into multidisciplinary care pathways. The WHO (2023) calls for investment in transparent models that provide not only predictions but also justifications for their recommendations [26]. Furthermore, the European Commission (2024) envisions AI as a catalyst for more resilient healthcare systems, particularly through predictive analytics, personalized medicine, and digital public health strategies [27].
Clinical adoption of AI remains hindered by challenges such as limited interpretability, data privacy concerns, regulatory fragmentation, and infrastructural disparities. Suggested solutions involve robust governance structures, mandatory reporting of serious incidents, transparent communication of risks and benefits to patients, and investment in equitable access to AI-enabled care [28]. Collaborative approaches, where clinicians, patients, policymakers, and technologists co-design AI solutions, are increasingly recommended as a strategy to align innovation with clinical and societal needs.
The systematic review by Maganur et al. (2024) synthesized evidence from 16 studies published between 2018 and 2023 on AI models for tooth detection and numbering in dental radiographs [10]. AI models, predominantly based on convolutional neural networks, demonstrated very high performance, with some studies reporting precision up to 99.4% for detection and 98.0% for numbering. Nevertheless, heterogeneity in datasets and limited dataset size were recurrent limitations. Importantly, the review, which can also serve as a lens for appraising other study designs [29], discusses the challenge of reliably recognizing deciduous teeth, citing morphological variability, overlapping structures, and limited training examples as major obstacles to robust performance in primary dentition. Overall, the review supports the potential of AI as a diagnostic adjunct while emphasizing the need for rigorous validation and for methods specifically tailored to primary tooth recognition.
As can be seen in Table 3, the Ultralytics YOLOv11-based model in this study achieved competitive detection performance while also providing polygonal segmentation, thereby offering an analytical depth that extends beyond bounding-box detection. Beser et al. (2024), employing YOLOv5 on a dataset of 3854 pediatric PRs, reported slightly higher metrics for tooth detection and segmentation [19]. However, their work did not stratify performance by dentition type nor provide detailed error characterization, both of which are essential for clinical translation. Similarly, Kaya et al. (2022) obtained robust detection results with YOLOv4 on 4545 pediatric PRs, yet their methodology remained limited to bounding-box annotations without segmentation or differentiation between primary and permanent dentition [15]. Kilic et al. (2021), focusing exclusively on deciduous teeth with Faster R-CNN Inception v2, reported high accuracy, but their study lacked applicability to mixed dentition contexts [30]. More recently, Peker and Kurtoglu (2025) investigated YOLOv10 performance on a limited dataset (n = 200) and achieved satisfactory performance, yet again relying solely on bounding-box annotations [16].
Our stratified analysis highlighted a clear discrepancy between permanent and deciduous teeth, reflecting the inherent morphological variability, smaller size, and frequent overlap of primary teeth with developing permanent successors. This dentition-specific evaluation—largely absent in previous studies—provides clinically relevant insights by identifying priority areas for methodological refinement. Detailed error analysis further revealed that misclassifications were concentrated at dentition boundaries (notably teeth #51, #52, and #82), where overlapping anatomical structures reduce localization and boundary precision.
In the present study, the key methodological innovation was implementing a pre-annotation strategy utilizing transfer learning from an adult radiographic dataset. Initial training on 650 adult PRs from the publicly available DENTEX dataset enabled automatic generation of provisional labels for mixed dentition images, followed by systematic expert manual refinement. This hybrid two-stage approach represents a paradigm shift from conventional fully manual annotation workflows, significantly reducing labeling burden while maintaining high accuracy standards through targeted expert correction of deciduous teeth and developing permanent teeth that could be challenging to recognize. Unlike previous investigations that relied exclusively on manual annotation, our methodology optimizes annotation efficiency by exploiting the morphological similarities between fully developed permanent teeth across age groups, while ensuring accurate classification of primary dentition through specialized expert review. Moreover, the adoption of optimized hyperparameters specific to dental radiographic characteristics, such as limiting scale augmentation to preserve tooth proportions and disabling left-right flips to avoid numeration errors, further enhanced model performance.
This multifaceted approach represents a significant advancement toward developing robust AI systems optimized for pediatric dental imaging applications. Standardized automated analysis could reduce interpretation time and improve communication with patients through consistent diagnostic terminology, particularly relevant in healthcare systems with limited pediatric specialist access [31]. Accurate tooth identification represents an essential prerequisite for developing AI algorithms that analyze tooth position, orientation, and eruption trajectories. Looking ahead, in clinical practice, given that ectopic eruption poses significant orthodontic challenges when diagnosed late [32,33], AI-based predictive models could facilitate early detection when interceptive treatments remain most effective [9,32], rather than focusing on the detection of already impacted teeth [12,13]. Such predictive frameworks could substantially improve diagnostic efficiency while reducing clinician-dependent variability in mixed dentition evaluation.
Despite achieving competitive performance, some study limitations warrant acknowledgment. The dataset size could benefit from expansion to enhance generalizability across diverse pediatric populations, varied imaging protocols, and equipment specifications. The integration of multi-center validation studies would further strengthen the evidence base for clinical applications. Additionally, the performance gap between permanent and deciduous tooth detection suggests the need for specialized training strategies or augmented datasets focusing on primary dentition morphology. The model’s performance across different stages of dental development and varying numbers of present teeth (supernumerary teeth or agenesis) also remains to be fully characterized, representing an important area for future investigation that should include comprehensive evaluation across early mixed dentition, late mixed dentition, and transitional phases of dental development.

5. Conclusions

This study establishes the feasibility and clinical relevance of DL-based automated tooth recognition and segmentation in pediatric mixed dentition PRs. The developed system achieved excellent agreement with expert annotations, demonstrating a robust performance across diverse developmental stages, potentially improving diagnostic efficiency and reducing interpretive variability in mixed dentition PRs. The integration of an innovative hybrid pre-annotation strategy (leveraging transfer learning from a publicly available adult radiographic dataset, followed by systematic expert manual refinement) together with dentition-type performance stratification and detailed error characterization establishes a robust foundation for clinical translation while delineating specific areas for further advancement in AI-based applications for mixed dentition. Future multicenter validation studies will be essential for confirming the broader applicability and clinical utility of this approach across diverse healthcare settings and patient populations.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/diagnostics15202615/s1, Table S1: STROBE Statement—Checklist of items that should be included in reports of cross-sectional studies.

Author Contributions

Conceptualization, S.I.P. and G.A.-B.; methodology, software, formal analysis, G.T.; investigation, G.A.-B., A.M., G.L. and A.F.; writing—original draft preparation, A.M.; writing—review and editing, S.I.P., D.R. and L.B.; supervision, D.R. and L.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Giorgio Tsiotas was supported by Ministerial Decree No. 352 of April 2022 through the National Recovery and Resilience Plan (NRRP), funded by the European Union's NextGenerationEU, under Mission 4 "Education and Research," Component 2 "From Research to Business," Investment 3.3, under Grant CUP J33C22001400009 and Grant DOT1303952-2283.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Ethics Committee CE-AVEC (approval number: 293-2024-AUSLBO; 16 October 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
PR: Panoramic Radiograph
DL: Deep Learning
LLM: Large Language Model
FDI: Fédération Dentaire Internationale
DIBINEM: Department of Biomedical and Neuromotor Sciences
DISI: Department of Computer Science and Engineering
MICCAI: Medical Image Computing and Computer-Assisted Intervention
DENTEX: Dental Enumeration and Diagnosis on Panoramic-X-rays
mAP: Mean Average Precision
IoU: Intersection over Union

References

  1. Lee, S.J.; Poon, J.; Jindarojanakul, A.; Huang, C.-C.; Viera, O.; Cheong, C.W.; Lee, J.D. Artificial intelligence in dentistry: Exploring emerging applications and future prospects. J. Dent. 2025, 155, 105648. [Google Scholar] [CrossRef]
  2. Buldur, M.; Sezer, B. Evaluating the accuracy of Chat Generative Pre-trained Transformer version 4 (ChatGPT-4) responses to United States Food and Drug Administration (FDA) frequently asked questions about dental amalgam. BMC Oral Health 2024, 24, 605. [Google Scholar] [CrossRef]
  3. Sezer, B.; Aydoğdu, T. Performance of advanced artificial intelligence models in pulp therapy for immature permanent teeth: A comparison of ChatGPT-4 Omni, DeepSeek, and Gemini Advanced in accuracy, completeness, response time, and readability. J. Endod. 2025; in press. [Google Scholar] [CrossRef]
  4. Incerti Parenti, S.; Bartolucci, M.L.; Biondi, E.; Maglioni, A.; Corazza, G.; Gracco, A.; Alessandri-Bonetti, G. Online patient education in obstructive sleep apnea: ChatGPT versus Google Search. Healthcare 2024, 12, 1781. [Google Scholar] [CrossRef]
  5. Kapoor, D.; Garg, D.; Tadakamadla, S.K. Brush, byte, and bot: Quality comparison of artificial intelligence-generated pediatric dental advice across ChatGPT, Gemini, and Copilot. Front. Oral Health 2025, 6, 1652422. [Google Scholar] [CrossRef] [PubMed]
  6. Mukhopadhyay, A.; Mukhopadhyay, S.; Biswas, R. Evaluation of large language models in pediatric dentistry: A Bloom’s taxonomy-based analysis. Folia Med. 2025, 67, e154338. [Google Scholar] [CrossRef] [PubMed]
  7. Sallam, M. ChatGPT utility in healthcare education, research, and practice: Systematic review on the promising perspectives and valid concerns. Healthcare 2023, 11, 887. [Google Scholar] [CrossRef]
  8. American Academy of Pediatric Dentistry. Management of the developing dentition and occlusion in pediatric dentistry. Pediatr. Dent. 2018, 40, 352–365. [Google Scholar]
  9. Ericson, S.; Kurol, J. Early treatment of palatally erupting maxillary canines by extraction of the primary canines. Eur. J. Orthod. 1988, 10, 283–295. [Google Scholar] [CrossRef]
  10. Maganur, P.C.; Vishwanathaiah, S.; Mashyakhy, M.; Abumelha, A.S.; Robaian, A.; Almohareb, T.; Almutairi, B.; Alzahrani, K.M.; Binalrimal, S.; Marwah, N.; et al. Development of artificial intelligence models for tooth numbering and detection: A systematic review. Int. Dent. J. 2024, 74, 917–929. [Google Scholar] [CrossRef]
  11. Rokhshad, R.; Mohammad, F.D.; Nomani, M.; Mohammad-Rahimi, H.; Schwendicke, F. Chatbots for conducting systematic reviews in pediatric dentistry. J. Dent. 2025, 158, 105733. [Google Scholar] [CrossRef] [PubMed]
  12. Alenezi, O.; Bhattacharjee, T.; Alseed, H.A.; Tosun, Y.I.; Chaudhry, J.; Prasad, S. Evaluating the efficacy of various deep learning architectures for automated preprocessing and identification of impacted maxillary canines in panoramic radiographs. Int. Dent. J. 2025, 75, 100940. [Google Scholar] [CrossRef] [PubMed]
  13. Minhas, S.; Wu, T.H.; Kim, D.G.; Chen, S.; Wu, Y.C.; Ko, C.C. Artificial intelligence for 3D reconstruction from 2D panoramic X-rays to assess maxillary impacted canines. Diagnostics 2024, 14, 196. [Google Scholar] [CrossRef] [PubMed]
  14. Özçelik, S.T.A.; Üzen, H.; Şengür, A.; Fırat, H.; Türkoğlu, M.; Çelebi, A.; Gül, S.; Sobahi, N.M. Enhanced panoramic radiograph-based tooth segmentation and identification using an attention gate-based encoder-decoder network. Diagnostics 2024, 14, 2719. [Google Scholar] [CrossRef]
  15. Kaya, E.; Gunec, H.G.; Gokyay, S.S.; Kutal, S.; Gulum, S.; Ates, H.F. Proposing a CNN method for primary and permanent tooth detection and enumeration on pediatric dental radiographs. J. Clin. Pediatr. Dent. 2022, 46, 293–298. [Google Scholar] [CrossRef]
  16. Peker, R.B.; Kurtoglu, C.O. Evaluation of the performance of a YOLOv10-based deep learning model for tooth detection and numbering on panoramic radiographs of patients in the mixed dentition period. Diagnostics 2025, 15, 405. [Google Scholar] [CrossRef]
  17. Arslan, C.; Yucel, N.O.; Kahya, K.; Sunal Akturk, E.; Germec Cakan, D. Artificial intelligence for tooth detection in cleft lip and palate patients. Diagnostics 2024, 14, 2849. [Google Scholar] [CrossRef]
  18. Bakhsh, H.H.; Alomair, D.; AlShehri, N.A.; Alturki, A.U.; Allam, E.; ElKhateeb, S.M. Validation of an artificial intelligence-based software for the detection and numbering of primary teeth on panoramic radiographs. Diagnostics 2025, 15, 1489. [Google Scholar] [CrossRef]
  19. Beser, B.; Reis, T.; Berber, M.N.; Topaloglu, E.; Gungor, E.; Kılıc, M.C.; Duman, S.; Çelik, Ö.; Kuran, A.; Bayrakdar, I.S. YOLOv5-based deep learning approach for tooth detection and segmentation on pediatric panoramic radiographs in mixed dentition. BMC Med. Imaging 2024, 24, 172. [Google Scholar] [CrossRef]
  20. Jocher, G.; Qiu, J. Ultralytics YOLO11 [Software], version 11.0.0; Ultralytics: Frederick, MD, USA, 2024; License: AGPL-3.0. Available online: https://github.com/ultralytics/ultralytics (accessed on 1 November 2024).
  21. Mongan, J.; Moy, L.; Kahn, C.E., Jr. Checklist for artificial intelligence in medical imaging (CLAIM): A guide for authors and reviewers. Radiol. Artif. Intell. 2020, 2, e200029. [Google Scholar] [CrossRef]
  22. von Elm, E.; Altman, D.G.; Egger, M.; Pocock, S.J.; Gøtzsche, P.C.; Vandenbroucke, J.P.; STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. J. Clin. Epidemiol. 2008, 61, 344–349. [Google Scholar] [CrossRef]
  23. Hamamci, I.E.; Er, S.; Simsar, E.; Yuksel, A.E.; Gultekin, S.; Ozdemir, S.D.; Yang, K.; Li, H.B.; Pati, S.; Stadlinger, B.; et al. DENTEX: An abnormal tooth detection with dental enumeration and diagnosis benchmark for panoramic X-rays. arXiv 2023, arXiv:2305.19112. [Google Scholar] [CrossRef]
  24. Hamamci, I.E.; Er, S.; Simsar, E.; Sekuboyina, A.; Gundogar, M.; Stadlinger, B.; Mehl, A.; Menze, B. Diffusion-based hierarchical multi-label object detection to analyze panoramic dental X-rays. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2023, Proceedings of 26th International Conference, Vancouver, BC, Canada, 8–12 October 2023; Greenspan, H., Madabhushi, A., Mousavi, P., Salcudean, S., Duncan, J., Syeda-Mahmood, T., Taylor, R., Eds.; Springer: Cham, Switzerland, 2023; Volume 14225, pp. 389–399. [Google Scholar] [CrossRef]
  25. Torsello, F.; D’Amico, G.; Staderini, E.; Marigo, L.; Cordaro, M.; Castagnola, R. Factors Influencing Appliance Wearing Time during Orthodontic Treatments: A Literature Review. Appl. Sci. 2022, 12, 7807. [Google Scholar] [CrossRef]
  26. World Health Organization. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance; World Health Organization: Geneva, Switzerland, 2023; Available online: https://www.who.int/publications/i/item/9789240084759 (accessed on 1 September 2025).
  27. European Commission. Artificial Intelligence in Healthcare; European Commission: Brussels, Belgium, 2024; Available online: https://health.ec.europa.eu/ehealth-digital-health-and-care/artificial-intelligence-healthcare_en (accessed on 1 September 2025).
  28. Smith, A.; Arena, R.; Bacon, S.L.; Faghy, M.A.; Grazzi, G.; Raisi, A.; Vermeesch, A.L.; Ong’wen, M.; Popovic, D.; Pronk, N.P. Recommendations on the use of artificial intelligence in health promotion. Prog. Cardiovasc. Dis. 2024, 87, 37–43. [Google Scholar] [CrossRef]
  29. Patini, R.; Staderini, E.; Camodeca, A.; Guglielmi, F.; Gallenzi, P. Case reports in pediatric dentistry journals: A systematic review about their effect on impact factor and future investigations. Dent. J. 2019, 7, 103. [Google Scholar] [CrossRef]
  30. Kılıc, M.C.; Bayrakdar, I.S.; Çelik, Ö.; Bilgir, E.; Orhan, K.; Aydın, O.B.; Kaplan, F.A.; Sağlam, H.; Odabaş, A.; Aslan, A.F.; et al. Artificial intelligence system for automatic deciduous tooth detection and numbering in panoramic radiographs. Dentomaxillofac Radiol. 2021, 50, 20200172. [Google Scholar] [CrossRef]
  31. Surdu, S.; Dall, T.M.; Langelier, M.; Forte, G.J.; Chakrabarti, R.; Reynolds, R.L. The paediatric dental workforce in 2016 and beyond. J. Am. Dent. Assoc. 2019, 150, 609–617.e5. [Google Scholar] [CrossRef]
  32. Alessandri Bonetti, G.; Zanarini, M.; Incerti Parenti, S.; Marini, I.; Gatto, M.R. Preventive treatment of ectopically erupting maxillary permanent canines by extraction of deciduous canines and first molars: A randomized clinical trial. Am. J. Orthod. Dentofac. Orthop. 2011, 139, 316–323. [Google Scholar] [CrossRef] [PubMed]
  33. Barlow, S.T.; Moore, M.B.; Sherriff, M.; Ireland, A.J.; Sandy, J.R. Palatally impacted canines and the modified index of orthodontic treatment need. Eur. J. Orthod. 2009, 31, 362–366. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The pre-annotation phase was conducted using the preliminary model trained on the adult dataset DENTEX in order to streamline the subsequent labeling process. The numbers represent the type of tooth according to the FDI numbering system. Each tooth was labeled with a unique, non-semantic color.
Figure 2. Labeling of the pediatric dataset was performed using a customized open-source tool. Annotations included deciduous and permanent teeth and were made using a segmented line to outline the shape of the tooth. The numbers represent the type of tooth according to the FDI numbering system. Each tooth was labeled with a unique, non-semantic color.
Figure 3. Example of DL model’s performance. The final model was able to recognize deciduous teeth and developing permanent teeth. The numbers represent the type of tooth according to the FDI numbering system. Each tooth was labeled with a unique, non-semantic color.
Figure 4. Normalized confusion matrix of Ultralytics YOLOv11 predictions (vertical axis) versus ground truth (expert annotation, horizontal axis).
Table 1. Bounding box performance (detection task). Precision, recall, mAP0.5, mAP50-95, and F1 scores by tooth number.
Class | Precision (P) | Recall (R) | mAP0.5 | mAP50-95 | F1 Score
11 | 0.989 | 0.993 | 0.995 | 0.708 | 0.991
12 | 0.962 | 0.980 | 0.990 | 0.627 | 0.971
13 | 0.983 | 0.987 | 0.992 | 0.645 | 0.985
14 | 0.961 | 0.974 | 0.985 | 0.520 | 0.967
15 | 0.988 | 1.000 | 0.995 | 0.573 | 0.994
16 | 0.991 | 0.926 | 0.960 | 0.611 | 0.957
17 | 0.973 | 0.919 | 0.941 | 0.669 | 0.945
18 | 0.946 | 0.990 | 0.980 | 0.620 | 0.968
21 | 0.982 | 0.987 | 0.994 | 0.669 | 0.984
22 | 0.986 | 0.986 | 0.987 | 0.598 | 0.986
23 | 0.991 | 0.987 | 0.987 | 0.626 | 0.989
24 | 0.963 | 1.000 | 0.995 | 0.489 | 0.981
25 | 0.958 | 0.987 | 0.993 | 0.622 | 0.972
26 | 0.976 | 0.993 | 0.995 | 0.718 | 0.984
27 | 0.987 | 0.908 | 0.920 | 0.632 | 0.946
28 | 0.991 | 0.975 | 0.993 | 0.624 | 0.983
31 | 0.976 | 0.993 | 0.995 | 0.491 | 0.984
32 | 0.979 | 0.993 | 0.995 | 0.509 | 0.986
33 | 0.990 | 0.993 | 0.995 | 0.554 | 0.992
34 | 0.976 | 0.980 | 0.993 | 0.550 | 0.978
35 | 1.000 | 0.982 | 0.995 | 0.550 | 0.991
36 | 0.993 | 0.974 | 0.988 | 0.694 | 0.984
37 | 0.976 | 0.981 | 0.994 | 0.664 | 0.978
38 | 0.957 | 0.986 | 0.988 | 0.555 | 0.972
41 | 0.987 | 0.991 | 0.995 | 0.474 | 0.989
42 | 0.982 | 0.974 | 0.985 | 0.502 | 0.978
43 | 0.988 | 0.993 | 0.995 | 0.575 | 0.991
44 | 0.964 | 0.974 | 0.984 | 0.557 | 0.969
45 | 0.977 | 0.993 | 0.993 | 0.493 | 0.985
46 | 0.958 | 0.986 | 0.987 | 0.617 | 0.972
47 | 0.919 | 0.942 | 0.980 | 0.683 | 0.930
48 | 0.967 | 0.985 | 0.992 | 0.623 | 0.976
51 | 0.553 | 0.800 | 0.635 | 0.179 | 0.654
52 | 0.643 | 0.778 | 0.780 | 0.400 | 0.704
53 | 0.948 | 0.921 | 0.981 | 0.490 | 0.934
54 | 0.914 | 0.950 | 0.965 | 0.464 | 0.932
55 | 0.963 | 0.941 | 0.972 | 0.457 | 0.952
61 | 0.846 | 1.000 | 0.995 | 0.455 | 0.917
62 | 0.874 | 0.693 | 0.849 | 0.319 | 0.773
63 | 0.963 | 0.877 | 0.962 | 0.470 | 0.918
64 | 0.955 | 0.917 | 0.940 | 0.484 | 0.936
65 | 0.931 | 1.000 | 0.994 | 0.563 | 0.964
72 | 1.000 | 0.634 | 0.863 | 0.349 | 0.776
73 | 1.000 | 0.947 | 0.965 | 0.410 | 0.973
74 | 0.959 | 0.950 | 0.980 | 0.387 | 0.955
75 | 0.930 | 0.866 | 0.963 | 0.401 | 0.897
82 | 0.809 | 0.714 | 0.793 | 0.310 | 0.759
83 | 0.874 | 1.000 | 0.994 | 0.448 | 0.933
84 | 0.954 | 0.975 | 0.992 | 0.339 | 0.965
85 | 0.956 | 0.969 | 0.990 | 0.372 | 0.962
Table 2. Segmentation performance. Precision, recall, mAP0.5, mAP50-95, and F1 scores by tooth number.
Class | Precision (P) | Recall (R) | mAP0.5 | mAP50-95 | F1 Score
11 | 0.975 | 0.980 | 0.974 | 0.586 | 0.978
12 | 0.955 | 0.973 | 0.977 | 0.415 | 0.964
13 | 0.976 | 0.980 | 0.983 | 0.420 | 0.978
14 | 0.922 | 0.935 | 0.933 | 0.351 | 0.928
15 | 0.907 | 0.918 | 0.895 | 0.259 | 0.913
16 | 0.978 | 0.914 | 0.932 | 0.492 | 0.945
17 | 0.940 | 0.888 | 0.896 | 0.495 | 0.913
18 | 0.865 | 0.905 | 0.891 | 0.399 | 0.884
21 | 0.963 | 0.966 | 0.974 | 0.368 | 0.965
22 | 0.946 | 0.945 | 0.943 | 0.386 | 0.945
23 | 0.937 | 0.933 | 0.930 | 0.352 | 0.935
24 | 0.865 | 0.899 | 0.853 | 0.276 | 0.882
25 | 0.952 | 0.980 | 0.984 | 0.468 | 0.966
26 | 0.963 | 0.980 | 0.987 | 0.563 | 0.971
27 | 0.974 | 0.896 | 0.910 | 0.481 | 0.933
28 | 0.917 | 0.902 | 0.921 | 0.407 | 0.909
31 | 0.884 | 0.900 | 0.836 | 0.225 | 0.892
32 | 0.925 | 0.939 | 0.915 | 0.294 | 0.932
33 | 0.983 | 0.987 | 0.986 | 0.414 | 0.985
34 | 0.916 | 0.920 | 0.889 | 0.297 | 0.918
35 | 0.986 | 0.968 | 0.975 | 0.415 | 0.977
36 | 0.987 | 0.968 | 0.986 | 0.537 | 0.977
37 | 0.950 | 0.955 | 0.956 | 0.534 | 0.953
38 | 0.948 | 0.976 | 0.977 | 0.370 | 0.962
41 | 0.947 | 0.951 | 0.944 | 0.257 | 0.949
42 | 0.889 | 0.881 | 0.844 | 0.234 | 0.885
43 | 0.975 | 0.980 | 0.979 | 0.407 | 0.977
44 | 0.938 | 0.947 | 0.939 | 0.370 | 0.943
45 | 0.957 | 0.972 | 0.968 | 0.308 | 0.965
46 | 0.953 | 0.980 | 0.981 | 0.383 | 0.966
47 | 0.900 | 0.923 | 0.952 | 0.490 | 0.911
48 | 0.932 | 0.949 | 0.951 | 0.414 | 0.940
51 | 0.555 | 0.800 | 0.635 | 0.162 | 0.655
52 | 0.644 | 0.778 | 0.780 | 0.230 | 0.704
53 | 0.933 | 0.905 | 0.968 | 0.316 | 0.919
54 | 0.842 | 0.875 | 0.837 | 0.203 | 0.858
55 | 0.945 | 0.923 | 0.945 | 0.338 | 0.934
61 | 0.678 | 0.800 | 0.642 | 0.193 | 0.734
62 | 0.632 | 0.500 | 0.554 | 0.140 | 0.558
63 | 0.838 | 0.763 | 0.738 | 0.149 | 0.799
64 | 0.887 | 0.852 | 0.853 | 0.339 | 0.869
65 | 0.916 | 0.983 | 0.981 | 0.465 | 0.948
72 | 0.771 | 0.490 | 0.616 | 0.204 | 0.599
73 | 0.824 | 0.781 | 0.687 | 0.187 | 0.802
74 | 0.934 | 0.925 | 0.960 | 0.307 | 0.930
75 | 0.930 | 0.866 | 0.965 | 0.310 | 0.897
82 | 0.655 | 0.571 | 0.612 | 0.095 | 0.610
83 | 0.851 | 0.973 | 0.959 | 0.232 | 0.908
84 | 0.783 | 0.800 | 0.748 | 0.127 | 0.792
85 | 0.913 | 0.924 | 0.949 | 0.258 | 0.918
Table 3. Comparison of recent studies on automated analysis of pediatric panoramic radiographs and the present work. The table summarizes dataset size, dentition stage, task (detection, segmentation, enumeration), model type and main reported metrics.
Study | n (PRs) | Age/Dentition | Task(s) | Model | Detection mAP0.5/F1 | Segmentation mAP0.5/F1
Kaya et al., 2022 [15] | 4545 | Pediatric, mixed | Detection + numbering | YOLOv4 | 0.92/0.91 | -
Beser et al., 2024 [19] | 3854 | Pediatric, mixed | Detection + segmentation | YOLOv5 | 0.98/0.99 | 0.98
Peker and Kurtoglu, 2025 [16] | 200 | Pediatric, mixed | Detection | YOLOv10 | 0.968/0.919 | -
Kilic et al., 2021 [30] | 1125 | Pediatric, deciduous | Detection + numbering | Faster R-CNN | 0.93/0.91 | -
Our work | 250 | Pediatric, mixed | Detection + segmentation + enumeration | YOLOv11-seg | 0.963/0.953 | 0.89