Review

Artificial Intelligence Applications in Pediatric Craniofacial Surgery

1 Department of Plastic Surgery, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
2 Analytical Imaging and Modeling Center, Children’s Health Medical Center, Dallas, TX 75235, USA
* Author to whom correspondence should be addressed.
Diagnostics 2025, 15(7), 829; https://doi.org/10.3390/diagnostics15070829
Submission received: 25 February 2025 / Revised: 9 March 2025 / Accepted: 19 March 2025 / Published: 25 March 2025

Abstract

Artificial intelligence is rapidly transforming pediatric craniofacial surgery by enhancing diagnostic accuracy, improving surgical precision, and optimizing postoperative care. Machine learning and deep learning models are increasingly used to analyze complex craniofacial imaging, enabling early detection of congenital anomalies such as craniosynostosis and cleft lip and palate. AI-driven algorithms assist in preoperative planning by identifying anatomical abnormalities, predicting surgical outcomes, and guiding personalized treatment strategies. In cleft lip and palate care, AI enhances prenatal detection, severity classification, and the design of custom therapeutic devices, while also refining speech evaluation. For craniosynostosis, AI supports automated morphology classification, severity scoring, and the assessment of surgical indications, thereby promoting diagnostic consistency and predictive outcome modeling. In orthognathic surgery, AI-driven analyses, including skeletal maturity evaluation and cephalometric assessment, inform diagnosis and the optimal timing of intervention. Furthermore, in cases of craniofacial microsomia and microtia, AI improves phenotypic classification and surgical planning through precise intraoperative navigation. These advancements underscore AI’s transformative role in diagnostic accuracy and clinical decision-making, highlighting its potential to significantly enhance evidence-based pediatric craniofacial care.

1. Introduction

Artificial intelligence (AI), driven by machine learning (ML) and deep learning (DL) models, is poised to transform the landscape of modern medical care. These advanced computational techniques learn from diverse datasets—including patient records, imaging studies, genetic profiles, and physiological signals—to provide actionable insights that enhance diagnostic accuracy, refine therapeutic strategies, and improve care coordination [1,2]. In clinical practice, AI has demonstrated capabilities such as robust image interpretation, precise prognostic modeling, and the identification of complex disease phenotypes, enabling data-driven decision support and personalized treatment protocols [3].
At the core of AI’s advancements in medicine are ML and DL techniques. Machine learning encompasses a range of algorithms that enable systems to learn from and make predictions based on data without being explicitly programmed for specific tasks [4,5]. Deep learning, a subset of ML, utilizes multi-layered neural networks to model complex patterns and representations in large datasets [6,7]. These technologies have proven particularly effective in handling high-dimensional data typical of medical imaging and genomics, providing superior performance in tasks such as image classification, segmentation, and anomaly detection [8,9]. The ability of DL models to autonomously extract and optimize features from raw data minimizes the need for manual feature engineering, thereby accelerating the development and deployment of AI-driven solutions in clinical settings [10,11].
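The core idea above, that models fit their parameters from example data rather than following hand-coded rules, can be illustrated with a minimal logistic-regression sketch. The one-feature toy dataset and hyperparameters below are purely illustrative assumptions, not from any cited study; real clinical models operate on far richer inputs within validated pipelines.

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit a 1-D logistic regression (weight w, bias b) by gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid output
            gw += (p - y) * x                          # gradient w.r.t. w
            gb += (p - y)                              # gradient w.r.t. b
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Threshold the sigmoid at 0.5 to produce a class label."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Toy data: a single normalized feature separating two classes.
# The decision boundary is learned from the examples themselves.
xs = [0.1, 0.3, 0.4, 0.6, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
```

Deep learning stacks many such learned transformations in sequence, which is what allows networks to extract features directly from raw images without manual feature engineering.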
Pediatric craniofacial surgery stands to benefit substantially from these technologies [12,13]. Complex anatomy, developmental factors, and the need for nuanced, patient-specific interventions require precise preoperative assessment and sophisticated three-dimensional surgical planning. AI-driven analysis can delineate anatomical landmarks, integrate multiple imaging modalities, and highlight subtle risk factors influencing perioperative decision-making [14,15]. Beyond surgical planning, real-time augmented reality (AR) guidance can align the operative field with preoperative models, enhancing intraoperative navigation and surgical accuracy. Postoperatively, large-scale data analysis can identify outcome-related patterns and refine future treatment strategies. Moreover, AI-driven patient engagement tools, including chatbots and natural language processing (NLP)-based educational platforms, can improve patient-family communication, ensuring accessible and comprehensible information tailored to individual needs [16,17].
As pediatric craniofacial procedures increasingly incorporate these technologies, the resulting synergy promises to advance both immediate and long-term patient outcomes. Enhanced predictive modeling may guide intervention timing, precise segmentation could streamline complex reconstructions, and AI-informed feedback from large postoperative datasets may accelerate technique refinement. Such innovations drive a shift toward data-centric methodologies, moving beyond conventional reliance on individual surgical expertise.
This review aims to provide a comprehensive examination of how AI is shaping pediatric craniofacial surgery, with a specific focus on its applications and implications in managing cleft lip and palate, velopharyngeal insufficiency, orthognathic conditions, craniosynostosis, craniofacial microsomia, and microtia. By exploring advances in AI-driven diagnostics, surgical planning, intraoperative guidance, and postoperative assessment across these key areas, as well as examining the role of AI in surgical education, this review seeks to highlight emerging opportunities, delineate the current evidence base, and guide future research and clinical translation. In doing so, it aims to inform surgeons, researchers, and educators about the transformative potential of AI tools to enhance patient outcomes, optimize clinical workflows, and ultimately improve standards of care in pediatric craniofacial practice.
We conducted a literature review using PubMed, Scopus, and Web of Science with search terms including “artificial intelligence”, “machine learning,” “deep learning”, and key pediatric craniofacial conditions (e.g., cleft lip and palate, craniosynostosis). Inclusion criteria were peer-reviewed studies in English focusing on AI applications for diagnosis, surgical planning, intraoperative guidance, or outcome assessment in pediatric craniofacial surgery. Exclusion criteria included adult-only studies, case reports, and papers lacking technical or clinical relevance. Priority was given to studies published in the past 10 years, with exceptions for foundational research where necessary.

2. Cleft Lip and Palate

Cleft lip and palate are among the most common congenital anomalies, affecting approximately 1 in 700 live births globally [18]. These conditions result from incomplete fusion of the tissues of the upper lip and/or palate during embryonic development, leading to defects that can manifest as a cleft lip, a cleft palate, or a combination of both, with varying degrees of severity. A cleft lip may involve only the lip or extend to include the alveolar ridge and nose, causing functional and aesthetic challenges. Similarly, a cleft palate can range from a partial opening in the soft palate to a complete separation involving both the hard and soft palates, impacting essential functions such as feeding, speech, and dental development.
The etiology of cleft lip and palate is multifactorial, with contributions from genetic predisposition and environmental factors, including maternal health, nutrition, and teratogenic exposures during pregnancy. Advances in AI have facilitated the study of genetic variants associated with orofacial clefts. For example, convolutional neural networks (CNNs) have been used to analyze single-nucleotide polymorphism (SNP) activity for predicting non-syndromic cleft lip with or without cleft palate [19]. Similarly, genetic algorithm-optimized neural networks have been used to evaluate SNPs for predicting non-syndromic cleft lip with or without cleft palate in the Korean population [20].
Early detection of these congenital malformations allows timely counseling and referral to a multidisciplinary cleft team. Management of cleft lip and palate requires a comprehensive, team-based approach involving plastic surgeons, orthodontists, speech therapists, and other specialists. Prenatal diagnosis, often achieved through ultrasonography, can identify these anomalies; however, the small size of fetal structures and the echogenicity of the palate bones make detection challenging [21,22]. AI-assisted techniques, such as syntactic pattern recognition and deep learning algorithms, have improved diagnostic accuracy in ultrasound imaging. Jurek et al. demonstrated a syntactic pattern recognition approach for cleft palate detection to assist physicians in diagnosis [21]. Li et al. reported a deep learning algorithm for ultrasound-based detection with a diagnostic accuracy of 92.5% in a cohort of 632 pregnant women [22].
AI applications have expanded postnatally to improve the diagnosis and classification of cleft conditions [23,24,25,26]. Kuwada et al. utilized deep learning-based models to identify unilateral and bilateral cleft alveolus and palate on panoramic radiographs [23,24,25], although the clinical utility of radiograph-based screening is limited by radiation concerns. Agarwal et al. demonstrated an AI system capable of identifying cleft lip using clinical photographs, achieving an area under the receiver operating characteristic (ROC) curve of 0.95 in a dataset of 58 bilateral and 78 unilateral cleft images [26]. Rosero et al. introduced an automated deep learning approach to assess lip asymmetry in patients with cleft lip, achieving a weighted accuracy of 75% in classifying asymmetry severity levels [27].
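For readers unfamiliar with metrics like the 0.95 area under the ROC curve reported above, the sketch below shows how AUC can be computed from classifier scores using the rank-based (Mann-Whitney) formulation. The labels and scores are synthetic, not taken from the cited study.

```python
def roc_auc(labels, scores):
    """AUC as the probability that a positive case outscores a negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Count correctly ordered positive/negative pairs; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: 1 = cleft present, 0 = absent, with model scores.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
auc = roc_auc(labels, scores)
```

An AUC of 1.0 means every affected case is ranked above every unaffected one; 0.5 is chance-level ranking.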
Furthermore, CNNs have been employed to classify the severity of cleft morphology. McCullough et al. used CNN-based landmark detection to grade the severity of 800 unilateral cleft lips, achieving a correlation coefficient of 0.892 with expert ratings [28]. Hayajneh et al. applied adversarial neural networks with model adaptation to assess baseline severity with high accuracy compared to human raters; the generated ratings correlated closely (0.89) with those of 145 human raters [29].
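Agreement between model-generated severity grades and human ratings, as in the correlation coefficients of roughly 0.89 cited above, is typically summarized with a Pearson correlation. A minimal sketch follows; the grade values are hypothetical, not data from either study.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two rating series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical severity grades (model output vs. averaged human raters).
model_grades = [1.0, 2.0, 2.5, 3.5, 4.0]
human_grades = [1.2, 1.8, 2.7, 3.4, 4.1]
r = pearson(model_grades, human_grades)
```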
Early intervention and tailored treatment plans are essential for optimizing functional outcomes and improving the quality of life for individuals with cleft lip and palate. Nasoalveolar molding (NAM) is a preoperative adjunct that utilizes an intraoral molding plate combined with an external nasal molding device to align the lip, alveolus, palate, and nasal segments. While highly effective, the creation of NAM devices traditionally requires significant expertise and is a time-intensive process. To address these challenges, Schiebl et al. developed an algorithm for automated generation of patient-specific NAM devices for patients with bilateral cleft lip and palate [30]. This algorithm successfully produced 3D-printable NAM devices, facilitating effective treatment for 16 patients.
Augmented Reality (AR) is an advanced technology that integrates digital information into the physical environment, often enhanced by AI-driven machine learning for improved object recognition, interaction, and adaptability. Wearable AR devices provide trainees with a unique perspective, allowing them to view surgical procedures through the eyes of the attending surgeon. These devices facilitate real-time video communication and serve as interactive reference guides during operations [31]. AR technology has demonstrated significant utility in global outreach initiatives, particularly in improving cleft care by enabling remote surgical assistance [32,33]. The virtual presence allows for longitudinal support from experienced physicians, providing guidance and reinforcing techniques acquired during surgical missions [32,33]. A 13-month education program conducted by Vyas et al., which combined augmented reality with on-site teaching, reported improved educational outcomes and enhanced patient care [33].
Alveolar bone grafting is a critical surgical procedure in the management of cleft lip and palate, typically performed during the mixed dentition phase, between 8 and 12 years of age. The procedure involves harvesting autologous bone, often from the iliac crest, and grafting it into the alveolar cleft to restore continuity of the maxillary arch. The goals of alveolar bone grafting include providing structural support for the eruption of permanent teeth, stabilizing the dental arch, and improving both facial aesthetics and functional outcomes.
Recent advancements in AI have facilitated the quantitative assessment of alveolar defects. Wang et al. developed a deep learning-based segmentation protocol to measure the length, width, and height of alveolar defects, enabling precise preoperative planning [34]. Similarly, Fujii et al. demonstrated the utility of automatic segmentation algorithms to evaluate residual grafted bone postoperatively [35]. These AI-driven tools hold significant potential for determining the volume of graft material required for surgical correction and for assessing postoperative outcomes. Miranda et al. utilized CNNs to create an automated classification system for evaluating the severity of alveolar defects based on 3D surface models of patients with cleft lip and palate, providing a robust framework for clinical decision-making [36].
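Once a segmentation model has labeled defect voxels in a CT volume, the graft volume needed can be estimated by counting labeled voxels and scaling by the physical voxel size. The sketch below illustrates this under stated assumptions; the tiny mask and voxel spacing are hypothetical, and the cited studies' actual pipelines are not described in that detail here.

```python
def defect_volume_mm3(mask, spacing_mm):
    """Estimate defect volume from a binary segmentation mask.

    mask: nested lists (slices x rows x columns) of 0/1 voxel labels.
    spacing_mm: (dz, dy, dx) physical size of one voxel in millimeters.
    """
    voxels = sum(v for plane in mask for row in plane for v in row)
    dz, dy, dx = spacing_mm
    return voxels * dz * dy * dx

# Tiny hypothetical mask: four voxels labeled as part of the defect.
mask = [
    [[0, 1, 1], [0, 1, 0]],  # slice 0
    [[0, 1, 0], [0, 0, 0]],  # slice 1
]
volume = defect_volume_mm3(mask, (1.0, 0.5, 0.5))
```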
The intricate nature of cleft lip repair presents significant challenges for surgical trainees due to the steep learning curve and the precision required for successful outcomes. The surgery requires mastery of delicate tissue handling and precise anatomical reconstruction, which necessitate extensive hands-on practice. CNNs based on deep learning architectures and 3D point cloud images have been developed to identify anatomical landmarks on cleft lip images [37,38,39]. These AI-driven tools assist trainees in designing surgical incisions and evaluating post-surgical outcomes, thereby enhancing the training process and improving procedural accuracy [37,38,39,40,41].
Parent and patient education are essential in the management of cleft lip and palate, particularly for ensuring optimal healing and minimizing complications after cleft palate repair. Comprehensive educational materials help parents understand the surgical process and its benefits, including improvements in feeding, speech, and overall oral health. Large language models, such as ChatGPT-4, have been employed to generate detailed and accessible educational content for caregivers [42,43,44,45,46,47,48,49]. Fazilat et al. reported high levels of satisfaction among both caregivers and surgeons with the AI-generated materials, citing their accuracy, comprehensiveness, and clarity [42]. Additionally, Lo et al. implemented AR-based educational tools, which received positive feedback for their interactive and engaging approach, further enhancing caregiver understanding and involvement in the treatment process [50]. A separate study assessing ChatGPT’s responses to patient inquiries on cleft lip repair found that the AI generated generally accurate and comprehensible information; however, the absence of direct citations raises concerns regarding its clinical reliability [51]. To support trainee education, Lebhar et al. explored the use of ChatGPT to generate clear and accessible surgical steps for performing cleft lip repair using the Fischer technique [52]. See Table 1.

2.1. Velopharyngeal Insufficiency

Velopharyngeal insufficiency (VPI) is a common functional complication in individuals with cleft lip and palate, characterized by inadequate closure of the velopharyngeal sphincter during speech. This insufficiency results in hypernasality, nasal air escape, and articulation deficits, significantly affecting speech intelligibility and communication. The etiology of VPI is multifactorial, often arising from structural anomalies or post-surgical scarring that disrupt the coordination and mobility of the soft palate and pharyngeal walls. The evaluation of VPI typically requires a multidisciplinary approach, incorporating speech assessment, nasal endoscopy, and video fluoroscopy.
Speech assessment is a vital component in the management of VPI, as speech and communication are profoundly affected by velopharyngeal dysfunction. Comprehensive speech evaluation includes the analysis of speech intelligibility, resonance, articulation, and compensatory or maladaptive speech patterns such as nasal emissions and glottal stops. Recent advancements in AI have facilitated objective speech analysis through automated processing of patient speech samples [53,54,55,56,57,58,59,60,61,62,63,64,65,66,67]. Mathad et al. developed a deep learning algorithm trained on normative speech data, demonstrating an ability to assess hypernasality with accuracy comparable to that of trained clinicians [53].
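A classical objective correlate of hypernasality is the nasalance score from nasometry: the ratio of nasal acoustic energy to total (nasal plus oral) energy. Deep learning models such as Mathad et al.'s learn far richer representations, but the target construct is similar. The sketch below computes nasalance from two synthetic microphone channels; the sample values are made up for illustration.

```python
def energy(samples):
    """Sum of squared amplitudes of a signal segment."""
    return sum(s * s for s in samples)

def nasalance_percent(nasal, oral):
    """Nasalance = nasal energy / (nasal + oral energy) * 100."""
    en, eo = energy(nasal), energy(oral)
    return 100.0 * en / (en + eo)

# Hypothetical short sample windows from the two microphone channels.
nasal_channel = [0.2, -0.1, 0.3, -0.2]
oral_channel = [0.4, -0.5, 0.6, -0.3]
score = nasalance_percent(nasal_channel, oral_channel)
```

Higher scores indicate proportionally more nasal energy, consistent with hypernasal resonance.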
Beyond speech analysis, imaging modalities such as nasal endoscopy and video fluoroscopy are integral to diagnosing VPI by evaluating nasopharyngeal structure and function (Table 1). Ha et al. applied deep learning techniques to automate the evaluation of video fluoroscopy, achieving performance metrics comparable to those of experienced physicians [68]. Additionally, AI-driven segmentation tools have been developed to enhance the assessment of nasopharyngeal anatomy, providing more precise and reproducible measurements of velopharyngeal motion and closure patterns [69,70,71,72,73].
Management of VPI may include conservative approaches such as speech therapy or prosthetic interventions like a speech bulb or palatal lift. Surgical interventions, including pharyngeal flap surgery, sphincter pharyngoplasty, and revision palatoplasty, are often necessary for cases unresponsive to conservative measures. However, surgical correction can lead to postoperative complications, most notably obstructive sleep apnea (OSA) due to narrowing of the pharyngeal airway [74]. To address this risk, AI-based screening tools have been developed to assist in the early identification and diagnosis of OSA in patients undergoing VPI surgery [75,76,77,78,79,80,81].
AI’s integration into the assessment and management of VPI represents a significant advancement in precision medicine, offering objective diagnostic tools and predictive analytics that optimize patient outcomes. Continued research and validation of these AI-driven methodologies will further refine clinical workflows and enhance the efficacy of VPI management in individuals with cleft-related speech disorders.

2.2. Orthognathic Surgery

Patients with cleft lip and palate often experience maxillary hypoplasia due to intrinsic growth disturbances or surgical scarring from cleft repairs. Orthognathic surgery, typically performed after skeletal maturity, involves repositioning the maxilla, mandible, or both to correct malocclusions, improve facial symmetry, and enhance overall jaw function. Among the available techniques, the Le Fort I osteotomy is the most employed procedure for maxillary advancement, addressing midface retrusion while improving occlusion, speech, and airway function [82].
Accurate timing of orthognathic surgery is critical to achieving optimal outcomes. Kamei et al. developed an AI-driven algorithm for the detection of skeletal maturity in patients with cleft lip and palate, facilitating objective surgical timing [83]. The determination of surgical necessity is largely based on lateral cephalograms, which provide essential data on skeletal discrepancies and malocclusion severity. AI-powered algorithms have been developed to assist in the automated diagnosis and classification of patients requiring orthognathic intervention [84,85,86,87,88,89,90,91].
In patients with midfacial hypoplasia, the decision to proceed with surgical intervention depends on the severity of functional and aesthetic concerns. For individuals with minor discrepancies that do not significantly impact function, non-surgical interventions such as orthodontic treatment may suffice. Machine learning models trained on cephalometric data have been developed to provide predictive guidance on the necessity of orthognathic surgery [92,93,94,95,96,97,98]. Choi et al. applied a machine learning algorithm to a cohort of 316 patients, demonstrating a 96% accuracy in determining the need for surgery and a 91% success rate in predicting the type of surgical intervention and the necessity of extractions [97]. Additionally, early prediction models, such as those developed by Lin et al., have leveraged machine learning techniques to analyze lateral cephalograms and predict the future need for orthognathic surgery in patients as young as six years old, enabling early intervention planning [98].
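The evaluation logic behind such predictive models can be sketched simply: a classifier maps cephalometric features to a surgery/no-surgery label and is scored by accuracy on held-out cases. The single-threshold "model" on the ANB angle and the toy dataset below are purely illustrative assumptions, far simpler than the cited machine learning systems.

```python
def predict_surgery(anb_deg, threshold=7.0):
    """Toy rule: flag a large sagittal (ANB) discrepancy as surgical."""
    return abs(anb_deg) >= threshold

def accuracy(anb_values, labels):
    """Fraction of cases where the rule matches the recorded decision."""
    hits = sum(predict_surgery(a) == y for a, y in zip(anb_values, labels))
    return hits / len(labels)

# Hypothetical held-out cases: ANB angles (degrees) and true decisions.
anb_values = [2.0, 3.5, 8.0, -6.5, 9.5, 1.0]
needs_surgery = [False, False, True, True, True, False]
acc = accuracy(anb_values, needs_surgery)  # one borderline case is missed
```

Real models replace the single threshold with learned combinations of many cephalometric landmarks, which is what enables accuracies in the 90%+ range reported above.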
Traditional orthognathic treatment protocols typically involve a preparatory phase of orthodontic decompensation to align teeth within their respective jaws before surgical repositioning. However, a surgery-first approach has gained popularity for select cases, particularly those with severe skeletal discrepancies. This approach eliminates the need for a pre-surgical orthodontic phase and can shorten overall treatment time. Chang et al. described an AI-based decision-support system that utilizes deep learning models trained on lateral cephalograms to identify candidates suitable for the surgery-first approach, thereby optimizing treatment efficiency [99].
Predicting postoperative facial appearance is a crucial aspect of surgical planning. For patients, these predictive tools provide a clear visualization of anticipated outcomes, setting realistic expectations and alleviating preoperative anxiety. Accurate predictions also enhance patient satisfaction by aligning their aesthetic goals with surgical objectives (Table 1). For surgeons, AI-assisted prediction models enhance surgical planning by ensuring that skeletal modifications translate effectively into the desired facial changes. Advanced AI algorithms have demonstrated high clinical applicability in postoperative outcome prediction [100,101,102,103,104]. Intraoperatively, augmented reality and AI-assisted custom splint fabrication have been integrated into surgical workflows to enhance precision and efficiency [105]. Additionally, AI applications in cephalometric, computed tomography, and three-dimensional imaging analyses have facilitated the objective assessment of post-treatment facial symmetry and aesthetic outcomes [106,107,108,109].
The integration of AI into orthognathic surgery has significantly improved preoperative planning, intraoperative precision, and postoperative assessment, ultimately contributing to enhanced functional and aesthetic results for patients with cleft lip and palate. Ongoing advancements in AI-driven diagnostics, predictive modeling, and intraoperative guidance continue to refine surgical decision-making and optimize treatment outcomes in this complex patient population.

3. Craniosynostosis

Craniosynostosis is the premature fusion of one or more cranial sutures, which normally remain open to accommodate rapid brain and skull growth. Premature suture fusion leads to abnormal cranial morphology and, in severe cases, may contribute to increased intracranial pressure, developmental delays, and neurological complications. Early diagnosis and a multidisciplinary treatment approach, often involving surgical intervention, are crucial for achieving optimal function and aesthetic outcomes.
Deformational plagiocephaly, also known as positional plagiocephaly, is a condition characterized by an asymmetrical skull flattening due to external forces on the malleable infant skull. Unlike craniosynostosis, this condition does not involve premature fusion of cranial sutures and is often managed with conservative interventions. AI-based machine learning models leveraging various imaging modalities have been developed to distinguish between craniosynostosis, deformational plagiocephaly, and normal skull morphology (Table 1) [110,111,112,113,114,115,116,117,118]. Watt et al. utilized a smartphone-based AI algorithm to diagnose deformational plagiocephaly with a sensitivity of 87.5% and specificity of 83.67% [116]. Nguyen et al. further refined AI classification methods by analyzing vertex photographs to stratify deformational plagiocephaly severity based on the Argenta classification [118]. Telehealth and AI-assisted smartphone technologies can improve access to specialized craniofacial care by facilitating remote diagnosis and appropriate referrals [116,117].
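One quantitative input such head-shape classifiers can draw on is the cranial vault asymmetry index (CVAI), the percentage difference between the two diagonal head diameters. The sketch below computes it from hypothetical measurements; the severity cutoff shown is illustrative, as published scales vary.

```python
def cvai_percent(diag_a_mm, diag_b_mm):
    """Cranial vault asymmetry index: % difference of diagonal diameters."""
    longer = max(diag_a_mm, diag_b_mm)
    return 100.0 * abs(diag_a_mm - diag_b_mm) / longer

# Hypothetical diagonal head measurements in millimeters.
cvai = cvai_percent(148.0, 141.0)
severe = cvai >= 8.75  # illustrative cutoff; clinical scales differ
```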
Craniosynostosis can present with varying severity depending on the affected suture, with sagittal, coronal, metopic, and lambdoid fusion each producing a distinct cranial morphology. AI-driven diagnostic models have been developed to accurately classify these conditions [119,120,121,122,123,124,125]. Geisler et al. implemented CNNs to identify and classify non-syndromic craniosynostosis using clinical photographs, achieving an overall diagnostic accuracy of 90.6% [120].
One diagnostic challenge, particularly in metopic craniosynostosis, is differentiating true suture fusion from benign metopic ridging. To improve diagnostic precision, machine learning models analyzing three-dimensional frontal curvature data have been developed [126,127]. Cho et al. demonstrated that an unsupervised machine learning algorithm outperformed traditional surgeon diagnosis in distinguishing metopic craniosynostosis from benign metopic ridging, highlighting the potential of AI to enhance diagnostic accuracy and reduce subjectivity in clinical decision-making [126]. Bloch et al. demonstrated an AI-based classification system with a sensitivity of 94.4% and specificity of 92.6%, allowing for improved differentiation between pathological and benign conditions [127].
AI has also been instrumental in guiding clinical decision-making regarding surgical intervention. The severity of metopic craniosynostosis exists along a spectrum, making the decision to proceed with surgery highly subjective and surgeon-dependent. To introduce greater objectivity, AI-based severity assessment tools have been developed to quantify orbitofrontal dysmorphology and guide indications for operative intervention [126,127,128,129,130,131,132]. AI-generated metopic severity scores and cranial morphology deviation metrics derived from computed tomography (CT) and three-dimensional photography have demonstrated strong correlation with traditional severity indices [128,129,130,131]. These severity scores are comparable to previously developed indices and are associated with aesthetic and neurocognitive outcomes [132,133,134]. Cho et al. reported a 96% concordance between an automated AI algorithm and surgeon decision-making in determining the need for surgical intervention [126].
Craniosynostosis can occur as an isolated anomaly or as part of a genetic syndrome. Unlike non-syndromic craniosynostosis, which involves isolated suture fusion, syndromic cases are part of broader conditions such as Apert, Crouzon, Pfeiffer, Saethre-Chotzen, and Muenke syndromes. These syndromes often involve mutations in genes like FGFR2, FGFR3, or TWIST1, leading to characteristic features such as midface hypoplasia, exorbitism, limb abnormalities, and, in some cases, developmental delays or hearing loss. Given the complexity of these syndromes, AI-based diagnostic frameworks have been developed to assist clinicians [135,136,137]. O’Sullivan et al. introduced a machine-learning craniofacial analysis model capable of identifying Muenke, Crouzon, and Apert syndromes with a sensitivity of 99.9% and specificity of 100%, outperforming traditional clinical diagnosis [136].
The integration of AI extends beyond diagnosis and classification to intraoperative applications. Augmented reality has emerged as a transformative tool in craniosynostosis surgery, offering enhanced visualization and precision during complex craniofacial procedures. By overlaying virtual 3D models of the patient’s anatomy onto the surgical field in real time, augmented reality enables surgeons to accurately plan and execute osteotomies and reconstructions [138,139,140,141,142,143,144]. Augmented reality systems have been successfully applied in cranial vault remodeling, minimally invasive craniectomy, and orbital box osteotomy [138,139,140,141,142,143,144]. Furthermore, augmented reality technology has proven valuable in patient and caregiver education. Chen et al. found that caregivers preferred augmented reality models over two-dimensional diagrams in craniosynostosis education [145]. AI technology has also been used in the evaluation of operative outcomes in non-syndromic and syndromic patients [146,147,148,149].

4. Craniofacial Microsomia

Craniofacial microsomia is a congenital underdevelopment of one side of the face, primarily affecting the mandible, ear, and associated soft tissues. Advances in AI have facilitated early diagnosis and surgical planning for this condition. Baek et al. employed a convolutional neural network model to detect craniofacial microsomia from clinical photographs with high accuracy ranging from 94% to 99% [150]. Additionally, AI algorithms have been developed for cephalometric analysis and segmentation, enabling objective evaluation of preoperative malformations and postoperative outcomes. These tools provide clinicians with more precise assessments, guiding treatment planning and monitoring surgical success [151,152].
Surgical management of mandibular hypoplasia in craniofacial microsomia focuses on restoring facial symmetry and improving functional outcomes. Treatment options include distraction osteogenesis to gradually lengthen the underdeveloped mandible, costochondral grafting, and orthognathic surgery to optimize jaw alignment and occlusion. Deep learning-based predictive models have been developed to estimate soft tissue profile changes following mandibular advancement surgery, allowing for more precise preoperative planning (Table 1). Ter Horst et al. demonstrated a deep-learning algorithm capable of predicting postoperative soft tissue outcomes within an acceptable error range, improving patient-specific surgical planning [153].
Mandibular osteotomies in hemifacial microsomia present unique challenges due to the inherent asymmetry and hypoplasia of the affected structures. The underdeveloped mandible often exhibits distorted anatomy, including abnormal condyles, ramus, and body, complicating osteotomies and fixation. Variability in bone quality, such as reduced cortical thickness and lower bone volume, further increases the complexity of stabilization and raises the risk of fracture or hardware failure. To address these challenges, multiple research groups have developed augmented reality platforms to assist in intraoperative navigation for mandibular osteotomies and distractor placement. These technologies enhance surgical precision by superimposing 3D models onto the operative field, providing real-time guidance [154,155,156,157,158,159,160]. Liu et al. further extended the application of augmented reality to guide facial fat grafting for soft tissue augmentation, improving volumetric restoration and symmetry [161].
Microtia, a congenital anomaly characterized by underdevelopment or absence of the external ear, frequently coexists with craniofacial microsomia. The severity of microtia varies widely, from mild structural abnormalities to complete ear absence, known as anotia. AI-driven classification models have been developed to stratify the severity of microtia based on clinical photographs. Wang et al. implemented convolutional neural networks for this purpose, enhancing diagnostic accuracy and standardizing severity assessment [162].
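For illustration, the core of a photograph-based severity classifier of this kind can be sketched in a few lines: convolutional filters extract local features from the image, pooling condenses them, and a softmax head outputs probabilities over severity grades. The sketch below is a minimal NumPy toy, not the architecture of Wang et al. [162]; the filter values, image size, and three-grade scale are hypothetical assumptions.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def severity_scores(photo, kernels, weights, bias):
    """Toy CNN head: conv -> ReLU -> global average pool -> linear -> softmax."""
    feats = np.array([np.maximum(conv2d(photo, k), 0).mean() for k in kernels])
    logits = weights @ feats + bias
    e = np.exp(logits - logits.max())
    return e / e.sum()  # probabilities over severity grades

rng = np.random.default_rng(0)
photo = rng.random((32, 32))               # stand-in for a grayscale ear photo
kernels = rng.standard_normal((4, 3, 3))   # 4 "learned" 3x3 filters (random here)
weights = rng.standard_normal((3, 4))      # 3 hypothetical grades: I, II, III/anotia
bias = np.zeros(3)

p = severity_scores(photo, kernels, weights, bias)
print(p)
```

In practice the kernels and weights would be learned from labeled clinical photographs; here they are random, so the output only demonstrates the data flow from pixels to grade probabilities.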
Surgical management of microtia typically involves ear reconstruction using either autologous cartilage or synthetic materials. Ear reconstruction presents several challenges due to the complexity of recreating a functional and aesthetically pleasing external ear. Achieving proper positioning on the side of the head is crucial for both appearance and functional outcomes, which can be difficult given the abnormal development of the surrounding structures. Augmented reality technology has been applied to assist surgeons with placement of the ear construct [163,164,165]. Nuri et al. compared augmented reality and traditional transparent film measurements and found the difference to be within 2 mm [165]. AI tools have further been developed to assess post-operative asymmetry in ear position [166,167,168].
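Post-operative position assessment of this kind reduces, in essence, to comparing the reconstructed ear's landmarks with the mirror image of the contralateral ear across the facial midline. The sketch below is illustrative only, assuming hypothetical 2D landmark coordinates in millimeters and a vertical midline at x = 0; it is not the method of the cited studies.

```python
import numpy as np

def ear_position_offset(ear_landmarks, contralateral_landmarks, midline_x):
    """
    Mirror the contralateral ear's landmarks across the facial midline
    (a vertical line at x = midline_x) and report the mean distance to the
    reconstructed ear's landmarks, in the same units as the input (e.g., mm).
    """
    mirrored = contralateral_landmarks.astype(float).copy()
    mirrored[:, 0] = 2.0 * midline_x - mirrored[:, 0]
    return float(np.linalg.norm(ear_landmarks - mirrored, axis=1).mean())

# Hypothetical (x, y) landmarks in mm: superaurale, subaurale, tragion
recon  = np.array([[62.0, 41.0], [61.0, 95.0], [58.0, 70.0]])
normal = np.array([[-61.0, 40.0], [-60.0, 94.0], [-57.0, 69.5]])

offset = ear_position_offset(recon, normal, midline_x=0.0)
print(f"mean positional offset: {offset:.2f} mm")
print("within 2 mm tolerance:", offset <= 2.0)  # prints True for these landmarks
```

The 2 mm threshold echoes the agreement Nuri et al. reported between AR and transparent-film measurements [165]; the landmark set and coordinates here are invented for the example.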
The integration of AI and augmented reality into the diagnosis, surgical planning, and intraoperative execution of procedures for craniofacial microsomia and microtia represents a significant advancement in craniofacial surgery. These technologies enhance diagnostic accuracy, improve preoperative assessments, and provide real-time intraoperative guidance, ultimately leading to more predictable and refined surgical outcomes. As AI and AR continue to evolve, their applications in craniofacial reconstruction will further optimize patient care and advance the precision of surgical interventions.
Table 1. Applications of artificial intelligence in pediatric craniofacial surgery.
| Domain | AI Applications | Key Findings | References |
| --- | --- | --- | --- |
| Cleft Lip and Palate | AI-assisted genetic analysis (CNNs for SNPs); AI for prenatal and postnatal diagnosis (ultrasound, clinical photos); automated nasoalveolar molding device design; AR for surgical training and telemedicine | Improved SNP-based prediction of cleft risk (Pearson correlation 0.50–0.83); enhanced ultrasound-based detection accuracy (92.5%); CAD-generated NAM devices optimized treatment (91% success rate); AR-supported global outreach programs enhanced training outcomes | [19–33] |
| Velopharyngeal Insufficiency (VPI) | AI-driven speech analysis for hypernasality; AI-assisted nasopharyngeal imaging; AI-based OSA risk screening | Deep learning models achieved clinician-level accuracy in speech assessment (r = 0.81–0.89); AI-enhanced imaging segmentation for VPI evaluation (Dice score 0.92–0.97, detected 90% of closures); AI models improved OSA screening before VPI surgery (accuracy, sensitivity, and specificity > 85%) | [53–81] |
| Orthognathic Surgery | AI-driven skeletal maturity assessment; AI-assisted cephalometric analysis; AI for predicting orthognathic surgery need and outcome | AI improved surgical timing and decision-making (sensitivity 95.5%, specificity 95.2%, simulation error 1.1 mm); CNN models classified skeletal discrepancies with high accuracy; AI-predicted post-surgical facial changes improved patient counseling | [82–109] |
| Craniosynostosis | AI-based skull morphology classification; AI-assisted severity scoring and surgical indication; AI-powered AR for intraoperative guidance | AI algorithms classified craniosynostosis with 90.6% accuracy; AI-based severity scores correlated with surgical decision-making (96% concordance); AR-assisted suture mapping (error 2.4 mm) | [110–149] |
| Craniofacial Microsomia and Microtia | AI-driven classification models (clinical photos, cephalometrics); AI-assisted mandibular osteotomies and distractor placement; AR for microtia reconstruction planning | CNN-based classification models achieved 94–99% accuracy; AR-guided osteotomies enhanced surgical precision (p < 0.05); AR improved ear positioning accuracy within 2 mm | [150–168] |

5. Limitations and Gaps

While machine learning and deep learning have demonstrated significant potential in pediatric craniofacial surgery, their widespread clinical adoption faces several challenges. Model performance often declines when applied to real-world datasets that differ from the original training data, particularly due to differences in imaging protocols, population characteristics, and regional healthcare practices. This lack of generalizability is further compounded by limited access to diverse, high-quality annotated datasets. Additionally, the cost and complexity of integrating AI tools into existing clinical workflows remain substantial barriers, especially in resource-limited settings. Regulatory approval requires extensive validation through prospective trials, which are currently lacking in many AI applications. Ethical concerns around data privacy, algorithmic bias, and transparency also require careful consideration. Addressing these challenges will be essential for ensuring that AI tools deliver consistent, equitable, and clinically meaningful benefits across diverse healthcare environments.

6. Future Directions

Artificial intelligence has made significant strides in healthcare, with its transformative potential particularly evident in complex surgical specialties such as pediatric craniofacial surgery. This field involves the intricate correction of congenital deformities affecting the skull, face, and jaw, requiring meticulous planning, precision, and multidisciplinary collaboration. AI’s expanding role in pediatric craniofacial surgery offers promising advancements in preoperative planning, intraoperative assistance, and postoperative monitoring, ultimately improving patient outcomes and surgical efficiency.
In the preoperative phase, AI-driven technologies are poised to enhance diagnostic accuracy and streamline surgical planning. Deep learning algorithms can analyze multimodal imaging data, including computed tomography (CT), magnetic resonance imaging (MRI), and three-dimensional (3D) reconstructions, to detect subtle craniofacial abnormalities with greater precision than traditional methods. Machine learning models, trained on extensive datasets, can identify minute structural deformities, soft tissue anomalies, and developmental variations that may be imperceptible to the human eye. These insights facilitate the development of personalized surgical plans tailored to each patient’s unique anatomy, allowing surgeons to anticipate potential challenges and optimize surgical strategies.
Beyond preoperative planning, AI-driven robotic systems hold the potential to revolutionize the execution of pediatric craniofacial procedures. Given the delicate nature of craniofacial surgeries, which often involve precise bone reconstruction and meticulous soft tissue manipulation, robotic assistance could enhance surgical accuracy, reduce intraoperative complications, and improve recovery times. AI-integrated robotic platforms may also enable minimally invasive techniques, a crucial advancement for pediatric patients who are more susceptible to complications from extensive surgical interventions.
During surgery, AI can function as a real-time decision-support tool by synthesizing data from various sources, including imaging modalities, surgical instruments, and patient vitals. By continuously analyzing intraoperative parameters, AI can provide alerts regarding deviations from the planned surgical trajectory, unanticipated anatomical variations, or real-time changes in patient physiology. Additionally, AI can assist in optimizing procedural techniques, such as identifying ideal osteotomy sites, guiding implant placement, and ensuring precise cranial bone alignment—critical factors in achieving both functional and aesthetic success.
The benefits of AI extend beyond the operating room, significantly impacting postoperative monitoring and long-term patient care. AI-powered systems can track recovery trajectories, detect early signs of complications such as infections, bone displacement, or graft failure, and predict the likelihood of secondary interventions. By continuously analyzing post-surgical imaging and patient data, AI facilitates early intervention, reducing the risk of long-term morbidity.
Furthermore, AI can support longitudinal patient monitoring, particularly in pediatric populations where craniofacial structures undergo continuous growth and development. AI-driven predictive models can analyze sequential imaging data to assess craniofacial maturation, guiding clinicians on the optimal timing for additional interventions when necessary. This proactive approach ensures that developmental abnormalities are addressed promptly, minimizing the need for more extensive corrective surgeries later in life.
Techniques such as data augmentation using generative adversarial networks (GANs) and targeted handling of class imbalance have been shown to enhance model robustness and performance in medical imaging AI, including MRI-based tumor classification and other applications [169,170]. These strategies are particularly relevant when developing AI models for pediatric craniofacial surgery, where datasets are often small and highly imbalanced.
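As a concrete illustration of the imbalance-handling strategies mentioned above, the sketch below computes inverse-frequency class weights (the common "balanced" heuristic, in which the sample-weighted mean weight equals 1) and oversamples the rare class with horizontally flipped copies. The dataset shapes and the 9:1 imbalance are made-up stand-ins for a small craniofacial imaging dataset, not values from the cited studies.

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class loss weights n_samples / (n_classes * count); the
    sample-weighted mean of these weights equals 1."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * np.maximum(counts, 1))

def augment_flips(images, labels, minority_class):
    """Cheap augmentation: append horizontally flipped copies of the rare class."""
    idx = np.where(labels == minority_class)[0]
    flipped = images[idx][:, :, ::-1]  # flip along the width axis
    return (np.concatenate([images, flipped]),
            np.concatenate([labels, labels[idx]]))

rng = np.random.default_rng(1)
images = rng.random((100, 16, 16))          # toy grayscale image stack
labels = np.array([0] * 90 + [1] * 10)      # 9:1 imbalance (e.g., a rare anomaly)

w = inverse_frequency_weights(labels, 2)
aug_images, aug_labels = augment_flips(images, labels, minority_class=1)
print("class weights:", w)                  # rare class weighted 9x heavier
print("dataset size:", len(aug_labels), "minority count:", (aug_labels == 1).sum())
```

GAN-based augmentation as in [169,170] would replace the flip step with samples drawn from a generator trained on the minority class; the weighting logic is unchanged.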
As AI technologies continue to evolve, their integration into pediatric craniofacial surgery is expected to revolutionize diagnostic workflows, surgical precision, and postoperative care. While several challenges remain, the lack of standardized, high-quality, and representative datasets presents the most significant barrier to the development and validation of AI applications in pediatric craniofacial surgery. Nonetheless, ongoing advancements in AI technology hold the potential to transform this field by enhancing diagnostic precision, surgical planning, and long-term outcome monitoring.

Author Contributions

Conceptualization, R.R.H.; formal analysis, L.M.H.; data curation, L.M.H.; writing—original draft preparation, L.M.H.; writing—review and editing, R.L.E. and R.R.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

Children’s Analytical Imaging and Modeling Center Research Program.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rahmani, A.M.; Yousefpoor, E.; Yousefpoor, M.S.; Mehmood, Z.; Haider, A.; Hosseinzadeh, M.; Ali Naqvi, R. Machine Learning (ML) in Medicine: Review, Applications, and Challenges. Mathematics 2021, 9, 2970. [Google Scholar] [CrossRef]
  2. Quazi, S. Artificial intelligence and machine learning in precision and genomic medicine. Med. Oncol. 2022, 39, 120. [Google Scholar] [CrossRef]
  3. Alowais, S.A.; Alghamdi, S.S.; Alsuhebany, N.; Alqahtani, T.; Alshaya, A.I.; Almohareb, S.N.; Aldairem, A.; Alrashed, M.; Bin Saleh, K.; Badreldin, H.A.; et al. Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Med. Educ. 2023, 23, 689. [Google Scholar] [CrossRef]
  4. Gupta, R.; Srivastava, D.; Sahu, M.; Tiwari, S.; Ambasta, R.K.; Kumar, P. Artificial intelligence to deep learning: Machine intelligence approach for drug discovery. Mol. Divers. 2021, 25, 1315–1360. [Google Scholar] [CrossRef]
  5. Taye, M.M. Understanding of Machine Learning with Deep Learning: Architectures, Workflow, Applications and Future Directions. Computers 2023, 12, 91. [Google Scholar] [CrossRef]
  6. Alaskar, H.; Saba, T. Machine Learning and Deep Learning: A Comparative Review. In Proceedings of the Integrated Intelligence Enable Networks and Computing, Gopeshwar, India, 25–27 May 2020; Springer: Singapore, 2021. [Google Scholar]
  7. Hallac, R.R.; Lee, J.; Pressler, M.; Seaward, J.R.; Kane, A.A. Identifying Ear Abnormality from 2D Photographs Using Convolutional Neural Networks. Sci. Rep. 2019, 9, 18198. [Google Scholar] [CrossRef]
  8. Thakur, G.K.; Thakur, A.; Kulkarni, S.; Khan, N.; Khan, S. Deep Learning Approaches for Medical Image Analysis and Diagnosis. Cureus 2024, 16, e59507. [Google Scholar] [CrossRef]
  9. Mall, P.K.; Singh, P.K.; Srivastav, S.; Narayan, V.; Paprzycki, M.; Jaworska, T.; Ganzha, M. A comprehensive review of deep neural networks for medical image processing: Recent developments and future opportunities. Healthc. Anal. 2023, 4, 100216. [Google Scholar] [CrossRef]
  10. Li, M.; Jiang, Y.; Zhang, Y.; Zhu, H. Medical image analysis using deep learning algorithms. Front. Public Health 2023, 11, 1273253. [Google Scholar] [CrossRef]
  11. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53. [Google Scholar] [CrossRef]
  12. Hallac, R.R.; Jackson, S.A.; Grant, J.; Fisher, K.; Scheiwe, S.; Wetz, E.; Perez, J.; Lee, J.; Chitta, K.; Seaward, J.R.; et al. Assessing outcomes of ear molding therapy by health care providers and convolutional neural network. Sci. Rep. 2021, 11, 17875. [Google Scholar] [CrossRef]
  13. Rosero, K.; Salman, A.N.; Hallac, R.R.; Busso, C. Lip abnormality detection for patients with repaired cleft lip and palate: A lip normalization approach. In Proceedings of the 26th International Conference on Multimodal Interaction (ICMI 2024), San José, Costa Rica, 4–8 November 2024. [Google Scholar] [CrossRef]
  14. Loftus, T.J.; Tighe, P.J.; Filiberto, A.C.; Efron, P.A.; Brakenridge, S.C.; Mohr, A.M.; Rashidi, P.; Upchurch, G.R., Jr.; Bihorac, A. Artificial Intelligence and Surgical Decision-making. JAMA Surg. 2020, 155, 148–158. [Google Scholar] [CrossRef] [PubMed]
  15. Najjar, R. Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging. Diagnostics 2023, 13, 2760. [Google Scholar] [CrossRef] [PubMed]
  16. Yang, Z.; Wang, D.; Zhou, F.; Song, D.; Zhang, Y.; Jiang, J.; Kong, K.; Liu, X.; Qiao, Y.; Chang, R.T. Understanding natural language: Potential application of large language models to ophthalmology. Asia-Pac. J. Ophthalmol. 2024, 13, 100085. [Google Scholar]
  17. Sezgin, E.; Jackson, D.I.; Kocaballi, A.B.; Bibart, M.; Zupanec, S.; Landier, W.; Audino, A.; Ranalli, M.; Skeens, M. Can Large Language Models Aid Caregivers of Pediatric Cancer Patients in Information Seeking? A Cross-Sectional Investigation. Cancer Med. 2025, 14, e70554. [Google Scholar]
  18. Mossey, P.; Catilla, E. WHO Registry Meeting on Craniofacial Anomalies. In WHO Human Genetics Programme & WHO Meeting on International Collaborative Research on Craniofacial Anomalies; WHO: Geneva, Switzerland, 2001. [Google Scholar]
  19. Dai, Y.; Itai, T.; Pei, G.; Yan, F.; Chu, Y.; Jiang, X.; Weinberg, S.M.; Mukhopadhyay, N.; Marazita, M.L.; Simon, L.M.; et al. DeepFace: Deep-learning-based framework to contextualize orofacial-cleft-related variants during human embryonic craniofacial development. HGG Adv. 2024, 5, 100322. [Google Scholar] [CrossRef]
  20. Kang, G.; Baek, S.H.; Kim, Y.H.; Kim, D.H.; Park, J.W. Genetic Risk Assessment of Nonsyndromic Cleft Lip with or without Cleft Palate by Linking Genetic Networks and Deep Learning Models. Int. J. Mol. Sci. 2023, 24, 4557. [Google Scholar] [CrossRef]
  21. Jurek, J.; Wojtowicz, W.; Wojtowicz, A. Syntactic pattern recognition-based diagnostics of fetal palates. Pattern Recognit. Lett. 2020, 133, 144–150. [Google Scholar]
  22. Li, Y.; Cai, P.; Huang, Y.; Yu, W.; Liu, Z.; Liu, P. Deep learning-based detection and classification of fetal lip in ultrasound images. J. Perinat. Med. 2024, 52, 769–777. [Google Scholar] [CrossRef]
  23. Kuwada, C.; Ariji, Y.; Kise, Y.; Fukuda, M.; Nishiyama, M.; Funakoshi, T.; Takeuchi, R.; Sana, A.; Kojima, N.; Ariji, E. Deep-learning systems for diagnosing cleft palate on panoramic radiographs in patients with cleft alveolus. Oral Radiol. 2023, 39, 349–354. [Google Scholar] [CrossRef]
  24. Kuwada, C.; Ariji, Y.; Kise, Y.; Funakoshi, T.; Fukuda, M.; Kuwada, T.; Gotoh, K.; Ariji, E. Detection and classification of unilateral cleft alveolus with and without cleft palate on panoramic radiographs using a deep learning system. Sci. Rep. 2021, 11, 16044. [Google Scholar] [CrossRef] [PubMed]
  25. Kuwada, C.; Ariji, Y.; Kise, Y.; Fukuda, M.; Ota, J.; Ohara, H.; Kojima, N.; Ariji, E. Detection of unilateral and bilateral cleft alveolus on panoramic radiographs using a deep-learning system. Dentomaxillofacial Radiol. 2023, 52, 20210436. [Google Scholar] [CrossRef] [PubMed]
  26. Agarwal, S.; Hallac, R.; Mishra, R.; Li, C.; Daescu, O.; Kane, A. Images based detection of craniofacial abnormalities using feature extraction by classical convolutional neural network. In Proceedings of the 2018 IEEE 8th International Conference on Computational Advances in Bio and Medical Sciences, Las Vegas, NV, USA, 8–20 October 2018; pp. 1–6. [Google Scholar]
  27. Rosero, K.; Salman, A.N.; Harrison, L.M.; Kane, A.A.; Busso, C.; Hallac, R.R. Deep Learning-Based Assessment of Lip Symmetry for Patients with Repaired Cleft Lip. Cleft Palate Craniofacial J. 2025, 62, 289–299. [Google Scholar] [CrossRef] [PubMed]
  28. McCullough, M.; Ly, S.; Auslander, A.; Yao, C.; Campbell, A.; Scherer, S.; Magee, W.P. Convolutional Neural Network Models for Automatic Preoperative Severity Assessment in Unilateral Cleft Lip. Plast. Reconstr. Surg. 2021, 148, 162–169. [Google Scholar] [CrossRef]
  29. Hayajneh, A.; Shaqfeh, M.; Serpedin, E.; Stotland, M.A. Unsupervised anomaly appraisal of cleft faces using a StyleGAN2-based model adaptation technique. PLoS ONE 2023, 18, e0288228. [Google Scholar] [CrossRef]
  30. Schiebl, J.; Bauer, F.X.; Grill, F.; Loeffelbein, D.J. RapidNAM: Algorithm for the Semi-Automated Generation of Nasoalveolar Molding Device Designs for the Presurgical Treatment of Bilateral Cleft Lip and Palate. IEEE Trans. Biomed. Eng. 2020, 67, 1263–1271. [Google Scholar] [CrossRef]
  31. Lee, G.K.; Moshrefi, S.; Fuertes, V.; Veeravagu, L.; Nazerali, R.; Lin, S.J. What Is Your Reality? Virtual, Augmented, and Mixed Reality in Plastic Surgery Training, Education, and Practice. Plast. Reconstr. Surg. 2021, 147, 505–511. [Google Scholar] [CrossRef]
  32. Chahine, E.M.; Kantar, R.S.; Kassam, S.N.; Vyas, R.M.; Ghotmi, L.H.; Haddad, A.G.; Hamdan, U.S. Sustainable Cleft Care: A Comprehensive Model Based on the Global Smile Foundation Experience. Cleft Palate Craniofacial J. 2021, 58, 647–652. [Google Scholar] [CrossRef]
  33. Vyas, R.M.; Sayadi, L.R.; Bendit, D.; Hamdan, U.S. Using Virtual Augmented Reality to Remotely Proctor Overseas Surgical Outreach: Building Long-Term International Capacity and Sustainability. Plast. Reconstr. Surg. 2020, 146, 622e–629e. [Google Scholar] [CrossRef]
  34. Wang, X.; Pastewait, M.; Wu, T.H.; Lian, C.; Tejera, B.; Lee, Y.T.; Lin, F.C.; Wang, L.; Shen, D.; Li, S.; et al. 3D morphometric quantification of maxillae and defects for patients with unilateral cleft palate via deep learning-based CBCT image auto-segmentation. Orthod. Craniofacial Res. 2021, 24 (Suppl. S2), 108–116. [Google Scholar] [CrossRef]
  35. Fujii, Y.; Sugiyama-Tamura, T.; Sugisaki, R.; Chujo, Y.; Honda, A.; Kono, M.; Chikazu, D. New Assessment Method of Alveolar Bone Grafting Using Automatic Registration and AI-based Segmentation. J. Craniofacial Surg. 2024. [Google Scholar] [CrossRef] [PubMed]
  36. Miranda, F.; Choudhari, V.; Barone, S.; Anchling, L.; Hutin, N.; Gurgel, M.; Al Turkestani, N.; Yatabe, M.; Bianchi, J.; Aliaga-Del Castillo, A.; et al. Interpretable artificial intelligence for classification of alveolar bone defect in patients with cleft lip and palate. Sci. Rep. 2023, 13, 15861. [Google Scholar] [CrossRef]
  37. Sayadi, L.R.; Hamdan, U.S.; Zhangli, Q.; Hu, J.; Vyas, R.M. Harnessing the Power of Artificial Intelligence to Teach Cleft Lip Surgery. Plast. Reconstr. Surg. Glob. Open 2022, 10, e4451. [Google Scholar] [CrossRef]
  38. Xu, M.; Liu, B.; Luo, Z.; Ma, H.; Sun, M.; Wang, Y.; Yin, N.; Tang, X.; Song, T. Using a New Deep Learning Method for 3D Cephalometry in Patients with Cleft Lip and Palate. J. Craniofacial Surg. 2023, 34, 1485–1488. [Google Scholar] [CrossRef] [PubMed]
  39. Li, Y.; Cheng, J.; Mei, H.; Ma, H.; Chen, Z. CLPNet: Cleft Lip and Palate Surgery Support with Deep Learning. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Berlin, Germany, 23–27 July 2019; pp. 3666–3672. [Google Scholar] [CrossRef]
  40. Patcas, R.; Timofte, R.; Volokitin, A.; Agustsson, E.; Eliades, T.; Eichenberger, M.; Bornstein, M.M. Facial attractiveness of cleft patients: A direct comparison between artificial-intelligence-based scoring and conventional rater groups. Eur. J. Orthod. 2019, 41, 428–433. [Google Scholar] [CrossRef]
  41. Wu, J.; Heike, C.; Birgfeld, C.; Evans, K.; Maga, M.; Morrison, C.; Saltzman, B.; Shapiro, L.; Tse, R. Measuring Symmetry in Children with Unrepaired Cleft Lip: Defining a Standard for the Three-Dimensional Midfacial Reference Plane. Cleft Palate Craniofacial J. 2016, 53, 695–704. [Google Scholar] [CrossRef] [PubMed]
  42. Fazilat, A.Z.; Berry, C.E.; Churukian, A.; Lavin, C.; Kameni, L.; Brenac, C.; Podda, S.; Bruckman, K.; Lorenz, H.P.; Khosla, R.K.; et al. AI-based Cleft Lip and Palate Surgical Information is Preferred by Both Plastic Surgeons and Patients in a Blind Comparison. Cleft Palate Craniofacial J. 2024, 10556656241266368. [Google Scholar] [CrossRef]
  43. Shehab, A.A.; Shedd, K.E.; Alamah, W.; Mardini, S.; Bite, U.; Gibreel, W. Bridging Gaps in Health Literacy for Cleft Lip and Palate: The Role of Artificial Intelligence and Interactive Educational Materials. Cleft Palate Craniofacial J. 2024, 10556656241289653. [Google Scholar] [CrossRef]
  44. Chaker, S.C.; Hung, Y.C.; Saad, M.; Golinko, M.S.; Galdyn, I.A. Easing the Burden on Caregivers—Applications of Artificial Intelligence for Physicians and Caregivers of Children with Cleft Lip and Palate. Cleft Palate Craniofacial J. 2024, 10556656231223596. [Google Scholar] [CrossRef]
  45. Duran, G.S.; Yurdakurban, E.; Topsakal, K.G. The Quality of CLP-Related Information for Patients Provided by ChatGPT. Cleft Palate Craniofacial J. 2023, 10556656231222387. [Google Scholar] [CrossRef]
  46. Manasyan, A.; Lasky, S.; Jolibois, M.; Moshal, T.; Roohani, I.; Munabi, N.; Urata, M.M.; Hammoudeh, J.A. Expanding Accessibility in Cleft Care: The Role of Artificial Intelligence in Improving Literacy of Alveolar Bone Grafting Information. Cleft Palate Craniofacial J. 2024, 10556656241281453. [Google Scholar] [CrossRef]
  47. Alkhamees, A. Evaluation of Artificial Intelligence as a Search Tool for Patients: Can ChatGPT-4 Provide Accurate Evidence-Based Orthodontic-Related Information? Cureus 2024, 16, e65820. [Google Scholar] [CrossRef]
  48. Aziz, A.A.A.; Abdelrahman, H.H.; Hassan, M.G. The use of ChatGPT and Google Gemini in responding to orthognathic surgery-related questions: A comparative study. J. World Fed. Orthod. 2024, 14, 20–26. [Google Scholar] [CrossRef] [PubMed]
  49. Fatima, K.; Singh, P.; Amipara, H.; Chaudhary, G. Accuracy of Artificial Intelligence-Based Virtual Assistants in Responding to Frequently Asked Questions Related to Orthognathic Surgery. J. Oral Maxillofac. Surg. 2024, 82, 916–921. [Google Scholar] [CrossRef]
  50. Lo, S.J.; Chapman, P.; Young, D.; Drake, D.; Devlin, M.; Russell, C. The Cleft Lip Education with Augmented Reality (CLEAR) VR Phase 2 Trial: A Pilot Randomized Crossover Trial of a Novel Patient Information Leaflet. Cleft Palate Craniofacial J. 2023, 60, 179–188. [Google Scholar] [CrossRef]
  51. Mahedia, M.; Rohrich, R.N.; Sadiq, K.O.S.; Bailey, L.; Harrison, L.M.; Hallac, R.R. Exploring the Utility of ChatGPT in Cleft Lip Repair Education. J. Clin. Med. 2025, 14, 993. [Google Scholar] [CrossRef] [PubMed]
  52. Lebhar, M.S.; Velazquez, A.; Goza, S.; Hoppe, I.C. Dr. ChatGPT: Utilizing Artificial Intelligence in Surgical Education. Cleft Palate Craniofacial J. 2024, 61, 2067–2073. [Google Scholar] [CrossRef]
  53. Mathad, V.C.; Scherer, N.; Chapman, K.; Liss, J.M.; Berisha, V. A Deep Learning Algorithm for Objective Assessment of Hypernasality in Children with Cleft Palate. IEEE Trans. Biomed. Eng. 2021, 68, 2986–2996. [Google Scholar] [CrossRef]
  54. He, F.; Wang, X.; Yin, H.; Zhang, H.; Yang, G.; He, L. Acoustic analysis and detection of pharyngeal fricative in cleft palate speech using correlation of signals in independent frequency bands and octave spectrum prominent peak. Biomed. Eng. Online 2020, 19, 36. [Google Scholar] [CrossRef]
  55. Maier, A.; Hönig, F.; Bocklet, T.; Nöth, E.; Stelzle, F.; Nkenke, E.; Schuster, M. Automatic detection of articulation disorders in children with cleft lip and palate. J. Acoust. Soc. Am. 2009, 126, 2589–2602. [Google Scholar] [CrossRef]
  56. Zhang, Y.; Zhang, J.; Li, W.; Yin, H.; He, L. Automatic Detection System for Velopharyngeal Insufficiency Based on Acoustic Signals from Nasal and Oral Channels. Diagnostics 2023, 13, 2714. [Google Scholar] [CrossRef] [PubMed]
  57. He, L.; Zhang, J.; Liu, Q.; Yin, H.; Lech, M. Automatic Evaluation of Hypernasality and Consonant Misarticulation in Cleft Palate Speech. IEEE Signal Process. Lett. 2014, 21, 1298–1301. [Google Scholar] [CrossRef]
  58. He, L.; Tan, J.; Hao, H.; Tang, M.; Yin, H.; Lech, M. Automatic evaluation of resonance and articulation disorders in cleft palate speech. In Proceedings of the 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP), Chengdu, China, 12–15 July 2015. [Google Scholar]
  59. Golabbakhsh, M.; Abnavi, F.; Kadkhodaei Elyaderani, M.; Derakhshandeh, F.; Khanlar, F.; Rong, P.; Kuehn, D.P. Automatic identification of hypernasality in normal and cleft lip and palate patients with acoustic analysis of speech. J. Acoust. Soc. Am. 2017, 141, 929. [Google Scholar] [CrossRef]
  60. Bocklet, T.; Riedhammer, K.; Esyholdt, U.; Noth, E. Automatic phoneme analysis in children with Cleft Lip and Palate. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 7572–7576. [Google Scholar]
  61. Mathad, V.C.; Liss, J.M.; Chapman, K.; Scherer, N.; Berisha, V. Consonant-Vowel Transition Models Based on Deep Learning for Objective Evaluation of Articulation. IEEE/ACM Trans. Audio Speech Lang. Process. 2023, 31, 86–95. [Google Scholar] [CrossRef]
  62. Wang, X.; Yang, S.; Tang, M.; Yin, H.; Huang, H.; He, L. HypernasalityNet: Deep recurrent neural network for automatic hypernasality detection. Int. J. Med. Inform. 2019, 129, 1–12. [Google Scholar] [CrossRef] [PubMed]
  63. Lucas, C.; Torres-Guzman, R.; James, A.J.; Corlew, S.; Stone, A.; Powell, M.E.; Golinko, M.; Pontell, M.E. Machine Learning for Automatic Detection of Velopharyngeal Dysfunction: A Preliminary Report. J. Craniofacial Surg. 2024. [Google Scholar] [CrossRef]
  64. Saxon, M.; Tripathi, A.; Jiao, Y.; Liss, J.; Berisha, V. Robust Estimation of Hypernasality in Dysarthria with Acoustic Model Likelihood Features. IEEE/ACM Trans. Audio Speech Lang. Process. 2020, 28, 2511–2522. [Google Scholar] [CrossRef]
  65. Dubey, A.; Prasanna, M.; Dandapat, S. Sinusoidal model-based hypernasality detection in cleft palate speech using CVCV sequence. Speech Commun. 2020, 124, 1–12. [Google Scholar] [CrossRef]
  66. Cornefjord, M.; Bluhme, J.; Jakobsson, A.; Klintö, K.; Lohmander, A.; Mamedov, T.; Stiernman, M.; Svensson, R.; Becker, M. Using Artificial Intelligence for Assessment of Velopharyngeal Competence in Children Born with Cleft Palate with or Without Cleft Lip. Cleft Palate Craniofacial J. 2024, 10556656241271646. [Google Scholar] [CrossRef]
  67. Mathad, V.; Prasanna, S. Vowel onset point based screening of misarticulated stops in cleft lip and palate speech. IEEE/ACM Trans. Audio Speech Lang. Process. 2020, 28, 450–460. [Google Scholar] [CrossRef]
  68. Ha, J.H.; Lee, H.; Kwon, S.M.; Joo, H.; Lin, G.; Kim, D.Y.; Kim, S.; Hwang, J.Y.; Chung, J.H.; Kong, H.J. Deep Learning-Based Diagnostic System for Velopharyngeal Insufficiency Based on Videofluoroscopy in Patients with Repaired Cleft Palates. J. Craniofacial Surg. 2023, 34, 2369–2375. [Google Scholar] [CrossRef]
  69. Cho, H.N.; Gwon, E.; Kim, K.A.; Baek, S.H.; Kim, N.; Kim, S.J. Accuracy of convolutional neural networks-based automatic segmentation of pharyngeal airway sections according to craniofacial skeletal pattern. Am. J. Orthod. Dentofac. Orthop. 2022, 162, e53–e62. [Google Scholar] [CrossRef] [PubMed]
  70. Shujaat, S.; Jazil, O.; Willems, H.; Van Gerven, A.; Shaheen, E.; Politis, C.; Jacobs, R. Automatic segmentation of the pharyngeal airway space with convolutional neural network. J. Dent. 2021, 111, 103705. [Google Scholar] [CrossRef]
  71. Ruthven, M.; Miquel, M.E.; King, A.P. Deep-learning-based segmentation of the vocal tract and articulators in real-time magnetic resonance images of speech. Comput. Methods Programs Biomed. 2021, 198, 105814. [Google Scholar] [CrossRef]
  72. Leeraha, C.; Kusakunniran, W.; Yodrabum, N.; Chaisrisawadisuk, S.; Vathanophas, V.; Siriapisith, T. Performance enhancement of deep learning based solutions for pharyngeal airway space segmentation on MRI scans. Sci. Rep. 2024, 14, 19671. [Google Scholar] [CrossRef] [PubMed]
  73. Kim, D.Y.; Woo, S.; Roh, J.Y.; Choi, J.Y.; Kim, K.A.; Cha, J.Y.; Kim, N.; Kim, S.J. Subregional pharyngeal changes after orthognathic surgery in skeletal Class III patients analyzed by convolutional neural networks-based segmentation. J. Dent. 2023, 135, 104565. [Google Scholar] [CrossRef]
  74. de Blacam, C.; Smith, S.; Orr, D. Surgery for Velopharyngeal Dysfunction: A Systematic Review of Interventions and Outcomes. Cleft Palate Craniofacial J. 2018, 55, 405–422. [Google Scholar] [CrossRef]
  75. He, S.; Li, Y.; Zhang, C.; Li, Z.; Ren, Y.; Li, T.; Wang, J. Deep learning technique to detect craniofacial anatomical abnormalities concentrated on middle and anterior of face in patients with sleep apnea. Sleep Med. 2023, 112, 12–20. [Google Scholar] [CrossRef]
  76. Hanif, U.; Leary, E.; Schneider, L.; Paulsen, R.; Morse, A.M.; Blackman, A.; Schweitzer, P.; Kushida, C.A.; Liu, S.; Jennum, P.; et al. Estimation of Apnea-Hypopnea Index Using Deep Learning On 3-D Craniofacial Scans. IEEE J. Biomed. Health Inform. 2021, 25, 4185–4194. [Google Scholar] [CrossRef]
  77. Monna, F.; Ben Messaoud, R.; Navarro, N.; Baillieul, S.; Sanchez, L.; Loiodice, C.; Tamisier, R.; Joyeux-Faure, M.; Pépin, J.L. Machine learning and geometric morphometrics to predict obstructive sleep apnea from 3D craniofacial scans. Sleep Med. 2022, 95, 76–83. [Google Scholar] [CrossRef]
  78. Tsuiki, S.; Nagaoka, T.; Fukuda, T.; Sakamoto, Y.; Almeida, F.R.; Nakayama, H.; Inoue, Y.; Enno, H. Machine learning for image-based detection of patients with obstructive sleep apnea: An exploratory study. Sleep Breath. 2021, 25, 2297–2305. [Google Scholar] [CrossRef] [PubMed]
  79. Kim, J.W.; Lee, K.; Kim, H.J.; Park, H.C.; Hwang, J.Y.; Park, S.W.; Kong, H.J.; Kim, J.Y. Predicting Obstructive Sleep Apnea Based on Computed Tomography Scans Using Deep Learning Models. Am. J. Respir. Crit. Care Med. 2024, 210, 211–221. [Google Scholar] [CrossRef]
  80. Tsai, C.Y.; Huang, H.T.; Cheng, H.C.; Wang, J.; Duh, P.J.; Hsu, W.H.; Stettler, M.; Kuan, Y.C.; Lin, Y.T.; Hsu, C.R.; et al. Screening for Obstructive Sleep Apnea Risk by Using Machine Learning Approaches and Anthropometric Features. Sensors 2022, 22, 8630. [Google Scholar] [CrossRef]
  81. Chen, Q.; Liang, Z.; Wang, Q.; Ma, C.; Lei, Y.; Sanderson, J.E.; Hu, X.; Lin, W.; Liu, H.; Xie, F.; et al. Self-helped detection of obstructive sleep apnea based on automated facial recognition and machine learning. Sleep Breath. 2023, 27, 2379–2388. [Google Scholar] [CrossRef]
  82. Charles, D.; Harrison, L.; Hassanipour, F.; Hallac, R.R. Nasal Airflow Dynamics following LeFort I Advancement in Cleft Nasal Deformities: A Retrospective Preliminary Study. Diagnostics 2024, 14, 1294. [Google Scholar] [CrossRef] [PubMed]
  83. Kamei, G.; Batra, P.; Singh, A.K.; Arora, G.; Kaushik, S. Development of an Artificial Intelligence-Based Algorithm for the Assessment of Skeletal Age and Detection of Cervical Vertebral Anomalies in Patients with Cleft Lip and Palate. Cleft Palate Craniofacial J. 2024, 10556656241299890. [Google Scholar] [CrossRef]
  84. Alam, M.K.; Alfawzan, A.A. Dental Characteristics of Different Types of Cleft and Non-cleft Individuals. Front. Cell Dev. Biol. 2020, 8, 789. [Google Scholar] [CrossRef]
  85. Khosravi-Kamrani, P.; Qiao, X.; Zanardi, G.; Wiesen, C.A.; Slade, G.; Frazier-Bowers, S.A. A machine learning approach to determine the prognosis of patients with Class III malocclusion. Am. J. Orthod. Dentofac. Orthop. 2022, 161, e1–e11. [Google Scholar] [CrossRef]
  86. Knoops, P.G.M.; Papaioannou, A.; Borghi, A.; Breakey, R.W.F.; Wilson, A.T.; Jeelani, O.; Zafeiriou, S.; Steinbacher, D.; Padwa, B.L.; Dunaway, D.J.; et al. A machine learning framework for automated diagnosis and computer-assisted planning in plastic and reconstructive surgery. Sci. Rep. 2019, 9, 13597. [Google Scholar] [CrossRef]
  87. Zhao, L.; Chen, X.; Huang, J.; Mo, S.; Gu, M.; Kang, N.; Song, S.; Zhang, X.; Liang, B.; Tang, M. Machine Learning Algorithms for the Diagnosis of Class III Malocclusions in Children. Children 2024, 11, 762. [Google Scholar] [CrossRef]
  88. Alam, M.K.; Alfawzan, A.A.; Haque, S.; Mok, P.L.; Marya, A.; Venugopal, A.; Jamayet, N.B.; Siddiqui, A.A. Sagittal Jaw Relationship of Different Types of Cleft and Non-cleft Individuals. Front. Pediatr. 2021, 9, 651951. [Google Scholar] [CrossRef]
  89. Li, H.; Xu, Y.; Lei, Y.; Wang, Q.; Gao, X. Automatic Classification for Sagittal Craniofacial Patterns Based on Different Convolutional Neural Networks. Diagnostics 2022, 12, 1359. [Google Scholar] [CrossRef]
  90. Xiao, D.; Deng, H.; Lian, C.; Kuang, T.; Liu, Q.; Ma, L.; Lang, Y.; Chen, X.; Kim, D.; Gateno, J.; et al. Unsupervised learning of reference bony shapes for orthognathic surgical planning with a surface deformation network. Med. Phys. 2021, 48, 7735–7746. [Google Scholar] [CrossRef]
  91. Xiao, D.; Lian, C.; Deng, H.; Kuang, T.; Liu, Q.; Ma, L.; Kim, D.; Lang, Y.; Chen, X.; Gateno, J.; et al. Estimating Reference Bony Shape Models for Orthognathic Surgical Planning Using 3D Point-Cloud Deep Learning. IEEE J. Biomed. Health Inform. 2021, 25, 2958–2966. [Google Scholar] [CrossRef]
  92. de Oliveira, P.H.J.; Li, T.; Li, H.; Gonçalves, J.R.; Santos-Pinto, A.; Gandini Junior, L.G.; Cevidanes, L.S.; Toyama, C.; Feltrin, G.P.; Campanha, A.A.; et al. Artificial intelligence as a prediction tool for orthognathic surgery assessment. Orthod. Craniofacial Res. 2024, 27, 785–794. [Google Scholar] [CrossRef]
  93. Lee, H.; Ahmad, S.; Frazier, M.; Dundar, M.M.; Turkkahraman, H. A novel machine learning model for class III surgery decision. J. Orofac. Orthop. 2024, 85, 239–249. [Google Scholar] [CrossRef]
  94. Taraji, S.; Atici, S.F.; Viana, G.; Kusnoto, B.; Allareddy, V.S.; Miloro, M.; Elnagar, M.H. Novel Machine Learning Algorithms for Prediction of Treatment Decisions in Adult Patients with Class III Malocclusion. J. Oral Maxillofac. Surg. 2023, 81, 1391–1402. [Google Scholar] [CrossRef]
  95. Shin, W.; Yeom, H.G.; Lee, G.H.; Yun, J.P.; Jeong, S.H.; Lee, J.H.; Kim, H.K.; Kim, B.C. Deep learning based prediction of necessity for orthognathic surgery of skeletal malocclusion using cephalogram in Korean individuals. BMC Oral Health 2021, 21, 130. [Google Scholar] [CrossRef]
  96. Jeong, S.H.; Yun, J.P.; Yeom, H.G.; Lim, H.J.; Lee, J.; Kim, B.C. Deep learning based discrimination of soft tissue profiles requiring orthognathic surgery by facial photographs. Sci. Rep. 2020, 10, 16235. [Google Scholar] [CrossRef]
  97. Choi, H.I.; Jung, S.K.; Baek, S.H.; Lim, W.H.; Ahn, S.J.; Yang, I.H.; Kim, T.W. Artificial Intelligent Model with Neural Network Machine Learning for the Diagnosis of Orthognathic Surgery. J. Craniofacial Surg. 2019, 30, 1986–1989. [Google Scholar] [CrossRef]
  98. Lin, G.; Kim, P.J.; Baek, S.H.; Kim, H.G.; Kim, S.W.; Chung, J.H. Early Prediction of the Need for Orthognathic Surgery in Patients with Repaired Unilateral Cleft Lip and Palate Using Machine Learning and Longitudinal Lateral Cephalometric Analysis Data. J. Craniofacial Surg. 2021, 32, 616–620. [Google Scholar] [CrossRef]
  99. Chang, J.S.; Ma, C.Y.; Ko, E.W. Prediction of surgery-first approach orthognathic surgery using deep learning models. Int. J. Oral Maxillofac. Surg. 2024, 53, 942–949. [Google Scholar] [CrossRef]
  100. Tanikawa, C.; Yamashiro, T. Development of novel artificial intelligence systems to predict facial morphology after orthognathic surgery and orthodontic treatment in Japanese patients. Sci. Rep. 2021, 11, 15853. [Google Scholar] [CrossRef]
  101. Park, J.A.; Moon, J.H.; Lee, J.M.; Cho, S.J.; Seo, B.M.; Donatelli, R.E.; Lee, S.J. Does artificial intelligence predict orthognathic surgical outcomes better than conventional linear regression methods? Angle Orthod. 2024, 94, 549–556. [Google Scholar] [CrossRef]
  102. Ma, Q.; Kobayashi, E.; Fan, B.; Hara, K.; Nakagawa, K.; Masamune, K.; Sakuma, I.; Suenaga, H. Machine-learning-based approach for predicting postoperative skeletal changes for orthognathic surgical planning. Int. J. Med. Robot. Comput. Assist. Surg. 2022, 18, e2379. [Google Scholar] [CrossRef]
  103. Cheng, M.; Zhang, X.; Wang, J.; Yang, Y.; Li, M.; Zhao, H.; Huang, J.; Zhang, C.; Qian, D.; Yu, H. Prediction of orthognathic surgery plan from 3D cephalometric analysis via deep learning. BMC Oral Health 2023, 23, 161. [Google Scholar] [CrossRef]
  104. Ma, L.; Xiao, D.; Kim, D.; Lian, C.; Kuang, T.; Liu, Q.; Deng, H.; Yang, E.; Liebschner, M.A.K.; Gateno, J.; et al. Simulation of Postoperative Facial Appearances via Geometric Deep Learning for Efficient Orthognathic Surgical Planning. IEEE Trans. Med. Imaging 2023, 42, 336–345. [Google Scholar] [CrossRef]
  105. Yuan, Z.; He, S.; Jiang, T.; Xie, Q.; Zhou, N.; Huang, X. Augmented reality hologram combined with pre-bent distractor enhanced the accuracy of distraction vector transfer in maxillary distraction osteogenesis, a study based on 3D printed phantoms. Front. Surg. 2022, 9, 1018030. [Google Scholar] [CrossRef]
  106. Choi, J.W.; Park, H.; Kim, I.H.; Kim, N.; Kwon, S.M.; Lee, J.Y. Surgery-First Orthognathic Approach to Correct Facial Asymmetry: Artificial Intelligence-Based Cephalometric Analysis. Plast. Reconstr. Surg. 2022, 149, 496e–499e. [Google Scholar] [CrossRef]
  107. Patcas, R.; Bernini, D.A.J.; Volokitin, A.; Agustsson, E.; Rothe, R.; Timofte, R. Applying artificial intelligence to assess the impact of orthognathic treatment on facial attractiveness and estimated age. Int. J. Oral Maxillofac. Surg. 2019, 48, 77–83. [Google Scholar] [CrossRef]
  108. Ma, Q.; Kobayashi, E.; Jin, S.; Masamune, K.; Suenaga, H. 3D evaluation model of facial aesthetics based on multi-input 3D convolution neural networks for orthognathic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2024, 20, e2651. [Google Scholar] [CrossRef]
  109. Lo, L.J.; Yang, C.T.; Ho, C.T.; Liao, C.H.; Lin, H.H. Automatic Assessment of 3-Dimensional Facial Soft Tissue Symmetry Before and After Orthognathic Surgery Using a Machine Learning Model: A Preliminary Experience. Ann. Plast. Surg. 2021, 86 (Suppl. S2), S224–S228. [Google Scholar] [CrossRef]
  110. Mizutani, K.; Miwa, T.; Sakamoto, Y.; Toda, M. Application of Deep Learning Techniques for Automated Diagnosis of Non-Syndromic Craniosynostosis Using Skull. J. Craniofacial Surg. 2022, 33, 1843–1846. [Google Scholar] [CrossRef]
111. Schaufelberger, M.; Kühle, R.P.; Kaiser, C.; Wachter, A.; Weichel, F.; Hagen, N.; Ringwald, F.; Eisenmann, U.; Freudlsperger, C.; Nahm, W. CNN-Based Classification of Craniosynostosis Using 2D Distance Maps. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, Glasgow, UK, 11–15 July 2022; pp. 446–449. [Google Scholar] [CrossRef]
  112. Sabeti, M.; Boostani, R.; Moradi, E.; Shakoor, M. Machine learning-based identification of craniosynostosis in newborns. Mach. Learn. Appl. 2022, 8, 100292. [Google Scholar]
  113. Paro, M.; Lambert, W.A.; Leclair, N.K.; Romano, R.; Stoltz, P.; Martin, J.E.; Hersh, D.S.; Bookland, M.J. Machine Learning-Driven Clinical Image Analysis to Identify Craniosynostosis: A Pilot Study of Telemedicine and Clinic Patients. Neurosurgery 2022, 90, 613–618. [Google Scholar] [CrossRef]
  114. Porras, A.R.; Tu, L.; Tsering, D.; Mantilla, E.; Oh, A.; Enquobahrie, A.; Keating, R.; Rogers, G.F.; Linguraru, M.G. Quantification of Head Shape from Three-Dimensional Photography for Presurgical and Postsurgical Evaluation of Craniosynostosis. Plast. Reconstr. Surg. 2019, 144, 1051e–1060e. [Google Scholar] [CrossRef]
  115. Tabatabaei, S.A.H.; Fischer, P.; Wattendorf, S.; Sabouripour, F.; Howaldt, H.P.; Wilbrand, M.; Wilbrand, J.F.; Sohrabi, K. Automatic detection and monitoring of abnormal skull shape in children with deformational plagiocephaly using deep learning. Sci. Rep. 2021, 11, 17970. [Google Scholar] [CrossRef]
  116. Watt, A.; Lee, J.; Toews, M.; Gilardino, M.S. Smartphone Integration of Artificial Intelligence for Automated Plagiocephaly Diagnosis. Plast. Reconstr. Surg. Glob. Open 2023, 11, e4985. [Google Scholar] [CrossRef]
  117. Bookland, M.J.; Ahn, E.S.; Stoltz, P.; Martin, J.E. Image processing and machine learning for telehealth craniosynostosis screening in newborns. J. Neurosurg. Pediatr. 2021, 27, 581–588. [Google Scholar] [CrossRef]
  118. Nguyen, H.T.; Obinero, C.G.; Wang, E.; Boyd, A.K.; Cepeda, A.; Talanker, M.; Mumford, D.; Littlefield, T.; Greives, M.R.; Nguyen, P.D. Artificial Intelligence Methods for the Argenta Classification of Deformational Plagiocephaly to Predict Severity and Treatment Recommendation. J. Craniofacial Surg. 2024, 35, 1917–1920. [Google Scholar] [CrossRef]
  119. Schaufelberger, M.; Kühle, R.; Wachter, A.; Weichel, F.; Hagen, N.; Ringwald, F.; Eisenmann, U.; Hoffmann, J.; Engel, M.; Freudlsperger, C.; et al. A Radiation-Free Classification Pipeline for Craniosynostosis Using Statistical Shape Modeling. Diagnostics 2022, 12, 1516. [Google Scholar] [CrossRef]
  120. Geisler, E.L.; Agarwal, S.; Hallac, R.R.; Daescu, O.; Kane, A.A. A Role for Artificial Intelligence in the Classification of Craniofacial Anomalies. J. Craniofacial Surg. 2021, 32, 967–969. [Google Scholar] [CrossRef]
  121. de Jong, G.; Bijlsma, E.; Meulstee, J.; Wennen, M.; van Lindert, E.; Maal, T.; Aquarius, R.; Delye, H. Combining deep learning with 3D stereophotogrammetry for craniosynostosis diagnosis. Sci. Rep. 2020, 10, 15346. [Google Scholar] [CrossRef]
  122. Kim, S.M.; Yang, J.S.; Han, J.W.; Koo, H.I.; Roh, T.H.; Yoon, S.H. Convolutional neural network-based classification of craniosynostosis and suture lines from multi-view cranial X-rays. Sci. Rep. 2024, 14, 26729. [Google Scholar] [CrossRef]
  123. Mendoza, C.S.; Safdar, N.; Okada, K.; Myers, E.; Rogers, G.F.; Linguraru, M.G. Personalized assessment of craniosynostosis via statistical shape modeling. Med. Image Anal. 2014, 18, 635–646. [Google Scholar] [CrossRef]
  124. Abdel-Alim, T.; Tapia Chaca, F.; Mathijssen, I.M.J.; Dirven, C.M.F.; Niessen, W.J.; Wolvius, E.B.; van Veelen, M.C.; Roshchupkin, G.V. Quantifying dysmorphologies of the neurocranium using artificial neural networks. J. Anat. 2024, 245, 903–913. [Google Scholar] [CrossRef]
  125. Kuehle, R.; Ringwald, F.; Bouffleur, F.; Hagen, N.; Schaufelberger, M.; Nahm, W.; Hoffmann, J.; Freudlsperger, C.; Engel, M.; Eisenmann, U. The Use of Artificial Intelligence for the Classification of Craniofacial Deformities. J. Clin. Med. 2023, 12, 7082. [Google Scholar] [CrossRef]
  126. Cho, M.J.; Hallac, R.R.; Effendi, M.; Seaward, J.R.; Kane, A.A. Comparison of an unsupervised machine learning algorithm and surgeon diagnosis in the clinical differentiation of metopic craniosynostosis and benign metopic ridge. Sci. Rep. 2018, 8, 6312. [Google Scholar] [CrossRef]
  127. Bloch, K.; Geoffroy, M.; Taverne, M.; van de Lande, L.; O’Sullivan, E.; Liang, C.; Paternoster, G.; Moazen, M.; Laporte, S.; Khonsari, R.H. New diagnostic criteria for metopic ridges and trigonocephaly: A 3D geometric approach. Orphanet J. Rare Dis. 2024, 19, 204. [Google Scholar] [CrossRef]
  128. Bruce, M.K.; Tao, W.; Beiriger, J.; Christensen, C.; Pfaff, M.J.; Whitaker, R.; Goldstein, J.A. 3D Photography to Quantify the Severity of Metopic Craniosynostosis. Cleft Palate Craniofacial J. 2023, 60, 971–979. [Google Scholar] [CrossRef] [PubMed]
  129. Beiriger, J.W.; Tao, W.; Bruce, M.K.; Anstadt, E.; Christensen, C.; Smetona, J.; Whitaker, R.; Goldstein, J.A. CranioRate: An Image-Based, Deep-Phenotyping Analysis Toolset and Online Clinician Interface for Metopic Craniosynostosis. Plast. Reconstr. Surg. 2024, 153, 112e–119e. [Google Scholar] [CrossRef]
  130. Anstadt, E.E.; Tao, W.; Guo, E.; Dvoracek, L.; Bruce, M.K.; Grosse, P.J.; Wang, L.; Kavan, L.; Whitaker, R.; Goldstein, J.A. Quantifying the Severity of Metopic Craniosynostosis Using Unsupervised Machine Learning. Plast. Reconstr. Surg. 2023, 151, 396–403. [Google Scholar] [CrossRef]
  131. Bhalodia, R.; Dvoracek, L.A.; Ayyash, A.M.; Kavan, L.; Whitaker, R.; Goldstein, J.A. Quantifying the Severity of Metopic Craniosynostosis: A Pilot Study Application of Machine Learning in Craniofacial Surgery. J. Craniofacial Surg. 2020, 31, 697–701. [Google Scholar] [CrossRef]
  132. Junn, A.; Dinis, J.; Hauc, S.C.; Bruce, M.K.; Park, K.E.; Tao, W.; Christensen, C.; Whitaker, R.; Goldstein, J.A.; Alperovich, M. Validation of Artificial Intelligence Severity Assessment in Metopic Craniosynostosis. Cleft Palate Craniofacial J. 2023, 60, 274–279. [Google Scholar] [CrossRef]
  133. Blum, J.D.; Beiriger, J.; Villavisanis, D.F.; Morales, C.; Cho, D.Y.; Tao, W.; Whitaker, R.; Bartlett, S.P.; Taylor, J.A.; Goldstein, J.A.; et al. Machine Learning in Metopic Craniosynostosis: Does Phenotypic Severity Predict Long-Term Esthetic Outcome? J. Craniofacial Surg. 2023, 34, 58–64. [Google Scholar] [CrossRef]
  134. Long, A.S.; Hauc, S.C.; Almeida, M.N.; Alper, D.P.; Beiriger, J.; Rivera, J.C.; Goldstein, J.; Mayes, L.; Persing, J.A.; Alperovich, M. Morphologic Severity and Age at Surgery Are Associated with School-Age Neurocognitive Outcomes in Metopic Craniosynostosis. Plast. Reconstr. Surg. 2024, 154, 824–835. [Google Scholar] [CrossRef]
135. Foti, S.; Rickart, A.J.; Koo, B.; O'Sullivan, E.; van de Lande, L.S.; Papaioannou, A.; Khonsari, R.; Stoyanov, D.; Jeelani, N.U.O.; Schievano, S.; et al. Latent disentanglement in mesh variational autoencoders improves the diagnosis of craniofacial syndromes and aids surgical planning. Comput. Methods Programs Biomed. 2024, 256, 108395. [Google Scholar] [CrossRef]
136. O'Sullivan, E.; van de Lande, L.S.; Papaioannou, A.; Breakey, R.W.F.; Jeelani, N.O.; Ponniah, A.; Duncan, C.; Schievano, S.; Khonsari, R.H.; Zafeiriou, S.; et al. Convolutional mesh autoencoders for the 3-dimensional identification of FGFR-related craniosynostosis. Sci. Rep. 2022, 12, 2230. [Google Scholar] [CrossRef]
137. Hennocq, Q.; Paternoster, G.; Collet, C.; Amiel, J.; Bongibault, T.; Bouygues, T.; Cormier-Daire, V.; Douillet, M.; Dunaway, D.J.; Jeelani, N.O.; et al. AI-based diagnosis and phenotype–genotype correlations in syndromic craniosynostoses. J. Cranio-Maxillofac. Surg. 2024, 52, 1172–1187. [Google Scholar] [CrossRef] [PubMed]
  138. Han, W.; Yang, X.; Wu, S.; Fan, S.; Chen, X.; Aung, Z.M.; Liu, T.; Zhang, Y.; Gu, S.; Chai, G. A new method for cranial vault reconstruction: Augmented reality in synostotic plagiocephaly surgery. J. Cranio-Maxillofac. Surg. 2019, 47, 1280–1284. [Google Scholar] [CrossRef] [PubMed]
  139. Alshomer, F.; Alazzam, A.; Alturki, A.; Almeshal, O.; Alhusainan, H. Smartphone-assisted Augmented Reality in Craniofacial Surgery. Plast. Reconstr. Surg. Glob. Open 2021, 9, e3743. [Google Scholar] [CrossRef]
  140. Garcia-Mato, D.; Moreta-Martinez, R.; Garcia-Sevilla, M.; Ochadiano, S.; Garcia-Leal, R.; Perez-Mananes, R.; Calvo-Haro, J.; Salmeron, J.; Pascau, J. Augmented reality visualization for craniosynostosis surgery. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 2021, 9, 392–399. [Google Scholar]
  141. Coelho, G.; Rabelo, N.N.; Vieira, E.; Mendes, K.; Zagatto, G.; Santos de Oliveira, R.; Raposo-Amaral, C.E.; Yoshida, M.; de Souza, M.R.; Fagundes, C.F.; et al. Augmented reality and physical hybrid model simulation for preoperative planning of metopic craniosynostosis surgery. Neurosurg. Focus 2020, 48, E19. [Google Scholar] [CrossRef]
  142. Thabit, A.; Benmahdjoub, M.; van Veelen, M.C.; Niessen, W.J.; Wolvius, E.B.; van Walsum, T. Augmented reality navigation for minimally invasive craniosynostosis surgery: A phantom study. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 1453–1460. [Google Scholar] [CrossRef]
  143. Zhu, M.; Chai, G.; Lin, L.; Xin, Y.; Tan, A.; Bogari, M.; Zhang, Y.; Li, Q. Effectiveness of a Novel Augmented Reality-Based Navigation System in Treatment of Orbital Hypertelorism. Ann. Plast. Surg. 2016, 77, 662–668. [Google Scholar] [CrossRef] [PubMed]
  144. Ruggiero, F.; Cercenelli, L.; Emiliani, N.; Badiali, G.; Bevini, M.; Zucchelli, M.; Marcelli, E.; Tarsitano, A. Preclinical Application of Augmented Reality in Pediatric Craniofacial Surgery: An Accuracy Study. J. Clin. Med. 2023, 12, 2693. [Google Scholar] [CrossRef]
  145. Chen, J.; Kumar, S.; Shallal, C.; Leo, K.T.; Girard, A.; Bai, Y.; Li, Y.; Jackson, E.M.; Cohen, A.R.; Yang, R. Caregiver Preferences for Three-Dimensional Printed or Augmented Reality Craniosynostosis Skull Models: A Cross-Sectional Survey. J. Craniofacial Surg. 2022, 33, 151–155. [Google Scholar] [CrossRef]
  146. Beiriger, J.W.; Tao, W.; Irgebay, Z.; Smetona, J.; Dvoracek, L.; Kass, N.M.; Dixon, A.; Zhang, C.; Mehta, M.; Whitaker, R.; et al. A Longitudinal Analysis of Pre- and Post-Operative Dysmorphology in Metopic Craniosynostosis. Cleft Palate Craniofacial J. 2024, 10556656241237605. [Google Scholar] [CrossRef]
  147. Villavisanis, D.F.; Shakir, S.; Zhao, C.; Cho, D.Y.; Barrero, C.; Blum, J.D.; Swanson, J.W.; Bartlett, S.P.; Tucker, A.M.; Taylor, J.A. Predicting Changes in Cephalic Index Following Spring-mediated Cranioplasty for Nonsyndromic Sagittal Craniosynostosis: A Stepwise and Machine Learning Algorithm Approach. J. Craniofacial Surg. 2022, 33, 2333–2338. [Google Scholar] [CrossRef]
  148. Anderson, M.G.; Jungbauer, D.; Leclair, N.K.; Ahn, E.S.; Stoltz, P.; Martin, J.E.; Hersh, D.S.; Bookland, M.J. Incorporation of a biparietal narrowing metric to improve the ability of machine learning models to detect sagittal craniosynostosis with 2D photographs. Neurosurg. Focus 2023, 54, E9. [Google Scholar] [CrossRef] [PubMed]
149. Rickart, A.J.; Foti, S.; van de Lande, L.S.; Wagner, C.; Schievano, S.; Jeelani, N.U.O.; Clarkson, M.J.; Ong, J.; Swanson, J.W.; Bartlett, S.P.; et al. Using a Disentangled Neural Network to Objectively Assess the Outcomes of Midfacial Surgery in Syndromic Craniosynostosis. Plast. Reconstr. Surg. 2024. [Google Scholar] [CrossRef] [PubMed]
  150. Baek, R.M.; Cho, A.; Chung, Y.G.; Jeon, Y.; Kim, H.; Hwang, H.; Kang, J.; Myung, Y. Diagnosis and Screening of Velocardiofacial Syndrome by Evaluating Facial Photographs Using a Deep Learning-Based Algorithm. Plast. Reconstr. Surg. 2024. [Google Scholar] [CrossRef] [PubMed]
  151. Xu, M.; Liu, B.; Luo, Z.; Sun, M.; Wang, Y.; Yin, N.; Tang, X.; Song, T. Using a New Deep Learning Method for 3D Cephalometry in Patients with Hemifacial Microsomia. Ann. Plast. Surg. 2023, 91, 381–384. [Google Scholar] [CrossRef] [PubMed]
  152. Han, W.; Xia, W.; Zhang, Z.; Kim, B.S.; Chen, X.; Yan, Y.; Sun, M.; Lin, L.; Xu, H.; Chai, G.; et al. Radiomics and Artificial Intelligence Study of Masseter Muscle Segmentation in Patients with Hemifacial Microsomia. J. Craniofacial Surg. 2023, 34, 809–812. [Google Scholar] [CrossRef]
  153. Ter Horst, R.; van Weert, H.; Loonen, T.; Bergé, S.; Vinayahalingam, S.; Baan, F.; Maal, T.; de Jong, G.; Xi, T. Three-dimensional virtual planning in mandibular advancement surgery: Soft tissue prediction based on deep learning. J. Cranio-Maxillofac. Surg. 2021, 49, 775–782. [Google Scholar] [CrossRef] [PubMed]
  154. Gao, Y.; Lin, L.; Chai, G.; Xie, L. A feasibility study of a new method to enhance the augmented reality navigation effect in mandibular angle split osteotomy. J. Cranio-Maxillofac. Surg. 2019, 47, 1242–1248. [Google Scholar] [CrossRef]
  155. Zhang, Z.; Zhao, Z.; Han, W.; Kim, B.S.; Yan, Y.; Chen, X.; Lin, L.; Shen, W.; Chai, G. Accuracy and safety of robotic navigation-assisted distraction osteogenesis for hemifacial microsomia. Front. Pediatr. 2023, 11, 1158078. [Google Scholar] [CrossRef]
  156. Liu, X.; Zhang, Z.; Han, W.; Zhao, Z.; Kim, B.S.; Yan, Y.; Chen, X.; Wang, X.; Li, X.; Yang, X.; et al. Efficacy of navigation system-assisted distraction osteogenesis for hemifacial microsomia based on artificial intelligence for 3 to 18 years old: Study protocol for a randomized controlled single-blind trial. Trials 2024, 25, 42. [Google Scholar] [CrossRef]
  157. Kim, B.S.; Zhang, Z.; Sun, M.; Han, W.; Chen, X.; Yan, Y.; Shi, Y.; Xu, H.; Lin, L.; Chai, G. Feasibility of a Robot-Assisted Surgical Navigation System for Mandibular Distraction Osteogenesis in Hemifacial Microsomia: A Model Experiment. J. Craniofacial Surg. 2023, 34, 525–531. [Google Scholar] [CrossRef]
  158. Cai, E.Z.; Yee, T.H.; Gao, Y.; Lu, W.W.; Lim, T.C. Mixed reality guided advancement osteotomies in congenital craniofacial malformations. J. Plast. Reconstr. Aesthetic Surg. 2024, 98, 100–102. [Google Scholar] [CrossRef]
  159. Qu, M.; Hou, Y.; Xu, Y.; Shen, C.; Zhu, M.; Xie, L.; Wang, H.; Zhang, Y.; Chai, G. Precise positioning of an intraoral distractor using augmented reality in patients with hemifacial microsomia. J. Cranio-Maxillofac. Surg. 2015, 43, 106–112. [Google Scholar] [CrossRef]
  160. Zhang, Z.; Kim, B.S.; Han, W.; Sun, M.; Chen, X.; Yan, Y.; Xu, H.; Chai, G.; Lin, L. Preliminary study of the accuracy and safety of robot-assisted mandibular distraction osteogenesis with electromagnetic navigation in hemifacial microsomia using rabbit models. Sci. Rep. 2022, 12, 19572. [Google Scholar] [CrossRef]
  161. Liu, K.; Chen, S.; Wang, X.; Ma, Z.; Shen, S.G.F. Utilization of facial fat grafting augmented reality guidance system in facial soft tissue defect reconstruction. Head Face Med. 2024, 20, 51. [Google Scholar] [CrossRef] [PubMed]
  162. Wang, D.; Chen, X.; Wu, Y.; Tang, H.; Deng, P. Artificial intelligence for assessing the severity of microtia. Front. Surg. 2022, 9, 929110. [Google Scholar] [CrossRef]
  163. Jiang, D.; Wang, S.; Ma, C.; Yang, J.; He, L. A Feasibility Study of HoloLens Ear Image Guidance for Ear Reconstruction. Ann. Plast. Surg. 2025, 94, e11–e20. [Google Scholar] [CrossRef]
164. Díez-Montiel, A.; Pose-Díez-de-la-Lastra, A.; González-Álvarez, A.; Salmerón, J.I.; Pascau, J.; Ochandiano, S. Tablet-based augmented reality and 3D printed templates in fully guided microtia reconstruction: A clinical workflow. 3D Print. Med. 2024, 10, 17. [Google Scholar] [CrossRef]
  165. Nuri, T.; Mitsuno, D.; Otsuki, Y.; Ueda, K. Augmented Reality Technology for the Positioning of the Auricle in the Treatment of Microtia. Plast. Reconstr. Surg. Glob. Open 2020, 8, e2626. [Google Scholar] [CrossRef]
  166. Tolba, M.; Qian, Z.J.; Lin, H.F.; Yeom, K.W.; Truong, M.T. Use of Convolutional Neural Networks to Evaluate Auricular Reconstruction Outcomes for Microtia. Laryngoscope 2023, 133, 2413–2416. [Google Scholar] [CrossRef]
  167. Ye, J.; Lei, C.; Wei, Z.; Wang, Y.; Zheng, H.; Wang, M.; Wang, B. Evaluation of reconstructed auricles by convolutional neural networks. J. Plast. Reconstr. Aesthetic Surg. 2022, 75, 2293–2301. [Google Scholar] [CrossRef]
  168. Pathak, A.; Dhamande, M.M.; Gujjelwar, S.; Das, P.; Chheda, E.V.; Puthenkandathil, R. Fabrication of Implant-Supported Auricular Prosthesis Using Artificial Intelligence. Cureus 2024, 16, e60267. [Google Scholar] [CrossRef]
  169. Onakpojeruo, E.P.; Mustapha, M.T.; Ozsahin, D.U.; Ozsahin, I. Enhanced MRI-based brain tumour classification with a novel Pix2pix generative adversarial network augmentation framework. Brain Commun. 2024, 6, fcae372. [Google Scholar] [CrossRef] [PubMed]
  170. Onakpojeruo, E.P.; Mustapha, M.T.; Ozsahin, D.U.; Ozsahin, I. A Comparative Analysis of the Novel Conditional Deep Convolutional Neural Network Model, Using Conditional Deep Convolutional Generative Adversarial Network-Generated Synthetic and Augmented Brain Tumor Datasets for Image Classification. Brain Sci. 2024, 14, 559. [Google Scholar] [CrossRef] [PubMed]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Citation: Harrison, L.M.; Edison, R.L.; Hallac, R.R. Artificial Intelligence Applications in Pediatric Craniofacial Surgery. Diagnostics 2025, 15, 829. https://doi.org/10.3390/diagnostics15070829