Review

Artificial Intelligence to Detect Obstructive Sleep Apnea from Craniofacial Images: A Narrative Review

1 Research Department, Institute of Neuropsychiatry, Tokyo 162-0851, Japan
2 Division of Aging and Geriatric Dentistry, Department of Oral Function and Morphology, Tohoku University Graduate School of Dentistry, Sendai 980-8575, Japan
3 Division of Anesthesiology, Department of Perioperative Medicine, Showa Medical University School of Dentistry, Tokyo 145-8515, Japan
4 Department of Dentistry and Oral Surgery, Aichi Medical University, Aichi 480-1195, Japan
* Author to whom correspondence should be addressed.
Submission received: 13 July 2025 / Revised: 12 September 2025 / Accepted: 28 September 2025 / Published: 9 October 2025
(This article belongs to the Special Issue Artificial Intelligence in Oral Medicine: Advancements and Challenges)

Abstract

Obstructive sleep apnea (OSA) is a chronic disorder associated with serious health consequences, yet many cases remain undiagnosed due to limited access to standard diagnostic tools such as polysomnography. Recent advances in artificial intelligence (AI) have enabled the development of deep convolutional neural networks that analyze craniofacial radiographs, particularly lateral cephalograms, to detect anatomical risk factors for OSA. The goal of this approach is not to replace polysomnography but to identify individuals with a high suspicion of OSA at the primary care or dental level and to guide them toward timely and appropriate diagnostic evaluation. Current studies have demonstrated that AI can recognize patterns of oropharyngeal crowding and anatomical imbalance of the upper airway with high accuracy, often exceeding manual assessment. Furthermore, interpretability analyses suggest that AI focuses on clinically meaningful regions, including the tongue, mandible, and upper airway. Unexpected findings such as predictive signals from outside the airway also suggest AI may detect subtle features associated with age or obesity. Ultimately, integrating AI with cephalometric imaging may support early screening and referral for polysomnography, improving care pathways and reducing delays in OSA treatment.

1. Introduction

Obstructive sleep apnea (OSA) is characterized by two clinically distinct features: a sleep disorder and a respiratory disorder, both resulting from repeated episodes of upper airway occlusion during sleep [1]. These episodes lead to intermittent cessation of breathing and respiratory instability, ultimately causing excessive daytime sleepiness due to sleep fragmentation. As a consequence, OSA reduces occupational productivity, increases absenteeism, and is associated with a higher incidence of traffic and industrial accidents [2,3]. If left untreated, OSA tends to worsen in severity and significantly increases the risk of adverse cardiovascular outcomes. Although treatment with mandibular advancement devices or continuous positive airway pressure is now widely recognized as effective in mitigating such risks, public health and health economic perspectives support the development of strategies to detect undiagnosed OSA patients [4].
Dentists routinely assess patients’ general appearance (e.g., obese vs. non-obese) as soon as patients enter the room for dental treatment. They also inquire about medical history, including medications such as antihypertensive agents, hypnotics, tranquilizers, and stimulants; such information often hints at the presence of OSA, as the condition is strongly associated with hypertension, hypersomnia, and depression [4]. Moreover, dentists observe the face, tongue, dental arches, and oral cavity during every treatment, offering an opportunity to notice morphological features typical of OSA. Lateral cephalograms, commonly obtained in dental clinics and otolaryngology practices, are particularly useful for evaluating the position and size of the maxilla and mandible relative to the cranial base [5]. These images can also reveal adenoidal and tonsillar hypertrophy, both of which are typical signs of respiratory complications such as snoring and OSA. Accordingly, dentists are already in a unique position to identify OSA during routine care simply by slightly shifting their perspective to include screening for OSA in addition to standard dental treatment (Figure 1). This can be considered not a matter of developing new technology but of changing perspective [6].
This narrative review discusses the relevance of OSA, dentistry, and artificial intelligence (AI), and explores the potential future use of two-dimensional lateral cephalograms in conjunction with AI for the advancement of OSA care.

2. Understanding the Pathogenesis of OSA from a Dental Perspective

The pathophysiology of OSA can be classified into four clinical traits or endotypes: (i) impaired pharyngeal anatomy; (ii) impaired function of the pharyngeal dilator muscles; (iii) unstable respiratory control (i.e., high loop gain); and (iv) a low respiratory arousal threshold, meaning the individual awakens too easily in response to minor upper airway narrowing during sleep [7,8]. Notably, while the non-anatomical traits (ii–iv) require advanced diagnostic tools, such as overnight polysomnography combined with highly specialized data analysis techniques, the anatomical trait (i) is directly observable and can be assessed through imaging, without the need for diagnostic polysomnography. Studies have reported that among OSA patients, 19% exhibit mild, 58% moderate, and 23% severe anatomical abnormalities [7,8]. Although this anatomical approach is classical and not novel in itself, craniofacial images, whether two-dimensional or three-dimensional, remain attractive for OSA detection. Image-based assessment is relatively simple, reduces the influence of human subjectivity in diagnosis, and does not require burdensome sleep studies or reliance on questionnaires [9].
Prior to the advent of the third wave of AI, conventional two-dimensional cephalometric analyses demonstrated that although tongue size is generally proportional to maxillomandibular dimensions in healthy individuals, patients with OSA tend to have disproportionately larger tongues relative to jaw size [10,11]. Conversely, patients whose jaw size is insufficient relative to tongue volume are more likely to exhibit OSA. This anatomical imbalance, expressed as the ratio of soft tissue volume to dentofacial skeletal size (i.e., the degree of oropharyngeal crowding), has been shown to influence upper airway obstruction during sleep and the development of OSA [12]. We have further suggested that increased oropharyngeal crowding is positively correlated with OSA severity; that is, the more crowded the upper airway, the more severe the condition [12]. It is therefore reasonable to consider that if dentists routinely assess the degree of oropharyngeal crowding using cephalometric imaging, it may facilitate early identification of patients at risk for OSA. Accordingly, it is also reasonable to hypothesize that AI could be trained to detect such anatomical indicators of OSA directly from images.

3. Concept of AI Use in Detecting OSA from Images

AI is a field of computer science that aims to develop systems capable of performing tasks that typically require human intelligence. One major application of AI is machine learning, which enables systems to learn from data and make predictions without being explicitly programmed [13,14]. A particularly powerful subset of machine learning is deep learning (DL), which employs multi-layered neural networks to learn complex representations from large datasets.
The theoretical foundations of DL were established decades before its global practical success. The Amari–Hopfield network, a term for recurrent neural systems based on energy-driven associative memory, integrates the pioneering work of Amari [15,16], who introduced continuous, self-organizing neural fields and attractor dynamics in the 1970s, with Hopfield’s later discrete network model [17], which brought these ideas into broader recognition. Although differing in formulation, these models together laid the groundwork for attractor-based computation and inspired key principles that underpin modern DL theory [18].
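To make the attractor principle concrete, the following minimal sketch (Python/NumPy; the two stored patterns and the eight-unit network are arbitrary toy choices, not drawn from any cited work) stores patterns in a Hebbian weight matrix and recovers one from a corrupted cue.

```python
import numpy as np

# Minimal Hopfield-style associative memory (cf. [15,16,17]). Two patterns are
# stored in a Hebbian weight matrix; recall iterates sign updates, which
# descend the network energy E(s) = -0.5 * s @ W @ s toward a stored attractor.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)  # no self-connections

state = np.array([1, -1, 1, -1, 1, -1, -1, -1])  # cue: first pattern, one bit flipped
for _ in range(10):  # synchronous updates until a fixed point is reached
    new = np.sign(W @ state).astype(int)
    new[new == 0] = 1
    if np.array_equal(new, state):
        break
    state = new
print(state)  # recovers the stored pattern closest to the cue
```

Modern DL does not use this architecture directly, but the idea that computation can be framed as descent toward learned stable states influenced later energy-based and recurrent models [18].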
A representative DL architecture is the convolutional neural network (CNN), which is specifically designed for processing visual data. When composed of many layers, these models are referred to as deep convolutional neural networks (DCNNs). In DCNNs, early layers identify low-level features such as edges or colors, while deeper layers capture increasingly abstract elements, including shapes and entire objects. This hierarchical feature extraction makes DCNNs particularly effective for image recognition tasks. Therefore, it is not surprising that researchers have begun applying DCNNs to two-dimensional cephalometric images to detect OSA, even in the absence of a well-defined working hypothesis, given the high capacity of these networks to identify anatomical features associated with OSA.
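As a schematic illustration of this hierarchy, a minimal PyTorch sketch is given below; the layer widths, input size (a 224 × 224 grayscale image), and single OSA-versus-non-OSA output logit are our illustrative assumptions, not the architecture of any cited study.

```python
import torch
import torch.nn as nn

class CephalogramCNN(nn.Module):
    """Toy DCNN illustrating hierarchical feature extraction: early
    convolutions respond to low-level features (edges, intensity
    gradients), deeper ones to increasingly abstract shapes."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # edges
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # local contours
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),                   # larger structures
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # single OSA-vs-non-OSA logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# A batch of four grayscale 224x224 "cephalograms" (random placeholders).
logits = CephalogramCNN()(torch.randn(4, 1, 224, 224))
```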
In practice, the training of AI models for OSA detection using cephalometric radiographs typically follows established deep learning workflows. Images are preprocessed to standardize orientation and resolution, and the dataset is divided into training, validation, and/or independent test subsets to prevent overfitting. Data augmentation strategies, such as random rotations or flips, are often employed to increase model robustness. Training is performed using a DCNN, with model performance monitored on the validation set (if used) and finalized based on accuracy and area under the receiver operating characteristic curve (AUC) in the test set. This methodological framework, which has been consistently applied in recent studies [9,19], ensures that the reported predictive accuracies reflect genuine learning of anatomical and morphological features rather than spurious correlations, while providing transparent documentation of the model development process for reproducibility and clinical credibility.
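A compressed, hedged sketch of this workflow is shown below, with synthetic tensors standing in for preprocessed radiographs and PSG-derived labels; the split sizes, stand-in network, and hyperparameters are placeholders, and on real images the augmentation step (random rotations and flips) would be applied through torchvision transforms. The AUC is computed with scikit-learn's roc_auc_score.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for preprocessed cephalograms with PSG-derived 0/1 labels.
dataset = TensorDataset(torch.randn(200, 1, 224, 224),
                        torch.randint(0, 2, (200,)).float())
train_set, val_set, test_set = random_split(dataset, [140, 30, 30])

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def auc(split) -> float:
    # AUC of the model's sigmoid probabilities against the true labels.
    model.eval()
    ys, ps = [], []
    with torch.no_grad():
        for x, y in DataLoader(split, batch_size=16):
            ps += torch.sigmoid(model(x).squeeze(1)).tolist()
            ys += y.tolist()
    return roc_auc_score(ys, ps)

for epoch in range(5):
    model.train()
    for x, y in DataLoader(train_set, batch_size=16, shuffle=True):
        optimizer.zero_grad()
        loss = loss_fn(model(x).squeeze(1), y)
        loss.backward()
        optimizer.step()
    # Monitor validation AUC to select the final model and detect overfitting.
    print(f"epoch {epoch}: val AUC = {auc(val_set):.2f}")

# Final report: AUC on the untouched, held-out test set.
print(f"test AUC = {auc(test_set):.2f}")
```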
Beyond two-dimensional cephalometry, several other imaging modalities have also been explored for AI-driven OSA detection. For example, brain MRI, particularly diffusion tensor imaging, has been applied with machine learning models to classify OSA status, achieving AUC values of approximately 0.84–0.85 [20], while other investigations have analyzed upper airway soft tissues using MRI and performed automated segmentation to extract anatomical features relevant to OSA risk [21]. Computed tomography (CT) and cone-beam CT similarly allow detailed three-dimensional evaluation of craniofacial skeletal and upper airway anatomy, and AI models trained on these modalities have shown promise in mapping upper airway narrowing and structural relationships characteristic of OSA [22,23]. In addition, recent studies using frontal and profile facial photography combined with deep learning have demonstrated predictive ability for OSA by detecting craniofacial phenotypic markers, such as mandibular width and soft-tissue contours [24]. Nevertheless, the rationale for why two-dimensional lateral cephalometry remains a particularly suitable and often preferred modality for OSA screening in dental settings will be discussed in the following section.

4. What Does AI Focus on in a Lateral Cephalometric Image?

While alternative modalities such as MRI, CT, and cone-beam CT provide valuable anatomical insights, their practical limitations (e.g., radiation exposure, high cost, procedural complexity, and lack of standardized acquisition protocols) underscore the advantages of widely available, low-dose two-dimensional lateral cephalograms as the most pragmatic and scalable foundation for AI-based OSA screening in dental settings [9]. Lateral cephalograms are already integrated into routine orthodontic and prosthodontic workflows, require minimal patient burden, and are highly standardized worldwide, with consistent positioning and measurement protocols. This high degree of standardization not only facilitates AI model training but also enables meaningful cross-population and interethnic comparisons, making cephalograms uniquely suited for global research initiatives and large-scale OSA screening efforts.
Tsuiki et al. [9] were among the first to demonstrate that AI could accurately identify individuals with OSA using image-based analysis. Notably, their use of a DCNN was grounded in a clinically relevant question. In actual practice, sleep dentists and physicians often make intuitive judgments about the presence of OSA by estimating the degree of craniofacial abnormality, for example, observing a large tongue relative to the mandible or a low hyoid bone position [10,11,25,26] (Figure 2A). Moreover, as shown in Figure 2B, it is visually easier to distinguish severe OSA (group c) from non-OSA individuals (group a) than it is to differentiate mild/moderate OSA (group b) from non-OSA (group a). This observation is consistent with previous findings that greater anatomical imbalance, such as increased oropharyngeal crowding, is associated with increased OSA severity [12]. Based on this rationale, it was clinically and methodologically reasonable to begin by testing whether AI could distinguish between two clearly distinct groups: individuals with severe OSA (group c) and those without OSA (group a). This binary comparison served as an appropriate first step in validating AI’s capacity to recognize key anatomical differences associated with severe OSA.

5. Interpretability and Stratification in AI-Based OSA Detection

AI achieved higher AUC values (0.89 and 0.92) in distinguishing between these clearly different groups than conventional manual analysis (AUC 0.75), indicating that AI outperformed human-based methods [9] (Figure 3). However, what is far more critical for clinical implementation is the ability to explain the reasons behind these outcomes. While it is widely accepted that classification accuracy depends on the specific network architecture employed, it should also be acknowledged that a cephalometric image reflects not only craniofacial skeletal and soft tissue structures but also broader epidemiological risk factors associated with OSA, such as sex, age, and obesity [4]. A recent study [19] highlighted the inclusion of both sexes, a broad age range, and varying OSA severities as a strength of its dataset. Yet the model’s performance (AUC 0.73–0.82) suggests that further improvements are likely if stratification based on these key risk factors is incorporated during training and evaluation.
A practical framework for this stratification may include sex-specific AI models and subgroup models stratified by relevant BMI or age cut-offs, providing more accurate and clinically meaningful predictions for OSA. Sex-related anatomical differences in the craniofacial region between individuals with and without OSA have long been recognized, so it is reasonable, if not essential, to stratify data by sex in AI model development. Prior studies have shown that females exhibit a distinct upper airway morphology compared to males, and female OSA patients tend to have less compromised upper airways from both anatomical and functional perspectives [27,28,29]. One contributing factor to the male predisposition to upper airway collapse is increased upper airway length [28]. In addition, awake genioglossus muscle activity is reported to be greater in females than in males [29]. These sex-based physiological and anatomical differences are likely to influence AI performance and should be considered. Obesity warrants similar treatment. In obese individuals with OSA, a reduction in lung volume appears to increase pharyngeal closing pressure (i.e., make it more positive), thereby reducing upper airway compliance; as a consequence, the upper airway shortens with increasing obesity [1]. This physiological mechanism is mediated by axial forces transmitted through the trachea, commonly referred to as the “tracheal tug” [30,31]. As lung volume increases, the trachea is displaced caudally, generating downward forces on the upper airway. It is plausible that AI systems may detect differences in upper airway length between individuals without OSA and those with OSA, who are typically more obese.
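A hedged sketch of how such routing could be implemented is shown below; the BMI and age cut-offs (30 kg/m² and 50 years) and the subgroup keys are arbitrary illustrations rather than validated thresholds, and each per-stratum model would be trained only on images from its own subgroup.

```python
import torch
import torch.nn as nn

def make_model() -> nn.Module:
    # Tiny untrained stand-in for a per-stratum DCNN (cf. the sketch in Section 3).
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
    )

def stratum(sex: str, bmi: float, age: float) -> str:
    # Illustrative cut-offs only; real thresholds would need clinical validation.
    return (f"{sex}_"
            f"{'obese' if bmi >= 30 else 'nonobese'}_"
            f"{'older' if age >= 50 else 'younger'}")

# One model per subgroup, each to be trained only on its own stratum's images.
models = {
    f"{s}_{b}_{a}": make_model()
    for s in ("male", "female")
    for b in ("obese", "nonobese")
    for a in ("older", "younger")
}

def osa_probability(image: torch.Tensor, sex: str, bmi: float, age: float) -> float:
    # Route the cephalogram to the model matching the patient's stratum.
    model = models[stratum(sex, bmi, age)]
    model.eval()
    with torch.no_grad():
        return torch.sigmoid(model(image.unsqueeze(0))).item()

# Example: a 224x224 cephalogram from a 55-year-old non-obese male patient.
p = osa_probability(torch.randn(1, 224, 224), "male", 24.5, 55)
```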
One outstanding result from Tsuiki et al. [9] is the superior AUC (0.92) obtained when using the main region of interest in the image (encompassing the facial profile, upper airway, and craniofacial soft and hard tissues) compared to the full, unmodified image (AUC = 0.89) (Figure 3). This finding strongly suggests that anatomical abnormalities constitute a primary contributor to OSA, consistent with the assertion by Eckert et al. [7,8] that anatomical factors are central among the various traits underlying OSA pathophysiology. Moreover, this result implies that the AI system may primarily attend to the oropharyngeal region, an area also routinely emphasized by experienced radiologists and sleep medicine specialists. This hypothesis was partly corroborated by a subsequent study employing gradient-weighted class activation mapping (Grad-CAM), which showed that AI-based prediction of OSA was predominantly based on the upper airway, particularly the pharynx, tongue, mandible, and surrounding soft tissues [19]. In addition to Grad-CAM, other interpretability approaches have been proposed in imaging research. For example, occlusion sensitivity analysis tests the effect of systematically masking parts of an image to reveal which regions most influence predictions [32]. Model-agnostic approaches such as SHapley Additive exPlanations (SHAP) quantify the relative contribution of each feature to the model’s decision [33], while the Local Interpretable Model-agnostic Explanations (LIME) method provides case-specific linear approximations to highlight local decision boundaries [34]. A more robust and clinically interpretable explanation of AI-based OSA detection would become feasible when such methods are combined with visual saliency maps.
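To make one of these approaches concrete, the sketch below implements a basic occlusion-sensitivity pass in the spirit of Zeiler and Fergus [32]; the model is an untrained stand-in and the patch size is arbitrary, so the resulting map is purely illustrative.

```python
import torch
import torch.nn as nn

# Untrained stand-in; in practice, the trained DCNN would be analyzed.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
model.eval()

def occlusion_map(image: torch.Tensor, patch: int = 32, stride: int = 32) -> torch.Tensor:
    """image: (1, H, W) grayscale tensor. Slides a mean-gray patch over the
    image and records how much the OSA logit drops when each region is
    masked; large drops mark regions the model relies on."""
    with torch.no_grad():
        base = model(image.unsqueeze(0)).item()
        _, H, W = image.shape
        rows = (H - patch) // stride + 1
        cols = (W - patch) // stride + 1
        heat = torch.zeros(rows, cols)
        for i in range(rows):
            for j in range(cols):
                masked = image.clone()
                y, x = i * stride, j * stride
                masked[:, y:y + patch, x:x + patch] = image.mean()
                heat[i, j] = base - model(masked.unsqueeze(0)).item()
    return heat

heat = occlusion_map(torch.randn(1, 224, 224))  # 7x7 sensitivity grid
```

In a real analysis, the resulting grid would be upsampled and overlaid on the cephalogram to check whether high-sensitivity regions coincide with the tongue, mandible, and upper airway.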
As repeatedly emphasized, obesity exacerbates oropharyngeal crowding, thereby increasing OSA severity [10]. However, OSA can also be present in non-obese individuals who lack a thick neck but possess a small maxilla and/or mandible. In such individuals, even with a normal amount of soft tissue, the cross-sectional area of the upper airway is reduced, and the hyoid bone may be positioned more caudally [12,26]. A low hyoid position may reflect compensatory descent in response to excessive soft tissue volume resulting from oropharyngeal crowding: an anatomical feature to which AI systems appear particularly attuned.

6. What AI Can See in Images That Humans Cannot: Perspectives

One unexpected yet intriguing finding from the study by Tsuiki et al. [9] was that a moderate level of classification accuracy (AUC = 0.70) was achieved using only the occipital region of the image (Figure 3). The authors initially hypothesized that the AUC would be close to 0.50, essentially indicating no predictive power, because this region contains no upper airway structures. While technical artifacts, variations in imaging parameters, or model overfitting could potentially affect predictive performance, it is unlikely that the observed performance arises solely from such factors, since random errors would be expected to occur more uniformly across the images. Furthermore, the dataset comprised a sufficient number of samples (1389 lateral cephalometric radiographs), which helps mitigate overfitting concerns; a DCNN was developed and tested using these highly standardized images, as already discussed in Section 4. Therefore, the surprising result from the occipital region suggests that AI may extract information from cephalometric radiographs that is inaccessible or overlooked by human observers.
There are a few plausible explanations. Given that patients with OSA tend to be significantly older and more obese than non-OSA individuals [9], and considering that both aging and obesity are major risk factors for OSA, the AI may have identified indirect indicators related to these variables from the occipital region. For instance, age-related changes in craniofacial soft tissue thickness have been documented, suggesting that even subtle alterations in scalp or subcutaneous tissue morphology could provide predictive signals [35]. Similarly, previous studies applying DL to panoramic radiographs have shown that skeletal structures beyond the dentition, including the mandible, maxillary sinuses, and vertebrae, can be used to accurately predict age [36,37]. Furthermore, Kahm et al. conducted a study using 1922 panoramic images of patients aged 15–23 years, demonstrating that AI-based age estimation is feasible in this age group [38]. More recently, Alam et al. [39] demonstrated that neural networks and vision–language models could accurately estimate age and sex from panoramic radiographs, supporting the notion that AI can extract age-related craniofacial features from dental images, in addition to cephalometric images. Although these studies did not directly evaluate the occipital region, they suggest that age-related features outside the upper airway may provide predictive information from craniofacial images that is not readily perceptible to humans but could be captured by AI.
Nevertheless, these results should be interpreted with caution. Further validation using independent datasets and additional ablation studies will be necessary to confirm the robustness and clinical significance of the model’s findings. Such work could generate new and unforeseen hypotheses, broadening our understanding of what AI can learn from medical images beyond conventional diagnostic frameworks. If subsequent studies consistently demonstrate high predictive performance, cephalometric radiographs combined with AI for OSA detection may become widely accepted.
Another limitation is that most of the cited evidence and preliminary model development are based on Japanese cohorts, naturally reflecting the origin of our hypothesis generation (i.e., Figure 2). Craniofacial morphology is known to vary by ethnicity, and such differences may influence both OSA risk and the performance of image-based AI models. While the Japanese population provided an appropriate basis for hypothesis generation and initial validation, further external validation in diverse, multi-ethnic cohorts will be essential to confirm generalizability.
At the same time, definitive diagnosis of OSA should continue to rely on standard polysomnography (PSG), which provides accurately labeled ground-truth data and minimizes annotation noise. Although some studies have reported high prediction accuracy despite labeling issues in both OSA and non-OSA samples, or in non-OSA samples alone, the outcomes of such approaches require careful consideration [19,40]. Therefore, further validation using PSG-labeled datasets is recommended to ensure robustness and clinical applicability [41]. By carefully addressing these methodological considerations, proactively using cephalometric images for OSA detection, rather than relying on incidental findings, could represent the next stage in integrating AI into OSA triage (see Figure 1).
In practical terms, an interpretable AI framework could allow clinicians, such as dentists, to upload a patient’s lateral cephalogram along with basic information (e.g., age, sex, BMI) to a secure online platform. The AI would then provide a probability estimate of OSA presence. Such a concept emphasizes actionable interpretability for clinical users, allowing them to integrate AI insights with patient data without requiring in-depth technical knowledge. This approach represents a feasible next step in translating AI-based OSA detection from research into routine dental practice, and to our knowledge, this review is among the first to propose this clinically oriented workflow.
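A minimal sketch of such a service, assuming FastAPI for the web layer and an untrained stand-in network in place of a trained, externally validated model, might look as follows; the endpoint name and request fields are hypothetical.

```python
import io

import torch
import torch.nn as nn
from fastapi import FastAPI, File, Form, UploadFile
from PIL import Image
from torchvision import transforms

app = FastAPI()

# Stand-in network; in deployment, load the trained and externally validated
# DCNN here (and, per Section 5, possibly one model per patient stratum).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
model.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(),
    transforms.Resize((224, 224)),  # standardize resolution, as in training
    transforms.ToTensor(),
])

@app.post("/osa-screen")
async def osa_screen(
    cephalogram: UploadFile = File(...),
    age: int = Form(...),
    sex: str = Form(...),
    bmi: float = Form(...),
):
    image = Image.open(io.BytesIO(await cephalogram.read()))
    x = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probability = torch.sigmoid(model(x)).item()
    # A screening aid, not a diagnosis: high-probability cases should be
    # referred for polysomnography (see Figure 1).
    return {"osa_probability": probability, "age": age, "sex": sex, "bmi": bmi}
```

Served with, for example, uvicorn, such an endpoint would return a probability intended to support referral for PSG, not to replace it.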

7. Conclusions

While the early detection of OSA using AI and craniofacial imaging offers clear benefits for clinical management, the ability to explain the underlying rationale behind AI-generated results is of even greater importance. Without transparent interpretability, improvements in prediction accuracy alone are unlikely to justify clinical implementation. Therefore, rigorous comparison between AI-based analyses and conventional manual methods, grounded in decades of accumulated imaging data, remains indispensable. Such efforts will help ensure the responsible and effective integration of AI into the diagnostic workflow for OSA.

8. Patents

The image-based analysis method illustrated in Figure 3 and partly discussed in Tsuiki et al. [9] is subject to a pending Japanese patent: “Disease determination device and disease determination program,” Inventors: Satoru Tsuiki, Hiroki Enno; Applicants: Satoru Tsuiki, Hiroki Enno; Japanese Patent Application No. 2021-011378, filed in 2021.

Author Contributions

Conceptualization, S.T.; writing—original draft preparation, S.T.; writing—review and editing, T.F., E.I., and A.F.; supervision, T.F. All authors have read and agreed to the published version of the manuscript.

Funding

This review draws in part on findings and perspectives developed through projects funded by the Japan Society for the Promotion of Science (JSPS) KAKENHI (Grant Numbers: 22K10069, 24K13038, and 25K13118).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable.

Acknowledgments

The authors are grateful to Hiroki Enno (Plasma Inc., Tokyo, Japan) for his exceptional expertise and support in the AI analyses. Part of this report reflects outcomes derived from an international collaborative project between the Institute of Neuropsychiatry, Tokyo (Satoru Tsuiki, Eiki Ito, and Tatsuya Fukuda), and the Department of Oral Health Sciences, Faculty of Dentistry, The University of British Columbia, Vancouver (Fernanda Almeida).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Isono, S. Obstructive sleep apnea of obese adults: Pathophysiology and perioperative airway management. Anesthesiology 2009, 110, 908–921. [Google Scholar] [CrossRef]
  2. Asaoka, S.; Namba, K.; Tsuiki, S.; Komada, Y.; Inoue, Y. Excessive daytime sleepiness among Japanese public transportation drivers engaged in shiftwork. J. Occup. Environ. Med. 2010, 52, 813–818. [Google Scholar] [CrossRef]
  3. Hirsch Allen, A.J.M.; Bansback, N.; Ayas, N.T. The effect of OSA on work disability and work-related injuries. Chest 2015, 147, 1422–1428. [Google Scholar] [CrossRef]
  4. Akashiba, T.; Inoue, Y.; Uchimura, N.; Ohi, M.; Kasai, T.; Kawana, F.; Sakurai, S.; Takegami, M.; Tachikawa, R.; Tanigawa, T.; et al. Sleep Apnea Syndrome (SAS) Clinical Practice Guidelines 2020. Sleep Biol. Rhythms 2022, 20, 5–37. [Google Scholar] [CrossRef]
  5. Neelapu, B.C.; Kharbanda, O.P.; Sardana, H.K.; Balachandran, R.; Sardana, V.; Kapoor, P.; Gupta, A.; Vasamsetti, S. Craniofacial and upper airway morphology in adult obstructive sleep apnea patients: A systematic review and meta-analysis of cephalometric studies. Sleep. Med. Rev. 2017, 31, 79–90. [Google Scholar] [CrossRef] [PubMed]
  6. Tsuiki, S.; Kohzuka, Y.; Fukuda, T.; Iijima, T. Contribution of dentists to detecting obstructive sleep apnea. J. Oral Sleep. Med. 2023, 9, 25–32, (In Japanese with English Abstract). [Google Scholar]
  7. Eckert, D.J. Phenotypic approaches to obstructive sleep apnoea—New pathways for targeted therapy. Sleep. Med. Rev. 2018, 37, 45–59. [Google Scholar] [CrossRef]
  8. Carberry, J.C.; Amatoury, J.; Eckert, D.J. Personalized management approach for obstructive sleep apnea. Chest 2018, 153, 744–755. [Google Scholar] [CrossRef]
  9. Tsuiki, S.; Nagaoka, T.; Fukuda, T.; Sakamoto, Y.; Almeida, F.R.; Nakayama, H.; Inoue, Y.; Enno, H. Machine learning for image-based detection of patients with obstructive sleep apnea: An exploratory study. Sleep Breath. 2021, 25, 2297–2305. [Google Scholar] [CrossRef]
  10. Tsuiki, S.; Isono, S.; Ishikawa, T.; Yamashiro, Y.; Tatsumi, K.; Nishino, T. Anatomical balance of the upper airway and obstructive sleep apnea. Anesthesiology 2008, 108, 1009–1015. [Google Scholar] [CrossRef] [PubMed]
  11. Watanabe, T.; Isono, S.; Tanaka, A.; Tanzawa, H.; Nishino, T. Contribution of body habitus and craniofacial characteristics to segmental closing pressures of the passive pharynx in patients with sleep-disordered breathing. Am. J. Respir. Crit. Care Med. 2002, 165, 260–265. [Google Scholar] [CrossRef]
  12. Ito, E.; Tsuiki, S.; Maeda, K.; Okajima, I.; Inoue, Y. Oropharyngeal crowding closely relates to aggravation of OSA. Chest 2016, 150, 346–352. [Google Scholar] [CrossRef]
  13. Meyer, A.; Zverinski, D.; Pfahringer, B.; Kempfert, J.; Kuehne, T.; Sündermann, S.H.; Stamm, C.; Hofmann, T.; Falk, V.; Eickhoff, C. Machine learning for real-time prediction of complications in critical care: A retrospective study. Lancet Respir. Med. 2018, 6, 905–914. [Google Scholar] [CrossRef] [PubMed]
  14. Xu, Z.; Lin, A.; Han, X. Current AI applications and challenges in oral pathology. Oral 2025, 5, 2. [Google Scholar] [CrossRef] [PubMed]
  15. Amari, S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern. 1977, 27, 77–87. [Google Scholar] [CrossRef] [PubMed]
  16. Dominguez, D.; Koroutchev, K.; Serrano, E.; Rodríguez, F.B. Information and topology in attractor neural networks. Neural Comput. 2007, 19, 956–973. [Google Scholar] [CrossRef]
  17. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558. [Google Scholar] [CrossRef]
  18. Hinton, G.E.; Osindero, S.; Teh, Y.W. A fast learning algorithm for deep belief nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef]
  19. Jeong, H.G.; Kim, T.; Hong, J.E.; Kim, H.J.; Yun, S.Y.; Kim, S.; Yoo, J.; Lee, S.H.; Thomas, R.J.; Yun, C.H. Automated deep neural network analysis of lateral cephalogram data can aid in detecting obstructive sleep apnea. J. Clin. Sleep Med. 2023, 19, 327–337. [Google Scholar] [CrossRef]
  20. Pang, B.; Doshi, S.; Roy, B.; Lai, M.; Ehlert, L.; Aysola, R.S.; Kang, D.W.; Anderson, A.; Joshi, S.H.; Tward, D.; et al. Machine learning approach for obstructive sleep apnea screening using brain diffusion tensor imaging. J. Sleep Res. 2023, 32, e13729. [Google Scholar] [CrossRef]
  21. Bommineni, V.L.; Erus, G.; Doshi, J.; Singh, A.; Keenan, B.T.; Schwab, R.J.; Wiemken, A.; Davatzikos, C. Automatic Segmentation and Quantification of Upper Airway Anatomic Risk Factors for Obstructive Sleep Apnea on Unprocessed Magnetic Resonance Images. Acad. Radiol. 2023, 30, 421–430. [Google Scholar] [CrossRef]
  22. Kim, J.W.; Lee, K.; Kim, H.J.; Park, H.C.; Hwang, J.Y.; Park, S.W.; Kong, H.J.; Kim, J.Y. Predicting Obstructive Sleep Apnea Based on Computed Tomography Scans Using Deep Learning Models. Am. J. Respir. Crit. Care Med. 2024, 210, 211–221. [Google Scholar] [CrossRef]
  23. Giorgi, L.; Nardelli, D.; Moffa, A.; Iafrati, F.; Di Giovanni, S.; Olszewska, E.; Baptista, P.; Sabatino, L.; Casale, M. Advancements in Obstructive Sleep Apnea Diagnosis and Screening Through Artificial Intelligence: A Systematic Review. Healthcare 2025, 13, 181. [Google Scholar] [CrossRef]
  24. He, S.; Li, Y.; Zhang, C.; Li, Z.; Ren, Y.; Li, T.; Wang, J. Deep learning technique to detect craniofacial anatomical abnormalities concentrated on middle and anterior of face in patients with sleep apnea. Sleep Med. 2023, 112, 12–20. [Google Scholar] [CrossRef]
  25. Isono, S.; Tanaka, A.; Tagaito, Y.; Ishikawa, T.; Nishino, T. Influences of head positions and bite opening on collapsibility of the passive pharynx. J. Appl. Physiol. 2004, 97, 339–346. [Google Scholar] [CrossRef] [PubMed]
  26. Isono, S.; Tsuiki, S. Difficult tracheal intubation and a low hyoid. Anesthesiology 2009, 110, 431. [Google Scholar] [CrossRef]
  27. Martin, S.E.; Mathur, R.; Marshall, I.; Douglas, N.J. The effect of age, sex, obesity and posture on upper airway size. Eur. Respir. J. 1997, 10, 2087–2090. [Google Scholar] [CrossRef] [PubMed]
  28. Malhotra, A.; Huang, Y.; Fogel, R.B.; Pillar, G.; Edwards, J.K.; Kikinis, R.; Loring, S.H.; White, D.P. The male predisposition to pharyngeal collapse: Importance of airway length. Am. J. Respir. Crit. Care Med. 2002, 166, 1388–1395. [Google Scholar] [CrossRef]
  29. Popovic, R.M.; White, D.P. Influence of gender on waking genioglossal electromyogram and upper airway resistance. Am. J. Respir. Crit. Care Med. 1995, 152, 725–731. [Google Scholar] [CrossRef] [PubMed]
  30. Van de Graaff, W.B.; Gottfried, S.B.; Mitra, J.; Van Lunteren, E.; Cherniack, N.S.; Strohl, K.P. Respiratory function of hyoid muscles and hyoid arch. J. Appl. Physiol. 1984, 57, 197–204. [Google Scholar] [CrossRef]
  31. Kuna, S.T.; Remmers, J.E. Anatomy and physiology of upper airway obstruction. In Principles and Practice of Sleep Medicine, 3rd ed.; Kryger, M.H., Roth, T., Dement, W.C., Eds.; WB Saunders: Philadelphia, PA, USA, 2000; pp. 840–858. [Google Scholar]
  32. Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Computer Vision—ECCV 2014; Lecture Notes in Computer Science; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Springer: Cham, Switzerland, 2014; Volume 8689, pp. 818–833. [Google Scholar]
  33. Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; Curran Associates, Inc.: Red Hook, NY, USA, 2017; pp. 4765–4774. [Google Scholar]
  34. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’16), San Francisco, CA, USA, 13–17 August 2016; Association for Computing Machinery: New York, NY, USA, 2016; pp. 1135–1144. [Google Scholar]
  35. Alimova, S.; Sharobaro, V.; Yukhno, A.; Bondarenko, E. Possibilities of Ultrasound Examination in the Assessment of Age-Related Changes in the Soft Tissues of the Face and Neck: A Review. Appl. Sci. 2023, 13, 1128. [Google Scholar] [CrossRef]
  36. Oliveira, W.; Albuquerque Santos, M.; Burgardt, C.A.P.; Anjos Pontual, M.L.; Zanchettin, C. Estimation of human age using machine learning on panoramic radiographs for Brazilian patients. Sci. Rep. 2024, 14, 19689. [Google Scholar] [CrossRef] [PubMed]
  37. Bizjak, Ž.; Robič, T. DentAge: Deep learning for automated age prediction using panoramic dental X-ray images. J. Forensic Sci. 2024, 69, 2069–2074. [Google Scholar] [CrossRef]
  38. Kahm, S.H.; Kim, J.Y.; Yoo, S.; Bae, S.M.; Kang, J.E.; Lee, S.H. Application of entire dental panorama image data in artificial intelligence model for age estimation. BMC Oral Health 2023, 23, 1007. [Google Scholar] [CrossRef]
  39. Alam, S.S.; Rashid, N.; Faiza, T.A.; Ahmed, S.; Hassan, R.A.; Dudley, J.; Farook, T.H. Estimating Age and Sex from Dental Panoramic Radiographs Using Neural Networks and Vision–Language Models. Oral 2025, 5, 3. [Google Scholar] [CrossRef]
  40. Kim, M.J.; Jeong, J.; Lee, J.W.; Kim, I.H.; Park, J.W.; Roh, J.Y.; Kim, N.; Kim, S.J. Screening obstructive sleep apnea patients via deep learning of knowledge distillation in the lateral cephalogram. Sci. Rep. 2023, 13, 17788. [Google Scholar] [CrossRef] [PubMed]
  41. Goldstein, C.A.; Berry, R.B.; Kent, D.T.; Kristo, D.A.; Seixas, A.A.; Redline, S.; Westover, M.B.; Abbasi-Feinberg, F.; Aurora, R.N.; Carden, K.A.; et al. Artificial intelligence in sleep medicine: An American Academy of Sleep Medicine position statement. J. Clin. Sleep Med. 2020, 16, 605–607. [Google Scholar] [CrossRef]
Figure 1. Challenges in AI-Assisted OSA Detection in Dental Settings. Key challenges in current OSA management include a shortage of specialized medical facilities and a large number of undiagnosed patients, many of whom may present in dental settings. AI-based screening using dental images may help identify individuals at high risk and facilitate timely referral to secondary or tertiary sleep centers for polysomnography (PSG) or other appropriate diagnostic testing, thereby expediting appropriate diagnosis and effective treatment.
Figure 2. (A) Schematic Illustrations of the Interaction Between Upper Airway Bony Structures and Soft Tissue Affecting Airway Patency. Anatomical configurations of the upper airway, including the tongue, mandible, and cervical vertebrae, are illustrated above, with simplified mechanical models presented below. Two key anatomical factors contribute to upper airway collapsibility: excessive soft tissue associated with obesity (e.g., a large tongue, progressing from left to right) and a small maxilla and/or mandible, both of which lead to upper airway narrowing. This mechanical model suggests that the balance between the craniofacial bony enclosure and the volume of soft tissue inside it largely determines upper airway patency. Darker shading indicates greater internal soft tissue pressure within the craniofacial bony enclosure. These illustrations are partially adapted from the works of Tsuiki et al. [10], Watanabe et al. [11], and Isono et al. [25]. (B) Relationship Between the Degree of Oropharyngeal Crowding and the Severity of OSA. Greater anatomical imbalance resulting in upper airway crowding is associated with increased severity of obstructive sleep apnea (OSA), progressing from (a) to (b) to (c). Upon viewing a single cephalometric image, experienced sleep dentists or physicians may either consciously recognize this phenomenon or intuitively detect it without deliberate effort. By capturing a single cephalogram and subsequently training an AI system to learn the anatomical characteristics of groups a (non-OSA), b (mild-to-moderate OSA), and c (severe OSA), it becomes feasible to differentiate among these groups. This conceptual illustration is partially adapted from Ito et al. [12]. Each triangle represents a non-OSA sample, each circle a mild-to-moderate OSA sample, and each cross a severe OSA sample.
Figure 3. Image datasets (upper) and area under the receiver operating characteristic (ROC) curve for detection of obstructive sleep apnea (lower). AUC, area under the curve. Note that the ROC curve with the better AUC (0.75), obtained from the measurements of oropharyngeal crowding and hyoid position, is shown as the representative result of the manual cephalometric analyses. Reproduced from Tsuiki et al. [9] under the terms of the Creative Commons Attribution License (CC BY). A related patent application has been filed (Japanese Patent Application No. 2021-011378).

Share and Cite

Tsuiki, S.; Furuhashi, A.; Ito, E.; Fukuda, T. Artificial Intelligence to Detect Obstructive Sleep Apnea from Craniofacial Images: A Narrative Review. Oral 2025, 5, 76. https://doi.org/10.3390/oral5040076