Article

A Comparative Analysis of Artificial Intelligence and Manual Methods for Three-Dimensional Anatomical Landmark Identification in Dentofacial Treatment Planning

Hee-Ju Ahn, Soo-Hwan Byun, Sae-Hoon Baek, Sang-Yoon Park, Sang-Min Yi, In-Young Park, Sung-Woon On, Jong-Cheol Kim and Byoung-Eun Yang

1 Department of Oral and Maxillofacial Surgery, Hallym University Sacred Heart Hospital, Anyang 14068, Republic of Korea
2 Department of Artificial Intelligence and Robotics in Dentistry, Graduate School of Clinical Dentistry, Hallym University, Chuncheon 24252, Republic of Korea
3 Institute of Clinical Dentistry, Hallym University, Chuncheon 24252, Republic of Korea
4 Dental Artificial Intelligence and Robotics R&D Center, Hallym University Sacred Heart Hospital, Anyang 14068, Republic of Korea
5 Department of Orthodontics, Hallym University Sacred Heart Hospital, Anyang 14068, Republic of Korea
6 Division of Oral and Maxillofacial Surgery, Department of Dentistry, Hallym University Dongtan Sacred Heart Hospital, Hwaseong 18450, Republic of Korea
7 Mir Dental Hospital, Daegu 41940, Republic of Korea
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Bioengineering 2024, 11(4), 318; https://doi.org/10.3390/bioengineering11040318
Submission received: 29 February 2024 / Revised: 20 March 2024 / Accepted: 25 March 2024 / Published: 27 March 2024

Abstract

With the growing demand for orthognathic surgery and other facial treatments, the accurate identification of anatomical landmarks has become crucial. Treatment planning has recently shifted from traditional two-dimensional methods toward three-dimensional radiologic analysis, which allows more precise planning but still relies primarily on direct identification by clinicians. Manual tracing, however, is time-consuming, particularly when dealing with a large number of patients. This study compared the accuracy and reliability of anatomical landmark identification by artificial intelligence (AI) and by manual tracing. Thirty patients over 19 years of age who underwent pre-orthodontic treatment and orthognathic surgery and had pre-orthodontic three-dimensional radiologic scans were selected. Thirteen anatomical landmarks were identified by AI and by four experienced clinicians, and multiple ANOVA was performed to analyze the results. The results revealed few significant differences between AI and manual tracing, with a maximum deviation of 2.83 mm. This indicates that AI-based identification of anatomical landmarks can be a reliable method for planning orthognathic surgery, enhancing treatment accuracy and reliability and ultimately benefiting clinicians and patients.

1. Introduction

Since their introduction by Broadbent in 1931, cephalograms have been a standard method for analyzing the skull and facial bones, measuring the dimensions and contours of craniomaxillofacial structures, and assessing their growth and maturation [1]. The lateral cephalogram in particular is regarded as the "gold standard" for evaluating craniofacial development, diagnosing orthodontic problems, planning treatment, assessing treatment outcomes, and predicting further growth of the craniomaxillofacial region [2]. With increasing interest in function and aesthetics, the demand for orthodontic treatment has risen. However, the accuracy of conventional two-dimensional (2D) cephalometric imaging has been debated relative to three-dimensional (3D) imaging. One limitation is that the exact landmark location is not always found by selecting the midpoint between the left and right sides of the landmark on a 2D cephalometric radiograph [3,4,5].
Additionally, as orthodontic treatment has increased, so has the demand for orthognathic surgery, for which 2D cephalometric imaging offers limited diagnostic and planning capability. In oral and maxillofacial surgery, precise knowledge of the anatomy, including blood vessels, nerves, and bone morphology, is crucial to prevent complications such as nerve damage and massive bleeding. Fortunately, recent advances in radiographic techniques have made it possible to obtain clear 3D images with minimal X-ray exposure, making it easier to understand anatomical structures through 3D CT [5,6,7]. In orthodontic treatment in particular, techniques have been developed to measure anatomical landmarks, predict postoperative facial changes, calculate the required movement of both jaws, and create guide plates accordingly. For patients in need of orthodontic treatment, a diagnosis is made using 3D radiographs of the face, teeth, and skeleton, allowing a comprehensive treatment plan to be established [7,8].
However, manually identifying anatomical landmarks in a patient's CT data is time-consuming and depends on the skill level of the clinician, which can introduce errors due to subjective factors during the process [9].
In recent decades, dentistry has seen significant progress thanks to technological advancements, particularly in the emerging field of artificial intelligence (AI) [10,11]. AI is a branch of computer science that uses technology to perform tasks without human intervention or supervision, effectively simulating human intelligence [10,11,12,13]. Various deep learning architectures, such as deep neural networks, convolutional neural networks, deep belief networks, and recurrent neural networks, have been employed to develop algorithms for essential domains like natural language processing, computer vision, speech recognition, and bioinformatics. These applications have advanced automation, enabling the efficient and accurate execution of practical tasks across fields as diverse as sports, biomaterials, and engineering [14,15,16].
Landmark identification using AI has been explored to address the limitations of manual tracing. Various attempts have been made to integrate AI into cephalometric analysis, including global AI challenges organized by the Institute of Electrical and Electronics Engineers (IEEE) and the International Symposium on Biomedical Imaging (ISBI). Beginning in 2014, these challenges aimed at accurate automatic identification, with a recent focus on clinical applications and the automatic identification of cephalometric landmarks using 400 different lateral cephalograms [17].
However, prior research has predominantly examined 2D cephalograms or 2D images derived from 3D radiological scans. A diagnostic process that relies on two-dimensional landmark tracing is less precise than three-dimensional diagnosis, which can result in extended surgical durations and additional minor surgical adjustments. We therefore performed landmark identification directly on 3D CT radiography to enable three-dimensional diagnosis and harnessed AI to expedite the procedure. Our study aims to assess the accuracy and reliability of AI in 3D CT tracing by comparing it with manual tracing methods.

2. Materials and Methods

2.1. Patient Selection

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Hallym University Sacred Heart Hospital Institutional Review Board (IRB No. 2022-03-008-001). Thirty out of one hundred patients meeting the following two criteria were randomly chosen using the random sampling function of the Python programming language (a minimal sketch of this step follows the criteria):
- Adults aged 19 years or older whose jaw bone growth was complete.
- Patients who completed pre-orthodontic treatment and orthognathic surgery at the Hallym University Sacred Heart Hospital between 2016 and 2022 and who agreed to participate in the study.
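As noted above, the paper states that the sample was drawn with Python's random sampling. The following is a minimal sketch of that step; the patient-ID list and fixed seed are illustrative assumptions, not taken from the study.

```python
import random

random.seed(42)  # assumed seed, for reproducibility of this sketch only

# Hypothetical IDs for the 100 patients meeting both criteria.
eligible_patient_ids = list(range(1, 101))

# Draw 30 patients without replacement, as described in the text.
selected_ids = random.sample(eligible_patient_ids, k=30)
print(sorted(selected_ids))
```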

2.2. Definition of Landmarks

The landmarks used in this study were defined according to the ON3D software ver. 1.4.0 (3D ONS Inc., Seoul, Republic of Korea), as shown in Table 1; these definitions were developed by Cho H.J. [18].

2.3. Methods

The landmarks were identified using the ON3D software based on its predefined definitions (Figure 1). The three-dimensional cone beam computed tomography DICOM files were digitized in ON3D. The primary clinician performed each reorientation by locating Nasion (N), Porion, and Orbitale, which defined the reference plane used to minimize errors arising from different head postures. Subsequently, AI and four clinicians identified the landmarks relative to this reference plane. To ensure consistency, the four clinicians repeated the landmark identification twice at a 2-week interval over a four-week period, and the average values were used as the coordinates for manual tracing. For AI tracing, the coordinate value was identified once. Each landmark was assigned a 3D coordinate (x, y, z), with N as the origin (0, 0, 0), and the unit of measurement was millimeters (mm). The clinicians were labeled Human I through IV, and the differences between each clinician and AI were calculated for each landmark in all 30 patients; the mean absolute differences between Human and AI were then tabulated, as sketched below.
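To make the difference computation concrete, here is a minimal numpy sketch under assumed array shapes; the random inputs stand in for the real coordinate data, and this is not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_landmarks = 30, 13

# Hypothetical coordinates in mm relative to Nasion (0, 0, 0),
# shaped (patients, landmarks, xyz).
ai = rng.normal(size=(n_patients, n_landmarks, 3))
humans = {name: rng.normal(size=(n_patients, n_landmarks, 3))
          for name in ["Human I", "Human II", "Human III", "Human IV"]}

for name, coords in humans.items():
    # Absolute AI-vs-clinician difference for each landmark, averaged
    # over the 30 patients and kept separate per axis (cf. Tables 3-5).
    mean_abs_diff = np.abs(ai - coords).mean(axis=0)  # shape (13, 3)
    print(name, mean_abs_diff.round(2))
```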

2.4. Statistics

Five groups were compared: AI and four clinicians (Human I, II, III, and IV). The intraclass correlation coefficient (ICC) was calculated to assess the agreement between AI and manual identification as well as the agreement among the clinicians. Multiple ANOVA tests were performed, one for each X, Y, and Z component, with the confidence interval set at 95%. Statistical analysis was conducted using IBM SPSS ver. 27.0 (SPSS Inc., Chicago, IL, USA). A sketch of both analyses follows.
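The study itself used SPSS; purely as an illustration, the same two analyses can be sketched in Python with scipy (one-way ANOVA across the five groups) and pingouin (ICC). The simulated measurements below are assumptions, not study data.

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
groups = ["AI", "Human I", "Human II", "Human III", "Human IV"]
# Hypothetical X-coordinates (mm) of one landmark for 30 patients per group.
data = {g: rng.normal(loc=-0.3, scale=1.5, size=30) for g in groups}

# One-way ANOVA comparing the five groups for this coordinate component.
f_stat, p_value = f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.3f}")  # p > 0.05: no difference

# ICC from a long-format table of (patient, rater, measurement).
long = pd.DataFrame(
    [(pid, g, data[g][pid]) for g in groups for pid in range(30)],
    columns=["patient", "rater", "value"],
)
icc = pg.intraclass_corr(data=long, targets="patient",
                         raters="rater", ratings="value")
print(icc[["Type", "ICC", "CI95%"]])
```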

3. Results

3.1. Inter-Rater Agreement

The sample comprised thirty subjects: fifteen males and fifteen females. We calculated the intraclass correlation coefficient to assess the reliability between AI and manual tracing and among the manual groups. All measurements showed good agreement, with ICC values above 0.75 (Table 2).

3.2. Comparison between AI and Manual Tracing

Thirteen landmarks were identified twice by each clinician, two weeks apart, and the average was taken as the coordinate value for each landmark. Mean difference values between AI and Human I, II, III, and IV were calculated across all 30 patients (Table 3, Table 4 and Table 5). The differences ranged from 0.18 mm to 1.96 mm on the X-axis, 0.11 mm to 2.83 mm on the Y-axis, and 0.19 mm to 1.89 mm on the Z-axis.
Based on the ANOVA results, there was no significant difference (p > 0.05) between AI and manual tracing for most coordinate values. However, four coordinate values, the right and left Ant. Zygoma on the X- and Y-axes, showed significant differences (Table 6, Table 7 and Table 8).

4. Discussion

Lateral cephalograms are essential for diagnosing anteroposterior and vertical variations in anatomical structures. However, 2D cephalograms have limitations, which has led to a growing number of studies using three-dimensional CT for diagnosis [4,5,19]. Manual tracing for radiographic analysis must be precise, safe, and repeatable, but it is time-consuming. To address this, clinicians have explored automatic methods for identifying measurement points since Cohen and Linney et al. introduced the first in 1984 [20]. Several studies have demonstrated a strong correlation between automatically and manually identified measurement points [13,21,22], and Meriç and Naoumova proposed that fully automated solutions can significantly improve cephalometric analyses [23].

Artificial intelligence has revolutionized medical image analysis, and the healthcare AI market is expected to grow at roughly 40% annually [19,24]. However, the majority of AI research focuses on 2D cephalometric analysis. Another issue in clinical practice is the amount of supplementary information presented by 3D diagnostics compared with 2D diagnostics, which poses significant challenges for clinicians in analysis and treatment planning. Automating the analysis of 3D diagnostics offers a broad spectrum of diagnostic possibilities and enhances accessibility for clinicians, thereby facilitating the transition from 2D to 3D imaging in routine clinical settings [25].

This study aimed to validate the efficiency and accuracy of AI by comparing manual and AI tracing on 3D cone beam computed tomography (CBCT). We used the ON3D software (3D ONS Inc., Seoul, Republic of Korea) for automated landmark identification; this 3D CBCT imaging software traces maxillofacial landmarks and supports measurements for digital surgery planning. ON3D detects landmarks using a deep convolutional neural network (CNN). CNNs have seen growing adoption in medical image segmentation [26,27] and have demonstrated exceptional performance [28,29,30]. Their success lies largely in their capability to learn nonlinear spatial characteristics of input images, and they have found application in diverse domains such as image recognition, character identification, facial recognition, and pose estimation [15,31,32]. A minimal sketch of this family of approaches is shown below.
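For readers unfamiliar with CNN-based landmark detection, the PyTorch sketch below shows one common formulation, heatmap regression, on a toy 3D volume. It is not ON3D's actual architecture, which is not public; all layer sizes and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Landmark3DNet(nn.Module):
    """Toy 3D CNN that regresses one heatmap per landmark."""

    def __init__(self, n_landmarks: int = 13):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        # One output channel per landmark: the heatmap's peak marks the
        # predicted landmark position in the downsampled volume.
        self.head = nn.Conv3d(32, n_landmarks, kernel_size=1)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(volume))

model = Landmark3DNet()
cbct = torch.randn(1, 1, 64, 64, 64)   # toy volume: (batch, channel, D, H, W)
heatmaps = model(cbct)                 # (1, 13, 16, 16, 16)

# Take each heatmap's argmax and convert the flat index to voxel coordinates.
_, _, D, H, W = heatmaps.shape
flat = heatmaps.flatten(2).argmax(dim=2)   # (1, 13)
z, rem = flat // (H * W), flat % (H * W)
y, x = rem // W, rem % W
coords = torch.stack([z, y, x], dim=-1)    # (1, 13, 3) voxel indices
print(coords.shape)
```

In practice, such a network would be trained against Gaussian heatmaps centered on expert-annotated landmarks, and the predicted voxel indices would then be mapped back to millimeter coordinates in the reference frame.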
The ANOVA results indicate that, except for the right and left Ant. Zygoma on the X- and Y-axes, there was no significant difference (p > 0.05) between the AI and manual tracing groups; the difference values between AI and the four manual tracing groups support this. Landmark identification using artificial intelligence can therefore be considered efficient and sufficiently accurate. The ability of automated identification systems to detect measurement points within 2 mm, commonly considered the clinical error range, is the standard measure of their success rate in performance comparisons [9,15,33,34,35], as sketched below. In this study, no X-axis difference exceeded 2 mm; the largest, 1.96 mm, was between AI and Human III for Rt. Ant. Zygoma. On the Y-axis, differences over 2 mm were found between AI and Human III for both Ant. Zygoma points and Rt. Zygion, and between AI and Human IV for both Ant. Zygoma points. The Ant. Zygoma coordinates with differences over 2 mm also exhibited statistically significant differences between AI and manual tracing.
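As an illustration of the 2 mm criterion, the sketch below computes the 3D radial (Euclidean) error between AI and reference coordinates; the two example landmark coordinates are made up for demonstration.

```python
import numpy as np

# Hypothetical (x, y, z) coordinates in mm for two landmarks.
ai = np.array([[0.5, -4.4, -54.1], [-54.3, 22.9, -43.5]])
manual = np.array([[0.3, -4.6, -54.3], [-52.1, 20.2, -43.9]])  # rater mean

radial_error = np.linalg.norm(ai - manual, axis=1)  # per-landmark distance
within_2mm = radial_error < 2.0
print(radial_error.round(2), within_2mm)
print(f"Detection success rate: {within_2mm.mean():.0%}")
```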
Statistical analysis was conducted on the X, Y, and Z coordinates of the 13 landmarks, yielding 39 coordinate values. Of these, 35 showed agreement between AI and manual tracing; the remaining four related to the anterior zygoma. Differences exceeding 2 mm were observed only on the Y-axis, not on the X- or Z-axes. This suggests that clinicians interpreted the X- and Z-axes, which correspond to the horizontal and vertical orientations of the three-dimensional image, more consistently than the Y-axis, which represents the anteroposterior direction (Figure 2). The definition of 3D landmarks remains unclear, with attempts to define them based on anatomical curves or projections onto specific orientations [36,37]. More precise landmark definitions are needed in 3D images to improve consistency.
This study has several limitations. First, the number of identified landmarks is limited, and the sample size should be increased. Second, the investigation focused on hard tissues, whereas differences between AI and manual tracing are generally more pronounced for soft tissues in 2D comparative analyses. Third, consistency among the four clinicians' results varied, suggesting the need for human validation even when using AI for landmark identification and diagnosis. Finally, automated ON3D tracing was observed to be faster than manual tracing, but the time required for cephalometric measurement with the two methods was not formally compared and could be analyzed in future studies.

5. Conclusions

Many dental and medical professionals are now less reluctant to utilize artificial intelligence than in earlier times, with diagnostic tracing, particularly in the orthodontic domain, a prominent example. This study indicates that using AI for 3D landmark tracing can significantly reduce time and effort and improve overall convenience, despite the current lack of precise definitions of three-dimensional landmarks and the limited number of patients and landmarks. Future research with clear definitions of 3D landmarks, an increased number and variety of landmarks, and clinicians of similar expertise will be crucial for advancing this field.

Author Contributions

Conceptualization, H.-J.A., S.-H.B. (Soo-Hwan Byun) and B.-E.Y.; methodology, H.-J.A., S.-H.B. (Soo-Hwan Byun) and B.-E.Y.; formal analysis, H.-J.A., S.-H.B. (Sae-Hoon Baek), S.-Y.P. and S.-M.Y.; investigation, S.-H.B. (Soo-Hwan Byun), S.-Y.P., S.-M.Y., I.-Y.P. and S.-W.O.; resources, S.-H.B. (Soo-Hwan Byun), S.-M.Y., I.-Y.P., J.-C.K. and B.-E.Y.; data curation, S.-Y.P. and J.-C.K.; writing—original draft, H.-J.A., S.-H.B. (Soo-Hwan Byun) and B.-E.Y.; writing—review and editing, H.-J.A., S.-H.B. (Soo-Hwan Byun) and B.-E.Y.; visualization, S.-Y.P., S.-M.Y., I.-Y.P. and S.-W.O.; supervision, B.-E.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a National IT Industry Promotion Agency (NIPA) grant funded by the Korean government (MSIT) (S1402-23-1001, AI Diagnostic Assisted Virtual Surgery and Digital Surgical Guide for Dental Implant Treatment in the Post-Aged Society: A Multicenter Clinical Demonstration). This work was supported by the Korea Medical Device Development Fund grant funded by the Korean government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health and Welfare, the Ministry of Food and Drug Safety) (project number: RS-2022-00140935). This work was supported by the Medical Device Technology Development Program (20006006, Development of artificial intelligence-based augmented reality surgery system for oral and maxillofacial surgery) funded by the Ministry of Trade, Industry, and Energy, Republic of Korea.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Hallym University Sacred Heart Hospital Institutional Review Board (IRB No. 2022-03-008-001).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data supporting this study’s findings are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Broadbent, B.H. A new X-ray technique and its application to orthodontia. Angle Orthod. 1931, 1, 45–66. [Google Scholar]
  2. Albarakati, S.; Kula, K.; Ghoneima, A. The reliability and reproducibility of cephalometric measurements: A comparison of conventional and digital methods. Dentomaxillofac. Radiol. 2012, 41, 11–17. [Google Scholar] [CrossRef] [PubMed]
  3. Olszewski, R.; Cosnard, G.; Macq, B.; Mahy, P.; Reychler, H. 3D CT-based cephalometric analysis: 3D cephalometric theoretical concept and software. Neuroradiology 2006, 48, 853–862. [Google Scholar] [CrossRef] [PubMed]
  4. Nalcaci, R.; Öztürk, F.; Sökücü, O. A comparison of two-dimensional radiography and three-dimensional computed tomography in angular cephalometric measurements. Dentomaxillofac. Radiol. 2010, 39, 100–106. [Google Scholar] [CrossRef] [PubMed]
  5. Van Vlijmen, O.; Maal, T.; Bergé, S.; Bronkhorst, E.; Katsaros, C.; Kuijpers-Jagtman, A. A comparison between 2D and 3D cephalometry on CBCT scans of human skulls. Int. J. Oral Maxillofac. Surg. 2010, 39, 156–160. [Google Scholar] [CrossRef]
  6. Nijkamp, P.G.; Habets, L.L.; Aartman, I.H.; Zentner, A. The influence of cephalometrics on orthodontic treatment planning. Eur. J. Orthod. 2008, 30, 630–635. [Google Scholar] [CrossRef]
  7. Gupta, A.; Kharbanda, O.P.; Sardana, V.; Balachandran, R.; Sardana, H.K. Accuracy of 3D cephalometric measurements based on an automatic knowledge-based landmark detection algorithm. Int. J. Comput. Assist. Radiol. Surg. 2016, 11, 1297–1309. [Google Scholar] [CrossRef] [PubMed]
  8. Naji, P.; Alsufyani, N.A.; Lagravère, M.O. Reliability of anatomic structures as landmarks in three-dimensional cephalometric analysis using CBCT. Angle Orthod. 2014, 84, 762–772. [Google Scholar] [CrossRef] [PubMed]
  9. Hwang, H.-W.; Park, J.-H.; Moon, J.-H.; Yu, Y.; Kim, H.; Her, S.-B.; Srinivasan, G.; Aljanabi, M.N.A.; Donatelli, R.E.; Lee, S.-J. Automated identification of cephalometric landmarks: Part 2—Might it be better than human? Angle Orthod. 2020, 90, 69–76. [Google Scholar] [CrossRef]
  10. Kunz, F.; Stellzig-Eisenhauer, A.; Zeman, F.; Boldt, J. Artificial intelligence in orthodontics: Evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. J. Orofac. Orthop. 2020, 81, 52–68. [Google Scholar] [CrossRef]
  11. Dreyer, K.J.; Geis, J.R. When machines think: Radiology’s next frontier. Radiology 2017, 285, 713–718. [Google Scholar] [CrossRef] [PubMed]
  12. Sandler, P. Reproducibility of cephalometric measurements. Br. J. Orthod. 1988, 15, 105–110. [Google Scholar] [CrossRef]
  13. Alessandri-Bonetti, A.; Sangalli, L.; Salerno, M.; Gallenzi, P. Reliability of Artificial Intelligence-Assisted Cephalometric Analysis—A Pilot Study. BioMedInformatics 2023, 3, 44–53. [Google Scholar] [CrossRef]
  14. Bulatova, G.; Kusnoto, B.; Grace, V.; Tsay, T.P.; Avenetti, D.M.; Sanchez, F.J.C. Assessment of automatic cephalometric landmark identification using artificial intelligence. Orthod. Craniofac. Res. 2021, 24, 37–42. [Google Scholar] [CrossRef] [PubMed]
  15. Zhu, Z.; Ng, D.W.H.; Park, H.S.; McAlpine, M.C. 3D-printed multifunctional materials enabled by artificial-intelligence-assisted fabrication technologies. Nat. Rev. Mater. 2021, 6, 27–47. [Google Scholar] [CrossRef]
  16. Goh, G.L.; Goh, G.D.; Pan, J.W.; Teng, P.S.P.; Kong, P.W. Automated service height fault detection using computer vision and machine learning for badminton matches. Sensors 2023, 23, 9759. [Google Scholar] [CrossRef] [PubMed]
  17. Tsolakis, I.A.; Tsolakis, A.I.; Elshebiny, T.; Matthaios, S.; Palomo, J.M. Comparing a fully automated cephalometric tracing method to a manual tracing method for orthodontic diagnosis. J. Clin. Med. 2022, 11, 6854. [Google Scholar] [CrossRef]
  18. Cho, H.J. A three-dimensional cephalometric analysis. J. Clin. Orthod. 2009, 43, 235–252. [Google Scholar] [PubMed]
  19. Lisboa, C.d.O.; Masterson, D.; Motta, A.F.J.; Motta, A.T. Reliability and reproducibility of three-dimensional cephalometric landmarks using CBCT: A systematic review. J. Appl. Oral Sci. 2015, 23, 112–119. [Google Scholar] [CrossRef]
  20. Cohen, A.; Ip, H.-S.; Linney, A. A preliminary study of computer recognition and identification of skeletal landmarks as a new method of cephalometric analysis. Br. J. Orthod. 1984, 11, 143–154. [Google Scholar] [CrossRef]
  21. Nguyen, T.; Larrivée, N.; Lee, A.; Bilaniuk, O.; Durand, R. Use of Artificial Intelligence in Dentistry: Current Clinical Trends and Research Advances. J. Can. Dent. Assoc. 2021, 87, 1488–2159. [Google Scholar] [CrossRef]
  22. Bichu, Y.M.; Hansa, I.; Bichu, A.Y.; Premjani, P.; Flores-Mir, C.; Vaid, N.R. Applications of artificial intelligence and machine learning in orthodontics: A scoping review. Prog. Orthod. 2021, 22, 18. [Google Scholar] [CrossRef]
  23. Meriç, P.; Naoumova, J. Web-based fully automated cephalometric analysis: Comparisons between app-aided, computerized, and manual tracings. Turk. J. Orthod. 2020, 33, 142. [Google Scholar] [CrossRef] [PubMed]
  24. Abesi, F.; Jamali, A.S.; Zamani, M. Accuracy of artificial intelligence in the detection and segmentation of oral and maxillofacial structures using cone-beam computed tomography images: A systematic review and meta-analysis. Pol. J. Radiol. 2023, 88, 256. [Google Scholar] [CrossRef] [PubMed]
  25. Blum, F.M.S.; Möhlhenrich, S.C.; Raith, S.; Pankert, T.; Peters, F.; Wolf, M.; Hölzle, F.; Modabber, A. Evaluation of an artificial intelligence–based algorithm for automated localization of craniofacial landmarks. Clin. Oral Investig. 2023, 27, 2255–2265. [Google Scholar] [CrossRef]
  26. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.; Ginneken, B.; Sanchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2018, 45, 2169–2180. [Google Scholar] [CrossRef]
  27. Altaf, F.; Islam, S.M.; Akhtar, N.; Janjua, N.K. Going deep in medical image analysis: Concepts, methods, challenges, and future directions. IEEE Access 2019, 7, 99540–99572. [Google Scholar] [CrossRef]
  28. Minnema, J.; Eijnatten, M.; Kouw, W.; Diblen, F.; Mendrik, A.; Wolff, J. CT image segmentation of bone for medical additive manufacturing using a convolutional neural network. Comput. Biol. Med. 2018, 103, 130–139. [Google Scholar] [CrossRef]
  29. Casalegno, F.; Newton, T.; Daher, R.; Abdelaziz, M.; Lodi-Rizzini, A.; Schurmann, F.; Krejci, I.; Markram, H. Caries detection with near-infrared transillumination using deep learning. J. Dent. Res. 2019, 98, 1227–1233. [Google Scholar] [CrossRef]
  30. Nguyen, K.C.T.; Duong, D.Q.; Almeida, F.T.; Major, P.W.; Kaipatur, N.R.; Pham, T.T.; Lou, E.H.M.; Noga, M.; Punithakumar, K.; Le, L.H. Alveolar bone segmentation in intraoral ultrasonographs with machine learning. J. Dent. Res. 2020, 99, 1054–1061. [Google Scholar] [CrossRef]
  31. Wang, H.; Minnema, J.; Batenburg, K.J.; Forouzanfar, T.; Hu, F.J.; Wu, G. Multiclass CBCT image segmentation for orthodontics with deep learning. J. Dent. Res. 2021, 100, 943–949. [Google Scholar] [CrossRef] [PubMed]
  32. Abesi, F.; Hozuri, M.; Zamani, M. Performance of artificial intelligence using cone-beam computed tomography for segmentation of oral and maxillofacial structures: A systematic review and meta-analysis. J. Clin. Exp. Dent. 2023, 15, 954. [Google Scholar] [CrossRef] [PubMed]
  33. Arık, S.Ö.; Ibragimov, B.; Xing, L. Fully automated quantitative cephalometry using convolutional neural networks. J. Med. Imaging 2017, 4, 014501. [Google Scholar] [CrossRef] [PubMed]
  34. Nishimoto, S.; Sotsuka, Y.; Kawai, K.; Ishise, H.; Kakibuchi, M. Personal computer-based cephalometric landmark detection with deep learning, using cephalograms on the internet. J. Craniofac. Surg. 2019, 30, 91–95. [Google Scholar] [CrossRef] [PubMed]
  35. Park, J.-H.; Hwang, H.-W.; Moon, J.-H.; Yu, Y.; Kim, H.; Her, S.-B.; Srinivasan, G.; Aljanabi, M.N.A.; Donatelli, R.E.; Lee, S.-J. Automated identification of cephalometric landmarks: Part 1—Comparisons between the latest deep-learning methods YOLOV3 and SSD. Angle Orthod. 2019, 89, 903–909. [Google Scholar] [CrossRef]
  36. Swennen, G.R.; Schutyser, F.A.; Hausamen, J.-E. Three-Dimensional Cephalometry: A Color Atlas and Manual; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  37. Katina, S.; McNeil, K.; Ayoub, A.; Guilfoyle, B.; Khambay, B.; Siebert, P.; Sukno, F.; Rojas, M.; Vittert, L.; Waddington, J. The definitions of three-dimensional landmarks on the human face: An interdisciplinary view. J. Anat. 2016, 228, 355–365. [Google Scholar] [CrossRef]
Figure 1. Sample 3D landmark tracing: (A) Frontal view obtained by artificial intelligence (AI) tracing. (B) Frontal view obtained by manual tracing. (C) Lateral view obtained by AI tracing. (D) Lateral view obtained by manual tracing.
Figure 2. X-, Y-, and Z-axes in a three-dimensional image. Green line: X-axis; yellow line: Y-axis; blue line: Z-axis.
Table 1. Definitions of the anatomical landmarks used in the study.

Nasion (N): V notch of the frontal bone
Orbitale: the most inferior point of the orbital contour
Porion: the most superior point of the external auditory meatus
ANS: the most anterior point of the premaxillary bone in the sagittal plane
PNS: the most posterior point of the palatine bone in the sagittal plane
Ant. Zygoma: the point on the zygomatic bone lateral to the deepest point of its anterior concavity
Zygion: the most lateral point of the zygomatic arch, determined from the submental vertex view
A point (A): the deepest point between the ANS and the upper incisal alveolus
B point (B): the deepest point between the pogonion and the lower incisal alveolus
Gnathion (Gn): the midpoint between the most anterior (Pogonion) and most inferior (Menton) points of the chin
Pogonion (Pog): the most anterior point of the symphysis
Menton (Me): the most inferior point on the symphyseal outline
Gonion (Go): the point on the inferoposterior outline of the mandible at which the surface turns from the inferior border into the posterior border
Table 2. Intraclass correlation coefficients (ICC) with 95% confidence intervals for inter-rater agreement.

| Axis | AI/Manual (95% CI) | Different Manual Groups (95% CI) |
|---|---|---|
| X-axis | 0.817 (0.760~0.866) | 0.821 (0.757~0.873) |
| Y-axis | 0.925 (0.901~0.945) | 0.924 (0.897~0.946) |
| Z-axis | 0.956 (0.942~0.968) | 0.956 (0.940~0.969) |
Table 3. Mean difference values (mm) for each landmark on the X-axis (*: minimum value, **: maximum value).

| Landmark | AI–Human I | AI–Human II | AI–Human III | AI–Human IV |
|---|---|---|---|---|
| ANS | 0.44 | 0.71 | 0.41 | 1.10 |
| PNS | 0.23 | 1.14 | 0.84 | 1.14 |
| A | 0.18 * | 0.66 | 1.35 | 1.15 |
| Rt. Ant. Zygoma | 0.73 | 1.68 | 1.96 ** | 1.91 |
| Lt. Ant. Zygoma | 0.91 | 1.11 | 1.84 | 1.01 |
| Rt. Zygion | 0.24 | 1.46 | 1.59 | 1.61 |
| Lt. Zygion | 0.37 | 1.13 | 1.13 | 1.16 |
| B | 0.32 | 1.37 | 1.12 | 1.67 |
| Pog | 0.31 | 1.52 | 1.21 | 1.71 |
| Gn | 0.53 | 1.11 | 0.86 | 0.73 |
| Me | 0.56 | 1.05 | 0.94 | 0.76 |
| Rt. Go | 0.52 | 0.26 | 0.52 | 0.79 |
| Lt. Go | 0.56 | 1.08 | 0.73 | 1.02 |
Table 4. Mean difference values (mm) for each landmark on the Y-axis (*: minimum value, **: maximum value).

| Landmark | AI–Human I | AI–Human II | AI–Human III | AI–Human IV |
|---|---|---|---|---|
| ANS | 0.36 | 0.31 | 0.11 | 1.11 |
| PNS | 0.61 | 1.56 | 1.64 | 0.63 |
| A | 0.43 | 1.11 | 1.05 | 0.27 |
| Rt. Ant. Zygoma | 0.93 | 1.77 | 2.83 ** | 2.37 |
| Lt. Ant. Zygoma | 0.94 | 1.21 | 2.33 | 2.81 |
| Rt. Zygion | 1.07 | 1.19 | 2.83 ** | 1.68 |
| Lt. Zygion | 1.09 | 1.39 | 1.46 | 1.92 |
| B | 0.34 | 0.11 * | 1.47 | 1.51 |
| Pog | 0.12 | 0.48 | 0.70 | 0.77 |
| Gn | 0.47 | 0.63 | 0.61 | 0.92 |
| Me | 0.46 | 0.59 | 0.48 | 0.82 |
| Rt. Go | 1.59 | 0.85 | 0.63 | 0.93 |
| Lt. Go | 0.68 | 1.29 | 1.43 | 1.17 |
Table 5. Mean difference values (mm) for each landmark on the Z-axis (*: minimum value, **: maximum value).

| Landmark | AI–Human I | AI–Human II | AI–Human III | AI–Human IV |
|---|---|---|---|---|
| ANS | 0.31 | 1.53 | 1.12 | 1.18 |
| PNS | 0.43 | 0.90 | 0.81 | 1.61 |
| A | 0.81 | 0.88 | 1.64 | 0.96 |
| Rt. Ant. Zygoma | 0.78 | 1.09 | 0.61 | 1.47 |
| Lt. Ant. Zygoma | 1.09 | 1.51 | 1.08 | 1.54 |
| Rt. Zygion | 0.47 | 1.04 | 1.11 | 1.15 |
| Lt. Zygion | 0.46 | 1.30 | 1.56 | 1.23 |
| B | 0.88 | 1.47 | 1.51 | 1.10 |
| Pog | 0.89 | 1.19 | 1.33 | 1.18 |
| Gn | 1.41 | 1.22 | 1.89 ** | 1.87 |
| Me | 0.21 | 0.19 * | 0.71 | 0.93 |
| Rt. Go | 0.69 | 1.86 | 1.44 | 0.92 |
| Lt. Go | 0.86 | 1.69 | 0.93 | 1.34 |
Table 6. Descriptive statistics, mean (SD) in mm, for each landmark on the X-axis. **: statistically significant (p < 0.05).

| Landmark | AI | Human I | Human II | Human III | Human IV | p-Value |
|---|---|---|---|---|---|---|
| ANS | −0.24 (1.42) | −0.22 (1.29) | −0.62 (1.93) | −0.62 (1.91) | −0.14 (1.58) | 0.741 |
| PNS | −0.42 (2.03) | −0.36 (1.96) | −0.51 (2.01) | −0.51 (2.24) | −0.23 (1.99) | 0.931 |
| A | −0.26 (1.34) | −0.13 (1.26) | −0.36 (2.31) | −0.26 (1.91) | 0.18 (1.62) | 0.924 |
| Rt. Ant. Zygoma | −54.54 (3.81) | −54.61 (4.09) | −53.44 (4.21) | −51.42 (4.63) | −52.93 (3.85) | 0.049 ** |
| Lt. Ant. Zygoma | 53.51 (3.05) | 54.19 (3.37) | 52.23 (3.29) | 50.13 (3.62) | 53.27 (2.99) | 0.000 ** |
| Rt. Zygion | −67.59 (4.22) | −67.12 (4.31) | −67.49 (4.16) | −67.51 (4.83) | −67.31 (4.31) | 1.000 |
| Lt. Zygion | 66.36 (3.65) | 66.27 (3.83) | 66.52 (3.59) | 66.36 (3.71) | 66.33 (3.72) | 0.998 |
| B | 0.14 (2.12) | 0.23 (2.11) | −0.28 (2.39) | −0.21 (2.32) | 0.26 (1.99) | 0.937 |
| Gn | 0.51 (2.33) | 0.51 (2.23) | 0.21 (2.77) | 0.11 (2.48) | 0.31 (2.36) | 0.976 |
| Pog | 0.21 (2.36) | 0.19 (2.14) | −0.22 (2.61) | −0.09 (2.55) | 0.37 (2.22) | 0.954 |
| Me | 0.41 (2.23) | 0.39 (2.21) | −0.19 (2.43) | 0.15 (2.39) | 0.49 (2.31) | 0.947 |
| Rt. Go | −49.26 (4.68) | −48.90 (4.81) | −48.66 (3.69) | −49.23 (4.64) | −49.52 (4.91) | 0.971 |
| Lt. Go | 49.04 (4.29) | 49.43 (4.28) | 48.49 (4.35) | 48.94 (4.21) | 49.33 (4.23) | 0.953 |
Table 7. Descriptive statistics, mean (SD) in mm, for each landmark on the Y-axis. **: statistically significant (p < 0.05).

| Landmark | AI | Human I | Human II | Human III | Human IV | p-Value |
|---|---|---|---|---|---|---|
| ANS | −4.51 (2.93) | −4.65 (2.91) | −4.51 (3.08) | −4.12 (3.01) | −4.44 (2.73) | 0.957 |
| PNS | 45.73 (4.71) | 46.21 (4.91) | 46.19 (4.61) | 44.26 (11.91) | 46.07 (4.81) | 0.794 |
| A | −1.15 (3.01) | −0.72 (3.23) | −0.54 (3.51) | −0.54 (3.11) | −0.92 (3.31) | 0.973 |
| Rt. Ant. Zygoma | 22.77 (4.92) | 23.43 (4.53) | 21.97 (4.68) | 18.19 (5.68) | 21.75 (3.98) | 0.001 ** |
| Lt. Ant. Zygoma | 23.16 (4.41) | 24.03 (4.29) | 22.11 (4.78) | 18.27 (5.51) | 22.93 (3.94) | 0.001 ** |
| Rt. Zygion | 55.84 (5.72) | 56.53 (5.62) | 56.68 (5.32) | 53.59 (13.47) | 55.85 (5.91) | 0.721 |
| Lt. Zygion | 56.11 (5.98) | 57.12 (5.89) | 57.33 (5.91) | 54.24 (13.49) | 56.56 (6.42) | 0.662 |
| B | 3.94 (6.32) | 4.27 (6.41) | 4.35 (5.62) | 4.49 (6.62) | 4.24 (6.03) | 1.000 |
| Gn | 2.81 (7.45) | 2.91 (7.58) | 3.14 (7.51) | 3.21 (7.59) | 3.11 (7.33) | 1.000 |
| Pog | 4.89 (7.63) | 5.34 (7.61) | 5.31 (7.61) | 5.13 (7.55) | 5.53 (7.72) | 1.000 |
| Me | 10.03 (7.52) | 10.12 (7.42) | 10.08 (7.83) | 9.51 (7.71) | 10.6 (1.31) | 0.995 |
| Rt. Go | 70.55 (6.81) | 70.64 (6.81) | 71.82 (6.91) | 67.93 (17.42) | 70.2 (1.39) | 0.802 |
| Lt. Go | 72.42 (6.05) | 72.73 (5.92) | 72.83 (6.21) | 69.78 (16.99) | 72.3 (1.91) | 0.832 |
Table 8. Descriptive statistics, mean (SD) in mm, for each landmark on the Z-axis. Statistical significance set at p < 0.05.

| Landmark | AI | Human I | Human II | Human III | Human IV | p-Value |
|---|---|---|---|---|---|---|
| ANS | −54.24 (3.72) | −54.12 (3.52) | −54.14 (4.23) | −54.14 (3.31) | −54.4 (3.55) | 0.970 |
| PNS | −54.93 (4.81) | −54.58 (4.92) | −54.47 (5.12) | −54.76 (4.64) | −54.83 (5.21) | 0.987 |
| A | −60.81 (4.21) | −60.21 (4.12) | −59.51 (4.43) | −60.55 (3.01) | −59.81 (4.03) | 0.844 |
| Rt. Ant. Zygoma | −43.65 (4.81) | −43.44 (5.04) | −44.24 (5.03) | −43.82 (4.88) | −44.49 (4.61) | 0.991 |
| Lt. Ant. Zygoma | −43.73 (4.92) | −42.91 (5.51) | −44.51 (4.41) | −43.95 (4.91) | −42.86 (4.62) | 0.789 |
| Rt. Zygion | −32.74 (3.71) | −32.21 (3.92) | −31.22 (3.91) | −32.61 (3.83) | −32.57 (1.31) | 0.663 |
| Lt. Zygion | −32.32 (4.49) | −32.33 (4.51) | −31.21 (4.41) | −32.44 (4.24) | −36.13 (8.33) | 0.912 |
| B | −101.21 (7.31) | −101.36 (7.62) | −100.38 (7.83) | −100.93 (7.71) | −101.8 (2.21) | 0.983 |
| Gn | −115.29 (8.71) | −115.82 (8.42) | −115.21 (9.21) | −114.99 (8.72) | −115.7 (1.63) | 0.997 |
| Pog | −120.42 (8.48) | −120.31 (8.57) | −120.31 (9.32) | −119.91 (8.42) | −120.56 (9.01) | 1.000 |
| Me | −122.31 (8.61) | −122.16 (8.41) | −122.25 (9.12) | −122.31 (8.47) | −120.84 (9.23) | 0.998 |
| Rt. Go | −93.21 (9.26) | −93.32 (9.42) | −92.84 (9.31) | −93.24 (9.41) | −93.43 (8.87) | 0.997 |
| Lt. Go | −91.34 (9.23) | −91.21 (9.55) | −91.71 (9.42) | −90.80 (8.93) | −91.41 (8.73) | 1.000 |