Search Results (16)

Search Parameters:
Keywords = detection of cephalometric landmarks

14 pages, 4182 KB  
Article
Automated Landmark Detection and Lip Thickness Classification Using a Convolutional Neural Network in Lateral Cephalometric Radiographs
by Miaomiao Han, Zhengqun Huo, Jiangyan Ren, Haiting Zhu, Huang Li, Jialing Li and Li Mei
Diagnostics 2025, 15(12), 1468; https://doi.org/10.3390/diagnostics15121468 - 9 Jun 2025
Viewed by 637
Abstract
Objective: The objective of this study is to develop a convolutional neural network (CNN) for the automatic detection of soft and hard tissue landmarks and the classification of lip thickness on lateral cephalometric radiographs. Methods: A dataset of 1019 pre-orthodontic lateral cephalograms from patients with diverse malocclusions was utilized. A CNN-based model was trained to automatically detect 22 cephalometric landmarks. Upper and lower lip thicknesses were measured using some of these landmarks, and a pre-trained decision tree model was employed to classify lip thickness into the thin, normal, and thick categories. Results: The mean radial error (MRE) for detecting 22 landmarks was 0.97 ± 0.52 mm. Successful detection rates (SDRs) at threshold distances of 1.00, 1.50, 2.00, 2.50, 3.00, and 4.00 mm were 72.26%, 89.59%, 95.41%, 97.66%, 98.98%, and 99.47%, respectively. For nine soft tissue landmarks, the MRE was 1.08 ± 0.87 mm. Lip thickness classification accuracy was 0.91 ± 0.04 (upper lip) and 0.90 ± 0.04 (lower lip) in females and 0.92 ± 0.03 (upper lip) and 0.88 ± 0.05 (lower lip) in males. The area under the curve (AUC) values for lip thickness were ≥0.97 for all gender–lip combinations. Conclusions: The CNN-based landmark detection model demonstrated high precision, enabling reliable automatic classification of lip thickness using cephalometric radiographs. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
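For reference, the reported mean radial error (MRE) and successful detection rates (SDRs) follow the usual definitions: the Euclidean distance between predicted and ground-truth landmarks, averaged, and the fraction of predictions falling within each distance threshold. A minimal sketch of that computation (array names, shapes, and the random demo data are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def mre_and_sdr(pred_mm, gt_mm, thresholds_mm=(1.0, 1.5, 2.0, 2.5, 3.0, 4.0)):
    """Compute mean radial error and successful detection rates.

    pred_mm, gt_mm: arrays of shape (n_images, n_landmarks, 2) in millimetres.
    Returns (MRE, std of radial errors, {threshold: SDR in %}).
    """
    radial_err = np.linalg.norm(pred_mm - gt_mm, axis=-1)   # (n_images, n_landmarks)
    mre, std = radial_err.mean(), radial_err.std()
    sdr = {t: float((radial_err <= t).mean() * 100.0) for t in thresholds_mm}
    return mre, std, sdr

# Illustrative usage with random data (22 landmarks, as in the study)
rng = np.random.default_rng(0)
gt = rng.uniform(0, 200, size=(50, 22, 2))
pred = gt + rng.normal(0, 0.8, size=gt.shape)
print(mre_and_sdr(pred, gt))
```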

26 pages, 12177 KB  
Article
An Efficient Hybrid 3D Computer-Aided Cephalometric Analysis for Lateral Cephalometric and Cone-Beam Computed Tomography (CBCT) Systems
by Laurine A. Ashame, Sherin M. Youssef, Mazen Nabil Elagamy and Sahar M. El-Sheikh
Computers 2025, 14(6), 223; https://doi.org/10.3390/computers14060223 - 7 Jun 2025
Viewed by 773
Abstract
Lateral cephalometric analysis is commonly used in orthodontics for skeletal classification to ensure an accurate and reliable diagnosis for treatment planning. However, most current research depends on analyzing different types of radiographs, which requires more computational time than 3D analysis. Consequently, this study addresses fully automatic orthodontic tracing based on artificial intelligence (AI) applied to 2D and 3D images, by designing a cephalometric system that analyzes the significant landmarks and regions of interest (ROI) needed in orthodontic tracing, especially for the mandibular and maxillary teeth. In this research, a computerized system is developed to automate orthodontic evaluation tasks for both 2D and Cone-Beam Computed Tomography (CBCT, or 3D) measurements. This work was tested on a dataset that contains images of males and females obtained from dental hospitals with patients' informed consent. The dataset consists of 2D lateral cephalometric, panoramic, and CBCT radiographs. Many scenarios were applied to test the proposed system in landmark prediction and detection. Moreover, this study integrates the Grad-CAM (Gradient-Weighted Class Activation Mapping) technique to generate heat maps, providing transparent visualization of the regions the model focuses on during its decision-making process. By enhancing the interpretability of deep learning predictions, Grad-CAM strengthens clinical confidence in the system's outputs, ensuring that ROI detection aligns with orthodontic diagnostic standards. This explainability is crucial in medical AI applications, where understanding model behavior is as important as achieving high accuracy. The experimental results achieved an accuracy exceeding 98.9%. This research evaluates and differentiates between the two-dimensional and three-dimensional tracing analyses applied to measurements based on the practices of the European Board of Orthodontics. The results demonstrate the proposed methodology's robustness when applied to cephalometric images. Furthermore, the evaluation of 3D analysis usage provides a clear understanding of the significance of integrated deep-learning techniques in orthodontics. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
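The Grad-CAM step described above follows a standard recipe: gradients of a target score are average-pooled to weight a convolutional feature map, and the weighted sum is rectified and upsampled. A hedged PyTorch sketch of that recipe (the model, target layer, and class index are placeholders, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """Return a Grad-CAM heat map for `class_idx`, upsampled to the input size.

    image: tensor of shape (1, C, H, W); target_layer: a conv module inside `model`.
    """
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

    logits = model(image)
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()

    weights = grads["a"].mean(dim=(2, 3), keepdim=True)       # global-average-pooled gradients
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
    return cam[0, 0]
```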

22 pages, 863 KB  
Systematic Review
The Accuracy of Algorithms Used by Artificial Intelligence in Cephalometric Points Detection: A Systematic Review
by Júlia Ribas-Sabartés, Meritxell Sánchez-Molins and Nuno Gustavo d’Oliveira
Bioengineering 2024, 11(12), 1286; https://doi.org/10.3390/bioengineering11121286 - 18 Dec 2024
Cited by 2 | Viewed by 2109
Abstract
The use of artificial intelligence in orthodontics is emerging as a tool for localizing cephalometric points in two-dimensional X-rays. AI systems are being evaluated for their accuracy and efficiency compared to conventional methods performed by professionals. The main objective of this study is to identify the artificial intelligence algorithms that yield the best results for cephalometric landmark localization, along with their learning systems. A literature search was conducted across PubMed-MEDLINE, Cochrane, Scopus, IEEE Xplore, and Web of Science. Observational and experimental studies from 2013 to 2023 assessing the detection of at least 13 cephalometric landmarks in two-dimensional radiographs were included. Studies requiring advanced computer engineering knowledge or involving patients with anomalies, syndromes, or orthodontic appliances were excluded. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) and Newcastle–Ottawa Scale (NOS) tools. Of 385 references, 13 studies met the inclusion criteria (1 diagnostic accuracy study and 12 retrospective cohorts). Six were high-risk, and seven were low-risk. Convolutional neural network (CNN)-based AI algorithms showed point localization accuracy ranging from 64.3% to 97.3%, with a mean error of 1.04 mm ± 0.89 to 3.40 mm ± 1.57, within the clinical range of 2 mm. YOLOv3 demonstrated improvements over its earlier version. CNNs have proven to be the most effective AI systems for detecting cephalometric points in radiographic images. Although CNN-based algorithms generate results very quickly and reproducibly, they still do not achieve the accuracy of orthodontists. Full article

13 pages, 860 KB  
Article
Multi-Scale 3D Cephalometric Landmark Detection Based on Direct Regression with 3D CNN Architectures
by Chanho Song, Yoosoo Jeong, Hyungkyu Huh, Jee-Woong Park, Jun-Young Paeng, Jaemyung Ahn, Jaebum Son and Euisung Jung
Diagnostics 2024, 14(22), 2605; https://doi.org/10.3390/diagnostics14222605 - 20 Nov 2024
Viewed by 1470
Abstract
Background: Cephalometric analysis is important in diagnosing and planning treatments for patients, traditionally relying on 2D cephalometric radiographs. With advancements in 3D imaging, automated landmark detection using deep learning has gained prominence. However, 3D imaging introduces challenges due to increased network complexity and computational demands. This study proposes a multi-scale 3D CNN-based approach utilizing direct regression to improve the accuracy of maxillofacial landmark detection. Methods: The method employs a coarse-to-fine framework, first identifying landmarks in a global context and then refining their positions using localized 3D patches. A clinical dataset of 150 CT scans from maxillofacial surgery patients, annotated with 30 anatomical landmarks, was used for training and evaluation. Results: The proposed method achieved an average RMSE of 2.238 mm, outperforming conventional 3D CNN architectures. The approach demonstrated consistent detection without failure cases. Conclusions: Our multi-scale-based 3D CNN framework provides a reliable method for automated landmark detection in maxillofacial CT images, showing potential for other clinical applications. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
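The coarse-to-fine, direct-regression framework described above can be pictured as two stages: a global network regresses rough coordinates on a downsampled volume, and a local network refines each landmark inside a 3D patch cropped around the rough estimate. A minimal sketch under those assumptions (the network definitions, patch size, and downsampling factor are illustrative, not the paper's settings):

```python
import torch
import torch.nn.functional as F

def coarse_to_fine(volume, coarse_net, fine_net, patch=32, down=4):
    """Two-stage direct regression of landmark coordinates from a 3D volume.

    volume: (1, 1, D, H, W) CT tensor; coarse_net/fine_net regress (x, y, z) values.
    """
    # Stage 1: rough coordinates from a downsampled volume (normalised to [0, 1]).
    small = F.interpolate(volume, scale_factor=1.0 / down, mode="trilinear",
                          align_corners=False)
    rough = coarse_net(small)                       # (1, n_landmarks, 3)
    size = torch.tensor(volume.shape[2:], dtype=torch.float32)
    rough_vox = rough * size                        # back to voxel coordinates

    refined = []
    for c in rough_vox[0]:                          # refine each landmark independently
        lo = (c - patch // 2).round().long()
        lo = torch.maximum(lo, torch.zeros(3, dtype=torch.long))
        lo = torch.minimum(lo, size.long() - patch) # keep the patch inside the volume
        hi = lo + patch
        crop = volume[..., lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        offset = fine_net(crop)                     # (1, 3) voxel offset inside the patch
        refined.append(lo.float() + offset[0])
    return torch.stack(refined)                     # (n_landmarks, 3)
```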

17 pages, 3046 KB  
Systematic Review
A Comparative Study of Deep Learning and Manual Methods for Identifying Anatomical Landmarks through Cephalometry and Cone-Beam Computed Tomography: A Systematic Review and Meta-Analysis
by Yoonji Lee, Jeong-Hye Pyeon, Sung-Hoon Han, Na Jin Kim, Won-Jong Park and Jun-Beom Park
Appl. Sci. 2024, 14(16), 7342; https://doi.org/10.3390/app14167342 - 20 Aug 2024
Cited by 1 | Viewed by 2379
Abstract
Background: Researchers have noted that the advent of artificial intelligence (AI) heralds a promising era, with potential to significantly enhance diagnostic and predictive abilities in clinical settings. The aim of this meta-analysis is to evaluate the discrepancies in identifying anatomical landmarks between AI and manual approaches. Methods: A comprehensive search strategy was employed, incorporating controlled vocabulary (MeSH) and free-text terms. This search was conducted by two reviewers to identify published systematic reviews. Three major electronic databases, namely, Medline via PubMed, the Cochrane database, and Embase, were searched up to May 2024. Results: Initially, 369 articles were identified. After conducting a comprehensive search and applying strict inclusion criteria, a total of ten studies were deemed eligible for inclusion in the meta-analysis. The results showed that the average difference in detecting anatomical landmarks between artificial intelligence and manual approaches was 0.35, with a 95% confidence interval (CI) ranging from −0.09 to 0.78. Additionally, the overall effect between the two groups was found to be insignificant. Upon further analysis of the subgroup of cephalometric radiographs, it was determined that there were no significant differences between the two groups in terms of detecting anatomical landmarks. Similarly, the subgroup of cone-beam computed tomography (CBCT) revealed no significant differences between the groups. Conclusions: In summary, the study concluded that the use of artificial intelligence is just as effective as the manual approach when it comes to detecting anatomical landmarks, both in general and in specific contexts such as cephalometric radiographs and CBCT evaluations. Full article
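The pooled mean difference of 0.35 (95% CI −0.09 to 0.78) is judged insignificant because the interval contains zero. For reference, a small sketch of fixed-effect inverse-variance pooling, a standard way such a pooled difference and confidence interval are obtained (the per-study values are invented for illustration, not taken from the review):

```python
import numpy as np

def pooled_mean_difference(diffs, ses):
    """Fixed-effect inverse-variance pooling of per-study mean differences.

    diffs: per-study mean differences (AI minus manual, in mm).
    ses:   their standard errors. Returns (pooled MD, 95% CI low, 95% CI high).
    """
    diffs = np.asarray(diffs, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2
    md = np.sum(w * diffs) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return md, md - 1.96 * se, md + 1.96 * se

md, lo, hi = pooled_mean_difference([0.2, 0.5, 0.3], [0.15, 0.30, 0.20])
print(f"MD = {md:.2f} mm, 95% CI [{lo:.2f}, {hi:.2f}]")  # CI containing 0 => not significant
```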

13 pages, 10272 KB  
Article
The Reproducibility of Reference Landmarks in the External Acoustic Meatus (EAM) on Cone Beam Computed Tomography (CBCT) Images
by Fernanda Sanders-Mello, Ronald E. G. Jonkman, Ynke Baltussen, Frederik R. Rozema and Jan Harm Koolstra
J. Clin. Med. 2024, 13(14), 4226; https://doi.org/10.3390/jcm13144226 - 19 Jul 2024
Cited by 1 | Viewed by 1921
Abstract
Objective: The aim of the present study is to identify a more reliable reference point in three-dimensional cephalometric analysis to replace the Porion point used in two-dimensional analysis, enhancing the accuracy of assessments. Methods: The methodology assessed potential alternative landmarks for three-dimensional cephalometric analysis. Utilizing a segmenting technique, anatomical landmarks were accurately pinpointed from the external acoustic meatus of 26 Cone Beam Computed Tomography (CBCT) scans. These landmarks were chosen for their clear and unambiguous detectability. To assess reproducibility, each landmark was replicated twice with a one-week interval by a master’s student. Reproducibility was quantitatively evaluated by analyzing the absolute difference per axis. Results: Five possible candidate landmarks were identified: the most anterior, posterior, superior, and inferior points of the external acoustic meatus (EAM) and a notch delineating the epitympanic recess. The reproducibility of pinpointing these landmarks ranged from 0.56 mm to 2.2 mm. The absolute mean differences between measurements were 0.46 mm (SD 0.75) for the most anterior point, 0.36 mm (SD 0.44) for the most posterior point, 0.25 mm (SD 0.26) for the most superior point, 1.11 mm (SD 1.03) for the most inferior point, and 0.78 mm (SD 0.57) for the epitympanic notch. Conclusions: The most superior point of the EAM might successfully replace the Porion as an anatomical reference. Full article
(This article belongs to the Section Nuclear Medicine & Radiology)
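Reproducibility here is the absolute difference per axis between two annotation sessions, summarized as a mean and SD per landmark. A minimal sketch of that computation (the array names, shapes, and the axis-averaging step are assumptions, not the study's exact protocol):

```python
import numpy as np

def per_axis_reproducibility(session1, session2):
    """Absolute per-axis differences between two landmarking sessions.

    session1, session2: arrays of shape (n_scans, n_landmarks, 3) with x, y, z in mm.
    Returns mean and SD of |difference| per landmark, averaged over the three axes.
    """
    abs_diff = np.abs(session1 - session2)   # (n_scans, n_landmarks, 3)
    per_landmark = abs_diff.mean(axis=2)     # average over the x, y, z axes
    return per_landmark.mean(axis=0), per_landmark.std(axis=0)
```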

12 pages, 317 KB  
Review
Artificial Intelligence Systems Assisting in the Assessment of the Course and Retention of Orthodontic Treatment
by Martin Strunga, Renáta Urban, Jana Surovková and Andrej Thurzo
Healthcare 2023, 11(5), 683; https://doi.org/10.3390/healthcare11050683 - 25 Feb 2023
Cited by 56 | Viewed by 8073
Abstract
This scoping review examines the contemporary applications of advanced artificial intelligence (AI) software in orthodontics, focusing on its potential to improve daily working protocols, but also highlighting its limitations. The aim of the review was to evaluate the accuracy and efficiency of current AI-based systems compared to conventional methods in diagnosing, assessing the progress of patients’ treatment and follow-up stability. The researchers used various online databases and identified diagnostic software and dental monitoring software as the most studied software in contemporary orthodontics. The former can accurately identify anatomical landmarks used for cephalometric analysis, while the latter enables orthodontists to thoroughly monitor each patient, determine specific desired outcomes, track progress, and warn of potential changes in pre-existing pathology. However, there is limited evidence to assess the stability of treatment outcomes and relapse detection. The study concludes that AI is an effective tool for managing orthodontic treatment from diagnosis to retention, benefiting both patients and clinicians. Patients find the software easy to use and feel better cared for, while clinicians can make diagnoses more easily and assess compliance and damage to braces or aligners more quickly and frequently. Full article
10 pages, 1485 KB  
Article
Reliability of Artificial Intelligence-Assisted Cephalometric Analysis. A Pilot Study
by Anna Alessandri-Bonetti, Linda Sangalli, Martina Salerno and Patrizia Gallenzi
BioMedInformatics 2023, 3(1), 44-53; https://doi.org/10.3390/biomedinformatics3010003 - 10 Jan 2023
Cited by 8 | Viewed by 4773
Abstract
Recently, Artificial Intelligence (AI) has spread in orthodontics, in particular within cephalometric analysis, where computerized digital software is able to provide linear and angular measurements upon manual landmark identification. A step forward is constituted by fully automated AI-assisted cephalometric analysis, where the landmarks are automatically detected by software. The aim of the study was to compare the reliability of a fully automated AI-assisted cephalometric analysis with that obtained using computerized digital software upon manual landmark identification. Fully automated AI-assisted cephalometric analyses of 13 lateral cephalograms were retrospectively compared to the cephalometric analysis performed twice by a blinded operator with computerized software. Intra- and inter-operator (fully automated AI-assisted vs. computerized software with manual landmark identification) reliability in cephalometric parameters (maxillary convexity, facial conicity, facial axis angle, posterior and lower facial height) was tested with the Dahlberg equation and Bland–Altman plots. The results revealed no significant difference in intra- and inter-operator measurements. Although not significant, higher errors were observed within intra-operator measurements of posterior facial height and inter-operator measurements of facial axis angle. In conclusion, despite the small sample, the cephalometric measurements of fully automated AI-assisted cephalometric software were reliable and accurate. Nevertheless, digital technological advances cannot substitute for the critical role of the orthodontist in reaching a correct diagnosis. Full article
(This article belongs to the Special Issue Computational Biology and Artificial Intelligence in Medicine)
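For reference, the Dahlberg method error for duplicate measurements is d = sqrt(sum((x1 - x2)^2) / (2n)). A small sketch of that formula (the sample values are invented for illustration):

```python
import numpy as np

def dahlberg_error(first, second):
    """Dahlberg's formula for the method error of duplicate measurements:
    d = sqrt( sum((x1 - x2)^2) / (2 * n) ).
    """
    first, second = np.asarray(first, float), np.asarray(second, float)
    return np.sqrt(np.sum((first - second) ** 2) / (2 * len(first)))

# Illustrative duplicate measurements of one cephalometric parameter (degrees)
print(dahlberg_error([87.2, 90.1, 85.6], [86.9, 90.4, 85.9]))
```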

17 pages, 690 KB  
Systematic Review
Clinical Applications of Artificial Intelligence and Machine Learning in Children with Cleft Lip and Palate—A Systematic Review
by Mohamed Zahoor Ul Huqh, Johari Yap Abdullah, Ling Shing Wong, Nafij Bin Jamayet, Mohammad Khursheed Alam, Qazi Farah Rashid, Adam Husein, Wan Muhamad Amir W. Ahmad, Sumaiya Zabin Eusufzai, Somasundaram Prasadh, Vetriselvan Subramaniyan, Neeraj Kumar Fuloria, Shivkanya Fuloria, Mahendran Sekar and Siddharthan Selvaraj
Int. J. Environ. Res. Public Health 2022, 19(17), 10860; https://doi.org/10.3390/ijerph191710860 - 31 Aug 2022
Cited by 32 | Viewed by 10063
Abstract
Objective: The objective of this systematic review was (a) to explore the current clinical applications of AI/ML (artificial intelligence and machine learning) techniques in diagnosis and treatment prediction in children with CLP (cleft lip and palate), and (b) to create a qualitative summary of the results of the studies retrieved. Materials and methods: An electronic search was carried out using databases such as PubMed, Scopus, and the Web of Science Core Collection. Two reviewers searched the databases separately and concurrently. The initial search was conducted on 6 July 2021. The publishing period was unrestricted; however, the search was limited to articles involving human participants and published in English. Combinations of Medical Subject Headings (MeSH) phrases and free-text terms were used as search keywords in each database. The following data were taken from the methods and results sections of the selected papers: the number of AI training datasets used to train the intelligent systems, as well as their conditional properties; the problems studied include unilateral CLP, bilateral CLP, unilateral cleft lip and alveolus, unilateral cleft lip, hypernasality, dental characteristics, and sagittal jaw relationship in children with CLP. Results: Based on the predefined search strings with accompanying database keywords, a total of 44 articles were found in Scopus, PubMed, and Web of Science search results. After reading the full articles, 12 papers were included for systematic analysis. Conclusions: Artificial intelligence provides an advanced technology that can be employed in AI-enabled computerized programming software for accurate landmark detection, rapid digital cephalometric analysis, clinical decision-making, and treatment prediction. In children with corrected unilateral cleft lip and palate, ML can help detect cephalometric predictors of future need for orthognathic surgery. Full article

12 pages, 5800 KB  
Article
A Computational Tool for Detection of Soft Tissue Landmarks and Cephalometric Analysis
by Mohammad Azad, Said Elaiwat and Mohammad Khursheed Alam
Electronics 2022, 11(15), 2408; https://doi.org/10.3390/electronics11152408 - 2 Aug 2022
Cited by 4 | Viewed by 2843
Abstract
In facial aesthetics, soft tissue landmark recognition and linear and angular measurement play a critical role in treatment planning. Visual identification and judgment by hand are time-consuming and prone to errors. As a result, user-friendly software solutions are required to assist healthcare practitioners in improving treatment planning. Our first goal in this paper is to create a computational tool that may be used to identify and save critical landmarks from patient X-ray images. The second goal is to create automated software that can assess the soft tissue facial profiles of patients in both linear and angular directions using the landmarks that have been identified. To boost contrast, we employ gamma correction, and we use a client-server web-based model to display the input images. Furthermore, we use the client side to record landmarks in the images and save the annotated landmarks to the database. The linear and angular measurements from the recorded landmarks are then calculated computationally and displayed to the user. Annotation and validation of 13 soft tissue landmarks were completed. The results reveal that our software accurately locates landmarks with a maximum deviation of 1.5 mm to 5 mm for the majority of landmarks. Furthermore, the linear and angular measurement variances across users are not large, indicating that the procedure is reliable. Full article
(This article belongs to the Special Issue Knowledge Engineering and Data Mining)
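The two technical steps named in the abstract, gamma correction for contrast and linear and angular measurements from recorded landmarks, can be sketched as follows (the function names, calibration factor, and default gamma are illustrative assumptions, not the authors' code):

```python
import numpy as np

def gamma_correct(image, gamma=1.5):
    """Apply gamma correction to an 8-bit grayscale image (contrast boost)."""
    norm = image.astype(np.float32) / 255.0
    return np.clip((norm ** (1.0 / gamma)) * 255.0, 0, 255).astype(np.uint8)

def distance_mm(p, q, mm_per_pixel=0.1):
    """Linear measurement between two landmarks given the image calibration."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)) * mm_per_pixel)

def angle_deg(a, vertex, b):
    """Angle (degrees) at `vertex` formed by landmarks a-vertex-b."""
    u = np.asarray(a, float) - np.asarray(vertex, float)
    v = np.asarray(b, float) - np.asarray(vertex, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```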

16 pages, 1831 KB  
Article
Anatomical Landmark Detection Using a Feature-Sharing Knowledge Distillation-Based Neural Network
by Di Huang, Yuzhao Wang, Yu Wang, Guishan Gu and Tian Bai
Electronics 2022, 11(15), 2337; https://doi.org/10.3390/electronics11152337 - 27 Jul 2022
Cited by 1 | Viewed by 2597
Abstract
Existing anatomical landmark detection methods pursue performance gains with heavyweight network architectures, which leads to models with poor scalability and cost-effectiveness. To address this problem, state-of-the-art knowledge distillation (KD) methods have been proposed. However, they only require the teacher model to guide the output of the final layer of the student model. In this way, the semantic information learned by the student model is very limited. Different from previous works, we propose a novel KD-based model-training strategy, named feature-sharing fast landmark detection (FSF-LD), which focuses on intermediate features and effectively transfers richer spatial information from the teacher model to the student model. Moreover, to generate richer and more reliable knowledge, we propose a multi-task learning structure to pretrain the teacher model before FSF-LD. Finally, a tiny and effective anatomical landmark detection model is obtained. We evaluate our proposed FSF-LD on a public 2D hand radiograph dataset, a public 2D cephalometric radiograph dataset, and a private 2D hip radiograph dataset. On the 2D hand dataset, our FSF-LD achieves 11.7%, 12.1%, 12.0%, and 11.4% improvements in SDR (r = 2 mm, r = 2.5 mm, r = 3 mm, r = 4 mm) compared with other KD methods. The results suggest the superiority of FSF-LD in terms of model performance and cost-effectiveness. However, further improving the detection accuracy of anatomical landmarks and realizing the clinical application of these results remain challenges, which we plan to address next. Full article
(This article belongs to the Special Issue Pattern Recognition and Machine Learning Applications)
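The distinguishing idea of FSF-LD is distilling intermediate feature maps rather than only the final output. A hedged sketch of such a combined loss, with a 1x1 projection aligning the student's channel count to the teacher's (the class name, loss weighting, and projection are assumptions, not the published implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistillLoss(nn.Module):
    """Task loss on heatmaps plus MSE between projected student and teacher features."""

    def __init__(self, student_channels, teacher_channels, alpha=0.5):
        super().__init__()
        # A 1x1 conv lets a thin student feature map match the teacher's channel count.
        self.project = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)
        self.alpha = alpha

    def forward(self, student_heatmaps, gt_heatmaps, student_feats, teacher_feats):
        task = F.mse_loss(student_heatmaps, gt_heatmaps)
        distill = F.mse_loss(self.project(student_feats), teacher_feats.detach())
        return task + self.alpha * distill
```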

21 pages, 2512 KB  
Article
Cephalometric Landmark Detection in Lateral Skull X-ray Images by Using Improved SpatialConfiguration-Net
by Martin Šavc, Gašper Sedej and Božidar Potočnik
Appl. Sci. 2022, 12(9), 4644; https://doi.org/10.3390/app12094644 - 5 May 2022
Cited by 6 | Viewed by 8763
Abstract
Accurate automated localization of cephalometric landmarks in skull X-ray images is the basis for planning orthodontic treatments, predicting skull growth, or diagnosing facial discrepancies. Such diagnoses require as many landmarks as possible to be detected on cephalograms. Today's best methods are adapted to detect just 19 landmarks accurately in images with limited variability. This paper describes the development of the SCN-EXT convolutional neural network (CNN), which is designed to localize 72 landmarks in strongly varying images. The proposed method is based on the SpatialConfiguration-Net, which is upgraded by adding replications of the simpler local appearance and spatial configuration components. Such a modification of the architecture increases the CNN capacity without simultaneously increasing the number of free parameters. The effectiveness of our approach was confirmed experimentally on two datasets. In terms of effectiveness, the SCN-EXT method was around 4% behind the state of the art on the small ISBI database with 250 testing images and 19 cephalometric landmarks. On the other hand, our method statistically significantly surpassed the state of the art by around 3% on the demanding AUDAX database with 4695 highly variable testing images and 72 landmarks. Increasing the CNN capacity as proposed is especially important for a small learning set and limited computer resources. Our algorithm is already utilized in orthodontic clinical practice. Full article
(This article belongs to the Special Issue Advances in Biomedical Image Processing and Analysis)
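SpatialConfiguration-Net couples a local appearance component with a coarser spatial configuration component and multiplies their heatmaps, and SCN-EXT adds capacity by replicating these simpler components. A heavily hedged sketch of the heatmap-fusion idea only (the module wiring, pooling factor, and names are placeholders, not the published SCN-EXT architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeatmapFusion(nn.Module):
    """Element-wise product of local-appearance and spatial-configuration heatmaps."""

    def __init__(self, appearance_net, configuration_net, down=4):
        super().__init__()
        self.appearance_net = appearance_net        # fine, local evidence per landmark
        self.configuration_net = configuration_net  # coarse, global landmark layout
        self.down = down

    def forward(self, x):
        h_local = self.appearance_net(x)            # (B, n_landmarks, H, W)
        coarse = F.avg_pool2d(h_local, self.down)
        h_config = self.configuration_net(coarse)   # (B, n_landmarks, H/d, W/d)
        h_config = F.interpolate(h_config, size=h_local.shape[-2:],
                                 mode="bilinear", align_corners=False)
        # Product suppresses locally plausible but globally inconsistent responses.
        return h_local * h_config
```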

14 pages, 1395 KB  
Article
Effectiveness of Human–Artificial Intelligence Collaboration in Cephalometric Landmark Detection
by Van Nhat Thang Le, Junhyeok Kang, Il-Seok Oh, Jae-Gon Kim, Yeon-Mi Yang and Dae-Woo Lee
J. Pers. Med. 2022, 12(3), 387; https://doi.org/10.3390/jpm12030387 - 3 Mar 2022
Cited by 31 | Viewed by 4828
Abstract
Detection of cephalometric landmarks has contributed to the analysis of malocclusion during orthodontic diagnosis. Many recent studies involving deep learning have focused on head-to-head comparisons of accuracy in landmark identification between artificial intelligence (AI) and humans. However, a human–AI collaboration for the identification of cephalometric landmarks has not been evaluated. We selected 1193 cephalograms and used them to train the deep anatomical context feature learning (DACFL) model. The number of target landmarks was 41. To evaluate the effect of human–AI collaboration on landmark detection, 10 images were extracted randomly from 100 test images. The experiment included 20 dental students as beginners in landmark localization. The outcomes were determined by measuring the mean radial error (MRE), successful detection rate (SDR), and successful classification rate (SCR). On the dataset, the DACFL model exhibited an average MRE of 1.87 ± 2.04 mm and an average SDR of 73.17% within a 2 mm threshold. Compared with the beginner group, beginner–AI collaboration improved the SDR by 5.33% within a 2 mm threshold and also improved the SCR by 8.38%. Thus, the beginner–AI collaboration was effective in the detection of cephalometric landmarks. Further studies should be performed to demonstrate the benefits of an orthodontist–AI collaboration. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Personalized Medicine)

25 pages, 7092 KB  
Article
Use of Advanced Artificial Intelligence in Forensic Medicine, Forensic Anthropology and Clinical Anatomy
by Andrej Thurzo, Helena Svobodová Kosnáčová, Veronika Kurilová, Silvester Kosmeľ, Radoslav Beňuš, Norbert Moravanský, Peter Kováč, Kristína Mikuš Kuracinová, Michal Palkovič and Ivan Varga
Healthcare 2021, 9(11), 1545; https://doi.org/10.3390/healthcare9111545 - 12 Nov 2021
Cited by 75 | Viewed by 15894
Abstract
Three-dimensional convolutional neural networks (3D CNNs) of artificial intelligence (AI) are potent in image processing and recognition, using deep learning to perform generative and descriptive tasks. Compared to their predecessors, the advantage of CNNs is that they automatically detect important features without any human supervision. A 3D CNN extracts features in three dimensions, where the input is a 3D volume or a sequence of 2D images, e.g., slices in a cone-beam computed tomography (CBCT) scan. The main aim was to foster interdisciplinary cooperation between forensic medical experts and deep learning engineers, encouraging clinical forensic experts who may have only basic knowledge of advanced artificial intelligence techniques to implement them in their efforts to advance forensic research. This paper introduces a novel workflow of 3D CNN analysis of full-head CBCT scans. The authors explore current methods and design customized 3D CNN applications for particular forensic research from five perspectives: (1) sex determination, (2) biological age estimation, (3) 3D cephalometric landmark annotation, (4) growth vector prediction, (5) facial soft-tissue estimation from the skull and vice versa. In conclusion, 3D CNN applications can mark a watershed moment in forensic medicine, leading to unprecedented improvement of forensic analysis workflows based on 3D neural networks. Full article
(This article belongs to the Special Issue New Trends in Forensic and Legal Medicine)

14 pages, 3438 KB  
Article
Accurate Landmark Localization for Medical Images Using Perturbations
by Junhyeok Kang, Kanghan Oh and Il-Seok Oh
Appl. Sci. 2021, 11(21), 10277; https://doi.org/10.3390/app112110277 - 2 Nov 2021
Cited by 9 | Viewed by 5027
Abstract
Recently, various methods have been proposed to learn rich image representations in deep learning. In particular, the perturbation method is a simple way to learn rich representations that has shown significant success. In this study, we present effective perturbation approaches for medical landmark localization. To this end, we report an extensive experiment that uses the perturbation methods of erasing, smoothing, binarization, and edge detection. The hand X-ray dataset and the ISBI 2015 Cephalometric dataset are used to evaluate the perturbation effect. The experimental results show that the perturbation method forces the network to extract richer representations of an image, leading to performance increases. Moreover, compared with existing methods, our approach achieves superior performance using specific perturbations without any complex algorithmic changes to the network. Full article
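The four perturbations named above (erasing, smoothing, binarization, and edge detection) are simple image operations. A minimal OpenCV/NumPy sketch (parameter values and the random-rectangle policy are illustrative assumptions, not the paper's exact settings):

```python
import cv2
import numpy as np

def perturb(image, mode, rng=np.random.default_rng()):
    """Apply one perturbation to an 8-bit grayscale image; returns a new array."""
    out = image.copy()
    if mode == "erase":                      # blank out a random rectangle
        h, w = image.shape
        y, x = rng.integers(0, h // 2), rng.integers(0, w // 2)
        out[y:y + h // 4, x:x + w // 4] = 0
    elif mode == "smooth":                   # Gaussian blur
        out = cv2.GaussianBlur(image, (9, 9), 0)
    elif mode == "binarize":                 # Otsu thresholding
        _, out = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    elif mode == "edges":                    # Canny edge map
        out = cv2.Canny(image, 50, 150)
    return out
```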
