Search Results (145)

Search Parameters:
Keywords = radiographic errors

33 pages, 7383 KB  
Article
Vertebra Segmentation and Cobb Angle Calculation Platform for Scoliosis Diagnosis Using Deep Learning: SpineCheck
by İrfan Harun İlkhan, Halûk Gümüşkaya and Firdevs Turgut
Informatics 2025, 12(4), 140; https://doi.org/10.3390/informatics12040140 - 11 Dec 2025
Abstract
This study presents SpineCheck, a fully integrated deep-learning-based clinical decision support platform for automatic vertebra segmentation and Cobb angle (CA) measurement from scoliosis X-ray images. The system unifies end-to-end preprocessing, U-Net-based segmentation, geometry-driven angle computation, and a web-based clinical interface within a single deployable architecture. For secure clinical use, SpineCheck adopts a stateless “process-and-delete” design, ensuring that no radiographic data or Protected Health Information (PHI) are permanently stored. Five U-Net family models (U-Net, optimized U-Net-2, Attention U-Net, nnU-Net, and UNet3++) are systematically evaluated under identical conditions using Dice similarity, inference speed, GPU memory usage, and deployment stability, enabling deployment-oriented model selection. A robust CA estimation pipeline is developed by combining minimum-area rectangle analysis with Theil–Sen regression and spline-based anatomical modeling to suppress outliers and improve numerical stability. The system is validated on a large-scale dataset of 20,000 scoliosis X-ray images, demonstrating strong agreement with expert measurements based on Mean Absolute Error, Pearson correlation, and Intraclass Correlation Coefficient metrics. These findings confirm the reliability and clinical robustness of SpineCheck. By integrating large-scale validation, robust geometric modeling, secure stateless processing, and real-time deployment capabilities, SpineCheck provides a scalable and clinically reliable framework for automated scoliosis assessment.
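The angle-computation step above lends itself to a brief illustration. The following is a minimal sketch of the minimum-area-rectangle stage only, assuming per-vertebra binary masks from the segmentation network; it omits the paper's Theil–Sen regression and spline-based modeling, and all names are illustrative rather than SpineCheck's actual code.

```python
import cv2
import numpy as np

def vertebra_tilt_deg(mask: np.ndarray) -> float:
    """Tilt of one vertebra, from the minimum-area rectangle of its binary mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    box = cv2.boxPoints(cv2.minAreaRect(max(contours, key=cv2.contourArea)))
    # Use the longer rectangle edge as the vertebral long axis; this sidesteps
    # OpenCV's version-dependent minAreaRect angle convention.
    e1, e2 = box[1] - box[0], box[2] - box[1]
    axis = e1 if np.linalg.norm(e1) >= np.linalg.norm(e2) else e2
    ang = np.degrees(np.arctan2(axis[1], axis[0]))
    return float((ang + 90.0) % 180.0 - 90.0)  # fold into [-90, 90)

def cobb_angle_deg(masks: list[np.ndarray]) -> float:
    """Cobb angle as the largest tilt difference between any two vertebrae."""
    tilts = np.array([vertebra_tilt_deg(m) for m in masks])
    return float(tilts.max() - tilts.min())
```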

24 pages, 1958 KB  
Article
Wearable Sensor–Based Telerehabilitation Versus Conventional Physiotherapy in Knee OA: Insights from the KneE-PAD Pilot Study
by Theodora Plavoukou, Panagiotis Kasnesis, Amalia Contiero Syropoulou, Georgios Papagiannis, Dimitrios Stasinopoulos and George Georgoudis
Appl. Sci. 2025, 15(24), 12988; https://doi.org/10.3390/app152412988 - 10 Dec 2025
Abstract
Background: Knee osteoarthritis (OA) is a leading cause of disability globally. Conventional physiotherapy, while effective, faces barriers including accessibility and adherence. Telerehabilitation augmented by wearable sensor technology and AI-driven feedback offers a scalable alternative. Objective: This pilot randomized controlled trial compared the feasibility, safety, and preliminary clinical effectiveness of a sensor-based telerehabilitation protocol using the KneE-PAD patient monitoring approach combined with an avatar-guided visual feedback add-on tool. Although this approach is capable of AI-driven postural error detection, this feature was not enabled during the current study, and feedback was provided solely through visual cues. Methods: Twenty adults with radiographically confirmed Kellgren–Lawrence grade 1 to 3 knee OA were randomized into two groups (control and intervention, n = 10 each). The control group received in-person physiotherapy, while the intervention group engaged in remote rehabilitation supported by wearable sEMG and IMU sensors. The 8-week program included supervised and home-based sessions. Primary outcomes were WOMAC scores (functionality/pain), quadriceps strength, and sEMG-derived neuromuscular activation. Secondary outcomes included the Timed Up and Go test (TUG), psychological measures (HADS, TSK), and a self-efficacy measure (ASES). Analyses employed both parametric and non-parametric statistics, including effect size estimation. Results: Both groups demonstrated significant improvements in WOMAC total scores (intervention: −11.8 points; control: −6.4 points), exceeding the minimal clinically important difference (MCID) for knee OA. Strength and mobility also improved significantly in both groups, with the intervention group showing superior gains in sEMG measures (RMS: p = 0.0077; peak-to-peak: p < 0.005), indicating enhanced neuromuscular adaptation. TUG performance improved more in the intervention group (−3.17 s vs. −2.57 s, p = 0.037). Psychological outcomes favored the control group, particularly in depression scores (HADS-D, t(18) = 2.37, p = 0.03). Adherence was high (94.8%), with zero attrition and no adverse events. Conclusions: The KneE-PAD monitoring approach offers a feasible and clinically effective alternative to conventional physiotherapy, enhancing neuromuscular outcomes through real-time sensor feedback. These findings support the viability of intelligent telerehabilitation for scalable OA care and inform the design of future large-scale trials.
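Because the primary sEMG outcomes here are RMS and peak-to-peak amplitude, a short sketch of how such features are commonly computed from a raw trace may be useful; the 250 ms window and all names are assumptions, not the study's actual processing pipeline.

```python
import numpy as np

def semg_features(signal: np.ndarray, fs: float, win_s: float = 0.25):
    """Windowed RMS and peak-to-peak amplitude of a raw sEMG trace."""
    n = int(win_s * fs)                                # samples per window
    windows = signal[: len(signal) // n * n].reshape(-1, n)
    rms = np.sqrt((windows ** 2).mean(axis=1))         # activation intensity
    p2p = windows.max(axis=1) - windows.min(axis=1)    # amplitude range
    return rms, p2p

# Toy 1 kHz trace standing in for a 10 s quadriceps recording.
rng = np.random.default_rng(0)
rms, p2p = semg_features(rng.normal(0.0, 0.1, 10_000), fs=1000.0)
```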

14 pages, 1775 KB  
Article
Development of a Deep Learning Model for Hip Arthroplasty Templating Using Anteroposterior Hip Radiograph
by Siwadol Wongsak, Tanapol Janyawongchot, Nithid Sri-Utenchai, Dhammathat Owasirikul, Suphaneewan Jaovisidha, Patarawan Woratanarat and Paphon Sa-Ngasoongsong
J. Clin. Med. 2025, 14(24), 8689; https://doi.org/10.3390/jcm14248689 - 8 Dec 2025
Abstract
Background: Preoperative templating is an essential step in hip arthroplasty (HA), guiding implant selection and reducing surgical complications. It is typically performed using acetate templates or digital software. These methods, however, depend on the surgeon’s experience and may be limited by cost and availability. This study aimed to develop and validate a deep learning (DL) model using plain radiographs to predict implant sizes in HA. Methods: This retrospective study included patients who underwent primary HA using a cementless CORAIL® femoral stem and PINNACLE® acetabular cup. The DL model was trained on 688 preoperative anteroposterior (AP) hip radiographs and validated temporally on 98 additional cases. Implant sizes predicted by the DL model were compared with on-screen templating (acetate templates overlaid on digital images). The actual implanted size was used as the reference standard. Accuracy, mean absolute error (MAE), and root mean square error (RMSE) were calculated. Logistic regression was performed to identify factors influencing prediction accuracy. Results: The DL model showed higher accuracy than the on-screen templating for the acetabular cup (88.9% [77.4% to 95.8%] vs. 83.3% [70.7% to 90.2%]) and femoral stem components (85.7% [77.2% to 92.0%] vs. 81.6% [72.5% to 88.7%]), while the on-screen method performed better for the bipolar head (93.2% [81.3% to 98.6%] vs. 72.7% [57.2% to 85.0%]). MAE and RMSE were comparable between the methods for acetabular and femoral stem components (all p > 0.05), with statistically significant differences observed only in the bipolar head (p < 0.01 and 0.02, respectively). Although logistic regression analysis showed trends toward higher accuracy in acetabular size prediction among women and those with shorter height, no demographic factors were statistically significant predictors of accuracy. Conclusions: A DL model using only plain radiographs can accurately predict implant sizes in HA, particularly for the acetabulum and femoral stem. These findings suggest that the DL-based model could be a useful tool in preoperative planning. With further refinement to improve generalizability, this approach could be useful in a routine clinical setting in the future.
(This article belongs to the Special Issue Recent Advances and Clinical Outcomes of Hip and Knee Arthroplasty)
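As a small illustration of the evaluation described above, the sketch below scores predicted implant sizes against the implanted reference with exact-match accuracy, MAE, and RMSE. The exact-match criterion and the toy values are assumptions; the study's accuracy definition may allow a size tolerance.

```python
import numpy as np

def templating_metrics(pred: np.ndarray, actual: np.ndarray) -> dict:
    """Exact-match accuracy, MAE, and RMSE of predicted vs. implanted sizes."""
    err = pred.astype(float) - actual.astype(float)
    return {"accuracy": float((err == 0).mean()),
            "mae": float(np.abs(err).mean()),
            "rmse": float(np.sqrt((err ** 2).mean()))}

# Toy acetabular cup diameters (mm): two of three predictions match exactly.
print(templating_metrics(np.array([50, 52, 54]), np.array([50, 52, 56])))
```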

12 pages, 2242 KB  
Article
Augmented Reality-Assisted Micro-Invasive Apicectomy with Markerless Visual–Inertial Odometry: An In Vivo Pilot Study
by Marco Farronato, Davide Farronato, Federico Michelini and Giulio Rasperini
Appl. Sci. 2025, 15(23), 12588; https://doi.org/10.3390/app152312588 - 27 Nov 2025
Abstract
Introduction: Apicectomy is an endodontic surgical procedure prescribed for persistent periapical pathologies when conventional root canal therapy or retreatment have failed. Accurate intraoperative visualization of the root apex and surrounding structures remains challenging and subject to possible errors. Augmented reality (AR) allows for the addition of real-time digital overlays of the anatomical region, thus potentially improving surgical precision and reducing invasiveness. The purpose of this pilot study is to describe the application of an AR method in cases requiring apicectomy. Materials and Methods: Patients presenting with chronic persistent apical radiolucency associated with pain underwent AR-assisted apicectomy. Cone-beam computed tomography (CBCT) scans were obtained preoperatively for segmentation of the target root apex and adjacent anatomical structures. A custom visual–inertial odometry (VIO) algorithm was used to map and stabilize the segmented digital 3D models on a portable device in real time, enabling an overlay of digital guides onto the operative field. The duration of preoperative procedures was recorded. Postoperative pain, measured on a Visual Analogue Scale (VAS), and periapical healing, assessed radiographically, were recorded at baseline (T0) and at 6 weeks and 6 months (T1–T2) after surgery. Results: AR-assisted apicectomies were successfully performed in all three patients without intraoperative complications. The digital overlay procedure required an average of 1.49 ± 0.34 min. VAS scores decreased significantly from T0 to T2, and patients showed radiographic evidence of progressive periapical healing. No patient reported persistent discomfort at follow-up. Conclusion: This preliminary pilot study indicates that AR-assisted apicectomy is feasible and may improve intraoperative visualization with low additional surgical time. Future larger-scale studies with control groups are needed to validate the proposed method and to quantify the outcomes. Clinical Significance: By integrating real-time digital images of bony structures and root morphology, AR guidance during apicectomy may offer enhanced precision for apical resection and may decrease the risk of iatrogenic damage. The use of a visual–inertial odometry-based AR method is a novel technique that demonstrated promising results in terms of VAS and final outcomes, especially in anatomically challenging cases in this preliminary pilot study.
(This article belongs to the Special Issue Advanced Dental Imaging Technology)

17 pages, 2025 KB  
Article
Breast Organ Dose and Radiation Exposure Reduction in Full-Spine Radiography: A Phantom Model Using PCXMC
by Manami Nemoto and Koichi Chida
Diagnostics 2025, 15(21), 2787; https://doi.org/10.3390/diagnostics15212787 - 3 Nov 2025
Abstract
Background/Objectives: Full-spine radiography is frequently performed from childhood to adulthood, raising concerns about radiation-induced breast cancer risk. To assess probabilistic risks such as cancer, accurate estimation of equivalent and effective organ doses is essential. The purpose of this study is to investigate X-ray imaging conditions for radiation reduction based on breast organ dose and to evaluate the accuracy of simulation software for dose calculation. Methods: Breast organ doses from full-spine radiography were calculated using the Monte Carlo-based dose calculation software PCXMC. Breast organ doses were estimated under various technical conditions of full-spine radiography (tube voltage, distance, grid presence, and beam projection). Dose reduction methods were explored, and variations in dose and error due to phantom characteristics and photon history number were evaluated. Results: Among the X-ray conditions, the greatest radiation reduction effect was achieved by changing the imaging direction. Changing from the anteroposterior to the posteroanterior direction reduced doses by approximately 76.7% to 89.1% (127.8–326.7 μGy) in children and 80.4% to 91.1% (411.3–911.1 μGy) in adults. In addition, the study highlighted how phantom characteristics and the number of photon histories influence estimated doses and calculation error, with approximately 2 × 10⁶ photon histories recommended to achieve a standard error ≤ 2%. Conclusions: Modifying radiographic conditions is effective for reducing breast radiation exposure in patients with scoliosis. Furthermore, to ensure the accuracy of dose calculation software, the number of photon histories must be adjusted under certain conditions and used while verifying the standard error. This study demonstrates how technical modifications, projection selection, and phantom characteristics influence breast radiation exposure, thereby supporting the need for patient-tailored imaging strategies that minimize radiation risk while maintaining diagnostic validity. The findings may be useful in informing radiographic protocols and the development of safer imaging guidelines for both pediatric and adult patients undergoing spinal examinations.
(This article belongs to the Special Issue Recent Advances in Diagnostic and Interventional Radiology)
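The link between photon histories and calculation error follows from the Monte Carlo relation SE = σ/√N, so the history count needed for a target standard error can be projected from a short pilot run. The sketch below illustrates that reasoning on a toy per-photon dose distribution; it is not PCXMC's actual tally model.

```python
import numpy as np

def required_histories(pilot_doses: np.ndarray, target_rel_se: float = 0.02) -> int:
    """Histories N needed so that SE = sigma / sqrt(N) <= target_rel_se * mean."""
    mean, sigma = pilot_doses.mean(), pilot_doses.std(ddof=1)
    return int(np.ceil((sigma / (target_rel_se * mean)) ** 2))

# Toy pilot run: right-skewed per-photon energy deposition.
rng = np.random.default_rng(0)
pilot = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
print(required_histories(pilot))  # a few thousand here; sparse real tallies
                                  # push the requirement toward ~2 x 10^6
```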

29 pages, 10944 KB  
Article
Marker-Less Lung Tumor Tracking from Real-Time Color X-Ray Fluoroscopic Images Using Cross-Patient Deep Learning Model
by Yongxuan Yan, Fumitake Fujii and Takehiro Shiinoki
Bioengineering 2025, 12(11), 1197; https://doi.org/10.3390/bioengineering12111197 - 2 Nov 2025
Abstract
Fiducial marker implantation for tumor localization in radiotherapy is effective but invasive and carries complication risks. To address this, we propose a marker-less tumor tracking framework to explore the feasibility of a cross-patient deep learning model, aiming to eliminate the need for per-patient retraining. A novel degradation model generates realistic simulated data from digitally reconstructed radiographs (DRRs) to train a Restormer network, which transforms clinical fluoroscopic images into clean, DRR-like images. Subsequently, a DUCK-Net model, trained on DRRs, performs tumor segmentation. We conducted a feasibility study using a clinical dataset from 7 lung cancer patients, comprising 100 distinct treatment fields. The framework achieved an average processing time of 179.8 ms per image and demonstrated high accuracy: the median 3D Euclidean tumor center tracking error was 1.53 mm, with directional errors of 0.98 ± 0.70 mm (LR), 1.09 ± 0.74 mm (SI), and 1.34 ± 0.94 mm (AP). These promising results validate our approach as a proof-of-concept for a cross-patient marker-less tumor tracking solution, though further large-scale validation is required to confirm broad clinical applicability.
(This article belongs to the Special Issue Label-Free Cancer Detection)
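A quick sketch of the error metrics reported above: per-frame directional and 3D Euclidean tumor-center errors from predicted versus reference coordinates. The array layout and names are assumptions for illustration.

```python
import numpy as np

def tracking_errors(pred_mm: np.ndarray, ref_mm: np.ndarray):
    """Directional and 3D Euclidean tumor-center errors.
    Inputs are (n_frames, 3) arrays of (LR, SI, AP) positions in mm."""
    diff = pred_mm - ref_mm
    directional = np.abs(diff).mean(axis=0)   # mean |LR|, |SI|, |AP| errors
    euclidean = np.linalg.norm(diff, axis=1)  # per-frame 3D error
    return directional, float(np.median(euclidean))
```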

14 pages, 2128 KB  
Article
Effectiveness of Graded Weight-Bearing Exercises on Pain, Function, Proprioception, and Muscle Strength in Individuals with Knee Osteoarthritis: A Randomized Controlled Trial
by Ammar Fadil, Qassim Ibrahim Muaidi, Mohamed Salaheldien Alayat, Moayad S. Subahi, Roaa A. Sroge, Abdulaziz Awali and Mansour Abdullah Alshehri
J. Clin. Med. 2025, 14(21), 7685; https://doi.org/10.3390/jcm14217685 - 29 Oct 2025
Abstract
Background/Objectives: Knee osteoarthritis (OA) is a prevalent degenerative joint disorder associated with pain, impaired proprioception, and reduced physical function. While closed kinetic chain exercises (CKCEs) are commonly prescribed to enhance joint stability, their weight-bearing nature may exacerbate symptoms. Graded weight-bearing exercises (GWBEs) using anti-gravity treadmill training provide a novel approach to reduce joint loading while maintaining functional mobility. This study aimed to evaluate the effectiveness of GWBEs compared with CKCEs and open kinetic chain exercises (OKCEs) on pain, function, proprioception, and quadriceps strength in patients with knee OA. Methods: Forty-five adults aged 40–60 years with radiographically confirmed knee OA were randomized into three groups: (1) GWBE + OKCE, (2) CKCE + OKCE, or (3) OKCE alone. Interventions were conducted three times per week for six weeks. Outcomes included pain (Visual Analogue Scale), physical function (Western Ontario and McMaster Universities Osteoarthritis Index, 6-Minute Walk Test), proprioception (joint repositioning error at 45°), and quadriceps strength (isokinetic peak torque at 60°, 120°, and 180°/s). Results: All groups demonstrated significant improvements in pain and function (p < 0.05). Proprioception improved in the GWBE + OKCE and CKCE + OKCE groups but not in the OKCE group. No significant changes were observed in quadriceps strength across groups. The GWBE + OKCE group showed significantly greater improvements in pain, function, and proprioception compared to both comparator groups (p < 0.05). Conclusions: GWBE combined with OKCE is more effective than CKCE + OKCE and OKCE alone in improving pain, function, and proprioception in patients with knee OA.

10 pages, 1364 KB  
Article
Automated Detection of Lumbosacral Transitional Vertebrae on Plain Lumbar Radiographs Using a Deep Learning Model
by Donghyuk Kwak, Du Hyun Ro and Dong-Ho Kang
J. Clin. Med. 2025, 14(21), 7671; https://doi.org/10.3390/jcm14217671 - 29 Oct 2025
Abstract
Background/Objectives: Lumbosacral transitional vertebra (LSTV) is a common anatomical variant, but its identification on plain radiographs is often inconsistent. This inconsistency can lead to clinical complications such as chronic low back pain, misinterpretation of spinal parameters, and an increased risk of wrong-level surgery. This study aimed to develop and validate a deep learning-based artificial intelligence (AI) model for the automated detection of LSTV on plain lumbar radiographs. Methods: This retrospective observational study included a total of 3116 standing lumbar lateral radiographs. The presence or absence of LSTV was definitively established using whole-spine imaging, CT, or MRI. Multiple deep learning architectures, including DINOv2, CLIP (ViT-B/32), and ResNet-50, were initially evaluated for binary classification of LSTV. Among these, the ResNet-50 model with partial fine-tuning achieved the best test performance and was subsequently selected for fivefold cross-validation using the training set. Model performance was assessed using accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC), and interpretability was evaluated using gradient-weighted class activation mapping (Grad-CAM). Results: On the independent test set of 313 radiographs, the final model demonstrated robust diagnostic performance. It achieved an accuracy of 76.4%, a sensitivity of 85.1%, a specificity of 61.9%, and an AUC of 0.84. The model correctly identified 166 out of 195 LSTV cases and 73 out of 118 normal cases. Conclusions: This AI-based system offers a highly accurate and reliable method for the automated detection of LSTV on plain radiographs. It shows strong potential as a clinical decision-support tool to reduce diagnostic errors, improve pre-operative planning, and enhance patient safety.
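The abstract names a ResNet-50 with partial fine-tuning as the selected classifier. A minimal PyTorch sketch of that idea follows, freezing everything except the final residual stage and a new two-class head; exactly which layers the authors unfroze is not stated, so this split is an assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-50, fine-tuning only the last stage and the head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False            # freeze the backbone
for p in model.layer4.parameters():
    p.requires_grad = True             # unfreeze the final residual stage
model.fc = nn.Linear(model.fc.in_features, 2)  # LSTV present / absent

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```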

20 pages, 6268 KB  
Article
Automated Implant Placement Pathway from Dental Panoramic Radiographs Using Deep Learning for Preliminary Clinical Assistance
by Pei-Yi Wu, Shih-Lun Chen, Yi-Cheng Mao, Yuan-Jin Lin, Pin-Yu Lu, Kai-Hsun Yu, Kuo-Chen Li, Tsun-Kuang Chi, Tsung-Yi Chen and Patricia Angela R. Abu
Diagnostics 2025, 15(20), 2598; https://doi.org/10.3390/diagnostics15202598 - 15 Oct 2025
Abstract
Background/Objective: Dental implant therapy requires clinicians to identify edentulous regions and adjacent teeth accurately to ensure precise and efficient implant placement. However, this process is time-consuming and subject to operator bias. To address this challenge, this study proposes an AI-assisted detection framework that integrates deep learning and image processing techniques to predict implant placement pathways on dental panoramic radiographs (DPRs), supporting clinical decision-making. Methods: The proposed framework first applies YOLO models to detect edentulous regions and employs image enhancement techniques to improve image quality. Subsequently, YOLO-OBB is utilized to extract pixel-level positional information of neighboring healthy teeth. An implant pathway orientation visualization algorithm is then applied to derive clinically relevant implant placement recommendations. Results: Experimental evaluation using YOLOv9m and YOLOv8n-OBB demonstrated stable performance in both recognition and accuracy. The models achieved precision values of 88.86% and 89.82%, respectively, with an average angular error of only 1.537° compared to clinical implant pathways annotated by dentists. Conclusions: This study presents the first AI-assisted diagnostic framework for DPR-based implant pathway prediction. The results indicate strong consistency with clinical planning, confirming its potential to enhance diagnostic accuracy and provide reliable decision support in implant dentistry.
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)

23 pages, 11108 KB  
Article
Generative Modeling for Interpretable Anomaly Detection in Medical Imaging: Applications in Failure Detection and Data Curation
by McKell E. Woodland, Mais Altaie, Caleb S. O’Connor, Austin H. Castelo, Olubunmi C. Lebimoyo, Aashish C. Gupta, Joshua P. Yung, Paul E. Kinahan, Clifton D. Fuller, Eugene J. Koay, Bruno C. Odisio, Ankit B. Patel and Kristy K. Brock
Bioengineering 2025, 12(10), 1106; https://doi.org/10.3390/bioengineering12101106 - 14 Oct 2025
Abstract
This work aims to leverage generative modeling-based anomaly detection to enhance interpretability in AI failure detection systems and to aid data curation for large repositories. For failure detection interpretability, this retrospective study utilized 3339 CT scans (525 patients), divided patient-wise into training, baseline test, and anomaly (having failure-causing attributes—e.g., needles, ascites) test datasets. For data curation, 112,120 ChestX-ray14 radiographs were used for training and 2036 radiographs from the Medical Imaging and Data Resource Center for testing, categorized as baseline or anomalous based on attribute alignment with ChestX-ray14. StyleGAN2 networks modeled the training distributions. Test images were reconstructed with backpropagation and scored using mean squared error (MSE) and Wasserstein distance (WD). Scores should be high for anomalous images, as StyleGAN2 cannot model unseen attributes. Area under the receiver operating characteristic curve (AUROC) evaluated anomaly detection, i.e., baseline and anomaly dataset differentiation. The proportion of highest-scoring patches containing needles or ascites assessed anomaly localization. Permutation tests determined statistical significance. StyleGAN2 did not reconstruct anomalous attributes (e.g., needles, ascites), enabling the unsupervised detection of these attributes: mean (±standard deviation) AUROCs were 0.86 (±0.13) for failure detection and 0.82 (±0.11) for data curation. 81% (±13%) of the needles and ascites were localized. WD outperformed MSE on CT (p < 0.001), while MSE outperformed WD on radiography (p < 0.001). Generative models detected anomalous image attributes, demonstrating promise for model failure detection interpretability and large-scale data curation.
(This article belongs to the Section Biomedical Engineering and Biomaterials)
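The scoring step described above reduces to comparing each test image with its reconstruction. A minimal sketch follows, using plain MSE and SciPy's one-dimensional Wasserstein distance between pixel-intensity distributions; the paper's exact WD formulation is not spelled out in the abstract, so this is one plausible reading.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def anomaly_scores(image: np.ndarray, recon: np.ndarray) -> tuple[float, float]:
    """MSE and Wasserstein distance between an image and its StyleGAN2
    reconstruction; attributes the generator cannot reproduce inflate both."""
    mse = float(np.mean((image - recon) ** 2))
    wd = float(wasserstein_distance(image.ravel(), recon.ravel()))
    return mse, wd
```

Given scores for baseline and anomalous test images, `sklearn.metrics.roc_auc_score` then yields the AUROC used to judge how well the two groups separate.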

14 pages, 5627 KB  
Article
U-Net-Based Deep Learning for Simultaneous Segmentation and Agenesis Detection of Primary and Permanent Teeth in Panoramic Radiographs
by Hamit Tunç, Nurullah Akkaya, Berkehan Aykanat and Gürkan Ünsal
Diagnostics 2025, 15(20), 2577; https://doi.org/10.3390/diagnostics15202577 - 13 Oct 2025
Abstract
Background/Objectives: Panoramic radiographs aid diagnosis in paediatric dentistry, but errors occur. Deep learning-based artificial intelligence offers improved accuracy by reducing overlap-related and interpretive mistakes. This study aimed to develop a U-Net-based deep learning model for simultaneous tooth segmentation and agenesis detection, capable of distinguishing between primary and permanent teeth in panoramic radiographs. Methods: Publicly available panoramic radiographs, along with images collected from the archives of Burdur Mehmet Akif Ersoy University Faculty of Dentistry, were used. The dataset totalled 1697 panoramic radiographs after applying exclusion criteria for artifacts and edentulous cases. Manual segmentation was performed by two paediatric dentists and one dentomaxillofacial radiologist. The images were split into training (80%), validation (10%), and test (10%) sets. A U-Net architecture was trained to identify both primary and permanent teeth and to detect tooth agenesis. Results: Dental agenesis was detected in 14.6% of the 1697 panoramic radiographs (OPGs), predominantly affecting the mandibular second premolars (32.5%) and maxillary lateral incisors (27.6%). Intra- and inter-researcher intraclass correlation coefficients (ICCs) were 0.995 and 0.990, respectively (p > 0.05). On the test set, the model achieved a Dice similarity coefficient of 0.8773, precision of 0.9115, recall of 0.8974, and an F1 score of 0.9027. Validation accuracy was 96.71%, indicating reliable performance across diverse datasets. Conclusions: The proposed deep learning model automates tooth segmentation and agenesis detection for both primary and permanent dentitions in panoramic radiographs. Its strong performance metrics suggest improved accuracy and efficiency in paediatric dental diagnostics, potentially reducing clinician workload and minimizing diagnostic errors.
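A compact sketch of the reported segmentation metrics on binary masks follows. Note that for a single binary mask the Dice coefficient and the F1 score coincide mathematically, so the slightly different values above presumably come from averaging over classes or images in different ways; the code and names here are illustrative.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7):
    """Dice, precision, recall, and F1 for a pair of binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # true positive pixels
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return dice, precision, recall, f1
```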

14 pages, 1127 KB  
Article
Dental Age Estimation from Panoramic Radiographs: A Comparison of Orthodontist and ChatGPT-4 Evaluations Using the London Atlas, Nolla, and Haavikko Methods
by Derya Dursun and Rumeysa Bilici Geçer
Diagnostics 2025, 15(18), 2389; https://doi.org/10.3390/diagnostics15182389 - 19 Sep 2025
Abstract
Background: Dental age (DA) estimation, which is widely used in orthodontics, pediatric dentistry, and forensic dentistry, predicts chronological age (CA) by assessing tooth development and maturation. Most methods rely on radiographic evaluation of tooth mineralization and eruption stages to assess DA. With the increasing adoption of large language models (LLMs) in medical sciences, use of ChatGPT has extended to processing visual data. The aim of this study, therefore, was to evaluate the performance of ChatGPT-4 in estimating DA from panoramic radiographs using three conventional methods (Nolla, Haavikko, and London Atlas) and to compare its accuracy against both orthodontist assessments and CA. Methods: In this retrospective study, panoramic radiographs of 511 Turkish children aged 6–17 years were assessed. DA was estimated using the Nolla, Haavikko, and London Atlas methods by both orthodontists and ChatGPT-4. The DA–CA difference and mean absolute error (MAE) were calculated, and statistical comparisons were performed to assess accuracy, sex differences, and agreement between the evaluators, with significance set at p < 0.05. Results: The mean CA of the study population was 12.37 ± 2.95 years (boys: 12.39 ± 2.94; girls: 12.35 ± 2.96). Using the London Atlas method, the orthodontists overestimated CA with a DA–CA difference of 0.78 ± 1.26 years (p < 0.001), whereas ChatGPT-4 showed no significant DA–CA difference (0.03 ± 0.93; p = 0.399). Using the Nolla method, the orthodontists showed no significant DA–CA difference (0.03 ± 1.14; p = 0.606), but ChatGPT-4 underestimated CA with a DA–CA difference of −0.40 ± 1.96 years (p < 0.001). Using the Haavikko method, both evaluators underestimated CA (orthodontists: −0.88; ChatGPT-4: −1.18; p < 0.001). The lowest MAE for ChatGPT-4 was obtained when using the London Atlas method (0.59 ± 0.72), followed by Nolla (1.33 ± 1.28) and Haavikko (1.51 ± 1.41). For the orthodontists, the lowest MAE was achieved when using the Nolla method (0.86 ± 0.75). Agreement between the orthodontists and ChatGPT-4 was highest when using the London Atlas method (ICC = 0.944, r = 0.905). Conclusions: ChatGPT-4 showed the highest accuracy with the London Atlas method, with no significant difference from CA for either sex and the lowest prediction error. When using the Nolla and Haavikko methods, both ChatGPT-4 and the orthodontists tended to underestimate age, with higher errors. Overall, ChatGPT-4 performed best when using visually guided methods and was less accurate when using multi-stage scoring methods.
(This article belongs to the Section Medical Imaging and Theranostics)

19 pages, 1719 KB  
Article
Evaluation of Measurement Errors in Rotational Stitching, One-Shot, and Slot-Scanning Full-Length Radiography
by Zhengliang Li, Jie Xia, Cong Wang, Zhemin Zhu, Fan Zhang, Tsung-Yuan Tsai, Zhenhong Zhu and Kai Yang
Bioengineering 2025, 12(9), 999; https://doi.org/10.3390/bioengineering12090999 - 19 Sep 2025
Abstract
Full-length radiography is essential for evaluating spinal deformities, limb length discrepancies, and preoperative planning in orthopedics, yet the measurement accuracy of different radiographic methods remains unclear. This phantom study compared the accuracy of rotational stitching, one-shot, and slot-scanning full-length radiography across six radiographic systems in quantifying distances between anatomical landmarks. Measurement errors were statistically analyzed using appropriate nonparametric tests. The results demonstrated significant differences in measurement accuracy among the three methods (H(2) = 15.86, p < 0.001). Slot-scanning exhibited the highest accuracy, with a mean error of −1.19 ± 10.13 mm, while both rotational stitching and one-shot imaging showed greater systematic underestimation, with mean errors of −18.95 ± 13.77 mm and −15.32 ± 12.38 mm, respectively. These negative biases (approximately 1.9 cm and 1.5 cm) are clinically meaningful because, if unrecognized, they can alter mechanical axis estimation and alignment planning in procedures such as high tibial osteotomy (HTO). Post hoc analysis confirmed the superior accuracy of slot-scanning compared to the other two methods, while no significant difference was found between rotational stitching and one-shot imaging. These findings indicate that system choice substantially impacts measurement accuracy, supporting preferential use of slot-scanning when precise quantitative assessment is required.
(This article belongs to the Special Issue Advanced Engineering Technologies in Orthopaedic Research)
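The H(2) statistic above is a Kruskal–Wallis test over the three methods. The sketch below reproduces that analysis shape on toy errors drawn from the reported means and SDs; the abstract only says "appropriate nonparametric tests", so the Bonferroni-corrected Mann–Whitney post hoc step is an assumed choice, not necessarily the authors'.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Toy per-measurement errors (mm) drawn from the reported means and SDs.
stitching = rng.normal(-18.95, 13.77, 40)
one_shot = rng.normal(-15.32, 12.38, 40)
slot_scan = rng.normal(-1.19, 10.13, 40)

h, p = stats.kruskal(stitching, one_shot, slot_scan)
print(f"H(2) = {h:.2f}, p = {p:.4g}")

# Post hoc pairwise comparisons with a Bonferroni correction.
pairs = {"stitching vs slot": (stitching, slot_scan),
         "one-shot vs slot": (one_shot, slot_scan),
         "stitching vs one-shot": (stitching, one_shot)}
for name, (a, b) in pairs.items():
    _, pu = stats.mannwhitneyu(a, b)
    print(name, "adjusted p =", min(pu * 3, 1.0))
```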

19 pages, 595 KB  
Systematic Review
Automated Detection of Periodontal Bone Loss in Two-Dimensional (2D) Radiographs Using Artificial Intelligence: A Systematic Review
by Alin M. Iacob, Marta Castrillón Fernández, Laura Fernández Robledo, Enrique Barbeito Castro and Matías Ferrán Escobedo Martínez
Dent. J. 2025, 13(9), 413; https://doi.org/10.3390/dj13090413 - 8 Sep 2025
Cited by 1
Abstract
Artificial intelligence is an emerging tool that is being used in multiple fields, including dentistry. An example of this is the diagnosis of periodontal bone loss by analyzing two-dimensional (2D) radiographs (periapical, bitewing, and panoramic). Objectives: The objectives of this systematic review are to bring together the existing evidence and evaluate the effectiveness of the different artificial intelligence architectures that have been used in recent studies. Materials and Methods: This work was carried out following the PRISMA criteria and was registered in PROSPERO (ID: CRD42025640049). We searched six different databases, and the results were filtered according to previously established inclusion and exclusion criteria. Data were extracted independently by three review authors, and the risk of bias of the studies was analyzed using the QUADAS-2 tool, with Cohen’s kappa index (κ) calculated to measure agreement between assessors. Results: We included 20 diagnostic accuracy studies according to the inclusion and exclusion criteria, published between 2019 and 2024. All included studies described the detection of periodontal bone loss on radiographs. Limitations: One of the main limitations identified was heterogeneity in the indices used to assess the accuracy of models, which made it difficult to compare results between studies. In addition, many works used different imaging protocols and X-ray equipment, introducing variability into the data and limiting reproducibility. Conclusions: Artificial intelligence is a promising technique for the automated detection of periodontal bone loss, allowing accurate measurement of bone loss, identification of lesions such as apical periodontitis, and staging of periodontitis, in addition to reducing diagnostic errors associated with fatigue or inexperience. However, improvements are still required to optimize its accuracy and clinical applicability.
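Inter-assessor agreement via Cohen's kappa, as used in this review for the QUADAS-2 judgements, is a one-liner with scikit-learn; the labels below are invented placeholders, not the review's data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical QUADAS-2 risk-of-bias judgements from two assessors.
rater_a = ["low", "high", "low", "unclear", "low", "high"]
rater_b = ["low", "high", "unclear", "unclear", "low", "low"]
print(cohen_kappa_score(rater_a, rater_b))  # 1.0 would mean perfect agreement
```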

16 pages, 3477 KB  
Article
Classification Performance of Deep Learning Models for the Assessment of Vertical Dimension on Lateral Cephalometric Radiographs
by Mehmet Birol Özel, Sultan Büşra Ay Kartbak and Muhammet Çakmak
Diagnostics 2025, 15(17), 2240; https://doi.org/10.3390/diagnostics15172240 - 3 Sep 2025
Abstract
Background/Objectives: Vertical growth pattern significantly influences facial aesthetics and treatment choices. Lateral cephalograms are routinely used for the evaluation of vertical jaw relationships in orthodontic diagnosis. The aim of this study was to evaluate the performance of deep learning algorithms in classifying cephalometric radiographs according to vertical skeletal growth patterns without the need for anatomical landmark identification. Methods: This study was carried out on lateral cephalometric radiographs of 1050 patients. Cephalometric radiographs were divided into 3 subgroups based on the FMA, SN-GoGn, and Cant of Occlusal Plane angles. Six deep learning models (ResNet101, DenseNet201, EfficientNet B0, EfficientNet V2 B0, ConvNetBase, and a hybrid model) were employed for the classification of the dataset. The performances of the well-known deep learning models and the hybrid model were compared in terms of accuracy, precision, F1-score, mean absolute error, and Cohen’s kappa, with Grad-CAM used to inspect model attention. Results: The highest accuracy rates were achieved by the hybrid model, with 86.67% for the FMA groups, 87.29% for the SN-GoGn groups, and 82.71% for the Cant of Occlusal Plane groups. The lowest accuracy rates were achieved by ConvNetBase, with 79.58% for the FMA groups, 65% for the SN-GoGn groups, and 70.21% for the Cant of Occlusal Plane groups. Conclusions: The six deep learning algorithms employed demonstrated classification success rates ranging from 65% to 87.29%. The highest classification accuracy was observed for the FMA angle, while the lowest accuracy was recorded for the Cant of Occlusal Plane angle. The proposed DL algorithms showed potential for direct skeletal orthodontic diagnosis without the need for cephalometric landmark detection steps.
(This article belongs to the Special Issue Artificial Intelligence for Health and Medicine)
