Proceeding Paper

Comparative Evaluation of Images of Alveolar Bone Loss Using Panoramic Images and Artificial Intelligence †

1 Department of Dental Research Cell, Dr. D. Y. Patil Dental College and Hospital, Dr. D. Y. Patil Vidyapeeth, Pune 411018, Maharashtra, India
2 Division of Community Health Promotion, Florida Department of Health, Tallahassee, FL 32399, USA
3 Texas State Dental Association, Austin, TX 78704, USA
4 Virginia State Dental Association, Henrico, VA 23233, USA
5 Department of Public Health Dentistry, Kalinga Institute of Dental Sciences (KIDS), Kalinga Institute of Industrial Technology (KIIT) Deemed to be University, Bhubaneswar 751024, Odisha, India
6 Department of Dentistry, Faculty of Dental Sciences, University of Aldent, 1007 Tirana, Albania
* Author to whom correspondence should be addressed.
Presented at the 5th International Electronic Conference on Applied Sciences, 4–6 December 2024; https://sciforum.net/event/ASEC2024.
Eng. Proc. 2025, 87(1), 80; https://doi.org/10.3390/engproc2025087080
Published: 19 June 2025
(This article belongs to the Proceedings of The 5th International Electronic Conference on Applied Sciences)

Abstract

This study aimed to demonstrate the efficiency of a Convolutional Neural Network (CNN) algorithm in detecting alveolar bone loss on panoramic radiographs. The evaluation was performed on 1874 images retrieved from an institution, of which the training set included 953 images showing bone loss and 921 normal cases. A confusion matrix was used for statistical analysis. The CNN method correctly identified 92 out of 100 bone loss cases and 89 out of 100 healthy cases. The model showed a sensitivity of 0.8327, a specificity of 0.8683, a precision of 0.8918, an accuracy of 0.8927, and an F1 score of 0.8615 in detecting bone loss. This study concluded that a faster CNN model may be used as an adjunct technique for diagnosing periodontal disease and alveolar bone loss on dental panoramic radiographs, thereby minimizing diagnostic effort and saving assessment time. However, precise detection of periodontal cases by fully automated AI models using panoramic radiographs is still on the horizon, and clinical periodontal evaluation remains necessary for a definitive diagnosis. The suitability of this approach is supported by the sensitivity, specificity, accuracy, and F-measure, which showed satisfactory classification performance. From a population and periodontal disease burden standpoint, the use of AI in diagnosing periodontal diseases may serve as an excellent surveillance method for classifying alveolar bone loss. Post-treatment monitoring of periodontal patients is a further broad area that an AI-based diagnostic modality would need to cover. With AI as the future of dentistry, performance-based clinical usage of CNN models demands confirmed practical application by dentists.

1. Introduction

Periodontal diseases, in both acute and chronic forms, are the sixth most common inflammatory condition of the oral cavity [1]. Panoramic radiographs are among the most common two-dimensional radiographic techniques used for assessing periodontal structures and measuring periodontal bone loss (PBL) [2]. Radiographic examination is a fundamental modality for evaluating morphological and pathological alterations of the teeth, periodontium, and associated alveolar bone, aiding in the diagnosis, treatment planning, and prognosis of periodontal conditions [3,4]. Periodontal treatment prognosis and pathology assessment are based on empirical evidence from these conventional radiographic techniques and their diagnostic attributes. However, the predominant radiographic techniques employed for diagnosing periodontal diseases have reported inherent limitations, such as interpretational subjectivity, image overlap, distortion, and decreased sensitivity in identifying marginal bone alterations [5]. Even with the advantages of 3D scans such as Cone Beam Computed Tomography (CBCT) in providing more detailed insight into furcation areas, alveolar bone loss, and intraosseous defects, the equipment, software variations, the need to adjust radiation exposure, image artifacts, and beam scatter remain potential limitations [6].
Artificial Intelligence (AI) has transformed dentistry through technological advancements. Research has demonstrated that AI can reduce subjectivity in periodontal disease diagnosis and treatment planning [7]. AI models have identified alveolar bone loss with a high accuracy of up to 93% [8]. While AI offers potential improvements in identifying and classifying periodontal diseases, the limited evidence and uneven performance of AI algorithms indicate caution in employing AI models for PBL diagnosis [7]. Presently, numerous AI software programs can assess bone loss, thereby reducing the diagnostic workload for clinicians. Revilla-León et al. [10] evaluated eleven AI models for detecting periodontal bone loss in radiographic images, with only five concentrating on panoramic radiography. Despite this narrow emphasis, the results show that AI models could be effective diagnostic tools for periodontal disease [10]. A Convolutional Neural Network (CNN) is a subset of deep learning (DL) neural networks used for image detection and processing. A large dataset of images is required as the starting point of the training process to produce a deep learning model with good performance [8]. Since 2010, there have been notable breakthroughs in the computer vision field with CNNs, which are state-of-the-art artificial neural networks and deep learning methods [11]. The deep CNN algorithm can autonomously identify diverse features of an image such as spots, corners, edges, patterns, structures, and shapes [12]. Given that medical data are increasingly stored in digital formats and are subject to quantitative and qualitative growth, deep CNNs in conjunction with computer-aided detection (CAD) systems have intriguing potential in the medical domain. This rapidly emerging research area has produced remarkable outcomes in radiological and pathological studies, particularly for diagnostic and predictive accuracy [13,14].
Visual Geometry Group (VGG)-16, VGG-19, and the U-net architecture are some of the architectures reported in the literature for image evaluation with AI models in medical and dental imaging [9,15,16]. Alotaibi et al. [15] demonstrated that the VGG-16 deep Convolutional Neural Network (CNN) effectively detects alveolar bone loss in periapical radiographs. VGG-16, a transfer learning model, has been successfully used for image classification [17] and is among the most efficient models for image categorization [11,17,18]. Given the limitations of staging and the mixed results from CNN studies using panoramic radiographs, this study explored the CNN algorithm for the evaluation of bone loss and normal periodontium on panoramic images. Here, we aimed to demonstrate that the CNN algorithm achieves high specificity and accuracy in detecting alveolar bone loss on panoramic radiographs.

2. Materials and Methods

2.1. Patient Selection and Imaging

This study used a dataset of panoramic radiographs collected from an institution and followed the guidelines established in the Declaration of Helsinki [19]. The dataset excluded panoramic radiographs of patients with a significant number of missing teeth (fewer than 20 teeth remaining), patients under the age of 18, and patients with extensively damaged teeth. Images with artifacts or significant distortion were also omitted. In addition, all radiographs utilized in this investigation were captured using the same instrument, and only a single radiograph per patient was included. The final compilation comprised 1874 panoramic images, 953 demonstrating bone loss and 921 indicating periodontal health, regardless of gender. An oral and maxillofacial radiologist and a periodontist examined the scans for bone loss, determining the presence or absence of resorption at the bone crest while accounting for the distance between the cemento-enamel junction and the alveolar bone crest. The bone loss group included radiographs with bone irregularities along with horizontal or vertical bone resorption. Periodontally healthy radiographs were defined as having intact bone crests and full coverage of the root surfaces by alveolar bone, corresponding to the typical anatomical structure.

2.2. Evaluation of Panoramic Radiography Images

Before training, all images in the dataset were scaled to 1472 × 718 pixels. The open-source Python (version 3.12.8) programming language was used, together with the OpenCV, NumPy, Pandas, and Matplotlib libraries, to randomize the image sequence. The dataset was divided into training, validation, and testing sets. The training set included 1874 images, with 953 showing bone loss and 921 showing optimal periodontal health. The validation and testing set included 200 images: 100 revealing bone loss and 100 demonstrating good periodontal health.
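The splitting code itself is not published; the following is a minimal sketch of one way such a split could be performed, using hypothetical image identifiers in place of the actual radiograph files:

```python
import random

def split_dataset(image_ids, val_n=100, test_n=100, seed=42):
    """Shuffle image identifiers, then carve out fixed-size validation
    and test subsets; the remainder forms the training set."""
    rng = random.Random(seed)
    ids = list(image_ids)
    rng.shuffle(ids)
    return (ids[val_n + test_n:],          # training set
            ids[:val_n],                   # validation set
            ids[val_n:val_n + test_n])     # testing set

# Hypothetical identifiers standing in for the 1874 panoramic images.
images = [f"bone_loss_{i}" for i in range(953)] + \
         [f"healthy_{i}" for i in range(921)]
train_set, val_set, test_set = split_dataset(images)
print(len(train_set), len(val_set), len(test_set))  # 1674 100 100
```

A fixed seed keeps the split reproducible; a stratified variant (exactly 100 per class in each held-out subset, as described above) would shuffle and slice each class list separately before combining them.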
The preprocessing phase utilized a pre-trained Visual Geometry Group-16 (VGG-16) Convolutional Neural Network (CNN) model, and training of the datasets was performed using transfer learning techniques. The training parameters included a batch size of 32, stochastic gradient descent as the optimizer, a learning rate of 1 × 10−4, between 10 and 30 epochs, and a validation split of 0.2. Augmentation included rotation, zoom, horizontal flip, and shift, which mitigated overfitting. The VGG network achieved excellent performance in the 2014 ImageNet Large Scale Visual Recognition Challenge, where it was trained on over 1.28 million images representing 1000 different object categories. A CNN with a depth of 22 layers can generate features of varying dimensions by using convolutional filters of different sizes within a single layer. The training and validation datasets were used to optimize and select the best weight values for the CNN. The study employed the VGG-16 architecture with the TensorFlow library in Python for all CNNs. The CNNs were trained over a span of 20,000 steps (Figure 1).
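The augmentation pipeline is not shown in the paper and was presumably implemented with standard TensorFlow/Keras utilities; as an illustration only, the horizontal-flip and shift transforms can be sketched in NumPy as follows (rotation and zoom are omitted for brevity, and the shift range is an assumed value):

```python
import numpy as np

def augment(image, rng):
    """Apply flip/shift-style augmentations to one image (H x W array).
    Each call randomly flips the image horizontally and applies a
    random horizontal shift, preserving the image shape."""
    out = image
    if rng.random() < 0.5:            # random horizontal flip
        out = np.fliplr(out)
    shift = int(rng.integers(-10, 11))  # random horizontal shift in pixels
    out = np.roll(out, shift, axis=1)
    return out

rng = np.random.default_rng(0)
img = np.arange(12).reshape(3, 4)     # tiny stand-in for a radiograph
aug = augment(img, rng)
print(aug.shape)                      # shape is preserved: (3, 4)
```

Because both transforms only rearrange pixels, the augmented image keeps the same dimensions and pixel values as the input, which is what allows the model to see label-preserving variants of each radiograph during training.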

2.3. Statistical Analyses

The Statistical Package for the Social Sciences (SPSS) for Windows, version 28.0 (IBM Corp., Armonk, NY, USA), was used to enter and analyze the data. Statistical significance was set at p < 0.05, with confidence intervals at 95%. For the statistical analysis, a confusion matrix, a powerful tool for comparing predicted and actual outcomes, was used.

3. Results

The predicted positive and negative cases, along with the true positive and negative cases as ascertained by the CNN system, are displayed in Table 1. The CNN method correctly identified 92 out of 100 bone loss cases and misclassified 8. Furthermore, out of 100 periodontally healthy cases, the method correctly recognized 89, whereas 11 were misclassified (Table 1).
The CNN system’s performance as determined by the confusion matrix is displayed in Table 2. The model shows a sensitivity of 0.8327, a specificity of 0.8683, a precision of 0.8918, an accuracy of 0.8927, and an F1 score of 0.8615 (Table 2).
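The confusion-matrix metrics follow from their standard definitions. The sketch below applies them to the counts in Table 1 (92/8 for bone loss, 89/11 for healthy cases); note that the values computed directly from these counts differ somewhat from those reported in Table 2, which presumably reflect the full validation procedure:

```python
def confusion_metrics(tp, fn, tn, fp):
    """Standard performance metrics derived from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)                  # true-positive rate
    specificity = tn / (tn + fp)                  # true-negative rate
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, accuracy, f1

# Counts from Table 1: 92 of 100 bone-loss and 89 of 100 healthy cases correct.
sens, spec, prec, acc, f1 = confusion_metrics(tp=92, fn=8, tn=89, fp=11)
print(round(sens, 3), round(spec, 3), round(acc, 3))  # 0.92 0.89 0.905
```

Sensitivity here measures the fraction of bone-loss cases the model catches, while specificity measures the fraction of healthy cases it correctly leaves alone; the F1 score balances precision against sensitivity.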

4. Discussion

Imaging is one of the medical specialties where the deep CNN algorithm has demonstrated encouraging outcomes [2,12,20,21,22,23,24,25]. The current investigation evaluated the diagnostic efficacy of the CNN-based VGG-16 model in detecting periodontal bone loss and distinguishing periodontal cases from normal healthy periodontium on panoramic radiographs. Our study found that the deep CNN identified alveolar bone loss with an 89% overall diagnostic accuracy, 83% sensitivity, and 86% specificity. These results align with the findings of Lee et al. [12], who reported an accuracy of 76.7% for periodontal bone loss in the molar region and 81% in the premolar region using a deep CNN model based on VGG-19 applied to periapical radiographs. However, Lee et al. [12] utilized periapical radiographs and VGG-19, whereas our study used panoramic radiographs and a VGG-16 model for bone loss detection. Our results were also higher than those of Alotaibi et al. [15], who applied a similar methodology with the VGG-16 model and reported 73% and 56% accuracy for normal versus disease status on the validation and testing datasets, respectively; their study used periapical X-rays, whereas ours used panoramic radiographs. In addition, our study provided a comparative evaluation of panoramic radiographs with high diagnostic accuracy in separating periodontal cases from healthy cases. Bayrakdar et al. [26] employed the GoogLeNet Inception v3 architecture, a popular CNN model comparable to VGG-16, on more than 2000 panoramic radiographs and achieved 91% accuracy in detecting periodontitis, comparable to the 89% accuracy of our study. The Inception v3 CNN model deploys a unique inception module with different kernel sizes, whereas our study used the VGG-16 CNN, which relies on 3 × 3 convolutional filters, to classify periodontal and healthy cases [9,26].
Our study used VGG-16 to classify and compare periodontal cases and normal healthy periodontium, in contrast to the study by Chang et al. [21], in which a novel hybrid framework combining a deep learning architecture with a computer-aided diagnosis (CAD) approach demonstrated automatic diagnosis and staging of periodontal bone loss. Another investigation, by Lee et al. [25], revealed a diagnostic accuracy of 85%, with no substantial variance observed in the percentage measurements of radiographic bone level (RBL) between the deep learning (DL) algorithm and radiologist examiners. That study reported sensitivity, specificity, and overall accuracy surpassing 80% across multiple disease stages, indicating that the proposed DL model offers reliable RBL assessments and image-based periodontal diagnosis using periapical imaging [25]. Compared with other studies in the literature, we achieved high diagnostic accuracy for periodontal disease cases. We therefore estimate that the deep CNN algorithm was more accurate in distinguishing bone loss from periodontal health. The most important component in determining periodontal disease state and the extent of alveolar bone loss is image processing through the sequence of layers in a CNN for accurate diagnosis. The reliability of our results depends strongly on the image resolution of the panoramic radiographs. We therefore propose that high-resolution panoramic images are needed to improve diagnostic accuracy for alveolar bone loss and to assist dentists clinically in executing treatment plans.
According to Jabbar et al. [9], CNNs have been the most effective tool for mapping images of musculoskeletal structures. The CNN is a form of neural network used mostly for image identification and interpretation [8]. In recent years, considerable attention has been paid to employing Artificial Intelligence (AI) approaches for interpreting medical imaging modalities such as X-rays, clinical photographs, positron emission tomography (PET), magnetic resonance imaging (MRI), and computed tomography (CT). Among these strategies, deep CNNs have emerged as an appealing approach, demonstrating favorable outcomes [22,24,27]. The VGG-16 model has demonstrated considerable efficacy in handling the detection problem, validating its use in the current investigation.
Machado et al. [28] compared inter-observer reliability between periodontists using the radiograph-based periodontal bone loss method on panoramic radiography and found an accuracy of 85.7% for high-resolution orthopantomogram (OPG) images and 81.5% for low-resolution OPG images for cases classified according to the European Federation of Periodontology. They reported an accuracy of 77.1% for high-resolution images and 75.8% for low-resolution images for cases classified according to the American Academy of Periodontology [28]. However, Krois et al. [29] reported lower sensitivity for CNN models when their diagnostic performance was compared with that of six individual dentists.
The present study corroborated the traditional accuracy of panoramic radiograph reading by radiologists with the most successfully developed CNNs for detecting alveolar bone loss and periodontal status. A systematic review [7] reported mixed results regarding the accuracy of DL models for diagnosing periodontal diseases and alveolar bone status using various radiographic methods assessed by radiologists and CNN-based deep learning models. Numerous additional models have been developed to examine DL and RBL on panoramic images and to classify periodontitis stages from these images [2,12,21,23,30]. Although these procedures yield high accuracy and reliability in estimating the bone level on panoramic images, they are often not advisable because of distorted images, overlapping structures, difficulties in multi-staging, and poor resolution [2,12,14,20,21].
The study limitations included the number of panoramic images that could be validated and finalized for this study. The low resolution of the panoramic radiographs further limited the scans that could be selected for CNN analysis. Another limitation was the difficulty of making complete diagnoses from two-dimensional (2-D) panoramic images due to overlapping structures and image quality. Constructing an advanced deep learning algorithm with enhanced performance requires careful algorithm design and the use of a weighted dataset for training. With the use of CBCT scans, the limitations of 2-D panoramic images could potentially be eliminated. Finally, a CNN-based model using panoramic radiographs alone is very unlikely to provide sufficient evidence for an accurate diagnosis of periodontal conditions such as periodontitis; it requires adequate training in the use of CNN models together with the patient's case history and clinical examination by a periodontist to confirm probing depth, mobility, bleeding, percussion, and radiographic findings.

5. Conclusions

This study concludes that a faster VGG-16 model may be used as an adjunct technique for diagnosing alveolar bone loss on dental panoramic radiographic images, thereby minimizing diagnostic effort and saving assessment time. With high diagnostic accuracy, sensitivity, specificity, and F-measure, this study was able to evaluate and compare alveolar bone loss and healthy periodontium using panoramic radiographs. However, precise detection of periodontal cases by fully automated AI models using panoramic radiographs still requires further evaluation studies to establish definitive accuracy. From a population and periodontal disease burden standpoint, the use of AI in diagnosing periodontal diseases may serve as an excellent adjunct method for classifying alveolar bone loss and algorithm-detected changes. Clinical studies should be planned for clinical and radiographic identification of periodontal cases with the support of CNN models. It will also be of interest to follow patients' prognosis after periodontal treatment to show improvement in bone level detection by AI-based diagnostic modalities.

Author Contributions

A.M. (Ankita Mathur), V.M., and A.M. (Aida Meto) participated in study design, data collection, and statistical analysis. S.P., P.K.G.K., and V.T.O. participated in writing the manuscript. A.M. (Aida Meto), K.S.D., V.M., and S.P. participated in the study design and coordination and helped to draft the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the local ethics committee of Aldent University (protocol code 2100/2024 and 15 January 2024).

Informed Consent Statement

Not applicable.

Data Availability Statement

All data is available in the manuscript.

Acknowledgments

The authors would like to acknowledge Aldent University for providing samples to facilitate this project.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tonetti, M.S.; Jepsen, S.; Jin, L.; Otomo-Corgel, J. Impact of the Global Burden of Periodontal Diseases on Health, Nutrition and Wellbeing of Mankind: A Call for Global Action. J. Clin. Periodontol. 2017, 44, 456–462. [Google Scholar] [CrossRef] [PubMed]
  2. Kharat, P.B.; Dash, K.S.; Rajpurohit, L.; Tripathy, S.; Mehta, V. Revolutionizing Healthcare through Chat GPT: AI Is Accelerating Medical Diagnosis. Oral Oncol. Rep. 2024, 9, 100222. [Google Scholar] [CrossRef]
  3. Rosa, A.; Ranieri, N.; Miranda, M.; Mehta, V.; Fiorillo, L.; Cervino, G. Mini Crestal Sinus Lift with Bone Grafting and Simultaneous Insertion of Implants in Severe Maxillary Conditions as an Alternative to Lateral Sinus Lift: Multicase Study Report of Different Techniques. J. Craniofac. Surg. 2024, 35, 203–207. [Google Scholar] [CrossRef]
  4. Raichur, P.S.; Setty, S.B.; Thakur, S.L.; Naikmasur, V.G. Comparison of Radiovisiography and Digital Volume Tomography to Direct Surgical Measurements in the Detection of Infrabony Defects. J. Clin. Exp. Dent. 2012, 4, e43–e47. [Google Scholar] [CrossRef] [PubMed]
  5. Chakrapani, S.; Sirisha, K.; Srilalitha, A.; Srinivas, M. Choice of Diagnostic and Therapeutic Imaging in Periodontics and Implantology. J. Indian Soc. Periodontol. 2013, 17, 711–718. [Google Scholar] [CrossRef]
  6. Mohan, R.; Singh, A.; Gundappa, M. Three-Dimensional Imaging in Periodontal Diagnosis—Utilization of Cone Beam Computed Tomography. J. Indian Soc. Periodontol. 2011, 15, 11–17. [Google Scholar] [CrossRef]
  7. Patil, S.; Joda, T.; Soffe, B.; Awan, K.H.; Fageeh, H.N.; Tovani-Palone, M.R.; Licari, F.W. Efficacy of Artificial Intelligence in the Detection of Periodontal Bone Loss and Classification of Periodontal Diseases: A Systematic Review. J. Am. Dent. Assoc. 2023, 154, 795–804.e1. [Google Scholar] [CrossRef] [PubMed]
  8. Turosz, N.; Chęcińska, K.; Chęciński, M.; Brzozowska, A.; Nowak, Z.; Sikora, M. Applications of Artificial Intelligence in the Analysis of Dental Panoramic Radiographs: An Overview of Systematic Reviews. Dentomaxillofac. Radiol. 2023, 52, 20230284. [Google Scholar] [CrossRef]
  9. Jabbar, S.I.; Day, C.R.; Heinz, N.; Chadwick, E.K. Using Convolutional Neural Network for Edge Detection in Musculoskeletal Ultrasound Images. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 4619–4626. [Google Scholar]
  10. Revilla-León, M.; Gómez-Polo, M.; Barmak, A.B.; Inam, W.; Kan, J.Y.K.; Kois, J.C.; Akal, O. Artificial Intelligence Models for Diagnosing Gingivitis and Periodontal Disease: A Systematic Review. J. Prosthet. Dent. 2023, 130, 816–824. [Google Scholar] [CrossRef]
  11. Sklan, J.E.S.; Plassard, A.J.; Fabbri, D.; Landman, B.A. Toward Content Based Image Retrieval with Deep Convolutional Neural Networks. Proc. SPIE Int. Soc. Opt. Eng. 2015, 9417, 94172C. [Google Scholar] [CrossRef]
  12. Lee, J.-H.; Kim, D.; Jeong, S.-N.; Choi, S.-H. Diagnosis and Prediction of Periodontally Compromised Teeth Using a Deep Learning-Based Convolutional Neural Network Algorithm. J. Periodontal Implant. Sci. 2018, 48, 114–123. [Google Scholar] [CrossRef] [PubMed]
  13. Tripathy, S.; Mathur, A.; Mehta, V. A View of Neural Networks in Artificial Intelligence in Oral Pathology. Oral Surg. 2023, 17, 179–180. [Google Scholar] [CrossRef]
  14. Rajpurkar, P.; Irvin, J.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.; Shpanskaya, K.; et al. CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning. arXiv 2017, arXiv:1711.05225. [Google Scholar]
  15. Alotaibi, G.; Awawdeh, M.; Farook, F.F.; Aljohani, M.; Aldhafiri, R.M.; Aldhoayan, M. Artificial Intelligence (AI) Diagnostic Tools: Utilizing a Convolutional Neural Network (CNN) to Assess Periodontal Bone Level Radiographically—A Retrospective Study. BMC Oral Health 2022, 22, 399. [Google Scholar] [CrossRef]
  16. Jiang, L.; Chen, D.; Cao, Z.; Wu, F.; Zhu, H.; Zhu, F. A Two-Stage Deep Learning Architecture for Radiographic Staging of Periodontal Bone Loss. BMC Oral Health 2022, 22, 106. [Google Scholar] [CrossRef]
  17. Tammina, S. Transfer Learning Using VGG-16 with Deep Convolutional Neural Network for Classifying Images. Int. J. Sci. Res. Publ. IJSRP 2019, 9, 143–150. [Google Scholar] [CrossRef]
  18. Yauney, G.; Rana, A.; Wong, L.C.; Javia, P.; Muftu, A.; Shah, P. Automated Process Incorporating Machine Learning Segmentation and Correlation of Oral Diseases with Systemic Health. In Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, 23–27 July 2019. [Google Scholar] [CrossRef]
  19. World Medical Association. World Medical Association Declaration of Helsinki: Ethical Principles for Medical Research Involving Human Subjects. JAMA 2013, 310, 2191–2194. [Google Scholar] [CrossRef]
  20. Cha, J.-Y.; Yoon, H.-I.; Yeo, I.-S.; Huh, K.-H.; Han, J.-S. Peri-Implant Bone Loss Measurement Using a Region-Based Convolutional Neural Network on Dental Periapical Radiographs. J. Clin. Med. 2021, 10, 1009. [Google Scholar] [CrossRef]
  21. Chang, H.-J.; Lee, S.-J.; Yong, T.-H.; Shin, N.-Y.; Jang, B.-G.; Kim, J.-E.; Huh, K.-H.; Lee, S.-S.; Heo, M.-S.; Choi, S.-C.; et al. Deep Learning Hybrid Method to Automatically Diagnose Periodontal Bone Loss and Stage Periodontitis. Sci. Rep. 2020, 10, 7531. [Google Scholar] [CrossRef]
  22. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros, J.; et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs. JAMA 2016, 316, 2402–2410. [Google Scholar] [CrossRef]
  23. Kim, J.; Lee, H.-S.; Song, I.-S.; Jung, K.-H. DeNTNet: Deep Neural Transfer Network for the Detection of Periodontal Bone Loss Using Panoramic Dental Radiographs. Sci. Rep. 2019, 9, 17615. [Google Scholar] [CrossRef] [PubMed]
  24. Lakhani, P.; Sundaram, B. Deep Learning at Chest Radiography: Automated Classification of Pulmonary Tuberculosis by Using Convolutional Neural Networks. Radiology 2017, 284, 574–582. [Google Scholar] [CrossRef] [PubMed]
  25. Lee, C.-T.; Kabir, T.; Nelson, J.; Sheng, S.; Meng, H.-W.; Van Dyke, T.E.; Walji, M.F.; Jiang, X.; Shams, S. Use of the Deep Learning Approach to Measure Alveolar Bone Level. J. Clin. Periodontol. 2022, 49, 260–269. [Google Scholar] [CrossRef] [PubMed]
  26. Kurt, S.; Çelik, Ö.; Bayrakdar, İ.Ş.; Orhan, K.; Bilgir, E.; Odabas, A.; Aslan, A.F. Success of Artificial Intelligence System in Determining Alveolar Bone Loss from Dental Panoramic Radiography Images. Cumhur. Dent. J. 2020, 23, 318–324. [Google Scholar] [CrossRef]
  27. Lehman, C.D.; Wellman, R.D.; Buist, D.S.M.; Kerlikowske, K.; Tosteson, A.N.A.; Miglioretti, D.L.; Breast Cancer Surveillance Consortium. Diagnostic Accuracy of Digital Screening Mammography with and Without Computer-Aided Detection. JAMA Intern. Med. 2015, 175, 1828–1837. [Google Scholar] [CrossRef]
  28. Machado, V.; Proença, L.; Morgado, M.; Mendes, J.J.; Botelho, J. Accuracy of Panoramic Radiograph for Diagnosing Periodontitis Comparing to Clinical Examination. J. Clin. Med. 2020, 9, 2313. [Google Scholar] [CrossRef]
  29. Krois, J.; Ekert, T.; Meinhold, L.; Golla, T.; Kharbot, B.; Wittemeier, A.; Dörfer, C.; Schwendicke, F. Deep Learning for the Radiographic Detection of Periodontal Bone Loss. Sci. Rep. 2019, 9, 8495. [Google Scholar] [CrossRef]
  30. Li, H.; Zhou, J.; Zhou, Y.; Chen, Q.; She, Y.; Gao, F.; Xu, Y.; Chen, J.; Gao, X. An Interpretable Computer-Aided Diagnosis Method for Periodontitis from Panoramic Radiographs. Front. Physiol. 2021, 12, 655556. [Google Scholar] [CrossRef]
Figure 1. Tooth localization model showing processing of images in training set.
Table 1. Number of cases determined by the AI model as true positive and true negative.
                     True Positive   True Negative
Predicted Positive        92              11
Predicted Negative         8              89
Table 2. AI performance calculated using the confusion matrix.
Parameter     Value
Sensitivity   0.8327
Specificity   0.8683
Precision     0.8918
Accuracy      0.8927
F1 score      0.8615
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
