Article

Deep Learning-Based Recognition of Periodontitis and Dental Caries in Dental X-ray Images

1
Department of Electrical Engineering, National Dong Hwa University, Hualien 97401, Taiwan
2
Department of Electrical Engineering, National Taiwan Normal University, Taipei 10610, Taiwan
3
Department of Electrical Engineering, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
*
Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Bioengineering 2023, 10(8), 911; https://doi.org/10.3390/bioengineering10080911
Submission received: 30 June 2023 / Revised: 21 July 2023 / Accepted: 22 July 2023 / Published: 1 August 2023
(This article belongs to the Special Issue Artificial Intelligence-Based Diagnostics and Biomedical Analytics)

Abstract
Dental X-ray images are important and useful for dentists to diagnose dental diseases. Utilizing deep learning on dental X-ray images can help dentists quickly and accurately identify common dental diseases such as periodontitis and dental caries. This paper applies image processing and deep learning technologies to dental X-ray images to propose a method for the simultaneous recognition of periodontitis and dental caries. The single-tooth X-ray image is detected by the YOLOv7 object detection technique and cropped from the periapical X-ray image. It is then processed with contrast-limited adaptive histogram equalization to enhance local contrast, and with bilateral filtering to eliminate noise while preserving edges. The deep learning architecture for classification comprises a pre-trained EfficientNet-B0 and fully connected layers that output two labels through the sigmoid activation function for the classification task. The average precision of tooth detection using YOLOv7 is 97.1%. For the recognition of periodontitis, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve is 98.67%, and the AUC of the precision–recall (PR) curve is 98.38%. For the recognition of dental caries, the AUC of the ROC curve is 98.31%, and the AUC of the PR curve is 97.55%. Unlike conventional deep learning-based methods that target a single disease such as periodontitis or dental caries, the proposed approach recognizes both diseases simultaneously. The method performs well in identifying periodontitis and dental caries, thus facilitating dental diagnosis.

1. Introduction

With the continuous improvement of artificial intelligence (AI) technology and big data availability, applications in medical imaging research are increasingly emphasized [1]. AI has shown great potential to assist in disease diagnosis and treatment planning in dentistry [2,3,4]. Deep learning models have demonstrated outstanding abilities in learning complex patterns from large image datasets, giving rise to numerous applications in the field of dentistry [2,5,6,7,8,9]. Deep learning of dental radiographs has emerged as an efficient and precise method for detecting dental diseases. By applying the convolutional neural network, an effective system can be established for the recognition of dental diseases.
Oral disease is a major global public health problem, especially common dental diseases such as periodontitis and dental caries. Periodontitis, a chronic inflammatory disease of the teeth and gums, is characterized by the destruction of surrounding tissues, including the periodontal ligament and the alveolar bone. Periodontitis is mainly caused by dental plaque, which triggers a series of inflammatory reactions that destroy periodontal tissues. Periodontitis may also increase the risk of cardiovascular disease or be related to other major systemic diseases [10]. Dental caries is a disease that damages tooth structure and is mainly caused by acid erosion, with the acid mostly produced by intraoral bacteria. Periodontitis and dental caries are two of the most prevalent oral diseases globally, with a profound impact on oral health, overall well-being, and quality of life [11]. Therefore, the prevention of periodontitis and dental caries is an important task for dentists, and early diagnosis has always been a crucial part of their treatment. The detection of periodontitis and dental caries mainly depends on clinical and radiographic examinations. Dentists typically evaluate various aspects of a patient's oral condition to provide a comprehensive diagnosis and treatment plan. Using AI-assisted technology to recognize periodontitis and dental caries concurrently aligns closely with how dentists diagnose patients in clinical practice; it can save dentists time, reduce their workload, and minimize the discomfort patients experience from multiple examinations.
The applications of AI to dental X-ray images can assist dentists in quickly and accurately identifying common dental diseases. The related work on the applications of AI for dental X-rays is discussed below.
A deep learning-based convolutional neural network (CNN) algorithm was developed in [12] for the predictions of periodontally compromised teeth (PCT) for premolars and molars individually. The study used 16 convolutional layers and 3 fully connected dense layers in the deep CNN model to classify the teeth into healthy teeth, moderate PCT, and severe PCT. In [13], a vector of the severity of alveolar bone loss from the teeth was used as the input feature of XGBoost to classify the four-class severity degree of periodontitis from a panoramic radiograph. In [14], periapical radiographs were used to calculate the radiographic bone loss (RBL) values and classify the severity of RBL into mild or severe, as well as classify the defect morphology. These two tasks were performed by a multi-task classification approach using the InceptionV3 model.
In [15], the modified linearly adaptive particle swarm optimization was combined with a backpropagation neural network to distinguish between normal and caries affected teeth. In [16], the prediction of dental caries of premolars and molars was based on a pre-trained GoogLeNet InceptionV3 CNN network for preprocessing and transfer learning. In [17], a system for predicting dental caries was developed using Laplacian filtering, window-based adaptive thresholding, morphology, statistical features, and a backpropagation neural network. In [18], Hu’s moment was used to train support vector machine and k-nearest neighbors for the classification of four levels of dental caries. In [19], both raw periapical images and the enhanced images were the inputs of an ensemble deep convolutional neural network model for dental caries detection. In [20], informative features were extracted from teeth on panoramic radiographs via deep learning networks, and each extracted feature set was used to train the classification model. The caries screening was determined by a majority voting method.
In [21], deep convolutional neural networks with region proposal techniques were used to detect decay, periapical periodontitis, and periodontitis on periapical radiographs. The three diseases were individually classified into mild, moderate, and severe levels. In [22], the periapical radiograph subregion was cropped to obtain a single-tooth image. Then, the crown region and the root region were cropped according to the identified cervical line. The detection of caries from the crown region and periapical periodontitis from the root region was based on a deep learning model constructed of two cascaded ResNet-18 backbones and two individual convolutional layers.
Most of the studies on AI-assisted technology for dental diseases are for the prediction of a single disease such as periodontitis or dental caries. This paper proposes a deep learning-based method to detect periodontitis and dental caries simultaneously. The image processing technologies are also incorporated to improve performance.

2. Materials and Methods

Figure 1 shows the flowchart of the training process for the proposed method to detect periodontitis and dental caries simultaneously. The single-tooth X-ray images are detected by the YOLOv7 [23] algorithm and cropped from periapical X-ray images. After performing resizing and augmentation, the single-tooth X-ray images are enhanced by contrast-limited adaptive histogram equalization (CLAHE) and bilateral filtering (BF). The enhanced images are further resized as the inputs for the deep-learning CNN, which is trained using transfer learning to determine whether the single-tooth X-ray image belongs to normal tooth, periodontitis, dental caries, or both diseases of periodontitis and dental caries.

2.1. Dataset

A total of 1525 periapical X-ray images were obtained from a dental clinic in Hualien, Taiwan. In periapical X-ray images, periodontitis can be detected by the presence of alveolar bone loss around the tooth, and dental caries can be recognized by the radiolucency of enamel and dentin in the tooth structure. In this study, a tooth was defined as normal when neither the characteristics of periodontitis nor those of dental caries were detected in the X-ray image. Both anterior and posterior teeth are included in our dataset, as are teeth with root canal therapy and dental restorations. The teeth on periapical X-ray images were annotated by a senior dentist with over 28 years of expertise. This study used two labels, periodontitis and dental caries, to classify a single-tooth X-ray image as normal, periodontitis, dental caries, or both diseases. Each label is 0 or 1, so the corresponding classification of the single-tooth X-ray image is normal (0,0), periodontitis-only (1,0), dental caries-only (0,1), or both diseases of periodontitis and dental caries (1,1).
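The two-label scheme above can be sketched as a small lookup; the helper name and class strings are illustrative, not from the paper:

```python
# Map the (periodontitis, caries) label pair to the four tooth classes
# described in the dataset section. Names are illustrative.
LABELS_TO_CLASS = {
    (0, 0): "normal",
    (1, 0): "periodontitis-only",
    (0, 1): "caries-only",
    (1, 1): "periodontitis and caries",
}

def class_name(periodontitis: int, caries: int) -> str:
    """Return the class name for a (periodontitis, caries) label pair."""
    return LABELS_TO_CLASS[(periodontitis, caries)]
```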
The single-tooth X-ray images were cropped from periapical X-ray images by the YOLOv7 object detection model. A total of 2850 single-tooth X-ray images were selected in the experiments and resized to 200 × 200 pixels. Data augmentation was used to increase the number of images. The augmentation for single-tooth images included horizontal flip, vertical flip, and the rotations of 90°, 180°, and 270°. The total dataset was divided into the training dataset (n = 8000), the validation dataset (n = 2000), and the testing dataset (n = 1000), as listed in Table 1. This study used 10-fold cross-validation by randomly dividing the dataset into ten subsets to evaluate the performance of the deep learning models.
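The augmentation set listed above (two flips plus three rotations) can be sketched with NumPy; the function name is ours, and the exact augmentation pipeline used in the paper may differ in implementation details:

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return the original image plus the five augmentations described in
    the paper: horizontal flip, vertical flip, and 90/180/270-degree
    rotations."""
    return [
        image,
        np.fliplr(image),      # horizontal flip
        np.flipud(image),      # vertical flip
        np.rot90(image, k=1),  # rotate 90 degrees
        np.rot90(image, k=2),  # rotate 180 degrees
        np.rot90(image, k=3),  # rotate 270 degrees
    ]
```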

2.2. Proposed Method

Table 2 presents a comprehensive overview of the hardware and software platforms employed in the experimental setup. The hardware platform consists of a 12th Gen Intel Core i5-12400 CPU, an NVIDIA GeForce RTX 3070 GPU, and 32 GB DDR4 DRAM with 3200 MHz. On the software side, the platform includes Python version 3.7.16, Tensorflow version 2.9.1, and PyTorch version 1.7.1. These specifications are employed to facilitate the implementation and evaluation of the proposed methods in our study.
This study used YOLOv7 to obtain single-tooth images from periapical X-ray images, avoiding the time-consuming task of manual cropping. YOLO (you only look once) is a classic one-stage, real-time object detection system known for being lightweight, having few dependencies, and running efficiently. Compared to two-stage object detection, the one-stage approach eliminates the need for candidate pre-screening, enabling end-to-end object detection that yields the final classification in a single pass. YOLO has demonstrated superior performance in object detection compared to other algorithms [24], and YOLOv7 has achieved remarkable advancements in both accuracy and processing speed [23]. These advancements make it well-suited for the precise and fast detection of single-tooth images in periapical X-ray images. To train the YOLOv7 model, a batch size of 8 and 50 epochs were selected as a trade-off between computational resources and training accuracy.
In Figure 2, an illustrative example of single-tooth image detection from a periapical X-ray image is depicted. It can be observed that YOLOv7 demonstrates precise localization by accurately bounding the position of each tooth. Figure 3 shows the PR curve derived from the predictions made by YOLOv7 on the testing dataset. Notably, the average precision (AP) achieved an outstanding performance of 97.1% for tooth detection, manifesting its excellent effectiveness.
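Once the detector returns bounding boxes, cropping single-tooth patches reduces to array slicing. A minimal sketch, assuming boxes have already been converted to (x1, y1, x2, y2) pixel coordinates (YOLO raw outputs are usually normalized center/width/height and must be converted first):

```python
import numpy as np

def crop_teeth(image: np.ndarray, boxes: list) -> list:
    """Crop single-tooth patches from a periapical image given detector
    boxes as (x1, y1, x2, y2) pixel coordinates. The box format is an
    assumption about post-processed detector output."""
    crops = []
    for x1, y1, x2, y2 in boxes:
        crops.append(image[y1:y2, x1:x2])  # rows are y, columns are x
    return crops
```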
In addition to utilizing YOLOv7 for single-tooth image detection in periapical X-ray images, image enhancement techniques have been incorporated to enhance the performance of deep learning algorithms. Contrast-limited adaptive histogram equalization (CLAHE) can enhance the local details of images [25,26]. The histogram of the local area is calculated to redistribute the image brightness. The bilateral filter (BF) [27] is a non-linear filter for smoothing images while preserving edge information because not only the geometric distance between pixels but also the difference of gray-level values between pixels is considered. CLAHE and BF have been used for the preprocessing of the segmentation of teeth in dental radiographs [28].
After resizing and augmenting the cropped single-tooth X-ray images, the images are processed using CLAHE, BF, and the combination of CLAHE followed by BF, as illustrated in Figure 4. CLAHE is mainly used to enhance the local contrast of the X-ray image; after the contrast is increased, some fine-grained noise may appear. The BF then blurs less relevant areas and reduces this noise while preserving the edges of the image. CLAHE makes the tooth outline sharper and helps reveal subtle details hidden by low contrast, while the BF further reduces noise, yielding a smoother appearance that preserves the overall structure and contour of the tooth. The two image processing techniques can facilitate the feature extraction of the CNN and improve the prediction performance of deep learning. The processed images are further resized to 100 × 100 pixels as the inputs of the CNN model.
A CNN model can be used to build a multi-label classifier for the prediction of periodontitis and dental caries. In this study, we used Xception [29], MobileNetV2 [30], and EfficientNet-B0 [31] to compare their performances and select the best one. The deep learning architecture for classification consists of the pre-trained CNN model and fully connected layers that output two labels through the sigmoid activation function for the multi-label classification task. Transfer learning is applied by initializing the models with pre-trained weights provided by Keras during the training process. Table 3 presents an overview of the hyperparameters employed in the CNN models; by adjusting these hyperparameters, better performance can be achieved.
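A minimal Keras sketch of this architecture, assuming a global-average-pooled EfficientNet-B0 backbone with a two-unit sigmoid head; the exact head layout, optimizer, and loss are our assumptions (the paper would pass weights="imagenet" for transfer learning):

```python
import tensorflow as tf

def build_classifier(weights=None):
    """EfficientNet-B0 backbone with a two-unit sigmoid output for the
    multi-label (periodontitis, caries) task. Inputs are the 100x100
    processed tooth images. Head design beyond the two sigmoid outputs
    is an assumption."""
    backbone = tf.keras.applications.EfficientNetB0(
        include_top=False, weights=weights,
        input_shape=(100, 100, 3), pooling="avg")
    outputs = tf.keras.layers.Dense(2, activation="sigmoid")(backbone.output)
    model = tf.keras.Model(backbone.input, outputs)
    # Binary cross-entropy treats each label independently, which is the
    # standard pairing with sigmoid outputs for multi-label classification.
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```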

2.3. Performance Metrics

To evaluate the performance of the proposed method, various metrics are used, including accuracy, sensitivity, specificity, positive predictive value (PPV, precision), negative predictive value (NPV), the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, and the AUC of precision–recall (PR) curve. Accuracy, sensitivity, specificity, PPV, and NPV are defined by true positive (TP), false negative (FN), true negative (TN), and false positive (FP), as represented by Equations (1)–(5).
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \tag{1}$$
$$\text{Sensitivity (Recall)} = \frac{TP}{TP + FN}, \tag{2}$$
$$\text{Specificity} = \frac{TN}{TN + FP}, \tag{3}$$
$$\text{PPV (Precision)} = \frac{TP}{TP + FP}, \tag{4}$$
$$\text{NPV} = \frac{TN}{TN + FN}. \tag{5}$$
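Equations (1)–(5) translate directly into a small helper; the function and key names are ours:

```python
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the five evaluation metrics of Equations (1)-(5) from the
    confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),  # recall
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # precision
        "npv": tn / (tn + fn),
    }
```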

3. Results

We compared the performances of three CNN models through a 10-fold cross-validation analysis. The investigated models included Xception, MobileNetV2, and EfficientNet-B0. Each model was trained using pre-trained weights provided by Keras for transfer learning. Table 4 tabulates the results of performance metrics averaged from 10-fold cross-validation for the various models. Notably, EfficientNet-B0 achieved the best result with the accuracy of 95.44%, the sensitivity of 93.28%, the specificity of 96.88%, the PPV of 95.24%, and the NPV of 95.59% for periodontitis, and the accuracy of 94.94%, the sensitivity of 94.15%, the specificity of 95.47%, the PPV of 93.30%, and the NPV of 96.08% for dental caries.
Table 5 lists the minimum, maximum, and mean accuracy rates of each model across 10-fold cross-validation. Notably, EfficientNet-B0 demonstrated the highest mean accuracy, achieving 95.44% for periodontitis and 94.94% for dental caries. These metrics indicate that EfficientNet-B0 outperforms the other models evaluated in this study. Thus, EfficientNet-B0 was chosen as the CNN model in our method.
Table 6 provides the average evaluation metrics for each fold using EfficientNet-B0, demonstrating the performance of the model across different subsets of the dataset. It can be observed that the predictive accuracy rates for periodontitis and dental caries are consistently above 92% for all the subsets in the 10-fold cross-validation. These findings indicate the robust performance and strong predictive capabilities of the EfficientNet-B0 model in accurately classifying periodontitis and dental caries.
Figure 5 and Figure 6 exhibit the ROC curves for periodontitis and dental caries, respectively, in each fold. The utilization of the EfficientNet-B0 model shows remarkable performance for periodontitis and dental caries, with the AUC values reaching 98.67% and 98.31%, respectively. Furthermore, Figure 7 and Figure 8 illustrate the PR curves for periodontitis and dental caries, respectively, in each fold. These curves further emphasize the excellent performance of the EfficientNet-B0 model, with the AUC value of 98.38% for periodontitis and 97.55% for dental caries.
We conducted an ablation study to examine the impact of image processing techniques on the performance of the EfficientNet-B0 model. The results, shown in Table 7, demonstrate that incorporating image processing techniques improves the performance in recognizing periodontitis and dental caries. With image processing, the model achieved higher accuracy. Additionally, there were improvements in sensitivity, specificity, PPV, and NPV.
Moreover, the implementation of image processing techniques enhanced both the ROC AUC and the PR AUC. For the prediction of periodontitis, the ROC AUC improved from 96.72% to 98.67%, and the PR AUC increased from 96.22% to 98.38%. Similarly, for the prediction of dental caries, the ROC AUC increased from 96.49% to 98.31%, and the PR AUC from 95.83% to 97.55%. Overall, image processing effectively improves the prediction performance for both labels across all metrics.
To improve the interpretability of the EfficientNet-B0 model, an illustration using gradient-weighted class activation mapping (Grad-CAM) [32] is shown in Figure 9. By superimposing the up-sampled heat maps over the single-tooth X-ray images, the examples effectively highlight the active regions within the images, i.e., the regions that most influence the classification results of the EfficientNet-B0 model. Figure 9 thus provides valuable insights into the decision-making mechanism.
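The core Grad-CAM computation is simple once the feature maps and gradients are extracted from the trained network; obtaining those tensors (e.g. with tf.GradientTape) is framework-specific and omitted in this sketch:

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Core Grad-CAM step: weight each (H, W, C) feature map by its
    spatially averaged gradient, sum over channels, apply ReLU, and
    normalize to [0, 1]. Inputs come from the last convolutional layer."""
    weights = gradients.mean(axis=(0, 1))  # one weight per channel
    cam = np.maximum((feature_maps * weights).sum(axis=-1), 0.0)  # ReLU
    if cam.max() > 0:
        cam /= cam.max()  # normalize so the heat map spans [0, 1]
    return cam
```

The resulting low-resolution map is then up-sampled to the input size and overlaid on the X-ray, as in Figure 9.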

4. Discussion

The study in [16] focused on detecting dental caries on one tooth per cropped image based on deep learning for premolars and molars in periapical radiographs. The accuracy, sensitivity, specificity, PPV, NPV, and ROC AUC for the model of predicting dental caries for both premolars and molars in [16] were 82.0%, 81.0%, 83.0%, 82.7%, 81.4%, and 84.5%, respectively. Our method for predicting dental caries provided the accuracy of 94.94%, the sensitivity of 94.15%, the specificity of 95.47%, the PPV of 93.30%, the NPV of 96.08%, and the ROC AUC of 98.31%, which outperforms [16], as shown in Table 8. In particular, our method can recognize both periodontitis and dental caries simultaneously. In addition, both anterior and posterior teeth are included in our dataset.
For a single tooth, the deep learning model proposed in [22] needs to be executed twice to obtain the detection of periapical periodontitis and caries. The first time is to obtain the dental root result for the detection of periapical periodontitis and the second time is to acquire the dental crown results for the detection of caries. However, the multi-label deep learning architecture of our method only needs to be executed once to obtain the recognition results among the four kinds of teeth. In [22], the sensitivity, specificity, PPV, NPV, and ROC AUC for the performance of periapical periodontitis were 82.00%, 84.00%, 83.67%, 82.35%, and 87.90%, respectively. Our model performed with the sensitivity of 93.28%, the specificity of 96.88%, the PPV of 95.24%, the NPV of 95.59%, and the ROC AUC of 98.67% for periodontitis. For the prediction of dental caries, the sensitivity, specificity, PPV, NPV, and ROC AUC were 83.50%, 82.00%, 82.27%, 83.25%, and 87.50%, respectively, in [22]; however, they were 94.15%, 95.47%, 93.30%, 96.08%, and 98.31%, respectively, in our method, as shown in Table 9. In addition, cropping a single tooth is not a fully automated method in [22]. Our method provides an automated tooth detection process using YOLOv7.
The automatic evaluation of periodontitis and dental caries can facilitate the initial screening of dental conditions, particularly benefiting people in underserved areas with limited access to healthcare resources. It can promote regular dental check-ups, thereby contributing to the overall maintenance of oral health. Additionally, the screening results obtained through this method can assist healthcare decision-makers in investigating healthcare demands, optimizing the allocation of dental workforces, and effectively reducing the urban–rural disparity in dental care accessibility. The simultaneous recognition method has the potential to positively impact public health strategies, healthcare planning, and the overall accessibility of dental care, especially for medically underserved populations.
This study focused solely on dental X-ray images, which may exclude important clinical factors, such as the patient’s medical and dental history, symptoms, and other clinical examination results. These clinical factors provide valuable information that can contribute to the overall assessment of the patient’s condition. The exclusion of these clinical factors could limit the performance of the proposed method. Therefore, future research is suggested to evaluate both dental X-ray images and clinical factors before annotation and training AI models. This integration can potentially enhance the accuracy and effectiveness of AI-assisted technology.

5. Conclusions

Artificial intelligence technologies have made significant progress recently in the applications of dentistry. This paper presents an effective method for the simultaneous recognition of periodontitis and dental caries in dental X-ray images. The methodology applies YOLOv7 for tooth detection, image processing technologies (contrast-limited adaptive histogram equalization and bilateral filtering), and the EfficientNet-B0 model. The proposed method achieved good performance in terms of various performance metrics. This deep learning-based method, which demonstrated promising capabilities, can be beneficial to dentists for the identification of periodontitis and dental caries.

Author Contributions

Conceptualization, I.D.S.C. and C.-M.Y.; Data curation, I.D.S.C.; Funding acquisition, M.-J.C. and R.-M.W.; Methodology, I.D.S.C., C.-M.Y. and M.-J.C.; Software, I.D.S.C. and M.-C.C.; Supervision, M.-J.C., R.-M.W. and C.-H.Y.; Validation, C.-M.Y.; Writing—original draft, I.D.S.C., C.-M.Y. and M.-J.C.; Writing—review and editing, M.-C.C., R.-M.W. and C.-H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Technology, Taiwan, under Grant MOST 110-2221-E-259-010-MY2.

Institutional Review Board Statement

This study was approved by the Institutional Review Board/Ethics Committee of Mennonite Christian Hospital, Taiwan, under IRB/EC No. 20-08-016.

Informed Consent Statement

Waiver of informed consent was approved by the Institutional Review Board/Ethics Committee of Mennonite Christian Hospital, Taiwan.

Data Availability Statement

Data is unavailable due to ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Panayides, A.S.; Amini, A.; Filipovic, N.D.; Sharma, A.; Tsaftaris, S.A.; Young, A.; Foran, D.; Do, N.; Golemati, S.; Kurc, T.; et al. AI in medical imaging informatics: Current challenges and future directions. IEEE J. Biomed. Health Inform. 2020, 24, 1837–1857. [Google Scholar] [CrossRef]
  2. Kishimoto, T.; Goto, T.; Matsuda, T.; Iwawaki, Y.; Ichikawa, T. Application of artificial intelligence in the dental field: A literature review. J. Prosthodont. Res. 2022, 66, 19–28. [Google Scholar] [CrossRef]
  3. Suhail, Y.; Upadhyay, M.; Chhibber, A.; Kshitiz. Machine learning for the diagnosis of orthodontic extractions: A computational analysis using ensemble learning. Bioengineering 2020, 7, 55. [Google Scholar] [CrossRef]
  4. Schwendicke, F.; Golla, T.; Dreher, M.; Krois, J. Convolutional neural networks for dental image diagnostics: A scoping review. J. Dent. 2019, 91, 103226. [Google Scholar] [CrossRef] [PubMed]
  5. Rao, R.S.; Shivanna, D.B.; Lakshminarayana, S.; Mahadevpur, K.S.; Alhazmi, Y.A.; Bakri, M.M.H.; Alharbi, H.S.; Alzahrani, K.J.; Alsharif, K.F.; Banjer, H.J.; et al. Ensemble deep-learning-based prognostic and prediction for recurrence of sporadic odontogenic keratocysts on hematoxylin and eosin stained pathological images of incisional biopsies. J. Pers. Med. 2022, 12, 1220. [Google Scholar] [CrossRef] [PubMed]
  6. Murata, M.; Ariji, Y.; Ohashi, Y.; Kawai, T.; Fukuda, M.; Funakoshi, T.; Kise, Y.; Nozawa, M.; Katsumata, A.; Fujita, H.; et al. Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography. Oral Radiol. 2019, 35, 301–307. [Google Scholar] [CrossRef]
  7. Celik, M.E. Deep learning based detection tool for impacted mandibular third molar teeth. Diagnostics 2022, 12, 942. [Google Scholar] [CrossRef] [PubMed]
  8. Chuo, Y.; Lin, W.M.; Chen, T.Y.; Chan, M.L.; Chang, Y.S.; Lin, Y.R.; Lin, Y.J.; Shao, Y.H.; Chen, C.A.; Chen, S.L.; et al. A high-accuracy detection system: Based on transfer learning for apical lesions on periapical radiograph. Bioengineering 2022, 9, 777. [Google Scholar] [CrossRef]
  9. Chen, Y.C.; Chen, M.Y.; Chen, T.Y.; Chan, M.L.; Huang, Y.Y.; Liu, Y.L.; Lee, P.T.; Lin, G.J.; Li, T.F.; Chen, C.A.; et al. Improving dental implant outcomes: CNN-based system accurately measures degree of peri-implantitis damage on periapical film. Bioengineering 2023, 10, 640. [Google Scholar] [CrossRef]
  10. Falcao, A.; Bullón, P. A review of the influence of periodontal treatment in systemic diseases. Periodontol. 2000 2019, 79, 117–128. [Google Scholar] [CrossRef]
  11. Watt, R.G.; Heilmann, A.; Listl, S.; Peres, M.A. London charter on oral health inequalities. J. Dent. Res. 2016, 95, 245–247. [Google Scholar] [CrossRef]
  12. Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Diagnosis and prediction of periodontally compromised teeth using a deep learning-based convolutional neural network algorithm. J. Periodontal Implant Sci. 2018, 48, 114–123. [Google Scholar] [CrossRef] [Green Version]
  13. Li, H.; Zhou, J.; Zhou, Y.; Chen, Q.; She, Y.; Gao, F.; Xu, Y.; Chen, J.; Gao, X. An interpretable computer-aided diagnosis method for periodontitis from panoramic radiographs. Front. Physiol. 2021, 12, 655556. [Google Scholar] [CrossRef]
  14. Chang, J.; Chang, M.F.; Angelov, N.; Hsu, C.Y.; Meng, H.W.; Sheng, S.; Glick, A.; Chang, K.; He, Y.R.; Lin, Y.B.; et al. Application of deep machine learning for the radiographic diagnosis of periodontitis. Clin. Oral Investig. 2022, 26, 6629–6637. [Google Scholar] [CrossRef] [PubMed]
  15. Sornam, M.; Prabhakaran, M. A new linear adaptive swarm intelligence approach using back propagation neural network for dental caries classification. In Proceedings of the 2017 IEEE International Conference on Power, Control, Signals and Instrumentation Engineering, Chennai, India, 21–22 September 2017. [Google Scholar]
  16. Lee, J.H.; Kim, D.H.; Jeong, S.N.; Choi, S.H. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J. Dent. 2018, 77, 106–111. [Google Scholar] [CrossRef]
  17. Geetha, V.; Aprameya, K.S.; Hinduja, D.M. Dental caries diagnosis in digital radiographs using back-propagation neural network. Health Inf. Sci. Syst. 2020, 8, 8. [Google Scholar] [CrossRef]
  18. Jusman, Y.; Anam, M.K.; Puspita, S.; Saleh, E. Machine learnings of dental caries images based on Hu moment invariants features. In Proceedings of the 2021 International Seminar on Application for Technology of Information and Communication, Semarangin, Indonesia, 18–19 September 2021. [Google Scholar]
  19. Imak, A.; Celebi, A.; Siddique, K.; Turkoglu, M.; Sengur, A.; Salam, I. Dental caries detection using score-based multi-input deep convolutional neural network. IEEE Access 2022, 10, 18320–18329. [Google Scholar] [CrossRef]
  20. Bui, T.H.; Hamamoto, K.; Paing, M.P. Automated caries screening using ensemble deep learning on panoramic radiographs. Entropy 2022, 24, 1358. [Google Scholar] [CrossRef] [PubMed]
  21. Chen, H.; Li, H.; Zhao, Y.; Zhao, J.; Wang, J. Dental disease detection on periapical radiographs based on deep convolutional neural networks. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 649–661. [Google Scholar] [CrossRef]
  22. Li, S.; Liu, J.; Zhou, Z.; Zhou, Z.; Wu, X.; Li, Y.; Wang, S.; Liao, W.; Ying, S.; Zhao, Z. Artificial intelligence for caries and periapical periodontitis detection. J. Dent. 2022, 122, 104107. [Google Scholar] [CrossRef]
  23. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023. [Google Scholar]
  24. Srivastava, S.; Divekar, A.V.; Anilkumar, C.; Naik, I.; Kulkarni, V.; Pattabiraman, V. Comparative analysis of deep learning image detection algorithms. J. Big Data 2021, 8, 66. [Google Scholar] [CrossRef]
  25. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; Romeny, B.H.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
  26. Pizer, S.M.; Johnston, R.E.; Ericksen, J.P.; Yankaskas, B.C.; Muller, K.E. Contrast-limited adaptive histogram equalization: Speed and effectiveness. In Proceedings of the First Conference on Visualization in Biomedical Computing, Atlanta, GA, USA, 22–25 May 1990. [Google Scholar]
  27. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision, Bombay, India, 4–7 January 1998. [Google Scholar]
  28. Pandey, P.; Bhan, A.; Dutta, M.K.; Travieso, C.M. Automatic image processing based dental image analysis using automatic gaussian fitting energy and level sets. In Proceedings of the 2017 International Conference and Workshop on Bioinspired Intelligence, Funchal, Portugal, 10–12 July 2017. [Google Scholar]
  29. Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  30. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018. [Google Scholar]
  31. Tan, M.; Le, Q.V. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019. [Google Scholar]
  32. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
Figure 1. Flowchart of the training process for the proposed method.
Figure 2. Detection results by YOLOv7.
Figure 3. PR curve of YOLOv7 detection.
Figure 4. Illustration of the image processing techniques.
Figure 5. ROC curve for periodontitis.
Figure 6. ROC curve for dental caries.
Figure 7. PR curve for periodontitis.
Figure 8. PR curve for dental caries.
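The ROC AUCs reported in Figures 5 and 6 (and Tables 4 and 6) can be reproduced without explicit curve integration: ROC AUC equals the probability that a randomly chosen diseased tooth receives a higher sigmoid score than a randomly chosen healthy one (the rank-sum, or Mann-Whitney, identity). A minimal stdlib sketch; the function name and standalone form are illustrative, not the paper's code:

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) identity.

    labels: 0 (disease absent) or 1 (disease present), one per tooth.
    scores: the sigmoid output for that disease label.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Each positive/negative pair contributes 1 if ranked correctly,
    # 0.5 on a tie, 0 otherwise.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # -> 0.75
```

Because the proposed network emits two independent sigmoid outputs, this computation is simply run twice, once per disease label.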
Figure 9. Illustration based on Grad-CAM.
Table 1. Numbers of single-tooth X-ray images in the experiments.

| Dataset | Subset | Normal | Periodontitis | Dental Caries | Both Diseases | Total |
|---|---|---|---|---|---|---|
| Original | Training and Validation | 700 | 1000 | 400 | 500 | 2600 |
| Original | Testing | 100 | 50 | 50 | 50 | 250 |
| Augmentation | Training and Validation | 3300 | 1000 | 1600 | 1500 | 7400 |
| Augmentation | Testing | 300 | 150 | 150 | 150 | 750 |
| Total | Training | 3200 | 1600 | 1600 | 1600 | 8000 |
| Total | Validation | 800 | 400 | 400 | 400 | 2000 |
| Total | Testing | 400 | 200 | 200 | 200 | 1000 |
Table 2. Hardware and software platforms.

| Hardware Platform | Version |
|---|---|
| CPU | 12th Gen Intel Core i5-12400 |
| GPU | NVIDIA GeForce RTX 3070 |
| DRAM | 32 GB DDR4 3200 MHz |

| Software Platform | Version |
|---|---|
| Python | 3.7.16 |
| Tensorflow | 2.9.1 |
| PyTorch | 1.7.1 |
Table 3. Hyperparameters in the CNN models.

| Hyperparameter | Value |
|---|---|
| Initial learning rate | 0.001 |
| Max epoch | 50 |
| Batch size | 50 |
| Learning drop period | 4 |
| Learning rate drop factor | 0.316 |
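The learning-rate entries in Table 3 describe a step-decay schedule: starting at 0.001, the rate is multiplied by the drop factor 0.316 (roughly 1/√10) every 4 epochs. A minimal sketch of such a schedule, assuming this interpretation; the function name and its standalone form are ours, not the paper's code:

```python
def step_decay_lr(epoch, initial_lr=0.001, drop_factor=0.316, drop_period=4):
    """Learning rate for a given epoch under step decay.

    The rate is scaled by `drop_factor` once every `drop_period` epochs,
    so epochs 0-3 train at 0.001, epochs 4-7 at 0.000316, and so on.
    """
    return initial_lr * drop_factor ** (epoch // drop_period)
```

A function of this shape can be passed directly to a framework scheduler callback (e.g. Keras's `LearningRateScheduler`), which queries it once per epoch.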
Table 4. Performance comparison of the various CNN models.

| Model | Disease | Accuracy (%) | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%) | ROC AUC (%) | PR AUC (%) |
|---|---|---|---|---|---|---|---|---|
| Xception | Periodontitis | 89.76 | 89.26 | 90.59 | 86.40 | 92.54 | 94.58 | 93.34 |
| Xception | Dental caries | 88.13 | 86.98 | 88.41 | 83.50 | 91.21 | 93.49 | 90.44 |
| MobileNetV2 | Periodontitis | 91.42 | 91.21 | 91.60 | 87.73 | 94.01 | 96.86 | 95.89 |
| MobileNetV2 | Dental caries | 89.03 | 88.25 | 89.51 | 85.32 | 91.87 | 96.31 | 94.76 |
| EfficientNet-B0 | Periodontitis | 95.44 | 93.28 | 96.88 | 95.24 | 95.59 | 98.67 | 98.38 |
| EfficientNet-B0 | Dental caries | 94.94 | 94.15 | 95.47 | 93.30 | 96.08 | 98.31 | 97.55 |
Table 5. Accuracy comparison of the various CNN models.

| Model | Periodontitis Min (%) | Periodontitis Max (%) | Periodontitis Mean (%) | Dental Caries Min (%) | Dental Caries Max (%) | Dental Caries Mean (%) |
|---|---|---|---|---|---|---|
| Xception | 88.98 | 91.66 | 89.76 | 86.89 | 89.11 | 88.13 |
| MobileNetV2 | 89.98 | 92.51 | 91.42 | 87.36 | 90.42 | 89.03 |
| EfficientNet-B0 | 94.60 | 96.30 | 95.44 | 92.80 | 96.40 | 94.94 |
Table 6. Performance of EfficientNet-B0 on 10-fold cross-validation.

| Fold | Disease | TP | FN | TN | FP | Accuracy (%) | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%) | ROC AUC (%) | PR AUC (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Fold-1 | Periodontitis | 369 | 31 | 586 | 14 | 95.50 | 92.25 | 97.67 | 96.34 | 94.98 | 98.80 | 98.38 |
| Fold-1 | Dental caries | 381 | 19 | 574 | 26 | 95.50 | 95.25 | 95.67 | 93.61 | 96.80 | 98.69 | 98.05 |
| Fold-2 | Periodontitis | 361 | 39 | 585 | 15 | 94.60 | 90.25 | 97.50 | 96.01 | 93.75 | 98.07 | 97.78 |
| Fold-2 | Dental caries | 363 | 37 | 565 | 35 | 92.80 | 90.75 | 94.17 | 91.21 | 93.85 | 97.20 | 96.22 |
| Fold-3 | Periodontitis | 369 | 31 | 580 | 20 | 94.90 | 92.25 | 96.67 | 94.86 | 94.93 | 98.37 | 98.10 |
| Fold-3 | Dental caries | 377 | 23 | 568 | 32 | 94.50 | 94.25 | 94.67 | 92.18 | 96.11 | 97.63 | 95.99 |
| Fold-4 | Periodontitis | 371 | 29 | 576 | 24 | 94.70 | 92.75 | 96.00 | 93.92 | 95.21 | 98.68 | 98.36 |
| Fold-4 | Dental caries | 372 | 28 | 572 | 28 | 94.40 | 93.00 | 95.33 | 93.00 | 95.33 | 98.10 | 97.45 |
| Fold-5 | Periodontitis | 380 | 20 | 582 | 18 | 96.20 | 95.00 | 97.00 | 95.48 | 96.68 | 98.88 | 98.73 |
| Fold-5 | Dental caries | 381 | 19 | 579 | 21 | 96.00 | 95.25 | 96.50 | 94.78 | 96.82 | 99.06 | 98.64 |
| Fold-6 | Periodontitis | 376 | 24 | 583 | 17 | 95.90 | 94.00 | 97.17 | 95.67 | 96.05 | 99.05 | 98.87 |
| Fold-6 | Dental caries | 384 | 16 | 580 | 20 | 96.40 | 96.00 | 96.67 | 95.05 | 97.32 | 98.77 | 98.45 |
| Fold-7 | Periodontitis | 377 | 23 | 586 | 14 | 96.30 | 94.25 | 97.67 | 96.42 | 96.22 | 99.14 | 98.89 |
| Fold-7 | Dental caries | 375 | 25 | 588 | 12 | 96.30 | 93.75 | 98.00 | 96.90 | 95.92 | 98.85 | 98.44 |
| Fold-8 | Periodontitis | 382 | 18 | 575 | 25 | 95.70 | 95.50 | 95.83 | 93.86 | 96.96 | 98.70 | 98.51 |
| Fold-8 | Dental caries | 378 | 22 | 578 | 22 | 95.60 | 94.50 | 96.33 | 94.50 | 96.33 | 98.70 | 98.15 |
| Fold-9 | Periodontitis | 371 | 29 | 575 | 25 | 94.60 | 92.75 | 95.83 | 93.69 | 95.20 | 98.33 | 97.80 |
| Fold-9 | Dental caries | 379 | 21 | 557 | 43 | 93.60 | 94.75 | 92.83 | 89.81 | 96.37 | 98.01 | 96.92 |
| Fold-10 | Periodontitis | 375 | 25 | 585 | 15 | 96.00 | 93.75 | 97.50 | 96.15 | 95.90 | 98.63 | 98.36 |
| Fold-10 | Dental caries | 376 | 24 | 567 | 33 | 94.30 | 94.00 | 94.50 | 91.93 | 95.94 | 98.13 | 97.20 |
| Mean | Periodontitis | – | – | – | – | 95.44 | 93.28 | 96.88 | 95.24 | 95.59 | 98.67 | 98.38 |
| Mean | Dental caries | – | – | – | – | 94.94 | 94.15 | 95.47 | 93.30 | 96.08 | 98.31 | 97.55 |
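The percentage columns in Table 6 follow directly from each fold's confusion counts (TP, FN, TN, FP). A short sketch, with a helper name of our choosing, reproduces the Fold-1 periodontitis row:

```python
def metrics_from_counts(tp, fn, tn, fp):
    """Derive the Table 6 percentage metrics from raw confusion counts."""
    total = tp + fn + tn + fp
    return {
        "accuracy": 100 * (tp + tn) / total,
        "sensitivity": 100 * tp / (tp + fn),  # recall on diseased teeth
        "specificity": 100 * tn / (tn + fp),  # recall on healthy teeth
        "ppv": 100 * tp / (tp + fp),          # positive predictive value
        "npv": 100 * tn / (tn + fn),          # negative predictive value
    }

# Fold-1, periodontitis: TP=369, FN=31, TN=586, FP=14.
m = metrics_from_counts(369, 31, 586, 14)
# Rounded to two decimals: 95.50 / 92.25 / 97.67 / 96.34 / 94.98,
# matching the first row of Table 6.
```

Each fold's test set contains 1000 single-tooth images (TP + FN + TN + FP), consistent with the testing totals in Table 1.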
Table 7. Ablation study.

| Method | Disease | Accuracy (%) | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%) | ROC AUC (%) | PR AUC (%) |
|---|---|---|---|---|---|---|---|---|
| With image processing | Periodontitis | 95.44 | 93.28 | 96.88 | 95.24 | 95.59 | 98.67 | 98.38 |
| With image processing | Dental caries | 94.94 | 94.15 | 95.47 | 93.30 | 96.08 | 98.31 | 97.55 |
| Without image processing | Periodontitis | 93.05 | 92.10 | 93.68 | 90.77 | 94.72 | 96.72 | 96.22 |
| Without image processing | Dental caries | 92.91 | 92.33 | 93.30 | 90.23 | 94.80 | 96.49 | 95.83 |
Table 8. Comparison of the proposed method with [16].

| Method | CNN Network | Accuracy (%) | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%) | ROC AUC (%) |
|---|---|---|---|---|---|---|---|
| [16] | GoogLeNet InceptionV3 | 82.0 | 81.0 | 83.0 | 82.7 | 81.4 | 84.5 |
| Proposed method | EfficientNet-B0 | 94.94 | 94.15 | 95.47 | 93.30 | 96.08 | 98.31 |
Table 9. Comparison of the proposed method with [22].

| Method | CNN Network | Disease | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%) | ROC AUC (%) |
|---|---|---|---|---|---|---|---|
| [22] | Modified ResNet-18 Backbone | Periapical periodontitis | 82.00 | 84.00 | 83.67 | 82.35 | 87.90 |
| [22] | Modified ResNet-18 Backbone | Dental caries | 83.50 | 82.00 | 82.27 | 83.25 | 87.50 |
| Proposed method | EfficientNet-B0 | Periodontitis | 93.28 | 96.88 | 95.24 | 95.59 | 98.67 |
| Proposed method | EfficientNet-B0 | Dental caries | 94.15 | 95.47 | 93.30 | 96.08 | 98.31 |

Share and Cite

MDPI and ACS Style

Chen, I.D.S.; Yang, C.-M.; Chen, M.-J.; Chen, M.-C.; Weng, R.-M.; Yeh, C.-H. Deep Learning-Based Recognition of Periodontitis and Dental Caries in Dental X-ray Images. Bioengineering 2023, 10, 911. https://doi.org/10.3390/bioengineering10080911
