Article

Texture-Based Neural Network Model for Biometric Dental Applications

Omnia Saleh, Kosuke Nozaki, Mayuko Matsumura, Wataru Yanaka, Hiroyuki Miura and Kenji Fueki

1 Department of Masticatory Function and Health Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Bunkyo-ku, Tokyo 113-8510, Japan
2 Department of Advanced Prosthodontics, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Bunkyo-ku, Tokyo 113-8510, Japan
* Author to whom correspondence should be addressed.
J. Pers. Med. 2022, 12(12), 1954; https://doi.org/10.3390/jpm12121954
Submission received: 1 November 2022 / Revised: 23 November 2022 / Accepted: 23 November 2022 / Published: 25 November 2022

Abstract

Background: The aim of this study was to classify dentition using a novel texture-based automated convolutional neural network (CNN) for forensic and prosthetic applications. Methods: Natural human teeth (n = 600) were classified, cleaned, and inspected against the exclusion criteria. The teeth were scanned with an intraoral scanner and identified using a texture-based CNN in three steps. First, during preprocessing, the tooth images were segmented by extracting the front-facing region of each tooth. Then, texture features were extracted from the segmented images using the discrete wavelet transform (DWT) method. Finally, deep learning-based enhanced CNN models were used to identify these images. Several experiments were conducted using five different CNN models with various batch sizes and epochs, with and without augmented data. Results: Across the five CNN models, the highest accuracy achieved was 0.8 and the precision was 0.8, with a loss value of 0.9, a batch size of 32, and 250 epochs. A comparison of deep learning models with different parameters showed varied accuracy between the different classes of teeth. Conclusion: The accuracy of the texture-based CNN method was promising. This texture-identification method will pave the way for many forensic and prosthodontic applications and will potentially help improve the precision of dental biometrics.

1. Introduction

Biometric identification is of immense importance in forensics as well as in personalized medicine [1,2]. Several parts of the human body can be used for identification [3]. Human enamel is the hardest tissue in the human body and is extremely resistant to elevated temperatures and chemical changes [4,5,6], making dental identification an essential alternative to soft-tissue methods [7,8].
Several features of the teeth may be used for identification, such as contours, dimensions, arch size, bite marks, (estimated) dental age, dental restorations, and tooth development; these can be used alone or in combination [4,8,9,10]. Ameloglyphics, the study of enamel rod end patterns, has been proposed as a biometric identifier analogous to fingerprints and iris patterns [3,5]. Natural teeth exhibit individual textural features, and the exact patterns of these features are unique [11,12]. Precisely recording these details makes it feasible to use such patterns for biometric applications [3]. Methods of recording dental prints include the peeling technique, recording with silicone impressions or celluloid acetate films, and automated biometric analysis [3,5].
Digital transformation in dentistry is becoming the new standard in clinical practice. One application of digital dentistry is creating digital impressions with intraoral scanners, which have proven to be cost-effective, time-efficient, and highly accurate [13]. Intraoral scanning is a comfortable option for the patient, without harmful side effects, even with repeated use [14,15]. According to recent studies, intraoral scanners are accurate to within a few microns [13,16].
The concept of personalized treatment and biomimetically designed dental prostheses is gaining popularity in dentistry, and artificial intelligence (AI) currently plays a significant role [15,17,18]. In prosthodontics, although digital production can decrease the time and cost of dental treatment, it is challenging to reproduce the unique morphological features of teeth because of limitations in computer design and construction methods [18,19]. This has led to the introduction of the concept of Digital Dental Passport, which is the application of an individual’s dental library that is easily retrieved when needed [18].
Convolutional neural networks (CNNs) have been implemented in many image processing applications [20,21,22]. Deep learning has proven accurate in the identification and classification of radiographs, as shown in previous research [23,24,25]. However, to date, few dental studies have addressed the classification of three-dimensional (3D) scanned images [14,26,27].
Texture analysis plays a key role in computer vision, especially in object detection. It can be used alone or in combination with other features, such as the subject's facial anatomy, which can increasingly be compared using digital software after acquisition with intraoral and facial scanners [28,29]. The discrete wavelet transform (DWT) is a method for extracting texture features using translations and discrete wavelet scales [30]. The DWT de-noises a signal efficiently and quickly, and its implementation is considered computationally efficient [31]. This study focuses on the development of a novel texture-based biometric application for scanned dentition, using DWT-extracted texture features for classification.

2. Materials and Methods

2.1. Teeth Collection, Scanning, and Classification

Extracted natural teeth of unknown origin (n = 600) were sourced from the Maxillofacial Anatomy Department of Tokyo Medical and Dental University. The sample size per group was calculated based on previous studies, with an expected proportion (P) of 0.02 and a desired precision (d) of 0.05. Using the following Equation (1) [32], at least 31 samples per class were required, assuming an infinite population.
n' = \frac{N Z^2 P (1 - P)}{d^2 (N - 1) + Z^2 P (1 - P)} \qquad (1)
n′ = Sample size with finite population correction
N = Population size
Z = Z statistic for a level of confidence
P = Expected proportion (in proportion of one)
d = Precision (in proportion of one).
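As a worked example, the following minimal Python sketch evaluates Equation (1); Z = 1.96 (95% confidence) is an assumption, since the confidence level is not stated explicitly, and with P = 0.02 and d = 0.05 the infinite-population estimate reproduces the 31 samples per class reported above.

```python
import math

def sample_size(p, d, z=1.96, population=None):
    """Sample size for estimating a proportion. Applies the finite
    population correction of Equation (1) when a population size is given;
    otherwise uses the infinite-population estimate z^2 * p * (1 - p) / d^2."""
    if population is None:
        n = z ** 2 * p * (1 - p) / d ** 2
    else:
        n = (population * z ** 2 * p * (1 - p)) / (
            d ** 2 * (population - 1) + z ** 2 * p * (1 - p))
    return math.ceil(n)

print(sample_size(p=0.02, d=0.05))  # -> 31 samples per class
```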
The teeth were cleaned with an ultrasonic scaler (Varios 970, NSK, Tokyo, Japan) at a frequency of 28–32 kHz to remove any debris. Subsequently, they were scanned with an intraoral scanner (Trios 3, 3Shape, Copenhagen, Denmark). The scanned teeth were then aligned to expose their frontal surfaces, and images of these surfaces were captured in both PNG and JPG formats using design software (Autodesk Meshmixer, Mill Valley, CA, USA). The images were classified into nine groups [12,33] and labeled from 0 to 8, as shown in Table 1.
Python was used in this study [34] and the proposed method consisted of several steps as presented in Figure 1.

2.2. Preprocessing

After classification, the tooth images were preprocessed by converting them into binary images. Morphological operations, namely erosion and dilation, were used to remove outliers. Erosion shrinks the foreground of the input image according to the size of the structuring element (kernel), whereas dilation expands it; a kernel size of 10 × 10 was used in the proposed method. The erosion and dilation of the binary image were calculated based on Equations (2) and (3), respectively, where A represents the original binary image and B represents the kernel. The front-facing tooth image was selected after finding contours in the binary image.
A \ominus B = \{\, z \in E \mid B_z \subseteq A \,\} \qquad (2)

A \oplus B = \bigcup_{b \in B} A_b \qquad (3)
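A minimal OpenCV sketch of this preprocessing step is shown below; the file name, the use of Otsu's method for binarization, and the largest-contour heuristic for selecting the front-facing region are assumptions, as the paper does not specify them.

```python
import cv2
import numpy as np

# Load a captured tooth image (path is illustrative) and binarize it;
# Otsu's method is assumed here, as no threshold is specified.
img = cv2.imread("tooth_scan.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 10 x 10 structuring element, as used in the proposed method.
kernel = np.ones((10, 10), np.uint8)
# Erosion (Equation (2)) followed by dilation (Equation (3)) removes outliers.
cleaned = cv2.dilate(cv2.erode(binary, kernel), kernel)

# Select the front-facing tooth region via the largest contour.
contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
tooth = img[y:y + h, x:x + w]
```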

2.3. Extracting Textural Features Using DWT

In this study, wavelets were used because they capture features useful in image processing, and wavelet coefficients served as feature vectors for image classification. Using the DWT, a function of one variable is converted into a function of two variables: scale and translation. The wavelet coefficients are calculated at discrete, power-of-two scales, as shown in Equation (4).
W(j, k) = \sum_{j} \sum_{k} x(k) \, 2^{-j/2} \, \psi\!\left(2^{-j} n - k\right) \qquad (4)
In Equation (4), the discrete function x(k) is expressed as a weighted sum of wavelets plus a coarse approximation. The coarse approximation is further decomposed by iterated low-pass and high-pass filtering. The approximation and detail components are calculated as shown in Equations (5) and (6), respectively.
a_{j+1}[k] = \sum_{m=-\infty}^{+\infty} l[m - 2k] \, a_j[m] \qquad (5)

d_{j+1}[k] = \sum_{m=-\infty}^{+\infty} h[m - 2k] \, a_j[m] \qquad (6)
Experiments were conducted at three distinct DWT levels: level 1, level 2, and level 3. Owing to its superior accuracy, level 2 was selected for the CNN model. The outcome of level-2 DWT texture extraction is shown in Figure 2.
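For illustration, a minimal sketch of the level-2 decomposition using the PyWavelets library follows; the wavelet family ('haar') and the image size are assumptions, as the paper does not state them.

```python
import numpy as np
import pywt

# Placeholder for a segmented grayscale tooth image (replace with real data).
tooth = np.random.default_rng(0).random((128, 128))

# Two-level 2D DWT; level 2 gave the best accuracy in the experiments.
cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = pywt.wavedec2(tooth, "haar", level=2)

# Concatenate the approximation and detail sub-bands (Equations (5) and (6))
# into a single texture feature vector for classification.
features = np.concatenate(
    [c.ravel() for c in (cA2, cH2, cV2, cD2, cH1, cV1, cD1)])
```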

2.4. Deep Convolutional Neural Networks for Classification

Different deep learning models with varying numbers of convolutional, pooling, and dropout layers were tested to find the best possible model. Data augmentation was performed to increase the data size and variation. Augmentation was performed with randomly selected values for rotation, zoom level, width/height shift, and shear.
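A minimal sketch of such an augmentation pipeline using the Keras ImageDataGenerator is given below; the specific ranges are illustrative assumptions, since the paper states only that the values were randomly selected.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random rotation, zoom, width/height shift, and shear, as described above;
# the ranges themselves are assumptions.
augmenter = ImageDataGenerator(
    rotation_range=15,
    zoom_range=0.1,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
)
```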
In CNNs, hyperparameter tuning is an optimization problem. Using a cross-validation set together with trial and error, the numbers of convolutional, pooling, dropout, and dense layers were tuned. An optimized CNN model comprising 14 layers with 728,789 parameters yielded the best results. The architecture of the proposed CNN model is presented in Figure 3.
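The sketch below illustrates a CNN in the spirit of Figure 3 (four convolutional, three pooling, and two dropout layers, plus flatten and dense layers); the filter counts, kernel sizes, and input shape are assumptions, so the parameter count will differ from the paper's optimized 14-layer, 728,789-parameter model.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(9, activation="softmax"),  # nine tooth classes (labels 0-8)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```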
In the experiments, the tooth image data were split into training and validation sets at an 80:20 ratio. To enlarge the training set and support better validation, the training data were further augmented. The performance of the proposed model was evaluated using accuracy and the confusion matrix. Accuracy denotes the percentage of correctly classified samples; for example, an accuracy of 50% implies that the model correctly identifies the class of 50% of the teeth samples. Furthermore, to obtain a more detailed picture, a confusion matrix was constructed to represent different performance measures for each class. Using the class-wise results from the confusion matrix, precision and recall were determined.
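Continuing the sketches above (`augmenter`, `model`), the split and evaluation could look as follows; the data arrays are placeholders, and scikit-learn is used here as an assumption for the split and the confusion matrix.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report

# Placeholders for the DWT texture images and their integer labels (0-8).
rng = np.random.default_rng(0)
X = rng.random((600, 128, 128, 1), dtype=np.float32)
y = rng.integers(0, 9, size=600)

# 80:20 training/validation split, as in the experiments.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Train on augmented batches (batch size 32 and 250 epochs per the best run).
model.fit(augmenter.flow(X_train, y_train, batch_size=32),
          validation_data=(X_val, y_val), epochs=250)

# Confusion matrix plus class-wise precision and recall.
y_pred = np.argmax(model.predict(X_val), axis=1)
print(confusion_matrix(y_val, y_pred))
print(classification_report(y_val, y_pred))
```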

3. Results

3.1. Experimental Results and Improvement Steps

To find a good combination of hyperparameters and obtain the best performance from the model, several experiments were conducted; the six best-performing configurations (in terms of accuracy) were selected for discussion in this section. The configurations differed in batch size, number of epochs, and whether augmented data were used. All six configurations and their accuracies are presented in Figure 4. The highest accuracy of 0.8 (80%) was achieved with configuration 5, with a loss value of 0.9, a batch size of 32, and 250 epochs. A few key learning points are discussed in Appendix A. In summary, the best accuracy was obtained using 14 layers with data augmentation, DWT level-2 textural features, and an appropriate image size, as shown in Figure 4. Detailed results are provided in Appendix A, Table A1.

3.2. Confusion Matrix

For clarity and to determine class-wise performance, the confusion matrix of the best-performing model is shown in Figure 5. The numbers zero to eight represent tooth classes A to I, respectively.
As shown in Figure 5, the upper central and lower canine classes were detected and classified with the highest accuracy among all classes (100%), whereas the upper canine (32%) and upper lateral (56%) displayed the lowest accuracy.

4. Discussion

This study aimed to demonstrate the feasibility of an automated texture-based classifier for dentition. Subjective identification and classification of dentition can lead to errors, is time-consuming, and is limited by the shortage of experienced manpower [11,24]; moreover, previous studies on dental classification have mainly addressed a single class of teeth [11]. Currently, machine learning texture-based automated systems and software tools can perform fingerprint recognition, facial recognition, and iris scanning, enabling reliable biometric applications [35]. Thus, incorporating textural feature-based deep learning methods into the classification of all tooth types presents a solid alternative to subjective identification methods.
In this research, a complete set of extracted natural teeth was considered for two main reasons. First, natural teeth undergo individual changes, such as restorations and loss [18], and in forensics it is useful to determine whether one class of teeth is more amenable to identification than another; it is therefore advisable to examine several teeth classes. Second, comparing the uniqueness of textural features across several teeth is valuable, especially for prosthetic applications.
The intraoral scanner used in this research could accurately capture tooth details of less than 10 microns [13]. The digital storage of dental data was facilitated by the introduction of scanners [14,27]. Studies using scanned dental arches for biometric applications are still limited but rapidly increasing [2]. Recent studies suggest the use of occlusal surfaces of posterior teeth for classification and identification, reporting promising results [11,14]. However, it was also suggested that other teeth be included for future research [14].
In prosthetic treatment, teeth morphology generated from digital libraries cannot replicate an individual patient’s morphology. The duplication of the original tooth or mirroring of the contralateral tooth, if present, could be a solution, but it will require the correct 3D tooth position [19]. Therefore, the creation of personalized digital dental libraries and the associated use of AI identification could help implement the customization concept in digital prosthodontic design [17,18].
In this study, a fully automated method was proposed and achieved, albeit with some outliers. Data preprocessing is vital for any machine learning process. In some conditions, it can correct defects that might otherwise affect the learning process, such as noise, omissions, and the presence of outliers [36]. Frequently, preprocessing makes the data less complex and enhances the training of the learning model. In contrast to traditional segmentation models, the abstraction capacity of CNNs enables them to operate in a high-dimensional feature space, which minimizes the need for manual feature engineering. However, suitable preprocessing is still crucial to enhance the quality of the learning process [37].
DWT is a well-known mathematical method for extracting textural features from images [31,38]. It was developed in the 1980s to decompose a signal with finite energy in the spatial domain into a set of orthogonal functions defined in the modular spatial domain [39]. It decomposes signals in the time-frequency domain into basis functions called wavelets.
The algorithms proposed for deep learning architectures have been successful in various fields, such as image restoration and speech and image recognition [40]. This research shows that deep learning CNN architectures can be effective: the recognition rate improved with an increased number of hidden layers, data augmentation, and the use of textural features. Moreover, a smaller batch size was used to reduce memory usage [41]. Although it produced better results in this research, the benefit of a smaller batch size depends on the number of output classes; it is therefore recommended to use a batch size at least twice the number of output classes.
The results showed that having more hidden layers enhanced the recognition rate but increased computational time, since training time is directly proportional to architecture size [42]. The number of epochs was adjusted with the help of the cross-validation set, and training was stopped when the loss started increasing on the validation set. This was done to avoid overfitting, which occurs when the training error is exceptionally low but the validation error is high. The tooth classification experiments were performed with 100, 150, 250, and 300 epochs; the best performance was observed at 250, based on the foregoing criterion.
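Continuing the training sketch above, this stopping criterion could be expressed with a Keras EarlyStopping callback; the patience value is an assumption, as the paper does not state one.

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop when validation loss starts increasing and restore the best weights.
early_stop = EarlyStopping(monitor="val_loss", patience=10,
                           restore_best_weights=True)
model.fit(augmenter.flow(X_train, y_train, batch_size=32),
          validation_data=(X_val, y_val), epochs=250,
          callbacks=[early_stop])
```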
When accuracy needs to be visualized for unbalanced datasets, a confusion matrix is used to evaluate performance [43]. According to the results, the overall accuracy of the model was 80%. Liu et al. proposed a Haar wavelet transform for the classification of only four classes of teeth using CBCT root sections and achieved similar results [44]. The accuracy for the upper central incisor was 100%; it is the most difficult tooth to replicate, and the variation in its microanatomy and surface texture increases its uniqueness. The accuracy for the lower canine was also 100%, which is superior to previous classification studies with CBCT images [44]. Conversely, the upper canine had the lowest accuracy (32%), which may be explained by this class having the fewest samples (only 34 teeth). The upper lateral also has less surface texture than the other anterior teeth [12,45], which could explain its lower accuracy compared with the other classes; indeed, the upper lateral class was confused with the lower anterior class. In a previous study on texture-based ocular recognition, superior performance was achieved with 50 sample photos [35]; however, that data size may not be comparable to dentition.
The proposed CNN method showed promising overall performance owing to the incorporation of data augmentation and texture extraction features. Using DWT significantly improved CNN performance. Furthermore, the intraoral scanner served as a convenient tool for recording tooth details with high accuracy. A limitation of this study is that progressive recording was not tested, although such precise records might require periodic updates to compensate for surface loss [5,18]. In the future, this method will be investigated with full-arch scans, and an automated system will be developed for sorting dental charts. In conclusion, texture-based classification can greatly improve biometric, forensic, and personalized dental applications.

5. Conclusions

Texture-based automatic classification is a promising biometric application. The effectiveness of the novel CNN classification model based on the discrete wavelet transform was validated with an accuracy of 80%. The proposed method has potential in forensics and prosthodontics. Future research will extend it to in vivo full-arch studies.

Author Contributions

O.S.: conceptualization, data curation, investigation, visualization, software, writing—original draft. K.N.: conceptualization, formal analysis, investigation, methodology, supervision, writing—review and editing. M.M.: data curation, project administration, resources, supervision, validation, writing—original draft. W.Y.: methodology, validation, writing—original draft. H.M.: conceptualization, investigation, writing—review and editing. K.F.: methodology, resources, supervision, validation, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This research was performed in line with the principles of the Declaration of Helsinki. It was approved by the Commission of Ethics of Tokyo Medical and Dental University, file number (2020, 48).

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available on request but may be restricted for reasons such as privacy or ethical concerns.

Acknowledgments

The authors express their gratitude to Shunichi Shibata, Maxillofacial Anatomy Department, Tokyo Medical and Dental University for his cooperation in this study.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

There are a few key learning points, discussed below:
  • The impact of data augmentation is clear from the comparison of configurations 1 and 2. All other parameters are identical; the only difference is data augmentation, which improves accuracy by around 11.3%.
  • The role of textural features is evident from the comparison of configurations 2 and 4. The accuracy of configuration 4 is 13.3% higher than that of configuration 2.
  • The DWT level also plays a crucial role. For example, configuration 3 uses DWT level 3 and configuration 4 uses DWT level 2. As their comparative results show, the accuracy of configuration 4 is 4.2% higher than that of configuration 3. Notably, the image size is larger in configuration 4, yet it still performs better; we can conclude that DWT level 2 is the better performer in this case.
  • Another important performance element is the number of layers. The only difference between configurations 4 and 5 is the number of layers. As the table shows, the accuracy of configuration 5 is 3.6% higher than that of configuration 4, owing to four extra layers. Hence, additional layers also improve accuracy.
  • One last observation from these data is the adverse impact of image size. Comparing the accuracies of configurations 5 and 6 shows that a larger image size adversely affects the performance of the classifier.
The table below shows the precision, recall, F-measure, and accuracy scores. The average precision was 0.8, the average recall was 0.8, the average F-measure was 0.8, and the overall average accuracy was 0.8.
Table A1. Classifier evaluation metrics: precision, recall, F-measure, and overall accuracy for each featured class.

Teeth Class                           Precision   Recall   F-Measure   Accuracy
Lower Incisor Classification (A)      0.9         0.8      0.9         0.9
Lower Canine Classification (B)       0.9         0.8      1           1
Lower Premolar Classification (C)     0.8         0.8      0.8         0.8
Lower Molar Classification (D)        1           0.8      0.9         0.9
Upper Canine Classification (G)       0.3         0.3      0.4         0.3
Upper Central Classification (E)      0.9         1        1           1
Upper Lateral Classification (F)      0.5         1        0.6         0.6
Upper Premolar Classification (H)     0.7         0.7      0.8         0.8
Upper Molar Classification (I)        0.9         0.8      0.9         0.9
Average                               0.8         0.8      0.8         0.8

References

  1. Adserias-Garriga, J.; Thomas, C.; Ubelaker, D.H.; Zapico, S.C. When Forensic Odontology Met Biochemistry: Multidisciplinary Approach in Forensic Human Identification. Arch. Oral Biol. 2018, 87, 7–14.
  2. Reesu, G.V.; Woodsend, B.; Mânica, S.; Revie, G.F.; Brown, N.L.; Mossey, P.A. Automated Identification from Dental Data (AutoIDD): A New Development in Digital Forensics. Forensic Sci. Int. 2020, 309, 110218.
  3. Chouhan, S.; Sansanwal, M.; Bhateja, S.; Arora, G. Ameloglyphics: A Feasible Forensic Tool in Dentistry. J. Oral Med. Oral Surg. Oral Pathol. Oral Radiol. 2020, 5, 119–120.
  4. Darwin, D.; Sakthivel, S.; Castelino, R.L.; Babu, G.S.; Asan, M.F.; Sarkar, A.S. Oral Cavity: A Forensic Kaleidoscope. J. Health Allied Sci. NU 2021, 12, 7–12.
  5. Sha, S.K.; Rao, B.V.; Rao, M.S.; Kumari, K.H.; Chinna, S.K.; Sahu, D. Are Tooth Prints a Hard Tissue Equivalence of Finger Print in Mass Disaster: A Rationalized Review. J. Pharm. Bioallied Sci. 2017, 9, S29–S33.
  6. Albernaz Neves, J.; Antunes-Ferreira, N.; Machado, V.; Botelho, J.; Proença, L.; Quintas, A.; Sintra Delgado, A.; Mendes, J.J. An Umbrella Review of the Evidence of Sex Determination Procedures in Forensic Dentistry. J. Pers. Med. 2022, 12, 787.
  7. Wang, L.; Mao, J.; Hu, Y.; Sheng, W. Tooth Identification Based on Teeth Structure Feature. Syst. Sci. Control Eng. 2020, 8, 521–533.
  8. Divakar, K.P. Forensic Odontology: The New Dimension in Dental Analysis. Int. J. Biomed. Sci. 2017, 13, 1–5.
  9. Chugh, A.; Narwal, A. Oral Mark in the Application of an Individual Identification: From Ashes to Truth. J. Forensic Dent. Sci. 2017, 9, 51–55.
  10. Bjelopavlovic, M.; Zeigner, A.-K.; Hardt, J.; Petrowski, K. Forensic Dental Age Estimation: Development of New Algorithm Based on the Minimal Necessary Databases. J. Pers. Med. 2022, 12, 1280.
  11. Eto, N.; Yamazoe, J.; Tsuji, A.; Wada, N.; Ikeda, N. Development of an Artificial Intelligence-Based Algorithm to Classify Images Acquired with an Intraoral Scanner of Individual Molar Teeth into Three Categories. PLoS ONE 2022, 17, e0261870.
  12. Wheeler, R.C. Dental Anatomy and Physiology, 3rd ed.; W.B. Saunders: Philadelphia, PA, USA, 1963.
  13. Winkler, J.; Gkantidis, N. Trueness and Precision of Intraoral Scanners in the Maxillary Dental Arch: An in Vivo Analysis. Sci. Rep. 2020, 10, 1172.
  14. Hori, M.; Hori, T.; Ohno, Y.; Tsuruta, S.; Iwase, H.; Kawai, T. A Novel Identification Method Using Perceptual Degree of Concordance of Occlusal Surfaces Calculated by a Python Program. Forensic Sci. Int. 2020, 313, 110358.
  15. Martínez-Rodríguez, C.; Patricia, J.-P.; Ricardo, O.-A.; Alejandro, I.-L. Personalized Dental Medicine: Impact of Intraoral and Extraoral Clinical Variables on the Precision and Efficiency of Intraoral Scanning. J. Pers. Med. 2020, 10, 92.
  16. Nulty, A.B. A Comparison of Full Arch Trueness and Precision of Nine Intra-Oral Digital Scanners and Four Lab Digital Scanners. Dent. J. 2021, 9, 75.
  17. Bernauer, S.A.; Zitzmann, N.U.; Joda, T. The Use and Performance of Artificial Intelligence in Prosthodontics: A Systematic Review. Sensors 2021, 21, 6628.
  18. Joda, T.; Zitzmann, N.U. Personalized Workflows in Reconstructive Dentistry—Current Possibilities and Future Opportunities. Clin. Oral Investig. 2022, 26, 4283–4290.
  19. Chau, R.C.W.; Chong, M.; Thu, K.M.; Chu, N.S.P.; Koohi-Moghadam, M.; Hsung, R.T.C.; McGrath, C.; Lam, W.Y.H. Artificial Intelligence-Designed Single Molar Dental Prostheses: A Protocol of Prospective Experimental Study. PLoS ONE 2022, 17, e0268535.
  20. Bayraktar, Y.; Ayan, E. Diagnosis of Interproximal Caries Lesions with Deep Convolutional Neural Network in Digital Bitewing Radiographs. Clin. Oral Investig. 2022, 26, 623–632.
  21. Chung, M.; Lee, J.; Park, S.; Lee, M.; Lee, C.E.; Lee, J.; Shin, Y.G. Individual Tooth Detection and Identification from Dental Panoramic X-Ray Images via Point-Wise Localization and Distance Regularization. Artif. Intell. Med. 2021, 111, 101996.
  22. Schwendicke, F.; Krois, J. Data Dentistry: How Data Are Changing Clinical Care and Research. J. Dent. Res. 2021, 101, 21–29.
  23. Chen, H.; Zhang, K.; Lyu, P.; Li, H.; Zhang, L.; Wu, J.; Lee, C.H. A Deep Learning Approach to Automatic Teeth Detection and Numbering Based on Object Detection in Dental Periapical Films. Sci. Rep. 2019, 9, 3840.
  24. Miki, Y.; Muramatsu, C.; Hayashi, T.; Zhou, X.; Hara, T.; Katsumata, A.; Fujita, H. Classification of Teeth in Cone-Beam CT Using Deep Convolutional Neural Network. Comput. Biol. Med. 2017, 80, 24–29.
  25. Niño-Sandoval, T.C.; Vasconcelos, B.C. Biotypic Classification of Facial Profiles Using Discrete Cosine Transforms on Lateral Radiographs. Arch. Oral Biol. 2021, 131, 105249.
  26. Reesu, G.V.; Mânica, S.; Revie, G.F.; Brown, N.L.; Mossey, P.A. Forensic Dental Identification Using Two-Dimensional Photographs of a Smile and Three-Dimensional Dental Models: A 2D-3D Superimposition Method. Forensic Sci. Int. 2020, 313, 110361.
  27. Reesu, G.V.; Brown, N.L. Application of 3D Imaging and Selfies in Forensic Dental Identification. J. Forensic Leg. Med. 2022, 89, 102354.
  28. Armi, L.; Fekri-Ershad, S. Texture Image Analysis and Texture Classification Methods—A Review. arXiv 2019, arXiv:1904.06554.
  29. Alhammadi, M.; Al-Mashraqi, A.; Alnami, R.; Ashqar, N.; Alamir, O.; Halboub, E.; Reda, R.; Testarelli, L.; Patil, S. Accuracy and Reproducibility of Facial Measurements of Digital Photographs and Wrapped Cone Beam Computed Tomography (CBCT) Photographs. Diagnostics 2021, 11, 757.
  30. University of Florida. Introduction to the Discrete Wavelet Transform (DWT). Mach. Learn. Lab. 2004, 3, 1–8.
  31. Kociolek, M.; Materka, A.; Strzelecki, M.; Szczypinski, P. Discrete Wavelet Transform—Derived Features for Digital Image Texture Analysis. Int. Conf. Signals Electron. Syst. 2001, 2, 99–104.
  32. Daniel, W.W. Biostatistics: A Foundation for Analysis in the Health Sciences, 9th ed.; Wiley & Sons: Hoboken, NJ, USA, 1999; ISBN 978-0-470-10582-5.
  33. de Almeida Gonçalves, M.; Silva, B.L.G.; Conte, M.B.; Campos, J.Á.D.B.; de Oliveira Capote, T.S. Identification of Lower Central Incisors. In Dental Anatomy; IntechOpen: London, UK, 2018.
  34. van Rossum, G.; Drake, F.L. Python 3 Reference Manual; CreateSpace: Scotts Valley, CA, USA, 2009.
  35. Derakhshani, R.; Ross, A. A Texture-Based Neural Network Classifier for Biometric Identification Using Ocular Surface Vasculature. In Proceedings of the IEEE International Conference on Neural Networks, Orlando, FL, USA, 12–17 August 2007; pp. 2982–2987.
  36. Fan, C.; Chen, M.; Wang, X.; Wang, J.; Huang, B. A Review on Data Preprocessing Techniques Toward Efficient and Reliable Knowledge Discovery From Building Operational Data. Front. Energy Res. 2021, 9, 652801.
  37. Tabik, S.; Peralta, D.; Herrera-Poyatos, A.; Herrera, F. A Snapshot of Image Pre-Processing for Convolutional Neural Networks: Case Study of MNIST. Int. J. Comput. Intell. Syst. 2017, 10, 555–568.
  38. Joshi, S. Discrete Wavelet Transform Based Approach for Touchless Fingerprint Recognition. In Proceedings of the International Conference on Data Science and Applications, Kolkata, India, 26–27 March 2022; pp. 397–412.
  39. Mallat, S.G. Multifrequency Channel Decompositions of Images and Wavelet Models. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 2091–2110.
  40. Mesejo, P.; Martos, R.; Ibáñez, Ó.; Novo, J.; Ortega, M. A Survey on Artificial Intelligence Techniques for Biomedical Image Analysis in Skeleton-Based Forensic Human Identification. Appl. Sci. 2020, 10, 4703.
  41. Sohoni, N.S.; Aberger, C.R.; Leszczynski, M.; Zhang, J.; Ré, C. Low-Memory Neural Network Training: A Technical Report. arXiv 2019, arXiv:1904.10631.
  42. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of Deep Learning: Concepts, CNN Architectures, Challenges, Applications, Future Directions. J. Big Data 2021, 8, 53.
  43. Al-jabery, K.K.; Obafemi-Ajayi, T.; Olbricht, G.R.; Wunsch II, D.C. Data Analysis and Machine Learning Tools in MATLAB and Python. In Computational Learning Approaches to Data Analytics in Biomedical Applications; Elsevier: Amsterdam, The Netherlands, 2020; pp. 231–290.
  44. Liu, F.; Li, Z.; Quinn, W. Teeth Classification Based on Haar Wavelet Transform and Support Vector Machine; Atlantis Press: Dordrecht, The Netherlands, 2018.
  45. Kataoka, S.; Nishimura, Y.; Sadan, A. Nature’s Morphology: An Atlas of Tooth Shape and Form; Quintessence Publishing: Batavia, IL, USA, 2002; ISBN 9780867154115.
Figure 1. Process of the convolutional neural network (CNN)-based AI model used in the study: data preprocessing (including segmentation), followed by texture mapping, and finally model design and evaluation.
Figure 2. Levels of discrete wavelet transform (DWT) texture extraction. Three texture levels (levels 1, 2, and 3) were tested to reach the highest accuracy.
Figure 3. Architecture of the proposed novel convolutional neural network (CNN) model, which includes four convolutional layers, three pooling layers, two dropout layers, and flatten and dense layers.
Figure 4. Comparison of deep learning configurations with different parameter combinations, with and without augmentation. Configuration 5 achieved the highest accuracy.
Figure 5. Confusion matrix of the convolutional neural network (CNN)-based AI model for all teeth classes (Classes A to I). Correct classifications lie on the diagonal, with classes B and E showing the highest accuracy of 1 (100%) and class G showing the lowest accuracy of 0.32 (32%).
Table 1. Tooth classification and number of samples.

Label   Tooth Class Name      Number of Images
0       Lower Anterior (A)    64
1       Lower Canine (B)      87
2       Lower Premolar (C)    77
3       Lower Molar (D)       71
4       Upper Central (E)     75
5       Upper Lateral (F)     49
6       Upper Canine (G)      34
7       Upper Premolar (H)    80
8       Upper Molar (I)       63
