The Application of Artificial Intelligence for Tooth Segmentation in CBCT Images: A Systematic Review
Abstract
1. Introduction
2. Methods
2.1. Research Questions
2.2. Search Strategy
2.3. Exclusion Criteria
2.4. Data Extraction
2.5. Risk of Bias Assessment
3. Results
3.1. Search Results and Study Selection
3.2. Risk of Bias and Applicability Concerns
3.3. Study Characteristics
3.4. Evaluation Metrics
3.5. Performance of AI Models
4. Discussion
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
AI | Artificial intelligence |
ASSD | Average Symmetric Surface Distance |
BF Score | Boundary F1 Score |
CBCT | Cone Beam Computed Tomography |
CNNs | Convolutional Neural Networks |
CRF | Conditional Random Field |
DA | Detection Accuracy |
DSC | Dice Similarity Coefficient |
FA | Identification Accuracy |
FCNs | Fully Convolutional Networks |
FPN | Feature Pyramid Network |
GT | Ground Truth |
HD | Hausdorff Distance |
IoU | Intersection over Union |
JS | Jaccard Coefficient |
LSM | Level Set Method |
MADs | Mean Absolute Deviations |
MS-D | Mixed-Scale Dense |
MSSD | Maximum Symmetric Surface Distance |
OccAcc | Occupancy Accuracy |
OIR | Object Include Ratio |
PA | Pixel Accuracy |
PRISMA | Preferred Reporting Items for Systematic Reviews and Meta-Analyses |
QUADAS-2 | Quality Assessment Tool for Diagnostic Accuracy Studies-2 |
ROI | Region of Interest |
RPN | Region Proposal Network |
RVD | Relative Volume Difference |
SBD | Symmetric Best Dice |
VD | Volume Difference |
WT | Watershed Transform |
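Several of the abbreviations above denote voxel-overlap and surface-distance metrics that recur throughout the results tables. For orientation, the usual set-based formulations are sketched below, with P the predicted voxel set, G the ground-truth voxel set, and ∂P, ∂G their surfaces; individual studies may apply weighted or slightly modified variants.

```latex
% Standard definitions (assumed common formulations; per-study implementations may vary).
\mathrm{DSC}(P,G) = \frac{2\,\lvert P \cap G \rvert}{\lvert P \rvert + \lvert G \rvert},
\qquad
\mathrm{IoU}(P,G) = \mathrm{JS}(P,G) = \frac{\lvert P \cap G \rvert}{\lvert P \cup G \rvert},
\qquad
\mathrm{HD}(P,G) = \max\!\left\{ \max_{p \in \partial P} \min_{g \in \partial G} \lVert p-g \rVert,\;
                                 \max_{g \in \partial G} \min_{p \in \partial P} \lVert g-p \rVert \right\},
\qquad
\mathrm{ASSD}(P,G) = \frac{\sum_{p \in \partial P} d(p,\partial G) + \sum_{g \in \partial G} d(g,\partial P)}
                          {\lvert \partial P \rvert + \lvert \partial G \rvert}
```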
References
- Beek, D.-M.; Baan, F.; Liebregts, J.; Nienhuijs, M.; Bergé, S.; Maal, T.; Xi, T. A learning curve in 3D virtual surgical planned orthognathic surgery. Clin. Oral Investig. 2023, 27, 3907–3915. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Q.; Gong, Y.; Liu, F.; Wang, J.; Xiong, X.; Liu, Y. Association of temporomandibular joint osteoarthrosis with dentoskeletal morphology in males: A cone-beam computed tomography and cephalometric analysis. Orthod. Craniofac Res. 2023, 26, 458–467. [Google Scholar] [CrossRef] [PubMed]
- Algahtani, F.N.; Hebbal, M.; Alqarni, M.M.; Alaamer, R.; Alqahtani, A.; Almohareb, R.A.; Barakat, R.; Abdlhafeez, M.M. Prevalence of bone loss surrounding dental implants as detected in cone beam computed tomography: A cross-sectional study. PeerJ 2023, 11, e15770. [Google Scholar] [CrossRef] [PubMed]
- Casiraghi, M.; Scarone, P.; Bellesi, L.; Piliero, M.A.; Pupillo, F.; Gaudino, D.; Fumagalli, G.; Del Grande, F.; Presilla, S. Effective dose and image quality for intraoperative imaging with a cone-beam CT and a mobile multi-slice CT in spinal surgery: A phantom study. Phys. Med. 2021, 81, 9–19. [Google Scholar] [CrossRef] [PubMed]
- Weese, J.; Lorenz, C. Four challenges in medical image analysis from an industrial perspective. Med. Image Anal. 2016, 33, 44–49. [Google Scholar] [CrossRef] [PubMed]
- Barriviera, M.; Duarte, W.R.; Januário, A.L.; Faber, J.; Bezerra, A.C.B. A new method to assess and measure palatal masticatory mucosa by cone-beam computerized tomography. J. Clin. Periodontol. 2009, 36, 564–568. [Google Scholar] [CrossRef] [PubMed]
- Rad, A.; Rahim, M.S.M.; Rehman, A.; Altameem, A.; Saba, T. Evaluation of Current Dental Radiographs Segmentation Approaches in Computer-aided Applications. Iete Tech. Rev. 2013, 30, 210–222. [Google Scholar] [CrossRef]
- Li, S.; Fevens, T.; Krzyżak, A.; Jin, C.; Li, S. Semi-automatic computer aided lesion detection in dental X-rays using variational level set. Pattern Recognit. 2007, 40, 2861–2873. [Google Scholar] [CrossRef]
- Said, E.; Nassar, D.; Fahmy, G.; Ammar, H. Teeth segmentation in digitized dental X-ray films using mathematical morphology. IEEE Trans. Inf. Forensics Secur. 2006, 1, 178–189. [Google Scholar] [CrossRef]
- Barandiaran, I.; Macía, I.; Berckmann, E.; Wald, D.; Dupillier, M.P.; Paloc, C.; Grana, M. An Automatic Segmentation and Reconstruction of Mandibular Structures from CT-Data. In Proceedings of the 10th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2009), Burgos, Spain, 23–26 September 2009. [Google Scholar]
- Evain, T.; Ripoche, X.; Atif, J.; Bloch, I. Semi-Automatic Teeth Segmentation in Cone-Beam Computed Tomography by Graph-Cut with Statistical Shape Priors. In Proceedings of the IEEE 14th International Symposium on Biomedical Imaging (ISBI)—From Nano to Macro, Melbourne, Australia, 18–21 April 2017. [Google Scholar]
- Modi, C.K.; Desai, N.P. A Simple and Novel Algorithm for Automatic Selection of Roi for Dental Radiograph Segmentation. In Proceedings of the 24th Canadian Conference on Electrical and Computer Engineering (CCECE), Niagara Falls, Canada, 8–11 May 2011. [Google Scholar]
- Indraswari, R.; Kurita, T.; Arifin, A.Z.; Suciati, N.; Astuti, E.R.; Navastara, D.A. 3D Region Merging for Segmentation of Teeth on Cone-Beam Computed Tomography Images. In Proceedings of the Joint 10th International Conference on Soft Computing and Intelligent Systems (SCIS)/19th International Symposium on Advanced Intelligent Systems (ISIS), Toyama, Japan, 5–8 December 2018. [Google Scholar]
- Rad, A.E.; Rahim, M.S.M.; Norouzi, A. Digital dental X-ray Image Segmentation and Feature Extraction. Telkomnika Indones. J. Electr. Eng. 2013, 11, 3109–3114. [Google Scholar]
- Gan, Y.; Xia, Z.; Xiong, J.; Zhao, Q.; Hu, Y.; Zhang, J. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model. Med. Phys. 2015, 42, 14–27. [Google Scholar] [CrossRef]
- Jiang, B.; Zhang, S.; Shi, M.; Liu, H.-L.; Shi, H. Alternate Level Set Evolutions With Controlled Switch for Tooth Segmentation. IEEE Access 2022, 10, 76563–76572. [Google Scholar] [CrossRef]
- Ahmed, N.; Abbasi, M.S.; Zuberi, F.; Qamar, W.; Halim MS, B.; Maqsood, A.; Alam, M.K. Artificial Intelligence Techniques: Analysis, Application, and Outcome in Dentistry-A Systematic Review. BioMed Res. Int. 2021, 2021, 9751564. [Google Scholar] [CrossRef] [PubMed]
- Karobari, M.I.; Adil, A.H.; Basheer, S.N.; Murugesan, S.; Savadamoorthi, K.S.; Mustafa, M.; Abdulwahed, A.; Almokhatieb, A.A. Evaluation of the Diagnostic and Prognostic Accuracy of Artificial Intelligence in Endodontic Dentistry: A Comprehensive Review of Literature. Comput. Math. Methods Med. 2023, 2023, 7049360. [Google Scholar] [CrossRef] [PubMed]
- Garcia-Garcia, A.; Orts-Escolano, S.; Oprea, S.; Villena-Martinez, V.; Martinez-Gonzalez, P.; Garcia-Rodriguez, J. A survey on deep learning techniques for image and video semantic segmentation. Appl. Soft Comput. 2018, 70, 41–65. [Google Scholar] [CrossRef]
- Fernandez, K.; Chang, C. Teeth/palate and interdental segmentation using artificial neural networks. In Proceedings of the Artificial Neural Networks in Pattern Recognition: 5th INNS IAPR TC 3 GIRPR Workshop, ANNPR 2012, Trento, Italy, 17–19 September 2012; Proceedings 5. Springer: Berlin/Heidelberg, Germany. [Google Scholar]
- Deleat-Besson, R.; Le, C.; Al Turkestani, N.; Zhang, W.; Dumont, M.; Brosset, S.; Soroushmehr, R. Automatic Segmentation of Dental Root Canal and Merging with Crown Shape. In Proceedings of the 43rd Annual International Conference of the IEEE-Engineering-in-Medicine-and-Biology-Society (IEEE EMBC), Virtual Event, 1–5 November 2021. [Google Scholar]
- Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651. [Google Scholar] [CrossRef] [PubMed]
- Miki, Y.; Muramatsu, C.; Hayashi, T.; Zhou, X.; Hara, T.; Katsumata, A.; Fujita, H. Classification of teeth in cone-beam CT using deep convolutional neural network. Comput. Biol. Med. 2017, 80, 24–29. [Google Scholar] [CrossRef] [PubMed]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer: Berlin/Heidelberg, Germany. [Google Scholar]
- Ronneberger, O.; Fischer, P.; Brox, T. Dental X-ray Image Segmentation Using a U-Shaped Deep Convolutional Network. In Proceedings of the International Symposium on Biomedical Imaging, Brooklyn, NY, USA, 16–19 April 2015. [Google Scholar]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017. [Google Scholar]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Syst. Rev. 2021, 10, 89. [Google Scholar] [CrossRef] [PubMed]
- Yin, Y.; Xu, W.; Chen, L.; Wu, H. CoT-UNet++: A medical image segmentation method based on contextual transformer and dense connection. Math. Biosci. Eng. 2023, 20, 8320–8336. [Google Scholar] [CrossRef]
- Alqahtani, K.A.; Jacobs, R.; Smolders, A.; Van Gerven, A.; Willems, H.; Shujaat, S.; Shaheen, E. Deep convolutional neural network-based automated segmentation and classification of teeth with orthodontic brackets on cone-beam computed-tomographic images: A validation study. Eur. J. Orthod. 2023, 45, 169–174. [Google Scholar] [CrossRef]
- Xie, R.; Yang, Y.; Chen, Z. WITS: Weakly-supervised individual tooth segmentation model trained on box-level labels. Pattern Recognit. 2023, 133, 108974. [Google Scholar] [CrossRef]
- Wang, Y.; Xia, W.; Yan, Z.; Zhao, L.; Bian, X.; Liu, C.; Qi, Z.; Zhang, S.; Tang, Z. Root canal treatment planning by automatic tooth and root canal segmentation in dental CBCT with deep multi-task feature learning. Med. Image Anal. 2023, 85, 102750. [Google Scholar] [CrossRef] [PubMed]
- Chen, Z.; Chen, S.; Hu, F. CTA-UNet: CNN-transformer architecture UNet for dental CBCT images segmentation. Phys. Med. Biol. 2023, 68, 175042. [Google Scholar] [CrossRef] [PubMed]
- Yang, H.; Wang, X.; Li, G. Tooth and Pulp Chamber Automatic Segmentation with Artificial Intelligence Network and Morphometry Method in Cone-beam CT. Int. J. Morphol. 2022, 40, 407–413. [Google Scholar] [CrossRef]
- Xie, L.; Liu, B.; Cao, Y.; Yang, C. Automatic Individual Tooth Segmentation in Cone-Beam Computed Tomography Based on Multi-Task CNN and Watershed Transform. In Proceedings of the 24th IEEE International Conference on High Performance Computing and Communications, 8th IEEE International Conference on Data Science and Systems, 20th IEEE International Conference on Smart City and 8th IEEE International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application, HPCC/DSS/SmartCity/DependSys 2022, Hainan, China, 18–20 December 2022. [Google Scholar]
- Lee, J.; Chung, M.; Lee, M.; Shin, Y.G. Tooth instance segmentation from cone-beam CT images through point-based detection and Gaussian disentanglement. Multimed. Tools Appl. 2022, 81, 18327–18342. [Google Scholar] [CrossRef]
- Khan, S.; Mukati, A.; Rizvi, S.S.H.; Yazdanie, N. Tooth Segmentation in 3D Cone-Beam CT Images Using Deep Convolutional Neural Network. Neural Netw. World 2022, 32, 301–318. [Google Scholar] [CrossRef]
- Cui, Z.; Fang, Y.; Mei, L.; Zhang, B.; Yu, B.; Liu, J.; Jiang, C.; Sun, Y.; Ma, L.; Huang, J.; et al. A fully automatic AI system for tooth and alveolar bone segmentation from cone-beam CT images. Nat. Commun. 2022, 13, 2096. [Google Scholar] [CrossRef] [PubMed]
- Hsu, K.; Yuh, D.-Y.; Lin, S.-C.; Lyu, P.-S.; Pan, G.-X.; Zhuang, Y.-C.; Chang, C.-C.; Peng, H.-H.; Lee, T.-Y.; Juan, C.-H.; et al. Improving performance of deep learning models using 3.5D U-Net via majority voting for tooth segmentation on cone beam computed tomography. Sci. Rep. 2022, 12, 19809. [Google Scholar] [CrossRef] [PubMed]
- do Nascimento Gerhardt, M.; Fontenele, R.C.; Leite, A.F.; Lahoud, P.; Van Gerven, A.; Willems, H.; Smolders, A.; Beznik, T.; Jacobs, R. Automated detection and labelling of teeth and small edentulous regions on cone-beam computed tomography using convolutional neural networks. J. Dent. 2022, 122, 104139. [Google Scholar] [CrossRef] [PubMed]
- Fontenele, R.C.; Gerhardt, M.D.N.; Pinto, J.C.; Van Gerven, A.; Willems, H.; Jacobs, R.; Freitas, D.Q. Influence of dental fillings and tooth type on the performance of a novel artificial intelligence-driven tool for automatic tooth segmentation on CBCT images—A validation study. J. Dent. 2022, 119, 104069. [Google Scholar] [CrossRef]
- Fang, Y.; Cui, Z.; Ma, L.; Mei, L.; Zhang, B.; Zhao, Y.; Jiang, Z.; Zhan, Y.; Pan, Y.; Zhu, M.; et al. Curvature-Enhanced Implicit Function Network for High-quality Tooth Model Generation from CBCT Images. In Proceedings of the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Singapore, 18–22 September 2022. [Google Scholar]
- Dou, W.; Gao, S.; Mao, D.; Dai, H.; Zhang, C.; Zhou, Y. Tooth instance segmentation based on capturing dependencies and receptive field adjustment in cone beam computed tomography. Comput. Animat. Virtual Worlds 2022, 33, e2100. [Google Scholar] [CrossRef]
- Jang, T.J.; Kim, K.C.; Cho, H.C.; Seo, J.K. A Fully Automated Method for 3D Individual Tooth Identification and Segmentation in Dental CBCT. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 6562–6568. [Google Scholar] [CrossRef] [PubMed]
- Cui, W.; Wang, Y.; Zhang, Q.; Zhou, H.; Song, D.; Zuo, X.; Jia, G.; Zeng, L. CTooth: A Fully Annotated 3D Dataset and Benchmark for Tooth Volume Segmentation on Cone Beam Computed Tomography Images. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2022. [Google Scholar]
- Cui, W.; Wang, Y.; Li, Y.; Song, D.; Zuo, X.; Wang, J.; Zhang, Y.; Zhou, H.; Chong, B.S.; Zeng, L.; et al. CTooth+: A Large-Scale Dental Cone Beam Computed Tomography Dataset and Benchmark for Tooth Volume Segmentation. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Cham, Switzerland, 2022. [Google Scholar]
- Al-Sarem, M.; Al-Asali, M.; Alqutaibi, A.Y.; Saeed, F. Enhanced Tooth Region Detection Using Pretrained Deep Learning Models. Int. J. Environ. Res. Public Health 2022, 19, 15414. [Google Scholar] [CrossRef] [PubMed]
- Yang, Y.; Xie, R.; Jia, W.; Chen, Z.; Yang, Y.; Xie, L.; Jiang, B. Accurate and automatic tooth image segmentation model with deep convolutional neural networks and level set method. Neurocomputing 2021, 419, 108–125. [Google Scholar] [CrossRef]
- Shaheen, E.; Leite, A.; Alqahtani, K.A.; Smolders, A.; Van Gerven, A.; Willems, H.; Jacobs, R. A novel deep learning system for multi-class tooth segmentation and classification on cone beam computed tomography. A validation study: Deep learning for teeth segmentation and classification. J. Dent. 2021, 115, 103865. [Google Scholar] [CrossRef] [PubMed]
- Lin, X.; Fu, Y.; Ren, G.; Yang, X.; Duan, W.; Chen, Y.; Zhang, Q. Micro–Computed Tomography–Guided Artificial Intelligence for Pulp Cavity and Tooth Segmentation on Cone-beam Computed Tomography. J. Endod. 2021, 47, 1933–1941. [Google Scholar] [CrossRef] [PubMed]
- Cui, Z.; Zhang, B.; Lian, C.; Li, C.; Yang, L.; Wang, W.; Zhu, M.; Shen, D. Hierarchical Morphology-Guided Tooth Instance Segmentation from CBCT Images. In Proceedings of the Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Virtual Event, 28–30 June 2021. [Google Scholar]
- Lahoud, P.; EzEldeen, M.; Beznik, T.; Willems, H.; Leite, A.; Van Gerven, A.; Jacobs, R. Artificial Intelligence for Fast and Accurate 3-Dimensional Tooth Segmentation on Cone-beam Computed Tomography. J. Endod. 2021, 47, 827–835. [Google Scholar] [CrossRef] [PubMed]
- Duan, W.; Chen, Y.; Zhang, Q.; Lin, X.; Yang, X. Refined tooth and pulp segmentation using U-Net in CBCT image. Dentomaxillofacial Radiol. 2021, 50, 20200251. [Google Scholar] [CrossRef] [PubMed]
- Wang, H.; Minnema, J.; Batenburg, K.J.; Forouzanfar, T.; Hu, F.J.; Wu, G. Multiclass CBCT Image Segmentation for Orthodontics with Deep Learning. J. Dent. Res. 2021, 100, 943–949. [Google Scholar] [CrossRef]
- Wu, X.; Chen, H.; Huang, Y.; Guo, H.; Qiu, T.; Wang, L. Center-Sensitive and Boundary-Aware Tooth Instance Segmentation and Classification from Cone-Beam CT. In Proceedings of the IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020. [Google Scholar]
- Rao, Y.; Wang, Y.; Meng, F.; Pu, J.; Sun, J.; Wang, Q. A Symmetric Fully Convolutional Residual Network With DCRF for Accurate Tooth Segmentation. IEEE Access 2020, 8, 92028–92038. [Google Scholar] [CrossRef]
- Lee, S.; Woo, S.; Yu, J.; Seo, J.; Lee, J.; Lee, C. Automated CNN-Based Tooth Segmentation in Cone-Beam CT for Dental Implant Planning. IEEE Access 2020, 8, 50507–50518. [Google Scholar] [CrossRef]
- Chung, M.; Lee, M.; Hong, J.; Park, S.; Lee, J.; Lee, J.; Yang, I.-H.; Lee, J.; Shin, Y.G. Pose-aware instance segmentation framework from cone beam CT images for tooth segmentation. Comput. Biol. Med. 2020, 120, 103720. [Google Scholar] [CrossRef] [PubMed]
- Chen, Y.; Du, H.; Yun, Z.; Yang, S.; Dai, Z.; Zhong, L.; Feng, Q.; Yang, W. Automatic Segmentation of Individual Tooth in Dental CBCT Images From Tooth Surface Map by a Multi-Task FCN. IEEE Access 2020, 8, 97296–97309. [Google Scholar] [CrossRef]
- Ezhov, M.; Zakirov, A.; Gusarev, M. Coarse-to-Fine Volumetric Segmentation of Teeth in Cone-Beam CT. In Proceedings of the 16th IEEE International Symposium on Biomedical Imaging (ISBI), Venice, Italy, 8–11 April 2019. [Google Scholar]
- Cui, Z.; Li, C.; Wang, W. ToothNet: Automatic Tooth Instance Segmentation and Identification from Cone Beam CT Images. In Proceedings of the 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
- Gou, M.; Rao, Y.; Zhang, M.; Sun, J.; Cheng, K. Automatic image annotation and deep learning for tooth CT image segmentation. In Proceedings of the Image and Graphics: 10th International Conference, ICIG 2019, Beijing, China, 23–25 August 2019; Proceedings, Part II 10. Springer: Berlin/Heidelberg, Germany. [Google Scholar]
- Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
- Milletari, F.; Navab, N.; Ahmadi, S.A. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. In Proceedings of the 4th IEEE International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016. [Google Scholar]
- Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2016, Athens, Greece, 17–21 October 2016; Springer International Publishing: Cham, Switzerland. [Google Scholar]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the 18th IEEE/CVF International Conference on Computer Vision (ICCV), Virtual Event, 11–17 October 2021. [Google Scholar]
- Lin, A.; Chen, B.; Xu, J.; Zhang, Z.; Lu, G.; Zhang, D. DS-TransUNet: Dual Swin Transformer U-Net for Medical Image Segmentation. IEEE Trans. Instrum. Meas. 2022, 71, 4005615. [Google Scholar] [CrossRef]
- Li, Y.; Yao, T.; Pan, Y.; Mei, T. Contextual Transformer Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 1489–1500. [Google Scholar] [CrossRef]
- Grau, V.; Mewes, A.U.J.; Alcaniz, M.; Kikinis, R.; Warfield, S.K. Improved watershed transform for medical image segmentation using prior information. IEEE Trans. Med. Imaging 2004, 23, 447–458. [Google Scholar] [CrossRef]
Database | Query | Results |
---|---|---|
MEDLINE | (“artificial intelligence” OR “deep learning” OR “machine learning” OR “neural networks” OR “automatic” OR “automated”) AND (“cone-beam computed tomography” OR “CBCT” OR “3D”) AND (“tooth segment*” OR “teeth segment*”) | 154 |
Web of Science | TS = (((artificial intelligence) OR (deep learning) OR (machine learning) OR (neural networks) OR (automatic) OR (automated)) AND ((cone-beam computed tomography) OR (CBCT) OR (3D)) AND ((tooth segment*) OR (teeth segment*))) | 276 |
Scopus | TITLE-ABS-KEY (((artificial AND intelligence) OR (deep AND learning) OR (machine AND learning) OR (neural AND networks) OR (automatic) OR (automated)) AND ((cone-beam AND computed AND tomography) OR (CBCT) OR (3d)) AND ((tooth AND segment*) OR (teeth AND segment*))) | 299 |
Total | | 729 |
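In principle, the MEDLINE strategy above can be re-run programmatically through the NCBI E-utilities, for example via Biopython's Entrez module, as sketched below. The query string is copied from the table; the contact e-mail address and the retmax cap are placeholder assumptions, and the hit count will have drifted from the 154 records retrieved at the time of the original search.

```python
# Minimal sketch: re-running the MEDLINE/PubMed query from the table above
# through NCBI E-utilities (Biopython). The e-mail address and retmax value
# are placeholder assumptions, not part of the original search protocol.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI asks for a contact address

query = (
    '("artificial intelligence" OR "deep learning" OR "machine learning" '
    'OR "neural networks" OR "automatic" OR "automated") '
    'AND ("cone-beam computed tomography" OR "CBCT" OR "3D") '
    'AND ("tooth segment*" OR "teeth segment*")'
)

# esearch returns the matching PubMed IDs; Entrez.read parses the XML reply.
handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "records found")
print(record["IdList"][:10])  # first ten PubMed IDs
```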
Author | Category | Framework | Capture Method | Number of Samples | Evaluation Metrics | Model Reproducibility |
---|---|---|---|---|---|---|
Yin et al. 2023 [28] | Semantic | A contextual transformer UNet++ (CoT-UNet++) architecture that used a hybrid encoder to capture contextual information between adjacent keys together with the global context, then decoded and fused features at multiple scales through dense concatenation to obtain more accurate location information for tooth segmentation. | CBCT | 20 groups of 300 images -Training/testing split not reported | -Dice Similarity Coefficient (DSC): 0.9206 -Mean Intersection over Union (mIoU): 0.8605 -Mean Pixel Accuracy (MPA): 95.91% -True Positive Rate (TPR): 93.85% -95% Hausdorff Distance (HD): 1.06 mm -Average Symmetric Surface Distance (ASSD): 0.48 mm | Not available |
Ayidh Alqahtani et al. 2023 [29] | Instance | A multi-class deep CNN-based tool for the segmentation and classification of teeth with orthodontic brackets. The CNN model was proposed in previous research (Shaheen et al. 2021). | CBCT | 215 scans -Training: 140 -Validation: 35 -Test: 40 | -Dice Similarity Coefficient (DSC): 0.99 -Intersection over Union (IoU): 0.99 -Precision: 99% -Recall: 99% -Accuracy: 99% -95% Hausdorff Distance (HD): 0.12 mm -Segmentation time: 43.56 s | Request by contacting; Virtual Patient Creator (https://creator.relu.eu, accessed on 21 August 2023) |
Xie et al. 2023 [30] | Semantic | A deep learning detector (FCOS) located the position and size of each tooth and generated prior ellipses to constrain the level set evolution by distance; joint points were then identified from the curvature direction and each tooth was segmented. | CBCT | 10 scans (453 slices) -Training: 7 -Testing: 3 | -Dice coefficient: 0.9480 -Jaccard coefficient (JS): 0.9023 -Precision (PN): 94.84% -Boundary F1 (BF) score: 0.9795 | Code available (https://github.com/ruicx/Individual-Tooth-Segmentation-with-Rectangle-Labels, accessed on 21 August 2023) |
Wang et al. 2023 [31] | Instance | A neural network with a 3D ERFNet backbone adopted three branches to learn the spatial embedding, seed map, and identification simultaneously for tooth instance segmentation, and further obtained root canal segmentation. | CBCT | 201 volumes -Split into three folds for cross-validation | -Symmetric best dice (SBD): 0.9584 -Average instance dice (AID): 0.9425 -Identification accuracy (FA): 97.97% -Average symmetric surface distance (ASSD): 0.12 mm | Confidential |
Chen et al. 2023 [32] | Semantic | A CNN–Transformer Architecture UNet (CTA–UNet) network, which combined the advantages of CNNs and Transformers through a parallel architecture, integrated local features extracted by CNNs and global representations obtained by self-attention modules (MSAB) to enhance the segmentation performance. | CBCT | 45 volumes -Training: 27 -Validation: 9 -Testing: 9 | -Dice similarity coefficient (DSC): 0.8650 -Intersection over union (IoU): 0.7812 -95% Hausdorff Distance (HD95): 0.64 mm -Average Symmetric Surface Distance (ASSD): 0.21 mm | Request by contacting |
Yang et al. 2022 [33] | Semantic | A U-Net model was first pre-trained using five labeled classes of images and then combined with a watershed approach to effectively segment the teeth, pulp cavity, and cortical bone. | CBCT | 5 scans -Training/testing split not reported | -Dice score (DSC): 0.9859 | Not available |
Xie et al. 2022 [34] | Instance | A novel segmentation approach based on a multi-task CNN and the watershed transform (MCW). A multi-task CNN based on U-Net segmented the tooth foreground and landmarks from 2D CBCT slices, a 3D marker-controlled watershed transform separated the overlapping 3D tooth objects, and a post-processing method based on prior knowledge merged each individual tooth with its detached tooth root. | CBCT | 78 scans (38,082 slices) -Training: 39 (19,416 slices) -Validation: 14 (6820 slices) -Testing: 25 (11,846 slices) | -Dice Similarity Coefficient (DC): 0.88 -Precision (P): 98% -Recall (R): 93% -Average Symmetric Surface Distance (ASSD): 0.53 mm | Not available |
Lee et al. 2022 [35] | Instance | A two-stage point-based detection network using FCN layers followed by an encoding–decoding structure to extract feature maps, and a 3D U-Net architecture for individual tooth segmentation. Adjacent teeth were disentangled by introducing a novel Gaussian disentanglement (GD) loss function within heatmap regression. | CBCT | 120 scans -Training: 80 -Validation: 20 -Testing: 20 | -Intersection over Union (IoU): 0.704 -Precision: 93.2% -Average Precision 50 (AP50): 90.91% -Recall: 91.9% -Object Include Ratio (OIR): 96.6% | Not available |
Khan et al. 2022 [36] | Semantic | A novel deep learning model consisting of 38 layers, with 11 blocks of 3D convolutional layers each followed by batch normalization and ReLU layers. | CBCT | 70 volumes (augmented to 140 volumes by flipping) -Training: 84 -Validation: 28 -Testing: 14 | -Layers: 38 -Mean Dice score: 0.90 -Mean intersection over union (IoU): 0.60 -Validation accuracy: 95.54% -Training time: 23 h -Model size: 4.3 MB | Not available |
Cui et al. a, 2022 [37] | Instance | A deep learning-based AI system with a hierarchical morphology-guided network to segment individual teeth and a filter-enhanced network to extract alveolar bony structures. Images were preprocessed by a V-Net for ROI detection and two-stage tooth segmentation, which detected each tooth and represented it by a predicted skeleton. A multi-task learning network then predicted each tooth’s volumetric mask by simultaneously regressing the corresponding tooth apices and boundaries. | CBCT | Internal dataset 4938 scans (4215 patients) -Training and Validation: 3457 (3172 patients) -Testing: 1481 (1043 patients) External dataset 407 scans (404 patients) | Internal: -Average Dice score: 0.941 -Average sensitivity: 93.9% -Average ASD error: 0.17 mm External: -Average Dice: 0.9254 -Sensitivity: 92.1% -ASD error: 0.21 mm | Partial CBCT data available (https://pan.baidu.com/s/1LdyUA2QZvmU6ncXKl_bDTw, password:1234, accessed on 21 August 2023); Code available (https://pan.baidu.com/s/194DfSPbgi2vTIVsRa6fbmA, password:1234, accessed on 21 August 2023) |
Hsu et al. 2022 [38] | Semantic | A 3.5D U-Net was generated via majority voting over the predictions of 2D U-Nets from three orthogonal slices, a 2.5D U-Net, and a 3D U-Net under different combination strategies. | CBCT | 24 patients -Divided into 4 groups, 6 patients per group, for cross-validation | -DSC: 0.911 -Accuracy: 99.9% -Sensitivity: 88.8% -Specificity (Sp): 1.00 -PPV: 97.0% -NPV: 99.9% | Request by contacting |
Gerhardt et al. 2022 [39] | Instance | A two-stage 3D U-Net architecture to assess the accuracy of automated detection of teeth and small edentulous regions, which was proposed in previous research (Shaheen et al. 2021). | CBCT | 175 scans -Training: 140 -Testing: 35 -Validation: 46 additional scans | For fully dentate patients: -Intersection over union (IoU): 0.96 -Accuracy: 99.7% -Recall: 99.7% -Precision: 100% -95% Hausdorff Distance (95HD): 0.33 mm For patients presenting small edentulous areas: -Intersection over union (IoU): 0.97 -Accuracy: 99.% -Recall: 100% -Precision: 98.7% -95% Hausdorff Distance (95HD): 0.15 mm Time needed for human versus machine detection: -Dental specialist (median): 98 s -AI tool (median): 1.5 s | Virtual Patient Creator (https://creator.relu.eu, accessed on 21 August 2023) |
Fontenele et al. 2022 [40] | Instance | A two-stage 3D U-Net architecture to assess the influence of dental fillings on performance for tooth segmentation, which was proposed by previous research (Shaheen et al. 2021). | CBCT | 175 scans -Training: 140 -Validation: 35 -Test: 74 | -Dice similarity coefficient (DSC): 0.96 -Intersection over union (IoU): 0.92 -Accuracy: 100% -Recall: 96% -Precision: 95% -95% Hausdorff Distance (95HD): 0.27 mm | Virtual Patient Creator (https://creator.relu.eu, accessed on 21 August 2023) |
Fang et al. 2022 [41] | Instance | A novel curvature enhanced implicit function network for high-quality tooth model generation, which combines the CNN-based segmentation network (HMG–Net) with an implicit function network to generate 3D tooth models with fine-grained geometric details. | CBCT Intra-oral scanning | 50 scans -Training: 20 -Validation: 10 -Testing: 20 | -Intersection over Union (IoU): 0.8303 -Chamfer-L2: 3.00 × 10−4 -Normal Consistency (Normals): 96.25% -Occupancy accuracy (OccAcc): 79.7% | Not available |
Dou et al. 2022 [42] | Instance | A new two-stage deep learning network (TSDNet) from tooth centroid localization to tooth instance segmentation. The first stage used a centroid prediction network (V-Net framework + density-based fast search clustering algorithm) to predict tooth centroids for accurate spatial localization of individual teeth. The second stage then used a tooth instance segmentation network (a self-attention-based guidance module for tooth geometric structure information and a tooth feature integration module based on multi-scale fusion of dilated convolutions) to obtain instance-level information for individual teeth, achieving robust and accurate tooth segmentation from CBCT data. | CBCT | 40 CBCT scans -Training: 30 -Validation: 5 -Testing: 5 | -Dice: 0.952 -Jaccard: 90.2% -Detection accuracy (DA): 99.6% -Average surface distance (ASD): 0.15 mm -Hausdorff distance (HD): 2.12 mm | Not available |
Jang et al. 2022 [43] | Instance | A hierarchical multi-step deep learning model that reconstructs a panoramic image from the 3D CBCT volume, identifies and segments individual teeth in 2D on the panoramic image, extracts loose and tight 3D tooth ROIs using the detected bounding boxes and segmented tooth regions, and finally performs 3D segmentation of individual teeth from the 3D tooth ROIs. | CBCT | 97 scans: -Training: 66 -Testing: 31 For the 3D segmentation: -Training: 7 -Testing: 4 | -Dice similarity coefficient (DSC): 0.9479 -Precision: 95.97% -Recall: 93.71% -Hausdorff distance (HD): 1.66 mm -Average symmetric surface distance (ASSD): 0.14 mm | Not available |
Cui et al. b, 2022 [44] | Semantic | Established a fully annotated CBCT dataset, CTooth, with a tooth gold standard, containing 22 volumes (7363 slices) with fine tooth labels. An attention-based segmentation framework based on U-Net with an attention branch at the bottleneck position was proposed. | CBCT | CTooth Database: -5803 slices (4243 contain tooth annotations) -5504 annotated images from 22 patients | -Dice similarity coefficient (DSC): 0.8804 -Intersection over union (IoU): 0.7871 -Weighted dice similarity coefficient (WDSC): 95.14% -Sensitivity (SEN): 94.71% -Positive predictive value (PPV): 82.3% | CTooth (https://github.com/liangjiubujiu/CTooth, accessed on 21 August 2023) |
Cui et al. c, 2022 [45] | Semantic | Established a 3D dental CBCT dataset, CTooth+, with 22 fully annotated volumes and 146 unlabeled volumes, and further evaluated several tooth segmentation strategies based on fully supervised learning, semi-supervised learning and active learning, with a definition of the performance principles. | CBCT | CTooth+ Database: -5504 annotated CBCT images of 22 patients -25,876 unlabeled images of 146 patients -31,380 images in total -Training: 80% for the fully supervised (with labelled images) and semi-supervised methods (with labelled and unlabeled images) -Evaluation: 20% of image volumes | 1. Comparison of 8 fully supervised segmentation methods (3D SkipDenseNet, DenseVoxelNet, 3D UNet, VNet, VoxResNet, nnUNet, Dense UNet, Attention UNet): -Dice similarity coefficient (DSC): Attention UNet: 0.866 -Intersection over union (IoU): Attention UNet: 0.7645 -Sensitivity (SEN): Dense UNet: 90.80% -Positive predictive value (PPV): Attention UNet: 87.79% -Hausdorff distance (HD): nnUNet: 1.29 mm -Average symmetric surface distance (ASSD): Attention UNet and nnUNet: 0.27 mm -Surface overlap (SO): Dense UNet: 95.98% -Surface dice (SD): Dense UNet: 95.91% 2. Comparison of 4 semi-supervised methods trained on 9 labelled and 8 unlabeled volumes (MT, CPS, DCT, CTCT): -Dice similarity coefficient (DSC): CTCT: 0.8532 -Intersection over union (IoU): CTCT: 0.746 -Sensitivity (SEN): CTCT: 87.55% -Positive predictive value (PPV): CTCT: 84.22% -Hausdorff distance (HD): MT: 2.76 mm -Average symmetric surface distance (ASSD): CTCT: 0.43 mm 3. Comparison of 3 active learning-based methods trained on 9 labelled and 8 unlabeled volumes (ENT, MAR, CEAL): -Dice similarity coefficient (DSC): FSL 82: 0.866 -Intersection over union (IoU): FSL 82: 0.7645 -Sensitivity (SEN): CEAL: 87.85% -Positive predictive value (PPV): FSL 82: 87.79% -Hausdorff distance (HD): MT: 2.76 mm -Average symmetric surface distance (ASSD): CEAL: 1.05 mm -Surface overlap (SO): CEAL: 95.92% -Surface dice (SD): CEAL: 0.9589 | CTooth+ (https://github.com/liangjiubujiu/CTooth, accessed on 21 August 2023) |
Al-Sarem et al. 2022 [46] | Semantic | A pre-trained deep learning model (DenseNet169) based on U-Net for detecting and classifying tooth regions. | CBCT | 500 scans -Training: 70% -Validation: 20% -Testing: 10% | -Accuracy: 90.81% -Precision: 96% -Recall: 97% -F1-score: 0.97 | Request by contacting |
Yang et al. 2021 [47] | Instance | A two-stage tooth segmentation model combining deep convolutional neural networks with the level set method. First, deep convolutional neural networks detect the center point, direction and length of each tooth (the dental pulp is segmented by U-Net to locate the tooth), and a series of mathematical methods fit an elliptical curve as shape prior information used to define the prior constraint term. Then, the image data term, the length term, the regularization term and the prior constraint term are combined to define the level set formulation of the energy functional, yielding an accurate tooth segmentation model. | CBCT | 10 patients (512 scanning slices each) For training the U-Net for pulp segmentation: -Training: 2 patients (1024 slices) -Validation: 1 patient (512 slices) | -Dice coefficient: 0.9791 -Jaccard coefficient: 0.9595 -Detection accuracy: 97.33% -Mean boundary F1 score: 0.9824 | Not available |
Shaheen et al. 2021 [48] | Instance | A CNN-based system for segmentation of each individual tooth and classification to a particular tooth class, which uses a 3D U-Net to segment each tooth within its bounding box. | CBCT from two machines | 186 CBCT scans -Training: 140 (teeth: 400) -Validation: 35 (teeth: 100) -Testing: 11 (teeth: 332) | For segmentation: -Dice similarity coefficient (DSC): 0.90 -Intersection over union (IoU): 0.82 -Recall: 83% -Precision: 98% -95% Hausdorff Distance (HD): 0.56 mm -Time: 13.7 ± 1.2 s For classification: -Recall: 98.5% -Precision: 97.9% -Accuracy: 96.6% | Virtual Patient Creator (https://creator.relu.eu, accessed on 21 August 2023) |
Lin et al. 2021 [49] | Semantic | A novel data pipeline based on micro-CT data to train a 2D U-Net for accurate pulp cavity and tooth segmentation on CBCT images. The 2D U-Net, containing a region proposal network (RPN) with a feature pyramid network (FPN) structure, was proposed in previous research (Duan et al. 2021) to locate the extracted tooth and perform segmentation. | CBCT Micro-CT | 30 teeth -Training: 25 groups (3200 sagittal slices and 6400 axial slices) -Testing: 5 groups | -Dice similarity coefficient (DSC): 0.962 -Precision rate (PR): 97.31% -Recall rate (RR): 95.11% -Average symmetric surface distance (ASSD): 0.09 mm -Hausdorff distance (HD): 1.54 mm | Not available |
Cui et al. 2021 [50] | Instance | A hierarchical morphology-guided model with a 3D V-Net backbone first located tooth centroids and predicted skeletons, and then predicted the detailed geometric features (tooth volume, boundary, and root landmarks) with a multi-task learning mechanism under this guidance. | CBCT | 100 CBCT scans -Training: 50 -Validation: 10 -Testing: 40 | -Dice: 0.948 -Jaccard: 0.891 -Average surface distance (ASD): 0.18 mm -Hausdorff distance (HD): 1.52 ± 0.28 mm | Not available |
Lahoud et al. 2021 [51] | Instance | A CNN-based, AI-driven tool for automated detection and segmentation of tooth structures. | CBCT | 2924 slices -Training: 2095 -Optimization: 501 -Validation: 328 | -DSC: 0.93 -IoU: 0.87 -Segmentation volumes: 536 mm3 -Average median surface deviation: 7.85 mm -Time: 0.5 min | Not available |
Duan et al. 2021 [52] | Instance | A two-phase deep learning solution for tooth and pulp segmentation using U-Net in CBCT images. First, the single-tooth bounding box is extracted using the Region Proposal Network (RPN) with the Feature Pyramid Network (FPN) method from a panoramic perspective. Second, a U-Net model is iteratively applied for refined tooth and pulp segmentation. | CBCT | 20 sets -Ten-fold cross-validation | Single-root tooth/multi-root tooth: -Dice similarity coefficient (DSC): 0.957/0.962 -Average symmetric surface distance (ASD): 0.104/0.137 mm -Relative volume difference (RVD): 0.049/0.053 | Not available |
Wang et al. 2021 [53] | Semantic | A novel CNN architecture, mixed-scale dense (MS-D) CNN, for multiclass segmentation of the jaw, the teeth, and the background in CBCT scans. | CBCT | 30 scans (9507 slices) -Divided into 4 subsets, 7 scans each (4-fold cross-validation scheme) -Training: 3 -Testing: 1 | -Dice similarity coefficient (DSC): 0.945 -MAD: 0.204 ± 0.061 mm | Not available |
Wu et al. 2020 [54] | Instance | A two-level hierarchical deep neural network for tooth segmentation. First, a center-sensitive mechanism with a global-stage heatmap and a deeply supervised 3D U-Net ensures accurate tooth centers and guides the localization of tooth instances. Then, in the local stage, DenseASPP-UNet performs fine segmentation and classification, and a boundary-aware Dice loss is proposed to obtain accurate tooth boundaries. | CBCT | 20 scans -Training: 12 (324 teeth) -Testing: 8 (219 teeth) | -Dice similarity coefficient (DSC): 0.962 -Detection accuracy (DA): 99.1% -Identification accuracy (FA): 99.5% -Average Symmetric Surface Distance (ASD): 0.122 mm | Not available |
Rao et al. 2020 [55] | Semantic | A symmetric fully convolutional residual network with dense conditional random fields (DCRF) to refine the posterior probability map. It used a novel Deep Bottleneck Architecture (DBA) to replace the general convolutional layers in U-Net and introduced a skip connection structure to enhance feature propagation and reuse. The DCRF was applied for overall structured prediction to remove noise from the images and locate the tooth contour precisely. | CBCT | -Training: 86 images (conventional/unconventional = 51/35) -Testing: 24 images | -Volume Difference (VD): 18.86 -Dice Similarity Coefficient (DSC): 0.9166 -Average Symmetric Surface Distance (ASSD): 0.25 mm -Maximum Symmetric Surface Distance (MSSD): 1.18 mm | Not available |
Lee et al. 2020 [56] | Semantic | A CNN-based method (UDS-Net) with multi-phase training and preprocessing based on the U-Net structure. For multi-phase training, sub-volumes of different sizes were defined and used to produce stable and fast convergence. A histogram-based method was then used as a preprocessing step to estimate the average gray density level of the bone and tooth regions. Finally, a posterior probability function was developed, the CNN models were regularized with spatial dropout layers, and the convolutional layers were replaced with dense convolution blocks, further improving segmentation performance. | CBCT | 102 datasets -Training: 69 (1066 images) -Validation: 1 (400 images) -Testing: 32 (151 images) | For validation: -Dice: 0.938 -Recall: 95.2% -Precision: 92.4% For testing: -Dice: 0.918 -Recall: 93.2% -Precision: 90.4% | Not available |
Chung et al. 2020 [57] | Instance | A pose-aware neural network (TRCNN) for pixel-wise labeling, exploiting an instance segmentation framework robust to metal artifacts. First, the alignment information of the patient was extracted by pose regression neural networks to obtain a volume-of-interest (VOI) region and realign the input image, which reduces the overlapping area between tooth bounding boxes. The VOI region was then realigned based on the pose. Finally, a 3D U-Net performed individual tooth segmentation by converting the pixel-wise labeling task into a distance regression task. | CBCT | 100 images -Training: 50 -Testing: 25 | -F1 score: 0.93 -Aggregated Jaccard index (AJI): 0.86 -Precision: 93% -Sensitivity: 93% -Hausdorff distance (HD): 1.59 mm -Average symmetric surface distance (ASSD): 0.20 mm | Not available |
Chen et al. 2020 [58] | Instance | A multi-task 3D fully convolutional network (FCN) and marker-controlled watershed transform (MWT) to segment individual teeth. The multi-task FCN learns to simultaneously predict the probability of the tooth region and the probability of the tooth surface. Using the combination of the tooth probability gradient map and the surface probability map as the input image, the MWT automatically separates and segments individual teeth. | CBCT | 25 patients -Training: 20 -Testing: 5 | -Jaccard similarity coefficient (Omega): 0.936 -Dice similarity coefficient (DSC): 0.881 -Relative volume difference (RVD): 0.072 -Average symmetric surface distance (ASSD): 0.363 mm | Not available |
Ezhov et al. 2019 [59] | Semantic | A V-Net based fully convolutional network for both coarse and fine segmentation. First, the model was trained to predict coarse segmentation using a large weakly labeled dataset, and then finetuned on a smaller, precisely labeled dataset while still predicting coarse masks. | CBCT | -Training: 93% -Testing: 7% | -Intersection over union (IoU): 0.94 -Average surface distance (ASD): 0.17 mm | Not available |
Cui et al. 2019 [60] | Instance | A two-stage deep supervised neural network using 3D Mask R-CNN as the base network for segmentation and identification. First, the edge map was extracted from the CBCT images to enhance image contrast along shape boundaries. Then, the 3D region proposal network (RPN) was built with a novel learned similarity matrix to help efficiently remove redundant proposals, speed up training and save GPU memory. | CBCT | 20 scans -Training: 12 -Testing: 8 | -Dice similarity coefficient (DSC): 0.9237 -Detection accuracy (DA): 99.55% -Identification accuracy (FA): 96.85% | Not available |
Gou et al. 2019 [61] | Semantic | A novel tooth-based approach that integrated U-Net with a level set model. The level set method was used to build masks for the CT images, and the U-Net structure was modified so that it could handle images of any size. | CBCT | 400 images -Training: 300 -Validation: 100 | -Accuracy: 66.7% -Time: 10 s | Not available |
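Nearly every study in the table above reports some combination of DSC, (95%) HD and ASSD when comparing a predicted tooth mask with the ground truth. The following NumPy/SciPy sketch illustrates how these metrics are commonly computed; surface extraction, voxel-spacing handling and the HD percentile convention differ between studies, so this is an illustration rather than any study's actual evaluation code.

```python
# Illustrative metric computation for a predicted vs. ground-truth binary mask.
# Conventions (surface definition, spacing handling, HD percentile) vary between
# the reviewed studies; this sketch follows one common choice.
import numpy as np
from scipy import ndimage


def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) of two boolean volumes."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())


def surface_distances(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Distances from each surface voxel of one mask to the surface of the other."""
    pred_surf = pred ^ ndimage.binary_erosion(pred)
    gt_surf = gt ^ ndimage.binary_erosion(gt)
    # Distance of every voxel to the nearest surface voxel of the other mask.
    dt_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    return dt_gt[pred_surf], dt_pred[gt_surf]


def hd95_and_assd(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile Hausdorff distance and average symmetric surface distance."""
    d_pg, d_gp = surface_distances(pred.astype(bool), gt.astype(bool), spacing)
    all_d = np.concatenate([d_pg, d_gp])
    return np.percentile(all_d, 95), all_d.mean()


if __name__ == "__main__":
    # Toy example: two slightly offset spheres standing in for a tooth mask.
    zz, yy, xx = np.mgrid[:64, :64, :64]
    gt = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
    pred = (zz - 34) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
    print("DSC :", round(dice(pred, gt), 4))
    hd95, assd = hd95_and_assd(pred, gt, spacing=(0.25, 0.25, 0.25))  # mm voxels
    print("HD95:", round(hd95, 3), "mm  ASSD:", round(assd, 3), "mm")
```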