Comprehensive Insights into Artificial Intelligence for Dental Lesion Detection: A Systematic Review
Abstract
1. Introduction
2. Methods
2.1. Research Objectives and Methodology
2.1.1. Goal and Research Questions
- Analyze lesion detection and segmentation using deep learning models in dental images;
- For the purpose of identification and analysis;
- With respect to object types in dental panoramic/periapical/CBCT images, state-of-the-art solutions, data augmentation methods to detect lesions in dental images, possible research directions, and future developments;
- From the point of view of dentists and computer science researchers;
- In the context of deep learning.
- RQ1. What are the object types for detection and classification in dental panoramic/periapical/CBCT images?
- This research question aims to determine the application areas of AI in dental images by investigating what types of objects can be detected in different dental imaging techniques in the literature.
- RQ2. What are the state-of-the-art approaches to detect lesions in dental images?
- This research question focuses on the analysis of current AI-based solutions used to detect lesions, which are relatively difficult to distinguish with the naked eye compared to other objects.
- RQ3. What are the data augmentation methods used for dental images?
- This research question aims to investigate the data augmentation methods applied due to the difficulty of object detection in dental images and the effects of these methods on model performance.
- RQ4. What are the challenges and proposed solutions in dental lesion detection?
- This research question aims to guide future researchers by including the difficulties encountered in AI-based dental lesion detection and solutions to these difficulties.
2.1.2. Data Extraction
2.1.3. Data Synthesis and Reporting
3. Results
3.1. Study Selection
3.2. Quality Assessment
3.3. Answers to Research Questions
3.3.1. What Are the Object Types for Detection and Classification in Dental Panoramic/Periapical/CBCT Images?
- Periapical lesion: Periapical lesions are among the most prevalent dental diseases. Clinical and radiographic examinations are essential for assessing common clues to their diagnosis [27]. Periapical radiographs, panoramic radiographs, and CBCT are the imaging methods used in radiographic examination to assess the presence of periapical lesions [41], although detection is performed less frequently on panoramic and CBCT images, many of which also contain periapical lesions. S1 denotes the first primary study identified in the Springer database during the search. In study S1 [11], lesions in the roots of teeth were evaluated using deep learning on periapical radiographs. Within the scope of the study, 3000 periapical root areas were extracted from 1950 images, and each root was scored according to the size of its lesion. The periapical lesions were then graded, and model performance was compared across these grades using different metrics. The study observed that the model performed well for some of the root scores (1–5) used to grade periapical lesions but not well enough for others; it was therefore concluded that the model required further development and training. In the remaining 17 studies, the periapical lesion was targeted using different imaging techniques.
- Cyst lesion: The jaw is the most common bone in the human body to develop cysts, owing to the numerous epithelial rests associated with the growing dentition. Because of their similar clinicopathological and radiographic presentations, many cysts in the jaws may mimic tumors and intraosseous lesions [42]. Many deep learning-based studies have been carried out to detect these lesions. Study S2 [12] aimed to detect cyst-like lesions: panoramic dental X-Rays of 412 patients were examined, and maxillary cyst-like lesions were detected in them. The study concluded that deep learning methods could achieve effective results in lesion detection but that further research was required. Similarly, study S25 [35] addressed the confusion between radicular cysts and periapical lesions. On panoramic imaging, dentists frequently struggle to differentiate radicular cysts from periapical granulomas. Root canal therapy is the first-line treatment for periapical granulomas, whereas surgical removal is necessary for radicular cysts; thus, there is a need for an automated tool to support clinical decision-making [35]. A deep learning method was applied to panoramic radiographs of teeth with 80 radicular cysts and 72 periapical lesions previously identified in the dataset. Evaluation of the results suggested that more research was needed for periapical lesion detection and that automatic detection could be achieved with deep learning. Of these two studies detecting cyst and periapical lesion object types, the second achieved better results than the first when focusing on cysts.
- Jawbone lesion: The most common pathologies encountered in the jawbones are cysts, tumors, and tumor-like diseases. Since the identification of lesions on panoramic radiography has significant clinical importance, intensive efforts have been made to develop deep learning-based models for pathological diagnosis. Despite these advances, the search for a method that can be effectively used in the clinic to diagnose jawbone pathologies continues [43]. Study S3 [13] focused on the detection of jawbone lesions with deep learning methods. It was observed that diagnostic accuracy could reach up to 96.57% when 5% of the dataset—labeled with expert input and containing sufficient data—was utilized. The careful use of expert opinion in the labeling process increased sensitivity and specificity as well as accuracy. The study shows that, when deep learning and carefully labeled data are used, detection from medical images can achieve highly accurate results. On this basis, the jawbone lesion was included as an object type detectable by deep learning.
- Tooth decay lesion: Tooth decay is a pathology that starts in the tooth enamel, causes tooth loss as it progresses, and is considered the beginning of dental diseases. At later stages, it can progress to the formation of cysts and tumor-like structures. It was therefore treated as an object type requiring detection and classification [44]. In this context, studies S5 and S6 from the IEEE database were considered as primary studies, and deep learning-based detection of the dental caries object was performed. In study S5 [15], a new deep learning method was developed for the automatic detection of dental caries lesions. In this study, conducted on periapical images, sensitivity for caries lesions reached a high value of 99.13%, and the developed model was found to be effective in detecting dental caries lesions. In the other article, S6 [16], the authors explored methods to facilitate the treatment process of common dental diseases such as dental caries and missing teeth using AI and image techniques. Today, dentists' manual visual search for lesions is both time-consuming and occasionally error-prone. In that study, AI with transfer learning methods was first used to detect dental diseases such as lesions, dental caries, and missing teeth. Although the target was not caries specifically, dental diseases were detected in general.
- Apical lesion: The term apical periodontitis, which denotes the beginning of periapical tissue diseases, is generally used to describe the onset of various periapical conditions originating from pulp diseases, which are named and grouped according to the developmental stages of the disease [45]. Although apical and periapical lesions are often used interchangeably, they are presented here under two separate headings, as in the articles included in this study. Within this study, apical lesion detection was performed in 10 articles, of which three selected studies are highlighted: S7, S8, and S9. In the primary study S7 [17], the authors aimed to detect lesions in tooth roots, carrying out detection with different deep learning methods on 660 images; the results were compared, and the method with the highest accuracy was identified. In the primary study S8 [18], the authors argue that imaging techniques other than the X-Ray images most commonly used for apical lesion detection can also be employed and may facilitate the detection of lesions otherwise identified manually. The imaging techniques considered are panoramic, periapical, and CBCT radiographs. Within the scope of the study, a database was created for these images, and a new Convolutional Neural Network (CNN) was proposed. In study S9 [19], lesions were detected on periapical images with the aim of reducing the workload of dentists and sparing them the difficulty of manual labeling. For this purpose, an analysis method was suggested, data were obtained from a database created by expert dentists, and detection was performed using a neural network. The accuracy rate increased by more than 5% compared to currently available methods, saving both time and treatment effort. On the basis of these studies, the apical lesion was also accepted as an object type requiring detection and presented as a separate group.
3.3.2. What Are the State-of-the-Art Approaches to Detect Lesions in Dental Images?
- U-Net: One of the methods used effectively in the detection and classification of medical images is the U-Net architecture [46], which takes its name from its U-shaped structure. The architecture consists of an encoder followed by a decoder. The encoder, or contracting path, comprises convolution, pooling, and progressively deeper feature extraction layers. The decoder consists of upsampling, concatenation, and convolution layers. A bottleneck section, containing further convolution operations, sits between the end of the encoder and the beginning of the decoder. In general terms, the U-Net architecture extracts increasingly low-resolution features from the input image and reconstructs the output image from them. Mostly used for segmentation, this architecture is combined with other deep learning methods, making it a technological approach applicable to the detection of dental diseases. Considering its application in medical images and the scope of this study, eight studies—numbered S10, S14, S15, S18, S19, S22, S23, and S24—that employed U-Net were included. In study S10 [20], the objective was to detect apical lesions using CNNs. In the experiments, a dataset of 1000 panoramic images was divided into 80% for training, 10% for validation, and 10% for testing. Lesion detection with the U-Net architecture was evaluated with different metrics: at Intersection over Union (IoU) thresholds of 0.3, 0.4, and 0.5, the segmentation F1-scores were 82.8%, 81.5%, and 74.2%, respectively. In study S14 [24], a deep learning-based method was proposed for the detection and segmentation of periapical lesions in CBCT images. A total of 61 periapical root images taken from 20 CBCT volumes were used. With U-Net as the lesion detection method, five classes were defined for segmentation: “lesion”, “bone”, “tooth structure”, “background”, and “restorative materials”. The accuracy of the U-Net architecture for lesion detection was 93%. Different metrics and accuracy values were likewise reported for each segmentation class; the Dice indices were 52%, 78%, 74%, 95%, and 58% for lesion, bone, tooth structure, background, and restorative materials, respectively. Examining these results, the authors emphasized that the conditions necessary for automatic analysis can be met by developing deep learning techniques on CBCT images used for lesion detection. Finally, the performance of the deep learning-based U-Net architecture for detecting periapical lesions was evaluated in study S15 [25]. A total of 195 CBCT images were used, with a focus on the detection of small lesions. Periapical lesions were graded by size, training was performed with a deep learning method, and the results were evaluated by sensitivity and specificity: U-Net showed 86.7% sensitivity and 84.3% specificity in detecting periapical lesions according to their size. It was anticipated that these results would reach higher accuracy and reliability as the algorithm is developed in further studies.
Apart from these, the remaining five studies generally presented architectures that support these methods in the effective detection of periapical lesions. A minimal sketch of the U-Net encoder–decoder structure is given below.
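To make the encoder–bottleneck–decoder structure described above concrete, the following is a minimal, illustrative PyTorch sketch of a U-Net-style segmentation network; the layer widths and the single-channel radiograph input are assumptions for illustration, not the configuration of any reviewed study.

```python
# A minimal U-Net-style encoder-decoder (illustrative sketch only).
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    def __init__(self, n_classes=1):
        super().__init__()
        self.enc1 = double_conv(1, 32)           # grayscale radiograph input
        self.enc2 = double_conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(64, 128)   # section between encoder and decoder
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = double_conv(128, 64)         # 128 = 64 upsampled + 64 from the skip
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel lesion logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

mask_logits = MiniUNet()(torch.randn(1, 1, 128, 128))  # -> (1, 1, 128, 128)
```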
- AlexNet: The AlexNet CNN, which holds an important place in the use of artificial neural networks, consists of five convolutional and three fully connected layers [47]. Today it also provides effective results in transfer learning. AlexNet forms a structure used in the construction of other networks, especially for object detection. Its structure includes various parts such as convolution, pooling, fully connected layers, and activation functions. A “dropout” function helps prevent overfitting, and optimized GPU usage enables training on large datasets. Among the primary studies, object detection was performed with AlexNet in three articles. In study S8 [18], where apical lesions were detected, the highest accuracy was achieved with AlexNet: a diagnostic accuracy of 92.5% was obtained on panoramic, periapical, and CBCT images. In study S9 [19], where periapical lesions were detected with AlexNet, the accuracy rate was 96.21%; dentists were thereby spared manual searching for apical lesions and could turn their attention to other dental diseases. In study S27, Sajad et al. [37] classified periapical lesions located on the roots of teeth that are too small for dentists to see. This was carried out in two stages. In the first stage, features were extracted using AlexNet and training was performed with a Support Vector Machine (SVM) and CNN. In the second stage, features were extracted from the fully connected layers via transfer learning, so that lesion classification was based on the most meaningful features with the help of the softmax function, without the need for data augmentation. Following these two stages, the data were augmented again and transferred to the SVM classifier, achieving 98% accuracy. Based on these primary studies, AlexNet’s success is expected to serve as an example for health applications.
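As an illustration of this two-stage idea, the following hedged sketch uses torchvision's pretrained AlexNet as a fixed feature extractor and hands the 4096-dimensional fully connected features to an SVM; the dataset variables are hypothetical placeholders, and the reviewed study's exact pipeline may differ.

```python
# Illustrative two-stage pipeline: pretrained AlexNet features -> SVM classifier.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import SVC

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier = alexnet.classifier[:-1]  # drop the last fc layer -> 4096-d features
alexnet.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    # Stack preprocessed images into a batch and return an (N, 4096) matrix.
    batch = torch.stack([preprocess(img) for img in pil_images])
    return alexnet(batch).numpy()

# Hypothetical usage: `train_imgs` and `labels` would come from a labeled lesion dataset.
# X = extract_features(train_imgs)
# clf = SVC(kernel="rbf").fit(X, labels)
```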
- You Only Look Once (YOLOv3, YOLOv5, YOLOv8): In terms of technological approaches, three versions of the YOLO object detector, considered together in this study, are used in the literature [49]. Taken individually, YOLOv3 is a neural network built on the 53-layer DarkNet-53 backbone and consists of 106 layers in total, with only one fully connected layer. Thanks to its deep structure, it can achieve high accuracy and performs object detection by drawing bounding boxes. YOLOv5 provides a trainable structure using PyTorch libraries and offers faster and lighter processing than YOLOv3. YOLOv8 likewise builds on PyTorch; it is structurally based on YOLOv5 but increases training performance through refinement and optimization, so YOLOv8 can be regarded as a successor to YOLOv5 [50]. Five selected primary studies used YOLO for the detection of dental diseases, covering different lesion types. In study S1 [11], detection was performed with YOLOv3 on periapical lesions verified by different experts; accuracy was 86.3%, specificity 76%, sensitivity 92.1%, positive predictive value 86.4%, negative predictive value 86.1%, and F1-score 89%. In study S4 [14], a machine learning-based support system was developed to detect dental conditions such as periapical lesions and missing teeth on panoramic images. For this purpose, data from 733 patients were collected at Future University, Egypt, and six dental diseases, including missing teeth and periapical lesions, were detected with YOLOv5. Detection across the six classes yielded 0.61 mAP@0.5 and 0.28 mAP@[0.5–0.95]; it was predicted that, as the technology develops, more diseases beyond these six classes could be detected. In study S12 [22], the aim was to detect apical lesions on panoramic images, also noting the advantages and effectiveness of AI in this process. With a dataset of 306 panoramic images and 400 apical roots obtained from them, the F1-score, specificity, and sensitivity were 71%, 56%, and 98%, respectively. Comparing these values, the study determined that AI, through a YOLOv3-based detection system, assisted dentists in making diagnoses. In article S7 [17], the performances of YOLOv5 and YOLOv8 were compared on dental lesion images, and YOLOv8 was found to be more effective in detecting lesions.
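As a usage-level illustration, the following hedged sketch shows how a YOLOv8 model can be fine-tuned and applied with the Ultralytics Python API; the dataset file `lesions.yaml`, the image path, and all hyperparameters are hypothetical placeholders rather than settings from the reviewed studies.

```python
# Hedged sketch of YOLOv8 fine-tuning and inference with the Ultralytics API.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained nano model as a starting point
model.train(data="lesions.yaml", epochs=100, imgsz=640)  # hypothetical lesion dataset

results = model.predict("panoramic_001.png", conf=0.25)  # boxed detections
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, box corners
```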
- Convolutional Neural Network (CNN): The CNN is one of the deep learning methods usable especially for tasks such as audio and image recognition [51]. As a basic building block, it is included in the structure of other networks, and accordingly it combines activation functions, pooling, and fully connected layers with the convolutional layer [52]. It learns image features through a hierarchical structure. Given its structure and features, it is among the latest technological approaches usable for the detection of dental lesions. As primary studies, articles S5, S13, S19, and S29 were added to this systematic review because they used CNNs to detect lesions. Among these, in study S5 [15], as previously mentioned, the authors sought to increase detection performance using an ensemble structure, the Multi-Input Deep Convolutional Neural Network Ensemble (MI-DCNNE), for the detection of dental caries. In the primary study S13 [23], the effective use of AI to detect periapical lesions in panoramic radiographs was investigated, since some lesions are too small to be seen visually. The authors marked 18,618 periapical root areas in 713 panoramic images, then classified periapical lesions as present/absent and detected them with a CNN architecture. The average accuracy was 74.95%, with a sensitivity of 81% and a specificity of 86%, showing that AI achieved successful results on an object type that is difficult to see, such as periapical lesions. In the primary study S19 [29], periapical pathology was determined for the detection of lesions and similar dental diseases in panoramic radiographs. The task was segmentation, and the study compared the pathological findings with the metric values obtained from deep learning methods for disease detection. Two deep learning methods, U-Net and Mask R-CNN, were applied to pathological findings extracted from 250 panoramic images and compared using accuracy, precision, sensitivity, Dice index, and F1-score. U-Net reached an accuracy of 98.1% versus 46.7% for Mask R-CNN, showing that the U-Net architecture performed better in lesion segmentation and detection; it was noted that these results can improve as sufficient data are obtained. In another study, S29 [39], periapical lesions were detected using an X-Ray imaging technique, with emphasis on the development of imaging techniques: it was argued that new imaging techniques should be created for objects that are difficult to detect, such as periapical lesions, in addition to the X-Ray images currently used. The detection accuracy for anomalies such as periapical lesions was 95.85%. As these studies show, the CNN approach forms an auxiliary network structure on which architectures such as U-Net are built.
- GoogleNet: The GoogleNet deep learning model, also known as InceptionV1, was developed by Google [53]. The most notable aspect of this method, first introduced to the literature following its success in a competitive setting, is its use of “inception” blocks to perform detection. These blocks apply filters of different sizes in parallel, so that different information is captured at the same time; performance increases while the number of parameters decreases. The Rectified Linear Unit (ReLU) activation function is used to introduce nonlinearity into these operations [54]. In addition, effective results are expected in dental lesion detection when the number of parameters in GoogleNet is reduced with pooling layers [53]. Typical uses of this model are classification, object detection, and segmentation. In the primary studies S6 and S8, GoogleNet was utilized as a comparative model. In study S6 [16], GoogleNet achieved an accuracy of 97.10%. In S8 [18], different networks were evaluated and compared to identify the one that best detects apical lesions; while the highest accuracy in that study was achieved with AlexNet, GoogleNet reached an accuracy of 89.36%. It is evident that GoogleNet, with its distinctive “inception” block structure, can be improved further and has the potential to achieve high accuracy in dental lesion detection.
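The parallel-branch idea behind the “inception” block can be sketched as follows; the branch widths below are illustrative assumptions, not GoogleNet's exact configuration.

```python
# Minimal Inception-style block: parallel filters of different sizes,
# concatenated along the channel dimension (illustrative sketch).
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 16, 1)                       # 1x1 branch
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 16, 1),        # 1x1 reduce
                                nn.Conv2d(16, 24, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 8, 1),
                                nn.Conv2d(8, 12, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 12, 1))        # pooling branch
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Run the four branches in parallel and concatenate their channels.
        return self.relu(torch.cat(
            [self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1))

out = InceptionBlock(32)(torch.randn(1, 32, 64, 64))  # -> (1, 64, 64, 64)
```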
- Denti.AI: Denti.AI is an AI-based system that provides dentists with automatic information about pathologies in images, examining and detecting from X-Ray images of the teeth. For this deep learning-based system, study S11 was included in this systematic review as a primary study. In study S11 [21], apical lesion radiolucencies were detected on periapical images. The detection process, previously carried out on CBCT images, was conducted this time using 68 intraoral periapical images, in a way that also highlighted the effectiveness of deep learning methods. In the two-part process conducted with the data added to the Denti.AI deep learning tool, different metric values were obtained for Reader 1 and Reader 2 and compared. As a result, the use of periapical radiographs in detecting apical lesion radiolucencies increased accuracy by 8.6% according to the alternative Free-Response Receiver Operating Characteristic (FROC) curve and its Area Under the Curve (AFROC-AUC) metrics. This study, examining the effect of the periapical imaging technique on apical lesions, met all inclusion criteria.
- Visual Geometry Group (VGG16, VGG19): VGG16 and VGG19 are two deep learning models used in different situations; like AlexNet, each includes three fully connected layers [55]. These networks, developed in a competition setting, achieved significant success and remain in use today, making them methods applicable to dental lesion detection. VGG16 consists of sixteen layers: thirteen convolutional layers and three fully connected layers. The convolutional layers use 3 × 3 filters and apply the ReLU activation function [56], and maximum pooling is performed over 2 × 2 windows [57]. After training, operations such as object detection can be performed, and in transfer learning the three fully connected layers provide effective feature extraction. VGG19 likewise consists of convolutional layers followed by three fully connected layers [58]. It is the more advanced version of VGG16, with a deeper network structure; however, the increased depth brings higher cost and more parameters [59]. VGG19 is a deep learning model suitable for segmentation, object recognition, and similar image processing tasks. In this systematic review, two studies performed lesion detection in this context. In study S8 [18], VGG19 served as a further comparison model alongside AlexNet and GoogleNet, achieving an accuracy of 87.94%. In the other selected primary study, S26 [36], two classes were defined—healthy teeth and teeth with endodontic lesions—and detection was performed on them. Detection used the DenseNet-121 network after automatic classification with VGG16 combined with a Siamese network. The dataset consisted of 1000 sagittal and coronal slices extracted from 1000 CBCT images. The methods were tested and, in addition to acceptable classification performance, a detection accuracy of 70% was obtained. Based on this result, it was suggested that the Siamese network could be combined with different deep learning methods in the future to achieve higher lesion detection rates. As the primary studies show, VGG16 and VGG19 constitute a viable structure for lesion detection.
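A common way to reuse VGG16 for a two-class lesion task, as in classification setups like S26, is to freeze the convolutional layers and replace the final fully connected layer; the following is an illustrative torchvision sketch, not the reviewed study's exact pipeline.

```python
# Hedged sketch: adapting a pretrained VGG16 to healthy-vs-lesion classification.
import torch.nn as nn
import torchvision.models as models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
for p in vgg.features.parameters():
    p.requires_grad = False             # freeze the 13 convolutional layers
vgg.classifier[6] = nn.Linear(4096, 2)  # replace the final fc layer: 2 classes
# vgg can now be trained with a standard cross-entropy loop on lesion crops.
```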
- DentaVN: The DentaVN software, presented as a new approach in lesion detection, was used in study S28, which met the inclusion criteria and was therefore included among the primary studies. In study S28, Ngoc et al. [38] state that the main purpose of their research is to provide evidence for the use of AI in disease diagnosis. The study focused on periapical lesions, noting that research on this subject was limited. To this end, machine learning-based software called DentaVN was developed using the Faster Region-based CNN (Faster R-CNN) architecture. Using the parameters of Faster R-CNN, the software detected periapical lesions with a 95.6% accuracy rate, also validated by dentists; sensitivity and specificity were 89.5% and 97.9%, respectively. Considering these results, it was concluded that DentaVN can serve as a supportive tool for dentists and is effective in the detection of periapical lesions.
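Since DentaVN itself is proprietary, the following hedged sketch only shows the general pattern of adapting torchvision's Faster R-CNN to one lesion class plus background; it is an assumption-laden illustration, not the DentaVN implementation.

```python
# Illustrative Faster R-CNN setup for lesion boxes (not DentaVN's code).
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Load a detector pretrained on COCO and swap in a two-class head
# (class 0 = background, class 1 = periapical lesion; labels assumed).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
# Training expects a list of images plus targets holding "boxes" and "labels".
```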
- RetinaNet: The RetinaNet deep learning model provides the location information of objects in an image and uses it for classification [60]. It is primarily defined as a computer vision method whose central problem is balancing high accuracy with fast computation. Its features can be divided into three parts. The first is the “focal loss” function, which weights examples differently during classification [61]: easy, mostly negative examples are down-weighted so that hard, positive examples stand out. Second, RetinaNet performs object detection in a single stage [62], which enables the model to run quickly. Third, it performs feature extraction by integrating ResNet or ResNeXt backbones [63]. This is an effective method, particularly for detecting abnormalities in medical images. In study S17 [27], the authors used the RetinaNet deep learning model to detect lesions in tooth roots, manually labeled on panoramic radiographs. Periapical lesions were trained with ten different deep learning methods on 457 panoramic radiographs, and the results were evaluated with accuracy, sensitivity, precision, and F1-score. The data were divided into 80% training, 10% validation, and 10% test sets. Accuracy ranged from 67.3% to 81.2%, sensitivity from 74% to 91%, precision from 82% to 93%, and F1-score from 80% to 89.5%. Among the methods, the best model was RetinaNet, and the best performance was achieved with Adaptive Training Sample Selection (ATSS). It was suggested that, with further trials, RetinaNet may be used effectively in clinical settings in the future.
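The focal loss idea can be written down compactly; the following is a minimal sketch of the standard binary formulation with the usual α and γ defaults, independent of any reviewed study.

```python
# Minimal binary focal loss as used by RetinaNet-style detectors: it
# down-weights easy (well-classified) examples so that rare lesion
# pixels/boxes dominate the gradient.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # logits/targets have the same shape; targets are 0 (background) or 1 (lesion).
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)        # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()  # (1 - p_t)^gamma focusing term

loss = focal_loss(torch.randn(8), torch.randint(0, 2, (8,)).float())
```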
- SqueezeNet: It has been argued that cost and parameter counts should be reduced in deep learning [64]. In this context, the SqueezeNet deep learning model offers fewer parameters and faster performance, aiming for high detection and classification accuracy with reduced memory usage. The model is built from two kinds of layers: the “squeeze layer”, which reduces the depth of the input data using 1 × 1 filters, and the “expand layer”, which expands the data again [65]. By comparison, it contains 50 times fewer parameters after training than network structures such as AlexNet [66]. Its advantage over other models is that its compact structure greatly reduces memory usage while achieving high accuracy at speed, and it can detect objects on many embedded systems and in medical images. However, since it has no fully connected layer, it is not a suitable network model for transfer learning approaches. In study S6 [16], this model was compared with GoogleNet and performed better, reaching a high accuracy of 99.9% while detecting lesions such as tooth decay. With this accuracy value, the SqueezeNet deep learning model shows that it is open to effective use in the future.
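The squeeze/expand pattern can be sketched as a SqueezeNet-style “fire” module; the channel counts below are illustrative assumptions.

```python
# SqueezeNet "fire" module sketch: a 1x1 squeeze layer followed by parallel
# 1x1 and 3x3 expand layers whose outputs are concatenated.
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, 1)           # reduce depth
        self.expand1 = nn.Conv2d(squeeze_ch, expand_ch, 1)
        self.expand3 = nn.Conv2d(squeeze_ch, expand_ch, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        s = self.relu(self.squeeze(x))
        return self.relu(torch.cat([self.expand1(s), self.expand3(s)], dim=1))

out = Fire(96, 16, 64)(torch.randn(1, 96, 55, 55))  # -> (1, 128, 55, 55)
```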
- Segment Anything Model (SAM): This is a deep learning model developed by Meta’s Fundamental AI Research (FAIR) as a state-of-the-art instance segmentation model [67]. The model identifies each object in an image and assigns a mask to it. In the output image, each object is represented with distinct colors and patterns. While this model is effectively used for object identification, it demonstrates particularly good results in medical imaging. However, it has not yet achieved sufficient accuracy in lesion detection. In study S7 [17], this method was employed for comparison purposes but was found to be less effective, achieving only a 60% accuracy rate compared to other deep learning models. Therefore, it is considered open to improvement and is expected to yield better results in the future.
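For orientation, the following hedged sketch shows how SAM is typically prompted with a single foreground point through Meta's segment-anything package; the checkpoint file, the placeholder image, and the click coordinates are assumptions for illustration.

```python
# Hedged sketch of point-prompted segmentation with SAM.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Placeholder image; in practice this would be a radiograph loaded as HxWx3 uint8.
image_rgb = np.zeros((480, 640, 3), dtype=np.uint8)

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # released ViT-B weights
predictor = SamPredictor(sam)
predictor.set_image(image_rgb)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),  # hypothetical (x, y) click near a suspected lesion
    point_labels=np.array([1]),           # 1 marks the point as foreground
    multimask_output=True,                # return several candidate masks
)
best_mask = masks[scores.argmax()]        # boolean HxW mask with the highest score
```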
- ResNet50: ResNet50 is a deep learning model developed by Microsoft Research, consisting of 50 layers as its name suggests [68]. The model aims at high accuracy in image recognition and object detection. In its 50-layer architecture, the initial layers extract features from the input image using a 7 × 7 filter. The “residual blocks”, which counteract vanishing gradients, form convolution blocks in the order 1 × 1, 3 × 3, and 1 × 1 [69]. Activation functions are present in the final layer of the model. Within this systematic review, study S8 [18] compared ResNet50 with the AlexNet, GoogleNet, and VGG19 deep learning models for the detection of apical lesions; the accuracy of ResNet50 was 86.65%. Furthermore, ResNet50 serves as the backbone for networks such as RetinaNet. On this basis, ResNet50 is considered an improvable and effective method.
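The 1 × 1, 3 × 3, 1 × 1 bottleneck with an identity shortcut can be sketched as follows; this is a simplified, stride-1 illustration rather than the full ResNet50 definition.

```python
# ResNet50-style bottleneck block: 1x1 reduce, 3x3, 1x1 expand convolutions
# with an identity shortcut that keeps gradients flowing through deep stacks.
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, channels, mid):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=1), nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))  # residual: output = F(x) + x

out = Bottleneck(256, 64)(torch.randn(1, 256, 56, 56))  # shape preserved
```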
- DetectNet: DetectNet is a deep learning model developed by NVIDIA primarily for object detection [70]. It offers an advanced structure for real-time and video object detection; its workflow is training-based and designed for GPU usage. The model contains a CNN layer in its architecture, and a “bounding box regressor” structure determines the exact positions of objects within the image [71]. In study S2 [12], the DetectNet model, which performs scaling and normalization on input images as pre-processing, detected cyst lesions on panoramic images with an accuracy of 75–77%. While the model does not yet provide sufficient accuracy for healthcare applications, it is anticipated that it can be improved using different techniques.
- MobileNetV2: The MobileNetV2 deep learning model, developed by Google with both optimization and efficiency in mind, is used as a solution to resource constraints in mobile and embedded systems [72,73]. It is the second generation of the MobileNet model. In the network architecture, inverted residual connections are used instead of standard residual connections [74], creating a compact system with low computational requirements. The information obtained from the image is increased by expanding the first layer structure; depthwise separable convolution is then applied to the expanded features, and a linear transformation is applied over the features in place of a final activation. In study S25 [35], classification between radicular cysts and periapical lesions was performed with MobileNetV2, and YOLOv3 was subsequently used for lesion and radicular cyst detection. Different metric values were obtained for classification and detection: for radicular cyst classification, sensitivity was 95% and specificity 86%, while for periapical lesions these values were 77% and 93%, respectively; for detection, sensitivity was 83% for radicular cysts and 74% for periapical lesions. Based on these results, MobileNetV2, with its network structure and features, can be used in object detection and is a deep learning model worth comparing for lesion detection. Table 10 presents the state-of-the-art approaches in the primary studies, along with their counts and percentages.
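The expansion/depthwise/linear-bottleneck sequence described above can be sketched as a simplified inverted residual block; batch normalization and striding are omitted for brevity, so this is an illustration rather than MobileNetV2's exact block.

```python
# MobileNetV2-style inverted residual: 1x1 expansion, 3x3 depthwise convolution,
# then a linear (no activation) 1x1 projection with a skip connection.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, ch, expand=6):
        super().__init__()
        hidden = ch * expand
        self.block = nn.Sequential(
            nn.Conv2d(ch, hidden, 1), nn.ReLU6(inplace=True),        # expand
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden),  # depthwise conv
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, ch, 1),                                # linear bottleneck
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection (stride-1 case)

out = InvertedResidual(32)(torch.randn(1, 32, 64, 64))  # -> (1, 32, 64, 64)
```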
3.3.3. What Are the Data Augmentation Methods Used for Dental Images?
- Brightness and contrast: Brightness and contrast adjustments are techniques used to actively improve visual image quality [76]; contrast enhancement is typically applied together with brightness adjustment. Among the primary studies, brightness and contrast techniques were effectively utilized in four studies: S3, S4, S10, and S12 (a combined code sketch of the augmentation methods in this list is given after the list). In study S3 [13], the need for sufficient data for detection was emphasized, with deficiencies identified in both data quantity and labeling accuracy; it was suggested that detection could be improved by employing diverse data augmentation and magnification methods. Similarly, in study S10 [20], brightness and contrast adjustments yielded high accuracy in detecting apical lesions, although the lack of high-resolution images limited the study's overall advancement.
- Horizontal mirroring: Horizontal mirroring can be defined as a data augmentation technique that involves flipping an image along its vertical axis, effectively swapping the left and right sides. This method was utilized in study S3 [13] to generate additional images following brightness and contrast adjustments. In the study, which focused on jawbone detection, the number of tooth images was increased through horizontal mirroring, and the detection process was conducted on the augmented dataset.
- Trapezoid transformation: The trapezoid transformation, a geometric technique that alters the perspective of an image to enhance visual diversity, was used in the S3 study. In the S3 study [13], where brightness, contrast, and horizontal mirroring were previously applied, trapezoid transformation was used in the final step to visualize the jawbone from alternative angles. This approach enhanced the system’s ability to increase accuracy in jawbone detection through deep learning.
- Resize: Resizing, a data augmentation method, adjusts image dimensions to facilitate the application of deep learning models for tasks such as detection or classification. In the S4 study [14], the resizing method was employed to detect dental diseases. This approach successfully generated sufficient data to identify different types of lesions.
- Cropping (clipping): The cropping technique involves trimming a specific part of an image to create a new one, aiming to extract different regions from the same data; focusing on the region containing the detected lesion is particularly crucial. The cropping technique was applied in two primary studies. In study S4 [14], resizing was performed first, followed by cropping of the resized data, enabling the generation of additional data. Similarly, in study S25 [35], cropping was applied to focus on specific parts of the tooth for enhanced analysis.
- Flip (horizontal flip, vertical flip): Flipping is an image processing technique that mirrors an image along an axis. It can be used to augment the dataset by horizontally and vertically flipping images before they are passed to the deep learning model [77]. A total of five primary studies employed flipping, namely, S4, S10, S18, S23, and S24. In study S23 [33], the authors performed deep learning-based detection using segmentation for apical lesions. Apical lesions were segmented from 470 panoramic images using the SpatialConfiguration-Net AI algorithm, developed on the basis of the U-Net architecture, and the images were flipped during this process. A total of 63 apical lesions were detected across 47 panoramic images used as test data. This approach led to a deep convolutional neural network (D-CNN) algorithm for apical lesion detection. Evaluating the metrics, sensitivity was 92%, precision 84%, and the F1-score 88%. After analyzing the results, the study questioned whether these outcomes were sufficient for use in dental clinics and concluded that deep learning models could prove effective in detecting apical lesions.
- Rescale: Scaling is a data augmentation technique that modifies an image's size to improve model accuracy. In study S18 [28], periapical lesions were detected using a lightweight deep learning model. The goal was to enhance panoramic radiograph images containing periapical lesions; to achieve this, illumination and contrast processing was performed in conjunction with scaling, using the Retinex algorithm. Detection was then carried out with the U-Net architecture, with over 550 data points used in the analysis of the modified images. In the metric evaluation, accuracy was 95.8%, the F1-score 95.5%, and sensitivity 95.2%. The study demonstrated that this method could be effective and innovative for future applications on dental images, and suggested that these metric values could be further improved with additional data augmentation techniques.
- Shift (width shift range, height shift range, blurry shift): Shifting is a data augmentation technique that involves translating an image along the axes to enhance the generalization ability and performance of deep learning models. In the context of dental images, this technique aids in highlighting the region where the lesion is located, making it perceptible to both the model and the human observer. Shifting was applied in the primary studies S10 and S18 along both the horizontal and vertical axes. In the S10 study [20], this technique was used by shifting the axes to detect apical lesions. In the S18 study [28], shifting was performed in addition to scaling. Since scaling involves changes in pixel values, combining it with shifting allowed the model to achieve better results due to the enhancement provided by this additional data augmentation technique.
- Rotation and reflection: Rotating the image by a certain angle and reflecting it along a line are among the data augmentation techniques that contribute to the more effective execution of tasks such as detection and classification. Both methods were integrated and applied in five primary studies: S10, S24, S26, S27, and S29. In the S24 study [34], the authors highlighted that periapical lesions and associated conditions, such as tumors and cysts, are prevalent in dental diseases. A dataset was created with the help of 24 oral and maxillofacial specialists who identified dental diseases from 2902 panoramic images. Subsequently, deep learning methods based on U-Net architecture were tested using expert evaluations. The results showed a sensitivity of 92% and specificity of 84%. The study also reported an F1-score of 88%, a Dice coefficient of 88%, and an IoU metric value of 79%. With these results, 14 oral and maxillofacial experts confirmed that good results were achieved in lesion detection in panoramic images. The study concluded that the future development of deep learning in dental clinics is promising based on these metrics. However, it was noted that 49% of periapical radiolucencies were missed.
- Sharpening: The sharpening technique, used to enhance details in an image, may assist in lesion detection by sharpening tooth contours, offering an effective approach in dental imaging. In the primary study S10 [20], sharpening was applied to panoramic radiographs to improve the detection of apical lesions.
- Zoom: Zooming, a technique that adjusts the scale of an image to enhance the model’s ability to learn from different object sizes, allows for the enlargement of the visible part of a lesion, enabling the deep learning model to better learn the regions with a higher likelihood of lesion occurrence. In the primary study S26 [36], a new dataset was generated by zooming in on the root regions of teeth in panoramic radiographs to detect periapical lesions. By narrowing the focus to potential lesion areas instead of the entire image, the study achieved a more targeted analysis.
- Grayscale: The grayscale filter simplifies image processing by converting Red, Green, Blue (RGB)-based images into grayscale, reducing the complexity in deep learning models and facilitating the detection of conditions such as lesions, cysts, and missing teeth. In the primary studies S27 [37] and S29 [39], this technique was utilized as a foundational pre-processing step for these purposes.
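As referenced above, the following sketch combines several of the listed augmentation methods into a single torchvision pipeline; all parameter values are illustrative assumptions, not settings reported by the primary studies.

```python
# One torchvision pipeline combining several augmentations from this list.
import torchvision.transforms as T

augment = T.Compose([
    T.Grayscale(num_output_channels=1),            # RGB -> grayscale
    T.Resize((512, 512)),                          # resize/rescale
    T.ColorJitter(brightness=0.2, contrast=0.2),   # brightness and contrast
    T.RandomHorizontalFlip(p=0.5),                 # horizontal mirroring/flip
    T.RandomVerticalFlip(p=0.5),                   # vertical flip
    T.RandomAffine(degrees=10, translate=(0.05, 0.05)),  # rotation and shift
    T.RandomAdjustSharpness(sharpness_factor=2, p=0.5),  # sharpening
    T.RandomResizedCrop(512, scale=(0.8, 1.0)),    # zoom/crop
    T.ToTensor(),
])
# Applying `augment` to a PIL radiograph yields a new training variant each epoch.
```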
3.3.4. What Are the Challenges and Proposed Solutions in Dental Lesion Detection?
- Lack of data: Concepts such as AI, deep learning, and machine learning require sufficient data to work effectively. Researchers cannot adopt deep learning models due to the lack of data and the lack of explainability of the trained models [78]. Therefore, lack of data is seen as one of the difficulties that arise in the performance of the model used. This situation, which has a significant effect on the detection capability of the model, can reduce the accuracy obtained.
- Image quality: Especially in the detection process from radiographic images such as panoramic and CBCT, the resolution must be of high quality. Image quality may deteriorate due to distortions such as blurring or the presence of noise. The distortion may cause a deficient, inappropriate image to be obtained [79]. As a result of this difficulty in medical images, the model cannot perform adequately. Therefore, image processing techniques such as noise reduction and cropping should be applied.
- Ability to generalize: The generalization capability of a model is defined by how it responds to new data outside the dataset used. One way to assess this is to use samples from uncertain regions of the feature space as test data; performance on parts not seen in the training data indicates the model's ability to generalize [80].
- Lesion indistinctness: In dental images, lesions are very difficult to see with the naked eye. Diseases such as cysts and caries are therefore hard to detect on images with lesion indistinctness, and a solution is needed. Many deep learning models designed to detect lesions in medical imaging depend on AI systems that search for abnormally colored masses of a specific shape. The tunable parts of such systems include healthy tissue colors or the minimum length and width range for a potential mass. Improving these systems may counteract lesion indistinctness [81].
- Model complexity: Complex deep learning models require more computation, and the risk of overfitting grows as the complexity of the selected model increases. The study of model complexity in deep learning is still in its infancy [82].
- Risk of overfitting: Although overfitting yields high accuracy during training, these results are misleading. Overfitting refers to the situation where the network learns the features of the training dataset perfectly but does not generalize well to the test dataset [83]. This adaptation is a negative outcome because it reduces the generalization ability of the model; common countermeasures are sketched below.
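As referenced above, the following minimal sketch combines three common countermeasures against overfitting—dropout, weight decay, and early stopping on a validation loss—where the model, all values, and the validation stub are illustrative assumptions.

```python
# Illustrative overfitting countermeasures: dropout, weight decay, early stopping.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(),
                      nn.Dropout(p=0.5),  # randomly drop units during training
                      nn.Linear(128, 2))
# weight_decay adds L2 regularization to every update.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)

def validate(model):
    # Placeholder: in practice, return the mean loss on a held-out validation set.
    return torch.rand(1).item()

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(100):
    # ... one training epoch over the (augmented) training data would run here ...
    val_loss = validate(model)
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), "best.pt")  # keep the best checkpoint
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # stop once validation stops improving
            break
```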
4. Discussion
1. Which lesion types were identified for detection and classification in dental panoramic, periapical, and CBCT images?
2. What are the state-of-the-art approaches used to detect lesions in dental images?
3. Which techniques are used for data augmentation in dental images?
4. What are the challenges and proposed solutions for detecting dental lesions?
4.1. Related Work
4.2. Limitations and Potential Threats to Validity
- Date: This study covers primary studies published from 2019 to August 2024.
- Literature type: This study includes studies published in peer-reviewed journals and conference/workshop/symposium proceedings. Secondary systematic review studies, gray literature, and research studies such as surveys were excluded from the primary study candidate pool.
- Although this study focused on lesion detection in dental images, exclusion criterion EC-4 removed lesion types not related to teeth from the automatic search results.
- We investigated data augmentation techniques to address the problem of class imbalance due to the difficulty of acquiring and labeling dental images.
- In the primary study selection process, we focused on deep learning models, which in recent years have enabled fast and accurate detection in medical images, rather than on studies involving classical machine learning algorithms.
4.3. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Neyaz, Z.; Gadodia, A.; Gamanagatti, S.; Mukhopadhyay, S. Radiographical approach to jaw lesions. Singap. Med. J. 2008, 49, 165–176. [Google Scholar]
- Özgür, A.; Kara, E.; Arpacı, R.; Arpacı, T.; Esen, K.; Kara, T.; Duce, M.N.; Apaydın, F.D. Nonodontogenic mandibular lesions: Differentiation based on CT attenuation. Diagn. Interv. Radiol. 2014, 20, 475. [Google Scholar] [CrossRef] [PubMed]
- AbuSalim, S.; Zakaria, N.; Islam, M.R.; Kumar, G.; Mokhtar, N.; Abdulkadir, S.J. Analysis of deep learning techniques for dental informatics: A systematic literature review. Healthcare 2022, 10, 1892. [Google Scholar] [CrossRef] [PubMed]
- Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
- Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering. 2007. Available online: https://www.researchgate.net/publication/302924724_Guidelines_for_performing_Systematic_Literature_Reviews_in_Software_Engineering (accessed on 3 December 2024).
- Wohlin, C. Guidelines for snowballing in systematic literature studies and a replication in software engineering. In Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering, London, UK, 13–14 May 2014; pp. 1–10. [Google Scholar]
- Basili, V.R. Applying the Goal/Question/Metric paradigm in the experience factory. Softw. Qual. Assur. Meas. A Worldw. Perspect. 1993, 7, 21–44. [Google Scholar]
- Kitchenham, B.A.; Budgen, D.; Brereton, P. Evidence-Based Software Engineering and Systematic Reviews; CRC Press: Boca Raton, FL, USA, 2015; Volume 4. [Google Scholar]
- Liberati, A.; Altman, D.G.; Tetzlaff, J.; Mulrow, C.; Gøtzsche, P.C.; Ioannidis, J.P.; Clarke, M.; Devereaux, P.J.; Kleijnen, J.; Moher, D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. Ann. Intern. Med. 2009, 151, W-65. [Google Scholar] [CrossRef]
- Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Ann. Intern. Med. 2009, 151, 264–269. [Google Scholar] [CrossRef]
- Moidu, N.P.; Sharma, S.; Chawla, A.; Kumar, V.; Logani, A. Deep learning for categorization of endodontic lesion based on radiographic periapical index scoring system. Clin. Oral Investig. 2022, 26, 651–658. [Google Scholar] [CrossRef]
- Watanabe, H.; Ariji, Y.; Fukuda, M.; Kuwada, C.; Kise, Y.; Nozawa, M.; Sugita, Y.; Ariji, E. Deep learning object detection of maxillary cyst-like lesions on panoramic radiographs: Preliminary study. Oral Radiol. 2021, 37, 487–493. [Google Scholar] [CrossRef]
- Gwak, M.; Yun, J.P.; Lee, J.Y.; Han, S.S.; Park, P.; Lee, C. Attention-guided jaw bone lesion diagnosis in panoramic radiography using minimal labeling effort. Sci. Rep. 2024, 14, 4981. [Google Scholar] [CrossRef]
- El Bagoury, M.; Al-Shetairy, M.; Ahmed, O.; Mehanny, S.S.; Hamdy, T.; Omar, G.; Akram, M.; Mohamed, R.; Amr, H.; Gad, W. Dental Disease Detection based on CNN for Panoramic Dental Radiographs. In Proceedings of the 2023 Eleventh International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, Egypt, 21–23 November 2023; pp. 530–535. [Google Scholar]
- Kaarthik, K.; Vivek, G.; Dineshkumar, K.; Balasubramani, A. Detection and Classification of Dental Defect using CNN. In Proceedings of the 2022 6th International Conference on Intelligent Computing and Control Systems (ICICCS), Madurai, India, 25–27 May 2022; pp. 1136–1142. [Google Scholar]
- Chen, S.L.; Chen, T.Y.; Huang, Y.C.; Chen, C.A.; Chou, H.S.; Huang, Y.Y.; Lin, W.C.; Li, T.C.; Yuan, J.J.; Abu, P.A.R.; et al. Missing teeth and restoration detection using dental panoramic radiography based on transfer learning with CNNs. IEEE Access 2022, 10, 118654–118664. [Google Scholar] [CrossRef]
- Demir, K.; Aksakalli, I.K.; Bayğin, N.; Sökmen, Ö.Ç. Deep Learning Based Lesion Detection on Dental Panoramic Radiographs. In Proceedings of the 2023 Innovations in Intelligent Systems and Applications Conference (ASYU), Sivas, Turkiye, 11–13 October 2023; pp. 1–6. [Google Scholar]
- Li, C.W.; Lin, S.Y.; Chou, H.S.; Chen, T.Y.; Chen, Y.A.; Liu, S.Y.; Liu, Y.L.; Chen, C.A.; Huang, Y.C.; Chen, S.L.; et al. Detection of dental apical lesions using CNNs on periapical radiograph. Sensors 2021, 21, 7049. [Google Scholar] [CrossRef] [PubMed]
- Chuo, Y.; Lin, W.M.; Chen, T.Y.; Chan, M.L.; Chang, Y.S.; Lin, Y.R.; Lin, Y.J.; Shao, Y.H.; Chen, C.A.; Chen, S.L.; et al. A high-accuracy detection system: Based on transfer learning for apical lesions on periapical radiograph. Bioengineering 2022, 9, 777. [Google Scholar] [CrossRef] [PubMed]
- Song, I.S.; Shin, H.K.; Kang, J.H.; Kim, J.E.; Huh, K.H.; Yi, W.J.; Lee, S.S.; Heo, M.S. Deep learning-based apical lesion segmentation from panoramic radiographs. Imaging Sci. Dent. 2022, 52, 351. [Google Scholar] [CrossRef]
- Hamdan, M.H.; Tuzova, L.; Mol, A.; Tawil, P.Z.; Tuzoff, D.; Tyndall, D.A. The effect of a deep-learning tool on dentists’ performances in detecting apical radiolucencies on periapical radiographs. Dentomaxillofac. Radiol. 2022, 51, 20220122. [Google Scholar] [CrossRef]
- İçöz, D.; Terzioğlu, H.; Özel, M.; Karakurt, R. Evaluation of an Artificial Intelligence System for the Diagnosis of Apical Periodontitis on Digital Panoramic Images. Niger. J. Clin. Pract. 2023, 26, 1085–1090. [Google Scholar] [CrossRef]
- Ba-Hattab, R.; Barhom, N.; Osman, S.A.A.; Naceur, I.; Odeh, A.; Asad, A.; Al-Najdi, S.A.R.; Ameri, E.; Daer, A.; Silva, R.L.D.; et al. Detection of periapical lesions on panoramic radiographs using deep learning. Appl. Sci. 2023, 13, 1516. [Google Scholar] [CrossRef]
- Setzer, F.C.; Shi, K.J.; Zhang, Z.; Yan, H.; Yoon, H.; Mupparapu, M.; Li, J. Artificial intelligence for the computer-aided detection of periapical lesions in cone-beam computed tomographic images. J. Endod. 2020, 46, 987–993. [Google Scholar] [CrossRef]
- Hadzic, A.; Urschler, M.; Press, J.N.A.; Riedl, R.; Rugani, P.; Štern, D.; Kirnbauer, B. Evaluating a Periapical Lesion Detection CNN on a Clinically Representative CBCT Dataset—A Validation Study. J. Clin. Med. 2023, 13, 197. [Google Scholar] [CrossRef]
- Krois, J.; Garcia Cantu, A.; Chaurasia, A.; Patil, R.; Chaudhari, P.K.; Gaudin, R.; Gehrung, S.; Schwendicke, F. Generalizability of deep learning models for dental image analysis. Sci. Rep. 2021, 11, 6102. [Google Scholar] [CrossRef]
- Çelik, B.; Savaştaer, E.F.; Kaya, H.I.; Çelik, M.E. The role of deep learning for periapical lesion detection on panoramic radiographs. Dentomaxillofac. Radiol. 2023, 52, 20230118. [Google Scholar] [CrossRef] [PubMed]
- Latke, V.; Narawade, V. Detection of dental periapical lesions using retinex based image enhancement and lightweight deep learning model. Image Vis. Comput. 2024, 146, 105016. [Google Scholar] [CrossRef]
- Adnan, N.; Umer, F.; Malik, S.; Hussain, O.A. Multi-model deep learning approach for segmentation of teeth and periapical lesions on pantomographs. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2024, 138, 196–204. [Google Scholar] [CrossRef] [PubMed]
- Al-Awasi, K.A.; Altaroti, G.A.; Aldajani, M.A.; Alshammari, A.A.; Almunasif, M.A.; AlQarni, A.A.M.; Aldokhi, M.A.; Ezzeldin, T.; Siddiqui, I.A. Apical status and prevalence of endodontic treated teeth among Saudi adults in Eastern province: A prospective radiographic evaluation. Saudi Dent. J. 2022, 34, 473–478. [Google Scholar] [CrossRef] [PubMed]
- Ekert, T.; Krois, J.; Meinhold, L.; Elhennawy, K.; Emara, R.; Golla, T.; Schwendicke, F. Deep learning for the radiographic detection of apical lesions. J. Endod. 2019, 45, 917–922. [Google Scholar] [CrossRef]
- Kirnbauer, B.; Hadzic, A.; Jakse, N.; Bischof, H.; Stern, D. Automatic detection of periapical osteolytic lesions on cone-beam computed tomography using deep convolutional neuronal networks. J. Endod. 2022, 48, 1434–1440. [Google Scholar] [CrossRef]
- Bayrakdar, I.S.; Orhan, K.; Çelik, Ö.; Bilgir, E.; Sağlam, H.; Kaplan, F.A.; Görür, S.A.; Odabaş, A.; Aslan, A.F.; Różyło-Kalinowska, I. A U-Net approach to apical lesion segmentation on panoramic radiographs. BioMed Res. Int. 2022, 2022, 7035367. [Google Scholar] [CrossRef]
- Endres, M.G.; Hillen, F.; Salloumis, M.; Sedaghat, A.R.; Niehues, S.M.; Quatela, O.; Hanken, H.; Smeets, R.; Beck-Broichsitter, B.; Rendenbach, C.; et al. Development of a deep learning algorithm for periapical disease detection in dental radiographs. Diagnostics 2020, 10, 430. [Google Scholar] [CrossRef]
- Ver Berne, J.; Saadi, S.B.; Politis, C.; Jacobs, R. A deep learning approach for radiological detection and classification of radicular cysts and periapical granulomas. J. Dent. 2023, 135, 104581. [Google Scholar] [CrossRef]
- Calazans, M.A.A.; Ferreira, F.A.B.; Alcoforado, M.d.L.M.G.; Santos, A.d.; Pontual, A.d.A.; Madeiro, F. Automatic classification system for periapical lesions in cone-beam computed tomography. Sensors 2022, 22, 6481. [Google Scholar] [CrossRef]
- Sajad, M.; Shafi, I.; Ahmad, J. Automatic lesion detection in periapical X-Rays. In Proceedings of the 2019 International Conference on Electrical, Communication, and Computer Engineering (ICECCE), Swat, Pakistan, 24–25 July 2019; pp. 1–6. [Google Scholar]
- Ngoc, V.; Viet, D.H.; Anh, L.K.; Minh, D.Q.; Nghia, L.L.; Loan, H.K.; Tuan, T.M.; Ngan, T.T.; Tra, N.T. Periapical lesion diagnosis support system based on X-Ray images using machine learning technique. World 2021, 12, 190. [Google Scholar] [CrossRef]
- Latke, V.; Narawade, V. A New Approach towards Detection of Periapical Lesions using Artificial Intelligence. Grenze Int. J. Eng. Technol. (GIJET) 2023, 9, 396–402. [Google Scholar]
- Kitchenham, B.; Brereton, O.P.; Budgen, D.; Turner, M.; Bailey, J.; Linkman, S. Systematic literature reviews in software engineering–a systematic literature review. Inf. Softw. Technol. 2009, 51, 7–15. [Google Scholar] [CrossRef]
- Meirinhos, J.; Martins, J.; Pereira, B.; Baruwa, A.; Gouveia, J.; Quaresma, S.; Monroe, A.; Ginjeira, A. Prevalence of apical periodontitis and its association with previous root canal treatment, root canal filling length and type of coronal restoration–a cross-sectional study. Int. Endod. J. 2020, 53, 573–584. [Google Scholar] [CrossRef] [PubMed]
- Titinchi, F.; Morkel, J. Residual cyst of the jaws: A clinico-pathologic study of this seemingly inconspicuous lesion. PLoS ONE 2020, 15, e0244250. [Google Scholar] [CrossRef]
- Kwon, O.; Yong, T.H.; Kang, S.R.; Kim, J.E.; Huh, K.H.; Heo, M.S.; Lee, S.S.; Choi, S.C.; Yi, W.J. Automatic diagnosis for cysts and tumors of both jaws on panoramic radiographs using a deep convolution neural network. Dentomaxillofac. Radiol. 2020, 49, 20200185. [Google Scholar] [CrossRef]
- Kara, E.; İpek, B. Dental caries from the past to the future: Is it possible to reduce caries prevalence? Anatol. Curr. Med. J. 2024, 6, 240–247. [Google Scholar] [CrossRef]
- Blake, A.; Tuttle, T.; McKinney, R. Apical Periodontitis; StatPearls Publishing: Treasure Island, FL, USA, 2023. [Google Scholar]
- Chen, Z.; Chen, S.; Hu, F. CTA-UNet: CNN-transformer architecture UNet for dental CBCT images segmentation. Phys. Med. Biol. 2023, 68, 175042. [Google Scholar] [CrossRef]
- Rani, S.; Ghai, D.; Kumar, S.; Kantipudi, M.P.; Alharbi, A.H.; Ullah, M.A. Efficient 3D AlexNet architecture for object recognition using syntactic patterns from medical images. Comput. Intell. Neurosci. 2022, 2022, 7882924. [Google Scholar] [CrossRef]
- Ünsal, Ü.; Adem, K. Diş görüntüleri üzerinde görüntü işleme ve derin öğrenme yöntemleri kullanılarak çürük seviyesinin sınıflandırılması. Uluslararası Sivas Bilim Teknol. Üniversitesi Derg. 2022, 2, 30–53. [Google Scholar]
- Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo algorithm developments. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
- Fitria, M.; Elma, Y.; Oktiana, M.; Saddami, K.; Novita, R.; Putri, R.; Rahayu, H.; Habibie, H.; Janura, S. The Deep Learning Model for Decayed-Missing-Filled Teeth Detection: A Comparison Between YOLOV5 and YOLOV8. Jordanian J. Comput. Inf. Technol. 2024, 10, 335–349. [Google Scholar] [CrossRef]
- Wang, Z.J.; Turko, R.; Shaikh, O.; Park, H.; Das, N.; Hohman, F.; Kahng, M.; Chau, D.H.P. CNN explainer: Learning convolutional neural networks with interactive visualization. IEEE Trans. Vis. Comput. Graph. 2020, 27, 1396–1406. [Google Scholar] [CrossRef]
- Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 2020, 53, 5455–5516. [Google Scholar] [CrossRef]
- Hameed, T.; AmalaShanthi, S. A novel teeth segmentation on three-dimensional dental model using adaptive enhanced googlenet classifier. Multimed. Tools Appl. 2024, 83, 68547–68568. [Google Scholar] [CrossRef]
- Tang, P.; Wang, H.; Kwong, S. G-MS2F: GoogLeNet based multi-stage feature fusion of deep CNN for scene recognition. Neurocomputing 2017, 225, 188–197. [Google Scholar] [CrossRef]
- Verdhan, V.; Verdhan, V. VGGNet and AlexNet networks. In Computer Vision Using Deep Learning: Neural Network Architectures with Python and Keras; Apress: Berkeley, CA, USA, 2021; pp. 103–139. [Google Scholar]
- Lindberg, A.; Larsson, L. Evaluating the Performance of Extended Convolutional Networks: A Comparative Study between VGG-16 and VGG-23 for Image Classification. 2024. Available online: https://kth.diva-portal.org/smash/get/diva2:1886717/FULLTEXT01.pdf (accessed on 3 December 2024).
- Paymode, A.S.; Malode, V.B. Transfer learning for multi-crop leaf disease image classification using convolutional neural network VGG. Artif. Intell. Agric. 2022, 6, 23–33. [Google Scholar] [CrossRef]
- Karacı, A. VGGCOV19-NET: Automatic detection of COVID-19 cases from X-Ray images using modified VGG19 CNN architecture and YOLO algorithm. Neural Comput. Appl. 2022, 34, 8253–8274. [Google Scholar] [CrossRef]
- Awan, M.J.; Masood, O.A.; Mohammed, M.A.; Yasin, A.; Zain, A.M.; Damaševičius, R.; Abdulkareem, K.H. Image-based malware classification using VGG19 network and spatial convolutional attention. Electronics 2021, 10, 2444. [Google Scholar] [CrossRef]
- Nife, N.I.; Chtourou, M. A Comprehensive Study of Deep Learning and Performance Comparison of Deep Neural Network Models (YOLO, RetinaNet). Int. J. Online Biomed. Eng. 2023, 19, 62. [Google Scholar]
- Wu, S.; Yang, J.; Wang, X.; Li, X. Iou-balanced loss functions for single-stage object detection. Pattern Recognit. Lett. 2022, 156, 96–103. [Google Scholar] [CrossRef]
- Zhang, H.; Chang, H.; Ma, B.; Shan, S.; Chen, X. Cascade RetinaNet: Maintaining consistency for single-stage object detection. arXiv 2019, arXiv:1907.06881. [Google Scholar]
- Gao, S.H.; Cheng, M.M.; Zhao, K.; Zhang, X.Y.; Yang, M.H.; Torr, P. Res2net: A new multi-scale backbone architecture. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 652–662. [Google Scholar] [CrossRef] [PubMed]
- Koonce, B.; Koonce, B.E. Convolutional Neural Networks with Swift for Tensorflow: Image Recognition and Dataset Categorization; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
- Kömürcü, E.Y. Methods for Developing Tiny Convolutional Neural Networks for Deployment on Embedded Systems. 2023. Available online: https://uu.diva-portal.org/smash/get/diva2:1809980/FULLTEXT01.pdf (accessed on 3 December 2024).
- Setiawan, W.; Ghofur, A.; Rachman, F.H.; Rulaningtyas, R. Deep convolutional neural network alexnet and squeezenet for maize leaf diseases image classification. In Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control; 2021; Available online: https://kinetik.umm.ac.id/index.php/kinetik/article/view/1335 (accessed on 3 December 2024).
- Boesch, G. Segment Anything Model (SAM)—The Complete 2025 Guide. Available online: https://viso.ai/deep-learning/segment-anything-model-sam-explained/ (accessed on 3 June 2024).
- Mandal, B.; Okeukwu, A.; Theis, Y. Masked face recognition using resnet-50. arXiv 2021, arXiv:2104.08997. [Google Scholar]
- Mantri, K.; Deora, D.B.S. Gradient-Watershed Transform Segmentation and ResNet50-Based Classification and Detection of Tumor from MRI Images. Stoch. Model. 2024, 26, 56–67. [Google Scholar]
- Byzkrovnyi, O.; Smelyakov, K.; Chupryna, A.; Lanovyy, O. Comparison of Object Detection Algorithms for the Task of Person Detection on Jetson TX2 NX Platform. In Proceedings of the 2024 IEEE Open Conference of Electrical, Electronic and Information Sciences (eStream), Vilnius, Lithuania, 25 April 2024; pp. 1–6. [Google Scholar]
- Rajpura, P.S.; Bojinov, H.; Hegde, R.S. Object detection using deep cnns trained on synthetic images. arXiv 2017, arXiv:1706.06782. [Google Scholar]
- Dong, K.; Zhou, C.; Ruan, Y.; Li, Y. MobileNetV2 model for image classification. In Proceedings of the 2020 2nd International Conference on Information Technology and Computer Application (ITCA), Guangzhou, China, 18–20 December 2020; pp. 476–480. [Google Scholar]
- Tarek, H.; Aly, H.; Eisa, S.; Abul-Soud, M. Optimized deep learning algorithms for tomato leaf disease detection with hardware deployment. Electronics 2022, 11, 140. [Google Scholar] [CrossRef]
- Singh, B.; Toshniwal, D.; Allur, S.K. Shunt connection: An intelligent skipping of contiguous blocks for optimizing MobileNet-V2. Neural Netw. 2019, 118, 192–203. [Google Scholar] [CrossRef]
- Shorten, C.; Khoshgoftaar, T.M. A survey on image data augmentation for deep learning. J. Big Data 2019, 6, 1–48. [Google Scholar] [CrossRef]
- Huang, S.C.; Yeh, C.H. Image contrast enhancement for preserving mean brightness without losing image features. Eng. Appl. Artif. Intell. 2013, 26, 1487–1492. [Google Scholar] [CrossRef]
- Anwar, T.; Zakir, S. Effect of image augmentation on ECG image classification using deep learning. In Proceedings of the 2021 International Conference on Artificial Intelligence (ICAI), Islamabad, Pakistan, 5–7 April 2021; pp. 182–186. [Google Scholar]
- Whang, S.E.; Roh, Y.; Song, H.; Lee, J.G. Data collection and quality challenges in deep learning: A data-centric ai perspective. VLDB J. 2023, 32, 791–813. [Google Scholar] [CrossRef]
- Kamboj, A.; Bachute, M. Removing the noise from X-Ray image using Image processing Technology: A Bibliometric Survey and Future Research Directions. Libr. Philos. Pract. 2021, 5, 31. [Google Scholar]
- Yoon, J.S.; Oh, K.; Shin, Y.; Mazurowski, M.A.; Suk, H.I. Domain generalization for medical image analysis: A survey. arXiv 2023, arXiv:2310.08598. [Google Scholar]
- Mureșanu, S.; Hedeșiu, M.; Iacob, L.; Eftimie, R.; Olariu, E.; Dinu, C.; Jacobs, R.; Group, T.P. Automating Dental Condition Detection on Panoramic Radiographs: Challenges, Pitfalls, and Opportunities. Diagnostics 2024, 14, 2336. [Google Scholar] [CrossRef]
- Hu, X.; Chu, L.; Pei, J.; Liu, W.; Bian, J. Model complexity of deep learning: A survey. Knowl. Inf. Syst. 2021, 63, 2585–2619. [Google Scholar] [CrossRef]
- Prajapati, S.A.; Nagaraj, R.; Mitra, S. Classification of dental diseases using CNN and transfer learning. In Proceedings of the 2017 5th International Symposium on Computational and Business Intelligence (ISCBI), Dubai, United Arab Emirates, 11–14 August 2017; pp. 70–74. [Google Scholar]
- Sadr, S.; Mohammad-Rahimi, H.; Motamedian, S.R.; Zahedrozegar, S.; Motie, P.; Vinayahalingam, S.; Dianat, O.; Nosrat, A. Deep learning for detection of periapical radiolucent lesions: A systematic review and meta-analysis of diagnostic test accuracy. J. Endod. 2023, 49, 248–261. [Google Scholar] [CrossRef]
- Liu, Q.; Zhou, H.; Xu, Q.; Liu, X.; Wang, Y. PSGAN: A generative adversarial network for remote sensing image pan-sharpening. IEEE Trans. Geosci. Remote Sens. 2020, 59, 10227–10242. [Google Scholar] [CrossRef]
- Petersen, K.; Vakkalanka, S.; Kuzniarz, L. Guidelines for conducting systematic mapping studies in software engineering: An update. Inf. Softw. Technol. 2015, 64, 1–18. [Google Scholar] [CrossRef]
Query | Keywords
---|---
Q-1 | lesion detection OR object types AND dental images
Q-2 | lesion detection AND deep learning methods
Q-3 | state-of-the-art solutions AND dental images
Q-4 | deep AND/OR machine learning methods in dental lesion detection
Q-5 | application areas AND lesion detection

ID | Exclusion Criteria
---|---
EC-1 | Studies that do not include state-of-the-art methods for lesion detection.
EC-2 | Papers for which the full text is unavailable.
EC-3 | Articles that do not fully address and discuss lesion detection.
EC-4 | Articles that are systematic reviews, secondary studies, or surveys.
EC-5 | Studies on lesion detection that do not involve dental lesions.

ID | Inclusion Criteria
---|---
IC-1 | The title, abstract, or keywords include the key terms.
IC-2 | The abstract shows that the work is related to deep/machine learning methods.
IC-3 | The language of the study is English or Turkish.
IC-4 | The study detects lesions in panoramic/periapical/CBCT images.
Database | Total | Selected |
---|---|---|
Springer | 126 | 3 |
IEEE | 44 | 4 |
WoS | 24 | 9 |
PubMed | 44 | 1 |
ScienceDirect | 24 | 5 |
Google Scholar | 88 | 7 |
Total | 350 | 29 |
Paper Number | Authors | Title | Year | Source |
---|---|---|---|---|
S1 | Moidu et al. [11] | Deep learning for categorization of endodontic lesion based on radiographic periapical index scoring system | 2022 | Clinical Oral Investigations |
S2 | Watanabe et al. [12] | Deep learning object detection of maxillary cyst-like lesions on panoramic radiographs: preliminary study | 2020 | Oral Radiology |
S3 | Gwak et al. [13] | Attention-guided jawbone lesion diagnosis in panoramic radiography using minimal labeling effort | 2024 | Scientific Reports
S4 | El Bagoury et al. [14] | Dental Disease Detection based on CNN for Panoramic Dental Radiographs | 2023 | 2023 IEEE Eleventh International Conference on Intelligent Computing and Information Systems |
S5 | Kaarthik et al. [15] | Detection and Classification of Dental Defect using CNN | 2022 | Proceedings of the Sixth International Conference on Intelligent Computing and Control Systems |
S6 | Chen et al. [16] | Missing Teeth and Restoration Detection Using Dental Panoramic Radiography Based on Transfer Learning With CNNs | 2022 | IEEE Access |
S7 | Demir et al. [17] | Deep Learning Based Lesion Detection on Dental Panoramic Radiographs | 2023 | Innovations in Intelligent Systems and Applications Conference |
S8 | Li et al. [18] | Detection of Dental Apical Lesions Using CNNs on Periapical Radiograph | 2021 | Sensors |
S9 | Chuo et al. [19] | A High-Accuracy Detection System: Based on Transfer Learning for Apical Lesions on Periapical Radiograph | 2022 | Bioengineering |
S10 | Song et al. [20] | Deep learning-based apical lesion segmentation from panoramic radiographs | 2022 | Imaging Science in Dentistry |
S11 | Hamdan et al. [21] | The effect of a deep-learning tool on dentists’ performances in detecting apical radiolucencies on periapical radiographs | 2022 | Dentomaxillofacial Radiology |
S12 | İçöz et al. [22] | Evaluation of an Artificial Intelligence System for the Diagnosis of Apical Periodontitis on Digital Panoramic Images | 2023 | Nigerian Journal of Clinical Practice |
S13 | Ba Hattab et al. [23] | Detection of Periapical Lesions on Panoramic Radiographs Using Deep Learning | 2023 | Applied Sciences
S14 | Setzer et al. [24] | Artificial Intelligence for the Computer-aided Detection of Periapical Lesions in Cone-beam Computed Tomographic Images | 2020 | Journal of Endodontics |
S15 | Hadzic et al. [25] | Evaluating a Periapical Lesion Detection CNN on a Clinically Representative CBCT Dataset—A Validation Study | 2024 | Journal of Clinical Medicine |
S16 | Krois et al. [26] | Generalizability of deep learning models for dental image analysis | 2021 | Scientific Reports
S17 | Çelik et al. [27] | The role of deep learning for periapical lesion detection on panoramic radiographs | 2023 | Dentomaxillofacial Radiology |
S18 | Latke et al. [28] | Detection of dental periapical lesions using retinex based image enhancement and lightweight deep learning model | 2024 | Image and Vision Computing |
S19 | Adnan et al. [29] | Multi-model Deep Learning approach for segmentation of teeth and periapical lesions on Pantomographs | 2023 | Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology |
S20 | Al-Awasi et al. [30] | Apical status and prevalence of endodontic treated teeth among Saudi adults in Eastern province: A prospective radiographic evaluation | 2022 | Saudi Dental Journal
S21 | Ekert et al. [31] | Deep Learning for the Radiographic Detection of Apical Lesions | 2019 | Journal of Endodontics |
S22 | Kirnbauer et al. [32] | Automatic Detection of Periapical Osteolytic Lesions on Cone-beam Computed Tomography Using Deep Convolutional Neural Networks | 2022 | Journal of Endodontics |
S23 | Bayrakdar et al. [33] | A U-Net Approach to Apical Lesion Segmentation on Panoramic Radiographs | 2022 | BioMed Research International
S24 | Endres et al. [34] | Development of a Deep Learning Algorithm for Periapical Disease Detection in Dental Radiographs | 2020 | Diagnostics |
S25 | Ver Berne et al. [35] | A deep learning approach for radiological detection and classification of radicular cysts and periapical granulomas | 2023 | Journal of Dentistry |
S26 | Calazans et al. [36] | Automatic Classification System for Periapical Lesions in Cone-Beam Computed Tomography | 2022 | Sensors |
S27 | Sajad et al. [37] | Automatic Lesion Detection in Periapical X-Rays | 2019 | Proc. of the 1st International Conference on Electrical, Communication and Computer Engineering
S28 | Ngoc et al. [38] | Periapical Lesion Diagnosis Support System Based on X-Ray Images Using Machine Learning Technique | 2021 | World Journal of Dentistry |
S29 | Latke et al. [39] | A New Approach towards Detection of Periapical Lesions using Artificial Intelligence | 2023 | Grenze International Journal of Engineering and Technology |
No. | Publication | Long Name | Type | Instances |
---|---|---|---|---|
1 | - | Clinical Oral Investigations | Journal | 1
2 | - | Oral Radiology | Journal | 1 |
3 | - | Scientific Reports | Journal | 2
4 | ICICIS | 2023 IEEE Eleventh International Conference on Intelligent Computing and Information Systems | Conference | 1 |
5 | - | Proceedings of the Sixth International Conference on Intelligent Computing and Control Systems | Conference | 1 |
6 | - | IEEE Access | Journal | 1 |
7 | ASYU | Innovations in Intelligent Systems and Applications Conference | Conference | 1 |
8 | - | Sensors | Journal | 2 |
9 | - | Bioengineering | Journal | 1 |
10 | ISD | Imaging Science in Dentistry | Journal | 1 |
11 | - | Dentomaxillofacial Radiology | Journal | 2 |
12 | - | Nigerian Journal of Clinical Practice | Journal | 1 |
13 | JOE | Journal of Endodontics | Journal | 3 |
14 | J. Clin. Med | Journal of Clinical Medicine | Journal | 1 |
15 | - | Image and Vision Computing | Journal | 1 |
16 | - | Oral Surgery, Oral Medicine, Oral Pathology and Oral Radiology | Journal | 1 |
17 | - | Saudi Dental Journal | Journal | 1
18 | - | BioMed Research International | Journal | 1
19 | - | Diagnostics | Journal | 1 |
20 | - | Journal of Dentistry | Journal | 1 |
21 | ICECE | Proc. of the 1st International Conference on Electrical, Communication and Computer Engineering | Conference | 1 |
22 | - | World Journal of Dentistry | Journal | 1 |
23 | GRENZE | Grenze International Journal of Engineering and Technology | Journal | 1
24 | - | Applied Sciences | Journal | 1
Quality Metrics | Question | Q. Type |
---|---|---|
Q1 | Are the aims of the study clearly defined? | Reporting |
Q2 | Are the scope and the context of the study clearly stated? | Reporting |
Q3 | Is the proposed solution clearly explained and validated by an empirical study? | Reporting |
Q4 | Are the variables used in the study likely to be valid and reliable? | Relevance |
Q5 | Is the research process documented adequately? | Relevance |
Q6 | Are all study questions answered? | Relevance |
Q7 | Are the negative findings presented? | Rigor |
Q8 | Are the main findings stated clearly in terms of credibility, validity, and reliability? | Rigor |
Q9 | Do the conclusions relate to the aim and purpose of the study? | Credibility
Q10 | Does the report have implications for research and/or practice? | Credibility
Paper | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Q10 | Reporting (Q1–Q3) | Rigor (Q4–Q6) | Credibility (Q7–Q8) | Relevance (Q9–Q10) | Total
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
S1 | 1 | 1 | 1 | 1 | 1 | 1 | 0.5 | 1 | 1 | 1 | 3 | 3 | 1.5 | 2 | 9.5 |
S2 | 1 | 1 | 1 | 1 | 1 | 0.5 | 1 | 1 | 1 | 1 | 3 | 2.5 | 2 | 2 | 9.5 |
S3 | 1 | 1 | 1 | 1 | 1 | 0.5 | 0.5 | 1 | 1 | 1 | 3 | 2.5 | 1.5 | 2 | 9 |
S4 | 1 | 0.5 | 1 | 0.5 | 0.5 | 0.5 | 1 | 0.5 | 0.5 | 0.5 | 2.5 | 1.5 | 1.5 | 1 | 7 |
S5 | 1 | 1 | 1 | 0.5 | 1 | 0.5 | 0 | 0.5 | 1 | 1 | 3 | 2 | 0.5 | 2 | 7.5 |
S6 | 1 | 0.5 | 1 | 1 | 1 | 0.5 | 1 | 1 | 1 | 1 | 2.5 | 2.5 | 2 | 2 | 9 |
S7 | 1 | 1 | 1 | 1 | 1 | 0.5 | 0.5 | 1 | 1 | 1 | 3 | 2.5 | 1.5 | 2 | 9 |
S8 | 1 | 1 | 1 | 1 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 1 | 3 | 2 | 1 | 1.5 | 7.5 |
S9 | 1 | 0.5 | 1 | 1 | 1 | 0.5 | 0.5 | 1 | 1 | 1 | 2.5 | 2.5 | 1.5 | 2 | 8.5 |
S10 | 1 | 0.5 | 1 | 1 | 1 | 0.5 | 1 | 1 | 1 | 1 | 2.5 | 2.5 | 2 | 2 | 9 |
S11 | 1 | 1 | 1 | 1 | 0.5 | 0.5 | 0.5 | 1 | 1 | 1 | 3 | 2 | 1.5 | 2 | 8.5 |
S12 | 1 | 1 | 0.5 | 0 | 0.5 | 0.5 | 1 | 0.5 | 1 | 0.5 | 2.5 | 1 | 1.5 | 1.5 | 6.5 |
S13 | 1 | 1 | 1 | 0.5 | 1 | 0.5 | 1 | 0.5 | 1 | 1 | 3 | 2 | 1.5 | 2 | 8.5 |
S14 | 1 | 0.5 | 1 | 0.5 | 0 | 0.5 | 1 | 0.5 | 1 | 0.5 | 2.5 | 1 | 1.5 | 1.5 | 6.5 |
S15 | 1 | 1 | 1 | 1 | 0.5 | 0.5 | 1 | 1 | 1 | 1 | 3 | 2 | 2 | 2 | 9 |
S16 | 1 | 0.5 | 0.5 | 1 | 0 | 0.5 | 1 | 0.5 | 1 | 0.5 | 2 | 1.5 | 1.5 | 1.5 | 6.5 |
S17 | 1 | 1 | 1 | 1 | 1 | 0.5 | 1 | 1 | 1 | 1 | 3 | 2.5 | 2 | 2 | 9.5 |
S18 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 3 | 3 | 1 | 2 | 9 |
S19 | 1 | 1 | 1 | 0.5 | 1 | 0.5 | 1 | 1 | 1 | 1 | 3 | 2.5 | 2 | 2 | 9.5 |
S20 | 1 | 0.5 | 0.5 | 1 | 0.5 | 0.5 | 1 | 0.5 | 1 | 0.5 | 2 | 2 | 1.5 | 1.5 | 7 |
S21 | 1 | 0.5 | 0.5 | 0.5 | 0.5 | 0.5 | 1 | 0.5 | 1 | 1 | 2 | 1.5 | 1.5 | 2 | 7 |
S22 | 1 | 0.5 | 1 | 1 | 0.5 | 0.5 | 1 | 1 | 1 | 1 | 2.5 | 2 | 2 | 2 | 8.5 |
S23 | 1 | 1 | 1 | 1 | 1 | 0.5 | 1 | 1 | 1 | 1 | 3 | 2.5 | 2 | 2 | 9.5 |
S24 | 1 | 1 | 0.5 | 1 | 0.5 | 1 | 1 | 1 | 1 | 0.5 | 2.5 | 2.5 | 2 | 1.5 | 8.5 |
S25 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 3 | 3 | 1 | 2 | 9 |
S26 | 1 | 1 | 1 | 1 | 0.5 | 0.5 | 0 | 1 | 1 | 0.5 | 3 | 2 | 1 | 1.5 | 7.5 |
S27 | 1 | 1 | 1 | 1 | 1 | 0.5 | 1 | 1 | 1 | 1 | 3 | 2.5 | 2 | 2 | 9.5 |
S28 | 1 | 1 | 1 | 1 | 0.5 | 0.5 | 0.5 | 0.5 | 1 | 1 | 3 | 2 | 1 | 2 | 8 |
S29 | 1 | 1 | 1 | 1 | 0.5 | 1 | 1 | 1 | 1 | 1 | 3 | 2.5 | 2 | 2 | 9.5 |
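The four subtotal columns can be reproduced directly from the ten per-question scores (0, 0.5, or 1). The short Python sketch below does so under the grouping implied by the numeric subtotals (Reporting = Q1–Q3, Rigor = Q4–Q6, Credibility = Q7–Q8, Relevance = Q9–Q10); note that this grouping differs from the Q. Type labels in the quality-metrics table above, an inconsistency readers should keep in mind. The grouping here is inferred from the numbers, not stated in the text.

```python
# Reproduce the subtotal and total columns of the quality-assessment table.
# The question-to-category grouping is inferred from the numeric subtotals.
GROUPS = {
    "Reporting (Q1-Q3)": slice(0, 3),
    "Rigor (Q4-Q6)": slice(3, 6),
    "Credibility (Q7-Q8)": slice(6, 8),
    "Relevance (Q9-Q10)": slice(8, 10),
}

def subtotals(q: list[float]) -> dict[str, float]:
    assert len(q) == 10, "expected one score per question Q1-Q10"
    row = {name: sum(q[s]) for name, s in GROUPS.items()}
    row["Total"] = sum(q)
    return row

# S1's row from the table: Q1-Q10 = 1, 1, 1, 1, 1, 1, 0.5, 1, 1, 1
print(subtotals([1, 1, 1, 1, 1, 1, 0.5, 1, 1, 1]))
# {'Reporting (Q1-Q3)': 3, 'Rigor (Q4-Q6)': 3, 'Credibility (Q7-Q8)': 1.5,
#  'Relevance (Q9-Q10)': 2, 'Total': 9.5}
```
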
Types | Studies | Total | Percent |
---|---|---|---|
Periapical lesion | S1, S4, S9, S11, S13, S14, S15, S17, S18, S19, S20, S22, S24, S25, S26, S27, S28, S29 | 18 | 62.07%
Cyst lesion | S2, S25 | 2 | 6.90%
Jawbone lesion | S3 | 1 | 3.45%
Tooth decay lesion | S5, S6 | 2 | 6.90%
Apical lesion | S7, S8, S9, S10, S11, S12, S16, S19, S21, S23 | 10 | 34.48%
AI Models and Methods | Classification | Segmentation | Detection |
---|---|---|---|
U-Net | X | X | |
AlexNet | X | X | |
YOLOv3, YOLOv5, YOLOv8 | X | ||
CNN | X | X | X |
GoogLeNet | X | X |
Denti.AI | X |
VGG16, VGG19 | X | ||
DentaVn | X | ||
RetinaNet | X | ||
SqueezeNet | X | ||
SAM | X | ||
ResNet50 | X | ||
DetectNet | X | ||
MobileNetV2 | X |
Approaches | Studies | Total | Percent |
---|---|---|---|
U-Net | S10, S14, S15, S18, S19, S22, S23, S24 | 8 | 27.59%
AlexNet | S8, S9, S27 | 3 | 10.34%
YOLOv3, YOLOv5, YOLOv8 | S1, S4, S7, S12 | 4 | 13.79%
CNN | S5, S13, S19, S29 | 4 | 13.79%
GoogLeNet | S6, S8 | 2 | 6.90%
Denti.AI | S11 | 1 | 3.45%
VGG16, VGG19 | S8, S26 | 2 | 6.90%
DentaVn | S28 | 1 | 3.45%
RetinaNet | S17 | 1 | 3.45%
SqueezeNet | S6 | 1 | 3.45%
SAM | S7 | 1 | 3.45%
ResNet50 | S8 | 1 | 3.45%
DetectNet | S2 | 1 | 3.45%
MobileNetV2 | S25 | 1 | 3.45%
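U-Net is the most frequently adopted approach among the primary studies (8 of 29, 27.59%). As a reader aid, the following is a minimal PyTorch sketch of the encoder-decoder pattern with skip connections that these studies adapt; the two-level depth, channel widths, and single-channel radiograph input and output are illustrative assumptions, not the configuration of any particular primary study.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net: contracting path, bottleneck, expanding path with skips."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)          # grayscale radiograph in
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)        # 64 upsampled + 64 skip channels
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)         # 32 upsampled + 32 skip channels
        self.head = nn.Conv2d(32, 1, kernel_size=1)  # per-pixel lesion logit

    def forward(self, x):
        e1 = self.enc1(x)                      # full resolution
        e2 = self.enc2(self.pool(e1))          # 1/2 resolution
        b = self.bottleneck(self.pool(e2))     # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                   # (N, 1, H, W) mask logits

# Smoke test on a fake batch of two 256x256 radiographs.
model = TinyUNet()
print(model(torch.randn(2, 1, 256, 256)).shape)  # torch.Size([2, 1, 256, 256])
```
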
Methods | Studies | Total | Percent |
---|---|---|---|
Brightness and contrast | S3, S4, S10, S12 | 4 | 13.79%
Horizontal mirroring | S3 | 1 | 3.45%
Trapezoid transformation | S3 | 1 | 3.45%
Resize | S4 | 1 | 3.45%
Clipping | S4, S25 | 2 | 6.90%
Flip | S4, S10, S18, S23, S24 | 5 | 17.24%
Rescale | S18 | 1 | 3.45%
Shift | S10, S18 | 2 | 6.90%
Rotation and reflection | S10, S24, S26, S27, S29 | 5 | 17.24%
Sharpening | S10 | 1 | 3.45%
Zoom | S26 | 1 | 3.45%
Grayscale | S27, S29 | 2 | 6.90%
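Most of the methods above are standard, library-supported transforms. The sketch below expresses them as a single torchvision pipeline for illustration only: the parameter values are assumptions, RandomPerspective stands in for the trapezoid transformation, and RandomCrop for clipping. For segmentation studies, geometric transforms would need to be applied jointly to the image and its lesion mask, which this classification-style pipeline does not do.

```python
from torchvision import transforms

# One illustrative pipeline covering the augmentation families in the table.
# Parameters and the choice of stand-in operations are assumptions.
augment = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),           # grayscale (S27, S29)
    transforms.Resize((512, 512)),                         # resize (S4)
    transforms.RandomCrop(480),                            # clipping (S4, S25)
    transforms.RandomHorizontalFlip(p=0.5),                # flip / horizontal mirroring
    transforms.RandomRotation(degrees=10),                 # rotation (reflection is covered by flips)
    transforms.RandomAffine(degrees=0,
                            translate=(0.05, 0.05),        # shift (S10, S18)
                            scale=(0.9, 1.1)),             # rescale / zoom (S18, S26)
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # brightness and contrast
    transforms.RandomAdjustSharpness(sharpness_factor=2, p=0.3),  # sharpening (S10)
    transforms.RandomPerspective(distortion_scale=0.2, p=0.3),    # ~trapezoid transformation (S3)
    transforms.ToTensor(),                                 # PIL image -> tensor in [0, 1]
])
# Usage: tensor = augment(pil_radiograph)
```
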
No | Proposed Solutions |
---|---|
PS1 | Data augmentation |
PS2 | Image pre-processing techniques |
PS3 | Model optimization |
PS4 | Model training |
PS5 | Additional loss functions |
PS6 | Cross validation |
PS7 | Transfer learning |
PS8 | Performance evaluation |
PS9 | Model customization |
PS10 | Data diversification |
PS11 | Multiple model approach |
PS12 | Multi-scale CNN |
PS13 | Expert opinion |
Paper | Lack of Data | Image Quality | Ability to Generalize | Lesion Indistinctness | Model Complexity | Risk of Overfitting
---|---|---|---|---|---|---
S1 | PS1 | PS2 | PS3 | - | - | - |
S2 | PS1 | PS2 | - | PS4 | - | - |
S3 | PS1 | - | - | - | PS5 | - |
S4 | PS1 | - | - | PS6 | - | - |
S5 | PS1 | - | PS6 | - | - | - |
S6 | PS1 | - | - | - | PS7 | - |
S7 | PS1 | - | - | PS7 | - | PS6 |
S8 | PS1 | - | PS6 | PS7 | - | - |
S9 | PS1 | - | PS8 | PS9 | PS6 | - |
S10 | PS1 | PS2 | - | - | PS8 | - |
S11 | - | PS2 | - | PS9 | - | - |
S12 | - | PS2 | - | PS9 | - | - |
S13 | PS1 | PS2 | - | PS9 | - | - |
S14 | PS1 | PS2 | - | - | - | - |
S15 | PS1 | PS2 | PS7 | - | - | - |
S16 | PS1 | PS2 | - | - | - | PS10 |
S17 | PS1 | PS2 | - | PS7 | - | - |
S18 | PS1 | PS2 | - | - | - | - |
S19 | PS1 | PS2 | - | - | PS11 | - |
S20 | PS1 | PS2 | - | - | - | - |
S21 | PS1 | PS2 | - | PS7 | - | - |
S22 | PS1 | PS2 | PS7 | PS12 | - | - |
S23 | PS1 | PS2 | - | PS7 | - | PS6 |
S24 | PS1 | PS2 | PS6 | PS12 | - | - |
S25 | PS1 | - | PS7 | PS11 | - | - |
S26 | PS1 | PS2 | PS11 | - | - | -
S27 | - | PS2 | - | PS13 | - | - |
S28 | PS1 | PS2 | - | PS13 | - | - |
S29 | PS1 | PS2 | PS7 | PS9 | - | - |
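Transfer learning (PS7) appears against more challenge columns than any other remedy except data augmentation (PS1) and image pre-processing (PS2). Below is a minimal sketch of the usual recipe, here with an ImageNet-pretrained MobileNetV2 backbone (the architecture used in S25); the binary lesion/no-lesion head, the frozen feature extractor, and the optimizer settings are illustrative assumptions rather than any study's reported setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights (PS7: transfer learning).
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

# Freeze the pretrained feature extractor so scarce dental data
# (the "Lack of Data" column) only has to fit the small new head.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the 1000-class ImageNet head with a lesion / no-lesion head.
model.classifier[1] = nn.Linear(model.last_channel, 2)

# Optimize only the new head's parameters.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
```
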
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).