Article

Precision Medicine for Apical Lesions and Peri-Endo Combined Lesions Based on Transfer Learning Using Periapical Radiographs

1 Department of General Dentistry, Taoyuan Chang Gung Memorial Hospital, Taoyuan City 32023, Taiwan
2 Department of Operative Dentistry, Taoyuan Chang Gung Memorial Hospital, Taoyuan City 32023, Taiwan
3 Department of Program on Semiconductor Manufacturing Technology, Academy of Innovative Semiconductor and Sustainable Manufacturing, National Cheng Kung University, Tainan City 701401, Taiwan
4 Department of Information Management, Chung Yuan Christian University, Taoyuan City 32023, Taiwan
5 Department of Electrical Engineering, Ming Chi University of Technology, New Taipei City 243303, Taiwan
6 Department of Electronic Engineering, Feng Chia University, Taichung City 40724, Taiwan
7 Department of Electronic Engineering, Chung Yuan Christian University, Taoyuan City 32023, Taiwan
8 Department of Electrical Engineering, National Cheng Kung University, Tainan City 701401, Taiwan
9 Ateneo Laboratory for Intelligent Visual Environments, Department of Information Systems and Computer Science, Ateneo de Manila University, Quezon City 1108, Philippines
* Authors to whom correspondence should be addressed.
Bioengineering 2024, 11(9), 877; https://doi.org/10.3390/bioengineering11090877
Submission received: 7 August 2024 / Revised: 24 August 2024 / Accepted: 27 August 2024 / Published: 29 August 2024

Abstract

An apical lesion is caused by bacteria invading the tooth apex through caries. Periodontal disease is caused by plaque accumulation. Peri-endo combined lesions involve both diseases and significantly worsen dental prognosis. The lack of clear symptoms in the early stages of onset makes diagnosis challenging, and delayed treatment can allow the disease to spread, so early detection of infection is crucial for preventing complications. The periapical radiographs (PAs) used as the database were provided by Chang Gung Memorial Medical Center, Taoyuan, Taiwan, with permission from the Institutional Review Board (IRB): 02002030B0. The tooth apex image enhancement method is a new technique in PA detection. This image enhancement method is combined with convolutional neural networks (CNNs) to classify apical lesions, peri-endo combined lesions, and asymptomatic cases, and the results are compared with You Only Look Once-v8-Oriented Bounding Box (YOLOv8-OBB) disease detection. The contributions lie in the use of database augmentation and adaptive histogram equalization on individual tooth images, achieving the highest comprehensive validation accuracy of 95.23% with the ConvNextv2 model. Furthermore, the CNN outperformed YOLOv8 in identifying apical lesions, achieving an F1-score of 92.45%. For the classification of peri-endo combined lesions, the CNN attained the highest F1-score of 96.49%, whereas YOLOv8 scored 88.49%.


1. Introduction

Apical lesions are primarily caused by bacteria and microorganisms entering the apex through dental caries, trauma, overheating, malocclusion, and other pathways. Because the pulp chamber has a small capacity, initial inflammation of the pulp spreads along the canals to the apex, where it drives inflammation of the periapical tissues. The lack of obvious pain symptoms in the early stages makes the condition difficult to detect until the pulp has already become necrotic. Periodontal disease results from bacteria in plaque. Poor oral hygiene leads to the accumulation of bacteria in the gingiva and alveolar bone. In mild cases, it is primarily related to calculus formed by saliva calcification, which inflames the surrounding tissues. In severe cases, the inflammation worsens and erodes downward from the gingival margin, eventually separating the gingiva from the tooth roots. An apical lesion destroys the tooth from the inside out, whereas periodontal disease progresses from the outside in. When these two infections progress simultaneously and connect, the result is a peri-endo combined lesion, and the prognosis becomes significantly worse. According to the degree of damage and the capacity for repair, these infections can be categorized into primary pulpal infection, primary periodontal infection, independent manifestations of primary pulpal and periodontal infections, and combined manifestations of both [1]. Therefore, dentists must carefully evaluate the area below the crown–root junction to assess symptoms.
Differences in access to oral health services in the United States are largely driven by socioeconomic status. When a person's overall health is poor, oral diseases can worsen existing conditions and lead to further complications [2]. This is why improving service quality, accessibility, and patient-centered care across all levels is such an important goal. Currently, diagnosis relies on manual identification, imaging, vitality tests, and past clinical experience. Dentists must pay attention to the presence, absence, and location of any swelling or drainage. Besides necessary root canal treatment, patients may also require periodontal treatment. However, the success rate of tooth retention cannot be guaranteed after treatment [3]. During this process, different dentists have different perspectives and levels of experience, and the variety of symptoms involving pulpal, periodontal, or mixed lesions further increases the complexity. Untreated endodontic infections can lead to pocket formation, bone loss, calculus deposition, osteoclastic activity, and bone resorption. These infections can also impair wound healing and worsen periodontal disease [4]. Based on the above, this research aims to establish a medical image recognition system that adopts image-processing techniques and deep learning models to analyze periapical radiographs (PAs) and classify them into three dental conditions: 0—asymptomatic, 1—apical lesion, and 2—peri-endo combined lesion. The purpose is to achieve intelligent diagnosis, facilitating early detection of potential problems and enabling early treatment and prevention. It significantly reduces the time dentists spend on complicated tasks, allowing them to focus more on providing deeper patient care and improving clinical efficiency.
Numerous studies are underway in the medical field, where the integration of artificial intelligence (AI) and medical imaging has significantly advanced dental radiology. Integrating CNNs and other deep learning techniques can enhance diagnostic accuracy and optimize clinical workflows. Dental panoramic radiographs (DPRs) [5], PAs [6], and bitewing radiographs [7] are essential diagnostic tools in dentistry, providing comprehensive insights into various dental conditions. Automated detection systems are gradually replacing traditional manual interpretation. For example, the U-Net CNN algorithm has been used for segmentation in DPRs, achieving an F1-score of 0.828 [8]. Faster R-CNN technology has been successfully used to detect seven types of dental conditions on DPRs, such as apical lesions and implants, achieving an accuracy rate of 94.18% [9]. A model architecture based on Mask-RCNN and MobileNet-v2 detects and locates five types of periapical lesions with an accuracy of 94%, a mean average precision of 85%, and a mean intersection over union of 71.0% [10]. These systems effectively enhance dentists' diagnostic capabilities. Furthermore, the introduction of transfer learning has facilitated highly specialized CNN models for dental tasks, such as detecting retained roots [11], apical lesions [12], missing teeth [13], and restorations [7]. By using pre-trained CNN architectures, researchers have obtained outstanding performance and surpassed traditional methods. For instance, the YOLOv2 model achieves an accuracy of 89.3% in identifying implant positions, while the AlexNet model demonstrates an accuracy of 90.4% in evaluating the damage caused by peri-implantitis around implants [14].
In addition to the selection and training of models, image enhancement and segmentation techniques are also crucial, as PA films can contain a significant amount of noise. Moreover, the targets are apical lesions, peri-endo combined lesions, and asymptomatic cases, so it is challenging to enhance the symptomatic area effectively and precisely and to ensure that the improved features are captured and learned by the model. Sharpening, histogram equalization [15], and flat-field correction are employed to enhance images. Image segmentation is used to accurately label the position and number of each tooth, resulting in a 92.78% accuracy rate. This research adopts various image-processing techniques, including grayscale conversion [16], Gaussian high-pass filtering [17,18], adaptive histogram equalization [19], linear transformation, flat-field correction, and the negative film effect. These methods improve image quality and enhance features, thereby increasing the accuracy of subsequent training. The YOLOv8 model [20] not only detects the position of individual teeth but also precisely handles teeth at irregular angles with OBB technology [21]. This facilitates more efficient and accurate object detection and subsequent image cropping, which are crucial for building the CNN training dataset. In this study, several CNN models are chosen to classify the different diseases, and a control group uses YOLOv8 for object detection. By comparing and evaluating these models, the one with the best generalization capability and highest accuracy is selected. The purpose of using multiple models is to identify the best experimental method.
This study aims to use machine learning technology to assist dentists throughout the entire treatment process of apical lesions and peri-endo combined lesions: early detection and assessment of symptoms to avoid delayed treatment, mid-term assistance to prevent deterioration due to incomplete treatment, and final confirmation at a later stage to avoid recurrence. This comprehensive use of machine learning enhances diagnostic precision and ensures effective, continuous care.

2. Materials and Methods

To adequately detect apical lesions and peri-endo combined lesions, the workflow shown in Figure 1 covers comprehensive steps from image segmentation to image processing, followed by CNN and object detection training and validation, ensuring accurate symptom classification by the model. This research received approval from the Chang Gung Medical Foundation Institutional Review Board (IRB number 02002030B0). The image database used in this study was annotated by dentists with over five years of experience and divided into training and validation datasets.

2.1. Image Segmentation

The PA images provide detailed views of the tooth roots and surrounding periodontal structures, making them crucial for diagnosing apical lesions and peri-endo combined lesions; each PA film contains 2 to 5 teeth. For CNN-based single-tooth pathology classification, this study first processes the images to isolate individual teeth, allowing the CNN to classify tooth pathologies accurately.

2.1.1. Tooth Annotation

This study uses Roboflow as the annotation tool, leveraging its polygon tool to accurately mark the actual shape and arrangement of teeth in each image, as shown in Figure 2. This process helps to minimize the annotation of non-target areas, providing more explicit benchmarks and data for model learning.

2.1.2. YOLOv8 OBB Model Training

Several studies have focused on designing more complex object detection networks like SSD and YOLO. The object detection model is used to identify individual teeth and record their coordinates, followed by image segmentation using an algorithm specifically designed for this study. YOLO has an advantage in single-instance detection due to its efficiency and comprehensive feature capture. YOLOv8’s OBB excels in handling complex tooth arrangements and adapting to various angles.

2.1.3. Single-Tooth Cropping

Single-tooth cropping from PA images is crucial for focused diagnosis and treatment planning. This process involves isolating individual teeth from a PA radiograph to enhance the accuracy and effectiveness of dental assessments by employing advanced image-processing techniques, such as deep learning algorithms. This minimizes the interference from surrounding structures, enabling a clearer view of the tooth’s condition.
A.
Image Rotation
The trained YOLO optimal model is used for single-tooth detection in PA images, obtaining the OBB information of each tooth region, including position, size, and angle. Next, each detected tooth region undergoes individual processing steps. The rotation angle of each tooth region is calculated with the center of the PA image designated as the rotation center. The image is rotated to a horizontal position with a 0-degree angle, as shown in Figure 3. This ensures that the cropped image is a non-tilted rectangle, avoiding the loss of essential image parts due to the original skew angle.
B.
Coordinate Point Rotation
The four corner coordinates of the tooth also need to be rotated to the same angle for precise image cropping with consistent rotation. Initially, the four corner coordinates of each tooth’s bounding box are obtained based on the tooth’s position. These coordinates are rotated using a rotation matrix, as shown in Equation (1). The rotation matrix rotates a point in the xy-plane counterclockwise by an angle θ around the origin.
$$R\mathbf{v} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x\cos\theta - y\sin\theta \\ x\sin\theta + y\cos\theta \end{bmatrix} \tag{1}$$
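As an illustration of Equation (1), the following sketch (an assumed OpenCV/NumPy implementation, not the authors' exact code) rotates the PA image about its center and applies the same affine transform to the four OBB corner coordinates, so the box stays aligned with the rotated image.

```python
import cv2
import numpy as np

def rotate_tooth_region(image, corner_pts, angle_deg):
    """Rotate the PA image about its centre so the detected tooth box becomes
    horizontal, and transform the four OBB corners with the same rotation."""
    h, w = image.shape[:2]
    center = (w / 2.0, h / 2.0)
    M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)      # 2x3 affine matrix
    rotated = cv2.warpAffine(image, M, (w, h))
    pts = np.asarray(corner_pts, dtype=np.float32).reshape(-1, 1, 2)
    rotated_pts = cv2.transform(pts, M).reshape(-1, 2)       # rotate corners (Equation (1) plus the centre shift)
    return rotated, rotated_pts
```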
C.
Single-Tooth Cropping and Cropping Area Expansion
Each tooth is cropped to obtain individual tooth images, which is crucial for diagnosing apical lesions and peri-endo combined lesions by focusing on apical and periodontal edge features. However, the original detection range might accidentally cut off parts of the apex or periodontal area. To address this, the study expanded the detected tooth area before cropping, allowing more image features to be captured. Figure 4 compares the images before and after this expansion. This approach helps to determine the optimal cutting range, maximizing feature extraction while minimizing noise.
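A possible implementation of the expanded cropping step is sketched below; the 20-pixel horizontal and 40-pixel vertical margins follow the Figure 4 example, and the clipping to the image borders is an added safeguard rather than a detail taken from the paper.

```python
import numpy as np

def crop_single_tooth(rotated_image, rotated_pts, dx=20, dy=40):
    """Crop the now axis-aligned tooth box, expanded by dx pixels horizontally
    and dy pixels vertically on each side, clipped to the image borders."""
    h, w = rotated_image.shape[:2]
    x0 = max(int(np.floor(rotated_pts[:, 0].min())) - dx, 0)
    x1 = min(int(np.ceil(rotated_pts[:, 0].max())) + dx, w)
    y0 = max(int(np.floor(rotated_pts[:, 1].min())) - dy, 0)
    y1 = min(int(np.ceil(rotated_pts[:, 1].max())) + dy, h)
    return rotated_image[y0:y1, x0:x1]
```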

2.2. Image Processing

This study uses several image enhancement methods to conduct comparisons and cross-experiments, including grayscale transform, Gaussian high-pass filters, adaptive histogram equalization, linear transformation, flat-field correction, and negative film effect. The goal is to find the best combination, providing clear and high-quality images to help the system analyze and identify disease symptoms more effectively, thus improving the accuracy of the model.

2.2.1. Grayscale

The first step in image processing is converting the original color images into grayscale images that contain only luminance information. The PA image database used in this study is not stored in grayscale, so this conversion is required. It ensures that subsequent processing is based on single-channel data, simplifying the multi-channel input while retaining the basic structure and features of the images, thus providing a solid foundation for the subsequent preprocessing steps.

2.2.2. Gaussian High-Pass Filter

The Gaussian high-pass filter is a frequency domain technique used in image processing to sharpen images and enhance details by filtering out low-frequency parts and retaining high-frequency parts. Based on the Gaussian function, the filter’s sharpness is controlled by adjusting the cutoff frequency D0, affecting the emphasis on high frequencies, as shown in Equation (2). The filter result is shown in Figure 5.
$$H(u,v) = 1 - e^{-D^{2}(u,v)/2D_{0}^{2}} \tag{2}$$
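The filter can be realized in the frequency domain as sketched below (a NumPy illustration; the cutoff D0 = 30 is an assumed value, since the paper tunes it empirically).

```python
import numpy as np

def gaussian_high_pass(gray, d0=30.0):
    """Apply the Gaussian high-pass filter of Equation (2) to a grayscale image."""
    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    dist_sq = u[:, None] ** 2 + v[None, :] ** 2               # D^2(u, v)
    H = 1.0 - np.exp(-dist_sq / (2.0 * d0 ** 2))              # Equation (2)
    F = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    filtered = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    return np.clip(filtered, 0, 255).astype(np.uint8)
```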

2.2.3. Adaptive Histogram Equalization

Adaptive histogram equalization enhances image contrast by making local adjustments based on brightness variations in different regions. The process involves dividing the image into equally sized regions, equalizing the histogram of each region to distribute pixel values evenly, and then recombining these regions to produce the final enhanced image, as shown in Figure 6.
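In practice this step can be performed with OpenCV's contrast-limited variant (CLAHE); the tile size, clip limit, and input path below are assumed placeholders rather than the paper's exact settings.

```python
import cv2

gray = cv2.imread("single_tooth.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # local equalization per tile
enhanced = clahe.apply(gray)
```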

2.2.4. Flat-Field Correction

The flat-field correction technique can eliminate or reduce deformations or artifacts caused by the shooting angle, bringing the images closer to the actual dental morphology. This enhances image quality and detail, such as the texture and edge contours of the tooth surface shown in Figure 7, and improves image contrast and color fidelity, enabling the model to observe dental features and make more precise judgments.
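One common way to approximate flat-field correction when no separately acquired flat-field reference is available (the paper does not give its exact formula, so this is an assumption) is to divide the image by a heavily blurred estimate of the illumination field:

```python
import cv2
import numpy as np

def flat_field_correct(gray, sigma=51):
    """Divide by a blurred illumination estimate, then rescale to the original mean."""
    img = gray.astype(np.float32)
    background = cv2.GaussianBlur(img, (0, 0), sigma)   # smooth illumination field
    corrected = img / (background + 1e-6) * background.mean()
    return np.clip(corrected, 0, 255).astype(np.uint8)
```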

2.2.5. Linear Transformation

Linear transformation can effectively enhance image quality and readability, offering highly flexible adjustments that process pixel values within specific ranges according to different needs and purposes. In this study, the pixel values in the image are divided into three blocks: (1) below 40, (2) between 40 and 160, and (3) above 160. The pixel values in block 1 are uniformly set to 40. Block 2 is transformed to new pixel values not exceeding 160 according to Equation (3), where x is the original pixel value and y is the pixel value after the linear transformation. Block 3 is uniformly set to 200. This setup adjusts the extremely dark and bright parts to enhance the overall visual consistency of the image. In addition, local contrast enhancement is performed on the image area in block 2 to make details clearer and more prominent, while limiting the values so that they do not exceed 160 to avoid overexposure or oversaturation. The linear transformation model is shown in Figure 8, and enhanced results are shown in Figure 9.
$$y = \frac{4}{3}x - \frac{40}{3} \tag{3}$$
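A minimal sketch of the three-block mapping is shown below; it follows the description above, setting values below 40 to 40 and values above 160 to 200, applying Equation (3) to the middle band, and capping that band at 160 as stated.

```python
import numpy as np

def piecewise_linear(gray):
    """Three-block mapping of Section 2.2.5: dark block -> 40, bright block -> 200,
    middle block mapped by Equation (3) and capped at 160."""
    g = gray.astype(np.float32)
    middle = np.minimum((4.0 * g - 40.0) / 3.0, 160.0)   # Equation (3) with the 160 cap
    out = np.where(g < 40, 40.0, np.where(g > 160, 200.0, middle))
    return out.astype(np.uint8)
```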

2.2.6. Negative Film Effect

The negative film effect is used to invert the colors and brightness of the original image, making details and contrast stand out more clearly. This makes it easier to see the extent and spread of apical and periodontal erosion, especially the edge contours. The original PA image is first converted to grayscale, with pixel values ranging from 0 (black) to 255 (white). Then, symptom enhancement is applied by subtracting the grayscale values from 255, which flips black to white and white to black. The results are shown in Figure 10.
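The inversion itself is a single operation on the grayscale image, for example (the input path is a placeholder):

```python
import cv2

gray = cv2.imread("periapical.png", cv2.IMREAD_GRAYSCALE)  # placeholder PA film path
negative = 255 - gray                                      # flip black and white (0 <-> 255)
```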

2.3. CNN Training and Validation

In medical imaging automatic detection systems, deep learning techniques are widely adopted for their excellent feature learning capabilities. CNNs capture the spatial structure of images through convolutional layers, effectively handling local features and improving parameter efficiency through weight sharing. Most medical research employs CNN models; for example, the dental detection study in reference [22] used a custom CNN architecture to achieve an accuracy of 93.04%. This study employs six well-known CNN models, aiming to identify the most accurate one through evaluation and comparison and thereby provide more reliable clinical symptom recognition. The study uses an Nvidia GeForce RTX 3070 GPU to accelerate model training, with the specific hardware and software platforms detailed in Table 1. To optimize model performance, this study implements the following strategies. The first is hyperparameter tuning: systematic, combinatorial adjustment of hyperparameters such as the learning rate and batch size optimizes the training process, improving convergence speed and performance on the specific recognition task. The second is overfitting prevention: dropout, a commonly used regularization technique in deep learning, randomly drops neurons and their connections during training so that the model uses a different subset in each iteration, preventing the network from relying on any single neuron. The last is metric evaluation: the loss function, accuracy, precision, recall, and F1-score are monitored closely throughout training. These metrics provide in-depth insights into the model's performance, helping to analyze its strengths and limitations comprehensively.

2.3.1. CNN Architecture

This study adopts six CNN models: AlexNet, Places365-GoogLeNet, VGG-16, ResNet50, GoogLeNet, and ConvNeXt-v2. ConvNeXt-v2 has strong detection ability because it combines the ConvNeXt backbone with Global Response Normalization (GRN) and a fully convolutional masked autoencoder (FCMAE). GRN enhances feature competition between channels, improving contrast and selectivity. FCMAE is a self-supervised pre-training framework tailored to ConvNeXt-v2: it randomly masks patches of the original image and lets the model learn from the remaining context. This design effectively alleviates feature collapse, maintains feature diversity during training, and improves accuracy. Taking AlexNet as an example, its architecture is detailed in Table 2. The input size is set to 227 × 227 × 3 pixels. Given that this study's image classification task involves only three categories, the output size of the last fully connected layer is changed from the original 1000 to 3 to match the classification task. A dropout probability of 0.7 is set to reduce model overfitting.
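For readers who prefer PyTorch, the adaptation described above (three-class output layer and a dropout probability of 0.7) could be written roughly as follows; this is an assumed torchvision equivalent of the MATLAB Deep Network Designer setup listed in Table 1, not the authors' code.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained AlexNet for transfer learning.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Raise the dropout probability in the classifier to 0.7 to curb overfitting.
for layer in model.classifier:
    if isinstance(layer, nn.Dropout):
        layer.p = 0.7

# Replace the 1000-way ImageNet head with a 3-way head for
# asymptomatic / apical lesion / peri-endo combined lesion.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)
```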

2.3.2. Hyperparameter

Key hyperparameters in deep learning model training include Initial Learning Rate, Max Epoch, and Mini Batch Size. The Initial Learning Rate, which determines the speed of parameter updates, is optimized to 0.0001 after experimenting with rates from 0.1 to 0.00001. Max Epoch settings are adjusted for different CNN models to influence learning extent. Mini Batch Size, affecting training speed and generalization, is set between 4 and 64, with 16 found to be optimal. These adjustments aim to minimize the loss function on the test dataset, improving model performance and generalization. This study applies the best hyperparameter settings from tuning AlexNet to other CNN models to ensure fair comparisons. While unified settings help to avoid unfair comparisons, they might cause overfitting in some models. Therefore, while keeping parameters like the Initial Learning Rate, Mini Batch Size, Learning Rate Drop Factor, and Learning Rate Drop Period consistent, Max Epoch is adjusted to prevent overfitting. The CNN model’s hyperparameter settings are shown in Table 3.
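Translated into a PyTorch-style training setup, the Table 3 settings would look roughly like the sketch below; the optimizer choice (SGD with momentum) and the `train_set` dataset object are assumptions, and `model` is carried over from the previous sketch, while the learning rate, batch size, and step decay follow the values reported above.

```python
import torch
from torch.utils.data import DataLoader

loader = DataLoader(train_set, batch_size=16, shuffle=True)              # Mini Batch Size 16

optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)   # Initial Learning Rate 0.0001
# Learning Rate Drop Factor 0.1 applied every 10 epochs (Learning Rate Drop Period).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)
```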

2.3.3. Training and Validation

The dataset is split into 80% training and 20% validation sets to ensure the independence and credibility of model training and validation, as shown in Table 4. Considering the insufficient original data, this study adopts data augmentation to expand the data volume by fourfold. The augmented data quantities are listed in Table 5. Data augmentation mainly involves horizontal and vertical flipping to increase the dataset and strengthen the model. To ensure that the model performs well with new, unknown data, researchers balance the numbers of asymptomatic and symptomatic teeth. This helps to prevent the model from overly biasing any category, thereby enhancing overall performance and adaptability.
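The fourfold expansion described above can be obtained from the two flip axes, for example as below; using the doubly flipped image as the fourth variant is an assumption consistent with the horizontal and vertical flipping the paper describes.

```python
import cv2

def fourfold_augment(tooth_img):
    """Return the original tooth image plus horizontal, vertical, and combined flips."""
    return [
        tooth_img,
        cv2.flip(tooth_img, 1),   # horizontal flip
        cv2.flip(tooth_img, 0),   # vertical flip
        cv2.flip(tooth_img, -1),  # both axes
    ]
```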
This study assesses the model’s performance using four metrics after training: accuracy, precision, recall, and F1-score, as detailed in Equations (4)–(7). These metrics comprehensively assess the model’s prediction ability, recognition capability for different categories, and overall performance level, providing an objective evaluation of the model’s strengths and weaknesses. By using these evaluation standards, we ensure that the trained model is reliable and performs well in practical applications.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{4}$$
$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{5}$$
$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{6}$$
$$\mathrm{F1\text{-}Score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \tag{7}$$
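Equations (4)–(7) translate directly into code; a small helper operating on per-class true/false positive and negative counts is sketched below.

```python
def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1-score per Equations (4)-(7)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```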

2.4. Object Detection Training and Validation

This study conducts experiments with two YOLOv8 variants, YOLOv8n and YOLOv8s-OBB, which take the original PA as input, detect the teeth in the image, and classify them directly. This streamlines the training process by omitting the steps of cropping individual teeth and sorting them into a classification training set. In addition, the researchers hope to combine the advantages of YOLOv8 with image enhancement techniques to find a training method with high accuracy. The model hyperparameters are listed in Table 6.
Apart from basic object detection, YOLOv8 offers three notable advantages. First, it features new convolutional blocks, replacing the C3 module with the C2f module and modifying various convolutions for greater efficiency. Second, it uses a decoupled head and removes the objectness branch. Third, it supports anchor-free detection, eliminating the need for manually defined anchor boxes by directly predicting object centers, which enhances flexibility and efficiency. YOLOv8-OBB is crucial for accurately detecting teeth with varying orientations and angles, improving detection and classification accuracy without repeated image preprocessing steps, thus streamlining the process and enhancing experimental efficiency. The prediction results are shown in Figure 11.
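With the Ultralytics package, the training run summarized in Table 6 could be launched roughly as follows; the dataset YAML and example image paths are placeholders for the annotated PA data, not files named in the paper.

```python
from ultralytics import YOLO

model = YOLO("yolov8s-obb.pt")                     # pretrained oriented-bounding-box model
model.train(data="pa_teeth_obb.yaml",              # placeholder dataset description file
            epochs=100, batch=8, imgsz=640, lr0=0.01)  # Table 6 hyperparameters
results = model.predict("example_pa.jpg")          # detect and classify teeth on a new PA
```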

3. Results

This section discusses the outcomes of YOLO detection and image segmentation, image classification, and object detection.

3.1. YOLO Detection and Image Segmentation

The YOLOv8 OBB model was employed to identify individual teeth and record their coordinates. During the experimental phase, 147 PA images and their corresponding annotations were prepared and divided into training, validation, and test sets in a 7:2:1 ratio. Specifically, 103 images were used for training, 29 for validation, and 15 for testing. The object detection performance is illustrated in Figure 12, where three teeth were effectively detected in the periapical images. The model achieved a precision of 93.9%, a recall of 97.7%, and a mean average precision (mAP50) of 97.3%. This stage achieved precise tooth detection while preserving complete pathological features, providing a reliable foundation for CNN classification.

3.2. CNN Training

During the CNN model training process, it is essential to monitor the accuracy and loss function of the validation set. If the accuracy of the training set continues improving while the validation set accuracy stays the same or decreases, it could indicate that the model is overfitting. To address this, regularization methods are needed to prevent overfitting. For example, in the case of Places365-GoogLeNet, the validation accuracy and loss function are shown in Figure 13 and Figure 14. By analyzing these metrics, training strategies and parameters can be adjusted to improve accuracy.
This study used YOLOv8 for image segmentation and performed adaptive histogram equalization, as shown in Figure 15. The processed images were input into the trained AlexNet model, comparing the obtained validation results with ground truth values and recording the corresponding prediction probabilities, as detailed in Table 7.
To detect apical and peri-endo combined lesions, this study explored various image processing methods and identified effective models through multiple combination experiments. Four aspects were analyzed: black padding, data augmentation, cropping range expansion, and image enhancement. Six CNN models were tested across these aspects to find the best combination.
1.
Aspect 1: Black Padding
Most CNN models require square images as input to simplify the model design, facilitate feature extraction, and simplify internal computations, improving training efficiency and accuracy. Since tooth shapes and sizes vary, fitting a 1:1 square size is challenging. Therefore, two common methods were adopted for image processing: transforming individual teeth into a 1:1 size or placing teeth centrally while padding the sides with black pixels. After processing images with these two methods and training the CNN models, the results, as shown in Table 8, indicated that models using black pixel padding performed better, with the validation accuracy improving by 1.76–9.52%. For example, ConvNeXtv2’s accuracy improved from 76.19% to 85.71%, and GoogLeNet’s accuracy increased from 80.95% to 85.71%. These improvements highlight the effectiveness of black pixel padding in enhancing model performance.
2.
Aspect 2: Data Augmentation
Considering the limited data in this study, data augmentation is essential for enhancing model robustness and generalization ability. This study applied data augmentation to expand the original dataset fourfold. The experimental results, shown in Table 9, indicate that the validation accuracy increased by 1.19% to 14.88% across the different models. For instance, AlexNet's accuracy improved from 83.33% to 88.69%, and Places365-GoogLeNet's accuracy increased from 80.95% to 88.69%. These results highlight the effectiveness of data augmentation in scenarios with limited samples, validating its role in improving model performance.
3.
Aspect 3: Expanding the Cropping Range
Researchers aimed to expand the tooth region to capture more useful diagnostic features, increasing accuracy without introducing irrelevant features. This study conducted four trials: no expansion, horizontal expansion by 20 pixels, vertical expansion by 40 pixels, and both horizontal and vertical expansions. The results in Table 10 show that different models responded differently to these expansions. For example, Places365-GoogLeNet achieved its highest accuracy (91.67%) with vertical expansion by 40 pixels, while ConvNextv2 reached 91.07% accuracy with both expansions. This highlights the need for tailored strategies depending on the specific CNN model.
4.
Aspect 4: Image Enhancement
Researchers explored the impact of different image enhancement methods on model training, specifically testing combinations of Gaussian high-pass filters and adaptive histogram equalization. These enhancement techniques significantly improved model validation accuracy, with increases ranging from 2.38% to 7.73%, as shown in Table 11. Both using the Gaussian high-pass filter alone and applying adaptive histogram equalization after Gaussian high-pass filtering proved to be effective strategies. For instance, ConvNeXtv2’s accuracy improved to 95.23% with adaptive histogram equalization, demonstrating the effectiveness of these image enhancement methods in boosting model performance. The ConvNextv2 model utilized black padding, data augmentation, and adaptive histogram equalization and achieved a validation accuracy of 95.23%, making it the best model in this study. The confusion matrix for this model is shown in Table 12. Comparisons with other studies on apical lesion detection demonstrated that this method significantly outperformed others in precision and F1-score, as detailed in Table 13. The ConvNextv2 model, evaluated for classifying dental conditions into normal, apical lesion, and peri-endo combined lesion, achieved an overall validation accuracy of 95.23%. The model showed exceptional performance with high precision (93.33%) and recall (99.75%) for normal teeth, resulting in a strong F1-score of 96.55%. For apical lesions, the precision was also high at 98.00%, though recall was lower at 87.50%, leading to an F1-score of 92.45%. The model excelled in identifying peri-endo combined lesions, with a precision of 94.82% and recall of 98.21%, yielding an F1-score of 96.49%.

3.3. YOLOv8

This study used YOLOv8 object detection as a comparison group. Compared to the CNN model’s image recognition, YOLOv8 optimized preparatory steps and improved validation accuracy through data augmentation and image processing combinations, resulting in high efficiency. In YOLO models, two critical metrics are mAP50 and validation accuracy. A mAP50 value close to 1 indicates that the predicted bounding boxes overlap with the ground truth boxes by more than 50%, reflecting a good object detection capability. The number of objects in each image category is crucial. Table 14 shows the number of instances in the original training and validation sets, with an 8:2 ratio.
To achieve better accuracy and model generalization, data augmentation was applied to the original dataset, including horizontal, vertical, and 90-degree rotations. This effectively increased the diversity of training data, helping the model learn various tooth angles, directions, and features, thereby enhancing overall detection and prediction performance. As shown in Table 15, data augmentation improved accuracy by 9.7%, and mAP50 increased by 0.035 to 0.136, significantly strengthening the model’s ability to identify candidate areas for normal and apical lesion categories.
Building on data augmentation, this study explored the performance of YOLOv8 object detection on PA images preprocessed with various image-processing techniques, as detailed in Table 16. Linear transformation with adaptive histogram equalization achieved an overall accuracy of 85.16%; flat-field correction with adaptive histogram equalization reached 87.79%; and the combination of a Gaussian high-pass filter with a negative film effect performed the best, with an overall accuracy of 92.13%. These findings highlight the importance of selecting appropriate image-processing techniques to enhance model accuracy and detection capabilities.
Three effective combination methods were identified: increasing contrast and balancing image quality through two combinations—linear transformation plus adaptive histogram equalization and flat-field correction plus adaptive histogram equalization—as well as denoising, enhancing edge contours and details by using a Gaussian high-pass filter plus the negative image effect. The combination of a Gaussian high-pass filter and the negative film effect was the best, achieving an accuracy of 92.13% and an increase of 7.43%, with mAP50 consistently above 0.9 for all categories and overall, as shown in Table 17. A comparison with other studies using YOLO models is provided in Table 18.

4. Discussion

The findings of this study demonstrate significant advancements in the detection of apical lesions and peri-endo combined lesions using deep learning models, particularly CNNs and YOLOv8 object detection. This study is the first to use deep learning to assist in the diagnosis of peri-endo combined lesions, achieving a recall rate of up to 91.7%. Through the implementation of various image-processing techniques—such as black padding, data augmentation, cropping range expansion, and image enhancement—the system achieved significant improvements in model accuracy and robustness. Using various image-processing techniques and data augmentation strategies significantly boosted model accuracy and generalization. For instance, applying black padding improved the validation accuracy by 1.76% to 9.52%, showing how effective it is in standardizing image sizes and enhancing performance. Expanding the dataset through data augmentation increased the accuracy by 1.19% to 14.88%, emphasizing its essential role in addressing small dataset limitations and boosting the model’s robustness. Expanding the cropping range of tooth regions captured more diagnostic features, improving accuracy by 0.59% to 3.57% across different models. Image enhancement techniques, such as Gaussian high-pass filters and adaptive histogram equalization, further boosted accuracy by 2.38% to 7.73%, with the combination of these methods proving to be particularly effective.
Comparing these research results with previous studies, the ConvNextv2 model achieved a validation accuracy of 95.23%, outperforming other model methods in precision and F1-score. For instance, the proposed method significantly outperformed the U-Net model used in apical lesion segmentation, which had an F1-score of 0.828, and the Faster R-CNN technology for various dental conditions detection, which achieved an accuracy of 94.18%. YOLOv8 was applied to PA images that were preprocessed using various image-processing techniques. Among these techniques, the combination of a Gaussian high-pass filter with a negative film effect yielded the highest overall accuracy of 92.13%, which led to a 7.43% improvement in validation accuracy. This approach consistently achieved high mAP50 values across different dental conditions, demonstrating its efficacy in improving detection capabilities. However, the study has limitations, including the potential for overfitting because of the limited number of samples in the database. Despite the limitations posed by the relatively small dataset, the findings underscore the potential of deep learning models in enhancing dental diagnostics. Future research should focus on expanding the dataset and refining preprocessing methods to further improve model performance and generalization. Additionally, developing standardized criteria for lesion severity, location, and spread will be crucial for improving the precision of automated detection systems.

5. Conclusions

This study offers valuable insights into how AI can be applied to medical imaging, helping to develop more reliable and efficient diagnostic tools in dentistry. The ConvNextv2 model stood out with a validation accuracy of 95.23%, outperforming other methods in both precision and F1-score. Additionally, using a Gaussian high-pass filter combined with the negative film effect in YOLOv8 led to the highest accuracy of 92.13%, highlighting the importance of choosing the right preprocessing techniques.

Author Contributions

Conceptualization, P.-Y.W. and Y.-C.M.; data curation, P.-Y.W. and Y.-C.M.; formal analysis, T.-Y.C.; funding acquisition, T.-Y.C., S.-L.C., C.-A.C. and K.-C.L.; methodology, L.-T.K., X.-H.L. and S.-L.C.; resources, S.-L.C.; software, Y.-J.L., L.-T.K., X.-H.L., S.-L.C. and C.-A.C.; validation, Y.-J.L. and S.-L.C.; visualization, Y.-J.L., L.-T.K. and X.-H.L.; writing—original draft, Y.-J.L.; writing—review and editing, T.-Y.C., C.-A.C., K.-C.L., W.-C.T. and P.A.R.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Technology (MOST), Taiwan, under grant numbers of MOST-109-2410-H-197-002-MY3, MOST-107-2218-E-131-002, MOST-107-2221-E-033-057, MOST-107-2622-E-131-007-CC3, MOST-106-2622-E-033-014-CC2, MOST-106-2221-E-033-072, MOST-106-2119-M-033-001, MOST 107-2112-M-131-001, MOST-112-2410-H-033-014 and the National Chip Implementation Center, Taiwan.

Institutional Review Board Statement

Chang Gung Medical Foundation Institutional Review Board; IRB number: 02002030B0; Date of Approval: 2020/12/01; Protocol Title: A Convolutional Neural Network Approach for Dental Bite-Wing, Panoramic and Periapical Radiographs Classification; Executing Institution: Chang-Geng Medical Foundation Taoyuan Chang-Geng Memorial Hospital of Taoyuan; Duration of Approval: from 1 December 2020 to 30 November 2021. The IRB reviewed and determined that this is an expedited review, as the research involves cases treated or diagnosed by clinical routines. However, this does not include HIV-positive cases.

Informed Consent Statement

The IRB approves the waiver of the participants’ consent.

Data Availability Statement

The data presented in this study are available in this article.

Acknowledgments

The authors are grateful to the Applied Electrodynamics Laboratory (Department of Physics, National Taiwan University) for their provision of the microwave calibration kit and microwave components.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shenoy, N.; Shenoy, A. Endo-perio lesions: Diagnosis and clinical considerations. Indian J. Dent. Res. Off. Publ. Indian Soc. Dent. Res. 2010, 21, 579–585. [Google Scholar] [CrossRef] [PubMed]
  2. Northridge, M.E.; Kumar, A.; Kaur, R. Disparities in Access to Oral Health Care. Annu. Rev. Public Health 2020, 41, 513–535. [Google Scholar] [CrossRef] [PubMed]
  3. JOE Editorial Board. Endodontic-Periodontal Interrelationships: An Online Study Guide. J. Endod. 2008, 34, e71–e77. [Google Scholar] [CrossRef] [PubMed]
  4. Ehnevid, H.; Jansson, L.; Lindskog, S.; Weintraub, A.; Blomlöf, L. Endodontic Pathogens: Propagation of Infection through Patent Dentinal Tubules in Traumatized Monkey Teeth. Endod. Dent. Traumatol. 1995, 11, 229–234. [Google Scholar] [CrossRef] [PubMed]
  5. Chen, S.-L.; Chou, H.-S.; Chuo, Y.; Lin, Y.-J.; Tsai, T.-H.; Peng, C.-H.; Tseng, A.-Y.; Li, K.-C.; Chen, C.-A.; Chen, T.-Y. Classification of the Relative Position between the Third Molar and the Inferior Alveolar Nerve Using a Convolutional Neural Network Based on Transfer Learning. Electronics 2024, 13, 702. [Google Scholar] [CrossRef]
  6. Chuo, Y.; Lin, W.-M.; Chen, T.-Y.; Chan, M.-L.; Chang, Y.-S.; Lin, Y.-R.; Lin, Y.-J.; Shao, Y.-H.; Chen, C.-A.; Chen, S.-L.; et al. A High-Accuracy Detection System: Based on Transfer Learning for Apical Lesions on Periapical Radiograph. Bioengineering 2022, 9, 777. [Google Scholar] [CrossRef] [PubMed]
  7. Mao, Y.-C.; Chen, T.-Y.; Chou, H.-S.; Lin, S.-Y.; Liu, S.-Y.; Chen, Y.-A.; Liu, Y.-L.; Chen, C.-A.; Huang, Y.-C.; Chen, S.-L.; et al. Caries and Restoration Detection Using Bitewing Film Based on Transfer Learning with CNNs. Sensors 2021, 21, 4613. [Google Scholar] [CrossRef] [PubMed]
  8. Song, I.-S.; Shin, H.-K.; Kang, J.-H.; Kim, J.-E.; Huh, K.-H.; Yi, W.-J.; Lee, S.-S.; Heo, M.-S. Deep Learning-Based Apical Lesion Segmentation from Panoramic Radiographs. Imaging Sci. Dent. 2022, 52, 351–357. [Google Scholar] [CrossRef] [PubMed]
  9. Chen, S.-L.; Chen, T.-Y.; Mao, Y.-C.; Lin, S.-Y.; Huang, Y.-Y.; Chen, C.-A.; Lin, Y.-J.; Chuang, M.-H.; Abu, P.A.R. Detection of Various Dental Conditions on Dental Panoramic Radiography Using Faster R-CNN. IEEE Access 2023, 11, 127388–127401. [Google Scholar] [CrossRef]
  10. Fatima, A.; Shafi, I.; Afzal, H.; Mahmood, K.; Díez, I.D.; Lipari, V.; Ballester, J.B.; Ashraf, I. Deep Learning-Based Multiclass Instance Segmentation for Dental Lesion Detection. Healthcare 2023, 11, 347. [Google Scholar] [CrossRef] [PubMed]
  11. Chen, S.-L.; Chen, T.-Y.; Mao, Y.-C.; Lin, S.-Y.; Huang, Y.-Y.; Chen, C.-A.; Lin, Y.-J.; Hsu, Y.-M.; Li, C.-A.; Chiang, W.-Y.; et al. Automated Detection System Based on Convolution Neural Networks for Retained Root, Endodontic Treated Teeth, and Implant Recognition on Dental Panoramic Images. IEEE Sens. J. 2022, 22, 23293–23306. [Google Scholar] [CrossRef]
  12. Li, C.-W.; Lin, S.-Y.; Chou, H.-S.; Chen, T.-Y.; Chen, Y.-A.; Liu, S.-Y.; Liu, Y.-L.; Chen, C.-A.; Huang, Y.-C.; Chen, S.-L.; et al. Detection of Dental Apical Lesions Using CNNs on Periapical Radiograph. Sensors 2021, 21, 7049. [Google Scholar] [CrossRef] [PubMed]
  13. Chen, S.-L.; Chen, T.-Y.; Huang, Y.-C.; Chen, C.-A.; Chou, H.-S.; Huang, Y.-Y.; Lin, W.-C.; Li, T.-C.; Yuan, J.-J.; Abu, P.A.R.; et al. Missing Teeth and Restoration Detection Using Dental Panoramic Radiography Based on Transfer Learning with CNNs. IEEE Access 2022, 10, 118654–118664. [Google Scholar] [CrossRef]
  14. Chen, Y.-C.; Chen, M.-Y.; Chen, T.-Y.; Chan, M.-L.; Huang, Y.-Y.; Liu, Y.-L.; Lee, P.-T.; Lin, G.-J.; Li, T.-F.; Chen, C.-A.; et al. Improving Dental Implant Outcomes: CNN-Based System Accurately Measures Degree of Peri-Implantitis Damage on Periapical Film. Bioengineering 2023, 10, 640. [Google Scholar] [CrossRef] [PubMed]
  15. Huang, Y.-C.; Chen, C.-A.; Chen, T.-Y.; Chou, H.-S.; Lin, W.-C.; Li, T.-C.; Yuan, J.-J.; Lin, S.-Y.; Li, C.-W.; Chen, S.-L.; et al. Tooth Position Determination by Automatic Cutting and Marking of Dental Panoramic X-ray Film in Medical Image Processing. Appl. Sci. 2021, 11, 11904. [Google Scholar] [CrossRef]
  16. Jiang, Y.; Liu, Z.; Li, Y.; Li, J.; Lian, Y.; Liao, N.; Li, Z.; Zhao, Z. A Digital Grayscale Generation Equipment for Image Display Standardization. Appl. Sci. 2020, 10, 2297. [Google Scholar] [CrossRef]
  17. Soora, N.R.; Vodithala, S.; Badam, J.S.H. Filtering Techniques to remove Noises from an Image. In Proceedings of the 2022 International Conference on Advances in Computing, Communication and Applied Informatics (ACCAI), Chennai, India, 28–29 January 2022; pp. 1–9. [Google Scholar] [CrossRef]
  18. Jiang, Y.; Ming, Y. Application of image sharpening based quality assessment model: Extraction of Traditional Chinese Embroidery Patterns as an Example. In Proceedings of the 2024 5th International Conference on Computer Engineering and Application (ICCEA), Hangzhou, China, 12–14 April 2024; pp. 994–1001. [Google Scholar] [CrossRef]
  19. Zhang, Y.; Zheng, X. Development of Image Processing Based on Deep Learning Algorithm. In Proceedings of the 2022 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), Dalian, China, 14–16 April 2022; pp. 1226–1228. [Google Scholar] [CrossRef]
  20. Mustafa, Z.; Nsour, H. Using Computer Vision Techniques to Automatically Detect Abnormalities in Chest X-rays. Diagnostics 2023, 13, 2979. [Google Scholar] [CrossRef] [PubMed]
  21. Zhang, S.; Wang, X.; Li, P.; Wang, L.; Zhu, M.; Zhang, H.; Zeng, Z. An Improved YOLO Algorithm for Rotated Object Detection in Remote Sensing Images. In Proceedings of the 2021 IEEE 4th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Chongqing, China, 18–20 June 2021; pp. 840–845. [Google Scholar] [CrossRef]
  22. Deng, L.Y.; Ho, S.S.; Lim, X.Y. Diseases Classification Utilizing Tooth X-ray Images Based on Convolutional Neural Network. In Proceedings of the 2020 International Symposium on Computer, Consumer and Control (IS3C), Taichung City, Taiwan, 13–16 November 2020; pp. 300–303. [Google Scholar] [CrossRef]
  23. Herbst, S.R.; Pitchika, V.; Krois, J.; Krasowski, A.; Schwendicke, F. Machine Learning to Predict Apical Lesions: A Cross-Sectional and Model Development Study. J. Clin. Med. 2023, 12, 5464. [Google Scholar] [CrossRef] [PubMed]
  24. Duman, Ş.B.; Çelik Özen, D.; Bayrakdar, I.Ş.; Baydar, O.; Alhaija, E.S.A.; Helvacioğlu Yiğit, D.; Çelik, Ö.; Jagtap, R.; Pileggi, R.; Orhan, K. Second Mesiobuccal Canal Segmentation with YOLOv5 Architecture Using Cone Beam Computed Tomography Images. Odontology 2024, 112, 552–561. [Google Scholar] [CrossRef] [PubMed]
  25. İçöz, D.; Terzioğlu, H.; Özel, M.A.; Karakurt, R. Evaluation of an Artificial Intelligence System for the Diagnosis of Apical Periodontitis on Digital Panoramic Images. Niger. J. Clin. Pract. 2023, 26, 1085. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Research flowchart.
Figure 2. Manual annotation using Roboflow’s polygon tool.
Figure 3. The first tooth rotated to a horizontal 0-degree image.
Figure 4. Image expansion. (a) Original cropped image. (b) Cropped image expanded by 20 pixels horizontally and 40 pixels vertically.
Figure 5. The Gaussian high-pass filter result. (a) The original image. (b) The result of the Gaussian high-pass filter. (c) The result of (a) minus (b).
Figure 6. The adaptive histogram equalization. (a) Original image and histogram. (b) Enhanced image and histogram after adaptive histogram equalization.
Figure 7. The flat-field correction result. (a) Original image. (b) Flat-field correction image.
Figure 8. Linear transform model.
Figure 9. The result of linear transformation. (a) Original image. (b) Linear transformation image.
Figure 10. The result of negative film effect. (a) Original image. (b) Negative film effect image.
Figure 11. The disease prediction results with YOLOv8 OBB.
Figure 12. Single-tooth prediction results.
Figure 13. Validation accuracy during the training process of the Places365-GoogLeNet model.
Figure 14. Validation loss function during the training process of the Places365-GoogLeNet model.
Figure 15. After adaptive histogram equalization, segmentation into multiple single-tooth images is conducted (numbered from left to right).
Table 1. The hardware and software platform versions used in this study.

Hardware Platform | Version
CPU | 11 Gen Intel(R) Core(TM) [email protected]
GPU | NVIDIA GeForce RTX 3070 8G
DRAM | 32 GB

Software Platform | Version | Software Platform | Version
MATLAB | R2023b | Python | 3.11.8
Deep Network Designer | R2023b | PyTorch | 2.2.1 + cu121
Deep Learning Toolbox | 23.2 | CUDA | 12.1
Table 2. AlexNet architecture.

Layer | Filters/Neurons | Filter Size | Stride | Padding | Size of Feature Map | Activation Function
Input | – | – | – | – | 227 × 227 × 3 | –
Conv 1 | 96 | 11 × 11 | 4 | – | 55 × 55 × 96 | ReLU
MaxPool 1 | – | 3 × 3 | 2 | – | 27 × 27 × 96 | –
Conv 2 | 256 | 5 × 5 | 1 | 2 | 27 × 27 × 256 | ReLU
MaxPool 2 | – | 3 × 3 | 2 | – | 13 × 13 × 256 | –
Conv 3 | 384 | 3 × 3 | 1 | 1 | 13 × 13 × 384 | ReLU
Conv 4 | 384 | 3 × 3 | 1 | 1 | 13 × 13 × 384 | ReLU
Conv 5 | 256 | 3 × 3 | 1 | 1 | 13 × 13 × 256 | ReLU
MaxPool 3 | – | 3 × 3 | 2 | – | 6 × 6 × 256 | –
Dropout 1 | Rate = 0.7 | – | – | – | 6 × 6 × 256 | –
Fc 1 | – | – | – | – | 4096 | ReLU
Dropout 2 | Rate = 0.7 | – | – | – | 4096 | –
Fc 2 | – | – | – | – | 4096 | ReLU
Fc 3 | – | – | – | – | 3 | Softmax
Table 3. CNN model hyperparameter settings.

Hyperparameter | Value
Initial Learning Rate | 0.0001
Mini Batch Size | 16
Learning Rate Drop Factor | 0.1
Learning Rate Drop Period | 10
Shuffle | Every-epoch
Validation Frequency | 100

Model | Max Epoch
AlexNet | 30
Places365-GoogLeNet | 20
VGG-16 | 6
ResNet50 | 10
GoogLeNet | 20
ConvNeXtv2_base | 10
Table 4. CNN training set and validation set quantity.

Disease | Training Set | Validation Set | Total
Normal | 53 | 14 | 67
Apical Lesion | 53 | 14 | 67
Peri-endo Combined Lesion | 53 | 14 | 67
Total | 159 | 42 | 201
Table 5. The number of periapical images after data enhancement.

Disease | Original | Augmentation
Normal | 67 | 268
Apical Lesion | 67 | 268
Peri-endo Combined Lesion | 67 | 268
Table 6. The hyperparameter values used in YOLOv8.

Hyperparameter | Value
Epoch | 100
Batch | 8
imgsize | 640 × 640
lr0 | 0.01
Table 7. Comparison of validation results with ground truth.

Number | No. 1 | No. 2 | No. 3
Ground Truth | Peri-endo Combined Lesion | Normal | Apical Lesion
Validation | Peri-endo Combined Lesion | Normal | Apical Lesion
Accuracy | 99.94% | 60.47% | 85.02%
Table 8. CNN validation results after padding.

Method | Metric | AlexNet | Places365-GoogLeNet | VGG16 | ResNet50 | GoogLeNet | ConvNeXtv2
Original | Accuracy | 80.95% | 79.19% | 71.43% | 71.43% | 80.95% | 76.19%
Original | Training time | 54 s | 1 m 20 s | 26 s | 1 m 19 s | 1 m 19 s | 6 m 4 s
After padding | Accuracy | 83.33% | 80.95% | 80.95% | 73.81% | 85.71% | 85.71%
After padding | Training time | 48 s | 1 m 29 s | 24 s | 1 m 19 s | 1 m 17 s | 5 m 46 s
Table 9. CNN validation results after data enhancement.

Method | Metric | AlexNet | Places365-GoogLeNet | VGG16 | ResNet50 | GoogLeNet | ConvNeXtv2
Padding | Accuracy | 83.33% | 80.95% | 80.95% | 73.81% | 85.71% | 85.71%
Padding | Training time | 48 s | 1 m 29 s | 24 s | 1 m 19 s | 1 m 17 s | 5 m 46 s
Padding + Enhancement | Accuracy | 88.69% | 88.69% | 88.69% | 88.69% | 86.90% | 87.50%
Padding + Enhancement | Training time | 2 m 49 s | 5 m 17 s | 1 m 2 s | 4 m 33 s | 5 m 26 s | 21 m 58 s
Table 10. CNN validation results after expanding the cropping range.

Method | Metric | AlexNet | Places365-GoogLeNet | VGG16 | ResNet50 | GoogLeNet | ConvNeXtv2
Original YOLOv8 cropping | Accuracy | 88.69% | 88.69% | 88.69% | 88.69% | 86.90% | 87.50%
Original YOLOv8 cropping | Training time | 2 m 49 s | 5 m 17 s | 1 m 21 s | 4 m 33 s | 5 m 26 s | 21 m 58 s
Expand x = 20 pixels, y = 0 pixels | Accuracy | 91.67% | 89.29% | 90.48% | 85.71% | 89.29% | 89.28%
Expand x = 20 pixels, y = 0 pixels | Training time | 1 m 30 s | 4 m 8 s | 1 m 37 s | 3 m 20 s | 4 m 39 s | 21 m 13 s
Expand x = 0 pixels, y = 40 pixels | Accuracy | 88.10% | 91.67% | 90.48% | 88.10% | 88.69% | 88.09%
Expand x = 0 pixels, y = 40 pixels | Training time | 2 m 38 s | 1 m 23 s | 1 m 16 s | 3 m 22 s | 5 m 31 s | 22 m 4 s
Expand x = 20 pixels, y = 40 pixels | Accuracy | 89.29% | 89.88% | 88.10% | 89.88% | 88.10% | 91.07%
Expand x = 20 pixels, y = 40 pixels | Training time | 2 m 1 s | 4 m 50 s | 1 m 29 s | 4 m 28 s | 4 m 37 s | 20 m 35 s
Table 11. CNN validation results after image enhancement.

Method | Metric | AlexNet | Places365-GoogLeNet | VGG16 | ResNet50 | GoogLeNet | ConvNeXtv2
Original (padding, enhancement) | Accuracy | 88.69% | 88.69% | 88.69% | 88.69% | 86.90% | 87.50%
Original (padding, enhancement) | Training time | 2 m 49 s | 5 m 17 s | 1 m 21 s | 4 m 33 s | 5 m 26 s | 21 m 58 s
Gaussian high-pass filter | Accuracy | 92.26% | 93.45% | 91.67% | 89.29% | 92.26% | 89.29%
Gaussian high-pass filter | Training time | 2 m 50 s | 6 m 3 s | 1 m 16 s | 4 m 3 s | 5 m 33 s | 20 m 53 s
Adaptive histogram equalization | Accuracy | 88.10% | 92.86% | 92.26% | 91.07% | 89.88% | 95.23%
Adaptive histogram equalization | Training time | 50 s | 1 m 16 s | 1 m 21 s | 4 m 1 s | 5 m 14 s | 55 m 40 s
Gaussian high-pass filter with adaptive histogram equalization | Accuracy | 93.45% | 91.07% | 92.86% | 90.48% | 88.10% | 93.45%
Gaussian high-pass filter with adaptive histogram equalization | Training time | 2 m 24 s | 5 m 37 s | 1 m 30 s | 3 m 55 s | 5 m 48 s | 22 m 2 s
Table 12. Confusion matrix for multi-class classification on validation set in CNN.

Predicted \ Actual | Normal | Apical Lesion | Peri-endo Combined Lesion
Normal | 56 | 4 | 0
Apical Lesion | 0 | 49 | 1
Peri-endo Combined Lesion | 0 | 3 | 55
Table 13. Comparison between different methods.

Metric | ConvNeXtv2 (this study): Normal | Apical Lesion | Peri-endo Combined Lesion | Total | U-Net Model [8]: Apical Lesion | Decision Tree [23]: Apical Lesion
Accuracy | – | – | – | 95.23% | No data | No data
Precision | 93.33% | 98.00% | 94.82% | 95.38% | No data | No data
Recall | 99.75% | 87.50% | 98.21% | 95.23% | No data | No data
F1-Score | 96.55% | 92.45% | 96.49% | 95.16% | 74.2% | 89%
Table 14. YOLOv8 training set and validation set instance quantity.

Disease | Training Set | Validation Set | Total
Normal | 194 | 58 | 252
Apical Lesion | 106 | 20 | 126
Peri-endo Combined Lesion | 70 | 15 | 85
Total | 370 | 93 | 463
Table 15. YOLOv8 validation results after data enhancement.

Method | Metric | Normal | Apical Lesion | Peri-Endo Combined Lesion | Total
Original | mAP50 | 0.871 | 0.742 | 0.928 | 0.847
Original | Accuracy | – | – | – | 75.00%
Original with data enhancement | mAP50 | 0.906 | 0.878 | 0.927 | 0.904
Original with data enhancement | Accuracy | – | – | – | 84.70%
Table 16. YOLOv8 validation results after image processing.

Method | Metric | Normal | Apical Lesion | Peri-Endo Combined Lesion | Total
Linear Transformation with Adaptive Histogram Equalization | mAP50 | 0.904 | 0.876 | 0.957 | 0.912
Linear Transformation with Adaptive Histogram Equalization | Accuracy | – | – | – | 85.16%
Flat-Field Correction with Adaptive Histogram Equalization | mAP50 | 0.888 | 0.857 | 0.971 | 0.905
Flat-Field Correction with Adaptive Histogram Equalization | Accuracy | – | – | – | 87.79%
Gaussian High-Pass Filter with Negative Film Effect | mAP50 | 0.918 | 0.913 | 0.923 | 0.918
Gaussian High-Pass Filter with Negative Film Effect | Accuracy | – | – | – | 92.13%
Table 17. Confusion matrix for multi-class classification with YOLO detection.

Predicted \ Actual | Normal | Apical Lesion | Peri-endo Combined Lesion
Normal | 143 | 3 | 2
Apical Lesion | 6 | 55 | 5
Peri-endo Combined Lesion | 2 | 2 | 36
Table 18. YOLOv8 model compared to models in other papers.

Metric | YOLOv8 (this study): Normal | Apical Lesion | Peri-endo Combined Lesion | Total | YOLOv5x [24]: Apical Lesion | YOLOv3 Darknet [25]: Apical Lesion
Accuracy | – | – | – | 92.13% | No data | No data
Precision | 69.3% | 91% | 86.4% | 82.2% | 83% | 56%
Recall | 84.2% | 95% | 91.7% | 90% | No data | 98%
mAP50 | 0.918 | 0.913 | 0.923 | 0.918 | 0.88 | No data
F1-Score | 87.46% | 80.13% | 88.49% | 85.92% | 87% | 71%