Article

Evaluation of a Vendor-Agnostic Deep Learning Model for Noise Reduction and Image Quality Improvement in Dental CBCT

by Wojciech Kazimierczak 1,2,3,*, Róża Wajer 1,2, Oskar Komisarek 4, Marta Dyszkiewicz-Konwińska 5, Adrian Wajer 6, Natalia Kazimierczak 3, Joanna Janiszewska-Olszowska 7 and Zbigniew Serafin 8
1 Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
2 Department of Radiology and Diagnostic Imaging, University Hospital No. 1 in Bydgoszcz, Marii Skłodowskiej-Curie 9, 85-094 Bydgoszcz, Poland
3 Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
4 Department of Otolaryngology, Audiology and Phoniatrics, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
5 Department of Diagnostic Imaging, Poznan University of Medical Sciences, 61-701 Poznań, Poland
6 Dental Primus, Poznańska 18, 88-100 Inowrocław, Poland
7 Department of Interdisciplinary Dentistry, Pomeranian Medical University in Szczecin, Al. Powstańców Wlkp. 72, 70-111 Szczecin, Poland
8 Faculty of Medicine, Bydgoszcz University of Science and Technology, Kaliskiego 7, 85-796 Bydgoszcz, Poland
* Author to whom correspondence should be addressed.
Diagnostics 2024, 14(21), 2410; https://doi.org/10.3390/diagnostics14212410
Submission received: 22 September 2024 / Revised: 21 October 2024 / Accepted: 22 October 2024 / Published: 29 October 2024

Abstract

Background/Objectives: To assess the impact of a vendor-agnostic deep learning model (DLM) on image quality parameters and noise reduction in dental cone-beam computed tomography (CBCT) reconstructions. Methods: This retrospective study was conducted on CBCT scans of 93 patients (41 males and 52 females, mean age 41.2 years, SD 15.8 years) from a single center using the inclusion criteria of standard radiation dose protocol images. Objective and subjective image quality was assessed in three predefined landmarks through contrast-to-noise ratio (CNR) measurements and visual assessment using a 5-point scale by three experienced readers. The inter-reader reliability and repeatability were calculated. Results: Eighty patients (30 males and 50 females; mean age 41.5 years, SD 15.94 years) were included in this study. The CNR in DLM reconstructions was significantly greater than in native reconstructions, and the mean CNR in regions of interest 1-3 (ROI1-3) in DLM images was 11.12 ± 9.29, while in the case of native reconstructions, it was 7.64 ± 4.33 (p < 0.001). The noise level in native reconstructions was significantly higher than in the DLM reconstructions, and the mean noise level in ROI1-3 in native images was 45.83 ± 25.89, while in the case of DLM reconstructions, it was 35.61 ± 24.28 (p < 0.05). Subjective image quality assessment revealed no statistically significant differences between native and DLM reconstructions. Conclusions: The use of deep learning-based image reconstruction algorithms for CBCT imaging of the oral cavity can improve image quality by enhancing the CNR and lowering the noise.

1. Introduction

Cone-beam computed tomography (CBCT) has emerged as a valuable dental imaging tool because of its ability to provide precise three-dimensional reconstruction of the dentomaxillofacial region. CBCT surpasses the limitations of conventional two-dimensional dental imaging, facilitating accurate insight into the multiplanar details of maxillofacial bony structures and adjacent soft tissues. A spatial resolution of less than 100 µm significantly surpasses the imaging capabilities of conventional computed tomography (CT), allowing for precise diagnosis and measurements [1,2,3]. Such precision is desired in implant procedure planning, cephalometry, and endodontics. Although relatively recently introduced (2000s) for broader commercial use, CBCT has already proven its value in a wide range of dental applications, including implant planning, periodontology, temporomandibular joint (TMJ) imaging, orthodontics, and oral and maxillofacial surgery [4,5].
However, CBCT as an imaging modality has limitations. Despite the exceptional image quality achieved in phantom studies, patient studies should be conducted in accordance with the ALADIP principle (As Low as Diagnostically Acceptable, being Indication-oriented and Patient-specific) [6]. This approach, inter alia, aims to prevent excessive tube settings and thus may lead to a greater number of artifacts and excessive noise. In the case of CBCT, image quality varies considerably, specifically in contrast resolution and noise level, across CBCT machines and acquisition settings, accompanied by a broad spectrum of radiation doses administered to patients [7]. CBCT artifacts are induced by discrepancies between the mathematical models and the actual imaging processes [8]. Noise, an unwanted disturbance in a signal, can significantly impair the quality of the images produced by CBCT units. It manifests as inconsistent attenuation values in projection images, causing errors in the computed attenuation coefficients and reducing low-contrast resolution, which affects the differentiation of low-density tissues [9,10]. Both artifacts and noise may simulate or obscure pathologies, leading to misdiagnoses and potentially worsening patient outcomes. Additionally, noise is inherently linked to the dose delivered during an examination, with an inversely proportional relationship [11]. Therefore, it is reasonable to pursue noise and artifact reduction, as such methods may help reduce the radiation dose delivered during CBCT and improve its diagnostic accuracy.
To date, several studies have demonstrated the efficacy of advanced image reconstruction algorithms in reducing noise and improving image quality in CBCT scans. For instance, iterative reconstruction (IR) techniques have been shown to significantly enhance image quality in conventional CT and CBCT imaging [12,13,14,15,16,17,18]. Recent advancements in vendor-specific deep learning reconstructions (DLRs), such as TrueFidelity™ by GE Healthcare and AiCE by Canon Medical Systems, have further improved diagnostic accuracy and reduced radiation doses [19,20,21]. However, these approaches are limited by their vendor-specific nature, which restricts their use to specific scanners. A potential solution is a vendor-agnostic deep learning model (DLM) that works in the image postprocessing domain and does not require projection data; the term vendor-agnostic refers to the fact that the program is not limited to specific CT or CBCT machine manufacturers and can be applied across different platforms [22]. Our study explored the potential of such a vendor-agnostic DLM to overcome these limitations. Previous studies have already shown that vendor-agnostic DLMs can both reduce image noise and provide high diagnostic accuracy comparable to that of vendor-specific DLRs [23,24,25,26]. Hypothetically, they could also positively affect the quality parameters of dental CBCT images, thereby increasing their diagnostic value for the evaluation of common pathological and dental lesions.
The aim of this study was to assess objective and subjective image quality parameters of standard dental CBCT and DLM-reconstructed images.

2. Materials and Methods

2.1. Population

The study population consisted of 93 patients (41 males and 52 females) aged 15–72 years (mean 41.2 years, SD 15.8). All CBCT scans were acquired at a single private orthodontic center. All patients were referred for CBCT scans by orthodontists and dental surgeons between January and September 2023. The primary indication was suspicion of periapical lesions on the basis of OPG and single-tooth X-rays. The main inclusion criterion was images obtained using the standard radiation dose and image quality protocol. Images burdened by motion artifacts were excluded from the study.

2.2. Image Acquisition and Postprocessing

All scans were performed using a Hyperion X9 PRO 13 × 10 unit (MyRay, Imola, Italy). A single standard protocol, the “Regular” setting of the apparatus, was used (90 kV, 36 mAs, CTDIvol 4.09 mGy, and 13 cm field of view). All images were reconstructed at a slice thickness of 0.3 mm. After scanning, the images were anonymized and exported for further analysis. The deep learning-denoised reconstructions were obtained using ClariCT.AI software (ClariPI, Seoul, Republic of Korea).

2.3. Objective Image Quality

To assess the objective image quality, a radiologist with 2 years of experience in craniofacial CT assessment placed square regions of interest (ROIs) in the following locations:
  • Periapical region of tooth 15 within the maxillary bone,
  • Periapical region of tooth 33 within the mandible,
  • The spongious bone of the mandible in the mental foramen area,
  • Muscles of the tongue.
The ROIs were carefully placed in homogeneous tissues (spongious bone of periapical regions, mandible, and tongue musculature) to avoid artifacts and lesions (e.g., cysts, enostoses, and endodontic materials). The contrast-to-noise ratio (CNR) was evaluated using ImageJ software v. 1.41 (National Institutes of Health, Bethesda, MD, USA). ROIs were automatically propagated between the native and DLM reconstructions to maximize the objectivity of the results. The CNR calculation formula presented by Koivisto [27] was adopted:
CNR = (S_ROI − S_T) / N
where S_ROI is the mean signal in the anatomical landmark or periapical lesion ROI (ROI1–3), S_T is the mean signal in the background ROI (tongue), and N is the average of the standard deviations (SDs) measured in the landmark and background ROIs.
The CNRs of the specified anatomical landmarks were compared to evaluate the effectiveness of the AI denoising tool.
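As an illustrative sketch (not the study's actual ImageJ workflow), the CNR for a pair of ROIs can be computed directly from pixel arrays; the ROI sizes and intensity values below are hypothetical:

```python
import numpy as np

def cnr(roi_landmark: np.ndarray, roi_background: np.ndarray) -> float:
    """CNR per Koivisto's formula: (S_ROI - S_T) / N, where N is the
    average of the standard deviations of the two ROIs."""
    s_roi = roi_landmark.mean()            # mean signal at the landmark
    s_t = roi_background.mean()            # mean signal in the tongue ROI
    n = (roi_landmark.std(ddof=1) + roi_background.std(ddof=1)) / 2
    return (s_roi - s_t) / n

# Hypothetical 10x10 ROIs: spongious bone is brighter than tongue muscle.
rng = np.random.default_rng(0)
bone = rng.normal(900.0, 40.0, size=(10, 10))
tongue = rng.normal(300.0, 50.0, size=(10, 10))
print(round(cnr(bone, tongue), 2))
```

Propagating identical ROI coordinates between the native and DLM volumes, as done in the study, ensures that both reconstructions are sampled at exactly the same anatomy.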

2.4. Subjective Image Quality

Subjective image quality was assessed by a radiologist and two dentists (all readers with >5 years of experience in craniofacial CT assessment) who were blinded to patient details and the use of the AI denoising tool. The images were evaluated on a five-point scale (1 = poor, 5 = excellent), considering factors such as noise, sharpness, and visibility of anatomical structures as follows:
Level 5—excellent delineation of structures and excellent image quality;
Level 4—clear delineation of structures and good image quality;
Level 3—anatomical structures still fully assessable in all parts and acceptable image quality;
Level 2—structures identifiable with adequate image quality;
Level 1—anatomical structures not identifiable, images with no diagnostic value.
Image quality assessment was performed in the following predefined anatomical regions: the alveolar recess of the maxillary sinuses, the apical area of tooth 15, and the apical area of tooth 33.
To enhance the repeatability and objectivity of the qualitative analyses, an illustration was created depicting representative images rated according to the aforementioned scale (Figure 1). In cases of metal artifacts or missing teeth, the corresponding tooth on the opposite side of the dental arch was assessed (e.g., in the case of severe artifacts in the apical area of tooth 15, tooth 25 was evaluated instead).
Agreement between all the readers’ ratings of the subjective image quality of the native and DLM-reconstructed images was assessed.
Subjective image quality analysis was performed on a dedicated console using iRYS Viewer version 6.2 (MyRay, Imola, Italy) software. The window width and center were predefined at 1048 and 4096, respectively.

2.5. Error Study

Fifteen randomly selected subjects were re-examined by the same author one month after the initial analysis. The intraclass correlation coefficient (ICC) for the subjective image quality analyses was calculated to assess the agreement between examinations.
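As a hedged sketch of this repeatability statistic (the paper does not state which ICC form was used; ICC(3,1), two-way mixed, consistency, is assumed here), a test-retest ICC can be computed from a subjects × sessions matrix. The ratings below are illustrative, not study data:

```python
def icc_3_1(data):
    """ICC(3,1): rows = subjects, columns = repeated rating sessions."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between-subject
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between-session
    ss_tot = sum((x - grand) ** 2 for row in data for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_tot - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Illustrative 5-point scores for five subjects rated in two sessions.
scores = [[4, 5], [2, 2], [5, 4], [3, 3], [1, 2]]
print(round(icc_3_1(scores), 3))
```

Perfectly repeated ratings yield an ICC of 1.0; the study's observed value of 0.841 falls in the conventional "excellent" band.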

2.6. Sample Size Calculation

A post hoc power analysis was conducted to determine the adequacy of the study sample. A two-tailed paired-sample t-test was used, since the CNR measurements were taken from the same patients under both conditions (native and DLM reconstructions). The effect size (Cohen’s d) for paired samples was calculated using the pooled SD of the mean CNR values in the DLM and native reconstructions. The power analysis was conducted with G*Power software (version 3.1) [28], assuming an α error probability of 0.05 and a power (1 − β) of 0.80.
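The paired-sample calculation can be sketched as follows; note that this uses a normal approximation, so the required n may differ by a subject or two from G*Power's exact noncentral-t result, and the function names here are ours, not G*Power's:

```python
from math import ceil
from statistics import NormalDist

def cohens_d_paired(mean_a: float, mean_b: float, sd_pooled: float) -> float:
    """Effect size from two condition means and a pooled SD."""
    return abs(mean_a - mean_b) / sd_pooled

def paired_n_for_power(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size for a two-tailed paired t-test."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = z.inv_cdf(power)           # ~0.84 for power = 0.80
    return ceil(((z_a + z_b) / d) ** 2)

# Mean CNRs reported in the study: 11.12 (DLM) vs. 7.64 (native), pooled SD 7.25.
d = cohens_d_paired(11.12, 7.64, 7.25)
print(round(d, 2), paired_n_for_power(d))
```

With d = 0.48, the approximation lands within roughly one subject of the sample size of 34 reported by the study's G*Power analysis.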

2.7. Statistical Evaluation

Inter-rater agreement was assessed using Fleiss’ Kappa. Differences between native and DLM reconstructions were analyzed using paired t-tests. A power analysis was conducted to determine the appropriate sample size for detecting significant differences in noise levels between the two reconstruction methods. Statistical significance was set at p < 0.05 [27]. Statistical analyses were conducted using R software version 4.3.2 [29].
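For illustration, Fleiss' Kappa for three readers rating cases on the 5-point scale can be computed as below (a pure-Python sketch; the rating counts are hypothetical, not study data):

```python
def fleiss_kappa(counts):
    """counts[i][j]: number of raters assigning case i to category j.
    Each row must sum to the same number of raters."""
    n_cases = len(counts)
    n_raters = sum(counts[0])
    n_cats = len(counts[0])
    # Marginal proportion of all ratings falling in each category.
    p_j = [sum(row[j] for row in counts) / (n_cases * n_raters)
           for j in range(n_cats)]
    # Mean per-case observed agreement.
    p_bar = sum((sum(c * c for c in row) - n_raters) /
                (n_raters * (n_raters - 1)) for row in counts) / n_cases
    p_e = sum(p * p for p in p_j)   # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical counts: 4 cases, 3 readers, columns = quality scores 1..5.
ratings = [
    [0, 0, 0, 3, 0],
    [0, 0, 1, 2, 0],
    [0, 0, 0, 1, 2],
    [0, 0, 0, 3, 0],
]
print(round(fleiss_kappa(ratings), 3))
```

On this convention, the study's observed values of roughly 0.54–0.63 correspond to moderate-to-substantial agreement.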

3. Results

3.1. Population

The authors screened a total of 93 CBCT scans. Out of these, 13 scans were excluded, as they did not meet the inclusion criteria. Therefore, CBCT scans from 80 patients (30 males and 50 females; mean age 41.45 years, SD 15.94 years) were included in the final analysis (80/93 screened patients). The application of eligibility criteria is presented in Figure 2.

3.2. Objective Image Quality

Figure 3 shows the sample ROI position with the corresponding signal and SD values.
The average signal measured in the ROIs at the three locations (the periapical areas of teeth 15 and 33 and the spongious bone of the mandible in the area of the mental foramen) showed slightly lower mean values in the DLM images than in the native reconstructions; however, the difference was not statistically significant (p > 0.05). Table 1 summarizes the results of the objective image quality assessment. A graphical representation of the mean signal calculations is shown in Figure 4.
There was a statistically significant difference between the noise levels of the two reconstruction types (p = 0.011). Figure 5 illustrates the mean noise levels.
The CNR in DLM reconstructions was significantly higher than that in native reconstructions across all examined locations (p < 0.05), as shown in Figure 6. The mean CNR in ROI1-3 in DLM images was 11.12 ± 9.29, while in the case of native reconstructions, it was 7.64 ± 4.33 (Table 1).

3.3. Subjective Image Quality

The results of the subjective image quality assessments are summarized in Table 2. The data in the table represent the mean ratings of all readers. Overall, subjective image quality was lowest for the apical area of tooth 15 in both the native and DLM reconstructions. The highest mean scores were given to the apical area of tooth 33 in both the evaluated reconstructions. The differences between the mean ratings for both types of reconstructions were slight and not statistically significant (p > 0.05), ICC = 0.753. Figure 7 presents the results of the subjective image quality assessments.
Inter-reader agreement for subjective image quality assessments was evaluated using Fleiss’ Kappa. The results indicated moderate to substantial agreement among the three readers, with Kappa values ranging from 0.536 to 0.628 for native reconstructions and 0.540 to 0.628 for DLM reconstructions (Table 3).

3.4. Error Study

Analysis of the repeatability of subjective image quality analysis carried out by the reader demonstrated excellent concordance (ICC = 0.841).

3.5. Sample Size

A power analysis was conducted to determine the sample size required to detect significant differences in noise levels between the native and DLM reconstructions. The pooled SD of the mean CNR values of the DLM and native images was 7.25, and the calculated Cohen’s d was 0.48.
The analysis indicated that a sample size of 34 subjects per group was sufficient to achieve a power of 0.8, with an effect size (Cohen’s d) of 0.48 and a significance level of 0.05. This sample size ensures that the study is adequately powered to detect meaningful differences in objective image quality parameters.

4. Discussion

The aim of this study was to assess the image quality parameters of standard dental CBCT images and images reconstructed using a DLM algorithm. Our study revealed that the DLM reconstructions had slightly lower mean signal values than the native reconstructions, although this difference was not statistically significant. However, the CNR was significantly higher in the DLM reconstructions than in the native reconstructions, and noise levels were significantly lower. This indicates that the evaluated DLM algorithm improves the contrast between anatomical structures in CBCT images. The subjective image quality analysis performed by three readers blinded to the type of reconstruction showed no statistically significant differences.
Surprisingly, although the differences were not statistically significant, the subjective image quality assessments yielded mixed results across the selected anatomical structures. The mean scores of all readers for the alveolar recess of the maxillary sinus and the apical area of tooth 33 were greater for the DLM reconstructions than for the native reconstructions, whereas the ratings for the apical area of tooth 15 were greater in the native reconstructions. In our opinion, this indicates a clear convergence in the quality of both reconstructions and high repeatability of the readers’ quality assessments. Upon re-evaluation by the readers after the results of the analyses were obtained, some of the DLM reconstructions showed poorer delineation of structures, which might have influenced the image quality ratings of both periapical areas. Alongside the reduced noise levels, excessive smoothing of very thin structures, for example, the periodontal ligament, worsened their delineation. This phenomenon might affect the visualization of structures critical to CBCT indications, such as the root canal or alveolar bone, where spatial resolution is key [30,31,32]. Similar results were reported by Ylisiurua et al. [33], who found that deep learning algorithms enhanced the visualization of soft tissues but degraded the visualization of bones and teeth. The authors subjectively noted a significant decrease in resolution and concluded that the images resembled images reconstructed with the “soft-tissue kernels” used in CT scanners. Since CBCT is used mainly in the diagnosis of bones and teeth, such over-smoothing of details may compromise diagnostic accuracy. Future studies focused on evaluating the delineation of such structures may answer the question of whether DLM algorithms significantly reduce the value of the examination in assessing submillimeter structures.
Our findings suggest that although the quantitative improvements are noticeable, the qualitative assessment of these changes may require a higher threshold to achieve significance. We must emphasize that the evaluated DLM was not designed for CBCT imaging; its purpose was to reduce image noise in CT images. Therefore, the results of our study should be regarded as a scientifically driven attempt to explore the impact of this tool on a domain similar to CT. The results are similar to those of our previous study evaluating the effects of applying the same vendor-agnostic DLM to CBCT images of TMJs [34]. That study showed significantly better objective image quality in DLM reconstructions than in native images (CNR levels; p < 0.001). However, the subjective image analysis showed no significant differences in image quality between the reconstruction types (p = 0.055). Moreover, the assessment of degenerative joint disease (DJD) lesions of the TMJ was not affected by the type of reconstruction assessed (p > 0.05). We concluded that the analyzed DLM reconstruction notably enhanced the objective image quality of TMJ CBCT images but did not significantly affect the subjective image quality or DJD lesion diagnosis. Our studies provide new insights into the efficacy of the selected DLM in this specific context, separate from its general approval and usage. Therefore, we caution against generalizing our results beyond this specific context. Nevertheless, our findings indicate that the use of AI denoising algorithms designed for CT imaging may improve the objective image quality parameters of CBCT images. Further studies, including a larger number of examinations performed using various devices and different diagnostic protocols, could demonstrate greater differences in the results of qualitative and quantitative image assessments.
It is likely that the results would be similar to those published in qualitative analyses of studies performed using low-dose protocols in standard CT examinations [22,31,35,36,37,38,39]. Compared with standard and iterative reconstructions (IRs), deep learning reconstructions have already proven capable of radiation dose reductions between 30% and 71% while maintaining diagnostic image quality owing to improved noise reduction [40]. Overall, the trend toward improved image quality with the use of DLM algorithms in CBCT is promising.
Recent studies [41,42,43] have assessed the effectiveness of generative AI in reducing noise and metal artifacts in dental CT images. Hegazy et al. (2020) [41] evaluated the image quality of low-dose dental CT images reconstructed with a generative adversarial network using the Wasserstein loss function (WGAN). The authors achieved both quantitative and qualitative improvements in image quality; however, interestingly, they encountered the problem of over-smoothing small image details. In a 2021 study [43], Hegazy et al. evaluated the impact of variations in the WGAN and U-WGAN on the image quality of half-scan dental CTs. Both the noise levels and qualitative image parameters were significantly improved in the AI-reconstructed images. Another notable study by Hu et al. [42] proposed a WGAN to decrease the level of noise and metal artifacts in low-dose dental CT images. The results of the study showed that the proposed WGAN algorithm effectively removed artifacts and noise from low-dose dental CT images and outperformed other methods, such as general GANs and convolutional neural networks, in terms of image quality and artifact correction.
The literature on noise optimization in dental CBCT examinations, as opposed to conventional CT, is limited. In a recent study by Ramage (2023) [18], the authors assessed the effect of standard filtered back projection (FBP) and iterative reconstruction (IR) on CBCT image noise. They found that, compared with FBP, IR significantly reduced image noise (99.84 ± 16.28 vs. 198.65 ± 55.58, respectively) and concluded that the additional processing time for IR was clinically acceptable. A study by Choi et al. [44] investigated the efficacy of a novel, self-supervised convolutional neural network for projection noise reduction. Their phantom study revealed that the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) significantly improved compared with those of uncorrected images (27.08 and 0.839 vs. 15.68 and 0.103, respectively). A similar phantom study by Han and Yu evaluated the efficacy of a novel self-supervised denoising method based on Bernoulli sampling [45]; the proposed method outperformed conventional denoising methods by at least 4.47 dB in PSNR. Brendlin et al. [46] investigated the efficacy of deep learning denoising (DLD) techniques in mitigating the trade-off between radiation dose and noise in CBCT during interventional procedures; the application of DLD enabled significant radiation dose reduction combined with enhanced objective image quality parameters (higher CNR and lower noise). Two studies evaluated the effectiveness of DLD techniques in maxillofacial CBCT [33,47]. Kim et al. confirmed that DLD techniques improved readers’ diagnostic accuracy for sinus fungal balls and chronic rhinosinusitis [47]. Ylisiurua et al. [33] compared iterative and DLD techniques for noise reduction in dentomaxillofacial applications. Their study demonstrated that the proposed method enabled image enhancement comparable to that of the iterative method, but with a faster processing time. Despite these promising results, however, the readers preferred iterative reconstruction over DLD images for hard tissue evaluation. Notably, none of these studies evaluated commercially available noise reduction methods, and the evaluated techniques were available only to narrow groups of researchers. Therefore, the comparison of different DLD techniques remains an exciting topic for further research.
The findings of this study suggest that the application of a DLM to dental CBCT images can improve the CNR without compromising diagnostic quality. These findings are supported by the objective CNR measurements, which showed a statistically significant improvement in the DLM-reconstructed images compared with the native reconstructions. It is important to note that while confidence intervals provide an estimate of the range within which the true parameter lies, they do not preclude statistically significant differences between groups; our finding of significant differences in CNR despite overlapping confidence intervals underscores the importance of hypothesis testing. Compared with commercial software such as TrueFidelity™ by GE Healthcare and AiCE by Canon Medical Systems, our vendor-agnostic DLM offers several advantages. Unlike vendor-specific solutions, a vendor-agnostic DLM can be applied to scans from various manufacturers, enhancing its versatility in clinical settings. Some studies have revealed that while the noise reduction capabilities of this DLM are comparable to those of commercial software, it excels in maintaining image quality across different imaging systems [48]. This flexibility could streamline workflows and reduce the costs associated with acquiring multiple software licenses.
However, this study has several limitations. The sample size, although adequate for a pilot study, was relatively small. Larger studies with more diverse patient populations are needed to generalize these findings. Moreover, the subjective nature of image quality assessment, even for experienced readers, can be influenced by individual biases. Although the study used predefined scales and illustrations to aid in the assessments, these evaluations are inherently subjective and should be interpreted with caution. Notably, this study focused on a specific DLM algorithm and CBCT scanner. Further research is required to evaluate the generalizability of these findings to other DLM algorithms and CBCT scanners. Additionally, we evaluated images acquired only with a “regular quality” preset; therefore, our findings cannot be extrapolated to other protocols, especially low-dose protocols.

5. Conclusions

Overall, the results of this study support the potential of DLMs to objectively improve CBCT image quality by increasing CNR and reducing image noise. However, some issues with the delineation of small bony structures were noted, although no statistically significant differences in subjective image quality ratings were found. Our results could have significant implications for patient care by reducing the radiation dose required for diagnostic-quality images and potentially improving the diagnostic accuracy of dentomaxillofacial pathology. Further research is warranted to fully understand the clinical impact of DLMs on CBCT and to explore their integration into standard practice.

Author Contributions

Conceptualization, W.K. and R.W.; methodology, W.K. and R.W.; software, W.K. and N.K.; validation, W.K., Z.S. and J.J.-O.; formal analysis, W.K. and R.W.; investigation, W.K., R.W., A.W., O.K. and M.D.-K.; resources, W.K.; data curation, W.K. and R.W.; writing—original draft preparation, W.K. and R.W.; writing—review and editing, W.K., R.W., O.K. and N.K.; visualization, W.K. and R.W.; supervision, W.K.; project administration, W.K. and R.W.; funding acquisition, Z.S. and W.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Collegium Medicum, Nicolaus Copernicus University in Torun, Poland (protocol no. KB 227/2023, 10 April 2023), for studies involving humans.

Informed Consent Statement

Patient consent was waived due to the retrospective nature of the study and the anonymization of patient data.

Data Availability Statement

Data are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kaasalainen, T.; Ekholm, M.; Siiskonen, T.; Kortesniemi, M. Dental Cone Beam CT: An Updated Review. Phys. Medica 2021, 88, 193–217. [Google Scholar] [CrossRef] [PubMed]
  2. Fokas, G.; Vaughn, V.M.; Scarfe, W.C.; Bornstein, M.M. Accuracy of Linear Measurements on CBCT Images Related to Presurgical Implant Treatment Planning: A Systematic Review. Clin. Oral Implant. Res. 2018, 29, 393–415. [Google Scholar] [CrossRef] [PubMed]
  3. Wikner, J.; Hanken, H.; Eulenburg, C.; Heiland, M.; Gröbe, A.; Assaf, A.T.; Riecke, B.; Friedrich, R.E. Linear Accuracy and Reliability of Volume Data Sets Acquired by Two CBCT-Devices and an MSCT Using Virtual Models: A Comparative In-Vitro Study. Acta Odontol. Scand. 2016, 74, 51–59. [Google Scholar] [CrossRef] [PubMed]
  4. Gaêta-Araujo, H.; Leite, A.F.; de Faria Vasconcelos, K.; Jacobs, R. Two Decades of Research on CBCT Imaging in DMFR—An Appraisal of Scientific Evidence. Dentomaxillofac. Radiol. 2021, 50, 20200367. [Google Scholar] [CrossRef]
  5. Abesi, F.; Jamali, A.S.; Zamani, M. Accuracy of Artificial Intelligence in the Detection and Segmentation of Oral and Maxillofacial Structures Using Cone-Beam Computed Tomography Images: A Systematic Review and Meta-Analysis. Pol. J. Radiol. 2023, 88, 256–263. [Google Scholar] [CrossRef]
  6. Oenning, A.C.; Jacobs, R.; Pauwels, R.; Stratis, A.; Hedesiu, M.; Salmon, B. Cone-Beam CT in Paediatric Dentistry: DIMITRA Project Position Statement. Pediatr. Radiol. 2018, 48, 308–316. [Google Scholar] [CrossRef]
  7. Widmann, G.; Bischel, A.; Stratis, A.; Bosmans, H.; Jacobs, R.; Gassner, E.-M.; Puelacher, W.; Pauwels, R. Spatial and Contrast Resolution of Ultralow Dose Dentomaxillofacial CT Imaging Using Iterative Reconstruction Technology. Dentomaxillofac. Radiol. 2017, 46, 20160452. [Google Scholar] [CrossRef]
  8. Schulze, R.; Heil, U.; Groß, D.; Bruellmann, D.D.; Dranischnikow, E.; Schwanecke, U.; Schoemer, E. Artefacts in CBCT: A Review. Dentomaxillofac. Radiol. 2011, 40, 265–273. [Google Scholar] [CrossRef]
  9. Bechara, B.; McMahan, C.A.; Moore, W.S.; Noujeim, M.; Geha, H.; Teixeira, F.B. Contrast-to-Noise Ratio Difference in Small Field of View Cone Beam Computed Tomography Machines. J. Oral Sci. 2012, 54, 227–232. [Google Scholar] [CrossRef]
  10. Nagarajappa, A.; Dwivedi, N.; Tiwari, R. Artifacts: The Downturn of CBCT Image. J. Int. Soc. Prev. Community Dent. 2015, 5, 440–445. [Google Scholar] [CrossRef]
  11. Kocasarac, H.D.; Yigit, D.H.; Bechara, B.; Sinanoglu, A.; Noujeim, M. Contrast-to-Noise Ratio with Different Settings in a CBCT Machine in Presence of Different Root-End Filling Materials: An In Vitro Study. Dentomaxillofac. Radiol. 2016, 45, 20160012. [Google Scholar] [CrossRef] [PubMed]
  12. Geyer, L.L.; Schoepf, U.J.; Meinel, F.G.; Nance, J.W., Jr.; Bastarrika, G.; Leipsic, J.A.; Paul, N.S.; Rengo, M.; Laghi, A.; De Cecco, C.N. State of the Art: Iterative CT Reconstruction Techniques. Radiology 2015, 276, 339–357. [Google Scholar] [CrossRef] [PubMed]
  13. Van Gompel, G.; Van Slambrouck, K.; Defrise, M.; Batenburg, K.J.; de Mey, J.; Sijbers, J.; Nuyts, J. Iterative Correction of Beam Hardening Artifacts in CT. Med. Phys. 2011, 38, S36–S49. [Google Scholar] [CrossRef] [PubMed]
  14. Schmidt, A.M.A.; Grunz, J.-P.; Petritsch, B.; Gruschwitz, P.; Knarr, J.; Huflage, H.; Bley, T.A.; Kosmala, A. Combination of Iterative Metal Artifact Reduction and Virtual Monoenergetic Reconstruction Using Split-Filter Dual-Energy CT in Patients with Dental Artifact on Head and Neck CT. Am. J. Roentgenol. 2022, 218, 716–727. [Google Scholar] [CrossRef]
  15. Gardner, S.J.; Mao, W.; Liu, C.; Aref, I.; Elshaikh, M.; Lee, J.K.; Pradhan, D.; Movsas, B.; Chetty, I.J.; Siddiqui, F. Improvements in CBCT Image Quality Using a Novel Iterative Reconstruction Algorithm: A Clinical Evaluation. Adv. Radiat. Oncol. 2019, 4, 390–400. [Google Scholar] [CrossRef]
  16. Chen, B.; Xiang, K.; Gong, Z.; Wang, J.; Tan, S. Statistical Iterative CBCT Reconstruction Based on Neural Network. IEEE Trans. Med. Imaging 2018, 37, 1511–1521. [Google Scholar] [CrossRef]
  17. Washio, H.; Ohira, S.; Funama, Y.; Morimoto, M.; Wada, K.; Yagi, M.; Shimamoto, H.; Koike, Y.; Ueda, Y.; Karino, T.; et al. Metal Artifact Reduction Using Iterative CBCT Reconstruction Algorithm for Head and Neck Radiation Therapy: A Phantom and Clinical Study. Eur. J. Radiol. 2020, 132, 109293. [Google Scholar] [CrossRef]
  18. Ramage, A.; Lopez Gutierrez, B.; Fischer, K.; Sekula, M.; Santaella, G.M.; Scarfe, W.; Brasil, D.M.; de Oliveira-Santos, C. Filtered Back Projection vs. Iterative Reconstruction for CBCT: Effects on Image Noise and Processing Time. Dentomaxillofac. Radiol. 2023, 52, 20230109. [Google Scholar] [CrossRef]
  19. Kim, J.H.; Yoon, H.J.; Lee, E.; Kim, I.; Cha, Y.K.; Bak, S.H. Validation of Deep-Learning Image Reconstruction for Low-Dose Chest Computed Tomography Scan: Emphasis on Image Quality and Noise. Korean J. Radiol. 2021, 22, 131–138. [Google Scholar] [CrossRef]
  20. Tatsugami, F.; Higaki, T.; Nakamura, Y.; Yu, Z.; Zhou, J.; Lu, Y.; Fujioka, C.; Kitagawa, T.; Kihara, Y.; Iida, M.; et al. Deep Learning–Based Image Restoration Algorithm for Coronary CT Angiography. Eur. Radiol. 2019, 29, 5322–5329. [Google Scholar] [CrossRef]
  21. Greffier, J.; Hamard, A.; Pereira, F.; Barrau, C.; Pasquier, H.; Beregi, J.P.; Frandon, J. Image Quality and Dose Reduction Opportunity of Deep Learning Image Reconstruction Algorithm for CT: A Phantom Study. Eur. Radiol. 2020, 30, 3951–3959. [Google Scholar] [CrossRef] [PubMed]
  22. Nam, J.G.; Ahn, C.; Choi, H.; Hong, W.; Park, J.; Kim, J.H.; Goo, J.M. Image Quality of Ultralow-Dose Chest CT Using Deep Learning Techniques: Potential Superiority of Vendor-Agnostic Post-Processing over Vendor-Specific Techniques. Eur. Radiol. 2021, 31. [Google Scholar] [CrossRef] [PubMed]
  23. Lim, W.H.; Choi, Y.H.; Park, J.E.; Cho, Y.J.; Lee, S.; Cheon, J.-E.; Kim, W.S.; Kim, I.-O.; Kim, J.H. Application of Vendor-Neutral Iterative Reconstruction Technique to Pediatric Abdominal Computed Tomography. Korean J. Radiol. 2019, 20, 1358–1367. [Google Scholar] [CrossRef] [PubMed]
  24. Choi, H.; Chang, W.; Kim, J.H.; Ahn, C.; Lee, H.; Kim, H.Y.; Cho, J.; Lee, Y.J.; Kim, Y.H. Dose Reduction Potential of Vendor-Agnostic Deep Learning Model in Comparison with Deep Learning–Based Image Reconstruction Algorithm on CT: A Phantom Study. Eur. Radiol. 2022, 32, 1247–1255. [Google Scholar] [CrossRef]
  25. Hong, J.H.; Park, E.-A.; Lee, W.; Ahn, C.; Kim, J.-H. Incremental Image Noise Reduction in Coronary CT Angiography Using a Deep Learning-Based Technique with Iterative Reconstruction. Korean J. Radiol. 2020, 21, 1165–1177. [Google Scholar] [CrossRef]
  26. Shin, Y.J.; Chang, W.; Ye, J.C.; Kang, E.; Oh, D.Y.; Lee, Y.J.; Park, J.H.; Kim, Y.H. Low-Dose Abdominal CT Using a Deep Learning-Based Denoising Algorithm: A Comparison with CT Reconstructed with Filtered Back Projection or Iterative Reconstruction Algorithm. Korean J. Radiol. 2020, 21, 356–364. [Google Scholar] [CrossRef]
  27. Koivisto, J.; van Eijnatten, M.; Ärnstedt, J.J.; Holli-Helenius, K.; Dastidar, P.; Wolff, J. Impact of Prone, Supine and Oblique Patient Positioning on CBCT Image Quality, Contrast-to-Noise Ratio and Figure of Merit Value in the Maxillofacial Region. Dentomaxillofac. Radiol. 2017, 46, 20160418. [Google Scholar] [CrossRef]
  28. Zou, G.Y. Sample Size Formulas for Estimating Intraclass Correlation Coefficients with Precision and Assurance. Stat. Med. 2012, 31, 3972–3981. [Google Scholar] [CrossRef] [PubMed]
  29. R Core Team. R: A Language and Environment for Statistical Computing; R Core Team: Vienna, Austria, 2021. [Google Scholar]
  30. Martins, J.N.R.; Versiani, M.A. CBCT and Micro-CT on the Study of Root Canal Anatomy. In The Root Canal Anatomy in Permanent Dentition; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  31. Brady, S.L.; Trout, A.T.; Somasundaram, E.; Anton, C.G.; Li, Y.; Dillman, J.R. Improving Image Quality and Reducing Radiation Dose for Pediatric CT by Using Deep Learning Reconstruction. Radiology 2021, 298, 180–188. [Google Scholar] [CrossRef]
  32. AlJehani, Y.A. Diagnostic Applications of Cone-Beam CT for Periodontal Diseases. Int. J. Dent. 2014, 2014, 865079. [Google Scholar] [CrossRef]
  33. Ylisiurua, S.; Sipola, A.; Nieminen, M.T.; Brix, M.A.K. Deep Learning Enables Time-Efficient Soft Tissue Enhancement in CBCT: Proof-of-Concept Study for Dentomaxillofacial Applications. Phys. Medica 2024, 117, 103184. [Google Scholar] [CrossRef] [PubMed]
  34. Kazimierczak, W.; Kędziora, K.; Janiszewska-Olszowska, J.; Kazimierczak, N.; Serafin, Z. Noise-Optimized CBCT Imaging of Temporomandibular Joints—The Impact of AI on Image Quality. J. Clin. Med. 2024, 13, 1502. [Google Scholar] [CrossRef] [PubMed]
  35. Nam, J.G.; Hong, J.H.; Kim, D.S.; Oh, J.; Goo, J.M. Deep Learning Reconstruction for Contrast-Enhanced CT of the Upper Abdomen: Similar Image Quality with Lower Radiation Dose in Direct Comparison with Iterative Reconstruction. Eur. Radiol. 2021, 31, 5533–5543. [Google Scholar] [CrossRef] [PubMed]
  36. Cheng, Y.; Han, Y.; Li, J.; Fan, G.; Cao, L.; Li, J.; Jia, X.; Yang, J.; Guo, J. Low-Dose CT Urography Using Deep Learning Image Reconstruction: A Prospective Study for Comparison with Conventional CT Urography. Br. J. Radiol. 2021, 94, 20201291. [Google Scholar] [CrossRef]
  37. Benz, D.C.; Ersözlü, S.; Mojon, F.L.A.; Messerli, M.; Mitulla, A.K.; Ciancone, D.; Kenkel, D.; Schaab, J.A.; Gebhard, C.; Pazhenkottil, A.P.; et al. Radiation Dose Reduction with Deep-Learning Image Reconstruction for Coronary Computed Tomography Angiography. Eur. Radiol. 2022, 32, 2620–2628. [Google Scholar] [CrossRef]
  38. Racine, D.; Brat, H.G.; Dufour, B.; Steity, J.M.; Hussenot, M.; Rizk, B.; Fournier, D.; Zanca, F. Image Texture, Low Contrast Liver Lesion Detectability and Impact on Dose: Deep Learning Algorithm Compared to Partial Model-Based Iterative Reconstruction. Eur. J. Radiol. 2021, 141, 109808. [Google Scholar] [CrossRef]
  39. Hata, A.; Yanagawa, M.; Yoshida, Y.; Miyata, T.; Tsubamoto, M.; Honda, O.; Tomiyama, N. Combination of Deep Learning-Based Denoising and Iterative Reconstruction for Ultra-Low-Dose CT of the Chest: Image Quality and Lung-RADS Evaluation. Am. J. Roentgenol. 2020, 215, 1321–1328. [Google Scholar] [CrossRef]
  40. Koetzier, L.R.; Mastrodicasa, D.; Szczykutowicz, T.P.; van der Werf, N.R.; Wang, A.S.; Sandfort, V.; van der Molen, A.J.; Fleischmann, D.; Willemink, M.J. Deep Learning Image Reconstruction for CT: Technical Principles and Clinical Prospects. Radiology 2023, 306, e221257. [Google Scholar] [CrossRef]
  41. Hegazy, M.A.A.; Cho, M.H.; Lee, S.Y. Image Denoising by Transfer Learning of Generative Adversarial Network for Dental CT. Biomed. Phys. Eng. Express 2020, 6, 055024. [Google Scholar] [CrossRef]
  42. Hu, Z.; Jiang, C.; Sun, F.; Zhang, Q.; Ge, Y.; Yang, Y.; Liu, X.; Zheng, H.; Liang, D. Artifact Correction in Low-Dose Dental CT Imaging Using Wasserstein Generative Adversarial Networks. Med. Phys. 2019, 46, 1686–1696. [Google Scholar] [CrossRef]
  43. Hegazy, M.A.A.; Cho, M.H.; Lee, S.Y. Half-Scan Artifact Correction Using Generative Adversarial Network for Dental CT. Comput. Biol. Med. 2021, 132, 104313. [Google Scholar] [CrossRef] [PubMed]
  44. Choi, K.; Kim, S.H.; Kim, S. Self-Supervised Denoising of Projection Data for Low-Dose Cone-Beam CT. Med. Phys. 2023, 50, 6319–6333. [Google Scholar] [CrossRef] [PubMed]
  45. Han, Y.-J.; Yu, H.-J. Self-Supervised Noise Reduction in Low-Dose Cone Beam Computed Tomography (CBCT) Using the Randomly Dropped Projection Strategy. Appl. Sci. 2022, 12, 1714. [Google Scholar] [CrossRef]
  46. Brendlin, A.S.; Dehdab, R.; Stenzl, B.; Mueck, J.; Ghibes, P.; Groezinger, G.; Kim, J.; Afat, S.; Artzner, C. Novel Deep Learning Denoising Enhances Image Quality and Lowers Radiation Exposure in Interventional Bronchial Artery Embolization Cone Beam CT. Acad. Radiol. 2024, 31, 2144–2155. [Google Scholar] [CrossRef]
  47. Kim, K.; Lim, C.Y.; Shin, J.; Chung, M.J.; Jung, Y.G. Enhanced Artificial Intelligence-Based Diagnosis Using CBCT with Internal Denoising: Clinical Validation for Discrimination of Fungal Ball, Sinusitis, and Normal Cases in the Maxillary Sinus. Comput. Methods Programs Biomed. 2023, 240, 107708. [Google Scholar] [CrossRef]
  48. Kim, C.; Kwack, T.; Kim, W.; Cha, J.; Yang, Z.; Yong, H.S. Accuracy of Two Deep Learning–Based Reconstruction Methods Compared with an Adaptive Statistical Iterative Reconstruction Method for Solid and Ground-Glass Nodule Volumetry on Low-Dose and Ultra–Low-Dose Chest Computed Tomography: A Phantom Study. PLoS ONE 2022, 17, e0270122. [Google Scholar] [CrossRef]
Figure 1. Qualitative image analysis: (A)—(5 points) excellent delineation of structures and excellent image quality; (B)—(4 points) clear delineation of structures and good image quality; (C)—(3 points) anatomical structures still fully assessable in all parts and acceptable image quality; (D)—(2 points) structures identifiable in adequate image quality; (E)—(1 point) anatomical structures not identifiable, image of no diagnostic value.
Figure 2. Flow chart showing the application of the eligibility criteria to the study material.
Figure 3. The sample ROI (yellow circle) positions and values in the native (A,C,E,G) and DLM (B,D,F,H) reconstructions were as follows: (A,B), tooth 15, mean signal 227.748, 227.267 and SD 179.793, 170.854, respectively; (C,D), tooth 33, mean signal 418.06, 417.462 and SD 136.493, 129.878, respectively; (E,F), mental foramen, mean signal 336.191, 330.893 and SD 111.672, 89.153, respectively; and (G,H), tongue musculature, mean signal 96.336, 95.785 and SD 38.251, 26.848, respectively.
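The SD values reported in the Figure 3 caption can be turned directly into per-ROI noise-reduction percentages. The following is a minimal sketch (the ROI names and values are taken from the caption above; the percentage formula is the standard relative change, not a method stated by the authors):

```python
# Percent noise reduction in the sample ROIs from Figure 3,
# computed from the reported standard deviations (native vs. DLM).
rois = {
    "tooth 15": (179.793, 170.854),
    "tooth 33": (136.493, 129.878),
    "mental foramen": (111.672, 89.153),
    "tongue musculature": (38.251, 26.848),
}

for name, (sd_native, sd_dlm) in rois.items():
    # Relative reduction of the SD (image noise) after DLM denoising.
    reduction = 100.0 * (sd_native - sd_dlm) / sd_native
    print(f"{name}: {reduction:.1f}% noise reduction")
```

For the soft-tissue ROI (tongue musculature) this yields roughly a 30% drop in SD, consistent with the significant noise reduction reported in Table 1.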
Figure 4. Results of the mean signal calculations (mean values; error bars represent SDs). No statistically significant differences were found (p > 0.05).
Figure 5. Results of noise calculations in ROIs 1–3 (mean values; error bars represent SDs). p values shown on graphs. There was a statistically significant difference (p = 0.011).
Figure 6. Results of CNR calculations (mean values, error bars represent SDs). p values shown on graphs.
Figure 7. Results of subjective image quality assessments (mean values).
Table 1. Results of the objective image quality assessment.
Parameter              | Native          | DLM             | p
Signal: Tooth 15       | 341 ± 197.60    | 339.91 ± 194.93 | 0.961
Signal: Tooth 33       | 448.33 ± 232.01 | 452.84 ± 249.10 | 0.906
Signal: Mental foramen | 456.15 ± 235.78 | 454.46 ± 238.97 | 0.964
Signal: Mean ROI 1–3   | 415.30 ± 178.11 | 415.74 ± 181.75 | 0.988
Noise                  | 45.83 ± 25.89   | 35.61 ± 24.28   | 0.011 *
CNR: Tooth 15          | 5.62 ± 5.19     | 8.28 ± 8.25     | 0.016 *
CNR: Tooth 33          | 8.58 ± 5.45     | 12.42 ± 8.76    | 0.001 *
CNR: Mental foramen    | 8.63 ± 5.81     | 12.29 ± 9.02    | 0.003 *
CNR: Mean ROI 1–3      | 7.64 ± 4.33     | 11.12 ± 9.29    | <0.001 *
The signal and CNR are given as the means ± standard deviations. DLM—deep learning model reconstruction; ROI—region of interest; CNR—contrast-to-noise ratio. *—statistically significant difference.
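The CNR values in Table 1 follow the conventional definition: the absolute difference between the ROI signal and a reference-region signal, divided by the image noise. A minimal sketch of that calculation is below; the helper name `cnr` and the sample inputs are illustrative only and are not taken from the study's raw data, and the study's exact reference region is not specified in this excerpt:

```python
def cnr(mean_roi: float, mean_ref: float, noise_sd: float) -> float:
    """Contrast-to-noise ratio: absolute signal difference over noise SD."""
    return abs(mean_roi - mean_ref) / noise_sd

# Illustrative call with round numbers (not the study's measurements):
example = cnr(mean_roi=341.0, mean_ref=96.3, noise_sd=45.8)
print(round(example, 2))
```

Because the DLM reconstruction leaves the mean signal essentially unchanged (Table 1, signal rows) while lowering the noise SD, the CNR rises even though image contrast itself is untouched.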
Table 2. Results of the subjective image quality assessment.
Region          | Native: R1 | Native: R2 | Native: R3 | DLM: R1 | DLM: R2 | DLM: R3 | p
Maxillary sinus | 3.25       | 3.26       | 3.36       | 3.34    | 3.25    | 3.55    | 0.350
Apex 15         | 3.23       | 3.25       | 3.48       | 3.18    | 3.25    | 3.40    | 0.674
Apex 33         | 3.49       | 3.46       | 3.52       | 3.55    | 3.45    | 3.66    | 0.529
DLM—deep learning model; R1–R3—Readers 1–3. p—Wilcoxon paired test.
Table 3. Inter-reader agreement for subjective image quality assessment.
Region                             | Reconstruction | ICC   | Interpretation
Alveolar recess of maxillary sinus | Native         | 0.536 | Moderate agreement
                                   | DLM            | 0.552 | Moderate agreement
Apex 15                            | Native         | 0.628 | Substantial agreement
                                   | DLM            | 0.628 | Substantial agreement
Apex 33                            | Native         | 0.541 | Moderate agreement
                                   | DLM            | 0.540 | Moderate agreement
DLM—deep learning model; ICC—intraclass correlation coefficient.
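The verbal labels in Table 3 correspond to the commonly used Landis–Koch-style agreement bands. A minimal sketch of that mapping follows; the exact cut-offs used by the authors are assumed, not stated in this excerpt:

```python
def interpret_icc(icc: float) -> str:
    """Map an ICC value to a verbal agreement category
    (Landis & Koch-style bands; cut-offs assumed, not from the study)."""
    if icc < 0.00:
        return "Poor"
    if icc <= 0.20:
        return "Slight"
    if icc <= 0.40:
        return "Fair"
    if icc <= 0.60:
        return "Moderate"
    if icc <= 0.80:
        return "Substantial"
    return "Almost perfect"

for value in (0.536, 0.552, 0.628, 0.540):
    print(value, interpret_icc(value))
```

Under these bands, both 0.628 values (Apex 15, native and DLM) fall in the substantial-agreement range, while the remaining ICCs indicate moderate agreement.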
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
