Systematic Review

Enhancing Image Quality in Dental-Maxillofacial CBCT: The Impact of Iterative Reconstruction and AI on Noise Reduction—A Systematic Review

by
Róża Wajer
1,*,
Pawel Dabrowski-Tumanski
2,
Adrian Wajer
3,
Natalia Kazimierczak
4,
Zbigniew Serafin
5 and
Wojciech Kazimierczak
1,4
1
Department of Radiology and Diagnostic Imaging, Collegium Medicum, Nicolaus Copernicus University in Torun, Jagiellońska 13-15, 85-067 Bydgoszcz, Poland
2
Faculty of Mathematics and Natural Sciences, School of Exact Sciences, Cardinal Stefan Wyszynski University, 01-815 Warsaw, Poland
3
Independent Researcher, 86-005 Zielonka, Poland
4
Kazimierczak Private Medical Practice, Dworcowa 13/u6a, 85-009 Bydgoszcz, Poland
5
Faculty of Medicine, Bydgoszcz University of Science and Technology, Kaliskiego 7, 85-796 Bydgoszcz, Poland
*
Author to whom correspondence should be addressed.
J. Clin. Med. 2025, 14(12), 4214; https://doi.org/10.3390/jcm14124214
Submission received: 22 April 2025 / Revised: 18 May 2025 / Accepted: 6 June 2025 / Published: 13 June 2025

Abstract

Background: This systematic review evaluates articles investigating the use of iterative reconstruction (IR) algorithms and artificial intelligence (AI)-based noise reduction techniques to improve the quality of oral CBCT images. Materials and Methods: A detailed search was performed across the PubMed, Scopus, Web of Science, ScienceDirect, and Embase databases. The inclusion criteria were prospective or retrospective studies using IR and AI for CBCT images, studies in which the image quality was statistically assessed, studies on humans, and studies published in peer-reviewed journals in English. Quality assessment was performed independently by two authors, and conflicts were resolved by a third expert. The Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 tool was used for bias assessment. Material: A total of eleven studies were included, analyzing a range of IR and AI methods designed to reduce noise and artifacts in CBCT images. Results: A statistically significant improvement in CBCT image quality parameters was achieved by the algorithms used in each of the articles we reviewed. The most commonly used image quality measures were the peak signal-to-noise ratio (PSNR) and the contrast-to-noise ratio (CNR). The most significant increase in PSNR was demonstrated by Ylisiurua et al. and Vestergaard et al., who reported an increase in this parameter of more than 30% for both deep learning (DL) techniques used. Another subcategory used to improve the quality of CBCT images is the reconstruction of synthetic computed tomography (sCT) images using AI. The use of sCT allowed an increase in PSNR ranging from 17% to 30%. For the more traditional methods, FBP and iterative reconstructions, the improvement in PSNR was smaller, ranging from 3% to 13%. Among the research papers evaluating the CNR parameter, an improvement of 17% to 29% was achieved.
Conclusions: The use of AI and IR can significantly improve the quality of oral CBCT images by reducing image noise.

1. Introduction

Cone Beam Computed Tomography (CBCT) has become an essential imaging modality in daily dental practice, providing detailed three-dimensional views of craniofacial structures and aiding in diagnosis, treatment planning, and the evaluation of endodontic morphology [1,2,3]. However, CBCT images, more than CT images [4], are often affected by noise, artifacts, and other quality issues that can compromise diagnostic accuracy and clinical decision-making [5]. The excess noise in CBCT results from the conical geometry of the X-ray beam and the lack of post-patient collimation. In addition, higher scattering degrades image quality compared to CT [6], while the commonly used low-mAs protocols degrade CBCT image quality further. Therefore, especially in dental CBCT, image reconstruction plays a key role in improving diagnostic image quality by reducing noise and artifacts while preserving spatial resolution. In 1970, Gordon et al. introduced the first iterative reconstruction (IR) technique for CT scans [7]. However, its usefulness in practical applications was limited by high computational demands, stemming from the iterative reprojection and back-projection steps required to calculate the fidelity between measured and estimated projections [8].
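To make the cost of this reprojection/back-projection loop concrete, a minimal sketch follows; the tiny system matrix and voxel values are invented for illustration and do not correspond to any study's actual geometry.

```python
import numpy as np

# Toy iterative reconstruction loop (SIRT/ART-style, illustrative only):
# each iteration reprojects the current estimate, compares it with the
# measured projections, and back-projects the residual. This fidelity
# loop, repeated many times over real 3D data, is what made early IR
# computationally expensive.
A = np.array([[1., 0., 0., 0.],   # hypothetical system matrix:
              [0., 1., 0., 0.],   # 6 projection rays through 4 voxels
              [0., 0., 1., 0.],
              [0., 0., 0., 1.],
              [1., 1., 0., 0.],
              [0., 0., 1., 1.]])
x_true = np.array([1.0, 0.5, 0.8, 0.2])   # made-up "patient" voxels
b = A @ x_true                            # noise-free measured projections

x = np.zeros(4)                           # initial estimate
lam = 0.2                                 # relaxation factor
for _ in range(200):
    residual = b - A @ x                  # fidelity: measured vs estimated
    x = x + lam * (A.T @ residual)        # back-project the residual
```

With consistent, noise-free projections the loop converges to the true voxel values; real IR additionally builds noise models and regularization into the same iteration.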
To overcome the efficiency problem, a simpler analytical technique called Filtered Back Projection (FBP) has been widely used for more than 40 years [9]. In this technique, only one filtering step and one back projection are used, which speeds up the whole reconstruction process. However, the performance of the FBP technique depends on the quality, distribution, and noise level of the projection data, as well as on the choice of filter. In the 2000s, advanced IR algorithms for CT images were successfully developed to remove noise and artifacts, resulting in high-quality images even with lower-dose protocols, in line with the ALARP (As Low As Reasonably Practicable) principle [10]; their effectiveness was confirmed by FDA approval in 2009 [11,12,13,14].
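The single filtering step of FBP can be sketched as follows; the projection profile is synthetic, and the implementation is a minimal illustration of the principle, not a clinical FBP pipeline.

```python
import numpy as np

# Sketch of the filtering step in FBP: a projection profile is
# ramp-filtered (multiplied by |f| in the frequency domain) before being
# back-projected. The ramp boosts high frequencies, which sharpens edges
# but also amplifies noise -- the root of FBP's sensitivity to noisy
# projection data and filter choice.
def ramp_filter(projection: np.ndarray) -> np.ndarray:
    spectrum = np.fft.fft(projection)
    ramp = np.abs(np.fft.fftfreq(projection.size))   # |f| ramp filter
    return np.real(np.fft.ifft(spectrum * ramp))

proj = np.zeros(64)
proj[24:40] = 1.0             # a flat toy "object" in the projection
filtered = ramp_filter(proj)
```

The ramp removes the DC component entirely and responds most strongly at the object's edges; in a full FBP implementation, this filtered profile would then be smeared back across the image for every projection angle.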
The adaptation of the FBP technique to cone-beam geometry has mainly been done using the conventional Feldkamp-Davis-Kress (FDK) algorithm, which is prone to image distortions and does not reduce image noise [15]. This results in blurred or grainy images, reducing the visibility of critical anatomical features [6,16]. Therefore, an efficient denoising technique is still needed to better separate the signal of real structures from noise in CBCT images; widespread clinical adoption of such techniques is still awaited [17]. Recent advances in artificial intelligence have shown great promise in improving image quality through several deep learning-based reconstruction methods, but most of this work concerns CT [18,19,20,21]. Compared to CT studies, the literature on intelligent image denoising methods in CBCT is scarce, although several studies have investigated the use of AI techniques for image quality assessment and enhancement in CBCT. Yet, no systematic review describing these methods has been published, and the suitability of the proposed methods from an AI perspective has never been analyzed. The primary CBCT image reconstruction methodologies are presented in Figure 1.
The aim of this systematic review was to critically evaluate the current literature on the role of iterative reconstruction and AI in noise reduction in dental-maxillofacial CBCT. In particular, we attempted to organize and systematically group iterative and AI methods for noise reduction, image quality improvement, and diagnostic utility in imaging. Moreover, for the first time, we examined the validity of the methods from an AI perspective and proposed further directions and a framework for evaluating AI performance in CBCT image enhancement.

2. Materials and Methods

2.1. Article Selection and Data Extraction Process Overview

This systematic review was conducted according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement [22] and the guidelines of the Cochrane Handbook for Systematic Reviews of Interventions [23]. It was registered within the PROSPERO service with ID 1048103.
The initial set of publications, retrieved from the search engines, was first deduplicated and then manually screened. The aim of the screening was to (1) exclude articles on different topics that accidentally matched the search criteria and (2) select eligible articles based on well-defined criteria, described below. The whole process is shown in Figure 2. To standardize the comparison, each article from the final set was subjected to manual data extraction (described in detail below). Finally, to ensure the objectivity of the results, a risk of bias analysis was performed (also described below).
The manual screening, data extraction, and risk of bias analysis were performed independently by two authors (RW and AW). Their agreement was measured by Cohen’s kappa coefficient [24], and disagreements were resolved by a third expert (WK).
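The agreement statistic can be sketched in a few lines; the include/exclude ratings below are invented for illustration and are not the reviewers' actual decisions.

```python
import numpy as np

# Cohen's kappa for two screeners' include/exclude decisions: observed
# agreement corrected for the agreement expected by chance.
def cohens_kappa(rater1, rater2) -> float:
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    p_obs = float(np.mean(r1 == r2))          # observed agreement
    labels = np.union1d(r1, r2)
    # chance agreement: both raters independently picking each label
    p_exp = sum(float(np.mean(r1 == c)) * float(np.mean(r2 == c))
                for c in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)

rater_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]   # 1 = include, 0 = exclude
rater_b = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
kappa = cohens_kappa(rater_a, rater_b)
```

Values close to 1, such as the 0.98 reported later in this review, indicate near-perfect agreement beyond chance.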
Based on PICO(S) [25], this systematic review concentrated on the following questions: (P) Population: which patient groups underwent oral and maxillofacial CBCT scans; (I) Intervention: whether the CBCT images were enhanced with or without AI; (C) Control: what reference was used to assess quantitative and qualitative image enhancement; (O) Outcome: what the image quality was after denoising.

2.2. Article Search

A series of preliminary searches of the PubMed, Scopus, Web of Science, ScienceDirect, and Embase databases were performed on 3 January 2025. The final search was performed on 8 January 2025 using all the above search engines. The search phrase was constructed from a combination of MeSH/non-MeSH terms joined by Boolean operators:
(“CBCT” OR “cone-beam computed tomography”) AND (“denoising” OR “denoise” OR “noise reduction”) AND (“oral cavity” OR “maxillofacial” OR “dental”)
As the syntax varies between search engines, the exact form of the search string was engine-dependent. The exact search string for each search engine is provided in the Supplementary Information.

2.3. Eligibility Criteria

Studies were included if they met the following well-defined Boolean criteria: (1) prospective or retrospective studies with AI for CBCT images, (2) studies in which the image quality was statistically assessed, (3) studies on humans, and (4) studies published in peer-reviewed journals in English. Exclusion criteria were as follows: (a) non-original studies, (b) studies not evaluating diagnostic dental or maxillofacial CBCT, (c) studies evaluating the use of CBCT in radiotherapy and interventional radiology, (d) lack of ethics committee approval, and (e) conference papers, literature reviews, case reports, and book chapters.

2.4. Data Extraction

A standardized data extraction form was used to extract data on study characteristics, including author, year, country, sample size, AI method used (deep learning model), dataset partitioning (train/test), quantitative and qualitative evaluation, and anatomical region.

2.5. Risk of Bias

The quality of the included studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 tool [26]. The QUADAS-2 tool encompasses four domains: patient selection, reference standard, flow and timing, and index test. Each domain was assessed for risk of bias, with three domains also assessed for applicability concerns. The assessment of bias is facilitated by signaling questions. The QUADAS-2 methodology was implemented in four sequential steps: first, the review question was formulated; second, the tool was tailored to provide review-specific guidelines; third, a primary study flowchart was created; and finally, bias and applicability were assessed.

3. Results

3.1. Search Results

A total of 209 articles were identified as potentially relevant to the subject. Following the removal of 35 duplicates, 174 titles and abstracts were subjected to a rigorous evaluation process. Of these, 138 papers were excluded as they did not meet the inclusion criteria or were not related to the topic of this review. The remaining 36 papers were retrieved and analyzed to perform this review.
The process of searching for relevant literature is illustrated by the PRISMA flowchart (see Figure 2). At this stage, both reviewers demonstrated a high level of agreement, achieving a Cohen’s kappa of 0.98. Discrepancies were resolved by the third reviewer (WK) in a limited number of cases. Thirty-six articles were then screened in full text, of which 25 were excluded for the following reasons: one was a non-original research study, two lacked information on ethical board approval, five were conference papers, literature reviews, case reports, or book chapters; ten studies did not evaluate diagnostic dental or maxillofacial CBCT, and seven studies did not involve human subjects (Supplementary Materials Table S1). Consequently, 11 articles were deemed eligible for inclusion in the review. The data obtained from these studies are presented in Table 1.
The articles and their primary characteristics are summarized in Table 1 and Table 2. Given the heterogeneity in objectives and methodologies across groups, each group is described in a separate section. The majority of studies were conducted in China and Poland (n = 3), followed by Korea and the USA (n = 2), with additional studies from Italy, Denmark, and Finland (n = 1).

3.2. Risk of Bias

The risk of bias assessment is summarized in Table 3. In the domain of patient selection, most of the included studies exhibit a low risk of bias, primarily due to the implementation of randomization. The presence of clearly defined inclusion criteria, along with accurate patient data, led to a low risk of bias. Conversely, an absence of complete patient data or clearly defined inclusion criteria resulted in an unclear risk of bias. It has to be noted, however, that although not mentioned explicitly, most probably in all cases the randomization of patients was done retrospectively. The risk was considered high if the study only stated that a certain number of CBCT examinations were included, without specifying their characteristics.
The risk of index test bias was considered low if both intra- and inter-rater agreement were examined. In instances where pertinent information regarding one of these tests was absent, the risk was categorized as unclear. Consequently, in instances where no error study was conducted, the risk was categorized as high.
The risk of bias due to the reference standard was deemed low, as each of the trials had a well-described reference standard, which was usually the FBP projection or the reference CT images. The risk of bias due to flow and timing was considered low in all studies, as the time interval between CBCT and the reference standard was reported. The applicability concerns regarding patient selection remain unaltered due to the nature of the study material.

3.3. Study Objectives

The set of retrieved articles naturally divides into three groups, representing three different objectives. The first group comprises two articles seeking to address the denoising problem with iterative refinement or specific filtering algorithms [27,35]. The next group consists of three articles in which the authors evaluate various aspects of vendor-agnostic, AI-based software for image denoising [28,29,33]; in particular, they measure the capability of AI methods to remove metal artifacts or to sharpen images in general or in specific regions (temporomandibular joints, root canals). The largest group, consisting of six articles, aims at building custom AI models that transform CBCT images into corresponding CT images (or obtain them from CT images) or reconstruct high-quality CBCT images from low-dose samples [15,30,31,32,34,36]. The objectives of each article are summarized in Table 2 and described in more detail in the following sections.

3.4. Metrics and Evaluation

Different objectives require different evaluation metrics. The metrics used in each article are presented in Table 1. In image denoising, one may be interested in how well the key features of the original image are preserved. To quantify the similarity between the original and denoised images, the most commonly used metrics are the Peak Signal-to-Noise Ratio (PSNR, the most common), Feature Similarity (FSIM), Structural Similarity Index Measure (SSIM), and Correlation Coefficient (CORR); for all of these, the larger the value, the better.
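As an illustration, the PSNR for 8-bit images can be computed as follows; this is a minimal sketch with synthetic images, not the exact formulation of any reviewed study.

```python
import numpy as np

# PSNR, the most common similarity metric in the reviewed studies,
# for 8-bit images (peak value 255). Higher values mean the test image
# is closer to the reference.
def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
clean = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = clean + rng.normal(0.0, 10.0, size=(64, 64))  # additive noise, sigma = 10
```

A perfect match gives an infinite PSNR, which is why the metric is only meaningful for imperfect reconstructions; note also that, as discussed below, a higher PSNR does not guarantee better perceived quality.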
On the other hand, one may be interested in how “noisy” the image really is. The level of noise is commonly measured by the Contrast-to-Noise Ratio (CNR; the larger, the better), which requires the assignment of a region of interest and of the surrounding tissue (background). If artifacts are present, one may calculate the intensity of the artifact signal, called the Artifact Index (AIx; the lower, the better). Alternatively, one can calculate the Differentiation between Voxel Values (ΔVV; the lower, the better): the absolute difference in voxel values between a region with the most pronounced artifact and a corresponding artifact-free control region.
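A minimal CNR sketch follows; the voxel values are synthetic and purely illustrative, not data from any of the reviewed studies.

```python
import numpy as np

# CNR computed from a region of interest (ROI) and its background:
# the contrast between the two regions divided by the background noise.
def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    return abs(float(roi.mean()) - float(background.mean())) / float(background.std())

rng = np.random.default_rng(2)
background = rng.normal(50, 5, size=1000)           # surrounding tissue voxels
roi = rng.normal(80, 5, size=1000)                  # structure of interest
denoised_background = rng.normal(50, 2, size=1000)  # same means, less noise
denoised_roi = rng.normal(80, 2, size=1000)
```

Denoising leaves the mean voxel values of both regions unchanged but lowers the background standard deviation, which is exactly why CNR rises after successful noise reduction.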
In deep learning model training, one needs a “ground truth” object against which the model output is compared. As the outputs are usually images, one can measure the pixel-by-pixel difference between the obtained and required intensities, calculating the Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Square Error (RMSE), or Normalized Root Mean Square Error (NRMSE). In each case, lower values are better, as they denote closer similarity to the desired outcome of the model.
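These four error metrics can be sketched on a tiny array; note that the NRMSE normalization varies between papers, and range normalization is assumed here.

```python
import numpy as np

# Pixel-wise error metrics between a model output and the ground truth.
# Lower is better for all four.
def error_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    diff = pred - truth
    mae = float(np.mean(np.abs(diff)))               # Mean Absolute Error
    mse = float(np.mean(diff ** 2))                  # Mean Squared Error
    rmse = float(np.sqrt(mse))                       # Root Mean Square Error
    nrmse = rmse / float(truth.max() - truth.min())  # Normalized RMSE (range-normalized)
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "NRMSE": nrmse}

truth = np.array([0.0, 1.0, 2.0, 3.0])
pred = np.array([0.5, 1.0, 1.5, 3.0])
metrics = error_metrics(pred, truth)
```

Because MSE squares the differences, it penalizes large outliers more heavily than MAE, which is one reason studies report several of these metrics side by side.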
It has to be noted that better values of some metrics, like PSNR, do not automatically mean better perceived visual quality, especially in images with structural or perceptual differences. Therefore, an independent assessment by an expert in the field is also highly desirable, especially in the denoising process, where the expected outcome is not known. Indeed, in most studies at least two experts were involved (Table 1).
In what follows, there is a description of the methods used and results obtained in each group of articles. The key values of metrics in each work, along with the reference methods and image enhancement, are presented in Table 4.

3.5. Classical Methods in Image Denoising

A review of the literature reveals that, as of yet, there are only two articles directly addressing the problem of CBCT image denoising; however, their objectives and methodologies differ. The first article aims to aid dental prosthetics by denoising CBCT images with the introduced Sampling Kantorovich (SK) algorithm [27]. The SK algorithm is a mathematically precise filtering operation that reconstructs the image with the noise smoothed out, but without distorting the main subject or adding artifacts. The CBCT images denoised with the SK algorithm were compared to other standard denoising methods (bilinear and bicubic B-spline), showing that, e.g., the edges of the dental arch, paramount in prosthetics, were more visible.
The other article describes the Iterative Noise Reduction (INR) introduced using Fuzzy-Entropic-based algorithms [35]. The aim of the study was to assess the root canal therapy outcomes, where the denoised CBCT images were compared to Digital Periapical Film (DPF) measurements. As it turned out, the denoised CBCT images were superior to DPF, showing statistically significant differences in assessing root canal filling length, root canal closeness, and root canal filling quality.
The denoising was also quantified objectively by calculating the PSNR (both articles) and CORR (Zhang et al. [35]) metrics. However, the definitions of PSNR used in the two methods differ by a constant term of 10·log(width·height), which takes into account the image width and height and is not mentioned in the source article [27]. Assuming both dimensions were on the order of 10³ pixels, the PSNR values in Zhang et al. are shifted by around 90 dB. Yet, it seems that even after subtracting this correction, the PSNR values are higher for INR than for the SK algorithm (see Table 5). The efficiency of the algorithms was also compared to classical methods; however, the reference method differed between the papers. In particular, the SK algorithm has an around 12% better PSNR than the B-spline method, while INR is around 3% better in terms of PSNR than the method based on the Penalized Weighted Least Squares algorithm (PWLS), which is only a part of the full INR method (Table 4).

3.6. Evaluation of AI-Based Denoising Models

In a series of three papers, the authors performed denoising of CBCT images using the vendor-agnostic model ClariCT.AI from ClariPi (Seoul, Republic of Korea) [28,29,33]. The ClariCT.AI model is founded on a Convolutional Neural Network (CNN) algorithm that reduces noise, and it incorporates Digital Imaging and Communications in Medicine (DICOM)-based sinogram blending and crossover inverse ray (IR) [37]. The authors conducted a comprehensive study, assessing the general image enhancement measured by CNR (evaluated on 3 regions of interest) [29], the image quality in CBCT images specifically focused on the temporomandibular joints (TMJs) [28], and the reduction in metal artifacts [33].
In each case, the utilization of ClariCT.AI led to a reduction in noise, resulting in an enhancement of CNR (see Table 4). Yet, the subjective analysis was more nuanced. In particular, in both studies by Kazimierczak et al. [28,29], although the AI-reconstructed images were usually preferred by the experts, there were no statistically significant differences in subjective image quality assessments. Conversely, the study by Wajer et al. [33] demonstrated that subjective assessments exhibited a preference for DLM images, with higher ratings allocated to overall image quality, although, apart from CNR, other metrics did not show significant image enhancement. In particular, the ΔVV was similar after denoising, and the AIx values were better but still far from artifact-free, as the metal artifact itself remained intense. Yet, the image was cleaner and less distracting, with the halo around the artifact removed. It seems that although AI objectively increases CNR and reduces noise, this does not automatically mean the image is better for diagnosis.

3.7. Transforming Images Between Techniques

The largest group contains articles introducing novel deep learning models that enhance image quality. The models, however, differ in their main goal, architecture, solved task, training regime, and loss function. The main details of each model are summarized in Table 5.
Regarding the aim, half of the models were designed to lower the radiation exposure. In particular, Zhao et al. [36] introduce a model called VVBPNet (View-by-view Backprojection Network), whose aim is to reconstruct a high-quality CBCT image from sparse data. On the other hand, Vestergaard et al. [32] introduce a model designed for the downstream task of improving proton dose calculation, while the model of Ryu S. et al. [31] was introduced to enable automatic airway segmentation for Computational Fluid Dynamics (CFD) analysis. In only one case was the model’s task pure image denoising [15]. In this case, realistic noise was added to high-quality CBCT images, and the model’s objective was to remove that noise and recreate the CBCT images obtained with the penalized least squares method with total variation regularization (PLS-TV).
Three other models aimed at creating synthetic CT (sCT) images from CBCT scans [31,32,34]. In such a setup, accurately matching the images is always a problem, as they are usually shifted due to differences in patient positioning and patient movement. This was solved either by aligning specific skull landmarks between the CT and CBCT scans [31] or by deformable registration of the images [32,34]. The task of COMPUNet (Contextual Loss-Optimized Multi-Planar 2.5D U-Net) by Ryu K. et al. [30] was to recreate Multidetector Computed Tomography (MDCT) images from standard CBCT images acquired at the same time, and VVBPNet by Zhao et al. [36] aimed at the reconstruction of high-quality images from sparse data.
The model architectures also varied. In most cases the model was a variation on the UNet architecture [15,30,31,34,36], or it was trained in the Generative Adversarial Network (GAN) fashion [31,32,34]. UNet (named after the U-shape of its pictorial representation) was introduced specifically for biomedical image segmentation tasks. It consists of encoder and decoder paths which, working together, transform one image into another. In GAN models, a generator and a discriminator are trained jointly: the generator aims to create a better image, while the quality of the image is judged by the discriminator. An interesting alternative is the CycleGAN network used by Vestergaard et al. [32], where the CBCT images are translated to CT and back to CBCT, and the consistency between the original and recreated CBCT images is checked. Two other architectures proposed by Vestergaard et al. were the Contrastive Unpaired Translation (CUT) network, which maximizes the mutual information between corresponding patches or volumes in unpaired CBCT and CT images, and a mixture of the two (CycleCUT). A more complex architecture was proposed by Zhao et al. [36] in the VVBPNet model, where the input data are view-by-view backprojection tensors, which are intermediate results of the FDK algorithm. These tensors are fed into a network consisting of two UNet-based paths, one for learning content and one for noise. Other architectures used are the conditional GAN (cGAN), in which, unlike in a standard GAN, the discriminator also has access to the input CBCT image, and the Unsupervised Image-to-Image Translation Network (UNIT), which is likewise trained in the GAN manner.
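The cycle-consistency idea behind CycleGAN can be illustrated with stand-in generators; the invertible linear maps below are toy placeholders for the actual networks, chosen only to make the round trip exact.

```python
import numpy as np

# Cycle-consistency check as used in CycleGAN-style CBCT<->CT
# translation: translate CBCT -> CT -> CBCT and penalize any deviation
# of the round trip from the original input.
def g_cbct_to_ct(x: np.ndarray) -> np.ndarray:
    return 2.0 * x + 10.0          # hypothetical generator A -> B

def g_ct_to_cbct(y: np.ndarray) -> np.ndarray:
    return (y - 10.0) / 2.0        # hypothetical generator B -> A

def cycle_loss(x: np.ndarray) -> float:
    # L1 distance between the input and its round-trip translation
    return float(np.mean(np.abs(g_ct_to_cbct(g_cbct_to_ct(x)) - x)))

cbct = np.linspace(0.0, 100.0, 11)   # a toy 1D "image"
```

Perfectly inverse generators yield zero cycle loss; training pushes the real generators toward this consistency, which is what allows CycleGAN to learn from unpaired CBCT and CT images.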
In the training of deep learning models, what drives the model to the desired output is the definition of the loss function. In most cases it is based on the Mean Absolute Error (MAE) between the intensities of the pixels in the ground truth and output images. In some cases, however, the loss was more complex. In particular, Ryu K. et al. [30], apart from MAE, also tested the VGG loss, which measures the similarity between internal representations of the images, and used a combination of both in the final model. A similar approach was also used by Ryu S. et al. [31]. On the other hand, the loss in a GAN network must combine the effects of the generator and the discriminator. To this end, Vestergaard et al. [32] used a specialized loss based on cross-entropy.
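A combined loss of this kind can be sketched as follows; the adversarial weighting and the fixed discriminator scores are illustrative assumptions, not values from any of the cited papers.

```python
import numpy as np

# Sketch of a combined training loss: pixel-wise MAE plus a GAN-style
# adversarial term derived from a discriminator score via binary
# cross-entropy (the generator wants its output scored as real, i.e. 1).
def combined_loss(pred, truth, disc_score_on_pred, weight_adv=0.01):
    mae = float(np.mean(np.abs(pred - truth)))
    adv = -float(np.log(np.clip(disc_score_on_pred, 1e-7, 1.0)))
    return mae + weight_adv * adv

truth = np.ones(16)
good = combined_loss(np.ones(16), truth, disc_score_on_pred=0.9)
bad = combined_loss(np.full(16, 0.5), truth, disc_score_on_pred=0.2)
```

The weight balances pixel fidelity against realism as judged by the discriminator; in actual GAN training both terms are backpropagated through the generator while the discriminator is trained on the opposite objective.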
Apart from the loss function, the size of the datasets and the diversity of the images are important. The sizes of the training datasets were very small, exceeding 100 cases in only one instance (Table 5). Additionally, in certain instances, the network’s performance was not evaluated on a held-out dataset. Consequently, despite the potentially favorable metrics, the actual validity of the networks and their potential to overcome common CBCT problems may be constrained. Surprisingly, in only two articles was the training dataset augmented by adding noise [30,31], and in one case the model was pretrained on a larger set of images [30]. As none of the articles measured the similarity between images, the existence of data leaks was not quantified. Moreover, this cannot be tested post factum, as there is no information on whether any model or dataset has been placed in a public repository. Therefore, the models, along with their metrics, cannot be directly compared, as some of the problems solved may be easier due to less diverse test sets. This may also explain the differing results of subjective evaluation by experts. In some cases, the verdict of the experts was in favor of the Deep Learning model [30,32], but in others the generated images were blurred and inferior to standard methods [15,34].

4. Discussion

4.1. Comparison of Traditional and AI-Based Tools

The main findings of the review are shown in Table 6. The present systematic review demonstrated that a statistically significant enhancement in CBCT image quality parameters was accomplished by the algorithms employed in each of the articles reviewed. However, a direct comparison of the methods seems impossible due to the different reference methods and objectives. In particular, the widely used PSNR metric depends heavily on the input images. Nevertheless, it seems that the best results in image denoising were obtained by Ylisiurua et al., who reported a PSNR on the level of 70 dB, a relative increase of over 50% compared to the standard algorithm. Still, one has to bear in mind that in this study there was no proper held-out test group in the model training. Yet another notable result is that of Costarelli et al. [27], who reported a PSNR of around 50 dB with the use of the Sampling Kantorovich method. This is higher than in most Deep Learning models, which obtain PSNR values of around 30 dB. A comparison of these results with the current knowledge on CT examinations reveals a similarity in outcomes, with AI demonstrating a significant capacity to enhance image quality through denoising [6,38,39,40].
The literature on denoising methods in CBCT is limited, and a significant share of the articles we reviewed and excluded were phantom studies. Most of the proposed methods have not been widely implemented for clinical use. The selected articles describe three different areas in which CBCT denoising is analyzed: classical methods, where the denoising algorithm is well understood and what can be enhanced is the fidelity of the transformed image; the analysis of the potency of existing Deep Learning models in denoising or removing artifacts; and the creation of new models capable of transforming CBCT images into different modalities (usually CT). Conventional analytical reconstruction algorithms, such as FBP, are still mainly used in dental practice [41], which demonstrates their high utility but also the need for further research into AI denoising tools. IR algorithms, despite their good performance in quality enhancement parameters, often prove to be diagnostically impractical, as 3D raw CBCT data impose increased memory requirements and impractical computation times in a clinical setting [42,43]. In addition, literature data from CT studies showed that MBIR models reduce noise more than FBP [44,45,46], but the images produced by these reconstructions are described as plastic and artificial [46], which further hinders the detection of small tissue differences, especially at low contrast [47].
Deep Learning models have also been shown to improve overall image quality significantly, and better than MBIR or FBP, in many low-dose CT studies [48,49,50]. From this overview, it can be concluded that Deep Learning models in CBCT provide similar or even better noise reduction and overall image quality than FBP or IR. Since Deep Learning-based reconstruction or noise reduction is not an iterative process, the computation time can be reduced compared to IR, which might be the most important improvement.
Despite their benefits, the literature on AI denoising techniques faces several limitations. One of the main challenges is the lack of standardized parameters for assessing image quality. Different studies use different metrics, making it difficult to make comprehensive comparisons and determine the most effective method. Many studies have been conducted on small cohorts of patients, which limits the generalizability of the results. Moreover, in some cases the experts judged the AI-denoised image as inferior to standard methods in some respects. Therefore, although the quantitative metrics may be promising, Deep Learning denoising might lose some important features of the analyzed images, which may for now make it ineffective in everyday practice.
The effectiveness of AI models largely depends on the availability of high-quality training data, which is not always available. Dose reduction in CBCT, as in CT, can produce images of very poor quality, which can lead to DLR artifacts called hallucinations (when the network incorrectly identifies noise as a missing object and replaces it with a non-existent structure) or inverted hallucinations (when the network incorrectly removes an existing structure) [51,52]. Although such cases were not directly reported in the analyzed studies, some Deep Learning-specific problems were mentioned. For example, Ryu K. et al. [30] mention erroneous background masking and streaking artifacts. Therefore, it is important to explore this in more depth before DLR is widely used in clinical practice. Moreover, it is worth noting that in some cases the results differ between anatomical regions. For example, the morphology of the nasal cavity was found to be challenging to capture accurately, and some blurring effects were spotted in the hypopharynx and oropharyngeal regions [31]. On the other hand, in some cases the model enhanced visibility, especially of the sinus floor and the TMJ complex [30]. Both effects can, however, be results of the training set, in which those regions could be represented with different accuracy.
With the increasing availability of photon-counting CT (PCCT) scanners, the acquired data will be of higher quality, which is likely to significantly improve image quality, and studies on the use of PCCT in dentistry can be expected [53,54]. Sawall et al. [55] showed that, compared to conventional dental digital volumetric tomography systems, photon-counting computed tomography provides higher image quality even at lower dose levels. In their study, the dose-normalized contrast-to-noise ratio (CNRD) was higher in all PCCT acquisitions: 37% higher for dentin-enamel contrast and 31% higher for dentin-bone contrast.
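The dose normalization behind CNRD can be made concrete. A common convention in CT physics, which we sketch here in our own notation (not taken from [55]), is to divide CNR by the square root of the dose, since quantum noise scales inversely with the square root of dose:

```python
import numpy as np

def cnrd(cnr: float, dose: float) -> float:
    """Dose-normalized contrast-to-noise ratio.

    Quantum noise scales as 1/sqrt(dose), so CNR grows as sqrt(dose);
    dividing by sqrt(dose) makes acquisitions at different dose levels
    comparable. The dose unit only needs to be consistent across scans.
    """
    return cnr / np.sqrt(dose)

# Equal CNR obtained at a quarter of the dose means a 2x higher CNRD.
print(cnrd(10.0, 1.0))   # 10.0
print(cnrd(10.0, 0.25))  # 20.0
```

Under this convention, a scanner that matches a competitor's CNR at a quarter of the dose is twice as dose-efficient.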

4.2. Limitations

A number of limitations of this systematic review must be acknowledged. Firstly, the selection criteria were stringent, including only English-language articles published in peer-reviewed journals and indexed in prominent databases. This approach may have introduced publication bias. Furthermore, given that applying artificial intelligence to CBCT examinations goes beyond standard practice yet is technically undemanding, the risk of publication bias is heightened, and some single-center experiences may never have been published. While the peer-review system does not guarantee optimal scientific value, its formal requirements should enhance the quality of publications. Secondly, the present study encompasses a broad spectrum of software post-processing methodologies; consequently, the classification of noise-reduction methods may be subject to bias. Thirdly, the absence of a uniform definition of significant noise reduction among the studies underscores the current paucity of high-quality evidence in this domain. Moreover, the definitions of the metrics used sometimes differed, and in some cases no definitions were given at all. Another major obstacle is the lack of publicly available data or models: in some cases, the details of the model architecture remain unknown, and without access to the data the results cannot be reproduced independently. Finally, no common similarity measure between images was reported across the studies; the capabilities of the tested deep learning models are therefore practically incomparable.

4.3. Perspectives

The current review shows that most of the techniques used are based on deep learning models. Analyzing current trends and results, there still appears to be room for improvement in this area. In particular, one might use novel transformer-based techniques, such as the Denoising Vision Transformer, or a foundation model pretrained on a large set of images, either to obtain a suitable internal representation or specifically for the denoising task. Moreover, given the sparse sets of images available, it seems imperative to apply dataset augmentation by adding artificial noise, random movements, rotations, and so on. It would also be valuable to create a centralized dataset of CBCT images that could be used for training such models. Ideally, the images should be split into training/validation/test sets based on reasonable criteria derived from image similarity. Such an approach would allow a fair comparison of models.
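The kind of augmentation suggested above can be sketched with numpy alone; the function name and parameter values here are our own illustrative choices, and a real pipeline would typically add rotations and elastic deformations as well:

```python
import numpy as np

def augment_slice(img: np.ndarray, rng: np.random.Generator,
                  noise_sigma: float = 0.02, max_shift: int = 8) -> np.ndarray:
    """Return a randomly perturbed copy of a 2D image slice.

    Additive Gaussian noise mimics a lower-dose acquisition, while random
    flips and integer translations mimic patient positioning differences.
    """
    out = img.astype(np.float64).copy()
    out += rng.normal(0.0, noise_sigma, size=out.shape)  # simulated noise
    if rng.random() < 0.5:                               # random horizontal flip
        out = out[:, ::-1]
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    out = np.roll(out, shift=(int(dy), int(dx)), axis=(0, 1))  # translation
    return out
```

Applying such transforms on the fly during training effectively multiplies a small CBCT dataset without any additional patient dose.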
The comparison should be performed with a standardized set of techniques. The absolute minimum in image denoising is one metric quantifying the noise level and one quantifying the similarity of important features to the reference image. The latter can be measured using SSIM. The former is usually measured with PSNR; however, this metric also depends on the reference image, so it does not quantify the noise level directly. Metrics such as the contrast-to-noise ratio (CNR) would likely quantify the noise level better. Also, to avoid misunderstandings and provide a fair comparison, the metrics should be calculated according to explicit equations included in the article. For new deep learning models, it would be preferable for the code to be publicly available, ideally together with the datasets used; this would remove any ambiguity about the model architecture.
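To make the distinction between the two kinds of metric concrete, a minimal numpy-only sketch follows; the function names and the particular CNR variant are our choices, since definitions vary between the reviewed studies:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray,
         data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB. It requires a reference image:
    it measures deviation from the reference, not noise as such, and
    data_range must match the image encoding (e.g., 255 for 8-bit)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def cnr(image: np.ndarray, roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    """Contrast-to-noise ratio between two regions of interest (boolean
    masks). Needs no reference image, only ROI statistics. One common
    variant: |mean_a - mean_b| / sqrt(var_a + var_b)."""
    a = image[roi_a].astype(np.float64)
    b = image[roi_b].astype(np.float64)
    return float(abs(a.mean() - b.mean()) / np.sqrt(a.var() + b.var()))
```

Because PSNR depends on both the chosen reference and the assumed data range, two studies reporting "PSNR" can differ by tens of dB for comparable images, exactly the kind of shift noted for Zhang K. et al. [35] in Table 4.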

5. Conclusions

The development and clinical application of IR and DL-based noise reduction algorithms have enabled patients to undergo examination at significantly reduced radiation dose levels. Due to their capacity to mitigate noise and produce diagnostic-quality CBCT images at reduced doses, IR and DL algorithms are anticipated to become the prevailing techniques for reconstructing CBCT images at lower doses, displacing conventional FBP. In the pursuit of CBCT examinations at reduced doses, it is imperative that radiologists and dentists possess a comprehensive understanding of the strengths and limitations of IR and noise reduction algorithms. Moreover, they should be well-versed in the methodologies employed for the rigorous evaluation of these techniques.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm14124214/s1, Table S1: Studies excluded after full-text analysis [41,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79]. Table S2: Exact phrases used in the search engines.

Author Contributions

Conceptualization, R.W. and W.K.; methodology, R.W. and P.D.-T.; software, R.W.; validation, Z.S. and W.K.; formal analysis, R.W.; investigation, A.W., N.K. and R.W.; resources, W.K.; data curation, R.W., A.W. and W.K.; writing—original draft preparation, R.W.; writing—review and editing, R.W., P.D.-T., Z.S. and W.K.; visualization, R.W.; supervision, Z.S., W.K. and R.W.; project administration, W.K.; funding acquisition, W.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

CNN = convolutional neural network, CBCT = cone beam computed tomography, DLR = deep learning reconstruction, FBP = filtered back projection, FDA = U.S. Food and Drug Administration, HIR = hybrid iterative reconstruction, IR = iterative reconstruction, MAR = metal artifact reduction, AI = artificial intelligence.

References

  1. Gaêta-Araujo, H.; Leite, A.F.; Vasconcelos, K.d.F.; Jacobs, R. Two Decades of Research on CBCT Imaging in DMFR—An Appraisal of Scientific Evidence. Dentomaxillofac. Radiol. 2021, 50, 20200367. [Google Scholar] [CrossRef] [PubMed]
  2. Dawood, A.; Patel, S.; Brown, J. Cone Beam CT in Dental Practice. Br. Dent. J. 2009, 207, 23–28. [Google Scholar] [CrossRef]
  3. Puleio, F.; Lizio, A.S.; Coppini, V.; Lo Giudice, R.; Lo Giudice, G. CBCT-based assessment of vapor lock effects on endodontic disinfection. Appl. Sci. 2023, 13, 9542. [Google Scholar] [CrossRef]
  4. Pauwels, R.; Araki, K.; Siewerdsen, J.H.; Thongvigitmanee, S.S. Technical Aspects of Dental CBCT: State of the Art. Dentomaxillofacial Radiol. 2015, 44, 20140224. [Google Scholar] [CrossRef]
  5. Schulze, R.; Heil, U.; Groβ, D.; Bruellmann, D.D.; Dranischnikow, E.; Schwanecke, U.; Schoemer, E. Artefacts in CBCT: A Review. Dentomaxillofacial Radiol. 2011, 40, 265–273. [Google Scholar] [CrossRef]
  6. Endo, M.; Tsunoo, T.; Nakamori, N.; Yoshida, K. Effect of Scatter Radiation on Image Noise in Cone Beam CT. Med. Phys. 2001, 28, 469–474. [Google Scholar] [CrossRef] [PubMed]
  7. Gordon, R.; Bender, R.; Herman, G.T. Algebraic Reconstruction Techniques (ART) for Three-Dimensional Electron Microscopy and X-Ray Photography. J. Theor. Biol. 1970, 29, 471–481. [Google Scholar] [CrossRef] [PubMed]
  8. Lee, H.; Park, J.; Choi, Y.; Park, K.R.; Min, B.J.; Lee, I.J. Low-Dose CBCT Reconstruction via Joint Non-Local Total Variation Denoising and Cubic B-Spline Interpolation. Sci. Rep. 2021, 11, 3681. [Google Scholar] [CrossRef]
  9. Feldkamp, L.A.; Davis, L.C.; Kress, J.W. Practical Cone-Beam Algorithm. J. Opt. Soc. Am. A 1984, 1, 612–619. [Google Scholar] [CrossRef]
  10. Melchers, R.E. On the ALARP Approach to Risk Management. Reliab. Eng. Syst. Saf. 2001, 71, 201–208. [Google Scholar] [CrossRef]
  11. Willemink, M.J.; de Jong, P.A.; Leiner, T.; de Heer, L.M.; Nievelstein, R.A.J.; Budde, R.P.J.; Schilham, A.M.R. Iterative Reconstruction Techniques for Computed Tomography Part 1: Technical Principles. Eur. Radiol. 2013, 23, 1623–1631. [Google Scholar] [CrossRef] [PubMed]
  12. Noël, P.B.; Fingerle, A.A.; Renger, B.; Münzel, D.; Rummeny, E.J.; Dobritz, M. Initial Performance Characterization of a Clinical Noise–Suppressing Reconstruction Algorithm for MDCT. Am. J. Roentgenol. 2011, 197, 1404–1409. [Google Scholar] [CrossRef] [PubMed]
  13. Scheffel, H.; Stolzmann, P.; Schlett, C.L.; Engel, L.-C.; Major, G.P.; Károlyi, M.; Do, S.; Maurovich-Horvat, P.; Hoffmann, U. Coronary Artery Plaques: Cardiac CT with Model-Based and Adaptive-Statistical Iterative Reconstruction Technique. Eur. J. Radiol. 2012, 81, e363–e369. [Google Scholar] [CrossRef] [PubMed]
  14. Singh, S.; Kalra, M.K.; Gilman, M.D.; Hsieh, J.; Pien, H.H.; Digumarthy, S.R.; Shepard, J.-A.O. Adaptive Statistical Iterative Reconstruction Technique for Radiation Dose Reduction in Chest CT: A Pilot Study. Radiology 2011, 259, 565–573. [Google Scholar] [CrossRef]
  15. Ylisiurua, S.; Sipola, A.; Nieminen, M.T.; Brix, M.A.K. Deep Learning Enables Time-Efficient Soft Tissue Enhancement in CBCT: Proof-of-Concept Study for Dentomaxillofacial Applications. Phys. Medica 2024, 117, 103184. [Google Scholar] [CrossRef]
  16. Zhao, Z.; Gang, G.J.; Siewerdsen, J.H. Noise, Sampling, and the Number of Projections in Cone-Beam CT with a Flat-Panel Detector. Med. Phys. 2014, 41, 061909. [Google Scholar] [CrossRef]
  17. Kaasalainen, T.; Ekholm, M.; Siiskonen, T.; Kortesniemi, M. Dental Cone Beam CT: An Updated Review. Phys. Medica Eur. J. Med. Phys. 2021, 88, 193–217. [Google Scholar] [CrossRef]
  18. Wolterink, J.M.; Leiner, T.; Viergever, M.A.; Išgum, I. Generative Adversarial Networks for Noise Reduction in Low-Dose CT. IEEE Trans. Med. Imaging 2017, 36, 2536–2545. [Google Scholar] [CrossRef] [PubMed]
  19. Chen, H.; Zhang, Y.; Kalra, M.K.; Lin, F.; Chen, Y.; Liao, P.; Zhou, J.; Wang, G. Low-Dose CT With a Residual Encoder-Decoder Convolutional Neural Network. IEEE Trans. Med. Imaging 2017, 36, 2524–2535. [Google Scholar] [CrossRef]
  20. Kopp, F.K.; Catalano, M.; Pfeiffer, D.; Rummeny, E.J.; Noël, P.B. Evaluation of a Machine Learning Based Model Observer for X-Ray CT. In Proc. SPIE, Houston, TX, USA, 7 March 2018; Volume 10577, p. 105770S. [Google Scholar]
  21. Wu, D.; Kim, K.; El Fakhri, G.; Li, Q. Iterative Low-Dose CT Reconstruction With Priors Trained by Artificial Neural Network. IEEE Trans. Med. Imaging 2017, 36, 2479–2486. [Google Scholar] [CrossRef]
  22. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. Syst. Rev. 2021, 10, 89. [Google Scholar] [CrossRef]
  23. Cumpston, M.; Li, T.; Page, M.J.; Chandler, J.; Welch, V.A.; Higgins, J.P.T.; Thomas, J. Updated Guidance for Trusted Systematic Reviews: A New Edition of the Cochrane Handbook for Systematic Reviews of Interventions. Cochrane Database Syst. Rev. 2019, 10, ED000142. [Google Scholar] [CrossRef] [PubMed]
  24. McHugh, M. Interrater Reliability: The Kappa Statistic. Biochem. Medica Časopis Hrvat. Društva Med. Biokem./HDMB 2012, 22, 276–282. [Google Scholar] [CrossRef]
  25. Amir-Behghadami, M.; Janati, A. Population, Intervention, Comparison, Outcomes and Study (PICOS) Design as a Framework to Formulate Eligibility Criteria in Systematic Reviews. Emerg. Med. J. 2020, 37, 387. [Google Scholar] [CrossRef]
  26. Whiting, P.F.; Rutjes, A.W.S.; Westwood, M.E.; Mallett, S.; Deeks, J.J.; Reitsma, J.B.; Leeflang, M.M.G.; Sterne, J.A.C.; Bossuyt, P.M.M. QUADAS-2: A Revised Tool for the Quality Assessment of Diagnostic Accuracy Studies. Ann. Intern. Med. 2011, 155, 529–536. [Google Scholar] [CrossRef] [PubMed]
  27. Costarelli, D.; Pozzilli, P.; Seracini, M.; Vinti, G. Enhancement of Cone-Beam Computed Tomography Dental-Maxillofacial Images by Sampling Kantorovich Algorithm. Symmetry 2021, 13, 1450. [Google Scholar] [CrossRef]
  28. Kazimierczak, W.; Kędziora, K.; Janiszewska-Olszowska, J.; Kazimierczak, N.; Serafin, Z. Noise-Optimized CBCT Imaging of Temporomandibular Joints—The Impact of AI on Image Quality. J. Clin. Med. 2024, 13, 1502. [Google Scholar] [CrossRef]
  29. Kazimierczak, W.; Wajer, R.; Komisarek, O.; Dyszkiewicz-Konwińska, M.; Wajer, A.; Kazimierczak, N.; Janiszewska-Olszowska, J.; Serafin, Z. Evaluation of a Vendor-Agnostic Deep Learning Model for Noise Reduction and Image Quality Improvement in Dental CBCT. Diagnostics 2024, 14, 2410. [Google Scholar] [CrossRef]
  30. Ryu, K.; Lee, C.; Han, Y.; Pang, S.; Kim, Y.H.; Choi, C.; Jang, I.; Han, S.S. Multi-Planar 2.5D U-Net for Image Quality Enhancement of Dental Cone-Beam CT. PLoS ONE 2023, 18, e0285608. [Google Scholar] [CrossRef]
  31. Ryu, S.; Kim, J.H.; Choi, Y.J.; Lee, J.S. Generating Synthetic CT Images from Unpaired Head and Neck CBCT Images and Validating the Importance of Detailed Nasal Cavity Acquisition through Simulations. Comput. Biol. Med. 2025, 185, 109568. [Google Scholar] [CrossRef]
  32. Vestergaard, C.D.; Elstrøm, U.V.; Muren, L.P.; Ren, J.; Nørrevang, O.; Jensen, K.; Taasti, V.T. Proton Dose Calculation on Cone-Beam Computed Tomography Using Unsupervised 3D Deep Learning Networks. Phys. Imaging Radiat. Oncol. 2024, 32, 100658. [Google Scholar] [CrossRef] [PubMed]
  33. Wajer, R.; Wajer, A.; Kazimierczak, N.; Wilamowska, J.; Serafin, Z. The Impact of AI on Metal Artifacts in CBCT Oral Cavity Imaging. Diagnostics 2024, 14, 1280. [Google Scholar] [CrossRef] [PubMed]
  34. Zhang, Y.; Ding, S.G.; Gong, X.C.; Yuan, X.X.; Lin, J.F.; Chen, Q.; Li, J.G. Generating Synthesized Computed Tomography from CBCT Using a Conditional Generative Adversarial Network for Head and Neck Cancer Patients. Technol. Cancer Res. Treat. 2022, 21, 15330338221085358. [Google Scholar] [CrossRef]
  35. Zhang, K.; Yang, W.; Pallikonda Rajasekaran, M. Iterative Noise Reduction Algorithm-Based Cone Beam Computed Tomography Image Analysis for Dental Pulp Disease in Root Canal Therapies. Sci. Program. 2022, 2022, 7322332. [Google Scholar] [CrossRef]
  36. Zhao, X.; Du, Y.; Peng, Y. VVBPNet: Deep Learning Model in View-by-View Backprojection (VVBP) Domain for Sparse-View CBCT Reconstruction. Biomed. Signal Process Control 2025, 102, 107195. [Google Scholar] [CrossRef]
  37. Hong, J.H.; Park, E.-A.; Lee, W.; Ahn, C.; Kim, J.-H. Incremental Image Noise Reduction in Coronary CT Angiography Using a Deep Learning-Based Technique with Iterative Reconstruction. Korean J. Radiol. 2020, 21, 1165–1177. [Google Scholar] [CrossRef]
  38. Sadia, R.T.; Chen, J.; Zhang, J. CT Image Denoising Methods for Image Quality Improvement and Radiation Dose Reduction. J. Appl. Clin. Med. Phys. 2024, 25, e14270. [Google Scholar] [CrossRef]
  39. Gorenstein, L.; Onn, A.; Green, M.; Mayer, A.; Segev, S.; Marom, E.M. A Novel Artificial Intelligence Based Denoising Method for Ultra-Low Dose CT Used for Lung Cancer Screening. Acad. Radiol. 2023, 30, 2588–2597. [Google Scholar] [CrossRef] [PubMed]
  40. Brendlin, A.S.; Schmid, U.; Plajer, D.; Chaika, M.; Mader, M.; Wrazidlo, R.; Männlin, S.; Spogis, J.; Estler, A.; Esser, M.; et al. AI Denoising Improves Image Quality and Radiological Workflows in Pediatric Ultra-Low-Dose Thorax Computed Tomography Scans. Tomography 2022, 8, 1678–1689. [Google Scholar] [CrossRef]
  41. Ramage, A.; Lopez Gutierrez, B.; Fischer, K.; Sekula, M.; Santaella, G.M.; Scarfe, W.; Brasil, D.M.; de Oliveira-Santos, C. Filtered Back Projection vs. Iterative Reconstruction for CBCT: Effects on Image Noise and Processing Time. Dentomaxillofacial Radiol. 2023, 52, 20230109. [Google Scholar] [CrossRef]
  42. Wang, J.; Li, T.; Xing, L. Iterative Image Reconstruction for CBCT Using Edge-Preserving Prior. Med. Phys. 2009, 36, 252–260. [Google Scholar] [CrossRef] [PubMed]
  43. Sun, T.; Sun, N.; Wang, J.; Tan, S. Iterative CBCT Reconstruction Using Hessian Penalty. Phys. Med. Biol. 2015, 60, 1965. [Google Scholar] [CrossRef] [PubMed]
  44. Singh, S.; Kalra, M.; Do, S.; Thibault, J.-B.; Pien, H.; O’Connor, O.; Blake, M. Comparison of Hybrid and Pure Iterative Reconstruction Techniques With Conventional Filtered Back Projection: Dose Reduction Potential in the Abdomen. J. Comput. Assist. Tomogr. 2012, 36, 347–353. [Google Scholar] [CrossRef] [PubMed]
  45. Morsbach, F.; Desbiolles, L.; Raupach, R.; Leschka, S.; Schmidt, B.; Alkadhi, H. Noise Texture Deviation: A Measure for Quantifying Artifacts in Computed Tomography Images With Iterative Reconstructions. Investig. Radiol. 2017, 52, 87–94. [Google Scholar] [CrossRef]
  46. Liu, L. Model-Based Iterative Reconstruction: A Promising Algorithm for Today’s Computed Tomography Imaging. J. Med. Imaging Radiat. Sci. 2014, 45, 131–136. [Google Scholar] [CrossRef]
  47. Mileto, A.; Zamora, D.; Alessio, A.; Pereira, C.; Liu, J.; Bhargava, P.; Carnell, J.; Cowan, S.; Dighe, M.; Gunn, M.; et al. CT Detectability of Small Low-Contrast Hypoattenuating Focal Lesions: Iterative Reconstructions versus Filtered Back Projection. Radiology 2018, 289, 180137. [Google Scholar] [CrossRef]
  48. Chen, H.; Li, Q.; Zhou, L.; Li, F. Deep Learning-Based Algorithms for Low-Dose CT Imaging: A Review. Eur. J. Radiol. 2024, 172, 111355. [Google Scholar] [CrossRef]
  49. Singh, R.; Digumarthy, S.R.; Muse, V.V.; Kambadakone, A.R.; Blake, M.A.; Tabari, A.; Hoi, Y.; Akino, N.; Angel, E.; Madan, R.; et al. Image Quality and Lesion Detection on Deep Learning Reconstruction and Iterative Reconstruction of Submillisievert Chest and Abdominal CT. Am. J. Roentgenol. 2020, 214, 566–573. [Google Scholar] [CrossRef]
  50. Hamabuchi, N.; Ohno, Y.; Kimata, H.; Ito, Y.; Fujii, K.; Akino, N.; Takenaka, D.; Yoshikawa, T.; Oshima, Y.; Matsuyama, T.; et al. Effectiveness of Deep Learning Reconstruction on Standard to Ultra-Low-Dose High-Definition Chest CT Images. Jpn. J. Radiol. 2023, 41, 1373–1388. [Google Scholar] [CrossRef]
  51. Hu, Y.; Cheng, M.; Wei, H.; Liang, Z. A Joint Learning Framework for Multisite CBCT-to-CT Translation Using a Hybrid CNN-Transformer Synthesizer and a Registration Network. Front. Oncol. 2024, 14, 1440944. [Google Scholar] [CrossRef]
  52. Bhadra, S.; Kelkar, V.A.; Brooks, F.J.; Anastasio, M.A. On Hallucinations in Tomographic Image Reconstruction. IEEE Trans. Med. Imaging 2021, 40, 3249–3260. [Google Scholar] [CrossRef] [PubMed]
  53. Ruetters, M.; Sen, S.; Gehrig, H.; Bruckner, T.; Kim, T.-S.; Lux, C.J.; Schlemmer, H.-P.; Heinze, S.; Maier, J.; Kachelrieß, M.; et al. Dental Imaging Using an Ultra-High Resolution Photon-Counting CT System. Sci. Rep. 2022, 12, 7125. [Google Scholar] [CrossRef] [PubMed]
  54. Zanon, C.; Pepe, A.; Cademartiri, F.; Bini, C.; Maffei, E.; Quaia, E.; Stellini, E.; Di Fiore, A. Potential Benefits of Photon-Counting CT in Dental Imaging: A Narrative Review. J. Clin. Med. 2024, 13, 2436. [Google Scholar] [CrossRef]
  55. Sawall, S.; Maier, J.; Sen, S.; Gehrig, H.; Kim, T.-S.; Schlemmer, H.-P.; Schönberg, S.O.; Kachelrieß, M.; Rütters, M. Dental Imaging in Clinical Photon-Counting CT at a Quarter of DVT Dose. J. Dent. 2024, 142, 104859. [Google Scholar] [CrossRef]
  56. Zhang, Y.; Li, L.; Wang, J.; Yang, X.; Zhou, H.; He, J.; Xie, Y.; Jiang, Y.; Sun, W.; Zhang, X.; et al. Texture-Preserving Diffusion Model for CBCT-to-CT Synthesis. Med. Image Anal. 2025, 99, 103362. [Google Scholar] [CrossRef]
  57. Al-Haddad, A.A.; Al-Haddad, L.A.; Al-Haddad, S.A.; Jaber, A.A.; Khan, Z.H.; Rehman, H.Z.U. Towards Dental Diagnostic Systems: Synergizing Wavelet Transform with Generative Adversarial Networks for Enhanced Image Data Fusion. Comput. Biol. Med. 2024, 182, 109241. [Google Scholar] [CrossRef] [PubMed]
  58. Bueno, M.R.; Estrela, C.; Azevedo, B.C.; Diogenes, A. Development of a New Cone—Beam Computed Tomography Software for Endodontic Diagnosis. Braz. Dent. J. 2018, 29, 517–529. [Google Scholar] [CrossRef]
  59. Bueno, M.R.; Estrela, C.; Azevedo, B.C.; Cintra Junqueira, J.L. Root Canal Shape of Human Permanent Teeth Determined by New Cone-Beam Computed Tomographic Software. J. Endod. 2020, 46, 1662–1674. [Google Scholar] [CrossRef]
  60. Dao-Ngoc, L.; Du, Y.-C. Generative Noise Reduction in Dental Cone-Beam CT by a Selective Anatomy Analytic Iteration Reconstruction Algorithm. Electronics 2019, 8, 1381. [Google Scholar] [CrossRef]
  61. Budaraju, D.; Narayanappa, C.K.; Hiremath, B.; Ravi, P.; Lakshminarayana, M.; Neelapu, B.; Jayaraman, S. Enhancement of Three-Dimensional Medical Images. In Computer-Aided Diagnosis (CAD) Tools and Applications for 3D Medical Imaging; Elsevier: Amsterdam, The Netherlands, 2024; Volume 136, ISBN 9780323988575. [Google Scholar]
  62. Friot--Giroux, L.; Peyrin, F.; Maxim, V. Iterative Tomographic Reconstruction with TV Prior for Low-Dose CBCT Dental Imaging. Phys. Med. Biol. 2022, 67, 205010. [Google Scholar] [CrossRef]
  63. Gan, J.; Yu, N.; Qian, G.; He, N. Concrete Learning Method for Segmentation and Denoising Using CBCT Image. In Proceedings of the 2023 4th International Conference on Control, Robotics and Intelligent System, Association for Computing Machinery, Guangzhou, China, 25–27 August 2023; pp. 41–46. [Google Scholar]
  64. Hegazy, M.; Cho, M.; Lee, S. Half-Scan Artifact Correction Using Generative Adversarial Network for Dental CT. Comput. Methods Programs Biomed. 2021, 132, 104313. [Google Scholar] [CrossRef] [PubMed]
  65. Jin, X.; Zhu, Y.; Wu, K.; Hu, D.; Gao, X. StarAN: A Star Attention Network Utilizing Inter-View and Intra-View Correlations for Sparse-View Cone-Beam Computed Tomography Reconstruction. Expert. Syst. Appl. 2024, 258, 125099. [Google Scholar] [CrossRef]
  66. Kim, K.; Lim, C.; Shin, J.; Chung, M.; Jung, Y.G. Enhanced Artificial Intelligence-Based Diagnosis Using CBCT with Internal Denoising: Clinical Validation for Discrimination of Fungal Ball, Sinusitis, and Normal Cases in the Maxillary Sinus. Comput. Methods Programs Biomed. 2023, 240, 107708. [Google Scholar] [CrossRef]
  67. Kroon, D.-J.; Slump, C.; Maal, T. Optimized Anisotropic Rotational Invariant Diffusion Scheme on Cone-Beam CT. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2010: 13th International Conference, Beijing, China, 20–24 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; Volume 13, ISBN 978-3-642-15710-3. [Google Scholar]
  68. Lei, C.; Mengfei, X.; Wang, S.; Liang, Y.; Yi, R.; Wen, Y.-H.; Liu, Y.-J. Automatic Tooth Arrangement with Joint Features of Point and Mesh Representations via Diffusion Probabilistic Models. Comput. Aided Geom. Des. 2024, 111, 102293. [Google Scholar] [CrossRef]
  69. Lim, Y.; Park, S.; Jeon, D.; Kim, W.; Lee, S.; Cho, H. Eliminating Metal Artifacts in Dental Computed Tomography Using an Elaborate Sinogram Normalization Interpolation Method with CNR-Based Metal Segmentation. J. Instrum. 2024, 19, C11003. [Google Scholar] [CrossRef]
  70. Mirzaei, S.; Tohidypour, H.R.; Nasiopoulos, P.; Mirabbasi, S. An Efficient Quality Enhancement Method for Low-Dose CBCT Imaging. WSEAS Trans. Biol. Biomed. 2024, 22, 76–81. [Google Scholar] [CrossRef]
  71. Oliveira, M.; Schaub, S.; Dagassan-Berndt, D.; Bieder, F.; Cattin, P.; Bornstein, M. Development and Evaluation of a Deep Learning Model to Reduce Exomass-Related Metal Artefacts in Cone-Beam Computed Tomography of the Jaws. Dentomaxillofac. Radiol. 2024, 54, 109–117. [Google Scholar] [CrossRef]
  72. Park, H.; Jeon, K.; Lee, S.-H.; Seo, J. Unpaired-Paired Learning for Shading Correction in Cone-Beam Computed Tomography. IEEE Access 2022, 10, 26140–26148. [Google Scholar] [CrossRef]
  73. Sagawa, M.; Miyoseta, Y.; Hayakawa, Y.; Honda, A. Comparison of Two—And Three-Dimensional Filtering Methods to Improve Image Quality in Multiplanar Reconstruction of Cone-Beam Computed Tomography. Oral Radiol. 2009, 25, 154–158. [Google Scholar] [CrossRef]
  74. Wang, W.; Jin, Z.; Chen, X. CDRMamba: A Framework for Automated Craniomaxillofacial Defect Reconstruction Using Mamba-Based Modeling. Biomed. Signal Process Control 2025, 103, 107376. [Google Scholar] [CrossRef]
  75. Wenzel, A.; Haiter-Neto, F.; Frydenberg, M.; Kirkevang, L.-L. Variable-Resolution Cone-Beam Computerized Tomography with Enhancement Filtration Compared with Intraoral Photostimulable Phosphor Radiography in Detection of Transverse Root Fractures in an in Vitro Model. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. Endodontol. 2009, 108, 939–945. [Google Scholar] [CrossRef] [PubMed]
  76. Widmann, G.; Al-Ekrish, A.A. Ultralow Dose MSCT Imaging in Dental Implantology. Open Dent. J. 2018, 12, 87–93. [Google Scholar] [CrossRef] [PubMed]
  77. Xue, F.; Zhang, R.; Dai, J.; Zhang, Y.; Luan, Q.-X. Clinical Application of Mixed Reality Holographic Imaging Technology in Scaling and Root Planing of Severe Periodontitis: A Proof of Concept. J. Dent. 2024, 149, 105284. [Google Scholar] [CrossRef]
  78. Yun, S.; Jeong, U.; Kwon, T.; Choi, D.; Lee, T.; Ye, S.-J.; Cho, G.; Cho, S. Penalty-Driven Enhanced Self-Supervised Learning (Noise2Void) for CBCT Denoising. In Proc. SPIE, San Diego, CA, USA, 7 April 2023; Volume 12463, p. 1246327. [Google Scholar]
  79. Zhang, Y.; Chen, Y.; Zhong, A.; Jia, X.; Wu, S.; Qi, H.; Zhou, L.; Xu, Y. Scatter Correction Based on Adaptive Photon Path-Based Monte Carlo Simulation Method in Multi-GPU Platform. Comput. Methods Programs Biomed. 2020, 194, 105487. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Primary CBCT reconstruction domains.
Figure 2. PRISMA 2020 flow diagram showing the flow of the studies analyzed, along with the numbers of excluded articles and exclusion reasons.
Table 1. Characteristics of the included studies.
Study | Year | Country | Study Type/No. of Patients | Reference Standard—Quantitative Evaluation | Reference Standard—Qualitative Evaluation | Anatomical Region
1. Costarelli et al. [27] | 2021 | Italy | Patients/2 | MSE, PSNR | NA | dental-maxillofacial region
2. Kazimierczak et al. [28] | 2024 | Poland | Patients/50 | CNR | A radiologist and an orthodontist | temporomandibular joints
3. Kazimierczak et al. [29] | 2024 | Poland | Patients/93 | CNR | A radiologist and two dentists | dental-maxillofacial region
4. Ryu K. et al. [30] | 2023 | South Korea, USA | Patients/30, Phantom/6 | MAE, NRMSE, SSIM | Two radiologists | head and neck
5. Ryu S. et al. [31] | 2025 | South Korea, USA | Patients/33 | MAE, PSNR, SSIM | Unspecified researchers | head and neck
6. Vestergaard et al. [32] | 2024 | Denmark | Patients/102 | PSNR, SSIM, MAE, ME | NA | head and neck
7. Wajer et al. [33] | 2024 | Poland | Patients/61 | CNR, ΔVV, AIx | A radiologist and a dentist | dental-maxillofacial region
8. Ylisiurua et al. [15] | 2024 | Finland | Patients/32 | SSIM, PSNR | One dentomaxillofacial radiologist | dental-maxillofacial region
9. Zhang Y. et al. [34] | 2022 | China | Patients/120 | MAE, RMSE, SSIM, PSNR | NA | head and neck
10. Zhang K. et al. [35] | 2022 | China | Patients/88 | PSNR, CORR | NA | affected teeth
11. Zhao et al. [36] | 2025 | China | Patients/223 | RMSE, PSNR, SSIM, FSIM | NA | head
(AIx—artifact index, CNR—contrast to noise ratio, CORR—correlation coefficient, FSIM—Feature Similarity Index, MAE—mean absolute error, NA—not assessed, NRMSE—normalized root-mean-square deviation, PSNR—peak signal-to-noise ratio, RMSE—root mean square error, SSIM—Structural Similarity Index, ΔVV—differentiation between voxel values).
Table 2. Characteristics of a separate group of the included studies.
Study | Year | Anatomical Region | Model Used
Group 1—Denoising by classical methods (iterative reconstructions, filtering algorithms)
Costarelli et al. [27] | 2021 | dental-maxillofacial region | Sampling Kantorovich (SK)
Zhang K. et al. [35] | 2022 | affected teeth | INR algorithm-based CBCT
Group 2—Evaluation of AI-based denoising model
Kazimierczak et al. [28] | 2024 | temporomandibular joints | ClariCT.AI (commercial)
Kazimierczak et al. [29] | 2024 | dental-maxillofacial region | ClariCT.AI (commercial)
Wajer et al. [33] | 2024 | dental-maxillofacial region | ClariCT.AI (commercial)
Group 3—Transforming images between techniques
Ryu S. et al. [31] | 2025 | head and neck | CycleGAN
Vestergaard et al. [32] | 2024 | head and neck | CycleGAN/CUT
Zhang Y. et al. [34] | 2022 | head and neck | GAN
Ylisiurua et al. [15] | 2024 | dental-maxillofacial region | UNIT and U-Net
Ryu K. et al. [30] | 2023 | head and neck | UNet
Zhao et al. [36] | 2025 | head | VVBPNet
Table 3. Risk of bias assessment according to the QUADAS-2 tool.
Study | Risk of Bias: Patient Selection | Index Test | Reference Standard | Flow and Timing | Applicability Concerns: Patient Selection | Index Test | Reference Standard
Costarelli et al., 2021 [27] | High | Unclear | Low | High | High | Low | Low
Kazimierczak et al., 2024 [28] | Low | Low | Low | Low | Low | Low | Low
Kazimierczak et al., 2024 [29] | Low | Low | Low | Low | Low | Low | Low
Ryu K. et al., 2023 [30] | Low | Low | Low | Low | Low | Low | Low
Ryu S. et al., 2025 [31] | Low | Low | Low | Low | Low | Low | Low
Vestergaard et al., 2024 [32] | Low | Low | Low | Low | Low | Low | Low
Wajer et al., 2024 [33] | Low | Low | Low | Low | Low | Low | Low
Ylisiurua et al., 2024 [15] | Low | Low | Low | Low | Low | Low | Low
Zhang Y. et al., 2022 [34] | Low | Low | Low | Low | Low | Low | Low
Zhang K. et al., 2022 [35] | Low | Low | Low | Low | Low | Low | Low
Zhao et al., 2025 [36] | Unclear | Unclear | Low | Low | Unclear | Low | Low
Table 4. Comparative parameters of denoising models. The values are divided into those calculated in comparison with ground truth images and those calculated against a reference (classical) method, where present. For some metrics (PSNR, CORR), the metric is calculated in reference to ground truth. For others (CNR), the metric can be calculated separately for ground truth and for the results, allowing a comparison of values. The means are calculated over the regions of interest specified in the source article, without reweighting for different numbers of scans. Rel. metric enhancement is the metric enhancement relative to the ground truth or reference method. Only the metrics concerning image enhancement are considered. * In Zhang K. et al. [35], the PSNR metric was calculated differently than in other works and is shifted by around 90 dB as a result. ** In Wajer et al. [33], the CNR metric was calculated differently, with the noise of the artifact and control regions separated.
Table 4. Comparative parameters of denoising models. The values were divided into the ones that are calculated in comparison with Ground Truth images and with results of a reference, classical method if present. For some metrics (PSNR, CORR), the metric is calculated in reference to Ground Truth. For others (CNR), the value of the metric can be calculated for Ground Truth and for results separately, allowing for comparison of values. The means are calculated for each Region of Interests specified in the source article, without reweighting due to different numbers of scans. Rel. metric enhancement is the metric enhancement in reference to the ground truth or reference method. Only the metrics concerning image enhancement are considered. * in Zhang K. et al. [35], the PSNR metric was calculated differently than in other works, as a result being shifted by around 90 dB. ** in Wajer et al. [33], the CNR metric was calculated differently, with noise of artifact and control separated.
| Study | Algorithm Name | Ground Truth Images | Metric | Mean Metric Value | Rel. Metric Enhancement vs. Ground Truth | Reference Method | Rel. Metric Enhancement vs. Reference |
|---|---|---|---|---|---|---|---|
| *Classical image denoising* | | | | | | | |
| Costarelli et al. [27] | Sampling Kantorovich (SK) | Original CBCT images | PSNR | 58.1 dB | --- | Bilinear B-spline | 15.5% |
| | | | | | | Bicubic B-spline | 13.8% |
| Zhang K. et al. [35] | Iterative Noise Reduction (INR) | Original CBCT images | PSNR (shifted) | 191 dB | --- | PWLS | 2.1% |
| | | | PSNR − 90 dB * | 101 dB | --- | | 4.1% |
| | | | CORR | 0.993 | --- | | 0.3% |
| *Evaluation of AI-based denoising* | | | | | | | |
| Kazimierczak et al. [28] | ClariCT.AI | Original CBCT images | CNR | 11.03 | 44.8% | --- | --- |
| Kazimierczak et al. [29] | ClariCT.AI | Original CBCT images | CNR | 9.92 | 35.6% | --- | --- |
| Wajer et al. [33] | ClariCT.AI | Original CBCT images | CNR ** | 0.93 | 17.2% | --- | --- |
| | | | AIx | 0.92 | −5.0% | --- | --- |
| | | | ΔVV | 1.04 | −0.2% | --- | --- |
| *Transforming images between techniques* | | | | | | | |
| Ylisiurua et al. [15] | UNet | CBCT scans after PLS-TV regularization | PSNR | 77.4 dB | --- | FDK denoised images | 52.9% |
| | | | SSIM | 1.0 | --- | | 7.5% |
| | UNIT | | PSNR | 74.6 dB | --- | | 47.4% |
| | | | SSIM | 1.0 | --- | | 7.5% |
| Ryu K. et al. [30] | COMPUNet | MDCT images | NRMSE | 0.14 | --- | Original CBCT | 35.7% |
| | | | SSIM | 0.84 | --- | | 10.5% |
| Ryu S. et al. [31] | CycleGAN with MAEVGG loss | Ground truth CT scans | PSNR | 28.65 dB | --- | Original CBCT | 28.3% |
| | | | SSIM | 0.87 | --- | | 40.2% |
| Vestergaard et al. [32] | CycleGAN | Ground truth CT scans | PSNR | 31.8 dB | --- | Original CBCT | 24.2% |
| | | | SSIM | 0.97 | --- | | 2.1% |
| | CUT | | PSNR | 31.8 dB | --- | | 24.2% |
| | | | SSIM | 0.97 | --- | | 2.1% |
| | CycleCUT | | PSNR | 31.8 dB | --- | | 24.2% |
| | | | SSIM | 0.97 | --- | | 2.1% |
| Zhang Y. et al. [34] | cGAN | Reference CT images | PSNR | 30.58 dB | --- | Original CBCT | 20.7% |
| | | | SSIM | 0.90 | --- | | 8.4% |
| | CycleGAN | | PSNR | 29.29 dB | --- | | 15.6% |
| | | | SSIM | 0.92 | --- | | 10.8% |
| | UNet | | PSNR | 30.48 dB | --- | | 20.3% |
| | | | SSIM | 0.90 | --- | | 8.4% |
| Zhao et al. [36] | VVBPNet | Reconstructed from full-view projections | PSNR | 37.3 dB | --- | FDK denoised images | 21.9% |
| | | | SSIM | 0.90 | --- | | 26.4% |
| | | | FSIM | 0.99 | --- | | 0.1% |
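Table 4 compares the studies through PSNR, CNR, and a relative metric enhancement expressed as a percentage. As an illustration only (the reviewed studies compute these quantities on study-specific regions of interest and bit depths), a minimal NumPy sketch of the three definitions might look like this; the `data_range` value and the ROI/background selection are assumptions, not taken from any of the reviewed papers.

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB against a ground-truth image."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def cnr(roi, background):
    """Contrast-to-noise ratio: ROI/background contrast over background noise."""
    return abs(roi.mean() - background.mean()) / background.std()

def rel_enhancement(metric_new, metric_ref):
    """Relative metric enhancement in percent, as reported in Table 4."""
    return 100.0 * (metric_new - metric_ref) / metric_ref
```

With these conventions, a denoised image whose CNR rises from 10.0 to 12.0 over the original scan would show a 20% relative enhancement, the same way the percentage columns of Table 4 are read.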
Table 5. Deep learning characteristics of studies describing image transformation methods.
| Study | Task | Model Architectures | Dataset Size (Train/Valid/Test) |
|---|---|---|---|
| Ryu K. et al. [30] | CBCT → MDCT | UNet | 30 (30/0/0) |
| Ryu S. et al. [31] | CBCT → CT | CycleGAN | 33 (22/0/11), cross-validation |
| Vestergaard et al. [32] | CBCT → CT | CycleGAN, CUT, CycleCUT | 102 (77/5/20) |
| Zhang Y. et al. [34] | CBCT → CT | GAN | 120 (80/10/30) |
| Ylisiurua et al. [15] | Simulated CBCT → CBCT | UNet, UNIT | 22 (22/0/0) |
| Zhao et al. [36] | Sparse CBCT → CBCT | UNet | 223 (163/30/30) |
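The image-translation studies listed in Table 5 are scored mainly with SSIM against reference CT (see Table 4). As a hedged illustration of what that metric measures, here is a single-window ("global") SSIM in NumPy; the published results use windowed SSIM implementations, so the global averaging and the default constants `k1`/`k2` here are simplifying assumptions.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two images of equal shape.

    Combines luminance, contrast, and structure terms into one ratio;
    identical images score exactly 1.0.
    """
    x = x.astype(float)
    y = y.astype(float)
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A windowed implementation (sliding Gaussian window, then averaging the local SSIM map) is what libraries such as scikit-image provide and is closer to what the studies in Table 5 actually report.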
Table 6. Main findings of the review.
| Category | Classic Method | Deep Learning Model |
|---|---|---|
| Quantitative analysis | New algorithms (SK, INR) perform much better than older ones. | Models perform reasonably well in denoising images. |
| Subjective analysis | The images are cleaner; however, the sample of methods is small. | Mixed results: in most cases the images are smoother, cleaner, and brighter, yet in some cases the experts preferred the output of classic methods. |
| Time of analysis | Rather slow. | Speed-up of 1–2 orders of magnitude. |
| Usage | Denoising, further downstream tasks. | Obtaining synthetic images from different techniques, more precise radiation dose calculation, lowering dose using sparse-view acquisition. |
Wajer, R.; Dabrowski-Tumanski, P.; Wajer, A.; Kazimierczak, N.; Serafin, Z.; Kazimierczak, W. Enhancing Image Quality in Dental-Maxillofacial CBCT: The Impact of Iterative Reconstruction and AI on Noise Reduction—A Systematic Review. J. Clin. Med. 2025, 14, 4214. https://doi.org/10.3390/jcm14124214
