Article

Hybrid Reconstruction Approach for Polychromatic Computed Tomography in Highly Limited-Data Scenarios

by Alessandro Piol 1,2,†, Daniel Sanderson 1,3,†, Carlos F. del Cerro 1,3, Antonio Lorente-Mur 1,3, Manuel Desco 1,3,4,5,* and Mónica Abella 1,3,4,*

1 Bioengineering Department, Universidad Carlos III de Madrid, 28911 Leganes, Spain
2 Department of Information Engineering, University of Brescia, Via Branze, 38, 25123 Brescia, Italy
3 Instituto de Investigación Sanitaria Gregorio Marañón, 28007 Madrid, Spain
4 Centro Nacional de Investigaciones Cardiovasculares Carlos III (CNIC), 28029 Madrid, Spain
5 Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), 28029 Madrid, Spain
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2024, 24(21), 6782; https://doi.org/10.3390/s24216782
Submission received: 30 July 2024 / Revised: 10 October 2024 / Accepted: 15 October 2024 / Published: 22 October 2024
(This article belongs to the Special Issue Recent Advances in X-Ray Sensing and Imaging)

Abstract
Conventional strategies aimed at mitigating beam-hardening artifacts in computed tomography (CT) can be categorized into two main approaches: (1) postprocessing following conventional reconstruction and (2) iterative reconstruction incorporating a beam-hardening model. While the former fails in low-dose and/or limited-data cases, the latter substantially increases computational cost. Although deep learning-based methods have been proposed for several cases of limited-data CT, few works in the literature have dealt with beam-hardening artifacts, and none have addressed the problems caused by randomly selected projections and a highly limited span. We propose the deep learning-based prior image constrained (PICDL) framework, a hybrid method that yields CT images free from beam-hardening artifacts in different limited-data scenarios. It combines a modified version of the Prior Image Constrained Compressed Sensing (PICCS) algorithm that incorporates the L2 norm (L2-PICCS) with a prior image generated by applying a deep learning (DL) algorithm to a preliminary FDK reconstruction. The model is based on a modification of the U-Net architecture, in which the original encoder is replaced by ResNet-34. Evaluation with rodent head studies in a small-animal CT scanner showed that the proposed method was able to correct beam-hardening artifacts, recover patient contours, and compensate for streak and deformation artifacts in scenarios with a limited span and a limited number of randomly selected projections. Hallucinations introduced in the prior image by the deep learning model were eliminated, while the target information was effectively recovered by the L2-PICCS algorithm.

1. Introduction

The origin of the beam-hardening effect lies in the polychromatic nature of X-ray sources: the mean energy of the beam increases as it traverses a material, since low-energy photons are more easily absorbed than high-energy photons. The beam-hardening effect leads to two distinct types of artifacts in reconstructed CT images: cupping in homogeneous regions and dark bands between dense areas in heterogeneous regions [1], compromising the quantitative fidelity of the image.
We can find multiple beam-hardening correction approaches in the literature. It is common to pre-harden the beam by using a physical filter that eliminates most of the low-energy photons [1]. However, this method falls short of completely suppressing artifacts, making additional image processing techniques necessary. The processing method implemented in most scanners is water linearization, which assumes that the sample is homogeneous and therefore only addresses cupping artifacts [2]. To correct both cupping and dark bands, some works model the beam-hardening effect using knowledge about the X-ray spectrum together with an estimation of the tissue thicknesses traversed [3,4]. The need for spectrum knowledge can be overcome by using a beam-hardening model based on information from a calibration phantom [5]. Other approaches, such as maximizing the flatness [6] or the entropy [7] in reconstructed images, avoid the need for explicit beam-hardening model characterization. However, these methods still require image segmentation, a challenging task in limited-data acquisitions where streaks and edge distortions compromise segmentation accuracy. In such scenarios, iterative algorithms can enhance segmented masks in successive iterations. The work proposed in [8] included a polychromatic model of the source but required knowledge of the spectrum to incorporate the energy effect into the projection matrix. This requirement was eliminated in [9], based on a simplified polychromatic model that uses two parameters together with the calibration step of the water linearization method. The polychromatic model was further improved in [10] by using a calibration step to model the beam-hardening function. 
While these methods can be applied to the correction of limited-data CT with beam hardening, they have two main downsides: (1) applying a polychromatic model is computationally expensive and relies on heuristic approximations of the tissues, and (2) images in extreme scenarios with a very low number of projections are significantly distorted compared to the target images.
The computational cost associated with tomographic reconstruction can be significantly mitigated by the use of DL methods, but few works in the literature have dealt with beam-hardening artifacts. Li-ping et al. [11] exploited a three-layer convolutional network trained with simulated reconstructed data to correct industrial CT images made up of a single material, which is an easier problem than dealing with medical images. Ji et al. [12] followed the workflow of the empirical beam-hardening correction method proposed in [6] by replacing the empirical terms with a neural network, although the method showed a loss of spatial resolution for the low-dose cases. In [13], the authors proposed an end-to-end workflow that reconstructs images free of beam-hardening artifacts from the original projections based on the concatenation of three consecutive U-Net [14] networks, but the evaluation was only performed on images of 64 × 64 pixels—a matrix size far from those of real cases—and in most cases, the images resulting from their method did not outperform the FDK reconstruction. None of these works address the additional artifacts that may appear simultaneously, such as those caused by limited projections or a limited span.
Most DL works on tomographic image reconstruction have focused on correcting artifacts generated in scenarios of limited projections and limited span separately. In scenarios with a limited number of projections (LNP), most approaches aim to improve the sinograms [15], the image obtained from an initial FDK reconstruction [16], or both the sinogram and the reconstructed image [17,18]. Nevertheless, these approaches present significant loss of detail when the number of projections is too small. Similarly, to solve the issues arising from a limited span angle (LSA), some works focus on improving the image after an FDK reconstruction [19,20], the image in the wavelet domain [21], or both the sinogram and the reconstructed image [22]. Nevertheless, none of these works simultaneously address the artifacts caused by randomly selected projections in a gating scenario or by a highly limited angle span. Also, when evaluated for a low number of projections, these methods have shown significant artifacts. Additionally, scenarios where the span angle and the number of projections differ from those used during training have not been evaluated. Finally, an important problem of relying completely on DL to obtain the final reconstruction is the risk of generating false structures known as hallucinations [23], especially in the regions that are most affected by the limited-data artifacts.
These limitations have been mostly addressed with the advent of plug-and-play [24,25] and unrolling [26] algorithms which embed a DL model within an iterative scheme. The former can be challenging to implement in cases with a very limited number of projections due to the domain shift between the training data and the intermediate reconstructions. The latter can be computationally prohibitive for 3D problems such as cone beam reconstruction due to the need to implement the projection and backprojection steps within the neural network. Additionally, it is unclear if it is feasible to incorporate a beam-hardening correction within this framework as it would require a polychromatic projection model in both cases.
An intermediate approach between full reconstruction with DL and the embedding of a DL model within an iterative algorithm has been to produce a DL reconstruction to use as a prior image within an iterative algorithm [27,28]. While this approach incorporates DL less elaborately compared to unrolling algorithms or plug-and-play algorithms, it circumvents some of the practical problems faced by those algorithms. Following this idea, we propose PICDL, a hybrid reconstruction method based on the combination of a modified version of the original PICCS algorithm [29,30], which we name L2-PICCS, with a DL model to generate a prior image to compensate artifacts due to beam hardening, applicable to situations with a limited number of randomly distributed projections and instances with restricted span data. L2-PICCS substitutes the prior TV regularization term with the L2 norm, making PICCS robust to very-low-dose scenarios by minimizing the appearance of streaks in the final reconstruction. The DL model has been tested in scenarios where the projections do not match those of the training data for either standard dose (SD), low dose (LD), LNP, or LSA scenarios. While L2-PICCS does not use a polychromatic projection model, the results show that the simple correction of the beam-hardening within the prior image allows for the correction of beam-hardening in the result of the algorithm. The effectiveness of the algorithm was tested on real small-animal cone beam CT data.

2. Methods

PICDL is a hybrid method that consists of three stages: (1) the preliminary FDK reconstruction with artifacts, which is the input to the network, (2) the generation of a prior image from this preliminary reconstruction with a DL architecture (DeepBH) that works slice by slice and is trained to correct beam-hardening along with either SD, LD, LSA, or LNP, and (3) the use of this prior image in the L2-PICCS algorithm to obtain the final reconstruction without artifacts (Figure 1).
The following sections briefly describe L2-PICCS for completeness and describe the architecture of DeepBH used to generate the prior image.

2.1. L2-PICCS Method

Assuming that the solution u has a sparse gradient, and that it is close to a prior image up, PICCS finds u as the solution to the optimization problem [30]:
$$\min_u \; (1-\alpha)\,\lVert T_1 u \rVert_1 + \alpha\,\lVert T_2 (u - u_p) \rVert_1 \quad \text{s.t.} \quad \lVert F u - f \rVert_2^2 \le \sigma^2, \;\; u \ge 0 \tag{1}$$
where $f$ is the limited projection data, $F$ is the forward operator, $T_1$ and $T_2$ can be any transforms previously used in Compressed Sensing (CS) studies, $\sigma$ represents the variance of the noise in the data, and $\alpha$ weights the contributions of $T_1$ and $T_2$. We add the positivity constraint $u \ge 0$ as in [31].
While in [30] the authors use the total variation pseudo-norm for both $T_1$ and $T_2$, L2-PICCS differs from this original PICCS implementation by using the L2 norm as the second transform $T_2$. This modification avoids transferring into the solution the high-gradient streaks present in the $(u - u_p)$ regularization term, caused by the very low number of projections in our data.
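As a minimal sketch of this objective (assuming an anisotropic total-variation pseudo-norm for $T_1$ and the identity transform for $T_2$; the function names are illustrative and not taken from the authors' implementation):

```python
import numpy as np

def tv_l1(u):
    """Anisotropic total-variation pseudo-norm: sum of |forward differences|."""
    gx = np.diff(u, axis=0)
    gy = np.diff(u, axis=1)
    return np.abs(gx).sum() + np.abs(gy).sum()

def l2piccs_regularizer(u, u_prior, alpha):
    """L2-PICCS regularizer: (1 - alpha) * TV(u) + alpha * ||u - u_prior||_2^2,
    minimized subject to the data-consistency constraint ||Fu - f||_2^2 <= sigma^2."""
    return (1.0 - alpha) * tv_l1(u) + alpha * np.sum((u - u_prior) ** 2)
```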

2.2. Split Bregman Algorithm

Equation (1) can be solved using the Split Bregman method [32]. We introduce the Bregman iterators $b_x^k$, $b_y^k$, and $b_v^k$, and the auxiliary variables $d_x$, $d_y$, $v$ under the constraints $d_x = \nabla_x u$, $d_y = \nabla_y u$, $v = u$. By adding the constraints through penalty functions, we obtain the following minimization problem:
$$\min_{u, d_x, d_y, v} \; (1-\alpha)\,\lVert (d_x, d_y) \rVert_1 + \alpha\,\lVert u - u_p \rVert_2^2 + \Phi(v \ge 0) + \frac{\mu}{2}\lVert F u - f \rVert_2^2 + \frac{\lambda}{2}\lVert d_x - \nabla_x u - b_x^k \rVert_2^2 + \frac{\lambda}{2}\lVert d_y - \nabla_y u - b_y^k \rVert_2^2 + \frac{\gamma}{2}\lVert v - u - b_v^k \rVert_2^2 \tag{2}$$
where $\Phi(v \ge 0)$ is the indicator function for the positivity constraint, and $\lambda$, $\mu$, and $\gamma$ are the penalty function weights.
We decouple the problem by alternating between the optimization of the terms using the L2 norm, the L1 norm, and the indicator function of positive values. The first is solved by setting the gradients to zero, leading to the following system of equations, which we solve iteratively with a Krylov solver:
$$K u^{k+1} = r^k \tag{3}$$
$$K = \mu F^T F + \lambda D_x^T D_x + \lambda D_y^T D_y + (2\alpha + \gamma)\, I$$
$$r^k = \mu F^T f^k + 2\alpha\, u_p + \lambda D_x^T \left(d_x^k - b_x^k\right) + \lambda D_y^T \left(d_y^k - b_y^k\right) + \gamma \left(v^k - b_v^k\right)$$
The L1 subproblem is solved using the proximal operator of the L1 norm, the general shrinkage formula, applied to the variables d x , d y , v . The positivity constraint is enforced using the projector (i.e., the proximal operator of the indicator function) onto the set of positive values.
$$\left(d_x^{k+1}, d_y^{k+1}\right) = \max\!\left(s^k - \frac{1-\alpha}{\lambda},\, 0\right) \frac{D_j u^{k+1} + b_j^k}{s^k}, \quad j = x, y \tag{4}$$
$$s^k = \sqrt{\left(D_x u^{k+1} + b_x^k\right)^2 + \left(D_y u^{k+1} + b_y^k\right)^2}$$
$$v^{k+1} = \max\!\left(u^{k+1} + b_v^k,\, 0\right)$$
The Bregman iterators are updated as follows:
$$b_x^{k+1} = b_x^k + \nabla_x u^{k+1} - d_x^{k+1}$$
$$b_y^{k+1} = b_y^k + \nabla_y u^{k+1} - d_y^{k+1}$$
$$b_v^{k+1} = b_v^k + u^{k+1} - v^{k+1}$$
$$f^{k+1} = f^k + f - F u^{k+1}$$
The resulting hyperparameters are $\mu$, which weights the contribution of the limited projection data $f$; $\alpha$ and $\lambda$, which weigh the regularization terms; and $\gamma$, which regulates the convergence speed of the algorithm. The differences between L2-PICCS and the original PICCS are reflected in the removal of the proximal operator originally associated with the $T_2$ transform from the L1 subproblem (Equation (4)) and the incorporation of the L2 norm into the Krylov solver (Equation (3)).
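The Split Bregman loop above can be sketched as follows. This is a toy 2D illustration, not the authors' code: the forward operator `F` and its adjoint `FT` are passed in as generic callables, periodic forward differences stand in for $D_x$, $D_y$, and SciPy's conjugate gradient plays the role of the Krylov solver. All names and default parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def grad_x(u):   return np.roll(u, -1, axis=0) - u   # periodic forward difference
def grad_y(u):   return np.roll(u, -1, axis=1) - u
def grad_x_T(p): return np.roll(p, 1, axis=0) - p    # adjoint of grad_x
def grad_y_T(p): return np.roll(p, 1, axis=1) - p    # adjoint of grad_y

def l2piccs_split_bregman(f, u_prior, F, FT, alpha=0.5, mu=1.0, lam=0.5,
                          gamma=1.0, n_iter=20, eps=1e-10):
    """Toy L2-PICCS via Split Bregman (Equations (2)-(4) and the Bregman updates)."""
    shp = u_prior.shape
    u = FT(f).copy()
    dx, dy = np.zeros(shp), np.zeros(shp)
    bx, by, bv = np.zeros(shp), np.zeros(shp), np.zeros(shp)
    v = np.maximum(u, 0)
    fk = f.copy()

    def K(uvec):  # K = mu F^T F + lam D^T D + (2*alpha + gamma) I
        ui = uvec.reshape(shp)
        out = (mu * FT(F(ui)) + lam * grad_x_T(grad_x(ui))
               + lam * grad_y_T(grad_y(ui)) + (2 * alpha + gamma) * ui)
        return out.ravel()

    Kop = LinearOperator((u.size, u.size), matvec=K)

    for _ in range(n_iter):
        # L2 subproblem: solve K u = r^k with a Krylov solver
        r = (mu * FT(fk) + 2 * alpha * u_prior
             + lam * grad_x_T(dx - bx) + lam * grad_y_T(dy - by)
             + gamma * (v - bv))
        uflat, _ = cg(Kop, r.ravel(), x0=u.ravel(), maxiter=50)
        u = uflat.reshape(shp)
        # L1 subproblem: isotropic shrinkage on the gradient variables (Eq. (4))
        sx, sy = grad_x(u) + bx, grad_y(u) + by
        s = np.sqrt(sx ** 2 + sy ** 2)
        shrink = np.maximum(s - (1 - alpha) / lam, 0) / (s + eps)
        dx, dy = shrink * sx, shrink * sy
        v = np.maximum(u + bv, 0)          # positivity projection
        bx += grad_x(u) - dx               # Bregman updates
        by += grad_y(u) - dy
        bv += u - v
        fk += f - F(u)                     # added-back data residual
    return u
```

Note how each step maps onto the derivation: the conjugate-gradient call solves Equation (3), the shrinkage block is Equation (4), and the last four updates are the Bregman iterator updates.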

2.3. DeepBH

The prior image is obtained with DeepBH, a DL architecture based on a modification of the U-Net architecture [14] replacing the original encoder with ResNet-34 [33] to take advantage of the improved performance offered by the residual block. The decoder mainly comprises oversampling blocks that perform subpixel convolution via pixel shuffling with CNN resize initialization (ICNR) [34], followed by two blocks consisting of a convolutional layer and a rectified linear unit (ReLU). Each oversampling block is linked to its corresponding encoder block via skip connections. The complete architecture is depicted in Figure 2.
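The ICNR weight construction mentioned above can be sketched as follows: the kernels that feed the pixel-shuffle layer are drawn for only one sub-pixel and then replicated, so that all sub-pixels start with identical filters and the upsampling is free of checkerboard artifacts at initialization. This is an illustrative NumPy sketch (function name and signature are our own, not from the paper's code):

```python
import numpy as np

def icnr_weights(out_channels, in_channels, kernel_size, upscale=2, rng=None):
    """ICNR initialization for a conv layer followed by pixel shuffle:
    draw out_channels / upscale^2 independent kernels, then replicate each
    one upscale^2 times along the output-channel axis."""
    if rng is None:
        rng = np.random.default_rng(0)
    r2 = upscale ** 2
    base = rng.standard_normal((out_channels // r2, in_channels,
                                kernel_size, kernel_size))
    return np.repeat(base, r2, axis=0)   # groups of r2 identical filters
```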
We trained the model using a Smooth L1 loss function, defined as follows:
$$l(x, u) = \frac{1}{N} \sum_{n=1}^{N} l_n(x_n, u_n), \qquad l_n(x_n, u_n) = \begin{cases} 0.5\,(x_n - u_n)^2 & \text{if } |x_n - u_n| < 1 \\ |x_n - u_n| - 0.5 & \text{otherwise} \end{cases}$$
where $x = (x_1, \dots, x_N)^T$ is the predicted image, $u = (u_1, \dots, u_N)^T$ is the target image, and $N$ is the number of images present in the dataset. This loss function combines the advantages of the L1 loss (stable gradients when the difference between the prediction and the target is large) and the L2 loss (fewer oscillations during updates when the differences between prediction and target are small). We used the Adam optimizer [35] due to its higher convergence speed, with a weight decay of 10⁻² as a regularization strategy. The layers of the CNN were initialized using the Kaiming uniform method [36]. Fine-tuning was employed as a training strategy: we first trained only the encoder until convergence with a learning rate of 1 × 10⁻⁴, determined by the Leslie N. Smith test [37], and subsequently trained the model end-to-end with a learning rate of 1 × 10⁻⁷ to improve performance. To ensure the reproducibility of our training results, we initialized the weights and biases of the CNN using a specified random seed.
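The Smooth L1 loss above (the Huber loss with threshold 1) is one line in NumPy; this sketch averages over all elements, which matches the per-image mean in the formula when applied image by image:

```python
import numpy as np

def smooth_l1(x, u):
    """Smooth L1 loss: 0.5*(x-u)^2 where |x-u| < 1, |x-u| - 0.5 elsewhere,
    averaged over all elements."""
    d = np.abs(x - u)
    per_elem = np.where(d < 1.0, 0.5 * d ** 2, d - 0.5)
    return per_elem.mean()
```

For small residuals the quadratic branch dominates (L2-like smooth updates); for large residuals the gradient magnitude saturates at 1 (L1-like stability), which is exactly the trade-off described in the text.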
To generate the prior image, the model was trained at half resolution to reduce the risk of transferring potential hallucinations from the network's output to the final image reconstructed with PICDL. To this end, input images were downsampled to half size by averaging 2 × 2 pixel neighborhoods, and the output image was upsampled by bilinear interpolation.
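The resampling pair described above can be sketched as follows (an illustrative NumPy/SciPy sketch, assuming even image dimensions; the actual implementation details are not given in the paper):

```python
import numpy as np
from scipy.ndimage import zoom

def downsample_half(img):
    """Average each non-overlapping 2x2 neighborhood (assumes even H and W)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample_double(img):
    """Bilinear interpolation (order-1 spline) back to twice the size."""
    return zoom(img, 2, order=1)
```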

3. Experiments and Results

We first used DeepBH directly in two conventional scenarios, standard dose and low dose, to evaluate the performance and robustness of the model used to obtain the prior image. In a second step, we evaluated the whole hybrid method PICDL in the scenarios in which simple postprocessing like DeepBH is not enough, i.e., highly limited-data scenarios with either a limited span angle or a limited number of projections. L2-PICCS was implemented only on the central slice of the volume to simplify the projection and backprojection steps in the iterative algorithm. The results of PICDL and DeepBH were compared with those of SART-PICCS, adapted from [38] by replacing ART with SART, given that they are equivalent and SART is faster.

3.1. Datasets

We used 11 rodent head studies acquired with the CT subsystem of an ARGUS/CT system [39], a cone-beam micro-CT scanner based on a flat panel detector with a source-detector distance of 370.95 mm. We obtained 360 projections within a span angle of 360 degrees, with a projection size of 516 × 574 pixels and a pixel size of 0.2 mm. Experiments were carried out in accordance with the Animal Experimentation Ethics Committee of the Community of Madrid (Ref. PROEX 332/15), following the EU Directive 2010/63EU and Recommendation 2007/526/EC, and the enforcement in Spain from RD 53/2013.
We trained and validated one model for each of the following four data scenarios generated from the acquired data of the 11 rodent studies (Figure 3):
  • Standard dose (SD) scenario. Complete data, that is, 360 projections with a span angle of 360 degrees, resulting in seven studies for training, two for validation, and two for test.
  • Low-dose (LD) scenario. We selected every second projection from each study (180 projections covering a 360-degree span), resulting in seven studies for training, two for validation, and two for test.
  • Limited span angle (LSA) scenario. This scenario entailed the random selection of three span angles between 90 and 160 degrees for each of the 11 rodent studies. Consequently, we obtained a total of 21 studies for training, 6 for validation, and an additional 6 for testing.
  • Limited number of random projections (LNP) scenario. We conducted three rounds of selection, randomly choosing a varying number of projections between 30 and 60 within a 360-degree span for each of the 11 datasets. This process yielded a total of 21 studies allocated for training, 6 for validation, and another 6 for testing.
Table 1 shows the resulting number of 2D images for each scenario.
Preliminary reconstructions consisted of volumes of size 448 × 512 × 496, with a voxel size of 0.122 × 0.122 × 0.122 mm3, obtained with an FDK-based method implemented in FuxSim [40]. Target images free from beam-hardening artifacts were obtained from the SD datasets using FDK + 2DCalBH [5]. Since the prior volume in PICCS is used to recover the texture and help recover the main structures, the CTs were downsampled by half in the three dimensions, resulting in 224 × 256 × 248 volumes. The prior image obtained with the DL model was then rescaled to 516 × 516 × 496 to be used in the PICCS algorithm.

3.2. Parameters of Iterative Methods and Evaluation Metrics

Table 2 shows the parameters used for the L2-PICCS algorithm, which were selected to maximize both data consistency and perceptual appearance after performing a visual grid search.
The robustness of PICDL against different priors obtained from different datasets was assessed by training the model for the LNP case with two different random seeds (33 and 42).
SART-PICCS was run for 40 global iterations that include 10 PICCS iterations and 1 SART iteration. We used a PICCS step size of 0.2, a SART step size of 1.0, a prior weight of 0.35, and a relax value of 0.01, under all scenarios.
Performance on the test studies was assessed visually and quantitatively using the Peak Signal-to-Noise Ratio (PSNR) to quantify texture and pixel-value differences, the Structural Similarity Index (SSIM) to quantify structural differences with respect to the target images [41], and the Correlation Coefficient (CC) [42] to measure the overall linear similarity between the images, providing a comprehensive evaluation of both pixel-wise and structural congruence.
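Two of the three metrics are short enough to sketch directly (an illustrative NumPy sketch; the paper does not specify its implementation, and for SSIM one would typically rely on an existing library implementation such as `skimage.metrics.structural_similarity` rather than re-derive it here):

```python
import numpy as np

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB, with the peak taken from the reference."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def correlation_coefficient(ref, img):
    """Pearson correlation coefficient between the flattened images."""
    return np.corrcoef(ref.ravel(), img.ravel())[0, 1]
```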

3.3. Results in Conventional Scenarios

Results for the SD and LD scenarios showed that a postprocessing step with DeepBH was able to correct both streaks and dark band artifacts, with a result very similar to the target regardless of the dose (Figure 4). This is quantitatively supported by the improvement in PSNR and SSIM values, especially for the LD scenarios due to the elimination of streaks (Figure 5).

3.4. Results in Highly Limited-Data Scenarios

Figure 6 shows that SART-PICCS is sensitive to the hallucinations in the prior image, while PICDL achieves almost identical results for priors obtained with different random seeds, with only minor differences appearing in internal structures with higher intensity, reaching very similar PSNR and SSIM values (Table 3). This demonstrates the robustness of the method against changes in the prior image.
Figure 7 shows that DeepBH, SART-PICCS, and PICDL were able to correct the deformations caused by the reduced span angle (see white arrows in Figure 7A and Figure 7F), leading to PSNR and SSIM improvements with PICDL as high as 100% and 67%, respectively (Table 4). While SART-PICCS introduced blurring in the reconstructed image owing to the downsampled prior, PICDL could recover high-resolution edges.
In the case of the LNP scenario, while we see an elimination of beam-hardening artifacts and streaks with all methods, DeepBH showed some hallucinations in the form of new or incomplete structures (see white arrows Figure 8B and Figure 8G). While SART-PICCS transferred the hallucinations of the prior image into the reconstructed image, PICDL was able to avoid them (Figure 8E and Figure 8J), resulting in an increased PSNR and SSIM of more than 86% and 121%, respectively, compared to FDK.

4. Discussion and Conclusions

This paper presents a hybrid reconstruction method designed to mitigate beam-hardening artifacts in CT images. This method is valid for limited-data scenarios and is based on the combination of L2-PICCS, a modified version of the PICCS algorithm, with DeepBH, a DL architecture that generates the prior information. While this prior information is available in dynamic acquisitions as respiratory gating, the proposed method enables the use of L2-PICCS for the limited-data cases where complete data are not available (i.e., static low-dose acquisitions).
Evaluation with rodent head studies in a CT scanner showed that DeepBH was able to correct beam-hardening artifacts and streaks in scenarios with standard and low-dose data without introducing hallucinations. In both highly limited-data scenarios, LSA, and LNP, DeepBH was able to eliminate beam-hardening, streaks, and deformation artifacts arising due to the lack of projections and improve the texture, especially in soft tissue. Nevertheless, in LNP, the lack of data in internal structures and surface edges resulted in the generation of hallucinations not present in the other scenarios. This suggests that edge deformations are more easily learned by the network than small structures hindered by severe streaks. It is difficult to assess the diagnostic relevance of these hallucinations as it depends on the specific pathology or purpose of the study, but they introduce an uncertainty factor that precludes its use for clinical applications and hampers a possible use in preclinical scenarios depending on the specific application.
The appearance of hallucinations under the LNP scenario justifies the need for hybrid techniques such as PICDL to properly reconstruct data with a highly limited number of projections and recover internal structures and edges. The results show that while SART-PICCS was not able to correct small hallucinations in the prior image, the incorporation of the original measured data through our hybrid approach effectively eliminates the hallucinations introduced into the prior image by the DL model. The use of Split Bregman in our implementation allows the PICCS problem to be solved accurately, whereas SART-PICCS approximates the gradient of the TV norm by means of a relax parameter in order to apply gradient descent, which may explain its inability to correct small hallucinations in the prior image, leading to an incorrect reconstruction of small gradients.
The evaluation of PICDL using different priors, resulting from different initializations of the weightings and biases of the CNN, supported the robustness of the proposed method since it can obtain results that are visually and quantitatively very similar to each other.
The main challenge with PICDL was the selection of the regularization parameters for L2-PICCS, which was carried out heuristically depending on the type of artifact, the number of total projections, and their spatial distribution within the total span angle. Future work will explore the possibility of optimizing these parameters with a deep learning model, provided that enough complete PICDL studies are available. Other methods such as the so-called unrolling methods include the DL model as part of the iterative method [43,44]. Nevertheless, the use of unrolling methods with 3D data is limited by the large computational memory costs.
One limitation of our study is the reduced size of the training database. Future work will require the creation of larger datasets and the evaluation of the method on anatomical regions other than rodent head studies to determine the need to independently train the DL model on each body part.
While PICDL does not include a polychromatic projector model, it shows that the use of a beam-hardening corrected prior allows for beam hardening to be solved in a way that is not computationally demanding.
The proposed method is an easy-to-implement approach to bring the potential of deep learning strategies to limited-data CT while preventing the transfer of hallucinations to the final solution. This could be especially useful in settings with mechanical limitations, such as the use of C-arms to obtain tomographic images in the operating room, or where the radiation dose must be reduced, as in screening.
In conclusion, PICDL has demonstrated effectiveness in compensating for beam-hardening artifacts across various scenarios, successfully correcting streak and deformation artifacts present in highly limited data, paving the way for real-time quantitative tomography in situations involving non-standard trajectories, such as those encountered with C-arm systems used during surgery.

Author Contributions

Conceptualization, M.D. and M.A.; methodology A.P., D.S., C.F.d.C., A.L.-M., M.D. and M.A.; software, A.P., D.S., C.F.d.C. and A.L.-M.; validation, A.P., D.S., C.F.d.C., A.L.-M. and M.A.; formal analysis, M.D. and M.A.; investigation, A.P., D.S., C.F.d.C., A.L.-M., and M.A.; resources, M.D. and M.A.; data curation, A.P. and C.F.d.C.; writing—original draft preparation, A.P., C.F.d.C., and M.A.; writing—review and editing, A.L.-M., D.S., M.D. and M.A.; visualization, A.P., D.S. and C.F.d.C.; supervision, M.A.; project administration, M.A.; funding acquisition, M.D. and M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by grant PDC2021-121656-I00 funded by MICIU/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR, and grant PID2019-110369RB-I00, funded by MICIU/AEI/10.13039/501100011033. Also funded by Instituto de Salud Carlos III (ISCIII), through the projects PMPTA22/00118 and PMPTA22/00121, co-funded by the European Union NextGenerationEU, Mecanismo de Recuperación y Resiliencia (MRR) and project PT20/00044, co-funded by European Regional Development Fund “A way to make Europe”. Also partially supported by project EXP2022/008917, funded by Delegación del Gobierno para el Plan Nacional sobre Drogas, Mecanismo de Recuperación y Resiliencia de la Unión Europea, and by the ASPIDE Project, funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant 801091. The CNIC is supported by the Instituto de Salud Carlos III (ISCIII), the Ministerio de Ciencia, Innovación y Universidades (MICIU) and the Pro CNIC Foundation and is a Severo Ochoa Center of Excellence (grant CEX2020-001041-S funded by MICIU/AEI/10.13039/501100011033).

Institutional Review Board Statement

Experiments were carried out in accordance with the Animal Experimentation Ethics Committee of the Community of Madrid (Ref. PROEX 332/15), following the EU Directive 2010/63EU and Recommendation 2007/526/EC, and the enforcement in Spain from RD 53/2013.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original data presented in the study are openly available in zenodo at https://zenodo.org/records/11952603 (accessed on 1 June 2024).

Acknowledgments

We thank J. Garcia Blas from the Department of Computer Science and Engineering at Universidad Carlos III de Madrid for providing access to the cluster of computers he manages.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Barrett, J.F.; Keat, N. Artifacts in CT: Recognition and Avoidance. RadioGraphics 2004, 24, 1679–1691. [Google Scholar] [CrossRef] [PubMed]
  2. Brooks, R.A.; Di Chiro, G. Beam hardening in x-ray reconstruction tomography. Phys. Med. Biol. 1976, 21, 390–398. [Google Scholar] [CrossRef] [PubMed]
  3. Nalcioglu, O.; Lou, R.Y. Post-reconstruction Method for Beam Hardening in Computerised Tomography. Phys. Med. Biol. 1979, 24, 330–340. [Google Scholar] [CrossRef] [PubMed]
  4. Joseph, P.M.; Spital, R.D. A method for correcting bone induced artifacts in computed tomography scanners. J. Comput. Assist. Tomogr. 1978, 2, 100–108. [Google Scholar] [CrossRef] [PubMed]
  5. Martinez, C.; Fessler, J.A.; Desco, M.; Abella, M. Simple beam-hardening correction method (2DCalBH) based on 2D linearization. Phys. Med. Biol. 2022, 67, 115005. [Google Scholar] [CrossRef]
  6. Kyriakou, Y.; Meyer, E.; Prell, D.; Kachelrieß, M. Empirical beam hardening correction (EBHC) for CT. Med. Phys. 2010, 37, 5179–5187. [Google Scholar] [CrossRef]
  7. Schüller, S.; Sawall, S.; Stannigel, K.; Hülsbusch, M.; Ulrici, J.; Hell, E.; Kachelrieß, M. Segmentation-free empirical beam hardening correction for CT. Med. Phys. 2015, 42, 794–803. [Google Scholar] [CrossRef]
  8. Elbakri, I.A.; Fessler, J.A. Statistical Image Reconstruction for Polyenergetic X-Ray Computed Tomography. IEEE Trans. Med. Imaging 2002, 21, 89–99. [Google Scholar] [CrossRef]
  9. Martinez, C.; Fessler, J.A.; Desco, M.; Abella, M. Segmentation-free statistical method for polyenergetic X-ray computed tomography with a calibration step. In Proceedings of the 6th International Conference on Image Formation in X-Ray Computed Tomography, Regensburg, Germany, 3–7 August 2020; pp. 118–121. [Google Scholar]
  10. Sanderson, D.; Martinez, C.; Fessler, J.A.; Desco, M.; Abella, M. Statistical image reconstruction with beam-hardening compensation for X-ray CT by a calibration step (2DIterBH). Med. Phys. 2024, 51, 5204–5213. [Google Scholar] [CrossRef]
  11. Li-ping, Z.; Yi, S.; Kai, C.; Jian-qiao, Y. Deep learning based beam hardening artifact reduction in industrial X-ray CT. CT Theory Appl. 2018, 27, 227–240. [Google Scholar]
  12. Ji, X.; Gao, D.; Gan, Y.; Zhang, Y.; Xi, Y.; Quan, G.; Lu, Z.; Chen, Y. A Deep-Learning-Based Method for Correction of Bone-Induced CT Beam-Hardening Artifacts. IEEE Trans. Instrum. Meas. 2023, 72, 4504012. [Google Scholar] [CrossRef]
  13. Kalare, K.; Bajpai, M.; Sarkar, S.; Munshi, P. Deep neural network for beam hardening artifacts removal in image reconstruction. Appl. Intell. 2022, 52, 6037–6056. [Google Scholar] [CrossRef]
  14. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  15. Adishesha, A.S.; Vanselow, D.J.; La Riviere, P.; Cheng, K.C.; Huang, S.X. Sinogram domain angular upsampling of sparse-view micro-CT with dense residual hierarchical transformer and attention-weighted loss. Comput. Methods Programs Biomed. 2023, 242, 107802. [Google Scholar]
  16. Qiao, Z.; Du, C. RAD-UNet: A residual, attention-based, dense UNet for CT sparse reconstruction. J. Digit. Imaging 2022, 35, 1748–1758. [Google Scholar] [CrossRef]
  17. Zhang, Y.; Yu, H. Convolutional neural network based metal artifact reduction in x-ray computed tomography. IEEE Trans. Med. Imaging 2018, 37, 1370–1381. [Google Scholar] [CrossRef]
  18. Zhang, P.; Li, K. A dual-domain neural network based on sinogram synthesis for sparse-view CT reconstruction. Comput. Methods Programs Biomed. 2022, 226, 107168. [Google Scholar] [CrossRef] [PubMed]
  19. Huang, Y.; Wang, S.; Guan, Y.; Maier, A. Limited angle tomography for transmission X-ray microscopy using deep learning. J. Synchrotron Radiat. 2020, 27, 477–485. [Google Scholar] [CrossRef]
  20. Zhang, H.; Li, L.; Qiao, K.; Wang, L.; Yan, B.; Li, L.; Hu, G. Image prediction for limited-angle tomography via deep learning with convolutional neural network. arXiv 2016, arXiv:1607.08707. [Google Scholar]
  21. Gu, J.; Ye, J.C. Multi-scale wavelet domain residual learning for limited-angle CT reconstruction. arXiv 2017, arXiv:1703.01382. [Google Scholar]
  22. Zhang, Q.; Hu, Z.; Jiang, C.; Zheng, H.; Ge, Y.; Liang, D. Artifact removal using a hybrid-domain convolutional neural network for limited-angle computed tomography imaging. Phys. Med. Biol. 2020, 65, 155010. [Google Scholar] [CrossRef]
  23. Bhadra, S.; Kelkar, V.A.; Brooks, F.J.; Anastasio, M.A. On hallucinations in tomographic image reconstruction. IEEE Trans. Med. Imaging 2021, 40, 3249–3260. [Google Scholar] [CrossRef] [PubMed]
  24. Cascarano, P.; Piccolomini, E.L.; Morotti, E.; Sebastiani, A. Plug-and-Play gradient-based denoisers applied to CT image enhancement. Appl. Math. Comput. 2022, 422, 126967. [Google Scholar] [CrossRef]
  25. Hu, D.; Zhang, Y.; Liu, J.; Luo, S.; Chen, Y. DIOR: Deep iterative optimization-based residual-learning for limited-angle CT reconstruction. IEEE Trans. Med. Imaging 2022, 41, 1778–1790. [Google Scholar] [CrossRef] [PubMed]
  26. Adler, J.; Öktem, O. Learned primal-dual reconstruction. IEEE Trans. Med. Imaging 2018, 37, 1322–1332. [Google Scholar] [CrossRef] [PubMed]
  27. Huang, Y.; Preuhs, A.; Lauritsch, G.; Manhart, M.; Huang, X.; Maier, A. Data consistent artifact reduction for limited angle tomography with deep learning prior. In Proceedings of the International Workshop on Machine Learning for Medical Image Reconstruction, Shenzhen, China, 17 October 2019; Springer: Cham, Switzerland, 2019; pp. 101–112. [Google Scholar]
  28. Zhang, C.; Li, Y.; Chen, G.H. Accurate and robust sparse-view angle CT image reconstruction using deep learning and prior image constrained compressed sensing (DL-PICCS). Med. Phys. 2021, 48, 5765–5781. [Google Scholar] [CrossRef]
  29. Chen, G.H.; Thériault-Lauzier, P.; Tang, J.; Nett, B.; Leng, S.; Zambelli, J.; Qi, Z.; Bevins, N.; Raval, A.; Reeder, S.; et al. Time-resolved interventional cardiac C-arm cone-beam CT: An application of the PICCS algorithm. IEEE Trans. Med. Imaging 2011, 31, 907–923. [Google Scholar] [CrossRef]
  30. Chen, G.H.; Tang, J.; Leng, S. Prior image constrained compressed sensing (PICCS): A method to accurately reconstruct dynamic CT images from highly undersampled projection data sets. Med. Phys. 2008, 35, 660–663. [Google Scholar] [CrossRef]
  31. Abascal, J.F.; Abella, M.; Sisniega, A.; Vaquero, J.J.; Desco, M. Investigation of different sparsity transforms for the PICCS algorithm in small-animal respiratory gated CT. PLoS ONE 2015, 10, e0120140. [Google Scholar]
  32. Goldstein, T.; Osher, S. The Split Bregman Method for L1 Regularized Problems. SIAM J. Imaging Sci. 2009, 2, 323–343. [Google Scholar] [CrossRef]
  33. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 770–778. [Google Scholar]
  34. Aitken, A.; Ledig, C.; Theis, L.; Caballero, J.; Wang, Z.; Shi, W. Checkerboard artifact free sub-pixel convolution: A note on sub-pixel convolution, resize convolution and convolution resize. arXiv 2017, arXiv:1707.02937. [Google Scholar]
  35. Kingma, D.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034. [Google Scholar]
  37. Smith, L.N. Cyclical Learning Rates for Training Neural Networks. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV), Santa Rosa, CA, USA, 24–31 March 2017. [Google Scholar]
  38. Hu, Z.; Zheng, H. Improved total variation minimization method for few-view computed tomography image reconstruction. BioMedical Eng. OnLine 2014, 13, 70. [Google Scholar] [CrossRef] [PubMed]
  39. Vaquero, J.J.; Redondo, S.; Lage, E.; Abella, M.; Sisniega, A.; Tapias, G.; Montenegro, M.L.S.; Desco, M. Assessment of a New High-Performance Small-Animal X-ray Tomograph. IEEE Trans. Nucl. Sci. 2008, 55, 898–905. [Google Scholar] [CrossRef]
  40. Abella, M.; Serrano, E.; Garcia-Blas, J.; García, I.; De Molina, C.; Carretero, J.; Desco, M. FUX-Sim: An implementation of a fast universal simulation/reconstruction framework for X-ray systems. PLoS ONE 2017, 12, e0180363. [Google Scholar] [CrossRef] [PubMed]
  41. Necasová, T.; Burgos, N.; Svoboda, D. Chapter 25: Validation and evaluation metrics for medical and biomedical image synthesis. In Biomedical Image Synthesis and Simulation; The MICCAI Society Book Series; Burgos, N., Svoboda, D., Eds.; Academic Press: Cambridge, MA, USA, 2022; pp. 573–600. [Google Scholar]
  42. Teukolsky, S.A.; Flannery, B.P.; Press, W.H.; Vetterling, W. Numerical Recipes in C; Cambridge University Press: New York, NY, USA, 1992. [Google Scholar]
  43. Monga, V.; Li, Y.; Eldar, Y. Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing. IEEE Signal Process. Mag. 2021, 38, 18–44. [Google Scholar] [CrossRef]
  44. Wu, D.; Kim, K.; Li, Q. Computationally efficient deep neural network for computed tomography image reconstruction. Med. Phys. 2019, 46, 4763–4776. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the proposed method.
Figure 2. Proposed U-Net network architecture.
Figure 3. Central axial slice of the reconstructions of one of the test cases for the target (A) and the SD (B), LD (C), LSA (D), and LNP (E) scenarios.
Figure 4. Top: central axial slices of the FDK reconstructions and DeepBH results for the SD and LD scenarios for the two test studies. Bottom: zoomed-in images. Arrows in the zoomed-in images point to beam hardening (first column) and streaks (third column).
Figure 5. Mean and standard deviation of PSNR, SSIM, and CC values calculated for the SD and LD scenarios in each slice.
Figure 6. LNP scenario of 42 random projections with random seed = 42 (top) and random seed = 33 (bottom). Axial slices of DeepBH (A,E), prior images (B,F), SART-PICCS (C,G), and PICDL (D,H). Arrows indicate hallucinations.
Figure 7. Central axial slices of FDK reconstructions (A,F), DeepBH reconstructions (B,G), prior images (C,H), SART-PICCS reconstructions (D,I), and PICDL reconstructions (E,J) for the two test animals in LSA scenario of 120 and 130 projections, respectively. Arrows indicate the LSA artifacts.
Figure 8. Central axial slices of FDK reconstructions (A,F), DeepBH reconstructions (B,G), prior images (C,H), SART-PICCS reconstructions (D,I), and PICDL reconstructions (E,J) for the two test animals in LNP scenario of 49 and 42 projections, respectively. Arrows indicate hallucinations.
Table 1. Images used to create the model.
Scenario      SD      LD      LSA      LNP
Training      3361    3361    10,083   10,083
Validation    992     992     2976     2976
Test          992     992     2976     2976
Table 2. Regularization parameters used in the PICCS algorithm.
Scenario      Rodent 1              Rodent 2
              μ     λ      α        μ     λ      α
LNP           1.6   0.12   0.5      1.4   0.1    0.9
LSA           1.4   0.12   3        1.4   0.12   3
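For context, the parameters in Table 2 can be read against the generic PICCS objective of Chen et al. [30], solved with the Split Bregman method [32]. The following is a hedged sketch, not the exact cost function of this work: the reading of λ as the data-fidelity weight and μ as the Split Bregman penalty parameter is an assumption, and the L2-PICCS variant described in the abstract replaces an L1 norm with an L2 norm.

```latex
\min_{x}\;
\alpha \left\| \Psi \left( x - x_{\mathrm{prior}} \right) \right\|_{1}
+ \left( 1 - \alpha \right) \left\| \Psi x \right\|_{1}
+ \frac{\lambda}{2} \left\| A x - b \right\|_{2}^{2}
```

Here Ψ denotes a sparsifying transform (e.g., the spatial gradient), A the forward projection operator, b the measured log data, x_prior the DL-generated prior image, and α the balance between the prior-image term and the self-sparsity term.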
Table 3. PSNR, SSIM, and CC calculated for the different LNP random state results.
Random       PSNR (dB)               SSIM                    CC
Seed Value   Prior   SART   PICDL    Prior   SART   PICDL    Prior   SART   PICDL
42           25.82   23.99  32.07    0.50    0.61   0.83     0.93    0.91   0.96
33           25.47   24.41  31.85    0.52    0.63   0.84     0.93    0.91   0.96
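The two rows of Table 3 differ only in the random seed used to select the projection subset in the LNP scenario. A hypothetical numpy sketch of such a selection follows; the 360-view full acquisition and the use of numpy's `default_rng` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def pick_projections(n_total, n_keep, seed):
    """Randomly select a sorted, duplicate-free subset of projection indices.

    Hypothetical helper: n_total is the (assumed) number of views in the
    full scan, n_keep the number of projections retained, seed the random
    state that makes the subsampling reproducible.
    """
    rng = np.random.default_rng(seed)
    return np.sort(rng.choice(n_total, size=n_keep, replace=False))

# e.g., keep 42 of a hypothetical 360 views, as in the seed-42 row of Table 3
idx = pick_projections(360, 42, seed=42)
```

Fixing the seed makes the subset reproducible, which is what allows the same LNP experiment to be rerun with seed 42 versus seed 33.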
Table 4. PSNR, SSIM, and CC calculated for the different scenarios and test images.
Metric       Reconstruction   Rodent 1            Rodent 2
             Method           LSA      LNP        LSA      LNP
PSNR (dB)    FDK              19.81    15.81      15.09    15.52
             Prior            26.45    23.63      27.97    25.82
             SART             23.32    24.66      25.10    23.99
             PICDL            31.37    29.37      30.24    32.07
SSIM         FDK              0.680    0.357      0.512    0.327
             Prior            0.674    0.454      0.738    0.500
             SART             0.733    0.619      0.732    0.610
             PICDL            0.840    0.789      0.856    0.831
CC           FDK              0.793    0.782      0.824    0.722
             Prior            0.968    0.964      0.935    0.925
             SART             0.912    0.933      0.907    0.913
             PICDL            0.961    0.955      0.953    0.960
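The PSNR and CC figures of merit reported in Tables 3 and 4 follow standard definitions; a minimal numpy sketch is shown below. The data-range convention for PSNR is an assumption (the paper's exact implementation may differ), and SSIM, which requires local-window statistics, is typically computed with `skimage.metrics.structural_similarity` and is omitted here.

```python
import numpy as np

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    ref = np.asarray(ref, dtype=np.float64)
    img = np.asarray(img, dtype=np.float64)
    if data_range is None:
        # Assumed convention: dynamic range of the reference image.
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def cc(ref, img):
    """Pearson correlation coefficient between two flattened images."""
    return np.corrcoef(np.ravel(ref), np.ravel(img))[0, 1]
```

For example, a unit-range reference ramp offset by a constant 0.1 gives a PSNR of 20 dB while the CC remains 1.0, since a constant shift preserves correlation.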

