Article

Multigrid Methods for Computed Tomography

by Alessandro Buccini 1,†, Marco Donatelli 2,*,† and Marco Ratto 2,†
1 Department of Mathematics and Computer Science, University of Cagliari, Via Ospedale 72, 09124 Cagliari, Italy
2 Department of Science and High Technology, University of Insubria, Via Valleggio 11, 22100 Como, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2025, 17(3), 470; https://doi.org/10.3390/sym17030470
Submission received: 3 February 2025 / Revised: 6 March 2025 / Accepted: 15 March 2025 / Published: 20 March 2025
(This article belongs to the Section Mathematics)

Abstract

We consider the problem of computed tomography (CT). This ill-posed inverse problem arises when one wishes to investigate the internal structure of an object with a non-invasive and non-destructive technique. The problem is severely ill-conditioned: it admits infinitely many solutions and is extremely sensitive to perturbations in the collected data. This sensitivity produces the well-known semi-convergence phenomenon when iterative methods are used to solve it. In this work, we propose a multigrid approach to mitigate this instability and produce fast, accurate, and stable algorithms starting from unstable ones. We consider, in particular, symmetric Krylov methods, like lsqr, as smoothers, and a symmetric Galerkin projection of the coarse-grid operator. However, our approach can be extended to any iterative method. Several numerical examples show the performance of our proposal.

1. Introduction

Computed tomography (CT) is an ill-posed inverse problem that is extremely relevant in several areas of science, engineering, and cultural heritage preservation. CT is a non-invasive and non-destructive technique used to reconstruct the internal structure of an object. The final result of CT is a map of the absorption coefficients of the object, which can be obtained through the application of the Radon transform. Assume that an X-ray is irradiated at an angle θ through a two-dimensional object. If we represent the absorption coefficient of the scanned object by f : ℝ² → ℝ and assume that f is regular enough, then we can express the Radon transform as follows:
\[ Rf(\theta) = \int_L f(x)\,\lvert dx\rvert, \tag{1} \]
where L is the straight-line path along which the X-ray travels, \(\int_L\) denotes the line integral along L, summing up the contributions of f(x) along that line, and Rf(θ) is the total intensity loss of the X-ray beam after passing through the scanned object at the angle θ. In practice, a CT scanner captures values of Rf(θ) for some angles θ ∈ [0, π). The fundamental problem in CT imaging is to reconstruct an approximation of f(x), i.e., to recover the internal structure of the object from these measurements.
If we approximate f by a piece-wise constant function on a two-dimensional grid with n elements, we can approximate the integral in (1) by the following:
\[ Rf(\theta) \approx \sum_{i \in I} f_i L_i =: b(\theta), \]
where I ⊆ {1, …, n} denotes the set of indices of the elements of the grid that the X-ray passes through, f_i is the value of the absorption coefficient (which we assumed constant) in the i-th grid element, and L_i is the length of the path that the X-ray travels in the i-th grid element. Considering a finite set of angles (and, therefore, X-rays) θ_j, for j = 1, …, m, we obtain the following linear system of equations:
\[ b_j := b(\theta_j) = \sum_{i \in I_j} f_i L_i, \qquad j = 1, \ldots, m. \tag{2} \]
We can rewrite the system (2) compactly as follows:
\[ A x = b, \tag{3} \]
where A ∈ ℝ^{m×n} is usually a rectangular matrix, with m < n, containing the lengths L_i in (2), b ∈ ℝ^m collects the measurements, and x ∈ ℝ^n contains the unknown coefficients that we wish to recover. The measurements b, known as the sinogram, are usually corrupted by some errors, and the “exact” ones are usually not available. In practical applications, we only have access to b^δ such that we have the following:
\[ \lVert b^\delta - b \rVert \le \delta, \tag{4} \]
where ‖·‖ denotes the Euclidean norm. We will assume that a fairly accurate estimate of δ is available. The matrix A may be ill-conditioned, i.e., its singular values may decay rapidly to zero with no significant gap between consecutive ones. Moreover, the system may have many more unknowns than equations; this may be due to several reasons. For instance, it may be impossible to scan the object from certain angles, or one may not want to irradiate it with too much radiation. The latter is a common case in medical applications. Therefore, the CT inversion is a discrete inverse problem. We refer the interested reader to [1,2] for more details on CT and to [3,4] for more details on ill-posed inverse problems.
Since the linear system of equations
\[ A x = b^\delta \]
is a discrete ill-posed problem, one needs to regularize it. There are many possible approaches to regularization; in this work, we consider the so-called iterative regularization methods. Iterative methods, like Krylov methods, exhibit the semi-convergence phenomenon [5], i.e., in the first iterations, the iterates x^{(k)} approach the desired solution x† = A†b, where A† denotes the Moore–Penrose pseudo-inverse of A. After a certain, unknown, number of iterations have been performed, the noise present in b^δ is amplified, corrupting the computed solution, and the iterates eventually converge to A†b^δ. The latter is a very poor approximation of x† and, in most cases, is completely useless. Therefore, regularization is obtained by stopping the iterations early, before the noise is amplified. Determining an effective stopping criterion is still a matter of current research, and an imprudent choice may produce extremely poor reconstructions.
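To make semi-convergence concrete, the following minimal sketch (in Python/NumPy; the problem is a generic 1D Gaussian-blur toy chosen for illustration, not the CT setup of this paper) runs CGLS, which is mathematically equivalent to lsqr, and tracks the relative error: it typically decreases for a few iterations and then grows as the noise is amplified.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.linspace(0, 1, n)

# Severely ill-conditioned toy operator: discretized Gaussian blur.
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.02**2)) / n

x_exact = np.exp(-((t - 0.4) ** 2) / 0.01) + 0.5 * np.exp(-((t - 0.7) ** 2) / 0.003)
b = A @ x_exact
e = rng.standard_normal(n)
b_delta = b + 0.01 * np.linalg.norm(b) * e / np.linalg.norm(e)  # 1% noise

# CGLS applied to min ||A x - b_delta||; track the relative error per iteration.
x = np.zeros(n)
r = b_delta - A @ x
s = A.T @ r
p = s.copy()
gamma = s @ s
errors = []
for k in range(60):
    q = A @ p
    alpha = gamma / (q @ q)
    x += alpha * p
    r -= alpha * q
    s = A.T @ r
    gamma_new = s @ s
    p = s + (gamma_new / gamma) * p
    gamma = gamma_new
    errors.append(np.linalg.norm(x - x_exact) / np.linalg.norm(x_exact))

print(f"best RRE {min(errors):.3f} at iteration {np.argmin(errors) + 1}; "
      f"RRE {errors[-1]:.3f} after 60 iterations")
```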
In recent years, Krylov methods have been considered for solving CT imaging problems. In particular, a variation of the GMRES algorithm has recently been proposed [6]. This method considers an unmatched transpose operator, similar to what was done in [7] for image deblurring. Algebraic iterative reconstruction methods, often applied in CT, are possible alternatives to Krylov methods; see [8] and the references therein. Several sparsity-promoting algorithms have been proposed in the literature, often in combination with data-driven deep learning methods; see, e.g., [9,10,11]. Concerning multigrid methods, to the best of our knowledge, there are very few proposals, and they employ the multigrid strategy only to accelerate convergence, either solving a Tikhonov regularization model [12] or resorting to a domain decomposition strategy [13,14]. To the best of our knowledge, this is the first time an algebraic multigrid algorithm has been developed to solve the CT problem directly.
In this paper, we propose an iterative regularization multigrid method that stabilizes the convergence of the iterative lsqr method. The latter algorithm is an iterative Krylov method that solves the least-squares problem
\[ \arg\min_{x} \lVert A x - b^\delta \rVert^2. \tag{5} \]
As we mentioned above, the solution to the minimization problem (5) is of no interest, and regularization is achieved by stopping the iterations early. Depending on the problem, the convergence of the lsqr method may be so fast that accurately selecting a stopping iteration is challenging. Combining this Krylov method with the multigrid approach, with properly selected prolongation and restriction operators, allows us to construct a more stable and accurate algorithm. This is mainly achieved by exploiting the symmetry of the smoother (lsqr, which solves the symmetric normal equations associated with (5)) and the symmetric Galerkin projection of the coarse operator. Moreover, in our multigrid method, we can simply add the projection onto the nonnegative cone to further improve the quality of the reconstruction.
Note that our multigrid method does not resort to Tikhonov regularization as in [12,15], and its aim is not to accelerate the convergence as in [13,14], but is inspired by iterative regularization multigrid methods for image deblurring problems as discussed in [16,17]. Therefore, we obtain a reliable iterative regularization method, robust with respect to the stopping iteration, but with a computational cost usually larger than that of the lsqr method used as the smoother.
This paper is structured as follows. In Section 2, we briefly describe the multigrid method, and in Section 3, we detail our algorithmic proposal. Section 4 presents some numerical results to show the performance of the proposed method, and we draw our conclusions in Section 5.

2. Multigrid Method

We now briefly describe how the multigrid method (MGM) works for solving invertible, usually positive definite, linear systems [18]. The main idea of the classical MGM is to split ℝ^n into two subspaces: one where the operator A is well-conditioned, and one where it is ill-conditioned.
It is well known that iterative methods first reduce the error in the well-conditioned subspace and only in later iterations solve the problem in the ill-conditioned one. The cost per iteration is usually of the order of the cost of a matrix-vector product with A. Even if the matrix A is only moderately ill-conditioned, the decrease of the error in the ill-conditioned subspace may be extremely slow and, overall, a large number of iterations is required to achieve numerical convergence, making the iterative method computationally unattractive.
On the other hand, direct methods require a fixed number of operations, regardless of the conditioning of A. However, this cost is usually significantly higher than that of a single iteration of an iterative method; e.g., if m ≈ n, the cost is usually O(n³). Moreover, direct methods usually factor the matrix into the product of two or more matrices that are “easier” to handle, and these factors may not inherit the properties of the original matrix. This is particularly relevant when A is sparse: in this case, even if m, n ≫ 1, it is still possible to store A; however, if the factors are full, they may require too much memory to be stored and handled.
MGM couples the two approaches exploiting the strengths of both iterative and direct methods and overcoming their shortcomings. We define the operators:
\[ R_i \in \mathbb{R}^{m_{i+1} \times m_i} \quad \text{and} \quad P_i \in \mathbb{R}^{n_i \times n_{i+1}}, \qquad \text{for } i = 0, \ldots, L-1, \]
where m_0 = m, n_0 = n, m_{i+1} < m_i, and n_{i+1} < n_i. The sequence stops when the minimum between m_L and n_L is small enough. The operator R_i projects a vector v ∈ ℝ^{m_i} into a subspace of smaller dimension and, when m = n, we have P_i = R_i^T, where the superscript T denotes transposition. If m ≠ n, P_i cannot be the transpose of R_i, due to the difference in dimensions, but it still represents the adjoint of the operator discretized by R_i. According to the Galerkin approach, we define the following:
\[ A_0 = A, \qquad A_{i+1} = R_i A_i P_i, \quad i = 0, \ldots, L-1. \]
The matrices A_i ∈ ℝ^{m_i × n_i} are projections of the original operator A into smaller subspaces. The choice of such subspaces and, hence, of the projectors R_i and P_i is crucial for the effectiveness of the MGM. In particular, they have to target the ill-conditioned subspaces of the matrices A_i in order to obtain fast convergence. We will discuss later how to choose them to enforce regularization for ill-posed problems.
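In code, building the Galerkin hierarchy is a one-loop operation; the sketch below assumes the grid transfer operators are given as lists R and P of (sparse or dense) matrices.

```python
def galerkin_hierarchy(A0, R, P):
    """Return [A_0, ..., A_L] with A_{i+1} = R_i A_i P_i (Galerkin projection)."""
    A = [A0]
    for Ri, Pi in zip(R, P):
        A.append(Ri @ A[-1] @ Pi)
    return A
```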
The MGM is an iterative method. Since the algorithm is quite involved, we first describe the two-grid method (TGM).
The TGM, like the MGM, exploits the so-called “error equation”. Let x ( k ) be an approximation of the exact solution x . We can write the error e ( k ) as follows:
\[ e^{(k)} = x - x^{(k)}. \]
It is trivial to see that, for the linear system (3), we have the following:
\[ A e^{(k)} = b - A x^{(k)} =: r^{(k)} \qquad \text{and} \qquad x = x^{(k)} + e^{(k)}. \]
Therefore, an improved approximation of x, denoted by x^{(k+1)}, can be obtained by approximately solving the error equation A e^{(k)} = r^{(k)}, obtaining h^{(k)}, and setting the following:
\[ x^{(k+1)} = x^{(k)} + h^{(k)}. \]
To compute h^{(k)}, one may simply apply a few steps of an iterative method; however, as discussed above, this would reduce the error only in the well-conditioned subspace. Reducing the error in the ill-conditioned subspace may require too much computational work with a simple iterative method. Therefore, in order to obtain a fast solver, we wish to exploit direct methods to tackle the ill-conditioned subspace. The main drawback of direct methods is their high computational cost. Assuming that R_0 projects into a subspace of small dimension, so that A_1 = R_0 A P_0 can be factorized cheaply, a system of the form A_1 x = y can be solved efficiently by a direct method. A single iteration of the TGM goes as follows:
\[ \begin{aligned} r^{(k)} &= b - A x^{(k)},\\ \tilde{r}^{(k)} &= R_0\, r^{(k)},\\ \tilde{e}^{(k)} &= A_1^{-1}\, \tilde{r}^{(k)},\\ x^{(k+1/2)} &= x^{(k)} + P_0\, \tilde{e}^{(k)},\\ x^{(k+1)} &= \operatorname{Post\text{-}smoother}\!\left(A, b, x^{(k+1/2)}\right), \end{aligned} \]
where by Post-smoother(A, b, x^{(k+1/2)}) we denote the application of a few steps of an iterative method, like a Krylov method, to the original system (3), with starting guess x^{(k+1/2)}.
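A minimal sketch of one TGM iteration, assuming dense NumPy arrays, a generic post_smoother callable, and a precomputed pseudo-inverse for the coarse solve (in practice one would factorize A_1 once instead):

```python
import numpy as np

def tgm_step(A, b, x, R0, P0, A1_pinv, post_smoother):
    """One two-grid iteration: coarse-grid correction, then post-smoothing."""
    r = b - A @ x                  # fine-grid residual r^(k)
    r_coarse = R0 @ r              # restricted residual
    e_coarse = A1_pinv @ r_coarse  # coarse error equation solved "directly"
    x_half = x + P0 @ e_coarse     # corrected iterate x^(k+1/2)
    return post_smoother(A, b, x_half)

# Coarse solve, precomputed once; pinv also covers a rectangular A_1:
# A1_pinv = np.linalg.pinv(R0 @ A @ P0)
```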
If n is large, which is usually the case, then A_1 may still be too large to invert; therefore, to solve the coarse system, one can apply the TGM again, further projecting the problem. Doing this recursively gives rise to the MGM algorithm: one projects the problem L times, until the size of A_L is small enough to solve the coarsest system directly. We refer to each projection as a level. We summarize a single iteration of the MGM in Algorithm 1.
Note that, in Algorithm 1, we do not explicitly specify the stopping criterion, so that we can later tailor the algorithm to our application of interest, i.e., CT. Also, for i ≠ 0, the initial guess is the zero vector. This is because, on the lower levels, we solve the error equation, whose solution is expected to have vanishing components.
Note that, in CT problems, the ill-conditioned subspace resides in the high frequencies. Therefore, if we project onto such a subspace to speed up convergence, we risk amplifying the noise, which can destroy the quality of the restored image. For this application, stabilizing, rather than accelerating, the convergence is more important. Therefore, in the next section, we choose the grid transfer operators so that the problem is projected onto the lower frequencies, as done in [16,17] for image deblurring problems. The main difference between image deblurring and CT is that, in image deblurring, the matrix A is square and structured (e.g., block Toeplitz with Toeplitz blocks (BTTB) or block circulant with circulant blocks (BCCB)), while in CT we deal with rectangular sparse matrices, so the methods used for image deblurring cannot be applied directly.
Algorithm 1: MGM method
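Since Algorithm 1 is only available as an image in the source, the recursive sketch below is a plausible reconstruction based on the description above (zero initial guess on the coarse levels, direct solve at the coarsest level L, post-smoothing on the way back up); it should be read as an assumption, not as the authors' exact pseudocode.

```python
import numpy as np

def mgm_cycle(i, b_i, x_i, A, R, P, smoother, L):
    """One MGM cycle at level i; A, R, P are lists of the level matrices."""
    if i == L:
        # Coarsest level: solve directly (pinv since A_L may be
        # rectangular; in practice, precompute this factorization once).
        return np.linalg.pinv(A[L]) @ b_i
    r = b_i - A[i] @ x_i                                   # residual at level i
    e = mgm_cycle(i + 1, R[i] @ r, np.zeros(A[i + 1].shape[1]),
                  A, R, P, smoother, L)                    # zero initial guess
    x_i = x_i + P[i] @ e                                   # coarse-grid correction
    return smoother(A[i], b_i, x_i)                        # post-smoothing
```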

3. Our Proposal

We are now in a position to detail our algorithmic proposal.
The first element we wish to describe is the restriction operator R_i. The prolongation is chosen as P_i = R_i^T. We will assume that the vector x is the vectorization of a two-dimensional array X, i.e.,
x = vec ( X ) ,
where the operator “vec” orders the entries of X in lexicographical order. We denote the inverse operation by “vec⁻¹”. For simplicity of notation, we will assume that n = s², i.e., that X is a square array of size s × s. The restriction operator R_i combines two operators. The first, defined by a stencil M ∈ ℝ^{t×t} with t < s, selects the subspace onto which the problem is projected; see [19] for further details. The second is a downsampling operator, which determines the size of the coarser problem. In detail, if s is odd, it is defined as follows:
\[ K_0 = \begin{pmatrix} 0 & 1 & 0 & 0 & \cdots & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & \cdots & 0 & 0 & 0\\ \vdots & & & & \ddots & & & \vdots\\ 0 & 0 & 0 & 0 & \cdots & 0 & 1 & 0 \end{pmatrix} \in \mathbb{R}^{s_1 \times s}, \quad \text{where } s_1 = \frac{s-1}{2}, \tag{6} \]
while, if s is even, it is defined as follows:
\[ K_0 = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots & 0 & 0\\ 0 & 0 & 1 & 0 & \cdots & 0 & 0\\ \vdots & & & & \ddots & & \vdots\\ 0 & 0 & 0 & 0 & \cdots & 1 & 0 \end{pmatrix} \in \mathbb{R}^{s_1 \times s}, \quad \text{where } s_1 = \frac{s}{2}. \tag{7} \]
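A small helper that builds K_0 from (6) and (7) (a sketch; column indices are 0-based, as in NumPy):

```python
import numpy as np

def downsampling(s):
    """Downsampling matrix of (6) (s odd) or (7) (s even)."""
    if s % 2:                       # s odd: ones in columns 2, 4, ..., s-1 (1-based)
        s1 = (s - 1) // 2
        cols = 2 * np.arange(s1) + 1
    else:                           # s even: ones in columns 1, 3, ..., s-1 (1-based)
        s1 = s // 2
        cols = 2 * np.arange(s1)
    K = np.zeros((s1, s))
    K[np.arange(s1), cols] = 1.0
    return K
```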
Let ∗ denote the convolution operator and ⊗ denote the Kronecker product; then, we define the following:
\[ R_0 x = (K_0 \otimes K_0)\, \mathrm{vec}\!\left( M \ast \mathrm{vec}^{-1}(x) \right) \in \mathbb{R}^{s_1^2}. \]
We can iteratively define R_{i+1} by applying the above construction at the lower levels. Namely, if R_i ∈ ℝ^{s_{i+1}² × s_i²}, then we define K_{i+1} ∈ ℝ^{s_{i+2} × s_{i+1}} as in (6) if s_{i+1} is odd or as in (7) if s_{i+1} is even. Assuming t ≤ s_{i+1}, we define the following:
\[ R_{i+1} x = (K_{i+1} \otimes K_{i+1})\, \mathrm{vec}\!\left( M \ast \mathrm{vec}^{-1}(x) \right) \in \mathbb{R}^{s_{i+2}^2}, \qquad \text{for } i = 0, \ldots, L-2. \]
We consider four possible choices of M, as follows:
\[ M_1 = \frac{1}{4}\begin{pmatrix} 0 & 0 & 0\\ 0 & 1 & 1\\ 0 & 1 & 1 \end{pmatrix}, \quad M_2 = \frac{1}{16}\begin{pmatrix} 1 & 2 & 1\\ 2 & 4 & 2\\ 1 & 2 & 1 \end{pmatrix}, \quad M_3 = \frac{1}{64}\begin{pmatrix} 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 3 & 3 & 1\\ 0 & 3 & 9 & 9 & 3\\ 0 & 3 & 9 & 9 & 3\\ 0 & 1 & 3 & 3 & 1 \end{pmatrix}, \quad M_4 = \frac{1}{256}\begin{pmatrix} 1 & 4 & 6 & 4 & 1\\ 4 & 16 & 24 & 16 & 4\\ 6 & 24 & 36 & 24 & 6\\ 4 & 16 & 24 & 16 & 4\\ 1 & 4 & 6 & 4 & 1 \end{pmatrix}. \tag{8} \]
These correspond to different B-spline approximations, where M_i has order i [20]. The first one, M_1, sums up four adjacent grid elements, while the others correspond to different types of weighted averages. We wish to discuss M_1 in more detail. Using this restrictor corresponds to summing up the lengths of the ray at a certain angle θ over each of the four grid elements that are “fused” when moving from level i to level i + 1; see Figure 1. Intuitively, this restrictor therefore corresponds to re-discretizing the problem on a coarser grid when we move from level i to level i + 1 and, in some sense, constructs a so-called “geometric” multigrid.
Theoretically, the operators differ in their order, which changes the size of the low-frequency subspace onto which the problem is projected [21]. Furthermore, the operators of even order are symmetric, while those of odd order are not.
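Applying R_i never requires forming the Kronecker product explicitly, since (K ⊗ K)vec(Y) = vec(K Y Kᵀ) for the lexicographic vec. A sketch follows (zero-padded boundaries in the convolution are an assumption):

```python
import numpy as np
from scipy.signal import convolve2d

def restrict(x, K, M):
    """Apply R x = (K ⊗ K) vec(M * vec^{-1}(x)) without forming K ⊗ K."""
    s = int(round(np.sqrt(x.size)))
    X = x.reshape(s, s)                   # vec^{-1}(x)
    Y = convolve2d(X, M, mode="same")     # stencil convolution M * X
    return (K @ Y @ K.T).ravel()          # (K ⊗ K) vec(Y) = vec(K Y K^T)

# Example with the order-one stencil:
# M1 = np.array([[0, 0, 0], [0, 1, 1], [0, 1, 1]]) / 4.0
```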
As the post-smoother, we use the lsqr algorithm; see [22]. The lsqr method is crucial to our proposal since it can handle rectangular matrices, which is usually the case in CT, and provides a more stable implementation of the mathematically equivalent CGLS [23]. Given an initial guess x̂ and the initial residual r̂ = b^δ − A x̂, this Krylov method, at iteration k, computes the following:
\[ x^{(k)} = \hat{x} + \mathop{\arg\min}_{x \in \mathcal{K}_k(A^T A,\, A^T \hat{r})} \lVert A x - \hat{r} \rVert^2, \]
where
\[ \mathcal{K}_k(A^T A, A^T \hat{r}) = \operatorname{span}\left\{ A^T \hat{r},\, (A^T A) A^T \hat{r},\, (A^T A)^2 A^T \hat{r},\, \ldots,\, (A^T A)^{k-1} A^T \hat{r} \right\}. \]
The lsqr algorithm uses the Golub–Kahan bidiagonalization algorithm to construct an orthonormal basis of 𝒦_k(A^T A, A^T b^δ). We will assume that we have access to the matrix A^T or to an accurate approximation of it; see [2,6] for a discussion of non-matching transposes in CT. In the latter case, one may need to apply reorthogonalization in the Golub–Kahan algorithm.
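A few lsqr steps warm-started at the current iterate can be obtained from SciPy by shifting to the residual system, which matches the formula above (a sketch; scipy.sparse.linalg.lsqr itself starts from zero):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def lsqr_smoother(A, b, x0, steps=1):
    """Post-smoother: `steps` lsqr iterations starting from x0."""
    r0 = b - A @ x0
    # atol = btol = 0 so that exactly `steps` iterations are taken.
    dx = lsqr(A, r0, atol=0.0, btol=0.0, iter_lim=steps)[0]
    return x0 + dx
```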
Absorption coefficients are always nonnegative; therefore, to ensure that the computed solution satisfies this property, at each iteration we project the computed solution onto the nonnegative cone. This is achieved simply by setting all negative entries to zero. Projection onto the nonnegative cone is often employed in the solution of ill-posed inverse problems, since projection onto a convex set is a form of regularization; see, e.g., [24,25,26]. This nonnegative projection is performed after the post-smoother and only at the finest level, i = 0, because the coarser levels solve the error equations and therefore carry no sign constraints.
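In code, the projection is a single clamping operation:

```python
x = np.maximum(x, 0.0)  # projection onto the nonnegative cone
```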
Finally, we describe the stopping criterion employed in our algorithm. If we let k → ∞, then x^{(k)} would converge to a solution of the noisy problem (5). As mentioned above, this is not ideal, as this solution is usually meaningless. To achieve regularization, we stop the iterations early. We employ the discrepancy principle (DP); see [3]. The DP prescribes that the iterations are stopped as soon as the following condition is met:
\[ \lVert A x^{(k)} - b^\delta \rVert \le \tau \delta, \]
where δ is an upper bound for the norm of the noise (see (4)) and τ > 1 is a user-defined constant.
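A generic stopping loop implementing the DP might look as follows (a sketch; `step` stands for one outer iteration of any of the methods discussed here):

```python
import numpy as np

def iterate_with_dp(step, A, b_delta, x0, delta, tau=1.01, max_iter=100):
    """Run `step` until ||A x - b_delta|| <= tau * delta or max_iter is hit."""
    x = x0
    for k in range(1, max_iter + 1):
        x = step(x)
        if np.linalg.norm(A @ x - b_delta) <= tau * delta:
            break
    return x, k
```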
The theory of the discrepancy principle requires τ to be greater than one [27]; however, the larger we take it, the earlier it stops the method, so it is common practice to choose a value very close to 1 [28]. To summarize, the novelty of our method lies in the choice of the grid transfer operators, as well as in the projection onto the nonnegative cone, which is specific to the CT problem. We summarize the computations in Algorithm 2. Note that, since the matrix A_L may be rectangular and rank-deficient, we use A_L† rather than A_L⁻¹.
Algorithm 2: MGM method for CT
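As for Algorithm 1, the pseudocode of Algorithm 2 is only available as an image in the source; the sketch below is therefore a hypothetical reconstruction that combines the earlier sketches (MGM cycles with an lsqr post-smoother, nonnegative projection at the finest level only, and DP stopping):

```python
import numpy as np

def mgm_ct(A, R, P, b_delta, delta, L, tau=1.01, max_iter=100, steps=1):
    """MGM for CT (sketch): V-cycles + lsqr smoothing + DP stopping."""
    smoother = lambda Ai, bi, xi: lsqr_smoother(Ai, bi, xi, steps)
    x = np.zeros(A[0].shape[1])
    for _ in range(max_iter):
        x = mgm_cycle(0, b_delta, x, A, R, P, smoother, L)
        x = np.maximum(x, 0.0)                 # finest-level projection
        if np.linalg.norm(A[0] @ x - b_delta) <= tau * delta:
            break                              # discrepancy principle
    return x
```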

4. Numerical Examples

To show the potential and performance of our algorithmic proposal, we consider three examples with synthetic data.
Our aim is to show that coupling the MGM with a Krylov method, like lsqr, produces a more stable and more accurate algorithm. Therefore, we compare our approach with lsqr and with the simultaneous iterative reconstruction technique (SIRT), which is a regularized form of a weighted least-squares method often used in computed tomography problems [29,30].
We compare the methods in terms of the number of iterations and accuracy. We measure the latter using the relative restoration error (RRE), defined as follows:
\[ \operatorname{RRE}(x) = \frac{\lVert x - x^\dagger \rVert}{\lVert x^\dagger \rVert}, \]
where x† denotes the exact solution.
As mentioned above, we stop our method, as well as lsqr and SIRT, with the DP, where we set τ = 1.01. Moreover, we set the number of lsqr steps performed at each iteration of the MGM to ℓ = 1, and we set the maximum number of outer iterations to K = 100.
The matrix A was explicitly formed using the MATLAB IR Tools toolbox [28]. All computations were performed in MATLAB R2021b running on Windows 11 on a laptop with an 11th Gen Intel Core i7 processor and 16 GB of RAM, in standard double-precision arithmetic (about 15 significant digits).
  • Shepp–Logan (full angles).
In our first example, we consider the Shepp–Logan phantom discretized on a 256 × 256 pixel grid; see Figure 2a. We irradiate the phantom with 362 parallel rays at 180 equispaced angles between 0 and π. Therefore, we obtain A ∈ ℝ^{65160 × 65536}. This is an ideal case, where we assume that we can irradiate the object with as many rays and from as many angles as we wish. We obtain the noise-free sinogram in Figure 2b. We consider different levels of noise to show the behavior of our method in different scenarios. The noise level is ν if we have the following:
\[ \nu = \frac{\lVert b - b^\delta \rVert}{\lVert b \rVert}. \]
We set ν ∈ {5%, 10%, 15%, 20%} and always consider white Gaussian noise. We report the noise-corrupted sinogram with ν = 10% in Figure 2c. Note that δ = ν‖b‖.
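For reproducibility, noise at a prescribed relative level ν can be generated as follows (a sketch; the fixed seed is an assumption):

```python
import numpy as np

def add_noise(b, nu, seed=0):
    """Return (b_delta, delta) with ||b - b_delta|| = nu * ||b||."""
    e = np.random.default_rng(seed).standard_normal(b.shape)
    e *= nu * np.linalg.norm(b) / np.linalg.norm(e)
    return b + e, nu * np.linalg.norm(b)
```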
As mentioned above, we compare our algorithm with lsqr and SIRT. Since we expect the solution to be nonnegative, in our method we project the computed solution onto the nonnegative cone at each iteration. To improve its accuracy further, we perform the same projection in the lsqr method, ensuring that all the computed solutions are physically meaningful. In all the tests, the SIRT method performed worse than the others, computing less accurate reconstructions and requiring a large number of iterations to converge. In Figure 3, we report the evolution of the RRE against the iterations for all considered methods and noise levels. We can observe that semi-convergence is more evident for the lsqr method than for our algorithmic proposal. Regardless of the M_j considered, our method is more stable than lsqr, as the RRE increases much more slowly. We can also observe that the convergence is most stable for M = M_1; this is more evident for higher noise levels. It is clear that, for the lsqr method, overestimating the stopping iteration may make the RRE extremely large. This is not the case for our MGM. In Table 1, we report the RRE obtained at the “optimal” iteration, i.e., the one that minimizes the RRE, and at the DP iteration. In all cases, our method is more accurate than the lsqr algorithm, albeit at a generally higher computational cost. However, we stress that the cost per iteration of both lsqr and the MGM is of the same order of magnitude as one matrix-vector product with A and one with A^T, as analyzed in [16]. Since A is extremely sparse, this cost is O(n); therefore, performing a few iterations is computationally extremely cheap. For the noise level ν = 10%, we report the solutions computed by lsqr and by the MGM with M = M_1 in Figure 4.
We conclude this example by showing the behavior of our method when we set ℓ = 2. We fix M = M_1 and ν = 10%, and we compare the RRE evolution against the iterations for lsqr and for the MGM with either ℓ = 1 or ℓ = 2, i.e., when we perform two steps of the post-smoother. We show this in Figure 5. We can observe that, for ℓ = 2, the number of iterations required to reach convergence decreases significantly, even though the cost per iteration doubles. The resulting method is, however, less stable than the one with ℓ = 1, and the obtained RRE is slightly higher. Therefore, in the following, we set ℓ = 1 for all examples.
  • Shepp–Logan (limited angles).
Depending on the situation, it may happen that one cannot irradiate an object from as many angles as required. It is, therefore, important to verify how an algorithm performs when fewer data are collected. To this aim, we now consider the same phantom as above, but only half of the angles, i.e., we select 90 equispaced angles between 0 and π. This produces a matrix A ∈ ℝ^{32580 × 65536}. Note that the number of columns of A is roughly double the number of rows.
As above, we consider four noise levels, namely ν ∈ {5%, 10%, 15%, 20%}, and report the evolution of the RRE in Figure 6. We can observe that the results are similar to those of the previous case, i.e., our method is more accurate and more stable, albeit requiring more iterations to converge. Moreover, as in the previous case, the highest stability is obtained for M = M_1. This is confirmed by the results in Table 1. For the noise level ν = 10%, we report the solutions computed by lsqr and by the MGM with M = M_1 in Figure 7. Note that all the operators perform well, improving the stability of lsqr while keeping the same computational cost per iteration. However, M_1 exhibits the mildest semi-convergence, especially for higher noise levels and when the problem is strongly underdetermined due to the limited angles used.
  • MRI image.
Finally, we consider a more realistic image: a slice of an anisotropic 3D MRI volume simulating the scan of a human brain, taken from the MRI dataset included in MATLAB. In this case, the image is composed of 128 × 128 pixels; see Figure 8. We irradiate the image with 181 parallel rays at 90 equispaced angles between 0 and π. This produces a matrix A ∈ ℝ^{16290 × 16384}. We corrupt the data with 5% white Gaussian noise, resulting in the sinogram shown in Figure 8b.
We report the evolution of the RRE against the iterations in Figure 9. We can observe that the results are very close to those obtained for the Shepp–Logan phantom. In particular, M_1 gives the most stable convergence, while higher-order projectors provide slightly better reconstructions. This can also be seen from the results reported in Table 2 and Figure 10; the latter shows the solutions computed by the MGM using M = M_1 and M = M_2, each stopped at the discrepancy principle iteration.

5. Conclusions

In this paper, we proposed a multigrid method for solving the CT problem. The proposed method is obtained using the lsqr algorithm as the post-smoother. We showed that combining a Krylov method with the multigrid approach can improve the accuracy of the algorithm and greatly improve its stability. This latter property is of extreme interest, as it mitigates the semi-convergence phenomenon and simplifies the task of selecting an appropriate stopping iteration, allowing for greater confidence in consistently obtaining good reconstructions, even when the problem is highly underdetermined and the data are heavily corrupted by noise. Several computed examples showed that our method is stable and produces satisfactory results. Future work will include extending this approach to three-dimensional CT applications with the aid of multilinear algebra techniques, applying it to real datasets, and combining it with variational methods.

Author Contributions

Conceptualization, A.B. and M.D.; implementation, A.B. and M.R.; writing—original draft preparation, A.B., M.D. and M.R.; writing—review and editing, A.B., M.D. and M.R.; validation, A.B., M.D. and M.R. These authors made equal contributions to this work. All authors have reviewed and approved the final version of the manuscript.

Funding

The authors are members of the GNCS group of INdAM and are partially supported by the INdAM-GNCS 2024 Project “Tecniche numeriche per problemi di grandi dimensioni” (CUP E53C24001950001). A.B. is partially supported by the PRIN 2022 PNRR project no. P2022PMEN2, financed by the European Union—NextGenerationEU and by the Italian Ministry of University and Research (MUR). A.B. and M.D. are partially supported by the PRIN 2022 project “Inverse Problems in the Imaging Sciences (IPIS)” (2022ANC8HL), financed by the European Union—NextGenerationEU and by the Italian Ministry of University and Research (MUR). A.B.’s work is partially funded by Fondazione di Sardegna, Progetto biennale bando 2021, and “Computational Methods and Networks in Civil Engineering (COMANCHE)”.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Buzug, T.M. Computed tomography. In Springer Handbook of Medical Technology; Springer: Berlin/Heidelberg, Germany, 2011; pp. 311–342.
  2. Hansen, P.C.; Jørgensen, J.; Lionheart, W.R.B. Computed Tomography: Algorithms, Insight, and Just Enough Theory; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2021.
  3. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1996; Volume 375.
  4. Hansen, P.C. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion; SIAM: Philadelphia, PA, USA, 1998.
  5. Hanke, M. Conjugate Gradient Type Methods for Ill-Posed Problems; Chapman and Hall/CRC: Boca Raton, FL, USA, 2017.
  6. Hansen, P.C.; Hayami, K.; Morikuni, K. GMRES methods for tomographic reconstruction with an unmatched back projector. J. Comput. Appl. Math. 2022, 413, 114352.
  7. Donatelli, M.; Martin, D.; Reichel, L. Arnoldi methods for image deblurring with anti-reflective boundary conditions. Appl. Math. Comput. 2015, 253, 135–150.
  8. Dong, Y.; Hansen, P.C.; Hochstenbach, M.E.; Brogaard Riis, N.A. Fixing nonconvergence of algebraic iterative reconstruction with an unmatched backprojector. SIAM J. Sci. Comput. 2019, 41, A1822–A1839.
  9. Rantala, M.; Vänskä, S.; Järvenpää, S.; Kalke, M.; Lassas, M.; Moberg, J.; Siltanen, S. Wavelet-based reconstruction for limited-angle X-ray tomography. IEEE Trans. Med. Imaging 2006, 25, 210–217.
  10. Bubba, T.A.; Kutyniok, G.; Lassas, M.; März, M.; Samek, W.; Siltanen, S.; Srinivasan, V. Learning the invisible: A hybrid deep learning-shearlet framework for limited angle computed tomography. Inverse Probl. 2019, 35, 064002.
  11. Xiang, J.; Dong, Y.; Yang, Y. FISTA-Net: Learning a Fast Iterative Shrinkage Thresholding Network for Inverse Problems in Imaging. IEEE Trans. Med. Imaging 2021, 40, 1329–1339.
  12. Bolten, M.; MacLachlan, S.P.; Kilmer, M.E. Multigrid preconditioning for regularized least-squares problems. arXiv 2023, arXiv:2306.11067.
  13. Marlevi, D.; Kohr, H.; Buurlage, J.W.; Gao, B.; Batenburg, K.J.; Colarieti-Tosti, M. Multigrid reconstruction in tomographic imaging. IEEE Trans. Radiat. Plasma Med. Sci. 2019, 4, 300–310.
  14. Zhang, Z.; Chen, K.; Tang, K.; Duan, Y. Fast multi-grid methods for minimizing curvature energies. IEEE Trans. Image Process. 2023, 32, 1716–1731.
  15. Donatelli, M. A multigrid for image deblurring with Tikhonov regularization. Numer. Linear Algebra Appl. 2005, 12, 715–729.
  16. Donatelli, M.; Serra-Capizzano, S. On the regularizing power of multigrid-type algorithms. SIAM J. Sci. Comput. 2006, 27, 2053–2076.
  17. Buccini, A.; Donatelli, M. A multigrid frame based method for image deblurring. Electron. Trans. Numer. Anal. 2020, 53, 283–312.
  18. Briggs, W.L.; Henson, V.E.; McCormick, S.F. A Multigrid Tutorial; SIAM: Philadelphia, PA, USA, 2000.
  19. Bolten, M.; Donatelli, M.; Huckle, T. Analysis of smoothed aggregation multigrid methods based on Toeplitz matrices. Electron. Trans. Numer. Anal. 2015, 44, 25–52.
  20. Donatelli, M. An algebraic generalization of local Fourier analysis for grid transfer operators in multigrid based on Toeplitz matrices. Numer. Linear Algebra Appl. 2010, 17, 179–197.
  21. Donatelli, M.; Serra-Capizzano, S. Filter factor analysis of an iterative multilevel regularizing method. Electron. Trans. Numer. Anal. 2007, 29, 163–177.
  22. Golub, G.H.; Van Loan, C.F. Matrix Computations; JHU Press: Baltimore, MD, USA, 2013.
  23. Björck, Å.; Elfving, T.; Strakoš, Z. Stability of Conjugate Gradient and Lanczos Methods for Linear Least Squares Problems. SIAM J. Matrix Anal. Appl. 1998, 19, 720–736.
  24. Bai, Z.Z.; Buccini, A.; Hayami, K.; Reichel, L.; Yin, J.F.; Zheng, N. Modulus-based iterative methods for constrained Tikhonov regularization. J. Comput. Appl. Math. 2017, 319, 1–13.
  25. Gazzola, S. Flexible CGLS for box-constrained linear least squares problems. In Proceedings of the 21st International Conference on Computational Science and Its Applications (ICCSA), Cagliari, Italy, 13–16 September 2021; pp. 133–138.
  26. Nagy, J.G.; Strakoš, Z. Enforcing nonnegativity in image reconstruction algorithms. In Proceedings of Mathematical Modeling, Estimation, and Imaging, San Diego, CA, USA, 31 July–1 August 2000; Volume 4121, pp. 182–190.
  27. Groetsch, C. The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind; Chapman & Hall/CRC Research Notes in Mathematics Series; Pitman Advanced Pub. Program: London, UK, 1984.
  28. Gazzola, S.; Hansen, P.C.; Nagy, J.G. IR Tools: A MATLAB package of iterative regularization methods and large-scale test problems. Numer. Algorithms 2019, 81, 773–811.
  29. Elfving, T.; Hansen, P.C.; Nikazad, T. Semiconvergence and Relaxation Parameters for Projected SIRT Algorithms. SIAM J. Sci. Comput. 2012, 34, A2000–A2017.
  30. Hansen, P.C.; Saxild-Hansen, M. AIR Tools—A MATLAB package of algebraic iterative reconstruction methods. J. Comput. Appl. Math. 2012, 236, 2167–2178.
Figure 1. An X-ray going through an object discretized at two different levels of the MGM. In black is the grid at level i and in red is the one at level i + 1 . The blue dashed line represents an X-ray going through the object. Each square represents a small portion of the object where we assume the structure to be constant.
Figure 2. Shepp–Logan phantom test problem: (a) True image ( 256 × 256 pixels), (b) True sinogram ( 362 × 180 pixels), (c) Noisy sinogram (10% Gaussian noise).
Figure 3. Shepp–Logan phantom (full angles). Evolution of the RRE against the iteration for each considered noise level. The ∗ marks the discrepancy principle iteration, while ∘ denotes the optimal one. (a) ν = 5 % , (b) ν = 10 % , (c) ν = 15 % , (d) ν = 20 % .
Figure 4. Shepp–Logan (full angles) with 10% noise. On top, the reconstructions obtained at the DP iteration: (a) lsqr, (b) MGM, M_1. At the bottom, the optimal reconstructions: (c) lsqr, (d) MGM, M_1.
Figure 5. Evolution of the RRE against the iteration for the multigrid with grid transfer operators of order one. Comparison between 1 and 2 steps of lsqr as the post-smoother. The ∗ marks the discrepancy principle iteration, while ∘ denotes the optimal one.
Figure 6. Shepp–Logan phantom (limited angles). Evolution of the RRE against the iteration for each considered noise level ν . The * marks the discrepancy principle iteration, while ∘ denotes the optimal one. (a) ν = 5 % , (b) ν = 10 % , (c) ν = 15 % , (d) ν = 20 % .
Figure 7. Shepp–Logan (limited angles) with 10% noise. On top, the reconstructions obtained at the DP iteration: (a) lsqr, (b) MGM, M_1. At the bottom, the optimal reconstructions: (c) lsqr, (d) MGM, M_1.
Figure 8. MRI image test problem: (a) True image ( 128 × 128 pixels), (b) noisy sinogram (5% Gaussian noise).
Figure 9. MRI image. Evolution of the RRE against the iteration with 5% Gaussian noise. The ∗ marks the discrepancy principle iteration, while ∘ denotes the optimal one.
Figure 10. MRI test image with 5% noise. Reconstructions obtained at the DP iteration: (a) MGM, M 1 , (b) MGM, M 2 .
Table 1. Shepp–Logan phantom with full and limited angles. RREs and stopping iterations (in brackets) obtained with lsqr and the MGM for the different choices of M j in (8).
Angles    Method      Stopping criterion    5%              10%             15%             20%
Full      lsqr        Opt. iter             0.23497 (10)    0.31214 (7)     0.37171 (6)     0.41170 (5)
                      DP                    0.24805 (8)     0.32502 (6)     0.38857 (5)     0.41170 (5)
          MGM, M_1    Opt. iter             0.21478 (62)    0.28387 (30)    0.32905 (21)    0.36395 (16)
                      DP                    0.22999 (36)    0.29928 (19)    0.34711 (13)    0.38955 (9)
          MGM, M_2    Opt. iter             0.20675 (65)    0.27849 (32)    0.32697 (21)    0.36477 (15)
                      DP                    0.22255 (40)    0.29507 (21)    0.34451 (14)    0.38544 (10)
          MGM, M_3    Opt. iter             0.20665 (60)    0.27592 (31)    0.32234 (22)    0.35904 (16)
                      DP                    0.22175 (41)    0.29862 (18)    0.35038 (12)    0.38793 (9)
          MGM, M_4    Opt. iter             0.20955 (61)    0.28706 (25)    0.33990 (16)    0.38095 (11)
                      DP                    0.22232 (34)    0.29862 (18)    0.35038 (12)    0.38793 (9)
Limited   lsqr        Opt. iter             0.27112 (10)    0.35752 (7)     0.41858 (5)     0.46636 (5)
                      DP                    0.29477 (7)     0.35906 (6)     0.41858 (5)     0.47735 (4)
          MGM, M_1    Opt. iter             0.24574 (52)    0.31565 (24)    0.36522 (16)    0.40832 (10)
                      DP                    0.26960 (26)    0.33736 (14)    0.39070 (9)     0.42503 (7)
          MGM, M_2    Opt. iter             0.23872 (41)    0.31466 (21)    0.37184 (14)    0.41246 (9)
                      DP                    0.26294 (25)    0.34086 (13)    0.38863 (9)     0.41946 (8)
          MGM, M_3    Opt. iter             0.23894 (48)    0.31528 (23)    0.36765 (15)    0.40882 (11)
                      DP                    0.25553 (31)    0.33940 (15)    0.39116 (10)    0.42386 (8)
          MGM, M_4    Opt. iter             0.24369 (37)    0.33034 (17)    0.38812 (11)    0.43201 (8)
                      DP                    0.25084 (28)    0.33526 (15)    0.39128 (10)    0.43646 (7)
Table 2. MRI image. RREs and stopping iterations (in brackets) obtained with lsqr and the MGM for the different choices of M j in (8) with 5% noise.
Method      Stopping criterion    RRE (stop. iter.)
lsqr        Opt. iter             0.16726 (7)
            DP                    0.17782 (6)
MGM, M_1    Opt. iter             0.15969 (47)
            DP                    0.17020 (26)
MGM, M_2    Opt. iter             0.14722 (35)
            DP                    0.16580 (20)
MGM, M_3    Opt. iter             0.1489 (27)
            DP                    0.16387 (19)
MGM, M_4    Opt. iter             0.15000 (24)
            DP                    0.16473 (18)