Article

A Novel Iterative MLEM Image Reconstruction Algorithm Based on Beltrami Filter: Application to ECT Images

by Abdelwahhab Boudjelal 1,2,*, Abderrahim Elmoataz 1, Bilal Attallah 2 and Zoubeida Messali 3

1 Image Team, GREYC Laboratory, University of Caen Normandy, CEDEX, 14050 Caen, France
2 Electrical Engineering Laboratory LGE, M’sila University, 28000 M’sila, Algeria
3 Electronics Department, University of Mohamed El Bachir El Ibrahimi—Bordj Bou Arréridj, 34030 Bordj Bou Arréridj (BBA), Algeria
* Author to whom correspondence should be addressed.
Tomography 2021, 7(3), 286-300; https://doi.org/10.3390/tomography7030026
Submission received: 22 June 2021 / Revised: 17 July 2021 / Accepted: 23 July 2021 / Published: 28 July 2021

Abstract

The implementation of emission-computed tomography (ECT), including positron emission tomography and single-photon emission-computed tomography, has been an important research topic in recent years and is of significant practical importance. However, the slow rate of convergence and the computational complexity have severely impeded the efficient implementation of iterative reconstruction. By combining the maximum-likelihood expectation maximization (MLEM) iteration with the Beltrami filter, this paper proposes a new approach that reformulates the MLEM algorithm. Beltrami filtering is applied to the image obtained from the MLEM algorithm at each iteration. The role of the Beltrami filtering is mainly to remove out-of-focus slice blur, an artifact present in most existing images. To improve the quality of an image reconstructed using MLEM, the Beltrami filter exploits similar structures, which in turn reduces the number of errors in the reconstructed image. Numerical image reconstruction tomography experiments have demonstrated the performance of the proposed algorithm in terms of an increase in the signal-to-noise ratio (SNR) and the recovery of fine details that can be hidden in the data. The SNR and visual quality of the reconstructed images are significantly improved compared to those of the standard MLEM. We conclude that the proposed algorithm provides an edge-preserving image reconstruction and substantially suppresses noise and edge artifacts.

1. Introduction

Image reconstruction is a type of inverse problem [1]; as with most such problems, the crucial information in an image is not always attainable. Analytical and iterative reconstruction algorithms are the two principal types of methods recommended in the literature [2,3,4]. Filtered backprojection (FBP) [5] is the most widely investigated diagnostic method used in emission-computed tomography (ECT) [6,7], which uses either single photons, i.e., single-photon emission-computed tomography (SPECT) reconstruction [8], or pairs of photons, i.e., positron emission tomography (PET) reconstruction [9]. Because it is fast and simple to implement in reconstruction software, FBP has remained dominant in a large number of applications. However, analytic image reconstruction methods have several limitations that restrain their effectiveness: they generally disregard measurement noise and commonly produce images with unsatisfactory compromises between spatial resolution and noise.
By using iterative image reconstruction algorithms, these inadequacies can be avoided [10,11,12,13,14,15]. In comparison to analytic reconstruction methods, iterative reconstruction methods have significant benefits regarding their capability to integrate solutions to image-degrading issues such as incomplete, noisy, and dynamic datasets in a more efficient manner. These benefits translate into images of higher quality, a more precise approximation of the tracer concentration, and improved spatial resolution and image contrast.
Within ECT, the most commonly used iterative algorithms are the maximum-likelihood expectation maximization (MLEM) algorithm [16,17,18] and its faster variant, the ordered-subset EM (OSEM) algorithm [19]. However, it is important to acknowledge that iterative reconstruction methods alone are insufficient to reconstruct artifact-free images, and additional developments are therefore necessary to achieve enhanced outcomes.
Based on the above understanding of past works [20,21,22,23,24], this paper proposes a novel scheme based on iterative filtering for computing the MLEM algorithm for ECT image reconstruction from noisy projections, namely a filtered MLEM. More precisely, we include an additional Beltrami [25,26,27] filtering step at each iteration of the MLEM to reduce noise and unwanted artifacts while preserving the edge information. The combination of these techniques not only preserves geometrical structures such as edges, it also improves the reconstruction quality and resolution.
In Section 2 of this paper, we describe the main steps of the proposed image reconstruction method using Beltrami image filtering and provide a solution using the MLEM algorithm. Section 3 presents the evaluation criteria for the quality of the reconstruction. Numerical results from a quantitative comparison of recent reconstruction methods in terms of the relative norm error, SNR, and human visual quality of the reconstructed images are given in Section 4. Finally, Section 5 provides some concluding remarks regarding this work.

2. Materials and Methods

In the MLEM algorithm, the measured projection datasets play a significant role. In a SPECT scanner, the size of the projection data depends on both the number of detectors in the camera strip and the number of angles. If the camera contains b detectors and we measure at a angles, then the number of elements of the projection data vector is J = a × b. For easier calculation, this vector is generally represented as a column vector. In PET, a ring of detectors around the patient measures the annihilation events. If N is the number of detectors in the ring, then J = N(N − 1)/2 is the number of all detector pairs in coincidence. For a tomographic reconstruction, the image to be reconstructed is digitized into a matrix x with n_x rows and n_y columns. Again, for computational purposes, we represent the image as a column vector with I = n_x × n_y elements. Physicists have shown that such emissions follow a Poisson model [28]. Therefore, the unknown total number of emission events in the i-th pixel, x(i), is a Poisson random variable with mean x̄(i).
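As a worked illustration of these sizes, the following minimal Python sketch computes J and I; the SPECT detector and angle counts are assumed values chosen only for illustration, whereas the 420-crystal ring and the 192 × 192 image grid correspond to the simulation described in Section 3.

```python
import numpy as np

# SPECT: b detectors on the camera strip, measured at a angles (values assumed for illustration)
b, a = 128, 120
J_spect = a * b                   # 15,360 elements in the projection (column) vector

# PET: ring of N detectors; every detector pair in coincidence is one projection element
N = 420                           # crystals per ring, as in the simulation of Section 3
J_pet = N * (N - 1) // 2          # 87,990 detector pairs

# The image is digitized into an n_x-by-n_y matrix and flattened into a column vector
n_x, n_y = 192, 192               # image grid used for the phantoms in Section 3
x = np.zeros((n_x, n_y)).reshape(-1, 1)
I = x.size                        # I = n_x * n_y = 36,864 pixels
```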
The system matrix represents the probability distribution of the projection data: element p(i, j) is the probability that an emission from pixel i is detected by detector j. The expected value of the projection data can then be calculated from the system matrix using the following formula [17,18,29]:
$y(j) = E_x(j) = \sum_{i=1}^{I} x(i) \cdot p(i,j)$  (1)
Because the x(i) are independent Poisson random variables, the detected counts are also Poisson distributed. Given the above equation, the probability of the detected data is
$L(x) = P(y \mid x) = \prod_{j=1}^{J} e^{-\hat{x}(j)} \, \frac{\hat{x}(j)^{\,y(j)}}{y(j)!}$  (2)
The likelihood function L(x) indicates the Poisson probability of observing the given counts in the detector pairs in coincidence if the true density is x(i). The log-likelihood function is obtained by combining Equations (1) and (2) [30]:
$l(x) = \log(L(x)) = -\sum_{j=1}^{J} \sum_{i=1}^{I} x_i \, p_{i,j} + \sum_{j=1}^{J} y_j \log\!\left( \sum_{i=1}^{I} x_i \, p_{i,j} \right) - \sum_{j=1}^{J} \log(y_j!)$  (3)
It can be seen from the first and second derivatives of the log-likelihood function that the matrix of second derivatives is negative semi-definite and that l(x) is concave [30]. As a result, the conditions sufficient for a vector x̂ to yield the maximum of L are the Kuhn–Tucker (KT) conditions [31]:
$0 = x(i) \left. \frac{\partial l(x)}{\partial x(i)} \right|_{x = \hat{x}} = -\hat{x}(i) \sum_{j=1}^{J} p(i,j) + \sum_{j=1}^{J} \frac{n(j) \, \hat{x}(i) \, p(i,j)}{\sum_{i'=1}^{I} \hat{x}(i') \, p(i',j)}$  (4)
and
$\left. \frac{\partial l(x)}{\partial x(i)} \right|_{x = \hat{x}} \leq 0 \quad \text{if } \hat{x}(i) = 0$  (5)
The MLEM algorithm starts with an initial estimate x^(0) and uses the maximization condition to iteratively improve the estimate. Researchers have used a variety of initial estimates to reach the result faster [32,33,34]. The main formula of the MLEM algorithm, derived by solving the above maximization condition for x̂(i), gives the estimate at iteration n + 1:
$x^{n+1}(i) = x^{n}(i) \, \frac{1}{\sum_{j=1}^{J} p(i,j)} \sum_{j=1}^{J} \frac{n(j) \, p(i,j)}{\sum_{i'=1}^{I} x^{n}(i') \, p(i',j)}$  (6)
Analyzing Equation (6), the MLEM algorithm can be summarized as follows:
1. Start with an initial estimate x^(0), with x^(0)(i) > 0 for i = 1, 2, 3, …, I.
2. If x^(n) denotes the estimate of x at the n-th iteration, define a new estimate x^(n+1) using Equation (6).
3. If the required accuracy for numerical convergence has been achieved, stop; otherwise, return to step 2.
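For concreteness, the three steps above can be written as the following minimal NumPy sketch of the update in Equation (6); the dense system matrix P (of size J × I), the measured counts y (corresponding to n(j) in the notation above), and the fixed iteration count are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def mlem(P, y, n_iter=50, eps=1e-12):
    """Plain MLEM, Equation (6): P is the J x I system matrix, y the measured counts n(j)."""
    J, I = P.shape
    x = np.ones(I)                 # step 1: positive initial estimate x^(0)
    sens = P.sum(axis=0) + eps     # sensitivity of each pixel, sum_j p(i, j)
    for _ in range(n_iter):        # step 2: repeated multiplicative updates
        y_est = P @ x + eps        # forward projection, sum_i x(i) p(i, j)
        x *= (P.T @ (y / y_est)) / sens
    return x                       # step 3 (stopping rule) is replaced here by a fixed iteration count
```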

2.1. Noise Reduction Method Based on Geometric Flow

To obtain a suitable ECT image reconstruction, an image-filtering step is embedded into the existing MLEM iteration, i.e., an iterative filtering strategy that supports the MLEM image quality during the iterative process. Following the Beltrami framework, the image intensities are embedded as a surface in a higher-dimensional space, which enables the use of powerful differential-geometry operators [25].
The Beltrami flow is a geometric diffusion flow that minimizes the area of the image manifold, driving the flow toward a minimal surface while simultaneously preserving the edges.
The Beltrami framework is based on a nonlinear flow that has been applied as an edge-preserving denoising and deblurring algorithm for signals and, especially, multi-channel images.
The Beltrami flow is expressed as follows [35]:
$x_t = \frac{1}{\sqrt{g}} \, \operatorname{div}\!\left( \frac{\nabla x}{\sqrt{g}} \right)$  (7)
where x_t = ∂x/∂t denotes the derivative of the image density x with respect to time t. In addition, ∇x is the gradient vector, i.e., ∇x = (x_{x_1}, x_{x_2}) for 2D images and ∇x = (x_{x_1}, x_{x_2}, x_{x_3}) for 3D volumes, where x_{x_1} = ∂x/∂x_1 denotes the derivative of x with respect to x_1 (and similarly for x_2 and x_3). Moreover, div is the divergence operator, defined for a vector function f = (f_{x_1}, f_{x_2}) as
$\operatorname{div}(f) = \frac{\partial f_{x_1}}{\partial x_1} + \frac{\partial f_{x_2}}{\partial x_2}$  (8)
Lastly, g denotes the determinant of the first fundamental form of the surface, given by g = 1 + |∇x|². This form of g originates from the metric induced when the intensity of an n-dimensional image is embedded in (n + 1)-dimensional Euclidean space [25] (with n = 2 for 2D images, and n = 3 for 3D volumes). The factor √g measures the area increase between the surface domain S and the image domain x, and is therefore the key term driving the flow toward the surface with the smallest area.
Furthermore, 1/√g in Equation (7) acts as an edge indicator. Consequently, the Beltrami flow behaves as a selective noise filter that preserves edges by lowering the diffusion across them while applying strong diffusion everywhere else [36]. The partial differential equation (7) is discretized using finite differences, with a forward Euler approximation in time for x and central differences for the spatial derivatives:
$x^{k} = x^{k-1} + h_t \, \frac{x_{x_1 x_1}\left(1 + x_{x_2}^2\right) + x_{x_2 x_2}\left(1 + x_{x_1}^2\right) - 2 \, x_{x_1} x_{x_2} x_{x_1 x_2}}{\left(1 + x_{x_1}^2 + x_{x_2}^2\right)^2}$  (9)
where x^k is the reconstructed image at the k-th iteration, h_t is the time step, and x_{x_1} is the first derivative with respect to x_1 (computed analogously for x_2), i.e.,
$x_{x_1}(i,j) = \frac{x(i, j+1) - x(i, j-1)}{2}$  (10)
Here, x_{x_1 x_1} is the second-order derivative with respect to x_1 (computed analogously for x_2), i.e.,
$x_{x_1 x_1}(i,j) = x(i, j+1) - 2\,x(i,j) + x(i, j-1)$  (11)
and x_{x_1 x_2} is the mixed second-order partial derivative with respect to x_1 and x_2:
$x_{x_1 x_2}(i,j) = \frac{x(i+1, j+1) - x(i+1, j-1) - x(i-1, j+1) + x(i-1, j-1)}{4}$  (12)
These derivatives are calculated from the reconstructed image of the previous iteration, x^{k-1}, where i and j are the pixel indices.
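A minimal NumPy sketch of one explicit Beltrami update, implementing Equations (9)–(12), is given below; the periodic boundary handling via np.roll and the value of the time step h_t are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def beltrami_step(x, h_t=0.1):
    """One explicit iteration of Equation (9) on a 2D image x."""
    # Central first derivatives, Equation (10)
    x_1 = (np.roll(x, -1, axis=1) - np.roll(x, 1, axis=1)) / 2.0
    x_2 = (np.roll(x, -1, axis=0) - np.roll(x, 1, axis=0)) / 2.0
    # Second derivatives, Equation (11)
    x_11 = np.roll(x, -1, axis=1) - 2.0 * x + np.roll(x, 1, axis=1)
    x_22 = np.roll(x, -1, axis=0) - 2.0 * x + np.roll(x, 1, axis=0)
    # Mixed derivative, Equation (12)
    x_12 = (np.roll(np.roll(x, -1, axis=0), -1, axis=1)
            - np.roll(np.roll(x, -1, axis=0), 1, axis=1)
            - np.roll(np.roll(x, 1, axis=0), -1, axis=1)
            + np.roll(np.roll(x, 1, axis=0), 1, axis=1)) / 4.0
    # Explicit Beltrami update, Equation (9)
    num = x_11 * (1 + x_2**2) + x_22 * (1 + x_1**2) - 2.0 * x_1 * x_2 * x_12
    den = (1 + x_1**2 + x_2**2) ** 2
    return x + h_t * num / den
```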

2.2. Combining MLEM Algorithm and Beltrami Image Flow Filtering

In this section, we propose a new scheme called filtered MLEM (f-MLEM). The MLEM algorithm is combined with Beltrami image-flow filtering to reduce noise and unwanted artifacts while preserving edge information; the resulting filtered MLEM algorithm solves the unpenalized problem but includes an additional filtering step in each iteration. As is well known, the MLEM algorithm is an iterative approach that increases the likelihood of the estimated image at each iteration. Our proposal is to introduce a filtering step at each iteration, through which the current estimated image is smoothed in a suitable manner.
The MLEM method is applied sequentially, changing the number of Beltrami filter iterations K at each iteration N of the MLEM algorithm. We start the MLEM iterations with a large K to converge toward the desired solution and decrease K as the number of MLEM iterations grows. These two iteration counts have an inverse relationship: as the MLEM estimate becomes clearer with more iterations, less smoothing is needed, so the number of Beltrami iterations is reduced. This lowers memory usage and computation time while maintaining image quality. In other words, during the iterations the unknown parameters are allowed to vary while the MLEM algorithm maximizes the likelihood function L to obtain the estimated image x(i) that best fits the measured data n(j).
To obtain acceptable images owing to its edge-preserving behavior, the Beltrami filter is applied using the following unit-pixel-distance derivative kernels along the x_1 and x_2 directions:
$h_{x_1} = 0.5 \cdot \begin{bmatrix} 0 & 0 & 0 \\ -1 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \qquad h_{x_2} = 0.5 \cdot \begin{bmatrix} 0 & -1 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$

$h_{x_1 x_1} = 0.5 \cdot \begin{bmatrix} 0 & 0 & 0 \\ 1 & -2 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \qquad h_{x_2 x_2} = 0.5 \cdot \begin{bmatrix} 0 & 1 & 0 \\ 0 & -2 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \qquad h_{x_1 x_2} = 0.5 \cdot \begin{bmatrix} -1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & -1 \end{bmatrix}$
The proposed f-MLEM reconstruction algorithm is summarized in Algorithm 1.
Algorithm 1: filtered MLEM Algorithm
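The following is a minimal sketch of the loop summarized in Algorithm 1; it reuses the beltrami_step function sketched in Section 2.1 together with the MLEM update of Equation (6), and the linearly decreasing schedule for the number of Beltrami iterations K is one illustrative way to realize the "large K first, then fewer" strategy described above, not necessarily the authors' exact schedule.

```python
import numpy as np

def f_mlem(P, y, n_iter=100, K_start=10, K_end=1, h_t=0.1, eps=1e-12):
    """Filtered MLEM: one MLEM update per iteration, followed by K(n) Beltrami filter steps."""
    J, I = P.shape
    side = int(np.sqrt(I))                       # assumes a square image, e.g., 192 x 192
    x = np.ones(I)                               # positive initial estimate
    sens = P.sum(axis=0) + eps                   # sum_j p(i, j)
    for n in range(n_iter):
        # MLEM update of Equation (6)
        x *= (P.T @ (y / (P @ x + eps))) / sens
        # Beltrami filtering with a decreasing number of inner iterations K
        K = int(round(K_start + (K_end - K_start) * n / max(n_iter - 1, 1)))
        img = x.reshape(side, side)
        for _ in range(K):
            img = beltrami_step(img, h_t)        # beltrami_step as sketched in Section 2.1
        x = np.clip(img.reshape(-1), 0.0, None)  # keep the estimate nonnegative
    return x.reshape(side, side)
```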

3. Performance Evaluation

To assess the overall performance of the proposed algorithm, computer simulations were conducted. To evaluate the reconstructed results objectively, two image-quality measurement parameters were computed: the relative norm error of the reconstructed images [36], which is defined as
$d_f = \frac{\| x - \hat{x} \|_2}{\| x \|_2}$  (13)
and the signal-to-noise ratio (SNR), defined as
$\mathrm{SNR} = \frac{\sum_{j=1}^{M} \sum_{i=1}^{N} \left[\hat{x}(i,j)\right]^2}{\sum_{j=1}^{M} \sum_{i=1}^{N} \left[x(i,j) - \hat{x}(i,j)\right]^2}$  (14)
Here, x denotes the gray-level values of the test image and x̂ those of the reconstructed image. A lower d_f value and a higher SNR indicate that the reconstructed image is closer to the test image. The number of iterations is a further criterion for an iterative reconstruction algorithm, with fewer iterations being preferable. Comparisons of line profiles (including isolines) through the reconstructed images are also used.
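For reference, the two figures of merit can be computed as in the following minimal sketch, where x_true denotes the test phantom and x_rec the reconstructed image (the array names are illustrative):

```python
import numpy as np

def relative_norm_error(x_true, x_rec):
    """Relative norm error d_f of Equation (13): ||x - x_hat||_2 / ||x||_2."""
    return np.linalg.norm(x_true - x_rec) / np.linalg.norm(x_true)

def snr(x_true, x_rec):
    """SNR of Equation (14): energy of the estimate over the energy of the error."""
    return np.sum(x_rec ** 2) / np.sum((x_true - x_rec) ** 2)
```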
We simulated a PET scanner in 2D mode with 420 crystals per ring, each crystal having a cross-section of 6.3 × 6.3 mm².
In our work, we use the phantoms shown in Figure 1. Figure 1a shows a Hoffman Brain Phantom [37], which was used to simulate the distribution of a PET tracer through gray matter, white matter, and three tumors. The digital design of the tumors in the phantom provides essential information about the reconstructed images: the smallest tumor that can be individually distinguished indicates the resolution of the scanner. The second phantom, shown in Figure 1b, is a standard medical image of an abdomen [38] and provides an anatomically accurate simulation of the radioisotope distribution in a healthy stomach. The phantom enables quantitative and qualitative exploration of the influence of scatter and attenuation observed in SPECT or PET imaging. All the phantoms and reconstructed images have a resolution of 192 × 192 pixels with a pixel size of 3 × 3 mm². To create a sinogram, a simulated phantom image was forward projected, and a uniform background amounting to 30% of the true coincidences was introduced to simulate the random and scattered fractions; the fractions were computed as the corresponding counts divided by the total number of counts. Poisson noise was then generated to create the noisy sinograms; the noisy data contained 500k events.
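The data-generation step can be illustrated with the following sketch, in which the radon transform from scikit-image stands in for the scanner's forward projector; the number of projection angles, the uniform background model, and the scaling to roughly 500k total events are assumptions consistent with the description above rather than the authors' exact simulator.

```python
import numpy as np
from skimage.transform import radon

def simulate_sinogram(phantom, n_angles=180, total_counts=5e5, background_frac=0.30, seed=0):
    """Forward-project a phantom, add a uniform background, scale the count level, add Poisson noise."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino = np.clip(radon(phantom, theta=theta), 0.0, None)   # ideal, noise-free projections
    # Uniform background modelling randoms and scatter (about 30% of all events)
    background = (background_frac / (1.0 - background_frac)) * sino.mean()
    sino = sino + background
    sino *= total_counts / sino.sum()                        # scale to ~500k total events
    return rng.poisson(sino).astype(float)                   # Poisson-noisy sinogram
```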

4. Results and Discussion

The objective of this section is to provide a comprehensive comparison of the reconstruction results of the regular MLEM algorithm with those of the proposed f-MLEM algorithm for image reconstruction in emission tomography. To allow a clear view of the images, they were scaled to [0, 1] and displayed using the same linear gray scale. The study primarily compares the noise magnitude in low- and high-count areas of the MLEM and f-MLEM images. To this end, the phantom images reconstructed using the MLEM and f-MLEM algorithms were first carefully examined. To generate the projection data, Poisson noise was added to each attenuated phantom projection, and the resulting noisy projections were used to reconstruct the 2D phantoms.
Correspondingly, Figure 2 and Figure 3 show the reconstructed images obtained with the regular MLEM algorithm for different numbers of iterations, for the Hoffman Brain Phantom and the standard medical image of an abdomen, respectively. As the figures show, the visual quality of the reconstructed images improves substantially as the number of iterations increases.
The reconstructed images obtained with the proposed f-MLEM algorithm for different numbers of iterations are shown in Figure 4 and Figure 5. Based on these experiments, it is evident that the f-MLEM algorithm can successfully eliminate the star artifacts produced by the regular MLEM algorithm. Compared to the MLEM algorithm, the quality measures of f-MLEM are substantially improved. From the evaluation of the two algorithms, we conclude that the proposed algorithm is reliable and suitable for improving the overall quality of reconstructed images, and is a more appropriate strategy for controlling edge and noise artifacts. Figure 4a shows the reconstruction results of the f-MLEM algorithm at iteration 20. Figure 4b shows the results at iteration 40, at which our approximate feasibility criterion is fulfilled. An image with higher contrast at iteration 60 is shown in Figure 4c; a considerable reduction in noise artifacts is observed for the iterative f-MLEM algorithm. Figure 4d shows the results at iteration 80, and Figure 4e shows the results at iteration 100 when applying a Beltrami filter at iteration 60. The latter appears to achieve a better image in terms of visual quality.
The most noticeable difference when comparing the results from the MLEM and f-MLEM algorithms is that the f-MLEM reconstructed images are much smoother and contain fewer artifacts, particularly in the peripheral regions of the reconstructed images. The reconstruction results for the two simulated phantoms in the two rectangular areas marked in Figure 1a,b highlight how the two reconstruction algorithms differ. The proposed algorithm successfully preserved the edges: a detailed investigation reveals that hot lesion edges were effectively preserved with the proposed algorithm, whereas artifacts and deviations were more likely to occur with the conventional MLEM algorithm. Furthermore, the intensity distribution within the hot areas was more homogeneous with the proposed algorithm than with the conventional MLEM algorithm. Likewise, the proposed algorithm produced a better outcome for the Hoffman Brain Phantom than the conventional MLEM algorithm (Figure 6). A visual examination showed that the outcomes generated by the two algorithms for the standard medical image of an abdomen are not significantly different (Figure 7). It is clear from these findings that the proposed algorithm is more effective in generating smoother images with minimal bias and deviations compared to the conventional MLEM algorithm.
To further display the differences, we used the Hoffman Brain Phantom and a standard medical image of an abdomen to examine the edge-preservation capability of the proposed algorithm. Horizontal 1D line profiles through the reconstructed images and the ideal Hoffman Brain Phantom image, which includes three ROIs, are compared in Figure 8. These profiles were calculated using 80 iterations.
The conventional MLEM algorithm allows spatial noise and introduces some bias in regions with identical pixel values in the reconstructed image. The iterative MLEM algorithm produces an unbiased profile but a noisy image for the same number of iterations as used by the iterative f-MLEM. The standard medical image of the abdomen includes three ROIs, and the line segments crossing these regions are shown in Figure 9. From these profiles, we can see that the proposed algorithm lowers the noise in the different regions while keeping the edges between regions sharp. The conventional MLEM method yields nontrivial values for pixels outside the phantom, i.e., in open air. With the proposed iterative f-MLEM algorithm, these air pixels are nullified and their values are redistributed to the pixels inside the phantom, causing those values to increase. Our newly proposed f-MLEM method outperforms the conventional MLEM in terms of reconstructing the ideal profile; visually, the images provided by the proposed f-MLEM algorithm are close to the original image.
The SNRs of the reconstructed images obtained using the conventional MLEM algorithm and the proposed f-MLEM algorithm are plotted against the number of iterations in Figure 10. The figure demonstrates that the f-MLEM algorithm provides better quality measurements than the conventional MLEM algorithm; a sufficient number of iterations is nevertheless required to enhance the image quality.
Figure 11 shows the relative norm error (d_f) versus the number of iterations for the Hoffman Brain Phantom image and the standard medical image of an abdomen when applying the proposed f-MLEM and the conventional MLEM algorithm. The proposed f-MLEM algorithm achieves better results even at a small number of iterations and produces a better reconstruction quality in terms of the relative norm error.
It can easily be seen that the performance parameters are considerably improved compared to those of the conventional MLEM, particularly up to 50 iterations, after which they remain almost constant. Note that a certain number of iterations is necessary to improve and refine the quality of a reconstructed image. Both the quality measurements (SNR) and the relative norm errors (d_f) clearly show that the performance of the conventional MLEM algorithm after 100 iterations is similar to that of f-MLEM at 22 iterations, as shown in Figure 10a and Figure 11a. The proposed algorithm requires a minimum of around 20 iterations to produce an acceptable reconstructed image, and it eliminates the star artifacts that are usually generated by a conventional MLEM algorithm. The proposed f-MLEM algorithm is fast and efficient because it provides the best reconstructed images after a sufficiently small number of iterations.
To further illustrate the benefits of our algorithm, Table 1 compares the signal-to-noise ratios (SNR) obtained with the reconstruction techniques discussed in this work. Table 1 shows the difference between the MLEM and f-MLEM algorithms for the Hoffman Brain Phantom in terms of the performance parameters for different numbers of iterations. The f-MLEM algorithm performs better even at a limited number of iterations.
In all the visual displays, the quality measurements and line plots demonstrate that the proposed f-MLEM algorithm outperforms the conventional MLEM algorithm. From the above observations, we conclude that the proposed algorithm provides an edge-preserving image reconstruction and substantially suppresses the noise and edge artifacts present after a small number of iterations. It also extends the conventional MLEM algorithm to reconstructions from noisy random projections with a small number of iterations.
However, despite these advantages, the proposed approach still has a drawback: compared to conventional analytical solutions, its computational complexity is high. Nevertheless, owing to the continuous improvement of computer technologies, efficient programming methods, and intelligent implementation techniques, this deficiency has been partially overcome.

5. Conclusions

For the reconstruction of projection data with an inadequate number of iterations and noise, a filtered MLEM algorithm has been developed and applied. Under this scenario, the analytical results and the phantom reconstruction outcomes for the Hoffman Brain Phantom illustrate that the proposed f-MLEM algorithm delivers a substantial enhancement in reconstructed image quality and accuracy compared to the regular MLEM algorithm. Furthermore, the implementation of the proposed f-MLEM is straightforward and does not disrupt the physical models it relies on. This study demonstrates that the proposed algorithm is an effective method for enhancing reconstruction quality and performance. Additionally, the filtering procedure used in the filtered MLEM algorithm can likewise be incorporated into the OSEM algorithm.

Author Contributions

Writing—original draft, A.B.; Methodology, A.B. and Z.M.; Validation, A.E.; Software, A.B.; Visualization, A.B. and B.A.; Writing—review & editing, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

This work was supported by the Normandy Region and by the European Union (FEDER) through the MIDIPATH project.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Bertero, M.; Boccacci, P. Introduction to Inverse Problems in Imaging; CRC Press: Boca Raton, FL, USA, 1998.
2. Cherry, S.R.; Sorenson, J.A.; Phelps, M.E. Physics in Nuclear Medicine; Elsevier Health Sciences: Amsterdam, The Netherlands, 2012.
3. Zeng, G.L. Image reconstruction—A tutorial. Comput. Med. Imaging Graph. 2001, 25, 97–103.
4. Zaidi, H.; Erwin, W.D. Quantitative analysis in nuclear medicine imaging. J. Nucl. Med. 2007, 48, 1401.
5. Macovski, A. Medical Imaging Systems; Prentice Hall: Birmingham, AL, USA, 1983.
6. Ollinger, J.M.; Fessler, J.A. Positron-emission tomography. IEEE Signal Process. Mag. 1997, 14, 43–55.
7. Phelps, M.E.; Mazziotta, J.; Schelbert, H.R. Positron Emission Tomography and Autoradiography: Principles and Applications for the Brain and Heart; Raven Press: New York, NY, USA, 1985.
8. English, R.J.; Brown, S.E. SPECT Single Photon Emission Computed Tomography: A Primer; Society of Nuclear Medicine: New York, NY, USA, 1986.
9. Turkington, T.G. Introduction to PET instrumentation. J. Nucl. Med. Technol. 2001, 29, 4–11.
10. Fessler, J.A. Penalized weighted least-squares image reconstruction for positron emission tomography. IEEE Trans. Med. Imaging 1994, 13, 290–300.
11. Anastasio, M.A.; Zhang, J.; Pan, X.; Zou, Y.; Ku, G.; Wang, L.V. Half-time image reconstruction in thermoacoustic tomography. IEEE Trans. Med. Imaging 2005, 24, 199–210.
12. Wernick, M.N.; Aarsvold, J.N. Emission Tomography: The Fundamentals of PET and SPECT; Academic Press: Cambridge, MA, USA, 2004.
13. Pan, X.; Sidky, E.Y.; Vannier, M. Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction? Inverse Probl. 2009, 25, 123009.
14. Mehta, D.; Thompson, R.; Morton, T.; Dhanantwari, A.; Shefer, E. Iterative model reconstruction: Simultaneously lowered computed tomography radiation dose and improved image quality. Med. Phys. Int. J. 2013, 2, 147–155.
15. Puchner, S.B.; Ferencik, M.; Maurovich-Horvat, P.; Nakano, M.; Otsuka, F.; Kauczor, H.U.; Virmani, R.; Hoffmann, U.; Schlett, C.L. Iterative image reconstruction algorithms in coronary CT angiography improve the detection of lipid-core plaque—A comparison with histology. Eur. Radiol. 2015, 25, 15–23.
16. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B (Methodol.) 1977, 39, 1–38.
17. Shepp, L.A.; Vardi, Y. Maximum likelihood reconstruction for emission tomography. IEEE Trans. Med. Imaging 1982, 1, 113–122.
18. Lange, K.; Carson, R. EM reconstruction algorithms for emission and transmission tomography. J. Comput. Assist. Tomogr. 1984, 8, 306–316.
19. Hudson, H.M.; Larkin, R.S. Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans. Med. Imaging 1994, 13, 601–609.
20. Boudjelal, A.; Elmoataz, A.; Lozes, F.; Messali, Z. PDEs on Graphs for Image Reconstruction on Positron Emission Tomography. In International Conference on Image and Signal Processing; Springer: Berlin/Heidelberg, Germany, 2018; pp. 351–359.
21. Boudjelal, A.; Messali, Z.; Elmoataz, A. A Novel Kernel-Based Regularization Technique for PET Image Reconstruction. Technologies 2017, 5, 37.
22. Boudjelal, A.; Messali, Z.; Elmoataz, A.; Attallah, B. Improved Simultaneous Algebraic Reconstruction Technique Algorithm for Positron-Emission Tomography Image Reconstruction via Minimizing the Fast Total Variation. J. Med. Imaging Radiat. Sci. 2017, 48, 385–393.
23. Boudjelal, A.; El Moataz, A.; Messali, Z. A New Method of Image Reconstruction for PET Using a Combined Regularization Algorithm. In International Conference on Image and Signal Processing; Springer: Berlin/Heidelberg, Germany, 2020; pp. 178–185.
24. Boudjelal, A.; Messali, Z.; Attallah, B. PET image reconstruction based on Bayesian inference regularised maximum likelihood expectation maximisation (MLEM) method. Int. J. Biomed. Eng. Technol. 2018, 27, 337–354.
25. Kimmel, R.; Sochen, N.; Malladi, R. From high energy physics to low level vision. In Scale-Space Theory in Computer Vision; Springer: Berlin/Heidelberg, Germany, 1997; pp. 236–247.
26. Sochen, N.; Kimmel, R.; Malladi, R. A general framework for low level vision. IEEE Trans. Image Process. 1998, 7, 310–318.
27. Yezzi, A. Modified curvature motion for image smoothing and enhancement. IEEE Trans. Image Process. 1998, 7, 345–352.
28. Politte, D.G.; Snyder, D.L. Corrections for accidental coincidences and attenuation in maximum-likelihood image reconstruction for positron-emission tomography. IEEE Trans. Med. Imaging 1991, 10, 82–89.
29. Byrne, C. Iterative algorithms in tomography. UMass Libr. 2005. Available online: https://faculty.uml.edu//cbyrne/nlat.pdf (accessed on 27 July 2021).
30. Vardi, Y.; Shepp, L.; Kaufman, L. A statistical model for positron emission tomography. J. Am. Stat. Assoc. 1985, 80, 8–20.
31. Guignard, M. Generalized Kuhn–Tucker conditions for mathematical programming problems in a Banach space. SIAM J. Control 1969, 7, 232–241.
32. Mehranian, A.; Kotasidis, F.; Zaidi, H. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation. Phys. Med. Biol. 2016, 61, 1309.
33. Vaissier, P.; Beekman, F.; Goorden, M. Similarity-regulation of OS-EM for accelerated SPECT reconstruction. Phys. Med. Biol. 2016, 61, 4300.
34. Vaissier, P.; Goorden, M.; Beekman, F. Similarity-Regulated OSEM reconstruction for pinhole-PET. J. Nucl. Med. 2015, 56, 48.
35. Kimmel, R.; Malladi, R.; Sochen, N. Images as embedded maps and minimal surfaces: Movies, color, texture, and volumetric medical images. Int. J. Comput. Vis. 2000, 39, 111–129.
36. Kak, A.C.; Slaney, M. Principles of Computerized Tomographic Imaging; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2001.
37. Hoffman, E.; Cutler, P.; Digby, W.; Mazziotta, J. 3-D phantom to simulate cerebral blood flow and metabolic images for PET. IEEE Trans. Nucl. Sci. 1990, 37, 616–620.
38. Codes, C. Computed Tomography (CT) Abdomen & Pelvis Combination. In Clinical Appropriateness Guidelines: Advanced Imaging; AIM Specialty Health: Chicago, IL, USA, 2016; p. 39.
Figure 1. Input images: (a) Hoffman Brain Phantom and (b) standard medical image of an abdomen.
Figure 2. Conventional MLEM reconstructions of the Hoffman Brain Phantom. (a–c): iterations 20, 40, and 60; (d–f): iterations 80, 100, and 120.
Figure 3. Conventional MLEM reconstructions of the standard medical image of an abdomen. (a–c): iterations 20, 40, and 60; (d–f): iterations 80, 100, and 120.
Figure 4. Intermediate f-MLEM reconstructions of the Hoffman Brain Phantom. (a–c): iterations 20, 40, and 60; (d–f): iterations 80, 100, and 120.
Figure 5. Intermediate f-MLEM reconstructions of the standard medical image of an abdomen. (a–c): iterations 20, 40, and 60; (d–f): iterations 80, 100, and 120.
Figure 6. ROIs of the Hoffman Brain Phantom reconstructions achieved using the various algorithms, shown under magnification: (a,d) ground truth; (b,e) MLEM; and (c,f) f-MLEM.
Figure 7. ROIs of the standard medical image of an abdomen reconstructed using the various algorithms, shown under magnification: (a,d) ground truth; (b,e) MLEM; and (c,f) f-MLEM.
Figure 8. 1D line profiles of the two reconstruction algorithms across different ROIs of the Hoffman Brain Phantom.
Figure 9. 1D line profiles of the two reconstruction algorithms across different ROIs of the standard medical image of an abdomen.
Figure 10. SNR vs. the number of iterations for (a) Hoffman Brain and (b) abdomen phantom images.
Figure 11. Relative norm error (d_f) vs. the number of iterations for (a) Hoffman Brain and (b) abdomen phantom images.
Table 1. Signal-to-noise ratio (SNR) for varying numbers of iterations of the MLEM and f-MLEM algorithms. Test image: Hoffman Brain Phantom.

Iterations    10      20      40      60      80      100
MLEM          15.00   16.50   18.00   18.20   18.45   19.00
f-MLEM        16.00   19.00   23.30   25.10   26.02   26.30
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
