Article

Non-Local SVD Denoising of MRI Based on Sparse Representations

1 Department of Systems Engineering, Universidad del Norte, Barranquilla 080001, Colombia
2 Independent Consultant, Barranquilla 080001, Colombia
* Author to whom correspondence should be addressed.
Sensors 2020, 20(5), 1536; https://doi.org/10.3390/s20051536
Submission received: 18 January 2020 / Revised: 12 February 2020 / Accepted: 14 February 2020 / Published: 10 March 2020
(This article belongs to the Section Physical Sensors)

Abstract
Magnetic Resonance (MR) Imaging is a diagnostic technique that produces noisy images, which must be filtered before processing to prevent diagnostic errors. However, filtering the noise while keeping fine details is a difficult task. This paper presents a method, based on sparse representations and singular value decomposition (SVD), for non-locally denoising MR images. The proposed method prevents blurring, artifacts, and residual noise. Our method is composed of three stages. The first stage divides the image into sub-volumes to obtain its sparse representation, by using the KSVD algorithm. Then, the global influence of the dictionary atoms is computed to upgrade the dictionary and obtain a better reconstruction of the sub-volumes. In the second stage, based on the sparse representation, the noise-free sub-volume is estimated using a non-local approach and the SVD. The noise-free voxel is reconstructed by aggregating the overlapped voxels according to the rarity of the sub-volumes it belongs to, which is computed from the global influence of the atoms. The third stage repeats the process using a different sub-volume size to produce a new filtered image, which is averaged with the previously filtered images. The results provided show that our method outperforms several state-of-the-art methods on both simulated and real data.

Graphical Abstract

1. Introduction

MR Imaging is a useful diagnostic technique that produces high-resolution images; however, these images are affected by several sources of quality deterioration, due to inherent hardware limitations, scanning time, patient movement, and, one of the most important, noise. The presence of noise is not only a visual problem but may also interfere with analysis and segmentation tasks [1].
A study of errors in radiology conducted by Donald L. Renfrew [2] found that errors usually involved limitations in imaging technique and the acquisition of inaccurate or incomplete clinical history, among other causes. Previous research found that 69.23% of errors were perceptual and 6.4% were due to poor image quality [3]. These findings underline the importance of guaranteeing image quality in order to obtain better results in image segmentation and analysis and to prevent errors. Despite advances in MR acquisition technology, which have increased the signal-to-noise ratio, the acquisition speed, and the resolution, MR images are still affected by noise [4]; therefore, the filtering of MR images continues to be an active research area [5,6,7,8,9,10].
The raw captured data of an MR image are complex-valued, i.e., the image can be decomposed into amplitude and phase or real and imaginary images [11]. Both the real and imaginary images are affected by additive Gaussian noise. However, magnitude MR images, which are obtained by computing the magnitude from the real and imaginary images through a nonlinear mapping, are affected by noise that does not follow a Gaussian distribution; instead, it follows a Rician distribution.
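As a concrete illustration of this noise model, the minimal NumPy sketch below builds a noisy magnitude volume by adding independent Gaussian noise to the real and imaginary channels and taking the magnitude, which yields Rician-distributed data. The helper name add_rician_noise and the toy phantom are ours, not part of the paper.

```python
import numpy as np

def add_rician_noise(magnitude, sigma, seed=0):
    """Corrupt a noise-free magnitude image with Rician noise by adding
    independent Gaussian noise (std = sigma) to the real and imaginary
    channels and taking the magnitude of the result."""
    rng = np.random.default_rng(seed)
    real = magnitude + rng.normal(0.0, sigma, magnitude.shape)
    imag = rng.normal(0.0, sigma, magnitude.shape)
    return np.sqrt(real**2 + imag**2)

# Example: a synthetic cube phantom corrupted with 5% Rician noise.
phantom = np.zeros((32, 32, 32))
phantom[8:24, 8:24, 8:24] = 100.0
noisy = add_rician_noise(phantom, sigma=0.05 * phantom.max())
```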
According to Foi [12], the Rician distribution has two parameters: the noise-free magnitude of the data, which is unknown, and the standard deviation of the noise that affects the real and imaginary images. Estimating the magnitude is a difficult task, because the standard deviation of Rician noise in magnitude MR images depends on the magnitude of the data itself, and the expectation of the noisy magnitude differs from the noise-free magnitude by a nonlinear function of the noise standard deviation and of the noise-free magnitude. Foi [12] proposed a method, called Variance-Stabilization Transformation (VST), which enables methods designed for filtering additive Gaussian noise to deal with MR images.
Some works focused on improving the MR imaging results during the scanning process [13,14]. However, when no previous treatment has been performed, we must deal with issues in the captured image. Hence, many previous works have proposed methods for denoising MR images; most of them are adaptations of approaches for filtering 2D images, such as the widely known Non-Local Means (NLM) filter proposed by Buades et al. [15] in 2005. This method was the first to exploit the self-similarity of the image for filtering noise. It computes each pixel value as a non-local weighted average of all pixels in the image, with weights that depend on the similarity between the patches around the pixels. This method is the basis of many variants; however, blurring and blocking artifacts still affect the denoised image [16].
Coupé et al. [17] and Manjón et al. [18] introduced the first adaptations of the NLM for denoising MR Images in 2008. The proposal of Coupé et al. [17] focused on an efficient implementation that reduces the dependency on parameters by automatically tuning the smoothing parameter and selecting the most relevant voxels. Similarly, the proposal of Manjón et al. [18] focused on finding the optimal parameters for denoising MR magnitude images.
Due to the high computational complexity of NLM-based methods, some works focused on efficient implementations for processing MR images. Hu et al. [19] proposed a variant called SNLM. The proposed algorithm drastically reduces the running time to 1/20 of that of other NLM variants by randomly selecting a small subset of voxels. They introduced an optimal sampling pattern for further improvement. However, the sampling ratio needs to be high for the denoising results to be comparable to NLM. Klosowski and Frahm [20] presented an efficient implementation for real-time MR image processing, which improves detail preservation. The improvement was achieved by introducing a simple weighting function based on a compact support kernel. The reported results are comparable to the block-matching three-dimensional filter. Manjón et al. proposed using NLM along with sparseness [21], and NLM along with sparseness plus principal component analysis (PCA) [10], for denoising MR images. The latter method estimates the noise level of the image, assumed to be white noise, and filters it using non-local PCA. However, this assumption is not satisfied when Fourier or compressed sensing acquisitions are used, so the method cannot be applied in those cases.
The SVD is also a widely recognized technique that has been used for image denoising [22,23,24]. Rajwade et al. [25] successfully used the higher-order singular value decomposition (HOSVD) for filtering natural images, although this technique requires complex criteria, which involve solving optimization models, to establish the low-rank approximation for filtering noise. In addition, the HOSVD suffers from artifacts when applied to homogeneous regions [26]. The work of Rajwade et al. was adapted by Zhang et al. [27] for filtering MR images. This adaptation improved the HOSVD by adding a recursive stage which, in general terms, repeats the HOSVD to filter the noisy image resulting from the partial re-incorporation of the residual noise into the denoised image produced by the first stage. Zhang et al. [26] also adapted the HOSVD for filtering diffusion-weighted images. Several other works used the SVD along with NLM for filtering MR images [28] and natural images [29,30].
Dabov et al. [31] proposed one of the most important methods in state-of-the-art image denoising, "Image denoising with block-matching and 3D filtering" (BM3D). This method exploits self-similarity in conjunction with filtering in the transform domain, and has a variant for the direct processing of volumetric data, called BM4D [32], which is still considered a state-of-the-art algorithm [33]. Like BM3D, BM4D in its first stage gathers similar cubes into a 4D data structure and removes noise by thresholding in a 4D transform domain. The second stage restores the details using collaborative filtering. The stacked 3D cubes of voxels in the 4D structure lead to difficulties in removing noise at the beginning and end of the bands of hyperspectral images [33]. In addition, when the noise level is high, the block-matching distance does not allow enough similar blocks to be gathered; therefore, the denoising performance is reduced and some artifacts appear.
Another important approach that has attracted attention in image processing is sparse representation. It has been used for image labeling [34], image segmentation and classification [35], image inpainting [36], and image denoising [37,38,39], among other tasks. This approach has also been used for denoising MR images. Recently, Yuan [40] proposed an efficient implementation, based on an improved Split-Bregman algorithm, for filtering MR images using a sparse tensor model. Zhang et al. [41] use clustering to gather similar patches and filter them using a sparse model, which uses sparse gradient priors to exploit the low-rank properties of the image. Kang et al. [7] use a sparse representation-based model for both filtering Rician noise and deblurring MR images. The model incorporates a regularization term to preserve small details and repeated patterns while using a non-convex total variation term to preserve edges. Li et al. [42] presented an improved dictionary learning model for medical image fusion denoising. By incorporating a low-rank and sparse regularization term into the sparse model, this proposal removes noise while preserving textural details.
Further contributions, such as the one proposed by Veraart et al. [9], use random matrix theory for filtering typical fluctuations originating from thermal noise in diffusion-weighted MR images. Phophalia and Mitra [43] use kernel principal component analysis (KPCA) and Rough Set Theory (RST) for denoising volumetric MR data. RST is used to group similar voxels into basis vectors, which are projected into a kernel space where linear separation is possible, PCA is performed, and the filtering is carried out.
Baselice et al. [44] introduced an approach that works in the complex domain, arguing that, in this domain, the filtering benefits from a simplified statistical model of the signal, which is Gaussian with zero mean. This gives the method an advantage over methods that operate in the amplitude domain, where the noise is not Gaussian.
Although some authors have recently focused on accurately estimating the noise [45] and on correcting the bias produced by the aggregation of non-stationary distributed MR image samples before trying to remove the noise [46,47], new research has focused on denoising MR images by applying machine learning-based techniques. Chang et al. [6] presented an extension of the bilateral filter that adapts to voxel intensity variations by including an intensity similarity function, and automates the calculation of the denoising parameter values by extracting texture descriptors through an ensemble classifier of support vector machines and artificial neural networks. Other works use deep networks for exploring the similarity between neighboring slices in order to reduce over-smoothing [5] and for denoising dynamic contrast-enhanced MR images [8]. Zhang et al. [48] proposed a feed-forward denoising convolutional neural network (DnCNN) that uses batch normalization and a residual learning architecture to denoise images, shortening the training process and improving the denoising performance. Based on the DnCNN, Jiang et al. [49] proposed a denoising method for brain MR images; they proposed a multichannel version of the DnCNN for dealing with 3D images, improving filtering performance at various noise levels. Kidoh et al. [50] also proposed a deep learning-based reconstruction method for denoising brain MR images. Their method reduces image noise while preserving image quality, outperforming the DnCNN.
This paper presents a filter for denoising MR magnitude images. Unlike some of the works presented, based on NLM [17,18,19,20], SVD [26,27], sparse representations [7,40,41,42], or some combination of these [21,28,29,30,41], we present a novel method that does not propose a new sparse model but fully exploits one of the most representative examples of sparse representation: dictionary learning. We use dictionary learning both to orchestrate the non-local and SVD approaches efficiently and to improve the representation of fine details of the image, by (1) enhancing the learned dictionary and determining the weighting coefficients of the patches according to the global influence of the dictionary atoms, which filters noise by aggregation while avoiding blurring and keeping fine details; (2) establishing the self-similarity of the image efficiently, in a clean space, by analyzing the dictionary atoms and the sparse coding matrix, which allows us to accurately recover similar patches throughout the volume without resorting to the exhaustive comparison of each patch in the volume, because the self-similarity can be inferred from the sparse coding matrix, thus minimizing the appearance of artifacts; and (3) unlike [25,26,27], which establish complex criteria that use optimization models to determine the best low-rank approximation for filtering noise, our method, based on the similar patches identified from the sparse representation, determines the best low-rank approximation simply and efficiently, according to the estimated noise level of the image, thus reducing the remaining noise in the denoised image.
The rest of this paper is organized as follows: Section 2 presents the fundamentals of sparse representations. Section 3 presents the proposed method. Section 4 presents the results and discussion. Finally, Section 5 presents the conclusions of this work.

2. Sparse Representation

Dictionary Learning is a typical example of sparse representation. It aims to find the sparse representation of a set of signals $\mathbf{Y}$ from a dictionary $\mathbf{D}$ learned from the signals themselves [51], i.e., $\mathbf{Y} \approx \mathbf{D}\mathbf{A}$, where $\mathbf{A}$ is the sparse coding matrix, or the sparse representation of $\mathbf{Y}$.
Formally, let $\mathbf{Y} := [\mathbf{y}_1\ \mathbf{y}_2\ \cdots\ \mathbf{y}_P] \in \mathbb{R}^{n \times P}$ be a matrix representing a set of $P$ $n$-dimensional signals. Dictionary learning aims to find a dictionary $\mathbf{D} := [\mathbf{d}_1\ \mathbf{d}_2\ \cdots\ \mathbf{d}_K] \in \mathbb{R}^{n \times K}$ and a sparse matrix $\mathbf{A} := [\boldsymbol{\alpha}_1\ \boldsymbol{\alpha}_2\ \cdots\ \boldsymbol{\alpha}_P] \in \mathbb{R}^{K \times P}$ such that $\mathbf{Y}$ can be approximated by a linear combination of the basis vectors (column vectors known as atoms) of $\mathbf{D}$ according to $\mathbf{A}$. Most of the coefficients $\alpha_{i,j}$ of $\mathbf{A}$ are zero, or close to zero [52]. Dictionary learning can typically be formulated as an optimization problem, as Equation (1) indicates:
$$\min_{\mathbf{D},\mathbf{A}} \sum_{c=1}^{P} \left\| \mathbf{y}_c - \mathbf{D}\boldsymbol{\alpha}_c \right\|_2^2 \quad \text{s.t.} \quad \left\| \boldsymbol{\alpha}_c \right\|_0 \le L \qquad (1)$$
In this formulation, $\mathbf{y}_c$ is the c-th column vector (signal) of $\mathbf{Y}$, $\mathbf{D}$ is learned from the signals themselves, $L$ is a regularization parameter, and $\boldsymbol{\alpha}_c$ is the c-th column vector of $\mathbf{A}$. The term $\|\boldsymbol{\alpha}_c\|_0$ measures the sparsity of the decomposition; it can be understood as the number of non-zero coefficients in $\boldsymbol{\alpha}_c$ (the sparse coefficients), so that the signals in $\mathbf{Y}$ are approximated as sparsely as possible.
The KSVD algorithm proposed by Aharon et al. in [53,54] allows us to obtain the optimal values of $\mathbf{D}$ and $\mathbf{A}$ in Equation (1). By analyzing the sparse representation given by $\mathbf{D}$ and $\mathbf{A}$, some characteristics of the signals in $\mathbf{Y}$ can be detected. Aspects such as the similarity between the signals and their rarity are easily manageable from their sparse representation and can be useful for improving the filtering process. This will be addressed below.
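To make the formulation concrete, the sketch below learns a dictionary and sparse codes for a toy signal matrix. It uses scikit-learn's DictionaryLearning as a stand-in for KSVD (the dictionary update differs from KSVD, but the resulting structure, $\mathbf{Y} \approx \mathbf{D}\mathbf{A}$ with $\ell_0$-constrained codes, matches Equation (1)); the toy sizes are arbitrary.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# Toy signal matrix Y: P = 500 signals of dimension n = 64 (columns).
rng = np.random.default_rng(0)
Y = rng.standard_normal((64, 500))

learner = DictionaryLearning(
    n_components=128,             # K atoms (overcomplete: K > n)
    fit_algorithm="cd",           # dictionary update (stand-in for KSVD)
    transform_algorithm="omp",    # sparse coding step
    transform_n_nonzero_coefs=5,  # L: at most 5 non-zero coefficients per signal
    max_iter=20,
    random_state=0,
)
# scikit-learn expects samples as rows, hence the transposes.
A = learner.fit_transform(Y.T).T  # sparse coding matrix A, shape (K, P)
D = learner.components_.T         # dictionary D, shape (n, K)
Y_approx = D @ A                  # Y is approximated by D A, as in Equation (1)
```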

3. Proposed Method for MRI Denoising

The typical procedure for denoising images via sparse representations consists of decomposing the noisy image $I$ into fully overlapped $n \times n$ patches, converting them into signals, and filtering them using a model such as that in Equation (1). This procedure can be extended to MR images for processing the whole volume; however, as mentioned before, noise in MR images follows a Rician distribution, which is not additive but dependent on the data. To handle this type of noise, and to obtain an accurate estimate of its stabilized standard deviation $\gamma$, we apply the variance stabilization transformation (VST) proposed by Foi [12] before processing the noisy volume $V$, which gives us $V_{\mathrm{VST}}$. Then, we divide the volume $V_{\mathrm{VST}}$ into fully overlapped sub-volumes of size $n \times n \times n$. Each sub-volume is converted into a signal $\mathbf{y}_c \in \mathbb{R}^{n^3 \times 1}$ to form the matrix $\mathbf{Y} := [\mathbf{y}_1\ \mathbf{y}_2\ \cdots\ \mathbf{y}_P] \in \mathbb{R}^{n^3 \times P}$, where $P$ is the number of signals. Now $\mathbf{Y}$ can be reconstructed by solving Equation (1) using the KSVD algorithm.
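A minimal sketch of the sub-volume extraction step is shown below; volume_to_signals is a hypothetical helper (not part of the paper's code), and the VST itself, which follows Foi [12], is not reproduced here.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def volume_to_signals(volume, n):
    """Vectorize every fully overlapped n x n x n sub-volume (stride 1) of a
    3D volume into a column of the signal matrix Y, of shape (n^3, P)."""
    windows = sliding_window_view(volume, (n, n, n))   # shape (I, J, K, n, n, n)
    P = windows.shape[0] * windows.shape[1] * windows.shape[2]
    return windows.reshape(P, n**3).T

# Toy usage: a 16^3 volume and n = 3 gives P = 14^3 = 2744 signals.
V_vst = np.random.default_rng(0).random((16, 16, 16))
Y = volume_to_signals(V_vst, n=3)
print(Y.shape)   # (27, 2744)
```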
However, the reconstruction given by $\mathbf{D}\mathbf{A}$ leads to the recovery of an MR image that suffers from blurring and artifacts and keeps some residual noise, as shown in Figure 1. To overcome this, we focus on three important issues of the sparse representation: the weighting of its atoms, the self-similarity that emerges from them, and the scales that these representations can handle. The three steps that make up our method address each of the issues above. Figure 2 illustrates the steps of our method.

3.1. Weighting the Dictionary Atoms

In digital image processing, there exist several approaches for denoising corrupted images; one of them is filtering in the spatial domain using masks. The coefficients of the spatial masks are set depending on both the noise level and the type of noise. For Gaussian noise, the coefficients are weighting factors set according to the distance to the center of the mask [55]. The central coefficient gives the weight of the pixel under analysis during the denoising process. The greater this coefficient is, the more similar the filtered pixel will be to the original, which implies that this coefficient can control the blurring produced by the smoothing process. We apply this reasoning to dictionary learning, but instead of weighting the pixels of the image, we weight the atoms.
To denoise the image while keeping fine details, which is essential in MR images, this new approach weights the atoms according to their global influence. We define the global influence of a dictionary atom as the probability that such an atom appears in the linear combination that reconstructs a signal. For example, after finding the optimal $\mathbf{A}$ in Equation (1), the global influence of an atom is proportional to the number of non-zero coefficients corresponding to that atom in $\mathbf{A}$. Formally, the global influence $\mathrm{gi}$ of an atom $\mathbf{d}_c$ of $\mathbf{D}$ is equal to the probability that $\mathbf{d}_c$ affects a signal, as Equation (2) indicates.
$$\mathrm{gi}(\mathbf{d}_c) = \frac{f(\mathbf{d}_c)}{\sum_{k=1}^{K} f(\mathbf{d}_k)} \qquad (2)$$
$$f(\mathbf{d}_c) = \sum_{p=1}^{P} g(\alpha_{c,p}) \qquad (3)$$
$$g(\alpha_{c,p}) = \begin{cases} 1, & \text{if } \alpha_{c,p} \neq 0 \\ 0, & \text{otherwise} \end{cases} \qquad (4)$$
where $\alpha_{c,p}$ is the entry in the c-th row and p-th column of the matrix $\mathbf{A}$. The global influence is used to reinforce the effect of the atoms that appear most frequently in the signals, in order to reduce the blurring caused by the averaging of sub-volumes during the aggregation. In the case of brain MR images, this global influence is used to reinforce atoms that represent common patterns found in these images, such as the edges between brain structures, and therefore to prevent blurring in such patterns. Equation (5) updates the dictionary to reinforce the atoms according to their global influence.
$$\mathbf{D} = \mathbf{D}\,\mathrm{Diag}(\mathbf{1} + \delta\,\mathbf{GI}) \qquad (5)$$
where $\mathbf{GI} := [\mathrm{gi}(\mathbf{d}_1), \mathrm{gi}(\mathbf{d}_2), \ldots, \mathrm{gi}(\mathbf{d}_K)]$ is a vector containing the global influence of all atoms, $\delta$ is a constant for controlling the influence ($\delta = 8$, obtained experimentally by numerical tests), $\mathbf{1}$ is a vector of ones, and $\mathrm{Diag}(\cdot)$ is the $K \times K$ diagonal matrix whose diagonal entries are the components of $\mathbf{1} + \delta\,\mathbf{GI}$. Now $\hat{\mathbf{Y}}$, a noise-free reconstruction of $\mathbf{Y}$ that keeps fine details, is computed using Equation (6).
$$\hat{\mathbf{Y}} = \mathbf{D}\mathbf{A} \qquad (6)$$
Although $\hat{\mathbf{Y}}$ improves the representation of fine details, there is still some residual noise that can be eliminated by taking advantage of the self-similarity of the image, which can be easily examined from the sparse representation given by $\mathbf{D}$ and $\mathbf{A}$. This will be addressed below.
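The following sketch implements Equations (2)-(6) under the assumption that D is an n x K dictionary and A is the K x P sparse coding matrix, as in the earlier sketches; the function names are ours.

```python
import numpy as np

def global_influence(A):
    """Equations (2)-(4): normalized count of non-zero coefficients per atom
    (per row of the sparse coding matrix A)."""
    counts = np.count_nonzero(A, axis=1).astype(float)  # f(d_k) for every atom
    return counts / counts.sum()                        # gi(d_k)

def reinforce_dictionary(D, A, delta=8.0):
    """Equation (5): scale each atom (column of D) by 1 + delta * gi(d_k),
    which equals D @ Diag(1 + delta * GI)."""
    gi = global_influence(A)
    return D * (1.0 + delta * gi)[np.newaxis, :]

# Usage with D (n x K) and A (K x P) from the dictionary learning sketch:
# D_reinforced = reinforce_dictionary(D, A, delta=8.0)
# Y_hat = D_reinforced @ A        # Equation (6)
```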

3.2. Non-Local SVD Filtering

3.2.1. Signal Grouping

Non-local filtering outperforms local filtering [56,57]; however, it depends on parameters such as search windows and thresholds [18]. It is also affected by searching for similar patches in a noisy space, which may degrade the patch grouping and therefore lead to a bad estimation of the pixel. Instead, we search for similar signals in the clean space given by $\mathbf{D}\mathbf{A}$, without search windows, because the sparse representation allows examining the self-similarity of the image through the dictionary atoms.
To perform non-local filtering based on sparse representations, we rely on the fact that similar signals admit similar sparse representations [58,59]; hence, to find signals similar to a given signal $\hat{\mathbf{y}}_c$, we start by determining which atoms make up $\hat{\mathbf{y}}_c$, using Equation (7).
$$\mathcal{I}(\hat{\mathbf{y}}_c) := \left\{ r : \alpha_{r,c} \neq 0,\ r = 1, 2, \ldots, K \right\} \qquad (7)$$
where $K$ is the number of atoms in $\mathbf{D}$ (the number of rows of $\mathbf{A}$). $\mathcal{I}$ denotes the set of influencer atoms of $\hat{\mathbf{y}}_c$, i.e., the set of atoms belonging to $\mathbf{D}$ that are linearly combined according to $\boldsymbol{\alpha}_c$ to generate $\hat{\mathbf{y}}_c$ (Figure 2 encloses an influencer atom of $\hat{\mathbf{y}}_c$ in a red rectangle in the SIGNAL GROUPING box). Then, we search for the greatest influencer atoms of $\hat{\mathbf{Y}}$, as Equation (8) indicates.
$$\mathcal{M}(\hat{\mathbf{Y}}) := \left\{ (r, c) : |\alpha_{r,c}| = \max(|\boldsymbol{\alpha}_c|),\ c = 1, 2, \ldots, P \right\} \qquad (8)$$
where $|\boldsymbol{\alpha}_c| := [|\alpha_{1,c}|, |\alpha_{2,c}|, \ldots, |\alpha_{K,c}|]$ is the vector $\boldsymbol{\alpha}_c$ with its components in absolute value, $\max(|\boldsymbol{\alpha}_c|)$ denotes the largest component of $|\boldsymbol{\alpha}_c|$, and $P$ is the number of signals in $\hat{\mathbf{Y}}$. $\mathcal{M}$ denotes the set of ordered pairs $(r, c)$ meaning that $\mathbf{d}_r$ of $\mathbf{D}$ is the greatest influencer atom of the signal $\hat{\mathbf{y}}_c$ of $\hat{\mathbf{Y}}$. From the two previous sets, the set $\mathrm{CSS}(\hat{\mathbf{y}}_c)$ of candidate similar signals of $\hat{\mathbf{y}}_c$ is defined according to Equation (9).
$$\mathrm{CSS}(\hat{\mathbf{y}}_c) := \left\{ \hat{\mathbf{y}}_j : (r, j) \in \mathcal{M}(\hat{\mathbf{Y}}) \wedge r \in \mathcal{I}(\hat{\mathbf{y}}_c) \right\} \qquad (9)$$
where $\mathrm{CSS}(\hat{\mathbf{y}}_c)$ contains all the signals $\hat{\mathbf{y}}_j$ of $\hat{\mathbf{Y}}$ whose greatest influencer atom belongs to $\mathcal{I}(\hat{\mathbf{y}}_c)$. Figure 3a shows an example of a given reference signal $\hat{\mathbf{y}}_c$ in an MR image. Figure 3b shows the set of candidate similar signals $\mathrm{CSS}(\hat{\mathbf{y}}_c)$ of the reference signal of Figure 3a, obtained using Equation (9). To obtain the set of similar signals of $\hat{\mathbf{y}}_c$, denoted by $\mathrm{SS}(\hat{\mathbf{y}}_c)$ (the signals pointed out by green arrows in Figure 2), and to achieve a better filtering performance in visual terms, we rely on the fact that the human eye is relatively less sensitive to dark signals than to bright ones [60]. Therefore, we do not define a fixed threshold for selecting similar signals, but a variable one, dependent on both the magnitude of the reference signal and the noise level of the image. This variable threshold allows reference signals with a large magnitude to admit less difference in their similar signals than the dark ones; in addition, it allows increasing the number of similar signals for MR images with high noise levels, to obtain a better estimation of the noise-free reference signal. Hence, we define $\mathrm{SS}(\hat{\mathbf{y}}_c)$ according to Equation (10).
$$\mathrm{SS}(\hat{\mathbf{y}}_c) := \left\{ \hat{\mathbf{y}}_j \in \mathrm{CSS}(\hat{\mathbf{y}}_c) : \frac{\left\| \hat{\mathbf{y}}_j - \hat{\mathbf{y}}_c \right\|}{\left\| \hat{\mathbf{y}}_c \right\|} \le \left( b - a\,\frac{\left\| \hat{\mathbf{y}}_c \right\|}{\left\| \mathbf{1}_{n^3} \right\|} \right)(1 + \gamma) \right\} \qquad (10)$$
where $b$ and $b - a$ are, respectively, the upper and lower limits of the interval containing the variable threshold that establishes the relative similarity between the signals ($a = 0.06$ and $b = 0.18$, obtained experimentally by numerical tests), $\mathbf{1}_{n^3}$ is an $n^3$-dimensional vector of ones (with $n$ being the edge of the sub-volume), and $\gamma$ is the estimated noise level. The left side of the inequality in Equation (10) is undefined when $\hat{\mathbf{y}}_c = \mathbf{0}$; hence, we replace it with $\|\hat{\mathbf{y}}_j\|$ when $\|\hat{\mathbf{y}}_c\| \le 10^{-6}$. Please note that a bright signal (a signal with a large magnitude) admits less difference in its similar signals than a dark one, because the magnitude of the threshold $b - a\,\|\hat{\mathbf{y}}_c\| / \|\mathbf{1}_{n^3}\|$ decreases as $\|\hat{\mathbf{y}}_c\|$ increases. Also note that, although the threshold is inversely proportional to the magnitude of the reference signal within the interval $[b - a, b]$, it will be greater for signals of high-noise-level images than for signals of low-noise-level images, because of the term $(1 + \gamma)$. We can increase the threshold for collecting similar signals on high-noise-level images without significantly affecting the subsequent estimation of the noise-free reference signal, because we search for similar signals in the clean space given by the sparse representation.
Figure 3c shows the set of similar signals $\mathrm{SS}(\hat{\mathbf{y}}_c)$ of the reference signal given in Figure 3a, obtained after applying Equation (10) to $\mathrm{CSS}(\hat{\mathbf{y}}_c)$. The red arrows in Figure 3c point out similar signals far away from the reference signal, demonstrating that the sparse representation allows us to locate similar signals throughout the MR image without examining the entire volume. It also shows that our method can locate signals in a space not restricted to search windows; therefore, we can collect a larger number of similar signals (with respect to methods that use such windows), which improves the denoising.
Now, the set $\mathrm{SS}(\hat{\mathbf{y}}_c)$ can be used for estimating $\hat{\mathbf{y}}_c$ in a non-local way; however, as mentioned before, all the signals in $\hat{\mathbf{Y}}$ keep some residual noise, and a better estimation could be carried out using a cleaner version of $\mathrm{SS}(\hat{\mathbf{y}}_c)$.
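A possible implementation of the grouping stage (Equations (7)-(10)) is sketched below, following the reconstruction of Equation (10) given above and assuming voxel intensities scaled to [0, 1] so that $\|\hat{\mathbf{y}}_c\| \le \|\mathbf{1}_{n^3}\|$; similar_signals is a hypothetical helper, not the paper's code.

```python
import numpy as np

def similar_signals(Y_hat, A, c, a=0.06, b=0.18, gamma=0.0, eps=1e-6):
    """Return the column indices of the signals in SS(y_c) for the reference
    signal in column c of Y_hat, using only the sparse codes A."""
    # I(y_c): atoms with a non-zero coefficient in column c          (Eq. 7)
    influencers = np.flatnonzero(A[:, c])
    # M(Y_hat): index of the greatest influencer atom of each signal (Eq. 8)
    greatest = np.argmax(np.abs(A), axis=0)
    # CSS(y_c): signals whose greatest influencer atom is in I(y_c)  (Eq. 9)
    candidates = np.flatnonzero(np.isin(greatest, influencers))
    # Variable threshold driven by ||y_c|| and the noise level gamma (Eq. 10)
    y_c = Y_hat[:, c]
    norm_c = np.linalg.norm(y_c)
    ones_norm = np.sqrt(Y_hat.shape[0])                 # ||1_{n^3}||
    threshold = (b - a * norm_c / ones_norm) * (1.0 + gamma)
    if norm_c <= eps:                                   # relative distance undefined
        dist = np.linalg.norm(Y_hat[:, candidates], axis=0)
    else:
        diff = Y_hat[:, candidates] - y_c[:, None]
        dist = np.linalg.norm(diff, axis=0) / norm_c
    return candidates[dist <= threshold]
```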

3.2.2. Efficient SVD Filtering

A well-known way of reducing the noise of an image is by analyzing its singular value decomposition (SVD). The SVD is a matrix factorization that decomposes a given matrix into three new matrices $\mathbf{U}$, $\boldsymbol{\Sigma}$ and $\mathbf{V}$ so that $\mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\top}$, where $\mathbf{U}$ and $\mathbf{V}$ are orthogonal matrices and $\boldsymbol{\Sigma}$ is a diagonal matrix whose diagonal entries are the singular values of $\mathbf{X}$. Alternatively, in summation form, $\mathbf{X} = \sum_{i=1}^{n} \sigma_i \mathbf{u}_i \mathbf{v}_i^{\top}$, which can be understood as a weighted sum of component images, where the eigen-images corresponding to the smallest singular values are commonly associated with noise [61]. Therefore, a vital issue for SVD-based denoising is determining which singular values correspond to the noisy component images. According to the Eckart–Young–Mirsky theorem [62], given a matrix $\mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\top}$, then for any rank-$k$ matrix $\hat{\mathbf{X}}$ it holds that:
$$\left\| \mathbf{X} - \hat{\mathbf{X}} \right\|_F \ge \left\| \mathbf{X} - \mathbf{U}\boldsymbol{\Sigma}_k\mathbf{V}^{\top} \right\|_F = \sqrt{\sum_{i=k+1}^{n} \sigma_i^2} \qquad (11)$$
where $\|\cdot\|_F$ is the Frobenius norm and $\boldsymbol{\Sigma}_k$ is the matrix $\boldsymbol{\Sigma}$ of the SVD of $\mathbf{X}$ with the singular values $\sigma_{k+1} = \sigma_{k+2} = \cdots = \sigma_n = 0$. By squaring Equation (11), Equation (12) is obtained.
$$\left\| \mathbf{X} - \hat{\mathbf{X}} \right\|_F^2 \ge \left\| \mathbf{X} - \mathbf{U}\boldsymbol{\Sigma}_k\mathbf{V}^{\top} \right\|_F^2 = \sum_{i=k+1}^{n} \sigma_i^2 \qquad (12)$$
Taking the right side of the inequality and comparing it with $\gamma^2$, where $\gamma$ is the stabilized standard deviation of the noise (estimated according to [12]), we have:
$$\sum_{i=k+1}^{n} \sigma_i^2 \ge \gamma^2 \qquad (13)$$
Equation (13) tells us that, when the sum of squares of the $n - k$ least significant singular values of $\mathbf{X}$ is equal to or greater than $\gamma^2$, we have the best $k$ for filtering the noise of $\mathbf{X}$. These $n - k$ least significant singular values correspond to the noisy component images (noisy component volumes in our case); therefore, by using only the first $k$ most significant singular values, we discard such noisy component images of $\mathbf{X}$.
Now, to filter the residual noise of both $\hat{\mathbf{y}}_c$ and $\mathrm{SS}(\hat{\mathbf{y}}_c)$, let us define $\mathbf{X}$ as the matrix made up of $\hat{\mathbf{y}}_c$ and its similar signals, i.e., $\mathbf{X} = [\hat{\mathbf{y}}_c\ \ \mathrm{SS}(\hat{\mathbf{y}}_c)]$; following Equation (13), the noise-free version $\hat{\mathbf{X}}$ is obtained from the low-rank approximation given by $\mathbf{U}\boldsymbol{\Sigma}_k\mathbf{V}^{\top}$, considering the estimated noise level $\gamma$.

3.2.3. Signal Estimation

$\hat{\mathbf{y}}_c$ is updated from $\hat{\mathbf{X}}$ according to Equation (14).
$$\hat{\mathbf{y}}_c = \frac{1}{l} \sum_{i=1}^{l} \hat{\mathbf{x}}_i \qquad (14)$$
where $\hat{\mathbf{x}}_i$ is the $i$-th signal of $\hat{\mathbf{X}}$ and $l$ is the number of signals in $\hat{\mathbf{X}}$.
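A minimal sketch of this filtering and estimation step is given below: it applies the truncation rule of Equation (13), keeping the largest $k$ whose discarded singular values carry an energy of at least $\gamma^2$, and then averages the columns as in Equation (14). The helper name svd_denoise_group and the fallback to one component are ours.

```python
import numpy as np

def svd_denoise_group(X, gamma):
    """Filter the group X = [y_c | SS(y_c)] with a truncated SVD: keep the
    largest k such that the discarded singular values carry an energy of at
    least gamma^2 (Equation (13)), then average the columns (Equation (14))."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # tail[k] = sum of squared singular values discarded when keeping k of them
    tail = np.cumsum((s**2)[::-1])[::-1]
    valid = np.flatnonzero(tail >= gamma**2)
    k = max(int(valid[-1]), 1) if valid.size else 1     # keep at least one component
    X_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    return X_hat, X_hat.mean(axis=1)                    # (filtered group, estimated y_c)
```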

3.2.4. Aggregation

The restoration of the MR sub-volumes from $\hat{\mathbf{Y}}$ involves the aggregation of overlapped voxels, which can lead to a blurry recovery of the voxel. However, as mentioned before, weighting the voxels can control blurring. The voxels to be aggregated belong to sub-volumes (signals), which may be frequent (having many similar signals) or rare (having few similar signals). Since the frequent sub-volumes are better estimated than the rare sub-volumes, because of the number of signals averaged in Equation (14), their voxels should have more weight than the voxels from the rare sub-volumes. Therefore, we propose weighting each voxel according to the rarity of the signal it belongs to.
Similar to the procedure outlined in Leal et al. [63], the global influence of the atoms can be used to establish the rarity of a signal: a signal is rare if it accumulates little global influence and frequent if it accumulates a lot of global influence. Equation (15) computes the rarity of a signal based on the global influence of the atoms that make it up.
$$R(\hat{\mathbf{y}}_c) = \left\langle \mathbf{GI}, \boldsymbol{\alpha}_c \right\rangle \qquad (15)$$
where $R(\hat{\mathbf{y}}_c)$ denotes the rarity score of the signal $\hat{\mathbf{y}}_c$, $\langle \cdot, \cdot \rangle$ denotes the dot product, and $\boldsymbol{\alpha}_c$ is the c-th column vector of $\mathbf{A}$, which is the sparse representation of $\hat{\mathbf{y}}_c$ on $\mathbf{D}$.
High scores of Equation (15) correspond to frequent signals, while low scores correspond to rare signals. The denoised version of the voxel $v$ is obtained according to Equation (16), after converting the signals back into sub-volumes.
$$v = \frac{\sum_{V_i \in C(v)} R(V_i)\, v_i}{\sum_{V_i \in C(v)} R(V_i)} \qquad (16)$$
where $v_i$ is the version of the voxel $v$ contained in the sub-volume $V_i$, $R(V_i)$ is the rarity score of the signal $\hat{\mathbf{y}}_i$ corresponding to the sub-volume $V_i$, and $C(v)$ denotes the set of sub-volumes $V_i$ that contain a version of $v$.
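The sketch below illustrates Equations (15) and (16), assuming the signals were extracted with stride 1 in the same i, j, k order as the extraction sketch above; the loop-based aggregation is written for clarity rather than speed, and the function names are ours.

```python
import numpy as np

def rarity_scores(gi, A):
    """Equation (15): rarity score of each signal as the dot product between
    the global influence vector GI and its sparse code (column of A)."""
    return gi @ A                                     # shape (P,)

def aggregate(signals, weights, shape, n):
    """Equation (16): weighted average of the overlapped voxels, with one
    weight (rarity score) per sub-volume."""
    num = np.zeros(shape)
    den = np.zeros(shape)
    idx = 0
    for i in range(shape[0] - n + 1):
        for j in range(shape[1] - n + 1):
            for k in range(shape[2] - n + 1):
                cube = signals[:, idx].reshape(n, n, n)
                num[i:i + n, j:j + n, k:k + n] += weights[idx] * cube
                den[i:i + n, j:j + n, k:k + n] += weights[idx]
                idx += 1
    return num / np.maximum(den, 1e-12)

# Usage with Y_hat (n^3 x P), A (K x P) and gi from the previous sketches:
# V_filtered = aggregate(Y_hat, rarity_scores(gi, A), V_vst.shape, n=3)
```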

3.3. Scaling

An important parameter for denoising based on sparse representations is the patch size. Large patches lead to denoised images that suffer from blurring, while small patches lead to denoised images that can keep a high level of noise; however, small patches also keep fine details of the original image.
Averaging images denoised using different patch sizes can achieve a good trade-off between blurring and fine details: the noisy image is denoised several times, each time with a different patch size, and the denoised versions of the image are then averaged. The different patch sizes used to process the image are known as scales. Let us define $\hat{V}_n$ as the denoised version of a noisy MR image $V$ obtained with the proposed method and a sub-volume of size $n \times n \times n$. The averaged denoised version $\hat{V}$ of $V$ is obtained using Equation (17).
$$\hat{V} = \frac{1}{|VS|} \sum_{i \in VS} \hat{V}_i \qquad (17)$$
where $|VS|$ denotes the number of elements in $VS$, and $VS$ is the set of different sub-volume sizes used to process $V$. In our tests, we achieved the best results by averaging the denoised versions $\hat{V}_3$ and $\hat{V}_4$. To obtain the final output, we apply the inverse VST (VST$^{-1}$) to the averaged denoised volume. The denoising of Rician data using the proposed method, Non-Local SVD Denoising (NLSD), is summarized by Equation (18):
$$\hat{V} = \mathrm{VST}^{-1}\!\left( \mathrm{NLSD}\!\left( \mathrm{VST}(V, \sigma), \gamma \right), \sigma \right) \qquad (18)$$
where $\gamma$ is the stabilized standard deviation induced by the VST and $V$ is the volume corrupted by Rician noise with standard deviation $\sigma$.
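Putting the stages together, Equation (18) corresponds to a pipeline of the following shape; forward_vst, inverse_vst, and denoise_at_scale are placeholders for the VST of Foi [12] and for the three stages described above, not actual implementations.

```python
import numpy as np

def forward_vst(volume, sigma):
    """Placeholder for the forward VST of Foi [12]; would return the
    stabilized volume and the stabilized noise standard deviation gamma."""
    raise NotImplementedError

def inverse_vst(volume, sigma):
    """Placeholder for the inverse VST (VST^-1)."""
    raise NotImplementedError

def denoise_at_scale(v_vst, gamma, n):
    """Placeholder for the three NLSD stages at sub-volume size n."""
    raise NotImplementedError

def nlsd_denoise(noisy_volume, sigma, scales=(3, 4)):
    """Overall shape of Equations (17)-(18): VST, filtering at each scale,
    averaging over scales (Eq. 17), and inverse VST (Eq. 18)."""
    v_vst, gamma = forward_vst(noisy_volume, sigma)
    denoised = [denoise_at_scale(v_vst, gamma, n) for n in scales]
    v_hat = np.mean(denoised, axis=0)
    return inverse_vst(v_hat, sigma)
```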

4. Results and Discussion

We conducted several experiments to obtain the parameter values for the optimal setting of our method and to compare it with some state-of-the-art methods. These experiments were conducted in MATLAB R2016a running under the Windows 10 Professional OS on a first-generation Intel Core i7 CPU with 8 GB of memory.
A dataset of MR phantom images (1 T1w, 1 T2w, and 1 PDw) from the BrainWeb database [64], with a size of 181 × 217 × 181 voxels (resolution = 1 mm³), was used for numerical comparisons, as well as for setting the optimal parameter values of our method.

4.1. Parameter Estimation

The first stage of our method, which is preceded by the VST, computes the global influence to reinforce the atoms and avoid blurring in the denoised image. This reinforcement is controlled by the parameter $\delta$, which does not intervene in the remaining stages. To determine a suitable value for this parameter, we performed an experiment that consisted of corrupting the dataset of MR phantom images with different levels of Rician noise (1–10% of the maximum intensity), i.e., 10 different noisy versions were obtained from each volume of the dataset, one per noise level, for a total of 30 corrupted images. Then, we ran the first stage of our method on each corrupted image, followed by the inverse VST. Finally, we calculated the widely recognized quality measures Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) [65] of each denoised image to numerically establish the best value for this parameter.
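For reference, the two quality measures can be computed with scikit-image as in the sketch below; the evaluate helper and the sweep outline are ours (the paper's experiments were run in MATLAB).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(clean, denoised):
    """PSNR and SSIM of a denoised volume against the noise-free phantom."""
    data_range = float(clean.max() - clean.min())
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=data_range)
    ssim = structural_similarity(clean, denoised, data_range=data_range)
    return psnr, ssim

# Hypothetical sweep: average both metrics over the 30 corrupted phantoms for
# each candidate delta, then keep the delta that maximizes the mean PSNR.
```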
Figure 4 shows the results of applying the first stage of our method over the dataset of MR phantom images corrupted with the different Rician noise levels. Each point on the green curve is the average PSNR of the denoised images for a given value of δ , and each point on the blue curve is the average SSIM.
We observed that the PSNR is maximum (31.991) at $\delta = 8$ and the SSIM is maximum (0.8934) at $\delta = 3$. We prefer the value of $\delta$ that yields the maximum PSNR rather than the one that yields the maximum SSIM, since the PSNR is more sensitive to noise than the SSIM (which correlates better with the human visual system) [66], and choosing the $\delta$ that gives the maximum PSNR provides a cleaner search space for the next stage. Additionally, the best pair of PSNR and SSIM values is reached at $\delta = 8$.
The second stage of our method performs the non-local SVD filtering. At this stage, we search for signals similar to a reference signal in the clean space given by the first stage. We defined a variable threshold in the interval $[b - a, b]$ for establishing the level of similarity between the signals (right side of the inequality in Equation (10)). This threshold depends on the magnitude of $\hat{\mathbf{y}}_c$ and on the noise level $\gamma$ estimated in the first stage. The interval where the threshold moves is determined by the constants $a$ and $b$. To establish suitable values for these constants, we performed an experiment that consisted of running the second stage of our method on the sparse representation $\hat{\mathbf{Y}} = \mathbf{D}\mathbf{A}$ of each denoised image produced by the tests of the first stage (prior to the application of the inverse VST), trying different values for $a$ and $b$. In this experiment, the application of the second stage of our method was followed by the inverse VST. Figure 5 shows the results. Each point on the PSNR surface (Figure 5a) represents the average of the PSNRs computed after applying the second stage of our method to the images denoised by the first stage, for a given pair of values $a$ and $b$. Similarly, each point on the SSIM surface (Figure 5b) is the average SSIM.
As shown in Figure 5a, the maximum PSNR is 36.247, reached at $a = 0.06$ and $b = 0.18$; at this point, the SSIM is 0.9407. As shown in Figure 5b, the maximum SSIM is 0.9439, reached at $a = 0.02$ and $b = 0.17$; at this point, the PSNR is 36.104. We chose the pair $a = 0.06$ and $b = 0.18$ because the PSNR gain at this point, with respect to the PSNR at the point of maximum SSIM, is more significant than the SSIM gain at $a = 0.02$ and $b = 0.17$, with respect to the SSIM at the point of maximum PSNR.

4.2. Methods Comparison

We compared the proposed method for MRI denoising, NLSD, with the following widely adopted filters: KSVD [53], PRINLM [21] (implementation downloaded from [67]), BM4D [32] (implementation downloaded from [68]), and DnCNN [48] (implementation downloaded from [69]). The implementations of PRINLM and BM4D used for comparison are developed in MATLAB with their core developed in C++. The DnCNN, the KSVD, and the NLSD are implemented purely in MATLAB and do not use any graphic acceleration. The application of the KSVD was preceded by the VST and followed by its inverse. We performed both quantitative and qualitative comparisons to demonstrate the effectiveness of our method.

4.2.1. Synthetic Data

For numerical comparisons, we used the MR phantom images dataset, corrupted with Rician noise (1–10% of the maximum intensity). The PSNR, and the SSIM were used to evaluate the performance. Figure 6, Figure 7, Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16 and Figure 17 and Table 1, Table 2, Table 3 and Table 4 show the results of these comparisons.
Figure 6 shows the results of the comparisons on the T1w synthetic noise-free MR image corrupted with 5% Rician noise. It shows that the PRINLM, BM4D, DnCNN, and NLSD methods perform very well at removing noise; however, the PRINLM produces some artifacts, as the zoom-ins in Figure 7d,k show. The KSVD and the BM4D also produce artifacts, but to a lesser extent, as shown in Figure 7c,e and Figure 7j,l, respectively. The DnCNN over-smooths the image and therefore loses fine details, as shown in Figure 8m. The NLSD, instead, removes noise while keeping fine details and does not produce artifacts. The KSVD keeps a lot of residual noise, as does the BM4D, although to a much lesser extent, as shown in Figure 8c,e, respectively. Figure 8d,g show that the PRINLM and the NLSD, respectively, do not keep residual noise. Figure 8h–n show the zoom-in on the details labeled as "c" in Figure 6, which shows that the KSVD, BM4D, and DnCNN are affected by blurring (Figure 8j,l,m, respectively). The PRINLM (Figure 8k) and the NLSD (Figure 8n) keep fine details; however, the PRINLM tends to swell such details. Table 1 summarizes the results of the numerical tests on the T1w synthetic MR image. Bold highlights indicate the best results.
Figure 9 qualitatively compares the denoising methods from the residual images (images resulting from the difference between the denoised and noisy images). It shows that the KSVD retains some structural information, and the DnCNN does so to a lesser extent, while the PRINLM, the BM4D, and the NLSD perform very well. However, the NLSD presents both a higher PSNR and a higher SSIM, as Table 1 shows.
Figure 10 shows the results of the comparisons on the T1w synthetic noise-free MR image corrupted with 9% Rician noise. It shows that the PRINLM does not eliminate the noise in dark regions but tends to flatten it (circled area), and destroys fine details (red arrows). The BM4D also keeps noise in dark areas (circled area) and also destroys fine details. The DnCNN removes noise but over-blurs the image, destroying fine details. The KSVD keeps a lot of noise and destroys fine details. The NLSD, instead, keeps fine details and performs very well at removing noise. Zooming in on the images reveals these details. Figure 11 shows the qualitative comparison of the methods evaluated from the residual images. No structural information is observed in the residuals of the NLSD. The residuals of the KSVD and the DnCNN present some structural information. The residuals of the PRINLM and the BM4D also present structural information, although to a lesser extent.
Figure 12 shows the results of the comparisons on the T2w synthetic noise-free MR image corrupted with 7% Rician noise. It shows that the image filtered by the KSVD (Figure 12c) keeps high levels of noise, which is corroborated by the corresponding zoom-in of region "c" in Figure 13c. The BM4D also keeps noise, although to a lesser extent, as shown in region "c" in Figure 13e. The DnCNN over-blurs the image and destroys fine details, as can be seen in the corresponding zoom-in of region "b" in Figure 13f. The PRINLM and the NLSD perform very well at removing noise; however, as shown in the zoom-ins of regions "a" and "b" in Figure 13d, the PRINLM produces artifacts, whereas the NLSD does not. Figure 14 qualitatively compares the denoising methods from the residual images. It shows that the KSVD presents a correlated residual image, and so does the BM4D. The DnCNN presents a residual image that keeps some structural information, but to a lesser extent, while the PRINLM and the NLSD perform very well. However, the NLSD presents both a higher PSNR and a higher SSIM, as Table 2 shows. Table 2 summarizes the results of the numerical tests on the T2w synthetic MR image. Bold highlights indicate the best results.
Figure 15 shows the results of the comparisons on the PDw synthetic noise-free MR image corrupted with 5% Rician noise. Figure 16 shows the zoom-ins of the regions labeled as "a" and "b" in Figure 15. It shows that the BM4D, and mainly the KSVD, retain noise, as shown in the corresponding zoom-ins in Figure 16c,e. The DnCNN performs very well at filtering noise but over-blurs the image, destroying fine details, as can be seen in Figure 16f. The PRINLM and the NLSD perform very well, although the NLSD (Figure 16g) tends to preserve some fine structures better than the PRINLM; Figure 16d shows how the PRINLM tends to thicken or destroy such structures. Figure 17 qualitatively compares the denoising methods from the residual images. It shows that the KSVD presents a correlated residual image. The BM4D and the DnCNN also show a bit of correlation, while the PRINLM and the NLSD perform very well. However, the NLSD presents both a higher PSNR and a higher SSIM, as Table 3 shows.
Table 3 summarizes the results of the numerical tests on the PDw synthetic MR image. Bold highlights indicate the best results. Table 4 presents the running times of the methods.

4.2.2. Real Data

For qualitative comparisons on real clinical data, we used a dataset of T1w brain MR images acquired on a 1.5 T Siemens Magnetom scanner equipped with a standard head coil. The parameters of the T1w images were: TR = 7.9 ms, TE = 3.8 ms, ACQ matrix 220 × 220 pixels, voxel size 0.5 mm × 0.5 mm × 0.5 mm. We compared the NLSD results with the PRINLM, BM4D, and DnCNN results. The KSVD was not considered because its performance was much lower in the tests on synthetic data. Figure 18 shows the results. The four methods perform very well, as can be seen in their corresponding zoom-ins; however, the BM4D and DnCNN results are slightly blurry and retain some noise. The PRINLM also keeps some residual noise. It can be noted that, close to the dark areas, the BM4D and the PRINLM maintain slightly flattened residual noise, as can be seen in the zoom-in. The NLSD, instead, reduces noise while keeping fine details and yields better visual results. This can be seen around the skull, where there is almost no noisy pixel. This is due to the variable threshold in Equation (10) and to the better estimation of the signals achieved by searching for similar signals in the clean space given by $\mathbf{D}\mathbf{A}$.

5. Conclusions

In this paper, we presented a new method for MR image denoising based on non-local means, singular value decomposition, and sparse representation. Both qualitative and quantitative results validate the effectiveness of the proposed method. We demonstrated how the global influence of the dictionary atoms can act as a set of weighting factors to improve the noise filtering while keeping the fine details of the image and reducing the blurring caused by the averaging of overlapping patches.
We also demonstrated how the non-local means strategy can be applied directly from the dictionary without resorting to search windows, which limit the search for similar signals (patches or sub-volumes) to a given region. The dictionary, instead, allows finding similar signals at any location of the MR image without exploring the whole volume, thereby improving the filtering. Additionally, searching for similar signals in the clean space given by the sparse representation improves the collection of such signals, since the noise level in this space is lower than in the original space given by the noisy MR image. Finally, we also showed how the estimated noise level of the image can be used to efficiently compute the number of singular values needed to obtain a low-rank approximation of the set of similar signals and to reduce their noise before estimating the noise-free signal.

Author Contributions

All authors have read and agreed to the published version of the manuscript. Conceptualization, N.L.; methodology, N.L. and E.Z.; software, N.L. and E.L.; validation, N.L., E.Z. and E.L.; formal analysis, N.L.; investigation, N.L.; data curation, N.L.; writing–original draft preparation, N.L.; writing–review and editing, N.L.; visualization, N.L. and E.L.; supervision, E.Z.

Funding

This research was partially supported by the Administrative Department of Science and Technology of Colombia—COLCIENCIAS under the program of doctoral scholarship COLCIENCIAS-UNINORTE 2015-006.

Acknowledgments

We gratefully thank Mariana Pino for supplying the real clinical MR images used in this research.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Aja-Fernandez, S.; Tristan-Vega, A. A Review on Statistical Noise Models for Magnetic Resonance Imaging; Technical Report LPI, TECH-LPI2013-01; Universidad de Valladolid: Valladolid, Spain, 2013. [Google Scholar]
  2. Renfrew, D.L.; Franken, E.A.; Berbaum, K.S.; Weigelt, F.H.; Abu-Yousef, M.M. Error in radiology: Classification and lessons in 182 cases presented at a problem case conference. Radiology 1992, 183, 145–150. [Google Scholar] [CrossRef] [PubMed]
  3. Fitzgerald, R. Error in radiology. Clin. Radiol. 2001, 56, 938–946. [Google Scholar] [CrossRef] [PubMed]
  4. Mohan, J.; Krishnaveni, V.; Guo, Y. A survey on the magnetic resonance image denoising methods. Biomed. Signal Process. Control. 2014, 9, 56–69. [Google Scholar] [CrossRef]
  5. Ran, M.; Hu, J.; Chen, Y.; Chen, H.; Sun, H.; Zhou, J.; Zhang, Y. Denoising of 3D magnetic resonance images using a residual encoder decoder Wasserstein generative adversarial network. Med Image Anal. 2019, 55, 165–180. [Google Scholar] [CrossRef] [Green Version]
  6. Chang, H.H.; Li, C.Y.; Gallogly, A.H. Brain MR image restoration using an automatic trilateral filter with GPU-based acceleration. IEEE Trans. Biomed. Eng. 2018, 65, 400–413. [Google Scholar] [CrossRef]
  7. Kang, M.; Jung, M.; Kang, M. Rician denoising and deblurring using sparse representation prior and nonconvex total variation. J. Vis. Commun. Image Represent. 2018, 54, 80–99. [Google Scholar] [CrossRef]
  8. Benou, A.; Veksler, R.; Friedman, A.; Riklin, R.T. Ensemble of expert deep neural networks for spatio-temporal denoising of contrast-enhanced MRI sequences. Med Image Anal. 2017, 42, 145–159. [Google Scholar] [CrossRef]
  9. Veraart, J.; Novikov, D.S.; Christiaens, D.; Ades-aron, B.; Sijbers, J.; Fieremans, E. Denoising of diffusion MRI using random matrix theory. NeuroImage 2016, 142, 394–406. [Google Scholar] [CrossRef] [Green Version]
  10. Manjón, J.V.; Coupé, P.; Buades, A. MRI noise estimation and denoising using non-local PCA. Med Image Anal. 2015, 22, 35–47. [Google Scholar] [CrossRef]
  11. Gudbjartsson, H.; Patz, S. The Rician distribution of noisy MRI data. Magn. Reson. Med. 1995, 34, 910–914. [Google Scholar] [CrossRef]
  12. Foi, A. Noise estimation and removal in MR imaging: The variance-stabilization approach. In Proceedings of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro, Chicago, IL, USA, 30 March–2 April 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 1809–1814. [Google Scholar] [CrossRef] [Green Version]
  13. Zhou, Z.; Tian, R.; Wang, Z.; Yang, Z.; Liu, Y.; Liu, G.; Wang, R.; Gao, J.; Song, J.; Nie, L.; et al. Artificial local magnetic field inhomogeneity enhances T2 relaxivity. Nat. Commun. 2017, 8, 1–10. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Hong, H.; Zhang, L.; Xie, F.; Zhuang, R.; Jiang, D.; Liu, H.; Li, J.; Yang, H.; Zhang, X.; Nie, L.; et al. Rapid one-step 18F-radiolabeling of biomolecules in aqueous media by organophosphine fluoride acceptors. Nat. Commun. 2019, 10, 1–7. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Buades, A.; Coll, B.; Morel, J.M. A non-Local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; IEEE: Piscataway, NJ, USA, 2005; pp. 60–65. [Google Scholar]
  16. Wang, G.; Liu, Y.; Xiong, W.; Li, Y. An improved non-local means filter for color image denoising. Optik 2018, 173, 157–173. [Google Scholar] [CrossRef]
  17. Coupé, P.; Yger, P.; Prima, S.; Hellier, P.; Kervrann, C.; Barillot, C. An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images. IEEE Trans. Med Imaging 2008, 27, 425–441. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Manjón, J.V.; Carbonell, J.; Lull, J.; García, G.; Martí, L.; Robles, M. MRI denoising using Non-Local Means. Med. Image Anal. 2008, 12, 514–523. [Google Scholar] [CrossRef]
  19. Hu, J.; Zhou, J.; Wu, X. Non-local MRI denoising using random sampling. Magn. Reson. Imaging 2016, 34, 990–999. [Google Scholar] [CrossRef]
  20. Klosowski, J.; Frahm, J. Image denoising for real-Time MRI. Magn. Reson. Med. 2017, 77, 1340–1352. [Google Scholar] [CrossRef]
  21. Manjón, J.V.; Coupé, P.; Buades, A.; Collins, L.; Robles, M. New methods for MRI denoising based on sparseness and self-similarity. Med. Image Anal. 2012, 16, 18–27. [Google Scholar] [CrossRef] [Green Version]
  22. Wang, M.; Yan, W.; Zhou, S. Image denoising using singular value difference in the wavelet domain. Math. Probl. Eng. 2018, 2018, 1–19. [Google Scholar] [CrossRef] [Green Version]
  23. Malini, S.; Moni, R.S. Image denoising using multiresolution singular value decomposition transform. Procedia Comput. Sci. 2015, 46, 1708–1715. [Google Scholar] [CrossRef] [Green Version]
  24. Hou, Z. Adaptive singular value decomposition in wavelet domain for image denoising. Pattern Recognit. 2003, 36, 1747–1763. [Google Scholar] [CrossRef]
  25. Rajwade, A.; Rangarajan, A.; Banerjee, A. Image denoising using the higher order singular value decomposition. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 849–862. [Google Scholar] [CrossRef] [PubMed]
  26. Zhang, X.; Peng, J.; Xu, M.; Yang, W.; Zhang, Z.; Guo, H.; Chen, W.; Feng, Q.; Wu, E.X.; Feng, Y. Denoise diffusion-weighted images using higher-order singular value decomposition. NeuroImage 2017, 156, 128–145. [Google Scholar] [CrossRef] [PubMed]
  27. Zhang, X.; Xu, Z.; Jia, N.; Yang, W.; Feng, Q.; Chen, W.; Feng, Y. Denoising of 3D magnetic resonance images by using higher-order singular value decomposition. Med. Image Anal. 2015, 19, 75–86. [Google Scholar] [CrossRef] [PubMed]
  28. Kong, Z.; Han, L.; Liu, X.; Yang, X. A New 4-D nonlocal transform-domain filter for 3-D magnetic resonance images denoising. IEEE Trans. Med. Imaging 2018, 37, 941–954. [Google Scholar] [CrossRef] [PubMed]
  29. Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Image Process. 2013, 22, 700–711. [Google Scholar] [CrossRef]
  30. Dong, W.; Shi, G.; Li, X. Efficient nonlocal-means denoising using the SVD. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1732–1735. [Google Scholar] [CrossRef] [Green Version]
  31. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising with block-matching and 3D filtering. In Image Processing: Algorithms and Systems, Neural Networks, and Machine Learning; International Society for Optics and Photonics, SPIE: Philadelphia, PA, USA, 2006; Volume 6064, pp. 354–365. [Google Scholar] [CrossRef]
  32. Maggioni, M.; Katkovnik, V.; Egiazarian, K.; Foi, A. A nonlocal transform-domain filter for volumetric data denoising and reconstruction. IEEE Trans. Image Process. 2013, 22, 119–133. [Google Scholar] [CrossRef]
  33. Xu, P.; Chen, B.; Xue, L.; Zhang, J.; Zhu, L.; Duan, H. A new MNF BM4D denoising algorithm based on guided filtering for hyperspectral images. ISA Trans. 2019, 92, 315–324. [Google Scholar] [CrossRef]
  34. Tao, D.; Cheng, J.; Gao, X.; Li, X.; Deng, C. Robust sparse coding for mobile image labeling on the cloud. IEEE Trans. Circuits Syst. Video Technol. 2016, 27, 62–72. [Google Scholar] [CrossRef]
  35. Moradi, N.; Mahdavi-Amiri, N. Kernel sparse representation based model for skin lesions segmentation and classification. Comput. Methods Programs Biomed. 2019, 182. [Google Scholar] [CrossRef]
Figure 1. Issues of KSVD filtering. Columns, from left to right: the noise-free phantom image corrupted with 5% Rician noise, the original noise-free phantom image, and the image denoised by the KSVD. In the third column, the rows from top to bottom show the artifacts, residual noise, and blurring that the KSVD produces.
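For readers who want to reproduce the kind of KSVD-style filtering illustrated in Figure 1, the sketch below shows a generic dictionary-learning patch denoiser: patches are extracted, sparse-coded over a learned dictionary with OMP, reconstructed, and the overlapping estimates are averaged. It uses scikit-learn rather than the authors' KSVD implementation, and the patch size, dictionary size, and sparsity level are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def dictionary_denoise(noisy, patch_size=(7, 7), n_atoms=128, n_nonzero=4):
    """Generic dictionary-learning patch denoiser (illustrative parameters)."""
    patches = extract_patches_2d(noisy, patch_size)
    X = patches.reshape(patches.shape[0], -1).astype(np.float64)
    means = X.mean(axis=1, keepdims=True)               # remove the DC component of each patch
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=n_nonzero,
                                       random_state=0)
    codes = dico.fit(X - means).transform(X - means)     # sparse code of every patch
    X_hat = codes @ dico.components_ + means             # sparse reconstruction of the patches
    patches_hat = X_hat.reshape(patches.shape)
    return reconstruct_from_patches_2d(patches_hat, noisy.shape)  # average overlapping patches
```

When the dictionary or the sparsity level is poorly matched to the image content, this plain sparse reconstruction with uniform averaging of overlaps is where the artifacts, residual noise, and blurring shown in the third column of Figure 1 tend to appear.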
Figure 2. Block diagram illustrating the three steps of the proposed method.
Figure 3. Example of a reference signal and its corresponding sets of candidates to similar signals (CSS) and similar signals (SS). (a) Example of a reference signal. (b) Candidates to similar signals of the reference signal in (a). (c) Similar signals of the reference signal in (a). (d) Zoom-in of the reference signal in (a). (e) Zoom-in of similar signals 1 and 2, demonstrating that our method can locate similar signals far from the reference signal.
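As a rough illustration of how the candidate set CSS and the similar set SS of Figure 3 can be built, the sketch below pre-selects signals whose distance to the reference signal falls under a loose threshold and then keeps a stricter subset. The thresholds tau_css and tau_ss and the plain mean-squared distance are hypothetical stand-ins; the paper's criterion is derived from the sparse representation and the estimated noise level, which is not reproduced here.

```python
import numpy as np

def select_similar_signals(reference, signals, tau_css=1.0, tau_ss=0.4):
    """Two-stage non-local selection (illustrative thresholds, not the paper's criterion).

    reference : (d,) vectorized reference sub-volume.
    signals   : (n, d) array holding every candidate sub-volume of the image.
    Returns index arrays for the candidate set CSS and the similar set SS.
    """
    d2 = np.mean((signals - reference) ** 2, axis=1)  # mean squared distance to the reference
    css = np.flatnonzero(d2 <= tau_css)               # loose pre-selection: candidates (CSS)
    ss = css[d2[css] <= tau_ss]                       # stricter selection: similar signals (SS)
    return css, ss
```

Because the search runs over all sub-volumes rather than a local window, similar signals can be found far from the reference, as Figure 3e illustrates.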
Figure 4. Summary of the tests performed to numerically establish the best value of the parameter δ for controlling the global influence of the atoms. The red arrow points out the maximum SSIM (blue curve) and its corresponding PSNR (green curve). The yellow arrow points out the maximum PSNR and its corresponding SSIM.
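The tuning summarized in Figure 4 amounts to a one-dimensional sweep: filter the phantom for each candidate δ, score the output with PSNR and SSIM against the noise-free image, and keep the values at the peaks. A minimal sketch follows; denoise_with_delta is a hypothetical placeholder for the filtering stages, and the sweep range is an assumption rather than the range used in the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sweep_delta(clean, noisy, denoise_with_delta, deltas=np.linspace(0.0, 1.0, 21)):
    """Score each candidate delta and return the values that maximize SSIM and PSNR."""
    data_range = clean.max() - clean.min()
    psnr_curve, ssim_curve = [], []
    for delta in deltas:
        denoised = denoise_with_delta(noisy, delta)    # hypothetical filtering call
        psnr_curve.append(peak_signal_noise_ratio(clean, denoised, data_range=data_range))
        ssim_curve.append(structural_similarity(clean, denoised, data_range=data_range))
    best_by_ssim = deltas[int(np.argmax(ssim_curve))]  # the peak marked by the red arrow
    best_by_psnr = deltas[int(np.argmax(psnr_curve))]  # the peak marked by the yellow arrow
    return best_by_ssim, best_by_psnr, np.array(psnr_curve), np.array(ssim_curve)
```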
Figure 5. Summary of the tests performed to numerically establish the best values of the parameters a and b, which control the similarity between signals according to the magnitude of the reference signal and the estimated noise level. (a) PSNR surface, where each point corresponds to the average PSNR of the MR phantom images corrupted with different Rician noise levels and denoised using the first two stages of our method. (b) SSIM surface, obtained in the same way as the PSNR surface.
Figure 6. Comparison of the denoising methods on the T1w synthetic noise-free MR image corrupted with 5% Rician noise. (a) Original synthetic noise-free MR image. (b) Original image corrupted with 5% Rician noise. (c–g) Denoising results of the KSVD, PRINLM, BM4D, DnCNN, and the proposed method NLSD.
Figure 7. Zoom-in on the details of the regions labeled "d" and "b" in Figure 6. (a–g) show the detail labeled "d" in Figure 6a–g, respectively; (c–g) correspond to the methods KSVD, PRINLM, BM4D, DnCNN, and NLSD. (h–n) show the detail labeled "b" in Figure 6a–g, respectively; (j–n) correspond to the methods KSVD, PRINLM, BM4D, DnCNN, and NLSD.
Figure 8. Zoom-in on the details of the regions labeled "a" and "c" in Figure 6. (a–g) correspond to the regions labeled "a", and (h–n) correspond to the regions labeled "c". (c–g) and (j–n) correspond to the zoom-in on the results of the methods KSVD, PRINLM, BM4D, DnCNN, and NLSD, respectively.
Figure 9. Residuals of the different results produced by the application of the methods evaluated on the noise-free T1w synthetic MR image of Figure 6a corrupted with 5% Rician noise. (a–e) are the denoising results of the methods KSVD, PRINLM, BM4D, DnCNN, and NLSD, respectively. (f–j) are the corresponding residuals.
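The residual images in Figure 9 (and in the later residual figures) are the voxel-wise difference between the input and the denoised output; a residual containing visible anatomical structure indicates over-smoothing, whereas a structure-free, noise-like residual is desirable. A minimal sketch, assuming noisy and denoised share the same intensity scale (the paper's exact convention, e.g. noisy minus denoised versus noise-free minus denoised, may differ):

```python
import numpy as np

def residual_image(noisy, denoised):
    """Voxel-wise residual: structure visible here points to detail removed by the filter."""
    return noisy.astype(np.float64) - denoised.astype(np.float64)
```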
Figure 10. Comparison of the denoising methods on the T1w synthetic noise-free MR image corrupted with 9% Rician noise. (a) Original synthetic noise-free MR image. (b) Original image corrupted with 9% Rician noise. (c–g) Denoising results of the KSVD, PRINLM, BM4D, DnCNN, and the proposed method NLSD.
Figure 11. Residuals of the different results produced by the application of the methods evaluated on the noise-free T1w synthetic MR image of Figure 10a corrupted with 9% Rician noise. (a–e) are the denoising results of the methods KSVD, PRINLM, BM4D, DnCNN, and NLSD, respectively. (f–j) are the corresponding residuals.
Figure 12. Comparison of the denoising methods on the T2w synthetic noise-free MR image corrupted with 7% Rician noise. (a) Original synthetic noise-free MR image. (b) Original image corrupted with 7% Rician noise. (c–g) Denoising results of the KSVD, the PRINLM, the BM4D, the DnCNN, and the NLSD, respectively.
Figure 13. Zoom-in on the details labeled "a", "b", and "c" in Figure 12. (a–g) correspond to the zoom-in of the original image, the noisy image, the KSVD, the PRINLM, the BM4D, the DnCNN, and the NLSD, respectively.
Figure 14. Residuals of the different results produced by the application of the methods evaluated on the noise-free T2w synthetic MR image of Figure 12a corrupted with 7% Rician noise. (a–e) are the denoising results of the methods KSVD, PRINLM, BM4D, DnCNN, and NLSD, respectively. (f–j) are the corresponding residuals.
Figure 15. Comparison of the denoising methods on the PDw synthetic noise-free MR image corrupted with 5% Rician noise. (a) Original synthetic noise-free MR image. (b) Original image corrupted with 5% Rician noise. (c–g) Denoising results of the KSVD, the PRINLM, the BM4D, the DnCNN, and the NLSD, respectively.
Figure 16. Zoom-in on the details labeled "a" and "b" in Figure 15. (a–g) correspond to the zoom-in of the original image, the noisy image, the KSVD, the PRINLM, the BM4D, the DnCNN, and the NLSD in Figure 15, respectively.
Figure 17. Residuals of the different results produced by the application of the methods evaluated on the noise-free PDw synthetic MR image of Figure 15a corrupted with 5% Rician noise. (a–e) are the denoising results of the methods KSVD, PRINLM, BM4D, DnCNN, and NLSD, respectively. (f–j) are the corresponding residuals.
Figure 18. Quality comparison on a real T1w MR image. (a) Original MR image. (b–e) Denoising results of the PRINLM, the BM4D, the DnCNN, and the NLSD. (f–j) The corresponding zoom-ins of (a–e).
Table 1. Comparison of the different methods evaluated on the T1w synthetic noise-free MR image corrupted with different levels of Rician noise. Bold highlights indicate the best results.

Method | PSNR (1%) | SSIM (1%) | PSNR (3%) | SSIM (3%) | PSNR (5%) | SSIM (5%) | PSNR (7%) | SSIM (7%) | PSNR (9%) | SSIM (9%)
Noisy  | 39.980 | 0.9658 | 30.451 | 0.8084 | 25.982 | 0.6644 | 23.045 | 0.5536 | 20.888 | 0.4693
KSVD   | 40.421 | 0.9773 | 32.637 | 0.9225 | 29.509 | 0.8605 | 27.588 | 0.8014 | 26.423 | 0.7368
PRINLM | 44.299 | 0.9930 | 38.589 | 0.9655 | 35.512 | 0.9281 | 33.642 | 0.8654 | 33.177 | 0.8332
BM4D   | 44.688 | 0.9911 | 38.129 | 0.9799 | 35.618 | 0.9643 | 32.799 | 0.9404 | 30.412 | 0.9115
DnCNN  | 43.109 | 0.9756 | 37.843 | 0.9602 | 34.790 | 0.9581 | 33.744 | 0.9478 | 31.501 | 0.9163
NLSD   | 44.901 | 0.9970 | 38.723 | 0.9895 | 35.801 | 0.9776 | 33.982 | 0.9630 | 33.130 | 0.9296
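The "Noisy" row of Table 1 can be approximated by corrupting a noise-free image with Rician noise and scoring it directly. The sketch below assumes the common convention that an n% noise level means zero-mean Gaussian noise with standard deviation equal to n% of the maximum intensity added to the real and imaginary channels before taking the magnitude, and it uses the Shepp–Logan phantom from scikit-image as a stand-in for the BrainWeb data, so the exact figures will differ from the table.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def add_rician_noise(clean, percent, rng=None):
    """Corrupt a magnitude image with Rician noise at the given percentage level."""
    rng = np.random.default_rng(0) if rng is None else rng
    sigma = percent / 100.0 * clean.max()               # assumed noise-level convention
    real = clean + rng.normal(0.0, sigma, clean.shape)  # noisy real channel
    imag = rng.normal(0.0, sigma, clean.shape)          # noisy imaginary channel
    return np.sqrt(real ** 2 + imag ** 2)               # magnitude is Rician-distributed

clean = shepp_logan_phantom()                           # stand-in for the BrainWeb phantom
data_range = clean.max() - clean.min()
for level in (1, 3, 5, 7, 9):
    noisy = add_rician_noise(clean, level)
    psnr = peak_signal_noise_ratio(clean, noisy, data_range=data_range)
    ssim = structural_similarity(clean, noisy, data_range=data_range)
    print(f"{level}% noise: PSNR = {psnr:.3f} dB, SSIM = {ssim:.4f}")
```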
Table 2. Comparison of the different methods evaluated on the T2w synthetic noise-free MR image corrupted with different levels of Rician noise. Bold highlights indicate the best results.

Method | PSNR (1%) | SSIM (1%) | PSNR (3%) | SSIM (3%) | PSNR (5%) | SSIM (5%) | PSNR (7%) | SSIM (7%) | PSNR (9%) | SSIM (9%)
Noisy  | 39.992 | 0.9709 | 30.418 | 0.8330 | 26.016 | 0.7151 | 23.073 | 0.6242 | 20.928 | 0.5606
KSVD   | 40.104 | 0.9706 | 32.098 | 0.9166 | 29.342 | 0.8669 | 26.530 | 0.8134 | 24.664 | 0.7633
PRINLM | 43.985 | 0.9943 | 37.786 | 0.9703 | 34.738 | 0.9262 | 32.918 | 0.8852 | 31.347 | 0.8286
BM4D   | 43.655 | 0.9614 | 37.166 | 0.9473 | 34.140 | 0.9373 | 31.887 | 0.9273 | 30.030 | 0.9100
DnCNN  | 42.897 | 0.9569 | 37.003 | 0.9354 | 34.362 | 0.9210 | 32.544 | 0.9162 | 31.045 | 0.9139
NLSD   | 44.099 | 0.9978 | 37.973 | 0.9885 | 34.910 | 0.9522 | 33.546 | 0.9327 | 31.601 | 0.9233
Table 3. Comparison of the different methods evaluated on the PDw synthetic noise-free MR image corrupted with different levels of Rician noise. Bold highlights indicate the best results.

Method | PSNR (1%) | SSIM (1%) | PSNR (3%) | SSIM (3%) | PSNR (5%) | SSIM (5%) | PSNR (7%) | SSIM (7%) | PSNR (9%) | SSIM (9%)
Noisy  | 39.971 | 0.9636 | 30.441 | 0.7901 | 25.984 | 0.6341 | 23.088 | 0.5250 | 20.898 | 0.4408
KSVD   | 39.980 | 0.9639 | 29.633 | 0.8956 | 27.626 | 0.8128 | 26.224 | 0.7357 | 24.952 | 0.6589
PRINLM | 44.699 | 0.9940 | 38.079 | 0.9752 | 35.111 | 0.9228 | 33.101 | 0.8638 | 32.562 | 0.8211
BM4D   | 44.603 | 0.9626 | 38.621 | 0.9399 | 35.683 | 0.9280 | 33.653 | 0.9147 | 31.887 | 0.8990
DnCNN  | 43.245 | 0.9567 | 37.991 | 0.9268 | 35.700 | 0.9372 | 33.419 | 0.9075 | 30.643 | 0.8986
NLSD   | 44.980 | 0.9987 | 38.906 | 0.9894 | 35.973 | 0.9532 | 33.625 | 0.9387 | 32.656 | 0.9102
Table 4. Implementation language and computation time of the methods evaluated.

Method               | PRINLM     | BM4D       | DnCNN     | KSVD      | NLSD
Language             | MATLAB/C++ | MATLAB/C++ | MATLAB    | MATLAB    | MATLAB
Average running time | 45.32 s    | 46.83 s    | 65.64 min | 70.13 min | 120.84 min
