Laplacian Eigenmaps Network-Based Nonlocal Means Method for MR Image Denoising

Magnetic resonance (MR) images are often corrupted by Rician noise, which degrades the accuracy of image-based diagnosis tasks. The nonlocal means (NLM) method is a representative filter for denoising MR images due to its competitive denoising performance. However, existing NLM methods usually exploit gray-level information or hand-crafted features to evaluate the similarity between image patches, which hampers preserving image details while smoothing out noise. In this paper, an improved nonlocal means method is proposed for removing Rician noise from MR images by using refined similarity measures. The proposed method first extracts intrinsic features from the pre-denoised image using a shallow convolutional neural network named the Laplacian eigenmaps network (LEPNet). The extracted features are then used for computing the similarity in the NLM method to produce the denoised image. Finally, the method noise of the denoised image is utilized to further improve the denoising performance. Specifically, the LEPNet model is composed of two cascaded convolutional layers and a nonlinear output layer, in which the Laplacian eigenmaps are employed to learn the filter bank in the convolutional layers and the Leaky Rectified Linear Unit (LeakyReLU) activation function is used in the final output layer to output the nonlinear features. Owing to the advantage of LEPNet in recovering the geometric structure of the manifold in the low-dimensional space, the features extracted by this network characterize the self-similarity better than those used in existing NLM methods. Experiments have been performed on the BrainWeb phantom and on real images. The experimental results demonstrate that, among several compared denoising methods, the proposed method provides more effective noise removal and better detail preservation in terms of human vision and of such objective indexes as the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM).


Introduction
In medical imaging, high-quality images play a critical role in clinical diagnosis by enabling clinicians to determine the state of an illness based on the structural details and functional characteristics of the images. Many imaging techniques have been developed in recent decades. Among them, magnetic resonance imaging (MRI) has attracted much attention due to its advantages of high resolution, absence of ionizing radiation, noninvasiveness, and high contrast with human tissues [1]. However, MR images are inevitably corrupted by noise. It has been shown that the noise in MR images is governed by the Rician distribution [2]. Such noise may affect the clinical diagnosis by degrading the image quality. Due to its effectiveness in mapping data from a high-dimensional space onto a low-dimensional manifold, the Laplacian eigenmaps (LEP) method is better at discovering the nonlinear features and manifold structure embedded in a dataset [24] than such linear dimensionality reduction methods as PCA [25], multidimensional scaling [26], and linear discriminant analysis [27].
In order to extract structural features from an input MR image for the denoising task, we have proposed an LEP network (LEPNet) by emulating the architecture of PCANet. The proposed LEPNet consists of three processing layers: the two LEP-based convolutional layers and the output layer. Different from the binary-hashing-based output layer in PCANet, the LEPNet uses the activation function Leaky Rectified Linear Unit (LeakyReLU) [28,29] as the final output layer to map nonlinearity into the data. The proposed LEPNet will be used to extract the features from the pre-denoised version of the input noisy MR images and the produced features are then utilized to refine the computation of similarity weights between image patches for NLM denoising of MR images. The method noise of the denoised image is utilized to produce the final restored image. To evaluate the denoising performance of the proposed method which combines the LEPNet with the NLM filter, we have conducted extensive experiments on the simulated images and the real MR images to compare the restoration results among the proposed method and several state-of-the-art denoising methods. The visual inspection and quantitative analysis demonstrate the advantage of the proposed method in reducing noise and preserving fine details of MR images.

Rician Noise Model
MR data are acquired as a collection of complex-valued signals, whose magnitude is reconstructed to obtain the MR images. If the real and imaginary parts of this raw complex signal are corrupted by Gaussian distributed noise, the resulting MR images will contain Rician distributed noise [30].
Suppose that an underlying clean MR image A is corrupted by zero-mean Gaussian white noise with standard deviation σ in the real and imaginary channels. The noisy magnitude image A is computed as:

A = √((R + n_R)² + (I + n_I)²) (1)

where R and I are the real part and the imaginary part of the raw signal, respectively, and n_R ∼ N(0, σ²), n_I ∼ N(0, σ²). Estimating the noise in MR images is a challenging task since the Rician noise is signal-dependent. To overcome this problem, Nowak [31] suggested filtering the square of the MR magnitude image, for which the noise bias becomes additive and signal-independent and can be removed easily. Accordingly, we can obtain the following equation by squaring Equation (1):

A² = (R + σn₁)² + (I + σn₂)² (2)

where n₁ ∼ N(0, 1) and n₂ ∼ N(0, 1). Hence, the expectation of the squared magnitude image can be calculated as:

E[A²] = R² + I² + 2σ² (3)

The noise bias is equal to 2σ² as indicated in Equation (3). From the image background segmented using the Otsu thresholding method [32], the noise of the MR image can be estimated as σ = √(µ/2), where µ is the mean value of the background pixels in the squared magnitude image.
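As an illustration, the noise model and the background-based estimator above can be sketched in Python. The square phantom and the known background mask are toy assumptions for this example; the paper segments the background with Otsu thresholding instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_rician_noise(image, sigma, rng=rng):
    """Corrupt a clean magnitude image with Rician noise by adding
    Gaussian noise to the real and imaginary channels (Equation (1))."""
    real = image + rng.normal(0.0, sigma, image.shape)
    imag = rng.normal(0.0, sigma, image.shape)
    return np.sqrt(real ** 2 + imag ** 2)

def estimate_sigma(noisy, background_mask):
    """Estimate sigma from the squared magnitude image: in background
    pixels E[A^2] = 2*sigma^2, so sigma = sqrt(mean(A^2) / 2)."""
    mu = np.mean(noisy[background_mask] ** 2)
    return np.sqrt(mu / 2.0)

clean = np.zeros((64, 64))
clean[16:48, 16:48] = 100.0      # bright square on a dark background
noisy = add_rician_noise(clean, sigma=10.0)
background = clean == 0          # known background in this toy example
print(estimate_sigma(noisy, background))  # close to 10
```

With a few thousand background pixels the estimate is quite stable, which is why a simple background segmentation suffices in practice.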

The Proposed LEPNet Model
In this section, we will construct a LEPNet model for improving the denoising performance of the NLM method on MR images. This network includes two convolutional filter bank layers and a nonlinear processing layer. A block diagram is shown in Figure 1 to illustrate how to extract the features from an input MR image using the LEPNet. As shown in Figure 1, an image is prefiltered by the PRI-NLM method. The prefiltered image is first processed by the trained LEP filters to generate the feature maps of the first convolutional stage. Then, each feature map is convolved with the trained LEP filters to produce the feature maps of the second convolutional stage. These feature maps are processed by the following LeakyReLU function to generate the final outputs.
In the LEPNet, only the LEP filters (i.e., the convolutional kernel) need to be trained from the input images. It should be noted that the LEPNet is trained on the smoothed image because the LEP is sensitive to noise and the direct application of the LEP to the input noisy image cannot produce the effective LEP filters. In what follows, the components of the LEPNet are described in detail.
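The forward pass described above can be sketched as follows. The random kernels, layer counts, and `conv2d` helper are illustrative stand-ins: the actual kernels are the LEP filters learned as described in the following subsections, and valid cross-correlation is used here for brevity.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def leaky_relu(x, a=3.0):
    """Nonlinear output layer: identity for x >= 0, slope 1/a for x < 0."""
    return np.where(x >= 0.0, x, x / a)

def conv2d(img, kernel):
    """Valid cross-correlation (sufficient for this feature-extraction sketch)."""
    windows = sliding_window_view(img, kernel.shape)
    return np.einsum("ijkl,kl->ij", windows, kernel)

def lepnet_forward(image, filters1, filters2, a=3.0):
    """Two cascaded filter-bank stages followed by LeakyReLU,
    producing L1 * L2 feature maps as in Figure 1."""
    stage1 = [conv2d(image, f) for f in filters1]
    stage2 = [conv2d(m, f) for m in stage1 for f in filters2]
    return [leaky_relu(m, a) for m in stage2]

# Toy run with random stand-in kernels; the real kernels are the
# LEP filters learned from training patches, not random values.
rng = np.random.default_rng(1)
image = rng.random((32, 32))
filters1 = [rng.standard_normal((7, 7)) for _ in range(2)]   # L1 = 2
filters2 = [rng.standard_normal((7, 7)) for _ in range(3)]   # L2 = 3
feature_maps = lepnet_forward(image, filters1, filters2)
print(len(feature_maps))  # 6 feature maps (L1 * L2)
```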


The First Convolutional Layer
To learn the convolution kernels of LEPNet, M training images of size m × n will be used. The patch size is set to k1 × k2 at all stages. Around each pixel in the i-th input training image, the corresponding patch is collected step by step. The patch mean is then subtracted from each patch and the resulting mean-removed patch is vectorized, producing the matrix A_i = [a_{i,1}, a_{i,2}, · · · , a_{i,S}], where a_{i,s} is the s-th vector and S is the number of vectors acquired from the i-th image. All the training images are processed in the same way and put together to construct the matrix Q as:

Q = [A_1, A_2, · · · , A_M] (4)

The LEP operation is then implemented on Q to construct a manifold representation of the data that preserves the structural information embedded in the high-dimensional space. The algorithm procedure of LEP is described below [23].
(1) Constructing the adjacency graph of Q. An adjacency graph on Q is defined, in which nodes x_p and x_q are connected by an edge if x_q is among the K nearest neighbors of x_p. The distance between x_p and x_q is determined using the Euclidean distance.
(2) Choosing the weights. If nodes x_p and x_q are connected by an edge, the weight of the edge is defined as:

W_{p,q} = exp(−‖x_p − x_q‖² / t) (5)

where t is the parameter of the heat kernel. If x_p and x_q are disconnected nodes, we set W_{p,q} = 0. In this way, the weight matrix W = (W_{p,q})_{d×d}, a symmetric affinity matrix, can be built.

(3) Obtaining the eigenmaps. In order to seek the low-dimensional representation of Q, we need to minimize the cost function ϕ(Y), which is written as:

ϕ(Y) = Σ_{p,q} ‖y_p − y_q‖² W_{p,q} (6)

where y_p and y_q are the low-dimensional representations of the points connected by an edge. Let D denote the diagonal weight matrix whose entries are the row sums of W; the Laplacian matrix L is then obtained by L = D − W. Accordingly, Equation (6) can be written as:

ϕ(Y) = tr(Z^T L Z) (7)

where tr(·) is the trace of the matrix and Z = [y_1^T; y_2^T; · · · ; y_M^T] is the embedding matrix. Equation (7) is an optimization problem and its solution is the embedding matrix Z. After adding a constraint that removes an arbitrary scaling factor in the embedding, Equation (7) can also be written as:

min_Z tr(Z^T L Z) subject to Z^T D Z = I (8)

where I is the identity matrix. The solutions of Equation (8) are the generalized eigenvalues of L z = λ D z, sorted in an ascending order:

λ_1 ≤ λ_2 ≤ · · · ≤ λ_d (9)

where d is the number of solutions of Equation (8).
Assuming that the number of filters in the first layer is L1, the first L1 solutions of Equation (9) are chosen to produce the L1 filters O_l^1:

O_l^1 = mat_{k1×k2}(q_l(ZZ^T)), l = 1, 2, · · · , L1

where q_l(ZZ^T) denotes the principal eigenvector corresponding to the l-th solution λ_l, and mat_{k1×k2}(·) is a function for mapping q_l(ZZ^T) to the k1 × k2 matrix O_l^1.
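A minimal sketch of the LEP procedure above, assuming a dense K-nearest-neighbor graph and solving the generalized eigenproblem L z = λ D z through the symmetrically normalized Laplacian; the function and variable names are hypothetical.

```python
import numpy as np

def laplacian_eigenmaps(X, n_neighbors=5, t=1.0, n_components=2):
    """Steps (1)-(3): build a K-nearest-neighbor graph with heat-kernel
    weights, form L = D - W, and solve L z = lambda D z. X holds one
    sample per column, like the patch matrix Q."""
    n = X.shape[1]
    diff = X[:, :, None] - X[:, None, :]
    d2 = np.sum(diff ** 2, axis=0)                 # pairwise squared distances
    W = np.zeros((n, n))
    for p in range(n):
        nn = np.argsort(d2[p])[1:n_neighbors + 1]  # K nearest neighbors of x_p
        W[p, nn] = np.exp(-d2[p, nn] / t)          # heat-kernel weights
    W = np.maximum(W, W.T)                         # symmetric affinity matrix
    D = W.sum(axis=1)
    L = np.diag(D) - W                             # graph Laplacian L = D - W
    Dn = np.diag(1.0 / np.sqrt(D))
    lam, U = np.linalg.eigh(Dn @ L @ Dn)           # ascending eigenvalues
    Z = Dn @ U[:, 1:n_components + 1]              # drop the trivial solution
    return lam, Z

rng = np.random.default_rng(2)
samples = rng.random((10, 30))                     # 30 samples of dimension 10
lam, Z = laplacian_eigenmaps(samples)
print(Z.shape)  # (30, 2)
```

The smallest eigenvalue is always zero (the trivial constant embedding), which is why the embedding keeps the eigenvectors starting from the second-smallest eigenvalue.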

The Second Convolutional Layer
To extract higher level features, the multiple stages of LEP filters will be stacked. All the outputs produced in the first convolutional layer will serve as the inputs of the second convolutional layer. By applying the same operation as the first layer, we can obtain L 2 filters of the second layer. Similarly, the process can be further repeated to capture the deeper features. In this study, we have found that the use of two convolutional layers can produce the competitive denoising performance, whereas the deeper architecture only brings little improvement.

The Nonlinear Processing Layer
The input image will produce L1 × L2 outputs after being filtered by the two convolutional layers. All these outputs will be processed by the LeakyReLU function to add nonlinearity to the data. Here, the LeakyReLU function is defined as:

f(x) = x, if x ≥ 0; f(x) = x/a, if x < 0

where a is a coefficient controlling the slope of the negative part. Compared with classical activation functions such as ReLU and sigmoid, the LeakyReLU is more effective in preserving image details in that the structural information corresponding to the negative values in the image can be maintained.
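The difference from ReLU can be seen in a small example; the negative slope 1/a is an assumption consistent with the hyperparameter a reported in the experiments.

```python
import numpy as np

def leaky_relu(x, a=3.0):
    """f(x) = x for x >= 0 and x/a for x < 0 (negative slope 1/a)."""
    return np.where(x >= 0.0, x, x / a)

def relu(x):
    return np.maximum(x, 0.0)

x = np.array([-6.0, -3.0, 0.0, 3.0])
print(relu(x))        # [0. 0. 0. 3.]   negative responses are discarded
print(leaky_relu(x))  # [-2. -1. 0. 3.] negative responses survive, scaled by 1/a
```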

The LEPNet-Based Nonlocal Means Method
The similarity in the traditional NLM algorithm is determined by exploiting the gray-level information itself. Such a scheme may result in an unreliable determination of the similarity weights. Therefore, we have proposed a shallow convolutional network, LEPNet, to extract robust intrinsic features from the input MR image and refine the similarity weights in the NLM filter. To learn the intrinsic features with the LEPNet, we will train the model to produce the convolution kernels. Considering that the training of LEPNet is performed using a patch-based learning strategy, even a small number of images can produce a large number of image patches. Thus, we only use 300 general MR images from the open MRI database [33-36] as the training samples to train the LEPNet model. All 300 images are cropped to 160 × 160 pixels and prefiltered by the PRI-NLM filter.
By using the trained convolution kernels, the LEPNet can capture the structural features from the input MR images. The L (L = L1 × L2) feature maps produced from the two convolutional layers will be processed by the LeakyReLU function to provide the final outputs.
In the traditional NLM (TNLM) method, each restored pixel is the weighted mean of all other pixels in a search window in the noisy image:

NLM[A(i1, j1)] = Σ_{(i2,j2)∈Ω(i1,j1)} ω(i1, j1, i2, j2) A(i2, j2) (10)

where NLM[A(i1, j1)] is the filtered intensity at location (i1, j1) in the noisy image A obtained by the NLM method, Ω(i1, j1) is the search window centered at (i1, j1), and ω(i1, j1, i2, j2) is the similarity between two image patches centered at (i1, j1) and (i2, j2), which is defined as:

ω(i1, j1, i2, j2) = (1 / Z(i1, j1)) exp(−d(i1, j1, i2, j2) / h²) (11)

where Z(i1, j1) is the normalization constant ensuring Σ_{(i2,j2)∈Ω(i1,j1)} ω(i1, j1, i2, j2) = 1, defined as Z(i1, j1) = Σ_{(i2,j2)∈Ω(i1,j1)} exp(−d(i1, j1, i2, j2) / h²), and d(i1, j1, i2, j2) is the Gaussian-weighted Euclidean distance between the two patches:

d(i1, j1, i2, j2) = ‖N(i1, j1) − N(i2, j2)‖²_{2,α} (12)

where N(i, j) denotes the patch centered at (i, j), α is the standard deviation of the Gaussian kernel function, and h is the decay parameter determined using the rule-of-thumb [37], that is, h = Cσ, where C denotes a constant. This rule works well in this study. To effectively determine h, the noise standard deviation σ of the MR image needs to be estimated using Equation (3). Instead of using the gray-level patterns around each pixel, the proposed method refines the computation of the similarity weight based on the feature vectors obtained by the LEPNet model. Figure 2 illustrates how to construct the feature vectors related to the pixels. The considered image patch in the noisy image is represented with a feature vector by concatenating the pixel intensities at the same location in all feature images, as shown in Figure 2. Therefore, the features for the image patch centered at the pixel (i1, j1) in a search window marked by the yellow boxes can be represented as:

P(i1, j1) = [p1(i1, j1), p2(i1, j1), · · · , pL(i1, j1)]^T (13)

where p1(i1, j1), p2(i1, j1), · · · , pL(i1, j1) are the pixel intensities in the feature images marked by the blue boxes in Figure 2.
Likewise, the features for the image patch centered at another pixel (i2, j2), marked by the green boxes in Figure 2, can be denoted as:

P(i2, j2) = [p1(i2, j2), p2(i2, j2), · · · , pL(i2, j2)]^T (14)

The structural similarity ω(i1, j1, i2, j2) between two pixels (i1, j1) and (i2, j2) is then calculated using the corresponding feature vectors P(i1, j1) and P(i2, j2) in place of the gray-level patches. Compared with the patch-based computation strategy in the TNLM method, calculating the Euclidean distance between the constructed feature vectors greatly improves the computational efficiency. Considering that some image details will be lost during denoising, the residual details in the method noise [38], which is defined as the difference between the noisy image and its denoised version, will be extracted. Here, the method noise is first filtered using the NLM method with the refined weight ω(i1, j1, i2, j2) to generate the residual image. Then, a 3 × 3 mean filter is applied to the residual image to smooth the residual noise, thereby producing the residual detail image r(A(x, y)). As a result, the final restored image Â(x, y) can be computed as Â(x, y) = NLM(A(x, y)) + r(A(x, y)).
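A sketch of the refined weight computation, assuming the L feature images are stacked into an (H, W, L) array so that each pixel carries one feature vector; the function names and the toy inputs are hypothetical.

```python
import numpy as np

def nlm_feature_weights(features, i1, j1, search_radius, h):
    """Similarity weights for pixel (i1, j1) from per-pixel feature vectors.
    With one L-dimensional vector per pixel, the patch-vs-patch distance of
    TNLM reduces to a single vector distance per pixel pair."""
    H, W, _ = features.shape
    center = features[i1, j1]
    rows = slice(max(0, i1 - search_radius), min(H, i1 + search_radius + 1))
    cols = slice(max(0, j1 - search_radius), min(W, j1 + search_radius + 1))
    window = features[rows, cols]                 # search window Omega(i1, j1)
    d = np.sum((window - center) ** 2, axis=-1)   # Euclidean feature distance
    w = np.exp(-d / h ** 2)
    return w / w.sum()                            # normalized by Z(i1, j1)

def nlm_denoise_pixel(noisy, features, i1, j1, search_radius, h):
    """Weighted mean of the pixels in the search window (Equation (10))."""
    H, W = noisy.shape
    rows = slice(max(0, i1 - search_radius), min(H, i1 + search_radius + 1))
    cols = slice(max(0, j1 - search_radius), min(W, j1 + search_radius + 1))
    w = nlm_feature_weights(features, i1, j1, search_radius, h)
    return np.sum(w * noisy[rows, cols])

rng = np.random.default_rng(3)
features = rng.random((16, 16, 4))                # stand-in LEPNet features
noisy = np.full((16, 16), 5.0)                    # constant image: mean stays 5
print(nlm_denoise_pixel(noisy, features, 8, 8, 3, h=0.5))
```

Because the weights sum to one, a constant image is left unchanged, which is a quick sanity check for any NLM implementation.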


Overall Description of the Proposed Denoising Algorithm
We refer to the proposed denoising method as LEP-NLM. A block diagram of the LEP-NLM method is shown in Figure 3. The implementation details of this method are summarized as follows.
Step 1: The LEPNet model is trained using 300 PRI-NLM filtered MR images obtained from the open MRI database to learn the convolution kernels of two convolutional layers.
Step 2: The input noisy MR image is preprocessed by the PRI-NLM filter to produce the pre-denoised image, which is then input into the trained LEPNet model to generate L feature images as the output of this model.
Step 3: Based on the obtained feature images, the feature vectors related to the pixels are constructed for calculating the similarity weights.
Step 4: Based on the obtained similarity weights and the decay parameter, the input MR image is denoised using the NLM algorithm to produce the denoised image.
Step 5: The method noise for the denoised image produced in Step 4 is processed by the NLM method and a 3 × 3 mean filter to retrieve the lost image details in the denoised image.
Step 6: By combining the denoised image in Step 4 and the retrieved details in Step 5, the final restored image can be obtained.
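Steps 5 and 6 can be sketched as follows; `residual_filter` is a hypothetical stand-in for the weighted NLM pass applied to the method noise in Step 5.

```python
import numpy as np

def mean_filter3(img):
    """3 x 3 mean filter via edge padding and neighborhood averaging."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def restore_with_method_noise(noisy, denoised, residual_filter):
    """Steps 5-6: filter the method noise (noisy - denoised) to recover
    the lost details r, then add them back to the denoised image."""
    method_noise = noisy - denoised
    r = mean_filter3(residual_filter(method_noise))
    return denoised + r

# Toy check with an identity stand-in for the NLM pass: if the method
# noise is a pure constant offset, the restoration recovers it exactly.
denoised = np.zeros((8, 8))
noisy = denoised + 1.0
restored = restore_with_method_noise(noisy, denoised, lambda m: m)
print(np.allclose(restored, noisy))  # True
```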


Experimental Results and Discussion
To evaluate the effectiveness of the proposed method, several experiments have been conducted on different datasets including the well-known BrainWeb phantom [39] and the real brain image dataset Atlas [40]. The performance of the LEP-NLM method is compared with that of some related MRI denoising methods: the Wiener filter, the TNLM filter, the wavelet sub-band coefficient mixing (WSM) filter [41], the ODCT filter [12], the PRI-NLM filter, and the BM4D filter. In all experiments, a 3 × 3 filtering window is used for the Wiener filter. For the NLM-based filters, the sizes of the similarity window and the search window are set to 7 × 7 and 17 × 17, respectively. Additionally, the decay parameter h is set to be proportional to the standard deviation σ of the noisy image (i.e., h = Cσ), where σ is estimated according to Equation (3). The constant C influences the restoration performance: if C is too small, little noise will be removed; if C is too large, the image will be overly smoothed and some image details will be damaged. For the simulated MR images, C is chosen based on both the quantitative results and visual inspection; for the real images, C is determined only by visual inspection since the ground truth is unavailable. The WSM, ODCT, and BM4D filters are run using the parameters suggested by their authors. For the proposed method, a two-layer LEPNet model is used, the number of LEP filters in the convolutional layers is set to 12, the sizes of the image patch and the convolutional kernel are both 7 × 7, and the hyperparameter a of the LeakyReLU function is set to 3 through grid search.
To evaluate all the compared methods quantitatively, two widely used metrics, the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM), are considered [42], which are computed as:

PSNR(A(x, y), Â(x, y)) = 10 · log10(255² / MSE), MSE = (1 / (A_W A_H)) Σ_{x=1}^{A_W} Σ_{y=1}^{A_H} [A(x, y) − Â(x, y)]²

SSIM(A(x, y), Â(x, y)) = ((2 µ_A µ_Â + c1)(2 σ_AÂ + c2)) / ((µ_A² + µ_Â² + c1)(σ_A² + σ_Â² + c2))

where A(x, y) and Â(x, y) are the noise-free image and the filtered image, respectively, A_W and A_H are the width and height of A(x, y), µ_A and µ_Â are the means of A(x, y) and Â(x, y), σ_A² and σ_Â² are the variances of A(x, y) and Â(x, y), respectively, and σ_AÂ is the covariance of A(x, y) and Â(x, y). c1 and c2 are two constants of small value for stabilizing the division with a weak denominator. In addition to visual quality, the computational complexity is an important aspect of a denoising method. In this study, all the experiments are performed on a personal computer with the Matlab (R2017a) environment, a 2.30 GHz i5 processor, and 24 GB of DDR4 RAM. We compare the run time of the TNLM and the LEP-NLM for denoising images of size 181 × 217 and 256 × 256. For the two images, the denoising time of the TNLM is 50.48 s and 91.43 s, respectively. By comparison, the run time of the LEP-NLM is 23.10 s and 25.45 s, respectively. These results show that the computational efficiency of the LEP-NLM is significantly higher than that of the TNLM.
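For reference, the two metrics can be sketched in Python. This SSIM variant uses a single global window for brevity, whereas the standard SSIM averages the same statistic over local windows.

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """PSNR in dB: 10 * log10(peak^2 / MSE) over the whole image."""
    mse = np.mean((ref.astype(float) - est.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, est, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM: luminance, contrast, and structure terms
    computed once over the whole image."""
    mu_a, mu_b = ref.mean(), est.mean()
    va, vb = ref.var(), est.var()
    cov = np.mean((ref - mu_a) * (est - mu_b))
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

a = np.zeros((4, 4))
b = np.ones((4, 4))
print(psnr(a, b))  # 10 * log10(255^2 / 1) ~ 48.13 dB
```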

Simulated MR Images
The simulated images of size 181 × 217 are taken from the BrainWeb phantom. They include T1-, T2-, and PD-weighted MR images. Each image is corrupted with various levels of simulated Rician noise. The standard deviation σ of noise is computed as σ = µ · max(A) where µ is the noise proportion and max(A) denotes the maximum intensity of the noise-free image A. In this paper, µ takes 2%, 5%, 10%, 15%, 20%, 25%, and 30%.
An example is given to illustrate the denoising performance among the considered filters by adding 15% Rician noise to the original image. Figure 4 depicts the visual comparison of the restored images using the different methods. Significant noise is observed in the image obtained by the Wiener filter. In contrast, the TNLM method is effective in reducing noise, but it blurs the edges and distorts the lines. The WSM and ODCT filters generate the Gibbs artifacts in the smooth region although they can preserve the sharp edges in the images. The PRI-NLM filter tends to produce the oversmoothing of image details in some regions, while the BM4D filter may cause the damage to the small structures. However, the LEP-NLM filter achieves satisfactory visual impression in that the noise is removed effectively, the artifacts are avoided, and the fine details are preserved well.

In order to visualize the structural information more clearly, the enlarged views of the regions of interest (ROIs) marked with the blue boxes in Figure 4 are presented in Figure 5. Here, the ROIs include the cerebral gyri and sulci.
Obviously, the images produced by the Wiener, TNLM, and WSM filters are unsatisfactory because too much noise and too many artifacts remain or edges and small objects are damaged. The ODCT filter blurs the boundaries between the cerebral gyri and sulci, while the PRI-NLM and BM4D filters tend to oversmooth the sulcus area, which leads to the loss of structural information. More specifically, we have compared the proposed method with such competitive methods as the PRI-NLM and BM4D methods. Some regions are chosen and marked with the yellow elliptic dotted lines as shown in Figure 5. In the PD-weighted image, the PRI-NLM filter generates blurred edges, while an obvious artifact can be observed for the BM4D method. In the T1-weighted image, the small objects are smoothed by the BM4D filter. In the T2-weighted image, the PRI-NLM and BM4D filters produce negative effects similar to those on the PD-weighted image. By comparison, the LEP-NLM filter works well in recovering visually significant structures while removing noise and artifacts. The PSNR and SSIM results of the various methods on the PD-weighted, T1-weighted, and T2-weighted MR images are shown in Tables 1-3, respectively. Specifically, the LEP-NLM method without additionally using the method noise is denoted by LEP-NLMw. The best value for each corrupted image at each noise level is shown in bold. As one can see, the LEP-NLM method achieves the best PSNR and SSIM values at almost all noise levels; the only exceptions are the PD-weighted images with 2% Rician noise and the T1-weighted images at three noise levels, where its values are slightly lower than those of the BM4D method. This demonstrates that the proposed method is more effective in suppressing Rician noise and preserving fine structures in the image.
Additionally, the comparison between the LEP-NLMw and the LEP-NLM shows that the PSNR and SSIM of the latter could benefit from the utilization of the method noise.

Real MR Images
Two real clinical MR images of size 256 × 256 are acquired from the Atlas dataset [40] for evaluating all the compared methods. One image is a normal sagittal T2-weighted brain MR image and the other is a transaxial T1-weighted brain MR image with cerebrovascular disease. In this experiment, quantitative assessment is not feasible because the ground truth of the real images is unavailable. Therefore, we perform a visual inspection of the filtered images. Figures 6 and 7 depict the denoised results of the two real MR images for the compared methods. It can be seen that the LEP-NLM method provides satisfactory restoration performance since it not only smooths out the noise but also preserves the meaningful structural details and maintains the image contrast. Furthermore, the zoomed details of the two images are shown. Obviously, the Wiener method blurs the boundaries while the TNLM method produces unwanted artifacts. In the images denoised by the WSM, ODCT, and PRI-NLM methods, much residual noise remains. For the BM4D method, the fine details are not preserved well although the noise is removed effectively. By comparison, the proposed method achieves better performance in structural information preservation by enhancing the edges and small objects.

Conclusions
In this paper, we have developed a novel denoising method by combining the LEPNet and the NLM method to reduce the Rician noise in MR images. To improve the accuracy of similarity weight computation, we have designed an unsupervised shallow convolutional network LEPNet to extract the image features and used them to refine similarity computation. By means of the two cascaded LEP-based convolutional layers and a LeakyReLU-based nonlinear layer, the LEPNet model can extract the robust intrinsic image features, which ensures that the self-similarity can be represented more effectively than using the pixel intensities in the existing NLM methods. The experimental results demonstrate that the proposed LEP-NLM method is very effective in removing the Rician noise and preserving the structural details of images. Therefore, the proposed method can potentially assist clinical experts in MRI-based disease diagnosis by preserving the tiny structure of lesions. Future study will be focused on investigating more deep learning networks to further improve the denoising performance of the NLM-based method.
