Recovering Texture with a Denoising-Process-Aware LMMSE Filter

Abstract: Image denoising methods generally remove not only noise but also fine-scale textures and thus degrade the subjective image quality. In this paper, we propose a method of recovering the texture component that is lost under a state-of-the-art denoising method called weighted nuclear norm minimization (WNNM). We recover the image texture with a linear minimum mean squared error estimator (LMMSE filter), which requires statistical information about the texture and noise. This requirement is the key problem preventing the application of the LMMSE filter for texture recovery because such information is not easily obtained. We propose a new method of estimating the necessary statistical information using Stein's lemma and several assumptions and show that our estimated information is more accurate than the simple estimation in terms of the Fréchet distance. Experimental results show that our proposed method can improve the objective quality of denoised images. Moreover, we show that our proposed method can also improve the subjective quality when an additional parameter is chosen for the texture to be added.


Introduction
Image denoising methods that can estimate a noiseless, clean natural image (original image) from a noisy observation are actively being studied. Many previous works have assumed that the noise in such a noisy observation is additive white Gaussian noise (AWGN). One popular approach for image denoising is to use nonlocal self-similarity (NLSS), in which it is assumed that a local segment of an image (a patch) is similar to other local patches [10-21].
Weighted nuclear norm minimization for image denoising (WNNM) [16] is an optimization-based method built on an NLSS-based objective function. WNNM assumes that a matrix whose columns consist of similar patches extracted from a clean image is low rank, and it achieves state-of-the-art denoising performance among non-learning-based methods.
Image denoising methods such as WNNM can estimate the original image well in terms of the mean squared error (MSE) or the peak signal-to-noise ratio (PSNR). However, as shown in Figure 1, texture is often lost in the estimated image. Because texture carries important information about the aesthetic properties of the materials depicted in certain parts of an image (such as the feel of sand, fur, or tree bark), this texture loss severely degrades the subjective image quality.
On the other hand, it is difficult to obtain a good description of texture (we refer to such a description as a texture model) and to estimate its parameters because of its stochastic nature, and the performance of texture-aware image denoising methods [8,19-21] depends on these models and parameters. Thus, we can classify texture-aware image denoising methods based on the texture models they utilize.
The gradient histogram preservation (GHP) method [19] is a prominent texture model for texture-aware image denoising. GHP describes texture features in the form of a histogram of the spatial gradients of the pixel values (gradient histogram). GHP is based on an image denoising method called nonlocally centralized sparse representation [15], and it can recover texture by imposing the condition that the gradient histogram of the output image must be close to that of the original image. The parameters of GHP can be estimated from the observed image by solving an inverse problem.
However, GHP does not utilize the relationships between distant pixels. Nevertheless, these relationships can carry important texture information because textures are often repeated over long distances. Zhao et al. addressed this problem by proposing a texture-preserving denoising method [20] that groups similar texture patches through adaptive clustering and then applies principal component analysis (PCA) and a suboptimal Wiener filter to each group. The Wiener filter is a special case of a linear minimum mean squared error estimator (LMMSE filter), and it requires an estimation of the covariance matrices of the original signals. In this texture model, the relationships between distant pixels can be expressed by the covariance matrices. The covariance matrices that are used in the suboptimal Wiener filter are calculated from sample observation patches, which are chosen using a nonfixed search window.
As another method of managing the distant relationships characterizing texture, a denoising method using the total generalized variation (TGV) and low-rank matrix approximation via nuclear norm minimization has been proposed [21]. In this method, the TGV is used to avoid the oversmoothing that is typically caused by low-rank matrix approximation methods [10,16]. The TGV and nuclear norm minimization are applied to capture the relationships between nearby pixels and distant pixels, respectively. Liu et al. claim that the latter are related to textures with regular patterns. The parameter of their texture model is the weight parameter of the TGV term. An iterative algorithm is required to estimate this parameter.
There is a trade-off between the complexity (or capacity) of a texture model and the simplicity of parameter estimation. For example, GHP [19] is a simple texture model with parameters that are easily estimated. However, this model cannot express textures in detail. By contrast, the covariance in the PCA domain [20] and the combination of the TGV and the nuclear norm [21] are complex texture models that can represent detailed textures well; however, their parameter estimation is relatively difficult.
To resolve this problem, we employ a slightly different definition of texture in this paper. We define texture as the difference between the original image and the corresponding denoised image (as obtained from an existing denoiser). We use the covariances within the texture component and between the texture and noise (covariance matrices) as our texture model, thereby capturing information about the relationships between distant pixels. Throughout this paper, we represent the covariance information in matrix form. Thus, we refer to this information as covariance matrices. It would appear to be difficult to estimate these matrices from a noisy observation. However, because we define the texture as the difference between the original and denoised images, we can utilize the information obtained in the denoising process to estimate the texture covariance matrices more easily.
We propose to use Stein's lemma [22] and several empirical assumptions to estimate the covariance matrices. Then, we apply an LMMSE filter based on the covariance matrices to estimate the lost texture information. In general, the Wiener filter assumes that the target and the noise have zero covariance. However, in our case, there is nonzero covariance between the texture, which is the difference between the original and denoised images, and the noise because the denoised image depends on the noise. Thus, we propose to use the LMMSE filter for the case in which there is covariance between the signal and noise.
Moreover, our approach yields a separate texture image, which allows us to emphasize the texture with any desired magnitude. Such texture magnification can improve the subjective quality of an image. Figure 2 presents our motivation: it shows the scheme of our proposed texture recovery method. In contrast to existing image denoising methods, including texture-aware methods [8,19-21], our method recovers the texture from both the observed image and the denoised image obtained via WNNM [16].
Our contributions are summarized as follows:
• We propose a new definition of texture for texture-aware denoising and a method of recovering texture information by applying an LMMSE filter.
• We introduce several nontrivial assumptions, based on Stein's lemma, to estimate the covariance matrices of the texture and noise that are used in the LMMSE filter.
• We demonstrate the effectiveness of our method in terms of the PSNR and subjective quality (with texture magnification) through experiments. We also show similarities between our estimated covariance matrices and the corresponding true covariance matrices in terms of the Fréchet distance [23].
Recently, several image denoising methods based on neural networks have been proposed [9,18]. These methods achieve excellent denoising performance; however, understanding the process underlying such black-box methods is difficult. To the best of our knowledge, WNNM is still the state of the art among denoising algorithms that are not based on machine learning (i.e., white-box methods), and our proposed method represents the first successful attempt to significantly improve the performance of WNNM. Additionally, our method can explicitly extract the texture component from the noisy image, enabling us to maximize the perceptual quality of the denoised images by arbitrarily magnifying the obtained texture. In the pursuit of the inherent model of natural images, it is worthwhile to improve the denoising performance of white-box algorithms, as accomplished with the proposed method.

This paper is organized as follows. Section 2 introduces WNNM as the background to this study. Section 3 introduces our newly proposed method of texture recovery and analyzes the texture and noise covariance matrices using Stein's lemma; there, we also propose a linear approximation of WNNM to enable the application of Stein's lemma. Section 4 compares our method with other state-of-the-art denoising methods. Section 5 concludes the paper. Appendix A proves that our LMMSE filter can successfully estimate the texture component of the original image. Appendix B presents experimental results that confirm several of our assumptions.
Please note that this work on texture recovery was previously presented in conference proceedings [24]. In this paper, we provide the proof of the LMMSE filter and experimental results that confirm some of our assumptions. We also analyze the estimation accuracy of the texture covariance matrix used for texture recovery and the effectiveness of emphasizing the texture statistically. In the conference paper, we provided only limited experimental results (the 24 images of Kodak Photo CD PCD 0992). In this paper, by contrast, we present new experimental results obtained on two image datasets (containing 110 images in total) to confirm our method's effectiveness.

Preliminaries and Notation
In this paper, R denotes the set of real numbers, small bold letters denote vectors, and large bold letters denote matrices. We denote the estimates of a and A by â and Â, respectively.
In this section, we describe WNNM [16] (a state-of-the-art denoising method based on NLSS) because our method estimates the texture information lost via the denoising process in WNNM.
In image denoising, a noisy observation is modeled as

y = x + n, (1)

where y ∈ R^N is the noisy observation with N pixels, x ∈ R^N is the original image that is the target of estimation, and n ∈ R^N is AWGN with standard deviation σ_n. In WNNM, the observed image y is first divided into overlapping focused patches of √M × √M pixels. The patch size M depends on the image size and the noise level; our experiments follow the default parameters of the authors' implementation of WNNM. For example, in the case of σ_n = 20, the patch size is 6 × 6 pixels, and the focused patches are selected with a one-pixel skip (i.e., a stride of 2). Therefore, when the image size is 256 × 512 pixels, the total number of focused patches is 32,004 (note that similar patches are selected from the 6 × 6 pixel patches extracted from the neighborhood (61 × 61 pixels) of the focused patch). The overlapping focused patches are then vectorized. We denote the i-th focused patch of y by y_i ∈ R^M (i = 1, ..., P). Then, a search is performed for the m − 1 patches that are the most similar to each focused patch y_i, and for each y_i, a patch matrix Ỹ_i ∈ R^{M×m} is created that includes y_i as its leftmost column, while the remaining columns of Ỹ_i are the patches that are similar to y_i. For the i-th patches of the other components x and n, we use similar notation, i.e., x_i and n_i. We denote their corresponding patch matrices by X̃_i and Ñ_i. Note that the indices of the similar patches in X̃_i and Ñ_i are the same as those in Ỹ_i.
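As a concrete illustration of the patch grouping described above, the following minimal NumPy sketch extracts a focused patch and gathers its most similar neighbors into a patch matrix. The function and parameter names (build_patch_matrix, search, m, stride) are illustrative and do not come from the authors' MATLAB implementation:

```python
import numpy as np

def build_patch_matrix(img, top, left, size=6, search=61, m=16, stride=2):
    """Sketch of NLSS-style patch grouping: extract the focused patch at
    (top, left), scan a search window around it, and return an (size*size, m)
    matrix whose columns are the m most similar patches (Euclidean distance);
    the focused patch itself has distance 0 and becomes the leftmost column."""
    H, W = img.shape
    ref = img[top:top + size, left:left + size].reshape(-1)
    half = search // 2
    cands = []
    for r in range(max(0, top - half), min(H - size, top + half) + 1, stride):
        for c in range(max(0, left - half), min(W - size, left + half) + 1, stride):
            p = img[r:r + size, c:c + size].reshape(-1)
            cands.append((np.sum((p - ref) ** 2), p))
    cands.sort(key=lambda pair: pair[0])   # most similar first
    cols = [p for _, p in cands[:m]]
    return np.stack(cols, axis=1)          # (M, m) patch matrix
```

In the full method, this grouping is repeated for every focused patch of the image, and the same column indices are reused for the corresponding clean and noise patch matrices.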
For simplicity, we subtract the columnwise average of the matrix Ỹ_i from each row of Ỹ_i and denote the result by Y_i. The same columnwise average subtraction is used in the implementation provided by Gu et al.; however, this is not explicitly described in [16]. Because we assume that n is AWGN, each columnwise average of the noiseless matrix X̃_i is sufficiently similar to that of Ỹ_i. Thus, the objective of WNNM is to estimate X_i, which is obtained by subtracting the columnwise average of X̃_i from X̃_i.
The core of the denoising process of WNNM is the following equation, based on the singular value decomposition (SVD) of the observed patch matrix Y_i:

X̂_i = U S_w(Σ_Y) V^T, (2)

where X̂_i is the denoised patch matrix, Y_i = U Σ_Y V^T is the SVD of Y_i, and S_w(·) is a threshold function. Each component (j, j) of the diagonal matrix S_w(Σ) is expressed as

[S_w(Σ)]_{jj} = max(Σ_{jj} − w_j, 0), with w_j = c√m σ_n² / (σ_j(X̂_i) + ε),

where σ_j(X̂_i) denotes the j-th singular value of X̂_i, c is an arbitrary parameter, and ε is a small number. This process corresponds to a closed-form solution of the iterative reweighting method applied to the singular value matrix of Y_i [16].
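The weighted singular value thresholding step can be sketched as follows. This is a simplified rendition of Equation (2); the weight formula follows the WNNM paper [16], but the constant c and the clean-spectrum estimate used here are illustrative defaults rather than the authors' exact settings:

```python
import numpy as np

def wnnm_denoise_patch_matrix(Y, sigma_n, c=2.8, eps=1e-8):
    """Sketch of WNNM's closed-form step: weighted singular value
    soft-thresholding of the mean-subtracted patch matrix Y (M x m)."""
    M, m = Y.shape
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    # estimate the clean singular values, then derive the reweighting weights
    s_clean = np.sqrt(np.maximum(s ** 2 - m * sigma_n ** 2, 0.0))
    w = c * np.sqrt(m) * sigma_n ** 2 / (s_clean + eps)
    s_hat = np.maximum(s - w, 0.0)         # soft threshold with weight w_j
    return U @ np.diag(s_hat) @ Vt
```

Large singular values (dominant low-rank structure) survive with a small penalty, while small singular values, attributed to noise, are suppressed toward zero.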
The singular value thresholding described above is applied to each focused patch and its similar patches, and estimates of the original patches are obtained by adding the columnwise average of Ỹ_i back to X̂_i. Then, the estimated original patches are combined to obtain a reconstructed image (overlapping patches are subjected to pixelwise averaging). In WNNM, this process is iterated to estimate the original image. In each iteration, the denoising target y^(k) is updated by calculating a weighted sum of the most recently estimated image x̂^(k−1) and the original noisy observation y (where k is the index of the iteration) with a weight parameter δ, as follows:

y^(k) = x̂^(k−1) + δ(y − x̂^(k−1)).

This process is called iterative regularization, and the parameter δ is fixed to 0.1 in [16]. The WNNM algorithm is given in Algorithm 1. For additional details on WNNM, please consult [16].
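The iterative regularization update is a one-line operation; the following sketch (with a hypothetical helper name) makes the mixing explicit:

```python
import numpy as np

def iterative_regularization(y, x_prev, delta=0.1):
    """One step of WNNM's iterative regularization: the next denoising target
    mixes the latest estimate with the original observation,
    y_k = x_prev + delta * (y - x_prev); delta is fixed to 0.1 in [16]."""
    return x_prev + delta * (np.asarray(y) - np.asarray(x_prev))
```

Feeding a small fraction of the observation back into the target re-injects residual signal (including texture) that earlier iterations may have removed.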

Recovering Texture via Statistical Analysis
In this section, we introduce our denoising method, which recovers the texture lost by denoising a noisy observation. In this paper, we define a structure (or cartoon) component s = f(y) ∈ R^N, where f(·) : R^N → R^N is a map corresponding to the denoising process of WNNM. Then, in contrast to previous work [8,19-21], we define a texture component t as the difference between the original image x and the structure s (i.e., t = x − s). Thus, we obtain our extended observation model, i.e.,

y = s + t + n.

Please note that the above equation is equivalent to Equation (1) since x = s + t. Our goal is to estimate the texture component t that has been lost via WNNM.
First, we denoise the observation y with WNNM. Following WNNM, we estimate each texture patch matrix T_i from the corresponding structure patch matrix S_i, which is obtained in the final iteration of the WNNM procedure, and the corresponding observation patch matrix Y_i. Because we use an LMMSE filter to estimate T_i, we need statistical information on the relationship between the texture and noise; however, this information cannot be calculated directly because t and n are not observable. Instead, we estimate this information using Stein's lemma and several assumptions, which are described below, and then reconstruct the estimated texture patch matrices to obtain the estimated texture component t̂. Finally, we obtain an estimate of the original image, x̂, by adding the estimated texture component t̂ to the denoised image s.

LMMSE Filter for Texture Recovery
We use an LMMSE filter to estimate each texture patch t_i. The objective function of the LMMSE filter W*_i ∈ R^{M×M} is formulated as

W*_i = argmin_W E[‖t_i − W(t_i + n_i)‖²_2],

where E[·] denotes the expected value and ‖·‖_2 is the ℓ2 norm. Because this formula is quadratic in W, we can easily find the minimizer as follows:

W*_i = (R_{t_i t_i} + R_{t_i n_i}) R⁻¹_{(t_i+n_i)(t_i+n_i)},

where we adopt the covariance matrix notation R_{ab} = E[(a − E[a])(b − E[b])^T] (where a and b are some random vectors). The proof is given in Appendix A. Unfortunately, the covariance matrices R_{t_i t_i} and R_{t_i n_i} cannot be directly obtained because t_i and n_i are not observable. We solve this problem by using Stein's lemma and introducing several assumptions.
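A minimal numerical sketch of the closed-form LMMSE solution above, using sample covariances from synthetic correlated data (all variable names are illustrative); it checks only that the filter beats the trivial pass-through estimator:

```python
import numpy as np

def lmmse_filter(R_tt, R_tn, R_zz):
    """Closed-form minimizer of E||t - W z||^2 for z = t + n, allowing
    nonzero texture/noise covariance: W* = (R_tt + R_tn) R_zz^{-1}.
    Implemented via solve() on the transposed system, avoiding an explicit
    inverse: W R_zz = R_tt + R_tn  <=>  R_zz^T W^T = (R_tt + R_tn)^T."""
    return np.linalg.solve(R_zz.T, (R_tt + R_tn).T).T
```

Because the minimizer is exact for the sample covariances, the resulting filter can never do worse (in sample MSE) than simply passing the noisy signal through.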

Estimation of the Covariance Matrices
As mentioned above, we need to estimate R_{t_i t_i} and R_{t_i n_i}. Since we define t as x − s, and the value of s changes with respect to the value of n, we can assume that t is the output of a function that takes n as an input. Moreover, n follows a normal distribution. Thus, we can estimate R_{t_i n_i} using Stein's lemma as follows:

R_{t_i n_i} = σ_n² E[∂t_i/∂n_i].

The above equation shows that our desired covariance matrix can be obtained from the effect of n_i on t_i. The texture patch t_i is equal to x_i − s_i according to Equation (5); moreover, we assume that the effect of n on x is zero. Therefore, we can estimate the covariance matrix R_{t_i n_i} as

R_{t_i n_i} = −σ_n² E[∂s_i/∂n_i].

We need to analyze the variations in the WNNM output as the noise varies. As mentioned above, the WNNM process is complex. Thus, we approximate the WNNM process with a linear filter to simplify the analysis.
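Stein's lemma is the key identity here. The following one-dimensional Monte Carlo sketch illustrates it for a simple nonlinearity f(n) = n³, for which Cov(f(n), n) = σ² E[f′(n)] can be checked directly; the paper applies the multivariate version with f being the WNNM mapping:

```python
import numpy as np

# Monte Carlo illustration of Stein's lemma in 1-D:
# for n ~ N(0, sigma^2), Cov(f(n), n) = sigma^2 * E[f'(n)].
rng = np.random.default_rng(1)
sigma = 0.5
n = rng.normal(0.0, sigma, size=2_000_000)
f = n ** 3                       # f(n) = n^3, so f'(n) = 3 n^2
lhs = np.mean(f * n)             # Cov(f(n), n); both factors have zero mean
rhs = sigma ** 2 * np.mean(3 * n ** 2)
```

Analytically, both sides equal 3σ⁴; the Monte Carlo estimates agree to within sampling error, which is why the lemma lets us trade an unobservable covariance for an expected derivative.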
Surprisingly, the linear approximation of the WNNM process also provides us with an empirical method of estimating R t t , which is more difficult to estimate than R t n . In the next subsection, we describe how to approximate WNNM with a linear filter and how to estimate R t t .

Linear Approximation of the WNNM Procedure
To simplify the analysis and determine the covariance matrix R_{t_i n_i} using Stein's lemma, we assume that we can precisely approximate the whole process of WNNM as a linear filter:

S_i ≈ F_i Y_i.

The SVDs of Y_i and S_i are formulated as

Y_i = U_Y Σ_Y V_Y^T, S_i = U_S Σ_S V_S^T.

If we ignore the fact that the patch matrices are updated in WNNM via the similar-patch search, reconstruction, and iterative regularization in each iteration, then the SVDs of Y_i and S_i will have common right and left singular matrices. We assume that U_Y and V_Y are equal to U_S and V_S, respectively. Thus, we can obtain the approximate WNNM filter F_i as follows:

F_i = U_Y Σ_S Σ_Y⁻¹ U_Y^T.

The partial derivative of the columnwise average of Ỹ_i with respect to n_i is considered to be very small because n is Gaussian and the average of n is zero. Note that Y_i is obtained by subtracting the columnwise average of Ỹ_i from Ỹ_i.
If this approximation of WNNM is sufficiently accurate, then ∂s_i/∂n_i ≈ F_i. From Equation (9), we can estimate R_{t_i n_i} as

R̂_{t_i n_i} = −σ_n² F_i. (13)

Additionally, we use the approximate WNNM filter F_i to estimate R_{t_i t_i}. We assume that R_{t_i t_i} can be estimated as

R̂_{t_i t_i} = 2σ_n² F_i. (14)

This assumption is founded on preliminary experiments, the details of which are presented in Appendix B.3.
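Under the shared-singular-vector assumption, the approximate filter F_i and the two covariance estimates can be sketched as follows (function names are illustrative; the diagonal extraction implements the assumption that Y_i and S_i share singular bases):

```python
import numpy as np

def approx_wnnm_filter(Y, S, eps=1e-12):
    """Sketch of the linear approximation of WNNM: assuming Y and S share
    left/right singular vectors, S ≈ F Y with F = U diag(sig_S / sig_Y) U^T,
    where sig_S is S's spectrum read off in Y's singular basis."""
    U, sig_Y, Vt = np.linalg.svd(Y, full_matrices=False)
    sig_S = np.diag(U.T @ S @ Vt.T)        # spectrum under the shared-basis assumption
    return U @ np.diag(sig_S / (sig_Y + eps)) @ U.T

def estimate_covariances(F, sigma_n):
    """Eqs. (13)-(14): R̂_tn = -sigma_n^2 F and R̂_tt = 2 sigma_n^2 F."""
    return 2 * sigma_n ** 2 * F, -sigma_n ** 2 * F
```

When S really is a spectral shrinkage of Y (as WNNM's thresholding makes it, up to the ignored updates), F reproduces S from Y exactly.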

Recovering the Texture of a Denoised Image
Based on the above discussion, we can calculate the LMMSE filter and estimate t_i. However, R_{(t_i+n_i)(t_i+n_i)} is not invertible in practice because the number of patches is smaller than M. Although an equivalent estimate of R_{(t_i+n_i)(t_i+n_i)} appears to be R̂_{t_i t_i} + R̂_{t_i n_i} + R̂_{n_i t_i} + σ_n² I (where I is the identity matrix), this choice does not provide good recovery performance. Instead, substituting R̂_{t_i t_i} + σ_n² I for the sample covariance matrix R_{(t_i+n_i)(t_i+n_i)} was found to yield the best results in our preliminary experiments. Thus, we calculate the LMMSE filter as

W*_i = (R̂_{t_i t_i} + R̂_{t_i n_i})(R̂_{t_i t_i} + σ_n² I)⁻¹. (15)

Substituting Equations (13) and (14) into (15), we can estimate T_i as

T̂_i = F_i (2F_i + I)⁻¹ (Y_i − S_i). (16)

Then, we can obtain the desired estimated texture image t̂ by combining each T̂_i in the same manner used to reconstruct an image from the patches obtained in WNNM. Finally, we can obtain the final estimated image x̂ with the recovered texture as

x̂ = s + t̂.

We use the LMMSE filter to obtain t̂; however, minimizing the MSE often causes t̂ to lose clarity. We can obtain a clearer texture-enhanced image as follows:

x̂ = s + α t̂,

where α is an arbitrary parameter used to control the magnitude of the added texture. Note that α = 1 means that the proposed method applies the plain LMMSE filter, and we can expect to obtain the denoised images with the best PSNR if all of our assumptions hold. The entire proposed process is presented in Algorithm 2.
Algorithm 2 Texture recovery using the proposed method.
Input: Noisy observation y
Initialize s^(0) = y and y^(0) = y
for k = 1 : K do
    Iterative regularization: y^(k) = s^(k−1) + δ(y − s^(k−1))
    for i = 1 : P do
        Find the similar patch matrix Y_i^(k)
        Estimate the corresponding original patch matrix X_i via Equation (2); the result is denoted by S_i^(k)
        if k = K then
            Estimate R_{t_i t_i} and R_{t_i n_i} using Stein's lemma via Equations (13) and (14)
            Calculate W*_i via Equation (9)
            Estimate T_i via Equation (16) to obtain T̂_i
        end if
    end for
    Use the S_i^(k) to reconstruct the estimated image s^(k)
end for
Use the T̂_i to reconstruct the estimated texture component t̂
Output: Obtain the final estimated image as x̂ = s^(K) + t̂
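The per-patch recovery step at k = K (Equation (16) followed by the texture-scaled combination) can be sketched as follows; note that σ_n² cancels out of the simplified filter:

```python
import numpy as np

def recover_texture(Y, S, F, alpha=1.0):
    """Sketch of the final recovery step. With R_tt = 2*sigma^2*F and
    R_tn = -sigma^2*F, the LMMSE filter (R_tt + R_tn)(R_tt + sigma^2 I)^{-1}
    simplifies to F (2F + I)^{-1}; sigma^2 cancels. Returns S + alpha * T_hat."""
    M = F.shape[0]
    W = np.linalg.solve((2.0 * F + np.eye(M)).T, F.T).T   # W = F (2F + I)^(-1)
    T_hat = W @ (Y - S)                                   # estimated texture patches
    return S + alpha * T_hat
```

With alpha = 1 this is the plain LMMSE estimate; a larger alpha magnifies the recovered texture, as discussed in the experiments.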

Experimental Results
In this section, we discuss our experimental results. First, we present the performance of our denoising method and compare it with that of other non-learning-based state-of-the-art denoising methods, namely, block matching and 3D filtering (BM3D) [11], GHP [19], and WNNM [16]. For comparison with learning-based denoising methods, we compare the denoising performance of the proposed method with that of a denoising convolutional neural network (DnCNN) [9]. We also confirm the effects of the parameter α. Moreover, we present a histogram of the Fréchet distance to confirm the assumption of Equation (14).
We used two image datasets for validation. One dataset consists of ten grayscale natural images from [21], as shown in Figure 3 (dataset I). The other dataset contains 100 grayscale natural images from the Berkeley segmentation dataset [25] (BSD100). In all experiments, we used MATLAB R2018b for the implementation, and all noisy observations were simulated by adding noise generated by a pseudorandom number generator to the original images.

Image Denoising
The PSNR and structural similarity (SSIM) [26] results for the denoised images from dataset I at different noise levels are given in Tables 1-3. Note that higher PSNR and SSIM values mean that the evaluated image is more similar to the original image. We highlight the highest PSNR and SSIM in each row in bold. We set the texture scaling parameter to α = 1 because we wish to measure how much improvement is achieved in terms of the PSNR. We recover the texture using the LMMSE filter; thus, the MSE is likely to be low, meaning that the PSNR should be high. These tables show that our denoising method outperforms the other state-of-the-art methods in terms of both the PSNR and SSIM. The largest difference in the PSNR between WNNM and our method is 0.15 dB. Additionally, the averages of the denoising results on BSD100 at each noise level are shown in Table 4. This table shows that the performance improvement achieved with our method is independent of the dataset. For each image in BSD100, we performed one-tailed paired t-tests on the differences in PSNR and SSIM between the proposed method and WNNM. In these tests, the null hypothesis is that the PSNR/SSIM values obtained by the proposed method are not greater than the corresponding values obtained by WNNM. The p-values are also shown in Table 4. Since the p-values are sufficiently small, we confirmed statistically significant differences between our proposed method and WNNM.
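For reference, the PSNR used throughout these tables is the standard definition for 8-bit images; a minimal sketch:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """PSNR in dB for 8-bit images: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(ref, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

A uniform error of 16 gray levels, for instance, corresponds to roughly 24 dB, which is why sub-0.2 dB differences between denoisers reflect quite small pixel-level changes.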
Moreover, we show the average computational times of WNNM and the proposed method in Table 5. The computational times were measured on 10 distinct observed images (the noise realization of each observation is different) for each test image from BSD100. The additional computational time of our texture recovery is 7-13% of the computational time of WNNM. The PSNR/SSIM gain obtained by our texture recovery method is considered to be sufficient for users to accept this additional cost. We also compare the denoising performance of the proposed method with that of DnCNN [9] on dataset I. The noise level was set to σ_n = 15, and the texture scaling parameter was set to α = 1. We used the PyTorch implementation of DnCNN and the pretrained model parameters for noise level σ_n = 15 published by the authors of [9]. The experimental results are shown in Table 6. The improvement of DnCNN over WNNM is 0.30 dB in average PSNR, while that of the proposed method is 0.10 dB. Note, however, that the proposed method requires no training process and is fully explainable, whereas the denoising mechanism of DnCNN is a black box.
Next, to observe the effect of the parameter α, we searched for the value of this parameter that maximizes the SSIM of the output image, denoted by α_SSIM. We obtained α_SSIM via a line search in the range from 0 to 8 in increments of 0.01. For several examples, the original image, the noisy observation, the output image obtained via GHP, the output image obtained via WNNM, and an output image obtained with our method are shown in Figure 4. From this figure, we can observe that the subjective quality and SSIM can be drastically improved by properly choosing the parameter α. For example, as shown in Figure 4p-t for the image named 196073, the SSIM value obtained with our method is 0.073 higher than that achieved via WNNM. Moreover, we show t and t̂ for each image of Figure 4 in Figure 5. Please note that the pixel value of each image is multiplied by a factor of 3. This figure shows that t̂ is similar to t in each textured region, such as the surface of a stone, a coral, the skin of a snake, and the surface of the sea. A histogram of the α_SSIM values obtained on BSD100 is shown in Figure 6. We note that this histogram shows the frequencies under three noise levels (σ_n = 10, 20, 30). This figure indicates that there is no single optimal value of α_SSIM common to all images. Nevertheless, the figure also shows that choosing α = 2 will almost always increase the expected value of the SSIM. Moreover, as with α_SSIM, we searched for the value of α that maximizes the PSNR of the output image, denoted by α_PSNR. We obtained α_PSNR for each image via a line search in the range from 0 to 8 in increments of 0.01. Figure 7 shows a histogram of the values of α_PSNR. We confirmed that α = 1 produces the denoised images with the best PSNR in most cases.

Accuracy of Texture Covariance Matrix Estimation
In this section, we compare the accuracy of two estimates of the texture covariance matrix. One is our estimated texture covariance matrix, and the other is a simple sample-based estimate that is calculated as

R̂^smp_{t_i t_i} = h(R̂_{(y_i−s_i)(y_i−s_i)} − σ_n² I),

where R̂_{(y_i−s_i)(y_i−s_i)} is the sample covariance of the observed texture patches and h(·) is a function that replaces any negative eigenvalues of the input matrix with zero; this is equivalent to metric projection onto the set of positive-semidefinite matrices. We calculate the Fréchet distance to evaluate the accuracy of the texture covariance matrix. Dowson and Landau introduced the calculation of the Fréchet distance between multivariate normal (MVN) distributions [23]. The Fréchet distance is also used to measure the similarity between generated and real images [27]. We assume that the texture component t follows an MVN distribution. Consider a random MVN-distributed variable a ∈ R^M whose mean vector is μ_a = E[a] ∈ R^M and whose covariance matrix is R_aa (a ∼ N(μ_a, R_aa)); its probability density is denoted by

p(a) = (2π)^{−M/2} |R_aa|^{−1/2} exp(−(a − μ_a)^T R_aa⁻¹ (a − μ_a)/2).

Additionally, the Fréchet distance between the MVN distributions N(μ_a, R_aa) and N(μ_b, R_bb) is calculated as

d²(N(μ_a, R_aa), N(μ_b, R_bb)) = ‖μ_a − μ_b‖²_2 + tr(R_aa + R_bb − 2(R_aa R_bb)^{1/2}).

We assume that the mean vector of the texture is 0. Thus, we can calculate the Fréchet distance between the estimated texture covariance matrix R̂_{t_i t_i} and the true texture covariance matrix R_{t_i t_i} obtained from the ground truth as follows:

d²(N(0, R̂_{t_i t_i}), N(0, R_{t_i t_i})) = tr(R̂_{t_i t_i} + R_{t_i t_i} − 2(R̂_{t_i t_i} R_{t_i t_i})^{1/2}). (22)

We can similarly calculate the Fréchet distance between the simple estimate R̂^smp_{t_i t_i} and R_{t_i t_i} by replacing R̂_{t_i t_i} with R̂^smp_{t_i t_i} in Equation (22). Figure 8 shows histograms of these Fréchet distances. Note that we calculated the Fréchet distance for each patch matrix from dataset I; however, we ignored the very few outliers caused by numerical errors. This figure shows that R̂_{t_i t_i} is more similar to R_{t_i t_i} than R̂^smp_{t_i t_i} is at each noise level.
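The zero-mean Fréchet distance above can be computed with an eigendecomposition-based matrix square root; the sketch below uses the symmetric form tr(A + B − 2(A^{1/2} B A^{1/2})^{1/2}), which coincides with Dowson and Landau's expression for positive-semidefinite matrices:

```python
import numpy as np

def sqrtm_sym(A):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.sqrt(np.maximum(vals, 0.0))) @ vecs.T

def frechet_gaussian(R_a, R_b):
    """Squared Fréchet distance between N(0, R_a) and N(0, R_b) [23]:
    tr(R_a + R_b - 2 (R_a^{1/2} R_b R_a^{1/2})^{1/2})."""
    s = sqrtm_sym(R_a)
    cross = sqrtm_sym(s @ R_b @ s)
    return float(np.trace(R_a + R_b - 2.0 * cross))
```

For commuting (e.g., diagonal) covariances this reduces to the squared Frobenius distance between the matrix square roots, which gives a quick sanity check.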

Conclusions
In this paper, we have proposed a method of recovering texture information that has been oversmoothed by the denoising process of WNNM. For texture recovery, we apply an LMMSE filter to a noisy image. Because our filter requires the covariance matrices within the texture and between the texture and noise, we have also proposed a method of estimating this information based on Stein's lemma and several key assumptions.
Experimental results obtained on two image datasets show that our method can improve the PSNR and SSIM of WNNM and also outperforms other state-of-the-art methods with respect to both criteria. Moreover, we confirmed statistically significant differences between our method and WNNM. With our method, the SSIM can be further improved by choosing a suitable value for the scaling parameter α, which controls the magnitude of the added texture; blurred edges and texture can likewise be enhanced through a proper selection of this parameter. Additionally, our estimated texture covariance matrices are more similar to the corresponding oracle covariance matrices in terms of the Fréchet distance than are the simple sample-based estimates obtained from the observed images. Finally, the additional computational time of our texture recovery is 7-13% of the computational time of WNNM; we consider this additional cost acceptable given the gains in PSNR and SSIM.
In the experiments, we chose almost all the parameters of the proposed method based on the default settings of WNNM. The only parameter that the user needs to choose is α. When the user wants to maximize the PSNR, α = 1 gives the best result in most cases. If the user would like to enhance the texture, a value of α larger than 1 should be chosen. In addition, our experimental results show that α = 2 almost always improves the SSIM.

Data Availability Statement: For dataset I, please consult [21]. BSD100, as used in this study, is available at https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/.
To determine each index of t corresponding to T_i, 20 images from BSD100 and 40 patch indices from each image were randomly selected. The noise level was set to σ_n = 20.
The results of our Kolmogorov-Smirnov test at a significance level of 0.05 show that, for 97% of the pixel values of the texture component, there is no clear evidence that they do not follow a normal distribution.
The reader might suppose that the fact that each pixel of t follows a normal distribution does not necessarily imply that t follows an MVN distribution. However, the former strongly implies the latter in practice.

Appendix B.3. Assumption Regarding the Estimate of the Texture Covariance Matrix
In Section 3.3, we assume that R_{t_i t_i} can be estimated as R̂_{t_i t_i} = 2σ_n² F_i. We performed an experiment to support this assumption empirically. With some abuse of notation, we define R̂_{t_i t_i} = ρσ_n² F_i in this subsection and experimentally confirm the relationship between the parameter ρ and the denoising performance.
We used the 10 images of dataset I and set the noise level to σ_n = 20. In this experiment, the parameter ρ was varied in the range of 0 to 3 in increments of 0.1, and the noisy images were denoised using the proposed method with R̂_{t_i t_i} = ρσ_n² F_i. We calculated the PSNR between all output images x̂ of the proposed method and the original images x for each ρ. Note that this PSNR is not the average of the per-image PSNRs but rather is calculated from the average of the squared errors over all images. The experimental results are shown in Figure A2.
This figure shows that the denoising performance is highest when ρ is approximately 2. Thus, we assume that R̂_{t_i t_i} = 2σ_n² F_i. Figure A2. The relationship between the parameter ρ and the denoising performance of the proposed method with R̂_{t_i t_i} = ρσ_n² F_i.