Article

A Robust Noise Estimation Algorithm Based on Redundant Prediction and Local Statistics

1 State Key Laboratory of High-Performance Complex Manufacturing, School of Mechanical and Electrical Engineering, Central South University, Changsha 410083, China
2 School of Mechanical and Electrical Engineering, Changsha University, Changsha 410022, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(1), 168; https://doi.org/10.3390/s24010168
Submission received: 27 October 2023 / Revised: 1 December 2023 / Accepted: 5 December 2023 / Published: 28 December 2023
(This article belongs to the Section Sensing and Imaging)

Abstract

Blind noise level estimation is a key issue in image processing applications that helps improve the visualization and perceptual quality of images. In this paper, we propose an improved block-based noise level estimation algorithm. The proposed algorithm first extracts homogenous patches from a single noisy image using local features, obtains the eigenvalues of the patch covariance matrices, and constructs dynamic thresholds for outlier discrimination. By analyzing the correlations between scene complexity, noise strength, and other parameters, a nonlinear discriminant coefficient regression model is fitted to accurately predict the number of redundant dimensions, and the actual noise level is calculated from the statistical properties of the elements in the redundant dimension. The experimental results show that the accuracy and robustness of the proposed algorithm are better than those of existing noise estimation algorithms in various scenes under different noise levels, and the algorithm achieves a good overall balance between estimation performance and execution speed.

1. Introduction

With the wide application of digital images in various fields, image quality requirements are increasing. However, images are affected by various disturbances during acquisition, transmission, and processing. Different levels of noise appear during these processes; this noise not only reduces the image quality but also affects the subsequent image processing and analysis. Image noise estimation aims to obtain the accurate noise standard deviation and use it as a priori information to select appropriate algorithm parameters for subsequent image processing activities such as image denoising [1,2,3,4,5], restoration [6], compression [7,8], and segmentation [9]. The accuracy of the estimation results will directly determine the performance of the algorithm.

1.1. Related Works

Additive White Gaussian Noise (AWGN), as the most common type of noise in image processing, is the most widely used noise model and research object. In recent decades, the estimation of AWGN in digital images has been widely studied. The existing noise estimation methods can be roughly categorized into statistical approaches, transform domain-based approaches, filter-based approaches, and patch-based approaches. For some specific application scenarios or noise types, the noise in an image can be assumed to follow a specific probability distribution, and the corresponding noise model [10,11] can be constructed to compute the noise level using statistical information, such as the local variance [12] and histogram [13], of the image itself. Zoran et al. [14] exploited the fact that the kurtosis of a natural noise-free image is scale-invariant, related the noise variance to the kurtosis, and transformed the noise level estimation problem into a nonlinear optimization problem. Chen et al. [15] used statistical measures to describe the distribution of pixel values and constructed corresponding noise models. Maximum likelihood estimation is often used to estimate the parameters of a noise distribution, leading to a more accurate understanding of the properties of the noise [16]. However, the kurtosis scale invariance assumption has limitations for images with highly directional edges or large smooth regions [17]. Statistical approaches are more suitable for simple noise models, but they rely on the a priori information of the noise model and have limited adaptability to different scenes. These approaches may not be able to estimate the noise efficiently for different types of images and noise.
Transform domain methods usually transform the image to another domain through the discrete cosine transform or wavelet transform to separate the noise from the image information in the transform domain [18,19]. Liu et al. [20] proposed a method based on the singular value decomposition transform, which uses the tail values of the singular value sequence to approximate the image noise level. Khalil et al. [21] proposed using the wavelet coefficients of the detail subbands, most of which reflect only the noise components. Hashemi et al. [22] estimated the noise variance from the posterior distribution by establishing a prior probability distribution of the noise variance using Bayesian inference. However, the domain change may introduce errors in the image representation and a loss of signal information [23], and these methods are not satisfactory when working with nonsmooth noise or high-frequency image signals.
Filter-based approaches mainly consist of applying an appropriate filter to the image to suppress the high-frequency noise components. The noise estimation results are obtained indirectly by analyzing the image before and after filtering. Immerkaer [24] proposed a weighted local mean method to obtain an unbiased estimate of the noise variance by averaging and differencing the image using a matrix mask and estimating the noise variance using polynomial approximation. Laligant et al. [25] proposed a noise estimator based on a step-signal model that used polarization derivatives, directional derivatives, and a nonlinear combination of these derivatives to estimate the noise distribution. Edge detectors such as the Laplace operator are used in these approaches to suppress the structural information of the image and to ensure that the structure and detail do not interfere with the estimation of the noise variance [26]. However, some image structure is always present in the filtered image, and the difference between the filtered image and the original image cannot immediately and directly be assumed to be noise. Such methods can also introduce distortions and are limited by the image structure information and noise characteristics, and their accuracy is significantly reduced for images with rich textures.
Patch-based approaches generally decompose the noisy image into a set of patches and select homogenous patches using pixel brightness proximity and simple texture and structure as selection criteria. Assuming that the pixel value variations of homogenous patches are mainly caused by noise, principal component analysis (PCA) is often used to separate the noise component, and local statistical properties are used to estimate the global noise level. Liu et al. [27] iteratively searched the image for weak-textured patches using the gradient values and statistics of the image patches. Pyatykh et al. [28] used principal component analysis to estimate the noise level of an image. By analyzing the image patch covariance matrix, the principal components containing noise were identified to obtain an accurate noise estimate. Fang et al. [29] proposed a new patch-based algorithm that linearly combines overestimated and underestimated results to yield higher accuracy and robustness. Hou et al. [30] introduced a pixel-level nonlocal self-similarity prior to search for similar pixels and accurately and quickly estimate the noise level. Jiang et al. [31] used image self-similarity and separability to decompose an image into noise principal components and detail components. They then used the principal texture representation to estimate the noise principal components and computed the principal component via the image texture variance. Dutta et al. [32,33] used quantum mechanical concepts to represent and process signals and images, proposing novel algorithms and frameworks that bring new ideas and approaches to the field of image noise research.
The advantage of patch-based approaches is that they can fully exploit the statistical features of the local region and reflect the global information with local features. Unlike traditional transform domain and statistical approaches, they can adapt to different noise distributions and scene structures. They can also be combined with spatial domain and frequency domain processing tools [34,35], providing greater flexibility and applicability. However, this class of methods also has some challenges, such as selecting the appropriate patch size and image coverage and reducing the interference of the original image structure information in the noise estimation.

1.2. Motivations and Contributions

This study focuses on the above problems, analyzes the relationship between the eigenvalues of homogenous patches and the actual noise level, and designs parameters and strategies that can be adaptively adjusted to the specific situation in order to obtain accurate and reliable noise estimation results.
In particular, the main contributions of this paper can be summarized as follows:
  • We present the limitations of the traditional method of homogenous patch selection, analyze the effect of the eigenvalues of homogenous patches on the noise estimation results, and provide empirical evidence and a theoretical basis for the phenomena of underestimation and overestimation.
  • We use the median absolute deviation (MAD) to remove the eigenvalues of the main dimension and predict the number of redundant dimensions. Through statistical analysis of the correlation between the adaptive discriminant coefficient and the input variables, we establish a multivariate nonlinear regression model.
  • We show various experimental results on complex benchmark datasets. These results demonstrate that the proposed algorithm performs well in a variety of scenarios with different noise conditions.

1.3. Paper Organization

The rest of this paper is organized as follows: Section 2 briefly overviews the noise estimation method based on principal component analysis and analyzes the relationship between the minimum eigenvalue and the actual noise level. Section 3 describes the proposed method in detail. Section 4 specifies the parameter settings, reports the experimental results of the proposed method on a benchmark dataset, and validates its effectiveness by comparing its performance with that of state-of-the-art methods. Finally, Section 5 concludes the paper.

2. Noise Level Estimation Based on PCA

For an input noisy image I of size M × N, the image is scanned pixel by pixel in a sliding-window fashion; y is the set of overlapping patches of size r = d², containing a total of s = (M − d + 1)(N − d + 1) patches. Each patch y_i is vectorized as a column vector:
y_i = x_i + n_i,  (1)
where x_i is the i-th noiseless (original image) patch, n_i is additive white noise following a Gaussian distribution N(0, σ²), and y_i is the corresponding noisy patch. Assuming that the noise vectors are uncorrelated with each other, refs. [27,28] applied principal component analysis to this noise model to derive the following equation:
\lambda_{\min}(y) = \lambda_{\min}(x) + \sigma^2,  (2)
where λ_min(y) and λ_min(x) are the minimum eigenvalues of the covariance matrices of y and x, respectively. However, for the blind noise estimation problem, the noiseless image is unknown [36]. It can be assumed that λ_min(x) of the set of patches x covered by weak-textured and relatively homogenous regions in the noisy image is equal to zero. Then, Equation (2) can be rewritten as:
\sigma^2 = \lambda_{\min}(\bar{y}),  (3)
where ȳ is the subset of patches in set y that satisfy the homogenous conditions. The minimum eigenvalue of the covariance matrix of ȳ is equal to the actual noise level, so screening homogenous patches from the set of raw patches is a key step in the noise estimation algorithm based on principal component analysis.
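To make the relationship in Equation (3) concrete, the following is a minimal NumPy sketch (not the authors' code) of the patch extraction and covariance eigenvalue computation, assuming a grayscale image and that the homogenous patches have already been selected:

```python
import numpy as np

def extract_patches(image, d):
    """Vectorize all overlapping d x d patches of a grayscale image (stride 1)."""
    M, N = image.shape
    patches = [image[i:i + d, j:j + d].ravel()
               for i in range(M - d + 1) for j in range(N - d + 1)]
    return np.asarray(patches, dtype=np.float64)      # shape (s, d^2)

def pca_min_eigenvalue(patches):
    """Eigenvalues of the patch covariance matrix; the smallest one plays the
    role of lambda_min in Equation (3)."""
    cov = np.cov(patches, rowvar=False)               # d^2 x d^2 sample covariance
    eigvals = np.linalg.eigvalsh(cov)                 # ascending order
    return eigvals, eigvals[0]

# Usage sketch: if `homog` holds the selected homogenous patches,
#   eigvals, lam_min = pca_min_eigenvalue(homog)
#   sigma_hat = np.sqrt(max(lam_min, 0.0))            # Equation (3)
```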
An appropriate measure of image structure must be determined to select low-ranked patches with similar structures [37]. The gradient covariance matrix [38] can be used to detect image texture and structure, and Liu et al. [27] proposed an iterative weak-textured patch selection method that uses the trace of a patch's gradient covariance matrix as a texture strength measure. The statistical properties of the texture strength ξ(n) satisfy a gamma distribution, and when the texture strength of a patch is less than the threshold τ, the patch can be considered a weak-textured patch. In the k-th iteration, the threshold τ can be expressed as a function of the given significance level δ and the noise level σ_n^(k) of the k-th iteration as follows:
\tau = \left(\sigma_n^{(k)}\right)^2 F^{-1}\!\left(\delta, \frac{d^2}{2}, \frac{2}{d^2}\,\mathrm{tr}\!\left(D_x^T D_x + D_y^T D_y\right)\right),  (4)
where F^{-1}(δ, α, β) is the inverse gamma cumulative distribution function with shape parameter α and scale parameter β, δ is the probability that the texture strength is not greater than τ, σ_n^(k) is the standard deviation of the Gaussian noise at the k-th iteration, and D_x and D_y represent the matrices of the horizontal and vertical derivative operators.
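As an illustration of Equation (4), the threshold can be evaluated with SciPy's inverse gamma CDF. The trace term tr(D_x^T D_x + D_y^T D_y) is fixed once the patch size and derivative kernels are chosen, so in this sketch it is passed in as a precomputed constant:

```python
from scipy.stats import gamma

def texture_threshold(sigma_k, d, delta, trace_dd):
    """Threshold tau of Equation (4).

    sigma_k  : noise standard deviation estimate at iteration k
    d        : patch side length (patches are d x d)
    delta    : probability that the texture strength does not exceed tau
    trace_dd : tr(Dx^T Dx + Dy^T Dy), fixed by d and the derivative kernels
    """
    shape = d * d / 2.0
    scale = (2.0 / (d * d)) * trace_dd
    # F^-1(delta, shape, scale): inverse CDF of the gamma distribution
    return (sigma_k ** 2) * gamma.ppf(delta, a=shape, scale=scale)
```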
The local variance is also an important feature for characterizing the structural information of the image. When the local variance of a patch is large, it indicates that the patch belongs to a high-frequency part of the image, such as the edge and corner regions. The local variance of y_i is defined as follows:
\sigma_P^2 = \frac{1}{d^2 - 1} \sum_{y_j \in y_i} (y_j - \mu)^2,  (5)
where the sum runs over the pixels y_j of the patch and μ is the average gray value of y_i. The σ_P² of a flat patch obeys a gamma distribution Γ(r/2, r/(2σ²)). Based on the properties of the gamma distribution, we can obtain the mean value of the variance, E(σ_P²) = σ². Thus, we can use the mean local variance to estimate the noise. For the flat patch, Jiang and Zhang [39] proved the following theorem:
Theorem 1.
There must exist a unique σ̃² (σ̃ > 0) such that
\tilde{\sigma}^2 = E\!\left(\sigma_P^2 \mid \sigma_P^2 \le \lambda \tilde{\sigma}^2\right),  (6)
\tilde{\sigma}^2 = \rho \sigma^2,  (7)
with
\rho = \frac{\int_0^{F^{-1}(\delta,\, r/2,\, 2/r)} x\, z(x)\, dx}{\int_0^{F^{-1}(\delta,\, r/2,\, 2/r)} z(x)\, dx},  (8)
\lambda = F^{-1}(\delta,\, r/2,\, r/2) \,/\, \rho,  (9)
where z(x) = σ² f(σ² x) represents the pdf of Γ(r/2, r/2), F^{-1}(δ, α, β) is the inverse gamma cumulative distribution function, and δ, a given significance level, denotes the probability that the local variance is no more than λσ̃², i.e., P(σ_P² ≤ λσ̃²) = δ, where P(·) denotes probability.
Following Theorem 1, we consider the patches whose local variances do not exceed λσ̃² to be flat patches. Assuming that the patches obtained after the selection of weak-textured and flat patches are homogenous patches, the true noise variance can be obtained by performing principal component analysis-based noise estimation on these patches. However, for images with high texture and high rank, this homogenous patch selection method has some obvious drawbacks. Patches with a small local variance σ_P or texture strength ξ(n) do not necessarily satisfy the homogeneity condition, and patches containing high-frequency information with high similarity are also low-rank patches [40]. Distinguishing such patches for collection is difficult, so homogenous patches will not be fully extracted when relying only on the above patch selection methods.
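As for the flat-patch criterion itself, Theorem 1 suggests a simple fixed-point iteration in the spirit of [39]. The sketch below is an illustration rather than the reference implementation; it assumes the constants λ and ρ of Equations (9) and (8) have been precomputed for the chosen r and δ, and that the per-patch local variances of Equation (5) are available:

```python
import numpy as np

def flat_patch_selection(local_vars, lam, rho, max_step=100, tol=1e-6):
    """Fixed-point iteration suggested by Theorem 1 (after [39]).

    local_vars : per-patch local variances sigma_P^2 (Equation (5))
    lam, rho   : constants of Equations (9) and (8), precomputed for (r, delta)
    Returns (sigma2_hat, flat_mask): noise variance estimate and flat-patch mask.
    """
    sigma2_tilde = np.mean(local_vars)                 # initial guess
    mask = np.ones(local_vars.shape, dtype=bool)
    for _ in range(max_step):
        mask = local_vars <= lam * sigma2_tilde        # candidate flat patches
        if not np.any(mask):
            break
        new_tilde = np.mean(local_vars[mask])          # E(sigma_P^2 | sigma_P^2 <= lam * sigma_tilde^2)
        if abs(new_tilde - sigma2_tilde) < tol:
            sigma2_tilde = new_tilde
            break
        sigma2_tilde = new_tilde
    return sigma2_tilde / rho, mask                    # Equation (7): sigma_tilde^2 = rho * sigma^2
```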
Most of the eigenvalues of a noisy image fluctuate around the true noise variance rather than being exactly equal to it, which requires us to compute the standard deviation of the noise based on the statistical properties of the eigenvalues. The methods in [27,28] assumed that the smallest eigenvalue λ_min(ȳ) of the homogenous patches is an estimate of the noise variance σ_n². However, experimental results have shown that the estimated noise variance is consistently smaller than the true noise variance σ².
Due to the inherent information redundancy and correlation of natural images, the patches obtained from images are usually located in low-dimensional subspaces [41]. Let S = {λ_i}_{i=1}^r be the set of eigenvalues of the homogenous patch set ȳ, sorted so that λ_1 ≥ λ_2 ≥ … ≥ λ_r. We can represent S as S_1 ∪ S_2, where S_1 = {λ_i}_{i=1}^m denotes the eigenvalues of the principal dimension and S_2 = {λ_i}_{i=m+1}^r denotes the eigenvalues of the redundant dimension [15]. Since the eigenvalues of the redundant dimension {λ_i}_{i=m+1}^r follow a Gaussian distribution N(σ², 2σ⁴/s), the expected value of λ_i can be approximated as [42,43]:
E(\lambda_i) \approx \sigma^2 + \Phi^{-1}\!\left(\frac{r - i + 1 - \alpha}{r - m - 2\alpha + 1}\right) v,  (10)
where α = 0.375, Φ(x) denotes the cumulative distribution function of the standard Gaussian distribution, and v = \sqrt{2\sigma^4/s} is the standard deviation of the redundant-dimension eigenvalues. When the number of redundant dimensions r − m > 1, the expected minimum eigenvalue is intrinsically smaller than the actual noise level, E(λ_r) < σ². The result of Equation (3) therefore underestimates the noise variance, and the accuracy of the estimation results progressively decreases as the number of redundant dimensions increases.
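A quick numerical check of this underestimation effect can be made with a Blom-type order-statistic approximation (α = 0.375), modeling the redundant-dimension eigenvalues as i.i.d. N(σ², 2σ⁴/s); the exact indexing convention of Equation (10) aside, the expected minimum clearly falls below σ²:

```python
import numpy as np
from scipy.stats import norm

def expected_redundant_eigenvalues(sigma, s, n_redundant, alpha=0.375):
    """Approximate expected order statistics of n_redundant eigenvalues
    modeled as i.i.d. N(sigma^2, 2*sigma^4 / s)."""
    v = np.sqrt(2.0 * sigma ** 4 / s)                  # eigenvalue standard deviation
    ranks = np.arange(1, n_redundant + 1)              # ascending ranks
    probs = (ranks - alpha) / (n_redundant - 2 * alpha + 1)
    return sigma ** 2 + norm.ppf(probs) * v            # smallest expected value first

# Example: sigma = 20, s = 5000 patches, 10 redundant dimensions.
vals = expected_redundant_eigenvalues(20.0, 5000, 10)
print(vals[0], 20.0 ** 2)   # roughly 388 vs 400: the minimum sits below sigma^2
```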
The redundant dimension assumption also has limitations because highly textured images have extremely rich textures with large differences in pixel values. Therefore, detailed information tends to be incorrectly included in the noise signal, resulting in a minimum eigenvalue being greater than zero [44]. Figure 1 shows two images with different texture levels, “Bark” and “House”, and the results of the estimated noise standard deviation. “House” has a simpler structure, with most of the localized areas relatively smooth and having weak texture strength, and the minimum eigenvalue is slightly smaller than the actual noise level. “Bark” presents complex details with fine textures, including rich edges and ridged scenes, and the minimum eigenvalue is larger than the actual noise level, especially in low-noise conditions. Obviously, the noise estimation premised on redundancy assumptions will overestimate the noise level, which will seriously reduce the accuracy of the estimation results.

3. Proposed Method

3.1. Overall Texture Strength

The proposed noise estimation algorithm has to avoid the influence of the noisy image's own texture structure on the noise estimation results. We need to detect the overall texture strength of the homogenous patches and optimize the algorithm parameters accordingly. It is known that the local variance strongly characterizes the detailed information of the image. We therefore define the parameter Δ as a measure of the overall texture strength of the image, calculated as follows:
\Delta = \frac{\bar{\sigma}_P}{\lambda_r},  (11)
where σ̄_P is the average local variance of the patches and λ_r is the minimum eigenvalue of the covariance matrix. As shown in Figure 2, among the three noiseless images with different texture intensities, a smaller Δ indicates an image with a simple texture structure, and Δ gradually increases as the complexity of the structure increases. This metric will be introduced as an important variable in the statistical analysis of the eigenvalues.
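A direct sketch of Equation (11), assuming the vectorized homogenous patches from Section 2 are stored one patch per row:

```python
import numpy as np

def overall_texture_strength(patches):
    """Delta of Equation (11): mean local variance over the minimum eigenvalue
    of the patch covariance matrix; larger values indicate a more complex structure."""
    local_vars = np.var(patches, axis=1, ddof=1)       # per-patch local variances
    cov = np.cov(patches, rowvar=False)
    lam_r = np.linalg.eigvalsh(cov)[0]                 # minimum eigenvalue lambda_r
    return np.mean(local_vars) / lam_r
```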

3.2. Deletion of Outliers

The eigenvalues obtained from the set of homogenous patches consist of the eigenvalues of the redundant dimension and the upper outliers belonging to the main dimension. However, the number of redundant dimensions is unknown in noise estimation. We can estimate the number of redundant dimensions by removing from S the eigenvalues of the main dimension S_1, which represent the image information. We then use the statistical properties of the eigenvalues of the redundant dimension S_2 to compute the exact noise standard deviation.
The MAD is a robust measure of sample deviation in univariate numerical data and is often used to screen out outliers [45]. The method determines whether an item is an outlier by checking whether its deviation from the median value is within a reasonable range. Although this method is mainly applied in the wavelet domain, we also use it to remove the upper outliers of the set S by first calculating the MAD of the set S and then applying the outlier test to every element of the set. The calculation process is shown below:
MAD = \mathrm{median}\{\, |\lambda_i - \mathrm{median}\{S\}| \,\}, \quad i = 1, \ldots, r,  (12)
\lambda_i = \begin{cases} 0 & \text{if } \lambda_i > S_{median} + n \cdot MAD \\ \lambda_i & \text{if } \lambda_i \le S_{median} + n \cdot MAD \end{cases}, \quad i = 1, \ldots, r,  (13)
where median{·} is the median, S_median is the median of the set S, and n is the discriminant coefficient. If n is set as a constant, the accuracy and robustness of the algorithm will be poor. The discriminant coefficient n should instead be adaptively adjusted according to the overall texture strength Δ of the homogenous patches and the actual noise level σ. The optimal value of n is calculated as follows:
n = \frac{(2\sigma - \lambda_r)^2 - S_{median}}{MAD}.  (14)
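Before turning to how n is obtained in the blind setting, note that the screening of Equations (12) and (13) amounts to a simple one-sided MAD test. A minimal sketch (with the adaptive coefficient n supplied externally, e.g., by the regression model developed next) is:

```python
import numpy as np

def remove_upper_outliers(eigvals, n):
    """One-sided MAD screening of Equations (12) and (13).

    eigvals : eigenvalue set S of the homogenous-patch covariance matrix
    n       : discriminant coefficient (adaptive, see Equation (16))
    Returns the eigenvalues that survive the test.
    """
    s_median = np.median(eigvals)
    mad = np.median(np.abs(eigvals - s_median))        # Equation (12)
    keep = eigvals <= s_median + n * mad               # Equation (13)
    return eigvals[keep]
```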
However, σ is unknown in the blind noise estimation problem. Thus, we add AWGN to images with different overall texture strengths, calculate the parameters n, Δ, and σ of the synthesized images at different noise levels, and analyze the correlations between them. Figure 3 illustrates the computational results for 188 images from three image datasets: Textures, BSDS500-val [46], and Kodak. These datasets contain a variety of scenes and textures from the real world with different overall texture intensities, which helps analyze the mapping from Δ and σ to the coefficient n. Figure 3a–c shows that the discriminant coefficient n is in a steady state when Δ is large, but when Δ is less than 5, n has a larger range of variation, and there are many outliers in the scatter plot. This phenomenon is especially obvious at low noise levels. This is primarily because an image containing highly correlated textures and structures will be considered low rank and have a small Δ. At low noise levels, high-frequency signals and noise cannot be effectively distinguished, there is a large deviation between σ and λ_r, the value of MAD is small, and the result n obtained from Equation (14) may be an outlier, so the data need to be processed. As shown in Figure 3d–f, the range of values allowed for n is set to [−2.5, 2.5]; values outside this range are identified as outliers and replaced by the nearest endpoint values [47]. A nonlinear relationship between n and Δ can be found after this data correction. The nonlinear regression model developed from the scatterplot fitting can be expressed as:
n = a \exp(b \Delta) + c.  (15)
This nonlinear regression model is only one possible choice; other regression models could also serve this purpose. Figure 3d–f demonstrates that the parameters of the regression equation change somewhat at different noise levels. To explore the relationship between the parameters a, b, and c of this nonlinear regression model and the actual noise level, we applied nonlinear regression to n and Δ under different noise levels. The obtained model parameters are shown in Figure 4.
Parameters a and b are linearly and negatively correlated with the noise level, and both can be represented as linear functions of the noise standard deviation σ, while parameter c remains essentially unchanged across different noise levels. In the noise estimation process, we use λ_r to replace the unknown actual noise level σ. Then, n can be established as a function of Δ and λ_r by rewriting Equation (15) as:
n = (A_1 \lambda_r + B_1) \exp\!\left[(A_2 \lambda_r + B_2) \Delta\right] + c,  (16)
where A_1 and B_1 denote the slope and intercept of the fitted straight line for parameter a, and A_2 and B_2 denote the slope and intercept of the fitted straight line for parameter b, respectively. Details of the calculation of the parameters of this mapping function are given in Section 4.
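To illustrate how the parameters of Equation (16) could be fitted, the sketch below uses SciPy's curve_fit on placeholder (Δ, λ_r, n) samples; in practice the samples come from Equation (14) applied to synthetic noisy images, with n clipped to [−2.5, 2.5] as described above. All sample values, bounds, and starting parameters here are made up purely so the example runs end to end:

```python
import numpy as np
from scipy.optimize import curve_fit

def discriminant_model(X, A1, B1, A2, B2, c):
    """Equation (16): n = (A1*lam_r + B1) * exp((A2*lam_r + B2) * Delta) + c."""
    delta, lam_r = X
    return (A1 * lam_r + B1) * np.exp((A2 * lam_r + B2) * delta) + c

# Placeholder samples generated from made-up parameters for illustration only;
# real samples are the corrected (Delta, lambda_r, n) triples of Section 3.2.
rng = np.random.default_rng(0)
delta_samples = rng.uniform(3.0, 9.0, 200)
lam_r_samples = rng.uniform(10.0, 60.0, 200)
n_samples = discriminant_model((delta_samples, lam_r_samples), -0.01, 1.2, -0.05, 0.1, 1.8)
n_samples = n_samples + rng.normal(0.0, 0.05, n_samples.size)

params, _ = curve_fit(discriminant_model, (delta_samples, lam_r_samples), n_samples,
                      p0=(-0.01, 1.0, -0.05, 0.0, 1.5),
                      bounds=([-1, 0, -1, -1, 0], [0, 5, 0, 1, 5]))
A1, B1, A2, B2, c = params
```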

3.3. Noise Level Estimation

After removing out-of-range outliers using the dynamic threshold, all nonzero elements remaining in the set S satisfy a Gaussian distribution with a mean value of σ². Finally, the mean value of these remaining elements is used as the final estimate of the noise level. If the set S is empty, it is concluded that λ_r > σ, and λ_r is used as the noise estimation result. The formula is expressed as follows:
\sigma_n = \begin{cases} \mathrm{mean}\{S\} & \text{if } S \ne \varnothing \\ \lambda_r & \text{if } S = \varnothing \end{cases},  (17)
where mean{·} denotes the mean value. Using the mean of the eigenvalues to estimate the standard deviation of the noise reduces the error caused by relying on a single eigenvalue. Based on the previous analysis, the proposed noise level estimation algorithm is summarized in Algorithm 1.
Algorithm 1 Proposed Method
Input: Noisy image I
Output: Noise level σ_n
Step:
1: σ_n^(0) ← Divide the noisy image I into patches of size d² and compute an initial noise level estimate.
2: for k = 1 to K do
3:   ξ(n) ← Calculate the texture strength of each patch in y (the trace of its gradient covariance matrix).
4:   τ_k ← Calculate the threshold by (4).
5:   σ_n^(k) ← Select the weak-textured patches.
6: end for
7: ȳ, Δ ← Select the homogenous patch subset ȳ using the iterative method described in [39] and compute its overall texture strength Δ.
8: S ← Compute the eigenvalues of ȳ.
9: n ← Calculate the discriminant coefficient by (16).
10: σ_n ← Remove the outliers in S and compute the final estimate by (17).
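For orientation, a high-level Python sketch of Algorithm 1 is given below. It reuses the helper functions sketched earlier (extract_patches, pca_min_eigenvalue, overall_texture_strength, remove_upper_outliers), abbreviates the homogenous patch selection of [27,39] to a callable placeholder, and assumes that λ_r enters Equations (14) and (16) on the standard deviation scale; it is an illustrative sketch, not the authors' reference implementation:

```python
import numpy as np

def estimate_noise_level(image, d, select_homogenous, A1, B1, A2, B2, c):
    """High-level sketch of Algorithm 1 under the stated assumptions.

    select_homogenous : callable implementing the weak-textured / flat patch
                        selection of [27,39]; maps the patch matrix to the
                        homogenous subset.
    """
    patches = extract_patches(image, d)                     # step 1
    homog = select_homogenous(patches)                      # steps 2-7 (selection delegated)
    eigvals, lam_min = pca_min_eigenvalue(homog)            # step 8
    lam_r = np.sqrt(max(lam_min, 0.0))                      # assumption: lambda_r on the sigma scale
    delta = overall_texture_strength(homog)                 # Equation (11)
    n = (A1 * lam_r + B1) * np.exp((A2 * lam_r + B2) * delta) + c   # Equation (16), step 9
    survivors = remove_upper_outliers(eigvals, n)           # Equations (12)-(13), step 10
    if survivors.size > 0:
        return np.sqrt(np.mean(survivors))                  # Equation (17), variance -> std
    return lam_r
```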

4. Experimental Results

In this section, the performance of the proposed algorithm on synthetic noisy images is evaluated through experimental tests. We use the following two classical image datasets, which are commonly used to evaluate image noise estimation algorithms: the BSDS500-test dataset and the Textures dataset. The BSDS500-test dataset contains 200 diverse real-world scenes (e.g., transportation, architecture, landscape, nature, people, flora, and fauna); Figure 5 shows some natural images from the BSDS500 dataset.
The Textures dataset contains 64 texture images, including surface textures of various materials that differ in features such as size, shape, and color; several texture images are shown in Figure 6.
In our experiments, we added AWGN with standard deviation σ from 10 to 100 in steps of 10 to the test images to generate synthetic noisy images, and we ran a total of 100 simulations for each image and noise level.
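For reference, the synthetic noisy images can be generated as follows (a minimal sketch; whether the result is clipped back to the valid intensity range depends on the evaluation protocol and is not assumed here):

```python
import numpy as np

def add_awgn(image, sigma, seed=None):
    """Add zero-mean Gaussian noise with standard deviation sigma to a float image."""
    rng = np.random.default_rng(seed)
    return image.astype(np.float64) + rng.normal(0.0, sigma, image.shape)

# One synthetic image per noise level, as in the protocol above:
# noisy_images = {s: add_awgn(clean, s) for s in range(10, 101, 10)}
```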
We selected five of the most commonly used and state-of-the-art noise estimation algorithms [14,27,28,29,30] for performance comparison with the proposed algorithm. The source codes of these methods were downloaded from the corresponding authors' websites. To ensure the authenticity and reliability of the comparison results, the default parameters provided by the authors were used for all five algorithms. In the comparison experiments on color images, the final noise level estimates were obtained by averaging the noise level estimates of the different color channels. All algorithms were implemented on a MATLAB 2021a platform (8 GB RAM, Intel(®) Core(™) i5-6500 CPU, 3.20 GHz processor (Intel, Santa Clara, CA, USA)).

4.1. Evaluation Metrics

In this study, we use the following three commonly used evaluation metrics to evaluate the performance of the proposed algorithm: bias (Bias), standard deviation (Std), and root mean square error (RMSE), which reflect the accuracy, robustness, and overall performance of the algorithm, respectively. The calculation formulas are as follows [29,40]:
\mathrm{Bias}(\hat{\sigma}) = E\left| \sigma - E(\hat{\sigma}) \right|,  (18)
\mathrm{Std}(\hat{\sigma}) = \sqrt{E\left[ \hat{\sigma} - E(\hat{\sigma}) \right]^2},  (19)
\mathrm{RMSE}(\hat{\sigma}) = \sqrt{\mathrm{Bias}^2(\hat{\sigma}) + \mathrm{Std}^2(\hat{\sigma})},  (20)
where σ̂ is the output of the noise estimation algorithm. Smaller values of all three evaluation metrics indicate better performance of the noise estimation algorithm.
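These metrics translate directly into code. The sketch below evaluates them for a single image and noise level over repeated trials; dataset-level figures such as those in Tables 2–7 would average these quantities over all test images:

```python
import numpy as np

def bias_std_rmse(estimates, sigma_true):
    """Bias, Std, and RMSE of Equations (18)-(20) for one image and noise level.

    estimates  : noise level estimates over repeated trials (e.g., 100 runs)
    sigma_true : standard deviation of the injected AWGN
    """
    estimates = np.asarray(estimates, dtype=np.float64)
    mean_est = np.mean(estimates)
    bias = abs(sigma_true - mean_est)
    std = np.sqrt(np.mean((estimates - mean_est) ** 2))
    rmse = np.sqrt(bias ** 2 + std ** 2)
    return bias, std, rmse
```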

4.2. Parameter Configuration

Before validating the proposed algorithm, the parameters in Equation (16) were obtained by sampling the image datasets. We calculated the values of the parameters n, Δ, and λ_r of the synthetic images at various noise levels by adding known noise levels to the reference images in the three image datasets, as described in Section 3.2. The process was repeated for a total of 100 experimental simulations, and the parameters in Equation (16) were calculated using multivariate nonlinear regression to obtain robust nonlinear regression model parameters. The results are presented in Table 1.
Using the parameters obtained from the image dataset ensures the accuracy of the redundant dimension prediction and improves the accuracy and robustness of the noise estimation results. The images used to obtain the parameters of the regression model and the images used for validation are mostly mutually exclusive.
In the proposed algorithm, there are four fixed parameters in the patch selection process. The significance level δ and the number of iterations K were used directly with the best parameters chosen in [27]. Two other important parameters of the algorithm, the significance level δ and the patch size d, need to be set. We first selected a range of parameters based on the default parameters of the other comparison algorithms and determined the optimal parameters by testing the proposed algorithm using a trial-and-error method. The parameter values for d ranged from 4 to 9, and the parameter values for δ ranged from 0.7 to 0.9. We introduced these parameters into the proposed algorithm and applied it to the BSDS500-train dataset. The simulations were performed according to the experimental methodology described in Section 4, and the evaluation metrics were used to verify that the parameter settings were optimal.
The experimental results are shown in Figure 7, which clearly shows that the performance of the proposed algorithm is extremely similar when δ is set to 0.9 and d is set to 4 or 5. The accuracy and robustness are better under these parameters than under other parameter combinations. The image characteristics, noise strength, and application-specific requirements should also be accounted for when choosing the appropriate patch size d and significance level δ. Because smaller patches are more sensitive to structure, the estimates may be high and unstable under low noise levels and large structural differences, whereas larger patches contain more pixel information for statistical analysis, resulting in higher accuracy and reliability. Therefore, d is set to 5 and δ to 0.9 as the default parameters. The initial parameters of the proposed algorithm are listed in Table 1; this set of parameters was used for all experiments in this section.

4.3. Performance Comparison

To test the robustness of the algorithm to scene transformations, we conducted experiments on the BSDS500-test dataset, which has high complexity and realism, and Table 2, Table 3 and Table 4 illustrate the results of the accuracy, reliability, and overall performance comparisons.
The proposed algorithm is clearly superior to the other methods in all evaluation metrics and maintains a stable performance level in the selected noise range of 10 to 100. The proposed model achieves the best accuracy, robustness, and overall performance when compared with other algorithms. The results also show that our algorithm is not limited to a single situation and can effectively respond to the needs of different scenarios.
We further evaluated the performance of the algorithm on high-frequency images from the challenging Textures dataset, where most of the test images contain a large range of fine structures. As can be seen from Table 5, Table 6 and Table 7, when the noise standard deviation is 10, the Bias of [27] slightly outperforms that of the proposed algorithm, but the difference is not significant. The Std and RMSE of [28] are also slightly lower than those of the proposed algorithm under low noise level conditions. However, as the noise level increases, in the noise level range of 40–100, our algorithm gradually establishes a significant performance advantage, and every performance indicator is ahead of the comparison methods.
The experimental results in this subsection show that our algorithm mitigates the underestimation in traditional noise estimation algorithms for images with the presence of homogenous regions, and effectively avoids the loss of image data information and the aggravation of overestimation due to redundancy assumptions in low-noise conditions for highly textured images.

4.4. Computational Complexity

For an image, when considering the computational complexity of Algorithm 1, we must focus on the calculation of the sample covariance matrix, which is applied to calculate the initial noise standard deviation (Equation (3)), and on two iterative computational procedures: the weak-textured patch search and the flat patch search. Algorithm 1 requires O(d² × M × N) operations to generate the overlapping patches, O(d² × M × N) to calculate the eigenvalues of the covariance matrix, O(d × M × N) to search for the weak-textured patches, and again O(d² × M × N) to calculate the eigenvalues of the covariance matrix of the selected patches. According to the explanation in [39], the total computational complexity of searching for the flat patches is O(Max_step × M² × N²); obviously, this is much larger than that of the weak-textured patch search. Calculating the overall texture strength costs O(d² × M × N), calculating the redundant dimensions costs O(d³), and calculating the final estimate of the standard deviation of the actual noise level costs O(d²).
The most expensive part of Algorithm 1 is the flat patch search, which is much more complex than the other steps; thus, the algorithm has an overall computational complexity of O(Max_step × M² × N²) (in this study, Max_step = 100 and d = 5).
We also evaluate the computational complexity by comparing the running time in seconds of the proposed algorithm with that of other noise estimation algorithms. The average execution time of various algorithms on a 512 × 512 image is shown in Table 8.
The proposed algorithm possesses a faster running speed than [14,28,29]. Although refs. [27,30] are faster than the proposed algorithm, they perform much worse in the experiments in Section 4.3.

5. Conclusions

In this study, we propose a noise estimation algorithm to estimate the true noise variance from a single noisy image with complex texture and strong noise. Using the PCA noise estimation algorithm based on homogenous patches as a foundation, we discuss the relationship between the eigenvalues, noise strength, and scene complexity and analyze the reasons for underestimation and overestimation. We optimize the outlier removal process by introducing the median absolute deviation. According to the data sample statistics, a nonlinear regression model of the discriminant coefficient is fitted to improve the accuracy of the estimation results, which makes the algorithm more robust.
To validate the performance of the proposed algorithm, we compare it with five state-of-the-art algorithms under different types of datasets and different noise levels and further validate it in denoising applications. The experimental results show that the accuracy and robustness of the proposed algorithm are the highest in most scenarios over a wide range of noise levels. In some scenarios, the algorithm also runs faster while achieving the same overall performance. In addition, the performance of the algorithm is extremely stable on different datasets, having better results for both high- and low-frequency images.
For the algorithm proposed in this study, it will be necessary to build a more comprehensive noisy image dataset in the future, including more types of scenes and different levels of noise, for training and extending the algorithm. We will also explore different feature extraction methods, such as the wavelet transform and quantum mechanics-based representations, to improve the robustness and performance of the algorithm. The current nonlinear regression model and its parameters are not necessarily the optimal solution; the optimal model parameter configuration can be selected by cross-validation, optimization algorithms, and other methods to further improve the algorithm and apply it to a wide range of fields, such as remote sensing images [48] and channel noise [49].
In this study, the noise model of the images is assumed to be a zero-mean additive Gaussian distribution, but in real-world applications, the noise may be more complex. In addition to the common Gaussian noise model, other types of noise need to be considered, such as salt-and-pepper noise and Poisson noise, and understanding the nature and generation mechanism of these noise models is crucial for extending the algorithm. Next, we will also look into the possibility of extending the proposed algorithm to multiplicative noise evaluation, and we will aim to improve the noise parameter estimation algorithm by using more robust methods such as [50]. According to the characteristics of different noise types, we can use targeted preprocessing methods and select suitable features to extract the noise information in the image. To adapt to the modeling and estimation of different types of noise, we need to adjust the model parameters in the algorithm accordingly or explore the design of more flexible model structures.

Author Contributions

Conceptualization, H.X., Z.Y. and S.Y.; methodology, H.X. and S.Y.; validation, H.X. and S.Y.; formal analysis, H.X. and S.Y.; investigation, H.X. and Z.Y.; writing—original draft preparation, H.X.; writing—review and editing, H.X., Z.Y. and S.Y.; visualization, H.X.; funding acquisition, Z.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the project ‘Intelligent Sampling Technology of China’ (XQ201828).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used in this paper are publicly available [46] or can be obtained on the website: https://sipi.usc.edu/database/database.php?volume=textures (accessed on 13 August 2023), https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds (accessed on 13 August 2023), https://r0k.us/graphics/kodak/ (accessed on 13 August 2023).

Acknowledgments

We would like to express our thanks to the anonymous reviewers for their valuable and insightful suggestions, which helped us to improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Feng, L.; Wang, J. Research on Image Denoising Algorithm Based on Improved Wavelet Threshold and Non-local Mean Filtering. In Proceedings of the 2021 IEEE 6th International Conference on Signal and Image Processing (ICSIP), Nanjing, China, 22–24 October 2021; pp. 493–497. [Google Scholar]
  2. Jia, H.; Yin, Q.; Lu, M. Blind-noise image denoising with block-matching domain transformation filtering and improved guided filtering. Sci. Rep. 2022, 12, 16195. [Google Scholar] [CrossRef] [PubMed]
  3. Yao, H.; Zou, M.; Qin, C.; Zhang, X. Signal-Dependent Noise Estimation for a Real-Camera Model via Weight and Shape Constraints. IEEE Trans. Multimed. 2022, 24, 640–654. [Google Scholar] [CrossRef]
  4. Yuan, Z.; Chen, T.; Xing, X.; Peng, W.; Chen, L. BM3D Denoising for a Cluster-Analysis-Based Multibaseline InSAR Phase-Unwrapping Method. Remote Sens. 2022, 14, 1836. [Google Scholar] [CrossRef]
  5. Abubakar, A.; Zhao, X.; Li, S.; Takruri, M.; Bastaki, E.; Bermak, A. A Block-Matching and 3-D Filtering Algorithm for Gaussian Noise in DoFP Polarization Images. IEEE Sens. J. 2018, 18, 7429–7435. [Google Scholar] [CrossRef]
  6. Zhang, X.F.; Lin, W.S.; Xiong, R.Q.; Liu, X.M.; Ma, S.W.; Gao, W. Low-Rank Decomposition-Based Restoration of Compressed Images via Adaptive Noise Estimation. IEEE Trans. Image Process. 2016, 25, 4158–4171. [Google Scholar] [CrossRef] [PubMed]
  7. Goto, T.; Kato, Y.; Hirano, S.; Sakurai, M.; Nguyen, T.Q. Compression Artifact Reduction based on Total Variation Regularization Method for MPEG-2. IEEE Trans. Consum. Electron. 2011, 57, 253–259. [Google Scholar] [CrossRef]
  8. Sun, W.H.; He, X.H.; Chen, H.G.; Xiong, S.H.; Xu, Y.F. A nonlocal HEVC in-loop filter using CNN-based compression noise estimation. Appl. Intell. 2022, 52, 17810–17828. [Google Scholar] [CrossRef]
  9. Wang, C.; Zhou, M.C.; Pedrycz, W.; Li, Z.W. Comparative Study on Noise-Estimation-Based Fuzzy C-Means Clustering for Image Segmentation. IEEE Trans. Cybern. 2022; early access. [Google Scholar] [CrossRef]
  10. Thai, T.H.; Retraint, F.; Cogranne, R. Generalized signal-dependent noise model and parameter estimation for natural images. Signal Process. 2015, 114, 164–170. [Google Scholar] [CrossRef]
  11. Liu, X.H.; Tanaka, M.; Okutomi, M. Practical Signal-Dependent Noise Parameter Estimation from a Single Noisy Image. IEEE Trans. Image Process. 2014, 23, 4361–4371. [Google Scholar] [CrossRef]
  12. Ghazal, M.; Amer, A. Homogeneity Localization Using Particle Filters with Application to Noise Estimation. IEEE Trans. Image Process. 2011, 20, 1788–1796. [Google Scholar] [CrossRef] [PubMed]
  13. Dong, L.; Zhou, J.; Tang, Y.Y. Effective and Fast Estimation for Image Sensor Noise Via Constrained Weighted Least Squares. IEEE Trans Image Process 2018, 27, 2715–2730. [Google Scholar] [CrossRef] [PubMed]
  14. Zoran, D.; Weiss, Y. Scale invariance and noise in natural images. In Proceedings of the IEEE International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009. [Google Scholar]
  15. Chen, G.; Zhu, F.; Heng, P.A. An Efficient Statistical Method for Image Noise Level Estimation. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 477–485. [Google Scholar]
  16. Dhaka, A.; Nandal, A.; Rosales, H.G.; Malik, H.; Monteagudo, F.E.L.; Martinez-Acuna, M.I.; Singh, S. Likelihood Estimation and Wavelet Transformation Based Optimization for Minimization of Noisy Pixels. IEEE Access 2021, 9, 132168–132190. [Google Scholar] [CrossRef]
  17. Tang, C.W.; Yang, X.K.; Zhai, G.T. Noise Estimation of Natural Images via Statistical Analysis and Noise Injection. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1283–1294. [Google Scholar] [CrossRef]
  18. Pimpalkhute, V.A.; Page, R.; Kothari, A.; Bhurchandi, K.M.; Kamble, V.M. Digital Image Noise Estimation Using DWT Coefficients. IEEE Trans. Image Process. 2021, 30, 1962–1972. [Google Scholar] [CrossRef] [PubMed]
  19. Ma, B.; Yao, J.C.; Le, Y.F.; Qin, C.; Yao, H. Efficient image noise estimation based on skewness invariance and adaptive noise injection. IET Image Process. 2020, 14, 1393–1401. [Google Scholar] [CrossRef]
  20. Liu, W.; Lin, W.S. Additive White Gaussian Noise Level Estimation in SVD Domain for Images. IEEE Trans. Image Process. 2013, 22, 872–883. [Google Scholar] [CrossRef]
  21. Khalil, H.H.; Rahmat, R.O.K.; Mahmoud, W.A. Chapter 15: Estimation of Noise in Gray-Scale and Colored Images Using Median Absolute Deviation (MAD). In Proceedings of the 2008 3rd International Conference on Geometric Modeling and Imaging, London, UK, 9–11 July 2008; pp. 92–97. [Google Scholar]
  22. Hashemi, M.; Beheshti, S. Adaptive Noise Variance Estimation in BayesShrink. IEEE Signal Process. Lett. 2010, 17, 12–15. [Google Scholar] [CrossRef]
  23. Robertson, M.A.; Stevenson, R.L. DCT quantization noise in compressed images. IEEE Trans. Circuits Syst. Video Technol. 2005, 15, 27–38. [Google Scholar] [CrossRef]
  24. Immerkaer, J. Fast Noise Variance Estimation. Comput. Vis. Image Underst. 1996, 64, 300–302. [Google Scholar] [CrossRef]
  25. Laligant, O.; Truchetet, F.; Fauvet, E. Noise Estimation from Digital Step-Model Signal. IEEE Trans. Image Process. 2013, 22, 5158–5167. [Google Scholar] [CrossRef] [PubMed]
  26. Tai, S.C.; Yang, S.-M. A fast method for image noise estimation using Laplacian operator and adaptive edge detection. In Proceedings of the 2008 3rd International Symposium on Communications, Control and Signal Processing, St Julians, Malta, 12–14 March 2008; pp. 1077–1081. [Google Scholar]
  27. Liu, X.; Tanaka, M.; Okutomi, M. Single-image noise level estimation for blind denoising. IEEE Trans Image Process 2013, 22, 5226–5237. [Google Scholar] [CrossRef] [PubMed]
  28. Pyatykh, S.; Hesser, J.; Zheng, L. Image noise level estimation by principal component analysis. IEEE Trans Image Process 2013, 22, 687–699. [Google Scholar] [CrossRef] [PubMed]
  29. Fang, Z.; Yi, X. A novel natural image noise level estimation based on flat patches and local statistics. Multimed. Tools Appl. 2019, 78, 17337–17358. [Google Scholar] [CrossRef]
  30. Hou, Y.; Xu, J.; Liu, M.; Liu, G.; Liu, L.; Zhu, F.; Shao, L. NLH: A Blind Pixel-Level Non-Local Method for Real-World Image Denoising. IEEE Trans. Image Process. 2020, 29, 5121–5135. [Google Scholar] [CrossRef]
  31. Jiang, P.; Wang, Q.; Wu, J. Efficient Noise Level Estimation Based on Principal Image Texture. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 1987–1999. [Google Scholar] [CrossRef]
  32. Dutta, S.; Basarab, A.; Georgeot, B.; Kouame, D. Quantum Mechanics-Based Signal and Image Representation: Application to Denoising. IEEE Open J. Signal Process. 2021, 2, 190–206. [Google Scholar] [CrossRef]
  33. Dutta, S.; Basarab, A.; Georgeot, B.; Kouamé, D. A Novel Image Denoising Algorithm Using Concepts of Quantum Many-Body Theory. Signal Process. 2022, 201, 13. [Google Scholar] [CrossRef]
  34. Mohan, S.B.; Raghavendiran, T.A.; Rajavel, R. Patch based fast noise level estimation using DCT and standard deviation. Clust. Comput. 2019, 22, 14495–14504. [Google Scholar] [CrossRef]
  35. Kowalski, J.P.; Mikolajczak, G.; Pęksiński, J. Noise Variance Estimation in Digital Images using Finite Differences Filter. In Proceedings of the 2018 41st International Conference on Telecommunications and Signal Processing (TSP), Athens, Greece, 4–6 July 2018; pp. 1–4. [Google Scholar]
  36. Xiao, J.; Tian, H.; Zhang, Y.; Zhou, Y.; Lei, J. Blind video denoising via texture-aware noise estimation. Comput. Vis. Image Underst. 2018, 169, 1–13. [Google Scholar] [CrossRef]
  37. Amer, A.; Dubois, E. Fast and reliable structure-oriented video noise estimation. IEEE Trans. Circuits Syst. Video Technol. 2005, 15, 113–118. [Google Scholar] [CrossRef]
  38. Zhu, X.; Milanfar, P. Automatic parameter selection for denoising algorithms using a no-reference measure of image content. IEEE Trans Image Process 2010, 19, 3116–3132. [Google Scholar] [CrossRef] [PubMed]
  39. Jiang, P.; Zhang, J.-z. Fast and reliable noise level estimation based on local statistic. Pattern Recognit. Lett. 2016, 78, 8–13. [Google Scholar] [CrossRef]
  40. Li, Y.; Liu, C.; You, X.; Liu, J. A Single-Image Noise Estimation Algorithm Based on Pixel-Level Low-Rank Low-Texture Patch and Principal Component Analysis. Sensors 2022, 22, 8899. [Google Scholar] [CrossRef] [PubMed]
  41. Zhao, W.; Lv, Y.; Liu, Q.; Qin, B. Detail-Preserving Image Denoising via Adaptive Clustering and Progressive PCA Thresholding. IEEE Access 2018, 6, 6303–6315. [Google Scholar] [CrossRef]
  42. Royston, J.P. Expected Normal Order Statistics (Exact and Approximate). J. R. Stat. Soc. Ser. C Appl. Stat. 1982, 31, 161–165. [Google Scholar] [CrossRef]
  43. Shapiro, S.; Wilk, M. An Analysis of Variance Test for Normality. Biometrica 1965, 52, 591. [Google Scholar] [CrossRef]
  44. Khmag, A.; Ramli, A.R.; Al-haddad, S.A.R.; Kamarudin, N. Natural image noise level estimation based on local statistics for blind noise reduction. Vis. Comput. 2017, 34, 575–587. [Google Scholar] [CrossRef]
  45. Li, Y.S.; Li, Z.Z.; Wei, K.; Xiong, W.Q.; Yu, J.P.; Qi, B. Noise Estimation for Image Sensor Based on Local Entropy and Median Absolute Deviation. Sensors 2019, 19, 339. [Google Scholar] [CrossRef]
  46. Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 898–916. [Google Scholar] [CrossRef]
  47. Hsu, P.P.; Kang, S.A.; Rameseder, J.; Zhang, Y.; Ottina, K.A.; Lim, D.; Peterson, T.R.; Choi, Y.; Gray, N.S.; Yaffe, M.B.; et al. The mTOR-regulated phosphoproteome reveals a mechanism of mTORC1-mediated inhibition of growth factor signaling. Science 2011, 332, 1317–1322. [Google Scholar] [CrossRef] [PubMed]
  48. Li, B.B.; Zhou, Y.; Xie, D.H.; Zheng, L.J.; Wu, Y.; Yue, J.B.; Jiang, S.W. Stripe Noise Detection of High-Resolution Remote Sensing Images Using Deep Learning Method. Remote Sens. 2022, 14, 873. [Google Scholar] [CrossRef]
  49. Onyema, E.M.; Kumar, M.A.; Balasubaramanian, S.; Bharany, S.; Rehman, A.U.; Eldin, E.T.; Shafiq, M. A Security Policy Protocol for Detection and Prevention of Internet Control Message Protocol Attacks in Software Defined Networks. Sustainability 2022, 14, 11950. [Google Scholar] [CrossRef]
  50. Colom, M.; Buades, A. Analysis and Extension of the PCA Method, Estimating a Noise Curve from a Single Image. Image Process. Line 2016, 6, 365–390. [Google Scholar] [CrossRef]
Figure 1. Two images with different texture strengths. (a) "House", (b) "Bark", and (c) noise level estimation curves for the reference images.
Figure 2. Overall texture strength for three different noise-free mosaic images. The weak-textured image has smaller values. (a) Δ = 3.7067, (b) Δ = 5.2795, (c) Δ = 8.2543.
Figure 3. Results of Δ and n for 64 images from the Textures dataset under three noise levels. The blue solid line is the minimum allowable value of n, and the blue dashed line is the maximum allowable value of n. (a–c) Experimental results under different noise levels (σ = 10, σ = 60, and σ = 90). (d–f) Corrected experimental results (σ = 10, σ = 60, and σ = 90) and their fitted curves.
Figure 4. Parameters in the nonlinear regression model of n and Δ at different noise levels.
Figure 5. Some natural images from the BSDS500 database.
Figure 6. Several images from the Textures database.
Figure 7. Results of the proposed algorithm with various values of parameters δ and d for the BSDS500-train dataset: (a) Bias, (b) Std, and (c) RMSE.
Table 1. Initial parameters of the algorithm.

A1        B1       A2        B2       c        d    δ      K    δ (from [27])
−0.006    2.267    −0.018    1.921    1.785    5    0.9    3    1 − 10⁻⁶
Table 2. The Bias results of different AWGN estimation algorithms on the BSDS500-Test Dataset. The best results are highlighted in bold.

σ         [14]    [27]    [28]    [29]    [30]    Proposed
10        0.25    0.16    0.45    0.05    0.80    0.03
20        0.29    0.31    0.89    0.10    0.71    0.03
30        0.47    0.47    1.35    0.14    0.62    0.05
40        0.70    0.63    1.80    0.20    0.65    0.06
50        1.07    0.78    2.25    0.24    0.75    0.08
60        1.78    0.95    2.66    0.29    0.70    0.10
70        0.93    1.10    3.14    0.33    1.13    0.12
80        0.82    1.26    3.60    0.38    1.04    0.15
90        1.28    1.41    4.05    0.44    0.94    0.26
100       1.70    1.58    4.49    0.47    0.88    0.43
Average   0.93    0.86    2.47    0.26    0.82    0.13
Table 3. The Std results of different AWGN estimation algorithms on the BSDS500-Test Dataset. The best results are highlighted in bold.

σ         [14]    [27]    [28]    [29]    [30]    Proposed
10        0.57    0.03    0.09    0.04    0.99    0.02
20        0.63    0.06    0.18    0.08    0.85    0.04
30        0.87    0.09    0.28    0.11    0.81    0.06
40        1.18    0.12    0.39    0.15    0.80    0.07
50        2.57    0.14    0.46    0.18    0.82    0.09
60        2.98    0.18    0.56    0.23    0.81    0.12
70        1.41    0.20    0.63    0.25    1.01    0.13
80        1.21    0.24    0.75    0.29    1.03    0.15
90        1.90    0.28    0.84    0.33    1.03    0.18
100       2.89    0.31    0.92    0.37    1.04    0.19
Average   1.62    0.17    0.51    0.20    0.92    0.10
Table 4. The RMSE results of different AWGN estimation algorithms on the BSDS500-Test Dataset. The best results are highlighted in bold.

σ         [14]    [27]    [28]    [29]    [30]    Proposed
10        0.62    0.16    0.46    0.06    1.27    0.04
20        0.69    0.32    0.91    0.12    1.11    0.05
30        0.99    0.48    1.38    0.18    1.02    0.08
40        1.37    0.64    1.84    0.25    1.03    0.09
50        2.78    0.79    2.30    0.30    1.11    0.12
60        3.48    0.96    2.71    0.37    1.07    0.16
70        1.69    1.11    3.20    0.42    1.52    0.18
80        1.46    1.28    3.68    0.48    1.46    0.21
90        2.29    1.43    4.14    0.55    1.39    0.32
100       3.36    1.61    4.58    0.60    1.37    0.47
Average   1.87    0.88    2.52    0.33    1.23    0.17
Table 5. The Bias results of different AWGN estimation algorithms on the Textures Dataset. The best results are highlighted in bold.

σ         [14]    [27]    [28]    [29]    [30]    Proposed
10        3.09    1.20    1.40    2.15    7.66    1.24
20        2.68    1.11    0.78    1.38    7.26    0.88
30        2.94    0.71    0.62    1.00    6.83    0.62
40        3.29    0.53    0.60    0.80    6.62    0.52
50        3.60    0.50    0.86    0.68    7.08    0.47
60        3.97    0.47    1.07    0.57    6.76    0.41
70        4.18    0.49    1.47    0.54    6.58    0.36
80        4.71    0.52    1.69    0.50    6.11    0.35
90        5.02    0.57    1.99    0.46    5.58    0.33
100       5.09    0.63    2.38    0.46    5.28    0.37
Average   3.86    0.67    1.28    0.85    6.57    0.55
Table 6. The Std results of different AWGN estimation algorithms on the Textures Dataset. The best results are highlighted in bold.

σ         [14]    [27]    [28]    [29]    [30]    Proposed
10        5.18    1.62    1.42    2.93    7.61    1.52
20        3.76    2.68    0.87    1.71    7.39    1.10
30        3.84    1.58    0.71    1.36    7.60    0.96
40        4.01    1.02    0.82    1.09    7.82    0.79
50        4.34    0.89    1.10    0.97    7.90    0.70
60        4.76    0.68    1.26    0.78    7.75    0.60
70        5.13    0.69    1.60    0.80    7.25    0.52
80        5.75    0.66    1.74    0.74    6.96    0.46
90        6.56    0.64    1.77    0.70    6.61    0.42
100       6.71    0.62    1.96    0.65    6.43    0.55
Average   5.00    1.11    1.33    1.17    7.33    0.76
Table 7. The RMSE results of different AWGN estimation algorithms on the Textures Dataset. The best results are highlighted in bold.

σ         [14]    [27]    [28]    [29]    [30]     Proposed
10        6.04    2.01    1.99    3.63    10.80    1.97
20        4.61    2.90    1.17    2.19    10.36    1.41
30        4.83    1.74    0.94    1.69    10.22    1.14
40        5.19    1.15    0.99    1.36    10.24    0.97
50        5.64    1.02    1.40    1.19    10.61    0.85
60        6.20    0.82    1.66    0.96    10.29    0.73
70        6.62    0.85    2.17    0.96    9.79     0.64
80        7.43    0.83    2.43    0.89    9.26     0.58
90        8.26    0.85    2.66    0.84    8.65     0.54
100       8.42    0.88    3.08    0.79    8.32     0.66
Average   6.32    1.31    1.85    1.45    9.85     0.95
Table 8. Average Running Time of Different Noise Estimation Algorithms for 512 × 512 Images.

Method      [14]    [27]    [28]    [29]    [30]    Proposed
Time (s)    0.72    0.48    0.62    1.24    0.17    0.54

