1. Introduction
Adaptive filters have found widespread use in channel estimation, equalization, beamforming, active noise control, and inverse modeling [1,2,3,4]. The Normalized Least Mean Square (NLMS) algorithm, in particular, has been widely adopted owing to its simplicity and stable convergence behavior; it continues to appear in recent work, for example, in fault diagnosis applications [5], power systems [6,7], and beamforming control [8]. It is well known, however, that the performance of NLMS deteriorates when the input signal is contaminated by noise [9], since the noise introduces a bias into the weight estimate that slows convergence and increases the steady-state error [10].
This bias has been recognized as a fundamental challenge since the early development of adaptive filtering [11,12]. While the steady-state error caused by output noise decreases as the step size is reduced, the bias induced by input noise distorts the input correlation matrix and persists regardless of the step size [13]. Indeed, such input noise is frequently encountered in a variety of practical signal processing systems. In acoustic echo cancellation, the far-end signal picked up by a microphone is inevitably corrupted by background noise before entering the adaptive filter [14,15,16]. A similar problem appears in wireless channel estimation, where thermal noise and interference corrupt the received pilot signal, so that the adaptive filter observes a noisy copy of the true transmitted sequence [10,17]. Active noise control (ANC) systems face the same difficulty: the reference microphone captures not only the target noise source but also sensor self-noise and secondary-path disturbances [18,19]. Furthermore, in modern applications such as vehicle interior noise reduction and industrial vibration control, standard Filtered-x NLMS (FxNLMS) algorithms often suffer from significantly degraded noise attenuation when reference signals are contaminated [20,21]. Under all of these conditions, the standard NLMS algorithm yields biased weight estimates, and the resulting performance degradation can be substantial [22]. Therefore, various robust NLMS variants have been proposed to mitigate the effects of such noise [23].
To specifically tackle the weight bias induced by input noise, several bias-compensation strategies have been developed. The C-NLMS algorithm [24] achieves consistency by correcting the noisy input correlation matrix; however, its noise variance estimate ignores output noise entirely, limiting its accuracy to scenarios where the output noise power is much smaller than the input noise power. BC-NLMS [25] improves on this by incorporating the input-to-output noise variance ratio, which extends its applicability to a broader range of SNR conditions. Alternatively, BC-NSAF [26] replaces the noise ratio with a shrinkage threshold derived from the output noise variance. These approaches suppress weight bias and perform well under static conditions.
Driven by these successes, many recent studies have extended bias compensation techniques to a wider variety of adaptive filtering structures. Zhao et al. [27] designed a sign subband adaptive filter with per-tap weighting factors for input noise, thereby gaining robustness against sign-based quantization errors. Song et al. [28] proposed a robust bias-compensated subband adaptive filter capable of handling both noisy inputs and impulsive interference. A bias-compensated sign algorithm with optimized step size was also developed for noisy input environments [29]. In a related effort, Zhao et al. [30,31] combined the maximum correntropy criterion with subband filtering to deal simultaneously with impulsive noise and input noise. Park et al. [32] brought bias compensation into the affine projection framework together with a variable step-size mechanism, and Gu et al. [33] developed a constrained adaptive filtering algorithm that performs bias compensation under linear constraints.
However, all the aforementioned methods share a critical limitation: they strictly rely on prior noise information, such as the noise variance ratio, the exact output noise power, or a related threshold. This requirement limits their applicability in time-varying environments. In practical implementations, continuously updating these prior parameters requires additional sensors or periodic halting of the system for recalibration, imposing structural burdens. For example, in an active noise control (ANC) system for vehicle interiors or portable headphones, the primary noise characteristics and background interference can fluctuate rapidly as the user moves through different acoustic environments [34]. If the bias compensation relies on a fixed noise ratio or a one-time calibration of the output noise power, the filter will operate with stale parameters after such a change, leading to degraded tracking and elevated steady-state error. This motivates the development of an estimation method that continuously adapts to the current noise conditions without any externally supplied parameters.
In this paper, we propose the Prior-Free Noise Variance Estimator (PFNVE), which requires no prior information. The key idea is to exploit the orthogonality between the weight-error vector and the current weight vector during the update process, which approximately removes the bias induced by the corrupted input correlation matrix. This yields an unbiased variance estimate in real time, independent of the output noise level or the input-to-output noise variance ratio. To verify the performance of the proposed method, extensive system identification experiments were conducted under dense and sparse environments across various signal-to-noise ratio (SNR) conditions. The simulation results show that the resulting filter matches the steady-state accuracy of BC-NLMS and BC-NSAF while offering faster tracking when the system undergoes abrupt parameter changes. Furthermore, because the estimator operates solely on internal variables already available in any standard LMS-type update, it transfers to other adaptive filter structures without algorithmic modification. We verify this applicability by integrating the proposed method with the recently studied MS-PNLMS algorithm [35], showing that comparable performance can be achieved at similar computational cost.
The remainder of this paper is organized as follows. Section 2 reviews the conventional bias-compensated algorithms. Section 3 derives the proposed noise variance estimator and discusses its properties. Section 4 presents the simulation results under various conditions, including time-varying noise and colored input. Finally, conclusions are drawn in Section 5.
2. Review of the Conventional Algorithm
The desired signal $d(n)$ is defined as:
$$d(n) = \mathbf{w}_o^T \mathbf{u}(n) + v(n),$$
where $\mathbf{u}(n) = [u(n), u(n-1), \ldots, u(n-M+1)]^T$ denotes the noise-free input vector; $u(n)$ is the input signal at time index $n$, that is, a white Gaussian sequence of zero mean with variance $\sigma_u^2$; $\mathbf{w}_o$ is the optimal weight vector; $M$ means the tap length of the unknown system; $(\cdot)^T$ denotes the transpose of the vector or matrix; $v(n)$ is the measurement noise, and it is a zero-mean white Gaussian noise with variance $\sigma_v^2$.
The bias-compensated NLMS and MS-PNLMS algorithms employ a realistic model for the filter input signal by considering it as a noisy version of $\mathbf{u}(n)$, expressed as:
$$\mathbf{x}(n) = \mathbf{u}(n) + \boldsymbol{\eta}(n),$$
where $\boldsymbol{\eta}(n)$ denotes the input noise vector, modeled as a zero-mean white Gaussian process with variance $\sigma_\eta^2$.
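As a concrete illustration, the noisy-input signal model above can be simulated directly. The following sketch generates the desired signal and the noisy regressors; the tap length, noise variances, and random seed are illustrative choices, not values taken from the paper.

```python
import numpy as np

# Illustrative setup: unit-norm unknown system, as in the paper's experiments.
rng = np.random.default_rng(0)
M = 8                                   # tap length (illustrative)
w_o = rng.standard_normal(M)
w_o /= np.linalg.norm(w_o)              # optimal weight vector, unit norm

sigma_u2, sigma_eta2, sigma_v2 = 1.0, 0.1, 0.01   # illustrative variances
N = 5000
u = rng.normal(0.0, np.sqrt(sigma_u2), N + M - 1)

d = np.empty(N)                         # desired signal d(n) = w_o^T u(n) + v(n)
X = np.empty((N, M))                    # noisy regressors x(n) = u(n) + eta(n)
for n in range(N):
    un = u[n:n + M][::-1]               # tapped-delay-line input vector
    d[n] = w_o @ un + rng.normal(0.0, np.sqrt(sigma_v2))
    X[n] = un + rng.normal(0.0, np.sqrt(sigma_eta2), M)
```

Note that the adaptive filter only ever observes `X` and `d`; the clean input `u` and the true variances are unavailable to it, which is exactly the setting the bias-compensation literature addresses.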
2.1. Review of Bias-Compensated NLMS (BC-NLMS) Algorithm
The update equation of the BC-NLMS algorithm is given by [25]:
$$\mathbf{w}(n+1) = \mathbf{w}(n) + \frac{\mu\left[e(n)\,\mathbf{x}(n) + \hat{\sigma}_\eta^2(n)\,\mathbf{w}(n)\right]}{\mathbf{x}^T(n)\,\mathbf{x}(n)},$$
where $\mu$ is the step size, $e(n)$ is the error signal defined as $e(n) = d(n) - \mathbf{w}^T(n)\,\mathbf{x}(n)$, and $\hat{\sigma}_\eta^2(n)$ represents an estimate of the input noise variance.
Several approaches have been proposed for estimating the input noise variance. The first method, which utilizes the ratio between input and output noise variances, uses the following formula [25]:
$$\hat{\sigma}_\eta^2(n) = \frac{\hat{\sigma}_e^2(n)}{\kappa^{-1} + \|\mathbf{w}(n)\|^2},$$
where $\kappa = \sigma_\eta^2/\sigma_v^2$ is the input-output noise variance ratio and $\hat{\sigma}_e^2(n) = \alpha\,\hat{\sigma}_e^2(n-1) + (1-\alpha)\,e^2(n)$ with $\alpha \in (0,1)$ being a smoothing parameter.
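One iteration of a bias-compensated NLMS update with a ratio-based variance estimate can be sketched as follows. This is our illustration of the scheme reviewed above, not the reference implementation of [25]; the function name, argument names, and defaults are ours, and `kappa` is the prior knowledge the method depends on.

```python
import numpy as np

def bcnlms_step(w, x, d, mu, kappa, sigma_e2, alpha=0.95, eps=1e-8):
    """One bias-compensated NLMS iteration (illustrative sketch).

    kappa    : assumed input-to-output noise variance ratio
               sigma_eta^2 / sigma_v^2 (required prior knowledge).
    sigma_e2 : running estimate of the error power, returned updated.
    """
    e = d - w @ x                                      # error signal
    sigma_e2 = alpha * sigma_e2 + (1 - alpha) * e**2   # smoothed error power
    sigma_eta2 = sigma_e2 / (1.0 / kappa + w @ w)      # ratio-based estimate
    # Compensated update: the sigma_eta2 * w term counteracts the shrinkage
    # bias that noisy regressors induce in plain NLMS.
    w = w + mu * (e * x + sigma_eta2 * w) / (x @ x + eps)
    return w, sigma_e2
```

Calling this in a loop with a correctly supplied `kappa` drives the weights toward the unbiased solution; if `kappa` becomes stale after an environment change, the compensation term is miscalibrated, which is the limitation the paper targets.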
Another approach employs the shrinkage denoising method and is formulated as [26]:
$$\hat{\sigma}_\eta^2(n) = \frac{\hat{\sigma}_{\hat{e}}^2(n)}{\|\mathbf{w}(n)\|^2},$$
where
$$\hat{e}(n) = \operatorname{sign}(e(n))\,\max\!\left(|e(n)| - t,\, 0\right), \qquad \hat{\sigma}_{\hat{e}}^2(n) = \alpha\,\hat{\sigma}_{\hat{e}}^2(n-1) + (1-\alpha)\,\hat{e}^2(n);$$
here, $\operatorname{sign}(\cdot)$ denotes the sign function and $t = \sqrt{2\sigma_v^2}$ is a shrinkage threshold derived from the output noise variance.
However, both methods are limited by their reliance on prior knowledge of either the input-output noise variance ratio or the exact value of the output noise variance. This requirement can significantly restrict their applicability in practical scenarios where such prior information is unavailable or difficult to obtain.
2.2. Review of Bias-Compensated MS-PNLMS Algorithm
The update equation of the bias-compensated MS-PNLMS (BC-MS-PNLMS) algorithm is presented as [35]
$$\mathbf{w}(n+1) = \mathbf{w}(n) + \frac{\mu\,\mathbf{G}(n)\left[e(n)\,\mathbf{x}(n) + \hat{\sigma}_\eta^2(n)\,\mathbf{w}(n)\right]}{\mathbf{x}^T(n)\,\mathbf{G}(n)\,\mathbf{x}(n) + \delta},$$
where $\mathbf{G}(n) = \operatorname{diag}\{g_1(n), \ldots, g_M(n)\}$ denotes the step-size matrix for the individual taps of the filter. The diagonal elements of $\mathbf{G}(n)$ are calculated as
$$g_k(n) = \frac{\gamma_k(n)}{\frac{1}{M}\sum_{i=1}^{M}\gamma_i(n)},$$
where $\gamma_k(n) = \max\{\rho\,\max\{\delta_p, |w_1(n)|, \ldots, |w_M(n)|\},\, |w_k(n)|\}$, $\max\{a, b\}$ is the maximum value between $a$ and $b$, $\delta_p$ is a small positive initialization constant, and $\rho$ means the user parameter.
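As an illustration of how such per-tap gains are formed, the following sketch implements a classic PNLMS-style max-rule. The naming and parameter defaults are ours, and this is a representative proportionate rule rather than a verified transcription of the exact MS-PNLMS formula in [35].

```python
import numpy as np

def proportionate_gains(w, rho=0.01, delta_p=0.01):
    """Classic PNLMS-style per-tap gains: larger taps receive larger step sizes.

    rho (user parameter) and delta_p (small positive constant) keep inactive
    taps from stalling; gains are normalized so that they average to one.
    """
    gamma = np.maximum(rho * max(delta_p, np.max(np.abs(w))), np.abs(w))
    return gamma / np.mean(gamma)
```

The diagonal step-size matrix is then formed as `np.diag(proportionate_gains(w))`; for a sparse system, active taps receive gains well above one and adapt quickly, while near-zero taps receive small but nonzero gains.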
The BC-MS-PNLMS algorithm estimates the input noise variance using the same technique as BC-NLMS [35]:
$$\hat{\sigma}_\eta^2(n) = \frac{\hat{\sigma}_e^2(n)}{\kappa^{-1} + \|\mathbf{w}(n)\|^2},$$
where $\kappa$ is the input-output noise variance ratio, and $\hat{\sigma}_e^2(n) = \alpha\,\hat{\sigma}_e^2(n-1) + (1-\alpha)\,e^2(n)$ with $\alpha$ being a smoothing parameter.
As shown in the above equations, the BC-MS-PNLMS algorithm also depends on the input-to-output noise variance ratio. Consequently, the conventional bias-compensated methods reviewed in this section share a common limitation: they require prior noise information.
3. Prior-Free Bias-Compensated NLMS
This section derives a noise variance estimator that does not require any prior information. Specifically, the derivation begins by forming the product of the filter output and the error signal, expressed as follows:
$$y(n)\,e(n) = \mathbf{w}^T(n)\,\mathbf{x}(n)\left[d(n) - \mathbf{w}^T(n)\,\mathbf{x}(n)\right].$$
Applying the orthogonality principle, which implies $E[\mathbf{w}^T(n)\,\tilde{\mathbf{w}}(n)] \approx 0$ with $\tilde{\mathbf{w}}(n) = \mathbf{w}_o - \mathbf{w}(n)$ [25], and assuming that (i) the input noise vector $\boldsymbol{\eta}(n)$ is independent of $\mathbf{w}(n)$, (ii) the measurement noise $v(n)$ is zero-mean and independent of $\mathbf{u}(n)$ and $\boldsymbol{\eta}(n)$, and (iii) $\boldsymbol{\eta}(n)$ is independent of the clean input component, the expectation of the above expression yields
$$E[y(n)\,e(n)] = \sigma_u^2\,E[\mathbf{w}^T(n)\,\tilde{\mathbf{w}}(n)] - \sigma_\eta^2\,E[\|\mathbf{w}(n)\|^2] \approx -\sigma_\eta^2\,E[\|\mathbf{w}(n)\|^2].$$
Rearranging the above in terms of $\sigma_\eta^2$, the following expression is obtained:
$$\bar{\sigma}_\eta^2(n) = \frac{|\hat{P}(n)|}{\hat{W}(n) + \delta},$$
where $\delta$ is a small regularization parameter. The terms $E[y(n)\,e(n)]$ and $E[\|\mathbf{w}(n)\|^2]$ are estimated as
$$\hat{P}(n) = \alpha\,\hat{P}(n-1) + (1-\alpha)\,y(n)\,e(n), \qquad \hat{W}(n) = \alpha\,\hat{W}(n-1) + (1-\alpha)\,\|\mathbf{w}(n)\|^2.$$
The absolute value of $\hat{P}(n)$ is taken to ensure that $\bar{\sigma}_\eta^2(n)$ remains non-negative, as $\hat{P}(n)$ itself can take negative values. Furthermore, to reduce variability caused by noise and outliers during the estimation process of $\bar{\sigma}_\eta^2(n)$, a median filter is used [36]. Specifically, $\hat{\sigma}_\eta^2(n)$ is updated as follows:
$$\hat{\sigma}_\eta^2(n) = \operatorname{med}\!\left\{\bar{\sigma}_\eta^2(n),\, \bar{\sigma}_\eta^2(n-1),\, \ldots,\, \bar{\sigma}_\eta^2(n-N_w+1)\right\},$$
where $\operatorname{med}\{\cdot\}$ denotes the median operator over a sliding window of length $N_w$ and $\bar{\sigma}_\eta^2(\cdot)$ are the instantaneous estimates defined above.
Remark 1. The orthogonality relation holds in the vicinity of convergence [25]. Near the steady state, the weight-error vector $\tilde{\mathbf{w}}(n)$ is dominated by random gradient noise and fluctuates around zero, rendering it statistically orthogonal to $\mathbf{w}(n)$. During the initial transient phase or following an abrupt system change, however, $\tilde{\mathbf{w}}(n)$ is large and this relation does not hold exactly. To mitigate this, the exponential smoother with factor α attenuates rapid fluctuations in the estimate, while the median filter over $N_w$ samples suppresses outlier products that arise when the weight error is temporarily large. Furthermore, due to the finite window size and large transient fluctuations, the statistical cancellation of cross-terms in $y(n)e(n)$ may be incomplete, occasionally yielding a negative variance estimate. The absolute-value operation is therefore applied to prevent a negative $\hat{\sigma}_\eta^2(n)$, which would otherwise destabilize the update in (3). Once the filter returns to the neighborhood of the optimum, the orthogonality relation is restored and the estimator resumes reliable variance tracking.

The proposed bias-compensated adaptive filtering method is summarized in Algorithm 1. Although the weight update step is written based on the standard BC-NLMS formulation, the same algorithm can be applied by substituting other LMS-family update equations, such as MS-PNLMS, without altering the noise variance estimation process.
| Algorithm 1 BC-NLMS with PFNVE |
1: Initialization:
2:   $\mathbf{w}(0) = \mathbf{0}$, $\hat{P}(0) = 0$, $\hat{W}(0) = 0$
3: Parameters:
4:   Step size $\mu$, smoothing factor $\alpha$, median filter window length $N_w$, regularization $\delta$
5: Computation:
6: for $n = 0, 1, 2, \ldots$ do
7:   Get the input vector $\mathbf{x}(n)$ and desired signal $d(n)$
8:   Calculate the filter output: $y(n) = \mathbf{w}^T(n)\,\mathbf{x}(n)$
9:   Calculate the error signal: $e(n) = d(n) - y(n)$
10:  Update the intermediate estimates: $\hat{P}(n) = \alpha\hat{P}(n-1) + (1-\alpha)\,y(n)e(n)$, $\hat{W}(n) = \alpha\hat{W}(n-1) + (1-\alpha)\,\|\mathbf{w}(n)\|^2$
11:  Construct the sliding window buffer: $\{\bar{\sigma}_\eta^2(n), \ldots, \bar{\sigma}_\eta^2(n-N_w+1)\}$ with $\bar{\sigma}_\eta^2(n) = |\hat{P}(n)|/(\hat{W}(n)+\delta)$
12:  Estimate the input noise variance: $\hat{\sigma}_\eta^2(n) = \operatorname{med}\{\bar{\sigma}_\eta^2(n), \ldots, \bar{\sigma}_\eta^2(n-N_w+1)\}$
13:  Update the filter weight vector: $\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\left[e(n)\mathbf{x}(n) + \hat{\sigma}_\eta^2(n)\mathbf{w}(n)\right]/\left(\mathbf{x}^T(n)\mathbf{x}(n)\right)$
14: end for
Computational complexity is a critical issue in practical applications of adaptive filters; therefore, an analysis of the computational requirements is presented in Table 1. Table 1 compares the computational complexity of the noise variance estimation step in terms of multiplications per iteration. By utilizing the scalar product of the filter output and the error signal, $y(n)e(n)$, the proposed method maintains low computational demand. Specifically, it requires approximately $M$ multiplications, dominated by the computation of $\|\mathbf{w}(n)\|^2$, plus a median selection over $N_w$ elements, which adds negligible overhead for small $N_w$.
4. Simulation Results
This section evaluates the PFNVE-based BC-NLMS algorithm through system identification experiments. The algorithm was compared with NLMS, BC-NLMS, and BC-NSAF methods in dense systems, and with BC-MS-PNLMS in sparse systems. To ensure a fair comparison with these full-band algorithms, the BC-NSAF was implemented with a single subband ($N = 1$). The adaptive filter length matched that of the unknown system. In each trial, the unknown system was generated as a random vector drawn from a standard normal distribution and normalized to unit norm.
The input signal was an independent and identically distributed (i.i.d.) Gaussian sequence with unit variance unless otherwise stated. The input noise and output noise were generated as independent zero-mean white Gaussian processes. The input SNR is defined as $10\log_{10}(\sigma_u^2/\sigma_\eta^2)$, and the output SNR is defined as $10\log_{10}(\sigma_y^2/\sigma_v^2)$, where $\sigma_y^2$ denotes the power of the noise-free system output. All algorithms were initialized with the zero vector. Performance was evaluated using the normalized mean square deviation (NMSD) in decibels, defined as $\mathrm{NMSD}(n) = 10\log_{10}\!\left(\|\mathbf{w}_o - \mathbf{w}(n)\|^2 / \|\mathbf{w}_o\|^2\right)$, averaged over 100 independent trials.
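The two quantities used throughout this section can be sketched as small utilities (our helper names; the formulas follow the definitions above):

```python
import numpy as np

def nmsd_db(w_o, w):
    """NMSD in dB: 10*log10(||w_o - w||^2 / ||w_o||^2)."""
    return 10.0 * np.log10(np.sum((w_o - w) ** 2) / np.sum(w_o ** 2))

def noise_std_for_snr(signal_power, snr_db):
    """Noise standard deviation that realizes a target SNR (dB) for a given signal power."""
    return np.sqrt(signal_power / 10.0 ** (snr_db / 10.0))
```

For instance, a zero-initialized filter against a unit-norm system starts at 0 dB NMSD, and a 10 dB input SNR with unit input power corresponds to an input noise variance of 0.1.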
4.1. Dense System
Performance of the PFNVE-based BC-NLMS algorithm in dense systems is presented in comparison with NLMS, BC-NLMS, and BC-NSAF. All algorithms used a step size of 0.2 for fairness. Parameter settings were as follows: BC-NLMS used its noise variance ratio and smoothing factor; BC-NSAF used its shrinkage threshold and smoothing factor; and the PFNVE-based BC-NLMS used the smoothing factor $\alpha$, window length $N_w$, and regularization $\delta$. The window length $N_w$ was selected after comparing several candidate lengths, including 10; the chosen value provides a good balance between estimation stability and tracking responsiveness.
Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 present the simulation results for randomly generated 32-tap systems, except Figure 3, which involves 16- and 64-tap systems.
Figure 1 shows results for an input SNR of 10 dB, where Figure 1a,b represents output SNRs of 10 dB and 20 dB, respectively. Without requiring noise power ratios or the output noise power, the algorithm achieves comparable steady-state accuracy. Figure 2 illustrates results for an input SNR of 20 dB, with Figure 2a,b corresponding to output SNRs of 10 dB and 20 dB, respectively. Consistent with Figure 1, the PFNVE-based BC-NLMS maintains performance comparable to the traditional algorithms. Overall, Figure 1 and Figure 2 show consistent performance across the tested SNR conditions.
To provide a quantitative comparison, Table 2 reports the steady-state NMSD values averaged over the last 500 iterations for each algorithm under the four SNR conditions of Figure 1 and Figure 2. The PFNVE-based BC-NLMS achieves a steady-state NMSD within approximately 0.5 dB of BC-NLMS and BC-NSAF in all cases, despite requiring no prior noise information. The small gap arises from the median filter, which introduces a slight downward bias in the variance estimate due to its nonlinear averaging property. This difference is within an acceptable range considering that the PFNVE operates without any prior noise statistics.
Figure 3 shows the results for 16-tap and 64-tap systems, both tested at 10 dB input and output SNR. The PFNVE-based BC-NLMS reaches a steady-state NMSD level close to those of BC-NLMS and BC-NSAF in both cases, which indicates that the estimator works well regardless of the filter length and does not require re-tuning of the parameters.
To assess tracking capability [37], the true system coefficients are multiplied by −1 at iteration 4000, creating an abrupt parameter change. The resulting learning curves are shown in Figure 4. We measure the recovery time as the number of iterations needed for the NMSD to fall back within 1 dB of its pre-change steady-state level. In this experiment, the PFNVE-based BC-NLMS recovers after roughly 1000 iterations, which is about the same time it took to converge initially. BC-NLMS, on the other hand, needs about 1500 iterations, and BC-NSAF takes more than 2500 iterations to reach a comparable level. This rapid readaptation is a direct consequence of the prior-free design: since the estimator recalculates the noise variance from the current filter state at every iteration, it avoids the influence of stale parameters from the previous environment.
To evaluate robustness under non-stationary noise conditions, the output noise variance was doubled at iteration 2000 while all other parameters remained unchanged. BC-NLMS and BC-NSAF were initialized with the exact prior information for the original noise environment. As shown in Figure 5, all three bias-compensated algorithms initially achieve similar steady-state NMSD levels of approximately −16 dB, confirming that conventional methods perform well when given accurate prior knowledge. However, after the noise change, BC-NLMS degrades to approximately −13.5 dB because its fixed noise ratio parameter no longer matches the current environment. BC-NSAF exhibits a slightly less severe degradation, to about −14.5 dB, as the square-root relationship in its threshold partially mitigates the parameter mismatch. In contrast, the PFNVE-based BC-NLMS re-estimates the noise variance at every iteration and settles at approximately −14.8 dB. While this is comparable to BC-NSAF, it represents a clear improvement over BC-NLMS and is achieved without relying on any prior noise parameters.
Figure 6 evaluates the tracking performance with a colored input signal generated by a first-order autoregressive model, $u(n) = a\,u(n-1) + \xi(n)$, where $\xi(n)$ is a white Gaussian noise sequence. The remaining simulation conditions are identical to those in Figure 1a. Before the system change at iteration 4000, all three bias-compensated algorithms achieve similar steady-state NMSD levels. However, following the sign reversal, the algorithms exhibit different tracking behaviors. The PFNVE-based BC-NLMS readapts and converges within approximately 1500 iterations. In contrast, BC-NLMS fails to return to its original steady-state level within the simulation period. BC-NSAF eventually recovers, but it requires nearly 4000 iterations. The poor tracking of BC-NLMS under colored input is expected, as its variance estimator in (4) assumes a white input. When the input is correlated, this assumption no longer holds and the ratio-based estimate becomes inaccurate, particularly after an abrupt system change where the weight-error vector is large. BC-NSAF is less affected, since its shrinkage operates on the scalar error signal, but its recovery is still slower than that of the PFNVE-based BC-NLMS, which does not rely on any assumption about the input statistics.
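A colored input of this kind is straightforward to generate; the sketch below uses an AR coefficient of 0.8 as an illustrative choice (the value used in the experiment is not reproduced here).

```python
import numpy as np

def ar1_input(N, a=0.8, rng=None):
    """Colored input u(n) = a*u(n-1) + xi(n), xi(n) unit-variance white Gaussian.

    The AR coefficient a = 0.8 is illustrative; the stationary variance of
    the process is 1 / (1 - a^2).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    u = np.zeros(N)
    xi = rng.standard_normal(N)
    for n in range(1, N):
        u[n] = a * u[n - 1] + xi[n]
    return u
```

Increasing `a` toward 1 makes the input correlation matrix more ill-conditioned, which is precisely the regime in which white-input assumptions inside a variance estimator break down.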
4.2. Sparse System
Performance in sparse systems is evaluated through comparison with BC-MS-PNLMS. The unknown system has 64 taps, with sparsity levels of 2 and 48 considered. The sparsity level indicates the number of nonzero coefficients; a sparsity of 2 represents a highly sparse channel where only two taps carry signal energy, while a sparsity of 48 represents a weakly sparse system. Input and output SNRs were set to 10 dB and 20 dB, respectively. For a fair comparison, all algorithms used a step size of 0.4 together with common smoothing and regularization settings. BC-MS-PNLMS parameters were set as in [35], while the PFNVE used its smoothing factor $\alpha$ and window length $N_w$.
The PFNVE is applied to the MS-PNLMS structure by replacing the ratio-based noise variance estimator in (16) with the orthogonality-based estimator in (21). No other modification to the gain matrix computation or the weight update is required.
Figure 7 illustrates the NMSD learning curves for sparsity levels of 2 (Figure 7a) and 48 (Figure 7b). In both cases, the PFNVE-based MS-PNLMS matches the steady-state accuracy and tracking speed of BC-MS-PNLMS. These results show that the prior-free estimator extends beyond the standard NLMS framework, maintaining robust performance across varying degrees of system sparsity without requiring prior noise information.
5. Conclusions
A bias-compensated adaptive filter has been presented in which the input noise variance estimator operates without any prior knowledge of the noise environment. The estimator is derived from the orthogonality between the weight-error vector and the current weight vector, a relation that holds near convergence, and its computational cost per iteration is comparable to that of existing methods. System identification experiments on both dense and sparse channels show that the algorithm achieves a steady-state mean-square deviation comparable to BC-NLMS and BC-NSAF, with faster recovery after abrupt system changes. Additional experiments with time-varying output noise and colored input show that the estimator can track changes in the noise statistics, whereas methods that rely on fixed prior parameters exhibit degradation. The estimation step reuses quantities that are already available in the weight update, so it can be incorporated into other LMS-family filters without modification; this has been verified here with the MS-PNLMS structure.
Several directions remain open for future work. On the theoretical side, extending the convergence analysis to colored input noise conditions is a natural next step, as the current derivation assumes white input noise. From a practical standpoint, an adaptive rule for the median filter window could improve the balance between estimation stability and tracking speed. Extensions to complex-valued implementations and filtered-x variants such as FxNLMS are also worthwhile directions, particularly for communications and active noise control applications, where evaluation on real-world data would further establish the practical limits of the proposed method.