Article

Robust Data-Reuse Regularized Recursive Least-Squares Algorithms for System Identification Applications †

by Radu-Andrei Otopeleanu 1,2, Constantin Paleologu 1,*, Jacob Benesty 3, Laura-Maria Dogariu 1,2, Cristian-Lucian Stanciu 1,2 and Silviu Ciochină 1

1 Department of Telecommunications, National University of Science and Technology POLITEHNICA Bucharest, 060042 Bucharest, Romania
2 Academy of Romanian Scientists, Ilfov 3, 050044 Bucharest, Romania
3 INRS-EMT, University of Quebec, Montreal, QC H5A 1K6, Canada
* Author to whom correspondence should be addressed.
† Presented at the 2024 16th International Symposium on Electronics and Telecommunications (ISETC), Timişoara, Romania, 7–8 November 2024.
Sensors 2025, 25(16), 5017; https://doi.org/10.3390/s25165017
Submission received: 5 June 2025 / Revised: 8 August 2025 / Accepted: 11 August 2025 / Published: 13 August 2025

Abstract

The recursive least-squares (RLS) algorithm stands out as an appealing choice in adaptive filtering applications related to system identification problems. Its main asset is a fast convergence rate for various types of input signals. In the current paper, we focus on the regularized version of the RLS algorithm, which also offers improved robustness in noisy conditions. Since convergence and robustness are usually conflicting criteria, the data-reuse technique is used to achieve a proper compromise between these performance features. In this context, we develop a computationally efficient approach for the data-reuse process in conjunction with the regularized RLS algorithm, using a single equivalent step instead of multiple data-reuse iterations. In addition, different regularization techniques are involved, which lead to variable-regularized algorithms with time-dependent regularization parameters. This allows better control in challenging conditions, including noisy environments and other external disturbances. The resulting data-reuse regularized RLS algorithms are tested in the framework of echo cancellation, where the obtained results support the theoretical findings and indicate the reliable performance of these algorithms.

1. Introduction

There are many important real-world applications that rely on adaptive filters [1,2,3]. Such popular signal processing tools are frequently involved in system identification problems [4], interference cancellation schemes [5], channel equalization scenarios [6], sensor networks [7], and prediction configurations [8], among many others. The key block that controls the overall operation of this type of filter is the adaptive algorithm, which basically commands the coefficients’ update.
There are several families of adaptive filtering algorithms, among which two stand out as the most representative [1,2,3]. First, the least-mean-square (LMS) algorithms are popular due to their simplicity and practical features, especially in terms of low computational complexity. However, their performance is quite limited when operating with highly correlated input signals and/or long-length filters. The second important category belongs to the recursive least-squares (RLS) family, with improved convergence performance as compared to their LMS counterparts, even in the previously mentioned challenging scenarios. Nevertheless, the RLS algorithms are more computationally expensive and can also experience stability problems in practical implementations. On the other hand, the performance of implementation platforms is improving rapidly, in terms of both processing speed and ease of implementation. Consequently, the popularity of RLS-type algorithms is constantly growing, and they have become the solution of choice in different frameworks [9,10,11].
Motivated by these aspects, the current paper targets further improvements in the overall performance of the RLS algorithm, by aiming for a better control of its convergence parameters. The application framework focuses on the system identification problem, which represents one of the basic configurations in adaptive filtering, with a wide range of applications [2]. In this context, the forgetting factor represents one of the main parameters that tune the algorithm's behavior [3]. This positive subunitary constant weights the squared errors that contribute to the cost function, so that it mainly influences the memory of the algorithm. A larger value of this parameter (i.e., closer to one) leads to a better accuracy of the solution provided by the adaptive filter. However, in order to remain alert to any potential changes in the system to be identified, a lower value of the forgetting factor is desired, which leads to a faster tracking reaction.
The RLS algorithm should also be robust to different external perturbations in the operating environment, which can frequently appear in system identification scenarios. For example, let us consider an echo cancellation context [5], where an acoustic sensor (i.e., microphone) captures the background noise from the surroundings, which can be significantly strong and highly nonstationary. In this case, the algorithm should be robust to such variations, a goal that cannot be achieved using only the forgetting factor as the control parameter. Toward this purpose, the cost function of the algorithm should include (besides the error-related term) an additional regularization component [12,13,14,15]. As a result, the robustness of the algorithm is controlled in terms of the resulting regularization parameter. Nevertheless, most of the regularized RLS algorithms require additional (a priori) information about the environment or need some extra parameters that are difficult to evaluate in practice. Moreover, a robust behavior to such external perturbations (related to the environment) usually comes at the cost of a slower tracking reaction when dealing with time-varying systems.
These conflicting requirements, in terms of accuracy, tracking, and robustness, require a compromise among these main performance criteria. More recently, the data-reuse technique was also introduced and analyzed in the context of RLS algorithms [16,17,18]. The basic idea is to use the same set of data (i.e., the input and reference signals) several times within each main iteration of the algorithm, in order to improve the convergence rate and the tracking capability of the filter. Usually, mainly due to complexity reasons, the data-reuse method is extensively used in conjunction with LMS-type algorithms [19,20,21,22,23,24,25,26]. The solutions proposed in [16,18], in the framework of RLS-type algorithms, replace the multiple iterations of the data-reuse process with a single equivalent step, thus maintaining the computational complexity order of the original algorithm. Moreover, the data-reuse parameter (i.e., the number of equivalent data-reuse iterations) is used as an additional control factor, in order to improve the tracking capability of the RLS algorithm, even when operating with a large value of the forgetting factor.
The previous work [16] introduced the data-reuse principle in the context of the conventional RLS algorithm, which does not include any regularization component within its cost function, so that it is inherently limited in terms of robustness. Following [16], a convergence analysis of this algorithm was presented in [17]. More recently, we developed a data-reuse regularized RLS algorithm [18], with improved robustness features, where the regularization parameter is related to the signal-to-noise ratio (SNR). The current work represents an extension of the conference paper [18], with a twofold new contribution. First, it provides additional theoretical details and simulation results related to the algorithm developed in [18], also including a practical estimation of the SNR. Second, it presents a novel regularization technique recently proposed in [27], in conjunction with the data-reuse technique, thus resulting in a new RLS-type algorithm. Its regularization parameter considers both the influence of the external noise and a term related to the model’s uncertainties. This approach leads to improved performance as compared to the previously developed data-reuse regularized RLS algorithm.
Following this introduction, the rest of this paper is structured as follows. Section 2 contains the basics of the regularized RLS algorithms, including the recent method from [27]. Next, Section 3 develops the data-reuse method in conjunction with the regularized RLS algorithms. Simulation results are presented in Section 4, in the framework of echo cancellation. The paper is concluded in Section 5, which summarizes the main findings and outlines several perspectives for future research.

2. Regularized RLS Algorithms

Let us consider a system identification setup [3], in which the reference (or desired) signal is obtained as
$d(n) = \mathbf{x}^T(n) \mathbf{h}(n) + v(n) = y(n) + v(n)$,   (1)
where n represents the discrete-time index,
$\mathbf{x}(n) = \left[ x(n) \;\; x(n-1) \;\; \cdots \;\; x(n-L+1) \right]^T$
is a vector containing the most recent L samples of the zero-mean input signal $x(n)$, with the superscript $T$ denoting the transpose of a vector or a matrix,
$\mathbf{h}(n) = \left[ h_0(n) \;\; h_1(n) \;\; \cdots \;\; h_{L-1}(n) \right]^T$
is the impulse response (of length L) of the system that we need to identify, and $v(n)$ is a zero-mean additive noise signal, which is independent of $x(n)$. In this context, the main objective is to identify $\mathbf{h}(n)$ with an adaptive filter, denoted by
$\hat{\mathbf{h}}(n) = \left[ \hat{h}_0(n) \;\; \hat{h}_1(n) \;\; \cdots \;\; \hat{h}_{L-1}(n) \right]^T$.
Therefore, the a priori error between the reference signal and its estimate results in
$e(n) = d(n) - \mathbf{x}^T(n) \hat{\mathbf{h}}(n-1) = d(n) - \hat{y}(n)$,   (2)
where $\hat{y}(n)$ represents the output of the adaptive filter. In the framework of echo cancellation [5], a replica of the echo signal is obtained at the output of the adaptive filter. In other words, the echo path impulse response is estimated/modeled by the adaptive filter, so that this unknown system (i.e., the echo path) is identified. Thus, the echo cancellation application can be formulated as a system identification problem.
This system identification problem can be solved following the least-squares (LS) optimization criterion [3], which is based on the minimization of the cost function:
$J(n) = \sum_{i=1}^{n} \lambda^{n-i} \left[ d(i) - \mathbf{x}^T(i) \hat{\mathbf{h}}(n) \right]^2$,   (3)
where $\lambda$ is the forgetting factor, with $0 < \lambda \leq 1$. This parameter controls the memory of the algorithm, as explained in Section 1. The minimization of $J(n)$ with respect to $\hat{\mathbf{h}}(n)$ leads to the normal equations [3]:
$\mathbf{R}(n) \hat{\mathbf{h}}(n) = \mathbf{c}(n)$,   (4)
where
$\mathbf{R}(n) = \lambda \mathbf{R}(n-1) + \mathbf{x}(n) \mathbf{x}^T(n)$,   (5)
$\mathbf{c}(n) = \lambda \mathbf{c}(n-1) + \mathbf{x}(n) d(n)$.   (6)
The estimates from (5) and (6) are associated, respectively, with the covariance matrix of the input signal and the cross-correlation vector between the input and reference sequences. The system of equations from (4) can be recursively solved, thus leading to the conventional RLS algorithm [3], which is defined by the update
$\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + \mathbf{R}^{-1}(n) \mathbf{x}(n) e(n)$.   (7)
The standard initialization of this algorithm is $\hat{\mathbf{h}}(0) = \mathbf{0}_L$ and $\mathbf{R}(0) = \delta_{+} \mathbf{I}_L$, where $\mathbf{0}_L$ and $\mathbf{I}_L$ denote an all-zero vector of length L and the identity matrix of size $L \times L$, respectively, while $\delta_{+}$ is a positive constant, also known as the regularization parameter. However, the influence of $\delta_{+}$ is limited to the initial convergence of the algorithm, since its contribution diminishes as n increases, due to the presence of the subunitary forgetting factor $\lambda$. As we can notice from (5), when using the previous initialization for $\mathbf{R}(0)$, the matrix $\mathbf{R}(n)$ contains the term $\lambda^{n} \delta_{+} \mathbf{I}_L$, which is basically negligible for n large enough and $\lambda < 1$. As a result, the overall performance of the conventional RLS algorithm, in terms of accuracy and tracking, is basically influenced by the forgetting factor, which represents the main control parameter. On the other hand, these performance criteria are conflicting, since a large value of $\lambda$ (i.e., close to one) leads to a good accuracy of the filter estimate, but with a slow tracking reaction (when the system changes). In order to improve the tracking behavior, the forgetting factor should be reduced, while sacrificing the accuracy of the solution. In terms of robustness (against external perturbations), the higher the value of $\lambda$, the more robust the algorithm is. However, there is an inherent performance limitation even for $\lambda = 1$, so that using the forgetting factor as the single control mechanism is not always a practical asset.
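For illustration, the recursions (5) and (7) translate into the following minimal sketch (given here in Python/NumPy, which is our own illustrative choice; the paper specifies only the mathematical recursions). A direct linear solve replaces the explicit matrix inversion for readability; practical implementations rely on the matrix inversion lemma [3].

import numpy as np

def rls_identify(x, d, L, lam=0.999, delta_plus=1.0):
    """Conventional RLS sketch following (5) and (7); illustrative only."""
    h_hat = np.zeros(L)                   # h^(0) = 0_L
    R = delta_plus * np.eye(L)            # R(0) = delta_+ I_L
    for n in range(len(x)):
        xn = np.zeros(L)                  # x(n) = [x(n), ..., x(n-L+1)]^T
        m = min(L, n + 1)
        xn[:m] = x[n::-1][:m]
        R = lam * R + np.outer(xn, xn)    # (5)
        e = d[n] - xn @ h_hat             # a priori error, cf. (2)
        h_hat += np.linalg.solve(R, xn) * e   # (7), R^{-1}(n) x(n) via a solve
    return h_hat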
As outlined in Section 1, the robustness of RLS-type algorithms can be improved by incorporating a proper regularization component directly into the cost function. There are different approaches to this problem; however, the practical issues should also be taken into account. In other words, the resulting regularization term should be easy to control in practice, without requiring additional or a priori knowledge related to the system or the environment. Among the existing solutions, two practical regularization techniques are presented in the following.
The first one involves the Euclidean norm (i.e., the $\ell_2$ regularization), so that the cost function of the regularized RLS algorithm [1] results in
$J(n) = \sum_{i=1}^{n} \lambda^{n-i} \left[ d(i) - \mathbf{x}^T(i) \hat{\mathbf{h}}(n) \right]^2 + \delta \left\| \hat{\mathbf{h}}(n) \right\|^2$,   (8)
where $\| \cdot \|$ denotes the Euclidean norm. In this case, the update of the regularized RLS algorithm becomes
$\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + \left[ \mathbf{R}(n) + \delta \mathbf{I}_L \right]^{-1} \mathbf{x}(n) e(n)$.   (9)
In order to find a proper value of $\delta$, the solution proposed in [12] rewrites the update from (9) as
$\hat{\mathbf{h}}(n) = \mathbf{Q}(n) \hat{\mathbf{h}}(n-1) + \underline{\hat{\mathbf{h}}}(n)$,   (10)
where
$\mathbf{Q}(n) = \mathbf{I}_L - \left[ \mathbf{R}(n) + \delta \mathbf{I}_L \right]^{-1} \mathbf{x}(n) \mathbf{x}^T(n)$,   (11)
$\underline{\hat{\mathbf{h}}}(n) = \left[ \mathbf{R}(n) + \delta \mathbf{I}_L \right]^{-1} \mathbf{x}(n) d(n)$.   (12)
In this way, we can notice a "separation" in the right-hand side of (10), where $\mathbf{Q}(n)$ depends only on the input signal, while $\underline{\hat{\mathbf{h}}}(n)$ represents the corrective component of the algorithm. In relation to this component, a new error signal can be defined as
$\underline{e}(n) = d(n) - \mathbf{x}^T(n) \underline{\hat{\mathbf{h}}}(n)$.   (13)
At this point, in order to attenuate the effects of the noise in the estimate from (12), the condition imposed in [12] is to find $\delta$ in such a way that
$E \left[ \underline{e}^2(n) \right] = \sigma_v^2$,   (14)
where $E[\cdot]$ stands for mathematical expectation and $\sigma_v^2 = E[v^2(n)]$ is the variance of the noise signal from (1). Developing (14) based on (12), the regularization parameter results in [12]
$\delta = \frac{L \left( 1 + \sqrt{1 + \mathrm{SNR}} \right)}{\mathrm{SNR}} \, \sigma_x^2$,   (15)
where $\mathrm{SNR} = \sigma_y^2 / \sigma_v^2$ represents the signal-to-noise ratio, while $\sigma_y^2 = E[y^2(n)]$ and $\sigma_x^2 = E[x^2(n)]$ are the variances of the output signal and the input sequence, respectively. It can be noticed from (15) that a low SNR leads to a high value of $\delta$ and, consequently, to a small update term in (9), which is the desired behavior in terms of robustness in noisy conditions. Nevertheless, in practice, the SNR is not available and should be estimated.
A simple yet efficient method for this purpose was proposed in [13]. It relies on the assumption that the adaptive filter has converged to a certain degree, i.e., $y(n) \approx \hat{y}(n)$, so that $\sigma_y^2 \approx \sigma_{\hat{y}}^2$, where $\sigma_{\hat{y}}^2 = E[\hat{y}^2(n)]$ denotes the variance of the estimated output from the right-hand side of (2). Also, since $y(n)$ and $v(n)$ are uncorrelated, taking the expectation in (1) results in
$\sigma_d^2 = \sigma_y^2 + \sigma_v^2 \approx \sigma_{\hat{y}}^2 + \sigma_v^2$,   (16)
where $\sigma_d^2 = E[d^2(n)]$ is the variance of the reference signal. Therefore, $\sigma_v^2 \approx \sigma_d^2 - \sigma_{\hat{y}}^2$, so that the SNR can be approximated as
$\mathrm{SNR} \approx \frac{\sigma_{\hat{y}}^2}{\epsilon + \left| \sigma_d^2 - \sigma_{\hat{y}}^2 \right|}$,   (17)
where the absolute value in the denominator is used to prevent any minor deviations of the estimates (which could make the SNR negative) and $\epsilon$ is a very small positive constant that prevents a division by zero. Since the signals required in (17) are available, i.e., $d(n)$ and $\hat{y}(n)$, their associated variances can be recursively estimated as
$\sigma_d^2(n) = \lambda \sigma_d^2(n-1) + (1 - \lambda) d^2(n)$,   (18)
$\sigma_{\hat{y}}^2(n) = \lambda \sigma_{\hat{y}}^2(n-1) + (1 - \lambda) \hat{y}^2(n)$,   (19)
where the forgetting factor $\lambda$ is now used as a weighting parameter. The initialization is $\sigma_d^2(0) = \sigma_{\hat{y}}^2(0) = 0$. The resulting algorithm, defined by the update (9) with $\delta$ computed based on (15) and the estimated SNR from (17), is referred to as the variable-regularized RLS (VR-RLS) algorithm [13].
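As a sketch of how this variable regularization could be evaluated at each time index (combining (15) with the estimators (17)–(19)), consider the following; the function interface and the state dictionary are illustrative conventions (not part of the original description), and sigma_x2 is assumed to be known or estimated elsewhere.

import numpy as np

def vr_rls_delta(d_n, y_hat_n, state, L, lam, eps=1e-5):
    """One-step evaluation of delta(n) for VR-RLS, per (15) and (17)-(19)."""
    state["sigma_d2"] = lam * state["sigma_d2"] + (1 - lam) * d_n ** 2        # (18)
    state["sigma_yh2"] = lam * state["sigma_yh2"] + (1 - lam) * y_hat_n ** 2  # (19)
    snr = state["sigma_yh2"] / (eps + abs(state["sigma_d2"] - state["sigma_yh2"]))  # (17)
    snr = max(snr, eps)                   # guard the division in (15) at start-up
    return L * (1.0 + np.sqrt(1.0 + snr)) / snr * state["sigma_x2"]           # (15)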
The second practical regularization technique analyzed in this work has been recently proposed in [27]. It considers a linear state model, where the observation equation is given in (1), while the state equation follows a simplified first-order Markov model:
$\mathbf{h}(n) = \mathbf{h}(n-1) + \mathbf{w}(n)$,   (20)
where $\mathbf{w}(n)$ is a zero-mean white Gaussian noise signal vector, which is uncorrelated with $\mathbf{h}(n-1)$ and $v(n)$. Related to this model, we denote by $\sigma_w^2$ and $\mathbf{R}_w(n) = \sigma_w^2 \mathbf{I}_L$ the variance and covariance matrix of $\mathbf{w}(n)$, respectively. The first-order Markov model is frequently used for modeling time-varying systems (or nonstationary environments), especially in the context of adaptive filters [1,2,3]. Moreover, this model fits very well in echo cancellation scenarios [5], where the impulse response of the echo path (to be modeled by the adaptive filter) is associated with a time-varying system, which can be influenced by several factors. For example, in acoustic echo cancellation, the room impulse response is influenced by temperature, pressure, humidity, and the movement of objects or bodies. Thus, the model in (20) represents a particularly convenient stochastic benchmark for capturing the unknown dynamics of the environment: it describes systems that gradually change in an unpredictable direction, which is strongly in agreement with the nature of time-varying impulse responses of the echo paths.
Next, in order to find the estimate $\hat{\mathbf{h}}(n)$, the weighted LS criterion is used, together with a regularization term that incorporates the model uncertainties, which are captured by $\sigma_w^2$. Consequently, the cost function is
$J(n) = \sum_{i=1}^{n} \lambda^{n-i} \frac{\left[ d(i) - \mathbf{x}^T(i) \hat{\mathbf{h}}(n) \right]^2}{\sigma_v^2} + \frac{1}{L} \sum_{i=1}^{n} \lambda^{n-i} \left[ \hat{\mathbf{h}}(n) - \mathbf{h}(i-1) \right]^T \mathbf{R}_w^{-1}(n) \left[ \hat{\mathbf{h}}(n) - \mathbf{h}(i-1) \right]$.   (21)
This cost function takes into consideration both types of noise, i.e., the external noise that corrupts the output of the system and the internal noise that models the system uncertainties. In (21), the first term consists of the standard RLS cost function from (3), weighted by the external noise power ($\sigma_v^2$), while the second term consists of a weighted sum (using the same forgetting factor $\lambda$) of the terms that contain the covariance matrix of $\mathbf{w}(n)$, in order to capture the model uncertainties ($\sigma_w^2$).
The minimization of $J(n)$ with respect to $\hat{\mathbf{h}}(n)$ leads to a set of normal equations that can be recursively solved using the update [27]:
$\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + \left[ \mathbf{R}(n) + \delta^{\prime} \mathbf{I}_L \right]^{-1} \mathbf{x}(n) e(n)$,   (22)
where
$\delta^{\prime} = \frac{1}{L(1 - \lambda)} \cdot \frac{\sigma_v^2}{\sigma_w^2}$.   (23)
The second factor from the right-hand side of (23), i.e., $\sigma_v^2 / \sigma_w^2$, can be interpreted as the noise-to-uncertainty ratio (NUR) and captures the effects of both "noises," i.e., the external perturbation and the model uncertainties, which are related to the environment conditions and the system variability, respectively. Clearly, the NUR is unavailable in practice and should be estimated. The solution proposed in [27] uses the following recursive estimators for the main parameters required in (23):
$\sigma_v^2(n) = \lambda \sigma_v^2(n-1) + (1 - \lambda) e^2(n)$,   (24)
$\sigma_w^2(n) = \lambda \sigma_w^2(n-1) + (1 - \lambda) \frac{\left\| \hat{\mathbf{h}}(n) - \hat{\mathbf{h}}(n-1) \right\|^2}{L}$,   (25)
using the same forgetting factor $\lambda$ as a weighting parameter. The initialization is $\sigma_v^2(0) = 0$ and $\sigma_w^2(0) = \xi$, where the small positive constant $\xi$ is used since the estimate from (25) appears in the denominator of (23).
The estimator from (24) is based on the fact that, in system identification scenarios, the goal of the adaptive algorithm is not to drive the error signal to zero, since this would introduce noise into the filter estimate. Instead, the noise signal should be recovered from the error of the adaptive filter after the filter converges to its steady-state solution. In other words, some related information about $v(n)$ can be extracted from the error signal, $e(n)$. Second, the estimator from (25) is derived based on (20). Thus, using the adaptive filter estimates from time indices n and $n-1$, we can use the approximation $\hat{\mathbf{h}}(n) - \hat{\mathbf{h}}(n-1) \approx \mathbf{w}(n)$, while $\left\| \mathbf{w}(n) \right\|^2 \approx L \sigma_w^2$, for $L \gg 1$. In this way, the term $\left\| \hat{\mathbf{h}}(n) - \hat{\mathbf{h}}(n-1) \right\|^2$ captures the uncertainties in the system. In summary, the resulting regularized RLS algorithm based on the weighted LS criterion, referred to as WR-RLS [27], is defined by the update (22), with the regularization parameter $\delta^{\prime}$ evaluated as in (23), using the estimated NUR based on (24) and (25).
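Similarly, a minimal sketch of the NUR-based regularization from (23)–(25) could look as follows; the interface is again a hypothetical convention, and λ < 1 is assumed so that the denominator of (23) is well defined.

import numpy as np

def wr_rls_delta(e_n, h_hat, h_hat_prev, state, L, lam, xi=1e-5):
    """One-step evaluation of delta'(n) for WR-RLS, per (23)-(25).

    state carries sigma_v2 (initialized to 0) and sigma_w2 (initialized
    to xi > 0, since it appears in the denominator of (23)).
    """
    state["sigma_v2"] = lam * state["sigma_v2"] + (1 - lam) * e_n ** 2    # (24)
    diff2 = np.sum((h_hat - h_hat_prev) ** 2)
    state["sigma_w2"] = lam * state["sigma_w2"] + (1 - lam) * diff2 / L   # (25)
    nur = state["sigma_v2"] / state["sigma_w2"]   # noise-to-uncertainty ratio
    return nur / (L * (1.0 - lam))                # (23), assuming lam < 1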

3. Data-Reuse Regularized RLS Algorithms

The general update of the regularized RLS algorithms presented in the previous section can be summarized as follows:
$\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + \mathbf{P}(n) \mathbf{x}(n) e(n)$,   (26)
where
$\mathbf{P}(n) = \left[ \mathbf{R}(n) + \delta \mathbf{I}_L \right]^{-1}$   (27)
and $\delta$ generally denotes the regularization parameter, which can be a positive constant or a variable term that can be evaluated as in (15) or (23). The filter update from (26) is performed for each set of data, i.e., $\mathbf{x}(n)$ and $d(n)$, and for each time index n. On the other hand, in the context of the data-reuse approach, this process is repeated N times for the same time index n, i.e., the same set of data is reused N times. As a result, for the regularized RLS algorithms, the relations that define the data-reuse process are
Initialization: $\mathbf{g}_0(n) = \hat{\mathbf{h}}(n-1)$
Data reuse: for $j = 1, 2, \ldots, N$:
$\varepsilon_j(n) = d(n) - \mathbf{x}^T(n) \mathbf{g}_{j-1}(n)$,   (28)
$\mathbf{g}_j(n) = \mathbf{g}_{j-1}(n) + \mathbf{P}(n) \mathbf{x}(n) \varepsilon_j(n)$.   (29)
Update: $\hat{\mathbf{h}}(n) = \mathbf{g}_N(n)$.
Since $\mathbf{P}(n)$ [or $\mathbf{R}(n)$] depends only on $\mathbf{x}(n)$, it remains the same within the cycle associated with the data-reuse process. It can be noticed that the conventional regularized RLS algorithm from (26) is obtained when $N = 1$.
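For reference, this multiple-iteration cycle translates directly into the sketch below (variable names are our own), assuming that P(n) has already been computed for the current time index and that the arguments are NumPy arrays; the remainder of this section replaces the loop with an equivalent single step.

def data_reuse_naive(h_prev, P, xn, d_n, N):
    """Direct N-iteration data-reuse cycle, per (28) and (29); reference only."""
    g = h_prev.copy()            # g_0(n) = h^(n-1)
    for _ in range(N):
        eps = d_n - xn @ g       # (28)
        g = g + P @ xn * eps     # (29)
    return g                     # h^(n) = g_N(n)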
Nevertheless, it is not efficient (especially in terms of computational complexity) to implement the data-reuse process in the conventional way, as presented before. As an equivalent alternative, we show in the following how the entire data-reuse cycle can be efficiently grouped into a single update of the filter. Let us begin with the first step, which can be written as
$\varepsilon_1(n) = d(n) - \mathbf{x}^T(n) \mathbf{g}_0(n)$,   (30)
$\mathbf{g}_1(n) = \mathbf{g}_0(n) + \mathbf{P}(n) \mathbf{x}(n) \varepsilon_1(n)$.   (31)
The previous relations are then involved within the second step, which can be developed as
$\varepsilon_2(n) = d(n) - \mathbf{x}^T(n) \mathbf{g}_1(n) = d(n) - \mathbf{x}^T(n) \left[ \mathbf{g}_0(n) + \mathbf{P}(n) \mathbf{x}(n) \varepsilon_1(n) \right] = d(n) - \mathbf{x}^T(n) \mathbf{g}_0(n) - \mathbf{x}^T(n) \mathbf{P}(n) \mathbf{x}(n) \varepsilon_1(n) = \varepsilon_1(n) - q(n) \varepsilon_1(n) = r(n) \varepsilon_1(n)$,   (32)
using the notation:
$q(n) = \mathbf{x}^T(n) \mathbf{P}(n) \mathbf{x}(n)$,   (33)
$r(n) = 1 - q(n)$.   (34)
It was also taken into account that $\mathbf{P}(n) = \mathbf{P}^T(n)$, due to the specific symmetry of the matrix $\mathbf{R}(n)$. Therefore, in this second step, we can evaluate the update of the filter as
$\mathbf{g}_2(n) = \mathbf{g}_1(n) + \mathbf{P}(n) \mathbf{x}(n) \varepsilon_2(n) = \mathbf{g}_0(n) + \mathbf{P}(n) \mathbf{x}(n) \varepsilon_1(n) + \mathbf{P}(n) \mathbf{x}(n) r(n) \varepsilon_1(n) = \mathbf{g}_0(n) + \left[ 1 + r(n) \right] \mathbf{P}(n) \mathbf{x}(n) \varepsilon_1(n)$.   (35)
Similarly, the third step of the data-reuse process becomes equivalent to
$\varepsilon_3(n) = r^2(n) \varepsilon_1(n)$,   (36)
$\mathbf{g}_3(n) = \mathbf{g}_0(n) + \left[ 1 + r(n) + r^2(n) \right] \mathbf{P}(n) \mathbf{x}(n) \varepsilon_1(n)$.   (37)
Following the same approach and using mathematical induction, we obtain the relations associated with the final Nth step of the cycle, i.e.,
$\varepsilon_N(n) = r^{N-1}(n) \varepsilon_1(n)$,   (38)
$\mathbf{g}_N(n) = \mathbf{g}_0(n) + \left[ \sum_{l=0}^{N-1} r^l(n) \right] \mathbf{P}(n) \mathbf{x}(n) \varepsilon_1(n)$.   (39)
It is known that $\varepsilon_1(n) = e(n)$, $\mathbf{g}_0(n) = \hat{\mathbf{h}}(n-1)$, and $\mathbf{g}_N(n) = \hat{\mathbf{h}}(n)$, while $\sum_{l=0}^{N-1} r^l(n)$ sums the N terms of a geometric progression with the common ratio $r(n)$. The latter sum can be computed as
$s(n) = \frac{1 - r^N(n)}{1 - r(n)}$,   (40)
so that the final update becomes
$\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + s(n) \mathbf{P}(n) \mathbf{x}(n) e(n)$.   (41)
The resulting data-reuse regularized RLS algorithm is summarized in Table 1 in a slightly modified form, which targets a more efficient implementation. Also, a block diagram of this type of algorithm is presented in Figure 1. The most challenging operations (in terms of complexity) are the matrix inversion and the computation of $\mathbf{p}(n)$. However, these steps could be efficiently solved using line search methods, like the conjugate gradient (CG) or coordinate descent (CD) algorithms [28,29,30]. (These methods have not been considered here for the implementation of the data-reuse regularized RLS-type algorithms, since they are beyond the scope of this paper. However, they represent a subject for future work, as will be outlined in Section 5). Also, the update of $\mathbf{R}(n)$ can be computed by taking into account the symmetry of this matrix and the time-shift property of the input vector, $\mathbf{x}(n)$. Thus, only the first row and column should be computed, while the rest of the elements are available from the previous iteration. As compared to the conventional regularized RLS algorithm, there is only a moderate increase in terms of computational complexity, mainly due to the evaluation of $q(n) = \mathbf{x}^T(n) \mathbf{p}(n)$. Nevertheless, this extra computational amount is reasonable, i.e., L multiplications and $L - 1$ additions.
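Putting the pieces together, the single-step update (41) admits the following minimal sketch; the linear system with R(n) + δI_L is solved directly here for clarity (Table 1 and the line search methods mentioned above target more efficient implementations), and the variable names are illustrative.

import numpy as np

def dr_rls_update(h_prev, R, xn, d_n, delta, N):
    """Equivalent single-step data-reuse regularized RLS update (41)."""
    p = np.linalg.solve(R + delta * np.eye(len(xn)), xn)  # p(n) = P(n) x(n), via (27)
    q = xn @ p                          # q(n), (33)
    r = 1.0 - q                         # r(n), (34)
    if np.isclose(r, 1.0):              # limit case r(n) -> 1, where s(n) = N
        s = float(N)
    else:
        s = (1.0 - r ** N) / (1.0 - r)  # s(n), (40)
    e = d_n - xn @ h_prev               # a priori error e(n)
    return h_prev + s * p * e           # (41)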
The data-reuse parameter N can play the role of an additional control factor, besides the forgetting factor  λ . In this context, we aim to improve the overall performance of the algorithm, even when using a very large value of  λ  (i.e., very close or equal to 1), which leads to a good accuracy, but significantly affects the tracking. As a consequence, the data-reuse regularized RLS algorithm can attain a better compromise between the main performance criteria, i.e., accuracy versus tracking. Moreover, using a proper regularization parameter for this type of algorithm, like in (15) or (23), can improve its behavior in noisy environments.
The regularization parameter of the data-reuse regularized RLS algorithm can be set or evaluated in different ways, as indicated in Table 2. In the simplest approach,  δ  is selected as a positive constant, thus resulting in the data-reuse conventionally regularized RLS (DR-CR-RLS) algorithm. A more rigorous method for setting the constant regularization parameter relies on its connection to the SNR [12]. In conjunction with the data-reuse technique, this led to the data-reuse optimally regularized RLS (DR-OR-RLS) presented in [18].
Nevertheless, the estimated SNR from (17) was not considered in [18], where the true value of the SNR was assumed to be available in the evaluation of the "optimal" regularization constant, $\delta_o$ (see Table 2). As an extension to this previous work, the estimated SNR from (17) is considered in the current paper. Here, the parameter $\delta$ from (15) is used within the matrix $\mathbf{P}(n)$ involved in the update (41), but it is evaluated based on (17), in conjunction with (18) and (19). The resulting algorithm is referred to as the data-reuse variable-regularized RLS (DR-VR-RLS) algorithm. For $N = 1$, it is equivalent to the VR-RLS algorithm from [13].
The regularization parameter specific to the WR-RLS algorithm [27], i.e., $\delta^{\prime}$ from (23), can also be involved within the matrix $\mathbf{P}(n)$. Thus, in conjunction with the previously developed data-reuse process, which led to the update (41), a data-reuse WR-RLS (DR-WR-RLS) algorithm is obtained. Its regularization relies on (23), while using the estimated NUR based on (24) and (25). Also, the WR-RLS algorithm from [27] is the special case obtained when $N = 1$.
As compared to the DR-VR-RLS algorithm that relies only on the estimated SNR, the regularization approach behind the DR-WR-RLS algorithm is potentially better, since it also includes the contribution of the model uncertainties within the NUR. In other words, the NUR can represent a better measure (for robustness control) than the SNR. For both the DR-VR-RLS and DR-WR-RLS algorithms, the regularization parameters are time-dependent, so that the time index is indicated in Table 2, for $\delta(n)$ and $\delta^{\prime}(n)$, respectively.
The computational complexity of these data-reuse regularized RLS algorithms is provided in Table 3 (in terms of the number of multiplications per iteration), as compared to the standard RLS and LMS algorithms [1,2,3]. Clearly, the LMS algorithm is the least complex, since its update is similar to (7), but uses a positive constant $\mu$ (known as the step-size parameter) instead of $\mathbf{R}^{-1}(n)$. However, it is known that the overall performance of the LMS algorithms (in terms of both the convergence rate and the accuracy of the estimate) is inferior to the RLS-based algorithms, especially when operating with long-length filters and correlated input signals [1,2,3]. The complexity order of the RLS-based algorithms is proportional to $\mathcal{O}(L^2)$, but it also depends on the computational amount required by the matrix inversion, which is denoted by $\mathcal{O}_{-1}$ in Table 3. The conventional RLS algorithm avoids this direct operation by using the matrix inversion lemma [3], so that its complexity order remains proportional to $\mathcal{O}(L^2)$. There are several alternative (iterative) techniques that can be used in this context for solving the normal equations related to the RLS-based algorithms. Among the existing solutions, the dichotomous coordinate descent (DCD) method [28] represents one of the most popular choices, since it reduces the computational amount down to $\mathcal{O}(L)$, using a proper selection of its parameters. Nevertheless, the influence of these methods on the overall performance of the algorithms is beyond the scope of this paper and will be investigated in future works (as will be outlined later in Section 4.6). The evaluation of the regularization parameter of the DR-⋆R-RLS algorithms from Table 2 (where ⋆ generally denotes the corresponding version, i.e., C/O/V/W), denoted by $\mathcal{O}_{\delta}$, requires only a few operations, as compared to the overall amount. For example, in the case of the DR-CR-RLS and DR-OR-RLS algorithms, the regularization parameters can be set a priori. The DR-VR-RLS algorithm requires only 6 multiplications per iteration for evaluating $\delta(n)$, while the computational amount related to $\delta^{\prime}(n)$ of the DR-WR-RLS algorithm is $6 + L$ multiplications per iteration. Even if the DR-WR-RLS algorithm is the most complex among its counterparts from Table 2, its improved performance compensates for this extra computational amount, as will be supported in Section 4. Also, since $N \ll L$, the computational amount required by the data-reuse process is negligible in the context of the overall complexity of the data-reuse regularized RLS algorithms. In addition, their robustness features justify the moderate extra computational amount as compared to the standard RLS algorithm.
Finally, we should outline that a detailed theoretical convergence analysis of the proposed algorithms is a self-contained issue that is beyond the scope of the paper and will be explored in future works. Nevertheless, at the end of this section, we provide a brief convergence analysis in the mean value, under some simplifying assumptions. First, let us consider that the covariance matrix of the input signal is close to a diagonal one, i.e., $E[\mathbf{x}(n) \mathbf{x}^T(n)] \approx \sigma_x^2 \mathbf{I}_L$. Consequently, for large enough n, its estimate from (5) results in $\mathbf{R}(n) \approx \left[ \sigma_x^2 / (1 - \lambda) \right] \mathbf{I}_L$. Also, for $L \gg 1$ (like in echo cancellation scenarios), the approximation $\mathbf{x}^T(n) \mathbf{x}(n) \approx L \sigma_x^2$ is valid. At this point, let us note that a general rule for setting the forgetting factor is [18]
$\lambda = 1 - \frac{1}{KL}$,   (42)
with $K \geq 1$. Under these circumstances, based on (27) and (34), we obtain
$\mathbf{P}(n) \approx \frac{1 - \lambda}{\delta (1 - \lambda) + \sigma_x^2} \mathbf{I}_L = \frac{1}{\delta + K L \sigma_x^2} \mathbf{I}_L$,   (43)
$r(n) \approx 1 - \frac{(1 - \lambda) L \sigma_x^2}{\delta (1 - \lambda) + \sigma_x^2} = 1 - \frac{L \sigma_x^2}{\delta + K L \sigma_x^2}$.   (44)
Since $\delta > 0$, it can be noticed that $0 < r(n) < 1$ and $s(n)$ can be considered as deterministic, being obtained as the sum of a geometric progression with N terms and the common ratio $r(n)$. At the limit, when $\lambda = 1$ (i.e., $K \to \infty$), the common ratio becomes $r(n) = 1$, which results in $s(n) = N$.
Next, we assume that the system to be identified is time-invariant, so that its impulse response is fixed (for the purpose of this simplified analysis), i.e., $\mathbf{h}(n) \equiv \mathbf{h}$. In this context, the system mismatch (or the coefficients' error) can be defined as $\mathbf{m}(n) = \mathbf{h} - \hat{\mathbf{h}}(n)$, so that the condition for the convergence in the mean value results in $E[\mathbf{m}(n)] \to \mathbf{0}_L$, for $n \to \infty$. This is equivalent to $E[\hat{\mathbf{h}}(n)] \to \mathbf{h}$, for $n \to \infty$, which implies that the coefficients of the adaptive filter converge (in the mean) to those of the system impulse response. Based on (1) and (2), the update from (41) can be developed as
$\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + s(n) \mathbf{P}(n) \mathbf{x}(n) \left[ \mathbf{x}^T(n) \mathbf{m}(n-1) + v(n) \right]$,   (45)
so that subtracting $\mathbf{h}$ from both sides (and changing the sign), an update for the system mismatch is obtained as
$\mathbf{m}(n) = \left[ \mathbf{I}_L - s(n) \mathbf{P}(n) \mathbf{x}(n) \mathbf{x}^T(n) \right] \mathbf{m}(n-1) - s(n) \mathbf{P}(n) \mathbf{x}(n) v(n)$.   (46)
Then, taking the expectation on both sides of (46), using (43), and considering that $E[\mathbf{x}(n) v(n)] = \mathbf{0}_L$ (since the input signal and the additive noise are uncorrelated), we obtain
$E[\mathbf{m}(n)] = \left[ 1 - \frac{s(n) \sigma_x^2}{\delta + K L \sigma_x^2} \right] E[\mathbf{m}(n-1)]$.   (47)
As indicated in Table 1, the initialization of the adaptive filter is $\hat{\mathbf{h}}(0) = \mathbf{0}_L$, so that $\mathbf{m}(0) = \mathbf{h}$. Hence, processing (47), starting with this initialization, results in
$E[\mathbf{m}(n)] = \left[ 1 - \frac{s(n) \sigma_x^2}{\delta + K L \sigma_x^2} \right]^n E[\mathbf{m}(0)] = \left[ 1 - \frac{s(n) \sigma_x^2}{\delta + K L \sigma_x^2} \right]^n \mathbf{h}$.   (48)
Thus, to obtain the exponential decay toward zero, the convergence condition translates into $s(n) \sigma_x^2 / \left( \delta + K L \sigma_x^2 \right) < 1$. Using the upper limit $s(n) = N$, which, as explained before, is related to (44), we need to verify that $N \sigma_x^2 / \left( \delta + K L \sigma_x^2 \right) < 1$. Since $\delta > 0$, we have $N \sigma_x^2 / \left( \delta + K L \sigma_x^2 \right) < N \sigma_x^2 / \left( K L \sigma_x^2 \right)$, so that we basically need to verify that $N / (K L) < 1$, i.e., $N < K L$. This condition is always true in practice, since the common setting is $N \ll L$. Consequently, the data-reuse regularized RLS algorithm is convergent in the mean value.
In addition, a simple and reasonable mechanism to evaluate the stability of the algorithm is related to the conversion factor [1,3]. First, similarly to (2) but using the coefficients from the time index n, we can define the a posteriori error of the adaptive filter as
$\tilde{e}(n) = d(n) - \mathbf{x}^T(n) \hat{\mathbf{h}}(n)$.   (49)
Next, using the update (41) in (49) and taking (2) into account, we obtain
$\tilde{e}(n) = \left[ 1 - s(n) \mathbf{x}^T(n) \mathbf{P}(n) \mathbf{x}(n) \right] e(n) = \gamma(n) e(n)$,   (50)
where
$\gamma(n) = 1 - s(n) \mathbf{x}^T(n) \mathbf{P}(n) \mathbf{x}(n)$   (51)
represents the so-called conversion factor. Under the same simplified assumptions used before (related to the convergence in the mean), this conversion factor can be approximated as
$\gamma(n) \approx 1 - \frac{N L \sigma_x^2}{\delta + K L \sigma_x^2}$.   (52)
For stability, we need to verify that $0 < \gamma(n) \leq 1$, which further leads to $| \tilde{e}(n) | \leq | e(n) |$. Since the second term from the right-hand side of (52) is positive, the condition $\gamma(n) \leq 1$ is always true. In order to also have $\gamma(n) > 0$, the ratio from (52) should be subunitary, i.e., $N L \sigma_x^2 < \delta + K L \sigma_x^2$. Thus, using $K \geq N$ is sufficient to fulfill this condition and to guarantee the stability of the algorithm. This represents a common practical setting in most of the scenarios, as shown in the next section.
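As a quick numerical illustration (ours, under the same simplifying assumptions): neglecting $\delta$ with respect to $K L \sigma_x^2$ in (52), the conversion factor reduces to $\gamma(n) \approx 1 - N/K$; for settings used later in the experiments, e.g., $K = 10$ and $N = 4$, this gives $\gamma \approx 0.6 \in (0, 1]$, so the stability condition is comfortably fulfilled.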

4. Simulation Results

The performance of the data-reuse regularized RLS algorithms presented in the previous section is analyzed in the following, based on experiments performed in the framework of echo cancellation [5]. This type of application represents a very challenging system identification scenario, for several main reasons. First, the input signal coming from the far-end (e.g., speech/audio or different types of noise) is usually nonstationary and also highly correlated. Second, the length of the system to be identified (i.e., the echo path) can be on the order of hundreds of coefficients. Third, the acoustic sensor (i.e., the microphone) that captures the echo signal also captures the background noise, together with the near-end voice and/or other external signals.
The previous factors raise significant challenges for the adaptive filtering algorithm, especially in terms of its convergence rate, tracking ability, accuracy of the estimate, and robustness to external (noisy) conditions. These represent the main performance criteria for assessing the overall behavior of the algorithms. As mentioned in Section 1, several robust RLS-based algorithms can be found in the literature. However, most of them are difficult to control in practice, since they usually require a priori information related to the operating environment and/or need the tuning of some additional parameters. For these reasons, the VR-RLS algorithm from [13] is considered a practical benchmark. The motivation behind this choice is twofold. First, it outperforms other robust RLS-based versions; second, it is considered a "practical" algorithm, in the sense that its parameters can be estimated/tuned in a straightforward manner. Clearly, the data-reuse technique presented in Section 3 can be applied to other robust RLS algorithms, but their previously mentioned limitations (related to the evaluation of the regularization term) still remain. For $N = 1$, the DR-VR-RLS version is equivalent to the VR-RLS algorithm from [13]. In terms of the computational cost, most of the robust/regularized RLS-based algorithms are comparable, since the main computational amount is related to the relations that define the RLS part of the algorithm, while the evaluation of the regularization parameter contributes only a few operations, as outlined in Table 3 in Section 3. Here, the proposed data-reuse regularized RLS algorithms are gradually discussed and compared in order to outline their main performance features and the practical aspects related to the regularization terms.
In the following, the experimental conditions are presented in Section 4.1. Then, in Section 4.2, Section 4.3, Section 4.4 and Section 4.5, the data-reuse regularized algorithms from Section 3 are gradually introduced and analyzed. Finally, a brief discussion is provided in Section 4.6.

4.1. Simulation Setup

The experiments involve two types of input signals, $x(n)$, considering both stationary and nonstationary sequences. First, an autoregressive (AR) process is involved in simulations, which is obtained by filtering a white Gaussian noise through the transfer function $1 / \left( 1 - 0.9 z^{-1} \right)$. This first-order AR process, referred to as AR(1), represents a stationary but highly correlated signal, due to the pole (of the transfer function) close to unity. Second, a female voice is used as input, which is a more challenging signal due to its nonstationarity. The sampling frequency is 8 kHz.
The system to be identified is chosen according to the ITU-T G.168 Recommendation [31]. It is based on the fourth cluster of coefficients ($\mathbf{b}_4$), which has the length $L = 128$. The impulse response of the echo path is obtained as $\mathbf{h}(n) = \mathbf{b}_4 + \mathbf{u}(n)$, where $\mathbf{u}(n)$ is a white Gaussian noise with the variance $\sigma_u^2$. This parameter is mainly set to $\sigma_u^2 = 10^{-4} \left\| \mathbf{b}_4 \right\|^2$, but it changes to $\sigma_u^2 = 10^{-2} \left\| \mathbf{b}_4 \right\|^2$ in several experiments in which an echo path change scenario is simulated at some point.
The output of the echo path, i.e., the echo signal,  y ( n ) , is corrupted by a white Gaussian noise,  v ( n ) , such that  SNR = 20  dB in most of the scenarios. Nevertheless, other perturbations are also considered in several experiments, like lower SNRs, different types of noise, and double-talk periods. These particular scenarios will be detailed in relation to each specific experiment from the following subsections. A set of input/output signal waveforms and the system impulse response used in simulations are depicted in Figure 2. This plot shows the far-end speech (i.e., the input signal), the system impulse response (i.e., the echo path), and the resulting echo (i.e., the output signal).
All the RLS-based algorithms use the forgetting factor $\lambda$ in their cost functions. This positive subunitary parameter is usually associated with the filter length, i.e., the longer the filter, the larger the value of $\lambda$ should be, as indicated in (42). In other words, the value of $\lambda$ can be controlled in terms of the value of K, so that increasing this parameter results in increasing the forgetting factor. Also, as we can notice in (42), a larger value of L involves a larger value of $\lambda$, i.e., closer to one. The specific selection of $\lambda$ (or K) will be indicated in each of the following experiments. The positive constant that appears in the denominator of several relations in order to prevent a division by zero (see Table 2) is set to $\epsilon = 10^{-5}$. As a performance measure used to assess the behavior of all the analyzed algorithms, the normalized misalignment (in dB) is defined as
$\mathrm{NM} \left[ \mathbf{h}(n), \hat{\mathbf{h}}(n) \right] = 20 \log_{10} \frac{\left\| \mathbf{h}(n) - \hat{\mathbf{h}}(n) \right\|}{\left\| \mathbf{h}(n) \right\|}$.   (53)
This represents one of the most popular indicators for evaluating system identification scenarios, since it focuses on the "difference" (computed based on the Euclidean norm) between the true impulse response and its estimate. A lower value of $\mathrm{NM} \left[ \mathbf{h}(n), \hat{\mathbf{h}}(n) \right]$ indicates a more accurate solution. Also, a steeper misalignment curve is associated with a faster convergence rate and/or a better tracking capability.
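For completeness, the misalignment measure (53) reduces to a one-line helper (our own illustration):

import numpy as np

def normalized_misalignment_db(h, h_hat):
    """Normalized misalignment (53), in dB."""
    return 20.0 * np.log10(np.linalg.norm(h - h_hat) / np.linalg.norm(h))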

4.2. The DR-CR-RLS Algorithm

In the first set of experiments, we assess the performance of the DR-CR-RLS algorithm, which represents the conventional benchmark that uses a constant value for its regularization parameter, as indicated at the beginning of Table 2. The input signal is an AR(1) process and $\mathrm{SNR} = 20$ dB. The constant regularization term is set to $\delta = 20 \sigma_x^2$, which represents a general rule of thumb, as outlined in [12] and also explained later at the end of Section 4.3.
Under these circumstances, in Figure 3, an echo-path-change scenario is simulated after 2 s. The results from Figure 3 illustrate the influence of the data-reuse parameter (N) on the overall performance of the DR-CR-RLS algorithm for a fixed forgetting factor, which is set to $\lambda = 1 - 1/(20L)$. Also, the estimated echo path (before the change), as compared to its true impulse response, is shown in Figure 4 for two values of N. It can be noticed from Figure 3 that a larger value of N leads to a faster tracking reaction when the echo path changes. On the other hand, this gain is paid in terms of accuracy, which is indicated by a higher misalignment level. The accuracy issue is also visible in Figure 4, especially for the smaller coefficients from the tail of the echo path, but also for other peaks of the impulse response.
This behavior resembles the influence of the forgetting factor on the performance of RLS-type algorithms. In order to support this aspect, the previous experiment is repeated in Figure 5, but using different values of the forgetting factor (by varying the value of K) and setting  N = 1 , which is equivalent to the conventional regularized RLS algorithm without data-reuse. As we can notice from Figure 5, a larger value of  λ  (or K) leads to a better accuracy of the estimate (i.e., lower misalignment), but pays in terms of the tracking behavior.
This performance compromise can be better addressed by using a large value of the forgetting factor, while also increasing the value of N. This approach is supported in Figure 6, where the echo path change is introduced after one second. It can be noticed that the DR-CR-RLS algorithm using $\lambda = 1 - 1/(200L)$ and $N = 8$ achieves a better compromise between the main performance criteria, as compared to the conventional regularized RLS algorithm (i.e., the DR-CR-RLS with $N = 1$) using different forgetting factors. This flexibility of the data-reuse approach allows for a better control of the overall performance of the algorithm.
At this point, we should outline that the well-known affine projection algorithm (APA) [32] can be interpreted as a data-reuse LMS algorithm, since it acts based on an “optimal” reuse of data, through its projection order [24]. The update of APA is defined by the relation
$\hat{\mathbf{h}}(n) = \hat{\mathbf{h}}(n-1) + \mu \mathbf{X}(n) \left[ \delta \mathbf{I}_M + \mathbf{X}^T(n) \mathbf{X}(n) \right]^{-1} \mathbf{e}(n)$,   (54)
where $\mu$ is the so-called step-size parameter (with $0 < \mu \leq 1$);
$\mathbf{X}(n) = \left[ \mathbf{x}(n) \;\; \mathbf{x}(n-1) \;\; \cdots \;\; \mathbf{x}(n-M+1) \right]$   (55)
is the input signal matrix (of size $L \times M$), with M representing the projection order; $\delta > 0$ is a regularization constant (that prevents a "bad" matrix inversion), with $\mathbf{I}_M$ denoting the identity matrix of size $M \times M$; and
$\mathbf{e}(n) = \mathbf{d}(n) - \mathbf{X}^T(n) \hat{\mathbf{h}}(n-1)$   (56)
is the error signal vector, with
$\mathbf{d}(n) = \left[ d(n) \;\; d(n-1) \;\; \cdots \;\; d(n-M+1) \right]^T$   (57)
grouping the last M samples of the reference signal. The APA has a moderate computational complexity, which is proportional to $\mathcal{O}(ML)$. Note that the matrix inversion from (54) can be performed using different efficient techniques [33]. This algorithm also offers reliable convergence features, especially for correlated input signals, thus outperforming the LMS algorithm.
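A minimal sketch of one APA iteration, following (54)–(57), is given below; as before, the function interface is an illustrative convention, and the direct solve stands in for the efficient inversion techniques of [33].

import numpy as np

def apa_update(h_prev, X, d_vec, mu, delta):
    """One APA iteration, per (54); X is the L x M matrix (55), d_vec holds (57)."""
    e_vec = d_vec - X.T @ h_prev                                    # (56)
    M = X.shape[1]
    gain = X @ np.linalg.solve(delta * np.eye(M) + X.T @ X, e_vec)
    return h_prev + mu * gain                                       # (54)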
In order to assess its performance, as compared to the data-reuse regularized RLS-type algorithms, the experiment from Figure 6 is repeated in Figure 7, but using the APA instead of the DR-CR-RLS algorithm. Two values of the projection order are used, i.e., $M = 1$ and 8, together with two step-size parameters, $\mu = 1$ and $0.1$. The APA using $M = 1$ is equivalent to the well-known normalized LMS (NLMS) algorithm. Setting the step-size parameter to $\mu = 1$ leads to the fastest convergence mode [3]. Similar to the combination of N and $\lambda$ for the DR-CR-RLS algorithm (see Figure 6), a performance balance can be achieved in the case of the APA by tuning the parameters M and $\mu$, respectively. As we can notice in Figure 7, a higher value of M improves the convergence rate and tracking, but sacrifices the accuracy of the estimate, which is indicated by a higher misalignment level. On the other hand, a lower step-size parameter improves the accuracy, but pays in terms of the convergence features. Using a larger value of M together with a lower value of $\mu$ can lead to a better compromise between the performance criteria. Nevertheless, by comparing the results from Figure 6 and Figure 7, we can notice that the DR-CR-RLS algorithm outperforms the APA, achieving a faster initial convergence and a better tracking reaction, while also reaching a lower misalignment (i.e., better accuracy).

4.3. The DR-OR-RLS Algorithm

The DR-CR-RLS algorithm achieves a fairly reliable performance in good SNR conditions, as supported by the experiments provided in the previous subsection. However, in noisy conditions with low SNRs, the importance of using an appropriate regularization parameter becomes more significant. This aspect was also indicated in [18], where the regularization parameter of the data-reuse RLS algorithm was connected to the SNR. The resulting algorithm is referred to as DR-OR-RLS in Table 2 and its regularization parameter is denoted by $\delta_o$, considering that the value of the SNR is available. The experiments from this subsection outline the importance of selecting a proper (constant) regularization parameter, as compared to an arbitrary one (based on a rule of thumb). The input signal remains the AR(1) process used in Section 4.2 and the forgetting factor is $\lambda = 1 - 1/(10L)$ for both the DR-CR-RLS and DR-OR-RLS algorithms, while the regularization constant of the DR-CR-RLS algorithm is the same as in the previous set of experiments (i.e., $\delta = 20 \sigma_x^2$).
First, in Figure 8, the SNR is set to 20 dB, so that the influence of the background noise is minor. As we can see, regardless of the value of the data-reuse parameter N, the behavior of the DR-OR-RLS algorithm is similar to that of its conventional DR-CR-RLS counterpart. In other words, the value of the regularization parameter does not influence the overall performance in this case (with good SNR). This aspect is also supported in Figure 9, where the estimated impulse responses are depicted (as compared to the true impulse response), for different values of N, related to the experiment from Figure 8. In all the cases, reliable estimates of $\mathbf{h}(n)$ are obtained due to the mild noisy conditions.
Nevertheless, this behavior is no longer valid in low-SNR conditions, as supported in Figure 10. Here, $\mathrm{SNR} = 0$ dB, so that we deal with a very noisy environment. In this case, it can be noticed that the DR-OR-RLS algorithm that uses $\delta_o$ (which is related to the SNR) outperforms the conventional DR-CR-RLS benchmark, which is inherently limited by the value of its regularization constant ($\delta = 20 \sigma_x^2$). The difference is visible for both values of the data-reuse parameter ($N = 2$ and 4) involved in this experiment. The DR-OR-RLS algorithm reaches lower misalignment levels as compared to the DR-CR-RLS version, which translates into a better accuracy of the estimates. The estimated impulse responses from the experiment reported in Figure 10 are provided in Figure 11. Clearly, the challenging noisy conditions influence the accuracy of the estimates. However, the DR-OR-RLS algorithm leads to better estimates as compared to the DR-CR-RLS version, as indicated in Figure 11c,d, which supports the results from Figure 10.
To support the previous discussion (and also the "rule of thumb" initially mentioned in Section 4.2), the evolution of the parameter $\delta_o / \sigma_x^2$ is depicted in Figure 12, with respect to the SNR. The normalization to $\sigma_x^2$ is used to focus only on the SNR-dependent component of the regularization parameter $\delta_o$ of the DR-OR-RLS algorithm. As we can see from this figure, the lower the SNR, the higher the regularization parameter value should be. Also, when $\mathrm{SNR} = 20$ dB, the value of $\delta_o / \sigma_x^2$ is in the vicinity of 20, which justifies the rule of thumb selection ($\delta = 20 \sigma_x^2$) from Section 4.2.

4.4. The DR-VR-RLS Algorithm

The DR-OR-RLS algorithm from [18] can be considered as a theoretical benchmark for outlining the importance of selecting a proper value of the regularization parameter in different noisy conditions. Nevertheless, the true value of the SNR is not available in practice and should be estimated. This is performed within the DR-VR-RLS algorithm from Table 2, which uses the variable (time-dependent) regularization parameter $\delta(n)$, with the SNR estimated based on (17)–(19). In the following experiments, a speech sequence is used as the input signal, which represents a challenge for the adaptive filtering algorithms, especially due to its nonstationary nature.
A first comparison between the DR-OR-RLS and DR-VR-RLS algorithms is provided in Figure 13, for different values of the data-reuse parameter N, using the forgetting factor $\lambda = 1 - 1/(20L)$. An echo path change scenario is considered after 3 s and the SNR is set to 20 dB. It can be noticed that the performances of these two algorithms are very similar in this scenario, proving that the SNR estimate used by the DR-VR-RLS algorithm is accurate. Only a slightly slower tracking reaction is noticeable as compared to the DR-OR-RLS algorithm, due to the evaluation of the estimates from (18) and (19), which are based on the exponential windowing technique using the forgetting factor $\lambda$ (thus resulting in an inherent latency). Related to this experiment, the estimated impulse responses for both algorithms (before the echo path changes) are provided in Figure 14, for two values of the data-reuse parameter, i.e., $N = 1$ and 8. As expected, using a larger value of the data-reuse parameter results in a slightly lower accuracy of the echo path estimate, as indicated in Figure 14b,d. This behavior is valid for both versions of the data-reuse regularized RLS algorithm from Figure 13.
The advantage of the practical estimation of the SNR within the DR-VR-RLS algorithm becomes more apparent in scenarios with variable background noise, as frequently happens in echo cancellation applications. Such a scenario is considered in Figure 15, where three bursts of white Gaussian noise of different durations corrupt the microphone signal (i.e., the acoustic sensor), with gradually decreasing SNRs, i.e., 10 dB, 0 dB, and −10 dB, respectively. Two values of the data-reuse parameter are used (i.e., $N = 2$ and 4), while the forgetting factor is set to $\lambda = 1 - 1/(10L)$ for both algorithms. It can be noticed from Figure 15 that the DR-VR-RLS algorithm is more robust to the SNR variations, despite the increasing amount of noise. On the other hand, the DR-OR-RLS algorithm that uses the regularization parameter $\delta_o$ evaluated based on the steady-state background noise (i.e., between the bursts), with $\mathrm{SNR} = 20$ dB, is significantly affected in this scenario.

4.5. The DR-WR-RLS Algorithm

The DR-WR-RLS algorithm uses the regularization parameter $\delta^{\prime}(n)$, which includes the NUR estimated based on (24) and (25), as also shown in Table 2. Besides the contribution of the external noise (related to its power estimate, $\sigma_v^2$), the NUR incorporates the model uncertainties, which are captured by the parameter $\sigma_w^2$. In this last subsection of simulation results, we compare the DR-WR-RLS algorithm with the previous two versions, i.e., the DR-OR-RLS and DR-VR-RLS algorithms, using a speech sequence as the input signal and different types of perturbation, in order to challenge the operating conditions.
First, the performance of the DR-WR-RLS algorithm is assessed for different values of the data-reuse parameter (N). An echo path change is introduced after 3 s and the forgetting factor is chosen as $\lambda = 1 - 1/(5L)$. The results are shown in Figure 16, where we can notice the same influence of the data-reuse parameter, i.e., faster convergence and tracking when N increases, at the cost of a higher misalignment. Thus, a similar approach can be considered for a better compromise between the performance criteria: increasing the value of the forgetting factor together with the value of N. The estimated impulse responses provided by the DR-WR-RLS algorithm (before the echo path changes) are depicted in Figure 17a,b, for two values of the data-reuse parameter, i.e., $N = 1$ and $N = 8$, respectively. Here, a slight reduction in accuracy can be noticed for the larger value of N. This is an expected behavior that supports the results from Figure 16, which show that increasing the data-reuse parameter improves the tracking reaction but increases the misalignment level.
The main performance feature of the DR-WR-RLS algorithm is related to its robustness in different challenging scenarios. For example, in Figure 18, we consider a realistic communication scenario, in which the acoustic sensor (i.e., the microphone) captures two different types of noise, for different periods of time. First, a highway noise appears between 2 and 4 s; then, an engine noise bursts between 7 and 10 s of the experiment. The background noise outside these periods remains the same, with $\mathrm{SNR} = 20$ dB. The DR-OR-RLS algorithm used for comparison employs $\delta_o$, evaluated based on the SNR of this background noise, while the estimated (variable) SNR is used within the DR-VR-RLS algorithm. For all the algorithms, the forgetting factor is set to $\lambda = 1 - 1/(10L)$ and two data-reuse parameters are used, i.e., $N = 2$ and 4. In both cases, it can be noticed that the DR-WR-RLS algorithm significantly outperforms its counterparts in terms of robustness to external perturbations. Even if the DR-VR-RLS algorithm is still better than the DR-OR-RLS version, it is also significantly affected during these challenging noisy conditions.
A similar experiment is shown in Figure 19, but considering the more challenging double-talk scenario [5]. In this case, the microphone signal captured by the acoustic sensor also contains the voice of the near-end talker, which acts like a high-level nonstationary disturbance for the adaptive filter. Two such double-talk periods are considered in this experiment, between 2 and 4 s and between 7 and 10 s, respectively, with the second one being longer and more intense in amplitude. Nevertheless, the DR-WR-RLS algorithm is still very robust in this difficult scenario, while the other two versions used for comparison are clearly disturbed and significantly biased during double-talk.
In this framework, the estimated NUR of the DR-WR-RLS algorithm is depicted in Figure 20a,b, for the experiments from Figure 18a and Figure 19a, respectively (for N = 2). In both cases, it can be noticed that this term provides a reliable indicator of the disturbance periods. Basically, the estimated NUR increases during these periods, thus reducing the update term of the DR-WR-RLS algorithm. As a result, its adaptation is slower and less affected by the disturbances, which is the desired behavior in these challenging scenarios.
The overall performance of the DR-WR-RLS algorithm relies on the estimation of the NUR, which depends on the estimated σ_v² and σ_w². Thus, a legitimate practical concern is the sensitivity of the algorithm to this estimation, especially in challenging conditions like abrupt changes in the noise or in the system dynamics. In order to assess such aspects, let us consider an “ideal” version of the algorithm, referred to as DR-WR-RLSid, which assumes that the near-end signal and the model uncertainties are available, so that a “true” NUR can be used. Under these circumstances, the experiment from Figure 21 considers an echo path change after 3 s, followed by a burst of engine noise between 7 and 10 s (like in the second part of Figure 18). The input signal is a speech signal and the background noise corresponds to SNR = 20 dB. The DR-WR-RLS algorithm is compared to its “ideal” version (DR-WR-RLSid) when using different values of the data-reuse parameter (N = 2 and 4). The forgetting factor is set to λ = 1 − 1/(5L) for both algorithms (with L = 128). As expected, there is a slight delay in the tracking reaction of the DR-WR-RLS algorithm as compared to its “ideal” version, which becomes less apparent when the value of N increases. Also, in the “ideal” case, there is a minor improvement in terms of robustness during the noise burst, as outlined in the zoomed portion shown on the top-right of Figure 21. Nevertheless, the DR-WR-RLS algorithm is fairly robust to the NUR estimation, so its sensitivity is quite minor. This represents an important practical aspect related to the identification of real-world time-varying systems, especially when operating in challenging conditions and environments.
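To make this estimation step concrete, a minimal Python/NumPy sketch of the recursive NUR computation is given below. It follows the DR-WR-RLS rules summarized in Table 2; the function name, the small constant eps, and the update ordering (the NUR uses the uncertainty estimate from the previous iteration) reflect our reading of the paper, not the authors' reference code.

import numpy as np

def nur_regularization(e_n, h_new, h_old, sigma_v2, sigma_w2, lam, L, eps=1e-8):
    # Recursive power estimate of the near-end disturbance (via the error signal).
    sigma_v2 = lam * sigma_v2 + (1.0 - lam) * e_n**2
    # The NUR uses the uncertainty estimate from the previous iteration (Table 2).
    nur = sigma_v2 / (eps + sigma_w2)
    # Recursive estimate of the model uncertainty from the coefficient changes.
    sigma_w2 = lam * sigma_w2 + (1.0 - lam) * np.sum((h_new - h_old) ** 2) / L
    # Time-dependent regularization parameter (form reconstructed from Table 2).
    delta_n = nur / (L * (1.0 - lam))
    return delta_n, sigma_v2, sigma_w2, nur

During a disturbance burst, e(n) (and hence σ_v²) grows while the coefficient changes stay small, so the NUR and δ(n) increase and the update is attenuated, which matches the behavior observed in Figure 20.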

4.6. Discussion

The results presented in Section 4.2, Section 4.3, Section 4.4 and Section 4.5 indicate several important performance features of the data-reuse regularized RLS algorithms, especially in terms of their robustness in noisy environments. While the forgetting factor is recognized as the main convergence parameter of conventional RLS-type algorithms [1,2,3], tuning its value alone cannot always provide a proper balance between the main performance criteria, i.e., convergence/tracking, accuracy, and robustness. Using a value of the forgetting factor close to its maximum bound results in a fast initial convergence rate and a good accuracy of the estimate, but with a slow tracking reaction when the impulse response of the system changes. Also, the robustness features of the algorithm are inherently limited when only the forgetting factor is used as the control mechanism. In this context, a proper regularization term improves the robustness, while the data-reuse mechanism enhances the tracking capabilities even when using a large value of the forgetting factor. These are the main assets of the proposed data-reuse regularized RLS algorithms.
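As a quick sanity check on this trade-off (a standard property of the exponentially weighted least-squares window, not a result specific to these algorithms), a forgetting factor of the form λ = 1 − 1/(KL) corresponds to an effective data window of roughly 1/(1 − λ) = KL samples. Increasing K therefore lengthens the memory of the algorithm, which improves the accuracy of the estimate but slows down its tracking reaction, exactly the behavior the data-reuse parameter N is meant to counterbalance.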
As shown in the simulations, these algorithms can reliably operate in noisy conditions with low SNRs. While the background noise inherently biases the accuracy of the estimate [5], the data-reuse regularized RLS algorithms can operate in noisy conditions, e.g., with SNR = 0 dB, and still reach a reasonable accuracy, with a misalignment level below −10 dB (as shown in Figure 10). This performance feature can be further improved by using a larger value of the forgetting factor. Moreover, with a proper (and practical) estimation of the regularization parameter, the data-reuse regularized RLS algorithms can fairly cope with challenging noise bursts, e.g., when SNR = −10 dB (like in Figure 15). In addition, the DR-WR-RLS version outperforms its counterparts, being able to operate in adverse scenarios with nonstationary high-level background noises (as supported by Figure 18). It is also suitable for double-talk scenarios, which are critical in echo cancellation applications (see Figure 19). All these characteristics support the practical applicability of the data-reuse regularized RLS algorithms, with appealing performance for real-world system identification scenarios.
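For reference, the normalized misalignment reported throughout these figures follows the standard definition used in echo cancellation studies (we assume the usual Euclidean-norm form here): 20 log₁₀(‖h − ĥ(n)‖/‖h‖) dB, where h denotes the true impulse response of the echo path and ĥ(n) its estimate. A value of −10 dB thus corresponds to a coefficient error energy ten times smaller than the energy of the true impulse response.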
The echo path models involved in this work originate from the ITU-T G.168 Recommendation [31], for the sake of reproducibility of the results. Some of the noise sequences used in the simulations were recorded in real environments, like the highway noise and the engine noise (related to the results from Figure 18 and Figure 21). Also, several simulations were performed using a recorded female voice as the far-end (input) signal, as in Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18, Figure 19, Figure 20 and Figure 21. In addition, the near-end signal used in the double-talk scenarios from Figure 19 is also a recorded (male) voice. On the other hand, in most of the experiments, we used synthetic background noise with different SNRs. In this context, it would be highly useful to test the algorithms with real-world acoustic data, for a more realistic experimental framework. This task represents a mandatory subject for future work, since the current paper mainly outlines the performance features of the data-reuse regularized RLS algorithms. The results have indicated improved robustness in different challenging scenarios (e.g., nonstationary noise and double-talk); however, inherent limitations could arise in practice, especially related to the implementation of these algorithms on DSP/FPGA platforms and the associated numerical precision effects. Nevertheless, we aim to implement the challenging operations (like the matrix inversion) in a numerically robust manner, using computationally efficient iterative techniques, like the DCD method [28].

5. Conclusions and Perspectives

This paper has presented several regularized RLS algorithms that rely on the data-reuse method and offer improved robustness features in the framework of system identification. In this context, two regularization techniques have been involved, as presented in Section 2. The first one has led to a regularization parameter that depends on the SNR. The second one has also included the model uncertainties in the cost function, thus leading to a regularization parameter that includes the NUR. In addition, the data-reuse method applied to the regularized RLS-type algorithm has been formulated in a computationally efficient manner, using a single (equivalent) step for the entire data-reuse process, as developed in Section 3 (and summarized in Table 1). The resulting algorithms have been referred to as DR-CR-RLS, DR-OR-RLS, DR-VR-RLS, and DR-WR-RLS, as summarized in Table 2. The first two versions (i.e., DR-CR-RLS and DR-OR-RLS) represent theoretical benchmarks, using constant regularization parameters. The other two algorithms (i.e., DR-VR-RLS and DR-WR-RLS) are variable-regularized versions, since they use time-dependent regularization parameters and involve practical estimations of the SNR and NUR, respectively. As a result, they inherit the advantages of both the regularization-based approach and the data-reuse technique, leading to improved robustness and fast convergence/tracking, respectively. Simulation results obtained in the framework of echo cancellation (presented in Section 4) support these performance features. In this context, the algorithms have been tested in different challenging conditions, including variations of the SNR, different types of noise acting as external perturbations, and double-talk scenarios. Among the analyzed versions, the DR-WR-RLS algorithm stands out as the best-performing one, especially in terms of its robustness against various adverse conditions.
In the future, we aim to further improve the DR-WR-RLS algorithm in several respects. First, the development of low-complexity versions of this algorithm is considered, based on iterative techniques for solving systems of equations, like the CG and CD methods [28,29,30]. Second, the design of a multichannel version of the DR-WR-RLS algorithm represents a subject of interest, since different adaptive filtering applications involve multiple acoustic sensors (i.e., microphones) for an enhanced listening experience. Third, tensor-based signal processing techniques could be used for the decomposition of the impulse response of the DR-WR-RLS adaptive filter, which would lead to improved overall performance due to a combination of filters with shorter lengths.

Author Contributions

Conceptualization, R.-A.O.; methodology, C.P.; validation, J.B.; investigation, L.-M.D.; software, C.-L.S.; formal analysis, S.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sayed, A.H. Adaptive Filters; Wiley: New York, NY, USA, 2008. [Google Scholar]
  2. Diniz, P.S.R. Adaptive Filtering: Algorithms and Practical Implementation, 4th ed.; Springer: New York, NY, USA, 2013. [Google Scholar]
  3. Haykin, S. Adaptive Filter Theory, 5th ed.; Pearson: Upper Saddle River, NJ, USA, 2014. [Google Scholar]
  4. Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 1999. [Google Scholar]
  5. Hänsler, E.; Schmidt, G. Acoustic Echo and Noise Control—A Practical Approach; Wiley: Hoboken, NJ, USA, 2004. [Google Scholar]
6. Bottomley, G.E. Channel Equalization for Wireless Communications: From Concepts to Detailed Mathematics; Wiley-IEEE Press: Piscataway, NJ, USA, 2011. [Google Scholar]
  7. Arenas-Garcia, J.; Azpicueta-Ruiz, L.A.; Silva, M.T.M.; Nascimento, V.H.; Sayed, A.H. Combinations of adaptive filters: Performance and convergence properties. IEEE Signal Process. Mag. 2016, 33, 120–140. [Google Scholar] [CrossRef]
  8. Bäckström, T. (Ed.) Speech Coding: With Code-Excited Linear Prediction; Springer International Publishing: Cham, Switzerland, 2017. [Google Scholar]
  9. Zhou, N.; Trudnowski, D.J.; Pierre, J.W.; Mittelstadt, W.A. Electromechanical mode online estimation using regularized robust RLS methods. IEEE Trans. Power Syst. 2008, 23, 1670–1680. [Google Scholar] [CrossRef]
  10. Iqbal, N.; Zerguine, A. AFD-DFE using constraint-based RLS and phase noise compensation for uplink SC-FDMA. IEEE Trans. Veh. Technol. 2017, 66, 4435–4443. [Google Scholar] [CrossRef]
  11. Zhong, Y.; Yu, C.; Xiang, X.; Lian, L. Proximal policy-optimized regularized least squares algorithm for noise-resilient motion prediction of UMVs. IEEE J. Ocean. Eng. 2024, 49, 1397–1410. [Google Scholar] [CrossRef]
  12. Benesty, J.; Paleologu, C.; Ciochină, S. Regularization of the RLS algorithm. IEICE Trans. Fundam. 2011, E94-A, 1628–1629. [Google Scholar] [CrossRef]
  13. Elisei-Iliescu, C.; Stanciu, C.; Paleologu, C.; Benesty, J.; Anghel, C.; Ciochină, S. Robust variable regularized RLS algorithms. In Proceedings of the 2017 Hands-free Speech Communications and Microphone Arrays (HSCMA), San Francisco, CA, USA, 1–3 March 2017; pp. 171–175. [Google Scholar]
  14. Yang, F.; Yang, J.; Albu, F. An alternative solution to the dynamically regularized RLS algorithm. In Proceedings of the 2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Lanzhou, China, 18–21 November 2019; pp. 1072–1075. [Google Scholar]
  15. Li, B.; Wu, S.; Tripp, E.E.; Pezeshki, A.; Tarokh, V. Recursive least squares with minimax concave penalty regularization for adaptive system identification. IEEE Access 2024, 12, 66993–67004. [Google Scholar] [CrossRef]
  16. Paleologu, C.; Benesty, J.; Ciochină, S. Data-reuse recursive least-squares algorithms. IEEE Signal Process. Lett. 2022, 29, 752–756. [Google Scholar] [CrossRef]
17. Gao, W.; Chen, J.; Richard, C. Theoretical analysis of the performance of the data-reuse RLS algorithm. IEEE Trans. Circuits Syst. II Express Briefs 2024, 71, 490–494. [Google Scholar] [CrossRef]
18. Otopeleanu, R.; Dogariu, L.M.; Stanciu, C.L.; Paleologu, C.; Benesty, J.; Ciochină, S. A data-reuse regularized recursive least-squares adaptive filtering algorithm. In Proceedings of the 2024 International Symposium on Electronics and Telecommunications (ISETC), Timişoara, Romania, 7–8 November 2024. [Google Scholar]
  19. Shaffer, S.; Williams, C.S. Comparison of LMS, alpha LMS, and data reusing LMS algorithms. In Proceedings of the Conference Record of the Seventeenth Asilomar Conference on Circuits, Systems and Computers, Santa Clara, CA, USA, 31 October–2 November 1983; pp. 260–264. [Google Scholar]
  20. Roy, S.; Shynk, J.J. Analysis of the data-reusing LMS algorithm. In Proceedings of the 32nd Midwest Symposium on Circuits and Systems, Champaign, IL, USA, 14–16 August 1989; pp. 1127–1130. [Google Scholar]
  21. Schnaufer, B.A.; Jenkins, W.K. New data-reusing LMS algorithms for improved convergence. In Proceedings of the Conference Record of the Twenty-Seventh Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–3 November 1993; pp. 1584–1588. [Google Scholar]
  22. Apolinário, J.A., Jr.; de Campos, M.L.R.; Diniz, P.S.R. Convergence analysis of the binormalized data-reusing LMS algorithm. IEEE Trans. Signal Process. 2000, 48, 3235–3242. [Google Scholar] [CrossRef]
  23. Diniz, P.S.R.; Werner, S. Set-membership binormalized data-reusing LMS algorithms. IEEE Trans. Signal Process. 2003, 51, 124–134. [Google Scholar] [CrossRef]
  24. Soni, R.A.; Gallivan, K.A.; Jenkins, W.K. Low-complexity data reusing methods in adaptive filtering. IEEE Trans. Signal Process. 2004, 52, 394–405. [Google Scholar] [CrossRef]
  25. Vinhoza, T.T.V.; de Lamare, R.C.; Sampaio-Neto, R. Low complexity blind constrained data-reusing algorithms based on minimum variance and constant modulus criteria. In Proceedings of the 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, Toulouse, France, 14–19 May 2006; pp. III-776–III-779. [Google Scholar]
  26. Kwon, J.-C.; Choi, Y.-S.; Song, W.-J. Equation-error adaptive IIR filtering based on data reuse. IEEE Trans. Circuits Syst. II Express Briefs 2007, 54, 695–699. [Google Scholar] [CrossRef]
  27. Otopeleanu, R.A.; Benesty, J.; Paleologu, C.; Stanciu, C.L.; Dogariu, L.M.; Ciochină, S. A practical regularized recursive least-squares algorithm for robust system identification. In Proceedings of the European Signal Processing Conference (EUSIPCO), Palermo, Italy, 8–12 September 2025. [Google Scholar]
  28. Zakharov, Y.V.; White, G.P.; Liu, J. Low-complexity RLS algorithms using dichotomous coordinate descent iterations. IEEE Trans. Signal Process. 2008, 56, 3150–3161. [Google Scholar] [CrossRef]
  29. Zakharov, Y.V.; Nascimento, V.H. DCD-RLS adaptive filters with penalties for sparse identification. IEEE Trans. Signal Process. 2013, 61, 3198–3213. [Google Scholar] [CrossRef]
  30. Yu, Y.; Lu, L.; Zheng, Z.; Wang, W.; Zakharov, Y.; de Lamare, R.C. DCD-based recursive adaptive algorithms robust against impulsive noise. IEEE Trans. Circuits Syst. II Express Briefs 2020, 67, 1359–1363. [Google Scholar] [CrossRef]
31. Digital Network Echo Cancellers, ITU-T Recommendation G.168. 2015. Available online: https://www.itu.int/rec/T-REC-G.168 (accessed on 5 June 2025).
  32. Ozeki, K.; Umeda, T. An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electron. Commun. Jpn. 1984, 67-A, 19–27. [Google Scholar] [CrossRef]
  33. Zakharov, Y.V.; Albu, F. Coordinate descent iterations in fast affine projection algorithm. IEEE Signal Process. Lett. 2005, 12, 353–356. [Google Scholar] [CrossRef]
Figure 1. Block diagram of the data-reuse regularized RLS algorithm from Table 1.
Figure 2. A set of input/output signal waveforms and the system impulse response used in simulations: (a) the input signal (the far-end speech), (b) the impulse response of the echo path, and (c) the output signal (the echo).
Figure 3. Normalized misalignment of the DR-CR-RLS algorithm using different values of the data-reuse parameter N. The other settings are λ = 1 − 1/(20L) and δ = 20σ_x². The input signal is an AR(1) process, SNR = 20 dB, L = 128, and the echo path changes after 2 s.
Figure 4. Estimated impulse response of the echo path (before the change), as compared to the true impulse response, for different values of the data-reuse parameter N, in relation to the experiment from Figure 3: (a) N = 1 and (b) N = 8.
Figure 5. Normalized misalignment of the DR-CR-RLS algorithm using different values of the forgetting factor λ = 1 − 1/(KL) and N = 1 (equivalent to the conventional algorithm without data-reuse). The other conditions are the same as in Figure 3.
Figure 6. Normalized misalignment of the DR-CR-RLS algorithm using different values of the data-reuse parameter N and different values of the forgetting factor λ = 1 − 1/(KL). The input signal is an AR(1) process, SNR = 20 dB, L = 128, δ = 20σ_x², and the echo path changes after 1 s.
Figure 7. Normalized misalignment of the APA using different values of the projection order N and different values of the step-size μ. The other conditions are the same as in Figure 6.
Figure 8. Normalized misalignment of the DR-CR-RLS and DR-OR-RLS algorithms using different values of the data-reuse parameter N. The other settings are λ = 1 − 1/(10L), δ = 20σ_x² for DR-CR-RLS, and δ_o for DR-OR-RLS. The input signal is an AR(1) process, SNR = 20 dB, and L = 128.
Figure 9. Estimated impulse response of the echo path, as compared to the true impulse response, for different values of the data-reuse parameter N, related to the experiment from Figure 8: (a) DR-CR-RLS with N = 4, (b) DR-CR-RLS with N = 2, (c) DR-OR-RLS with N = 4, and (d) DR-OR-RLS with N = 2.
Figure 10. Normalized misalignment of the DR-CR-RLS and DR-OR-RLS algorithms using different values of the data-reuse parameter N, in a noisy environment, with SNR = 0 dB. The other conditions are the same as in Figure 8.
Figure 11. Estimated impulse response of the echo path, as compared to the true impulse response, for different values of the data-reuse parameter N, related to the experiment from Figure 10: (a) DR-CR-RLS with N = 4, (b) DR-CR-RLS with N = 2, (c) DR-OR-RLS with N = 4, and (d) DR-OR-RLS with N = 2.
Figure 12. Evolution of the parameter δ_o/σ_x² (related to the DR-OR-RLS algorithm) as a function of the SNR.
Figure 13. Normalized misalignment of (a) the DR-OR-RLS algorithm and (b) the DR-VR-RLS algorithm, using different values of the data-reuse parameter N and λ = 1 − 1/(20L). The input signal is a speech sequence, SNR = 20 dB, L = 128, and the echo path changes after 3 s.
Figure 14. Estimated impulse response of the echo path (before the change), as compared to the true impulse response, for different values of the data-reuse parameter N, related to the experiment from Figure 13: (a) DR-OR-RLS with N = 1, (b) DR-OR-RLS with N = 8, (c) DR-VR-RLS with N = 1, and (d) DR-VR-RLS with N = 8.
Figure 15. Normalized misalignment of the DR-OR-RLS and DR-VR-RLS algorithms using (a) N = 2 and (b) N = 4, in noisy conditions, considering three noise bursts with SNR = 10 dB (between 1 and 2 s), SNR = 0 dB (between 4 and 6 s), and SNR = −10 dB (between 8 and 10 s). The input signal is a speech sequence, SNR = 20 dB (between the noise bursts), L = 128, and λ = 1 − 1/(10L).
Figure 16. Normalized misalignment of the DR-WR-RLS algorithm using different values of the data-reuse parameter N and λ = 1 − 1/(5L). The input signal is a speech sequence, SNR = 20 dB, L = 128, and the echo path changes after 3 s.
Figure 17. Estimated impulse response of the echo path (before the change), as compared to the true impulse response, for different values of the data-reuse parameter N, related to the experiment from Figure 16: (a) N = 1 and (b) N = 8.
Figure 18. Normalized misalignment of the DR-OR-RLS, DR-VR-RLS, and DR-WR-RLS algorithms using (a) N = 2 and (b) N = 4, in noisy conditions, considering two bursts of highway noise (between 2 and 4 s) and engine noise (between 7 and 10 s). The input signal is a speech sequence, SNR = 20 dB (between the noise bursts), L = 128, and λ = 1 − 1/(10L).
Figure 19. Normalized misalignment of the DR-OR-RLS, DR-VR-RLS, and DR-WR-RLS algorithms using (a) N = 2 and (b) N = 4, in double-talk conditions, considering two double-talk periods with different intensities, between 2 and 4 s and between 7 and 10 s, respectively. The input signal is a speech sequence, SNR = 20 dB (for the background noise), L = 128, and λ = 1 − 1/(10L).
Figure 20. Time evolution of the NUR (related to the DR-WR-RLS algorithm) for the experiments reported in (a) Figure 18a and (b) Figure 19a.
Figure 21. Normalized misalignment of the DR-WR-RLS algorithm and its “ideal” version (DR-WR-RLSid) using different values of the data-reuse parameter N and λ = 1 − 1/(5L). The input signal is a speech sequence, SNR = 20 dB (for the background noise), L = 128, the echo path changes after 3 s, and a burst of engine noise appears between 7 and 10 s.
Table 1. Data-reuse regularized RLS algorithm.

Parameters: 0 < λ ≤ 1 (forgetting factor); N ≥ 1 (number of data-reuse steps).
Initialization: ĥ(0) = 0_L; R(0) = 0_{L×L}.
For time index n = 1, 2, …:
  x(n) = [x(n) x(n − 1) … x(n − L + 1)]^T
  ŷ(n) = x^T(n)ĥ(n − 1)
  e(n) = d(n) − ŷ(n)
  R(n) = λR(n − 1) + x(n)x^T(n)
  Evaluation of δ, depending on the algorithm (see Table 2)
  P(n) = [R(n) + δI_L]^{−1}
  p(n) = P(n)x(n)
  q(n) = x^T(n)p(n)
  r(n) = 1 − q(n)
  s(n) = [1 − r^N(n)]/q(n)
  ĥ(n) = ĥ(n − 1) + s(n)p(n)e(n)
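To illustrate how the equivalent single-step data-reuse update works in practice, the following is a minimal Python/NumPy sketch of one iteration of the algorithm from Table 1. It is an illustrative transcription under stated assumptions, not the authors' reference implementation: it uses a direct matrix inverse for P(n), whereas a practical implementation would replace this step with an iterative solver such as the DCD method [28]; the function and variable names are our own.

import numpy as np

def dr_reg_rls_step(x_n, d_n, h_hat, R, lam, N, delta):
    """One iteration of the data-reuse regularized RLS algorithm (Table 1).
    x_n: current input vector of length L; d_n: desired (microphone) sample."""
    L = h_hat.shape[0]
    y_hat = x_n @ h_hat                        # a priori filter output
    e_n = d_n - y_hat                          # a priori error
    R = lam * R + np.outer(x_n, x_n)           # correlation matrix update
    P = np.linalg.inv(R + delta * np.eye(L))   # regularized inverse
    p = P @ x_n
    q = x_n @ p
    r = 1.0 - q
    # Equivalent single step for N data-reuse iterations;
    # as q -> 0, [1 - (1 - q)^N]/q tends to N, hence the guard.
    s = (1.0 - r**N) / q if q > 0.0 else float(N)
    h_hat = h_hat + s * e_n * p                # coefficient update
    return h_hat, R, e_n

Note that for N = 1 the scaling factor reduces to s(n) = 1, so the update coincides with the conventional regularized RLS step, while larger N strengthens the update and speeds up convergence/tracking, consistent with the simulation results.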
Table 2. Regularization parameters of the data-reuse regularized RLS algorithms.

DR-CR-RLS algorithm:
  δ = positive constant
DR-OR-RLS algorithm (SNR assumed to be available):
  δ_o = L[1 + √(1 + SNR)]/SNR × σ_x²
DR-VR-RLS algorithm:
  σ_d²(n) = λσ_d²(n − 1) + (1 − λ)d²(n)
  σ_ŷ²(n) = λσ_ŷ²(n − 1) + (1 − λ)ŷ²(n)
  SNR(n) = σ_ŷ²(n)/[ϵ + |σ_d²(n) − σ_ŷ²(n)|]
  δ(n) = L[1 + √(1 + SNR(n))]/SNR(n) × σ_x²
DR-WR-RLS algorithm:
  σ_v²(n) = λσ_v²(n − 1) + (1 − λ)e²(n)
  σ_w²(n) = λσ_w²(n − 1) + (1 − λ)‖ĥ(n) − ĥ(n − 1)‖²/L
  NUR(n) = σ_v²(n)/[ϵ + σ_w²(n − 1)]
  δ(n) = NUR(n)/[L(1 − λ)]
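The two SNR-based rules above can likewise be written compactly. The following is a minimal Python/NumPy sketch of the constant DR-OR-RLS parameter and the recursive DR-VR-RLS estimate, with illustrative names and an assumed eps guard against division by zero; the formulas follow our reconstruction of Table 2.

import numpy as np

def delta_or(snr, sigma_x2, L):
    # Constant regularization for a known SNR (DR-OR-RLS rule).
    return L * (1.0 + np.sqrt(1.0 + snr)) / snr * sigma_x2

def delta_vr(d_n, y_hat_n, sigma_d2, sigma_y2, lam, sigma_x2, L, eps=1e-8):
    # Recursive power estimates of the desired and filter output signals.
    sigma_d2 = lam * sigma_d2 + (1.0 - lam) * d_n**2
    sigma_y2 = lam * sigma_y2 + (1.0 - lam) * y_hat_n**2
    # Estimated (time-dependent) SNR; eps avoids division by zero.
    snr_hat = sigma_y2 / (eps + abs(sigma_d2 - sigma_y2))
    # Variable regularization parameter (DR-VR-RLS rule).
    delta_n = L * (1.0 + np.sqrt(1.0 + snr_hat)) / (snr_hat + eps) * sigma_x2
    return delta_n, sigma_d2, sigma_y2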
Table 3. Computational complexity of the main algorithms.

Algorithm        Number of Multiplications per Iteration
LMS              3L
RLS              L² + 2L + O(1)
DR-⋆R-RLS        L² + 4L + N + O(1) + O(δ)

Here, O(δ) denotes the additional cost of evaluating the regularization parameter, which depends on the specific algorithm (see Table 2).