A Modified Recursive Regularization Factor Calculation for the Sparse RLS Algorithm with l1-Norm

In this paper, we propose a new calculation method for the regularization factor in sparse recursive least squares (SRLS) with an l1-norm penalty. The proposed regularization factor requires no prior knowledge of the actual system impulse response, and it also reduces the computational complexity by about half. In the simulations, we use the Mean Square Deviation (MSD) to evaluate the performance of SRLS with the proposed regularization factor. The simulation results demonstrate that SRLS using the proposed regularization factor calculation differs by less than 2 dB in MSD from SRLS using the conventional regularization factor, which requires the true system impulse response. Therefore, it is confirmed that the performance of the proposed method is very similar to that of the existing method, at about half the computational complexity.


Introduction
A sparse channel system is one whose impulse response, modeled with a tapped-delay-line model, contains only a few non-zero taps. Such channels arise in TV channels [1], wide-band radio communication [2], underwater acoustic channels [3], etc. [4]. There have been many recent works on adaptive estimation for sparse channels; various LMS-type algorithms [5,6], such as sparse LMF algorithms [7,8] and sparse LMS/F algorithms [9,10], have been proposed for sparse channel estimation. The RLS algorithm converges faster than LMS-type algorithms. Hence, RLS-based sparse adaptive filtering is considered one of the most promising fast approaches in many system estimation applications, such as channel estimation. There are several algorithms based on sparse RLS [11-14] as well as Total Least Squares (TLS)-based algorithms [15,16]. Many sparse RLS algorithms are algorithmically comparable to the plain RLS algorithm; however, the update equations in most sparse RLS (SRLS) algorithms are not intrinsically recursive, as they are in the plain RLS algorithm. Eksioglu and Tanc [13] proposed a fully recursive SRLS algorithm that is comparable to the plain RLS algorithm. Sparse RLS algorithms handle sparsity with an l1-norm penalty; therefore, it is essential to select the regularization factor for the l1-norm properly. Many researchers have developed and proposed selection methods for the regularization factor. The authors in [13] also proposed a regularization factor calculation method for the SRLS algorithm, and similar recursive regularization factor selection methods were used in [15-19].
However, the algorithms from [15-18] are not practical: the regularization factors in [15-17] assume that the true system impulse response is known in advance, and [18] sets part of the true system response to an arbitrary constant. The regularization factor selection method in [19] needs no true system impulse response; however, its regularization factor is updated recursively, so errors in the update are likely to propagate and accumulate. In [20], Lim proposed a regularization factor based on an estimate of the sparsity of the estimated system. Although that regularization factor required no true system information, it added error to the system, and Lim [20] applied it only to TLS-based system modeling. In [21], the l1-IWF (iterative Wiener filter), a kind of steepest descent method, was proposed. The l1-IWF has a regularization factor that requires no a priori knowledge of the true system response; however, it was not shown whether this regularization factor converges to the optimal one.
In this paper, the major contribution is a new regularization factor for the SRLS algorithm in [13], which we show converges to the scaled optimal regularization factor. The proposed regularization factor does not require a priori knowledge of the true system response. A minor contribution is that the proposed regularization factor requires less computational complexity than the regularization factor in [13].
The remainder of the paper is organized as follows. In Section 2, we reformulate the sparse RLS. In Section 3, a new regularization factor is proposed; in that section, we show that the regularization factor in [13] requires a priori knowledge of the true system impulse response, and we also analyze the computational complexity of the proposed regularization factor. The simulation conditions and results are described and illustrated in Section 4, and a discussion of the results follows in Section 5. We conclude in Section 6.

Problem Formulation
In this section, we reformulate the SRLS in [13]. Consider a sparse weight vector w_o ∈ R^N, which represents the channel by a delay line with N taps. By sparsity, the number of significant coefficients in w_o, S, is much smaller than the total dimension (that is, S ≪ N). The goal is to estimate the sparse vector w_o from the input signal vector x(n) ∈ R^N and the received signal y(n), which is assumed to be generated by the linear system in (1):

y(n) = w_o^T x(n) + η(n), (1)

where η(n) is the additive noise. Consider the standard RLS optimization problem subject to a sparsity constraint.
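As a concrete illustration, the sketch below (a toy Python example, not the authors' code; the dimensions, noise level, and forgetting factor λ are assumed for illustration) generates a sparse channel according to this model and runs the plain RLS recursion, i.e., the baseline to which the l1-norm penalty is later added.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse system: N taps, only S of them non-zero, drawn from N(0, 1/S).
N, S = 16, 2
w_o = np.zeros(N)
idx = rng.choice(N, size=S, replace=False)
w_o[idx] = rng.normal(0.0, np.sqrt(1.0 / S), size=S)

# Observation model (1): y(n) = w_o^T x(n) + eta(n).
n_samples = 2000
sigma_x, sigma_eta = 1.0, 0.01            # high-SNR setting (assumed)
x_seq = rng.normal(0.0, sigma_x, size=(n_samples, N))
y_seq = x_seq @ w_o + rng.normal(0.0, sigma_eta, size=n_samples)

# Plain (non-sparse) RLS recursion.
lam, delta = 0.999, 1e-2                  # forgetting factor, init scale
P = np.eye(N) / delta                     # inverse autocorrelation estimate
w_hat = np.zeros(N)
for x, y in zip(x_seq, y_seq):
    k = P @ x / (lam + x @ P @ x)         # gain vector
    xi = y - w_hat @ x                    # a priori estimation error
    w_hat = w_hat + k * xi                # weight update
    P = (P - np.outer(k, x @ P)) / lam    # inverse-correlation update

print(np.linalg.norm(w_o - w_hat) ** 2)   # squared deviation, small after convergence
```

The SRLS algorithm of [13] adds the l1-norm constraint to this recursion; the sketch only fixes the common notation (x(n), y(n), k(n), ξ(n), P(n)).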

Proposed Recursive Regularization Factor for Sparse RLS Algorithm
In this section, we derive the regularization factor γ(n − 1) such that ||ŵ(n − 1)||_1 = c, which means that the l1-norm of ŵ(n) is preserved for all time steps n. This property yields the time invariance of ŵ(n) [23].
By referring to (7) and assuming the sample time T_s is normalized to 1 [24], (8) follows.
This means that the estimation error with regularization is lower than the estimation error without regularization.
In order to examine the asymptotic behavior of the proposed regularization factor in (10), we also assume that x(n) is a white noise process, so that its exponentially weighted autocorrelation matrix Φ(n) asymptotically becomes σ_x^2 I/(1 − λ) and hence P(n) = Φ(n)^{-1} ≈ σ_x^{-2}(1 − λ)I [25]. We use the simplified gain vector k(n) = P(n)x(n) from [25] and approximate the error ξ(n) at high SNR. When we substitute (14) and (15) into (10), we can rewrite the proposed regularization factor in (10) as (16). In (16), P(n)x(n)x^T(n) can be asymptotically approximated by its expected value, (17), as in [26].
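The white-noise approximation can be checked numerically. The short sketch below (an illustrative Python check with assumed values for N, λ, and σ_x; it is not part of the paper's derivation) builds the exponentially weighted autocorrelation matrix Φ(n) recursively and confirms that its diagonal approaches σ_x^2/(1 − λ), so that P(n) = Φ(n)^{-1} ≈ σ_x^{-2}(1 − λ)I.

```python
import numpy as np

rng = np.random.default_rng(1)
N, lam, sigma_x = 8, 0.99, 1.0
n_samples = 5000                      # long enough for lam**n to vanish

# Exponentially weighted autocorrelation:
# Phi(n) = sum_i lam**(n-i) x(i) x(i)^T, computed recursively.
Phi = np.zeros((N, N))
for _ in range(n_samples):
    x = rng.normal(0.0, sigma_x, size=N)
    Phi = lam * Phi + np.outer(x, x)

expected = sigma_x ** 2 / (1.0 - lam)  # asymptotic diagonal value
diag_err = np.abs(np.diag(Phi) - expected).max() / expected
print(diag_err)                        # small relative deviation

# P(n) = Phi(n)^{-1} then approximates sigma_x^{-2} (1 - lam) I.
P = np.linalg.inv(Phi)
print(np.abs(np.diag(P) - (1.0 - lam) / sigma_x ** 2).max())
```

The residual fluctuation shrinks as λ approaches 1, since the effective averaging window 1/(1 − λ) grows.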
Substituting (17) and P(n) ≈ σ_x^{-2}(1 − λ)I into (16), the variable regularization factor in (16) becomes the following. In (19), the proposed regularization parameter approximately converges to the scaled optimal regularization parameter in [13]. Therefore, the newly derived regularization parameter γ̂(n) in (19) satisfies 0 ≤ γ̂(n) ≤ γ(n), and the proposed regularization parameter can be used in place of the optimal regularization parameter. In addition, the proposed regularization parameter needs no true system parameter w_o.
In terms of computational complexity, the difference between the SRLS in [13] and the proposed algorithm lies only in the calculation of the regularization factor. Therefore, we compare the computational complexity of the proposed regularization factor in (10) with that of the regularization factor in (11) from [13]. In practice, the regularization factor used in [13] is (20), which is an approximation of (11).
The regularization factor is calculated in line 6 of Algorithm 2. When counting its multiplications, it should be taken into account that the regularization factor reuses quantities already computed before line 6. Taking this into account, (20) requires 2N + 3 multiplications, whereas the proposed factor (21) requires N + 1 multiplications.
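These counts imply roughly a factor-of-two saving for the filter lengths used in the simulations (N = 64 and N = 256). A trivial Python sketch (illustrative only; the function names are ours):

```python
# Per-update multiplication counts for the two regularization-factor
# calculations, as derived above: 2N + 3 for (20) from [13] versus
# N + 1 for the proposed factor (21).
def mults_conventional(N: int) -> int:
    return 2 * N + 3

def mults_proposed(N: int) -> int:
    return N + 1

for N in (64, 256):
    ratio = mults_proposed(N) / mults_conventional(N)
    print(N, mults_conventional(N), mults_proposed(N), round(ratio, 3))
# The ratio approaches 1/2 as N grows.
```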

Simulation Results
For the simulations in this section, we use the same experimental conditions as in [13] (code available at https://web.itu.edu.tr/eksioglue/pubs.htm, accessed on 16 March 2021). We consider two system parameter vectors w_o, with N = 64 and N = 256 taps, respectively. Out of the N coefficients, only S are non-zero. We draw the coefficient values from an N(0, 1/S) distribution and place the non-zero coefficients at random positions.
In the simulation, we show the system estimation results using the proposed regularization factor, which does not require the true system response. We compare the l1-RLS using the true system response [13], the l1-RLS in [19], the l1-IWF in [21], and the l1-RLS using the proposed regularization factor selection method; in addition, we include the conventional RLS. We simulate these algorithms on sparse impulse responses with S = 2, 4, 8, and 16 for the performance evaluation. Figure 1 shows the Mean Square Deviation (MSD) comparison results for order N = 64 at SNR = 20 dB, where MSD(ŵ(n)) = E[||w_o − ŵ(n)||^2]. Figure 2 shows the corresponding MSD results for N = 256 at SNR = 20 dB. In addition, the MSD comparison results are summarized in Table 1 at SNR = 20 dB, 10 dB, and 0 dB.
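For reference, the MSD metric can be estimated by averaging the squared deviation over independent Monte Carlo trials. The helper below is an illustrative Python sketch (the function name and the toy sanity check are ours, not the paper's):

```python
import numpy as np

def msd_db(w_o: np.ndarray, w_hat_trials: np.ndarray) -> float:
    """Estimate MSD(w_hat) = E||w_o - w_hat||^2 by averaging over
    independent trials (rows of w_hat_trials); returned in dB."""
    sq_dev = np.sum((w_hat_trials - w_o) ** 2, axis=1)
    return 10.0 * np.log10(sq_dev.mean())

# Toy sanity check: estimates scattered around w_o with per-tap
# variance v give MSD = N * v, i.e. 10*log10(N*v) dB.
rng = np.random.default_rng(2)
N, v, trials = 64, 1e-3, 10000
w_o = rng.normal(size=N)
w_hat_trials = w_o + rng.normal(0.0, np.sqrt(v), size=(trials, N))
print(msd_db(w_o, w_hat_trials))  # close to 10*log10(64e-3), about -11.9 dB
```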

Discussion
The MSD comparison results in Figure 1 show that the estimation performance of the l1-RLS using the proposed regularization factor is almost the same as that of the l1-RLS using the regularization factor computed with the true system impulse response. The l1-IWF also performs almost the same as the l1-RLS with the true system information, consistent with the results in [21]. The proposed algorithm even outperforms the l1-RLS in [19]. Predictably, the conventional RLS has the worst MSD in all cases. Figure 1 confirms that, despite having no prior knowledge of the true system response, the proposed regularization factor is comparable to the conventional regularization factor computed with the true system impulse response.

Figure 2 shows the MSD results for a longer system, N = 256. The results are very similar to those in Figure 1; therefore, the proposed regularization factor calculation method works well regardless of the system dimension.

Table 1 compares the sparse RLS using the proposed regularization factor calculation method with the other algorithms at various SNRs. The performance of the l1-IWF is also similar to that of the proposed algorithm, but the proposed algorithm is better at low SNR. The results in Table 1 show that, although the SRLS using the proposed regularization factor does not utilize the impulse response of the true system, its MSD differs by less than 2 dB from that of the SRLS whose regularization factor uses the actual impulse response of the target system. Therefore, the performance of the two SRLS variants is very similar. In addition, as mentioned in Section 3, it is also remarkable that the computational complexity of the regularization factor is reduced to about half that of the conventional regularization factor.

Conclusions
In this paper, we proposed a new calculation method for the regularization factor in l1-RLS that requires no prior knowledge of the true system response. We also showed that the proposed regularization factor converges to the scaled optimal regularization factor; therefore, it can be used in place of the optimal regularization factor. The simulation results confirmed that the proposed regularization factor performs almost the same as the conventional regularization factor computed with the true system response.