An Alternative Approach to Obtain a New Gain in Step-Size of LMS Filters Dealing with Periodic Signals

Abstract: Partial updates (PU) of adaptive filters have been successfully applied in different contexts to lower the computational costs of many control systems. In a PU adaptive algorithm, only a fraction of the coefficients is updated per iteration. In particular, this idea has proved to be a valid strategy in the active control of periodic noise consisting of a sum of harmonics. The convergence analysis carried out here is based on the periodic nature of the input signal, which makes it possible to formulate the adaptive process with a matrix-based approach, the periodic least-mean-square (P-LMS) algorithm. In this paper, we obtain the upper bound that limits the step-size parameter of the sequential PU P-LMS algorithm and compare it to the bound of the full-update P-LMS algorithm. Thus, the limiting value of the step-size parameter is expressed in terms of the step-size gain of the PU algorithm. This gain in step-size is the quotient between the upper bounds ensuring convergence in the following two scenarios: first, when PU are carried out and, second, when every coefficient is updated in every cycle. This step-size gain gives the factor by which the step-size can be multiplied so as to compensate for the convergence speed reduction of the sequential PU algorithm, which is an inherently slower strategy. Results are compared with previous results based on the standard sequential PU LMS formulation. Frequency-dependent notches in the step-size gain are not present with the matrix-based formulation of the P-LMS. Simulated results confirm the expected behavior.


Introduction
The least-mean-square (LMS) algorithm [1][2][3] is an adaptive algorithm in which a simplification of the gradient vector computation is carried out by means of an appropriate modification of the goal function. The LMS algorithm is widely used in different applications due to its computational simplicity. Very different fields of knowledge, such as underwater communications [4] or ultrawide bandwidth systems [5], make use of the LMS algorithm to optimize an objective function by iteratively minimizing the error signal. Apart from the classical performance analysis of the LMS algorithm, one may find recent relevant references on the stochastic analysis of the LMS algorithm for non-Gaussian [6], white Gaussian [7], and colored Gaussian [8] cyclostationary input signals.
In this paper, we analyze the convergence process of the LMS filter under the assumption of a deterministic periodic input. The periodic nature of the reference (also referred to as regressor or input) signal and of the training signal allows us to use the matrix approach to the LMS algorithm proposed by Parra et al. [9], called the periodic least-mean-square (P-LMS) algorithm in the following. The referenced paper is based on a previous work [10], where a matrix-based approach is proposed to analyze the stability of adaptive algorithms.
Active noise control (ANC) is a well-known strategy widely used to attenuate acoustic disturbances by means of controllable secondary loudspeakers. The output of these secondary sound sources is arranged so as to destructively interfere with the acoustic noise from the primary source. The basic idea behind ANC systems has been reported over the last decades [11][12][13][14], and it is still a research field with a continuous dissemination of interesting results [15][16][17][18][19].
ANC systems' efficiency has been demonstrated by many researchers who have published complete references on the topic [20][21][22]. Active techniques, therefore, are generally considered a valid strategy for outperforming passive systems in certain cases.
Many applications of ANC have to deal with periodic disturbances consisting of several harmonics [23,24]. The reader might take into account that narrowband active noise control systems may attenuate a great variety of noises, e.g., those generated by compressors, turbines, engines, or fans. In [25], Jafari et al. proposed a robust adaptive strategy for the rejection of periodic components of a disturbance and analyzed its performance and stability properties.
One may find in the literature numerous examples of adaptive algorithms derived from the conventional LMS that aim to improve its performance by acting on the step-size parameter. For instance, Han-sol et al. [26] proposed a variable step-size LMS strategy that achieves a fast convergence rate and low misadjustment by improving the updating stage.
Besides, system identification is a common task in many application fields. If systems have to be estimated using sparse computation due to computational complexity constraints, partial updates (PU) of the coefficients turn out to be an optimal option. Thus, algorithms exploiting sparseness in the coefficient update domain, where most of the weights are small and only a few are relevant, are often based on variations of the LMS adaptive algorithm [27,28].
In this sense, gain in step-size [29] is a parameter defined in the context of periodic primary noise attenuation when PU of the weights of the filter are used to lower the computational cost of an adaptive algorithm [30][31][32].
The existence of a gain in step-size (which depends on the frequency, the number of coefficients of the filter M, and the decimating factor N) was already proved, theoretically and experimentally, in previous works [29] when sequential partial updates (Seq. PU) were applied to a filter controlled by the LMS algorithm.
In this paper, we want to verify whether there is also a ratio between the step-size bounds for N > 1 (Seq. PU P-LMS) and N = 1 (P-LMS) when the LMS is applied through the matrix-based approach proposed by Parra et al. [9] for periodic input signals, which we refer to as P-LMS. In this latter case, the convergence analysis is not based on the eigenvalues of the autocorrelation matrix of the input signal, but on the so-called stability matrix.

Materials and Methods
In this second section, we give an overview of the algorithmic proposals. The theoretical basis and the convergence analysis are provided.

Matricial Formulation of the Periodic LMS Algorithm
We begin this subsection by recalling the matrix-based algorithmic proposal of Parra et al. [9], before reducing its computational complexity in Section 2.2 by means of PU.
Let us consider an LMS filter of M coefficients with periodic reference signal x(n) and periodic training signal d(n), both of period P.
Defining the corresponding M-length regressor and coefficient vectors,

x(n) = [x(n), x(n−1), …, x(n−M+1)]^T, (1)
w(n) = [w_0(n), w_1(n), …, w_{M−1}(n)]^T, (2)

the standard LMS updating process of the coefficients is given by the well-known recursion

w(n+1) = w(n) + μ e(n) x(n), (3)

with μ being the step-size parameter and e(n) being the error. The error is defined as the difference between the training signal (often referred to as the desired response) and the output of the filter,

e(n) = d(n) − w^T(n) x(n). (4)
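As a minimal illustration of the sample-by-sample LMS recursion described above, the following sketch runs the filter on a periodic tone; the toy plant (a 2-sample delay) and all parameter values are our own illustrative assumptions:

```python
import numpy as np

def lms(x, d, M, mu, passes=1):
    """Sample-by-sample LMS, w(n+1) = w(n) + mu * e(n) * x_vec(n).

    x, d hold one period (length P) of the periodic reference and
    training signals; the recursion runs over `passes` periods.
    """
    P = len(x)
    w = np.zeros(M)
    xbuf = np.zeros(M)           # regressor [x(n), x(n-1), ..., x(n-M+1)]
    errors = []
    for n in range(passes * P):
        xbuf[1:] = xbuf[:-1]
        xbuf[0] = x[n % P]
        e = d[n % P] - w @ xbuf  # e(n) = d(n) - w^T(n) x(n)
        w += mu * e * xbuf       # standard LMS coefficient update
        errors.append(e)
    return w, np.array(errors)

# toy periodic example: d(n) is a 2-sample-delayed copy of x(n), P = 16
P, M, mu = 16, 8, 0.05
n = np.arange(P)
x = np.cos(2 * np.pi * n / P)
d = np.cos(2 * np.pi * (n - 2) / P)
w, err = lms(x, d, M, mu, passes=200)
```

For this deterministic periodic input, the error decays toward zero as the filter learns the delay.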
Iterating Equation (3) of the LMS algorithm in the interval given by [0, n−1], we obtain

w(n) = w(0) + μ Σ_{j=0}^{n−1} e(j) x(j). (5)

Due to the P-periodicity of the input signals x(n) and d(n), one may consider separately complete P-periods of the signals, so that, within the k-th period, k = 0, 1, 2, …,

w(kP + n) = w(kP) + μ Σ_{j=0}^{n−1} e(kP + j) x(kP + j). (6)

During that period we define:
- the vector from the training signal: d_k = [d(kP), d(kP+1), …, d(kP+P−1)]^T;
- the vector from the error: e_k = [e(kP), e(kP+1), …, e(kP+P−1)]^T;
- and, finally, the estimated response vector: y_k = [y(kP), y(kP+1), …, y(kP+P−1)]^T.

Additionally, because d(n) is periodic, the training signal vector remains invariant during all periods, that is, d_k = d, ∀k ∈ ℕ. Substituting into Equation (6), at every period, we have the following matrix identity

w((k+1)P) = w(kP) + μ X^T e_k, (7)

and we also have

w(kP + n) = w(kP) + μ X_n^T e_k, (8)

which is valid in the interval 1 ≤ n ≤ P. X is the P-by-M matrix built from one period of the regressor vectors:

X = [x^T(0); x^T(1); …; x^T(P−1)]. (9)

For each value of n, the related matrix X_n is the P-by-M matrix whose first n rows are equal to the first n rows of X, whereas the remaining ones are null.
The square matrix B is defined as

B = I_P + μ L, (10)

where I_P is the P-order identity matrix and L is the (strictly) lower triangular matrix given by

L_{ij} = x^T(i) x(j), i > j; L_{ij} = 0 otherwise. (11)

X is the sensitivity matrix. From B we can obtain the error vector as

e_k = B^{−1} (d − X w(kP)), (12)

and substituting into Equation (8), which is particularized for n = P, gives that a block of P iterations updates the LMS filter according to [9]

w((k+1)P) = A w(kP) + b, (13)

where

A = I_M − μ X^T B^{−1} X (14)

and

b = μ X^T B^{−1} d. (15)
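To make the matrix formulation concrete, the following sketch builds A and b numerically and checks that one block update reproduces P sample-by-sample LMS iterations. Variable names are ours, and we assume the usual construction in which L collects the inner products x^T(i) x(j) below the diagonal:

```python
import numpy as np

P, M, mu = 12, 6, 0.02
rng = np.random.default_rng(0)
xper = rng.standard_normal(P)          # one period of x(n)
d = rng.standard_normal(P)             # one period of d(n)

# rows of X are the regressor vectors [x(n), ..., x(n-M+1)], wrapped periodically
X = np.array([[xper[(n - i) % P] for i in range(M)] for n in range(P)])

L = np.tril(X @ X.T, k=-1)             # strictly lower triangular, L_ij = x(i)^T x(j)
B = np.eye(P) + mu * L                 # square matrix B
A = np.eye(M) - mu * X.T @ np.linalg.inv(B) @ X   # stability matrix
b = mu * X.T @ np.linalg.inv(B) @ d

w0 = rng.standard_normal(M)
w_block = A @ w0 + b                   # one block update = one whole period

# reference: P plain sample-by-sample LMS iterations
w = w0.copy()
for n in range(P):
    e = d[n] - w @ X[n]
    w = w + mu * e * X[n]

agree = np.allclose(w_block, w)
print(agree)                           # → True
```

The agreement holds exactly (up to floating point) for any μ, since B = I_P + μL is unit-triangular and therefore always invertible.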

Sequential Partial Updates (PU) Applied to the Periodic LMS Algorithm
The sequential partial updates (Seq. PU) LMS algorithm updates a subset of M/N coefficients per iteration, out of a total of M weights, according to Equation (16):

w_i(n+1) = w_i(n) + μ e(n) x(n − i + 1), if (n − i) mod N = 0,
w_i(n+1) = w_i(n), otherwise, (16)

for 1 ≤ i ≤ M, where w_i represents the i-th coefficient of the filter, N the decimation factor of the PU strategy, μ the step-size parameter of the algorithm, x(n) the input (often named regressor) signal, and e(n) the error signal. Thus, the computational complexity of the Seq. PU algorithm is reduced directly as N increases [30].
The Seq. PU strategy expressed by Equation (16) is summarized in Figure 1. Here, the weights to be updated at every cycle are determined, as well as the related samples of the regressor signal x(n). In this scheme, we assume that the first update is carried out at the first cycle, and the current value of the input signal is x(n). From Equation (16) and Figure 1, one can see that the current value x(n) is used to update the first N taps of the filter during the upcoming N cycles. In general, in a full-update adaptive algorithm, it is necessary to renew the input vector at every cycle with a new sample of x(n). Nevertheless, according to Figure 1, the Seq. PU LMS adaptive algorithm makes use of every N-th position of the input vector. Hence, it is enough to obtain a new sample in just one out of N iterations.
Then, one can consider that the whole filter of M coefficients is made up of N logical subfilters of M/N coefficients. These logical subfilters come from sampling uniformly the weights of the original M-length filter with a sampling factor of N positions. To illustrate this idea, the weights of the first logical subfilter are marked with a circle in Figure 1. The weights placed at the same relative position in every logical subfilter are updated with the same value of the regressor signal x(n). This regressor signal is renewed in just 1 out of N cycles. Then, after N cycles, a new value of x(n) is sampled and taken to renew the first coefficient of each subfilter, whereas the oldest value of x(n) is shifted out of its active range. Summarizing, during N consecutive cycles, N logical subfilters of length M/N are updated with the same regressor vector. This regressor signal is an N-decimated version of x(n). Therefore, we analyze the convergence of the M-length filter on the basis of the parallel convergence of N (M/N)-length subfilters updated by an N-decimated input. In order to use this PU strategy for the P-LMS algorithm, one should decimate the input signal and use shorter versions of the matrices involved in the updating process, because the active number of coefficients per iteration results in a filter of M/N coefficients.
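The sequential update schedule described above can be sketched as follows; the toy problem (a delayed periodic tone) and all parameter values are illustrative assumptions of ours:

```python
import numpy as np

def seq_pu_lms(x, d, M, N, mu, iters):
    """Sequential PU LMS, Eq. (16): cycle n renews only the taps i with
    (n - i) % N == 0, i.e., one logical (M/N)-length subfilter per cycle."""
    P = len(x)
    w = np.zeros(M)
    xbuf = np.zeros(M)
    err = np.zeros(iters)
    for n in range(iters):
        xbuf[1:] = xbuf[:-1]
        xbuf[0] = x[n % P]
        e = d[n % P] - w @ xbuf       # the whole filter produces the error
        err[n] = e
        for i in range(M):
            if (n - i) % N == 0:      # only M/N coefficients updated this cycle
                w[i] += mu * e * xbuf[i]
    return w, err

# toy problem: cancel a 2-sample-delayed tone with half of the taps per cycle
P, M, N, mu = 16, 8, 2, 0.05
n = np.arange(P)
x = np.cos(2 * np.pi * n / P)
d = np.cos(2 * np.pi * (n - 2) / P)
w, err = seq_pu_lms(x, d, M, N, mu, 400 * P)
```

Only M/N multiply-accumulate updates are spent per cycle, at the price of slower convergence; here the error still decays toward zero given enough iterations.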
If x(n) is the P-periodic input signal, then the N-decimated signal

x'(n) = x(nN) (18)

is also periodic, of period P' = P/N. X' is a P'-by-M' matrix, obtained by substituting in Equation (9) x(n) by its N-decimated version x'(n), the period P by P' = P/N, and the number of coefficients M by M' = M/N. The square matrix B' is defined as

B' = I_{P'} + μ L', (20)

with I_{P'} being the P'-order identity matrix and L' the lower triangular matrix obtained by substituting in Equation (11) x(n) by its N-decimated version and the period P by P'. At every iteration, a logical (M/N)-length subfilter is updated from the whole set of taps according to a sampling process of the coefficients of w.
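A quick numerical check of the decimation step: when N divides P, the decimated signal x'(n) = x(nN) is periodic with period P/N (signal parameters below are arbitrary choices of ours):

```python
import numpy as np

P, N = 24, 3
n = np.arange(10 * P)
# a P-periodic signal made of two harmonics
x = np.cos(2 * np.pi * n / P) + 0.3 * np.cos(2 * np.pi * 5 * n / P)
xd = x[::N]                     # x'(n) = x(nN)
Pp = P // N                     # new period P' = P / N
same = np.allclose(xd[:Pp], xd[Pp:2 * Pp])
print(same)                     # → True
```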
The M/N coefficients that form the iteration-dependent logical subfilter are shown in Table 1, which lists, for each iteration index, the coefficients of w to be updated.

The entire filter is applied to obtain the current error,

e(n) = d(n) − w^T(n) x(n), (22)

but only one out of every N coefficients is updated per iteration. By analogy with Equations (13)-(15), a block of P' iterations updates each logical subfilter according to

w'((k+1)P') = A' w'(kP') + b', (23)

where

A' = I_{M'} − μ X'^T B'^{−1} X' (24)

and

b' = μ X'^T B'^{−1} d', (25)

where the N-decimated version of the training signal is sampled as

d'(n) = d(nN). (26)

Gain in Step-Size of the Periodic LMS Algorithm with Sequential Partial Updates
The approach followed in this part of the paper is in accordance with that of Parra et al. [9], where the authors define a sensitivity matrix. The sensitivity matrix X is built from a matricial arrangement of one period of the regressor/input signal. From the sensitivity matrix one may obtain the stability matrix [10],

A = I_M − μ X^T B^{−1} X, (27)

that controls the convergence of the LMS.
A simple criterion for filter convergence is

0 < μ < 2 cos θ / |λ|, (28)

where λ = |λ| e^{jθ} is the eigenvalue of the matrix X^T B^{−1} X minimizing the quotient cos θ / |λ|, with X^T B^{−1} X related to the stability matrix according to Equation (27). Here, we have applied sequential PU of the coefficients of the M-length adaptive filter to lower the computational complexity. In so doing, the decimating factor N of the sequential PU strategy reduces the number of operations because only M/N coefficients are updated per cycle.
Then, the ratio between the bounds on the step-size parameter when N > 1 (Seq. PU P-LMS) and N = 1 (P-LMS), which we name gain in step-size, is given by

Gμ = μ_max,Seq / μ_max,P-LMS. (29)

With the subindex P-LMS, we refer to the full-update periodic LMS proposed by Parra et al. [9], whereas the subindex Seq is used to denote the sequential PU strategy proposed in Section 2.2.
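As a rough numerical sketch of this ratio, the code below estimates both bounds under a small-μ simplification, taking B ≈ I so that each bound reduces to 2/λ_max(X^T X). This simplification is ours and ignores the exact dependence of B on μ; the tone and the dimensions are also illustrative choices:

```python
import numpy as np

def mu_bound(x, P, M):
    """Approximate step-size bound 2 / lambda_max(X^T X) for one period of x."""
    X = np.array([[x[(n - i) % P] for i in range(M)] for n in range(P)])
    lam = np.linalg.eigvalsh(X.T @ X).max()
    return 2.0 / lam

P, M, N = 32, 16, 2
n = np.arange(P)
x = np.cos(2 * np.pi * 3 * n / P)          # P-periodic single tone
mu_full = mu_bound(x, P, M)                # full-update P-LMS, N = 1
mu_pu = mu_bound(x[::N], P // N, M // N)   # Seq. PU: decimated signal, shorter filter
G = mu_pu / mu_full                        # gain in step-size
print(G)                                   # → 4.0 for this tone (= N**2)
```

For this tone the ratio comes out at N² = 4, consistent with the N² asymptote observed in the experiments with the matrix-based formulation.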

Gain in Step-Size of Standard Version of the Sequential PU LMS Algorithm
In previous works [29], we carried out a similar analysis on the basis of the eigenvalues of the autocorrelation matrix of the input signal. As we will see, the shape of the gain in step-size differs completely between the two cases (LMS vs. P-LMS).

Eigenvalues of the Autocorrelation Matrix of a P-Periodic Signal Composed of K Pure Harmonics
Let us assume that the input signal of an adaptive filter is defined as follows.
x(n) = Σ_{k=1}^{K} A_k cos(2π k f0 n + φ_k), (30)

where f0 is the fundamental frequency normalized by the sampling rate and φ_1, …, φ_K are the initial random phases. Phases are uniformly distributed from 0 to 2π radians and mutually independent. Finally, A_1, …, A_K are the amplitudes of the harmonics. The autocorrelation function of the input signal can be expressed as

r(m) = Σ_{k=1}^{K} (A_k² / 2) cos(2π k f0 m). (31)

Therefore, the autocorrelation matrix R of the input vector x(n) can be expressed as the sum of K matrices of size M × M as follows:

R = Σ_{k=1}^{K} R_k. (32)
The largest eigenvalue λ_max,k of each matrix R_k is given by [33]

λ_max,k = (A_k² / 4) (M + |sin(2π k f0 M) / sin(2π k f0)|), (33)

where the subscript k refers to the index of the submatrix R_k. Let us consider a matrix made up of a sum of matrices of the same dimensions. According to the triangle inequality ([34], Appendix E), the sum of the largest eigenvalues of every matrix in the sum establishes a bound for the largest eigenvalue of the matrix defined as the sum. As a result, the largest eigenvalue of R, which we refer to as λ_max,tot, is limited according to

λ_max,tot ≤ Σ_{k=1}^{K} λ_max,k, (34)

where the subscript tot is used to refer to the autocorrelation matrix [29]. Accordingly, convergence of the full-update LMS requires

μ < 2 / λ_max,tot. (35)
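The triangle-inequality bound can be checked numerically; the number of harmonics, the amplitudes, and M below are arbitrary choices of ours:

```python
import numpy as np

# check: lambda_max(sum_k R_k) <= sum_k lambda_max(R_k)
M, K, f0 = 16, 3, 0.03
amps = [1.0, 0.5, 0.25]
m = np.arange(M)
R = np.zeros((M, M))
bound = 0.0
for k in range(1, K + 1):
    # autocorrelation of the k-th harmonic: r_k(m) = (A_k^2 / 2) cos(2 pi k f0 m)
    r = (amps[k - 1] ** 2 / 2) * np.cos(2 * np.pi * k * f0 * m)
    Rk = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
    R += Rk
    bound += np.linalg.eigvalsh(Rk).max()
lam_tot = np.linalg.eigvalsh(R).max()
ok = lam_tot <= bound + 1e-9
print(ok)                         # → True
```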
As far as the Seq. PU LMS adaptive algorithm is concerned, the convergence requirements that the entire filter has to meet can be translated into the joint convergence of N logical (M/N)-length subfilters updated by an N-decimated input signal [29]. Adjusting the above approach to the case of the sequential PU LMS, where the length of the autocorrelation matrix is M/N and the sampling rate is divided by N, we deal with K matrices of dimension (M/N) × (M/N), whose largest eigenvalues are obtained by substituting in Equation (33) M by M/N and f0 by N f0. Considering the triangle inequality, the largest eigenvalue λ'_max,tot of the resulting autocorrelation matrix is bounded by the sum of these K eigenvalues (36), and the associated bound on the step-size is μ < 2/λ'_max,tot (37). It should be noticed that for N = 1 the sequential PU LMS algorithm reduces to the conventional full-update LMS algorithm, and Equations (36) and (37) reduce to Equations (34) and (35), respectively.
The quotient between the limits of the step-size in the two different cases (N > 1, sequential PU LMS, and N = 1, conventional LMS) defines the step-size gain

Gμ = λ_max,tot / λ'_max,tot, (38)

that is, the factor by which one can multiply the step-size while still meeting the convergence requirements [29].
The gain in step-size given by Equation (38) depends on the length of the filter M and on the decimation factor N. In order to visualize this double dependence, we set the number of harmonics of the input signal to K = 1. In so doing, the gain in step-size yields

Gμ = (M + |sin(2πMf0)/sin(2πf0)|) / (M/N + |sin(2πMf0)/sin(2πNf0)|). (39)

The step-size gain for a pure tone, when different decimation factors N and different filter lengths M are considered, is shown in Figures 2 and 3 [29]. According to Figures 2 and 3, one can infer that the step-size can be increased by a factor of up to N if certain frequencies are not present in the input signal. These frequencies are those that exhibit notches in the gain in step-size. The location of these critical frequencies, as well as the width and the number of the notches, can be analyzed as a function of the length of the adaptive filter M, the decimating factor N, and the sampling rate. Thus, the step-size parameter can be multiplied by a factor of N in order to ensure that the sequential PU LMS algorithm converges as fast as the full-update LMS algorithm. To afford that increase in convergence rate, the undesired disturbance must be free of significant components placed at the frequency notches of the step-size gain.
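Using the single-tone eigenvalue expression cited from [33], one can sketch this frequency dependence and observe the dip near the notch at f = 1/(2N); M, N, and the two probe frequencies below are our illustrative choices:

```python
import numpy as np

def lam_max(M, f):
    """Largest eigenvalue of the M x M autocorrelation matrix of a
    unit-amplitude tone of normalized frequency f (result from [33])."""
    w = 2 * np.pi * f
    return 0.25 * (M + abs(np.sin(M * w) / np.sin(w)))

def gain(M, N, f):
    # step-size gain of the standard Seq. PU LMS for a single tone:
    # ratio of the 2/lambda bounds of the decimated (M/N-tap, frequency N*f)
    # problem and the full M-tap problem
    return lam_max(M, f) / lam_max(M // N, N * f)

M, N = 16, 2
g_plain = gain(M, N, 0.15)    # frequency far from any notch: gain near N
g_notch = gain(M, N, 0.24)    # frequency close to the notch at f = 1/(2N)
print(g_plain, g_notch)
```

Near the notch the denominator term |sin(2πMf)/sin(2πNf)| grows, so the available gain drops well below N, which is exactly the behavior the P-LMS formulation avoids.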

Dependence on the Frequency of the Gain in Step-Size for the Periodic LMS Algorithm with Sequential Partial Updates
In order to determine the dependence of the gain in step-size given by Equation (29) on (a) the frequency, (b) the decimating factor N, and (c) the length of the filter M, we carried out the two experiments described in the upcoming subsections.

Gain in Step-Size vs. Frequency for a Fixed Value of the Decimating Factor (N = 2) and Variable Length of the Filter

In this experiment, we worked with a single-tone discrete-time signal. The digital frequency was set by normalizing the analog frequency by a sampling frequency of 8000 Hz. Because we want to deal with periodic signals (one may consider that a discrete-time sinusoidal signal only shows a periodic-in-samples behavior under certain conditions), we have considered a range of values for the period P from 512 down to 8 samples (in steps of 8 samples), corresponding, respectively, to analog frequencies from 15.6 Hz to 1000 Hz. The length of the filter has been set to M = 40, 80, 160, 240, and 320 coefficients. The decimating factor of the sequential partial-updates strategy has been set to N = 2. Thus, from the signal and its decimated version, we have derived the matrices of Equations (14) and (24), respectively. These matrices are necessary to implement the full-update and partial-update adaptive strategies described, respectively, in Sections 2.1 and 2.2. Then, the ratio between the bounds of the step-size given by Equation (29) has been obtained and drawn. In so doing, we have the gain in step-size affordable for every frequency, shown in Figure 4 (gain in step-size, sequential PU P-LMS).

Gain in Step-Size vs. Frequency for a Fixed Value of the Length of the Filter (M = 160 Coefficients) and Variable Decimating Factor N

In this experiment, whose results appear in Figure 5, we repeat the main idea of the previous subsection but set the length of the filter to M = 160 coefficients, whereas the decimating factor varies as N = 1, 2, 4, 8. The signal used is the same single tone as before, sampled at 8000 Hz. The range of values for the period P varies from 512 down to 32 samples (in steps of 8 samples), corresponding, respectively, to analog frequencies from 15.6 Hz to 250 Hz. From Figures 4 and 5, one may infer that the gain in step-size tends asymptotically toward the squared decimating factor N² as the frequency increases. Moreover, the gain in step-size shown in these figures does not present frequency-dependent notches with the matrix-based formulation of the P-LMS.

Algorithms Comparison
This section is devoted to a comparison of the learning curves of four different strategies dealing with the active control of periodic noise disturbances:
- Algorithm 1: standard LMS algorithm.
- Algorithm 2: LMS algorithm with sequential PU, applying, optionally, the gain in step-size.
- Algorithm 3: periodic LMS algorithm, based on a matrix formulation.
- Algorithm 4: periodic LMS algorithm, based on a matrix formulation; sequential PU (and gain in step-size) are applied again as in Algorithm 2.
Figure 6 shows the block diagram of the noise control problem addressed with the four adaptive strategies. The sampling frequency is set to 8000 samples/s. We execute 1920 iterations of the adaptive algorithms, and the weights of the filters are reset to zero after 800 iterations so as to evaluate a second convergence process. The step-size chosen for the experiment is set near its maximum value ensuring convergence.
In the experiment, we deal with two different versions of the desired signal, d1(n) and d2(n). In both cases, referred to with subindexes 1 and 2, the desired signal consists of three different harmonics. The two versions of the reference and the desired signals are designed in order to explain the behavior of the adaptive algorithm with regard to the gain in step-size in the second and fourth alternatives. When the second algorithm (LMS with Seq. PU) is applied, one may infer from Figure 7 that the gain in step-size can be applied at full strength for the second version of the reference signal (marked with green circles), but not for the first version (marked with red circles), because some or all of its harmonics are located in notches of the gain. More precisely, if the decimating factor is set to N = 2, the second harmonic of the first version appears at a frequency affected by a notch (around 0.25 in normalized frequency). If the decimating factor is set to N = 4, the three harmonics of the first version appear at frequencies where notches in the gain in step-size are present.
Nevertheless, if the fourth algorithm (P-LMS with Seq. PU) is applied, we see from Figure 8 that the gain in step-size does not present notches as long as the frequencies are above the very low frequency range. Then, with this fourth adaptive strategy, both versions of the reference signal, x1(n) and x2(n), are expected to provide similar results.

In the comparison of the four algorithms presented below (Figures 9-14), the performance measure used is the instantaneous squared error in logarithmic scale (dB). All simulation results are obtained by averaging 50 independent runs.
First, we work with x1(n) and d1(n), that is, the input signals whose frequencies are marked in red in Figures 7 and 8.
In the first example, we compare the four adaptive algorithms previously listed, setting Gμ = 1. As we do not apply either PU or gain in step-size, the second and fourth alternatives are identical, respectively, to the first and third algorithms. In Figure 9 we confirm that the learning curves of the four adaptive strategies are similar in terms of convergence rate. In the next step, we apply PU by setting a decimating factor N = 2. If we do not apply the gain in step-size to compensate for the inherent reduction in convergence rate, that is, if we keep Gμ = 1, the second and fourth learning curves should show a convergence rate reduced by a factor of 2, as we can confirm in Figure 10. Finally, we apply PU by setting N = 2 to reduce computational costs, but we also set Gμ = 2 to compensate for the reduction in convergence rate. As expected (and as we can confirm in Figure 11), the second strategy, named (b), diverges because the gain in step-size cannot be applied at full strength, due to the notch that appears at the second harmonic of x1(n) (see Figure 7). On the other hand, the fourth strategy, (d), works properly because the theoretical limit of the gain in step-size is well above the value used. Now, let us carry out the second version of the experiment, dealing with x2(n) and d2(n). Here, the input signals contain harmonics at the frequencies marked in green in Figures 7 and 8.
- In Figure 12, the comparison of the four adaptive algorithms is carried out with Gμ = 1. We obtain results similar to those shown in Figure 9 for the first version of the experiment.
- In Figure 13, we show the results achieved with a decimating factor N = 2 and a gain in step-size set to 1; as expected, the second and fourth learning curves reduce the convergence rate by a factor of 2, as happened in Figure 10.
- The difference between the two versions of the experiment arises when we apply PU with N = 2 and also set Gμ = 2 to compensate for the reduction in convergence rate.
Here, both PU strategies, denoted as (b) and (d) in Figure 14, converge and succeed in compensating for the inherent reduction of convergence rate due to PU.

Discussion
First of all, it is important to note that active noise control of periodic disturbances is a common task that many control systems have to deal with. As already said, narrowband active noise control is a topic of great current importance, because the attenuation of noises from engines, compressors, turbines, fans, or propellers is a problem addressed worldwide by many researchers.
The gain in step-size is a concept addressed in previous works in the context of the sequential PU LMS algorithm leading to the convergence of a standard FIR filter. The main idea behind this topic is that the inherent reduction of convergence speed due to partial updates can be compensated for by increasing the step-size parameter μ, because the μ bound seems to be increased by a factor of up to the decimating factor N. Nevertheless, the gain in step-size is not a constant of value N, but exhibits notches whose number, width, and location can be predicted and, consequently, avoided.
Then, we have considered the alternative matrix-based approach to the LMS formulation proposed by Parra et al. [9], which we refer to as the periodic LMS (P-LMS). In order to reduce its computational complexity by a decimating factor N, sequential PU have been proposed in the framework of the P-LMS algorithm. As expected, PU reduce the operations per cycle but, as a drawback, the convergence rate slows down proportionally to N.
On this point, we have looked into the gain in step-size when sequential PU are considered in the P-LMS. We conducted a study of the theoretical framework of the adaptive strategy, as well as several experiments, to conclude that the gain in step-size does not exhibit notches. The frequency-dependent shape of the gain in step-size of the P-LMS tends to N² for high frequencies but shows a lower value in the low frequency range. Nevertheless, there is no evidence of the presence of the notches that were visible when the standard (and not the periodic) LMS was considered.
To sum up, the research results show that the reduction in computational costs associated with PU need not be achieved at the expense of a reduction of the convergence rate. This statement is justified by the existence of a gain in step-size when PU are considered and, more importantly, by the matrix-based formulation of the LMS algorithm that we have referred to as P-LMS. This gain in step-size does not show notches in the frequency domain, as it does in the case of the standard LMS formulation, but just a poor behavior in the low-frequency range. Thus, paying attention to the low-frequency components, one may take full advantage of the application of the step-size gain.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author.