A Novel Generalized Group-Sparse Mixture Adaptive Filtering Algorithm

Abstract: A novel adaptive filtering (AF) algorithm is proposed for group-sparse system identification. In the devised algorithm, a novel mixed error criterion (MEC) combining a two-order logarithm error, p-order errors and a group-sparse constraint method is devised to provide resistance to impulsive noise. The proposed group-sparse MEC can fully use the known group-sparse characteristics of cluster-sparse systems, and it is derived and analyzed in detail. Various simulations are presented and analyzed to verify the effectiveness of the developed group-sparse MEC algorithms, and the simulation results show that the developed algorithm outperforms the previously developed sparse AF algorithms for identifying such systems.

Keywords: normalized LMS (NLMS); mixed error criterion (MEC); proportionate normalized LMS (PNLMS); improved PNLMS (IPNLMS)


Introduction
Adaptive filtering (AF) is an important field of signal processing and has been widely utilized for system identification (SI) [1,2]. Among AF applications, the least mean square (LMS) algorithm and its variants, which are based on a second-order error cost function, have been used widely since they are low in complexity and simple to implement in practice [1][2][3][4][5]. However, these algorithms might diverge, especially in the presence of outliers. The least mean fourth (LMF) algorithm has been reported by using the fourth-order error, resulting in improved performance at low signal-to-noise ratio (SNR) [6,7]. However, the LMF algorithm becomes unstable unless a small step size is used, so that it is less accurate and has higher misalignment than the LMS algorithm. Then, mixed AF algorithms have been proposed that introduce a weighting factor to combine the second-order and higher-order errors into a new cost function, improving the performance without sacrificing the simplicity and stability of the LMS [8][9][10][11][12]. Specifically, the least mean square/fourth (LMS/F) and the least mean mixed norm (LMMN) algorithms have been presented [8,11,12]. However, the early algorithms mentioned above cannot use the prior sparse-structure information present in practical systems, and their performance might degrade in the presence of impulsive noise because of the use of the squared error criterion.
In practical engineering applications, many systems have inherent sparse features; for example, the channel impulse responses (CIRs) of digital television transmissions, acoustic echo paths and satellite-link echo paths are typical sparse systems [13][14][15]. In these CIRs, most of the channel coefficients, which are inactive, are equal to zero, while a few active coefficients have large amplitudes [13][14][15]. In such sparse systems, the convergence speed can be accelerated by taking advantage of the prior sparse information structure. To this end, a proportionate update scheme has been proposed to implement sparse SI, such as the proportionate normalized LMS (PNLMS), which yields better convergence at the initial stage by assigning larger gains to the larger taps than to the inactive taps; however, the convergence rate of the gain-allocated PNLMS algorithm can even be slower than that of the classical normalized LMS (NLMS) algorithm at the later adaptation stage [15]. To further enhance the behavior of the gain-assigned PNLMS, several enhanced proportionate-type (Pt) algorithms have been reported, including the improved PNLMS (IPNLMS) algorithm [16]. Additionally, in recent years, zero-attraction (ZA) techniques have been developed to use the sparse system characteristics to optimize the behavior of the classical LMS algorithm, considering both the convergence and the steady-state mean-square error (MSE). Then, the zero-attraction LMS (ZA-LMS) and its reweighted form (RZA-LMS) were presented by imposing an l 1 -norm penalty on the estimated filter coefficient vector to modify the basic LMS cost function [17]. However, the ZA-based AF algorithms are not good enough to deal with group-sparse systems.
According to the active coefficient distribution, sparse systems are divided into three types [18][19][20][21][22][23]: (1) general sparse systems; (2) one-group sparse systems; and (3) multi-group sparse systems. It is known that network echo paths are modeled as typical one-group systems, while satellite-link echo paths have been modeled as multi-group systems, owing to the bulk delays that are always present in network encoding and jitter-buffer delays [19,21]. Then, a block-sparse LMS (BS-LMS) was developed by introducing the mixed l 2,0 -norm into the basic LMS cost function to identify group-sparse systems [18], and a block-sparse PNLMS (BS-PNLMS) was also realized by integrating the mixed l 2,1 -norm of the coefficient vector into the basic PNLMS algorithm to provide enhanced performance for block-sparse system identification [19].
In this article, we propose a mixed error criterion (MEC) algorithm based on the squared error and p-order sign errors for identifying sparse systems in impulsive noise, and a family of generalized group-sparse mixture adaptive filtering (GGS-MAF) algorithms is realized by integrating a proportionate group-update approach into the MEC algorithm to utilize the prior group-sparse information. Simulation results reveal that the devised GGS-MAF algorithms converge faster and have lower normalized misalignment (NM) when identifying group-sparse systems in impulsive noise (IN).

NLMS Algorithm
Considering the application to SI, we assume that the input signal vector is x(n) = [x(n), x(n − 1), · · · , x(n − L + 1)] T and that the unknown CIR system is h(n) = [h 0 (n), h 1 (n), · · · , h L−1 (n)] T , where n represents the time index and L denotes the total number of elements in the discrete CIR system. Thus, the expected signal d(n) is [1]

d(n) = x T (n)h(n) + r(n),

where r(n) is an additive noise that is an IN. ĥ(n) is the estimate of the CIR, and then the estimation error is described as [1]

e(n) = d(n) − x T (n)ĥ(n).

Within the AF framework of SI, our purpose is to minimize a statistical measure of the error signal. Two statistical measures, namely the squared error and the fourth-power error, are employed to construct the cost functions of the classical LMS and LMF [1,6]. Based on these two measures, many AF algorithms have been derived to implement SI. For example, the well-known LMS algorithm aims to minimize the squared error to form the cost function (CF) J LMS (n) = (1/2)E[e 2 (n)]. Considering the commonly used stochastic gradient method (SGM), the LMS updating equation is

ĥ(n + 1) = ĥ(n) + µ LMS e(n)x(n),

where µ LMS denotes the overall step size. Considering the normalized AF theory, the NLMS updating equation is given by

ĥ(n + 1) = ĥ(n) + µ NLMS e(n)x(n) / (‖x(n)‖ 2 + ε NLMS ),

where ε NLMS > 0 prevents division by zero, µ NLMS is a step size, and ‖·‖ denotes the Euclidean norm. Then, the LMF is obtained by using (1/4)E[e 4 (n)] instead of (1/2)E[e 2 (n)], and its updating equation is

ĥ(n + 1) = ĥ(n) + µ LMF e 3 (n)x(n).
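For concreteness, the NLMS recursion above can be sketched in a few lines of NumPy (a minimal illustration; the function name and the toy parameter values are ours, not from the paper):

```python
import numpy as np

def nlms_identify(x, d, L, mu=0.3, eps=0.01):
    """One pass of NLMS system identification.

    x : input signal (1-D array), d : desired signal d(n),
    L : filter length, mu : step size, eps : regularization term.
    Returns the final coefficient estimate h_hat.
    """
    h_hat = np.zeros(L)
    for n in range(L - 1, len(x)):
        x_n = x[n - L + 1:n + 1][::-1]                 # input vector x(n)
        e = d[n] - x_n @ h_hat                         # estimation error e(n)
        h_hat += mu * e * x_n / (x_n @ x_n + eps)      # normalized update
    return h_hat
```

In the noiseless case the estimate converges to the true impulse response, which is what the misalignment curves in the paper measure.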

The Proposed GGS-MAF Algorithms
The devised algorithms, named generalized group-sparse mixture adaptive filtering (GGS-MAF) algorithms, introduce a novel gain allocation scheme into the MEC algorithm and are derived in detail below. We first propose a mixed error criterion (MEC) algorithm to provide resistance to impulsive noise in the context of SI. Then, the family of constructed GGS-MAF algorithms is presented in detail.

Mixed Error Criterion Algorithm
As we know, a disturbance in the error signal with the shape of an outlier, such as impulsive noise, probably degrades the performance of LMS-type algorithms. To improve the LMS's performance, two estimation errors, called the two-order logarithm error and the p-order sign error, are constructed and used to realize the MEC algorithm. Within the AF theory, the modified CF for the MEC algorithm can be written as

J MEC (n) = α (1/2) E[ln(τ + e 2 (n))] + (1 − α)(1/p) E[|e(n)| p ],

where the parameter α ∈ [0, 1] controls the mixing between the two different errors, and τ > 0 is used to prevent the denominator in the resulting gradient from being zero. Taking the gradient of J MEC (n) with respect to ĥ(n), we have

∂J MEC (n)/∂ĥ(n) = −[α e(n)/(τ + e 2 (n)) + (1 − α)|e(n)| p−1 sgn(e(n))] x(n).

Then, the MEC update equation obtained from the SGM is

ĥ(n + 1) = ĥ(n) + µ MEC [α e(n)/(τ + e 2 (n)) + (1 − α)|e(n)| p−1 sgn(e(n))] x(n),

with a step size of µ MEC . It is worth noting that the MEC algorithm behaves like the squared-error algorithms for α = 1, while it works as a p-order sign-error algorithm when α is close to 0.
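One MEC stochastic-gradient step can be sketched as follows, assuming a mixed cost of the form J MEC (n) = α·(1/2)E[ln(τ + e 2 (n))] + (1 − α)·(1/p)E[|e(n)| p ] (the function name and default parameter values are illustrative, not from the paper). The logarithmic term saturates for large |e(n)|, which is what gives the criterion its impulse resistance:

```python
import numpy as np

def mec_update(h_hat, x_n, e, mu=2.5e-4, alpha=0.5, p=1.2, tau=1e-2):
    """One MEC stochastic-gradient step (sketch under the assumed cost)."""
    # Gradient of the logarithmic (robust squared-error) term: e / (tau + e^2).
    # For large |e| this term shrinks, damping impulsive outliers.
    log_term = e / (tau + e**2)
    # Gradient of the p-order sign-error term: |e|^(p-1) * sign(e)
    sign_term = np.abs(e)**(p - 1) * np.sign(e)
    return h_hat + mu * (alpha * log_term + (1 - alpha) * sign_term) * x_n
```

Note that an outlier error of amplitude 100 produces a smaller coefficient adjustment than a moderate error, unlike the plain LMS update, whose adjustment grows linearly with e(n).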

The GGS-MAF Algorithms
In the well-known SGM, we attempt to seek out the minimum of the cost function without regard to the structure of the unknown parameter space. Herein, we use a Riemannian metric structure (RMS) presented in [24] instead of the Euclidean space to provide rapid convergence within the Pt updating method.
In the RMS, the cost functions are given by J(d, x, ĥ) [24], where the distance between ĥ(n + 1) and ĥ(n) becomes D(ĥ(n + 1), ĥ(n)) under the Riemannian metric tensor G −1 (n) [24]. According to the proportionate-type AF updating scheme, the reported proportionate-type algorithms can be written as the solution of the following generic update [25,26]:

ĥ(n + 1) = ĥ(n) + µ 1 G(n)x(n)e(n) / (x T (n)G(n)x(n) + ε),

where µ 1 represents the step size. To make the MEC suitable for exploiting the group-sparse characteristics of blocked echo systems, we introduce the proportionate group-update approach into the MEC algorithm to further utilize the prior group-sparse information. In this case, the GGS-MAF algorithms seek a solution of

ĥ(n + 1) = ĥ(n) + µ GGS−MAF [α e(n)/(τ + e 2 (n)) + (1 − α)|e(n)| p−1 sgn(e(n))] G 1 (n)x(n) / (x T (n)G 2 (n)x(n) + ε GGS−MAF ),   (15)

where G 1 (n) and G 2 (n) are gain matrices obtained from the mixed norms ‖ĥ‖ 2,c , (c = 0, 1). Partitioning ĥ(n) into N = L/B groups ĥ k (n) of length B, the two mixed norms are

‖ĥ(n)‖ 2,0 ≈ Σ k=1..N (1 − e^(−β‖ĥ k (n)‖ 2 ))   (13)

and

‖ĥ(n)‖ 2,1 = Σ k=1..N ‖ĥ k (n)‖ 2 .   (14)

Herein, the approximate l 0 -norm scheme in [27] is considered to implement ‖ĥ‖ 2,0 , N denotes the number of groups, B denotes the length of each group, and β > 0 is a regularization parameter. Considering the combinations of G 1 (n) and G 2 (n) in (15), we can obtain four different GGS-MAF algorithms, named GGS-MAF-1, GGS-MAF-2, GGS-MAF-3 and GGS-MAF-4. GGS-MAF-1 is obtained when G 1 = G 2 and both are implemented by using (13). The GGS-MAF-2 algorithm is obtained when G 1 is achieved from (13) and G 2 from (14). The GGS-MAF-3 algorithm is achieved when G 1 is obtained from (14) and G 2 from (13). If G 1 = G 2 and both are implemented by using (14), we obtain GGS-MAF-4. Thus, the proposed GGS-MAF algorithms are realized by the combination of G 1 and G 2 , which control the gain assignment. Herein, G 1 and G 2 take the diagonal form

G i (n) = diag{g 1 (n)1 B , g 2 (n)1 B , · · · , g N (n)1 B },   (16)

where 1 B in (16) denotes a B-length row vector, and the group gains g k (n) are calculated from the contribution of the k-th group, namely from 1 − e^(−β‖ĥ k ‖ 2 ) when ‖ĥ‖ 2,0 is utilized and from ‖ĥ k ‖ 2 when ‖ĥ‖ 2,1 is used, where the time index n is ignored for brevity.
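The group gain matrices are diagonal, with one gain shared by all B taps of a group. A minimal sketch of how such a diagonal could be built from the l 2,1 or approximate l 2,0 group measures (the PNLMS-style small floor `delta` and the sum normalization are our assumptions, not taken from the paper):

```python
import numpy as np

def group_gain_diag(h_hat, B, c=0, beta=20.0, delta=1e-2):
    """Diagonal of a group proportionate gain matrix G (sketch).

    h_hat : current estimate of length L = N*B, B : group size,
    c = 1 -> l_{2,1} group measure ||h_k||_2,
    c = 0 -> approximate l_{2,0} measure 1 - exp(-beta*||h_k||_2).
    delta keeps inactive groups adapting with a small nonzero gain.
    """
    groups = h_hat.reshape(-1, B)               # N groups of length B
    gnorm = np.linalg.norm(groups, axis=1)      # ||h_k||_2 per group
    if c == 0:
        gnorm = 1.0 - np.exp(-beta * gnorm)     # smoothed l0 measure
    g = gnorm + delta                           # floor for inactive groups
    g = g / g.sum()                             # normalize the gains
    return np.repeat(g, B)                      # expand to a per-tap diagonal
```

With this scheme, a group containing active taps receives most of the adaptation gain, which is the mechanism behind the fast initial convergence of the proportionate family.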
From the derivation of the GGS-MAF algorithms, we can see that they are realized by using the mixture of errors and the mixed l 2,c -norm to provide resistance to impulsive noise and to exploit the prior sparse-structure information. In the GGS-MAF algorithms, the l 2 -norm penalty serves to divide the coefficients into groups, while the l c -norm exploits the sparseness of the systems.

Results Analysis
Various simulation experiments are set up to discuss the estimation performance of the developed GGS-MAF algorithms under different inputs. In all the simulations, the additive noise is r(n) = v(n) + i(n), where v(n) denotes a white Gaussian noise (WGN) with an SNR of 30 dB, and i(n) represents the impulsive interference modeled by the Bernoulli-Gaussian (BG) distribution, i(n) = b(n)g(n). Herein, g(n) is another WGN, b(n) denotes a Bernoulli process with probability p(b(n) = 1) = 0.1, the signal-to-interference ratio (SIR) is 15 dB, and L = 1024. In these simulations, the regularization parameters are set to ε NLMS = 0.01 and ε GGS−MAF = ε PNLMS = (1/L)ε NLMS [26], and the step sizes are set to µ LMS = µ ZA−LMS = µ RZA−LMS = 3 × 10 −4 , µ MEC = 2.5 × 10 −4 , µ LMF = 3 × 10 −5 , µ NLMS = 0.3, µ PNLMS = µ IPNLMS = µ ZA−PNLMS = 0.1 and µ GGS−MAF = 0.065. The performance of the GGS-MAF algorithms, measured by the NM defined as 10 log 10 (‖h − ĥ‖ 2 2 / ‖h‖ 2 2 ), is discussed over two group-sparse systems. The first group-sparse system has one group of active channel coefficients distributed over taps [267, 282], and the second possesses two groups of active coefficients appearing over [267, 282] and [778, 793], as given in Figure 1.
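The noise model and the NM metric above can be sketched as follows (a hedged sketch: the convention of setting the impulse power directly from the SIR is our assumption, and the function names are ours):

```python
import numpy as np

def additive_noise(signal_power, n, snr_db=30.0, sir_db=15.0, p_imp=0.1, rng=None):
    """r(n) = v(n) + i(n): WGN at the given SNR plus Bernoulli-Gaussian
    impulses i(n) = b(n)g(n), with P(b(n) = 1) = p_imp."""
    if rng is None:
        rng = np.random.default_rng()
    v = rng.standard_normal(n) * np.sqrt(signal_power / 10**(snr_db / 10))
    b = rng.random(n) < p_imp                   # Bernoulli process b(n)
    g = rng.standard_normal(n) * np.sqrt(signal_power / 10**(sir_db / 10))
    return v + b * g

def normalized_misalignment_db(h, h_hat):
    """NM = 10 log10(||h - h_hat||^2 / ||h||^2), in dB."""
    return 10 * np.log10(np.sum((h - h_hat)**2) / np.sum(h**2))
```

For example, an estimate with 10% residual error in every tap gives an NM of −20 dB, which is the scale on which the misalignment curves in the figures are plotted.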

Performance Comparisons of Four GGS-MAF Algorithms
From Equation (15), it is clear that there are four forms of GGS-MAF algorithms. An experiment is created to discuss the performance of these GGS-MAF algorithms with a colored noise (CN) input. The one-group system is used for the first 7000 iterations, and the two-group system for the next 7000 iterations; the experiment is similar to those in [9,14,19]. The CN is generated by filtering WGN through a first-order autoregressive process with a pole at 0.8. Herein, B = 4 and β = 20 are selected for all the GGS-MAF algorithms, and the simulation results are shown in Figure 2. We can see from the results that the GGS-MAF-1 algorithm behaves the best, since it uses only the l 2,0 -norm constraints, where the l 0 -norm can well exploit the sparsity of the systems. Thus, only GGS-MAF-1 is chosen for comparison with the previous AF algorithms under different group sizes; the results are presented in Figure 3. From Figure 3a, it is obvious that the GGS-MAF-1 algorithm converges quickly and reaches a lower steady-state error than the other algorithms when the input is a WGN signal. Figure 3b,c show that the convergence speed of the GGS-MAF-1 algorithm degrades for CN and SS inputs, but its performance is still better than that of the other algorithms. Note that GGS-MAF-1 with B = 4 exhibits the best performance, while GGS-MAF-1 with B = 64 behaves the worst. This is because the size of each active group is 16 in the two systems; if the group size B is greater than 16, each group covers many more inactive taps, and hence the performance is degraded. In addition, a sudden jump appears in Figures 2 and 3 because the system changes from the one-group system to the two-group system, which shows that the proposed algorithm can track time-varying systems, similar to [9,14,19].
Since the number of active coefficients in the two-group system is double that of the one-group system, the two-group system is less sparse, and hence its normalized misalignment is higher.

SNR Effects on the Proposed GGS-MAF-1 Algorithm
In this experiment, different SNRs (from 0 dB to 40 dB) are used to analyze the NM of the proposed GGS-MAF-1 algorithm for the CN input. B = 4 is chosen for the GGS-MAF-1 algorithm, and the other parameters are the same as in the previous experiments. The simulated performance of the GGS-MAF-1 algorithm at different SNRs is shown in Figure 4, which demonstrates that the NM decreases as the SNR increases from 0 to 40 dB. We can also see that the NM of the proposed GGS-MAF-1 algorithm is lower than that of the other algorithms.

Conclusions
A group of GGS-MAF algorithms obtained from the mixed error criterion has been proposed, investigated and discussed for identifying group-sparse systems. The constructed GGS-MAF algorithms have been achieved by incorporating mixed l 2,c , (c = 0, 1) norm constraints into the MEC cost function to exploit the prior group-sparse information, and the proposed GGS-MAF algorithms have been derived in detail. Different parameters are considered to discuss the performance of the GGS-MAF algorithms under three different inputs. The results show that the devised GGS-MAF algorithms converge faster and give lower NM compared with the traditional AF algorithms for both one-group and two-group systems. In the future, the proposed algorithm can be investigated for multi-channel network applications.