Open Access: this article is freely available and re-usable.

*Entropy* **2017**, *19*(6), 281; https://doi.org/10.3390/e19060281

Article

An Enhanced Set-Membership PNLMS Algorithm with a Correntropy Induced Metric Constraint for Acoustic Channel Estimation

^{1} College of Information and Communications Engineering, Harbin Engineering University, Harbin 150001, China

^{2} College of Communication and Electronic Engineering, Qiqihar University, Qiqihar 161006, China

^{3} National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China

^{*} Author to whom correspondence should be addressed.

Received: 30 April 2017 / Accepted: 13 June 2017 / Published: 15 June 2017

## Abstract


In this paper, a sparse set-membership proportionate normalized least mean square (SM-PNLMS) algorithm integrated with a correntropy induced metric (CIM) penalty is proposed for acoustic channel estimation and echo cancellation. The CIM is used to construct a new cost function within the kernel framework. The proposed CIM penalized SM-PNLMS (CIMSM-PNLMS) algorithm is derived and analyzed in detail. A desired zero attraction term is incorporated into the updating equation of the proposed CIMSM-PNLMS algorithm to force the inactive coefficients toward zero. The performance of the proposed CIMSM-PNLMS algorithm is investigated for estimating an underwater acoustic communication channel and an echo channel. The obtained results demonstrate that the proposed CIMSM-PNLMS algorithm converges faster and provides a smaller estimation error in comparison with the NLMS, PNLMS, IPNLMS, SM-PNLMS and zero-attracting SM-PNLMS (ZASM-PNLMS) algorithms.

Keywords: set-membership proportionate normalized least mean square; sparse adaptive filtering; PNLMS algorithm; zero attracting algorithm; correntropy induced metric (CIM)

## 1. Introduction

Adaptive filtering techniques have been widely used for echo cancellation, channel equalization, signal enhancement and active noise control [1]. A great number of adaptive filtering algorithms have been developed to meet the various requirements of practical engineering applications. Among these algorithms, the least mean square (LMS) algorithm and its normalized form have been extensively studied because they are easy to implement and perform well [2,3]. These two algorithms are the most classical ones applied to channel estimation and echo cancellation in recent decades. Furthermore, set-membership (SM) filtering techniques have been proposed not only to reduce the computational burden but also to improve the estimation performance [4,5,6,7,8,9,10,11,12,13,14,15]. The SM filtering technique uses a bound on the magnitude of the estimation error to split an adaptive filtering algorithm into two steps: (1) information evaluation and (2) parameter update. Since the second step does not occur at every iteration, SM filters have a low computational cost; the data-selective update is what reduces the complexity. Furthermore, SM filtering algorithms also achieve lower steady-state misadjustment, such as the SM normalized LMS (SM-NLMS) algorithm [8,14]. However, the SM-NLMS algorithm cannot exploit the sparse characteristics of multi-path channels. Consequently, adaptive filtering algorithms for sparse channel estimation and sparse system identification have been proposed, including the proportionate NLMS (PNLMS) and the zero attracting adaptive filtering algorithms [16,17,18,19,20,21,22,23,24,25,26,27,28,29].

The PNLMS algorithm assigns a different gain to each coefficient according to its magnitude. In effect, the PNLMS algorithm is a variable step-size NLMS algorithm in which the gain assignment scheme controls the step size. As a result, the PNLMS algorithm converges faster at the initial iterations than the classical NLMS algorithm. However, the performance of the PNLMS algorithm may be worse than that of the NLMS algorithm near steady state. An improved PNLMS (IPNLMS) algorithm has therefore been presented to enhance the performance of the PNLMS algorithm [30], but its performance still leaves room for improvement. After that, the SM technique was integrated into the PNLMS algorithm to develop the SM-PNLMS algorithm [31]. The results showed that the SM-PNLMS performs better than the PNLMS in terms of convergence and estimation bias.

With the development of signal processing, sparse signal processing has attracted increasing attention in recent decades. In particular, compressed sensing (CS) [32] has promoted sparse signal processing because it achieves high recovery accuracy for sparse signals. CS concepts have therefore been incorporated into adaptive filtering algorithms to exploit the sparseness of sparse channels or systems. Motivated by CS, the ${l}_{1}$-norm penalty has been integrated into the LMS cost function to create the zero attracting LMS (ZA-LMS) algorithm. Compared with the traditional LMS algorithm, the ZA-LMS adds an attraction term that forces the inactive coefficients toward zero. However, the ZA-LMS penalizes all the coefficients uniformly, which may degrade the estimation performance for less sparse signals. A reweighted ZA-LMS (RZA-LMS) algorithm was then proposed by Chen et al. [33]; the RZA-LMS assigns a different penalty to each coefficient. However, these LMS algorithms have drawbacks in dealing with colored inputs and are sensitive to the scaling of the inputs. To overcome these drawbacks, the zero attracting techniques have been extended to the affine projection algorithms [20,21,34,35,36,37,38], combined LMS [32], high-order error criterion algorithms [28,29] and PNLMS algorithms [24]. However, some of these extended zero attracting adaptive algorithms have high computational complexity.

In this paper, a correntropy induced metric (CIM) penalized SM-PNLMS algorithm is proposed to fully utilize the sparsity of acoustic and echo channels. The CIM scheme is developed within the kernel framework. The CIM penalty is then integrated into the cost function of the SM-PNLMS algorithm to create an approximate ${l}_{0}$-norm constrained zero attraction. The proposed CIM constrained SM-PNLMS (CIMSM-PNLMS) algorithm is derived in detail, and its estimation behavior is investigated on an underwater acoustic channel and an echo channel. The numerical results illustrate that the proposed CIMSM-PNLMS algorithm converges faster and provides a smaller estimation error in comparison with the NLMS, PNLMS, IPNLMS, SM-PNLMS and zero-attracting SM-PNLMS (ZASM-PNLMS) algorithms.

The structure of the paper is as follows. Section 2 reviews the SM filtering theory and the SM-PNLMS algorithm. Section 3 presents the proposed CIMSM-PNLMS algorithm within the framework of the SM and zero-attracting (ZA) theories. Section 4 evaluates the estimation performance of the proposed CIMSM-PNLMS algorithm. Section 5 concludes the paper.

## 2. Review of Related Algorithms

#### 2.1. The Review of the SM Filtering Theory

Assume that the channel input signal is $\mathbf{x}(n)={\left[x(n),x(n-1),x(n-2),\cdots ,x(n-N+1)\right]}^{T}$ and that the unknown impulse response of a finite impulse response (FIR) channel is denoted by $\mathbf{h}={\left[{h}_{0},{h}_{1},\cdots ,{h}_{N-1}\right]}^{T}$, where N represents the total number of channel coefficients. The output of the adaptive estimator is

$$y(n)={\mathbf{x}}^{T}(n)\widehat{\mathbf{h}}(n),$$

where $\widehat{\mathbf{h}}(n)$ denotes the estimated channel vector at instant n. The desired signal is

$$d(n)={\mathbf{x}}^{T}(n)\mathbf{h}+v(n),$$

where $v(n)$ is a noise signal assumed to be independent of the input signal $\mathbf{x}(n)$. In this case, the estimation error is

$$e(n)=d(n)-y(n).$$
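To make the signal model concrete, the following NumPy sketch evaluates Equations (1)-(3) for a single time instant with a toy sparse channel; all names and numerical values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8                             # number of channel taps (small, for illustration)
h = np.zeros(N)
h[2] = 1.0                        # a sparse FIR channel with a single active tap
h_hat = np.zeros(N)               # adaptive estimate, initialized to zero

x = rng.standard_normal(N)        # regressor x(n) = [x(n), ..., x(n-N+1)]^T
v = 1e-3 * rng.standard_normal()  # additive noise v(n)

y = x @ h_hat                     # Eq. (1): adaptive filter output y(n)
d = x @ h + v                     # Eq. (2): desired signal d(n)
e = d - y                         # Eq. (3): estimation error e(n)
```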

The SM technique defines a model space $\Theta $ that contains input and output vector pairs. A specified bound is set to select the updating criterion for the data pairs within $\Theta $. The goal of the SM filtering criterion is to solve an optimization subject to

$$|e(n){|}^{2}\le {\gamma}^{2},$$

where $\gamma $ represents the specified bound [2]. For $\forall \widehat{\mathbf{h}}(n+1)\notin \Theta $, the optimization problem of the SM-NLMS becomes [2,6,8,9,10,11]

$$\mathrm{min}{\left\| \widehat{\mathbf{h}}(n+1)-\widehat{\mathbf{h}}(n)\right\|}_{2}^{2}\quad \mathrm{s}.\mathrm{t}.\quad d(n)-{\mathbf{x}}^{T}(n)\widehat{\mathbf{h}}(n+1)=\gamma .$$

A Lagrange multiplier method is used to solve the minimization in Equation (5). The updating equation of the SM-NLMS [8] is

$$\widehat{\mathbf{h}}(n+1)=\widehat{\mathbf{h}}(n)+\frac{{\mu}_{\mathrm{SM}}(n)\mathbf{x}(n)e(n)}{{\mathbf{x}}^{T}(n)\mathbf{x}(n)+{\epsilon}_{\mathrm{SM}}},$$

where

$${\mu}_{\mathrm{SM}}(n)=\left\{\begin{array}{ll}1-\frac{\gamma}{\left|e(n)\right|}, & \mathrm{if}\ \left|e(n)\right|>\gamma \\ 0, & \mathrm{otherwise}\end{array}\right.,$$

and ${\epsilon}_{\mathrm{SM}}$ is a small positive constant that prevents the denominator from being zero. The update in Equation (6) occurs when $d(n)-{\mathbf{x}}^{T}(n)\widehat{\mathbf{h}}(n+1)>\gamma $ or $d(n)-{\mathbf{x}}^{T}(n)\widehat{\mathbf{h}}(n+1)<-\gamma $ [9,37].
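As a minimal sketch of the SM-NLMS recursion in Equations (6) and (7) (the function name and the default value of `eps` are our own assumptions), one iteration can be written as:

```python
import numpy as np

def sm_nlms_update(h_hat, x, d, gamma, eps=1e-8):
    """One SM-NLMS iteration, Eqs. (6)-(7): update only when |e(n)| > gamma."""
    e = d - x @ h_hat
    if abs(e) <= gamma:
        return h_hat, 0.0                      # error inside the bound: no update
    mu = 1.0 - gamma / abs(e)                  # data-dependent step size, Eq. (7)
    return h_hat + mu * x * e / (x @ x + eps), mu
```

When the bound is active, the a posteriori error $d(n)-{\mathbf{x}}^{T}(n)\widehat{\mathbf{h}}(n+1)$ shrinks to magnitude $\gamma $ (up to the $\epsilon $ regularization), which is exactly the constraint in Equation (5).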

#### 2.2. The Review of the SM-PNLMS Algorithm

Similar to the PNLMS algorithm [22], a gain assignment matrix is incorporated into the SM-NLMS algorithm to construct the SM-PNLMS algorithm, whose updating equation is

$$\widehat{\mathbf{h}}(n+1)=\widehat{\mathbf{h}}(n)+\frac{{\mu}_{\mathrm{SM}}\mathbf{x}(n)\mathbf{G}(n)e(n)}{{\mathbf{x}}^{T}(n)\mathbf{G}(n)\mathbf{x}(n)+{\epsilon}_{\mathrm{SM}}},$$

where ${\mu}_{\mathrm{SM}}$ is an overall step size and $\mathbf{G}(n)$ is a diagonal matrix that assigns a different step size to each coefficient, defined as

$$\mathbf{G}(n)=\mathrm{diag}\left\{{g}_{0}(n),{g}_{1}(n),\dots ,{g}_{N-1}(n)\right\},$$

where each element of $\mathbf{G}(n)$ is calculated by

$${g}_{i}(n)=\frac{{\alpha}_{i}(n)}{{\displaystyle \sum _{i=0}^{N-1}}{\alpha}_{i}(n)},\quad 0\le i\le N-1,$$

$${\alpha}_{i}(n)=\mathrm{max}\left\{\rho \,\mathrm{max}\left\{\delta ,\left|{\widehat{h}}_{0}(n)\right|,\left|{\widehat{h}}_{1}(n)\right|,\cdots ,\left|{\widehat{h}}_{N-1}(n)\right|\right\},\left|{\widehat{h}}_{i}(n)\right|\right\}.$$

Here, $\rho $ is a positive parameter whose value usually ranges from $\frac{1}{N}$ to $\frac{5}{N}$; it prevents the update of ${\widehat{h}}_{i}(n)$ from stalling when its magnitude is much smaller than that of the largest coefficient. Herein, we set $\rho =\frac{5}{N}$. The parameter $\delta $ is a regularization parameter that prevents the update from stalling when all the coefficients are zero at the initial iterations. In this paper, we set $\delta =0.5$.
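The gain assignment of Equations (10) and (11) can be sketched as follows; the helper name and the example values of $\rho $ and $\delta $ are ours, chosen only for illustration:

```python
import numpy as np

def pnlms_gains(h_hat, rho, delta):
    """Proportionate gains g_i(n) of Eqs. (10)-(11)."""
    abs_h = np.abs(h_hat)
    floor = rho * max(delta, abs_h.max())   # inner max of Eq. (11)
    alpha = np.maximum(floor, abs_h)        # alpha_i(n)
    return alpha / alpha.sum()              # g_i(n), normalized to sum to 1

# illustrative call: one active tap out of four
g = pnlms_gains(np.array([0.0, 0.0, 1.0, 0.0]), rho=0.01, delta=0.01)
```

The active tap receives almost all of the gain, while the small floor keeps the inactive taps from stalling completely.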

## 3. The Proposed CIMSM-PNLMS Algorithm

Although the PNLMS algorithm can exploit sparse channel characteristics and the SM-PNLMS algorithm reduces the computational complexity of the PNLMS by means of the SM technique, these algorithms may perform worse than the NLMS and SM-NLMS, respectively. The PNLMS only improves the convergence at the initial iterations, and it may exhibit worse steady-state estimation behavior or slower convergence as it approaches steady state [22,23]. To further improve the convergence and estimation performance of the SM-PNLMS and to exploit the sparsity of acoustic channels, a CIM penalized SM-PNLMS (CIMSM-PNLMS) is proposed by introducing a CIM penalty into the cost function of the SM-PNLMS algorithm. The CIM constraint further exploits the sparsity of sparse systems or channels and improves the convergence and steady-state performance at the later iterations. Furthermore, an ${l}_{1}$-norm and a reweighted ${l}_{1}$-norm are also used to construct the zero attracting SM-PNLMS (ZASM-PNLMS) and reweighted ZASM-PNLMS (RZASM-PNLMS) algorithms for the sake of comparison. Herein, the CIM is discussed within the kernel framework [39,40,41]. The correntropy of two arbitrary vectors can be described as

$$\mathrm{V}(\mathbf{X},\mathbf{Y})=\frac{1}{N}{\displaystyle \sum _{i=1}^{N}}k({x}_{i},{y}_{i}),$$

where N denotes the number of elements in the vectors and $k(\cdot )$ represents the kernel function. A Gaussian kernel is used to develop the CIM, for which $k(\cdot )$ is written as

$$k(x,y)=\frac{1}{\sigma \sqrt{2\pi}}\mathrm{exp}\left(-\frac{{(x-y)}^{2}}{2{\sigma}^{2}}\right),$$

where $\sigma $ denotes the kernel width. The nonlinear metric CIM is defined as

$$\mathrm{CIM}(\mathbf{X},\mathbf{Y})={\left(k(0)-V(\mathbf{X},\mathbf{Y})\right)}^{\frac{1}{2}}.$$

Squaring both sides of Equation (14) and setting $\mathbf{Y}=\mathbf{0}$, we obtain

$${\mathrm{CIM}}^{2}(\mathbf{X},0)=\frac{k(0)}{N}{\displaystyle \sum _{i=1}^{N}}\left(1-\mathrm{exp}\left(-\frac{{x}_{i}^{2}}{2{\sigma}^{2}}\right)\right).$$
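The following sketch (the function name is ours) evaluates Equation (15) and illustrates its role as a smooth ${l}_{0}$-norm approximation: for a small kernel width, each nonzero entry contributes $k(0)/N$ and each zero entry contributes nothing:

```python
import numpy as np

def cim_sq(x, sigma):
    """CIM^2(x, 0) from Eq. (15): a smooth approximation of the l0 norm."""
    k0 = 1.0 / (sigma * np.sqrt(2.0 * np.pi))   # Gaussian kernel at zero, k(0)
    return (k0 / len(x)) * np.sum(1.0 - np.exp(-x**2 / (2.0 * sigma**2)))
```

For a vector with one nonzero entry out of four and a small $\sigma $, the value is very close to $k(0)/4$, i.e., $(k(0)/N)$ times the number of active taps.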

Then, the CIM is incorporated into the cost function of the SM-PNLMS algorithm to develop the CIMSM-PNLMS algorithm. For $\forall \widehat{\mathbf{h}}(n+1)\notin \Theta $, the proposed CIMSM-PNLMS algorithm solves the optimization problem

$$\begin{array}{c}\mathrm{min}{\left\| \widehat{\mathbf{h}}(n+1)-\widehat{\mathbf{h}}(n)\right\|}_{{\mathbf{G}}^{-1}(n)}^{2}+{\rho}_{\mathrm{CIM}}{\mathbf{G}}^{-1}(n){\mathrm{CIM}}^{2}(\widehat{\mathbf{h}}(n+1),0)\hfill \\ \mathrm{s}.\mathrm{t}.\phantom{\rule{4.25pt}{0ex}}d(n)-{\mathbf{x}}^{T}(n)\widehat{\mathbf{h}}(n+1)=\gamma \hfill \end{array}.$$

Then, a new cost function of the CIMSM-PNLMS is achieved

$$\begin{array}{cc}\hfill J(n)& ={\left(\widehat{\mathbf{h}}(n+1)-\widehat{\mathbf{h}}(n)\right)}^{T}{\mathbf{G}}^{-1}(n)\left(\widehat{\mathbf{h}}(n+1)-\widehat{\mathbf{h}}(n)\right)\hfill \\ & +{\rho}_{\mathrm{CIM}}{\mathbf{G}}^{-1}(n){\mathrm{CIM}}^{2}\left(\widehat{\mathbf{h}}(n+1),0\right)\hfill \\ & +\lambda \left(d(n)-{\mathbf{x}}^{T}(n)\widehat{\mathbf{h}}(n+1)-\gamma \right)\hfill \end{array}.$$

Setting

$$\frac{\partial J(n)}{\partial \widehat{\mathbf{h}}(n+1)}=0$$

and

$$\frac{\partial J(n)}{\partial \lambda}=0,$$

we obtain

$${\mathbf{G}}^{-1}(n)\left(\widehat{\mathbf{h}}(n+1)-\widehat{\mathbf{h}}(n)\right)+{\rho}_{\mathrm{CIM}}{\mathbf{G}}^{-1}(n)\frac{\partial {\mathrm{CIM}}^{2}\left(\widehat{\mathbf{h}}(n+1),0\right)}{\partial \widehat{\mathbf{h}}(n+1)}-\lambda \mathbf{x}(n)=0$$

and

$${\mathbf{x}}^{T}(n)\widehat{\mathbf{h}}(n+1)=d(n)-\gamma .$$

Left-multiplying both sides of Equation (20) by $\mathbf{G}(n)$, we obtain

$$\left(\widehat{\mathbf{h}}(n+1)-\widehat{\mathbf{h}}(n)\right)+{\rho}_{\mathrm{CIM}}\frac{\partial {\mathrm{CIM}}^{2}\left(\widehat{\mathbf{h}}(n+1),0\right)}{\partial \widehat{\mathbf{h}}(n+1)}-\lambda \mathbf{G}(n)\mathbf{x}(n)=0.$$

Thus, we have

$$\widehat{\mathbf{h}}(n+1)=\widehat{\mathbf{h}}(n)+\lambda \mathbf{G}(n)\mathbf{x}(n)-{\rho}_{\mathrm{CIM}}\frac{\partial {\mathrm{CIM}}^{2}\left(\widehat{\mathbf{h}}(n+1),0\right)}{\partial \widehat{\mathbf{h}}(n+1)}.$$

Left-multiplying Equation (23) by ${\mathbf{x}}^{T}(n)$ and substituting the result into Equation (21) yields

$$\lambda =\frac{e(n)-\gamma +{\mathbf{x}}^{T}(n){\rho}_{\mathrm{CIM}}\frac{\partial {\mathrm{CIM}}^{2}\left(\widehat{\mathbf{h}}(n+1),0\right)}{\partial \widehat{\mathbf{h}}(n+1)}}{{\mathbf{x}}^{T}(n)\mathbf{G}(n)\mathbf{x}(n)}.$$

Substituting Equation (24) into Equation (23), we obtain

$$\begin{array}{cc}\hfill \widehat{\mathbf{h}}(n+1)& =\widehat{\mathbf{h}}(n)+\frac{{\mu}_{\mathrm{CIM}}\mathbf{G}(n)\mathbf{x}(n)e(n)}{{\mathbf{x}}^{T}(n)\mathbf{G}(n)\mathbf{x}(n)}\hfill \\ & -{\rho}_{\mathrm{CIM}}\left(\mathbf{I}-\frac{\mathbf{x}(n){\mathbf{x}}^{T}(n)\mathbf{G}(n)}{{\mathbf{x}}^{T}(n)\mathbf{G}(n)\mathbf{x}(n)}\right)\frac{\partial {\mathrm{CIM}}^{2}\left(\widehat{\mathbf{h}}(n+1),0\right)}{\partial \widehat{\mathbf{h}}(n+1)}.\hfill \end{array}$$

Provided that the minimum-disturbance property is not significantly affected, a simpler form can be used to reduce the computational burden. Herein, the term $\frac{\mathbf{x}(n){\mathbf{x}}^{T}(n)\mathbf{G}(n)}{{\mathbf{x}}^{T}(n)\mathbf{G}(n)\mathbf{x}(n)}$ is neglected, similarly to [24,37,42]. In addition, from Equation (15), we obtain

$${\mathrm{CIM}}^{2}\left(\widehat{\mathbf{h}}(n+1),0\right)=\frac{1}{N\sigma \sqrt{2\pi}}\left(1-\mathrm{exp}\left(-\frac{{\left(\widehat{\mathbf{h}}(n+1)\right)}^{2}}{2{\sigma}^{2}}\right)\right).$$

Thus, we have

$$\frac{\partial {\mathrm{CIM}}^{2}\left(\widehat{\mathbf{h}}(n+1),0\right)}{\partial \widehat{\mathbf{h}}(n+1)}=\frac{1}{N{\sigma}^{3}\sqrt{2\pi}}\widehat{\mathbf{h}}(n+1)\mathrm{exp}\left(-\frac{{\left(\widehat{\mathbf{h}}(n+1)\right)}^{2}}{2{\sigma}^{2}}\right).$$

Assuming that $\widehat{\mathbf{h}}(n+1)\approx \widehat{\mathbf{h}}(n)$, the update equation of the CIMSM-PNLMS can be written as

$$\widehat{\mathbf{h}}(n+1)=\widehat{\mathbf{h}}(n)+\frac{{\mu}_{\mathrm{CIM}}\mathbf{x}(n)\mathbf{G}(n)e(n)}{{\mathbf{x}}^{T}(n)\mathbf{G}(n)\mathbf{x}(n)+{\epsilon}_{\mathrm{CIM}}}-{\rho}_{\mathrm{CIM}}\frac{1}{N{\sigma}^{3}\sqrt{2\pi}}\widehat{\mathbf{h}}(n)\mathrm{exp}\left(-\frac{{\left(\widehat{\mathbf{h}}(n)\right)}^{2}}{2{\sigma}^{2}}\right),$$

where ${\mu}_{\mathrm{CIM}}$ is obtained from Equation (7), and ${\epsilon}_{\mathrm{CIM}}$ in the second term of Equation (28) is a small positive constant that prevents the denominator from being zero. $\mathbf{G}(n)$ in the CIMSM-PNLMS is the same as in the SM-PNLMS algorithm. For $\forall \widehat{\mathbf{h}}(n+1)\in \Theta $, the updating equation of the CIMSM-PNLMS algorithm is

$$\widehat{\mathbf{h}}(n+1)=\widehat{\mathbf{h}}(n).$$

We can see that, in comparison with the SM-PNLMS algorithm, the updating equation of the proposed CIMSM-PNLMS algorithm has an additional term, $-{\rho}_{\mathrm{CIM}}\frac{1}{N{\sigma}^{3}\sqrt{2\pi}}\widehat{\mathbf{h}}(n)\mathrm{exp}\left(-\frac{{\left(\widehat{\mathbf{h}}(n)\right)}^{2}}{2{\sigma}^{2}}\right)$, which is denoted as the CIM zero attractor. The zero attraction strength of the CIM zero attractor is controlled by the parameter ${\rho}_{\mathrm{CIM}}$.

In summary, the proposed CIMSM-PNLMS algorithm exploits the sparsity of a sparse channel or system through the zero attraction penalty. The CIMSM-PNLMS algorithm can be summarized as

$$\widehat{\mathbf{h}}(n+1)=\underset{\mathrm{CIMSM}-\mathrm{PNLMS}\phantom{\rule{4.25pt}{0ex}}\mathrm{algorithm}}{\underbrace{\underset{\mathrm{SM}-\mathrm{PNLMS}\phantom{\rule{4.25pt}{0ex}}\mathrm{algorithm}}{\underbrace{\widehat{\mathbf{h}}(n)\stackrel{\mathrm{P}1}{\overbrace{+\mathrm{A}1}}}}\stackrel{\mathrm{P}2}{\overbrace{+\mathrm{A}2}}}},$$

where

$$\mathrm{A}1=\frac{{\mu}_{\mathrm{CIM}}\mathbf{x}(n)\mathbf{G}(n)e(n)}{{\mathbf{x}}^{T}(n)\mathbf{G}(n)\mathbf{x}(n)+{\epsilon}_{\mathrm{CIM}}},$$

$$\mathrm{A}2=-{\rho}_{\mathrm{CIM}}\frac{1}{N{\sigma}^{3}\sqrt{2\pi}}\widehat{\mathbf{h}}(n)\mathrm{exp}\left(-\frac{{\left(\widehat{\mathbf{h}}(n)\right)}^{2}}{2{\sigma}^{2}}\right),$$

and the $\mathrm{A}1$ term is the adaptive update term while the $\mathrm{A}2$ term is the sparsity constraint term, which acts as a zero attractor. The CIMSM-PNLMS algorithm thus provides two update paths, $\mathrm{P}1$ and $\mathrm{P}2$: $\mathrm{P}1$ moves $\widehat{\mathbf{h}}(n)$ toward the hyperplane defined by $e(n)=0$, while $\mathrm{P}2$ is the zero attractor that forces the zero or near-zero coefficients of $\widehat{\mathbf{h}}(n)$ toward zero. Similarly to the CIMSM-PNLMS, we also propose a zero attracting SM-PNLMS (ZASM-PNLMS) algorithm, implemented by integrating an ${l}_{1}$-norm into the cost function of the SM-PNLMS algorithm, whose updating equation is

$$\widehat{\mathbf{h}}(n+1)=\widehat{\mathbf{h}}(n)+\frac{{\mu}_{\mathrm{ZASM}}(n)\mathbf{x}(n)\mathbf{G}(n)e(n)}{{\mathbf{x}}^{T}(n)\mathbf{G}(n)\mathbf{x}(n)+{\epsilon}_{\mathrm{ZASM}}}-{\rho}_{\mathrm{ZASM}}\mathrm{sgn}[\widehat{\mathbf{h}}(n)],$$
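Putting the pieces together, one CIMSM-PNLMS iteration (the adaptive term A1 with the data-selective step size of Equation (7), plus the CIM zero attractor A2) can be sketched as below. The function signature and default parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def cimsm_pnlms_update(h_hat, x, d, gamma, rho_cim, sigma,
                       rho=0.05, delta=0.5, eps=1e-8):
    """One CIMSM-PNLMS iteration: SM-PNLMS term A1 plus CIM zero attractor A2."""
    N = len(h_hat)
    e = d - x @ h_hat
    # proportionate gain matrix G(n), stored as its diagonal vector g
    abs_h = np.abs(h_hat)
    alpha = np.maximum(rho * max(delta, abs_h.max()), abs_h)
    g = alpha / alpha.sum()
    # data-selective step size of Eq. (7)
    mu = 1.0 - gamma / abs(e) if abs(e) > gamma else 0.0
    a1 = mu * g * x * e / (x @ (g * x) + eps)                 # adaptive update term A1
    a2 = (-rho_cim / (N * sigma**3 * np.sqrt(2.0 * np.pi))
          * h_hat * np.exp(-h_hat**2 / (2.0 * sigma**2)))     # CIM zero attractor A2
    return h_hat + a1 + a2
```

Run on a toy sparse channel with white Gaussian input, the estimate converges toward the true taps while the attractor keeps the inactive taps near zero.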

## 4. Performance of the Proposed CIMSM-PNLMS Algorithm

In order to analyze the performance of the proposed CIMSM-PNLMS algorithm, five experiments are constructed within sparse channel estimation scenarios. In the first experiment, the convergence speed of the proposed CIMSM-PNLMS is investigated. In the second experiment, the mean square error (MSE) of the CIMSM-PNLMS at different SNRs is presented. In the third experiment, the CIMSM-PNLMS is evaluated at different sparsity levels. In the fourth experiment, the key parameters of the CIMSM-PNLMS are investigated in detail. In the fifth experiment, we verify the performance of the CIMSM-PNLMS on an acoustic echo channel.

**Experiment**

**1.**

The proposed CIMSM-PNLMS algorithm is investigated on an underwater acoustic channel model with a length of 64 to assess its convergence speed. Herein, only four active coefficients are considered in this underwater acoustic channel [43], where the active coefficients are 1 and the other coefficients are 0. The signal-to-noise ratio (SNR) is set to 30 dB, where the input signal power is normalized to 1 (${{\delta}_{s}}^{2}=1$) and the noise power (${{\delta}_{v}}^{2}$) is $1\times {10}^{-3}$. One hundred Monte Carlo runs are averaged to obtain each point. The convergence speed of the proposed CIMSM-PNLMS is compared with that of the NLMS, PNLMS, IPNLMS, SM-PNLMS, ZASM-PNLMS, RZASM-PNLMS and CIMSM-NLMS [44] algorithms. The noise is assumed to be Gaussian white noise, independent of the input signal. The MSE is used to evaluate the estimation performance, and the simulation parameters are: ${\mu}_{\mathrm{NLMS}}=0.45$, ${\mu}_{\mathrm{PNLMS}}={\mu}_{\mathrm{IPNLMS}}=0.4$, $a=-0.5$, ${\rho}_{\mathrm{ZASM}}={\rho}_{\mathrm{RZASM}}=6\times {10}^{-6}$, ${\rho}_{\mathrm{CIMN}}={\rho}_{\mathrm{CIM}}=8\times {10}^{-5}$, $\gamma =\sqrt{2}{{\delta}_{v}}^{2}$, $\sigma =0.05$. ${\mu}_{\mathrm{NLMS}}$, ${\mu}_{\mathrm{PNLMS}}$ and ${\mu}_{\mathrm{IPNLMS}}$ denote the step sizes of the NLMS, PNLMS and IPNLMS, respectively. a denotes the adjusting parameter of the IPNLMS, whose value ranges over $[-1,1]$. ${\rho}_{\mathrm{ZASM}}$, ${\rho}_{\mathrm{RZASM}}$, ${\rho}_{\mathrm{CIMN}}$ and ${\rho}_{\mathrm{CIM}}$ denote the zero-attraction factors of the ZASM-PNLMS, RZASM-PNLMS, CIMSM-NLMS and CIMSM-PNLMS. $\gamma $ denotes the error bound of the SM technique, and $\sigma $ denotes the kernel width. The simulation result is shown in Figure 1. We can see that the proposed CIMSM-PNLMS achieves the fastest convergence. The PNLMS algorithm converges quickly at the initial iterations, while its convergence slows down dramatically at the later iterations. Our proposed CIMSM-PNLMS algorithm integrates a CIM zero attractor, and hence its convergence is accelerated at the later iterations.

**Experiment**

**2.**

The estimation behavior of the proposed CIMSM-PNLMS algorithm at different SNRs is analyzed herein. In this experiment, the length of the channel is 64 and only one coefficient is active. We set the SNR to 30 dB, 20 dB and 10 dB, respectively. To obtain the same initial convergence speed, the parameters are set as follows: ${\mu}_{\mathrm{NLMS}}=0.8$, ${\mu}_{\mathrm{PNLMS}}=0.65$, ${\mu}_{\mathrm{IPNLMS}}=0.6$, $a=-0.5$, ${\rho}_{\mathrm{ZASM}}=3\times {10}^{-5}$, ${\rho}_{\mathrm{RZASM}}=6\times {10}^{-5}$, ${\rho}_{\mathrm{CIMN}}=7\times {10}^{-5}$, ${\rho}_{\mathrm{CIM}}=5\times {10}^{-5}$, $\gamma =\sqrt{2}{{\delta}_{v}}^{2}$, $\sigma =0.01$. The simulation results are given in Figure 2, Figure 3 and Figure 4 for SNR = 30 dB, SNR = 20 dB and SNR = 10 dB, respectively. The proposed CIMSM-PNLMS algorithm yields the smallest estimation error in terms of the MSE. The larger the SNR, the smaller the MSE. Even when the SNR is 10 dB, the CIMSM-PNLMS algorithm still achieves the lowest MSE. In short, the proposed CIMSM-PNLMS algorithm gives a good estimation performance even when the channel conditions are poor. It is worth noting that the estimation performance of the ZASM-PNLMS algorithm degrades at SNR = 10 dB.

**Experiment**

**3.**

In this experiment, the proposed CIMSM-PNLMS algorithm is studied in detail at various sparsity levels. Here, the sparsity level K is defined as the number of active channel coefficients. $K=1$, $K=4$ and $K=8$ are employed to investigate the estimation performance at different sparsity levels. The related parameters are ${\mu}_{\mathrm{NLMS}}=0.75$, ${\mu}_{\mathrm{PNLMS}}=0.65$, ${\mu}_{\mathrm{IPNLMS}}=0.5$, $a=0$, ${\rho}_{\mathrm{ZASM}}={\rho}_{\mathrm{RZASM}}={\rho}_{\mathrm{CIMN}}={\rho}_{\mathrm{CIM}}=9\times {10}^{-6}$, $\gamma =\sqrt{2}{{\delta}_{v}}^{2}$, $\sigma =0.01$. The numerical results are shown in Figure 5, Figure 6 and Figure 7 for $K=1$, $K=4$ and $K=8$, respectively. It is observed that the proposed CIMSM-PNLMS outperforms the PNLMS, SM-PNLMS, ZASM-PNLMS, RZASM-PNLMS, CIMSM-NLMS and IPNLMS algorithms. As the sparsity level K increases from 1 to 8, the proposed CIMSM-PNLMS algorithm remains the best among all the mentioned algorithms with respect to the MSE. Even when $K=8$, the CIMSM-PNLMS algorithm still achieves the lowest MSE, although the estimation gain is reduced as K increases.

**Experiment**

**4.**

The key parameters of the proposed CIMSM-PNLMS algorithm are investigated to assess their effects on the convergence and the MSE. Herein, we set the sparsity level $K=2$ and SNR = 30 dB. The effect of ${\rho}_{\mathrm{CIM}}$ on the MSE is presented in Figure 8. It is found that when ${\rho}_{\mathrm{CIM}}$ decreases from $5\times {10}^{-4}$ to $5\times {10}^{-5}$, the MSE is reduced, and the lowest MSE is achieved at ${\rho}_{\mathrm{CIM}}=5\times {10}^{-5}$. When ${\rho}_{\mathrm{CIM}}$ is reduced further, from $5\times {10}^{-5}$ to $5\times {10}^{-8}$, the MSE increases again, which means that the estimation error grows. The effects of ${\rho}_{\mathrm{CIM}}$ on the MSE at different SNRs are also given in Figure 9. It can be seen that ${\rho}_{\mathrm{CIM}}$ has similar effects on the MSE at different SNRs; when ${\rho}_{\mathrm{CIM}}$ reaches $5\times {10}^{-5}$, the proposed CIMSM-PNLMS algorithm has the smallest MSE. The effects of $\gamma $ are reported in Figure 10. We find that as $\gamma $ decreases, the MSE is reduced and the convergence is accelerated; however, the MSE no longer decreases once $\gamma $ is less than 0.1. Thus, proper parameters should be selected to obtain a good estimation performance with the proposed CIMSM-PNLMS.

**Experiment**

**5.**

Finally, the proposed algorithm is used to estimate an acoustic echo channel to further verify the estimation performance of the CIMSM-PNLMS algorithm. A typical acoustic echo channel with a length of 256 is shown in Figure 11. Herein, the active coefficients are obtained from an exponential function, while the other coefficients are zeros. The sparsity measure is defined as ${\xi}_{12}(\mathbf{h})=\frac{N}{N-\sqrt{N}}\left(1-\frac{{\left\| \mathbf{h}\right\|}_{1}}{\sqrt{N}{\left\| \mathbf{h}\right\|}_{2}}\right)$, which is used to quantify the sparsity of this acoustic echo channel; herein, ${\xi}_{12}(\mathbf{h})=0.7416$. The related parameters are ${\mu}_{\mathrm{PNLMS}}=0.8$, ${\mu}_{\mathrm{IPNLMS}}=0.75$, ${\rho}_{\mathrm{ZASM}}=5\times {10}^{-7}$, ${\rho}_{\mathrm{RZASM}}=7\times {10}^{-7}$, ${\rho}_{\mathrm{CIMN}}=3\times {10}^{-7}$, ${\rho}_{\mathrm{CIM}}=3\times {10}^{-6}$. The numerical results are shown in Figure 11. It is noted that the CIMSM-PNLMS still performs best in comparison with the NLMS, PNLMS, IPNLMS, SM-PNLMS, ZASM-PNLMS, RZASM-PNLMS and CIMSM-NLMS algorithms.
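The sparsity measure ${\xi}_{12}$ above can be computed directly; as a sanity check, it evaluates to 1 for a single active tap and to 0 for a uniform (non-sparse) vector (the function name is ours):

```python
import numpy as np

def sparsity_xi12(h):
    """Sparsity measure xi_12(h) = N/(N - sqrt(N)) * (1 - ||h||_1 / (sqrt(N) ||h||_2))."""
    N = len(h)
    l1 = np.linalg.norm(h, 1)   # l1 norm of the channel vector
    l2 = np.linalg.norm(h, 2)   # l2 norm of the channel vector
    return N / (N - np.sqrt(N)) * (1.0 - l1 / (np.sqrt(N) * l2))
```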

## 5. Conclusions

In this paper, a robust sparse SM-PNLMS algorithm with a CIM zero attractor has been proposed and its performance has been deeply investigated in various acoustic channels. The CIMSM-PNLMS algorithm has been derived in detail and the key parameter effects on the MSE and convergence have been studied. The proposed CIMSM-PNLMS algorithm has been used for estimating acoustic channels at different sparsity levels to verify its effectiveness. The numerical results show that the proposed CIMSM-PNLMS algorithm provides the fastest convergence speed and the lowest MSE for estimating underwater acoustic channels and acoustic echo channels at different SNRs and sparsity levels.

## Acknowledgments

This work was partially supported by the National Key Research and Development Program of China-Government Corporation Special Program (2016YFE0111100), the National Science Foundation of China (61571149), the Science and Technology innovative Talents Foundation of Harbin (2016RAXXJ044), Projects for the Selected Returned Overseas Chinese Scholars of Heilongjiang Province and Ministry of Human Resources and Social Security of the People’s Republic of China (MOHRSS) of China, and the Foundational Research Funds for the Central Universities (HEUCF160815, HEUCFD1433).

## Author Contributions

Zhan Jin wrote the draft of the paper and wrote the code and did the simulations of this paper. Yingsong Li helped to check the coding and simulations, and he also put forward the idea of the CIMSM-PNLMS and ZASM-PNLMS algorithms. Yanyan Wang provided some analysis on the CIMSM-PNLMS and ZASM-PNLMS algorithms. All of the authors wrote this paper together and they have read and approved the final manuscript.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

- Haykin, S. Adaptive Filter Theory, 4th ed.; Prentice Hall: Englewood Cliffs, NJ, USA, 2002.
- Diniz, P.S.R. Adaptive Filtering: Algorithms and Practical Implementations, 4th ed.; Springer: New York, NY, USA, 2013.
- Sayed, A.H. Fundamentals of Adaptive Filtering; Wiley-IEEE: New York, NY, USA, 2003.
- Combettes, P.L. The foundations of set theoretic estimation. Proc. IEEE **1993**, 81, 182–208.
- Nagaraj, S.; Gollamudi, S.; Kapoor, S.; Huang, Y. An adaptive set-membership filtering technique with sparse updates. IEEE Trans. Signal Process. **1999**, 47, 2928–2941.
- Werner, S.; Diniz, P.S.R. Set-membership affine projection algorithm. IEEE Signal Process. Lett. **2001**, 8, 231–235.
- Gollamudi, S.; Nagaraj, S.; Huang, Y.F. Blind equalization with a deterministic constant modulus cost: A set-membership filtering approach. In Proceedings of the 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Istanbul, Turkey, 5–9 June 2000; pp. 2765–2768.
- Gollamudi, S.; Nagaraj, S.; Huang, Y.F. Set-membership filtering and a set-membership normalized LMS algorithm with an adaptive step size. IEEE Signal Process. Lett. **1998**, 5, 111–114.
- Diniz, P.S.R. Adaptive Filtering: Algorithms and Practical Implementations, 2nd ed.; Kluwer: Boston, MA, USA, 2002; pp. 234–237.
- De Lamare, R.C.; Diniz, P.S.R. Set-membership adaptive algorithms based on time-varying error bounds for CDMA interference suppression. IEEE Trans. Veh. Technol. **2009**, 58, 644–654.
- Bhotto, M.Z.A.; Antoniou, A. Robust set-membership affine projection adaptive-filtering algorithm. IEEE Trans. Signal Process. **2012**, 60, 73–81.
- Lin, T.M.; Nayeri, M.; Deller, J.R., Jr. Consistently convergent OBE algorithm with automatic selection of error bounds. Int. J. Adapt. Control Signal Process. **1998**, 12, 302–324.
- De Lamare, R.C.; Sampaio-Neto, R. Adaptive reduced-rank MMSE filtering with interpolated FIR filters and adaptive interpolators. IEEE Signal Process. Lett. **2005**, 12, 177–180.
- Clarke, P.; de Lamare, R.C. Low-complexity reduced-rank linear interference suppression based on set-membership joint iterative optimization for DS-CDMA systems. IEEE Trans. Veh. Technol. **2011**, 60, 4324–4337.
- Cai, Y.; de Lamare, R.C. Set-membership adaptive constant modulus beamforming based on generalized sidelobe cancellation with dynamic bounds. In Proceedings of the 10th International Symposium on Wireless Communication Systems, Ilmenau, Germany, 27–30 August 2013; pp. 194–198.
- Gu, Y.; Jin, J.; Mei, S. l_{0}-norm constraint LMS algorithm for sparse system identification. IEEE Signal Process. Lett. **2009**, 16, 774–777.
- Taheri, O.; Vorobyov, S.A. Sparse channel estimation with l_{p}-norm and reweighted l_{1}-norm penalized least mean squares. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011; pp. 2864–2867.
- Li, Y.; Wang, Y.; Jiang, T. Sparse channel estimation based on a p-norm-like constrained least mean fourth algorithm. In Proceedings of the 7th International Conference on Wireless Communications and Signal Processing (WCSP), Nanjing, China, 15–17 October 2015.
- Gui, G.; Adachi, F. Improved least mean square algorithm with application to adaptive sparse channel estimation. EURASIP J. Wirel. Commun. Netw. **2013**, 2013, 204.
- Li, Y.; Zhang, C.; Wang, S. Low complexity non-uniform penalized affine projection algorithm for sparse system identification. Circuits Syst. Signal Process. **2016**, 35, 1611–1624.
- Li, Y.; Li, W.; Yu, W.; Wan, J.; Li, Z. Sparse adaptive channel estimation based on l_{p}-norm penalized affine projection algorithm. Int. J. Antennas Propag. **2014**, 2014, 434659.
- Duttweiler, D.L. Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Process. **2000**, 8, 508–518.
- Deng, H.; Doroslovacki, M. Improved convergence of the PNLMS algorithm for sparse impulse response identification. IEEE Signal Process. Lett. **2005**, 12, 181–184.
- Li, Y.; Hamamura, M. An improved proportionate normalized least-mean-square algorithm for broadband multipath channel estimation. Sci. World J. **2014**, 2014, 1–9.
- Li, Y.; Hamamura, M. Zero-attracting variable-step-size least mean square algorithms for adaptive sparse channel estimation. Int. J. Adapt. Control Signal Process. **2015**, 29, 1189–1206.
- Gui, G.; Peng, W.; Adachi, F. Sparse least mean fourth algorithm for adaptive channel estimation in low signal-to-noise ratio region. Int. J. Commun. Syst. **2014**, 27, 3147–3157.
- Gui, G.; Xu, L.; Matsushita, S. Improved adaptive sparse channel estimation using mixed square/fourth error criterion. J. Frankl. Inst. **2015**, 352, 4579–4594.
- Gui, G.; Mehbodniya, A.; Adachi, F. Least mean square/fourth algorithm for adaptive sparse channel estimation. In Proceedings of the 24th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), London, UK, 8–11 September 2013; pp. 296–300.
- Li, Y.; Wang, Y.; Jiang, T. Norm-adaption penalized least mean square/fourth algorithm for sparse channel estimation. Signal Process. **2016**, 128, 243–251.
- Benesty, J.; Gay, S.L. An improved PNLMS algorithm. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Orlando, FL, USA, 13–17 May 2002.
- Li, Y.; Wang, Y.; Jiang, T. Sparse-aware set-membership NLMS algorithms and their application for sparse channel estimation and echo cancellation. Int. J. Electron. Commun. **2016**, 70, 895–902.
- Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory **2006**, 52, 1289–1306.
- Chen, Y.; Gu, Y.; Hero, A.O. Sparse LMS for system identification. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Taipei, Taiwan, 19–24 April 2009.
- Li, Y.; Hamamura, M. Smooth approximation l_{0}-norm constrained affine projection algorithm and its applications in sparse channel estimation. Sci. World J. **2014**, 2014, 937252.
- Meng, R.; de Lamare, R.C.; Nascimento, V.H. Sparsity-aware affine projection adaptive algorithms for system identification. In Proceedings of the Sensor Signal Processing for Defence (SSPD), London, UK, 27–29 September 2011.
- Lima, M.V.S.; Martins, W.A.; Diniz, P.S.R. Affine projection algorithms for sparse system identification. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Vancouver, BC, Canada, 26–31 May 2013.
- Lima, M.V.S.; Ferreira, T.N.; Martins, W.A.; Diniz, P.S.R. Sparsity-aware data-selective adaptive filters. IEEE Trans. Signal Process. **2014**, 62, 4557–4572.
- Lima, M.V.S.; Sobron, I.; Martins, W.A.; Diniz, P.S.R. Stability and MSE analyses of affine projection algorithms for sparse system identification. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 6399–6403.
- Seth, S.; Principe, J.C. Compressed signal reconstruction using the correntropy induced metric. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Las Vegas, NV, USA, 31 March–4 April 2008; pp. 3845–3848.
- Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Principe, J.C. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. IEEE Signal Process. Lett. **2014**, 21, 880–884.
- Chen, B.; Xing, L.; Zhao, H.; Zheng, N.; Principe, J.C. Generalized correntropy for robust adaptive filtering. IEEE Trans. Signal Process. **2016**, 64, 3376–3387.
- Das, R.L.; Chakraborty, M. Improving the performance of the PNLMS algorithm using l_{1} norm regularization. IEEE/ACM Trans. Audio Speech Lang. Process. **2016**, 24, 1280–1290.
- George, Z. A novel MATLAB-based underwater acoustic channel simulator. J. Commun. Comput. **2013**, 10, 1131–1138.
- Wang, Y.; Li, Y.; Albu, F.; Yang, R. Sparse channel estimation using correntropy induced metric criterion based SM-NLMS algorithm. In Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC), San Francisco, CA, USA, 19–22 March 2017; pp. 1–6.

**Figure 8.** Effect of ${\rho}_{\mathrm{CIM}}$ on the mean square error (MSE) of the CIMSM-PNLMS algorithm.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).