Article

A Correntropy-Based Proportionate Affine Projection Algorithm for Estimating Sparse Channels with Impulsive Noise

1 College of Information and Communications Engineering, Harbin Engineering University, Harbin 150001, China
2 Key Laboratory of Microwave Remote Sensing, National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Entropy 2019, 21(6), 555; https://doi.org/10.3390/e21060555
Submission received: 2 May 2019 / Revised: 30 May 2019 / Accepted: 31 May 2019 / Published: 2 June 2019
(This article belongs to the Special Issue Information Theoretic Learning and Kernel Methods)

Abstract: A novel robust proportionate affine projection (AP) algorithm is devised for estimating sparse channels, which often arise in network echo and wireless communication scenarios. The proposed algorithm combines the maximum correntropy criterion (MCC) with the data-reusing scheme of the AP algorithm to overcome the performance degradation of the traditional proportionate AP (PAP) algorithm in impulsive noise environments. The resulting method is referred to as the proportionate affine projection maximum correntropy criterion (PAPMCC) algorithm and is derived within a channel estimation framework. Extensive simulation results verify that the PAPMCC algorithm is superior to previously reported AP algorithms for different input signals under impulsive noise.

1. Introduction

Adaptive filtering (AF) algorithms are widely used in channel estimation (CE), echo cancellation, noise elimination, and related applications [1,2,3,4,5,6,7,8,9,10]. Examples include the well-known least mean square (LMS), normalized LMS (NLMS), and recursive least squares (RLS) algorithms. Although the LMS algorithm is simple and computationally cheap, it may converge slowly in low signal-to-noise ratio (SNR) scenarios. In contrast, the RLS algorithm converges faster than the basic LMS, but at the cost of a much higher computational complexity, which consumes considerable computing resources when the order of the AF is large. Moreover, when the input is a speech signal, the convergence of the basic LMS becomes very slow because the eigenvalue spread of the input autocorrelation matrix is large [11]. To improve on the NLMS and RLS algorithms in practical engineering and to obtain both high accuracy and fast convergence, the affine projection (AP) algorithm was proposed, which reuses the latest input signals to improve the performance of the NLMS [12]. The computational burden of the AP algorithm lies between those of the LMS and RLS algorithms, and it converges quickly, especially for colored or speech input signals [13].
In many engineering applications, such as speech signal processing and real-time traffic prediction, the noise often exhibits strongly impulsive characteristics [14,15]. The traditional NLMS and AP algorithms, whose cost functions are constructed under the minimum mean square error (MMSE) criterion, suffer from performance degradation in such impulsive noise environments. To address this problem, the maximum correntropy criterion (MCC) and the minimum error entropy criterion (MEEC) have been proposed to provide resistance to impulsive noise [16,17]. Although the MEEC is a robust criterion, its computational complexity is very high, whereas the MCC algorithm, whose complexity is comparable to that of the LMS, has been widely used to combat impulsive noise [18,19,20,21].
On the other hand, sparse characteristics exist in a great number of scenarios, such as network echo channels and underwater acoustic communication channels [22,23,24,25]. However, the classical LMS, AP, and MCC algorithms cannot exploit the sparse structure of these channels. Proportionate AF algorithms were therefore proposed to make use of this sparsity [26]. For example, the proportionate NLMS (PNLMS) incorporates a proportionate scheme into the NLMS to reassign a gain to each channel coefficient [26]. Proportionate-type AF algorithms have since been widely developed and applied to channel estimation as well as echo cancellation [27,28,29,30]. Compared with the traditional NLMS, however, the PNLMS converges slowly when the input is a colored or speech signal, so its steady-state error may be worse than that of the NLMS. Inspired by the PNLMS, the proportionate AP (PAP) algorithm was proposed, which combines the proportionate idea with the data-reusing principle to fully exploit the sparse structure of echo channels [31]. Various proportionate-type AP algorithms have subsequently been proposed [32,33,34,35,36,37]. Nevertheless, PAP-type algorithms degrade in impulsive noise environments because of the underlying MMSE scheme. Sign algorithms, such as the affine projection sign (APS) algorithm and the proportionate APS (PAPS) algorithm [38,39], have therefore been used successfully to deal with impulsive noise. In addition, another family of sparsity-aware AP algorithms has been reported and analyzed based on compressed sensing (CS) theory [40]. With the help of CS concepts, a series of sparsity-aware AF algorithms, such as the zero-attracting LMS (ZA-LMS), reweighted ZA-LMS (RZA-LMS), ZA-AP, and RZA-AP algorithms, have been proposed [41,42,43,44,45,46,47].
In this paper, the AP scheme and the MCC are combined to construct a new cost function that enhances the PAP algorithm in impulsive noise environments; the resulting method is denoted the proportionate affine projection maximum correntropy criterion (PAPMCC) algorithm. The proposed PAPMCC algorithm is investigated using the α-stable distribution as the impulsive noise model. Experimental results verify that the PAPMCC provides a lower steady-state error than the AP, ZA-AP, RZA-AP, and PAP algorithms for different inputs.

2. Review of the PAP Algorithm

Within the AF framework, the implementation schematic diagram for CE is presented in Figure 1. The input signal vector is $x(m) = [x(m), x(m-1), \ldots, x(m-K+1)]^T$, and the channel impulse response (CIR) is modeled as $w(m) = [w_0(m), \ldots, w_{K-1}(m)]^T$, where $K$ denotes the total channel length and $m$ denotes the time slot. The received signal $d(m)$ is then
$$ d(m) = x^T(m) w(m) + r(m), \quad (1) $$
in which $r(m)$ represents the additive impulsive noise, which is usually independent of $x(m)$, and $(\cdot)^T$ denotes transposition. The estimated CIR is written as $\hat{w}(m) = [\hat{w}_0(m), \hat{w}_1(m), \ldots, \hat{w}_{K-1}(m)]^T$, which gives the filter output
$$ y(m) = x^T(m) \hat{w}(m). \quad (2) $$
The estimation error at time $m$ is
$$ e(m) = d(m) - y(m). \quad (3) $$
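As a concrete illustration of this signal model, the following minimal NumPy sketch builds a hypothetical sparse CIR, forms $d(m)$ according to Equation (1), and evaluates the error of Equation (3); the channel length, tap positions, and noise level are placeholders for illustration only, not the settings used in the experiments of Section 4.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 64                                    # illustrative channel length (not the paper's K = 1024)
w = np.zeros(K)                           # sparse CIR w(m): only a few taps are active
w[[5, 6, 7, 20]] = [0.8, -0.5, 0.3, 0.1]

def received_sample(x_vec, w, noise):
    """d(m) = x^T(m) w(m) + r(m), Equation (1)."""
    return x_vec @ w + noise

def estimation_error(d, x_vec, w_hat):
    """e(m) = d(m) - x^T(m) w_hat(m), Equations (2) and (3)."""
    return d - x_vec @ w_hat

# one time slot with a white Gaussian input vector and a small noise sample
x_vec = rng.standard_normal(K)            # x(m) = [x(m), x(m-1), ..., x(m-K+1)]^T
d = received_sample(x_vec, w, 0.01 * rng.standard_normal())
e = estimation_error(d, x_vec, np.zeros(K))   # estimator initialized at zero
```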

2.1. AP Algorithm

The AP algorithm reuses the current and previous input vectors, which yields faster convergence than the NLMS when the input signal is colored. The input matrix of the AP algorithm is
$$ X(m) = [x(m), x(m-1), \ldots, x(m-M+1)], \quad (4) $$
where $M$ is the projection order. Owing to the reuse of data, the output vector $y(m)$, the desired vector $d(m)$, and the estimation error vector $e(m)$ become
$$ y(m) = X^T(m) \hat{w}(m), \quad (5) $$
$$ d(m) = [d(m), d(m-1), \ldots, d(m-M+1)]^T, \quad (6) $$
$$ e(m) = d(m) - y(m). \quad (7) $$
The iteration equation of the standard AP algorithm is
$$ \hat{w}(m+1) = \hat{w}(m) + \mu_{\mathrm{AP}} X(m) \left[ X^T(m) X(m) + \delta_{\mathrm{AP}} I_M \right]^{-1} e(m), \quad (8) $$
in which $\mu_{\mathrm{AP}}$ denotes the step size, $\delta_{\mathrm{AP}} > 0$ is a regularization constant that prevents the matrix to be inverted from becoming singular, and $I_M$ is the $M \times M$ identity matrix.
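For reference, here is a minimal NumPy sketch of the AP update in Equation (8); the function name, default step size, and regularization value are illustrative choices rather than settings taken from the paper.

```python
import numpy as np

def ap_update(w_hat, X, d_vec, mu=0.5, delta=0.01):
    """One iteration of the standard AP algorithm, Equation (8).

    w_hat : current length-K estimate of the channel
    X     : K x M input matrix [x(m), x(m-1), ..., x(m-M+1)]
    d_vec : length-M desired vector [d(m), ..., d(m-M+1)]
    """
    M = X.shape[1]
    e = d_vec - X.T @ w_hat                         # error vector, Equation (7)
    R = X.T @ X + delta * np.eye(M)                 # regularized M x M matrix to invert
    return w_hat + mu * X @ np.linalg.solve(R, e)   # w_hat(m+1), Equation (8)
```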

2.2. PAP Algorithm

Inspired by the well-known PNLMS algorithm, the PAP algorithm integrates the proportionate idea into the AP algorithm to modify the gain allocation, which in effect assigns each coefficient its own step size according to the magnitudes of the coefficients of the unknown channel. The iteration equation of the PAP is
$$ \hat{w}(m+1) = \hat{w}(m) + \mu_{\mathrm{PAP}} G(m) X(m) \left[ X^T(m) G(m) X(m) + \delta_{\mathrm{PAP}} I_M \right]^{-1} e(m), \quad (9) $$
where $\mu_{\mathrm{PAP}}$ is the step size, $\delta_{\mathrm{PAP}}$ denotes the regularization factor of the PAP, and $G(m)$ is the gain controlling matrix
$$ G(m) = \mathrm{diag}\left[ g_0(m), g_1(m), \ldots, g_{K-1}(m) \right], \quad (10) $$
where
$$ g_k(m) = \frac{\varphi_k(m)}{\sum_{i=0}^{K-1} \varphi_i(m)}, \quad (11) $$
and
$$ \varphi_k(m) = \max\left\{ p \max\left[ q, |\hat{w}_0(m)|, \ldots, |\hat{w}_{K-1}(m)| \right], |\hat{w}_k(m)| \right\}. \quad (12) $$
In Equation (12), the parameters $p > 0$ and $q > 0$ prevent the update process from stalling. In practice, $p = 5/K$ is usually chosen [26].
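A minimal sketch of the proportionate gain computation in Equations (10)-(12) is given below, assuming only the diagonal of G(m) is needed in practice; the default values of p and q are illustrative only.

```python
import numpy as np

def proportionate_gains(w_hat, p=5e-3, q=0.01):
    """Diagonal entries g_k(m) of G(m), Equations (10)-(12).

    p and q are the small positive constants that keep inactive
    coefficients from stalling (the paper suggests p = 5/K);
    the defaults here are illustrative.
    """
    # phi_k = max{ p * max[q, |w_0|, ..., |w_{K-1}|], |w_k| }, Equation (12)
    phi = np.maximum(p * max(q, np.max(np.abs(w_hat))), np.abs(w_hat))
    return phi / phi.sum()      # Equation (11); G(m) = diag of the returned vector
```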

3. Proposed PAPMCC Algorithm

The PAP algorithm provides excellent convergence performance in Gaussian noise environments, but its performance degrades under impulsive noise. To make full use of the sparse characteristics of the CIR, a robust PAP algorithm is constructed here by combining the proportionate idea, the basic AP scheme, and the MCC, yielding the PAPMCC algorithm. The proposed PAPMCC algorithm solves the constrained minimization problem
$$ \min_{\hat{w}(m+1)} \left\| \hat{w}(m+1) - \hat{w}(m) \right\|^2_{G^{-1}(m)} \quad \text{subject to} \quad e_p(m) = \left[ 1_M - \xi \exp\!\left( -\frac{e(m) \odot e(m)}{2\sigma^2} \right) \right] \odot e(m), \quad (13) $$
where $e_p(m) = d(m) - X^T(m)\hat{w}(m+1)$ denotes the a posteriori error vector, $e(m)$ is the error vector defined in Equation (7), $\sigma$ denotes the kernel width, $1_M$ is an $M \times 1$ column vector of ones, and $\odot$ denotes the Hadamard (element-wise) product. According to the Lagrange multiplier method with multiple constraints, the cost function is
$$ J(\hat{w}(m+1)) = \left\| \hat{w}(m+1) - \hat{w}(m) \right\|^2_{G^{-1}(m)} + \lambda \left\{ e_p(m) - \left[ 1_M - \xi \exp\!\left( -\frac{e(m) \odot e(m)}{2\sigma^2} \right) \right] \odot e(m) \right\}, \quad (14) $$
where $\lambda = [\lambda_1, \lambda_2, \ldots, \lambda_M]$. Then, let
$$ \frac{\partial J(\hat{w}(m+1))}{\partial \hat{w}(m+1)} = 0 \quad \text{and} \quad \frac{\partial J(\hat{w}(m+1))}{\partial \lambda} = 0. \quad (15) $$
After some algebraic operations, we obtain
$$ \hat{w}(m+1) = \hat{w}(m) + \frac{1}{2} G(m) X(m) \lambda^T, \quad (16) $$
and
$$ d(m) = X^T(m) \hat{w}(m+1) + \left[ 1_M - \xi \exp\!\left( -\frac{e(m) \odot e(m)}{2\sigma^2} \right) \right] \odot e(m). \quad (17) $$
Solving Equations (16) and (17), the Lagrange multiplier vector is
$$ \lambda^T = 2\xi \left[ X^T(m) G(m) X(m) \right]^{-1} \left[ \exp\!\left( -\frac{e(m) \odot e(m)}{2\sigma^2} \right) \odot e(m) \right]. \quad (18) $$
Substituting Equation (18) into Equation (16), the iteration of the PAPMCC is
$$ \hat{w}(m+1) = \hat{w}(m) + \xi G(m) X(m) \left[ X^T(m) G(m) X(m) \right]^{-1} \left[ \exp\!\left( -\frac{e(m) \odot e(m)}{2\sigma^2} \right) \odot e(m) \right]. \quad (19) $$
In practice, Equation (19) is regularized as
$$ \hat{w}(m+1) = \hat{w}(m) + \mu G(m) X(m) \left[ X^T(m) G(m) X(m) + \delta_{\mathrm{PAPMCC}} I_M \right]^{-1} \left[ \exp\!\left( -\frac{e(m) \odot e(m)}{2\sigma^2} \right) \odot e(m) \right], \quad (20) $$
where $\mu = \xi$ acts as the step size, $\delta_{\mathrm{PAPMCC}}$ denotes the regularization factor, and $G(m)$ is the gain assignment matrix defined in Equations (10)-(12).
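The following NumPy sketch implements one iteration of the PAPMCC update in Equation (20), reusing the proportionate gain vector from Equations (10)-(12); the default step size, kernel width, and regularization constant are illustrative placeholders rather than the tuned values used in Section 4.

```python
import numpy as np

def papmcc_update(w_hat, X, d_vec, g, mu=0.05, sigma=1.0, delta=1e-5):
    """One iteration of the PAPMCC algorithm, Equation (20).

    w_hat : current length-K channel estimate
    X     : K x M input matrix, d_vec : length-M desired vector
    g     : diagonal of G(m) from Equations (10)-(12)
    """
    M = X.shape[1]
    e = d_vec - X.T @ w_hat                            # error vector e(m), Equation (7)
    kernel = np.exp(-(e * e) / (2.0 * sigma ** 2))     # Gaussian kernel weight of each error entry
    GX = g[:, None] * X                                # G(m) X(m) without building the K x K diagonal
    R = X.T @ GX + delta * np.eye(M)                   # X^T(m) G(m) X(m) + delta * I_M
    return w_hat + mu * GX @ np.linalg.solve(R, kernel * e)
```

Compared with the PAP update in Equation (9), the only change is that the error vector is weighted element-wise by the Gaussian kernel, so samples hit by large impulsive outliers contribute almost nothing to the update.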
The computational complexity of the proposed PAPMCC algorithm is compared with those of the AP, ZA-AP, RZA-AP, and PAP algorithms in terms of the total numbers of additions, multiplications, and divisions per iteration. The comparison is presented in Table 1, which shows that the complexity of the proposed PAPMCC algorithm is comparable to that of the PAP algorithm.

4. Experimental Results

Several experiments were conducted to analyze the performance of the PAPMCC algorithm for sparse CE. Since the α-stable distribution models well the non-Gaussian phenomena that are ubiquitous in practice, it was chosen as the impulsive noise model in the simulations. The characteristic function of the α-stable distribution is
$$ f(t) = \exp\left\{ j\chi t - \gamma |t|^{\alpha} \left[ 1 + j\beta\,\mathrm{sgn}(t)\, S(t,\alpha) \right] \right\}, \quad (21) $$
where
$$ S(t,\alpha) = \begin{cases} \tan\dfrac{\alpha\pi}{2}, & \text{if } \alpha \neq 1, \\ \dfrac{2}{\pi}\log|t|, & \text{if } \alpha = 1, \end{cases} \quad (22) $$
in which $\alpha \in (0,2]$ represents the characteristic exponent, which controls the impulsiveness of the distribution: the smaller $\alpha$ is, the stronger the impulsive behavior. $\beta \in [-1,1]$ is the symmetry parameter, $\chi$ denotes the location parameter, and $\gamma > 0$ represents the dispersion parameter. The distribution is denoted $V_{\alpha\text{-stable}}(\alpha, \beta, \gamma, \chi)$, and $V_{\alpha\text{-stable}}(1.5, 0, 0.2, 0)$ was chosen to generate the impulsive noise. In all simulation experiments, $K = 1024$ and $\sigma = 1$ were selected, and the input signal power was 1. The network echo channel used in the experiments, a classical sparse channel shown in Figure 2 whose active coefficients are located in $[257, 272]$, was used to evaluate the proposed PAPMCC algorithm. The related parameters were set to $\delta_{\mathrm{AP}} = \delta_{\mathrm{ZA\text{-}AP}} = \delta_{\mathrm{RZA\text{-}AP}} = 0.01$ and $\delta_{\mathrm{PAP}} = \delta_{\mathrm{PAPMCC}} = \delta_{\mathrm{AP}}/K$ [48]. The performance of all algorithms was evaluated by the normalized misalignment (NM), defined as $10\log_{10}\left( \| w - \hat{w} \|_2^2 / \| w \|_2^2 \right)$.
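As a sketch of this simulation setup, α-stable impulsive noise can be generated with SciPy and the NM evaluated as below; note that mapping the dispersion γ and location χ directly onto SciPy's scale and loc arguments is an assumption of this sketch, since levy_stable uses its own parameterization convention.

```python
import numpy as np
from scipy.stats import levy_stable

def alpha_stable_noise(n, alpha=1.5, beta=0.0, gamma=0.2, chi=0.0, seed=None):
    """Draw n samples of alpha-stable noise V(alpha, beta, gamma, chi).

    The defaults correspond to the V(1.5, 0, 0.2, 0) setting used in the
    experiments; gamma -> scale and chi -> loc is an assumed mapping.
    """
    return levy_stable.rvs(alpha, beta, loc=chi, scale=gamma,
                           size=n, random_state=seed)

def normalized_misalignment_db(w, w_hat):
    """NM = 10 log10(||w - w_hat||_2^2 / ||w||_2^2)."""
    return 10.0 * np.log10(np.sum((w - w_hat) ** 2) / np.sum(w ** 2))
```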

4.1. Performance of the PAPMCC Algorithm with Various Projection Orders M, Step-Sizes μ and Kernel Width  σ

Firstly, the effect of the projection order $M$ on the convergence of the PAPMCC algorithm was investigated. Colored noise, obtained by filtering white Gaussian noise (WGN) through a first-order autoregressive model with a pole at 0.8, was used as the input signal, with $\mu = 0.05$. The results in Figure 3 show that increasing the projection order $M$ speeds up the convergence but also increases the steady-state misalignment. A trade-off between convergence speed and steady-state misalignment should therefore be taken into consideration.
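A minimal sketch of how such a colored input can be generated is shown below; the normalization to unit power follows the setup described above, while the function name and the use of scipy.signal.lfilter are implementation choices of this sketch.

```python
import numpy as np
from scipy.signal import lfilter

def colored_input(n, pole=0.8, seed=None):
    """Colored input: WGN filtered by the AR(1) model 1 / (1 - pole * z^-1)."""
    rng = np.random.default_rng(seed)
    wgn = rng.standard_normal(n)
    x = lfilter([1.0], [1.0, -pole], wgn)    # first-order autoregressive filtering
    return x / np.std(x)                     # scale to approximately unit power
```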
Secondly, the effect of the kernel width $\sigma$ on the convergence of the PAPMCC algorithm was analyzed; here, $M = 4$ was selected. From Equation (20), the parameter $\sigma$ affects the estimation behavior of the PAPMCC algorithm, since $\sigma$ determines how strongly the Gaussian kernel suppresses noise interference. Given the diversity and complexity of the target signal and noise, it is difficult to obtain the optimal kernel width $\sigma$ from theoretical derivation, so simulation experiments were used to determine an appropriate value of $\sigma$. The results in Figure 4 show that the steady-state error of the PAPMCC algorithm increased as $\sigma$ increased; the estimation error was high when $\sigma$ took a large value, since the MCC behaves similarly to the LMS when $\sigma$ is very large.
Thirdly, the effect of $\mu$ on the convergence of the PAPMCC algorithm was investigated using colored noise as the input signal. Based on the above results, $\sigma = 1.0$ was selected, the other parameters were the same as in the first experiment, and the results are presented in Figure 5. The parameter $\mu$ controls the convergence speed of the PAPMCC: as $\mu$ increased, the normalized misalignment decreased and the convergence became faster. Consequently, the parameters $\mu$ and $\sigma$ should be selected reasonably in practical applications.

4.2. Performance Comparisons of the Proposed PAPMCC Algorithm under Different Input Signals

According to the analysis presented above, the devised PAPMCC algorithm attains a low steady-state error when $\sigma$ and $\mu$ are properly selected. Here, the estimation behavior of the PAPMCC was compared with that of the AP, RZA-AP, ZA-AP, and PAP algorithms. All algorithms were tested with WGN, colored noise, and a speech signal as inputs; the sampling frequency of the speech signal was 8 kHz, and the speech signal used in this simulation is presented in Figure 6. The performance comparisons for the network echo channel with the various inputs are presented in Figure 7, Figure 8 and Figure 9, respectively. The PAPMCC algorithm achieved the lowest NM compared with the ZA-AP, AP, and RZA-AP algorithms, and it had a lower steady-state error than the PAP algorithm while converging at a similar speed. When the input was a speech signal, the proposed PAPMCC still outperformed the related algorithms in terms of both convergence and estimation error.

4.3. SNR vs. Normalized Misalignment (NM) of the PAPMCC Algorithm

NM versus SNR was used to analyze the performance of the devised PAPMCC algorithm under colored input when estimating the network echo channel. The results for various SNRs are presented in Figure 10, which shows that the estimation error decreased as the SNR increased from 0 to 20 dB. Clearly, the steady-state performance of the PAPMCC was significantly better than that of the related algorithms in low SNR environments.

4.4. Performance Comparisons of the Proposed PAPMCC Algorithm with the Conventional Robust AP Algorithms

Two conventional robust algorithms designed to handle impulsive noise, namely the APS and PAPS algorithms, were also considered for comparison. The step sizes of the APS and PAPS algorithms were set to 0.005, the step sizes of the PAP and improved PAP (IPAP) algorithms [49] were set to 0.5, and the bound of the set-membership PAP (SM-PAP) algorithm [34] was set to $2\sigma_r^2$, where $\sigma_r^2$ represents the noise power. The other parameters were consistent with the previous simulations. The results presented in Figure 11 indicate that the proposed PAPMCC algorithm still achieved the lowest steady-state error.

5. Conclusions

In this paper, the proportionate affine projection maximum correntropy criterion (PAPMCC) algorithm has been put forward by combining the proportionate and affine projection schemes with the MCC to construct a new cost function based on the PAP concept. The proposed PAPMCC algorithm was carefully derived and investigated through various simulation experiments. The results indicate that the PAPMCC algorithm clearly improves on the traditional PAP algorithm in impulsive noise environments. Moreover, compared with the AP, PAP, ZA-AP, RZA-AP, IPAP, APS, PAPS, and SM-PAP algorithms, the proposed PAPMCC algorithm achieves the lowest NM under three different input signals when estimating network echo channels.

Author Contributions

Z.J. wrote the draft, programmed the code and finished the simulation results. Y.L. put forward the PAPMCC algorithm and checked the simulations. X.H. provided some useful analyses for PAPMCC algorithm. All authors wrote and approved the final version of this manuscript.

Funding

This research is partly supported by the National Key Research and Development Program of China (2016YFE0111100), the National Science Foundation of China (61571149), the Science and Technology Innovative Talents Foundation of Harbin (2016RAXXJ044), the Opening Fund of Acoustics Science and Technology Laboratory (SSKF2016001), the Key Research and Development Program of Heilongjiang Province (GX17A016), the China Postdoctoral Science Foundation (2017M620918, 2019M651264), the Natural Science Foundation of Beijing (4182077) and the Fundamental Research Funds for the Central Universities (3072019CFG0801).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Haykin, S. Adaptive Filter Theory, 2nd ed.; Prentice-Hall: Upper Saddle River, NJ, USA, 1991. [Google Scholar]
  2. Benesty, J.; Gänsler, T.; Morgan, D.R.; Sondhi, M.M.; Gay, S.L. Advances in Network and Acoustic Echo Cancellation; Springer: Berlin, Germany, 2001. [Google Scholar]
  3. Shi, W.; Li, Y.; Wang, Y. Noise-free maximum correntropy criterion algorithm in non-Gaussian environment. IEEE Trans. Circuits Syst. II-Express Briefs 2019. [Google Scholar] [CrossRef]
  4. Widrow, B.; Stearns, S.D. Adaptive Signal Processing; Prentice-Hall: Upper Saddle River, NJ, USA, 1985. [Google Scholar]
  5. Douglas, S.C.; Meng, T.Y. Normalized data nonlinearities for LMS adaptation. IEEE Trans. Signal Process. 1994, 42, 1352–1365. [Google Scholar] [CrossRef]
  6. Li, Y.; Wang, Y.; Jiang, T. Sparse-aware set-membership NLMS algorithms and their application for sparse channel estimation and echo cancelation. AEU-Int. J. Electron. Commun. 2006, 70, 895–902. [Google Scholar] [CrossRef]
  7. Li, Y.; Wang, Y. Sparse SM-NLMS algorithm based on correntropy criterion. Electron. Lett. 2016, 52, 1461–1463. [Google Scholar] [CrossRef]
  8. Li, Y.; Li, W.; Yu, W.; Wan, J.; Li, Z. Sparse adaptive channel estimation based on -norm-penalized affine projection algorithm. Int. J. Antennas Propag. 2014, 2014, 1–13. [Google Scholar] [CrossRef]
  9. Li, Y.; Jiang, Z.; Jin, Z.; Han, X.; Yin, J. Cluster-sparse proportionate NLMS algorithm with the hybrid norm constraint. IEEE Access 2018, 6, 47794–47803. [Google Scholar] [CrossRef]
  10. Shi, W.; Li, Y.; Zhao, L.; Liu, X. Controllable sparse antenna array for adaptive beamforming. IEEE Access 2019, 7, 6412–6423. [Google Scholar] [CrossRef]
  11. Sayed, A.H. Fundamentals of Adaptive Filtering; Wiley: New York, NY, USA, 2003. [Google Scholar]
  12. Ozeki, K.; Umeda, T. An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties. Electr. Commun. Jpn. 1984, 67, 19–27. [Google Scholar] [CrossRef]
  13. Shin, H.-C.; Sayed, A.H.; Song, W.-J. Variable step-size NLMS and affine projection algorithms. IEEE Signal Process. Lett. 2004, 11, 132–135. [Google Scholar] [CrossRef]
  14. Vega, L.R.; Rey, H.; Benesty, J.; Tressens, S. A new robust variable step-size NLMS algorithm. IEEE Trans. Signal Process. 2008, 56, 1878–1893. [Google Scholar] [CrossRef]
  15. Vega, L.R.; Rey, H.; Benesty, J.; Tressens, S. A fast robust recursive least-squares algorithm. IEEE Trans. Signal Process. 2008, 57, 1209–1216. [Google Scholar] [CrossRef]
  16. Singh, A.; Principe, J.C. Using correntropy as a cost function in linear adaptive filters. In Proceedings of the 2009 International Joint Conference on Neural Networks, Atlanta, GA, USA, 14–19 June 2009; pp. 1209–1216. [Google Scholar]
  17. Peng, S.; Ser, W.; Chen, B.; Sun, L.; Lin, Z. Robust constrained adaptive filtering under minimum error entropy criterion. IEEE Trans. Circuits Syst. II-Express Briefs 2018, 65, 1119–1123. [Google Scholar] [CrossRef]
  18. Chen, B.; Wang, J.; Zhao, H.; Zheng, N.; Príncipe, J.C. Convergence of a fixed-point algorithm under maximum correntropy criterion. IEEE Signal Process. Lett. 2015, 22, 1723–1727. [Google Scholar] [CrossRef]
  19. Li, Y.; Jiang, Z.; Shi, W.; Han, X.; Chen, B. Blocked maximum correntropy criterion algorithm for cluster-sparse system identifications. IEEE Trans. Circuits Syst. II-Express Briefs 2019. [Google Scholar] [CrossRef]
  20. Ma, W.; Chen, B.; Zhao, H.; Gui, G.; Duan, J.; Principe, J.C. Sparse least logarithmic absolute difference algorithm with correntropy-induced metric penalty. Circuits Syst. Signal Process. 2016, 35, 1077–1089. [Google Scholar] [CrossRef]
  21. Pimenta, R.M.; Resende, L.C.; Siqueira, N.N.; Haddad, I.B.; Petraglia, M.R. A new proportionate adaptive filtering algorithm with coefficient reuse and robustness against impulsive noise. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; pp. 465–469. [Google Scholar]
  22. Kalouptsidis, N.; Mileounis, G.; Babadi, B.; Tarokh, V. Adaptive algorithms for sparse system identification. Signal Process. 2011, 91, 1910–1919. [Google Scholar] [CrossRef]
  23. Li, W.; Preisig, J.C. Estimation of rapidly time-varying sparse channels. IEEE J. Ocean. Eng. 2007, 32, 927–939. [Google Scholar] [CrossRef]
  24. Cotter, S.F.; Rao, B.D. Sparse channel estimation via matching pursuit with application to equalization. IEEE Trans. Commun. 2002, 50, 374–377. [Google Scholar] [CrossRef]
  25. Berger, C.R.; Zhou, S.; Preisig, J.C.; Willett, P. Sparse channel estimation for multicarrier underwater acoustic communication: From subspace methods to compressed sensing. IEEE Trans. Signal Process. 2010, 58, 1708–1721. [Google Scholar] [CrossRef]
  26. Duttweiler, D.L. Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Trans. Speech Audio Process. 2000, 8, 508–518. [Google Scholar] [CrossRef]
  27. Li, Y.; Jiang, Z.; Osman, O.M.O.; Han, X.; Yin, J. Mixed norm constrained sparse APA algorithm for satellite and network echo channel estimation. IEEE Access 2018, 6, 65901–65908. [Google Scholar] [CrossRef]
  28. Deng, H.; Doroslovački, M. Improving convergence of the PNLMS algorithm for sparse impulse response identification. IEEE Signal Process. Lett. 2005, 12, 181–184. [Google Scholar] [CrossRef]
  29. Li, Y.; Hamamura, M. An improved proportionate normalized least-mean-square algorithm for broadband multipath channel estimation. Sci. World J. 2014. [Google Scholar] [CrossRef] [PubMed]
  30. Liu, L.; Fukumoto, M.; Saiki, S. An improved mu-law proportionate NLMS algorithm. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 3797–3800. [Google Scholar]
  31. Gansler, T.; Benesty, J.; Gay, S.L.; Sondhi, M.M. A robust proportionate affine projection algorithm for network echo cancellation. In Proceedings of the 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing, Istanbul, Turkey, 5–9 June 2000; pp. 793–796. [Google Scholar]
  32. Paleologu, C.; Ciochina, S.; Benesty, J. An efficient proportionate affine projection algorithm for echo cancellation. IEEE Signal Process. Lett. 2009, 17, 165–168. [Google Scholar] [CrossRef]
  33. Liu, J.; Grant, S.L.; Benesty, J. A low complexity reweighted proportionate affine projection algorithm with memory and row action projection. EURASIP J. Adv. Signal Process. 2015, 2015, 1–12. [Google Scholar] [CrossRef]
  34. Werner, S.; Apolinário, J.A., Jr.; Diniz, P.S.R. Set-membership proportionate affine projection algorithms. EURASIP J. Audio Speech Music Process. 2007, 2007, 1–10. [Google Scholar] [CrossRef]
  35. Yang, J.; Sobelman, G.E. Efficient μ-law improved proportionate affine projection algorithm for echo cancellation. Electron. Lett. 2011, 47, 73–526. [Google Scholar] [CrossRef]
  36. Zheng, Z.; Zhao, H. Memory improved proportionate M-estimate affine projection algorithm. Electron. Lett. 2015, 51, 525–526. [Google Scholar] [CrossRef]
  37. Benesty, J.; Gay, S.L. An improved PNLMS algorithm. In Proceedings of the 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, Orlando, FL, USA, 13–17 May 2002; pp. 1881–1884. [Google Scholar]
  38. Shao, T.; Zheng, Y.R.; Benesty, J. An affine projection sign algorithm robust against impulsive interferences. IEEE Signal Process. Lett. 2010, 17, 327–330. [Google Scholar] [CrossRef]
  39. Yang, Z.; Zheng, Y.R.; Grant, S.L. Proportionate affine projection sign algorithms for network echo cancellation. IEEE Trans. Audio Speech Lang. Process. 2011, 19, 2283–2284. [Google Scholar] [CrossRef]
  40. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  41. Chen, Y.; Gu, Y.; Hero, A.O. Sparse LMS for system identification. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 3125–3128. [Google Scholar]
  42. Li, Y.; Wang, Y.; Jiang, T. Norm-adaption penalized least mean square/fourth algorithm for sparse channel estimation. Signal Process. 2016, 128, 243–251. [Google Scholar] [CrossRef]
  43. Meng, R.; de Lamare, R.C.; Nascimento, V.H. Sparsity-aware affine projection adaptive algorithms for system identification. In Proceedings of the Sensor Signal Processing for Defence (SSPD 2011), London, UK, 27–29 September 2011; pp. 793–796. [Google Scholar]
  44. Li, Y.; Wang, Y.; Jiang, T. Sparse least mean mixed-norm adaptive filtering algorithms for sparse channel estimation applications. Int. J. Commun. Syst. 2016, 30, 1–14. [Google Scholar] [CrossRef]
  45. Li, Y.; Zhang, C.; Wang, S. Low-complexity non-uniform penalized affine projection algorithm for sparse system identification. Circuits Syst. Signal Process. 2016, 35, 1–14. [Google Scholar] [CrossRef]
  46. Lima, M.V.; Ferreira, T.N.; Martins, W.A.; Diniz, P.S. Sparsity-aware data-selective adaptive filters. IEEE Trans. Signal Process. 2014, 62, 4557–4572. [Google Scholar] [CrossRef]
  47. Lima, M.V.; Martins, W.A.; Diniz, P.S. Affine projection algorithms for sparse system identification. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013. [Google Scholar]
  48. Benesty, J.; Paleologu, C.; Ciochina, S. On regularization in adaptive filtering. IEEE Trans. Audio Speech Lang. Process. 2011, 19, 1734–1742. [Google Scholar] [CrossRef]
  49. Hoshuyama, O.; Goubran, R.A.; Sugiyama, A. A generalized proportionate variable step-size algorithm for fast changing acoustic environments. In Proceedings of the 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, Montreal, QC, Canada, 17–21 May 2004; pp. 161–164. [Google Scholar]
Figure 1. Typical CE schematic diagram.
Figure 2. The impulse response used in the simulations below.
Figure 3. The effects of the projection order on the PAPMCC algorithm.
Figure 4. The effects of the kernel width σ on the PAPMCC algorithm.
Figure 5. The effects of the step size μ on the PAPMCC algorithm.
Figure 6. The actual speech signal used to estimate the network echo channel.
Figure 7. Performance comparisons of the proposed PAPMCC algorithm. Input signal: WGN.
Figure 8. Performance comparisons of the proposed PAPMCC algorithm. Input signal: colored noise.
Figure 9. Performance comparisons of the proposed PAPMCC algorithm. Input signal: speech.
Figure 10. Effects of SNR on the PAPMCC algorithm.
Figure 11. Performance comparisons of the proposed PAPMCC algorithm with the conventional robust AP algorithms. Input signal: colored noise.
Table 1. Computational complexity in each iteration.

Algorithm | Additions | Multiplications | Divisions
AP | (2M^2 + M)K | (2M^2 + 3M)K + M^2 | 0
ZA-AP | (2M^2 + M + 1)K | (2M^2 + 3M + 1)K + M^2 | 0
RZA-AP | (2M^2 + M + 2)K | (2M^2 + 3M + 2)K + M^2 | K
PAP | 2MK^2 + (2M^2 - M + 1)K - 1 | 2MK^2 + (2M^2 + 3M + 1)K + M^2 | K
PAPMCC | 2MK^2 + (2M^2 - M + 1)K - 1 | 2MK^2 + (2M^2 + 3M + 1)K + M^2 + 2M | K + M

Share and Cite

Jiang, Z.; Li, Y.; Huang, X. A Correntropy-Based Proportionate Affine Projection Algorithm for Estimating Sparse Channels with Impulsive Noise. Entropy 2019, 21, 555. https://doi.org/10.3390/e21060555
