Article

Maximum Correntropy Criterion Based l1-Iterative Wiener Filter for Sparse Channel Estimation Robust to Impulsive Noise

Department of Electrical Engineering, Sejong University, Seoul 143-747, Korea
Appl. Sci. 2020, 10(3), 743; https://doi.org/10.3390/app10030743
Submission received: 20 December 2019 / Revised: 15 January 2020 / Accepted: 20 January 2020 / Published: 21 January 2020
(This article belongs to the Special Issue Mathematics and Digital Signal Processing)

Abstract

In this paper, we propose a new sparse channel estimator robust to impulsive noise environments. For this kind of estimator, the convex regularized recursive maximum correntropy (CR-RMC) algorithm has been proposed. However, this method requires information about the true sparse channel to determine the regularization coefficient of the convex regularization penalty term. In addition, the CR-RMC is numerically unstable in finite-precision implementations because it involves the inversion of the auto-covariance matrix. We propose a new method for sparse channel estimation robust to impulsive noise environments using an iterative Wiener filter. The proposed algorithm does not need information about the true sparse channel to obtain the regularization coefficient of the convex regularization penalty term. It is also numerically more robust, because it does not require the inverse of the auto-covariance matrix.

1. Introduction

In many signal processing applications [1,2,3,4], we encounter sparse channels in which most of the impulse response coefficients are close to zero and only a few are large. In recent years, many sparse adaptive filtering algorithms have been proposed for sparse system estimation, including recursive least squares (RLS)-based [5,6,7,8,9] and least mean square (LMS)-based algorithms [10,11,12,13,14]. It is generally known that RLS-based algorithms converge faster and reach a lower error after convergence than LMS-based algorithms [15]. However, there are fewer RLS-based than LMS-based algorithms. Among these, the convex regularized recursive least squares (CR-RLS) algorithm proposed by Eksioglu [6] is a fully recursive convex-regularized RLS with the structure of a typical RLS.
While the aforementioned algorithms typically show good performance in a Gaussian noise environment, their performance deteriorates in a non-Gaussian noise environment, such as an impulsive noise environment. Recently, the maximum correntropy criterion (MCC) [16,17,18,19] has been successfully applied to various adaptive algorithms robust to impulsive noise. Current studies in robust sparse adaptive methods have produced CR-RLS-based algorithms with MCC [20,21], which show strong robustness under impulsive noise. However, the CR-RLS used in [20,21] is not practical for determining the regularization coefficient of the sparse regularization term, because CR-RLS [6] needs information about the true channel to calculate the regularization coefficient. In addition, the MCC CR-RLS algorithms (the so-called convex regularized recursive maximum correntropy (CR-RMC) algorithms) [20,21] include the inversion of the auto-covariance matrix, which leads to numerical instability in finite-precision environments [15].
The recursive inverse (RI) algorithm [22,23] and the iterative Wiener filter (IWF) algorithm [24] have recently been proposed. RI and IWF have the same structure except for the step-size calculation. They perform similarly to the conventional RLS algorithm in terms of convergence and mean squared error, without using the inverse of the auto-covariance matrix. Therefore, RI [22,23] and IWF [24] can be considered free of the numerical instability of RLS.
This paper proposes a sparse channel estimation algorithm robust to impulsive noise using the IWF and the maximum correntropy criterion with l1-norm regularization. The proposed algorithm includes a new regularization coefficient calculation method for the l1-norm regularization that does not require information about the true channel. In addition, the proposed algorithm is numerically stable because it does not involve an inverse matrix calculation.
In Section 2 of this paper, we derive the new algorithm using the IWF. In Section 3, we provide simulation results that show the performance of the proposed algorithm. In Section 4, we draw our conclusions.

2. MCC l1-IWF Formulation

In the channel estimation problem, we assume that at time instant $n$ the observed signal $y(n)$ results from the input sequence $x(k)$ passing through the system $w_o = [w_0, \ldots, w_{M-1}]^T$ in the $M$-dimensional finite impulse response (FIR) format. In particular, in the sparse channel estimation problem, we assume that the system response $w_o$ is sparse.
In adaptive channel estimation, we apply an $M$-dimensional channel estimate $\hat{w}(k)$ to the signal vector $x(k)$ of the same dimension, estimate an output $\hat{y}(k) = x^T(k)\hat{w}(k)$, and calculate the error signal $e(k) = y(k) + n(k) - \hat{y}(k) = \tilde{y}(k) - \hat{y}(k)$, where $y(k)$ is the output of the actual system, $\hat{y}(k)$ is the estimated output, and $n(k)$ is the measurement noise. In particular, the measurement noise is non-Gaussian.
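As a concrete illustration, the sparse FIR observation model above can be sketched in NumPy. The seed, channel length, input length, and the zero-padding of the regressor for the first $M-1$ samples are my assumptions, not specified in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

M, S = 64, 4                       # channel length and number of nonzero taps
w_o = np.zeros(M)                  # sparse true channel w_o
support = rng.choice(M, S, replace=False)
w_o[support] = rng.normal(0.0, np.sqrt(1.0 / S), S)

N = 1000
x = rng.normal(size=N)             # input sequence x(k)

def regressor(x, k, M):
    """Build x(k) = [x(k), x(k-1), ..., x(k-M+1)]^T, zero-padded for k < M-1."""
    xk = np.zeros(M)
    upto = min(k + 1, M)
    xk[:upto] = x[k::-1][:upto]
    return xk

# Noiseless system output y(k) = x(k)^T w_o; measurement noise n(k) would be added on top
y = np.array([regressor(x, k, M) @ w_o for k in range(N)])
```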
To estimate the channel in non-Gaussian noise, we define the MCC cost function with exponential forgetting factor $\lambda$ shown in (1) [20,21] and minimize it adaptively:
$$\min_{\hat{w}(n)} \; -\sum_{m=0}^{n} \lambda^{n-m} \exp\!\left(-\frac{\left(y(m) - \hat{w}(n)^T x(m)\right)^2}{2\sigma^2}\right), \quad \text{s.t. } \|\hat{w}(n)\|_1 \le c, \quad (1)$$
where $\hat{w}(n) = [\hat{w}_0(n), \ldots, \hat{w}_{M-1}(n)]^T$, $x(m) = [x(m), x(m-1), \ldots, x(m-M+1)]^T$, $\lambda$ is a forgetting factor, and $\|\hat{w}(n)\|_1 \triangleq \sum_{k=0}^{M-1} |\hat{w}_k(n)|$. The Lagrangian for (1) becomes
$$J(\hat{w}(n), \gamma(n)) = \zeta(\hat{w}(n)) + \gamma(n)\left(\|\hat{w}(n)\|_1 - c\right), \quad (2)$$
where $\zeta(\hat{w}(n)) = -\sum_{m=0}^{n} \lambda^{n-m} \exp\!\left(-\frac{(y(m) - \hat{w}(n)^T x(m))^2}{2\sigma^2}\right)$ and $\gamma(n)$ is a real-valued Lagrange multiplier. We minimize the regularized cost function to find the optimal vector in the same way that the IWF was derived [24].
The regularized cost function is convex but nondifferentiable; therefore, we use the subgradient in place of the gradient. Denoting a subgradient vector of $f$ at $\hat{w}$ by $s_f(\hat{w})$, the subgradient vector of $J(\hat{w}(n), \gamma(n))$ with respect to $\hat{w}(n)$ can be written as follows:
$$s_J(\hat{w}(n), \gamma(n)) = \nabla\zeta(\hat{w}(n)) + \gamma(n)\, s_{\|\hat{w}(n)\|_1}. \quad (3)$$
Hence, for the optimal $\hat{w}(n)$ minimizing $J(\hat{w}(n), \gamma(n))$, we set the subgradient of $J(\hat{w}(n), \gamma(n))$ to 0 at the optimal point. Evaluating the gradient $\nabla\zeta(\hat{w}(n))$, we can derive the subgradient vector as (4):
$$s_J(\hat{w}(n), \gamma(n)) = \frac{1}{\sigma^2}\left(\Phi(n)\hat{w}(n) - r(n)\right) + \gamma(n)\,\mathrm{sgn}(\hat{w}(n)) \triangleq g_n, \quad (4)$$
where $e(n) = y(n) - \hat{w}(n)^T x(n)$, $\Phi(n) = \sum_{m=0}^{n} \lambda^{n-m} \exp\!\left(-\frac{e(m)^2}{2\sigma^2}\right) x(m)x(m)^T = \lambda\Phi(n-1) + \exp\!\left(-\frac{e(n)^2}{2\sigma^2}\right) x(n)x(n)^T$, $r(n) = \sum_{m=0}^{n} \lambda^{n-m} \exp\!\left(-\frac{e(m)^2}{2\sigma^2}\right) y(m)x(m) = \lambda r(n-1) + \exp\!\left(-\frac{e(n)^2}{2\sigma^2}\right) y(n)x(n)$, and $s_{\|\hat{w}(n)\|_1} = \mathrm{sgn}(\hat{w}(n))$ [6]. Using Equation (4), we can obtain the update expression for $\hat{w}(n)$ as (5).
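The recursions for $\Phi(n)$ and $r(n)$ weight each sample by the Gaussian kernel of its a priori error, which is what suppresses impulsive samples. A minimal NumPy sketch of one such update (the function name and default parameter values are mine, not from the paper):

```python
import numpy as np

def mcc_update(Phi, r, x_n, y_n, w_hat, lam=0.995, sigma2=1.0):
    """One correntropy-weighted update of Phi(n) and r(n)."""
    e_n = y_n - w_hat @ x_n                   # a priori error e(n)
    k = np.exp(-e_n**2 / (2.0 * sigma2))      # kernel weight: near 0 for impulsive samples
    Phi_new = lam * Phi + k * np.outer(x_n, x_n)
    r_new = lam * r + k * y_n * x_n
    return Phi_new, r_new
```

Note that an outlier (large $e(n)$) drives the kernel weight toward zero, so that sample barely perturbs $\Phi(n)$ and $r(n)$.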
$$\hat{w}(n+1) = \hat{w}(n) - \mu_n s_J(\hat{w}(n), \gamma(n)) = \hat{w}(n) - \mu_n g_n. \quad (5)$$
To obtain the step size $\mu_n$, we find the $\mu_n$ that minimizes the exponentially averaged a posteriori error energy $J(\hat{w}(n+1), \gamma(n))$, where the a posteriori error is $y(n) - \hat{w}(n+1)^T x(n)$:
$$\frac{\partial J(\hat{w}(n+1), \gamma(n))}{\partial \mu_n} = -\frac{1}{\sigma^2}\hat{w}(n+1)^T\Phi(n)g_n + \frac{1}{\sigma^2}r(n)^T g_n - \gamma(n)\, s_{\|\hat{w}(n+1)\|_1}^T g_n \approx -\frac{1}{\sigma^2}\hat{w}(n+1)^T\Phi(n)g_n + \frac{1}{\sigma^2}r(n)^T g_n - \gamma(n)\,\mathrm{sgn}(\hat{w}(n))^T g_n. \quad (6)$$
Substituting Equation (5) into Equation (6), we get
$$\frac{\partial J(\hat{w}(n+1), \gamma(n))}{\partial \mu_n} = -\frac{1}{\sigma^2}\hat{w}(n)^T\Phi(n)g_n + \frac{\mu_n}{\sigma^2}g_n^T\Phi(n)g_n + \frac{1}{\sigma^2}r(n)^T g_n - \gamma(n)\,\mathrm{sgn}(\hat{w}(n))^T g_n. \quad (7)$$
To find $\mu_n$, we set $\frac{\partial J(\hat{w}(n+1), \gamma(n))}{\partial \mu_n} = 0$, which gives
$$\mu_n = \frac{\frac{1}{\sigma^2}\left(\hat{w}(n)^T\Phi(n) - r(n)^T\right)g_n + \gamma(n)\,\mathrm{sgn}(\hat{w}(n))^T g_n}{\frac{1}{\sigma^2}g_n^T\Phi(n)g_n} = \frac{\sigma^2\, g_n^T g_n}{g_n^T\Phi(n)g_n}. \quad (8)$$
We now derive the regularization coefficient $\gamma(n)$ such that $\|\hat{w}(n+1)\|_1 = c$, i.e., the l1-norm of the vector $\hat{w}(n+1)$ is preserved at every time step $n$. This can be represented by the flow equation in the continuous time domain [25]:
$$\frac{\partial \|\hat{w}(t)\|_1}{\partial t} = \left(\frac{\partial \|\hat{w}(t)\|_1}{\partial \hat{w}}\right)^T \frac{\partial \hat{w}}{\partial t} = s_{\|\hat{w}(t)\|_1}^T \frac{\partial \hat{w}}{\partial t} = 0. \quad (9)$$
Using a sufficiently small interval δ, the time derivative in (9) can be approximated as
$$s_{\|\hat{w}(t)\|_1}^T \frac{\partial \hat{w}}{\partial t} \approx s_{\|\hat{w}(n)\|_1}^T\, \frac{\hat{w}(n+1) - \hat{w}(n)}{\delta} = 0. \quad (10)$$
Using (4) and (5), (10) becomes
$$\mathrm{sgn}(\hat{w}(n))^T\left(\hat{w}(n+1) - \hat{w}(n)\right) = \mathrm{sgn}(\hat{w}(n))^T(-\mu_n g_n) = 0, \quad (11)$$
and
$$\mathrm{sgn}(\hat{w}(n))^T\left(\frac{1}{\sigma^2}\Phi(n)\hat{w}(n) - \frac{1}{\sigma^2}r(n) + \gamma(n)\,\mathrm{sgn}(\hat{w}(n))\right) = 0. \quad (12)$$
The regularization coefficient $\gamma(n)$ obtained from Equation (12) is as follows:
$$\gamma(n) = -\frac{\mathrm{sgn}(\hat{w}(n))^T\left(\Phi(n)\hat{w}(n) - r(n)\right)}{\sigma^2\, \mathrm{sgn}(\hat{w}(n))^T \mathrm{sgn}(\hat{w}(n))}. \quad (13)$$
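The choice of $\gamma(n)$ in (13) makes the update direction $g_n$ orthogonal to $\mathrm{sgn}(\hat{w}(n))$, so the first-order change of $\|\hat{w}(n)\|_1$ along the update is zero. This can be checked numerically with arbitrary data (all names and values below are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
M, sigma2 = 8, 1.0

A = rng.normal(size=(M, M))
Phi = A @ A.T + np.eye(M)            # symmetric positive-definite stand-in for Phi(n)
r = rng.normal(size=M)
w_hat = rng.normal(size=M)           # current estimate; generically no zero entries

s = np.sign(w_hat)
gamma = -(s @ (Phi @ w_hat - r)) / (sigma2 * (s @ s))   # Eq. (13)
g = (Phi @ w_hat - r) / sigma2 + gamma * s              # Eq. (4)

# g is orthogonal to sgn(w_hat): the l1 norm is preserved to first order
print(abs(s @ g))  # numerically zero
```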
In contrast, the CR-RMC algorithm in [20] uses the same regularization coefficient as that in [6], shown in (14):
$$\hat{\gamma}(n) = 2\sigma^2\,\frac{\left\lfloor \frac{\mathrm{tr}\left(\Phi^{-1}(n)\right)}{M}\left(\|\hat{w}(n)\|_1 - \rho\right)\right\rfloor_+ + \mathrm{sgn}(\hat{w}(n))^T \Phi^{-1}(n)\,\varepsilon(n)}{\left\|\Phi^{-1}(n)\,\mathrm{sgn}(\hat{w}(n))\right\|_2^2}, \quad (14)$$
where $\varepsilon(n) = \tilde{w}(n) - \hat{w}(n)$ and $\tilde{w}(n)$ is the solution to the normal equation $\Phi(n)\tilde{w}(n) = r(n)$. In (14), the regularization coefficient depends on the parameter $\rho$. In [6,20], the parameter was set as $\rho = f(w_{true}) = \|w_{true}\|_1$, with $w_{true}$ denoting the impulse response of the true channel. There was no further discussion of how to set $\rho$ without this knowledge. We summarize the proposed algorithm in Table 1.

3. Simulation Results

In this section, we compare the sparse channel estimation performance of the proposed algorithm with that of the convex regularized recursive maximum correntropy (CR-RMC) algorithm [20]. In addition, the numerical robustness of the proposed algorithm is compared with that of CR-RMC in finite-precision environments.

3.1. Estimation of Sparse Channels

In this experiment, we show the sparse system estimation results. The simulation was performed under the same experimental conditions as in [6]. The true system parameter $w_o$ had an order of $M = 64$. Out of the 64 coefficients, $S$ were nonzero. The nonzero coefficients were placed randomly, and their values were drawn from a $N(0, 1/S)$ distribution. The impulsive noise was generated according to the Gaussian mixture model [26]:
$$p_v = (1 - p_r)\, N(0, \sigma_1^2) + p_r\, N(0, \sigma_2^2), \quad (15)$$
where $N(0, \sigma_i^2)$, $i = 1, 2$, denotes a zero-mean Gaussian distribution with variance $\sigma_i^2$. The zero-mean Gaussian distribution with variance $\sigma_1^2$ generated the background noise, and the one with variance $\sigma_2^2$ (usually $\sigma_2^2 \gg \sigma_1^2$) generated the impulsive noise, which occurred with probability $p_r$. In this experiment, we set $\sigma_1^2 = 0.01$ and generated the input signal so that the SNR was kept at 20 dB. The other parameters were set as $\sigma_2^2 = 500$ and $p_r = 0.01$.
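The mixture noise of (15) is straightforward to generate: draw a Bernoulli($p_r$) mask and scale each sample by the corresponding standard deviation. The function name, defaults, and seed below are mine; the parameter values match the experiment:

```python
import numpy as np

def impulsive_noise(n_samples, sigma1_sq=0.01, sigma2_sq=500.0, p_r=0.01, seed=0):
    """Gaussian mixture noise: background N(0, sigma1_sq) plus rare N(0, sigma2_sq) impulses."""
    rng = np.random.default_rng(seed)
    impulse = rng.random(n_samples) < p_r          # which samples are impulses
    std = np.where(impulse, np.sqrt(sigma2_sq), np.sqrt(sigma1_sq))
    return rng.normal(0.0, 1.0, n_samples) * std

v = impulsive_noise(100_000)
# Theoretical variance: (1 - p_r)*sigma1_sq + p_r*sigma2_sq = 0.0099 + 5.0
```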
We compared CR-RMC [20], which uses the true system response information, with the proposed algorithm, whose regularization factor does not use the true system response. We also included the results of MCC-RLS [27] and the conventional RLS, which considers neither impulsive noise nor sparsity. For the performance evaluation, we simulated the algorithms on sparse impulse responses with S = 4, 8, 16, 32.
Figure 1 illustrates the mean square deviation (MSD) curves. The results show that the estimation performance of the proposed algorithm is similar to that of CR-RMC, whose regularization factor refers to the true system impulse response. As expected, the conventional RLS produced the worst MSD in all cases.
Figure 1 confirms that, without a priori information about the true system impulse response, the proposed regularization factor performs similarly to the CR-RMC regularization factor that uses the true system impulse response information.

3.2. Numerical Robustness Experiment

In this experiment, we show that the proposed algorithm is numerically more robust than CR-RMC in finite-precision environments. We performed channel estimation with finite precision by quantization [28]. The round-off error from quantization with finite bits accumulates and propagates through the inverse matrix operation on $\Phi(n)$, and explosive divergence eventually occurs [15,28]. To illustrate this, we repeated the numerical stability experiments, decreasing the quantization word length from 32 bits in 1-bit steps, to find the word length at which each algorithm becomes numerically unstable, comparing the performance for the cases of S = 4 and S = 16. The rest of the experimental setup was the same as in Section 3.1.
Figure 2 compares the performance of the proposed algorithm and CR-RMC in terms of MSD for different numbers of quantization bits. Figure 2a,b shows the results when quantized to 32 bits. In this case, both the proposed algorithm and CR-RMC converge normally, as in Figure 1a,b. Figure 2c,d shows the quantization results for 16 bits. At 16 bits, CR-RMC becomes numerically unstable: compared with the results of Figure 2a,b, the quantized CR-RMC diverges due to the cumulative effect of quantization round-off error. Figure 2e,f shows the quantization results for 11 bits. At 11 bits, the proposed algorithm also becomes numerically unstable. If we express the quantization error level as a signal-to-quantization-noise ratio (SQNR), with SQNR (dB) = 1.76 + 6.02 × bits [29], then CR-RMC is stable above 98.08 dB SQNR, whereas the proposed algorithm is stable above 67.98 dB SQNR. In other words, the proposed algorithm has a 30.1 dB gain in numerical stability over CR-RMC.
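This word-length experiment can be emulated in floating point by rounding every stored quantity to a uniform fixed-point grid. The helper below and its full-scale assumption are illustrative; the SQNR rule of thumb reproduces the 98.08 dB and 67.98 dB figures:

```python
import numpy as np

def quantize(x, bits, full_scale=1.0):
    """Round x to a uniform fixed-point grid with `bits` bits over [-full_scale, full_scale)."""
    step = 2.0 * full_scale / (2 ** bits)
    q = np.round(np.asarray(x, dtype=float) / step) * step
    return np.clip(q, -full_scale, full_scale - step)

def sqnr_db(bits):
    """Rule-of-thumb SQNR of a b-bit uniform quantizer: 1.76 + 6.02*b dB."""
    return 1.76 + 6.02 * bits

print(sqnr_db(16))  # 98.08 dB: level above which CR-RMC remained stable
print(sqnr_db(11))  # 67.98 dB: level above which the proposed algorithm remained stable
```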
The experimental results confirm that the proposed algorithm is numerically more robust than CR-RMC.

4. Conclusions

In this paper, we have proposed a sparse channel estimation algorithm robust to impulsive noise using the IWF and MCC with l1-norm regularization. The proposed algorithm includes a regularization factor calculation algorithm that requires no a priori knowledge about the true system response. The simulation results show that the proposed algorithm performs similarly to the CR-RMC algorithm, whose regularization factor refers to the true system response information. In addition, the simulation results show that the proposed algorithm is more robust against numerical error than the CR-RMC algorithm.

Funding

This research received no external funding.

Acknowledgments

This paper was supported by the Agency for Defense Development (ADD) in Korea (UD190005DD).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Loganathan, P.; Khong, A.W.; Naylor, P.A. A class of sparseness-controlled algorithms for echo cancellation. IEEE Trans. Audio Speech Lang. Process. 2009, 17, 1591–1601. [Google Scholar] [CrossRef]
  2. Carbonelli, C.; Vedantam, S.; Mitra, U. Sparse channel estimation with zero tap detection. IEEE Trans. Wirel. Commun. 2007, 6, 1743–1763. [Google Scholar] [CrossRef] [Green Version]
  3. Yousef, N.R.; Sayed, A.H.; Khajehnouri, N. Detection of fading overlapping multipath components. Signal Process. 2006, 86, 2407–2425. [Google Scholar] [CrossRef]
  4. Singer, A.C.; Nelson, J.K.; Kozat, S.S. Signal processing for underwater acoustic communications. IEEE Commun. Mag. 2009, 47, 90–96. [Google Scholar] [CrossRef]
  5. Babadi, B.; Kalouptsidis, N.; Tarokh, V. SPARLS: The sparse RLS algorithm. IEEE Trans. Signal Process. 2010, 58, 4013–4025. [Google Scholar] [CrossRef] [Green Version]
  6. Eksioglu, E.M. RLS algorithm with convex regularization. IEEE Signal Process. Lett. 2011, 18, 470–473. [Google Scholar] [CrossRef]
  7. Eksioglu, E.M. Sparsity regularised recursive least squares adaptive filtering. IET Signal Process. 2011, 5, 480–487. [Google Scholar] [CrossRef]
  8. Das, B.; Chakraborty, M. Improved l0-RLS adaptive filter algorithms. Electron. Lett. 2017, 53, 1650–1651. [Google Scholar] [CrossRef]
  9. Liu, L.; Zhang, Y.; Sun, D. VFF l1-norm penalised WL-RLS algorithm using DCD iterations for underwater acoustic communication. IET Commun. 2017, 11, 615–621. [Google Scholar] [CrossRef]
  10. Li, Y.; Wang, Y.; Jiang, T. Norm-adaption penalized least mean square/fourth algorithm for sparse channel estimation. Signal Process. 2016, 128, 243–251. [Google Scholar] [CrossRef]
  11. Jahromi, M.N.S.; Salman, M.S.; Hocanin, A. Convergence analysis of the zero-attracting variable step-size LMS algorithm for sparse system identification. Signal Image Video Process. 2013, 9, 1353–1356. [Google Scholar] [CrossRef]
  12. Li, Y.; Wang, Y.; Jiang, T. Sparse-aware set-membership NLMS algorithms and their application for sparse channel estimation and echo cancelation. AEU-Int. J. Electron. Commun. 2016, 70, 895–902. [Google Scholar] [CrossRef]
  13. Gu, Y.; Jin, J.; Mei, S. l0-norm constraint LMS algorithm for sparse system identification. IEEE Signal Process. Lett. 2009, 16, 774–777. [Google Scholar]
  14. Chen, Y.; Gu, Y.; Hero, A.O. Sparse LMS for system identification. In Proceedings of the IEEE International Conference on Acoustics, Speech Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 3125–3128. [Google Scholar]
  15. Haykin, S. Adaptive Filter Theory, 5th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2014. [Google Scholar]
  16. Liu, W.; Pokharel, P.P.; Principe, J.C. Correntropy: Properties and applications in non-Gaussian signal processing. IEEE Trans Signal Process. 2007, 55, 5286–5298. [Google Scholar]
  17. Chen, B.; Xing, L.; Liang, J.; Zheng, N.; Principe, J.C. Steady-state mean-square error analysis for adaptive filtering under the maximum correntropy criterion. IEEE Signal Process. Lett. 2014, 21, 880–884. [Google Scholar]
  18. Ma, W.; Qu, H.; Gui, G.; Xu, L.; Zhao, J.; Chen, B. Maximum correntropy criterion based sparse adaptive filtering algorithms for robust channel estimation under non-Gaussian environments. J. Frankl. Inst. 2015, 352, 2708–2727. [Google Scholar] [CrossRef] [Green Version]
  19. Chen, B.; Xing, L.; Zhao, H.; Zheng, N.; Principe, J.C. Generalized correntropy for robust adaptive filtering. IEEE Trans. Signal Process. 2016, 64, 3376–3387. [Google Scholar] [CrossRef] [Green Version]
  20. Zhang, X.; Li, K.; Wu, Z.; Fu, Y.; Zhao, H.; Chen, B. Convex regularized recursive maximum correntropy algorithm. Signal Process. 2016, 129, 12–16. [Google Scholar] [CrossRef]
  21. Ma, W.; Duan, J.; Chen, B.; Gui, G.; Man, W. Recursive generalized maximum correntropy criterion algorithm with sparse penalty constraints for system identification. Asian J. Control 2017, 19, 1164–1172. [Google Scholar] [CrossRef]
  22. Ahmad, M.S.; Kukrer, O.; Hocanin, A. Recursive inverse adaptive filtering algorithm. Digit. Signal Process. 2011, 21, 491–496. [Google Scholar] [CrossRef]
  23. Salman, M.S.; Kukrer, O.; Hocanin, A. Recursive inverse algorithm: Mean-square-error analysis. Digit. Signal Process. 2017, 66, 10–17. [Google Scholar] [CrossRef]
  24. Xi, B.; Liu, Y. Iterative Wiener Filter. Electron. Lett. 2013, 49, 343–344. [Google Scholar] [CrossRef]
  25. Khalid, S.; Abrar, S. Blind adaptive algorithm for sparse channel equalization using projections onto lp-ball. Electron. Lett. 2015, 51, 1422–1424. [Google Scholar] [CrossRef]
  26. Wang, W.; Zhao, J.; Qu, H.; Chen, B. A correntropy inspired variable step-size sign algorithm against impulsive noises. Signal Process. 2017, 141, 168–175. [Google Scholar] [CrossRef]
  27. Ma, W.; Qu, H.; Zhao, J. Estimator with forgetting factor of correntropy and recursive algorithm for traffic network prediction. In Proceedings of the 2013 25th Chinese Control and Decision Conference (CCDC), Guiyang, China, 25–27 May 2013; pp. 490–494. [Google Scholar]
  28. Sayed, A.H. Fundamentals of Adaptive Filtering; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2003; pp. 775–803. [Google Scholar]
  29. Proakis, J.G.; Manolakis, D.K. Digital Signal Processing, 4th ed.; Pearson Education Limited: London, UK, 2014; p. 35. [Google Scholar]
Figure 1. Steady state MSD for S = 4, 8, 16, 32 (-⊳-: the proposed algorithm, -∘-: convex regularized recursive maximum correntropy (CR-RMC), -⋄-: maximum correntropy criterion (MCC)- recursive least squares (RLS), solid line: conventional RLS without considering impulsive noise and sparsity): (a) S = 4, (b) S = 8, (c) S = 16, (d) S = 32.
Figure 2. Results of numerical robustness experiment (-⊳-: the proposed algorithm, -x-: CR-RMC): (a) S = 4 case quantized by 32 bits; (b) S = 16 case quantized by 32 bits; (c) S = 4 case quantized by 16 bits; (d) S = 16 case quantized by 16 bits; (e) S = 4 case quantized by 11 bits; and (f) S = 16 case quantized by 11 bits.
Table 1. Summary of the MCC l1-iterative Wiener filter (IWF).
Initialization: $\Phi(0)$, $r(0)$, $\hat{w}(0)$, $\sigma^2$.
For $n = 1, 2, \ldots$
  $e(n) = y(n) - \hat{w}(n)^T x(n)$
  $\Phi(n) = \lambda\Phi(n-1) + \exp\!\left(-\frac{e(n)^2}{2\sigma^2}\right) x(n)x(n)^T$
  $r(n) = \lambda r(n-1) + \exp\!\left(-\frac{e(n)^2}{2\sigma^2}\right) y(n)x(n)$
  $\gamma(n) = -\frac{\mathrm{sgn}(\hat{w}(n))^T(\Phi(n)\hat{w}(n) - r(n))}{\sigma^2\,\mathrm{sgn}(\hat{w}(n))^T\mathrm{sgn}(\hat{w}(n))}$
  $g_n = \frac{1}{\sigma^2}\left(\Phi(n)\hat{w}(n) - r(n)\right) + \gamma(n)\,\mathrm{sgn}(\hat{w}(n))$
  $\mu_n = \frac{\sigma^2\, g_n^T g_n}{g_n^T\Phi(n)g_n}$
  $\hat{w}(n+1) = \hat{w}(n) - \mu_n g_n$
end
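For reference, the whole recursion of Table 1 can be sketched in NumPy. This is a minimal illustration rather than the author's code: the diagonal loading of $\Phi(0)$, the zero-padded regressor, the small `eps` guard against division by zero at start-up, and the parameter values are my assumptions:

```python
import numpy as np

def mcc_l1_iwf(x, y, M, lam=0.995, sigma2=1.0, delta=0.01):
    """Sketch of the MCC l1-IWF recursion of Table 1 (parameter values illustrative)."""
    Phi = delta * np.eye(M)                  # diagonally loaded Phi(0)
    r = np.zeros(M)
    w = np.zeros(M)
    eps = 1e-12                              # guards 0/0 at start-up, when w = 0
    for n in range(len(y)):
        x_n = np.zeros(M)                    # x(n) = [x(n), ..., x(n-M+1)]^T
        upto = min(n + 1, M)
        x_n[:upto] = x[n::-1][:upto]
        e = y[n] - w @ x_n                   # a priori error e(n)
        k = np.exp(-e * e / (2.0 * sigma2))  # correntropy kernel weight
        Phi = lam * Phi + k * np.outer(x_n, x_n)
        r = lam * r + k * y[n] * x_n
        s = np.sign(w)
        grad = Phi @ w - r
        gamma = -(s @ grad) / (sigma2 * (s @ s) + eps)   # Eq. (13)
        g = grad / sigma2 + gamma * s                    # Eq. (4)
        mu = sigma2 * (g @ g) / (g @ Phi @ g + eps)      # Eq. (8)
        w = w - mu * g                                   # Eq. (5)
    return w
```

Note that no matrix inverse appears anywhere in the loop; on a noiseless sparse channel the recursion should drive the prediction error well below that of the all-zero estimate.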

Share and Cite

Lim, J. Maximum Correntropy Criterion Based l1-Iterative Wiener Filter for Sparse Channel Estimation Robust to Impulsive Noise. Appl. Sci. 2020, 10, 743. https://doi.org/10.3390/app10030743
