
A Kernel Recursive Maximum Versoria-Like Criterion Algorithm for Nonlinear Channel Equalization

1 College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
2 Key Laboratory of Microwave Remote Sensing, National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(9), 1067; https://doi.org/10.3390/sym11091067
Submission received: 1 August 2019 / Revised: 17 August 2019 / Accepted: 20 August 2019 / Published: 21 August 2019

Abstract
In this paper, a kernel recursive maximum Versoria-like criterion (KRMVLC) algorithm is constructed, derived, and analyzed within the framework of nonlinear adaptive filtering (AF). The algorithm combines the benefits of the logarithmic second-order error and the symmetric maximum Versoria criterion (MVC) in a reproducing kernel Hilbert space (RKHS); in the devised KRMVLC, the Versoria term provides resistance to impulsive noise. The proposed KRMVLC algorithm is applied to nonlinear channel equalization (NCE) under different non-Gaussian interferences. The achieved results verify that the KRMVLC is robust against non-Gaussian interference and outperforms popular kernel AF algorithms such as the kernel least mean square (KLMS), kernel least-mean-mixed-norm (KLMMN), and kernel maximum Versoria criterion (KMVC) algorithms.

1. Introduction

Nonlinear adaptive filters, such as kernel methods [1,2], are widely used in adaptive filtering (AF) to enable nonlinear implementations in signal processing [3,4]. Many linear filters have been recast in a high-dimensional space, notably the reproducing kernel Hilbert space (RKHS), to obtain robust nonlinear extensions [1]. In recent years, a great number of classical kernel AF (KAF) algorithms have been proposed for nonlinear filtering [1,2,5], including the kernel recursive least squares (KRLS) [6] and kernel least mean square (KLMS) [2] algorithms. Variants of these early kernel algorithms have also been developed to improve the behavior of the classical KAFs [7,8,9,10,11,12,13]. However, the KLMS and KRLS algorithms perform poorly under non-Gaussian interference, since they only consider second-order error statistics [14,15].
A cost function that captures more than second-order error statistics makes an AF algorithm powerful under various non-Gaussian noise interferences [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]. The least mean p-power (LMP) algorithm [16,17] uses a high-order error to construct its cost function and behaves better than the least mean square (LMS) algorithm under non-Gaussian interference. However, it can still converge slowly, as in the least mean fourth (LMF) case with p = 4, since its convergence characteristics are sensitive to how close the adaptive weights are to the optimal Wiener solution [18]. To address these problems, KAF algorithms that combine two different norms in one cost function have been developed, such as the kernel least-mean-mixed-norm (KLMMN) algorithm [19,20,21,22]. Compared with the KLMS and the kernel LMF (KLMF), the KLMMN algorithm combines the advantages of both: it converges quickly and reduces the estimation bias under the mean square error (MSE) measure. Kernel methods have further been introduced into the maximum Versoria criterion (MVC) [23] and the recursive generalized mixed norm (RGMN), yielding the kernel MVC (KMVC) [24] and kernel RGMN (KRGMN) [22] algorithms for nonlinear filtering under non-Gaussian noise interference.
The main purpose of this work is to develop a KAF algorithm for the equalization of nonlinear channels, also denoted nonlinear channel equalization (NCE). The logarithmic second-order error (LSOE) and the symmetric maximum Versoria criterion (MVC) are combined to form a novel cost function, and gradient optimization is adopted to obtain its solution, producing a more powerful KAF called the kernel recursive maximum Versoria-like criterion (KRMVLC) algorithm. The KRMVLC algorithm is derived within the KAF framework, and its performance is investigated for NCE. The numerical results demonstrate that the KRMVLC behaves better than the KLMF, KLMS, KMVC, KLMMN, KRGMN, and KRLS algorithms when convergence speed and MSE are used as evaluation criteria for NCE.
The main contributions of the proposed KRMVLC are threefold: (1) the LSOE and MVC are used to design a new cost function; (2) the resulting algorithm is derived within the KAF framework; and (3) it achieves the best performance among the compared algorithms. The rest of this paper is organized as follows: In Section 2, the proposed KRMVLC algorithm is derived. In Section 3, its performance is illustrated through simulation experiments. Finally, conclusions are given in Section 4.

2. The KRMVLC Algorithm

The well-known kernel methods convert the input data to a higher-dimensional feature space via a nonlinear mapping $\varphi: \mathbb{U} \to \mathbb{F}$. Herein, $\mathbb{U}$ denotes the input space, and $\mathbb{F}$ is an RKHS. The training data are written as $\{\mathbf{u}(i), d(i)\}_{i=1}^{N}$, where N is the number of training pairs, d(i) stands for the desired signal, and u(i) is the input of the considered system. In the kernel methods, the input u(i) is transformed into φ(u(i)) in the RKHS. According to Mercer's theorem [1], the kernel function is expressed as:
$$\kappa(\mathbf{a}, \mathbf{a}') = \boldsymbol{\varphi}^{T}(\mathbf{a})\,\boldsymbol{\varphi}(\mathbf{a}'), \qquad \kappa(\mathbf{a}, \mathbf{a}') = \exp\left(-\frac{\|\mathbf{a} - \mathbf{a}'\|^{2}}{2\sigma^{2}}\right). \quad (1)$$
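As a quick numerical sketch, the Gaussian kernel above can be evaluated as follows (the function name and the sample inputs are illustrative, not from the paper):

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    """Gaussian (RBF) kernel of Equation (1); sigma is the kernel width."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2)))

# The kernel of a point with itself is 1, and it decays with distance:
print(gaussian_kernel([1.0, 2.0], [1.0, 2.0]))  # 1.0
```

Any Mercer kernel could be substituted here; the Gaussian form is the common choice in the KAF literature.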
Here, σ denotes the kernel width used in the Gaussian kernel of Equation (1). In this paper, the KRMVLC algorithm is derived, analyzed, and discussed within the framework of kernel and mixture AF algorithms. An exponentially weighted cost function is used to construct the KRMVLC method, since it can resist impulsive noise; this new cost function is:
$$J_{\mathrm{KR}}(\boldsymbol{\Omega}) = \sum_{j=1}^{i} \gamma^{i-j}\left\{\frac{\omega}{2}\log\left(1 + \frac{\left(d(j) - \boldsymbol{\Omega}^{T}\boldsymbol{\varphi}(j)\right)^{2}}{2\xi^{2}}\right) + \frac{1-\omega}{2}\left[1 - \frac{1}{1 + \tau\left(d(j) - \boldsymbol{\Omega}^{T}\boldsymbol{\varphi}(j)\right)^{2}}\right]\right\} + \frac{1}{2}\gamma^{i}\lambda\|\boldsymbol{\Omega}\|^{2}. \quad (2)$$
Here, τ is the Versoria shape parameter and ξ is a constant, while the forgetting factor γ gradually discounts past errors. ω and λ are the mixing and regularization factors, respectively. The regularization term is implemented as a norm constraint to ensure the existence of the inverse autocorrelation matrix. The gradient of Equation (2) is
$$\frac{\partial J_{\mathrm{KR}}(\boldsymbol{\Omega})}{\partial \boldsymbol{\Omega}} = -\sum_{j=1}^{i} \gamma^{i-j}\boldsymbol{\varphi}(j)z(j)d(j) + \sum_{j=1}^{i} \gamma^{i-j}\boldsymbol{\varphi}(j)z(j)\boldsymbol{\varphi}^{T}(j)\boldsymbol{\Omega} + \gamma^{i}\lambda\boldsymbol{\Omega}, \quad (3)$$
where z ( j ) is written as
$$z(j) = \frac{\omega}{\left(d(j) - \boldsymbol{\Omega}^{T}\boldsymbol{\varphi}(j)\right)^{2} + 2\xi^{2}} + \frac{(1-\omega)\,\tau}{\left[1 + \tau\left(d(j) - \boldsymbol{\Omega}^{T}\boldsymbol{\varphi}(j)\right)^{2}\right]^{2}}. \quad (4)$$
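The robustness mechanism behind Equations (2) and (4) can be sketched numerically; the parameter values below (ω = 0.25, ξ = 0.01, τ = 1) are illustrative assumptions of ours, not the paper's tuned settings:

```python
import numpy as np

def krmvlc_loss(e, omega=0.25, xi=0.01, tau=1.0):
    """Per-sample loss inside the cost of Equation (2) for an error e."""
    log_term = 0.5 * omega * np.log(1.0 + e ** 2 / (2.0 * xi ** 2))
    versoria_term = 0.5 * (1.0 - omega) * (1.0 - 1.0 / (1.0 + tau * e ** 2))
    return log_term + versoria_term

def z_weight(e, omega=0.25, xi=0.01, tau=1.0):
    """The error weighting z(j) of Equation (4) for an error e."""
    return (omega / (e ** 2 + 2.0 * xi ** 2)
            + (1.0 - omega) * tau / (1.0 + tau * e ** 2) ** 2)

# The loss grows only logarithmically for large |e|, and the weight z collapses
# on impulsive outliers, so a single huge error barely moves the solution:
print(z_weight(0.1) > 1000.0 * z_weight(100.0))  # True
```

This bounded-influence behavior is exactly what distinguishes the criterion from a plain squared-error cost, whose weight would be constant in the error.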
Setting Equation (3) to zero and solving for Ω, we obtain
$$\boldsymbol{\Omega} = \left[\sum_{j=1}^{i} \gamma^{i-j}\boldsymbol{\varphi}(j)z(j)\boldsymbol{\varphi}^{T}(j) + \gamma^{i}\lambda\mathbf{I}\right]^{-1} \sum_{j=1}^{i} \gamma^{i-j}\boldsymbol{\varphi}(j)z(j)d(j). \quad (5)$$
By defining
$$\mathbf{d}(i) = \left[d(1), d(2), d(3), \ldots, d(i-1), d(i)\right]^{T}, \quad (6)$$
$$\boldsymbol{\Phi}(i) = \left[\boldsymbol{\varphi}(1), \boldsymbol{\varphi}(2), \boldsymbol{\varphi}(3), \ldots, \boldsymbol{\varphi}(i-1), \boldsymbol{\varphi}(i)\right], \quad (7)$$
$$\boldsymbol{\Psi}(i) = \mathrm{diag}\left[\gamma^{i-1}z(1),\; \gamma^{i-2}z(2),\; \ldots,\; z(i)\right], \quad (8)$$
with z(j) given by Equation (4).
The matrix form of Equation (5) is
$$\boldsymbol{\Omega}(i) = \left[\boldsymbol{\Phi}(i)\boldsymbol{\Psi}(i)\boldsymbol{\Phi}^{T}(i) + \gamma^{i}\lambda\mathbf{I}\right]^{-1}\boldsymbol{\Phi}(i)\boldsymbol{\Psi}(i)\mathbf{d}(i). \quad (9)$$
The matrix inversion lemma is introduced into Equation (9); the lemma states that
$$\left(\mathbf{B} + \mathbf{D}\mathbf{E}\mathbf{F}\right)^{-1} = \mathbf{B}^{-1} - \mathbf{B}^{-1}\mathbf{D}\left(\mathbf{E}^{-1} + \mathbf{F}\mathbf{B}^{-1}\mathbf{D}\right)^{-1}\mathbf{F}\mathbf{B}^{-1}, \quad (10)$$
and considering
$$\gamma^{i}\lambda\mathbf{I} \to \mathbf{B}, \qquad \boldsymbol{\Phi}(i) \to \mathbf{D}, \qquad \boldsymbol{\Psi}(i) \to \mathbf{E}, \qquad \boldsymbol{\Phi}^{T}(i) \to \mathbf{F},$$
then, Equation (9) is modified as
$$\left[\boldsymbol{\Phi}(i)\boldsymbol{\Psi}(i)\boldsymbol{\Phi}^{T}(i) + \gamma^{i}\lambda\mathbf{I}\right]^{-1}\boldsymbol{\Phi}(i)\boldsymbol{\Psi}(i) = \boldsymbol{\Phi}(i)\left[\boldsymbol{\Phi}^{T}(i)\boldsymbol{\Phi}(i) + \gamma^{i}\lambda\boldsymbol{\Psi}^{-1}(i)\right]^{-1}. \quad (11)$$
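Both identities above can be verified numerically on small random matrices; the dimensions and values below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 5, 3, 2.0
Phi = rng.standard_normal((n, m))          # plays the role of Φ(i)
Psi = np.diag(rng.uniform(0.5, 1.5, m))    # plays the role of Ψ(i)
B = lam * np.eye(n)                        # plays the role of γ^i λ I

# Matrix inversion lemma, Equation (10), with D = Φ, E = Ψ, F = Φ^T:
lhs = np.linalg.inv(B + Phi @ Psi @ Phi.T)
rhs = (np.linalg.inv(B)
       - np.linalg.inv(B) @ Phi
       @ np.linalg.inv(np.linalg.inv(Psi) + Phi.T @ np.linalg.inv(B) @ Phi)
       @ Phi.T @ np.linalg.inv(B))
print(np.allclose(lhs, rhs))  # True

# Push-through form of Equation (11):
lhs2 = np.linalg.inv(B + Phi @ Psi @ Phi.T) @ Phi @ Psi
rhs2 = Phi @ np.linalg.inv(Phi.T @ Phi + lam * np.linalg.inv(Psi))
print(np.allclose(lhs2, rhs2))  # True
```

The point of Equation (11) is dimensional: the inverse on the right is m × m (the number of samples) rather than the possibly infinite-dimensional feature space.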
Hence, Ω(i) can also be written as
$$\boldsymbol{\Omega}(i) = \boldsymbol{\Phi}(i)\left[\boldsymbol{\Phi}^{T}(i)\boldsymbol{\Phi}(i) + \gamma^{i}\lambda\boldsymbol{\Psi}^{-1}(i)\right]^{-1}\mathbf{d}(i). \quad (12)$$
Then, Ω(i) is expressed as a linear combination of the transformed input data
$$\boldsymbol{\Omega}(i) = \boldsymbol{\Phi}(i)\mathbf{a}(i), \quad (13)$$
with a(i) given by
$$\mathbf{a}(i) = \left[\boldsymbol{\Phi}^{T}(i)\boldsymbol{\Phi}(i) + \gamma^{i}\lambda\boldsymbol{\Psi}^{-1}(i)\right]^{-1}\mathbf{d}(i). \quad (14)$$
Defining Q i as
$$\mathbf{Q}(i) = \left[\boldsymbol{\Phi}^{T}(i)\boldsymbol{\Phi}(i) + \gamma^{i}\lambda\boldsymbol{\Psi}^{-1}(i)\right]^{-1}, \quad (15)$$
where $\boldsymbol{\Phi}(i) = \left[\boldsymbol{\Phi}(i-1), \boldsymbol{\varphi}(i)\right]$, so that Q(i) becomes
$$\mathbf{Q}(i) = \begin{bmatrix} \boldsymbol{\Phi}^{T}(i-1)\boldsymbol{\Phi}(i-1) + \gamma^{i}\lambda\boldsymbol{\Psi}^{-1}(i-1) & \boldsymbol{\Phi}^{T}(i-1)\boldsymbol{\varphi}(i) \\ \boldsymbol{\varphi}^{T}(i)\boldsymbol{\Phi}(i-1) & \boldsymbol{\varphi}^{T}(i)\boldsymbol{\varphi}(i) + \gamma^{i}\lambda z^{-1}(i) \end{bmatrix}^{-1}, \quad (16)$$
where z(i) is the weighting of Equation (4) evaluated at j = i.
Then, defining c(i) as
$$c(i) = \left[\frac{\omega}{\left(d(i) - \boldsymbol{\Omega}^{T}\boldsymbol{\varphi}(i)\right)^{2} + 2\xi^{2}} + \frac{(1-\omega)\,\tau}{\left[1 + \tau\left(d(i) - \boldsymbol{\Omega}^{T}\boldsymbol{\varphi}(i)\right)^{2}\right]^{2}}\right]^{-1}, \quad (17)$$
Then, Q i is modified to be
$$\mathbf{Q}(i) = \begin{bmatrix} \boldsymbol{\Phi}^{T}(i-1)\boldsymbol{\Phi}(i-1) + \gamma^{i}\lambda\boldsymbol{\Psi}^{-1}(i-1) & \boldsymbol{\Phi}^{T}(i-1)\boldsymbol{\varphi}(i) \\ \boldsymbol{\varphi}^{T}(i)\boldsymbol{\Phi}(i-1) & \boldsymbol{\varphi}^{T}(i)\boldsymbol{\varphi}(i) + \gamma^{i}\lambda c(i) \end{bmatrix}^{-1}. \quad (18)$$
Based on the derivation above, we have
$$\mathbf{Q}^{-1}(i) = \begin{bmatrix} \mathbf{Q}^{-1}(i-1) & \mathbf{b}(i) \\ \mathbf{b}^{T}(i) & \boldsymbol{\varphi}^{T}(i)\boldsymbol{\varphi}(i) + \gamma^{i}\lambda c(i) \end{bmatrix}, \quad (19)$$
where $\mathbf{b}(i) = \boldsymbol{\Phi}^{T}(i-1)\boldsymbol{\varphi}(i)$, while the block matrix inversion identity is
$$\begin{bmatrix} \mathbf{B} & \mathbf{D} \\ \mathbf{E} & \mathbf{F} \end{bmatrix}^{-1} = \begin{bmatrix} \left(\mathbf{B} - \mathbf{D}\mathbf{F}^{-1}\mathbf{E}\right)^{-1} & -\mathbf{B}^{-1}\mathbf{D}\left(\mathbf{F} - \mathbf{E}\mathbf{B}^{-1}\mathbf{D}\right)^{-1} \\ -\mathbf{F}^{-1}\mathbf{E}\left(\mathbf{B} - \mathbf{D}\mathbf{F}^{-1}\mathbf{E}\right)^{-1} & \left(\mathbf{F} - \mathbf{E}\mathbf{B}^{-1}\mathbf{D}\right)^{-1} \end{bmatrix}. \quad (20)$$
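This block inversion identity can likewise be checked on a small random example (sizes and scalings below are arbitrary, chosen only to keep the blocks well conditioned):

```python
import numpy as np

rng = np.random.default_rng(1)
nb, nf = 4, 2
B = 4.0 * np.eye(nb) + 0.1 * rng.standard_normal((nb, nb))
D = 0.5 * rng.standard_normal((nb, nf))
E = 0.5 * rng.standard_normal((nf, nb))
F = 4.0 * np.eye(nf) + 0.1 * rng.standard_normal((nf, nf))

M = np.block([[B, D], [E, F]])
SB = np.linalg.inv(B - D @ np.linalg.inv(F) @ E)   # (B - D F^-1 E)^-1
SF = np.linalg.inv(F - E @ np.linalg.inv(B) @ D)   # (F - E B^-1 D)^-1
Minv = np.block([[SB, -np.linalg.inv(B) @ D @ SF],
                 [-np.linalg.inv(F) @ E @ SB, SF]])
print(np.allclose(Minv, np.linalg.inv(M)))  # True
```

In the recursion, the bottom-right block is a scalar, so the Schur complement ε(i) below is a scalar division rather than a matrix inversion.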
Utilizing the block matrix inversion operation, Equation (19) can be rewritten as
$$\mathbf{Q}(i) = \varepsilon^{-1}(i)\begin{bmatrix} \mathbf{Q}(i-1)\varepsilon(i) + \mathbf{f}(i)\mathbf{f}^{T}(i) & -\mathbf{f}(i) \\ -\mathbf{f}^{T}(i) & 1 \end{bmatrix}, \quad (21)$$
where $\mathbf{f}(i) = \mathbf{Q}(i-1)\mathbf{b}(i)$ and $\varepsilon(i) = \gamma^{i}\lambda c(i) + \boldsymbol{\varphi}^{T}(i)\boldsymbol{\varphi}(i) - \mathbf{f}^{T}(i)\mathbf{b}(i)$. Therefore, we obtain a(i) as
$$\mathbf{a}(i) = \mathbf{Q}(i)\mathbf{d}(i) = \begin{bmatrix} \mathbf{Q}(i-1) + \mathbf{f}(i)\mathbf{f}^{T}(i)\varepsilon^{-1}(i) & -\mathbf{f}(i)\varepsilon^{-1}(i) \\ -\mathbf{f}^{T}(i)\varepsilon^{-1}(i) & \varepsilon^{-1}(i) \end{bmatrix}\begin{bmatrix} \mathbf{d}(i-1) \\ d(i) \end{bmatrix} = \begin{bmatrix} \mathbf{a}(i-1) - \mathbf{f}(i)\varepsilon^{-1}(i)e(i) \\ \varepsilon^{-1}(i)e(i) \end{bmatrix}, \quad (22)$$
where $e(i) = d(i) - \mathbf{b}^{T}(i)\mathbf{a}(i-1)$ is the a priori error. Finally, the KRMVLC is summarized as pseudo-code in Algorithm 1.
Algorithm 1 KRMVLC.
Require: τ, ξ, γ, ω, λ
Ensure: a(i)
1: $\mathbf{Q}(1) = \left[\gamma\lambda + \kappa(\mathbf{u}(1), \mathbf{u}(1))\right]^{-1}$
2: $\mathbf{a}(1) = \mathbf{Q}(1)d(1)$
3: for i > 1 do
4: $\mathbf{b}(i) = \left[\kappa(\mathbf{u}(1), \mathbf{u}(i)), \ldots, \kappa(\mathbf{u}(i-1), \mathbf{u}(i))\right]^{T}$
5: $\mathbf{f}(i) = \mathbf{Q}(i-1)\mathbf{b}(i)$
6: $c(i) = \left[\dfrac{\omega}{\left(d(i) - \boldsymbol{\Omega}^{T}\boldsymbol{\varphi}(i)\right)^{2} + 2\xi^{2}} + \dfrac{(1-\omega)\,\tau}{\left[1 + \tau\left(d(i) - \boldsymbol{\Omega}^{T}\boldsymbol{\varphi}(i)\right)^{2}\right]^{2}}\right]^{-1}$
7: $\varepsilon(i) = \gamma^{i}\lambda c(i) + \boldsymbol{\varphi}^{T}(i)\boldsymbol{\varphi}(i) - \mathbf{f}^{T}(i)\mathbf{b}(i)$
8: $\mathbf{Q}(i) = \varepsilon^{-1}(i)\begin{bmatrix} \mathbf{Q}(i-1)\varepsilon(i) + \mathbf{f}(i)\mathbf{f}^{T}(i) & -\mathbf{f}(i) \\ -\mathbf{f}^{T}(i) & 1 \end{bmatrix}$
9: $e(i) = d(i) - \mathbf{b}^{T}(i)\mathbf{a}(i-1) = d(i) - \boldsymbol{\Omega}^{T}\boldsymbol{\varphi}(i)$
10: $\mathbf{a}(i) = \begin{bmatrix} \mathbf{a}(i-1) - \mathbf{f}(i)\varepsilon^{-1}(i)e(i) \\ \varepsilon^{-1}(i)e(i) \end{bmatrix}$
11: end for
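Assuming a Gaussian kernel, Algorithm 1 can be sketched in Python as follows. This is our illustrative sketch, not the paper's reference implementation; the parameter values and helper names are assumptions:

```python
import numpy as np

def gaussian_kernel(a, b, sigma=0.5):
    """Gaussian kernel; sigma is an illustrative kernel width."""
    return float(np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2)
                        / (2.0 * sigma ** 2)))

def krmvlc(U, d, tau=1.0, xi=0.1, gamma=1.0, omega=0.25, lam=0.45, sigma=0.5):
    """Sketch of Algorithm 1 (KRMVLC). U is a list of input vectors and d the
    desired outputs; returns the coefficients a(i) and a kernel-expansion
    predictor built from Equation (13)."""
    U = [np.asarray(u, dtype=float) for u in U]
    Q = np.array([[1.0 / (gamma * lam + gaussian_kernel(U[0], U[0], sigma))]])
    a = np.array([Q[0, 0] * d[0]])
    for i in range(1, len(U)):
        b = np.array([gaussian_kernel(U[j], U[i], sigma) for j in range(i)])
        f = Q @ b
        e = d[i] - b @ a                      # a priori error e(i)
        # c(i) of Equation (17), evaluated at the a priori error
        c = 1.0 / (omega / (e ** 2 + 2.0 * xi ** 2)
                   + (1.0 - omega) * tau / (1.0 + tau * e ** 2) ** 2)
        eps = gamma ** i * lam * c + gaussian_kernel(U[i], U[i], sigma) - f @ b
        # Q(i) update of Equation (21)
        Q = (np.block([[Q * eps + np.outer(f, f), -f[:, None]],
                       [-f[None, :], np.ones((1, 1))]]) / eps)
        # a(i) update of Equation (22)
        a = np.concatenate([a - f * (e / eps), [e / eps]])
    def predict(u):
        return sum(a[j] * gaussian_kernel(U[j], u, sigma) for j in range(len(a)))
    return a, predict

# Usage sketch on synthetic data (not the paper's experiment): learn d = u^2.
U = [np.array([v]) for v in np.linspace(-1.0, 1.0, 20)]
d = [float(u[0] ** 2) for u in U]
a, predict = krmvlc(U, d)
```

Note that, like KRLS, this naive sketch grows the coefficient vector by one entry per sample; practical implementations would add sparsification or quantization, as in [7,8,9].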

3. Simulation Results

In this section, NCE is considered to assess the behavior of the devised KRMVLC under different noise interference environments. The nonlinear channel equalization setup is presented in Figure 1. In the channel equalization system, a binary signal s(i) is input to the nonlinear channel, a noise-corrupted signal r(i) is obtained at the channel output, and this distorted signal is used as the input of the nonlinear adaptive filter. The purpose of channel equalization is to construct an "inverse" filter that reproduces the original input signal with the lowest possible error probability, so when the MSE reaches its minimum value, the nonlinear adaptive filter represents the inverse model of the nonlinear channel. The basic nonlinear channel model is presented in Figure 2 and is described as follows: binary values s(1), s(2), …, s(L) form the input of the nonlinear channel model. At the receiving side, the signal is corrupted by additive noise n(i), and the observation is r(1), r(2), r(3), …, r(L−1), r(L). Regarded as a simple regression problem, the samples can be written as (r(i), …, r(i+l), s(i−D)), where l is the embedding time length and D is the equalization lag; D = 2 and l = 3 are used in the following experiments. The input of the nonlinear channel is given by x(i) = s(i) + 0.5s(i−1), while the channel output is r(i) = x(i) − 0.9x²(i) + n(i), where n(i) denotes the mixed noise composed of n1(i) and n2(i) [31]. The devised KRMVLC was investigated using Monte Carlo simulations, and its MSE was compared with those of the KLMF, KLMS, KRGMN, KLMMN, KMVC, and KRLS algorithms. Three different mixed noises were chosen to analyze the performance of the KRMVLC algorithm.
(1) In Experiment-1, noise n1(i) has a Bernoulli distribution with a power of 0.5, and noise n2(i) has a Gaussian distribution with a power of 0.5.
(2) In Experiment-2, noise n1(i) has a Laplace distribution with a power of 0.5, and noise n2(i) has a Gaussian distribution with a power of 0.5.
(3) In Experiment-3, noise n1(i) has a uniform distribution with a power of 0.5, and noise n2(i) has a Gaussian distribution with a power of 0.5.
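The channel and data generation described above can be sketched as follows; here plain Gaussian noise stands in for the mixed noises, and the exact embedding indexing is our assumption:

```python
import numpy as np

def make_nce_data(L=1000, l=3, D=2, noise_std=0.1, seed=0):
    """Generate nonlinear-channel data as in Section 3: binary s(i),
    x(i) = s(i) + 0.5 s(i-1), r(i) = x(i) - 0.9 x(i)^2 + n(i).
    Regression samples pair l received values with the delayed symbol s(i-D)."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1.0, 1.0], size=L)                     # binary input
    x = s + 0.5 * np.concatenate([[0.0], s[:-1]])           # linear channel part
    r = x - 0.9 * x ** 2 + noise_std * rng.standard_normal(L)  # nonlinearity + noise
    inputs = [r[i:i + l] for i in range(D, L - l)]          # embedded receive vectors
    targets = [s[i - D] for i in range(D, L - l)]           # equalizer desired output
    return inputs, targets

inputs, targets = make_nce_data()
print(len(inputs) == len(targets))  # True
```

Feeding these (input, target) pairs into the KRMVLC recursion of Algorithm 1 reproduces the structure of the equalization experiments.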
The achieved results, averaged over 50 independent Monte Carlo runs, are presented for the proposed KRMVLC. In these simulations, the noise power is 0.1, and the parameters are listed in Table 1, with τ set to 0.01 and ξ set to 0.01; the parameters were chosen so that the algorithms achieve almost the same initial convergence. The convergence of the algorithms is presented in Figure 3, which shows that the KRMVLC converges the fastest and has a lower MSE, because the MVC term can resist impulsive noise. Then, only Laplace distribution noise, with a power of 0.5, was considered in Experiment-4 to further examine the performance of the KRMVLC; the results are also given in Figure 3. The KRMVLC algorithm still converges faster and has the lowest MSE compared to the other algorithms. Thus, the proposed KRMVLC excels in both MSE and convergence speed when handling non-Gaussian noise.
Next, the effects of the key parameters λ and τ on the devised KRMVLC algorithm are discussed. When one parameter is tuned, the others are fixed. λ was varied from 0.15 to 0.9 in steps of 0.15; the other parameters are the same as in the simulations above, and the noise model of Experiment-1 was used. The achieved MSE results are illustrated in Figure 4, which shows that the KRMVLC achieves the best MSE for λ = 0.15, meaning that λ has an important effect on the estimation error of the developed KRMVLC algorithm.
Then, the performance of the KRMVLC algorithm with different Versoria shape parameters is verified, where τ is set to 1, 0.25, 0.0625, 0.01, and 0.0025. In this experiment, the other conditions are the same as in Experiment-1. The simulation results are provided in Figure 5, which shows that the KRMVLC behaves best when τ = 1.
Finally, the tracking ability of the derived KRMVLC was tested with an abrupt channel change introduced after 500 iterations. In this case, the measurement noise comprised the Laplace and uniform distribution noises used above. After 500 iterations, the channel was changed to r(i) = 0.9x²(i−1) − x(i) + n(i). The simulation results were again averaged over 50 independent Monte Carlo runs, and the performance is illustrated in Figure 6. The experiment shows that the KRMVLC remains superior to the other algorithms, with only a small disturbance at the 500th iteration.

4. Conclusions

In this paper, the kernel recursive maximum Versoria-like criterion (KRMVLC) algorithm has been developed and analyzed for estimating nonlinear channels in non-Gaussian interference environments. The KRMVLC algorithm was devised by combining the logarithmic second-order error and the MVC in an RKHS. Simulations show that the KRMVLC achieves excellent estimation behavior under the convergence and MSE criteria. The KRMVLC algorithm provides more benefits in various environments than the other aforementioned algorithms and can combat abrupt channel changes. The results also indicate that the devised KRMVLC provides a powerful solution for handling nonlinear and non-Gaussian signals. In the future, we will analyze the KRMVLC in the area of complex kernel adaptive filters.

Author Contributions

Q.W. wrote the first version of this paper and coded the proposed algorithm. Y.L. proposed the algorithm, analyzed the idea, and handled project administration. W.X. investigated the algorithm and acquired the funding. All authors wrote and discussed this paper together, and have read and approved the final version of this manuscript.

Funding

This paper is supported by the National Key Research and Development Program of China (2016YFE0111100), the Key Research and Development Program of Heilongjiang (GX17A016), the Science and Technology Innovative Talents Foundation of Harbin (2016RAXXJ044), the Natural Science Foundation of Beijing (4182077), the China Postdoctoral Science Foundation (2017M620918, 2019T120134), the Fundamental Research Funds for the Central Universities (HEUCFG201829, 2072019CFG0801), and the Opening Fund of the Acoustics Science and Technology Laboratory (SSKF2016001).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, W.; Príncipe, J.C.; Haykin, S. Kernel Adaptive Filtering: A Comprehensive Introduction; John Wiley & Sons: Hoboken, NJ, USA, 2010. [Google Scholar]
  2. Liu, W.; Pokharel, P.P.; Príncipe, J.C. The kernel least mean square algorithm. IEEE Trans. Signal Process. 2008, 56, 543–554. [Google Scholar] [CrossRef]
  3. Li, Y.; Wang, Y.; Jiang, T. Sparse-aware set-membership NLMS algorithms and their application for sparse channel estimation and echo cancelation. AEU Int. J. Electron. Commun. 2016, 70, 895–902. [Google Scholar] [CrossRef]
  4. Shi, W.; Li, Y.; Zhao, L.; Liu, X. Controllable sparse antenna array for adaptive beamforming. IEEE Access 2019, 7, 6412–6423. [Google Scholar] [CrossRef]
  5. Lu, L.; Zhao, H.; Chen, B. Time series prediction using kernel adaptive filter with least mean absolute third loss function. Nonlinear Dyn. 2017, 90, 999–1013. [Google Scholar] [CrossRef]
  6. Engel, Y.; Mannor, S.; Meir, R. The kernel recursive least squares algorithm. IEEE Trans. Signal Process. 2004, 52, 2275–2285. [Google Scholar] [CrossRef]
  7. Chen, B.; Zhao, S.; Zhu, P.; Príncipe, J.C. Quantized kernel least mean square algorithm. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 22–32. [Google Scholar] [CrossRef] [PubMed]
  8. Chen, B.; Zhao, S.; Zhu, P.; Príncipe, J.C. Quantized kernel recursive least squares algorithm. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 1484–1491. [Google Scholar] [CrossRef] [PubMed]
  9. Liu, W.; Park, I.; Príncipe, J.C. An information theoretic approach of designing sparse kernel adaptive filters. IEEE Trans. Neural Netw. 2009, 20, 1950–1961. [Google Scholar] [CrossRef]
  10. Liu, W.; Park, I.; Wang, Y.; Príncipe, J.C. Extended kernel recursive least squares algorithm. IEEE Trans. Signal Process. 2009, 57, 3801–3814. [Google Scholar]
  11. Wang, S.; Zheng, Y.; Ling, C. Regularized kernel least mean square algorithm with multiple-delay feedback. IEEE Signal Process. Lett. 2016, 23, 98–101. [Google Scholar] [CrossRef]
  12. Zhao, J.; Liao, X.; Wang, S.; Tse, C.K. Kernel least mean square with single feedback. IEEE Signal Process. Lett. 2015, 22, 953–957. [Google Scholar] [CrossRef]
  13. Chen, B.; Liang, J.; Zheng, N.; Príncipe, J.C. Kernel least mean square with adaptive kernel size. Neurocomputing 2016, 191, 95–106. [Google Scholar] [CrossRef] [Green Version]
  14. Shi, L.; Lin, Y.; Xie, X. Combination of affine projection sign algorithms for robust adaptive filtering in non-gaussian impulsive interference. Electron. Lett. 2014, 50, 466–467. [Google Scholar] [CrossRef]
  15. Pelekanakis, K.; Chitre, M. Adaptive sparse channel estimation under symmetric α-stable noise. IEEE Trans. Wirel. Commun. 2014, 13, 3183–3195. [Google Scholar] [CrossRef]
  16. Pei, S.C.; Tseng, C.C. Least mean p-power error criterion for adaptive FIR filter. IEEE J. Sel. Areas Commun. 1994, 12, 1540–1547. [Google Scholar]
  17. Li, Y.; Wang, Y.; Jiang, T. Norm-adaption penalized least mean square/fourth algorithm for sparse channel estimation. Signal Process. 2016, 128, 243–251. [Google Scholar] [CrossRef]
  18. Tanrikulu, O.; Chambers, J.A. Convergence and steady-state properties of the least-mean mixed-norm (LMMN) adaptive algorithm. IEE Proc. Vis. Image Signal Process. 1996, 143, 137–142. [Google Scholar] [CrossRef]
  19. Chambers, J.A.; Tanrikulu, O.; Constantinides, A.G. Least mean mixed-norm adaptive filtering. Electron. Lett. 1994, 30, 1574–1575. [Google Scholar] [CrossRef]
  20. Miao, Q.Y.; Li, C.G. Kernel least-mean mixed-norm algorithm. In Proceedings of the International Conference on Automatic Control and Artificial Intelligence (ACAI), Xiamen, China, 3–5 March 2012; pp. 1285–1288. [Google Scholar]
  21. Luo, X.; Deng, J.; Liu, J.; Li, A.; Wang, W.; Zhao, W. A novel entropy optimized kernel least-mean mixed-norm algorithm. In Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), Vancouver, BC, Canada, 24–29 July 2016; pp. 1716–1722. [Google Scholar]
  22. Ma, W.; Qiu, X.; Duan, J.; Li, Y.; Chen, B. Kernel recursive generalized mixed norm algorithm. J. Frankl. Inst. 2018, 355, 1596–1613. [Google Scholar] [CrossRef]
  23. Huang, F.; Zhang, J.; Zhang, S. Maximum versoria criterion-based robust adaptive filtering algorithm. IEEE Trans. Circuits Syst. II Exp. Briefs 2017, 64, 1252–1256. [Google Scholar] [CrossRef]
  24. Jain, S.; Mitra, R.; Bhatia, V. Kernel adaptive filtering based on maximum versoria criterion. In Proceedings of the 2018 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS), Indore, India, 16–19 December 2018. [Google Scholar]
  25. Li, Y.; Jiang, Z.; Shi, W.; Han, X.; Chen, B. Blocked maximum correntropy criterion algorithm for cluster-sparse system identifications. IEEE Trans. Circuits Syst. II Exp. Briefs 2019. [Google Scholar] [CrossRef]
  26. Shi, W.; Li, Y.; Wang, Y. Noise-free maximum correntropy criterion algorithm in non-Gaussian environment. IEEE Trans. Circuits Syst. II Exp. Briefs 2019. [Google Scholar] [CrossRef]
  27. Li, Y.; Jiang, Z.; Osman, O.; Han, X.; Yin, J. Mixed norm constrained sparse APA algorithm for satellite and network echo channel estimation. IEEE Access 2018, 6, 65901–65908. [Google Scholar] [CrossRef]
  28. Lerga, J.; Sucic, V.; Sersic, D. Performance analysis of the LPA-RICI denoising method. In Proceedings of the 2009 6th International Symposium on Image and Signal Processing and Analysis, Salzburg, Austria, 16–18 September 2009. [Google Scholar]
  29. Lerga, J.; Grbac, E.; Sucic, V. An ICI based algorithm for fast denoising of video signals. Automatika 2014, 55, 351–358. [Google Scholar] [CrossRef]
  30. Segon, G.; Lerga, J.; Sucic, V. Improved LPA-ICI-based estimators embedded in a signal denoising virtual instrument. Signal Image Video Process. 2017, 11, 211–218. [Google Scholar] [CrossRef]
  31. Filipovic, V.Z. Consistency of the robust recursive Hammerstein model identification algorithm. J. Frankl. Inst. 2015, 352, 1932–1945. [Google Scholar] [CrossRef]
Figure 1. Nonlinear channel equalization.
Figure 2. General nonlinear channel model.
Figure 3. Convergence analysis of the KRMVLC for various mixed noises.
Figure 4. MSE performance against λ.
Figure 5. MSE performance against τ.
Figure 6. Learning performance for the KRMVLC with an abrupt change.
Table 1. Parameters for algorithms.

Algorithm | α | ω    | μ      | λ    | γ
KLMS      | - | 0    | 0.02   | -    | -
KLMF      | - | 0    | 0.01   | -    | -
KLMMN     | - | 0.25 | 0.0225 | -    | -
KMVC      | - | 0    | 0.02   | -    | -
KRLS      | 1 | 0.25 | -      | 0.45 | 1
KRGMN     | 1 | 0.25 | -      | 0.45 | 1
KRMVLC    | 1 | 0.25 | -      | 0.45 | 1

Share and Cite

Wu, Q.; Li, Y.; Xue, W. A Kernel Recursive Maximum Versoria-Like Criterion Algorithm for Nonlinear Channel Equalization. Symmetry 2019, 11, 1067. https://doi.org/10.3390/sym11091067