A Kernel Least Mean Square Algorithm Based on Randomized Feature Networks
Abstract
1. Introduction
2. The KLMS Algorithm Based on Random Fourier Features
2.1. The KLMS Algorithm
2.2. The Random Fourier Feature Mapping
2.3. Randomized Feature Networks-Based KLMS Algorithm
Algorithm 1: The KLMS-RFN Algorithm.
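As a concrete illustration of Algorithm 1, the following minimal Python sketch pairs a random Fourier feature map, drawn once in advance, with an LMS update in the fixed feature space. The function names (`make_rff`, `klms_rfn`) and all parameter values are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def make_rff(input_dim, num_features, kernel_width, rng):
    # For the Gaussian kernel k(x, y) = exp(-||x - y||^2 / (2 * kernel_width^2)),
    # the random frequencies are drawn from N(0, kernel_width^-2 I) and the
    # phases uniformly from [0, 2*pi], following Rahimi and Recht's construction.
    omega = rng.normal(0.0, 1.0 / kernel_width, size=(num_features, input_dim))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)

    def rff_map(u):
        # z(u) = sqrt(2/D) cos(Omega u + b), so that z(x)^T z(y) approximates k(x, y)
        return np.sqrt(2.0 / num_features) * np.cos(omega @ u + b)

    return rff_map

def klms_rfn(inputs, desired, step_size, rff_map, num_features):
    # LMS in the explicit feature space: w_{i+1} = w_i + mu * e(i) * z_i.
    w = np.zeros(num_features)
    errors = np.empty(len(desired))
    for i, (u, d) in enumerate(zip(inputs, desired)):
        z = rff_map(u)              # explicit (randomized) feature mapping
        e = d - w @ z               # a priori error e(i) = d(i) - y(i)
        w = w + step_size * e * z   # stochastic-gradient weight update
        errors[i] = e
    return w, errors

# Illustrative usage on a toy nonlinear regression stream.
rng = np.random.default_rng(0)
U = rng.uniform(-1.0, 1.0, size=(2000, 3))
d = np.sin(U.sum(axis=1)) + 0.01 * rng.normal(size=2000)
rff = make_rff(input_dim=3, num_features=200, kernel_width=1.0, rng=rng)
w, errs = klms_rfn(U, d, step_size=0.5, rff_map=rff, num_features=200)
```

Because the feature map is fixed before filtering starts, the weight vector has a constant dimension $D$: the network does not grow with the number of processed samples, which is the practical advantage over dictionary-based kernel adaptive filters.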
3. The Mean Square Convergence Analysis
3.1. The Energy Conservation Relation
3.2. Mean Square Convergence Condition
3.3. Steady State Mean Square Error Analysis
- (1) When the step size is sufficiently small, the term $\mu E\left[\|\mathbf{z}_i\|^2\right]$ can be assumed to be far less than $2$, where $\mathbf{z}_i$ denotes the transformed input vector. Therefore, for a small $\mu$, the following is obtained:
$$\mathrm{EMSE} \approx \frac{\mu \sigma_v^2 E\left[\|\mathbf{z}_i\|^2\right]}{2},$$
where $\sigma_v^2$ is the variance of the measurement noise.
- (2) When the step size is large (the value is not infinitesimally small, so as to guarantee filter stability), the following independence assumption is required to find the expression for the EMSE: at steady state, the squared norm of the transformed input, $\|\mathbf{z}_i\|^2$, is statistically independent of the a priori error $e_a(i)$. Thus, we obtain
$$\mathrm{EMSE} = \frac{\mu \sigma_v^2 E\left[\|\mathbf{z}_i\|^2\right]}{2 - \mu E\left[\|\mathbf{z}_i\|^2\right]}.$$
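The two expressions can be compared directly; a minimal numeric sketch follows, assuming $P = E\left[\|\mathbf{z}_i\|^2\right]$ (which equals $1$ for the normalized mapping $\mathbf{z}(\mathbf{u}) = \sqrt{2/D}\cos(\boldsymbol{\Omega}\mathbf{u} + \mathbf{b})$) and an illustrative noise variance $\sigma_v^2 = 10^{-2}$. The guard in the code reflects the standard LMS-type mean square stability requirement $\mu E\left[\|\mathbf{z}_i\|^2\right] < 2$.

```python
# Steady-state EMSE: general expression vs. its small-step-size approximation.
def emse_general(mu, sigma_v2, P):
    # Valid under the steady-state independence assumption; requires mu * P < 2.
    assert mu * P < 2.0, "mean square stability requires mu * E[||z||^2] < 2"
    return mu * sigma_v2 * P / (2.0 - mu * P)

def emse_small_step(mu, sigma_v2, P):
    # First-order approximation obtained by neglecting mu * P against 2.
    return mu * sigma_v2 * P / 2.0

for mu in (0.01, 0.1, 0.5, 1.0):
    g = emse_general(mu, sigma_v2=1e-2, P=1.0)
    s = emse_small_step(mu, sigma_v2=1e-2, P=1.0)
    print(f"mu={mu:5.2f}: general={g:.3e}, small-step approx={s:.3e}")
```

For $\mu = 0.01$ the two values agree to within one percent, while for $\mu = 1.0$ the small-step approximation underestimates the EMSE by a factor of two, which is why the independence assumption in case (2) is needed for moderate and large step sizes.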
4. Computational Complexity
5. Simulations and Results
5.1. Lorenz Time Series Prediction
5.2. Nonlinear Channel Equalization
5.2.1. Time-Varying Channel Equalization
5.2.2. Abruptly Changed Channel Equalization
6. Discussion and Conclusions
Author Contributions
Conflicts of Interest
References
The Number of Calculations of the KLMS-RFN Algorithm (per iteration; $L$ denotes the input dimension and $D$ the dimension of the random feature space).

| Step | Multiplications | Additions | Cosine calculations |
|---|---|---|---|
| (1) Calculating the random Fourier feature vector $\mathbf{z}_i$ | $D(L+1)$ | $DL$ | $D$ |
| (2) Calculating the output of the filter $y(i) = \mathbf{w}_i^{\top}\mathbf{z}_i$ | $D$ | $D-1$ | — |
| (3) Updating the weight vector $\mathbf{w}_{i+1} = \mathbf{w}_i + \mu e(i)\mathbf{z}_i$ | $D+1$ | $D+1$ | — |
| (4) Total calculations | $D(L+3)+1$ | $D(L+2)$ | $D$ |

The count for step (3) includes the computation of the error $e(i) = d(i) - y(i)$.
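As a cross-check of the arithmetic in this table, the short sketch below recomputes the per-iteration totals from the three steps; the helper name `klms_rfn_ops` and the convention of charging the error computation to the update step are assumptions made for illustration.

```python
# Per-iteration operation counts for KLMS-RFN, following the table above:
# L is the input dimension, D the random feature dimension.
def klms_rfn_ops(L, D):
    feature = {"mul": D * (L + 1), "add": D * L, "cos": D}  # step (1): z_i
    output  = {"mul": D, "add": D - 1}                      # step (2): w^T z_i
    update  = {"mul": D + 1, "add": D + 1}                  # step (3): incl. e(i)
    return {
        "mul": feature["mul"] + output["mul"] + update["mul"],  # = D(L+3) + 1
        "add": feature["add"] + output["add"] + update["add"],  # = D(L+2)
        "cos": feature["cos"],                                  # = D
    }

print(klms_rfn_ops(L=3, D=200))  # cost is O(DL) per sample
```

The key point is that every count is linear in $D$ and independent of the iteration index $i$, in contrast to a growing-dictionary KLMS whose per-sample cost increases with the number of stored centers.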