Open Access Article

A Quantized Kernel Learning Algorithm Using a Minimum Kernel Risk-Sensitive Loss Criterion and Bilateral Gradient Technique

1 School of Computer and Communication Engineering, University of Science and Technology Beijing (USTB), Beijing 100083, China
2 Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing 100083, China
3 Department of Computer Science and Information Engineering, National Taipei University of Technology, Taipei 10608, Taiwan
4 Department of Electrical Engineering and Computer Science, Cleveland State University, Cleveland, OH 44115, USA
* Authors to whom correspondence should be addressed.
Entropy 2017, 19(7), 365; https://doi.org/10.3390/e19070365
Received: 19 June 2017 / Revised: 8 July 2017 / Accepted: 13 July 2017 / Published: 20 July 2017
(This article belongs to the Special Issue Entropy in Signal Analysis)
Inspired by correntropy, the kernel risk-sensitive loss (KRSL) has recently emerged as a nonlinear similarity measure defined in kernel space that offers improved computational performance. Applying the KRSL to adaptive filtering yields the corresponding minimum kernel risk-sensitive loss (MKRSL) algorithm. However, like other traditional kernel adaptive filter (KAF) methods, MKRSL generates a continually growing radial basis function (RBF) network. To address this limitation, this article uses an online vector quantization (VQ) technique to propose a novel KAF algorithm, quantized MKRSL (QMKRSL), which curbs the growth of the RBF network structure. Compared with other quantized methods, e.g., the quantized kernel least mean square (QKLMS) and quantized kernel maximum correntropy (QKMC) algorithms, the more efficient performance surface allows QMKRSL to converge faster and filter more accurately while remaining robust to outliers. Moreover, because QMKRSL with a traditional gradient descent method may not fully exploit the information shared between the input and output spaces, we also propose an enhanced variant, QMKRSL_BG, which uses a bilateral gradient technique to further improve filtering accuracy. Short-term chaotic time-series prediction experiments demonstrate the satisfactory performance of both algorithms.
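To make the procedure summarized above concrete, the following is a minimal Python sketch of a quantized, MKRSL-style update: the filter output is a finite RBF expansion over a quantized codebook, each error is passed through the KRSL to obtain a stochastic gradient, and online VQ either merges the new input into its nearest center or adds it as a new unit. The function and parameter names (gaussian_kernel, eta, h, sigma, lam, eps_q) are illustrative assumptions rather than the authors' notation, and the bilateral gradient refinement used in QMKRSL_BG is not shown.

```python
import numpy as np

def gaussian_kernel(x, c, h):
    """Gaussian kernel between input vectors x and c with bandwidth h."""
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * h ** 2))

def qmkrsl_sketch(inputs, desired, eta=0.2, h=1.0, sigma=1.0, lam=2.0, eps_q=0.3):
    """One pass of a quantized MKRSL-style kernel adaptive filter (illustrative sketch).

    inputs : array of shape (N, d), input vectors u_n
    desired: array of shape (N,), desired outputs d_n
    eta    : step size
    h      : bandwidth of the input-space Gaussian kernel
    sigma  : bandwidth of the Gaussian kernel inside the KRSL error criterion
    lam    : risk-sensitive parameter lambda of the KRSL
    eps_q  : online vector-quantization threshold
    Returns the codebook (centers), coefficients, and per-sample predictions.
    """
    centers, coeffs, predictions = [], [], []
    for u, d in zip(inputs, desired):
        # Filter output: finite RBF expansion over the quantized codebook.
        y = sum(a * gaussian_kernel(u, c, h) for a, c in zip(coeffs, centers))
        e = d - y
        predictions.append(y)

        # Stochastic gradient of the KRSL l(e) = (1/lam) * exp(lam * (1 - kappa_sigma(e))),
        # with kappa_sigma(e) = exp(-e^2 / (2 * sigma^2)); large outlier errors are down-weighted.
        kappa_e = np.exp(-e ** 2 / (2.0 * sigma ** 2))
        g = np.exp(lam * (1.0 - kappa_e)) * kappa_e * e / sigma ** 2

        if not centers:
            centers.append(np.array(u, dtype=float))
            coeffs.append(eta * g)
            continue

        # Online vector quantization: merge into the nearest center if close enough,
        # otherwise grow the RBF network by one unit.
        dists = [np.linalg.norm(u - c) for c in centers]
        j = int(np.argmin(dists))
        if dists[j] <= eps_q:
            coeffs[j] += eta * g                      # quantize: update nearest center's coefficient
        else:
            centers.append(np.array(u, dtype=float))  # expand: add a new RBF unit
            coeffs.append(eta * g)
    return centers, coeffs, predictions
```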
Keywords: kernel methods; correntropy; kernel risk-sensitive loss; online vector quantization

MDPI and ACS Style

Luo, X.; Deng, J.; Wang, W.; Wang, J.-H.; Zhao, W. A Quantized Kernel Learning Algorithm Using a Minimum Kernel Risk-Sensitive Loss Criterion and Bilateral Gradient Technique. Entropy 2017, 19, 365.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
