Rate-Distortion Bounds for Kernel-Based Distortion Measures
Abstract
Kernel methods have been used to turn linear learning algorithms into nonlinear ones. These nonlinear algorithms measure distances between data points by the distance in the kernel-induced feature space. In lossy data compression, the optimal tradeoff between the number of quantized points and the incurred distortion is characterized by the rate-distortion function. However, the rate-distortion functions associated with distortion measures involving the kernel feature mapping have yet to be analyzed. We consider two reconstruction schemes, reconstruction in input space and reconstruction in feature space, and provide bounds on the rate-distortion functions for these schemes. Comparison of the derived bounds to the quantizer performance obtained by the kernel
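As a minimal illustration (not taken from the article), the distortion measured in the kernel-induced feature space can be evaluated without forming the feature map explicitly, via the kernel identity ||phi(x) - phi(y)||^2 = k(x, x) - 2 k(x, y) + k(y, y). The Gaussian kernel, the function names, and the toy data below are illustrative assumptions, not choices made in the paper.

import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2); an assumed example choice.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def feature_space_distortion(x, c, kernel=rbf_kernel):
    # Squared distance between phi(x) and phi(c), computed with the kernel trick,
    # i.e., without evaluating the (possibly infinite-dimensional) feature map phi.
    return kernel(x, x) - 2.0 * kernel(x, c) + kernel(c, c)

# Toy usage: distortion of a source sample x relative to a reproduction point c.
x = np.array([0.2, -1.0])
c = np.array([0.0, -0.8])
print(feature_space_distortion(x, c))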
Share & Cite This Article
Watanabe, K. Rate-Distortion Bounds for Kernel-Based Distortion Measures. Entropy 2017, 19, 336.