Open Access Article
Entropy 2018, 20(6), 397; https://doi.org/10.3390/e20060397

Shannon Entropy Estimation in ∞-Alphabets from Convergence Results: Studying Plug-In Estimators

Information and Decision System Group, Department of Electrical Engineering, Universidad de Chile, Av. Tupper 2007, Santiago 7591538, Chile
Received: 12 April 2018 / Revised: 14 May 2018 / Accepted: 18 May 2018 / Published: 23 May 2018
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)

Abstract

This work addresses the problem of Shannon entropy estimation in countably infinite alphabets by studying and adopting recent convergence results for the entropy functional, which is known to be discontinuous in the space of probabilities on ∞-alphabets. Sufficient conditions for the convergence of the entropy are used in conjunction with deviation inequalities, covering scenarios with both finitely and infinitely supported assumptions on the target distribution. From this perspective, four plug-in histogram-based estimators are studied, showing that the convergence results are instrumental in deriving new strongly consistent estimators of the entropy. The main application of this methodology is a new data-driven partition (plug-in) estimator. This scheme uses the data to restrict the support on which the distribution is estimated, finding an optimal balance between estimation and approximation errors. The proposed scheme offers a consistent (distribution-free) estimator of the entropy in ∞-alphabets and achieves optimal rates of convergence under certain regularity conditions on the problem (a finite but unknown support, and tail-bounded conditions on the target distribution).
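
To make the plug-in idea concrete, below is a minimal Python sketch of a histogram-based (plug-in) estimator of the Shannon entropy H(p) = -∑_k p(k) log p(k): the empirical distribution is computed from the samples and, optionally, symbols whose empirical frequency falls below a threshold are discarded to restrict the estimated support. The function name plugin_entropy, the thresholding rule, and the n^{-1/2} cutoff are illustrative assumptions for exposition, not the paper's exact data-driven partition scheme.

import numpy as np

def plugin_entropy(samples, threshold=None):
    """Plug-in (histogram-based) Shannon entropy estimate in nats.

    If `threshold` is given, symbols whose empirical frequency falls
    below it are dropped before the entropy is computed -- a crude
    stand-in for a data-driven support restriction.
    """
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    if threshold is not None:
        p = p[p >= threshold]   # restrict the estimated support
        p = p / p.sum()         # renormalize on the restricted set
    return float(-np.sum(p * np.log(p)))

# Example: samples from a geometric distribution on {0, 1, 2, ...},
# an infinitely supported target with an exponentially decaying tail.
rng = np.random.default_rng(0)
samples = rng.geometric(p=0.3, size=10_000) - 1
n = samples.size
# Tying the cutoff to the sample size (here ~ n^{-1/2}, an assumed
# choice) mimics the estimation/approximation tradeoff described in
# the abstract: a larger threshold discards more noisy rare symbols
# but truncates more of the true tail mass.
print(plugin_entropy(samples))                      # raw plug-in
print(plugin_entropy(samples, threshold=n**-0.5))   # restricted support
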
Keywords: Shannon entropy estimation; countably infinite alphabets; entropy convergence results; statistical learning; histogram-based estimators; data-driven partitions; strong consistency; rates of convergence
This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

MDPI and ACS Style

Silva, J.F. Shannon Entropy Estimation in ∞-Alphabets from Convergence Results: Studying Plug-In Estimators. Entropy 2018, 20, 397.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
