Learning Entropy as a Learning-Based Information Concept
Abstract
Recently, a novel concept of a non-probabilistic novelty detection measure, based on a multi-scale quantification of unusually large learning efforts of machine learning systems, was introduced as learning entropy (LE). The key finding with LE is that the learning effort of a learning system is quantifiable as a novelty measure for each individually observed data point of otherwise complex dynamic systems, while model accuracy is not a necessary requirement for novelty detection. This brief paper extends the explanation of LE from an informatics approach toward a cognitive (learning-based) information measure, emphasizing the distinction from Shannon's concept of probabilistic information. Fundamental derivations of learning entropy and of its practical estimations are recalled and further extended. The potentials, limitations, and thus the current challenges of LE are discussed.
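To make the multi-scale learning-effort idea concrete, the following is a minimal, illustrative sketch rather than the authors' reference implementation: a linear predictor is adapted by gradient descent, and each sample is scored by how often its weight increments are unusually large relative to their recent average, over several detection sensitivities. The window length M, the sensitivity set alphas, the learning rate, and all function names here are assumptions chosen for illustration.

```python
import numpy as np

def learning_entropy_sketch(x, n_taps=4, lr=0.1, M=50,
                            alphas=(2.0, 4.0, 8.0, 16.0)):
    """Illustrative per-sample novelty score in the spirit of LE:
    the fraction of weight increments that exceed a sensitivity-scaled
    recent average of increments, pooled over several sensitivities."""
    w = np.zeros(n_taps)
    dw_hist = []                      # recent |weight increments|
    scores = np.zeros(len(x))
    for k in range(n_taps, len(x)):
        u = x[k - n_taps:k][::-1]     # regressor of past samples
        e = x[k] - w @ u              # prediction error
        dw = lr * e * u               # gradient-descent weight increment
        w += dw
        if len(dw_hist) >= M:
            mean_dw = np.mean(dw_hist[-M:], axis=0)
            hits = sum(np.sum(np.abs(dw) > a * mean_dw) for a in alphas)
            scores[k] = hits / (len(alphas) * n_taps)
        dw_hist.append(np.abs(dw))
    return scores

# Usage: a sine wave with a small injected perturbation; the score
# around the perturbed samples should rise even though the predictor
# never becomes an accurate model of the disturbance itself.
t = np.arange(600)
sig = np.sin(0.1 * t)
sig[400:405] += 0.3                   # small novelty
print(learning_entropy_sketch(sig)[395:410].round(2))
```

Note the design point this sketch illustrates: the score is computed from the adaptation effort (the weight increments), not from the prediction error magnitude, which is why model accuracy is not a prerequisite for the novelty measure.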
Cite This Article
Bukovsky, I.; Kinsner, W.; Homma, N. Learning Entropy as a Learning-Based Information Concept. Entropy 2019, 21(2), 166.