Open Access Article

Extending the Extreme Physical Information to Universal Cognitive Models via a Confident Information First Principle

School of Computer Science and Technology, Tianjin University, Tianjin 300072, China
Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China
Department of Computing and Communications, The Open University, Milton Keynes MK7 6AA, UK
Author to whom correspondence should be addressed.
Entropy 2014, 16(7), 3670-3688;
Received: 25 March 2014 / Revised: 6 June 2014 / Accepted: 20 June 2014 / Published: 1 July 2014
(This article belongs to the Special Issue Information Geometry)
The principle of extreme physical information (EPI) can be used to derive many known laws and distributions in theoretical physics by extremizing the physical information loss K, i.e., the difference between the observed Fisher information I and the intrinsic information bound J of the physical phenomenon being measured. However, for complex cognitive systems of high dimensionality (e.g., human language processing and image recognition), the information bound J could be excessively larger than I (J ≫ I), due to insufficient observation, which would lead to serious over-fitting problems in the derivation of cognitive models. Moreover, there is a lack of an established exact invariance principle that gives rise to the bound information in universal cognitive systems. This limits the direct application of EPI. To narrow down the gap between I and J, in this paper, we propose a confident-information-first (CIF) principle to lower the information bound J by preserving confident parameters and ruling out unreliable or noisy parameters in the probability density function being measured. The confidence of each parameter can be assessed by its contribution to the expected Fisher information distance between the physical phenomenon and its observations. In addition, given a specific parametric representation, this contribution can often be directly assessed by the Fisher information, which establishes a connection with the inverse variance of any unbiased estimate for the parameter via the Cramér–Rao bound. We then consider the dimensionality reduction in the parameter spaces of binary multivariate distributions. We show that the single-layer Boltzmann machine without hidden units (SBM) can be derived using the CIF principle. An illustrative experiment is conducted to show how the CIF principle improves the density estimation performance.
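The abstract's notion of parameter "confidence" can be illustrated with a minimal sketch. Assuming a fully factorized Bernoulli model purely for illustration (the paper treats general binary multivariate distributions), each parameter p has Fisher information I(p) = 1/(p(1−p)), and the Cramér–Rao bound ties this to the inverse variance of any unbiased estimate, so high-information parameters are the reliably estimable, "confident" ones. The function name and toy probabilities below are hypothetical, not from the paper.

```python
import numpy as np

def fisher_information_bernoulli(p):
    """Fisher information of a Bernoulli parameter p: I(p) = 1 / (p * (1 - p))."""
    return 1.0 / (p * (1.0 - p))

rng = np.random.default_rng(0)
# Toy data: 1000 observations of 5 binary features with known marginals.
true_p = np.array([0.5, 0.9, 0.99, 0.3, 0.995])
X = rng.random((1000, 5)) < true_p

# Estimate each marginal; clip away 0/1 to keep the information finite.
p_hat = X.mean(axis=0).clip(1e-6, 1 - 1e-6)
info = fisher_information_bernoulli(p_hat)

# Cramér–Rao: Var(p̂_i) >= 1 / (n * I(p_i)), so a large Fisher information
# means the parameter can be estimated with low variance ("confident").
order = np.argsort(info)[::-1]  # most confident parameters first
keep = order[:3]                # CIF-style reduction: retain only confident ones
```

Here the near-deterministic features (marginals 0.995, 0.99, 0.9) carry the largest Fisher information and would be preserved, while the high-variance ones are ruled out, which is the spirit of lowering the bound J under the CIF principle.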
Keywords: information geometry; Boltzmann machine; Fisher information; parametric reduction
MDPI and ACS Style

Zhao, X.; Hou, Y.; Song, D.; Li, W. Extending the Extreme Physical Information to Universal Cognitive Models via a Confident Information First Principle. Entropy 2014, 16, 3670-3688.

