Special Issue "The Information Bottleneck Method"

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (1 February 2013)

Special Issue Editors

Guest Editor
Prof. Dr. Naftali Tishby

School of Engineering and Computer Science
Director, Interdisciplinary Center for Neural Computation
Flinkman Professor of Brain Research, Edmond and Lilly Safra Center for Brain Sciences, The Hebrew University, Jerusalem 91904, Israel
Interests: information theory in machine learning; dynamical systems and control; statistical physics of neural systems; computational neuroscience
Guest Editor
Dr. Daniel Polani

Department of Computer Science, University of Hertfordshire, Hatfield AL10 9AB, UK
Interests: artificial intelligence; artificial life; information theory for intelligent information processing; sensor evolution; collective and multiagent systems

Special Issue Information

Dear Colleagues,

The Information Bottleneck Method is a simple optimization principle for model-free extraction of the relevant part of one random variable with respect to another. Its formulation is closely related to classical problems in information theory, such as Rate-Distortion Theory and channel coding with side information (the Wyner–Ahlswede–Körner (WAK) problem), but it has turned out to be connected with many other interesting domains. It generalizes the notion of minimal sufficient statistics in classical estimation theory; generalizes Canonical Correlation Analysis (CCA) when applied to multivariate Gaussian variables; provides an optimal solution to the Kelly gambling problem; and serves as a basic building block for an information theory of perception and action. It provides elegant extensions of optimal control theory to information-gathering problems and has numerous applications in machine learning and computational neuroscience. This special issue aims to provide a thorough discussion of the statistical, algorithmic, control-theoretic, and biological aspects of this suggestive principle.
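For readers unfamiliar with the principle, its standard self-consistent equations (alternating updates of p(t|x), p(t), and p(y|t) at a fixed trade-off parameter β) can be sketched in a few lines of NumPy. This is a minimal illustration, not code from any paper in this issue; the function name and the toy distribution below are ours:

```python
import numpy as np

def information_bottleneck(p_xy, n_clusters, beta, n_iter=200, seed=0):
    """Iterative IB: alternate the three self-consistent equations
    p(t|x) ~ p(t) exp(-beta * KL(p(y|x) || p(y|t))),
    p(t)   = sum_x p(t|x) p(x),
    p(y|t) = sum_x p(y|x) p(x|t)."""
    rng = np.random.default_rng(seed)
    n_x, n_y = p_xy.shape
    eps = 1e-12
    p_x = p_xy.sum(axis=1)                      # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]           # conditional p(y|x)

    # random soft assignment p(t|x), rows normalized
    p_t_given_x = rng.random((n_x, n_clusters))
    p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        p_t = p_t_given_x.T @ p_x               # cluster marginal p(t)
        # joint p(t, y), then normalize to p(y|t)
        p_ty = (p_t_given_x * p_x[:, None]).T @ p_y_given_x
        p_y_given_t = p_ty / (p_t[:, None] + eps)
        # KL(p(y|x) || p(y|t)) for every (x, t) pair
        kl = (p_y_given_x[:, None, :] *
              (np.log(p_y_given_x[:, None, :] + eps) -
               np.log(p_y_given_t[None, :, :] + eps))).sum(axis=2)
        # softmax update of p(t|x)
        logits = np.log(p_t + eps)[None, :] - beta * kl
        logits -= logits.max(axis=1, keepdims=True)
        p_t_given_x = np.exp(logits)
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)
    return p_t_given_x

# Toy example: x-values 0,1 predict y identically, as do 2,3;
# at high beta the bottleneck should group them into two clusters.
cond = np.array([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]])
p_xy = 0.25 * cond
q = information_bottleneck(p_xy, n_clusters=2, beta=10.0)
```

Because x = 0 and x = 1 share the same conditional p(y|x), their KL rows coincide and the algorithm necessarily assigns them the same soft cluster; varying β traces out the compression–relevance trade-off central to the papers below.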

Prof. Dr. Naftali Tishby
Dr. Daniel Polani
Guest Editors

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs).

Keywords

  • information divergences
  • sufficient statistics
  • predictive information
  • information and control
  • side information
  • cognitive information processing

Published Papers (6 papers)


Research

Open Access Article: Information Bottleneck Approach to Predictive Inference
Entropy 2014, 16(2), 968-989; doi:10.3390/e16020968
Received: 3 June 2013 / Accepted: 18 June 2013 / Published: 17 February 2014
Cited by 8 | PDF Full-text (276 KB) | HTML Full-text | XML Full-text
Abstract
This paper synthesizes a recent line of work on automated predictive model making inspired by Rate-Distortion theory, in particular by the Information Bottleneck method. Predictive inference is interpreted as a strategy for efficient communication. The relationship to thermodynamic efficiency is discussed. The overall aim of this paper is to explain how this information theoretic approach provides an intuitive, overarching framework for predictive inference.
(This article belongs to the Special Issue The Information Bottleneck Method)
Open Access Article: Consideration on Singularities in Learning Theory and the Learning Coefficient
Entropy 2013, 15(9), 3714-3733; doi:10.3390/e15093714
Received: 21 June 2013 / Revised: 29 August 2013 / Accepted: 30 August 2013 / Published: 6 September 2013
PDF Full-text (210 KB) | HTML Full-text | XML Full-text
Abstract
We consider the learning coefficients in learning theory and give two new methods for obtaining these coefficients in a homogeneous case: a method for finding a deepest singular point and a method to add variables. In application to Vandermonde matrix-type singularities, we show that these methods are effective. The learning coefficient of the generalization error in Bayesian estimation serves to measure the learning efficiency in singular learning models. Mathematically, the learning coefficient corresponds to a real log canonical threshold of singularities for the Kullback functions (relative entropy) in learning theory.
(This article belongs to the Special Issue The Information Bottleneck Method)
Open Access Article: Quantifying Morphological Computation
Entropy 2013, 15(5), 1887-1915; doi:10.3390/e15051887
Received: 10 March 2013 / Revised: 23 April 2013 / Accepted: 9 May 2013 / Published: 21 May 2013
Cited by 6 | PDF Full-text (2234 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The field of embodied intelligence emphasises the importance of the morphology and environment with respect to the behaviour of a cognitive system. The contribution of the morphology to the behaviour, commonly known as morphological computation, is well-recognised in this community. We believe that the field would benefit from a formalisation of this concept, as we would like to ask how much the morphology and the environment contribute to an embodied agent’s behaviour, or how an embodied agent can maximise the exploitation of its morphology within its environment. In this work we derive two concepts of measuring morphological computation, and we discuss their relation to the Information Bottleneck Method. The first concept asks how much the world contributes to the overall behaviour, and the second concept asks how much the agent’s action contributes to a behaviour. Various measures are derived from the concepts and validated in two experiments that highlight their strengths and weaknesses.
(This article belongs to the Special Issue The Information Bottleneck Method)
Open Access Article: Function Identification in Neuron Populations via Information Bottleneck
Entropy 2013, 15(5), 1587-1608; doi:10.3390/e15051587
Received: 26 February 2013 / Revised: 27 March 2013 / Accepted: 22 April 2013 / Published: 6 May 2013
Cited by 1 | PDF Full-text (825 KB) | HTML Full-text | XML Full-text
Abstract
It is plausible to hypothesize that the spiking responses of certain neurons represent functions of the spiking signals of other neurons. A natural ensuing question concerns how to use experimental data to infer what kind of a function is being computed. Model-based approaches typically require assumptions on how information is represented. By contrast, information measures are sensitive only to relative behavior: information is unchanged by applying arbitrary invertible transformations to the involved random variables. This paper develops an approach based on the information bottleneck method that attempts to find such functional relationships in a neuron population. Specifically, the information bottleneck method is used to provide appropriate compact representations which can then be parsed to infer functional relationships. In the present paper, the parsing step is specialized to the case of remapped-linear functions. The approach is validated on artificial data and then applied to recordings from the motor cortex of a macaque monkey performing an arm-reaching task. Functional relationships are identified and shown to exhibit some degree of persistence across multiple trials of the same experiment.
(This article belongs to the Special Issue The Information Bottleneck Method)
Open Access Article: A Free Energy Principle for Biological Systems
Entropy 2012, 14(11), 2100-2121; doi:10.3390/e14112100
Received: 17 August 2012 / Revised: 1 October 2012 / Accepted: 25 October 2012 / Published: 31 October 2012
Cited by 27 | PDF Full-text (434 KB) | HTML Full-text | XML Full-text
Abstract
This paper describes a free energy principle that tries to explain the ability of biological systems to resist a natural tendency to disorder. It appeals to circular causality of the sort found in synergetic formulations of self-organization (e.g., the slaving principle) and models of coupled dynamical systems, using nonlinear Fokker–Planck equations. Here, circular causality is induced by separating the states of a random dynamical system into external and internal states, where external states are subject to random fluctuations and internal states are not. This reduces the problem to finding some (deterministic) dynamics of the internal states that ensure the system visits a limited number of external states; in other words, that the measure of its (random) attracting set, or the Shannon entropy of the external states, is small. We motivate a solution using a principle of least action based on variational free energy (from statistical physics) and establish the conditions under which it is formally equivalent to the information bottleneck method. This approach has proved useful in understanding the functional architecture of the brain. The generality of variational free energy minimisation and corresponding information theoretic formulations may speak to interesting applications beyond the neurosciences; e.g., in molecular or evolutionary biology.
(This article belongs to the Special Issue The Information Bottleneck Method)
Open Access Article: The Mathematical Structure of Information Bottleneck Methods
Entropy 2012, 14(3), 456-479; doi:10.3390/e14030456
Received: 2 December 2011 / Revised: 7 February 2012 / Accepted: 24 February 2012 / Published: 1 March 2012
Cited by 3 | PDF Full-text (221 KB) | HTML Full-text | XML Full-text
Abstract
Information Bottleneck-based methods use mutual information as a distortion function in order to extract relevant details about the structure of a complex system by compression. One of the approaches used to generate optimal compressed representations is by annealing a parameter. In this manuscript we present a common framework for the study of annealing in information distortion problems. We identify features that should be common to any annealing optimization problem. The main mathematical tools that we use come from the analysis of dynamical systems in the presence of symmetry (equivariant bifurcation theory). Through the compression problem, we make connections to the world of combinatorial optimization and pattern recognition. The two approaches use very different vocabularies and consider different problems to be “interesting”. We provide an initial link, through the Normalized Cut Problem, where the two disciplines can exchange tools and ideas.
(This article belongs to the Special Issue The Information Bottleneck Method)

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
entropy@mdpi.com
Tel.: +41 61 683 77 34
Fax: +41 61 302 89 18