Special Issue "Information Decomposition of Target Effects from Multi-Source Interactions"

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (30 June 2017)

Special Issue Editors

Guest Editor
Dr. Joseph Lizier

Centre for Complex Systems, Faculty of Engineering and IT, The University of Sydney, Sydney, New South Wales, Australia
Interests: information theory; complex systems; information transfer; information dynamics; complex networks; dynamical systems; transfer entropy; computational neuroscience
Guest Editor
Dr. Nils Bertschinger

Frankfurt Institute for Advanced Studies (FIAS), Frankfurt, Germany
Guest Editor
Prof. Juergen Jost

1. Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
2. Santa Fe Institute, NM, USA
Guest Editor
Prof. Michael Wibral

1. MEG Unit, Brain Imaging Center, Goethe University, Frankfurt, Germany
2. Max Planck Institute for Dynamics and Self-Organisation, Goettingen, Germany

Special Issue Information

Dear Colleagues,

Shannon information theory has provided rigorous ways to capture our intuitive notions regarding uncertainty and information, and made an enormous impact in doing so. One of the fundamental measures here is mutual information, which captures the average information contained in one variable about another, and vice versa. If we have two source variables and a target, for example, we can measure the information held by one source about the target, the information held by the other source about the target, and the information held by those sources together about the target. Any other notion about the directed information relationship between these variables that can be captured by classical information-theoretic measures (e.g., conditional mutual information terms) is linearly redundant with those three quantities.
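
To make that linear relationship explicit (writing X1 and X2 for the sources and T for the target), the chain rule for mutual information gives

I(X1, X2; T) = I(X1; T) + I(X2; T | X1) = I(X2; T) + I(X1; T | X2),

so each conditional mutual information term is already determined once the three quantities above are known.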

However, intuitively, there is a strong desire to measure further notions of how this directed information interaction may be decomposed: e.g., how much information the two source variables hold redundantly about the target, how much each source variable holds uniquely, and how much information can only be discerned by synergistically examining the two sources together. These notions go beyond the traditional information-theoretic view of a channel serving the purpose of reliable communication, considering now the situation of multiple communication streams converging on a single target. This is a common situation in biology, and in particular in neuroscience, where, say, the ability of a target to synergistically fuse multiple information sources in a non-trivial fashion is likely to have its own intrinsic value, independently of the reliability of communication.
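
The canonical illustration of synergy is a target equal to the exclusive OR of two independent uniform binary sources: neither source alone carries any information about the target, yet together they determine it completely. A minimal Python sketch of that calculation, using only classical mutual information rather than any particular decomposition measure proposed in this issue (the helper function below is purely illustrative):

    import numpy as np
    from itertools import product

    def mutual_information(p_xy):
        # Mutual information (in bits) from a 2-D joint probability table p(x, y).
        px = p_xy.sum(axis=1, keepdims=True)
        py = p_xy.sum(axis=0, keepdims=True)
        mask = p_xy > 0
        return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (px @ py)[mask])))

    # Joint distribution over (X1, X2, T) with independent uniform sources and T = X1 XOR X2.
    p = np.zeros((2, 2, 2))
    for x1, x2 in product([0, 1], repeat=2):
        p[x1, x2, x1 ^ x2] = 0.25

    print(mutual_information(p.sum(axis=1)))    # I(X1; T) = 0 bits
    print(mutual_information(p.sum(axis=0)))    # I(X2; T) = 0 bits
    print(mutual_information(p.reshape(4, 2)))  # I(X1, X2; T) = 1 bit, purely synergistic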

The absence of measures for such decompositions into redundant, unique and synergistic information is arguably the most fundamental missing piece in classical information theory. Triggered by the formulation of the Partial Information Decomposition framework by Williams and Beer in 2010, the past few years have witnessed a concentration of work by the community in proposing, contrasting, and investigating new measures to capture these notions of information decomposition. Other theoretical developments consider how these measures relate to concepts of information processing in terms of storage, transfer and modification. Meanwhile, computational neuroscience has emerged as a primary application area, due to significant interest in questions surrounding how target neurons integrate information from large numbers of sources, as well as the availability of data sets with which to investigate these questions.
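
For orientation, in the two-source notation of Williams and Beer the sought-after components (redundancy R, unique informations U1 and U2, and synergy S) are usually required to satisfy

I(X1, X2; T) = R + U1 + U2 + S
I(X1; T) = R + U1
I(X2; T) = R + U2

so that fixing any one component, typically the redundancy, determines the rest; the measures proposed and debated in this literature differ precisely in how that remaining degree of freedom is pinned down.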

This Special Issue seeks to bring together these efforts, to capture a snapshot of the current research, as well as to provide impetus for and focused scrutiny on newer work. We also seek to present progress to the wider community and attract further research. We welcome research articles proposing new measures or pointing out future directions, review articles on existing approaches, commentary on properties and limitations of such approaches, philosophical contributions on how such measures may be used or interpreted, applications to empirical data (e.g., neural imaging data), and more.

Dr. Joseph Lizier
Dr. Nils Bertschinger
Prof. Michael Wibral
Prof. Juergen Jost
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

Shannon information; information theory; information decomposition; mutual information; synergy; redundancy; shared information; transfer entropy

Published Papers (8 papers)


Research

Article (Open Access, Feature Paper): Quantifying Information Modification in Developing Neural Networks via Partial Information Decomposition
Entropy 2017, 19(9), 494; doi:10.3390/e19090494
Received: 13 July 2017 / Revised: 12 September 2017 / Accepted: 12 September 2017 / Published: 14 September 2017
Abstract
Information processing performed by any system can be conceptually decomposed into the transfer, storage and modification of information—an idea dating all the way back to the work of Alan Turing. However, until very recently, formal information-theoretic definitions were only available for information transfer and storage, not for modification. This has changed with the extension of Shannon information theory via the decomposition of the mutual information between the inputs to and the output of a process into unique, shared and synergistic contributions from the inputs, called a partial information decomposition (PID). The synergistic contribution in particular has been identified as the basis for a definition of information modification. We here review the requirements for a functional definition of information modification in neuroscience, and apply a recently proposed measure of information modification to investigate the developmental trajectory of information modification in a culture of neurons in vitro, using partial information decomposition. We found that modification rose with maturation, but ultimately collapsed when redundant information among neurons took over. This indicates that this particular developing neural system initially developed intricate processing capabilities, but ultimately displayed information processing that was highly similar across neurons, possibly due to a lack of external inputs. We close by pointing out the enormous promise PID and the analysis of information modification hold for the understanding of neural systems.

Article (Open Access): The Partial Information Decomposition of Generative Neural Network Models
Entropy 2017, 19(9), 474; doi:10.3390/e19090474
Received: 8 July 2017 / Revised: 13 August 2017 / Accepted: 1 September 2017 / Published: 6 September 2017
Abstract
In this work we study the distributed representations learnt by generative neural network models. In particular, we investigate the properties of redundant and synergistic information that groups of hidden neurons contain about the target variable. To this end, we use an emerging branch of information theory called partial information decomposition (PID) and track the informational properties of the neurons through training. We find two differentiated phases during the training process: a first short phase in which the neurons learn redundant information about the target, and a second phase in which neurons start specialising and each of them learns unique information about the target. We also find that in smaller networks individual neurons learn more specific information about certain features of the input, suggesting that learning pressure can encourage disentangled representations.

Article (Open Access): Information Theoretical Study of Cross-Talk Mediated Signal Transduction in MAPK Pathways
Entropy 2017, 19(9), 469; doi:10.3390/e19090469
Received: 28 June 2017 / Revised: 30 August 2017 / Accepted: 1 September 2017 / Published: 5 September 2017
Abstract
Biochemical networks having similar functional pathways are often correlated due to cross-talk among the homologous proteins in the different networks. Using a stochastic framework, we address the functional significance of the cross-talk between two pathways. A theoretical analysis of generic MAPK pathways reveals that cross-talk is responsible for developing coordinated fluctuations between the pathways. The extent of correlation, evaluated in terms of an information-theoretic measure, provides directionality to the net information propagation. Stochastic time series suggest that the cross-talk generates synchronisation in a cell. In addition, the cross-interaction develops correlation between two different phosphorylated kinases expressed in each of the cells in a population of genetically identical cells. Depending on the number of inputs and outputs, we identify signal integration and signal bifurcation motifs that arise due to inter-pathway connectivity in the composite network. Analysis using partial information decomposition, an extended formalism of multivariate information calculation, also quantifies the net synergy in the information propagation through the branched pathways. Under this formalism, a signature of synergy or redundancy is observed due to architectural differences in the branched pathways.
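
A note on terminology: the "net synergy" referred to here is, in many PID-based analyses, the whole-minus-sum quantity I(T; X1, X2) - I(T; X1) - I(T; X2) (the negative co-information), which is positive when synergy outweighs redundancy and negative otherwise; this is a common convention rather than a statement of the exact definition adopted in this particular paper.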

Article (Open Access): Morphological Computation: Synergy of Body and Brain
Entropy 2017, 19(9), 456; doi:10.3390/e19090456
Received: 9 July 2017 / Revised: 18 August 2017 / Accepted: 25 August 2017 / Published: 31 August 2017
Abstract
There are numerous examples that show how the exploitation of the body’s physical properties can lift the burden from the brain. Examples include grasping, swimming, locomotion, and motion detection. The term Morphological Computation was originally coined to describe processes in the body that would otherwise have to be conducted by the brain. In this paper, we argue for a synergistic perspective, and by that we mean that Morphological Computation is a process which requires a close interaction of body and brain. Based on a model of the sensorimotor loop, we study a new measure of synergistic information and show that, compared to previous results, it is more reliable in cases in which there is no synergistic information. Furthermore, we discuss an algorithm that allows the calculation of the measure in non-trivial (non-binary) systems.

Article (Open Access): Invariant Components of Synergy, Redundancy, and Unique Information among Three Variables
Entropy 2017, 19(9), 451; doi:10.3390/e19090451
Received: 27 June 2017 / Revised: 21 August 2017 / Accepted: 25 August 2017 / Published: 28 August 2017
Cited by 1
Abstract
In a system of three stochastic variables, the Partial Information Decomposition (PID) of Williams and Beer dissects the information that two variables (sources) carry about a third variable (target) into nonnegative information atoms that describe redundant, unique, and synergistic modes of dependencies among the variables. However, the classification of the three variables into two sources and one target limits the dependency modes that can be quantitatively resolved, and does not naturally suit all systems. Here, we extend the PID to describe trivariate modes of dependencies in full generality, without introducing additional decomposition axioms or making assumptions about the target/source nature of the variables. By comparing different PID lattices of the same system, we unveil a finer PID structure made of seven nonnegative information subatoms that are invariant to different target/source classifications and that are sufficient to describe the relationships among all PID lattices. This finer structure naturally splits redundant information into two nonnegative components: the source redundancy, which arises from the pairwise correlations between the source variables, and the non-source redundancy, which does not, and relates to the synergistic information the sources carry about the target. The invariant structure is also sufficient to construct the system’s entropy, hence it characterizes completely all the interdependencies in the system.

Article (Open Access, Feature Paper): Multiscale Information Decomposition: Exact Computation for Multivariate Gaussian Processes
Entropy 2017, 19(8), 408; doi:10.3390/e19080408
Received: 21 June 2017 / Revised: 3 August 2017 / Accepted: 7 August 2017 / Published: 8 August 2017
Cited by 1
Abstract
Exploiting the theory of state space models, we derive the exact expressions of the information transfer, as well as redundant and synergistic transfer, for coupled Gaussian processes observed at multiple temporal scales. All of the terms constituting the frameworks known as interaction information decomposition and partial information decomposition can thus be analytically obtained for different time scales from the parameters of the VAR model that fits the processes. We report the application of the proposed methodology first to benchmark Gaussian systems, showing that this class of systems may generate patterns of information decomposition characterized by prevalently redundant or synergistic information transfer persisting across multiple time scales, or even by the alternating prevalence of redundant and synergistic source interaction depending on the time scale. Then, we apply our method to an important topic in neuroscience, i.e., the detection of causal interactions in human epilepsy networks, for which we show the relevance of partial information decomposition to the detection of multiscale information transfer spreading from the seizure onset zone.
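
The analytical tractability exploited in such Gaussian analyses ultimately rests on the fact that, for jointly Gaussian variables, every mutual information term follows directly from covariance matrices. A minimal Python sketch of that standard building block (not the authors' state-space machinery; the function name and example numbers are purely illustrative):

    import numpy as np

    def gaussian_mutual_information(cov, idx_x, idx_y):
        # I(X; Y) in nats for jointly Gaussian variables, from the full covariance matrix:
        # I = 0.5 * ln( det(Cov_X) * det(Cov_Y) / det(Cov_joint) )
        cxx = cov[np.ix_(idx_x, idx_x)]
        cyy = cov[np.ix_(idx_y, idx_y)]
        cjoint = cov[np.ix_(idx_x + idx_y, idx_x + idx_y)]
        return 0.5 * np.log(np.linalg.det(cxx) * np.linalg.det(cyy) / np.linalg.det(cjoint))

    # Illustrative covariance of (X1, X2, T); any positive-definite matrix would do.
    cov = np.array([[1.0, 0.5, 0.7],
                    [0.5, 1.0, 0.6],
                    [0.7, 0.6, 1.0]])
    print(gaussian_mutual_information(cov, [0], [2]))     # I(X1; T)
    print(gaussian_mutual_information(cov, [1], [2]))     # I(X2; T)
    print(gaussian_mutual_information(cov, [0, 1], [2]))  # I(X1, X2; T)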

Article (Open Access): On Extractable Shared Information
Entropy 2017, 19(7), 328; doi:10.3390/e19070328
Received: 31 May 2017 / Accepted: 22 June 2017 / Published: 3 July 2017
Cited by 1
Abstract
We consider the problem of quantifying the information shared by a pair of random variables X1, X2 about another variable S. We propose a new measure of shared information, called extractable shared information, that is left monotonic; that is, the information shared about S is bounded from below by the information shared about f(S) for any function f. We show that our measure leads to a new nonnegative decomposition of the mutual information I(S; X1X2) into shared, complementary and unique components. We study properties of this decomposition and show that a left monotonic shared information is not compatible with a Blackwell interpretation of unique information. We also discuss whether it is possible to have a decomposition in which both shared and unique information are left monotonic.
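
In symbols, the left monotonicity described here reads SI(S; X1, X2) >= SI(f(S); X1, X2) for every function f of the target S, writing SI here for the shared-information measure.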

Article (Open Access): Measuring Multivariate Redundant Information with Pointwise Common Change in Surprisal
Entropy 2017, 19(7), 318; doi:10.3390/e19070318
Received: 2 May 2017 / Revised: 25 June 2017 / Accepted: 27 June 2017 / Published: 29 June 2017
Cited by 3
Abstract
The problem of how to properly quantify redundant information is an open question that has been the subject of much recent research. Redundant information refers to information about a target variable S that is common to two or more predictor variables Xi. It can be thought of as quantifying overlapping information content or similarities in the representation of S between the Xi. We present a new measure of redundancy which measures the common change in surprisal shared between variables at the local or pointwise level. We provide a game-theoretic operational definition of unique information, and use this to derive constraints which are used to obtain a maximum entropy distribution. Redundancy is then calculated from this maximum entropy distribution by counting only those local co-information terms which admit an unambiguous interpretation as redundant information. We show how this redundancy measure can be used within the framework of the Partial Information Decomposition (PID) to give an intuitive decomposition of the multivariate mutual information into redundant, unique and synergistic contributions. We compare our new measure to existing approaches over a range of example systems, including continuous Gaussian variables. Matlab code for the measure is provided, including all considered examples.
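
For orientation, the "local" or "pointwise" quantities referred to here are the per-realisation terms whose expectation gives the familiar averages; for instance, the local mutual information between particular outcomes s and x is i(s; x) = log( p(s|x) / p(s) ), the change in surprisal of s on observing x, and I(S; X) is its average over the joint distribution.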

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
E-Mail: 
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18