Special Issue "Transfer Entropy"

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (31 December 2014)

Special Issue Editor

Guest Editor
Dr. Deniz Gencaga

Carnegie Mellon University, Pittsburgh, PA, USA
Interests: Bayesian data analysis; statistical signal processing; machine learning (for big data); information theory; source separation; computational mathematics and statistics; autonomous and intelligent systems; data mining and knowledge discovery; remote sensing; climatology; astronomy; systems biology; smart grid
Contribution: Special Issue "Transfer Entropy"

Special Issue Information

Dear Colleagues,

In many research fields, we need to analyze the causal interactions among the variables of a complex system in order to better understand its physical behavior. While linear techniques, such as correlation, are widely used to identify and characterize these relationships, more advanced information-based techniques, such as mutual information and transfer entropy, have proven to be superior. Transfer entropy has been used to analyze the causal relationships between subsystem variables from data. The fact that it is non-symmetric enables one to infer the direction of information flow. Granger causality, which generally relies on autoregression to assess interactions, has been shown to be equivalent to transfer entropy in the case of Gaussian variables.

In this special issue, we would like to collect papers focusing on both the theory and applications of Transfer Entropy. The application areas are diverse and include neuroscience, systems biology, bioinformatics, environmental sciences, climatology, engineering, finance, astronomy, and Earth and space sciences. Of special interest are theoretical papers elucidating the state of the art of data-based transfer entropy estimation techniques.

Dr. Deniz Gencaga
Guest Editor
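
For reference, the measure discussed in this call is usually written in the form introduced by Schreiber. The block below states a common discrete-time definition, together with the Gaussian-case equivalence with Granger causality mentioned above; the notation (embedding lengths k and l) is generic rather than taken from any particular paper in this issue.

```latex
% Transfer entropy from a source X to a target Y, with y_t^{(k)} and x_t^{(l)}
% denoting the k and l most recent past values of Y and X:
T_{X \to Y} \;=\; \sum p\!\left(y_{t+1},\, y_t^{(k)},\, x_t^{(l)}\right)
   \log \frac{p\!\left(y_{t+1} \mid y_t^{(k)},\, x_t^{(l)}\right)}
             {p\!\left(y_{t+1} \mid y_t^{(k)}\right)}

% For jointly Gaussian variables, transfer entropy is equivalent to the
% Granger causality statistic F up to a factor of two:
T_{X \to Y} \;=\; \tfrac{1}{2}\, F_{X \to Y}
```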

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs).

Keywords

  • transfer entropy
  • causality
  • causal relationships
  • entropy estimation
  • statistical dependency
  • nonlinear interactions
  • interacting subsystems
  • information theory
  • Granger causality
  • mutual information
  • machine learning
  • data mining

Published Papers (16 papers)


Research

Open Access Article Contribution to Transfer Entropy Estimation via the k-Nearest-Neighbors Approach
Entropy 2015, 17(6), 4173-4201; doi:10.3390/e17064173
Received: 31 December 2014 / Accepted: 10 June 2015 / Published: 16 June 2015
PDF Full-text (750 KB) | HTML Full-text | XML Full-text
Abstract
This paper deals with the estimation of transfer entropy based on the k-nearest neighbors (k-NN) method. To this end, we first investigate the estimation of Shannon entropy involving a rectangular neighboring region, as suggested in the existing literature, and develop two kinds of entropy estimators. Then, applying the widely used error-cancellation approach to these entropy estimators, we propose two novel transfer entropy estimators that imply no extra computational cost compared to existing similar k-NN algorithms. Experimental simulations allow the new estimators to be compared with the transfer entropy estimators available in free toolboxes, which correspond to two different extensions of the Kraskov–Stögbauer–Grassberger (KSG) mutual information estimator to transfer entropy estimation, and prove the effectiveness of the new estimators. Full article
(This article belongs to the Special Issue Transfer Entropy)
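
As a companion to the abstract above, here is a minimal sketch of a Kraskov-style k-nearest-neighbor transfer entropy estimator of the general family the paper builds on. It is not the authors' algorithm: the function name, the use of a single past sample per series, and the numerical tolerance are illustrative assumptions, and inputs are assumed to be one-dimensional NumPy arrays.

```python
# Sketch of a KSG-type k-NN estimate of TE(X -> Y) = I(y_{t+1}; x_t | y_t),
# based on psi(k) + <psi(n_z + 1) - psi(n_yz + 1) - psi(n_xz + 1)> with
# neighbor counts taken within the k-th-neighbor distance in the joint space.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def transfer_entropy_knn(x, y, k=4):
    y_fut, y_past, x_past = y[1:], y[:-1], x[:-1]
    joint = np.column_stack([y_fut, y_past, x_past])
    # distance (max-norm) to the k-th neighbor of each point in the joint space
    dist, _ = cKDTree(joint).query(joint, k=k + 1, p=np.inf)
    eps = dist[:, -1]

    def counts(pts):
        tree = cKDTree(pts)
        # neighbors strictly inside the ball of radius eps, excluding the point itself
        return np.array([len(tree.query_ball_point(pt, r * (1 - 1e-10), p=np.inf)) - 1
                         for pt, r in zip(pts, eps)])

    n_yz = counts(np.column_stack([y_fut, y_past]))
    n_xz = counts(np.column_stack([x_past, y_past]))
    n_z = counts(y_past[:, None])
    return digamma(k) + np.mean(digamma(n_z + 1) - digamma(n_yz + 1) - digamma(n_xz + 1))

# Toy check: X drives Y, so TE(X -> Y) should clearly exceed TE(Y -> X).
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.zeros_like(x)
for t in range(1, len(x)):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(transfer_entropy_knn(x, y), transfer_entropy_knn(y, x))
```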
Open Access Article Assessing Coupling Dynamics from an Ensemble of Time Series
Entropy 2015, 17(4), 1958-1970; doi:10.3390/e17041958
Received: 30 November 2014 / Revised: 28 February 2015 / Accepted: 19 March 2015 / Published: 2 April 2015
Cited by 5 | PDF Full-text (709 KB) | HTML Full-text | XML Full-text
Abstract
Finding interdependency relations between time series provides valuable knowledge about the processes that generated the signals. Information theory sets a natural framework for important classes of statistical dependencies. However, a reliable estimation from information-theoretic functionals is hampered when the dependency to be assessed is brief or evolves in time. Here, we show that these limitations can be partly alleviated when we have access to an ensemble of independent repetitions of the time series. In particular, we gear a data-efficient estimator of probability densities to make use of the full structure of trial-based measures. By doing so, we can obtain time-resolved estimates for a family of entropy combinations (including mutual information, transfer entropy and their conditional counterparts), which are more accurate than the simple average of individual estimates over trials. We show with simulated and real data generated by coupled electronic circuits that the proposed approach allows one to recover the time-resolved dynamics of the coupling between different subsystems. Full article
(This article belongs to the Special Issue Transfer Entropy)
Open Access Article A Recipe for the Estimation of Information Flow in a Dynamical System
Entropy 2015, 17(1), 438-470; doi:10.3390/e17010438
Received: 7 February 2014 / Revised: 8 December 2014 / Accepted: 8 January 2015 / Published: 19 January 2015
Cited by 1 | PDF Full-text (1655 KB) | HTML Full-text | XML Full-text
Abstract
Information-theoretic quantities, such as entropy and mutual information (MI), can be used to quantify the amount of information needed to describe a dataset or the information shared between two datasets. In the case of a dynamical system, the behavior of the relevant variables can be tightly coupled, such that information about one variable at a given instance in time may provide information about other variables at later instances in time. This is often viewed as a flow of information, and tracking such a flow can reveal relationships among the system variables. Since the MI is a symmetric quantity, an asymmetric quantity, called Transfer Entropy (TE), has been proposed to estimate the directionality of the coupling. However, accurate estimation of entropy-based measures is notoriously difficult. Every method has its own free tuning parameter(s) and there is no consensus on an optimal way of estimating the TE from a dataset. We propose a new methodology to estimate TE and apply a set of methods together as an accuracy cross-check to provide a reliable mathematical tool for any given dataset. We demonstrate both the variability in TE estimation across techniques and the benefits of the proposed methodology for reliably estimating the directionality of coupling among variables. Full article
(This article belongs to the Special Issue Transfer Entropy)
Open Access Article Inferring a Drive-Response Network from Time Series of Topological Measures in Complex Networks with Transfer Entropy
Entropy 2014, 16(11), 5753-5776; doi:10.3390/e16115753
Received: 19 August 2014 / Revised: 5 October 2014 / Accepted: 28 October 2014 / Published: 3 November 2014
Cited by 1 | PDF Full-text (3117 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
Topological measures are crucial to describe, classify and understand complex networks. Many measures have been proposed to characterize specific features of specific networks, but the relationships among these measures remain unclear. Taking into account that pulling networks from different domains together for statistical analysis might lead to incorrect conclusions, we conduct our investigation with data observed from the same network in the form of simultaneously measured time series. We synthesize a transfer entropy-based framework to quantify the relationships among topological measures, and then to provide a holistic scenario of these measures by inferring a drive-response network. Techniques from Symbolic Transfer Entropy, Effective Transfer Entropy, and Partial Transfer Entropy are synthesized to deal with challenges such as non-stationary time series, finite-sample effects and indirect effects. We resort to kernel density estimation to assess the significance of the results based on surrogate data. The framework is applied to study 20 measures across 2779 records in the Technology Exchange Network, and the results are consistent with some existing knowledge. With the drive-response network, we evaluate the influence of each measure by calculating its strength, and cluster the measures into three classes, i.e., driving measures, responding measures and standalone measures, according to the network communities. Full article
(This article belongs to the Special Issue Transfer Entropy)
Open Access Article Structure of a Global Network of Financial Companies Based on Transfer Entropy
Entropy 2014, 16(8), 4443-4482; doi:10.3390/e16084443
Received: 16 May 2014 / Revised: 30 June 2014 / Accepted: 1 August 2014 / Published: 7 August 2014
Cited by 9 | PDF Full-text (1825 KB) | HTML Full-text | XML Full-text
Abstract
This work uses the stocks of the 197 largest companies in the world, in terms of market capitalization, in the financial area, from 2003 to 2012. We study the causal relationships between them using Transfer Entropy, which is calculated using the stocks of those companies and their counterparts lagged by one day. With this, we can assess which companies influence others according to sub-areas of the financial sector, which are banks, diversified financial services, savings and loans, insurance, private equity funds, real estate investment companies, and real estate trust funds. We also analyze the exchange of information between those stocks as seen by Transfer Entropy and the network formed by them based on this measure, verifying that they cluster mainly according to countries of origin, and then by industry and sub-industry. Then we use data on the stocks of companies in the financial sector of some countries that are suffering the most with the current credit crisis, namely Greece, Cyprus, Ireland, Spain, Portugal, and Italy, and assess, also using Transfer Entropy, which companies from the largest 197 are most affected by the stocks of these countries in crisis. The aim is to map a network of influences that may be used in the study of possible contagions originating in those countries in financial crisis. Full article
(This article belongs to the Special Issue Transfer Entropy)
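
The network construction described above starts from a matrix of pairwise transfer entropies between lagged stock series. A minimal, hypothetical sketch of that step is shown below; it simply loops over ordered pairs and delegates to whichever bivariate TE estimator one prefers (for instance the k-NN sketch given earlier), which is an assumption rather than the authors' exact procedure.

```python
# Hypothetical sketch: pairwise transfer entropy matrix for a panel of series.
# `returns` has shape (T, n_assets); `te_estimator` is any bivariate TE function.
import numpy as np

def te_matrix(returns, te_estimator, **kwargs):
    n = returns.shape[1]
    te = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                # entry (i, j): information flow from asset i to asset j
                te[i, j] = te_estimator(returns[:, i], returns[:, j], **kwargs)
    return te
```

An influence network can then be obtained, for example, by thresholding the matrix against quantiles computed from surrogate (shuffled) series.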
Open Access Article Transfer Entropy Expressions for a Class of Non-Gaussian Distributions
Entropy 2014, 16(3), 1743-1755; doi:10.3390/e16031743
Received: 17 January 2014 / Revised: 10 March 2014 / Accepted: 18 March 2014 / Published: 24 March 2014
PDF Full-text (218 KB) | HTML Full-text | XML Full-text
Abstract
Transfer entropy is a frequently employed measure of conditional co-dependence in non-parametric analysis of Granger causality. In this paper, we derive analytical expressions for transfer entropy for the multivariate exponential, logistic, Pareto (type I-IV) and Burr distributions. The latter two fall into the class of fat-tailed distributions with power-law properties, used frequently in biological, physical and actuarial sciences. We discover that the transfer entropy expressions for all four distributions are identical and depend merely on the multivariate distribution parameter and the number of distribution dimensions. Moreover, we find that in all four cases the transfer entropies are given by the same decreasing function of distribution dimensionality. Full article
(This article belongs to the Special Issue Transfer Entropy)
Open Access Article Information Theory Analysis of Cascading Process in a Synthetic Model of Fluid Turbulence
Entropy 2014, 16(3), 1272-1286; doi:10.3390/e16031272
Received: 29 September 2013 / Revised: 12 February 2014 / Accepted: 19 February 2014 / Published: 27 February 2014
Cited by 5 | PDF Full-text (283 KB) | HTML Full-text | XML Full-text
Abstract
The use of transfer entropy has proven to be helpful in detecting the direction of dynamical driving in the interaction of two processes, X and Y. In this paper, we present a different normalization for the transfer entropy, which is capable of better detecting the information transfer direction. This new normalized transfer entropy is applied to the detection of the direction of energy flux transfer in a synthetic model of fluid turbulence, namely the Gledzer–Ohkitani–Yamada shell model. This well-known model reproduces fully developed turbulence in Fourier space, characterized by an energy cascade towards the small scales (large wavenumbers k), so that applying the information-theoretic analysis to its outcome tests the reliability of the analysis tool rather than exploring the model physics. As a result, the presence of a direct cascade along the scales in the shell model and the locality of the interactions in the space of wavenumbers come out as expected, indicating the validity of this data analysis tool. In this context, the use of a normalized version of transfer entropy, able to account for the difference in the intrinsic randomness of the interacting processes, appears to perform better, avoiding the wrong conclusions to which the "traditional" transfer entropy would lead. Full article
(This article belongs to the Special Issue Transfer Entropy)
Open Access Article Risk Contagion in Chinese Banking Industry: A Transfer Entropy-Based Analysis
Entropy 2013, 15(12), 5549-5564; doi:10.3390/e15125549
Received: 7 September 2013 / Revised: 19 October 2013 / Accepted: 9 December 2013 / Published: 16 December 2013
Cited by 8 | PDF Full-text (391 KB) | HTML Full-text | XML Full-text
Abstract
What is the impact of a bank failure on the whole banking industry? To resolve this issue, the paper develops a transfer entropy-based method to determine the interbank exposure matrix between banks. This method constructs the interbank market structure by calculating the transfer entropy matrix using bank stock price sequences. The paper also evaluates the stability of the Chinese banking system by simulating the risk contagion process. The paper contributes to the literature on interbank contagion mainly in two ways. First, it establishes a convincing connection between the interbank market and transfer entropy, and exploits market information (stock prices) rather than presumptions to determine the interbank exposure matrix. Second, the empirical analysis provides an in-depth understanding of the stability of the current Chinese banking system. Full article
(This article belongs to the Special Issue Transfer Entropy)
Open Access Article Linearized Transfer Entropy for Continuous Second Order Systems
Entropy 2013, 15(8), 3186-3204; doi:10.3390/e15083276
Received: 22 May 2013 / Revised: 5 July 2013 / Accepted: 18 July 2013 / Published: 7 August 2013
Cited by 4 | PDF Full-text (397 KB)
Abstract
The transfer entropy has proven a useful measure of coupling among components of a dynamical system. This measure effectively captures the influence of one system component on the transition probabilities (dynamics) of another. The original motivation for the measure was to quantify such relationships among signals collected from a nonlinear system. However, we have found the transfer entropy to also be a useful concept in describing linear coupling among system components. In this work we derive the analytical transfer entropy for the response of coupled, second order linear systems driven with a Gaussian random process. The resulting expression is a function of the auto- and cross-correlation functions associated with the system response for different degrees-of-freedom. We show clearly that the interpretation of the transfer entropy as a measure of "information flow" is not always valid. In fact, in certain instances the "flow" can appear to switch directions simply by altering the degree of linear coupling. A safer way to view the transfer entropy is as a measure of the ability of a given system component to predict the dynamics of another. Full article
(This article belongs to the Special Issue Transfer Entropy)
Open Access Article Simulation Study of Direct Causality Measures in Multivariate Time Series
Entropy 2013, 15(7), 2635-2661; doi:10.3390/e15072635
Received: 28 March 2013 / Revised: 5 June 2013 / Accepted: 27 June 2013 / Published: 4 July 2013
Cited by 24 | PDF Full-text (3052 KB)
Abstract
Measures of the direction and strength of the interdependence among time series from multivariate systems are evaluated based on their statistical significance and discrimination ability. The best-known measures estimating direct causal effects, both linear and nonlinear, are considered, i.e., conditional Granger causality index (CGCI), partial Granger causality index (PGCI), partial directed coherence (PDC), partial transfer entropy (PTE), partial symbolic transfer entropy (PSTE) and partial mutual information on mixed embedding (PMIME). The performance of the multivariate coupling measures is assessed on stochastic and chaotic simulated uncoupled and coupled dynamical systems for different settings of embedding dimension and time series length. The CGCI, PGCI and PDC seem to outperform the other causality measures in the case of the linearly coupled systems, while the PGCI is the most effective one when latent and exogenous variables are present. The PMIME outweighs all others in the case of nonlinear simulation systems. Full article
(This article belongs to the Special Issue Transfer Entropy)
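
Two of the measures compared above can be written compactly. The following is a standard textbook form, with Z collecting the remaining observed variables and the superscript minus denoting past values, not a reproduction of the paper's exact notation.

```latex
% Conditional Granger causality index: log-ratio of the prediction-error
% variances of the restricted (without X) and unrestricted (with X) models.
\mathrm{CGCI}_{X \to Y \mid Z} \;=\;
  \ln \frac{\operatorname{Var}\!\big(y_{t+1} \mid Y_t^{-},\, Z_t^{-}\big)}
           {\operatorname{Var}\!\big(y_{t+1} \mid Y_t^{-},\, X_t^{-},\, Z_t^{-}\big)}

% Partial transfer entropy: conditional mutual information between the future
% of Y and the past of X, given the pasts of Y and Z.
\mathrm{PTE}_{X \to Y \mid Z} \;=\; I\!\big(y_{t+1};\, X_t^{-} \,\big|\, Y_t^{-},\, Z_t^{-}\big)
```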
Open Access Article Transfer Entropy for Coupled Autoregressive Processes
Entropy 2013, 15(3), 767-788; doi:10.3390/e15030767
Received: 26 January 2013 / Revised: 13 February 2013 / Accepted: 19 February 2013 / Published: 25 February 2013
Cited by 7 | PDF Full-text (1163 KB) | HTML Full-text | XML Full-text
Abstract
A method is shown for computing transfer entropy over multiple time lags for coupled autoregressive processes using formulas for the differential entropy of multivariate Gaussian processes. Two examples are provided: (1) a first-order filtered noise process whose state is measured with additive noise, and (2) two first-order coupled processes each of which is driven by white process noise. We found that, for the first example, increasing the first-order AR coefficient while keeping the correlation coefficient between filtered and measured process fixed, transfer entropy increased since the entropy of the measured process was itself increased. For the second example, the minimum correlation coefficient occurs when the process noise variances match. It was seen that matching of these variances results in minimum information flow, expressed as the sum of transfer entropies in both directions. Without a match, the transfer entropy is larger in the direction away from the process having the larger process noise. Fixing the process noise variances, transfer entropies in both directions increase with the coupling strength. Finally, we note that the method can be generally employed to compute other information theoretic quantities as well. Full article
(This article belongs to the Special Issue Transfer Entropy)
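
The closed-form route used above, expressing transfer entropy through differential entropies of multivariate Gaussians, can be sketched in a few lines. The helper below estimates TE(X→Y) at a single lag from sample covariances; it is an illustrative reduction of the approach, not the authors' multi-lag formulation, and the function name is hypothetical.

```python
# Sketch: lag-one transfer entropy for jointly Gaussian series, using
# H = 0.5 * ln((2*pi*e)^d * det(Sigma)); the (2*pi*e)^d factors cancel in TE.
import numpy as np

def gaussian_te(x, y):
    y_fut, y_past, x_past = y[1:], y[:-1], x[:-1]

    def logdet_cov(*cols):
        sigma = np.atleast_2d(np.cov(np.column_stack(cols), rowvar=False))
        return np.linalg.slogdet(sigma)[1]

    # TE = H(Yf,Yp) + H(Yp,Xp) - H(Yp) - H(Yf,Yp,Xp)
    return 0.5 * (logdet_cov(y_fut, y_past) + logdet_cov(y_past, x_past)
                  - logdet_cov(y_past) - logdet_cov(y_fut, y_past, x_past))
```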
Open Access Article On Thermodynamic Interpretation of Transfer Entropy
Entropy 2013, 15(2), 524-543; doi:10.3390/e15020524
Received: 16 November 2012 / Revised: 16 January 2013 / Accepted: 28 January 2013 / Published: 1 February 2013
Cited by 21 | PDF Full-text (324 KB) | HTML Full-text | XML Full-text
Abstract
We propose a thermodynamic interpretation of transfer entropy near equilibrium, using a specialised Boltzmann’s principle. The approach relates conditional probabilities to the probabilities of the corresponding state transitions. This in turn characterises transfer entropy as a difference of two entropy rates: the rate for a resultant transition and another rate for a possibly irreversible transition within the system affected by an additional source. We then show that this difference, the local transfer entropy, is proportional to the external entropy production, possibly due to irreversibility. Near equilibrium, transfer entropy is also interpreted as the difference in equilibrium stabilities with respect to two scenarios: a default case and the case with an additional source. Finally, we demonstrated that such a thermodynamic treatment is not applicable to information flow, a measure of causal effect. Full article
(This article belongs to the Special Issue Transfer Entropy)
Open Access Article Compensated Transfer Entropy as a Tool for Reliably Estimating Information Transfer in Physiological Time Series
Entropy 2013, 15(1), 198-219; doi:10.3390/e15010198
Received: 29 October 2012 / Revised: 21 December 2012 / Accepted: 5 January 2013 / Published: 11 January 2013
Cited by 24 | PDF Full-text (314 KB) | HTML Full-text | XML Full-text
Abstract
We present a framework for the estimation of transfer entropy (TE) under the conditions typical of physiological system analysis, featuring short multivariate time series and the presence of instantaneous causality (IC). The framework is based on recognizing that TE can be interpreted as the difference between two conditional entropy (CE) terms, and builds on an efficient CE estimator that compensates for the bias occurring for high-dimensional conditioning vectors and follows a sequential embedding procedure whereby the conditioning vectors are formed progressively according to a criterion for CE minimization. The issue of IC is addressed by accounting for zero-lag interactions according to two alternative empirical strategies: if IC is deemed physiologically meaningful, zero-lag effects are assimilated to lagged effects to make them causally relevant; if not, zero-lag effects are incorporated in both CE terms to obtain a compensation. The resulting compensated TE (cTE) estimator is tested on simulated time series, showing that its utilization improves sensitivity (from 61% to 96%) and specificity (from 5/6 to 0/6 false positives) in the detection of information transfer when instantaneous effects are causally meaningful and non-meaningful, respectively. Then, it is evaluated on examples of cardiovascular and neurological time series, supporting the feasibility of the proposed framework for the investigation of physiological mechanisms. Full article
(This article belongs to the Special Issue Transfer Entropy)
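
The identity underlying the framework above, stated in the abstract as the interpretation of TE as the difference between two conditional entropy terms, reads as follows in generic notation (with V collecting the other observed series and the superscript minus denoting past values):

```latex
\mathrm{TE}_{X \to Y \mid V} \;=\;
  H\!\big(y_t \mid Y_t^{-},\, V_t^{-}\big) \;-\; H\!\big(y_t \mid Y_t^{-},\, X_t^{-},\, V_t^{-}\big)
```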
Open Access Article Moving Frames of Reference, Relativity and Invariance in Transfer Entropy and Information Dynamics
Entropy 2013, 15(1), 177-197; doi:10.3390/e15010177
Received: 16 November 2012 / Accepted: 31 December 2012 / Published: 10 January 2013
Cited by 4 | PDF Full-text (493 KB) | HTML Full-text | XML Full-text
Abstract
We present a new interpretation of a local framework for information dynamics, including the transfer entropy, by defining a moving frame of reference for the observer of dynamics in lattice systems. This formulation is inspired by the idea of investigating "relativistic" effects on observing the dynamics of information; in particular, we investigate a Galilean transformation of the lattice system data. In applying this interpretation to elementary cellular automata, we demonstrate that using a moving frame of reference certainly alters the observed spatiotemporal measurements of information dynamics, yet still returns meaningful results in this context. We find that, as expected, an observer will report coherent spatiotemporal structures that are moving in their frame as information transfer, and structures that are stationary in their frame as information storage. Crucially, the extent to which the shifted frame of reference alters the results depends on whether the shift of frame retains, adds or removes relevant information regarding the source-destination interaction. Full article
(This article belongs to the Special Issue Transfer Entropy)

Review

Open Access Review The Liang-Kleeman Information Flow: Theory and Applications
Entropy 2013, 15(1), 327-360; doi:10.3390/e15010327
Received: 17 October 2012 / Revised: 22 November 2012 / Accepted: 28 December 2012 / Published: 18 January 2013
Cited by 11 | PDF Full-text (996 KB) | HTML Full-text | XML Full-text
Abstract
Information flow, or information transfer as it may be referred to, is a fundamental notion in general physics which has wide applications in scientific disciplines. Recently, a rigorous formalism has been established with respect to both deterministic and stochastic systems, with flow measures explicitly obtained. These measures possess some important properties, among which is flow or transfer asymmetry. The formalism has been validated and put to application with a variety of benchmark systems, such as the baker transformation, Hénon map, truncated Burgers-Hopf system, Langevin equation, etc. In the chaotic Burgers-Hopf system, all the transfers, save for one, are essentially zero, indicating that the processes underlying a dynamical phenomenon, albeit complex, could be simple. (Truth is simple.) In the Langevin equation case, it is found that there could be no information flowing from one certain time series to another series, though the two are highly correlated. Information flow/transfer provides a potential measure of the cause–effect relation between dynamical events, a relation usually hidden behind the correlation in a traditional sense. Full article
(This article belongs to the Special Issue Transfer Entropy)
Open Access Review The Relation between Granger Causality and Directed Information Theory: A Review
Entropy 2013, 15(1), 113-143; doi:10.3390/e15010113
Received: 14 November 2012 / Revised: 19 December 2012 / Accepted: 19 December 2012 / Published: 28 December 2012
Cited by 20 | PDF Full-text (579 KB) | HTML Full-text | XML Full-text
Abstract
This report reviews the conceptual and theoretical links between Granger causality and directed information theory. We begin with a short historical tour of Granger causality, concentrating on its closeness to information theory. The definitions of Granger causality based on prediction are recalled, and the importance of the observation set is discussed. We present the definitions based on conditional independence. The notion of instantaneous coupling is included in the definitions. The concept of Granger causality graphs is discussed. We present directed information theory from the perspective of studies of causal influences between stochastic processes. Causal conditioning appears to be the cornerstone for the relation between information theory and Granger causality. In the bivariate case, the fundamental measure is the directed information, which decomposes as the sum of the transfer entropies and a term quantifying instantaneous coupling. We show the decomposition of the mutual information into the sums of the transfer entropies and the instantaneous coupling measure, a relation known for the linear Gaussian case. We study the multivariate case, showing that the useful decomposition is blurred by instantaneous coupling. The links are further developed by studying how measures based on directed information theory naturally emerge from Granger causality inference frameworks as hypothesis testing. Full article
(This article belongs to the Special Issue Transfer Entropy)
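
The bivariate decomposition mentioned in this abstract can be written schematically as follows; the precise definition of the instantaneous term varies across formulations, so this is a simplified rendering rather than the review's exact statement.

```latex
% Mutual information between the two whole records splits into the two
% transfer entropies plus an instantaneous-coupling term.
I\!\big(X_1^T;\, Y_1^T\big) \;=\; T_{X \to Y} \;+\; T_{Y \to X} \;+\; I_{X \cdot Y}
```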

Planned Papers

The list below represents only planned manuscripts. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Type of Paper: Review
Title: Assessment of Transfer Entropy and Related Measures
Author: Dimitris Kugiumtzis
Affiliation: Department of Mathematical, Physical and Computational Sciences, Faculty of Engineering, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece; E-Mail: dkugiu@gen.auth.gr
Abstract: In the analysis of multivariate time series, the primary interest is in investigating the inter-dependencies of the observed variables, which are often assumed to represent possibly coupled systems. The measures of directed coupling or inter-dependence are based on the concept of Granger causality (X Granger-causes Y if the prediction of Y from its own past improves by also including the past of X). The transfer entropy (TE) quantifies Granger causality in terms of information, and it thus has the advantages of being model-free and able to identify nonlinear effects. Despite the recent popularity of TE in applications, its estimation is not straightforward and depends heavily on free parameters related to the state space reconstruction and the estimation of entropies. These points are discussed and different estimation schemes are assessed, suggesting the use of the k-nearest-neighbor estimate. Further, extensions of transfer entropy will be discussed, such as the use of rank vectors instead of sample vectors, termed symbolic transfer entropy (STE), and the mutual information on mixed embedding (MIME), as well as the modification of TE to take into account the presence of confounding variables, termed partial transfer entropy (PTE). These measures will be assessed as to their statistical significance and power, and examples on simulated and real data will be given.
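
One of the extensions mentioned above, symbolic transfer entropy, replaces each delay vector by its ordinal (rank) pattern and then applies a plug-in TE estimate to the resulting symbol sequences. The sketch below follows that reading; the function names and the choice m = 3 are illustrative assumptions, not the author's implementation.

```python
# Sketch of a symbolic (ordinal-pattern) transfer entropy estimate.
import numpy as np
from collections import Counter

def ordinal_symbols(x, m=3):
    # integer code of the rank pattern of each length-m window of x
    patterns = np.array([np.argsort(x[i:i + m]) for i in range(len(x) - m + 1)])
    return patterns @ (m ** np.arange(m))

def plugin_entropy(*cols):
    counts = Counter(zip(*cols))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

def symbolic_te(x, y, m=3):
    sx, sy = ordinal_symbols(x, m), ordinal_symbols(y, m)
    y_fut, y_past, x_past = sy[1:], sy[:-1], sx[:-1]
    # TE = H(Yf,Yp) + H(Yp,Xp) - H(Yp) - H(Yf,Yp,Xp), with plug-in entropies
    return (plugin_entropy(y_fut, y_past) + plugin_entropy(y_past, x_past)
            - plugin_entropy(y_past) - plugin_entropy(y_fut, y_past, x_past))
```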

Type of Paper: Article
Title: State-Dependent Functional Interactions in Neuronal Cultures Revealed By Transfer Entropy
Authors: Olav Stetter 1,2, Javier Orlandi 3, Jordi Soriano 3 and Demian Battaglia 1,2
Affiliation: 1 Max Planck Institute for Dynamics and Self-organization, Göttingen, Germany; E-Mail: demian@nld.ds.mpg.de
2 Bernstein Center for Computational Neuroscience, Göttingen, Germany
3 Universitat de Barcelona, Facultat de Fisica, Barcelona, Spain
Abstract: Structural connectivity, describing actual synaptic connections, contributes to shaping the spontaneous or induced neural activity generated by neural circuits, both in vitro and in vivo. However, the resulting dynamics is not fully constrained by structural connectivity, and different activity patterns can be generated by the same network, depending on its dynamical state. Beyond structural connectivity, actual influences between neurons in a circuit are described by directed functional connectivity, assessed by means of causal analysis tools like Granger Causality or, more recently, Transfer Entropy. Thus, structural networks with a rich repertoire of possible dynamics give rise to a multiplicity of functional networks. We focus here on a specific example of state-dependent functional connectivity, resorting to simulations of large networks of spiking neurons and to analyses of real data. First, we consider a model of a culture of dissociated neurons in vitro, undergoing spontaneous switching between bursting and non-bursting states. These dynamical regimes are associated with functional digraphs with qualitatively different topologies, only partially dictated by the underlying unchanging structural topology. Second, we identify similar state-dependency patterns in living neuronal cultures, analyzing actual high-resolution calcium imaging recordings. Interestingly, we show that, while Transfer Entropy (and even time-delayed Mutual Information) analysis provides a description of directed functional connectivity in agreement with an intuition based on dynamical systems theory, this does not hold for linear measures like Granger Causality, which are strongly biased toward the inference of exceedingly clustered functional topologies.

Type of Paper: Research Article
Title: Nonuniform Embedding Corrected Transfer Entropy as a Tool for Reliably Estimating Information Transfer in Physiological Time Series
Authors: Luca Faes 1, Giandomenico Nollo 1 and Alberto Porta 2
Affiliation: 1 Dept. of Physics and BIOtech, University of Trento, Trento, Italy
2 Dept. of Biomedical Sciences for Health, Galeazzi Orthopaedic Institute, University of Milan, Milano, Italy; E-Mail: luca.faes@unitn.it
Abstract: Transfer entropy (TE) has recently emerged as a powerful nonlinear and model-free tool to detect the direction and quantify the strength of the information flow between interacting processes. However, TE estimation in short and noisy physiological time series is complicated by a number of practical issues. In the present study we develop a multivariate TE estimation framework based on conditional entropy (CE) estimation, non-uniform embedding, and consideration of instantaneous causality. The framework is based on recognizing that TE can be interpreted as CE difference, and builds on an efficient CE estimator that compensates for the bias occurring for high dimensional conditioning vectors and follows a sequential embedding procedure whereby the conditioning vectors are formed progressively according to a criterion for CE minimization. The issue of instantaneous causality is faced allowing for the possibility of zero-lag effects in TE computation, according to two alternative procedures: if instantaneous effects are deemed as physiologically meaningful, the zero-lag term is assimilated to the lagged terms to make it causally relevant; if not, the zero-lag term is incorporated in both CE computations to obtain a compensation of its confounding effects. The resulting TE estimator, denoted as corrected TE (cTE), is first tested on simulations of linear stochastic and nonlinear deterministic systems. In particular, we consider realistic simulations of cardiovascular time series interacting according to both instantaneous and time lagged effects, and of brain source signals instantaneously mixed to reproduce volume conduction effects. Simulation results demonstrate the flexibility of the proposed framework and make clear the necessity of accounting for instantaneous causality through cTE computation. Then, representative examples of heart rate, arterial pressure and respiration variability series measured during a paced breathing protocol, and of magnetoencephalography multi-trial signals measured during a visuo-tactile cognitive experiment, are provided to illustrate the practical applicability of the framework. Results indicate that the information flow detected by means of the cTE is better interpretable, according to the known cardiovascular physiology or the expected mechanisms of multisensory integration, when instantaneous causality effects are properly taken into account.

Journal Contact

MDPI AG
Entropy Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
entropy@mdpi.com
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18