# Information Decomposition in Bivariate Systems: Theory and Application to Cardiorespiratory Dynamics


## Abstract


## 1. Introduction

## 2. Methods

#### 2.1. Information-Theoretic Preliminaries

The conditional entropy of V given W is defined as $H(V|W) = \sum_{w} p(w) H(V|W{=}w)$, where $H(V|W{=}w) = -\sum_{v} p(v|w) \log p(v|w)$ is the entropy of V measured when W = w. The conditional entropy is obtained computing the conditional probability p(v|w), which expresses the probability of observing V = v given that W = w has been observed.

The mutual information (MI) between V and W quantifies the information shared by the two variables: $I(V;W) = \sum_{v,w} p(v,w) \log \frac{p(v|w)}{p(v)}$. Since the same expression holds with the roles of V and W exchanged, the MI is symmetric: I(V;W) = I(W;V). The MI can be expressed in terms of Shannon and conditional entropies as I(V;W) = H(V) − H(V|W). Moreover, the conditional mutual information between V and W given a third variable Z quantifies the residual MI between V and W when Z is known: $I(V;W|Z) = \sum_{v,w,z} p(v,w,z) \log \frac{p(v|w,z)}{p(v|z)}$, which can be expressed in terms of conditional entropies as I(V;W|Z) = H(V|Z) − H(V|W,Z) = H(W|Z) − H(W|V,Z) = I(W;V|Z). Note that entropy and MI are often measured in bits, obtained using base-2 logarithms in the computation; in this study we use natural logarithms, so that the units are nats.
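These definitions can be checked numerically. The sketch below (plain NumPy, over a hypothetical three-variable joint distribution chosen only for illustration) computes entropy, conditional entropy, MI and conditional MI in nats and verifies the identities recalled above.

```python
import numpy as np

def H(p):
    """Shannon entropy in nats of a probability array (any shape)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Hypothetical joint distribution p(v, w, z) over three binary variables.
rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))
p /= p.sum()

# Marginals obtained by summing out the other variables
p_v, p_w, p_z = p.sum((1, 2)), p.sum((0, 2)), p.sum((0, 1))
p_vw, p_vz, p_wz = p.sum(2), p.sum(1), p.sum(0)

H_v_given_w = H(p_vw) - H(p_w)                          # H(V|W) = H(V,W) - H(W)
I_vw = H(p_v) - H_v_given_w                             # I(V;W) = H(V) - H(V|W)
I_wv = H(p_w) - (H(p_vw) - H(p_v))                      # symmetric form I(W;V)
I_vw_given_z = (H(p_vz) - H(p_z)) - (H(p) - H(p_wz))    # H(V|Z) - H(V|W,Z)
```

The symmetry I(V;W) = I(W;V) and the non-negativity of both MI terms hold for any joint distribution plugged in above.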

#### 2.2. Information Dynamics in Bivariate Systems

Let us denote as $X_n$ and $Y_n$ the univariate variables describing the present of the processes X and Y, and as $X_n^- = [X_{n-1}, X_{n-2}, \ldots]$ and $Y_n^- = [Y_{n-1}, Y_{n-2}, \ldots]$ the multivariate variables describing the past of the processes. Then, in the framework of information dynamics [19], the temporal statistical structure of the observed system is characterized through the standard information-theoretic measures recalled in Section 2.1, computed taking as arguments properly chosen combinations of the past and present of the two dynamic processes.

Let us take Y as the target process, so that the information carried by its present state is quantified by the entropy $H(Y_n)$. Then, the effect on the target of the temporal dynamics of the joint process {X,Y} is measured by the so-called Prediction Entropy (PE):

$$P_Y = I(Y_n; Z_n^-) = H(Y_n) - H(Y_n \mid Z_n^-),$$

where $Z_n^- = [X_n^-, Y_n^-]$. In an attempt to separate the statistical dependencies arising from each of the two systems, the PE can be decomposed exploiting the chain rule for mutual information [14] as:

$$P_Y = I(Y_n; Y_n^-) + I(Y_n; X_n^- \mid Y_n^-) = S_Y + T_{X \to Y},$$

where the Self Entropy (SE) $S_Y = I(Y_n; Y_n^-)$ quantifies the information storage in the target process, and the Transfer Entropy (TE) $T_{X \to Y} = I(Y_n; X_n^- \mid Y_n^-)$ quantifies the information transfer from driver to target. An alternative decomposition, obtained expanding the chain rule in the reverse order, is:

$$P_Y = I(Y_n; X_n^-) + I(Y_n; Y_n^- \mid X_n^-) = C_{X \to Y} + S_{Y|X},$$

where the Cross Entropy (CE) $C_{X \to Y} = I(Y_n; X_n^-)$ quantifies the cross information and the conditional Self Entropy (cSE) $S_{Y|X} = I(Y_n; Y_n^- \mid X_n^-)$ quantifies the internal information of the target.
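The chain rule underlying this decomposition, I(Y; X, Z) = I(Y; Z) + I(Y; X|Z) = I(Y; X) + I(Y; Z|X), can be verified numerically; the sketch below does so on an arbitrary discrete joint distribution, mirroring the two orderings in which the predictive information can be expanded.

```python
import numpy as np

def H(p):
    """Shannon entropy in nats."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(1)
p = rng.random((3, 3, 3))          # arbitrary joint p(y, x, z)
p /= p.sum()

H_y, H_x, H_z = H(p.sum((1, 2))), H(p.sum((0, 2))), H(p.sum((0, 1)))
H_yx, H_yz, H_xz = H(p.sum(2)), H(p.sum(1)), H(p.sum(0))
H_yxz = H(p)

# Predictive-information analogue and its two chain-rule expansions
I_y_xz = H_y + H_xz - H_yxz                      # I(Y; X, Z)
I_y_z = H_y + H_z - H_yz                         # "self" term I(Y; Z)
I_y_x_given_z = H_yz + H_xz - H_z - H_yxz        # "transfer" term I(Y; X | Z)
I_y_x = H_y + H_x - H_yx                         # "cross" term I(Y; X)
I_y_z_given_x = H_yx + H_xz - H_x - H_yxz        # "conditional self" term I(Y; Z | X)
```

Both expansions sum to the same joint mutual information, exactly as the SE/TE and CE/cSE decompositions of the PE do.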

#### 2.3. Properties and Theoretical Interpretation

In the graphical representation of Figure 1, causal interactions between the two processes ($X_n^- \to Y_n$ and $Y_n^- \to X_n$) are distinguished from causal interactions involving variables of the same process, which we denote as internal dynamics ($X_n^- \to X_n$ and $Y_n^- \to Y_n$); note that the links between the past of the two processes are induced by the causal interactions, i.e., $X_n^- \to Y_n$ implies $X_n^- \to Y_n^-$, and the same from Y to X. It is also worth noting that the graphical structure adopted in Figure 1 serves for the analysis of causality intended in the Granger sense [11,12], i.e., with the purpose of characterizing causal relations between the whole past of the processes and their present, without considering lag-specific interactions (a possible treatment of lag-specific causal interactions is outlined in [29]), and that this representation presupposes the absence of instantaneous dependence between the processes (i.e., $X_n \perp Y_n \mid X_n^-, Y_n^-$; a solution for incorporating zero-lag dependencies in practical analysis is outlined in [30]).

In this representation, absence of internal dynamics in the target corresponds to statistical independence between the past and the present of the target given the past of the driver ($Y_n^- \perp Y_n \mid X_n^-$, Figure 1b), and absence of causal interactions from source to target corresponds to statistical independence between the past of the driver and the present of the target given its past ($X_n^- \perp Y_n \mid Y_n^-$, Figure 1c). Then, the conditional independencies associated with absence of causal interactions can be detected using the conditional MI. However, since conditional independence does not imply independence, the (unconditioned) MI cannot be used to probe causal interactions.

Moreover, a proper cause-effect interpretation can be given to the causal interactions from X to Y only if the driver past $X_n^-$ is autonomous, i.e., only in the absence of causal interactions from Y to X (Figure 1d). Similarly, a proper cause-effect interpretation can be given to the internal dynamics in the target process Y only if $Y_n^-$ is autonomous, i.e., only in the absence of causal interactions from X to Y (Figure 1c). In these situations, meaningful measures of the magnitude of the natural causal effects related to interactions from X to Y and internal dynamics of Y can be obtained elaborating the joint distributions $p(Y_n, X_n^-)$ and $p(Y_n, Y_n^-)$ in terms of MI, i.e., computing the MIs $I(Y_n; X_n^-)$ and $I(Y_n; Y_n^-)$.

The PE ranges from zero, measured when the present of the target is independent of the past of both processes ($Y_n \perp X_n^-, Y_n^-$), to the entropy of the target process, measured when $[X_n^-, Y_n^-]$ fully predicts $Y_n$. The PE is nonzero in the presence of any combination of internal dynamics in the target system ($Y_n^- \to Y_n$) and causal interactions from source to target ($X_n^- \to Y_n$). As such, it is a useful measure of the overall predictive information about the target process reflecting the natural causal effect of $[X_n^-, Y_n^-]$ on $Y_n$, but it cannot disentangle the causal sources of statistical dependence giving rise to this predictive information.

As to the information storage, a significant SE measured as $I(Y_n; Y_n^-) > 0$ arises not only from internal dynamics in the target system ($Y_n^- \to Y_n$) but also from causal interactions from source to target; in the latter case $X_n^-$ acts as a common driver ($Y_n \leftarrow X_n^- \to Y_n^-$), creating statistical dependence between $Y_n^-$ and $Y_n$ even without the existence of a causal connection between them. For this reason, the SE cannot be related to the presence of internal dynamics in the target process. However, in the particular case of absent causal interactions from source to target (Figure 1c), the SE not only reflects the internal dynamics of the target, but also quantifies these dynamics as occurring from natural causal effects. On the contrary, the cSE reflects the internal information in the target process because it is always zero in the absence of internal dynamics; consequently, finding it higher than zero means that autodependency effects take place in the target process ($Y_n^- \to Y_n$). However, the internal information measured by the cSE does not reflect natural causal effects, because it is based on conditional rather than simple joint probabilities. To sum up, the main distinction between SE and cSE can be summarized by stating that a system without internal dynamics does not exhibit internal information, but may exhibit information storage (see Figure 1b for an example).

As to the cross information, the CE cannot distinguish the statistical dependence created by causal interactions from driver to target ($X_n^- \to Y_n$) from that resulting from the contemporaneous presence of internal dynamics in the target and causal interactions from target to driver (common driver effect $X_n^- \leftarrow Y_n^- \to Y_n$). Therefore, a significant CE measured as $I(Y_n; X_n^-) > 0$ cannot be taken as an indication of causal interaction from X to Y. However, in the presence of unidirectional interactions from driver to target, the CE is a proper measure of the overall natural causal effects subsuming these interactions. On the contrary, the TE reflects the information transfer towards the target process because it is exactly zero in the absence of causal interactions from driver to target; consequently, finding it strictly positive means that the driver is causing the target ($X_n^- \to Y_n$). To summarize the differences between CE and TE, we can thus state that in the absence of any causal interaction from driver to target there is no information transfer, but there can be cross information (see Figure 1c for an example).

Care should be taken in the presence of deterministic effects: when the present of the target process is an exact function of its own past, we always measure $S_Y = P_Y$ and $T_{X \to Y} = 0$, even in the presence of substantial causal interactions from source to target. Similarly, when the present of the target process is an exact function of the past of the driver, we always measure $C_{X \to Y} = P_Y$ and $S_{Y|X} = 0$, even in the presence of substantial internal dynamics in the target. In other words, full predictability of the target, either from its own past or from the past of the driver, entails a null value for the conditional MI and thus prevents any possibility of measuring additional predictability. This is the reason for the absence of the “only if” condition in the relation between conditional independence and null conditional MI reported in Table 1 for the TE and the cSE. To put it simply, while there is no information transfer without causal interactions, and no internal information without internal dynamics, the reverse does not hold. Examples of causal interactions not reflected by information transfer, and of internal dynamics not reflected by internal information, are reported in Figure 2. To sum up, the measures of information transfer and internal information serve as useful proxies for causal interactions and internal dynamics in stochastic processes, while their causal interpretation should proceed more carefully in the presence of deterministic effects.

#### 2.4. Computation of Information Dynamics

In practical analysis, the measures of information dynamics are computed approximating the infinite-dimensional variables $X_n^-$ and $Y_n^-$ appearing in the definitions (1),(3),(4),(6),(7) with the l-dimensional variables $X_n^l = [X_{n-1}, X_{n-2}, \ldots, X_{n-l}]$ and $Y_n^l = [Y_{n-1}, Y_{n-2}, \ldots, Y_{n-l}]$, and then applying (8) and (9) to compute entropy and conditional entropy so as to obtain:

$$P_Y = \frac{1}{2}\ln\frac{\sigma(Y_n)}{\sigma(Y_n|X_n^l, Y_n^l)}, \quad S_Y = \frac{1}{2}\ln\frac{\sigma(Y_n)}{\sigma(Y_n|Y_n^l)}, \quad T_{X\to Y} = \frac{1}{2}\ln\frac{\sigma(Y_n|Y_n^l)}{\sigma(Y_n|X_n^l, Y_n^l)},$$

$$C_{X\to Y} = \frac{1}{2}\ln\frac{\sigma(Y_n)}{\sigma(Y_n|X_n^l)}, \quad S_{Y|X} = \frac{1}{2}\ln\frac{\sigma(Y_n|X_n^l)}{\sigma(Y_n|X_n^l, Y_n^l)}.$$

In these expressions, the partial variances of $Y_n$ given the various combinations of the past of X and Y appear. Since any partial variance can be computed using (10), the problem reduces to computing the relevant covariance and cross-covariance matrices between the present and past variables of the two processes. In general, these matrices contain as scalar elements the covariances between pairs of time-lagged variables of the processes X and Y, which in turn appear as elements of the autocovariance of the bivariate process S = {X,Y}, defined at each lag k ≥ 0 as $\Gamma_k = \mathrm{E}[S_n S_{n-k}^T]$. In the following we review the procedure to derive the autocovariance of bivariate autoregressive (AR) processes from the parametric representation of these processes [37].

Consider the bivariate AR process

$$S_n = \sum_{k=1}^{p} \mathbf{A}_k S_{n-k} + \epsilon_n,$$

where $S_n = [X_n\ Y_n]^T$ includes the present variables of the joint process, the $\mathbf{A}_k$ are 2×2 coefficient matrices and $\epsilon_n$ is a noise process with diagonal covariance matrix Λ. The autocovariance of the process (12) is related to the AR parameters via the well-known Yule-Walker equations:

$$\Gamma_k = \sum_{i=1}^{p} \mathbf{A}_i \Gamma_{k-i} + \delta_{k0}\,\Lambda,$$

where $\delta_{k0}$ is the Kronecker delta. In order to solve Equation (13) for $\Gamma_k$, k = 0, 1, …, p−1, we first express (12) in the compact form ${S}_{n}^{p}={\mathbf{A}}^{p}{S}_{n-1}^{p}+{\epsilon}_{n}^{p}$, where:

$$\mathbf{A}^p = \begin{bmatrix} \mathbf{A}_1 & \mathbf{A}_2 & \cdots & \mathbf{A}_p \\ \mathbf{I}_2 & \mathbf{0} & \cdots & \mathbf{0} \\ \vdots & \ddots & & \vdots \\ \mathbf{0} & \cdots & \mathbf{I}_2 & \mathbf{0} \end{bmatrix}, \quad S_n^p = \begin{bmatrix} S_n \\ \vdots \\ S_{n-p+1} \end{bmatrix}, \quad \epsilon_n^p = \begin{bmatrix} \epsilon_n \\ \mathbf{0} \end{bmatrix}.$$

With this representation, the covariance matrix ${\Gamma}_{0}^{p}$ of $S_n^p$ satisfies the discrete-time Lyapunov equation $\Gamma_0^p = \mathbf{A}^p\, \Gamma_0^p\, (\mathbf{A}^p)^T + \Lambda^p$ (where $\Lambda^p$ denotes the covariance of ${\epsilon}_{n}^{p}$). The Lyapunov equation can be solved for ${\Gamma}_{0}^{p}$, thus yielding the autocovariance matrices $\Gamma_0, \ldots, \Gamma_{p-1}$. Finally, the autocovariance can be calculated recursively for any lag k ≥ p by applying (13). This shows how the autocovariance sequence can be computed up to arbitrarily high lags starting from the parameters of the bivariate AR representation of the process.
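A minimal sketch of this computation, assuming a bivariate VAR(p) in the notation above: the companion-form covariance is obtained by solving the Lyapunov equation via Kronecker vectorization, and the sequence is then extended with the Yule-Walker recursion (the example coefficients are invented for illustration).

```python
import numpy as np

def var_autocov(A, Lam, lmax):
    """Autocovariance Gamma_0..Gamma_lmax of S_n = sum_k A[k-1] S_{n-k} + eps_n."""
    p, M = len(A), A[0].shape[0]
    n = M * p
    Ap = np.zeros((n, n))            # companion form A^p
    Ap[:M, :] = np.hstack(A)
    Ap[M:, :-M] = np.eye(n - M)
    Lp = np.zeros((n, n))            # Lambda^p
    Lp[:M, :M] = Lam
    # Lyapunov equation G0p = Ap G0p Ap^T + Lp, solved by Kronecker vectorization
    G0p = np.linalg.solve(np.eye(n * n) - np.kron(Ap, Ap), Lp.reshape(-1)).reshape(n, n)
    G = [G0p[:M, i * M:(i + 1) * M] for i in range(p)]   # Gamma_0 .. Gamma_{p-1}
    for k in range(p, lmax + 1):     # Yule-Walker recursion for k >= p
        G.append(sum(A[i] @ G[k - 1 - i] for i in range(p)))
    return G

# Illustrative stable bivariate VAR(2)
A1 = np.array([[0.5, 0.0], [0.3, 0.5]])
A2 = np.array([[-0.2, 0.0], [0.0, -0.2]])
G = var_autocov([A1, A2], np.eye(2), lmax=12)
```

The returned sequence satisfies the Yule-Walker relation $\Gamma_1 = \mathbf{A}_1\Gamma_0 + \mathbf{A}_2\Gamma_1^T$ and decays with the lag, consistently with the stability of the chosen coefficients.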

In summary, the computation proceeds as follows: (i) given the AR parameters ($\mathbf{A}_1, \ldots, \mathbf{A}_p$, Λ), compute the autocovariance $\Gamma_k$ for any lag k = 0,…,l solving the Lyapunov equation (16); (ii) picking the proper elements from the $\Gamma_k$, build the covariance matrices $\Sigma ({X}_{n}{}^{l},{Y}_{n}{}^{l})$, of dimension 2l × 2l, and $\Sigma ({Y}_{n};{X}_{n}{}^{l},{Y}_{n}{}^{l})$, of dimension 1×2l, and compute the variance $\sigma(Y_n)$ as the element (2,2) of $\Gamma_0$; (iii) use Equation (10) to find the partial variance $\sigma ({Y}_{n}|{X}_{n}{}^{l},{Y}_{n}{}^{l})$; (iv) use the first equation in (11) to compute the PE.
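Steps (i)–(iv) can be sketched end-to-end for the linear-Gaussian case (the VAR coefficients below are illustrative, not from the text): the measures are obtained as half-log ratios of partial variances, and the two decompositions of the PE then hold by construction.

```python
import numpy as np

def var_autocov(A, Lam, lmax):
    """Autocovariance of a VAR(p) via the Lyapunov equation + Yule-Walker recursion."""
    p, M = len(A), A[0].shape[0]
    n = M * p
    Ap = np.zeros((n, n)); Ap[:M, :] = np.hstack(A); Ap[M:, :-M] = np.eye(n - M)
    Lp = np.zeros((n, n)); Lp[:M, :M] = Lam
    G0p = np.linalg.solve(np.eye(n * n) - np.kron(Ap, Ap), Lp.reshape(-1)).reshape(n, n)
    G = [G0p[:M, i * M:(i + 1) * M] for i in range(p)]
    for k in range(p, lmax + 1):
        G.append(sum(A[i] @ G[k - 1 - i] for i in range(p)))
    return G

def lagged_cov(G, items):
    """Covariance matrix of variables given as (component, lag) pairs, from Gamma_k."""
    S = np.empty((len(items), len(items)))
    for a, (c1, k1) in enumerate(items):
        for b, (c2, k2) in enumerate(items):
            S[a, b] = G[k2 - k1][c1, c2] if k2 >= k1 else G[k1 - k2][c2, c1]
    return S

def partial_var(S):
    """Variance of the first variable given all the others (Equation (10) style)."""
    if S.shape[0] == 1:
        return S[0, 0]
    return S[0, 0] - S[0, 1:] @ np.linalg.solve(S[1:, 1:], S[0, 1:])

# Illustrative VAR(2): X autonomous, coupling X -> Y with weight 0.4
A1 = np.array([[0.5, 0.0], [0.4, 0.5]])
A2 = np.array([[-0.2, 0.0], [0.0, 0.0]])
l = 10
G = var_autocov([A1, A2], np.eye(2), lmax=l)

Yn = [(1, 0)]                                # Y is component 1
Xp = [(0, k) for k in range(1, l + 1)]       # X_n^l
Yp = [(1, k) for k in range(1, l + 1)]       # Y_n^l
s = partial_var(lagged_cov(G, Yn))                # sigma(Y_n)
s_y = partial_var(lagged_cov(G, Yn + Yp))         # sigma(Y_n | Y^l)
s_x = partial_var(lagged_cov(G, Yn + Xp))         # sigma(Y_n | X^l)
s_xy = partial_var(lagged_cov(G, Yn + Xp + Yp))   # sigma(Y_n | X^l, Y^l)

PE = 0.5 * np.log(s / s_xy)
SE = 0.5 * np.log(s / s_y)
TE = 0.5 * np.log(s_y / s_xy)
CE = 0.5 * np.log(s / s_x)
cSE = 0.5 * np.log(s_x / s_xy)
```

With this coupled example both the storage and the transfer terms are strictly positive, and PE = SE + TE = CE + cSE up to numerical precision.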

A practical issue is the choice of the number of lags l. As a rule of thumb, given that the autocovariance of a vector AR process decays exponentially with the lag, with a rate of decay depending on the modulus of the largest eigenvalue of $\mathbf{A}^p$, ρ(A), it has been suggested to compute the autocovariance up to a lag l such that ρ(A)$^l$ is smaller than a predefined numerical tolerance [37]. We have found that the computation of very long autocovariance sequences is not necessary for the purpose of evaluating information dynamics, because all measures stabilize to constant values already at small lags (typically l = 10), even for reasonably high values of ρ(A) [39–41].

## 3. Simulation Study

#### 3.1. Linear AR Bivariate Process

Consider the bivariate AR process of Equation (17), where $\eta_n$ and $\xi_n$ are independent Gaussian white noise processes with zero mean and unit variance. The causal statistical structure of the process (17) is determined by the autodependency effects in X and Y, modulated by the parameters a and b, and by the causal interactions between X and Y, modulated by the parameters c and d. In this study we considered the situations in which one of these parameters is forced to zero, while the other parameters are left free to vary in the range 0–0.5. These situations reflect four scenarios characterized by absence of internal dynamics in the process X (a = 0) or in the process Y (b = 0), and by absence of causal interactions from X to Y (c = 0) or from Y to X (d = 0). The causal structures resulting in the four scenarios are conveniently represented in Figure 3, both in the form of time series graphs showing all time-lagged effects and in a more condensed form reporting only the causal relations between the past and present of the two processes.
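A sketch of the c = 0 scenario, assuming a first-order form of the process (the exact lag structure of (17) is not reproduced here): the linear-Gaussian TE is estimated from simulated series by comparing residual variances of two least-squares regressions, and it is close to zero when c = 0 while clearly positive when c > 0.

```python
import numpy as np

def simulate(a, b, c, d, N, seed):
    """X_n = a X_{n-1} + d Y_{n-1} + eta_n ; Y_n = b Y_{n-1} + c X_{n-1} + xi_n
    (an assumed first-order form of process (17))."""
    rng = np.random.default_rng(seed)
    eta, xi = rng.standard_normal(N), rng.standard_normal(N)
    x, y = np.zeros(N), np.zeros(N)
    for n in range(1, N):
        x[n] = a * x[n - 1] + d * y[n - 1] + eta[n]
        y[n] = b * y[n - 1] + c * x[n - 1] + xi[n]
    return x[500:], y[500:]          # discard the initial transient

def resid_var(target, regressors):
    """Residual variance of an ordinary least-squares regression."""
    Z = np.column_stack(regressors)
    beta, *_ = np.linalg.lstsq(Z, target, rcond=None)
    return np.var(target - Z @ beta)

def transfer_entropy(x, y, l=5):
    """Linear-Gaussian TE estimate 0.5 * ln[sigma(Y_n|Y^l) / sigma(Y_n|X^l, Y^l)]."""
    Yn = y[l:]
    Ypast = [y[l - k:-k] for k in range(1, l + 1)]
    Xpast = [x[l - k:-k] for k in range(1, l + 1)]
    return 0.5 * np.log(resid_var(Yn, Ypast) / resid_var(Yn, Ypast + Xpast))

te_coupled = transfer_entropy(*simulate(0.5, 0.5, 0.4, 0.0, 20000, seed=1))  # c = 0.4
te_null = transfer_entropy(*simulate(0.5, 0.5, 0.0, 0.0, 20000, seed=2))     # c = 0
```

With long series the estimate under c = 0 is only affected by a small finite-sample bias, while the coupled case yields a clearly positive value.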

The results confirm that the internal information and the information transfer reflect, respectively, the presence of internal dynamics in the target and of causal interactions from driver to target ($S_{Y|X} = 0$ if and only if b = 0, and $T_{X \to Y} = 0$ if and only if c = 0). The simulation also confirms that the information storage quantified by the SE reflects the presence of internal dynamics in Y but also that of causal interactions from X to Y (e.g., in Figure 4b we find $S_Y > 0$ even with b = 0), and that the cross information quantified by the CE reflects causal interactions from X to Y but also common effects of the past of Y on its present and on the present of X (e.g., in Figure 4c we find $C_{X \to Y} > 0$ even with c = 0).

On the other hand, the TE may vary not only as a function of the strength of the causal interactions from driver to target (parameter c), but also with changes in the internal dynamics of the driver (e.g., in Figure 4 $T_{X \to Y}$ increases with a); this reflects the fact that $T_{X \to Y}$ measures the relation between $X_n^-$ and $Y_n$, and thus is affected both by the causal effect $X_n^- \to Y_n$ and by dynamical changes in $X_n^-$. Similarly, the cSE may vary not only as a function of the strength of the internal dynamics in the target process (parameter b), but also with changes in the coupling from target to driver (e.g., in Figure 4a,c $S_{Y|X}$ decreases as d increases); this reflects the fact that $S_{Y|X}$ measures the relation between $Y_n^-$ and $Y_n$, and thus is affected both by the causal effect $Y_n^- \to Y_n$ and by changes in the dynamical interaction between $Y_n^-$ and $X_n$. These findings confirm previous results indicating that the TE is sensitive to internal changes in the individual system components [15], and extend them with the indication that the cSE is sensitive to the connectivity between components. Nevertheless, as a reassuring result we observe that the TE and the cSE keep separate, to some extent, the analysis of internal dynamics in the target and of causal interactions from source to target, since $T_{X \to Y}$ is not affected by the internal dynamics of Y and $S_{Y|X}$ is not affected by the causal effects from X to Y. Moreover, we find that whenever $T_{X \to Y}$ was stable the parameter c was unvaried, and whenever $S_{Y|X}$ was stable the parameter b was unvaried. This suggests that observing unchanged information transfer or unchanged internal information across conditions can be used to indicate, respectively, that the causal interactions from driver to target did not vary, or that the internal dynamics in the target did not vary. Moreover, in our examples all variations observed in the TE and the cSE were monotonic functions of the relevant parameter. In general, however, this result has to be taken with caution, since it has been shown that the monotonic behavior may be lost in conditions close to determinism [15,42].

#### 3.2. Simulated Cardiovascular Dynamics

Consider now a simulation reproducing cardiorespiratory dynamics, where $\eta_n$ and $\xi_n$ are independent Gaussian white noises with zero mean and unit variance. The autodependency effects are set to generate autonomous oscillations in the two processes at the frequencies typical of cardiorespiratory variability. This was obtained placing pairs of complex-conjugate poles, of modulus ρ and phase 2πf, in the complex plane representation of the processes. Specifically, very low frequency (VLF) and low frequency (LF) oscillations are obtained for the simulated HPV setting poles with $\rho_{VLF}$ = 0.2, $f_{VLF}$ = 0.03 and $\rho_{LF}$ = 0.8, $f_{LF}$ = 0.1 for the process HP, and high frequency (HF) oscillations are obtained for the simulated RV setting poles with $\rho_{HF}$ = 0.9, $f_{HF}$ = 0.3 for the process R. The AR coefficients resulting from this setting are $a_1$ = –0.556, $a_2$ = –0.81, $b_1$ = 1.687, $b_2$ = –1.189, $b_3$ = 0.303, $b_4$ = –0.026. Then, causal interactions are set from R to HP at lags 0 and 1, simulating respectively fast (within-beat) and one-beat delayed coupling from RV to HPV; this simulated cardiorespiratory coupling was weighted by the parameter c.
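The mapping from pole placement to AR coefficients can be reproduced directly: each complex-conjugate pole pair with modulus ρ and frequency f contributes a factor $1 - 2\rho\cos(2\pi f)z^{-1} + \rho^2 z^{-2}$ to the AR characteristic polynomial, and expanding the product recovers the coefficients listed above.

```python
import numpy as np

def ar_from_poles(pole_pairs):
    """AR coefficients of a process whose poles are the given (rho, f) conjugate pairs."""
    poly = np.array([1.0])
    for rho, f in pole_pairs:
        factor = np.array([1.0, -2 * rho * np.cos(2 * np.pi * f), rho ** 2])
        poly = np.convolve(poly, factor)      # multiply the characteristic factors
    return -poly[1:]                          # AR coefficients are the negated tail

a = ar_from_poles([(0.9, 0.3)])                  # simulated respiration R
b = ar_from_poles([(0.2, 0.03), (0.8, 0.1)])     # simulated heart period HP
# Rounded to three decimals, a and b match the coefficients reported in the text.
```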

In a first simulation, we increased $\rho_{VLF}$ and $\rho_{LF}$ proportionally to a parameter b, to simulate a rise of the VLF and LF oscillations of HPV, and simultaneously decreased the parameter c to simulate a progressive weakening of the cardiorespiratory coupling. Figure 5a illustrates how these changes in the parameters were reflected by the measures of information dynamics composing the predictive information about the process HP. We see that the information storage measured by the SE is always significant, as it measures a statistical dependence between $HP_n^-$ and $HP_n$ that mixes together the causal interactions from RV to HPV (common driver effect $HP_n^- \leftarrow R_n^- \to HP_n$, prevalent at low values of b) and the internal dynamics of HPV (direct effect $HP_n^- \to HP_n$, prevalent at high values of b). As a consequence, $S_{HP}$ does not exhibit a monotonic behavior as the parameter b varies. On the contrary, the internal information measured by the cSE reflects only the strength of the internal dynamics in the simulated HPV, so that $S_{HP|R}$ is zero with b = 0 and increases monotonically with b. Moving to the information transfer, we see that the TE $T_{R \to HP}$ decreases monotonically with increasing b, assuming its highest value at b = 0 when the simulated cardiorespiratory coupling is maximal, and reaching zero at b = 1 when the cardiorespiratory coupling vanishes. Nevertheless, in this simulation without causal interactions from target to driver, the CE also reflected well the variations of the cardiorespiratory coupling, with values of $C_{R \to HP}$ encompassing a wider range of variation, from $C_{R \to HP} = P_{HP}$ measured at b = 0 to $C_{R \to HP} = 0$ measured at b = 1.

In a second simulation, we kept the cardiorespiratory coupling and the internal dynamics of HP unchanged and simulated a decrease in the respiratory rate, lowering $f_{HF}$ progressively from 0.3 Hz to 0.1 Hz (Figure 5b). The information storage measured by $S_{HP}$ increased substantially at decreasing $f_{HF}$, reflecting the progressive entrainment of the LF and HF oscillations of HPV that makes the process HP more predictable. This increasing storage was due to the common driver effect $HP_n^- \leftarrow R_n^- \to HP_n$ rather than to the direct effect $HP_n^- \to HP_n$, because with the imposed variations in the respiration frequency only the internal dynamics of the driver process R were altered. As a consequence, the internal information measured by the cSE remained constant at varying $f_{HF}$, correctly reflecting the fact that the internal dynamics of the target process HP were kept unchanged. The information transfer measured by the TE showed a decrease with the respiratory frequency which, though slight, is not compatible with the unaltered cardiorespiratory coupling; this result reflects a similar situation shown in Section 3.1, where in some circumstances the TE was found to vary with the internal dynamics of the driving process. The CE showed the opposite behavior, i.e., it increased substantially at decreasing $f_{HF}$; this behavior can be explained more reasonably, in terms of natural causal effects from the autonomous driver process R to the target process HP, considering that the enhanced internal dynamics of R are measured through a higher cross information.

## 4. Application to Cardiorespiratory Variability

#### 4.1. Experimental Protocols and Data Analysis

In both protocols, the respiratory value, $R_n$, was taken at the onset of the n-th cardiac interval, $HP_n$. In accordance with this convention, instantaneous (i.e., non-delayed) effects from $R_n$ to $HP_n$ were allowed in the analysis of information dynamics. For each protocol, synchronous sequences of N beats were selected in each condition according to the guidelines of short-term cardiovascular variability analysis [45] (N = 300 for HUT and N = 256 for PB). The sequences were linearly detrended and reduced to zero mean. Then, the measures of information dynamics were computed as outlined in Section 2.4. Specifically, a bivariate AR model was fitted on each pair of series using least-squares estimation, optimizing the model order p by the Bayesian Information Criterion applied to the regression of $HP_n$ on $\{HP_{n-1}, \ldots, HP_{n-p}, R_n, R_{n-1}, \ldots, R_{n-p}\}$ [46]; then, the estimated model parameters were used to compute the autocovariance sequence of the bivariate process, from which the PE, SE, TE, CE and cSE were estimated using l = 20 past lags to approximate the past history of the process.
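A sketch of this order selection step on synthetic series (the data and the generating order p = 2 are invented for illustration): for each candidate p, $HP_n$ is regressed on its own past and on the present and past of R, and the BIC trades off the log residual variance against the number of coefficients.

```python
import numpy as np

# Synthetic "heart period" and "respiration" series with true order 2 (illustrative).
rng = np.random.default_rng(7)
N = 300
r = rng.standard_normal(N + 100)
hp = np.zeros(N + 100)
for n in range(2, N + 100):
    hp[n] = (0.6 * hp[n - 1] - 0.3 * hp[n - 2]
             + 0.5 * r[n] + 0.3 * r[n - 1] + 0.1 * rng.standard_normal())
hp, r = hp[100:], r[100:]    # discard the transient, keep N samples

def bic(p):
    """BIC of the regression of HP_n on {HP_{n-1..n-p}, R_n, R_{n-1..n-p}}."""
    y = hp[p:]
    cols = [hp[p - k:len(hp) - k] for k in range(1, p + 1)]   # HP_{n-1} .. HP_{n-p}
    cols += [r[p - k:len(r) - k] for k in range(0, p + 1)]    # R_n, R_{n-1} .. R_{n-p}
    Z = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    s2 = np.mean((y - Z @ beta) ** 2)                         # residual variance
    return len(y) * np.log(s2) + Z.shape[1] * np.log(len(y))  # fit + complexity penalty

p_opt = min(range(1, 9), key=bic)
```

Note that the regressor set includes the zero-lag term $R_n$, consistently with the convention on instantaneous effects adopted above.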

#### 4.2. Results and Discussion

These results closely resemble those of the simulation of Section 3.2, where similar increases of $S_{HP}$ and of $C_{R \to HP}$ were measured simulating a decrease in the respiratory frequency (Figure 5b). Therefore, we hypothesize that the higher predictive information about HPV induced by slow PB may not be the result of stronger causal interactions from RV to HPV, or of stronger internal dynamics of HPV. Rather, this higher predictive information could be due to the progressive entrainment of the typical LF and HF oscillations of HPV resulting from the decrease of the breathing frequency, which is reflected by an increased information storage using the classical entropy decomposition based on SE and TE, and by an increased cross information using the alternative decomposition based on CE and cSE. Such an entrainment, which is supposed to enhance the oscillatory characteristics of HPV, might also contribute to strengthening the coupling of the cardiac and respiratory oscillators, which has been clearly documented using phase dynamic models of cardiorespiratory interactions [7,8,51] and was confirmed also by continuously slowing the frequency of paced breathing [7,52]. According to these interpretations, the same physiological phenomenon, i.e., the increased respiratory sinus arrhythmia observed during paced breathing at slow breathing rates, may be seen in terms of an increased coupling function from the respiratory to the cardiac oscillator using phase dynamics, and in terms of an enhanced information storage in the cardiac system induced by alterations of the respiratory driver using information dynamics. This latter interpretation confirms on physiological data our theoretical result indicating that, contrary to some intuitive belief, the information storage reflects not only the internal dynamics of the target process but also the causal interactions from driver to target. Finally, the interpretation of these results should consider that a possible role of latent variables cannot be excluded. Indeed, the present analysis did not account for the baroreflex control of heart rate. Since R is exogenous for HP, possible modifications of HP in response to arterial pressure changes are likely to inflate the terms describing information storage and internal information.

## 5. Discussion

## Author Contributions

## Conflicts of Interest

**PACS Codes:** 05.45.Tp; 87.19.lo; 02.50.Sk; 89.75.-k; 87.19.ug

## References

1. Cohen, M.A.; Taylor, J.A. Short-term cardiovascular oscillations in man: Measuring and modelling the physiologies. J. Physiol. **2002**, 542, 669–683.
2. Berntson, G.G.; Cacioppo, J.T.; Quigley, K.S. Respiratory sinus arrhythmia: Autonomic origins, physiological mechanisms, and psychophysiological implications. Psychophysiology **1993**, 30, 183–196.
3. Valdes-Sosa, P.A.; Roebroeck, A.; Daunizeau, J.; Friston, K. Effective connectivity: Influence, causality and biophysical modeling. Neuroimage **2011**, 58, 339–361.
4. Pearl, J. Causality: Models, Reasoning and Inference; Cambridge University Press: Cambridge, UK, 2000.
5. Friston, K.J.; Harrison, L.; Penny, W. Dynamic causal modelling. Neuroimage **2003**, 19, 1273–1302.
6. Porta, A.; Bassani, T.; Bari, V.; Tobaldini, E.; Takahashi, A.C.M.; Catai, A.M.; Montano, N. Model-based assessment of baroreflex and cardiopulmonary couplings during graded head-up tilt. Comput. Biol. Med. **2012**, 42, 298–305.
7. Stankovski, T.; Duggento, A.; McClintock, P.V.E.; Stefanovska, A. Inference of time-evolving coupled dynamical systems in the presence of noise. Phys. Rev. Lett. **2012**, 109, 024101.
8. Iatsenko, D.; Bernjak, A.; Stankovski, T.; Shiogai, Y.; Owen-Lynch, P.J.; Clarkson, P.B.M.; McClintock, P.V.E.; Stefanovska, A. Evolution of cardiorespiratory interactions with age. Phil. Trans. R. Soc. A **2013**, 371, 20110622.
9. Chicharro, D.; Panzeri, S. Algorithms of causal inference for the analysis of effective connectivity among brain regions. Front. Neuroinform. **2014**, 8.
10. Wiener, N. The Theory of Prediction; McGraw-Hill: New York, NY, USA, 1956.
11. Granger, C.W.J. Economic processes involving feedback. Inf. Control **1963**, 6, 28–48.
12. Granger, C.W.J. Testing for causality: A personal viewpoint. J. Econ. Dyn. Control **1980**, 2, 329–352.
13. Porta, A.; Faes, L. Assessing causality in brain dynamics and cardiovascular control. Phil. Trans. R. Soc. A **2013**, 371, 20120517.
14. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: New York, NY, USA, 2006.
15. Chicharro, D.; Ledberg, A. Framework to study dynamic dependencies in networks of interacting processes. Phys. Rev. E **2012**, 86, 041901.
16. Faes, L.; Porta, A. Conditional entropy-based evaluation of information dynamics in physiological systems. In Directed Information Measures in Neuroscience; Vicente, R., Wibral, M., Lizier, J.T., Eds.; Springer-Verlag: Berlin, Germany, 2014; pp. 61–86.
17. Lizier, J.T.; Prokopenko, M.; Zomaya, A.Y. Local measures of information storage in complex distributed computation. Inf. Sci. **2012**, 208, 39–54.
18. Schreiber, T. Measuring information transfer. Phys. Rev. Lett. **2000**, 85, 461–464.
19. Lizier, J.T. The Local Information Dynamics of Distributed Computation in Complex Systems; Springer: Berlin/Heidelberg, Germany, 2013.
20. Wibral, M.; Lizier, J.T.; Vogler, S.; Priesemann, V.; Galuske, R. Local active information storage as a tool to understand distributed neural information processing. Front. Neuroinform. **2014**, 8, 1.
21. Faes, L.; Nollo, G.; Jurysta, F.; Marinazzo, D. Information dynamics of brain-heart physiological networks during sleep. New J. Phys. **2014**, 16, 105005.
22. Faes, L.; Porta, A.; Rossato, G.; Adami, A.; Tonon, D.; Corica, A.; Nollo, G. Investigating the mechanisms of cardiovascular and cerebrovascular regulation in orthostatic syncope through an information decomposition strategy. Auton. Neurosci. **2013**, 178, 76–82.
23. Lizier, J.T.; Pritam, S.; Prokopenko, M. Information dynamics in small-world Boolean networks. Artif. Life **2011**, 17, 293–314.
24. Lizier, J.T.; Prokopenko, M. Differentiating information transfer and causal effect. Eur. Phys. J. B **2010**, 73, 605–615.
25. Wibral, M.; Vicente, R.; Lindner, M. Transfer entropy in neuroscience. In Directed Information Measures in Neuroscience; Vicente, R., Wibral, M., Lizier, J.T., Eds.; Springer-Verlag: Berlin, Germany, 2014; pp. 3–36.
26. Chicharro, D.; Ledberg, A. When two become one: The limits of causality analysis of brain dynamics. PLoS ONE **2012**, 7, e32466.
27. Dahlhaus, R. Graphical interaction models for multivariate time series. Metrika **2000**, 51, 157–172.
28. Runge, J.; Heitzig, J.; Petoukhov, V.; Kurths, J. Escaping the curse of dimensionality in estimating multivariate transfer entropy. Phys. Rev. Lett. **2012**, 108, 258701.
29. Faes, L.; Marinazzo, D.; Montalto, A.; Nollo, G. Lag-specific transfer entropy as a tool to assess cardiovascular and cardiorespiratory information transfer. IEEE Trans. Biomed. Eng. **2014**, 61, 2556–2568.
30. Faes, L.; Nollo, G.; Porta, A. Compensated transfer entropy as a tool for reliably estimating information transfer in physiological time series. Entropy **2013**, 15, 198–219.
31. Runge, J.; Heitzig, J.; Marwan, N.; Kurths, J. Quantifying causal coupling strength: A lag-specific measure for multivariate time series related to transfer entropy. Phys. Rev. E **2012**, 86, 061121.
32. Vlachos, I.; Kugiumtzis, D. Nonuniform state-space reconstruction and coupling detection. Phys. Rev. E **2010**, 82, 016207.
33. Kraskov, A.; Stogbauer, H.; Grassberger, P. Estimating mutual information. Phys. Rev. E **2004**, 69, 066138.
34. Faes, L.; Nollo, G.; Porta, A. Information-based detection of nonlinear Granger causality in multivariate processes via a nonuniform embedding technique. Phys. Rev. E **2011**, 83, 051112.
35. Porta, A.; Faes, L.; Bari, V.; Marchi, A.; Bassani, T.; Nollo, G.; Perseguini, N.M.; Milan, J.; Minatel, V.; Borghi-Silva, A.; Takahashi, A.C.M.; Catai, A.M. Effect of age on complexity and causality of the cardiovascular control: Comparison between model-based and model-free approaches. PLoS ONE **2014**, 9, e89463.
36. Barnett, L.; Barrett, A.B.; Seth, A.K. Granger causality and transfer entropy are equivalent for Gaussian variables. Phys. Rev. Lett. **2009**, 103, 238701.
37. Barnett, L.; Seth, A.K. The MVGC multivariate Granger causality toolbox: A new approach to Granger-causal inference. J. Neurosci. Methods **2014**, 223, 50–68.
38. Barrett, A.B.; Barnett, L.; Seth, A.K. Multivariate Granger causality and generalized variance. Phys. Rev. E **2010**, 81, 041907.
39. Faes, L.; Montalto, A.; Nollo, G.; Marinazzo, D. Information decomposition of short-term cardiovascular and cardiorespiratory variability. In Proceedings of the 2013 Computing in Cardiology Conference (CinC), Zaragoza, Spain, 22–25 September 2013; pp. 113–116.
40. Faes, L.; Kugiumtzis, D.; Nollo, G.; Jurysta, F.; Marinazzo, D. Estimating the decomposition of predictive information in multivariate systems. Phys. Rev. E **2014**, submitted for publication.
41. Faes, L.; Widjaja, D.; van Huffel, S.; Nollo, G. Investigating cardiac and respiratory determinants of heart rate variability in an information-theoretic framework. In Proceedings of the 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Chicago, IL, USA, 26–30 August 2014; pp. 6020–6023.
42. Kaiser, A.; Schreiber, T. Information transfer in continuous processes. Physica D **2002**, 166, 43–62.
43. Faes, L.; Nollo, G.; Porta, A. Information domain approach to the investigation of cardio-vascular, cardio-pulmonary, and vasculo-pulmonary causal couplings. Front. Physiol. **2011**, 2, 1–13.
44. Porta, A.; Bassani, T.; Bari, V.; Pinna, G.D.; Maestri, R.; Guzzetti, S. Accounting for respiration is necessary to reliably infer Granger causality from cardiovascular variability series. IEEE Trans. Biomed. Eng. **2012**, 59, 832–841.
45. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Heart rate variability: Standards of measurement, physiological interpretation, and clinical use. Eur. Heart J. **1996**, 17, 354–381.
46. Faes, L.; Erla, S.; Nollo, G. Measuring connectivity in linear multivariate processes: Definitions, interpretation, and practical analysis. Comput. Math. Methods Med. 140513.
47. Porta, A.; Guzzetti, S.; Montano, N.; Pagani, M.; Somers, V.; Malliani, A.; Baselli, G.; Cerutti, S. Information domain analysis of cardiovascular variability signals: Evaluation of regularity, synchronisation and co-ordination. Med. Biol. Eng. Comput. **2000**, 38, 180–188.
48. Porta, A.; Guzzetti, S.; Montano, N.; Furlan, R.; Pagani, M.; Malliani, A.; Cerutti, S. Entropy, entropy rate, and pattern classification as tools to typify complexity in short heart period variability series. IEEE Trans. Biomed. Eng. **2001**, 48, 1282–1291.
49. Faes, L.; Nollo, G.; Porta, A. Non-uniform multivariate embedding to assess the information transfer in cardiovascular and cardiorespiratory variability series. Comput. Biol. Med. **2012**, 42, 290–297.
50. Van Diest, I.; Vlemincx, E.; Verstappen, K.; Vansteenwegen, D. The effects of instructed ventilatory patterns on physiological and psychological dimensions of relaxation. Presented at the 17th Meeting of the International Society for the Advancement of Respiratory Psychophysiology (ISARP), New York City, NY, USA, 26–27 September 2010.
51. Kralemann, B.; Fruhwirth, M.; Pikovsky, A.; Rosenblum, M.; Kenner, T.; Schaefer, J.; Moser, M. In vivo cardiac phase response curve elucidates human respiratory heart rate variability. Nat. Commun. **2013**, 4.
52. Stankovski, T.; Cooke, W.H.; Rudas, L.; Stefanovska, A.; Eckberg, D.L. Time-frequency methods and voluntary ramped-frequency breathing: A powerful combination for exploration of human neurophysiological mechanisms. J. Appl. Physiol. **2013**, 115, 1806–1821.
53. Barrett, A.B.; Barnett, L. Granger causality is designed to measure effect, not mechanism. Front. Neurosci. **2013**, 7, 61–62.
54. Gigi, S.; Tangirala, A.K. Quantitative analysis of directional strengths in jointly stationary linear multivariate processes. Biol. Cybern.
**2010**, 103, 119–133. [Google Scholar] - Vicente, R.; Wibral, M.; Lindner, M.; Pipa, G. Transfer entropy—a model-free measure of effective connectivity for the neurosciences. J. Comput. Neurosci
**2011**, 30, 45–67. [Google Scholar] - Kugiumtzis, D. Direct-coupling information measure from nonuniform embedding. Phys. Rev. E
**2013**, 87, 062918. [Google Scholar] - Wollstadt, P.; Martinez-Zarzuela, M.; Vicente, R.; Diaz-Pernas, F.J.; Wibral, M. Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series. PLoS One
**2014**, 9, e102833. [Google Scholar] - Labarre, D.; Grivel, E.; Berthoumieu, Y.; Todini, E.; Najim, M. Consistent estimation of autoregressive parameters from noisy observations based on two interacting Kalman filters. Sign. Proc
**2006**, 86, 2863–2876. [Google Scholar] - Arnold, M.; Miltner, W.H.; Witte, H.; Bauer, R.; Braun, C. Adaptive AR modeling of nonstationary time series by means of Kalman filtering. IEEE Trans. Biomed. Eng
**1998**, 45, 553–562. [Google Scholar] - Baselli, G.; Cerutti, S.; Badilini, F.; Biancardi, L.; Porta, A.; Pagani, M.; Lombardi, F.; Rimoldi, O.; Furlan, R.; Malliani, A. Model for the assessment of heart period and arterial pressure variability interactions and of respiration influences. Med. Biol. Eng. Comput
**1994**, 32, 143–152. [Google Scholar] - Fortrat, J.O.; Yamamoto, Y.; Hughson, R.L. Respiratory influences on non-linear dynamics of heart rate variability in humans. Biol. Cybern
**1997**, 77, 1–10. [Google Scholar]

**Figure 1.** (**a**) Causal structure of a bivariate dynamic process {X,Y}, evidencing internal dynamics of a single process (black arrows) and causal interactions from one process to the other (blue arrows). (**b**) Absence of internal dynamics in the process Y. (**c**) Absence of causal interactions from X to Y. (**d**) Absence of causal interactions from Y to X.

**Figure 2.** (**a**) Time series graph depicting the dynamics of a binary bivariate process {X,Y} assuming values 0 and 1 and generated by the deterministic relations X_{n} = 1 − X_{n−1}, Y_{n} = X_{n−1}, setting internal dynamics in X and causal interactions from X to Y. (**b**) The bivariate process is now generated by the relations Y_{n} = 1 − Y_{n−1}, X_{n} = Y_{n−1}, setting internal dynamics in Y and causal interactions from Y to X. Since the process outcomes are identical in the two cases, the measures of information dynamics relevant to the process Y are the same: P_{Y} = S_{Y} = C_{X→Y} = H(Y_{n}) = log 2, T_{X→Y} = S_{Y|X} = 0. Therefore, considering Y as target process we have causal interactions without information transfer in (**a**) and internal dynamics without internal information in (**b**).
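The values stated in this caption can be checked numerically. The following sketch uses plug-in (histogram) entropy estimates with pasts truncated to lag 1, which is sufficient here because the binary process of panel (**a**) is Markov of order 1; all names and the estimation scheme are illustrative, not the paper's implementation.

```python
import math
from collections import Counter

def entropy(samples):
    """Plug-in Shannon entropy, in nats, of a sequence of hashable outcomes."""
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in Counter(samples).values())

def simulate(n=10_000):
    """Figure 2a dynamics: X_n = 1 - X_{n-1}, Y_n = X_{n-1} (binary, deterministic)."""
    x, y = [0], [0]
    for _ in range(n):
        x.append(1 - x[-1])   # internal dynamics in X
        y.append(x[-2])       # causal interaction from X to Y
    return x[1:], y[1:]

x, y = simulate()
yn, yp, xp = y[1:], y[:-1], x[:-1]   # present of Y, lag-1 pasts of Y and X

# Measures as MI differences, using H(A|B) = H(A,B) - H(B)
P_Y  = entropy(yn) - (entropy(list(zip(yn, xp, yp))) - entropy(list(zip(xp, yp))))
S_Y  = entropy(yn) - (entropy(list(zip(yn, yp))) - entropy(yp))
T_XY = P_Y - S_Y                     # transfer entropy
C_XY = entropy(yn) - (entropy(list(zip(yn, xp))) - entropy(xp))

print(P_Y, S_Y, C_XY, T_XY)  # P_Y = S_Y = C_XY ~ log 2 ~ 0.693 nats, T_XY = 0
```

The run reproduces the caption's point: the past of X is maximally informative about Y_{n} (C_{X→Y} = log 2), yet the transfer entropy is zero because the past of Y already predicts Y_{n} perfectly.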

**Figure 3.** Graphical representation of the bivariate AR process {X,Y} of Equation (17), imposing: (**a**) absence of internal dynamics in X (a = 0); (**b**) absence of internal dynamics in Y (b = 0); (**c**) absence of causal interactions from X to Y (c = 0); (**d**) absence of causal interactions from Y to X (d = 0). The causal structure of the process is represented with a detailed time series graph (top) and with a condensed graph showing only the interactions between the past and present of the two processes (bottom), indicating for each arrow the parameter that affects the causal interaction depicted by the arrow.

**Figure 4.** Information dynamics computed as a function of the parameters of the bivariate AR process {X,Y} of Equation (17), imposing: (**a**) absence of internal dynamics in X (a = 0); (**b**) absence of internal dynamics in Y (b = 0); (**c**) absence of causal interactions from X to Y (c = 0); (**d**) absence of causal interactions from Y to X (d = 0). In each condition, one of the three nonzero parameters is varied in the range 0–0.5 while keeping the other two parameters equal to 0.5. In the plots, the predictive information is decomposed either as the sum of information storage and information transfer (D1: P_{Y} = S_{Y} + T_{X→Y}) or as the sum of cross information and internal information (D2: P_{Y} = C_{X→Y} + S_{Y|X}).
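For Gaussian AR processes, every measure in decompositions D1 and D2 reduces to half the log-ratio of prediction-error variances, so they can be estimated by ordinary linear regression. The sketch below assumes, for illustration only, a generic first-order bivariate AR form with parameters a, b, c, d (Equation (17) of the paper may use different lags or terms), and truncates the pasts to lag 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar(a, b, c, d, n=20_000):
    """Hypothetical first-order bivariate AR form (illustrative only):
       X_n = a*X_{n-1} + d*Y_{n-1} + u_n,   Y_n = b*Y_{n-1} + c*X_{n-1} + w_n."""
    x, y = np.zeros(n), np.zeros(n)
    u, w = rng.standard_normal(n), rng.standard_normal(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + d * y[t - 1] + u[t]
        y[t] = b * y[t - 1] + c * x[t - 1] + w[t]
    return x, y

def res_var(target, regressors):
    """Residual variance of an OLS regression of target on regressors."""
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return np.var(target - regressors @ beta)

# Condition (a) of Figure 4: a = 0, the other parameters set to 0.5
x, y = simulate_ar(a=0.0, b=0.5, c=0.5, d=0.5)
yn, yp, xp = y[1:], y[:-1, None], x[:-1, None]
full = np.hstack([yp, xp])   # joint past of {X,Y}

# Each measure: half the log-ratio of prediction-error variances (nats)
P_Y  = 0.5 * np.log(np.var(yn) / res_var(yn, full))  # predictive information
S_Y  = 0.5 * np.log(np.var(yn) / res_var(yn, yp))    # information storage
T_XY = P_Y - S_Y                                     # transfer entropy (D1)
C_XY = 0.5 * np.log(np.var(yn) / res_var(yn, xp))    # cross information
S_YX = P_Y - C_XY                                    # internal information (D2)
```

With these settings both decompositions sum to the same P_{Y} by construction, and all five quantities are nonnegative, consistent with the bounds listed in the summary table.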

**Figure 5.** Information dynamics computed for different parameter settings of the bivariate AR process {R,HP} of Equation (18), simulating: (**a**) a shift in the sympatho-vagal balance, obtained changing the parameter b from 0 to 1 and setting ρ_{VLF} = 0.2b, ρ_{LF} = 0.8b, c = 1 − b; (**b**) a shift in the respiratory frequency, obtained changing the parameter f_{HF} from 0.1 to 0.3. Left plots depict the profiles of the power spectral density of the simulated RV, S_{R}(f), and HPV, S_{HP}(f). Right plots depict the expansion of the predictive information as the sum of information storage and information transfer (D1: P_{HP} = S_{HP} + T_{R→HP}) or as the sum of cross information and internal information (D2: P_{HP} = C_{R→HP} + S_{HP|R}).

**Figure 6.** Information dynamics computed for HPV and RV series measured during the HUT protocol. Box plots depict the distributions over subjects of the predictive information of HPV (P_{HP}), the information storage of HPV (S_{HP}), the information transfer from RV to HPV (T_{R→HP}), the cross information from RV to HPV (C_{R→HP}), and the internal information of HPV (S_{HP|R}), computed in the supine (SU) and upright (UP) conditions. * p < 0.01, SU vs. UP.

**Figure 7.** Information dynamics computed for HPV and RV series measured during the PB protocol. Box plots depict the distributions over subjects of the predictive information of HPV (P_{HP}), the information storage of HPV (S_{HP}), the information transfer from RV to HPV (T_{R→HP}), the cross information from RV to HPV (C_{R→HP}), and the internal information of HPV (S_{HP|R}), computed during spontaneous respiration (SR) and paced respiration at 10, 15 and 20 breaths/min. * p < 0.01, ANOVA and post hoc pairwise test.

| Name | Meaning | Symbol | Lower Bound | Upper Bound |
|---|---|---|---|---|
| Prediction Entropy (PE) | Predictive Information | P_{Y} | Y_{n} ⊥ (X_{n}^{−}, Y_{n}^{−}) ⇔ P_{Y} = 0 | Y_{n} = f(X_{n}^{−}, Y_{n}^{−}) ⇔ P_{Y} = H(Y_{n}) |
| Self Entropy (SE) | Information Storage | S_{Y} | Y_{n} ⊥ Y_{n}^{−} ⇔ S_{Y} = 0 | Y_{n} = f(Y_{n}^{−}) ⇔ S_{Y} = H(Y_{n}) |
| Transfer Entropy (TE) | Information Transfer | T_{X→Y} | Y_{n} ⊥ X_{n}^{−} \| Y_{n}^{−} ⇒ T_{X→Y} = 0 | Y_{n} = f(X_{n}^{−}, Y_{n}^{−}) ⇔ T_{X→Y} = H(Y_{n} \| Y_{n}^{−}) |
| Cross Entropy (CE) | Cross Information | C_{X→Y} | Y_{n} ⊥ X_{n}^{−} ⇔ C_{X→Y} = 0 | Y_{n} = f(X_{n}^{−}) ⇔ C_{X→Y} = H(Y_{n}) |
| Conditional Self Entropy (cSE) | Internal Information | S_{Y\|X} | Y_{n} ⊥ Y_{n}^{−} \| X_{n}^{−} ⇒ S_{Y\|X} = 0 | Y_{n} = f(X_{n}^{−}, Y_{n}^{−}) ⇔ S_{Y\|X} = H(Y_{n} \| X_{n}^{−}) |

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Faes, L.; Porta, A.; Nollo, G.
Information Decomposition in Bivariate Systems: Theory and Application to Cardiorespiratory Dynamics. *Entropy* **2015**, *17*, 277-303.
https://doi.org/10.3390/e17010277
