Article

Information Decomposition in Bivariate Systems: Theory and Application to Cardiorespiratory Dynamics

1 BIOtech, Department of Industrial Engineering, University of Trento, and IRCS PAT-FBK, Trento, Italy
2 Department of Biomedical Sciences for Health, University of Milan, Via R. Galeazzi 4, 20161 Milan, Italy
3 IRCCS Galeazzi Orthopedic Institute, Via R. Galeazzi 4, 20161 Milan, Italy
* Author to whom correspondence should be addressed.
Entropy 2015, 17(1), 277-303; https://doi.org/10.3390/e17010277
Submission received: 3 November 2014 / Accepted: 11 December 2014 / Published: 12 January 2015
(This article belongs to the Special Issue Entropy and Cardiac Physics)

Abstract: In the framework of information dynamics, the temporal evolution of coupled systems can be studied by decomposing the predictive information about an assigned target system into amounts quantifying the information stored inside the system and the information transferred to it. While information storage and transfer are computed through the well-known self-entropy (SE) and transfer entropy (TE), an alternative decomposition evidences the so-called cross entropy (CE) and conditional SE (cSE), quantifying the cross information and internal information of the target system, respectively. This study presents a thorough evaluation of SE, TE, CE and cSE as quantities related to the causal statistical structure of coupled dynamic processes. First, we investigate the theoretical properties of these measures, providing the conditions for their existence and assessing the meaning of the information-theoretic quantity that each of them reflects. Then, we present an approach for the exact computation of information dynamics based on the linear Gaussian approximation, and exploit this approach to characterize the behavior of SE, TE, CE and cSE in benchmark systems with known dynamics. Finally, we exploit these measures to study cardiorespiratory dynamics measured from healthy subjects during head-up tilt and paced breathing protocols. Our main result is that the combined evaluation of the measures of information dynamics allows one to infer the causal effects associated with the observed dynamics and to interpret the alteration of these effects with changing experimental conditions.


1. Introduction

The dynamics of complex physical and biological systems can often be explained as emerging from the activity of multiple system components, which carry a certain degree of autonomy but also interact with each other producing nontrivial collective behaviors. In the field of cardiac physics, a typical example of these behaviors is given by cardiorespiratory dynamics, which arise from both an internal regulation of the cardiac system, accomplished through several central and peripheral physiological mechanisms, and its interaction with the respiratory system [1]. This interaction is commonly observed in the temporal dynamics of the main output variable of the cardiac system, i.e., heart period variability (HPV), as the component occurring in synchrony with respiratory variability (RV). Such a component is denoted as respiratory sinus arrhythmia, and is of particular physiological and clinical importance as it constitutes a meaningful indicator of parasympathetic activity and modulation [2].
The analysis of coupled systems is commonly accomplished by mapping the system activity with a set of variables, and then studying the statistical dependence among the observed realizations of these variables collected in the form of multivariate time series. In the general field of statistical causal modeling [3], this is achieved by assessing the causal sources of statistical dependence for an assigned target variable. A prominent framework is structural causal modeling [4], which exploits graphical models to encode direct causal links among variables by directed edges in the graphical representation of the observed interactions. While this framework is very powerful in describing the causal mechanisms of coupled processes, it is restricted—at least in its most common formulations—by the facts that it ignores time and it presupposes acyclic dependencies between the variables. To extend the framework to the study of dynamic processes with possibly reciprocal (cyclic) causal interactions, two main approaches to statistical causal modeling have been proposed. The first is based on formulating realistic cyclic and time-dependent models of how the observed data have been generated, and then inferring the causal statistical structure from the estimated coupling parameters. A famous example in neuroscience is given by dynamic causal modeling [5], which explicitly considers the biophysical interactions among inaccessible neural populations as well as their mapping to the measured variables. In cardiac physiology, approaches modeling the temporal dynamics of the amplitude [6] or the phase [7,8] of heart rate and respiration variables have been proposed to assess cardiopulmonary dynamics. The other, apparently unrelated approach relies on explicitly incorporating the flow of time into structural causal modeling, in a way such that cyclic interactions turn into acyclic graphs when the graph nodes are deployed over subsequent time steps [3,9]. This allows one to investigate structural causal models following the general principles of Wiener-Granger causality [10–12], according to which causal dependencies occur when the cause variable precedes in time the effect variable and the cause contains unique information about the future values of its effect. After its introduction in the field of econometrics [11], this approach has been largely followed in other fields of science, including neuroscience and cardiovascular physiology (see [13] for an overview).
In the present study we follow the Wiener-Granger approach to the study of dynamic interactions among coupled processes, and consider its implementation within the field of information theory [14]. Essentially, a dynamical system can be called complex, and its descriptive process unpredictable, if it generates information at a nonzero rate. In the information-theoretic framework, this is quantified by the entropy. Then, the flow of time is taken into account by studying to what extent the past system history contributes to resolve the uncertainty about its present state. This way of proceeding leads naturally to assessing the predictive information about an assigned target system as the amount of uncertainty about its present state that can be resolved by the knowledge of the past states of all available systems [15,16]. Interestingly, in bivariate systems the predictive information can be decomposed into two terms quantifying respectively the information stored inside the target system and the information transferred to it from the other connected system. Information storage and information transfer are becoming very popular concepts in the study of dynamic processes, also thanks to the fact that well-founded measures like the so-called self-entropy (SE) and transfer entropy (TE) have been defined for their quantification [17,18]. Taken together, these concepts form the basis of the field of information dynamics [19], which is becoming very popular as it provides an elegant unifying framework to study the complex behavior of ensembles of dynamical systems [20–23].
Nevertheless, the full exploitation of information dynamics for the analysis of experimental time series is hampered by some theoretical and practical issues. For instance, the interpretation of the SE as a measure of information storage may be confounded by the fact that it incorporates dynamical influences arising not only from the investigated target system, but also from other systems potentially connected to it [20,24]. As to the information transfer, it has been shown that in some circumstances the TE may vanish even if the driver and target systems are causally connected [15,25], and that using the TE magnitude as a measure of connectivity strength can lead to erroneous interpretations of the underlying causal effects [15,26]. Moreover, the decomposition through which the predictive information is expanded as the sum of the information storage and information transfer is not unique [15,16], so that the question arises of whether alternative decompositions may give more or different insight into the dynamical structure of the observed system. From a practical viewpoint, the computation of information dynamics from short and noisy experimental time series can be a daunting task because it entails the estimation of high-dimensional joint probability densities, and therefore simplifying assumptions are often needed. All these issues are addressed in the present study by: (i) providing thorough definitions of all measures of information dynamics that may be derived from the two possible decompositions of the predictive information about the target of a bivariate system; (ii) studying the theoretical properties of these measures, including the conditions for existence and the meaning of the information-theoretic quantity that each of them reflects; (iii) investigating their behavior on simulated processes with known underlying dynamics; and (iv) presenting an approach for their practical computation based on the linear Gaussian approximation of the entropy functions. In the second part of the paper, the measures of information dynamics are investigated in realistic simulations of cardiorespiratory dynamics, and then computed on real RV and HPV time series measured in different conditions, showing how they can be used to describe the changes evoked in the physiological regulation by experimental stimuli like head-up tilt and paced breathing.

2. Methods

In this Section, we present the measures of information dynamics that characterize the causal statistical structure of the processes representing the time evolution of bivariate dynamic systems. The natural domain for measuring information dynamics is information theory [14], which provides a general framework for quantifying the ‘information content’ of individual variables or collections of variables, and the information exchange between variables. We first recall the basic information-theoretic measures and their properties in Section 2.1, then outline how to use these measures for analyzing dynamic processes in terms of information dynamics in Section 2.2. Subsequently, Section 2.3 discusses the interpretation of information dynamics, and Section 2.4 describes an approach for their numerical computation.

2.1. Information-Theoretic Preliminaries

The central quantity in information theory is the Shannon entropy, which expresses the amount of information of a (possibly multivariate) random variable V as the average uncertainty associated with its outcomes, and is formulated as $H(V) = -\sum_v p(v) \log p(v)$, where p(v) is the probability for V to take the value v, and the sum is taken over all outcomes with nonzero probability. The conditional entropy of V given another variable W quantifies the residual information about V when W is known, as the average uncertainty that remains about V when the outcomes of W are assigned: $H(V|W) = \sum_w p(w) H(V|W=w)$, where $H(V|W=w) = -\sum_v p(v|w) \log p(v|w)$ is the entropy of V measured when W = w. The conditional entropy is computed from the conditional probability p(v|w), which expresses the probability of observing V = v given that W = w has been observed.
The mutual information (MI) between V and W quantifies the information shared by V and W. The MI corresponds to the part of the information of V that can be predicted by the knowledge of W, i.e., the average reduction in uncertainty about V that results from knowing the values of W: $I(V;W) = \sum_{v,w} p(v,w) \log \frac{p(v|w)}{p(v)}$. Since the reverse also holds, the MI is symmetric, I(V;W) = I(W;V). The MI can be expressed in terms of Shannon and conditional entropies as I(V;W) = H(V) − H(V|W). Moreover, the conditional mutual information between V and W given a third variable Z quantifies the residual MI between V and W when Z is known: $I(V;W|Z) = \sum_{v,w,z} p(v,w,z) \log \frac{p(v|w,z)}{p(v|z)}$, and can be expressed in terms of conditional entropies as I(V;W|Z) = H(V|Z) − H(V|W,Z) = H(W|Z) − H(W|V,Z) = I(W;V|Z). Note that entropy and MI are often measured in bits, using 2 as the base of the logarithm; in this study we use natural logarithms, and therefore the units are nats.
Now we recall some basic properties and inequalities that will be useful in the description of information dynamics. First, we note that all Shannon information measures are non-negative. The entropy H(V) is zero if and only if V is deterministic (i.e., it assumes a single outcome with probability one). The conditional entropy H(V|W) is zero if and only if V is a function of W, i.e., V is deterministic for each outcome w of W, so that H(V|W=w) = 0; note that V can be at the same time non-deterministic and a function of W, so that H(V) > 0 and H(V|W) = 0. The conditional entropy is maximum, H(V|W) = H(V), if and only if V and W are independent (i.e., p(v|w) = p(v) for each v and w), denoted as $V \perp W$; in this case the mutual information I(V;W) is equal to zero. Finally, the conditional MI I(V;W|Z) is zero if and only if V and W are independent when conditioning on Z (i.e., p(v|w,z) = p(v|z) for each v, w, z), denoted as $V \perp W | Z$. As we will see below, conditional statistical independence is a key property for the assessment of the causal statistical structure of bivariate processes based on information dynamics.
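As a concrete numerical illustration of these definitions (not part of the original analysis), the following Python sketch computes the entropy, conditional entropy and MI in nats for a hypothetical 2×2 joint distribution; the probability values are arbitrary:

```python
import numpy as np

# Hypothetical joint distribution p(v, w) for binary V and W
p_vw = np.array([[0.3, 0.2],
                 [0.1, 0.4]])

p_v = p_vw.sum(axis=1)                         # marginal p(v)
p_w = p_vw.sum(axis=0)                         # marginal p(w)

H_v = -np.sum(p_v * np.log(p_v))               # H(V)
H_w = -np.sum(p_w * np.log(p_w))               # H(W)
H_vw = -np.sum(p_vw * np.log(p_vw))            # joint entropy H(V,W)
H_v_given_w = H_vw - H_w                       # H(V|W) = H(V,W) - H(W)
I_vw = H_v - H_v_given_w                       # I(V;W) = H(V) - H(V|W)

print(f"H(V) = {H_v:.4f} nats, H(V|W) = {H_v_given_w:.4f}, I(V;W) = {I_vw:.4f}")
```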

2.2. Information Dynamics in Bivariate Systems

Let us consider a bivariate dynamic system S composed of two possibly interacting systems X and Y, and assume that the evolution of the systems over time is described by the dynamic bivariate process S = {X,Y}. Moreover, setting a temporal reference frame in which n is the present time, we denote as $X_n$ and $Y_n$ the univariate variables describing the present of the processes X and Y, and as $X_n^- = [X_{n-1}, X_{n-2}, \ldots]$ and $Y_n^- = [Y_{n-1}, Y_{n-2}, \ldots]$ the multivariate variables describing the past of the processes. Then, in the framework of information dynamics [19] the temporal statistical structure of the observed system is characterized through the standard information-theoretic measures recalled in Section 2.1, computed taking as arguments properly chosen combinations of the past and present of the two dynamic processes.
In particular, considering Y as the target system, the information produced by the target process Y is quantified by the entropy of its present state, H(Yn). Then, the effect on the target of the temporal dynamics of the joint process {X,Y} is measured by the so-called Prediction Entropy (PE):
$$P_Y = I(Y_n; X_n^-, Y_n^-) = H(Y_n) - H(Y_n | X_n^-, Y_n^-), \qquad (1)$$
which quantifies the part of the information carried by the present of the target process that can be predicted by the past of the joint process, $S_n^- = [X_n^-, Y_n^-]$. In an attempt to separate the statistical dependencies arising from each of the two systems, the PE can be decomposed exploiting the chain rule for mutual information [14] as:
$$P_Y = S_Y + T_{X \to Y}, \qquad (2)$$
where:
$$S_Y = I(Y_n; Y_n^-) = H(Y_n) - H(Y_n | Y_n^-) \qquad (3)$$
is the self-entropy (SE), quantifying the part of the information carried by the present of the target process that can be predicted by its own past, and:
$$T_{X \to Y} = I(Y_n; X_n^- | Y_n^-) = H(Y_n | Y_n^-) - H(Y_n | X_n^-, Y_n^-) \qquad (4)$$
is the transfer entropy (TE), measuring the part of the information carried by the present of the target process that can be predicted by the past of the driver above and beyond the part that was predicted by the past of the target. The formulation proposed above is very popular, as it evidences the SE and the TE, which are well-known measures of information dynamics [16–18]. As an alternative to the decomposition (2), another way to expand the PE is to apply the chain rule first considering the past of the driver X, and then the past of the target Y. Accordingly, the PE can be written as:
$$P_Y = C_{X \to Y} + S_{Y|X}, \qquad (5)$$
where:
$$C_{X \to Y} = I(Y_n; X_n^-) = H(Y_n) - H(Y_n | X_n^-), \qquad (6)$$
denoted here as cross-entropy (CE), quantifies the part of the information carried by the present of the target that can be predicted exclusively by the past of the driver, and:
$$S_{Y|X} = I(Y_n; Y_n^- | X_n^-) = H(Y_n | X_n^-) - H(Y_n | X_n^-, Y_n^-) \qquad (7)$$
is defined as the conditional SE (cSE), quantifying the part of the information carried by the present of the target process that can be predicted by its past above and beyond the part that was predicted by the past of the driver. Note that, although the formulation in (5)–(7) is less popular than that in (2)–(4), it is equally valid from a mathematical point of view. In the next subsections we will discuss analogies and differences between TE and CE, and between SE and cSE, indicating theoretical and practical situations in which the two decompositions should be used.
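For clarity, the two decompositions are nothing more than the chain rule for MI applied in the two possible orders:
$$I(Y_n; X_n^-, Y_n^-) = I(Y_n; Y_n^-) + I(Y_n; X_n^- \mid Y_n^-) = S_Y + T_{X \to Y},$$
$$I(Y_n; X_n^-, Y_n^-) = I(Y_n; X_n^-) + I(Y_n; Y_n^- \mid X_n^-) = C_{X \to Y} + S_{Y|X}.$$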

2.3. Properties and Theoretical Interpretation

This subsection is devoted to investigating the theoretical meaning and properties of the various measures of information dynamics defined in Section 2.2. First, in order to study how the various measures reflect the dynamical structure of the observed bivariate system, we exploit the framework of causal graphs [9,27,28] for representing the causal statistical structure of the system. Such a structure is shown in Figure 1a for the bivariate system {X,Y}, where the variables associated with the past and present of the processes X and Y are depicted as the nodes of the graph, and arrows depict causal interactions. In the figure, causal interactions between different processes ($X_n^- \to Y_n$ and $Y_n^- \to X_n$) are distinguished from causal interactions involving variables of the same process, which we denote as internal dynamics ($X_n^- \to X_n$ and $Y_n^- \to Y_n$); note that the links between the past of the two processes are induced by the causal interactions, i.e., $X_n^- \to Y_n$ implies $X_n^- \to Y_n^-$, and the same from Y to X. It is also worth noting that the graphical structure adopted in Figure 1 serves for the analysis of causality intended in the Granger sense [11,12], i.e., with the purpose of characterizing causal relations between the whole past of the processes and their present, without taking care of lag-specific interactions (a possible treatment of lag-specific causal interactions is outlined in [29]), and that this representation presupposes the absence of instantaneous dependence between the processes (i.e., $X_n \perp Y_n | X_n^-, Y_n^-$; a solution for incorporating zero-lag dependencies in practical analysis is outlined in [30]).
The causal structure of the observed bivariate system can be inferred from time series data exploiting the relation between causal interactions and conditional statistical independencies tested from the probability distributions of the associated variables [4]. In particular, recent studies [9,28,29,31] have shown that the causal structures associated with multivariate dynamic processes can be inferred straightforwardly by testing the non-existence of Granger-causal interactions through measures of the statistical independence between the present of the target process and the past of the driver process, conditioned on the past of the target and any other process. In our bivariate context, absence of internal dynamics in the target process corresponds to statistical independence between its present and past variables given the past of the driver ($Y_n \perp Y_n^- | X_n^-$, Figure 1b), and absence of causal interactions from source to target corresponds to statistical independence between the past of the driver and the present of the target given its past ($X_n^- \perp Y_n | Y_n^-$, Figure 1c). Then, the conditional independencies associated with absence of causal interactions can be detected using the conditional MI. However, since conditional independence does not imply independence, the (unconditioned) MI cannot be used to probe causal interactions.
Above we have addressed the problem of assessing the existence of causal connections in the observed system, related either to the interactions between the two processes or to the internal dynamics of one process. However, another question arises about whether and under which conditions it is possible to quantify in a meaningful way the impact that the causal connections have on the system dynamics. This question, which is intimately related to the definition of proper measures of "causal coupling strength" between two variables, has been addressed satisfactorily in the framework of interventional causality [4] by quantifying the effect on the target variable of actively perturbing the driver variable. Although a similar definition of "causal effect" is much less straightforward if the analysis aims at characterizing the dynamics that occur naturally in the unperturbed system, a recent study [26] has drawn a connection between the two approaches, providing a definition of "natural causal effects" and stating the conditions under which these effects can be quantified without intervening on the system variables. In [26], a natural causal effect between two variables of an unperturbed system is defined as the causal effect that results if conditioning on the driver variable is identical to intervening on that variable. Such an effect can be defined only when the causal structure of the observed system is such that the driving variable is autonomous with respect to the rest, and can be quantified meaningfully through the joint probability distribution of the target and driver variables. Translating these concepts to our context of dynamic bivariate processes, we have that the causal interactions from X to Y can be interpreted as natural causal effects only if no arrows in the causal graph point to $X_n^-$, i.e., only in the absence of causal interactions from Y to X (Figure 1d). Similarly, a proper cause-effect interpretation can be given to the internal dynamics in the target process Y only if $Y_n^-$ is autonomous, i.e., only in the absence of causal interactions from X to Y (Figure 1c). In these situations, meaningful measures of the magnitude of the natural causal effects related to the interactions from X to Y and to the internal dynamics of Y can be obtained elaborating the joint distributions $p(Y_n, X_n^-)$ and $p(Y_n, Y_n^-)$ in terms of MI, i.e., computing the MIs $I(Y_n; X_n^-)$ and $I(Y_n; Y_n^-)$.
The considerations above lead to the theoretical interpretation of information dynamics summarized in Table 1 and described in the following. Starting with the PE, we observe that it ranges from 0, measured when $Y_n \perp X_n^-, Y_n^-$, to the entropy of the target process, measured when $[X_n^-, Y_n^-]$ fully predicts $Y_n$. The PE is nonzero in the presence of any combination of internal dynamics in the target system ($Y_n^- \to Y_n$) and causal interactions from source to target ($X_n^- \to Y_n$). As such, it is a useful measure of the overall predictive information about the target process, reflecting the natural causal effect of $[X_n^-, Y_n^-]$ on $Y_n$, but it cannot disentangle the causal sources of statistical dependence giving rise to this predictive information.
The SE is a useful measure of information storage, intended as a quantity reflecting the whole information contained in the past of the target that can be used to predict its present, regardless of the origin of such information. Indeed, significant SE measured as $I(Y_n; Y_n^-) > 0$ arises not only from internal dynamics in the target system ($Y_n^- \to Y_n$) but also from causal interactions from source to target; in the latter case $X_n^-$ acts as a common driver ($Y_n^- \leftarrow X_n^- \to Y_n$), creating statistical dependence between $Y_n$ and $Y_n^-$ even without the existence of a causal connection between them. For this reason, the SE cannot be related to the presence of internal dynamics in the target process. However, in the particular case of absent causal interactions from source to target (Figure 1c), the SE not only reflects the internal dynamics of the target, but also quantifies these dynamics as occurring from natural causal effects. On the contrary, the cSE reflects the internal information in the target process because it is always zero in the absence of internal dynamics; consequently, finding it higher than zero means that autodependency effects take place in the target process ($Y_n^- \to Y_n$). However, the internal information measured by the cSE does not reflect natural causal effects because it is based on conditional probabilities rather than simple joint probabilities. To sum up, the main distinction between SE and cSE can be summarized by stating that a system without internal dynamics does not exhibit internal information, but may exhibit information storage (see Figure 1b for an example).
With a similar reasoning, the CE can be interpreted as measuring the cross information from the driver process to the target process, intended as the overall amount of information carried by the present of the target that can be explained by the driver's past. This overall information includes both the contribution due to the causal interactions from driver to target ($X_n^- \to Y_n$) and that resulting from the contemporaneous presence of internal dynamics in the target and causal interactions from target to driver (common driver effect $X_n^- \leftarrow Y_n^- \to Y_n$). Therefore, significant CE measured as $I(Y_n; X_n^-) > 0$ cannot be taken as an indication of causal interaction from X to Y. However, in the presence of unidirectional interactions from driver to target the CE is a proper measure of the overall natural causal effects subsuming these interactions. On the contrary, the TE reflects the information transfer to the target process because it is exactly zero in the absence of causal interactions from driver to target; consequently, finding it strictly positive means that the driver is causing the target ($X_n^- \to Y_n$). To summarize the differences between CE and TE, we can thus state that in the absence of any causal interaction from driver to target there is no information transfer, but there can be cross information (see Figure 1c for an example).
Table 1 also reports the upper bounds of the measures of information dynamics. For each measure, the upper bound is attained if and only if the present of the target is a function of the other variables which appear in the MI or conditional MI defining the measure. In such a case, the second conditional entropy term in the definition of the measure vanishes, and the measure becomes equivalent to the first (conditional) entropy term; see Equations (1), (3), (4), (6) and (7). The presence of deterministic effects maximizing some measures of information dynamics may limit the interpretability of the other measures derived from the decomposition of the predictive information. Specifically, when the present of the target process is an exact function of its past we always measure $S_Y = P_Y$ and $T_{X \to Y} = 0$, even in the presence of substantial causal interactions from source to target. Similarly, when the present of the target process is an exact function of the past of the driver we always measure $C_{X \to Y} = P_Y$ and $S_{Y|X} = 0$, even in the presence of substantial internal dynamics in the target. In other words, full predictability of the target, given either its past or the past of the driver, entails a null value for the conditional MI and thus precludes any possibility of measuring additional predictability. This is the reason for the absence of the "only if" condition in the relation between conditional independence and null conditional MI reported in Table 1 for the TE and the cSE. To put it simply, while there is no information transfer without causal interactions, and there is no internal information without internal dynamics, the reverse does not hold. Examples of causal interactions not reflected by information transfer, and of internal dynamics not reflected by internal information, are reported in Figure 2. To sum up, the measures of information transfer and internal information serve as useful proxies for causal interactions and internal dynamics in stochastic processes, while their causal interpretation should proceed more carefully in the presence of deterministic effects.

2.4. Computation of Information Dynamics

The practical computation of the measures appearing in the entropy decompositions of Equations (1)–(4) and (5)–(7) requires estimates of the MI and conditional MI for high-dimensional vector variables. In the most general case, and when nonlinear effects are relevant, non-parametric approaches are recommended to yield model-free estimates of entropy and MI [29,32–34]. However, the necessity to estimate entropies of variables of very high dimension may impair the reliability of model-free estimators, especially when short realizations of the processes are available [35]. In this study we adopt the assumption of Gaussianity and exploit the exact expressions that hold in this case for the information measures. Specifically, in the following we provide a derivation of the exact values of information dynamics under the assumption that the observed bivariate process S = {X,Y} has a joint Gaussian distribution. The use of an exact computation also has the advantage that it allows analytic evaluation of information dynamics for known linear systems (this will be done in Section 3).
We start recalling some known entropy expressions for Gaussian variables. The entropy of a univariate Gaussian random variable V can be expressed as [14]:
$$H(V) = \frac{1}{2} \ln (2 \pi e \, \sigma(V)), \qquad (8)$$
where σ(V) is the variance of V. Moreover, given a multivariate variable W such that V and W are jointly multivariate Gaussian (i.e., any finite subset of the component variables has a joint Gaussian distribution), the conditional entropy of V given W can be expressed as [36]:
$$H(V|W) = \frac{1}{2} \ln (2 \pi e \, \sigma(V|W)), \qquad (9)$$
where σ(V|W) is the partial variance of V given W, that is the variance of the residuals of a linear regression of V on W, which in turn can be expressed in terms of covariance matrices as [36]:
$$\sigma(V|W) = \sigma(V) - \Sigma(V;W) \, \Sigma(W)^{-1} \, \Sigma(V;W)^T, \qquad (10)$$
with Σ(·) and Σ(·;·) indicating respectively the covariance and cross-covariance matrix. Then, the various measures of information dynamics can be computed by first approximating the infinite-dimensional variables $X_n^-$ and $Y_n^-$ appearing in the definitions (1), (3), (4), (6) and (7) with the l-dimensional variables $X_n^l = [X_{n-1}, X_{n-2}, \ldots, X_{n-l}]$ and $Y_n^l = [Y_{n-1}, Y_{n-2}, \ldots, Y_{n-l}]$, and then applying (8) and (9) to compute entropy and conditional entropy, so as to obtain:
$$P_Y = \frac{1}{2} \ln \frac{\sigma(Y_n)}{\sigma(Y_n | X_n^l, Y_n^l)}, \quad S_Y = \frac{1}{2} \ln \frac{\sigma(Y_n)}{\sigma(Y_n | Y_n^l)}, \quad T_{X \to Y} = \frac{1}{2} \ln \frac{\sigma(Y_n | Y_n^l)}{\sigma(Y_n | X_n^l, Y_n^l)},$$
$$C_{X \to Y} = \frac{1}{2} \ln \frac{\sigma(Y_n)}{\sigma(Y_n | X_n^l)}, \quad S_{Y|X} = \frac{1}{2} \ln \frac{\sigma(Y_n | X_n^l)}{\sigma(Y_n | X_n^l, Y_n^l)}. \qquad (11)$$
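In code, Equation (11) reduces to a handful of log-ratios of variances. The following Python sketch (function and variable names are ours, not from the original paper) assumes the relevant partial variances have already been obtained via Equation (10):

```python
import numpy as np

def info_measures(s_y, s_y_Y, s_y_X, s_y_XY):
    """Linear-Gaussian information measures in nats, following Equation (11).

    s_y    -- variance of Y_n
    s_y_Y  -- partial variance of Y_n given its own past Y_n^l
    s_y_X  -- partial variance of Y_n given the driver past X_n^l
    s_y_XY -- partial variance of Y_n given both pasts
    """
    P  = 0.5 * np.log(s_y / s_y_XY)      # predictive information (PE)
    S  = 0.5 * np.log(s_y / s_y_Y)       # self-entropy (SE)
    T  = 0.5 * np.log(s_y_Y / s_y_XY)    # transfer entropy (TE)
    C  = 0.5 * np.log(s_y / s_y_X)       # cross-entropy (CE)
    cS = 0.5 * np.log(s_y_X / s_y_XY)    # conditional self-entropy (cSE)
    return P, S, T, C, cS
```

By construction, P = S + T = C + cS, mirroring the decompositions (2) and (5).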
Given the formulations in (11), we see that the computation of information dynamics is straightforward once the partial variances of $Y_n$ given the various combinations of the past of X and Y are obtained. Since any partial variance can be computed using (10), the problem reduces to computing the relevant covariance and cross-covariance matrices between the present and past variables of the two processes. In general, these matrices contain as scalar elements the covariance between two time-lagged variables of the processes X and Y, which in turn appear as elements of the autocovariance of the bivariate process S = {X,Y}, defined at each lag k ≥ 0 as $\Gamma_k = E[S_n S_{n-k}^T]$. In the following we review the procedure to derive the autocovariance of vector autoregressive (AR) processes from the parametric representation of these processes [37].
Given a multivariate Gaussian process, the statistical dependencies between the present and the past variables constituting the bivariate process S = {X,Y} can be fully accounted for by its bivariate AR representation [38]:
$$S_n = \sum_{k=1}^{p} A_k S_{n-k} + \varepsilon_n, \qquad (12)$$
where p is the process order, $S_n = [X_n \; Y_n]^T$ includes the present variables of the joint process, the $A_k$ are 2×2 coefficient matrices, and $\varepsilon_n$ is a noise process with diagonal covariance matrix Λ. The autocovariance of the process (12) is related to the AR parameters via the well-known Yule-Walker equations:
$$\Gamma_k = \sum_{l=1}^{p} A_l \Gamma_{k-l} + \delta_{k0} \Lambda, \qquad (13)$$
where $\delta_{k0}$ is the Kronecker delta. In order to solve Equation (13) for $\Gamma_k$, k = 0, 1, …, p−1, we first express (12) in the compact form $S_n^p = \mathbf{A}^p S_{n-1}^p + \varepsilon_n^p$, where:
$$S_n^p = [S_n^T \; S_{n-1}^T \; \cdots \; S_{n-p+1}^T]^T, \quad \mathbf{A}^p = \begin{bmatrix} A_1 & \cdots & A_{p-1} & A_p \\ I & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & I & 0 \end{bmatrix}, \quad \varepsilon_n^p = [\varepsilon_n^T \; 0]^T. \qquad (14)$$
Then, the covariance matrix of $S_n^p$, which has the form:
$$\Gamma_0^p = E[S_n^p S_n^{pT}] = \begin{bmatrix} \Gamma_0 & \Gamma_1 & \cdots & \Gamma_{p-1} \\ \Gamma_1^T & \Gamma_0 & \cdots & \Gamma_{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ \Gamma_{p-1}^T & \Gamma_{p-2}^T & \cdots & \Gamma_0 \end{bmatrix}, \qquad (15)$$
can be expressed as:
$$\Gamma_0^p = \mathbf{A}^p \, \Gamma_0^p \, \mathbf{A}^{pT} + \Lambda^p, \qquad (16)$$
which is a discrete-time Lyapunov equation ($\Lambda^p$ denotes the covariance of $\varepsilon_n^p$). The Lyapunov equation can be solved for $\Gamma_0^p$, thus yielding the autocovariance matrices $\Gamma_0, \ldots, \Gamma_{p-1}$. Finally, the autocovariance can be calculated recursively for any lag k ≥ p by applying (13). This shows how the autocovariance sequence can be computed up to arbitrarily high lags starting from the parameters of the bivariate AR representation of the process.
To summarize, the procedure described above is based first on computing the autocovariance sequence of the bivariate process from its AR parameters, and then on rearranging the elements of the autocovariance matrices to build the covariances to be used in the computation of information dynamics. For example, to compute the predictive information of the target process Y from the l past lags of the joint process {X,Y} we proceed as follows: (i) starting from the bivariate AR parameters ($A_1, \ldots, A_p, \Lambda$), compute the autocovariance $\Gamma_k$ for each lag k = 0,…,l by solving the Lyapunov equation (16) and applying the recursion (13); (ii) picking the proper elements from the $\Gamma_k$, build the covariance matrices $\Sigma(X_n^l, Y_n^l)$, of dimension 2l × 2l, and $\Sigma(Y_n; X_n^l, Y_n^l)$, of dimension 1 × 2l, and compute the variance $\sigma(Y_n)$ as the element (2,2) of $\Gamma_0$; (iii) use Equation (10) to find the partial variance $\sigma(Y_n | X_n^l, Y_n^l)$; (iv) use the first equation in (11) to compute the PE.
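The whole procedure takes only a few lines of Python. The sketch below is an illustrative implementation of Equations (13)–(16) under the assumptions stated in the text (bivariate Gaussian VAR, Y as the second component); it reuses the info_measures function sketched after Equation (11), and all function and variable names are ours:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def var_autocov(A_list, Lam, l):
    """Autocovariance Gamma_0..Gamma_l of a VAR(p), via Equations (13)-(16)."""
    p, m = len(A_list), A_list[0].shape[0]
    # Companion form A^p (Equation (14)) and extended noise covariance Lambda^p
    Ap = np.zeros((m * p, m * p))
    Ap[:m, :] = np.hstack(A_list)
    Ap[m:, :-m] = np.eye(m * (p - 1))
    Lp = np.zeros((m * p, m * p))
    Lp[:m, :m] = Lam
    # Discrete Lyapunov equation (16): Gamma_0^p = A^p Gamma_0^p A^pT + Lambda^p
    G0p = solve_discrete_lyapunov(Ap, Lp)
    G = [G0p[:m, k * m:(k + 1) * m] for k in range(p)]  # Gamma_0..Gamma_{p-1}
    for k in range(p, l + 1):                           # recursion (13) for k >= p
        G.append(sum(A_list[i] @ G[k - 1 - i] for i in range(p)))
    return G

def partial_var(Sig, iy, icond):
    """Partial variance of variable iy given variables icond, Equation (10)."""
    c = Sig[np.ix_([iy], icond)]
    return (Sig[iy, iy] - c @ np.linalg.inv(Sig[np.ix_(icond, icond)]) @ c.T).item()

def info_dynamics(A_list, Lam, l=20):
    """PE, SE, TE, CE, cSE for the target Y (second process) of a bivariate VAR."""
    m = 2
    G = var_autocov(A_list, Lam, l)
    # Covariance of [S_n, S_{n-1}, ..., S_{n-l}], with block (i,j) = Gamma_{j-i}
    Sig = np.zeros((m * (l + 1), m * (l + 1)))
    for i in range(l + 1):
        for j in range(l + 1):
            Sig[i*m:(i+1)*m, j*m:(j+1)*m] = G[j - i] if j >= i else G[i - j].T
    iy = 1                                           # index of Y_n
    pX = [m * k for k in range(1, l + 1)]            # indices of X_{n-1..n-l}
    pY = [m * k + 1 for k in range(1, l + 1)]        # indices of Y_{n-1..n-l}
    return info_measures(Sig[iy, iy],
                         partial_var(Sig, iy, pY),
                         partial_var(Sig, iy, pX),
                         partial_var(Sig, iy, pX + pY))
```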
The parameter determining the accuracy of the procedure is the number of lags used to truncate the past history of the process: considering the past up to lag l corresponds to calculating the autocovariance of the process (12) up to the matrix $\Gamma_l$. As a rule of thumb, given that the autocovariance of a vector AR process decays exponentially with the lag, with a rate of decay depending on the modulus of the largest eigenvalue of $\mathbf{A}^p$, $\rho(\mathbf{A})$, it has been suggested to compute the autocovariance up to a lag l such that $\rho(\mathbf{A})^l$ is smaller than a predefined numerical tolerance [37]. We have found that computation of very long autocovariance sequences is not necessary for the purpose of evaluating information dynamics, because all measures stabilize to constant values already at small lags (typically l = 10), even for reasonably high values of $\rho(\mathbf{A})$ [39–41].
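Following this rule of thumb, the truncation lag can be derived from the spectral radius of the companion matrix Ap built in the sketch above; the tolerance value here is an arbitrary example:

```python
# Smallest l such that rho(A)^l < tol (tol = 1e-8 is an assumed example value).
rho = np.max(np.abs(np.linalg.eigvals(Ap)))
l = int(np.ceil(np.log(1e-8) / np.log(rho)))
```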

3. Simulation Study

In this section we investigate, by means of simulations, the behavior of the measures of information dynamics as the causal statistical structure of bivariate processes is varied. In order to make the interpretation free of issues related to the practical estimation of the measures, we simulate Gaussian AR processes and exploit the procedure described in Section 2.4 to quantify information dynamics.

3.1. Linear AR Bivariate Process

In the first simulation we consider the bivariate process of order 2 defined as:
$$X_n = a X_{n-2} + d Y_{n-1} + \varepsilon_n, \quad Y_n = b Y_{n-2} + c X_{n-1} + \xi_n, \qquad (17)$$
where $\varepsilon_n$ and $\xi_n$ are independent Gaussian white noise processes with zero mean and unit variance. The causal statistical structure of the process (17) is determined by the autodependency effects in X and Y, modulated by the parameters a and b, and by the causal interactions between X and Y, modulated by the parameters c and d. In this study we considered situations in which one of these parameters is forced to zero and the others are left free to vary in the range 0–0.5. These situations reflect four scenarios characterized by absence of internal dynamics in the process X (a = 0) or in the process Y (b = 0), and absence of causal interactions from X to Y (c = 0) or from Y to X (d = 0). The causal structures resulting in the four scenarios are conveniently represented in Figure 3, both in the form of time series graphs showing all time-lagged effects and in a more condensed form reporting only the causal relations between the past and present of the two processes.
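As an illustration, the process (17) can be written in the VAR form (12) and passed to the info_dynamics function sketched in Section 2.4; the parameter values below are arbitrary points within the simulated ranges:

```python
import numpy as np

# Equation (17) as a VAR(2): S_n = A_1 S_{n-1} + A_2 S_{n-2} + [eps_n, xi_n]^T
a, b, c, d = 0.4, 0.3, 0.5, 0.0      # example point in the d = 0 scenario
A1 = np.array([[0.0, d],
               [c,   0.0]])
A2 = np.array([[a,   0.0],
               [0.0, b]])
Lam = np.eye(2)                      # unit-variance, uncorrelated noises

P, S, T, C, cS = info_dynamics([A1, A2], Lam, l=10)
# Expected from Section 2.3: P = S + T = C + cS, with T > 0 since c > 0,
# and with d = 0 the CE quantifies natural causal effects from X to Y.
```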
Figure 4 reports the trends of the various measures of information dynamics computed while varying the simulation parameters in the four scenarios, taking Y as the target process. First, we observe that the measures reflect the properties stated in Section 2.3. The PE is always nonzero, except for the combinations of the parameters such that the target process is fully isolated (b = c = 0, indicating absence of internal dynamics in Y and of causal interactions from X to Y; Figure 4b,c). Whenever the predictive information is nonzero, it is of interest to investigate how the PE splits into contributions related to information dynamics. The simulation confirms that the internal dynamics of the target process are assessed by the internal information, as we observe zero cSE whenever b = 0 (e.g., Figure 4b). Similarly, the causal interactions from driver to target are assessed by the information transfer, as we observe zero TE whenever c = 0 (e.g., Figure 4c). Note that, in this simulation without deterministic relations between the two processes, there is full correspondence between internal dynamics and internal information, and between causal interactions and information transfer (i.e., $S_{Y|X} = 0$ if and only if b = 0, and $T_{X \to Y} = 0$ if and only if c = 0). The simulation also confirms that the information storage quantified by the SE reflects the presence of internal dynamics in Y but also that of causal interactions from X to Y (e.g., in Figure 4b we find $S_Y > 0$ even with b = 0), and that the cross information quantified by the CE reflects causal interactions from X to Y but also common effects of the past of Y on its own present and on the past of X (e.g., in Figure 4c we find $C_{X \to Y} > 0$ even with c = 0).
Besides the detection of causal connections in the observed system, another relevant issue is to investigate to what extent the measures of information dynamics properly reflect the impact of the causal connections on the dynamics of the observed system. The trends reported in Figure 4 document that a straightforward interpretation of the variations in magnitude of a single measure in terms of the underlying mechanism is not possible in general, though it can be aided by the knowledge of some conditions and by the combined analysis of the various measures. Looking at Figure 4 we see that the SE and the CE may vary as a function of any simulation parameter, and thus cannot be related to a specific mechanism. While this is explainable from the interpretation of information storage and cross information as quantities that accommodate different types of statistical dependence, we note that also the cSE and the TE, which more specifically refer to internal dynamics and causal interactions, may vary with parameters other than the relevant expected one. In fact, the TE may vary not only as a function of the strength of the causal interactions from driver to target (parameter c) but also with changes in the internal dynamics of the driver process (e.g., in Figure 4b,d $T_{X \to Y}$ increases with a); this reflects the fact that $T_{X \to Y}$ measures the relation between $X_n^-$ and $Y_n$ and thus is affected both by the causal effect $X_n^- \to Y_n$ and by dynamical changes in $X_n^-$. Similarly, the cSE may vary not only as a function of the strength of the internal dynamics in the target process (parameter b), but also with changes in the coupling from target to driver (e.g., in Figure 4a,c $S_{Y|X}$ decreases as d increases); this reflects the fact that $S_{Y|X}$ measures the relation between $Y_n^-$ and $Y_n$ and thus is affected both by the causal effect $Y_n^- \to Y_n$ and by changes in the dynamical interaction between $Y_n^-$ and $X_n^-$. These findings confirm previous results indicating that the TE is sensitive to internal changes in the individual system components [15], and extend them to the indication that the cSE is sensitive to the connectivity between components. Nevertheless, as a reassuring result we observe that the TE and cSE keep, in some sense, separate the analysis of internal dynamics in the target and of causal interactions from source to target, since $T_{X \to Y}$ is not affected by the internal dynamics of Y and $S_{Y|X}$ is not affected by the causal effects from X to Y. Moreover, we find that whenever $T_{X \to Y}$ was stable the parameter c was unvaried, and whenever $S_{Y|X}$ was stable the parameter b was unvaried. This suggests that observing unchanged information transfer or unchanged internal information across conditions can be used to indicate, respectively, that the causal interactions from driver to target did not vary, or that the internal dynamics in the target did not vary. Moreover, in our examples all variations observed in the TE and the cSE were monotonic in the relevant parameter. However, in general this result has to be taken with caution, since it has been shown that the monotonic behavior may be lost in conditions close to determinism [15,42].
To conclude this section, we note that there are specific conditions under which one of the two possible decompositions of the predictive information should be preferred to the other. In the presence of unidirectional interactions from driver to target (Figure 3d), these interactions are measured as natural causal effects by the CE, while the cSE varies only with the internal dynamics in the target (Figure 4d). In the presence of unidirectional interactions from target to driver (Figure 3c), the TE is always zero and the SE captures all the internal dynamics in the target process in terms of natural causal effects (Figure 4c). In the two other situations explored in the simulation the interpretation is less straightforward because none of the measures closely reflects natural causal effects. However, the situation with absent internal dynamics in the target (Figure 3b) is reasonably represented with CE and cSE detecting the absence of internal information and ascribing all variations to the cross-information (Figure 4b). Finally, when both bidirectional interactions and internal dynamics in the target are present (Figure 3a) it seems that combining the cSE and the TE may be useful to infer variations related to the causal interactions and the target internal dynamics.

3.2. Simulated Cardiovascular Dynamics

In the second simulation we consider a bivariate process specifically designed to reproduce the dynamics of RV and HPV and their interactions. The process is defined as [41]:
$$R_n = a_1 R_{n-1} + a_2 R_{n-2} + \varepsilon_n, \quad HP_n = \sum_{k=1}^{4} b_k HP_{n-k} + c \, (R_n - R_{n-1}) + \xi_n, \qquad (18)$$
where the processes R and HP represent respectively RV and HPV, and $\varepsilon_n$ and $\xi_n$ are independent Gaussian white noises with zero mean and unit variance. The autodependency effects are set to generate autonomous oscillations in the two processes at the frequencies typical of cardiorespiratory variability. This was obtained by placing pairs of complex-conjugate poles, of modulus ρ and phase 2πf, in the complex plane representation of the processes. Specifically, very low frequency (VLF) and low frequency (LF) oscillations are obtained for the simulated HPV setting poles with $\rho_{VLF}$ = 0.2, $f_{VLF}$ = 0.03 and $\rho_{LF}$ = 0.8, $f_{LF}$ = 0.1 for the process HP, and high frequency (HF) oscillations are obtained for the simulated RV setting poles with $\rho_{HF}$ = 0.9, $f_{HF}$ = 0.3 for the process R. The AR coefficients resulting from this setting are $a_1$ = −0.556, $a_2$ = −0.81, $b_1$ = 1.687, $b_2$ = −1.189, $b_3$ = 0.303, $b_4$ = −0.026. Then, causal interactions are set from R to HP at lags 0 and 1, simulating respectively fast (within-beat) and one-beat-delayed coupling from RV to HPV; this simulated cardiorespiratory coupling was weighted by the parameter c.
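The mapping from pole placement to AR coefficients is standard: each complex-conjugate pole pair (ρ, f) contributes a factor $1 - 2\rho \cos(2\pi f) z^{-1} + \rho^2 z^{-2}$ to the AR polynomial, and the product of these factors yields the coefficients. A small sketch (function name ours) that reproduces the values quoted above:

```python
import numpy as np

def ar_from_poles(pairs):
    """AR coefficients b_k (S_n = sum_k b_k S_{n-k} + noise) from a list of
    complex-conjugate pole pairs (rho, f)."""
    poly = np.array([1.0])
    for rho, f in pairs:
        poly = np.convolve(poly, [1.0, -2 * rho * np.cos(2 * np.pi * f), rho ** 2])
    return -poly[1:]   # b_k is minus the coefficient of z^-k

b = ar_from_poles([(0.2, 0.03), (0.8, 0.1)])  # HPV: [1.687, -1.189, 0.303, -0.026]
a = ar_from_poles([(0.9, 0.3)])               # RV:  [-0.556, -0.810]
```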
The realistic power spectral densities of RV and HPV that result as some simulation parameters are varied are shown in Figure 5. In particular, we considered two settings for the parameter variations, of which the first was designed to reproduce a shift in the sympatho-vagal balance toward sympathetic activation and vagal deactivation (Figure 5a). In this case we increased the parameters $\rho_{VLF}$ and $\rho_{LF}$ proportionally to a parameter b, to simulate a rise in the VLF and LF oscillations of HPV, and simultaneously decreased the parameter c to simulate a progressive weakening of the cardiorespiratory coupling. Figure 5a illustrates how these changes in the parameters were reflected by the measures of information dynamics composing the predictive information about the process HP. We see that the information storage measured by the SE is always significant, as it measures a statistical dependence between $HP_n$ and $HP_n^-$ that mixes together the causal interactions from RV to HPV (common driver effect $HP_n^- \leftarrow R_n^- \to HP_n$, prevalent at low values of b) and the internal dynamics of HPV (direct effect $HP_n^- \to HP_n$, prevalent at high values of b). As a consequence, $S_{HP}$ does not exhibit a monotonic behavior as the parameter b varies. On the contrary, the internal information measured by the cSE reflects only the strength of the internal dynamics in the simulated HPV, so that $S_{HP|R}$ is zero with b = 0 and increases monotonically with b. Moving to the information transfer, we see that the TE $T_{R \to HP}$ decreases monotonically as b increases, assuming the highest value at b = 0, when the simulated cardiorespiratory coupling is maximal, and reaching zero at b = 1, when the cardiorespiratory coupling vanishes. Nevertheless, in this simulation without causal interactions from target to driver, the CE also reflected well the variations in the cardiorespiratory coupling, with values of $C_{R \to HP}$ encompassing a wider range of variation, from $C_{R \to HP} = P_{HP}$ measured at b = 0 to $C_{R \to HP} = 0$ measured at b = 1.
With the second parameter setting we simulated a change in the breathing frequency by decreasing the parameter $f_{HF}$ progressively from 0.3 Hz to 0.1 Hz (Figure 5b). The information storage measured by $S_{HP}$ increased substantially at decreasing $f_{HF}$, reflecting the progressive entrainment of the LF and HF oscillations of HPV that makes the process HP more predictable. This increasing storage was due to the common driver effect $HP_n^- \leftarrow R_n^- \to HP_n$ rather than to the direct effect $HP_n^- \to HP_n$, because with the imposed variations in the respiration frequency only the internal dynamics of the driver process R were altered. As a consequence, the internal information measured by the cSE remained constant at varying $f_{HF}$, correctly reflecting the fact that the internal dynamics of the target process HP were kept unchanged. The information transfer measured by the TE showed a decrease with the respiratory frequency which, though slight, is not compatible with the unaltered cardiorespiratory coupling; this result reflects a similar situation shown in Section 3.1, where in some circumstances the TE was found to vary with the internal dynamics of the driving process. The CE showed the opposite behavior, i.e., it increased substantially at decreasing $f_{HF}$; this behavior can be more reasonably explained, in terms of natural causal effects from the autonomous driver process R to the target process HP, considering that the enhanced internal dynamics of R are measured through a higher cross information.

4. Application to Cardiorespiratory Variability

This Section concerns the practical computation of information dynamics in cardiorespiratory time series. The analysis is focused on the decomposition of the predictive information about heart period dynamics, aimed at describing the sources of statistical dependence related to cardiac and respiratory contributions, and is performed during two experimental protocols which are known to evoke different types of neuroautonomic modulation, i.e., head-up tilt and paced breathing.

4.1. Experimental Protocols and Data Analysis

We considered two experimental protocols involving young healthy subjects: a head-up tilt protocol (HUT, 15 subjects—seven females and eight males, aged from 22 to 32 years, median 25 years) and a paced breathing protocol (PB, 19 subjects—11 females and eight males, aged from 27 to 35 years, median 31 years) [43,44]. In both protocols, the recorded signals were the surface ECG (lead II) and the respiratory flow measured by a nasal thermistor. During HUT, the signals were recorded with subjects breathing spontaneously in two different conditions: in the resting supine position (SU) and in the 60° upright position (UP), which was reached passively using a motorized table. During PB, the recording sessions included four conditions in which the subjects were lying in the resting supine position: the first session with spontaneous respiration (SR) was followed by three sessions, in random order, with the subject breathing according to a metronome at 10, 15, and 20 breaths/min (R10, R15, R20).
In the two protocols, the analysis of each experimental condition started about two minutes after the beginning of the experiment. HPV and RV were measured respectively as the sequence of consecutive heart period (HP) durations, approximated as the time distance between two consecutive R-wave apexes of the ECG (series HP), and as the values of the respiratory nasal airflow signal sampled at each R-peak of the ECG (series R). The adopted measurement convention was such that the n-th respiration sample, $R_n$, was taken at the onset of the n-th cardiac interval, $HP_n$. In accordance with this convention, instantaneous (i.e., non-delayed) effects from $R_n$ to $HP_n$ were allowed in the analysis of information dynamics. For each protocol, synchronous sequences of N beats were selected in each condition according to the guidelines of short-term cardiovascular variability analysis [45] (N = 300 for HUT and N = 256 for PB). The sequences were linearly detrended and reduced to zero mean. Then, the measures of information dynamics were computed as outlined in Section 2.4. Specifically, a bivariate AR model was fitted on each pair of series using least-squares estimation, optimizing the model order p by the Bayesian Information Criterion applied to the regression of $HP_n$ on $\{HP_{n-1}, \ldots, HP_{n-p}, R_n, R_{n-1}, \ldots, R_{n-p}\}$ [46]; then, the estimated model parameters were used to compute the autocovariance sequence of the bivariate process, from which the PE, SE, TE, CE and cSE were estimated using l = 20 past lags to approximate the past history of the process.
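As an illustration of the order-selection step, the following sketch applies the BIC to the regression stated above; the series names, the search range p_max and all implementation details are our assumptions, not the authors' code:

```python
import numpy as np

def bic_order(hp, r, p_max=14):
    """Select the AR order by the BIC applied to the regression of HP_n on
    {HP_{n-1..n-p}, R_n, R_{n-1..n-p}} (hp, r: detrended zero-mean arrays)."""
    best_p, best_bic = 1, np.inf
    for p in range(1, p_max + 1):
        idx = np.arange(p, len(hp))
        X = np.column_stack([hp[idx - k] for k in range(1, p + 1)] +
                            [r[idx - k] for k in range(0, p + 1)])
        y = hp[idx]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        res = y - X @ beta
        bic = len(y) * np.log(np.mean(res ** 2)) + X.shape[1] * np.log(len(y))
        if bic < best_bic:
            best_p, best_bic = p, bic
    return best_p
```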
During HUT, the statistical analysis was performed using the Wilcoxon signed rank test to assess the significance of the differences of each information measure between SU and UP. During PB, the statistical significance of the differences of each information measure across the four conditions (SR, R10, R15, R20) was assessed using the Kruskal-Wallis analysis of variance, followed by post-hoc pairwise tests performed through multiple comparisons with critical values set according to the Tukey honestly significant difference criterion. A p < 0.01 was considered statistically significant.

4.2. Results and Discussion

The results of the entropy decomposition applied to the HPV and RV series measured during the HUT protocol are shown in Figure 6. The figure shows that the two possible decompositions of the predictive information yield concordant results for this application. The PE about the target process HP increased significantly with the transition from SU to UP, and this was the result of an increase in the amount of information that could be predicted from its own past (measured either through the SE or the cSE) that was not compensated by the decrease in the amount of information that could be predicted from the past of the driver process R (measured either through the TE or the CE). The increase with tilt of the information stored in the cardiac system is a known behavior in cardiovascular variability, which has been observed in terms of an increased regularity or a reduced complexity of HPV [47,48]. In this study we show that the internal information measured by the cSE increases concurrently with the information storage, and such a concordance suggests that tilt is associated with changes in the internal dynamics of the cardiac system. This is also physiologically plausible as, in our context, higher internal information of HPV may be associated with an enhancement of regulation mechanisms unrelated to respiration. According to the known cardiovascular physiology, these mechanisms involve the activation of the sympathetic nervous system commonly evoked by tilt [1]. In addition, the concordance between the variations of information transfer and cross information also seems explanatory of an underlying mechanism, in this case the cardiorespiratory coupling. Indeed, the significant decrease of both the TE and the CE observed moving from SU to UP suggests that the strength of the causal interaction from RV to HPV is reduced after tilt. Physiologically, the lower impact of the causal connection from RV to HPV on the cardiac dynamics observed in the UP position may be explained by a dampening of respiratory sinus arrhythmia, likely due to the lower involvement of the vagal contribution to HPV and the reduced cardiorespiratory coupling in this body position [6,29,49]. The trends observed in Figure 6 for the real RV and HPV series are compatible with the shift of the sympatho-vagal balance toward sympathetic activation and parasympathetic deactivation shown for the simulated processes in Figure 5a.
During the PB protocol, the PE of HPV showed a tendency to increase progressively as the breathing rate decreased (Figure 7). This result, documenting a higher overall predictability of the cardiac dynamics in conditions of slow PB, can be explained physiologically by considering that the respiratory sinus arrhythmia tends to be enhanced during forced ventilation at low breathing rates [50]. In this case, the two entropy decompositions yielded different interpretations about how the predictable dynamics in the target process HP arise from its own past and from the past of the driver process R. Using the first decomposition, the significantly higher predictive information observed during R10 compared to SR and R20 was ascribed to similar variations in the information storage measured by the SE, while the information transfer measured by the TE did not change across conditions. Using the second decomposition, the PE variations were ascribed to the cross information measured by the CE, while the internal information measured by the cSE was substantially unaltered. This apparent discrepancy can be settled by considering the meaning of the different measures of information dynamics, in particular remarking that, while the TE and cSE are more closely related to causal interactions and internal dynamics, the CE and SE more often reflect other sources of statistical dependence. Our theoretical results have shown that finding unchanged TE (or, respectively, unchanged cSE) across conditions means that the causal interactions from driver to target (or, respectively, the internal dynamics of the target) are left unvaried by the change of conditions. Translating this interpretation to the results of the PB analysis, the finding that the TE and the cSE do not vary across conditions lets us suppose that both the causal interactions from RV to HPV and the internal dynamics of HPV may not be affected significantly by the PB protocol. A parameter strongly changing in this protocol is of course the breathing frequency, which is related to the internal dynamics of the driver process R and, as such, may affect both the SE of the target process HP and the CE from R to HP. This has been clearly shown in our simulations, where higher values of $S_{HP}$ and of $C_{R \to HP}$ were measured when simulating a decrease in the respiratory frequency (Figure 5b). Therefore, we hypothesize that the higher predictive information about HPV induced by slow PB may not be the result of stronger causal interactions from RV to HPV, or of stronger internal dynamics of HPV. Rather, this higher predictive information could be due to the progressive entrainment of the typical LF and HF oscillations of HPV resulting from the decrease of the breathing frequency, which is reflected by an increased information storage using the classical entropy decomposition based on SE and TE, and by an increased cross information using the alternative decomposition based on CE and cSE. Such an entrainment, which is supposed to enhance the oscillatory characteristics of HPV, might also contribute to strengthening the coupling of the cardiac and respiratory oscillators, which has been clearly documented using phase dynamic models of cardiorespiratory interactions [7,8,51] and was confirmed also when continuously slowing the frequency of paced breathing [7,52].
According to these interpretations, the same physiological phenomenon, i.e., the increased respiratory sinus arrhythmia observed during paced breathing at slow breathing rates, may be seen in terms of an increased coupling function from the respiratory to the cardiac oscillator using phase dynamics, and in terms of an enhanced information storage in the cardiac system induced by alterations of the respiratory driver using information dynamics. This latter interpretation confirms on physiological data our theoretical result indicating that, contrary to some intuitive belief, the information storage reflects not only the internal dynamics of the target process but also the causal interactions from driver to target. Finally, the interpretation of these results should consider that a possible role of latent variables cannot be excluded. Indeed, the present analysis did not account for the baroreflex control of heart rate. Since R is exogenous for HP, possible modifications of HP in response to arterial pressure changes are likely to inflate the terms describing information storage and internal information.

5. Discussion

The present work was focused on studying how the temporal evolution of a dynamical system can be described as resulting from its own internal dynamics and from the dynamics of another system possibly connected to it. To this end, we have analyzed the different information-theoretic measures that result from the decomposition of the predictive information about the target system. In a bivariate system the predictive information can be decomposed in two alternative ways, both expanding the overall entropy reduction that knowledge of the system's past brings about the present state of the target into the sum of a MI term (the SE or the CE) and a conditional MI term (the TE or the cSE). Our theoretical derivations indicate that the SE and the CE, being formulated as a MI, incorporate both causal and non-causal sources of statistical dependence. As a consequence, the concepts of information storage and cross information are useful to quantify overall dynamic dependencies, but cannot be exploited to infer the connections between coupled dynamic processes. On the contrary, internal information and information transfer are concepts more closely related to the causal statistical structure of the observed coupled stochastic processes, with the cSE and the TE reflecting, respectively, the internal dynamics of the target process and the causal interactions from the driver to the target process. Indeed, our analysis performed on benchmark AR processes showed that these measures vanish in the absence of causal statistical dependencies, and that in general their magnitude reflects the strength of these dependencies. Notwithstanding this, the causal interpretation of internal information and information transfer is limited by the fact that finding zero TE/cSE, or finding changes of TE/cSE across experimental conditions, are conditions sufficient but not necessary to conclude that the related causal dependence is absent or is changing with the condition. In particular, the interpretation of TE and cSE should proceed carefully when the dynamic processes under investigation exhibit low degrees of stochasticity, since the results of the present and previous studies [15,25] indicate that the measures of information dynamics based on conditional MI tend to degenerate when the temporal evolution of the observed systems is close to determinism. More generally, it should be kept in mind that the measures derived in the frame of information dynamics, like any other approach to statistical causal modeling based on the probabilistic notion of Wiener-Granger causality, are designed to reflect the effect that the causal connections have on the dynamics of the observed system, rather than the effective mechanism generating the observed data [53].
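For reference, the two decompositions at issue can be written compactly using the chain rule for mutual information; the following is a notational sketch consistent with Table 1, where Xn⁻ and Yn⁻ denote the past histories of the driver and target processes:

```latex
% Two chain-rule expansions of the predictive information about the target Y
% (X_n^- and Y_n^- denote the past histories of the driver and target processes)
\begin{align*}
P_Y &= I(Y_n;\, X_n^-, Y_n^-) \\
    &= \underbrace{I(Y_n;\, Y_n^-)}_{S_Y \text{ (information storage)}}
     + \underbrace{I(Y_n;\, X_n^- \mid Y_n^-)}_{T_{X \to Y} \text{ (information transfer)}} \\
    &= \underbrace{I(Y_n;\, X_n^-)}_{C_{X \to Y} \text{ (cross information)}}
     + \underbrace{I(Y_n;\, Y_n^- \mid X_n^-)}_{S_{Y|X} \text{ (internal information)}}
\end{align*}
```

The first expansion conditions on the target's own past first (SE plus TE), the second on the driver's past first (CE plus cSE); both sum exactly to the same predictive information PY.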
Our analysis confirms the fundamental assertion of [26] stating that, while the existence of causal connections can always be probed from the observed dynamic processes, which in our case is done by testing for conditional independence in terms of nonzero TE or cSE, a meaningful quantification of the interaction strength is not always possible, and depends on the topology of the causal connections in the observed system. In particular, when all mechanisms of internal dynamics and causal interactions are simultaneously active, the impact of these mechanisms on the system dynamics cannot be quantified in terms of cause-and-effect [26]. In such a case, the measures of information dynamics can explain only in part the dynamic properties of individual system components; e.g., they are useful to probe the existence or the stability in strength of a causal connection, but not to quantify the causal effect in absolute terms. Nevertheless, a full interpretation of the system dynamics in terms of natural causal effects between components is possible when the causal statistical structure of the observed joint process is constrained to specific topologies. We found that this is the case for unidirectional interactions, for which a proper description of the temporal dynamics of the target system is achieved by the proposed information decomposition strategies. In particular, in the presence of unidirectional interactions from driver to target, the decomposition of PE into cross information and internal information achieves a better separation between the sources of statistical dependence generating causal interactions from driver to target and those generating internal dynamics in the target. This was observed in the simulation study, where in this case the parameter changes were reflected in a straightforward way by the CE and cSE measures (Figure 4d, Figure 5), and was then verified in the analysis of real cardiorespiratory interactions, which are most likely unidirectional from RV to HPV [29,43]. Remarkably, the patterns of information dynamics estimated for the real HPV and RV time series during the two considered experimental protocols (Figures 6 and 7) resembled those reported in the realistic simulation of cardiorespiratory dynamics (Figure 5). This supports the proposed physiological interpretations, according to which head-up tilt provokes a reorganization of the sympatho-vagal balance manifested in terms of stronger internal dynamics in the cardiac system and blunted causality from the respiratory system, whereas paced breathing does not alter these causal effects, but only alters information dynamics through the variations in the breathing frequency.
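To make the probing of causal connections concrete: under the linear Gaussian framework adopted here, testing for nonzero TE from X to Y reduces to a Granger-type F-test comparing the restricted regression (past of Y only) with the full regression (past of Y and X). The sketch below is our own illustrative code, not the authors' software; the function names, the uniform embedding of p lags and the zero-mean assumption are ours:

```python
import numpy as np
from scipy.stats import f as f_dist

def past_matrix(x, p):
    # Row n holds [x[n-1], ..., x[n-p]] for n = p, ..., N-1 (uniform embedding).
    return np.column_stack([x[p - 1 - k: len(x) - 1 - k] for k in range(p)])

def rss(target, regressors):
    # Residual sum of squares of the least-squares regression of target on regressors.
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    resid = target - regressors @ beta
    return resid @ resid

def te_f_test(x, y, p=5):
    """F-test of H0: TE(X->Y) = 0, i.e., the past of x adds no linear predictive power."""
    x, y = x - x.mean(), y - y.mean()
    Yp, Xp, yn = past_matrix(y, p), past_matrix(x, p), y[p:]
    rss_restricted = rss(yn, Yp)              # regression on the past of y only
    rss_full = rss(yn, np.hstack([Yp, Xp]))   # regression on the past of y and x
    dof = len(yn) - 2 * p                     # residual degrees of freedom, full model
    F = ((rss_restricted - rss_full) / p) / (rss_full / dof)
    return F, f_dist.sf(F, p, dof)            # test statistic and p-value
```

A small p-value rejects the conditional independence of Yn from Xn⁻ given Yn⁻, supporting a nonzero TE; consistently with the arguments above, a failure to reject does not by itself establish that the causal connection is absent.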
The decomposition of multivariate interactions has been the subject of intense research also in contexts other than the information-theoretic domain. In the framework of coupled oscillators, Iatsenko et al. [8] proposed to decompose the phase dynamics of bivariate systems into contributions reflecting the effects of the target oscillator on itself, the direct driving by the other oscillator, and more complicated coupling mechanisms dependent on the phases of both oscillators. This approach, devised specifically for the study of the cardiac and respiratory oscillators, showed how cardiorespiratory interactions evolve with the process of aging [8]. In the frequency domain, the vector autoregressive parametrization of multivariate processes allows the formalization of a spectral decomposition evidencing directed transfers of power [46], with each decomposition term describing a specific transfer function that includes direct effects, indirect effects, and interference effects [54]. Interestingly, all these approaches to the decomposition of multivariate interactions evidence the difficulty of providing a thorough separation of the causal sources of statistical dependence for the observed dynamics: the phase and frequency domain approaches make use of decomposition terms that account for the combined effects of different causal sources, while our information dynamics approach shows that the causal connections may contribute simultaneously to both components of the predictive information (i.e., to information storage and information transfer, or to cross information and internal information).
The framework proposed in this study for the assessment of information dynamics was developed assuming stationarity of the considered bivariate process. While this allowed us to drop the dependence of the measures of information dynamics on the time index n (see Equations (1)–(7)), the generalization to a non-stationary framework is theoretically straightforward, and may be achieved, e.g., according to the formulations presented in [15]. Moreover, being devised within the model-free context of information theory, the framework holds for the analysis of virtually any type of linear and nonlinear dynamics. As regards the practical computation of information dynamics from time series data, in this study we built on previous derivations [37,38] to devise an estimation approach based on the linear parametric representation of multivariate Gaussian processes. The analytical computation of all measures appearing in the decomposition of the predictive information (Equation (11)) was exploited in this study to isolate the fundamental properties of information dynamics from any estimation bias, thus making it possible to investigate with high reliability how the PE, SE, TE, CE and cSE depend on the causal statistical structure of the observed dynamic system. In the analysis of unknown systems, the estimation of information dynamics relies on the identification of a vector AR model. While this eases the estimation task considerably, when the data distribution departs from Gaussianity the formulations in Equation (11) become approximate expressions of information dynamics, and the adopted estimator may miss dependence structures that originate from nonlinear dynamics. In such cases it is appropriate to resort to model-free computation methods, preferably those recently devised to tackle the difficult task of non-parametric entropy estimation in high dimensions [16,55,56] or for non-stationary data [57]. Moreover, since vector AR identification may yield unreliable parameter estimates in the presence of noisy data and non-stationary dynamics, variants of the traditional least squares estimators (e.g., exploiting Kalman filtering to increase robustness [58] or to track time-varying behaviors [59]) should be considered when these aspects are deemed significant. In the cardiorespiratory data analyzed in this study, good signal-to-noise ratios and stationarity within the observed windows were guaranteed by careful experimental settings and by time series measurement and editing, and the linear Gaussian approximation was supported by the knowledge that a conspicuous amount of cardiorespiratory variability can be explained by linear interaction models [6,60]. However, since nonlinear dynamics have been proposed as a possible determinant of short-term cardiac and respiratory variability [61], future studies should assess their contribution to cardiorespiratory information dynamics, comparing the entropy decompositions based on linear regression with those computed through model-free approaches to entropy estimation [29,35,40,55]. Further, given that the proposed framework can be readily extended to the description of multivariate processes, the introduction of new variables (e.g., arterial blood pressure and peripheral resistance) would allow one to account for additional regulatory mechanisms such as the baroreflex and vasomotion, thus taking fuller advantage of the ability of the proposed analysis to disentangle components of the cardiovascular control while accounting for confounding factors.
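As an illustration of this linear Gaussian route, the following minimal Python sketch (again our own illustrative code, not the authors' implementation; it assumes zero-mean stationary series and a uniform embedding of p lags, whereas the study uses full vector AR identification) obtains all five measures from the residual variances of three least-squares regressions, so that the identities PY = SY + TX→Y = CX→Y + SY|X hold exactly by construction:

```python
import numpy as np

def past_matrix(x, p):
    # Row n holds [x[n-1], ..., x[n-p]] for n = p, ..., N-1 (uniform embedding).
    return np.column_stack([x[p - 1 - k: len(x) - 1 - k] for k in range(p)])

def resid_var(target, regressors):
    # Variance of the least-squares residual = Gaussian conditional variance estimate.
    beta, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return np.var(target - regressors @ beta)

def info_decomposition(x, y, p=5):
    """Linear Gaussian estimates (in nats) of PE, SE, TE, CE, cSE for driver x, target y."""
    x, y = x - x.mean(), y - y.mean()
    Xp, Yp, yn = past_matrix(x, p), past_matrix(y, p), y[p:]
    v0 = np.var(yn)                                # var(Yn)
    v_y = resid_var(yn, Yp)                        # var(Yn | Yn^-)
    v_x = resid_var(yn, Xp)                        # var(Yn | Xn^-)
    v_xy = resid_var(yn, np.hstack([Yp, Xp]))      # var(Yn | Xn^-, Yn^-)
    return {"PE": 0.5 * np.log(v0 / v_xy),         # predictive information
            "SE": 0.5 * np.log(v0 / v_y),          # information storage
            "TE": 0.5 * np.log(v_y / v_xy),        # information transfer
            "CE": 0.5 * np.log(v0 / v_x),          # cross information
            "cSE": 0.5 * np.log(v_x / v_xy)}       # internal information

# Toy check with a unidirectionally coupled pair (x drives y): SE, TE, CE and cSE > 0.
rng = np.random.default_rng(1)
x = rng.standard_normal(4000)
y = np.zeros_like(x)
for n in range(1, len(x)):
    y[n] = 0.8 * y[n - 1] + 0.5 * x[n - 1] + 0.1 * rng.standard_normal()
print(info_decomposition(x, y))
```

Since each measure is a difference of log conditional variances, the two decompositions of PY sum exactly by telescoping, mirroring the analytical property exploited in the simulations above.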
In conclusion, this study showed that, while a close correspondence between causal effects and the measures of information dynamics cannot be established in general, the proposed comprehensive framework, in which different measures are evaluated together, is helpful to characterize the statistical structure of complex systems exhibiting coupling behaviors as well as internal regulation. The combined analysis of information storage and information transfer on the one side, and of cross information and internal information on the other side, may indeed be necessary to unravel complex dynamical dependencies resulting from multiple causation mechanisms. In the context of cardiorespiratory dynamics, this approach led us to interpret a similar behavior (i.e., the increased predictive information about heart rate variability measured both after head-up tilt and during paced breathing) as resulting from completely different physiological mechanisms (respectively, the counterbalanced alteration of cardiac dynamics and cardiorespiratory coupling, and the mere variation of the breathing frequency).

Author Contributions

Luca Faes conceived the study, designed the theoretical part, processed and analyzed the data, interpreted the data, drafted the article, and proof-read the final version prior to publication. Alberto Porta contributed to the theoretical developments, processed part of the data, interpreted the data, performed critical revision of the article, and proof-read the final version prior to publication. Giandomenico Nollo contributed to the discussion, performed critical revision of the article, and proof-read the final version prior to publication. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare that this research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
PACS Codes: 05.45.Tp; 87.19.lo; 02.50.Sk; 89.75.-k; 87.19.ug

References

1. Cohen, M.A.; Taylor, J.A. Short-term cardiovascular oscillations in man: Measuring and modelling the physiologies. J. Physiol. 2002, 542, 669–683.
2. Berntson, G.G.; Cacioppo, J.T.; Quigley, K.S. Respiratory sinus arrhythmia: Autonomic origins, physiological mechanisms, and psychophysiological implications. Psychophysiology 1993, 30, 183–196.
3. Valdes-Sosa, P.A.; Roebroeck, A.; Daunizeau, J.; Friston, K. Effective connectivity: Influence, causality and biophysical modeling. Neuroimage 2011, 58, 339–361.
4. Pearl, J. Causality: Models, Reasoning and Inference; Cambridge University Press: Cambridge, UK, 2000.
5. Friston, K.J.; Harrison, L.; Penny, W. Dynamic causal modelling. Neuroimage 2003, 19, 1273–1302.
6. Porta, A.; Bassani, T.; Bari, V.; Tobaldini, E.; Takahashi, A.C.M.; Catai, A.M.; Montano, N. Model-based assessment of baroreflex and cardiopulmonary couplings during graded head-up tilt. Comput. Biol. Med. 2012, 42, 298–305.
7. Stankovski, T.; Duggento, A.; McClintock, P.V.E.; Stefanovska, A. Inference of Time-Evolving Coupled Dynamical Systems in the Presence of Noise. Phys. Rev. Lett. 2012, 109, 024101.
8. Iatsenko, D.; Bernjak, A.; Stankovski, T.; Shiogai, Y.; Owen-Lynch, P.J.; Clarkson, P.B.M.; McClintock, P.V.E.; Stefanovska, A. Evolution of cardiorespiratory interactions with age. Phil. Trans. Royal Soc. A 2013, 371, 20110622.
9. Chicharro, D.; Panzeri, S. Algorithms of causal inference for the analysis of effective connectivity among brain regions. Front. Neuroinf. 2014, 8.
10. Wiener, N. The Theory of Prediction; McGraw-Hill: New York, NY, USA, 1956.
11. Granger, C.W.J. Economic processes involving feedback. Inf. Control 1963, 6, 28–48.
12. Granger, C.W.J. Testing for causality: A personal viewpoint. J. Econom. Dynam. Control 1980, 2, 329–352.
13. Porta, A.; Faes, L. Assessing causality in brain dynamics and cardiovascular control. Phil. Trans. Royal Soc. A 2013, 371, 20120517.
14. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley: New York, NY, USA, 2006.
15. Chicharro, D.; Ledberg, A. Framework to study dynamic dependencies in networks of interacting processes. Phys. Rev. E 2012, 86, 041901.
16. Faes, L.; Porta, A. Conditional entropy-based evaluation of information dynamics in physiological systems. In Directed Information Measures in Neuroscience; Vicente, R., Wibral, M., Lizier, J.T., Eds.; Springer-Verlag: Berlin, Germany, 2014; pp. 61–86.
17. Lizier, J.T.; Prokopenko, M.; Zomaya, A.Y. Local measures of information storage in complex distributed computation. Inform. Sci. 2012, 208, 39–54.
18. Schreiber, T. Measuring information transfer. Phys. Rev. Lett. 2000, 85, 461–464.
19. Lizier, J.T. The Local Information Dynamics of Distributed Computation in Complex Systems; Springer: Berlin/Heidelberg, Germany, 2013.
20. Wibral, M.; Lizier, J.T.; Vogler, S.; Priesemann, V.; Galuske, R. Local active information storage as a tool to understand distributed neural information processing. Front. Neuroinf. 2014, 8, 1.
21. Faes, L.; Nollo, G.; Jurysta, F.; Marinazzo, D. Information dynamics of brain-heart physiological networks during sleep. New J. Phys. 2014, 16, 105005.
22. Faes, L.; Porta, A.; Rossato, G.; Adami, A.; Tonon, D.; Corica, A.; Nollo, G. Investigating the mechanisms of cardiovascular and cerebrovascular regulation in orthostatic syncope through an information decomposition strategy. Auton. Neurosci. 2013, 178, 76–82.
23. Lizier, J.T.; Pritam, S.; Prokopenko, M. Information Dynamics in Small-World Boolean Networks. Artif. Life 2011, 17, 293–314.
24. Lizier, J.T.; Prokopenko, M. Differentiating information transfer and causal effect. Eur. Phys. J. B 2010, 73, 605–615.
25. Wibral, M.; Vicente, R.; Lindner, M. Transfer entropy in neuroscience. In Directed Information Measures in Neuroscience; Vicente, R., Wibral, M., Lizier, J.T., Eds.; Springer-Verlag: Berlin, Germany, 2014; pp. 3–36.
26. Chicharro, D.; Ledberg, A. When two become one: The limits of causality analysis of brain dynamics. PLoS One 2012, 7, e32466.
27. Dahlhaus, R. Graphical interaction models for multivariate time series. Metrika 2000, 51, 157–172.
28. Runge, J.; Heitzig, J.; Petoukhov, V.; Kurths, J. Escaping the Curse of Dimensionality in Estimating Multivariate Transfer Entropy. Phys. Rev. Lett. 2012, 108, 258701.
29. Faes, L.; Marinazzo, D.; Montalto, A.; Nollo, G. Lag-specific transfer entropy as a tool to assess cardiovascular and cardiorespiratory information transfer. IEEE Trans. Biomed. Eng. 2014, 61, 2556–2568.
30. Faes, L.; Nollo, G.; Porta, A. Compensated transfer entropy as a tool for reliably estimating information transfer in physiological time series. Entropy 2013, 15, 198–219.
31. Runge, J.; Heitzig, J.; Marwan, N.; Kurths, J. Quantifying causal coupling strength: A lag-specific measure for multivariate time series related to transfer entropy. Phys. Rev. E 2012, 86, 061121.
32. Vlachos, I.; Kugiumtzis, D. Nonuniform state-space reconstruction and coupling detection. Phys. Rev. E 2010, 82, 016207.
33. Kraskov, A.; Stogbauer, H.; Grassberger, P. Estimating mutual information. Phys. Rev. E 2004, 69, 066138.
34. Faes, L.; Nollo, G.; Porta, A. Information-based detection of nonlinear Granger causality in multivariate processes via a nonuniform embedding technique. Phys. Rev. E 2011, 83, 051112.
35. Porta, A.; Faes, L.; Bari, V.; Marchi, A.; Bassani, T.; Nollo, G.; Perseguini, N.M.; Milan, J.; Minatel, V.; Borghi-Silva, A.; Takahashi, A.C.M.; Catai, A.M. Effect of Age on Complexity and Causality of the Cardiovascular Control: Comparison between Model-Based and Model-Free Approaches. PLoS One 2014, 9, e89463.
36. Barnett, L.; Barrett, A.B.; Seth, A.K. Granger causality and transfer entropy are equivalent for Gaussian variables. Phys. Rev. Lett. 2009, 103, 238701.
37. Barnett, L.; Seth, A.K. The MVGC multivariate Granger causality toolbox: A new approach to Granger-causal inference. J. Neurosci. Methods 2014, 223, 50–68.
38. Barrett, A.B.; Barnett, L.; Seth, A.K. Multivariate Granger causality and generalized variance. Phys. Rev. E 2010, 81, 041907.
39. Faes, L.; Montalto, A.; Nollo, G.; Marinazzo, D. Information decomposition of short-term cardiovascular and cardiorespiratory variability. In Proceedings of the 2013 Computing in Cardiology Conference (CinC), Zaragoza, Spain, 22–25 September 2013; pp. 113–116.
40. Faes, L.; Kugiumtzis, D.; Nollo, G.; Jurysta, F.; Marinazzo, D. Estimating the decomposition of predictive information in multivariate systems. Phys. Rev. E 2014, submitted for publication.
41. Faes, L.; Widjaja, D.; van Huffel, S.; Nollo, G. Investigating cardiac and respiratory determinants of heart rate variability in an information-theoretic framework. In Proceedings of the 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Chicago, IL, USA, 26–30 August 2014; pp. 6020–6023.
42. Kaiser, A.; Schreiber, T. Information transfer in continuous processes. Physica D 2002, 166, 43–62.
43. Faes, L.; Nollo, G.; Porta, A. Information domain approach to the investigation of cardio-vascular, cardio-pulmonary, and vasculo-pulmonary causal couplings. Front. Physiol. 2011, 2, 1–13.
44. Porta, A.; Bassani, T.; Bari, V.; Pinna, G.D.; Maestri, R.; Guzzetti, S. Accounting for Respiration is Necessary to Reliably Infer Granger Causality from Cardiovascular Variability Series. IEEE Trans. Biomed. Eng. 2012, 59, 832–841.
45. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Heart rate variability: Standards of measurement, physiological interpretation, and clinical use. Eur. Heart J. 1996, 17, 354–381.
46. Faes, L.; Erla, S.; Nollo, G. Measuring connectivity in linear multivariate processes: Definitions, interpretation, and practical analysis. Comput. Math. Methods Med. 2012, 140513.
47. Porta, A.; Guzzetti, S.; Montano, N.; Pagani, M.; Somers, V.; Malliani, A.; Baselli, G.; Cerutti, S. Information domain analysis of cardiovascular variability signals: Evaluation of regularity, synchronisation and co-ordination. Med. Biol. Eng. Comput. 2000, 38, 180–188.
48. Porta, A.; Guzzetti, S.; Montano, N.; Furlan, R.; Pagani, M.; Malliani, A.; Cerutti, S. Entropy, entropy rate, and pattern classification as tools to typify complexity in short heart period variability series. IEEE Trans. Biomed. Eng. 2001, 48, 1282–1291.
49. Faes, L.; Nollo, G.; Porta, A. Non-uniform multivariate embedding to assess the information transfer in cardiovascular and cardiorespiratory variability series. Comput. Biol. Med. 2012, 42, 290–297.
50. Van Diest, I.; Vlemincx, E.; Verstappen, K.; Vansteenwegen, D. The effects of instructed ventilatory patterns on physiological and psychological dimensions of relaxation. Presented at the 17th Meeting of the International Society for the Advancement of Respiratory Psychophysiology (ISARP), New York City, NY, USA, 26–27 September 2010.
51. Kralemann, B.; Fruhwirth, M.; Pikovsky, A.; Rosenblum, M.; Kenner, T.; Schaefer, J.; Moser, M. In vivo cardiac phase response curve elucidates human respiratory heart rate variability. Nat. Commun. 2013, 4.
52. Stankovski, T.; Cooke, W.H.; Rudas, L.; Stefanovska, A.; Eckberg, D.L. Time-frequency methods and voluntary ramped-frequency breathing: A powerful combination for exploration of human neurophysiological mechanisms. J. Appl. Physiol. 2013, 115, 1806–1821.
53. Barrett, A.B.; Barnett, L. Granger causality is designed to measure effect, not mechanism. Front. Neurosci. 2013, 7, 61–62.
54. Gigi, S.; Tangirala, A.K. Quantitative analysis of directional strengths in jointly stationary linear multivariate processes. Biol. Cybern. 2010, 103, 119–133.
55. Vicente, R.; Wibral, M.; Lindner, M.; Pipa, G. Transfer entropy—a model-free measure of effective connectivity for the neurosciences. J. Comput. Neurosci. 2011, 30, 45–67.
56. Kugiumtzis, D. Direct-coupling information measure from nonuniform embedding. Phys. Rev. E 2013, 87, 062918.
57. Wollstadt, P.; Martinez-Zarzuela, M.; Vicente, R.; Diaz-Pernas, F.J.; Wibral, M. Efficient Transfer Entropy Analysis of Non-Stationary Neural Time Series. PLoS One 2014, 9, e102833.
58. Labarre, D.; Grivel, E.; Berthoumieu, Y.; Todini, E.; Najim, M. Consistent estimation of autoregressive parameters from noisy observations based on two interacting Kalman filters. Signal Process. 2006, 86, 2863–2876.
59. Arnold, M.; Miltner, W.H.; Witte, H.; Bauer, R.; Braun, C. Adaptive AR modeling of nonstationary time series by means of Kalman filtering. IEEE Trans. Biomed. Eng. 1998, 45, 553–562.
60. Baselli, G.; Cerutti, S.; Badilini, F.; Biancardi, L.; Porta, A.; Pagani, M.; Lombardi, F.; Rimoldi, O.; Furlan, R.; Malliani, A. Model for the assessment of heart period and arterial pressure variability interactions and of respiration influences. Med. Biol. Eng. Comput. 1994, 32, 143–152.
61. Fortrat, J.O.; Yamamoto, Y.; Hughson, R.L. Respiratory influences on non-linear dynamics of heart rate variability in humans. Biol. Cybern. 1997, 77, 1–10.
Figure 1. (a) Causal structure of a bivariate dynamic process {X,Y}, evidencing internal dynamics of a single process (black arrows) and causal interactions from one process to the other (blue arrows). (b) Absence of internal dynamics in the process Y. (c) Absence of causal interactions from X to Y. (d) Absence of causal interactions from Y to X.
Figure 2. (a) Time series graph depicting the dynamics of a binary bivariate process {X,Y} assuming values 0 and 1 and generated by the deterministic relations Xn = 1–Xn-1, Yn = Xn-1, setting internal dynamics in X and causal interactions from X to Y. (b) The bivariate process is now generated by the relations Yn = 1–Yn-1, Xn = Yn-1, setting internal dynamics in Y and causal interactions from Y to X. Since the process outcomes are identical in the two cases, the measures of information dynamics relevant to the process Y are the same: PY = SY = CX→Y = H(Yn) = log2, TX→Y = SY|X = 0. Therefore, considering Y as the target process, we have causal interactions without information transfer in (a) and internal dynamics without internal information in (b).
Figure 3. Graphical representation of the bivariate AR process {X,Y} of Equation (17), imposing: (a) absence of internal dynamics in X (a = 0); (b) absence of internal dynamics in Y (b = 0); (c) absence of causal interactions from X to Y (c = 0); (d) absence of causal interactions from Y to X (d = 0). The causal structure of the process is represented with a detailed time series graph (top) and with a condensed graph showing only the interactions between the past and present of the two processes (bottom), indicating for each arrow the parameter that affects the causal interaction it depicts.
Figure 4. Information dynamics computed as a function of the parameters of the bivariate AR process {X,Y} of Equation (17), imposing: (a) absence of internal dynamics in X (a = 0); (b) absence of internal dynamics in Y (b = 0); (c) absence of causal interactions from X to Y (c = 0); (d) absence of causal interactions from Y to X (d = 0). In each condition, one of the three nonzero parameters is varied in the range 0–0.5 while keeping the other two parameters equal to 0.5. In the plots, the predictive information is decomposed either as the sum of information storage and information transfer (D1: PY = SY + TX→Y) or as the sum of cross information and internal information (D2: PY = CX→Y + SY|X).
Figure 5. Information dynamics computed for different parameter settings of the bivariate AR process {R,HP} of Equation (18), simulating: (a) a shift in the sympatho-vagal balance, obtained changing the parameter b from 0 to 1 and setting ρVLF = 0.2b, ρLF = 0.8b, c = 1–b; (b) a shift in the respiratory frequency, obtained changing the parameter fHF from 0.1 to 0.3. Left plots depict the profiles of the power spectral density of the simulated RV, SR(f), and HPV, SHP(f). Right plots depict the expansion of the predictive information as the sum of information storage and information transfer (D1: PHP = SHP + TR→HP) or as the sum of cross information and internal information (D2: PHP = CR→HP + SHP|R).
Figure 6. Information dynamics computed for HPV and RV series measured during the HUT protocol. Box plots depict the distributions over subjects of the predictive information of HPV (PHP), the information storage of HPV (SHP), the information transfer from RV to HPV (TR→HP), the cross information from RV to HPV (CR→HP), and the internal information of HPV (SHP|R), computed in the supine (SU) and upright (UP) conditions. * p<0.01 SU vs. UP.
Figure 7. Information dynamics computed for HPV and RV series measured during the PB protocol. Box plots depict the distributions over subjects of the predictive information of HPV (PHP), the information storage of HPV (SHP), the information transfer from RV to HPV (TR→HP), the cross information from RV to HPV (CR→HP), and the internal information of HPV (SHP|R), computed during spontaneous respiration (SR) and paced respiration at 10, 15 and 20 breaths/min. * p<0.01, ANOVA and post-hoc pairwise test.
Table 1. Measures of information dynamics.
Prediction Entropy (PE). Meaning: Predictive Information. Symbol: PY. Lower bound: Yn ⊥ (Xn⁻, Yn⁻) ⇔ PY = 0. Upper bound: Yn = f(Xn⁻, Yn⁻) ⇔ PY = H(Yn).
Self Entropy (SE). Meaning: Information Storage. Symbol: SY. Lower bound: Yn ⊥ Yn⁻ ⇔ SY = 0. Upper bound: Yn = f(Yn⁻) ⇔ SY = H(Yn).
Transfer Entropy (TE). Meaning: Information Transfer. Symbol: TX→Y. Lower bound: Yn ⊥ Xn⁻ | Yn⁻ ⇔ TX→Y = 0. Upper bound: Yn = f(Xn⁻, Yn⁻) ⇔ TX→Y = H(Yn|Yn⁻).
Cross Entropy (CE). Meaning: Cross Information. Symbol: CX→Y. Lower bound: Yn ⊥ Xn⁻ ⇔ CX→Y = 0. Upper bound: Yn = f(Xn⁻) ⇔ CX→Y = H(Yn).
Conditional Self Entropy (cSE). Meaning: Internal Information. Symbol: SY|X. Lower bound: Yn ⊥ Yn⁻ | Xn⁻ ⇔ SY|X = 0. Upper bound: Yn = f(Xn⁻, Yn⁻) ⇔ SY|X = H(Yn|Xn⁻).
