Proceeding Paper

Interpersonal Coordination Through Granger Causality Applied to AR Processes Modeling the Time Evolution of Low-Frequency Powers of RR Intervals †

by
Pierre Bouny
1,‡,
Eric Grivel
1,*,‡,
Roberto Diversi
2 and
Veronique Deschodt Arsac
1,‡
1
IMS laboratory CNRS UMR 5218, Bordeaux INP, 33400 Talence, France
2
Department of Electrical, Electronic, and Information Engineering “Guglielmo Marconi”, University of Bologna, 40126 Bologna, Italy
*
Author to whom correspondence should be addressed.
†  Presented at the 11th International Conference on Time Series and Forecasting, Gran Canaria, Spain, 16–18 July 2025.
‡  These authors contributed equally to this work.
Comput. Sci. Math. Forum 2025, 11(1), 30; https://doi.org/10.3390/cmsf2025011030
Published: 26 August 2025

Abstract

In this paper, interpersonal coordination is studied by analyzing physiological synchronization between individuals. To this end, a four-phase protocol is proposed to collect biosignals from the participants in each dyad. Then, the time evolution of the low-frequency (LF) power of the heart rate variability process for each participant is deduced. Finally, an approach based on a bivariate autoregressive model and Granger causality is proposed to determine whether a dependency exists between the biosignals. The approach is first applied to synthetic data and then to real data. This method has the advantage of providing explicit modeling of the dependency, which can help physiologists achieve better interpretation.

1. Introduction

A great deal of interest has been paid to interpersonal coordination over the last few decades. This type of coordination refers to the interdependence between two or more individuals and can be divided into two main concepts: behavioral mimicry and interpersonal synchronization [1]. Behavioral mimicry occurs when people behave in the same way within a relatively short space of time [2], while interpersonal synchronization is defined as the spontaneous rhythmic and temporal coordination of actions, emotions, thoughts, and physiological processes [3,4,5]. During the last few years, research teams have studied this synchronization based on motor or physiological biosignals. For instance, it has been shown that during a side-by-side or arm-in-arm walk, two individuals synchronize their locomotor patterns through a complexity matching process [6]. In particular, this process can restore the locomotion quality of elderly people by making them walk with healthy individuals [7].

The phenomenon of interpersonal synchronization is studied not only through motor behaviors, but also by exploring the relationships between the physiological dynamics of individuals. This measure is based on the recording of several biosignals to assess interactions within a group or dyad, i.e., a pair of subjects. Among these biosignals, investigations on the respiratory biosignal [8], electrodermal activity [9], or electromyographic activity [10] highlight physiological interpersonal synchronization. Cardiac biosignals, in particular heart rate variability (HRV), are increasingly used to monitor an individual’s state or change of state. HRV indices are particularly sensitive to changes in cognitive [11] and emotional [12] states. This can be explained by the brain–heart coordination within the organism [13]. This dynamic interaction of systems in humans allows cerebral activity modifications to be measured by analyzing the activity of peripheral systems such as the autonomic nervous system, which is more accessible. This has significant implications for the generalization of these methods to real-world contexts. HRV has recently been used to measure such physiological interpersonal synchronization. Blons et al. [14] conducted a study to understand the physiological resonances associated with empathic stress. The aim of their study was to record psychological and physiological cues with HRV during stress induction within a group. The results showed that psychophysiological synchronizations existed, and the authors suggest that “even if individuals perceive themselves as autonomous entities, their emotions and affective states are related to those of their peers, which facilitates social connection and coordination among human beings”.

This project follows the path opened by these works, which paved the way for questioning the existence of such synchronization. In this paper, our purpose is to measure the synchronization of cardiac activity within a pair of subjects using new methodological approaches, in several contexts that induce different levels of collaboration. More specifically, we aim to determine whether a dependency exists between the cardiac activities within a dyad in these contexts and to evaluate its influence on performance and perceived performance on the task. Our contributions are twofold:
  • A protocol with four phases was defined and data were collected. Instead of having a single value of a biomarker for a task, its temporal evolution is considered over the whole task. The temporal evolution of the low-frequency (LF) power of each participant’s HRV process is derived. For each phase and each dyad, two time series constitute the input of the processing chain in order to decide on the presence of a physiological dependency.
  • Various approaches based on “associative” measures could be considered to address the above problem, taking into account the fact that the results must be easy for physiologists to interpret. Thus, the normalized cross-correlation function could be studied, but this approach is sensitive to the number of samples available. Detrended cross-correlation analysis [15], which can be seen as an extension of detrended fluctuation analysis [16] to two signals, is mainly used to quantify long-range cross-correlation between the processes. However, evaluating the long-range link is not necessarily of interest in our application. Cross-entropy-based approaches aim to quantify the degree of coupling between two signals. Thus, one could develop a method based on cross-sample entropy [17] or some of its variants [18,19]. One could also use the multiscale cross-trend sample entropy, whose goal is to evaluate the synchronism of the dynamical structure of two series with potential trends [20]. However, the above approaches may not be explicit enough for the physiologist for an a posteriori interpretation. Deep learning-based approaches could be developed, but the question of explainability remains important.
Therefore, to decide on the presence or absence of collaboration and to have a representation of the dependency, we propose to consider a Granger causality-based method. Granger causality (G-causality) is a method used to determine whether one time series can predict another. It has been applied in various fields, from finance [21] to neuroscience, including the analysis of functional magnetic resonance imaging time series, epilepsy detection, and more. It should be noted that selected landmarks in the development and application of G-causality for neuroscience and neuroimaging are given in [22]. A common approach to studying G-causality is to model wide-sense stationary (w.s.s.) processes using a multivariate autoregressive (AR) process. The level of interaction is then quantified by computing the log ratio of the variances of the prediction error between the independent and dependent cases. In the time domain, G-causality interactions are considered significant when the AR coefficient matrices are significantly non-diagonal. To this end, the Wald test can be used. Alternatively, other approaches have been proposed, based for instance on partial coherence statistics [23]. It should be noted that a MATLAB toolbox for Granger causal connectivity analysis was proposed by Seth [24]. During the last few years, there has been a growing interest in analyzing causal influences in the frequency domain. This consists of applying the Fourier transform to the time-domain equations of the multivariate AR model. In this context, the transfer matrix of the process can be derived, enabling the definition of the power spectral density matrix, also known as the spectral matrix. Spectral G-causality can then be deduced. The normalized Directed Transfer Function (DTF), introduced by Kaminski et al. [25], can also be derived from the system’s transfer matrix. By integrating the DTF over a specific frequency band, the integrated DTF can be obtained. In [26], an extension to time-varying (TV) AR modeling is proposed, where the TV AR matrices are estimated using Kalman filtering, assuming that the driving-process variance is known. By introducing a TV transfer matrix derived from the TV AR coefficients, TV versions of the above connectivity measures can also be developed.
In this paper, we suggest modeling the low-frequency powers of the RR interval time series (the RR interval time series refers to the time elapsed between two successive R-waves of the ECG signal) by a bivariate AR process. Then, the AR matrices and the covariance matrix of the driving process are estimated by considering two cases: either the two time series are independent or they are correlated. Interpersonal physiological coordination is finally assessed by comparing the variances of the prediction error obtained under independent and dependent conditions using a statistical test for variance comparison. For a given dyad, if the first-subject prediction error variance is significantly larger under the independent assumption, then coordination is present. This means that the second-subject signal provides relevant information for predicting the signal of the first participant. To enhance the robustness of the decision, different estimation methods are considered.
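To make this decision principle concrete before detailing the processing chain, the following minimal sketch predicts one participant's series from its own past only, then from the pasts of both participants, and compares the two prediction-error variances with a one-sided F-test. It is only an illustration of the idea, not the exact pipeline of Section 3; the order p = 6, the degrees of freedom and the function names are illustrative choices.

```python
import numpy as np
from scipy.stats import f as f_dist

def residual_variance(target_series, regressor_series, p):
    """OLS fit of target_series(k) on p lagged values of each series in
    regressor_series; returns the prediction-error variance and the number of residuals."""
    n = len(target_series)
    X = np.column_stack([s[p - i:n - i] for s in regressor_series for i in range(1, p + 1)])
    y = target_series[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    return np.var(resid, ddof=X.shape[1]), len(resid)

def granger_decision(x1, x2, p=6, alpha=0.05):
    """Does the past of x2 improve the prediction of x1?"""
    x1, x2 = x1 - x1.mean(), x2 - x2.mean()
    var_indep, n_ind = residual_variance(x1, [x1], p)      # x1 predicted from its own past
    var_dep, n_dep = residual_variance(x1, [x1, x2], p)    # x1 predicted from both pasts
    F = var_indep / var_dep                                # > 1 when x2 carries information
    p_value = f_dist.sf(F, n_ind - p, n_dep - 2 * p)       # one-sided F-test, approximate dof
    return F, p_value, p_value < alpha
```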
The rest of this paper is organized as follows: In Section 2, the protocol we defined is presented as well as the recorded data and the features we extracted from them over time. They correspond to the time series we studied. In Section 3, the processing chain is presented. Then, in Section 4, results are given for both synthetic and real data.

2. Protocol, Data, and Features

2.1. Presentation of the Protocol

A total of 44 students from the University of Bordeaux (28 men, 16 women, aged 22 ± 4 years) provided informed consent to take part in this study, which was carried out as a part of their university curriculum. All declared having no history of neurological or physiological disorders and having normal or corrected vision. Participants were asked to abstain from alcohol, caffeine, and strenuous physical activity for 12 h prior to the experiment. In total, 22 same-sex dyads were formed. Due to corrupted data, two dyads were excluded. During the whole experiment, i.e., around 50 min, participants sat in a chair at a distance of around 0.6 m from a 24-inch computer screen where they watched documentaries. A keyboard was used to play games and answer questionnaires. All verbal and non-verbal communication was prohibited throughout the protocol.
There are four phases in the experiment. In the initial 7-min “solo basal video” phase, both participants watched a wildlife documentary on opposite sides of a visual divider. In the second 10-min phase, “solo game”, both participants played a platform game individually. The third 10-min phase, “duo game”, involved collaboration between the two participants, who played the same platform game. The divider was removed to facilitate collaboration and so that the participants could see each other. During the fourth 7-min phase, “duo basal video”, both participants watched the rest of the wildlife documentary on their respective screens; unlike in the first phase, they could see each other.

2.2. Pre-Analysis of the Collected Data

Two electrocardiograms (ECGs), one for each participant, connected to a PowerLab (ADInstruments, Dunedin, New Zealand; sampling rate 1000 Hz), were recorded to synchronously obtain the cardiac electrical activities throughout the experiment. HRV was obtained by measuring the interval between successive R waves of the ECGs for each cardiac cycle. Time series of RR intervals (300 to 900 samples, depending on the duration of each phase and the subject’s heart rate) were obtained. They were visually inspected to identify artifacts, irregular beats, or singular patterns such as cardiac coherence. Data from 7 dyads were not retained: too many artifacts were identified in 2 dyads, and cardiac coherence was achieved by at least one of the two subjects in 5 others. Therefore, 13 dyads were kept.

2.3. Extracting the LF Marker over Time

RR time series were resampled to 4 Hz using Hermite cubic spline interpolation. The low-frequency (LF) power (0.04–0.15 Hz), a marker of sympathetic autonomic activity [27], was estimated over time using Fourier analysis, by combining a fast Fourier transform with zero padding and the trapezoidal method. A two-minute sliding Hanning window with maximum overlap was then used to obtain the evolution of this marker over time. For each dyad and each experimental phase, two sequences, denoted as $x_j(k)$ with $j = 1$ or $2$, were thus available (900 to 1700 samples). We noticed that the data can be assumed to be w.s.s.
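As an illustration of this processing step, the sketch below computes an LF power time course from RR intervals, assuming beat times and RR values are given in seconds. The PCHIP interpolant (one cubic Hermite scheme), the zero-padded FFT length and the one-sample sliding step are plausible but illustrative choices rather than the exact settings used here.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.integrate import trapezoid

FS = 4.0                      # resampling frequency (Hz)
WIN = int(120 * FS)           # two-minute analysis window (samples)
NFFT = 4096                   # zero-padded FFT length (illustrative)

def lf_power_series(beat_times, rr, band=(0.04, 0.15)):
    """Time course of the LF power from RR intervals (beat_times and rr in seconds)."""
    # Resample the unevenly sampled RR series to 4 Hz with a cubic Hermite (PCHIP) interpolant.
    t = np.arange(beat_times[0], beat_times[-1], 1.0 / FS)
    rr4 = PchipInterpolator(beat_times, rr)(t)

    window = np.hanning(WIN)
    freqs = np.fft.rfftfreq(NFFT, d=1.0 / FS)
    in_band = (freqs >= band[0]) & (freqs <= band[1])

    lf = []
    for start in range(0, len(rr4) - WIN + 1):            # maximum overlap: slide by one sample
        seg = rr4[start:start + WIN]
        seg = (seg - seg.mean()) * window                 # demean, then apply the Hanning window
        spec = np.abs(np.fft.rfft(seg, NFFT)) ** 2 / (FS * np.sum(window ** 2))
        lf.append(trapezoid(spec[in_band], freqs[in_band]))   # LF power via the trapezoidal rule
    return np.asarray(lf)
```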

3. Whole Processing Chain

Two assumptions are proposed: for each dyad, the two LF markers over time at each phase of the protocol are assumed to be a bivariate AR process or a bivariate AR process disturbed by an additive white Gaussian noise. In each case, the two sequences are either independent (and processed independently in that case) or not. We recall the existing approaches to estimate the AR parameters and the noise variances. Then, we present our methodology to decide which assumption can be considered to model the data and whether the LF markers of the two participants are independent or not.

3.1. Assumption 1: Data Modeled by an AR Model

The data are first centered by removing the estimated mean. We then propose to model the resulting centered data by considering a real bivariate $p$-th-order AR process, denoted as $X(k)$, whose components are $x_1(k)$ and $x_2(k)$:
$$X(k) = -\sum_{i=1}^{p} A_i^p X(k-i) + U^p(k) \quad (1)$$
with $\{A_i^p\}_{i=1,\dots,p}$ being the AR matrices and $U^p(k)$ being a white-noise vector that can be interpreted as the error made on the prediction of $X(k)$ from the last $p$ values of the process.
When the AR matrices and the covariance matrix of $U^p(k)$ are diagonal, the processes $x_1(k)$ and $x_2(k)$ are independent. Each satisfies the following relationship:
$$x_j(k) = -\sum_{i=1}^{p_j} a_{i,j}^{p_j} x_j(k-i) + u_j^{p_j}(k) \quad (2)$$
with $\{a_{i,j}^{p_j}\}_{i=1,\dots,p_j}$ being the AR parameters of the process for the order $p_j$, $u_j^{p_j}(k)$ being the prediction error, and $j = 1, 2$.
Given the above model, let us now address the parameter estimation from the available observations. It should be noted that when the processes $x_1(k)$ and $x_2(k)$ are independent, we opt for (2); otherwise, the processes are correlated and we start from (1).
Independent case: Batch methods and recursive approaches exist. Let us first focus on the first family. From (2), a recurrence relation satisfied by the correlation function $r_{x_j x_j}(\tau)$ of $x_j$ and the variance $\sigma_{u,j}^2(p_j)$ of $u_j^{p_j}(k)$ can be deduced:
$$r_{x_j x_j}(\tau) = -\sum_{i=1}^{p_j} a_{i,j}^{p_j} r_{x_j x_j}(\tau - i) + \sigma_{u,j}^2(p_j)\,\delta(\tau) \quad (3)$$
By selecting $\tau = 1, \dots, p_j$, we obtain the Yule–Walker equations from which the AR parameters are deduced. By setting $\tau = 0$ in (3), $\sigma_{u,j}^2(p_j)$ is deduced. In a compact form, one obtains the following:
$$\begin{bmatrix} r_{x_j x_j}(0) - \sigma_{u,j}^2(p_j) & r_{x_j x_j}(1) & \cdots & r_{x_j x_j}(p_j) \\ r_{x_j x_j}(1) & r_{x_j x_j}(0) & \cdots & r_{x_j x_j}(p_j - 1) \\ \vdots & \vdots & \ddots & \vdots \\ r_{x_j x_j}(p_j) & r_{x_j x_j}(p_j - 1) & \cdots & r_{x_j x_j}(0) \end{bmatrix} \begin{bmatrix} 1 \\ a_{1,j}^{p_j} \\ \vdots \\ a_{p_j,j}^{p_j} \end{bmatrix} = R_{x_j u_j} \begin{bmatrix} 1 \\ \theta_j \end{bmatrix} = 0_{p_j+1} \quad (4)$$
where $0_{p_j+1}$ is a column vector of $p_j+1$ zeros. It should be noted that $R_{x_j u_j}$ is the covariance matrix of the vector $\begin{bmatrix} x_j(k) - u_j^{p_j}(k) & x_j(k-1) & \cdots & x_j(k-p_j) \end{bmatrix}^T$. If $\sigma_{u,j}^2(p_j)$ is known, the parameter vector $\begin{bmatrix} 1 & a_{1,j}^{p_j} & \cdots & a_{p_j,j}^{p_j} \end{bmatrix}^T$ can be directly computed from (4), as it is the kernel of $R_{x_j u_j}$. As $\sigma_{u,j}^2(p_j)$ is usually unknown, it can be retrieved by taking advantage of the fact that the determinant of $R_{x_j u_j}$ is equal to zero. Starting from the following block partition of $R_{x_j u_j}$,
$$R_{x_j u_j} = \begin{bmatrix} r_{x_j x_j}(0) - \sigma_{u,j}^2(p_j) & \beta^T \\ \beta & R_{x_j x_j} \end{bmatrix} \quad (5)$$
it is easy to find
$$\sigma_{u,j}^2(p_j) = r_{x_j x_j}(0) - \beta^T R_{x_j x_j}^{-1} \beta = \frac{\det(\bar{R}_{x_j x_j})}{\det(R_{x_j x_j})}, \quad (6)$$
where $\bar{R}_{x_j x_j}$ is obtained by removing $\sigma_{u,j}^2(p_j)$ from $R_{x_j u_j}$. Therefore, the second batch method consists of computing the ratio of the determinants of the covariance matrices of consecutive sizes larger than or equal to $p_j$ to obtain $\sigma_{u,j}^2(p_j)$. The eigenvector of the matrix in (5) associated with the zero eigenvalue, normalized with respect to its first element, provides an estimate of the AR parameters.
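The sketch below illustrates the two batch estimators just described, using biased sample correlations (correlation method): solving the Yule–Walker system deduced from (3) for a given order, and obtaining the prediction-error variance from the ratio of determinants in (6). The sign convention matches (2); the function names are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

def biased_autocorr(x, max_lag):
    """Biased estimate of r_x(tau) for tau = 0..max_lag (correlation method)."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - tau], x[tau:]) / n for tau in range(max_lag + 1)])

def yule_walker(x, p):
    """AR parameters a_1..a_p (convention x(k) = -sum a_i x(k-i) + u(k)) and sigma_u^2(p)."""
    r = biased_autocorr(x, p)
    a = np.linalg.solve(toeplitz(r[:p]), -r[1:p + 1])     # Yule-Walker equations for tau = 1..p
    sigma_u2 = r[0] + np.dot(a, r[1:p + 1])               # tau = 0 row of (3)
    return a, sigma_u2

def sigma_from_determinants(x, p):
    """sigma_u^2(p) from the ratio of determinants in (6)."""
    r = biased_autocorr(x, p)
    R_big = toeplitz(r[:p + 1])      # (p+1) x (p+1) correlation matrix (sigma_u^2 'removed')
    R_small = toeplitz(r[:p])        # p x p lower-right block
    return np.linalg.det(R_big) / np.linalg.det(R_small)
```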
Furthermore, to reduce the computation cost, the Durbin–Levinson algorithm is designed to estimate the parameters of AR models by incrementally increasing the model order [28]. The variances for two consecutive orders, namely $p_j$ and $p_j+1$, verify the following:
$$\sigma_{u,j}^2(p_j+1) = \left(1 - \rho_{p_j+1}^2\right) \sigma_{u,j}^2(p_j) \quad (7)$$
with $\rho_{p_j+1}$ being the $(p_j+1)$-th reflection coefficient of the $j$-th process, equal to $a_{p_j+1,j}^{p_j+1}$. Given (7), its modulus is smaller than 1.
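For completeness, here is a compact sketch of the Durbin–Levinson recursion operating on the autocorrelations r(0), ..., r(p); it returns the reflection coefficients and the sequence of prediction-error variances, so that the decrease implied by (7) can be checked.

```python
import numpy as np

def levinson_durbin(r, p):
    """Durbin-Levinson recursion on the autocorrelations r(0..p).

    Returns the AR parameters (convention x(k) = -sum a_i x(k-i) + u(k)),
    the reflection coefficients and sigma_u^2(m) for m = 0..p."""
    a = np.zeros(p + 1)
    a[0] = 1.0
    sigma2 = [r[0]]
    rho = []
    for m in range(1, p + 1):
        # reflection coefficient for order m
        k = -(r[m] + np.dot(a[1:m], r[m - 1:0:-1])) / sigma2[-1]
        a_new = a.copy()
        a_new[1:m] = a[1:m] + k * a[m - 1:0:-1]
        a_new[m] = k
        a = a_new
        rho.append(k)
        sigma2.append(sigma2[-1] * (1.0 - k ** 2))        # relation (7)
    return a[1:], np.array(rho), np.array(sigma2)
```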
In practice, there are various methods to estimate the AR parameters. When considering the Markel and Gray approach [29], namely the correlation method, the AR process is assumed to be w.s.s. and ergodic. In that case, the expectation is replaced by an infinite time average. When only a finite number $N$ of samples is available, the biased estimate is considered. The covariance method proposed by Atal et al. [30] is based on the same concept, except that the correlation function is estimated slightly differently. The estimated correlation matrix is still symmetric but no longer Toeplitz. An alternative consists of minimizing the sum of the squares of the prediction errors. In that case, (2) can be written in matrix form as follows:
$$\underline{X}_j = \begin{bmatrix} x_j(p_j+1) & \cdots & x_j(N) \end{bmatrix}^T = A_j \underline{\theta}_j + \underline{U}_j \quad (8)$$
with $A_j$ being a matrix of size $(N - p_j) \times p_j$ defined from the samples of the process and $\underline{U}_j$ being defined similarly to $\underline{X}_j$ with the samples of the driving process $u_j^{p_j}(k)$. The ordinary least squares method leads to the following:
$$\underline{\theta}_j = (A_j^T A_j)^{-1} A_j^T \underline{X}_j \quad (9)$$
It should be noted that a singular value decomposition of $A_j$ is usually applied to avoid numerical issues. Moreover, in some cases, regularized approaches like Ridge and Lasso are used.
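A sketch of the least-squares estimator (8) and (9); the lstsq solver relies on an SVD, in line with the remark above, and the optional ridge term is only an illustrative regularized variant.

```python
import numpy as np

def ar_least_squares(x, p, ridge=0.0):
    """Least-squares AR estimate: x(k) regressed on x(k-1)..x(k-p).

    theta contains the prediction coefficients (x(k) ~ sum theta_i x(k-i));
    under the sign convention of (2), the AR parameters are a_i = -theta_i."""
    x = x - x.mean()
    n = len(x)
    A = np.column_stack([x[p - i:n - i] for i in range(1, p + 1)])   # (N - p) x p regression matrix
    target = x[p:]
    if ridge > 0.0:
        theta = np.linalg.solve(A.T @ A + ridge * np.eye(p), A.T @ target)   # ridge variant
    else:
        theta, *_ = np.linalg.lstsq(A, target, rcond=None)                   # SVD-based solve
    a = -theta
    residual = target - A @ theta
    sigma_u2 = np.var(residual, ddof=p)
    return a, sigma_u2
```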
Finally, the Burg algorithm is based on the minimization of the forward and backward sums of squared errors.
Recursive approaches exist and are usually based on the least mean squares (LMS) (or one of its variants such as the normalized LMS) and Kalman filtering combined with a step criterion [28].
In the above methods, one of the key questions lies in the estimation of the order of the AR process. For this purpose, one can use selection criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), and the final prediction error (FPE); compare the AR spectrum associated with the estimated AR parameters with the power spectrum of the signal obtained from the Fourier transform; and check some theoretical properties such as (7).
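As an illustration of the first family of order-selection criteria, the sketch below scans candidate orders and computes the textbook AIC, BIC and FPE expressions from the estimated prediction-error variance; the exact penalty forms used in this work are not specified, so these formulas are assumptions.

```python
import numpy as np

def select_ar_order(x, estimator, max_order=12):
    """Scan candidate orders and compute AIC, BIC and FPE from sigma_u^2(p).

    `estimator(x, p)` must return (ar_parameters, sigma_u2); any of the
    estimators sketched above can be passed.  Returns the AIC-best order
    and the table of criteria."""
    n = len(x)
    criteria = {}
    for p in range(1, max_order + 1):
        _, sigma_u2 = estimator(x, p)
        aic = n * np.log(sigma_u2) + 2 * p
        bic = n * np.log(sigma_u2) + p * np.log(n)
        fpe = sigma_u2 * (n + p + 1) / (n - p - 1)
        criteria[p] = {"AIC": aic, "BIC": bic, "FPE": fpe}
    best = min(criteria, key=lambda p: criteria[p]["AIC"])
    return best, criteria
```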
Dependent case: We can extend the Yule–Walker equations to the case of multivariate AR processes. The correlation function is then replaced by a correlation matrix. The extension of (9) to a bivariate AR process can also be carried out.
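One standard way to carry out this extension numerically is multivariate least squares, sketched below for the bivariate case: the two centered series are stacked, regressed on their joint past, and the covariance of the prediction error is estimated from the residuals. This is an illustration rather than the exact estimator used here.

```python
import numpy as np

def bivariate_ar_least_squares(x1, x2, p):
    """Multivariate least squares for a bivariate AR(p) model.

    Returns 2x2 matrices in the prediction form X(k) ~ sum_i A_i X(k-i)
    (the sign convention of (1) corresponds to the negated matrices) and
    the covariance of the prediction error."""
    X = np.column_stack([x1 - x1.mean(), x2 - x2.mean()])               # shape (N, 2)
    n = X.shape[0]
    Z = np.column_stack([X[p - i:n - i, :] for i in range(1, p + 1)])   # joint past, (N-p, 2p)
    Y = X[p:, :]                                                        # targets, (N-p, 2)
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)                           # (2p, 2)
    A_mats = [B[2 * (i - 1):2 * i, :].T for i in range(1, p + 1)]       # 2x2 AR matrices
    E = Y - Z @ B
    sigma_U = (E.T @ E) / (len(E) - 2 * p)                              # prediction-error covariance
    return A_mats, sigma_U
```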

3.2. Assumption 2: Data Modeled by a Real Bivariate AR Model + Noise

The data are modeled by a bivariate AR process disturbed by an additive white noise:
$$Y(k) = X(k) + W^p(k) \quad (10)$$
with $W^p(k) = \begin{bmatrix} w_1^{p_1}(k) & w_2^{p_2}(k) \end{bmatrix}^T$ being a white process with covariance matrix $\Sigma_w$ and uncorrelated with the prediction error $U^p(k)$. With reference to the independent case, by inserting $y_j(k) = x_j(k) + w_j^{p_j}(k)$ in (2), we get
$$y_j(k) + \sum_{i=1}^{p_j} a_{i,j}^{p_j} y_j(k-i) - u_j^{p_j}(k) - w_j^{p_j}(k) - \sum_{i=1}^{p_j} a_{i,j}^{p_j} w_j^{p_j}(k-i) = 0 \quad (11)$$
Starting from (11), it is possible to show that the Yule–Walker Equation (4) for the $j$-th process becomes
$$\left( R_{y_j u_j} - \sigma_{w,j}^2(p_j)\, I_{p_j+1} \right) \begin{bmatrix} 1 \\ \theta_j \end{bmatrix} = 0_{p_j+1}, \quad (12)$$
where $R_{y_j u_j}$ is defined similarly to $R_{x_j u_j}$ and $\sigma_{w,j}^2(p_j)$ is the variance of $w_j^{p_j}(k)$. The above equation can also be obtained by replacing $r_{x_j x_j}(0)$ with $r_{y_j y_j}(0) - \sigma_{w,j}^2(p_j)$ in (4).
Because of the presence of the noise $W^p(k)$, the previously mentioned identification methods lead to biased estimates, even asymptotically. To overcome this problem, different solutions have been proposed in the literature. Since the additive noise only affects the correlation function of the noisy data at lag zero, one initial idea is to avoid the equations in (3) that involve the correlation function at zero lag. This results in the modified Yule–Walker equations, which can be interpreted as an instrumental variable technique [31,32,33]. However, with this approach, the variances of the driving process and the additive noise cannot be estimated. Most of the available methods are based on the bias compensation principle. The rationale consists of ‘compensating’ the bias of classical approaches through the estimation of the additive-noise covariance matrix $\Sigma_w$ or the variance $\sigma_{w,j}^2(p_j)$ [34,35,36,37,38,39,40]. In [41], a Gauss–Newton recursive prediction error method is proposed.
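A sketch of the modified (high-order) Yule–Walker idea for one noisy component: only correlation lags strictly greater than the model order are used, so the zero-lag term corrupted by the additive noise never enters the equations, and the overdetermined system is solved in the least-squares sense. The number of extra equations q is an illustrative choice and, as noted above, neither the driving-process variance nor the noise variance is recovered.

```python
import numpy as np

def modified_yule_walker(y, p, q=6):
    """High-order (modified) Yule-Walker estimate for an AR(p) observed in white noise."""
    y = y - y.mean()
    n = len(y)
    r = np.array([np.dot(y[:n - tau], y[tau:]) / n for tau in range(p + q + 1)])
    # equations r(tau) + sum_i a_i r(tau - i) = 0 for tau = p+1 .. p+q (no zero-lag term involved)
    M = np.array([[r[tau - i] for i in range(1, p + 1)] for tau in range(p + 1, p + q + 1)])
    rhs = -r[p + 1:p + q + 1]
    a, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return a
```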
Other approaches exploit the expectation maximization (EM) algorithm [42,43,44]. An EM-based method was proposed in [42]; Gannot et al. [43] then revisited it and integrated Kalman filtering into the EM algorithm. The Whittle likelihood can also be considered. Finally, in [45,46], structure-based dual estimators (Kalman or $H_\infty$) were presented.
Finally, one of the most promising approaches relies on the so-called Frisch scheme [47]. It consists of finding the solution of the identification problem within a locus of solutions compatible with the second-order statistics of the noisy data [48,49]. In order to estimate a single model among the set of possible solutions, it is necessary to define a suitable selection criterion. The univariate [48] and multivariate cases [49] were addressed. It should be noted that other works were conducted when the additive noise was no longer white but colored (see [40,50,51]).
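The sketch below is only a toy illustration of the Frisch-scheme idea, not the algorithms of [48,49]: for each candidate noise variance, the zero-lag correlation is compensated, the noise-free Yule–Walker system is solved, and a simple selection criterion (the residual of a few high-order Yule–Walker equations, which are insensitive to the additive white noise) picks one point of the locus of admissible solutions. The cited works rely on more elaborate selection criteria.

```python
import numpy as np
from scipy.linalg import toeplitz

def frisch_sketch(y, p, q=4, n_grid=400):
    """Toy Frisch-scheme-style estimation of AR parameters and noise variance."""
    y = y - y.mean()
    n = len(y)
    r = np.array([np.dot(y[:n - tau], y[tau:]) / n for tau in range(p + q + 1)])
    # admissible sigma_w^2 cannot exceed the smallest eigenvalue of the noisy covariance matrix
    lam_min = np.linalg.eigvalsh(toeplitz(r[:p + 1]))[0]
    best = (np.inf, None, None)
    for s in np.linspace(0.0, 0.99 * lam_min, n_grid):
        r_comp = r.copy()
        r_comp[0] -= s                                   # compensate the zero-lag correlation
        a = np.linalg.solve(toeplitz(r_comp[:p]), -r_comp[1:p + 1])   # noise-free Yule-Walker solve
        # selection criterion: residual of the high-order Yule-Walker equations (lags p+1..p+q)
        res = [r[tau] + np.dot(a, r[tau - 1:tau - p - 1:-1]) for tau in range(p + 1, p + q + 1)]
        cost = float(np.sum(np.square(res)))
        if cost < best[0]:
            best = (cost, s, a)
    _, sigma_w2, a = best
    return a, sigma_w2
```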

3.3. Comparing the Variance of Prediction Error Under Independent or Correlated Assumptions

Physiological interpersonal coordination is assessed using the following methodology:
Step 1—selection of the model: Both assumptions are first analyzed to see whether an additive white noise has to be considered in the modeling. When applying the Frisch-scheme-based methods described in the above section, we noticed that the estimated variance of the additive noise was negligible with respect to the power of the AR processes. Assumption 1 was therefore kept for the modeling. It should be noted that a more formal approach would be to use an AIC-based selection method.
Step 2—estimation of the AR parameters/matrices and the variance/covariance matrix of the prediction error: Estimation methods are first selected, such as the one based on (9), the covariance method, and the Burg algorithm for the independent case, and the extension of (9) to a bivariate AR process for the dependent case. Moreover, since the order-selection criteria do not necessarily point to a single value, several candidate orders were considered. For a specific method, if the results obtained for various orders did not satisfy (7), they were not taken into account.
Step 3—decision on dependency: The purpose is to compare the variances of the prediction error obtained in the independent and dependent cases by using a statistical test for the comparison of variances (two-sample F-test with a significance threshold set at 0.05). For a given dyad and phase, and for one estimation method, if the estimated variance of the prediction error of one participant in the independent case is significantly larger than the one obtained in the dependent case, then coordination is at play. Considering the different estimation methods and possible orders for each comparison of variances, the results are given in the form of a probability of influence of one player on the other player in the dyad. This probability corresponds to the number of comparisons in favor of coordination divided by the total number of comparisons. When the probability is strictly larger than 0.5, a final decision is taken. When the probability is equal to 0.5, we consider the decision to be uncertain.
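The sketch below shows one way the per-method, per-order variance comparisons could be aggregated into the probability of influence described in Step 3. The nested-dictionary layout, the degrees of freedom of the F-test and the mapping of the probability to a D/ND/uncertain decision are illustrative readings of the rule, not the exact code used for the results of Section 4.

```python
from scipy.stats import f as f_dist

def influence_probability(indep_vars, dep_vars, n_samples, orders, alpha=0.05):
    """Aggregate per-method / per-order variance comparisons into a decision.

    indep_vars[m][p] and dep_vars[m][p]: prediction-error variances of one
    participant obtained with estimation method m and order p under the
    independent and dependent assumptions."""
    favorable, total = 0, 0
    for m, orders_m in orders.items():
        for p in orders_m:
            F = indep_vars[m][p] / dep_vars[m][p]
            p_value = f_dist.sf(F, n_samples - p - 1, n_samples - 2 * p - 1)  # one-sided F-test
            favorable += int(p_value < alpha)     # the other participant helps the prediction
            total += 1
    prob = favorable / total
    if prob > 0.5:
        decision = "D"
    elif prob < 0.5:
        decision = "ND"
    else:
        decision = "uncertain"
    return prob, decision
```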

4. Results and Comments

4.1. Synthetic Data

Let us consider a set of 3000 stochastic bounded-input bounded-output bivariate AR processes. The order (between 2 and 10) and the parameters are randomly drawn. Four classes are generated: class 1, independent components; class 2, the first component depends on the second; class 3, the second depends on the first; and class 4, both depend on each other.
Based on the results provided in Table 1, Table 2 and Table 3, the proposed method yields good results for the first three classes but struggles to identify the double dependencies (class 4), although increasing the number of samples improves the classification accuracy. Nevertheless, the method can clearly determine whether a dependency exists. Therefore, for real-world data, only two classes will be considered—no dependency (ND) or dependency (D)—with the latter encompassing either a single dependency or mutual dependencies.
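In the same spirit, the sketch below generates one random stable bivariate AR realization for a given class. The uniform coefficient range, the pole-shrinking step (rescaling lag i by a factor c^i shrinks all companion-matrix eigenvalues by c) and the burn-in length are illustrative choices, not the exact generation procedure used for Tables 1–3.

```python
import numpy as np

def random_bivariate_ar(n_samples, dep_class, rng, p_range=(2, 10)):
    """One synthetic realization of a stable bivariate AR process for a given class.

    dep_class 1: independent components; 2: component 1 driven by component 2;
    3: the reverse; 4: mutual dependency."""
    p = int(rng.integers(p_range[0], p_range[1] + 1))
    A = rng.uniform(-0.5, 0.5, size=(p, 2, 2))
    if dep_class == 1:
        A[:, 0, 1] = A[:, 1, 0] = 0.0     # no cross terms
    elif dep_class == 2:
        A[:, 1, 0] = 0.0                  # x2 evolves on its own, x1 listens to x2
    elif dep_class == 3:
        A[:, 0, 1] = 0.0                  # x1 evolves on its own, x2 listens to x1
    # companion matrix of the prediction form X(k) = sum_i A_i X(k-i) + U(k)
    C = np.zeros((2 * p, 2 * p))
    C[:2, :] = np.concatenate(list(A), axis=1)
    C[2:, :-2] = np.eye(2 * (p - 1))
    rho = np.max(np.abs(np.linalg.eigvals(C)))
    shrink = min(1.0, 0.95 / rho)
    A *= (shrink ** np.arange(1, p + 1))[:, None, None]   # rescaling the lags shrinks the poles
    # simulate with a burn-in period to remove the transient
    X = np.zeros((n_samples + 20 * p, 2))
    for k in range(p, len(X)):
        X[k] = sum(A[i - 1] @ X[k - i] for i in range(1, p + 1)) + rng.standard_normal(2)
    return X[-n_samples:]

rng = np.random.default_rng(0)
x1, x2 = random_bivariate_ar(1000, dep_class=2, rng=rng).T
```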

4.2. Real Data

The results are presented in Table 4 below for the four experimental phases (solo basal video, solo game, duo game, and duo basal video) and the 13 dyads. The orders selected for the AR modeling are equal to 6 or 8 most of the time. In most cases, no coordination was detected. For Phases 1 and 2, during which the members of the dyads neither see each other nor collaborate, a non-dependency decision seems consistent. However, dyad no. 11 remains an exception. For Phases 3 and 4, which may induce more collaboration, we observe that dyads no. 2 and no. 3 exhibit physiological coordination. A very strict decision threshold was selected in order to limit the number of false positives. This partly explains the low number of synchronized dyads, particularly during the collaborative phases. Optimizing this decision threshold is one of the potential avenues for improving the method.
To extend the interpretation of this methodological work, this preliminary study will be complemented by an additional cohort of participants in order to reach a sufficient number of physiologically coordinated dyads. Psychological factors such as stress, anxiety, and social proximity, as well as behavioral performance factors, may act as moderating variables of the level of physiological coordination. These parameters are not discussed in this initial methodological study but will be incorporated into future research aimed at applying these methods to various real-world contexts. For instance, in dyad no. 11, a strong social proximity between the two participants could explain the synchronization observed from the beginning of the experiment.

5. Conclusions and Perspectives

This multidisciplinary project is challenging, and the proposed method is promising. We are currently working on alternative models, such as nonlinear AR models or time-varying AR models, as well as on deep learning approaches using a CNN architecture whose input is a windowed estimate of the normalized cross-correlation between the signals and which is trained with labeled synthetic bivariate AR processes.
Our long-term objective is to compare and combine these approaches. We will also explore data on perceived performance and social proximity to measure their moderating effects on physiological synchronization.

Author Contributions

Conceptualization, E.G. and V.D.A.; methodology, P.B., E.G., R.D., and V.D.A.; software, P.B., E.G., and R.D.; validation, P.B., R.D., V.D.A., and E.G.; formal analysis, P.B., E.G., R.D., and V.D.A.; investigation, P.B., E.G., R.D., and V.D.A.; writing—original draft preparation, P.B., E.G., R.D., and V.D.A.; writing—review and editing, P.B., E.G., R.D., and V.D.A.; visualization, P.B., E.G., R.D., and V.D.A.; supervision, V.D.A. and E.G.; project administration, V.D.A. and E.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Institut Carnot Cognition.

Institutional Review Board Statement

All the procedures were approved by the IRB of the Faculté des STAPS and were in accordance with the Declaration of Helsinki.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AR      Autoregressive
DTF     Directed transfer function
HRV     Heart rate variability
LF      Low frequency
w.s.s.  Wide-sense stationary

References

  1. Mayo, O.; Gordon, I. In and out of synchrony-Behavioral and physiological dynamics of dyadic interpersonal coordination. Psychophysiology 2020, 57, e13574.
  2. Chartrand, T.L.; Lakin, J.L. The antecedents and consequences of human behavioral mimicry. Annu. Rev. Psychol. 2013, 64, 285–308.
  3. Ackerman, J.M.; Nocera, C.C.; Bargh, J.A. Incidental Haptic Sensations Influence Social Judgments and Decisions. Science 2010, 328, 1712–1715.
  4. Bernieri, F.J.; Rosenthal, R. Interpersonal coordination: Behavior matching and interactional synchrony. In Fundamentals of Nonverbal Behavior; Cambridge University Press: Cambridge, UK, 1991; pp. 401–432.
  5. Palumbo, R.V.; Marraccini, M.E.; Weyandt, L.L.; Wilder-Smith, O.; McGee, H.A.; Liu, S.; Goodwin, M.S. Interpersonal Autonomic Physiology: A Systematic Review of the Literature. Personal. Soc. Psychol. Rev. 2017, 21, 99–141.
  6. Almurad, Z.M.H.; Roume, C.; Delignières, D. Complexity matching in side-by-side walking. Hum. Mov. Sci. 2017, 54, 125–136.
  7. Almurad, Z.M.H.; Roume, C.; Blain, H.; Delignières, D. Complexity Matching: Restoring the Complexity of Locomotion in Older People Through Arm-in-Arm Walking. Front. Physiol. 2018, 9, 1766.
  8. Codrons, E.; Bernardi, N.F.; Vandoni, M.; Bernardi, L. Spontaneous Group Synchronization of Movements and Respiratory Rhythms. PLoS ONE 2014, 9, e107538.
  9. Guastello, S.J.; Pincus, D.; Gunderson, P.R. Electrodermal arousal between participants in a conversation: Nonlinear dynamics and linkage effects. Nonlinear Dyn. Psychol. Life Sci. 2006, 10, 365–399.
  10. Mønster, D.; Håkonsson, D.D.; Eskildsen, J.K.; Wallot, S. Physiological evidence of interpersonal dynamics in a cooperative production task. Physiol. Behav. 2016, 156, 24–34.
  11. Bouny, P.; Arsac, L.M.; Touré Cuq, E.; Deschodt-Arsac, V. Entropy and Multifractal-Multiscale Indices of Heart Rate Time Series to Evaluate Intricate Cognitive-Autonomic Interactions. Entropy 2021, 23, 663.
  12. Blons, E.; Arsac, L.; Gilfriche, P.; Deschodt-Arsac, V. Multiscale Entropy of Cardiac and Postural Control Reflects a Flexible Adaptation to a Cognitive Task. Entropy 2019, 21, 1024.
  13. Thayer, J.F.; Lane, R.D. Claude Bernard and the heart–brain connection: Further elaboration of a model of neurovisceral integration. Neurosci. Biobehav. Rev. 2009, 33, 81–88.
  14. Blons, E.; Arsac, L.M.; Grivel, E.; Lespinet-Najib, V.; Deschodt-Arsac, V. Physiological Resonance in Empathic Stress: Insights from Nonlinear Dynamics of Heart Rate Variability. Int. J. Environ. Res. Public Health 2021, 18, 2081.
  15. Podobnik, B.; Stanley, H.E. Detrended Cross-Correlation Analysis: A New Method for Analyzing Two Nonstationary Time Series. Phys. Rev. Lett. 2008, 100, 084102.
  16. Peng, C.K.; Buldyrev, S.V.; Havlin, S.; Simons, M.; Stanley, H.E.; Goldberger, A.L. Mosaic organization of DNA nucleotides. Phys. Rev. E 1994, 49, 1685–1689.
  17. Richman, J.; Moorman, J. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol.-Heart Circ. Physiol. 2000, 278, H2039–H2049.
  18. He, J.; Shang, P.; Xiong, H. Multidimensional scaling analysis of financial time series based on modified cross-sample entropy methods. Phys. A 2018, 500, 210–221.
  19. Xie, H.; Zheng, Y.; Guo, J.; Chen, X. Cross-fuzzy entropy: A new method to test pattern synchrony of bivariate time series. Inf. Sci. 2010, 180, 1715–1724.
  20. Wang, F.; Zhao, W.; Jiang, S. Detecting asynchrony of two series using multiscale cross-trend sample entropy. Nonlinear Dyn. 2020, 99, 1451–1465.
  21. Farrell, H.; O’Connor, F. The CNN Fear and Greed Index as a predictor of US equity index returns: Static and time-varying Granger causality. Financ. Res. Lett. 2025, 72, 106492.
  22. Seth, A.K.; Barrett, A.B.; Barnett, L. Granger Causality Analysis in Neuroscience and Neuroimaging. J. Neurosci. 2015, 35, 3293–3297.
  23. Scharf, L.; Wang, Y. Testing for Granger causality using a partial coherence statistic. Signal Process. 2023, 213, 109190.
  24. Seth, A.K. A MATLAB toolbox for Granger causal connectivity analysis. J. Neurosci. Methods 2010, 186, 262–273.
  25. Kaminski, M.; Blinowska, K. A new method of the description of the information flow in the brain structures. Biol. Cybern. 1991, 65, 203–210.
  26. Van Mierlo, P.; Carrette, E.; Hallez, H.; Vonck, K.; Van Roost, D.; Boon, P.; Staelens, S. Accurate epileptogenic focus localization through time-variant functional connectivity analysis of intracranial electroencephalographic signals. NeuroImage 2011, 56, 1122–1133.
  27. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Heart rate variability: Standards of measurement, physiological interpretation, and clinical use. Eur. Heart J. 1996, 17, 354–381.
  28. Najim, M. Modeling, Estimation and Optimal Filtering in Signal Processing; Wiley: Hoboken, NJ, USA, 2010.
  29. Markel, J.D.; Gray, A.H. On Autocorrelation Equation Applied to Speech Analysis. IEEE Trans. Audio Electroacoust. 1973, 21, 69–79.
  30. Atal, B.S.; Hanauer, S.L. Speech Analysis and Synthesis by Linear Prediction of the Speech Wave. J. Acoust. Soc. Am. 1971, 50, 637–655.
  31. Kay, S.M. Noise Compensation for Autoregressive Spectral Estimates. IEEE Trans. Acoust. Speech Signal Process. 1980, 28, 292–303.
  32. Chan, Y.; Langford, R. Spectral estimation via the high-order Yule-Walker equations. IEEE Trans. Acoust. Speech Signal Process. 1982, 30, 689–698.
  33. Lee, T. Large sample identification and spectral estimation of noisy multivariate autoregressive processes. IEEE Trans. Acoust. Speech Signal Process. 1983, 31, 76–82.
  34. Zheng, W.X. Unbiased Identification of Autoregressive Signals Observed in Colored Noise. In Proceedings of the IEEE-ICASSP ‘98, Seattle, WA, USA, 15 May 1998; Volume 4, pp. 2329–2332.
  35. Zheng, W.X. A least-squares based method for autoregressive signal in presence of noise. IEEE Trans. Circuits Syst. II Analog. Digit. Signal Process. 1999, 46, 531–534.
  36. Zheng, W.X. Autoregressive Parameter Estimation from Noisy Data. IEEE Trans. Circuits Syst. II Analog. Digit. Signal Process. 2000, 47, 71–75.
  37. Zheng, W.X. Fast Identification of Autoregressive Signals from Noisy Observations. IEEE Trans. Circuits Syst. II Express Briefs 2005, 52, 43–48.
  38. Xia, Y.; Zheng, W.X. Novel parameter estimation of autoregressive signals in the presence of noise. Automatica 2015, 62, 98–105.
  39. Mahmoudi, A.; Karimi, M. Inverse filtering based method for estimation of noisy autoregressive signals. Signal Process. 2011, 91, 1659–1664.
  40. Esfandiari, M.; Vorobyov, S.A.; Karimi, M. New estimation methods for autoregressive process in the presence of white observation noise. Signal Process. 2020, 171, 107480.
  41. Nehorai, A.; Stoica, P. Adaptive algorithms for constrained ARMA signals in the presence of noise. IEEE Trans. Acoust. Speech Signal Process. 1988, 36, 1282–1291.
  42. Deriche, M. AR parameter estimation from noisy data using the EM algorithm. In Proceedings of the ICASSP ’94, Adelaide, SA, Australia, 19–22 April 1994; Volume 4, pp. 69–72.
  43. Gannot, S.; Burshtein, D.; Weinstein, E. Iterative and Sequential Kalman Filter-Based Speech Enhancement Algorithms. IEEE Trans. Speech Audio Process. 1998, 6, 373–385.
  44. Kuropatwinski, M.; Kleijn, B. On the EM algorithm for the estimation of speech AR parameters in noise. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 7044–7048.
  45. Labarre, D.; Grivel, E.; Berthoumieu, Y.; Todini, E.; Najim, M. Consistent estimation of autoregressive parameters from noisy observations based on two interacting Kalman filters. Signal Process. 2006, 86, 2863–2876.
  46. Labarre, D.; Grivel, E.; Najim, M.; Christov, N. Dual H∞ Algorithms for Signal Processing—Application to Speech Enhancement. IEEE Trans. Signal Process. 2007, 55, 5195–5208.
  47. Guidorzi, R.; Diversi, R.; Soverini, U. The Frisch scheme in algebraic and dynamic identification problems. Kybernetika 2008, 44, 585–616.
  48. Diversi, R.; Guidorzi, R.; Soverini, U. Identification of autoregressive models in the presence of additive noise. Int. J. Adapt. Control Signal Process. 2008, 22, 465–481.
  49. Diversi, R. Identification of multichannel AR models with additive noise: A Frisch scheme approach. In Proceedings of the EUSIPCO 2018, Rome, Italy, 3–7 September 2018; pp. 1252–1256.
  50. Mahmoudi, A.; Karimi, M.; Amindavar, H. Parameter estimation of autoregressive signals in presence of colored AR(1) noise as a quadratic eigenvalue problem. Signal Process. 2012, 92, 1151–1156.
  51. Diversi, R.; Ijima, H.; Grivel, E. Prediction error method to estimate the AR parameters when the AR process is disturbed by a colored noise. In Proceedings of the ICASSP, Vancouver, BC, Canada, 26–31 May 2013; pp. 6143–6147.
Table 1. Confusion matrix when the number of samples is equal to 1000.

Generation (row) vs. decision (col.)   Class 1   Class 2   Class 3   Class 4   Uncertainty
Class 1                                  96.8       0         0        0.2        3
Class 2                                   2.27     96.37      0        0.03       1.33
Class 3                                   1.97      0        97.17     0.1        0.76
Class 4                                   1.63     35.97     37.87    22.87       1.66
Table 2. Confusion matrix when the number of samples is equal to 3000.

Generation (row) vs. decision (col.)   Class 1   Class 2   Class 3   Class 4   Uncertainty
Class 1                                  97.67      0         0        0.07       2.27
Class 2                                   1.50     97.37      0        0.03       1.10
Class 3                                   1.63      0        97.27     0.07       1.03
Class 4                                   0.80     31.83     31.57    34.60       1.20
Table 3. Confusion matrix when the number of samples is equal to 10,000.

Generation (row) vs. decision (col.)   Class 1   Class 2   Class 3   Class 4   Uncertainty
Class 1                                  96.67      0.07      0.03     0.03       3.20
Class 2                                   1.37     97.93      0        0.03       0.67
Class 3                                   1.20      0        97.90     0.00       0.90
Class 4                                   0.40     26.53     26.43    45.93       0.70
Table 4. Decision made on each dyad; D and ND, respectively, mean dependency and no dependency.

Dyad Number   Phase 1   Phase 2   Phase 3   Phase 4
no. 1           ND        ND        ND        ND
no. 2           ND        ND        ND        D
no. 3           ND        ND        D         D
no. 4           ND        ND        ND        ND
no. 5           ND        ND        ND        ND
no. 6           ND        ND        ND        ND
no. 7           ND        ND        ND        ND
no. 8           ND        ND        ND        ND
no. 9           ND        ND        ND        ND
no. 10          ND        ND        ND        ND
no. 11          D         ND        ND        D
no. 12          ND        ND        ND        ND
no. 13          ND        ND        ND        ND
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
