Proceeding Paper

Rényi Transfer Entropy Estimators for Financial Time Series †

Petr Jizba, Hynek Lavička and Zlata Tabachová

Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Břehová 7, 115 19 Praha 1, Czech Republic
* Author to whom correspondence should be addressed.
† Presented at the 7th International Conference on Time Series and Forecasting, Gran Canaria, Spain, 19–21 July 2021.
‡ These authors contributed equally to this work.
§ Current address: Department of Institutional, Environmental and Experimental Economics, University of Economics in Prague, 130 67 Prague, Czech Republic.
Eng. Proc. 2021, 5(1), 33; https://doi.org/10.3390/engproc2021005033
Published: 30 June 2021
(This article belongs to the Proceedings of The 7th International Conference on Time Series and Forecasting)

Abstract

In this paper, we discuss the statistical coherence between financial time series in terms of Rényi’s information measure or entropy. In particular, we tackle the issue of the directional information flow between bivariate time series in terms of Rényi’s transfer entropy. The latter represents a measure of information that is transferred only between certain parts of underlying distributions. This fact is particularly relevant in financial time series, where the knowledge of “black swan” events such as spikes or sudden jumps is of key importance. To put some flesh on the bare bones, we illustrate the essential features of Rényi’s information flow on two coupled GARCH(1,1) processes.

1. Introduction

The linear framework for measuring and testing causality has been widely applied in a number of fields. In finance, one typically uses Granger’s linear regression model to study the internal cross-correlations between various market activities. The correlation functions have, however, at least two limitations. First, they measure only linear relations, although it is clear that linear models do not faithfully reflect real market interactions. Second, all they determine is whether two time series (e.g., two stock-index series) have correlated movement. They, however, do not indicate which series affects which, or in other words, they do not provide any directional information about cause and effect. However, there is extensive literature on causality modeling that goes beyond the linear regression model, e.g., applying and combining mathematical logic, graph theory, Markov models, Bayesian probability, etc. (for an extensive review see, e.g., [1]). We will focus here on the information-theoretic approaches, which understand causality as a phenomenon that can be not only detected or measured but also quantified. A particularly important quantifier of the information flow between two time series is the so-called transfer entropy (TE).
In his 2000 seminal paper, Schreiber [2] used Shannon’s information measure to formulate the concept of TE, which is a version of mutual information operating on conditional probabilities. TE is designed to detect the directed exchange of information between two stochastic variables, conditioned on common history and inputs. An advantage of information-theoretic measures, in comparison with, say, the standard Granger causality, is that they are sensitive to nonlinear signal properties, as they do not rely on linear regression models. A limitation of TEs is that they are, by their very formulation, restricted to bivariate situations. In addition, information-theoretic measures often require substantially more data than regression methods. It can also be shown that for Gaussian variables, Granger causality and transfer entropy are entirely equivalent [3]. For a comparison of TEs with other causal measures, including various implementations of the Granger causality, see, e.g., Ref. [4].
Shannonian TE was generalized to the class of α-Rényi transfer entropies by Jizba et al. in Ref. [5]. The corresponding Rényi TE can be defined in much the same way as its Shannonian counterpart. In particular, one can utilize the concept of mutual information of order α to quantify the directed exchange of information. Because Rényi’s entropy (RE) works, unlike Shannon’s entropy, with rescaled distributions, it allows addressing the information flow between different parts of the underlying distributions in bivariate time series. Consequently, Rényi’s TE provides more detailed information concerning the excess (or lack) of information in various parts of the underlying distribution resulting from updating the distribution on the condition that a second time series is known. This is particularly relevant in the context of financial time series, where the knowledge of tail-part (or “black swan”) events such as spikes or sudden jumps bears direct implications, e.g., in various risk-reducing formulas in portfolio theory.
In order to quantify the strength of Rényian information flow and its directionality from high-quality time series data, special care has to be taken to select suitable estimators of Rényi’s entropy. The aim of this paper is to demonstrate that the estimator introduced by Leonenko et al. [6] is an appropriate instrument for this task. We illustrate this by analyzing Rényi’s information flow between two coupled GARCH(1,1) processes.
The paper is organized in the following way. In Section 2, we discuss some essentials of Rényi’s entropy and the ensuing Rényi transfer entropy. Section 3 introduces the concept of effective transfer entropy and briefly discusses the pros and cons of Leonenko et al.’s Rényi entropy estimator. In Section 4, we set up our model system, namely a system of two coupled GARCH processes, that will serve as the generating processes for the bivariate time series to be analyzed. Section 5 is dedicated to the analysis of the effective Rényi’s TE for coupled GARCH(1,1) processes. Finally, in Section 6, we provide some concluding remarks and propose some further generalizations.

2. Rényi Transfer Entropy

In this section, we briefly review some essentials of Rényi’s entropy and the ensuing directional information flow that will be needed in the following sections.

2.1. Rényi Entropy

Rényi’s information measure (also known as Rényi’s entropy) was introduced by Rényi in his seminal 1961 paper [7] as a one-parameter generalization of Shannon’s entropy. Let α ≥ 0; then Rényi’s entropy of a probability distribution function P associated with a discrete random variable X is defined as
$$H_\alpha[P] \;=\; \frac{1}{1-\alpha}\,\log_2 \sum_{x \in X} p^{\alpha}(x). \tag{1}$$
In particular, for α = 0, we obtain the so-called Hartley entropy, while the cases α = 2 and α = +∞ yield the collision entropy (which is closely related to the correlation dimension) and the min-entropy, respectively. Note that for α → 1, Rényi’s entropy converges to Shannon’s entropy by L’Hospital’s rule. It can be shown [8] that Rényi’s entropy is a non-negative, monotonically decreasing function of α, thus
$$H_0 \;\geq\; H_1 \;\geq\; H_2 \;\geq\; \cdots \;\geq\; H_{+\infty} \;\geq\; 0. \tag{2}$$
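To make Definition (1) concrete, the following minimal Python sketch (the function name `renyi_entropy` is our illustrative choice, not part of the paper) evaluates $H_\alpha$ for a discrete distribution and illustrates the monotonicity (2):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Rényi entropy (in bits) of a discrete distribution p, cf. Equation (1).

    The alpha -> 1 limit reproduces Shannon's entropy, so alpha = 1 is
    treated separately here.
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                  # convention: 0 * log 0 = 0
    if np.isclose(alpha, 1.0):
        return float(-np.sum(p * np.log2(p)))     # Shannon limit
    return float(np.log2(np.sum(p ** alpha)) / (1.0 - alpha))

# Monotonicity in alpha, cf. Equation (2): H_0 >= H_1 >= H_2 >= ... >= 0.
p = [0.5, 0.25, 0.15, 0.10]
print([round(renyi_entropy(p, a), 3) for a in (0.0, 0.5, 1.0, 2.0, 10.0)])
```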
Particularly important in our following considerations will be the so-called conditional Rényi entropy that is defined as [8,9]
$$H_\alpha[P|Q] \;=\; \frac{1}{1-\alpha}\,\log_2 \frac{\sum_{x \in X,\, y \in Y} p^{\alpha}(x,y)}{\sum_{y \in Y} p^{\alpha}(y)} \;\equiv\; \frac{1}{1-\alpha}\,\log_2 \sum_{y \in Y} \rho_\alpha(y) \sum_{x \in X} p^{\alpha}(x|y), \tag{3}$$
where Q is the probability distribution function of a random variable Y, and $\rho_\alpha$, defined as
$$\rho_\alpha(x) \;=\; \frac{p^{\alpha}(x)}{\sum_{x \in X} p^{\alpha}(x)}, \tag{4}$$
is known as the escort distribution [10]. The latter is also termed a “zooming” distribution because it scales (deforms and re-emphasizes) different parts of the underlying distribution function P. In particular, for α < 1, the central part of the distribution is flattened, i.e., high-probability events are suppressed, and low-probability events are emphasized. This effect is more pronounced for smaller α. In the opposite situation, when α > 1, low-probability events are suppressed, and high-probability events are emphasized. Thus, for α → 0, the escort distribution tends to a uniform-like shape, and for α → +∞, to a sharply peaked (Dirac δ-function-like) distribution. Because this behavior is also true for the conditional RE, it will be seen that REs are instrumental in the understanding of the (directional) information flow between bivariate time series.
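The zooming effect of Equation (4) can be seen directly in a short sketch (again an illustrative helper, with made-up numbers):

```python
import numpy as np

def escort(p, alpha):
    """Escort ("zooming") distribution rho_alpha of Equation (4)."""
    w = np.asarray(p, dtype=float) ** alpha
    return w / w.sum()

p = [0.70, 0.20, 0.08, 0.02]        # one frequent event, two rare "tail" events
print(escort(p, 0.5))               # alpha < 1: rare (tail) events gain weight
print(escort(p, 3.0))               # alpha > 1: the dominant event gains weight
```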

2.2. Shannon’s and Rényi’s Transfer Entropies

The concept of transfer entropy was introduced by Schreiber in Ref. [2] and, independently, under the name conditional mutual information, by Paluš in Ref. [11]. In these works, TE represents a measure of a directional (Shannonian) information flow defined by means of the Kullback–Leibler divergence on conditional transition probabilities of two Markov processes X and Y as
$$T_{X \to Y}(k,l) \;=\; \sum_{x \in X,\, y \in Y} p\big(y_{n+1}, y_n^{(l)}, x_n^{(k)}\big)\, \log_2 \frac{p\big(y_{n+1} \,\big|\, y_n^{(l)}, x_n^{(k)}\big)}{p\big(y_{n+1} \,\big|\, y_n^{(l)}\big)}. \tag{5}$$
Here, l and k denote the Markov orders of the processes Y and X, respectively; e.g., $x_n^{(k)} \equiv (x_n, \ldots, x_{n-k+1})$. For independent processes, TE is equal to zero. Unlike mutual information, TE is not a symmetric measure; therefore, $T_{Y \to X} \neq T_{X \to Y}$, which becomes clear if we rewrite (5) as
$$T_{X \to Y}(k,l) \;=\; H\big(y_{n+1} \,\big|\, y_n^{(l)}\big) \;-\; H\big(y_{n+1} \,\big|\, y_n^{(l)}, x_n^{(k)}\big). \tag{6}$$
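As an illustration of Equation (6), a rough plug-in sketch for k = l = 1 might look as follows; the discretization into equiprobable bins, the helper names, and the bin count are our illustrative choices, and a serious analysis would require the bias corrections discussed in Section 3:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=6):
    """Plug-in estimate (in bits) of the Shannon TE of Equation (6), k = l = 1."""
    def discretize(s):
        edges = np.quantile(s, np.linspace(0.0, 1.0, bins + 1)[1:-1])
        return np.searchsorted(edges, s)

    def entropy(*cols):
        counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
        prob = counts / counts.sum()
        return -np.sum(prob * np.log2(prob))

    xd = discretize(np.asarray(x, dtype=float))
    yd = discretize(np.asarray(y, dtype=float))
    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]
    # T_{X->Y} = H(y_{n+1} | y_n) - H(y_{n+1} | y_n, x_n), via joint entropies.
    h1 = entropy(y_next, y_now) - entropy(y_now)
    h2 = entropy(y_next, y_now, x_now) - entropy(y_now, x_now)
    return h1 - h2
```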
Now we can use Equation (6) to define Rényi’s transfer entropy (RTE). Substituting $H_\alpha$ for H, we obtain [5]
$$T^{R}_{\alpha,\,X \to Y}(k,l) \;=\; \frac{1}{1-\alpha}\,\log_2 \frac{\sum \rho_\alpha\big(y_n^{(l)}\big)\, p^{\alpha}\big(y_{n+1} \,\big|\, y_n^{(l)}\big)}{\sum \rho_\alpha\big(y_n^{(l)}, x_n^{(k)}\big)\, p^{\alpha}\big(y_{n+1} \,\big|\, y_n^{(l)}, x_n^{(k)}\big)}. \tag{7}$$
It can be checked that Definition (5) is a special case of (7) for α = 1, which we will refer to as Shannon’s transfer entropy (STE). Most of the aforementioned properties of STE remain valid for RTE. The most important difference is that zero values of RTE for α ≠ 1 do not imply the independence of the processes X and Y (i.e., that all-order cross-correlations are zero); however, if X and Y are independent, RTE is zero for any α. In addition, in contrast to the Shannonian case, RTE can also attain negative values. The reason for this is not difficult to understand. For instance, for α < 1, the negativity of $T^{R}_{\alpha,X \to Y}(k,l)$ simply means that the knowledge of the historical values of both X and Y flattens the tail part of the anticipated distribution function for the price value $y_{n+1}$ more than the historical values of Y alone would do. In other words, extra knowledge of the historical values of X shows that there is a greater risk in the next time step of Y than one would expect by only knowing the historical data of Y. In this sense, $T^{R}_{\alpha,X \to Y}(k,l)$ represents a rating factor that quantifies the gain or loss in risk concerning the behavior of Y at the future value $y_{n+1}$ after the historical values of X up to $x_n$ are accounted for [5]. This information can further be used in financial decisions concerning risk analysis, portfolio selection, or derivative pricing.
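For orientation, a plug-in sketch of Equation (7) for k = l = 1 on discretized data might read as follows; all helper names and the binning scheme are our illustrative choices, written via the escort weights of Equation (4):

```python
import numpy as np
from collections import Counter

def renyi_te(x, y, alpha, bins=6):
    """Plug-in sketch (in bits) of the Rényi TE of Equation (7) for k = l = 1."""
    def discretize(s):
        edges = np.quantile(s, np.linspace(0.0, 1.0, bins + 1)[1:-1])
        return np.searchsorted(edges, s)

    xd = discretize(np.asarray(x, dtype=float))
    yd = discretize(np.asarray(y, dtype=float))
    y1, y0, x0 = yd[1:], yd[:-1], xd[:-1]

    def term(history):
        """sum_h rho_alpha(h) * sum_{y'} p^alpha(y' | h) over histories h."""
        joint = Counter(zip(y1, *history))        # counts of (y', h)
        hist = Counter(zip(*history))             # counts of h
        w = {h: c ** alpha for h, c in hist.items()}   # unnormalized escort weights
        z = sum(w.values())
        total = 0.0
        for h, c in hist.items():
            inner = sum((cnt / c) ** alpha
                        for key, cnt in joint.items() if key[1:] == h)
            total += (w[h] / z) * inner
        return total

    # Numerator conditions on y_n only; denominator on (y_n, x_n).
    return np.log2(term((y0,)) / term((y0, x0))) / (1.0 - alpha)
```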

3. Financial Data Processing and Rényi Entropy Estimation

Financial data recorded on stock markets are typically nonstationary, discrete, and with periods of no trading activity. The last two factors can be dealt with technically by means of pertinent data processing methods. The nonstationarity might be problematic for the Markov assumption that we used in the definition of the transfer entropy; however, this is typically resolved by introducing a new variable that can be regarded as asymptotically stationary [12].
Following Samuelson’s work on geometric Brownian motion [13], it has become clear that the asset log-return (rather than raw return) is the relevant financial variable. So, for our future convenience, it is suitable to define the log-return associated with X as
$$R_{X,\tau} \;=\; \log \frac{x_{t+\tau}}{x_t}, \tag{8}$$
where x t is the value of the process X at time t.
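In code, Equation (8) amounts to a one-liner (a sketch; `tau` is measured in units of the sampling step):

```python
import numpy as np

def log_returns(x, tau=1):
    """Log-returns R_{X,tau} of Equation (8) for a positive price series x."""
    x = np.asarray(x, dtype=float)
    return np.log(x[tau:] / x[:-tau])
```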
Another problem that might hinder the correct estimation of the transfer entropy is the limited number of recorded data. To this end, Marschinski and Kantz introduced in [12] the effective transfer entropy, which, for Rényi’s TE, can be rewritten in the form
$$T^{R,\mathrm{eff}}_{\alpha,\,X \to Y} \;=\; T^{R}_{\alpha,\,X \to Y} \;-\; T^{R}_{\alpha,\,X_{\mathrm{shuffled}} \to Y}. \tag{9}$$
The effective RTE is thus the difference between two RTEs, where the second one is computed on a shuffled X series. Here, the shuffling is performed in terms of the surrogate data technique [14]. In essence, a surrogate data series has the same mean, the same variance, the same autocorrelation function, and, therefore, the same power spectrum as the original series, but the phase relations are destroyed. Consequently, all potential correlations between X and Y are removed, which implies that $T^{R}_{\alpha,\,X_{\mathrm{shuffled}} \to Y}$ should be zero. In practice, this is typically not the case, despite the fact that there is no obvious structure in the data. The nonzero value of $T^{R}_{\alpha,\,X_{\mathrm{shuffled}} \to Y}$ must then be a consequence of the finite dataset. Definition (9) then simply ensures that such finite-sample pseudo-effects are removed.
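A minimal sketch of the surrogate step and of Equation (9) is given below. It assumes a generic TE estimator `te(x, y)` (for instance, one of the plug-in sketches from Section 2.2); FFT phase randomization is one common way to realize the surrogate data technique of [14]:

```python
import numpy as np

def phase_surrogate(x, rng=None):
    """Phase-randomized surrogate: same power spectrum (hence mean, variance,
    and autocorrelation) as x, but with randomized Fourier phases [14]."""
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.rfft(np.asarray(x, dtype=float))
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spectrum.size)
    phases[0] = 0.0                 # keep the zero-frequency (mean) term intact
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

def effective_te(te, x, y, n_surrogates=20):
    """Effective TE of Equation (9): subtract the average TE over shuffled X."""
    bias = np.mean([te(phase_surrogate(x), y) for _ in range(n_surrogates)])
    return te(x, y) - bias

# The balance of flow, Equation (10) below, is then simply
# effective_te(te, x, y) - effective_te(te, y, x).
```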
By its very definition, the effective RTE is not symmetric in X and Y. So, in order to visualize and quantify the disparity between the X → Y and Y → X flows, it is convenient to define the balance of flow (or net flow) of effective RTE as
$$T^{R,\,\mathrm{bal\;eff}}_{\alpha,\,X \to Y} \;=\; T^{R,\mathrm{eff}}_{\alpha,\,X \to Y} \;-\; T^{R,\mathrm{eff}}_{\alpha,\,Y \to X}. \tag{10}$$
The concept of the balance of flow of effective RTE will be employed in Section 5.
In their original work, Marschinski and Kantz [12] employed the effective STE to compute the information transfer between two financial time series. They also used a partitioning method to discretize their financial data. This is a good first approach to data processing, which was also employed in our earlier work [5]. However, partitioning can cause the loss of valuable correlations present in the data. That is why, in the following, we test another method of data processing and evaluate RTE using the estimators of Rényi’s entropy introduced by Leonenko et al. [6].

Rényi’s Entropy Estimation

Estimators of Shannon’s entropy based on the k-nearest-neighbor search in one-dimensional spaces were studied in statistics almost 60 years ago by Dobrushin [15] and, a short while later, by Vašíček [16]. Unfortunately, they cannot be generalized directly to higher-dimensional spaces, and hence they are inapplicable to TEs. Presently, there exist a number of suitable entropy estimators, most of them in the Shannonian framework (for a review, see, e.g., [17]). Here, we will present the k-nearest-neighbor entropy estimator for higher-dimensional spaces introduced by Leonenko et al. [6]. The latter is not only suitable for RE evaluation, but it can also easily be implemented numerically so that RTE can be computed in real time, which is clearly relevant in finance, for instance, in various risk-aversion decisions. An explicit empirical analysis based on this estimator will be presented in Section 5.
Leonenko et al.’s [6] approach is based on an estimator of RE from a finite sequence of N points, and it is defined as
$$\hat{H}_{N,k,\alpha} \;=\; \frac{1}{1-\alpha}\,\log_2\!\left[\frac{\Gamma(k)}{\Gamma(k+1-\alpha)}\,\frac{\pi^{\frac{m(1-\alpha)}{2}}}{\Gamma^{1-\alpha}\!\left(\frac{m}{2}+1\right)}\,(N-1)^{1-\alpha}\sum_{i=1}^{N}\rho_k(i)^{m(1-\alpha)}\right] \;-\; \frac{\log_2 N}{1-\alpha}. \tag{11}$$
Here, Γ(x) is Euler’s gamma function, m is the dimension of the dataset space, and $\rho_k(i)$ is the Euclidean distance from the data point i to its k-th nearest neighbor. The estimator thus depends on the number N of data points in the dataset and on the order k of the nearest neighbor used.
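A sketch of Estimator (11) built on SciPy’s k-d tree is shown below; the implementation is ours (assuming α ≠ 1 and k + 1 − α > 0), not the authors’ original code:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import gammaln

def renyi_entropy_knn(data, k, alpha):
    """k-nearest-neighbor Rényi entropy estimate (in bits), cf. Equation (11).

    data: (N, m) array of N points in an m-dimensional space;
    rho_k(i): Euclidean distance from point i to its k-th nearest neighbor.
    """
    data = np.asarray(data, dtype=float)
    if data.ndim == 1:
        data = data[:, None]              # a scalar time series becomes (N, 1)
    n, m = data.shape
    # Query k+1 neighbors, because the nearest "neighbor" of a point is itself.
    dist, _ = cKDTree(data).query(data, k=k + 1)
    rho_k = dist[:, -1]
    # log of V_m, the volume of the unit ball in R^m.
    log_vm = 0.5 * m * np.log(np.pi) - gammaln(0.5 * m + 1)
    log_inner = (gammaln(k) - gammaln(k + 1 - alpha)
                 + (1.0 - alpha) * (log_vm + np.log(n - 1))
                 + np.log(np.sum(rho_k ** (m * (1.0 - alpha)))))
    return log_inner / ((1.0 - alpha) * np.log(2.0)) - np.log2(n) / (1.0 - alpha)
```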
The advantages of Estimator (11), in contrast to the standard bin method that estimates the probability within a range, are:
  • Relative accuracy for small datasets;
  • Applicability to high-dimensional data;
  • Combining the estimates for different neighbor orders k provides statistics for the estimation.
The disadvantages of the method are the computational complexity of the algorithm and the complicated underlying data structure. The algorithm can, however, be optimized so that it runs in real time. We also stress that, in contrast to other RE estimators, such as the fixed-ball estimator [17], Estimator (11) is not confined to a certain range of α values.
The estimators of the average and the standard deviation for a dataset of size N and the RE parameter α, with Bessel’s correction, are defined, respectively, as
$$\bar{H}_{N,\alpha} \;=\; \frac{1}{n}\sum_{k=1}^{n} \hat{H}_{N,k,\alpha}\,, \qquad \sigma_{H_{N,\alpha}} \;=\; \sqrt{\frac{\sum_{k=1}^{n}\big(\hat{H}_{N,k,\alpha}-\bar{H}_{N,\alpha}\big)^{2}}{N-1}}\,, \tag{12}$$
where n is the highest order of the nearest neighbor used. Theoretically, we should use n = N, but such a setup would require an enormous amount of computer memory to hold the distances. So, in our calculations, we used n = 50, which turned out to be a good compromise between accuracy and computation time.
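The averaging of Equation (12) over the neighbor orders k = 1, …, n can then be sketched as follows (using `renyi_entropy_knn` from the sketch above and keeping the N − 1 normalization written in Equation (12)):

```python
import numpy as np

def renyi_entropy_stats(data, alpha, n_orders=50):
    """Mean and spread of the k-NN estimates over k = 1..n_orders, Equation (12)."""
    h = np.array([renyi_entropy_knn(data, k, alpha)
                  for k in range(1, n_orders + 1)])
    sigma = np.sqrt(np.sum((h - h.mean()) ** 2) / (len(data) - 1))
    return h.mean(), sigma
```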

4. Model Setup: Coupled GARCH Processes

Assuming independent events, as is typically done in the financial context, is not very realistic. To capture the time dependence between different log-returns, it is often convenient to assume that the volatility (the square root of the variance of log-returns) of a given financial asset is a time-dependent stochastic process. This assumption is called heteroskedasticity, where each term in the time series is described by a generally different variance. Typically, asset returns are not even close to being independent and identically distributed, and their distributions are often heavy-tailed. Another observed fact is the tendency of large changes in prices to be followed by large changes and of small changes by small changes. This is known as volatility clustering. A stochastic process that is able to capture the distributional stylized facts (such as heavy tails or high peakedness), as well as the time series stylized facts (such as volatility clustering), was introduced by Engle in 1982 [18] under the name autoregressive conditional heteroskedasticity, or simply ARCH.
In Engle’s original ARCH(q) model, the conditional variance at time t, i.e., $\sigma_t^2$, was postulated to be a linear function of the squares of the past q observations modeled by a stochastic process $x_t \sim \mathcal{N}(0, \sigma_t^2)$. Unfortunately, in many applications of the ARCH model, a long lag length, and therefore a large number of parameters, is required. This makes the parameter estimation quite impractical. To circumvent this problem, Bollerslev [19] generalized the ARCH process to the so-called generalized ARCH (or simply GARCH) process. The latter is (similarly to ARCH) a linear function of the squares of past observations plus a linear combination of the past values of the variance. For instance, the GARCH(q,p) process specifies the conditional variance as follows:
$$\sigma_t^2 \;=\; \alpha_0 + \alpha_1 x_{t-1}^2 + \cdots + \alpha_q x_{t-q}^2 + \beta_1 \sigma_{t-1}^2 + \cdots + \beta_p \sigma_{t-p}^2, \tag{13}$$
where $\alpha_0 > 0$, $\alpha_i \geq 0$ $(i = 1, \ldots, q)$, and $\beta_k \geq 0$ $(k = 1, \ldots, p)$ are control parameters. It is also common to require that the covariance stationarity condition holds, which implies that $\sum_{i=1}^{q} \alpha_i + \sum_{k=1}^{p} \beta_k < 1$ (see, e.g., [19]). In most empirical applications, it turns out that the simple choice p = q = 1 is already able to correctly capture the volatility dynamics of financial data.
To test the RTE with the RE estimator (11), we will examine two GARCH(1,1) processes with a unidirectional coupling. The constant coupling parameter ϵ will allow us, in turn, to probe how the information flow between the two GARCH(1,1) processes changes with the coupling strength. To be more specific, let us consider two coupled GARCH(1,1) processes. In particular, let $x_t \sim \mathcal{N}(0, \sigma_t^2)$ be a GARCH(1,1) process with $\alpha_0 = 0.2$, $\alpha_1 = \beta_1 = 0.4$, and let $y_t \sim \mathcal{N}(0, \eta_t^2)$ be a GARCH(1,1) process with $\alpha_0 = 0.3$, $\alpha_1 = \beta_1 = 0.35$ such that
$$\eta_t^2 \;=\; \alpha_0 + \alpha_1 y_{t-1}^2 + \beta_1 \eta_{t-1}^2 + \epsilon\, x_{t-1}^2. \tag{14}$$
In this way, the coupling between the stochastic variables is not direct but is mediated through the variance. Nonlinear coupling in the variances of the GARCH processes is a good approximation of the possible couplings in real-world market data, in which one asset, say X, can cause volatility η = η(X) that will, in turn, influence another asset, say Y. For future convenience, we will refer to the process X as the master process and to Y as the slave process.
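A simulation sketch of the coupled pair (13) and (14), with the parameter values quoted above, might read as follows (the initialization at the ϵ = 0 unconditional variances is our choice):

```python
import numpy as np

def coupled_garch(n, eps, rng=None):
    """Master X and slave Y: GARCH(1,1) processes coupled via Equation (14)."""
    rng = np.random.default_rng() if rng is None else rng
    x, y = np.zeros(n), np.zeros(n)
    sig2 = 0.2 / (1.0 - 0.4 - 0.4)        # unconditional variance of X
    eta2 = 0.3 / (1.0 - 0.35 - 0.35)      # unconditional variance of Y at eps = 0
    for t in range(1, n):
        sig2 = 0.2 + 0.4 * x[t - 1] ** 2 + 0.4 * sig2
        eta2 = 0.3 + 0.35 * y[t - 1] ** 2 + 0.35 * eta2 + eps * x[t - 1] ** 2
        x[t] = rng.normal(0.0, np.sqrt(sig2))
        y[t] = rng.normal(0.0, np.sqrt(eta2))
    return x, y
```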

5. Analysis of Effective RTE for Coupled GARCH(1,1) Processes

In this section, we use Estimator (11) for Rényi’s entropy to compute the effective RTE (9) between the coupled GARCH(1,1) processes (13) and (14). The calculations are performed directly on the log-returns (8) rather than on amplitudes. A typical dependence of the RTE (7) on both α and the coupling strength ϵ is depicted in Figure 1. The ensuing effective RTEs (9) with different Markov parameters are presented in Figure 2. In order to quantify the master–slave relationship in terms of information flow, the balance of flow (10) is calculated (see Figure 3). The respective standard deviations are depicted in Figure 4.
Based on experience with the STE, one could anticipate that the effective RTE $T^{R,\mathrm{eff}}_{\alpha,\,X \to Y}$ (see also Figure 1) should increase with the growing value of the coupling parameter ϵ. This expectation is indeed confirmed for α ≥ 1. In fact, even though the trend is noisy, we can detect a clear upward drift. This can be interpreted as an increase in the information flow between the central sectors of the underlying empirical distributions. On the other hand, for α < 1, the drift seems to be absent or even reversed. This fact will be commented on shortly. The aforementioned type of behavior persists even when larger Markov parameters are considered, cf. Figure 2. The smallness of the X → Y information flow can be attributed to the indirect (nonlinear) coupling between X and Y.
From the results depicted in Figure 2, we can study how increasing the values of the Markov parameters of the respective time series influences the value of the effective RTE. It can be observed that when the historical values of the slave process Y are included, the information flow estimates improve (stabilize) significantly. On the other hand, conditioning on additional historical values of the master process X does not seem to change the information flows notably. However, for large histories, it stabilizes the results, as can be seen from the comparison of (2,11) and (11,11) in Figure 2.
The balance of effective transfer entropy presented in Figure 3 is an increasing function of the coupling parameter ϵ for α ≥ 1. Contrary to that, for α < 1, it is a stagnating or decreasing function with large fluctuations. The trend is most distinguishable for (k,l) = (11,11). Thus, one can conclude that the deterministic behavior is laid bare by α > 1, while the stochasticity is captured by α < 1. Positive values of the balance of the effective RTE indicate a higher information flow from the master process to the slave process than in the opposite direction. The (11,11) case therefore confirms the expected master–slave relationship.
The standard deviations of the balance of effective transfer entropy in Figure 4 reveal the statistical stability of the results for α > 1, in contrast to the regime α < 1. The typical size of the fluctuations in the latter case is about 5–10 times larger than in the former. However, the results show some instability across the various interaction strengths ϵ. We attribute this to the insufficient size of the datasets, i.e., to the limited statistics.

6. Conclusions

6.1. Summary

In this paper, we performed extensive computer calculations with a novel method for estimating Rényi’s transfer entropy, which we applied to coupled GARCH time series. We performed an analogous analysis on surrogate datasets and, based on that, calculated the statistics of the balance of the effective Rényi transfer entropy.
The analysis of the two-dimensional GARCH process, in which one component influences the other, by means of Rényi’s transfer entropy with the nearest-neighbor estimators provides insight into the flow of information within the system. The transfer entropy, as expected, increases with the increasing strength of the interaction, and the rate of increase grows with the increasing memory indices. In particular, it increases with the memory of the time series toward which the interaction is directed. This observation holds for both the effective transfer entropy and the balance of the effective transfer entropy.

6.2. Perspectives and Generalizations

Transfer entropy is a powerful tool for revealing the strength and direction of information flow in time series. In combination with the effective use of computing power and modern advances in the mathematical theory of entropy estimation, it is a better tool for investigating nonlinear causality than the Granger test, which is restricted to linear causality (and, for Gaussian time series, is fully equivalent to transfer entropy [3]). The advantage of using Rényi’s entropy with the parameter α is its ability to detect the information flow during extreme events, such as sudden jumps. This is because Rényi’s entropy places emphasis on either the tails or the center of the probability distribution.
Applying this algorithm to financial datasets can be a potent way to reveal information flows among, e.g., different stocks or other assets. It can also serve as a precursor of instability or critical behavior in international markets.

Author Contributions

Conceptualization, P.J.; Formal analysis, H.L. and Z.T.; Methodology, P.J., H.L. and Z.T.; Validation, H.L. and Z.T.; Software design, data structures, computer calculation, and visualization, H.L.; Writing—original draft, P.J.; Writing—review and editing, P.J., H.L. and Z.T. All authors have read and agreed to the published version of the manuscript.

Funding

P.J., H.L., and Z.T. were supported by the Czech Science Foundation Grant No. 19-16066S.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pearl, J. Causality; Cambridge University Press: Cambridge, UK, 2009.
  2. Schreiber, T. Measuring Information Transfer. Phys. Rev. Lett. 2000, 85, 461–464.
  3. Barnett, L.; Barrett, A.B.; Seth, A.K. Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables. Phys. Rev. Lett. 2009, 103, 238701.
  4. Lungarella, M.; Ishiguro, K.; Kuniyoshi, Y.; Otsu, N. Methods for quantifying the causal structure of bivariate time series. Prog. Neurobiol. 2007, 77, 1–37.
  5. Jizba, P.; Kleinert, H.; Shefaat, M. Rényi’s information transfer between financial time series. Physica A 2012, 391, 2971–2989.
  6. Leonenko, N.; Pronzato, L.; Savani, V. A class of Rényi information estimators for multidimensional densities. Ann. Stat. 2008, 36, 2153–2182.
  7. Rényi, A. On measures of entropy and information. Proc. Fourth Berkeley Symp. Math. Statist. Prob. 1961, 1, 547–561.
  8. Rényi, A. Selected Papers of Alfréd Rényi; Akadémiai Kiadó: Budapest, Hungary, 1976; Volume 2.
  9. Jizba, P.; Arimitsu, T. The World According to Rényi: Thermodynamics of Multifractal Systems. Ann. Phys. 2004, 312, 17–57.
  10. Beck, C.; Schlögl, F. Thermodynamics of Chaotic Systems; Cambridge Nonlinear Science Series 4; Cambridge University Press: Cambridge, UK, 1995.
  11. Paluš, M.; Hlaváčková-Schindler, K.; Vejmelka, M.; Bhattacharya, J. Causality detection based on information-theoretic approaches in time series analysis. Phys. Rep. 2007, 441, 1–46.
  12. Marschinski, R.; Kantz, H. Analysing the Information Flow Between Financial Time Series. Eur. Phys. J. B 2002, 30, 275–281.
  13. Samuelson, P.A. Rational Theory of Warrant Pricing. Ind. Manag. Rev. 1965, 6, 13–31.
  14. Keylock, C.J. Constrained surrogate time series with preservation of the mean and variance structure. Phys. Rev. E 2006, 73, 036707.
  15. Dobrushin, R.L. A simplified method of experimentally evaluating the entropy of a stationary sequence. Teor. Veroyatnostei Primen. 1958, 3, 462–464.
  16. Vašíček, O. A test for normality based on sample entropy. J. Roy. Statist. Soc. Ser. B (Methodol.) 1976, 38, 54–59.
  17. Kantz, H.; Schreiber, T. Nonlinear Time Series Analysis; Cambridge University Press: Cambridge, UK, 2010.
  18. Engle, R.F. Autoregressive Conditional Heteroscedasticity with Estimates of the Variance of United Kingdom Inflation. Econometrica 1982, 50, 987–1007.
  19. Bollerslev, T. Generalized Autoregressive Conditional Heteroskedasticity. J. Econom. 1986, 31, 307–327.
Figure 1. Transfer entropy $T^{R}_{\alpha,\,X \to Y}(2,7) \equiv T^{(R)}_{\alpha,\,X \to Y}([0,1],[1],[0,1,2,3,4,5,6])$, where X and Y are the GARCH processes described in Section 4. The coupling strength $\epsilon \in \{0.1, 0.2, \ldots, 2\}$ is on the horizontal axis, and the transfer entropy is on the vertical axis. Each graph represents a different value of $\alpha \in \{0.7, 0.8, \ldots, 1.9\}$.
Figure 2. Effective transfer entropy $T^{R,\mathrm{eff}}_{\alpha,\,X \to Y}(k,l) \equiv T^{(R)}_{\alpha,\,X \to Y}([0,1,\ldots,k-1],[1],[0,1,\ldots,l-1])$, where $(k,l) = (2,2), (2,4), (4,2), (4,4), (2,11), (11,11)$ from left to right and from top to bottom. The coupling parameter $\epsilon$ is on the x-axis.
Figure 3. Balance of effective transfer entropy $T^{R,\,\mathrm{bal\;eff}}_{\alpha,\,X \to Y}(k,l)$ for $(k,l) = (2,2), (2,4), (4,2), (4,4), (2,11), (11,11)$ from left to right and from top to bottom. The coupling parameter $\epsilon$ is on the x-axis.
Figure 4. Standard deviation of the balance of effective transfer entropy $T^{R,\,\mathrm{bal\;eff}}_{\alpha,\,X \to Y}(k,l)$ for $(k,l) = (2,2), (2,4), (4,4), (11,11)$ from left to right and from top to bottom, with $\epsilon$ on the x-axis.