Article

Reducing the Bias of the Smoothed Log Periodogram Regression for Financial High-Frequency Data

Department of Statistics and Operations Research, University of Vienna, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria
* Author to whom correspondence should be addressed.
Econometrics 2020, 8(4), 40; https://doi.org/10.3390/econometrics8040040
Received: 8 March 2020 / Revised: 24 September 2020 / Accepted: 27 September 2020 / Published: 10 October 2020

Abstract

For typical sample sizes occurring in economic and financial applications, the squared bias of estimators for the memory parameter is small relative to the variance. Smoothing is therefore a suitable way to improve the performance in terms of the mean squared error. However, in an analysis of financial high-frequency data, where the estimates are obtained separately for each day and then combined by averaging, the variance decreases with the sample size but the bias remains fixed. This paper proposes a method of smoothing that does not entail an increase in the bias. This method is based on the simultaneous examination of different partitions of the data. An extensive simulation study is carried out to compare it with conventional estimation methods. In this study, the new method outperforms its unsmoothed competitors with respect to the variance and its smoothed competitors with respect to the bias. Using the results of the simulation study for the proper interpretation of the empirical results obtained from a financial high-frequency dataset, we conclude that significant long-range dependencies are present only in the intraday volatility but not in the intraday returns. Finally, the robustness of these findings against daily and weekly periodic patterns is established.
Keywords: long-range dependence; log periodogram regression; smoothed periodogram; subsampling; intraday returns

1. Introduction

After Mandelbrot (1971) had discussed the possibility that the strength of the statistical dependence of stock prices decreases very slowly, several researchers investigated this issue empirically. For example, Greene and Fielitz (1977) found indications of long-range dependence when they applied a technique called range over standard deviation (R/S) analysis (Hurst 1951; Mandelbrot and Wallis 1969; Mandelbrot 1972, 1975) to daily stock return series. This technique is based on the R/S statistic Q_n, which is defined as the range of all partial sums of the deviations of a time series of length n from its mean, divided by its standard deviation. For a large class of short-range dependent processes, Q_n/n^H converges to a non-degenerate random variable if H = 0.5. An analogous result with H ≠ 0.5 holds for long-range dependent processes. The parameter H is called the Hurst coefficient and is used as a measure of long-range dependence. However, Lo (1991) pointed out that the results obtained with this technique may be misleading because of the sensitivity of Q_n to short-range dependence (see also Davis and Harte 1987; Hauser and Reschenhofer 1995) and therefore proposed a Newey and West (1987) type modification of the denominator of the R/S statistic, which is appropriate for general forms of short-range dependence. Contrary to the findings of Greene and Fielitz (1977) and others, he found no evidence of long-range dependence in daily and monthly index returns once possible short-range dependence was properly taken care of. A disadvantage of Lo's (1991) modified R/S analysis is its dependence on an important tuning parameter, namely the truncation lag q, which determines the number of included autocovariances. The general conditions that ensure the consistency of the Newey and West estimator provide little guidance in selecting q in finite samples, and Andrews's (1991) data-dependent rule for choosing q is likewise based on asymptotic arguments.
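To make the R/S construction concrete, the statistic Q_n can be sketched in a few lines of Python (the original studies predate any particular software; the function name `rescaled_range` is our own):

```python
import numpy as np

def rescaled_range(y):
    """Classical R/S statistic Q_n: range of the mean-adjusted partial
    sums of the series divided by its standard deviation."""
    y = np.asarray(y, dtype=float)
    z = np.cumsum(y - y.mean())   # partial sums of deviations from the mean
    return (z.max() - z.min()) / y.std()
```

Roughly speaking, for short-range dependent data Q_n/n^0.5 stabilizes in distribution, whereas pronounced long-range dependence makes Q_n scale like n^H with H ≠ 0.5.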
Long-range dependence can be characterized not only by a Hurst coefficient H ≠ 0.5 but also by a slowly decaying autocorrelation function ρ or a spectral density f that is steep in a small neighborhood of frequency zero, i.e.,
ρ(k) k^{1−2d} → c as k → ∞, c > 0, d ≠ 0,
and:
f(ω) ~ c ω^{−2d}, ω ∈ (0, ε), c > 0, d ≠ 0,
respectively. The parameter d is called the memory parameter (or fractional differencing parameter) and is related to H by d = H − 0.5. It can be estimated by replacing the unknown spectral density f in (2) by the periodogram (Geweke and Porter-Hudak 1983) or by a more sophisticated estimate of f (Hassler 1993; Peiris and Court 1993; Reisen 1994), taking the log of both sides, and regressing the log estimate on a deterministic regressor. Robustness against short-range dependence can be achieved by using only the K = [n^α] lowest Fourier frequencies in the regression. A popular choice for the tuning parameter α is 0.5. For the purpose of testing, the asymptotic error variance is used. Applying the log periodogram regression method of Geweke and Porter-Hudak (1983) to the daily returns of the 30 components of the Dow Jones Industrials index and several indices, Barkoulas and Baum (1996) found no convincing evidence in favor of long-range dependence, which is not surprising in light of the finding of Mangat and Reschenhofer (2019) that the test based on the asymptotic error variance has very low power. Unfortunately, using the standard variance formula of the least squares estimator of the slope in a simple linear regression instead of the asymptotic error variance is also problematic because it leads to overrejection of the true null hypothesis (see Mangat and Reschenhofer 2019).
The negative results of Lo (1991) and Barkoulas and Baum (1996) are in line with the results obtained by Cheung and Lai (1995) with both modified R/S analysis and log periodogram regression for stock return data from eighteen countries and by Crato (1994) with fractionally integrated ARMA (ARFIMA) models (Granger and Joyeux 1980; Hosking 1981) for stock indices of the G-7 countries. Using not only the log periodogram regression with the asymptotic error variance but additionally also nonparametric techniques such as R/S analysis and modified R/S analysis as well as parametric techniques, Grau-Carles (2000) also found little evidence of long-range dependence in index returns but strong evidence of persistence in volatility, measured as squared returns and absolute returns, respectively, which corroborates earlier findings of Crato and de Lima (1994) and Lobato and Savin (1998). In general, results obtained with ARFIMA models must be treated with caution. Firstly, the true model dimension is unknown in practice and reliable inference after automatic model selection is illusory. Secondly, Pötscher (2002) has shown that the problem of estimating the memory parameter d falls into the category of ill-posed estimation problems when the class of data generating processes is too rich. For example, Grau-Carles (2000) considered all ARFIMA(p,q) processes with p ≤ 3 and q ≤ 3, which is possibly an unnecessarily large class for return series.
While the bulk of empirical research focused on major capital markets, Barkoulas et al. (2000) examined an emerging capital market, namely the Greek stock market, with the log periodogram regression and obtained significant estimates of d in the range between 0.20 and 0.30 for values of the tuning parameter α between 0.5 and 0.6. However, their sample period is relatively short and the sampling frequency is weekly rather than daily. Even less confidence-inspiring are the positive results obtained by Henry (2002) with monthly data from several international stock markets. Clearly, methods that have been designed for large samples should not be applied to small and medium samples. Recently, small-sample tests for testing hypotheses about the memory parameter d have been proposed (Mangat and Reschenhofer 2019; Reschenhofer and Mangat 2020). When applied to asset returns, these tests produced negative results throughout. Cajueiro and Tabak (2004), Carbone et al. (2004), Batten and Szilagyi (2007), Batten et al. (2008), Souza et al. (2008), Batten et al. (2013), and Auer (2016a, 2016b) observed time-variability of the Hurst exponent in stock returns, currency prices, and the prices of precious metals. These apparent changes were occasionally interpreted as indications of changing market efficiency or even used for the construction of trading strategies. Although it cannot be ruled out that some erratic estimator for the memory parameter d catches signals that are useful for trading purposes even when in fact there is no long-range dependence, there still seems to be a need for a more efficient estimator that actually allows one to obtain information about the true nature of the data generating process.
In general, there is always a trade-off between bias and variance. Estimators for the memory parameter d that are based on a smooth estimate of the spectral density typically have a smaller variance and a larger bias than those based on the periodogram (Chen et al. 1994; Reschenhofer et al. 2020), which is advantageous in situations where the squared bias is small relative to the variance. However, in the case of high-frequency financial data, there are usually gaps between the individual trading sessions, which make it necessary to estimate d separately for each trading session and to compute the final estimate by averaging the individual estimates. Here, the variance decreases with the number of trading sessions but the bias remains fixed; hence, conventional smoothing methods, which achieve a reduction in the variance at the expense of an increase in the bias, are of no use. The goal of this paper is therefore to introduce a new method of smoothing that does not systematically have a negative impact on the bias. This method will be described in detail in the next section. Section 3 presents the results of an extensive simulation study, which compares the performance of various estimators for the memory parameter in terms of bias, variance, and root-mean-square error (RMSE). Using limit order book data obtained from Lobster, Section 4 searches for indications of long-range dependence both in the intraday volatility and in the intraday returns. Section 5 provides a conclusion.

2. Methods

2.1. Log Periodogram Regression

Fractionally integrated white noise satisfies the difference equation:
y_t = (1 − L)^{−d} u_t,
where L is the lag operator and u_t is white noise with mean zero and variance σ² (Adenstedt 1974). Its spectral density is given by:
f(ω) = (σ²/(2π)) |1 − e^{−iω}|^{−2d} = (σ²/(2^{1+2d} π)) (sin²(ω/2))^{−d}.
The memory parameter d, which represents the degree of long memory if d ≠ 0, can be estimated by regressing the log periodogram of the time series y_1, …, y_n on a deterministic regressor (Geweke and Porter-Hudak 1983). Indeed, we have:
L_j = log I(ω_j) = c + d x_j + v_j,
where:
I(ω) = (1/(2πn)) |Σ_{t=1}^{n} y_t e^{−iωt}|²
is the periodogram,
ω_j = 2πj/n, j = 1, …, K ≤ m = [(n − 1)/2],
are the first K Fourier frequencies between 0 and π,
x_j = −2 log(sin(ω_j/2))
is a deterministic regressor,
c = log(σ²/(2^{1+2d} π))
is a constant, and
v_j = log(I(ω_j)/f(ω_j))
are random perturbations. Choosing K ≪ m rather than K = m is advisable when it is suspected that not only long-term dependencies are present but also short-term dependencies, e.g., when the data come from an ARFIMA process:
y_t = (1 − φ_1 L − … − φ_p L^p)^{−1} (1 − L)^{−d} (1 + θ_1 L + … + θ_q L^q) u_t
(Granger and Joyeux 1980; Hosking 1981), where the parameter d takes care of the former dependencies and the parameters φ_1, …, φ_p, θ_1, …, θ_q take care of the latter. It is assumed that d < 0.5 (stationarity condition), d > −0.5 (invertibility condition), and all roots of the lag operator polynomials Φ(L) = 1 − φ_1 L − … − φ_p L^p and Θ(L) = 1 + θ_1 L + … + θ_q L^q lie outside the unit circle (causality condition and invertibility condition, respectively).
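As a concrete illustration, the log periodogram regression can be sketched in a few lines. This is our own minimal Python version (the paper's computations use R; the function name is ours), regressing the log periodogram on the regressor x_j over the first K Fourier frequencies:

```python
import numpy as np

def gph_estimate(y, K):
    """Geweke-Porter-Hudak estimator: OLS slope of log I(omega_j) on
    x_j = -2*log(sin(omega_j/2)) over the first K Fourier frequencies."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    j = np.arange(1, K + 1)
    omega = 2.0 * np.pi * j / n
    # periodogram I(omega_j) = |sum_t y_t exp(-i omega_j t)|^2 / (2 pi n)
    I = np.abs(np.fft.fft(y)[j]) ** 2 / (2.0 * np.pi * n)
    x = -2.0 * np.log(np.sin(omega / 2.0))
    xc = x - x.mean()                  # centering absorbs the constant c
    return float(xc @ np.log(I) / (xc @ xc))
```

For white noise (d = 0), the estimate fluctuates around zero with variance close to π²/(24K), in line with the asymptotics discussed below.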
In the special case of d = p = q = 0 and Gaussianity, the ratios I(ω_j)/f(ω_j) are independent and identically distributed (i.i.d.) standard exponential and v_1, …, v_m are, therefore, i.i.d. Gumbel with mean −γ and variance π²/6, where γ = 0.57721… is Euler's constant. The variance of the Geweke–Porter-Hudak (GPH) estimator d̂_GPH is then identical to the variance of the ordinary least squares (OLS) estimator for the slope in a simple regression model, i.e.,
var(d̂_GPH) = σ_v²/S_xx = π²/(6 S_xx),
where:
S_xx = Σ_{t=1}^{K} (x_t − x̄)².
In a neighborhood of frequency zero:
sin(ω) ≈ ω;
hence:
S_xx ≈ 4 Σ_{t=1}^{K} (log(t) − mean(log(t)))².
Furthermore:
Σ_{t=1}^{K} log²(t) − (1/K) (Σ_{t=1}^{K} log(t))² = K log²(K) − 2K log(K) + 2(K − 1) − (1/K) (K log(K) − (K − 1))² = K + o(K).
Indeed, we have:
S_xx = 4 (K + o(K))
if:
K log(K)/n → 0
(see Hurvich and Beltrao 1993); hence, the variance formula (10) becomes:
var(d̂_GPH) ≈ π²/(24K)
in line with the asymptotic result:
√K (d̂_GPH − d) →_d N(0, π²/24)
derived by Hurvich et al. (1998) under the assumption that K = o(n^{4/5}) and log²(n) = o(K) for a class of stationary Gaussian long-memory processes with spectral densities of the form:
f(ω) = |1 − e^{−iω}|^{−2d} f*(ω),
which includes all stationary ARFIMA processes.
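The approximation S_xx ≈ 4(K + o(K)) can be checked numerically. The sketch below (our own, in Python) computes the exact regressor sum of squares and shows how slowly the ratio S_xx/(4K) approaches 1 as n grows with K = [n^0.5]:

```python
import numpy as np

def s_xx(n, K):
    """Exact sum of squares of the centered regressor x_j = -2 log sin(omega_j/2)."""
    omega = 2.0 * np.pi * np.arange(1, K + 1) / n
    x = -2.0 * np.log(np.sin(omega / 2.0))
    return float(np.sum((x - x.mean()) ** 2))

# the ratio S_xx / (4K) approaches 1 only slowly
for n in (250, 10_000, 1_000_000):
    K = int(n ** 0.5)
    print(n, K, s_xx(n, K) / (4 * K))
```

The slow convergence visible here foreshadows the poor finite-sample agreement with asymptotic variance formulas discussed in Section 2.2.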
If d ≠ 0, the ratios I(ω_j)/f(ω_j) are neither independent nor identically distributed, not even asymptotically (Robinson 1995). The problem is the irregular behavior of the spectral density in the neighborhood of frequency zero, i.e., f(ω) → ∞ as ω → 0 if d > 0 and f(ω) → 0 as ω → 0 if d < 0. Robinson (1995), therefore, proposed to remove the lowest Fourier frequencies from the log periodogram regression. Künsch (1986) showed that in the case of ARFIMA processes, the ratios I(ω_j)/f(ω_j), j = H + 1, …, H + K, are indeed asymptotically i.i.d. standard exponential provided that H → ∞ and (H + K)/n → 0. However, Reisen et al. (2001) and Mangat and Reschenhofer (2019) found that even the removal of only the first Fourier frequency already has a negative effect on the performance of the estimator d̂_GPH.

2.2. Smoothing the Periodogram

An obvious possibility to further develop the estimator d̂_GPH is to smooth the periodogram before it is used in the regression (5) (Hassler 1993; Peiris and Court 1993; Reisen 1994). In order to illustrate the effect of smoothing, we consider the simple case of K/3 non-overlapping averages:
(I(ω_{j−1}) + I(ω_j) + I(ω_{j+1}))/3, j = 2, 5, 8, …, K − 1.
In this case, the sample size is divided by three but at the same time the variance of the error term decreases approximately from:
var(log(I(ω_j)/f(ω_j))) ≈ π²/6
to the variance of the log chi-square distribution with 6 degrees of freedom because:
var(log((I(ω_{j−1}) + I(ω_j) + I(ω_{j+1}))/(3 f(ω_j)))) ≈ var(log(2 I(ω_{j−1})/f(ω_{j−1}) + 2 I(ω_j)/f(ω_j) + 2 I(ω_{j+1})/f(ω_{j+1}))).
Noting that the mean (first cumulant) and the variance (second cumulant) of the log chi-square distribution with k degrees of freedom are given by:
κ_1 = log(2) + ψ(k/2)
and:
κ_2 = ψ′(k/2),
respectively, we obtain for k = 6: κ_1 = 1.615932 and κ_2 = 0.3949341. Here, ψ is the digamma function and ψ′ is its first derivative. Overall, the (approximate) variance of the least squares estimator of the memory parameter d decreases from:
(π²/6) (1/(4K)) = 1.644934 (1/(4K))
to
ψ′(3) (1/(4K/3)) = 1.184802 (1/(4K)),
where we have assumed that
(1/(K/3)) Σ_{t=2,5,…} (x_t − x̄)² ≈ (1/K) Σ_{t=1}^{K} (x_t − x̄)² ≈ 4.
The limited practical relevance of asymptotic results such as (20) can be seen when the asymptotic values are confronted with the actual values obtained by simulations. In the simplest case of Gaussian white noise, we do not have to safeguard against short-range dependence and can therefore choose a value of α slightly below 4/5. Choosing α = 0.7 and K = [n^α], we obtain 0.00857 (27) and 0.00617 (28) vs. 0.01148 and 0.00885 (simulated) for n = 250 and K = 48; 0.00326 and 0.00235 vs. 0.00381 and 0.00282 for n = 1000 and K = 126; 0.00065 and 0.00047 vs. 0.00068 and 0.00050 for n = 10,000 and K = 630; and 0.00021 and 0.00015 vs. 0.00021 and 0.00015 for n = 50,000 and K = 1947. Obviously, huge sample sizes are required for good agreement. In the case of a nontrivial ARFIMA process, this problem becomes even more serious because a smaller value of α must be chosen.
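The constants π²/6 ≈ 1.6449 and κ_2 = ψ′(3) ≈ 0.3949 quoted above are easy to verify numerically. The sketch below (our own, in Python) uses the trigamma recurrence ψ′(x + 1) = ψ′(x) − 1/x² together with a small Monte Carlo check:

```python
import numpy as np

# kappa_2 = psi'(3) via the recurrence psi'(x+1) = psi'(x) - 1/x^2, psi'(1) = pi^2/6
kappa2 = np.pi ** 2 / 6 - 1.0 - 0.25          # psi'(3) = 0.3949341...

# Monte Carlo check: the mean of 3 i.i.d. standard exponentials is chi^2_6 / 6,
# and the constant -log(6) does not change the variance of the log
rng = np.random.default_rng(1)
avg3 = rng.standard_exponential((200_000, 3)).mean(axis=1)
mc_var = np.log(avg3).var()
print(kappa2, mc_var)
```

Both numbers agree to about two decimal places, confirming the variance reduction from π²/6 to ψ′(3) per log ordinate claimed above.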
More sophisticated further developments of the estimator d̂_GPH are obtained by using more than three periodogram ordinates, allowing for overlaps, and introducing weights, or, equivalently, by using a lag-window estimator of the form:
f̂(ω_j) = (1/(2π)) Σ_{s=−m}^{m} w(s/m) γ̂(s) e^{−iω_j s}, j = 1, …, K,
where γ̂(s) denotes the sample autocovariance at lag s and the lag window w satisfies w(0) = 1, |w(s)| ≤ 1, and w(−s) = w(s) (see Hassler 1993; Peiris and Court 1993; Reisen 1994). A disadvantage of these estimation procedures is that they require the specification of a second tuning parameter in addition to K, namely the length of the weighted averages in the former case and m ≤ n − 1 in the latter. Of course, suitable weights and a suitable lag window, respectively, must also be chosen. Carrying out an extensive simulation study to compare various frequency-domain estimators for d, Reschenhofer et al. (2020) found that overly strong smoothing, e.g., caused by choosing too small a value for m, entails an extremely large bias. Hunt et al. (2003) derived an approximation for the bias and generally observed good agreement between their approximation and the corresponding value obtained by simulations when d > 0. However, the practical relevance of this approximation is limited because it depends on characteristics of the data generating process, which are unknown in practice.

2.3. Using Subsamples

A simple method of smoothing without introducing a bias is to average estimates obtained from different subsamples. Assume, for example, that the final estimate d̂ is obtained by averaging over N preliminary estimates d̂_1, …, d̂_N obtained from independent subsamples y_{11}, …, y_{n1}, …, y_{1N}, …, y_{nN}; then, the variance of d̂ vanishes as N increases while the bias remains unchanged. Of course, artificially splitting a long, homogeneous time series into non-overlapping subseries does not necessarily have a positive effect. For illustration, consider the simplest case where the time series y_1, …, y_n is split into two disjoint subseries y_1, …, y_{n/2} and y_{n/2+1}, …, y_n of equal length. To allow a fair comparison, the frequency range (0, ω_K] is kept constant, which implies that in the case of the two subseries the number of used Fourier frequencies is K/2. Under the simplistic and mostly unrealistic assumption that the two subseries are independent, the (approximate) variance of the mean of the two GPH estimators based on the two subseries is given by:
(1/4) ((π²/6) (1/(4K/2)) + (π²/6) (1/(4K/2))) = (π²/6) (1/(4K)),
which is, therefore, of the same size as that of the original estimator, which is based on the whole time series. However, there is still room for improvement. A reduction in the variance may be achieved by allowing for overlaps between the subseries, e.g., with a rolling estimation window or a combination of different partitions.
At first glance, the idea of improving an OLS estimator by averaging the OLS estimators obtained from the whole sample and from the first and second halves, respectively, seems to be at odds with the Gauss–Markov theorem because the combined estimator is still linear. However, the crucial point here is that only the observations are partitioned and not the log periodogram, which is used as the dependent variable in the regression and is obtained from the observations through nonlinear transformations. For illustration, consider an estimator of the form:
d̃_2 = (1 − 2λ) d̂_1 + λ d̂_21 + λ d̂_22,
where d̂_1, d̂_21, d̂_22 are the OLS estimators for d based on the log periodograms L^1, L^21, L^22 of the whole sample and the first and second halves, respectively. In the special case of Gaussian white noise with variance 2π, the constant c in the regression (3) vanishes, and we may, therefore, use the simpler estimators:
d̆_1 = Σ_{j=1}^{K} x̲_j L_j^1 / Σ_{j=1}^{K} x̲_j² ≈ (1/(4K)) Σ_{j=1}^{K} x̲_j L_j^1
and:
d̆_2s = Σ_{j=1}^{K/2} x̲_{2j} L_j^{2s} / Σ_{j=1}^{K/2} x̲_{2j}² ≈ (1/(2K)) Σ_{j=1}^{K/2} x̲_{2j} L_j^{2s}, s = 1, 2,
where x̲_j = x_j − x̄. For the variances of the simplistic estimators d̆_1 and:
d̆_2 = (1 − 2λ) d̆_1 + λ d̆_21 + λ d̆_22,
we obtain approximately:
var(d̆_1) ≈ (1/(4K))² Σ_{j=1}^{K} x̲_j² (π²/6) ≈ π²/(24K)
and:
var(d̆_2) ≈ (π²/(24K)) ((1 − 2λ)² + 4λ²) + 4λ(1 − 2λ) cov(d̆_1, d̆_21) ≈ (π²/(24K)) ((1 − 2λ)² + 4λ² + 4λ(1 − 2λ)(ρ_0 + ρ_1)) ≈ 0.69 π²/(24K), if λ = 1/4,
respectively, where we have used that cov(d̆_1, d̆_21) = cov(d̆_1, d̆_22) and cov(d̆_21, d̆_22) = 0 as well as the rough approximations:
Σ_{j=1}^{K/2} x̲_{2j}² ≈ Σ_{j=1}^{K/2} x̲_{2j} x̲_{2j−1} ≈ Σ_{j=1}^{K/2−1} x̲_{2j} x̲_{2j+1} ≈ 2K,
cor(L_j^1, L_k^{2s}) ≈ ρ_0 = 0.35 if 2k = j, ρ_1 = 0.13 if |2k − j| = 1, and 0 else
(see Table 1), and:
cov(d̆_1, d̆_21) ≈ (1/(8K²)) cov(Σ_{j=1}^{K/2} x̲_{2j} L_{2j}^1 + Σ_{j=1}^{K/2} x̲_{2j−1} L_{2j−1}^1, Σ_{k=1}^{K/2} x̲_{2k} L_k^{21}) = (1/(8K²)) (Σ_{j=1}^{K/2} Σ_{k=1}^{K/2} x̲_{2j} x̲_{2k} cov(L_{2j}^1, L_k^{21}) + Σ_{j=1}^{K/2} Σ_{k=1}^{K/2} x̲_{2j−1} x̲_{2k} cov(L_{2j−1}^1, L_k^{21})) ≈ (1/(8K²)) (ρ_0 (π²/6) Σ_{j=1}^{K/2} x̲_{2j}² + ρ_1 (π²/6) Σ_{j=1}^{K/2} x̲_{2j} x̲_{2j−1}) ≈ (π²/(24K)) (ρ_0 + ρ_1).
For a further reduction of the variance, we may consider more general estimators of the form:
d̃_k = (1/k) (d̂_1 + Σ_{j=2}^{k} (1/j) (d̂_{j1} + … + d̂_{jj})),
which are based on k partitions. The next section examines whether this possible reduction actually materializes and whether it is accompanied by an increase in the bias. All computations are carried out with the free statistical software R (R Core Team 2018).
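A minimal Python sketch of the combined estimator d̃_k is given below (the paper's computations use R; the helper names are ours, and the allocation of observations to blocks via equal splits is one simple reading of the partition scheme):

```python
import numpy as np

def gph(y, K):
    """Log periodogram regression: OLS slope of log I(omega_j) on
    x_j = -2 log sin(omega_j/2), j = 1, ..., K."""
    n = len(y)
    j = np.arange(1, K + 1)
    I = np.abs(np.fft.fft(y)[j]) ** 2 / (2.0 * np.pi * n)
    x = -2.0 * np.log(np.sin(np.pi * j / n))
    xc = x - x.mean()
    return float(xc @ np.log(I) / (xc @ xc))

def d_tilde(y, K, k=2):
    """Combined estimator: average of the full-sample GPH estimate and,
    for each partition into j blocks (j = 2, ..., k), the mean of the j
    block estimates using K/j frequencies each, so that the frequency
    range (0, omega_K] stays roughly constant."""
    y = np.asarray(y, dtype=float)
    total = gph(y, K)
    for j in range(2, k + 1):
        blocks = np.array_split(y, j)
        total += np.mean([gph(b, max(K // j, 2)) for b in blocks])
    return total / k
```

For k = 2 this reduces to d̃_2 with λ = 1/4, since (1/2)(d̂_1 + (1/2)(d̂_21 + d̂_22)) = (1/2) d̂_1 + (1/4) d̂_21 + (1/4) d̂_22.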

3. Simulations

In this section, we compare the new estimator d̃_k (41) for k = 2, 3, 5, 10 with Geweke and Porter-Hudak's (1983) estimator d̂_GPH, which is based on the log periodogram regression (5), and with the estimators d̂_sm and d̂_smP^β, which are obtained by replacing the periodogram ordinates in (5) by simple moving averages of neighboring periodogram ordinates and by lag-window estimates of the form (30) with truncation lags m = [n^β], β = 0.5, 0.7, 0.9, 1, respectively. In the latter case, the Parzen window is used, which is given by:
w(z) = 1 − 6z² + 6|z|³ if |z| < 1/2, and w(z) = 2(1 − |z|)³ if 1/2 ≤ |z| ≤ 1.
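For concreteness, a lag-window spectral estimate of the form (30) with the Parzen window can be sketched as follows (our own Python, not the paper's R code):

```python
import numpy as np

def parzen(z):
    """Parzen lag window on [-1, 1]."""
    z = abs(z)
    if z < 0.5:
        return 1.0 - 6.0 * z**2 + 6.0 * z**3
    return 2.0 * (1.0 - z)**3 if z <= 1.0 else 0.0

def lag_window_spectrum(y, m, omegas):
    """f_hat(omega) = (1/2pi) sum_{|s|<=m} w(s/m) gamma_hat(s) exp(-i omega s),
    evaluated via the cosine form of the real-valued sum."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    yc = y - y.mean()
    s = np.arange(m + 1)
    # biased sample autocovariances gamma_hat(s), s = 0, ..., m
    gamma = np.array([yc[: n - h] @ yc[h:] / n for h in s])
    w = np.array([parzen(h / m) for h in s])
    # gamma(0) + 2 * sum_{s=1}^{m} w(s/m) gamma(s) cos(omega s), over 2 pi
    return np.array([gamma[0] + 2.0 * (w[1:] * gamma[1:] * np.cos(om * s[1:])).sum()
                     for om in np.atleast_1d(omegas)]) / (2.0 * np.pi)
```

For white noise with unit variance, the estimate should be close to the flat spectral density 1/(2π) at every frequency.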
With a view to the later application of the estimators to 1-min intraday returns in Section 4, the sample size n = 390 is chosen for our simulation study because there are 390 min in a regular trading session for U.S. stocks, which starts at 9:30 a.m. and ends at 4:00 p.m. The number K of Fourier frequencies included in the log periodogram regression is defined by setting K = 20 ≈ [n^α], α = 0.5. For k = 2, the first K/k = 10 Fourier frequencies of the two disjoint subseries of length n/k = 195 are given by ω_2, ω_4, …, ω_K, and for k = 10, the first K/k = 2 Fourier frequencies of the 10 disjoint subseries of length n/k = 39 are given by ω_10 and ω_K. Clearly, we cannot go beyond k = 10 because at least two frequencies are required to carry out the log periodogram regression. Additionally, using frequencies outside the interval (0, ω_K] is not an option because this would amount to an unfair advantage, particularly when there are no short-term dependencies that have to be taken into account.
With the help of the R-package 'fracdiff', 10,000 realizations of length n = 390 of ARFIMA(1,d,0) processes with standard normal innovations and parameter values d = −0.25, −0.1, 0, 0.1, 0.25 and φ_1 = −0.25, −0.1, 0, 0.1, 0.25, respectively, are generated using a burn-in period of 10,000. For each realization, the estimators d̂_GPH, d̂_sm, d̂_smP^β, β = 0.5, 0.7, 0.9, 1, and d̃_k, k = 2, 3, 5, 10, are employed for the estimation of the memory parameter d. The competing estimators are compared with respect to bias (Table 2), variance (Table 3), and RMSE (Table 4). Table 3 shows that d̃_2 indeed has a smaller variance than d̂_GPH = d̃_1. The variance keeps decreasing as the number of partitions increases from two to 10. Table 2 shows that this improvement does not, in general, come at the cost of a greater bias. In contrast, the reduction in the variance achieved in the case of the estimator d̂_smP^β by increasing the degree of smoothing from β = 0.9 to β = 0.5 is, for d ≠ 0, accompanied by a dramatic increase in the bias. Overall, in terms of the RMSE, the best results are obtained with d̂_smP^0.5 for small values of d and with d̂_smP^0.7 for larger values of d. However, this is only relevant in the standard case where only a single time series is available. When a large number of time series are examined simultaneously (as in the empirical study of Section 4), the bias is the decisive factor and the new estimators d̃_k are therefore more appropriate than the conventional estimators d̂_smP^β.
Since values of β such as 0.5, 0.7, or 0.9 are usually chosen to minimize the MSE for a single sample, we may suspect that the estimator d ^ s m P β becomes more competitive in the case of multiple samples when the averaging is taken into account. This can be done by further reducing the degree of smoothing. Unfortunately, there is a limit to what can be achieved by increasing the value of β . Table 2 shows that large biases are still obtained with the maximum possible value of β , i.e., β = 1 . This is due to the fact that global smoothing inevitably causes local distortions and cutting off higher-order sample autocovariances is not the only source of smoothing. Downweighting the sample autocovariances with the Parzen window also has a strong smoothing effect, even when all sample autocovariances are used.

4. Empirical Results

In this section, we employ the estimators discussed in the previous sections to search for possible long-range dependencies in intraday returns and absolute intraday returns. For this purpose, the limit order book data from 27 June 2007 to 30 April 2019 (2980 trading days) of the iShares Core S&P 500 ETF (IVV) are downloaded from Lobster (https://lobsterdata.com). In the process of data cleaning, 27 early-closure days (the day before Independence Day, the day after Thanksgiving, and Christmas Eve) are removed, as is 9 January 2019 because of a large number of missing values. For each of the remaining days, the first mid-quotes (midpoints of the best bid and ask quotes) in each minute and the last mid-quote in the last minute are computed and subsequently used to obtain 1-min log returns. Finally, another three days are omitted because of extreme returns, namely 19 September 2008, 6 May 2010, and 24 August 2015, which leaves 2949 days for our analysis. Estimates are computed for each day, divided by the number of days, and plotted cumulatively; hence, the last values correspond to the averages of the estimates. The validity of these values is reinforced by the striking linearity of the curves. This linearity also implies that the possible long-range dependence does not change over time; hence, there appears to be no such thing as fractal dynamics. Figure 1a suggests that d is close to zero in the case of the 1-min log returns. The large negative values obtained with d̂_smP^0.9 and d̂_smP^0.7, as well as the comparatively inconspicuous values obtained with d̂_smP^0.5, can be explained with the help of the results of our simulation study. According to Table 2, they are indicative of d = 0. In contrast, there is strong evidence of long-range dependence in the volatility (see Figure 1b). Most estimators suggest that the memory parameter d is approximately in the range between 0.3 and 0.4. Only the estimator d̂_smP^0.5, which is severely downward biased in the case of positive d (see Table 2), favors a smaller value.
The significance of the differences between certain estimates is already visually apparent from the large differences between the slopes of the corresponding lines in Figure 1 and the striking stability of these lines over time. However, we still might want to augment our visual analysis with a formal statistical test. A simple way to accomplish this is to calculate the difference between two estimates separately for each trading day and compare the number of positive differences to the number of negative differences (sign test). Not surprisingly, the resulting p-values are infinitesimal. For example, even in the case of the two neighboring lines corresponding to d̂_GPH and d̃_2 in Figure 1b, the p-value is less than 2.2 × 10^{−16}. It is still less than 9.7 × 10^{−8} when we omit most of the trading days and use only Wednesdays in order to ensure approximate independence of the subsamples. Note that there are 4 × 390 = 1560 1-min returns between the last 1-min return of one Wednesday and the first 1-min return of the next Wednesday, plus five overnight breaks and a whole weekend. Even for a relatively large value of the memory parameter such as d = 0.3, the autocorrelation of an ARFIMA(0,d,0) process at lag j = 1561 is quite small, i.e.,
ρ(j) = (Γ(j + d) Γ(1 − d))/(Γ(j − d + 1) Γ(d)) = Π_{h=1}^{j} (h − 1 + d)/(h − d) ≈ 0.023.
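The product form of ρ(j) is numerically convenient because the gamma functions overflow for large j. The sketch below (our own, in Python) reproduces the quoted value:

```python
def arfima_acf(j, d):
    """Autocorrelation of an ARFIMA(0,d,0) process at lag j, computed via
    rho(j) = prod_{h=1}^{j} (h - 1 + d) / (h - d), which avoids the
    overflow of Gamma(j + d) for large j."""
    rho = 1.0
    for h in range(1, j + 1):
        rho *= (h - 1 + d) / (h - d)
    return rho

print(round(arfima_acf(1561, 0.3), 3))   # about 0.023, the value cited in the text
```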
Finally, in order to check the robustness of our findings against daily and weekly periodic patterns, we repeat the graphical analyses with suitably transformed data. Replacing the 1-min log returns r_t(s), s = 1, …, 390, by the daily differences r_t(s) − r_{t−1}(s) and the weekly differences r_t(s) − r_{t−5}(s), respectively, ensures that any daily or weekly periodic patterns are erased while long-range dependencies remain unaffected. Figure 1c,e are very similar to Figure 1a, which shows that the insights gained from Figure 1a are genuine. Analogously, comparing Figure 1d,f with Figure 1b, we see that the same is true for the absolute returns.

5. Discussion

In this paper, we have introduced a new estimator for the memory parameter d, which is based on running a log periodogram regression repeatedly for different partitions of the data. In contrast to conventional smoothing methods, which manage to achieve a reduction in the variance at the expense of an increase in the bias, our approach does not systematically have a negative impact on the bias, which makes it particularly useful for applications where the bias is the decisive factor. For example, intraday returns are usually only available during trading hours and estimation must therefore be carried out separately for each trading day. When the individual estimates are eventually combined by averaging, the variance decreases as the sample size increases, but the bias does not change. The results of an extensive simulation study confirm the good performance of the new estimator. It outperforms all of its competitors when both bias and variance are taken into account, but the bias is weighted more heavily.
The importance of results obtained with the help of simulations is due to the fact that reliable inference on the memory parameter d is not possible under general conditions. Some asymptotic results can be obtained under very restrictive conditions, though. Unfortunately, convergence is typically very slow (recall the discussion in Section 2.2). Indeed, Pötscher (2002) showed that many common estimation problems in statistics and econometrics, including the estimation of d, are ill-posed in the sense that the minimax risk is bounded from below by a positive constant independent of n and, therefore, does not converge to zero as n → ∞. In particular, he found that for any estimator d̂_n for d based on a sample of size n from a Gaussian process with spectral density f:
sup_{f ∈ F} E |d̂_n − d|^r ≥ (1/2)^r > 0,   (44)
where 1 ≤ r < ∞ and F is the set of all ARFIMA spectral densities (p ≥ 0, q ≥ 0), ARFI spectral densities (p ≥ 0, q = 0), or FIMA spectral densities (p = 0, q ≥ 0). Furthermore, he showed that for every f_0 ∈ F, (44) holds also “locally,” when the supremum is taken over an arbitrarily small L¹-neighborhood of f_0. Finally, he established that confidence intervals for d coincide with the entire parameter space for d with high probability and are therefore uninformative. Nevertheless, it may be possible to formally derive the statistical properties of our new estimator for a rather narrow class of processes such as low-order ARFI processes. However, this is left for future research; the current paper provides just a proof of concept.
In our empirical investigation of high-frequency data of an index ETF, we have applied the competing estimators to 1-min log returns and absolute 1-min log returns separately for each day. The results are quite stable over time and across estimation methods. The few deviations are due to conventional smoothing methods and can easily be explained by the size of their bias as shown in Table 2. We may, therefore, safely conclude that significant long-range dependencies are present only in the intraday volatility but not in the intraday returns. These findings are genuine and not just due to daily or weekly periodic patterns because similar results are obtained when daily and weekly differences are investigated instead of the original intraday returns.
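To make the averaging argument concrete, the following stylized simulation uses hypothetical round numbers of the magnitude seen in Tables 2 and 3 (per-day bias 0.03, per-day standard deviation 0.18) and shows that averaging per-day estimates over T days shrinks the variance roughly like 1/T while the bias is unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
bias, sd, reps = 0.03, 0.18, 2000   # stylized per-day estimator of d = 0

for T in (20, 250):                 # number of trading days averaged over
    # each replication: average of T independent per-day estimates
    avg = (bias + sd * rng.normal(size=(reps, T))).mean(axis=1)
    print(T, avg.mean(), avg.var())
    # avg.mean() stays near `bias` for both T; avg.var() falls like sd**2 / T
```

This is why, in the high-frequency setting, a smaller bias is worth more than a smaller per-day variance: the variance is averaged away over the sample of days, the bias is not.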

Author Contributions

Both authors contributed equally to the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors thank the academic editor and three anonymous reviewers for helpful comments and suggestions. Open Access Funding by the University of Vienna.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Adenstedt, Rolf K. 1974. On Large-Sample Estimation for the Mean of a Stationary Random Sequence. The Annals of Statistics 2: 1095–107. [Google Scholar] [CrossRef]
  2. Andrews, Donald W. K. 1991. Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica 59: 817–58. [Google Scholar] [CrossRef]
  3. Auer, Benjamin R. 2016a. On the performance of simple trading rules derived from the fractal dynamics of gold and silver price fluctuations. Finance Research Letters 16: 255–67. [Google Scholar] [CrossRef]
  4. Auer, Benjamin R. 2016b. On time-varying predictability of emerging stock market returns. Emerging Markets Review 27: 1–13. [Google Scholar] [CrossRef]
  5. Barkoulas, John T., and Christopher F. Baum. 1996. Long-term dependence in stock returns. Economic Letters 53: 253–59. [Google Scholar] [CrossRef]
  6. Barkoulas, John T., Christopher F. Baum, and Nickolas Travlos. 2000. Long memory in the Greek stock market. Applied Financial Economics 10: 177–84. [Google Scholar] [CrossRef]
  7. Batten, Jonathan A., and Peter G. Szilagyi. 2007. Covered interest parity arbitrage and long-term dependence between the US dollar and the Yen. Physica A 376: 409–21. [Google Scholar] [CrossRef]
  8. Batten, Jonathan A., Craig A. Ellis, and Thomas A. Fethertson. 2008. Sample period selection and long-term dependence: New evidence from the Dow Jones Index. Chaos, Solitons & Fractals 36: 1126–40. [Google Scholar] [CrossRef]
  9. Batten, Jonathan, Cetin Ciner, Brian M. Lucey, and Peter G. Szilagyi. 2013. The structure of gold and silver spread returns. Quantitative Finance 13: 561–70. [Google Scholar] [CrossRef]
  10. Cajueiro, Daniel O., and Benjamin M. Tabak. 2004. The Hurst exponent over time: Testing the assertion that emerging markets are becoming more efficient. Physica A 336: 521–37. [Google Scholar] [CrossRef]
  11. Carbone, Anna, Giuliano Castelli, and H. Eugene Stanley. 2004. Time-dependent Hurst exponent in financial time series. Physica A 344: 267–71. [Google Scholar] [CrossRef]
  12. Chen, Gemai, Bovas Abraham, and Shelton Peiris. 1994. Lag window estimation of the degree of differencing in fractionally integrated time series models. Journal of Time Series Analysis 15: 473–87. [Google Scholar] [CrossRef]
  13. Cheung, Yin-Wong, and Kon S. Lai. 1995. A search for long memory in international stock market returns. Journal of International Money and Finance 14: 597–615. [Google Scholar] [CrossRef]
  14. Crato, Nuno. 1994. Some international evidence regarding the stochastic behaviour of stock returns. Applied Financial Economics 4: 33–39. [Google Scholar] [CrossRef]
  15. Crato, Nuno, and Pedro J. F. de Lima. 1994. Long-range dependence in the conditional variance of stock returns. Economics Letters 45: 281–85. [Google Scholar] [CrossRef]
  16. Davis, Robert, and David Harte. 1987. Tests of the Hurst effect. Biometrika 74: 95–101. [Google Scholar] [CrossRef]
  17. Geweke, John, and Susan Porter-Hudak. 1983. The estimation and application of long memory time series models. Journal of Time Series Analysis 4: 221–38. [Google Scholar] [CrossRef]
  18. Granger, Clive W. J., and Roselyne Joyeux. 1980. An introduction to long-memory time series models and fractional differencing. Journal of Time Series Analysis 1: 15–29. [Google Scholar] [CrossRef]
  19. Grau-Carles, Pilar. 2000. Empirical evidence of long-range correlations in stock returns. Physica A 287: 396–404. [Google Scholar] [CrossRef]
  20. Greene, Myron T., and Bruce D. Fielitz. 1977. Long term dependence in common stock returns. Journal of Financial Economics 4: 339–49. [Google Scholar] [CrossRef]
  21. Hassler, Uwe. 1993. Regression of spectral estimators with fractionally integrated time series. Journal of Time Series Analysis 14: 369–80. [Google Scholar] [CrossRef]
  22. Hauser, Michael A., and Erhard Reschenhofer. 1995. Estimation of the fractionally differencing parameter with the R/S method. Computational Statistics & Data Analysis 20: 569–79. [Google Scholar]
  23. Henry, Ólan T. 2002. Long memory in stock returns: Some international evidence. Applied Financial Economics 12: 725–29. [Google Scholar] [CrossRef]
  24. Hosking, Jonathan R. M. 1981. Fractional differencing. Biometrika 68: 165–76. [Google Scholar] [CrossRef]
  25. Hunt, R. L., M. Shelton Peiris, and N. C. Weber. 2003. The bias of lag window estimators of the fractional difference parameter. Journal of Applied Mathematics and Computing 12: 67–79. [Google Scholar] [CrossRef]
  26. Hurst, Harold E. 1951. Long term storage capacity of reservoirs. Transactions of the American Society of Civil Engineers 116: 770–99. [Google Scholar]
  27. Hurvich, Clifford M., and Kaizo I. Beltrao. 1993. Asymptotics for the low-frequency ordinates of the periodogram of a long-memory time series. Journal of Time Series Analysis 14: 455–72. [Google Scholar] [CrossRef]
  28. Hurvich, Clifford M., Rohit Deo, and Julia Brodsky. 1998. The mean square error of Geweke and Porter-Hudak’s estimator of the memory parameter of a long-memory time series. Journal of Time Series Analysis 19: 19–46. [Google Scholar] [CrossRef]
  29. Künsch, Hans-Rudolf. 1986. Discrimination between monotonic trends and long-range dependence. Journal of Applied Probability 23: 1025–30. [Google Scholar] [CrossRef]
  30. Lo, Andrew. 1991. Long-term memory in stock market prices. Econometrica 59: 1279–313. [Google Scholar] [CrossRef]
  31. Lobato, Ignacio N., and N. E. Savin. 1998. Real and spurious long-memory properties of stock-market data. Journal of Business & Economic Statistics 16: 261–68. [Google Scholar]
  32. Mandelbrot, Benoît. 1971. When can price be arbitraged efficiently? A limit to the validity of the random walk and martingale models. The Review of Economics and Statistics 53: 225–36. [Google Scholar] [CrossRef]
  33. Mandelbrot, Benoît. 1972. Statistical methodology for non-periodic cycles: From the covariance to R/S analysis. Annals of Economic and Social Measurement 1: 259–90. [Google Scholar]
  34. Mandelbrot, Benoît. 1975. Limit theorems on the self-normalized range for weakly and strongly dependent processes. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 31: 271–85. [Google Scholar] [CrossRef]
  35. Mandelbrot, Benoît, and James Wallis. 1969. Computer experiments with fractional Gaussian noises. Parts 1, 2, 3. Water Resources Research 4: 909–18. [Google Scholar] [CrossRef]
  36. Mangat, Manveer K., and Erhard Reschenhofer. 2019. Testing for long-range dependence in financial time series. Central European Journal of Economic Modelling and Econometrics 11: 93–106. [Google Scholar]
  37. Newey, Whitney K., and Kenneth D. West. 1987. A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica 55: 703–8. [Google Scholar] [CrossRef]
  38. Peiris, M. Shelton, and J. R. Court. 1993. A note on the estimation of degree of differencing in long memory time series analysis. Probability and Mathematical Statistics 14: 223–29. [Google Scholar]
  39. Pötscher, Benedikt M. 2002. Lower risk bounds and properties of confidence sets for ill-posed estimation problems with applications to spectral density and persistence estimation, unit roots, and estimation of long memory parameters. Econometrica 70: 1035–65. [Google Scholar] [CrossRef]
  40. R Core Team. 2018. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing. [Google Scholar]
  41. Reisen, Valderio A. 1994. Estimation of the fractional difference parameter in the ARIMA(p,d,q) model using the smoothed periodogram. Journal of Time Series Analysis 15: 335–50. [Google Scholar] [CrossRef]
  42. Reisen, Valderio A., Bovas Abraham, and Silvia Lopes. 2001. Estimation of parameters in ARFIMA processes: A simulation study. Communications in Statistics: Simulation and Computation 30: 787–803. [Google Scholar] [CrossRef]
  43. Reschenhofer, Erhard, and Manveer K. Mangat. 2020. Detecting long-range dependence with truncated ratios of periodogram ordinates. Communications in Statistics—Theory and Methods. [Google Scholar] [CrossRef]
  44. Reschenhofer, Erhard, Manveer K. Mangat, and Thomas Stark. 2020. Improved estimation of the memory parameter. Theoretical Economics Letters 10: 47–68. [Google Scholar] [CrossRef]
  45. Robinson, Peter M. 1995. Log-periodogram regression of time series with long range dependence. Annals of Statistics 23: 1048–72. [Google Scholar] [CrossRef]
  46. Souza, Sergio, Benjamin M. Tabak, and Daniel O. Cajueiro. 2008. Long-range dependence in exchange rates: The case of the European monetary system. International Journal of Theoretical and Applied Finance 11: 199–223. [Google Scholar] [CrossRef]
Figure 1. Cumulative plots of the estimates obtained by applying d̂_GPH (blue), d̂_sm (darkgreen), d̂_smP1 (green), d̂_smP0.9 (gold), d̂_smP0.7 (red), d̂_smP0.5 (orange), d̂_2 (pink), d̂_3 (magenta), d̂_5 (turquoise), d̂_10 (yellowgreen) to the (a) 1-min intraday log returns r_t(s), s = 1, …, 390, (b) absolute 1-min intraday log returns |r_t(s)|, (c) r_t(s) − r_{t−1}(s), (d) |r_t(s) − r_{t−1}(s)|, (e) r_t(s) − r_{t−5}(s), (f) |r_t(s) − r_{t−5}(s)|.
Table 1. Sample correlations between L_j, j = 1, …, 20, and L_k^1, k = 1, …, 10, obtained from 10,000,000 realizations of Gaussian white noise (n = 400).

| j \ k | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 0.1475 | 0.0186 | 0.0072 | 0.0044 | 0.0027 | 0.0014 | 0.0013 | 0.001 | 0.0008 | 0.0005 |
| 2 | 0.3541 | 0.0002 | −0.0001 | −0.0004 | 0 | 0.0003 | −0.0003 | 0.0002 | 0.0002 | 0.0005 |
| 3 | 0.1364 | 0.133 | 0.0154 | 0.006 | 0.0032 | 0.0025 | 0.0009 | 0.001 | 0.0007 | 0.0003 |
| 4 | −0.0001 | 0.3541 | −0.0001 | −0.0002 | 0.0002 | 0.0008 | −0.0005 | −0.0002 | −0.0004 | −0.0003 |
| 5 | 0.0164 | 0.1316 | 0.1307 | 0.0144 | 0.005 | 0.0027 | 0.0019 | 0.0016 | 0.0008 | 0.0008 |
| 6 | −0.0001 | −0.0003 | 0.354 | 0.0002 | 0.0002 | −0.0004 | 0.0004 | 0.0001 | −0.0005 | 0.0005 |
| 7 | 0.007 | 0.0147 | 0.1311 | 0.1308 | 0.014 | 0.0043 | 0.0025 | 0.0021 | 0.0013 | 0.0011 |
| 8 | 0 | 0.0001 | 0.0004 | 0.3541 | 0.0004 | 0.0001 | −0.0001 | −0.0002 | −0.0001 | −0.0002 |
| 9 | 0.0035 | 0.0054 | 0.0143 | 0.1302 | 0.1302 | 0.0139 | 0.0051 | 0.003 | 0.0016 | 0.0009 |
| 10 | −0.0003 | 0 | −0.0001 | 0.0004 | 0.3539 | −0.0003 | 0.0003 | 0.0001 | −0.0005 | 0.0003 |
| 11 | 0.0023 | 0.0033 | 0.0047 | 0.0138 | 0.1301 | 0.13 | 0.0133 | 0.0054 | 0.0025 | 0.0014 |
| 12 | −0.0004 | −0.0001 | −0.0004 | −0.0001 | 0.0003 | 0.3542 | 0.0001 | −0.0001 | 0.0002 | 0 |
| 13 | 0.0013 | 0.002 | 0.0032 | 0.0053 | 0.0137 | 0.1305 | 0.1309 | 0.0147 | 0.004 | 0.003 |
| 14 | −0.0004 | 0.0001 | 0.0003 | 0.0004 | 0.0008 | 0.0002 | 0.3544 | −0.0002 | 0.0005 | −0.0002 |
| 15 | 0.0011 | 0.0016 | 0.002 | 0.0025 | 0.0059 | 0.014 | 0.1304 | 0.1297 | 0.0141 | 0.0055 |
| 16 | −0.0006 | 0.0001 | −0.0004 | 0 | 0.0002 | −0.0001 | −0.0001 | 0.354 | 0.0002 | 0.0002 |
| 17 | 0.0011 | 0.0009 | 0.0009 | 0.0021 | 0.0025 | 0.0049 | 0.0138 | 0.1305 | 0.1304 | 0.0137 |
| 18 | 0.0003 | −0.0002 | 0 | −0.0001 | −0.0006 | −0.0004 | −0.0002 | −0.0004 | 0.3541 | −0.0001 |
| 19 | 0.0008 | 0.0005 | 0.0011 | 0.0015 | 0.0019 | 0.0026 | 0.0046 | 0.0138 | 0.1306 | 0.1302 |
| 20 | −0.0001 | 0.0005 | 0.0001 | 0.0002 | 0.0008 | 0.0001 | 0.0007 | −0.0003 | −0.0005 | 0.3541 |
Table 2. Bias of the estimators d̂_GPH (log periodogram regression), d̂_sm (simple smoothing), d̂_smPβ, β = 1, 0.9, 0.7, 0.5 (smoothing with Parzen window and truncation lag m = [n^β]), and d̃_k, k = 2, 3, 5, 10 (k partitions) obtained from 10,000 realizations (length: n = 390, number of used Fourier frequencies: K = 20) of Gaussian ARFIMA(1,d,0) processes with d = −0.25, −0.1, 0, 0.1, 0.25 and ϕ_1 = −0.25, −0.1, 0, 0.1, 0.25.

| d | ϕ_1 | d̂_GPH | d̂_sm | d̂_smP1 | d̂_smP0.9 | d̂_smP0.7 | d̂_smP0.5 | d̃_2 | d̃_3 | d̃_5 | d̃_10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| −0.25 | −0.25 | 0.0074 | −0.0001 | −0.0073 | −0.0099 | 0.0345 | 0.1609 | 0.0087 | 0.0084 | 0.0098 | 0.0107 |
| | −0.1 | 0.0050 | 0.0002 | −0.0083 | −0.0107 | 0.0345 | 0.1625 | 0.0080 | 0.0084 | 0.0087 | 0.0092 |
| | 0 | 0.0042 | −0.0031 | −0.0098 | −0.0124 | 0.0337 | 0.1641 | 0.0065 | 0.0065 | 0.0076 | 0.0086 |
| | 0.1 | 0.0097 | 0.0036 | −0.0049 | −0.0073 | 0.0380 | 0.1664 | 0.0126 | 0.0120 | 0.0128 | 0.0140 |
| | 0.25 | 0.0151 | 0.0110 | 0.0006 | −0.002 | 0.0436 | 0.1717 | 0.0165 | 0.0179 | 0.0201 | 0.0216 |
| −0.1 | −0.25 | 0.0002 | −0.0029 | −0.0211 | −0.0280 | −0.008 | 0.0570 | 0.0008 | 0.0016 | 0.0006 | 0.0002 |
| | −0.1 | 0.0015 | −0.0028 | −0.0212 | −0.0286 | −0.0085 | 0.0578 | −0.0001 | 0.0005 | 0.0001 | −0.0001 |
| | 0 | 0.0039 | 0.0017 | −0.0184 | −0.0251 | −0.0053 | 0.0601 | 0.0038 | 0.0052 | 0.0060 | 0.0057 |
| | 0.1 | 0.0014 | 0.0007 | −0.0197 | −0.0263 | −0.0056 | 0.0612 | 0.0024 | 0.0028 | 0.0039 | 0.0037 |
| | 0.25 | 0.0055 | 0.0059 | −0.0148 | −0.0215 | −0.0003 | 0.0666 | 0.0086 | 0.0099 | 0.0093 | 0.0101 |
| 0 | −0.25 | −0.0043 | −0.0035 | −0.0282 | −0.0376 | −0.0321 | −0.0107 | −0.0038 | −0.0039 | −0.0048 | −0.0049 |
| | −0.1 | −0.0011 | 0.0006 | −0.0258 | −0.0353 | −0.0299 | −0.0096 | −0.0004 | −0.0007 | −0.0004 | −0.0010 |
| | 0 | −0.0011 | −0.0001 | −0.0265 | −0.0361 | −0.0305 | −0.0087 | −0.0016 | −0.0004 | −0.0006 | −0.0006 |
| | 0.1 | −0.0001 | 0.0009 | −0.0235 | −0.0333 | −0.0278 | −0.0063 | 0.0016 | 0.0025 | 0.0019 | 0.0025 |
| | 0.25 | 0.0040 | 0.0064 | −0.0214 | −0.0309 | −0.0250 | −0.0022 | 0.0033 | 0.0060 | 0.0053 | 0.0073 |
| 0.1 | −0.25 | 0.0009 | 0.0057 | −0.0274 | −0.039 | −0.0475 | −0.0762 | 0.0009 | −0.0003 | 0.0008 | −0.0001 |
| | −0.1 | 0.0016 | 0.0056 | −0.0277 | −0.0396 | −0.0478 | −0.0754 | −0.0003 | 0.0002 | −0.0007 | −0.0006 |
| | 0 | −0.0005 | 0.0043 | −0.0277 | −0.0396 | −0.0479 | −0.0745 | −0.0012 | −0.0012 | −0.0012 | −0.0010 |
| | 0.1 | 0.0029 | 0.0059 | −0.0250 | −0.0374 | −0.0458 | −0.0727 | 0.0020 | 0.0028 | 0.0038 | 0.0034 |
| | 0.25 | 0.0097 | 0.0149 | −0.0186 | −0.0305 | −0.0392 | −0.0685 | 0.0088 | 0.0096 | 0.0114 | 0.0115 |
| 0.25 | −0.25 | 0.0006 | 0.0102 | −0.0314 | −0.0451 | −0.0690 | −0.1748 | 0.0021 | 0.0018 | 0.0009 | 0.0006 |
| | −0.1 | 0.0016 | 0.0112 | −0.0314 | −0.0453 | −0.0689 | −0.1744 | 0.0006 | 0.0011 | 0.0014 | 0.0010 |
| | 0 | 0.0044 | 0.0140 | −0.0281 | −0.0420 | −0.0656 | −0.1730 | 0.0032 | 0.0037 | 0.0040 | 0.0039 |
| | 0.1 | 0.0049 | 0.0162 | −0.0269 | −0.0408 | −0.0649 | −0.1718 | 0.0049 | 0.0065 | 0.0061 | 0.0060 |
| | 0.25 | 0.0079 | 0.0229 | −0.0228 | −0.0364 | −0.0600 | −0.1682 | 0.0105 | 0.0120 | 0.0130 | 0.0137 |
Table 3. Variance of the estimators d̂_GPH (log periodogram regression), d̂_sm (simple smoothing), d̂_smPβ, β = 1, 0.9, 0.7, 0.5 (smoothing with Parzen window and truncation lag m = [n^β]), and d̃_k, k = 2, 3, 5, 10 (k partitions) obtained from 10,000 realizations (length: n = 390, number of used Fourier frequencies: K = 20) of Gaussian ARFIMA(1,d,0) processes with d = −0.25, −0.1, 0, 0.1, 0.25 and ϕ_1 = −0.25, −0.1, 0, 0.1, 0.25.

| d | ϕ_1 | d̂_GPH | d̂_sm | d̂_smP1 | d̂_smP0.9 | d̂_smP0.7 | d̂_smP0.5 | d̃_2 | d̃_3 | d̃_5 | d̃_10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| −0.25 | −0.25 | 0.0330 | 0.0328 | 0.0201 | 0.018 | 0.0106 | 0.0011 | 0.0287 | 0.0259 | 0.0254 | 0.0238 |
| | −0.1 | 0.0334 | 0.0339 | 0.0207 | 0.0185 | 0.0110 | 0.0012 | 0.0297 | 0.0266 | 0.0261 | 0.0245 |
| | 0 | 0.0342 | 0.0337 | 0.0209 | 0.0185 | 0.0108 | 0.0011 | 0.0296 | 0.0267 | 0.0262 | 0.0248 |
| | 0.1 | 0.0327 | 0.0330 | 0.0202 | 0.0180 | 0.0107 | 0.0011 | 0.0287 | 0.0262 | 0.0257 | 0.0240 |
| | 0.25 | 0.0323 | 0.0325 | 0.0199 | 0.0178 | 0.0106 | 0.0011 | 0.0287 | 0.0260 | 0.0258 | 0.0242 |
| −0.1 | −0.25 | 0.0333 | 0.0327 | 0.0211 | 0.0187 | 0.0114 | 0.0011 | 0.0295 | 0.0268 | 0.0264 | 0.0250 |
| | −0.1 | 0.0332 | 0.0317 | 0.0209 | 0.0186 | 0.0114 | 0.0011 | 0.0291 | 0.0264 | 0.0260 | 0.0250 |
| | 0 | 0.0334 | 0.0330 | 0.0212 | 0.0189 | 0.0115 | 0.0012 | 0.0298 | 0.0271 | 0.0267 | 0.0251 |
| | 0.1 | 0.0330 | 0.0315 | 0.0208 | 0.0185 | 0.0112 | 0.0011 | 0.0289 | 0.0262 | 0.0258 | 0.0246 |
| | 0.25 | 0.0328 | 0.0320 | 0.0209 | 0.0185 | 0.0112 | 0.0011 | 0.0291 | 0.0266 | 0.0263 | 0.0248 |
| 0 | −0.25 | 0.0333 | 0.0322 | 0.0212 | 0.0191 | 0.0120 | 0.0012 | 0.0296 | 0.0268 | 0.0263 | 0.0250 |
| | −0.1 | 0.0328 | 0.0320 | 0.0212 | 0.0191 | 0.0120 | 0.0012 | 0.0293 | 0.0268 | 0.0261 | 0.0252 |
| | 0 | 0.0335 | 0.0319 | 0.0214 | 0.0192 | 0.0119 | 0.0012 | 0.0297 | 0.0271 | 0.0266 | 0.0254 |
| | 0.1 | 0.0338 | 0.0323 | 0.0217 | 0.0195 | 0.0122 | 0.0012 | 0.0299 | 0.0271 | 0.0270 | 0.0260 |
| | 0.25 | 0.0332 | 0.0324 | 0.0213 | 0.0192 | 0.0120 | 0.0012 | 0.0300 | 0.0273 | 0.0269 | 0.0255 |
| 0.1 | −0.25 | 0.0332 | 0.0327 | 0.0218 | 0.0198 | 0.0130 | 0.0012 | 0.0299 | 0.0274 | 0.0271 | 0.0260 |
| | −0.1 | 0.0327 | 0.0321 | 0.0218 | 0.0199 | 0.0130 | 0.0012 | 0.0294 | 0.0269 | 0.0262 | 0.0252 |
| | 0 | 0.0328 | 0.0317 | 0.0214 | 0.0194 | 0.0127 | 0.0012 | 0.0293 | 0.0264 | 0.0263 | 0.0250 |
| | 0.1 | 0.0331 | 0.0321 | 0.0215 | 0.0195 | 0.0129 | 0.0012 | 0.0295 | 0.0269 | 0.0267 | 0.0256 |
| | 0.25 | 0.0326 | 0.0321 | 0.0217 | 0.0197 | 0.0130 | 0.0012 | 0.0293 | 0.0268 | 0.0263 | 0.0254 |
| 0.25 | −0.25 | 0.0333 | 0.0315 | 0.0220 | 0.0202 | 0.0145 | 0.0013 | 0.0300 | 0.0271 | 0.0271 | 0.0260 |
| | −0.1 | 0.0327 | 0.0323 | 0.0222 | 0.0205 | 0.0148 | 0.0013 | 0.0302 | 0.0278 | 0.0275 | 0.0265 |
| | 0 | 0.0328 | 0.0312 | 0.0219 | 0.0202 | 0.0146 | 0.0012 | 0.0297 | 0.0268 | 0.0264 | 0.0255 |
| | 0.1 | 0.0333 | 0.0325 | 0.0226 | 0.0207 | 0.0147 | 0.0013 | 0.0301 | 0.0274 | 0.0274 | 0.0262 |
| | 0.25 | 0.0339 | 0.0319 | 0.0226 | 0.0208 | 0.0150 | 0.0012 | 0.0302 | 0.0275 | 0.0272 | 0.0261 |
Table 4. RMSE of the estimators d̂_GPH (log periodogram regression), d̂_sm (simple smoothing), d̂_smPβ, β = 1, 0.9, 0.7, 0.5 (smoothing with Parzen window and truncation lag m = [n^β]), and d̃_k, k = 2, 3, 5, 10 (k partitions) obtained from 10,000 realizations (length: n = 390, number of used Fourier frequencies: K = 20) of Gaussian ARFIMA(1,d,0) processes with d = −0.25, −0.1, 0, 0.1, 0.25 and ϕ_1 = −0.25, −0.1, 0, 0.1, 0.25.

| d | ϕ_1 | d̂_GPH | d̂_sm | d̂_smP1 | d̂_smP0.9 | d̂_smP0.7 | d̂_smP0.5 | d̃_2 | d̃_3 | d̃_5 | d̃_10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| −0.25 | −0.25 | 0.1818 | 0.1811 | 0.1421 | 0.1344 | 0.1084 | 0.1643 | 0.1697 | 0.1612 | 0.1595 | 0.1545 |
| | −0.1 | 0.1827 | 0.1840 | 0.1442 | 0.1365 | 0.1104 | 0.1661 | 0.1724 | 0.1634 | 0.1618 | 0.1567 |
| | 0 | 0.1851 | 0.1837 | 0.1449 | 0.1368 | 0.1092 | 0.1674 | 0.1721 | 0.1635 | 0.1621 | 0.1578 |
| | 0.1 | 0.1812 | 0.1816 | 0.1423 | 0.1343 | 0.1103 | 0.1698 | 0.1700 | 0.1623 | 0.1607 | 0.1555 |
| | 0.25 | 0.1803 | 0.1807 | 0.1412 | 0.1335 | 0.1119 | 0.1751 | 0.1701 | 0.1621 | 0.1619 | 0.1571 |
| −0.1 | −0.25 | 0.1825 | 0.1808 | 0.1466 | 0.1396 | 0.1070 | 0.0663 | 0.1717 | 0.1636 | 0.1624 | 0.1581 |
| | −0.1 | 0.1823 | 0.1782 | 0.1460 | 0.1394 | 0.1072 | 0.0669 | 0.1705 | 0.1625 | 0.1611 | 0.1580 |
| | 0 | 0.1829 | 0.1816 | 0.1467 | 0.1398 | 0.1072 | 0.0691 | 0.1727 | 0.1647 | 0.1636 | 0.1585 |
| | 0.1 | 0.1817 | 0.1775 | 0.1454 | 0.1386 | 0.1061 | 0.0698 | 0.1699 | 0.1618 | 0.1607 | 0.1569 |
| | 0.25 | 0.1811 | 0.1789 | 0.1451 | 0.1378 | 0.1059 | 0.0745 | 0.1707 | 0.1634 | 0.1625 | 0.1578 |
| 0 | −0.25 | 0.1826 | 0.1796 | 0.1481 | 0.1431 | 0.1142 | 0.0360 | 0.1721 | 0.1639 | 0.1624 | 0.1583 |
| | −0.1 | 0.1812 | 0.1790 | 0.1479 | 0.1426 | 0.1137 | 0.0359 | 0.1713 | 0.1638 | 0.1615 | 0.1588 |
| | 0 | 0.1831 | 0.1785 | 0.1486 | 0.1433 | 0.1132 | 0.0351 | 0.1723 | 0.1646 | 0.1630 | 0.1593 |
| | 0.1 | 0.1837 | 0.1796 | 0.1491 | 0.1435 | 0.1139 | 0.0351 | 0.1729 | 0.1647 | 0.1645 | 0.1611 |
| | 0.25 | 0.1824 | 0.1801 | 0.1475 | 0.1418 | 0.1123 | 0.0345 | 0.1731 | 0.1653 | 0.1640 | 0.1599 |
| 0.1 | −0.25 | 0.1822 | 0.1810 | 0.1502 | 0.146 | 0.1237 | 0.0837 | 0.1728 | 0.1657 | 0.1646 | 0.1612 |
| | −0.1 | 0.181 | 0.1793 | 0.1502 | 0.1464 | 0.1237 | 0.0831 | 0.1715 | 0.1639 | 0.1617 | 0.1588 |
| | 0 | 0.181 | 0.1781 | 0.1490 | 0.1448 | 0.1226 | 0.0820 | 0.1711 | 0.1624 | 0.1622 | 0.1582 |
| | 0.1 | 0.1819 | 0.1792 | 0.1489 | 0.1446 | 0.1223 | 0.0805 | 0.1717 | 0.1641 | 0.1633 | 0.1599 |
| | 0.25 | 0.1808 | 0.1799 | 0.1485 | 0.1437 | 0.1206 | 0.0768 | 0.1713 | 0.1640 | 0.1626 | 0.1596 |
| 0.25 | −0.25 | 0.1824 | 0.1778 | 0.1517 | 0.1493 | 0.1390 | 0.1784 | 0.1733 | 0.1648 | 0.1647 | 0.1612 |
| | −0.1 | 0.1809 | 0.1800 | 0.1522 | 0.1502 | 0.1398 | 0.1780 | 0.1738 | 0.1666 | 0.1657 | 0.1629 |
| | 0 | 0.1810 | 0.1772 | 0.1505 | 0.1483 | 0.1375 | 0.1765 | 0.1723 | 0.1636 | 0.1626 | 0.1598 |
| | 0.1 | 0.1824 | 0.1809 | 0.1526 | 0.1495 | 0.1377 | 0.1754 | 0.1737 | 0.1657 | 0.1657 | 0.1621 |
| | 0.25 | 0.1842 | 0.1799 | 0.1522 | 0.1487 | 0.1363 | 0.1718 | 0.1740 | 0.1663 | 0.1654 | 0.1623 |