1. Introduction
Long memory is a special characteristic that we may observe when analyzing time series data. A time series process is considered to have long memory if its serial dependence, or autocorrelation function (ACF), decays more slowly than exponentially (a time series with an exponentially decaying ACF is said to have short memory). In a long memory time series, the ACF decays hyperbolically and significant dependence exists between two points even when they are far apart. This hyperbolic behavior of the ACF forces an unbounded spectrum at the origin and, as a result, the standard theory for short memory time series models, such as auto-regressive moving average (ARMA) models, is not applicable. One of the earliest researchers to identify the need for long memory models was Hurst [1,2].
In order to model such long memory time series, Granger and Joyeux [3] and Hosking [4] proposed the family of auto-regressive fractionally integrated moving average (ARFIMA) models, and these proved to be very useful in many time series applications, especially in the areas of geophysics (Haslett and Raftery [5], Lustig et al. [6]), economics (Gil-Alana et al. [7]), and finance (Barkoulas et al. [8], Reschenhofer et al. [9]). To investigate some hidden characteristics of time series, Peiris [10] used a similar approach and defined a family of generalized auto-regressive (GAR) models. The ARFIMA(p,d,q) model of a process $\{X_t\}$
is defined by
$$\phi(B)\,(1-B)^{d}\,X_t = \theta(B)\,\varepsilon_t, \tag{1}$$
where $\phi(B) = 1 - \phi_1 B - \cdots - \phi_p B^p$, $\theta(B) = 1 + \theta_1 B + \cdots + \theta_q B^q$, $\{\varepsilon_t\}$ represents a zero-mean uncorrelated process with variance $\sigma^2$, $d$ is a real number which, for the process to be stationary, should satisfy $-\tfrac{1}{2} < d < \tfrac{1}{2}$, $p$ and $q$ are non-negative integers, and $B$ is the backshift operator, defined as $B X_t = X_{t-1}$.
The interested reader may compare (1) to a standard Box–Jenkins ARIMA model (Box and Jenkins [11]), where $d$ is a non-negative integer. Where $d$ is allowed to be fractional, (1) may be rearranged to show a factor $(1-B)^{d}$, which can be written as a Taylor series expansion
$$(1-B)^{d} = \sum_{j=0}^{\infty} \pi_j B^{j}, \quad \text{with} \quad \pi_j = \frac{\Gamma(j-d)}{\Gamma(j+1)\,\Gamma(-d)}.$$
When $p = q = 0$, (1) is often referred to as fractionally differenced white noise or FDWN.
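The fractional differencing expansion can be checked numerically. The sketch below (Python; not part of the original paper) verifies that the series coefficients of $(1-B)^{d}$ and $(1-B)^{-d}$ convolve to the identity filter, and that a stable recursion reproduces the Gamma-function form of $\pi_j$:

```python
# Sketch (not from the paper): the coefficients of (1 - B)^d and (1 - B)^(-d)
# must convolve to the identity filter 1, 0, 0, ...
import math

def pi_coeffs(d, n):
    """First n coefficients of (1 - B)^d, via pi_0 = 1 and
    pi_j = pi_{j-1} * (j - 1 - d) / j."""
    out = [1.0]
    for j in range(1, n):
        out.append(out[-1] * (j - 1 - d) / j)
    return out

d, n = 0.4, 50
pi_d = pi_coeffs(d, n)    # expansion of (1 - B)^d
psi_d = pi_coeffs(-d, n)  # expansion of (1 - B)^(-d)

# Convolving the two filters must give the identity filter.
conv = [sum(pi_d[i] * psi_d[k - i] for i in range(k + 1)) for k in range(n)]
assert abs(conv[0] - 1.0) < 1e-12
assert all(abs(c) < 1e-12 for c in conv[1:])

# The recursion agrees with the Gamma form pi_j = G(j-d)/(G(j+1) G(-d)).
for j in range(1, 10):
    direct = math.gamma(j - d) / (math.gamma(j + 1) * math.gamma(-d))
    assert abs(direct - pi_d[j]) < 1e-12
```

The recursion is used because computing $\Gamma$ ratios directly overflows for large $j$, while the term-by-term ratio $(j-1-d)/j$ is always well behaved.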
Section 2 is devoted to highlighting the important properties of the GAR(1) model and its relationship to ARFIMA. Recent advancements related to the GAR(1) model can be found in Hunt et al. [12], and further extensive results on long memory time series are available in Hassler [13].
In this paper, we will explore some issues with the formulae supplied in Granger and Joyeux [3] for the spectral density function of ARFIMA processes, and then move on to extend the current results for multivariate ARFIMA autocorrelation functions. Section 2 will briefly examine the GAR model of Peiris [10], which provides a general formula for the ACF of these processes. Section 3 will examine and discuss some issues with Granger and Joyeux [3], and Section 4 will look at a vector ARFIMA(0,d,0) process, extending existing results to a wider range of the fractional differencing exponent. Section 5 will conclude the paper.
2. Generalized Auto-Regressive Model of Order 1 (GAR(1))
In his paper, Peiris [10] considered a time series $\{X_t\}$ generated by a GAR(1) model given by
$$(1 - \alpha B)^{\delta}\,X_t = \varepsilon_t, \tag{2}$$
where $B$ is the backshift operator and $\{\varepsilon_t\}$ is a white noise process with variance $\sigma^2$. The restriction $-\tfrac12 < \delta < \tfrac12$ in (2) can be removed as $|\alpha| < 1$. The stationary solution to (2) is
$$X_t = (1 - \alpha B)^{-\delta}\,\varepsilon_t = \sum_{j=0}^{\infty} \frac{\Gamma(j+\delta)}{\Gamma(j+1)\,\Gamma(\delta)}\,\alpha^{j}\,\varepsilon_{t-j},$$
and the corresponding spectrum $f(\omega)$ is
$$f(\omega) = \frac{\sigma^{2}}{2\pi}\left(1 - 2\alpha\cos\omega + \alpha^{2}\right)^{-\delta},$$
where $-\pi \le \omega \le \pi$.
It has been shown by Peiris [10] (p. 163, Theorem 3.2) that the ACF at lag $k$, $\rho_k$, is given by
$$\rho_k = \frac{\alpha^{k}\,\Gamma(k+\delta)}{\Gamma(k+1)\,\Gamma(\delta)}\cdot\frac{{}_2F_1(\delta,\,k+\delta;\,k+1;\,\alpha^{2})}{{}_2F_1(\delta,\,\delta;\,1;\,\alpha^{2})}, \qquad k \ge 0. \tag{3}$$
Furthermore, we can use Euler's reflection formula $\Gamma(\delta)\,\Gamma(1-\delta) = \frac{\pi}{\sin(\pi\delta)}$ to give
$$\rho_k = \frac{\sin(\pi\delta)\,\Gamma(1-\delta)\,\alpha^{k}\,\Gamma(k+\delta)}{\pi\,\Gamma(k+1)}\cdot\frac{{}_2F_1(\delta,\,k+\delta;\,k+1;\,\alpha^{2})}{{}_2F_1(\delta,\,\delta;\,1;\,\alpha^{2})}, \tag{4}$$
where ${}_2F_1(a, b;\,c;\,z)$ is the hypergeometric function.
These general results in (3) and (4) can be used in ARFIMA modeling. The interested reader is advised to refer to Bondon and Palma [14] or Hassler [13] for further details.
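The hypergeometric form of the GAR(1) second moments can be checked numerically. The following sketch (helper names `hyp2f1` and `poch`, and the parameter values, are ours, not the paper's) compares a truncated MA($\infty$) computation of the autocovariance of $(1-\alpha B)^{-\delta}\varepsilon_t$ against the closed ${}_2F_1$ expression:

```python
# Sketch (parameter values and helper names are ours): the truncated
# MA(infinity) autocovariance of (1 - alpha B)^(-delta) eps_t should match
#   alpha^k * poch(delta, k)/k! * 2F1(delta, delta+k; k+1; alpha^2).
import math

def hyp2f1(a, b, c, z, terms=500):
    """Gauss hypergeometric series 2F1(a, b; c; z) for |z| < 1."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

def poch(x, k):
    """Pochhammer symbol (rising factorial) x (x+1) ... (x+k-1)."""
    out = 1.0
    for i in range(k):
        out *= x + i
    return out

alpha, delta = 0.6, 0.3
N = 500  # psi_j decays geometrically for |alpha| < 1, so this is ample

psi = [1.0]  # psi_j = poch(delta, j)/j! * alpha^j, by recursion
for j in range(1, N):
    psi.append(psi[-1] * (delta + j - 1) / j * alpha)

for k in range(5):
    ma_form = sum(psi[l] * psi[l + k] for l in range(N - k))
    closed = alpha**k * poch(delta, k) / math.factorial(k) * \
             hyp2f1(delta, delta + k, k + 1, alpha**2)
    assert abs(ma_form - closed) < 1e-10
```

Dividing any such autocovariance by the $k = 0$ value yields the corresponding ACF value, so the same check covers the ratio form quoted above.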
Next, consider the ARFIMA(0,d,0) model, also known as fractionally differenced white noise.
3. Fractionally Differenced White Noise—A Discussion
Formally, we define a FDWN process as (1) with $p = q = 0$, although this can also be defined as (2) with $\alpha = 1$. In this section, to emphasize the fractional nature of the exponent, we will use the notation $\delta$ rather than $d$ for this parameter.
In this case, Peiris [10] provides a formula for the auto-covariance as
$$\gamma_k = \sigma^{2}\,\frac{\Gamma(1-2\delta)}{\Gamma(\delta)\,\Gamma(1-\delta)}\cdot\frac{\Gamma(k+\delta)}{\Gamma(k+1-\delta)}. \tag{7}$$
We can use Euler's reflection formula to give
$$\gamma_k = \frac{\sigma^{2}\,\sin(\pi\delta)\,\Gamma(1-2\delta)}{\pi}\cdot\frac{\Gamma(k+\delta)}{\Gamma(k+1-\delta)}. \tag{8}$$
Other authors have also provided results. In Hosking [4], Theorem 1 looks at FDWN with $-\tfrac12 < \delta < \tfrac12$ and $\sigma^{2} = 1$, and provides a formula
$$\gamma_k = \frac{(-1)^{k}\,\Gamma(1-2\delta)}{\Gamma(k+1-\delta)\,\Gamma(1-k-\delta)}. \tag{9}$$
(9) can be shown to be identical to (7) using
$$\Gamma(k+\delta)\,\Gamma(1-k-\delta) = (-1)^{k}\,\Gamma(\delta)\,\Gamma(1-\delta).$$
Palma [15] also provides a similar formula for a general $\delta$ (his Equation 3.21), and the implication from his Section 3.2.1 (although not explicitly stated for the ACF) is that this holds for $-\tfrac12 < \delta < \tfrac12$. As above, a similar result was also reported by Bondon and Palma [14]. In Hassler [13], Proposition 6.4 formally provides this result for $-1 < \delta < \tfrac12$.
However, the result due to Granger and Joyeux [3] (p. 17) for the auto-covariance (used by Granger and Joyeux [3] to identify the auto-covariance at lag $k$) does not reduce to (8). We now proceed to explore why this is the case.
In Granger and Joyeux [3] Section 2, the spectrum of the process being studied is given as
$$f(\omega) = c\,(1-\cos\omega)^{-d}. \tag{11}$$
The assumption behind this is that $c$ may consist of a range of non-long-memory parameters. For instance, for a fractional white noise process, one would expect $c = \frac{\sigma^{2}}{2\pi}$.
However, this is at best misleading.
Suppose
$$f(\omega) = \frac{\sigma^{2}}{2\pi}\left(2\sin\frac{\omega}{2}\right)^{-2d} \tag{12}$$
(Brockwell and Davis [16] 13.2.18). This can clearly be written as (11) by setting $c = \frac{\sigma^{2}}{2\pi}\,2^{-d}$; however, this $c$ is no longer independent of the long memory parameter $d$. We believe the intention was that $c$ should have been a constant independent of $d$.
We feel it would be best to write the spectral density as (12) rather than (11), and use $c = \frac{\sigma^{2}}{2\pi}$. In the more general form used by Granger and Joyeux [3], the spectral density is
$$f(\omega) = c\left(2\sin\frac{\omega}{2}\right)^{-2d}. \tag{13}$$
This changes the formula for the auto-covariance function. To avoid confusion we denote the auto-covariance function of (13) as $f_2(k)$, to distinguish it from the version documented in Granger and Joyeux [3], labeled as $f_1(k)$ and written as
$$f_1(k) = 2^{d+1}\,c\,\sin(\pi d)\,\Gamma(1-2d)\,\frac{\Gamma(k+d)}{\Gamma(k+1-d)}. \tag{14}$$
Lemma 1. For the process with spectral density (13), $f_2(k) = 2c\,\sin(\pi d)\,\Gamma(1-2d)\,\frac{\Gamma(k+d)}{\Gamma(k+1-d)}$, $k = 0, 1, 2, \ldots$.
Proof. We proceed by evaluating a formula for $f_2(k)$ similar to that which Granger and Joyeux [3] obtained for $f_1(k)$. We can write
$$f_2(k) = \int_{-\pi}^{\pi} c\left(2\sin\frac{\omega}{2}\right)^{-2d} e^{ik\omega}\,d\omega = 2c\int_{0}^{\pi}\left(2\sin\frac{\omega}{2}\right)^{-2d}\cos(k\omega)\,d\omega = 2^{2-2d}\,c\int_{0}^{\pi/2}\left(\sin x\right)^{-2d}\cos(2kx)\,dx,$$
where we have used the identity $e^{ik\omega} = \cos(k\omega) + i\sin(k\omega)$ (the sine term integrates to zero by symmetry), followed by the substitution $\omega = 2x$.
Note that, at this point in Granger and Joyeux [3], there appears to be a typographic error where the limits of integration are mistakenly set to be between 0 and $\pi$, rather than 0 and $\pi/2$.
Using Gradshteyn and Ryzhik [17] Equation 3.631.8 with $\nu = 1-2d$, which requires $d < \tfrac12$, and $a = k$, we have
$$f_2(k) = 2^{2-2d}\,c\;\frac{(-1)^{k}\,\pi}{2^{1-2d}\,(1-2d)\,B\!\left(1-d+k,\;1-d-k\right)}.$$
The beta function can be represented as $B(x, y) = \frac{\Gamma(x)\,\Gamma(y)}{\Gamma(x+y)}$, so that
$$f_2(k) = \frac{2\pi c\,(-1)^{k}\,\Gamma(2-2d)}{(1-2d)\,\Gamma(1-d+k)\,\Gamma(1-d-k)}.$$
Now
$$\Gamma(2-2d) = (1-2d)\,\Gamma(1-2d),$$
so
$$f_2(k) = \frac{2\pi c\,(-1)^{k}\,\Gamma(1-2d)}{\Gamma(1-d+k)\,\Gamma(1-d-k)}.$$
Euler's reflection formula is $\Gamma(z)\,\Gamma(1-z) = \frac{\pi}{\sin(\pi z)}$, so
$$\frac{1}{\Gamma(1-d-k)} = \frac{\sin(\pi(d+k))\,\Gamma(d+k)}{\pi},$$
so that
$$f_2(k) = 2c\,(-1)^{k}\,\sin(\pi(d+k))\,\Gamma(1-2d)\,\frac{\Gamma(k+d)}{\Gamma(k+1-d)}.$$
Now $\sin(\pi(d+k)) = (-1)^{k}\,\sin(\pi d)$ for integer $k$, so
$$f_2(k) = 2c\,\sin(\pi d)\,\Gamma(1-2d)\,\frac{\Gamma(k+d)}{\Gamma(k+1-d)}. \tag{17}$$
To complete the proof, compare (17) with (8), which are equal when $\sigma^{2} = 2\pi c$ (taking $\delta = d$). □
Further, compare (17) with the original version given by Granger and Joyeux [3] in (14). The difference is a factor of $2^{d}$. As a practical comparison, consider Table 1 below.
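The Table 1 comparison can be reproduced numerically. The sketch below assumes, consistently with the tabulated values, that the parameters are $d = 0.4$ and $\sigma^{2} = 1$ with $c = \sigma^{2}/(2\pi)$, and that the uncorrected value differs from the corrected one by the factor $2^{d}$:

```python
# Sketch reproducing Table 1, assuming (consistent with the tabulated values)
# d = 0.4, sigma^2 = 1, c = sigma^2/(2 pi), and an uncorrected value that
# differs from the corrected one by the factor 2**d.
import math

g = math.gamma
d, sigma2 = 0.4, 1.0
c = sigma2 / (2 * math.pi)

# Corrected variance: 2 c sin(pi d) Gamma(1-2d) Gamma(d) / Gamma(1-d), i.e.
# the closed form evaluated at lag k = 0.
corrected = 2 * c * math.sin(math.pi * d) * g(1 - 2*d) * g(d) / g(1 - d)
assert abs(corrected - 2.070098) < 1e-5

# Equivalent Brockwell-Davis / Hosking variance Gamma(1-2d) / Gamma(1-d)^2.
assert abs(corrected - g(1 - 2*d) / g(1 - d)**2) < 1e-12

uncorrected = 2**d * corrected
assert abs(uncorrected - 2.731511) < 1e-5
```

The last assertion matching the published "original, uncorrected" value to five decimal places is what supports reading the discrepancy as a $2^{d}$ factor.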
4. Vector FDWN and Related Results
This section considers an extension of the above results to the vector case. We note that some results have already been published for a particular case in Kechagias and Pipiras [18] (Proposition 5.1), but we consider a special case and show an alternative derivation.
Suppose that $\mathbf{X}_t = (X_{1t}, \ldots, X_{mt})'$ is an $m$-dimensional vector of time series at time $t$. Assume that each component time series follows long memory
$$(1-B)^{d_i}\,X_{it} = \varepsilon_{it}, \qquad i = 1, \ldots, m, \tag{18}$$
with backshift operator $B$ and $-1 < d_i < \tfrac12$, where $\boldsymbol{\varepsilon}_t = (\varepsilon_{1t}, \ldots, \varepsilon_{mt})'$ is an $m$-dimensional zero-mean covariance stationary vector with variance-covariance matrix $\Sigma$. That is, $\mathrm{E}(\boldsymbol{\varepsilon}_t) = \mathbf{0}$ and $\mathrm{E}(\boldsymbol{\varepsilon}_t\,\boldsymbol{\varepsilon}_s') = \Sigma$ if $t = s$ and $\mathbf{0}$ otherwise, for all $t, s$.
Let $\Sigma = [\sigma_{ij}]$, where $\sigma_{ij} = \mathrm{Cov}(\varepsilon_{it}, \varepsilon_{jt})$ for each $i, j \in \{1, \ldots, m\}$.
Theorem 1.
(a) $X_{it} = (1-B)^{-d_i}\,\varepsilon_{it} = \sum_{j=0}^{\infty} \psi_j^{(i)}\,\varepsilon_{i,t-j}$ with $\psi_j^{(i)} = \frac{\Gamma(j+d_i)}{\Gamma(j+1)\,\Gamma(d_i)}$, which converges for $-1 < d_i < \tfrac12$ using arguments from Bondon and Palma [14] and Hassler [13] (Definition 3.1 and Proposition 6.2).
(b) Let $\gamma_{ij}(k) = \mathrm{E}(X_{it}\,X_{j,t+k})$. Then we have:
$$\gamma_{ij}(k) = \sigma_{ij}\,\frac{\Gamma(1-d_i-d_j)}{\Gamma(d_j)\,\Gamma(1-d_j)}\cdot\frac{\Gamma(k+d_j)}{\Gamma(k+1-d_i)},$$
where $d_i + d_j < 1$ for all $i, j$.
(c) Let $\Gamma_k$ be the auto-covariance matrix at lag $k$ of $\mathbf{X}_t$. Then we have:
$$\Gamma_k = \left[\gamma_{ij}(k)\right]_{i,j=1}^{m},$$
where $\gamma_{ij}(k)$ is as given in (b) and $\Gamma_{-k} = \Gamma_k'$ for all $k \ge 0$.
Proof. Let $\psi_j^{(i)} = \frac{\Gamma(j+d_i)}{\Gamma(j+1)\,\Gamma(d_i)}$. Now
$$\gamma_{ij}(k) = \sigma_{ij}\sum_{l=0}^{\infty}\psi_l^{(i)}\,\psi_{l+k}^{(j)} = \sigma_{ij}\,\frac{\Gamma(k+d_j)}{\Gamma(k+1)\,\Gamma(d_j)}\;{}_2F_1\!\left(d_i,\;k+d_j;\;k+1;\;1\right). \tag{19}$$
When $d_i + d_j < 1$, (19) reduces, by Gauss's summation theorem, to
$$\gamma_{ij}(k) = \sigma_{ij}\,\frac{\Gamma(k+d_j)}{\Gamma(k+1)\,\Gamma(d_j)}\cdot\frac{\Gamma(k+1)\,\Gamma(1-d_i-d_j)}{\Gamma(k+1-d_i)\,\Gamma(1-d_j)},$$
and when simplified these reduce to (b) in the theorem.
(19) can be rewritten using Euler's reflection formula as
$$\gamma_{ij}(k) = \sigma_{ij}\,\frac{\sin(\pi d_j)\,\Gamma(1-d_i-d_j)}{\pi}\cdot\frac{\Gamma(k+d_j)}{\Gamma(k+1-d_i)}. \tag{20}$$
We can again apply Euler's reflection formula to give
$$\gamma_{ij}(k) = \sigma_{ij}\,\frac{\sin(\pi d_j)}{\sin(\pi(d_i+d_j))\,\Gamma(d_i+d_j)}\cdot\frac{\Gamma(k+d_j)}{\Gamma(k+1-d_i)}.$$
When $k = 0$, then $\gamma_{ij}(0)$ can be further reduced to
$$\gamma_{ij}(0) = \sigma_{ij}\,\frac{\Gamma(1-d_i-d_j)}{\Gamma(1-d_i)\,\Gamma(1-d_j)},$$
and when $i = j$ (writing $d_i = \delta$) this is the same as (7) at $k = 0$. □
Remark 1. It is straightforward to show that these formulae are a special case of those provided by Kechagias and Pipiras [18] (Proposition 5.1) when $0 < d_i < \tfrac12$ for all $i$. Using their notation, one of their two coefficient matrices is chosen to be the matrix of zeros; with these values, Kechagias and Pipiras [18] Equation (69) reduces to (20), and their Proposition 5.1 defines an auto-covariance that is the same as (20). When $m = 1$ this readily reverts to the univariate case since, as noted above (writing $d_1 = \delta$),
$$\gamma_{11}(k) = \sigma^{2}\,\frac{\Gamma(1-2\delta)}{\Gamma(\delta)\,\Gamma(1-\delta)}\cdot\frac{\Gamma(k+\delta)}{\Gamma(k+1-\delta)},$$
which is the same as (7).
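The vector cross-covariance result can also be checked numerically. In the sketch below (the names `psi`, `di`, `dj` and the parameter values are ours), the closed Gamma-function form is compared against a truncated MA($\infty$) convolution of the two components' coefficient sequences:

```python
# Sketch (notation ours): check the closed form for the cross-covariance of
# two fractionally differenced components driven by correlated white noise,
#   gamma_ij(k) = sigma_ij * G(1-di-dj)/(G(dj) G(1-dj)) * G(k+dj)/G(k+1-di),
# against a truncated MA(infinity) convolution sum.
import math

g = math.gamma

def psi(d, n):
    """MA coefficients of (1-B)^(-d): psi_j = G(j+d)/(G(j+1) G(d)), by recursion."""
    out = [1.0]
    for j in range(1, n):
        out.append(out[-1] * (d + j - 1) / j)
    return out

di, dj, sigma_ij = 0.1, 0.35, 0.8
N = 100_000  # the coefficients decay hyperbolically, so a long truncation is needed
psi_i, psi_j = psi(di, N), psi(dj, N)

for k in range(3):
    ma_form = sigma_ij * sum(psi_i[l] * psi_j[l + k] for l in range(N - k))
    closed = sigma_ij * g(1 - di - dj) / (g(dj) * g(1 - dj)) \
             * g(k + dj) / g(k + 1 - di)
    # Loose tolerance: the truncated tail is of order N^(di+dj-1).
    assert abs(ma_form - closed) < 1e-2
```

The slow $l^{\,d_i+d_j-2}$ decay of the summands is exactly the long memory property being modeled, which is why the truncation must be so long compared with the geometric GAR(1) case.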
5. Conclusions
Long memory processes exhibit relatively high correlations between observations even when they occur far apart in time. These processes can be modeled using ARFIMA processes.
Vector processes can also exhibit long memory and this can happen to different degrees for different components.
In this paper we have explored some issues with a previous formula for the ACF and spectral density of a univariate model, and also looked at extending the applicability of the result for the ACF of a vector ARFIMA(0,d,0) process. Later work may consider extending this to a more general ARFIMA(p,d,q) model.
Author Contributions
Conceptualization, S.P.; methodology, S.P. and R.H.; software, R.H.; validation, S.P. and R.H.; writing—original draft preparation, R.H.; writing—review and editing, S.P. and R.H. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
This article did not use any research data.
Acknowledgments
This work was initiated while Shelton Peiris was visiting the Tor Vergata University of Rome in September 2022. He acknowledges the hospitality of the Faculty of Economics and Tommaso Proietti. The authors gratefully acknowledge the comments and suggestions from the referees and the editorial board, which have improved the quality of this paper.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Hurst, H. Long-term storage capacity of reservoirs. Trans. Am. Soc. Civ. Eng. 1951, 116, 770. [Google Scholar] [CrossRef]
- Hurst, H. The problem of long-term storage in reservoirs. Int. Assoc. Sci. Hydrol. 1956, 1, 13–27. [Google Scholar] [CrossRef]
- Granger, C.; Joyeux, R. An Introduction to Long Memory Time Series models and Fractional Differencing. J. Time Ser. Anal. 1980, 1, 15–29. [Google Scholar] [CrossRef]
- Hosking, J. Fractional differencing. Biometrika 1981, 68, 165–176. [Google Scholar] [CrossRef]
- Haslett, J.; Raftery, A. Space-time Modelling with Long-memory Dependence: Assessing Ireland’s Wind Power Resource. J. R. Stat. Soc. Ser. C 1989, 38, 1–50. [Google Scholar] [CrossRef]
- Lustig, A.; Charlot, P.; Marimoutou, V. The memory of ENSO revisited by a 2-factor Gegenbauer process. Int. J. Climatol. 2017, 37, 2295–2303. [Google Scholar] [CrossRef]
- Gil-Alana, L.; Ozdemir, Z.; Tansel, A. Long Memory in Turkish Unemployment Rates. Emerg. Mark. Financ. Trade 2019, 55, 201–217. [Google Scholar] [CrossRef]
- Barkoulas, J.; Labys, W.; Onochie, J. Fractional Dynamics in International Commodity Prices. J. Futur. Mark. 1997, 17, 161–189. [Google Scholar] [CrossRef]
- Reschenhofer, E.; Mangat, M.; Zwatz, C.; Guzmics, S. Evaluation of current research on stock return predictability. J. Forecast. 2020, 39, 334–351. [Google Scholar] [CrossRef]
- Peiris, S. Improving the quality of forecasting using generalized AR models: An application to statistical quality control. Stat. Methods 2003, 5, 156–171. [Google Scholar]
- Box, G.; Jenkins, G. Time Series Analysis: Forecasting and Control; Holden-Day: San Francisco, CA, USA, 1976. [Google Scholar]
- Hunt, R.; Peiris, S.; Weber, N. Seasonal Generalized AR models. Commun. Stat.-Theory Methods 2022. [Google Scholar] [CrossRef]
- Hassler, U. Time Series Analysis with Long Memory in View; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2019. [Google Scholar] [CrossRef]
- Bondon, P.; Palma, W. A class of antipersistent processes. J. Time Ser. Anal. 2007, 28, 261–273. [Google Scholar] [CrossRef]
- Palma, W. Long-Memory Time Series Theory and Methods; John Wiley & Sons: Hoboken, NJ, USA, 2006. [Google Scholar]
- Brockwell, P.; Davis, R. Time Series: Theory and Methods, 2nd ed.; Springer Science and Business Media: New York, NY, USA, 1991. [Google Scholar]
- Gradshteyn, I.; Ryzhik, I. Table of Integrals, Series, and Products, 8th ed.; Elsevier Inc.: Amsterdam, The Netherlands, 2014. [Google Scholar]
- Kechagias, S.; Pipiras, V. Definitions and representations of multivariate long-range dependent time series. J. Time Ser. Anal. 2015, 36, 1–25. [Google Scholar] [CrossRef]
Table 1. Specific values of the process variance for parameter values $d = 0.4$ and $\sigma^{2} = 1$.
Formula | Variance
---|---
Original, Uncorrected | 2.731511 |
Corrected (& Hosking [4]) | 2.070098 |
Brockwell and Davis [16] 13.2.8 | 2.070098 |
Peiris [10] Thm 3.1 | 2.070098 |