Sustainability
  • Article
  • Open Access

18 May 2019

Application of Wavelet-Based Maximum Likelihood Estimator in Measuring Market Risk for Fossil Fuel

1 Economics Department, Business School, The University of Western Australia, Perth WA 6009, Australia
2 Faculty of Finance, Banking and Business Administration, Quy Nhon University, Quy Nhon 590000, Vietnam
3 Business and Economics Research Group, Ho Chi Minh City Open University, Ho Chi Minh City 700000, Vietnam
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Developments in Risk Measurement, with Applications in Climate Change Finance and Economics

Abstract

Energy commodity prices are inherently volatile, since they are determined by the volatile global demand and supply of fossil fuel extractions, which in the long run will affect the observed climate patterns. Measuring the risk associated with energy price changes therefore ultimately provides us with an important tool to study the economic drivers of climate change. This study examines the potential use of long-memory estimation methods in capturing such risk. In particular, we are interested in investigating the energy markets' efficiency at the aggregated level, using a novel wavelet-based maximum likelihood estimator (waveMLE). We first compare the performance of various conventional estimators with this new method. Our simulation results show that waveMLE in general outperforms these previously well-established estimators. Additionally, we document that while energy return realizations follow a white-noise process and are generally independent, volatility processes exhibit a certain degree of long-range dependence.

1. Introduction

The price quoted for an asset reflects the present value of a future stream of expected earnings. In an efficient market, any re-evaluation of the asset price must therefore immediately reflect unforeseen changes in those earnings. Conventional asset pricing theories show that the magnitude and frequency of such changes, whenever they appear, can be used as a measure of risk. Market imperfections, however, cause prices to reflect information slowly, and the response to new information is sometimes dragged out over a long period. This well-established empirical regularity, known as the long-range dependence of price observations, serves as the main theme of this paper.
Evidence of this phenomenon is most often illustrated by the slow hyperbolic decay rate of the autocorrelation function of empirical time series in the physical sciences. Similarly, it is widely documented that the evolution of the risk of financial assets' returns constitutes a long-memory stochastic process. To be specific, this type of process is defined with a real number H and a constant C such that the process's autocorrelation satisfies $\rho(l) = C l^{2H-2}$ as the lag parameter $l \to \infty$. We show later that the parameter H is known as the Hurst exponent, named after the hydrologist Hurst, who first analyzed the presence and measurement of long-memory behavior in stochastic processes [1]. In the following section, we provide a theoretical background for studying the long-memory of volatility processes. Section 3 then presents a number of well-established and robust methodologies aimed at detecting and estimating the degree of long-memory characterized by the level of the Hurst exponent H. In Section 4, we demonstrate the performance of these methods using simulated data and show that the wavelet-based maximum likelihood estimator, originally formulated by [2], generally outperforms several other methods in most of the simulated experiments and is arguably inferior to none. The application of the wavelet estimator to actual energy price time series is the topic of Section 5, where we find support for long-memory in the spot returns of major global fossil fuels. Section 6 provides concluding remarks, the implications of our study, and qualifications for our results.

3. Methods for Estimating Long-Memory in Financial Time Series

3.1. Conventional Methods

A well-established technique for estimating the Hurst exponent is based on a statistic known as the "rescaled range over standard deviation" or "R/S" statistic. This statistic was first introduced in 1951 by the hydrologist H. E. Hurst, who observed long-range dependence in the dynamics of the Nile river's annual water level in order to determine the long-term storage capacity crucial for the construction of irrigation reservoirs. Since then, robust empirical evidence of long-range dependence in time series has been extensively documented in various disciplines, particularly in the physical sciences, where the studied time series exhibit some kind of trending behavior (e.g., the circumferences of tree trunks, levels of rainfall, fluctuations in air temperature, oceanic movements, and volcanic activity). Among the first to use rescaled range analysis to examine this behavior in common stock returns was the renowned mathematician B. Mandelbrot, who also coined the term Hurst exponent in recognition of Hurst. Refs. [12,29] radically refined the R/S statistic. In particular, they advocate its robustness in detecting as well as estimating long-range dependence even for non-Gaussian processes with extreme degrees of skewness and kurtosis. Furthermore, this method's superiority over traditional approaches such as spectral analysis or variance ratios in detecting long-memory was also presented in this body of research.
However, as [5] pointed out, these refinements were not able to distinguish the effects of short-range and long-range dependence. To compensate for this weakness, he proposed a new, modified R/S framework. His findings indicate that the dependence structure documented in previous studies is mostly short-ranged, corresponding to high-frequency autocorrelation or heteroskedasticity. There are two important implications for us from [5]: (i) empirical inferences of long-range behavior must be drawn carefully, preferably by accounting for dependence at higher frequencies, and (ii) in such cases, conventional short-range dependence models (such as AR(1) or random walk models) might be adequate. On the other hand, as implied in a counterargument raised by [30], we should also be cautious of the implications of [5]'s modified method because of its tendency to reject long-range dependence even when evidence of such behavior in fact exists (albeit weakly).
Therefore, despite the enormous praise the R/S statistic has enjoyed over the years, we follow these authors' advice of not relying solely on this technique, but on a diverse range of well-established alternatives in the literature for estimating long-range dependence. In the following paragraphs we describe the methods we use to estimate the long-range dependence parameter H. Furthermore, to suit our empirical analysis, we focus on the case of discrete-time stochastic processes. Additional estimation techniques we utilize in this study include the aggregated variance method as analyzed in [25,31], the Higuchi method [32], the residuals of regression (Peng) method [33], and the periodogram method [34]. Detailed discussions of these methods, as well as their strengths and weaknesses, are available upon request.
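To make the classical rescaled-range procedure concrete, the following minimal sketch (our illustration, not the authors' code; the block-size grid and the simple OLS fit are assumptions made for brevity) computes block-wise R/S statistics and regresses log(R/S) on the log block size, whose slope estimates H.

```python
# A minimal sketch of the classical R/S estimate of the Hurst exponent H.
import numpy as np

def rs_hurst(x, min_block=16):
    """Regress log(R/S) on log(block size); the slope estimates H."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_means = [], []
    size = min_block
    while size <= n // 2:
        rs_blocks = []
        for start in range(0, n - size + 1, size):
            block = x[start:start + size]
            dev = np.cumsum(block - block.mean())   # cumulative deviations from the block mean
            r = dev.max() - dev.min()               # range of cumulative deviations
            s = block.std(ddof=1)                   # block standard deviation
            if s > 0:
                rs_blocks.append(r / s)
        sizes.append(size)
        rs_means.append(np.mean(rs_blocks))
        size *= 2                                   # dyadic grid of block sizes
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope                                    # E[R/S] grows roughly as C * n^H
```

For a white-noise series the fitted slope is close to 0.5, while persistent series yield values above 0.5.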

3.2. Wavelet-Based Maximum Likelihood Estimator

Most recent studies adopt a 'time domain' perspective, that is, the data are analyzed as time series recorded at pre-determined frequencies (i.e., daily, weekly, monthly, etc.). This approach, however effective, implicitly assumes that the recorded frequency is the sole frequency to be considered when studying realizations of a time-varying variable. Problems emerge when this assumption turns out to be insufficient. Specifically, what happens when many frequencies, not just one, dictate the underlying generating process of the variable of interest? This issue is particularly relevant in the context of financial assets, whose prices are determined by the activities of agents with multiple trading frequencies.
To address this concern, a different approach that takes the frequency aspect into account is called for. A well-established methodology representing this branch of 'frequency-domain' analysis is the Fourier transform/spectral analysis. In general, this method is a very powerful tool specifically designed to study the cyclical behavior of stationary variables. Based on this fundamental idea, an advanced technique was developed to simultaneously incorporate both aspects of a data sequence. This relatively novel methodology is known as the wavelet transform. It is worth noting that although wavelet analysis has long been used in engineering fields, in particular signal processing, its application in finance has only recently become popular thanks to the work of pioneers such as [2,35].
To apply the wavelet-based estimation method to long-memory processes, we begin by examining the popular case of the fractional ARIMA process class: the FARIMA (0, d, 0) [also known as a "fractional difference process" (hereafter, FDP)], which is described as $X_t = \Delta^{-d} z_t$ with $z_t \sim i.i.d.\ N(0, \sigma_z^2)$, or, equivalently,
$(1 - L)^d X_t = z_t$
with d the fractional difference parameter. This expression means that the d-th order difference of $\{X_t\}$ equals a (stationary) white-noise process. A zero-mean FDP (with $-0.5 < d < 0.5$), denoted as $\{X_t\}$, is stationary and invertible (see, e.g., [2,36]). Recall that we define its slowly decaying auto-covariance function as:
$\gamma(l) = E[X_t X_{t+l}] \sim C_d |l|^{2d-1}$ as $l \to \infty$.
Correspondingly, for frequency f satisfying $-1/2 < f < 1/2$, the spectral density function (SDF) of $\{X_t\}$ satisfies:
$S(f) = \sigma_z^2 |2 \sin(\pi f)|^{-2d} \propto f^{-2d}$ as $f \to 0$.
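To illustrate these definitions, the following sketch (our illustration only; the truncated MA(infinity) expansion of $(1-L)^{-d}$ and the burn-in length are simplifying assumptions) simulates an FDP and exhibits the slow, hyperbolic ACF decay just described.

```python
# A minimal sketch: simulate a FARIMA(0, d, 0) / fractional difference process
# via a truncated MA(infinity) expansion of (1 - L)^(-d).
import numpy as np

def simulate_fdp(n, d, sigma=1.0, burn=1000, seed=0):
    """X_t = (1 - L)^(-d) z_t with z_t ~ i.i.d. N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    m = n + burn
    z = rng.normal(0.0, sigma, m)
    # MA weights psi_k = Gamma(k + d) / (Gamma(d) Gamma(k + 1)),
    # built recursively as psi_k = psi_{k-1} * (k - 1 + d) / k.
    psi = np.ones(m)
    for k in range(1, m):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    x = np.convolve(z, psi)[:m]        # X_t = sum_k psi_k z_{t-k}
    return x[burn:]                    # drop the burn-in to soften truncation effects

def sample_acf(x, max_lag):
    """Sample autocorrelations rho(1), ..., rho(max_lag)."""
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[:-l] * x[l:]) / denom for l in range(1, max_lag + 1)])

x = simulate_fdp(4096, d=0.25)
print(sample_acf(x, 10))               # decays hyperbolically rather than geometrically
```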
The SDFs of the process with different values of d (and standard normal innovations, i.e., $\sigma_z^2 = 1$) are plotted in Figure 1. When $0 < d < 0.5$ (i.e., long memory exists), the slope of the SDF on a log-log scale increases as d increases. In this case, the SDF has an asymptote at frequency zero, or it is "unbounded at the origin", in [36]'s terminology. In other words, $S(f) \to \infty$ when $f \to 0$. Correspondingly, the auto-correlation function (hereafter, ACF) decays more slowly as d increases. While the ACF of a process with $d \approx 0$ quickly dissipates after a small number of lags, the ACF of a process at the 'high-end' of the long-memory family, with $d = 0.5$, effectively persists at long lags. The former is commonly interpreted as exhibiting "short-memory" behavior.
Figure 1. Spectral density function and ACF of a fractional difference process with various values of d. Notes: The higher the fractional difference parameter, the stronger the long-run dependence and the steeper the slope of the spectrum. The rate of decay of the corresponding ACF decreases as d increases: from very quick (when $d \approx 0$, i.e., "short-memory") to effectively infinite persistence (when $d = 0.5$). Note how the asymptote at very low frequencies (in the SDF plot) is associated with the persistence at very large lags (in the ACF plot). Because of the scaling factor, the plot of the SDFs seemingly indicates that the SDFs with $d > 0$ 'cut' the x-axis, although in fact this is not the case. All SDFs exhibit an exponentially decaying pattern when plotted on a separate scale. The higher the value of d, the higher the asymptote near frequency zero and the faster the SDF decays.
We can see the relationship between the auto-covariance and the spectrum: the ACF decays toward zero only at very long lags, which correspond to very low frequencies (the observations are separated by a great time distance, i.e., the wavelength of the periodic signal becomes very long). This reminds us that the spectrum is simply a "representation" of the autocorrelation function in the frequency domain. In addition, Figure 1 shows that the higher the degree of long-memory (the higher the d parameter), the larger the spectrum will be as $f \to 0$. It is well established that both a slowly decaying auto-correlation and a spectrum unbounded at the origin independently characterize long-memory behavior (see, e.g., [37,38]). In line with these authors, [39] identifies the pattern in which power concentrates at low frequencies and declines exponentially as frequency increases, such as the one in the top plot of Figure 1, as the "typical shape" of an economic variable. An important remark that follows from this observation is that, since the periodogram is very high at low frequencies, it is the low-frequency components of a long-memory process that contribute the most to the dynamics of the whole process. For our purposes, this means that to understand the underlying mechanism of the risk process, emphasis needs to be placed on the activities of investors with long trading horizons rather than the day-to-day, noisy activities of, for example, market makers.
To avoid the burden of computing the exact likelihood, [2] utilize an approximation to the covariance matrix obtained via the discrete wavelet transformation (hereafter, the DWT). Let $\{X_t\}$ be a fractional difference process with dyadic length $N = 2^J$ and covariance matrix $\Sigma_X$; the likelihood function is defined as:
$L(d, \sigma_z^2 | X) = (2\pi)^{-N/2} |\Sigma_X|^{-1/2} \exp\left[-\frac{1}{2} X^T \Sigma_X^{-1} X\right],$
where $|\Sigma_X|$ denotes the determinant of $\Sigma_X$. Furthermore, we have the approximate covariance matrix $\Sigma_X \approx \hat{\Sigma}_X = W^T \Omega_N W$, where $W$ is the orthonormal matrix representing the DWT and $\Omega_N$ is a diagonal matrix containing the variances of the DWT coefficients. The approximate likelihood function and the corresponding (rescaled negative) log-likelihood are:
$\hat{L}(d, \sigma_z^2 | X) = (2\pi)^{-N/2} |\hat{\Sigma}_X|^{-1/2} \exp\left[-\frac{1}{2} X^T \hat{\Sigma}_X^{-1} X\right]$
$\hat{l}(d, \sigma_z^2 | X) \equiv -2 \log \hat{L}(d, \sigma_z^2 | X) - N \log(2\pi) = \log(|\hat{\Sigma}_X|) + X^T \hat{\Sigma}_X^{-1} X. \quad (1)$
As noted earlier, [1] introduced the wavelet variance $S_j$ for scale $\lambda_j$, which satisfies $S_j(d, \sigma_z^2) = \sigma_z^2 S_j(d)$. The properties of diagonal and orthonormal matrices allow us to rewrite the approximate log-likelihood function in Equation (1) as:
$\hat{l}(d, \sigma_z^2 | X) = N \log(\sigma_z^2) + \log[S_{J+1}(d)] + \sum_{j=1}^{J} N_j \log[S_j(d)] + \frac{1}{\sigma_z^2}\left[\frac{v_J^T v_J}{S_{J+1}(d)} + \sum_{j=1}^{J} \frac{w_j^T w_j}{S_j(d)}\right], \quad (2)$
where $w_j$ denotes the vector of $N_j$ level-$j$ wavelet coefficients and $v_J$ the vector of level-$J$ scaling coefficients.
The maximum likelihood procedure requires us to find the values of d and $\sigma_z^2$ that minimize the log-likelihood function. To do this, we differentiate Equation (2) with respect to $\sigma_z^2$, set the derivative to zero, and obtain the MLE of $\sigma_z^2$:
$\hat{\sigma}_z^2 = \frac{1}{N}\left[\frac{v_J^T v_J}{S_{J+1}(d)} + \sum_{j=1}^{J} \frac{w_j^T w_j}{S_j(d)}\right].$
Finally, we substitute this value into Equation (2) to obtain the reduced log-likelihood, which is a function of the parameter d alone:
$\hat{l}(d | X) = N \log(\hat{\sigma}_z^2) + \log[S_{J+1}(d)] + \sum_{j=1}^{J} N_j \log[S_j(d)].$
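The following sketch outlines how the reduced log-likelihood above can be minimized numerically. It is our illustration, not the authors' code: PyWavelets and SciPy are assumed to be available, the per-level variances $S_j(d)$ are replaced by the standard band-pass (octave-integral) approximation, DWT boundary coefficients are not treated specially, and the 8-tap least-asymmetric filter 'sym4' is used only in the spirit of the LA8 filter mentioned below.

```python
# A minimal sketch of the approximate wavelet MLE of the fractional difference
# parameter d, under simplifying assumptions (band-pass approximation to the
# per-level wavelet variances; no special handling of boundary coefficients).
import numpy as np
import pywt
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def fdp_sdf(f, d):
    """Spectral density of an FDP with unit innovation variance: |2 sin(pi f)|^(-2d)."""
    return np.abs(2.0 * np.sin(np.pi * f)) ** (-2.0 * d)

def detail_variance(j, d):
    """Approximate variance of level-j wavelet coefficients, band [1/2^(j+1), 1/2^j]."""
    val, _ = quad(fdp_sdf, 1.0 / 2 ** (j + 1), 1.0 / 2 ** j, args=(d,))
    return 2.0 ** (j + 1) * val

def scaling_variance(J, d):
    """Approximate variance of level-J scaling coefficients, band [0, 1/2^(J+1)]."""
    val, _ = quad(fdp_sdf, 0.0, 1.0 / 2 ** (J + 1), args=(d,), limit=200)
    return 2.0 ** (J + 1) * val

def reduced_loglik(d, coeffs, N):
    """Profile log-likelihood of Equation (2) with sigma_z^2 concentrated out."""
    vJ, details = coeffs[0], coeffs[1:]                 # pywt order: [cA_J, cD_J, ..., cD_1]
    J = len(details)
    S = [detail_variance(J - k, d) for k in range(J)]   # S_j matching cD_J, ..., cD_1
    SJ1 = scaling_variance(J, d)
    quad_form = np.sum(vJ ** 2) / SJ1 + sum(np.sum(w ** 2) / s for w, s in zip(details, S))
    sigma2_hat = quad_form / N                          # MLE of sigma_z^2 given d
    return (N * np.log(sigma2_hat) + np.log(SJ1)
            + sum(len(w) * np.log(s) for w, s in zip(details, S)))

def wave_mle(x, wavelet="sym4", level=None):
    """Return the estimate of d (so that H = d + 0.5) for a dyadic-length series x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    coeffs = pywt.wavedec(x, wavelet, level=level)
    res = minimize_scalar(reduced_loglik, bounds=(-0.49, 0.49),
                          args=(coeffs, len(x)), method="bounded")
    return res.x
```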
As an illustration, we apply the wavelet MLE to our simulated fGn dataset with H = 0.7 and to the volatility series of the S&P500 index (proxied by daily absolute returns). Because a dyadic-length signal is crucial for this experiment, we obtain daily data ranging from 6 February 1981 to 31 July 2013 (from http://finance.yahoo.com), for a total of $8192 = 2^{13}$ working days. In addition, we set the number of simulated fGn observations equal to 8192 for comparison. We chose an LA8 wavelet with the decomposition depth set to 13 levels. Figure 2 summarizes our results. Because the actual values of the SDF are very small, we substitute them with their base-10 logarithmic transformation to make the plot visually clear. Estimates of d are 0.2435 and 0.2444 (corresponding to H = 0.7435 and H = 0.7444) for the simulated fGn and S&P500 daily risk processes, respectively. The corresponding values of $\hat{\sigma}_z^2$ (the residuals' variance) are 0.8359 and $6.2236 \times 10^{-5}$. Subsequently, we have the corresponding time series models:
$(1 - L)^{0.2435} X_t = z_t$ with $z_t \sim i.i.d.\ N(0, 0.8359)$
$(1 - L)^{0.2444} |r_t| = u_t$ with $u_t \sim i.i.d.\ N(0, 6.2236 \times 10^{-5})$.
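As a quick plausibility check, the two hypothetical sketches above can be combined to simulate a dyadic-length FDP and recover its d (illustrative names only; none of these functions come from the paper):

```python
# Illustrative only: reuses the hypothetical simulate_fdp and wave_mle sketches above.
x = simulate_fdp(8192, d=0.25, seed=1)   # 2^13 observations, matching the dyadic length used in the text
d_hat = wave_mle(x)
print(d_hat, d_hat + 0.5)                # estimate of d and the implied Hurst exponent H
```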
Figure 2. Spectral density functions (on a $\log_{10}$ scale) of a fractional Gaussian noise process and the daily volatility process of the S&P500. Notes: The highest frequency, 0.5, corresponds to the Nyquist frequency. In both plots the green line indicates the theoretical spectrum of a long-memory process with the value of d estimated using the waveMLE method.
To further demonstrate the ability of our estimator to capture long-memory behavior, for each case we plot the theoretical SDF of a fractional difference process with the parameter d set equal to the estimated value. Then, we fit this SDF (indicated by a green line) to the corresponding periodogram/spectral density function obtained from the data. In line with [1]'s findings, for both cases the two spectra are in good agreement in terms of overall shape, save for some random variation. However, we obtain a much smaller value of $\hat{\sigma}_z^2$ for the S&P500 series, so random variation is less severe in its case. In other words, the green line approximates the spectrum of the risk series better than in the case of the fGn. In summary, it can be concluded that this method is effective at detecting long-range dependence. The result also indicates that the daily S&P500 volatility series can be reasonably modelled as an fGn process, since the two have very similar long-memory parameters.

4. Estimators’ Performance Comparison

In Section 2 and Section 3, we introduced a number of estimation methods for the long-memory parameter. In this section, we compare their performance, especially against the wavelet-based MLE. We follow [34]'s framework for comparing the performance of the described methodologies: first, we simulate N = 500 realizations of the long-memory processes [i.e., fBm, fGn, FARIMA (0, d, 0) and ARFIMA (1, d, 1)], each realization having a length of 10,000 and generated with the Hurst exponent set to H = 0.7. We then apply all nine estimators to each realization, obtaining a sample of 500 estimates of H for each of the methods.
Next, we compute the sample mean, standard deviation and the square root of the mean squared error (MSE) for each sample as follows:
$\bar{H} = \frac{1}{N} \sum_{n=1}^{N} H_n,$
$\hat{\sigma} = \sqrt{\frac{1}{N-1}\left[\sum_{n=1}^{N} H_n^2 - \frac{1}{N}\left(\sum_{n=1}^{N} H_n\right)^2\right]},$
$MSE = \frac{1}{N} \sum_{n=1}^{N} (H_n - H)^2,$
where $H_n$ is the estimate of the Hurst index obtained from the n-th (simulated) realization of each process in each sample. Similar to conventional estimation techniques, here the standard deviation indicates the precision of the estimator, while the mean squared error measures its performance by comparing the estimates with the nominal value. We repeat this procedure with $H = 0.5$ and $H = 0.9$ (approximately representing the lower and upper bounds of the Hurst exponent for the stationary long-memory class). For comparative purposes, we also estimate H for a sample of 50 realizations ($N = 50$), as done by [34]. Note that the MSE incorporates an indicator of bias within our simulated samples, which can be expressed as $MSE = \text{sample variance} + \text{bias}^2 = \hat{\sigma}^2 + (\bar{H} - H)^2$, so that the estimator yielding the smallest value of MSE is considered to have the best performance.
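The summary statistics above reduce to a few lines of code; the sketch below (illustrative names only; the decomposition check uses the population variance, for which the identity is exact) also verifies the variance-plus-squared-bias identity stated in the text.

```python
# A minimal sketch of the Monte Carlo summary statistics used to compare estimators.
import numpy as np

def summarize_estimates(H_hat, H_true):
    """Mean, sample standard deviation and MSE of a vector of Hurst estimates."""
    H_hat = np.asarray(H_hat, dtype=float)
    mean = H_hat.mean()
    std = H_hat.std(ddof=1)
    mse = np.mean((H_hat - H_true) ** 2)
    # Decomposition used in the text: MSE = variance + bias^2
    # (exact with ddof=0; the N vs N-1 difference is negligible for N = 500).
    assert np.isclose(mse, H_hat.var(ddof=0) + (mean - H_true) ** 2)
    return mean, std, mse
```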
The results for the simulation experiment with the fGn and FARIMA (0, d, 0) (for which $\hat{H} = \hat{d} + 0.5$) are reported in Table 2. There are several observations that can be made from this table:
Table 2. Performance comparison for different long-memory estimators.
  • All of the proposed methods seem to estimate parameter H effectively, in that they can detect the dependence structure of the simulated time series, with relatively small standard errors.
  • The values of H ¯ , σ ^ and M S E do not differ significantly between the cases of N = 50 and N = 500.
  • The rescaled range method yields the least desirable performance in all simulated experiments. This is in contrast to many previous studies supporting the use of this method, yet it is in line with skeptics such as [34].
When examining Table 2, we can see that the wavelet-based MLE performs relatively robustly compared to several other methods. In particular, in the case of the simulated fGn, when $H = 0.5$ (i.e., when the process becomes a Brownian motion) waveMLE is superior. For a "typical" long-memory process ($H = 0.7$) waveMLE ranks third, and for a process exhibiting extreme long-range dependence ($H = 0.9$) this estimator ranks second. When it comes to the FARIMA (0, d, 0) process, waveMLE performs best in all cases. We illustrate these arguments by presenting the rankings based on the MSE (with $N = 500$) in Table 3.
Table 3. Performance of long-memory estimators.
Additionally, when $H = 0.5$ and $H = 0.7$ (corresponding to the values generally expected to be observed in financial return and financial risk time series, respectively) waveMLE provides the smallest value of MSE on average, which is also smaller than those obtained when estimating $H = 0.9$ (a value unlikely to be observed). Furthermore, in cases where waveMLE is not the best estimator, the difference between its performance and that of the best estimator is not significant. For example, in the case of the fGn with $H = 0.7$ and $H = 0.9$ this difference is on the order of 0.1%. To see how important this is, consider the performance of the Peng method (which outperforms waveMLE in these cases): when the Peng method is not the best, the difference between its performance and the best estimator (waveMLE) is on the order of 1%.
It can be concluded that waveMLE is rarely beaten by another estimator, and when it is, the margin is small. Nevertheless, with all evidence clearly in favor of waveMLE, in practice we still need to take into account the main limitation of this seemingly superior estimator, namely that it can only be applied to time series of dyadic length.

5. Application to Fossil Fuel Prices

In this section, we apply the methodology discussed in Section 3 to examine the characteristics of fuel price time series. Our data are the spot prices of six commodities: crude oil [West Texas Intermediate (WTI)], crude oil (Brent, Europe), diesel fuel (Los Angeles Ultra-Low-Sulfur No. 2), gasoline (New York Harbor conventional), heating oil (New York Harbor No. 2), and natural gas (Henry Hub). The data are provided by [40] and range from 9 January 1997 to 22 April 2019. We compute daily spot returns as $r_{it} = \log p_{it} - \log p_{i,t-1}$ ($t = 1, \ldots, T$), where $p_{it}$ denotes the spot price of commodity i and $T = 5535$ denotes the number of observations. The corresponding volatility series are computed as 30-day rolling standard deviations of returns, i.e., $\sigma_{it} = \sqrt{\frac{1}{30} \sum_{j=2}^{31} \left[(\log p_{i,t+j} - \log p_{i,t+j-1}) - \bar{r}_{it}\right]^2}$, where $\bar{r}_{it} = \frac{1}{30} \sum_{j=2}^{31} (\log p_{i,t+j} - \log p_{i,t+j-1})$ is the rolling average return. As can be seen in Figure 3, of the six commodity returns examined, natural gas seems to yield the most volatile returns, at times exceeding 40%. This is also shown in the corresponding volatility plot presented in the bottom-right panel of Figure 4. Visual inspection of Figure 3 reveals that the returns time series exhibit characteristics of white-noise processes.
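For concreteness, a minimal sketch of this construction follows (our illustration; pandas is assumed, the column and file names are hypothetical, and a trailing 30-day window is used rather than the forward-indexed window written above).

```python
# A minimal sketch of the return and 30-day rolling-volatility construction.
# Illustrative only: column/file names are hypothetical.
import numpy as np
import pandas as pd

def returns_and_volatility(prices: pd.Series, window: int = 30):
    """Log returns r_t = log p_t - log p_{t-1} and their rolling standard deviation."""
    r = np.log(prices).diff().dropna()
    vol = r.rolling(window).std(ddof=0)     # rolling volatility proxy
    return r, vol

# Hypothetical usage with a CSV of daily WTI spot prices:
# wti = pd.read_csv("wti_spot.csv", index_col=0, parse_dates=True)["price"]
# r_wti, vol_wti = returns_and_volatility(wti)
```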
Figure 3. Daily returns of energy commodities, 1997–2019. Notes: Returns of commodity i are computed as the percentage change of the corresponding daily spot price: $100 \times [\exp(\log p_{it} - \log p_{i,t-1}) - 1] = 100 \times (p_{it} - p_{i,t-1})/p_{i,t-1}$, where $p_{it}$ denotes the spot price of commodity i. To facilitate visualization, the vertical range is truncated at [−40, 40] (%).
Figure 4. Volatilities of energy commodities, 1997–2019. Notes: Volatilities are defined as the 30-day rolling standard deviation of the log-change in prices, i.e., $\sigma_{it} = \sqrt{\frac{1}{30} \sum_{j=2}^{31} \left[(\log p_{i,t+j} - \log p_{i,t+j-1}) - \bar{r}_{it}\right]^2}$, where $\bar{r}_{it} = \frac{1}{30} \sum_{j=2}^{31} (\log p_{i,t+j} - \log p_{i,t+j-1})$ is the rolling average return and $p_{it}$ denotes the spot price of commodity i.
The summary statistics for the return and volatility time series are reported in Table 4. We can see that all return series have a zero mean and a symmetric, leptokurtic distribution (the Jarque-Bera statistics indicate that the null of a normal distribution is strongly rejected). Volatilities have non-zero means and yield even stronger evidence of serial correlation, as the null of independent observations is strongly rejected by the portmanteau Ljung-Box statistics. These dependence patterns over time imply that the energy spot markets may not be efficient.
Table 4. Summary statistics of returns and volatilities.
In Table 5, we provide the results of the tests for stationarity of returns and volatilities. As can be seen from panel A, all return series exhibit stationarity, since the null of a unit root can be rejected strongly at the 1% significance level (using the Augmented Dickey-Fuller (ADF) [41] and Phillips-Perron (PP) [42] tests), while the null of stationarity cannot be rejected at conventional significance levels (using the Kwiatkowski et al. (KPSS) [43] test). Results for volatilities, on the other hand, are mixed: while the ADF and PP tests rule out the existence of unit roots, the KPSS test shows evidence of non-stationarity. A possible explanation for the contrasting results is that the long-memory of volatilities lowers the power of these tests. For example, a time series with long-range dependence, as opposed to the infinite dependence of a random walk, may still be stationary.
Table 5. Stationarity tests for returns and volatilities.
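The battery of tests reported in Tables 4 and 5 can be reproduced with standard libraries; the sketch below is our illustration (recent versions of statsmodels and SciPy are assumed; the Phillips-Perron test is omitted because it is not part of statsmodels).

```python
# A minimal sketch of the diagnostic and stationarity tests reported in Tables 4 and 5.
from scipy.stats import jarque_bera
from statsmodels.tsa.stattools import adfuller, kpss
from statsmodels.stats.diagnostic import acorr_ljungbox

def diagnostics(x, lb_lags=20):
    jb_stat, jb_p = jarque_bera(x)                          # H0: normality
    lb = acorr_ljungbox(x, lags=[lb_lags])                  # H0: no serial correlation
    lb_p = float(lb["lb_pvalue"].iloc[0])
    adf_p = adfuller(x, autolag="AIC")[1]                   # H0: unit root
    kpss_p = kpss(x, regression="c", nlags="auto")[1]       # H0: (level) stationarity
    return {"JB p-value": jb_p, "Ljung-Box p-value": lb_p,
            "ADF p-value": adf_p, "KPSS p-value": kpss_p}
```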
Figure 5 sheds light on this issue by presenting the auto-correlation functions (ACF) of the volatility series. The auto-correlation coefficients are all highly significant even at the 200-day lag (i.e., close to a full trading year). Based on these observations, in Table 6 we present the estimated Hurst exponents for the six return and volatility series. All returns yield values in the proximity of 0.4–0.5, while volatilities' values are in the range of 0.7–0.8. According to the implications of the Hurst index for the statistical properties of time series (see Table 1), these results support our observation that energy price returns exhibit the behavior of white-noise processes while volatilities show evidence of long-memory, which is in agreement with observations previously made in the literature (see, e.g., [18,23]).
Figure 5. Auto-correlograms of volatility of energy commodities, 1997–2019. Notes: Returns of commodity i are computed as the log-change of the corresponding daily spot price: $r_{it} = \log p_{it} - \log p_{i,t-1}$. Volatility is computed as the 30-day rolling standard deviation of $r_{it}$.
Table 6. Estimates of Hurst exponents with waveMLE.

6. Conclusions, Implications and Qualifications

Burning fossil fuels, the main source of man-made carbon dioxide, is the most important cause of climate change. Contributing to this issue are recently lower energy prices and the quickly rising share of the middle class in developing economies, which significantly lift global energy consumption via the use of personal transportation vehicles [44,45]. Given the importance of prices in determining the supply and demand of energy commodities, it is crucial to develop tools to measure the risk associated with the markets for these commodities, to better understand what ultimately affects progress on reducing the adverse effects of climate fluctuations.
This paper contributes to the existing literature by examining various estimation methods for long-memory in price returns and volatilities to assess energy market efficiency. We first investigate the performance of the various conventional methods reviewed by [34] and the wavelet-based maximum likelihood estimator (waveMLE) proposed by [2]. Our simulations indicate that waveMLE is superior to the majority of other methods. Applications of waveMLE support white-noise behavior for returns and long-range dependence for volatility. Our findings offer market participants an interesting opportunity to exploit potential inefficiency in the energy markets. For example, policy makers can prepare to provide stricter price regulation when there are volatility spikes that tend to be prolonged, while traders can design optimal hedging strategies based on forecasts of future volatilities that incorporate long-memory.
Our findings are subject to two qualifications. First, potential structural breaks in fossil fuel prices are not covered in our analyses. However, given the robust evidence of long-memory detected for volatilities in previous studies even in the presence of such breaks (see, e.g., [16,17,18]), we conjecture that these breaks are not a major source of bias for our primary findings. Second, we have not investigated the forecasting value of our estimators, which could potentially offer further important implications for the understanding of energy market dynamics. This will be an interesting future research direction.

Author Contributions

Analyses are conducted by L.H.V. Original draft is prepared by D.H.V. Applied and extended analyses are conducted by L.H.V. and D.H.V. Review and editing are done by all authors.

Funding

Part of this research is an updated version of studies carried out when Long Vo was a Master's student at the School of Economics and Finance, Victoria University of Wellington, where he received financial support from a New Zealand Ministry of Foreign Affairs and Trade New Zealand-ASEAN Scholar Award. Duc Vo acknowledges the financial assistance from Ho Chi Minh City Open University, Vietnam.

Acknowledgments

The authors would like to acknowledge the constructive comments from two anonymous referees.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hurst, H. Long term storage capacity of reservoirs. Trans. Am. Soc. Civ. Eng. 1951, 116, 770–799. [Google Scholar]
  2. Gencay, R.; Selcuk, F.; Whitcher, B. An Introduction to Wavelets and Other Filtering Methods in Finance and Economics; Academic Press: San Diego, CA, USA, 2002. [Google Scholar]
  3. Samuelson, P.A. Proof that Properly Anticipated Prices Fluctuate Randomly. Ind. Manag. Rev. 1965, 6, 41–49. [Google Scholar]
  4. Fama, E.F. The Behavior of stock market prices. J. Bus. 1965, 38, 34–105. [Google Scholar] [CrossRef]
  5. Lo, A.W. Long-term memory in stock market prices. Econometrica 1991, 59, 1279–1313. [Google Scholar] [CrossRef]
  6. Jennergren, L.P.; Korsvold, P.E. Price formation in the Norwegian and Swedish stock markets: Some random walk tests. Swedish J. Econ. 1974, 76, 171–185. [Google Scholar] [CrossRef]
  7. Cootner, P.H. Common Elements in Futures Markets for Commodities and Bonds. Am. Econ. Rev. 1961, 51, 173–183. [Google Scholar]
  8. Fama, E.F.; French, K.R. Permanent and temporary components of stock prices. J. Polit. Econ. 1988, 96, 246–273. [Google Scholar] [CrossRef]
  9. Ding, Z.; Granger, C.W.J.; Engle, R.F. A long memory property of stock market returns and a new model. J. Empir. Finance 1993, 1, 83–106. [Google Scholar] [CrossRef]
  10. Andersen, T.G.; Bollerslev, T. Heterogeneous information arrivals and return volatility dynamics: Uncovering the long run in high frequency returns. J. Finance 1997, 52, 975–1005. [Google Scholar] [CrossRef]
  11. Mandelbrot, B.; Wallis, J.R. Noah, Joseph, and operational hydrology. Water Resources Res. 1968, 4, 909–918. [Google Scholar] [CrossRef]
  12. Peters, E. Chaos and Order in the Capital Markets: A New View of Cycles, Prices, and Market Volatility; John Wiley & Sons: New York, NY, USA, 1996. [Google Scholar]
  13. Lo, A.W.; MacKinlay, C.A. A Non-Random Walk Down Wall Street; Princeton University Press: Princeton, NJ, USA, 1999. [Google Scholar]
  14. Hull, M.; McGroarty, F. Do emerging markets become more efficient as they develop? Long memory persistence in equity indices. Emerging Markets Rev. 2013, 18, 45–61. [Google Scholar] [CrossRef]
  15. Grossman, S.J.; Stiglitz, J.E. On the impossibility of informationally efficient markets. Am. Econ. Rev. 1980, 70, 393–408. [Google Scholar]
  16. Lo, A.W. The adaptive markets hypothesis: Market efficiency from an evolutionary perspective. J. Portfolio Manag. 2004, 30, 15–29. [Google Scholar] [CrossRef]
  17. Choi, K.; Hammoudeh, S. Long memory in oil and refined products markets. Energy J. 2009, 30, 97–116. [Google Scholar] [CrossRef]
  18. Baillie, R.; Han, Y.-W.; Myers, R.; Song, J. Long memory models for daily and high frequency commodity futures returns. J. Futures Markets 2007, 27, 643–668. [Google Scholar] [CrossRef]
  19. Arouri, M.E.H.; Lahiani, A.; Lévy, A.; Nguyen, D.K. Forecasting the conditional volatility of oil spot and futures prices with structural breaks and long memory models. Energy Econ. 2012, 34, 283–293. [Google Scholar] [CrossRef]
  20. Charfeddine, L. True or spurious long memory in volatility: Further evidence on the energy futures markets. Energy Policy 2014, 71, 76–93. [Google Scholar] [CrossRef]
  21. Wang, Y.; Wu, C. Long memory in energy futures markets: Further evidence. Resources Policy 2012, 37, 261–272. [Google Scholar] [CrossRef]
  22. Di Sanzo, S. A Markov switching long memory model of crude oil price return volatility. Energy Econ. 2018, 74, 351–359. [Google Scholar] [CrossRef]
  23. Nademi, A.; Nademi, Y. Forecasting crude oil prices by a semiparametric Markov switching model: OPEC, WTI, and Brent cases. Energy Econ. 2018, 74, 757–766. [Google Scholar] [CrossRef]
  24. Dieker, A.B.; Mandjes, M. On spectral simulation of fractional Brownian motion. Probab. Eng. Informat. Sci. 2003, 17, 417–434. [Google Scholar] [CrossRef]
  25. Cox, D.R. Long-range Dependence: A Review. In Statistics: An Appraisal, Proceedings of a Conference Marking the 50th Anniversary of the Statistical Laboratory, Iowa State University, Ames, Iowa, 13–15 June 1983; David, H.A., David, H.T., Eds.; Iowa State University Press: Ames, IA, USA, 1984; pp. 55–74. [Google Scholar]
  26. Beran, J. Statistics for Long-Memory Processes; Chapman and Hall: New York, NY, USA, 1994. [Google Scholar]
  27. Mandelbrot, B.; Van Ness, J.W. Fractional Brownian motions, fractional noises and applications. SIAM Rev. 1968, 10, 422–437. [Google Scholar] [CrossRef]
  28. Mitra, S.K. Is Hurst exponent value useful in forecasting financial time series? Asian Soc. Sci. 2012, 8, 111–120. [Google Scholar] [CrossRef]
  29. Mandelbrot, B. Statistical methodology for nonperiodic cycles: From the covariance to R/S analysis. In Proceedings of the Annals of Economic and Social Measurement; National Bureau of Economic Research: Stanford, CA, USA, 1972; Volume 1, pp. 259–290. [Google Scholar]
  30. Willinger, W.; Taqqu, M.S.; Teverovsky, V. Stock market prices and long-range dependence. Finance Stochastics 1999, 3, 1–13. [Google Scholar] [CrossRef]
  31. Teverovsky, V.; Taqqu, M. Testing for long-range dependence in the presence of shifting means or a slowly declining trend, using a variance-type estimator. J. Time Ser. Anal. 1997, 18, 279–304. [Google Scholar] [CrossRef]
  32. Higuchi, T. Approach to an irregular time series on the basis of the fractal theory. Phys. D Nonlinear Phenomena 1988, 31, 277–283. [Google Scholar] [CrossRef]
  33. Peng, C.K.; Buldyrev, S.V.; Havlin, S.; Simons, M.; Stanley, H.E.; Goldberger, A.L. Mosaic organization of DNA nucleotides. Phys. Rev. E 1994, 49, 1685–1689. [Google Scholar] [CrossRef]
  34. Taqqu, M.S.; Teverovsky, V.; Willinger, W. Estimators for long-range dependence: An empirical study. Fractals 1995, 3, 785–798. [Google Scholar] [CrossRef]
  35. In, F.; Kim, S. An Introduction to Wavelet Theory in Finance: A Wavelet Multiscale Approach; World Scientific: Singapore, 2013. [Google Scholar]
  36. Jensen, M.J. An alternative maximum likelihood estimator of long-memory processes using compactly supported wavelets. J. Econ. Dyn. Control 2000, 24, 361–387. [Google Scholar] [CrossRef]
  37. McLeod, B.; Hipel, K. Preservation of the rescaled adjusted range. Water Resources Res. 1978, 14, 491–518. [Google Scholar] [CrossRef]
  38. Resnick, S. Extreme Values, Regular Variation, and Point Processes; Springer: New York, NY, USA, 2007. [Google Scholar]
  39. Granger, C.W.J. The typical spectral shape of an economic variable. Econometrica 1966, 34, 150–161. [Google Scholar] [CrossRef]
  40. United States Energy Information Administration. Petroleum and other liquids. Available online: https://www.eia.gov/petroleum/data.php (accessed on 18 May 2019).
  41. Dickey, D.A.; Fuller, W.A. Distributions of the estimators for autoregressive time series with a unit root. J. Am. Stat. Assoc. 1979, 74, 427–431. [Google Scholar]
  42. Phillips, P.C.B.; Perron, P. Testing for a unit root in time series regression. Biometrika 1988, 75, 335–346. [Google Scholar] [CrossRef]
  43. Kwiatkowski, D.; Phillips, P.C.B.; Schmidt, P.; Shin, Y. Testing the null hypothesis of stationarity against the alternative of a unit root. J. Econometr. 1992, 54, 159–178. [Google Scholar] [CrossRef]
  44. Montgomery, S.L. Cheap oil is blocking progress on climate change. The Conversation [online] 2018, 12. Available online: https://theconversation.com/cheap-oil-is-blocking-progress-on-climate-change-108450 (accessed on 28 April 2019).
  45. Office of Energy Efficiency and Renewable Energy. About Two-Thirds of Transportation Energy Use is Gasoline for Light Vehicles; Office of Energy Efficiency and Renewable Energy: Washington, DC, USA, 2013.
