Exponential Time Trends in a Fractional Integration Model

This paper introduces a new modelling approach that incorporates nonlinear, exponential deterministic terms into a fractional integration framework. The proposed test is more general than standard fractional integration methods, which allow only for linear trends. Its limiting distribution is standard normal, and Monte Carlo simulations show that it performs well in finite samples. Three empirical examples confirm that the suggested specification captures the properties of the data adequately.


Introduction
It is common practice in applied work to allow for simple linear deterministic trends when modelling standard economic and financial series (Bhargava 1986; Stock and Watson 1988; Schmidt and Phillips 1992). However, some of these series appear to be characterised by exponential growth, as in the case of compound interest. An exponential growth trend can be captured by taking logs of the series of interest and regressing the data against a constant and a linear trend. However, fitting a linear trend with a constant growth rate is in most cases too restrictive. Alternatively, the raw data can be used to run a regression including an exponential time trend as well as a constant. The present paper takes the latter approach, based on exponential trends, and develops an appropriate modelling and testing framework in the context of fractional integration, with a test statistic that has a standard normal asymptotic distribution. The proposed fractional integration model belongs to the category of long-memory processes; its defining feature is that the number of differences required to make a series stationary with short memory (e.g., a white noise or a stationary ARMA process) is a non-integer positive value. In most applications of such models, a linear trend is considered, and most departures from this specification take the form of special non-linear deterministic structures such as those produced by Chebyshev polynomials in time or Fourier functions. This paper considers exponential trends instead and employs simulation techniques to evaluate the finite-sample properties of the proposed test; it also presents three empirical applications to show that the advocated framework captures well the behaviour of the data. Modelling exponential trends in a fractional integration framework is a novel contribution, and the suggested approach is a practical tool for economic and financial series that possibly exhibit long memory as well as exponential deterministic trends.
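To make the contrast between the two approaches concrete, the sketch below fits the raw-data specification y_t = α + β·t^γ + error by nonlinear least squares, rather than regressing log data on a linear trend. It is our own illustration, not code from this paper: the data, seed, and parameter values (α = 0.2, β = 0.4, γ = 0.8) are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
T = 500
t = np.arange(1, T + 1, dtype=float)

# Hypothetical data with an exponential-type trend: y_t = alpha + beta * t**gamma + noise
alpha, beta, gamma = 0.2, 0.4, 0.8
y = alpha + beta * t**gamma + rng.normal(scale=1.0, size=T)

def trend(t, a, b, g):
    # Deterministic part of the specification: a + b * t**g
    return a + b * t**g

# Nonlinear least squares on the raw data (no logs taken)
params, _ = curve_fit(trend, t, y, p0=(0.0, 1.0, 1.0))
a_hat, b_hat, g_hat = params
```

Fitting on the raw data leaves the exponent γ free, whereas the log-plus-linear-trend route imposes a constant growth rate from the outset.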
The structure of this paper is as follows. Section 2 presents the proposed framework and testing procedure along with its asymptotic distribution, which is standard normal. Section 3 reports some Monte Carlo simulation results to assess the finite sample behaviour of the suggested test. Section 4 discusses three empirical applications. Section 5 offers some concluding remarks.

The Model
We consider a time series {y_t, t = 1, 2, . . .} for which the following regression model is specified:

y_t = α + β t^γ + x_t, t = 1, 2, . . ., (1)

where α, β and γ are unknown parameters (the intercept, the time trend coefficient and its exponent, respectively); in addition, x_t is assumed to be an integrated process of order d, i.e.,

(1 − B)^d x_t = u_t, t = 1, 2, . . ., (2)

where d can be any real scalar value, B is the backshift operator, i.e., B^k x_t = x_{t−k}, and u_t is thus an I(0) process, more precisely a covariance-stationary one with a spectral density function that is positive and bounded at all frequencies in the spectrum. Thus, u_t might be a white noise process, but it might also display a weakly autocorrelated structure, as in autoregressive moving average (ARMA) processes.
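As a minimal illustration of the operator (1 − B)^d in (2), the following sketch (our own code, not the authors') expands it with the usual binomial weights and applies it to a short series; setting d = 1 reproduces ordinary first differences (with the first observation kept as-is), and d = 0 leaves the series unchanged.

```python
import numpy as np

def frac_diff(x, d):
    """Apply (1 - B)**d to a series via the truncated binomial expansion.

    The weights follow the recursion pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j,
    so that (1 - B)**d x_t = sum_j pi_j x_{t-j}.
    """
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    # Convolve the weights with the series, truncating at the sample start
    return np.array([np.dot(w[: t + 1], x[t::-1]) for t in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0])
diffed = frac_diff(x, 1.0)  # d = 1: ordinary first differences
```

For non-integer d the weights decay hyperbolically rather than cutting off, which is what generates the long-memory behaviour described above.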
We test the null hypothesis:

H_0: d = d_0, (3)

for any real value d_0 in the model given by (1) and (2), through choosing specific values for γ, for example between 0 and 2, with 0.01 increments. Under the null hypothesis (3), the model given by (1) and (2) becomes:

ỹ_t = α 1̃_t + β t̃_t + u_t, t = 1, 2, . . ., (4)

where ỹ_t = (1 − B)^{d_0} y_t, 1̃_t = (1 − B)^{d_0} 1, and t̃_t = (1 − B)^{d_0} t^γ, and u_t is still an I(0) process. Since the value of γ is set, one can follow the same strategy as in Robinson (1994), and the test statistic is then given by:

r̂ = (T / Â)^{1/2} â / σ̂², (5)

where T is the sample size and

â = (−2π/T) Σ* ψ(λ_j) g_u(λ_j; τ̂)^{−1} I(λ_j),

σ̂² = (2π/T) Σ_{j=1}^{T−1} g_u(λ_j; τ̂)^{−1} I(λ_j),

Â = (2/T) [ Σ* ψ(λ_j)² − Σ* ψ(λ_j) ε̂(λ_j)^T ( Σ* ε̂(λ_j) ε̂(λ_j)^T )^{−1} Σ* ε̂(λ_j) ψ(λ_j) ],

with ψ(λ_j) = log |2 sin(λ_j/2)| and ε̂(λ_j) = ∂ log g_u(λ_j; τ̂)/∂τ, where λ_j = 2πj/T and the summation Σ* in the above expressions is over all frequencies which are bounded in the spectrum. I(λ_j) is the periodogram of û_t, where û_t = ỹ_t − α̂ 1̃_t − β̂ t̃_t, with α̂ and β̂ the OLS estimates of α and β in (4), and τ̂ = arg min_{τ ∈ T*} σ̂²(τ), with T* as a suitable subset of the R^q Euclidean space. Finally, g_u is a known function coming from the spectral density of u_t, f_u(λ; σ², τ) = (σ²/2π) g_u(λ; τ). Note that this test is parametric and, therefore, it requires specific modelling assumptions about the short-memory specification of u_t. In particular, if u_t is a white noise, g_u ≡ 1, whilst if it is an AR process of the form φ(L)u_t = ε_t (with white noise ε_t), then g_u = |φ(e^{iλ})|^{−2}, with σ² = V(ε_t) and the AR coefficients being a function of τ.
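For the simplest case, in which u_t is white noise (so g_u ≡ 1) and there are no deterministic regressors, the statistic can be computed directly from the periodogram. The sketch below is our own compact reading of Robinson's (1994) formulae, not the authors' Fortran code; the seed, sample size, and test values are illustrative only. Under a true null the statistic is roughly standard normal, while an under-differenced series (e.g., a random walk tested at d_0 = 0) pushes it strongly positive.

```python
import numpy as np

def robinson_r(y, d0):
    """Sketch of a Robinson (1994)-type LM statistic for H0: d = d0,
    assuming white-noise u_t (g_u == 1) and no deterministic terms."""
    T = len(y)
    # Difference the data d0 times under the null using fractional weights
    w = np.empty(T)
    w[0] = 1.0
    for j in range(1, T):
        w[j] = w[j - 1] * (j - 1 - d0) / j
    u = np.array([np.dot(w[: t + 1], y[t::-1]) for t in range(T)])

    lam = 2 * np.pi * np.arange(1, T) / T                    # Fourier frequencies
    I = np.abs(np.fft.fft(u))[1:T] ** 2 / (2 * np.pi * T)    # periodogram of u
    psi = np.log(np.abs(2 * np.sin(lam / 2)))

    a_hat = -(2 * np.pi / T) * np.sum(psi * I)
    s2 = (2 * np.pi / T) * np.sum(I)
    A_hat = (2 / T) * np.sum(psi**2)
    return np.sqrt(T / A_hat) * a_hat / s2

rng = np.random.default_rng(42)
eps = rng.normal(size=500)
r_null = robinson_r(eps, d0=0.0)            # null true: roughly N(0, 1)
r_rw = robinson_r(np.cumsum(eps), d0=0.0)   # random walk tested at d0 = 0
```

The sign convention matches the one-sided rules described below: positive values of the statistic point towards d > d_0 and negative values towards d < d_0.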
In this context, Robinson (1994) showed that, for γ = 1,

r̂ →_d N(0, 1) as T → ∞,

where "→_d" stands for convergence in distribution. Therefore, unlike in the case of other (unit root/fractional) procedures, this is a classical large-sample testing situation. On the basis of (5), the null H_0 in (3) is rejected against the two-sided alternative H_a: d ≠ d_0 at the 100α% significance level when |r̂| > z_{α/2}. In addition, one-sided tests can be obtained against the alternatives H_a: d > d_0 (d < d_0) at the 100α% level when r̂ > z_α (r̂ < −z_α), where the probability that a standard normal variate exceeds z_α is α.
This result holds for any finite value of γ. Specifically, Robinson (1994) used the following regression model:

y_t = θ^T z_t + x_t, t = 1, 2, . . .,

where z_t is a (k × 1) observable vector whose elements are assumed to be non-stochastic, such as polynomials in t; for example, the null hypothesis of a unit root with drift is included if d_0 = 1 and z_t = (1, t)^T. According to Robinson: "The limiting null and local distributions of our test statistic are unaffected by the presence of such regressors. For simplicity, we treat only linear regression, but undoubtedly a nonlinear regression will also leave our limit distributions unchanged, under standard regularity conditions". These regularity conditions are described in his definition of the class G provided in Appendix A of that paper: G is the class of (k × 1) vector sequences {z_t, t = 0, ±1, . . .} such that z_t = 0 for t < 0 and such that the norming matrix D defined there is positive definite for sufficiently large T. G imposes no rate of increase on D; different elements can increase at different rates, and indeed D need not tend to infinity as T → ∞. If D is positive definite for T = T_0, then it is positive definite for all T > T_0.
In this context, the following theorem can be stated:

Theorem 1. Under the null hypothesis (3) in the model defined by Equations (1) and (2), with γ = γ_0, where γ_0 is the true value of the exponent of the trend, and under the condition:

0 < det(Ψ) < ∞, where Ψ = (1/2π) ∫_{−∞}^{∞} ψ(λ)² dλ, (10)

with det denoting the determinant, r̂ converges asymptotically in distribution to a standard normal:

r̂ →_d N(0, 1) as T → ∞.

Note that the right-hand-side inequality in (10) is not satisfied by the autoregressive (AR) alternatives, whilst it is by the fractional model in (2) (see the expression for ψ(λ_j) below (5), and Appendix A for the proof of this theorem).
As an alternative approach, one can compute the residual sum of squares for a set of values of γ and choose the one that minimises it. In such a case, under standard regularity conditions, the estimate should coincide with the one obtained with our method when choosing the value of d that minimises R in (11). In the empirical applications carried out in Section 4, we set γ = 0, 0.10, 0.20, . . ., 1.40, 1.50 (i.e., 0.10 increments), and in each case we estimated the differencing parameter by choosing the value for which the test statistic (based on Robinson 1994) is lowest in absolute value. The estimate of d was virtually identical to the frequency-domain Whittle estimate analysed in Robinson (1994), as R clearly depends on γ. Then, for each value of γ and the associated d, we computed the residual sum of squares and chose the pair producing the lowest statistic, R, in (7).
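A simplified version of this grid search can be sketched as follows. For brevity, the sketch fixes the fractional-differencing step aside and runs plain OLS at each candidate γ, picking the grid value with the lowest residual sum of squares; in the paper the search is joint over (γ, d). The data-generating values (γ = 0.8) and seed are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
t = np.arange(1, T + 1, dtype=float)
y = 0.2 + 0.4 * t**0.8 + rng.normal(size=T)   # hypothetical DGP with gamma = 0.8

best = None
for g in np.arange(0.0, 1.51, 0.10):          # grid: gamma = 0, 0.10, ..., 1.50
    X = np.column_stack([np.ones(T), t**g])   # regress y on a constant and t**g
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    # lstsq returns an empty residual array when X is rank-deficient (e.g., g = 0)
    rss = rss[0] if rss.size else np.sum((y - X @ beta) ** 2)
    if best is None or rss < best[1]:
        best = (g, rss)
gamma_hat = best[0]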
Next, we display some realisations of the model given by Equations (1) and (2). More specifically, we first generated a white noise process with sample size T = 1000 and produced time series for 1̃_t and t̃_t by setting different values for d_0 and γ. Then, ỹ_t was obtained from Equation (4) with α = 0.2 and β = 0.4, and fractional differences of order d_0 were taken after removing the first 100 observations.
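An equivalent route to generating such realisations works directly through Equation (1): fractionally integrate a white noise with (1 − B)^{−d}, drop a burn-in, and add the deterministic trend. The sketch below is our own illustrative code with hypothetical seed and parameter choices (d = 0.75, γ = 0.8), not the authors' generator.

```python
import numpy as np

def frac_integrate(u, d):
    """Apply (1 - B)**(-d) via the expansion weights a_0 = 1,
    a_j = a_{j-1} * (j - 1 + d) / j; d = 1 gives a cumulative sum."""
    n = len(u)
    a = np.empty(n)
    a[0] = 1.0
    for j in range(1, n):
        a[j] = a[j - 1] * (j - 1 + d) / j
    return np.array([np.dot(a[: t + 1], u[t::-1]) for t in range(n)])

rng = np.random.default_rng(7)
burn, T = 100, 1000
u = rng.normal(size=T + burn)                 # white-noise innovations
x = frac_integrate(u, d=0.75)[burn:]          # x_t ~ I(0.75), burn-in removed
t = np.arange(1, T + 1, dtype=float)
y = 0.2 + 0.4 * t**0.8 + x                    # Equation (1) with gamma = 0.8
```

Plotting y for different (d, γ) pairs reproduces the kinds of realisations described in the text, with the stochastic long-memory component superimposed on a concave or convex trend.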

Simulation Results
In this section, we examine the finite sample behaviour of the test statistic proposed above by means of Monte Carlo simulation techniques (the Fortran codes are available from the authors upon request). As data generating processes, we used the GASDEV and RAN3 routines from Press et al. (1986) to obtain Gaussian series for different sample sizes T = 100, 500, and 1000, carrying out 10,000 replications in each case; specifically, we used the model given by Equations (1) and (2) with α = 0.2, β = 0.4, and γ = 0.75, and tested the null hypothesis (3) with d_0 = 0.50; the reported results are for a nominal size of 5%. Using alternative values for α, β, γ, and d_0 produced almost identical results.
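A stripped-down version of such a size experiment can be written as follows. This is our own sketch, not the paper's Fortran code, and it simplifies heavily: d_0 = 0 (so no differencing is needed), white-noise errors with g_u ≡ 1, no deterministic terms, and far fewer replications than the paper's 10,000.

```python
import numpy as np

def lm_stat(u):
    """Robinson-type LM statistic for H0: d = 0 on an already-differenced
    series, assuming white-noise short-run dynamics (g_u == 1)."""
    T = len(u)
    lam = 2 * np.pi * np.arange(1, T) / T
    I = np.abs(np.fft.fft(u))[1:T] ** 2 / (2 * np.pi * T)   # periodogram
    psi = np.log(np.abs(2 * np.sin(lam / 2)))
    a_hat = -(2 * np.pi / T) * np.sum(psi * I)
    s2 = (2 * np.pi / T) * np.sum(I)
    A_hat = (2 / T) * np.sum(psi**2)
    return np.sqrt(T / A_hat) * a_hat / s2

rng = np.random.default_rng(0)
reps, T = 1000, 200
rejections = sum(abs(lm_stat(rng.normal(size=T))) > 1.96 for _ in range(reps))
size = rejections / reps   # empirical size at the 5% nominal level
```

Comparing the empirical rejection frequency with the nominal 0.05 is exactly the exercise summarised in Tables 1 and 2, there with the full model, several sample sizes, and one-sided alternatives as well.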
Table 1 displays the rejection frequencies of the test statistic r̂ in (5) for three different sample sizes, T = 100, 500, and 1000, and a nominal size of 5%. It can be seen that the empirical sizes were larger than the nominal 5% in all cases, though they approached 0.05 as the sample size increased. There was also an asymmetry in the size, as higher values were obtained in all cases against alternatives of the form d < d_0. Finally, the rejection frequencies against departures from the null increased with the sample size, which is consistent with the asymptotic behaviour of the test.
Table 2 is similar to Table 1 but reports the results based on the Student's t_3 distribution for the error term. Once again, the sizes were higher than the 5% level, and higher values were observed against departures of the form d < d_0. The rejection frequencies were also higher for this type of departure, and even for small departures they were relatively high. Note: We generated Gaussian series with T = 1000 and then produced the realisations of y_t in (1) and (2) with d = 1.00.


Three Empirical Applications
For illustration purposes, we used the proposed framework to model three US time series. The first was the US real GNP per capita series analysed in Omay et al. (2017); it is quarterly and spans the period from 1947 Q1 to 2018 Q1, for a total of 285 observations (see Figure 5), and its source was the FRED database of the Federal Reserve Bank of St Louis (https://www.stlouisfed.org/, accessed on 1 May 2020). The second was the S&P500 weekly series from 1 January 1970 up to 23 October 2023, obtained from Yahoo! Finance (see Figure 6). The third was the US Consumer Price Index for All Urban Consumers, monthly, from January 1913 until October 2023 (see Figure 7). The issue of interest is whether the effects of exogenous shocks are transitory or permanent, and thus whether the series can be characterised as trend stationary or difference stationary (Omay et al. 2017).
Table 3 reports the results for US real GNP, more precisely, the estimates of α, β, γ, and d in the model given by Equations (1) and (2) under the assumption that u_t is a white noise process with zero mean and constant variance. It can be seen that when values of γ from 0 to 1.50 with 0.10 increments were selected, the estimates of d were very similar, ranging from 1.28 to 1.30. The estimated model exhibited an exponential trend with γ = 0.80 and d = 1.28, with the 95% confidence interval given by (1.17, 1.42), and with the remaining two parameters, α and β, both statistically significant. Thus, the unit root null hypothesis is rejected in favour of d > 1, and γ < 1 indicates the presence of a concave time trend in the data. Note: The first column reports the values of the exponent of the trend. The second and third columns refer, respectively, to the estimated differencing parameter and the associated 95% confidence intervals. The following columns display the intercept and the slope of the exponential trend along with their associated t-values. The final column reports the test statistics.
Table 4 has the same layout as the previous one but concerns the S&P500 stock market index. The estimates of d ranged between 0.91 and 1.24, and the lowest statistic was obtained with γ = 1.00 and d = 0.97 (0.92, 1.24). Thus, a linear time trend with a unit root seems to be a plausible hypothesis; this is consistent, for t > 2, with a random walk model with an intercept, and thus with the efficient market hypothesis (EMH) in its weak form (Fama 1970). Note: The first column reports the values of the exponent of the trend. The second and third columns refer, respectively, to the estimated differencing parameter and the associated 95% confidence intervals. The following columns display the intercept and the slope of the exponential trend along with their associated t-values. The final column reports the test statistics.
Finally, Table 5 reports the corresponding results for the US Consumer Price Index. In this case, d was much higher than 1 (specifically, 1.44), with a confidence interval given by (1.38, 1.52). Thus, the unit root null hypothesis is rejected in favour of d > 1; also, the estimate of γ = 1.10 implies a convex time trend. Note: The first column reports the values of the exponent of the trend. The second and third columns refer, respectively, to the estimated differencing parameter and the associated 95% confidence intervals. The following columns display the intercept and the slope of the exponential trend along with their associated t-values. The final column reports the test statistics.

Conclusions
This paper puts forward a long-memory modelling and testing framework that allows for exponential deterministic trends in a fractional integration context. An attractive feature of the proposed test statistic is that its asymptotic distribution is N(0,1). The Monte Carlo simulations carried out to examine the properties of the proposed test indicated that it performs well in finite samples. As an illustration, the proposed framework was then applied to model the behaviour of US real GNP per capita, the S&P500 stock market index, and US consumer prices. The empirical exercise showed that the suggested model captured well the behaviour of the series under examination and was data-congruent; specifically, in the case of US real GNP per capita and the US CPI, the exponential trend fractional model outperformed the one with a linear trend (i.e., γ = 1) for different differencing parameters.
The proposed modelling approach is widely applicable to time series that exhibit exponential trends. However, it should be noted that, although unlimited exponential growth might characterise some economic and financial series, this is not likely to occur whenever real resources are involved. In such cases, there will necessarily be an upper bound, which should also be introduced into the model, for instance through a logistic curve. In addition, the stochastic structure of the model described by Equation (2) can be extended using alternative approaches that allow for poles or singularities in the spectrum at one or more frequencies away from zero, as is the case with seasonal and/or cyclical structures. These issues are left for future research.

Figure 5. US real GNP per capita. Note: the data source was the FRED database of the Federal Reserve Bank of St Louis (https://www.stlouisfed.org/, accessed on 1 May 2020); the series is quarterly and the sample period spans from 1947 Q1 to 2018 Q1.

Figure 6. S&P500 Stock Market Index. Note: the data source was Yahoo! Finance (https://es.finance.yahoo.com/); the series is weekly and the sample period extends from 1 January 1970 to 23 October 2023.

Figure 7. US Consumer Price Index for All Urban Consumers. Note: the data source was the U.S. Department of Labor, Bureau of Labor Statistics (https://www.bls.gov); the series is monthly and the sample period runs from 1913m1 to 2023m10.

Table 1. Rejection frequencies against one-sided alternatives with Gaussian errors. Note: The values reported in this table are the rejection frequencies of the test against fractional alternatives. The size of the test is shown in bold.


Table 2. Rejection frequencies against one-sided alternatives with t_3-distributed errors. Note: The values reported in the table are the rejection frequencies of the test against fractional alternatives. The size of the test is shown in bold.

Table 3. Estimated coefficients for the log of US real GNP per capita.

Table 4. Estimated coefficients for the S&P500 stock market prices.

Table 5. Estimated coefficients for the US Consumer Price Index.