# Jackknife Bias Reduction in the Presence of a Near-Unit Root


## Abstract


## 1. Introduction

## 2. Jackknife Estimation with a Near-Unit Root

#### 2.1. The Model and the Standard Jackknife Estimator

**Assumption 1.**

#### 2.2. Sub-Sample Properties

**Theorem 1.**

- (a) $\frac{1}{\ell^{2}}\sum_{t\in\tau_{j}}y_{t}^{2}\Rightarrow \sigma^{2}m^{2}\int_{(j-1)/m}^{j/m}J_{c}^{2}$;
- (b) $\frac{1}{\ell}\sum_{t\in\tau_{j}}y_{t-1}u_{t}\Rightarrow \sigma^{2}m\int_{(j-1)/m}^{j/m}J_{c}\,dW+\frac{1}{2}(\sigma^{2}-\sigma_{u}^{2})$;
- (c) $\ell(\widehat{\rho}_{j}-\rho)\Rightarrow Z_{c,j}(\eta)=\dfrac{\int_{(j-1)/m}^{j/m}J_{c}\,dW+\frac{1}{2m}(1-\eta)}{m\int_{(j-1)/m}^{j/m}J_{c}^{2}},\quad j=1,\dots,m,$ where $\eta=\sigma_{u}^{2}/\sigma^{2}$.
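Theorem 1 concerns the least-squares estimators computed on the $m$ non-overlapping sub-samples $\tau_j$, each of length $\ell=n/m$. The construction can be sketched as follows (in Python rather than the authors' Gauss code; the sample size, value of $c$ and seed are illustrative choices, not those of the paper):

```python
import numpy as np

def subsample_ols(y, m):
    """OLS estimates of rho on m consecutive non-overlapping blocks.

    Block j regresses y_t on y_{t-1} for t in tau_j; the lagged value
    entering each block is the last observation of the previous block,
    so the blocks partition a single contiguous sample.
    """
    n = len(y) - 1          # number of usable (y_{t-1}, y_t) pairs
    ell = n // m            # block length
    est = []
    for j in range(m):
        lo, hi = j * ell, (j + 1) * ell
        y_lag = y[lo:hi]
        y_cur = y[lo + 1:hi + 1]
        est.append((y_lag @ y_cur) / (y_lag @ y_lag))
    return np.array(est)

rng = np.random.default_rng(0)
n, c = 500, -5.0
rho = 1 + c / n             # local-to-unity autoregressive root
u = rng.standard_normal(n)
y = np.zeros(n + 1)         # y_0 = 0, as assumed in the paper
for t in range(n):
    y[t + 1] = rho * y[t] + u[t]

rho_full = subsample_ols(y, 1)[0]   # full-sample OLS estimate
rho_sub = subsample_ols(y, 4)       # m = 4 sub-sample estimates
```

With $c=-5$ and $n=500$ the sub-sample estimates cluster around $\rho=1+c/n$, with $O(1/\ell)$ biases of the kind characterised in part (c).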

**Theorem 2.**

## 3. A Moment Generating Function and Its Properties

**Theorem 3.**

- (a) The joint MGF of $N_{c}$ and $D_{c}$ is given by:
$$M_{c}(\theta_{1},\theta_{2})=E\exp(\theta_{1}N_{c}+\theta_{2}D_{c})=\exp\left(-\frac{(\theta_{1}+c)}{2}(b-a)\right)H_{c}(\theta_{1},\theta_{2})^{-1/2},$$
where
$$H_{c}(\theta_{1},\theta_{2})=\cosh\left((b-a)\lambda\right)-\frac{\theta_{1}+c+v^{2}(\theta_{1}^{2}+2\theta_{2})}{\lambda}\sinh\left((b-a)\lambda\right),$$
$\lambda=(c^{2}+2c\theta_{1}-2\theta_{2})^{1/2}$, and $v^{2}$ denotes the variance of $J_{c}(a)$.
- (b) The individual MGFs for $N_{c}$ and $D_{c}$ are given by, respectively:
$$M_{N_{c}}(\theta_{1})=\exp\left(-\frac{(\theta_{1}+c)}{2}(b-a)\right)\left[\cosh\left((b-a)\lambda_{1}\right)-\frac{\theta_{1}+c+v^{2}\theta_{1}^{2}}{\lambda_{1}}\sinh\left((b-a)\lambda_{1}\right)\right]^{-1/2},\quad \lambda_{1}=(c^{2}+2c\theta_{1})^{1/2};$$
$$M_{D_{c}}(\theta_{2})=\exp\left(-\frac{c}{2}(b-a)\right)\left[\cosh\left((b-a)\lambda_{2}\right)-\frac{c+2v^{2}\theta_{2}}{\lambda_{2}}\sinh\left((b-a)\lambda_{2}\right)\right]^{-1/2},\quad \lambda_{2}=(c^{2}-2\theta_{2})^{1/2}.$$
- (c) Let:
$$g(\theta_{2})=\cosh\left((b-a)(c^{2}+2\theta_{2})^{1/2}\right)-\frac{c-2v^{2}\theta_{2}}{(c^{2}+2\theta_{2})^{1/2}}\sinh\left((b-a)(c^{2}+2\theta_{2})^{1/2}\right).$$
Then, the expectation of $N_{c}/D_{c}$ is given by:
$$E\left(\frac{N_{c}}{D_{c}}\right)=\int_{0}^{\infty}\left.\frac{\partial M_{c}(\theta_{1},-\theta_{2})}{\partial\theta_{1}}\right|_{\theta_{1}=0}d\theta_{2}=I_{1}(a,b)+I_{2}(a,b)+I_{3}(a,b)+I_{4}(a,b),$$
where
$$I_{1}(a,b)=-\frac{(b-a)}{2}\exp\left(-\frac{c(b-a)}{2}\right)\int_{0}^{\infty}\frac{1}{g(\theta_{2})^{1/2}}\,d\theta_{2},$$
and $I_{2}(a,b)$, $I_{3}(a,b)$ and $I_{4}(a,b)$ collect the remaining terms of the derivative $\partial M_{c}/\partial\theta_{1}$ evaluated at $(0,-\theta_{2})$.
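As a rough numerical illustration of $I_1(a,b)$, consider the full-sample case $b-a=1$, $v^2=0$ with $c=0$, under the assumption that the $\sinh$ coefficient in $g(\theta_2)$ then vanishes, leaving $g(\theta_2)=\cosh((2\theta_2)^{1/2})$. A plain trapezoidal rule on a truncated grid stands in for the adaptive quadrature the authors used in Gauss; the truncation point and step count are ad hoc choices:

```python
import math

# I1 = -((b-a)/2) * exp(-c(b-a)/2) * integral_0^inf g(theta2)^(-1/2) dtheta2,
# specialised to b - a = 1, c = 0, v^2 = 0, where (by assumption here)
# g(theta2) = cosh(sqrt(2*theta2)).

def g(theta2):
    return math.cosh(math.sqrt(2.0 * theta2))

def I1(upper=400.0, steps=400_000):
    # Trapezoidal rule on [0, upper]; the integrand decays like
    # exp(-sqrt(theta2/2)), so the truncation error is negligible.
    h = upper / steps
    total = 0.5 * (g(0.0) ** -0.5 + g(upper) ** -0.5)
    total += sum(g(k * h) ** -0.5 for k in range(1, steps))
    return -0.5 * h * total

i1 = I1()    # a negative constant, roughly -2.8 under these assumptions
```

The remaining terms $I_2$ through $I_4$ would be handled by the same one-dimensional quadrature.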

**Corollary to Theorem 3.**

**Theorem 4.**

- (a) $(-2c)^{1/2}\int_{a}^{b}J_{c}\,dW\Rightarrow N(0,(b-a))$;
- (b) $(-2c)\int_{a}^{b}J_{c}^{2}\stackrel{p}{\to}(b-a)$;
- (c) $K(c)\Rightarrow N(0,1)$ if $\sigma_{u}^{2}=\sigma^{2}$ (and hence $\eta=1$), and diverges otherwise.
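Parts (a) and (b) describe the behaviour of the two building blocks as $c\to-\infty$. A Monte Carlo sketch (Euler discretisation of the Ornstein-Uhlenbeck process on $[a,b]=[0,1]$; the grid size, value of $c$ and replication count are illustrative choices):

```python
import numpy as np

# Sketch of Theorem 4(a)-(b): for large negative c,
# (-2c)^{1/2} * int_0^1 J_c dW is approximately N(0, 1) and
# (-2c) * int_0^1 J_c^2 concentrates around 1 (here b - a = 1).

rng = np.random.default_rng(42)
c, n, reps = -50.0, 2000, 2000

dW = rng.standard_normal((reps, n)) / np.sqrt(n)   # Brownian increments
J = np.zeros((reps, n))
for t in range(n - 1):
    # Euler step for dJ = c*J dt + dW on a grid of mesh 1/n
    J[:, t + 1] = (1 + c / n) * J[:, t] + dW[:, t]

stats_N = np.sqrt(-2 * c) * np.sum(J * dW, axis=1)  # ~ N(0, 1)
stats_D = -2 * c * np.sum(J ** 2, axis=1) / n       # ~ 1
```

At $c=-50$ the sample variance of `stats_N` and the sample mean of `stats_D` are both close to one, up to discretisation and simulation error.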

## 4. An Optimal Jackknife Estimator

**Theorem 5.**

## 5. Conclusions

## Acknowledgments

## Author Contributions

## Conflicts of Interest

## Appendix A.

**Proof of Theorem 1.**

**Proof of Theorem 2.**

**Proof of Theorem 3.**

- (i) Take the expectation in $M_{c}(\theta_{1},\theta_{2})$ conditional on $\mathcal{F}_{0}^{a}$, the sigma-field generated by $W$ on $[0,a]$.
- (ii) Introduce another O-U process $V$ and apply Girsanov's theorem again to take the expectation with respect to $\mathcal{F}_{0}^{a}$.
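Step (ii) rests on the standard Girsanov change of drift for an O-U process. For a process $J$ satisfying $dJ=cJ\,dt+dW$ on $[a,b]$, shifting the mean-reversion parameter from $c$ to $\gamma$ gives the likelihood ratio (a standard result, stated here for orientation rather than as the paper's exact intermediate step):

$$\frac{dP_{c}}{dP_{\gamma}}=\exp\left((c-\gamma)\int_{a}^{b}J\,dJ-\frac{c^{2}-\gamma^{2}}{2}\int_{a}^{b}J^{2}\,dt\right).$$

Choosing $\gamma$ so that the $\int J^{2}$ terms in the exponent cancel reduces the expectation to a Gaussian moment computation.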

**Proof of Corollary to Theorem 3.**

- (a) $b-a=1$ and $v^{2}=0$;
- (b) $b-a=1/m$ and $\lim_{c\to 0}v_{j-1}^{2}=(j-1)/m$.
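Case (b) involves the variance of the initial value of the sub-sample process. For $J_{c}(a)=\int_{0}^{a}e^{(a-s)c}dW(s)$ this variance is $v^{2}=(e^{2ac}-1)/(2c)$, whose limit as $c\to 0$ is $a$, consistent with the stated limit $(j-1)/m$ at $a=(j-1)/m$. A quick numerical check (the values of $a$ and $c$ are arbitrary illustrations):

```python
import math

# v^2 = (e^{2ac} - 1) / (2c): variance of the O-U process J_c at time a,
# started from zero.  As c -> 0 it tends to a (the Brownian variance).

def v_squared(a, c):
    return (math.exp(2 * a * c) - 1.0) / (2.0 * c)

a = 2.0 / 3.0                       # e.g. j - 1 = 2, m = 3
vals = [v_squared(a, c) for c in (-1.0, -0.1, -0.001)]
# vals increases towards a as c approaches 0 from below
```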

**Derivation of (14).** From (A3), we can write:

**Proof of Theorem 4.**

**Proof of Theorem 5.**

- (i) $w_{1,c}^{\ast}(\eta)+w_{2,c}^{\ast}(\eta)=1$, and
- (ii) $w_{1,c}^{\ast}(\eta)\mu_{c}(\eta)+w_{2,c}^{\ast}(\eta)\sum_{j=1}^{m}\mu_{c,j}(\eta)=0$.
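Conditions (i) and (ii) determine the optimal weights as the solution of a two-equation linear system. A small sketch using the $c=0$, $m=2$, $\eta=1$ bias constants reported in the tables below ($-1.7814$ for the full sample and $-1.7814$, $-1.1382$ for the two sub-samples); the solution reproduces the tabulated optimal weights $2.5651$ and $-1.5651$:

```python
import numpy as np

# Conditions (i) and (ii) as a 2x2 linear system in (w1, w2).
mu_c = -1.7814                        # full-sample bias constant mu_c(1)
mu_sum = (-1.7814) + (-1.1382)        # sum over j of mu_{c,j}(1), m = 2

A = np.array([[1.0, 1.0],             # (i)  w1 + w2 = 1
              [mu_c, mu_sum]])        # (ii) w1*mu_c + w2*sum_j mu_{c,j} = 0
b = np.array([1.0, 0.0])
w1, w2 = np.linalg.solve(A, b)        # -> approximately (2.5651, -1.5651)
```

Equivalently, $w_{2,c}^{\ast}=\mu_{c}/(\mu_{c}-\sum_{j}\mu_{c,j})$ and $w_{1,c}^{\ast}=1-w_{2,c}^{\ast}$.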

## References

- Abadir, Karim M. 1993. The limiting distribution of the autocorrelation coefficient under a unit root. Annals of Statistics 21: 1058–70. [Google Scholar] [CrossRef]
- Chambers, Marcus J. 2013. Jackknife estimation and inference in stationary autoregressive models. Journal of Econometrics 172: 142–57. [Google Scholar] [CrossRef]
- Chambers, Marcus J. 2015. A jackknife correction to a test for cointegration rank. Econometrics 3: 355–75. [Google Scholar] [CrossRef]
- Chambers, Marcus J., and Maria Kyriacou. 2013. Jackknife estimation with a unit root. Statistics and Probability Letters 83: 1677–82. [Google Scholar] [CrossRef]
- Chan, Ngai H., and Ching-Zong Wei. 1987. Asymptotic inference for nearly nonstationary AR(1) processes. Annals of Statistics 15: 1050–63. [Google Scholar] [CrossRef]
- Chen, Ye, and Jun Yu. 2015. Optimal jackknife for unit root models. Statistics and Probability Letters 99: 135–42. [Google Scholar] [CrossRef]
- Gonzalo, Jesus, and Jean-Yves Pitarakis. 1998. On the exact moments of nonstandard asymptotic distributions in an unstable AR(1) with dependent errors. International Economic Review 39: 71–88. [Google Scholar] [CrossRef]
- Kruse, Robinson, and Hendrik Kaufmann. 2015. Bias-corrected estimation in mildly explosive autoregressions. Paper presented at Annual Conference 2015: Economic Development—Theory and Policy, Verein für Socialpolitik/German Economic Association, Muenster, Germany, September 6–9. [Google Scholar]
- Kyriacou, Maria. 2011. Jackknife Estimation and Inference in Non-stationary Autoregression. Ph.D. thesis, University of Essex, Colchester, UK. [Google Scholar]
- Kyriacou, Maria, Peter C. B. Phillips, and Francesca Rossi. 2017. Indirect inference in spatial autoregression. The Econometrics Journal 20: 168–89. [Google Scholar] [CrossRef]
- Magnus, Jan R. 1986. The exact moments of a ratio of quadratic forms in normal variables. Annales d’Économie et de Statistique 4: 95–109. [Google Scholar] [CrossRef]
- Meng, Xiao-Li. 2005. From unit root to Stein’s estimator to Fisher’s k statistics: If you have a moment, I can tell you more. Statistical Science 20: 141–62. [Google Scholar] [CrossRef]
- Park, Joon. 2006. A bootstrap theory for weakly integrated processes. Journal of Econometrics 133: 639–72. [Google Scholar] [CrossRef]
- Perron, Pierre. 1989. The calculation of the limiting distribution of the least-squares estimator in a near-integrated model. Econometric Theory 5: 241–55. [Google Scholar] [CrossRef]
- Perron, Pierre. 1991. A continuous time approximation to the unstable first-order autoregressive process: The case without an intercept. Econometrica 59: 211–36. [Google Scholar] [CrossRef]
- Perron, Pierre. 1996. The adequacy of asymptotic approximations in the near-integrated autoregressive model with dependent errors. Journal of Econometrics 70: 317–50. [Google Scholar] [CrossRef]
- Phillips, Peter C. B. 1987a. Towards a unified asymptotic theory for autoregression. Biometrika 74: 535–47. [Google Scholar] [CrossRef]
- Phillips, Peter C. B. 1987b. Time series regression with a unit root. Econometrica 55: 277–301. [Google Scholar] [CrossRef]
- Phillips, Peter C. B. 2012. Folklore theorems, implicit maps, and indirect inference. Econometrica 80: 425–54. [Google Scholar]
- Phillips, Peter C. B. 2014. On confidence intervals for autoregressive roots and predictive regression. Econometrica 82: 1177–95. [Google Scholar] [CrossRef]
- Phillips, Peter C. B., Hyungsik Roger Moon, and Zhijie Xiao. 2001. How to estimate autoregressive roots near unity. Econometric Theory 17: 26–69. [Google Scholar] [CrossRef]
- Phillips, Peter C. B., and Jun Yu. 2005. Jackknifing bond option prices. Review of Financial Studies 18: 707–42. [Google Scholar] [CrossRef]
- Quenouille, Maurice H. 1956. Notes on bias in estimation. Biometrika 43: 353–60. [Google Scholar] [CrossRef]
- Stoykov, Marian Z. 2017. Optimal Jackknife Estimation of Local to Unit Root Models. Colchester: Essex Business School, preprint. [Google Scholar]
- Tanaka, Katsuto. 1996. Time Series Analysis: Nonstationary and Noninvertible Distribution Theory. New York: Wiley. [Google Scholar]
- Tukey, John W. 1958. Bias and confidence in not-quite large samples. Annals of Mathematical Statistics 29: 614. [Google Scholar]
- White, John S. 1958. The limiting distribution of the serial correlation coefficient in the explosive case. Annals of Mathematical Statistics 29: 1188–97. [Google Scholar] [CrossRef]

1. If $y_{0}\ne 0$, then additional random variables appear in the numerator and denominator of the bias, thereby complicating the derivation of the required expectation. In the case of $c=0$, the simulation results reported in Table 2.3 of Kyriacou (2011) indicate that the bias of $\widehat{\rho}$ increases with $y_{0}/\sigma$, but that the jackknife continues to be an effective method of bias reduction.
2. The integrals were computed numerically using an adaptive quadrature method in the integrate1d routine in Gauss 17.
3. The Gauss codes used for the jackknife estimators are available from the authors on request.
4. In the simulations, we are taking $\eta=1$ as known.
5. We thank a referee for suggesting that we investigate the performance of the estimators when $\eta<1$.
6. It should also be noted that we have assumed $y_{0}=0$ in deriving the jackknife weights, whereas the expansion in Theorem 2 suggests that the bias functions (and, hence, the jackknife weights) will depend in a non-trivial and more complicated way on $y_{0}$ when $y_{0}\ne 0$. Although we have not investigated the issue further here, the results in Kyriacou (2011) suggest that jackknife methods can still provide bias reduction even when $y_{0}\ne 0$.

| $j\setminus c$ | $-50$ | $-20$ | $-10$ | $-5$ | $-1$ | $0$ | $1$ |
|---|---|---|---|---|---|---|---|
| $m=1$ | | | | | | | |
| 1 | $-1.9995$ | $-1.9972$ | $-1.9912$ | $-1.9758$ | $-1.8818$ | $-1.7814$ | $-1.5811$ |
| $m=2$ | | | | | | | |
| 1 | $-1.9981$ | $-1.9912$ | $-1.9758$ | $-1.9439$ | $-1.8408$ | $-1.7814$ | $-1.6969$ |
| 2 | $-1.9604$ | $-1.9043$ | $-1.8214$ | $-1.6891$ | $-1.3295$ | $-1.1382$ | $-0.8920$ |
| $m=3$ | | | | | | | |
| 1 | $-1.9962$ | $-1.9838$ | $-1.9595$ | $-1.9175$ | $-1.8234$ | $-1.7814$ | $-1.7283$ |
| 2 | $-1.9412$ | $-1.8613$ | $-1.7502$ | $-1.5921$ | $-1.2722$ | $-1.1382$ | $-0.9791$ |
| 3 | $-1.9412$ | $-1.8613$ | $-1.7500$ | $-1.5845$ | $-1.1515$ | $-0.9319$ | $-0.6759$ |
| $m=4$ | | | | | | | |
| 1 | $-1.9939$ | $-1.9758$ | $-1.9439$ | $-1.8973$ | $-1.8138$ | $-1.7814$ | $-1.7427$ |
| 2 | $-1.9225$ | $-1.8214$ | $-1.6891$ | $-1.5210$ | $-1.2411$ | $-1.1382$ | $-1.0210$ |
| 3 | $-1.9225$ | $-1.8214$ | $-1.6879$ | $-1.5021$ | $-1.1016$ | $-0.9319$ | $-0.7410$ |
| 4 | $-1.9225$ | $-1.8214$ | $-1.6879$ | $-1.5006$ | $-1.0396$ | $-0.8143$ | $-0.5643$ |
| $m=6$ | | | | | | | |
| 1 | $-1.9884$ | $-1.9594$ | $-1.9175$ | $-1.8698$ | $-1.8037$ | $-1.7814$ | $-1.7564$ |
| 2 | $-1.8867$ | $-1.7502$ | $-1.5921$ | $-1.4268$ | $-1.2085$ | $-1.1382$ | $-1.0616$ |
| 3 | $-1.8867$ | $-1.7500$ | $-1.5845$ | $-1.3812$ | $-1.0482$ | $-0.9319$ | $-0.8059$ |
| 4 | $-1.8867$ | $-1.7500$ | $-1.5843$ | $-1.3732$ | $-0.9697$ | $-0.8143$ | $-0.6472$ |
| 5 | $-1.8867$ | $-1.7500$ | $-1.5842$ | $-1.3717$ | $-0.9243$ | $-0.7348$ | $-0.5331$ |
| 6 | $-1.8867$ | $-1.7500$ | $-1.5842$ | $-1.3715$ | $-0.8958$ | $-0.6761$ | $-0.4450$ |
| $m=8$ | | | | | | | |
| 1 | $-1.9823$ | $-1.9439$ | $-1.8973$ | $-1.8526$ | $-1.7984$ | $-1.7814$ | $-1.7629$ |
| 2 | $-1.8530$ | $-1.6891$ | $-1.5210$ | $-1.3686$ | $-1.1915$ | $-1.1382$ | $-1.0813$ |
| 3 | $-1.8530$ | $-1.6879$ | $-1.5021$ | $-1.2991$ | $-1.0203$ | $-0.9319$ | $-0.8381$ |
| 4 | $-1.8530$ | $-1.6879$ | $-1.5006$ | $-1.2815$ | $-0.9326$ | $-0.8143$ | $-0.6893$ |
| 5 | $-1.8530$ | $-1.6879$ | $-1.5005$ | $-1.2766$ | $-0.8795$ | $-0.7348$ | $-0.5829$ |
| 6 | $-1.8530$ | $-1.6879$ | $-1.5005$ | $-1.2752$ | $-0.8444$ | $-0.6761$ | $-0.5008$ |
| 7 | $-1.8530$ | $-1.6879$ | $-1.5005$ | $-1.2748$ | $-0.8201$ | $-0.6302$ | $-0.4345$ |
| 8 | $-1.8530$ | $-1.6879$ | $-1.5005$ | $-1.2747$ | $-0.8027$ | $-0.5931$ | $-0.3793$ |
| $m=12$ | | | | | | | |
| 1 | $-1.9693$ | $-1.9175$ | $-1.8698$ | $-1.8324$ | $-1.7929$ | $-1.7814$ | $-1.7693$ |
| 2 | $-1.7916$ | $-1.5921$ | $-1.4268$ | $-1.3016$ | $-1.1742$ | $-1.1382$ | $-1.1007$ |
| 3 | $-1.7916$ | $-1.5845$ | $-1.3812$ | $-1.1979$ | $-0.9916$ | $-0.9319$ | $-0.8698$ |
| 4 | $-1.7916$ | $-1.5842$ | $-1.3732$ | $-1.1612$ | $-0.8943$ | $-0.8143$ | $-0.7313$ |
| 5 | $-1.7916$ | $-1.5842$ | $-1.3717$ | $-1.1464$ | $-0.8328$ | $-0.7348$ | $-0.6335$ |
| 6 | $-1.7916$ | $-1.5842$ | $-1.3715$ | $-1.1403$ | $-0.7904$ | $-0.6761$ | $-0.5585$ |
| 7 | $-1.7916$ | $-1.5842$ | $-1.3714$ | $-1.1376$ | $-0.7595$ | $-0.6302$ | $-0.4981$ |
| 8 | $-1.7916$ | $-1.5842$ | $-1.3714$ | $-1.1365$ | $-0.7362$ | $-0.5931$ | $-0.4477$ |
| 9 | $-1.7916$ | $-1.5842$ | $-1.3714$ | $-1.1360$ | $-0.7183$ | $-0.5622$ | $-0.4047$ |
| 10 | $-1.7916$ | $-1.5842$ | $-1.3714$ | $-1.1358$ | $-0.7041$ | $-0.5358$ | $-0.3674$ |
| 11 | $-1.7916$ | $-1.5842$ | $-1.3714$ | $-1.1357$ | $-0.6928$ | $-0.5131$ | $-0.3346$ |
| 12 | $-1.7916$ | $-1.5842$ | $-1.3714$ | $-1.1356$ | $-0.6837$ | $-0.4931$ | $-0.3055$ |

| $m$: | 2 | 3 | 4 | 6 | 8 | 12 |
|---|---|---|---|---|---|---|
| Standard weights | | | | | | |
| $w_{1}$ | 2.0000 | 1.5000 | 1.3333 | 1.2000 | 1.1429 | 1.0909 |
| $w_{2}$ | −1.0000 | −0.5000 | −0.3333 | −0.2000 | −0.1429 | −0.0909 |
| Optimal weights: $c=-50$ | | | | | | |
| $w_{1,c}^{\ast}(1)$ | 2.0206 | 1.5156 | 1.3470 | 1.2122 | 1.1544 | 1.1016 |
| $w_{2,c}^{\ast}(1)$ | −1.0206 | −0.5156 | −0.3470 | −0.2122 | −0.1544 | −0.1016 |
| Optimal weights: $c=-20$ | | | | | | |
| $w_{1,c}^{\ast}(1)$ | 2.0521 | 1.5385 | 1.3670 | 1.2292 | 1.1698 | 1.1151 |
| $w_{2,c}^{\ast}(1)$ | −1.0521 | −0.5385 | −0.3670 | −0.2292 | −0.1698 | −0.1151 |
| Optimal weights: $c=-10$ | | | | | | |
| $w_{1,c}^{\ast}(1)$ | 2.1026 | 1.5741 | 1.3969 | 1.2535 | 1.1909 | 1.1325 |
| $w_{2,c}^{\ast}(1)$ | −1.1026 | −0.5741 | −0.3969 | −0.2535 | −0.1909 | −0.1325 |
| Optimal weights: $c=-5$ | | | | | | |
| $w_{1,c}^{\ast}(1)$ | 2.1923 | 1.6336 | 1.4445 | 1.2898 | 1.2213 | 1.1565 |
| $w_{2,c}^{\ast}(1)$ | −1.1923 | −0.6336 | −0.4445 | −0.2898 | −0.2213 | −0.1565 |
| Optimal weights: $c=-1$ | | | | | | |
| $w_{1,c}^{\ast}(1)$ | 2.4605 | 1.7956 | 1.5678 | 1.3788 | 1.2937 | 1.2117 |
| $w_{2,c}^{\ast}(1)$ | −1.4605 | −0.7956 | −0.5678 | −0.3788 | −0.2937 | −0.2117 |
| Optimal weights: $c=0$ | | | | | | |
| $w_{1,c}^{\ast}(1)$ | 2.5651 | 1.8605 | 1.6176 | 1.4147 | 1.3228 | 1.2337 |
| $w_{2,c}^{\ast}(1)$ | −1.5651 | −0.8605 | −0.6176 | −0.4147 | −0.3228 | −0.2337 |
| Optimal weights: $c=1$ | | | | | | |
| $w_{1,c}^{\ast}(1)$ | 2.5689 | 1.8773 | 1.6355 | 1.4311 | 1.3373 | 1.2455 |
| $w_{2,c}^{\ast}(1)$ | −1.5689 | −0.8773 | −0.6355 | −0.4311 | −0.3373 | −0.2455 |
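The standard weights in the table above are the usual delete-one-group jackknife weights, with closed forms $w_{1}=m/(m-1)$ and $w_{2}=-1/(m-1)$ (so that $w_{1}+w_{2}=1$). A one-line check against the tabulated values:

```python
# Closed-form standard jackknife weights; rounded to four decimals they
# match the first two rows of the table above for m = 2, 3, 4, 6, 8, 12.
def standard_weights(m):
    return m / (m - 1), -1.0 / (m - 1)

rows = {m: standard_weights(m) for m in (2, 3, 4, 6, 8, 12)}
# e.g. rows[2] == (2.0, -1.0) and rows[3] == (1.5, -0.5)
```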

| Estimator | $n=24$ | $n=48$ | $n=96$ | $n=192$ |
|---|---|---|---|---|
| $c=0$ | | | | |
| OLS | −0.0663 | −0.0351 | −0.0180 | −0.0091 |
| Standard jackknife | −0.0343 | −0.0155 | −0.0072 | −0.0035 |
| Optimal/unit root jackknife | −0.0163 | −0.0045 | −0.0011 | −0.0003 |
| Chen-Yu jackknife | −0.0264 | −0.0144 | −0.0110 | −0.0102 |
| $c=-1$ | | | | |
| OLS | −0.0675 | −0.0365 | −0.0188 | −0.0096 |
| Standard jackknife | −0.0323 | −0.0145 | −0.0067 | −0.0032 |
| Optimal jackknife | −0.0161 | −0.0044 | −0.0011 | −0.0002 |
| Unit root jackknife | −0.0124 | −0.0021 | 0.0002 | 0.0004 |
| Chen-Yu jackknife | −0.0190 | −0.0099 | −0.0087 | −0.0089 |
| $c=-5$ | | | | |
| OLS | −0.0589 | −0.0350 | −0.0188 | −0.0098 |
| Standard jackknife | −0.0193 | −0.0088 | −0.0038 | −0.0017 |
| Optimal jackknife | −0.0116 | −0.0037 | −0.0009 | −0.0001 |
| Unit root jackknife | 0.0031 | 0.0061 | 0.0048 | 0.0029 |
| Chen-Yu jackknife | 0.0037 | 0.0027 | −0.0017 | −0.0051 |
| $c=-10$ | | | | |
| OLS | −0.0437 | −0.0310 | −0.0178 | −0.0095 |
| Standard jackknife | −0.0114 | −0.0055 | −0.0022 | −0.0009 |
| Optimal jackknife | −0.0080 | −0.0029 | −0.0006 | −0.0001 |
| Unit root jackknife | 0.0069 | 0.0089 | 0.0066 | 0.0039 |
| Chen-Yu jackknife | 0.0090 | 0.0071 | 0.0013 | −0.0035 |

| Estimator | $n=24$ | $n=48$ | $n=96$ | $n=192$ |
|---|---|---|---|---|
| $c=0$ | | | | |
| OLS | 0.1366 | 0.0719 | 0.0371 | 0.0187 |
| Standard jackknife $(m=2)$ | 0.1482 | 0.0766 | 0.0396 | 0.0199 |
| Standard jackknife $(m=4,6,6,8)$ | 0.1310 | 0.0659 | 0.0333 | 0.0165 |
| Optimal/unit root jackknife $(m=2)$ | 0.1753 | 0.0915 | 0.0479 | 0.0242 |
| Optimal/unit root jackknife $(m=4,8,12,12)$ | 0.1383 | 0.0642 | 0.0312 | 0.0154 |
| Chen-Yu jackknife $(m=2)$ | 0.1640 | 0.0864 | 0.0462 | 0.0249 |
| Chen-Yu jackknife $(m=3)$ | 0.1392 | 0.0719 | 0.0374 | 0.0188 |
| $c=-1$ | | | | |
| OLS | 0.1428 | 0.0762 | 0.0396 | 0.0200 |
| Standard jackknife $(m=2)$ | 0.1524 | 0.0797 | 0.0415 | 0.0209 |
| Standard jackknife $(m=4,6,6,8)$ | 0.1368 | 0.0698 | 0.0355 | 0.0176 |
| Optimal jackknife $(m=2)$ | 0.1724 | 0.0908 | 0.0477 | 0.0242 |
| Optimal jackknife $(m=4,8,12,12)$ | 0.1416 | 0.0679 | 0.0334 | 0.0165 |
| Unit root jackknife $(m=2)$ | 0.1778 | 0.0939 | 0.0495 | 0.0251 |
| Unit root jackknife $(m=4,8,12,12)$ | 0.1435 | 0.0680 | 0.0333 | 0.0165 |
| Chen-Yu jackknife $(m=2)$ | 0.1710 | 0.0916 | 0.0489 | 0.0262 |
| Chen-Yu jackknife $(m=3)$ | 0.1477 | 0.0778 | 0.0408 | 0.0207 |
| $c=-5$ | | | | |
| OLS | 0.1626 | 0.0901 | 0.0476 | 0.0243 |
| Standard jackknife $(m=2)$ | 0.1745 | 0.0944 | 0.0498 | 0.0253 |
| Standard jackknife $(m=6,6,8,8)$ | 0.1615 | 0.0855 | 0.0442 | 0.0223 |
| Optimal jackknife $(m=2)$ | 0.1813 | 0.0982 | 0.0520 | 0.0265 |
| Optimal jackknife $(m=6,8,12,12)$ | 0.1641 | 0.0847 | 0.0432 | 0.0216 |
| Unit root jackknife $(m=2)$ | 0.1975 | 0.1078 | 0.0576 | 0.0295 |
| Unit root jackknife $(m=6,12,12,12)$ | 0.1710 | 0.0852 | 0.0433 | 0.0219 |
| Chen-Yu jackknife $(m=2)$ | 0.2066 | 0.1138 | 0.0610 | 0.0318 |
| Chen-Yu jackknife $(m=3)$ | 0.1857 | 0.1014 | 0.0540 | 0.0279 |
| $c=-10$ | | | | |
| OLS | 0.1809 | 0.1037 | 0.0558 | 0.0288 |
| Standard jackknife $(m=2)$ | 0.1971 | 0.1096 | 0.0584 | 0.0300 |
| Standard jackknife $(m=8,8,12,12)$ | 0.1853 | 0.1016 | 0.0534 | 0.0272 |
| Optimal jackknife $(m=2)$ | 0.2003 | 0.1114 | 0.0595 | 0.0306 |
| Optimal jackknife $(m=4,12,12,12)$ | 0.1877 | 0.1015 | 0.0530 | 0.0270 |
| Unit root jackknife $(m=2)$ | 0.2175 | 0.1217 | 0.0656 | 0.0340 |
| Unit root jackknife $(m=6,12,12,12)$ | 0.1985 | 0.1032 | 0.0539 | 0.0277 |
| Chen-Yu jackknife $(m=2)$ | 0.2328 | 0.1315 | 0.0711 | 0.0372 |
| Chen-Yu jackknife $(m=3)$ | 0.2151 | 0.1202 | 0.0649 | 0.0338 |

| Estimator | $n=24$ | $n=48$ | $n=96$ | $n=192$ |
|---|---|---|---|---|
| $\eta=0.5556$ (MA case) | | | | |
| $c=0$ | | | | |
| OLS | −0.0191 | −0.0105 | −0.0054 | −0.0028 |
| Standard jackknife | −0.0059 | −0.0018 | −0.0006 | −0.0002 |
| Optimal/unit root jackknife ($\eta=1$) | 0.0016 | 0.0031 | 0.0022 | 0.0012 |
| Chen-Yu jackknife | 0.0062 | 0.0054 | 0.0034 | 0.0018 |
| $c=-1$ | | | | |
| OLS | −0.0060 | −0.0040 | −0.0022 | −0.0012 |
| Standard jackknife | 0.0110 | 0.0068 | 0.0038 | 0.0020 |
| Optimal jackknife ($\eta=1$) | 0.0188 | 0.0118 | 0.0065 | 0.0034 |
| Unit root jackknife | 0.0206 | 0.0129 | 0.0072 | 0.0037 |
| Chen-Yu jackknife | 0.0275 | 0.0167 | 0.0091 | 0.0048 |
| $c=-5$ | | | | |
| OLS | 0.0631 | 0.0304 | 0.0150 | 0.0074 |
| Standard jackknife | 0.0885 | 0.0457 | 0.0234 | 0.0118 |
| Optimal jackknife ($\eta=1$) | 0.0934 | 0.0486 | 0.0250 | 0.0127 |
| Unit root jackknife | 0.1028 | 0.0543 | 0.0281 | 0.0143 |
| Chen-Yu jackknife | 0.1129 | 0.0601 | 0.0312 | 0.0159 |
| $\eta=0.0526$ (AR case) | | | | |
| $c=0$ | | | | |
| OLS | 0.0524 | 0.0237 | 0.0107 | 0.0049 |
| Standard jackknife | 0.0306 | 0.0137 | 0.0066 | 0.0033 |
| Optimal/unit root jackknife ($\eta=1$) | 0.0183 | 0.0081 | 0.0043 | 0.0024 |
| Chen-Yu jackknife | 0.0293 | 0.0143 | 0.0076 | 0.0039 |
| $c=-1$ | | | | |
| OLS | 0.0801 | 0.0386 | 0.0184 | 0.0089 |
| Standard jackknife | 0.0624 | 0.0305 | 0.0153 | 0.0077 |
| Optimal jackknife ($\eta=1$) | 0.0542 | 0.0268 | 0.0139 | 0.0071 |
| Unit root jackknife | 0.0524 | 0.0260 | 0.0135 | 0.0070 |
| Chen-Yu jackknife | 0.0645 | 0.0331 | 0.0172 | 0.0088 |
| $c=-5$ | | | | |
| OLS | 0.2116 | 0.1086 | 0.0547 | 0.0273 |
| Standard jackknife | 0.2096 | 0.1070 | 0.0541 | 0.0271 |
| Optimal jackknife ($\eta=1$) | 0.2092 | 0.1067 | 0.0539 | 0.0271 |
| Unit root jackknife | 0.2084 | 0.1061 | 0.0537 | 0.0270 |
| Chen-Yu jackknife | 0.2212 | 0.1138 | 0.0575 | 0.0289 |

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

Chambers, M.J.; Kyriacou, M. Jackknife Bias Reduction in the Presence of a Near-Unit Root. *Econometrics* **2018**, *6*, 11. https://doi.org/10.3390/econometrics6010011