Article

Johansen’s Reduced Rank Estimator Is GMM

Bruce E. Hansen
Department of Economics, University of Wisconsin, Madison, WI 53706, USA
Econometrics 2018, 6(2), 26; https://doi.org/10.3390/econometrics6020026
Submission received: 30 January 2018 / Revised: 9 March 2018 / Accepted: 16 May 2018 / Published: 18 May 2018
(This article belongs to the Special Issue Celebrated Econometricians: Katarina Juselius and Søren Johansen)

Abstract

The generalized method of moments (GMM) estimator of the reduced-rank regression model is derived under the assumption of conditional homoscedasticity. It is shown that this GMM estimator is algebraically identical to the maximum likelihood estimator under normality developed by Johansen (1988). This includes the vector error correction model (VECM) of Engle and Granger. It is also shown that GMM tests for reduced rank (cointegration) are algebraically similar to the Gaussian likelihood ratio tests. This shows that normality is not necessary to motivate these estimators and tests.
Keywords:
GMM; VECM; reduced rank
JEL Classification:
C3

1. Introduction

The vector error correction model (VECM) of Engle and Granger (1987) is one of the most widely used time-series models in empirical practice. The predominant estimation method for the VECM is the reduced-rank regression method introduced by Johansen (1988, 1991, 1995). Johansen’s estimation method is widely used because it is straightforward, it is a natural extension of the VAR model of Sims (1980), and it is computationally tractable.
Johansen motivated his estimator as the maximum likelihood estimator (MLE) of the VECM under the assumption that the errors are i.i.d. normal. For many users, it is unclear whether the estimator has a broader justification. In contrast, it is well known that least-squares estimation is both maximum likelihood under normality and method of moments under uncorrelatedness.
This paper provides the missing link. It is shown that Johansen’s reduced-rank estimator is algebraically identical to the generalized method of moments (GMM) estimator of the VECM, under the imposition of conditional homoscedasticity. This GMM estimator only uses uncorrelatedness and homoscedasticity. Thus Johansen’s reduced-rank estimator can be motivated under much broader conditions than normality.
The asymptotic efficiency of the estimator in the GMM class relies on the assumption of homoscedasticity (but not normality). When homoscedasticity fails, the reduced-rank estimator loses asymptotic efficiency but retains its interpretation as a GMM estimator.
It is also shown that the GMM tests for reduced (cointegration) rank are nearly identical to Johansen’s likelihood ratio tests. Thus the standard likelihood ratio tests for cointegration can be interpreted more broadly as GMM tests.
This paper does not introduce new estimation or inference methods. It merely points out that the currently used methods have a broader interpretation than may have been understood. The results leave open the possibility that new GMM methods that do not impose homoscedasticity could be developed.
This connection is not new. In a different context, Adrian et al. (2015) derived the equivalence of the likelihood and minimum-distance estimators of the reduced-rank model. The equivalence between the Limited Information Maximum Likelihood (LIML) estimator (which has a dual relation with reduced-rank regression) and a minimum distance estimator was discovered by Goldberger and Olkin (1971). Recently, Kolesár (2018) drew out connections between likelihood-based and minimum-distance estimation of endogenous linear regression models.
This paper is organized as follows. Section 2 introduces reduced-rank regression models and Johansen's estimator. Section 3 presents the GMM estimator and states the main theorems demonstrating the equivalence of the GMM estimator and the MLE. Section 4 presents the derivation of the GMM estimator. Section 5 contains two technical results relating generalized eigenvalue problems and the extrema of quadratic forms.

2. Reduced-Rank Regression Models

The VECM for p variables of cointegrating rank r with k lags is
$$\Delta X_t = \alpha \beta' X_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \Delta X_{t-i} + \Phi D_t + e_t, \qquad (1)$$
where $D_t$ are the deterministic components. Observations are $t = 1, \ldots, T$. The matrices α and β are $p \times r$ with $r \le p$. This is a famous workhorse model in applied time series, largely because of the seminal work of Engle and Granger (1987).
The primary estimation method for the VECM is known as reduced-rank regression and was developed by Johansen (1988, 1991, 1995). Algebraically, the VECM (1) is a special case of the reduced-rank regression model:
$$Y_t = \alpha \beta' X_t + \Psi Z_t + e_t, \qquad (2)$$
where $Y_t$ is $p \times 1$, $X_t$ is $m \times 1$, and $Z_t$ is $q \times 1$. The coefficient matrix α is $p \times r$ and β is $m \times r$ with $r \le \min(m, p)$. Johansen derived the MLE for model (2) under the assumption that $e_t$ is i.i.d. $N(0, \Omega)$. This immediately applies to the VECM (1) and is the primary application of reduced-rank regression in econometrics.
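For concreteness, the following minimal sketch (the helper name vecm_to_rrr and the use of numpy are illustrative assumptions, not part of the paper) arranges VECM data into the form (2): the dependent variable is $\Delta X_t$, the reduced-rank regressor is $X_{t-1}$, and $Z_t$ stacks the lagged differences and the deterministic terms.

```python
import numpy as np

def vecm_to_rrr(X, D, k):
    """Arrange VECM data (1) into the reduced-rank regression form (2).

    A sketch: X is a T x p matrix of levels, D a T x d matrix of
    deterministic terms, k the lag order. Returns (Y, Xlag, Z) so that
    Y_t = alpha beta' Xlag_t + Psi Z_t + e_t with Y_t = Delta X_t,
    Xlag_t = X_{t-1}, and Z_t = (Delta X_{t-1}', ..., Delta X_{t-k+1}', D_t')'.
    """
    dX = np.diff(X, axis=0)                         # Delta X_t for t = 2, ..., T
    Y = dX[k - 1:]                                  # Delta X_t, t = k+1, ..., T
    Xlag = X[k - 1:-1]                              # X_{t-1}, t = k+1, ..., T
    lags = [dX[k - 1 - i:-i] for i in range(1, k)]  # Delta X_{t-i}, i = 1, ..., k-1
    Z = np.hstack(lags + [D[k:]])                   # lagged differences and D_t
    return Y, Xlag, Z
```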
Canonical correlations were introduced by Hotelling (1936), and reduced-rank regression was introduced by Bartlett (1938). A complete theory was developed by Anderson and Rubin (1949, 1950) and Anderson (1951). These authors developed the MLE for the model:
$$Y_t = \Pi X_t + e_t, \qquad (3)$$
$$\Gamma' \Pi = 0, \qquad (4)$$
where Γ is $p \times (p - r)$ and is unknown. This is an alternative parameterization of (2) without the covariates $Z_t$. Anderson and Rubin (1949, 1950) considered the case $p - r = 1$ and primarily focused on estimation of the vector Γ. Anderson (1951) considered the case $p - r \ge 1$.
While the models (2) and (3)–(4) are equivalent and thus have the same MLE, the different parameterizations led the authors to different derivations. Anderson and Rubin derived the estimator of (3) and (4) by a tedious application of constrained optimization. (Specifically, they maximized the likelihood of (3) imposing the constraint (4) using Lagrange multiplier methods. The solution turned out to be tedious because (4) is a nonlinear function of the parameters Γ and Π .) The derivation is so cumbersome that it is excluded from nearly all statistics and econometrics textbooks, despite the fact that it is the source of the famous LIML estimator.
The elegant derivation used by Johansen (1988) is algebraically unrelated to that of Anderson-Rubin and is based on applying a concentration argument to the product structure in (2). It is similar to the derivation in Tso (1981), although the latter did not include the covariates Z t . Johansen’s derivation is algebraically straightforward and thus is widely taught to students.
It is useful to briefly describe the likelihood problem. The log-likelihood for model (2) under the assumption that $e_t$ is i.i.d. $N(0, \Omega)$ is
$$\ell(\alpha, \beta, \Psi, \Omega) = -\frac{T}{2} \log \det \Omega - \frac{1}{2} \sum_{t=1}^{T} \left( Y_t - \alpha \beta' X_t - \Psi Z_t \right)' \Omega^{-1} \left( Y_t - \alpha \beta' X_t - \Psi Z_t \right). \qquad (5)$$
The MLE maximizes $\ell(\alpha, \beta, \Psi, \Omega)$. Johansen's solution is as follows. Define the projection matrix $M_Z = I_T - Z(Z'Z)^{-1}Z'$ and the residual matrices $\tilde{Y} = M_Z Y$ and $\tilde{X} = M_Z X$. Consider the generalized eigenvalue problem:
$$\det\left[ \tilde{X}'\tilde{Y} \left( \tilde{Y}'\tilde{Y} \right)^{-1} \tilde{Y}'\tilde{X} - \lambda \tilde{X}'\tilde{X} \right] = 0. \qquad (6)$$
The solutions $1 > \hat{\lambda}_1 > \cdots > \hat{\lambda}_p > 0$ satisfy
$$\tilde{X}'\tilde{Y} \left( \tilde{Y}'\tilde{Y} \right)^{-1} \tilde{Y}'\tilde{X} \, \hat{\nu}_i = \tilde{X}'\tilde{X} \, \hat{\nu}_i \hat{\lambda}_i,$$
where $(\hat{\lambda}_i, \hat{\nu}_i)$ are known as the generalized eigenvalues and eigenvectors of $\tilde{X}'\tilde{Y} ( \tilde{Y}'\tilde{Y} )^{-1} \tilde{Y}'\tilde{X}$ with respect to $\tilde{X}'\tilde{X}$. The normalization $\hat{\nu}_i' \tilde{X}'\tilde{X} \hat{\nu}_i = 1$ is imposed.
Given the normalization $\beta' \tilde{X}'\tilde{X} \beta = I_r$, Johansen's reduced-rank estimator for β is
$$\hat{\beta}_{\mathrm{mle}} = \left( \hat{\nu}_1, \ldots, \hat{\nu}_r \right).$$
The MLE $\hat{\alpha}_{\mathrm{mle}}$ and $\hat{\Psi}_{\mathrm{mle}}$ are found by least-squares regression of $Y_t$ on $\hat{\beta}_{\mathrm{mle}}' X_t$ and $Z_t$.
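For readers who want to see the computation, the following is a minimal numerical sketch of this estimator (the helper name johansen_rrr and the reliance on scipy.linalg.eigh are assumptions made here for illustration). It partials out $Z$, solves the generalized eigenvalue problem (6), and returns the first $r$ eigenvectors; scipy's eigh already imposes the normalization $\hat{\nu}_i' \tilde{X}'\tilde{X} \hat{\nu}_i = 1$.

```python
import numpy as np
from scipy.linalg import eigh

def johansen_rrr(Y, X, Z, r):
    """Reduced-rank (Johansen) estimator of beta via the eigenproblem (6).

    A sketch: Y (T x p), X (T x m), Z (T x q) are stacked data matrices
    and r is the assumed rank. Returns (beta_hat, lam) with beta_hat the
    m x r matrix of leading eigenvectors and lam all eigenvalues, sorted
    in decreasing order.
    """
    # Partial out Z: Y-tilde = M_Z Y and X-tilde = M_Z X
    PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    Yt, Xt = Y - PZ @ Y, X - PZ @ X
    # Generalized eigenproblem of X~'Y~ (Y~'Y~)^{-1} Y~'X~ with respect to X~'X~
    A = Xt.T @ Yt @ np.linalg.solve(Yt.T @ Yt, Yt.T @ Xt)
    B = Xt.T @ Xt
    lam, V = eigh(A, B)                 # ascending eigenvalues, V' B V = I
    order = np.argsort(lam)[::-1]       # re-order so lambda_1 > lambda_2 > ...
    lam, V = lam[order], V[:, order]
    return V[:, :r], lam
```

The estimates $\hat{\alpha}$ and $\hat{\Psi}$ then follow, as stated above, from a least-squares regression of $Y_t$ on $\hat{\beta}' X_t$ and $Z_t$.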

3. Generalized Method of Moments

Define $W_t = (X_t', Z_t')'$. The GMM estimator of the reduced-rank regression model (2) is derived under the standard orthogonality restriction:
$$\mathrm{E}\left( W_t e_t' \right) = 0, \qquad (7)$$
plus the homoscedasticity condition:
$$\mathrm{E}\left( e_t e_t' \otimes W_t W_t' \right) = \Omega \otimes Q, \qquad (8)$$
where $\Omega = \mathrm{E}(e_t e_t')$ and $Q = \mathrm{E}(W_t W_t')$. These moment conditions are implied by the normal regression model. (Equations (7) and (8) can be deduced from the first-order conditions for maximization of (5).) Because (7) and (8) can be deduced from (5) but not vice versa, the moment condition model (7)-(8) is considerably more general than the normal regression model (5).
The efficient GMM criterion (see Hansen 1982) takes the form
$$J_r(\alpha, \beta, \Psi) = T \, \bar{g}_r(\alpha, \beta, \Psi)' \, \hat{V}^{-1} \, \bar{g}_r(\alpha, \beta, \Psi),$$
where
$$\bar{g}_r(\alpha, \beta, \Psi) = \frac{1}{T} \sum_{t=1}^{T} \left( Y_t - \alpha \beta' X_t - \Psi Z_t \right) \otimes W_t, \qquad \hat{V} = \hat{\Omega} \otimes \hat{Q}, \qquad (9)$$
$$\hat{\Omega} = \frac{1}{T} \sum_{t=1}^{T} \hat{e}_t \hat{e}_t', \qquad \hat{Q} = \frac{1}{T} \sum_{t=1}^{T} W_t W_t', \qquad (10)$$
and $\hat{e}_t$ are the least-squares residuals of the unconstrained model:
$$\hat{e}_t = Y_t - \hat{\Pi} X_t - \hat{\Psi} Z_t.$$
The GMM estimators are the parameter values that jointly minimize the criterion $J_r(\alpha, \beta, \Psi)$ subject to the normalization $\beta' \tilde{X}'\tilde{X} \beta = I_r$:
$$\left( \hat{\alpha}_{\mathrm{gmm}}, \hat{\beta}_{\mathrm{gmm}}, \hat{\Psi}_{\mathrm{gmm}} \right) = \underset{\beta' \tilde{X}'\tilde{X} \beta = I_r}{\operatorname{argmin}} \; J_r(\alpha, \beta, \Psi).$$
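To make the criterion concrete, the following sketch evaluates $J_r$ at candidate parameter values (the helper name gmm_criterion is an assumption for exposition). It uses the simplification $J_r = \operatorname{tr}\left( \hat{\Omega}^{-1} e' W (W'W)^{-1} W' e \right)$ derived in Section 4 below, with $\hat{\Omega}$ computed from the unrestricted regression of $Y$ on $W$.

```python
import numpy as np

def gmm_criterion(Y, X, Z, alpha, beta, Psi):
    """Evaluate the homoscedastic GMM criterion J_r at (alpha, beta, Psi).

    A sketch: Y (T x p), X (T x m), Z (T x q); alpha (p x r), beta (m x r),
    Psi (p x q). Uses J_r = tr(Omega^{-1} E'W (W'W)^{-1} W'E), where Omega
    is the residual covariance from the unrestricted regression of Y on W.
    """
    T = Y.shape[0]
    W = np.hstack([X, Z])
    PW = W @ np.linalg.solve(W.T @ W, W.T)        # projection onto W = (X, Z)
    Omega_hat = (Y.T @ (Y - PW @ Y)) / T          # unrestricted residual covariance
    E = Y - X @ beta @ alpha.T - Z @ Psi.T        # restricted residuals
    return np.trace(np.linalg.solve(Omega_hat, E.T @ PW @ E))
```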
The main contribution of the paper is the following surprising result.
Theorem 1.
$$\left( \hat{\alpha}_{\mathrm{gmm}}, \hat{\beta}_{\mathrm{gmm}}, \hat{\Psi}_{\mathrm{gmm}} \right) = \left( \hat{\alpha}_{\mathrm{mle}}, \hat{\beta}_{\mathrm{mle}}, \hat{\Psi}_{\mathrm{mle}} \right).$$
Theorem 2.
$$J_r\left( \hat{\alpha}_{\mathrm{gmm}}, \hat{\beta}_{\mathrm{gmm}}, \hat{\Psi}_{\mathrm{gmm}} \right) = \operatorname{tr}\left( \hat{\Omega}^{-1} \tilde{Y}'\tilde{Y} \right) - Tp - T \sum_{i=1}^{r} \frac{\hat{\lambda}_i}{1 - \hat{\lambda}_i},$$
where $\hat{\lambda}_i$ are the eigenvalues from (6).
Theorem 1 states that the GMM estimator is algebraically identical to the Gaussian maximum likelihood estimator.
This shows that Johansen’s reduced-rank regression estimator is not tied to the normality assumption. This is similar to the equivalence of least-squares as a method of moments estimator and the Gaussian MLE in the regression context.
The key is the use of the homoscedastic weight matrix. This shows that the Johansen reduced-rank estimator is an efficient GMM estimator under conditional homoscedasticity. When homoscedasticity fails, the Johansen reduced-rank estimator continues to be a GMM estimator but is no longer the efficient GMM estimator.
It is important to understand that Theorem 1 is different from the trivial statement that the MLE is GMM applied to the first-order conditions of the likelihood (e.g., Hall (2005), Section 3.8.1). Taking the derivatives of the Gaussian log-likelihood (5), treating them as moment conditions, and solving does yield a GMM estimator, so in this narrow sense any MLE can be interpreted as GMM. That is not what Theorem 1 states.
GMM hypothesis tests can be constructed from the difference in GMM criteria. Here, tests for reduced rank are considered, which in the context of the VECM are tests for cointegration rank. The model
$$Y_t = \Pi X_t + \Psi Z_t + e_t$$
is taken and the following hypotheses on reduced rank are considered:
$$H_r : \operatorname{rank}\left( \Pi \right) = r.$$
The GMM test statistic for $H_r$ against $H_{r+1}$ is
$$C_{r,r+1} = \min_{\beta' \tilde{X}'\tilde{X} \beta = I_r} J_r(\alpha, \beta, \Psi) - \min_{\beta' \tilde{X}'\tilde{X} \beta = I_{r+1}} J_{r+1}(\alpha, \beta, \Psi).$$
The GMM test statistic for $H_r$ against $H_p$ is
$$C_{r,p} = \min_{\beta' \tilde{X}'\tilde{X} \beta = I_r} J_r(\alpha, \beta, \Psi) - \min_{\beta' \tilde{X}'\tilde{X} \beta = I_p} J_p(\alpha, \beta, \Psi).$$
Theorem 3.
The GMM test statistics for reduced rank are
$$C_{r,r+1} = T \frac{\hat{\lambda}_{r+1}}{1 - \hat{\lambda}_{r+1}}, \qquad C_{r,p} = T \sum_{i=r+1}^{p} \frac{\hat{\lambda}_i}{1 - \hat{\lambda}_i},$$
where $\hat{\lambda}_i$ are the eigenvalues from (6).
In contrast, recall that the likelihood ratio test statistics derived by Johansen are
$$LR_{r,r+1} = -T \log\left( 1 - \hat{\lambda}_{r+1} \right), \qquad LR_{r,p} = -T \sum_{i=r+1}^{p} \log\left( 1 - \hat{\lambda}_i \right).$$
The GMM test statistic $C_{r,r+1}$ and the likelihood ratio (LR) statistic $LR_{r,r+1}$ yield equivalent tests, as they are monotonic functions of one another. (If the bootstrap is used to assess significance, the two statistics will yield numerically identical p-values.) They are asymptotically identical under standard approximations and in practice will be nearly identical, because the eigenvalues $\hat{\lambda}_i$ tend to be quite small in value (at least under the null hypothesis), so that $-\log(1 - \lambda) \approx \lambda/(1 - \lambda) \approx \lambda$. For $p - (r + 1) > 1$, the GMM test statistic $C_{r,p}$ and the LR statistic $LR_{r,p}$ do not provide equivalent tests (they cannot be written as monotonic functions of one another), but they are asymptotically equivalent and will be nearly identical in practice.
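The near-equivalence is easy to see numerically. The following sketch (the helper name rank_test_stats is an assumption) computes both families of statistics from the same eigenvalues; for the small eigenvalues typical under the null, the paired statistics are close.

```python
import numpy as np

def rank_test_stats(lam, T, r):
    """Rank test statistics for H_r from the eigenvalues of (6) (a sketch).

    lam: the p generalized eigenvalues, sorted in decreasing order; T: sample
    size. Returns the GMM statistics C_{r,r+1} and C_{r,p} and Johansen's
    LR statistics LR_{r,r+1} and LR_{r,p}.
    """
    lam = np.asarray(lam)
    C_max    = T * lam[r] / (1.0 - lam[r])              # C_{r,r+1}
    C_trace  = T * np.sum(lam[r:] / (1.0 - lam[r:]))    # C_{r,p}
    LR_max   = -T * np.log(1.0 - lam[r])                # LR_{r,r+1}
    LR_trace = -T * np.sum(np.log(1.0 - lam[r:]))       # LR_{r,p}
    return C_max, C_trace, LR_max, LR_trace
```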
An interesting connection noted by a referee is that the statistic $C_{r,p}$ was proposed by Pillai (1955); see also Muirhead (1982, Section 11.2.8).

4. Derivation of the GMM Estimator

It is convenient to rewrite the criterion in standard matrix notation, defining the matrices Y, X, Z, and W by stacking the observations. Model (2) is
$$Y = X \beta \alpha' + Z \Psi' + e.$$
The moment (9) is
$$\bar{g}_r(\alpha, \beta, \Psi) = \frac{1}{T} \operatorname{vec}\left( W' \left( Y - X \beta \alpha' - Z \Psi' \right) \right).$$
Using the relation
$$\operatorname{tr}\left( A B C D \right) = \operatorname{vec}\left( D' \right)' \left( C' \otimes A \right) \operatorname{vec}\left( B \right),$$
the following is obtained:
$$\begin{aligned} J_r(\alpha, \beta, \Psi) &= T \, \bar{g}_r(\alpha, \beta, \Psi)' \left( \hat{\Omega}^{-1} \otimes \hat{Q}^{-1} \right) \bar{g}_r(\alpha, \beta, \Psi) \\ &= \operatorname{vec}\left( W' \left( Y - X\beta\alpha' - Z\Psi' \right) \right)' \left( \hat{\Omega}^{-1} \otimes \left( W'W \right)^{-1} \right) \operatorname{vec}\left( W' \left( Y - X\beta\alpha' - Z\Psi' \right) \right) \\ &= \operatorname{tr}\left( \hat{\Omega}^{-1} \left( Y - X\beta\alpha' - Z\Psi' \right)' W \left( W'W \right)^{-1} W' \left( Y - X\beta\alpha' - Z\Psi' \right) \right). \end{aligned}$$
Following the concentration strategy used by Johansen, β is fixed and α and Ψ are concentrated out, producing a concentrated criterion that is a function of β only. The system is linear in the regressors $X\beta$ and $Z$. Given the homoscedastic weight matrix, the GMM estimator of (α, Ψ) is multivariate least-squares. Using the partialling out (residual regression) approach, the least-squares residual can be written as the residual from the regression of $\tilde{Y}$ on $\tilde{X}\beta$, where $\tilde{Y} = M_Z Y$ and $\tilde{X} = M_Z X$ are the residuals from regressions on $Z$. That is, the least-squares residual is
$$\hat{e}(\beta) = \tilde{Y} - \tilde{X}\beta \left( \beta' \tilde{X}'\tilde{X} \beta \right)^{-1} \beta' \tilde{X}'\tilde{Y} = \tilde{Y} - \tilde{X}\beta \beta' \tilde{X}'\tilde{Y},$$
where the second equality uses the normalization $\beta' \tilde{X}'\tilde{X} \beta = I_r$. Because the space spanned by $W = (X, Z)$ equals that spanned by $(\tilde{X}, Z)$, the following can be written:
$$W \left( W'W \right)^{-1} W' = Z \left( Z'Z \right)^{-1} Z' + \tilde{X} \left( \tilde{X}'\tilde{X} \right)^{-1} \tilde{X}'.$$
Because $Z' \hat{e}(\beta) = 0$, it follows that
$$W \left( W'W \right)^{-1} W' \hat{e}(\beta) = \tilde{X} \left( \tilde{X}'\tilde{X} \right)^{-1} \tilde{X}' \hat{e}(\beta) = \tilde{X} \left( \tilde{X}'\tilde{X} \right)^{-1} \tilde{X}'\tilde{Y} - \tilde{X}\beta \beta' \tilde{X}'\tilde{Y}$$
and
$$\hat{e}(\beta)' W \left( W'W \right)^{-1} W' \hat{e}(\beta) = \tilde{Y}'\tilde{X} \left( \tilde{X}'\tilde{X} \right)^{-1} \tilde{X}'\tilde{Y} - \tilde{Y}'\tilde{X}\beta \beta' \tilde{X}'\tilde{Y} = \tilde{Y}'\tilde{Y} - \tilde{Y}' M_{\tilde{X}} \tilde{Y} - \tilde{Y}'\tilde{X}\beta \beta' \tilde{X}'\tilde{Y},$$
where
$$M_{\tilde{X}} = I - \tilde{X} \left( \tilde{X}'\tilde{X} \right)^{-1} \tilde{X}'.$$
Using the partialling out (residual regression) approach, the variance estimator (10) can be written as
$$\hat{\Omega} = \frac{1}{T} Y' \left( I - W \left( W'W \right)^{-1} W' \right) Y = \frac{1}{T} \tilde{Y}' M_{\tilde{X}} \tilde{Y}.$$
Thus the concentrated GMM criterion is
$$\begin{aligned} J_r^{*}(\beta) &= \operatorname{tr}\left( \hat{\Omega}^{-1} \hat{e}(\beta)' W \left( W'W \right)^{-1} W' \hat{e}(\beta) \right) \\ &= \operatorname{tr}\left( \hat{\Omega}^{-1} \tilde{Y}'\tilde{Y} \right) - \operatorname{tr}\left( \hat{\Omega}^{-1} \tilde{Y}' M_{\tilde{X}} \tilde{Y} \right) - \operatorname{tr}\left( \hat{\Omega}^{-1} \tilde{Y}'\tilde{X}\beta \beta' \tilde{X}'\tilde{Y} \right) \\ &= \operatorname{tr}\left( \hat{\Omega}^{-1} \tilde{Y}'\tilde{Y} \right) - Tp - T \operatorname{tr}\left( \beta' \tilde{X}'\tilde{Y} \left( \tilde{Y}' M_{\tilde{X}} \tilde{Y} \right)^{-1} \tilde{Y}'\tilde{X}\beta \right). \end{aligned} \qquad (11)$$
The GMM estimator minimizes $J_r^{*}(\beta)$ or, equivalently, maximizes the third term in (11). This is a generalized eigenvalue problem. Lemma 2 (in the next section), applied with $X = \tilde{X}$ and $Y = \tilde{Y}$, shows that the solution is $\hat{\beta}_{\mathrm{gmm}} = \left( \hat{\nu}_1, \ldots, \hat{\nu}_r \right)$, as claimed.
Because the estimates $\hat{\alpha}_{\mathrm{gmm}}$ and $\hat{\Psi}_{\mathrm{gmm}}$ are found by least-squares regression given $\hat{\beta}_{\mathrm{gmm}}$, and because the same is true of the MLE, it also follows that $\hat{\alpha}_{\mathrm{gmm}} = \hat{\alpha}_{\mathrm{mle}}$ and $\hat{\Psi}_{\mathrm{gmm}} = \hat{\Psi}_{\mathrm{mle}}$. This completes the proof of Theorem 1.
To establish Theorem 2, note that Lemma 2 also shows that the minimized value of the criterion is
$$\begin{aligned} J_r\left( \hat{\alpha}_{\mathrm{gmm}}, \hat{\beta}_{\mathrm{gmm}}, \hat{\Psi}_{\mathrm{gmm}} \right) &= \min_{\beta' \tilde{X}'\tilde{X} \beta = I_r} J_r(\alpha, \beta, \Psi) = \min_{\beta' \tilde{X}'\tilde{X} \beta = I_r} J_r^{*}(\beta) \\ &= \operatorname{tr}\left( \hat{\Omega}^{-1} \tilde{Y}'\tilde{Y} \right) - Tp - T \max_{\beta' \tilde{X}'\tilde{X} \beta = I_r} \operatorname{tr}\left( \beta' \tilde{X}'\tilde{Y} \left( \tilde{Y}' M_{\tilde{X}} \tilde{Y} \right)^{-1} \tilde{Y}'\tilde{X}\beta \right) \\ &= \operatorname{tr}\left( \hat{\Omega}^{-1} \tilde{Y}'\tilde{Y} \right) - Tp - T \sum_{i=1}^{r} \frac{\hat{\lambda}_i}{1 - \hat{\lambda}_i}. \end{aligned}$$
This establishes Theorem 2.
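As a sanity check, the identity in Theorem 2 can be verified numerically using the johansen_rrr and gmm_criterion sketches introduced above (the simulated data and helper names are assumptions for illustration; the identity itself is algebraic, so any data set will do).

```python
import numpy as np

# Numerical verification of Theorem 2 on simulated data (a sketch that
# reuses the johansen_rrr and gmm_criterion helpers sketched above).
rng = np.random.default_rng(0)
T, p, q, r = 500, 3, 2, 1
Z = np.hstack([np.ones((T, 1)), rng.standard_normal((T, q - 1))])
X = rng.standard_normal((T, p))
alpha0, beta0 = rng.standard_normal((p, r)), rng.standard_normal((p, r))
Psi0 = rng.standard_normal((p, q))
Y = X @ beta0 @ alpha0.T + Z @ Psi0.T + rng.standard_normal((T, p))

beta_hat, lam = johansen_rrr(Y, X, Z, r)
# alpha-hat and Psi-hat by least squares of Y on (X beta_hat, Z)
R = np.hstack([X @ beta_hat, Z])
coef = np.linalg.solve(R.T @ R, R.T @ Y)
alpha_hat, Psi_hat = coef[:r].T, coef[r:].T

# Minimized criterion versus the closed form of Theorem 2
J_min = gmm_criterion(Y, X, Z, alpha_hat, beta_hat, Psi_hat)
W = np.hstack([X, Z])
PW = W @ np.linalg.solve(W.T @ W, W.T)
Omega_hat = (Y.T @ (Y - PW @ Y)) / T                    # unrestricted covariance
Yt = Y - Z @ np.linalg.solve(Z.T @ Z, Z.T @ Y)          # Y-tilde = M_Z Y
closed_form = (np.trace(np.linalg.solve(Omega_hat, Yt.T @ Yt))
               - T * p - T * np.sum(lam[:r] / (1 - lam[:r])))
print(np.isclose(J_min, closed_form))                   # True
```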

5. Extrema of Quadratic Forms

To establish Theorems 1 and 2, two simple extremum properties are needed. The first relates the maximization of the trace of a quadratic form to generalized eigenvalues and eigenvectors; it is a slight extension of Theorem 11.13 of Magnus and Neudecker (1988).
Lemma 1.
Suppose A and C are $p \times p$ real symmetric matrices with $C > 0$. Let $\lambda_1 > \cdots > \lambda_p > 0$ be the generalized eigenvalues of A with respect to C and $\nu_1, \ldots, \nu_p$ be the associated eigenvectors. Then
$$\max_{\beta' C \beta = I_r} \operatorname{tr}\left( \beta' A \beta \right) = \sum_{i=1}^{r} \lambda_i$$
and
$$\underset{\beta' C \beta = I_r}{\operatorname{argmax}} \operatorname{tr}\left( \beta' A \beta \right) = \left( \nu_1, \ldots, \nu_r \right).$$
Proof. 
Define $\gamma = C^{1/2} \beta$ and $\bar{A} = C^{-1/2} A C^{-1/2}$. The eigenvalues of $\bar{A}$ are equal to the generalized eigenvalues $\lambda_i$ of A with respect to C. The associated eigenvectors of $\bar{A}$ are $C^{1/2} \nu_i$. Thus by Theorem 11.13 of Magnus and Neudecker (1988),
$$\max_{\beta' C \beta = I_r} \operatorname{tr}\left( \beta' A \beta \right) = \max_{\gamma'\gamma = I_r} \operatorname{tr}\left( \gamma' \bar{A} \gamma \right) = \sum_{i=1}^{r} \lambda_i$$
and
$$\underset{\beta' C \beta = I_r}{\operatorname{argmax}} \operatorname{tr}\left( \beta' A \beta \right) = C^{-1/2} \underset{\gamma'\gamma = I_r}{\operatorname{argmax}} \operatorname{tr}\left( \gamma' \bar{A} \gamma \right) = C^{-1/2} \left( C^{1/2} \nu_1, \ldots, C^{1/2} \nu_r \right) = \left( \nu_1, \ldots, \nu_r \right)$$
as claimed. ☐
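A small numerical illustration of Lemma 1 on simulated matrices (a sketch; scipy.linalg.eigh is used for the generalized eigenproblem and the variable names are arbitrary):

```python
import numpy as np
from scipy.linalg import eigh

# Sketch: check that the maximum of tr(beta' A beta) subject to
# beta' C beta = I_r equals the sum of the r largest generalized
# eigenvalues, attained at the associated eigenvectors (Lemma 1).
rng = np.random.default_rng(0)
p, r = 5, 2
G = rng.standard_normal((p, p)); A = G @ G.T                   # symmetric A
H = rng.standard_normal((p, p)); C = H @ H.T + p * np.eye(p)   # C > 0
lam, V = eigh(A, C)                      # ascending eigenvalues, V' C V = I
lam, V = lam[::-1], V[:, ::-1]           # sort in decreasing order
beta_max = V[:, :r]                      # claimed maximizer (nu_1, ..., nu_r)
print(np.isclose(np.trace(beta_max.T @ A @ beta_max), lam[:r].sum()))  # True
```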
Lemma 2.
Let $M_X = I - X \left( X'X \right)^{-1} X'$. If $X'X > 0$ and $Y' M_X Y > 0$, then
$$\max_{\beta' X'X \beta = I_r} \operatorname{tr}\left( \beta' X'Y \left( Y' M_X Y \right)^{-1} Y'X \beta \right) = \sum_{i=1}^{r} \frac{\lambda_i}{1 - \lambda_i}$$
and
$$\underset{\beta' X'X \beta = I_r}{\operatorname{argmax}} \operatorname{tr}\left( \beta' X'Y \left( Y' M_X Y \right)^{-1} Y'X \beta \right) = \left( \nu_1, \ldots, \nu_r \right),$$
where $1 > \lambda_1 > \cdots > \lambda_p > 0$ are the generalized eigenvalues of $X'Y \left( Y'Y \right)^{-1} Y'X$ with respect to $X'X$, and $\nu_1, \ldots, \nu_p$ are the associated eigenvectors.
Proof. 
By Lemma 1,
$$\max_{\beta' X'X \beta = I_r} \operatorname{tr}\left( \beta' X'Y \left( Y' M_X Y \right)^{-1} Y'X \beta \right) = \sum_{i=1}^{r} \tilde{\lambda}_i$$
and
$$\underset{\beta' X'X \beta = I_r}{\operatorname{argmax}} \operatorname{tr}\left( \beta' X'Y \left( Y' M_X Y \right)^{-1} Y'X \beta \right) = \left( \tilde{\nu}_1, \ldots, \tilde{\nu}_r \right),$$
where $\tilde{\lambda}_1 > \cdots > \tilde{\lambda}_p > 0$ are the generalized eigenvalues of $X'Y \left( Y' M_X Y \right)^{-1} Y'X$ with respect to $X'X$ and $\tilde{\nu}_1, \ldots, \tilde{\nu}_p$ are the associated eigenvectors. The proof is established by showing that $\tilde{\lambda}_i = \lambda_i / (1 - \lambda_i)$ and $\tilde{\nu}_i = \nu_i$.
Let $(\tilde{\nu}, \tilde{\lambda})$ be a generalized eigenvector/eigenvalue pair of $X'Y \left( Y' M_X Y \right)^{-1} Y'X$ with respect to $X'X$. The pair satisfies
$$X'Y \left( Y' M_X Y \right)^{-1} Y'X \tilde{\nu} = X'X \tilde{\nu} \tilde{\lambda}. \qquad (12)$$
By the Woodbury matrix identity (e.g., Magnus and Neudecker (1988), Equation (7)),
$$\begin{aligned} \left( Y' M_X Y \right)^{-1} &= \left( Y'Y - Y'X \left( X'X \right)^{-1} X'Y \right)^{-1} \\ &= \left( Y'Y \right)^{-1} + \left( Y'Y \right)^{-1} Y'X \left( X'X - X'Y \left( Y'Y \right)^{-1} Y'X \right)^{-1} X'Y \left( Y'Y \right)^{-1} \\ &= \left( Y'Y \right)^{-1} + \left( Y'Y \right)^{-1} Y'X \left( X' M_Y X \right)^{-1} X'Y \left( Y'Y \right)^{-1}, \end{aligned}$$
where $M_Y = I - Y \left( Y'Y \right)^{-1} Y'$. Thus
$$\begin{aligned} X'Y \left( Y' M_X Y \right)^{-1} Y'X &= X'Y \left( Y'Y \right)^{-1} Y'X + X'Y \left( Y'Y \right)^{-1} Y'X \left( X' M_Y X \right)^{-1} X'Y \left( Y'Y \right)^{-1} Y'X \\ &= X' P_Y X + X' P_Y X \left( X' M_Y X \right)^{-1} X' P_Y X \\ &= X'X \left( X' M_Y X \right)^{-1} X' P_Y X, \end{aligned}$$
where $P_Y = Y \left( Y'Y \right)^{-1} Y'$ and the final equality uses $X' P_Y X = X'X - X' M_Y X$. Substituting into (12) produces
$$X'X \left( X' M_Y X \right)^{-1} X' P_Y X \tilde{\nu} = X'X \tilde{\nu} \tilde{\lambda}.$$
Multiplying both sides by $X' M_Y X \left( X'X \right)^{-1}$, this implies
$$X' P_Y X \tilde{\nu} = X' M_Y X \tilde{\nu} \tilde{\lambda} = X'X \tilde{\nu} \tilde{\lambda} - X' P_Y X \tilde{\nu} \tilde{\lambda}.$$
By collecting terms,
$$X' P_Y X \tilde{\nu} \left( 1 + \tilde{\lambda} \right) = X'X \tilde{\nu} \tilde{\lambda},$$
which implies
$$X' P_Y X \tilde{\nu} = X'X \tilde{\nu} \frac{\tilde{\lambda}}{1 + \tilde{\lambda}}.$$
This is an eigenvalue equation. It shows that $\tilde{\lambda} / (1 + \tilde{\lambda}) = \lambda$ is a generalized eigenvalue and $\tilde{\nu}$ is the associated eigenvector of $X' P_Y X$ with respect to $X'X$. Solving, $\tilde{\lambda} = \lambda / (1 - \lambda)$. This means that the generalized eigenvalues of $X'Y \left( Y' M_X Y \right)^{-1} Y'X$ with respect to $X'X$ are $\lambda_i / (1 - \lambda_i)$, with associated eigenvectors $\nu_i$. Because $\lambda / (1 - \lambda)$ is monotonically increasing on $[0, 1)$ and $\lambda_i < 1$, it follows that the orderings of $\lambda_i$ and $\tilde{\lambda}_i$ are identical. Thus $\tilde{\lambda}_i = \lambda_i / (1 - \lambda_i)$ and $\tilde{\nu}_i = \nu_i$, as claimed. ☐
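The eigenvalue map established in this proof is simple to confirm numerically (a sketch on simulated matrices; the variable names are arbitrary):

```python
import numpy as np
from scipy.linalg import eigh

# Sketch: the generalized eigenvalues of X'Y (Y'M_X Y)^{-1} Y'X with
# respect to X'X equal lambda/(1 - lambda), where lambda are those of
# X'Y (Y'Y)^{-1} Y'X with respect to X'X (the map in the proof of Lemma 2).
rng = np.random.default_rng(1)
T, p = 200, 4
Y = rng.standard_normal((T, p))
X = rng.standard_normal((T, p))
MXY = Y - X @ np.linalg.solve(X.T @ X, X.T @ Y)            # M_X Y
A1 = X.T @ Y @ np.linalg.solve(Y.T @ Y, Y.T @ X)           # X'Y (Y'Y)^{-1} Y'X
A2 = X.T @ Y @ np.linalg.solve(Y.T @ MXY, Y.T @ X)         # X'Y (Y'M_X Y)^{-1} Y'X
lam  = np.sort(eigh(A1, X.T @ X, eigvals_only=True))[::-1]
lamt = np.sort(eigh(A2, X.T @ X, eigvals_only=True))[::-1]
print(np.allclose(lamt, lam / (1 - lam)))                  # True
```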

Acknowledgments

This research is supported by the National Science Foundation and the Phipps Chair. Thanks to Richard Crump, the co-editors, and two referees for helpful comments on an earlier version. The author gives special thanks to Søren Johansen and Katarina Juselius for many years of stunning research, stimulating conversations, and impeccable scholarship.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Adrian, Tobias, Richard K. Crump, and Emanuel Moench. 2015. Regression-based estimation of dynamic asset pricing models. Journal of Financial Economics 118: 211–44. [Google Scholar] [CrossRef]
  2. Anderson, Theodore Wilbur. 1951. Estimating linear restrictions on regression coefficients for multivariate normal distributions. Annals of Mathematical Statistics 22: 327–50. [Google Scholar] [CrossRef]
  3. Anderson, Theodore Wilbur, and Herman Rubin. 1949. Estimation of the parameters of a single equation in a complete system of stochastic equations. Annals of Mathematical Statistics 20: 46–63. [Google Scholar] [CrossRef]
  4. Anderson, Theodore Wilbur, and Herman Rubin. 1950. The asymptotic properties of estimates of the parameters of a single equation in a complete system of stochastic equations. Annals of Mathematical Statistics 21: 570–82. [Google Scholar] [CrossRef]
  5. Bartlett, Maurice S. 1938. Further aspects of the theory of multiple regression. Proceedings of the Cambridge Philosophical Society 34: 33–40. [Google Scholar] [CrossRef]
  6. Engle, Robert F., and Clive W. J. Granger. 1987. Co-integration and error correction: Representation, estimation, and testing. Econometrica 55: 251–76. [Google Scholar] [CrossRef]
  7. Goldberger, Arthur S., and Ingram Olkin. 1971. A minimum-distance interpretation of limited-information estimation. Econometrica 39: 635–49. [Google Scholar] [CrossRef]
  8. Hall, Alastair R. 2005. Generalized Method of Moments. Oxford: Oxford University Press. [Google Scholar]
  9. Hansen, Lars Peter. 1982. Large sample properties of generalized method of moments estimators. Econometrica 50: 1029–54. [Google Scholar] [CrossRef]
  10. Hotelling, Harold. 1936. Relations between two sets of variates. Biometrika 28: 321–77. [Google Scholar] [CrossRef]
  11. Johansen, Søren. 1988. Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control 12: 231–54. [Google Scholar] [CrossRef]
  12. Johansen, Søren. 1991. Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica 59: 1551–80. [Google Scholar] [CrossRef]
  13. Johansen, Søren. 1995. Likelihood-Based Inference in Cointegrated Vector Auto-Regressive Models. Oxford: Oxford University Press. [Google Scholar]
  14. Kolesár, Michal. 2018. Minimum distance approach to inference with many instruments. Journal of Econometrics 204: 86–100. [Google Scholar] [CrossRef]
  15. Magnus, Jan R., and Heinz Neudecker. 1988. Matrix Differential Calculus with Applications in Statistics and Econometrics. New York: Wiley. [Google Scholar]
  16. Muirhead, Robb J. 1982. Aspects of Multivariate Statistical Theory. New York: Wiley. [Google Scholar]
  17. Pillai, K. C. S. 1955. Some new test criteria in multivariate analysis. The Annals of Mathematical Statistics 26: 117–21. [Google Scholar] [CrossRef]
  18. Sims, Christopher A. 1980. Macroeconomics and reality. Econometrica 48: 1–8. [Google Scholar] [CrossRef]
  19. Tso, M. K.-S. 1981. Reduced-rank regression and canonical analysis. Journal of the Royal Statistical Society, Series B 43: 183–89. [Google Scholar]
