
State-Space Models on the Stiefel Manifold with a New Approach to Nonlinear Filtering

1 Department of Statistics, Uppsala University, P.O. Box 513, SE-75120 Uppsala, Sweden
2 Center for Data Analytics, Stockholm School of Economics, SE-11383 Stockholm, Sweden
3 Center for Operations Research and Econometrics, Université Catholique de Louvain, B-1348 Louvain-la-Neuve, Belgium
* Author to whom correspondence should be addressed.
Econometrics 2018, 6(4), 48; https://doi.org/10.3390/econometrics6040048
Submission received: 30 July 2018 / Revised: 25 November 2018 / Accepted: 10 December 2018 / Published: 12 December 2018
(This article belongs to the Special Issue Filtering)

Abstract
We develop novel multivariate state-space models wherein the latent states evolve on the Stiefel manifold and follow a conditional matrix Langevin distribution. The latent states correspond to time-varying reduced rank parameter matrices, like the loadings in dynamic factor models and the parameters of cointegrating relations in vector error-correction models. The corresponding nonlinear filtering algorithms are developed and evaluated by means of simulation experiments.

1. Introduction

The coefficient matrix of the explanatory variables in a multivariate time series model can be rank deficient as a consequence of modelling assumptions, and the constancy over time of this rank-deficient matrix may be questionable. This happens, for example, in factor models, which construct a few factors from a large number of macroeconomic and financial predictors, while the factor loadings are suspected to be time-varying. Stock and Watson (2002) state that it is reasonable to suspect temporal instability in the factor loadings, and later Stock and Watson (2009) and Breitung and Eickmeier (2011) find empirical evidence of such instability. Another setting where instability may arise is in cointegrating relations (see e.g., Bierens and Martins (2010)), hence in the reduced rank cointegrating parameter matrix of a vector error-correction model.
The literature offers solutions for modelling the temporal instability of reduced rank parameter matrices. Such parameters are typically regarded as unobserved random components and in most cases are modelled as random walks on a Euclidean space; see, for example, Del Negro and Otrok (2008) and Eickmeier et al. (2014). In these works, the noise component of the latent process (the factor loadings) is assumed to have a diagonal covariance matrix in order to alleviate the computational complexity and make the estimation feasible, especially when the dimension of the system is high. However, the random walk assumption on the Euclidean space cannot guarantee the orthonormality of the factor loading (or cointegration) matrix, although it is this type of assumption that identifies the loading (or cointegration) space. Hence, other identification restrictions are needed on the Euclidean space. Moreover, the diagonality of the error covariance matrix of the latent process is not invariant to a permutation of the variables, and can thus be self-contradictory.
In this work, we develop new state-space models on the Stiefel manifold, which do not suffer from these problems on the Euclidean space. It is noteworthy that Chikuse (2006) also develops state-space models on the Stiefel manifold. The key difference between Chikuse (2006) and our work is that we keep the measurement equation of the observable variables on the Euclidean space, while Chikuse (2006) puts the observables on the Stiefel manifold as well, which is not relevant for modelling economic time series. By specifying the time-varying reduced rank parameter matrices on the Stiefel manifold, their orthonormality holds by construction, and therefore their identification is guaranteed.
The corresponding recursive nonlinear filtering algorithms are developed to estimate the a posteriori distributions of the latent processes of the reduced rank matrices. By using the matrix Langevin distribution for the a priori distributions of the latent processes, conjugate a posteriori distributions are obtained, which is very convenient for the computational implementation of the filtering algorithms. The prediction step of the filtering requires solving an integral on the Stiefel manifold that has no closed form. To compute this integral, we resort to a Laplace approximation.
The paper is organized as follows. Section 2 introduces the general framework of the vector models with time-varying reduced rank parameters. Two specific forms of the time-varying reduced rank parameters, on which the paper focuses, are given. Section 3 discusses some problems in the literature on modelling the time dependence of the time-varying reduced rank parameters; these problems underlie our modelling choices. Then, in Section 4, we present the novel state-space models on the Stiefel manifold. Section 5 presents the nonlinear filtering algorithms that we develop for the new state-space models. Section 6 presents several simulation based examples. Finally, Section 7 concludes and gives possible research extensions.

2. Vector Models with Time-Varying Reduced Rank Parameters

Consider the multivariate time series model with partly time-varying parameters
$$y_t = A_t x_t + B z_t + \varepsilon_t, \quad t = 1, \ldots, T, \tag{1}$$
where $y_t$ is a (column) vector of dependent variables of dimension p, $x_t$ and $z_t$ are vectors of explanatory variables of dimensions $q_1$ and $q_2$, $A_t$ and $B$ are $p \times q_1$ and $p \times q_2$ matrices of parameters, and $\varepsilon_t$ is a vector belonging to a white noise process of dimension p, with positive-definite covariance matrix $\Omega$. For quasi-maximum likelihood estimation, we further assume that $\varepsilon_t \sim N_p(0, \Omega)$.
The distinction between $x_t$ and $z_t$ is introduced to separate the explanatory variables with time-varying coefficients ($A_t$) from those with fixed coefficients ($B$). In the sequel, we always consider that $x_t$ is not void (i.e., $q_1 > 0$). The explanatory variables may contain lags of $y_t$, and the remaining stochastic elements (if any) of these vectors are assumed to be weakly exogenous. Equation (1) provides a general linear framework for modelling time-series observations with time-varying parameters, embedding multivariate regressions and vector autoregressions. For an exposition of the treatment of such a model using the Kalman filter, we refer to Chapter 13 of Hamilton (1994).
We assume furthermore that the time-varying parameter matrix $A_t$ has reduced rank $r < \min(p, q_1)$. This assumption can be formalized by decomposing $A_t$ as $\alpha_t \beta_t'$, where $\alpha_t$ and $\beta_t$ are $p \times r$ and $q_1 \times r$ full rank matrices, respectively. If we allow both $\alpha_t$ and $\beta_t$ to be time-varying, the model is poorly focused, hard to interpret, and very difficult to identify. Hence, we focus on the cases where either $\alpha_t$ or $\beta_t$ is time-varying, that is, on the following two cases:
$$\text{Case 1}: \; A_t = \alpha_t \beta', \tag{2}$$
$$\text{Case 2}: \; A_t = \alpha \beta_t'. \tag{3}$$
Next, we explain how the two cases provide interesting alternatives for modelling different kinds of temporal instability in the parameters.
The case 1 model (Equations (1) and (2)) ensures that the subspace spanned by $\beta$ is constant over time. This specification can be viewed as a cointegration model allowing for time-varying short-run adjustment coefficients (the entries of $\alpha_t$) but with time-invariant long-run relations (the cointegrating subspace). To see this, consider that model (1) corresponds to the vector error-correction form of a cointegrated vector autoregressive model of order k with $X_t$ as the dependent variable, if $y_t = \Delta X_t$, $x_t = X_{t-1}$, and $z_t$ contains $\Delta X_{t-i}$ for $i = 1, \ldots, k-1$, as well as some predetermined variables. Several papers argue that the temporal instability of the parameters in both stationary and non-stationary macroeconomic data does exist and cannot be overlooked. For example, Swanson (1998) and Rothman et al. (2001) give convincing examples when investigating the Granger causal relationship between money and output using a nonlinear vector error-correction model. They model the instability in $\alpha$ by means of regime-switching mechanisms governed by some observable variable. An alternative to that modelling approach is to regard $\alpha_t$ as a totally latent process.
The case 1 model also includes as a particular case the factor model with time-varying factor loadings. In the factor model context, the factors $f_t$ are extracted from a number of observable predictors $x_t$ by using the r linear combinations $f_t = \beta' x_t$. Note that $f_t$ is latent since $\beta$ is unknown. Then, the corresponding factor model (neglecting the $B z_t$ term) takes the form
$$y_t = \alpha_t f_t + \varepsilon_t, \tag{4}$$
where $\alpha_t$ is a matrix of time-varying factor loadings. The representation is quite flexible in the sense that $y_t$ can be equal to $x_t$, in which case we reach exactly the representation of Stock and Watson (2002), but we also allow them to be distinct. In Stock and Watson (2002), the factor loading matrix $\alpha$ is time-invariant and identification is obtained by imposing the constraints $\alpha = q_1 \beta$ and $\alpha'\beta = \beta'\alpha = \alpha'\alpha/q_1 = q_1 \beta'\beta = I_r$. Notice that, if $\alpha$ is time-varying but $\beta$ time-invariant, these constraints cannot be imposed.
The case 2 model (Equations (1) and (3)) can be used to account for time-varying long-run relations in cointegrated time series, as $\beta_t$ is changing. Bierens and Martins (2010) show that this may be the case for the long-run purchasing power parity. In the case 2 model, there exist $p - r$ linearly independent vectors, collected in the $p \times (p - r)$ matrix $\alpha_\perp$, that span the left null space of $\alpha$, such that $\alpha_\perp' A_t = 0$. Therefore, the case 2 model implies that the time-varying parameter matrix $\beta_t$ vanishes in the structural vector model
$$\gamma' y_t = \gamma' B z_t + \gamma' \varepsilon_t, \tag{5}$$
for any column vector $\gamma \in \operatorname{sp}(\alpha_\perp)$, where $\operatorname{sp}(\alpha_\perp)$ denotes the space spanned by the columns of $\alpha_\perp$; the temporal instability can thus be removed in this way. Moreover, $x_t$ does not explain any variation of $\gamma' y_t$.
Another possible application of the case 2 model is instability in the factor composition. Considering the factor model $y_t = \alpha f_t + \varepsilon_t$, with time-invariant factor loadings $\alpha$, the factor composition may be slightly evolving through $\beta_t$ in $f_t = \beta_t' x_t$.
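As a minimal numerical illustration of case 1 (an R sketch with arbitrary dimensions and values; the orthonormalization of a Gaussian draw is just a convenient way to obtain a full rank $\alpha_t$), the following builds $A_t = \alpha_t\beta'$ and confirms that its rank is r:

```r
## Minimal sketch (arbitrary illustrative numbers): a case 1 coefficient matrix
## A_t = alpha_t %*% t(beta) with rank r < min(p, q1).
set.seed(1)
p <- 4; q1 <- 3; r <- 1

beta    <- matrix(1, q1, r) / sqrt(q1)            # time-invariant part
alpha_t <- qr.Q(qr(matrix(rnorm(p * r), p, r)))   # one draw of the time-varying part

A_t <- alpha_t %*% t(beta)   # p x q1 coefficient matrix
qr(A_t)$rank                 # equals r = 1 although A_t is 4 x 3
```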

3. Issues about the Specification of the Time-Varying Reduced Rank Parameter

In the previous section, we introduced two models with time-varying reduced rank parameters. In this section, in order to motivate our choices presented in Section 4, we discuss how the literature specifies the dynamic process governing the evolution of the time-varying parameters.
Since the sequences $\alpha_t$ or $\beta_t$ in the two cases are unobservable in practice, it is quite natural to write the two models in state-space form, with a measurement equation like (1) for the observable variables and transition equations for $\alpha_t$ or $\beta_t$. Building the time dependence into the sequences $\alpha_t$ or $\beta_t$ is of great practical interest, as it enables one to use the historical time series data for conditional forecasting, especially through the prevalent state-space approach. How to model the evolution of these time-varying parameters, nevertheless, is an open issue and needs careful investigation. Almost all the works in the time series literature hitherto deal only with state-space models on the Euclidean space; see, for example, the books by Hannan (1970); Anderson (1971); Koopman (1974); Durbin and Koopman (2012); and more recently Casals et al. (2016).
Consider, for example, the factor model (4) with time-varying factor loadings $\alpha_t$; the following discussion can easily be adapted to the cointegration model, where only $\beta_t$ is time-varying. The traditional state-space framework on the Euclidean space assumes that the elements of the time-varying matrix $\alpha_t$ evolve like random walks on the Euclidean space; see, for example, Del Negro and Otrok (2008) and Eickmeier et al. (2014). That is,
$$\operatorname{vec}(\alpha_{t+1}) = \operatorname{vec}(\alpha_t) + \eta_t, \tag{6}$$
where $\operatorname{vec}$ denotes the vectorization operator, and the sequence $\eta_t$ is assumed to be a Gaussian strong white noise process with constant positive definite covariance matrix $\Sigma_\eta$. Thus, Equations (1) and (6) form a vector state-space model, and the Kalman filter can be applied for estimating $\alpha_t$.
A first problem of the model (6) is that the latent random walk evolution on the Euclidean space behaves oddly in terms of the subspace it spans. Consider the special case $p = 2$ and $r = 1$: in Figure 1, points 1–3 are possible locations of the latent variable $\operatorname{vec}(\alpha_t) = (\alpha_{1t}, \alpha_{2t})'$. Suppose that the next state $\alpha_{t+1}$ evolves as in (6) with a diagonal covariance matrix $\Sigma_\eta$. The circles centered around points 1–3 are contour lines such that, say, almost all the probability mass lies inside the circles. The straight lines OA and OB are tangent to circle 1, with A and B the tangent points; the straight lines OC and OD are tangent to circle 2; and the straight lines OE and OF are tangent to circle 3. The angles between the tangent lines depend on the locations of points 1–3: generally, the more distant a point is from the origin, the smaller the corresponding angle (apart from some special elliptical contours). The plot shows that the distribution of the next subspace given the current point differs across subspaces (the angles for points 3 and 2 are smaller than the angle for point 1); even for the same subspace (points 2 and 3), the distribution of the next subspace differs (the angle for point 3 is smaller than the angle for point 2).
A second problem is identification. The pair $\alpha_t$ and $\beta$ should be identified before we can proceed with the estimation of (1) and (6). If both $\alpha$ and $\beta$ are time-invariant, it is common to assume the orthonormality (or asymptotic orthonormality) $\alpha'\alpha/q_1 = I_r$ or $\alpha'\alpha = I_r$ to identify the factors and then to estimate them by the principal components method. However, when $\alpha_t$ evolves as in (6), its orthonormality can never be guaranteed for all t on the Euclidean space.
An alternative solution to the identification problem is to normalize the time-invariant part $\beta$ as $(I_r, b')'$. The normalization is valid when the upper block of $\beta$ is invertible; if it is not, one can always permute the rows of $\beta$ to find an invertible $r \times r$ submatrix for such a normalization. The permutation can be performed by left-multiplying $\beta$ by a permutation matrix $P$ to make its upper block invertible. It should be noted that, in practice, the choice of the permutation matrix $P$ is usually arbitrary.
Even though the model defined by (1) and (6) is identified by some normalized $\beta$, if one does not impose any constraint on the elements of the positive definite covariance matrix $\Sigma_\eta$, the estimation can be very difficult due to computational complexity. A feasible solution is to assume that $\eta_t$ is cross-sectionally uncorrelated. This restriction reduces the number of parameters, alleviates the complexity of the model, and makes the estimation much more efficient, but it may be too strong and imposes a priori information on the data. A third problem then arises. In the following two propositions, we show that any design like (1) and (6) with the restriction that $\Sigma_\eta$ is diagonal is fragile, in the sense that it may lead to a contradiction because the normalization of $\beta$ is arbitrarily chosen.
Proposition 1.
Suppose that the reduced rank coefficient matrix $A_t$ in (1) with rank r has the decomposition (2). By choosing some permutation matrix $P_\beta$ ($q_1 \times q_1$), the time-invariant component $\beta$ can be linearly normalized if the $r \times r$ upper block $b_1$ in
$$P_\beta \beta = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} \tag{7}$$
is invertible. Then, the corresponding linear normalization is
$$\tilde{\beta} = P_\beta \beta b_1^{-1} = \begin{pmatrix} I_r \\ b_2 b_1^{-1} \end{pmatrix}, \tag{8}$$
and the time-varying component is re-identified as $\tilde{\alpha}_t = \alpha_t b_1'$.
Assume that the time-varying component evolves according to
$$\operatorname{vec}(\tilde{\alpha}_{t+1}) = \operatorname{vec}(\tilde{\alpha}_t) + \eta_t^\alpha. \tag{9}$$
Consider another permutation $P_\beta^* \neq P_\beta$ with the corresponding $\tilde{\alpha}_t^*$, $\tilde{\beta}^*$, $b_1^*$ and $\eta_t^{\alpha*}$. The variance–covariance matrices of $\eta_t^\alpha$ and $\eta_t^{\alpha*}$ are both diagonal if and only if $b_1 = b_1^*$.
Proof. 
See Appendix A. □
Proposition 2.
Suppose that the reduced rank coefficient matrix $A_t$ in (1) with rank r has the decomposition (3). By choosing some permutation matrix $P_\alpha$ ($p \times p$), the time-invariant component $\alpha$ can be linearly normalized if the $r \times r$ upper block $a_1$ in
$$P_\alpha \alpha = \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} \tag{10}$$
is invertible. The corresponding linear normalization is
$$\tilde{\alpha} = P_\alpha \alpha a_1^{-1} = \begin{pmatrix} I_r \\ a_2 a_1^{-1} \end{pmatrix}, \tag{11}$$
and the time-varying component is re-identified as $\tilde{\beta}_t = \beta_t a_1'$. Assume that the time-varying component evolves according to
$$\operatorname{vec}(\tilde{\beta}_{t+1}) = \operatorname{vec}(\tilde{\beta}_t) + \eta_t^\beta. \tag{12}$$
Consider another permutation $P_\alpha^* \neq P_\alpha$ with the corresponding $\tilde{\alpha}^*$, $\tilde{\beta}_t^*$, $a_1^*$ and $\eta_t^{\beta*}$. The variance–covariance matrices of $\eta_t^\beta$ and $\eta_t^{\beta*}$ are both diagonal if and only if $a_1 = a_1^*$.
Proof. 
See Appendix B. □
The two corollaries below follow immediately from Propositions 1 and 2, showing that the assumption that the variance–covariance matrix $\Sigma_\eta$ is diagonal for any linear normalization is inappropriate.
Corollary 1.
Given the setting of Proposition 1, the variance–covariance matrices of the error vectors in forms like (9) based on different linear normalizations cannot both be diagonal if $b_1 \neq b_1^*$, where $b_1$ and $b_1^*$ are the upper block square matrices in forms like (7).
Corollary 2.
Given the setting of Proposition 2, the variance–covariance matrices of the error vectors in forms like (12) based on different linear normalizations cannot both be diagonal if $a_1 \neq a_1^*$, where $a_1$ and $a_1^*$ are the upper block square matrices in forms like (10).
One may argue that there is a chance for the two covariance matrices to both be diagonal, i.e., when $b_1 = b_1^*$. It should be noticed that the condition $b_1 = b_1^*$ does not imply that $P_\beta = P_\beta^*$. Instead, it implies that the two permutation matrices move the same variables, in the same order, to the upper part of $\beta$. In that case, $P_\beta$ and $P_\beta^*$ are distinct but equivalent, as the order of the variables in the lower part is irrelevant for the linear normalization.
Since the choice of the permutation and the corresponding linear normalization is arbitrary in practice, being simply the order of the variables in $x_t$ (in $y_t$ for case 2), models with different permutations tell different stories about the data. In fact, the model is over-identified by the assumption that $\Sigma_\eta$ must be diagonal. Consequently, the model becomes normalization dependent, and the normalization of $\beta$ imposes additional information on the data. This can be serious when the forecasts from models with distinct normalizations give totally different results. A solution to this "unexpected" problem may be to try all possible normalizations and do model selection, that is, after estimating every possible model, pick the best one according to an information criterion. However, this solution is not always feasible, because the number of possible permutations, which is equal to $q_1(q_1-1)\cdots(q_1-r+1)$, can be huge. When the number of predictors is large, which is common in practice, the estimation of every possible model based on a different normalization becomes a very demanding task.
Stock and Watson (2002) propose the assumption that the cross-sectional dependence between the elements of $\eta_t$ is weak and that the variances of the elements shrink as the sample size increases. Then, the aforementioned problem may not be so serious, as, intuitively, different normalizations with a diagonal covariance matrix $\Sigma_\eta$ may produce approximately or asymptotically the same results.
We have shown that modelling the time-varying parameter matrix in (2) as a process like (6) on the Euclidean space involves several problems. Firstly, the evolution of the subspace spanned by the latent process on the Euclidean space is erratic. Secondly, the process does not comply with the orthonormality assumption that identifies the pair $\alpha_t$ and $\beta$, so a linear normalization is employed instead of the orthonormality. Thirdly, the state-space model on the Euclidean space suffers from the curse of dimensionality, and hence the diagonality of the covariance matrix of the errors is often combined with the linear normalization in order to alleviate the computational complexity when the dimension is high. This leads to two further problems: the diagonality assumption is inappropriate in the sense that different linear normalizations may lead to a contradiction, and the model selection over normalizations can be a tremendous task when there are many predictors.
In the following section, we propose that the time-varying parameter matrices α t and β t evolve on the Stiefel manifold, instead of the Euclidean space, and we show that the corresponding state-space models do not suffer from the aforementioned problems.

4. State-Space Models on the Stiefel Manifold

4.1. The Stiefel Manifold and the Matrix Langevin Distribution

Before presenting the state-space models on the Stiefel manifold, we introduce some concepts and terms. A set of b orthonormal vectors in $\mathbb{R}^a$ is called a b-frame in $\mathbb{R}^a$. The Stiefel manifold $V_{a,b}$, for dimensions a and b such that $a \geq b$, is a space whose points are b-frames in $\mathbb{R}^a$, that is, the collection of $a \times b$ full rank matrices $X$ such that $X'X = I_b$. If $b = 1$, the Stiefel manifold is the unit circle when $a = 2$, the unit sphere when $a = 3$, and a hypersphere when $a > 3$. The link with the modelling presented in Section 2 and developed in the next subsection is that the time-varying matrix $\alpha_t$ of (2) is assumed to evolve in $V_{p,r}$ (instead of a Euclidean space), and $\beta_t$ of (3) in $V_{q_1,r}$. Hence, each $\alpha_t$ and $\beta_t$ is by definition orthonormal.
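As a minimal illustration (not part of the models below), a b-frame can be produced in R by orthonormalizing a random full rank matrix and checking the defining property:

```r
## Sketch: a b-frame in R^a (a point of V_{a,b}) via QR orthonormalization of a
## random full rank matrix, and a check of the defining property X'X = I_b.
set.seed(1)
a <- 5; b <- 2
X <- qr.Q(qr(matrix(rnorm(a * b), a, b)))   # a x b matrix with orthonormal columns
round(crossprod(X), 12)                     # t(X) %*% X = I_b up to rounding error
```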
We also need to replace the assumption (6) that the distribution of $\operatorname{vec}(\alpha_{t+1})$ conditional on $\operatorname{vec}(\alpha_t)$ is $N_{p \times r}(\operatorname{vec}(\alpha_t), \Sigma_\eta)$ by an appropriate distribution defined on $V_{p,r}$, and likewise for $\operatorname{vec}(\beta_{t+1})$. A convenient distribution for this purpose is the matrix Langevin distribution (also known as the von Mises–Fisher distribution), denoted by $ML(a, b, F)$. A random matrix $X \in V_{a,b}$ follows a matrix Langevin distribution if and only if it has the probability density function
$$f_{ML}(X \mid a, b, F) = \frac{\operatorname{etr}\{F'X\}}{{}_0F_1\left(\frac{a}{2}; \frac{1}{4}F'F\right)}, \tag{13}$$
where $\operatorname{etr}\{Q\}$ stands for $\exp\{\operatorname{tr}\{Q\}\}$ for any square matrix $Q$, $F$ is an $a \times b$ matrix, and ${}_0F_1(\frac{a}{2}; \frac{1}{4}F'F)$ is called a $(0,1)$-type hypergeometric function with arguments $a/2$ and $F'F/4$. The hypergeometric function ${}_0F_1$ is unusual in that it takes a matrix argument, see Herz (1955); it is actually the normalizing constant of the density defined in (13), that is,
$${}_0F_1\left(\frac{a}{2}; \frac{1}{4}F'F\right) = \int_{V_{a,b}} \operatorname{etr}\{F'X\}\, [dX], \tag{14}$$
where $[dX] = \bigwedge_{j=1}^{a-b}\bigwedge_{i=1}^{b} x_{b+j}'\,dx_i \wedge \bigwedge_{i<j} x_j'\,dx_i$ stands for the differential form of a Haar measure on the Stiefel manifold, $x_i$ is the i-th column vector of $X$ (the vectors $x_{b+1}, \ldots, x_a$ complete the columns of $X$ to an orthonormal basis of $\mathbb{R}^a$), and $\wedge$ is the exterior product of vectors.
The density function (13) is obtained from the normal density of a random matrix $Z$ of dimension $a \times b$, defined by $\operatorname{vec}(Z) \sim N_{a \times b}(\operatorname{vec}(M), I_a \otimes \Sigma)$ (where $M$ is a matrix of dimension $a \times b$, and $\Sigma$ is a positive definite matrix of dimension $b \times b$), by imposing that $Z'Z = I_b$. The parameter $F$ of (13) is then equal to $M\Sigma^{-1}$.
The matrix $F$ has a singular value decomposition $UDV'$, where $U \in V_{a,b}$, $V$ is a $b \times b$ orthogonal matrix, and $D = \operatorname{diag}\{d_1, d_2, \ldots, d_b\}$ is a diagonal matrix with singular values $d_1 \geq d_2 \geq \cdots \geq d_b \geq 0$. Each pair of column vectors of $U$ and $V$ corresponds to a singular value in $D$. Notice that the hypergeometric function in (13) has the property that
$${}_0F_1\left(\frac{a}{2}; \frac{1}{4}F'F\right) = {}_0F_1\left(\frac{a}{2}; \frac{1}{4}D^2\right); \tag{15}$$
see Khatri and Mardia (1977).
It can be shown that the density function (13) attains its maximum value $\exp(\sum_{i=1}^b d_i)$ at $X_m = UV'$, called the modal orientation of the matrix Langevin distribution. The mode is unique if $\min(d_i) > 0$. The diagonal matrix $D$ is called the concentration, as it controls how tight the distribution is, in the following sense: the larger $d_i$, the tighter the distribution around the corresponding i-th column vector of the modal orientation matrix. For more details about the matrix Langevin distribution, see, for example, Prentice (1982); Chikuse (2003); Khatri and Mardia (1977); and Mardia (1975).
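Both quantities are directly computable from $F$. The following R sketch (with an arbitrary parameter matrix) recovers the modal orientation and the concentration from the SVD and checks numerically that the kernel $\operatorname{etr}\{F'X\}$ attains $\exp(\sum_{i=1}^b d_i)$ at $X_m$:

```r
## Sketch: recover the modal orientation X_m = U V' and the concentration D of an
## ML(a, b, F) distribution from the SVD of its parameter matrix (arbitrary numbers).
set.seed(1)
a <- 5; b <- 2
Fm <- matrix(rnorm(a * b), a, b)     # the parameter F (named Fm since F is reserved in R)

s   <- svd(Fm)                       # Fm = U D V'
D   <- diag(s$d, b, b)               # concentration, singular values in decreasing order
X_m <- s$u %*% t(s$v)                # modal orientation, a point of V_{a,b}

## the density kernel etr(F'X) attains exp(sum(d_i)) at X = X_m:
c(exp(sum(diag(t(Fm) %*% X_m))), exp(sum(s$d)))   # the two values coincide
```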
The density function (13) is rotationally symmetric around $X_m$, in the sense that the density at $H_1 X H_2'$ is the same as that at $X$ for all orthogonal matrices $H_1$ (of dimension $a \times a$) and $H_2$ (of dimension $b \times b$) such that $H_1 U = U$ and $H_2 V = V$ (hence $H_1 X_m H_2' = X_m$).
Figure 2 illustrates the Stiefel manifold $V_{2,1}$, and Figure 3 shows three (unnormalized) matrix Langevin densities $ML(2, 1, F)$ where $F = UDV' = (1/\sqrt{2}, 1/\sqrt{2})'D$, setting $V$ (a scalar) equal to 1, for three values of $D$ (a scalar); the smaller $D$, the flatter the density. Figure 2 shows the modal orientation $U = (1/\sqrt{2}, 1/\sqrt{2})'$ of the densities of Figure 3, as well as the point at which the density is minimal, namely $-U$. The densities are shown in Figure 3 as functions of the angle $\theta$ defined in Figure 2, for $\theta$ between 0 and $2\pi$, instead of being drawn as lines above the unit circle. Rotational symmetry in this example means that, if we premultiply the random vector $X$ by any orthogonal $2 \times 2$ matrix $H_1$ that does not modify the modal orientation, the densities are unchanged.
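The densities of Figure 3 are easy to reproduce numerically: for $a = 2$ and $b = 1$, the kernel $\operatorname{etr}\{F'x\}$ evaluated at $x(\theta) = (\cos\theta, \sin\theta)'$ equals $\exp\{d\, U'x(\theta)\}$. A minimal R sketch (the value $d = 5$ is illustrative):

```r
## Sketch reproducing the logic of Figure 3: for a = 2, b = 1 the kernel etr(F'x)
## evaluated at x(theta) = (cos(theta), sin(theta))' equals exp(d * U'x(theta)).
U <- c(1, 1) / sqrt(2)                     # modal orientation, V = 1
kernel <- function(theta, d) exp(d * as.vector(U %*% rbind(cos(theta), sin(theta))))

theta <- seq(0, 2 * pi, length.out = 200)
plot(theta, kernel(theta, d = 5), type = "l", xlab = "theta",
     ylab = "etr(F'x)")                    # peaked at theta = pi/4, flatter for smaller d
```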

4.2. Models

Chikuse (2006) develops a state-space model whose observable and latent variables both evolve on Stiefel manifolds. For economic data, it is not appropriate to assume that the observable variables evolve on a Stiefel manifold, so we keep the assumption that $y_t$ evolves on a Euclidean space in the measurement Equation (1).
We define two state space models corresponding to the case 1 and case 2 models introduced in Section 2, with latent processes evolving over the Stiefel manifold and following conditional matrix Langevin distributions:
$$\text{Model 1}: \; y_t = \alpha_t \beta' x_t + B z_t + \varepsilon_t, \qquad \alpha_{t+1} \mid \alpha_t \sim ML(p, r, a_t D V'), \tag{16}$$
$$\text{Model 2}: \; y_t = \alpha \beta_t' x_t + B z_t + \varepsilon_t, \qquad \beta_{t+1} \mid \beta_t \sim ML(q_1, r, b_t D V'), \tag{17}$$
with the constraints $a_t V' = \alpha_t$ and $b_t V' = \beta_t$, respectively. We assume in addition that the error $\varepsilon_t$ and $\alpha_{t+1}$ (or $\beta_{t+1}$) are mutually independent. The parameters of the ML distributions are chosen so that the previous state of $\alpha_t$ or $\beta_t$ is the modal orientation of the next state. Thus, the transitions of the latent processes are random walks on the Stiefel manifold, evolving in the matrix Langevin way.
The models (16) and (17) are not yet identified, because the pair formed by $a_t$ (or $b_t$) and the nuisance parameter $V$ can be chosen arbitrarily, and therefore the time-invariant $\beta$ and $\alpha$ are not identified either. The identification problem can be solved by imposing $V = I_r$. The identified version of the models is then
$$\text{Model 1}: \; y_t = \alpha_t \beta' x_t + B z_t + \varepsilon_t, \qquad \alpha_{t+1} \mid \alpha_t \sim ML(p, r, \alpha_t D), \tag{18}$$
$$\text{Model 2}: \; y_t = \alpha \beta_t' x_t + B z_t + \varepsilon_t, \qquad \beta_{t+1} \mid \beta_t \sim ML(q_1, r, \beta_t D). \tag{19}$$
The new state-space models (18) and (19) do not suffer from the problems mentioned in Section 3, since both $\alpha_t$ and $\beta_t$ are points of a Stiefel manifold. By construction, orthonormality is ensured: $\alpha_t'\alpha_t = I_r$ for Model 1 and, similarly, $\beta_t'\beta_t = I_r$ for Model 2. If the space spanned by the columns of $\alpha_t$ (or of $\beta_t$) is subjected to a rotation, the model is fundamentally unchanged. Indeed, in the case of Model 1, let $H$ be an orthogonal $p \times p$ matrix, and define the rotation $\tilde{\alpha}_t = H\alpha_t$. Then, $\tilde{\alpha}_t'\tilde{\alpha}_t = \alpha_t'H'H\alpha_t = \alpha_t'\alpha_t = I_r$. A similar reasoning holds for Model 2.
Simpler versions of the models (18) and (19) are obtained by assuming that the evolutions of $\alpha_t$ and $\beta_t$ are independent of their previous states, with the same modal orientations $\alpha_0$ and $\beta_0$ across time:
$$\text{Model 1}^*: \; y_t = \alpha_t \beta' x_t + B z_t + \varepsilon_t, \qquad \alpha_t \sim ML(p, r, \alpha_0 D), \tag{20}$$
$$\text{Model 2}^*: \; y_t = \alpha \beta_t' x_t + B z_t + \varepsilon_t, \qquad \beta_t \sim ML(q_1, r, \beta_0 D). \tag{21}$$
If we assume that the random variation of $\alpha_{t+1}$ in (18) or $\beta_{t+1}$ in (19) stays inside the subspace spanned by $\alpha_t$ or $\beta_t$ (hence by $\alpha_0$ or $\beta_0$), we obtain another two state-space models. The corresponding conditional distributions of $\alpha_{t+1}$ and $\beta_{t+1}$ become truncated matrix Langevin distributions with density functions
$$f(\alpha_{t+1} \mid \alpha_t) \propto \begin{cases} \operatorname{etr}\{D\alpha_t'\alpha_{t+1}\}, & \text{if } \operatorname{sp}(\alpha_{t+1}) = \operatorname{sp}(\alpha_t) = \operatorname{sp}(\alpha_0), \\ 0, & \text{otherwise}, \end{cases} \tag{22}$$
$$f(\beta_{t+1} \mid \beta_t) \propto \begin{cases} \operatorname{etr}\{D\beta_t'\beta_{t+1}\}, & \text{if } \operatorname{sp}(\beta_{t+1}) = \operatorname{sp}(\beta_t) = \operatorname{sp}(\beta_0), \\ 0, & \text{otherwise}. \end{cases} \tag{23}$$
These two models can be interesting if the spaces spanned by the time-varying $\alpha_t$ and $\beta_t$ are expected to be invariant over time.
Denote $\Delta = (\alpha_1, \ldots, \alpha_T)$ in Model 1 or $(\beta_1, \ldots, \beta_T)$ in Model 2; let $\mathcal{F}_{t-1} = (x_1, z_1, y_1, \ldots, y_{t-1}, x_t, z_t)$ represent all the observable information up to time $t-1$, such that $E(y_t \mid \mathcal{F}_{t-1}) = A_t x_t + B z_t$; and let $Y = (y_1, \ldots, y_T)$.
The quasi-likelihood function for Model 1 based on Gaussian errors takes the form
$$f(Y, \Delta \mid \theta) = \prod_{t=1}^T (2\pi)^{-\frac{p}{2}} |\Omega|^{-\frac{1}{2}} \exp\left\{-\frac{1}{2}\varepsilon_t'\Omega^{-1}\varepsilon_t\right\} \frac{\operatorname{etr}\{D\alpha_{t-1}'\alpha_t\}}{{}_0F_1(\frac{p}{2}; \frac{1}{4}D^2)}, \tag{24}$$
where $\theta = (\beta, B, \Omega, D, \alpha_0)$ and $\varepsilon_t = y_t - \alpha_t\beta'x_t - Bz_t$.
The quasi-likelihood function for Model 2 based on Gaussian errors takes the form
$$f(Y, \Delta \mid \theta) = \prod_{t=1}^T (2\pi)^{-\frac{p}{2}} |\Omega|^{-\frac{1}{2}} \exp\left\{-\frac{1}{2}\varepsilon_t'\Omega^{-1}\varepsilon_t\right\} \frac{\operatorname{etr}\{D\beta_{t-1}'\beta_t\}}{{}_0F_1(\frac{q_1}{2}; \frac{1}{4}D^2)}, \tag{25}$$
where $\theta = (\alpha, B, \Omega, D, \beta_0)$ and $\varepsilon_t = y_t - \alpha\beta_t'x_t - Bz_t$.
We treat the initial values $\alpha_0$ and $\beta_0$ as parameters to be estimated, but of course they can be regarded as given.

5. The Filtering Algorithms

In this section, for the models (18) and (19) defined in the previous section, we propose nonlinear filtering algorithms to estimate the a posteriori distributions of the latent processes based on the Gaussian error assumption in the measurement equations.
We start with Model 1, which has a time-varying $\alpha_t$. The filtering algorithm consists of two steps:
$$\text{Predict}: \; f(\alpha_t \mid \mathcal{F}_{t-1}) = \int f(\alpha_t \mid \alpha_{t-1}) f(\alpha_{t-1} \mid \mathcal{F}_{t-1}) \, [d\alpha_{t-1}], \tag{26}$$
$$\text{Update}: \; f(\alpha_t \mid \mathcal{F}_t) \propto f(y_t \mid \alpha_t, \mathcal{F}_{t-1}) f(\alpha_t \mid \mathcal{F}_{t-1}), \tag{27}$$
where the symbol $[d\alpha_{t-1}]$ stands for the differential form of a Haar measure on the Stiefel manifold. The predictive density (26) represents the a priori distribution of the latent variable before observing the information at time t. The updating density (27), also called the filtering density, represents the a posteriori distribution of the latent variable after observing the information at time t.
The prediction step is quite tricky in the sense that, even though we can find the joint distribution of $\alpha_t$ and $\alpha_{t-1}$, which is the product $f(\alpha_t \mid \alpha_{t-1}) f(\alpha_{t-1} \mid \mathcal{F}_{t-1})$, we must integrate out $\alpha_{t-1}$ over the Stiefel manifold. The density kernel $f(\alpha_{t-1} \mid \mathcal{F}_{t-1})$ appearing in the integral of (26) comes from the previous updating step and is quite straightforward, as it is proportional to the product of the density function of $y_{t-1}$ and the predicted density of $\alpha_{t-1}$ (see the updating step (27)).
The initial condition of the filtering algorithm can be a Dirac delta function $f(\alpha_0 \mid \mathcal{F}_0)$ such that $f(\alpha_0 \mid \mathcal{F}_0) = \infty$ when $\alpha_0 = U_0$, where $U_0$ is the modal orientation, and zero otherwise, while the integral $\int f(\alpha_0 \mid \mathcal{F}_0) \, [d\alpha_0]$ is exactly equal to one.
The corresponding nonlinear filtering algorithm is recursive like the Kalman filter in linear dynamic systems. We start the algorithm with
$$f(\alpha_1 \mid \mathcal{F}_0) \propto \operatorname{etr}\{DU_0'\alpha_1\}, \tag{28}$$
and proceed to the updating step for $\alpha_1$ as follows:
$$f(\alpha_1 \mid \mathcal{F}_1) \propto \operatorname{etr}\{-H_1\alpha_1'J\alpha_1 + C_1'\alpha_1\}, \tag{29}$$
where $H_1 = \frac{1}{2}\beta'x_1x_1'\beta$, $J = \Omega^{-1}$, and $C_1 = U_0D + \Omega^{-1}(y_1 - Bz_1)x_1'\beta$. Then, we move to the prediction step for $\alpha_2$ and obtain the integral
$$f(\alpha_2 \mid \mathcal{F}_1) = \int f(\alpha_2 \mid \alpha_1) f(\alpha_1 \mid \mathcal{F}_1) \, [d\alpha_1], \tag{30}$$
where
$$f(\alpha_2 \mid \alpha_1) = \frac{\operatorname{etr}\{D\alpha_1'\alpha_2\}}{{}_0F_1(\frac{p}{2}; \frac{1}{4}D^2)}, \tag{31}$$
due to (13) and (15), and $f(\alpha_1 \mid \mathcal{F}_1)$ is given in (29). Hence, we have
$$f(\alpha_2 \mid \alpha_1) f(\alpha_1 \mid \mathcal{F}_1) = \xi \cdot \operatorname{etr}\{D\alpha_1'\alpha_2\} \cdot \operatorname{etr}\{-H_1\alpha_1'J\alpha_1 + C_1'\alpha_1\}, \tag{32}$$
where $\xi$ does not depend on $\alpha_1$ and $\alpha_2$. Unfortunately, there is no closed form solution to the integral (30) in the literature.
Another contribution of this paper is a proposal to approximate this integral by the Laplace method; see Wong (1989, chps. 2 and 9) for a detailed exposition. Rewrite the integral (30) as
$$f(\alpha_2 \mid \mathcal{F}_1) = \xi \int h(\alpha_1) \exp\{p \cdot g(\alpha_1)\} \, [d\alpha_1], \tag{33}$$
where p is the dimension of $y_t$,
$$h(\alpha_1) = \frac{\operatorname{etr}\{D\alpha_1'\alpha_2\}}{\exp\left\{\sum_{i=1}^r d_i\right\}}, \tag{34}$$
which is bounded, and
$$g(\alpha_1) = \operatorname{tr}\{-H_1\alpha_1'J\alpha_1 + C_1'\alpha_1\}/p, \tag{35}$$
which is twice differentiable with respect to $\alpha_1$ and is assumed to converge to some nonzero value as $p \to \infty$.
The Laplace method can then be applied, since the Taylor expansion on which it is based is valid in the neighbourhood of any point on the Stiefel manifold. It follows that, as $p \to \infty$, the integral (30) can be approximated by
$$f(\alpha_2 \mid \mathcal{F}_1) \approx \xi\, h(U_1) \exp\{p\, g(U_1)\} \propto \operatorname{etr}\{DU_1'\alpha_2\}, \tag{36}$$
where
$$U_1 = \arg\max_{\alpha_1 \in V_{p,r}} \operatorname{etr}\{-H_1\alpha_1'J\alpha_1 + C_1'\alpha_1\}. \tag{37}$$
Given $f(\alpha_2 \mid \mathcal{F}_1) \propto \operatorname{etr}\{DU_1'\alpha_2\}$, it can be shown that $f(\alpha_2 \mid \mathcal{F}_2)$ has the same form as (29), with $H_2 = \frac{1}{2}\beta'x_2x_2'\beta$ and $C_2 = U_1D + \Omega^{-1}(y_2 - Bz_2)x_2'\beta$.
Thus, by induction, we have the following proposition for the recursive filtering algorithm for state-space Model 1.
Proposition 3.
Given the state-space Model 1 in (18) with the quasi-likelihood function (24) based on Gaussian errors, the Laplace approximation based recursive filtering algorithm for $\alpha_t$ is given by
$$\text{Predict}: \; f(\alpha_t \mid \mathcal{F}_{t-1}) \propto \operatorname{etr}\{DU_{t-1}'\alpha_t\}, \tag{38}$$
$$\text{Update}: \; f(\alpha_t \mid \mathcal{F}_t) \propto \operatorname{etr}\{-H_t\alpha_t'J\alpha_t + C_t'\alpha_t\}, \tag{39}$$
where $H_t = \frac{1}{2}\beta'x_tx_t'\beta$, $J = \Omega^{-1}$, $C_t = U_{t-1}D + \Omega^{-1}(y_t - Bz_t)x_t'\beta$, and
$$U_{t-1} = \arg\max_{\alpha_{t-1} \in V_{p,r}} \operatorname{etr}\{-H_{t-1}\alpha_{t-1}'J\alpha_{t-1} + C_{t-1}'\alpha_{t-1}\}. \tag{40}$$
Likewise, we have the recursive filtering algorithm for the state-space Model 2.
Proposition 4.
Given the state-space Model 2 in (19) with the quasi-likelihood function (25) based on Gaussian errors, the Laplace approximation based recursive filtering algorithm for $\beta_t$ is given by
$$\text{Predict}: \; f(\beta_t \mid \mathcal{F}_{t-1}) \propto \operatorname{etr}\{DU_{t-1}'\beta_t\}, \tag{41}$$
$$\text{Update}: \; f(\beta_t \mid \mathcal{F}_t) \propto \operatorname{etr}\{-H\beta_t'J_t\beta_t + C_t'\beta_t\}, \tag{42}$$
where $H = \frac{1}{2}\alpha'\Omega^{-1}\alpha$, $J_t = x_tx_t'$, $C_t = U_{t-1}D + x_t(y_t - Bz_t)'\Omega^{-1}\alpha$, and
$$U_{t-1} = \arg\max_{\beta_{t-1} \in V_{q_1,r}} \operatorname{etr}\{-H\beta_{t-1}'J_{t-1}\beta_{t-1} + C_{t-1}'\beta_{t-1}\}. \tag{43}$$
Several remarks related to the propositions follow.
Remark 1.
The distributions of the predicted and updated $\alpha_t$ and $\beta_t$ in the recursive filtering algorithms are conjugate.
The predictive distribution and the updating or filtering distribution are both known as the matrix Langevin–Bingham (or matrix Bingham–von Mises–Fisher) distribution; see, for example, Khatri and Mardia (1977). This feature is desirable as it gives great convenience in the computational implementation of the filtering algorithms.
Remark 2.
When estimating the predicted distributions of $\alpha_t$ and $\beta_t$, a numerical optimization for finding $U_{t-1}$ is required.
There are several efficient line-search based optimization algorithms available in the literature which can be easily implemented and applied. See Absil et al. (2008, chp. 4) for a detailed exposition.
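As an illustration of what such a routine involves, the following R sketch implements a simple projected gradient ascent with a polar retraction: it climbs the objective $\operatorname{tr}\{-H\alpha'J\alpha + C'\alpha\}$ and maps each iterate back onto the Stiefel manifold through the SVD. This is a generic scheme under our own simple design choices (a fixed initial step with crude halving), not a specific algorithm of Absil et al. (2008); the function name stiefel_argmax is ours.

```r
## Sketch: maximize etr{-H a' J a + C' a} over the Stiefel manifold by gradient
## ascent with a polar retraction (the UV' factor of the SVD of the stepped point).
stiefel_argmax <- function(H, J, C, a0, step = 0.1, tol = 1e-10, maxit = 1000) {
  obj <- function(a) sum(diag(-H %*% t(a) %*% J %*% a + t(C) %*% a))
  a <- a0
  for (i in seq_len(maxit)) {
    G <- C - 2 * J %*% a %*% H          # Euclidean gradient (H and J symmetric)
    s <- svd(a + step * G)              # polar retraction onto V_{p,r}
    a_new <- s$u %*% t(s$v)
    if (obj(a_new) < obj(a)) { step <- step / 2; next }   # crude backtracking
    if (obj(a_new) - obj(a) < tol) { a <- a_new; break }  # converged
    a <- a_new
  }
  a
}
```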
Remark 3.
The predictive distributions in (38) and (41) are Laplace type approximations. Therefore, the dimensions of the data $y_t$ in Model 1 and of the predictors in Model 2 are expected to be high enough in order to achieve good approximations.
For high-dimensional factor models that use a large number of predictors, the filtering algorithms are natural choices to model the possible temporal instability, while a small value of the rank r implies a dimension reduction in forecasting. In the next section, our finding from simulation is that, even for small p and $q_1$, the approximations of the modal orientations can be very good.
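For concreteness, a single predict–update recursion of the algorithm of Proposition 3 can be sketched as follows, reusing the illustrative stiefel_argmax() routine above; this is a sketch of the computations implied by (38)–(40), not the implementation of the SMFilter package:

```r
## Sketch of one recursion of the Model 1 filter (Proposition 3): given U_{t-1},
## form H_t, J = Omega^{-1} and C_t, and return the new filtered modal orientation U_t.
filter_step_model1 <- function(U_prev, y_t, x_t, z_t, beta, B, Omega_inv, D) {
  H_t <- 0.5 * t(beta) %*% x_t %*% t(x_t) %*% beta                    # r x r
  C_t <- U_prev %*% D + Omega_inv %*% (y_t - B %*% z_t) %*% t(x_t) %*% beta
  ## the filtering density kernel is etr{-H_t a' J a + C_t' a}; its mode is U_t
  stiefel_argmax(H = H_t, J = Omega_inv, C = C_t, a0 = U_prev)
}
```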
Remark 4.
The recursive filtering algorithms make it possible to use both maximum likelihood estimation and Bayesian analysis for the proposed state-space models.
Next, we consider the models (20) and (21). The corresponding filtering algorithms are similar to those of Propositions 3 and 4. The filtering algorithm for Model 1* is given by
$$\text{Predict}: \; f(\alpha_t \mid \mathcal{F}_{t-1}) \propto \operatorname{etr}\{D\alpha_0'\alpha_t\}, \tag{44}$$
$$\text{Update}: \; f(\alpha_t \mid \mathcal{F}_t) \propto \operatorname{etr}\{-H_t\alpha_t'J\alpha_t + C_t'\alpha_t\}, \tag{45}$$
where $H_t = \frac{1}{2}\beta'x_tx_t'\beta$, $J = \Omega^{-1}$, and $C_t = \alpha_0D + \Omega^{-1}(y_t - Bz_t)x_t'\beta$. In addition, for Model 2*, we have
$$\text{Predict}: \; f(\beta_t \mid \mathcal{F}_{t-1}) \propto \operatorname{etr}\{D\beta_0'\beta_t\}, \tag{46}$$
$$\text{Update}: \; f(\beta_t \mid \mathcal{F}_t) \propto \operatorname{etr}\{-H\beta_t'J_t\beta_t + C_t'\beta_t\}, \tag{47}$$
where $H = \frac{1}{2}\alpha'\Omega^{-1}\alpha$, $J_t = x_tx_t'$, and $C_t = \beta_0D + x_t(y_t - Bz_t)'\Omega^{-1}\alpha$. We have the following remarks for both models.
Remark 5.
The predictive distributions do not depend on any previous information, which is due to the assumption of sequentially independent latent processes.
Remark 6.
The predictive and filtering distributions for Model 1* and Model 2* are not approximations.
We do not need to approximate an integral like (30): since $f(\alpha_t \mid \mathcal{F}_{t-1})$ does not depend on $\alpha_{t-1}$ in Model 1* and $f(\beta_t \mid \mathcal{F}_{t-1})$ does not depend on $\beta_{t-1}$ in Model 2*, these densities can be moved directly outside the integral.
The smoothing distribution is defined to be the a posteriori distribution of the latent parameters given all the observations. We have the following two propositions for the smoothing distributions of the state-space models.
Proposition 5.
The smoothing distribution of Model 1 is given by
$$f(\Delta \mid \theta, Y) \propto \prod_{t=1}^T \operatorname{etr}\{-H_t\alpha_t'J\alpha_t + C_t'\alpha_t\}, \tag{48}$$
where $H_t = \frac{1}{2}\beta'x_tx_t'\beta$, $J = \Omega^{-1}$, and $C_t = \alpha_{t-1}D + \Omega^{-1}(y_t - Bz_t)x_t'\beta$.
Proposition 6.
The smoothing distribution of Model 2 is given by
$$f(\Delta \mid \theta, Y) \propto \prod_{t=1}^T \operatorname{etr}\{-H\beta_t'J_t\beta_t + C_t'\beta_t\}, \tag{49}$$
where $H = \frac{1}{2}\alpha'\Omega^{-1}\alpha$, $J_t = x_tx_t'$, and $C_t = \beta_{t-1}D + x_t(y_t - Bz_t)'\Omega^{-1}\alpha$.
There is no closed form for the smoothing distributions as the corresponding normalizing constants are unknown. Hoff (2009) develops a Gibbs sampling algorithm that can be used to sample from these smoothing distributions.

6. Evaluation of the Filtering Algorithms by Simulation Experiments

To investigate the performance of the filtering algorithm in Proposition 3, we consider several settings based on data generated from Model 1 in (18) for different values of its parameters.
Recall that at each iteration of the recursive algorithm, the predictive density kernel in (38) is a Laplace-type approximation of the true predictive density, which takes an integral form as in (30); hence, the resulting filtering density is an approximation as well. It is of great interest to check the performance of the approximation under different settings. Since the exact filtering distributions of the latent process are not available, we resort to comparing the true (i.e., generated) value $\alpha_t$ with the filtered modal orientation at time t of the filtering distribution $f(\alpha_t \mid \mathcal{F}_t)$, which is $U_t$ as defined in (40). The modal orientations are expected to be distributed around the true values across time if the algorithm performs well.
We then need a measure of distance between two points of the Stiefel manifold for the comparison. We consider the squared Frobenius norm of the difference between two matrices or column vectors:
$$F^2(X, Y) = \|X - Y\|^2 = \operatorname{tr}\{(X - Y)'(X - Y)\} = \operatorname{tr}\{X'X + Y'Y - X'Y - Y'X\}. \tag{50}$$
If the two matrices or column vectors $X$ and $Y$ are points of the Stiefel manifold, then $F^2(X, Y) = 2r - 2\operatorname{tr}\{X'Y\} \in [0, 4r]$, and $F^2(X, Y)$ attains its minimum 0 when $X = Y$ (closest) and its maximum 4r when $X = -Y$ (furthest). Thus, we employ the normalized distance
$$\delta(X, Y) = F^2(X, Y)/(4r) \in [0, 1], \tag{51}$$
which is free of the matrix dimension.
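In R, the normalized distance takes one line once the identity $F^2(X, Y) = 2r - 2\operatorname{tr}\{X'Y\}$ is used (a small helper, named delta here, reused in the sketch below):

```r
## The normalized distance (51), written directly from F^2(X, Y) = 2r - 2 tr(X'Y).
delta <- function(X, Y) {
  r <- ncol(X)
  (2 * r - 2 * sum(diag(crossprod(X, Y)))) / (4 * r)
}
```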
Note that the modal orientation of the filtering distribution is not supposed to be consistent for the true value of the latent process as the sample size T increases. As a matter of fact, the sample size is irrelevant for such consistency, as can be seen from the filtering density (39). The filtering distribution (39) also has a concentration (or dispersion), determined by $H_t$, $J$ (the inverse of $\Omega$) and $C_t$ (the current information, i.e., $y_t$, $x_t$ and $z_t$) together with the parameters, while the previous information has limited influence, only through the orthonormal matrix $U_{t-1}$. Since the concentration of the filtering distribution does not shrink as the sample size increases, we use T = 100 in all the experiments. If the filtering distribution has a high concentration, the filtered modal orientations are expected to be close to the true values, and hence the normalized distances close to zero and less dispersed.
The data generating process follows Model 1 in (18). Since we input the true parameters into the filtering algorithm, the difference $y_t - Bz_t$ is perfectly known, and there is no need to consider the effect of $Bz_t$. Thus, it is natural to exclude $Bz_t$ from the data generating process.
We consider settings with different combinations of
  • T = 100, the sample size,
  • $p \in \{2, 3, 10, 20\}$, the dimension of the dependent variable $y_t$,
  • $r \in \{1, 2\}$, the rank of the matrix $A_t$,
  • $x_t$, the explanatory variable vector, of dimension $q_1 = 3$, ensuring that $q_1 > r$ always holds; each $x_t$ is sampled independently (over time) from $N_3(0, I_3)$,
  • $\beta = (1, 1, 1)'/\sqrt{3}$,
  • $\alpha_0 = (1, \ldots, 1)'/\sqrt{p}$, the initial value of the $\alpha_t$ sequence for the data generating process,
  • $\Omega = \rho I_p$, the covariance matrix of the errors, diagonal with $\rho \in \{0.1, 0.5, 1\}$,
  • $D = dI_r$, with $d \in \{5, 50, 500, 800\}$.
The simulation based experiment of each setting consists of the following three steps:
  • We sample from Model 1 by using the identified version in (18): first simulate $\alpha_t$ given $\alpha_{t-1}$, and then $y_t$ given $\alpha_t$. We save the sequence of the latent process $\alpha_t$, $t = 1, \ldots, T$.
  • We apply the filtering algorithm to the sampled data to obtain the filtered modal orientations $U_t$, $t = 1, \ldots, T$.
  • We compute the normalized distances $\delta_t = \delta(\alpha_t, U_t)$ and report them by plotting them against time t.
We use the same seed (equal to one) for the underlying random number generator throughout the experiments, so that all the results can be replicated. Sampling values from the matrix Langevin distribution can be done by the rejection method described in Section 2.5.2 of Chikuse (2003).
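A skeleton of one experiment is sketched below. It assumes that a matrix Langevin sampler is available; here we call rmf.matrix() from the rstiefel package of Hoff (2009), an assumption about the available toolkit (any sampler, such as the rejection method of Chikuse (2003) just mentioned, would do). The routines filter_step_model1(), stiefel_argmax() and delta() are the illustrative sketches given above, not functions of the SMFilter package.

```r
## Skeleton of one experiment: simulate Model 1 and track the normalized distances.
library(rstiefel)   # assumed available; provides the sampler rmf.matrix()
set.seed(1)
Tobs <- 100; p <- 10; q1 <- 3; r <- 1; rho <- 0.1; d <- 50

beta  <- matrix(1, q1, r) / sqrt(q1)       # beta = (1, 1, 1)'/sqrt(3)
alpha <- matrix(1, p, r) / sqrt(p)         # alpha_0
D <- diag(d, r, r)
Omega_inv <- diag(1 / rho, p)

U <- alpha; dist <- numeric(Tobs)
for (t in seq_len(Tobs)) {
  alpha <- rmf.matrix(alpha %*% D)         # alpha_t | alpha_{t-1} ~ ML(p, r, alpha_{t-1} D)
  x_t <- matrix(rnorm(q1), q1, 1)
  y_t <- alpha %*% t(beta) %*% x_t + matrix(rnorm(p, sd = sqrt(rho)), p, 1)
  U <- filter_step_model1(U, y_t, x_t, z_t = matrix(0, 1, 1),
                          beta = beta, B = matrix(0, p, 1),
                          Omega_inv = Omega_inv, D = D)   # B z_t excluded, as in the text
  dist[t] <- delta(alpha, U)
}
plot(dist, type = "l", xlab = "t", ylab = "normalized distance")
```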
Figure 4 depicts the results for the settings $p \in \{2, 10, 20\}$, $r = 1$, $\rho = 0.1$ and $d = 50$. We see that the sequences of normalized distances $\delta_t$ are persistent. This is a common phenomenon throughout the experiments and, intuitively, can be attributed to the fact that the current $\delta_t$ depends on the previous one through the pair $U_t$ and $\alpha_t$. For the low dimensional case $p = 2$, almost all the distances are very close to 0, meaning that the filtered modal orientations are very close to the true ones, despite a few exceptions. However, for the higher dimensional cases $p = 10$ and 20, the distances are at higher levels and are more dispersed. This is consistent with the fact that, for the same concentration $d = 50$, an increase of the dimension of the orthonormal matrix or vector goes along with an increase of the dispersion of the corresponding distributions on the Stiefel manifold, as the volume of the manifold explodes with the increase of the dimensions (both p and r).
Figure 5 displays the results for the same settings $p \in \{2, 10, 20\}$, $r = 1$, $\rho = 0.1$, but with a much higher concentration $d = 500$. We see that the curse of dimensionality can be remedied by a higher concentration, as the distances for the high dimensional cases are much closer to zero than when $d = 50$.
The magnitude $\rho$ of the variance of the errors affects the results of the filtering algorithm as well, as it determines the concentration of the filtering distribution, which can be seen from (39) through $J$ and $C_t$ (both depend on the inverse of $\Omega$). The following experiments use the settings $p = 2$, $r = 1$ and $d \in \{5, 50, 500\}$, showing the impact of different values of $\rho$ on the filtering results. Figure 6 depicts the results for $\rho = 1$, and Figure 7 for $\rho = 0.1$. We see that the normalized distances become closer to zero when a lower $\rho$ is applied. Their variability also decreases for the lowest value $d = 5$ and for the intermediate value $d = 50$. It is worth mentioning that, in the two cases corresponding to the two bottom plots of the figures, the matrix $C_t$ dominates the density function, which implies that the filtering distribution resembles a highly concentrated matrix Langevin distribution.
In the following experiments, our focus is on the behaviour of the filtering algorithm when r approaches p. We consider the setting $p = 3$ with rank $r \in \{1, 2\}$, $\rho = 0.1$ and $d = 500$. Figure 8 depicts the results. The normalized distances are stable at a low level for the case $p = 3$ with $r = 1$, but at a high level (around 0.5) for the case $p = 3$ with $r = 2$. A higher concentration ($d = 800$) reduces the latter level to about 0.12, as can be seen in the lower plot of Figure 8. We conclude that the approximation of the true filtering distribution tends to fail when the matrix $\alpha_t$ tends to a square matrix, that is, when r approaches p; therefore, the filtering algorithms proposed in this paper seem appropriate when p is sufficiently larger than r.
All the previous experiments are based on the true initial value $\alpha_0$, which in practice is unknown. The filtering algorithm may be sensitive to the choice of the initial value. In the following experiments, we look into the effect of a wrong initial value. The settings are $p \in \{2, 10, 20\}$, $r = 1$, $\rho = 0.1$ and $d = 50$, and we use $-\alpha_0$ as the initial value, which is the point of the Stiefel manifold furthest away from the true one. Figure 9 depicts the results. We see that in all the experiments the normalized distances move towards zero, hence the filtered values approach the true values, in no more than 20 steps. After that, the level and dispersion of the distance series are similar to what they are in Figure 4, where the true initial value is used. Thus, we can conclude that the effect of a wrongly chosen initial value is temporary.
We have conducted similar simulation experiments for Model 2 in (19) to investigate the performance of the algorithm proposed in Proposition 4, and we find results similar to those for Model 1. All the experiments are replicable using the R code available at https://github.com/yukai-yang/SMFilter_Experiments, and the corresponding R package SMFilter, implementing the filtering algorithms of this paper, is available on the Comprehensive R Archive Network (CRAN).

7. Conclusions

In this paper, we discuss the modelling of the time dependence of time-varying reduced rank parameters in multivariate time series models and develop novel state-space models whose latent states evolve on the Stiefel manifold. Almost all the existing models in the literature only deal with the case where the evolution of the latent processes takes place on the Euclidean space, and we point out that this approach can be problematic. These problems motivate the development of the novel state-space models. The matrix Langevin distribution is proposed to specify the sequential evolution of the corresponding latent processes over the Stiefel manifold. Nonlinear filtering algorithms for the new models are designed, wherein the integral of the prediction step is approximated by the Laplace method. An advantage of the matrix Langevin distribution is that the a priori and a posteriori distributions of the latent variables are conjugate. The new models can be useful when the temporal instability of some parameters of multivariate models is suspected, for example, cointegration models with time-varying short-run adjustment or time-varying long-run relations, and factor models with time-varying factor loadings.
Further research is needed in several directions. The most obvious one is the implementation of estimation methods, which can be maximum likelihood or Bayesian inference, and the investigation of their properties. This will enable us to apply the models to data. In this paper, we only consider the case where the latent variables evolve on the Stiefel manifold in a ‘random walk’ way. It will be interesting to consider the case where the latent variables evolve on the Stiefel manifold but in a mean-reverting way.

Author Contributions

Conceptualization, Y.Y.; Methodology, Y.Y.; Software, Y.Y.; Formal analysis, Y.Y. and L.B.; Investigation, Y.Y. and L.B.; Validation, Y.Y. and L.B.; Funding acquisition, Y.Y.; Writing—original draft, Y.Y.; Writing—review and editing, Y.Y. and L.B.

Funding

This research was funded by Jan Wallander’s and Tom Hedelius’s Foundation grant number P2016-0293:1.

Acknowledgments

The authors would like to thank the two referees for helpful comments. Yukai Yang acknowledges support from Jan Wallander’s and Tom Hedelius’s Foundation. Responsibility for any errors or shortcomings in this work remains ours.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of Proposition 1

In the model (1) with the decomposition (2), the rows of $\beta$ and correspondingly the order of the variables in $x_t$ are permuted by $P_\beta$ as follows:
$$A_t x_t = \alpha_t \beta' x_t = \alpha_t \beta' P_\beta' P_\beta x_t. \tag{A1}$$
The time-invariant component $\beta$ can be linearly normalized if the $r \times r$ upper block $b_1$ in (7) is invertible. It follows that the corresponding linear normalization defined in (8) is due to
$$\alpha_t \beta' P_\beta' P_\beta x_t = \alpha_t (b_1', b_2') P_\beta x_t = \alpha_t b_1' (b_1')^{-1} (b_1', b_2') P_\beta x_t = \tilde{\alpha}_t \tilde{\beta}' P_\beta x_t, \tag{A2}$$
where $\tilde{\alpha}_t = \alpha_t b_1'$ is the new time-varying component following the evolution (9).
Consider another permutation $P_\beta^* \neq P_\beta$. Similarly, we have
$$A_t x_t = \alpha_t \beta' P_\beta^{*\prime} P_\beta^* x_t, \tag{A3}$$
together with
$$P_\beta^* \beta = \begin{pmatrix} b_1^* \\ b_2^* \end{pmatrix}, \quad \tilde{\beta}^* = P_\beta^* \beta (b_1^*)^{-1} = \begin{pmatrix} I_r \\ b_2^* (b_1^*)^{-1} \end{pmatrix}, \quad \tilde{\alpha}_t^* = \alpha_t b_1^{*\prime}, \tag{A4}$$
where $b_1^*$ is also invertible. Then, we have the evolution
$$\operatorname{vec}(\tilde{\alpha}_{t+1}^*) = \operatorname{vec}(\tilde{\alpha}_t^*) + \eta_t^{\alpha*}. \tag{A5}$$
Assume that the error vector $\eta_t^\alpha$ in (9) has zero mean and a diagonal variance–covariance matrix. From (A1)–(A4), we have
$$A_t = \tilde{\alpha}_t \tilde{\beta}' P_\beta = \tilde{\alpha}_t^* \tilde{\beta}^{*\prime} P_\beta^*, \tag{A6}$$
and hence it follows that
$$\tilde{\alpha}_t^* = \tilde{\alpha}_t \tilde{\beta}' P_\beta P_\beta^{*\prime} \kappa, \tag{A7}$$
where the $q_1 \times r$ matrix $\kappa$ satisfies $\tilde{\beta}^{*\prime} \kappa = I_r$. The existence of $\kappa$ is guaranteed by the fact that $\beta$ has full rank and so does $\tilde{\beta}^*$.
Thus, the vectorized $\tilde{\alpha}_{t+1}^*$ can be written as
$$\operatorname{vec}(\tilde{\alpha}_{t+1}^*) = \operatorname{vec}(\tilde{\alpha}_{t+1} \tilde{\beta}' P_\beta P_\beta^{*\prime} \kappa) = ((\kappa' P_\beta^* P_\beta' \tilde{\beta}) \otimes I_p) \operatorname{vec}(\tilde{\alpha}_{t+1}) = ((\kappa' P_\beta^* P_\beta' \tilde{\beta}) \otimes I_p) \operatorname{vec}(\tilde{\alpha}_t) + ((\kappa' P_\beta^* P_\beta' \tilde{\beta}) \otimes I_p) \eta_t^\alpha = \operatorname{vec}(\tilde{\alpha}_t^*) + \eta_t^{\alpha*}, \tag{A8}$$
due to (9) and (A5). Hence, $\eta_t^{\alpha*} = ((\kappa' P_\beta^* P_\beta' \tilde{\beta}) \otimes I_p) \eta_t^\alpha$, and $\eta_t^{\alpha*}$ has a diagonal variance–covariance matrix if and only if $\kappa' P_\beta^* P_\beta' \tilde{\beta}$ is diagonal, given that $\eta_t^\alpha$ has a diagonal variance–covariance matrix.
Next, we need to verify whether $\kappa' P_\beta^* P_\beta' \tilde{\beta}$ is diagonal and investigate under which condition it is. By substituting $\tilde{\beta}$ using (8), we obtain
$$\kappa' P_\beta^* P_\beta' \tilde{\beta} = \kappa' P_\beta^* P_\beta' P_\beta \beta b_1^{-1} = \kappa' P_\beta^* \beta b_1^{-1}. \tag{A9}$$
In addition, we know that, by substituting $\tilde{\beta}^*$ using (A4),
$$\kappa' \tilde{\beta}^* = \kappa' P_\beta^* \beta (b_1^*)^{-1} = I_r. \tag{A10}$$
Equation (A10) implies that $\kappa' P_\beta^* \beta = b_1^*$, so that (A9) equals $b_1^* b_1^{-1}$. Since the $r \times r$ square matrix $\kappa' P_\beta^* \beta$ has full rank, it can be seen that $\eta_t^{\alpha*}$ has a diagonal variance–covariance matrix if and only if $b_1 = b_1^*$.
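The key step of the proof, namely that $\kappa' P_\beta^* P_\beta' \tilde{\beta} = b_1^* b_1^{-1}$, can be verified numerically. The following R sketch (with arbitrary numbers and two arbitrary permutations) constructs $\kappa$ explicitly as $\tilde{\beta}^*(\tilde{\beta}^{*\prime}\tilde{\beta}^*)^{-1}$, which is one valid choice satisfying $\tilde{\beta}^{*\prime}\kappa = I_r$:

```r
## Numerical check of the identity kappa' P_beta_star P_beta' beta_tilde = b1_star b1^{-1}.
set.seed(1)
q1 <- 4; r <- 2
beta <- matrix(rnorm(q1 * r), q1, r)      # arbitrary full rank q1 x r matrix

P  <- diag(q1)[c(2, 1, 3, 4), ]           # two different row permutations
Ps <- diag(q1)[c(3, 4, 1, 2), ]

b1  <- (P  %*% beta)[1:r, ];  beta_t  <- P  %*% beta %*% solve(b1)    # beta_tilde
b1s <- (Ps %*% beta)[1:r, ];  beta_ts <- Ps %*% beta %*% solve(b1s)   # beta_tilde_star

kappa <- beta_ts %*% solve(crossprod(beta_ts))   # satisfies t(beta_ts) %*% kappa = I_r
max(abs(t(kappa) %*% Ps %*% t(P) %*% beta_t - b1s %*% solve(b1)))     # approximately zero
```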

Appendix B. Proof of Proposition 2

In the model (1) with the decomposition (3), the rows of $\alpha$ are permuted by $P_\alpha$ as follows:
$$A_t x_t = \alpha \beta_t' x_t = P_\alpha' P_\alpha \alpha \beta_t' x_t. \tag{A11}$$
Notice that we could remove the leading $P_\alpha'$ from the equation, which would mean choosing not to permute back to the original order of the dependent variables $y_t$. The linear normalization (11) is obtained by
$$P_\alpha' P_\alpha \alpha \beta_t' x_t = P_\alpha' \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} \beta_t' x_t = P_\alpha' \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} a_1^{-1} a_1 \beta_t' x_t = P_\alpha' \tilde{\alpha} \tilde{\beta}_t' x_t, \tag{A12}$$
where $\tilde{\beta}_t = \beta_t a_1'$ is the new time-varying component following the evolution (12).
Consider another permutation $P_\alpha^* \neq P_\alpha$, such that
$$A_t x_t = P_\alpha^{*\prime} P_\alpha^* \alpha \beta_t' x_t, \tag{A13}$$
together with
$$P_\alpha^* \alpha = \begin{pmatrix} a_1^* \\ a_2^* \end{pmatrix}, \quad \tilde{\alpha}^* = P_\alpha^* \alpha (a_1^*)^{-1} = \begin{pmatrix} I_r \\ a_2^* (a_1^*)^{-1} \end{pmatrix}, \quad \tilde{\beta}_t^* = \beta_t a_1^{*\prime}, \tag{A14}$$
where $a_1^*$ is invertible. Then, we have the evolution
$$\operatorname{vec}(\tilde{\beta}_{t+1}^*) = \operatorname{vec}(\tilde{\beta}_t^*) + \eta_t^{\beta*}. \tag{A15}$$
We assume that the error vector $\eta_t^\beta$ in (12) has zero mean and a diagonal variance–covariance matrix. From (A11)–(A14), we have
$$A_t = P_\alpha' \tilde{\alpha} \tilde{\beta}_t' = P_\alpha^{*\prime} \tilde{\alpha}^* \tilde{\beta}_t^{*\prime}, \tag{A16}$$
and hence it follows that
$$\tilde{\beta}_t^* = \tilde{\beta}_t \tilde{\alpha}' P_\alpha P_\alpha^{*\prime} \delta, \tag{A17}$$
where the $p \times r$ matrix $\delta$ satisfies $\tilde{\alpha}^{*\prime} \delta = I_r$. The existence of $\delta$ is guaranteed by the fact that $\alpha$ has full rank and so does $\tilde{\alpha}^*$.
Then, we get the vectorized $\tilde{\beta}_{t+1}^*$:
$$\operatorname{vec}(\tilde{\beta}_{t+1}^*) = \operatorname{vec}(\tilde{\beta}_{t+1} \tilde{\alpha}' P_\alpha P_\alpha^{*\prime} \delta) = ((\delta' P_\alpha^* P_\alpha' \tilde{\alpha}) \otimes I_{q_1}) \operatorname{vec}(\tilde{\beta}_{t+1}) = ((\delta' P_\alpha^* P_\alpha' \tilde{\alpha}) \otimes I_{q_1}) \operatorname{vec}(\tilde{\beta}_t) + ((\delta' P_\alpha^* P_\alpha' \tilde{\alpha}) \otimes I_{q_1}) \eta_t^\beta = \operatorname{vec}(\tilde{\beta}_t^*) + \eta_t^{\beta*}, \tag{A18}$$
due to (12) and (A15). Hence, $\eta_t^{\beta*} = ((\delta' P_\alpha^* P_\alpha' \tilde{\alpha}) \otimes I_{q_1}) \eta_t^\beta$, and $\eta_t^{\beta*}$ has a diagonal variance–covariance matrix if and only if $\delta' P_\alpha^* P_\alpha' \tilde{\alpha}$ is diagonal, given that $\eta_t^\beta$ has a diagonal variance–covariance matrix.
The investigation of the condition under which $\delta' P_\alpha^* P_\alpha' \tilde{\alpha}$ is diagonal is similar to that of the previous proof. By substituting $\tilde{\alpha}$ using (11), we obtain
$$\delta' P_\alpha^* P_\alpha' \tilde{\alpha} = \delta' P_\alpha^* P_\alpha' P_\alpha \alpha a_1^{-1} = \delta' P_\alpha^* \alpha a_1^{-1}. \tag{A19}$$
By substituting $\tilde{\alpha}^*$ using (A14), we obtain
$$\delta' \tilde{\alpha}^* = \delta' P_\alpha^* \alpha (a_1^*)^{-1} = I_r. \tag{A20}$$
Since the $r \times r$ square matrix $\delta' P_\alpha^* \alpha$ has full rank, it can be seen that $\eta_t^{\beta*}$ has a diagonal variance–covariance matrix if and only if $a_1 = a_1^*$.

References

  1. Absil, Pierre-Antoine, Robert Mahony, and Rodolphe Sepulchre. 2008. Optimization Algorithms on Matrix Manifolds. Princeton: Princeton University Press. [Google Scholar]
  2. Anderson, Theodore Wilbur. 1971. The Statistical Analysis of Time Series. New York: Wiley. [Google Scholar]
  3. Bierens, Herman J., and Luis F. Martins. 2010. Time-varying cointegration. Econometric Theory 26: 1453–90. [Google Scholar] [CrossRef]
  4. Breitung, Jörg, and Sandra Eickmeier. 2011. Testing for structural breaks in dynamic factor models. Journal of Econometrics 163: 71–84. [Google Scholar] [CrossRef] [Green Version]
  5. Casals, Jose, Alfredo Garcia-Hiernaux, Miguel Jerez, Sonia Sotoca, and A. Alexandre Trindade. 2016. State-Space Methods for Time Series Analysis: Theory, Applications and Software. Chapman & Hall/CRC, Monographs on Statistics & Applied Probability. New York: CRC Press. [Google Scholar]
  6. Chikuse, Yasuko. 2003. Statistics on Special Manifolds. New York: Springer. [Google Scholar]
  7. Chikuse, Yasuko. 2006. State space models on special manifolds. Journal of Multivariate Analysis 97: 1284–94. [Google Scholar] [CrossRef]
  8. Del Negro, Marco, and Christopher Otrok. 2008. Dynamic Factor Models with Time-Varying Parameters: Measuring Changes in International Business Cycles. Staff Report No. 326. New York: Federal Reserve Bank of New York. [Google Scholar]
  9. Durbin, James, and Siem Jan Koopman. 2012. Time Series Analysis by State Space Methods, 2nd ed. Oxford: Oxford University Press. [Google Scholar]
  10. Eickmeier, Sandra, Wolfgang Lemke, and Massimiliano Marcellino. 2014. Classical time varying factor-augmented vector auto-regressive models—Estimation, forecasting and structural analysis. Journal of the Royal Statistical Society Series A (Statistics in Society) 178: 493–533. [Google Scholar] [CrossRef]
  11. Hamilton, James Douglas. 1994. Time Series Analysis. Princeton: Princeton University Press. [Google Scholar]
  12. Hannan, Edward J. 1970. Multiple Time Series. New York: Wiley. [Google Scholar]
  13. Herz, Carl S. 1955. Bessel functions of matrix argument. Annals of Mathematics 61: 474–523. [Google Scholar] [CrossRef]
  14. Hoff, Peter D. 2009. Simulation of the matrix Bingham-von Mises-Fisher distribution, with applications to multivariate and relational data. Journal of Computational and Graphical Statistics 18: 438–56. [Google Scholar] [CrossRef]
  15. Khatri, C. G., and Kanti V. Mardia. 1977. The von Mises-Fisher matrix distribution in orientation statistics. Journal of the Royal Statistical Society, Series B 39: 95–106. [Google Scholar] [CrossRef]
  16. Koopman, Lambert Herman. 1974. The Spectral Analysis of Time Series. New York: Academic Press. [Google Scholar]
  17. Mardia, Kanti V. 1975. Statistics of directional data (with discussion). Journal of the Royal Statistical Society, Series B 37: 349–93. [Google Scholar]
  18. Prentice, Michael J. 1982. Antipodally symmetric distributions for orientation statistics. Journal of Statistical Planning and Inference 6: 205–14. [Google Scholar] [CrossRef]
  19. Rothman, Philip, Dick van Dijk, and Philip Hans Franses. 2001. A Multivariate STAR analysis of the relationship between money and output. Macroeconomic Dynamics 5: 506–32. [Google Scholar]
  20. Stock, James, and Mark Watson. 2009. Forecasting in dynamic factor models subject to structural instability. In The Methodology and Practice of Econometrics. A Festschrift in Honour of David F. Hendry. Edited by David F. Hendry, Jennifer Castle and Neil Shephard. Oxford: Oxford University Press, pp. 173–205. [Google Scholar]
  21. Stock, James H., and Mark W. Watson. 2002. Forecasting using principal components from a large number of predictors. Journal of the American Statistical Association 97: 1167–79. [Google Scholar] [CrossRef]
  22. Swanson, Norman Rasmus. 1998. Finite sample properties of a simple LM test for neglected nonlinearity in error correcting regression equations. Statistica Neerlandica 53: 76–95. [Google Scholar] [CrossRef]
  23. Wong, Roderick S. C. 1989. Asymptotic Approximations of Integrals. Boston: Academic Press. Reprinted in Classics in Applied Mathematics. Philadelphia: Society for Industrial and Applied Mathematics, 2001. [Google Scholar]
Figure 1. Euclidean state space for $p = 2$ and $r = 1$. Points 1–3 are possible locations of the latent variable $(\alpha_{1t}, \alpha_{2t})'$. Circles are isodensity contours assuming $(\alpha_{1,t+1}, \alpha_{2,t+1})' \mid (\alpha_{1t}, \alpha_{2t})' \sim N_2((\alpha_{1t}, \alpha_{2t})', I_2)$.
Figure 2. Stiefel manifold for $p = 2$ and $r = 1$.
Figure 3. Matrix Langevin density kernels for $p = 2$ and $r = 1$.
Figure 4. Normalized distances $\delta_t$ for the settings $p \in \{2, 10, 20\}$, $r = 1$, $\rho = 0.1$ and $d = 50$.
Figure 5. Normalized distances $\delta_t$ for the settings $p \in \{2, 10, 20\}$, $r = 1$, $\rho = 0.1$ and $d = 500$.
Figure 6. Normalized distances $\delta_t$ for the settings $p = 2$, $r = 1$, $\rho = 1$ and $d \in \{5, 50, 500\}$.
Figure 7. Normalized distances $\delta_t$ for the settings $p = 2$, $r = 1$, $\rho = 0.1$ and $d \in \{5, 50, 500\}$.
Figure 8. Normalized distances $\delta_t$ for the settings $p = 3$, $r \in \{1, 2\}$, $\rho = 0.1$ and $d \in \{500, 800\}$.
Figure 9. Normalized distances $\delta_t$ for the settings $p \in \{2, 10, 20\}$, $r = 1$, $\rho = 0.1$ and $d = 50$. The initial value of the filtering algorithm is $-\alpha_0$.
