1. Introduction
Let $\{X_t\}$, $t \in \mathbb{Z}$, be a $d$-dimensional weakly stationary time series, where each component $X_t^k$, $k = 1, \dots, d$, is a complex-valued random variable on the same probability space $(\Omega, \mathcal{F}, \mathbb{P})$. It is a second-order, and in this sense, translation-invariant process:
$$\mathbb{E} X_t = \mu, \qquad \mathbb{E}\big[(X_{t+h} - \mu)(X_t - \mu)^*\big] = C(h), \qquad t, h \in \mathbb{Z},$$
where $C(h)$, $h \in \mathbb{Z}$, is the non-negative definite covariance matrix function of the process. (Here and below, if $A$ is a matrix then $A^*$ denotes its adjoint matrix, i.e., its complex conjugate transpose. Vectors like $X_t$ are written as column matrices, so $X_t^*$ is a row matrix. All the results are valid for real-valued time series as well with no change; then $*$ denotes matrix transpose.) Without loss of generality, from now on it is assumed that $\mu = 0$.
Thus, the considered random variables will be $d$-dimensional, square integrable, zero expectation random complex vectors whose components belong to the Hilbert space $L^2(\Omega, \mathcal{F}, \mathbb{P})$. The orthogonality of the random vectors $X$ and $Y$ is defined by the relationship $\mathbb{E}(X Y^*) = O$, the zero matrix.
The past of $\{X_t\}$ until time $n \in \mathbb{Z}$ is the closed linear span in $L^2(\Omega, \mathcal{F}, \mathbb{P})$ of the past and present values of the components of the process:
$$H_n^- = H_n^-(X) := \overline{\operatorname{span}}\{X_t^k : t \le n, \; k = 1, \dots, d\}.$$
The remote past of $\{X_t\}$ is $H_{-\infty} := \bigcap_{n \in \mathbb{Z}} H_n^-$. The process $\{X_t\}$ is called regular if $H_{-\infty} = \{0\}$ and it is called singular if $H_{-\infty} = H(X)$, the closed linear span of all $X_t^k$, $t \in \mathbb{Z}$. Of course, there is a range of cases between these two extremes.
Singular processes are also called deterministic (see, e.g., Brockwell and Davis 1991), because based on the past $H_0^-$, future values $X_1$, $X_2$, …, can be predicted with zero mean square error. On the other hand, regular processes are also called purely non-deterministic, since their behavior is completely influenced by random innovations. Consequently, knowing $H_0^-$, future values $X_1$, $X_2$, …, can be predicted only with positive mean square errors $\sigma_1^2$, $\sigma_2^2$, …, and $\lim_{h \to \infty} \sigma_h^2 = \operatorname{tr} C(0)$. This shows why studying regular time series is a primary target both in theory and in applications. The Wold decomposition proves that any non-singular process can be decomposed into an orthogonal sum of a nonzero regular process and a singular process. This also supports why it is important to study regular processes.
The Wold decomposition implies (see, e.g., the classical references Rozanov 1967 and Wiener and Masani 1957) that $\{X_t\}$ is regular if and only if it can be written as a causal infinite moving average (a Wold representation)
$$X_t = \sum_{j=0}^{\infty} b(j)\,\xi_{t-j}, \qquad t \in \mathbb{Z}, \tag{1}$$
where $\{\xi_t\}$ is an $r$-dimensional ($r \le d$) orthonormal white noise sequence: $\mathbb{E}\xi_t = 0$, $\mathbb{E}(\xi_s\,\xi_t^*) = \delta_{st}\, I_r$, where $I_r$ is the $r \times r$ identity matrix, and $b(j) \in \mathbb{C}^{d \times r}$ with $\sum_{j=0}^{\infty}\|b(j)\|^2 < \infty$. If the white noise process $\{\xi_t\}$ in (1) is given by the Wold representation, then it is unique up to multiplication by a constant unitary matrix; therefore, it is called a fundamental process of the regular time series. In this case the pasts of $\{X_t\}$ and $\{\xi_t\}$ are identical: $H_n^-(X) = H_n^-(\xi)$ for any $n \in \mathbb{Z}$.
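To make (1) concrete, here is a minimal simulation sketch that builds a truncated causal moving average from orthonormal white noise; the truncation lag q, the coefficient decay $1/(j+1)^2$, and all variable names are illustrative assumptions, not part of the theory.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, q, n = 3, 2, 20, 1000   # dimension d, rank r, truncation lag, sample size (all illustrative)

# square-summable d x r coefficient matrices b(j), j = 0, ..., q
b = [rng.standard_normal((d, r)) / (j + 1) ** 2 for j in range(q + 1)]

# r-dimensional orthonormal white noise: E xi_t = 0, E(xi_s xi_t^*) = delta_{st} I_r
xi = rng.standard_normal((n + q, r))

# truncated causal moving average X_t = sum_{j=0}^{q} b(j) xi_{t-j}, cf. (1)
X = np.stack([sum(b[j] @ xi[t + q - j] for j in range(q + 1)) for t in range(n)])
```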
An important use of the Wold representation is that the best linear $h$-step ahead prediction $\hat X_h$ can be given in terms of it. If the present time is 0, then the orthogonal projection of a future random vector $X_h$ ($h \ge 1$) to the Hilbert subspace $H_0^-$ representing past and present is
$$\hat X_h = \operatorname{proj}_{H_0^-} X_h = \sum_{j=h}^{\infty} b(j)\,\xi_{h-j}. \tag{2}$$
Then $\hat X_h$ gives the best linear prediction of $X_h$, with minimal mean square error.
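In the same illustrative spirit, the following sketch evaluates the projection (2) from truncated Wold coefficients; again, the truncation and every name are assumptions made only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, q = 3, 2, 20
b = [rng.standard_normal((d, r)) / (j + 1) ** 2 for j in range(q + 1)]
xi = rng.standard_normal((q + 1, r))    # xi_{-q}, ..., xi_0: the noise known at present time 0

h = 2
# hat X_h = sum_{j=h}^{q} b(j) xi_{h-j}: only noise up to time 0 enters, cf. (2)
X_hat = sum(b[j] @ xi[q + h - j] for j in range(h, q + 1))

# the error X_h - hat X_h = sum_{j=0}^{h-1} b(j) xi_{h-j} has covariance sum_{j<h} b(j) b(j)^*
err_cov = sum(b[j] @ b[j].T for j in range(h))
```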
An alternative way to write the Wold representation is
$$X_t = \sum_{j=0}^{\infty} a(j)\,\eta_{t-j}, \tag{3}$$
where $\{\eta_t\}$ is the $d$-dimensional white noise process of innovations:
$$\eta_t := X_t - \operatorname{proj}_{H_{t-1}^-} X_t, \tag{4}$$
where $\operatorname{proj}_{H_{t-1}^-} X_t$ denotes the orthogonal projection of the random vector $X_t$ to the Hilbert subspace $H_{t-1}^-$. Furthermore, $a(j) \in \mathbb{C}^{d \times d}$, $\sum_{j=0}^{\infty} \|a(j)\|^2 < \infty$, and $\Sigma := \mathbb{E}(\eta_t\,\eta_t^*)$ is a $d \times d$ non-negative definite matrix of rank $r$: the covariance matrix of the best linear one-step ahead prediction error.
It is also classical that any weakly stationary process has a non-negative definite spectral measure matrix $dF$ on $[-\pi, \pi]$ such that
$$C(h) = \int_{-\pi}^{\pi} e^{ih\omega}\, dF(\omega), \qquad h \in \mathbb{Z}.$$
Then $\{X_t\}$ is regular (see again, e.g., Rozanov 1967 and Wiener and Masani 1957) if and only if $dF$ is absolutely continuous, the spectral density $f$ has a.e. constant rank $r$, and can be factored in the form
$$f(\omega) = \frac{1}{2\pi}\,\phi(\omega)\,\phi^*(\omega) \qquad \text{for a.e. } \omega \in [-\pi, \pi], \tag{5}$$
where
$$\phi(\omega) = \sum_{j=0}^{\infty} \phi(j)\, e^{-ij\omega}, \qquad \sum_{j=0}^{\infty} \|\phi(j)\|^2 < \infty, \tag{6}$$
and $\|\cdot\|$ denotes spectral norm. Here the sequence of coefficients $\{\phi(j)\}$ is not necessarily the same as in the Wold decomposition. Furthermore,
$$\Phi(z) := \sum_{j=0}^{\infty} \phi(j)\, z^j, \qquad z \in D,$$
so the entries of the analytic matrix function $\Phi(z)$ are analytic functions in the open unit disc $D$ and belong to the class $L^2$ on the unit circle $T$; consequently, they belong to the Hardy space $H^2$. It is written as $\Phi \in H^2(D, \mathbb{C}^{d \times r})$ or briefly $\Phi \in H^2$.
Recall that the Hardy space $H^p$, $0 < p \le \infty$, denotes the space of all functions $g$ analytic in $D$ whose $L^p$ norms over all circles $\{z : |z| = \rho\}$, $0 < \rho < 1$, are bounded; see, e.g., (Rudin 2006, Definition 17.7). If $p \ge 1$, then equivalently, $H^p$ is the Banach space of all functions $g \in L^p(T)$ such that
$$g(\omega) = \sum_{k=0}^{\infty} \hat g(k)\, e^{-ik\omega}, \qquad \hat g(k) := \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{ik\omega}\, g(\omega)\, d\omega, \tag{7}$$
so the Fourier series of $g$ is one-sided, $\hat g(k) = 0$ when $k < 0$; see, e.g., (Fuhrmann 2014, sct. II.12). Notice that in Formulas (6) and (7) there is a negative sign in the exponent; this is a matter of convention that I am going to use in the sequel too.
An especially important special case of Hardy spaces is $H^2$, which is a Hilbert space, and which by Fourier transform is isometrically isomorphic with the $\ell^2$ space of sequences $\{a_k\}_{k=0}^{\infty}$ with norm square
$$\|\{a_k\}\|_{\ell^2}^2 = \sum_{k=0}^{\infty} |a_k|^2.$$
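As a small numerical illustration of this sign convention and of $H^2$ membership (the test function $1/(1 - 0.6 z)$ and the grid size are arbitrary choices):

```python
import numpy as np

a, n = 0.6, 512
omega = 2 * np.pi * np.arange(n) / n
g = 1.0 / (1.0 - a * np.exp(-1j * omega))   # boundary value of 1/(1 - a z), analytic in D for |a| < 1

# Fourier coefficients with the convention of (6)-(7): g_hat(k) = (1/2pi) int e^{ikw} g(w) dw
g_hat = lambda k: np.mean(g * np.exp(1j * k * omega))

print(abs(g_hat(-1)), abs(g_hat(-2)))                  # ~0: the series is one-sided
print(g_hat(0).real, g_hat(1).real, g_hat(2).real)     # ~1, a, a^2: g(w) = sum_k a^k e^{-ikw}
```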
For a one-dimensional time series $\{X_t\}$ ($d = 1$) there exists a rather simple sufficient and necessary condition of regularity, given by Kolmogorov (1941):
- (1) $\{X_t\}$ has an absolutely continuous spectral measure with density $f$;
- (2) $\log f \in L^1([-\pi, \pi])$, that is, $\int_{-\pi}^{\pi} \log f(\omega)\, d\omega > -\infty$.
Then the Kolmogorov–Szegő formula also holds:
$$\sigma^2 = 2\pi\, \exp\left(\frac{1}{2\pi}\int_{-\pi}^{\pi} \log f(\omega)\, d\omega\right),$$
where $\sigma^2$ is the variance of the innovations $\eta_t$ defined by (4), that is, the variance of the one-step ahead prediction error.
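A quick numerical check of the Kolmogorov–Szegő formula on an AR(1) example (the parameter values are arbitrary; the density below is the standard AR(1) spectral density):

```python
import numpy as np

# AR(1): X_t = a X_{t-1} + e_t with innovation variance s2;
# spectral density f(w) = s2 / (2 pi |1 - a e^{-iw}|^2)
a, s2 = 0.7, 1.3
w = np.linspace(-np.pi, np.pi, 200001)
f = s2 / (2 * np.pi * np.abs(1 - a * np.exp(-1j * w)) ** 2)

sigma2 = 2 * np.pi * np.exp(np.mean(np.log(f)))   # (1/2pi) int log f dw as a grid average
print(sigma2)                                     # ~1.3: recovers the innovation variance s2
```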
For a multidimensional time series $\{X_t\}$ which has full rank, that is, when $f(\omega)$ has a.e. full rank $r = d$, and so the innovations $\eta_t$ defined by (4) have full rank $d$, there exists a similar simple sufficient and necessary condition of regularity; see Rozanov (1967) and Wiener and Masani (1957):
- (1) $\{X_t\}$ has an absolutely continuous spectral measure matrix $dF$ with density matrix $f(\omega)$;
- (2) $\log\det f \in L^1([-\pi, \pi])$, that is, $\int_{-\pi}^{\pi} \log\det f(\omega)\, d\omega > -\infty$.
Then the $d$-dimensional Kolmogorov–Szegő formula also holds:
$$\det\Sigma = (2\pi)^d \exp\left(\frac{1}{2\pi}\int_{-\pi}^{\pi} \log\det f(\omega)\, d\omega\right), \tag{8}$$
where $\Sigma$ is the covariance matrix of the innovations $\eta_t$ defined by (4).
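The $d$-dimensional formula can be checked numerically the same way. In the sketch below, a $2 \times 2$ density is built from a factor $\phi(\omega) = b_0 + b_1 e^{-i\omega}$; the matrices $b_0$, $b_1$ are arbitrary choices with $\det(b_0 + b_1 z) \neq 0$ for $|z| \le 1$, so that $\Sigma = b_0 b_0^*$ and $\det\Sigma = 1$ here.

```python
import numpy as np

b0 = np.eye(2)
b1 = np.array([[0.5, 0.2], [0.1, 0.3]])   # zeros of det(b0 + b1 z) lie outside the unit disc

w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
phi = b0[None] + b1[None] * np.exp(-1j * w)[:, None, None]   # phi(w) on the grid
f = phi @ phi.conj().transpose(0, 2, 1) / (2 * np.pi)        # f = (1/2pi) phi phi^*, cf. (5)

logdet = np.log(np.linalg.det(f).real)
print((2 * np.pi) ** 2 * np.exp(np.mean(logdet)))            # ~1.0 = det(b0 @ b0^T), cf. (8)
```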
Unfortunately, the generic case of regular time series, when the rank of the process can be smaller than the dimension, is rather complicated. To the best of my knowledge, in that case (Rozanov 1967, Theorem 8.1) gives a necessary and sufficient condition of regularity. Namely, a $d$-dimensional stationary time series $\{X_t\}$ is regular and of rank $r$, $1 \le r \le d$, if and only if each of the following conditions holds:
- (1) It has an absolutely continuous spectral measure matrix $dF$ with density matrix $f(\omega)$ which has rank $r$ for a.e. $\omega \in [-\pi, \pi]$.
- (2) The density matrix $f(\omega)$ has a principal minor $M(\omega) = \det[f_{i_p i_q}(\omega)]_{p,q=1}^{r}$ of order $r$, which is nonzero a.e. and
$$\int_{-\pi}^{\pi} \log M(\omega)\, d\omega > -\infty.$$
- (3) Let $M_{\ell k}(\omega)$ denote the determinant obtained from $M(\omega)$ by replacing its $\ell$th row by the row $[f_{k i_1}(\omega), \dots, f_{k i_r}(\omega)]$. Then the functions $M_{\ell k}(\omega)/M(\omega)$ are boundary values of functions of the Nevanlinna class $N$.
It is immediately remarked in the cited reference that “unfortunately, there is no general method determining, from the boundary value $\gamma(e^{-i\omega})$ of a function $\gamma$, whether it belongs to the class $N$”.
Recall that the Nevanlinna class $N$ consists of all functions $g$ analytic in the open unit disc $D$ that can be written as a ratio $g = h_1/h_2$, $h_1 \in H^\infty$, $h_2 \in H^\infty$, $h_2(z) \neq 0$ for $z \in D$, where $H^\infty$ denotes the Hardy space of bounded analytic functions in $D$; see, e.g., (Nikolski 2019, Definition 3.3.1).
The aim of this paper is to extend from the one-dimensional case to the multidimensional case a well-known sufficient condition for regularity, together with a method of finding an $H^2$ spectral factor and the covariance matrix $\Sigma$ in the case of a smooth spectral density.
  2. Generic Regular Processes
To find an $H^2$ spectral factor, when possible, a simple idea is to use a spectral decomposition of the spectral density matrix. (Take care that here we use the word ‘spectral’ in two different meanings. On the one hand, we use the spectral density of a time series in terms of a Fourier spectrum, and on the other hand, we take the spectral decomposition of a matrix in terms of eigenvalues and eigenvectors.)
So let $\{X_t\}$ be a $d$-dimensional stationary time series and assume that its spectral measure matrix $dF$ is absolutely continuous with density matrix $f(\omega)$ which has rank $r$, $1 \le r \le d$, for a.e. $\omega \in [-\pi, \pi]$. Take the parsimonious spectral decomposition of the self-adjoint, non-negative definite matrix $f(\omega)$:
$$f(\omega) = \tilde u(\omega)\,\Lambda_r(\omega)\,\tilde u^*(\omega), \tag{9}$$
where
$$\Lambda_r(\omega) = \operatorname{diag}[\lambda_1(\omega), \dots, \lambda_r(\omega)], \qquad \lambda_1(\omega) \ge \cdots \ge \lambda_r(\omega) > 0 \tag{10}$$
for a.e. $\omega \in [-\pi, \pi]$, is the diagonal matrix of nonzero eigenvalues of $f(\omega)$ and
$$\tilde u(\omega) = [u_1(\omega), \dots, u_r(\omega)] \in \mathbb{C}^{d \times r} \tag{11}$$
is a sub-unitary matrix of corresponding right eigenvectors, not unique even if all eigenvalues are distinct. Then, still, we have
$$\tilde u^*(\omega)\,\tilde u(\omega) = I_r. \tag{12}$$
The matrix function $\Lambda_r(\omega)$ is a self-adjoint, positive definite function, and
$$\operatorname{tr}\Lambda_r(\omega) = \operatorname{tr} f(\omega),$$
where $\operatorname{tr} f(\omega)$ is the density function of a finite spectral measure. This shows that the integral of $\Lambda_r$ over $[-\pi, \pi]$ is finite. Thus, assuming also that $\log\det\Lambda_r \in L^1([-\pi, \pi])$ (see (15) below), $\Lambda_r$ can be considered the spectral density function of an $r$-dimensional full rank regular process. So it can be factored, and in fact, we may take a miniphase $H^2$ spectral factor $D(\omega)$ of it:
$$\Lambda_r(\omega) = \frac{1}{2\pi}\, D(\omega)\, D^*(\omega),$$
where $D(\omega) = \operatorname{diag}[\delta_1(\omega), \dots, \delta_r(\omega)]$ is a diagonal matrix.
Then a simple way to factorize $f$ is to choose
$$\phi(\omega) = \tilde u_\theta(\omega)\, D(\omega), \qquad \tilde u_\theta(\omega) := \big[e^{i\theta_1(\omega)} u_1(\omega), \dots, e^{i\theta_r(\omega)} u_r(\omega)\big], \tag{13}$$
where $\theta = (\theta_1, \dots, \theta_r)$, each $\theta_k$ being a real-valued measurable function on $[-\pi, \pi]$, so that $|e^{i\theta_k(\omega)}| = 1$ for any $\omega \in [-\pi, \pi]$, but otherwise arbitrary, and $\tilde u_\theta(\omega)$ still denotes a sub-unitary matrix of eigenvectors of $f(\omega)$, in the same order as that of the eigenvalues. Indeed, the phase factors cancel in the product, so $\frac{1}{2\pi}\phi(\omega)\phi^*(\omega) = \tilde u_\theta(\omega)\Lambda_r(\omega)\tilde u_\theta^*(\omega) = f(\omega)$.
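Numerically, the pointwise decomposition (9)–(13) is easy to compute on a frequency grid; the hard analytic part is choosing the phases $\theta_k$. A minimal sketch follows (function name, rank tolerance, and grid handling are illustrative assumptions):

```python
import numpy as np

def candidate_factor(f_grid, theta=None, tol=1e-12):
    """phi(w) = u_theta(w) D(w) from the parsimonious eigendecomposition
    f = u Lambda u^*, cf. (9)-(13); f_grid has shape (n, d, d)."""
    lam, u = np.linalg.eigh(f_grid)        # eigenvalues in ascending order
    r = int(np.sum(lam[0] > tol))          # numerical rank, assumed constant in w
    lam = lam[:, -r:][:, ::-1]             # nonzero eigenvalues, descending, cf. (10)
    u = u[:, :, -r:][:, :, ::-1]           # corresponding eigenvectors, cf. (11)
    if theta is not None:                  # multiply column k by e^{i theta_k(w)}
        u = u * np.exp(1j * theta)[:, None, :]
    D = np.sqrt(2 * np.pi * lam)           # Lambda_r = (1/2pi) D D^*
    return u * D[:, None, :]               # then (1/2pi) phi phi^* = f pointwise
```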
To the best of my knowledge, it is not known whether for any regular time series $\{X_t\}$ there exists such a matrix-valued function $\tilde u_\theta$ that $\phi$ defined by (13) has a Fourier series with only non-negative powers of $e^{-i\omega}$. Equivalently, does there exist an $H^2$ analytic matrix function $\Phi(z)$ whose boundary value is the above spectral factor $\phi(\omega)$ with some $\theta$, according to Formulas (5)–(7)?
Thus, the role of each $e^{i\theta_k(\omega)}$ is to modify the corresponding eigenvector $u_k(\omega)$ so that $e^{i\theta_k(\omega)} u_k(\omega)$ has a Fourier series with only non-negative powers of $e^{-i\omega}$, this way achieving that $\tilde u_\theta \in H^2$. Equivalently, if an entry $u_{jk}(\omega)$ of $\tilde u(\omega)$ is the boundary value of a complex function $\gamma(z)$ defined in the unit disc, $\gamma(e^{-i\omega}) = u_{jk}(\omega)$, and $\gamma$ has singularities for $z \in D$, then we would like to find a complex function $\gamma_\theta(z)$ in the unit disc so that $|\gamma_\theta(e^{-i\omega})| = 1$ and $v := \gamma_\theta\,\gamma$ is analytic in the open unit disc $D$ and continuous in the closed unit disc $\bar D$,
$$v(e^{-i\omega}) = \gamma_\theta(e^{-i\omega})\,\gamma(e^{-i\omega}) = e^{i\theta_k(\omega)}\, u_{jk}(\omega). \tag{14}$$
Carrying out this procedure for $k = 1, \dots, r$, one would obtain an $H^2$ sub-unitary matrix function.
Example 2.2.4 in (Szabados 2022) shows that, at least theoretically, this task can be carried out in certain cases. Furthermore, as a very similar problem, in the case of a rational spectral density one can always find an inner matrix multiplier so that it gives a rational analytic matrix function $\Phi(z)$ whose boundary value is an $H^2$ spectral factor $\phi(\omega)$; see, e.g., (Rozanov 1967, chp. I, Theorem 10.1).
      
Theorem 1.
- (a) Assume that a $d$-dimensional stationary time series $\{X_t\}$ is regular of rank $r$, $1 \le r \le d$. Then for $\Lambda_r$ defined by (10) one has $\log\det\Lambda_r \in L^1([-\pi, \pi])$; equivalently,
$$\int_{-\pi}^{\pi} \log\det\Lambda_r(\omega)\, d\omega = \sum_{j=1}^{r} \int_{-\pi}^{\pi} \log\lambda_j(\omega)\, d\omega > -\infty. \tag{15}$$
- (b) If, moreover, one assumes that the regular time series $\{X_t\}$ is such that it has an $H^2$ spectral factor of the form (13), then the following statement holds as well: the sub-unitary matrix function $\tilde u_\theta$ appearing in the spectral decomposition of $f$ in (9) can be chosen so that it belongs to the Hardy space $H^2$, and thus
$$\tilde u_\theta(\omega) = \sum_{j=0}^{\infty} \psi(j)\, e^{-ij\omega}, \qquad \sum_{j=0}^{\infty} \|\psi(j)\|^2 < \infty. \tag{16}$$
In this case one may call $\tilde u_\theta$ an inner matrix function.
The next theorem gives a sufficient condition for the regularity of a generic weakly stationary time series; compare with the statements of Theorem 1 above. Observe that assumptions (1) and (2) in the next theorem are necessary conditions of regularity as well. Only assumption (3) is not known to be necessary. We think that these assumptions are simpler to check in practice than those of Rozanov's theorem cited above. By Formula (13), checking assumption (3) means that for each eigenvector $u_k(\omega)$ of $f(\omega)$ we are searching for a complex function multiplier $e^{i\theta_k(\omega)}$ of unit absolute value that gives an $H^2$ function as a result.
      
Theorem 2. Let $\{X_t\}$ be a $d$-dimensional time series. It is regular of rank $r$, $1 \le r \le d$, if the following three conditions hold.
- (1) It has an absolutely continuous spectral measure matrix $dF$ with density matrix $f(\omega)$ which has rank $r$ for a.e. $\omega \in [-\pi, \pi]$.
- (2) For $\Lambda_r$ defined by (10) one has $\log\det\Lambda_r \in L^1([-\pi, \pi])$; equivalently, (15) holds.
- (3) The sub-unitary matrix function $\tilde u_\theta$ appearing in the spectral decomposition of $f$ in (9) can be chosen so that it belongs to the Hardy space $H^2$; thus, (16) holds.
Next we discuss a multivariable version of a well-known one-dimensional theorem; see, e.g., (Lamperti 1977, sct. 4.4). This theorem gives the Wold representation of a regular time series $\{X_t\}$. First let us recall some facts we are going to use. A sufficient and necessary condition of regularity is given by the factorization (5) and (6) of the spectral density $f$, where the $d \times r$ spectral factor $\phi$ is in $H^2$. Using Singular Value Decomposition (SVD), we can write that
$$\phi(\omega) = \tilde u(\omega)\,\Delta(\omega)\, V^*(\omega), \tag{17}$$
where $\tilde u(\omega)$ is a $d \times r$ sub-unitary matrix, $V(\omega)$ is an $r \times r$ unitary matrix, and $\Delta(\omega)$ is an $r \times r$ diagonal matrix of positive singular values $\delta_j(\omega)$, $j = 1, \dots, r$, for a.e. $\omega \in [-\pi, \pi]$. Clearly, $\lambda_j(\omega) = \frac{1}{2\pi}\,\delta_j^2(\omega)$ for $j = 1, \dots, r$.
The (generalized) inverse of $\phi(\omega)$ is not unique when $r < d$. Let $\phi^+(\omega)$ be the Moore–Penrose inverse of $\phi(\omega)$:
$$\phi^+(\omega) = V(\omega)\,\Delta^{-1}(\omega)\,\tilde u^*(\omega).$$
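For a matrix of full column rank this Moore–Penrose inverse is a left inverse, which is what is used below; a minimal numerical check (random matrix and all names are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 4, 2
phi = rng.standard_normal((d, r)) + 1j * rng.standard_normal((d, r))  # full column rank a.s.

u, delta, Vh = np.linalg.svd(phi, full_matrices=False)   # phi = u diag(delta) V^*, cf. (17)
phi_plus = Vh.conj().T @ np.diag(1.0 / delta) @ u.conj().T

assert np.allclose(phi_plus, np.linalg.pinv(phi))   # agrees with the built-in pseudoinverse
assert np.allclose(phi_plus @ phi, np.eye(r))       # left inverse: phi^+ phi = I_r
```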
We also need the spectral (Cramér) representation of the stationary time series:
$$X_t = \int_{-\pi}^{\pi} e^{it\omega}\, dZ(\omega), \qquad t \in \mathbb{Z},$$
where $\{Z(\omega) : \omega \in [-\pi, \pi]\}$ is a $d$-dimensional stochastic process with orthogonal increments, obtained by the isometry between the Hilbert spaces $H(X)$ and $L^2([-\pi, \pi], dF)$; see, e.g., (Bolla and Szabados 2021, sct. 1.3). Namely, if $Y = \int_{-\pi}^{\pi} g(\omega)\, dZ(\omega)$ and $W = \int_{-\pi}^{\pi} h(\omega)\, dZ(\omega)$ with $g, h \in L^2(dF)$, then
$$\mathbb{E}(Y\, W^*) = \int_{-\pi}^{\pi} g(\omega)\, dF(\omega)\, h^*(\omega). \tag{18}$$
Theorem 3. Assume that the spectral measure of a $d$-dimensional weakly stationary time series $\{X_t\}$ is absolutely continuous with density $f(\omega)$ which has constant rank $r$, $1 \le r \le d$, for a.e. $\omega$. Moreover, assume that there is a finite constant $M$ such that $\|f(\omega)\| \le M$ for a.e. $\omega \in [-\pi, \pi]$, and $f$ has a factorization $f(\omega) = \frac{1}{2\pi}\,\phi(\omega)\,\phi^*(\omega)$, where $\phi \in H^2$ and its Moore–Penrose inverse $\phi^+ \in H^2$ as well.
Then the time series is regular and its fundamental white noise process can be obtained as
$$\xi_t = \int_{-\pi}^{\pi} e^{it\omega}\,\phi^+(\omega)\, dZ(\omega), \tag{19}$$
where
$$\phi^+(\omega) = \sum_{j=0}^{\infty} \gamma(j)\, e^{-ij\omega} \tag{20}$$
is the Fourier series of $\phi^+$, convergent in $L^2$ sense. The sequence of coefficients $b(j)$ of the Wold representation (1) is given by the $L^2$ convergent Fourier series
$$\phi(\omega) = \sum_{j=0}^{\infty} b(j)\, e^{-ij\omega}. \tag{21}$$

Proof. The regularity of $\{X_t\}$ obviously follows from the assumptions by (5) and (6).
First let us verify that the stochastic integral (19) is correct, that is, the components of $\xi_t$ belong to $L^2(\Omega, \mathcal{F}, \mathbb{P})$:
$$\mathbb{E}(\xi_t\,\xi_t^*) = \int_{-\pi}^{\pi} \phi^+(\omega)\, f(\omega)\,\phi^{+*}(\omega)\, d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} \phi^+(\omega)\,\phi(\omega)\,\phi^*(\omega)\,\phi^{+*}(\omega)\, d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} I_r\, d\omega = I_r,$$
since $\phi^+\phi = I_r$ a.e. This also justifies that
$$dZ_\xi(\omega) = \phi^+(\omega)\, dZ(\omega), \tag{22}$$
where $\{Z_\xi(\omega)\}$ denotes the process with orthogonal increments in the Cramér representation of $\{\xi_t\}$.
Second, let us check that the sequence defined by (19) is orthonormal, using the isometry (18):
$$\mathbb{E}(\xi_s\,\xi_t^*) = \int_{-\pi}^{\pi} e^{i(s-t)\omega}\,\phi^+(\omega)\, f(\omega)\,\phi^{+*}(\omega)\, d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i(s-t)\omega}\, I_r\, d\omega = \delta_{st}\, I_r.$$
Third, let us show that $\xi_t$ is orthogonal to the past $H_{t-1}^-(X)$ for any $t \in \mathbb{Z}$:
$$\mathbb{E}(\xi_t\, X_s^*) = \int_{-\pi}^{\pi} e^{i(t-s)\omega}\,\phi^+(\omega)\, f(\omega)\, d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i(t-s)\omega}\,\phi^*(\omega)\, d\omega = O$$
for any $s < t$, since $\phi \in H^2$, so its Fourier coefficients with negative indices are zero.
Fourth, let us see that $\xi_t \in H_t^-(X)$ componentwise for any $t \in \mathbb{Z}$. Because of stationarity, it is enough to show that $\xi_0 \in H_0^-(X)$. Since $H_0^-(X)$ is the closure in $L^2(\Omega, \mathcal{F}, \mathbb{P})$ of the components of all finite linear combinations of the form $\sum_{j=0}^{n} c_j\, X_{-j}$, by the isometry it is equivalent to the fact that $\phi^+(\omega)$ belongs to the closure in $L^2(dF)$ of all finite linear combinations of the form $\sum_{j=0}^{n} c_j\, e^{-ij\omega}$.
Using the assumed boundedness $\|f(\omega)\| \le M$, we obtain that
$$\mathbb{E}\Big\|\xi_0 - \sum_{j=0}^{n}\gamma(j)\, X_{-j}\Big\|^2 = \int_{-\pi}^{\pi} \operatorname{tr}\Big[\big(\phi^+(\omega) - g_n(\omega)\big)\, f(\omega)\,\big(\phi^+(\omega) - g_n(\omega)\big)^*\Big]\, d\omega \le r\, M \int_{-\pi}^{\pi} \big\|\phi^+(\omega) - g_n(\omega)\big\|^2\, d\omega, \tag{23}$$
where $g_n(\omega) := \sum_{j=0}^{n}\gamma(j)\, e^{-ij\omega}$ denotes the $n$th partial sum of (20).
We assumed that $\phi^+ \in H^2$, which means that $\phi^+$ has a one-sided Fourier series (20) which converges in $L^2([-\pi, \pi], d\omega)$, where $d\omega$ denotes Lebesgue measure:
$$\lim_{n \to \infty} \int_{-\pi}^{\pi} \big\|\phi^+(\omega) - g_n(\omega)\big\|_F^2\, d\omega = 0,$$
where $\|\cdot\|_F$ is the Frobenius norm. Since the spectral norm squared of a matrix is bounded by the sum of the absolute values squared of the entries of the matrix, that is, by the Frobenius norm squared, these imply that the last term in (23) tends to 0 as $n \to \infty$. This shows that $\xi_0 \in H_0^-(X)$.
Fifth, by (17), we see that
$$\phi(\omega)\,\phi^+(\omega) = \tilde u(\omega)\,\Delta(\omega)\, V^*(\omega)\, V(\omega)\,\Delta^{-1}(\omega)\,\tilde u^*(\omega) = \tilde u(\omega)\,\tilde u^*(\omega)$$
a.e. in $[-\pi, \pi]$. Consequently, the difference
$$X_t - \int_{-\pi}^{\pi} e^{it\omega}\,\phi(\omega)\,\phi^+(\omega)\, dZ(\omega) = \int_{-\pi}^{\pi} e^{it\omega}\,\big(I_d - \tilde u(\omega)\,\tilde u^*(\omega)\big)\, dZ(\omega)$$
is orthogonal to itself in $L^2(\Omega, \mathcal{F}, \mathbb{P})$, because $(I_d - \tilde u\,\tilde u^*)\, f\, (I_d - \tilde u\,\tilde u^*)^* = 0$ a.e.; so it is a zero vector. Then by (19) and (22),
$$X_t = \int_{-\pi}^{\pi} e^{it\omega}\,\phi(\omega)\, dZ_\xi(\omega). \tag{24}$$
Equation (24) shows that each entry of $e^{it\omega}\,\phi(\omega)$ belongs to $L^2\big([-\pi, \pi], \frac{1}{2\pi} I_r\, d\omega\big)$, the space corresponding to the spectral measure of $\{\xi_t\}$, so the isometry between this space and $H(\xi)$ justifies
$$X_t = \sum_{j=0}^{\infty} b(j)\,\xi_{t-j}. \tag{25}$$
Finally, the previous steps show that the innovation spaces of the sequences $\{X_t\}$ and $\{\xi_t\}$ are the same for any time $t$, so the pasts $H_t^-(X)$ and $H_t^-(\xi)$ agree as well for any $t \in \mathbb{Z}$. Thus (25) gives the Wold representation of $\{X_t\}$. □
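In practice the coefficients $b(j)$ and $\gamma(j)$ are simply Fourier coefficients of $\phi$ and $\phi^+$ and can be approximated on a grid. A toy sketch with a known two-term factor (the example factor and all names are assumptions):

```python
import numpy as np

# toy H^2 factor phi(w) = b(0) + b(1) e^{-iw} with d = 2, r = 1
b_true = [np.array([[1.0], [0.5]]), np.array([[0.3], [-0.2]])]

n = 256
w = 2 * np.pi * np.arange(n) / n
phi = np.array([sum(bj * np.exp(-1j * j * wk) for j, bj in enumerate(b_true)) for wk in w])

# b(j) = (1/2pi) int e^{ijw} phi(w) dw, cf. (21); the grid mean approximates the integral
b_hat = [np.mean(phi * np.exp(1j * j * w)[:, None, None], axis=0) for j in range(4)]
print([np.max(np.abs(b_hat[j].real - b_true[j])) if j < 2 else np.max(np.abs(b_hat[j]))
       for j in range(4)])   # first two recover b_true, the rest ~0 (one-sided series)
```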
   3. Smooth Eigenvalues of the Spectral Density
In the one-dimensional case there is a well-known sufficient condition of regularity, which at the same time gives a formula for an $H^2$ spectral factor and also for the white noise sequence and the coefficients in the Wold decomposition (1). This is the assumption that the process has a continuously differentiable positive spectral density $f(\omega)$ for any $\omega \in [-\pi, \pi]$; see, e.g., (Lamperti 1977, p. 76) or (Bolla and Szabados 2021, sct. 2.8.2).
This sufficient condition can be partially generalized to the multidimensional case. Suppose that a regular $d$-dimensional time series $\{X_t\}$ has an $H^2$ spectral factor of the form (13); equivalently, the sub-unitary matrix function $\tilde u_\theta$ appearing in the spectral decomposition of $f$ in (9) can be chosen so that it belongs to the Hardy space $H^2$. Then the smoothness of the nonzero eigenvalues of the spectral density $f$ gives a formula for an $H^2$ spectral factor. The argument in the paragraph around Equation (14) above shows that in certain cases one can find such a sub-unitary matrix function $\tilde u_\theta$.
      
Theorem 4. Let $\{X_t\}$ be a $d$-dimensional time series. It is regular of rank $r$, $1 \le r \le d$, if the following three conditions hold.
- (1) It has an absolutely continuous spectral measure matrix $dF$ with density matrix $f(\omega)$ which has rank $r$ for a.e. $\omega \in [-\pi, \pi]$.
- (2) Each nonzero eigenvalue $\lambda_j(\omega)$, $j = 1, \dots, r$, of $f(\omega)$ is a continuously differentiable positive function on $[-\pi, \pi]$.
- (3) The sub-unitary matrix function $\tilde u_\theta$ appearing in the spectral decomposition of $f$ in (9) can be chosen so that it belongs to the Hardy space $H^2$, and thus (16) holds.
Moreover, $\{X_t\}$ satisfies the conditions of Theorem 3 too, so Formulas (19)–(22) give the Wold representation of $\{X_t\}$.

Proof. Condition (2) implies that each $\log\lambda_j$ is also a continuously differentiable function on $[-\pi, \pi]$ and so it can be expanded into a uniformly convergent Fourier series
$$\log\lambda_j(\omega) = \sum_{k=-\infty}^{\infty} q_j(k)\, e^{-ik\omega}, \qquad q_j(-k) = \overline{q_j(k)}, \tag{26}$$
since $\log\lambda_j$ is real-valued. Define
$$\delta_j(\omega) := \sqrt{2\pi}\,\exp\Big(\frac{1}{2}\, q_j(0) + \sum_{k=1}^{\infty} q_j(k)\, e^{-ik\omega}\Big), \qquad \text{so that} \quad |\delta_j(\omega)|^2 = 2\pi\,\lambda_j(\omega), \tag{27}$$
and the diagonal matrix
$$D(\omega) := \operatorname{diag}[\delta_1(\omega), \dots, \delta_r(\omega)], \qquad \Lambda_r(\omega) = \frac{1}{2\pi}\, D(\omega)\, D^*(\omega). \tag{28}$$
Observe that each exponent in (27), and consequently each $\delta_j$, is a continuous function on $[-\pi, \pi]$, so it is in $L^2(T)$. Moreover, each exponent, and consequently each $\delta_j$, has only non-negative powers of $e^{-i\omega}$ in its Fourier series. So each $\delta_j$ belongs to the Hardy space $H^2$.
Substitute (28) into (9):
$$f(\omega) = \frac{1}{2\pi}\,\tilde u_\theta(\omega)\, D(\omega)\, D^*(\omega)\,\tilde u_\theta^*(\omega). \tag{29}$$
Thus we can take a spectral factor
$$\phi(\omega) = \tilde u_\theta(\omega)\, D(\omega). \tag{30}$$
Since each $\delta_j$ is bounded and by condition (3) each entry of $\tilde u_\theta$ is in $H^2$, each entry of $\phi$ is in $H^2$. It means that $\phi$ is an $H^2$ spectral factor as in (5) and (6), and consequently $\{X_t\}$ is regular.
Take the Moore–Penrose inverse of $\phi$:
$$\phi^+(\omega) = D^{-1}(\omega)\,\tilde u_\theta^*(\omega),$$
where by (27) each
$$\delta_j^{-1}(\omega) = \frac{1}{\sqrt{2\pi}}\,\exp\Big(-\frac{1}{2}\, q_j(0) - \sum_{k=1}^{\infty} q_j(k)\, e^{-ik\omega}\Big),$$
so it is also continuous and its Fourier series has only non-negative powers of $e^{-i\omega}$ too. It implies that $\phi^+ \in H^2$ as well.
Finally, since each $\delta_j$ is a continuous function on $[-\pi, \pi]$, and thus bounded, and the components of $\tilde u_\theta$ are bounded functions because $\tilde u_\theta$ is sub-unitary, it follows that $\|f(\omega)\|$ is bounded. □
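The construction (26)–(27) is also easy to realize numerically for a given smooth positive eigenvalue function; a sketch follows (grid, truncation order, and the test function are illustrative choices):

```python
import numpy as np

def outer_factor(lam, n_coef=64):
    """delta(w) with |delta|^2 = 2 pi lam(w) and a one-sided Fourier series,
    following (26)-(27); lam is sampled on a uniform grid over [-pi, pi)."""
    n = len(lam)
    w = -np.pi + 2 * np.pi * np.arange(n) / n
    q = [np.mean(np.log(lam) * np.exp(1j * k * w)) for k in range(n_coef)]   # q(k), k >= 0
    s = 0.5 * q[0] + sum(q[k] * np.exp(-1j * k * w) for k in range(1, n_coef))
    return np.sqrt(2 * np.pi) * np.exp(s)

w = -np.pi + 2 * np.pi * np.arange(512) / 512
lam = 1.0 + 0.5 * np.cos(w)                 # smooth and positive on [-pi, pi]
delta = outer_factor(lam)
print(np.max(np.abs(np.abs(delta) ** 2 - 2 * np.pi * lam)))   # ~0
```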
The $d$-dimensional Kolmogorov–Szegő Formula (8) gives only the determinant of the covariance matrix $\Sigma$ of the innovations in the full rank regular time series. Similar is the case when the rank $r$ of the process is less than $d$:
$$\det\Sigma_{(r)} = (2\pi)^r \exp\left(\frac{1}{2\pi}\int_{-\pi}^{\pi} \log\det\Lambda_r(\omega)\, d\omega\right), \tag{31}$$
where $\Lambda_r(\omega)$ is the diagonal matrix of the $r$ nonzero eigenvalues of $f(\omega)$ and $\Sigma_{(r)}$ is the covariance matrix of the innovation of an $r$-dimensional subprocess of rank $r$ of the original time series; see (Bolla and Szabados 2021, Corollary 4.5) or (Szabados 2022, Corollary 2.5).
Fortunately, under the conditions of Theorem 4, one can obtain the covariance matrix $\Sigma$ itself by a similar formula, as the next theorem shows.
      
Theorem 5. Assume that a weakly stationary $d$-dimensional time series $\{X_t\}$ satisfies the conditions of Theorem 4. Then the covariance matrix $\Sigma$ of the innovations of the process can be obtained as
$$\Sigma = 2\pi\,\psi(0)\,\operatorname{diag}\big[e^{q_1(0)}, \dots, e^{q_r(0)}\big]\,\psi^*(0), \qquad q_j(0) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \log\lambda_j(\omega)\, d\omega,$$
where $\lambda_j(\omega)$, $j = 1, \dots, r$, are the nonzero eigenvalues of the spectral density matrix $f(\omega)$ of the process, $\tilde u_\theta(\omega)$ is the $d \times r$ matrix of corresponding orthonormal eigenvectors chosen according to condition (3) of Theorem 4, and
$$\psi(0) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \tilde u_\theta(\omega)\, d\omega.$$

Proof. The error of the best linear one-step ahead prediction by (2), and at the same time the innovation, is
$$\eta_t = X_t - \operatorname{proj}_{H_{t-1}^-} X_t = b(0)\,\xi_t,$$
using the Wold decomposition of $\{X_t\}$. Thus the covariance matrix of the innovation is
$$\Sigma = \mathbb{E}(\eta_t\,\eta_t^*) = b(0)\, b(0)^*.$$
With the analytic matrix function $\Phi(z)$ corresponding to the Wold decomposition by (7), $b(j) = \hat\phi(j)$, the $j$th Fourier coefficient of the spectral factor $\phi$ in (30). Taking the Fourier series (16), let
$$\tilde u_\theta(\omega) = \sum_{k=0}^{\infty} \psi(k)\, e^{-ik\omega}, \qquad \psi(0) = \frac{1}{2\pi}\int_{-\pi}^{\pi} \tilde u_\theta(\omega)\, d\omega.$$
In addition, denote by (27)
$$\delta_j(\omega) = \sum_{k=0}^{\infty} \hat\delta_j(k)\, e^{-ik\omega}, \qquad \hat D(k) := \operatorname{diag}\big[\hat\delta_1(k), \dots, \hat\delta_r(k)\big].$$
Now using (30), it follows that
$$\phi(\omega) = \tilde u_\theta(\omega)\, D(\omega) = \sum_{j=0}^{\infty}\Big(\sum_{k=0}^{j} \psi(j-k)\,\hat D(k)\Big)\, e^{-ij\omega},$$
and
$$b(0) = \psi(0)\,\hat D(0).$$
Combining the previous results,
$$\Sigma = b(0)\, b(0)^* = \psi(0)\,\hat D(0)\,\hat D^*(0)\,\psi^*(0),$$
where $\hat D(0)$ is given by (27) and, by (26),
$$\hat\delta_j(0) = \sqrt{2\pi}\, e^{q_j(0)/2}, \qquad \text{so} \quad \hat D(0)\,\hat D^*(0) = 2\pi\,\operatorname{diag}\big[e^{q_1(0)}, \dots, e^{q_r(0)}\big].$$
This completes the proof of the theorem. □
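To close, a numerical sanity check of Theorem 5 on a rank-one toy example where condition (3) holds by construction; $\tilde u_\theta$, $\lambda$, and all parameters below are assumptions chosen for illustration only.

```python
import numpy as np

# d = 2, r = 1: u_theta(w) = (1, a e^{-iw})^T / sqrt(1+a^2) is sub-unitary with entries in H^2;
# lambda(w) = 1 + 0.5 cos(w) is smooth and positive
a, n = 0.8, 4096
w = np.linspace(-np.pi, np.pi, n, endpoint=False)
u = np.stack([np.ones(n), a * np.exp(-1j * w)]) / np.sqrt(1 + a ** 2)
lam = 1.0 + 0.5 * np.cos(w)

psi0 = u.mean(axis=1)                  # psi(0) = (1/2pi) int u_theta dw
q0 = np.mean(np.log(lam))              # q(0) = (1/2pi) int log lambda dw
Sigma = 2 * np.pi * np.exp(q0) * np.outer(psi0, psi0.conj())   # Theorem 5

# cross-check via b(0) b(0)^* with delta built as in (27)
q = [np.mean(np.log(lam) * np.exp(1j * k * w)) for k in range(64)]
delta = np.sqrt(2 * np.pi) * np.exp(0.5 * q[0] + sum(q[k] * np.exp(-1j * k * w) for k in range(1, 64)))
b0 = (u * delta).mean(axis=1)          # zeroth Fourier coefficient of phi = u_theta delta
print(np.max(np.abs(Sigma - np.outer(b0, b0.conj()))))         # ~0
```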