This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

In this survey paper, a summary of results to be found in a series of papers is presented. The subject of interest is the matrix algebraic properties of the Fisher information matrix (FIM) of stationary processes. The FIM is an ingredient of the Cramér-Rao inequality and belongs to the basics of asymptotic estimation theory in mathematical statistics. The FIM is interconnected with the Sylvester, Bezout and tensor Sylvester matrices. Through these interconnections it is shown that the FIM of scalar and multiple stationary processes fulfills the resultant matrix property. A statistical distance measure involving entries of the FIM is presented. In quantum information, a different statistical distance measure is set forth; it is related to the Fisher information but considers the information about a single parameter in a particular measurement procedure. The FIM of scalar stationary processes is also interconnected with the solutions of appropriate Stein equations, and conditions for the FIM to verify certain Stein equations are formulated. The presence of Vandermonde matrices is also emphasized.

In this survey paper, a summary of results derived and described in a series of papers is presented. It concerns some matrix algebraic properties of the Fisher information matrix (abbreviated as FIM) of stationary processes. An essential property emphasized in this paper is the matrix resultant property of the FIM of stationary processes. To be more explicit, consider the coefficients of two monic polynomials

A statistical distance measure involving entries of the FIM is presented and is based on [

The matrix Stein equation, see e.g., [

In this section we display the class of linear stationary processes whose corresponding Fisher information matrix shall be investigated in a matrix algebraic context. But first some basic definitions are set forth, see e.g., [

If a random variable _{t}

_{t}

_{t}_{t}_{X}_{t}_{X}_{r}_{s}_{r}_{r}_{s}_{s}

_{t}

_{t}^{2} < ∞

_{t}

_{X}_{X}

From Definition 2.3 it can be concluded that the joint probability distributions of the random variables {_{1}, _{2}, . . . _{tn}_{1+}_{k}_{2+}_{k}_{tn}_{+}_{k}_{1}, _{2}, . . . , _{n}

We display one of the most general linear stationary processes, the multivariate autoregressive, moving average and exogenous process, the VARMAX process. To be more specific, consider the vector difference equation representation of a linear system {

where _{j}^{n}^{×}^{n}_{j}^{n}^{×}^{n}_{j}^{n}^{×}^{n}_{0} ≡ _{0} ≡ _{0} ≡ _{n}

where

we use _{t}_{t}_{−1}. The matrix polynomials

The error {^{T}(^{T} denotes the transposition of matrix ^{2}(^{n}^{2}(^{p}^{+}^{q}^{+}^{r}^{)×1} is defined by

The vec operator transforms a matrix into a vector by stacking the columns of the matrix one underneath the other according to vec
^{n}^{×}^{n}^{(}^{p}^{+}^{q}^{+}^{r}^{)} is of the form

Representation (_{1}, _{2},. . ., _{p}_{1}, _{2},. . ., _{r}_{1}, _{2}, . . ., _{q}^{2}(

Before describing the control-exogenous variable

When the process (

which is a vector autoregressive and moving average process, VARMA(^{2}(

A VARMA process equivalent to the parameter matrix (

A description of the input variable ^{T}(

where _{x}

where ^{2} = −1, _{x}^{i}^{ω}_{x}^{i}^{ω}
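Since the expressions here involve evaluating polynomials on the unit circle e^{iω}, a small numerical sketch may be helpful. It computes the standard scalar ARMA spectral density f(ω) = (σ²/2π)·|b(e^{−iω})|²/|a(e^{−iω})|², assuming the usual monic convention a₀ = b₀ = 1; the function name and conventions are illustrative, not taken from the paper.

```python
import cmath

def arma_spectrum(a, b, omega, sigma2=1.0):
    # Standard ARMA spectral density (illustrative convention):
    # f(omega) = (sigma^2 / 2 pi) |b(e^{-i omega})|^2 / |a(e^{-i omega})|^2,
    # with monic a(z) = 1 + a_1 z + ... + a_p z^p, b(z) = 1 + b_1 z + ... + b_q z^q.
    z = cmath.exp(-1j * omega)
    av = sum(c * z ** k for k, c in enumerate([1.0] + list(a)))
    bv = sum(c * z ** k for k, c in enumerate([1.0] + list(b)))
    return sigma2 / (2 * cmath.pi) * abs(bv) ** 2 / abs(av) ** 2

# White noise (p = q = 0) has the flat spectrum sigma^2 / (2 pi).
flat = arma_spectrum([], [], 0.7)
```

For an AR(1) process with a(z) = 1 − 0.5z, the density at ω = 0 is (1/2π)/|1 − 0.5|², i.e. four times the white-noise level, which the function reproduces.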

The scalar equivalent to the VARMAX(

and for the ARMA(

popularized in, among others, the Box-Jenkins type of time series analysis, see e.g., [_{j}_{j}_{j}

Note that as in the multiple case, _{0} = _{0} = 1. The parameter vector,

and

respectively.
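As a concrete illustration of the scalar ARMA(p, q) difference equation with a₀ = b₀ = 1, the following sketch simulates such a process driven by Gaussian innovations; the function and parameter names are illustrative only and not drawn from the paper.

```python
import random

def simulate_arma(a, b, n, sigma=1.0, seed=0, burn=500):
    # y_t + a_1 y_{t-1} + ... + a_p y_{t-p} = e_t + b_1 e_{t-1} + ... + b_q e_{t-q},
    # with a_0 = b_0 = 1 and {e_t} i.i.d. N(0, sigma^2).
    rng = random.Random(seed)
    p, q = len(a), len(b)
    y, e = [], []
    for t in range(n + burn):
        e_t = rng.gauss(0.0, sigma)
        y_t = e_t
        for j in range(1, q + 1):          # moving average part
            if t - j >= 0:
                y_t += b[j - 1] * e[t - j]
        for j in range(1, p + 1):          # autoregressive part
            if t - j >= 0:
                y_t -= a[j - 1] * y[t - j]
        e.append(e_t)
        y.append(y_t)
    return y[burn:]                        # discard burn-in toward stationarity

# ARMA(1,1): y_t = 0.5 y_{t-1} + e_t + 0.3 e_{t-1}
sample = simulate_arma([-0.5], [0.3], n=1000)
```

The burn-in period discards the transient so the retained sample is approximately a draw from the stationary distribution, provided the zeros of a(z) satisfy the stability condition.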

In the next section the matrix algebraic properties of the Fisher information matrix of the stationary processes (

The Fisher information is an ingredient of the Cramér-Rao inequality, by some also called the Cauchy-Schwarz inequality of mathematical statistics, and belongs to the basics of asymptotic estimation theory. The Cramér-Rao theorem [

here

When time series models are the subject, using _{t}

where the (^{T}, the derivative with respect to ^{T}, for any (^{T} is used for obtaining the appropriate dimensions. Equality (
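As a minimal concrete instance of the Fisher information just described (a textbook example, not one of the paper's time series models), the sketch below estimates the Fisher information of a Bernoulli(p) observation as the second moment of the score d/dp log f(x; p), which should approach the analytic value 1/(p(1 − p)).

```python
import random

def score_bernoulli(x, p):
    # d/dp log f(x; p) for f(x; p) = p^x (1 - p)^(1 - x)
    return x / p - (1 - x) / (1 - p)

def fisher_info_mc(p, n=200_000, seed=0):
    # Monte Carlo estimate of E[score^2]; the analytic value is 1 / (p (1 - p)).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = 1 if rng.random() < p else 0
        total += score_bernoulli(x, p) ** 2
    return total / n

p = 0.3
estimate = fisher_info_mc(p)
analytic = 1 / (p * (1 - p))
```

By the Cramér-Rao inequality, no unbiased estimator of p from a single observation can have variance below 1/analytic.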

In this section, the focus is on the FIM of the ARMA process (

when combined for all _{t}

The expressions of the different blocks of the matrix

where the integration above and everywhere below is counterclockwise around the unit circle. The reciprocal monic polynomials ^{p}a^{−1}) and ^{q}b^{−1}) and _{1}, . . . , _{p}_{1}, . . . , _{q}^{T} introduced in (_{k}^{2}, . . . , ^{k}^{−1}) ^{T} and _{k}^{k}^{−1}_{k}^{−1}). Considering the stability condition of the ARMA(

The resultant property of a matrix is considered, in order to show that the FIM

Consider the

The matrix

and

where ^{pq} ℛ^{q} ℛ^{p} ℛ_{i}_{j}_{i}^{nαi}_{j}^{nβj}_{αi}_{βj}

where ^{n}
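The resultant property invoked here can be illustrated with a small self-contained sketch, using the classical Sylvester matrix S(a, b) of two polynomials: its determinant (the resultant) vanishes exactly when a and b have a common root. The conventions below are the textbook ones and need not coincide with the paper's notation.

```python
from fractions import Fraction

def sylvester(a, b):
    # a, b: coefficient lists, highest degree first (monic in this sketch);
    # returns the (p+q) x (p+q) Sylvester matrix.
    p, q = len(a) - 1, len(b) - 1
    n = p + q
    S = [[Fraction(0)] * n for _ in range(n)]
    for i in range(q):              # q shifted copies of a
        for j, c in enumerate(a):
            S[i][i + j] = Fraction(c)
    for i in range(p):              # p shifted copies of b
        for j, c in enumerate(b):
            S[q + i][i + j] = Fraction(c)
    return S

def det(M):
    # Exact determinant by Gaussian elimination over the rationals.
    M = [row[:] for row in M]
    n = len(M)
    d = Fraction(1)
    for k in range(n):
        piv = next((i for i in range(k, n) if M[i][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            d = -d
        d *= M[k][k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]
    return d

# a(z) = (z-1)(z+1) and b(z) = z-1 share the root 1 -> the resultant vanishes.
print(det(sylvester([1, 0, -1], [1, -1])))   # 0
# b(z) = z-2 has no root in common with a -> nonzero resultant.
print(det(sylvester([1, 0, -1], [1, -2])))   # 3
```

This singular-iff-common-root behaviour is exactly the resultant matrix property that the survey establishes for the FIM itself.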

In order to prove that the FIM

where the matrix ^{(}^{p}^{+}^{q}^{)×(}^{p}^{+}^{q}^{)} admits the form

It is proved in [

The FIM of an ARMA(

The matrix ^{(}^{p}^{+}^{q}^{)×(}^{p}^{+}^{q}^{)}, given in (^{T}(^{(}^{p}^{+}^{q}^{)×(}^{p}^{+}^{q}^{)} upper triangular matrix that is unique if its diagonal elements are all positive. Consequently, all its eigenvalues are positive, so that the matrix

and taking the property, if ^{T}

Assume the vector

We will now consider the Rank-Nullity Theorem, see e.g., [

and the property dim (Im ^{T}). When applied to the (

which completes the proof.

Notice that the dimension of the null space of matrix

By virtue of the equality (

In [_{1}, _{2},. . . , _{n}

The straight-line or Euclidean distance between the stochastic vector
^{n}

where the metric ^{n}
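A distance of this type, with a positive definite matrix G acting as the metric (the FIM in the paper's setting), can be sketched in a few lines; the quadratic-form convention d(x, y) = √((x − y)ᵀ G (x − y)) is assumed here, and the example metric is hypothetical.

```python
import math

def quad_distance(x, y, G):
    # d(x, y) = sqrt((x - y)^T G (x - y)) for a positive definite metric G;
    # G = I recovers the ordinary Euclidean (straight-line) distance.
    d = [xi - yi for xi, yi in zip(x, y)]
    Gd = [sum(G[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return math.sqrt(sum(di * gi for di, gi in zip(d, Gd)))

x, y = [1.0, 2.0], [3.0, 1.0]
identity = [[1.0, 0.0], [0.0, 1.0]]
euclid = quad_distance(x, y, identity)       # sqrt(5)
G = [[2.0, 0.0], [0.0, 0.5]]                 # a hypothetical diagonal metric
weighted = quad_distance(x, y, G)
```

A non-identity metric stretches some coordinate directions and shrinks others, which is precisely how information-based distances weight parameter directions by how well the data discriminate them.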

The observations _{1}, _{2}, . . . , _{n}_{1}, _{2}, . . . , _{m}^{m}

is applied, where _{i}^{m}^{×}^{n}

The following matrix decomposition is applied in order to obtain a transformed FIM

where _{φ}_{φ}_{i}_{i}

By virtue of (_{φ}_{1,1}, _{2,2}, . . . , _{m}_{,}_{m}_{φ}_{1,1}, _{2,2}, . . . , _{m}_{,}_{m}

As developed in [_{1}, _{2}, . . . , _{m}

where

and _{j}_{,}_{l}_{i}_{,}_{i}_{i}_{+1,}_{i}_{+1}(_{i}

this guarantees the metric property of (

where

Proposition 3.5 in [_{φ}

By virtue of the equalities (_{i}_{i}

A straightforward conclusion from Proposition 3.3 is then

In the next section a distance measure introduced in quantum information is discussed.

Statistical Distance Measure - Fisher Information and Quantum Information

In quantum information, the Fisher information, the information about a parameter

the Fisher information is the square of the derivative of the statistical distance

In this section an additional resultant matrix is presented; it concerns the Bezout matrix or Bezoutian. The notation of Lancaster and Tismenetsky [_{0} = _{0} = 1. The Bezout matrix

This matrix is often referred to as the Bezoutian. We will display a decomposition of the Bezout matrix _{φ}_{φ}
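Under the classical definition (a(z)b(w) − a(w)b(z))/(z − w) = Σ_{r,s} B_{rs} z^r w^s, the Bezoutian can be built directly from the coefficients. The sketch below follows this textbook convention, which may differ from the paper's notation, and illustrates the same resultant property: the matrix is singular exactly when a and b share a root.

```python
def bezout(a, b):
    # a, b: coefficient lists, lowest degree first; padded to common length n+1.
    n = max(len(a), len(b)) - 1
    a = list(a) + [0] * (n + 1 - len(a))
    b = list(b) + [0] * (n + 1 - len(b))
    B = [[0] * n for _ in range(n)]
    # (a(z) b(w) - a(w) b(z)) / (z - w) = sum_{r,s} B[r][s] z^r w^s
    for i in range(n + 1):
        for j in range(i + 1, n + 1):
            c = a[j] * b[i] - a[i] * b[j]
            for m in range(j - i):
                B[i + m][j - 1 - m] += c
    return B

# a(z) = (z-1)(z+2) = -2 + z + z^2 and b(z) = (z-1)(z-3) = 3 - 4z + z^2
# share the root z = 1, so the 2x2 Bezoutian must be singular.
B = bezout([-2, 1, 1], [3, -4, 1])
```

Note that, unlike the Sylvester matrix of size p + q, the Bezoutian is n × n and symmetric under this convention.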

Let (1 − _{1}_{1}_{1} and _{1} are zeros of _{1}_{−1}(_{1}_{−1}(_{2}, . . . , _{n}_{−(}_{k}_{−1)}(_{k}z_{−}_{k}_{−}_{k}_{0}(_{0}(

The following non-symmetric decomposition of the Bezoutian is derived, considering the notations above

with _{α}_{1} such that
_{β}_{1}. Iteration gives the following expansion for the Bezout matrix

where
^{n}_{j}_{j}^{T}, with all its components equal to 0 except the

Corollary 3.2 in [_{−1}(_{−1}(

This is a direct consequence of (

Related to Corollary 3.2 in [

Corollary 3.3 in [_{1}, . . ., _{m}_{1}, . . . , _{m}^{n}

for _{k}_{ij}

An alternative representation to (

where

and

The matrix _{0} = 1, see [

The matrix

The matrix _{0} ≠ 0 and _{0} ≠ 0, which is the case since we have _{0} = _{0} = 1. From (

In [

The Stein matrix equation is now set forth. Let ^{m}^{×}^{m}^{n}^{×}^{n}^{n}^{×}^{m}

It has a unique solution if and only if _{m}

^{−1}
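When the eigenvalue condition for uniqueness holds with all products strictly inside the unit circle, the solution of the Stein equation S − A S B = C can be obtained by simple fixed-point iteration, since S = Σ_{k≥0} Aᵏ C Bᵏ; the sign/ordering convention and the function name below are an illustrative sketch, not the paper's exact equation.

```python
def stein_solve(A, B, C, iters=200):
    # Fixed-point iteration S <- C + A S B. It converges to the unique
    # solution of S - A S B = C whenever every product of an eigenvalue
    # of A with an eigenvalue of B has modulus < 1 (sufficient condition).
    m, n = len(C), len(C[0])
    S = [[0.0] * n for _ in range(m)]
    for _ in range(iters):
        AS = [[sum(A[i][k] * S[k][j] for k in range(m)) for j in range(n)]
              for i in range(m)]
        ASB = [[sum(AS[i][k] * B[k][j] for k in range(n)) for j in range(n)]
               for i in range(m)]
        S = [[C[i][j] + ASB[i][j] for j in range(n)] for i in range(m)]
    return S

# Recover S = [[1], [2]] from C = S - A S B with diagonal A and 1x1 B.
A = [[0.5, 0.0], [0.0, 0.25]]
B = [[0.5]]
C = [[0.75], [1.75]]
S = stein_solve(A, B, C)
```

Outside this contractive regime one would instead vectorize the equation, (I − Bᵀ ⊗ A) vec(S) = vec(C), and solve the linear system directly.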

In this section an interconnection between the representation (_{1}, _{2}, . . . , _{p}_{1}, _{2}, . . . , _{q}

where

and

the polynomial _{αβ}_{αβ}

It is clear that the (_{αβ}_{αβ}_{i}_{j}_{k}_{h}_{i}_{k}_{αβ}_{αβ}

where

Since
^{(}^{p}^{+}^{q}^{)(}^{p}^{+}^{q}^{−1)/2} so that
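For reference, a Vandermonde matrix built from the zeros x₁, …, xₙ satisfies Det V = ∏_{i<j}(x_j − x_i), so it is invertible exactly when the zeros are distinct; the sketch below checks this for a 3 × 3 case (the row/column convention is illustrative).

```python
from fractions import Fraction
from itertools import combinations

def vandermonde(roots):
    # V[i][j] = roots[j] ** i, an n x n Vandermonde matrix.
    n = len(roots)
    return [[Fraction(r) ** i for r in roots] for i in range(n)]

def vandermonde_det(roots):
    # Det V = prod_{i<j} (x_j - x_i)
    d = Fraction(1)
    for (i, x), (j, y) in combinations(enumerate(roots), 2):
        d *= (y - x)
    return d

V = vandermonde([1, 2, 3])
# Direct 3x3 cofactor expansion as an independent check of the product formula.
det3 = (V[0][0] * (V[1][1] * V[2][2] - V[1][2] * V[2][1])
        - V[0][1] * (V[1][0] * V[2][2] - V[1][2] * V[2][0])
        + V[0][2] * (V[1][0] * V[2][1] - V[1][1] * V[2][0]))
```

This distinct-zeros condition is what makes Vandermonde factorizations of the kind used here invertible.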

We shall now introduce an appropriate Stein equation of the form (

where the entries _{i}_{p}_{+}_{q}_{p}_{+}_{q}_{−1}(_{1}(^{T}. Likewise for the vector _{1}(_{1}(_{p}_{+}_{q}^{T}; for investigating the properties of a companion matrix see e.g., [_{i}β_{j}_{i}α_{j}_{i}β_{j}
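A companion matrix of a monic polynomial has that polynomial as its characteristic polynomial, so its eigenvalues are the polynomial's zeros and, by the Cayley-Hamilton theorem, a(C) = 0. The sketch below verifies this for a(z) = z² − 3z + 2 using one common companion layout; the paper's layout may differ.

```python
from fractions import Fraction

def companion(a):
    # Companion matrix of the monic polynomial z^n + a_1 z^{n-1} + ... + a_n,
    # with a = [a_1, ..., a_n]: superdiagonal of ones, coefficients in last row.
    n = len(a)
    C = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n - 1):
        C[i][i + 1] = Fraction(1)
    for j in range(n):
        C[n - 1][j] = -Fraction(a[n - 1 - j])   # last row: -a_n, ..., -a_1
    return C

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# a(z) = z^2 - 3z + 2 = (z - 1)(z - 2); Cayley-Hamilton: C^2 - 3C + 2I = 0.
C = companion([-3, 2])
C2 = matmul(C, C)
zero = [[C2[i][j] - 3 * C[i][j] + (2 if i == j else 0) for j in range(2)]
        for i in range(2)]
```

Since the eigenvalues of the companion matrices are the zeros of the corresponding polynomials, the products of such eigenvalues are exactly the quantities that enter the uniqueness condition of the Stein equation.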

where the closed contour is now the unit circle

where adj(^{−1} Det(

where

and

and the operator ⊗ is the tensor (Kronecker) product of two matrices, see e.g., [
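The Kronecker (tensor) product used in the tensor Sylvester construction can be sketched in a few lines; the standard indexing convention (A ⊗ B)[i·rB + k][j·cB + l] = A[i][j]·B[k][l] is assumed.

```python
def kron(A, B):
    # Kronecker product: (A ⊗ B)[i*rB + k][j*cB + l] = A[i][j] * B[k][l]
    rB, cB = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(cB)]
            for i in range(len(A)) for k in range(rB)]

K = kron([[1, 2]], [[0, 1], [1, 0]])
# K = [[0, 1, 0, 2],
#      [1, 0, 2, 0]]
```

Each entry of A is replaced by a scaled copy of B, which is why a p × q times r × s product yields a pr × qs matrix.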

Combining (_{i}_{j}_{k}_{h}_{i}_{k}_{αβ}_{αβ}

The following equality holds true

or

Consequently, under the condition _{i}_{j}_{k}_{h}_{i}_{k}_{αβ}_{αβ}

We will formulate a Stein equation when the matrix

where _{p}_{+}_{q}^{p}^{+}^{q}^{m}

The unique solution of (

written more explicitly,

Using the property of the companion matrix
_{p}_{+}_{q}_{p}_{+}_{q}_{p}_{+}_{q}_{p}_{+}_{q}^{p}^{+}^{q}^{−1}_{p}_{+}_{q}^{−1}). This implies that adj(_{p}_{+}_{q}_{p}_{+}_{q}_{p}_{+}_{q}

Consequently, the solution

The Stein equation that is verified by the FIM

respectively. Introduce the (^{p}^{q}

followed by the theorem.

The eigenvalues of the companion matrices

developing this integral expression in a more explicit form yields

Considering the form of the companion matrices
_{p}_{p}_{p}_{p}

Representation (

or

The symmetry property of the FIM

It is straightforward to verify that the submatrix (1,2) in (^{T} (

An Illustrative Example of Theorem 3.6

To illustrate Theorem 3.6, the case of an ARMA(2

where _{1}_{2}^{2}) and _{1}_{2}^{2}) and the parameter vector is _{1}_{2}_{1}_{2})^{T}. The condition, the zeros of the polynomials

are in absolute value smaller than one, is imposed. The FIM

where

The submatrices ℱ_{aa}(_{bb}(_{ab}(^{T} are

where

holds true, when ^{T} are given in (

In Proposition 5.1 in [_{1} is the first unit standard basis column vector in ℝ^{n}_{n}^{n}

where

A corollary to Proposition 5.1, [

where _{β}_{β}_{α}_{α}

The condition of a unique solution of the Stein

in order to proceed successfully, the following matrix property is displayed to obtain

When applied to the

Considering that the last column vector of the matrices adj(_{p}_{n}_{n}_{n}

Applying the standard residue theorem leads for the respective submatrices

where the

and the 2

It is clear that the first and third matrices in _{11}(_{12}(_{21}(_{22}(

In this section an explicit form of the solution

The FIM of the ARMAX process (

where

where the submatrices of

where _{x}^{−1})^{−1}), combining all the expressions in (

where (^{*} is the complex conjugate transpose of the matrix ^{m×n}

here

The representation (

where the matrix ^{(}^{p}^{+}^{r}^{)}^{×}^{(}^{p}^{+}^{r}^{)} is of the form

It is shown in [

In Lemma 3.5 it is proved that the matrix _{x}^{−1})). The degree of the polynomial

We consider a companion matrix of the form (_{i}_{p}_{+}_{q}_{+ℓ}(_{p}_{+}_{q}_{+ℓ−1}(_{1}(^{T}. Likewise for the vector _{1}(_{1}(_{p}_{+}_{q}_{+ℓ}(^{T}. The property Det(_{p}_{+}_{q}_{+ℓ} −
_{p}_{+}_{q}_{+ℓ} −

We will formulate a Stein equation when the matrix

where _{p}_{+}_{r}^{p}^{+}^{r}

The unique solution of (

or

taking the property of the companion matrix
_{p}_{+}_{r}_{p}_{+}_{r}_{p}_{+}_{r}_{p}_{+}_{r}

Consequently, the matrix

The matrices,

The (

where _{αβξ}_{αβξ}

The (_{αβξ}_{αβξ}_{i}_{j}_{k}_{h}_{m}_{n}_{i}_{k}_{i}_{m}_{k}_{m}_{αβξ}_{αβξ}

where

Like for the Vandermonde matrices _{αβ}

In Lemma 3.9, the FIM

The process (

and we assume that {^{T} (^{2}(^{2}(

where ^{T} is of size ^{2}(

The substitution of representation (^{T} in (^{2}(^{2}(

In [

where

and

In order to obtain a multiple variant of (

where

and the matrix

In Lemma 2.3 in [

A brief summary of the contribution of [

The ^{2}(^{2}(

is displayed according to [

where

To obtain the term ^{T}, of size ^{2}(^{T} in (

The matrices in (

additionally we have Ψ(_{x}_{x}^{2} ^{2} and ^{2} ^{2} submatrices
^{2}^{2} tensor Sylvester resultant matrix

The matrices involved are of the form

and _{x}_{x}

A parallel between the scalar and multiple structures is straightforward. This is best illustrated by comparing the representations (_{x}_{x}

In this survey paper, matrix algebraic properties of the FIM of stationary processes are discussed. The presented material is a summary of papers where several matrix structural aspects of the FIM are investigated. The FIM of scalar and multiple processes like the (V)ARMA(X) are set forth with appropriate factorized forms involving (tensor) Sylvester matrices. These representations enable us to prove the resultant matrix property of the corresponding FIM. This has been done for (V)ARMA(

The statistical distance measure derived in [

The author thanks a perceptive reviewer whose comments significantly improved the quality and presentation of the paper.

The authors have declared no conflict of interest.