
Entropy 2014, 16(4), 2023-2055; doi:10.3390/e16042023

Article
Matrix Algebraic Properties of the Fisher Information Matrix of Stationary Processes
André Klein
Rothschild Blv. 123 Apt.7, 65271 Tel Aviv, Israel; E-Mail: A.A.B.Klein@uva.nl or klein@contact.uva.nl; Tel.: 972.5.25594723
Received: 12 February 2014; in revised form: 11 March 2014 / Accepted: 24 March 2014 /
Published: 8 April 2014

Abstract

In this survey paper, a summary of results derived in a series of papers is presented. The subject of interest is the matrix algebraic properties of the Fisher information matrix (FIM) of stationary processes. The FIM is an ingredient of the Cramér-Rao inequality and belongs to the basics of asymptotic estimation theory in mathematical statistics. The FIM is interconnected with the Sylvester, Bezout and tensor Sylvester matrices. Through these interconnections it is shown that the Fisher information matrices of scalar and multiple stationary processes fulfill the resultant matrix property. A statistical distance measure involving entries of the FIM is presented. In quantum information, a different statistical distance measure is set forth; it is related to the Fisher information, but there the information about one parameter in a particular measurement procedure is considered. The FIM of scalar stationary processes is also interconnected with the solutions of appropriate Stein equations, and conditions for the FIM to verify certain Stein equations are formulated. The presence of Vandermonde matrices is also emphasized.

MSC Classification: 15A23; 15A24; 15B99; 60G10; 62B10; 62M20
Keywords:
Bezout matrix; Sylvester matrix; tensor Sylvester matrix; Stein equation; Vandermonde matrix; stationary process; matrix resultant; Fisher information matrix

1. Introduction

In this survey paper, a summary of results derived and described in a series of papers is presented. It concerns some matrix algebraic properties of the Fisher information matrix (abbreviated as FIM) of stationary processes. An essential property emphasized in this paper is the matrix resultant property of the FIM of stationary processes. To be more explicit, consider the coefficients of two monic polynomials p(z) and q(z) of finite degree as the entries of a matrix such that the matrix becomes singular if and only if the polynomials p(z) and q(z) have at least one common root. Such a matrix is called a resultant matrix and its determinant is called the resultant. The Sylvester, Bezout and tensor Sylvester matrices have this property and are extensively studied in the literature, see e.g., [1–3]. The FIM associated with various stationary processes will be expressed in terms of these matrices. The interconnections are obtained by developing the necessary factorizations of the FIM in terms of the Sylvester, Bezout and tensor Sylvester matrices. These factored forms of the FIM enable us to show that the Fisher information matrices of scalar and multiple stationary processes fulfill the resultant matrix property. Consequently, the singularity conditions of the appropriate Fisher information matrices and of the Sylvester, Bezout and tensor Sylvester matrices coincide; these results are described in [4–6].

A statistical distance measure involving entries of the FIM is presented; it is based on [7]. In quantum information, a statistical distance measure is set forth, see [8–10]; it is related to the Fisher information, but there the information about one parameter in a particular measurement procedure is considered. This leads to a challenging question: can the existing distance measure in quantum information be developed at the matrix level?

The matrix Stein equation, see e.g., [11], is associated with the Fisher information matrices of scalar stationary processes through the solutions of the appropriate Stein equations. Conditions for the Fisher information matrices, or associated matrices, to verify certain Stein equations are formulated and proved in this paper. The presence of Vandermonde matrices is also emphasized. The general and more detailed results are set forth in [12] and [13]. In this survey paper it is shown that the Fisher information matrices of linear stationary processes form a class of structured matrices. Note that in [14], the authors emphasize that statistical problems related to stationary processes have been treated successfully with the aid of Toeplitz forms. This paper is organized as follows. The various stationary processes considered in this paper are presented in Section 2, and the Fisher information matrices of these stationary processes are displayed in Section 3. Section 3 also sets forth the interconnections between the Fisher information matrices and the Sylvester, Bezout and tensor Sylvester matrices, as well as solutions to Stein equations, and expresses a statistical distance measure in terms of entries of a FIM.

2. The Linear Stationary Processes

In this section we display the class of linear stationary processes whose corresponding Fisher information matrix shall be investigated in a matrix algebraic context. But first some basic definitions are set forth, see e.g., [15].

If a random variable X is indexed to time, usually denoted by t, the observations {Xt, t ∈ 𝕋} are called a time series, where 𝕋 is a time index set (for example, 𝕋 = ℤ, the set of integers).

Definition 2.1

A stochastic process is a family of random variables {Xt, t ∈ 𝕋} defined on a probability space {Ω, ℱ, P}.

Definition 2.2

The autocovariance function. If {Xt, t ∈ 𝕋} is a process such that Var(Xt) < ∞ (variance) for each t ∈ 𝕋, then the autocovariance function γX(·, ·) of {Xt} is defined by γX(r, s) = Cov(Xr, Xs) = 𝔼[(Xr − 𝔼Xr)(Xs − 𝔼Xs)], r, s ∈ ℤ, where 𝔼 represents the expected value.

Definition 2.3

Stationarity. The time series {Xt, t ∈ ℤ}, with the index set ℤ = {0, ±1, ±2, …}, is said to be stationary if

(i)

𝔼|Xt|² < ∞

(ii)

𝔼(Xt) = m for all t ∈ ℤ, where m is the constant average or mean

(iii)

γX(r, s) = γX(r + t, s + t) for all r, s, t ∈ ℤ.

From Definition 2.3 it can be concluded that the joint probability distributions of the random variables {Xt1, Xt2, …, Xtn} and {Xt1+k, Xt2+k, …, Xtn+k} are the same for arbitrary times t1, t2, …, tn, for all n and all lags or leads k = 0, ±1, ±2, …. The probability distribution of observations of a stationary process is invariant with respect to shifts in time. In the next section the linear stationary processes that will be considered throughout this paper are presented.
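As a concrete numerical illustration of Definition 2.3 (not taken from the paper), the sketch below uses a zero-mean AR(1) process Xt = φXt−1 + ɛt with |φ| < 1, whose exact autocovariance γX(r, s) = σ²φ^|r−s|/(1 − φ²) depends only on the lag |r − s|; the helper name `ar1_autocovariance` and the parameter values are illustrative assumptions:

```python
def ar1_autocovariance(r, s, phi=0.5, sigma2=1.0):
    """Exact autocovariance gamma_X(r, s) of a stationary zero-mean AR(1) process."""
    assert abs(phi) < 1, "stationarity requires |phi| < 1"
    return sigma2 * phi ** abs(r - s) / (1.0 - phi ** 2)

# Condition (iii) of Definition 2.3: gamma_X(r, s) = gamma_X(r + t, s + t)
# for every integer shift t, since the value depends only on |r - s|.
for t in range(-3, 4):
    assert ar1_autocovariance(2, 5) == ar1_autocovariance(2 + t, 5 + t)
```

Shift invariance of the autocovariance is exactly what makes the process stationary in the sense of Definition 2.3.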

2.1. The Vector ARMAX or VARMAX Process

We display one of the most general linear stationary processes, the multivariate autoregressive, moving average and exogenous process: the VARMAX process. To be more specific, consider the vector difference equation representation of a linear system {y(t), t ∈ ℤ} of order (p, r, q),

$$\sum_{j=0}^{p} A_j\, y(t-j) = \sum_{j=0}^{r} C_j\, x(t-j) + \sum_{j=0}^{q} B_j\, \varepsilon(t-j), \qquad t \in \mathbb{Z} \tag{1}$$

where y(t) are the observable outputs, x(t) the observable inputs and ɛ(t) the unobservable errors, all n-dimensional. The acronym VARMAX stands for vector autoregressive-moving average with exogenous variables. The left side of (1) is the autoregressive part, the second term on the right is the moving average part, and x(t) is exogenous. If x(t) does not occur, the system is said to be (V)ARMA. Besides exogenous, the input x(t) is also named the control variable, depending on the field of application: econometrics and time series analysis, e.g., [15], or signal processing and control, e.g., [16,17]. The matrix coefficients Aj ∈ ℝⁿˣⁿ, Cj ∈ ℝⁿˣⁿ and Bj ∈ ℝⁿˣⁿ are the associated parameter matrices. We have the property A₀ = B₀ = C₀ = Iₙ.

Equation (1) can compactly be written as

$$A(z)\, y(t) = C(z)\, x(t) + B(z)\, \varepsilon(t) \tag{2}$$

where

$$A(z) = \sum_{j=0}^{p} A_j z^{j}; \qquad C(z) = \sum_{j=0}^{r} C_j z^{j}; \qquad B(z) = \sum_{j=0}^{q} B_j z^{j}$$

We use z to denote the backward shift operator, for example z xt = xt−1. The matrix polynomials A(z), B(z) and C(z) are the associated autoregressive and moving average matrix polynomials and the exogenous matrix polynomial, of orders p, q and r respectively. Hence the process described by Equation (2) is denoted as a VARMAX(p, r, q) process. Here z ∈ ℂ, with a duplicate use of z as an operator and as a complex variable, which is usual in the signal processing and time series literature, e.g., [15,16,18]. The assumptions Det(A(z)) ≠ 0 for all |z| ≤ 1 and Det(B(z)) ≠ 0 for all |z| < 1, z ∈ ℂ, are imposed so that the VARMAX(p, r, q) process (2) has exactly one stationary solution; the condition Det(B(z)) ≠ 0 implies the invertibility condition, see e.g., [15] for more details. Under these assumptions, the eigenvalues of the matrix polynomials A(z) and B(z) lie outside the unit circle. The eigenvalues of a matrix polynomial Y(z) are the roots of the equation Det(Y(z)) = 0, where Det(X) is the determinant of X. The VARMAX(p, r, q) stationary process (2) is thoroughly discussed in [15,18,19].
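The condition Det(A(z)) ≠ 0 on the closed unit disc can be checked numerically in the simplest matrix case. The sketch below (an illustrative assumption, not the paper's example) takes a first-order polynomial A(z) = I + A₁z, for which Det(A(z)) = 0 exactly at z = −1/λ for each eigenvalue λ of A₁, so the condition holds if and only if the spectral radius of A₁ is strictly below one:

```python
import numpy as np

def var1_is_stationary(A1):
    """Check Det(I + A1 z) != 0 on the closed unit disc via the spectrum of A1."""
    return np.max(np.abs(np.linalg.eigvals(A1))) < 1.0

A1 = np.array([[0.5, 0.1],
               [0.0, 0.3]])   # spectral radius 0.5 -> condition satisfied
B1 = np.array([[1.2, 0.0],
               [0.0, 0.3]])   # eigenvalue 1.2 -> condition violated

assert var1_is_stationary(A1)
assert not var1_is_stationary(B1)
```

For higher-order matrix polynomials the same check is usually performed on the companion-form matrix of the polynomial.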

The error {ɛ(t), t ∈ ℤ} is a collection of uncorrelated zero mean n-dimensional random variables, each having positive definite covariance matrix Σ, and we assume, for all s, t, 𝔼_ϑ{x(s) ɛᵀ(t)} = 0, where Xᵀ denotes the transpose of the matrix X and 𝔼_ϑ represents the expected value under the parameter ϑ. The matrix ϑ represents all the VARMAX(p, r, q) parameters, with the total number of parameters being n²(p + q + r). For different purposes, which will be specified in the next sections, two choices of the parameter structure are considered. First, the parameter vector ϑ ∈ ℝ^{n²(p+q+r)×1} is defined by

$$\vartheta = \operatorname{vec}\{A_1, A_2, \ldots, A_p, C_1, C_2, \ldots, C_r, B_1, B_2, \ldots, B_q\} \tag{3}$$

The vec operator transforms a matrix into a vector by stacking the columns of the matrix one underneath the other, according to vec X = col(col(X_{ij})_{i=1}^{n})_{j=1}^{n}, see e.g., [2,20]. A different choice is set forth when the parameter matrix ϑ ∈ ℝ^{n×n(p+q+r)} is of the form

$$\vartheta = \begin{pmatrix} \vartheta_1 & \vartheta_2 & \cdots & \vartheta_p & \vartheta_{p+1} & \vartheta_{p+2} & \cdots & \vartheta_{p+r} & \vartheta_{p+r+1} & \vartheta_{p+r+2} & \cdots & \vartheta_{p+r+q} \end{pmatrix} \tag{4}$$
$$\phantom{\vartheta} = \begin{pmatrix} A_1 & A_2 & \cdots & A_p & C_1 & C_2 & \cdots & C_r & B_1 & B_2 & \cdots & B_q \end{pmatrix} \tag{5}$$

Representation (5) of the parameter matrix has been used in [21]. The estimation of the matrices A1, A2, …, Ap, C1, C2, …, Cr, B1, B2, …, Bq and Σ has received considerable attention in the time series and statistical signal processing literature, see e.g., [15,17,19]. In [19], the authors study the asymptotic properties of maximum likelihood estimates of the coefficients of VARMAX(p, r, q) processes, stored in an (ℓ × 1) vector ϑ, where ℓ = n²(p + q + r).
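The vec operator described above corresponds, in NumPy, to flattening a matrix in column-major (Fortran) order. A small sketch; the matrices X, A1, C1, B1 below are arbitrary illustrations, not the paper's values:

```python
import numpy as np

X = np.array([[1, 2],
              [3, 4]])
vec_X = X.flatten(order="F")    # stack the columns one underneath the other
assert vec_X.tolist() == [1, 3, 2, 4]

# The parameter vector (3) can then be sketched, for hypothetical 2 x 2
# coefficient matrices A1, C1, B1 (p = r = q = 1), as:
A1, C1, B1 = (np.eye(2) * c for c in (0.5, 0.2, 0.3))
theta = np.concatenate([M.flatten(order="F") for M in (A1, C1, B1)])
assert theta.shape == (12,)     # n^2 (p + q + r) = 4 * 3 entries
```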

Before describing the control-exogenous variable x(t) used in this survey paper, we shall present the different special cases of the model described in Equations (1) and (2).

2.2. The Vector ARMA or VARMA Process

When the process (2) does not contain the control process x(t), it reduces to

$$A(z)\, y(t) = B(z)\, \varepsilon(t) \tag{6}$$

which is a vector autoregressive and moving average process, the VARMA(p, q) process, see e.g., [15]. The matrix ϑ now represents all the VARMA parameters, with the total number of parameters being n²(p + q). The VARMA(p, q) version of the parameter vector ϑ defined in (3) is then given by

$$\vartheta = \operatorname{vec}\{A_1, A_2, \ldots, A_p, B_1, B_2, \ldots, B_q\} \tag{7}$$

The VARMA equivalent of the parameter matrix (4) is then the n × n(p + q) parameter matrix

$$\vartheta = \begin{pmatrix} \vartheta_1 & \vartheta_2 & \cdots & \vartheta_p & \vartheta_{p+1} & \vartheta_{p+2} & \cdots & \vartheta_{p+q} \end{pmatrix} = \begin{pmatrix} A_1 & A_2 & \cdots & A_p & B_1 & B_2 & \cdots & B_q \end{pmatrix} \tag{8}$$

A description of the input variable x(t) in Equation (2) follows. Generally, one can assume either that x(t) is nonstochastic or that x(t) is stochastic. In the latter case, we assume 𝔼_ϑ{x(s) ɛᵀ(t)} = 0 for all s, t, and that statistical inference is performed conditionally on the values taken by x(t); in this case it can be interpreted as constant, see [22] for a detailed exposition. However, in the papers referred to in this survey, such as [21] and [23], the observed input variable x(t) is assumed to be a stationary VARMA process of the form

$$\alpha(z)\, x(t) = \beta(z)\, \eta(t) \tag{9}$$

where α(z) and β(z) are the autoregressive and moving average polynomials of appropriate degrees and {η(t), t ∈ ℤ} is a collection of uncorrelated zero mean n-dimensional random variables, each having positive definite covariance matrix Ω. The spectral density of the VARMA process x(t) is R_x(·)/2π; for a definition see e.g., [15,16]. One obtains

$$R_x(e^{i\omega}) = \alpha^{-1}(e^{i\omega})\, \beta(e^{i\omega})\, \Omega\, \beta^{*}(e^{i\omega})\, \alpha^{-*}(e^{i\omega}), \qquad \omega \in [-\pi, \pi] \tag{10}$$

where i is the imaginary unit with the property i² = −1, ω is the frequency, the spectral density R_x(e^{iω}) is Hermitian, and we further have R_x(e^{iω}) ⪰ 0 and ∫_{−π}^{π} R_x(e^{iω}) dω < ∞. As mentioned above, the basic assumption that x(t) and ɛ(t) are independent, or at least uncorrelated, processes holds; geometrically this corresponds to orthogonal processes. Here X* denotes the complex conjugate transpose of the matrix X.
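In the scalar case, (10) reduces to R_x(e^{iω}) = |β(e^{iω})|² Ω / |α(e^{iω})|², which is real and nonnegative. The sketch below evaluates it on a frequency grid; the first-order coefficient values are illustrative assumptions, not from the paper:

```python
import numpy as np

def spectral_density(omega, alpha=(1.0, -0.5), beta=(1.0, 0.4), Omega=1.0):
    """Scalar version of (10); alpha, beta hold ascending coefficients of
    alpha(z) = 1 - 0.5 z and beta(z) = 1 + 0.4 z (illustrative values)."""
    z = np.exp(1j * omega)
    a = np.polyval(alpha[::-1], z)   # np.polyval expects highest degree first
    b = np.polyval(beta[::-1], z)
    return (b * Omega * np.conj(b)) / (a * np.conj(a))

omegas = np.linspace(-np.pi, np.pi, 2001)
R = spectral_density(omegas)
assert np.allclose(R.imag, 0.0)      # Hermitian: real-valued in the scalar case
assert np.all(R.real >= 0.0)         # R_x(e^{i w}) >= 0 on [-pi, pi]
```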

2.3. The ARMAX and ARMA Processes

The scalar equivalents of the VARMAX(p, r, q) and VARMA(p, q) processes, given by Equations (2) and (6) respectively, are now displayed: for the ARMAX(p, r, q) process

$$a(z)\, y(t) = c(z)\, x(t) + b(z)\, \varepsilon(t) \tag{11}$$

and for the ARMA(p, q) process

$$a(z)\, y(t) = b(z)\, \varepsilon(t) \tag{12}$$

popularized in, among others, the Box-Jenkins type of time series analysis, see e.g., [15]. Here a(z), b(z) and c(z) are respectively the scalar autoregressive polynomial, the moving average polynomial and the exogenous polynomial, with corresponding scalar coefficients aj, cj and bj,

$$a(z) = \sum_{j=0}^{p} a_j z^{j}; \qquad c(z) = \sum_{j=0}^{r} c_j z^{j}; \qquad b(z) = \sum_{j=0}^{q} b_j z^{j} \tag{13}$$

Note that, as in the multiple case, a₀ = b₀ = 1. The parameter vector ϑ for the processes in Equations (11) and (12) is then

$$\vartheta = \{a_1, a_2, \ldots, a_p, c_1, c_2, \ldots, c_r, b_1, b_2, \ldots, b_q\} \tag{14}$$

and

$$\vartheta = \{a_1, a_2, \ldots, a_p, b_1, b_2, \ldots, b_q\} \tag{15}$$

respectively.

In the next section the matrix algebraic properties of the Fisher information matrix of the stationary processes (2), (6), (11) and (12) will be verified. Interconnections with various known structured matrices like the Sylvester resultant matrix, the Bezout matrix and Vandermonde matrix are set forth. The Fisher information matrix of the various stationary processes is also expressed in terms of the unique solutions to the appropriate Stein equations.

3. Structured Matrix Properties of the Asymptotic Fisher Information Matrix of Stationary Processes

The Fisher information is an ingredient of the Cramér-Rao inequality, also called by some the Cauchy-Schwarz inequality in mathematical statistics, and belongs to the basics of asymptotic estimation theory in mathematical statistics. The Cramér-Rao theorem [24] is therefore considered. When assuming that the estimators of ϑ, defined in the previous sections, are asymptotically unbiased, the inverse of the asymptotic information matrix yields the Cramér-Rao bound, and provided that the estimators are asymptotically efficient, the asymptotic covariance matrix then verifies the inequality

$$\operatorname{Cov}(\hat{\vartheta}) \succeq \mathcal{F}^{-1}(\hat{\vartheta})$$

here ℱ(ϑ̂) is the FIM and Cov(ϑ̂) is the covariance of ϑ̂, the unbiased estimator of ϑ; for a detailed fundamental statistical analysis, see [25,26]. The inverse of the FIM equals the Cramér-Rao lower bound, and the subject of the FIM is also of interest in the control theory and signal processing literature, see e.g., [27]. Its quantum analog was introduced immediately after the foundation of mathematical quantum estimation theory in the 1960s, see [28,29] for a rigorous exposition of the subject. More specifically, the Fisher information is also emphasized in the context of quantum information theory, see e.g., [30,31]. The Cramér-Rao inequality attracts considerable attention because it lies on the boundary of statistics, information theory, quantum theory and, more recently, matrix theory. In the next sections, the Fisher information matrices of linear stationary processes will be presented and their role as a new class of structured matrices will be the subject of study.

When time series models are the subject, using Equation (2) for all t ∈ ℤ to determine the residual ɛ(t), or ɛt(ϑ) to emphasize the dependency on the parameter vector ϑ, and assuming that x(t) is stochastic and that (y(t), x(t)) is a Gaussian stationary process, the asymptotic FIM ℱ(ϑ) is defined by the following (ℓ × ℓ) matrix, which does not depend on t:

$$\mathcal{F}(\vartheta) = \mathbb{E}_{\vartheta}\left\{ \left( \frac{\partial \varepsilon_t(\vartheta)}{\partial \vartheta^{\top}} \right)^{\!\top} \Sigma^{-1} \left( \frac{\partial \varepsilon_t(\vartheta)}{\partial \vartheta^{\top}} \right) \right\} \tag{16}$$

where ∂(·)/∂ϑᵀ denotes, for any (v × 1) column vector (·), the (v × ℓ) matrix of derivatives with respect to ϑᵀ, and ℓ is the total number of parameters. The derivative with respect to ϑᵀ is used for obtaining the appropriate dimensions. Equality (16) is used for computing the FIM of the various time series processes presented in the previous sections, and appropriate definitions of the derivatives are used, especially for the multivariate processes (2) and (6), see [21,22].

3.1. The Fisher Information Matrix of an ARMA(p, q) Process

In this section, the focus is on the FIM of the ARMA process (12). When ϑ is given in Equation (15), the derivatives in Equation (16) are at the scalar level

$$\frac{\partial \varepsilon_t(\vartheta)}{\partial a_j} = \frac{1}{a(z)}\, \varepsilon_{t-j} \quad \text{for } j = 1, \ldots, p, \qquad \frac{\partial \varepsilon_t(\vartheta)}{\partial b_k} = -\frac{1}{b(z)}\, \varepsilon_{t-k} \quad \text{for } k = 1, \ldots, q$$

When combined for all j and k, the FIM of the ARMA process (12), with the variance of the noise process ɛt(ϑ) equal to one, admits the block decomposition, see [32],

$$\mathcal{F}(\vartheta) = \begin{pmatrix} \mathcal{F}_{aa}(\vartheta) & \mathcal{F}_{ab}(\vartheta) \\ \mathcal{F}_{ba}(\vartheta) & \mathcal{F}_{bb}(\vartheta) \end{pmatrix} \tag{17}$$

The expressions for the different blocks of the matrix ℱ(ϑ) are

$$\mathcal{F}_{aa}(\vartheta) = \frac{1}{2\pi i} \oint_{|z|=1} \frac{u_p(z)\, u_p^{\top}(z^{-1})}{a(z)\, a(z^{-1})} \frac{dz}{z} = \frac{1}{2\pi i} \oint_{|z|=1} \frac{u_p(z)\, v_p^{\top}(z)}{a(z)\, \hat{a}(z)}\, dz \tag{18}$$
$$\mathcal{F}_{ab}(\vartheta) = -\frac{1}{2\pi i} \oint_{|z|=1} \frac{u_p(z)\, u_q^{\top}(z^{-1})}{a(z)\, b(z^{-1})} \frac{dz}{z} = -\frac{1}{2\pi i} \oint_{|z|=1} \frac{u_p(z)\, v_q^{\top}(z)}{a(z)\, \hat{b}(z)}\, dz \tag{19}$$
$$\mathcal{F}_{ba}(\vartheta) = -\frac{1}{2\pi i} \oint_{|z|=1} \frac{u_q(z)\, u_p^{\top}(z^{-1})}{a(z^{-1})\, b(z)} \frac{dz}{z} = -\frac{1}{2\pi i} \oint_{|z|=1} \frac{u_q(z)\, v_p^{\top}(z)}{\hat{a}(z)\, b(z)}\, dz \tag{20}$$
$$\mathcal{F}_{bb}(\vartheta) = \frac{1}{2\pi i} \oint_{|z|=1} \frac{u_q(z)\, u_q^{\top}(z^{-1})}{b(z)\, b(z^{-1})} \frac{dz}{z} = \frac{1}{2\pi i} \oint_{|z|=1} \frac{u_q(z)\, v_q^{\top}(z)}{b(z)\, \hat{b}(z)}\, dz \tag{21}$$

where the integration above and everywhere below is counterclockwise around the unit circle. The reciprocal monic polynomials â(z) and b̂(z) are defined as â(z) = z^p a(z^{−1}) and b̂(z) = z^q b(z^{−1}), and ϑ = (a1, …, ap, b1, …, bq)ᵀ is introduced in (15). For each positive integer k we have u_k(z) = (1, z, z², …, z^{k−1})ᵀ and v_k(z) = z^{k−1} u_k(z^{−1}). The stability condition of the ARMA(p, q) process implies that all the roots of the monic polynomials a(z) and b(z) lie outside the unit circle. Consequently, the roots of the polynomials â(z) and b̂(z) lie within the unit circle and serve as the poles for computing the integrals (18)–(21) when Cauchy's residue theorem is applied. Notice that the FIM ℱ(ϑ) is symmetric, with Toeplitz blocks, so that ℱ_ab(ϑ) = ℱ_baᵀ(ϑ), and the integrands in (18)–(21) are Hermitian. The computation of the integral expressions (18)–(21) is easily implementable using the standard residue theorem. The algorithms displayed in [33] and [22] are suited for the numerical computation of, among others, the FIM of an ARMA(p, q) process.
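The integrals (18)–(21) can be checked numerically in the simplest case. For an ARMA(1,1) process with a(z) = 1 + a₁z and b(z) = 1 + b₁z (so that u_p = u_q = 1), the residue theorem applied at the poles z = −a₁ and z = −b₁ gives the closed forms in the comments below; on |z| = 1 the contour integral (1/2πi)∮ f(z) dz/z equals (1/2π)∫ f(e^{iω}) dω, which a trapezoidal rule on a periodic grid approximates to near machine precision. The coefficient values are illustrative assumptions:

```python
import numpy as np

a1, b1 = 0.5, -0.3
N = 4096
w = 2.0 * np.pi * np.arange(N) / N
z = np.exp(1j * w)                    # points on the unit circle

def a(z): return 1.0 + a1 * z
def b(z): return 1.0 + b1 * z

# Averaging over the grid approximates (1/2*pi) * integral over [0, 2*pi).
F_aa = np.mean(1.0 / (a(z) * a(1 / z))).real     # residue at -a1:  1/(1 - a1^2)
F_bb = np.mean(1.0 / (b(z) * b(1 / z))).real     # residue at -b1:  1/(1 - b1^2)
F_ab = np.mean(-1.0 / (a(z) * b(1 / z))).real    # residue at -b1: -1/(1 - a1*b1)

assert np.isclose(F_aa, 1.0 / (1.0 - a1 ** 2))
assert np.isclose(F_bb, 1.0 / (1.0 - b1 ** 2))
assert np.isclose(F_ab, -1.0 / (1.0 - a1 * b1))
```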

3.2. The Sylvester Resultant Matrix - The Fisher Information Matrix

The resultant property of a matrix is now considered. To show that the FIM ℱ(ϑ) has the matrix resultant property is to show that the matrix ℱ(ϑ) becomes singular if and only if the appropriate scalar monic polynomials â(z) and b̂(z) have at least one common zero. To illustrate the subject, the following known property of two polynomials is set forth. The greatest common divisor (frequently abbreviated as GCD) of two polynomials is a polynomial of the highest possible degree that is a factor of both original polynomials; the roots of the GCD of two polynomials are the common roots of the two polynomials. Consider the coefficients of two monic polynomials p(z) and q(z) of finite degree as the entries of a matrix such that the matrix becomes singular if and only if the polynomials p(z) and q(z) have at least one common root. Such a matrix is called a resultant matrix and its determinant is called the resultant. We therefore present the known (p + q) × (p + q) Sylvester resultant matrix of the polynomials a and b, see e.g., [2]:

$$\mathcal{S}(a, b) = \begin{pmatrix}
1 & a_1 & \cdots & a_p & 0 & \cdots & 0 \\
0 & \ddots & \ddots & & \ddots & \ddots & \vdots \\
0 & \cdots & 1 & a_1 & \cdots & \cdots & a_p \\
1 & b_1 & \cdots & b_q & 0 & \cdots & 0 \\
0 & \ddots & \ddots & & \ddots & \ddots & \vdots \\
0 & \cdots & 1 & b_1 & \cdots & \cdots & b_q
\end{pmatrix} \tag{22}$$

where the upper block consists of q shifted rows of the coefficients of a(z) and the lower block of p shifted rows of the coefficients of b(z).

Consider the p × (p + q) and q × (p + q) upper and lower submatrices 𝒮_p(b) and 𝒮_q(a) of the Sylvester resultant matrix 𝒮(b, −a), such that

$$\mathcal{S}(b, -a) = \begin{pmatrix} \mathcal{S}_p(b) \\ -\mathcal{S}_q(a) \end{pmatrix} \tag{23}$$

The matrix 𝒮(a, b) becomes singular in the presence of one or more common zeros of the monic polynomials â(z) and b̂(z); this property is assessed by the following equalities

$$\mathcal{R}(a, b) = \prod_{\substack{i = 1, \ldots, p \\ j = 1, \ldots, q}} (\alpha_i - \beta_j), \qquad \mathcal{R}(b, a) = (-1)^{pq} \prod_{\substack{i = 1, \ldots, p \\ j = 1, \ldots, q}} (\alpha_i - \beta_j) \tag{24}$$

and

$$\mathcal{R}(b, -a) = (-1)^{q} \prod_{\substack{i = 1, \ldots, p \\ j = 1, \ldots, q}} (\beta_j - \alpha_i), \qquad \mathcal{R}(-b, a) = (-1)^{p} \prod_{\substack{i = 1, \ldots, p \\ j = 1, \ldots, q}} (\beta_j - \alpha_i) \tag{25}$$

where ℛ(a, b) is the resultant of â(z) and b̂(z) and equals Det 𝒮(a, b). The string of equalities in (24) and (25) holds since ℛ(b, a) = (−1)^{pq} ℛ(a, b), ℛ(b, −a) = (−1)^{q} ℛ(b, a) and ℛ(−b, a) = (−1)^{p} ℛ(b, a), see [34]. The zeros of the scalar monic polynomials â(z) and b̂(z) are αᵢ and βⱼ respectively, and they are assumed to be distinct; when factors (z − αᵢ)^{nαᵢ} and (z − βⱼ)^{nβⱼ} occur with powers nαᵢ and nβⱼ greater than one, only the distinct roots are considered, free from the corresponding powers. The key property of the classical Sylvester resultant matrix 𝒮(a, b) is that its null space provides a complete description of the common zeros of the polynomials involved. In particular, in the scalar case the polynomials â(z) and b̂(z) are coprime if and only if 𝒮(a, b) is nonsingular. This key property of the classical Sylvester resultant matrix 𝒮(a, b) is given by the well-known theorem on resultants:

$$\dim \operatorname{Ker} \mathcal{S}(a, b) = \nu(a, b) \tag{26}$$

where ν(a, b) is the number of common roots of the polynomials â(z) and b̂(z), counting multiplicities, see e.g., [3]. The dimension of a subspace 𝒱 is denoted dim(𝒱), and Ker(X) is the null space or kernel of the matrix X, denoted by Null or Ker. The null space of an n × n matrix A with coefficients in a field K (typically the field of the real or of the complex numbers) is the set Ker A = {x ∈ Kⁿ : Ax = 0}, see e.g., [1,2,20].
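The resultant property (26) is easy to verify numerically. The sketch below builds the Sylvester matrix (22) for monic polynomials (the helper name `sylvester` and the coefficient values are illustrative assumptions) and checks that it is singular exactly when the two polynomials share a root:

```python
import numpy as np

def sylvester(a_coefs, b_coefs):
    """(p+q) x (p+q) Sylvester resultant matrix of Equation (22) for monic
    a(z) = 1 + a1 z + ... + ap z^p and b(z) = 1 + b1 z + ... + bq z^q."""
    p, q = len(a_coefs) - 1, len(b_coefs) - 1
    S = np.zeros((p + q, p + q))
    for i in range(q):                       # q shifted rows of a-coefficients
        S[i, i:i + p + 1] = a_coefs
    for i in range(p):                       # p shifted rows of b-coefficients
        S[q + i, i:i + q + 1] = b_coefs
    return S

a = [1.0, 0.7]            # a(z) = 1 + 0.7 z
b = [1.0, 0.7]            # same root z = -1/0.7 as a(z)
c = [1.0, -0.2]           # no common root with a(z)

assert abs(np.linalg.det(sylvester(a, b))) < 1e-12   # common root -> singular
assert abs(np.linalg.det(sylvester(a, c))) > 1e-12   # coprime -> nonsingular
```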

In order to prove that the FIM ℱ(ϑ) fulfills the resultant matrix property, the following factorization is derived in Lemma 2.1 in [5]:

$$\mathcal{F}(\vartheta) = \mathcal{S}^{\top}(b, -a)\, \mathcal{P}(\vartheta)\, \mathcal{S}(b, -a) \tag{27}$$

where the matrix 𝒫(ϑ) ∈ ℝ^{(p+q)×(p+q)} admits the form

$$\mathcal{P}(\vartheta) = \frac{1}{2\pi i} \oint_{|z|=1} \frac{u_{p+q}(z)\, u_{p+q}^{\top}(z^{-1})}{a(z)\, b(z)\, a(z^{-1})\, b(z^{-1})} \frac{dz}{z} = \frac{1}{2\pi i} \oint_{|z|=1} \frac{u_{p+q}(z)\, v_{p+q}^{\top}(z)}{a(z)\, b(z)\, \hat{a}(z)\, \hat{b}(z)}\, dz \tag{28}$$

It is proved in [5] that the symmetric matrix 𝒫(ϑ) fulfills the property 𝒫(ϑ) ≻ 0. The factorization (27) allows us to show the matrix resultant property of the FIM; Corollary 2.2 in [5] states:

The FIM of an ARMA(p, q) process with polynomials a(z) and b(z) of degrees p and q respectively becomes singular if and only if the polynomials â(z) and b̂(z) have at least one common root. From Corollary 2.2 in [5] it can be concluded that the FIM of an ARMA(p, q) process and the Sylvester resultant matrix 𝒮(b, −a) have the same singularity property. By virtue of (26) and (27) we can specify the dimension of the null space of the FIM ℱ(ϑ); this is set forth in the following lemma.

Lemma 3.1

Assume that the polynomials â(z) and b̂(z) have ν(a, b) common roots, counting multiplicities. The factorization (27) of the FIM and the property (26) enable us to prove the equality

$$\dim\left(\operatorname{Ker} \mathcal{F}(\vartheta)\right) = \dim\left(\operatorname{Ker} \mathcal{S}(b, -a)\right) = \nu(a, b) \tag{29}$$

Proof

The matrix 𝒫(ϑ) ∈ ℝ^{(p+q)×(p+q)}, given in (28), fulfills the property of positive definiteness, as proved in [5]. This implies that a Cholesky decomposition can be applied to 𝒫(ϑ), see [35] for more details, to obtain 𝒫(ϑ) = Lᵀ(ϑ)L(ϑ), where L(ϑ) is an upper triangular matrix in ℝ^{(p+q)×(p+q)} that is unique if its diagonal elements are all positive. Consequently, all its eigenvalues are then positive, so that the matrix L(ϑ) is also positive definite. The factorization (27) now admits the representation

$$\mathcal{F}(\vartheta) = \mathcal{S}^{\top}(b, -a)\, L^{\top}(\vartheta)\, L(\vartheta)\, \mathcal{S}(b, -a) \tag{30}$$

and taking into account the property that, if A is an m × n matrix, then Ker(A) = Ker(AᵀA), applied to (30), yields

$$\operatorname{Ker} \mathcal{F}(\vartheta) = \operatorname{Ker}\left( \mathcal{S}^{\top}(b, -a)\, L^{\top}(\vartheta)\, L(\vartheta)\, \mathcal{S}(b, -a) \right) = \operatorname{Ker}\left( L(\vartheta)\, \mathcal{S}(b, -a) \right) \tag{31}$$

Assume a vector u ∈ Ker L(ϑ)𝒮(b, −a), so that L(ϑ)𝒮(b, −a)u = 0, and set 𝒮(b, −a)u = v; then L(ϑ)v = 0, and since the matrix L(ϑ) ≻ 0, it follows that v = 0. This implies 𝒮(b, −a)u = 0, hence u ∈ Ker 𝒮(b, −a). Consequently,

$$\operatorname{Ker} \mathcal{F}(\vartheta) = \operatorname{Ker} \mathcal{S}(b, -a) \tag{32}$$

We now invoke the Rank-Nullity Theorem, see e.g., [1]: if A is an m × n matrix, then

$$\dim\left(\operatorname{Ker} A\right) + \dim\left(\operatorname{Im} A\right) = n$$

together with the property dim(Im A) = dim(Im Aᵀ). When applied to the (p + q) × (p + q) matrix 𝒮(b, −a), this yields

$$\dim\left(\operatorname{Ker} \mathcal{S}^{\top}(b, -a)\right) = \dim\left(\operatorname{Ker} \mathcal{S}(b, -a)\right) \implies \dim\left(\operatorname{Ker} \mathcal{F}(\vartheta)\right) = \dim\left(\operatorname{Ker} \mathcal{S}(b, -a)\right)$$

which completes the proof.

Notice that the dimension of the null space of a matrix A is called the nullity of A, and the dimension of the image of A, dim(Im A), is termed the rank of A. An alternative proof to the one developed in Corollary 2.2 in [5] is given in a corollary to Lemma 3.1, reconfirming the resultant matrix property of the FIM ℱ(ϑ).

Corollary 3.2

The FIM ℱ(ϑ) of an ARMA(p, q) process becomes singular if and only if the autoregressive and moving average polynomials â(z) and b̂(z) have at least one common root.

Proof

By virtue of the equality (31), combined with the property Det 𝒮ᵀ(b, −a) = Det 𝒮(b, −a) and the matrix resultant property of the Sylvester matrix 𝒮(b, −a), one has: Det 𝒮(b, −a) = 0 ⇔ Ker 𝒮(b, −a) ≠ {0} if and only if the ARMA(p, q) polynomials â(z) and b̂(z) have at least one common root. Equivalently, Det 𝒮(b, −a) ≠ 0 ⇔ Ker 𝒮(b, −a) = {0} if and only if the ARMA(p, q) polynomials â(z) and b̂(z) have no common roots. Consequently, by virtue of the equality Ker ℱ(ϑ) = Ker 𝒮(b, −a), it can be concluded that the FIM ℱ(ϑ) becomes singular if and only if the ARMA(p, q) polynomials â(z) and b̂(z) have at least one common root. This completes the proof.
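Corollary 3.2 can be illustrated in the ARMA(1,1) case. Evaluating the integrals (18)–(21) by residues for a(z) = 1 + a₁z and b(z) = 1 + b₁z gives the 2 × 2 FIM sketched below (the helper name `fim_arma11` is an assumption); it is singular precisely when a₁ = b₁, i.e., when a(z) and b(z), and hence â(z) and b̂(z), share their single root:

```python
import numpy as np

def fim_arma11(a1, b1):
    """Closed-form FIM of an ARMA(1,1) process, obtained from (18)-(21)
    by the residue theorem (unit noise variance, |a1| < 1, |b1| < 1)."""
    return np.array([[1.0 / (1.0 - a1 ** 2), -1.0 / (1.0 - a1 * b1)],
                     [-1.0 / (1.0 - a1 * b1), 1.0 / (1.0 - b1 ** 2)]])

# Common root (a1 = b1): the FIM is singular, as Corollary 3.2 states.
assert abs(np.linalg.det(fim_arma11(0.6, 0.6))) < 1e-12
# Coprime polynomials: the FIM is nonsingular (here positive definite).
assert np.linalg.det(fim_arma11(0.6, -0.4)) > 0.0
```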

3.3. The Statistical Distance Measure and the Fisher Information Matrix

In [7], statistical distance measures are studied. Most multivariate statistical techniques are based upon the concept of distance. For that purpose, a statistical distance measure is considered that is a normalized Euclidean distance measure with entries of the FIM as weighting coefficients. The measurements x1, x2, …, xn are subject to random fluctuations of different magnitudes and therefore have different variabilities. It is then important to consider a distance that takes the variability of these variables or measurements into account when determining its distance from a fixed point. A rotation of the coordinate system through a chosen angle, while keeping the scatter of points given by the data fixed, is also applied, see [7] for more details. It is shown that when the FIM is positive definite, the appropriate statistical distance measure is a metric. In the case of a singular FIM of an ARMA stationary process, the metric property depends on the rotation angle. The statistical distance measure is based on m parameters, unlike the statistical distance measure introduced in quantum information, see e.g., [8,9], which is also related to the Fisher information but where the information about one parameter in a particular measurement procedure is considered.

The straight-line or Euclidean distance between the stochastic vector x = (x1, x2, …, xn)ᵀ and the fixed vector y = (y1, y2, …, yn)ᵀ, where x, y ∈ ℝⁿ, is given by

$$d(x, y) = \|x - y\| = \left( \sum_{j=1}^{n} (x_j - y_j)^2 \right)^{1/2} \tag{33}$$

where the metric d(x, y) := ‖x − y‖ is induced by the standard Euclidean norm ‖·‖ on ℝⁿ, see e.g., [2] for the metric conditions.

The observations x1, x2, …, xn are used to compute maximum likelihood estimates of the parameters ϑ1, ϑ2, …, ϑm, where m < n. These estimated parameters are random variables, see e.g., [15]. The distance of the estimated vector ϑ ∈ ℝᵐ, given in (15), is studied. Entries of the FIM are inserted in the distance measure as weighting coefficients. The linear transformation

$$\tilde{\vartheta} = \mathcal{L}_i(\varphi)\, \vartheta$$

is applied, where ℒ_i(φ) ∈ ℝ^{m×m} is the Givens rotation matrix with rotation angle φ, 0 ≤ φ ≤ 2π and i ∈ {1, …, m − 1}, see e.g., [36], given by

$$\mathcal{L}_i(\varphi) = \begin{pmatrix}
I_{i-1} & 0 & 0 & 0 \\
0 & \cos(\varphi) & -\sin(\varphi) & 0 \\
0 & \sin(\varphi) & \cos(\varphi) & 0 \\
0 & 0 & 0 & I_{m-i-1}
\end{pmatrix}, \qquad 0 \leq \varphi \leq 2\pi
$$

The following matrix decomposition is applied in order to obtain a transformed FIM

$$\mathcal{F}_{\varphi}(\vartheta) = \mathcal{L}_i(\varphi)\, \mathcal{F}(\vartheta)\, \mathcal{L}_i^{\top}(\varphi) \tag{35}$$

where ℱ_φ(ϑ) and ℱ(ϑ) are respectively the transformed and untransformed Fisher information matrices. It is straightforward to conclude that, by virtue of (35), the transformed and untransformed Fisher information matrices ℱ_φ(ϑ) and ℱ(ϑ) are similar, since the rotation matrix ℒ_i(φ) is orthogonal. Two matrices A and B are similar if there exists an invertible matrix X such that the equality AX = XB holds. As can be seen, the Givens matrix ℒ_i(φ) involves only two coordinates that are affected by the rotation angle φ, whereas the other directions, which correspond to eigenvalues equal to one, are unaffected by the rotation matrix.

By virtue of (35) it can be concluded that a positive definite FIM, ℱ(ϑ) ≻ 0, implies a positive definite transformed FIM, ℱ_φ(ϑ) ≻ 0. Consequently, the elements on the main diagonal of ℱ(ϑ), f1,1, f2,2, …, fm,m, as well as the elements on the main diagonal of ℱ_φ(ϑ), f̃1,1, f̃2,2, …, f̃m,m, are all positive. However, the elements on the main diagonal of a singular FIM of a stationary ARMA process are also positive.
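A small numerical sketch of the transformation (35), with an illustrative 3 × 3 positive definite matrix standing in for ℱ(ϑ) (the values are assumptions, not from the paper): since ℒ_i(φ) is orthogonal, ℱ_φ(ϑ) is similar to ℱ(ϑ), so the eigenvalues, and hence positive definiteness, are preserved.

```python
import numpy as np

def givens(m, i, phi):
    """m x m Givens rotation acting on coordinates i and i+1 (0-based)."""
    L = np.eye(m)
    c, s = np.cos(phi), np.sin(phi)
    L[i, i], L[i, i + 1] = c, -s
    L[i + 1, i], L[i + 1, i + 1] = s, c
    return L

F = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.5, 0.2],
              [0.0, 0.2, 1.0]])        # illustrative positive definite "FIM"
L = givens(3, 0, 0.7)
F_phi = L @ F @ L.T                    # the transformation (35)

# Similarity: the spectrum is unchanged, so F_phi stays positive definite.
assert np.allclose(np.linalg.eigvalsh(F_phi), np.linalg.eigvalsh(F))
assert np.all(np.linalg.eigvalsh(F_phi) > 0)
```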

As developed in [7], combining (33) and (35) yields the distance measure of the estimated parameters ϑ1, ϑ2, …, ϑm:

$$d^{2}_{\mathcal{F}_{\varphi}}(\vartheta) = \sum_{\substack{j = 1 \\ j \neq i, i+1}}^{m} \vartheta_j^{2}\, f_{j,j} + \left\{ \vartheta_i \cos(\varphi) - \vartheta_{i+1} \sin(\varphi) \right\}^{2} \tilde{f}_{i,i}(\varphi) + \left\{ \vartheta_{i+1} \cos(\varphi) + \vartheta_i \sin(\varphi) \right\}^{2} \tilde{f}_{i+1,i+1}(\varphi) \tag{36}$$

where

$$\tilde{f}_{i,i}(\varphi) = f_{i,i} \cos^{2}(\varphi) - f_{i,i+1} \sin(2\varphi) + f_{i+1,i+1} \sin^{2}(\varphi) \tag{37}$$
$$\tilde{f}_{i+1,i+1}(\varphi) = f_{i+1,i+1} \cos^{2}(\varphi) + f_{i,i+1} \sin(2\varphi) + f_{i,i} \sin^{2}(\varphi) \tag{38}$$

where fj,l are entries of the FIM ℱ(ϑ), whereas f̃i,i(φ) and f̃i+1,i+1(φ) are the transformed components, since the rotation affects only the entries i and i + 1, as can be seen in the matrix ℒ_i(φ). In [7], the existence of the following inequalities is proved:

$$\tilde{f}_{i,i}(\varphi) > 0 \qquad \text{and} \qquad \tilde{f}_{i+1,i+1}(\varphi) > 0$$

This guarantees the metric property of (36). In the case of the FIM of an ARMA(p, q) process, a combination of (27) and (35) for the ARMA(p, q) parameters given in (15) yields, for the transformed FIM,

F φ ( ϑ ) = S φ ( - b , a ) P ( ϑ ) S φ ⊤ ( - b , a )

where (ϑ) is given by (28) and the transformed Sylvester resultant matrix is of the form

S φ ( - b , a ) = L i ( φ ) S ( - b , a )

Proposition 3.5 in [7] proves that the transformed FIM ℱφ(ϑ) and the transformed Sylvester matrix Sφ(−b, a) fulfill the resultant matrix property, by using the equalities (40) and (39). The following property is then set forth.

Proposition 3.3

The properties

K e r F φ ( ϑ ) = K e r S φ ( - b , a )     a n d     K e r S φ ( - b , a ) = K e r S ( - b , a )

hold true.

Proof

By virtue of the equalities (39) and (40) and the orthogonality of the rotation matrix Li(φ), which implies Ker Li(φ) = {0}, the same approach as in Lemma 3.1 completes the proof.

A straightforward conclusion from Proposition 3.3 is then

dim Ker F φ ( ϑ ) = dim Ker S φ ( - b , a ) , dim Ker S φ ( - b , a ) = dim Ker S ( - b , a )
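As a numerical illustration, not taken from [7], the similarity and diagonal-transformation identities of this subsection can be checked with a hypothetical 4 × 4 positive definite matrix standing in for the FIM; the Givens rotation acts on coordinates i and i + 1 only.

```python
import numpy as np

m, i, phi = 4, 1, 0.3                       # rotate coordinates i, i+1 (0-based)
F = np.array([[4.0, 1.0, 0.5, 0.2],         # hypothetical positive definite "FIM"
              [1.0, 3.0, 0.8, 0.1],
              [0.5, 0.8, 2.0, 0.3],
              [0.2, 0.1, 0.3, 1.5]])

G = np.eye(m)                               # Givens rotation matrix
c, s = np.cos(phi), np.sin(phi)
G[i, i], G[i, i + 1], G[i + 1, i], G[i + 1, i + 1] = c, -s, s, c

Ft = G @ F @ G.T                            # transformed FIM; similar to F since G is orthogonal
assert np.allclose(np.linalg.eigvalsh(Ft), np.linalg.eigvalsh(F))
assert np.all(np.linalg.eigvalsh(Ft) > 0)   # positive definiteness is preserved

# the two rotated diagonal entries follow the transformation formulas (38)
assert np.isclose(Ft[i, i],
                  F[i, i]*c**2 - F[i, i+1]*np.sin(2*phi) + F[i+1, i+1]*s**2)
assert np.isclose(Ft[i+1, i+1],
                  F[i+1, i+1]*c**2 + F[i, i+1]*np.sin(2*phi) + F[i, i]*s**2)
```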

In the next section a distance measure introduced in quantum information is discussed.

Statistical Distance Measure - Fisher Information and Quantum Information

In quantum information, the Fisher information, the information about a parameter θ in a particular measurement procedure, is expressed in terms of the statistical distance s, see [8–10]. The statistical distance is defined as a measure to distinguish two probability distributions on the basis of measurement outcomes, see [37]. The Fisher information and the statistical distance are statistical quantities, and generally refer to many measurements, as is the case in this survey. However, in the quantum information theory and quantum statistics context, the problem is set up as follows. There may or may not be a small phase change θ, and the question is whether it is there. In that case one can design quantum experiments that answer the question unambiguously in a single measurement. The equality derived is of the form

F ( θ ) = ( d s d θ ) 2

i.e., the Fisher information is the square of the derivative of the statistical distance s with respect to θ. This contrasts with (36), where the square of the statistical distance measure is expressed in terms of entries of a FIM ℱ(ϑ) based on information about m parameters estimated from n measurements, with m < n. A challenging question can therefore be formulated: can a generalization of equality (41) be developed in a quantum information context, but at the matrix level? More specifically, many observations or measurements lead to more than one parameter, such that the corresponding Fisher information matrix is interconnected with an appropriate statistical distance matrix, a matrix whose entries are scalar distance measures. This question could equally be a challenge to algebraic matrix theory and to quantum information.

3.4. The Bezoutian - The Fisher Information Matrix

In this section an additional resultant matrix is presented: the Bezout matrix, or Bezoutian. The notation of Lancaster and Tismenetsky [2] is used, and the results presented are extracted from [38]. Assume the polynomials a and b are given by a ( z ) = j = 0 n a j z j and b ( z ) = j = 0 n b j z j, cf. (13) but with p = q = n, and further assume a0 = b0 = 1. The Bezout matrix B(a, b) of the polynomials a and b is defined by the relation

a ( z ) b ( w ) - a ( w ) b ( z ) = ( z - w ) u n ⊤ ( z ) B ( a , b ) u n ( w )

This matrix is often referred to as the Bezoutian. We display a decomposition of the Bezout matrix B(a, b) developed in [38]. For that purpose the matrix Uφ and its inverse Tφ are presented, where φ is a given complex number,

U φ = ( 1 0 0 - φ 1 0 0 0 0 - φ 1 ) , T φ = ( 1 0 0 φ 1 0 φ 2 φ n - 1 φ 2 φ 1 )

Let (1 − α1z) and (1 − β1z) be factors of a(z) and b(z), respectively, where α1 and β1 are zeros of â(z) and b̂(z). Consider the factored forms of the nth order polynomials a(z) and b(z), a(z) = (1 − α1z)a−1(z) and b(z) = (1 − β1z)b−1(z), respectively. Proceeding this way for α2, . . . , αn yields the recursion a−(k−1)(z) = (1 − αkz)a−k(z), and equivalently for the polynomials b−k(z), with a0(z) = a(z) and b0(z) = b(z). Proposition 3.1 in [38] is presented.

The following non-symmetric decomposition of the Bezoutian is derived, considering the notations above

B ( a , b ) = U α 1 ( B ( a - 1 , b - 1 ) 0 0 0 ) U β 1 + ( β 1 - α 1 ) b β 1 a α 1

with aα1 such that a α 1 ⊤ u n ( z ) = a - 1 ( z ), and similarly for bβ1. Iteration gives the following expansion for the Bezout matrix

B ( a , b ) = k = 1 n ( β k - α k ) U α 1 U α k - 1 U β k + 1 U β n e 1 n ( e 1 n ) U β 1 U β k - 1 U α k + 1 U α n

where e 1 n is the first standard basis column vector in ℝn; by ej we denote the j-th coordinate vector, ej = (0, . . . , 1, . . . , 0)T, with all components equal to 0 except the j-th, which equals 1. The following corollaries to Proposition 3.1 in [38] are now presented.

Corollary 3.2 in [38] states: let φ be a common zero of the polynomials â(z) and b̂(z). Then a(z) = (1 − φz)a−1(z) and b(z) = (1 − φz)b−1(z) and

B ( a , b ) = U φ ( B ( a - 1 , b - 1 ) 0 0 0 ) U φ

This is a direct consequence of (42), from which it can be concluded that the Bezoutian B(a, b) is non-singular if and only if the polynomials a(z) and b(z) have no common factors. A similar conclusion is drawn for the FIM in (27), so the matrices ℱ(ϑ) and B(a, b) share the same singularity property.
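A small computational sketch, not from [2] or [38], illustrates this singularity property. The coefficient recursion below follows directly from the defining relation of B(a, b) by matching powers of z and w; the example polynomials are hypothetical, sharing the factor (1 − 2z).

```python
import numpy as np

def bezout(a, b):
    """Bezout matrix B(a, b) from a(z)b(w) - a(w)b(z) = (z - w) u_n(z)^T B u_n(w).
    Coefficients are given in increasing powers; both polynomials padded to degree n."""
    n = len(a) - 1
    C = np.outer(a, b) - np.outer(b, a)          # C[i, j] = a_i b_j - a_j b_i
    B = np.zeros((n, n))
    B[0, :] = -C[0, 1:]                          # row 0 from the j = 1..n matching equations
    for i in range(1, n):
        for j in range(1, n + 1):
            B[i, j - 1] = (B[i - 1, j] if j <= n - 1 else 0.0) - C[i, j]
    return B

# a(z) = (1 - 2z)(1 - 3z), b(z) = (1 - 2z)(1 - 5z): one common factor
a = np.array([1.0, -5.0, 6.0])                   # increasing powers
b = np.array([1.0, -7.0, 10.0])
B = bezout(a, b)

# symmetry and the defining relation at sample points z0, w0
z0, w0 = 0.7, -0.4
u = lambda x: np.array([1.0, x])
pol = lambda c, x: sum(ci * x**k for k, ci in enumerate(c))
lhs = (z0 - w0) * u(z0) @ B @ u(w0)
rhs = pol(a, z0) * pol(b, w0) - pol(a, w0) * pol(b, z0)
assert np.allclose(B, B.T) and np.isclose(lhs, rhs)
assert abs(np.linalg.det(B)) < 1e-10             # singular: common factor present
```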

Related to Corollary 3.2 in [38], we now give a description of the kernel, or null space, of the Bezout matrix.

Corollary 3.3 in [38] is now presented. Let φ1, . . ., φm be all the common zeros of the polynomials â(z) and b̂(z), with multiplicities n1, . . . , nm. Let en be the last standard basis column vector in ℝn and put

w k j = ( T φ k j J j - 1 ) e n

for k = 1, . . . , m and j = 1, . . . , nk and by J we denote the forward n × n shift matrix, Jij = 1 if i = j + 1. Consequently, the subspace Ker B(a, b) is the linear span of the vectors w k j.

An alternative representation to (27) but involving the Bezoutian B(b, a) and derived in Proposition 5.1 in [38] is of the form

F ( ϑ ) = M - 1 ( b , a ) H ( ϑ ) M - ⊤ ( b , a )

where

H ( ϑ ) = ( I 0 0 B ( b , a ) ) Q ( ϑ ) ( I 0 0 B ( b , a ) )             and M ( b , a ) = ( P 0 P S ( a ^ ) P P S ( b ^ ) P )

and

P = ( 0 0 1 1 0 0 1 0 0 ) , S ( a ^ ) = ( a n - 1 a n - 2 a 0 a n - 2 a 0 0 a 0 0 0 )             and Q ( ϑ ) 0

The matrix S(â) is the symmetrizer of the polynomial â(z) (in this paper a0 = 1, see [2]) and P is a permutation matrix. In [38] it is shown that the matrix Q(ϑ) is the unique solution to an appropriate Stein equation and is strictly positive definite; an explicit form of the Stein solution Q(ϑ) is developed in the next section. Some comments concerning the property summarized in Corollary 5.2 in [38] follow.

The matrix H(ϑ) is non-singular if and only if the polynomials a(z) and b(z) have no common factors. The proof is straightforward: since the matrix Q(ϑ) is non-singular, the matrix H(ϑ) is non-singular precisely when the Bezoutian B(b, a) is non-singular, and this is fulfilled if and only if the polynomials a(z) and b(z) have no common factors.

The matrix M(b, a) is non-singular if a0 ≠ 0 and b0 ≠ 0, which is the case since a0 = b0 = 1. From (43) it can be concluded that the FIM ℱ(ϑ) is non-singular only when the matrix H(ϑ) is non-singular or, by virtue of (44), when the Bezoutian B(b, a) is non-singular. Consequently, the singularity conditions of the Bezoutian B(b, a), the FIM ℱ(ϑ) and the Sylvester resultant matrix S(b, −a) are equivalent. Combining (29), proved in Lemma 3.1, with the equality dim (Ker S(a, b)) = dim (Ker B(a, b)), proved in Theorem 21.11 in [1], yields

dim    ( Ker S ( b , - a ) ) = dim    ( Ker F ( ϑ ) ) = dim     ( Ker B ( b , a ) ) = ν ( a , b )
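The kernel-dimension chain can be illustrated numerically with the classical Sylvester resultant matrix, which differs from S(b, −a) only by signs and row ordering and therefore has the same rank. The example polynomials below are hypothetical, chosen with exactly representable coefficients and a greatest common divisor of degree one.

```python
import numpy as np

def sylvester(f, g):
    """Classical Sylvester resultant matrix of f (degree p) and g (degree q),
    coefficients in decreasing powers; shape (p+q) x (p+q)."""
    p, q = len(f) - 1, len(g) - 1
    S = np.zeros((p + q, p + q))
    for i in range(q):                    # q shifted copies of f
        S[i, i:i + p + 1] = f
    for i in range(p):                    # p shifted copies of g
        S[q + i, i:i + q + 1] = g
    return S

f = np.polymul([1, -0.5], [1, -0.25])     # roots 0.5 and 0.25
g = np.polymul([1, -0.5], [1, 0.5])       # roots 0.5 and -0.5: one common root with f
S = sylvester(f, g)
assert np.linalg.matrix_rank(S) == 3      # dim Ker = number of common roots = 1

g2 = np.polymul([1, 0.5], [1, 0.25])      # roots -0.5 and -0.25: no common root
assert np.linalg.matrix_rank(sylvester(f, g2)) == 4
```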

3.5. The Stein Equation - The Fisher Information Matrix of an ARMA(p, q) Process

In [12], a link between the FIM of an ARMA process and an appropriate solution of a Stein equation is set forth. In this survey paper we shall present some of the results and confront some results displayed in the previous sections. However, alternative proofs will be given to some results obtained in [12,38].

The Stein matrix equation is now set forth. Let A ∈ ℂm×m, B ∈ ℂn×n and Γ ∈ ℂn×m and consider the Stein equation

S - B S A = Γ

It has a unique solution if and only if λμ ≠ 1 for any λ ∈ σ(A) and μ ∈ σ(B), where the spectrum of a matrix D is σ(D) = {λ ∈ ℂ: det(λIm − D) = 0}, the set of eigenvalues of D. The unique solution is given in the next theorem [11].

Theorem 3.4

Let A and B be such that there is a single closed contour C with σ(B) inside C and, for each non-zero w ∈ σ(A), w−1 outside C. Then for arbitrary Γ the Stein Equation (45) has a unique solution S

S = 1 2 π i C ( λ I n - B ) - 1 Γ ( I m - λ A ) - d λ
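For modest dimensions, the solution can also be cross-checked by vectorizing the Stein Equation (45): with column-major vec, vec(BSA) = (A⊤ ⊗ B)vec(S), so S follows from one linear solve. The matrices below are randomly generated stand-ins, not taken from [11] or [12]; their small entries make the condition λμ ≠ 1 hold.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 3
A = 0.4 * rng.standard_normal((m, m))        # small entries keep all |lambda*mu| < 1
B = 0.4 * rng.standard_normal((n, n))
Gamma = rng.standard_normal((n, m))

# S - B S A = Gamma  <=>  (I - A^T (x) B) vec(S) = vec(Gamma), column-major vec
M = np.eye(n * m) - np.kron(A.T, B)
S = np.linalg.solve(M, Gamma.flatten(order="F")).reshape((n, m), order="F")
assert np.allclose(S - B @ S @ A, Gamma)
```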

In this section an interconnection between the representation (27) of the FIM ℱ(ϑ) and an appropriate solution to a Stein equation of the form (45), as developed in [12], is set forth. The distinct roots of the polynomials â(z) and b̂(z) are denoted by α1, α2, . . . , αp and β1, β2, . . . , βq, respectively, such that the non-singularity of the FIM ℱ(ϑ) is guaranteed. The following representation of the integral expression (28) is obtained when Cauchy's residue theorem is applied; see equation (4.8) in [12]

P ( ϑ ) = U ( ϑ ) D ( ϑ ) U ^ ( ϑ )

where

U ( ϑ ) = { u p + q ( α 1 ) , u p + q ( α 2 ) , , u p + q ( α p ) , u p + q ( β 1 ) , u p + q ( β 2 ) , , u p + q ( β q ) }

D ( ϑ ) = diag { ( 1 a ^ ( z ; α i ) b ^ ( α i ) a ( α i ) b ( α i ) ) , ( 1 a ^ ( β j ) b ^ ( z ; β j ) a ( β j ) b ( β j ) ) }, i = 1, ..., p and j = 1, ..., q

and

U ^ ( ϑ ) = { v p + q ( α 1 ) , v p + q ( α 2 ) , , v p + q ( α p ) , v p + q ( β 1 ) , v p + q ( β 2 ) , , v p + q ( β q ) }

the polynomial p(·; β) is defined accordingly, p ( z ; β ) = p ( z ) / ( z - β ), and D(ϑ) is the (p + q) × (p + q) diagonal matrix. The matrices U(ϑ) and Û(ϑ) in (47) are the (p + q) × (p + q) Vandermonde matrices Vαβ and V̂αβ, respectively, given by

V α β = ( 1 α 1 α 1 2 α 1 p + q - 1 1 α 2 α 2 2 α 2 p + q - 1 1 α p α p 2 α p p + q - 1 1 β 1 β 1 2 β 1 p + q - 1 1 β 2 β 2 2 β 2 p + q - 1 1 β q β q 2 β q p + q - 1 )             and V ^ α β = ( α 1 p + q - 1 α 1 p + q - 2 α 1 1 α 2 p + q - 1 α 2 p + q - 2 α 2 1 α p p + q - 1 α p p + q - 2 α p 1 β 1 p + q - 1 β 1 p + q - 2 β 1 1 β 2 p + q - 1 β 2 p + q - 2 β 2 1 β q p + q - 1 β q p + q - 2 β q 1 )

It is clear that the (p + q) × (p + q) Vandermonde matrices Vαβ and V̂αβ are nonsingular when αi ≠ αj (i ≠ j), βk ≠ βh (k ≠ h) and αi ≠ βk for all i, j = 1, . . . , p and k, h = 1, . . . , q. A rigorous systematic evaluation of the Vandermonde determinants Det Vαβ and Det V̂αβ yields

Det V α β = ( - 1 ) ( p + q ) ( p + q - 1 ) / 2 Φ ( α i , β k )

where

Φ ( α i , β k ) = 1 i < j p ( α i - α j ) 1 k < h q ( β k - β h ) m = 1 , p n = 1 , q ( α m - β n )

Since V α β = P V ^ α β and given the configuration of the permutation matrix P, this leads to the equalities Det V̂αβ = Det P Det Vαβ and Det P = (−1)(p+q)(p+q−1)/2, so that

Det V ^ α β = ( - 1 ) ( p + q ) ( p + q - 1 ) Φ ( α i , β k ) Det V α β = Det V ^ α β
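The determinant formula can be checked numerically; the roots below are hypothetical, and np.vander with increasing=True reproduces the row layout of Vαβ, while Φ(αi, βk) is the product of pairwise differences taken in list order.

```python
import numpy as np
from itertools import combinations

alpha = [0.3, -0.5]                   # hypothetical roots of a-hat(z)
beta = [0.2, 0.7, -0.1]               # hypothetical roots of b-hat(z)
nodes = alpha + beta
n = len(nodes)                        # n = p + q

V = np.vander(nodes, increasing=True)            # rows (1, x, x^2, ..., x^{n-1})
Phi = np.prod([x - y for x, y in combinations(nodes, 2)])
assert np.isclose(np.linalg.det(V), (-1)**(n*(n-1)//2) * Phi)
```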

We shall now introduce an appropriate Stein equation of the form (45) such that an interconnection with (ϑ) in (47) can be verified. Therefore the following (p + q)× (p + q) companion matrix is introduced,

C g = ( 0 1 0 0 0 1 - g p + q - g p + q - 1 - g 1 )

where the entries gi are given by z p+q + Σ i=1 p+q gi(ϑ) z p+q−i = â(z)b̂(z) = ĝ(z, ϑ), and ĝ(ϑ) is the vector ĝ(ϑ) = (gp+q(ϑ), gp+q−1(ϑ), . . . , g1(ϑ))T. Likewise, g(z, ϑ) = a(z)b(z) and g(ϑ) = (g1(ϑ), g2(ϑ), . . . , gp+q(ϑ))T; for the properties of a companion matrix see e.g., [36], [2]. Since all the roots of the polynomials â(z) and b̂(z) are distinct and lie within the unit circle, the products satisfy αiβj ≠ 1, αiαj ≠ 1 and βiβj ≠ 1 for all i = 1, 2, . . . , p and j = 1, 2, . . . , q. Consequently, the uniqueness condition for the solution of an appropriate Stein equation is verified. The following Stein equation and its solution, according to (45) and (46), are now presented

S - C g S C g = Γ and S = 1 2 π i z = 1 ( z I p + q - C g ) - 1 Γ ( I p + q - z C g ) - d z

where the closed contour is now the unit circle |z| = 1 and the matrix Γ is of size (p + q)× (p + q). A more explicit expression of the solution S is of the form

S = 1 2 π i z = 1 adj ( z I p + q - C g ) Γ adj ( I p + q - z C g ) a ( z ) b ( z ) a ^ ( z ) b ^ ( z ) d z

where adj(X) = X−1 Det(X) is the adjugate (classical adjoint) of the matrix X. When Cauchy's residue theorem is applied to the solution S in (49), the following factored form of S is derived; see equation (4.9) in [12]

S = ( C 1 , C 2 ) ( I p + q Γ ) ( D ( ϑ ) I p + q ) ( C 3 , C 4 )

where

C 1 = adj ( α 1 I p + q - C g ) , adj ( α 2 I p + q - C g ) , … , adj ( α p I p + q - C g ) C 2 = adj ( β 1 I p + q - C g ) , adj ( β 2 I p + q - C g ) , … , adj ( β q I p + q - C g ) C 3 = adj ( I p + q - α 1 C g ) , adj ( I p + q - α 2 C g ) , … , adj ( I p + q - α p C g ) C 4 = adj ( I p + q - β 1 C g ) , adj ( I p + q - β 2 C g ) , … , adj ( I p + q - β q C g )

and D(ϑ) is given in (47); the following matrix rule is applied

( A B ) ( C D ) = A C B D

and the operator ⊗ is the tensor (Kronecker) product of two matrices, see e.g., [2], [20].
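The mixed-product rule is easy to confirm on random conformable matrices; this is a generic numerical check, independent of the FIM context.

```python
import numpy as np

rng = np.random.default_rng(1)
A, C = rng.standard_normal((2, 3)), rng.standard_normal((3, 2))
B, D = rng.standard_normal((4, 2)), rng.standard_normal((2, 3))

# (A (x) B)(C (x) D) = AC (x) BD, valid whenever AC and BD are defined
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
```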

Combining (47) and (50), and taking the assumption αi ≠ αj, βk ≠ βh and αi ≠ βk into account, implies that the inverses of the (p + q) × (p + q) Vandermonde matrices Vαβ and V̂αβ exist, as Lemma 4.2 in [12] states.

The following equality holds true

S = ( C 1 , C 2 ) ( V α β - 1 P ( ϑ ) V ^ α β - 1 Γ ) ( C 3 , C 4 )

or

S = ( C 1 , C 2 ) ( V α β - 1 S - 1 ( b , - a ) F ( ϑ ) S - ( b , - a ) V ^ α β - 1 Γ ) ( C 3 , C 4 )

Consequently, under the condition αi ≠ αj, βk ≠ βh and αi ≠ βk, and by virtue of (27) and (51), an interconnection involving the FIM ℱ(ϑ), a solution S to an appropriate Stein equation, the Sylvester matrix S(b, −a) and the Vandermonde matrices Vαβ and V̂αβ is established. It is clear that, by using the expression (43), the Bezoutian B(a, b) can be inserted in equality (51).

We will formulate a Stein equation for the choice Γ = e p+q e p+q ⊤,

S - C g S C g = e p + q e p + q

where ep+q is the last standard basis column vector in ℝp+q; in general, e i m denotes the i-th standard basis column vector in ℝm, with all components equal to 0 except the i-th, which equals 1. The next lemma is formulated.

Lemma 3.5

The symmetric matrix ℘(ϑ) defined in (28) fulfills the Stein Equation (52).

Proof

The unique solution of (52) is according to (46)

S = 1 2 π i z = 1 ( z I p + q - C g ) - 1 e p + q e p + q ( I p + q - z C g ) - d z

written more explicitly,

S = 1 2 π i z = 1 adj ( z I p + q - C g ) e p + q e p + q adj ( I p + q - z C g ) a ( z ) b ( z ) a ^ ( z ) b ^ ( z ) d z

Using the property of the companion matrix Cg, a standard computation shows that the last column of adj(zIp+q − Cg) is the basic vector up+q(z) and, consequently, the last column of adj(Ip+q − zCg) is the basic vector vp+q(z) = zp+q−1up+q(z−1). This implies that adj(zIp+q − Cg)ep+q = up+q(z) and e p + q ⊤ adj ( I p + q - z C g ) = v p + q ⊤ ( z ), or

S = 1 2 π i z = 1 u p + q ( z ) v p + q ( z ) a ( z ) b ( z ) a ^ ( z ) b ^ ( z ) d z = P ( ϑ )

Consequently, the solution S to the Stein Equation (52) coincides with the matrix ℘(ϑ) defined in (28).
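Lemma 3.5 can be verified numerically: ℘(ϑ) is evaluated from the integral (28) by trapezoidal quadrature on the unit circle (exponentially accurate here, since the integrand has no poles on |z| = 1), and the companion matrix Cg is built from the coefficients of â(z)b̂(z) as in (48). The ARMA(2, 2) parameter values below are hypothetical stable and invertible choices.

```python
import numpy as np

a1, a2, b1, b2 = 0.5, 0.2, -0.3, 0.1      # hypothetical stable/invertible ARMA(2,2)
n = 4                                     # p + q

g = np.polymul([1, a1, a2], [1, b1, b2])  # coefficients of a-hat(z) b-hat(z), decreasing
Cg = np.zeros((n, n))
Cg[np.arange(n - 1), np.arange(1, n)] = 1.0      # superdiagonal of ones, cf. (48)
Cg[-1, :] = -g[1:][::-1]                          # last row (-g_{p+q}, ..., -g_1)
assert np.allclose(np.poly(Cg), g)                # Det(zI - Cg) = a-hat(z) b-hat(z)

a  = lambda z: 1 + a1*z + a2*z**2
b  = lambda z: 1 + b1*z + b2*z**2
ah = lambda z: z**2 + a1*z + a2                   # a-hat(z); roots inside the unit circle
bh = lambda z: z**2 + b1*z + b2

u = lambda z: np.array([z**k for k in range(n)])
N, P = 1024, np.zeros((n, n), dtype=complex)
for z in np.exp(2j * np.pi * np.arange(N) / N):   # quadrature for the integral (28)
    P += np.outer(u(z), z**(n - 1) * u(1/z)) * z / (a(z) * b(z) * ah(z) * bh(z))
P = (P / N).real

E = np.zeros((n, n)); E[-1, -1] = 1.0             # e_{p+q} e_{p+q}^T
assert np.allclose(P - Cg @ P @ Cg.T, E, atol=1e-8)   # Stein Equation (52)
```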

The Stein equation that is verified by the FIM ℱ(ϑ) will now be considered. For that purpose we display the following p × p and q × q companion matrices Ca and Cb,

C a = ( - a 1 - a 2 - a p 1 0 0 0 0 0 1 0 ) , C b = ( - b 1 - b 2 - b q 1 0 0 0 0 0 1 0 )

respectively. Introduce the (p + q) × (p + q) matrix K ( ϑ ) = ( C a O O C b ) and the (p + q) × 1 vector B = ( e p 1 - e q 1 ), where e p 1 and e q 1 are the first standard basis column vectors in ℝp and ℝq respectively. Consider the Stein equation

S - K ( ϑ ) S K ( ϑ ) = B B

followed by the theorem.

Theorem 3.6

The Fisher information matrix ℱ(ϑ) (17) coincides with the solution to the Stein equation (53).

Proof

The eigenvalues of the companion matrices Ca and Cb are, respectively, the zeros of the polynomials â(z) and b̂(z), which are in absolute value smaller than one. This implies that the unique solution of the Stein Equation (53) exists and is given by

S = 1 2 π i z = 1 ( z I p + q - K ( ϑ ) ) - 1 B B ( I p + q - z K ( ϑ ) ) - d z

developing this integral expression in a more explicit form yields

S = 1 2 π i z = 1 ( adj ( z I p - C a ) a ^ ( z ) O O adj ( z I q - C b ) b ^ ( z ) ) ( e p 1 - e q 1 ) { ( adj ( I p - z C a ) a ( z ) O O adj ( I q - z C b ) b ( z ) ) ( e p 1 - e q 1 ) } d z

Considering the form of the companion matrices Ca and Cb, a straightforward computation leads to the conclusion that the first column of adj(zIp − Ca) is the basic vector vp(z) and, consequently, the first column of adj(Ip − zCa) is the basic vector up(z). Equivalently for the companion matrix Cb; this yields

S = 1 2 π i z = 1 ( v p ( z ) a ^ ( z ) - v q ( z ) b ^ ( z ) ) ( u p ( z ) a ( z ) - u q ( z ) b ( z ) ) d z

Representation (54) is such that, in order to obtain a representation equivalent to the FIM ℱ(ϑ) in (17), the transpose of the solution to the Stein Equation (53) is required; this gives

S = 1 2 π i z = 1 ( u p ( z ) v p ( z ) a ( z ) a ^ ( z ) - u p ( z ) v q ( z ) a ( z ) b ^ ( z ) - u q ( z ) v p ( z ) a ^ ( z ) b ( z ) u q ( z ) v q ( z ) b ( z ) b ^ ( z ) ) d z = F ( ϑ )

or

S = 1 2 π i z = 1 ( I p + q - z K ( ϑ ) ) - 1 B B ( z I p + q - K ( ϑ ) ) - d z = F ( ϑ )

The symmetry property of the FIM ℱ(ϑ) leads to S = ℱ(ϑ). From the representation (55) it can be concluded that the solution S of the Stein Equation (53) coincides with the symmetric block Toeplitz FIM ℱ(ϑ) given in (17). This completes the proof.

It is straightforward to verify that the submatrix (1,2) in (55) is the complex conjugate transpose of the submatrix (2,1), whereas each submatrix on the main diagonal is Hermitian; consequently, the integrand is Hermitian. This implies that when the standard residue theorem is applied, it yields ℱ(ϑ) = ℱT(ϑ).

An Illustrative Example of Theorem 3.6

To illustrate Theorem 3.6, the case of an ARMA(2, 2) process is considered. We will use the representation (17) for computing the FIM (ϑ) of an ARMA(2, 2) process. The autoregressive and moving average polynomials are of degree two or p = q = 2 and the ARMA(2, 2) process is described by,

a ( z ) y ( t ) = b ( z ) ɛ ( t )

where y(t) is the stationary process driven by white noise ɛ(t), a(z) = (1 + a1z + a2z2) and b(z) = (1+b1z + b2z2) and the parameter vector is ϑ = (a1, a2, b1, b2)T. The condition, the zeros of the polynomials

a ^ ( z ) = z 2 a ( z - 1 ) = z 2 + a 1 z + a 2 and b ^ ( z ) = z 2 b ( z - 1 ) = z 2 + b 1 z + b 2

are in absolute value smaller than one, is imposed. The FIM (ϑ) of the ARMA(2, 2) process (56) is of the form

F ( ϑ ) = ( F a a ( ϑ ) F a b ( ϑ ) F a b ( ϑ ) F b b ( ϑ ) )

where

F a a ( ϑ ) = 1 ( 1 - a 2 ) [ ( 1 + a 2 ) 2 - a 1 2 ] ( 1 + a 2 - a 1 - a 1 1 + a 2 ) F b b ( ϑ ) = 1 ( 1 - b 2 ) [ ( 1 + b 2 ) 2 - b 1 2 ] ( 1 + b 2 - b 1 - b 1 1 + b 2 ) F a b ( ϑ ) = 1 ( a 2 b 2 - 1 ) 2 + ( a 2 b 1 - a 1 ) ( b 1 - a 1 b 2 ) ( a 2 b 2 - 1 a 1 - a 2 b 1 b 1 - a 1 b 2 a 2 b 2 - 1 )

The submatrices ℱaa(ϑ) and ℱbb(ϑ) are symmetric and Toeplitz, whereas ℱab(ϑ) is Toeplitz. One can assert, without any loss of generality, that the symmetric block Toeplitz property holds for the class of Fisher information matrices of stationary ARMA(p, q) processes, where p and q are arbitrary finite integers representing the degrees of the autoregressive and moving average polynomials, respectively. The appropriate companion matrices Ca and Cb and the 4 × 4 matrices K(ϑ) and ℬℬT are

C a = ( - a 1 - a 2 1 0 ) , C b = ( - b 1 - b 2 1 0 ) , K ( ϑ ) = ( - a 1 - a 2 0 0 1 0 0 0 0 0 - b 1 - b 2 0 0 1 0 )             and B B = ( 1 0 - 1 0 0 0 0 0 - 1 0 1 0 0 0 0 0 )

where B = ( 1 0 - 1 0 ) . It can be verified that the Stein equation

F ( ϑ ) - K ( ϑ ) F ( ϑ ) K ( ϑ ) = B B

holds true when ℱ(ϑ) is of the form (57) and the matrices K(ϑ) and ℬℬT are given in (58).
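The example can be reproduced numerically without relying on the closed forms (57): ℱ(ϑ) is evaluated from representation (55) by trapezoidal quadrature on |z| = 1 and the Stein equation is then checked against K(ϑ) and ℬℬT from (58). The parameter values below are hypothetical stable and invertible choices.

```python
import numpy as np

a1, a2, b1, b2 = 0.5, 0.2, -0.3, 0.1      # hypothetical stable/invertible parameters
u = lambda z, k: np.array([z**j for j in range(k)])
v = lambda z, k: z**(k - 1) * u(1/z, k)   # v_k(z) = z^{k-1} u_k(1/z)
a  = lambda z: 1 + a1*z + a2*z**2
b  = lambda z: 1 + b1*z + b2*z**2
ah = lambda z: z**2 * a(1/z)              # a-hat(z); roots inside the unit circle
bh = lambda z: z**2 * b(1/z)

N, F = 1024, np.zeros((4, 4), dtype=complex)
for z in np.exp(2j * np.pi * np.arange(N) / N):   # quadrature for representation (55)
    left  = np.concatenate([u(z, 2) / a(z), -u(z, 2) / b(z)])
    right = np.concatenate([v(z, 2) / ah(z), -v(z, 2) / bh(z)])
    F += np.outer(left, right) * z
F = (F / N).real

Ca = np.array([[-a1, -a2], [1, 0]])
Cb = np.array([[-b1, -b2], [1, 0]])
K = np.block([[Ca, np.zeros((2, 2))], [np.zeros((2, 2)), Cb]])
Bv = np.array([1.0, 0.0, -1.0, 0.0])
assert np.allclose(F, F.T, atol=1e-8)                              # symmetric FIM
assert np.allclose(F - K @ F @ K.T, np.outer(Bv, Bv), atol=1e-8)   # Stein equation
```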

3.5.1. Some Additional Results

In Proposition 5.1 in [38] it is proved that the matrix Q(ϑ) in (44) fulfills the Stein Equation (59) and that Q(ϑ) ≻ 0. It states that when e P = ( P e 1 , 0 ) = ( e n , 0 n ) ∈ ℝ 2 n, where e1 is the first standard basis column vector in ℝn and en is the last, or n-th, standard basis column vector in ℝn, the following Stein equation admits the form

Q ( ϑ ) = F N ( ϑ ) Q ( ϑ ) F N ( ϑ ) + e P e P

where

F N ( ϑ ) = ( C ^ a 0 e 1 e 1 C b ) , C ^ a = ( 0 1 0 0 0 0 1 0 0 1 - a p - a p - 1 - a 1 )

A corollary to Proposition 5.1, [38] will be set forth, the involvement of various Vandermonde matrices in the explicit solution to equation (59) is confirmed. For that purpose the following Vandermonde matrices are displayed,

V α = ( 1 1 1 α 1 α 2 α n α 1 2 α 2 2 α n 2 α 1 n - 1 α 2 n - 1 α n n - 1 ) , V ^ α = ( α 1 n - 1 α 1 n - 2 1 α 2 n - 1 α 2 n - 2 1 α 3 n - 1 α 3 n - 2 1 α n n - 1 α n n - 2 1 ) , V ^ α β = ( V ^ α V ^ β ) , and V α β = ( V α V β )

where V̂β and Vβ have the same configuration as V̂α and Vα, respectively. A corollary to Proposition 5.1 in [38] is now formulated.

Corollary 3.7

An explicit expression of the solution to the Stein equation (59) is of the form

Q ( ϑ ) = ( V α D 11 ( ϑ ) V ^ α V α D 12 ( ϑ ) V α V ^ α β D 21 ( ϑ ) V ^ α β V ^ α β D 22 ( ϑ ) V α β )

where the n × n and 2n × 2n diagonal matrices Dij(ϑ) are specified in the proof.

Proof

The uniqueness of the solution of the Stein Equation (59) is guaranteed since the eigenvalues of the companion matrices Ĉa and Cb, given respectively by the zeros of the polynomials â(z) and b̂(z), are in absolute value smaller than one. Consequently, the unique solution to the Stein Equation (59) exists and is given by

Q ( ϑ ) = 1 2 π i z = 1 ( z I 2 n - F N ( ϑ ) ) - 1 e P e P ( I 2 n - z F N ( ϑ ) ) - d z

In order to proceed, the following inversion property for block triangular matrices is used,

( A O B C ) - 1 = ( A - 1 O - C - 1 B A - 1 C - 1 )

When applied to the Equation (62), it yields

Q ( ϑ ) = 1 2 π i z = 1 ( adj ( z I p - C ^ a ) a ^ ( z ) O adj ( z I q - C b ) e 1 e 1 adj ( z I p - C ^ a ) a ^ ( z ) b ^ ( z ) adj ( z I q - C b ) b ^ ( z ) ) ( e n 0 ) × { ( adj ( I n - z C ^ a ) a ^ ( z ) O adj ( I n - z C b ) e 1 e 1 adj ( I p - z C ^ a ) a ^ ( z ) b ^ ( z ) adj ( I n - z C b ) b ^ ( z ) ) ( e n 0 ) } d z

Considering that the last column vectors of the matrices adj(zIn − Ĉa) and adj(In − zĈa) are the vectors un(z) and vn(z), respectively, this yields

Q ( ϑ ) = 1 2 π i z = 1 ( u n ( z ) a ^ ( z ) v n ( z ) a ^ ( z ) b ^ ( z ) ) ( v n ( z ) a ( z ) z n - 1 u n ( z ) a ( z ) b ( z ) ) d z = 1 2 π i z = 1 ( u n ( z ) v n ( z ) a ( z ) a ^ ( z ) z n - 1 u n ( z ) u n ( z ) a ^ ( z ) a ( z ) b ( z ) v n ( z ) v n ( z ) a ^ ( z ) b ^ ( z ) a ( z ) z n - 1 v n ( z ) u n ( z ) a ^ ( z ) b ^ ( z ) a ( z ) b ( z ) ) d z = ( Q 11 ( ϑ ) Q 12 ( ϑ ) Q 21 ( ϑ ) Q 22 ( ϑ ) )

Applying the standard residue theorem leads for the respective submatrices

Q 11 ( ϑ ) = { u n ( α 1 ) , , u n ( α n ) } D 11 ( ϑ ) { v n ( α 1 ) , , v n ( α n ) } Q 12 ( ϑ ) = { u n ( α 1 ) , , u n ( α n ) } D 12 ( ϑ ) { u n ( α 1 ) , , u n ( α n ) } Q 21 ( ϑ ) = { v n ( α 1 ) , , v n ( α n ) , v n ( β 1 ) , , v n ( β n ) } D 21 ( ϑ ) { v n ( α 1 ) , , v n ( α n ) , v n ( β 1 ) , , v n ( β n ) } Q 22 ( ϑ ) = { v n ( α 1 ) , , v n ( α n ) , v n ( β 1 ) , , v n ( β n ) } D 22 ( ϑ ) { u n ( α 1 ) , , u n ( α n ) , u n ( β 1 ) , , u n ( β n ) }

where the n × n diagonal matrices are

D 11 ( ϑ ) = diag { 1 / ( a ( α i ) a ^ ( z ; α i ) ) } , D 12 ( ϑ ) = diag { α i n - 1 / ( a ( α i ) b ( α i ) a ^ ( z ; α i ) ) }     for i = 1 , , n

and the 2n × 2n diagonal matrices are

D 21 ( ϑ ) = diag { 1 / ( a ( α i ) b ^ ( α i ) a ^ ( z ; α i ) ) , 1 / ( a ^ ( β j ) a ( β j ) b ^ ( z ; β j ) ) } ,     for i , j = 1 , , n D 22 ( ϑ ) = diag { α i n - 1 / ( a ( α i ) b ( α i ) b ^ ( α i ) a ^ ( z ; α i ) ) , β j n - 1 / ( a ^ ( β j ) a ( β j ) b ( β j ) b ^ ( z ; β j ) ) } ,     for i , j = 1 , , n

It is clear that the first and third matrices in Q11(ϑ), Q12(ϑ), Q21(ϑ) and Q22(ϑ) are the appropriate Vandermonde matrices displayed in (60); it can be concluded that the representation (61) is verified. This completes the proof.

In this section an explicit form of the solution Q(ϑ), expressed in terms of various Vandermonde matrices, is displayed. Also, an interconnection between the Fisher information matrix ℱ(ϑ) and appropriate solutions to Stein equations and related matrices is presented. Proofs are given when the Stein equations are verified by the FIM ℱ(ϑ) and the associated matrix ℘(ϑ); these are alternatives to the proofs developed in [38]. The presence of various forms of Vandermonde matrices is also emphasized. In the next section some matrix properties of the FIM G(ϑ) of an ARMAX process are presented.

3.6. The Fisher Information Matrix of an ARMAX(p, r, q) Process

The FIM of the ARMAX process (11) is set forth according to [4]. The derivatives in the corresponding representation (16) are

∂ ɛ t ( ϑ ) / ∂ a j = ( c ( z ) / ( a ( z ) b ( z ) ) ) x ( t - j ) + ( 1 / a ( z ) ) ɛ ( t - j ) ,     ∂ ɛ t ( ϑ ) / ∂ c l = - ( 1 / b ( z ) ) x ( t - l )     and     ∂ ɛ t ( ϑ ) / ∂ b k = - ( 1 / b ( z ) ) ɛ ( t - k )

where j = 1, . . . , p, l = 1, . . . , r and k = 1, . . . , q. Combining all j, l and k yields the (p + r + q) × (p + r + q) FIM

G ( ϑ ) = ( G a a ( ϑ ) G a c ( ϑ ) G a b ( ϑ ) G a c ( ϑ ) G c c ( ϑ ) G c b ( ϑ ) G a b ( ϑ ) G c b ( ϑ ) G b b ( ϑ ) )

where the submatrices of G(ϑ) are given by

G a a ( ϑ ) = 1 2 π i z = 1 R x ( z ) u p ( z ) u p ( z - 1 ) c ( z ) c ( z - 1 ) a ( z ) a ( z - 1 ) b ( z ) b ( z - 1 ) d z z + 1 2 π i z = 1 u p ( z ) u p ( z - 1 ) a ( z ) a ( z - 1 ) d z z = 1 2 π i z = 1 R x ( z ) u p ( z ) v p ( z ) c ( z ) c ^ ( z ) a ( z ) a ^ ( z ) b ( z ) b ^ ( z ) z r - q d z + 1 2 π i z = 1 u p ( z ) v p ( z ) a ( z ) a ^ ( z ) d z G a b ( ϑ ) = - 1 2 π i z = 1 u p ( z ) u q ( z - 1 ) a ( z ) b ( z - 1 ) d z z = - 1 2 π i z = 1 u p ( z ) v q ( z ) a ( z ) b ^ ( z ) d z G a c ( ϑ ) = - 1 2 π i z = 1 R x ( z ) u p ( z ) u r ( z - 1 ) c ( z ) a ( z ) b ( z ) b ( z - 1 ) d z z = - 1 2 π i z = 1 R x ( z ) u p ( z ) v r ( z ) c ( z ) a ( z ) b ( z ) b ^ ( z ) z r - q d z G c c ( ϑ ) = 1 2 π i z = 1 R x ( z ) u r ( z ) u r ( z - 1 ) b ( z ) b ( z - 1 ) d z z = 1 2 π i z = 1 R x ( z ) u r ( z ) v r ( z ) b ( z ) b ^ ( z ) z r - q d z G b b ( ϑ ) = 1 2 π i z = 1 u q ( z ) u q ( z - 1 ) b ( z ) b ( z - 1 ) d z z = - 1 2 π i z = 1 u q ( z ) v q ( z ) b ( z ) b ^ ( z ) d z , and G c b ( ϑ ) = O

where Rx(z) is the spectral density of the process x(t), defined in (10). Let K(z) = a(z)a(z−1)b(z)b(z−1); combining all the expressions in (63) leads to the following representation of G(ϑ) as the sum of two matrices

1 2 π i z = 1 R x ( z ) K ( z ) ( c ( z ) u p ( z ) - a ( z ) u r ( z ) O ) ( c ( z ) u p ( z ) - a ( z ) u r ( z ) O ) * d z z + 1 2 π i z = 1 1 K ( z ) ( b ( z ) u p ( z ) O - a ( z ) u q ( z ) ) ( b ( z ) u p ( z ) O - a ( z ) u q ( z ) ) * d z z

where (X)* is the complex conjugate transpose of the matrix X ∈ ℂm×n. Like in (23) we set forth

S ( - c , a ) = ( - S p ( c ) S r ( a ) )

here Sp(c) is formed by the top p rows of S(−c, a). In a similar way we decompose

S ( - b , a ) = ( - S p ( b ) S q ( a ) )

The representation (64) can be expressed by the appropriate block representations of the Sylvester resultant matrices, to obtain

G ( ϑ ) = ( - S p ( c ) S r ( a ) O ) W ( ϑ ) ( - S p ( c ) S r ( a ) O ) + ( - S p ( b ) O S q ( a ) ) P ( ϑ ) ( - S p ( b ) O S q ( a ) )

where the matrix ℘(ϑ) is given in (28) and the matrix W(ϑ) ∈ ℝ(p+r)×(p+r) is of the form

W ( ϑ ) = 1 2 π i z = 1 R x ( z ) u p + r ( z ) u p + r ( z - 1 ) a ( z ) a ( z - 1 ) b ( z ) b ( z - 1 ) d z z = 1 2 π i z = 1 R x ( z ) u p + r ( z ) v p + r ( z ) a ( z ) b ( z ) a ^ ( z ) b ^ ( z ) d z

It is shown in [4] that W(ϑ) ≻ O. As can be seen in (65), the ARMAX part is explained by the first term, whereas the ARMA part is described by the second term; the combination of both terms summarizes the Fisher information of an ARMAX(p, r, q) process. The FIM G(ϑ) under the form (65) allows us to prove the following property, Theorem 3.1 in [4]: the FIM G(ϑ) of the ARMAX(p, r, q) process with polynomials a(z), c(z) and b(z) of degree p, r and q, respectively, becomes singular if and only if these polynomials have at least one common root. Consequently, the class of resultant matrices is extended by the FIM G(ϑ).

3.7. The Stein Equation - The Fisher Information Matrix of an ARMAX(p, r, q) Process

In Lemma 3.5 it is proved that the matrix ℘(ϑ) in (28) fulfills the Stein Equation (52). We will now consider the conditions under which the matrix W(ϑ) in (66) verifies an appropriate Stein equation. For that purpose we consider the spectral density to be of the form Rx(z) = 1/(h(z)h(z−1)). The degree of the polynomial h(z) is ℓ, and we assume the distinct roots of h(z) to lie outside the unit circle; consequently, the roots of the polynomial ĥ(z) lie within the unit circle. We therefore rewrite W(ϑ) accordingly

W ( ϑ ) = 1 2 π i z = 1 u p + r ( z ) u p + r ( z - 1 ) h ( z ) h ( z - 1 ) a ( z ) a ( z - 1 ) b ( z ) b ( z - 1 ) d z z

We consider a companion matrix of the form (48) of size p + q + ℓ, denoted by Cf; its entries fi are given by z p+q+ℓ + Σ i=1 p+q+ℓ fi(ϑ) z p+q+ℓ−i = â(z)b̂(z)ĥ(z) = f̂(z, ϑ), and f̂(ϑ) is the vector f̂(ϑ) = (fp+q+ℓ(ϑ), fp+q+ℓ−1(ϑ), . . . , f1(ϑ))T. Likewise, f(z, ϑ) = a(z)b(z)h(z) and f(ϑ) = (f1(ϑ), f2(ϑ), . . . , fp+q+ℓ(ϑ))T. The properties Det(zIp+q+ℓ − Cf) = â(z)b̂(z)ĥ(z) and Det(Ip+q+ℓ − zCf) = a(z)b(z)h(z) hold, and we assume

r = q + or p + q + = p + r and r > q

W(ϑ) is then of the form

W ( ϑ ) = 1 2 π i z = 1 u p + r ( z ) v p + r ( z ) h ( z ) h ^ ( z ) a ( z ) a ^ ( z ) b ( z ) b ^ ( z ) d z

We will formulate a Stein equation for the choice Γ = e p+r e p+r ⊤, which is of the form

S - C f S C f = e p + r e p + r

where ep+r is the last standard basis column vector in ℝp+r. The next lemma is formulated.

Lemma 3.8

The matrix W(ϑ) given in (68) fulfills the Stein Equation (69).

Proof

The uniqueness of the solution of (69) is assured since all eigenvalues of Cf lie within the unit circle, so every product of two eigenvalues differs from one; the solution is of the form

S = 1 2 π i z = 1 ( z I p + r - C f ) - 1 e p + r e p + r ( I p + r - z C f ) - d z

or

S = 1 2 π i z = 1 adj ( z I p + r - C f ) e p + r e p + r adj ( I p + r - z C f ) a ^ ( z ) b ^ ( z ) h ^ ( z ) a ( z ) b ( z ) h ( z ) d z

taking the property of the companion matrix Cf into account, the last column vector of adj(zIp+r − Cf) is the basic vector up+r(z) and, consequently, the last column of adj(Ip+r − zCf) is the basic vector vp+r(z); this yields

S = 1 2 π i z = 1 u p + r ( z ) v p + r ( z ) a ^ ( z ) b ^ ( z ) h ^ ( z ) a ( z ) b ( z ) h ( z ) d z = W ( ϑ )

Consequently, the matrix W(ϑ) defined in (68) verifies the Stein Equation (69). This completes the proof.

The matrices ℘(ϑ) and W(ϑ) in (65) verify, under specific conditions, appropriate Stein equations, as has been shown in Lemma 3.5 and Lemma 3.8, respectively. We will now confirm the presence of Vandermonde matrices by applying the standard residue theorem to W(ϑ) in (68), to obtain

$$ W(\vartheta) = V_{\alpha\beta\xi}\, R(\vartheta)\, \hat{V}_{\alpha\beta\xi} $$

The $(p + r) \times (p + r)$ diagonal matrix $R(\vartheta)$ is of the form

$$ R(\vartheta) = \mathrm{diag}\left\{ \frac{1}{\hat{a}(z;\alpha_i)\hat{b}(\alpha_i)\hat{h}(\alpha_i)\varphi(\alpha_i)},\; \frac{1}{\hat{a}(\beta_j)\hat{b}(z;\beta_j)\hat{h}(\beta_j)\varphi(\beta_j)},\; \frac{1}{\hat{a}(\xi_k)\hat{b}(\xi_k)\hat{h}(z;\xi_k)\varphi(\xi_k)} \right\} $$

where $\varphi(z) = a(z)b(z)h(z)$ and $i = 1, \ldots, p$, $j = 1, \ldots, q$ and $k = 1, \ldots, \ell$. The $(p + r) \times (p + r)$ matrices $V_{\alpha\beta\xi}$ and $\hat{V}_{\alpha\beta\xi}$ are of the form

$$ V_{\alpha\beta\xi} = \begin{pmatrix}
1 & \alpha_1 & \alpha_1^2 & \cdots & \alpha_1^{p+r-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & \alpha_p & \alpha_p^2 & \cdots & \alpha_p^{p+r-1} \\
1 & \beta_1 & \beta_1^2 & \cdots & \beta_1^{p+r-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & \beta_q & \beta_q^2 & \cdots & \beta_q^{p+r-1} \\
1 & \xi_1 & \xi_1^2 & \cdots & \xi_1^{p+r-1} \\
\vdots & \vdots & \vdots & & \vdots \\
1 & \xi_\ell & \xi_\ell^2 & \cdots & \xi_\ell^{p+r-1}
\end{pmatrix}, \qquad
\hat{V}_{\alpha\beta\xi} = \begin{pmatrix}
\alpha_1^{p+r-1} & \alpha_1^{p+r-2} & \cdots & \alpha_1 & 1 \\
\vdots & \vdots & & \vdots & \vdots \\
\alpha_p^{p+r-1} & \alpha_p^{p+r-2} & \cdots & \alpha_p & 1 \\
\beta_1^{p+r-1} & \beta_1^{p+r-2} & \cdots & \beta_1 & 1 \\
\vdots & \vdots & & \vdots & \vdots \\
\beta_q^{p+r-1} & \beta_q^{p+r-2} & \cdots & \beta_q & 1 \\
\xi_1^{p+r-1} & \xi_1^{p+r-2} & \cdots & \xi_1 & 1 \\
\vdots & \vdots & & \vdots & \vdots \\
\xi_\ell^{p+r-1} & \xi_\ell^{p+r-2} & \cdots & \xi_\ell & 1
\end{pmatrix} $$

The $(p + r) \times (p + r)$ Vandermonde matrices $V_{\alpha\beta\xi}$ and $\hat{V}_{\alpha\beta\xi}$ are nonsingular when $\alpha_i \neq \alpha_j$ ($i \neq j$), $\beta_k \neq \beta_h$ ($k \neq h$), $\xi_m \neq \xi_n$ ($m \neq n$), $\alpha_i \neq \beta_k$, $\alpha_i \neq \xi_m$, $\beta_k \neq \xi_m$ for all $i, j = 1, \ldots, p$, $k, h = 1, \ldots, q$ and $m, n = 1, \ldots, \ell$. The Vandermonde determinants $\mathrm{Det}\, V_{\alpha\beta\xi}$ and $\mathrm{Det}\, \hat{V}_{\alpha\beta\xi}$ are

$$ \mathrm{Det}\, V_{\alpha\beta\xi} = (-1)^{(p+r)(p+r-1)/2}\, \Psi(\alpha_i, \beta_k, \xi_m) $$

where

$$ \Psi(\alpha_i, \beta_k, \xi_m) = \prod_{1 \le i < j \le p} (\alpha_i - \alpha_j) \prod_{1 \le k < h \le q} (\beta_k - \beta_h) \prod_{1 \le m < n \le \ell} (\xi_m - \xi_n) \prod_{\substack{r = 1, \ldots, p \\ s = 1, \ldots, q}} (\alpha_r - \beta_s) \prod_{\substack{r = 1, \ldots, p \\ w = 1, \ldots, \ell}} (\alpha_r - \xi_w) \prod_{\substack{s = 1, \ldots, q \\ w = 1, \ldots, \ell}} (\beta_s - \xi_w) $$

As for the Vandermonde matrices $V_{\alpha\beta}$ and $\hat{V}_{\alpha\beta}$,

$$ \mathrm{Det}\, \hat{V}_{\alpha\beta\xi} = (-1)^{(p+r)(p+r-1)}\, \Psi(\alpha_i, \beta_k, \xi_m) \quad \text{and} \quad \mathrm{Det}\, V_{\alpha\beta\xi} = \mathrm{Det}\, \hat{V}_{\alpha\beta\xi} $$
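The sign conventions above can be checked on a small example. The following hedged sketch (hypothetical distinct roots pooled into one vector, standing in for the $\alpha_i$, $\beta_k$, $\xi_m$; `numpy` assumed) compares both determinants with the product $\Psi$ of pairwise differences:

```python
import numpy as np
from itertools import combinations
from math import prod

# Hypothetical pooled roots (all distinct), playing the role of alpha, beta, xi
x = np.array([0.2, -0.4, 0.5, 0.9, -0.7])
n = len(x)

V = np.vander(x, increasing=True)       # rows (1, x_i, x_i^2, ..., x_i^{n-1})
V_hat = np.vander(x, increasing=False)  # rows (x_i^{n-1}, ..., x_i, 1)

# Psi: product of (x_i - x_j) over i < j, as in the displayed formula
Psi = prod(x[i] - x[j] for i, j in combinations(range(n), 2))

# Det V carries the sign (-1)^{n(n-1)/2} relative to Psi
assert np.isclose(np.linalg.det(V), (-1) ** (n * (n - 1) // 2) * Psi)
# Reversing the column order contributes another factor (-1)^{n(n-1)/2},
# so Det V_hat = (-1)^{n(n-1)} Psi = Psi
assert np.isclose(np.linalg.det(V_hat), Psi)
```

With $n = 5$ the exponent $n(n-1)/2 = 10$ is even, so both determinants coincide here, consistent with the displayed relation.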

Equation (70) is the ARMAX equivalent of (47). A combination of both equations generates a new representation of the FIM $G(\vartheta)$; this is set forth in the following lemma.

Lemma 3.9

Assume that conditions (67) hold and consider the representations of $\wp(\vartheta)$ and $W(\vartheta)$ in (47) and (70), respectively. This leads to an alternative form of (65), given by

$$ G(\vartheta) = \begin{pmatrix} -S_p(c) \\ S_r(a) \\ O \end{pmatrix} V_{\alpha\beta\xi}\, R(\vartheta)\, \hat{V}_{\alpha\beta\xi} \begin{pmatrix} -S_p(c) \\ S_r(a) \\ O \end{pmatrix}^{\top} + \begin{pmatrix} -S_p(b) \\ O \\ S_q(a) \end{pmatrix} V_{\alpha\beta}\, D(\vartheta)\, \hat{V}_{\alpha\beta} \begin{pmatrix} -S_p(b) \\ O \\ S_q(a) \end{pmatrix}^{\top} $$

In Lemma 3.9, the FIM $G(\vartheta)$ is expressed through submatrices of two Sylvester matrices and various Vandermonde matrices; both types of matrices become singular if and only if the appropriate polynomials have at least one common root.
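The resultant property of the scalar Sylvester matrix invoked here can be illustrated directly. A minimal hedged sketch (hypothetical polynomials; `numpy` assumed, with a hypothetical helper `sylvester` not taken from the paper), in which the determinant vanishes exactly when the two polynomials share a root:

```python
import numpy as np

def sylvester(a, b):
    """Sylvester resultant matrix of polynomials a (degree p) and b (degree q),
    coefficients highest power first; size (p+q) x (p+q)."""
    p, q = len(a) - 1, len(b) - 1
    S = np.zeros((p + q, p + q))
    for i in range(q):                  # q shifted rows of a's coefficients
        S[i, i:i + p + 1] = a
    for i in range(p):                  # p shifted rows of b's coefficients
        S[q + i, i:i + q + 1] = b
    return S

# a(z) = (z - 0.5)(z + 0.8) and b(z) = (z - 0.5)(z - 0.3): common root 0.5
a = np.polymul([1, -0.5], [1, 0.8])
b = np.polymul([1, -0.5], [1, -0.3])
assert abs(np.linalg.det(sylvester(a, b))) < 1e-9   # singular

# A coprime pair gives a nonsingular resultant matrix
b2 = np.polymul([1, -0.6], [1, -0.3])
assert abs(np.linalg.det(sylvester(a, b2))) > 1e-9  # nonsingular
```

The determinant of this matrix equals the resultant of $a$ and $b$ up to sign, which is the property the lemma exploits.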

3.8. The Fisher Information Matrix of a Vector ARMA(p, q) Process

The process (6) is summarized as,

$$ A(z)\, y(t) = B(z)\, \varepsilon(t) $$

and we assume that $\{y(t), t \in \mathbb{N}\}$ is a zero mean Gaussian time series and $\{\varepsilon(t), t \in \mathbb{N}\}$ is an $n$-dimensional vector random variable such that $E\{\varepsilon(t)\} = 0$ and $E\{\varepsilon(t)\varepsilon^{\top}(t)\} = \Sigma$, and the parameter vector $\vartheta$ is of the form (7). In [6] it is shown that representation (16) for the $n^2(p+q) \times n^2(p+q)$ asymptotic FIM of the VARMA process (6) is

$$ F(\vartheta) = E_{\vartheta}\left\{ \left(\frac{\partial \varepsilon}{\partial \vartheta^{\top}}\right)^{\top} \Sigma^{-1} \left(\frac{\partial \varepsilon}{\partial \vartheta^{\top}}\right) \right\} $$

where $\partial\varepsilon/\partial\vartheta^{\top}$ is of size $n \times n^2(p+q)$ and, for convenience, $t$ is omitted from $\varepsilon(t)$. Using the differential rules outlined in [6] yields

$$ \frac{\partial \varepsilon}{\partial \vartheta^{\top}} = \left\{ \left(A^{-1}(z)B(z)\varepsilon\right)^{\top} \otimes B^{-1}(z) \right\} \frac{\partial\, \mathrm{vec}\, A(z)}{\partial \vartheta^{\top}} - \left(\varepsilon^{\top} \otimes B^{-1}(z)\right) \frac{\partial\, \mathrm{vec}\, B(z)}{\partial \vartheta^{\top}} $$

The substitution of representation (72) of $\partial\varepsilon/\partial\vartheta^{\top}$ in (71) yields the FIM of a VARMA process. The purpose is to construct a factorization of the FIM F(ϑ) that is a multiple variant of the factorization (27), so that a multiple resultant matrix property can be proved for F(ϑ). As illustrated in [6], the multiple version of the Sylvester resultant matrix (22) does not fulfill the multiple resultant matrix property: even when the matrix polynomials A(z) and B(z) have a common zero or a common eigenvalue, the multiple Sylvester matrix is not necessarily singular. This has also been illustrated in [3]. In order to obtain a multiple equivalent of the resultant matrix $S(-b, a)$, Gohberg and Lerer set forth the $n^2(p+q) \times n^2(p+q)$ tensor Sylvester matrix

$$ S^{\otimes}(-B, A) := \begin{pmatrix}
(-I_n) \otimes I_n & (-B_1) \otimes I_n & \cdots & (-B_q) \otimes I_n & O_{n^2 \times n^2} & \cdots & O_{n^2 \times n^2} \\
 & \ddots & & & \ddots & & \\
O_{n^2 \times n^2} & \cdots & (-I_n) \otimes I_n & (-B_1) \otimes I_n & \cdots & & (-B_q) \otimes I_n \\
I_n \otimes I_n & I_n \otimes A_1 & \cdots & I_n \otimes A_p & O_{n^2 \times n^2} & \cdots & O_{n^2 \times n^2} \\
 & \ddots & & & \ddots & & \\
O_{n^2 \times n^2} & \cdots & I_n \otimes I_n & I_n \otimes A_1 & \cdots & & I_n \otimes A_p
\end{pmatrix} $$
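A hedged numerical sketch of this construction for the smallest nontrivial case $p = q = 1$, with hypothetical diagonal first-order matrix polynomials $A(z) = I + A_1 z$ and $B(z) = I + B_1 z$ (`numpy` assumed; the helper name is ours, not from the paper):

```python
import numpy as np

def tensor_sylvester_p1q1(A1, B1):
    """Tensor Sylvester matrix S^{(x)}(-B, A) for the first-order matrix
    polynomials A(z) = I + A1 z and B(z) = I + B1 z (p = q = 1)."""
    n = A1.shape[0]
    I = np.eye(n)
    top = np.hstack([np.kron(-I, I), np.kron(-B1, I)])   # B-part block row
    bot = np.hstack([np.kron(I, I), np.kron(I, A1)])     # A-part block row
    return np.vstack([top, bot])

A1 = np.diag([2.0, 3.0])

# det A(z) and det B(z) share the zero z = -1/2: the matrix is singular
B1_common = np.diag([2.0, 5.0])
S_common = tensor_sylvester_p1q1(A1, B1_common)
assert abs(np.linalg.det(S_common)) < 1e-9

# with no common zero the tensor Sylvester matrix is nonsingular
B1_coprime = np.diag([4.0, 5.0])
S_coprime = tensor_sylvester_p1q1(A1, B1_coprime)
assert abs(np.linalg.det(S_coprime)) > 1e-9
```

The diagonal choice keeps the common-zero structure transparent; the singular/nonsingular contrast is exactly the multiple resultant property discussed next.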

In [3], the authors prove that the tensor Sylvester matrix $S^{\otimes}(-B, A)$ fulfills the multiple resultant property: it becomes singular if and only if the matrix polynomials A(z) and B(z) have at least one common zero. In Proposition 2.2 in [6], the following factorized form of the Fisher information matrix F(ϑ) is developed

$$ F(\vartheta) = \frac{1}{2\pi i} \oint_{|z|=1} \Phi(z)\, \Theta(z)\, \Phi^{*}(z)\, \frac{dz}{z} $$

where

$$ \Phi(z) = \begin{pmatrix} I_p \otimes A^{-1}(z) \otimes I_n & O_{pn^2 \times qn^2} \\ O_{qn^2 \times pn^2} & I_q \otimes I_n \otimes A^{-1}(z) \end{pmatrix} S^{\otimes}(-B, A)\, \left(u_{p+q}(z) \otimes I_{n^2}\right) $$

and

$$ \Theta(z) = \Sigma \otimes \sigma(z), \qquad \sigma(z) = B^{-\top}(z)\, \Sigma^{-1}\, B^{-1}(z^{-1}) $$

In order to obtain a multiple variant of (27), the following matrix is introduced,

$$ M(\vartheta) = \frac{1}{2\pi i} \oint_{|z|=1} \Lambda(z)\, J(z)\, \Lambda^{*}(z)\, \frac{dz}{z} = S^{\otimes}(-B, A)\, P(\vartheta)\, \left(S^{\otimes}(-B, A)\right)^{\top} $$

where

$$ J(z) = \Phi(z)\Theta(z)\Phi^{*}(z) \quad \text{and} \quad \Lambda(z) = \begin{pmatrix} I_p \otimes A(z) \otimes I_n & O_{pn^2 \times qn^2} \\ O_{qn^2 \times pn^2} & I_q \otimes I_n \otimes A(z) \end{pmatrix} $$

and the matrix $P(\vartheta)$ is a multiple variant of the matrix $\wp(\vartheta)$ in (28); it is of the form

$$ P(\vartheta) = \frac{1}{2\pi i} \oint_{|z|=1} \left(u_{p+q}(z) \otimes I_{n^2}\right) \Theta(z) \left(u_{p+q}(z) \otimes I_{n^2}\right)^{*} \frac{dz}{z} $$

In Lemma 2.3 in [6], it is proved that the matrix M(ϑ) in (76) becomes singular if and only if the matrix polynomials A(z) and B(z) have at least one common eigenvalue-zero. The proof is a multiple equivalent of the proof of Corollary 2.2 in [5], since the equality (76) is a multiple version of (27). Consequently, the matrix M(ϑ), like the tensor Sylvester matrix $S^{\otimes}(-B, A)$, fulfills the multiple resultant matrix property. Since the matrix M(ϑ) is derived from the FIM F(ϑ), this enables us to prove that F(ϑ) fulfills the multiple resultant matrix property by showing that it becomes singular if and only if M(ϑ) is singular; this is done in Proposition 2.4 in [6]. Consequently, it can be concluded from [6] that the FIM F(ϑ) of a VARMA process and the tensor Sylvester matrix $S^{\otimes}(-B, A)$ have the same singularity conditions. The FIM of a VARMA process F(ϑ) can therefore be added to the class of multiple resultant matrices.

A brief summary of the contribution of [6] follows. In order to show that the FIM F(ϑ) of a VARMA process is a multiple resultant matrix, two new representations of the FIM are derived. To construct these representations, appropriate matrix differential rules are applied. The newly obtained representations are expressed in terms of the multiple Sylvester matrix and the tensor Sylvester matrix. The representation of the FIM expressed by the tensor Sylvester matrix is used to prove that the FIM becomes singular if and only if the autoregressive and moving average matrix polynomials have at least one common eigenvalue. It then follows that the FIM and the tensor Sylvester matrix have equivalent singularity conditions. In a numerical example it is shown, however, that the FIM fails to detect common eigenvalues due to some kind of numerical instability, whereas the tensor Sylvester matrix reveals them clearly, proving the usefulness of the results derived in [6].

3.9. The Fisher Information Matrix of a Vector ARMAX(p, r, q) Process

The n2(p + q + r) × n2(p + q + r) asymptotic FIM of the VARMAX(p, r, q) process (2)

$$ A(z)\, y(t) = C(z)\, x(t) + B(z)\, \varepsilon(t) $$

is displayed according to [23] and is an extension of the FIM of the VARMA(p, q) process (6). Representation (16) of the FIM of the VARMAX(p, r, q) process is then

$$ G(\vartheta) = E_{\vartheta}\left\{ \left(\frac{\partial \varepsilon}{\partial \vartheta^{\top}}\right)^{\top} \Sigma^{-1} \left(\frac{\partial \varepsilon}{\partial \vartheta^{\top}}\right) \right\} $$

where

$$ \frac{\partial \varepsilon}{\partial \vartheta^{\top}} = \left\{ \left(A^{-1}(z)C(z)x\right)^{\top} \otimes B^{-1}(z) \right\} \frac{\partial\, \mathrm{vec}\, A(z)}{\partial \vartheta^{\top}} + \left\{ \left(A^{-1}(z)B(z)\varepsilon\right)^{\top} \otimes B^{-1}(z) \right\} \frac{\partial\, \mathrm{vec}\, A(z)}{\partial \vartheta^{\top}} - \left\{ x^{\top} \otimes B^{-1}(z) \right\} \frac{\partial\, \mathrm{vec}\, C(z)}{\partial \vartheta^{\top}} - \left(\varepsilon^{\top} \otimes B^{-1}(z)\right) \frac{\partial\, \mathrm{vec}\, B(z)}{\partial \vartheta^{\top}} $$

To obtain the term $\partial\varepsilon/\partial\vartheta^{\top}$, of size $n \times n^2(p + q + r)$, the same differential rules are applied as for the VARMA(p, q) process. In Proposition 2.3 in [23], the representation of the FIM of a VARMAX process is expressed in terms of tensor Sylvester matrices; it is obtained when $\partial\varepsilon/\partial\vartheta^{\top}$ in (78) is substituted in (16), to give

$$ G(\vartheta) = \frac{1}{2\pi i} \oint_{|z|=1} \Phi_x(z)\, \Theta(z)\, \Phi_x^{*}(z)\, \frac{dz}{z} + \frac{1}{2\pi i} \oint_{|z|=1} \Lambda_x(z)\, \Psi(z)\, \Lambda_x^{*}(z)\, \frac{dz}{z} $$

The matrices in (79) are of the form

$$ \Phi_x(z) = \begin{pmatrix}
I_p \otimes A^{-1}(z) \otimes I_n & O_{pn^2 \times rn^2} & O_{pn^2 \times qn^2} \\
O_{rn^2 \times pn^2} & O_{rn^2 \times rn^2} & O_{rn^2 \times qn^2} \\
O_{qn^2 \times pn^2} & O_{qn^2 \times rn^2} & I_q \otimes I_n \otimes A^{-1}(z)
\end{pmatrix} \begin{pmatrix} -S_p^{\otimes}(B) \\ O_{rn^2 \times n^2(p+q)} \\ S_q^{\otimes}(A) \end{pmatrix} \left(u_{p+q}(z) \otimes I_{n^2}\right) $$

$$ \Lambda_x(z) = \begin{pmatrix}
I_p \otimes A^{-1}(z) \otimes I_n & O_{pn^2 \times rn^2} & O_{pn^2 \times qn^2} \\
O_{rn^2 \times pn^2} & I_r \otimes I_n \otimes A^{-1}(z) & O_{rn^2 \times qn^2} \\
O_{qn^2 \times pn^2} & O_{qn^2 \times rn^2} & O_{qn^2 \times qn^2}
\end{pmatrix} \begin{pmatrix} -S_p^{\otimes}(C) \\ S_r^{\otimes}(A) \\ O_{qn^2 \times n^2(p+r)} \end{pmatrix} \left(u_{p+r}(z) \otimes I_{n^2}\right) $$

$$ S_{p,q}^{\otimes}(-B, A) = \begin{pmatrix} -S_p^{\otimes}(B) \\ S_q^{\otimes}(A) \end{pmatrix}, \qquad S_{p,r}^{\otimes}(-C, A) = \begin{pmatrix} -S_p^{\otimes}(C) \\ S_r^{\otimes}(A) \end{pmatrix} $$

Additionally, we have $\Psi(z) = R_x(z) \otimes \sigma(z)$, where the Hermitian spectral density matrix $R_x(z)$ is defined in (10), and the matrices $\Theta(z)$ and $\sigma(z)$ are presented in (75). In (80), we have the $pn^2 \times (p+q)n^2$ and $qn^2 \times (p+q)n^2$ submatrices $S_p^{\otimes}(-B)$ and $S_q^{\otimes}(A)$ of the tensor Sylvester resultant matrix $S_{p,q}^{\otimes}(-B, A)$, whereas the matrices $S_p^{\otimes}(-C)$ and $S_r^{\otimes}(A)$ are the upper and lower blocks of the $(p+r)n^2 \times (p+r)n^2$ tensor Sylvester resultant matrix $S_{p,r}^{\otimes}(-C, A)$. As for the FIM of the VARMA(p, q) process, the objective is to construct a multiple version of (65); this is done in [23], to obtain

$$ M_x(\vartheta) = \frac{1}{2\pi i} \oint_{|z|=1} L(z)\, \mathcal{A}(z)\, L^{*}(z)\, \frac{dz}{z} + \frac{1}{2\pi i} \oint_{|z|=1} W(z)\, \mathcal{B}(z)\, W^{*}(z)\, \frac{dz}{z} = \begin{pmatrix} -S_p^{\otimes}(B) \\ O_{rn^2 \times n^2(p+q)} \\ S_q^{\otimes}(A) \end{pmatrix} P(\vartheta) \begin{pmatrix} -S_p^{\otimes}(B) \\ O_{rn^2 \times n^2(p+q)} \\ S_q^{\otimes}(A) \end{pmatrix}^{\top} + \begin{pmatrix} -S_p^{\otimes}(C) \\ S_r^{\otimes}(A) \\ O_{qn^2 \times n^2(p+r)} \end{pmatrix} T(\vartheta) \begin{pmatrix} -S_p^{\otimes}(C) \\ S_r^{\otimes}(A) \\ O_{qn^2 \times n^2(p+r)} \end{pmatrix}^{\top} $$

The matrices involved are of the form

$$ L(z) = \begin{pmatrix}
I_p \otimes A(z) \otimes I_n & O_{pn^2 \times rn^2} & O_{pn^2 \times qn^2} \\
O_{rn^2 \times pn^2} & O_{rn^2 \times rn^2} & O_{rn^2 \times qn^2} \\
O_{qn^2 \times pn^2} & O_{qn^2 \times rn^2} & I_q \otimes I_n \otimes A(z)
\end{pmatrix} \quad \text{and} \quad \mathcal{A}(z) := \Phi_x(z)\Theta(z)\Phi_x^{*}(z) $$

$$ W(z) = \begin{pmatrix}
I_p \otimes A(z) \otimes I_n & O_{pn^2 \times rn^2} & O_{pn^2 \times qn^2} \\
O_{rn^2 \times pn^2} & I_r \otimes I_n \otimes A(z) & O_{rn^2 \times qn^2} \\
O_{qn^2 \times pn^2} & O_{qn^2 \times rn^2} & O_{qn^2 \times qn^2}
\end{pmatrix} \quad \text{and} \quad \mathcal{B}(z) := \Lambda_x(z)\Psi(z)\Lambda_x^{*}(z) $$

$$ T(\vartheta) = \frac{1}{2\pi i} \oint_{|z|=1} \left(u_{p+r}(z) \otimes I_{n^2}\right) \Psi(z) \left(u_{p+r}(z) \otimes I_{n^2}\right)^{*} \frac{dz}{z} $$

and P(ϑ) is given in (77). Note that the matrices $\Phi_x(z)$, $\Lambda_x(z)$, $L(z)$ and $W(z)$ are the corrected versions of the corresponding matrices in [23].

A parallel between the scalar and multiple structures is straightforward. This is best illustrated by comparing the representations (27) and (28) with (76) and (77), respectively, confronting the FIM for scalar and vector ARMA(p, q) processes. The FIM of the scalar ARMAX(p, r, q) process contains an ARMA(p, q) part; this is confirmed by (65) through the presence of the matrix $\wp(\vartheta)$, which is originally displayed in (28). The multiple resultant matrices M(ϑ) and M_x(ϑ), derived from the FIM of the VARMA(p, q) and VARMAX(p, r, q) processes respectively, both contain P(ϑ), whereas the first matrix terms of the matrices Φ(z) and Φ_x(z), which are of different size, consist of the same nonzero submatrices. To summarize, in [23] compact forms of the FIM of a VARMAX process, expressed in terms of multiple and tensor Sylvester matrices, are developed. The tensor Sylvester matrices allow us to investigate the multiple resultant matrix property of the FIM of VARMAX(p, r, q) processes. However, since no proof of the multiple resultant matrix property of the FIM G(ϑ) has yet been given, the consideration of a conjecture is justified. The conjecture states that the FIM G(ϑ) of a VARMAX(p, r, q) process becomes singular if and only if the matrix polynomials A(z), B(z) and C(z) have at least one common eigenvalue. A multiple equivalent of Theorem 3.1 in [4], combined with Proposition 2.4 in [6] but based on the representations (79) and (81), can be envisaged to formulate a proof; this will be a subject for future study.

4. Conclusions

In this survey paper, matrix algebraic properties of the FIM of stationary processes are discussed. The presented material is a summary of papers where several matrix structural aspects of the FIM are investigated. The FIM of scalar and multiple processes like the (V)ARMA(X) are set forth with appropriate factorized forms involving (tensor) Sylvester matrices. These representations enable us to prove the resultant matrix property of the corresponding FIM. This has been done for (V)ARMA(p, q) and ARMAX(p, r, q) processes in the papers [4–6]. The development of the stages that lead to the appropriate factorized form of the FIM G(ϑ) in (79) is set forth in [23]. However, no proof has yet been given that confirms the multiple resultant matrix property of the FIM G(ϑ) of a VARMAX(p, r, q) process. This justifies the consideration of the conjecture formulated in the previous section, which can be a subject for future study.

The statistical distance measure derived in [7] involves entries of the FIM. This distance measure can be a challenge to its quantum information counterpart (41), because (36) involves information about m parameters estimated from n measurements, whereas in quantum information, as in e.g., [8–10], the information about one parameter in a particular measurement procedure is considered for establishing an interconnection with the appropriate statistical distance measure. A possible approach, combining matrix algebra and quantum information, for developing a statistical distance measure in quantum information or quantum statistics at the matrix level can be a subject of future research. Some results concerning interconnections between the FIM of ARMA(X) models and appropriate solutions to Stein matrix equations are discussed; the material is extracted from the papers [12] and [13]. However, in this paper, some alternative and new proofs that emphasize the conditions under which the FIM fulfills appropriate Stein equations are set forth. The presence of various types of Vandermonde matrices is also emphasized when an explicit expansion of the FIM is computed. These Vandermonde matrices appear in interconnections with appropriate solutions to Stein equations. This explains why, when the matrix algebraic structures of the FIM of stationary processes are investigated, the involvement of structured matrices like the (tensor) Sylvester, Bezoutian and Vandermonde matrices is essential.

Acknowledgements

The author thanks a perceptive reviewer for his comments which significantly improved the quality and presentation of the paper.

Conflicts of Interest

The author has declared no conflict of interest.

References

  1. Dym, H. Linear Algebra in Action; American Mathematical Society: Providence, RI, USA, 2006; Volume 78. [Google Scholar]
  2. Lancaster, P.; Tismenetsky, M. The Theory of Matrices with Applications, 2nd ed; Academic Press: Orlando, FL, USA, 1985. [Google Scholar]
  3. Gohberg, I.; Lerer, L. Resultants of matrix polynomials. Bull. Am. Math. Soc 1976, 82, 565–567. [Google Scholar]
  4. Klein, A.; Spreij, P. On Fisher’s information matrix of an ARMAX process and Sylvester’s resultant matrices. Linear Algebra Appl 1996, 237/238, 579–590. [Google Scholar]
  5. Klein, A.; Spreij, P. On Fisher’s information matrix of an ARMA process. In Stochastic Differential and Difference Equations; Csiszar, I., Michaletzky, Gy., Eds.; Birkhäuser: Boston, MA, USA, 1997; Progress in Systems and Control Theory, Volume 23, pp. 273–284. [Google Scholar]
  6. Klein, A.; Mélard, G.; Spreij, P. On the Resultant Property of the Fisher Information Matrix of a Vector ARMA process. Linear Algebra Appl 2005, 403, 291–313. [Google Scholar]
  7. Klein, A.; Spreij, P. Transformed Statistical Distance Measures and the Fisher Information Matrix. Linear Algebra Appl 2012, 437, 692–712. [Google Scholar]
  8. Braunstein, S.L.; Caves, C.M. Statistical Distance and the Geometry of Quantum States. Phys. Rev. Lett 1994, 72, 3439–3443. [Google Scholar]
  9. Jones, P.J.; Kok, P. Geometric derivation of the quantum speed limit. Phys. Rev. A 2010, 82, 022107. [Google Scholar]
  10. Kok, P. Tutorial: Statistical distance and Fisher information; Oxford: UK, 2006. [Google Scholar]
  11. Lancaster, P.; Rodman, L. Algebraic Riccati Equations; Clarendon Press: Oxford, UK, 1995. [Google Scholar]
  12. Klein, A.; Spreij, P. On Stein’s equation, Vandermonde matrices and Fisher’s information matrix of time series processes. Part I: The autoregressive moving average process. Linear Algebra Appl 2001, 329, 9–47. [Google Scholar]
  13. Klein, A.; Spreij, P. On the solution of Stein’s equation and Fisher’s information matrix of an ARMAX process. Linear Algebra Appl 2005, 396, 1–34. [Google Scholar]
  14. Grenander, U.; Szegő, G. Toeplitz Forms and Their Applications; University of California Press: Berkeley, CA, USA, 1958. [Google Scholar]
  15. Brockwell, P.J.; Davis, R.A. Time Series: Theory and Methods, 2nd ed; Springer Verlag: Berlin, Germany; New York, NY, USA, 1991. [Google Scholar]
  16. Caines, P. Linear Stochastic Systems; John Wiley and Sons: New York, NY, USA, 1988. [Google Scholar]
  17. Ljung, L.; Söderström, T. Theory and Practice of Recursive Identification; M.I.T. Press: Cambridge, MA, USA, 1983. [Google Scholar]
  18. Hannan, E.J.; Deistler, M. The Statistical Theory of Linear Systems; John Wiley and Sons: New York, NY, USA, 1988. [Google Scholar]
  19. Hannan, E.J.; Dunsmuir, W.T.M.; Deistler, M. Estimation of vector Armax models. J. Multivar. Anal 1980, 10, 275–295. [Google Scholar]
  20. Horn, R.A.; Johnson, C.R. Topics in Matrix Analysis; Cambridge University Press: New York, NY, USA, 1995. [Google Scholar]
  21. Klein, A.; Spreij, P. Matrix differential calculus applied to multiple stationary time series and an extended Whittle formula for information matrices. Linear Algebra Appl 2009, 430, 674–691. [Google Scholar]
  22. Klein, A.; Mélard, G. An algorithm for the exact Fisher information matrix of vector ARMAX time series. Linear Algebra Its Appl 2014, 446, 1–24. [Google Scholar]
  23. Klein, A.; Spreij, P. Tensor Sylvester matrices and the Fisher information matrix of VARMAX processes. Linear Algebra Appl 2010, 432, 1975–1989. [Google Scholar]
  24. Rao, C.R. Information and the accuracy attainable in the estimation of statistical parameters. Bull. Calcutta Math. Soc 1945, 37, 81–91. [Google Scholar]
  25. Ibragimov, I.A.; Has’minskiĭ, R.Z. Statistical Estimation. In Asymptotic Theory; Springer-Verlag: New York, NY, USA, 1981. [Google Scholar]
  26. Lehmann, E.L. Theory of Point Estimation; Wiley: New York, NY, USA, 1983. [Google Scholar]
  27. Friedlander, B. On the computation of the Cramér-Rao bound for ARMA parameter estimation. IEEE Trans. Acoust. Speech Signal Process 1984, 32, 721–727. [Google Scholar]
  28. Holevo, A.S. Probabilistic and Statistical Aspects of Quantum Theory, 2nd ed; Edizioni Della Normale, SNS Pisa: Pisa, Italy, 2011. [Google Scholar]
  29. Petz, D. Quantum Information Theory and Quantum Statistics; Springer-Verlag: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  30. Barndorff-Nielsen, O.E.; Gill, R.D. Fisher information in quantum statistics. J. Phys. A 2000, 33, 4481–4490. [Google Scholar]
  31. Luo, S. Wigner-Yanase skew information vs. quantum Fisher information. Proc. Amer. Math. Soc 2004, 132, 885–890. [Google Scholar]
  32. Klein, A.; Mélard, G. On algorithms for computing the covariance matrix of estimates in autoregressive moving average processes. Comput. Stat. Q 1989, 5, 1–9. [Google Scholar]
  33. Klein, A.; Mélard, G. An algorithm for computing the asymptotic Fisher information matrix for seasonal SISO models. J. Time Ser. Anal 2004, 25, 627–648. [Google Scholar]
  34. Bistritz, Y.; Lifshitz, A. Bounds for resultants of univariate and bivariate polynomials. Linear Algebra Appl 2010, 432, 1995–2005. [Google Scholar]
  35. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: New York, NY, USA, 1996. [Google Scholar]
  36. Golub, G.H.; van Loan, C.F. Matrix Computations, 3rd ed; Johns Hopkins University Press: Baltimore, MD, USA, 1996. [Google Scholar]
  37. Kullback, S. Information Theory and Statistics; John Wiley and Sons: New York, NY, USA, 1959. [Google Scholar]
  38. Klein, A.; Spreij, P. The Bezoutian, state space realizations and Fisher’s information matrix of an ARMA process. Linear Algebra Appl 2006, 416, 160–174. [Google Scholar]