Article

Rate Distortion Function of Gaussian Asymptotically WSS Vector Processes

by Jesús Gutiérrez-Gutiérrez *, Marta Zárraga-Rodríguez, Pedro M. Crespo and Xabier Insausti
Tecnun, University of Navarra, Paseo de Manuel Lardizábal 13, 20018 San Sebastián, Spain
* Author to whom correspondence should be addressed.
Entropy 2018, 20(9), 719; https://doi.org/10.3390/e20090719
Submission received: 26 July 2018 / Revised: 6 September 2018 / Accepted: 14 September 2018 / Published: 19 September 2018
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract: In this paper, we obtain an integral formula for the rate distortion function (RDF) of any Gaussian asymptotically wide sense stationary (AWSS) vector process. Applying this result, we also obtain an integral formula for the RDF of Gaussian moving average (MA) vector processes and of Gaussian autoregressive MA (ARMA) AWSS vector processes.

1. Introduction

The present paper focuses on the derivation of a closed-form expression for the rate distortion function (RDF) of a wide class of vector processes. As stated in [1,2], there exist very few journal papers in the literature that present closed-form expressions for the RDF of non-stationary processes, and just one of them deals with non-stationary vector processes [3]. In the present paper, we obtain an integral formula for the RDF of any real Gaussian asymptotically wide sense stationary (AWSS) vector process. This new formula generalizes the one given in 1956 by Kolmogorov [4] for real Gaussian stationary processes and the one given in 1971 by Toms and Berger [3] for real Gaussian autoregressive (AR) AWSS vector processes of finite order. Applying this new formula, we also obtain an integral formula for the RDF of real Gaussian moving average (MA) vector processes of infinite order and for the RDF of real Gaussian ARMA AWSS vector processes of infinite order. AR, MA and ARMA vector processes are frequently used to model multivariate time series (see, e.g., [5]).
The definition of the AWSS process was first given by Gray (see [6,7]), and it is based on his concept of asymptotically equivalent sequences of matrices [8]. The integral formulas given in the present paper are obtained by using some recent results on such sequences of matrices [9,10,11,12].
The paper is organized as follows. In Section 2, we set up notation, and we review the concepts of AWSS, MA and ARMA vector processes and the Kolmogorov formula for the RDF of a real Gaussian vector. In Section 3, we obtain an integral formula for the RDF of any Gaussian AWSS vector process. In Section 4, we obtain an integral formula for the RDF of Gaussian MA vector processes and of Gaussian ARMA AWSS vector processes. We finish the paper with a numerical example where the RDF of a Gaussian AWSS vector process is computed.

2. Preliminaries

2.1. Notation

In this paper, $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$ and $\mathbb{C}$ denote the set of natural numbers (i.e., the set of positive integers), the set of integer numbers, the set of (finite) real numbers and the set of (finite) complex numbers, respectively. If $m, n \in \mathbb{N}$, then $\mathbb{C}^{m \times n}$, $0_{m \times n}$ and $I_n$ are the set of all $m \times n$ complex matrices, the $m \times n$ zero matrix and the $n \times n$ identity matrix, respectively. The symbols $\top$ and $*$ denote transpose and conjugate transpose, respectively. $E$ stands for expectation; $\mathrm{i}$ is the imaginary unit; $\operatorname{tr}$ denotes trace; $\delta$ stands for the Kronecker delta; and $\lambda_k(A)$, $k \in \{1, \dots, n\}$, are the eigenvalues of an $n \times n$ Hermitian matrix $A$ arranged in decreasing order.
Let $A_n$ and $B_n$ be $nN \times nN$ matrices for all $n \in \mathbb{N}$. We write $\{A_n\} \sim \{B_n\}$ if the sequences $\{A_n\}$ and $\{B_n\}$ are asymptotically equivalent (see ([9], p. 5673)), that is:
$$\exists\, M \in [0, \infty) : \ \|A_n\|_2, \|B_n\|_2 \leq M \qquad \forall n \in \mathbb{N}$$
and:
$$\lim_{n \to \infty} \frac{\|A_n - B_n\|_F}{\sqrt{n}} = 0,$$
where $\|\cdot\|_2$ and $\|\cdot\|_F$ denote the spectral norm and the Frobenius norm, respectively. The original definition of asymptotically equivalent sequences of matrices, where $N = 1$, was given by Gray (see ([6], Section 2.3) or [8]).
Let $\{x_n : n \in \mathbb{N}\}$ be a random $N$-dimensional vector process, i.e., $x_n$ is a random (column) vector of dimension $N$ for all $n \in \mathbb{N}$. We denote by $x_{n:1}$ the random vector of dimension $nN$ given by:
$$x_{n:1} := \begin{pmatrix} x_n \\ x_{n-1} \\ x_{n-2} \\ \vdots \\ x_1 \end{pmatrix}, \qquad n \in \mathbb{N}.$$
Consider a matrix-valued function of a real variable $X : \mathbb{R} \to \mathbb{C}^{N \times N}$, which is continuous and $2\pi$-periodic. For every $n \in \mathbb{N}$, we denote by $T_n(X)$ the $n \times n$ block Toeplitz matrix with $N \times N$ blocks given by:
$$T_n(X) := (X_{j-k})_{j,k=1}^{n} = \begin{pmatrix} X_0 & X_{-1} & X_{-2} & \cdots & X_{1-n} \\ X_1 & X_0 & X_{-1} & \cdots & X_{2-n} \\ X_2 & X_1 & X_0 & \cdots & X_{3-n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ X_{n-1} & X_{n-2} & X_{n-3} & \cdots & X_0 \end{pmatrix},$$
where $\{X_k\}_{k \in \mathbb{Z}}$ is the sequence of Fourier coefficients of $X$:
$$X_k = \frac{1}{2\pi} \int_0^{2\pi} e^{-k\omega\mathrm{i}} X(\omega)\, d\omega \qquad \forall k \in \mathbb{Z}.$$
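To make the construction concrete, the following minimal sketch (Python with NumPy; the function names and the example symbol are our own choices, not taken from the paper) approximates the Fourier coefficients $X_k$ by a Riemann sum on a uniform grid and assembles $T_n(X)$:

```python
import numpy as np

def fourier_coefficient(X, k, num_points=4096):
    """Approximate X_k = (1/(2*pi)) * integral_0^{2*pi} e^{-k*omega*i} X(omega) d(omega)
    by a Riemann sum on a uniform grid of [0, 2*pi)."""
    omegas = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    values = np.array([np.exp(-1j * k * w) * X(w) for w in omegas])
    return values.mean(axis=0)

def block_toeplitz(X, n, N):
    """Assemble T_n(X) = (X_{j-k})_{j,k=1}^n, an (n*N) x (n*N) matrix with N x N blocks."""
    coeffs = {m: fourier_coefficient(X, m) for m in range(-(n - 1), n)}
    T = np.zeros((n * N, n * N), dtype=complex)
    for j in range(n):
        for k in range(n):
            T[j * N:(j + 1) * N, k * N:(k + 1) * N] = coeffs[j - k]
    return T

# Illustrative symbol: X(omega) = (I + A e^{-i*omega}) Lam (I + A e^{-i*omega})^*
A = np.array([[0.5, 0.1], [0.0, 0.3]])
Lam = np.array([[4.0, 1.0], [1.0, 2.0]])
X = lambda w: (np.eye(2) + A * np.exp(-1j * w)) @ Lam @ (np.eye(2) + A * np.exp(-1j * w)).conj().T
T4 = block_toeplitz(X, 4, 2)   # 8 x 8 Hermitian block Toeplitz matrix
```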

2.2. AWSS Vector Processes

We first review the well-known concept of the WSS vector process.
Definition 1.
Let $X : \mathbb{R} \to \mathbb{C}^{N \times N}$, and suppose that it is continuous and $2\pi$-periodic. A random $N$-dimensional vector process $\{x_n : n \in \mathbb{N}\}$ is said to be WSS (or weakly stationary) with power spectral density (PSD) $X$ if it has constant mean (i.e., $E(x_{n_1}) = E(x_{n_2})$ for all $n_1, n_2 \in \mathbb{N}$) and $\{E(x_{n:1} x_{n:1}^{*})\} = \{T_n(X)\}$.
We now review the definition of the AWSS vector process given in ([11], Definition 7.1).
Definition 2.
Let $X : \mathbb{R} \to \mathbb{C}^{N \times N}$, and suppose that it is continuous and $2\pi$-periodic. A random $N$-dimensional vector process $\{x_n : n \in \mathbb{N}\}$ is said to be AWSS with asymptotic PSD (APSD) $X$ if it has constant mean and $\{E(x_{n:1} x_{n:1}^{*})\} \sim \{T_n(X)\}$.
Definition 2 was first introduced by Gray for the case N = 1 (see, e.g., ([6], p. 225)).

2.3. MA and ARMA Vector Processes

We first review the concept of a real zero-mean MA vector process (of infinite order).
Definition 3.
A real zero-mean random $N$-dimensional vector process $\{x_n : n \in \mathbb{N}\}$ is said to be MA if:
$$x_n = w_n + \sum_{j=1}^{n-1} G_{-j} w_{n-j} \qquad \forall n \in \mathbb{N}, \tag{1}$$
where $G_{-j}$, $j \in \mathbb{N}$, are real $N \times N$ matrices, $\{w_n : n \in \mathbb{N}\}$ is a real zero-mean random $N$-dimensional vector process and $E(w_{n_1} w_{n_2}^\top) = \delta_{n_1, n_2} \Lambda$ for all $n_1, n_2 \in \mathbb{N}$, with $\Lambda$ being an $N \times N$ positive definite matrix.
The MA vector process $\{x_n : n \in \mathbb{N}\}$ in Equation (1) is of finite order if there exists $q \in \mathbb{N}$ such that $G_{-j} = 0_{N \times N}$ for all $j > q$. In this case, $\{x_n : n \in \mathbb{N}\}$ is called an MA($q$) vector process (see, e.g., ([5], Section 2.1)).
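As an illustration, here is a minimal simulation sketch of an MA($q$) vector process (Python/NumPy; the function name is ours, and Gaussian innovations are an assumption made for concreteness, anticipating Sections 2.4 and 5, since Definition 3 only fixes the second-order structure of $\{w_n\}$):

```python
import numpy as np

def simulate_ma(G_coeffs, Lam, n, rng=None):
    """Draw x_1, ..., x_n of the MA process x_m = w_m + sum_{j=1}^{m-1} G_{-j} w_{m-j},
    with Gaussian innovations w_m of covariance Lam (a modeling assumption here).
    G_coeffs is the list [G_{-1}, ..., G_{-q}]; row m-1 of the output is x_m."""
    rng = np.random.default_rng(rng)
    N = Lam.shape[0]
    L = np.linalg.cholesky(Lam)              # Lam = L L^T
    w = rng.standard_normal((n, N)) @ L.T    # w[m-1] is w_m, with E(w w^T) = Lam
    x = w.copy()
    for j, Gj in enumerate(G_coeffs, start=1):
        x[j:] += w[:-j] @ Gj.T               # adds G_{-j} w_{m-j} to x_m
    return x
```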
Secondly, we review the concept of a real zero-mean ARMA vector process (of infinite order).
Definition 4.
A real zero-mean random $N$-dimensional vector process $\{x_n : n \in \mathbb{N}\}$ is said to be ARMA if:
$$x_n = w_n + \sum_{j=1}^{n-1} G_{-j} w_{n-j} - \sum_{j=1}^{n-1} F_{-j} x_{n-j} \qquad \forall n \in \mathbb{N}, \tag{2}$$
where $G_{-j}$ and $F_{-j}$, $j \in \mathbb{N}$, are real $N \times N$ matrices, $\{w_n : n \in \mathbb{N}\}$ is a real zero-mean random $N$-dimensional vector process and $E(w_{n_1} w_{n_2}^\top) = \delta_{n_1, n_2} \Lambda$ for all $n_1, n_2 \in \mathbb{N}$, with $\Lambda$ being an $N \times N$ positive definite matrix.
The ARMA vector process $\{x_n : n \in \mathbb{N}\}$ in Equation (2) is of finite order if there exist $p, q \in \mathbb{N}$ such that $F_{-j} = 0_{N \times N}$ for all $j > p$ and $G_{-j} = 0_{N \times N}$ for all $j > q$. In this case, $\{x_n : n \in \mathbb{N}\}$ is called an ARMA($p$,$q$) vector process (see, e.g., ([5], Section 1.2.2)).

2.4. RDF of Gaussian Vectors

Let $\{x_n : n \in \mathbb{N}\}$ be a real zero-mean Gaussian $N$-dimensional vector process such that $E(x_{n:1} x_{n:1}^\top)$ is positive definite for all $n \in \mathbb{N}$. If $n \in \mathbb{N}$, from [4] we know that the RDF of the real zero-mean Gaussian vector $x_{n:1}$ is given by:
$$R_n(D) = \frac{1}{nN} \sum_{k=1}^{nN} \max\left(0, \frac{1}{2} \ln \frac{\lambda_k(E(x_{n:1} x_{n:1}^\top))}{\theta_n}\right) \tag{3}$$
with $D \in \left(0, \frac{\operatorname{tr}(E(x_{n:1} x_{n:1}^\top))}{nN}\right)$ and where $\theta_n$ is the real number satisfying:
$$D = \frac{1}{nN} \sum_{k=1}^{nN} \min\left(\theta_n, \lambda_k(E(x_{n:1} x_{n:1}^\top))\right).$$
The RDF of the real zero-mean Gaussian vector process $\{x_n : n \in \mathbb{N}\}$ is given by:
$$R(D) := \lim_{n \to \infty} R_n(D)$$
whenever this limit exists.
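The following sketch (Python/NumPy) mirrors this reverse water-filling computation for a single Gaussian vector: $\theta_n$ is found by bisection, which works because the constraint's right-hand side is nondecreasing in $\theta_n$. The function and its name are ours, not from [4]:

```python
import numpy as np

def rdf_gaussian_vector(eigenvalues, D, tol=1e-12):
    """Given the (positive) eigenvalues of E(x_{n:1} x_{n:1}^T), solve
    D = (1/(nN)) * sum_k min(theta_n, lambda_k) for theta_n by bisection and
    return R_n(D) = (1/(nN)) * sum_k max(0, (1/2) ln(lambda_k / theta_n))."""
    lam = np.asarray(eigenvalues, dtype=float)
    assert 0.0 < D < lam.mean(), "need D in (0, tr(E(x x^T))/(nN))"
    lo, hi = 0.0, lam.max()
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, lam).mean() < D:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    return np.maximum(0.0, 0.5 * np.log(lam / theta)).mean()
```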

3. Integral Formula for the RDF of Gaussian AWSS Vector Processes

Theorem 1.
Let $\{x_n : n \in \mathbb{N}\}$ be a real zero-mean Gaussian AWSS $N$-dimensional vector process with APSD $X$. Suppose that $X(\omega)$ is positive definite for all $\omega \in \mathbb{R}$ and that $E(x_{n:1} x_{n:1}^\top)$ is positive definite for all $n \in \mathbb{N}$. If $D \in \left(0, \frac{\operatorname{tr}(X_0)}{N}\right)$, then:
$$R(D) = \frac{1}{4\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \max\left(0, \ln \frac{\lambda_k(X(\omega))}{\theta}\right) d\omega \tag{4}$$
is the operational RDF of $\{x_n : n \in \mathbb{N}\}$, where $\theta$ is the real number satisfying:
$$D = \frac{1}{2\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \min\left(\theta, \lambda_k(X(\omega))\right) d\omega. \tag{5}$$
Proof. 
See Appendix A. ☐
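Theorem 1 is straightforward to evaluate numerically. The sketch below (Python/NumPy; `rdf_awss` and its parameters are our own naming) discretizes $[0, 2\pi)$, solves Equation (5) for $\theta$ by bisection (its right-hand side is nondecreasing in $\theta$), and then evaluates Equation (4):

```python
import numpy as np

def rdf_awss(X, D, N, grid=4096, tol=1e-12):
    """Evaluate Equations (4) and (5) for a callable APSD X(omega).
    The integrals over [0, 2*pi) are replaced by uniform-grid averages, so the
    result carries a discretization error controlled by `grid`."""
    omegas = np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False)
    lam = np.array([np.linalg.eigvalsh(X(w)) for w in omegas])  # shape (grid, N)
    assert 0.0 < D < lam.mean(), "Theorem 1 requires D in (0, tr(X_0)/N)"
    lo, hi = 0.0, lam.max()
    while hi - lo > tol:                         # bisection on Equation (5)
        theta = 0.5 * (lo + hi)
        if np.minimum(theta, lam).mean() < D:
            lo = theta
        else:
            hi = theta
    theta = 0.5 * (lo + hi)
    return 0.5 * np.maximum(0.0, np.log(lam / theta)).mean()   # Equation (4)
```

Note that `lam.mean()` is the grid approximation of $\frac{1}{2\pi N} \int_0^{2\pi} \operatorname{tr}(X(\omega))\, d\omega = \operatorname{tr}(X_0)/N$, the right endpoint of the admissible distortion range.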
Corollary 1.
Let $\{x_n : n \in \mathbb{N}\}$ be a real zero-mean Gaussian WSS $N$-dimensional vector process with PSD $X$. Suppose that $X(\omega)$ is positive definite for all $\omega \in \mathbb{R}$. If $D \in \left(0, \frac{\operatorname{tr}(X_0)}{N}\right)$, then:
$$R(D) = \frac{1}{4\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \max\left(0, \ln \frac{\lambda_k(X(\omega))}{\theta}\right) d\omega, \tag{6}$$
where $\theta$ is the real number satisfying:
$$D = \frac{1}{2\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \min\left(\theta, \lambda_k(X(\omega))\right) d\omega.$$
Proof. 
See Appendix B. ☐
The integral formula given in Equation (6) was presented by Kafedziski in ([13], Equation (20)). However, the proof that he proposed was not complete: although Kafedziski pointed out that ([13], Equation (20)) can be directly proven by applying the Szegö theorem for block Toeplitz matrices ([14], Theorem 3), the Szegö theorem cannot be applied, since the parameter $\theta$ that appears in the expression of $R_n(D)$ in ([13], Equation (7)) depends on $n$, as it does in Equation (3). It should also be mentioned that the set of WSS vector processes that he considered was smaller; namely, he only considered WSS vector processes with PSD in the Wiener class. A function $X : \mathbb{R} \to \mathbb{C}^{N \times N}$ is said to be in the Wiener class if it is continuous and $2\pi$-periodic, and it satisfies $\sum_{k=-\infty}^{\infty} |[X_k]_{r,s}| < \infty$ for all $r, s \in \{1, \dots, N\}$ (see, e.g., ([11], Appendix B)).

4. Applications

4.1. Integral Formula for the RDF of Gaussian MA Vector Processes

Theorem 2.
Let $\{x_n : n \in \mathbb{N}\}$ be as in Definition 3. Assume that $\{G_k\}_{k=-\infty}^{\infty}$, with $G_0 = I_N$ and $G_k = 0_{N \times N}$ for all $k > 0$, is the sequence of Fourier coefficients of a function $G : \mathbb{R} \to \mathbb{C}^{N \times N}$, which is continuous and $2\pi$-periodic. Then:
1. $\{x_n : n \in \mathbb{N}\}$ is AWSS with APSD $X(\omega) = G(\omega) \Lambda (G(\omega))^{*}$ for all $\omega \in \mathbb{R}$.
2. If $\{x_n : n \in \mathbb{N}\}$ is Gaussian, $\det(G(\omega)) \neq 0$ for all $\omega \in \mathbb{R}$, and $D \in \left(0, \frac{\operatorname{tr}(X_0)}{N}\right)$, then:
$$R(D) = \frac{1}{4\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \max\left(0, \ln \frac{\lambda_k(G(\omega) \Lambda (G(\omega))^{*})}{\theta}\right) d\omega,$$
where $\theta$ is the real number satisfying:
$$D = \frac{1}{2\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \min\left(\theta, \lambda_k(G(\omega) \Lambda (G(\omega))^{*})\right) d\omega.$$
Proof. 
See Appendix C. ☐

4.2. Integral Formula for the RDF of Gaussian ARMA AWSS Vector Processes

Theorem 3.
Let $\{x_n : n \in \mathbb{N}\}$ be as in Definition 4. Assume that $\{G_k\}_{k=-\infty}^{\infty}$, with $G_0 = I_N$ and $G_k = 0_{N \times N}$ for all $k > 0$, is the sequence of Fourier coefficients of a function $G : \mathbb{R} \to \mathbb{C}^{N \times N}$, which is continuous and $2\pi$-periodic. Suppose that $\{F_k\}_{k=-\infty}^{\infty}$, with $F_0 = I_N$ and $F_k = 0_{N \times N}$ for all $k > 0$, is the sequence of Fourier coefficients of a function $F : \mathbb{R} \to \mathbb{C}^{N \times N}$, which is continuous and $2\pi$-periodic. Assume that $\{\|(T_n(F))^{-1}\|_2\}$ is bounded and $\det(F(\omega)) \neq 0$ for all $\omega \in \mathbb{R}$. Then:
1. $\{x_n : n \in \mathbb{N}\}$ is AWSS with APSD $X(\omega) = (F(\omega))^{-1} G(\omega) \Lambda ((F(\omega))^{-1} G(\omega))^{*}$ for all $\omega \in \mathbb{R}$.
2. If $\{x_n : n \in \mathbb{N}\}$ is Gaussian, $\det(G(\omega)) \neq 0$ for all $\omega \in \mathbb{R}$, and $D \in \left(0, \frac{\operatorname{tr}(X_0)}{N}\right)$, then:
$$R(D) = \frac{1}{4\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \max\left(0, \ln \frac{\lambda_k((F(\omega))^{-1} G(\omega) \Lambda ((F(\omega))^{-1} G(\omega))^{*})}{\theta}\right) d\omega,$$
where $\theta$ is the real number satisfying:
$$D = \frac{1}{2\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \min\left(\theta, \lambda_k((F(\omega))^{-1} G(\omega) \Lambda ((F(\omega))^{-1} G(\omega))^{*})\right) d\omega.$$
Proof. 
See Appendix D. ☐
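Under the indexing conventions of Theorems 2 and 3 ($G_0 = F_0 = I_N$ and vanishing Fourier coefficients at positive indices), the APSD of Theorem 3 can be assembled as a callable and fed to the `rdf_awss` sketch of Section 3. A minimal version (our own naming and construction):

```python
import numpy as np

def arma_apsd(F_coeffs, G_coeffs, Lam):
    """Return the callable omega -> X(omega) of Theorem 3, with
    F(omega) = I_N + sum_{j>=1} F_{-j} e^{-i*j*omega}, and likewise for G.
    F_coeffs = [F_{-1}, ..., F_{-p}]; G_coeffs = [G_{-1}, ..., G_{-q}]."""
    N = Lam.shape[0]
    def X(w):
        F = np.eye(N, dtype=complex)
        for j, Fj in enumerate(F_coeffs, start=1):
            F += Fj * np.exp(-1j * j * w)
        G = np.eye(N, dtype=complex)
        for j, Gj in enumerate(G_coeffs, start=1):
            G += Gj * np.exp(-1j * j * w)
        H = np.linalg.solve(F, G)        # (F(omega))^{-1} G(omega)
        return H @ Lam @ H.conj().T
    return X
```

Setting `F_coeffs = []` makes $F(\omega) = I_N$ and recovers the MA case of Theorem 2.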

5. Numerical Example

We finish the paper with a numerical example where the RDF of a Gaussian AWSS vector process is computed. Specifically, we compute the RDF of the MA(1) vector process considered in ([5], Example 2.1) by assuming that it is Gaussian.
Let $\{x_n : n \in \mathbb{N}\}$ be as in Definition 3 with $N = 2$,
$$G_{-1} = \begin{pmatrix} -0.8 & -0.7 \\ 0.4 & -0.6 \end{pmatrix},$$
$G_{-j} = 0_{2 \times 2}$ for all $j > 1$, and:
$$\Lambda = \begin{pmatrix} 4 & 1 \\ 1 & 2 \end{pmatrix}.$$
Assume that $\{x_n : n \in \mathbb{N}\}$ is Gaussian. Figure 1 shows $R(D)$ with $D \in (0, 5.77)$, which we have computed using Theorem 2.
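Up to discretization error, the computation behind Figure 1 can be reproduced with the `arma_apsd` and `rdf_awss` sketches given after Theorems 3 and 1, respectively (the grid of distortion values is an arbitrary choice of ours):

```python
import numpy as np

# Data of the example (Definition 3 with N = 2)
G_m1 = np.array([[-0.8, -0.7], [0.4, -0.6]])   # G_{-1}
Lam  = np.array([[4.0, 1.0], [1.0, 2.0]])

# APSD of Theorem 2: X(omega) = G(omega) Lam (G(omega))^*, G(omega) = I_2 + G_{-1} e^{-i*omega}
X = arma_apsd([], [G_m1], Lam)

# Right end of the distortion range: tr(X_0)/N, where X_0 = Lam + G_{-1} Lam G_{-1}^T
# (the cross terms of X(omega) integrate to zero over a period)
print(np.trace(Lam + G_m1 @ Lam @ G_m1.T) / 2)   # 5.77

# R(D) curve on (0, 5.77), as in Figure 1
Ds = np.linspace(0.05, 5.7, 120)
Rs = [rdf_awss(X, D, N=2) for D in Ds]
```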

Author Contributions

Authors are listed in order of their degree of involvement in the work, with the most active contributors listed first. J.G.-G. conceived the research question. All authors proved the main results. J.G.-G. and X.I. performed the simulations. All authors wrote the paper. All authors have read and approved the final manuscript.

Funding

This work was supported in part by the Spanish Ministry of Economy and Competitiveness through the CARMEN project (TEC2016-75067-C4-3-R).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proof of Theorem 1

Proof. 
We divide the proof into six steps.
Step 1: We show that there exists $n_0 \in \mathbb{N}$ such that $\theta_n$ in Equation (3) exists for all $n \geq n_0$, or equivalently, such that $D \in \left(0, \frac{\operatorname{tr}(E(x_{n:1} x_{n:1}^\top))}{nN}\right)$ for all $n \geq n_0$.
Since $\{E(x_{n:1} x_{n:1}^\top)\} = \{E(x_{n:1} x_{n:1}^{*})\} \sim \{T_n(X)\}$, applying ([11], Theorem 6.6) yields:
$$\lim_{n \to \infty} \frac{1}{nN} \sum_{k=1}^{nN} \lambda_k(E(x_{n:1} x_{n:1}^\top)) = \frac{1}{2\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \lambda_k(X(\omega))\, d\omega = \frac{1}{2\pi N} \int_0^{2\pi} \operatorname{tr}(X(\omega))\, d\omega = \frac{1}{2\pi N} \int_0^{2\pi} \sum_{k=1}^{N} [X(\omega)]_{k,k}\, d\omega = \frac{1}{2\pi N} \sum_{k=1}^{N} \int_0^{2\pi} [X(\omega)]_{k,k}\, d\omega = \frac{1}{2\pi N} \sum_{k=1}^{N} \left[\int_0^{2\pi} X(\omega)\, d\omega\right]_{k,k} = \frac{1}{2\pi N} \operatorname{tr}\left(\int_0^{2\pi} X(\omega)\, d\omega\right) = \frac{1}{N} \operatorname{tr}\left(\frac{1}{2\pi} \int_0^{2\pi} X(\omega)\, d\omega\right) = \frac{\operatorname{tr}(X_0)}{N}. \tag{A1}$$
Consequently, as $D \in \left(0, \frac{\operatorname{tr}(X_0)}{N}\right)$, there exists $n_0 \in \mathbb{N}$ such that:
$$\left|\frac{1}{nN} \sum_{k=1}^{nN} \lambda_k(E(x_{n:1} x_{n:1}^\top)) - \frac{\operatorname{tr}(X_0)}{N}\right| < \frac{\operatorname{tr}(X_0)}{N} - D \qquad \forall n \geq n_0.$$
Therefore, since:
$$\frac{\operatorname{tr}(X_0)}{N} - \frac{1}{nN} \sum_{k=1}^{nN} \lambda_k(E(x_{n:1} x_{n:1}^\top)) \leq \left|\frac{1}{nN} \sum_{k=1}^{nN} \lambda_k(E(x_{n:1} x_{n:1}^\top)) - \frac{\operatorname{tr}(X_0)}{N}\right| \qquad \forall n \geq n_0,$$
we obtain:
$$D < \frac{1}{nN} \sum_{k=1}^{nN} \lambda_k(E(x_{n:1} x_{n:1}^\top)) = \frac{\operatorname{tr}(E(x_{n:1} x_{n:1}^\top))}{nN} \qquad \forall n \geq n_0. \tag{A2}$$
Step 2: We prove that the sequence of real numbers $\{\theta_n\}_{n \geq n_0}$ is bounded.
From Equation (A2), we have $\theta_n < \lambda_1(E(x_{n:1} x_{n:1}^\top))$ for all $n \geq n_0$. As $\{E(x_{n:1} x_{n:1}^\top)\} \sim \{T_n(X)\}$, there exists $M \in [0, \infty)$ such that $\|E(x_{n:1} x_{n:1}^\top)\|_2, \|T_n(X)\|_2 \leq M$ for all $n \in \mathbb{N}$. Thus,
$$0 < D = \frac{1}{nN} \sum_{k=1}^{nN} \min\left(\theta_n, \lambda_k(E(x_{n:1} x_{n:1}^\top))\right) \leq \frac{1}{nN} \sum_{k=1}^{nN} \theta_n = \theta_n < \lambda_1(E(x_{n:1} x_{n:1}^\top)) = \|E(x_{n:1} x_{n:1}^\top)\|_2 \leq M \qquad \forall n \geq n_0.$$
Step 3: We show that if $\{\theta_{\sigma(n)}\}$ is a convergent subsequence of $\{\theta_n\}_{n \geq n_0}$, then $\lim_{n \to \infty} \theta_{\sigma(n)} = \theta$.
We denote by $\hat{\theta}$ the limit of $\{\theta_{\sigma(n)}\}$. We need to prove that $\hat{\theta} = \theta$.
Since $0 < D \leq \theta_n$ for all $n \geq n_0$, we have $0 < D \leq \hat{\theta}$. Let $\{\hat{\theta}_n\}$ be the sequence of real numbers such that $\{\hat{\theta}_{\sigma(n)}\} = \{\theta_{\sigma(n)}\}$ and $\hat{\theta}_n = \hat{\theta}$ for all $n \in \mathbb{N} \setminus \sigma(\mathbb{N})$. Obviously, $\lim_{n \to \infty} \hat{\theta}_n = \hat{\theta}$ and $0 < \hat{\theta}_n$ for all $n \in \mathbb{N}$. As $\lim_{n \to \infty} \frac{1}{\hat{\theta}_n} = \frac{1}{\hat{\theta}}$ and $\operatorname{rank}(E(x_{n:1} x_{n:1}^\top)) = nN$ for all $n \in \mathbb{N}$, applying ([12], Lemma 1) yields $\left\{\frac{1}{\hat{\theta}_n} E(x_{n:1} x_{n:1}^\top)\right\} \sim \left\{\frac{1}{\hat{\theta}} T_n(X)\right\}$. From ([11], Lemma 4.2), we obtain $\left\{\frac{1}{\hat{\theta}} T_n(X)\right\} = \left\{T_n\left(\frac{1}{\hat{\theta}} X\right)\right\}$. Hence, applying ([11], Theorem 6.6) yields:
$$D = \lim_{n \to \infty} \frac{1}{\sigma(n)N} \sum_{k=1}^{\sigma(n)N} \min\left(\theta_{\sigma(n)}, \lambda_k(E(x_{\sigma(n):1} x_{\sigma(n):1}^\top))\right) = \lim_{n \to \infty} \theta_{\sigma(n)} \frac{1}{\sigma(n)N} \sum_{k=1}^{\sigma(n)N} \min\left(1, \frac{\lambda_k(E(x_{\sigma(n):1} x_{\sigma(n):1}^\top))}{\theta_{\sigma(n)}}\right) = \hat{\theta} \lim_{n \to \infty} \frac{1}{\sigma(n)N} \sum_{k=1}^{\sigma(n)N} \min\left(1, \frac{\lambda_k(E(x_{\sigma(n):1} x_{\sigma(n):1}^\top))}{\hat{\theta}_{\sigma(n)}}\right) = \hat{\theta} \lim_{n \to \infty} \frac{1}{nN} \sum_{k=1}^{nN} \min\left(1, \frac{\lambda_k(E(x_{n:1} x_{n:1}^\top))}{\hat{\theta}_n}\right) = \hat{\theta} \lim_{n \to \infty} \frac{1}{nN} \sum_{k=1}^{nN} \min\left(1, \lambda_k\left(\frac{1}{\hat{\theta}_n} E(x_{n:1} x_{n:1}^\top)\right)\right) = \hat{\theta}\, \frac{1}{2\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \min\left(1, \lambda_k\left(\frac{1}{\hat{\theta}} X(\omega)\right)\right) d\omega = \hat{\theta}\, \frac{1}{2\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \min\left(1, \frac{\lambda_k(X(\omega))}{\hat{\theta}}\right) d\omega = \frac{1}{2\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \min\left(\hat{\theta}, \lambda_k(X(\omega))\right) d\omega.$$
Thus, $\hat{\theta}$ is a real number satisfying Equation (5). Since $D < \frac{\operatorname{tr}(X_0)}{N} = \frac{1}{2\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \lambda_k(X(\omega))\, d\omega$, there exists a unique real number $\theta$ satisfying Equation (5), and consequently, $\hat{\theta} = \theta$.
Step 4: We prove that $\lim_{n \to \infty} \theta_n = \theta$. From Steps 2 and 3, we have $\liminf_{n \to \infty} \theta_n = \limsup_{n \to \infty} \theta_n = \theta$. Consequently, the sequence of real numbers $\{\theta_n\}_{n \geq n_0}$ is convergent, and its limit is $\theta$ (see, e.g., ([15], p. 57)).
Step 5: We show that Equation (4) holds.
Let $\{\hat{\theta}_n\}$ be the sequence of positive numbers defined in Step 3 for the case in which $\{\sigma(n)\} = \{n + n_0 - 1\}$, that is, $\hat{\theta}_n = \theta_n$ if $n \geq n_0$ and $\hat{\theta}_n = \theta$ if $n < n_0$. From ([11], Theorem 6.6), we obtain:
$$R(D) = \lim_{n \to \infty} R_n(D) = \lim_{n \to \infty} \frac{1}{nN} \sum_{k=1}^{nN} \max\left(0, \frac{1}{2} \ln \frac{\lambda_k(E(x_{n:1} x_{n:1}^\top))}{\theta_n}\right) = \lim_{n \to \infty} \frac{1}{nN} \sum_{k=1}^{nN} \frac{1}{2} \ln \max\left(1, \frac{\lambda_k(E(x_{n:1} x_{n:1}^\top))}{\theta_n}\right) = \frac{1}{2} \lim_{n \to \infty} \frac{1}{nN} \sum_{k=1}^{nN} \ln \max\left(1, \frac{\lambda_k(E(x_{n:1} x_{n:1}^\top))}{\hat{\theta}_n}\right) = \frac{1}{2} \lim_{n \to \infty} \frac{1}{nN} \sum_{k=1}^{nN} \ln \max\left(1, \lambda_k\left(\frac{1}{\hat{\theta}_n} E(x_{n:1} x_{n:1}^\top)\right)\right) = \frac{1}{4\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \ln \max\left(1, \lambda_k\left(\frac{1}{\theta} X(\omega)\right)\right) d\omega = \frac{1}{4\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \ln \max\left(1, \frac{\lambda_k(X(\omega))}{\theta}\right) d\omega = \frac{1}{4\pi N} \int_0^{2\pi} \sum_{k=1}^{N} \max\left(0, \ln \frac{\lambda_k(X(\omega))}{\theta}\right) d\omega.$$
Step 6: We prove that Equation (4) is the operational RDF of $\{x_n : n \in \mathbb{N}\}$. Following the same arguments that Gray used in [16] for Gaussian AR AWSS one-dimensional vector processes, to prove the negative (converse) coding theorem and the positive (achievability) coding theorem, we only need to show that the sequence $d_{\max}(n)$ defined in ([17], p. 490) is bounded. Hence, Equation (A1) finishes the proof. ☐

Appendix B. Proof of Corollary 1

Proof. 
Since $X(\omega)$ is positive definite for all $\omega \in \mathbb{R}$, from ([11], Theorem 4.4) and ([18], Corollary VI.1.6), we have:
$$0 < \min_{\omega \in [0, 2\pi]} \lambda_N(X(\omega)) = \inf_{\omega \in [0, 2\pi]} \lambda_N(X(\omega)) \leq \lambda_{nN}(T_n(X)) = \lambda_{nN}(E(x_{n:1} x_{n:1}^{*})) = \lambda_{nN}(E(x_{n:1} x_{n:1}^\top))$$
for all $n \in \mathbb{N}$, and consequently, $E(x_{n:1} x_{n:1}^\top)$ is positive definite for all $n \in \mathbb{N}$. Combining ([11], Lemma 3.3) and ([11], Theorem 4.3) yields $\{E(x_{n:1} x_{n:1}^\top)\} = \{T_n(X)\} \sim \{T_n(X)\}$. The proof finishes by applying Theorem 1. ☐

Appendix C. Proof of Theorem 2

Proof. 
(1) From Equation (1), we have:
$$\begin{pmatrix} x_n \\ x_{n-1} \\ x_{n-2} \\ \vdots \\ x_1 \end{pmatrix} = \begin{pmatrix} I_N & G_{-1} & G_{-2} & \cdots & G_{1-n} \\ 0_{N \times N} & I_N & G_{-1} & \cdots & G_{2-n} \\ 0_{N \times N} & 0_{N \times N} & I_N & \cdots & G_{3-n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0_{N \times N} & 0_{N \times N} & 0_{N \times N} & \cdots & I_N \end{pmatrix} \begin{pmatrix} w_n \\ w_{n-1} \\ w_{n-2} \\ \vdots \\ w_1 \end{pmatrix},$$
or more compactly,
$$x_{n:1} = T_n(G) w_{n:1}$$
for all $n \in \mathbb{N}$. Consequently,
$$x_{n:1} x_{n:1}^\top = T_n(G) w_{n:1} w_{n:1}^\top (T_n(G))^\top, \qquad n \in \mathbb{N},$$
and applying ([11], Lemma 4.2) yields:
$$E(x_{n:1} x_{n:1}^\top) = T_n(G) E(w_{n:1} w_{n:1}^\top) (T_n(G))^\top = T_n(G) T_n(\Lambda) (T_n(G))^\top = T_n(G) T_n(\Lambda) T_n(G^{*}), \tag{A3}$$
where $G^{*}(\omega) := (G(\omega))^{*}$, $\omega \in \mathbb{R}$. Combining ([11], Lemma 3.3) and ([11], Theorem 4.3), we obtain $\{(T_n(G))^\top\} \sim \{T_n(G^{*})\}$. Moreover, applying ([10], Theorem 3) yields $\{T_n(\Lambda) T_n(G^{*})\} \sim \{T_n(\Lambda G^{*})\}$. Hence, from ([10], Lemma 2) and ([10], Theorem 3), we have:
$$\{E(x_{n:1} x_{n:1}^\top)\} = \{E(x_{n:1} x_{n:1}^{*})\} = \{T_n(G) T_n(\Lambda) T_n(G^{*})\} \sim \{T_n(G) T_n(\Lambda G^{*})\} \sim \{T_n(G \Lambda G^{*})\} = \{T_n(X)\}. \tag{A4}$$
Thus, as the relation $\sim$ is transitive (see ([11], Lemma 3.1)), $\{x_n : n \in \mathbb{N}\}$ is AWSS with APSD $X$. (2) First, we prove that $X(\omega)$ is positive definite for all $\omega \in \mathbb{R}$. Fix $\omega \in \mathbb{R}$, and consider $y \in \mathbb{C}^{N \times 1}$. Since $\Lambda$ is positive definite, we have:
$$y^{*} X(\omega) y = y^{*} G(\omega) \Lambda (G(\omega))^{*} y = ((G(\omega))^{*} y)^{*} \Lambda\, (G(\omega))^{*} y > 0$$
whenever $(G(\omega))^{*} y \neq 0_{N \times 1}$. As $\det(G(\omega)) \neq 0$, $(G(\omega))^{*} y = 0_{N \times 1}$ if and only if $y = ((G(\omega))^{*})^{-1} 0_{N \times 1} = 0_{N \times 1}$, and consequently, $X(\omega)$ is positive definite.
Secondly, we prove that $E(x_{n:1} x_{n:1}^\top)$ is positive definite for all $n \in \mathbb{N}$. To do that, we only need to show that $\det(E(x_{n:1} x_{n:1}^\top)) \neq 0$ for all $n \in \mathbb{N}$, because, as $E(x_{n:1} x_{n:1}^\top)$ is a correlation matrix, it is positive semidefinite. We have:
$$\det(E(x_{n:1} x_{n:1}^\top)) = \det(T_n(G) T_n(\Lambda) T_n(G^{*})) = \det(T_n(G)) \det(T_n(\Lambda)) \det(T_n(G^{*})) = |\det(T_n(G))|^2 (\det(\Lambda))^n = |(\det(I_N))^n|^2 (\det(\Lambda))^n = (\det(\Lambda))^n \neq 0$$
for all $n \in \mathbb{N}$.
The result now follows from Theorem 1. ☐
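As a numerical sanity check of part (1), one can compare the exact covariance of the MA(1) example of Section 5 with $T_n(X)$: the two matrices differ only in the trailing $N \times N$ diagonal block (because $x_1 = w_1$), so $\|E(x_{n:1} x_{n:1}^\top) - T_n(X)\|_F / \sqrt{n} \to 0$, as asymptotic equivalence requires. A sketch (Python/NumPy; the construction is our own):

```python
import numpy as np

G_m1 = np.array([[-0.8, -0.7], [0.4, -0.6]])     # Section 5 data
Lam  = np.array([[4.0, 1.0], [1.0, 2.0]])
N = 2
X0, X1 = Lam + G_m1 @ Lam @ G_m1.T, G_m1 @ Lam   # blocks X_0 and X_{-1} of T_n(X)

def covariance(n, exact=True):
    """Block tridiagonal matrix with X0 on the diagonal and X1 above it;
    with exact=True the trailing block is Lam, because x_1 = w_1."""
    C = np.zeros((n * N, n * N))
    for j in range(n):
        C[j*N:(j+1)*N, j*N:(j+1)*N] = X0
        if j + 1 < n:
            C[j*N:(j+1)*N, (j+1)*N:(j+2)*N] = X1   # E(x_m x_{m-1}^T) = G_{-1} Lam
            C[(j+1)*N:(j+2)*N, j*N:(j+1)*N] = X1.T
    if exact:
        C[-N:, -N:] = Lam
    return C

for n in (10, 100, 1000):
    d = np.linalg.norm(covariance(n, True) - covariance(n, False)) / np.sqrt(n)
    print(n, d)   # decays like 1/sqrt(n), as asymptotic equivalence requires
```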

Appendix D. Proof of Theorem 3

Proof. 
(1) From Equation (2), we have:
$$\sum_{j=0}^{n-1} F_{-j} x_{n-j} = \sum_{j=0}^{n-1} G_{-j} w_{n-j},$$
or more compactly,
$$T_n(F) x_{n:1} = T_n(G) w_{n:1}$$
for all $n \in \mathbb{N}$. Consequently,
$$T_n(F) x_{n:1} x_{n:1}^\top (T_n(F))^\top = (T_n(F) x_{n:1})(T_n(F) x_{n:1})^\top = T_n(G) w_{n:1} w_{n:1}^\top (T_n(G))^\top, \qquad n \in \mathbb{N},$$
and applying Equation (A3) yields:
$$T_n(F) E(x_{n:1} x_{n:1}^\top) (T_n(F))^\top = E(T_n(F) x_{n:1} x_{n:1}^\top (T_n(F))^\top) = T_n(G) E(w_{n:1} w_{n:1}^\top) (T_n(G))^\top = T_n(G) T_n(\Lambda) T_n(G^{*}).$$
Since $\det(T_n(F)) = (\det(I_N))^n = 1 \neq 0$ for all $n \in \mathbb{N}$, we obtain:
$$E(x_{n:1} x_{n:1}^\top) = (T_n(F))^{-1} T_n(G) T_n(\Lambda) T_n(G^{*}) ((T_n(F))^\top)^{-1} = ((T_n(F))^{*} T_n(F))^{-1} (T_n(F))^{*}\, T_n(G) T_n(\Lambda) T_n(G^{*})\, T_n(F) ((T_n(F))^{*} T_n(F))^{-1}. \tag{A5}$$
From Equation (A4) and the fact that the relation $\sim$ is transitive (see ([11], Lemma 3.1)), we have $\{T_n(G) T_n(\Lambda) T_n(G^{*})\} \sim \{T_n(G \Lambda G^{*})\}$. Combining ([11], Lemma 3.3) and ([11], Theorem 4.3) yields $\{(T_n(F))^\top\} \sim \{T_n(F^{*})\}$. Therefore, applying ([10], Lemma 2) and ([10], Theorem 3), we obtain:
$$\{T_n(G) T_n(\Lambda) T_n(G^{*}) T_n(F)\} \sim \{T_n(G \Lambda G^{*}) T_n(F)\} \sim \{T_n(G \Lambda G^{*} F)\}.$$
Using ([11], Lemma 3.1) yields $\{(T_n(F))^{*}\} \sim \{T_n(F^{*})\}$, and applying ([10], Lemma 2), ([11], Lemma 4.2), and ([10], Theorem 3), we have:
$$\{(T_n(F))^{*} T_n(G) T_n(\Lambda) T_n(G^{*}) T_n(F)\} \sim \{(T_n(F))^{*} T_n(G \Lambda G^{*} F)\} \sim \{T_n(F^{*}) T_n(G \Lambda G^{*} F)\} \sim \{T_n(F^{*} G \Lambda G^{*} F)\}. \tag{A6}$$
If $\omega \in \mathbb{R}$ and $y \in \mathbb{C}^{N \times 1}$, then:
$$y^{*} (F(\omega))^{*} F(\omega) y = (F(\omega) y)^{*} F(\omega) y = \|F(\omega) y\|_2^2 > 0$$
whenever $F(\omega) y \neq 0_{N \times 1}$. As $\det(F(\omega)) \neq 0$, $F(\omega) y = 0_{N \times 1}$ if and only if $y = (F(\omega))^{-1} 0_{N \times 1} = 0_{N \times 1}$; hence $(F(\omega))^{*} F(\omega)$ is positive definite for all $\omega \in \mathbb{R}$, and applying ([11], Theorem 4.4) and ([18], Corollary VI.1.6) yields:
$$0 < \min_{\omega \in [0, 2\pi]} \lambda_N((F(\omega))^{*} F(\omega)) = \inf_{\omega \in [0, 2\pi]} \lambda_N((F(\omega))^{*} F(\omega)) \leq \lambda_{nN}(T_n(F^{*} F)) \qquad \forall n \in \mathbb{N}.$$
Thus,
$$\|(T_n(F^{*} F))^{-1}\|_2 = \max_{k \in \{1, \dots, nN\}} |\lambda_k((T_n(F^{*} F))^{-1})| = \max_{k \in \{1, \dots, nN\}} \frac{1}{\lambda_k(T_n(F^{*} F))} = \frac{1}{\lambda_{nN}(T_n(F^{*} F))} \leq \frac{1}{\min_{\omega \in [0, 2\pi]} \lambda_N((F(\omega))^{*} F(\omega))}$$
for all $n \in \mathbb{N}$. Observe that $\|((T_n(F))^{*} T_n(F))^{-1}\|_2$ is also bounded, because:
$$\|((T_n(F))^{*} T_n(F))^{-1}\|_2 = \|(T_n(F))^{-1} ((T_n(F))^{*})^{-1}\|_2 = \|(T_n(F))^{-1} ((T_n(F))^{-1})^{*}\|_2 \leq \|(T_n(F))^{-1}\|_2 \|((T_n(F))^{-1})^{*}\|_2 = \|(T_n(F))^{-1}\|_2^2 \qquad \forall n \in \mathbb{N}.$$
Moreover, from ([10], Theorem 3), we obtain $\{(T_n(F))^{*} T_n(F)\} = \{T_n(F^{*}) T_n(F)\} \sim \{T_n(F^{*} F)\}$. Consequently, applying Lemma A1 (see Appendix E) and ([11], Theorem 6.4) yields:
$$\{((T_n(F))^{*} T_n(F))^{-1}\} \sim \{(T_n(F^{*} F))^{-1}\} \sim \{T_n((F^{*} F)^{-1})\} = \{T_n(F^{-1} (F^{*})^{-1})\}.$$
Therefore, from Equation (A6), ([10], Lemma 2), and ([10], Theorem 3), we have:
$$\{((T_n(F))^{*} T_n(F))^{-1} (T_n(F))^{*} T_n(G) T_n(\Lambda) T_n(G^{*}) T_n(F)\} \sim \{T_n(F^{-1} (F^{*})^{-1}) T_n(F^{*} G \Lambda G^{*} F)\} \sim \{T_n(F^{-1} G \Lambda G^{*} F)\}. \tag{A7}$$
Hence, applying Equation (A7), ([10], Lemma 2), and ([10], Theorem 3), we deduce that:
$$\{E(x_{n:1} x_{n:1}^\top)\} \sim \{T_n(F^{-1} G \Lambda G^{*} F) T_n(F^{-1} (F^{*})^{-1})\} \sim \{T_n(F^{-1} G \Lambda G^{*} (F^{*})^{-1})\} = \{T_n(F^{-1} G \Lambda G^{*} (F^{-1})^{*})\} = \{T_n(X)\}.$$
(2) First, we prove that $X(\omega)$ is positive definite for all $\omega \in \mathbb{R}$. Fix $\omega \in \mathbb{R}$, and consider $y \in \mathbb{C}^{N \times 1}$. Since $G(\omega) \Lambda (G(\omega))^{*}$ is positive definite (see the proof of Theorem 2), we have:
$$y^{*} X(\omega) y = y^{*} (F(\omega))^{-1} G(\omega) \Lambda (G(\omega))^{*} ((F(\omega))^{-1})^{*} y = (((F(\omega))^{-1})^{*} y)^{*}\, G(\omega) \Lambda (G(\omega))^{*}\, ((F(\omega))^{-1})^{*} y > 0$$
whenever $((F(\omega))^{*})^{-1} y = ((F(\omega))^{-1})^{*} y \neq 0_{N \times 1}$. As $((F(\omega))^{*})^{-1} y = 0_{N \times 1}$ if and only if $y = (F(\omega))^{*} 0_{N \times 1} = 0_{N \times 1}$, $X(\omega)$ is positive definite.
Secondly, we prove that $E(x_{n:1} x_{n:1}^\top)$ is positive definite for all $n \in \mathbb{N}$, or equivalently, that $\det(E(x_{n:1} x_{n:1}^\top)) \neq 0$ for all $n \in \mathbb{N}$. Applying Equation (A5) yields:
$$\det(E(x_{n:1} x_{n:1}^\top)) = \det((T_n(F))^{-1} T_n(G) T_n(\Lambda) T_n(G^{*}) ((T_n(F))^\top)^{-1}) = \frac{\det(T_n(G) T_n(\Lambda) T_n(G^{*}))}{\det(T_n(F)) \det((T_n(F))^\top)} = \frac{(\det(\Lambda))^n}{|\det(T_n(F))|^2} = (\det(\Lambda))^n \neq 0 \qquad \forall n \in \mathbb{N}.$$
The result now follows from Theorem 1. ☐

Appendix E. A Property of Asymptotically Equivalent Sequences of Invertible Matrices

Lemma A1.
Let $A_n$ and $B_n$ be $nN \times nN$ invertible matrices for all $n \in \mathbb{N}$. Suppose that $\{A_n\} \sim \{B_n\}$ and that $\{\|A_n^{-1}\|_2\}$ and $\{\|B_n^{-1}\|_2\}$ are bounded. Then, $\{A_n^{-1}\} \sim \{B_n^{-1}\}$.
Proof. 
If $M \in [0, \infty)$ is such that $\|A_n^{-1}\|_2, \|B_n^{-1}\|_2 \leq M$ for all $n \in \mathbb{N}$, then:
$$0 \leq \frac{\|A_n^{-1} - B_n^{-1}\|_F}{\sqrt{n}} = \frac{\|B_n^{-1} - A_n^{-1}\|_F}{\sqrt{n}} = \frac{\|B_n^{-1} A_n A_n^{-1} - B_n^{-1} B_n A_n^{-1}\|_F}{\sqrt{n}} = \frac{\|B_n^{-1} (A_n - B_n) A_n^{-1}\|_F}{\sqrt{n}} \leq \|B_n^{-1}\|_2 \frac{\|(A_n - B_n) A_n^{-1}\|_F}{\sqrt{n}} \leq \|B_n^{-1}\|_2 \frac{\|A_n - B_n\|_F}{\sqrt{n}} \|A_n^{-1}\|_2 \leq M^2 \frac{\|A_n - B_n\|_F}{\sqrt{n}} \to 0.$$
This result was presented in ([6], Theorem 1) for the case N = 1 .
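A quick numerical illustration of Lemma A1 (with $N = 1$ and arbitrarily chosen diagonal matrices; the construction is ours, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 100, 1000):
    A = np.diag(1.0 + rng.random(n))   # eigenvalues in (1, 2), so ||A^{-1}||_2 < 1
    B = A.copy()
    B[0, 0] += 1.0                     # fixed-rank perturbation: ||A - B||_F / sqrt(n) -> 0
    d = np.linalg.norm(np.linalg.inv(A) - np.linalg.inv(B)) / np.sqrt(n)
    print(n, d)                        # -> 0, as Lemma A1 predicts
```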

References

  1. Hammerich, E. Waterfilling theorems for linear time-varying channels and related nonstationary sources. IEEE Trans. Inf. Theory 2016, 62, 6904–6916. [Google Scholar] [CrossRef]
  2. Kipnis, A.; Goldsmith, A.J.; Eldar, Y.C. The distortion rate function of cyclostationary Gaussian processes. IEEE Trans. Inf. Theory 2018, 64, 3810–3824. [Google Scholar] [CrossRef]
  3. Toms, W.; Berger, T. Information rates of stochastically driven dynamic systems. IEEE Trans. Inf. Theory 1971, 17, 113–114. [Google Scholar] [CrossRef]
  4. Kolmogorov, A.N. On the Shannon theory of information transmission in the case of continuous signals. IRE Trans. Inf. Theory 1956, 2, 102–108. [Google Scholar] [CrossRef]
  5. Reinsel, G.C. Elements of Multivariate Time Series Analysis; Springer: Berlin, Germany, 1993. [Google Scholar]
  6. Gray, R.M. Toeplitz and circulant matrices: A review. Found. Trends Commun. Inf. Theory 2006, 2, 155–239. [Google Scholar] [CrossRef]
  7. Ephraim, Y.; Lev-Ari, H.; Gray, R.M. Asymptotic minimum discrimination information measure for asymptotically weakly stationary processes. IEEE Trans. Inf. Theory 1988, 34, 1033–1040. [Google Scholar] [CrossRef]
  8. Gray, R.M. On the asymptotic eigenvalue distribution of Toeplitz matrices. IEEE Trans. Inf. Theory 1972, 18, 725–730. [Google Scholar] [CrossRef]
  9. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Asymptotically equivalent sequences of matrices and Hermitian block Toeplitz matrices with continuous symbols: Applications to MIMO systems. IEEE Trans. Inf. Theory 2008, 54, 5671–5680. [Google Scholar] [CrossRef]
  10. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Asymptotically equivalent sequences of matrices and multivariate ARMA processes. IEEE Trans. Inf. Theory 2011, 57, 5444–5454. [Google Scholar] [CrossRef]
  11. Gutiérrez-Gutiérrez, J.; Crespo, P.M. Block Toeplitz matrices: Asymptotic results and applications. Found. Trends Commun. Inf. Theory 2011, 8, 179–257. [Google Scholar] [CrossRef]
  12. Gutiérrez-Gutiérrez, J.; Crespo, P.M.; Zárraga-Rodríguez, M.; Hogstad, B.O. Asymptotically equivalent sequences of matrices and capacity of a discrete-time Gaussian MIMO channel with memory. IEEE Trans. Inf. Theory 2017, 63, 6000–6003. [Google Scholar] [CrossRef]
  13. Kafedziski, V. Rate distortion of stationary and nonstationary vector Gaussian sources. In Proceedings of the IEEE/SP 13th Workshop on Statistical Signal Processing, Bordeaux, France, 17–20 July 2005; IEEE: Piscataway, NJ, USA, 2005; pp. 1054–1059. [Google Scholar]
  14. Gazzah, H.; Regalia, P.A.; Delmas, J.P. Asymptotic eigenvalue distribution of block Toeplitz matrices and application to blind SIMO channel identification. IEEE Trans. Inf. Theory 2001, 47, 1243–1251. [Google Scholar] [CrossRef] [Green Version]
  15. Rudin, W. Principles of Mathematical Analysis; McGraw-Hill: New York, NY, USA, 1976. [Google Scholar]
  16. Gray, R.M. Information rates of autoregressive processes. IEEE Trans. Inf. Theory 1970, 16, 412–421. [Google Scholar] [CrossRef]
  17. Gallager, R.G. Information Theory and Reliable Communication; John Wiley & Sons: Hoboken, NJ, USA, 1968. [Google Scholar]
  18. Bhatia, R. Matrix Analysis; Springer: Berlin, Germany, 1997. [Google Scholar]
Figure 1. Rate Distortion Function (RDF) of the Gaussian MA vector process considered.

