
On the Distribution of the Information Density of Gaussian Random Vectors: Explicit Formulas and Tight Approximations

by
Jonathan E. W. Huffmann
1 and
Martin Mittelbach
2,*
1
Lehrstuhl für Theoretische Informationstechnik, Fakultät für Elektrotechnik und Informationstechnik, Technische Universität München, 80290 München, Germany
2
Lehrstuhl für Theoretische Nachrichtentechnik, Fakultät für Elektrotechnik und Informationstechnik, Technische Universität Dresden, 01062 Dresden, Germany
*
Author to whom correspondence should be addressed.
Entropy 2022, 24(7), 924; https://doi.org/10.3390/e24070924
Submission received: 16 May 2022 / Revised: 23 June 2022 / Accepted: 25 June 2022 / Published: 2 July 2022
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract:
Based on the canonical correlation analysis, we derive series representations of the probability density function (PDF) and the cumulative distribution function (CDF) of the information density of arbitrary Gaussian random vectors as well as a general formula to calculate the central moments. Using the general results, we give closed-form expressions of the PDF and CDF and explicit formulas of the central moments for important special cases. Furthermore, we derive recurrence formulas and tight approximations of the general series representations, which allow efficient numerical calculations with an arbitrarily high accuracy as demonstrated with an implementation in Python publicly available on GitLab. Finally, we discuss the (in)validity of Gaussian approximations of the information density.

1. Introduction and Main Theorems

Let $\xi$ and $\eta$ be arbitrary random variables on an abstract probability space $(\Omega, \mathcal{F}, \mathrm{P})$ such that the joint distribution $\mathrm{P}_{\xi\eta}$ is absolutely continuous w. r. t. the product $\mathrm{P}_\xi \mathrm{P}_\eta$ of the marginal distributions $\mathrm{P}_\xi$ and $\mathrm{P}_\eta$. If $\frac{\mathrm{d}\mathrm{P}_{\xi\eta}}{\mathrm{d}\mathrm{P}_\xi \mathrm{P}_\eta}$ denotes the Radon–Nikodym derivative of $\mathrm{P}_{\xi\eta}$ w. r. t. $\mathrm{P}_\xi \mathrm{P}_\eta$, then
$$i(\xi;\eta) = \log \frac{\mathrm{d}\mathrm{P}_{\xi\eta}}{\mathrm{d}\mathrm{P}_\xi \mathrm{P}_\eta}(\xi,\eta)$$
is called the information density of ξ and η . The expectation E ( i ( ξ ; η ) ) = I ( ξ ; η ) of the information density, called mutual information, plays a key role in characterizing the asymptotic channel coding performance in terms of channel capacity. The non-asymptotic performance, however, is determined by the higher-order moments of the information density and its probability distribution. Achievability and converse bounds that allow a finite blocklength analysis of the optimum channel coding rate are closely related to the distribution function of the information density, also called information spectrum by Han and Verdú [1,2]. Moreover, based on the variance of the information density tight second-order finite blocklength approximations of the optimum code rate can be derived for various important channel models. First work on a non-asymptotic information theoretic analysis was already published in the early years of information theory by Shannon [3], Dobrushin [4], and Strassen [5], among others. Due to the seminal work of Polyanskiy et al. [6], considerable progress has been made in this area. The results of [6] on the one hand and the requirements of current and future wireless networks regarding latency and reliability on the other hand stimulated a significant new interest in this type of analysis (Durisi et al. [7]).
The information density $i(\xi;\eta)$ in the case when $\xi$ and $\eta$ are jointly Gaussian is of special interest due to the prominent role of the Gaussian distribution. Let $\xi = (\xi_1, \xi_2, \ldots, \xi_p)$ and $\eta = (\eta_1, \eta_2, \ldots, \eta_q)$ be real-valued random vectors with nonsingular covariance matrices $R_\xi$ and $R_\eta$ and cross-covariance matrix $R_{\xi\eta}$ with rank $r = \operatorname{rank}(R_{\xi\eta})$. (For notational convenience, we write vectors as row vectors. However, in expressions where matrix or vector multiplications occur, we consider all vectors as column vectors.) Without loss of generality for the subsequent results, we assume the expectation of all random variables to be zero. If $(\xi_1, \xi_2, \ldots, \xi_p, \eta_1, \eta_2, \ldots, \eta_q)$ is a Gaussian random vector, then Pinsker [8], Ch. 9.6 has shown that the distribution of the information density $i(\xi;\eta)$ coincides with the distribution of the random variable
$$\nu = \frac{1}{2}\sum_{i=1}^{r} \varrho_i \left(\tilde{\xi}_i^2 - \tilde{\eta}_i^2\right) + I(\xi;\eta). \tag{1}$$
In this representation $\tilde{\xi}_1, \tilde{\xi}_2, \ldots, \tilde{\xi}_r, \tilde{\eta}_1, \tilde{\eta}_2, \ldots, \tilde{\eta}_r$ are independent and identically distributed (i.i.d.) Gaussian random variables with zero mean and unit variance and the mutual information $I(\xi;\eta)$ in (1) has the form
$$I(\xi;\eta) = \frac{1}{2}\sum_{i=1}^{r} \log \frac{1}{1-\varrho_i^2}. \tag{2}$$
Moreover, $\varrho_1 \geq \varrho_2 \geq \cdots \geq \varrho_r > 0$ denote the positive canonical correlations of $\xi$ and $\eta$ in descending order, which are obtained by a linear method called canonical correlation analysis that yields the maximum correlations between two sets of random variables (see Section 3). The rank $r$ of the cross-covariance matrix $R_{\xi\eta}$ satisfies $0 \leq r \leq \min\{p,q\}$, and for $r = 0$ we have $i(\xi;\eta) = 0$ almost surely and $I(\xi;\eta) = 0$. This corresponds to $\mathrm{P}_{\xi\eta} = \mathrm{P}_\xi \mathrm{P}_\eta$ and the independence of $\xi$ and $\eta$ such that the resulting information density is deterministic. Throughout the rest of the paper, we exclude this degenerate case when the information density is considered and assume subsequently the setting and notation introduced above with $r \geq 1$. As customary notation, we further write $\mathbb{R}$, $\mathbb{N}_0$, and $\mathbb{N}$ to denote the set of real numbers, non-negative integers, and positive integers.
Main contributions. Based on (1), we derive in Section 4 series representations of the probability density function (PDF) and the cumulative distribution function (CDF) as well as explicit general formulas for the central moments of the information density i ( ξ ; η ) given subsequently in Theorems 1 to 3. The series representations are useful as they allow tight approximations with errors as low as desired by finite sums as shown in Section 5.2. Moreover, we derive recurrence formulas in Section 5.1 that allow efficient numerical calculations of the series representations in Theorems 1 and 2.
Theorem 1
(PDF of information density). The PDF f i ( ξ ; η ) of the information density i ( ξ ; η ) is given by
$$f_{i(\xi;\eta)}(x) = \frac{1}{\varrho_r \sqrt{\pi}} \sum_{k_1=0}^{\infty}\sum_{k_2=0}^{\infty}\cdots\sum_{k_{r-1}=0}^{\infty} \left[\prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i}\,\frac{(2k_i)!}{(k_i!)^2 4^{k_i}} \left(1-\frac{\varrho_r^2}{\varrho_i^2}\right)^{k_i}\right] \times \frac{K_{\frac{r-1}{2}+k_1+\cdots+k_{r-1}}\!\left(\frac{|x-I(\xi;\eta)|}{\varrho_r}\right)}{\Gamma\!\left(\frac{r}{2}+k_1+\cdots+k_{r-1}\right)} \left(\frac{|x-I(\xi;\eta)|}{2\varrho_r}\right)^{\frac{r-1}{2}+k_1+\cdots+k_{r-1}}, \quad x \in \mathbb{R}\setminus\{I(\xi;\eta)\}, \tag{3}$$
where Γ ( · ) denotes the gamma function [9], Sec. 5.2.1 and K α ( · ) denotes the modified Bessel function of second kind and order α [9], Sec. 10.25(ii). If r 2 , then f i ( ξ ; η ) ( x ) is also well defined for x = I ( ξ ; η ) .
Theorem 2
(CDF of information density). The CDF F i ( ξ ; η ) of the information density i ( ξ ; η ) is given by
$$F_{i(\xi;\eta)}(x) = \begin{cases} \frac{1}{2} - V\big(I(\xi;\eta)-x\big) & \text{if } x \leq I(\xi;\eta) \\ \frac{1}{2} + V\big(x-I(\xi;\eta)\big) & \text{if } x > I(\xi;\eta), \end{cases}$$
with $V(z)$ defined by
$$V(z) = \sum_{k_1=0}^{\infty}\cdots\sum_{k_{r-1}=0}^{\infty} \left[\prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i}\,\frac{(2k_i)!}{(k_i!)^2 4^{k_i}}\left(1-\frac{\varrho_r^2}{\varrho_i^2}\right)^{k_i}\right] \frac{z}{2\varrho_r} \times \left[ K_{\frac{r-1}{2}+k_1+\cdots+k_{r-1}}\!\left(\tfrac{z}{\varrho_r}\right) L_{\frac{r-3}{2}+k_1+\cdots+k_{r-1}}\!\left(\tfrac{z}{\varrho_r}\right) + K_{\frac{r-3}{2}+k_1+\cdots+k_{r-1}}\!\left(\tfrac{z}{\varrho_r}\right) L_{\frac{r-1}{2}+k_1+\cdots+k_{r-1}}\!\left(\tfrac{z}{\varrho_r}\right) \right], \quad z \geq 0, \tag{4}$$
where L α ( · ) denotes the modified Struve L function of order α [9], Sec. 11.2.
The method to obtain the result in Theorem 1 is adopted from Mathai [10], where a series representation of the PDF of the sum of independent gamma distributed random variables is derived. Previous work of Grad and Solomon [11] and Kotz et al. [12] goes in a similar direction as Mathai [10]; however, it is not directly applicable since only the restriction to positive series coefficients is considered there. Using Theorem 1, the series representation of the CDF of the information density in Theorem 2 is obtained. The details of the derivations of Theorems 1 and 2 are provided in Section 4.
Theorem 3
(Central moments of information density). The m-th central moment E ( [ i ( ξ ; η ) I ( ξ ; η ) ] m ) of the information density i ( ξ ; η ) is given by
$$\mathrm{E}\big([i(\xi;\eta)-I(\xi;\eta)]^m\big) = \begin{cases} \displaystyle\sum_{(m_1,m_2,\ldots,m_r) \in K^{[2]}_{m,r}} m! \prod_{i=1}^{r} \frac{(2m_i)!}{4^{m_i}(m_i!)^2}\,\varrho_i^{2m_i} & \text{if } m = 2\tilde{m} \\ 0 & \text{if } m = 2\tilde{m}-1, \end{cases} \tag{5}$$
for all $\tilde{m} \in \mathbb{N}$, where $K^{[2]}_{m,r} = \{(m_1,m_2,\ldots,m_r) \in \mathbb{N}_0^r : 2m_1+2m_2+\cdots+2m_r = m\}$.
Pinsker [8], Eq. (9.6.17) provided a formula for $\sum_{i=1}^{r} \mathrm{E}\big(\big[\tfrac{\varrho_i}{2}(\tilde{\xi}_i^2-\tilde{\eta}_i^2)\big]^m\big)$, which he called the “derived m-th central moment” of the information density, where $\tilde{\xi}_i$ and $\tilde{\eta}_i$ are given as in (1). These special moments coincide for $m = 2$ with the usual central moments considered in Theorem 3.
The rest of the paper is organized as follows: In Section 2, we discuss important special cases which allow simplified and explicit formulas. In Section 3, we provide some background on the canonical correlation analysis and its application to the calculation of the information density and mutual information for Gaussian random vectors. The proofs of the main Theorems 1 to 3 are given in Section 4. Recurrence formulas, finite sum approximations, and uniform bounds of the approximation error are derived in Section 5, which allow efficient and accurate numerical calculations of the PDF and CDF of the information density. Some examples and illustrations are provided in Section 6, where also the (in)validity of Gaussian approximations is discussed. Finally, Section 7 summarizes the paper. Note that a first version of this paper was published on arXiv as preprint [13].

2. Special Cases

2.1. Equal Canonical Correlations

A simple but important special case for which the series representations in Theorems 1 and 2 simplify to a single summand and the sum of products in Theorem 3 simplifies to a single product is considered in the following corollary.
Corollary 1
(PDF, CDF, and central moments of information density for equal canonical correlations). If all canonical correlations are equal, i.e.,
ϱ 1 = ϱ 2 = = ϱ r ,
then we have the following simplifications.
(i) The PDF f i ( ξ ; η ) of the information density i ( ξ ; η ) simplifies to
$$f_{i(\xi;\eta)}(x) = \frac{1}{\varrho_r \sqrt{\pi}\,\Gamma\!\left(\frac{r}{2}\right)} K_{\frac{r-1}{2}}\!\left(\frac{|x-I(\xi;\eta)|}{\varrho_r}\right) \left(\frac{|x-I(\xi;\eta)|}{2\varrho_r}\right)^{\frac{r-1}{2}}, \quad x \in \mathbb{R}\setminus\{I(\xi;\eta)\}, \tag{6}$$
where $I(\xi;\eta)$ is given by
$$I(\xi;\eta) = -\frac{r}{2}\log\!\left(1-\varrho_r^2\right).$$
If r 2 , then f i ( ξ ; η ) ( x ) is also well defined for x = I ( ξ ; η ) .
(ii) The CDF F i ( ξ ; η ) of the information density i ( ξ ; η ) is given by
$$F_{i(\xi;\eta)}(x) = \begin{cases} \frac{1}{2} - V\big(I(\xi;\eta)-x\big) & \text{if } x \leq I(\xi;\eta) \\ \frac{1}{2} + V\big(x-I(\xi;\eta)\big) & \text{if } x > I(\xi;\eta), \end{cases} \tag{7}$$
with $V(z)$ defined by
$$V(z) = \frac{z}{2\varrho_r}\left[ K_{\frac{r-1}{2}}\!\left(\tfrac{z}{\varrho_r}\right) L_{\frac{r-3}{2}}\!\left(\tfrac{z}{\varrho_r}\right) + K_{\frac{r-3}{2}}\!\left(\tfrac{z}{\varrho_r}\right) L_{\frac{r-1}{2}}\!\left(\tfrac{z}{\varrho_r}\right) \right], \quad z \geq 0. \tag{8}$$
(iii) The m-th central moment E ( [ i ( ξ ; η ) I ( ξ ; η ) ] m ) of the information density i ( ξ ; η ) has the form
$$\mathrm{E}\big([i(\xi;\eta)-I(\xi;\eta)]^m\big) = \begin{cases} \displaystyle\frac{m!}{(m/2)!} \prod_{j=1}^{m/2}\left(\frac{r}{2}+j-1\right) \varrho_r^m & \text{if } m = 2\tilde{m} \\ 0 & \text{if } m = 2\tilde{m}-1, \end{cases} \tag{9}$$
for all m ˜ N .
Clearly, if all canonical correlations are equal, then the only nonzero term in the series (3) and (4) occurs for $k_1 = k_2 = \cdots = k_{r-1} = 0$. For this single summand, the product in square brackets in (3) and (4) is equal to 1 by applying $0^0 = 1$, which yields the results of parts (i) and (ii) in Corollary 1. Details of the derivation of part (iii) of the corollary are provided in Section 4.
Note, if all canonical correlations are equal, then we can rewrite (1) as follows:
$$\nu = \frac{\varrho_r}{2}\left(\sum_{i=1}^{r}\tilde{\xi}_i^2 - \sum_{i=1}^{r}\tilde{\eta}_i^2\right) + I(\xi;\eta).$$
This implies that the distribution of $\nu$ coincides with the distribution of the random variable
$$\nu^* = \frac{\varrho_r}{2}\left(\zeta_1 - \zeta_2\right) + I(\xi;\eta),$$
where ζ 1 and ζ 2 are i.i.d. χ 2 -distributed random variables with r degrees of freedom. With this representation, we can obtain the expression of the PDF given in (6) also from [14], Sec. 4.A.4.
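To make this representation concrete, the following small Python sketch (our own illustration, not part of the reference implementation [26]) draws samples of $\nu^*$ for equal canonical correlations and checks the sample mean and variance against the mutual information and the variance formula (11):

```python
# Monte Carlo check (illustrative sketch): for equal canonical correlations,
# the information density is distributed as (rho/2)*(chi2_r - chi2_r') + I.
import numpy as np

rng = np.random.default_rng(0)
r, rho = 4, 0.9
I = -r / 2 * np.log(1 - rho**2)        # mutual information for equal correlations

zeta1 = rng.chisquare(r, size=1_000_000)
zeta2 = rng.chisquare(r, size=1_000_000)
nu_star = rho / 2 * (zeta1 - zeta2) + I

print(nu_star.mean(), I)               # sample mean vs. mutual information
print(nu_star.var(), r * rho**2)       # sample variance vs. sum of squared correlations
```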
Special cases of Corollary 1. The case when all canonical correlations are equal is important because it occurs in various situations. The subsequent cases follow from the properties of canonical correlations given in Section 3.
(i) Assume that the random variables $\xi_1, \xi_2, \ldots, \xi_p, \eta_1, \eta_2, \ldots, \eta_q$ are pairwise uncorrelated with the exception of the pairs $(\xi_i, \eta_i)$, $i = 1, 2, \ldots, k \leq \min\{p,q\}$, for which we have $\operatorname{cor}(\xi_i, \eta_i) = \rho \neq 0$, where $\operatorname{cor}(\cdot,\cdot)$ denotes the Pearson correlation coefficient. Then, $r = k$ and $\varrho_i = |\rho|$ for all $i = 1, 2, \ldots, r$. Note, if $p = q = k$, then for the previous conditions to hold, it is sufficient that the two-dimensional random vectors $(\xi_i, \eta_i)$ are i.i.d. However, the identical distribution of the $(\xi_i, \eta_i)$'s is not necessary. In Laneman [15], the distribution of the information density for an additive white Gaussian noise channel with i.i.d. Gaussian input is determined. This is a special case of the case with i.i.d. random vectors $(\xi_i, \eta_i)$ just mentioned. In Wu and Jindal [16] and in Buckingham and Valenti [17], an approximation of the information density by a Gaussian random variable is considered for the setting in [15]. A special case very similar to that in [15] is also considered in Polyanskiy et al. [6], Sec. III.J. To the best of the authors' knowledge, explicit formulas for the general case as considered in this paper are not available yet in the literature.
(ii) Assume that the conditions of part (i) are satisfied. Furthermore, assume that A ^ is a real nonsingular matrix of dimension p × p and B ^ is a real nonsingular matrix of dimension q × q . Then, the random vectors
$$\hat{\xi} = \hat{A}\xi \quad \text{and} \quad \hat{\eta} = \hat{B}\eta$$
have the same canonical correlations as the random vectors $\xi$ and $\eta$, i.e., $\varrho_i = |\rho|$ for all $i = 1, 2, \ldots, k \leq \min\{p,q\}$.
(iii) If $r = 1$, i.e., if the cross-covariance matrix $R_{\xi\eta}$ has rank 1, then Corollary 1 obviously applies. Clearly, the simplest special case with $r = 1$ occurs for $p = q = 1$, where $\varrho_1 = |\operatorname{cor}(\xi_1, \eta_1)|$.
As a simple multivariate example, let the covariance matrix of the random vector $(\xi_1, \xi_2, \ldots, \xi_p, \eta_1, \eta_2, \ldots, \eta_q)$ be given by the Kac–Murdock–Szegö matrix
$$\begin{pmatrix} R_\xi & R_{\xi\eta} \\ R_{\xi\eta}^\top & R_\eta \end{pmatrix} = \left(\rho^{|i-j|}\right)_{i,j=1}^{p+q}$$
which is related to the covariance function of a first-order autoregressive process, where 0 < | ρ | < 1 . Then, r = rank ( R ξ η ) = 1 and ϱ 1 = | ρ | .
(iv) As yet another example, assume $p = q$ and $R_{\xi\eta} = \rho R_\xi^{1/2} R_\eta^{1/2}$ for some $0 < |\rho| < 1$. Then, $\varrho_i = |\rho|$ for $i = 1, 2, \ldots, r = q$. Here, $A^{1/2}$ denotes the square root of the real-valued positive semidefinite matrix $A$, i.e., the unique positive semidefinite matrix $B$ such that $BB = A$.
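As a quick numerical illustration of case (iii), the following Python sketch (ours, with hypothetical variable names) builds the Kac–Murdock–Szegö covariance matrix and confirms that the cross-covariance block has rank 1:

```python
# Sketch: the cross-covariance block of the Kac-Murdock-Szegoe matrix has rank 1.
import numpy as np

p, q, rho = 3, 4, 0.5
n = p + q
R = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))   # rho^{|i-j|}
R_cross = R[:p, p:]                                                # cross-covariance block

print(np.linalg.matrix_rank(R_cross))   # -> 1; the single canonical correlation is |rho|
```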

2.2. More on Special Cases with Simplified Formulas

Let us further evaluate the formulas given in Corollary 1 and Theorem 3 for some relevant parameter values.
(i) Single canonical correlation coefficient. In the simplest case, there is only a single non-zero canonical correlation coefficient, i.e., $r = 1$. (Recall that at the beginning of the paper we have excluded the degenerate case when all canonical correlations are zero.) Then, the formulas of the PDF and the m-th central moment in Corollary 1 simplify to the form
$$f_{i(\xi;\eta)}(x) = \frac{1}{\varrho_1 \pi} K_0\!\left(\frac{|x-I(\xi;\eta)|}{\varrho_1}\right), \quad x \in \mathbb{R}\setminus\{I(\xi;\eta)\},$$
and
$$\mathrm{E}\big([i(\xi;\eta)-I(\xi;\eta)]^m\big) = \begin{cases} \left(\dfrac{m!}{(m/2)!}\right)^2 \left(\dfrac{\varrho_1}{2}\right)^m & \text{if } m = 2\tilde{m} \\ 0 & \text{if } m = 2\tilde{m}-1, \end{cases}$$
for all m ˜ N . A formula equivalent to (10) is also provided by Pinsker [8], Lemma 9.6.1 who considered the special case p = q = 1 , which implies r = 1 .
(ii) Second and fourth central moment. To demonstrate how the general formula given in Theorem 3 is used, we first consider m = 2 . In this case, the summation indices m 1 , m 2 , , m r have to satisfy m i = 1 for a single i { 1 , 2 , , r } , whereas the remaining m i ’s have to be zero. Thus, (5) evaluates for m = 2 to
$$\mathrm{E}\big([i(\xi;\eta)-I(\xi;\eta)]^2\big) = \operatorname{var}\big(i(\xi;\eta)\big) = \sum_{i=1}^{r} \varrho_i^2. \tag{11}$$
As a slightly more complex example, let m = 4 . In this case, either we have m i = 2 for a single i { 1 , 2 , , r } , whereas the remaining m i ’s are zero or we have m i 1 = m i 2 = 1 for two i 1 i 2 { 1 , 2 , , r } , whereas the remaining m i ’s have to be zero. Thus, (5) evaluates for m = 4 to
$$\mathrm{E}\big([i(\xi;\eta)-I(\xi;\eta)]^4\big) = 9\sum_{i=1}^{r} \varrho_i^4 + 6\sum_{i=2}^{r}\sum_{j=1}^{i-1} \varrho_i^2 \varrho_j^2.$$
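The general sum in Theorem 3 can also be evaluated directly by enumerating the multi-indices in $K^{[2]}_{m,r}$; the following Python sketch (our own illustration) does this for small $m$ and cross-checks the result against (11) and the fourth-moment expression above:

```python
# Sketch: brute-force evaluation of the central-moment formula of Theorem 3.
from itertools import product
from math import factorial

def central_moment(m, rho):
    """m-th central moment of the information density for canonical correlations rho."""
    if m % 2 == 1:
        return 0.0
    r, total = len(rho), 0.0
    for idx in product(range(m // 2 + 1), repeat=r):   # candidate (m_1, ..., m_r)
        if sum(idx) != m // 2:
            continue
        term = factorial(m)
        for mi, rho_i in zip(idx, rho):
            term *= factorial(2 * mi) / (4**mi * factorial(mi)**2) * rho_i**(2 * mi)
        total += term
    return total

rho = [0.9, 0.7, 0.3]
print(central_moment(2, rho), sum(r_**2 for r_ in rho))   # matches the variance (11)
print(central_moment(4, rho))                             # matches the fourth-moment formula
```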
(iii) Even number of equal canonical correlations. As in Corollary 1, assume that all canonical correlations are equal and additionally assume that the number r of canonical correlations is even, i.e., $r = 2\tilde{r}$ for some $\tilde{r} \in \mathbb{N}$. Then, we can use [9], Secs. 10.47.9, 10.49.1, and 10.49.12 to obtain the following relation for the modified Bessel function $K_\alpha(\cdot)$ of the second kind and order $\alpha$:
$$K_{\frac{r-1}{2}}(y) = \sqrt{\frac{\pi}{2}}\, \exp(-y) \sum_{i=0}^{r/2-1} \frac{(r/2-1+i)!}{(r/2-1-i)!\, i!\, 2^i}\, y^{-(i+\frac{1}{2})}, \quad y \in (0,\infty). \tag{12}$$
Plugging (12) into (6) and rearranging terms yields the following expression for the PDF of the information density:
$$f_{i(\xi;\eta)}(x) = \frac{1}{\varrho_r\, 2^{r-1} (r/2-1)!} \exp\!\left(-\frac{|x-I(\xi;\eta)|}{\varrho_r}\right) \times \sum_{i=0}^{r/2-1} \frac{\big(2(r/2-1)-i\big)!\, 2^i}{(r/2-1-i)!\, i!} \left(\frac{|x-I(\xi;\eta)|}{\varrho_r}\right)^{i}, \quad x \in \mathbb{R}.$$
By integration, we obtain for the function V ( · ) in (8) the expression
$$V(z) = \frac{1}{2} - \frac{1}{2^{r-1} (r/2-1)!} \exp\!\left(-\frac{z}{\varrho_r}\right) \times \sum_{i=0}^{r/2-1} \frac{\big(2(r/2-1)-i\big)!\, 2^i}{(r/2-1-i)!} \sum_{j=0}^{i} \frac{1}{(i-j)!} \left(\frac{z}{\varrho_r}\right)^{i-j}, \quad z \geq 0.$$
Note that these special formulas can also be obtained directly from the results given in [14], Sec. 4.A.3.
To illustrate the principal behavior of the PDF and CDF of the information density for equal canonical correlations, it is instructive to consider the specific value r = 2 in the above formulas, which yields
$$f_{i(\xi;\eta)}(x) = \frac{1}{2\varrho_r} \exp\!\left(-\frac{|x-I(\xi;\eta)|}{\varrho_r}\right), \quad x \in \mathbb{R}, \qquad V(z) = \frac{1}{2}\left(1 - \exp\!\left(-\frac{z}{\varrho_r}\right)\right), \quad z \geq 0,$$
and r = 4 , for which we obtain
$$f_{i(\xi;\eta)}(x) = \frac{1}{4\varrho_r} \exp\!\left(-\frac{|x-I(\xi;\eta)|}{\varrho_r}\right)\left(1 + \frac{|x-I(\xi;\eta)|}{\varrho_r}\right), \quad x \in \mathbb{R}, \qquad V(z) = \frac{1}{2}\left[1 - \exp\!\left(-\frac{z}{\varrho_r}\right)\left(1 + \frac{z}{2\varrho_r}\right)\right], \quad z \geq 0.$$
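As a sanity check of these closed-form expressions for $r = 2$ and $r = 4$, the short Python sketch below (ours) verifies numerically that the stated PDFs integrate to one:

```python
# Sketch: the closed-form PDFs for r = 2 and r = 4 integrate to 1.
import numpy as np
from scipy.integrate import quad

rho, I = 0.8, 1.0      # example values; I stands for I(xi; eta)

pdf_r2 = lambda x: np.exp(-abs(x - I) / rho) / (2 * rho)
pdf_r4 = lambda x: np.exp(-abs(x - I) / rho) * (1 + abs(x - I) / rho) / (4 * rho)

# split the integral at the kink x = I for reliable quadrature
integrate = lambda f: quad(f, -np.inf, I)[0] + quad(f, I, np.inf)[0]
print(integrate(pdf_r2))   # -> 1.0
print(integrate(pdf_r4))   # -> 1.0
```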

3. Mutual Information and Information Density in Terms of Canonical Correlations

First introduced by Hotelling [18], the canonical correlation analysis is a widely used linear method in multivariate statistics to determine the maximum correlations between two sets of random variables. It allows a particularly simple and useful representation of the mutual information and the information density of Gaussian random vectors in terms of the so-called canonical correlations. This representation was first obtained by Gelfand and Yaglom [19] and further extended by Pinsker [8], Ch. 9. For the convenience of the reader, we summarize in this section the essence of the canonical correlation analysis and demonstrate how it is applied to derive the representations in (1) and (2).
The formulation of the canonical correlation analysis given below is particularly suitable for implementations. The corresponding results are given without proof. Details and thorough discussions can be found, e.g., in Härdle and Simar [20], Koch [21], or Timm [22].
Based on the nonsingular covariance matrices $R_\xi$ and $R_\eta$ of the random vectors $\xi = (\xi_1, \xi_2, \ldots, \xi_p)$ and $\eta = (\eta_1, \eta_2, \ldots, \eta_q)$, and the cross-covariance matrix $R_{\xi\eta}$ with rank $r = \operatorname{rank}(R_{\xi\eta})$ satisfying $0 \leq r \leq \min\{p,q\}$, define the matrix
$$M = R_\xi^{-\frac{1}{2}} R_{\xi\eta} R_\eta^{-\frac{1}{2}},$$
where the inverse matrices $R_\xi^{-1/2} = \big(R_\xi^{1/2}\big)^{-1}$ and $R_\eta^{-1/2} = \big(R_\eta^{1/2}\big)^{-1}$ can be obtained from diagonalizing $R_\xi$ and $R_\eta$. Then, the matrix $M$ has a singular value decomposition
$$M = U D V^T,$$
where $V^T$ denotes the transpose of $V$. The only non-zero entries $d_{1,1}, d_{2,2}, \ldots, d_{r,r} > 0$ of the $p \times q$ matrix $D = (d_{i,j})$ are called canonical correlations of $\xi$ and $\eta$, denoted by $\varrho_i = d_{i,i}$, $i = 1, 2, \ldots, r$. The singular value decomposition can be chosen such that $\varrho_1 \geq \varrho_2 \geq \cdots \geq \varrho_r$ holds, which is assumed throughout the paper.
Define the random vectors
$$\hat{\xi} = (\hat{\xi}_1, \hat{\xi}_2, \ldots, \hat{\xi}_p) = A\xi \quad \text{and} \quad \hat{\eta} = (\hat{\eta}_1, \hat{\eta}_2, \ldots, \hat{\eta}_q) = B\eta,$$
where the nonsingular matrices $A$ and $B$ are given by
$$A = U^T R_\xi^{-\frac{1}{2}} \quad \text{and} \quad B = V^T R_\eta^{-\frac{1}{2}}.$$
Then, the random variables $\hat{\xi}_1, \hat{\xi}_2, \ldots, \hat{\xi}_p, \hat{\eta}_1, \hat{\eta}_2, \ldots, \hat{\eta}_q$ have unit variance and they are pairwise uncorrelated with the exception of the pairs $(\hat{\xi}_i, \hat{\eta}_i)$, $i = 1, 2, \ldots, r$, for which we have $\operatorname{cor}(\hat{\xi}_i, \hat{\eta}_i) = \varrho_i$.
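For readers who want to reproduce this construction numerically, the following Python sketch (our own code; the paper's implementation is the GitLab repository [26]) computes the canonical correlations from $R_\xi$, $R_\eta$, and $R_{\xi\eta}$ via the singular value decomposition of $M$:

```python
# Sketch of the canonical correlation analysis described above.
import numpy as np

def inv_sqrt(R):
    """Inverse square root of a symmetric positive definite matrix via diagonalization."""
    w, Q = np.linalg.eigh(R)
    return Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

def canonical_correlations(R_x, R_y, R_xy):
    M = inv_sqrt(R_x) @ R_xy @ inv_sqrt(R_y)
    s = np.linalg.svd(M, compute_uv=False)   # singular values in descending order
    return s[s > 1e-12]                      # the r positive canonical correlations

# Example: case (iv) of Section 2.1 with cross-covariance rho * R_xi^{1/2} * R_eta^{1/2}
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)); R_x = A @ A.T + np.eye(3)
B = rng.standard_normal((3, 3)); R_y = B @ B.T + np.eye(3)
sqrtm = lambda R: (lambda w, Q: Q @ np.diag(np.sqrt(w)) @ Q.T)(*np.linalg.eigh(R))
print(canonical_correlations(R_x, R_y, 0.6 * sqrtm(R_x) @ sqrtm(R_y)))   # -> [0.6, 0.6, 0.6]
```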
Using these results, we obtain for the mutual information and the information density
$$I(\xi;\eta) = I(A\xi;B\eta) = I(\hat{\xi};\hat{\eta}) = \sum_{i=1}^{r} I(\hat{\xi}_i;\hat{\eta}_i) \tag{13}$$
$$i(\xi;\eta) = i(A\xi;B\eta) = i(\hat{\xi};\hat{\eta}) = \sum_{i=1}^{r} i(\hat{\xi}_i;\hat{\eta}_i) \quad (\text{P-almost surely}). \tag{14}$$
The first equality in (13) and (14) holds because A and B are nonsingular matrices, which follows, e.g., from Pinsker [8], Th. 3.7.1. Since we consider the case where ξ and η are jointly Gaussian, ξ ^ and η ^ are jointly Gaussian as well. Therefore, the correlation properties of ξ ^ and η ^ imply that all random variables ξ ^ i , η ^ j are independent except for the pairs ( ξ ^ i , η ^ i ) , i = 1 , 2 , , r . This implies the last equality in (13) and (14), where i ( ξ ^ 1 ; η ^ 1 ) , i ( ξ ^ 2 ; η ^ 2 ) , , i ( ξ ^ r ; η ^ r ) are independent. The sum representations follow from the chain rules of mutual information and information density and the equivalence between independence and vanishing mutual information and information density.
Since ξ ^ i and η ^ i are jointly Gaussian with correlation cor ( ξ ^ i , η ^ i ) = ϱ i , we obtain from (13) and the formula of mutual information for the bivariate Gaussian case the identity (2). Additionally, with ξ ^ i and η ^ i having zero mean and unit variance, the information density i ( ξ ^ i ; η ^ i ) is further given by
$$i(\hat{\xi}_i;\hat{\eta}_i) = -\frac{1}{2}\log(1-\varrho_i^2) - \frac{\varrho_i^2}{2(1-\varrho_i^2)}\left(\hat{\xi}_i^2 - \frac{2\hat{\xi}_i\hat{\eta}_i}{\varrho_i} + \hat{\eta}_i^2\right), \quad i = 1, 2, \ldots, r. \tag{15}$$
Now assume $\tilde{\xi}_1, \tilde{\xi}_2, \ldots, \tilde{\xi}_r, \tilde{\eta}_1, \tilde{\eta}_2, \ldots, \tilde{\eta}_r$ are i.i.d. Gaussian random variables with zero mean and unit variance. Then, the distribution of the random vector
$$\frac{1}{\sqrt{2}}\left(\sqrt{1+\varrho_i}\,\tilde{\xi}_i + \sqrt{1-\varrho_i}\,\tilde{\eta}_i,\ \sqrt{1+\varrho_i}\,\tilde{\xi}_i - \sqrt{1-\varrho_i}\,\tilde{\eta}_i\right)$$
coincides with the distribution of the random vector ( ξ ^ i , η ^ i ) for all i = 1 , 2 , , r . Plugging this into (15), we obtain together with (14) that the distribution of the information density i ( ξ ; η ) coincides with the distribution of (1).

4. Proof of Main Results

4.1. Auxiliary Results

To prove Theorem 1, the following lemma regarding the characteristic function of the information density is utilized. The results of the lemma are also used in Ibragimov and Rozanov [23] but without proof. Therefore, the proof is given below for completeness.
Lemma 1 
(Characteristic function of (shifted) information density). The characteristic function of the shifted information density i ( ξ ; η ) I ( ξ ; η ) is equal to the characteristic function of the random variable
$$\tilde{\nu} = \frac{1}{2}\sum_{i=1}^{r} \varrho_i\left(\tilde{\xi}_i^2 - \tilde{\eta}_i^2\right), \tag{16}$$
where $\tilde{\xi}_1, \tilde{\xi}_2, \ldots, \tilde{\xi}_r, \tilde{\eta}_1, \tilde{\eta}_2, \ldots, \tilde{\eta}_r$ are i.i.d. Gaussian random variables with zero mean and unit variance, and $\varrho_1, \varrho_2, \ldots, \varrho_r$ are the canonical correlations of $\xi$ and $\eta$. The characteristic function of $\tilde{\nu}$ is given by
$$\varphi_{\tilde{\nu}}(t) = \prod_{i=1}^{r} \frac{1}{\sqrt{1+\varrho_i^2 t^2}}, \quad t \in \mathbb{R}. \tag{17}$$
Proof. 
Due to (1), the distribution of the shifted information density i ( ξ ; η ) I ( ξ ; η ) coincides with the distribution of the random variable ν ˜ in (16) such that the characteristic functions of i ( ξ ; η ) I ( ξ ; η ) and ν ˜ are equal.
It is a well known fact that $\tilde{\xi}_i^2$ and $\tilde{\eta}_i^2$ in (16) are chi-squared distributed random variables with one degree of freedom, from which we obtain that the weighted random variables $\varrho_i\tilde{\xi}_i^2/2$ and $\varrho_i\tilde{\eta}_i^2/2$ are gamma distributed with shape parameter $1/2$ and rate parameter $1/\varrho_i$. The characteristic function of these random variables therefore admits the form
$$\varphi_{\varrho_i\tilde{\xi}_i^2/2}(t) = \left(1 - \varrho_i j t\right)^{-\frac{1}{2}}.$$
Further, from the identity $\varphi_{-\varrho_i\tilde{\eta}_i^2/2}(t) = \varphi_{\varrho_i\tilde{\eta}_i^2/2}(-t)$ for the characteristic function and from the independence of $\tilde{\xi}_i$ and $\tilde{\eta}_i$, we obtain the characteristic function of $\tilde{\nu}_i = \varrho_i(\tilde{\xi}_i^2 - \tilde{\eta}_i^2)/2$ to be given by
$$\varphi_{\tilde{\nu}_i}(t) = \left(1 - \varrho_i j t\right)^{-\frac{1}{2}}\left(1 + \varrho_i j t\right)^{-\frac{1}{2}} = \left(1 + \varrho_i^2 t^2\right)^{-\frac{1}{2}}.$$
Finally, because ν ˜ in (16) is given by the sum of the independent random variables ν ˜ i , the characteristic function of ν ˜ results from multiplying the individual characteristic functions of the random variables ν ˜ i . By doing so, we obtain (17). □
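As a brief numerical illustration of Lemma 1 (our own sketch, not needed for the proof), one can compare the empirical characteristic function of $\tilde{\nu}$ with the closed form (17):

```python
# Sketch: empirical vs. exact characteristic function of nu_tilde, cf. (17).
import numpy as np

rng = np.random.default_rng(2)
rho = np.array([0.9, 0.5, 0.2])
xi = rng.standard_normal((3, 500_000))
eta = rng.standard_normal((3, 500_000))
nu = 0.5 * (rho[:, None] * (xi**2 - eta**2)).sum(axis=0)

for t in (0.5, 1.0, 2.0):
    empirical = np.exp(1j * t * nu).mean().real
    exact = np.prod(1.0 / np.sqrt(1.0 + rho**2 * t**2))
    print(t, empirical, exact)
```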
As further auxiliary result, the subsequent proposition providing properties of the modified Bessel function K α of second kind and order α will be used to prove the main results.
Proposition 1
(Properties related to the function $K_\alpha$). For all $\alpha \in \mathbb{R}$, the function
$$y \mapsto y^\alpha K_\alpha(y), \quad y \in (0,\infty),$$
where $K_\alpha(\cdot)$ denotes the modified Bessel function of second kind and order $\alpha$ [9], Sec. 10.25(ii), is strictly positive and strictly monotonically decreasing. Furthermore, if $\alpha > 0$, then we have
$$\lim_{y \to +0} y^\alpha K_\alpha(y) = \sup_{y \in (0,\infty)} y^\alpha K_\alpha(y) = \Gamma(\alpha)\, 2^{\alpha-1}. \tag{18}$$
Proof. 
If α R is fixed, then K α ( y ) is strictly positive and strictly monotonically decreasing w. r. t. y ( 0 , ) due to [9], Secs. 10.27.3 and 10.37. Furthermore, we obtain
$$\frac{\mathrm{d}\big(y^\alpha K_\alpha(y)\big)}{\mathrm{d}y} = -y^\alpha K_{\alpha-1}(y), \quad y \in (0,\infty)$$
by applying the rules to calculate derivatives of Bessel functions given in [9], Sec. 10.29(ii). It follows that y α K α ( y ) is strictly positive and strictly monotonically decreasing w. r. t. y ( 0 , ) for all fixed α R .
Consider now the Basset integral formula as given in [9], Sec. 10.32.11
$$K_\alpha(yz) = \frac{\Gamma\!\left(\alpha+\frac{1}{2}\right)(2z)^\alpha}{y^\alpha \sqrt{\pi}} \int_{u=0}^{\infty} \frac{\cos(uy)}{\left(u^2+z^2\right)^{\alpha+\frac{1}{2}}}\, \mathrm{d}u \tag{19}$$
for $|\arg(z)| < \pi/2$, $y > 0$, $\alpha > -\frac{1}{2}$, and the integral
$$\int_{u=0}^{\infty} \frac{1}{\left(u^2+1\right)^{\alpha+\frac{1}{2}}}\, \mathrm{d}u = \frac{\sqrt{\pi}\,\Gamma(\alpha)}{2\,\Gamma\!\left(\alpha+\frac{1}{2}\right)} \tag{20}$$
for α > 0 , where the equality holds due to [24], Secs. 3.251.2 and 8.384.1. Using (19) and (20), we obtain
$$\lim_{y \to +0} y^\alpha K_\alpha(y) = \lim_{y \to +0} \frac{\Gamma\!\left(\alpha+\frac{1}{2}\right) 2^\alpha}{\sqrt{\pi}} \int_{u=0}^{\infty} \frac{\cos(uy)}{\left(u^2+1\right)^{\alpha+\frac{1}{2}}}\, \mathrm{d}u = \frac{\Gamma\!\left(\alpha+\frac{1}{2}\right) 2^\alpha}{\sqrt{\pi}} \int_{u=0}^{\infty} \frac{1}{\left(u^2+1\right)^{\alpha+\frac{1}{2}}}\, \mathrm{d}u = \Gamma(\alpha)\, 2^{\alpha-1},$$
for all α > 0 , where we also applied the dominated convergence theorem, which is possible due to cos ( u y ) / u 2 + 1 α + 1 / 2 1 / u 2 + 1 α + 1 / 2 . Using the previously derived monotonicity, we obtain (18). □

4.2. Proof of Theorem 1

To prove Theorem 1, we calculate the PDF f ν ˜ of the random variable ν ˜ introduced in Lemma 1 by inverting the characteristic function φ ν ˜ given in (17) via the integral
$$f_{\tilde{\nu}}(v) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \varphi_{\tilde{\nu}}(t) \exp(-j t v)\, \mathrm{d}t, \quad v \in \mathbb{R}. \tag{21}$$
Shifting the PDF of $\tilde{\nu}$ by $I(\xi;\eta)$, we obtain the PDF $f_{i(\xi;\eta)}(x) = f_{\tilde{\nu}}(x - I(\xi;\eta))$, $x \in \mathbb{R}$, of the information density $i(\xi;\eta)$.
The method used subsequently is based on the work of Mathai [10]. To invert the characteristic function φ ν ˜ , we expand the factors in (17) as
$$\left(1+\varrho_i^2 t^2\right)^{-\frac{1}{2}} = \left(1+\varrho_r^2 t^2\right)^{-\frac{1}{2}} \frac{\varrho_r}{\varrho_i} \left[1 + \left(\frac{\varrho_r^2}{\varrho_i^2}-1\right)\left(1+\varrho_r^2 t^2\right)^{-1}\right]^{-\frac{1}{2}} \tag{22}$$
$$= \left(1+\varrho_r^2 t^2\right)^{-\frac{1}{2}} \sum_{k=0}^{\infty} (-1)^k \binom{-1/2}{k} \frac{\varrho_r}{\varrho_i} \left(1-\frac{\varrho_r^2}{\varrho_i^2}\right)^{k} \left(1+\varrho_r^2 t^2\right)^{-k}. \tag{23}$$
In (23), we have used the binomial series
$$(1+y)^a = \sum_{k=0}^{\infty} \binom{a}{k} y^k \tag{24}$$
where $a \in \mathbb{R}$. The series is absolutely convergent for $|y| < 1$ and
$$\binom{a}{k} = \prod_{\ell=1}^{k} \frac{a-\ell+1}{\ell}, \quad k \in \mathbb{N}, \tag{25}$$
denotes the generalized binomial coefficient with $\binom{a}{0} = 1$. Since
$$\left|\left(1-\frac{\varrho_r^2}{\varrho_i^2}\right)\left(1+\varrho_r^2 t^2\right)^{-1}\right| < 1 \tag{26}$$
holds for all $t \in \mathbb{R}$, the series in (23) is absolutely convergent for all $t \in \mathbb{R}$. Using the expansion in (23) and the absolute convergence together with the identity
$$\binom{-1/2}{k} = (-1)^k \frac{(2k)!}{(k!)^2 4^k} \tag{27}$$
we can rewrite the characteristic function $\varphi_{\tilde{\nu}}$ as
$$\varphi_{\tilde{\nu}}(t) = \sum_{k_1=0}^{\infty}\cdots\sum_{k_{r-1}=0}^{\infty} \left[\prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i}\,\frac{(2k_i)!}{(k_i!)^2 4^{k_i}}\left(1-\frac{\varrho_r^2}{\varrho_i^2}\right)^{k_i}\right] \times \left(1+\varrho_r^2 t^2\right)^{-\left(\frac{r}{2}+k_1+\cdots+k_{r-1}\right)}, \quad t \in \mathbb{R}. \tag{28}$$
To obtain the PDF $f_{\tilde{\nu}}$, we evaluate the inversion integral (21) based on the series representation in (28). Since every series in (28) is absolutely convergent, we can exchange summation and integration. Let $\beta = \frac{r}{2}+k_1+k_2+\cdots+k_{r-1}$. Then, by symmetry, we have for the integral of a summand
$$\int_{t=-\infty}^{\infty} \frac{\exp(-jtv)}{\left(1+\varrho_r^2 t^2\right)^\beta}\, \mathrm{d}t = 2\int_{t=0}^{\infty} \frac{\cos(tv)}{\left(1+\varrho_r^2 t^2\right)^\beta}\, \mathrm{d}t = \frac{2}{\varrho_r}\int_{u=0}^{\infty} \frac{\cos(u v/\varrho_r)}{\left(1+u^2\right)^\beta}\, \mathrm{d}u, \tag{29}$$
where the second equality is a result of the substitution $t = u/\varrho_r$. By setting $z = 1$, $\alpha = \beta - \frac{1}{2} \geq 0$, and $y = |v|/\varrho_r$ in the Basset integral formula given in (19) in the proof of Proposition 1 and using the symmetry with respect to $v$, we can evaluate (29) to the following form:
$$\int_{t=-\infty}^{\infty} \frac{\exp(-jtv)}{\left(1+\varrho_r^2 t^2\right)^\beta}\, \mathrm{d}t = \frac{\sqrt{\pi}}{\Gamma(\beta)\, 2^{\beta-\frac{3}{2}}\, \varrho_r^{\beta+\frac{1}{2}}}\, K_{\beta-\frac{1}{2}}\!\left(\frac{|v|}{\varrho_r}\right) |v|^{\beta-\frac{1}{2}}, \quad v \in \mathbb{R}\setminus\{0\}. \tag{30}$$
Combining (21), (28), and (30) yields
$$f_{\tilde{\nu}}(v) = \frac{1}{2\sqrt{\pi}} \sum_{k_1=0}^{\infty}\cdots\sum_{k_{r-1}=0}^{\infty} \left[\prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i}\,\frac{(2k_i)!}{(k_i!)^2 4^{k_i}}\left(1-\frac{\varrho_r^2}{\varrho_i^2}\right)^{k_i}\right] \times \frac{K_{\frac{r-1}{2}+k_1+\cdots+k_{r-1}}\!\left(\frac{|v|}{\varrho_r}\right) |v|^{\frac{r-1}{2}+k_1+\cdots+k_{r-1}}}{\Gamma\!\left(\frac{r}{2}+k_1+\cdots+k_{r-1}\right)\, 2^{\frac{r-3}{2}+k_1+\cdots+k_{r-1}}\, \varrho_r^{\frac{r+1}{2}+k_1+\cdots+k_{r-1}}}, \quad v \in \mathbb{R}\setminus\{0\}. \tag{31}$$
Slightly rearranging terms and shifting f ν ˜ ( · ) by I ( ξ ; η ) yields (3).
It remains to show that f i ( ξ ; η ) ( x ) is also well defined for x = I ( ξ ; η ) if r 2 . Indeed, if r 2 , then we can use Proposition 1 to obtain
$$\lim_{x \to I(\xi;\eta)} f_{i(\xi;\eta)}(x) = \frac{1}{2\varrho_r\sqrt{\pi}} \sum_{k_1=0}^{\infty}\cdots\sum_{k_{r-1}=0}^{\infty} \left[\prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i}\,\frac{(2k_i)!}{(k_i!)^2 4^{k_i}}\left(1-\frac{\varrho_r^2}{\varrho_i^2}\right)^{k_i}\right] \times \frac{\Gamma\!\left(\frac{r-1}{2}+k_1+\cdots+k_{r-1}\right)}{\Gamma\!\left(\frac{r-1}{2}+k_1+\cdots+k_{r-1}+\frac{1}{2}\right)},$$
where we used the exchangeability of the limit and the summation due to the absolute convergence of the series. Since $\Gamma(\alpha)/\Gamma(\alpha+\frac{1}{2})$ is decreasing w. r. t. $\alpha \geq \frac{1}{2}$, we have
$$\frac{\Gamma\!\left(\frac{r-1}{2}+k_1+\cdots+k_{r-1}\right)}{\Gamma\!\left(\frac{r-1}{2}+k_1+\cdots+k_{r-1}+\frac{1}{2}\right)} \leq \frac{\Gamma\!\left(\frac{r-1}{2}\right)}{\Gamma\!\left(\frac{r-1}{2}+\frac{1}{2}\right)} \leq \sqrt{\pi}.$$
Then, with (69) in the proof of Theorem 4, it follows that lim x I ( ξ ; η ) f i ( ξ ; η ) ( x ) exists and is finite. □

4.3. Proof of Theorem 2

To prove Theorem 2, we calculate the CDF $F_{\tilde{\nu}}$ of the random variable $\tilde{\nu}$ introduced in Lemma 1 by integrating the PDF $f_{\tilde{\nu}}$ given in (31). Shifting the CDF of $\tilde{\nu}$ by $I(\xi;\eta)$, we obtain the CDF $F_{i(\xi;\eta)}(x) = F_{\tilde{\nu}}(x - I(\xi;\eta))$, $x \in \mathbb{R}$, of the information density $i(\xi;\eta)$. Using the symmetry of $f_{\tilde{\nu}}$, we can write
$$F_{\tilde{\nu}}(z) = \mathrm{P}(\tilde{\nu} \leq z) = \begin{cases} \dfrac{1}{2} - \displaystyle\int_{v=0}^{-z} f_{\tilde{\nu}}(v)\, \mathrm{d}v & \text{for } z \leq 0 \\[2ex] \dfrac{1}{2} + \displaystyle\int_{v=0}^{z} f_{\tilde{\nu}}(v)\, \mathrm{d}v & \text{for } z > 0. \end{cases}$$
It is therefore sufficient to evaluate the integral
$$V(z) := \int_{v=0}^{z} f_{\tilde{\nu}}(v)\, \mathrm{d}v \tag{32}$$
for $z \geq 0$. To calculate the integral (32), we plug (31) into (32) and exchange integration and summation, which is justified by the monotone convergence theorem. To evaluate the integral of a summand, consider the following identity
$$\int_{x=0}^{z} x^\alpha K_\alpha(x)\, \mathrm{d}x = 2^{\alpha-1}\sqrt{\pi}\,\Gamma\!\left(\alpha+\tfrac{1}{2}\right) z \left[K_\alpha(z)\, L_{\alpha-1}(z) + K_{\alpha-1}(z)\, L_\alpha(z)\right] \tag{33}$$
for $\alpha > -1/2$ given in [25], Sec. 1.12.1.3, where $L_\alpha(\cdot)$ denotes the modified Struve L function of order $\alpha$ [9], Sec. 11.2. Using (33) with $\alpha = \frac{r-1}{2}+k_1+k_2+\cdots+k_{r-1} \geq 0$, we obtain (4). □

4.4. Proof of Theorem 3

Using the random variable
$$\tilde{\nu} = \sum_{i=1}^{r} \tilde{\nu}_i \quad \text{with} \quad \tilde{\nu}_i = \frac{\varrho_i}{2}\left(\tilde{\xi}_i^2 - \tilde{\eta}_i^2\right)$$
introduced in Lemma 1 and the well-known multinomial theorem [9], Sec. 26.4.9
$$\left(y_1 + y_2 + \cdots + y_r\right)^m = \sum_{(\ell_1,\ell_2,\ldots,\ell_r) \in K_{m,r}} m! \prod_{i=1}^{r} \frac{y_i^{\ell_i}}{\ell_i!},$$
where $K_{m,r} = \{(\ell_1,\ell_2,\ldots,\ell_r) \in \mathbb{N}_0^r : \ell_1+\ell_2+\cdots+\ell_r = m\}$, we can write the m-th central moment of the information density $i(\xi;\eta)$ as
$$\mathrm{E}\big([i(\xi;\eta)-I(\xi;\eta)]^m\big) = \mathrm{E}\left(\left(\sum_{i=1}^{r}\tilde{\nu}_i\right)^{m}\right) = \sum_{(\ell_1,\ell_2,\ldots,\ell_r) \in K_{m,r}} m! \prod_{i=1}^{r} \frac{\mathrm{E}\big(\tilde{\nu}_i^{\ell_i}\big)}{\ell_i!}. \tag{34}$$
To obtain the second equality in (34), we have exchanged expectation and summation and additionally used the identity $\mathrm{E}\big(\prod_{i=1}^{r}\tilde{\nu}_i^{\ell_i}\big) = \prod_{i=1}^{r}\mathrm{E}\big(\tilde{\nu}_i^{\ell_i}\big)$, which holds due to the independence of the random variables $\tilde{\nu}_1, \tilde{\nu}_2, \ldots, \tilde{\nu}_r$.
Based on the relation between the $\ell$-th central moment of a random variable and the $\ell$-th derivative of its characteristic function at 0, we further have
$$\mathrm{E}\big(\tilde{\nu}_i^{\ell_i}\big) = (-j)^{\ell_i} \left.\frac{\mathrm{d}^{\ell_i}}{\mathrm{d}t^{\ell_i}} \varphi_{\tilde{\nu}_i}(t)\right|_{t=0}, \tag{35}$$
where $\varphi_{\tilde{\nu}_i}(t) = \left(1+\varrho_i^2 t^2\right)^{-1/2}$, $t \in \mathbb{R}$, is the characteristic function of the random variable $\tilde{\nu}_i$ derived in the proof of Lemma 1. As in the proof of Theorem 1, consider now the binomial series expansion using (24)
$$\varphi_{\tilde{\nu}_i}(t) = \left(1+\varrho_i^2 t^2\right)^{-\frac{1}{2}} = \sum_{m_i=0}^{\infty} \binom{-1/2}{m_i} (\varrho_i t)^{2m_i}.$$
The series is absolutely convergent for all $|t| < \varrho_i^{-1}$. Furthermore, consider the Taylor series expansion of the characteristic function $\varphi_{\tilde{\nu}_i}$ at the point 0
$$\varphi_{\tilde{\nu}_i}(t) = \sum_{\ell_i=0}^{\infty} \left.\frac{\mathrm{d}^{\ell_i}}{\mathrm{d}t^{\ell_i}} \varphi_{\tilde{\nu}_i}(t)\right|_{t=0} \frac{t^{\ell_i}}{\ell_i!}.$$
Both series expansions must be identical in an open interval around 0 such that we obtain by comparing the series coefficients
$$\left.\frac{\mathrm{d}^{\ell_i}}{\mathrm{d}t^{\ell_i}} \varphi_{\tilde{\nu}_i}(t)\right|_{t=0} = \begin{cases} \ell_i!\, \dbinom{-1/2}{\ell_i/2}\, \varrho_i^{\ell_i} & \text{if } \ell_i = 2m_i \\ 0 & \text{if } \ell_i = 2m_i-1 \end{cases}$$
for all $m_i \in \mathbb{N}$. With this result, (35) evaluates to
$$\mathrm{E}\big(\tilde{\nu}_i^{\ell_i}\big) = \begin{cases} \dfrac{(\ell_i!)^2}{((\ell_i/2)!)^2\, 4^{\ell_i/2}}\, \varrho_i^{\ell_i} & \text{if } \ell_i = 2m_i \\ 0 & \text{if } \ell_i = 2m_i-1 \end{cases} \tag{36}$$
for all $m_i \in \mathbb{N}$, where we have additionally used the identity (27).
From (34) and (36) we now obtain $\mathrm{E}\big([i(\xi;\eta)-I(\xi;\eta)]^m\big) = 0$ for all $m = 2\tilde{m}-1$ with $\tilde{m} \in \mathbb{N}$ because, if $m$ is odd, then for all $(\ell_1,\ell_2,\ldots,\ell_r) \in K_{m,r}$ at least one of the $\ell_i$'s has to be odd. If $m = 2\tilde{m}$ with $\tilde{m} \in \mathbb{N}$, we obtain from (34) and (36)
$$\mathrm{E}\big([i(\xi;\eta)-I(\xi;\eta)]^m\big) = \sum_{(\ell_1,\ldots,\ell_r) \in K_{m,r}} m! \prod_{i=1}^{r} \frac{1}{\ell_i!}\,\frac{(\ell_i!)^2}{((\ell_i/2)!)^2\, 4^{\ell_i/2}}\, \varrho_i^{\ell_i} = \sum_{(m_1,m_2,\ldots,m_r) \in K^{[2]}_{m,r}} m! \prod_{i=1}^{r} \frac{(2m_i)!}{(m_i!)^2\, 4^{m_i}}\, \varrho_i^{2m_i}.$$
 □

4.5. Proof of Part (iii) of Corollary 1

Using the random variable $\tilde{\nu}$ as in the proof of Theorem 3, we can write the m-th central moment of the information density $i(\xi;\eta)$ as
$$\mathrm{E}\big([i(\xi;\eta)-I(\xi;\eta)]^m\big) = \mathrm{E}\big(\tilde{\nu}^m\big) = (-j)^m \left.\frac{\mathrm{d}^m}{\mathrm{d}t^m} \varphi_{\tilde{\nu}}(t)\right|_{t=0},$$
where the characteristic function $\varphi_{\tilde{\nu}}$ of $\tilde{\nu}$ is given by $\varphi_{\tilde{\nu}}(t) = \left(1+\varrho_r^2 t^2\right)^{-r/2}$, $t \in \mathbb{R}$, due to Lemma 1 and the equality of all canonical correlations. Using the binomial series and the Taylor series expansion as in the proof of Theorem 3, we obtain
$$\left.\frac{\mathrm{d}^m}{\mathrm{d}t^m} \varphi_{\tilde{\nu}}(t)\right|_{t=0} = \begin{cases} m!\, \dbinom{-r/2}{m/2}\, \varrho_r^m & \text{if } m = 2\tilde{m} \\ 0 & \text{if } m = 2\tilde{m}-1 \end{cases}$$
for all m ˜ N . Collecting terms and additionally using the definition of the generalized binomial coefficient given in (25) in the proof of Theorem 1 yields (9). □

5. Recurrence Formulas and Finite Sum Approximations

If there are at least two distinct canonical correlations, then the PDF f i ( ξ ; η ) and CDF F i ( ξ ; η ) of the information density i ( ξ ; η ) are given by the infinite series in Theorems 1 and 2. If we consider only a finite number of summands in these representations, then we obtain approximations amenable in particular for numerical calculations. However, a direct finite sum approximation of the series in (3) and (4) is rather inefficient since modified Bessel and Struve L functions have to be evaluated for every summand. Therefore, we derive in this section recursive representations, which allow efficient numerical calculations. Furthermore, we derive uniform bounds of the approximation error. Based on the recurrence relations and the error bounds, an implementation in the programming language Python has been developed, which provides an efficient tool to numerically calculate the PDF and CDF of the information density with a predefined accuracy as high as desired. The developed source code as well as illustrating examples are made publicly available in an open access repository on GitLab [26].
Subsequently, we adopt all the previous notation and assume r 2 and at least two distinct canonical correlations (since otherwise we have the case of Corollary 1, where the series reduce to a single summand).

5.1. Recurrence Formulas

The recursive approach developed below is based on the work of Moschopoulos [27], which extended the work of Mathai [10]. First, we rewrite the series representations of the PDF and CDF of the information density given in Theorem 1 and Theorem 2 in a form, which is suitable for recursive calculations. To begin with, we define two functions appearing in the series representations (3) and (4), which involve the modified Bessel function K α of second kind and order α and the modified Struve L function L α of order α . Let us define for all k N 0 the functions U k and D k by
$$U_k(z) = \frac{K_{\frac{r-1}{2}+k}(z)}{\Gamma\!\left(\frac{r}{2}+k\right)} \left(\frac{z}{2}\right)^{\frac{r-1}{2}+k}, \quad z \geq 0, \tag{37}$$
and
$$D_k(z) = \frac{z}{2\varrho_r}\left[K_{\frac{r-1}{2}+k}\!\left(\tfrac{z}{\varrho_r}\right) L_{\frac{r-3}{2}+k}\!\left(\tfrac{z}{\varrho_r}\right) + K_{\frac{r-3}{2}+k}\!\left(\tfrac{z}{\varrho_r}\right) L_{\frac{r-1}{2}+k}\!\left(\tfrac{z}{\varrho_r}\right)\right], \quad z \geq 0. \tag{38}$$
Furthermore, we define for all $k \in \mathbb{N}_0$ the coefficient $\delta_k$ by
$$\delta_k = \sum_{(k_1,k_2,\ldots,k_{r-1}) \in K_{k,r-1}} \prod_{i=1}^{r-1} \frac{(2k_i)!}{(k_i!)^2 4^{k_i}}\left(1-\frac{\varrho_r^2}{\varrho_i^2}\right)^{k_i}, \tag{39}$$
where $K_{k,r-1} = \{(k_1,k_2,\ldots,k_{r-1}) \in \mathbb{N}_0^{r-1} : k_1+k_2+\cdots+k_{r-1} = k\}$. With these definitions, we obtain the following alternative series representations of (3) and (4) by observing that the multiple summations over the indices $k_1, k_2, \ldots, k_{r-1}$ can be shortened to one summation over the index $k = k_1+k_2+\cdots+k_{r-1}$.
Proposition 2
(Alternative representation of PDF and CDF of the information density). The PDF f i ( ξ ; η ) of the information density i ( ξ ; η ) given in Theorem 1 has the alternative series representation
$$f_{i(\xi;\eta)}(x) = \frac{1}{\varrho_r\sqrt{\pi}} \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i} \sum_{k=0}^{\infty} \delta_k\, U_k\!\left(\frac{|x-I(\xi;\eta)|}{\varrho_r}\right), \quad x \in \mathbb{R}. \tag{40}$$
The function $V(\cdot)$ specifying the CDF $F_{i(\xi;\eta)}$ of the information density $i(\xi;\eta)$ as given in Theorem 2 has the alternative series representation
$$V(z) = \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i} \sum_{k=0}^{\infty} \delta_k\, D_k(z), \quad z \geq 0. \tag{41}$$
Based on the representations in Proposition 2 and with recursive formulas for U k ( · ) , D k ( · ) and δ k , we are in the position to calculate the PDF and CDF of the information density by a single summation over completely recursively defined terms. In the following, we will derive recurrence relations for U k ( · ) , D k ( · ) and δ k , which allow the desired efficient calculations.
Lemma 2
(Recurrence formula of the function U k ). If for all k N 0 the function U k is defined by (37), then U k ( z ) satisfies for all k 2 and z 0 the recurrence formula
$$U_k(z) = \frac{z^2}{(r+2k-2)(r+2k-4)}\, U_{k-2}(z) + \frac{r+2k-3}{r+2k-2}\, U_{k-1}(z). \tag{42}$$
Proof. 
First, assume $z = 0$. Based on Proposition 1, we obtain for all $k \in \mathbb{N}_0$
$$\lim_{z \to +0} U_k(z) = \frac{\Gamma\!\left(\frac{r-1}{2}+k\right)}{2\,\Gamma\!\left(\frac{r}{2}+k\right)}, \tag{43}$$
such that $U_k(0)$ is well defined and finite. Using the recurrence relation $\Gamma(y+1) = y\,\Gamma(y)$ for the Gamma function [24], Sec. 8.331.1 we have
$$\frac{\Gamma\!\left(\frac{r-1}{2}+k\right)}{2\,\Gamma\!\left(\frac{r}{2}+k\right)} = \frac{\frac{r-1}{2}+k-1}{\frac{r}{2}+k-1} \cdot \frac{\Gamma\!\left(\frac{r-1}{2}+k-1\right)}{2\,\Gamma\!\left(\frac{r}{2}+k-1\right)}.$$
This shows together with (43) that the recurrence formula (42) holds for U k ( 0 ) and k 2 .
Now, assume $z > 0$ and consider the recurrence formula
$$z\, K_\alpha(z) = z\, K_{\alpha-2}(z) + 2(\alpha-1)\, K_{\alpha-1}(z) \tag{44}$$
for the modified Bessel function of the second kind and order $\alpha$ [24], Sec. 8.486.10. Plugging (44) into (37) for $\alpha = \frac{r-1}{2}+k$ yields for $k \geq 2$
$$U_k(z) = \frac{K_{\frac{r-1}{2}+k-2}(z)}{\Gamma\!\left(\frac{r}{2}+k\right)} \left(\frac{z}{2}\right)^{\frac{r-1}{2}+k-2} \left(\frac{z}{2}\right)^{2} + \left(\frac{r-1}{2}+k-1\right) \frac{K_{\frac{r-1}{2}+k-1}(z)}{\Gamma\!\left(\frac{r}{2}+k\right)} \left(\frac{z}{2}\right)^{\frac{r-1}{2}+k-1}. \tag{45}$$
Using again the relation $\Gamma(y+1) = y\,\Gamma(y)$, we obtain
$$\Gamma\!\left(\tfrac{r}{2}+k\right) = \left(\tfrac{r}{2}+k-1\right)\Gamma\!\left(\tfrac{r}{2}+k-1\right) = \left(\tfrac{r}{2}+k-1\right)\left(\tfrac{r}{2}+k-2\right)\Gamma\!\left(\tfrac{r}{2}+k-2\right),$$
which yields together with (45) and (37) the recurrence formula (42) for U k ( z ) if z > 0 and k 2 . □
Lemma 3
(Recurrence formula of the function $D_k$). If, for all $k \in \mathbb{N}_0$, the function $D_k$ is defined by (38), then $D_k(z)$ satisfies for all $k \geq 1$ and $z \geq 0$ the recurrence formula
$$D_k(z) = D_{k-1}(z) - \frac{1}{2\sqrt{\pi}\left(\frac{r}{2}+k-1\right)}\, \frac{z}{\varrho_r}\, U_{k-1}\!\left(\frac{z}{\varrho_r}\right), \tag{46}$$
with U k ( · ) as defined in (37).
Proof. 
First, assume $z = 0$. We have $D_k(0) = 0$ for all $k \in \mathbb{N}_0$ and from the proof of Lemma 2 we have $U_k(0) = \Gamma\!\left(\frac{r-1}{2}+k\right)/\big(2\,\Gamma\!\left(\frac{r}{2}+k\right)\big)$ for all $k \in \mathbb{N}_0$. Thus, the left-hand side and the right-hand side of (46) are both zero, which shows that (46) holds for $z = 0$ and $k \geq 1$.
Now, assume $z > 0$ and consider the recurrence formula
$$z\, L_\alpha(z) = z\, L_{\alpha-2}(z) - 2(\alpha-1)\, L_{\alpha-1}(z) - \frac{2^{1-\alpha} z^\alpha}{\sqrt{\pi}\,\Gamma\!\left(\alpha+\frac{1}{2}\right)}$$
for the modified Struve L function of order $\alpha$ [9], Sec. 11.4.25. Together with the recurrence formula (44) for the modified Bessel function of the second kind and order $\alpha$, we obtain
$$z\, L_\alpha(z)\, K_{\alpha-1}(z) = z\, L_{\alpha-2}(z)\, K_{\alpha-1}(z) - 2(\alpha-1)\, L_{\alpha-1}(z)\, K_{\alpha-1}(z) - \frac{2^{1-\alpha} z^\alpha}{\sqrt{\pi}\,\Gamma\!\left(\alpha+\frac{1}{2}\right)}\, K_{\alpha-1}(z), \tag{47}$$
$$z\, K_\alpha(z)\, L_{\alpha-1}(z) = z\, K_{\alpha-2}(z)\, L_{\alpha-1}(z) + 2(\alpha-1)\, K_{\alpha-1}(z)\, L_{\alpha-1}(z). \tag{48}$$
Plugging (47) and (48) into (38) for $\alpha = \frac{r-1}{2}+k$ yields for $k \geq 1$
$$D_k(z) = \frac{z}{2\varrho_r}\left[K_{\frac{r-1}{2}+k-1}\!\left(\tfrac{z}{\varrho_r}\right) L_{\frac{r-3}{2}+k-1}\!\left(\tfrac{z}{\varrho_r}\right) + K_{\frac{r-3}{2}+k-1}\!\left(\tfrac{z}{\varrho_r}\right) L_{\frac{r-1}{2}+k-1}\!\left(\tfrac{z}{\varrho_r}\right)\right] - \frac{1}{\sqrt{\pi}\,\Gamma\!\left(\frac{r}{2}+k\right)} \left(\frac{z}{2\varrho_r}\right)^{\frac{r-1}{2}+k} K_{\frac{r-1}{2}+k-1}\!\left(\frac{z}{\varrho_r}\right).$$
Together with (38), the identity $\Gamma\!\left(\frac{r}{2}+k\right) = \left(\frac{r}{2}+k-1\right)\Gamma\!\left(\frac{r}{2}+k-1\right)$, and the definition of the function $U_k(\cdot)$ in (37), we obtain the recurrence formula (46) for $D_k(z)$ if $z > 0$ and $k \geq 1$. □
Lemma 4
(Recursive formula of the coefficient $\delta_k$). The coefficient $\delta_k$ defined by (39) satisfies for all $k \in \mathbb{N}_0$ the recurrence formula
$$\delta_{k+1} = \frac{1}{k+1}\sum_{j=1}^{k+1} j\,\gamma_j\,\delta_{k+1-j}, \tag{49}$$
where $\delta_0 = 1$ and
$$\gamma_j = \sum_{i=1}^{r-1} \frac{1}{2j}\left(1-\frac{\varrho_r^2}{\varrho_i^2}\right)^{j}. \tag{50}$$
For the derivation of Lemma 4, we use an adapted version of the method of Moschopoulos [27] and the following auxiliary result.
Lemma 5.
For $k \in \mathbb{N}_0$, let $g$ be a real univariate $(k+1)$-times differentiable function. Then, we have the following recurrence relation for the $(k+1)$-th derivative of the composite function $h = \exp \circ g$
$$h^{(k+1)} = \sum_{j=1}^{k+1} \binom{k}{j-1}\, g^{(j)}\, h^{(k-j+1)}, \tag{51}$$
where $f^{(i)}$ denotes the i-th derivative of the function $f$ with $f^{(0)} = f$.
Proof. 
We prove the assertion of Lemma 5 by induction over $k$. First, consider the base case for $k = 0$. In this case, formula (51) gives
$$h^{(1)} = g^{(1)} h,$$
which is easily seen to be true.
Assuming formula (51) holds for $h^{(k)}$, we continue with the case $k+1$. Application of the product rule leads to
$$h^{(k+1)} = \big(h^{(k)}\big)^{(1)} = \left(\sum_{j=1}^{k} \binom{k-1}{j-1} g^{(j)} h^{(k-j)}\right)^{(1)} = \sum_{j=1}^{k} \binom{k-1}{j-1} g^{(j+1)} h^{(k-j)} + \sum_{j=1}^{k} \binom{k-1}{j-1} g^{(j)} h^{(k-j+1)}.$$
Substitution of $j \to j+1$ in the first term gives
$$h^{(k+1)} = \sum_{j=2}^{k+1} \binom{k-1}{j-2} g^{(j)} h^{(k-j+1)} + \sum_{j=1}^{k} \binom{k-1}{j-1} g^{(j)} h^{(k-j+1)}.$$
With this representation and the identity
$$\binom{k-1}{j-2} + \binom{k-1}{j-1} = \binom{k}{j-1}$$
we finally have
$$h^{(k+1)} = g^{(1)} h^{(k)} + \sum_{j=2}^{k} \left[\binom{k-1}{j-1} + \binom{k-1}{j-2}\right] g^{(j)} h^{(k-j+1)} + g^{(k+1)} h = \binom{k}{0} g^{(1)} h^{(k)} + \sum_{j=2}^{k} \binom{k}{j-1} g^{(j)} h^{(k-j+1)} + \binom{k}{k} g^{(k+1)} h = \sum_{j=1}^{k+1} \binom{k}{j-1} g^{(j)} h^{(k-j+1)}.$$
This completes the proof of Lemma 5. □
Proof of Lemma 4.
To prove the recurrence formula (49), we consider the characteristic function
$$\varphi_{\tilde{\nu}}(t) = \prod_{i=1}^{r} \left(1+\varrho_i^2 t^2\right)^{-\frac{1}{2}}, \quad t \in \mathbb{R}, \tag{52}$$
of the random variable $\tilde{\nu}$ introduced in Lemma 1. On the one hand, the series representation of $\varphi_{\tilde{\nu}}$ given in (28) in the proof of Theorem 1 can be rewritten as follows using the coefficient $\delta_k$ defined in (39):
$$\varphi_{\tilde{\nu}}(t) = \left(1+\varrho_r^2 t^2\right)^{-\frac{r}{2}} \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i} \sum_{\ell=0}^{\infty} \delta_\ell \left(1+\varrho_r^2 t^2\right)^{-\ell}, \quad t \in \mathbb{R}. \tag{53}$$
On the other hand, recall the expansion of $\left(1+\varrho_i^2 t^2\right)^{-\frac{1}{2}}$ given in (22), which yields together with (52) and the application of the natural logarithm the identity
$$\log \varphi_{\tilde{\nu}}(t) = \log\!\left(\left(1+\varrho_r^2 t^2\right)^{-\frac{r}{2}} \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i}\right) + \sum_{i=1}^{r-1} \log\!\left(\left[1 + \left(\frac{\varrho_r^2}{\varrho_i^2}-1\right)\left(1+\varrho_r^2 t^2\right)^{-1}\right]^{-\frac{1}{2}}\right). \tag{54}$$
Now consider the power series
$$\log(1+y) = \sum_{\ell=1}^{\infty} \frac{(-1)^{\ell+1}}{\ell}\, y^\ell, \tag{55}$$
which is absolutely convergent for $|y| < 1$. With the same arguments as in the proof of Theorem 1, in particular due to (26), we can apply the series expansion (55) to the second term on the right-hand side of (54) to obtain the absolutely convergent series representation
$$\log \varphi_{\tilde{\nu}}(t) = \log\!\left(\left(1+\varrho_r^2 t^2\right)^{-\frac{r}{2}} \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i}\right) + \sum_{\ell=1}^{\infty} \gamma_\ell \left(1+\varrho_r^2 t^2\right)^{-\ell}, \tag{56}$$
where we have further used the definition of $\gamma_\ell$ given in (50). Applying the exponential function to both sides of (56) then yields the following expression for the characteristic function $\varphi_{\tilde{\nu}}$:
$$\varphi_{\tilde{\nu}}(t) = \left(1+\varrho_r^2 t^2\right)^{-\frac{r}{2}} \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i}\, \exp\!\left(\sum_{\ell=1}^{\infty} \gamma_\ell \left(1+\varrho_r^2 t^2\right)^{-\ell}\right). \tag{57}$$
Comparing (53) and (57) yields the identity
$$\sum_{\ell=0}^{\infty} \delta_\ell \left(1+\varrho_r^2 t^2\right)^{-\ell} = \exp\!\left(\sum_{\ell=1}^{\infty} \gamma_\ell \left(1+\varrho_r^2 t^2\right)^{-\ell}\right). \tag{58}$$
We now define $x = \left(1+\varrho_r^2 t^2\right)^{-1}$ and take the $(k+1)$-th derivative w. r. t. $x$ on both sides of (58) using the identity
$$\frac{\mathrm{d}^m}{\mathrm{d}x^m} \sum_{\ell=0}^{\infty} a_\ell x^\ell = \sum_{\ell=m}^{\infty} \frac{\ell!}{(\ell-m)!}\, a_\ell\, x^{\ell-m} \tag{59}$$
for the m-th derivative of a power series $\sum_{\ell=0}^{\infty} a_\ell x^\ell$. For the left-hand side of (58), we obtain
$$\frac{\mathrm{d}^{k+1}}{\mathrm{d}x^{k+1}} \sum_{\ell=0}^{\infty} \delta_\ell x^\ell = \sum_{\ell=k+1}^{\infty} \frac{\ell!}{(\ell-k-1)!}\, \delta_\ell\, x^{\ell-k-1}. \tag{60}$$
For the right-hand side of (58), we obtain
$$\frac{\mathrm{d}^{k+1}}{\mathrm{d}x^{k+1}} \exp\!\left(\sum_{\ell=1}^{\infty} \gamma_\ell x^\ell\right) = \sum_{j=1}^{k+1} \binom{k}{j-1} \left(\frac{\mathrm{d}^{j}}{\mathrm{d}x^{j}} \sum_{\ell=1}^{\infty} \gamma_\ell x^\ell\right) \frac{\mathrm{d}^{k-j+1}}{\mathrm{d}x^{k-j+1}} \exp\!\left(\sum_{\ell=1}^{\infty} \gamma_\ell x^\ell\right) = \sum_{j=1}^{k+1} \binom{k}{j-1} \left(\frac{\mathrm{d}^{j}}{\mathrm{d}x^{j}} \sum_{\ell=1}^{\infty} \gamma_\ell x^\ell\right) \frac{\mathrm{d}^{k-j+1}}{\mathrm{d}x^{k-j+1}} \sum_{\ell=0}^{\infty} \delta_\ell x^\ell = \sum_{j=1}^{k+1} \binom{k}{j-1} \left(\sum_{\ell=j}^{\infty} \frac{\ell!}{(\ell-j)!}\, \gamma_\ell\, x^{\ell-j}\right) \times \left(\sum_{\ell=k+1-j}^{\infty} \frac{\ell!}{(\ell-k+j-1)!}\, \delta_\ell\, x^{\ell-k+j-1}\right), \tag{61}$$
where we used Lemma 5 and the identities (58) and (59). From the equality
$$\frac{\mathrm{d}^{k+1}}{\mathrm{d}x^{k+1}} \sum_{\ell=0}^{\infty} \delta_\ell x^\ell = \frac{\mathrm{d}^{k+1}}{\mathrm{d}x^{k+1}} \exp\!\left(\sum_{\ell=1}^{\infty} \gamma_\ell x^\ell\right)$$
and the evaluation of the right-hand side of (60) and (61), we obtain
$$(k+1)!\, \delta_{k+1}\, x^0 + c_1 x^1 + c_2 x^2 + \cdots = \sum_{j=1}^{k+1} \binom{k}{j-1}\, j!\, \gamma_j\, (k+1-j)!\, \delta_{k+1-j}\, x^0 + \tilde{c}_1 x^1 + \tilde{c}_2 x^2 + \cdots.$$
Comparing the coefficients for $x^0$ finally yields
$$\delta_{k+1} = \frac{1}{(k+1)!}\sum_{j=1}^{k+1} \binom{k}{j-1}\, j!\, \gamma_j\, (k+1-j)!\, \delta_{k+1-j} = \frac{1}{(k+1)!}\sum_{j=1}^{k+1} \frac{k!}{(j-1)!\,(k+1-j)!}\, j!\, \gamma_j\, (k+1-j)!\, \delta_{k+1-j} = \frac{1}{k+1}\sum_{j=1}^{k+1} j\,\gamma_j\,\delta_{k+1-j}.$$
This completes the proof of Lemma 4. □

5.2. Finite Sum Approximations

The results in the previous Section 5.1 can be used in the following way for efficient numerical calculations. Consider
$$\hat{f}_{i(\xi;\eta)}(x,n) = \frac{1}{\varrho_r\sqrt{\pi}} \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i} \sum_{k=0}^{n} \delta_k\, U_k\!\left(\frac{|x-I(\xi;\eta)|}{\varrho_r}\right), \quad x \in \mathbb{R}, \tag{62}$$
for $n \in \mathbb{N}_0$, i.e., the finite sum approximation of the PDF given in (40). To calculate $\hat{f}_{i(\xi;\eta)}(x,n)$, first calculate $U_0\big(|x-I(\xi;\eta)|/\varrho_r\big)$ and $U_1\big(|x-I(\xi;\eta)|/\varrho_r\big)$ using (37). Then, use the recurrence formulas (42) and (49) to calculate the remaining summands in (62). The great advantage of this approach is that only two evaluations of the modified Bessel function are required, and for the rest of the calculations efficient recursive formulas are employed.
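The following Python sketch (our own minimal implementation, assuming SciPy's `kv` and `gamma`; the reference implementation is the GitLab repository [26]) illustrates the procedure just described for the PDF approximation (62):

```python
# Minimal sketch of the recursive PDF approximation (62).
import numpy as np
from scipy.special import kv, gamma   # kv: modified Bessel function K_v

def pdf_approx(x, rho, n):
    """Finite sum approximation of the PDF of the information density."""
    rho = np.sort(np.asarray(rho, dtype=float))[::-1]     # descending
    r, rr = len(rho), rho[-1]
    I = -0.5 * np.sum(np.log(1.0 - rho**2))               # mutual information, Eq. (2)
    z = abs(x - I) / rr

    # coefficients delta_k via the recursion (49)-(50)
    gam = [np.sum((1.0 - rr**2 / rho[:-1]**2) ** j) / (2.0 * j) for j in range(1, n + 1)]
    delta = [1.0]
    for k in range(n):
        delta.append(sum(j * gam[j - 1] * delta[k + 1 - j] for j in range(1, k + 2)) / (k + 1))

    # U_0, U_1 from the definition (37), remaining U_k from the recursion (42)
    def U(k):
        if z == 0.0:
            return gamma((r - 1) / 2 + k) / (2.0 * gamma(r / 2 + k))
        return kv((r - 1) / 2 + k, z) / gamma(r / 2 + k) * (z / 2.0) ** ((r - 1) / 2 + k)
    u = [U(0), U(1)]
    for k in range(2, n + 1):
        u.append(z**2 / ((r + 2*k - 2) * (r + 2*k - 4)) * u[k - 2]
                 + (r + 2*k - 3) / (r + 2*k - 2) * u[k - 1])

    prefac = np.prod(rr / rho[:-1]) / (rr * np.sqrt(np.pi))
    return prefac * sum(d * uk for d, uk in zip(delta, u[:n + 1]))

print(pdf_approx(1.0, [0.9, 0.5, 0.2], n=200))
```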
Similarly, consider
$$\hat{F}_{i(\xi;\eta)}(x,n) = \begin{cases} \frac{1}{2} - \hat{V}\big(I(\xi;\eta)-x,\, n\big) & \text{if } x \leq I(\xi;\eta) \\ \frac{1}{2} + \hat{V}\big(x-I(\xi;\eta),\, n\big) & \text{if } x > I(\xi;\eta), \end{cases} \tag{63}$$
with
$$\hat{V}(z,n) = \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i} \sum_{k=0}^{n} \delta_k\, D_k(z), \quad z \geq 0, \tag{64}$$
for $n \in \mathbb{N}_0$, i.e., the finite sum approximation of the alternative representation of the CDF of the information density, where $\hat{V}(z,n)$ is the finite sum approximation of the function $V(\cdot)$ given in (41). To calculate $\hat{F}_{i(\xi;\eta)}(x,n)$, first calculate $D_0(z)$, $U_0(z/\varrho_r)$, and $U_1(z/\varrho_r)$ for $z = I(\xi;\eta)-x$ or $z = x-I(\xi;\eta)$ using (37) and (38). Then, use the recurrence formulas (42), (46), and (49) to calculate the remaining summands in (64). This approach requires only three evaluations of the modified Bessel and Struve L functions, resulting in efficient numerical calculations also for the CDF of the information density.
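A companion sketch for the CDF approximation (63)–(64) follows the same pattern (again our own code; we assume that SciPy's `modstruve` accepts the half-integer, possibly negative, orders appearing in $D_0$):

```python
# Minimal sketch of the recursive CDF approximation (63)-(64); reuses the
# delta_k and U_k recursions shown in the PDF sketch above.
import numpy as np
from scipy.special import kv, modstruve, gamma

def cdf_approx(x, rho, n):
    rho = np.sort(np.asarray(rho, dtype=float))[::-1]
    r, rr = len(rho), rho[-1]
    I = -0.5 * np.sum(np.log(1.0 - rho**2))
    w = abs(x - I) / rr

    gam = [np.sum((1.0 - rr**2 / rho[:-1]**2) ** j) / (2.0 * j) for j in range(1, n + 1)]
    delta = [1.0]
    for k in range(n):
        delta.append(sum(j * gam[j - 1] * delta[k + 1 - j] for j in range(1, k + 2)) / (k + 1))

    def U(k):   # definition (37)
        if w == 0.0:
            return gamma((r - 1) / 2 + k) / (2.0 * gamma(r / 2 + k))
        return kv((r - 1) / 2 + k, w) / gamma(r / 2 + k) * (w / 2.0) ** ((r - 1) / 2 + k)
    u = [U(0), U(1)]
    for k in range(2, n):
        u.append(w**2 / ((r + 2*k - 2) * (r + 2*k - 4)) * u[k - 2]
                 + (r + 2*k - 3) / (r + 2*k - 2) * u[k - 1])

    a = (r - 1) / 2   # D_0 from the definition (38), then the recursion (46)
    D = [0.0 if w == 0.0 else
         w / 2.0 * (kv(a, w) * modstruve(a - 1, w) + kv(a - 1, w) * modstruve(a, w))]
    for k in range(1, n + 1):
        D.append(D[k - 1] - w * u[k - 1] / (2.0 * np.sqrt(np.pi) * (r / 2 + k - 1)))

    V = np.prod(rr / rho[:-1]) * sum(d * Dk for d, Dk in zip(delta, D))
    return 0.5 - V if x <= I else 0.5 + V

print(cdf_approx(1.0, [0.9, 0.5, 0.2], n=200))
```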
The following theorem provides suitable bounds to evaluate and control the error related to the introduced finite sum approximations.
Theorem 4
(Bounds of the approximation error for the alternative representation of PDF and CDF). For the finite sum approximations in (62)–(64) of the alternative representation of the PDF and CDF of the information density as given in Proposition 2, we have for n N summands the error bounds
$$\big|f_{i(\xi;\eta)}(x) - \hat{f}_{i(\xi;\eta)}(x,n)\big| \leq \frac{\Gamma\!\left(\frac{r-1}{2}+n\right)}{2\varrho_r\sqrt{\pi}\,\Gamma\!\left(\frac{r}{2}+n\right)}\left(1 - \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i} \sum_{k=0}^{n} \delta_k\right), \quad x \in \mathbb{R}, \tag{65}$$
and
$$\big|V(z) - \hat{V}(z,n)\big| \leq \frac{1}{2}\left(1 - \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i} \sum_{k=0}^{n} \delta_k\right), \quad z \geq 0. \tag{66}$$
Proof. 
From the special case where all canonical correlations are equal, we can conclude from the CDF given in Corollary 1 that the function
$$z \mapsto z\left[K_\alpha(z)\, L_{\alpha-1}(z) + K_{\alpha-1}(z)\, L_\alpha(z)\right], \quad z \geq 0, \tag{67}$$
is monotonically increasing for all $\alpha = (j-1)/2$, $j \in \mathbb{N}$, and that further
$$\lim_{z \to \infty} z\left[K_\alpha(z)\, L_{\alpha-1}(z) + K_{\alpha-1}(z)\, L_\alpha(z)\right] = 1 \tag{68}$$
holds. Using (68), we obtain from (4)
$$\lim_{z \to \infty} 2\,V(z) = \sum_{k_1=0}^{\infty}\cdots\sum_{k_{r-1}=0}^{\infty} \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i}\,\frac{(2k_i)!}{(k_i!)^2 4^{k_i}}\left(1-\frac{\varrho_r^2}{\varrho_i^2}\right)^{k_i}$$
by exchanging the limit and the summation, which is justified by the monotone convergence theorem. Due to the properties of the CDF, we have $\lim_{z \to \infty} 2\,V(z) = 1$, which implies
$$\prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i} \sum_{k=0}^{\infty} \delta_k = \sum_{k_1=0}^{\infty}\cdots\sum_{k_{r-1}=0}^{\infty} \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i}\,\frac{(2k_i)!}{(k_i!)^2 4^{k_i}}\left(1-\frac{\varrho_r^2}{\varrho_i^2}\right)^{k_i} = 1, \tag{69}$$
where the first equality follows from the definition of the coefficient δ k in (39).
We now obtain with (41) and (64)
$$\big|V(z) - \hat{V}(z,n)\big| = \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i} \sum_{k=n+1}^{\infty} \delta_k\, D_k(z) \leq \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i} \sum_{k=n+1}^{\infty} \frac{1}{2}\,\delta_k.$$
The inequality follows from the definition of the function $D_k(\cdot)$ in (38), the monotonicity of the function in (67), and from (68). Then, (66) follows from (69).
Similarly, we obtain with (40) and (62)
$$\big|f_{i(\xi;\eta)}(x) - \hat{f}_{i(\xi;\eta)}(x,n)\big| = \frac{1}{\varrho_r\sqrt{\pi}} \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i} \sum_{k=n+1}^{\infty} \delta_k\, U_k\!\left(\frac{|x-I(\xi;\eta)|}{\varrho_r}\right) \leq \frac{1}{\varrho_r\sqrt{\pi}} \prod_{i=1}^{r-1} \frac{\varrho_r}{\varrho_i} \sum_{k=n+1}^{\infty} \delta_k\, \frac{\Gamma\!\left(\frac{r-1}{2}+n\right)}{2\,\Gamma\!\left(\frac{r}{2}+n\right)}.$$
The inequality follows from the definition of the function $U_k(\cdot)$, Proposition 1, and the decreasing monotonicity of $\Gamma\!\left(\frac{r-1}{2}+k\right)/\Gamma\!\left(\frac{r}{2}+k\right)$ w. r. t. $k \in \mathbb{N}_0$. Then, (65) follows from (69). □
Remark 1.
Note that the bound in (65) can be further simplified using the inequality $\Gamma(\alpha)/\Gamma(\alpha+1/2) \leq \sqrt{\pi}$. Further note that the derived error bounds are uniform in the sense that they only depend on the parameters of the given Gaussian distribution and the number of summands considered. As can be seen from (69), the bounds converge to zero as the number $n$ of summands increases.
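In practice, one can therefore increase $n$ until the right-hand side of (66) falls below a prescribed tolerance; a small Python sketch of this stopping rule (our own, reusing the $\delta_k$ recursion from the sketches above) is given below.

```python
# Sketch: choose the number of summands n so that the CDF error bound (66) <= tol.
import numpy as np

def summands_for_tolerance(rho, tol):
    rho = np.sort(np.asarray(rho, dtype=float))[::-1]
    rr = rho[-1]
    prefac = np.prod(rr / rho[:-1])
    gam, delta, partial, n = [], [1.0], 1.0, 0
    while 0.5 * (1.0 - prefac * partial) > tol:
        n += 1
        gam.append(np.sum((1.0 - rr**2 / rho[:-1]**2) ** n) / (2.0 * n))   # gamma_n
        d = sum(j * gam[j - 1] * delta[n - j] for j in range(1, n + 1)) / n
        delta.append(d)
        partial += d
    return n

print(summands_for_tolerance([0.9, 0.5, 0.2], 1e-3))
```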
Remark 2
(Relation to Bell polynomials). Interestingly, the coefficient $\delta_k$ can be expressed for all $k \in \mathbb{N}$ in the following form
$$\delta_k = \frac{B_k\big(\gamma_1,\, 2\gamma_2,\, 6\gamma_3,\, \ldots,\, k!\,\gamma_k\big)}{k!},$$
where γ j is defined in (50), and B k denotes the complete Bell polynomial of order k [28], Sec. 3.3. Even though this is an interesting connection to the Bell polynomials, which provides an explicit formula of δ k , the recursive formula given in Lemma 4 is more efficient for numerical calculations.

6. Numerical Examples and Illustrations

We illustrate the results of this paper with some examples, which can all be verified with the Python implementation publicly available on GitLab [26].
Equal canonical correlations. First, we consider the special case of Corollary 1 when all canonical correlations are equal. The PDF and CDF given by (6) and (7) are illustrated in Figure 1 and Figure 2 in centered form, i.e., shifted by $I(\xi;\eta)$, for $r \in \{1,2,3,4,5\}$ and equal canonical correlations $\varrho_i = 0.9$, $i = 1, \ldots, r$. In Figure 3 and Figure 4, a fixed number of $r = 5$ equal canonical correlations $\varrho_i \in \{0.1, 0.2, 0.5, 0.7, 0.9\}$, $i = 1, \ldots, r$, is considered. When all canonical correlations are equal, then, due to the central limit theorem, the distribution of the information density $i(\xi;\eta)$ converges to a Gaussian distribution as $r \to \infty$. Figure 5 and Figure 6 show for $r \in \{5, 10, 20, 40\}$ and equal canonical correlations $\varrho_i = 0.2$, $i = 1, 2, \ldots, r$, the PDF and CDF of the information density together with corresponding Gaussian approximations. The approximations are obtained by considering Gaussian distributions which have the same variance as the information density $i(\xi;\eta)$. Recall that the variance of the information density is given by (11), i.e., by the sum of the squared canonical correlations. The illustrations show that only for a large number of equal canonical correlations does the distribution of the information density become approximately Gaussian.
Different canonical correlations. To illustrate the case with different canonical correlations, let us consider two more examples.
(i) First, assume that the random vectors ξ = ( ξ 1 , ξ 2 , , ξ p ) and η = ( η 1 , η 2 , , η q ) have equal dimensions, i.e., p = q , and are related by
$$(\eta_1, \eta_2, \ldots, \eta_p) = (\xi_1+\zeta_1,\ \xi_2+\zeta_2,\ \ldots,\ \xi_p+\zeta_p),$$
where $\xi = (\xi_1, \xi_2, \ldots, \xi_p)$ and $\zeta = (\zeta_1, \zeta_2, \ldots, \zeta_p)$ are zero mean Gaussian random vectors, independent of each other and with covariance matrices
$$R_\xi = \left(\rho^{|i-j|}\right)_{i,j=1}^{p} \quad \text{and} \quad R_\zeta = \sigma_z^2 I_p,$$
for parameters $0 < |\rho| < 1$ and $\sigma_z^2 > 0$, where $I_p$ denotes the identity matrix of dimension $p \times p$. The covariance matrix of the Gaussian random vector $(\xi_1, \xi_2, \ldots, \xi_p, \eta_1, \eta_2, \ldots, \eta_p)$ is the basis of the canonical correlation analysis and is given by
$$\begin{pmatrix} R_\xi & R_{\xi\eta} \\ R_{\xi\eta}^\top & R_\eta \end{pmatrix} = \begin{pmatrix} R_\xi & R_\xi \\ R_\xi & R_\xi + R_\zeta \end{pmatrix}.$$
The specified situation corresponds to a discrete-time additive noise channel, where a stationary first-order Markov-Gaussian input process is corrupted by a stationary additive white Gaussian noise process. In this setting, a block of p consecutive input and output symbols is considered.
For given parameter values $\rho$ and $\sigma_z^2$, the canonical correlations can be calculated numerically with the method described in Section 3. However, the example at hand even allows the derivation of explicit formulas for the canonical correlations. Evaluating the approach in Section 3 analytically yields
$$\varrho_i(\rho, \sigma_z^2) = \sqrt{\frac{\lambda_i}{\lambda_i + \sigma_z^2}} \quad \text{with} \quad \lambda_i = \frac{1-\rho^2}{1 - 2\rho\cos(\theta_i) + \rho^2}, \quad i = 1, 2, \ldots, r = p,$$
where $\theta_1, \theta_2, \ldots, \theta_r$ are the zeros of the function
$$g(\theta) = \sin\big((r+1)\theta\big) - 2\rho \sin(r\theta) + \rho^2 \sin\big((r-1)\theta\big), \quad \theta \in (0,\pi).$$
In this representation, $\lambda_1, \lambda_2, \ldots, \lambda_r$ denote the eigenvalues of the covariance matrix $R_\xi = \left(\rho^{|i-j|}\right)_{i,j=1}^{p}$ derived in [29], Sec. 5.3.
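A short Python sketch (ours; the `brentq` bracketing on a fine grid is a convenience assumption, not the method used in the paper) that turns these formulas into numbers:

```python
# Sketch: explicit canonical correlations for the AR(1)-plus-noise example.
import numpy as np
from scipy.optimize import brentq

def canonical_correlations_ar1(p, rho, sigma2):
    g = lambda th: (np.sin((p + 1) * th) - 2 * rho * np.sin(p * th)
                    + rho**2 * np.sin((p - 1) * th))
    grid = np.linspace(1e-9, np.pi - 1e-9, 200 * p)
    thetas = np.array([brentq(g, a, b) for a, b in zip(grid[:-1], grid[1:])
                       if np.sign(g(a)) != np.sign(g(b))])
    lam = (1 - rho**2) / (1 - 2 * rho * np.cos(thetas) + rho**2)   # eigenvalues of R_xi
    return np.sort(np.sqrt(lam / (lam + sigma2)))[::-1]

print(canonical_correlations_ar1(5, 0.9, 10.0))
```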
As numerical examples, Figure 7 and Figure 8 show the approximated PDF $\hat{f}_{i(\xi;\eta)-I(\xi;\eta)}(\cdot, n)$ and CDF $\hat{F}_{i(\xi;\eta)-I(\xi;\eta)}(\cdot, n)$ for $p = r \in \{5, 10, 20, 40\}$ and the parameter values $\rho = 0.9$ and $\sigma_z^2 = 10$ using the finite sums (62) and (64). The bounds of the approximation error given in Theorem 4 are chosen $< 1 \times 10^{-3}$ to obtain a high precision of the plotted curves. The number $n$ of summands required in (62) and (64) to achieve these error bounds for $r \in \{5, 10, 20, 40\}$ is equal to $n \in \{217, 333, 462, 649\}$ for the PDF and $n \in \{282, 444, 618, 847\}$ for the CDF. For this example, the distribution of the information density $i(\xi;\eta)$ converges to a Gaussian distribution as $r \to \infty$. However, Figure 7 and Figure 8 show that, even for $r = 40$, there is still a significant gap between the exact distribution and the corresponding Gaussian approximation.
(ii) As a second example with different canonical correlations, let us consider the sequence $\{\varrho_1(T), \varrho_2(T), \ldots, \varrho_r(T)\}$ with
$$\varrho_i(T) = \frac{T^2}{T^2 + \big(\pi\left(i-\frac{1}{2}\right)\big)^2}, \quad i = 1, 2, \ldots, r. \tag{71}$$
These canonical correlations are related to the information density of a continuous-time additive white Gaussian noise channel confined to a finite time interval $[0,T]$ with a Brownian motion as input signal (see, e.g., Huffmann [30], Sec. 8.1 for more details). Figure 9 and Figure 10 show the approximated PDF $\hat{f}_{i(\xi;\eta)-I(\xi;\eta)}(\cdot, n)$ and CDF $\hat{F}_{i(\xi;\eta)-I(\xi;\eta)}(\cdot, n)$ for $r \in \{2, 5, 10, 15\}$ and $T = 1$ using the finite sums (62) and (64). The bounds of the approximation error given in Theorem 4 are chosen $< 1 \times 10^{-2}$, such that no differences are visible in the plotted curves when the approximation error is lowered further. The number $n$ of summands required in (62) and (64) to achieve these error bounds for $r \in \{2, 5, 10, 15\}$ is equal to $n \in \{15, 141, 638, 1688\}$ for the PDF and $n \in \{20, 196, 886, 2071\}$ for the CDF. Choosing $r$ larger than 15 for the canonical correlations (71) with $T = 1$ does not result in visible changes of the PDF and CDF compared to $r = 15$. This demonstrates, together with Figure 9 and Figure 10, that a Gaussian approximation is not valid for this example, even if $r \to \infty$.
Indeed, from [8], Th. 9.6.1 and the comment above Eq. (9.6.45) in [8], one can conclude that, whenever the canonical correlations satisfy
$$\lim_{r \to \infty} \sum_{i=1}^{r} \varrho_i^2 < \infty,$$
then the distribution of the information density is not Gaussian.
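For the canonical correlations (71), this criterion is easy to check numerically; the following short sketch (ours, using the form of (71) as reconstructed above) shows that the sum of squares stays bounded as $r$ grows:

```python
# Sketch: the sum of squared canonical correlations (71) remains bounded in r,
# so by the criterion above the information density does not become Gaussian.
import numpy as np

def rho_brownian(T, r):
    i = np.arange(1, r + 1)
    return T**2 / (T**2 + (np.pi * (i - 0.5))**2)

for r in (10, 100, 1000, 10000):
    print(r, np.sum(rho_brownian(1.0, r)**2))   # converges to a finite limit
```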

7. Summary of Contributions

We derived series representations of the PDF and CDF of the information density for arbitrary Gaussian random vectors as well as a general formula for the central moments using canonical correlation analysis. We provided simplified and closed-form expressions for important special cases, in particular when all canonical correlations are equal, and derived recurrence formulas and uniform error bounds for finite sum approximations of the general series representations. These approximations and recurrence formulas are suitable for efficient and arbitrarily accurate numerical calculations, where the approximation error can be easily controlled with the derived error bounds. Moreover, we provided examples showing the (in)validity of approximating the information density with a Gaussian random variable.

Author Contributions

J.E.W.H. and M.M. conceived this work, performed the analysis, validated the results, and wrote the manuscript. All authors have read and agreed to this version of the manuscript.

Funding

The work of M.M. was supported in part by the German Research Foundation (Deutsche Forschungsgemeinschaft) as part of Germany’s Excellence Strategy—EXC 2050/1—Project ID 390696704—Cluster of Excellence “Centre for Tactile Internet with Human-in-the-Loop” (CeTI) of Technische Universität Dresden. We acknowledge the open access publication funding granted by CeTI.

Data Availability Statement

An implementation in Python allowing efficient numerical calculations related to the main results of the paper is publicly available on GitLab: https://gitlab.com/infth/information-density (accessed on 24 June 2022).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Han, T.S.; Verdú, S. Approximation Theory of Output Statistics. IEEE Trans. Inf. Theory 1993, 39, 752–772.
2. Han, T.S. Information-Spectrum Methods in Information Theory; Springer: Berlin/Heidelberg, Germany, 2003.
3. Shannon, C.E. Probability of Error for Optimal Codes in a Gaussian Channel. Bell Syst. Tech. J. 1959, 38, 611–659.
4. Dobrushin, R.L. Mathematical Problems in the Shannon Theory of Optimal Coding of Information. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability; Volume 1: Contributions to the Theory of Statistics; University of California Press: Berkeley, CA, USA, 1961; pp. 211–252.
5. Strassen, V. Asymptotische Abschätzungen in Shannons Informationstheorie. In Transactions of the Third Prague Conference on Information Theory, Statistical Decision Functions, Random Processes (Held 1962); Czechoslovak Academy of Sciences: Prague, Czech Republic, 1964; pp. 689–723.
6. Polyanskiy, Y.; Poor, H.V.; Verdú, S. Channel Coding Rate in the Finite Blocklength Regime. IEEE Trans. Inf. Theory 2010, 56, 2307–2359.
7. Durisi, G.; Koch, T.; Popovski, P. Toward Massive, Ultrareliable, and Low-Latency Wireless Communication With Short Packets. Proc. IEEE 2016, 104, 1711–1726.
8. Pinsker, M.S. Information and Information Stability of Random Variables and Processes; Holden-Day: San Francisco, CA, USA, 1964.
9. Olver, F.W.J.; Lozier, D.W.; Boisvert, R.F.; Clark, C.W. (Eds.) NIST Handbook of Mathematical Functions; Cambridge University Press: Cambridge, UK, 2010.
10. Mathai, A.M. Storage Capacity of a Dam With Gamma Type Inputs. Ann. Inst. Stat. Math. 1982, 34, 591–597.
11. Grad, A.; Solomon, H. Distribution of Quadratic Forms and Some Applications. Ann. Math. Stat. 1955, 26, 464–477.
12. Kotz, S.; Johnson, N.L.; Boyd, D.W. Series Representations of Distributions of Quadratic Forms in Normal Variables. I. Central Case. Ann. Math. Stat. 1967, 38, 823–837.
13. Huffmann, J.E.W.; Mittelbach, M. On the Distribution of the Information Density of Gaussian Random Vectors: Explicit Formulas and Tight Approximations. Entropy 2022, 24, 924.
14. Simon, M.K. Probability Distributions Involving Gaussian Random Variables: A Handbook for Engineers and Scientists; Springer: Berlin/Heidelberg, Germany, 2006.
15. Laneman, J.N. On the Distribution of Mutual Information. In Proceedings of the Workshop on Information Theory and Its Applications (ITA), San Diego, CA, USA, 13 February 2006.
16. Wu, P.; Jindal, N. Coding Versus ARQ in Fading Channels: How Reliable Should the PHY Be? IEEE Trans. Commun. 2011, 59, 3363–3374.
17. Buckingham, D.; Valenti, M.C. The Information-Outage Probability of Finite-Length Codes Over AWGN Channels. In Proceedings of the 42nd Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, USA, 19–21 March 2008.
18. Hotelling, H. Relations Between Two Sets of Variates. Biometrika 1936, 28, 321–377.
19. Gelfand, I.M.; Yaglom, A.M. Calculation of the Amount of Information About a Random Function Contained in Another Such Function. In AMS Translations, Series 2; AMS: Providence, RI, USA, 1959; Volume 12, pp. 199–246.
20. Härdle, W.K.; Simar, L. Applied Multivariate Statistical Analysis, 4th ed.; Springer: Berlin/Heidelberg, Germany, 2015.
21. Koch, I. Analysis of Multivariate and High-Dimensional Data; Cambridge University Press: Cambridge, UK, 2014.
22. Timm, N.H. Applied Multivariate Analysis; Springer: Berlin/Heidelberg, Germany, 2002.
23. Ibragimov, I.A.; Rozanov, Y.A. On the Connection Between Two Characteristics of Dependence of Gaussian Random Vectors. Theory Probab. Appl. 1970, 15, 295–299.
24. Gradshteyn, I.S.; Ryzhik, I.M. Table of Integrals, Series, and Products, 7th ed.; Elsevier: Amsterdam, The Netherlands, 2007.
25. Prudnikov, A.P.; Brychov, Y.A.; Marichev, O.I. Integrals and Series, Volume 2: Special Functions; Gordon and Breach Science: New York, NY, USA, 1986.
26. Huffmann, J.E.W.; Mittelbach, M. Efficient Python Implementation to Numerically Calculate PDF, CDF, and Moments of the Information Density of Gaussian Random Vectors. Source Code Provided on GitLab. 2021. Available online: https://gitlab.com/infth/information-density (accessed on 24 June 2022).
27. Moschopoulos, P.G. The Distribution of the Sum of Independent Gamma Random Variables. Ann. Inst. Stat. Math. 1985, 37, 541–544.
28. Comtet, L. Advanced Combinatorics: The Art of Finite and Infinite Expansions, Revised and Enlarged ed.; D. Reidel Publishing Company: Dordrecht, The Netherlands, 1974.
29. Grenander, U.; Szegö, G. Toeplitz Forms and Their Applications; University of California Press: Berkeley, CA, USA, 1958.
30. Huffmann, J.E.W. Canonical Correlation and the Calculation of Information Measures for Infinite-Dimensional Distributions. Diploma Thesis, Department of Electrical Engineering and Information Technology, Technische Universität Dresden, Dresden, Germany, 2021. Available online: https://nbn-resolving.org/urn:nbn:de:bsz:14-qucosa2-742541 (accessed on 24 June 2022).
Figure 1. PDF $f_{i(\xi;\eta)-I(\xi;\eta)}$ for $r \in \{1, 2, 3, 4, 5\}$ equal canonical correlations $\varrho_i = 0.9$.
Figure 2. CDF $F_{i(\xi;\eta)-I(\xi;\eta)}$ for $r \in \{1, 2, 3, 4, 5\}$ equal canonical correlations $\varrho_i = 0.9$.
Figure 3. PDF $f_{i(\xi;\eta)-I(\xi;\eta)}$ for $r = 5$ equal canonical correlations $\varrho_i \in \{0.1, 0.2, 0.5, 0.7, 0.9\}$.
Figure 4. CDF $F_{i(\xi;\eta)-I(\xi;\eta)}$ for $r = 5$ equal canonical correlations $\varrho_i \in \{0.1, 0.2, 0.5, 0.7, 0.9\}$.
Figure 5. PDF $f_{i(\xi;\eta)-I(\xi;\eta)}$ for $r \in \{5, 10, 20, 40\}$ equal canonical correlations $\varrho_i = 0.2$ vs. Gaussian approximation.
Figure 6. CDF $F_{i(\xi;\eta)-I(\xi;\eta)}$ for $r \in \{5, 10, 20, 40\}$ equal canonical correlations $\varrho_i = 0.2$ vs. Gaussian approximation.
Figure 7. Approximated PDF $\hat{f}_{i(\xi;\eta)-I(\xi;\eta)}(\cdot, n)$ for $r \in \{5, 10, 20, 40\}$ canonical correlations $\varrho_i(\rho, \sigma_z^2)$ given in (70) for $\rho = 0.9$ and $\sigma_z^2 = 10$ (approximation error $< 10^{-3}$) vs. Gaussian approximation ($r \in \{20, 40\}$).
Figure 8. Approximated CDF $\hat{F}_{i(\xi;\eta)-I(\xi;\eta)}(\cdot, n)$ for $r \in \{5, 10, 20, 40\}$ canonical correlations $\varrho_i(\rho, \sigma_z^2)$ given in (70) for $\rho = 0.9$ and $\sigma_z^2 = 10$ (approximation error $< 10^{-3}$) vs. Gaussian approximation ($r \in \{20, 40\}$).
Figure 9. Approximated PDF $\hat{f}_{i(\xi;\eta)-I(\xi;\eta)}(\cdot, n)$ for $r \in \{2, 5, 10, 15\}$ canonical correlations $\varrho_i(T)$ given in (71) for $T = 1$ (approximation error $< 10^{-2}$) vs. Gaussian approximation ($r = 15$).
Figure 10. Approximated CDF $\hat{F}_{i(\xi;\eta)-I(\xi;\eta)}(\cdot, n)$ for $r \in \{2, 5, 10, 15\}$ canonical correlations $\varrho_i(T)$ given in (71) for $T = 1$ (approximation error $< 10^{-2}$) vs. Gaussian approximation ($r = 15$).
